Decision Making Using the Analytic Hierarchy Process

Quote of the Day

There are stars whose radiance is visible on Earth though they have long been extinct. There are people whose brilliance continues to light the world even though they are no longer among the living. These lights are particularly bright when the night is dark. They light the way for humankind.

— Hannah Szenes

I love this quote. I also recommend that you read about Hannah Szenes – a remarkable woman.


Figure 1: NASA AHP Example.

I gave a seminar a few days ago on System Engineering – a favorite subject of mine. One of the topics covered during the seminar was decision making. Specifically, I spoke about how I want trade studies to be performed in my group. I have participated in hundreds of trade studies, and I have used many different approaches.

I usually perform trade studies using some form of Multi-Criteria Decision Making (MCDM). There are numerous types of MCDM, many of which are enumerated here. I generally use one of two methods: Kepner-Tregoe (K-T) and Analytic Hierarchy Process (AHP).

I took a week-long K-T course back in the 1980s, and I have used this method for most of my trade studies because it is the most intuitive approach I know. It does not make decision-making easier, but it does provide a good framework for making decisions. One complaint about K-T is that it simply moves the stress of arguing about alternatives to arguing about the weighting of criteria – a valid point.

AHP is an interesting alternative to K-T. It is more mathematical in nature, and it bases the criteria weighting on the pair-wise comparison of the relative importance of the criteria. This mathematical aspect makes it more mysterious to folks, and I do not use this approach in presentations to upper management.

Since AHP is more complex than K-T, I thought I would work through an example here. I will go through the details of a NASA trade study example on automotive fuel options to illustrate how to apply the method.



Multi-Criteria Decision Making (MCDM)
MCDM is a sub-discipline of operations research that explicitly considers multiple criteria in decision-making environments (Source).
Analytic Hierarchy Process (AHP)
AHP decomposes decision into a hierarchy of more easily comprehended sub-problems, each of which can be analyzed independently. Once the hierarchy is built, the decision makers systematically evaluate its various elements by comparing them to each other two at a time, with respect to their impact on an element above them in the hierarchy. In making the comparisons, the decision makers can use concrete data about the elements, but they typically use their judgments about the elements' relative meaning and importance. It is the essence of the AHP that human judgments, and not just the underlying information, can be used in performing the evaluations (Source).
Sensitivity Analysis
The study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be apportioned to different sources of uncertainty in its inputs (Source).

MCDM Basics

Figure 2: Standard MCDM Tabular Format.

All MCDM methods work with a table similar to that shown in Figure 2, which consists of:

  • A set of alternatives to decide between (far left column of A's)
  • A set of criteria (top row of C's) used to evaluate the alternatives.
  • A set of criteria weights (row of W's below the C's) to set the relative priority of the criteria.
  • A set of grades for each criterion that have been assigned to each alternative (small a's).

It is important that each criterion be scored using the same scale. The grades given for each criterion can then be priority weighted and aggregated in various ways. Both K-T and AHP are known as weighted-sum models. Other approaches exist (e.g. the weighted-product model).
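The weighted-sum aggregation described above can be sketched in a few lines of Python. The grades and weights below are hypothetical placeholders, not values from any real study:

```python
# Minimal weighted-sum MCDM sketch.
# Each row of `grades` holds one alternative's grades against the criteria,
# all on a common 1-to-10 scale; `weights` holds the criteria weights,
# normalized to sum to 1.

def weighted_sum_scores(grades, weights):
    """Return the weighted-sum score for each alternative."""
    return [sum(g * w for g, w in zip(row, weights)) for row in grades]

# Three hypothetical alternatives graded against four criteria.
grades = [
    [7, 5, 9, 4],   # alternative A1
    [6, 8, 7, 6],   # alternative A2
    [9, 4, 5, 8],   # alternative A3
]
weights = [0.4, 0.3, 0.2, 0.1]  # criteria weights, sum to 1

scores = weighted_sum_scores(grades, weights)
best = max(range(len(scores)), key=lambda i: scores[i])
```

The alternative with the largest weighted sum (here, A2) is the preferred choice under these weights.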

AHP Basics


The key concepts in AHP are:

  • Each decision can be broken down into a series of sub-decisions.
  • These sub-decisions can be broken down into a set of evaluations based on specific criteria.
  • The relative performance of each alternative against the criteria can be evaluated in a pair-wise manner according to the rubric in Figure 3. These comparisons are placed in a table for eigenvector determination. Note that comparisons must meet the reciprocal axiom, which states comparison(i,j)=1/comparison(j,i) where i and j are array indices. The AHP algorithm includes a measure for how well the reciprocal axiom is met – it does not need to be met exactly in practical applications.
  • The table of pair-wise comparisons can be aggregated into weights or preferences using eigenvector methods. I should note that there are other ways of aggregating pair-wise comparisons, but the eigenvector method is the most common.
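The eigenvector aggregation step can be sketched with plain-Python power iteration. The 3×3 pairwise comparison matrix below is a hypothetical example that satisfies the reciprocal axiom by construction:

```python
# Sketch of the eigenvector weighting step using power iteration.
# A[i][j] records how much more important criterion i is than criterion j;
# the reciprocal axiom A[i][j] == 1/A[j][i] holds by construction.

def matvec(A, v):
    """Matrix-vector product for a list-of-lists matrix."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def principal_eigenvector(A, iters=100):
    """Power iteration; returns the dominant eigenvector, normalized to sum 1."""
    v = [1.0] * len(A)
    for _ in range(iters):
        v = matvec(A, v)
        s = sum(v)
        v = [x / s for x in v]
    return v

A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
weights = principal_eigenvector(A)
```

Because a pairwise comparison matrix is strictly positive, the Perron–Frobenius theorem guarantees that power iteration converges to the dominant eigenvector, which AHP takes as the weight vector.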

I have found an excellent master's thesis that provides a simple and thorough discussion of how to build the pair-wise comparison table and the eigenvector approach to criteria weighting. I should also mention that Wikipedia has an excellent example, which unfortunately does not go into the mathematical details.

Grading Rubric

Figure 3 shows the rubric for how comparison results are to be graded on a scale from 1 to 9. You can use other scales, but you need to be consistent.

Figure 3: Standard AHP Comparison Scale.


Figure 4: Decision-Making Process.

Let's go through the steps in the decision making process (Figure 4).

  • Determine the criteria relevant in making your decision. In general, your criteria will include both qualitative and quantitative values.
  • Determine the set of all viable alternatives.
  • Evaluate each alternative relative to your criteria.
  • Using Figure 3, scale or code the results of your evaluation in the form of a relative grade from 1 to 9, with 1 being equal and 9 being much better.
  • Apply the AHP algorithm. It will generate a set of weights for each criterion.
  • Look at your result. The alternative with the largest score is the best choice relative to your criteria.
  • Because your criteria and their evaluation undoubtedly are subjective, vary your qualitative evaluations and scaling (coding) slightly to determine if your outcome is strongly dependent on your specific assumptions.

If you see that your decision is very sensitive to your scaling or the qualitative aspects of your criteria, consider adding more criteria to the evaluation and see if you can make the decision less subjective.

I should point out that you will never remove all subjectivity from the process.


You can access my Mathcad file and its pdf version here.

Determine the Decision-Making Criteria

Most decisions involve many complex qualitative and quantitative parameters. We are working through a simple problem here, with the decision criteria limited to four items:

  • CO2 Emissions (grams/km)
  • Fuel Cost ($/mile)
  • Range (miles)
  • Vehicle Cost ($)

Each criterion has different units. Also, "goodness" can be small (e.g. cost) or big (e.g. range). We need to code these results so that they are on the same scale. The NASA trade study reference chose to use a scale from one to ten, which differs from AHP's typical one through nine. It really does not matter as long as you are consistent.

Determine the Alternatives

Because this is a "toy" problem, there are only three alternatives:

  • Propane
  • Hybrid-Electric
  • Electric-Only

This is a large enough set to illustrate the principles involved.

Evaluate the Alternatives Relative to the Criteria

Figure 5 shows the raw performance data for the three alternatives. In Figure 5, I also include a couple of items used for calculation purposes:

  • Random Index (RI): A list of values defined in the AHP algorithm. This is discussed in the master's thesis I reference above.
  • n(x): a function that normalizes vectors so that the element-sum equals 1.
Figure 5: Raw Performance Metrics and A Couple of Utility Items.
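The two utility items can be sketched in Python. The RI values are Saaty's commonly quoted figures, and the consistency-ratio helper follows the standard CI = (λmax − n)/(n − 1) definition; the example matrix is a hypothetical, perfectly consistent case:

```python
# Sketch of the two utility items: n(x), which normalizes a vector so its
# element-sum equals 1, and the Random Index (RI) table used in the AHP
# consistency check. RI values are Saaty's commonly quoted figures;
# CR < 0.1 is the usual acceptability threshold.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}

def n(x):
    """Normalize a vector so that its element-sum equals 1."""
    s = sum(x)
    return [v / s for v in x]

def consistency_ratio(A, w):
    """CR = CI / RI, where CI = (lambda_max - size) / (size - 1)."""
    size = len(A)
    # Estimate lambda_max by averaging the elementwise ratios of A*w to w.
    Aw = [sum(a * x for a, x in zip(row, w)) for row in A]
    lam = sum(r / x for r, x in zip(Aw, w)) / size
    ci = (lam - size) / (size - 1)
    return ci / RI[size]

A = [[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]]  # perfectly consistent example
w = n([4, 2, 1])
cr = consistency_ratio(A, w)   # 0 for a perfectly consistent matrix
```

For a perfectly consistent matrix the consistency ratio is zero; real judgment matrices will show a small positive CR.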

Scale the Criteria from 1 to 10

Figure 6 shows how I linearly map my performance metrics to the range from one to ten. You certainly could use a nonlinear mapping if that makes sense. For example, HP occasionally would optimize for performance/cost².

Figure 6: Code the Results in a Manner that Allows Direct Comparison. I arbitrarily chose to normalize the sum of each set of grades to one to be consistent with the NASA example.
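The linear coding step can be sketched as follows. The raw metric values are hypothetical, not the NASA study's numbers:

```python
# Sketch of the linear coding step: map each raw metric onto a common
# 1-to-10 scale. For "bigger is better" metrics (e.g. range) the best raw
# value maps to 10; for "smaller is better" metrics (e.g. cost) the
# mapping is reversed.

def code_linear(values, best_is_large=True, lo=1.0, hi=10.0):
    """Linearly rescale raw metric values onto the [lo, hi] grading scale."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin
    graded = [lo + (hi - lo) * (v - vmin) / span for v in values]
    if not best_is_large:          # reverse so the smallest raw value gets hi
        graded = [lo + hi - g for g in graded]
    return graded

ranges = code_linear([300, 500, 150], best_is_large=True)        # miles
costs = code_linear([20000, 30000, 40000], best_is_large=False)  # dollars
```

After coding, each metric could also be normalized so its grades sum to one, as done in the NASA example.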

Apply the AHP Algorithm

Determine the Weights to Apply to Each Criterion

Figure 7 shows the pair-wise preference matrix generated for the NASA study. We can compute the "best" aggregate weight vector using an eigenvector-based approach.

Figure 7: Generate the Criteria Weighting (AKA Preference Vector).

Generate the Scores

The actual scoring is quite simple – multiply the Preference vector by the array of coded performance metrics. We obtain one score for each alternative (Figure 8).

Figure 8: Scoring the Alternatives.
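The scoring step can be sketched as a matrix-vector product. All numbers below are hypothetical placeholders, not the NASA study's values:

```python
# Sketch of the scoring step: the score of each alternative is the dot
# product of the criteria preference (weight) vector with that
# alternative's column of coded grades.

preference = [0.5, 0.3, 0.2]          # criteria weights, sum to 1

# coded[i][j] = grade of alternative j on criterion i
# (each row normalized so the grades across alternatives sum to 1)
coded = [
    [0.2, 0.3, 0.5],   # criterion 1 grades
    [0.5, 0.3, 0.2],   # criterion 2 grades
    [0.3, 0.4, 0.3],   # criterion 3 grades
]

scores = [sum(w * coded[i][j] for i, w in enumerate(preference))
          for j in range(len(coded[0]))]
best = scores.index(max(scores))
```

With both the preference vector and each criterion's grades normalized to sum to one, the scores themselves sum to one across the alternatives.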

Look at the Results

Figure 9 shows a summary of my decision-making inputs.

Figure 9: Presentation of My Final Result.

Perform Sensitivity Analysis

Because this is a "toy" problem, I will not be performing any sensitivity analysis. However, it would be very easy – just vary how I code the input results and see if my outcome changes.
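A minimal sensitivity check might look like this sketch, which perturbs each criteria weight slightly, renormalizes, and tests whether the winning alternative changes. The weights and grades are hypothetical:

```python
# Sketch of a simple sensitivity check: nudge each criteria weight up and
# down, renormalize, and see whether the winning alternative changes.

def winner(weights, grades):
    """Index of the best-scoring alternative (one row of grades each)."""
    scores = [sum(w * g for w, g in zip(weights, row)) for row in grades]
    return scores.index(max(scores))

def renormalize(w):
    s = sum(w)
    return [x / s for x in w]

weights = [0.5, 0.3, 0.2]
grades = [                      # one row of grades per alternative
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
    [0.5, 0.2, 0.3],
]

base = winner(weights, grades)
robust = True
for i in range(len(weights)):
    for delta in (-0.05, 0.05):
        w = list(weights)
        w[i] = max(w[i] + delta, 0.0)
        if winner(renormalize(w), grades) != base:
            robust = False
```

If `robust` comes back False for small perturbations, the decision hinges on subjective weighting and deserves closer scrutiny.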


While my scoring of the alternatives against the criteria was different from that in the NASA example, my final scores for the alternatives are quite similar to those of the NASA paper.

My intent here was to review a NASA example in fine detail. Their example was simple but provided a good general vehicle for illustrating how AHP is applied.
