Pipeline Physics
Pipeline Physics produces profit

Gary Summers, PhD
President, Pipeline Physics LLC
gary.summers@PipelinePhysics.com
1700 University Blvd, #936, Round Rock, TX 78665-8016
503-332-4095


Why some C-level executives are skeptical of PPM

Should PPM use simple or sophisticated techniques? Solving this conundrum, which may be the oldest debate in PPM, requires research, but we can develop intuition by observing how decision theory has addressed risk and uncertainty. Let's start with two definitions: risk refers to unpredictable events whose possible outcomes can be described with probabilities, while uncertainty refers to unpredictable events whose outcomes, or their probabilities, cannot be specified in advance.

For decades, decision theory assumed that decision-makers can represent, with probabilities, all the unpredictable events they face. Decision theory addressed risk, but it ignored uncertainty. Fortunately, new research is expanding decision theory to include uncertainty, but the older assumptions have caused discord between theory and practice. (You can learn about decision theory's history from my discussion "The difference between theory and practice: it's disappearing.")

Of course, managers always face uncertainty, especially in PPM, where projects intersect strategy. This difference between theory and practice — theory ignores uncertainty; managers experience it — causes the debate over the performance of simpler methods versus more sophisticated ones.

To see how the argument arises, let's compare two fictional managers. One manager is a PPM expert who strictly adheres to the older decision theory, which ignores uncertainty, and its practical incarnation, decision analysis. The other manager is a C-level executive who receives recommendations from the PPM expert. How do their views of PPM differ?

To answer this question, let's see how uncertainty affects decision models. Every decision model suffers from two sources of loss:

Total Loss = Loss from incompleteness + Loss from imperfections

Consider each loss in turn. The loss from incompleteness arises from what a model omits: every model simplifies reality, and the omitted aspects of a situation cost value. The loss from imperfections arises from what a model includes: any detail in the model may be wrong, and under uncertainty such modeling errors are likely.

Using a well-designed simple model is like purchasing insurance: forgoing optimality is the cost, and reasonable assurance of a decent result is the benefit. By analogy, one can view a simple model as a mini-max approach to decision-making: it minimizes the maximum loss. One mathematical investigation of uncertainty uses this approach.
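As an illustration, the mini-max idea fits in a few lines of Python. The loss figures below are invented for the example; only the pattern matters: the detailed model excels when its assumptions hold and fails badly otherwise, while the simple model is mediocre everywhere.

```python
# Hypothetical illustration of mini-max model choice. Rows are candidate
# models; columns are possible states of the world whose probabilities
# are unknown (uncertainty, not risk). All numbers are invented.
losses = {
    "detailed model": [0, 2, 30],   # best in favorable states, worst otherwise
    "simple model":   [8, 9, 10],   # never optimal, never disastrous
}

# Mini-max: pick the model whose worst-case loss is smallest.
minimax_choice = min(losses, key=lambda m: max(losses[m]))
print(minimax_choice)  # -> simple model (worst case 10 vs. 30)
```

Without probabilities for the states, expected value cannot be computed; mini-max needs only the losses themselves, which is why it suits decisions under uncertainty.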

(PPM provides two other methods of building state flexible portfolios: "what if" analysis and simulation optimization. See my discussion, "Action flexibility and state flexibility in PPM.")

Let's see some implications of the two-loss model.

  1. Uncertainty favors simplicity: A detailed model has a positive probability of being correct, but when uncertainty exists, that probability is small. Furthermore, as uncertainty increases, the loss from modeling flaws increases. Now recall that state flexible models reduce the likelihood of large losses. As uncertainty increases, detailed models suffer more than simpler, state flexible models. Eventually, simpler, state flexible models have higher expected values than their complex cousins. This result is remarkable: the mini-max approach produces both less risk and a higher expected value.

    As a general rule, as uncertainty increases, decision models and methods should become simpler. However, the specific impact of uncertainty on the various project evaluation and selection techniques, and the amount of uncertainty at which one technique overtakes another, remain unknown. Identifying these crucial relationships requires research.
  2. Optimal decision method is unknown: The fictional PPM manager from above might claim, "The two-loss model changes nothing. You still make decisions by maximizing expected value. Specifically, you select the optimal level of sophistication by minimizing the sum of the losses." Some research in decision theory uses this approach, but while it suffices for theory, it fails in practice. The optimal level of sophistication depends on uncertainty, and by definition, the amount of uncertainty that one faces is unknown.
  3. Risk preferences affect one's choice of decision method: When ignoring uncertainty, decision theory offers the following advice: (1) always choose by maximizing expected value, and (2) risk preferences affect one's choices but not one's method of choosing. As recent advances in decision theory show, the existence of uncertainty changes this advice. When uncertainty exists, the preferred decision method is a matter of risk preference. Risk-seeking decision-makers will favor more complex models, and risk-averse decision-makers will favor simpler decision methods.

    Here's why. When uncertainty exists, adding details to a model might create value but it risks large losses since these details could be modeling errors. Meanwhile, by sacrificing some "upside," simpler, state flexible models protect one from large losses.
  4. Overconfidence breeds complexity: An affinity for risk is only one behavior that causes decision-makers to favor sophisticated models. Overconfidence is another. Managers who have too much confidence in their knowledge will underestimate the amount and significance of the uncertainty they face. They will prefer models that are too sophisticated, which reduces the value PPM creates.

    Unfortunately, overconfidence in one's knowledge is common. For example, in research on the construction of decision trees, managers are asked to assign a probability to a branch labeled "All events that are not explicated in the tree." Under a great variety of circumstances, managers underestimate the probability of this branch. Additional confirmation comes from research that asks managers to make confidence intervals around their estimates, as described in my discussion, "Overconfidence and underestimating risk."

    An education in decision analysis can cause overconfidence as well. Few decision analysis textbooks state the assumption that uncertainty is absent or discuss the consequences of the assumption. Additionally, the decision analysis practice of dismissing bad results as bad luck perpetuates the problem. It assigns bad results to risk, rather than uncertainty. For an example of this practice and the harm it causes, see my discussion, "How to count cards in blackjack." Finally, some decision analysis textbooks teach managers to evaluate their models via sensitivity analysis. This technique uses the model to evaluate itself, which is tautological. One must evaluate models by analyzing their performance.
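To build intuition for the first implication above, here is a small Monte Carlo sketch with invented parameters. A "detailed" model estimates precisely but, with probability `p_wrong` (a stand-in for uncertainty), one of its modeled details is simply wrong, adding a large structural error; a simple, state flexible model is noisier but carries no structural risk. As `p_wrong` grows, the detailed model's expected loss overtakes the simple model's.

```python
import random

def expected_loss(model_error_std, structural_risk, p_wrong,
                  trials=20000, seed=1):
    """Average squared loss of a model's value estimate (invented setup).

    model_error_std : everyday estimation noise (risk)
    structural_risk : size of the error when a modeled detail is wrong
    p_wrong         : probability a detail is wrong (stand-in for uncertainty)
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        err = rng.gauss(0, model_error_std)
        if rng.random() < p_wrong:      # uncertainty bites: a detail was wrong
            err += structural_risk
        total += err ** 2
    return total / trials

for p in (0.0, 0.05, 0.2):
    detailed = expected_loss(1.0, 10.0, p)    # precise, but fragile
    simple = expected_loss(3.0, 0.0, 0.0)     # imprecise, but state flexible
    print(f"p_wrong={p:.2f}  detailed={detailed:6.1f}  simple={simple:6.1f}")
```

With no uncertainty the detailed model wins; as `p_wrong` rises its expected loss climbs past the simple model's, reproducing the crossover described above.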

Now, let's return to the two managers. Recall that the fictional PPM expert strictly adheres to the older decision theory, which ignores uncertainty. Perhaps he or she is overconfident as well, believing modeling errors are unlikely. The PPM expert favors sophisticated models. Metaphorically, when building models, the PPM expert acts like a surgeon, finely dissecting and exploiting every nuance and subtle quality of a PPM situation.

Meanwhile, the C-level executive routinely confronts strategy, where uncertainty is pervasive. He or she reads articles like the Harvard Business Review's "Strategy as Simple Rules."* Additionally, the C-level executive suffers the consequences of poor portfolios, making him or her risk-averse. Because of these qualities, the C-level executive favors simpler models and views the PPM expert's advice with suspicion.

C-level executives have an additional reason for distrust, which applies to PPM in general. It's called the optimizer's curse. This curse arises from the combination of risk and project selection. Given accurate (unbiased) but imprecise estimates of project values, every selection technique tends to select a greater number of overvalued projects than undervalued projects. Every project selection technique tends to overestimate a portfolio's value, and as a result PPM consistently disappoints C-level executives. I describe the optimizer's curse in my forthcoming discussion, "The optimizer's curse: how PPM overestimates portfolio value."
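The optimizer's curse is easy to reproduce in simulation. The sketch below, with invented parameters, draws unbiased but noisy estimates of project values, selects the top-k projects by estimate, and measures how far the selected portfolio's estimated value exceeds its true value; the gap is consistently positive even though no single estimate is biased.

```python
import random

def optimizers_curse(n_projects=200, k=20, noise=2.0, trials=500, seed=7):
    """Monte Carlo sketch of the optimizer's curse (invented parameters).

    Returns the average amount by which the estimates of the selected
    projects exceed their true values, per selected project.
    """
    rng = random.Random(seed)
    gap = 0.0
    for _ in range(trials):
        true_vals = [rng.gauss(10, 1) for _ in range(n_projects)]
        estimates = [v + rng.gauss(0, noise) for v in true_vals]  # unbiased noise
        # Select the k projects with the highest *estimated* values.
        top = sorted(range(n_projects), key=lambda i: estimates[i],
                     reverse=True)[:k]
        gap += sum(estimates[i] - true_vals[i] for i in top) / k
    return gap / trials

print(f"average overestimate per selected project: {optimizers_curse():.2f}")
```

Selection does the damage: projects whose noise happened to be positive are more likely to clear the cut, so the chosen portfolio inherits mostly optimistic errors.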

Let's afford our fictional PPM expert one last argument. Addressing the two-loss model (above), he or she may exclaim, "Simplifying a model discards detail, and discarding detail can only lose value."

This argument ignores how simplifying a model affects the two sources of error: incompleteness and imperfection. Simplifying a model increases the loss from incompleteness, but when done well, it decreases the loss from imperfection. When uncertainty is sufficiently high, the trade-off is beneficial.

The two-loss model has affected optimization models used in the field of machine learning. These models include a penalty term in their objective functions, and the penalty increases with the number of variables and parameters in the model. Practitioners then adjust their models, striving to include the most important aspects of a problem while omitting the less important ones. By limiting complexity, the penalty terms prevent optimization from over-fitting its solution to modeling errors and erroneous information. (To learn about overfitting, see my discussion, "Action flexibility and state flexibility in PPM.")
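A minimal sketch of such a penalty term, using invented fit errors: each candidate model's penalized score is its training error plus a penalty proportional to its size, so the largest model's marginally better fit no longer wins.

```python
# Hypothetical illustration of a complexity penalty in an objective
# function. More complex candidates fit the (noisy) data better, but the
# penalty term lam * size checks that improvement, mirroring
# regularization in machine learning. All numbers are invented.
candidates = {
    # name: (training error, number of variables/terms in the model)
    "3 terms": (40.0, 3),
    "8 terms": (25.0, 8),
    "20 terms": (24.0, 20),  # barely better fit -- likely fitting noise
}

lam = 1.0  # penalty weight (assumed; tuned in practice)
penalized = {m: err + lam * size for m, (err, size) in candidates.items()}
best = min(penalized, key=penalized.get)
print(best, penalized[best])  # -> 8 terms 33.0
```

The 20-term model buys one point of fit for twelve extra terms, so the penalty rejects it, just as the two-loss model says it should when those extra terms may be modeling errors.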

The two-loss model affects statistical analysis as well. Many metrics that measure the fit of a model to data penalize a model for having too many variables and parameters.
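The Akaike information criterion (AIC) is one such metric. For a least-squares fit it can be written (up to a constant) as n·ln(RSS/n) + 2k, so extra parameters must buy a real reduction in residual error. A small sketch with invented numbers:

```python
import math

def aic_least_squares(rss, n_obs, n_params):
    """Akaike information criterion for a least-squares fit (lower is better)."""
    return n_obs * math.log(rss / n_obs) + 2 * n_params

# Invented numbers: the larger model fits slightly better (lower RSS),
# but its extra parameters cost more than the improvement is worth.
small = aic_least_squares(rss=52.0, n_obs=100, n_params=3)
large = aic_least_squares(rss=50.0, n_obs=100, n_params=12)
print(small < large)  # -> True: AIC prefers the smaller model
```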

Unfortunately, the two-loss model has not yet been applied to PPM.

How does a PPM expert know which phenomena to model and which to exclude? One must study the various aspects of portfolio optimization models and determine which aspects heighten and which dampen a model's sensitivity to uncertainty. This study is part of my current research on project selection.

* Eisenhardt, K. M. and D. N. Sull (2001), "Strategy as simple rules," Harvard Business Review, vol. 79, no. 1, pp. 107-116.


After reading my discussions, many managers wish to share their experiences, thoughts and critiques of my ideas. I always welcome and reply to their comments.

Please share your thoughts with me via my contact page. I will reply by email; if you prefer to be contacted by phone, fax, or postal mail, please note that in your message.