New Thinking about Decision-Making Under Uncertainty
The categories below contain provocative papers that challenge orthodox
theories on decision-making under uncertainty, while also providing insights,
models and tools, often mathematical, for advancing decision theory and
practice. We might consider these papers when selecting our monthly readings.
Anyone can add papers to this list. Just send the paper to me, and
I'll post it.
Click on a category's heading to toggle the display of its papers.
New frameworks for theorizing about decision-making under uncertainty
The following frameworks pursue different mathematical formulations but predict the same coarse behavior. Coarse behavior is behavior that is less variable than predicted by traditional optimization models. All three frameworks show that coarse behavior can outperform the prescriptions of expected utility theory (EUT), as traditionally applied, when decision-makers have scant data, face unknown-unknowns or are boundedly rational. Importantly, the frameworks provide mathematical tools for extending EUT to address these ubiquitous phenomena, thereby extending EUT from the "small worlds" that Savage postulated, which are akin to casino games, to the "large worlds" of real life. Several hypotheses from these frameworks challenge key qualities of the heuristics and biases framework as well.
- Bookstaber, R. and J. Langsam (1985), "On the optimality of coarse behavior rules," Journal of Theoretical Biology, 116(2): 161-193. Most critiques of EUT begin with bounded rationality. In contrast, this paper addresses one of the most important predicaments that thwart decision-makers, and one of the least studied, because theorizing about it is so difficult: incomplete knowledge about the decision environment. The paper produces stirring conclusions and powerful tools for learning how incomplete information, limited understanding, unknown-unknowns and black swan events affect decision-making.
- Al-Najjar, N.I. and M.M. Pai (2014),
"Coarse decision making
and overfitting," Journal of Economic Theory, 150: 467-486.
This paper studies how scant data, overfitting and
model misspecification affect forecasting. (Note: on the
current webpage, in the philosophy category, a paper by Forster and Sober
addresses the same problem via a different mathematical formulation, using
Akaike's information criterion.) Abstracting from Al-Najjar and Pai's
analysis, their approach represents forecasting errors as arising from
two sources:
Total Loss = Loss from Imperfections + Loss from Incompleteness
The loss from imperfections is the cost of decision errors arising from imprecise and inaccurate information and parameters and from incorrectly modeled relationships. The loss from incompleteness is the cost of decision errors arising from having an incomplete model. The optimal model is the one that minimizes total loss. Simplifying a model increases the loss from incompleteness, but, if done well, it decreases the loss from imperfections. When the loss from imperfections is large, the trade-off is beneficial. Linear regression provides an example. When data is scant, a model with fewer variables, which ignores some truly impactful variables, forecasts better than a model that includes all impactful variables. In the latter case, the regression coefficients are so imprecise that they make terrible forecasts. On this webpage, in the Applications category (below), the paper by DeMiguel et al. (2009), which studies the construction of financial portfolios, provides another example. Finally, incomplete models, because they use less information, are less sensitive to changes in the environment. They produce coarse behavior.
- Heiner, R.A. (1983), "The origin of predictable behavior," American Economic Review, 73: 560-595. Anytime the difficulty of a problem exceeds a decision-maker's competence (a C-D gap > 0), the decision-maker experiences uncertainty. For example, you experience uncertainty when playing chess but not when playing tic-tac-toe. Expected utility theory assumes a C-D gap = 0, which limits its usefulness. Heiner's framework extends EUT to situations where the C-D gap > 0. This paper introduces the intuition of Heiner's framework and proposes numerous applications and implications for economics, psychology, animal behavior, evolution and systems theory, among other fields. The paper provides little formal rigor, but Heiner supplies this rigor in other works, including the ones listed below.
While this initial presentation of the framework lacks formalization, it is seminal, having been cited in more than 2,000 publications.
- Heiner, R.A. (1988), "The necessity of imperfect decisions," Journal of Economic Behavior and Organization, 10: 29-55. Heiner develops the formalization of his framework and extends the framework to include both imperfect behavior (C-D gap > 0) and imperfect information (the traditional uncertainty of EUT). He shows that the optimal behavior of a boundedly rational decision-maker uses information that is too complex to use perfectly, which causes decision errors. Impressively, Heiner presents his framework in his usual way and also with the formalization of information theory. The decision-maker is a channel of finite capacity, and one can theorize about the entropy of the environment, messages and behavior, including providing a definition of coarse behavior.
- Heiner, R.A. (1988), "Imperfect decisions in organizations," Journal of Economic Behavior and Organization, 9: 25-44. Heiner provides a formal derivation of his main result, the reliability condition, and applies his theory to organizations.
- Heiner, R.A. (1986), "Uncertainty, signal-detection experiments and modeling behavior," in Economics as a Process: Essays in the New Institutional Economics, ed. R. Langlois. New York: Cambridge University Press. Heiner formally derives the key results of his papers: the reliability condition, marginal reliability condition and two-stage reliability ratio. He illustrates his framework with signal detection theory and applies his framework to a variety of phenomena.
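Al-Najjar and Pai's overfitting trade-off can be illustrated with a toy simulation (my own hypothetical sketch, not their model): with scant data, a deliberately incomplete regression that omits some truly impactful but weak predictors forecasts better out of sample than the complete model, because the complete model's coefficients are estimated too imprecisely.

```python
import numpy as np

rng = np.random.default_rng(0)

def trial(n_train=8, n_test=200, noise=1.0):
    # True model: five impactful predictors, the last three only weakly so.
    beta = np.array([1.0, 0.8, 0.2, 0.2, 0.2])
    X = rng.normal(size=(n_train, 5))
    y = X @ beta + rng.normal(scale=noise, size=n_train)
    Xt = rng.normal(size=(n_test, 5))
    yt = Xt @ beta + rng.normal(scale=noise, size=n_test)

    # "Complete" model: fit all five predictors by least squares.
    b_full, *_ = np.linalg.lstsq(X, y, rcond=None)
    mse_full = np.mean((yt - Xt @ b_full) ** 2)

    # "Incomplete" model: deliberately omit the three weak predictors.
    b_red, *_ = np.linalg.lstsq(X[:, :2], y, rcond=None)
    mse_red = np.mean((yt - Xt[:, :2] @ b_red) ** 2)
    return mse_full, mse_red

results = np.array([trial() for _ in range(2000)])
print("mean test MSE, complete model:   %.2f" % results[:, 0].mean())
print("mean test MSE, incomplete model: %.2f" % results[:, 1].mean())
```

With only eight observations, the incomplete model wins on average: its loss from incompleteness (the omitted weak predictors) is smaller than the complete model's loss from imperfections (noisy coefficient estimates).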
Human cognition, machine learning and AI
Heuristics and biases (H&B) is one of the major paradigms for studying human cognition (see this webpage's Background section). By comparing people's decisions to the predictions of expected utility theory (EUT), the H&B paradigm has identified numerous situations in which people systematically violate EUT and postulated cognitive heuristics as the cause of these behaviors. Many disciplines, especially economics and management science, have embraced H&B. Below are some papers that challenge the H&B paradigm, challenges that seem particularly potent, at least to me.
- Khaw, M.W., Z. Li and M. Woodford (2017), "Risk aversion as a perceptual bias," working paper. This paper proposes that human beings are risk neutral and their (apparent) risk-averse behavior arises because the human brain imperfectly codes numbers. This hypothesis disputes perhaps the oldest foundational concept in EUT: Daniel Bernoulli's concave utility function, proposed in 1738. Additionally, the imperfect coding hypothesis explains people's risk-seeking behavior when confronting losses, and possibly, it challenges the H&B literature's postulate that people are more strongly motivated to avoid regret than to seek gains. Imperfect coding provides an explanation for prospect theory as well, which has been an empirical observation in need of an explanation.
- Agrawal, A., J.S. Gans and A. Goldfarb (2017), "Prediction, judgment and complexity," working paper.
- F. Lieder, T.L. Griffiths, Q.J.M. Huys and N.D. Goodman (2018),
"The anchoring bias reflects rational
use of cognitive resources," Psychonomic Bulletin & Review, 25(1): 322-349.
The authors propose that the biases and judgmental errors observed in the heuristics and biases literature represent rational decision-making in the following sense: they arise from human cognition making the best (optimal) use of limited computational resources. To illustrate the idea, the authors present a model of the anchoring and adjustment heuristic. Studies of this heuristic find that people adjust too little, so the anchor influences their judgment too much. The authors' model implies that this bias is a consequence of brains making optimal use of limited computational capacity. When the group read this paper, we questioned the authors' assumptions about decision-makers' subjective probabilities, which are fundamental to their model, and how the authors used subjective probabilities in their simulation tests. Nonetheless, the idea that biases arise from the optimal use of limited resources is compelling. In a contrasting view, the paper by Busemeyer and Johnson (below) provides another interesting idea: the biases are qualities of neural networks.
Griffiths proposes several provocative ideas for studying cognition and AI, which you can find in a quick-reading, non-academic paper: "Aerodynamics for Cognition."
- Busemeyer, J.R. and J.G. Johnson (2004), "Computational models of decision making," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 133-154. This paper suggests that neural networks can cause some of the preference reversals identified by H&B experiments, so one need not postulate heuristics to explain these phenomena. I've read little about neural networks, but judging from this paper, the preference reversals arise from circumstances similar to the heuristics: neural networks "compare" choices to each other, rather than to an absolute standard. Nonetheless, the H&B field has criticized expected utility theory for assuming people act "as if" they optimize, even if they don't. Now one can critique H&B for presuming people act "as if" they use heuristics, even if they don't.
- Note: Bookstaber and Langsam (see this webpage's Frameworks section) propose that extended uncertainty, which is similar to unknown-unknowns, can cause deviations from EUT. If human brains evolved while experiencing extended uncertainty and if people's real-world decisions are made while experiencing extended uncertainty, people may behave as if they face extended uncertainty, even when such uncertainty is eliminated, as it is in H&B experiments. The "errors" observed in some H&B experiments may actually be good responses to situations with extended uncertainty. The decision errors lie not with the decision-makers but with experimental designs that eliminate extended uncertainty.
- Griffin, D. and L. Brenner (2004), "Perspectives on probability judgment calibration," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 177-199. Some of the heuristics hypothesized by the H&B literature are insufficiently developed and unable to generate testable hypotheses. Probability calibration is an exception: several models make testable predictions and compete to describe empirical observations. Because of this fruitful situation, the subject aptly illustrates strengths and limitations of the H&B approach. Are cognitive "biases" caused by a mismatch of laboratory experiments with real decision situations? Are "biases" ethereal, resulting from imprecise, but unbiased, judgment? If heuristics cause biases, which heuristic best explains the optimism and overconfidence that accompany assessments of uncertainty?
Philosophy
The three new frameworks (see this webpage's Frameworks section, above) imply provocative ideas about science. If the best decision models are incomplete and simple:
- Two models that are logically inconsistent can have similar success rates for predicting phenomena, although they will succeed and fail in different ways. We then have evidence for and against two (or more) logically incompatible theories.
- A simple model whose physics is known to be wrong can make better predictions than a model built with the right physics. Climate change may be an example, although I haven't read its literature, so let's use climate change only to illustrate something plausible. Perhaps the best predictions of climate change are made by linear equations, even though the physics of climate is nonlinear. Economics, neural networks and various phenomena from complexity science might provide additional examples. If the best predictions are made by the wrong physics, how can science proceed?
Spurred by new thinking about uncertainty, philosophers are exploring the impact of uncertainty on the practice of science and offering new answers to fundamental questions, including, "What is knowledge?" Below, I list some papers by Malcolm Forster, which we might enjoy discussing, plus a provocative, philosophy-focused paper from the psychology literature.
- Forster, M.R. (1997), "Causation, prediction and accommodation."
- Forster, M.R. (1999), "How do simple rules 'fit to reality' in a complex world?" Minds and Machines, 9: 543-564.
- Forster, M.R. (2001), "The new science of simplicity," in A. Zellner, H. A. Keuzenkamp and M. McAleer (eds.), Simplicity, Inference and Modeling, Cambridge University Press, pp. 83-119.
- Forster, M.R. (2002), "Predictive accuracy as an achievable goal of science," Philosophy of Science, 69: S124-S134.
- Forster, M.R. and E. Sober (1994), "How to tell when simpler, more unified, or less ad hoc theories will provide more accurate predictions," British Journal for the Philosophy of Science, 45: 1-35. Like Al-Najjar and Pai (see this webpage's Frameworks section, above), Forster and Sober study the impact of overfitting on prediction. However, these two philosophers use a different approach, based on Akaike's information criterion, thereby providing another mathematical tool for theorizing about decision-making under uncertainty. Furthermore, rather than focusing on coarse behavior, as do Al-Najjar and Pai, Forster and Sober apply their analysis to the philosophy of science, addressing the issues of causal modeling, ad hocness, Bayesianism, empiricism and realism.
- Kieseppa, I.A. and M.R. Forster, "How to remove the ad hoc feature of statistical inference within a frequentist paradigm."
- Greenwald, A.G., A.R. Pratkanis, M.R. Leippe and M.H. Baumgardner (1986), "Under what conditions does theory obstruct research progress?," Psychological Review, 93(2): 216-229. This paper proposes that the dominant paradigm of science, of trying to disprove hypotheses, leads to confirmation bias and to tweaking theories to preserve them when disconfirming evidence arises. The authors propose that scientists instead (1) strive to find the limits of a theory, which are the conditions that make the theory fail, and (2) try to produce previously unobtainable results. With great effectiveness, the heuristics and biases school applied this approach to identify the limits of EUT. However, some areas of economics may resist the approach. Macroeconomics spontaneously creates theory-breaking situations, such as the 1970s stagflation.
- Gigerenzer, G. (2008), "What's in a sample?," in G. Gigerenzer, Rationality for Mortals, Oxford University Press.
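Akaike's criterion, which Forster and Sober build on, can be sketched in a few lines (a toy illustration under simplifying assumptions, not their analysis). For a least-squares fit with k estimated parameters and residual sum of squares RSS on n observations, AIC is n·ln(RSS/n) + 2k up to an additive constant; the 2k penalty is what formalizes the value of simplicity.

```python
import numpy as np

def aic(rss, n, k):
    """Akaike's information criterion for a least-squares fit with
    k fitted parameters: n * ln(RSS/n) + 2k (up to an additive constant)."""
    return n * np.log(rss / n) + 2 * k

# Toy data: one strong predictor plus noise, 20 observations.
rng = np.random.default_rng(1)
n = 20
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)          # irrelevant predictor
y = 1.5 * x1 + rng.normal(size=n)

def rss(X):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ b) ** 2))

X_simple = np.column_stack([np.ones(n), x1])        # intercept + x1
X_larger = np.column_stack([np.ones(n), x1, x2])    # intercept + x1 + x2

print("AIC, simple model:", aic(rss(X_simple), n, k=2))
print("AIC, larger model:", aic(rss(X_larger), n, k=3))
```

The larger model always fits the sample at least as well (lower RSS), but the penalty term asks whether the improvement exceeds the cost of an extra parameter; the model with the lower AIC is estimated to predict new data better.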
Applications
Consider these definitions:
- Action flexibility means frequently adjusting behavior to changes in the environment. A decision-maker has a large set of choices and skillfully exploits nuances of the environment to select the best action in every situation.
- State flexibility is coarse behavior. The decision-maker has few actions. These actions are not optimal in any situation, but each works well in a great variety of situations. You can think of state flexibility as insurance. Foregoing optimality is the cost. A high probability of achieving reasonable results is the benefit.
The new frameworks for theorizing about uncertainty (see this webpage's Frameworks category, above) describe how increased uncertainty, from various sources, causes decision-makers to retreat from action flexibility to state flexibility.
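A stripped-down sketch of this retreat, in the spirit of Heiner's reliability condition (the payoffs and numbers here are my own toy assumptions, not Heiner's formalization): a responsive rule that acts on a signal of the environment beats a coarse rule only when the signal is read reliably enough; as the C-D gap widens and reliability falls, the coarse rule wins.

```python
# Payoffs: a "safe" action pays 0 in every state; a "risky" action pays
# +1 in the Good state and -3 in the Bad state (states equally likely).
GAIN, LOSS, P_GOOD = 1.0, -3.0, 0.5

def responsive_payoff(q):
    """Expected payoff of the flexible rule 'take the risky action when
    the signal says Good', where q is the probability the signal is right."""
    return P_GOOD * q * GAIN + (1 - P_GOOD) * (1 - q) * LOSS

# The coarse rule 'always act safe' pays 0 regardless of q.
for q in (0.6, 0.75, 0.9):
    winner = "responsive" if responsive_payoff(q) > 0 else "coarse"
    print(f"signal accuracy {q:.2f}: responsive payoff "
          f"{responsive_payoff(q):+.3f} -> {winner} rule wins")
```

In this toy setting the responsive rule pays off only when accuracy exceeds 0.75; below that threshold, ignoring the signal (state flexibility) is the better policy, which is the insurance logic described above.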
With these definitions, let's consider some practical applications of the new thinking about uncertainty.
- DeMiguel, V., L. Garlappi and R. Uppal (2009), "Optimal versus naive diversification: how inefficient is the 1/N portfolio strategy?," The Review of Financial Studies, vol. 22, no. 5, pp. 1915-1953. This paper illustrates the mechanism from Al-Najjar and Pai's framework (see the Frameworks section, above). Modern portfolio theory strives to create portfolios that maximize the ratio of return to risk. The calculations use quadratic optimization, which is extremely sensitive to errors in information. This result is not a situation of garbage in, garbage out. Rather, it is great data in, garbage out. Modeling simplifications improve performance, including using linear regression, imposing constraints that reduce the solution space, such as preventing short selling, and using a constant correlation matrix. The above paper goes even further. You can eliminate the impact of erroneous information entirely by allocating investment equally over all asset classes, called a 1/n strategy. Using both theory and empirical tests, this paper shows that the 1/n strategy outperforms many complex approaches found in the financial literature.
- Lempert, R.J., Popper, S.W., Groves, D.G., Kalra, N., Fischbach, J.R., Bankes, S.C., Bryant, B.P., Collins, M.T., Keller, K., Hackbarth, A., Dixon, L., LaTourrette, T., Reville, R.T., Hall, J.W., Mijere C., McInerney, D.J. (2013), "Making good decisions without predictions: robust decision making for planning under deep uncertainty," Research Brief, no. RB-9701, RAND Corporation. In this paper, the RAND Corporation presents a procedure for making plans that are robust to uncertainty, plans that are state flexible. Via scenario analysis, they adjust a scenario until a plan fails. Then they fix the plan to prevent that failure. By repeating the procedure, they steel the plans, making them robust to uncertainty, perhaps even to contingencies that were not imagined. Note: the above paper is a short research brief, but it cites several scholarly articles, including one from Management Science. Our group may gain more insights by discussing one of these papers.
- Feduzi, A. and J. Runde (2014), "Uncovering unknown unknowns: towards a Baconian approach to management decision-making," Organizational Behavior and Human Decision Processes, 124: 268-283. RAND's technique for making robust plans (see the previous paper) requires scenario analysis. Feduzi and Runde present an analogous technique for decision analysis.
- Sull, D. and K.M. Eisenhardt (2012), "Simple rules for a complex world," Harvard Business Review, September, pp. 69-72. According to the three new frameworks (see this webpage's Frameworks category, above), decision-makers seek state flexibility to reduce the cost of decision errors. This paper proposes that by creating and defining strategy as a small set of simple, coarse, state-flexible rules, an organization can be action flexible in a high-uncertainty environment and still make few costly decision errors. This paradox has a precedent. Just-in-Time (JIT) manufacturing is a highly constrained, rule-based method of manufacturing. Yet, JIT is far more flexible than other methods of managing manufacturing, such as MRP, which uses mathematical optimization models. Chris and I published an article that explains this paradox by applying Heiner's framework. Applying our reasoning to corporate strategy might be fruitful.
- Perretti, C.T., S.B. Munch and G. Sugihara (2013), "Model-free forecasting outperforms the correct mechanistic model for simulated and experimental data," Proceedings of the National Academy of Sciences of the USA, 110(13): 5253-5267. The authors create data with time series models whose parameters are set to produce species' population dynamics. They then add measurement error to the data. Finally, they test two forecasting approaches, striving to predict the species populations. They fit the model that created the data using a full set of data: the correct physics, with enough data to average away measurement errors, and no missing data. These are pristine conditions. The competitors were simpler statistical methods that used less data: SSR and two linear time series methods. SSR and the simpler linear time series method performed best. Sometimes forecasts from the model that created the data performed worse than guessing the population average every period. These results resonate with the philosophy papers by Forster (see the Philosophy section, above).
- Verheij, B. (2014), "To catch a thief with and without numbers: arguments, scenarios and probabilities in evidential reasoning," Law, Probability and Risk, 13(3-4): 307-325.
- March, J.G., L.S. Sproull and M. Tamuz. (1991), "Learning from samples of one or fewer," Organization Science, 2(1): 1-13.
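DeMiguel et al.'s mechanism can be reproduced in a deliberately stylized simulation (my own hypothetical sketch, not their empirical tests): make all assets statistically identical, so the true optimal risk-return portfolio is exactly 1/n, and then let estimation error alone degrade the plug-in mean-variance strategy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized market: 5 statistically identical assets (same mean, variance
# and pairwise correlation), so the true tangency portfolio is exactly 1/n.
n_assets, mu, sd, corr = 5, 0.01, 0.05, 0.2
mu_true = np.full(n_assets, mu)
sigma_true = sd**2 * (corr * np.ones((n_assets, n_assets))
                      + (1 - corr) * np.eye(n_assets))

def true_sharpe(w):
    """Sharpe ratio of weights w under the TRUE moments."""
    return (w @ mu_true) / np.sqrt(w @ sigma_true @ w)

w_equal = np.full(n_assets, 1 / n_assets)

sharpes = []
for _ in range(1000):
    # Estimate moments from 60 periods of history, then form the plug-in
    # mean-variance weights, proportional to inverse(Sigma_hat) @ mu_hat.
    R = rng.multivariate_normal(mu_true, sigma_true, size=60)
    mu_hat = R.mean(axis=0)
    sigma_hat = np.cov(R, rowvar=False)
    w = np.linalg.solve(sigma_hat, mu_hat)
    w = w / w.sum()
    sharpes.append(true_sharpe(w))

print("true Sharpe ratio, 1/n:          %.3f" % true_sharpe(w_equal))
print("mean true Sharpe, mean-variance: %.3f" % np.mean(sharpes))
```

Because 1/n is truly optimal here, any deviation produced by estimation error can only hurt: great data in, garbage out, exactly the trade-off between loss from imperfections and loss from incompleteness described in the Frameworks section.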
Background
The papers below summarize the main approaches to studying and understanding decision-making under uncertainty. I've posted them so you can learn about the environment in which the above papers were published. Of course, if you wish, we can read and discuss any of these background papers.
Every paper we will read references subjective (or expected) utility theory (SUT or EUT). Here is an introduction to SUT:
- Baron, J. (2004), "Normative models of judgment and decision making," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 19-36.
In economics, SUT became the standard approach to studying decision-making under uncertainty, but critiques arose, and three critiques grew into new schools of study: information processing, heuristics and biases (H&B), and fast and frugal heuristics (F&F).
As Herbert Simon recognized, SUT considers choice but ignores the method of choosing. Simon argued that, for most real-world problems, humans lack the computational capacity to find the optimal, SUT choice. Decision-makers must seek satisfactory, suboptimal solutions, so any study of rational decision-making must study the search (decision) process. Simon's approach to studying bounded rationality, his term for finite computational ability, became the information processing school. Here is an introduction:
- Payne, J.W. and J.R. Bettman (2004), "Walking with the scarecrow: the information-processing approach to decision research," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 110-132.
Amos Tversky and Daniel Kahneman critiqued SUT by showing that people deviated from SUT's axioms and prescriptions when facing simple problems, problems within anyone's computational capacity. Their research grew to become the heuristics and biases (H&B) approach. A good summary of the approach, its development and its response to critics is:
- Gilovich, T. and D. Griffin (2002), "Introduction - heuristics and biases: then and now," in T. Gilovich, D. Griffin and D. Kahneman (eds.), Heuristics and Biases: the Psychology of Intuitive Judgment, pp. 1-18.
For an early, brief presentation of key heuristics and biases, see:
- Tversky, A. and D. Kahneman (1974), "Judgment under uncertainty: heuristics and biases," Science, 185(4157): 1124-1131.
For comprehensive reviews, see:
- Slovic, P., B. Fischhoff and S. Lichtenstein (1977), "Behavioral decision theory," Annual Review of Psychology, 28: 1-39.
- Einhorn, H.J. and R.M. Hogarth (1981), "Behavioral decision theory: processes of judgment and choice," Annual Review of Psychology, 32: 53-88.
Gerd Gigerenzer argued that many deviations from SUT, including many identified by the H&B literature, are not decision errors. He proposed that people's heuristics are effective because they exploit qualities of the decision environment, a property he called ecological rationality. These ideas developed into the fast and frugal (F&F) approach to studying decision-making. Here is an introduction:
- Gigerenzer, G. (2004), "Fast and frugal heuristics: the tools of bounded rationality," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 62-88.
The information processing, H&B and F&F approaches study three facets of decision-making: decision processes, comparison to an ideal (SUT) and comparison to the decision environment. In this way, the approaches complement each other.
Are there other approaches? Herbert Simon made the following analogy. For centuries, people tried to learn to fly by studying birds, even by placing feathers under microscopes. Yet, people learned to fly, not by studying birds, but by trying to build artificial flying machines. Likewise, people can learn about decision-making by building artificial thinking machines. Following Simon's logic, we should find important approaches in the computer science literature.
Regrettably, I've read too little of this literature. If you know of good articles, kindly suggest them. Focusing on decision-making, here are introductions to two approaches: neural networks and rule-based decision-making:
- Busemeyer, J.R. and J.G. Johnson (2004), "Computational models of decision making," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 133-154.
- Smith, E.E., C. Langston and R.E. Nisbett (1992), "The case for rules in reasoning," Cognitive Science, 16: 1-40.