Papers we discussed
Heiner, R.A. (1983), "The origin of predictable behavior," American Economic Review, 73: 560-595.
Anytime the difficulty of a problem exceeds a decision-maker's competence, a C-D gap > 0, the decision-maker experiences uncertainty. For example, you experience uncertainty when playing chess but not when playing tic-tac-toe. Expected utility theory assumes a C-D gap = 0, which limits its usefulness. Heiner's framework extends EUT to situations where the C-D gap > 0. This paper introduces the intuition of Heiner's framework and proposes numerous applications and implications for economics, psychology, animal behavior, evolution, and systems theory, among other fields. The paper itself provides little formal rigor, but Heiner supplies that rigor in later work. While this initial presentation of the framework lacks formalization, it is seminal, having been cited in more than 2,000 publications.
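Heiner's core result is often summarized as a "reliability condition": admit a flexible response into the behavioral repertoire only when the chance of using it on the right occasion, relative to the wrong occasion, exceeds a tolerance set by payoffs and how often the right occasion arises. A minimal sketch of that common summary (the notation and numbers are illustrative, not Heiner's own):

```python
# Toy sketch of Heiner's reliability condition (illustrative notation).
# Admit a flexible action only if the "reliability ratio" of recognizing
# the right occasion exceeds a tolerance set by payoffs and frequencies.

def allow_flexibility(r, w, g, l, pi):
    """r:  P(select the action | it is the right occasion)
    w:  P(select the action | it is the wrong occasion)
    g:  gain from the action on right occasions
    l:  loss from the action on wrong occasions
    pi: probability that the occasion is right
    Returns True if the flexible response should be admitted."""
    reliability_ratio = r / w
    tolerance = (l / g) * ((1 - pi) / pi)
    return reliability_ratio > tolerance

# Wide C-D gap: right and wrong occasions are hard to tell apart, so the
# agent should stick with rigid, predictable behavior.
print(allow_flexibility(r=0.55, w=0.45, g=1.0, l=2.0, pi=0.5))  # False
# Narrow C-D gap: recognition is reliable, so flexibility pays.
print(allow_flexibility(r=0.90, w=0.10, g=1.0, l=2.0, pi=0.5))  # True
```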
Bookstaber, R. and J. Langsam (1985), "On the optimality of coarse behavior rules," Journal of Theoretical Biology, 116(2): 161-193.
Animal behavior can be characterized by the degree of responsiveness it has to variations in the environment. Some behavioral rules lead to fine-tuned responses that carefully adjust to environmental cues, while other rules fail to discriminate as carefully, and lead to more inflexible responses. In this paper we seek to explain such inflexible behavior. We show that coarse behavior, behavior which appears to be rule-bound and inflexible, and which fails to adapt to predictable changes in the environment, is an optimal response to a particular type of uncertainty we call extended uncertainty. We show that the very variability and unpredictability that arises from extended uncertainty will lead to more rigid and possibly more predictable behavior.
We relate coarse behavior to the failures to meet optimality conditions in animal behavior, most notably in foraging behavior, and also address the implications of extended uncertainty and coarse behavior rules for some results in experimental versus naturalistic approaches to ethology.
Gigerenzer, G. (2004), "Fast and frugal heuristics: the tools of bounded rationality," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 62-88.
If you open a book on judgment and decision making, chances are you will stumble over the following moral: Good reasoning must adhere to the laws of logic, the calculus of probability, or the maximization of expected utility; if not, there must be a cognitive or motivational flaw. Don't be taken in by this fable. Logic and probability are mathematically beautiful and elegant systems. But they do not describe how actual people - including the authors of books on decision making - reason.... I will introduce you to the study of cognitive heuristics: how people actually make judgments and decisions in everyday life, generally without calculating probabilities and utilities.
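One concrete example of a fast-and-frugal heuristic from this research program is take-the-best, which compares two options cue by cue, in order of cue validity, and stops at the first cue that discriminates. A minimal sketch; the cue names and values below are invented for illustration:

```python
# Minimal sketch of the take-the-best heuristic: to infer which of two
# objects scores higher on a criterion, check binary cues in order of
# validity and stop at the first cue that discriminates. No weighting,
# no integration of the remaining cues.

def take_the_best(a, b, cues_by_validity):
    """a, b: dicts mapping cue name -> 0/1 cue value."""
    for cue in cues_by_validity:
        if a[cue] != b[cue]:
            return "a" if a[cue] > b[cue] else "b"
    return None  # no cue discriminates: guess

# Which city is larger? (cue values invented for the example)
cues = ["is_capital", "has_exposition_site", "has_soccer_team"]
city_a = {"is_capital": 1, "has_exposition_site": 1, "has_soccer_team": 1}
city_b = {"is_capital": 0, "has_exposition_site": 1, "has_soccer_team": 1}
print(take_the_best(city_a, city_b, cues))  # "a", decided by the first cue
```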
Forster, M. (1999), "How do simple rules 'fit to reality' in a complex world?," Minds and Machines, 9: 543-564.
The theory of fast and frugal heuristics, developed in a new book called Simple Heuristics That Make Us Smart (Gigerenzer, Todd, and the ABC Research Group, in press), includes two requirements for rational decision making. One is that decision rules are bounded in their rationality - that rules are frugal in what they take into account, and therefore fast in their operation. The second is that the rules are ecologically adapted to the environment, which means that they "fit to reality." The main purpose of this article is to apply these ideas to learning rules - methods for constructing, selecting, or evaluating competing hypotheses in science - and to the methodology of machine learning, of which connectionist learning is a special case. The bad news is that ecological validity is particularly difficult to implement and difficult to understand in all cases. The good news is that it builds an important bridge from normative psychology and machine learning to recent work in the philosophy of science, which considers predictive accuracy to be a primary goal of science.
Griffin and Brenner (2004), "Perspectives on probability judgment calibration," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, pp. 177-199.
This paper presents a comprehensive literature review on probability estimation and calibration, including judgment errors that make probabilities ill-calibrated.
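Calibration is typically assessed by binning probability judgments and comparing each bin's stated probability with the observed frequency of the event; a minimal sketch with invented data:

```python
# Assessing calibration (invented data): bin the stated probabilities and
# compare each bin's stated value with the observed frequency of the event.
from collections import defaultdict

# (stated probability, outcome) pairs
judgments = [(0.9, 1), (0.9, 1), (0.9, 0), (0.9, 0),
             (0.7, 1), (0.7, 0),
             (0.6, 1), (0.6, 1)]

bins = defaultdict(list)
for p, outcome in judgments:
    bins[p].append(outcome)

for p in sorted(bins):
    outcomes = bins[p]
    print(f"stated {p:.1f} -> observed {sum(outcomes) / len(outcomes):.2f}"
          f" (n={len(outcomes)})")
# Perfect calibration puts observed frequency equal to stated probability
# in every bin; "stated 0.9 -> observed 0.50" is the classic
# overconfidence pattern the review discusses.
```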
Lieder, Griffiths, Huys, and Goodman (2018), "The anchoring bias reflects rational use of cognitive resources," Psychonomic Bulletin & Review, 25: 322-349.
Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind's bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.
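A toy model in the spirit of the paper's resource-rational analysis (not the authors' actual model): estimation proceeds by local stochastic adjustments away from an anchor, and stops once the remaining error is cheaper than further thinking, leaving a bias toward the anchor:

```python
# Toy anchoring-and-adjustment (not the authors' actual model): start at an
# anchor, make local stochastic adjustments toward the truth, and stop once
# the remaining error is cheaper than the time already spent adjusting.
import random

random.seed(0)

def estimate(anchor, truth, step=5.0, cost_per_step=0.4, max_steps=200):
    """'truth' stands in for the evidence the adjuster consults each step."""
    x = anchor
    for n in range(max_steps):
        proposal = x + random.choice([-step, step])
        if abs(proposal - truth) < abs(x - truth):  # keep improving moves
            x = proposal
        if abs(x - truth) <= cost_per_step * (n + 1):  # thinking is costly
            break
    return x

print(estimate(anchor=30.0, truth=100.0))   # stops short of 100
print(estimate(anchor=170.0, truth=100.0))  # stops above 100
# Rational early stopping leaves estimates between anchor and truth, on the
# anchor's side -- the signature anchoring bias.
```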
Agrawal, Gans, and Goldfarb (2018), "Prediction, judgment, and complexity: A theory of decision making and artificial intelligence," NBER Working Paper 24243, National Bureau of Economic Research.
We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction in decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depends a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries.
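A toy numerical version of the setup described in the abstract, with invented payoffs: judgment supplies the payoff table, prediction sharpens the probability of the state, and better prediction makes that judgment more valuable:

```python
# Toy version of the abstract's setup (numbers invented): a safe action
# with a state-independent payoff, a risky action whose payoff depends on
# the state, and beliefs about the state sharpened by prediction.

SAFE_PAYOFF = 1.0
RISKY_PAYOFF = {"good": 3.0, "bad": -2.0}  # learned via (costly) judgment

def best_action(p_good):
    ev_risky = (p_good * RISKY_PAYOFF["good"]
                + (1 - p_good) * RISKY_PAYOFF["bad"])
    return ("risky", ev_risky) if ev_risky > SAFE_PAYOFF else ("safe", SAFE_PAYOFF)

print(best_action(0.5))  # ('safe', 1.0): a 50/50 prior favors the safe action
print(best_action(0.9))  # ('risky', 2.5): a sharp prediction makes risk pay
# Better prediction raises the value of having judged the payoffs at all --
# the prediction/judgment complementarity the paper studies.
```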
Verheij (2014), "To catch a thief with and without numbers: Arguments, scenarios, and probabilities in evidential reasoning," Law, Probability and Risk, 13: 307-325.
Mistakes in evidential reasoning can have severe consequences. In particular, errors in the use of statistics have led to serious miscarriages of justice. Fact-finders and forensic experts make errors in reasoning and fail to communicate effectively. As tools to prevent mistakes, three kinds of methods are available. Argumentative methods analyze the arguments and counterarguments that are presented in court. Narrative methods consider the construction and comparison of scenarios of what may have happened. Probabilistic methods show the connections between the probability of hypothetical events and the evidence. Each kind of method has provided useful normative maxims for good evidential reasoning. Argumentative and narrative methods are especially helpful for the analysis of qualitative information, but do not come with a formal theory that is as well-established as probability theory. In probabilistic methods, the emphasis is on numeric information, so much so that a standard criticism is that these methods require more numbers than are available. This article offers an integrating perspective on evidential reasoning, combining the strengths of each kind of method: the adversarial setting of arguments pro and con, the globally coherent perspective provided by scenarios, and the gradual uncertainty of probabilities. In the integrating perspective, arguments and scenarios are interpreted in the quantitative setting of standard probability theory. In this way, the integrated perspective provides a normative framework that bridges the communicative gap between fact-finders and forensic experts. Both qualitative and quantitative information can be used safely, focusing on what is relevant.
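The probabilistic strand of the paper can be illustrated with Bayes' rule in odds form, where a piece of evidence multiplies the prior odds by a likelihood ratio; all numbers below are invented:

```python
# Bayes' rule in odds form (all numbers invented): posterior odds equal
# prior odds times the likelihood ratio of the evidence.

prior_odds = 1 / 1000        # suspect is one of ~1000 plausible sources
likelihood_ratio = 10_000    # P(match | source) / P(match | not source)

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability: {posterior_prob:.3f}")  # 0.909, not 0.9999
# Reading the likelihood ratio itself as the posterior -- i.e., ignoring
# the prior -- is the "prosecutor's fallacy," one of the statistical errors
# such normative frameworks are meant to prevent.
```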
Silberzahn et al. (2018), "Many analysts, one data set: Making transparent how variations in analytic choices affect results," Advances in Methods and Practices in Psychological Science, 1: 337-356.
Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
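A miniature illustration of how a defensible analytic choice moves an effect estimate: the same (invented) counts yield different odds ratios depending on whether the analysis stratifies by a covariate or pools across it:

```python
# Same (invented) data, different defensible analyses: the odds ratio
# shifts depending on whether we stratify by a covariate or pool over it.

def odds_ratio(a, b, c, d):
    # a: exposed & event, b: exposed & no event,
    # c: unexposed & event, d: unexposed & no event
    return (a * d) / (b * c)

stratum1 = (10, 90, 5, 95)   # counts within covariate level 1
stratum2 = (40, 60, 25, 75)  # counts within covariate level 2
pooled = tuple(x + y for x, y in zip(stratum1, stratum2))

print(f"stratum 1 OR: {odds_ratio(*stratum1):.2f}")  # 2.11
print(f"stratum 2 OR: {odds_ratio(*stratum2):.2f}")  # 2.00
print(f"pooled OR:    {odds_ratio(*pooled):.2f}")    # 1.89
# Adjust-or-pool is exactly the kind of covariate decision on which the
# twenty-nine teams differed.
```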
Forster (1997), "Causation, prediction, and accommodation," Working Paper, Department of Philosophy, University of Wisconsin.
Causal inference is commonly viewed in two steps: (1) Represent the empirical data in terms of a probability distribution. (2) Draw causal conclusions from the conditional independencies exhibited in that distribution. I challenge this reconstruction by arguing that the empirical data are often better partitioned into different domains and represented by a separate probability distribution within each domain. Their similarities and differences then provide a wealth of relevant causal information. Computer simulations confirm this hunch, and the results are explained in terms of a distinction between prediction and accommodation, and William Whewell's consilience of inductions. If the diagnosis is correct, then the standard notion of the empirical distinguishability, or equivalence, of causal models needs revision and the idea that cause can be defined in terms of probability is far more plausible than before.
Russell (2016), "Rationality and intelligence: A brief update," in V. Müller (ed.), Fundamental Issues of Artificial Intelligence, pp. 7-28.
The long-term goal of AI is the creation and understanding of intelligence. This requires a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. The concept of rational agency has long been considered a leading candidate to fulfill this role. This paper, which updates a much earlier version (Russell, Artif Intell 94:57–77, 1997), reviews the sequence of conceptual shifts leading to a different candidate, bounded optimality, that is closer to our informal conception of intelligence and reduces the gap between theory and practice. Some promising recent developments are also described.
Bamford and MacKenzie (2018), "Counterperformativity," New Left Review, 113: 97-121.
Counterperformativity is a very particular form of misfire, of unsuccessful framing, when the use of a mathematical model does not simply fail to produce a reality (i.e., market results) that is consistent with the model, but actively undermines the postulates of the model. The use of a model, in other words, can itself create phenomena at odds with the model. This article proceeds as follows. First, to anchor the discussion we consider the contexts and effects of what is arguably the twentieth century's most influential mathematical model in finance, the Black-Scholes model of options, which was hugely important to the emergence from the 1970s onwards of giant-scale markets in financial derivatives…. Then we take on the article's main task, which is to sketch the beginnings of a typology of forms of the counterperformativity of mathematical models in finance. We identify three mechanisms of counterperformativity. The first is when the use of a model such as Black-Scholes in hedging alters the market in the underlying instrument in such a way as to undermine the model's assumed price dynamics…. The second mechanism of counterperformativity that we identify is when a model that has a regulatory function is gamed by financial practitioners taking self-interested actions that are informed by the model but again have the effect of undermining it…. The third mechanism of counterperformativity is what we call deliberate counterperformativity: the use of a model with the conscious goal of creating a world radically at odds with the world postulated by the model…. As we will see, the family of models in question was adopted precisely to reduce the chances of the world they posited becoming real.
Brighton and Gigerenzer (2015), "The bias bias," Journal of Business Research, 68: 1772-1784.
In marketing and finance, surprisingly simple models sometimes predict more accurately than more complex, sophisticated models. Here, we address the question of when and why simple models succeed — or fail — by framing the forecasting problem in terms of the bias–variance dilemma. Controllable error in forecasting consists of two components, the “bias” and the “variance”. We argue that the benefits of simplicity are often overlooked because of a pervasive “bias bias”: the importance of the bias component of prediction error is inflated, and the variance component of prediction error, which reflects an oversensitivity of a model to different samples from the same population, is neglected. Using the study of cognitive heuristics, we discuss how to reduce variance by ignoring weights, attributes, and dependencies between attributes, and thus make better decisions. Bias and variance, we argue, offer a more insightful perspective on the benefits of simplicity than Occam's razor.
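The decomposition the authors appeal to can be shown numerically in a toy setting: over repeated samples, an estimator's mean squared error splits into squared bias plus variance, and a deliberately "simple" (biased, low-variance) estimator can win on total error. A sketch, with invented parameters:

```python
# Numerical bias-variance decomposition in a toy setting: estimate a mean
# from small noisy samples with (1) the plain sample mean and (2) a
# deliberately "simple" estimator shrunk toward zero. The shrunk estimator
# is biased but lower-variance, and wins on total error here.
import random

random.seed(1)
TRUE_MEAN, N, TRIALS = 1.0, 5, 20_000

def decompose(estimates, truth):
    mean_est = sum(estimates) / len(estimates)
    bias_sq = (mean_est - truth) ** 2
    variance = sum((e - mean_est) ** 2 for e in estimates) / len(estimates)
    return bias_sq, variance

plain, shrunk = [], []
for _ in range(TRIALS):
    sample_mean = sum(random.gauss(TRUE_MEAN, 3.0) for _ in range(N)) / N
    plain.append(sample_mean)
    shrunk.append(0.5 * sample_mean)  # shrinkage: adds bias, cuts variance

for name, ests in (("plain ", plain), ("shrunk", shrunk)):
    b2, var = decompose(ests, TRUE_MEAN)
    print(f"{name}: bias^2={b2:.2f}  variance={var:.2f}  MSE={b2 + var:.2f}")
# plain : bias^2=0.00  variance=1.80  MSE=1.80
# shrunk: bias^2=0.25  variance=0.45  MSE=0.70  <- simplicity wins
```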
Taleb (2020), "A non-technical overview - the Darwin College lecture," in N. Taleb (ed.), Statistical Consequences of Fat Tails, STEM Press, pp. 21-64.
The monograph investigates the misapplication of conventional statistical techniques to fat tailed distributions and looks for remedies, when possible. Switching from thin tailed to fat tailed distributions requires more than "changing the color of the dress". Traditional asymptotics deal mainly with either n=1 or n=∞, and the real world is in between, under the "laws of the medium numbers" --which vary widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Lévy-stable basins of convergence.
A few examples:
- The sample mean is rarely in line with the population mean, with effect on "naive empiricism", but can sometimes be estimated via parametric methods (see the sketch after this list).
- The "empirical distribution" is rarely empirical.
- Parameter uncertainty has compounding effects on statistical metrics.
- Dimension reduction (principal components) fails.
- Inequality estimators (Gini or quantile contributions) are not additive and produce wrong results.
- Many "biases" found in psychology become entirely rational under more sophisticated probability distributions
- Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions.
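A sketch of the first point in the list above: with the same number of observations and the same true mean, sample means from a thin-tailed (Gaussian) distribution concentrate tightly, while those from a fat-tailed (Pareto) distribution scatter widely. Parameters are invented for illustration:

```python
# Sample means under thin vs. fat tails (parameters invented). The Pareto
# draws use inverse-transform sampling; tail index ALPHA = 1.3 gives a
# finite mean of ALPHA/(ALPHA-1) but infinite variance.
import random

random.seed(2)
ALPHA, N, TRIALS = 1.3, 1000, 30
TRUE_MEAN = ALPHA / (ALPHA - 1)  # ~4.33 for both distributions below

def pareto_draw():
    return (1.0 - random.random()) ** (-1.0 / ALPHA)  # support [1, inf)

gauss_means = [sum(random.gauss(TRUE_MEAN, 1.0) for _ in range(N)) / N
               for _ in range(TRIALS)]
pareto_means = [sum(pareto_draw() for _ in range(N)) / N
                for _ in range(TRIALS)]

print(f"gaussian sample means: {min(gauss_means):.2f} .. {max(gauss_means):.2f}")
print(f"pareto   sample means: {min(pareto_means):.2f} .. {max(pareto_means):.2f}")
# The Gaussian means hug 4.33; the Pareto means (same true mean, same n)
# scatter widely -- the sense in which "naive empiricism" fails.
```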
This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles.
Lempert (2019), "Robust Decision Making," Chap. 2 of Decision Making Under Deep Uncertainty: From Theory to Practice, Springer, pp. 23-51.
- The quest for predictions, and a reliance on the analytical methods that require them, can prove counterproductive and sometimes dangerous in a fast-changing world.
- Robust Decision Making (RDM) is a set of concepts, processes, and enabling tools that use computation, not to make better predictions, but to yield better decisions under conditions of deep uncertainty.
- RDM combines decision analysis, assumption-based planning, scenarios, and exploratory modeling to stress-test strategies over myriad plausible paths into the future, and then to identify policy-relevant scenarios and robust adaptive strategies (a toy version of this stress test appears after this list).
- RDM embeds analytic tools in a decision support process called "deliberation with analysis" that promotes learning and consensus-building among stakeholders.
- This chapter demonstrates an RDM approach to identifying a robust mix of policy instruments (carbon taxes and technology subsidies) for reducing greenhouse gas emissions. The example also highlights RDM's approach to adaptive strategies, agent-based modeling, and complex systems.
- Frontiers for RDM development include expanding the capabilities of multi-objective RDM (MORDM), more extensive evaluation of the impact and effectiveness of RDM-based decision support systems, and using RDM's ability to reflect multiple world views and ethical frameworks to help improve the way organizations use and communicate analytics for wicked problems.
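The stress-testing step referenced in the list above can be conveyed in miniature (all payoffs invented): score candidate strategies across many plausible futures and prefer the one with the lowest worst-case regret rather than the best performance under a single forecast. This is a toy version of the idea, not Lempert's tooling:

```python
# Toy version of RDM's stress test (all payoffs invented): evaluate
# candidate strategies across many plausible futures and prefer the lowest
# worst-case regret, not the best score under a single forecast.

# payoffs[strategy] = payoff in each of four plausible scenarios
payoffs = {
    "tax_only":     [8, 2, 4, 1],
    "subsidy_only": [5, 4, 3, 3],
    "mixed":        [6, 5, 4, 4],
}

n_scen = len(next(iter(payoffs.values())))
best = [max(p[s] for p in payoffs.values()) for s in range(n_scen)]

for name, p in payoffs.items():
    worst_regret = max(best[s] - p[s] for s in range(n_scen))
    print(f"{name:12s} worst-case regret = {worst_regret}")
# tax_only     worst-case regret = 3
# subsidy_only worst-case regret = 3
# mixed        worst-case regret = 2   <- the robust choice here
```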