Algorithms for Simpler Decision-Making (2/2): Fighting Irrationality with Nonrationality

Algorithms have been designed as linear, rational agents for the purpose of optimizing decisions in the face of risk. Unquestionably, this design can consistently analyze mass quantities of data with a probabilistic accuracy the human brain simply cannot match. However, this utilitarian approach to decision-making differs from that of human decision-makers on a fundamental level. As Hafenbrädl, Waeger, Marewski, and Gigerenzer (2016) explain, algorithmic decisions are made in a different world, the small world of risk, than real-world human decisions, which take place in the big world of uncertainty. In the world of risk, probabilities, alternatives, and consequences can be readily calculated, weighed, and considered, and we must wrestle our intuitive impulses into submission for rational optimization. In the world of uncertainty, probabilities, consequences, and alternatives are unknowable or incalculable, and our intuitive heuristics are integral to satisficing under time and resource constraints (Hafenbrädl et al., 2016; Simon, 1956).

These contrasting characteristics delineate two views of decision-making: traditional rational theory and nonrational theory[1]. Traditional rationality suggests a good decision is made by considering all decision alternatives and their accompanying consequences, weighting the utility of each consequence by its subjective probability, and then selecting the option with the greatest expected utility. But for human decision-makers in uncertain environments, this process is psychologically unrealistic (Gigerenzer, 2001). Instead of viewing humans as omniscient beings, nonrational theories, such as bounded rationality, describe a decision-making process whose environment is marked by limited time, resources, and information, and in which rational optimization is both infeasible and unwise. While traditional rationality entices with a sense of reasonableness, applied real-world decision-making naturally abides by the principles of nonrationality. So, when standard rational algorithms are advertised as aids to human decision-makers, they carry a false assumption of compatibility between intrinsically different decision strategies. Algorithm aversion, directly and indirectly, can be traced back to this assumption.
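To make the traditional rational procedure concrete, here is a minimal sketch of expected-utility maximization. The options, probabilities, and utilities are invented for illustration and do not come from the literature cited above:

```python
# A minimal sketch of traditional rational choice. The options and
# their (probability, utility) pairs below are invented for illustration.

def expected_utility(consequences):
    """Sum each consequence's utility weighted by its subjective probability."""
    return sum(probability * utility for probability, utility in consequences)

options = {
    "launch_now":  [(0.6, 100), (0.4, -50)],  # expected utility ~ 40
    "wait_a_year": [(0.9, 50), (0.1, -10)],   # expected utility ~ 44
}

# Rational optimization: select the option with the greatest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # -> wait_a_year
```

The tidiness of this calculation is exactly the point: it presupposes that every alternative, consequence, and probability can be enumerated in advance, which is precisely what the big world of uncertainty denies.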

Due to their probabilistic focus, standard algorithmic decision aids confront human cognition head-on: you either accept or reject the algorithmic insight, all or nothing. Because these algorithms perform rational optimization, opportunities for integration with human nonrationality are sparse. In the predominant consumer model of algorithmic decision-making, this mismatch of rationality and nonrationality manifests as an interaction in which a human decision-maker performs an intuitive calculation, consults the algorithm's calculation, and then must choose a course of action with or without regard to the algorithmic advice. Needless to say, very little interaction occurs in this model, as intuitive and statistical judgment are pitted against one another in a psychological tug-of-war that intuition dominates time and again.

To design cognitive prosthetics that link the human mind to otherwise incomprehensible data flows and enable better decision-making, nonrationality must be the founding principle. Meeting human decision-makers in the world of uncertainty, where decisions must be made with limited time (fast) and limited information (frugal), the application of the fast-and-frugal framework to the design of algorithms is a contemporary case of mobilizing nonrational theory for cohesive human-algorithm decision systems (Phillips, Neth, Woike, & Gaissmaier, 2017). While not without limitations, structuring algorithms around heuristics allows human decision-makers and algorithms to share the step-by-step gathering, ordering, and evaluating of available data and ultimately arrive at a single, joint conclusion. In doing so, human and algorithmic cognition are meshed upstream in the decision process, permitting a more participatory, less confrontational augmentation experience.
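A fast-and-frugal tree, the kind of heuristic this framework builds on, can be sketched in a few lines: cues are checked one at a time in a fixed order, and every cue but the last offers an immediate exit, so a decision is often reached without inspecting all the data. The cues below are invented for illustration, loosely echoing a classic heart-attack triage example rather than anything specified in the article:

```python
# Illustrative fast-and-frugal tree (cues are invented, loosely echoing
# a classic heart-attack triage example). Each cue is inspected in order,
# and all but the last allow an immediate decision, so a conclusion is
# often reached with only part of the available information.

def triage(patient):
    # Cue 1: an elevated ST segment alone is decisive for high risk.
    if patient["st_elevated"]:
        return "high risk"
    # Cue 2: without chest pain as the chief complaint, exit to low risk.
    if not patient["chest_pain"]:
        return "low risk"
    # Final cue: any additional risk factor settles the remaining cases.
    return "high risk" if patient["other_risk_factor"] else "low risk"

print(triage({"st_elevated": False, "chest_pain": True, "other_risk_factor": False}))
# -> low risk
```

Because each branch is a plain yes/no question asked in sequence, a human decision-maker can follow, question, or override the procedure at any step, which is what makes this structure more participatory than an opaque all-or-nothing score.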

This integration of algorithmic statistical rigor with human, heuristic-led sensibility is an evidently difficult task that calls for a multidisciplinary community. As the discourse flounders between the abstract and the pragmatic, it is important to consider what we want, expect, and demand from our decision-makers and our decision-making algorithms.

In our quantified society, big data is the new oil, and algorithmic decision-making, as a means of refining and commoditizing big data, is here to stay. Digitalization and datafication have provided us with profound knowledge of human behavior. The trouble that remains is what to do with it. Inevitably, if not already, algorithms will evolve beyond the imaginations of their human creators, but for now it is up to us to steer them in the right direction. Whether the thought of algocracy has utopian or dystopian connotations for you, establishing a human presence in our data-fuelled ecosystem, and safeguarding against algorithm misuse, means striving towards augmented, human-in-the-loop decision-making.

Read part 1 here.

[1] Not to be confused with irrationality, which describes decision-making outcomes, nonrationality is a theoretical approach to describing the decision-making process (Gigerenzer, 2001).
