AI, Indeterminism and Good Storytelling

Mar 12, 2020

You want to get insurance for your car. The insurer offers you a deal: install an app on your phone to benefit from a special rate. After a month, the insurer contacts you to offer you a revised rate for your insurance—you save an extra $15 a month (no need to pass Go or go directly to jail). How’d that happen? That app you installed tracks your driving habits, sending data back to the central server, and an AI-powered system makes a determination about the level of risk you represent and (accordingly) the price that the provider should quote you for insurance coverage.

Can the AI system foretell the precise moment that you’re going to have an accident, how damaged your car (and perhaps your body) might be, etc.? No, nothing quite so precise as all that. Instead, the system will offer probabilities, and in large numbers (lots of drivers, lots of road time) the insurance company can probably rely on those probabilities to make quite precise estimates about the collective number of accidents they’ll need to cover, how expensive they’ll be, what kinds of people will get into those accidents, and so forth.
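To make that concrete, here is a toy sketch in Python. The risk scores and claim cost are invented for illustration (they are not any insurer's actual model), but they show how individually unpredictable drivers add up to an aggregate the insurer can price against:

```python
import random

random.seed(0)

NUM_DRIVERS = 100_000
AVG_CLAIM_COST = 8_000  # hypothetical average cost of a claim, in dollars

# Hypothetical per-driver annual accident probabilities, as a risk model might assign them
risk_scores = [random.uniform(0.01, 0.08) for _ in range(NUM_DRIVERS)]

expected_claims = sum(risk_scores)
expected_payout = expected_claims * AVG_CLAIM_COST

# Simulate one year: each driver either has an accident or doesn't
observed_claims = sum(1 for p in risk_scores if random.random() < p)

print(f"Expected claims: ~{expected_claims:,.0f}")
print(f"Observed claims: {observed_claims:,}")
print(f"Break-even premium per driver: ${expected_payout / NUM_DRIVERS:,.2f}/year")
# The aggregate lands very close to the expectation, so pricing works,
# even though the model never says which particular drivers will crash.
```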

While we’re quite accustomed to these probabilistic models for insurance, loans and the like (courtesy of the actuarial sciences), AI is upping the ante, potentially even changing the game. AI is bringing probabilistic models into areas where they’ve never been before, and that doesn’t always sit easily with our human desire for clean, causal narratives. As humans, we like deterministic stories: ones where the narrative shows that things had to unfold the way they did. We don’t like “open loops”: questions that never get answered, gaps in the story that never resolve. Probabilistic, indeterministic systems don’t offer that.

In this piece, I’ll explore some historical forebears to illustrate that current, AI-focused discussions about indeterminism are not new but, given the revolutionary potential of AI, may require further, deeper reflection. To complement the historical analysis, I will provide a few contemporary examples and try to draw some tangible insights about the societal challenges that AI-driven decision-making may engender as it becomes ingrained in daily life.

In sum, I aim to contribute to ongoing discussions about how AI will influence the course of human societies: both what problems it will solve and what problems it will blow wide open. And specifically, I want to do that by bringing in some historical perspective.

Old debate, new technology

In a recent conversation, Dr. Richard Bruno raised the question of whether this is the first time in history that we’re relying on indeterministic systems to underpin our decisions. The question prompted me to reflect on the history of discussions about determinism and indeterminism; one of the most famous in the recent past coalesced around the development of quantum mechanics in the early 20th century. In quantum mechanics (according to the Copenhagen interpretation, but let’s not get into that now), a particle does not exist at one definite position until it is measured. Rather, it has a “superposition”: a probability distribution of places that it might be. When we measure its location, we force this distribution to collapse, giving the particle one specific location. But where is the particle prior to measurement? It has a certain probability of being in a number of places. That’s what superposition means.

In response to this problem, Schrödinger proposed a thought experiment that suggests the whole idea of superposition might be absurd. Suppose that we put a cat into a box with: a particle in superposition, a small position-measuring instrument, and a vial of poisonous gas. If the particle is in Location A, then the measuring instrument triggers the vial to break and the cat meets an unhappy end. If the particle is not in Location A, then the vial remains sealed and the cat lives to hunt another day. According to the Copenhagen interpretation, that means that until we actually open the box and look inside (forcing the particle to take on one determinate position, and the vial to either break or not break), the cat is both alive and dead at the same time: the cat is in a superposition of life and death until we open the box.

For some, that life–death superposition is completely absurd, and thus an illustration of the folly of the Copenhagen interpretation, perhaps even a challenge to quantum mechanics as a whole. Others deny the absurdity, saying there’s no problem with the situation as described. Others still accept the absurdity but present various rejoinders to try to salvage the situation. But let’s not get too far into that.

What I want to extract here is the worry that we have about indeterminism: when we take these kinds of systems and apply them to everyday objects, we sometimes end up tying ourselves in knots. Let’s say that, according to the probability distribution, there’s a 50% chance that the particle is in Location A (cat is dead), and a 50% chance that it is not (cat is alive). When we open the box, we will find either one outcome or the other. The superposition will collapse to only one of the two possible outcomes (A or not A), and there’s a 50% chance of each. But why the outcome fell one way rather than the other in this specific instance is not something we can explain. The probabilities are fully deterministic, but the outcome in a single case is indeterministic.

So then how do we explain the outcome in a given case? We actually can’t; quantum mechanics can show us the probability distribution itself, and it can tell us that the distribution will collapse when we observe it, but it cannot explain why the cat actually lived instead of dying. There was just a 50% chance this was the way things would end up in this case.
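Here is a deliberately simple sketch of that predicament (a toy illustration, not a physics simulation): the code can state the full probability distribution exactly, but the result of any single "box opening" is just a draw it cannot explain any further.

```python
import random

# The theory hands us the distribution exactly: 50% Location A (cat dies), 50% not.
distribution = {"Location A (cat dies)": 0.5, "not Location A (cat lives)": 0.5}

def open_the_box():
    """Collapse the superposition: sample one outcome from the known distribution."""
    outcomes, weights = zip(*distribution.items())
    return random.choices(outcomes, weights=weights)[0]

print("What the theory can tell us:", distribution)
print("What actually happens this time:", open_the_box())
# Run it again and you may get the other answer. The distribution was fully
# specified in advance; the single outcome has no deeper explanation on offer.
```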

Why do I feel so uncomfortable?

As humans, we find that lack of explanatory power profoundly discomforting. We don’t like things unresolved. We like causal narratives that explain the story from beginning to end; unresolved sections of the story make it harder to grapple with (they actually increase the cognitive load of understanding and remembering the story, a load we seek to minimize, since brain power is expensive from an evolutionary perspective).

Quantum mechanics doesn’t give us that comforting cognitive satisfaction that everything is resolved in its proper order. Rather, it gets us as far as knowing the probability distribution, but not much further. If we ran the experiment a million times, quantum mechanics would explain very well why 50% of cases went one way while the other 50% went the other. (Like the actuaries for car accidents.) But for a single case it leaves us cognitively unsatisfied.

In this way, the indeterministic systems of quantum mechanics are very much like the indeterministic systems often produced using AI algorithms. A lot of what AI does is segment groups to maximize predictive power in novel cases. For instance, if you want to know which individuals within a population are most likely to forget to pay their taxes on time (so that you can send them a timely reminder), AI algorithms are usually pretty good at discerning a pattern in a large dataset of past instances.

It might be, for example, that after the birth of their first child, the parents of the newborn are more likely to forget to pay their taxes on time—or, perhaps they remember, but simply don’t manage to get around to it. An AI system could detect that pattern and pick out all the people you should be reminding. (Notice that in presenting this example, I introduced it with a causal narrative; if I hadn’t, you would have found the example more cognitively demanding to parse and consider, and you might even have found my writing difficult to understand.)

The algorithm might tell you that 80% of first-time parents will pay their taxes late. But what about one individual first-time parent? The model can’t tell you which individuals will fall into the 80% who forget or the 20% who remember, and it won’t be able to explain the outcome for any specific individual. The model gets as far as telling you that there’s an 80% chance a given person in the group will forget, and then leaves you to fend for yourself from there.
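In more concrete terms, here is a hedged sketch of what such a model’s output amounts to in practice; the segment and the rates are invented for illustration, not drawn from any real tax administration’s system:

```python
# Invented segment-level rates for illustration only
late_payment_rate = {
    ("first_time_parent", True): 0.80,   # hypothetical: 80% of first-time parents pay late
    ("first_time_parent", False): 0.25,  # hypothetical baseline for everyone else
}

def predicted_risk(person):
    """Return the model's probability that this person pays late."""
    return late_payment_rate[("first_time_parent", person["first_time_parent"])]

alice = {"name": "Alice", "first_time_parent": True}
print(f"P(pays late | Alice's segment) = {predicted_risk(alice):.0%}")
# That 80% is the whole answer. Whether Alice herself ends up in the 80% who
# forget or the 20% who remember is not something the model can tell you,
# and after the fact it cannot explain which way she went, or why.
```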

Transparency, accountability and trust problems

This creates at least two types of challenges around institutional accountability. When making a large number of decisions, we sometimes have no alternative but to work with probabilities. There simply aren’t the resources to make some types of decisions on a case-by-case basis, strictly because of the volume of decisions to make (similar considerations can apply when we face intense time constraints).

For example, if a public health authority needs to screen individuals at an airport as potential carriers of a virus, they might very well rely on probabilistic methods to triage their screening and their decisions. Travelers arriving from certain hot spots might be automatically bumped up into a heightened screening protocol; even the screening itself might incorrectly flag 5% of healthy people as carriers and incorrectly clear 1% of actual carriers as healthy.
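To see what those hypothetical error rates mean at scale, here is a quick back-of-the-envelope calculation; the 5% and 1% figures come from the example above, while the prevalence (1 carrier per 1,000 travelers) is an assumption added purely for illustration:

```python
# Back-of-the-envelope arithmetic using the hypothetical error rates above,
# plus an assumed prevalence that is an illustrative choice, not a figure
# from any health authority.
travelers = 100_000
prevalence = 0.001          # assumed: 1 in 1,000 travelers is a carrier
false_positive_rate = 0.05  # healthy traveler incorrectly flagged as carrier
false_negative_rate = 0.01  # actual carrier incorrectly cleared as healthy

carriers = travelers * prevalence
healthy = travelers - carriers

flagged_healthy = healthy * false_positive_rate          # falsely flagged
flagged_carriers = carriers * (1 - false_negative_rate)  # correctly flagged

total_flagged = flagged_healthy + flagged_carriers
print(f"Flagged travelers: {total_flagged:,.0f}")
print(f"...of whom healthy: {flagged_healthy:,.0f} ({flagged_healthy / total_flagged:.0%})")
# With these numbers, roughly 98% of flagged travelers are actually healthy:
# a defensible policy at the group level that still produces thousands of
# individual stories with no satisfying causal explanation.
```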

The first challenge comes in here. The institution takes a probabilistic approach to groups that is indeterministic at the level of individuals. But when we assess the results, our interpretations of the situation and our resulting actions will be driven by our desire for clean, causal narratives (which give us that cognitive satisfaction). Suppose that a healthy person is falsely flagged as a carrier, gets temporarily quarantined with people who are actually sick, contracts the sickness themselves while in quarantine, and later dies from the illness.

When society demands answers about how this person got falsely flagged in the first place, what can the public health authority say? They can explain how the model classified the individual based on XYZ characteristics, and explain the features of the model in virtue of which this class of people have only a small chance of being incorrectly flagged as carriers. (This is another layer of the challenge: experts are finding it harder and harder to offer an explanation even for the probability as AI models become more and more complex and opaque.) But even if the health authority can explain the chain of reasoning to reach their probabilistic conclusion, they will not be able to explain why this individual person was incorrectly flagged. The model brings them only this far, but then leaves the explanation incomplete, from a causal narrative perspective.

And this brings us to a second type of problem that institutions can face. When society calls for an explanation but receives an unsatisfying response, this can trigger calls for systemic reform, often coalescing around these individual narratives. For example, when a Polish man was tasered to death at an airport in 2007 and public explanations were left wanting, it eventually created the pressure that led to reforms. A man unjustly killed becomes a symbol of excessive force by police. If there were an adequate explanation, we wouldn’t feel the cognitive dissonance.

But who gets to be a symbol and who doesn’t (and a symbol of what)? The level of cognitive dissonance we feel is heavily biased along predictable dimensions: race, socioeconomic class, age, gender, and so on. To illustrate this point, suppose that in 2007 the man tasered to death in an airport had come from Afghanistan rather than Poland; do we really think that the level of public outrage would have been the same? Would we have felt the same cognitive dissonance at the thought of a Muslim person being labeled as a threat in an airport, rather than a Catholic one? Research suggests that public narratives are heavily influenced by the race of the individuals involved, and that we make different policy and political choices as a result.

In brief, we find some narratives more incomplete (with the associated cognitive dissonance, sense of injustice, etc.) than other similar narratives, often depending on race or other features.

Can we trust AI, enough to leverage its benefits?

What do these two challenges look like when we translate them to the case of AI, and specifically to probabilistic modeling? The first challenge is that we can’t offer “complete” explanations for individual cases; the models are deterministic only insofar as they explain the segmentation and probabilities of groups, and indeterministic with respect to individual cases within a group. The second is that when more “complete” explanations are not available, some individual cases get much more attention than others, and thus certain systemic changes get made while others don’t, often based on biases and prejudices.

Coming back to the initial question, what is novel about using AI to build decision-support systems? Applying these probabilistic systems to social challenges blows wide open debates that, a century ago, were strictly academic arguments about the positions of particles; the discussion now involves us all, and the stakes are much higher. We’re talking about applying these systems to a much wider segment of our decisions than ever before, and that application is rolling out extremely quickly. That wide scope and rapid rollout put pressure on our social and democratic systems to apply the scrutiny and critical reflection necessary to ensure that these systems lead to fair and just outcomes for everyone. Seldom in the past (if ever) have our social and democratic institutions been called on to respond so quickly to such a rapid and large-scale change.

About the Author

Dr. Brooke Struck

Dr. Brooke Struck is the Research Director at The Decision Lab. He is an internationally recognized voice in applied behavioural science, representing TDL’s work in outlets such as Forbes, Vox, Huffington Post and Bloomberg, as well as Canadian venues such as the Globe & Mail, CBC and Global Media. Dr. Struck hosts TDL’s podcast “The Decision Corner” and speaks regularly to practicing professionals in industries from finance to health & wellbeing to tech & AI.
