You want to get insurance for your car. The insurer offers you a deal: install an app on your phone to benefit from a special rate. After a month, the insurer contacts you to offer you a revised rate for your insurance—you save an extra $15 a month (no need to pass Go or go directly to jail). How’d that happen? That app you installed tracks your driving habits, sending data back to the central server, and an AI-powered system makes a determination about the level of risk you represent and (accordingly) the price that the provider should quote you for insurance coverage.
Can the AI system foretell the precise moment that you’re going to have an accident, how damaged your car (and perhaps your body) might be, etc.? No, nothing quite so precise as all that. Instead, the system will offer probabilities, and in large numbers (lots of drivers, lots of road time) the insurance company can probably rely on those probabilities to make quite precise estimates about the collective number of accidents they’ll need to cover, how expensive they’ll be, what kinds of people will get into those accidents, and so forth.
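To make that concrete, here's a minimal sketch in Python, with invented numbers for the accident rate and claim cost, of how probabilities that say nothing certain about any individual driver still add up to a reliable aggregate estimate:

```python
import random

random.seed(42)

NUM_DRIVERS = 100_000
ACCIDENT_PROB = 0.03    # hypothetical per-driver annual accident probability
AVG_CLAIM_COST = 8_000  # hypothetical average cost of a claim, in dollars

# Simulate one year: each driver independently does or doesn't have an accident.
accidents = sum(random.random() < ACCIDENT_PROB for _ in range(NUM_DRIVERS))

expected = NUM_DRIVERS * ACCIDENT_PROB
print(f"Expected accidents:  {expected:.0f}")
print(f"Simulated accidents: {accidents}")
print(f"Estimated total claims: ${accidents * AVG_CLAIM_COST:,}")
```

No single driver's accident is predictable, but across 100,000 drivers the total lands very close to the expected value. That's the actuarial bet.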
While we’re quite accustomed to these probabilistic models for insurance, loans, and the like (courtesy of the actuarial sciences), AI is upping the ante—potentially even changing the game. AI is bringing probabilistic models into areas where they’ve never been before, and that doesn’t always sit easily with our human desire for clean, causal narratives. As humans, we like deterministic stories: ones where the narrative shows that things had to unfold the way they did. We don’t like “open loops”: questions that never get answered, gaps in the story that never resolve. Probabilistic, indeterministic systems don’t offer that.
In this piece, I’ll explore some historical forebears to illustrate that current, AI-focused discussions about indeterminism are not new but, given the revolutionary potential of AI, may require further, deeper reflection. To complement the historical analysis, I’ll provide a few contemporary examples and try to draw some tangible insights about the societal challenges that AI-driven decision-making may engender as it becomes ingrained in daily life.
In sum, I aim to contribute to ongoing discussions about how AI will influence the course of human societies, both what problems it will solve and what problems it will blow wide open. Specifically, I want to do that by bringing in some historical perspective.
Old debate, new technology
In a recent conversation, Dr. Richard Bruno raised the question of whether this is the first time in history that we’re relying on indeterministic systems to underpin our decisions. The question prompted me to reflect on the history of discussions about determinism and indeterminism; one of the most famous in the recent past coalesced around the development of quantum mechanics in the early 20th century. In quantum mechanics (according to the Copenhagen interpretation, but let’s not get into that now), a particle does not exist at one definite position until it is measured. Rather, it has a “superposition”: a probability distribution of places where it might be. When we measure its location, we force this distribution to collapse, giving the particle one specific location. But where is the particle prior to measurement? It has a certain probability of being in a number of places. That’s what superposition means.
In response to this problem, Schrödinger proposed a thought experiment that suggests the whole idea of superposition might be absurd. Suppose that we put a cat into a box with: a particle in superposition, a small position-measuring instrument, and a vial of poisonous gas. If the particle is in Location A, then the measuring instrument triggers the vial to break and the cat meets an unhappy end. If the particle is not in Location A, then the vial remains sealed and the cat lives to hunt another day. According to the Copenhagen interpretation, that means that until we actually open the box and look inside (forcing the particle to take on one determinate position, and the vial to either break or not break), the cat is both alive and dead at the same time: the cat is in a superposition of life and death until we open the box.
For some, that life–death superposition is completely absurd, and thus an illustration of the folly of the Copenhagen interpretation, perhaps even a challenge to quantum mechanics as a whole. Others deny the absurdity, saying there’s no problem with the situation as described. Others still accept the absurdity but present various rejoinders to try to salvage the situation. But let’s not get too far into that.
What I want to extract here is the worry that we have about indeterminism: when we take these kinds of systems and apply them to everyday objects, we sometimes end up tying ourselves in knots. Let’s say that, according to the probability distribution, there’s a 50% chance that the particle is in Location A (cat is dead) and a 50% chance that it is not (cat is alive). When we open the box, we will find either one outcome or the other. The superposition will collapse to only one of the two possible outcomes (A or not A), and there’s a 50% chance of each. But why the collapse landed on one outcome rather than the other in this specific instance is not something we can explain. The probabilities are fully deterministic, but the outcome in a single case is indeterministic.
So then how do we explain the outcome in a given case? We actually can’t; quantum mechanics can show us the probability distribution itself, and it can tell us that the distribution will collapse when we observe it, but it cannot explain why the cat actually lived instead of dying. There was just a 50% chance this was the way things would end up in this case.
Why do I feel so uncomfortable?
As humans, we find that lack of explanatory power profoundly discomforting. We don’t like things unresolved. We like causal narratives that explain the story from beginning to end; unresolved sections of the story make it harder to grapple with (they actually increase the cognitive load of understanding and remembering the story, load that we seek to minimize, since brain power is expensive from an evolutionary perspective).
Quantum mechanics doesn’t give us that comforting cognitive satisfaction that everything is resolved, in its proper order. Rather, it gets us as far as knowing the probability distribution, but not much further. If we ran the experiment a million times, quantum mechanics would explain very well why 50% of cases went one way while the other 50% went the other. (Like the actuaries for car accidents.) But for a single case it leaves us cognitively unsatisfied.
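A few lines of code make the contrast vivid. Here a pseudorandom 50/50 draw stands in for the collapse of the superposition (a stand-in only; quantum randomness is physical, not pseudorandom):

```python
import random

random.seed(0)

def open_the_box() -> str:
    """One run of the experiment: the superposition 'collapses' 50/50."""
    return "alive" if random.random() < 0.5 else "dead"

# A single case: nothing in the model explains *this* particular outcome.
print("This run:", open_the_box())

# A million cases: the distribution itself is explained extremely well.
runs = 1_000_000
alive = sum(open_the_box() == "alive" for _ in range(runs))
print(f"Fraction alive: {alive / runs:.4f}")  # ~0.5000, very reliably
```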
In this way, the indeterministic systems of quantum mechanics are very much like the indeterministic systems often produced using AI algorithms. A lot of what AI does is segment groups to maximize predictive power in novel cases. For instance, if you want to know which individuals within a population are most likely to forget to pay their taxes on time (so that you can send them a timely reminder), AI algorithms are usually pretty good at discerning a pattern in a large dataset of past instances.
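As an illustration, here’s roughly what that pattern-finding looks like with an off-the-shelf model. This is a hedged sketch, not any tax authority’s actual pipeline: the features, the data, and the effect sizes are all invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Invented historical data: each row is a taxpayer, each column a feature
# (days their return was late last year, number of reminder emails opened).
n = 5_000
days_late_last_year = rng.integers(0, 60, size=n)
reminders_opened = rng.integers(0, 5, size=n)
X = np.column_stack([days_late_last_year, reminders_opened])

# Synthetic ground truth: lateness last year raises the risk of missing
# this year's deadline; engagement with reminders lowers it (plus noise).
logits = 0.08 * days_late_last_year - 0.6 * reminders_opened - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# For a new individual, the model outputs a probability, not a verdict:
new_person = np.array([[30, 1]])  # 30 days late last year, opened 1 reminder
p = model.predict_proba(new_person)[0, 1]
print(f"Estimated probability of missing the deadline: {p:.2f}")
```

As with the quantum case, the output is a probability over many similar taxpayers, not an explanation of why this particular person will or won’t file on time.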