Combining AI and Behavioral Science Responsibly

If you haven’t spent the last five years living under a rock, you’ve likely heard of at least one way in which artificial intelligence (AI) is being applied to something important in your life. From determining the musical characteristics of a hit song for Grammy-nominated producers[1] to training NASA’s Curiosity rover to better navigate its harsh Martian environment,[2] AI is as useful as it is ubiquitous. Yet despite AI’s omnipresence, few truly understand what is going on under the hood of these complex algorithms — and, concerningly, few seem to care, even when AI is directly impacting society. Take, for example, the United Kingdom, where one in three local councils uses AI to assist with public welfare decisions, ranging from deciding where kids go to school to investigating benefits claims for fraud.[3]

What is AI?

In simple terms, AI describes machines that are made to think and act like humans. Like us, AI machines can learn from their environments and take steps towards achieving their goals based on past experiences. The term “artificial intelligence” was coined in 1956 by John McCarthy, a mathematics professor at Dartmouth College.[4] McCarthy posited that every aspect of learning, and every other feature of human intelligence, can in theory be described so precisely that a machine can be made to simulate it mathematically.

Back in McCarthy’s era, AI was mere conjecture, limited in scope to a series of brainstorming sessions among idealistic mathematicians. Now it is undergoing a sort of renaissance, thanks to massive advancements in computing power and the sheer amount of data at our fingertips.

While the post-human, dystopian depictions of advanced AI may seem far-fetched, one must keep in mind that AI, even in its current and relatively rudimentary form, is still a powerful tool that can be used to create tremendous good or harm for society. The stakes are even higher when behavioral science interventions make use of AI. Problematic outcomes can occur when the uses of these tools are obfuscated from the public under a shroud of technocracy — especially if AI machines develop the same biases as their human creators. There is evidence that this can happen: according to an article published in Nature in 2018, researchers have even managed to deliberately implement cognitive biases in machine learning algorithms.[5]

Machines that act like us

A term that is almost as much of a buzzword as AI is machine learning (ML), a subset of AI describing systems that can learn automatically from experience, much like humans do. ML is used extensively by social media platforms to predict the types of content we are most likely to engage with, from the news articles that show up in our Facebook feeds to the videos that YouTube recommends to us. According to Facebook,[6] its use of ML is for “connecting people with the content and stories they care about most.”
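As a minimal sketch of what “learning from experience” means here, consider a toy feed ranker (the topics, numbers, and class below are invented for illustration, not any platform’s actual system) that tracks which topics a user clicks and ranks future content by observed click-through rate:

```python
from collections import defaultdict

class FeedRanker:
    """Toy content ranker that learns from click feedback."""

    def __init__(self):
        self.shown = defaultdict(int)    # impressions per topic
        self.clicked = defaultdict(int)  # clicks per topic

    def record(self, topic, was_clicked):
        # Every impression is one piece of "experience" to learn from.
        self.shown[topic] += 1
        if was_clicked:
            self.clicked[topic] += 1

    def score(self, topic):
        # Laplace-smoothed click-through rate: unseen topics start at 0.5.
        return (self.clicked[topic] + 1) / (self.shown[topic] + 2)

    def rank(self, topics):
        # Surface the topics this user has engaged with most, first.
        return sorted(topics, key=self.score, reverse=True)

ranker = FeedRanker()
# Simulated history: this user clicks politics often, gardening never.
for _ in range(20):
    ranker.record("politics", was_clicked=True)
    ranker.record("gardening", was_clicked=False)

print(ranker.rank(["gardening", "politics", "cooking"]))
# politics rises to the top; never-shown "cooking" sits at its neutral prior
```

The point of the sketch is only the shape of the loop: observe behavior, update statistics, and reorder what the user sees next.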

Yet perhaps we only tend to care about the things that reinforce our beliefs. Analysis from McKinsey & Company argues that social media sites use ML algorithms to “[filter] news based on user preferences [and reinforce] natural confirmation bias in readers.”[7] For social media giants, confirmation bias is a feature, not a bug.

[Figure: Worldwide Google searches for “machine learning.” Source: Google Trends]

There are concerns that ML-generated feedback loops create ideological echo chambers on social media sites[8] — though this may be an assumption built on an incomplete view of individuals’ media diets, according to research from the Oxford Internet Institute.[9] Either way, these (and many other) applications of ML are not inherently negative. Much of the time, it can be beneficial for us to be connected with the people and content we care about most. However, problematic uses of ML can cause bad outcomes: if we program machines to optimize for results that conform to our normative views and goals, they might do just that. AI machines are only as intelligent, rational, thoughtful, and unbiased as their creators. And, as the field of behavioral economics tells us, human rationality has its limits.
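To see how an engagement-optimizing recommender can narrow what a user is shown, here is a deliberately simplified simulation (the topics, click behavior, and exploration scheme are all invented for illustration):

```python
from collections import Counter

def simulate_feed(rounds=100):
    """Toy engagement loop: after a brief exploration phase, the feed always
    shows the topic with the best observed click rate. Because the simulated
    user reliably clicks only one topic, exposure collapses onto it."""
    topics = ["politics", "sports", "science"]
    shown = Counter({t: 1 for t in topics})    # smoothed impression counts
    clicked = Counter({t: 1 for t in topics})  # smoothed click counts
    exposure = Counter()

    for i in range(rounds):
        if i < 5 * len(topics):
            pick = topics[i % len(topics)]  # show everything a few times first
        else:
            # Then recommend purely by estimated click-through rate.
            pick = max(topics, key=lambda t: clicked[t] / shown[t])
        exposure[pick] += 1
        shown[pick] += 1
        if pick == "politics":  # the user's standing preference
            clicked[pick] += 1
    return exposure

print(simulate_feed())
# after 100 rounds, politics dominates: 90 of 100 impressions
```

Nothing in the loop is malicious; pure optimization for clicks is enough to produce the narrowing, which is why intent matters less than the objective the machine is given.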

When AI is used for the wrong reasons

The existence of biases does not necessarily mean we should slow down or stop our use of AI. Rather, we need to be mindful as we proceed so AI doesn’t become a sort of enigmatic black box over which we have little control. Artificial intelligence and machine learning are simply tools at our disposal; it is up to us to decide how to use them responsibly. Special attention is required when we use a tool as powerful as ML — one that, when partnered with behavioral science, has the potential to exacerbate the biases that impact our decision making on an unprecedented scale. Bad outcomes of this partnership could include the reinforcement of biases against marginalized individuals, or myopia towards equitable progress in the name of calculated optimization. Mediocre outcomes could include the use of ML-infused behavioral science interventions to sell us more stuff we don’t need, or to bureaucratize our choice environments in a web of tedium. These tools could also encourage pernicious rent-seeking by uninspired businesses, leading to stifled innovation and reduced competition.

Targeted nudges

Does any good lie at the intersection of ML and behavioral science? With an asterisk that strongly cautions against bad or mediocre uses — and against carelessly labelling ML a faultless panacea — the answer is yes. Behavioral science solutions augmented with ML can better predict which interventions will work most effectively, and for whom. ML can also allow us to create personalized nudges that scale across large, heterogeneous populations.[10] These personalized nudges could do wonders for addressing qualms about the external validity of randomized controlled trials, a type of experiment commonly used in behavioral science to determine which interventions work and to what degree. It doesn’t take idealistic daydreaming to think of pressing policy problems that could benefit from precise nudges. From predicting which messages will be most salient to specific individuals, to personalized health recommendations based on our unique genetic makeup, many policy areas are suitable candidates for these kinds of interventions.
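As a rough sketch of how trial data might inform personalized nudges, the toy function below (the segments, messages, and records are all hypothetical) estimates each message’s success rate within each user segment and picks the best performer per segment, a crude stand-in for the richer heterogeneous-treatment-effect models a real system would use:

```python
from collections import defaultdict

def best_nudge_per_segment(trial_records):
    """For each user segment, pick the message with the highest observed
    success rate in past trial data."""
    stats = defaultdict(lambda: [0, 0])  # (segment, message) -> [successes, trials]
    for segment, message, succeeded in trial_records:
        stats[(segment, message)][1] += 1
        if succeeded:
            stats[(segment, message)][0] += 1

    best = {}  # segment -> (message, success rate)
    for (segment, message), (wins, n) in stats.items():
        rate = wins / n
        if segment not in best or rate > best[segment][1]:
            best[segment] = (message, rate)
    return {segment: message for segment, (message, _) in best.items()}

# Hypothetical trial: (user segment, message shown, did the person enroll?)
records = [
    ("young", "social_proof", True), ("young", "social_proof", True),
    ("young", "loss_framing", True), ("young", "loss_framing", False),
    ("older", "social_proof", False), ("older", "social_proof", False),
    ("older", "loss_framing", True), ("older", "loss_framing", True),
]
print(best_nudge_per_segment(records))
# {'young': 'social_proof', 'older': 'loss_framing'}
```

A one-size-fits-all analysis of the same data would declare the two messages roughly tied; splitting by segment reveals that different people respond to different framings, which is the whole promise of targeting.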

Going forward

The benefits of using ML to improve behavioral science applications may indeed outweigh the risks of creating bad outcomes — and, perhaps more pervasively, mediocre ones. To get it right, behavioral science must play a role in identifying and correcting the harmful biases that impact both our decisions and the decisions of our intelligent machines. When using AI, we must remain faithful to a key tenet of behavioral science: interventions should influence our behavior so that we can make better decisions for ourselves, without interfering with our freedom of choice. Like their creators, intelligent machines can be bias-prone and imperfect. It is crucial that we remain aware of this as the marriage between behavioral science and AI matures, so that we can use these tools purposefully and ethically.
