If you haven’t spent the last five years living under a rock, you’ve likely heard of at least one way in which artificial intelligence (AI) is being applied to something important in your life. From determining the musical characteristics of a hit song for Grammy-nominated producers1 to training NASA’s Curiosity rover to better navigate its Martian environment,2 AI is as useful as it is ubiquitous. Yet despite AI’s omnipresence, few truly understand what is going on under the hood of these complex algorithms, and, concerningly, few seem to care, even when AI directly impacts society. Take, for example, the United Kingdom, where one in three local councils is using AI to assist with public welfare decisions, ranging from deciding where kids go to school to investigating benefits claims for fraud.3
What is AI?
In simple terms, AI describes machines that are made to think and act like humans. Like us, AI systems can learn from their environments and take steps toward achieving their goals based on past experiences. The term “artificial intelligence” was coined in 1956 by John McCarthy, a mathematics professor at Dartmouth College.4 McCarthy posited that every aspect of learning, and every other feature of human intelligence, can in principle be described so precisely that a machine can be made to simulate it.
Back in McCarthy’s era, AI was little more than conjecture, confined to brainstorming sessions among idealistic mathematicians. Now it is undergoing a renaissance, driven by massive advances in computing power and the sheer amount of data at our fingertips.
While post-human, dystopian depictions of advanced AI may seem far-fetched, one must keep in mind that AI, even in its current and relatively rudimentary form, is a powerful tool that can do tremendous good or harm to society. The stakes are even higher when behavioral science interventions make use of AI. Problematic outcomes can occur when the uses of these tools are obfuscated from the public under a shroud of technocracy, especially if AI systems develop the same biases as their human creators. There is evidence that this can occur: according to an article published in Nature in 2018, researchers have even managed to deliberately implement cognitive biases into machine learning algorithms.5
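To see how a bias can pass from people to a model, consider a minimal sketch in Python, built on an invented dataset of past decisions that favored one group of equally qualified candidates over another. The groups, numbers, and frequency-counting “model” are all hypothetical illustrations, not the method used in the Nature study.

```python
# A minimal sketch of how a model inherits its creators' bias, using an
# invented dataset. All groups, numbers, and names are hypothetical.

# Each record is (group, qualified, approved): biased historical decisions
# in which equally qualified B-candidates were approved far less often.
history = [("A", True, True)] * 80 + [("A", True, False)] * 20 \
        + [("B", True, True)] * 40 + [("B", True, False)] * 60

def approval_rate(group):
    # The "model": learn the approval frequency per group from past decisions.
    decisions = [approved for g, _, approved in history if g == group]
    return sum(decisions) / len(decisions)

for group in ("A", "B"):
    print(group, f"{approval_rate(group):.0%}")  # A 80%, B 40%
# Equally qualified candidates, yet the learned policy replicates the
# historical skew: the bias is in the data, so it is in the model.
```

Nothing in the code mentions bias; the skew is inherited entirely from the training data, which is one common way human prejudice ends up encoded in a machine.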
Machines that act like us
Almost as much of a buzzword as AI is machine learning (ML), a subset of AI describing systems that can learn automatically from experience, much as humans do. ML is used extensively by social media platforms to predict the content we are most likely to engage with, from the news articles that show up on our Facebook feeds to the videos that YouTube recommends to us. According to Facebook,6 it uses ML for “connecting people with the content and stories they care about most.”
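To make “learning automatically from experience” concrete, here is a minimal sketch in Python, assuming a deliberately simplified setup: a toy recommender that counts which topics a user clicks and serves more of whatever has worked. The class, topics, and click data are hypothetical illustrations, not any platform’s actual system.

```python
# A toy recommender that "learns from experience": it tracks per-topic
# click rates and serves whichever topic has performed best so far.

from collections import defaultdict

class ToyRecommender:
    def __init__(self):
        self.clicks = defaultdict(int)  # clicks observed per topic
        self.shown = defaultdict(int)   # impressions shown per topic

    def record(self, topic, clicked):
        """Learn from one piece of experience: an impression and its outcome."""
        self.shown[topic] += 1
        if clicked:
            self.clicks[topic] += 1

    def click_rate(self, topic):
        # Smoothed estimate, so unseen topics start at 0.5 rather than 0.
        return (self.clicks[topic] + 1) / (self.shown[topic] + 2)

    def recommend(self, topics):
        """Serve the topic the model currently believes the user likes most."""
        return max(topics, key=self.click_rate)

# Simulated experience: this user clicks politics, skips sports.
model = ToyRecommender()
for topic, clicked in [("politics", True), ("sports", False),
                       ("politics", True), ("sports", False),
                       ("politics", True), ("tech", True)]:
    model.record(topic, clicked)

print(model.recommend(["politics", "sports", "tech"]))  # prints "politics"
```

Real recommenders use vastly richer signals and models, but the core loop is the same: observe behavior, update estimates, serve whatever the estimates favor.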
Yet perhaps we tend to care only about the things that reinforce our beliefs. Analysis from McKinsey & Company argues that social media sites use ML algorithms to “[filter] news based on user preferences [and reinforce] natural confirmation bias in readers.”7 For social media giants, confirmation bias is a feature, not a bug.
There are concerns that ML-generated feedback loops create ideological echo chambers on social media sites8 (a worry that, according to research from the Oxford Internet Institute, may rest on an incomplete view of individuals’ media diets9). Still, these and many other applications of ML are not inherently negative. Much of the time, it is beneficial for us to be connected with the people and content we care about most. The trouble lies in how such systems are used: if we program machines to optimize for results that conform to our normative views and goals, they might do just that. AI systems are only as intelligent, rational, thoughtful, and unbiased as their creators. And, as the field of behavioral economics tells us, human rationality has its limits.
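That last point can be illustrated with a toy simulation in Python. It assumes an invented user with a mild preference for agreeable content and a feed that greedily serves whatever it predicts will be clicked; none of the numbers describe a real platform.

```python
# A toy feedback loop: the feed greedily serves whichever content it
# estimates performs best, and its estimates come from its own choices.
# The user, rates, and numbers are hypothetical illustrations.

import random

random.seed(0)
TRUE_RATE = {"agrees": 0.6, "challenges": 0.4}  # user's real click probabilities
clicks = {kind: 0 for kind in TRUE_RATE}
shown = {kind: 0 for kind in TRUE_RATE}

def estimated_rate(kind):
    # Smoothed click-rate estimate learned from the feed's own history.
    return (clicks[kind] + 1) / (shown[kind] + 2)

served = []
for _ in range(1000):
    if random.random() < 0.1:
        kind = random.choice(list(TRUE_RATE))      # occasional exploration
    else:
        kind = max(TRUE_RATE, key=estimated_rate)  # otherwise chase engagement
    shown[kind] += 1
    if random.random() < TRUE_RATE[kind]:
        clicks[kind] += 1
    served.append(kind)

share = served.count("agrees") / len(served)
print(f"share of agreeable content served: {share:.0%}")
# A neutral feed would serve 50%; the optimizing feed ends up far above it.
```

The feed is never told to build an echo chamber. It is told only to maximize clicks, and the lopsided diet falls out of that objective, which is exactly the sense in which such machines faithfully pursue the goals we hand them.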