Algorithmic Bias
What is Algorithmic Bias?
Algorithmic bias occurs when computer programs, machine learning models, or artificial intelligence (AI) systems produce unfair or discriminatory results. These biases often stem from unrepresentative training data or built-in assumptions in the algorithm’s design, and they reinforce existing social inequities such as those based on race, gender, or socioeconomic status.
The Basic Idea
What do you picture when you think about the future? If you imagine classic sci-fi images of technological advancement—flying cars, robot companions, and the like—you’re not alone. For decades, humans have associated forward progress with technological innovation. Even today, it’s not uncommon to think of computers as humanity’s saviors, finally allowing us to overcome biases, prejudices, and inequities through a gold standard of rationality and objectivity.
As is the case with many sci-fi fantasies, the reality is much more complicated. While we rapidly advance toward unprecedented achievements in automation and artificial intelligence (AI), it’s more important than ever that we pay attention to how these systems perpetuate human biases and systematically produce harmful and discriminatory outcomes, also known as algorithmic bias.1,2 These hidden skews can be embedded into many aspects of an algorithm’s design, training, or distribution, shaping who gets to benefit from technological innovation and who bears the cost.3 Can a robot be racist? The answer is: sometimes, yes.
By definition, algorithmic bias can theoretically refer to a skew or systematic error in any algorithm, digital or not.3 However, the term most often refers to computer algorithms, and more specifically, machine learning algorithms. This is because many of these systems are considered black boxes, meaning that the algorithms’ structure and inner workings are hidden from view, posing unique challenges for identifying and resolving biases.1,4,5 As AI and machine learning algorithms make their way into healthcare, education, finance, and many other fields, algorithmic bias has become a key interest for industry professionals and researchers.
Beyond the type of algorithm we’re talking about, what does algorithmic bias actually look like? Despite receiving significant attention from computer scientists, academics, and the general public, definitions of algorithmic bias vary widely depending on the person and the context. Generally, but not always, algorithmic bias goes beyond the broad definition of bias alone, referring specifically to tendencies that compound existing inequities and harm disadvantaged groups.1,3 For example, a hiring algorithm that tends to favor the name “Bob” over the name “Joe” might warrant a second look, but algorithmic bias would more commonly apply to a program that favors white-sounding names over non-white-sounding names, since it reproduces existing racial discrimination in hiring practices.
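To make the hiring example concrete, here is a minimal, purely illustrative sketch of how a model trained on biased historical decisions reproduces that bias. The candidate groups, numbers, and the naive hire-rate "model" are all invented for illustration; real hiring systems are far more complex, but the underlying mechanism is the same.

```python
# Toy illustration: a "hiring model" that scores candidates using
# historical hire rates will simply mirror the bias in its training data.
# All groups and figures below are hypothetical.

from collections import defaultdict

# Hypothetical past decisions (group, hired): Group A candidates were
# hired far more often than equally qualified Group B candidates.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

# "Training": tally hires and totals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, hired in history:
    counts[group][0] += hired
    counts[group][1] += 1

def score(group):
    """Score a candidate by the historical hire rate of their group."""
    hires, total = counts[group]
    return hires / total

# The learned scores reproduce the historical discrimination exactly.
print(score("A"))  # 0.8
print(score("B"))  # 0.3
```

The point of the sketch is that no one wrote "discriminate" into the code: the disparity emerges entirely from the data the system learned from, which is what makes algorithmic bias easy to miss and hard to audit in black-box systems.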
Dr. Kate Crawford, a leading researcher of AI and its social and material impacts, classifies algorithmic bias into harms of allocation and representation.6 Harms of allocation involve a system that unfairly withholds opportunities or resources from certain groups, such as job opportunities, healthcare, or insurance distribution programs. Harms of representation refer to biases in how people are depicted, like when Google users realized that searching for images of CEOs produced mostly pictures of white men.7 While allocative harms shape how we interact with the world, representative harms influence how we see the world, potentially reinforcing prejudice and limiting the futures people can imagine for themselves.
“The datasets and models used in [AI] systems are not objective representations of reality. They are the culmination of particular tools, people, and power structures that foreground one way of seeing or judging over another.”
— M.C. Elish and Danah Boyd, Data & Society Research Institute8
About the Author
Celine Huang
Celine Huang is a Summer Content Intern at The Decision Lab. She is passionate about science communication, information equity, and interdisciplinary approaches to understanding decision-making. Celine is a recent graduate of McGill University, holding a Bachelor of Arts and Sciences in Cognitive Science and Communications. Her undergraduate research examined the neurobiology of pediatric ADHD to improve access to ADHD diagnoses and treatments. She also sits on the North American Coordinating Committee of Universities Allied for Essential Medicines (UAEM), where she applies her behavioral science background to health equity advocacy. In her free time, Celine is an avid crocheter and concertgoer.