AI algorithms at work: How to use AI to help overcome historical biases

Aug 30, 2022

AI algorithms are becoming the norm without the public’s trust

Only 35% of consumers are comfortable with businesses using AI algorithms to interact with them, and only 25% said they would trust a decision made by an AI algorithm over one made by a person regarding their qualification for a bank loan.1

Though the general public remains apprehensive, AI algorithms will inevitably become the status quo for businesses: 25% of companies already have selection processes fully enabled by AI algorithms.2 And for good reason: they’re favored across industries for their processing speed, consistent performance, and capacity to reduce the cost of human labor.

Behavioral Science, Democratized

We make 35,000 decisions each day, often in environments that aren’t conducive to making sound choices. 

At TDL, we work with organizations in the public and private sectors—from new startups, to governments, to established players like the Gates Foundation—to debias decision-making and create better outcomes for everyone.

Figure: Results from a survey conducted by PwC asking “To what extent is your company looking to integrate AI technologies into its operations?”

Considering these advantages, there is no slowing the pace of the AI revolution. However, we must ensure that AI algorithms generate fair outcomes and be transparent about how those outcomes are achieved. There are two ways we can leverage the advantages of AI algorithms in selection processes while promoting fairness:

  • The anonymity and efficiency afforded by AI algorithms can reduce the influence of stereotyping prevalent in human decision-making. 
  • Our knowledge about stereotyping can improve AI algorithms, from how they leverage data to how they avoid perpetuating historical outcomes.

Historical uses of AI have led to public apprehension

It’s no wonder the general public feels wary about AI algorithms dictating their fate. Headlines about AI-enabled selection algorithms range from wildly optimistic to dismal:

  • The Bad: A variety of newspapers featured exposés about an AI algorithm used by Florida’s criminal justice system that falsely identified African-American defendants as “high risk” at nearly twice the rate of white defendants.3
  • The Good: Experts have predicted that AI algorithms could eradicate the cognitive biases that have historically disadvantaged marginalized jobseekers.4

These contradictory portrayals send mixed signals about the reliability of AI algorithms, leading to unease about their widespread use.

Even when AI algorithms perform well, we lose confidence in algorithmic forecasters faster than in human forecasters who err at the same rate.5 Increasing public trust in AI algorithms would let us make full use of their many advantages (e.g., efficiency, cost-effectiveness). Nevertheless, establishing that trust remains a complex challenge, especially considering organizations’ haphazard use of AI algorithms in the past.

How unwanted algorithmic bias is caused by historical outcomes

When speaking of algorithms, a “bias” refers to a disproportionate weighting toward a particular outcome based on certain variables. Algorithms exhibit unwanted bias when they are trained on past data shaped by historical prejudice.

For example: In 2018, Amazon discovered that its AI-enabled recruitment tool unintentionally downgraded resumes containing the word “women’s” and, consequently, overlooked candidates from all-women’s colleges.6 The algorithm had been trained to select applicants who resembled historically successful hires. As a result, it learned to identify male candidates as preferable, even when their work experience and education were identical to those of their female competitors.
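
To see how this happens mechanically, consider a minimal sketch of a naive scorer trained on past hiring decisions. Everything here is illustrative: the resume snippets are fabricated and the word-counting model is an assumption for exposition, not Amazon’s actual system.

```python
from collections import Counter

# Fabricated "historical" screening decisions: (resume snippet, was accepted).
past_decisions = [
    ("captain of chess club", True),
    ("led robotics team", True),
    ("captain of women's chess club", False),   # historically rejected
    ("led women's coding society", False),
]

# Count how often each word appears in accepted vs. rejected resumes.
accepted = Counter(w for text, ok in past_decisions if ok for w in text.split())
rejected = Counter(w for text, ok in past_decisions if not ok for w in text.split())

def score(resume: str) -> int:
    """Sum of per-word (accepted - rejected) counts learned from the past."""
    return sum(accepted[w] - rejected[w] for w in resume.split())

print(score("captain of chess club"))          # 0
print(score("captain of women's chess club"))  # -2: penalized for "women's"
```

Even though gender never appears as an explicit feature, the word “women’s” becomes a proxy for it: the same pattern the Amazon tool exhibited.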

Ultimately, AI algorithms are both a means of perpetuating historical bias and a helpful tool for overcoming it. To prevent the former outcome, it is imperative that organizations evaluate the historical prejudice baked into their data sets from decades prior. In the words of Dr. Kirsten Martin of the University of Notre Dame’s Mendoza College of Business, “data is not clean of the sins of the past.”7

How stereotyping influences human decision-making

AI algorithms can be responsible for unjust outcomes, but humans aren’t any better. We’re prone to stereotyping because our cognitive mechanisms have adapted to rely on heuristics to synthesize information quickly.

Stereotyping is detrimental to selection processes because it favors historically privileged demographics. Human evaluators, for example, favor male applicants over female applicants with identical work experience.8

Algorithmic bias does not create unheard-of, harmful outcomes—it merely amplifies the trends that we see in human decision-makers.

How AI algorithms combat stereotyping through automation and anonymization

AI algorithms can be strategically implemented at the stages of selection processes where stereotyping is most prevalent, such as preliminary screening. They can anonymize applicants’ demographic information and process applications far more efficiently than a human can.
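
As a rough illustration, an anonymization step might look like the following minimal sketch; the record structure and field names are hypothetical assumptions, not any particular vendor’s schema.

```python
# Demographic fields to strip before a screening model sees the record.
PROTECTED_FIELDS = {"name", "gender", "race", "age", "date_of_birth", "photo_url"}

def anonymize(applicant: dict) -> dict:
    """Return a copy of the applicant record with demographic fields removed."""
    return {k: v for k, v in applicant.items() if k not in PROTECTED_FIELDS}

applicant = {
    "name": "Jane Doe",
    "gender": "female",
    "years_experience": 7,
    "degree": "BSc Computer Science",
    "skills": ["python", "sql"],
}

print(anonymize(applicant))
# {'years_experience': 7, 'degree': 'BSc Computer Science', 'skills': ['python', 'sql']}
```

Note that dropping explicit fields is only a first step: other variables can act as proxies for the removed ones, which is why the algorithmic auditing described below still matters.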

Multivariable human decision-making is inefficient, and we are prone to stereotyping under time constraints. We’re also vulnerable to being overwhelmed by the sheer volume of information that requires processing. Under such pressure, recruiters often limit their review to the top 10-20% of applicants they expect to show promise, e.g., those coming from Ivy League campuses or employee-referral programs.9

How we strategically combine the nuances of human decision-making with AI

In the past, filtering applicants based on these stereotypical attributes has tended to produce a less diverse applicant pool. AI selection algorithms, by contrast, enable applicants to be evaluated on relevant variables (e.g., the quality of their work, relevant credentials) at every stage of the pipeline. Here, AI algorithms provide a fairer alternative to time-constrained human evaluators who are pressured to shrink the pipeline based on stereotypical features.

Research in behavioral science can help identify the superficial features that we tend to prioritize and ensure that algorithms are selecting applicants using relevant variables. Although AI algorithms are powerful and effective, they cannot tell us what constitutes a “fair” metric or outcome. That’s where interdisciplinary discussion comes into play. 

AI algorithms can help organizations identify systemic bias

Besides optimizing efficiency and providing anonymity, AI algorithms can shed light on unwanted bias that is systematically perpetuated within organizations. Cognitive biases are challenging to identify because the way a single person arrives at a particular decision cannot be systematically analyzed or quantitatively measured.

Alternatively, the outcomes produced by AI algorithms can be interrogated with a high degree of precision through data analysis. For example, if a candidate’s outcome is affected by their race, all other variables being equal, then there is evidence of algorithmic bias.10
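
A toy version of that check, far simpler than the path-specific method of reference 10, is to flip the protected attribute while holding every other input fixed and compare the model’s outputs. The scoring function below is a deliberately biased stand-in for a trained model, not a real system.

```python
def score(candidate: dict) -> float:
    """Stand-in for a trained screening model (deliberately biased)."""
    base = 0.1 * candidate["years_experience"] + 0.5 * candidate["test_score"]
    return base + (0.3 if candidate["race"] == "group_a" else 0.0)  # unwanted term

def counterfactual_gap(candidate: dict, attr: str, alt_value: str) -> float:
    """Score difference when only the protected attribute is changed."""
    flipped = {**candidate, attr: alt_value}
    return score(candidate) - score(flipped)

candidate = {"years_experience": 5, "test_score": 0.8, "race": "group_a"}
print(f"{counterfactual_gap(candidate, 'race', 'group_b'):.2f}")  # 0.30
```

A nonzero gap means the protected attribute alone moved the outcome: exactly the quantitative evidence of algorithmic bias described above.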

Accessible tools already exist that can help organizations analyze their training data before AI algorithms are built: Google’s Know Your Data lets researchers and product developers identify unwanted bias latent within their data sets through basic criteria (e.g., “diversity of images”). Knowledge about behavioral science can help organizations anticipate the types of biased outcomes they’re likely to see, since the biases of human decision-making have already been codified in historical data.
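
In the same spirit, a pre-training audit can be as simple as comparing historical selection rates across groups before any model is built. The sketch below uses fabricated records, and the “group”/“hired” column names are hypothetical.

```python
from collections import defaultdict

# Fabricated historical outcomes.
records = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

totals, hires = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    hires[r["group"]] += r["hired"]  # True counts as 1

rates = {g: hires[g] / totals[g] for g in totals}  # A: ~0.67, B: ~0.33
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"selection-rate ratio: {ratio:.2f}")        # 0.50
```

One common rule of thumb in US employment contexts, the “four-fifths rule,” flags selection-rate ratios below 0.8 for further review.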

It’s easier to accuse an algorithm of systemic bias than a human

The capacity for algorithmic outcomes to be rigorously analyzed provides a significant advantage over human decision-making.

We are often reluctant to admit our decisions are influenced by cognitive biases. In one survey, 97% of admissions committees at prestigious universities agreed on the importance of fair admissions, yet fewer than half believed that biases could be a factor in their own admissions process.11

AI algorithms are advantageous because they can reveal systemic bias through quantitative evidence, so we don’t have to wait decades for people to recognize the consequences of their biased decision-making. They can achieve this by providing real-time feedback to developers.

How AI algorithms enable a fairer future

When implementing AI algorithms in selection processes, it’s imperative that systemic bias, whether “cognitive” or “algorithmic,” is mitigated. We can increase the public’s trust in AI algorithms by anonymizing applicants’ biographic information to prevent the influence of stereotyping, and by quantitatively measuring how historical prejudice affects training data to address algorithmic bias.

The Decision Lab is a behavioral consultancy that examines how research in behavioral science can promote social good. If you are interested in learning how to promote fairness through your adoption of AI algorithms, please contact us to hear more about our services.

References

  1. Cannon, J. (2019). “Report shows consumers don’t trust artificial intelligence.” Fintech News. https://www.fintechnews.org/report-shows-consumers-dont-trust-artificial-intelligence/; “What Consumers Really Think About AI: A Global Study.” Pega.
  2. Best, B., & Rao, A. (2022). “Understanding algorithmic bias and how to build trust in AI.” PwC. https://www.pwc.com/us/en/tech-effect/ai-analytics/algorithmic-bias-and-trust-in-ai.html
  3. Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  4. Polli, F. (2019). “Using AI to Eliminate Bias from Hiring.” Harvard Business Review. https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring
  5. Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). “Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err.” Journal of Experimental Psychology: General, 144(1), 114–126.
  6. Dastin, J. (2018). “Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women.” Reuters.
  7. Notre Dame Technology Ethics Center (2022). “New Anthology by ND TEC Director Kirsten Martin Explores ‘Ethics of Data and Analytics.’” University of Notre Dame. https://techethics.nd.edu/news-and-events/news/new-anthology-by-nd-tec-director-kirsten-martin-explores-ethics-of-data-and-analytics/
  8. González, M. J., Cortina, C., & Rodríguez, J. (2019). “The Role of Gender Stereotypes in Hiring: A Field Experiment.” European Sociological Review, 35(2), 187–204. https://doi.org/10.1093/esr/jcy055
  9. Polli, F. (2019). “Using AI to Eliminate Bias from Hiring.” Harvard Business Review. https://hbr.org/2019/10/using-ai-to-eliminate-bias-from-hiring
  10. Chiappa, S. (2019). “Path-Specific Counterfactual Fairness.” Proceedings of the AAAI Conference on Artificial Intelligence, 7801–7808.
  11. McCracken, M. (2020). “Breaking Down Bias in Admissions.” Kira Talent. https://blog.kiratalent.com/nine-forms-of-bias-in-admissions/

About the Authors

Ariel LaFayette

Ariel is an incoming Philosophy PhD student at the University of Toronto (UofT) and specializes in hermeneutics, phenomenology, the philosophy of religion, and the history of psychology. More broadly, she is interested in how the questions posed by seminal scholars (e.g., Augustine, Kierkegaard, Gadamer) continue to influence our investigations into self-knowledge, the limitations of reason, and personal fulfillment.

Turney McKee

Turney McKee is a Director at The Decision Lab. He holds a Master of Science in Cellular Biology and a Bachelor of Science in Pharmacology, both from McGill University. He is interested in international healthcare systems and public policy. Before joining The Decision Lab, Turney worked as a competitive and business intelligence analyst in the healthcare and technology sectors.
