The Dangers of an Artificially Intelligent Future
There can be no doubt that we’ve entered the latest revolutionary period in human history: the Technological Revolution. This new era promises efficiency, convenience, communication, equal access to information, and unrivaled prosperity, but at what cost?
It is easy to be dazzled by the technological prowess of modern smartphones, self-driving cars, or VR gaming, and forget that these machines were built by humans, in all their irrational and illogical glory. Programmers become choice architects: they have the power to shape the contexts in which people make decisions, and thus those decisions themselves.1 The designers of these technologies, however, are susceptible to the same unconscious biases and prejudices as the rest of us, and the resulting technologies—AI and machine learning algorithms in particular—threaten to exacerbate social inequalities by encoding our human biases and proliferating them on a massive scale.
Machine learning, explained
Machine learning (ML) algorithms may not be the great predictors we believe them to be. They merely replicate society as it is and was, rather than predict what it could be—or, more importantly, what we would like it to be.
Algorithms seem quite complex, and in some cases they are, but they are hardly beyond human comprehension. In fact, we run predictive algorithms in our heads hundreds of times each day. What should I eat for dinner today? Perhaps I could stop by the supermarket on my way home from work and pick up some vegetables to go with the leftover salmon from yesterday. That would be the most cost- and time-effective option, and in reaching it you have just computed the solution to a predictive algorithm in your head.
When we run these calculations in our heads, we draw on our lived experience and learning history to inform our decisions. Machine learning algorithms, meanwhile, make choices based on what they have learned from the datasets fed to them by their developers. If you regularly surf the internet, you will be familiar with reCAPTCHA, a security feature that asks users to (for example) select all images containing traffic lights. Your answers feed a basic image-recognition machine learning pipeline: Google uses your responses to train its AI and make it better at identifying objects in images.
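reCAPTCHA’s internal pipeline is not public, but the general recipe of supervised learning is the same everywhere: show the model labeled examples and let it find patterns. The sketch below is purely illustrative, using scikit-learn’s bundled handwritten-digit images rather than any Google data; the point is that the model can only recognize what resembles its training set.

```python
# A minimal sketch of supervised learning: the model only "knows"
# whatever patterns exist in the examples its developers feed it.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)  # a simple, off-the-shelf classifier
model.fit(X_train, y_train)                # "training" = learning from the dataset

# The model generalizes only to images that resemble its training data;
# anything outside that distribution is effectively invisible to it.
print(f"Accuracy on held-out digits: {model.score(X_test, y_test):.2%}")
```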
Datasets used to train more advanced machine learning algorithms include collections of human faces for facial recognition software, information about successful employees for application-screening software, and locations of police arrests for predictive policing software. So, how intelligent is our artificially “intelligent” future?
How algorithms learn our prejudices
Joy Buolamwini, a graduate researcher at MIT, drew attention to the issue of algorithmic discrimination when she unveiled a discovery she had made while working with facial recognition software. In her 2016 TED Talk “How I’m fighting bias in algorithms,” which has over 1.2 million views at the time of writing,2 Buolamwini describes a university project she undertook titled “Aspire Mirror,” where she attempted to project digital masks onto her reflection. She ran into trouble when the facial recognition software she was using failed to recognize her face—until she put on a white mask.
Facial recognition systems are ML algorithms trained on large datasets. The algorithm identifies, collects, and evaluates facial characteristics and compares them to existing images in its database. If you have ever, like me, tried and failed to apply one of TikTok’s or Instagram’s facial filters to a pet, it is because the dataset used to train the facial recognition software contains only human faces and is unfamiliar with animal facial characteristics.
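As a rough illustration of that matching step (and not of any particular vendor’s system), the sketch below reduces each face to a numeric feature vector, compares a new photo against the vectors already stored in a database, and declares “no match” when nothing is close enough. The names, embeddings, and threshold are invented for this example; real systems derive the vectors with deep neural networks.

```python
# Hypothetical sketch of the matching step: faces as feature vectors
# ("embeddings") compared by similarity. All numbers here are made up.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings; 1.0 means identical direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented database of known faces (name -> embedding)
database = {
    "person_a": np.array([0.11, 0.83, 0.42, 0.05]),
    "person_b": np.array([0.67, 0.12, 0.55, 0.31]),
}

query = np.array([0.10, 0.80, 0.45, 0.07])  # embedding of a new photo

# Find the closest stored face; below the threshold, the face is "not recognized".
best_name, best_score = max(
    ((name, cosine_similarity(query, emb)) for name, emb in database.items()),
    key=lambda pair: pair[1],
)
THRESHOLD = 0.95  # arbitrary cut-off for this illustration
print(best_name if best_score >= THRESHOLD else "no match found")
```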
The problem is that facial recognition algorithms are overwhelmingly trained on datasets tainted by sampling bias. The 2018 study “Gender Shades” found that two widely used facial-analysis benchmarks were overwhelmingly composed of lighter-skinned individuals (79.6% for IJB-A and 86.2% for Adience).3 The study also revealed that darker-skinned females are the most misclassified group, with error rates of up to 34.7%, compared to 0.8% for lighter-skinned males. Similar results have been found for Amazon’s facial recognition software, Rekognition.4
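The kind of audit “Gender Shades” performed boils down to a simple idea: report error rates per demographic subgroup instead of a single overall accuracy, so that failures concentrated in one group cannot hide behind a healthy average. The sketch below uses a handful of invented records purely to show the calculation; the percentages quoted above come from the study’s real benchmarks.

```python
# Illustrative subgroup audit: error rate per group rather than overall accuracy.
# The records below are invented; only the calculation is the point.
from collections import defaultdict

# (subgroup, prediction_was_correct)
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("lighter_female", True), ("lighter_female", True), ("lighter_female", False),
    ("darker_male", True), ("darker_male", False), ("darker_male", True),
    ("darker_female", True), ("darker_female", False), ("darker_female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group, n in totals.items():
    rate = errors[group] / n
    print(f"{group}: error rate {rate:.1%} ({errors[group]}/{n})")
```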
It is no surprise, therefore, that the facial recognition software was failing for Joy, a black woman. The algorithm failed to recognize her face because its training data contained very few examples of faces like hers, an issue that ethnic minorities, particularly women of color, are all too accustomed to. As machine learning algorithms creep quietly into every aspect of our lives, it is imperative that we illuminate the hidden faces in our datasets and eliminate existing bias.
References
- Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
- Buolamwini, J. (2016). How I’m fighting bias in algorithms. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en#t-170583
- Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81:1–15. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
- Singer, N. (2019, January 24). Amazon Is Pushing Facial Technology That a Study Says Could Be Biased. The New York Times. https://www.nytimes.com/2019/01/24/technology/amazon-facial-technology-study.html
- Givens, A. R., Schellmann, H., & Stoyanovich, J. (2021, March 17). We Need Laws to Take On Racism and Sexism in Hiring Technology. The New York Times. https://www.nytimes.com/2021/03/17/opinion/ai-employment-bias-nyc.html
- Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. American Economic Review, 94(4), 991-1013. https://www.aeaweb.org/articles?id=10.1257/0002828042002561
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
- Goldin, C., & Rouse, C. (2000). Orchestrating Impartiality: The Impact of “Blind” Auditions on Female Musicians. American Economic Review, 90(4), 715–741. https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.90.4.715
About the Author
Eva McCarthy
Eva holds a Bachelor of Science degree in Mathematics and is currently undertaking a Master’s in Cognitive and Decision Science at University College London. She is a committee member for UCL’s Behavioral Innovations Society, a student community of behavioral scientists that aims to deliver positive and sustainable behavior change within UCL and beyond. She also works for Essentia Analytics, a behavioral data analytics service that helps investment managers make measurably better investment decisions. Standing at the precipice of major technological upheaval, she believes it is essential to apply behavioral science research to new technological advancements.