The Dangers of an Artificially Intelligent Future


Dec 03, 2021

There can be no doubt that we’ve entered the latest revolutionary period in human history: the Technological Revolution. This new era promises efficiency, convenience, communication, equal access to information, and unrivaled prosperity, but at what cost?

It is easy to be dazzled by the technological prowess of modern smartphones, self-driving cars, or VR gaming, and to forget that these machines were built by humans, in all their irrational and illogical glory. Programmers become choice architects: they have the power to shape the contexts in which people make decisions, and thus those decisions themselves. The designers of these technologies, however, are susceptible to the same unconscious biases and prejudices as the rest of us, and the resulting technologies—AI and machine learning algorithms in particular—threaten to exacerbate social inequalities by encoding our human biases and proliferating them on a massive scale.

Machine learning, explained

Machine learning (ML) algorithms may not be the great predictors we believe them to be. They merely replicate society as it is and was, rather than predict what it could be—or, more importantly, what we would like it to be. 

Algorithms seem quite complex, and in some cases they are, but they are hardly beyond human comprehension. In fact, we run predictive algorithms in our heads hundreds of times each day. What should I eat for dinner tonight? Perhaps I could stop by the supermarket on my way home from work and pick up some vegetables to go with the leftover salmon from yesterday. That is the most cost- and time-effective option, and it’s the output of a predictive algorithm you’ve just computed in your head.

When we run these calculations in our heads, we draw from our lived experience and our learning history to inform our decisions. Machine learning algorithms, meanwhile, make choices based on what they’ve learned from datasets fed to them by their developers. If you regularly surf the internet, you will be familiar with reCAPTCHA, a security feature that asks users to (for example) select all images containing traffic lights. This is a basic image-recognition machine learning task: Google uses your responses as labeled training data, making its AI better at recognizing images.
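To make this concrete, here is a minimal sketch, in Python, of the idea that a model’s answers are entirely a product of the labeled examples its developers feed it. The data is synthetic noise standing in for image features; this is an illustration of the principle, not any real reCAPTCHA pipeline.

```python
# Minimal sketch: a classifier only "knows" what its training labels tell it.
# The data here is synthetic noise standing in for image features; real
# image-recognition training uses millions of labeled photos, but the
# principle is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is a tiny image flattened into 16 pixel features.
X_train = rng.normal(size=(200, 16))
# Labels supplied by humans (e.g. "contains a traffic light" = 1).
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Predictions on new data are entirely shaped by those human-supplied labels.
X_new = rng.normal(size=(5, 16))
print(model.predict(X_new))
```

Whatever patterns, or prejudices, live in the labels, the model will faithfully reproduce.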

Datasets used to program more advanced machine learning algorithms include collections of human faces for facial recognition software, information about successful employees for application screening software, and locations of police arrests for predictive policing software. So, how intelligent is our artificially “intelligent” future? 

How algorithms learn our prejudices

Joy Buolamwini, a graduate researcher at MIT, drew attention to the issue of algorithmic discrimination when she unveiled a discovery she had made while working with facial recognition software. In her 2016 TED Talk “How I’m fighting bias in algorithms,” which has over 1.2 million views at the time of writing,2 Buolamwini describes a university project she undertook titled “Aspire Mirror,” where she attempted to project digital masks onto her reflection. She ran into trouble when the facial recognition software she was using failed to recognize her face—until she put on a white mask. 

Facial recognition systems are ML algorithms trained on large datasets. The algorithm identifies, collects, and evaluates facial characteristics and compares them to existing images in its database. If you have ever, like me, tried and failed to apply one of TikTok’s or Instagram’s facial filters to one of your pets, that is because the dataset used to train the facial recognition software contains only human faces and is unfamiliar with animal facial characteristics.
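As a rough illustration of that matching step, here is a minimal sketch in which made-up “embeddings” stand in for the numeric summaries a real deep network would produce from a face image:

```python
# Sketch of facial matching: compare a numeric summary ("embedding") of a new
# face against a gallery of stored embeddings. The vectors here are random
# placeholders; real systems derive them from deep neural networks.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
gallery = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

# A new photo of "bob": his stored embedding plus a little noise.
probe = gallery["bob"] + rng.normal(scale=0.1, size=128)

scores = {name: cosine_similarity(probe, emb) for name, emb in gallery.items()}
best_match = max(scores, key=scores.get)
print(best_match, round(scores[best_match], 3))

# If no gallery entry clears a similarity threshold, the system simply reports
# "no face found" -- which is what happens to faces the training data lacked.
```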

The problem is that facial recognition algorithms are overwhelmingly trained on datasets tainted by sampling bias. A 2018 study entitled “Gender Shades” found that two widely used facial analysis benchmarks were overwhelmingly composed of lighter-skinned individuals (79.6% for IJB-A and 86.2% for Adience).3 The study also revealed that darker-skinned females are the most misclassified group, with error rates of up to 34.7% (compared to 0.8% for lighter-skinned males). Similar results have been found for Amazon’s facial recognition software, Rekognition.4

It is no surprise, therefore, that the facial recognition software was failing for Joy, a black woman. The algorithm failed to recognize Joy’s face because its training data contained very few examples of faces like hers—an issue that ethnic minorities, particularly women of color, are accustomed to. As machine learning algorithms creep quietly into every aspect of our lives, it is imperative that we illuminate the hidden faces in our datasets in order to eliminate existing bias.
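One practical way to do that is to audit a model’s performance subgroup by subgroup, as the Gender Shades study did. The sketch below uses invented records purely for illustration; a real audit would run a trained model against a labeled benchmark dataset.

```python
# Sketch of a Gender Shades-style audit: report error rates per demographic
# subgroup instead of a single, flattering aggregate accuracy figure.
from collections import defaultdict

# (subgroup, predicted_label, true_label) -- invented examples
predictions = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for subgroup, predicted, actual in predictions:
    counts[subgroup][0] += int(predicted != actual)
    counts[subgroup][1] += 1

for subgroup, (wrong, total) in counts.items():
    print(f"{subgroup}: {wrong / total:.0%} error rate ({total} samples)")
```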


Algorithmic bias and hiring

Automated tools for screening job applicants are another example of machine learning algorithms that are often built on biased datasets. These tools are gaining traction due to their time- and cost-saving benefits: they claim to scan resumes for buzzwords associated with desirable candidates.5 The problem is that these algorithms are trained on datasets of the company’s existing, successful employees.

It is no new revelation that the corporate world is already awash with gender, racial, and disability discrimination, and this only increases as you climb the corporate ladder. A landmark study in this area, entitled “Are Emily and Greg More Employable than Lakisha and Jamal?”, famously found that resumes with white-sounding names received 50% more callbacks than identical resumes with African American names attached.6 This existing racial bias in hiring practices is contained in the datasets of current employees and can easily be encoded into an ML algorithm. The algorithm simply learns by example, and our society does not set the best example.

Take gender bias, for example. If a woman applies for a job at a firm that uses an algorithm in its hiring process, she is more likely to be rejected than her equally qualified male counterparts, because the algorithm has learned (from its biased dataset) that men are more likely to be successful employees. It believes this because there tend to be fewer women than men in high-powered corporate positions. In fact, this is precisely what happened at Amazon when it tried to introduce AI into its recruiting process; its algorithm reportedly learned (among other things) to penalize candidates whose resumes included the word “women’s” (as in “women’s chess club”) or who had attended all-women colleges or universities. AI is oblivious to this inequity, and will only serve to crystallize these existing biases at scale as hiring technology displaces human judgment.
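A toy sketch makes the mechanism visible. The “training data” below encodes a biased past in which resumes mentioning “women’s” were less often marked as hires; the resume text and outcomes are invented, and this is not Amazon’s actual system, but it shows how the reported dynamic can arise.

```python
# Sketch of how a resume screen absorbs historical bias: train a simple text
# classifier on biased past hiring outcomes, then inspect what it learned.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python, statistics",
    "women's chess club captain, python, statistics",
    "led robotics team, java, leadership",
    "women's coding society lead, java, leadership",
]
hired = [1, 0, 1, 0]  # biased historical outcomes, not a measure of ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("learned weight for 'women':", round(weights["women"], 3))  # negative
```

The model never sees a column labeled “gender”; it simply latches onto whatever words correlate with the biased outcomes it was given.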

How algorithms create reality

Machine learning and AI are increasingly held up as tools to predict the future. But if they are not used carefully, there is a danger that they will simply create the conditions for past patterns to continue, often at the expense of groups that are already vulnerable.

A prime example of this effect can be seen in predictive policing software such as PredPol and CompStat, which boast of their ability to predict crime hotspots that vary by time of day. This allows police forces to deploy officers to where they are needed most, to intervene more efficiently when a crime occurs. Seems like a noble agenda—so what’s the catch? Well, these algorithms are trained using datasets on the number of arrests in given areas. As in the examples above, this means that the algorithm becomes a reflection of our present socio-political environment—one where police are most often deployed to underprivileged areas inhabited by black and minority groups. This creates a pernicious feedback loop: data from policing is used to justify additional police presence, which in turn leads to more arrests.7 Because of examples like these, the mathematician and data scientist Cathy O’Neil has dubbed certain machine learning algorithms “weapons of math destruction” (WMDs).
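A toy simulation shows how the loop sustains itself. The numbers, and the assumption that recorded arrests scale with patrol presence, are illustrative only; the key point is that both districts have identical underlying incident rates.

```python
# Toy feedback-loop simulation: patrols are allocated according to last year's
# arrest data, and more patrols mean more of the same incidents get recorded
# as arrests. The over-policed district keeps "proving" it needs the police.
incidents = {"district_a": 100, "district_b": 100}  # identical underlying rates
arrests = {"district_a": 60, "district_b": 20}      # unequal historical policing
total_officers = 10

for year in range(3):
    total_arrests = sum(arrests.values())
    patrols = {d: total_officers * arrests[d] / total_arrests for d in arrests}
    arrests = {d: round(incidents[d] * min(1.0, 0.1 * patrols[d])) for d in arrests}
    print(f"year {year}: patrols={patrols}, arrests={arrests}")
```

Even though nothing distinguishes the two districts except how heavily they were policed to begin with, the data keeps justifying the original disparity.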

The issue here is that there is a massive data gap between crime in poor areas and crime in middle-class and wealthy ones, due to the lack of policing in the latter. These algorithms also use arrests as a proxy for crime, even though many arrests never end in a conviction. This is an example of Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. If the true goal of predictive policing is to reduce crime, then arrest counts are a poor stand-in for the outcomes that actually matter, such as harm that was reduced or averted because police were able to intervene quickly.

A better future with AI

It is clear that AI and ML have the ability to damage social equality and democracy, but could they also provide the solution to what we fear? For example, if current bias is correctly accounted for, could hiring technology actually serve to improve equality in the workplace? Very likely—as long as people take action to ensure it.

Making algorithms more equitable

In a famous study titled “Orchestrating Impartiality,” experimenters compared the results of “blind” orchestra auditions, where judges could not see applicants while they performed, with those of the existing face-to-face audition model.8 Since the blind audition model was introduced, the number of female musicians has increased fivefold, revealing an unsavory dose of gender bias in former orchestra hiring processes. 

Technology has the ability to remove biases, but only if our algorithms are trained to do so. Hiring algorithms, for instance, could be trained to look solely at an applicant’s relevant experience rather than potentially bias-inducing variables such as name, race, gender, zip code, or whether or not they went to an Ivy League university. In this way, AI could offer its own version of the blind audition. Amazon, for example, following revelations about its discriminatory hiring algorithm, recalibrated its AI to make it neutral towards terms like “women’s chess club” (though critics said this was still not enough, and Amazon later scrapped the tool entirely).
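In code, that kind of blinding can be as simple as stripping identity and proxy fields from an application before any model scores it. The field names below are illustrative, not drawn from any real screening tool.

```python
# Sketch of an algorithmic "blind audition": remove identity and proxy fields
# so a screening model can only see experience and skills.
BLINDED_FIELDS = {"name", "gender", "race", "zip_code", "university_name"}

def blind(application: dict) -> dict:
    """Return a copy of the application with identity/proxy fields removed."""
    return {k: v for k, v in application.items() if k not in BLINDED_FIELDS}

application = {
    "name": "Jordan Smith",
    "gender": "female",
    "zip_code": "10027",
    "university_name": "State University",
    "years_experience": 6,
    "relevant_skills": ["python", "data analysis"],
}

print(blind(application))  # only experience and skills remain
```

Blinding alone is not a cure-all, since proxies for protected attributes can hide in the remaining fields, but it is a concrete way to build the blind-audition idea into a screening pipeline.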

This approach could be applied in hiring processes and beyond to remove existing biases, demonstrating that AI has immense power to uplift and protect our society if it is used conscientiously. 

Being mindful about machine learning

In the words of Eckhart Tolle, “Awareness is the greatest agent for change.” Greater awareness regarding how these algorithms are used, how they impact us, and where to go for help is imperative for moving forward in a world of AI.

Joy Buolamwini and Cathy O’Neil are both founders of organizations that seek to reduce algorithmic discrimination: the Algorithmic Justice League and ORCAA (O’Neil Risk Consulting & Algorithmic Auditing), respectively. ORCAA is a consultancy firm that can be hired to evaluate whether an organization’s use of algorithms is ethical and in line with the company’s values. ORCAA’s aim is to incorporate and address concerns from all the stakeholders of an algorithm, not just those who built or deployed it. It also assists in the remediation of fairness issues, issues certifications of fairness where applicable, and provides education regarding algorithmic fairness.

It’s also worth noting that in many cases, such as hiring, it is in a company’s best interest to evaluate applicants fairly. If it does not, it can easily miss out on applicants who could bring exceptional talent and experience to the business. In cases like these, an external audit is an excellent way to verify the equitability of an automated application screening tool.

Regulating AI

It is clear that the opacity of ML algorithms has allowed some companies to use them to the detriment of society. In a more artificially intelligent future, we will require publicly agreed definitions of fairness that can be referenced and upheld by a court of law. Algorithms should be rigorously tested prior to their release, and companies should have to demonstrate compliance with anti-discrimination laws.

Already, regulators and lawmakers in the US are in the process of developing standards for algorithmic auditing, including translating existing fairness laws into rules for algorithm builders.7 Algorithmic fairness testing could follow an FDA-like approval process, where the burden of proof of efficacy lies with the parties developing the algorithms. There should also be disclosure as to when these algorithms are being used to evaluate a person, and what the algorithm’s desired outcome is. This allows people to question the results and seek legal assistance if required.

However, this still leaves some questions. For instance, who should be the one to enforce laws surrounding machine learning algorithms? Can we trust corporations to have our best interests at heart and be transparent about the inner workings of their algorithms? Do we trust governments to pass and effectively enforce strong regulations in this area? Even as we move towards a system for regulating algorithms, it remains to be seen how it will operate in practice. These questions should be answered with the input of the public, and of experts on equity and AI. 

Getting to the root of the problem

As with most of these solutions, education is imperative. Choice architects (programmers, in this case) carry a burden of responsibility when creating algorithms with the power to change the society we live in. Training programmers in algorithmic fairness, with the help of behavioral science, is a crucial next step. We urgently need to act on issues of algorithmic fairness as AI continues to permeate every level of our society.

Conclusion

In the words of American author William Gibson, “The future is here, it’s just not evenly distributed”—meaning that the future has arrived, but not all groups are fully reaping its benefits. AI is still in its infancy, and in many cases it merely replicates the world as it is or has been, replete with biased and discriminatory practices. Encoded bias features not only in facial recognition software, hiring, and policing, but also in college applications, health and automobile insurance, decisions about creditworthiness, and more. It is already shaping the society we live in!

Have we not learned from the 2008 financial crash that we shouldn’t wait for disaster to strike before we regulate modern financial instruments, or in this case, modern technologies? If not, the future AI promises us may merely be a reflection of our past mistakes. Who wields the power over our future? Governments? Or a handful of corporations controlling AI, whose current main focus is their bottom line?

It is up to us to educate ourselves on issues of algorithmic fairness. It is up to us to lobby for better regulation of machine learning algorithms and their ethical use. It is up to us to seek legal help when wronged by these algorithms, setting legal precedents that benefit the rest of society. We have the opportunity to design a society based on equality and acceptance by weaving these values into our technology. This article warns of the dangers we face if we continue on our current trajectory: the dangers of an unregulated artificially “intelligent” future.

References

  1. Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.
  2. Buolamwini, J. (2016). How I’m fighting bias in algorithms [Video]. TED. https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en#t-170583
  3. Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1–15. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
  4. Singer, N. (2019, January 24). Amazon Is Pushing Facial Technology That a Study Says Could Be Biased. The New York Times. https://www.nytimes.com/2019/01/24/technology/amazon-facial-technology-study.html
  5. Givens, A. R., Schellmann, H., & Stoyanovich, J. (2021, March 17). We Need Laws to Take On Racism and Sexism in Hiring Technology. The New York Times. https://www.nytimes.com/2021/03/17/opinion/ai-employment-bias-nyc.html
  6. Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination. American Economic Review, 94(4), 991–1013. https://www.aeaweb.org/articles?id=10.1257/0002828042002561
  7. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
  8. Goldin, C., & Rouse, C. (2000). Orchestrating Impartiality: The Impact of “Blind” Auditions on Female Musicians. American Economic Review, 90(4), 715–741. https://pubs.aeaweb.org/doi/pdfplus/10.1257/aer.90.4.715

About the Author

Eva McCarthy


Eva holds a Bachelor of Science in Mathematics and is currently undertaking a Master’s in Cognitive and Decision Science at University College London. She is a committee member of UCL’s Behavioral Innovations Society, a student community of behavioral scientists that aims to deliver positive and sustainable behavior change within UCL and beyond. She also works for Essentia Analytics, a behavioral data analytics service that helps investment managers make measurably better investment decisions. Standing at the precipice of major technological upheaval, she believes it is essential to apply behavioral science research to new technological advancements.
