👋
Hi there,

I’m just gonna get right to it. Today’s newsletter is about AI. Don’t you dare click away! If you click away I will know about it!!

Listen, I get it. You’re tired of hearing about AI. AI is all anybody has been talking about for, like, a year and a half now. We’ve all had our fun futzing around on ChatGPT and Midjourney. We’ve all read twelve bajillion LinkedIn posts about how organizations can use AI to boost their KPIs. We’ve talked this topic to death and back already.  

But don’t touch that dial, because we promise you, we’ve barely scratched the surface. AI is playing a bigger and bigger role in our society every single day, and yet a lot of us still don’t really know how it works. And crucially, a lot of us don’t realize how AI reinforces inequality.

You may have heard about algorithmic bias: how AI learns and recreates human biases (more on that below). But you probably don’t realize that the development, training, operation, and distribution of AI can also create harms that disproportionately hurt marginalized people.

Today, midway through Black History Month, we’re taking a look at how AI can be damaging to equity, diversity, and inclusion (EDI), and what to do about it.

Until next time,
Katie and the humans @ TDL

P.S. Today’s newsletter title is taken from Virginia Eubanks’s book Automating Inequality, which you should definitely read.

📧 Want more perspectives on tech and decision-making? Subscribe to our newsletter here.
Today’s topics 👀
🤖 Deep Dive: How AI Reproduces Bias
📔 Field Notes: Algorithms That Run the World
🌍 Viewpoints: AI for Everyone 
DEEP DIVE
🤖 How AI Reproduces Bias

+ AI is grounded in a very particular, Western understanding of “intelligence.” By now, you’ve probably heard that IQ tests are problematic: they’re calibrated to a very specific cultural context (specifically, a white European one) and linked to just one conceptualization of how the mind works. These ideas come burdened with an ugly history, having been used to justify colonialism and the eugenics movement. And AI inherits much of this framing: machine “intelligence” is still largely defined and measured against that same narrow conception of the mind.

+ Racial inequality is baked into AI’s operations today. While AI systems tend to be designed by companies in the Global North, their day-to-day functioning is often maintained by racialized people in the Global South. These workers perform necessary tasks such as data annotation (which allows AI to learn), usually in exchange for very low wages. 

+ AI tends to perpetuate racial biases. As the saying goes: “garbage in, garbage out.” We live in a biased world, so large AI systems tend to be trained on biased data. That means they’re likely to replicate the same inequities that exist in our social world. In other words, AI systems can have very pernicious effects for marginalized groups, especially people of color and low-income communities.

+ AI comes with a big carbon footprint, and that hurts racialized people the most. Researchers estimate that the process of training GPT-3 emitted more than 550 tons of carbon dioxide. To put that into perspective, that’s roughly the emissions of 550 one-passenger roundtrip flights between New York and San Francisco (quick math below). How does race figure into this? Well, the climate crisis has impacts on all of us, but it’s people of color living in the Global South who are set to suffer disproportionately.
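Curious how that comparison pencils out? Here’s a quick back-of-the-envelope check. Both figures are assumptions on our part: published estimates put GPT-3’s training around 550 metric tons of CO2e, and commonly cited estimates put a one-passenger New York–San Francisco roundtrip flight at roughly one metric ton. The point is the order of magnitude, not the exact number.

```python
# Back-of-the-envelope check of the "550 roundtrips" comparison.
# Both inputs are assumed figures, not measurements from this newsletter.

training_emissions_t = 550   # metric tons CO2e, GPT-3 training estimate
roundtrip_ny_sf_t = 1.0      # metric tons CO2e per passenger, NY-SF roundtrip (assumed)

equivalent_flights = training_emissions_t / roundtrip_ny_sf_t
print(f"~{equivalent_flights:.0f} one-passenger NY-SF roundtrips")  # ~550
```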

🎧 FIELD NOTES: Algorithms That Run the World

Machine learning and algorithmic decision-making have come to play a huge role in shaping our everyday lives. We often talk about these systems as if they’re predicting the future. But as mathematician and data scientist Cathy O’Neil argues in her book Weapons of Math Destruction, algorithms aren’t neutral — and we’re letting them shape our society in harmful ways. 

In this episode of our podcast, O’Neil dives into some of the “invisible” problems that algorithms pose for society, and how decision-makers can build more responsible algorithms that improve societal outcomes. Listen on our website or wherever you get your podcasts.

VIEWPOINTS
🌍 AI for Everyone

Criticisms notwithstanding, it’s clear that AI is here to stay. And at the end of the day, AI is just a tool: it has the potential to create harm or benefit, depending on how we deploy it. For example, this article from Microsoft talks about how large language models (LLMs) could be used to personalize HIV care in the Global South. 

So, the question now isn’t whether or not we should use AI; it’s how we can build and use it more equitably. Here are a few ways we could go about doing just that. 

+ Ensure equal access to AI and AI education. AI, used responsibly, can be a powerful tool. That’s why we need to make sure that everybody is able to access it, especially young people from marginalized backgrounds.

+ Center stakeholders from marginalized communities. There’s a lot of very interesting, very important scholarship being done on how we might reimagine AI through alternative lenses. This position paper on Indigenous Protocol and Artificial Intelligence (IP AI) is a great example. More generally speaking, it’s crucial that marginalized communities, particularly communities from low- and middle-income countries, have a seat at the table as AI systems are being designed.

+ Build AI that’s bias-aware. To some extent, we can train AI to recognize and try to counteract biases. For example, it could be manually programmed to adjust its decision-making processes for fairness, or deployed to detect potential bias in human decision-making (see the sketch after this list). This isn’t a silver bullet, though: it also opens up a lot of questions about what’s “fair” and who gets to decide that.

+ Train AI on more diverse datasets. A lot of mainstream news coverage overlooks or excludes the viewpoints of marginalized communities, which means that AI trained on that data is more likely to replicate those biases. To tackle this problem, The New York Amsterdam News, NYC’s oldest Black newspaper, has partnered with an AI startup to train their LLM on the paper’s archives. If scaled up, this model could help correct some of the biases being encoded in AI. 

+ Regulation, regulation, regulation. We need policies that clearly delineate how AI can and cannot be used, grounded in commitments to equity, diversity, and workers’ rights. It’s as simple as that. 
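As promised above, here’s a minimal sketch of what one “bias-aware” check could look like in code. It computes demographic parity, just one of many fairness metrics, over made-up decisions: the groups, data, and audit threshold are all hypothetical, and a real audit would involve richer metrics and human review.

```python
# A minimal sketch of a demographic parity check on model decisions.
# Everything here (groups, data, 0.1 threshold) is hypothetical.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs: (demographic group, approved?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

gap = parity_gap(decisions)
print(f"Demographic parity gap: {gap:.2f}")  # 0.33 in this toy example
if gap > 0.1:  # arbitrary audit threshold, for illustration only
    print("Flag for human review before deployment.")
```

A check like this doesn’t settle what “fair” means; it just surfaces disparities so that humans, ideally including people from the affected communities, can decide what to do about them.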

A bar graph of CO2 emissions benchmarks.
Training an AI model emits about 5 times as much carbon as the manufacturing and lifetime usage of a car in the U.S. (Source: S&P Global Market Intelligence)
Authority Bias
AI is an amazing tool in a lot of ways — but it’s far from perfect, and it does make mistakes. Why are so many people (and organizations) entrusting it with important decisions? This might have something to do with authority bias: we tend to trust the opinions of people who are established authority figures (like certain tech personalities, for example). Read more about it on our website.
What’s new at TDL

TDL is hiring! We have a number of open positions, both remote and based in our Montreal office.

Find out more by visiting our careers portal.

Want to have your voice heard? We'd love to hear from you. Reply to this email to share your thoughts, feedback, and questions with the TDL team.
THE DECISION LAB
LinkedIn | Facebook | Twitter

The Decision Lab
4030 St Ambroise Street, Suite 413
Montreal, Quebec
H4C 2C7, Canada

© 2022 The Decision Lab. All Rights Reserved