Algorithms that Run the World with Cathy O’Neil

Podcast · September 20, 2021

As soon as you start thinking ‘what are you building evidence for?’, you realize it’s all human choices, it’s all agendas, it’s all politics, really.

Listen to this episode


Intro

In this episode of The Decision Corner, Brooke Struck sits down with Cathy O’Neil, CEO of ORCAA and author of the New York Times bestseller Weapons of Math Destruction. Having studied and worked at some of the most prestigious universities in the world, including Harvard, MIT, Barnard College, and Columbia, O’Neil has been outspoken about the social risks of algorithms. 

In this conversation, O’Neil dives into some of the “invisible” problems that algorithms pose for society, and how decision-makers can create more responsible algorithms to better outcomes for society.

This episode includes discussions about:
  • The political nature of algorithms
  • How algorithms don’t predict the future, but create conditions for future events to occur 
  • How algorithms influence predictive policing 
  • How these biases invade hiring platforms and processes
  • The purpose of algorithms, which tend to serve those who create them
  • How policymakers and decision-makers can generate more responsibility among technicians


Key Quotes

On algorithms creating the future

“By using these algorithms to predict things, we actually create the future rather than just predict the future. We’re creating the conditions that we measured last hour or yesterday, or ten years ago. We’re reconstituting the past and propagating it. Credit card companies don’t just predict whether people will pay back their loans. They use that prediction to decide who deserves a loan.”

On hotspot policing

“People are arrested where there’s over-policing, and where there’s historical over-policing, and those historical over-policing, broken windows policies are in poor, Black neighborhoods. So when we hear the police say, ‘Oh, we’re going to send police back to these hotspots,’ the hotspots are defined by arrests, which we know happen more often to Black people, at least at the level of drugs, and very likely at other levels as well… the hotspot policing system propagates the uneven policing system.”

On how decision making can be improved beyond algorithms

“Algorithms, especially our blind faith in algorithms, is a plastering over of these more complicated conversations, and they show up in exactly the places where we need those conversations the most. We need the nuanced aspects of these conversations desperately, instead what we get is an algorithm.”

On what policymakers can do to improve algorithms

“A similar type of [FDA] process should be used for high-stakes algorithms. Not all algorithms – a lot of algorithms are dumb and nobody cares – but high-stakes algorithms, where people’s livelihoods, liberty, or financial futures are on the line, should go through a similar process, where there’s evidence with clear definitions, maybe even publicly available definitions, of fairness, of safety and of effectiveness. You have to define what success means, and you have to give evidence that it is successful.”

Transcript

Brooke Struck: Hello everyone, and welcome to the podcast of The Decision Lab, a socially conscious, applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, I’m a research director at TDL, and I’ll be your host for the discussion.

My guest today is Cathy O’Neil, CEO of ORCAA, a consultancy that audits algorithms and provides risk assessments. She’s also the author of Weapons of Math Destruction. In today’s episode, we’ll be talking about what the algorithms don’t decide, the unavoidably political nature of data and the bad things that happen when we convince ourselves that algorithms are actually apolitical. Cathy, thanks for joining us.

Cathy O’Neil: Thanks for having me, Brooke.

Brooke Struck: Please tell us a bit about yourself and what you’re up to these days at ORCAA.

Cathy O’Neil: Let’s see, what about myself? I have an abiding interest and fascination with the way we think technology works versus how it actually works, so I’ve spent the last few years thinking through people’s assumptions and blind faith in the supposed perfection of algorithms. What I do at ORCAA is try to develop ways of measuring the extent to which those assumptions hold or fail, so we develop tests. We put the science into data science. We’re developing these tests so that we have hypotheses that we’re actually testing rather than assuming.

Brooke Struck: That’s really interesting and that picks up so nicely on this idea from your TED Talk that really stood out to me. You have this criticism that algorithms are not scientific. How do you see algorithms falling short of that scientific moniker?

Cathy O’Neil: In every single way except maybe a couple is the right answer. First of all, I would distinguish between mathematics and science. There are ways that some of these tools are mathematical, just because the underlying algorithms can be sophisticated mathematical structures. The algorithms are logistic regressions or decision trees; you could think of them as axiomatic mathematical structures.

But the way I would distinguish mathematics from science is: math has axioms and proofs and logic, whereas science has evidence and hypotheses. Pure logic is the math part of science, so science itself is testing things and building up evidence and looking for patterns in the data. And there’s part of that there too in algorithms – there is pattern matching and data following and looking for probabilities based on historical patterns. But that’s where it ends, because as soon as you start thinking about what you are building evidence for – the choices one makes in what we’re predicting, how we’re measuring, what the data is, and how the data is curated in order to do those things – you realize it’s all human choices, it’s all agendas, it’s all politics after that.

Brooke Struck: That’s a topic that I’m definitely going to want to unpack in a moment. One of the things you also said is this idea that essentially to build an algorithm, as you’ve just noted, you don’t need a hypothesis to go in and test. All you really need is a bunch of data and a definition of success, and then you let the algorithm loose a little bit and you let the program decide how to optimize for the function that you’ve given it as the success criteria. Is there something missing in there from your perspective?

Cathy O’Neil: Well, as you say, if you just had historical data – and historical data has to contain an initial condition and then the result, and the result has to be whether success was achieved or not – you can build an algorithm. That’s the recipe: a definition of success, plus historical data that has initial conditions and results. That’s, by the way, how most algorithms are built. They’re typically scoring systems. You’re trying to predict the chances that some initial condition will end up as successful based on historical patterns. That’s everything like – “Is this person going to click on this ad? Is this person going to buy this purse on this website?” All of those things are based on systems like that, like you just described. But what you’re really pattern matching there – what the algorithm you’re building is actually measuring – is: were people in the past likely to do this, or did this structure in the past lead to that measured outcome?

What it doesn’t do is answer the question – should it have resulted in that? Or what are the values embedded in the system that led to those results, and were they appropriate or not? The reason I say that is it probably doesn’t really become high stakes if you’re talking about someone clicking on an ad or somebody buying a purse. Although even there, which ads? Are they predatory ads for payday loans, or for gambling websites, or for-profit colleges, or for misinformation? Those kinds of questions already become incredibly mired in values.

So that’s what I’m aiming at – this claim of being value-free, because we’re just pattern matching and we’re not adding values. But that’s a cheap way of wriggling out of the true fact, which is that you are propagating past values. You’re saying, “Whatever happened in the past, we’re going to predict will happen in the future.” That’s a way of propagating whatever historical biases we lived with back when the data was collected, which might’ve been yesterday, by the way. This is an ongoing thing. This stuff is really quick.

So it might literally be last hour we’re talking about – like, “Last hour, it worked like this, so we’re going to predict it will continue to work like this.” And that might be true, but it also might be a bad thing. The real thing I’m trying to point out is that by using these algorithms to predict things, we actually create the future rather than just predict the future. We’re creating the conditions that we measured last hour or yesterday or ten years ago. We’re reconstituting the past and propagating it. It’d be one thing, by the way, Brooke, if we were just predicting, but we’re not, we’re not predicting. Credit card companies don’t just predict whether people will pay back their loans. They use that prediction to decide who deserves a loan. So if it were just simply a prediction, we wouldn’t care, but because it’s not just a prediction, it’s really imbued with power over who gets what options in their lives. That’s when it becomes a propagation of values rather than just a descriptive prediction.
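To make the scoring-system pattern O’Neil describes concrete – historical rows pairing initial conditions with a recorded outcome, and a score learned from those patterns – here is a minimal, hypothetical sketch in Python. The features, numbers, and choice of model are invented for illustration; this is not ORCAA’s code or any lender’s actual system.

```python
# A minimal, hypothetical sketch of the "scoring system" pattern described above:
# historical rows pair initial conditions (features) with a recorded outcome,
# and the model scores new cases purely by how similar cases turned out before.
# The features, numbers, and choice of model are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [income in $1000s, number of prior defaults]
past_conditions = [[32, 1], [85, 0], [41, 0], [120, 0], [38, 2], [90, 0]]
repaid_loan     = [0,       1,       1,       1,        0,       1]  # the chosen "definition of success"

model = LogisticRegression()
model.fit(past_conditions, repaid_loan)

# A new applicant is scored by the patterns of the past. The model can only say
# "people who looked like this repaid (or didn't) before" -- it cannot say
# whether those past outcomes were fair, or whether they should be repeated.
new_applicant = [[36, 1]]
print(model.predict_proba(new_applicant)[0][1])  # estimated probability of repayment
```

The point of the sketch is that the only thing the model “knows” is how similar past cases were labeled, which is exactly why the prediction inherits whatever was baked into that labeling.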

Brooke Struck: That’s really, really rich, and let me try to break that down a bit and digest it. One of the points that you made there is that data is always about the past. You talked about historical patterns a number of times, and it seems like a trivial thing, but actually, in the conversation we’re having right now, I think this is something worth calling out explicitly: data is always about what’s happened before. We don’t have data about the future. If anything, that’s what we want – that’s the need or the desire these algorithms are coming to address: we want to know what’s going to happen in the future.

So we take data about the past, we identify these patterns about the past, and then we define these success criteria. Essentially, what we mean by that is that the model just fits the data, so it describes the past really well. And then we go through this series of logical inferences, like, “If the past was this way, we anticipate that the future will be this way also.” Now, there are reasons that that inference might break down – conditions can change, this kind of thing – but that inference is being made, and whether it’s appropriate or not is a question that we should ask.

Now that’s an epistemic inference, as you mentioned, that’s just making a prediction and the prediction on its own is not necessarily so problematic. But then we make the second inference, which is, “If this is the way things have been in the past, this is the way that things ought to continue into the future.” That’s what guides our decisions and really cements the past and projects it into the future. 

That, I think, is a really, really dangerous inference to make – that because things have been a certain way, they ought to continue to be that way. And as you say, all of this is really swept under the rug: the pretense that somehow these inferences aren’t being made, that there aren’t any political claims or political positions going into the data, into the way that the data is projected into the future, and these kinds of things. Can you share some examples of how this structure goes wrong? Where have implementations of these kinds of approaches and practices led us to outcomes that are clearly identifiable as outcomes that shouldn’t have come about?

Cathy O’Neil: Brooke, I like the way you reframed it a little bit. In fact, if I had the time from now on, instead of saying, “What’s going to happen in the future?”, I would say these algorithms are saying, “If things continue as they did in the past, then we predict this will happen in the future.” That’s a mouthful, but the point is that that’s all we’re doing. We’re saying, “We’re going to expect, and in fact enforce, the past values to continue.”

So I think one of the most understandable examples, for me anyway, of how this can go really wrong is policing. There’s something called hotspot policing, also called predictive policing. It’s used in almost every large city in the country, and I will make the claim that it propagates broken windows, very uneven policing that overly focuses on poor minority neighborhoods. The way it actually works is it looks for locations of historical arrests, like “where have people been arrested geographically?” And then it sends police to those neighborhoods with the assumption that, “oh, that’s where the crime is, because look at all these arrests.” And I’m using my words carefully, because crime is not the same thing as arrests.

In fact, let’s go there for a second, because it’s a huge problem. It’s a huge problem that we have arrests as a proxy for crime, because they’re just not a good proxy. A couple of examples: murders lead to an arrest only a little more than half the time – even when you have a dead body, you know there’s been a murder – and less than half the time if the victim is Black.

Rape, we know, is very under-reported. We don’t even know how to measure it, but we do know it’s deeply under-reported, and it gets even more under-reported in certain kinds of situations, when people don’t have trust in the police. And then, once it is reported, I’ve heard a Reveal episode describe that 7% of the time it leads to an arrest. Seven percent. So I just want you to think on the one hand of arrests, on the other hand of crime. It’s just not the same set, and yet we use arrests as a proxy.

And the final example of crime categories is smoking pot. We don’t even think of that as particularly illegal, and the good news, because of that, is that people admit to smoking pot. White people and Black people admit to smoking pot at the same rate. I want you to imagine how often smoking pot leads to an arrest. The answer is very, very seldom. But the missingness of that data is not equally distributed, if you will. So Black people get arrested five times more often than white people – historically, consistently. Even as things become less criminalized, there’s still criminal activity and there are still arrests, and even once it’s become less criminal overall, Black people still get arrested four or five times more often than white people.

That was a little sidebar on the problematic nature of using arrests as proxies for crime. Now think back to the geographic location of arrests. Well, guess what? People are arrested where there’s over-policing, and where there’s historical over-policing, and those historical over-policing, broken windows policies are in poor, Black neighborhoods. So when we hear the police say, “Oh, we’re going to send police back to these hotspots,” the hotspots are defined by arrests, which we know happen more often to Black people, at least at the level of drugs, and very likely at other levels as well.

So when we are doing that – I’m going to use your phrase again, the mouthful – we’re saying, “If things continue as they do, then police will arrest people in the future in these neighborhoods, so let’s send police to these neighborhoods.” We’re saying, “If policing goes on, as it has, then we predict there will be a lot of arrests in these neighborhoods.” Now, I’m still being super careful. The way the framing is in that enormous industry is – “That’s where the crime is, send the cops there.”

I’m just making the point that the hotspot policing system propagates the uneven policing system. Another way of saying that is instead of thinking of it as predicting crime – because I don’t think it predicts crimes – I’d like to think of it as predicting the police. It is predicting where the police will go. What we’d like to think is that the police are going to say to themselves, “Oh, is this the best use of our patrol officers? Are there crimes being committed in other places? How should we really define success for the police department as a whole, etcetera?” I hope that makes sense as an example of how algorithms propagate the past rather than reimagine what a future could look like.
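As a deliberately simplified illustration of the feedback loop O’Neil describes, the sketch below ranks “hotspots” purely by counts of past recorded arrests. The place names and numbers are invented, and real predictive-policing products are more elaborate, but the core input is the same: arrests, not crime itself.

```python
# Deliberately simplified, hypothetical sketch of hotspot ranking.
# The data are invented; the point is only that the "prediction" is driven
# by where arrests were recorded before, i.e. where police already were.
from collections import Counter

past_arrests = [
    "Neighborhood A", "Neighborhood A", "Neighborhood A",  # heavily patrolled
    "Neighborhood B",
    "Neighborhood C", "Neighborhood C",
]

# "Hotspots" are simply the places with the most recorded arrests ...
hotspots = [place for place, _ in Counter(past_arrests).most_common(2)]

# ... so patrols are sent back to where arrests were made before, which
# produces more arrests there, which keeps those places ranked as hotspots.
print(hotspots)  # ['Neighborhood A', 'Neighborhood C']
```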

Brooke Struck: Right. So beyond just repeating the past, what ought to go into our decisions? What additional ingredients can we think about bringing into the recipe?

Cathy O’Neil: Well, the truth is we have to go much deeper. We have to think through what policing is for, how we can make it just and fair, and what the goal of policing is. When I talk about propagating broken windows policing, I’m talking about things like stop, question and frisk. And I feel like we still haven’t had a reckoning of what that accomplished, both positive and negative, because it’s so political. People will say, “Oh, it scared some people straight.” Other people will say, “It created a lifelong lack of trust between the community and the police,” and I think they both have points, and we need to have that reckoning.

So Brooke, to be clear, I think of these algorithms – which I call weapons of math destruction in my book, these really problematic algorithms that make things worse rather than better, that hide their inner workings and just try to intimidate people with the science brand – I think of these as typically replacing a difficult conversation that we need to have but refuse to have. It is a very convenient way of papering over some kind of deeply complicated, hard conversation – in this case, the way that police are used to oppress a minority group. We don’t want to have the conversation, we don’t want to reckon with that, so we just say, “This is a silver bullet, let’s use this. Let’s trust in the data. Let’s follow the ‘data,’” which is to say, “Let us continue with the status quo.”

Brooke Struck: It seems like there are two paths that you’re illuminating there in terms of improving what we’re currently doing. One path – seemingly not your preferred path – is that we need to improve the data, we need to check the data for biases. There are potentially additional dimensions of data and additional indicators, maybe different analytical approaches, that we should be taking to improve the data side of the equation as much as we can.

Now, I’ll just share my perspective – I think that that is a valuable thing to do, but I suspect that, like you, I also hold the belief that it’s not going to be enough. We want the algorithms and the underlying data to be good, but that’s never going to be sufficient to overcome the challenges that you’re talking about, at least in my perspective. And that’s what illuminates this second path, which is that over and above anything that’s going on with the underlying data, the algorithm itself, this kind of thing, there’s also the wider context in which that exists: the context in which people are going to make decisions based on what the algorithm tells them, and it’s going to feed into discussions, it’s going to feed into the disparate distribution of resources – resources including money for benefits, including policing, including all kinds of different resources that we currently allocate based on some kind of algorithmic supports.

It seems that part of the inescapable ingredient of the additional thing we need to bring into that recipe is, as you say, this reckoning, this hard conversation. We need to have some conception of what it is that we actually want to achieve. We need to come to some agreement on what constitutes fairness, on what constitutes the actual purpose of a police force within a society, which is not to say that police don’t have a purpose – I think that that’s a very facile argument that a lot of people have gotten from this defund the police message that circulated widely for a while. Having a conversation about what the appropriate role and function of police in a society might be, is not to say that they don’t have an appropriate role, but that we need to have that hard conversation of what it is that they actually should be doing, and what it is that we’re hoping to achieve through the actions of the police and the way that we structure their activity. What are some of the barriers to having those conversations? Why is it so hard to have that reckoning about the outcomes that we actually want?

Cathy O’Neil: I’m going to back up a little bit and mention that the predictive policing algorithm, the family of such algorithms, I don’t think it’s retrievable. I don’t think it’s fixable because I simply don’t think we have the resources or the will to collect crime data. We don’t have crime data. We don’t want to collect crime data, because crime data collection would essentially mean videos in every person’s bedroom. We don’t want that. That’s Orwellian, although I will add that certain people live with something much closer to that than others. The missingness of this data that we think that we’re following – we think we’re following crime data when we build these algorithms – the problematic missingness is unsolvable, and that’s okay, but we have to take that into the discussion of what the police are for, which I agree is a really difficult conversation to have.

I will say, though, that other algorithms that are problematic might be fixable, or at least modifiable – things like a hiring algorithm, where you have more data and you can adjust the data. It’s not a missingness problem so much as a biased data problem, and just at a technical level, it’s a lot easier to adjust for bias than it is to adjust for missingness. The other thing is, morally speaking, there’s a weird asymmetry in people’s moral view of crime data: they object to the idea that some people who actually were caught in criminal activity shouldn’t get punished just because other people who we know committed crimes – but whom we didn’t catch, so we don’t have that data – got away with it. So there’s this asymmetry in the missingness problem. It’s part of that conversation that we’re not having, that reckoning. If we only collect information on Black people’s crimes, but not on white people’s crimes, what does that say about us? That’s the conversation we don’t want to have.

By the way, I worked on Wall Street, and as far as I could tell, everybody was snorting cocaine, but they didn’t get stopped and frisked. But the kids in Harlem did, and for that matter, the kids at Columbia University were constantly having these drug busts in the frat houses near where I lived, but they were never declared gang members. There’s just so much there, there’s so much asymmetry and yet people have a lot of problems with the idea that the missing white crime is as much of a problem as the actual Black crime data. 

Anyway, I don’t know what we need to do. I think the real problem is that we have a conversation we need to have before that, which is about race itself. That’s another reckoning conversation that we’re struggling with, but maybe we’re a little closer than we were a couple of years ago – I’m not really sure.

Brooke Struck: Let’s think about the role of the algorithms and how to have these reckoning conversations. So one of the points you made earlier in our conversation is that these algorithms bring with them a lot of opacity – the brand of science is used to paper over a lot of things and make it, on the one hand, seem totally spick-and-span clean, and on the other hand, far too complicated for a normal person to understand. How could you even possibly begin to have the conversation?

Cathy O’Neil: It’s actually not that complicated, though, Brooke. One of my moves is to point out that everyone runs predictive algorithms in their head when they get dressed in the morning. Everyone does it. “I’ve worn this outfit a million times. I’ve worn that outfit a million times. This one tends to be uncomfortable, and by the way, it’s cold today so I definitely don’t want to wear that” – that’s a predictive algorithm. Of course, on a given day, you might have a different definition of success – “Today, I want to look good; tomorrow, I want to be comfortable” – but we all do it, it’s not that complicated. In fact, I think the idea that it’s too hard for the average person is a problem. They don’t need to know the technical details of this stuff to get the idea: “Wait a second, that’s not my definition of success that Facebook is using to serve me misinformation, that’s their definition of success, and it’s just meant to keep me on Facebook, clicking on ads and getting into fights with my neighbors.”

Brooke Struck: It’s interesting, the point that you made there, that this is actually something that’s not that complicated. If I hear you correctly, what you’re saying is that there is a level of granularity at which it is going to be too technical for most of the public, but there’s also a level of granularity that is going to be quite accessible, and that level of granularity is actually a useful thing to talk about. We can make a lot of progress, even if we don’t get really deep into the guts of the most technical parts.

Cathy O’Neil: Correct. In fact, almost nobody wants to get into those deep technical guts. In my algorithmic auditing company, I work with the data people, obviously, but I work 85% of the time with other people to audit the algorithms within the company, people in comms, like, “What is the New York Times headline that you don’t want to see? What are the values of this company CEO or CTO? What are the values you guys want embedded in this algorithm, and how do we measure whether that is there or not?” I talk to the business owner, I talk to the lawyers, the compliance officers.

There are just so many people that are stakeholders in a given algorithm, and they don’t need to be technical. They just say, “Here’s what I want to make sure isn’t happening here. Here’s what I want to make sure is happening.” Those are values, and my job as a technician is to translate their values into data and to check whether or not the algorithm is following those values. But it’s really not inherently a technical conversation, and it shouldn’t be. These are algorithms that are affecting our society as a whole and the way our society works – the way policing works, the way the justice system works, the way hiring works, the way college admissions work. We need to have this conversation as a society; it’s not just for the technical people.

Brooke Struck: Yeah, it’s interesting you should say that. It sounds like you require a level of technical fluency in order to carry out that conversation with your clients, but the conversations themselves are not about technical things. So I can easily imagine that, as you’re auditing the algorithm, you’re checking that the values the CTO or whichever stakeholder has told you are important actually find themselves embedded in this algorithm, and you can help them measure the extent to which those values are coming through. There’s also, if I anticipate correctly, working in the opposite direction: some stuff that’s happening in the data that hasn’t been part of the conversation about values, or about the objectives that the algorithm ought to achieve, which you can then carry forward and ask, “Are these things you actually want?”

Cathy O’Neil: Yeah, and I will add that sometimes it’s really not possible to make everyone happy. There really are embedded trolley problems in these stakeholder concern matrices – ethical matrices, if you want to call them that.

Sometimes you’ll have one group that cares more about false positives and another group that cares more about false negatives. And because no algorithm is perfect, you can’t make both of those go down to zero, so someone’s going to remain unhappy. And going back to the earlier conversation about why it is so hard to have a reckoning, I think there are often embedded trolley problems in these conversations, and what ends up happening is we don’t really have good conversations around this when there’s direct conflict. Instead, we just basically see who’s in power politically and they get their way. 
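A small, hypothetical numerical sketch of the tradeoff O’Neil mentions: with an imperfect score, moving the decision threshold trades false positives against false negatives, so stakeholders who care about different error types cannot both be fully satisfied. The scores and labels below are invented for illustration.

```python
# Hypothetical sketch of the false-positive / false-negative tradeoff.
# Scores and labels are invented; the point is only that with an imperfect
# score, lowering one error rate raises the other.
scores = [0.1, 0.3, 0.4, 0.55, 0.6, 0.8, 0.9]   # model scores
labels = [0,   0,   1,   0,    1,   1,   0  ]   # true outcomes (imperfect model)

def error_counts(threshold):
    preds = [int(s >= threshold) for s in scores]
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))  # flagged but shouldn't be
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))  # missed but should be flagged
    return fp, fn

for t in (0.2, 0.5, 0.85):
    fp, fn = error_counts(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
# A stricter threshold cuts false positives but adds false negatives, and
# vice versa -- neither can be driven to zero here, so someone stays unhappy.
```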

Anyway, I’m just making some obvious points in terms of ethics and philosophy. But the larger point is that algorithms, especially our blind faith in algorithms, is a plastering over of these more complicated conversations, and they show up in exactly the places where we need those conversations the most. We need the nuanced aspects of these conversations desperately; instead, what we get is an algorithm.

Brooke Struck: Right. So the places where those conversations are the most difficult are also the places where those conversations are the most needed.

Cathy O’Neil: Absolutely.

Brooke Struck: So for somebody who has been listening to this and is just like, “Oh my gosh, finally, someone who understands my struggle,” what are the practical steps that they can do to move forward on this? How do we get those conversations underway?

Cathy O’Neil: We didn’t talk about crime risk algorithms being used to decide who gets parole, or how long someone gets sentenced, or whether they get incarcerated pre-trial, but I keep thinking that the constitutional right of due process should be part of the conversation here, like, “How is it possible that you’re using a secret algorithm that no one understands to decide these things? Isn’t that counter to due process?” That should be a fact. I am not a lawyer, though.

More generally, I feel like algorithms are being used in all sorts of places, with or without our knowledge, where we should have – or sometimes do have – rights, and we sometimes don’t really make use of those rights, simply because the algorithms are secret, or because we were told, “It’s math, you wouldn’t understand it,” or because we just don’t know how to organize ourselves. And those are real obstacles, by the way.

I think the easiest one to overcome is, “It’s math, so you wouldn’t understand it.” You can, after thinking it through, be like, “Wait a second, that’s some bullshit, and you can’t use the fact that I don’t have a math PhD against me so that I lose my job without any due process.” The harder stuff, though, is how to organize around that. I will say that the Chicago teacher strike, a few years ago, was in no small part caused by a fight over a teacher value-added model algorithm, which they wanted to use to fire teachers. So it’s not like it never happens, and I’m always happy to see that happening, and I’m always happy to help out people who want to push back against an algorithm that was treating them badly. But I will say that the way it’s set up, this mechanism of using algorithms secretly, especially for a job application – how will people even know it happened to them if they just simply don’t get an interview, and how will they organize with other people who also got unfairly thrown out of a process? It’s really hard to imagine. It’s really, really hard to imagine.

That’s why when I talk to policymakers, I often suggest that they make the people who build the algorithms more responsible, that they bear more of a burden of proof, along the lines of something like an FDA-type review process. We are all too familiar with the safe-and-effective process that drug manufacturers have to go through in order to get a drug approved by the FDA. I feel like a similar type of process should be used for high-stakes algorithms. Not all algorithms – a lot of algorithms are dumb and nobody cares – but high-stakes algorithms, where people’s livelihoods, liberty, or financial futures are on the line, those should be going through a similar process, where there’s evidence with clear definitions, maybe even publicly available definitions, of fairness, of safety and of effectiveness.

By the way, just saying the word effectiveness in the context of an algorithm, what I’m really saying there, Brooke, is I’m saying you have to define what success means, and you have to give evidence that it is successful. And what I like about that idea that the company that’s using an algorithm has to define success clearly and give evidence that it is successful, that means that outside stakeholders can scrutinize that definition of success and say, “Hey, that’s not the definition of success I want to see, that’s the definition of success that works the best for that company, but it’s not the definition of success that I’m agreeing to,” and that would be transparency that I think we could use a lot.
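As a rough illustration of what “evidence against a stated definition of success” could look like in practice, here is a hypothetical audit-style check. The records, group names, and the particular definition of success are invented for illustration; this is not ORCAA’s actual methodology.

```python
# Hypothetical audit-style check, not ORCAA's actual methodology.
# The stated definition of success assumed here: "qualified applicants are
# recommended for interview at similar rates across groups."
from collections import defaultdict

decisions = [  # (group, qualified, recommended) -- invented records
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, True), ("group_b", True, False), ("group_b", True, False),
]

rates = defaultdict(lambda: [0, 0])  # group -> [recommended, qualified]
for group, qualified, recommended in decisions:
    if qualified:
        rates[group][1] += 1
        rates[group][0] += int(recommended)

for group, (rec, qual) in rates.items():
    print(f"{group}: {rec}/{qual} qualified applicants recommended")
# An outside reviewer can compare these rates against the published definition
# of success and fairness instead of taking "the algorithm works" on faith.
```

The design point is simply that once the definition of success is stated publicly, outsiders can measure outcomes against it, which is the transparency O’Neil argues for.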

Brooke Struck: It’s interesting in the context of a shift from shareholder to stakeholder capitalism. Whether you believe that that shift is actually happening in a full-blooded way is maybe a different matter, but there’s this idea that the objectives of an algorithm in a stakeholder model should be shared value creation, with everybody in the ecosystem getting good things out of the interaction. Whereas I think one of the things about algorithmic opacity is that it makes it really easy for the people who have the most control over the algorithm to load the deck in their favor and ensure that the value that’s being created disproportionately flows towards them rather than others in the ecosystem.

Cathy O’Neil: Well, exactly. I’m a one-trick pony in terms of what I’m harping on right now, but what I’m harping on right now is this idea that the system of getting a job, for the millions of Americans who were out of work after COVID hit, is all happening online. They’re all going to these job platforms where they’re looking for jobs. And those platforms – I happen to know – get paid when someone gets a job, when they can make a connection that leads to an actual hire. If you think through what that means, it means that those platforms have every incentive in the world to show you jobs that you’re easily qualified for and can easily get, thus assuring them of their kickback, but not the jobs that are the best for you, not the job that you really want – the dream job that would be a stretch, that you’d be lucky to get.

No, that’s not what they’re going to be incentivized to show you. They’re going to show you the stuff that they know you’re qualified for, which is to say, there’s no reason for anybody to think that they’re getting the best possible options. And by the way, they won’t know, they’re not getting the things they can’t see, so that’s another invisible failure problem. But it’s going to have such a large effect in the near future. And I will say that people who are in protected classes should be particularly wary of this system where people just get paid when someone gets a job because the historical biases will all be in place and in action. So the point is, algorithms are set up to align with the interests of the people who own them more than anyone else, and we should always keep that in mind.

Brooke Struck: So maybe some practical takeaways then – not necessarily about getting the conversation started, but one step further back from that: identifying the areas where we should be most wary of how algorithms might be getting built and used. Some of those indicators would be: is there historical unfairness in the system, such that we know it’s going to find its way into the data, and therefore into the algorithm, and therefore into the predictions and decisions? The example of policing is a good one that you’ve talked about at length. The example of hiring is another one that you’ve raised. Those are really good marquee examples there.

On another track: where is there likely to be a misalignment between value creation for the people who are building the algorithm versus value creation for those who are using it or subject to it? Those, again, are likely to be areas where – given the opacity of an algorithm without any conversation, or without a sufficiently open and participatory conversation around it – we should expect that the algorithm is probably going to lean into creating disproportionate value for some actors relative to others, perhaps even at the expense of others. Are there other markers that we should be looking for in terms of where the most worrying algorithms are, and therefore where conversations are most pressingly needed?

Cathy O’Neil: The answer is: every place where there has been a messy bureaucratic decision that people don’t want to take accountability for is now an algorithm, so they’re everywhere. It’s hard for me to say where to look.

Brooke Struck: That’s a nice third one there, that anywhere where someone might conceivably be looking to avoid accountability, expect an algorithm where the secret sauce might be pretty messy.

Cathy O’Neil: Yeah, exactly.

Brooke Struck: Okay. That’s really helpful. Cathy, this conversation has been great. Thank you so much for helping us to unpack this really important topic, and to come to grips with some of the hard truths that sometimes it’s just nicer not to face and not to believe are there. But for the future of our societies, it’s really important that these conversations happen, so thanks for helping us walk down that road.

Cathy O’Neil: Thanks for having me, Brooke, it was my pleasure.

Brooke Struck: And we hope to see you soon.

We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.

About the Guest

Cathy O’Neil

Cathy O’Neil earned a Ph.D. in math from Harvard, was a postdoc at the MIT math department, and a professor at Barnard College, where she published a number of research papers in arithmetic algebraic geometry. She then switched over to the private sector, working as a quant for the hedge fund D.E. Shaw in the middle of the credit crisis, and then for RiskMetrics, a risk software company that assesses risk for the holdings of hedge funds and banks. She left finance in 2011 and started working as a data scientist in the New York start-up scene, building models that predicted people’s purchases and clicks. She wrote Doing Data Science in 2013 and launched the Lede Program in Data Journalism at Columbia in 2014. She is a regular contributor to Bloomberg View and wrote the book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. She recently founded ORCAA, an algorithmic auditing company.

About the Interviewer

Dr. Brooke Struck

Dr. Brooke Struck is the Research Director at The Decision Lab. He is an internationally recognized voice in applied behavioural science, representing TDL’s work in outlets such as Forbes, Vox, Huffington Post and Bloomberg, as well as Canadian venues such as the Globe & Mail, CBC and Global Media. Dr. Struck hosts TDL’s podcast “The Decision Corner” and speaks regularly to practicing professionals in industries from finance to health & wellbeing to tech & AI.
