Artificial Intelligence for Social Betterment: Bob Suh

Podcast · February 12, 2021

I believe we’re social animals, and with some exceptions, we tend to govern ourselves with mores and behaviors that help the entire group. And, I think that, again, this comes back to AI can help that, or it can hurt that. It could exploit the arbitrage in information or reality, or it can guide behavioral patterns more toward the common good, if you will.


Intro

From the proliferation of social comparisons to growing political extremism, the negative effects that technology has on our societal fabric are evident. But can we create a future that looks different? One in which technology has our backs, and our best interests at heart?

On this episode of The Decision Corner, our host Brooke Struck is joined by Bob Suh, founder and CEO of OnCorps. OnCorps is an American company dedicated to elevating workplace performance outcomes through the use of artificial intelligence. By applying predictive algorithms to a variety of decisions and tasks, OnCorps reduces work, errors, and risk within the financial services industry. The organization provides cutting-edge solutions to an array of companies, with advisors from Yale, Harvard, and Oxford. Their innovative algorithms earned them the 2019 NOVA Award from NICSA and the 2019 Fintech Breakthrough Award for best banking infrastructure software.

Bob Suh has also written for the Harvard Business Review about how efficient use of AI can lead to better decision making and forecasting. Find his work here.

In this episode, Brooke and Bob discuss:

  • The foundational behavioral science that applies to AI
  • How the emotional intelligence of humans can be harnessed to benefit technology
  • How data analytics can drive behavior that is inconsistent with truth and authenticity, and contribute to a saturation of advertising
  • How we can mitigate some of these problems to use technology for positive choice architecture
  • The monetization of current and future social platforms, and how it is changing
  • The future of social and technological innovation: how technology is moving toward positive social outcomes

The conversation continues

TDL is a socially conscious consulting firm. Our mission is to translate insights from behavioral research into practical, scalable solutions—ones that create better outcomes for everyone.


Key Quotes

Intuition over probability in the halls of power

“Particularly high-end decision makers in very high paying, white-collar professions, because they’re essentially paid decision makers, have essentially a bias toward over-crediting the power of their intuitive capabilities and under-crediting the laws of probability, and, frankly, the mundanity of the kinds of decisions they get paid well for and the predictability of those decisions.”

AI isn’t the problem – it’s us

“AI just bumps up whatever we bump up. So, it’s just amplifying whatever we humans do on social media. The problem is that humans are much more likely to share fake news. The AIs, bots, just reshare stuff that we share a lot. Not to put all the onus on our shoulders. Certainly, we are part of the problem. The fact that we overshare fake news is an issue. It’s especially an issue in combination with these AIs that just bump up the most shared stuff.”

Attention is money

“It’s not just about user acquisition and retention. It’s also about monetization. The current mega platforms are effective because they can convert user engagement into assets that someone is willing to pay for. Usually, that means they sell our data to someone who wants to sell us something, and sometimes the something they want to sell us is a political ideology.”

Alternative models for the information supply

“I think that the challenge is that the Facebook delivery mechanism for advertising is so much more efficient than magazines or newspapers. But I still believe a subscriber-based model can benefit producers of goods and services who essentially want to promote something positive in somebody’s life. It could be making them feel better about themselves or getting back to a healthy diet, or whatever it is.”

Reflective technology 

“I’d love to see an app that just says, ‘Hey, you’re thinking poorly about what you just did. Guess what? It’s totally normal.’ I think that the media in general, and AI-based social platforms in particular, have a way of magnifying the right tail of things…. I’m just saying that we definitely have a medium where the right tail is just hyper magnified, and it makes people feel badly. It makes them, I think, worry about things they shouldn’t worry about. It makes them feel inadequate.”

Quality over quantity

“Ferrari and Porsche are two companies that decided not to sell the most cars, but to sell the best cars, in their minds. It’s always, I think, rare for someone to take a different path on an objective function. But, maybe that’s the way it should be. Maybe that’s why it becomes high quality.”

Disillusionment is a catalyst for change

“There are a lot of people right now who are feeling very disillusioned with the kinds of systems that we find ourselves in presently, and…it’ll probably get further before it starts to turn around. That disillusionment might be a causally necessary ingredient for people to really take a hard look at the model that we’re operating in right now and say, ‘That’s just not the type of world that I want to be building. And so, I’m going to go and make it my professional mission for the next 10, 20, 30 years to put something out there that offers us an option of doing differently.’”

Transcript

Artificial Intelligence and Behavioral Science

Brooke: Hello, everyone. Welcome to the podcast of The Decision Lab, a socially conscious applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, research director at TDL, and I’ll be your host for the discussion.

My guest today is Bob Suh, founder and CEO of OnCorps. In today’s episode, we’ll be talking about AI, empathy, and whether the machines can save us from ourselves. Bob, thanks for joining us.

Bob: Thanks, Brooke. Nice to be here.

Brooke: So, you’ve been writing about bias and human behavior and how these biases lead to some bad outcomes for us. Let’s wade into the water slowly. What are some of the outcomes that you’re worried about, and how are biases contributing to these?

Bob: I think the biggest concern I have is that particularly high-end decision makers in very high paying, white-collar professions, and, in fact, leaders themselves – because they’re essentially paid decision makers – have essentially a bias toward over-crediting the power of their intuitive capabilities and under-crediting the laws of probability, and, frankly, the mundanity of the kinds of decisions they get paid well for and the predictability of those decisions.

There is a documentary – I forget the name of it – where they’re trying to show the law of large numbers, and the journalist begins shooting baskets with a collegiate varsity player. In the first three shots, the journalist is up two to three. But, obviously, to prove the law of large numbers, after 20 or 30 shots, it’s clear that the varsity college player is superior. And so, what that tells you, to use an analogy with AI, is that humans can over-credit chance in early, small-sample events that haven’t yet reached the law of large numbers, and underestimate the power of computing, computation, applying probability toward every decision, and, actually, the need to fail to make something work well or to train something well.

So, to me, the biggest issue is: Will leaders and humans give AI a chance, allow the law of large numbers to prevail, allow AI to be trained properly? And then, will they also learn to adopt and be self-aware of the decisions that are frankly mundane and should be done by the book statistically versus the ones that really require intuition, multifactor sensing, whatever a human is better at than a computer could be?
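
To make the law-of-large-numbers point concrete, here is a minimal simulation sketch. The shooting percentages, shot counts, and seed are illustrative assumptions, not figures from the documentary Bob describes: over a handful of shots, chance can put the weaker shooter ahead, but over many shots the stronger shooter’s edge dominates.

```python
import random

def shooting_contest(p_journalist=0.30, p_varsity=0.55, n_shots=1000, seed=7):
    """Simulate two shooters with fixed make probabilities and record the
    running score after a few shots and after many shots."""
    rng = random.Random(seed)
    journalist = varsity = 0
    snapshots = []
    for shot in range(1, n_shots + 1):
        journalist += rng.random() < p_journalist
        varsity += rng.random() < p_varsity
        if shot in (3, 30, n_shots):
            snapshots.append((shot, journalist, varsity))
    return snapshots

# Early scores are noisy; over many shots the better shooter pulls away.
for shot, j, v in shooting_contest():
    print(f"after {shot:4d} shots: journalist {j}, varsity {v}")
```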

Brooke: Yeah, sense-making, meaning-making, those kinds of things. You note that AI can make a situation better or worse. If I understand it correctly, what you’re basically saying is that AI is a powerful amplifier. It’s neither inherently positive nor negative. Rather, the AI can amplify either the good bits or the bad bits in the signal. So, if we take fake news as an example, there’s been a lot of discussion about AI algorithms and social media potentially exacerbating the fake news problem. Some researchers did a study that I found was really interesting, looking at whether AI was particularly prone to bump up fake news more than real news.

The finding they came to is that AI just bumps up whatever we bump up. So, it’s just amplifying whatever we humans do on social media. The problem is that humans are much more likely to share fake news. The AIs, bots, just reshare stuff that we share a lot. Not to put all the onus on our shoulders. Certainly, we are part of the problem. The fact that we overshare fake news is an issue. It’s especially an issue in combination with these AIs that just bump up the most shared stuff. How do you see this in the context that you’re talking about here, AI as an amplifier, potentially just bumping up whatever it is that humans are doing underneath the surface?

Bob: Right. Well, when you learn basic statistics, you’re taught that there’s a Y variable and an X variable. The Y variable is the outcome. It’s the thing that you want to change. The X variable is the independent variable. You spend most of your time as a student trying to change the X variable to influence the Y variable. You’re trying to find causality or correlation. What I’ve learned over the years is that, too often, people don’t consider changing the Y variable, and changing the Y variable is probably the highest leverage action you can take to solve a problem, not only with AI but in research as well. In this case, what you’re describing is a Y variable for popularity, essentially.

I say popularity for a variety of reasons. One is that it’s almost like a two-part incentive, where it’s not just that the social networking firms need to be regulated or are evil or whatever. It’s that, to your point, the people are essentially trying to gain more views and more hits. They’re changing out the X variable to figure out how to gain in popularity, but they’re not changing the Y variable. They’re just trying to get more views. And so, you’ve got people who are intentionally promoting conspiracies and other crazy news to get more views. They notice they get more views, and [I saw a documentary where] Rush Limbaugh, in his early days, was actually a liberal. But, he noticed that when he went hard right, he got more viewership. And so, what’s scary about that, to me, is that by not changing the Y variable away from the popularity reward function, if you will, we’re creating leaders who really aren’t leaders. They’re just, in a sense, followers. They’re just subscribing to a way of leading that gains them more views or more hits.

I think that’s really dangerous, and I think that what we really need to do is look at, what is the Y variable we really want, not only for ourselves but for our political environment, for our business environment? You’ll often find that, for example, a company whose sole objective, whose sole Y variable is profit could do very destructive things. Whereas, if their Y variable, for example, were product quality, that would probably lead to sufficient profits, but they could probably live with themselves at night as well. I think that there is definitely a problem with social networking and with the first instantiations of AI where the Y variable is almost fixed on this “views, hits, advertising” syndrome, and it’s definitely not healthy. Clearly, it’s not healthy, and I’m sure it will be, at some point, regulated.
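
As a toy illustration of what changing the Y variable means for a ranking system, the sketch below scores the same items under two different objectives: predicted shares alone versus a blend that also credits accuracy. The item data, weights, and field names are hypothetical illustrations, not anything from OnCorps or a real platform; the point is simply that the content pool doesn’t change, only the objective does, and with it what gets amplified.

```python
# Hypothetical items and scores, invented for illustration only.
items = [
    {"title": "Outrage-bait conspiracy", "predicted_shares": 0.92, "accuracy_rating": 0.10},
    {"title": "Careful explainer",       "predicted_shares": 0.35, "accuracy_rating": 0.95},
    {"title": "Cute pet video",          "predicted_shares": 0.70, "accuracy_rating": 0.60},
]

def rank(items, objective):
    """Order items by whatever outcome (Y variable) we choose to optimize."""
    return sorted(items, key=objective, reverse=True)

# Y variable #1: raw popularity (predicted shares/views).
by_popularity = rank(items, lambda item: item["predicted_shares"])

# Y variable #2: a blend that also credits accuracy or quality.
by_quality = rank(items, lambda item: 0.3 * item["predicted_shares"] + 0.7 * item["accuracy_rating"])

print([item["title"] for item in by_popularity])  # the conspiracy post ranks first
print([item["title"] for item in by_quality])     # the careful explainer ranks first
```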

Technology and the Emotional Quotient (EQ)

Brooke: Right. I think it’s a nice twist to bring us around to this point that you discussed in your recent HBR article, talking about injecting a little bit of EQ into the mix when it comes to AI. I want to give you some space to talk about that. I mean, I want to come back to this idea of adjusting the Y variable. I think that there’s something that you’ve picked up there on leadership that I want to come back to as well. But, let’s just start with EQ.

Bob: Sure. Well, my premise is that you have a relationship. Particularly in the world of AI, you form a relationship with an app in some of the same ways that you form a relationship with a person. A college student goes on a date. That student tells their mother and their roommate two entirely different stories about the date. So, you select what you share based on the audience.

And, in turn, if you take this personality analogy with apps to the extreme, you can think of corporate systems as nags. They’re almost always nagging you. You’ve done something wrong. You can’t go. You can’t log in to this thing. You’re over your limit on number of characters or whatever it is. And, you can think of social networking systems as sycophantic. In other words, all they’re trying to do is make you feel good irrespective of the consequences.

And so, when you think about apps and people as being in a relationship, which I truly believe, then you begin to see where there are relationships that are sycophantic, that are nag defined. But, there are also relationships that have extraordinarily high EQ, where the actions and counteractions of the two parties are empathetic. They’re self-aware. They are recognizing the fact that they may be nagging too much or too sycophantic. They understand that they need to roll back something because it’s a little too punitive to the discussion.

And so, one of the examples I gave is you’re in an early relationship with someone. You’re living together. One person nags you and says, “Hey, you haven’t taken the trash out.” Not unlike a system that would warn you or alert you. But, the other system might say or the other person in your apartment might say, “Hey, I noticed you took out the trash. Thank you.” Well, that’s a high EQ interaction because you’re essentially being nice, and you’re rewarding somebody. But, you’re probably also affecting behavioral change much more than you would if you were just nagging.

And so, I believe that’s actually ground that an app can take. I don’t think those are peculiarly human interactions. I just think they’re considerate. They require more sophisticated branch logic and that sort of thing.

Brooke: You’re starting to get into the behavioral realm and the empathetic realm here. What role do behavioral concepts and behavioral theory have to play in enriching these AI ecosystems to, on the one hand, promote better behavior, better outcomes, but, on the other hand, also to promote healthier mindsets among users?

Bob: I think they’re huge. I mean, when you think about the research that Thaler and Kahneman have done, you think about [the fact] that they’re revealing patterns in human nature that, if changed, could really open up very powerful performance changes, if you will, especially in business, or maybe even in personal life. I think that there is no question in my mind that there is a future in algorithms that not only get you to try to click on something, but temper you from clicking on something or temper you from sending something.

I think there’s a whole world of algorithms out there that are likely game theoretic, that consider the consequences of action A or action B, and then consider what type of communication or message to send that would most probabilistically get you to act in an optimal way. I’m very excited about it.

We are doing research with Nicholas Christakis’s Human Nature Lab on very simple behavioral patterns. We call it eye-glaze syndrome. But, essentially, in business, if you’re reviewing something that occurs very rarely, all you’re seeing are false positives. [It’s like] the TSA syndrome: you tend to eye-glaze. In other words, we can prove statistically that you are not really capable of finding an error, for example, in a huge financial transaction.

We have found that we can actually identify when you’re “eye glazing,” and, essentially, it’s time based. Time actually is a hugely important variable in behavioral AI. If you’re too fast or you’re too slow, it’s a very powerful variable.
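
Here is a minimal sketch of the kind of time-based check Bob is gesturing at: flag reviews whose duration falls far outside a reviewer’s own historical range. The thresholds, data, and function names are hypothetical illustrations, not OnCorps’s actual model.

```python
from statistics import mean, stdev

def flag_eye_glaze(review_seconds, history_seconds, z_cutoff=2.0, floor_seconds=3.0):
    """Flag a review whose duration is suspiciously fast (or slow) relative to
    the reviewer's own history. All thresholds are illustrative assumptions."""
    mu, sigma = mean(history_seconds), stdev(history_seconds)
    z = (review_seconds - mu) / sigma if sigma else 0.0
    return {
        "z_score": round(z, 2),
        "too_fast": review_seconds < floor_seconds or z < -z_cutoff,  # likely rubber-stamping
        "too_slow": z > z_cutoff,                                     # possibly stuck or confused
    }

history = [42, 38, 55, 47, 51, 44, 39, 60, 48, 52]   # seconds spent on past reviews
print(flag_eye_glaze(2.5, history))    # far below the usual pace: flagged as too fast
print(flag_eye_glaze(49.0, history))   # within the normal band: not flagged
```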

Brooke: So, the TSA syndrome and all these false positives, for anyone who’s not familiar with that background, if I’m reading you right, what you’re talking about is, for instance, the airport security guard who is watching everyone pass their luggage through, on heightened alert, suspicious of everyone, trying to identify the person who’s trying to smuggle through a weapon or whatever it is that they’re not supposed to be bringing through. And, in that kind of situation, where they’re on heightened alert and suspicious of every single person, but we know that the vast majority of people are not doing anything wrong, that’s where you get this eye-glaze syndrome, where you hyper-focus on too many small details. Is that it?

Bob: Exactly. You see a pattern of false positives. Well, I’ll give you a perfect example: Hotel alarms. Nobody wakes up to a hotel alarm believing they’re in imminent danger. They usually wake up and the first thing on their minds is when will they shut it off so I can go back to sleep? They never think, oh, maybe I should check whether the door is warm, or maybe I could check to see if there’s smoke or something like that. And, that’s essentially this syndrome of numbing yourself to a false positive.

Something like 99% of home alarms are false positives, so it’s essentially rendered the technology obsolete in a way. Probably the most important thing is to have the alarm sign as a deterrent, not the alarm itself. And so, yes, that’s exactly it. It is this way for humans to shut off alerts or anything like that, and it’s been proven that actually happens quite frequently and quite quickly.
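
Bob’s point about false positives is a base-rate effect, and a quick Bayes’ rule calculation shows why even a sensitive alarm is almost always wrong when the event it detects is rare. The numbers below are illustrative assumptions, not statistics quoted in the episode.

```python
def alarm_precision(base_rate, sensitivity, false_positive_rate):
    """Probability an alert signals a real emergency, given that it fired (Bayes' rule)."""
    true_alerts = base_rate * sensitivity
    false_alerts = (1 - base_rate) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# Illustrative numbers: a real fire on 1 in 10,000 nights; the alarm catches 99% of
# real fires but also goes off spuriously on 1% of ordinary nights.
p = alarm_precision(base_rate=1e-4, sensitivity=0.99, false_positive_rate=0.01)
print(f"Chance an alarm means a real fire: {p:.2%}")  # roughly 1%, so people learn to tune it out
```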

The downside of data

Brooke: Okay. I think you’ve laid out the terrain quite nicely here around how AI can help us to overcome some of these challenges and that behavioral science has its role to play in informing how it is that we design algorithms in ways that really will promote more compliant behavior or more effective behavior, as well as healthier mindsets.

Now I want to come back to this issue of changing the Y variable because I think that’s a really interesting concept here. Changing the Y variable is what leadership and strategy are all about. Strategy is about deciding on the thing that you’re going to lean into and identifying what it is that you’re going to optimize.

So, in my initial setup, when I was introducing this and giving you some space to outline your ideas, I characterized AI as a neutral amplifier. But, maybe that’s wrong. Maybe we should never think about AI as a neutral amplifier. It’s always neutral in the sense that it’s always optimizing. But, it’s never neutral in the sense that we always have to choose what it is that we’re optimizing. There’s always this optimized for what? And, that’s where leadership and strategy come in.

Bob: Exactly.

Brooke: The Rush Limbaugh example, which I’d never heard, by the way, before you mentioned it, I think, is fascinating. It made me think that in the context of a business where you might hear someone just say, “Our strategy is to sell whatever it is that people buy,” that’s an abdication of leadership. It’s an abdication of strategy. And, that seems to be exactly what this kind of neutral amplifier theory is all about. It’s just saying, “We minimize the role that we have to play deciding what the optimization is about.”

Bob: Exactly. And, the scary thing is you could ask, “Well, why wouldn’t somebody change the Y variable and do something with more integrity?” Because it works. I mean, it pays off to not change the Y variable. It pays off to exploit systems as they’re set up, and it’s unfortunate. I would argue it’s always been that way. People have always exploited the systems in which they operate. It definitely is up to larger communities to, in the long run, regulate bad behavior.

I think that’s something that is worth exploring in AI. Can an AI, just like the way a village might try to discourage harmful behavior to the village, can a larger community – not a small group that you enroll in your Facebook page, but a large community – somehow regulate the definition of what a Y variable should be?

Brooke: Yeah. You mentioned that it pays off and that there’s perhaps no consequence. I’ve been reading some history lately, and the first thing that came to my mind is, well, I wonder what the royal family of France thinks about that. In the [years leading up to the Revolution], they really were exploiting the system to the hilt and accumulating massive wealth within the royal family and this extended aristocracy. And, of course, the system as a whole came back around to correct that, with a guillotine.

Bob: Exactly. Particularly when there were inequities. And, I know that’s what troubles people now, is that the inequities are so large that possibly there will be a backlash. And, I guess history is right in that regard. It usually comes full circle. Sometimes it just takes a long time, though.

Brooke: Yeah. It would certainly help us to make sense of some of the stuff that we read about in the news. For instance, earlier this week and even at the end of last week, one of the trending news items has been GameStop and all of these leagues and leagues of investors through Reddit coordinating big buys of this GameStop stock as well as some others. Part of that seems to be economically motivated because as everyone piles on, of course, the stock starts to heat up. But, there’s also a sense in which a lot of the discourse around that is vindictive. It’s not about people making economic choices about how they want to make money. It’s about people making expressive choices about telling a whole bunch of Wall Street insiders what they think of them and how they behave.

Bob: Right. And, I think in a similar vein some people see Bitcoin in the same way: they see it as a liberalization of something that’s controlled by very powerful entities who get to make up the rules, frankly. It’s very interesting. It’s a very interesting discussion.

Using technology for positive outcomes

Brooke: Let’s move from the descriptions that you’ve given to something more of a normative model. Who is it that gets to set the terms of good behavior? Before I launch you off on that, I wanted to throw a shout-out to a colleague of mine, Adam Briggle, who wrote this paper that I love called “The Great Impacts Houdini,” just talking about the whole infrastructure and the research and innovation landscape that has been built up around research needing to have impact. The critique that he levels against that whole infrastructure is that nowhere does anyone talk about what the impacts are supposed to be. What differentiates positive impact from negative impact? That whole content of the edifice is missing. It really is, as we talked about before, simply just optimization. But, there’s no discussion about what to optimize.

But, the point that he makes in his article is societally, we’re really not in a good place to have conversations about the ideal society that we want to build. We’re having trouble just having civil conversations between people who disagree about who they think should be holding political office. So, this kind of wider project of what is the society we want to have for ourselves, for our descendants, 50 years, 100 years, 200 years down the road? We don’t seem like we’re in a very advantageous position to have those conversations. But, ultimately, someone does need to make that call. When we build a system, like an AI system, who gets to determine what constitutes a healthy behavior, and who doesn’t?

Bob: Well, I think there are two responses to that. The first is that going back to the Rush Limbaugh example, it is true, but you could also likely come up with people on the other end of the political spectrum who became liberal because they thought it would gain them popularity. And, in that lies, I think, part of the answer to your question, which is do we want a system or an AI or a series of apps that promote the right thing to say, or that true-up what is said with what is actually accomplished?

I think that’s where there is hope for something like an AI. Obviously, as you said, an AI is only doing what you architect it to do. But, an AI is very objective and very mathematical, and I believe more people should be thinking about Y variables that are real outcomes.

That would be really beneficial. Just for simplicity, to use politics as the example, one could argue that extreme views on either end, on the left or the right, are well-intended. Somebody’s trying to fix something, and it’s an act of positive intention. But then, if you ask the algorithm, “Well, change the Y variable,” and try to find where it actually makes a difference, where is it actually affecting people’s lives or not affecting people’s lives? That might be one thing I would say we should be looking at.

In the field of business, in the field of investing, it’s black and white. You either make money or you don’t. You can’t defend a trade that loses money for too long, which is essentially why there have been tremendous outflows over the last few years from active funds to indexed funds and algorithmic funds. Because that outcome is the only thing that matters, it’s not subjective at all. If you were to ask an AI, or ask a system that includes an AI, to true-up to the actual results, I think that’s what’s been missing.

Brooke: Yeah. As you were speaking, one of the things that came to mind is that there aren’t that many players out there in the field. Perhaps one of the ways to attack this problem a little bit is innovation, and I have to laugh at myself saying that because so many times I’ve rolled my eyes at people who said, “Well, the solution to any problem is just more innovation.” But, part of the solution might be to say, “It’s not up to us to decide what the right outcomes are.” It’s up to us to put lots of systems and lots of options out there for people to choose from, and if we think about this EQ idea that you’ve been outlining for us today, people will gravitate towards systems that promote the kinds of interactions that they want to have and leave them feeling the way that they want to feel.

Bob: I believe that. I believe we’re social animals, and with some exceptions, we tend to govern ourselves with mores and behaviors that help the entire group. And, I think that, again, this comes back to AI can help that, or it can hurt that. It could exploit the arbitrage in information or reality, or it can guide behavioral patterns more toward the common good, if you will. Definitely.

Brooke: There’s definitely a behavioral role to be played there as well. Part of the decision to continue using a platform is not just about the way that it leaves me feeling. But, it’s also going to be deeply informed by my perception of how others are feeling as well. If I think about, for instance, I go to some party, and I look around, and it looks like everyone around me is having fun, but I’m not having a lot of fun, I might stick around and continue to put on this piece of theater, that [looks] like, “Yeah, I’m having fun, too.” When, in fact, if there’s a moment to pull a couple people aside and say, “Listen, are you actually having fun at this party, or are we all not having a good time, and we should just bail?” That kind of conversation can promote better outcomes.

So, these are the kinds of things where, as you mentioned earlier, the system design, integrating some of these behavioral features, can be really powerful. Because as long as that illusion is sustained, that other people are having fun, that this platform is leaving other people with the kind of feeling that they want to feel in terms of connection to other users, that kind of thing can string me along as an individual user for quite a while, potentially indefinitely.

Bob: No question. I’ve never had that problem at parties, by the way, mostly because I never go to them.

Brooke: As a very brief aside, during this entire pandemic, I feel like so much energy is building up that once controls are very much released, there’s just going to be a series of the most incredible, outrageous parties. When I think back about the Spanish flu in the late 1910s and then I watch a movie like The Great Gatsby, I’m like, “Oh, that’s why they did all those things.”

Bob: Right. Pent up demand.

Social context and monetization

Brooke: Yeah, that’s right. So, we’ve been talking a lot about how we might architect a perfect system that would overcome some of these challenges, and I think the idea is that we actually need lots of systems out there and a more portfolio approach to this: we experiment with lots of different things, users have opportunities to vote with their data and with their participation in a platform, and the ones that promote the kinds of things that users are after will be the platforms that ultimately carry the day, if there are options and clarity around those platforms.

But, part of the discussion that’s so challenging around these perfect systems is that we never get to start from zero. We’re never operating in a historical vacuum. There are platforms already out there that have certain dynamics and features to them, which also creates certain expectations among user bases about how platforms ought to operate and how it ought to feel to use a platform and this kind of thing. Equally well, you mentioned earlier that you can’t continue throwing money indefinitely into a business venture that ultimately is not showing any return. So, there are economic incentives out there in the ecosystem as well.

It’s not just about user acquisition and retention. It’s also about monetization. The current mega platforms are effective because they can convert user engagement into assets that someone is willing to pay for. Usually, that means they sell our data to someone who wants to sell us something, and sometimes the something they want to sell us is a political ideology. What kinds of incentive environments do we need to support a platform ecosystem and an AI ecosystem that goes after the kinds of healthy behaviors that you describe? To put the question in a bit of a pithy way, what kind of company would want to advertise on my platform if my platform is helping people to control their self-destructive, impulsive spending habits?

Bob: I think that the perfect storm of the advertising industry realizing that its physical ad model was grossly inefficient, combined with social networks coming on the scene – I think history’s going to look back on that as almost like a powder keg, and fairly negatively. I think that what was missed is the fact that there were actually some great businesses that followed a subscription model in which people actually paid, were willing to pay, not a lot, but were willing to pay for the right to use their service, whether it’s Spotify or Apple. Amazon Prime is even an example of that. That’s a subscription service. It’s a subscription to buy more things. But, you get benefits for it.

I could see a scenario where you could reconstruct a way of collaborating and sharing personally and professionally that is worth paying for. And, maybe to make it worth paying for, you are able to control what you see. You’re able to regulate things. It learns about your things, but in a positive way. I think that the challenge is that the Facebook delivery mechanism for advertising is so much more efficient than magazines or newspapers. But I still believe a subscriber-based model can benefit producers of goods and services who essentially want to promote something positive in somebody’s life. It could be making them feel better about themselves or getting back to a healthy diet, or whatever it is.

And, I believe that a self-selected, subscriber-based model would be as potent as a Facebook delivery mechanism. I just don’t think anybody’s ever tried to. I mean, maybe somebody has, but I think that will be probably the next generation. I think it would be very positive, I mean, unless you’re promoting something that’s blatantly destructive to people. But, I think most people that spend their time building products and services believe that they are improving the lives of organizations, teams, and people.

Brooke: You mentioned how platforms get financed, and I was thinking about the history, once again, of that, and thinking, well, Facebook, Twitter, these big social media platforms, they’re not the first platforms that have ever come into existence to promote communication between individuals. How did the post office get off the ground in the first place? It had to have financing behind it as well, and it also really revolutionized the way that people communicate with each other.

We see all of the same kinds of characteristics, where advertising – advertising for products and services, as well as advertising for politics and policies – also grafted itself on there. The US Postal Service, when it initially reached out across the country, was a novel and extremely powerful delivery mechanism for political essays and these kinds of things, which parties were putting out to try to influence voters. I don’t know specifically when more commercial-type advertising ended up in the mail, but we don’t feel that it’s inappropriate that we have to pay for a stamp to send something. We can stick a no-junk-mail sticker onto our post box, and that doesn’t mean that we are no longer allowed to receive the stuff we do want. There are other platforms out there that have occupied similar functional roles in our society historically, and perhaps there are some lessons to be learned there.

Bob: Definitely.

The future of social innovation

Brooke: In terms of user-paid models, there’s also something to consider, I think, around the win-wins that can be created with these platforms. If we ask ourselves the question, who benefits from a whole platform of users who are happy or less stressed, these kinds of things, it’s not just this utopian ideal that, well, that’s just a perfect society, but no one’s going to pay for it. There are real, concrete, tangible benefits that some people will pay for to achieve that kind of situation. One of the examples that comes to mind is employers.

Employers spend billions and billions of dollars on HR every year to try and make sure that their employees are happy, well-adjusted, resilient individuals, because individuals who have all of those characteristics, or who are described by all those characteristics at a given time, are productive. They do better work, and that creates more value for the companies that they work for. So, that’s perhaps just one example. An employer might be actually the customer, even if they’re not the end user, for a platform that promotes healthy collaboration, positive mindsets, these kinds of things, as part of the systemic design of the platform.

In the United States, it’s maybe a little bit different because you don’t have universal healthcare funded by the state, but in many countries around the world where that is the case, governments also have a huge incentive for public health burdens, for mental health burdens, to be reduced. So, anywhere where a platform can be designed to promote the physical and mental wellbeing of individuals, you need to be a bit creative. It’s definitely not easy to get those conversations going or even to get into the room. But, there are actors out there with a lot of money to spend who are currently spending it solving problems, addressing effects, whereas if you can say that I can address this upstream for a much lower cost and take care of the causes, there are really, really strong business models that can be built along those lines.

Bob: Oh, I completely agree, and I think when you bring it back to AI and some of the misfires, I’d love to see an app that just says, “Hey, you’re thinking poorly about what you just did. Guess what? It’s totally normal.” I think that the media in general, and AI-based social platforms in particular, have a way of magnifying the right tail of things.

My wife will tell me about some tragic death resulting from COVID and how horrible it is, and I’ll say, “Well, I know there are 25 million cases. If you took 25 million cases of somebody driving to the grocery store, I’m sure I could find some really horrid stories.” I’m not underplaying COVID at all. I’m just saying that we definitely have a medium where the right tail is just hyper magnified, and it makes people feel badly. It makes them, I think, worry about things they shouldn’t worry about. It makes them feel inadequate.

As a leader of a startup over nine years, I try to tell everyone my favorite movie for a startup is Rocky because it’s not winning, it’s just going all 12 rounds that matters. It’s a different objective function or Y variable. Surviving is what really matters in some startups. If you don’t keep getting up, that’s the surest way to fail. But, keep getting up, and the more you keep getting up in a startup from your setbacks, which are inevitable, the more likely your chances are of succeeding.

Brooke: So, for someone who’s interested in AI-powered, empathetic nudging, building these kinds of platforms, where do you begin? What can you start doing on Monday morning to put these ideas into practice?

Bob: I think there is rich literature from the behavioral science and behavioral economics side that you’re obviously well aware of. That’s certainly a starting point. But, then you’ve got some pretty brilliant people that are doing modeling. I was just reading about Professor Sandholm at Carnegie Mellon, whose AI, Libratus, has been beating the world champion poker players. And, there’s something in those models that is game theoretic, interactive, trying to get to a Nash equilibrium, if you will.

There are other people that are studying how learning systems, education systems, can infer from the questions you ask how much you’re learning. I think there are disparate pieces of some very powerful ideas that people could start looking at. Definitely the professors that are using neural nets and game theory to beat professionals in games. I think that’s definitely a starting point. The reason I say that is because it’s usually a two-party model; I think there’s something there where I would begin looking, as well as looking at the foundational work of Kahneman and others.

Brooke: What about more on the business model, leadership, finding the right variable or the right Y variable side? Where do we begin there?

Bob: Philosophically, I would say it’s never going to be the most popular route. Ferrari and Porsche are two companies that decided not to sell the most cars, but to sell the best cars, in their minds. It’s always, I think, rare for someone to take a different path on an objective function. But, maybe that’s the way it should be. Maybe that’s why it becomes high quality.

And so, I would say it’s no different from any other thing you do. You have to decide why you get up in the morning. I think that when you decide that it’s something superficial or something uncontrollable, you’re going to be in for a world of hurt. Whereas, if you decide it’s something intrinsic, like learning, quality, or bettering your part of the world, I think that’s just something that’s intrinsic to some people. I hope more people get in touch with it, but I believe that it’s just one of these things where some people decide to do that, and others don’t.

Brooke: Well, perhaps that’s some hope for us to look forward to in the future. There are a lot of people right now who are feeling very disillusioned with the kinds of systems that we find ourselves in presently, and that kind of disillusionment, disappointing as it is that we have had to let it get this far, it’ll probably get further before it starts to turn around. That disillusionment might be a causally necessary ingredient for people to really take a hard look at the model that we’re operating in right now and say, “That’s just not the type of world that I want to be building. And so, I’m going to go and make it my professional mission for the next 10, 20, 30 years to put something out there that offers us an option of doing differently.”

Bob: I agree, and I know that there are many, many people already thinking about this and working very hard toward a better way of thinking about collaboration. So, I’m optimistic.

Brooke: All right. Well, Bob, thank you very much for your time and your insights today.

Bob: Thank you.

Brooke: We look forward to having you back at some point.

Bob: Okay. A pleasure.

We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.

About the Guest

Bob Suh


Bob is the founder and CEO of OnCorps. He has published behavioral economics articles in Harvard Business Review and the Financial Times. Prior to OnCorps, Bob was the chief technology strategist at Accenture and chief strategy officer for the firm's technology division. He was named to Consulting Magazine's Top 25 Consultants list. Bob was also a group president of Perot Systems, where he helped take the company public. Bob received his master's degree, with a concentration in political economy, from Harvard University, where he was a class advisor and research assistant to the 2005 Nobel Laureate in economics. He received his bachelor's from the University of Southern California.

About the Interviewer


Dr. Brooke Struck

Dr. Brooke Struck is the Research Director at The Decision Lab. He is an internationally recognized voice in applied behavioural science, representing TDL’s work in outlets such as Forbes, Vox, Huffington Post and Bloomberg, as well as Canadian venues such as the Globe & Mail, CBC and Global Media. Dr. Struck hosts TDL’s podcast “The Decision Corner” and speaks regularly to practicing professionals in industries from finance to health & wellbeing to tech & AI.

Listen to next

Podcast

Run for the Cure: Kelley Keehn

On this episode of The Decision Corner, Kelley Keehn joins Brooke to talk about COVID-19 and personal finances in a changing world.

