Mental models for business decisions with Roger Martin

Podcast | May 10, 2022

We live in this uncertain world. You never know what the future's going to hold for sure, so you have to make bets about the future that are based on a model. The real key to being a successful person in life is not how smart you are on day one, it's how much better can I get? How much can I improve myself? You improve yourself through learning and consolidating that learning in order to make, hopefully, a better decision the next time.

Listen to this episode


Intro

In this episode of The Decision Corner, Brooke is joined for the second time by Roger Martin, one of the world’s leading business minds, the former Dean of the Rotman School of Management, and the author of the newly released book A New Way to Think: Your Guide to Superior Management Effectiveness. This time around, the two discuss how mental models guide business decisions, and how we can restructure failing mental models to improve ourselves, our teams, and our organizations.

Topics discussed include: 

  •  When you should give up on your mental models - and when to keep pushing at them
  •  Why writing down your decision-making process is vital - and the dangerous behaviors that occur if you don’t
  •  The simple but powerful “if-then” model
  •  How socializing strategy can help us clear decision-making hurdles
  •  Why you should put the most skeptical person in the room in charge of test design
  •  How to overcome disappointing decisions - and why they are so important to learn from

The conversation continues

TDL is a socially conscious consulting firm. Our mission is to translate insights from behavioral research into practical, scalable solutions—ones that create better outcomes for everyone.

Our services

Sneak Peek

On truth, logic, and getting to what matters: 

“In these processes, it turns out that it is much harder to get agreement on what is true than on what would have to be true, because what would have to be true is logic. What is true is the combination of logic, if-thens, and data. I like to separate those two things out to say, what's the logic structure? Then let's ask how we would apply data to that logic structure, to the extent we can.”

On applying a growth mindset to management:

“What I try to help executives do is create a world around them where it's an upward spiral of learning, rather than a downward spiral of defensiveness.”

On constantly updating models:

“All models are wrong. They're abstractions. You should always be looking for a better model to maintain the growth mindset. I recommend people find the what-would-have-to-be-trues about that model and stick them up on the tack board in front of their desk. Every morning, come to work and ask: are the things that would have to be true still true?”

On giving people with dissenting opinions decision making power:

“People tend to be obstreperous to the extent that they aren't listened to… if you instead say, ‘no, no, no, we're not only listening to you, we are actually putting the keys in your hands,’ then they become, I think, hyper-responsible.”

Transcript

Brooke Struck: Hello everyone and welcome to the podcast of The Decision Lab, a socially conscious applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, Research Director at TDL, and I'll be your host for the discussion. My guest today is Roger Martin, former Dean of the Rotman School of Management at the University of Toronto and author of over a dozen bestselling business books, as well as a longtime strategy advisor to global brands such as Procter & Gamble, Lego, and Ford. Roger has joined me on the show once before to discuss a previous book, Creating Great Choices. But in today's episode, we'll be talking about his latest book, A New Way to Think, in which Roger walks us through some very prominent mental models in the business world, how we generally react when those models cease to serve us well, and some options for replacing them. Roger, thanks for joining us once again.

Roger Martin: It's great to be back, Brooke.

Brooke Struck: Let's dive in. The latest book is about mental models and how they're used by executives. But what is a mental model? Let's just get some alignment on that before we go any further.

Roger Martin: Sure. It's a way of thinking about some aspect of the world. You create a model that helps you make a decision. So you might have a model that says the best thing to do when you walk up to somebody for the first time is extend your hand to shake theirs in greeting. That's a mental model that says, rather than randomly thinking about what one might do when one meets somebody else, you've got this model that says that's probably a good way to start. Basically we have models for everything we do in life, and so they're a huge part of our world.

Brooke Struck: Right. So they're certain representations of stuff that we see out there, and they include a lot of “if-then” rules that help you figure out what to do in various situations.

Roger Martin: Yeah, exactly. Like if meeting a new person, extend a hand to shake. That's an if-then rule. And the model holds that that's a friendly gesture that would be perceived as friendly by somebody else, and I want to start out in a friendly way, so it's got a bunch of if-thens embedded in it.

Brooke Struck: Right. In your consulting work, and this is really the impetus for the book, you've noticed a certain pattern of what happens when mental models break down in the world of work, among executives working with mental models. When we're confronted with the shortcomings of these models, when we extend our hand and someone doesn't respond the way that we expect them to, how do executives usually respond?

Roger Martin: What I found, somewhat surprisingly I guess, is that the usual response is to say “I didn't use the model well enough”. Maybe I wasn't smiling while I extended my hand. Maybe I extended it too quickly and too aggressively. So I should do it in a slightly different way, but the model that this is the right thing to do remains unchanged.

Brooke Struck: Right. Does that lead us anywhere productive or does it get us stuck in ruts?

Roger Martin: It can lead in a productive direction. If indeed you did a crummy job of utilizing the model, you were really curt and abrupt or you had a frown on your face, it may help you say, oh, no, it's important to do it in a friendly way. However, it sometimes simply doubles down on something that isn't working. In some parts of the world, extending your hand to shake is insulting. And so in that part of the world, if you keep saying, well, I've got to do it nicer, I've got to smile, you will just get yourself deeper and deeper into trouble. That's the phenomenon that I see: when something isn't working and a model in use is not producing the results it purports to or is intended to, people just keep trying it more, and that is not good for anybody.

Brooke Struck: Right. And so that's really the impetus for the book: these situations that you've seen where the mental model itself is not delivering the results it promised, and so people just kind of burrow in and keep pushing harder and harder and harder on the model. The book itself explores over a dozen prominent models in business, and in each of the chapters you identify some of the serious challenges to that model and offer an alternative that seems to fare better in response to that specific set of challenges. I don't want to use this conversation just to rehearse what you've already written in the book. I loved the book and I strongly encourage listeners to read it, but I'd like to come back to your real purpose in the book, which is to help executives think through what to do when models are breaking down. In that vein, I propose that we dive in with some tactical advice. How can an executive identify when the model is breaking down because it isn't being applied rigorously enough, as opposed to the model itself no longer being sufficient to the task at hand?

Roger Martin: Sure. I think the most important thing to do is to be really clear on what would be a signal that the model is working, right? Suppose you've got a vague model that says, oh, you know, well, we should pursue shareholder value maximization, and you don't say “here's what I expect to see”: I expect to see that executives will behave in this fashion and produce this kind of result, and that will result in shareholder value maximization. If it's a vague notion, then lots of things can go on that end up not getting you to what you want, or you say, well, I don't know if that had to do with the model. Being explicit gives you a better chance of auditing what has happened, right?

Let's just use shareholder value: we want our executives to maximize shareholder value, and for that reason, we're going to give them lots of stock-based compensation. If they get the shareholder value to go up, they'll make lots, and there will be a proper incentive for them to take steps that are always in the interest of that. If you then observe executives doing things that are inconsistent with that, such as going to Wall Street and hyping the stock, watching the stock fall, then getting it to go back up again, then you say, well, they seem to be promoting volatility, not going in the right direction. Then you could say, well, the model presumed that they would engage in these behaviors, and they're not engaging in these behaviors. Then you have a chance of reverse engineering that and finding what could possibly be causing it.

Then you go to the forks in your road and say, well, what could be causing that is that we didn't give enough stock-based compensation, or it wasn't the right form. You can at least test that and say, well, it's because we gave them all options and we should give them deferred stock units, and then we'll see this instead. You're once again doing the if-then, so that you can audit it again and say, whoa, wow, they're still doing what they were doing before! Then, I think, you have a better chance of saying that something about this model is completely flawed and is not producing what we wanted. Now it's time to revisit the model.

Brooke Struck: I want to dig into some of the behavioral angles there. There's one aspect here which is about being very explicit and concrete, as opposed to somewhat vague and nebulous. Why is vague not good enough?

Roger Martin: Because it'll be hard to audit afterwards. To figure out whether your cause-and-effect model works, the if-then model, you have to know the “if” and the “then”, or it's going to be hard. Human beings, unfortunately, have an almost infinite capacity to ex post rationalize, right? So they can say, if we do this investment, our sales will grow at 10% a year for five years. And then over the five years, sales grow at 7%. The mind has an uncanny ability to say “yeah, yeah, seven. It was seven.” Unless you were explicit, wrote it down, and said “I am making this investment because if we build that factory, we will have the capacity to grow our sales at 10%”, put it in the drawer, and five years later pulled it out, you can't compare the 10% to the 7% and say: well, it wasn't a disaster, but something about my model for what would get a 10% sales increase was simply not working properly. So, you're trying to guard against ex post rationalization.

Brooke Struck: Yeah. It's that confirmation bias that we will find a way to make the whole story cohere nicely.

Roger Martin: Yes, absolutely. All the behavioral biases absolutely play into this, and that's why I recommend to everybody that when they make a decision, they write down why they made that decision. Just write it down: it's because I think this, I think this, I think this, and I think this. Only then can you learn, because you can go back. If something doesn't work the way you want and you didn't write it down, all you have is that it didn't work. If instead you wrote it down and said, well, I thought customers were going to do this, I thought competitors were going to do this, and the regulators were going to do this, then you can look back and say: oh, the competitors did this other thing.

So the rest of my logic was actually pretty good, but what did I not see about competitors in the face of this? So next time you build a model that says we're going to do this, you'll have a more sophisticated understanding of the competitors. If you didn't write all of that down, you'll just say, well, that was a bust. Of course, I don't want to have another bust, but I don't really have any good ideas about what was flawed about it, other than that it was wrong. That’s sort of devastating, right? I was wrong and it didn't work well, and what do you have to show for it? Nothing, nothing. You want to have something useful to show for your mistakes or failures.
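Roger's write-it-down discipline can be made concrete in code. Here is a minimal sketch, in Python, of what such a decision record might capture; the class name, fields, and numbers are illustrative assumptions echoing the 10%-versus-7% example above, not anything Roger prescribes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionRecord:
    """One written-down decision: the bet, the assumptions, the expected result."""
    decision: str
    assumptions: list[str]        # the "what would have to be true" list
    expected: float               # e.g., predicted annual sales growth
    made_on: date
    actual: float | None = None   # filled in at review time

    def audit(self) -> str:
        """Compare the outcome against the prediction made at decision time."""
        if self.actual is None:
            return "No outcome recorded yet."
        gap = self.expected - self.actual
        return (f"Expected {self.expected:.0%}, got {self.actual:.0%} "
                f"(gap of {gap:.0%}); revisit the assumptions: {self.assumptions}")

# Write it down when the bet is made...
record = DecisionRecord(
    decision="Build the new factory",
    assumptions=["customers keep buying", "competitors don't cut prices"],
    expected=0.10,
    made_on=date(2022, 5, 10),
)
# ...and pull it out of the drawer at review time.
record.actual = 0.07
print(record.audit())
```

The point is not the code but the structure: because the expectation and its supporting assumptions were recorded at decision time, the later audit can point at which assumption to revisit rather than leaving only "it didn't work."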

Brooke Struck: Right. There's kind of a two-tiered shortcoming here if you don't write it down. The first is that you're much more likely to find a nice coherent story to tell yourself that, in fact, it wasn't the failure that you're trying to avoid staring in the face. The second is that, even if you do manage to stare it in the face and say that we didn't actually reach the outcomes that we'd hoped, if you haven't written it down, you haven't put yourself in a position to actually learn from that shortcoming, to identify what went wrong, and to be able to do better the next time.

Roger Martin: Absolutely. Again, we live in this sort of uncertain world. You never know what the future's going to hold for sure, so you have to make bets about the future that are based on a model. The real key to being a successful person in life is not how smart you are on day one, it's how much better can I get? How much can I improve myself? You improve yourself through learning and then consolidating that learning in order to make, hopefully, a better decision the next time.

Brooke Struck: Right. Speaking of better decisions the next time, once we start to recognize that the models are not holding true to the world in the way that we'd hoped, how can we go about identifying or crafting a new model that's better suited to the challenge? Before tossing it back to you, I'll pull one of the examples that you mentioned previously, where you said: we're expecting customers to behave this way, we're expecting input prices to behave this way, and we're expecting competitors to behave this way. Then, you notice that competitors are starting to behave differently than you had anticipated. Assume, in this instance, that that is a very, very crucial assumption for your model. Now you say, well, we need to move towards something new, we need to craft a new model that better reflects the facts that are becoming clear about the world. How do we go about doing that, identifying the key determinants in this situation that we know our model needs to solve for?

Roger Martin: Sure. Well, it's a tricky question because it is a creative act. I mean, the creation of a way of understanding the world is a creative act. It's advancing the state of knowledge in the world, so it's no small feat. But what I would do, if the rest of my model seemed to hold and it was just the competitors, is attempt to understand more. I'd study the competitors more and say, okay, they did something other than we thought; what is it about them that we could imagine drove that? Did they do other things in a similar way? Ah! Now we can understand whether it's competitors in general or a specific competitor we didn't understand. I'd focus on that to be able to come up with a coherent explanation of why what actually happened, happened, rather than the thing we expected. That would be my new kind of model for testing, my presumptive model.

I'd say, okay, if this is important, let's go off and do a test to see whether that competitor will behave in a way that's consistent with this new model. It's mainly just trying to reflect on what they did and explain it. You've got to explain what's already happened. In what way would a sensible competitor choose that versus what we thought they would choose?

Brooke Struck: So explaining the previous failure is an important starting point because that's a key input into redefining the problem space that you find yourself in. When the old model fails and we keep kind of butting our heads against the wall, what we're doing is essentially trying to apply a model to a problem space that it doesn't fit anymore.

Roger Martin: Yes.

Brooke Struck: And so explaining the previous failure is valuable because it helps us to update our view on what the problem space is in which we actually find ourselves.

Roger Martin: Yep. It would be what the economists call Bayesian updating.
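For readers curious about the mechanics, here is a minimal sketch of Bayesian updating in Python. The scenario, prior, and likelihoods are hypothetical, chosen only to illustrate how a belief about a competitor gets revised when evidence contradicts it:

```python
# Minimal sketch of Bayesian updating (all numbers are hypothetical).
# Hypothesis H: "our competitor will match our price cut."
prior = 0.70  # initial confidence in H

# Evidence E observed: the competitor did NOT match in one regional market.
# How likely is that evidence under each hypothesis?
p_e_if_match = 0.20     # unlikely they'd skip a region if matching overall
p_e_if_no_match = 0.90  # very likely if they aren't matching at all

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_if_match * prior + p_e_if_no_match * (1 - prior)
posterior = p_e_if_match * prior / p_e

print(f"Belief before evidence: {prior:.0%}")      # 70%
print(f"Belief after evidence:  {posterior:.0%}")  # ~34%
```

The arithmetic mirrors the conversation: a single surprising observation doesn't discard the model outright, but it should measurably shrink your confidence in the assumption it contradicts.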

Brooke Struck: Yeah. I can't help but make a comment at this point about the value of the humanities and lots of different fields out there, and remark that Bayes himself was a minister in some far-flung parish who had a hobby. Statistics was just his hobby on the side. So this idea that, you know, inspiration and really powerful innovation can actually come from very unexpected places is one that Bayes himself proves very well.

Roger Martin: Yes, no, no. It's a good story. Same with patent clerks. They can also make a difference in history.

Brooke Struck: Have there been some famous ones of those?

Roger Martin: Like Albert Einstein.

Brooke Struck: Okay. So we start to update our views on the problem space. As you articulated, based on that updated understanding, we start to piece together some new if-then rules. And then, of course, the next thing to do is go out and start testing them. Because ultimately, the model is a hypothesis. It's something that we aren't sure about. It's an educated guess. It's not a stab in the dark, but there's still always going to be some uncertainty, and there are going to be some assumptions that need to be validated before we have the confidence to go out and start applying this new model. What are some of the barriers or challenges that executives will face in trying to get a new model defined within the organization? When the old model stops working, we've just been talking about the operational process for how to update it and build or identify a new model, but beyond the operational, there's also a bit of small-p politics involved: the dynamics of getting the organization to go along through this conversation.

Roger Martin: Sure. And to me, to a certain extent, there's a bit of a growth mindset, Carol Dweck angle to this, right? If the executive in question has a great stake in demonstrating that he or she is right more often than not, they may be more inclined to double down on this model and force the organization to try it again: maybe then it'll work and I won't have to admit I was wrong on this. If they have more of a growth mindset, rather than a fixed mindset, they would probably be inclined to say: hey, as a leader, my job is to keep getting better, and the way to keep getting better is to enhance my models and tell my organization that this model that we tried didn't work. It's not some giant defeat, it's just a stepping stone to getting a better model.

I think this has a lot to do with the mindset of the person deploying the model. The mindset that I try to encourage people to take is, as the great John Sterman of MIT says, all models are wrong. They're abstractions of life. So they're all going to be found wanting in some way; it's only a question of how useful the model is, and then, when we find it failing to be as useful as we wish, how we can modify it. You want them to have that mindset rather than thinking their job is to be perfect, because that's an impossible job.

Brooke Struck: Yeah. In terms of that mindset, my sense is that a growth mindset is hard to propagate in a vacuum. If you find yourself surrounded by people who are trying to be perfect, or trying to signal perfection very strongly within their organization, someone with a growth mindset will be swimming against the current. Another thing that came to mind as you were speaking is the parallel between a growth mindset and things like strong public health institutions and trust in government. They're the types of things that you want to have already developed as assets at the moment when you need to reach for them. The moment that your model breaks down and you're in crisis is a difficult moment to say, well, now we should start adopting a growth mindset.

Roger Martin: Yes, that's true. There are downward and upward spirals to this too. If you're projecting everything about a fixed mindset, you're making promises, whether implicit or explicit, that say: you've appointed me, hired me, elected me because I'm right. Then the stakes are higher when you're wrong, right? You might well be inclined to say, oh my God, since I've told them I'm right on this, and now I'm going to have to admit I'm wrong, I'll be crushed. But it's only because you promised it, right? If instead you say, you know, I always learn, I always get better, there will be missteps, and all I can promise you is that when there are missteps, I'm going to do a better job the next time. I'll take into account whatever we've learned and be better and better and better.

If that's your election promise, if we want to talk about politicians, then it's no big deal to say, hey, this thing we did didn't work out so hot, but we've analyzed it a lot. Here's what we were missing and here's what we're going to try now that's built on that. Everyone would say, yeah, of course! That's how he or she told us they were going to operate, so no biggie. I tend to think that people create traps for themselves. In many respects it's like that metaphor of the person walking into a jail cell, locking himself in, throwing the key outside, and then grabbing the bars and saying “get me out of here, get me out of here!” It's like, no, that was you who locked yourself into that cell.

I see a lot of that going on, which is sad. I mean, we shouldn't joke about it. It is sad when people trap themselves into a situation that they loathe. What I try to help executives do is create a world around them where it's an upward spiral of learning, rather than a downward spiral of defensiveness.

Brooke Struck: Right. It strikes me that one of the key pieces in that is going to be around those if-then rules. We want to make a prediction about what we expect to happen on the basis of certain assumptions holding, and this kind of thing. To use a distinctly Canadian idiom, there's a deft bit of stickhandling that needs to be done to say that we will make this projection, but the projection is not a promise.

Roger Martin: Yes.

Brooke Struck: How does one weave that in, putting forward that projection and being clear about what it is that we expect, without so strongly tying themselves to it that, if the expectation is not achieved, it somehow reflects on them as a failure?

Roger Martin: Sure. Well, I think it's to socialize the if-thens. There's a chapter on strategy in the book about what would have to be true to answer such an important question, right? What would have to be true is a categorization of the if-then statements. You say, well, what would have to be true for this strategy that we're considering pursuing to be a good idea, versus what would have to be true for this other strategy to be a good idea? I believe that it's much better to socialize those. Let's say there’s a chief executive with a senior management team. Have all the senior management team agree that these things would have to be true for A to be a good idea, and these would have to be true for B to be a good idea. Now let's, as a management team, say: which of those, A or B, do we think is more likely to be true? And so we're going with B. We like the looks of B better, and the assumptions we're making on it.

So the team as a whole says that's what we're going to do. Then you have a what-would-have-to-be-true diagram, so that after the fact, if B doesn't work out the way you wished, it's not the chief executive alone who did it. It was the management team as a whole. They can tell that, in fact, we missed thinking about what that competitor would do. Again, they're not shattered as a team. They understand they had it almost right, but not quite, and can ask how to improve it. But if you keep those what-would-have-to-be-trues to yourself as your secret, then it's going to be harder for other people to see that the magnitude of the error maybe was not all that gigantic. It wasn't that it was a stupid idea on its face. It wasn't that it was not thought through at all. It was thought through, and seven assumptions were made. One of them didn't work out, okay. That will help you feel better about what's happened rather than despondent, shocked, or dazed.

Brooke Struck: It's also more actionable because you've identified what you need to correct for in the next model.

Roger Martin: Absolutely.

Brooke Struck: As you were talking about that, it started to become clear in my head that there are really two perspectives to take on this. There's the defensive perspective, which asks: what are the barriers that I'm going to face in bringing together my executive group and getting them to share the risk? The challenge, seen from another direction in this more growth-mindset posture, is: how am I going to convene the discussion? How am I going to manage the dynamics of the discussion to make sure that everyone's input actually gets heard, first of all in fleshing out the assumptions that are baked into our model, and also in making sure that any insights or intelligence that any of those individuals sitting around the table has actually gets aired at that critical moment.

Like if we've got 12 people sitting around the table and one of them knows with a very high degree of certainty that one of our key assumptions is false, and that person doesn't speak up, that is an enormous loss of important resources. You're investing time, you're investing money to learn something that somebody at the table already knew.

Roger Martin: Yep. I couldn't agree more. That's why I think of strategy as a team sport, right? Yes, you can have somebody who's got a bunch of brilliant insights, but even if they do, it doesn't mean they have all the necessary brilliant insights. They may have a set of brilliant insights that comes up with possibilities that maybe nobody else would've thought of, but then there might be somebody else who's got a perspective on that, now that it's been put out there, that is absolutely critical to it. I think the best strategies come when a group comes together and the diversity of the group is embraced, not discouraged. Because, as you pointed out, you could have that person in the room with a diverse perspective that they never share, because they think they're going to be shot down for having shared something that other people wish they wouldn't say.

And so make sure that you get those on the table. In these processes, it turns out that it is much harder to get agreement on what is true than on what would have to be true, because what would have to be true is logic, right? What is true is the combination of logic, if-thens, and data. I like to separate those two things out to say, what's the logic structure? Then let's ask how we would apply data to that logic structure, to the extent we can. If you just ask what would have to be true, what I find is that the diverse voice may be thinking, “I don't think that thing is true,” but you're not asking for that now. You're just asking them to help you figure out the things that would have to be true, the ones we are going to have to get comfortable with in order to make that choice.

Brooke Struck: In terms of the barriers that one might face in trying to convene this kind of conversation internally, the first challenge is to shift the conversational dynamics from what is true to what would have to be true, and to shift that perspective, which is maybe the bigger shift, from everyone in the room trying to figure out what they can commit to because they're sure it's right, to trying to figure out where their greatest dependencies are and where the biggest assumptions are that then need to go out and be tested. So that's barrier number one: shifting the conversational dynamics. Then barrier number two is, once you've set the agenda for that meeting effectively, how do you conduct the conversation in such a way as to get those perspectives coming out, so that people actually do challenge each other and say, well, I'm not sure that that's actually as essential to the picture that's emerging here as you say it is. X, Y, and Z could all be true, and we could still have a problem. Or this thing could turn out to be false, and we shouldn't worry a whit about it, and these kinds of things.

Roger Martin: No, absolutely. I think the key to conducting meetings like that is to find a way to incorporate everyone's perspective in some way, right? Sometimes the answer is: Ah! You are thinking about a different possibility. Let's focus on this one that we're working on now, and then we're going to come back to yours and reverse engineer what would have to be true about that other possibility. Or if somebody says, the following would have to be true but I'm sure it's not true, then you just say: well, let's wait. Other people think it may well be true. How could we test that in a way that both people would be compelled by the answer?

Brooke Struck: One of the questions that we address is what would have to be true. That's articulating these if-then statements. But then, as you mentioned, testing is the next part that comes along. That's a slightly different question: what would it look like for that to be true? How would we recognize it? What would be a valid test of this? How can we follow along, in this team sport of strategy, to create testing criteria for the things that we've just articulated?

Roger Martin: Well, one thing I do is turn over test design and standards of proof to the most skeptical person in the room. Some people tend to reply with, “Really? Aren't they just going to try and submarine the possibility that they don't like?” And I say, “No. If you want everybody to put their hands on their hearts at the end and say, we all feel committed enough to try this, the most skeptical person is the person who's going to have the hardest time putting their hand on their heart. So, let's help them by putting them in charge of testing.” While it's theoretically possible that the person would set a bar so high that you could never pass it, so that they eliminate the possibility, I've been using this approach to strategy for, well, I guess it's 30 years, and I've yet to see that happen.

When I haven't seen something happen in 30 years of doing it, I start to think that the empirics would say that the theoretical problem is not manifest. You might ask why. My theory, and I'd be interested in yours, Brooke, because from your work you may have additional, better theories, is that people tend to be obstreperous to the extent that they aren't listened to. Right? You know, terrorists are often terrorists because they don't feel like their voice is heard, so they do extreme things. I'm not justifying them; it's quite possible that the voice should never have been heard in the first place, but that's what drives them to do it. So, if you instead say, no, no, no, we're not only listening to you, we are actually putting the keys in your hands, then they become, I think, hyper-responsible.

The other thing is that there's a mutual assured destruction aspect to it, right? If you design a test that's impossible to pass for a possibility you don't like, somebody else is going to do the same to the possibility that you like. So the social behavior I've seen is being hyper-responsible. What it does is have a drag-along effect: if the most skeptical person ends up feeling good about it, everybody else is likely to. You don't get that sense, at the end when the choice needs to be made, that somebody is saying “yeah, I still just don't buy it.” Like, thanks a lot, buddy, but that doesn't help anyone.

Brooke Struck: Yeah. One of the things that I've noticed is that the people who are often the most skeptical are more fiery in their rhetoric than they are in the actual practice. Like if you can force them to be concrete, what they're asking for concretely is often much less intense than it would appear just based on the intensity of their discourse.

Roger Martin: I agree. I would agree. They want to be heard; I think that's the number one reason. If they feel strongly about something and they're not heard on it, it is doubly upsetting.

Brooke Struck: Yeah. When they are heard, what they actually are asking for seems in some sense like shockingly not that high a bar.

Roger Martin: Yes.

Brooke Struck: The longer they go on being not heard, the higher the bar will feel. But once you actually concretize the thing and say, “okay, sure, I understand you have misgivings about this. What would you have to see?”, then you can be as exploratory and concrete in that instance as you are in trying to articulate the if-then rules.

Roger Martin: Yep. I agree.

Brooke Struck: It tends to come to a place that's actually quite reasonable, and not only in the long run, where you have to wait until the data is in to realize it was reasonable in retrospect. Actually, in the moment when you sit down and articulate it and concretize it, it comes off as much more reasonable than things had seemed.

Roger Martin: No, I agree with that. The other thing I've noticed, and I'd be curious if you've noticed this too, is that they also tend to be good at designing the test, because it's something that really matters to them. They'd say, I'd have to see customers who I think are going to say this, say this other thing. Whereas somebody else might not be as good at designing the test, because they're not quite as clear on exactly what's at stake and what matters, that person tends to have thought about it a lot and to be good at it. Designing ways of testing propositions is a real skill. It's an important thing that I think is often undervalued. It's the analysis done once the test is designed that gets lauded: wow, he or she is a real analytical wizard, doing the R-squareds till the cows came home. As opposed to: no, no, no, the tricky, creative part was actually designing a test that could work.

Brooke Struck: Yeah. There are two things that come to mind in response to that. The first is that an old friend of mine who's a statistician said “I know when research groups are in trouble because they have to call in a statistician.”

Roger Martin: Interesting.

Brooke Struck: The other is that, as you were mentioning, there definitely is this kind of persona or archetype of the person who is so used to being that challenger, so used to being that skeptic, that they've spent an inordinate amount of time going out and doing the data collection, because the conversation dynamics haven't been in a place where just bringing a little bit of evidence was already enough to raise the flag: oh, maybe there's something interesting going on here, maybe we should dig further into that. They've needed to put up massive placards about how intense the risks are, so they've had to deliver the big guns over and over and over again. And of course, in the process of doing that, they've become quite good at it.

Roger Martin: Yeah. No, that's a good thought. That's very interesting.

Brooke Struck: Let's think about practical steps. Let's envision some executive who is sitting, listening to this, and saying “Oh my gosh, this is describing exactly what's happening around me in my organization.” What can they start doing tomorrow morning to, for instance, just get conversations started around which of the mental models they're using right now are not serving them well?

Roger Martin: I guess I would first just think about things that have been disappointing, right? I think any executive could very, very quickly say, we tried this thinking that we were going to get this, and this other thing happened. We raised wages by 10% and thought that people would feel all cheery, and the quit rate didn't decline at all. Just go to that and write down, as best you can recall, what was our model? What did we believe? Was it, well, people are motivated by money, and if we give them more of it, they'll be motivated to stay? Okay, got that. Let's now try and come up with other explanations for why that didn't happen. That might involve going and talking to half a dozen people whose resignations after the salary increase stunned and amazed you.

Ask them; get some data to be able to come up with at least a couple of theories that could sensibly explain what you've already seen. I think out of those theories, you'll come up with one that's worth testing, one that is a better theory than the theory you had.

Brooke Struck: Right. Maybe we can offer some more specific criteria for what kinds of frustrations to look for: ones where you're very disappointed and maybe surprised or a bit blindsided by the outcome that you saw. Perhaps going back to the very, very beginning of the conversation, where we talked about doubling down on models, we can use the amount of effort being put into trying to apply the model better and better as a bit of a cue as well. What are the things where we're investing a lot of effort in trying to improve our implementation, but even as we improve, or as we invest resources in implementation, we're not seeing the needle move, or we're not seeing it move nearly enough to satisfy us that we're going in the right direction?

Roger Martin: Absolutely. Absolutely. Results that are in the direction we thought, but not nearly as far, would be one kind. Ones that are in the opposite direction would be another kind: we thought they would be happy; they're sad. And then maybe ones that are orthogonal: something happened that was not more of what we thought, was not the opposite of what we thought, was just something different. I think you could put them into those categories. And I think one thing that should always be on the table is: did we actually do what the model said? Because sometimes you look back on it and say, well, we didn't actually do it. We thought we did what the model said were the steps we should take, but we look back and we didn't do it.

We thought we had implemented a 10% salary increase across the board, but we actually didn't. It was 10% for the people at head office, or 10% for people in these functions, and so we didn't actually do what we said. Then you can explore why. Typically, somebody thought it was not such a good idea and just refused, throwing a proverbial wrench in the machinery.

Brooke Struck: Yeah. That helps us to identify those early cases of what we should be focusing on. And then that kind of gets the whole machine running: articulating the assumptions that need to be true in order for this to work the way we think it will, and so on and so forth, as we've just been discussing. Let's talk about those lucky few who managed to scale the mountain. There they stand, proud atop the peak, having planted the flag: this worked, we built a new model, everything's rosy and cheery, right? Like, we built a new model and now the new model is here to stay. How do we avoid the trap of saying, okay, great, now we have grown, and therefore we don't need our growth mindset anymore?

Roger Martin: Sure. Well, I guess there are two things that I would say. One is that I'd be trying to inculcate in that person the John Sterman notion that all models are wrong. They're abstractions, and a better model is what you should always be looking for, to maintain the growth mindset. But the other thing that I recommend is to take the what-would-have-to-be-trues of that model and stick them up on the tack board in front of your desk, and every morning come to work and ask: are the things that would have to be true still true? Because many models, as you would know, are temporal. Given the set of circumstances that's now the situation, this model is a really good fit; it's fit for the environment. But then if the environment changes, there's a new competitor, a new set of regulations, customers changing their interests and behaviors, the model can break down.

If you've got the what-would-have-to-be-trues, what I'd say is, look at your list and ask the question based on your most recent observations: are those things still true? If so, put your head back down and keep working. But maybe you notice that the distribution channel we thought would be really enthusiastic about this is actually taking some of our shelf space and giving it to this new entrant that's doing something different from us. What we thought was true about our distribution channel ain't looking so true anymore. Does that mean the model crumbles entirely? Could we tweak it to make up for that? You can start making that decision. I think of it as the canary in the coal mine, right? The what-would-have-to-be-true list is the canary in the coal mine that'll give you an early warning that even though the model maybe seems to still be doing okay, it's probably going to fall off a cliff in the future.

Brooke Struck: Yeah. I'll just say one short thing on that front: that's the approach that I've always advocated for building dashboards. You know, these data dashboards that are constantly updated and these kinds of things, I've seen so many instances where those are driven by what data is available rather than by what's important to pay attention to. We've got these massive dashboards full of dials and lights and all kinds of things, but only one or two of them are really essential for decision making, and three or four things that are essential for decision making are not there.

Roger Martin: Not there. No. Yeah, I agree. Sadly, I more often think of the dashboards that I've seen as being things that the creator of the dashboard loves, not the user of the dashboard. Lots of dashboards aren't actually used. Somebody had an idea of what would be a cool dashboard, rather than a dashboard that would be highly, highly employed.

Brooke Struck: All right. Roger, that wraps up the questions that I had for you today. Do you want to tell us a little bit about the book before we sign off?

Roger Martin: Sure. It's coming out on May 3rd. You can pre-order it on Amazon and everywhere else already. What I hope it is, is not unlike Playing to Win and Creating Great Choices: a practical manual for helping you make decisions whose results you will enjoy more. This one is slightly different in that there are 14 chapters that are on the same theme but don't lead one from another. So you can read it one chapter at a time: as a situation on M&A comes up, you can read the M&A chapter; if capital allocation comes up, you can read the chapter on capital allocation, talent, et cetera. It has a slightly different feel than my prior books, but I hope it's a manual that people will put on their shelves and refer to.

Brooke Struck: Yeah. Well, I count myself among the lucky few who got an advance copy, and it's already on my shelf with pride of place. So thank you very much. Thanks for taking some time to talk with us today.

Roger Martin: Not at all. It's always a pleasure.

 

We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.

About the Guest


Roger Martin

Named the #1 management thinker in the world in 2017, Roger Martin has extensive experience in business management and advising. Roger received his BA from Harvard College and subsequently received his MBA from Harvard Business School. Remaining in the world of academia, Roger is a Professor Emeritus at the Rotman School of Management at the University of Toronto, where he previously served as Dean from 1998 to 2013. Roger has also acted as a trusted strategy advisor to recognizable companies such as Procter & Gamble, Lego, and Ford, and served for 13 years as a Director of Monitor Company, a global strategy consulting firm in Cambridge, Massachusetts.

Roger’s research has been popularized through his twelve books, including When More Is Not Better: Overcoming America's Obsession with Economic Efficiency; Creating Great Choices; and Playing to Win. His new book, A New Way to Think, offers a fresh, comprehensive guide to the essentials of strategy and management. He currently serves as Chair of the Good Jobs Institute and the I-Think Initiative, helping companies thrive by creating good jobs and transforming student thinking with a community-built approach to real-world problem solving, respectively.

About the Interviewer


Dr. Brooke Struck

Dr. Brooke Struck is the Research Director at The Decision Lab. He is an internationally recognized voice in applied behavioural science, representing TDL’s work in outlets such as Forbes, Vox, Huffington Post and Bloomberg, as well as Canadian venues such as the Globe & Mail, CBC and Global Media. Dr. Struck hosts TDL’s podcast “The Decision Corner” and speaks regularly to practicing professionals in industries from finance to health & wellbeing to tech & AI.
