Cyber Scenario Planning with Alan Iny, Sanjay Khanna, and Michael Coden

Podcast July 5th, 2021

It’s the truly farsighted strategic thinking executives who say, “I need to make sure that my team has a plan and they know how to execute that plan. I know the plan is not going to be exactly right for the calamity that’s going to occur, but it’s always easier to change a plan than to make a plan.”


Intro

In this episode of the podcast, Brooke chairs a roundtable discussion at the intersection of risk, scenario planning and cybersecurity. His guests are Sanjay Khanna, Strategic Advisor and Foresight Expert, and Advisor to The Decision Lab; Alan Iny, Global Lead for Creativity and Scenarios at the Boston Consulting Group (BCG), and Michael Coden, Global Lead for BCG Platinion’s Cybersecurity Practice. Together they discuss the human and systemic vulnerabilities that expose us to cybersecurity risks, and how scenario planning and creative problem solving can help mitigate such threats. Drawing from countless real-world examples of major global crises, they argue that although our best thought-out plans may never materialize, the process of planning itself is invaluable. 

Some topics discussed include:

  • The guests’ recent thought leadership on cybersecurity, including two potential future cybersecurity scenarios – one reflecting greater multi-stakeholder cooperation, the other reflecting a more fragmented, individualistic response.
  • Balancing a need for individual awareness and responsibility around cybersecurity with a wider systematic approach to the challenge.
  • If human error is the root cause of cybersecurity breaches, how can we help people avoid such errors? 
  • The case for scenario planning, not as a prediction tool, but as a mechanism to prepare for a range of plausible scenarios.
  • Real-world examples of how scenario planning has enabled international organizations to prepare for risks that bear similarities to events such as Brexit and the COVID-19 pandemic.

Key Quotes

Hackers don’t break in, we let them in

  • “Hackers don’t really break into our systems. They log into our systems, and that’s because they’ve socially engineered their credentials.”

Addressing the root cause of cyberattacks – human error

  • “77% of successful cyber attacks are due to some sort of human error, and only 23% of cybersecurity attacks could be prevented by technology.”

Crises are rarely singular events

  • “These things are all flowing into one another, yielding compound events and compound shocks. So it’s not just cyber, it might be cyber plus a climate shock. And the climate shock makes you more vulnerable to a cyber attack. We have to think about these things in an integrated way.”

Plans are worthless, but planning is invaluable

  • “It seems like a lot of what happens in mapping out the scenarios is applicable to many different contexts. The risks actually end up materializing sometimes in ways or through causal pathways that we didn’t anticipate initially. But when we arrive in the situation, despite having gotten the details wrong, there’s a lot of preparation that’s been made, and it ends up being very valuable anyway.”

Getting everyone on board with scenario planning

  • “The scenarios process: it’s really about building social capital among stakeholders. So there’s a greater shorthand and ability to discuss the relevant issues when a crisis hits and also to be able to move more quickly.”

Using narratives to draw out information

  • “Stories are the most powerful way to bring together relevant data. Because within the story, the data is embedded.”

Embracing human bias and learning to work with it

  • “If we are able to help people challenge their assumptions, their mental models, their biases, their habits and move into new ones, that to me is creativity. And that to me, is taking advantage of this beautiful opportunity that’s in front of us.”

Transcript

Brooke: Hello, everyone and welcome to the podcast of The Decision Lab, a socially conscious applied research firm that uses behavioral science to improve outcomes for all of society. My name is Brooke Struck, research director at TDL, and I’ll be your host for the discussion. Our episode today is a bit special as we’ve got three guests. First, we’ve got Sanjay Khanna, Foresight Expert and strategic advisor. Listeners may recall that I recorded an episode with Sanjay recently. He’s also a TDL advisor. Second, we’ve got Alan Iny, Global Lead for Creativity and Scenarios at the Boston Consulting Group. And finally, we’ve got Michael Coden, Global Lead for BCG’s cybersecurity practice, working specifically at BCG Platinion.

In today’s episode, we’ll be talking about cybersecurity, how to identify vulnerabilities and what to do about them. This episode builds on a report that the four of us have co-authored, which you can find linked on thedecisionlab.com. Hello to you all and thanks for joining us.

Sanjay: Hi Brooke.

Brooke: Sanjay, let’s start with you. And we’ll give you a bit of space to introduce yourself first.

Sanjay: As you mentioned, I’m a strategic advisor and foresight expert, who also has the pleasure of advising the Decision Lab and the pleasure of building collegial relationships and collaborative relationships with people like Alan and Michael. Part of the impetus of this session was really to talk about that, the Decision Lab context of behavioral science and cybersecurity, so I’m delighted to be here.

Brooke: So let’s dig into what we talked about in the report a little bit, and specifically, I want to start with storytelling. Because stories are such a great way to get into the material. So in the report, we tell two different stories about the future of cybersecurity: one that’s pretty dystopian, and one that’s a bit more utopian. Can you walk us through these potential futures? One where it’s a fight of all against all, and another where we band together for the common good?

Sanjay: Yeah, certainly. So we call these “stories of the future scenarios.” And they’re part of what strategic foresight processes call scenario planning. Alan and I first came together to discuss a side-of-the-desk project in my prior role as the futurist at Baker McKenzie, and in Alan’s current role as a partner and head of global scenarios and creativity at BCG. We were really thinking about technology and accelerating change, and what the implications of that are for organizations facing cybersecurity risks and what it means for developing strategy.

So having these two contrasting scenarios that we’ve developed and co-authored with TDL, the idea was to have the scenarios be contrasting enough that you could have a meaningful strategic conversation about them, but also that you could use them as a way to incorporate other practices like Michael’s work in tabletop exercises, and the way people in cybersecurity use scenarios, which is sometimes slightly different than the ways that Alan and I do in corporate strategy. 

So one scenario was really about what happens if things become challenging enough, because of cyber attacks and cybersecurity issues, that everyone, whether an organization or an individual, is dealing with it on their own with little support. And then the second scenario is fundamentally about looking at ways that the private sector, the public sector and the social sector might ally to protect organizations and communities and governance in the context of some very severe cyber attacks. And in both scenarios, the cyber risk is growing and growing and becoming more disruptive for society. So that’s one thing they have in common. The intensity of the cybersecurity issues that affect societies, that include nation-state adversaries and that include real impacts on the economy, is common to both stories.

Brooke: So something that I want to raise here is that as we were working on this report,  I shared it with one of our editors internally here at the Decision Lab. When she read these scenarios, one of the first things that she said was “Now I don’t trust anything out there on the internet. Every shadow I look in is full of monsters that are just waiting to jump out and grab me.” And I thought that that was really interesting. And a bit fun. And in a certain sense, I thought about the lived experience of this as well. What would being a person living in these futures feel like? 

So right now, for instance, when I receive spam messages, the worst of them are not at all credible. I identify them as phishing attacks right away. But I’ve also seen over the last few years that some of those attacks are actually getting more sophisticated, that it’s not as trivial to separate the noise and the junk from stuff that’s real. You have emails that are coming from accounts with names of people who are in your contact list, and they sometimes contain information that can really make it seem credible. I’ll share one story about this that I hope can put a bit of flesh on the bones. At one point, I was working with a collaborator in Europe and I received a phishing message, the content of which was not credible, but it actually came from his email account. And so, rather than reply to the message, I just created a new email and said, “Hey, by the way, I just wanted to let you know that I’ve been receiving these phishing attacks from your email account.” And he wrote back and said to me, “Oh, thanks for letting me know. There’s nothing that I personally can do about it. But if you can reach out to my IT team, they’ll give you step-by-step instructions to help us address this with our IT system.” And there was a link in that email. And as I read that email, I was like, is this the next stage of evolution in phishing attacks? That the person I wrote to, saying, “Hey, by the way, someone’s trying to scam you through your email,” that there’s this second layer to it, that actually this response saying, “Oh, thank you very much, here’s how to help me with this problem,” is actually part of the attack? Am I still in the matrix right now? And this link that’s been sent to me to help fix the problem is actually the attack itself.

So I found myself in this situation where I didn’t know what to trust anymore. Now, I had pretty good instincts that that follow-up email was too customized and too elaborate and too smart to be part of a real sting operation. But I envisioned a future where that thing might not be out of the question. And that experience I felt, of not knowing whether the situation I was in was real or a ruse, and not knowing where to turn: essentially, what we’re mocking up in the scenarios in this report is a future where most of us are in that situation, most of the time, with any digital interaction that we have. We’re always asking ourselves whether this is genuine.

And so that, I think, is the groundwork for now asking about these two potential futures. How do we as a society respond to that? Do we leave it to individuals to sort it out for themselves and to separate the wheat from the chaff? Or do we take a common response to this and say, “Actually, we need a systemic solution to this now systemic problem.”

Sanjay: It’s a really good question, Brooke, and I’ll respond briefly and clarify my prior response as well, and then hand it over to Michael and Alan. In the two cybersecurity scenarios we outlined, we used a fairly short time horizon, three years, because things are moving so quickly that this could create these issues with trust that we’ve already seen with COVID vaccines and information sources and these things that have played out recently: vaccine hesitancy, attacks on financial institutions, and so on. The first cyber scenario was called “Take One 2024, The Gated Digital Self,” where each person had to protect their own cyberspace, in terms of their own interactions and protecting their identity. And it was very, very individualistic, because there wasn’t a common response.

The second one is called “Take Two, Gated Virtual Communities,” where there’s still the sense of something being gated, but it’s a larger entity involving greater collaboration. And that collaboration can be there to help you find out where you can trust information, so you can be less disoriented in the new context. And I would argue that the second scenario we outline addresses the trust issues that become either endemic to the second scenario, or in the first scenario are actually epidemic. They’re moving so quickly that very few people know how to orient themselves to that risk environment. So just clarifying those two things, and handing over maybe to Michael and Alan to introduce themselves.

Alan: Well, let me introduce myself then. Alan Iny, I have the pleasure of a role with Boston Consulting Group focused entirely on creativity in the business world. And part of creativity means thinking creatively about the future, about what’s possible. And in that capacity, it’s been really, really interesting over the years to develop a lot of scenarios with a lot of different clients. I’ve been thinking about the future of Russia with the World Economic Forum, the future of mobility with a car maker, the future of water in Africa and the future of any number of other things. One of those projects actually was around the future of cyber attacks, even six years ago. And the way that the world has evolved since then, is super interesting. We can get deeper into that.

But fundamentally, the point for me is how we can become better prepared for this. Yes, we all get phishing attacks by email. Yes, we all read newspaper headlines about this pipeline being hacked, or this school or this university or hospital or whatever. And these are happening more and more; they will continue to happen. And Michael will get into the statistics even more than I can. But one thing I do know, Brooke: you laid out this contrast between an individual response to this thing and a systemic one. And I think both are needed. What has been seen is that a lot of the breaches that actually happen are because of human error, or because of humans clicking on something they shouldn’t. So clearly, that speaks to the need for individual education and all the rest. But at the same time, we need some sort of systemic approach where we are preparing organizations for things that they cannot predict. That to me is the goal of scenarios generally. How can we build resilience and strengthen our strategies in the face of all this uncertainty that’s constantly around us?

Brooke: Yeah, I’ll pick back up there. And I want to highlight one of the things that you said there, Alan, before I throw it over to Michael, and that point that you made, is that increasingly the vulnerability in the system is us. It’s humans, it’s not actually the machines. So once upon a time, maybe the easiest way to try to steal money or to steal data was to try to hack somebody’s password. But at this point, hacking somebody’s password is not the easy way, hacking a person is the easier way. It’s these social attacks where, in fact, what we do is we trick somebody into letting us in, rather than trying to break in. And on that note, I might throw it over to Michael to enlighten us a little bit more.

Michael: That was really well said. Hackers don’t really break into our systems, they do log into our systems. And that’s because they’ve socially engineered their credentials. Alan really made a good point, our studies have shown, and government studies have shown that 77% of successful cyber attacks are due to some sort of human error, and only 23% of cybersecurity attacks could be prevented by technology. So people are really the first line of defense in cybersecurity. 

In my mind, there’s a great analogy: cybersecurity in the 21st century is what safety was in the 20th century. Nowadays, everybody thinks about safety. Our companies, when they design factories, design safety into the factory. We now need to design cybersecurity into the way we use our digital technology. That’s going to be one of the first things that we have to do.

Brooke: So picking up on a behavioral perspective there, if we’re thinking about these social attacks, Michael, could you perhaps give us an example of how one of these attacks works and how it really leverages or exploits the human vulnerability, a social vulnerability, and what it means to build with cybersecurity first? An instantiation of how a system can be designed to thwart that type of approach.

Michael: Okay. So, great question. One of the interesting things that we’ve found, with a great deal of research, is that emotional emails, emotional phone calls, anytime somebody sends you something that evokes a strong emotion, that’s probably a false email or a false phone call. And because it’s evoking this emotion, humans have this immediate desire to respond. So the email that says “Guess what? I have a vaccination appointment for you. I know you’ve been waiting months for a vaccination appointment.” Or: “Guess what, your grandmother’s in the hospital.” That’s probably somebody trying to phish you. Or: “I need money right away.” It’s interesting you brought up, at the beginning, the progression of phishing emails from the ludicrous to the almost impossible to detect. They actually serve two different purposes. Those emails which are ridiculously filled with misspellings and malapropisms are carefully designed to weed out people who are really gullible. If someone does respond to those emails, then they were a good target for the extortionist.

So actually, they’re not poorly designed, they’re extremely well designed in order to do that. On the other hand, there’s the exchange that you described, where the email actually came from your colleague’s email account. You can verify that it was truly his account by just hovering your mouse over the email address in the From field. If it really is that email address, and you believe that it’s a fake email, then there’s a chance that the bad guys have taken over the email account. They got your friend’s password. They took over the email account and your friend is not sending the emails, which also means that your reply was not being read by your friend; it was being read by the bad guys. And that’s why you got the second email, which said, please go to this website and enter this information, because that’s how they were going to grab you.

And what we know is that in all but the most serious nation-state phishing attacks, it’s a two-step process. So I can give people one handy bit of advice here. When you get this phishing email that says there’s been a terrible accident with the school bus and your kid may have been on the bus, click here for more information, and you click: well, that’s the wrong thing to do, because it’s an emotional response and you’re reacting too quickly because you’re emotional. So the best thing to do is to stop and not do that. However, once you’ve done that, you really haven’t done anything bad yet. Because what that click does is it takes you to a fake web page, and that fake web page asks you to enter your user ID and your credentials for your school website, or your bank account or whatever. That’s the bad step.

We’re all creatures of habit: as soon as we see those login pages, we, by habit, put in our user ID and our password. If you don’t do that, if you don’t do the second step, then you won’t really have created a problem for yourself. How could you prevent doing the second step? There is a technology available that can help you, called password managers. If you have all your passwords managed by your password manager program (free ones are available online), then the password manager always looks at the web page and says, “Is this the web page that I really learned that password from? Or is it a fake web page?” And it won’t put the password into that fake web page. So just by using a password manager, you can solve a lot of this phishing email problem.
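The check Michael describes can be illustrated with a minimal sketch. This is not how any real password manager is implemented; the vault contents and the hostnames are hypothetical, and the point is only the core idea: credentials are keyed to the exact site they were saved for, so a lookalike phishing domain gets nothing.

```python
# Minimal sketch of the domain check Michael describes: a password
# manager only autofills credentials on the exact host the password
# was originally saved for. Names and data here are hypothetical.
from urllib.parse import urlparse

# Hypothetical vault mapping a saved hostname to its credentials.
vault = {"www.mybank.example": ("alice", "s3cret")}

def autofill(url: str):
    """Return saved credentials only for an exact hostname match.

    A lookalike phishing host (e.g. mybank-example.co) is simply
    not in the vault, so nothing is filled in.
    """
    host = urlparse(url).hostname
    return vault.get(host)  # None for any unrecognized host

# The genuine login page gets the credentials; the lookalike does not.
autofill("https://www.mybank.example/login")    # credentials returned
autofill("https://www.mybank-example.co/login")  # None: refused
```

The design choice worth noting is that the user never has to judge the page visually; the refusal to autofill is itself the warning signal.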

Brooke: Okay, that’s really helpful. And in terms of the design of those password managers, from a behavioral perspective, there’s going to be something very different between the experience of going to that website and seeing that the password manager just isn’t filling things in the way that it normally does, versus a password manager that actively prompts me and says, “Are you where you think you are?” and gives me a salient hint that I should be looking for something, as opposed to just passively standing back and allowing me to do the thing I shouldn’t be doing.

Michael: Good suggestion. We should pass that on to the designers of password managers.

Alan: Yeah. And it’s also good for improving situational awareness. If it’s not too automated, is it automated enough? It’s an interesting question.

Michael: Yeah. But getting back to the scenarios, the way we use them in cybersecurity, to build on what Sanjay and Alan have said, is to create situations where people can practice their response to a cyber attack. So the simple analogy is, a real fire is not when you want to be learning where the exits are. You want to know where the exits are before there’s a real fire. So fire drills are really important. And they are the simplest of all these scenario exercises. So imagine there’s a fire, everybody has to go to this stairway, exit the building, stand outside, people take a headcount, they practice doing this to build some muscle memory.

Scenarios in cybersecurity, we use them in the same way. We create a realistic situation, one that may not have been thought about before. And then we ask the people in the company to practice what’s called their incident response plan. And we see whether they really read the incident response plan. If they’ve read it, did they understand it? Is the incident response plan a good incident response plan, or when they tried to implement it in the simulated cyber attack, did they find that there’s something missing?

Well, our phone system is down and we don’t have a backup communication system. Or Pat is the only one who knows how to deal with that single sign-on server, but Pat’s on vacation and we don’t know who to go to. So there wasn’t a process for how to deal with bringing up the active directory again. So, using these scenarios, we can actually help people practice and build muscle memory on how to deal with a real cyber attack. We can find weaknesses in the plan and we can try new ideas out before we actually invest money in them.

Brooke: Okay, that’s interesting. I really like this parallel to the fire drill. I think it helps us to think through some questions about scenario planning. So, for instance, part of the discussion that we have had amongst ourselves is about the difference between general scenarios and tailored or bespoke scenarios. There are some things that we can learn from what fire drill plans, in general, can look like. There are some things that we can learn from studying the history of real fires. But there’s also something to be said for having a fire drill plan that’s actually tailored to our own building. And to be thinking about what kinds of fire risks exist within the building that we are working in, not just buildings in general. Alan, I might throw it over to you and ask a little bit about the differences between creating a bespoke scenario versus working with more general scenarios that have broader applicability?

Alan: It’s a really interesting point, because of course, we do both. And in the end, the real power of any scenarios exercise is indeed in thinking about the implications. What does it actually mean for your organization? What is your escape route from the fire, or whatever the case may be? And so if one thinks about it this way, in any scenarios effort, there’s the process of developing the scenarios, and then there’s the process of using them, taking the scenarios one by one.

In this case, we’ve got two for a few years out, which are deliberately a little bit extreme, but stretch us in different directions. If you take them one by one, then for each one, you say, all right, well, if we knew this was going to happen, if we did have a crystal ball, what specifically would we do? What would be the opportunities for us, the challenges for us, and what specific actions would we take now to become better prepared? Okay, then you do that again for the second one. Then suddenly, you can look across both of those and think, well, are there some “no regrets” moves that make sense across all of these future states? Are there some contingent moves that we should plan for, but not pull the trigger on yet, because we want to see if the world moves in this direction or that direction?

Are there things we just want to learn more about? Are there some big bets we want to make? Or maybe even try and nudge the world in this direction, or that direction, through our lobbying or partnerships or investments, or whatever the case may be? And so all of that is where the real customization comes. In thinking about how we can strengthen our strategy, how we can become better prepared for whatever the future holds. 

In a specific custom, bespoke version, we might make three or four or five scenarios instead of two, to stretch things further. But it doesn’t mean that we think, even if we have four, that the real future is going to have a 25% chance of being each one of those. No, of course not. There’s an infinite number of possible future states. And the idea of any scenarios effort is to try and cover the range of options. To try and be somewhat representative of the range of futures that might happen, since we can never be properly comprehensive. And if we’ve done that right, then by going through the exercise I describe, exploring strategic implications and what we would do, we actually become better prepared for whatever may happen, regardless of the individual scenarios themselves.

So it’s really about pushing people. One of the mindset shifts that is important, regardless of whether it’s a general or a bespoke version, is acknowledging that this is not an exercise in prediction. A lot of clients tell me after we do a scenarios exercise that they see it in the newspaper the next day, and they say, “Oh, look, scenario number two is coming true.” And then the next day, scenario one is coming true. And that’s fine. That just means we did our job right. But it’s not an exercise in prediction. It’s not even really an exercise in improving forecasts, although that can come of it. It’s really an exercise in preparation and strengthening strategy.

If I’m allowed to continue for one moment, I’ll just add one more point about that, which is, that can actually be viewed as an advantage, as an opportunity. When people think about cyber attacks, if you just hear that word, you’re very likely to think of it as a challenge, a negative, a hurdle. I could lose all my money, I could lose all this. And there are indeed plenty of challenges inherent in it. But there is also, in theory, an advantage in being better prepared than everyone else. I know that’s a relative thing. But if as an organization or even as an individual, you’re better prepared than everyone else, the competition or however you think of everyone else, that’s a competitive advantage. That’s a way of being better prepared for the future.

And there are strengths in that. It’s a fantastic opportunity that’s present now, before the crisis happens. As Michael said, the best time to prepare your plan is not when there’s an actual fire and smoke alarm.

Brooke: Yeah, that’s great. Sanjay, you wanted to jump in there?

Sanjay: Yeah. I just wanted to build on Alan’s comments about scenarios and plausibility. We really are, to build on his thoughts, trying to stretch the boundaries of the organization’s thinking to include elements they don’t normally think of in their siloed practices, some of which may be contributing more to the profits of the business than other areas, and may have influence on budgets for preparation, for example. So we’re really trying to get this multi-stakeholder engagement inside, and sometimes outside, the organization, and building the scenarios in order to open up a diversity of thoughts, in order to support greater resilience, adaptation, and also capturing those opportunities that Alan so eloquently described.

The opportunities in preparedness, but then also the opportunities that preparedness gives you [the ability] to be more proactive in the bigger bets you might place. So, this is a very interconnected way of thinking and drawing on a multiplicity of talent to solve these problems and get organizations ready for this converging crisis context that we are in, where these things are all flowing into one another, yielding compound events and compound shocks. So it’s not just cyber; it might be cyber plus a climate shock. And the climate shock makes you more vulnerable to a cyber attack. And so we now have to be thinking about these things in an integrated way.

Brooke: Yeah, Sanjay, I’d ask you to expand a little bit on that. So both you and Alan talked about getting past silos and expanding out from, potentially, compartmentalized thinking and compartmentalized operations. Sanjay, I invite you to talk about that on the creative front. So, how does that change the type of scenarios that might come out of the process? In virtue of having different perspectives around the table and the richness that lends. And if you’ve got an example to help us walk through that, that’d be really helpful.

Sanjay: Yeah, I’ll probably do this a bit dialogically with Alan, because I think this is an interesting space in this conversation to be a bit dialogic. So if you look at this, even this particular podcast that we’re doing, we have you representing the behavioral science organization and bringing behavioral science more into organizations to influence people’s behavior around cyber risks, but also other issues inside organizations where nudges and other techniques are particularly valuable. We have Alan who brings the creativity practice as well as a scenarios practice to this work, which is a really powerful way to ensure that you’re drawing on more of the human mind that may be latent, but needs to become more present in tackling some of these questions. And you’re now drawing that expertise out of your team using techniques from creativity.

You have scenario practices that are now becoming, in some contexts, a bit more like design thinking and prototyping, and in other areas like our work involving narratives and creativity in order to engage people’s minds and hearts in the story. So they feel committed to the narratives and so on. So there’s that confluence. And then you have the deep technical, practical and strategic experience of Michael in addressing cybersecurity issues. So in a way, what we’re doing in this podcast is modeling what we’re trying to do in the starting point of building a community of stakeholders to build these scenarios and to be engaged in the actual process.

And Alan talked about two things, developing the scenarios, which takes a lot of time and is very instructive because you do interviews, you engage with people, you try to figure out what are the deep uncertainties and questions that are critical for the particular organization in a bespoke context. You also try to ensure that you’re discovering some blind spots in the interviewing process that might lead you to involve even more stakeholders. And then in the process, you’re doing the same thing. You’re drawing different stakeholders together in order to review these scenarios, and then look at what Michael so clearly described as important in the preparedness aspect, from a practical operational standpoint, to make sure you’re ready.

And from Alan, that intellectual standpoint of looking at the wider strategic context, in order to ensure preparedness. So for me too, like Michael and Alan, it is fundamentally about being much better prepared to deal with all plausible scenarios to the extent that you can be. The “no regrets” moves that Alan mentioned are really, really important.

Alan: I wonder if it might be interesting, Brooke, for me to give an example. And Michael can take us back to a cyber example in a moment, but I thought I would share the example of the first scenarios project I ever did at BCG, back in 2008. And you’ll see why I think it’s relevant. It was for the European rail industry. And so we were thinking about all of these trends around rail: what’s going to happen with cargo, with passengers, sustainability, urbanization, security, all of these issues.

We came up with a set of four scenarios, one of which involved everybody in their little pod at home, working virtually, and having these VR/AR types of things in 2020. There was another scenario where China took over the world of rail, which subsequently happened in terms of them building all these high-speed trains. There was another scenario where the European Union broke apart, and they had different regulations and different languages and different issues across all the countries. I won’t get into all the scenarios, but the point is, in that first scenario, where everyone was in their little pod, there was no pandemic. We certainly didn’t predict that. But it was a high-tech world with everybody using technology and no transit. In the last scenario I described, with Europe breaking apart, again, I will absolutely not claim that we predicted Brexit in 2008-9. We were thinking about Greece. We were thinking about Portugal and the debt crisis and stuff like this.

But the point is that those players who took those scenarios seriously were better prepared for the pandemic. And were better prepared for Brexit, which one could say were relatively unpredictable in 2008, even though of course there were experts who were warning us and all the rest. But they were unpredictable in terms of their specificity. The point is, those who took the scenarios seriously were much better prepared for those massive shocks than otherwise. And I think the same holds true here when we think about cyber shocks. If and when there is some bigger crisis, some bigger issue here, among the five pipeline operators, among the five rail operators, among the five utilities, there will be some that are better prepared than others, and those are the ones that will get a massive advantage when that moment happens.

Sanjay: And I can add a brief example from maybe earlier in the scenario process. Yesterday I was presenting with the team to a European development finance organization looking at the post-COVID environment, and at the scenarios they can develop to address the emerging context: where to do impact investment, how to address the climate impacts across the geographies in the developing world that they fund, and how to look at the different 1.5- or three-degree scenarios, think through some of these things, and build their strategy around some of the risks that they’re seeing emerging. We were helping them introduce new areas of contextual understanding that they would need to integrate into their scenario process, to make them better prepared as a financial institution that’s working in international development against competitors, but also potentially with new collaborators in that space, to increase and amplify their impact when they know they’re dealing with these multiple converging issues. And so hopefully, they end up like Alan’s clients, more prepared than the others for the evolving post-COVID environment where, again, these issues are coming together.

Brooke: So, I want to pivot a little bit, and just before doing so, Alan, I’d like to point out that I really appreciate your example for how well it shows that all bad things eventually happen. My inner pessimist is really happy to hear that. But I’ll follow your lead and throw it over to Michael to bring us back a bit more into the cyber realm. So do you have some examples of scenarios that you’ve worked through, and maybe contrasting examples, if you’ve got them on hand? Types of scenarios that got mocked up where there was a strong response, and, in fact, when certain risks did materialize, someone was ready. Versus other situations where a scenario didn’t get the traction that it needed, either because the process didn’t develop a sense of ownership among the stakeholders, or just because, in the prioritization stage, relative to the other scenarios that were developed, it got bumped down the ladder, and how the outcomes were different there.

Michael: Sure, great point, Brooke. As we were talking, I was just thinking about a scenario planning situation from a sports analogy. I’ll just throw that in, as I’m an avid tennis fan, and I’ve been watching the French Open as we’re recording this, not at this moment, but during this week. And I can remember a shot that Roger Federer hit, where he was actually so far wide of the court that he hit the ball around the side of the net. And he won the point. It didn’t go over the net, it went around the side of the net. And at the end of the match, he was interviewed, and [the interviewer] said, “How did you do that? How did you know to do that?” And Roger said, “Well, we practiced that shot.” You don’t know in any tennis game what you’re going to be faced with, but practice, practice, practice is how the pros get to win.

I can tell you, like Alan, we had the experience in early 2017, in the first quarter, of presenting a ransomware attack cybersecurity scenario to the Community of Chairmen of the World Economic Forum, which is the 100 chairmen of the boards of the 100 largest companies in the World Economic Forum, and to several corporate boards and C-suite management teams. July of 2017 was when the first major ransomware attack, “WannaCry,” happened, maybe you remember, followed three weeks later by “NotPetya,” which caused a major shipping company to shut down for three weeks and caused a major pharmaceutical company to have to shut down its factories and lose $1.1 billion. And it really felt a little odd, just as it did for Alan: two months before this actually happened, I was telling people to prepare for this.

We do these scenarios, simulated cyber attacks, every 6 to 12 months with our clients. One of our clients created a ransomware committee, because one of the situations we came up with in the first simulation was that the board was very divided, into four different factions. One section said, “We will never pay ransom, it’s immoral. It may even be illegal.” Another said, “It’s only $15 million, what are you wasting time on? Let’s just pay it and get it over with.” Another said, “Can we negotiate with these guys and see if we can knock it down?” And then the fourth group said, “Well, we have seven days to pay. Let’s see if we can recover our systems before we end up paying.”

And the argument got very heated and emotional, and to the group that said, “We can’t pay a ransom, it’s against our company policy,” I said, “It may be against your company policy to pay ransom for a kidnapping, but do you have a policy on data ransom?” And there was silence in the boardroom. This hadn’t been thought about. And those companies now all have plans for who makes the decision, how they make the decision and, equally important, who can make the payment if they decide to make it. You have to be very careful. It is against the law, at least in the United States, to make a payment to a terrorist group, so you have to make sure that this is not a terrorist group. You have to know how to deal with a bitcoin wallet. So it’s important to get law enforcement engaged and to have third parties that you can work with, who know how to make these things happen, if you do decide to do that.

So that’s one of several examples. Another really fun example was a cyber attack scenario we did for an entertainment company. They had a data leak, a lot of private client information out on the internet, and the cyber attackers who had stolen the information threatened that they were going to convince the entertainers under license to that company to get their fans to boycott the company. And we actually used Taylor Swift as an example. And literally three weeks later, Taylor Swift got very upset with a company, which will remain nameless, over its policies, and asked her fans to start a social campaign against that company to get it to change its policies. So while in the simulation some of the people in the room said this could never happen, well, we imagined it. And lo and behold, it happened three weeks later.

Brooke: I’ll pick up there and raise this idea that it seems like a lot of what happens in mapping out the scenarios is applicable to many different contexts. The risks actually end up materializing sometimes in ways, or through causal pathways, that we didn’t anticipate initially. But when we arrive in the situation, despite having gotten the details wrong, there’s a lot of preparation that’s been made, and it ends up being very valuable anyway. Something else that I picked up in both of the stories that you told is how you’ve got these tensions, these new kinds of threats, these new kinds of risks, that really challenge boundaries and silos that can appear firm under, let’s call them, peace times. And that really speaks to this idea of converging crises that Sanjay has worked on quite a bit.

So Sanjay, can you bring us some examples as well of how these crises converge? You alluded to it a little earlier, about how, for instance, a climate crisis tips over into some other crisis. Perhaps COVID is an example that you want to explore there, with these spillover effects from one dimension to several.

Sanjay: Yeah, so I’ll try to pick up a bit from Michael’s comments, where he really highlights that situation where there’s dissension in the board simulation. There are these four groups that split out, and until you have that conversation, you can’t build the social capital that you need when an actual crisis happens. So that’s the other thing about the scenarios process: it’s really about building social capital among stakeholders, so there’s a greater shorthand and ability to discuss the relevant issues when a crisis hits, and also the ability to move more quickly. The second thing is that because we’re in this converging crisis environment that we’ve described, access to really strong experts in strategic foresight, informed by both theory and practice, is very hard to come by. It’s hard to gather the expertise to bring into companies and organizations of all types to develop scenarios.

And so part of what we’re doing here in the session is alerting the audience of The Decision Lab that there’s something to this process. And I’ll just share the inside ball here: Michael and Alan and I have all been in situations where we’ve done some kind of scenario work, we’ve told people it’s not predictive, we’re not trying to predict things, and then we’ve also been very pleased, in some weird sense, that something played out from that work that was relevant to the client.

And so in this converging crises environment, the way I bucket them to help clients understand this is in five buckets. One is geopolitical fragmentation: we’re seeing a fragmentation of the geopolitical stabilities that we’ve seen in recent history. Secondly, socio-economic reordering: we’re seeing the reordering of economic classes within society, in terms of wealth and income inequality, and also the various social inequities that build up from that. Then technological acceleration, which is what we’re looking at in terms of mass networks and the use of this information and different technologies for cyber attacks, [and] extreme weather and climate change. And, finally, population health and global health are also a factor feeding into scenarios. So for example, before the response to COVID in Brazil led to massive social protests, you wouldn’t necessarily think that a health issue would lead to significant social protests. But the COVID crisis has revealed that as well.

So looking at what happens when you have a global health crisis combined with social and wealth inequities in different parts of the world, in different contexts, it starts to be one area of confluence where you could have compound events, but there are many others. And when things really converge, how things might play out in this context of radical uncertainty becomes more unpredictable. You asked a question very early on about stories and storytelling. Well, stories and narratives are the most powerful way to bring together relevant data, because within the story, the data is embedded. So when you look at scenarios, you can also see things like, “Oh, we need to study this area that’s been brought out, because there’s a huge amount of data underlying that narrative that we need to surface and bring to the fore.” And it might be something we want Alan, who’s the scenario person on the strategy team at Boston Consulting Group, to look at. And we might want to pull in Michael, who has cyber expertise too, as part of the stakeholder group to inform that. We need some behavioral science angles, about how we respond, from The Decision Lab. And you can start to see how this confluence of crises starts to surface new interactions that require new disciplines and interdisciplinary approaches.

Brooke: I really appreciate the comments that you’ve just made. And it’s settling in, how these converging crises, seen through a behavioral lens, one of the things that it’s really pushing up against, is the challenge of mental models. Our mental models of how social dynamics and social crises work are oftentimes compartmentalized from the mental models that we have for how health works, and compartmentalized from the way that we think that the climate works in these things. And what ends up happening is these converging crises run up against the compartmentalization of mental models. There are some other very strong behavioral dynamics that have been swimming, lurking, if you will, like sharks beneath the surface in some of this conversation. 

Michael, you mentioned very early on that a lot of the social attacks, those social phishing scams and these kinds of things, are really trying to key up the emotions of readers. Because, once you get people into a hot state, into an emotional state, it’s harder for them to tone down and detach themselves and take a more reasoned assessment of what it is that’s sitting in front of them. And to go more to the response angle, that seems to be exactly what’s happening in boards as well. Marking up these scenarios and putting together plans, essentially, what we’re looking to do is to avoid finding ourselves in that emotional state like we would facing that phishing scheme to say, well, actually, when faced with a crisis, emotions are likely to run high. That’s not when we’re likely to make our best decisions. So what we’re doing is preparatory work that’s essentially putting in place some pre-commitment devices. 

That’s not to say that we have a clear mapping of “If x happens, then we will take y action.” “If x2 happens, we’ll take y2 action.” It doesn’t necessarily need to be as detailed and prescriptive as that, but at the very least, what we should be pre-committing to are the terms of discussion, the terms of engagement. On what basis are we going to make a decision about x or y or z? And part of that also has to do with monitoring the ecosystem to know what are the early signs, what are the leading indicators that we should be paying attention to, to trigger us into a situation where we say, “Now we know we need to make a decision”? Rather than waiting for the full brunt of the risk to crash on us.

What are some of the barriers that you’ve seen to that pre-commitment, when it comes to stakeholders who essentially need to bind themselves to the mast and have all the other sailors on the ship put wax in their ears, as they sail by the sirens. What is it that makes it challenging to get senior decision makers on board with the need to pre-commit?

Michael: So, human beings are essentially optimists. Just because a cyber attack has happened to other companies doesn’t mean it’s going to happen to me. Just because other houses have been broken into doesn’t mean my house is going to be broken into. Ninety percent of burglar alarms are sold after the buyer’s house has been burglarized. And that’s just part of our genetic makeup. On the other hand, we have a few forward thinkers. Andy Grove, the former CEO of Intel Corporation, said “Only the paranoid survive.” He actually wrote a book with that title, right? And that’s what we have to think about here: everybody is a potential victim. And what we have also seen is that most of the damage in a cyber attack is done after the attack, not before. The greatest financial losses come from poor response.

So, when a company like Equifax responds the way it did to its cyber attack, the stock price tanks. People are enraged, everybody is very unhappy. Most people don’t even remember that there was a very large cyber attack against Home Depot. But Home Depot handled it really well. They had a plan, they executed the plan, they had great communications with their customers, with their shareholders, with the media. And as I say, nobody really remembers that happening. It’s the truly farsighted, strategic-thinking executives who say, “I need to make sure that my team has a plan and they know how to execute on that plan. And I know the plan is not going to be exactly right for the calamity that’s going to occur, but it’s always easier to change a plan than to make a plan.”

Something that made a big impression on me was an interview with a great Israeli general, Moshe Dayan, talking about the Six Day War. When he was asked, “How could you implement this, win this war in just six days?” he said, “It was all our logistical planning. We had every piece of equipment in the right place at the right time.” And they said, “How could you plan a war?” And he said, “Well, it’s much easier to transmit changes to the plan than it is to transmit a plan.” And that really stayed with me, and that’s why we recommend to clients that they have as many as a dozen different plans.

Whatever real calamity comes, one of those plans will be closer than the others, and you can then start with that and adjust it to the real calamity. We had situations like this: early last year and in 2019, we did a scenario where, similar to what Alan did, we had the company’s computer systems and network completely go down at their site. So no one could work in the office; they all had to go home and work on their personal devices. And again, the client said it was a little bit ridiculous that everything would go down, but they were prepared for COVID remote work from home.

Brooke: I really like what you just mentioned about having several playbooks, pulling one off the shelf and having to adapt it. It reminds me of one of my favorite sayings, that plans are worthless, but planning is invaluable. And you also mentioned that the majority of the damage actually comes after the attack; that takes me by surprise. It takes me aback a little bit to hear that. What it suggests, at least in the cyber context and perhaps in the financial context, is that actually the major asset that we’re protecting is trust. It’s not actually the data, it’s not actually money; it’s protecting trust in the system, because ultimately, that’s the currency that we need to spend. Or that is a currency without which none of the systems can operate. And Alan, I wonder whether you could chime in on this and talk about whether you’ve seen similar patterns outside of the financial context? And perhaps outside of the cybersecurity context as well.

Alan: Yes, I think so. Look, first things first, we’ll give credit to Eisenhower for that quote, although Churchill said something similar. And even Mike Tyson said, “Everyone has a plan until they get punched in the mouth.” So it’s the same general spirit of all of these things. And I’m completely with you. Going back to your question, really, the whole reason any of this is necessary is indeed because we’re human beings and every CEO is a human being. And every leader and exec is a human being. One of the really cool examples I’ve seen is actually in Formula One racing. You talked a little bit about pre-commitment and making some of these decisions in advance.

Well, the McLaren racing team has their control center in England, with people there regardless of where the race is around the world. And they have millions of situations pre-committed to, pre-decided. Because they’re making decisions at any point in a race: “Should we use these tires or those tires, based on the weather?” Or, “Should we fill up this much or that much gas, based on how many laps are left, or who we’re competing against?” Whatever the case may be. All of these things are pre-wired in advance. They have simulated millions of races, and it’s true that this is a human thing, but in the end, they can simulate it through technology as well. The end result was that they developed the competitive advantage I was talking about earlier over the competition. And the result of that was that they found themselves actually lobbying for more rules. They found themselves lobbying the governing body for more regulations and more rules, because they were better equipped to deal with them than the competition.

Now, none of my other clients in any industry are typically lobbying the governing body for new regulations and new rules. This is counter to the mindset. But fundamentally, all of this is necessary because human beings run corporations and have bank accounts and all the rest of it. So I think, fundamentally, when we talk about what makes it a challenge, yes, it boils down to this trust that you’re talking about. Do I trust this person who says that my kids have been in a school bus crash? Do I trust only the bank that I know? Do I trust this, or do I trust that? Sure, fine. And people try to exploit that all the time. In the end, it does come down to our mental models. Our mental models are: this is the way we think about this, this is the way we react to emails, this is the way we click on links, this is the way we do these things.

And changing these habits, these ways of thinking about all of this, is what’s so important, but so difficult. If we were purely rational beings, who weren’t governed by habits and mental models and biases and all the rest, then we wouldn’t need scenarios. We wouldn’t need brainstorms and creativity; we wouldn’t need a lot of things, because we would just very rationally and logically come up with the correct answer to all of these things. But the challenge, of course, is that we do have all of these biases, and most of the time, that’s wonderful. Most of the time, that’s an amazing benefit of being human: that we can think creatively and challenge the status quo.

So if we, and I don’t just mean the four of us on this podcast, but if we, any listeners as well, are able to help people challenge their assumptions, their mental models, their biases, their habits and move into new ones, that to me is creativity. And that to me, is taking advantage of this beautiful opportunity that’s in front of us.

Brooke: I really like what you mentioned there, especially around McLaren pushing for more rules. Of course, what the rules allow is for more effective scenario simulation, and therefore more effective plans and therefore greater competitive advantage. But it struck me that a lot of what you were talking about there is also the fact that these are muscles. And the more that we exercise them, the better that we do. It’s not just the regulations that give us a more structured environment in which to make decisions, actually the scenario planning itself creates more structured environments. If I already have scenarios for x, y, and z, when I need to think about how I might react to some other new occurrence or phenomenon that I haven’t thought about, I can go back to that library and think about what I’ve already charted out.

And I can also start to build more on the relationships and the practices that I’ve refined through the course of that. And essentially, I might throw it over to you and ask you to talk about the value of plans in this converging crises context where in fact, planning for one type of scenario actually helps us to plan for other kinds of scenarios. And rather than having this Sisyphean situation, where we’ve got this massive boulder that we’re trying to push up the hill, actually, we end up in a somewhat similar metaphor, but in a better direction. Where we’ve got a bit more of a snowball. And once it starts to gain some momentum and roll down the hill, it picks up steam and it picks up snow as it goes. And so, the whole process becomes a bit easier, the more we practice it.

Sanjay: I think some of the key aspects of this are what the scenario planning process does: all the preparation and all the practices that can emerge from it. We have to see scenario planning within the context of either doing cybersecurity work that leads into the practices Michael has mentioned, or doing strategic work for the organization, and of gathering the expertise in people who have the organizational cohesion, or the social cohesion, to tackle effectively the problems that are converging. The practice of scenario planning does build social capital, and that’s what you’re trying to harness. It’s not so much the individual: you may have one person on a board, or on a team in an organization, who “gets” the scenarios or gets the environment, but the problem is all the other folks, who might have more influence, or greater power, or better relationships, and might subvert a very good plan. So it’s about having the social capital within which to have these conversations, be comfortable, and look at scenarios against different time horizons. There might be scenarios that are six months or a year out, because you’re in the middle of a pandemic. There might be ones you need to do that are three or five years out, because you feel you’re moving into the post-COVID environment, and there are some things you want to understand over a longer horizon.

Or there may be cases in relation to trying to reduce carbon emissions a lot by 2050, where you’re looking at an even longer time horizon of these issues. But how do you know who you can have these conversations with? They all have to have been trained in some way to have the conversations and to understand through that process, a little bit about the mental models, the biases and the cognitive heuristics that come into play, and the prejudices. So I think this process is also about, again, gathering the organizational capacity and expanding the organizational capacity to deal with crises when they pile up on each other a bit as they’re starting to do today.

Brooke: Thanks for that, Sanjay. Alan, I’ll throw it over to you next for a bit of a summative word.

Alan: Sure. Well, look, I think my greatest hope is that these scenarios are useful. It’s not a question of the scenarios being correct or not correct, as we try and predict what’s going to happen in terms of cybersecurity, but we’ve made an effort to make them reflect a range of possibilities and I hope that they’re useful. I hope that they help people stretch their perspectives and become better prepared for the things that might happen. I hope that what we’ve laid out here is that there’s a real competitive advantage to be had in that. And if we’ve helped even one organization or person become better prepared for this sort of thing and minimize the damage, then I’m delighted.

Brooke: Michael, I might throw it over to you for one last word pointing out that if anywhere the risks are becoming more and more visible, just looking at the headlines, cybersecurity is for sure going to be up there.

Michael: The last few months have just been amazing, with the SolarWinds attack on nine US government agencies and 18,000 corporations, and the Microsoft Exchange Server attack, which has impacted hundreds of thousands of companies. And we’ve seen two effects of that, with the JBS company having to shut down its meat-processing operations in the United States, and the Colonial Pipeline having to be shut down, cutting off gasoline and jet fuel supplies to the Eastern United States. So it’s absolutely critical, and I applaud the Colonial Pipeline decision: they had a plan. Now, the consequences of the plan were unfortunate and disruptive to a large number of people, but the consequences of them not shutting down the pipeline could have been disastrous, as in Ukraine, when there was a Russian hack on a pipeline that caused it to blow up. And there could have been loss of life.

So somebody at Colonial Pipeline made a really courageous decision to take the safe way out, causing some inconvenience, but there was no harm done. The interesting thing about most scenarios, for most companies, is that it’s important to bring in a third party to create the scenarios. Because, as Einstein once said, “We can’t solve problems with the same thinking that created them.” The people inside the company have built up their systems as carefully as they could, and they’ve put them together thinking of all the possible things that could go wrong. So it’s really a good idea to bring in a third party to challenge the status quo and to try to bring in new ideas that will get people to think about things that they didn’t think about before. And that’s where Alan, Sanjay and I have had the most fun and been the most impactful: by helping people to think about things that they hadn’t thought about, and to experience those things through a scenario.

Brooke: That’s really helpful. And I think, my own summative word there is that the practice of generating scenarios is not about guessing the future. It’s about being better prepared for a range of futures. Because a future is coming, of that there is no doubt. The only thing we don’t know is which one. And planning makes us better prepared, whether or not we guessed right.

Sanjay: Yeah, I’d say which ones. We’re operating in this multiplicity of contexts. And the more you’ve engaged in this, the better prepared you will be, and better prepared at a few things in particular. One is getting into the operational preparation and how this can affect your strategy. But also, because we’re in a converging crises environment, people talk about accelerating change, and that means we’re going to have to practice this orientation and reorientation to our context much more frequently than we’ve done in the past. And so Michael is saying this in his work, and I won’t speak for Alan, but I believe Alan is seeing the same thing: organizations are more regularly trying to orient and reorient themselves. And this may mean that you have to do scenario work more often than you used to. You might have to have parallel teams running scenarios to get more of this reorientation capacity.

And I think BCG has done a lovely job in some of the work that Alan’s working on, called the ‘Uncertainty Advantage’. And I’ll just highlight that a little bit, because we’re really trying to figure out how all of us can have an advantage in a very uncertain context. And that includes private sector, public sector and even social sector organizations, with whom all of us work. Because we don’t always just work with organizations with lots of money. We sometimes need to support organizations with foresight who have fewer resources, but need to feel they have some agency and ability to think clearly about their path forward too.

Brooke: All right Gentlemen, thank you very much for this conversation. It’s been really great.

Michael: All the best, Brooke.

Brooke: Thank you very much.

Brooke: For you listeners out there, if you’re interested in learning more about practical applications of scenario planning, in cybersecurity and more generally, please check out the recently published report that the four of us have put together. You can find the link for it on our website, https://thedecisionlab.com. You can also reach out to us directly; our email addresses are listed in the report as well as on the podcast episode page. And here’s a bit of a reveal: we’ve also got something special in the works. We’re putting together a webinar panel exploring some lived experiences of cybersecurity and scenario planning. That will be happening in the autumn of 2021. Which hopefully we won’t be calling the fall of 2021. It feels like we’ve had enough falls recently.

It’s going to be a free event, and you can sign up to the TDL newsletter to keep up to date on that. We’ll also have more information on the podcast episode page as it becomes available. Finally, we wanted to address an issue that’s near to our hearts, as has come up several times in the conversation: diversity is a problem in the cybersecurity world and in developing responses to cybersecurity threats. We’re conscious of the fact that we’re four men talking about this issue on today’s episode, though we do come from different racial [ethnocultural], disciplinary, and other backgrounds. Nevertheless, communities of color, women, and other minority groups are typically underrepresented in cybersecurity and planning.

Disadvantaged communities are also more vulnerable to cyber threats. Regular listeners to the show might remember the conversation that I had with Kimberly Seals Alors about the cybersecurity premium that she pays in running a Black [owned and operated] technology company, which needs to fend off cyber attacks targeting it specifically because of the race of its employees and the user base that it serves. TDL strongly supports diversity and inclusion, so to women, people of color, and members of traditionally disadvantaged or underrepresented groups: this conversation is for all of us, and we look forward to connecting with you at the webinar in the autumn. Thanks again for listening and we’ll talk soon.
We want to hear from you! If you are enjoying these podcasts, please let us know. Email our editor with your comments, suggestions, recommendations, and thoughts about the discussion.

About the Guests

Sanjay Khanna

Sanjay Khanna is a strategic advisor and foresight expert. Previously the futurist at global law firm Baker McKenzie, today Sanjay works with leading organizations to illuminate risks and opportunities associated with the converging crises of geopolitical fragmentation, socioeconomic reordering, population health issues, technological acceleration, and environmental and climate change. The Financial Times, The Globe and Mail, and the Canadian Broadcasting Corporation, among other media outlets, have featured Sanjay’s wide-ranging insights on twenty-first century change.

Alan Iny

Alan Iny has the pleasure of spending his entire working life on creativity in business. With Boston Consulting Group since 2003, he has trained thousands of people on how to think creatively, and he works with a wide range of clients across industries worldwide. He also has a deep focus on navigating uncertainty, using scenarios to help clients think more expansively about the future. Alan is a member of the firm’s Corporate Finance & Strategy practice leadership team and has expertise in innovation, transformation, and organization design. With BCG’s Luc de Brabandere, he co-authored ‘Thinking in New Boxes: A New Paradigm for Business Creativity’ in 2013. The book is available in 11 languages, and Publishers Weekly called it “A must-read for anyone in a leadership position who dares to look at the world in new ways”.

Michael Coden

Michael Coden is Head of the Cybersecurity Practice at BCG Platinion, the part of Boston Consulting Group that provides deep technical expertise. Michael has over 30 years of experience in cybersecurity strategy, organization, processes, technologies, research, product design, and markets for both users and producers of cybersecurity products in all industry and public sectors. He has advised clients in the Americas, Europe, the Middle East, and Asia, and is the North America lead for Cybersecurity at BCG. 

Michael is also co-founder and Associate Director of the research consortium Cybersecurity at MIT Sloan, and was involved in the U.S. NIST Cybersecurity Framework process. He served as Editor of the ISA99/IEC-62443 Cybersecurity Technical Report and Standard, and has published numerous scholarly articles as well as a book. He holds 16 patents on cybersecurity hardware and software technologies.

About the Interviewer

Brooke Struck

Brooke Struck is Research Director at The Decision Lab. He holds a doctorate in philosophy of science. His dissertation research focused on the relationship between quantitative and qualitative research methods, and on the relationship between research and other social systems such as language, history, and politics. Since finishing his academic work, Dr. Struck has worked in science and innovation policy, first within the Canadian federal government and subsequently in the private sector at Science-Metrix. In recent years, he has been researching the interface of big data analytics with organizational decision-making structures, especially in policy-making contexts.

Listen to next

Insights

Combatting Digital Addiction

The Covid-19 pandemic has led to an emphasis on digital communication methods and, for many, a deepening technological addiction.