From Theory to Frameworks: Putting Behavioral Science to Work

In the very first class of my Master's in Behavioral Science, our professor wrote in capital letters on the whiteboard: “CONTEXT MATTERS.” I still have my notes from that class, where I doodled around these two words, not really understanding their gravity or why they were being emphasized.

Cut to now. A few weeks ago, the entire world of behavioral science was shaken when a blog made startling revelations about the validity of some high-profile research in the field.1 But that’s not even the first time the field has been called out; the replication crisis has bogged us down for years.2 Yet, somehow, when I read these articles, my main response is not alarm, but rather the conviction that my doodle of “Context Matters” from all those years ago really was correct.

I say that because being an applied behavioral scientist basically means all the research we do starts from scratch, irrespective of replicability or conclusive laboratory results. Each analysis takes place in a completely new context, and possibly with a completely different audience.

How then do we make use of all this academic literature? What’s the point of academic literature if it fails in a different context? How do we make sense of what works and what doesn’t?

Frameworks to the rescue

After a few years of asking myself these questions, I have come to the conclusion that consultants and MBAs were not completely off the mark with their dependence on frameworks. A framework distills the research on a given topic into its core insights and connects the various components in a meaningful way, so that people can understand how complex behavioral mechanisms work in practice. Some prominent examples include the MINDSPACE framework and the COM-B model.

But how do we go about creating such a framework? Let’s take an example of a framework and try to understand how we go about creating it.

Last year, I wrote a piece for The Decision Lab on why we cannot say no to promotional offers. It was a basic framework synthesizing various theories for how promotions or discounts are perceived by consumers.

In this area, there are two dominant hypotheses: the Lucky Shopper hypothesis and the Smart Shopper hypothesis. According to the former, consumers enjoy discounts the most when they come about as a result of external factors; by contrast, the latter states that consumers appreciate discounts more when they deliberately found them under their own power. It’s a question of luck versus skill.

Now, if you look into the literature, you’ll find papers stating that the Lucky Shopper hypothesis cannot be proven; you’ll also find others saying it is the most valid explanation for why people love discounts. The same is true for the Smart Shopper hypothesis. But when you dig deeper into these papers, you’ll also realize that none of them are conclusive in either direction: they worked with small sample sizes, were run as surveys instead of randomized controlled trials, and basically cannot be taken at face value.

That said, this doesn’t mean all this research is useless. These papers still tell us that in certain circumstances, certain factors impact decisions in a certain way. That is still important information; it just needs to be contextualized within the wider body of research. That’s where frameworks come in.

In short, my framework looked like this:

This framework doesn’t center either hypothesis, but instead creates a model for weighing various factors to predict how people might perceive discounts in certain situations.

Why make frameworks?

  1. They provide an exhaustive list of influences. What happens if we don’t go through this process? Let’s take something as simple as loss aversion. Loss aversion is a disputed topic. Are people averse to loss? Maybe they are. Maybe they are not. But, if we go by whichever paper we’ve most recently read and categorically ignore the potential influence of loss aversion on a decision, we risk missing a valuable part of the picture.

    Maybe in our context, because of how our brand is seen by customers (premium brand) or because of the category we operate in (for example, insurance or financial services), people really are averse to loss. A framework that incorporates any and all biases that may or may not come into play, and then allows practitioners to determine to what degree each is relevant, will ensure that we don’t fail to detect an important influence on people’s behavior.
  2. They provide a more fundamental understanding. As an applied behavioral scientist, your solution cannot be a single, one-line statement telling the team what nudge to apply. It has to be a nuanced understanding of human behavior that helps all stakeholders comprehend the behavior at a fundamental level. A framework helps you do that: it helps everyone understand all the factors that can influence a behavior.
  3. They help us solve a variety of problems. When you create a framework, you help open up the problem space. If anything can influence behavior, you should be solving for as many things as possible. A framework approach helps the entire team keep these multiple influences in mind and think of ways to overcome them.

So, how does one create a solid framework?

  1. Assemble an exhaustive list of inputs. This is where the massive academic literature comes in handy. Comb through the literature and find all possible publications on the subject. Some may not prove anything conclusively. Some may support the exact opposite conclusion of what you intend. What matters is that you collect as much information as possible on everything that impacts the behavior you are trying to study.

    For instance, to create this framework on discounts, search for all possible research around promotions, offers, transactional utility, and so on. Sometimes, even adjacent literature, like the literature on casinos, might give you clues. Make a note of everything that impacts the behavior. Conclusive, inconclusive, positive, negative: it does not matter.
  2. Connect the dots in logical order. Armed with the list of inputs, think of logical ways in which you can connect them. Some will merge together; some will stay separate entities. Skill and luck are opposite ends of the locus-of-control spectrum, so they come together. A few factors combine to become “controllability.” And so on.
  3. Now run the framework in different contexts. Because the framework was created as an exhaustive list, it is important to run it in different contexts and come up with case studies of when a factor works and when it doesn’t.

    For instance, based on your research, you can now say that the Smart Shopper hypothesis seems to be relevant when purchasing products like electronics, because consumers put in the research before deciding what exactly to buy. When they find themselves a discount through extensive research, they feel better about themselves. However, for the same people, the Lucky Shopper hypothesis might still apply in some cases, like when given a scratch card to get a week’s worth of groceries for free. These shoppers may find themselves overjoyed, because they got “lucky.”

    The important thing is that there are no limits. Theoretically, everything can be applied to everyone. But how can that be feasible?
  4. That’s why you experiment! Now you know all the factors that can impact how people feel about discounts. But how do you know exactly which of these is at work in your particular context? You experiment. The framework merely laid down the guiding principles to help you come up with all possible experiment ideas. Create an exhaustive list of influences, design experiments around them, weed out ideas that seem unreasonable or intuitively wrong, and finally, decide what to experiment on. Only at the end of the experiment do you conclusively know what is working in your context (a sketch of such a test follows this list).

Why now?

In the recent debates about the validity of behavioral science research, several troubling questions have come up. Are these studies valid outside university labs? Are these studies valid for audiences outside of the WEIRD (Western, Educated, Industrialized, Rich, and Democratic) world? Are these studies valid in all contexts? Will these studies still hold true a few years from now? Are these studies replicable?

In the midst of all this intense debate, there are two lessons for behavioral science practitioners:

  1. Overreliance on one study to answer all of the questions above is misguided. No single study can conclusively prove everything.
  2. Focusing on proving the existence of one effect on one small audience may or may not result in research that stands the test of time.

Instead, the whole mental model around understanding human behavior is gradually shifting from focusing on one influence to a more nuanced understanding. Frameworks reflect this changing paradigm, transforming behavioral research from a black-and-white understanding of human behavior to a more holistic model. As an applied behavioral scientist, I have been witness to this transformation and can vouch for the increased practicality of behavioral science thanks to this shift towards frameworks.

Concluding remarks

Do frameworks always work? Maybe not. But given what we know about our subject, drawing conclusions based on academic research alone is not reliable either.

As my first day in behavioral science taught me, context matters. Frameworks help test academic research in different contexts more rigorously. They give us a means of respecting academic literature while, at the same time, adapting it for practical use. If none of the factors in your framework end up working for you, you know there’s a gap in the literature. But if even one factor works, you know the framework helped you rule out so much more. Just for that, it’s worth the effort.

Why There’s No Such Thing as “Just Asking Questions”

In one episode of the NBC sitcom Parks and Recreation, our protagonist, Leslie Knope, becomes embroiled in a political scandal when her efforts to secure funding for her governmental department result in the imminent closure of a local animal shelter, imperiling its resident cats and dogs. Wasting no time, one of Leslie’s political opponents (played by the delightful Kathryn Hahn) appears on local TV for comment. 

“I’m not saying Leslie Knope is a dog murderer, per se,” Hahn’s character says smilingly. “I just think her actions raise some questions. Like, for example, is she a dog murderer?” 

“Well, I don’t know the answer to that, Jennifer,” says the show’s host gravely, “But your tone makes me think, yes.” 

This absurd scene is, of course, fictional. But in the real world, you may have had the displeasure of overhearing some similarly cynical questions: “Do vaccines really work?” “Aren’t there any alternative treatments?” “Shouldn’t we just let COVID do its thing?”

Sometimes, questions like these are sincere, and our job as behavioral scientists is to answer and navigate them in earnest. Other times, however, they are used to sow misinformation and distrust among the general public. In these cases, our job as behavioral scientists is instead to figure out how to stop bad questions from interfering with good policymaking.

To do that, though, we need to figure out why questions can be misleading in the first place. Thankfully, there is already quite a bit of work, in psychology, linguistics, and philosophy, that can help us understand why questions can lead us astray.

How do bad questions work? A psychological perspective

Questions, just like statements, can trigger all sorts of cognitive biases. For example, consider framing effects, first described by Tversky and Kahneman in 1981,1 and recently replicated in 2015.2 Framing effects occur when individuals respond differently to the same information depending on how it is presented: for instance, people respond more positively to “the glass is 50% full” than to “the glass is 50% empty,” even though the two statements convey the exact same information.

Many studies point to similar effects occurring in public opinion surveys. Two questions might solicit the same information, but, depending on how each question is framed, participants might give entirely different answers.

This is an intuitive point. Compare these two questions, from a 2003 Pew Poll:4

  1. Do you favor or oppose taking military action in Iraq to end Saddam Hussein’s rule?
  2. Do you favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties?

For the first question, 68% of participants said they favored military action, while 25% said they did not.4 But when the cost of going to war was made explicit in the second question, their attitudes changed: only 43% said they favored military action, whereas 48% did not.4  Even though the questions ask for the same information, the framing effect led to a massive shift in how participants responded.

A linguistic perspective

Framing effects come into play with all kinds of utterances, statements included. Questions, however, have their own unique linguistic properties. According to most standard models, a question denotes a collection of possibilities: its possible answers.5 Once asked, a question is added to what some linguists call “the Question Set”: the questions everyone in a conversation is committed to answering.6,7 When I ask “Where is my ice cream?” I am asking others to sort through the set of possibilities to fill in the blank (“His ice cream is at ___”) so that we can figure out what the right answer is.

A good question gives us the right blank to fill in. It chops up the possibility space in a way that lets us sort through the options that matter for our goals (e.g., finding the truth). A bad question, in contrast, chops things up wrong. It forces us to spend time sorting through the wrong possibilities, distracting us from the issues we really want to solve.

This is a common, but subtle, occurrence. For instance, some people within institutions of higher learning ask “Will diversity harm our research?”, prompting others to spend time trying to assure them that it won’t. However, in spending time sorting through the possibilities of “diversity harms rigor” and “diversity does not harm rigor,” we end up forgetting about the possibility that diversity could help research.8,9 (In fact, the historical record of figures like Francis Cecil Sumner, Albert Sidney Beckham, and Kenneth and Mamie Clark indicates that this is almost certainly the case.) Even without any malice, the question derails the conversation from the possibilities that really matter.

A philosophical perspective

It’s worth remembering, though, that communication does not exist in a vacuum. We don’t just know our native language(s); we use them. And just like any other human action, what we do with our words is governed by certain sets of norms. 

Many epistemologists—people who study the nature of knowledge—have recently written about the norms that govern how we communicate what we know. Famously, the philosopher Timothy Williamson argues for a knowledge norm of assertion: don’t assert what you don’t know.10 

Importantly, we assume that others follow this norm, too. This is why, when someone says “you left your ice cream at home,” I am inclined to think they’re right. I assume they know what they’re saying because they’ve said it. 

Recently, this approach has been extended to questions, too. And while we don’t yet know what the right norm is (that’s philosophy for you!), one thing we do know is that, when someone asks a question, we assume they don’t know the answer.11 This is fairly intuitive: it’d be really weird for me to keep asking you where my ice cream is if I secretly knew it had been in my freezer all along.

In a healthy, cooperative conversation, this norm serves us well. It allows us to ask questions to solicit information we need, and it spares our conversational partner from having to explain what we already know.

Malicious actors, however, can use this norm to their advantage. They can use questions to feign ignorance and pretend that they are just engaging in healthy inquiry. This tactic even has a name: “I’m just asking questions!” A bad actor will ask a clearly inflammatory question; but when met with backlash, they will simply say that they are innocently inquiring because they don’t know the right answer. They exploit the fact that questions signal ignorance in order to hide their ulterior motives.

A case study

Let us return to the questions we started with:

  1. Do vaccines really work?
  2. Aren’t there any alternative treatments?
  3. Shouldn’t we just let COVID do its thing?

For starters, notice that all three of these questions frame things in a negative light. They paint the picture that our current means to reduce the spread of COVID are not working, and the last one even frames deaths from COVID as an inevitable fact of life. This, of course, ignores the fact that 1) our current measures (masks, vaccines, and social distancing) are incredibly effective, and that 2) our goal should be to reduce deaths wherever it’s feasible to do so, especially in a pandemic.  

Additionally, these questions chop up the possibility space in a way that leads us astray. We need to address important questions surrounding, say, how to deliver vaccines to groups that have borne the brunt of the pandemic (and who have been traditionally underserved by the medical system), or how to establish post-pandemic accommodations for the people who need them. Instead, the questions above obscure these important issues by having us sort through a possibility space we have already explored.

Finally, and perhaps most importantly, these questions are often not asked in earnest. Many figures use these questions to delegitimize otherwise effective policy interventions, or, more broadly, to promote their political or financial interests. When confronted, however, they simply resort to saying that they are just “asking questions” because they want to find out the truth. (They usually lament how “people can’t just ask questions anymore.”) In reality, they have usually made up their mind. They just do not want to say the quiet part out loud.  

Conclusion

In theory, questions just help us solicit information we don’t know. In reality, though, what we ask has an impact on how a conversation unfolds. Good questions lead us down the right path, towards the truth or some other goal we have. Bad questions lead us astray, and malicious actors can weaponize them to shape a conversation in a way that only benefits them.

Sometimes, as Kathryn Hahn’s Parks and Rec character demonstrates, a question is just meant to get us to say “yes.” But as our media ecosystem becomes increasingly polarized—and as more people receive their news from outside mainstream media12—we have to be vigilant of the rhetorical effects questions can have. Otherwise, we’ll be unable to see how a bad question steers public discourse away from where it ought to go, and we’ll be unable to get it back on track.

Remember: just because someone is, even sincerely, “just asking questions” does not mean those questions deserve answers.

Hyper-personalization: The Silver Bullet to Success in Digital Banking

This article originally appeared in The Times of India and belongs to the creators.

Digitalization, a high-priority exercise, will open new doors to success in the banking industry. However, here is the catch: the approach needs to shift from being an effort that simplifies a bank’s administration and internal processes to a dynamic palette that offers highly customized experiences to consumers.

The internal focus on digitalization, no doubt, results in addressing consumer needs better than traditional methods. However, banks have to ramp up their efforts significantly to suit individual consumer demands. 

New-age fintech companies are already ahead on digital platforms and are luring away digitally savvy clientele. If banks want to stem this tide, they must move from being a “Digital Inside-Out” entity to a “Digital Outside-In” service provider.

In other words, banks must offer more than traditional, transactional banking solutions and enhance their digital capabilities. These digital systems also need to integrate seamlessly with offline touchpoints to offer the best “phygital” experience to banking customers. This will help them transform into “bespoke financial hypermarkets.”

Financial hypermarkets through hyper-personalization

A Deloitte report, The future of retail banking: The hyper-personalization imperative (November 2020), stresses: “Hyper-personalization is imperative for banks, enabling them to respond to customers’ manifest and latent needs.”

Hyper-personalization is all about a consumer-centric approach that leverages real-time data to deliver services, products, and pricing that suit customers’ current, future, and latent needs. These insights are driven by behavioral science and data science, using artificial intelligence (AI).

Transaction banks are staring at a digitalization opportunity that could be worth over a trillion dollars. The prospects are the “digital natives” who will become financially active by 2025. 

Some banks have managed to cross over and launch services like accounts-receivable management, factoring, accounting, and cash-flow analysis for SMEs. A few of them are also partnering with health insurers to enable consumers to pay their health bills.

Banks have to move beyond tangible performance achievements like credit allocation, capital management, and operations, and align their digital skills with customer-centric success metrics. These include factors like technological preferences, and product and service offerings based on a customer’s financial standing, life milestones, and household relationships, such as having young adult children or dependent parents.

This, in turn, will build their brand, boost revenues, improve the financial inclusion of the target base, and, most importantly, extend the lifespan of their relationship with customers.  

The road ahead, from mass segmentation to hyper-personalization, may be fraught with challenges, but banks have to be ready with their A-game. The first step is gathering crucial customer data, which must be critically combed using AI to provide insights into the customer’s current context. The insights thus derived can be used to tailor an attractive recommendation covering cost, terms, penalties, mortgages, and bonuses. Armed with this custom-made proposition, the relationship manager can make a compelling case to the client, thus invoking the power of a hybrid digital-human model.

Hyper-personalization helps in reducing the cost of delivering products and services to consumers, tailoring products and services to customer needs, reworking risk methods, and simplifying products. In short, it helps banks get a 360-degree view of the consumer and send out the right message at the right time.

Challenges to hyper-personalization

There are some stumbling blocks along the path. One is living up to each consumer’s perception of being a “unique user” who ought to receive personalized service.

Second, many banks are unable to effectively leverage the data goldmine they possess because the information is stored on different platforms and legacy systems. The need of the hour is a simple solution that can seamlessly integrate and correlate various platforms and systems effectively. 

Banks also have to reckon with concerns revolving around credibility and trust. They have to communicate constantly with their customers about the reliability of their offerings, and keep them updated on, and convinced of, the range of solutions on offer.

Getting started with hyper-personalization

The following areas can generate a robust hyper-personalization process. If used to full capacity, they can answer the critical customer demands of “what,” “how,” and “why.”

1. Data analytics

Granular consumer data can help banks tailor their engagement strategy to suit different user segments. Banks can monetize the data they possess by differentiating between actionable and non-actionable data. This can help them identify behavioral patterns, model customers’ propensity to buy a product, and offer timely products and services to customers – whether it is a student looking to refinance borrowing or a doctor looking for special interest rates.
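As an illustration of the “propensity to buy” idea, here is a minimal sketch in Python on synthetic data; the features, labels, and model choice are assumptions made for the example, not any bank’s actual pipeline.

```python
# Toy propensity-to-buy model: score how likely each customer is to respond to
# an offer, based on a few (synthetic) behavioral features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
# Columns stand in for, e.g., scaled recency, transaction frequency, and balance.
X = rng.normal(size=(500, 3))
# Synthetic "bought the product" labels, loosely driven by the features.
y = (X @ np.array([1.5, 0.8, -0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
propensity = model.predict_proba(X[:5])[:, 1]  # probability of "buy" for 5 customers
print(propensity.round(2))
```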

2. Behavioral science

This enables banks to refine personalization by exploring, measuring, and predicting consumer behavior. For example, a technology start-up can be offered an attractive corporate credit card with relevant benefits and services. Or a small firm that needs quick funds can be targeted appropriately.  

Here is another example to show how banks can tap into the behavioral patterns of customers. A leading bank uses AI to predict how customers like to redeem credit card points. This allows it to offer customers valuable personalized rewards that they are more likely to appreciate. 

3. Ethnographic research

This answers questions around the “why” of customer behavior. By using ethnographic research, banks can collect data on observed cultural and social influences on customer behavior, rather than intentions as stated in surveys, and do away with biases and beliefs about customer behavior. 

4. Segmenting customers

Through tools like RFM (recency, frequency, monetary) analysis, banks can categorize customers based on their likelihood of purchase. For example, customers can even be grouped based on user events, like users who have completed x transactions in the last week.
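For illustration, here is a minimal Python sketch of that kind of segmentation, using made-up transactions and invented column names rather than a real banking schema: it computes recency, frequency, and monetary value per customer and flags the event-based group described above (at least two transactions in the last week).

```python
# Toy RFM (recency, frequency, monetary) table plus an event-based segment.
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "date": pd.to_datetime(["2024-01-02", "2024-01-20", "2023-11-05",
                            "2024-01-18", "2024-01-19", "2024-01-21"]),
    "amount": [120.0, 80.0, 45.0, 300.0, 150.0, 220.0],
})

snapshot = pd.Timestamp("2024-01-22")  # "today" for the analysis
rfm = tx.groupby("customer_id").agg(
    recency_days=("date", lambda d: (snapshot - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)

# Event-based rule: customers with at least two transactions in the last week.
last_week = tx[tx["date"] >= snapshot - pd.Timedelta(days=7)]
counts = last_week.groupby("customer_id").size()
rfm["active_last_week"] = rfm.index.map(lambda c: counts.get(c, 0) >= 2)

print(rfm)
```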

5. Choosing the right channels to communicate

Bringing in digital services is not just about stacking up new digital touchpoints such as email, social media, apps, websites, and wearables. It’s about building a hub that enables each of these channels to integrate seamlessly with the others and with offline touchpoints to share data. Banks can then leverage that information to build meaningful conversations in real time. No matter which channel consumers choose to connect with their bank, the bank can build upon previous engagements and hold a meaningful conversation with each service user.

When this happens, banks are no longer talking to the masses but speaking one to one, through the innovative and optimum use of the tools of digitization. Therein lies the success of banks in the digital world.

Virtual Brainstorming for an Innovation Advantage in Hybrid and Remote Work

Fear of losing their innovative edge pushes many leaders to reject hybrid and virtual work arrangements. They feel that on-site, synchronous brainstorming sessions are more effective than those done online, and push for staff to return to the office to avoid being overtaken by competitors.

Yet extensive research shows that hybrid and remote teams can gain an innovation advantage and outcompete in-person teams by adopting best practices for innovation. What explains this discrepancy between leadership beliefs and scientific evidence?

Having consulted1 for over a dozen companies on a strategic return to the office, I discovered the root of the problem. The vast majority of leaders have tried to pursue innovation during lockdown by adapting their office-based approach of synchronous brainstorming to videoconference meetings. They found that videoconferences aren’t well suited to traditional brainstorming, and thus feel they need to go back to the office.

Unfortunately, these leaders are stuck with their existing methods for innovation, and haven’t investigated and adapted to modalities better suited to virtual innovation. This failure to adapt strategically to their new circumstances is now threatening their capacity for innovation. 

How returning to the office full time may threaten innovation 

All of the leaders I spoke with shared the same overarching goal: to maximize innovation in the most efficient and effective manner possible. To this end, they all tried to replicate their office-based approach of synchronous brainstorming in this new modality of videoconferencing.

Therein lies the problem. None of the leaders I spoke to had tried to research best practices in virtual innovation in order to adapt strategically to their new circumstances; instead, they tried to impose their pre-existing, office-based methods on virtual work. While understandable in the initial stages of lockdown, it might seem surprising that they would pursue this same office-based toolkit over the many months of the pandemic. Yet, that’s exactly what happened. Thus, these leaders started pushing for a full-time return to the office after vaccines became widely available.

The problem with this approach is that remote and hybrid work are here to stay. Extensive research on employee attitudes2 towards post-vaccine work arrangements showed that 25–35% wanted to work remotely all the time; 50–65% wanted to return to the office with a hybrid schedule (with a day or two of in-person work per week); and only 15–25% desired to go back Monday to Friday on a 9–5 schedule.

These employee desires represent a definite mismatch with the demands of product leaders, the large majority of whom wanted to go back to the office full time. The surveys showed that 40–55% of employees intended to find a new job if they did not get their desired work arrangements.3 Indeed, we know that many have already resigned4 due to their employers trying to force them back to the office.

It’s obvious that having a large portion of your workforce resign is no way to maintain an innovation advantage. That’s why Google backtracked5 from its intention to force all employees to return to campus and permitted full-time remote work to many workers, in the face of mass employee resistance and resignations. Amazon did the same for similar reasons.6 

Even so, these trillion-dollar companies have floundered as they’ve tried to navigate the landscape of post-vaccine work, with the departure of top employees, serious hits to employee morale and engagement, and repeated changes to their return-to-office plans. If these top companies, with supposedly the best leadership and policies, can screw up this transition so badly, no wonder leaders at smaller, less-resourced companies are struggling as well.

The judgment errors blocking innovation best practices 

Leaders often fail to adopt best practices in innovation because of errors in judgment known as cognitive biases. For instance, the rejection of new methods and tools in favor of pre-established ways is known as the status quo bias.

A related bias is functional fixedness, which prevents us from seeing the alternative usages of some object—for example, the ways we might adapt an organization’s existing suite of tools and programs to better facilitate virtual brainstorming.

Finally, the not-invented-here syndrome7 arises when leaders have an antipathy toward practices not invented within their organization. 

In the future of work, defeating cognitive biases in order to thrive means relying on research-based best practices.8 That involves a hybrid model of one to two days in the office each week, while permitting a substantial minority of employees to work remotely full time. This best-practice setup9 will translate to diverse benefits: retention of top talent, creation of flexible company culture, and—most importantly for our purposes—seizing an innovation advantage.

Traditional brainstorming

Brainstorming represents the traditional approach to intentional, non-serendipitous innovation. That involves groups of 4–8 people getting together in a room to come up with innovative ideas about a pre-selected topic. 

At first, everyone shares their ideas, with no criticism permitted. Then, after group members run out of ideas, the pool of ideas is edited to remove duplicates and obvious non-starters. Finally, the group discusses the remaining possibilities and decides on which ones to pursue.

Research in behavioral science10 reveals that brainstorming participants enjoy these sessions and find them to be effective in generating ideas. That benefit in idea generation comes from two areas identified by scientists.11

One involves idea synergy, meaning that ideas shared by one participant help trigger ideas in other participants. Experiments show that synergy benefits are especially high if participants are instructed to pay attention to the ideas of others and focus on being inspired by these ideas. 

Another benefit comes from what scholars term social facilitation. That’s the benefit of social support from working with others on a shared task. Participants feel motivated when they know they’re collaborating with their peers on the same goal.

Personality barriers to traditional brainstorming

Sadly, these benefits come with costs attached. One of the biggest problems is called production blocking.12 

Did you ever participate in a brainstorming session where you had what you felt was a brilliant idea, but someone else was talking? And then the next person responded to that person, and they took the conversation in a different direction? By the time you had a chance to speak, the idea seemed irrelevant, or redundant, or maybe you had even forgotten what you wanted to say.

If you never had that happen, you’re likely extroverted and optimistic. Introverts, however, have a lot of difficulty with production blocking. It’s harder for them to formulate ideas in an environment of team brainstorming. They generally think better in a quiet environment, by themselves or with one other person at most. And they have difficulty interrupting a stream of conversation, making it more likely for their idea to remain unstated.

Those with a more pessimistic than optimistic13 personality also struggle with brainstorming. Optimists tend to process verbally, spitballing half-baked ideas on the fly. That’s perfect for traditional brainstorming. By contrast, pessimists generally process internally. They feel the need to think through their ideas, to make sure they don’t have flaws. Although brainstorming explicitly permits flawed ideas, it’s very hard for pessimists to overcome their own personalities, just like it’s hard for introverts to generate ideas in a noisy team setting. 

Pessimists are also powerfully impacted by a second major problem for traditional brainstorming: evaluation apprehension.14 Many of the more pessimistic and/or lower-status, junior group members feel worried about sharing their ideas openly, due to social anxiety about what their peers might think of them. Moreover, despite instructions to share off-the-wall ideas, many people don’t want to be perceived as weird or out of line.

Finally, conflict-avoidant and/or politically savvy team members may feel reluctant to share more controversial ideas that challenge existing practices and/or the territory associated with high-status team members, especially the team leader. These ideas are often the most innovative ideas, but they are frequently left unsaid.

Other barriers to traditional brainstorming

A problem related to evaluation apprehension is brainstorming groupthink.15 That refers to team members coalescing around the ideas of the most powerful people in the room. In the idea generation stage, groupthink involves lower-power team members focusing more on reinforcing and building on the ideas of the more powerful participants. In the idea evaluation stage, groupthink results in the ideas of the more powerful getting preferential selection.

A final problem relates to group size. The more people you get in a traditional brainstorming session, the fewer ideas16 you get per person. Scholars attribute this loss of efficiency to a phenomenon called social loafing.14 The more people participate, the more tempting it is for each individual to not work quite as hard at generating ideas. They feel—rightfully so—that they can skate by with less effort and engagement. That’s why research finds that the most efficient size for traditional brainstorming groups, in terms of maximizing the number of novel ideas per person, is 2.

As a result of these problems, numerous studies show that traditional brainstorming is substantially worse for producing innovative ideas than alternative best practices.17,18,19 It’s a great fit for helping build team alignment and collaboration, and helping group members feel good about their participation. But you shouldn’t fool yourself that using this technique will result in maximizing innovation. Thus, if you want to leverage innovation to gain or keep your competitive edge, traditional brainstorming is not the way to go.

The final barrier: Team leaders

Leaders often told me that these problems don’t resonate with them. What I explain to them is that they, as leaders, tend to be extroverted and optimistic, as these personality traits facilitate leadership. Leaders, by definition, are the centers of power in product brainstorming sessions: they can interrupt at any time, without any problems, and all groupthink coalesces around their ideas. Because they own the outcomes of the brainstorming meeting and are thus strongly motivated, they don’t feel social loafing. It’s a classic case of the bias blind spot,20 our tendency to not be aware of our own cognitive shortcomings.

When I ask leaders to survey their staff on these issues, employees report experiencing most or all of them. That helps convince leaders that traditional brainstorming is not the panacea they typically perceive it to be.

Virtual brainstorming

Trying to do traditional brainstorming via videoconference is a poor substitute for the energizing presence of colleagues in a small conference room, which weakens the benefits of social facilitation. It’s also subject to the exact same problems of evaluation apprehension as traditional in-person brainstorming. No wonder leaders responsible for innovation dislike it.

Rather than settling for the losing proposition of videoconference brainstorming, leaders need to abandon their fixation on synchronous team meetings. Instead, they should adopt the best practice of asynchronous virtual brainstorming.

Step 1: Initial idea generation

All team members generate ideas by themselves and input them into a shared spreadsheet. You can do so via many software platforms: when I facilitate brainstorming meetings, I typically use a Google Form, which automatically produces a Google Spreadsheet with the responses. 

To tap into social facilitation, the group can input ideas during a digital co-working meeting. You all get on a videoconference call for an hour, turn off your microphones but keep speakers on, with video optional (although preferable). If someone has a clarifying question, they can turn on their microphone and ask, but avoid brainstorming out loud. However, this step is not necessary, especially if the team is geographically distributed such that time zone differences make coordination difficult.

Research has shown21 that to get the greatest number of novel ideas, all team members should be told to focus on generating as many ideas as possible, and informed that the focus will be on quantity, not quality. Likewise, participants should be encouraged to consider contradictions22 between different and often opposing goals in their innovative ideas, such as maximizing impact while minimizing costs. Science has found that this focus on opposing goals facilitates innovation.23, 24

The submissions should be anonymized to avoid evaluation apprehension. However, the team leader should be able to later track each person’s submissions for accountability, as such accountability helps maximize novel ideas.

Step 2: Idea cleanup

The brainstorming meeting facilitator accesses the spreadsheet, removes duplicates, breaks ideas up into categories, and sends them out to all team members. As an alternative, some or all participants can be given access to the Google Spreadsheet and work together asynchronously on this process. If you adopt the latter process, for the sake of anonymity, create throwaway Gmail accounts for those collaborating on the spreadsheet.
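If the form responses are exported as a table, the mechanical part of this cleanup can be scripted. Below is a minimal sketch on invented responses; real entries will need fuzzier matching, and grouping into categories still calls for human judgment.

```python
# Collapse near-identical submissions after light normalization; categorizing
# the remaining ideas is left to the facilitator.
import pandas as pd

responses = pd.DataFrame({"idea": [
    "Async demo videos instead of live demos",
    "async demo videos instead of live demos ",   # duplicate with different casing/spacing
    "Rotate meeting facilitation weekly",
    "Office hours for cross-team questions",
]})

responses["normalized"] = responses["idea"].str.strip().str.lower()
cleaned = (responses.drop_duplicates(subset="normalized")
                    .drop(columns="normalized")
                    .reset_index(drop=True))
print(cleaned)
```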

Step 3: Idea evaluation

After the ideas are cleaned up, all team members anonymously comment on and rate each other’s ideas. Thus, in a 6-person group, each idea should have 5 comments and ratings. The ratings should assess at least 3 categories, each on a scale of 1–10: the idea’s novelty, practicality, and usefulness. Additional ratings can depend on the specific context of the brainstorming topic.
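Once the ratings are in, tallying them is straightforward. Here is a minimal sketch with invented ideas and scores: average each idea’s ratings per category, then rank by the overall mean.

```python
# Aggregate anonymous 1-10 ratings per idea and rank by overall mean score.
import pandas as pd

ratings = pd.DataFrame({
    "idea":         ["async demos", "async demos", "office hours", "office hours"],
    "novelty":      [8, 7, 5, 6],
    "practicality": [6, 7, 9, 8],
    "usefulness":   [7, 8, 6, 7],
})

summary = (ratings.groupby("idea")[["novelty", "practicality", "usefulness"]]
                  .mean()
                  .assign(overall=lambda df: df.mean(axis=1))
                  .sort_values("overall", ascending=False))
print(summary)
```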

Step 4: Revised idea generation

After commenting on and rating ideas, team members do another round of idea generation, either revising previous ideas based on feedback or sharing new ones inspired by seeing what others came up with. In both cases, the process taps into the benefits of synergy by incorporating the perspectives of other team members. 

Step 5: Cleanup of revised ideas

The next step is to clean up and categorize the revised ideas. Use the same process as step 2.

Step 6: Evaluation of revised ideas

Following that, do another round of commenting and rating, this time on revised ideas, in parallel to step 3.

Step 7: Meet to discuss ideas

At this point, it’s helpful to have a synchronous meeting if possible to discuss the ideas. Anonymity at this point is unnecessary, since there are clear ratings and comments on the ideas. Group participants decide which ideas it makes the sense to move forward with immediately, which should be put in the medium-term plans, and which should be put on the back burner or even discarded. As part of doing so, they decide on next steps for implementation, assigning responsibility to different participants for various tasks. 

This kind of practical planning meeting is easy to have virtually for full-time virtual workers. Of course, it also works well to have steps 1–6 done virtually by hybrid teams, and do step 7 when they come to the office. However, it’s critical to avoid doing steps 1–6 in the office to avoid production blocking, evaluation apprehension, groupthink, and social loafing. 

You can also attain the same outcome through an asynchronous exchange of messages rather than a meeting. Yet, in my experience facilitating virtual brainstorming, having a meeting reduces miscommunication and confusion for more complex and controversial innovative ideas.

Does virtual brainstorming work?

Virtual brainstorming appears to solve the biggest obstacles to traditional in-person brainstorming. Here’s the big question: does it work? 

Behavioral economics and psychology research supports the conclusion that digital brainstorming has some advantages over in-person brainstorming. For example, a study14 comparing virtual and in-person groups found that, although participants in in-person groups felt better about their collaboration, the feeling proved deceptive: virtual brainstorming resulted in more ideas being generated. While in-person brainstorming may feel more fun, it actually results in worse outcomes. 

Another group of scholars25 researched the effects of group size. They found that the larger the group of participants, the greater the benefits of electronic brainstorming in terms of ideas generated. That’s because electronic brainstorming is not subject to social loafing. Each participant works by themselves and knows they’re accountable for the quantity of novel ideas, with novelty determined by ratings from group participants.

In fact, research finds that while the larger the in-person group, the fewer novel ideas per person, the opposite is the case for electronic brainstorming. That means with more people, you get a larger number26 of novel ideas per person. That’s likely because of synergy, with a greater total number of ideas inspiring participants to have more additional ideas.

A hidden benefit of virtual brainstorming comes after the initial brainstorming process is complete. While traditional brainstorming leaves a far-from-complete record of ideas, due to sparse notes and fuzzy memories, scholars found12 that the complete record of electronic brainstorming has a substantial benefit as a treasury of novel ideas. As a situation changes, ideas that seemed more practical and useful in the past may appear less so in the future, and vice versa. The group can thus always go back to past ideas and re-rank them accordingly.

My experience implementing it for clients reveals similar outcomes. At first, many participants—especially the more extroverted, high-status, and optimistic ones—complain about the “dry” nature of the process. They miss the fun and engagement of collaborative ideas flying around the table. 

In contrast, more introverted participants take to the process pretty quickly, finding it to be a relief from the cognitive overload of a noisy environment where they can’t hear themselves think. So do more pessimistic and lower-status participants, who are relieved at not having to feel judged for their ideas and worry less about criticizing the ideas of others in the evaluation stage.

After two or three sessions, even the extroverts, including leaders, tend to come around. They acknowledge, even if sometimes grudgingly, that the process seems to produce more novel ideas than traditional in-person brainstorming. In fact, hybrid groups trained on this process, who have the option of doing steps 1–6 in person, nearly always prefer to do virtual brainstorming for those initial steps, while doing step 7 in the office.

That approach creates the maximum number of novel ideas, gaining an innovation advantage. It also provides the optimal experience for most group members, balancing the preferences of introverts and extroverts, optimists and pessimists, lower-status and higher-status members. Team leaders who wisely prioritize integrating introverts, pessimists, and lower-status team members into the team (which is more difficult than integrating extroverts, optimists, and higher-status members) find virtual brainstorming especially beneficial.

Conclusion

If you want to gain an innovation advantage in the future of work, you need to avoid the tendency to stick to pre-pandemic innovation methodology. Instead, you need to adopt research-based best practices8 for innovation in the return to the office and the future of work, such as virtual brainstorming. By doing so, your hybrid and remote teams will enable you to gain a true competitive advantage in innovation.