Mind the Gap (1/2): Environmental Behavior and Observed Consequences

The Disconnect

From an early age, we learn that negative behavior is followed by a negative consequence. Misbehave in class, get detention. Steal something, get arrested. Drink too much and wake up with a hangover. In each case, we can directly attribute the negative consequence to our own negative behavior.

When it comes to the environment, however, there is a disconnect between our behavior and its perceived consequences. The temperature doesn’t increase by 10 °C every time you drive to work instead of walking. There isn’t an immediate outbreak of MRSA every time you fail to finish a course of antibiotics and discard the unwanted tablets in the bin [1].

This disconnect between behavior and consequence is deepened further by the fact that the greatest emitters of CO2 are not the ones who bear the brunt of the consequences. As the developed world continues to damage the environment, it is the developing world that is disproportionately affected [2].

If we change our behavior, such as by walking to work or buying environmentally friendly products, there is no observable change in the environmental outcome. If there is no observable benefit to individual green behavior, how can we motivate individual consumers to behave in an environmentally responsible manner?

Given the sheer scale of emissions and waste, the environmental benefit of ‘green’ behavior will only be noticeable if a sizeable number of individuals engage in the same green behavior [3]. To observe the benefits of green behavior, green behavior has to become the social norm.

Bring a Friend

Social norms are a powerful and well-documented phenomenon, referring to the behaviors and actions that we perceive to be acceptable and ‘normal’. If we perceive a behavior to be a social norm, we are highly likely to partake in that behavior, even if we know it is wrong or goes against our moral compass [4].

Our environmental behavior is highly susceptible to the actions of others and perceived social norms.

In one study, a promotional flyer was left under the windscreen wipers of several cars in a multi-storey car park that was either littered (littering is the social norm) or clean (littering is not the social norm) [5]. When subjects returned to their cars, the rate of littering (discarding the flyer on the ground) was observed.

When the car park was clean, 14 % of subjects discarded the flyer on the ground, whilst 32 % did so if there was already litter in the car park. The perception of littering as a social norm more than doubled the rate of littering.

Observing someone else discard the flyer on the ground had a similarly drastic effect. In a littered car park, viewing someone else discard the flyer increased the rate of littering from 32 % to 54 %. As the social norm was more explicitly emphasised, more subjects copied the behavior and discarded the flyer on the ground.

However, if the car park was clean and the subject viewed someone discard the flyer on the ground, the rate of littering decreased from 14 % to only 6 %. In this instance, it appears that viewing another individual violate the social norm caused subjects to adhere to it more strongly, reducing the littering rate by over 50 %.

In this study, it is highly unlikely that the subjects were pro-littering. In each instance they simply went along with the perceived social norm: when others had littered, the subjects were more likely to follow suit.

A similar study was conducted on towel reuse in hotels [6]. When a towel hanger in the hotel room stated the environmental benefits of towel reuse, 36 % of guests reused their towels. Switching the message to ‘75 % of guests reuse their towels’ led to an increase in towel reuse, with 44 % of guests choosing to reuse their towels.

Giving the perception that towel reuse was a social norm amongst guests made occupants of the hotel room more likely to copy the behavior, even though the majority of guests did not actually reuse their towels.

In the same study, towel reuse could be increased further by making the message relate more directly to the guests in the hotel room. When the message was altered again, to ‘75 % of guests who stayed in this room reused their towels’, the number of guests reusing towels increased again, to 49.1 %. The more explicit the norm, the more likely we are to follow it.

However, here we run into a chicken-and-egg problem. If green behavior is the social norm, consumers are likely to behave in an environmentally responsible manner. But to establish green behavior as a social norm, consumers must behave in an environmentally responsible manner to begin with.

Whilst social norms provide a medium to maintain green behavior, we still have to get consumers to exhibit green behavior in the first place. So how do we influence enough consumers on an individual level to establish green behavior as a social norm?


Mind the Gap (2/2): Environmental Behavior and Observed Consequences

Go With The Flow

Frequently, consumers do not behave rationally when making decisions [7]. Even when faced with more beneficial alternatives, consumers tend to revert to a known ‘default’ choice, or ‘go with the flow’ and choose a pre-set option that is provided to them [8]. This behavior becomes more common as the complexity of the information surrounding the decision increases [9]. However, when consumers are provided with a green ‘default’ choice, they are much more likely to engage in green behavior.

Schönau is a small German town with a population of 2,500, located near the Black Forest. In 1997 a citizens’ initiative purchased the Schönau electrical grid and began purchasing its energy from renewable sources, such as solar energy. When the German electricity market was liberalised in 1998, this initiative became EWS (Elektrizitätswerke Schönau, the Schönau power company) and, by default, the incumbent electricity supplier in Schönau.

Fast forward to 2006: 1,669 out of 1,683 electricity meters in Schönau were still supplied with renewably derived electricity from EWS [10]. Nearly every energy consumer in Schönau remained with the green default provider. In nearby towns, where the default is non-renewable energy, only 1 % of electricity meters were supplied with electricity from renewable sources. Clearly, energy consumers are reluctant to change their ‘default’ tariff or provider, even when faced with a superior monetary or environmental outcome.

In 1999, another German electricity provider, Energiedienst GmbH, began offering three new energy tariffs, and letters were sent to 150,000 private and business customers explaining the new options [10]. Customers could choose between a slightly cheaper green tariff, an 8 % cheaper non-green tariff, and a 23 % more expensive (but greener) tariff. Customers wishing to switch to the latter two tariffs were required to respond; otherwise they would be moved to the slightly cheaper green tariff automatically.

After two months, 4.3 % of customers had switched to the 8 % cheaper (non-green) tariff, and 1 % had switched to the more expensive (but greener) tariff. 94 % of customers, however, did not respond, ‘picking’ the default green option even though a significantly cheaper or greener option was available. Whilst providing consumers with a green default led to green behavior in these instances, it is not always possible to establish a green default choice. This is especially true if the consumer has an established behavior pattern that includes a non-green default choice.

If providing consumers with a green default is not possible, are there additional methods for motivating consumers to behave in an environmentally responsible manner?

Breaking The Habit

40 % of the actions we undertake each day are habits, involving no conscious thought or effort and saving our mental capacity for other tasks throughout the day [11]. Whilst some habits, such as recycling, have positive environmental consequences, many do not. How can we intervene and break consumers’ habitual behaviors that have negative environmental consequences?

Behavioral economics tells us that financial incentives, typically subsidies or taxes enacted by the state, are an effective tool for changing both businesses’ and consumers’ behavior. However, if we are going to use money as a motivator, is it more effective to offer financial incentives for good behavior or to financially penalise bad behavior?

Several studies support a ‘loss averse’ model of consumer behavior, in which consumers react much more strongly to the prospect of losing money than to the prospect of a comparable financial windfall [12].
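To make this asymmetry concrete, the sketch below shows the kind of value function behavioral economists use to model loss aversion. To be clear, the functional form and parameter values are Tversky and Kahneman’s published prospect-theory estimates, used here purely for illustration; they are assumptions of this sketch, not figures fitted in the studies cited in this article.

```python
# A minimal sketch of loss aversion using the Tversky-Kahneman (1992)
# prospect-theory value function. The parameter values are their median
# estimates, used here only for illustration; the plastic-bag studies
# discussed below did not fit such a function.

LAMBDA = 2.25  # loss-aversion coefficient: losses loom ~2.25x larger
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses

def subjective_value(dollars: float) -> float:
    """Perceived value of a monetary gain (positive) or loss (negative)."""
    if dollars >= 0:
        return dollars ** ALPHA
    return -LAMBDA * ((-dollars) ** BETA)

bonus = subjective_value(0.05)    # a 5-cent reward for skipping a bag
charge = subjective_value(-0.05)  # a 5-cent fee for taking a bag
print(f"Felt value of a 5-cent bonus:  {bonus:+.4f}")
print(f"Felt value of a 5-cent charge: {charge:+.4f}")
print(f"Ratio |charge| / bonus: {abs(charge) / bonus:.2f}")
```

Run as-is, the sketch prints a ratio of 2.25: the 5-cent charge is felt more than twice as strongly as the equivalent 5-cent bonus, which is consistent with the plastic bag results below.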

A 2012 initiative in Montgomery County, Maryland, that required all retailers to charge customers 5 cents for disposable plastic bags led to a 72 % decrease in plastic bag litter [13]. Clearly, financially penalizing customers for engaging in environmentally damaging behavior had the desired effect.

However, would this initiative work the other way around? What if stores gave customers 5 cents for each disposable bag they didn’t use?

Before the 2012 initiative, several stores already offered incentives and disincentives relating to customers’ plastic bag use. At stores that offered no incentive for plastic bag reuse, 84 % of customers used one or more plastic bags [14]. At stores where customers were given 5 cents for each reusable bag they used for their shopping, 82 % of customers still took one or more disposable plastic bags with their purchase, a decrease of just 2 percentage points.

However, at stores that charged customers 5 cents for each plastic bag they used with their shopping, only 39 % of customers used a disposable plastic bag with their purchase, despite the overall monetary cost/benefit being identical in both instances. When it comes to plastic bags, penalising shoppers for making bad environmental choices is significantly more effective than rewarding them for making good ones.

In instances where financially penalizing behavior is not possible or practical, a softer approach is the introduction of ‘hassle factors’ to more subtly penalize undesirable behavior [15].

A hassle factor is broadly defined as anything that inconveniences us when performing our desired action. Waiting, travelling, and filling out paperwork are all examples of hassle factors that might dissuade us from taking a particular course of action.

Hassle factors are particularly effective at changing behavior in the workplace, where financially penalizing employees is unlikely to be possible. In 2012, chlorinated solvents accounted for 15 % of all solvent use at the pharmaceutical giant GlaxoSmithKline, with the majority being used in drug intermediate purification [16]. Whilst the environmental hazards and costly disposal procedures of chlorinated solvents are well known, their use remains widespread in the chemical industry [17].

Whilst the use of these solvents was sometimes necessary, in most cases more environmentally friendly solvents could be used instead [18]. Employees at GlaxoSmithKline were aware of this, as well as the negative environmental impacts of chlorinated solvent use.

In 2012, instead of an outright ban, the storage of chlorinated solvents was forbidden in the same room as the communal purification equipment. Employees were still allowed to use chlorinated solvents during purification, but had to bring the solvent (typically a 4 kg container) from their own laboratory and then remove it afterwards.

By 2015, chlorinated solvent use had dropped to 9 % of total solvent use across GlaxoSmithKline. The addition of a ‘hassle factor’ (carrying the solvent to the purification lab) was sufficient to dissuade employees from using chlorinated solvents unless they had to, without the need for an outright ban.

The Future’s (Almost) Green

With demand for water, energy and natural resources set to skyrocket in the next decade, our ‘default’ action cannot be to carry on with our present rates of consumption and waste [19]. In the absence of viable technologies to sustain our current lifestyles indefinitely, the onus is on us, as consumers, to change our behaviors [20]. Research indicates that 75 % of people who read this article do.


Why Is the Backward Research Method So Effective?

Applied psychologists, and marketers in particular, conduct research to choose one action over another. Market researchers, for example, want to answer questions such as:

What price should we charge for the product? Which customer segments should we target? Which advertisement will consumers find most appealing?

Similarly, educational researchers have questions regarding the effectiveness of a particular instruction method, and organizational psychologists may want to study how different incentive structures affect the performance of employees. For such applied research, the main criterion of success lies in actionability – how well findings from the research can be used to make a decision on how to act.

This goal is different from that of academic psychological research, which generates basic knowledge, or a deeper understanding of a particular phenomenon, without consideration of the findings’ actionability or of solving particular problems.

In this post, my focus is on applied research. I want to discuss how the traditional applied research process works, and why it often fails to produce actionable results. The backward research method, developed by Professor Alan Andreasen, turns the traditional research process on its head, solving its main weaknesses by forcing the decision maker and researcher to think at the outset about what specific results will be obtained and how they will be used. Because of this deliberate up-front work, more reliable and actionable outcomes are ultimately obtained. So far, the method has been used mainly by market researchers, but any applied psychological researcher will benefit from designing research the “backward” way.

How the Traditional Applied Research Process Works

Applied research is done because someone encounters a problem. A bank manager may notice that the rate of new customers joining a branch has slowed markedly, and is far lower than predicted at the beginning of the year. She wants to know what’s going on, and how to fix this problem. So she hires a market researcher and explains the problem. After an initial consultation, the researcher makes a list of possible causes (e.g., customers are unhappy with the service at the branch, employees are slacking off because they are disgruntled with low wages, long hours, etc.), and then designs studies to investigate which of these causes is responsible for the slowing sales.

Interviews and surveys are conducted with customers first, followed by employees. The data from these studies is analyzed, the results are compiled carefully, and a glossy report is produced and delivered to the manager. The report states that some customers are delighted with the bank’s service, a few are unhappy, but most are indifferent. The employees, too, have mixed opinions.  In the end, the report tells the manager mostly what she already knew, and fails to provide her with a solution to her “slowing sales” problem. What is more, such research is expensive. I have seen tens of thousands of dollars spent on market research projects in this way, without achieving any actionable outcome.

Why did this happen? It happened because the researcher and the manager did not spend enough time or effort at the beginning fleshing out the core issues and thinking about which potential solutions are, and are not, on the table.

How the Backward Research Method Works

The backward research method was first described in a 1985 Harvard Business Review article by marketing professor Alan Andreasen. In my view, this is one of the most powerful and useful articles that any applied psychological researcher can read, and I strongly recommend it. Here’s how the method works.

Unlike the traditional process, the backward research method begins with the manager and the researcher spending a significant amount of time discussing how the manager will use the research findings and in deciding what the final report will look like, even going so far as to map out the empty tables and the skeletons of the graphs and charts that will appear in the final report.

This may sound like going too far. But it really works, because it forces the manager (or the decision maker commissioning the research) to think deeply about actionable solutions to the problem, and to carefully consider what actions she is, and is not, willing to take. For instance, to deal with sluggish new account openings at the bank, the manager may be willing to offer gifts to current customers for referrals. She may also be willing to run a sales promotion offering a better interest rate on new accounts, but only if competitors are doing this. But she may not be willing to advertise in local media, or to change employee pay-scales or work schedules. Once these constraints are understood, the market researcher can focus on designing research to study the receptiveness of potential and current customers to incentives like better interest rates and gifts. And he can drop the ideas of conducting a study with employees or of testing the effects of a radio commercial.

Thinking through what the final report will contain and look like also allows the manager to link the research results to the decision directly, with questions like: What percentage of current customers have to say they are interested in a referral program for her to go ahead with it? What types of incentives are the most preferred? Which customer segments (e.g., over age 55, suburban) will be more willing to provide referrals and should be targeted?
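To make this concrete, a dummy table sketched at this stage might look like the mock-up below. The segments and the 30 % threshold are invented for this bank example, not taken from Andreasen’s article; the point is that the decision rule is attached before any data are collected.

```
Table 1 (dummy): Interest in a gift-based referral program, by segment

Segment              % interested    Decision rule
-------------------  -------------   -----------------------------------
Under 35             __              offer referral gifts to any segment
35 to 55             __              in which at least 30 % of customers
Over 55, suburban    __              express interest
```

Once the research is run, filling in the blanks is all that stands between the data and the decision.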

From An Exploratory Fishing Expedition to Focused Problem Solving

Considering these and other such issues up-front forces the manager to make hard choices about the scope of the research, and about how quantitative values from the research will translate into particular decisions. After the first two steps of the backward research method (determining how the results will be used, and what the final report will look like), the rest of the process unfolds just like the traditional process, but more smoothly and with greater certainty about the usefulness of the ultimate outcomes.

The backward research method requires a lot of up-front effort and discussion among those who will make decisions or act and those who will design and conduct the research, before any research has taken place. Instead of a kick-off meeting that takes an hour or less, the backward research method may require ten hours or more of intense collaboration at the very beginning. But the pay-off will be huge. Having used this method dozens of times when working on applied market research projects with companies and non-profit organizations, I can attest to its effectiveness. It has worked for me every single time, leaving my clients completely satisfied and in possession of an actionable outcome. If carried out properly, every applied psychological research project will lead to the decision or action it was designed for.

I Think I Am, Therefore I Am

“Man is what he believes.” – Anton Chekhov

On the expansive, straightened shoulders of Amy Cuddy’s now-famous “power poses,” a wave of interest has swept through behavioral science – and indeed, popular culture – around the notion that one can “fake it until you make it” [1]. Headlines tout the saying, offering strategies for success, and human resources managers inculcate the ideal in new hires with Cuddy’s TED talk, the second most popular in the series’ history.

The concept is quite simple: act how you want others to perceive you and, over time, you will come to see yourself that way. Easy to understand and exceedingly optimistic, the “fake it” approach is at best a homespun panacea for self-doubt, and at worst a clever placebo, winking at a deeper truth.

A number of similar findings bolster the influence of self-perception on one’s achievements.

The White-Coat Effect: Positive Perceptions and Performance

At Northwestern University, researchers sought to capture how the clothing one wears affects behavior [2]. In the study, participants were given a white lab coat to put on and asked to perform a series of tasks requiring attention. In one condition, the lab coat was described to participants as a doctor’s coat, while in another, the same coat was described as a painter’s smock [3].

The authors hypothesized that, given the stereotypes surrounding the two professions – attentiveness and care for a doctor, aloofness and creativity for a painter – there would be a difference in how participants performed on the attention task.

Indeed, the participants who wore the doctor’s coat performed significantly better than those who wore the painter’s smock – despite the fact that it was the same piece of clothing. This effect, which the authors dub “enclothed cognition,” is striking, and gives credence to the subconscious influence of perceptions on outcomes.

Stereotype Threat: Misperceptions and Mal-Effects

Of course, not all perceptions are positive. While the above example shows a way that we can harness perceptions to improve our performance, just as often they can work against us.

This is the assertion of Steele (1997), who coined the idea of “stereotype threat.”[4] The basic concept is that any member of a group for which a negative stereotype exists — even those to whom it clearly does not apply — “can fear being reduced to that stereotype.” To go further, one does not need to believe the stereotype is true to be vulnerable to it.

To test this theory, researchers examined the gender difference in performance on a math exam, with male and female participants of equal (strong) math ability [5]. Inherent in their hypothesis is the common (mis)perception that women are worse at math than men.

Steele and his colleagues gave two versions of the test: on the first, participants were told that, in the past, women had performed worse on the test than men — on the second, they were told the test had yielded no such gender differences. In the “stereotype threat” condition, the gender gap in mean scores was close to 20 points — in the “no differences” condition, it was less than 5. This divide dramatically supports the claim that self-perception and expectation, framed at least in part by ubiquitous stereotypes, can meaningfully impact our performance.

Takeaways for the Workplace

Part of management is the ability to understand — and control — these forces that affect behavior. Both the white-coat effect and stereotype threat imply that it is not ability but rather perception of ability that often dictates how we perform.

The implications for managers are extensive. For one, this research may explain why some classical management techniques that rely on rewarding good and punishing bad behavior fail. If an employee is treated according to how he is perceived, then a poorly performing employee can be stuck in a self-fulfilling loop, where perceptions reinforce behavior and vice versa. More fundamentally, something as simple as how a job or task is framed may dictate who applies to do it, and how well she does it.

Follow-up research on stereotype threat seeks to identify steps organizations can take to minimize its impact in the workplace. Roberson et al. (2007) suggest that directly acknowledging and addressing the presence of stereotypes may be a good place to start [6]. Given that these social forces are present at least at a subconscious level, the logic goes, it is disingenuous to discuss workplace diversity without doing so.

Relatedly, there is a growing body of literature positing that certain types of clothing enhance productivity in the workplace. Slepian et al. (2015) find that the clothing one wears influences cognition, and that more formal clothing enhances global and abstract processing.

What is interesting about the Slepian study is that, rather than mandate a dress code for the formal clothes-wearers in their treatment condition, they simply asked participants to wear what they might “to a job interview.” Thus, though these findings may imply the benefits of a formal dress code, a better approach may be to ask employees to think consciously about how clothes affect their own self-perception, and to dress accordingly.

In this way, we might combine the axiomatic “dress for the job you want” with our earlier “fake it” approach into something like “dress like the person you want to be.” Or, for managers: treat your employees like the person you want them to be.

Closing Thoughts

It is clear that perceptions – both internal and external, implicit and explicit – shape our behavior in profound, unseen ways. Simple societal norms conspire with pernicious prejudices to tell us what we should and should not, or can and cannot, do. Yet simple tweaks in our environment – a reminder that gender has no bearing on ability or, perhaps, a quick power pose – can render these effects insignificant. A clear understanding of, and willingness to act on, this rather banal insight (that perceptions and context matter) will enable the best managers to leverage the power of perceptions for the better.

From an organizational standpoint, small contextual and environmental cues in the workplace can meaningfully alter the behavior and performance of employees. Organizations may find a use in applying these nudges, which can be as simple as striking a power pose.

Nudging In Education: Guiding Principles

Behavioral economics, in the form of nudges to help students and families make more active and informed decisions, has entered the mainstream of American education, guiding practice and policy from when children are barely old enough to toddle all the way through college and beyond. As a result of nudge work over the last decade, school choice information is simpler and more visually digestible; the architecture of school cafeterias subtly encourages children to select healthier eating options; text messages flow in the hundreds of thousands if not millions, prompting parents to sing the ABCs with their children, high school graduates to finalize their financial aid, and college students to meet with their academic advisor.

The infusion of behavioral insights into education holds considerable potential for leveling the decision-making playing field for economically-disadvantaged students and their families throughout all stages of schooling.

Targeted implementation of behavioral solutions can help compensate for low-income families’ lack of access to quality information or advising about their educational options. Well-designed nudges can help students and families make active and informed decisions about the educational pathways they pursue.

A cautionary note about nudges

Like any innovation in education, however, behavioral nudges do not offer a panacea. One question we have to wrestle with is: When should the nudging stop? Much of my research focuses on using text messaging to provide students with simplified information and access to assistance with college and financial aid decisions. When I present about this work, I often hear concerns that we are creating dependencies among the students we’re serving. “You’re texting them reminders about renewing their financial aid,” someone will say. “Are you going to nudge them to hand in their homework in college? To wake up in the morning? Where do you draw the line?”

The flip side of this question is whether students will quickly learn to tune out educational nudges. Maybe putting fruit in front of the cafeteria register instead of candy leads kids to make healthier choices the first few times, but after a while do they just start seeking out the candy?

And to paraphrase Richard Thaler, one of the pioneers of behavioral economics, how do we ensure educators nudge for good, using behavioral strategies to help students and families make active and informed decisions about what’s best for their personal circumstances, rather than use nudges to tell students what’s best for them?

Guiding principles for applying behavioral insights in education

Applied behavioral research in education is nascent enough that we lack empirical evidence to answer many of these questions. What we can rely on in the interim is a series of guiding principles that I believe should steer nudge work in education:

  1. Start with critical junctures: From preschool through college, students and families face a series of transitional decisions that are often complex and hard to navigate, but which can have long-lasting ramifications for how they do in school and whether they pursue additional education. These junctures often occur over a short time frame—a window of only a few months in which students can actively choose which elementary school to attend, which high school courses to take, or whether to apply for financial aid for college.  Families that stick with the default school or course assignment, or who fail to apply for financial aid by priority deadlines, may miss out on the chance to attend high-quality schools, take college-preparatory coursework, or receive thousands of dollars in additional grant aid. I am strongly in favor of well-designed nudges that encourage students and families to make active and informed decisions during these critical junctures.
  2. Prompt active engagement rather than give directions: Nudges that are overly directive run the risk of distorting the choices students and families make, leading them to make choices that don’t align with their goals or interests. An example would be to send students nudges encouraging them to limit how much they borrow for college. On the surface this seems advisable, but what about the student for whom additional loans would allow her to take more costly STEM courses and pursue an engineering degree? Similarly, declarative nudges (e.g. “Make sure you sign up for an after school enrichment program!”) can confuse and worry families who thought they had already completed an important task. Nudges should encourage active thinking and decision making, not tell people what to do. Well-framed and timed questions can also prompt students to reach out for professional guidance when they are struggling with a complicated decision.
  3. View nudges as supplements to, not substitutes for, existing educational investments: Behavioral interventions often have a creative appeal, and their low cost is alluring to educational leaders and policy makers who often are grappling with tight budgets. But the apparent “bang for the buck” of behavioral interventions should not be used as justification for scaling back spending on other important areas in education. After all, interventions to promote more active and informed decision-making only can be successful if there are quality educational opportunities available to students.

With these nudge principles in mind, we have a tremendous opportunity in the coming years to help students and families make more informed decisions and to pursue educational opportunities that allow them to realize their full potential. Over the long term, we should continue to invest in the systemic changes necessary to ensure that every student in America has a high quality education. But for the students who need our help today, behavioral nudges offer a powerful strategy for creating lasting educational improvement among disadvantaged populations.

This article originally appeared on the Brookings Brown Center Chalkboard blog (https://www.brookings.edu/blog/brown-center-chalkboard/2015/08/06/knowing-when-to-nudge-in-education/) and belongs to its creators.

Gender and Self-Perception in Competition

In my first year of graduate school, I had the privilege of working for one of the most brilliant thinkers I have ever known — let’s call her Sarah. Sarah is objectively smarter than I am. She is also much better-credentialed and more knowledgeable. Yet, as we grew close over the course of the semester, Sarah confided that throughout her life she has often questioned her own intelligence. While this is likely a product of the hyper-intellectual circles she finds herself in, there does seem to be another component of this uncertainty that is socially-induced — one that may explain why Sarah has these doubts, while I tend not to.

Gender and Self-Perception of Intelligence

In an attempt to understand how self-perception of intelligence differs between genders, Hogan (1978) asked nearly 2,000 survey respondents to estimate their own IQs, as well as those of their parents and grandparents.[1] He found that female participants underestimated their own IQ scores, while males tended to overestimate theirs. More shockingly, without exception both male and female participants “projected higher IQs onto their fathers than their mothers.” Throughout the 90s, a number of follow-up studies successfully replicated these findings, and though some argued the effect is due to outliers rather than general differences (Reilly & Mulhern, 1995)[2], Furnham and Rawles (1999) demonstrated that these effects hold even after such outliers are removed.[3]

Expanding on these results, Rammstedt and Rammsayer (2000) showed that gender differences were not significant in overall intelligence, but rather in specific domains — with males overestimating “their mathematical, spatial, and reasoning abilities relative to females” and females rating their musical and interpersonal intelligence as higher than males.[4] The authors note that, because mathematical and spatial reasoning are often the traits most strongly-weighted when considering overall intelligence, these results may drive the differing perceptions of general intellectual ability.

From a behavioral economics standpoint, these gender differences are critically important. If one perceives herself (rightly or wrongly) to be less-qualified, she may also believe her academic or professional potential to be lower. Because elite academic and workplace environments tend to be thought of as highly competitive, self-perception may lead well-qualified candidates to not pursue such opportunities.

Gender and Competition

The academic literature pertaining to gender divides in competitive attitudes typically looks at two related factors: propensity to compete and performance within competition. The former is used as a proxy for individual preferences over competitive environments (i.e., whether one prefers to compete or not) while the latter relates to some of the observable gender differences in competitive environments, such as timed standardized tests or high-frequency stock trading.

In a 2011 paper aptly titled “Gender and competition,” Niederle and Vesterlund review a number of the foundational studies on the topic.[5] One of these findings is that women “respond less favorably to competition than men,” and thus, self-select into competitive environments less frequently. Conversely, men are highly likely to choose competition, a fact they attribute to varying levels of confidence in one’s own ability. Citing their own previous study from 2007, the authors note that, while women are less likely to choose competitive environments regardless of ability, men demonstrate over-confidence in their relative ability when selecting competitive environments.[6]

This effect was studied as follows: participants in a lab experiment were placed into groups with equal numbers of male and female peers and asked to complete an individual task, initially compensated based solely on their own output. This individual compensation scheme is referred to as “piece-rate” incentives. After receiving feedback on their performance, participants were then entered into a “tournament” compensation scheme, in which payments were directly linked to relative performance within their group. In the third round of the study, participants were given the choice of which incentive scheme they preferred, competitive or piece-rate. What the authors found was that, regardless of their own performance on the previous tasks, men were substantially more likely to self-select into the competitive tournament than women — at rates of 73 and 35 percent, respectively. As the authors write, this finding suggests that, given the same ability, men are roughly twice as likely as women to opt into competition.
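As a back-of-the-envelope illustration of why confidence drives tournament entry, the sketch below compares the two compensation schemes for a risk-neutral participant. The payment parameters (a piece rate r, a four-person group, and a winner-takes-all multiplier m) are assumptions chosen for illustration, not the study’s exact figures.

```python
# A minimal sketch (not the authors' code) of the entry decision in a
# Niederle-Vesterlund-style experiment. The piece rate pays r per solved
# problem; the tournament pays m * r per problem to the group winner
# and nothing to everyone else. All parameters are illustrative.

def expected_pay(solved: int, p_win: float, r: float = 0.5, m: float = 4.0,
                 tournament: bool = True) -> float:
    """Expected earnings for a risk-neutral participant."""
    return p_win * m * r * solved if tournament else r * solved

# A risk-neutral participant should enter the tournament iff
# p_win * m > 1, i.e. p_win > 1/m (25 % here, chance level in a group
# of four). Overconfidence in p_win therefore inflates tournament entry.
for believed_p in (0.15, 0.25, 0.40):
    piece = expected_pay(10, believed_p, tournament=False)
    tourn = expected_pay(10, believed_p, tournament=True)
    choice = "compete" if tourn > piece else "piece-rate"
    print(f"believed P(win) = {believed_p:.2f}: "
          f"piece-rate = {piece:.2f}, tournament = {tourn:.2f} -> {choice}")
```

On this simple account, a man and a woman of identical ability who hold different beliefs about their chance of winning will make different entry choices, which is exactly the pattern the authors report.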

Gneezy et al. (2003) further add to the weight of evidence that men not only prefer competition more than women, but also perform better in competitive environments.[7] Using a structure similar to the 2007 Niederle and Vesterlund study, they find, within a competitive task environment, “a significant increase in performance for men, but not for women.” One further distinction they add to the literature is that, when such competitions are within- rather than between-gender, this effect is practically eliminated.

Believing that some of this effect may be due to socialization, Gneezy et al. (2009) sought to highlight the role that societal norms play in gender differences in preference for, and performance within, competition.[8] The authors were interested in how these norms dictate differences in competitive attitudes between the sexes – and took to the field in two distinct environments. As they describe it:

 “One unique aspect of these societies is that the Maasai [in Tanzania] represent a textbook example of a patriarchal society, whereas the Khasi [in India] are matrilineal.”

In both places, the authors asked participants to perform a task and allowed them to choose between a competitive and a non-competitive compensation structure. Among the Maasai, the results were similar to those observed throughout the West: men were nearly twice as likely to compete as women. Among the Khasi, however, the trend disappeared, and women were just as likely to compete as men. The result makes a compelling case for the role of socialization in behavior.

Perceptions and Professions

The implications of these results are many. For one, they underline the enormous role that socialization plays in self-perception. Moreover, they suggest that, ceteris paribus, women may be less likely to pursue careers in industries or roles deemed highly competitive. Indeed, a 2001 study published in the Industrial and Labor Relations Review found that much of the observed gender gap in compensation — i.e., the gender pay gap — is driven by an underrepresentation of women in the highest-compensated jobs within large organizations.[9] Though, of course, a number of complicating societal factors also contribute to this reality, the authors suggest that preferences for competition (dubbed “taste discrimination”) may also be a contributor.

The question that individuals and organizations must engage with is how to combat gender differences in preferences for competition. As the Gneezy (2003) study demonstrates, in within-gender competitions, women perform just as well as men. What, then, can be done to elicit the within- rather than between-gender responses to competition for women?

Perhaps a more fundamental question is how to make jobs and industries that are traditionally male dominated more appealing to female applicants. A study of how graduates from a top MBA program apply to and choose different professions showed a significant divide between genders — with women less likely to apply for finance and consulting jobs than their equally qualified male counterparts.[10] Of course, both finance and consulting are traditionally conceived of as “competitive” professions.

But what is the cause and what is the effect of this reality?

In studying labor segregation between the sexes, the social psychologists Cejka and Eagly (1999) found that “to the extent that occupations were male dominated, masculine personality or physical attributes were thought more essential.”[11] Sure enough, of the eight stereotypically masculine personality traits listed by the researchers, competitive is number one.

Yet, it is not the case that personal competitiveness is a necessary (or even desirable) quality of a high-performing employee in these sectors. Perhaps, then, the culture of these professions has been driven by the fact that they are male dominated — rather than their being male dominated on the basis of their culture. One place organizations can start is by actively trying to dispel the stereotypes associated with them.

The gender divide in both job choice and pay is not simply unjust; it also creates enormous inefficiency. Women, on the whole, are more educated and score higher on measures of intelligence (qualms with IQ tests notwithstanding) than men. Yet the foreboding façade of male-dominated professions is causing well-qualified candidates to apply elsewhere. In order to dismantle this counter-productive construct, we must disentangle the norm from the necessary – the perceptual from the real.