How Cities Trick You Into Better Behavior

What behavioral scientists are doing to improve urban living.

“Do you see the sets of traffic lights on either side?” says Anand Damani, pointing to a chaotic four-way junction from the balcony of his Mumbai apartment. Cars have stopped, blocking the pedestrian crossing. Both the pedestrians and the cars whose turn it is to move are stuck.

“There’s no need for two sets of signals at the same junction,” Damani points out. “Without an identical signal on the other side, drivers would stop at the right place so that they could retain a line of sight with the signal.”

Damani’s suggestion is simple yet effective. One small change could make errant drivers do the right thing, without them even knowing it. But then, he’s an expert: a behavioral scientist in a country where the field has yet to convince the government. “You don’t need to make people mindful to change their behavior,” Damani says. “Subconscious nudges through the environment also work. We call this behavioral design.”

Damani started a behavioral design firm, Briefcase, in 2013 with his partner Mayur Tekchandaney. Their first project, Bleep, aimed to curb Mumbai’s rampant horn-honking.

The method was simple: over six months, Anand and his partner offered their own cars to participants to drive for a week. The vehicles were fitted with a buzzer that lit up and sounded each time the driver pressed the horn, and had to be switched off manually. The device also recorded honking data for analysis. The result: drivers reduced their honking by an average of 61%, simply because the buzzer was such a nuisance.

Damani describes Bleep as a simple, practical and low-cost solution to a big urban problem: noise pollution. “If the government wants to reduce noise levels in cities, it should make Bleep mandatory in all vehicles. We even installed it in the official vehicles of the Joint Transport Commissioner of Maharashtra, but it hasn’t moved further from there,” he says.

Beyond India, other countries have been quicker to embrace behavioral design at government level. The UK, for instance, has a well-established and successful ‘nudge unit’, a public-private partnership called the Behavioural Insights Team. Its projects include persuading people to pay tax on time, increasing organ donor numbers and convincing students from low-income backgrounds to aim for top universities.

Meanwhile, in the US, the Obama administration created a similar unit, the Social and Behavioral Sciences Team (though it appears to have been mothballed under the Trump administration). From helping people conserve electricity to preventing suicide, introducing subconscious ‘nudges’ has proved more efficient than the usual awareness campaigns, which typically use logic to explain why people should or should not do something.

Final Mile’s controversial experiment featured large billboards with staged images showing the facial expressions of a man being run over by a train (Credit: Final Mile)

Take public transport: in Mumbai, on average, between nine and ten people die each day in train-related accidents, many of them while taking a short-cut across the tracks. In 2010, Indian Railways approached the behavioral architecture firm Final Mile to find a way to reduce the deaths.

Among the experiments Final Mile conducted, one in particular grabbed the spotlight for its shocking grotesquery. At Mumbai’s Wadala station, which was notorious for deaths on the track, the firm put up large billboards with images showing the facial expressions of a man being run over by a train. The photos (which looked alarmingly real but were staged) were arranged in three panels, amplifying the horror. The experiments, which also included painting yellow lines on the sleepers to help people gauge the speed of approaching trains, succeeded in deterring people from crossing the tracks. Deaths at the station fell from 40 in the year before the experiment to 10 after Final Mile’s measures.

“People don’t change faulty behavior despite awareness, as humans are not rational beings. So you need new theories to change people’s behavior,” says Biju Dominic, co-founder of Final Mile.

In Mumbai, on average, between nine and 10 people die each day from train-related accidents (Credit: Getty Images)

The idea that human beings are irrational goes against classical economics, which assumes that all decision-making is based on logic. It is the premise of behavioral economics, a discipline that also incorporates psychology and cognitive neuroscience. In 2017, the American economist Richard Thaler, who helped develop ‘nudge theory’, won the Nobel Prize in Economics for his pioneering work in this new field. But is behavioral science as promising and innocuous as it seems? Could human irrationality also be exploited, rather than merely nudged?

Last year, Uber, the global taxi-hire service, found itself on the wrong side of nudging. As the New York Times reported, Uber used “videogame techniques, graphics and non-cash rewards to prod drivers into working longer and harder”, arguably to the company’s gain. For instance, Uber preloaded the next journey before the current one had ended, enticing drivers to continue working without a break. Some local Uber managers adopted female personas online so that the largely male driver workforce might be more receptive to their suggestions. Finally, when drivers logged out for the day, the app would encourage them to keep working, citing arbitrary targets such as beating the previous day’s earnings, the paper reported.

The firm successfully reduced speeding by 50% on the National Highway 44 between Hyderabad and Bengaluru

“Nudges and other uses of behavioral science to change behavior are supposed to be used in the best interest of the people whose behavior the nudges are trying to change,” says Francesca Gino, a behavioral scientist and professor at the Harvard Business School.

Dominic of Final Mile seems to agree with her. Initially, his company helped brands like Unilever market more successfully using behavioral science. “But soon enough, we realised that this science had more width than just helping to sell soap. We eventually moved out of marketing into tackling social problems like open defecation and garbage disposal,” he adds.

Behavioral scientists believe you don’t need to make people mindful to change their behavior; subconscious nudges can work (Credit: Getty Images)

Final Mile has been approached once again by the Indian government – this time to use behavioral science to prevent speeding on the Mumbai-Pune Expressway. In the past, the firm successfully reduced speeding by 50% on National Highway 44 between Hyderabad and Bengaluru. There, Dominic’s team painted sets of parallel white lines running across the road, with the space between the lines narrowing as a driver approached an accident-prone zone. Fooled into thinking the vehicle was accelerating, the driver involuntarily reached for the brake.

According to Dominic, nudges could also be as simple as a change of name. “When you call it the Mumbai-Pune ‘Expressway’, it gives people the impression that the road is meant to be sped on.”

This article originally appeared on BBC Worklife (https://www.bbc.com/worklife/article/20180810-how-cities-trick-you-into-better-behaviour) and belongs to its creators.

The Post-Truth Problem

Our morals and ideologies are the building blocks of shared identities. They are uniquely human concepts that have driven progress towards common goals. Today, they feel far from unifying. But partisan conflict and identity politics are not new, nor are they inherently irrational. So what is it about today’s moral climate that seems so explosive? In our hyper-connected, digitalized world, a Victorian-era essay provides surprisingly relevant guidance.

William Clifford (1877) opens his inquiry, The Ethics of Belief, with the image of a shipowner about to send his emigrant-ship to sea. Prior to raising anchor, the shipowner inspects the vessel’s aging construction and makes note of possible repairs, entertaining doubts about the ship’s seaworthiness and the thought that a thorough refurbishment could be in order. Then again, the shipowner deliberates to himself, the ship has weathered many journeys, and overhauling it would mean delaying the many hopeful families due to board, not to mention the significant financial cost of repairs. After mulling over these inconvenient thoughts and arriving at a sincere conclusion that the vessel would carry the families unfailingly, the shipowner sends the emigrant-ship on its way with peace of mind. The ship then goes on to sink, taking its passengers and their hopes down with it.

Clifford explains that, unquestionably, the shipowner is to blame for the deaths of the emigrant families. Despite the sincerity of his belief in the ship’s sturdiness, he must be held accountable. More contentiously, Clifford also suggests that the shipowner would be no less guilty regardless of the ultimate fate of the journey, because “he had no right to believe on such evidence as was before him” (Clifford, 1877, p. 1). This moral obligation to hold only those beliefs for which you have ample evidence is known as epistemic responsibility.

Fast-forward to the present day, and the implication is that our society is devoid of epistemic responsibility, and that the degradation of social capital — shared ideals of goodwill, trust, and civic engagement — is the result of irrational beliefs and alternative epistemologies being acted out (Lewandowsky, Ecker, & Cook, 2017). In our post-truth age, we are all Clifford’s shipowner, interpreting the evidence before our eyes as it best suits us.

Motivated Reasoning

To rebuild epistemic responsibility in the post-truth age, where conventional norms of consistency, coherence, and fact-seeking have been abandoned, we must first consider how our individual rationality and collective cognition brought us here. Within an individual mind there is a multitude of heuristics and biases that evolved adaptively but that, in some circumstances, no longer match modern decision-making environments. While the study of heuristics and biases has seen a renewal in popularity through the introduction of behavioral economics, these cognitive miscues have long been empirically demonstrated (e.g. Macdougall, 1906).

Recently, much attention has been given to the examination of motivated reasoning: the tendency for an individual to unconsciously fit or distort his or her information processing to suit his or her beliefs, goals, or motives. Put simply, motivated reasoning suggests that an individual’s evaluation of the world conforms to what he or she already knows, ultimately driving towards particular, preordained conclusions (Kunda, 1990). Drawing from Kunda’s seminal conceptualization, extensions and remodelling of motivated reasoning for topical settings have led to the creation of the Politically Motivated Reasoning Paradigm, which proposes that information and evidence are processed by individuals on the basis of their social meaning — such as their connection to one’s social identity, group membership, or policy relevance — rather than their truth (Kahan, 2016). Naturally, this avenue of psychological research appeals to contemporary cultural critics as a succinct explanation of why climate change communication is failing, and why bipartisan cooperation feels like little more than a far-fetched aspiration.

Despite the negative connotations one might attach to this manner of self-deception, motivated reasoning, like many other biases, plays a valuable role in an information-flooded world. The ability to “believe what [we] want to believe because [we] want to believe it” (Kunda, 1990, p. 480) helps us preserve our self-concepts and navigate around uncomfortable, dissonant cognitive states, in turn promoting happiness and positive mental health. Besides, there remains a debate as to whether the process of motivated reasoning can be considered a bias (i.e., a systematic rather than occasional deviation from accuracy) at all. As Kahan (2016) explains, “motivated reasoning, far from reflecting too little rationality, reflects too much” (p. 12), because for the average individual, beliefs about a global issue have little bearing on inspiring any policy-level change. Yet those same beliefs may be of vital importance to maintaining solid standing among peers who are crucial to that individual’s emotional and material wellbeing (Kahan, 2016). For example, if the majority of an individual’s friends hold similar beliefs on a given issue such as gun control — as is often the case — a decision to change this belief, while unlikely to inspire organized reform, could very well alienate him from the group. As such, this individual might be making an economically perfect assessment of expected utility by allowing motivated reasoning to guide him through the gun control debate and safeguard the coherence of his position. Viewed this way, motivated reasoning appears to be a completely rational response for individuals in a world of risk and uncertainty, but its prioritization of identity protection over truth-seeking means it may very well foil any attempt at epistemic responsibility.

Connected & Incompetent

Still, zeroing in on a single adaptive cognitive process as the sole cause of ideological segregation is a vast oversimplification, and one that does little to inform potential solutions. To effectively diagnose the polarization at hand, our view of cognition must look beyond the individual and examine the mediums of communication that disperse and legitimize information through our social networks.

The shift towards internet-based media as a news source is well-documented (Shearer, 2018). Major media firms no longer hold a monopoly on information, as the advent of social media platforms allows for a direct link from content producers to consumers and, what’s more, from consumer to consumer. While this creation of a readily accessible epistemic commons has an intuitive appeal, the hyper-connectivity and structure of the social networks through which information is exchanged may be contributing to the degradation of collective competence (Hahn, Hansen, & Olsson, 2018).

Conventional knowledge celebrates the wisdom of crowds: the idea that the collective, aggregated judgment of the many can outperform any expert individual. While this theory indeed holds some validity, it is a romanticized concept, a populist view of human problem-solving in which democratized intelligence infallibly converges on truth. In practice, however, it fails to address the nuances of social influence. That is, in real-world social networks, individual intelligence is subject to individuals’ interdependence. When we turn to one another for information, we delegate a degree of our cognitive autonomy and rely on the competence of others, for better or worse. Given that digital spaces of socialization and news media have grown intertwined, navigating online information truly independently of the influence of others seems an unrealistic task. In other words, a crowd is only as smart as any given individual if that individual crafts the beliefs of the crowd (Hahn, Sydow, & Merdes, 2018). And as new post-truth norms develop in our social networks, where the information we need to know is replaced with the information that we “like,” taking a laissez-faire approach to mediating networked influence may prove damning (Seifert, 2017).
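
To make the stakes of interdependence concrete, here is a minimal, purely illustrative simulation; it is my own sketch rather than anything from the essay or the studies it cites, and the true value, crowd size, noise level, and influencer weight are all assumed for illustration. When estimates are independent, individual errors tend to cancel in the average; when everyone anchors on one visible estimate, the crowd inherits that individual’s error.

```python
import random
import statistics

# Illustrative sketch (assumed numbers): how interdependence erodes
# the accuracy of an aggregated crowd estimate.

TRUE_VALUE = 100.0   # the quantity the crowd is trying to estimate
CROWD_SIZE = 1000
NOISE = 30.0         # spread of each person's individual error

def independent_crowd():
    # Everyone errs on their own, so errors largely cancel in the mean.
    return [random.gauss(TRUE_VALUE, NOISE) for _ in range(CROWD_SIZE)]

def influenced_crowd(weight=0.8):
    # Everyone anchors heavily on one visible "influencer" estimate,
    # so individual errors are correlated and share the influencer's bias.
    influencer = random.gauss(TRUE_VALUE, NOISE)
    return [weight * influencer + (1 - weight) * random.gauss(TRUE_VALUE, NOISE)
            for _ in range(CROWD_SIZE)]

if __name__ == "__main__":
    random.seed(42)
    for label, crowd in [("independent", independent_crowd()),
                         ("influenced", influenced_crowd())]:
        error = abs(statistics.mean(crowd) - TRUE_VALUE)
        print(f"{label:>11} crowd: error of the average = {error:.2f}")
```

Run repeatedly, the independent crowd’s average typically lands within a fraction of a unit of the true value, while the influenced crowd’s average drifts by however much the single influencer happened to err, which is the sense in which the crowd is only as smart as the individual who crafts its beliefs.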

Tragedy of the Epistemic Commons

As things stand, the epistemic interests of individuals are at odds with the interests of our social networks. Motivated reasoning serves cognitive comfort, offering identity protection but restricting critical thought. Our social networks provide dynamic, around-the-clock connectivity, circumventing traditional media filters but relying on individual integrity. Together, investigations of individual and collective cognition along these avenues illustrate the post-truth problem as a tragedy of the epistemic commons, where acting for ourselves on our cognitive impulses is dissolving the possibility of an epistemically responsible, global social network.

Where do we go from here? Evidently, the case of Clifford’s shipowner warrants continued thought. In an era where every action and belief is engraved in data, “no real belief, however trifling and fragmentary it may seem, is ever truly insignificant” (Clifford, 1877, p. 3).