Unitasking: How to Get More Done in Less Time

Open new tab. Check inbox. Respond to emails. Switch to Facebook. Scroll down mindlessly. Open new tab. Time to get some work done. Take down some quick notes. Text. Repeat. Take a look at the to-do list…gasp.

Have you ever found yourself caught in this exhausting cycle? Once perceived as an elusive virtue, multitasking, the act of dealing with more than one task at the same time, has now been shown to adversely affect the brain regions responsible for higher cognition and information processing, as discussed below. It often slows us down and increases the number of errors we make, all while giving us a false sense of productivity. [1],[2]

The dopamine highs from rapidly switching between tasks establish neural feedback loops that are hard to overwrite. Experience tells us that traditional solutions, like checking emails only three times a day or turning off mobile phone notifications, are nearly impossible to sustain over the long term. Instead, ‘unitasking’ — focusing on one task at a time by clumping similar tasks together, blocking out distractions, and designating relaxation time — may prove a healthier way out of this trap.

What does the science say?

A study by researchers at the University of Sussex aimed to understand the neural processes underlying brain structure alterations caused by multitasking. [3] Participants reported how many hours they spent per week using 12 common types of media (print media, television, computer-based video, music, voice calls, SMS, emailing, web-surfing, etc.). fMRI and VBM analyses revealed a significant negative association between media multitasking index scores and gray matter density in the anterior cingulate cortex, a brain region responsible for higher cognitive and motivational/emotional processes. While this research is, of course, not conclusive, and was unable to establish causality between multitasking and damage to the brain, it highlights the need for further research on how this ubiquitous behavior might impact not just our productivity, but our mental and physical well-being.

Another piece of research, conducted at Stanford University, studied how participants performed on memory tests. [4] Participants were split into “heavy” and “light” media multitaskers based on the mean number of media they reported consuming simultaneously. Heavy media multitaskers were found to be less likely to filter out irrelevant information, showing a bias for exploratory rather than exploitative information processing. They were less selective in allowing information into their working memory and were thus more affected by distractors and performed worse on task-switching tests.

Citing evidence outside of laboratory settings, a study conducted with Microsoft employees claimed that after employees were interrupted by an email, they took 15 minutes to fully regain their train of thought — irrespective of whether or not they responded to the email! [5]

Although the dopamine kicks experienced from multitasking may make us skeptical of research denouncing the habit, our seemingly endless piles of work — despite our efforts to get many things done at once — might indicate that unitasking is a better way to work.

What exactly does unitasking entail?

Unitasking is not merely working on one task at a time. It also entails working in focused bursts for fewer hours, removing distractions during those hours, chunking similar tasks together, and allowing yourself some breathing space.

While creating your to-do list for the day, clump together tasks that complement each other. For example, checking emails and responding to messages can fall under “respond to emails, texts, and messages”; personal research work and reading can be combined into a “personal projects” category; and meetings, gatherings, and events can share a “social” group.

Once you finish chunking, the next step is to identify time slots for uninterrupted solo work. If you’re a student, go over your calendar and identify blocks of time that aren’t spent in class, in activity meetings, or eating and sleeping. Usually, these might be early-morning slots or late afternoons and evenings. Download apps like SelfControl for your computer, which block websites and distractions for specific time intervals. Because the block cannot be lifted early, even by quitting the application (as is the case with SelfControl for Mac), you are forced to stay off distracting websites during these pre-set intervals. Time-splitting techniques like the Pomodoro technique, which uses a timer to break work into 25-minute segments followed by short breaks, can be useful too. Work through at least 20–30 minutes of uninterrupted time, then take a 5–10 minute break. Make sure you perform tasks according to the clumping performed in step 1 (i.e. tackle “personal projects” and “social” individually), and plan your time slots accordingly, as the categories you come up with may require significantly different time commitments.
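If it helps to see the loop written down, a minimal Pomodoro timer fits in a few lines of Python; the 25- and 5-minute durations below are the conventional defaults, and the script is our own illustration rather than any particular app's implementation.

```python
import time

def pomodoro(work_min=25, break_min=5, cycles=4):
    """Alternate focused work blocks with short breaks (classic Pomodoro)."""
    for i in range(1, cycles + 1):
        print(f"Cycle {i}: focus on ONE task-clump for {work_min} minutes.")
        time.sleep(work_min * 60)   # uninterrupted work block
        print(f"Cycle {i}: take a {break_min}-minute break.")
        time.sleep(break_min * 60)  # short recovery break
    print("Session complete. Review your to-do list.")

if __name__ == "__main__":
    pomodoro()
```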

Finally, give yourself some breathing room by organizing your day around your task-clumps. If your meetings are usually flexible, give yourself some time to tackle them after you have managed to get through the more pressing task-clumps for the day. In addition, setting a time past which you are no longer allowed to work can lead to positive outcomes like getting more sleep, winding down, and feeling less constrained by constant multitasking. To maximize the benefits of unitasking, clump your tasks for the next day and identify dedicated time slots for each before you go to bed.

Close this tab. Create a new to-do list. Block out times. Remove distractions. Get through each clump at a time. Check to-do list. Repeat. Take a look at the to-do list…gasp at how much work you got done.

Why Machines Will Not Replace Us

Lately, we have received quite a number of requests asking us to explain further why artificial intelligence (AI) and robots are unlikely to put humans out of work soon. It may be a contrarian position, but we are definitely optimistic about the future, believing that the displacement of labour won’t turn out to be as gloomy as many are speculating. Despite the endless talk of the threat machines pose to human jobs, the truth is that, while we have lost jobs in some areas, we have gained them in others. For instance, the invention of automatic teller machines (ATMs), introduced in the 1960s, ought to have eliminated the need for many bank employees in the US. Yet, over time, the industry has not just hired more staff; job growth in the sector is, in fact, outpacing the average [1].

So, why is this? The answer can actually be found in Hollywood movies. In the 1957 film Desk Set, the entire audience research department in a company is about to be replaced by a giant calculator. It is a relief to the staff, however, when they find out that the machine makes errors, and so they get to keep their jobs, learning to work alongside the calculator. Fast forward to the 2016 film Hidden Figures. The human ‘computers’ at NASA are about to be replaced by the newly introduced IBM mainframe. The heroine, Dorothy Vaughan, decides to teach herself Fortran, a computer language, in order to stay on top of it. She ends up leading a team to ensure the technology performs according to plan.

Facts and not fantasies

These are not merely fantasies concocted by film studios. Granted, realistically, many jobs, especially those involving repetitive and routine actions, may succumb to automation for good. But the movies above do encourage us not to overrate computers and underrate humans. Delving deeper into this, we believe there are several elements that underpin this message.

  • Only humans can do non-standardised tasks. While traditional assembly line workers are set to be replaced by automation, hotel housekeeping staff are unlikely to face the same prospect any time soon. Robots are good at repetitive tasks but lousy at dealing with varied and unique situations. Jobs like room service require flexibility, object recognition, physical dexterity and fine motor coordination; skills like these are – at the moment at least – beyond the capabilities of machines, even for those considered intelligent.
  • Machines make human skills more important. It is possible to imagine an activity – such as completing a mission or producing goods – as being made up of a series of interlocking steps, like the links in a chain. A variety of elements go into these steps to increase the value of the activity: labour and capital; brain and physical power; exciting new ideas and boring repetition; technical mastery and intuitive judgement; perspiration and inspiration; adherence to rules; and the considered use of discretion. But, for the overall activity to work as expected, every one of the steps must be performed well, just as each link in a chain must do its job for the chain to be complete and useful. So, if we make one of these steps or links more robust and reliable, the value of improving the other links goes up [2]. In this sense, automation does not make humans superfluous in any fundamental way; instead, it increases the value of our skill sets. As AI and robots emerge, our expertise, problem-solving, judgement and creativity are more important than ever [3]. For example, a recent study looked into a Californian tech startup. Despite providing a technology-based service, the company is growing so fast that, with its computing systems getting larger and more complex, it is constantly drafting in more humans to monitor, manage and interpret the data [4]. Here, the technologies are making human skills more valuable than before.
  • Social aspects matter. Perhaps one of the most telling lessons learnt from underestimating the power of human interactions can be found by looking at Massive Open Online Courses (MOOCs). Until recently, it was widely believed that the rise of digital teaching tools would make human teachers less relevant, or even superfluous. However, that was not found to be the case with MOOCs. Instead, they have shown that human teachers can be made more effective with the use of digital tools. The rise of hybrid programmes, in which online tools are combined with a physical presence, has only partially reduced the number of face-to-face hours for teachers, while freeing them up to be more involved with curriculum design, video recording and assessment writing. Ultimately, it is this combination of human interactions and computers that wins out [5].
  • Human resistance is not futile. Many of us have witnessed seemingly promising IT projects end in failure. Very often, this is not the result of technological shortcomings; instead, it is the human users who stand in the way. Unfamiliar interfaces, additional data-entry work, disruptions to routines, and the need to learn and understand the goals a newly implemented system is trying to achieve often cause frustration. The upshot is that people can put up an enormous amount of resistance to taking on novel technologies, no matter how much the new systems might benefit them and the company. Such an urge to reject new systems is unlikely to change in the short term.

Closer together

There is simply no reason to think that AI and robots will render us redundant. It is projected that, by 2025, there will be 3.5 million manufacturing job openings in the US, and yet 2 million of them will go unfilled because there will not be enough skilled workers [6]. In conclusion, rather than fretting that machines will undermine us, we are much better off thinking hard about how to upskill ourselves and learn to work alongside machines. We will inevitably coexist – but it won’t be a case of us surrendering to them.

Get ’em While They’re Young: Why Children May Be Better Economists Than Economists

Could children be the policy makers of today, today? Behavioral economists are praised for explaining some of the obvious causes of human behavior, and rightfully so. Yet, most often, it is the thing right under our nose that goes unnoticed. Children, however, seem immune to this lapse in judgement.

In an investigation into the ‘Rationality of Self and Others in an Economic System’, Kenneth J. Arrow postulates that “an acceptable and fundamental way to test economic theory is to test directly the economic rationality of individuals isolated from interactive experience in the social and economic institution.” Through infancy, a primitive mind bears little cognitive predisposition to selection bias, bigotry, or other adverse effects often observed in real-world behavior. This is not to question the art of economics, but rather the science.

Born with the innate ability to process simple economic transactions, perhaps our ‘matured’ minds have come to over-complicate economic thought. For example, one day, driving through a suburb under the sweltering July sun, I passed a young purveyor of lemonade who had set up shop on the front lawn of his family’s home, selling lemonade for 50 cents a glass. Directly next to his table was a boy of a similar age advertising ‘Joe’s Iced Tea’, and at only 40 cents a cup it was a steal. This is the greatest example of a competitive microeconomic pricing strategy I’ve ever seen, especially from an 8-year-old. I propose it is I who misbehaves as a student of economics, and that the 40 cents I paid Joe isn’t nearly as valuable as the 2 cents he gave me.

In a 1958 inquiry into ‘Children’s earliest conceptions of economic relationships’, a research group from the University of Melbourne posed trivial questions about various economic processes to a group of children aged 5–8 years old (unfortunately, Joe hadn’t yet been born). They found an initial pre-categorical stage of development in which no realm of economic concepts is differentiated from social concepts in general. The Kahnemans and Tverskys of the world rejoice.

A normative approach

While a standard economic approach bases rational reasoning on a fully informed, unbiased agent, there are of course instances in which we misbehave, as Nobel-winning economist Richard Thaler would argue. This view of economics represents a convenient convention that is often unsatisfactory in describing human behavior. Through a normative lens it becomes evident why 35% of Canadians still don’t save enough for retirement and why there was an outstanding credit card debt of $94.2 billion in the fourth quarter of 2016. People clearly don’t always make the most rational choices.

Similarly, in the well-known “kids vs cookies” YouTube video, you’ll find a researcher presenting kids with the option of eating a cookie now or, if they are determined enough to wait, receiving two later. You can see the anticipation, and their mouths undoubtedly salivate at the delicious baked good on the table. Their internal struggle is evident, and the majority of them give in to a temptation which supersedes the prospect of future satiation. Do we not as adults face the same battle on a daily basis? Certainly the glass pane separating you from the donut display is not nearly as obtrusive as the mental barrier between you and the treadmill. What’s amazing, and perhaps disappointing, is that we will jump physical hurdles to do a bad thing, but be reluctant to perform the simple good.

What are the implications?

As adults, we often look back on our lives and wonder when our brains ceased operating as a child’s. While the stakes grow as we age and decisions become more complex, how could clear, unbiased, yet inexperienced insight be applied to solve today’s most vexing problems? Uri Gneezy, a frequent contributor to the ‘Freakonomics’ saga, stresses the importance of addressing bigotry early on. Gneezy suggests the best way of closing the gender-wage gap is through investment in education. While I can recall as a child questioning why my best friend had different coloured skin, I did so out of curiosity. Whether directed at gender or race, animus is an unfortunate disease contracted through social-environmental contamination.

The point is: we have evolved from a primordial process of cognition, and we retain a similar pattern of thought throughout our lives. Clearly there is a disconnect, in that we as adults cannot fathom the thought processes of a child. Since we often seek the advice of an impartial third party in our decision-making process, how could involving children in developing policy be a bad thing? It’s an audacious strategy requiring little investment besides time and patience, but a simple scheme nonetheless. Perhaps we could all benefit from policy makers honouring the ‘Take Our Kids to Work Day’ tradition.

Conclusion

To conclude, the studies mentioned provide a salient framework from which future academics may evaluate the extent of a child’s comprehension of real world problems and the underlying mechanics of potential policy measures. The great thing is, there will always be a readily available population of 5-8 year olds ranging across every demographic and socioeconomic background willing to weigh in.

Despite your intuition and academic prowess, sit at the kids’ table and listen. You just may learn something.

Government Nudging in the Age of Big Data

Nudging is a science, and its practitioners are scientists. More specifically, it is a type of applied science: it takes results recorded in laboratories and field experiments and applies them to the real world. For policy-makers, this is an enormously valuable tool — policy decisions can be backed up by hard evidence, recorded time and time again by researchers across the world.

However, the transition from psychology journals into the real world is not always easy. Seeing which nudges actually work is a process of trial and error — recording the data, looking at the outcomes, and adjusting policies accordingly. In fact, one criticism of changing behavior through nudging is that these results are too context-specific and cannot easily be replicated in different environments.

To its credit, ‘trial and error’ nudging has worked very well. It has been used successfully by governments all over the world; from tax-collection to urinals, nudging has provided behavioral solutions to social problems.

Thanks to data science, however, the future of government nudging looks quite different. Every year, the Behavioural Insights Team (BIT) — Britain’s government unit for behavioral science — releases a report reviewing how behavioral insights have been used in British policy. This year, there was a crucial inclusion: the BIT has recently added a Data Science team, which aims to use the latest methods from data science, machine learning and predictive analytics to make smarter policy implementations.

This is hardly surprising; given the rise in popularity and application of both data science and behavioral science, combining the two seems to be the next logical step. In fact, sophisticated data analytics have the potential not only to improve behavioral insights, but to transform how governments interact with their citizens.

Take machine learning, for example. Simply put, machine learning consists of building algorithms that find patterns in very large datasets. These algorithms consolidate information and adapt to become increasingly sophisticated and accurate, allowing them to learn automatically without being explicitly programmed.

The BIT’s application of these techniques has been fairly modest, but the results are hugely promising. The first major trial has involved trying to solve a road traffic problem in East Sussex, a small county on the south coast of the UK. For whatever reason, East Sussex has a disproportionately high number of fatal traffic collisions (64% higher than the national average). Faced with this problem, the local council has implemented a number of road safety initiatives to try to reduce speeding, encourage concentration at the wheel, and provide road users with information promoting safer driving.

Last year, the BIT tried to solve this problem with data science. Algorithms trained on over ten years of local data allowed the BIT to make extremely accurate predictions about which types of drivers were more likely to be involved in serious traffic accidents. For example, they found that a collision between a person over 65 and a younger driver is more likely to result in a fatality if the ‘younger’ driver is aged 40–50. Behavioral patterns like these, previously unnoticeable, only emerge in large enough datasets. Most importantly, this allowed the BIT to better design and target road safety initiatives — to provide the right behavioral interventions for the right people.
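The BIT has not published its model, but the general recipe is standard supervised learning: train a classifier on historical collision records, then inspect which driver characteristics drive the predictions. Below is a minimal, hypothetical sketch in Python; the file name and every feature column are invented for illustration.

```python
# Hypothetical sketch of the general approach only: the BIT's actual model,
# features and data are not public, so every column name here is invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One row per recorded collision; 'fatal' is 1 if the collision killed someone.
records = pd.read_csv("east_sussex_collisions.csv")  # assumed local file
features = ["driver_age", "other_driver_age", "speed_limit", "hour_of_day"]
X, y = records[features], records["fatal"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Which driver profiles matter most? Feature importances suggest where to
# target road safety interventions.
print(dict(zip(features, model.feature_importances_)))
print("Held-out accuracy:", model.score(X_test, y_test))
```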

Right now, these models have only been applied to small-scale road safety initiatives, but their potential to solve major social problems is clear. The amount of data we amass, individually and as a society, is staggering: by some estimates, we now generate as much data every two days as humanity did in all of history up to 2003. All of our online interactions, our purchase histories, our medical records, our government information — it all leaves a digital footprint. When datasets are so large, behavioral predictions can be startlingly accurate. Michal Kosinski has already used the digital footprints left behind on online platforms and devices to study, and anticipate, human behavior and psychological traits. His models have been able to predict people’s psychological traits, behavior, sexuality, and even who they will vote for.

How does this relate to better policy-making? As the BIT showed, instead of applying and re-applying nudges as ‘best-guesses’, governments can tailor very specific, personalised behavioral nudges to individuals and small groups. If Kosinski and his team can make extremely accurate predictions about an individual’s private preferences based on a fairly limited amount of social media data, imagine how accurately governments could design and target the right behavioral nudges.

Things get really interesting when you consider countries like Estonia and Finland — both of which generate vast amounts of open government data about the behavior of their citizens. In Estonia, 99% of public services are available online (including voting, paying taxes and access to medical records) — and citizens can even register new companies digitally from their smartphones, in a matter of minutes. In principle, as governments like Estonia and Finland continue to accrue enormous sets of behavioral data — in so many different domains — they will, with the right tools, be able to develop the most powerful and well-targeted behavioral nudges possible.   

What this means for the future of data protection and behavioral interventions is complex. Questions about how much data access policy designers should have are extremely contentious and will remain so. One thing is for sure, though: governments have already made enormous progress in scaling behavioral nudges thanks to insights drawn from fairly modest local records. As the techniques become more sophisticated and the datasets grow larger, the old scientific approach of trialing ‘best guesses’ could soon be replaced by machines that learn and improve automatically. Policy design does not get smarter than that.

Reducing Water Consumption: Why You Care What Your Neighbours Think

Water usage in domestic environments has risen dramatically in the past century, and maintaining access to fresh water is becoming a major concern. The crisis of water scarcity is considered one of the most important issues facing policy makers today, especially in countries and regions affected by drought, including California in the U.S. and over 50% of India, which lies in critically ‘water-stressed’ areas (World Resources Institute, 2015).

This precarious scenario could be dealt with via two potential solutions: increase the amount of freshwater available or decrease the demand for it. As many urban areas of the world lack the ability to increase the supply of fresh water (in some cases resulting in water rationing, as in Brazil, Colombia, the USA and India), the most realistic option is to encourage consumers to conserve water themselves. However, this may be easier said than done.

Changing consumption behavior, and changing people’s behavior in general, is a complex task. Policies designed to do this can often result in inaction, regardless of whether individuals believe it is important to save water at home. Many strategies have been introduced, with varying degrees of success, to try to encourage people to decrease water usage in their homes. But which techniques are the most effective in changing people’s behavior?

Personalised Feedback 

Interventions have traditionally focused on simply presenting information to people and expecting them to respond appropriately. This idea is based on ‘the knowledge-deficit approach’, which assumes that people make environmentally harmful decisions due to a lack of information. If people are provided with evidence about how serious overusing water can be, for example, then surely they will make their own rational decision to reduce water consumption.

However, research has revealed that this technique isn’t sufficient (Seyranian et al., 2014). Criticism arose after many unsuccessful campaigns discovered that, despite individuals reporting greater knowledge of environmental issues, people’s behavior did not actually change. Consequently, other ideas about how to influence behavior have been developed. The implementation of low-cost behavioral interventions, otherwise known as ‘social nudges’, in public policy reflects the evidence that people are more influenced by indirect suggestions than by forced compliance.

These indirect suggestions can take many forms and rely on the fact that our choices in life are inherently influenced by bias, habit, past experiences and contextual factors. Unfortunately, this can lead to poor decision-making, but it can also be utilised positively to promote preferred behaviors. A nudge takes these social, psychological and physical aspects into account, influencing judgements and changing behavior in a much more effective and less costly way than traditional regulations and campaigns.

These nudges act as gentle prompts in decision making. An early example is the use of descriptive normative information, such as average neighbourhood energy usage, along with a message conveying social approval or disapproval. Social nudges like personalized feedback have been used to change a variety of consumption behaviors, ranging from electricity, heating and water use to alcohol consumption (Allcott, 2011; Schultz et al., 2007; Dotson, Dunn & Bowers, 2015). Their use in reducing water consumption highlights the potential benefits of applying behavioral theories to real-world situations.

Comparing your lifestyle to others is a common phenomenon and thus the theory of social nudges can be implemented by using this naturally occurring circumstance in a positive way. Do you care how much water your neighbours use? Or would you care more if they knew how much water you use?

According to Schultz and his team, the answer is both. A water reduction intervention was more successful when households were presented with a combination of information on (1) the amount of water they consumed relative to others in their neighbourhood and (2) whether this behavior was socially desirable.

Schultz and his colleagues found this personalized normative feedback, in combination with an injunctive message, to be a successful tool for reducing energy consumption in high-energy homes, and showed further that it can also be applied to water usage. Using both techniques simultaneously also removes what is known as ‘the boomerang effect’, whereby households with lower energy or water use realise their neighbours use more than them and thus feel justified in increasing their usage.
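To make the mechanics concrete, here is a small, hypothetical sketch of how such a message might be assembled in code. Schultz and colleagues delivered their injunctive signal with hand-drawn smiley and frowny faces; the wording and emoticons below are our own stand-ins.

```python
def water_feedback(household_litres: float, neighbourhood_avg: float) -> str:
    """Build a feedback message combining a descriptive norm (the comparison)
    with an injunctive signal (approval or disapproval)."""
    descriptive = (
        f"Your household used {household_litres:.0f} L this week; "
        f"the neighbourhood average was {neighbourhood_avg:.0f} L."
    )
    # The injunctive part keeps below-average users from 'boomeranging'
    # back up toward the average.
    if household_litres <= neighbourhood_avg:
        injunctive = ":-) Great work - keep it up!"
    else:
        injunctive = ":-( You are using more than your neighbours."
    return f"{descriptive} {injunctive}"

print(water_feedback(820, 1000))   # below average -> approval
print(water_feedback(1300, 1000))  # above average -> disapproval
```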

One-Nudge-Fits-All

However, despite the usefulness of a particular nudge in one setting, it is important to avoid generalised assumptions about how that nudge may perform under different circumstances. Whilst behavioral science looks for patterns and causalities in human behavior, the influence of individual factors cannot be underestimated. Hagman et al. studied the acceptance of nudges in different communities across Sweden and the US and found that, in reality, a ‘one-nudge-fits-all’ approach does not hold true. Individual differences in worldview and attitudes should be taken into account when formulating nudge policies.

The significance of individual differences when using these personalised normative feedback and injunctive messaging techniques is further highlighted by Costa and Kahn’s (2010) research on political ideology. They discovered that the effectiveness of these interventions on electricity consumption varied by as much as four times depending on whether the household leaned liberal or conservative. Moreover, conservatives are more likely to opt out of interventions, so these social nudges and normative feedback must be tailored to the individuals involved.

When designing these interventions, in addition to considering the individuals involved, it is also necessary to factor in the effect of the ‘comparison group’. Would you be more likely to care about social disapproval from an unknown family on the other side of your city or from the neighbours down the street? As you might correctly assume, closer proximity to the disapproving audience results in a larger change in consumption, as found by Datta et al. (2015). In a 2015 intervention in Costa Rica, a town-level comparison of water consumption levels had no significant effect in reducing water usage, whereas a neighbourhood comparison reduced consumption by up to 5.6%. Thus, the group used as a reference for comparison must be carefully chosen.

Ultimately, current research, such as that presented here, should guide policies on water consumption using the most effective techniques and interventions. Despite the ease of printing information on leaflets or sending email notices about the importance of saving water, such generic interventions should be replaced with more personalised feedback to consumers.

However, can these findings be solely attributed to receiving comparative feedback? An important discussion continues thanks to research conducted in the UK by Harries et al. (2013), which revealed that the impact of information about social norms may have been confounded with the effect of receiving individual feedback. In essence, when households were given a clear historical record of their electricity usage, they were found to reduce their energy consumption by up to 3%, regardless of whether they had information about other people’s consumption. Simply being provided with clear, detailed information about their energy use over a fixed period may be enough to change people’s behavior.

As discussed previously, personalised normative feedback was a better solution than simply alerting people to the need to reduce water consumption. It is true that the public are well aware of how crucial water conservation is, but are they even aware of just how much water they use?

Perhaps making people acutely conscious of their own consumption would also be an effective intervention. Further research must be conducted to confirm whether this information, in comparison to personalised normative feedback, is enough to influence behavior across a variety of situations and countries.

Conclusion 

In summary, social nudging attempts to change behavior on a large scale and at a relatively low cost. As the evidence reveals, nudges can be much more effective than traditional methods because they incorporate the facts that people do not always act in their best interests, that intentions do not match actions, and that real-world behaviors often reflect unexpected attitudes. However, there is still a lot to be discovered about the long-term impacts of these interventions and how much information people really need in order to change their behaviors.

So, what can you do to reduce your water consumption? It turns out that, rather than just thinking about how severe a water crisis would be, it may be better to find out exactly how much water you are using and how that compares with your neighbours’.

Charity, Parochialism, and the Inefficiencies of Altruism

Most of us are good and charitable people. Whether it is pure altruism, the good feeling we get, or just a matter of doing it because others do, many of us have a desire to give to charity in some form or another (Brooks, 2007). Last year in the United Kingdom for example, people donated around £9.7 billion to charity (CAF, 2017), with medical research being the most popular cause with 26% of total donations, and overseas aid and disaster relief getting 19%.

However, is this money going to the right places? Whilst there has been an increasing amount of research and attention given to the behavioral science of giving (BIT, 2013), the issue of why we specifically donate our money to the places that we do, and where it ends up, has been neglected. Fortunately, some of these issues can be addressed using simple theories from the field of behavioral science.

Why do people give to already well-funded charities?

One of the questions left unanswered by research into the science of giving is why people feel compelled to give money to charities that are already very highly funded by public donations. An example of this is Cancer Research UK. This organisation received £442 million from private donations alone (£544 million if you include trading income) in the 12 months between 2016 and 2017 (Cancer Research UK, 2017).

In the case of this particular charity, one important reason for people’s persistent donations is that it is advertised well, and subsequently occupies a salient position in the public eye. As they are exposed to advertising, people become familiar with Cancer Research UK, resulting in an association between the organisation and charity in general.

Hence, when you are thinking about where to donate your money, Cancer Research UK is one of the first places you think of. This idea primarily comes from the research of Robert Zajonc (1968), who argued that simply being repeatedly exposed to something makes you think of it more frequently and positively, potentially explaining how Cancer Research UK’s position in the public is related to its persistent rate of donations.

However, arguably the main reason behind people’s donations to this particular charity is that a lot of people know someone who has or has had cancer, making it a very personal, relatable and familiar issue. We may be more inclined to give to a charity that helps people nearest to us rather than those in distant and unidentifiable places. This is known as parochialism (Baron & Szymanska, 2011), whereby our close proximity to a charity’s focus leads us to care more about it and those whom it concerns.

Meanwhile, we don’t tend to feel as much for those suffering far away or in another country because we don’t get as much real exposure to it, and can therefore distance ourselves from it. So given two individuals in equal and deserving need of help, we have a preference to help those at home first. However, this isn’t always the case, which leads me on to the next question.

Why do people donate to disasters that are broadcast on the news?

Considering Baron and Szymanska’s principle of parochialism, we should expect to see considerably less charitable giving and public concern toward issues that are far away from us, such as foreign conflicts and natural disasters in distant nations. However, sometimes we see the opposite.

Take the ongoing humanitarian crisis in Syria, for instance. Contrary to theories of parochialism, which predict that societies care most about the issues closest to them, there has at times been a huge amount of concern and altruistic behavior towards events taking place extremely far away. Why is this?

Recent behavioral science theories have also put this down to the role of the media. When a recent disaster has been in the news, we can identify with that particular event because modern technology allows us to visualise it, and see exactly what our money would go towards helping if we decided to donate. The identifiable victim effect (Small & Loewenstein, 2003), for instance, causes us to empathise with those affected in reaction to emotional and heart-wrenching stories.

This may also account for why some distant issues receive charitable focus and others do not. Disasters framed in the public eye using only numbers and statistics often fail to provoke such an emotional response; they are not as relatable as the vivid scenes depicted in media campaigns, and are easy to ignore as a result.

However, this may not make sense. Why should we be less interested in helping an anonymous victim and so much keener to help the specific victim we can visualise and relate to? Either way, people are suffering and we can help them. But the one in the news will likely get more donations. This is an ‘irrational’ way to think, as behavioral scientists would put it; it involves us making decisions emotionally, with our hearts rather than our heads, which brings me to the final question.

Why do people give a small amount to several charities rather than one big amount to one single charity?

People give in all sorts of different ways. Some give frequently to the same charity, some give large one-off donations, and some give on an ad-hoc basis to multiple charities. But what is the reasoning behind donating little and often? Some individuals do this because of the ‘warm glow’ they get as a result of helping lots of different charities (Andreoni, 1990).

Think of it like the following choice. You could get a single good feeling from making one big donation, and it’s a fairly good feeling because it’s a large amount. Or you could get multiple good feelings, one from every time you donate a little bit to a different charity. Which would you prefer? Though it is very hard to get hard evidence for this, the literature suggests most people effectively opt for the latter (Rotemberg, 2014).

We maximise our “warm glow” by giving little and often, since there is declining marginal utility from giving. This means that we get more of a “warm glow” by giving one pound to each of two different charities than by donating the entire two pounds to one single charity (provided we have no irrational emotional attachments).
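A toy calculation makes the concavity explicit; the square-root utility below is purely an illustrative assumption, not something drawn from the warm-glow literature. If the glow from giving $x$ pounds to a single charity is $u(x) = \sqrt{x}$, then splitting £2 across two charities yields

$$u(1) + u(1) = \sqrt{1} + \sqrt{1} = 2,$$

whereas giving both pounds to one charity yields only

$$u(2) = \sqrt{2} \approx 1.41.$$

Any concave $u$ produces the same ranking, which is exactly what declining marginal utility from giving means.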

In other words, the difference in “warm glow” between donating nothing and donating one pound is much greater than the difference between donating one pound and donating two. According to Baron & Szymanska (2011), we can avoid these biases. To be purely efficient in our altruism, we should identify the charity that does the most ‘good’ per pound and make our entire contribution to that one charity.

Efficiency?

But what counts as efficient in this context? We could say your donation is efficient when your extra pound goes wherever the most ‘good’ is done. An economist would call this the Marginal Benefit (MB): the direct impact your extra £1 donation has on helping people.
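As a rough formalisation (the notation is ours, not the article's): let $G_i(d_i)$ be the amount of 'good' charity $i$ produces from total donations $d_i$. Then

$$\mathrm{MB}_i = \frac{\partial G_i}{\partial d_i},$$

and a purely efficient donor sends each extra pound to whichever charity currently has the highest $\mathrm{MB}_i$. With diminishing returns, $\partial^2 G_i / \partial d_i^2 < 0$, a charity that is already heavily funded sits far along its curve, where $\mathrm{MB}_i$ is small; that is the intuition developed in the next paragraph.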

A popular and very well-funded charity such as Cancer Research UK, for instance, has a low MB: donating one extra pound to them will make very little difference, because they already have millions of pounds being pumped into their search for a cure, so your extra few pounds isn’t going to save any lives. If, on the other hand, your extra pound went to preventing the spread of malaria in Africa, it might help purchase a malaria net that directly saves lives, meaning the MB is much larger in such charities.

So if we gave all of our money to the charity with the highest MB, we would be helping the most people whilst doing the most ‘good’, and therefore be the most efficient. Of course, there are issues with this way of thinking.

Firstly, what do we define as ‘good’? Even though your donation to Cancer Research UK might not directly help anyone in the short term, it is funding research that one day may help millions. So in some people’s eyes, that is just as ‘good’. Also, if everyone gave to the same charity with the highest MB, this would instantly lower that MB. It would also leave many other charities without any donations at all.

Similarly, how can you morally compare dying from a preventable disease in a less developed country with dying from a less understood disease in a more developed country? This is where the distinction between efficiency and morality becomes blurred. However, this draws us into uncharted territory, away from the premise of this article. Before we can debate which charities do the most ‘good’ and what the optimal allocation of existing donations would be, we first need to decipher exactly why people give to charities in the way that they do.

Conclusion

The purpose of this article is not to tell people where to give their money, whether that is to Cancer Research UK or not. Similarly, it is not advising against donating to the latest tragic cause displayed on TV. Its purpose is merely to highlight the biases we fall victim to. We have seen that a lot of people donate to certain very well-funded charities due to parochialism: the idea that we care more about issues that are close to us (both physically and emotionally).

Furthermore, people tend to donate more when they can visualise a specific victim rather than view them as part of an unidentifiable statistic. Individuals also prefer donating to several places rather than one single place because of the ‘warm glow’ they receive as a result. If we can recognise that these biases are present, we can at least start making conscious efforts to ensure our money is helping the most people per pound.

Does the Quantified-Self Lead to Behavior Change?

In 2006, marketing commentator Michael Palmer said, “Data is just like crude. It’s valuable, but if unrefined it cannot really be used” [15]. Over a decade on, our lives are more saturated with data than ever, but we still seem far from harnessing its full potential.

Health and well-being is one area in particular where this issue is very prevalent. In recent years, fitness technologies have produced a plethora of data for individuals who want to change, abandon, or adopt particular habits related to their health. Just like crude oil, health data has subsequently become hugely abundant.

However, if we don’t learn how to best navigate and refine the large amounts of data offered by this technology, then it will be difficult to help people use it for the betterment of their health and well-being. This article explores how these difficulties can be overcome, and highlights how behavioral science can untangle the complex relationship between technology and long-lasting behavior change.

The Quantified Self

In the context of health, a habit is defined as an automatic response to a contextual cue (e.g. a location, object, or preceding action), which forms when the response is repeated in a stable context [22]. When it comes to overcoming bad habits and continuing good ones, people have come to rely on new developments in fitness and app technology. For example, apps such as MyFitnessPal allow users to track what they eat to develop healthy nutrition habits, while others like Headspace help users build mindfulness habits to improve their mental well-being.

Using such technology to track one’s habits and behavior has been referred to as the Quantified Self (QS) (“self-knowledge through numbers”) – a term coined in 2007 by Wired editor Gary Wolf and writer Kevin Kelly. Other definitions include “personal informatics,” “lifelogging,” and “self-tracking” [12]. The explosion in the QS has been characterised by the growing development and use of wearable devices and apps (Fitbit, Sleep Cycle, Fooducate, Happify), and has meant that individuals can access information about many of their behaviors such as stress, menstruation, mood, heart rate, sleeping patterns, diet, mental attention, and physical activity.

Such is the rising popularity of the QS that the wearable electronics business is expected to reach over 150 billion dollars annually by 2027 [9]. But how exactly does it affect behaviors related to health?

The importance of feedback

Research suggests that there are many ways in which QS can create positive user experiences and influence behaviors. One of these involves the importance of user feedback. Fritz et al. (2013) conducted interviews with 30 participants who had used a FuelBand, Fitbit, Jawbone, or a Striiv for a minimum of 3 months to understand their value “in the wild” [7]. They found that numerical feedback motivated and reinforced participants’ activities, because it created a sense of achievement and helped them to reach their goals.

Meanwhile, Renfree et al. (2016) conducted a qualitative study of the app Lift, which allows users to select or create habits that they wish to develop [18]. The number of consecutive days a behavior has been performed – called a ‘streak’ – is used to reward app users. This received positive reactions, as it helped support behavioral repetition and participants were reluctant to lose streaks. For example, one participant was motivated to keep up a particular habit because they wanted to maintain their long streak, noting that “having a big number is helpful in that you don’t want to break it.”
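Lift's internal implementation is not public, but for illustration a streak is easy to compute from a log of completion dates; the sketch below is a minimal Python version under that assumption.

```python
from datetime import date, timedelta

def current_streak(completed: set, today: date) -> int:
    """Count consecutive days, ending today, on which the habit was performed."""
    streak, day = 0, today
    while day in completed:
        streak += 1
        day -= timedelta(days=1)  # step back one day at a time
    return streak

log = {date(2024, 5, 1), date(2024, 5, 2), date(2024, 5, 3)}
print(current_streak(log, date(2024, 5, 3)))  # -> 3
```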

However, despite these positive reactions and the rapid growth in the industry, new evidence suggests that the presence of data concerning one’s habits is often ineffective in instilling long-lasting behavior change. Researchers are finally beginning to explain exactly why this may be, and have started providing possible solutions. For instance, Patel, Asch and Volpp (2015) argue that wearable devices are merely facilitators, rather than drivers of behavior change [16]. They believe that technology companies should focus on engagement strategies, rather than features, to help bridge the gap between recording information and long term behavior change. Fortunately, a wave of further research is examining this issue both empirically and theoretically.

The bothersome nature of apps

Firstly, studies suggest that wearables are inconvenient in various ways. For example, Harrison et al. (2015) conducted interviews with 24 users of wearable devices; one participant said her wristband was “pretty ugly,” while another said, “it wasn’t practical for wearing all the time.” Another inconvenience was battery life, which led some users to abandon the technology entirely [8].

Furthermore, in Renfree et al.’s (2016) study of Lift, reminders sometimes caused negative affect because they were deemed annoying, particularly when participants were going through busy or stressful periods [18]. Sjöklint, Constantiou and Trier (2015) interviewed 42 users of devices that track movement and sleep, and uncovered similar findings; one participant reported, “I sometimes got upset about the fact that I couldn’t always achieve my goal” [19]. Further still, they argued that despite being marketed as enabling devices which support the “rational self (the planner),” they actually attract “the emotional self (the doer).” This is because unsatisfactory results, such as underachievement, do not lead to behavior change but rather to the emergence of coping tactics: disregard, procrastination, selective attention, and neglect.

The difficulties of interpreting the data

In addition to the issues with the devices themselves, interpreting the data produced from them is another practical problem in the QS movement. In fact, Swan (2015) points out that one of the main difficulties in big data science is finding meaning amongst the large quantities of information, or as they put it, “extracting signal from noise” [21]. Here, lots of data can in fact be a hindrance rather than a benefit.

In addition, the precision of the available information has also caused concern. Yang et al. (2015) analysed 600 product reviews and conducted interviews with users of devices such as the Jawbone, Fitbit, Basis, and Nike+ FuelBand [23], finding that users were not satisfied with the accuracy of their devices. For example, some users had multiple devices and would compare their accuracy, but there was no absolute standard, which made it difficult to resolve discrepancies.

Users also liked to test the accuracy of their devices with different movements, but the movements they tested were not reflective of realistic scenarios. For instance, one participant wanted to test the sensitivity of his Basis B1 fitness watch, so he tried “jumping,” “punching,” “swinging around,” and “tapping it on things.” However, the authors note that these were not ordinary movements that users would perform in daily life. Finally, participants complained that the units of measurement driving their behavior, such as a calorie, a step, or sleep, were not clearly defined.

Along with the accuracy of the devices, users also lacked a sufficient understanding of how they worked, and developed ‘folk theories’ to make sense of the data. For instance, one user found that the device was over-rating their activity, and was unaware that the issue could be solved through calibration: correcting the measurement of a device so that it matches a standard measurement. Another participant made the wrong assessment by comparing measurements taken in different physical conditions. The authors concluded that a user’s understanding of how the device or app works is crucial, and suggest that supporting testability, allowing greater end-user calibration, and increasing transparency will improve users’ experience.

Habit formation theory

Considering all these issues, how is it that so many fitness and health apps fail to counteract them, falling short of satisfying customers and of fostering long-term behavior change? From a theoretical perspective, researchers have uncovered issues concerning the QS and its lack of grounding in the habit formation literature.

Adopting a habit relies on repeatedly performing a specific action in a stable context, because this allows the action to become automatic [22]. Stawarz, Cox and Blandford (2015) reviewed the functionality of 115 habit formation apps [20]. They listed the app features, which resulted in 14 feature categories, such as task tracking or rewards, and then coded each app for habit formation features, for example, support for contextual cues. It was found that only 5 of the 14 feature categories supported habit formation, while just one – routine creation – could help users to find a trigger event for the behavior in question.

The research concluded that these apps are not supported by habit formation theory, and only “provide functionality to enable tackling of task completion and reminders.” While monitoring one’s behavior is initially important, it leads to a dependency on reminders and does not support the development of automaticity, which is crucial in behavior change. It is suggested that apps and devices would benefit from supporting trigger events, using reminders to reinforce implementation intentions, and avoiding features which cause a reliance on technology.

Pinder et al. (2015) took another approach, and argue that persuasive technology, such as the QS, should target the nonconscious system [17]. Outlining dual process theory, they note that habits are not consciously motivated, chosen or monitored. This is because a habit is an association between a situation and an action that has become established in memory [22], so many habits are triggered automatically, outside of our awareness [1]. Current behavior change interventions, however, use many conscious behavior change strategies, which result in users ignoring prompts or unwanted interruptions.

They offer two solutions, the first of which concerns “priming the nonconscious system to behave in the desired way.”  For example, the exercise game Zombies, Run! uses the instinct of running from fear as a trigger for physical activity. The second solution involves “retraining the nonconscious system such that the user is more likely to behave in the desired way.” They argue that this can be done through nonconscious goal priming, where the new behavior masks the existing unwanted behavior.

One example of this is glanceable persuasion, which presents a user’s physical activities in a subtle and abstract manner. Klasnja et al. (2009) investigated users’ physical activity with UbiFit’s garden display, which grows different flowers depending on the activity performed [10]. The researchers found that weekly activity level was higher for participants with the glanceable display than those without the display. They argued that this was because it kept physical activity goals “chronically activated,” which reminded users of their commitment to stay fit. One participant said: “[With the garden] I think about it maybe subconsciously every time I look at my phone.” It should be noted, however, that they did not monitor conscious level of attention on goal feedback [17], which could be a fruitful avenue for future research.

Along similar lines, Calvo and Peters (2013) call for designers to be aware that we are not always rational or consistent beings, and are subject to complex psychological phenomena [4]. They argue that reflecting on our past can impact our future behavior; for example, by understanding the impact of poor oral hygiene, we brush our teeth more often. The way we reflect on the past is determined by various influences. For instance, we remember an experience based on what happened at the beginning and end. This reinterpretation of events is shaped by the primacy and recency effects, the psychological tendency to recall the first and last items in a list [3].

To create effective technologies, app designers are therefore advised to take into account how this affects subjective interpretation of past experiences. For example, end rewards such as badges, achievements, and motivational messages provide intrinsic value for users, which fosters positive emotions [5]. This means that the user reinterprets the end of an event, such as physical activity, as positive, and they are motivated to repeat the behavior again [2].

The authors also warn against systems designed to change behavior because they may have opposite results. This is known as “ironic effects,” which is when attempts to convince ourselves to do or think something backfire. For example, a study on smoking cessation found that when participants tried to stop thinking about cigarettes, they smoked more than those who did not attempt thought suppression [6]. Calvo and Peters suggest that motivational messages which ask users to focus on a certain goal should be carefully tailored, via profiling or data mining, to avoid unexpected results.

Conclusion

In summary, the progress of our understanding of the effectiveness of the QS is promising, but further work is required. Perhaps the most important observation in many of these studies on the effectiveness of QS devices and apps is the notion of dynamism; not only do designers need to understand that individuals have different needs but that these needs are constantly evolving.

The theoretical and empirical issues raise seemingly insurmountable obstacles in the search for an understanding of how persuasive technology can motivate and maintain behavior change. Yet this challenge is offset by the value of the answer, because if such concerns can be addressed, then the Quantified Self could have a life-changing impact on the health and well-being of millions.

Overconfidence: From PacMan to ‘Ghost’ Torpedoes

Most 80’s kids will recall Atari- a household name at the time, which birthed a series of wildly popular video games from PacMan to Star Wars. An 80’s kid myself, I was intrigued to discover the use of video games in military training- a fact I chanced upon in a behavioral study on overconfidence and conflict. The authors describe how “the US Army used a modified commercial Atari game Battle-zone for gunnery training”. It also highlighted another war game run by the US Department of Defense back in 2002, which played a fundamental role in “examining scenarios” for the 2003 invasion of Iraq. The game came with a whopping price tag of $250 million.

Video games are far from the only interesting premise in this paper, which studies what behavioral scientists broadly term overconfidence. The authors posit that the human predisposition towards overconfidence, or what they call ‘positive illusions’, has negative implications for conflict decisions. In a lab experiment, they asked 200 volunteers to play the role of the leader of a fictitious country where newly discovered diamond resources lay along a disputed border.

The volunteers were given different alternatives in the computer game, from trading and negotiating with opponents for additional resources to ‘waging war’, wherein attacks could be launched on opponents. Volunteers were also asked to rate their likelihood of success before and after play, and to give saliva samples at different stages to measure testosterone levels.

The authors discovered that players who made higher-than-average predictions of their performance, or in other words, were overconfident, launched more unprovoked attacks against competitors. These players were also more likely to be men, who not only possessed greater overconfidence than their female peers, but also tended towards greater levels of narcissism.

While some naysayers might react to the above with a big fat “so what, how is this at all relevant to conflict?”, still others may allege that what happens in a lab simply does not translate to the real world. These concerns are addressed in the paragraphs that follow, but first, an analysis of the phenomenon of overconfidence.

Why the Fuss Over ‘Overconfidence’?

When an individual’s belief in their abilities is higher than their ‘actual’ or realised abilities, behavioral scientists peg the phenomenon as overconfidence. Overconfidence, however, is more complex than simply being optimistic. Moore and Healy describe its three main types as follows:

  1. Overestimation, where individuals overestimate their ability, performance, or likelihood of success. The authors use the example of overestimating the speed at which one can complete one’s work.
  2. Overplacement, where individuals overestimate their abilities relative to others. Here, the authors cite a popular study in which American and Swedish drivers reported themselves as more skilled than each country’s median driver.
  3. Overprecision, where individuals have an “excessive certainty” about the accuracy of their beliefs. The authors use the example of posing questions such as “How long is the river Nile?” and having participants answer with 90% confidence intervals. Individuals are often ‘too sure’ that they know the correct answer, as the simulation sketched after this list illustrates.
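
To make overprecision concrete, here is a minimal, hypothetical simulation (my own illustration, not from Moore and Healy): a respondent whose 90% confidence intervals are well calibrated should capture the true answer about 90% of the time, while an overprecise respondent’s narrower intervals capture it far less often.

    import random

    # Hypothetical illustration of overprecision (not from the cited paper).
    # A calibrated 90% confidence interval should contain the true value
    # about 90% of the time; overprecise intervals are too narrow to do so.

    random.seed(42)

    TRUE_NILE_LENGTH_KM = 6650  # commonly cited length of the Nile

    def interval_hit_rate(width_km, n=10_000):
        """Share of intervals of the given width, centered on noisy
        guesses, that actually contain the true value."""
        hits = 0
        for _ in range(n):
            guess = random.gauss(TRUE_NILE_LENGTH_KM, 1500)  # noisy belief
            hits += abs(guess - TRUE_NILE_LENGTH_KM) <= width_km / 2
        return hits / n

    # With guesses this noisy, intervals ~5000 km wide hit ~90% of the
    # time; an overprecise respondent's ~1000 km intervals hit ~26%.
    print(f"calibrated (wide) intervals: {interval_hit_rate(5000):.0%}")
    print(f"overprecise (narrow) ones:   {interval_hit_rate(1000):.0%}")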

In the abovementioned computer war game, individuals (particularly men) overestimated their likelihood of success, which falls into the first two brackets by this definition: ‘overestimation’, when individuals overrated their own performance, and ‘overplacement’, when individuals rated themselves as likely to perform better than others.

Overconfidence: Bad for Conflict?

The authors state the following in their paper: “Since militaries are often concerned with how wargames represent real war, there is a significant need to understand human biology and behavior in wargames, whether or not they also reflect real war.” They are therefore not suggesting that asking participants in a computer game to pick between negotiating and waging war carries the same stakes or payoffs as the global political stage.

Instead, studies like these underscore underlying behavioral phenomena that are fundamentally human and could have sweeping consequences, particularly for elected (or indeed, unelected) officials who call the shots on decisions to wage war. That overconfidence seems to be inextricably linked with narcissism, and in some cases mania, is especially worrisome, given that narcissistic traits have been found to be overrepresented in present-day political leaders relative to the rest of the population.

Also of note is the fact that these leaders are overwhelmingly male: a population which, as this study suggests, tends to be ‘overconfident’, possibly translating to greater aggression on the global political stage. Feminists like myself argue that even if women are less ‘overconfident’ than men, these differences likely arise from socialisation and rigid gender norms. Regardless of the underlying reason for these disparate behavioral tendencies, studies such as these add to the debate over the need for greater representation of women in politics and key decision-making roles. Going by the evidence from behavioral science, women might just be more likely to wage peace than war.

McNamara and The “Fog of War”

In a lecture from the ‘Science of Behavior Change’ course taught at the Harvard Kennedy School, Professor Todd Rogers plays a video excerpt for the class from the documentary ‘The Fog of War’. The video depicts scenes from the first and second ‘Gulf of Tonkin’ incidents, on the 2nd and 4th of August 1964, respectively. Former US Secretary of Defense Robert McNamara describes how the second incident, in which North Vietnamese torpedo boats were believed to have attacked the USS Maddox, did not actually occur.

In the video, McNamara says, “There were sonar soundings, torpedoes had been detected, other indications of attack from patrol boats. We spent about 10 hours that day trying to figure out what in the hell had happened. At one point, the commander of the ship said, ‘we’re not certain of the attack’. At another, he said, ‘yes, we’re absolutely positive’… So, I reported this to President Johnson. And, as a result, there were bombings on targets in North Vietnam.”

It was later discovered that the torpedoes were entirely imagined ‘ghost torpedoes’. Since the ‘sonar men’ on board the Maddox were actively looking for signs of attack, they misread the signals they picked up via sonar as actual attacks: an amalgam of ‘overestimation’, ‘overprecision’, and confirmation bias, or an overconfidence in the precision of military intelligence.

President Johnson used these ghost torpedo attacks as the basis to put forth the ‘Gulf of Tonkin resolution’ to Congress, which essentially gave him the authority to take the country to war against Vietnam: an apt example of why overconfidence should not be treated flippantly.

What it All Means

The McNamara documentary is a testament to the fact that leaders can be overconfident in several ways, with egregiously dire consequences. Perhaps the time is nigh to glean lessons from biases such as overconfidence: that leaders can be prone to error, that male leaders with tendencies towards narcissism might be torpedo-happy, or perhaps that we need more women represented in the global political sphere.

At the very least, we ought to be aware of the consequences of these biases for decisions on war. An article in the New Scientist, referencing the same computer game study, ends with a quote from Peter Turchin of the University of Connecticut, USA. “This study fits within a relatively new field of research which connects motivations of individual people to their collective behavior,” says Turchin. “One wishes that members of the Bush administration had known about this research before they initiated invasion of Iraq three years ago,” he adds. “I think it would be fair to say that the general opinion of political scientists is that the Bush administration was overconfident of victory, and that the Iraq war is a debacle.”

Aside from ensuring greater awareness of these biases among voter bases across the world, perhaps the time has also come to move from PacMan to Ms. PacMan, with greater representation of women in politics. Even if behavior is too complex to sort into neat boxes, at the very least, it is fair. Besides, in the end, all I am saying is: give peace a chance.

Texting Our Way to the Polls, Three Friends at a Time

“What three friends would you like to hold accountable on Election Day?”

It’s three weeks until Election Day, and you are an active voter speaking to a civically engaged behavioral scientist. You provide the first names of three friends, Sam, Alex, and Jamie, along with your cell phone number. The behavioral scientist thanks you and says, “You’ll hear from me right before Election Day”. By doing this, you have just participated in one of the first attempts to use behavioral science to motivate voter turnout.

This new “Get Out The Vote” (GOTV) tactic utilizes three behavioral science strategies to increase voter turnout, and evidence suggests they significantly boost polling numbers. Political campaigners, however, have been slow to catch on. Through text reminders, implementation intentions, and social influence, campaigners can build on behavioral science research and shape a new generation of civic engagement. In this article, I will explain how.

Strategy 1: Text Reminders

Behavioral scientists are discovering the power of texting in multiple domains, including civic engagement, and have found that it may be more effective than traditional communication methods. Phone banks and mail reminders are impersonal, easy to ignore, and costly: the least expensive professional and personalized phone bank produces a vote at a cost of $19. [3] Face-to-face communication, such as canvassing, is more effective, but it is time-consuming and limited in scope. In a 2006 study, these traditional methods reached about 30% of a targeted population. Texts, by contrast, had an estimated contact rate of 80% of the target population and were much cheaper, costing at most $0.10 per recipient. [1] Finally, if you call an individual or knock on a door at an inconvenient time, your message is never received, a hindrance texting does not share.

Behavioral scientists therefore argue that texting is the solution. Recipients have little motivation to ignore a text, since texting carries relatively little spam compared with calls and email, and the recipient has the flexibility to read and respond to the message on their own schedule. In addition, young voters are increasingly reliant on handheld technology such as mobile phones and tablets, so a text reminder is likely to command their attention. Researchers found that text message reminders increased voter turnout rates by 4.1 percentage points over the control group, significantly more than the other tested strategies. [1]
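
Putting those cost figures together gives a rough sense of the economics. The back-of-the-envelope calculation below is my own arithmetic from the cited numbers, not a result reported in the studies:

    # Back-of-the-envelope sketch: cost per *additional* vote, derived
    # from the figures cited above (my arithmetic, not from the studies).

    TEXT_COST_PER_RECIPIENT = 0.10    # dollars per text, from [1]
    TEXT_TURNOUT_LIFT = 0.041         # +4.1 percentage points, from [1]
    PHONE_BANK_COST_PER_VOTE = 19.00  # dollars per vote produced, from [3]

    # If each $0.10 text raises a recipient's probability of voting by
    # 4.1 points, one extra vote costs roughly 0.10 / 0.041 dollars.
    text_cost_per_vote = TEXT_COST_PER_RECIPIENT / TEXT_TURNOUT_LIFT

    print(f"Text reminders: ~${text_cost_per_vote:.2f} per additional vote")
    print(f"Phone bank:      ${PHONE_BANK_COST_PER_VOTE:.2f} per additional vote")

On these assumptions a texted vote costs roughly $2.44; even if the true lift were half as large, texting would remain far cheaper than the $19 phone-bank vote.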

Strategy 2: Implementation Intentions

Texting strategies that appeal to accessibility and convenience are not the only way to improve voter turnout. Other research focuses on how structure and organization in the run-up to Election Day also weigh heavily on an individual’s decision to go out and vote. Findings suggest that individuals are most likely to vote if they plan for Election Day. By considering the structure of their day, what time works best for voting, and where they need to go to cast their vote, citizens will be prepared and more likely to vote when the day arrives.

In one study, participants in the implementation intention condition reported what time they would vote, where they would be coming from, and what they would be doing beforehand. Participants who shared these concrete implementation intentions were 0.9 percentage points more likely to vote than the control: a statistically significant result. [4] Researchers also analyzed households by number of eligible voters. They found that the implementation intention condition increased voter turnout in one-eligible-voter households by a statistically significant 2 percent over the control, and that none of the other conditions significantly increased turnout in these households. [4] In short, participants who formed a concrete voting plan and shared it with the researchers were more likely to vote.

Strategy 3: Social Influence

In addition to text reminders and implementation intentions, strategies aiming to boost voter turnout can build on the fact that people are highly influenced by their peers. Societal expectations that promote or hinder a behavior exert social influence, and people often strive to behave in ways that warrant acceptance in society. This influence is relevant because voting is a prosocial behavior: voters are well-regarded because they are active and engaged members of society. Researchers found that when participants believe others in their community will know whether they participated in the election, they are significantly more likely to vote.

In one study, participants in the social influence condition received a list of names of people in their neighborhood, including themselves, indicating who had voted in the previous election. The message informed participants that a similar list would be released to the neighborhood after the upcoming election. 37.8 percent of participants in this condition voted. This turnout is substantially higher than in a “civic duty” condition, where households read, “Remember your rights and responsibilities as a citizen. Remember to vote.” 31.5 percent of voters in this civic duty condition voted. In the control group, where participants did not receive a message, 29.7 percent voted. [2] The results suggest that participants were highly motivated to vote by the social influence of their neighbors.
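
To put those three figures side by side, here is a quick sketch (my own tabulation of the numbers above) computing each condition’s lift over the control in percentage points:

    # Turnout lifts over control, computed from the figures in [2] above.

    turnout = {
        "control (no mailer)": 0.297,
        "civic duty message":  0.315,
        "social influence":    0.378,
    }

    baseline = turnout["control (no mailer)"]
    for condition, rate in turnout.items():
        lift_pp = (rate - baseline) * 100  # percentage points over control
        print(f"{condition:22s} {rate:.1%}  (+{lift_pp:.1f} pp)")

The social influence mailer’s 8.1-point lift dwarfs the civic duty message’s 1.8 points.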

How to Apply these Voting Strategies

Considering the research backing these strategies, how could they be applied? Imagine that tomorrow is Election Day, and you receive the following text: “This is your reminder to vote on Tuesday, June 13. Do you know where your polling station is, and do you have a plan to get to the polls?” [Strategy 1: Text Reminder]. You think for a moment and respond, “Yes, I plan to vote at the library at 4 pm.” [Strategy 2: Implementation Intentions] Another text arrives: “Great, let’s make sure your friends Sam, Alex, and Jamie get to the polls too…” You committed to holding these friends accountable weeks ago; now you remember to follow through on that commitment. You also receive a sample text that you can copy and paste to your peers: “Hey Alex! Tuesday is Election Day. Do you know where and when you’re voting? I’m voting at 4 pm, and I hope to see you there.” [Strategy 3: Social Influence] Once you complete this step, all three strategies are activated for you and three of your peers. The effects can spread rapidly, increasing voter turnout with little expenditure of time or resources.
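
For the curious, the exchange above can be thought of as a simple three-step protocol. The sketch below is purely hypothetical: send_text() is a placeholder (a real deployment would call an SMS provider), and none of this code comes from the campaign described here.

    # Hypothetical sketch of the three-step flow described above.
    # send_text() is a placeholder; no real SMS API is assumed.

    def send_text(number: str, message: str) -> None:
        # A real deployment would call an SMS provider here.
        print(f"-> {number}: {message}")

    def election_day_flow(voter_number: str, friends: list[str]) -> None:
        # Strategy 1: the reminder text sent the day before the election.
        send_text(voter_number,
                  "This is your reminder to vote on Tuesday, June 13. "
                  "Do you know where your polling station is, and do you "
                  "have a plan to get to the polls?")
        # Strategy 2: the voter's reply (time and place) forms an
        # implementation intention; here we simply acknowledge it.
        send_text(voter_number,
                  "Great, let's make sure your friends get to the polls too.")
        # Strategy 3: prompt the voter to nudge the friends they named.
        for friend in friends:
            send_text(voter_number,
                      f"Sample text for {friend}: 'Hey {friend}! Tuesday is "
                      "Election Day. Do you know where and when you're "
                      "voting? I'm voting at 4 pm, hope to see you there.'")

    election_day_flow("555-0100", ["Sam", "Alex", "Jamie"])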

The behavioral scientists who employed this strategy had encouraging results during the Virginia gubernatorial primary election in June: 67 percent of participants responded to the texts, and over half of the responders affirmed contacting their three friends or family members about the election. The majority of participants attended to the reminder and followed through on their commitment. Because each participant reached out to three friends, the strategy’s reach, and with it the evidence-backed effects of each method, was multiplied threefold across these social networks. Participants received a text about their intentions to vote and to hold their friends accountable, developed and shared a voting plan with the behavioral scientist, and then alerted three friends, asking for their voting plans. By engaging their friends, the influence of societal expectations came into play. All parties benefitted from the three strategies: text reminders, implementation intentions, and social influence. According to the research, this increased voting numbers in the primary.

Behavioral science findings are only as powerful as their application. The behavioral scientists behind this strategy possessed two pivotal yet simple characteristics: the desire to promote civic engagement in young people, and an interest in behavioral science research. Anyone who shares these qualities can participate and apply evidence-based findings about human behavior to motivate voter action. Through text message reminders, setting implementation intentions, and activating social influence, a few engaged citizens reached dozens of potential voters. The next step is scaling up the strategy, so we can text our way to the polls, three friends at a time.