Nudging is a science, and its practitioners are scientists. More specifically, it is a type of applied science: taking results recorded in laboratories and field experiments and applying them to the real world. For policy-makers, this is an enormously valuable tool — policy decisions can be backed up by hard evidence, recorded time and time again by researchers across the world.
However, the transition from psychology journals into the real world is not always easy. Seeing which nudges actually work is a process of trial and error — recording the data, looking at the outcomes, and adjusting policies accordingly. In fact, one common criticism of nudging is that its results are too context-specific and cannot easily be replicated in different environments.
To its credit, ‘trial and error’ nudging has worked very well. It has been used successfully by governments all over the world; from tax-collection to urinals, nudging has provided behavioral solutions to social problems.
Thanks to data science, however, the future of government nudging looks quite different. Every year, the Behavioural Insights Team (BIT) — the British government's behavioral science unit — releases a report reviewing how behavioral insights have been used in British policy. This year, there was a crucial inclusion: the BIT has recently added a Data Science team, which aims to use the latest methods from data science, machine learning, and predictive analytics to make smarter policy implementations.
This is hardly surprising; given the rise in popularity and application of both data science and behavioral science, combining the two seems to be the next logical step. In fact, sophisticated data analytics have the potential not only to improve behavioral insights, but to transform how governments interact with their citizens.
Take machine learning, for example. Simply put, machine learning consists of training algorithms to find patterns in very large datasets. These algorithms consolidate information and adapt to become increasingly sophisticated and accurate, allowing them to improve automatically without being explicitly programmed.
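To make "learning without being explicitly programmed" concrete, here is a minimal sketch of one of the simplest learners there is: a one-rule classifier (a "decision stump") that discovers a decision threshold from examples instead of having the rule hard-coded. Everything here — the function name, the data, the speed/severity framing — is an illustrative assumption, not anything from the BIT's actual work.

```python
# A minimal sketch of learning a rule from data: a decision stump
# that finds the threshold best separating two labels. The data
# below is synthetic and purely illustrative.

def fit_stump(samples):
    """Find the numeric threshold that best separates the two labels.

    samples: list of (value, label) pairs, with label in {0, 1}.
    Returns the threshold t such that predicting 1 when value >= t
    misclassifies the fewest training examples.
    """
    best_t, best_errors = None, len(samples) + 1
    for t in sorted({v for v, _ in samples}):
        errors = sum((v >= t) != bool(label) for v, label in samples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Synthetic examples: (speed in mph, 1 = serious collision)
data = [(20, 0), (25, 0), (30, 0), (35, 0),
        (45, 1), (55, 1), (60, 1), (70, 1)]

threshold = fit_stump(data)          # learned, not hand-written
predict = lambda speed: int(speed >= threshold)
```

No one told the program that 45 mph matters; it recovered that boundary from the examples alone. Real machine-learning systems do the same thing with thousands of features and millions of records, which is why they can surface patterns no analyst would think to look for.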
The BIT’s application of these techniques has been fairly modest, but the results are hugely promising. The first major trial has involved trying to solve a road traffic problem in East Sussex, a small county on the south coast of the UK. For whatever reason, East Sussex has a disproportionately high number of fatal traffic collisions (64% higher than the national average). Faced with this problem, the local council has implemented a number of road safety initiatives to try to reduce speeding, encourage concentration at the wheel, and provide road users with information promoting safer driving.
Last year, the BIT tried to solve this problem with data science. Algorithms trained on over ten years of collected local data allowed the BIT to make extremely accurate predictions about which types of drivers would be more likely to be involved in serious traffic accidents. For example, they found that a collision between a person over 65 and a younger driver is more likely to result in a fatality if the 'younger' driver is aged 40–50. Previously undetectable behavioral patterns like these only emerge in large enough datasets. Most importantly, this allowed the BIT to better design and target road safety initiatives — to provide the right behavioral interventions for the right people.
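The kind of analysis described above can be sketched in a few lines: group collision records by driver age band and compare fatality rates across bands. This is a hedged toy illustration only — the records below are synthetic, the function is hypothetical, and a real analysis like the BIT's would use years of local data and far richer features than age alone.

```python
# Toy pattern-mining sketch: fatality rate per driver age band.
# All records are synthetic and for illustration only.
from collections import defaultdict

def fatality_rate_by_band(records, bands):
    """records: list of (driver_age, fatal) pairs, fatal in {0, 1}.
    bands: list of (low, high) inclusive age ranges.
    Returns {band: fatality_rate} for bands with at least one record."""
    totals = defaultdict(lambda: [0, 0])  # band -> [fatal_count, total]
    for age, fatal in records:
        for low, high in bands:
            if low <= age <= high:
                totals[(low, high)][0] += fatal
                totals[(low, high)][1] += 1
                break
    return {band: fatal / n for band, (fatal, n) in totals.items()}

# Synthetic records of collisions involving an over-65 party:
records = [(25, 0), (28, 0), (33, 0), (42, 1), (45, 1), (48, 0),
           (52, 0), (58, 0)]
bands = [(17, 29), (30, 39), (40, 49), (50, 64)]

rates = fatality_rate_by_band(records, bands)
riskiest = max(rates, key=rates.get)   # the band to target first
```

Once the riskiest group is identified this way, a road safety campaign can be aimed at exactly that group rather than at drivers in general — which is the "right interventions for the right people" point in practice.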
Right now, these models have only been applied to small-scale road safety initiatives, but their potential to solve major social problems is clear. The amount of data we amass, individually and as a society, is staggering: by some estimates, we now generate as much data every two days as humanity did from the dawn of civilization up until 2003. All of our online interactions, our purchase histories, our medical records, our government information — it all leaves a digital footprint. When datasets are so large, behavioral predictions can be startlingly accurate. Michal Kosinski has already used digital footprints left behind on online platforms and devices to study, and anticipate, human behavior and psychological traits. His models have been able to predict people's psychological traits, behavior, sexuality, and even who they will vote for.
How does this relate to better policy-making? As the BIT showed, instead of applying and re-applying nudges as ‘best-guesses’, governments can tailor very specific, personalised behavioral nudges to individuals and small groups. If Kosinski and his team can make extremely accurate predictions about an individual’s private preferences based on a fairly limited amount of social media data, imagine how accurately governments could design and target the right behavioral nudges.
Things get really interesting when you consider countries like Estonia and Finland — both of which generate vast amounts of open government data about the behavior of their citizens. In Estonia, 99% of public services are available online (including voting, paying taxes and access to medical records) — and citizens can even register new companies digitally from their smartphones, in a matter of minutes. In principle, as governments like Estonia and Finland continue to accrue enormous sets of behavioral data — in so many different domains — they will, with the right tools, be able to develop the most powerful and well-targeted behavioral nudges possible.
What this means for the future of data protection and behavioral interventions is complex. Questions about how much data access policy designers should have are extremely contentious and will continue to be. One thing is for sure, though: governments have already made enormous progress in scaling behavioral nudges using insights drawn from fairly modest local records. As the techniques become more sophisticated and the datasets grow larger, the old scientific approach of trialing 'best guesses' could soon be replaced by machines that learn and improve automatically. Policy design does not get smarter than that.