Humans and AI: Rivals or Romance?

Artificial intelligence (AI) has been developing at a frightening pace. It is debatable to what extent it has improved our lives – being able to use geolocation to find the best restaurants or places of interest is great; at the same time, however, AI is eliminating plenty of jobs, and fast. A frequently cited report estimates that a staggering 47 per cent of jobs in the US could soon be automated [1]. Another study suggests that 45 per cent of the daily tasks currently done by humans could be automated if current trends continue [2]. These numbers are hard to fathom, considering that the worst unemployment on record came during the Great Depression of the 1930s, when an estimated 25 per cent of the US workforce was out of work.

In our most recent book, we mentioned the case of a CFO at an investment bank. Last year, he was given the task of reducing his staff by 80 per cent because off-the-shelf digital technologies could do the jobs currently held by humans [3]. And in 2017 we saw large banks close record numbers of physical branches, making thousands of people redundant in the process. Judging by this, humans are starting to look like horses just before the arrival of the automobile.

The (human) empire strikes back

It’s certain that we will hear more and more alarmist accounts. However, we have seen it before – many times, in fact. Back in 1963, it was J F Kennedy who said, “We have a combination of older workers who have been thrown out of work because of technology and younger people coming in […] too many people are coming into the labor market and too many machines are throwing people out” [4]. Going further back, when the first printed books with illustrations started to appear in the 1470s in Germany, wood engravers protested as they thought they would no longer be needed [5].

But this all raises one question: if technological progress represents such a comprehensive threat to humans, why are there any jobs left? In fact, many of us are still working, probably much harder than before. The answer: machines and humans excel at different activities. Machines are frequently no match for the human mind, our senses and our dexterity. Even though Amazon's warehouses are heavily automated, for example, humans are still required to do the actual shelving.

And this doesn't only apply to physical jobs. The real story behind today's AI is that it cannot function without humans in the loop. Google is thought to have 10,000 'raters' who look at YouTube videos or test new services. Microsoft, meanwhile, runs a crowdsourcing platform called the Universal Human Relevance System to handle a great many small tasks, including checking the results of its search algorithms [6]. This blend of AI and humans, who follow through when the AI falls short, is not going to disappear any time soon [7]. Indeed, demand for such on-demand human intervention is expected to keep growing. The 'human cloud' is set to boom.

Closer together

The above illustrates a very important lesson – humans will still be needed. The key is how to integrate humans and machines in various activities, and how to steer AI towards the creation of new economic interfaces rather than the mere replacement or displacement of existing ones. At the moment, AI typically gets things right between 85 and 95 per cent of the time; humans, on comparable tasks, generally score 60 to 70 per cent. On this basis alone, it would seem we need only machines and not humans.

Yet in some highly data-driven industries, such as financial and legal services, there is no room for error – a single mistake can result in huge costs in the form of economic losses or expensive lawsuits. Machines by themselves are not enough. Furthermore, AI can only run an algorithm that has been predefined and trained by a human, so a margin of error will always exist. When mistakes occur, AI cannot fix them; humans, by contrast, can create solutions to problems. We believe the best approach is to let machines run production up to roughly 95 per cent accuracy, and to supplement this with human engineers who mitigate the remaining risk and, ideally, push accuracy even higher.
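To make this division of labour concrete, here is a minimal sketch of a human-in-the-loop pipeline. It is purely illustrative – the toy model, the 95 per cent confidence threshold and the review queue are hypothetical stand-ins, not a description of any particular production system – but it shows the basic idea: the machine handles the cases it is confident about, and everything else is escalated to a human.

```python
# Illustrative sketch only: the model, threshold and review queue are
# hypothetical stand-ins, not a description of any real production system.

CONFIDENCE_THRESHOLD = 0.95  # let the machine decide only when it is confident


class ToyClassifier:
    """Stand-in model returning a (label, confidence) pair for a document."""

    def predict(self, document):
        # A real model would score the document; here we fake a confidence.
        confidence = 0.99 if "invoice" in document else 0.60
        return "invoice", confidence


def classify(model, document, review_queue):
    """Automate confident cases; escalate uncertain ones to a human reviewer."""
    label, confidence = model.predict(document)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                  # automated path: the easy majority
    review_queue.append(document)     # human engineer resolves the rest
    return None                       # decision deferred to human review


if __name__ == "__main__":
    queue = []
    model = ToyClassifier()
    print(classify(model, "invoice #42", queue))     # handled by the machine
    print(classify(model, "ambiguous memo", queue))  # deferred: None
    print(queue)                                     # ['ambiguous memo']
```

The point is not the code itself but the architecture: the higher the threshold, the more work falls to humans, and the fewer costly errors slip through unreviewed.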

Humans and machines will – and must – work together. As business consultants, educators and policy advisors, we all strongly believe that, ultimately, what really matters is how to prepare people to work increasingly closely with machines.

Three Thought Patterns Which Let Advertisers Influence You on Social Media

Advertising has a staggering impact on what we buy, what we do, and how we behave. Some ad campaigns have single-handedly triggered international shifts in culture and consumption. One of the most famous examples is Gillette's, about a century ago. The company decided to expand its product range to include women's razors, on the off-chance they would catch on, and introduced adverts for the new product. In doing so, it created consumer demand that now extends across most of the western world, along with a trend for women's body hair removal that had not existed before Gillette's campaign.

The impact of Gillette's marketing demonstrates just how much advertising can influence our behavior, both individually and collectively. Now, with the ubiquity of social media, advertisers see new opportunities: worldwide budgets for social media advertising are predicted to soon double from their 2014 levels, and revenue from these efforts has already more than doubled, according to Statista.com. Given that advertisers are constantly refining their social media tactics, it is increasingly important to ask ourselves: how (and to what end) might they be trying to manipulate us on social media?

What do we know about the science of advertising?

The science of advertising has long been a topic of public interest and a subject of research. Some of the earliest theories instilled public fear about the apparent use of subliminal advertising – in other words, influencing people below the threshold of consciousness. James Vicary conducted a famous experiment in 1957, in which the words "Eat Popcorn" and "Drink Coca-Cola" were flashed up on a cinema screen so fast that the human mind could not consciously process them [1]. The audience in question did not register having seen the messages. Vicary reported, however, that popcorn and Coca-Cola sales increased dramatically (by 57.5% and 18.1%, respectively) after the audience had been unknowingly exposed to them. The idea that advertisers could influence people's behavior with such ease, and without their knowledge, was an alarming prospect, and one which caused fear and backlash among the general public. Fortunately, Vicary's experiment was revealed to be a hoax, and the initial fears about subliminal messaging were assuaged.

In more recent years, research has found that the secrets behind advertising are not as mysterious as they once appeared; instead, they are firmly grounded in knowledge about the realities of human behavior. An extensive body of social psychology research has explored the processes of influence and persuasion; normative influence (the pressure to conform to the majority) and informational influence (our instinct to defer to a more knowledgeable party) have been identified as salient forms of influence [2], and have frequently been used to explain our susceptibility to advertising [3]. However, research has only recently begun to look at how these techniques are applied to advertising within the world of social media.

How and why do these techniques work so effectively on social media?

In the past, advertising consisted largely of television ads, billboards, packaging – essentially, the aim was for the advert itself to directly engage the attention of whoever caught sight of it. However, the rise of social media has allowed adverts to make use of a new dimension: the viewer’s exposure to other viewers. Three of our own thought patterns make us particularly easy prey to this aspect of social media advertising:

1. ‘Just act normal.’

As advertisers have long known, normative influence works on us because we instinctively avoid censure from the majority (Deutsch & Gerard, 1955). Adverts have always been able to claim that huge numbers of people use and love a product, but only by explicitly telling the viewer the figures. This is where social media comes in: it can show rather than tell. On many social media channels, advertisers have no need even to mention other consumers' preferences; instead, they can instantly let us see for ourselves how many others have 'liked' their product, are 'interested in' their event, or 'follow' their page.

Indeed, research has found that we are more inclined to ‘like’ something which is well-‘liked’ already; teenagers (incidentally, the biggest users of social media) are particularly vulnerable to this type of majority influence [4]. Consumer behavior can even be influenced by low-consensus information [5] – in other words, seeing that even a few others have ‘liked’ a promotional post on Instagram can (for some people) still act as an incentive to ‘like’ it too. This makes social media a useful platform even for little-known products.

2. ‘Maybe I’m wrong.’

Informational influence [2] means that we seek validation from other people's answers or opinions when we doubt our own judgement: if we are unsure of a decision, we look to see what the people around us have decided – particularly if they are deemed more 'expert' than we are – and we are swayed by their choice. For this reason, it has long been accepted in advertising that social proof sells, particularly if a renowned expert agrees to promote a product. Social media provides more avenues than ever before for expert endorsements [6]. Many social media 'experts' – particularly the new class of the #instafamous, vloggers, and fitness Instagrammers – make a living by endorsing products and businesses on their channels, which is testament to how effective these endorsements are. These individuals use features like Instagram and Snapchat stories to make their endorsements seem more informal, unpolished, and 'real'.

3. ‘Us and them.’

Another type of influence which is increasingly being identified in research is the appeal to ‘social identity’. People love to self-categorize, framing their identity using social groups [7]. As a result, we are more likely to remember adverts which appeal to our social identity, and to forget those which do not. In the past, this has created a dilemma for advertisers: which social groups should they pitch to? Social media makes this less of a problem, because advertisers are able to track an individual’s social media usage and their search history, and therefore tailor adverts to each person’s ‘social identity’ [8], ensuring we see an advert which appeals to us [9]. Many social media platforms now even give the user the option of selecting which advert they want to watch, thus creating an even more personalised advertising experience and increasing the likelihood that the viewer will engage with the advert.

Can we resist these mind games?

According to some broader research on informational influence, one way in which we can resist it is by creating our own certainty. Alexander, Zucker, and Brody (1970) performed an experiment which demonstrated that if people have enough information to be certain of their own decision, they do not succumb so strongly to informational influence [10]. Their study focused on a problem-solving task, yet the conclusion can nonetheless be applied to the decisions we make about whether an endorsed product is worth spending our money on. Are the ‘Instafamous’ really the best source of expertise on which new gadget or gear to buy? We can equip ourselves better against persuasion if we do our own research first.

In theory, the most easily resisted phenomenon is normative influence, which usually produces behavioral conformity but not a genuine change of mind. We might follow a new enterprise on Instagram because our friends have raved about it, and buy its products to show that we share their tastes, while in reality not being fully convinced of its merits. This lack of 'cognitive change' might be thought to leave more scope for independent behavior to win out. And although some research demonstrates high rates of conformity, Liu (2008) also found that social media profiles act as "taste performances" [11], in which people deliberately depict how their tastes differ from others'. Could this motivation work against the majority influence encouraged by advertisers? Liu's study focused only on MySpace profile presentation, but future research could usefully investigate the relationship between the desire for differentiation and 'liking' or purchasing activity on platforms like Instagram and Facebook.

The instinct for self-categorisation is perhaps the hardest to resist. Research suggests that self-categorisation into social groups has been advantageous in evolutionary terms [12]. However, it can easily be turned to our disadvantage when advertisers use it for their own interests. While opting out of online tracking is not yet a transparent process, we can help ourselves to some extent by remaining aware of advertisers' aims. Research has found that a message becomes less effective when somebody is obviously trying to persuade us [13]; this also applies to overly blunt messaging by advertisers (e.g., 'Buy our product'). By this logic, if we remain aware that adverts are deliberately tailored to appeal to us (and therefore to sell to us), we are less likely to succumb.

Precommitment and Procrastination: Behavioral Tools for Students

Imagine being a student today. Every time you sit down to open your books, your phone buzzes or your laptop pings. Entire films and television series are accessible at the click of a button. Celebrities, friends and family continuously post updates on almost every platform imaginable. Modern technology has been designed to demand all of our attention, all of the time, and the tactics for doing so are becoming increasingly sophisticated. The temptation to put off your work is stronger, and easier to give in to, than ever.

Procrastination (from the Latin pro, 'for', and cras, 'tomorrow') is a special case of our more general present bias: the tendency to give stronger weight to payoffs closer to the present when weighing future trade-offs [1]. It is a familiar feeling: we all make noble plans about how much we are going to study, how we will hand in all of our assignments on time, and how we are going to be more focused and productive than ever. All of this starting tomorrow, of course. When it comes to it, we tend to choose the instant gratification of 'just one more' YouTube video over long hours spent studying, which, given our present-biased preferences, feels difficult, boring and daunting. Instant gratification is all well and good, but it can seriously disrupt our more rational, long-term study goals. In fact, the evidence suggests that almost all students procrastinate more than they would like to.

So, in a world designed to exploit our craving for instant gratification and our short attention spans, is there any hope for the student? Well, the good news is that behavioral insights can offer students smarter strategies to block out temptation and focus on their work.

Smarter precommitments

First, a studying classic: precommitment strategies. These involve blocking off some of our future choices, in the knowledge that we will not have the will-power to resist them later. The result is that we can make plans that are more consistent with our long-term goals, without succumbing to instant gratification. None of this is particularly new to the student. There are many tools which allow web users, in their more rational and future-oriented moments, to block access to distracting websites, so that in their weaker moments they cannot stray from their work. Popular examples of these tools include 'Block site' (for Chrome) and 'SelfControl' (for Mac), the latter of which is particularly difficult for students to disable once configured.

Interestingly, research has shown that self-imposed precommitment strategies are less effective at improving self-control than externally imposed ones [2]. So if students really want to see their behavior improve through precommitment, they would be better off having friends set their deadlines for them.

What's more, incorporating small rewards into our precommitment strategies can further improve our self-control. The idea is that by associating our precommitments with small bonuses, we become more willing to stick to them. A beautiful example of this kind of tool is 'Forest', a precommitment app which invites users to block out distracting sites for a set amount of time, and then displays a tree growing on the screen, from seedling to full blossom. Users are given the option to quit their commitment, but if they do so before the allotted time is up, they kill their tree and watch it wilt before their eyes. The function is simple, but the idea is extremely insightful: we are incentivized to watch our tree grow to full size because of the satisfaction it gives us; conversely, it is painful to watch it die, especially if we have invested a great deal of time in growing it.

The importance of feedback

Next, receiving regular feedback on procrastination habits could be key to helping students behave more rationally when working. Evaluating our own behavior is not easy, and sometimes we need help. Accessible feedback makes important information more salient, and so helps us make better-informed decisions. This is exactly the lever used by smart energy meters, which aim to show customers, in accessible monetary terms, exactly how much their energy usage is costing them [3].

Similar tools can be applied to self-control strategies: if students regularly reminded themselves of how much time they had spent procrastinating (that day, that week, that month, and so on), they would be able to make better-informed decisions about how they actually behave when they are meant to be studying. This is particularly important for procrastination because it is such a mindless habit. When we find ourselves scrolling through our Twitter feed instead of working, we are often acting almost automatically, as if we had no control over our actions. Regular feedback can help students be more mindful of their wasted time and avoid slipping into mindless three-hour procrastination spirals. In fact, research tells us that receiving long-term feedback on a task can help students behave more rationally when tackling similar tasks in the long run [4].

An extremely effective strategy in the fight against procrastination, then, is to keep track of these habits. Students should check their browsing history and note how much time they actually waste when studying (and, perhaps, on what). If they can, they should keep a log of this information; it could be vital in helping them behave more rationally and mindfully when they approach their work. One tool that can give students this kind of feedback is Moment, which tracks a user's phone usage and notifies them when they have used their phone too much.
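For students who prefer a do-it-yourself version of this feedback, a minimal sketch of the idea is shown below. It assumes an entirely hypothetical log format (date, site, minutes) that a student might keep by hand or export from their browsing history; the site list and figures are invented for illustration.

```python
# Hypothetical example: the log format, site list and figures are invented
# for illustration; apps such as Moment gather this kind of data automatically.

from collections import defaultdict

DISTRACTING_SITES = {"youtube.com", "twitter.com", "instagram.com"}

# Each entry: (date, site, minutes spent), kept by hand or exported from
# a browsing history.
log = [
    ("2024-05-01", "youtube.com", 45),
    ("2024-05-01", "wikipedia.org", 30),
    ("2024-05-01", "twitter.com", 25),
    ("2024-05-02", "instagram.com", 60),
]


def wasted_minutes_per_day(entries):
    """Sum the time spent on distracting sites, grouped by day."""
    totals = defaultdict(int)
    for date, site, minutes in entries:
        if site in DISTRACTING_SITES:
            totals[date] += minutes
    return dict(totals)


print(wasted_minutes_per_day(log))  # {'2024-05-01': 70, '2024-05-02': 60}
```

Even a crude daily tally like this makes the scale of the problem salient, which is precisely the feedback effect described above.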

Leave your phone at home

Finally, students would be wise to work in a different room from their phones altogether. First of all, notifications and phone calls – including those that are ignored – can be extremely disruptive to our workflow [5]. The research in question found that a three-second distraction (the time it takes to silence a ringing phone) during a basic sequencing task is enough to significantly disrupt attention, roughly doubling the number of errors made once the task is resumed, compared with before the interruption.

Even more surprisingly, recent research indicates that the mere presence of a phone (even when switched off) can reduce students' cognitive capacity [6]. The explanation of these results is illuminating: evidence suggests that our attentional and cognitive resources are finite, and are depleted as a function of task demands – a theory known as 'decision fatigue' [7]. The basic idea is that every time we make a decision, we use up some of these finite resources, affecting the quality of our future decisions. Whenever we look at our phones and manage to resist the temptation of switching them on, we make it more difficult for our future selves to exercise the same level of self-control. In the end, our will-power and attention dwindle, and we simply cannot keep ourselves focused any longer. A smart way of delaying the onset of cognitive fatigue is to minimize the number of decisions we have to make when studying, and that includes leaving our phone somewhere it cannot tempt us.

Every day, students are forced to fight off an enormous number of distractions and temptations. The reality is that this fight cannot be won with will-power alone; the attention-grabbing techniques have simply become too prevalent and too sophisticated. Luckily, behavioral insights can equip students with smarter studying strategies, aligning their daily habits with their long-term academic goals.

A Better Explanation Of The Endowment Effect

It’s a famous study. Give a mug to a random subset of a group of people. Then ask those who got the mug (the sellers) to tell you the lowest price they’d sell the mug for, and ask those who didn’t get the mug (the buyers) to tell you the highest price they’d pay for the mug. You’ll find that sellers’ minimum selling prices exceed buyers’ maximum buying prices by a factor of 2 or 3 (.pdf).

This famous finding, known as the endowment effect, is presumed to have a famous cause: loss aversion. Just as loss aversion maintains that people dislike losses more than they like gains, the endowment effect seems to show that people put a higher price on losing a good than on gaining it. The endowment effect seems to perfectly follow from loss aversion.

But a 2012 paper by Ray Weaver and Shane Frederick convincingly shows that loss aversion is not the cause of the endowment effect (.pdf). Instead, “the endowment effect is often better understood as the reluctance to trade on unfavorable terms,” in other words “as an aversion to bad deals.” [1]

This paper changed how I think about the endowment effect, and so I wanted to write about it.

A Reference Price Theory Of The Endowment Effect

Weaver and Frederick's theory is simple: Selling and buying prices reflect two concerns. First, people don't want to sell the mug for less, or buy the mug for more, than their own valuation of it. Second, they don't want to sell the mug for less, or buy the mug for more, than the market price. This is because people dislike feeling like a sucker. [2]

To see how this produces the endowment effect, imagine you are willing to pay $1 for the mug and you believe it usually sells for $3. As a buyer, you won't pay more than $1, because you don't want to pay more than it's worth to you. But as a seller, you don't want to accept as little as $1, because you'd feel like a chump selling it for far less than it usually sells for [3]. Thus, because there's a gap between people's perception of the market price and their valuation of the mug, there'll be a large gap between selling ($3) and buying ($1) prices.

Weaver and Frederick predict that the endowment effect will arise whenever market prices differ from valuations.

However, when market prices do not differ from valuations, you shouldn't see the endowment effect. For example, if people value a mug at $2 and also think that its market price is $2, then both buyers and sellers will price it at $2.
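As a back-of-the-envelope illustration of this logic, here is a toy sketch (a simplification for illustration only, not a model taken from Weaver and Frederick's paper): suppose buyers offer the lower of their valuation and the perceived market price, while sellers demand the higher of the two.

```python
# Toy sketch of the reference-price account (an illustrative simplification,
# not Weaver and Frederick's formal model): buyers won't pay more than either
# their valuation or the perceived market price; sellers won't accept less
# than either.

def buying_price(valuation, perceived_market_price):
    return min(valuation, perceived_market_price)


def selling_price(valuation, perceived_market_price):
    return max(valuation, perceived_market_price)


# Valuation $1, perceived market price $3: the classic endowment-effect gap.
print(buying_price(1, 3), selling_price(1, 3))   # 1 3

# Valuation equals perceived market price ($2): the gap disappears.
print(buying_price(2, 2), selling_price(2, 2))   # 2 2
```

In this toy model, the gap between selling and buying prices appears exactly when valuation and perceived market price diverge, and vanishes when they coincide.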

And this is what Weaver and Frederick find. Repeatedly. There is no endowment effect when valuations are equal to perceived market prices. Wow.

Just to be sure, I ran a within-subjects hypothetical study that is much inferior to Weaver and Frederick's between-subjects incentivized studies, and, although my unusual design produced some unusual results, I found strong support for their hypothesis (full description .pdf; data .xls). Most importantly, people who gave higher selling prices than buying prices for the same good were much more likely to say they did this because they wanted to avoid a bad deal than because of loss aversion.

In fact, whereas 82.5% of participants endorsed at least one bad-deal reason, only 18.8% of participants endorsed at least one loss-aversion reason. [4]

I think Weaver and Frederick’s evidence makes it difficult to consider loss aversion the best explanation of the endowment effect. Loss aversion can’t explain why the endowment effect is so sensitive to the difference between market prices and valuations, and it certainly can’t explain why the effect vanishes when market prices and valuations converge. [5]

Weaver and Frederick’s theory is simple, plausible, supported by the data, and doesn’t assume that people treat losses differently than gains. It just assumes that, when setting prices, people consider both their valuations and market prices, and dislike feeling like a sucker.

Notes:

  1. Even if you don’t read Weaver and Frederick’s paper, I strongly advise you to read Footnote 10.
  2. Thaler (1985) called this “transaction utility” (.pdf). Technically Weaver and Frederick’s theory is about “reference prices” rather than “market prices”, but since market prices are the most common/natural reference price I’m going to use the term market prices.
  3. Maybe because you got the mug for free, you’d be willing to sell it for a little bit less than the market price – perhaps $2 rather than $3. Even so, if the gap between market prices and valuations is large enough, there’ll still be an endowment effect.
  4. For a similar result, see Brown 2005 (.pdf).
  5. Loss aversion is not the only popular account. According to an “ownership” account of the endowment effect (.pdf), owning a good makes you like it more, and thus price it higher, than not owning it. Although this mechanism may account for some of the effect (the endowment effect may be multiply determined), it cannot explain all the effects Weaver and Frederick report. Nor can it easily account for why the endowment effect is observed in hypothetical studies, when people simply imagine being buyers or sellers.