Tempting the Creation of Habits

In August, Panera Bread had this amazing deal.  A free bagel every morning.  For the entire month.  For free.  Even their specialty cinnamon crunch bagel.  Did I mention it was free?  And so I began driving a different way to work.  I didn’t have to make breakfast in the morning.  I picked up my bagel.  And so often couldn’t resist the Chai Tea Latte.

And then September dutifully came.  It seemed I had forgotten how to even make breakfast. I forgot my old route to work, too —  I was used to driving the way that passes the Panera.  And when I saw the sign, like Pavlov’s dogs, I began to drool.

Taking a similar tack, this past May, my local Dunkin Donuts had a promotion for 99 cent iced tea.  Now, I should be paid to say this, but Dunkin Donuts really does make amazing iced tea.  And when it’s only 99 cents? Even better.  Which size cup? Any size.  Clearly I’m going to get the large.  And so for hours, I’d sip on my large iced tea.  It became a fixture of my classroom when I’d teach — the huge cup that I’d drink all day.  And then, in the heat of the summer, Dunkin Donuts raised its price.

Not only did I have to fight a physiological habit of a steady cool stream of caffeine, but I now had to fight a psychological habit of having my cup all day long.

Habits are powerful, and the above examples illustrate how two companies were willing to lower costs initially to reel in consumers, hoping to make having a morning bagel or an iced tea as habitual as putting your shoes on.

Why do habits work?

When we are used to doing something, the behavior becomes automatic.  That is, it doesn’t require cognitive effort or planning.  Consider how hard it is for a five-year-old to learn to tie shoes.  I’ll assume that if you are reading this, you can probably do it much faster (while having a conversation, even).  When something becomes automatic, it takes less effort.  If every day you get up and stretch, at first it’s hard to remember and to motivate yourself.  But after a while, you can do it half-asleep.  It’s simply what you do.  It follows, therefore, that we should habitualize as many healthy behaviors as we can: drinking water, brushing teeth, washing our hands before cooking, stretching.

When routines are created, we create associations that cue us to future expectations.  The human brain is designed to notice cause and effect.  Pink and orange mean Dunkin Donuts.  The street where the Panera is means I’m about to eat some carbs.  Those associations provide cues, and we become primed (mouth drools).  For example, I rarely go to the movie theatre without ordering popcorn.  For me, this is part of the experience.  A movie theatre is a cue for those who normally eat popcorn while watching movies.  In one study, those who habitually eat popcorn at the movies ate more popcorn at a movie theatre than non-habitual popcorn eaters (Neal, Wood, Wu, & Kurlander, 2011).  This is not surprising.  However, Neal et al. manipulated another variable — whether the popcorn was stale or fresh.  Unlike the non-habitual eaters, those who usually eat popcorn at the movies seemed to eat it mindlessly, regardless of hunger or freshness.

So, how do we break habits?

That same study showed that the result of habitual popcorn eaters eating stale popcorn only occurred in the setting of a movie theatre.  When they watched movie clips in a conference room, they ate less popcorn when it was stale or when they weren’t hungry.  The setting of a movie theatre was a cue to eat popcorn.  In a second study, Neal et al. required the popcorn eaters to eat with their non-dominant hand.  This disruption also broke the habitual tendency; participants now ate less popcorn when it was stale or if they weren’t hungry.

So, to break a habit, we need to disrupt a cue.  We need to make a conscious effort to avoid cues that may tempt us to engage in our automatic tendencies.  I can’t drive blindfolded, but maybe I could avoid the route to work that passes the Panera.  Or maybe I could decide on my breakfast plan the night before.

In general, research has shown that recruiting the goal-directed (non-automatic) part of our brain is important in creating and breaking habits.  For example, in one study examining the creation of a flossing habit, half of the subjects were assigned to write about when and where they would do their once-daily flossing (Orbell & Verplanken, 2010).  Those who made an implementation plan flossed twice as many days as those without one.

Habitual behavior is neither inherently good nor bad; it is simply a powerful tendency of our brain that can turn us into iced tea junkies or flossing fanatics.  This design likely evolved to conserve scarce cognitive resources — allowing us to perform functions without thinking much about their execution.

But sometimes we need to force ourselves to think. We need to make plans, and at least try to remove the cues that lead us to suboptimal behavior (like, say, continuing to buy a bagel every day at full price). By definition, if a great deal is what drives us to a product, then our value function for the product should change when that great deal is removed. If it doesn’t, you can bet that our tendency toward the familiar has been unconsciously factored in.

Put simply, when we don’t evaluate the cues driving our behavior — taking stock of what they are, and who is setting them — we fall prey to our worst inclinations and automatic actions. It’s something you can be sure bagel and tea salesmen the world over are banking on.

Are You Making Bad Financial Decisions Because of Information Avoidance?

We live in the age of information, when more and more data is becoming available to us for free and with little effort. We can view our banking statements online and receive mobile notifications for every transaction. Digital tools can automatically consolidate our income, spending, debt, and savings from a number of different accounts, group them by category, and provide daily, weekly, or monthly snapshots. They can also help us to set up our budget and goals, and track our progress towards them. For people who are in financial difficulty or just want to learn the basics of personal finance management,  there are many resources with free financial help and information.

How well do we know our finances?

In theory, with all this information at hand, a lot of people should be able to stay on top of their finances, budget, plan, and ultimately, make better financial decisions. Indeed, the assumption of rationality at the core of standard economic models stipulates that actors perfectly absorb all available information, and make decisions on the basis of this search. In reality, however, surveys in different countries indicate that consumers’ knowledge of their finances can be very poor. US consumers were found to significantly underestimate their credit card and student debt [1], and 48% of balance-carrying cardholders didn’t know their APR. In Australia, a survey in 2017 found that 75% of people didn’t know the size of their credit card debt, while 41% of mortgage holders had no idea about their mortgage rate, and 49% didn’t know their credit card interest rate. According to the most recent National Savings and Investment survey, 29 million Britons worried about their finances, but 73% of these never sought advice or guidance. Why do so many people make little use of all the available information, which could help them make important decisions about credit card repayments, savings, and debt?

Why, when and how do we avoid information?

As noted above, traditional economics suggests that having more relevant information is always better. Nevertheless, people sometimes purposefully choose not to access potentially helpful information, even when it is free and readily available, or ignore such information, even when it has been directly provided to them. In behavioral science, this phenomenon is known as information avoidance, and spans many everyday behaviors related to “personal health, financial affairs, religious issues, relationship issues, and political issues” [2].

Avoiding exposure to certain information or not paying attention to it is quite common, but is not observed in every situation that has the potential to engender a negative affective reaction. An online experiment conducted in 2018 investigated how personal characteristics and characteristics of the potential threat affect the likelihood of information avoidance [3]. The highest rates of information avoidance were observed in the experimental group where participants had high levels of potential losses, high perceived relative risk, and could only have a small impact on their probability of losing money. Interestingly, no effects were found for each of these factors individually. The likelihood of information avoidance was also associated with gender, anticipated reaction to losing money, coping style, and locus of control (i.e. whether the person believes that outcomes depend on external factors such as fate, or on their own behavior).

Information avoidance can take many different forms. Golman, Hagmann and Loewenstein (2017) reviewed theoretical and empirical research across different disciplines and listed the following tactics: physical avoidance, inattention, biased interpretation, forgetting and self-handicapping [4]. Narayan et al (2011) asked people to keep daily diaries of their information-related activities, from which they categorized all information avoidance behaviors into passive and active [5]. The former relates to long-term avoidance of information, which can interfere with one’s existing beliefs or perception of self, thus causing cognitive dissonance. Personal finances are normally associated with the latter, which is a stress-coping mechanism. It gets activated when a person has already had a negative emotional reaction to some piece of information, so any further information seeking is blocked to prevent more distress.

This corresponds to what Golman et al (2017) call hedonic reasons for information avoidance — namely “a desire to avoid bad news because it will make one feel bad” [4]. Such reasons include risk, loss, and disappointment aversion, as well as anxiety, optimism maintenance, and others. The second broad category of reasons identified by the authors is strategic information avoidance, which can be further split into interpersonal and intrapersonal types. Information avoidance in interpersonal interactions can be used to influence other people’s actions, with many examples found in game theory. Intrapersonal reasons relate to avoiding information for commitment or self-control purposes, such as to resist temptations or maintain motivation. For example, if a person knows that receiving certain information about his investments’ performance is likely to lead to an emotional response, perhaps resulting in panic selling, overtrading, or other suboptimal decisions, this person may prefer to “buy and forget”. In such a case, information avoidance can actually correct for a harmful heuristic.

So, what’s the problem?

Strategically avoiding information can therefore be sensible and result in better outcomes. What about avoiding information for hedonic reasons? If information carries negative utility from an emotional perspective, wouldn’t we be better off without it? Research on information avoidance with respect to health issues suggests that it is more likely to occur when no treatment is available [5]. Intuitively this makes sense, as in this case information is likely to carry few benefits for decision-making, and only to cause stress and anxiety. It is hard to use the same argument for situations related to personal finances, though, because in most cases the problems can be mitigated, if not fully resolved. According to the Money Advice Service research, the key non-demographic factors determining current financial well-being include financial confidence, managing credit, active saving, financial engagement, and considered spending [6]. Avoiding information about personal finances, especially debt and expenditure, can therefore be a major barrier to one’s financial well-being. This leads to the question of what can be done to overcome it.

Is there a solution?

Recent research has provided some encouraging results which indicate that merely thinking about the consequences of information avoidance can reduce one’s probability of avoiding information [7]. In two experiments, participants were prompted to contemplate the risks and benefits of debt-related information avoidance by completing a questionnaire or watching a video. They were then asked to provide certain information used to calculate their own risk of debt problems, and were offered the chance to view this risk. In both experiments, the proportion of people who refused to view their risk was significantly lower in the treatment group than in the control. In a third experiment, people applying for a loan in a credit union received either a standard application form or a revised form which included items to prompt contemplation. People who received the revised form provided more expenditure information and higher overall expenditure estimates, which reduced the discrepancy with the estimates calculated by the credit union staff.

Another potential solution is to use technology to analyze consumers’ financial information, and give specific recommendations to help people manage their savings, expenses, and debt. There are now an ever-increasing number of financial apps providing such services. Taken together, these developments bode well for the potential to mitigate the negative effects of information avoidance — particularly with respect to financial decisions. How these improvements translate to other domains of information avoidance remains a question for future research.

Algorithms for Simpler Decision-Making (1/2): The Case for Cognitive Prosthetics

Our cognitive functions are increasingly being outsourced to computational algorithms, simultaneously enhancing our decision-making capabilities and manipulating our behavior. Digital spaces, where information is more accessible and more affordable than ever before, provide us with insights and data for us to use at will. Nowadays, a simple Google search can take on the role of financial advisor, lawyer, or even doctor. But the information we find online is silently sorted, ordered and presented by algorithms that delve through our digital data traces for the most relevant, most ‘likable’ media to feed us. In many ways, this unseen curation is a welcome convenience; sifting through and reasoning with seemingly endless online data and information is an unrealistic task for any human. Nevertheless, we begin to forfeit cognitive autonomy each time we delegate information gathering and evaluation to algorithms, in turn restricting our thinking to what the algorithms deem appropriate.

Interacting with these algorithms allows us to make sense of and participate in the flows of data constantly constructing the ways we work and live. Algorithmic decision-making — that is, automated analytics deployed for the purpose of informing better, data-driven decisions — epitomizes this phenomenon. And while a world directed by algorithms presents countless opportunities for optimizing the human experience, it also calls for reflection on the human-algorithm relationship upon which we now rely.

As our views on data shift from empiricism to ideology, from datafication to dataism, it is easy to get caught in the fervor. Countless articles call for transparency, accountability, and privacy in the rollout of algorithmic practices.  These are of course noble (and often necessary) ideals — for example, data watchdog committees and legislative safeguards can ensure responsible development and implementation. Yet many of these sweeping calls for oversight implicitly rest on unfounded assumptions about the socio-political impacts of algorithms. In turn, we wind up with a number of a priori hypotheses about how algorithms will affect society — and thus, claims about the steps we must take to regulate them.

For one, the conventional data dogma has warped and distorted the concept of the algorithm into some kind of agential, all-knowing, impossible-to-comprehend being. This misconception suggests that an algorithm possesses authoritative power in itself, when in reality any influence the algorithm may project is the result of human design and legitimization (Beer, 2017). In other words, as the role of algorithms evolves to a semi-mythical (perhaps deified) status from Silicon Valley to Wall Street, it is often forgotten that algorithms are a product of human effort, and subject to human control.

Secondly, mainstream depictions of algorithmic decision-making presuppose a specific model in which algorithms have been so deeply embedded into bureaucracy that negotiating with an algorithmic decision is impossible for the common individual. While this power structure is surely a future possibility — as seen in the algorithmic management of Uber drivers (cf. Lee, Kusbit, Metsky, & Dabbish, 2015) — a vast majority of present-day algorithmic decision making operates in a consumer model. Here, the users of algorithmic tools are free to use (consume) or ignore the insights provided by algorithms, compelled by little more than preferences for convenience. In fact, this augmented decision-making process, where algorithms are consulted but ultimately remain passive, is pervasive in our everyday lives. Proprietary algorithms direct us through city streets, recommend films, and tell us who to date, but our understanding of how people trust, utilize, and make sense of algorithmic advice is noticeably thin (Prahl & Van Swol, 2017). While this micro-level interaction between human and algorithm is perhaps more mundane than theorizing the implications of algocratic rule, it will ultimately determine whether the human role in our data-fuelled ecosystem will be augmented or automated.

Despite related philosophical advances like the theory of “the extended mind” (Clark & Chalmers, 1998), the long-term success or failure of augmented decision-making depends on practical, scientific solutions that effectively integrate human judgment and algorithmic logic. Whereas decision aids and support systems have been working to do so for decades, the evolution of big data and the discovery of “algorithm aversion” have called for revisions to our notions of hybrid decision-making. In conceptualizing algorithm aversion, Dietvorst, Simmons, & Massey (2015, 2016) found that human forecasters display a reluctance to use superior but imperfect algorithms, often reverting to their gut feelings instead. Perhaps this isn’t so surprising: Meehl’s (1954) seminal work on the superiority of statistical to clinical (or intuitive) judgment, and the ensuing uproar, highlighted this same conflict some 60 years ago. While this stubborn confidence in intuition has been fodder for decision scientists ever since Meehl, the aversion toward statistical, computational decision-making has taken on new urgency now that algorithms are no longer a luxury, but a necessity. Like a prosthetic leg might allow an impaired individual to comfortably move through the physical environment, behavioral scientists must now come together to design cognitive prosthetics — algorithmic extensions of the human mind that allow individuals to navigate the boundless digital environment, enabling data-driven decision-making without forfeiting human autonomy. To inform the design of cognitive prosthetics, the root of algorithm aversion, the overarching obstacle for human-algorithm symbiosis, must be addressed.

Read part 2 here.

Algorithms for Simpler Decision-Making (2/2): Fighting Irrationality with Nonrationality

Algorithms have been designed as linear, rational agents for the purpose of optimizing decisions in the face of risk. Unquestionably, this design is capable of consistently analyzing mass quantities of data with probabilistic accuracy that the human brain simply cannot fathom. However, this utilitarian approach to decision-making differs from that of human decision-makers on a fundamental level. As Hafenbrädl, Waeger, Marewski, & Gigerenzer (2016) explain, algorithmic decisions are made in a different world, the small world of risk, than real-world, human decisions, which take place in the big world of uncertainty. In the world of risk, probabilities, alternatives, and consequences can be readily calculated, weighed, and considered; and we must wrestle our intuitive impulses into submission for rational optimization. In the world of uncertainty, probabilities, consequences, and alternatives are unknowable or incalculable; and our intuitive heuristics are integral to satisficing under time and resource constraints (Hafenbrädl et al., 2016; Simon, 1956).

These contrasting characteristics delineate two views of decision-making — traditional rational theory and nonrational theory[1]. Traditional rationality suggests a good decision is made by considering all decision alternatives and their accompanying consequences, multiplying the subjective probability by the expected utility of each consequence, and then selecting the option with the greatest expected utility. But for human decision-makers in uncertain environments, this process is psychologically unrealistic (Gigerenzer, 2001). Instead of viewing humans as omniscient beings, nonrational theories, such as bounded rationality, illustrate a decision-making process in which the environment is marked by limited time, resources, and information, and where rational optimization is infeasible and unwise. While traditional rationality entices with a sense of reasonableness, applied real-world decision-making naturally abides by the principles of nonrationality. So, when standard rational algorithms are advertised as aids to human decision-makers, we make a false assumption of compatibility between intrinsically different decision strategies. Algorithm aversion, directly and indirectly, can be traced back to this assumption.
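For concreteness, the expected-utility procedure that traditional rationality prescribes can be sketched in a few lines of Python (the options, probabilities, and utilities below are invented purely for illustration):

```python
# Expected-utility maximization: the "traditional rationality" recipe.
# Each option is a list of (probability, utility) pairs for its outcomes.
# The options and numbers here are hypothetical, chosen only to illustrate.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in outcomes)

def choose(options):
    """Select the option with the greatest expected utility."""
    return max(options, key=lambda name: expected_utility(options[name]))

options = {
    "safe":  [(1.0, 50)],               # a certain payoff of 50
    "risky": [(0.5, 120), (0.5, 0)],    # 50/50 gamble: 120 or nothing
}

print(choose(options))  # -> "risky" (expected utility 60 beats 50)
```

Even this toy version hints at why the procedure is psychologically unrealistic: it presumes that every alternative and every outcome probability is already known and enumerable, which is precisely what the world of uncertainty denies us.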

Due to their probabilistic focus, standard algorithmic decision aids confront human cognition head on — you either accept or reject the algorithmic insight; all or nothing. Because these algorithms perform a process of rational optimization, opportunities for integrating with human nonrationality are sparse. In the predominant consumer model of algorithmic decision-making, this mismatch of rationality and nonrationality manifests as an interaction where a human decision-maker performs an intuitive calculation, consults the algorithm’s calculation, and then must choose a course of action with or without regard to the algorithmic advice. Needless to say, very little interaction occurs in this model as intuitive and statistical judgment are pitted against one another — a psychological tug-o’-war dominated by intuition time and again.

To design cognitive prosthetics capable of linking the human mind to normally incomprehensible data flows, enabling better decision-making, nonrationality must be the founding principle. Meeting human decision-makers in the world of uncertainty, where decisions must be made with limited time (fast) and with limited information (frugal), the application of the fast-and-frugal framework to the design of algorithms is a contemporary case of mobilizing nonrational theory for cohesive human-algorithm decision systems (Phillips, Neth, Woike, & Gaissmaier, 2017). While not without limitations, this move to structure heuristic-led algorithms allows human decision-makers and algorithms to share the step-by-step gathering, ordering, and evaluating of available data and ultimately arrive at a single, joint conclusion. In doing so, human-algorithm cognition is meshed upstream in the decision process, permitting a more participatory, less confrontational augmentation experience.
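As a toy illustration of the fast-and-frugal idea (the cues and decisions below are hypothetical, not taken from Phillips et al.), a fast-and-frugal tree checks one cue at a time, and each cue can trigger an immediate exit decision:

```python
# A fast-and-frugal tree: cues are inspected in a fixed order, and each
# cue can end the process with an immediate decision ("exit"). This
# hypothetical purchasing-approval tree is invented for illustration only.

# Each node: (cue_name, exit_value, exit_decision). If the case matches
# exit_value for that cue, decide immediately; otherwise fall through.
TREE = [
    ("over_budget",    True,  "reject"),   # exit on True
    ("trusted_vendor", True,  "accept"),   # exit on True
    ("urgent",         False, "reject"),   # final cue: exits both ways
]

def decide(case):
    """Walk the tree; case is a dict of boolean cue values."""
    for cue, exit_value, decision in TREE:
        if case[cue] == exit_value:
            return decision
    # Final cue did not match its exit value: take the opposite decision.
    return "accept"

case = {"over_budget": False, "trusted_vendor": False, "urgent": True}
print(decide(case))  # -> "accept": not over budget, and the request is urgent
```

Because each cue is inspected in order and can end the process early, both a human and an algorithm can follow, and audit, the same short sequence of questions, which is what makes the joint conclusion feel participatory rather than imposed.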

This integration of algorithmic statistical rigor with humanly heuristic-led sensibility is an evidently difficult task that calls for a multidisciplinary community. As the discourse flounders between the abstract and the pragmatic, it is important to consider what we want, expect, and demand from our decision-makers and our decision-making algorithms.

In our quantified society, big data is the new oil, and algorithmic decision-making, as means of refining and commoditizing big data, is here to stay. Digitalization and datafication have provided us with profound knowledge of human behavior. The trouble that remains is what to do with it. Inevitably, if not already, algorithms will evolve beyond the imaginations of their human creators, but for now it is up to us to steer them in the right direction. Whether the thought of algocracy has utopian or dystopian connotations for you, establishing a human presence in our data-fuelled ecosystem, and safeguarding against algorithm misuse, means striving towards augmented, human-in-the-loop decision-making.

Read part 1 here.

Endnotes
[1] Not to be confused with irrationality, which describes decision-making outcomes, nonrationality is a theoretical approach to describing the decision-making process (Gigerenzer, 2001).

Do Dating Apps Affect Relationship Decision Making? 

‘Good relationships are not born out of complex algorithms, because attraction is unpredictable…’ (Joel, Eastwick, and Finkel 2017)

Times are changing: people are becoming more tech savvy and are living fast-paced, busy lives. Increased work hours and more demanding responsibilities often impede our ability to socialise, with negative consequences for our personal lives. One increasingly common casualty is the search for a potential relationship or life partner.

Evidence of this emerging difficulty can be seen in the boom of online dating smartphone apps such as Tinder, Badoo, and Plenty of Fish. Such apps seek to resolve this growing disparity between work and social life, allowing individuals to browse potential matches on their commute, at their desk, or on their sofa.

A survey conducted by Statista (2017) showed that these three platforms rank in the top 4 alongside match.com, with regular usage reported by 32–45% of single respondents. With increased popularity, and reduced stigma around their use, online dating apps have fundamentally changed the dating landscape. However, change can often bring about new risks.

The Risks of Virtual Dating 

Creating a culture of short-term relationships that never truly materialise may have a negative effect on well-being and mental health, especially as 1 in 6 individuals reportedly develop a mental health problem such as anxiety during their lives (Stansfeld et al 2016). Such increases in anxiety may arise from threats to self-esteem posed by poor-quality conversations, dates, and relationships that create doubts about self-image. Considering how issues like these may be hastened by dating apps, it is necessary to ask whether dating apps improve relationships, and if not, how they can be improved.

Behavioral science is well equipped to explore this domain, combining economics, psychology, and sociology to understand individuals’ dating choices and behaviors.  Despite the longstanding cliché that love is a function of the heart, it is now widely accepted and observed to be a function of the brain (Bartels and Zeki 2000; Zeki 2007).

Individuals consider an array of factors in seeking the perfect romantic match, such as personality, hobbies, interests, and physical attributes, to name a few. These considerations lend themselves to a series of biases and heuristics that influence decision making, and may ultimately produce imperfect or even negative relationships.

For instance, behavioral science explores the role of visceral factors – such as love – in decision making, showing how these temporary states of arousal lend themselves to behaviors that deviate from individuals’ stated preferences. This was famously shown by Ariely and Loewenstein (2006), who, through a series of experiments on male students, showed that ‘sexual arousal has a strong impact on all three areas of judgement and decision making’, characterized as the heat-of-the-moment effect.

By understanding the mechanisms of such cognitive barriers, behavioral science is perfectly suited to express not only why these decisions are made, but how these can be overcome with potential interventions. The amalgamation of economic decision making, psychological states of emotions, and sociological factors of relationships allows for the mixture of rivalling practices to be combined in a multidisciplinary and scientific way.

In doing so, behavioral science can seek to develop novel and unique insights into how love and emotions play a role in our lives and the current dating climate.

Plenty of Fish, or Too Many?

So, what are the behavioral mechanisms behind the use of dating apps? And how can they induce negative emotional outcomes? One mechanism concerns the ease and convenience dating platforms offer, and in particular the sheer volume of information presented when making choices about potential partners; Tinder and Badoo, for example, received 57 million U.K. users in 2017 (Belton, 2018).

This concept is called the paradox of choice, whereby an increased freedom of choice – in this case, choice of people – results in decreased subjective well-being (Schwartz 2004). This paradox has been witnessed when individuals choose between types of jam: when given either 24 or 6 kinds of jam, respondents presented with 24 made significantly fewer purchases than those presented with 6 (Iyengar and Lepper 2000).

Evidence from Schwartz (2004) and Iyengar and Lepper (2000) shows that this paradox occurs due to inherent difficulties humans have in managing complex choices. Increasing the number of attractive alternatives has been shown to heighten internal conflict in decision making, prompting responses such as deferring the option, choosing the default, or opting out altogether (Shafir, Simonson and Tversky, 1993). Furthermore, the behavioral tendency of narrow framing exacerbates this difficulty: when more alternatives are presented, individuals tend to use a rule of thumb based on a small sample of all alternatives (Hauser and Wernerfelt, 1990).

While experiments with jam may seem somewhat removed from romance, the paradox applies to dating apps. The sheer volume of candidates increases the likelihood of objectification and ill-advised decisions (Finkel et al. 2012), encouraging rushed choices in the face of a mass of potential matches. This can be seen when individuals swipe right on all candidates, making choices with little thought or none at all.

As a result, users may be confused as to why they have been matched with certain individuals, having swiped through profiles hastily and judged them only at face value.

Is Desire Feasible?

In line with this focus on skin-deep features, a second behavioral principle involved in dating app decision making is construal level theory (Liberman and Trope 1998). Construal level theory (CLT) is defined as ‘an account of how psychological distance influences individual thoughts and behavior’ (Trope and Liberman 2010), where objects and contexts are interpreted at either a low or a high level.

A low level of construal focuses on the core details of an object or context, such as its color, temperature, or size. In contrast, a high level of construal focuses on overarching perceptions; the two essentially differ in attending to objective details versus the bigger picture.

Research into the foundations of CLT has shown that levels of construal are affected by different dimensions of psychological distance – temporal, spatial, social, and hypothetical – which alter individual perception and the factors weighed in decision making (Wakslak, Liberman, and Trope 2006; Malkoc, Zauberman, and Bettman 2010). In relation to dating apps, using a computer-mediated communication platform (Finkel et al. 2012) on a smartphone increases spatial and social distance, and therefore produces a higher level of construal.

Additionally, attributes are weighted differently depending on the level of psychological distance involved. Across a series of five choice experiments spanning pre-, intra-, and post-decision stages, Lu, Xie, and Xu (2012) found that desirability concerns are weighted more heavily than feasibility concerns as psychological distance increases, consistent with past research into CLT (Todorov, Green, and Trope 2007).

This suggests that when individuals make choices on dating platforms – at greater psychological distance – desirability-related features such as looks and physical attributes are emphasized over feasibility-related counterparts such as personality and other deeper individual differences. Consequently, choices may be made on incomplete evidence about the whole person, potentially leading to sub-optimal outcomes such as post-date regret and the breakdown of further communication or long-term intimacy.

Discussion

Having discussed two behavioral mechanisms that play a role in emotional decision making, what can be done to mitigate these biases? One recommendation worth exploring is to improve the quality of information given to users.

The concept of salience is widely used in behavioral science (Behavioral Insights Team, 2014), and could be applied to this domain through personality and compatibility tests. In this vein, Piasecki and Hanna (2011) propose redefining the paradox of choice: it is the lack of meaningful choices, rather than the sheer volume of choices, that leads to negative outcomes.

Providing a salient personality or compatibility score could make some potential matches more meaningful, since both users would start with the perception that they are well suited to each other. Users could then better allocate their time to candidates more likely to produce positive emotional outcomes, narrowing the pool of choice and, with it, the paradox. Likewise, judging matches on a personality or compatibility basis encourages users to consider feasibility-related factors over desirability-related ones, potentially altering behavior by shifting the level of construal.

Taking this idea further, it has recently been announced that the dating platform Badoo is set to scrap the mainstream swipe interface in favor of a live-stream feature called Badoo Live (Lomas, 2018). In a survey of 5,000 users aged 18–30, Badoo found that the now-ubiquitous swiping interface and reliance on photos lack the “real” experience of a real-life encounter (Peat, 2018).

By adding this feature, Badoo has taken a first step toward overcoming the current barriers to positive emotional outcomes on dating apps. Live streaming reduces the psychological distance between matches through face-to-face communication, providing a better platform for meaningful and genuine conversations than a series of texts.

Conclusion

In conclusion, despite being highly convenient, dating apps can easily lead to ill-advised romantic decisions, driven by a cognitive overload of options and by abstract thinking that produces inconsistencies between the choices we make on a screen and those we make in reality. Yet despite concerns about these apps’ impact on mental well-being, time is a finite resource, and dating apps can provide a solid platform for meeting new people in a world where loneliness is a pressing social concern.

As seen in the recent innovations from Badoo, changes are being made to replicate the face-to-face meetings of the past. Ultimately, one might expect these technological advances to give rise to a virtual-reality interface, where dates take place in virtual space, recreating a face-to-face encounter on the go or in the comfort of home.