Tackling Climate Change (1/2): Why Don’t We Act On Climate Issues?

Often we behave in ways that are against our longer-term interests. Most of the time this manifests in rather trivial affairs, such as picking up that chocolate bar in the supermarket — which, though far from ideal for our health, only impacts our own wellbeing in the long run. The same cannot be said about climate change, however. Taking that extra (well-earned!) vacation abroad, or travelling across the country for a meeting that could just as well be conducted over the phone, contributes heavily to the already enormous level of greenhouse gas (GHG) emissions in our atmosphere (Ernmenta & Nel, 2014). If sustained, these sorts of behaviors run counter to our longer-term survival — so why don’t we act in more pro-environmental ways?

Cognitive Barriers in Addressing Climate Change

A big part of the problem is that we like to live in the moment, preferring to satisfy our immediate needs rather than considering what may serve us best in the future. This tendency, commonly known as present bias, refers to the greater weight people place on payoffs closer to the present moment, as compared to those in the future (Frederick, Loewenstein & O’Donoghue, 2002). This makes intuitive sense when we consider some everyday examples, such as choosing a chocolate bar over a healthier option. It seems we are somewhat hardwired to choose options that best gratify our immediate needs, and to put more effortful options to one side for our future selves to worry about (Bisin & Hyndman, 2014). When choosing whether to drive or take the bus to the grocery store, for example, we tend to give in to the easier, self-gratifying, yet less environmentally friendly option and take the car.
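The discounting literature cited above commonly formalizes present bias with the quasi-hyperbolic (“beta-delta”) model. The sketch below is illustrative only — the parameter values are not taken from any cited study — but it shows the signature preference reversal that distinguishes present bias from standard exponential discounting:

```python
def discounted_value(payoff, delay, beta=0.7, delta=0.95):
    """Value today of a payoff received `delay` periods from now.

    beta < 1 applies an extra penalty to ALL future payoffs, which is
    what produces present bias; delta is the standard per-period
    exponential discount factor. Parameter values here are illustrative.
    """
    if delay == 0:
        return payoff
    return beta * (delta ** delay) * payoff

# A present-biased chooser prefers a smaller reward now over a larger
# reward tomorrow...
assert discounted_value(10, 0) > discounted_value(13, 1)

# ...yet prefers the larger reward when BOTH options are shifted equally
# far into the future — a reversal that pure exponential discounting
# (beta = 1) cannot produce.
assert discounted_value(10, 30) < discounted_value(13, 31)
```

In the climate context, the car-versus-bus choice has the same shape: the small immediate convenience of driving today outweighs the distant, diffuse payoff of lower emissions, even for someone who sincerely plans to take the bus “starting next week.”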

Another part of the reason that we do not prioritize climate action comes down to the salience (or rather the lack of salience) of the effects of climate change. Salience is essentially how noticeable and memorable certain stimuli are to us — and there is a tendency for our behavior to be influenced by the most novel and seemingly relevant stimuli. This can help to explain why certain previous environmental campaigns have been more successful in their impact than others. The ‘hole in the ozone layer’ scare in the 90s was successfully communicated to the general public by the use of vivid metaphors (UV rays penetrating the earth’s “shield”) and by its direct relevance to immediate health risks, such as skin cancer — but the same cannot be said for climate change. Although we are all aware of the issue, people do not seem to perceive the risks as being as vivid, relevant, or alarming (Ungar, 2007).

Part of this can be explained by perceptions of psychological distance – insofar as climate change is not an effect that people feel day-to-day. According to construal level theory, when people, places, objects, or events are removed from an individual’s immediate experience, their mental representations become less concrete and more abstract (Trope & Liberman, 2010). Daniel Kahneman, Nobel Memorial Prize winner in Economics and author of Thinking, Fast and Slow, shares a similar sentiment, adding that “our brains respond most decisively to those things we know for certain”.



Based on the above ‘psychological barriers’, it follows that if we could make climate change seem more tangible, urgent, and salient in the minds of individuals, then people might be more motivated to change their current behavior and make a conscious effort to help reduce their carbon footprint. The next section of this article explores this idea by providing a potentially scalable solution in the form of immersive technologies – in this case virtual reality – and shows how this technology can help to overcome these psychological barriers via three core aspects: immersion, interactivity, and presence.

Read part two: Tackling Climate Change (2/2): Using VR To Influence Behavior.

Tackling Climate Change (2/2): Using VR To Influence Behavior

What is Virtual Reality?

Researchers have long sought to incorporate state-of-the-art technology as a vehicle to change behavior. While Virtual Reality (VR) has become increasingly talked about as of late (at time of writing, in 2017), the technology has actually been around for decades, with seminal VR systems dating back to the 1960s. Witmer and Singer (1998) describe immersive virtual environments (IVEs) as those that perceptually surround an individual: “immersion in such an environment is characterized as a psychological state in which the individual perceives himself or herself to be enveloped by, included in, and interacting with an environment that provides a continuous stream of stimuli”.

Why VR Works

Arguably, the use of simulation as a means to influence behavior has been around for a long time. Social psychologists have been creating virtual (synthetic) environments or even immersive ones for decades using hard scenery, props, and real people. Milgram’s (1963) obedience environment, for example, is amongst the most well-known and publicized.

Today, however, we are able to generate immersive virtual environments with laboratory computer technology. Using standard smartphones plugged into VR headsets, we can create an almost infinite number of simulations, some of which would not be possible to recreate in any traditional laboratory setting.

VR works, then, by taking a modern approach to an old methodology, in which we can test a particular behavior through the manipulation of the environment.

How VR works (in influencing our behavior)

The ability of VR to blur the distinction between reality and its virtual representation is what sets it apart from traditional forms of media (Ellis, 1991). What separates VR from more traditional means of consumer content, such as TV or PCs, are the following three factors: levels of immersion, interactivity, and presence.

Immersion is best described metaphorically, in that psychological immersion seeks to induce the same feeling from an experience that we would get from physical immersion, such as taking a plunge in the ocean or a swimming pool. It is the sensation of being surrounded by a completely other reality. VR delivers immersion in abundance, as the user is completely absorbed in another world.

Interactivity is another unique feature of VR that provides a sense of relationship between the user and their environment (Leiner & Quiring, 2008). Users can walk, touch, feel, and, maybe one day, even smell their virtual world, adding another level of realism to their environment.

Presence is the final way in which VR is able to trick our minds into feeling that the mediated experience we are in is “real.” Interestingly, in a study of video gamers, researchers found that users playing an aggressive game in a virtual environment were more aggressive than those playing the same game on a PC – that is, high presence led to more aggressive feelings (Cummings et al., 2008). Perhaps it’s unsurprising that the degree of presence can affect our emotions — an insight that films like Paranormal Activity and The Blair Witch Project tapped into with their first-person style of storytelling, deliberately designed to induce anxiety in those watching.

So what does this have to do with climate change, and why is VR important? Well, as we saw earlier in this article, our behavior is often at odds with our longer-term interests, and this is in part due to us engaging in activities that seek to gratify immediate wants and needs, putting off those that are more effortful (e.g. driving to the shops instead of taking the bus). We also saw that people often don’t have a concrete understanding of what climate change is, and what its tangible impacts may be. VR has the unique ability to address these issues by fully immersing the user in an environment where she feels as though climate change is happening in the present. The immediacy of the events has the potential to resonate on a deeper emotional level. This then allows researchers to test whether these factors influence real-world behavior, such as energy consumption.

Is There Any Proof This Actually Works?

Experimenting with VR scientifically remains a relatively new (but growing) field of research. Stanford University is a leader in the field, with a growing number of publications highlighting the significant effects obtained using VR to change behavior. In one experiment, researchers looked at the effects of VR on pro-environmental behavior in the real world (Joo et al., 2014). In this experiment, researchers were able to compare the effects of people virtually cutting down a tree, versus hearing a graphic description of the same event. Unbeknownst to both groups of participants, the researchers wanted to test how many paper napkins each group would use when the researcher “accidentally” spilled some water after they had finished with the experiment. Those in the treatment group who “embodied” the virtual lumberjack picked up 20% fewer paper napkins to clean the spill than those in the control – a statistically significant finding, and one that provides support for the view that VR really can influence real-world behaviors.



There is still a lot of headway to be made by researchers using this technology, but as demonstrated in this article, the potential of VR to affect our unconscious decision making through increased levels of immersion, interactivity, and presence is huge. As VR continues to advance in its realism and as the costs of producing simulations go down, this medium offers future researchers an interesting tool to enable positive behavioral change en masse.

Read part one: Tackling Climate Change (1/2): Why Don’t We Act On Climate Issues?

A Magna Carta for Inclusivity and Fairness in the Global AI Economy

Should your self-driving Uber be allowed to break traffic regulations in order to safely merge onto the highway? To what extent are algorithms as prone to discriminatory patterns of thinking as humans – and how might a regulatory body make this determination? More fundamentally, as more tasks are delegated to intelligent machines, to what extent will those of us who are not directly involved in the development of these technologies be able to influence their decisions? It is with these questions in mind that we are pleased to have adapted the following article for publication at TDL. – Andrew Lewis, Editor-in-Chief

We stand at a watershed moment for society’s vast, unknown digital future. A powerful technology, artificial intelligence (AI), has emerged from its own ashes, thanks largely to advances in neural networks modeled loosely on the human brain. AI can find patterns in massive unstructured data sets and improve its own performance as more data becomes available. It can identify objects quickly and accurately, and make ever better recommendations, improving decision-making while minimizing interference from complicated, political humans. This raises major questions about the degree of human choice and inclusion in the decades to come. How will humans, across all levels of power and income, be engaged and represented? How will we govern this brave new world of machine meritocracy?



Machine meritocracy

To find perspective on these questions, we must travel back 800 years: It was January 1215, and King John of England, having just returned from France, faced angry barons who wished to end his unpopular vis et voluntas (“force and will”) rule over the realm. In an effort to appease them, the king and the Archbishop of Canterbury brought 25 rebellious barons together to negotiate a “Charter of Liberties” that would enshrine a body of rights to serve as a check on the king’s discretionary power. By June they had an agreement that provided greater transparency and representation in royal decision-making, limits on taxes and feudal payments, and even some rights for serfs. The famous “Magna Carta” was an imperfect document, teeming with special-interest provisions, but today we tend to regard it as a watershed moment in humanity’s advancement toward an equitable relationship between power and those subject to it. It eventually set the stage for the Renaissance, the Enlightenment, and democracy.

Balance of power

It is that balance between the ever-increasing power of the new potentate — the intelligent machine — and the power of human beings that is at stake. Increasingly, our world will be one in which machines create ever more value, producing more of our everyday products. As this role expands, and AI improves, human control over designs and decisions will naturally decrease. Existing work and life patterns will be forever changed. Our own creation is now running circles around us, faster than we can count the laps.

Machine decisions

This goes well beyond jobs and economics: in every area of life, machines are starting to make decisions for us without our conscious involvement. Machines recognize our past patterns and those of (allegedly) similar people across the world. We receive news that shapes our opinions, outlooks, and actions based on inclinations we’ve expressed in past actions, or that are derived from the actions of others in our bubbles. While driving our cars, we share our behavioral patterns with automakers and insurance companies so we can take advantage of navigation and increasingly autonomous vehicle technology, which in return provide us new conveniences and safer transportation. We enjoy richer, customized entertainment and video games, the makers of which use our socioeconomic profiles, our movement patterns, and our cognitive and visual preferences to determine pricing sensitivity.

As we continue to opt-in to more and more conveniences, we choose to trust a machine to “get us right.” The machine will get to know us in, perhaps, more honest ways than we know ourselves — at least from a strictly rational perspective. But the machine will not readily account for cognitive disconnects between that which we purport to be and that which we actually are. Reliant on real data from our real actions, the machine constrains us to what we have been, rather than what we wish we were or what we hope to become.

Personal choice

Will the machine eliminate that personal choice? Will it do away with life’s serendipity — planning and plotting our lives so we meet people like us, thus depriving us of encounters and friction that force us to evolve into different, perhaps better human beings? There’s tremendous potential in this: personal decisions are inherently subjective, but many could be improved by including more objective analyses. For instance, including the carbon footprint for different modes of transportation and integrating this with our schedules and pro-social proclivities may lead us to make more eco-friendly decisions; getting honest pointers on our more and less desirable characteristics, as well as providing insight into characteristics we consistently find appealing in others, may improve our partner choices; curricula for large and diverse student bodies could become more tailored to the individual, based on the engine of information about what has worked in the past for similar profiles.


But might it also polarize societies by pushing us further into bubbles of like-minded people, reinforcing our beliefs and values without the random opportunity to check them, defend them, and be forced to rethink them? AI might get used for “digital social engineering” to create parallel micro-societies. Imagine digital gerrymandering with political operatives using AI to lure voters of certain profiles into certain districts years ahead of elections, or AirBnB micro-communities only renting to and from certain socio-political, economic, or psychometric profiles. Consider companies being able to hire in much more surgically-targeted fashion, at once increasing their success rates and also compromising their strategic optionality with a narrower, less multi-faceted employee pool.

Who makes judgments?

A machine judges us on our expressed values — especially those implicit in our commercial transactions — yet overlooks other deeply held values that we have suppressed or that are dormant at any given point in our lives. An AI might not account for newly formed beliefs or changes in what we value outside the readily-codified realm. As a result, it might, for example, make decisions about our safety that compromise the wellbeing of others — doing so based on historical data of our judgments and decisions, but resulting in actions we find objectionable in the present moment. We are complex beings who regularly make value trade-offs within the context of the situation at hand, and sometimes those situations have little or no codified precedent for an AI to process.  Will the machine respect our rights to free will and self-reinvention?

Discrimination and bias

Similarly, a machine might discriminate against people of lesser health or standing in society because its algorithms are based on pattern recognition and broad statistical averages. Uber has already faced an outcry over racial discrimination when its algorithms relied on zip codes to identify the neighborhoods where riders were most likely to originate. Will the AI favor survival of the fittest, the most liked, or the most productive? Will it make those decisions transparently? What will our recourse be?

Moreover, a programmer’s personal history, predisposition, and unseen biases — or the motivations and incentives of or from their employer — might unwillingly influence the design of algorithms and sourcing of data sets. Can we assume an AI will work with objectivity all the time? Will companies develop AIs that favor their customers, partners, executives, or shareholders? Will, for instance, a healthcare-AI jointly developed by technology firms, hospital corporations, and insurance companies act in the patient’s best interest, or will it prioritize a certain financial return?



We can’t put the genie back in the bottle, nor should we try — the benefits will be transformative, leading us to new frontiers in human growth and development. We stand at the threshold of an evolutionary explosion unlike anything in the last millennium. And like all explosions and revolutions, it will be messy, murky, and fraught with ethical pitfalls.

A new charter of rights

Therefore, we propose a Magna Carta for the Global AI Economy — an inclusive, collectively developed, multi-stakeholder charter of rights that will guide our ongoing development of artificial intelligence and lay the groundwork for the future of human-machine co-existence and continued, more inclusive, human growth. Whether in an economic, social, or political context, we as a society must start to identify rights, responsibilities, and accountability guidelines for inclusiveness and fairness at the intersections of AI and human life. Without such a charter, we will not establish enough trust in AI to capitalize on the amazing opportunities it can and will afford us.

Implicit Bias, Gender – And Why We Are All Culprits

It took a 2-hour meeting back in 2004 with a stately transwoman named Madhu for me to realise that my “holistic” comprehension of gender was, in fact, profoundly flawed [1].

Madhu is a ‘Hijra’, part of India’s transgender community, made up of transpeople, eunuchs, intersex persons and other sexual minorities. At the time of our first interaction, I was an undergraduate student based in the city of Chennai, capital of the South Indian state of Tamil Nadu. Madhu was a spokesperson for her community in the city, often dealing with student groups and NGOs to tackle the slew of problems with which the community was constantly grappling. A recurring theme was finding gainful employment outside of prostitution – something into which Hijras were often coerced, owing to rampant hiring discrimination based on their sexuality.

The first time I met Madhu was at a meeting organized by a student group at my university with her and some of her colleagues, to discuss these problems at length and derive viable solutions.

Madhu’s personality was as vibrant as her bottle-green sari and the large vermillion bindhi on her forehead. It was not long until we were engrossed in her story. With candour, she recounted how she had never felt at home in her formerly male body, a sense that began revealing itself to her more acutely from the start of her teenage years. When she told her family of her desire to physically transition into a woman, they disowned her. She then fled from her village to the city, and underwent the excruciating pain of non-medical castration, nearly facing death in the process.

A week after our heart-to-heart with Madhu, I had a thought. At the start of the meeting, I realised, I had made mental references to Madhu as ‘him’ and ‘he’ — but as the meeting concluded, Madhu was, forever after, ‘her’.

I wondered: was it possible that I had pre-existing, implicit biases towards Madhu (and perhaps all transwomen), that caused me to think of her as a man, even prior to meeting her? What were the implications of these biases, and could I remedy them?

Years later, I found clues to these questions in an ostensibly unlikely place: the world of Behavioral Science.

Why My Bias Against Madhu Matters

The past decade has witnessed several behavioral studies on the toxicity of gender biases, which manifest themselves in several ways. With Madhu, my biases began with perceiving her as male instead of female, since my instincts sought to associate her with her assigned gender at birth, which was male.

Yet, given that I bore Madhu no ill will from the start of our meeting, did my implicit biases even matter?

A host of literature on implicit biases and gender suggests that the answer to my query is an unequivocal ‘yes’. Moreover, this ‘yes’ applies to biases against people across the gender spectrum, with implicit biases adapting themselves and donning different costumes to suit different gender identities.

A 2017 article describes a survey on transmen hailing from workplaces across the USA, who have had the experience of working first as women, and then as men [2]. Their description of gender bias and discrimination in the workplace is complex and exceedingly layered. To begin with, many transmen reported that they were treated better in the workplace after physically transitioning into men, as compared to their earlier experiences as women. A quote from one respondent is telling: “As a male, people assume that you know what you’re talking about. As a female, they assume that you probably don’t.” Still other respondents described instances of discrimination only after colleagues realised they were transmen. The survey strikes at the core of how implicit biases are as multidimensional as the very gender identities against which they are prejudiced. The consequences of these biases are as diverse as they are dire: from choosing not to hire someone due to their gender identity, to paying them less than a cisgender straight male, to denying staff access to a toilet at work.

Implicit biases are similarly pernicious for women in the workplace. A study on hiring bias against women for jobs requiring a certain degree of mathematics expertise finds that “both male and female hiring personnel hired men twice as often as they hired women — despite similar outcomes on the math tests.” The researchers attribute part of this bias to interviews, where women are more likely to play down their successes than their male counterparts. However, the study rightly notes, “If ability is self-reported, women still are discriminated against, because employers do not fully account for men’s tendency to boast about performance”.

The fact that implicit biases are so varied, and often individualised, further complicates matters, as they afflict even those who see themselves as without prejudice. The use of the pejorative phrase “That’s so gay” makes for an apt example [3]. A study on use of the phrase shows that the harm runs deep. While the phrase is blatantly derogatory and causes immediate unpleasantness for the target, there are also long-term effects, since saying “that’s so gay” exacerbates the perpetrator’s implicit bias toward gay people, perpetuating a cycle of more acute bias and discrimination.

Judging by the literature, therefore, someone like Madhu, who hails from a sexual minority, has precious little chance of going a day without encountering some form of bias – be it while waiting for a bus at the station, or being screened for a job interview.

What makes matters worse is that implicit bias does not apply solely to gender — an apt example is the vast literature on bias and race, with consequences ranging from race-based hiring discrimination to higher rates of incarceration for people of colour.

There Is Always Hope: How to Combat our Biases

The story so far seems desperately bleak. Yet, as recent research into debiasing has shown, there are tactics for combatting our biases that offer hope.

A study by Broockman and Kalla (2016) suggests that the key to alleviating biases against transgender individuals could be as simple as looking to engage with perspectives from ‘the other side’ (i.e., those who hold discriminatory views toward transgender individuals) [4]. Their study involved canvassers actively seeking to engage with voters who held anti-transgender viewpoints by knocking at their doors and engaging in brief conversations. Their report states, “[h]ere, we show that a single approximately 10-minute conversation encouraging actively taking the perspective of others can markedly reduce prejudice for at least 3 months… A randomised trial found that these conversations substantially reduced transphobia, with decreases greater than Americans’ average decrease in homophobia from 1998 to 2012.” The gender identity of the messenger did not change these results.

Another debiasing attempt came from Morewedge et al. [5]. The authors designed an experiment involving a training video on biases, followed by a video game designed to “elicit and mitigate” specific biases – a tactic they found largely effective.

The more we learn about implicit biases, the more they seem to reflect the dual-systems approach put forth by Kahneman (2003), which gained prominence in his Thinking, Fast and Slow. Kahneman’s contention is that people are subject to two distinct modes of thought, dubbed System 1 and System 2. System 1, which can be broadly construed as intuition, is instinctive, forming instantaneous impressions based on heuristics, which can then lend themselves to cognitive (and implicit) biases.



The real hope in combating biases like my own against Madhu during our first meeting lies in System 2, which involves active contemplation, or ‘slow thinking’. This type of thought, Kahneman argues, can override the heuristics of System 1 when they lead to subpar decisions — thus preventing a heuristic from becoming a bias.

All in all, we are still in the early stages of understanding both the ways in which implicit biases shape our behavior and the ways we can actively combat them. As is often the case, a sound place to start is to actively challenge our own prejudicial perceptions when they lead us to conclusions that are harmful to those around us.

At the very least, we owe it to the millions just like Madhu, who should not have to wage war on discrimination each day simply to be themselves.