The Impact of Community Trust on Poverty

It is now well known that most individuals tend to discount future outcomes relative to the present. This tendency, dubbed present bias, is blamed for many of our sub-optimal behaviors. But is such temporal discounting always irrational? Perhaps not, particularly when we consider the scarcity of cognitive (and other) resources. Sometimes it can make complete sense to focus on the present, especially if your future feels unreliable or uncertain.

In support of this, experimental research shows that scarcity-induced focus is not necessarily myopia, but rather a shift of attention toward specific expenses while disregarding others (Shah, Mullainathan, & Shafir, 2012). Additionally, poverty-related concerns can consume mental resources and reduce individuals’ cognitive control, which is necessary for making optimal choices (Mani, Mullainathan, Shafir, & Zhao, 2013).

Unfortunately, this kind of mindset can lead to a vicious cycle: individuals in poverty — often as a result of financial constraints — tend to make financial, health, and educational decisions that favor the present. In turn, their long-term financial, health, and educational outcomes suffer, feeding back into the cycle of poverty. So how can people possibly escape this pattern?

Research by Jachimowicz and colleagues (2017) suggests that, while having lower socioeconomic status does influence the likelihood of making short-sighted decisions, this type of myopic decision making can be mitigated by community trust. The researchers hypothesized that community trust may act as a buffer for low-income individuals, protecting them against potential losses. Such support could serve as a “cushion,” allowing individuals to make riskier (but more financially rewarding) decisions. Through a remarkable set of studies drawing on archival, correlational, experimental, and field data, they demonstrate that interventions aimed at increasing community trust among impoverished individuals can decrease their present-biased decisions, ultimately allowing them to improve their financial situations.

The first study was an online experiment with participants from the US that explored the relationship between community trust and temporal discounting, and whether this relationship differed between low- and high-income individuals. Indeed, the researchers found a main effect of both income and community trust: participants with lower incomes (<$40,000) discounted the future more, and individuals with higher levels of community trust discounted the future less. Interestingly, there was also an interaction between these two variables: lower income was related to higher discounting of the future, but only when community trust was low. That is, low income does not in itself increase present bias. In the related second study, a real-world example of myopic decision making (payday loan use) was also found to be negatively associated with community trust.
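To make “discounting the future more” concrete, here is a minimal sketch of how temporal discounting is commonly quantified, using a hyperbolic discount function V = A / (1 + kD), where a larger k means steeper discounting. This is an illustration of the general technique, not the study’s actual materials or parameters.

```python
# Hyperbolic temporal discounting: the subjective present value of a
# delayed reward shrinks as the delay grows, at a rate set by k.

def discounted_value(amount, delay_days, k):
    """Present subjective value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

def prefers_smaller_sooner(sooner, larger, delay_days, k):
    """Does a decision-maker with discount rate k take the immediate option?"""
    return sooner > discounted_value(larger, delay_days, k)

# A steep discounter (k = 0.05) takes $50 today over $100 in a month,
# because $100 in 30 days is subjectively worth only $40 to them...
print(prefers_smaller_sooner(50, 100, 30, k=0.05))   # True
# ...while a shallow discounter (k = 0.005) waits for the larger reward.
print(prefers_smaller_sooner(50, 100, 30, k=0.005))  # False
```

In choice tasks like those in the first study, researchers present many such smaller-sooner versus larger-later pairs and estimate each participant’s k from their pattern of choices.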

The final two studies aimed to identify a causal relationship between community trust and temporal discounting among low-income individuals, in the lab and in the field. First, the researchers ran an in-lab experiment that manipulated levels of felt income and felt community trust by asking participants questions designed to evoke each construct (“imagine scenarios with severe financial implications”; “list 10 examples from your own experience where community trust was justified”). As expected, individuals in the low-felt-income, low-community-trust condition showed higher temporal discounting than those in the low-felt-income, high-community-trust condition.

Finally, a field experiment was conducted in conjunction with an intervention to increase community trust in 121 union councils (a total of 1447 people) in rural Bangladesh. This 2-year intervention involved community volunteers who interacted with other members, helped residents access public services from the local government, and got residents involved in community-level decisions. About half of the union councils in the study received the intervention, and the other half were in the control condition. After two years, individuals in the areas that received the intervention did indeed report greater community trust than those in the control areas, indicating that the intervention was successful. Furthermore, individuals in the regions that received the intervention discounted the future less than their peers in the control regions (even when controlling for income), demonstrating the effectiveness of this intervention in reducing short-sighted decision making.

These results suggest that policies attempting to tackle the challenges of poverty should, instead of solely focusing on decreasing poverty at the individual level, shift some of that focus to the community level. As the article states, “The poor may lack in material wealth relative to the rich, but they possess social wealth in the shape of their communities upon which they can draw.” Policymakers should use these findings to broaden their understanding of how poverty is perpetuated, and use the researchers’ successful intervention as a model for increasing community trust in low-income areas.



This innovative, multi-part study also provides a good model for social science researchers who want to test their theoretical models in the lab and in the field. Most studies tend to focus on one or the other; researchers should instead aim to design theory- and evidence-based lab experiments that inform field interventions, which can then be tested for efficacy, increasing the chances of truly making an impact on complex, multifaceted challenges like poverty.

Learning Within Limits: How Curated Content Affects Education

We are increasingly living in realities that we construct ourselves, with adverts for things we’ve already bought and opinions we already hold being reinforced through social media and advertising. As the content presented to us is refined to reflect our previous choices, we frame more of our own reality so that it is oriented around ourselves before we’ve even experienced it. As a result, the behavioral science concepts of framing and confirmation bias (amongst other cognitive biases) can be seen in daily decisions that shape our worlds.

In the midst of this drive towards self-centered content came the advent of trigger warnings: statements that alert people that a piece of writing, video, or talk contains potentially distressing material. It is important to note that these were introduced with noble intentions, for example, to prevent a victim of sexual violence from accidentally viewing content that could trigger a flashback. And while trigger warnings used in this way can play a valid and valuable role in daily life, they can also be used to cocoon those who are distressed by views contrary to their own, serving simply to reinforce existing biases. An example could be a religious fundamentalist who declines to take a course discussing evolutionary theory.

Trigger warnings have now been introduced into education at institutions such as Cornell and Oxford, among others, so that even the learning process is censored to what individuals feel comfortable with, and learning institutions provide a ‘safe space’ rather than a space where open discussion and differing opinions are celebrated. Given this, we should consider our obligation to introduce into our education system some kind of mandatory learning around the effects of such hyper-curated content, particularly in the use of social media.

According to prospect theory (Kahneman and Tversky, 1979), from which framing is derived, decisions are made in two phases. The first involves editing the available options, which are coded as gains or losses. Applied to trigger warnings, the student decides, based on the warning, whether they expect a gain or a loss, that is, whether they will feel something positive or negative as a result of going to a class. The second phase involves evaluation (McElroy and Seta, 2003). If the class is evaluated as a loss, with a high likelihood of feeling stressed or upset, the student will be less likely to attend.
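The asymmetry behind this gain/loss coding can be sketched with the standard prospect-theory value function. The parameter values below (curvature 0.88, loss aversion 2.25) are the commonly cited estimates from Tversky and Kahneman’s later work, used here purely for illustration; the point is that a loss weighs roughly twice as heavily as an equivalent gain.

```python
# Prospect-theory value function: concave for gains, convex and steeper
# for losses, so v(-x) is larger in magnitude than v(x).

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of an outcome x coded as a gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha          # diminishing sensitivity to gains
    return -lam * (-x) ** beta     # losses loom larger (lam > 1)

# The same 10-unit outcome, framed as a gain vs. as a loss:
print(prospect_value(10))   # roughly +7.6
print(prospect_value(-10))  # roughly -17.1
```

On this account, a class framed by a warning as a likely emotional loss gets weighted far more heavily in the editing phase than the (equally real) prospect of intellectual gain, tipping the evaluation toward not attending.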

Confirmation bias rests on a heuristic called confirmatory hypothesis testing, the tendency to search for and overweight information that confirms an existing belief rather than attending to disconfirming evidence (Jones and Sugden, 2001). It is evident in the decisions of students who rely on trigger warnings to avoid exposing themselves to certain elements and ideas because they may be upsetting. Such students would rather seek out ideas and groups that confirm their current way of thinking than ones that disturb or contradict it.

Broadening one’s mind and challenging one’s ideas are key goals of learning in general. This is far more likely to happen when exposed to ideas that have the potential to upset, to force one to question one’s way of thinking, and thereby to force a deeper evaluation of one’s opinions. When a limited way of thinking is introduced in an educational context, a person’s pre-existing ideas are more likely to be perpetuated, giving her more power to curate her own reality. This is something we all do to a certain extent anyway, but it makes staying in one’s comfort zone easier than ever before.

While it is okay to have your own opinion, is it not healthier to have considered the merits and downfalls of both your own and others’ opinions before taking a decision? Furthermore, are our decisions not sounder if we seek and consider disconfirming evidence when it appears? Building on previous research by Kahneman and Tversky, Stanovich and West (2000) propose a two-systems approach to decision-making. They argue that we have two main types of cognitive processes: system one is heuristic-based, automatic, and subconscious (leading to an automatic contextualization of problems), while system two is conscious, measured, and reflective. When system one leads to sub-optimal choices (neither conscious nor mindful), system two is supposed to override it. When this regularly fails to happen, the heuristics of system one become ingrained biases, inherent in the way we think about certain things.

In the case of trigger warnings, students make snap decisions based on potentially little information about the content’s context, allowing the mental shortcuts used to reach the decision (in this case, not to attend a class) to become entrenched as biases. We thus end up avoiding anything associated with a topic or phrase not to our liking, and our decisions become more influenced by cognitive biases and by the way situations are framed. One interpretation is that people who use trigger warnings are more susceptible to such biases. An ‘I didn’t do it because they warned me’ justification for not engaging system two thinking in the decision of whether or not to take a course is not the effect that trigger warnings should have.

However, with more consideration, system two can override system one. If we pay more attention to the greater context of our decisions, and to the potential benefits (or damaging effects) of shutting ourselves off from certain arguments, we may overcome this rush to prejudgment. Failing that, our heuristics and biases will reinforce the idea that it is okay not to consider other opinions, and that never questioning your beliefs, however fundamental they might be, is an acceptable way to exit the education system.

In a world in which boundaries of all kinds are becoming increasingly blurred, with integration happening through travel, business, and other channels, the reality is that most people who participate actively in the global economy will have their culture and beliefs questioned at some point, and they need to be prepared for that. Allowing yourself to be in uncomfortable situations is a key learning experience, and those who avoid doing so at university should be made aware of the impact that avoidance can have.

Moreover, increasingly intelligent machines are only likely to compound our one-sidedness. The algorithms that drive our social media platforms and product advertisements rely on their own heuristics to decide what is presented to us, heuristics directed by our past behavior: clicks, likes, and searches. In this way, technology conspires with our inherent biases to present us with a world curated to our individual belief systems. Our world and beliefs are confirmed and reconfirmed to us, because that is how these algorithms learn: by feeding back topics related to what we have already looked at.



The impact of this curation, through social media, trigger warnings, and other cues, is a progressive and compounding narrow-mindedness. It involves having one’s own ideas confirmed and one’s reality framed through a series of choices, from which classes we attend to what appears in our Google searches; these in turn shape where we travel, what we buy, which social events we join, and whom we meet. In an educational context, trigger warnings effectively shrink the learning space at institutions and expand the ‘comfortable’ space. This diminishes the chances of being exposed to potentially upsetting (and hence challenging) ideas, and avoids exploring the thoughts and emotions that come with that exposure. While being confronted with these ideas can be uncomfortable, that is precisely when learning and growth take place, as people are pushed outside their comfort zones.

Will this censorship not ultimately impact people’s ability to sympathize, empathize, and put themselves in others’ shoes? Progressively, as we consolidate our own opinions, thoughts, and norms by avoiding anything we don’t want to see (within the growing number of contexts we control), the range of possible ideas and experiences open to us narrows. We need to ask how we can educate the generation growing up with this as the norm about the impact that such hyper-curation can have. Perhaps trigger warnings could be framed differently, as even the term itself carries a negative connotation that warns of ‘danger’. Alternatively, mandatory learning around this could be introduced, though at what stage, to what degree, and in which context remain open to debate.