Technology has drastically changed how we behave, feel, and think. For most of us, checking our phones is the first and last thing we do each day.[1] Sounds and vibrations from our phones distract us dozens of times a day.[2] Phones and computers mediate much of our social interaction: Zoom meetings are not just for work but for personal events, from baby showers to funerals.
Our social media feeds not only move our moods temporarily,[3] but they also change how we view and how much we like ourselves.[4] Technology also impacts the choices we make, from our shopping decisions (e.g., Uber can deter us from buying a car)[5] to our political beliefs (e.g., YouTube can radicalize us into white supremacy).[6]
Technology changes human behavior, emotion, and cognition. Yet it is largely built and designed by technologists without a background in psychological science (the term I’ll use for the scientific study of human psychology, as distinct from the therapeutic practice of counseling), whose jobs don’t explicitly involve applying or conducting that science.
Many of these technologists are scientists — e.g., data scientists building statistical models to predict human behavior. For example, about 18,000 employees across Alphabet, Amazon, Apple, Meta, and Microsoft have job titles containing the words “science” or “scientist.” But only about 200 (roughly 1%) have the word “behavior” or “behavioral” (including the British spellings) in their job titles. There are psychologists who work in technology companies, but their jobs most often don’t involve applying psychological science. Job descriptions that overtly mention behavioral science remain relatively rare in the product teams that build most consumer technology.
Given that technology companies change human behavior at scale, profit from behavior change, and generally pride themselves on embracing scientific innovation, why do so many of them lack psychologists with a more formal and central role in product development?
The answer to this question holds the key to technology products that cause less unintended harm to people because they are designed with a more realistic understanding of human psychology.
Why psychologists are few and peripheral
A weak pipeline from academic psychology to the technology sector
When the technology sector focused more on enterprise than consumer applications, hardware than software, and technical professionals than non-technical amateurs, companies had little need for psychologists. They were popular employers of computer scientists, not psychological scientists.
This pattern has continued to the present. For example, computer and computational science were the most popular university subjects for Googlers (29% of Google employees studied them, excluding other engineering subjects); psychology ranks thirteenth (less than 2% of Googlers studied it).
This situation likely led to the underrepresentation of psychologists in the leadership of technology companies, leaving the potential contributions of psychology unfamiliar or underestimated. To continue our example, Google directors show a similar pattern in their university studies, favoring computer and computational science (26%) over psychology (3%).
Widely-held folk psychology that undervalues psychological science
People have an impressive ability to understand, infer, and predict the behavior of others,[7] which is called folk psychology.[8] We use this ability regularly in our personal lives: for example, to anticipate how a friend will react to bad news and how we can support them afterward. Reasonably, technologists rely on this ability to design products for other people. They don’t need psychologists to tell them that digital products should be easy to use, or what makes a product so, because people excel at folk psychology.
Unfortunately, people suffer from an illusion of explanatory depth in this regard, assuming that they understand human psychology better than they do.[9] Our folk psychology is impressive but imperfect: it can be inconsistent (e.g., do birds of a feather flock together, or do opposites attract?), inaccurate (e.g., predictions about our future behavior tend to be too optimistic),[10] and biased (e.g., people overestimate how much others agree with them).[11] Our ability to introspect on our thoughts has fundamental limitations,[12] and our behavior is computationally complex.
The limits of folk psychology are why academic psychologists are experimentalists rather than philosophers. Technology companies could build upon and improve their intuitions about product design if they availed themselves of psychologists and the discoveries they have amassed over the years.
Technology companies do have user researchers who study the behavior of the consumers who use their products, and many of them rely on experimental methods. But to the extent that this work isn’t performed or informed by psychologists, technology companies are ultimately re-deriving more than a century of findings from academic psychology.
The non-trivial translation of academic psychology into product design
Most technologists are untrained in psychology and consequently unaware of most scientific research on human psychology, at least not beyond the pop science that bubbles up — with varying degrees of scientific accuracy — to the surface of popular press books and news media accounts on the subject.
Even when technologists are aware of this research, its translation to the design of technology products is not straightforward.
The experiment is the gold standard of causal inference in science, but human psychology is hard to study experimentally. Psychological variables (e.g., enthusiasm) are difficult to capture as experimental stimuli and measure as outcomes. They can vary significantly across context and culture, and they can be impractical or unethical to manipulate experimentally.
Academic psychologists tackle most of these challenges by studying human psychology in a laboratory setting that increases scientific rigor, at the risk of reducing the extent to which their findings will generalize to the world outside their laboratories.
In a testament to the scientific skill of academic psychologists, laboratory findings generally replicate outside the laboratory settings that first report them.[13] However, the extent to which laboratory findings replicate outside the laboratory varies[14] across psychology subfields, research topics, and the statistical strength of the laboratory findings.
The mixed generalizability of academic psychology makes it challenging for technologists untrained in psychology to translate laboratory discoveries from academic journals to their technology products.
Data-driven over theory-informed prediction of human behavior
Finally, machine learning — and deep learning in particular — has given technology companies a means to predict human behavior without psychological science. Given the right data in sufficient quantity, machine learning models can find statistical relationships that predict behavior without any theoretical account of human psychology as input.
But this data-driven shortcut to behavior prediction isn’t a free lunch. It comes at the cost of interpretability and generalizability.
Machine learning models learn a mapping from input data (e.g., logs of online browsing activity) to output predictions (e.g., the likelihood of reading specific online articles). These mappings are complex, often non-linear, and opaque: they are not interpretable accounts of human behavior. They give technologists a means of anticipating, but not understanding, the behavior of their users.
As a result, a machine learning model that predicts human behavior accurately in one context is not commonly used to predict human behavior in a different context. For example, a model trained to predict grocery purchases isn’t likely to predict fitness decisions as well, even if similar psychology is at the root of these two prediction problems (e.g., concern about physical health).
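A minimal sketch of this limitation, using entirely synthetic data and a hypothetical pair of features (nothing here comes from a real product): a simple logistic-regression model fit to one decision rule predicts well in its own context, but falls to roughly chance accuracy when the same features relate to the outcome differently in a new context.

```python
import math
import random

random.seed(0)

def make_data(n, rule):
    """Synthetic (features, label) pairs; `rule` maps a feature pair to 0/1."""
    data = []
    for _ in range(n):
        x = (random.gauss(0, 1), random.gauss(0, 1))
        data.append((x, rule(x)))
    return data

def train_logreg(data, epochs=30, lr=0.1):
    """Fit a logistic regression with plain stochastic gradient descent."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = max(-30.0, min(30.0, w[0] * x[0] + w[1] * x[1] + b))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss with respect to z
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    correct = sum(
        1 for x, y in data
        if (w[0] * x[0] + w[1] * x[1] + b > 0) == (y == 1)
    )
    return correct / len(data)

# Hypothetical "grocery" context: the two features add up to drive the choice.
grocery_rule = lambda x: 1 if x[0] + x[1] > 0 else 0
# Hypothetical "fitness" context: the same features trade off against each other.
fitness_rule = lambda x: 1 if x[0] - x[1] > 0 else 0

model = train_logreg(make_data(400, grocery_rule))
acc_in = accuracy(model, make_data(400, grocery_rule))   # same context: high
acc_out = accuracy(model, make_data(400, fitness_rule))  # new context: ~chance
print(f"in-context accuracy: {acc_in:.2f}, out-of-context accuracy: {acc_out:.2f}")
```

The learned weights say nothing about why people choose as they do; they only encode the statistical map for the context they were fit on, which is why the out-of-context accuracy collapses toward 50%.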
Transfer learning — reusing what a model has learned on one prediction problem to accelerate learning on a similar one — is commonly used in computer vision tasks like object detection in images and videos. But the decision-making that precedes human behavior, and its psychological substrates like the elicitation of specific emotions, remain largely outside the practical and widespread application of transfer learning.
Potential benefits of further integrating psychologists in the development of technology products
Most consumer technology products try to change human behavior in one way or another, including product adoption. And the companies that build them profit from this change. What benefits could these companies and their products derive from psychologists joining their development teams?
The first benefit is faster learning. Psychologists have been studying human behavior scientifically for over a century. Although their knowledge of psychology isn’t perfect or complete, it’s a strong head start from which to begin the development of products for humans. There is no need to start from scratch, ignoring a century’s worth of relevant research.
Psychologists are also helpful in performing applied research on consumer psychology in product development. Often, technologists design product experiments for one-time or short-term discoveries (e.g., do users click more on a red or a blue button?). Psychologists, by contrast, can help design theory-driven experiments (e.g., what’s the minimum contrast difference between a button and its surroundings to yield a pop-out effect and increase clickthrough rate?). Experiments like these can yield more generalizable findings (e.g., a 20% color contrast, irrespective of button color, increases button clickthrough rate by one standard deviation).
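As a sketch of what the theory-driven version looks like at analysis time (the numbers below are invented for illustration, not real experimental data): instead of comparing two button colors, vary contrast parametrically and estimate the smallest contrast level whose clickthrough rate reliably exceeds baseline.

```python
import math

# Hypothetical experiment: button-background contrast (%) -> (clicks, impressions)
results = {0: (120, 4000), 10: (150, 4000), 20: (210, 4000), 30: (225, 4000)}

def two_proportion_z(c1, n1, c2, n2):
    """z statistic for the difference between two proportions (pooled variance)."""
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

baseline = results[0]
threshold = None  # smallest contrast with a reliable lift over baseline
for contrast in sorted(results):
    if contrast == 0:
        continue
    z = two_proportion_z(*baseline, *results[contrast])
    if z > 1.96 and threshold is None:  # ~.05 two-sided criterion per comparison
        threshold = contrast

print(f"estimated minimum effective contrast: {threshold}%")
```

A real analysis would fit a dose-response curve and correct for multiple comparisons, but even this crude version yields a parameter (a contrast threshold) that generalizes beyond any single pair of colors.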
The second benefit is design principles. Technologists build novel digital ecosystems whose complexity is not amenable to simple experimentation. For example, they create online communities meant to promote positive, frequent engagement while disincentivizing negative behavior. In practice, these goals are not always met, as in the harmful effect of Instagram on the mental health of teenage girls.[15] Product development teams could use psychological principles as a basis on which to build and expand (e.g., motivating engagement in online communities[16] and inspiring trust in the fairness of algorithmic decision-making[17]).
The final benefit is a reduction in unintended harm. Technologists design their products with the intent to change the world for good. But this intent sometimes rests on unrealistic assumptions about human psychology: for example, that people can moderate their use of certain digital products better than they actually can (a sentence I write after taking a break from a doomscrolling session on Twitter).
As a result of these incorrect assumptions about human psychology, technologists can design digital products that harm their users in unintended ways. Psychologists on the teams that build these products would not solve the problem outright, but they could help design products with a more realistic understanding of human limits and so reduce some of these unintended harms.
Juan Manuel Contreras, Ph.D. is an applied science manager at Uber. He was trained as a cognitive neuroscientist at Harvard University and Princeton University. The opinions expressed in this essay are personal and do not necessarily represent or reflect the views, opinions, or policies of Uber.
1. Keating, L. (2017, March 2). Survey Finds Most People Check Their Smartphones Before Getting Out Of Bed In The Morning. Tech Times. https://www.techtimes.com/articles/199967/20170302/survey-finds-people-check-smartphones-before-getting-out-bed.htm
2. Push Notifications Statistics (2021). (2021, August 31). Business of Apps. https://www.businessofapps.com/marketplace/push-notifications/research/push-notifications-statistics/
3. Lewis, T. (2014, July 1). Facebook Emotions Are Contagious, Study Finds. Yahoo! News. https://news.yahoo.com/facebook-emotions-contagious-study-finds-133519750.html
4. Duffy, A. C. B. J. (2021, October 5). Instagram’s grim appeal as a silent self-esteem breaker. CNN. https://edition.cnn.com/2021/10/05/health/instagram-self-esteem-parenting-wellness/index.html
5. Hampshire, R. C., Simek, C., Fabusuyi, T., Di, X., & Chen, X. (2017). Measuring the Impact of an Unanticipated Disruption of Uber/Lyft in Austin, TX. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2977969
6. Koppelman, A. (2019, March 18). YouTube and other social networks are radicalizing white men. Big tech could be doing more. CNN Business. https://edition.cnn.com/2019/03/17/tech/youtube-facebook-twitter-radicalization-new-zealand/
7. Mitchell, J. P. (2009). Inferences about mental states. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1521), 1309–1316. https://doi.org/10.1098/rstb.2008.0318
8. Stich, S., & Ravenscroft, I. (1993, October). What is Folk Psychology? (Technical Report #5). Rutgers University Center for Cognitive Science, Rutgers University. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.365.5705&rep=rep1&type=pdf
9. Houston, J. P. (1985). Untutored Lay Knowledge of the Principles of Psychology: Do We Know Anything They Don’t? Psychological Reports, 57(2), 567–570.
10. Buehler, R., Griffin, D., & Ross, M. (1995). It’s About Time: Optimistic Predictions in Work and Love. European Review of Social Psychology, 6(1), 1–32. https://doi.org/10.1080/14792779343000112
11. Ross, L., Greene, D., & House, P. (1977). The “False Consensus Effect”: An Egocentric Bias in Social Perception and Attribution Processes. Journal of Experimental Social Psychology, 13(3), 279–301. https://doi.org/10.1016/0022-1031(77)90049-x
12. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295x.84.3.231
13. Anderson, C. A., Lindsay, J. J., & Bushman, B. J. (1999). Research in the Psychological Laboratory. Current Directions in Psychological Science, 8(1), 3–9. https://doi.org/10.1111/1467-8721.00002
14. Mitchell, G. (2012). Revisiting Truth or Triviality: The External Validity of Research in the Psychological Laboratory. Perspectives on Psychological Science, 7(2), 109–117. https://doi.org/10.1177/1745691611432343
15. Wells, G., Horwitz, J., & Seetharaman, D. (2021, September 14). Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show. The Wall Street Journal. https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739
16. Ling, K., Beenen, G., Ludford, P., Wang, X., Chang, K., Li, X., Cosley, D., Frankowski, D., Terveen, L., Rashid, A. M., Resnick, P., & Kraut, R. (2005). Using Social Psychology to Motivate Contributions to Online Communities. Journal of Computer-Mediated Communication, 10(4). https://doi.org/10.1111/j.1083-6101.2005.tb00273.x
17. Lee, M. K. (2018). Understanding perception of algorithmic decisions: Fairness, trust, and emotion in response to algorithmic management. Big Data & Society, 5(1). https://doi.org/10.1177/2053951718756684