The algorithm age and its consequences
We’ve entered the age of workplace AI. Seven years ago, only 10% of large companies had integrated AI into their organizations. Today, it’s over 80%.1
Just as we never could have predicted the ways the Internet would transform the workplace, the full power of artificial intelligence is yet to be known. However, current trends tell us one thing for certain: we will use AI to help us make better decisions.
AI decision aids have been shown to save us money, time, and cognitive effort.2 Whoever integrates AI correctly will have massive competitive upside.
But if decision-making is increasingly being done by highly efficient, constantly improving algorithms, what is left for employees? How do they find meaning and fulfillment in jobs where they have less and less to do?
Mind vs Machine: How AI could hurt the workforce
While AI is in its infancy, we have seen early warnings about how artificially intelligent decision aids could hinder employees’ sense of meaning at work.
As our workplaces have become increasingly digitalized, the amount of information that employees have to manage has skyrocketed. In already stressful positions, this data abundance can lead to information overload, technostress, and difficulty making decisions.2
Therefore, it makes sense to introduce a highly effective AI decision-maker that can cut through the noise and choose for employees — right? Not exactly. Evidence shows that when AI tells employees what to do, they tend to grow skeptical of the technology. The end result: they become even less likely to use AI to facilitate their work.2
If this seems counterintuitive, just imagine having a boss who micromanages all your decisions but can’t even communicate how they came up with the solution they want you to implement. Sound frustrating? When AI is given completely free rein, it is easy to end up feeling like a cog in the machine, simply an implementer of a computer’s confusing decisions.
Case Study: Domo Arigato, Dr. Roboto
Consider IBM’s Watson for Oncology. This supercomputer could accurately identify 12 common cancers and provide expert-level treatment recommendations.3 It was designed to help doctors make more accurate assessments rather than rely purely on gut instinct.
When put into the field, however, IBM Watson didn’t mix well with the doctors. When it made recommendations that agreed with theirs, the doctors saw it as redundant. When it made recommendations that contradicted them, they saw it as faulty. Because its machine learning algorithm was so complex, it couldn’t explain how it reached its conclusions, which left the doctors less trusting and more stressed. It was a PR nightmare for IBM, whose medical partner later dropped the program.3 Clearly, integrating AI decision aids must be done with human users in mind.
Machine managers and pencil pushers: handling meaninglessness in the age of AI
So, what do we do? Caught between management’s push to integrate AI quickly and a workforce that needs to find fulfillment in its jobs, we cannot proceed recklessly.
We must first understand the science behind meaningful work, and then integrate AI in a way that doesn’t infringe on employees’ quest for purpose.
What makes work meaningful?
To tap into this mysterious question of purpose, we must first understand that human beings fundamentally require three psychological nutrients to feel motivated:4
1. Autonomy: The feeling of control over one’s own actions, thoughts, and goals.
2. Relatedness: The feeling of meaningfully connecting with others.
3. Competence: The feeling of growth and gaining mastery over skills.
Together, these three facets make up self-determination theory, an evidence-based motivation framework. Fulfilling these requirements in the workplace has several beneficial downstream implications: employees will feel more intrinsically motivated, which leads to greater performance, job satisfaction, well-being, organizational citizenship, and positive work-related attitudes.4
In addition to self-determination theory, a bevy of research into workplace fulfillment has found two more essential facets of meaningful work:5
4. Significance: A perception that one’s work is worth doing (i.e. is intrinsically valuable).
5. Broader purpose: The idea that one’s work contributes to something bigger than oneself, or the greater good.
In essence, work that helps employees grow, build relationships, do things they value, and positively impact the world will be seen as meaningful. If AI decision aids are coming, they must be implemented in a way that doesn’t detract from why we go to work in the first place. But how do we go about doing this?
Responsible workplace AI: integrating behavioral science
Human-centric design, with a focus on self-determination theory, can bring employees improved satisfaction, stronger trust, and greater confidence in the decision-making process. When done correctly, it can even make employees more engaged, leading to more efficient and accurate decisions.
Autonomy: putting the power in the employee’s hands
Autonomy is perhaps the most important facet. When an AI makes all the decisions, how can you feel in control? Thus, it is critical to design some element of choice into the equation.
Simply providing users with the option to ask for help from the AI, rather than the AI automatically providing information, has been shown to elicit strong feelings of autonomy.6 In one study, participants who were allowed to interact with an AI decision aid on their own terms were more satisfied with the bot — and more engaged with it — compared to participants who didn’t have a choice.
To ensure both employee satisfaction and better decision-making, it is essential to strike a balance between AI use and autonomy. If employees feel like an AI is running their life, they’re not likely to use it. But if AI is just another tool in their arsenal, then adding an optional decision aid can empower employees instead of stifling them.
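The opt-in pattern described above can be sketched in a few lines. This is a hypothetical illustration, not a real product API: the class name, the `suggest` callable, and the example suggestion text are all assumptions made for the sake of the sketch.

```python
# Hypothetical sketch of an opt-in decision aid: the system offers a
# suggestion only when the employee explicitly asks for one, and stays
# silent otherwise. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class OptInDecisionAid:
    # `suggest` stands in for whatever model produces a recommendation.
    suggest: Callable[[str], str]

    def recommend(self, case: str, user_requested: bool) -> Optional[str]:
        """Return a suggestion only if the employee requested one."""
        if not user_requested:
            return None  # stay silent: the decision remains the employee's
        return self.suggest(case)


aid = OptInDecisionAid(suggest=lambda case: f"Review vendor quotes for {case}")
print(aid.recommend("Q3 procurement", user_requested=False))  # None
print(aid.recommend("Q3 procurement", user_requested=True))
```

The key design choice is that the default path does nothing: the AI is a tool the employee reaches for, not a manager that interrupts.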
Competence: show, don’t tell
AI systems are complicated, and thus difficult to understand. This confusion is a major threat to feelings of competence.6 However, if we make room for growth within AI–employee interactions, employees can feel like they’re learning.
Instead of keeping AI’s processes a black box, showing employees a simplified, educational version of how the AI came to the solution will help them:
- Understand the system better
- Feel like they are learning and improving
As AI becomes more complex, it will get more difficult to teach employees exactly what is going on under the hood. Even if it is just the broad strokes, adding an educational element to AI recommendations has been shown to boost feelings of competence. This doesn’t only benefit workers’ well-being; it also can lead to more accurate decision-making, improving outcomes for both employees and their organizations.6
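One simple way to give employees the “broad strokes” is to pair each recommendation with the handful of factors that weighed most heavily in it. The sketch below is purely illustrative, with made-up factor weights rather than output from any real model.

```python
# Hypothetical sketch: attach a simplified, educational explanation to an
# AI recommendation by surfacing the few factors with the largest weights.
# The factor names and weights are invented for illustration.

def explain(recommendation: str, factor_weights: dict[str, float], top_n: int = 3) -> str:
    # Sort factors by the magnitude of their influence and keep the top few.
    top = sorted(factor_weights.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    reasons = ", ".join(name for name, _ in top)
    return f"{recommendation} (main factors: {reasons})"


msg = explain(
    "Prioritize supplier B",
    {"delivery reliability": 0.62, "unit cost": 0.25, "lead time": 0.10, "brand": 0.03},
)
print(msg)  # Prioritize supplier B (main factors: delivery reliability, unit cost, lead time)
```

Even this rough summary turns an opaque verdict into something an employee can learn from and argue with.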
Relatedness: say hello to your new Robo-friend!
To tackle relatedness, AI decision aids must take on more human characteristics. For example, simply programming a chatbot to refer to itself as “I” and address the employee by their first name dramatically improves feelings of relatedness.6 Additionally, representing the AI with an avatar makes employees feel like they are collaborating with a human, rather than being told what to do by a machine.6
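The first-person, first-name findings above amount to a very small design change. Here is a minimal, hypothetical sketch of what such a reply template could look like; the wording and names are assumptions, not a real chatbot framework.

```python
# Hypothetical sketch: a chatbot reply template that speaks in the first
# person ("I") and addresses the employee by first name, following the
# relatedness findings discussed above. Wording is illustrative.

def relational_reply(first_name: str, suggestion: str) -> str:
    return f"Hi {first_name}, I looked at the latest figures and I'd suggest: {suggestion}."


print(relational_reply("Maya", "moving the launch to Thursday"))
```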
Purpose: The final frontier
However, employees’ sense of meaning doesn’t stop with self-determination. As noted above, workers also need a sense of significance and broader purpose to be fully motivated. AI is an incredibly powerful tool on this front, with the ability to scale social change to a shocking degree.7 If leaders embrace technological social responsibility, they can align their AI efforts with the social good.8 This purposeful use of AI will not only bring transformational positive change, but should also lead to higher feelings of purpose in employees.
The workplace of tomorrow: integrating AI responsibly for the social good
We are standing on the precipice of an AI-enabled world. It is difficult to predict what its development will mean for the workplace, the economy, or the future of humanity. Combining scenario planning and behavioral science, we predict that AI will be used as a decision aid, which could undermine workers’ intrinsic motivation.
Leaning on human-centric design, we can mitigate the negative effects of AI decision aids by tailoring them towards satisfying needs for autonomy, competence, and relatedness. Additionally, aligning your company with a broader social purpose will make AI-assisted decisions feel more necessary, just, and meaningful.
The Decision Lab is a behavioral consultancy that uses science to advance social good. Artificial intelligence and its implementation have the potential to radically transform all facets of human life and business. Working alongside tech giants, we have learned how to leverage big data to create big impact. If you are interested in integrating artificial intelligence into your organization in a human-focused, socially impactful way, contact us.
- Ghosh, B. (2019, May). Taking a systems approach to adopting AI. Harvard Business Review. Retrieved July 29, 2022, from https://hbr.org/2019/05/taking-a-systems-approach-to-adopting-ai
- Ulfert, A.-S., Antoni, C. H., & Ellwart, T. (2022). The role of agent autonomy in using decision support systems at work. Computers in Human Behavior, 126, 106987. https://doi.org/10.1016/j.chb.2021.106987
- Polonski, V. (2021, July 13). Humans don't trust AI predictions - here's how to fix it. The OECD Forum Network. Retrieved July 29, 2022, from https://www.oecd-forum.org/posts/29988-humans-don-t-trust-artificial-intelligence-predictions-here-s-how-to-fix-it
- Gagné, M., & Deci, E. L. (2005). Self-determination theory and work motivation. Journal of Organizational Behavior, 26(4), 331–362. https://doi.org/10.1002/job.322
- Martela, F., & Pessi, A. B. (2018). Significant work is about self-realization and broader purpose: Defining the key dimensions of meaningful work. Frontiers in Psychology, 9. https://doi.org/10.3389/fpsyg.2018.00363
- De Vreede, T., Raghavan, M., & De Vreede, G.-J. (2021). Design foundations for AI assisted decision making: A self determination theory approach. Proceedings of the Annual Hawaii International Conference on System Sciences, 166–175. https://doi.org/10.24251/hicss.2021.019
- Tomašev, N., Cornebise, J., Hutter, F., Mohamed, S., Picciariello, A., Connelly, B., Belgrave, D. C., Ezer, D., Haert, F. C., Mugisha, F., Abila, G., Arai, H., Almiraat, H., Proskurnia, J., Snyder, K., Otake-Matsuura, M., Othman, M., Glasmachers, T., Wever, W. de, … Clopath, C. (2020). AI for social good: Unlocking the opportunity for positive impact. Nature Communications, 11(1). https://doi.org/10.1038/s41467-020-15871-z
- Bughin, J., & Hazan, E. (2022, April 28). Can artificial intelligence help society as much as it helps business? McKinsey & Company. Retrieved August 11, 2022, from https://www.mckinsey.com/business-functions/quantumblack/our-insights/can-artificial-intelligence-help-society-as-much-as-it-helps-business
About the Authors
Triumph is passionate about understanding how human behavior influences our world. Whether it be global macroeconomics or neural networks, he is fascinated by how complex systems work, as well as how our own behavior can help create, sustain, and break these systems. He is currently pursuing a Bachelor’s degree in Economics and Psychology at McGill University, attempting to design an interdisciplinary approach to better understand all the quirks that make us human. He has experience in non-profit consulting, journalism, and research. Outside of work, you can find Triumph playing bass guitar, gardening, or down at a local basketball court.
Sekoul is a Co-Founder and Managing Director at The Decision Lab. A decision scientist with an MSc in Decision Neuroscience from McGill University, Sekoul’s work has been featured in peer-reviewed journals and has been presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in places such as the Huffington Post and Strategy & Business.
Sarah Chudleigh is passionate about the accessible distribution of academic research. She has had the opportunity to practice this as an organizer of TEDx conferences, editor-in-chief of her undergraduate academic journal, and lead editor at the LSE Social Policy Blog. Sarah gained a deep appreciation for interdisciplinary research during her liberal arts degree at Quest University Canada, where she specialized in political decision-making. Her current graduate research at the London School of Economics and Political Science examines the impact of national values on motivations to privately sponsor refugees, a continuation of her interest in political analysis, identity, and migration policy. On weekends, you can find Sarah gardening at her local urban farm.