How to Preserve Agency in an AI-Driven Future
Imagine sitting in a self-driving car with no steering wheel, no brakes, and no clear way to override its decisions. At first, this might seem liberating—after all, you don’t need to worry about checking the directions or navigating through traffic. But as the car starts taking turns you wouldn’t have chosen yourself, a question arises: are you truly in control, or have you handed over your agency to a machine?
Agency, or the ability to make meaningful decisions and control one’s path, is central to human fulfillment. Yet, as artificial intelligence drives more of our choices, our relationship with agency grows more complex. Are we benefiting from AI's efficiencies, or are we quietly losing something essential in the process?
In this article, we’ll explore what agency means, the challenges posed by automation (especially at work), and some actionable steps we can take to preserve autonomy in an AI-driven future.
Agency is… well, important
Hopefully, this is a completely uncontentious statement, but agency is really, really important for developing personal and professional happiness. You could read Seligman on “authentic happiness,”1 Ryan & Deci on self-determination theory,2 or any of the thousands of other studies and articles advocating that we care about being able to control our lives. Regardless, they'll all roughly come back to the same point: the ability to act independently and make choices is a cornerstone of our well-being.
This, among other practical factors, is why one of the biggest punishments we’ve designed as a species is to constrain people’s agency by putting them in prison. In fact, according to Hojman & Miranda, agency can have an effect on life satisfaction comparable to that of income.3
Agency is also easier said than done
While we can hopefully all agree that exercising agency is important, doing so may be easier said than done—especially in a world that is increasingly being automated by AI. In fact, given how important it is to our well-being, we as individuals know surprisingly little about what it takes to actually feel like we’re exercising agency. One critical point that we may not often consider, and which is highly relevant to our AI-led future, is that our well-being depends on a feeling of agency, not on our actual ability to make decisions.
Think of it this way: imagine you’re in a highly complex environment where you are free to do whatever you want, but it always seems like your actions have unintended consequences. You might technically be able to exercise control over your choices, but you’ll likely feel a strong lack of agency and a general feeling of bummed-outness. This is because our sense of autonomy is a fragile thing—something that we need to understand, nurture, and, in the context of AI, hopefully not accidentally kill. It might seem like a luxury, but it’s actually a necessity for setting ourselves up for success. Sheldon et al. even showed across three studies that supporting one’s own autonomy is more important than feeling supported by others.4
Agency and feedback loops
Research on this topic shows that our feeling of autonomy is driven by the feedback loop between our actions and their outcomes. When we feel our actions lead to predictable or desired results, our sense of autonomy strengthens (hence why you might feel bummed out in that complex and unpredictable environment). However, in an AI-dominated world, where complex systems often mediate or obscure the connection between our choices and their outcomes, this feedback loop can easily break down.
Think about recommender systems for a second—such as TikTok’s “For You Page” or Netflix’s tailored movie selections. On the surface, such features are designed to enhance our experience by tailoring content or services to our preferences. But when AI systems make decisions on our behalf—be it in the media we consume, the products we buy, or even the jobs we are matched to—we may begin to feel less like active agents in our own lives and more like passive participants in decisions we didn’t fully make ourselves. Even when these systems are designed to serve us, there’s a psychological toll when the underlying mechanisms are opaque, making it difficult to understand or feel in control of the outcomes.
The erosion of professional agency
As AI systems become more sophisticated, they'll inevitably play a bigger and bigger role in the work we do. Leaving aside doomy predictions that this will eliminate all jobs, we are likely to live very different professional lives in a world dominated by AI. While this integration promises enhanced efficiency and productivity, it also raises questions about the nature of human agency in the workplace.
Consider a scenario where an AI system is implemented to optimize a company's workflow. It might assign tasks, set deadlines, and even evaluate performance based on complex algorithms. On paper, this sounds like a dream come true for efficiency-obsessed managers. But for the employees, it could feel like their professional autonomy is being slowly chipped away.
The “black box” nature of many AI systems exacerbates this issue. When decisions about your work are made by an algorithm you don't understand, it's easy to feel like a cog in a machine rather than a valued professional making meaningful choices. The satisfaction of problem-solving, the pride in creative solutions, and the sense of ownership over one's work—all these could potentially be diminished in an AI-driven workplace.
Another problem we might encounter—and this is especially true at work—revolves around our feelings of flow. Flow, the state proposed by Mihaly Csikszentmihalyi of being “in the zone” or “locked in,”5 relies on doing things that are engaging and at an ideal level of difficulty for us. Too easy, and we get bored. Too difficult, and we get frustrated. The problem is that in a workplace where AI helps us do the things it’s best at helping us do, we may increasingly be driven to outsource tasks that would normally help us reach that “flow sweet spot.”
What can we do?
What we should do—but are unlikely to—is make very intentional decisions about the role we want AI to play at work. However, most of us are far too pragmatic (or rather, too short-term focused) for that. It’s far more likely that we’ll let AI do whatever AI is best at doing, then cross our fingers and hope for the best.
A better approach might be to think really deeply about agency. Perhaps study it a lot more and invest resources into understanding how AI-augmented work environments can be designed to better fit our psychological needs while still yielding increases in productivity. Here are a few ways we might be able to achieve just that:
1. Human-centric AI design principles
Instead of retrofitting AI systems to accommodate human agency, we need to bake it into the cake from the get-go. We could push for the development of a set of "Human-Centric AI Design Principles" that major tech companies and AI developers would pledge to follow.
One promising starting point is the framework for ethical inquiry proposed by Calvo and colleagues, which examines how AI systems can be designed to better support our autonomy.6 Principles built on this kind of research would prioritize human agency and well-being in AI system design. For instance, they might mandate that AI systems always present multiple options rather than a single recommendation, or require that AI assistants actively encourage human input and creativity rather than simply providing solutions.
The key here is to shift the paradigm of AI development from "maximum efficiency" to "optimal human-AI collaboration." This could be achieved through a combination of public pressure, employee advocacy, and potentially even legislation. It's a big ask, but if we can get major players on board, it could fundamentally change how AI is integrated into our work lives.
2. Agency-preserving workflow structures
We need to rethink how we structure our work processes in an AI-augmented world. Instead of letting AI handle entire workflows, we could design systems where human decision-making is strategically inserted at key points.
For example, in a marketing campaign, AI might handle data analysis and generate initial ideas, but humans would be responsible for selecting which ideas to ultimately pursue, refining the chosen concepts, and making final approval decisions. This approach ensures that while AI handles the heavy lifting, humans remain the ultimate decision-makers.
This structure could be formalized across industries, creating new roles focused on human oversight and creative input. It's not about making busy work for humans, but about leveraging our unique capabilities for judgment, creativity, and ethical considerations.
3. Radical transparency and AI literacy programs
One of the biggest threats to our sense of agency is the "black box" nature of many AI systems. To combat this, we need to push for radical transparency in AI operations and invest heavily in AI literacy for all employees.
Companies could be required to provide clear, accessible explanations of how their AI systems work, what data they use, and how they make decisions. This goes beyond just technical documentation—it means creating intuitive visualizations, interactive demos, and other tools that make AI processes understandable to non-experts.
Alongside this, we need comprehensive AI literacy programs for employees at all levels. These wouldn't just cover the basics of how AI works but would focus on how to effectively collaborate with AI, how to critically evaluate AI outputs, and how to maintain agency in an AI-augmented workplace. By demystifying AI and empowering employees with knowledge, we can help preserve their sense of agency even as AI becomes more prevalent in their work lives.
Taking back the wheel
These solutions are ambitious, but they're the kind of big-picture changes we need to seriously consider if we want to maintain meaningful human agency in an increasingly AI-driven workplace. They require buy-in from companies, developers, and potentially regulators, but they offer a path to a future where AI enhances rather than diminishes our sense of control over our work lives.
References
- Seligman, M. E. P. (2002). Authentic happiness: Using the new positive psychology to realize your potential for lasting fulfillment. Free Press.
- Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68
- Hojman, D. A., & Miranda, Á. (2018). Agency, human dignity, and subjective well-being. World Development, 101(C), 1-15. https://doi.org/10.1016/j.worlddev.2017.07.010
- Sheldon, K. M., Corcoran, M., & Titova, L. (2021). Supporting one’s own autonomy may be more important than feeling supported by others. Motivation Science, 7(2), 176–186. https://doi.org/10.1037/mot0000215
- Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. Harper & Row.
- Calvo, R. A., Peters, D., Vold, K., & Ryan, R. M. (2020). Supporting human autonomy in AI systems: A framework for ethical enquiry. In Ethics of Digital Well-Being. Springer. https://doi.org/10.1007/978-3-030-50585-1_2
About the Author
Dr. Sekoul Krastev
Sekoul is a Co-Founder and Managing Director at The Decision Lab. He is the bestselling author of Intention, a book published with Wiley on the mindful application of behavioral science in organizations. A decision scientist with a PhD in Decision Neuroscience from McGill University, Sekoul's work has been featured in peer-reviewed journals and presented at conferences around the world. Sekoul previously advised management on innovation and engagement strategy at The Boston Consulting Group, as well as on online media strategy at Google. He has a deep interest in the applications of behavioral science to new technology and has published on these topics in outlets such as the Huffington Post and Strategy & Business.