Why do we keep handing more decisions to AI?

The Delegation Creep Bias, explained.

What is Delegation Creep in AI?

Delegation creep in AI is a bias in which people and organizations steadily expand what they let automated systems decide. It often starts with low-stakes tasks like sorting emails or drafting boilerplate, then drifts into screening job candidates, triaging welfare cases, or shaping messages with real strategic or ethical weight. Human factors research documents the related patterns of automation bias and complacency, in which decision makers miss problems the system does not flag and follow incorrect recommendations because the system has performed well in the past. Over-reliance grows with time pressure and distraction, so sensitive decisions end up flowing through systems that were never designed or governed for that level of responsibility.

Where this bias occurs

Delegation creep appears wherever AI tools are embedded into workflows as helpers or copilots. In personal productivity, people start by using an assistant to manage calendar invites and inbox triage, then drift toward letting it prioritize which opportunities or relationships receive attention first. In content creation, teams adopt AI to write outlines and rough drafts, then lean on it to choose angles, frame arguments, and even decide which audiences to prioritize.

In operational settings, the pattern is sharper. Research on automation in complex systems shows that once a decision aid is introduced, operators quickly shift monitoring and judgment toward the automated channel.1 Under routine conditions, they tend to assume the system is functioning correctly and may overlook cues that contradict its output.2 Studies of trust in automated systems find that when tools are framed as accurate and efficient, people become more willing to lean on them and less inclined to question their recommendations, especially under time pressure.3

Policy and ethics work on AI governance has flagged a similar problem with formal oversight requirements. Many regulations call for human review of automated decisions, yet in practice, “human-in-the-loop” often means adding a signature step at the end of a process that is otherwise automated.4 When those humans lack time, context, or real authority to override the system, review becomes a ritual rather than a safeguard. The concept of moral crumple zones captures how humans can be left to absorb blame when things go wrong, even though the structure of the system gave them little real control in the first place.5

In workplaces governed by algorithmic management, automated systems assign tasks, rate performance, and suggest disciplinary steps. Empirical Human-Computer Interaction (HCI) research shows that such tools often begin as scheduling aids or performance dashboards and gradually become central to decisions about pay, promotion, and continued employment.6 Employer surveys indicate that once in place, these systems are frequently expanded to new uses without a corresponding upgrade in governance or worker voice.7 Organizational psychology research links this dynamic to reduced autonomy and motivation, as employees adapt their behavior to what the system measures and rewards rather than to professional judgment.8 Labor law scholars warn that, without safeguards, this expansion of delegated decision making can entrench discrimination and weaken existing protections at work.9

Sources

  1. Skitka, L. J., Mosier, K., & Burdick, M. (1999). Does automation bias decision-making? International Journal of Human-Computer Studies, 51(5), 991–1006. https://doi.org/10.1006/ijhc.1999.0252
  2. Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381–410. https://doi.org/10.1177/0018720810376055
  3. Gsenger, R., Eier, J., Csillag, J., & Schlogl, S. (2021). Trust, automation bias and aversion: Investigating trust in automated systems. Interdisciplinary Description of Complex Systems, 19(4), 542–560.
  4. Green, B. (2022). The flaws of policies requiring human oversight of government algorithms. Computer Law & Security Review, 45, 105681. https://doi.org/10.1016/j.clsr.2022.105681
  5. Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human–robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260
  6. Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2015). Working with machines: The impact of algorithmic and data-driven management on human workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 1603–1612). https://doi.org/10.1145/2702123.2702548
  7. Milanez, A., Lemmens, A., & Ruggiu, C. (2025). Algorithmic management in the workplace: New evidence from an OECD employer survey (OECD Artificial Intelligence Papers, No. 31). OECD Publishing. https://doi.org/10.1787/287c13c4-en
  8. Gagné, M., Parent-Rocheleau, X., Bujold, A., Gaudet, M.-C., & Lirio, P. (2022). How algorithmic management influences worker motivation: A self-determination theory perspective. Canadian Psychology / Psychologie canadienne, 63(2), 247–260. https://doi.org/10.1037/cap0000324
  9. De Stefano, V., & Taes, S. (2023). Regulating AI at work: Labour relations, automation, and discrimination. International Journal of Comparative Labour Law and Industrial Relations, 39(1), 13–40.
  10. Shah, C. (2023, July 26). Australia’s Robodebt scheme: A tragic case of public policy failure. Blavatnik School of Government, University of Oxford. https://www.bsg.ox.ac.uk/blog/australias-robodebt-scheme-tragic-case-public-policy-failure
  11. Merken, S. (2023, June 26). New York lawyers sanctioned for using fake ChatGPT cases in legal brief. Reuters. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
