Why do we keep handing more decisions to AI?
Delegation Creep Bias, explained.
What is Delegation Creep in AI?
Delegation creep in AI is a bias in which people and organizations steadily expand what they let automated systems do. It often starts with low-stakes tasks like sorting email or drafting boilerplate, then drifts into screening job candidates, triaging welfare cases, or shaping messages with real strategic or ethical weight. Human factors research documents the related patterns of automation bias and automation complacency: decision makers miss problems the system does not flag and follow incorrect recommendations because past performance has been good. Over-reliance grows under time pressure and distraction, so sensitive decisions end up flowing through systems that were never designed or governed for that level of responsibility.
Where this bias occurs
Delegation creep appears wherever AI tools are embedded into workflows as helpers or copilots. In personal productivity, people start by using an assistant to manage calendar invites and inbox triage, then drift toward letting it prioritize which opportunities or relationships receive attention first. In content creation, teams adopt AI to write outlines and rough drafts, then lean on it to choose angles, frame arguments, and even decide which audiences to prioritize.
In operational settings, the pattern is sharper. Research on automation in complex systems shows that once a decision aid is introduced, operators quickly shift monitoring and judgment toward the automated channel.1 Under routine conditions, they tend to assume the system is functioning correctly and may overlook cues that contradict its output.2 Studies of trust in automated systems find that when tools are framed as accurate and efficient, people become more willing to lean on them and less inclined to question their recommendations, especially under time pressure.3
Policy and ethics work on AI governance has flagged a similar problem with formal oversight requirements. Many regulations call for human review of automated decisions, yet in practice, “human-in-the-loop” often means adding a signature step at the end of a process that is otherwise automated.4 When those reviewers lack the time, context, or real authority to override the system, review becomes a ritual rather than a safeguard. The concept of moral crumple zones captures how humans can be left to absorb blame when things go wrong, even though the structure of the system gave them little real control in the first place.5
In workplaces governed by algorithmic management, automated systems assign tasks, rate performance, and suggest disciplinary steps. Empirical Human-Computer Interaction (HCI) research shows that such tools often begin as scheduling aids or performance dashboards and gradually become central to decisions about pay, promotion, and continued employment.6 Employer surveys indicate that once in place, these systems are frequently expanded to new uses without a corresponding upgrade in governance or worker voice.7 Organizational psychology research links this dynamic to reduced autonomy and motivation, as employees adapt their behavior to what the system measures and rewards rather than to professional judgment.8 Labor law scholars warn that, without safeguards, this expansion of delegated decision making can entrench discrimination and weaken existing protections at work.9