Why do AI systems make responsibility feel like no one’s job?

The accountability diffusion in AI bias, explained.
What is Accountability Diffusion in AI?

Accountability diffusion in AI is the tendency for responsibility to spread so thinly across the people, teams, and systems involved in an AI project that no one feels fully answerable for an outcome. When a decision goes well, everyone can claim some credit. When it goes badly, it is easier to say “the AI decided” than “I decided.” The concept builds on classic studies of diffusion of responsibility, which show that people feel less personally responsible for acting when many others are present who could intervene.[1] Over time, decisions that affect real lives come to feel like the output of a process rather than a choice anyone made.

Where this bias occurs

Picture a loan officer reviewing applications in a busy call center. A scoring model provides a clean risk score and a recommendation: “Decline,” accompanied by a small explanation box. The officer has a long queue, a script on screen, and internal messages reminding the team to “stay aligned with the model.” The applicant sounds nervous on the phone. Their file includes some unusual circumstances that do not fit the standard categories.

The officer glances at the score, feels a twinge of doubt, and then clicks “decline” while reading the wording that the system suggests. It feels like the safe option. After all, the model was validated, compliance approved it, and leadership is tracking adherence.

Months later, investigative reporters reveal that the model systematically rated certain neighborhoods as higher risk based on historical data patterns. The bank issues a statement noting that humans made final decisions. Staff talk about “following the system.” Vendors emphasize that clients are the ones who choose specific thresholds and policies. Regulators ask who owns the outcome. Inside the organization, there is no single, clear answer.

Accountability diffusion in AI occurs when responsibility is fragmented into thin layers across design, deployment, day-to-day use, and post-mortems, allowing every actor to point elsewhere when things go awry.

Sources

  1. Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 8(4), 377–383. https://doi.org/10.1037/h0025589 
  2. Goddard, K., Roudsari, A., & Wyatt, J. C. (2012). Automation bias: A systematic review of frequency, effect mediators, and mitigators. Journal of the American Medical Informatics Association, 19(1), 121–127. https://doi.org/10.1136/amiajnl-2011-000089
  3. Elish, M. C. (2019). Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society, 5, 40–60. https://doi.org/10.17351/ests2019.260 
  4. Caplan, R., Donovan, J., Hanson, L., & Matthews, J. (2018). Algorithmic accountability: A primer. Data & Society. https://datasociety.net/library/algorithmic-accountability-a-primer/ 
  5. Alon-Barkat, S., & Busuioc, M. (2023). Human–AI interactions in public sector decision making: “Automation bias” and “selective adherence” to algorithmic advice. Journal of Public Administration Research and Theory, 33(1), 153–169. https://doi.org/10.1093/jopart/muac007 
  6. Ruschemeier, H. (2024). Automation bias in public administration: An interdisciplinary perspective from law and psychology. Government Information Quarterly, 41(3), 101953. https://doi.org/10.1016/j.giq.2024.101953 
  7. Cheong, B. C. (2024). Transparency and accountability in AI systems: Safeguarding wellbeing in the age of algorithmic decision-making. Frontiers in Human Dynamics, 6, 1421273. https://doi.org/10.3389/fhumd.2024.1421273 
  8. Singhal, A., Neveditsin, N., Tanveer, H., & Mago, V. (2024). Toward fairness, accountability, transparency, and ethics in AI for social media and health care: Scoping review. JMIR Medical Informatics, 12, e50048. https://doi.org/10.2196/50048 
  9. Zheng, E. L., Jin, W., Hamarneh, G., & Lee, S. S. J. (2024). From human-in-the-loop to human-in-power. The American Journal of Bioethics, 24(9), 84–86. https://doi.org/10.1080/15265161.2024.2377139 
  10. Wiewiórowski, W. (2025). TechDispatch #2/2025: Human oversight of automated decision-making. European Data Protection Supervisor. https://www.edps.europa.eu/data-protection/our-work/publications/techdispatch/2025-09-23-techdispatch-22025-human-oversight-automated-making 
  11. Breidbach, C. F. (2024). Responsible algorithmic decision-making. Organizational Dynamics, 53(2), 101031. https://doi.org/10.1016/j.orgdyn.2024.101031 
  12. Besio, C., Fedtke, C., Grothe-Hammer, M., Karafillidis, A., & Pronzini, A. (2025). Algorithmic responsibility without accountability: Understanding data-intensive algorithms and decisions in organisations. Systems Research and Behavioral Science, 42(3), 739–755. https://doi.org/10.1002/sres.3028 
  13. Fleisher, W., Cibralic, B., Basl, J., Ricks, V., & Smith, M. N. (2025). Responsibility and accountability in an algorithmic society. Philosophy & Technology, 38(4), 144. https://doi.org/10.1007/s13347-025-00970-w 
  14. Enqvist, L. (2023). “Human oversight” in the EU artificial intelligence act: What, when and by whom. Law, Innovation and Technology, 15(2), 508–535. https://doi.org/10.1080/17579961.2023.2245683 
  15. Daniels, O. J., & Murdick, D. (2024). Enabling principles for AI governance. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/enabling-principles-for-ai-governance/ 
  16. Amnesty International. (2021). Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal (EUR 35/4686/2021). Amnesty International. https://www.amnesty.org/en/documents/eur35/4686/2021/en/
