Why do we accept the first plausible AI solution and stop searching?
The Automation Bias, explained.

What is Automation Bias?
Automation bias describes our tendency to accept and favor answers from automated decision-making systems, such as large language models like ChatGPT, even when we encounter contradictory information. We often trust the output of automated systems without critically evaluating it, even when our own judgment suggests otherwise.
Where this bias occurs
Imagine that you are flying a plane (don’t worry, you have your pilot’s license!) with advanced autopilot and flight management systems. When it comes time to land, you program the systems for an automatic landing. As you begin your descent, you notice that the runway lights appear higher than usual, which can be a sign that the plane is too low—but since the navigation guide display shows that you’re perfectly lined up for landing, you ignore it. A few moments later, air traffic control calls in and warns you that the plane is too low for landing. Luckily, you have time to adjust and land safely, but that was a close one!
In this scenario, a key factor at play is automation bias. You trusted the navigation system more than your own judgment, causing you to ignore what you saw with your own eyes. As automated systems have become more advanced, automation bias has become more likely to occur, because we tend to perceive technology as more reliable than human judgment. Unfortunately, automation bias has contributed to multiple real-world crashes. It has also led to poor outcomes in healthcare, finance, and military defense.1
Although automated tools can help us complete tasks, scenarios like this demonstrate the importance of continuing to apply our critical thinking skills to evaluate their outputs rather than blindly accepting them.