Why do we feel so confident using generative AI while our AI literacy lags behind?

The AI Literacy Gap, explained.

What is the AI literacy gap?

AI literacy is the combination of knowledge, skills, and attitudes that enables people to understand and work with AI systems in an informed way. Researchers describe AI literacy as a set of competencies that help people explain in simple language what an AI system is doing, anticipate where it might fail, and engage with its ethical and social impacts. The AI literacy gap appears when people feel at ease using tools like large language models while lacking the concepts needed to judge when these tools are helpful, when they are risky, and how to use them responsibly.

Where this bias occurs

Picture a normal week at work. You open a chat window with a large language model to help you respond to a client, summarize a report, or outline a presentation. The model responds with clean, persuasive text. You skim it, change a few phrases, and send it along. Hours later, a colleague notices that a regulation is misquoted or that a reference cannot be found anywhere outside the AI response.

Scenes like this emerge in classrooms, clinics, and public agencies. A student uses an AI assistant to generate study notes and, never realizing that it misstated a key concept, walks into the exam confident. A manager relies on an AI summary of survey responses, and an important minority concern disappears in the aggregation. Many users have strong digital skills and extensive experience with search engines, messaging apps, and productivity tools, yet still struggle to distinguish between strong and weak AI outputs. When the interface feels familiar and the writing sounds polished, it is easy to forget that the system is generating predictions rather than retrieving facts.

The gap also appears in the opposite direction. Some people avoid generative AI entirely because they feel overwhelmed, fear making a mistake, or worry that using AI breaks an unstated rule. Colleagues describe how they save hours with AI support, while less confident users stay on the sidelines. From the outside, this looks like a choice; in practice, it often reflects unequal access to clear explanations, guided practice, and psychological safety around experimenting with AI.

Sources

  1. Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727
  2. Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041. https://doi.org/10.1016/j.caeai.2021.100041
  3. Chiu, T. K. F. (2024). What are artificial intelligence literacy and competency? A comprehensive framework to support them. Computers and Education: Artificial Intelligence, 5, 100120.
  4. Zhou, X., & Schofield, L. (2024). Developing a conceptual framework for Artificial Intelligence literacy in higher education. Journal of Learning Development in Higher Education, 31, 1–20. https://doi.org/10.47408/jldhe.vi31.1354
  5. Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., Kosch, T., Shen, C., & Welsch, R. (2026). AI makes you smarter but none the wiser: The disconnect between performance and metacognition. Computers in Human Behavior, 175, 108779. https://doi.org/10.1016/j.chb.2025.108779
  6. Almatrafi, O., et al. (2024). A systematic review of AI literacy conceptualization, implementation, and assessment. Computers and Education: Artificial Intelligence, 5, 100135.
  7. European Commission & OECD. (2025). Empowering learners for the age of AI: An AILit framework for primary and secondary education.
  8. The Decision Lab. (2025). Automation bias. The Decision Lab. https://thedecisionlab.com/biases/automation-bias
  9. Callegari, A., & Kingery, D. (2025). Why building trust and literacy in AI is essential for digital safety. World Economic Forum.
  10. Stanford Teaching Commons. (2023). Understanding AI literacy. Stanford University.
  11. Digital Learning Institute. (2025). Professional certificate in applied AI literacy. Digital Learning Institute.
  12. The Decision Lab. (2024). Organizational barriers to AI adoption. The Decision Lab. https://thedecisionlab.com/reference-guide/management/organizational-barriers-to-ai-adoption
  13. Vered, M., Livni, T., Howe, P. D. L., Miller, T., & Sonenberg, L. (2023). The effects of explanations on automation bias. Artificial Intelligence, 322, 103952. https://doi.org/10.1016/j.artint.2023.103952
  14. University of Toronto. (2025). Generative AI Teaching Project. University of Toronto Libraries.
  15. Siddharth, S., Prince, B., Harsh, A., & Ramachandran, S. (2025). The World of AI: A novel approach to AI literacy for first-year engineering students. arXiv preprint. https://arxiv.org/abs/2506.08041
