
Cognitive Science Can Improve Decision Making


Aug 13, 2020


This article is part of a series on cutting edge research that has the potential to create positive social impact. While the research is inherently specific, we believe that the insights gleaned from each piece in this series are relevant to behavioral science practitioners in many different fields. At TDL, we are always looking for ways to translate science into impact. If you would like to chat with us about a potential collaboration, feel free to contact us.


As a socially conscious applied research firm, TDL is interested in connecting cutting-edge research with real-world applications. To further this interest, The Decision Lab reached out to Michał Klincewicz, an assistant professor in the Department of Cognitive Science and Artificial Intelligence at Tilburg University, to learn more about his work on using video games to explore moral cognition and stimulate moral insight, as well as his use of machine learning to spot conspiratorial online videos.

In his research, Professor Klincewicz combines insights from social epistemology, data science, computational linguistics, psychology, neuroscience, and philosophy to learn what can make individuals better decision-makers. Drawing on these disciplines, he blends empirical and theoretical thinking to create transformative technologies.

Full versions of some of Michał’s studies are available here:



Julian: How would you describe the focus of your research?

Michał: Recently, I’ve been focusing on two things. First, on video games in which players are faced with moral dilemmas. These simulations are a great way to stimulate moral insight and develop moral sensitivity, and they provide a versatile environment that can help us understand the psychological mechanisms behind complex decisions under uncertainty. Second, I’ve been developing machine learning algorithms that can spot conspiratorial YouTube content. A variety of insights from social epistemology, data science, and computational linguistics can be used to make these algorithms perform better. There is also a pressing need to counter the spread of misinformation.
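To make the second project concrete: one common family of approaches to classifying text content is a bag-of-words classifier. The sketch below is purely illustrative and is not Klincewicz's actual pipeline — the training snippets, labels, and the choice of a from-scratch multinomial naive Bayes model are invented for the example:

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a tiny multinomial naive Bayes text classifier.
    docs: list of (text, label) pairs."""
    word_counts = {}          # label -> Counter of word frequencies
    label_counts = Counter()  # label -> number of training documents
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label with the highest posterior log-probability."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihood with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy, made-up training snippets for illustration only
docs = [
    ("the secret elites control everything hidden truth", "conspiratorial"),
    ("they are hiding the truth from you wake up", "conspiratorial"),
    ("the study was peer reviewed and replicated", "mainstream"),
    ("researchers published their data and methods openly", "mainstream"),
]
model = train_nb(docs)
print(classify("the hidden elites are hiding the truth", model))  # prints: conspiratorial
```

Real systems for this task would work with far richer features — transcripts, metadata, and linguistic cues informed by the social-epistemology insights Klincewicz mentions — but the core idea of learning statistical regularities that separate one class of content from another is the same.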

Julian: What was your research question, broadly speaking?

Michał: I want to know what makes individuals better decision-makers. To find out, I use insights from across the cognitive sciences: psychology, neuroscience, linguistics, and philosophy. This is a mix of empirical and theoretical work, where it isn’t always clear which discipline may turn out to be relevant. I then use this knowledge to design technologies that can facilitate decision-making or improve individuals in the long term.

Julian: What insights did you expect to find from your research, and why?

Michał: Put plainly, I’m interested in finding ways to make people better. There has been a lot of discussion across disciplines about how the psychology we inherited from our ancestors has left us unprepared for rapid technological change and globalization. 

I look to identify which particular aspect of that inheritance is the main culprit, and then find a way either to limit its impact or to counteract it with something else. There were some relatively good candidates to look at first: tribalism, biases, negative emotions, and general intelligence.

I thought one of these or a combination of them would be a good place to start and then that most of the work would be in designing an appropriate intervention to deal with it. Artificial intelligence techniques seemed like a very promising avenue at that point, given how well they do in classifying things and in finding patterns where none are immediately apparent. 

Once the main problematic psychological mechanism is identified, we can use artificial intelligence techniques to identify when it is active and design an intervention to deal with it.

Julian: What is your general research process?

Michał: I work with a number of researchers across disciplines who have similar research agendas. In short, I like to work with people who aim to understand how technology shapes individuals and their environment, for better and for worse. Their work and community are an important source of inspiration and direction for my own work.

I have also been fortunate to have dozens of talented students over the years. Together, we have designed and carried out controlled experiments, designed and tested nudges, and developed methods for studying decision-making in video games. Overall, I would characterize my research process as both vertical and horizontal collaboration. It is a dialogue bound together by a common commitment to serious theory and to science that serves the public good.


Julian: What sorts of insights did you end up discovering?

Michał: Perhaps unsurprisingly, I found out that there is no single problematic psychological mechanism or even a set of such mechanisms that can be the primary focus for an improvement intervention. There are many individual differences responsible for the way in which people make decisions. Things like age, experience, knowledge, and so on all interact with each other to yield an idiosyncratic style of decision-making. 

However, the way I got there yielded a number of useful new methods that I aim to apply in future work, including the aforementioned video games and infrared imaging of the face. The work on conspiracy videos has given us a number of promising and scalable methods for classifying conspiratorial content. I think the work is extremely promising, and I hope it will soon result in a workable solution that can be deployed in the wild.

Julian: How do you think this is relevant to an applied setting?

Michał: I am currently beginning to supervise a PhD project based in Tilburg’s MindLabs, a collaborative initiative that investigates human minds and artificial minds, with the aim of developing a serious game that will attract, train, and retain key personnel in the logistics sector. The team from Tilburg University’s CSAI department will work with the Port of Rotterdam and other industry partners, but the project could have found a home in any setting where critical decisions are made by experts.

Julian: What do you think are some exciting directions for future research?

Michał: The most exciting direction for this research is to see it in the wild, outside of the academy, making a genuine difference in people’s lives. I believe that our work on decision-making and nudging can, with sufficient support, mitigate some of the damage caused by poor individual decisions. The work on classifiers for conspiratorial content has the potential to help control the spread of misinformation and counteract its negative impact on democracy and public health, as well as give us a new tool to combat radicalization online.

About the Authors

Michał Klincewicz

Tilburg University

Michał is a research scientist and assistant professor in the Department of Cognitive Science and Artificial Intelligence at Tilburg University. His research involves moral enhancement with AI, the use of video games to investigate moral cognition, and the temporal dimension of cognition, including conscious perception, experience, dreams, and memory. Michał is also strongly interested in the ethically problematic consequences of emerging technologies, such as autonomous weapon systems. He received his Ph.D. in philosophy in 2013 from the Graduate Center, City University of New York.

Julian Hazell

McGill University

Julian is passionate about understanding human behavior by analyzing the data behind the decisions that individuals make. He is also interested in communicating social science insights to the public, particularly at the intersection of behavioral science, microeconomics, and data science. Before joining The Decision Lab, he was an economics editor at Graphite Publications, a Montreal-based publication for creative and analytical thought. He has written about various economic topics ranging from carbon pricing to the impact of political institutions on economic performance. Julian graduated from McGill University with a Bachelor of Arts in Economics and Management.
