
Algorithms for Simpler Decision-Making (1/2): The Case for Cognitive Prosthetics


Oct 23, 2018

Our cognitive functions are increasingly being outsourced to computational algorithms, simultaneously enhancing our decision-making capabilities and manipulating our behavior. Digital spaces, where information is more accessible and more affordable than ever before, provide insights and data for us to use at will. Nowadays, a simple Google search can take on the role of financial advisor, lawyer, or even doctor. But the information we find online is silently sorted, ordered, and presented by algorithms that delve through our digital data traces for the most relevant, most ‘likable’ media to feed us. In many ways, this unseen curation is a welcome convenience; sifting through and reasoning with seemingly endless online information is an unrealistic task for any human. Nevertheless, we forfeit a measure of cognitive autonomy each time we delegate information gathering and evaluation to algorithms, restricting our thinking to what the algorithms deem appropriate.

Interacting with these algorithms allows us to make sense of and participate in the flows of data that constantly construct the ways we work and live. Algorithmic decision-making (that is, automated analytics deployed to inform better, data-driven decisions) epitomizes this phenomenon. And while a world directed by algorithms presents countless opportunities for optimizing the human experience, it also calls for reflection on the human-algorithm relationship upon which we now rely.

As our views on data shift from empiricism to ideology, from datafication to dataism, it is easy to get caught up in the fervor. Countless articles call for transparency, accountability, and privacy in the rollout of algorithmic practices. These are, of course, noble (and often necessary) ideals; data watchdog committees and legislative safeguards, for example, can ensure responsible development and implementation. Yet many of these sweeping calls for oversight rest on untested assumptions about the socio-political impacts of algorithms. In turn, we wind up with a priori hypotheses about how algorithms will affect society, and thus claims about the steps we must take to regulate them, that have little empirical grounding.

For one, the conventional data dogma has distorted the concept of the algorithm into some kind of agential, all-knowing, impossible-to-comprehend being. This misconception suggests that an algorithm possesses authoritative power in itself, when in reality any influence an algorithm projects is the result of human design and legitimization (Beer, 2017). In other words, as algorithms take on a semi-mythical (perhaps deified) status from Silicon Valley to Wall Street, it is often forgotten that they are a product of human effort, and subject to human control.


Secondly, mainstream depictions of algorithmic decision-making presuppose a specific model in which algorithms are so deeply embedded into bureaucracy that negotiating with an algorithmic decision is impossible for the ordinary individual. While this power structure is surely a future possibility, as seen in the algorithmic management of Uber drivers (cf. Lee, Kusbit, Metsky, & Dabbish, 2015), the vast majority of present-day algorithmic decision-making operates in a consumer model. Here, the users of algorithmic tools are free to use (consume) or ignore the insights algorithms provide, compelled by little more than a preference for convenience. In fact, this augmented decision-making process, in which algorithms are consulted but ultimately remain passive, is pervasive in everyday life. Proprietary algorithms direct us through city streets, recommend films, and tell us who to date, yet our understanding of how people trust, utilize, and make sense of algorithmic advice is noticeably thin (Prahl & Van Swol, 2017). While this micro-level interaction between human and algorithm is more mundane than theorizing about algocratic rule, it will ultimately determine whether the human role in our data-fuelled ecosystem is augmented or automated.
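How much weight people actually give algorithmic advice can be measured. The judge-advisor literature that Prahl and Van Swol (2017) build on commonly quantifies advice-taking with a “weight of advice” ratio: how far a final judgment moves from a person’s initial estimate toward the advice received, where 0 means the advice was ignored and 1 means it was fully adopted. A minimal sketch in Python, with purely hypothetical commute numbers:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of advice (WOA): how far a judge moves from their initial
    estimate toward the advisor's estimate (0 = ignored, 1 = fully adopted)."""
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Hypothetical scenario: a driver guesses a 40-minute commute, a routing
# algorithm suggests 60 minutes, and the driver settles on 45 minutes.
print(weight_of_advice(initial=40, advice=60, final=45))  # 0.25: advice mostly discounted
```

In this paradigm, systematically lower WOA values for automated advisers than for human ones is precisely what it means for algorithmic advice to be “discounted.”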

Despite related philosophical advances like the theory of “the extended mind” (Clark & Chalmers, 1998), the long-term success or failure of augmented decision-making depends on practical, scientific solutions for integrating human judgment and algorithmic logic. Decision aids and support systems have been working toward this integration for decades, but the evolution of big data and the discovery of “algorithm aversion” call for revisions to our notions of hybrid decision-making. In conceptualizing algorithm aversion, Dietvorst, Simmons, and Massey (2015, 2016) found that human forecasters are reluctant to rely on superior but imperfect algorithms, often reverting to gut feelings instead. Perhaps this isn’t so surprising: Meehl’s (1954) seminal work on the superiority of statistical over clinical (or intuitive) judgment, and the uproar it provoked, highlighted the same conflict some 60 years ago.

While this stubborn confidence in intuition has been fodder for decision scientists ever since Meehl, the aversion toward statistical, computational decision-making demands fresh attention now that algorithms are no longer a luxury but a necessity. Just as a prosthetic leg allows an impaired individual to move comfortably through the physical environment, a cognitive prosthetic, an algorithmic extension of the human mind, would allow individuals to navigate the boundless digital environment, enabling data-driven decision-making without forfeiting human autonomy. Behavioral scientists must now come together to design such prosthetics, and that effort must begin by addressing the root of algorithm aversion, the overarching obstacle to human-algorithm symbiosis.
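Dietvorst, Simmons, and Massey’s (2016) follow-up suggests one concrete design principle for such prosthetics: people become willing to use an imperfect algorithm if they are allowed to modify its output, even slightly, and a tightly capped adjustment costs little accuracy. The toy simulation below sketches that intuition; the noise levels and adjustment band are illustrative assumptions, not parameters from the paper.

```python
import random

random.seed(42)

def simulate(n_trials: int = 10_000, adjust_band: float = 0.10) -> dict:
    """Compare three forecasters of an unknown quantity: a noisy human,
    an imperfect but better algorithm, and a 'modifiable' algorithm whose
    output the human may nudge toward their own judgment within a small band.
    Returns mean absolute error for each. All parameters are illustrative."""
    errors = {"human": 0.0, "algorithm": 0.0, "modifiable": 0.0}
    for _ in range(n_trials):
        truth = random.uniform(50, 150)
        human = truth + random.gauss(0, 20)   # noisy intuitive judgment
        algo = truth + random.gauss(0, 10)    # imperfect, but half the noise
        # The human may shift the algorithm's output toward their own
        # estimate, capped at +/- adjust_band of the algorithmic forecast.
        band = adjust_band * abs(algo)
        nudge = max(-band, min(band, human - algo))
        modifiable = algo + nudge
        errors["human"] += abs(human - truth)
        errors["algorithm"] += abs(algo - truth)
        errors["modifiable"] += abs(modifiable - truth)
    return {name: total / n_trials for name, total in errors.items()}

print(simulate())
# Typical result: the human forecast carries the largest error, the pure
# algorithm the smallest, and the lightly modified algorithm lands within
# a few percent of the pure algorithm's accuracy.
```

If granting users a small, bounded veto is what restores their willingness to consult the algorithm at all, the net effect on decision quality is strongly positive: the modified forecast remains far more accurate than unaided intuition.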

Read part 2 here.

References

Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147

Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2016). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643

Gigerenzer, G. (2001). Decision making: Nonrational theories. International Encyclopedia of the Social and Behavioral Sciences, 5, 3304–3309. https://doi.org/10.1016/B978-0-08-097086-8.26017-0

Hafenbrädl, S., Waeger, D., Marewski, J. N., & Gigerenzer, G. (2016). Applied decision making with fast-and-frugal heuristics. Journal of Applied Research in Memory and Cognition, 5, 215–231. https://doi.org/10.1016/j.jarmac.2016.04.011

Lee, M. K., Kusbit, D., Metsky, E., & Dabbish, L. (2015). Working with machines: The impact of algorithmic and data-driven management on human workers. Proceedings of the ACM CHI ’15 Conference on Human Factors in Computing Systems, 1603–1612. https://doi.org/10.1145/2702123.2702548

Meehl, P. E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. University of Minnesota Press.

Phillips, N. D., Neth, H., Woike, J. K., & Gaissmaier, W. (2017). FFTrees: A toolbox to create, visualize, and evaluate fast-and-frugal decision trees. Judgment and Decision Making, 12(4), 344–368. https://journal.sjdm.org/17/17217/jdm17217.pdf

Prahl, A., & Van Swol, L. (2017). Understanding algorithm aversion: When is advice from automation discounted? Journal of Forecasting, 36, 691–702. https://doi.org/10.1002/for.2464

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138.

About the Author

Jason Burton

Birkbeck, University of London

Jason is a PhD researcher at the Centre for Cognition, Computation & Modelling (CCCM) at Birkbeck, University of London. Before joining Birkbeck, he earned an MSc in Organisational Psychiatry & Psychology from King’s College London and held a research position at Copenhagen Business School’s Department of Digitalization. His research seeks to further our understanding of how cognitive processes intersect with the post-truth environment, ultimately revolving around the topic of human rationality. Outside of academia, Jason works with HATCH Analytics as a research psychologist, applying behavioural insights in the workplace.
