Algorithms for Simpler Decision-Making (1/2): The Case for Cognitive Prosthetics

Our cognitive functions are increasingly being outsourced to computational algorithms, simultaneously enhancing our decision-making capabilities and manipulating our behavior. Digital spaces, where information is more accessible and more affordable than ever before, provide insights and data for us to use at will. Nowadays, a simple Google search can take on the role of financial advisor, lawyer, or even doctor. But the information we find online is silently sorted, ordered, and presented by algorithms that sift through our digital data traces for the most relevant, most ‘likable’ media to feed us. In many ways, this unseen curation is a welcome convenience; sorting through and reasoning with seemingly endless online data and information is an unrealistic task for any human. Nevertheless, we forfeit a measure of cognitive autonomy each time we delegate information gathering and evaluation to algorithms, in turn restricting our thinking to what the algorithms deem appropriate.

Interacting with these algorithms allows us to make sense of and participate in the flows of data constantly constructing the ways we work and live. Algorithmic decision-making — that is, automated analytics deployed for the purpose of informing better, data-driven decisions — epitomizes this phenomenon. And while a world directed by algorithms presents countless opportunities for optimizing the human experience, it also calls for reflection on the human-algorithm relationship upon which we now rely.

As our views on data shift from empiricism to ideology, from datafication to dataism, it is easy to get caught in the fervor. Countless articles call for transparency, accountability, and privacy in the rollout of algorithmic practices. These are of course noble (and often necessary) ideals — for example, data watchdog committees and legislative safeguards can ensure responsible development and implementation. Yet many of these sweeping calls for oversight rest implicitly on unfounded assumptions about the socio-political impacts of algorithms. As a result, we wind up with a set of a priori hypotheses about how algorithms will affect society — and thus claims about the steps we must take to regulate them — that are premised on little more than conjecture.

For one, the conventional data dogma has distorted the concept of the algorithm into some kind of agential, all-knowing, impossible-to-comprehend being. This misconception suggests that an algorithm possesses authoritative power in itself, when in reality any influence the algorithm may project is the result of human design and legitimization (Beer, 2017). In other words, as the role of algorithms evolves toward a semi-mythical (perhaps deified) status from Silicon Valley to Wall Street, it is often forgotten that algorithms are a product of human effort, and subject to human control.

Secondly, mainstream depictions of algorithmic decision-making presuppose a specific model in which algorithms have been so deeply embedded into bureaucracy that negotiating with an algorithmic decision is impossible for the common individual. While this power structure is surely a future possibility — as seen in the algorithmic management of Uber drivers (cf. Lee, Kusbit, Metsky, & Dabbish, 2015) — the vast majority of present-day algorithmic decision-making operates in a consumer model. Here, the users of algorithmic tools are free to use (consume) or ignore the insights provided by algorithms, compelled by little more than preferences for convenience. In fact, this augmented decision-making process, where algorithms are consulted but ultimately remain passive, is pervasive in our everyday lives. Proprietary algorithms direct us through city streets, recommend films, and tell us whom to date, but our understanding of how people trust, utilize, and make sense of algorithmic advice is noticeably thin (Prahl & Van Swol, 2017). While this micro-level interaction between human and algorithm is perhaps more mundane than theorizing the implications of algocratic rule, it will ultimately determine whether the human role in our data-fuelled ecosystem will be augmented or automated.

Despite related philosophical advances like the theory of “the extended mind” (Clark & Chalmers, 1998), the long-term success or failure of augmented decision-making depends on practical, scientific solutions for effectively integrating human judgment and algorithmic logic. Whereas decision aids and support systems have been working to do so for decades, the evolution of big data and the discovery of “algorithm aversion” have called for revisions to our notions of hybrid decision-making. In conceptualizing algorithm aversion, Dietvorst, Simmons, & Massey (2015, 2016) found that human forecasters are reluctant to rely on superior but imperfect algorithms, often reverting instead to their gut feelings. Perhaps this isn’t so surprising: Meehl’s (1954) seminal work on the superiority of statistical over clinical (or intuitive) judgment, and the ensuing uproar, highlighted this same conflict some 60 years ago. While this stubborn confidence in intuition has been fodder for decision scientists ever since Meehl, the aversion toward statistical, computational decision-making has taken on new urgency now that algorithms are no longer a luxury, but a necessity. Just as a prosthetic leg allows an impaired individual to move comfortably through the physical environment, behavioral scientists must now come together to design cognitive prosthetics — algorithmic extensions of the human mind that allow individuals to navigate the boundless digital environment, enabling data-driven decision-making without forfeiting human autonomy. To inform the design of cognitive prosthetics, the root of algorithm aversion, the overarching obstacle to human-algorithm symbiosis, must be addressed.

Read part 2 here.