In the aftermath of the cognitive revolution, the cognitive viewpoint overtook behaviorism, becoming the dominant lens through which psychology was studied. Furthermore, cognitive science established itself as a legitimate field of academic study in its own right. There now exist multiple departments, journals, professorships, and degrees specifically tailored toward understanding and advancing cognitive science.
Key findings in the field of cognitive science have been used to help us understand decision-making. Since decision-making is often treated as a form of problem-solving, it makes sense that examining the mind from a computational, problem-focused standpoint is advantageous. Cognitive science has delivered on this promise: the majority of evidence about limited short-term memory, divided attention, reliance on heuristics, and distorted perception of risk comes from studying the brain through this cognitive lens.
Beyond decision-making, cognitive science has broadened our perspectives on other social, linguistic, and mental phenomena. It has been used to understand how the brain's functional systems operate, providing insights into processes such as perception, learning, and thought. Cognitive science has also proven useful in understanding mental disorders and dysfunctions, offering an integrated framework for multiple psychiatric phenomena. In the realm of linguistics, cognitively informed theories such as generative grammar have revolutionized the academic study of natural language. Finally, cognitive science has allowed people to develop better understandings of social psychological processes such as persuasion, coercion, and negotiation.
While these discoveries are important, they are overshadowed by one looming development: artificial intelligence. The prospect of a thinking machine has long worried, excited, and intrigued people, but only recent developments in cognitive science have made it a reality. Using tools such as neural networks, cognitive scientists have come to view the mind as a computational network and have drastically increased our artificial intelligence capabilities.
While the prospect of a hyper-intelligent, HAL 9000-esque computer remains a distant fantasy, artificial intelligence has already revolutionized our day-to-day lives in many subtle ways, from curating your daily Spotify playlists to helping radiologists detect cancer in patients. Looking toward the future, advances in artificial intelligence could be a key factor in how history unfolds. In such an innovative period, our growing capabilities, understanding, and ethical choices regarding this powerful tool could prove decisive in either healing or worsening societal ills such as climate change and global inequity.