“It’s Complicated”: An Ode to Our Relationship with AI
I remember my first few interactions with ChatGPT. My classmates had been buzzing about this supposed “life hack”; coincidentally, our professors had begun reiterating and refining the plagiarism policy at the start of each lecture.
As someone who considers themselves an avid writer, it was a point of pride that I had never even logged on… that was until I got stuck coming up with an idea for a final project in a class that I had, admittedly, left on the back burner. After far too much contemplation (during which I probably could have completed the assignment on my own), I finally gave in to the temptation and created an account.
At first, it felt like magic. One by one, the words were generated before my very eyes. Within minutes, I had solidified my topic and begun typing away. But this wasn’t the end of the story. My first successful experience with ChatGPT was also my first big disappointment. I had asked it to help find some specific sources, but after scouring the web with the responses it generated, I was left empty-handed. The sources simply didn’t exist. While I’ll take some accountability for not understanding the platform’s limits, the experience left me questioning whether this was a tool I should adopt.
By now, you’ve likely heard all about the various benefits and disadvantages of the rapid integration of artificial intelligence into all aspects of our lives. It’s efficient, but it replicates biases. It’s cost-effective, but it replaces human jobs. It seems to know everything, but it raises privacy concerns.
I could go on and on—but instead of looking at the pros and cons from a bird’s-eye view, I propose we turn our attention to the cold, hard facts of how AI is being used. Rather than tumbling down the artificial rabbit hole, we’ll explore some specific examples and potential concerns. This way, you can navigate the AI era with a healthy balance of confidence and caution.
Putting a Label on It
Before diving in, let’s ensure we’re all on the same page by clearing up some terminology. If you’re like me, you may have lumped buzzwords like AI, large language models (LLMs), machine learning, and algorithms together. However, for us to fully grasp what we’re working with, we first need to know how to tell them apart.
Without getting too caught up in the technicalities, here is a rundown on the broad definitions of some important terms:
Artificial Intelligence (AI): A subfield of computer science concerned with building systems that analyze data, recognize patterns, and make decisions or predictions without human intervention. These systems rely on algorithms, statistical models, and often machine learning techniques to improve over time.
Algorithms: Step-by-step procedures or formulas used to perform computations, process data, and solve problems. In the context of AI, algorithms are the core mechanisms driving the learning and decision-making processes based on input data and predefined rules.12
Machine Learning (ML): Broadly, this refers to the training process behind the models upon which artificial intelligence is built. Rather than being explicitly programmed, the computer makes predictions and improves by learning from labeled or unlabeled data fed to it by humans.
Generative artificial intelligence (GenAI): A specific type of artificial intelligence that involves the creation of new content based on a prompt or instruction provided by the user. This is achieved through algorithms that attempt to mimic human intelligence and response patterns. Outputs include text, sound, code, images, or video—with the repertoire steadily expanding. For reference, ChatGPT, DALL-E, and Sora are all examples of GenAI tools.1
Large language models (LLMs): A type of GenAI concentrated on all things text and language processing. LLMs are trained on vast amounts of text data and can return human-like responses. When prompted correctly, they perform various tasks ranging from writing code for developers to answering customers’ questions as chatbots.2 ChatGPT and Bard would both be classified as LLMs.
Since the world of AI is expansive and complicated, these definitions don’t capture everything—but should at least equip you to get through this article with a little bit more context.
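To make the machine learning definition above a little more concrete, here is a toy sketch in Python (purely illustrative, and not drawn from any real AI system): instead of a programmer hard-coding a rule, the program infers one from labeled examples and then applies it to new input.

```python
# Toy illustration of "learning from data": no rule is hard-coded.
# The program infers a slope from (x, y) examples via least squares
# (a deliberately tiny stand-in for real model training).

def fit_line(points):
    """Learn a slope for y = slope * x from (x, y) example pairs."""
    numerator = sum(x * y for x, y in points)
    denominator = sum(x * x for x, y in points)
    return numerator / denominator

# Labeled training data: each input x is paired with its "correct" output y.
examples = [(1, 2.1), (2, 3.9), (3, 6.0), (4, 8.1)]

slope = fit_line(examples)   # the learned "rule" (roughly 2)
prediction = slope * 5       # applying that rule to an unseen input
```

Real systems use vastly richer models and far more data, but the principle is the same: the behavior comes from the examples, not from explicit instructions.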
One More Thing: Artificial Intelligence isn’t New
In the spirit of honesty, I think I will just come out and say it: humans are a tad narcissistic. We’ve always been a bit obsessed with our own reflection—what makes us tick, why we do what we do—which explains why fields like behavioral science and psychology exist in the first place. So, of course, when these new tools can suddenly do the things we uniquely do, like writing essays and producing “artwork” (a term I use lightly), it’s no wonder we become fascinated. But this knee-jerk reaction is somewhat unwarranted since AI isn’t new—well, it isn’t that new.
When we think about it, some form of artificial intelligence has been publicly accessible for decades now. One of the first chatbots, ELIZA, was developed by Joseph Weizenbaum in the 1960s at the Massachusetts Institute of Technology. Ahead of its time, the program was first created to study human-computer interactions. To Weizenbaum’s surprise, many users attributed human qualities to ELIZA (a phenomenon later dubbed the ELIZA effect), and it was even used as a complementary tool in therapy.3
My chat with ELIZA
In more recent history, virtual assistants like Siri and Alexa have been by our side (or in our pockets) since the 2010s, answering our queries and sending text messages, all with the touch of a button or by voice command. These tools have certainly helped to increase accessibility—especially for users who have difficulty using the touchscreen.4
But it’s not all sunshine and robots: not all AI is helpful. In fact, it can be quite harmful. Deepfakes, for example, have been creating major security issues and raising ethical concerns that lawmakers haven’t quite caught up to. Recently, a story aired on CTV News Ottawa about a couple scammed into investing money in a cryptocurrency under the advice of a “financial advisor.” This segment, which served to warn other Canadians, was later repurposed to tell a different tale. The new, doctored video showed the same trusted anchors beaming about a “program that helps Canadians achieve financial independence.” It even featured the same couple, but rather than reporting on the scam, they had both been “able to retire.”5
Still, AI remains at the forefront of innovation today, meaning we need to figure out how to get along—especially when it comes to work.
AI as a Coworker
With the rapid pace at which these technologies evolve, one would expect us to exercise some caution. However, many industry leaders are diving into the deep end of AI headfirst… despite the virtual sharks.
That’s not to say they’re wrong for doing so. Businesses, big and small, are profiting from AI integration. Customer service, writing routine documents, and software projects can all be facilitated with AI assistance. Not to mention, on the creative end of things, marketing campaigns, content creation, and social media copy are easier to churn out than ever.6
But this means nothing if we don’t have the numbers to back it up. A few studies have already assessed the productivity boost from AI integration. Let me put it this way: between 2007 and 2019, the US saw average labor productivity growth of 1.4% per year—but when GenAI was brought into the picture, output rose by 66% across three studies.6
Not only that, but it takes just a quick search to see the good AI is doing outside of the office, for example, in the laboratory. Google’s DeepMind has helped crack a longstanding problem in the field of biology. AlphaFold, a predictive program, maps the molecular structure of almost every known protein—a class of biomolecules vital to every function of the body. In the human body, “structure determines function,” meaning this breakthrough has allowed scientists to understand the specific tasks (and possible dysfunctions) of proteins.11
So yes, AI can be leveraged for good, helping us with routine tasks as well as major discoveries and innovations. However, is it too good… should we be worried?
…But Not a Candidate
The World Economic Forum estimates that around 85 million jobs will be replaced by AI by 2025.7 Yes, this may feel like a very discouraging (and disturbing) number, but stay with me here for a second. You possess something unique that sets humans apart from the machines: emotional intelligence.
People are still skeptical about AI, and some are reluctant to let go of the traditional ways of the workplace. This is (partly) because soft skills matter. Humans are uniquely emotionally intelligent and empathic—abilities that will take you far both in life and in the business world. Research from Peter Cardon and colleagues found that, in a survey of 692 business practitioners, “virtue,” “integrity,” and “strong moral character” will be among the most vital skills in an AI-integrated workplace.8 The researchers go on to highlight the importance of fostering trust and open communication—noting the prevalent ethical concerns surrounding AI.
What do we mean by ethical concerns? Well, while AI is only getting more sophisticated, it is not a silver bullet. Models are still plagued by biased inputs, which, in turn, create biased outputs. In one example, a US health care risk-prediction algorithm was found to produce a strong and incredibly harmful racial bias. The study found that Black patients often received lower risk scores than white patients with similar conditions, leading to fewer Black patients being flagged for high-risk care management.9
While AI’s advancements are undeniably impressive, it’s our uniquely human qualities—our empathy, ethical judgment, and understanding of social issues—that will ensure we work not just alongside AI but above it, guiding its development to serve everyone's best interests.
How to Navigate Our Relationship with AI
Yes, it’s overwhelming, and your gut reaction may be telling you to be an ostrich and stick your head in the metaphorical sand to avoid the AI that is seemingly everywhere. But this, unfortunately, isn’t the most productive response.
Commit to learning. It can be tempting to avoid AI altogether, but it will steadily become a part of your daily life; chances are, it already has. Do you remember teaching your parents to use FaceTime or Zoom during the pandemic? Do you also remember how annoyed you were when they were resistant? My point exactly.
Beyond being a tool to help you with tedious tasks, AI is shaping our world as it evolves, and we can’t deny that. It’s used as a tool to recommend media in applications like Spotify and Netflix, deliver personalized ads, and even help brands solidify their image.10
Unlike my naïve self of four months ago, I’ve learned how to use ChatGPT in my day-to-day life. I recognize it as a support tool, one that can point out gaps in my research or translate densely technical language into something a little more digestible. But, as we’ve seen, it’s important to approach it with caution. Much like how we tend to rely too heavily on Google for quick answers to our most pressing questions, our increasing use of ChatGPT can give rise to a version of the aptly named Google effect, where we don’t retain information that is readily available through a quick search.
When it comes to AI, I’m here for the nuance. It’s impossible to boil the entire culture that has grown around increased access to AI down to either good or bad; that’s far too reductive. Instead of thinking in absolutes, I encourage you to evaluate artificial intelligence and its applications with that same nuance. Innovation doesn’t mean perfection, nor does it need to incite existential dread. Ultimately, embracing the complexity of AI allows us to harness its potential responsibly while acknowledging the challenges it presents, leading to a more informed and balanced perspective.
References
- McKinsey & Company. (n.d.). What is generative AI? McKinsey & Company. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
- Cloudflare. (n.d.). What is a large language model (LLM)? Cloudflare. https://www.cloudflare.com/learning/ai/what-is-large-language-model
- Jarow, O. (2023, March 5). How the first chatbot predicted the dangers of AI more than 50 years ago. Vox. https://www.vox.com/future-perfect/23617185/ai-chatbots-eliza-chatgpt-bing-sydney-artificial-intelligence-history
- Bureau of Internet Accessibility. (2022, February 18). Apple’s Siri changed accessibility—but no voice assistant is perfect. Bureau of Internet Accessibility. https://www.boia.org/blog/apples-siri-changed-accessibility-but-no-voice-assistant-is-perfect
- Lee, A. (2024, January 29). Fraudsters use deepfake technology to turn CTV Ottawa story into scam video. CTV News. https://ottawa.ctvnews.ca/fraudsters-use-deepfake-technology-to-turn-ctv-ottawa-story-into-scam-video-1.6747363
- Nielsen, J. (2023, July 16). AI improves employee productivity by 66%. Nielsen Norman Group. https://www.nngroup.com/articles/ai-tools-productivity-gains/
- World Economic Forum. (2020, October 20). Recession and automation changes our future of work, but there are jobs coming, report says. World Economic Forum. https://www.weforum.org/press/2020/10/recession-and-automation-changes-our-future-of-work-but-there-are-jobs-coming-report-says-52c5162fce/
- Cardon, P. (2024, January 23). New study finds AI makes employers value soft skills more. Fast Company. https://www.fastcompany.com/91012874/new-study-finds-ai-makes-employers-value-soft-skills-more
- Vartan, S. (2019, October 24). Racial bias found in a major health care risk algorithm. Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
- Bryant, K. (2023, December 13). How AI is impacting society and shaping the future. Forbes. https://www.forbes.com/sites/kalinabryant/2023/12/13/how-ai-is-impacting-society-and-shaping-the-future/
- Lewis, T. (2022, October 31). One of the biggest problems in biology has finally been solved. Scientific American. https://www.scientificamerican.com/article/one-of-the-biggest-problems-in-biology-has-finally-been-solved/
- Network of the National Library of Medicine. (n.d.). Algorithm. NNLM. https://www.nnlm.gov/guides/data-glossary/algorithm
About the Author
Charlotte Sparkes
Charlotte Sparkes is a full-time psychology and behavioural science student at McGill University. Interning at The Decision Lab as a Summer Content Associate, she is passionate about all things cognition. She is especially interested in the explicit and implicit factors required for decision-making. Through her work as a research assistant, Charlotte has gained practical experience in the field of social psychology, specifically testing participants on their empathic accuracy. In addition, she is the current president of the MPSA (McGill Psychology Students’ Association). In this role, she has worked on projects alongside professors to make research opportunities accessible to all students.