“It’s Complicated”: An Ode to Our Relationship with AI
I remember my first few interactions with ChatGPT. My classmates had been buzzing about this supposed “life hack”; coincidentally, our professors were reiterating and refining the plagiarism policy at the start of each lecture.
As someone who considers themselves an avid writer, I took pride in never having even logged on… that is, until I got stuck coming up with an idea for a final project in a class that I had, admittedly, left on the back burner. After far too much contemplation (during which I probably could have completed the assignment on my own), I finally gave in to the temptation and created an account.
At first, it felt like magic. One by one, the words were generated before my very eyes. Within minutes, I had solidified my topic and begun typing away, but this wasn’t the end of the story. My first successful experience with ChatGPT was also my first big disappointment. I had asked it to help find some specific sources, but after scouring the web for the ones it listed, I was left empty-handed. The sources simply didn’t exist. While I’ll take some accountability for not understanding the platform’s limits, the experience left me questioning whether this was a tool I should adopt.
By now, you’ve likely heard all about the benefits and drawbacks of the rapid integration of artificial intelligence into all aspects of our lives. It’s efficient, but it replicates biases. It’s cost-effective, but it replaces human jobs. It seems to know everything, but it raises privacy concerns.
I could go on, but instead of weighing the pros and cons from a bird’s-eye view, I propose we turn our attention to the cold, hard facts of how AI is actually being used. Rather than tumbling down the artificial rabbit hole, we’ll explore some specific examples and potential concerns. This way, you can navigate the AI era with a healthy balance of confidence and caution.
Putting a Label on It
Before diving in, let’s ensure we’re all on the same page by clearing up some terminology. If you’re like me, you may have lumped buzzwords like AI, large language models (LLMs), machine learning, and algorithms together. However, for us to fully grasp what we’re working with, we first need to know how to tell them apart.
Without getting too caught up in the technicalities, here is a rundown on the broad definitions of some important terms:
Artificial Intelligence (AI): A rapidly growing subfield of computer science, AI refers to systems designed to analyze data, recognize patterns, and make decisions or predictions without human intervention. These systems rely on algorithms, statistical models, and often machine learning techniques to improve over time.
Algorithms: Step-by-step procedures or formulas used to perform computations, process data, and solve problems. In the context of AI, algorithms are the core mechanisms driving the learning and decision-making processes based on input data and predefined rules.12
Machine Learning (ML): Broadly, this refers to the training process used to create the algorithms upon which artificial intelligence is built. Computers make predictions and learn from labeled or unlabeled data fed to them by a human, without being explicitly programmed for the task (see the short sketch after these definitions).
Generative Artificial Intelligence (GenAI): A specific type of artificial intelligence that involves the creation of new content based on a prompt or instruction provided by the user. This is achieved through algorithms that attempt to mimic human intelligence and response patterns. Outputs include text, sound, code, images, or video, with the repertoire steadily expanding. For reference, ChatGPT, DALL-E, and Sora are all examples of GenAI tools.1
Large Language Models (LLMs): A type of GenAI concentrated on all things text and language processing. LLMs are trained on vast amounts of text and can respond with human-like language. When prompted well, they perform tasks ranging from writing code for developers to answering customers’ questions as chatbots.2 ChatGPT and Bard are both built on LLMs.
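To make the difference between an explicit algorithm and machine learning a little more concrete, here is a minimal sketch in Python. The spam-filter scenario, the scikit-learn library, and every name in it are illustrative assumptions rather than anything drawn from the tools above: the first function follows rules a human wrote out step by step, while the second has a model infer its own rules from labeled examples.

```python
# Illustrative sketch only: the spam scenario and all names here are made up for this article.
from sklearn.tree import DecisionTreeClassifier

# --- Algorithm: a step-by-step rule written explicitly by a human ---
def flag_spam_by_rule(num_links: int, has_greeting: bool) -> bool:
    # Predefined rule: many links and no personal greeting looks like spam.
    return num_links > 3 and not has_greeting

# --- Machine learning: rules inferred from labeled data ---
# Each row is [number of links, has a personal greeting (1/0)];
# each label records whether a human marked that email as spam (1) or not (0).
X = [[5, 0], [1, 1], [7, 0], [0, 1], [4, 0], [2, 1]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X, y)                     # the "training" step: no spam rule is written by hand

print(flag_spam_by_rule(6, False))  # True, because a human wrote that rule
print(model.predict([[6, 0]]))      # [1], because the model learned a similar rule from the examples
```

The library and the toy numbers are stand-ins; the point is simply that in the second case no one ever writes the spam rule itself. The training step derives it from the examples and labels a human supplied.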
Since the world of AI is expansive and complicated, these definitions don’t capture everything, but they should at least equip you to get through this article with a bit more context.
References
1. McKinsey & Company. (n.d.). What is generative AI? McKinsey & Company. https://www.mckinsey.com/featured-insights/mckinsey-explainers/what-is-generative-ai
2. Cloudflare. (n.d.). What is a large language model (LLM)? Cloudflare. https://www.cloudflare.com/learning/ai/what-is-large-language-model
3. Jarow, O. (2023, March 5). How the first chatbot predicted the dangers of AI more than 50 years ago. Vox. https://www.vox.com/future-perfect/23617185/ai-chatbots-eliza-chatgpt-bing-sydney-artificial-intelligence-history
4. Bureau of Internet Accessibility. (2022, February 18). Apple’s Siri changed accessibility—but no voice assistant is perfect. Bureau of Internet Accessibility. https://www.boia.org/blog/apples-siri-changed-accessibility-but-no-voice-assistant-is-perfect
5. Lee, A. (2024, January 29). Fraudsters use deepfake technology to turn CTV Ottawa story into scam video. CTV News. https://ottawa.ctvnews.ca/fraudsters-use-deepfake-technology-to-turn-ctv-ottawa-story-into-scam-video-1.6747363
6. Nielsen, J. (2023, July 16). AI improves employee productivity by 66%. Nielsen Norman Group. https://www.nngroup.com/articles/ai-tools-productivity-gains/
7. World Economic Forum. (2020, October 20). Recession and automation changes our future of work, but there are jobs coming, report says. World Economic Forum. https://www.weforum.org/press/2020/10/recession-and-automation-changes-our-future-of-work-but-there-are-jobs-coming-report-says-52c5162fce/
8. Cardon, P. (2024, January 23). New study finds AI makes employers value soft skills more. Fast Company. https://www.fastcompany.com/91012874/new-study-finds-ai-makes-employers-value-soft-skills-more
9. Vartan, S. (2019, October 24). Racial bias found in a major health care risk algorithm. Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
10. Bryant, K. (2023, December 13). How AI is impacting society and shaping the future. Forbes. https://www.forbes.com/sites/kalinabryant/2023/12/13/how-ai-is-impacting-society-and-shaping-the-future/
11. Lewis, T. (2022, October 31). One of the biggest problems in biology has finally been solved. Scientific American. https://www.scientificamerican.com/article/one-of-the-biggest-problems-in-biology-has-finally-been-solved/
12. Network of the National Library of Medicine. (n.d.). Algorithm. NNLM. https://www.nnlm.gov/guides/data-glossary/algorithm
About the Author
Charlotte Sparkes
Charlotte Sparkes is a full-time psychology and behavioural science student at McGill University. Interning at The Decision Lab as a Summer Content Associate, she is passionate about all things cognition. She is especially interested in the explicit and implicit factors required for decision-making. Through her work as a research assistant, Charlotte has gained practical experience in the field of social psychology, specifically testing participants on their empathic accuracy. In addition, she is the current president of the MPSA (McGill Psychology Students’ Association). In this role, she has worked on projects alongside professors to make research opportunities accessible to all students.