What DALL-E gave us when we asked for a stick figure saying, "I love the TDL newsletter!"
👋 Hi there,
On rare occasions in this life, you come across a work of art that truly moves you; that resonates with you on a visceral, primordial level. A work of art that connects with something buried so deeply within your psyche, it feels as though it was made for you. In beholding it, you feel outside of yourself, as if you’ve transcended the limitations of the physical realm and made contact with something mystical, archetypal, true. A work of art that changes you.
For me, that art is this oil painting of a sullen humanoid cabbage drinking whisky on the rocks.
This modern masterpiece was bestowed upon us by DALL-E, the image-generating AI. Over the past few weeks, we’ve made it our primary focus to feed DALL-E the strangest prompts we can imagine. The results? Surprisingly good.
Make no mistake, dear reader: DALL-E isn’t replacing human artists anytime soon. It presented us with its fair share of weird, uncanny-valley, glitch-in-the-matrix-y images, like this one (which was supposed to be “sweet potato chasing an avocado with a car”):
But other images (like my friend, the cabbage) not only accurately captured our nonsensical prompts, but rendered them so well that a human easily could have been responsible.
In short, DALL-E has got us reflecting on just how far AI has come — and on how far it still has to go. We’re now at a point where many bots have gotten good enough at what they do that it’s possible to forget an algorithm was responsible. But what is missing when we let AI make our decisions for us? And how could its ubiquity reshape our behavior? In today’s newsletter, we’re sharing some perspectives on these questions.
Until next time,
Katie and the 100% human team @ TDL
📧 Looking for something to read while we await the rise of our robot overlords? Sign up for the TDL newsletter here.
Today’s topics 👀
🧠 Deep Dive: Where AI Falls Short
🤖 Field Notes: Meet Your Robo-Therapist
🫶 Viewpoints: Helping AI Help Us
🧠 DEEP DIVE: Where AI Falls Short
AI is helpful, but only if it works perfectly. Machine learning can only make our lives simpler if it can accurately predict our behavior, and that’s not always so easy. One study on smart thermostats found that people actually spent more time fiddling with the temperature when the AI’s predictions were just slightly off.
Garbage in, garbage out. We live in a biased world, which means many AI systems are trained on biased datasets. This has led to situations where an AI’s decisions simply replicate the very biases it was supposed to eliminate, like gender bias in hiring.
Technology can rewire our brains. Our brains are constantly adapting to our environments, and evidence shows that technology is already changing the way we think. One study found that regular use of GPS weakens spatial memory. As AI becomes more widespread, it remains to be seen how its presence will sculpt our minds.
AI may weaken our ability to trust our own judgment. People tend to accept recommendations proposed by AI, often trusting algorithms more than other humans, and sometimes more than themselves. Over time, this deference can erode our confidence in our own judgment, creating undue stress in decision-making situations where there’s no algorithm to lean on.
🤖 FIELD NOTES: Meet Your Robo-Therapist
AI has huge potential in the field of mental health care, where traditional treatment systems are struggling to keep pace with increasing demand. In the midst of this crisis, digital mental health platforms have become invaluable tools to expand access and fight stigma.
A few years back, TDL was part of a mental health consortium at the forefront of digital mental health care. We worked alongside leading mental health experts to build an AI chatbot named Hikai, designed to help boost employees’ well-being at work. Find the case study here.
🫶 VIEWPOINTS: Helping AI Help Us
None of the above problems means that we should bail on AI altogether. Algorithms can still help us make better decisions; they just need to be used thoughtfully and carefully. Below are some perspectives on what needs to happen for humans and AI to peacefully co-exist.
Use AI to complement humans, not replace them. Algorithms can do a lot of stuff better than we can, but they lack our capacity for holistic, moral reasoning. AI should be deployed in ways that maximize its strengths, while still leaving humans in charge of the things we do best.
Follow behavioral frameworks when building AI interfaces. How can algorithms be leveraged in a way that doesn’t undermine motivation, impede learning, or just generally piss people off? Behavioral science can help. In the workplace, for instance, instead of abruptly introducing AI and assuming that it will improve outcomes, leaders should structure these programs around the pillars of self-determination theory to ensure that new algorithms will support employees’ basic psychological needs.
Regulate risky algorithms. As the writer and mathematician Cathy O’Neil explained on our podcast, blind faith in algorithms can erase important nuances and perpetuate inequalities. O’Neil argues that because of these risks, high-stakes algorithms (like those used to decide who gets a mortgage) should have to be approved by a dedicated agency, just as food and drugs need to be approved by the FDA.
Statistics from Oberlo on the growth of AI in the workplace.
AI and Confirmation Bias
When an AI makes decisions that are in line with what we already believe, we’re less likely to scrutinize them. Learn more about confirmation bias on the TDL website.
Opportunities in Behavioral Science
TDL is hiring! We’re looking to fill a range of positions, in Montreal and beyond. Some of the roles currently open include: