Winter is coming (if you’re in the other hemisphere, well, good for you).
Along with pumpkin spice and seasonal affective disorder, Fall now brings us the annual tradition of tech giants promising that Artificial General Intelligence is just around the corner. You know, the one that’s supposed to out-think us, out-work us, and probably steal our wives while it’s at it.
Last year, OpenAI CEO Sam Altman implied that Artificial General Intelligence—the theoretical human-like AI of robot apocalypse lore—might finally become reality with the arrival of GPT-5 in 2025. Spoiler alert: GPT-5 is here, and AGI it is not. Or is it? Do we even know what AGI looks like? Who knows.
Whether you’re a truther or a skeptic, GPT-5’s non-AGI-ness reignites long-standing speculation about the limits of computational intelligence. If not now, when will it come? How will we recognize it? Will it spare my job if I ask really, really nicely?
But seriously, it’s easy to get swept up in all the hype. So today’s newsletter is about cutting through the smoke, looking at what AGI actually means, and why we’re probably not about to be conquered just yet.
Until next time, Celine and the generally intelligent team @TDL
📧 Want to set the bar high for AGI? Subscribe to our newsletter here.
Today’s topics 👀
Deep Dive: 🧠 Almost(?) General Intelligence
Field Notes: 💻 Computer Colleagues
Viewpoints: 📣 Handling the Hype
Deep Dive: 🧠 Almost(?) General Intelligence
A jack of all trades. Humans are the bar. Unlike today’s AI, which excels in narrow, specialized domains, AGI would display versatility, adaptability, and problem-solving skills that generalize across contexts.
The boy who cried AGI. After GPT-5’s underwhelming launch, Sam Altman was quick to claim that, actually, GPT-6 would really be the one to change things. Anthropic’s CEO, Dario Amodei, predicted that AI would write 90% of all code by now (which hasn’t happened either, but we’re getting there… maybe?). Amidst all the hype, disappointment, and whatever the thing in between is called, it’s clear that AGI predictions should probably be taken with many grains of salt.
Already here? The flexible definition of AGI means we might not be able to see it even if it were right under our noses. In 2023, past and present Google executives claimed that AGI is already here, demonstrating just how variable the standards of AGI can be. From the 1950s Turing test to Stanford’s state-of-the-art HELM evaluation, the future of AGI largely depends on the tools we use to measure it.
MIA ROI. Venture capitalists poured $110 billion USD into AI startups alone last year, a 62% increase over the year before. But with AGI’s uncertain future and the distant return on investment for existing AI tools, these historic investments may be outrunning tangible benefits, blowing a bubble with profound economic consequences.
Field Notes: 💻 Computer Colleagues
Just because AGI hasn’t materialized yet doesn’t mean we shouldn’t start intentionally integrating AI systems into our work. That means more than sticking them into our toolkits and crossing our fingers. At best, hasty implementation leads to lost potential; at worst, it disrupts workflows, creating mistrust and frustration among human counterparts.
AI is increasingly steering our personal and professional decisions, becoming an invaluable tool that can boost work quality and efficiency. However, we can’t afford to ignore the human side of the equation. In this TDL article, Managing Director Sekoul Krastev discusses how we can leverage AI in the workplace without sacrificing agency and purpose. The key? Building in collaboration from start to finish to keep us in the driver’s seat.
Viewpoints: 📣 Handling the Hype
Future imperfect. Instead of asking whether or not a model is AGI, we should be asking whether it is useful to our organizations, and how. Holding out for a perfect AGI only means lost potential now, whereas existing tools, albeit task-specific and limited, can still meaningfully improve the quality and quantity of our work.
Designing ethical AI. Even the most advanced AI models contain flaws and algorithmic bias, such as the racial biases found in industry-standard facial recognition systems. It's an unfortunate reality of using imperfect humans as the standard. As the race toward AGI continues, industry and organizations alike must commit to ethics principles, including privacy regulations and safeguards against hidden bias.
Breaking the curse. We’re already experiencing the effects of the “intelligence curse,” which describes how powerful actors prioritize AI advancement over investments in real people. With growing distrust toward AI and anxiety over job security, centering human involvement in the integration of AI tools can shift the focus from replacing people to boosting their work.
One size doesn’t fit all. As AI adoption slows among larger companies, smaller organizations have the flexibility to adopt AI tools in novel ways, building live AI agents with custom-tailored tones and knowledge bases. Size doesn’t have to be a limitation; it can let teams add that personal touch before AGI hits the market.
Luckily, the ongoing race toward AGI gives us time to ensure that AI systems, present and future, align with our values before it’s too late. Learn more on our website.
What’s new at TDL
TDL is hiring! We’re hiring for a number of positions, both remote and based in our Montreal office. Some open roles include: