Artificial General Intelligence

The Basic Idea

Imagine a world where humans and computers are indistinguishable. Believe it or not, we might not be too far off from this futuristic reality.

Artificial general intelligence (AGI), also called strong AI, is a hypothetical type of artificial intelligence (AI) with human-like cognitive abilities. Unlike the AI systems we have today, AGI would be able to think, reason, and learn as well as or even better than humans.1

This level of human-like intelligence assumes that AGI would have a sense of self-control, self-understanding, and an ability to learn new skills on its own, similar to human consciousness. Some experts even believe that AGI programs would be conscious or sentient.

So, we don’t have AGI yet? That’s right. Our existing generative AI systems, such as ChatGPT, excel in specific areas and rely heavily on the data they are trained on. They follow sets of instructions, called algorithms, to analyze vast amounts of data, identify patterns, and generate human-like responses.

These systems can even learn and adjust their behavior based on interactions, which helps them get better at what they do. In this way, they boast impressive generative capabilities, but they are still limited by their training data and predetermined parameters set by humans. Current AI programs are not actually “thinking” on their own. They mimic human intelligence but don’t possess it.

AGI, on the other hand, would have a form of human intelligence. It would possess the ability to self-teach and generalize. As such, it could complete a wide variety of tasks and solve various problems regardless of what it’s expressly trained to do.

Are we on the path to AGI? Most experts think so. Leading companies like OpenAI and Anthropic are currently racing to create AGI, but the majority of top thinkers predict we’re still a couple of decades away.2

AI programs are already disrupting life as we know it, automating many human roles and shaping the content we find online. How will the emergence of true AGI affect us? Will we be able to adapt our behavior to live in harmony with seemingly sentient machines?

A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.


— Alan Turing, computer scientist and AI pioneer.


Key Terms

Weak AI: Also known as narrow AI or specialized AI, weak AI is only capable of performing the task it is designed for.3 All current AI systems are forms of weak AI, including facial recognition tools, chatbots, voice assistants, and self-driving vehicles. In comparison, strong AI refers to AGI, which could perform any task regardless of what it is trained for.

Machine Learning: A field of AI involving computer systems that learn through analyzing data rather than following explicit instructions.4 Machine learning allows software to learn similarly to the way humans learn, gradually improving its performance over time through trial and error. This is what allows AI systems to recognize patterns in data, learn from experience, and adapt to new situations.
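
To make this concrete, here’s a minimal sketch in Python (a toy example with made-up data and a made-up learning rate, not any real system): the program is never given the rule linking inputs to outputs, but discovers it through trial and error by nudging a parameter whenever it makes an error.

```python
# A toy illustration of machine learning: the program learns the rule
# y = 2x from example data instead of being explicitly programmed with it.
# (Hypothetical data and learning rate, for illustration only.)

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, output) examples

w = 0.0              # the model's single parameter, initially a blind guess
learning_rate = 0.01

for epoch in range(200):
    for x, y in data:
        prediction = w * x
        error = prediction - y            # how wrong the guess was
        w -= learning_rate * error * x    # nudge w to shrink the error

print(f"learned w = {w:.3f}")  # approaches 2.0, learned from the data alone
```

After a few hundred passes over the data, the parameter settles near 2.0 even though no one ever wrote that rule down.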

Algorithm: A set of instructions provided to a computer. In the context of AI or AGI, algorithms specify how a computer analyzes data and makes decisions, essentially defining how it learns and operates on its own.
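
For contrast with the learning example above, here’s an algorithm in the plain, everyday sense: a fixed, explicit, step-by-step recipe with no learning involved (a simple illustrative example).

```python
# A simple algorithm: an explicit, step-by-step recipe the computer
# follows exactly, with no learning involved. Here, finding the
# largest number in a list. (Illustrative example.)

def find_largest(numbers: list[float]) -> float:
    largest = numbers[0]       # step 1: assume the first number is largest
    for n in numbers[1:]:      # step 2: examine each remaining number
        if n > largest:        # step 3: keep whichever is bigger
            largest = n
    return largest             # step 4: report the result

print(find_largest([3, 7, 2, 9, 4]))  # prints 9
```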

Large Language Model (LLM): A type of AI algorithm that can generate human-like text through a process of prediction and learning. LLMs are trained on vast sets of textual data and are further refined through feedback from human interactions, allowing them to pick up patterns and generate text without necessarily understanding the content. ChatGPT and Gemini are two popular examples of LLMs.
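
As a toy illustration of prediction-based generation (a word-pair counter, nothing like the neural networks behind real LLMs, which are trained on billions of documents), the Python sketch below produces plausible-looking text purely from patterns in its training sentence, with no grasp of meaning.

```python
# A toy "language model": count which word follows which in training
# text, then generate new text by sampling likely next words.
# (Real LLMs use neural networks trained on billions of documents;
# this bigram counter only illustrates generation by prediction.)

import random
from collections import defaultdict

training_text = "the cat sat on the mat and the cat slept on the mat"

follows = defaultdict(list)  # word -> words observed right after it
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(8):                       # generate eight more words
    word = random.choice(follows[word])  # predict a plausible next word
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat and the cat"
```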

AI Alignment: A field of AI research focused on ensuring that AI systems serve human goals and adhere to human values. The topic is heavily debated because universal human values are hard to define, but most experts agree that alignment is essential for preventing harm as AI systems become more autonomous.

History

The idea of machines with human-like intelligence dates back to the science fiction novels of the late 1800s, but the first generation of AI research didn’t begin until the 1950s. Early researchers were extremely optimistic about the promise of AGI, predicting that it would be fully developed within a few decades, but they were soon humbled by the limitations of early computing.

Alan Turing was one of the first to ask whether machines could actually exhibit intelligence. He noted that the question of whether computers can “think” was too ambiguous to answer directly.5 Instead, he proposed the Turing Test, originally called the imitation game, to assess a machine’s ability to exhibit intelligent behavior. Essentially, the Turing Test asks a single question: can a machine convince a human that it’s human? Turing argued that although these thinking machines did not exist at the time, they would certainly exist in the future.

The field of AI research received a further boost from other notable figures like John McCarthy and Marvin Minsky. These pioneers, leveraging their expertise in computer science and cognitive science, played an important role in shaping early AI development. The 1950s and 1960s were the golden era of AI research, when everyone was inspired by the possibility of achieving AGI.6 However, disappointing progress caused researchers to shift their focus to other projects, resulting in a period of skepticism between the 1970s and 1990s known as the AI winter.

AI research experienced another surge in popularity at the turn of the century as researchers began focusing on practical applications, applying AI to specific problems rather than attempting to create AGI. The commercial success of AI programs like speech recognition and machine learning algorithms, along with advancements in computing power, sparked renewed interest in AGI research.

Today, achieving AGI remains a common goal among AI researchers. Significant advancements have been made in recent years, and some researchers have even argued that GPT-4, released in 2023, shows early “sparks” of AGI, exhibiting broad capabilities strikingly similar to human cognitive abilities.

People

Alan Turing: A British mathematician and computer scientist best known for creating the Turing test. Turing conducted some of the earliest work in the field of AI in the mid-20th century and devised the Turing machine, a theoretical model of computation demonstrating that a single, simple machine could execute any algorithm given enough time and memory.

John McCarthy: Known as the father of AI, McCarthy was an American computer scientist who coined the term “artificial intelligence” in 1955 to describe the field of research focused on creating intelligent machines. 

Marvin Minsky: An AI pioneer and cognitive scientist who presented the idea that human thought processes could be expressed mathematically and performed by machines.7 Minsky developed and built the first neural network simulator in 1951 and is credited with the invention of mechanical arms and other robotic devices.

Consequences

While the full potential of AGI is yet to be realized, many experts have discussed the possible consequences of fully autonomous AGI systems. On the positive side, AGI could help solve the world’s most pressing problems, such as disease, poverty, and climate change, by fast-tracking our ability to analyze data, identify patterns, and develop solutions.8 Some sources even suggest that it could help us find a cure for cancer!9

However, AGI will not be a silver bullet. While this powerful tool could augment our human abilities, accelerating data analysis and modeling incredibly complex simulations of real-world systems, these multidisciplinary challenges will still require coordinated collaboration among researchers, policymakers, and the public. The key lies in learning how to make the best use of AGI alongside other tools, including human expertise.

For the average person, AGI will likely change how we experience everything from work to play. AGI tools would supercharge our productivity and automate nearly every cognitive task, allowing us to make decisions with much greater speed and accuracy.8 As just one example, AI is already showing significant promise for improving supply chain management.

If you’re wondering how this will impact jobs, you’re not alone. Current estimates suggest that AI programs and robots could replace up to 30% of human labor by 2030.10 AI already poses a real threat to the labor market, but AGI systems would completely devalue expertise — what happens when everyone can leverage AGI to produce expert-level work?

On the bright side, the World Economic Forum suggests that AGI will create new types of jobs, just as the internet ushered in a new world of online industries.11 Automation could free up our time to work on the new challenges that come with advanced technology. Plus, economists suggest that AGI could create more jobs by lowering production costs and increasing demand for products. It’s possible that we just can’t imagine the jobs that might exist with AGI. On the other hand, wide-scale job loss might lead to the introduction of universal basic income (UBI) to replace the wages lost to automation.

The Human Element

The development of AGI also has the potential to impact us on a psychological level, causing us to question what it means to be human and forcing us to adapt our behavior so we can collaborate with AGI systems.

One of the most unsettling things about the emergence of AGI is its potential to threaten our uniqueness and challenge our purpose in society.12 What happens when human expertise is no longer valuable? What about when AGI can produce art, music, and literature that rivals the works of the greats? Can we still find meaning and purpose in our work and creative pursuits? Moreover, if we possess the same level of intelligence as a machine, what makes us different? Are we just computers made of flesh and blood?

If you’re feeling a hint of existential anxiety at these questions, we get it! While many of these questions are entirely philosophical, they have important implications for our psychological well-being. AGI could heighten feelings of inadequacy and diminished self-worth.

Some experts also wonder whether we’ll lose our ability to make decisions and judgments on our own if we rely on AGI to take over these important skills. Over-reliance on AGI could even erode our capacity for critical thinking, a skill we hone and develop over our lives.

But it’s not all doom and gloom. Clearly, humans will need to find a way to coexist with AGI, and perhaps we can achieve this by focusing on our unique human strengths, like ethical decision-making and emotional intelligence.

For example, humans will likely play a pivotal role in managing and mitigating AGI biases, which AI algorithms pick up from their training data. Understanding these biases will be crucial for creating AGI systems that are fair and free from discrimination.

Reducing biases in AGI systems will require interdisciplinary collaboration from fields such as ethics, psychology, and economics. Insights from experts in these disciplines can help us create AGI programs that are fair, particularly in critical areas like job applications, loan approvals, and criminal justice sentencing. For example, psychologists can identify potential problems in training data and shed light on how our human biases might show up in AI systems. Interdisciplinary cooperation will be essential in developing techniques to detect and remove biases from AI.
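
To give a flavor of what one such technique might look like, here’s a minimal Python sketch (with entirely made-up decision data) of a common first-pass bias check: comparing a model’s approval rates across groups, often called the demographic parity gap. Real-world fairness audits are far more involved than this.

```python
# A toy bias check on hypothetical model decisions: compare approval
# rates across two groups. A large gap flags the model for human review.
# (Made-up data; real fairness audits involve many more metrics.)

decisions = [  # (group, model_approved) records, entirely hypothetical
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(group: str) -> float:
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(approval_rate("group_a") - approval_rate("group_b"))
print(f"demographic parity gap: {gap:.2f}")  # 0.50 here, a red flag
```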

It’s also possible that humans will work alongside AGI systems to provide the empathy and social intelligence that emotionless algorithms lack. We will certainly be involved in regulating AGI: regulatory bodies composed of experts from various fields will likely be tasked with overseeing AI decision-making to evaluate safety and ethics. This might involve establishing safety protocols and setting policies for reviewing, and possibly appealing, AI decisions.

Fortunately, consumers seem to agree that intelligent machines should work with us rather than completely replace our roles in society. In a recent consumer survey, the vast majority of respondents preferred that services like marketing, healthcare, accounting, and legal aid be performed either entirely by humans or humans supported by AI rather than AI alone.13

Controversies

The concept of AGI draws plenty of criticism. Here are a few of the most common topics of debate in the industry.

Would AGI Be Conscious?

Most experts agree on the possibility of creating an AI that can problem-solve, learn, and make decisions like a human, but many are adamant that AGI could not develop emotional intelligence or consciousness.14 Similarly, some argue that AGI will simply be a different form of intelligence — like how humans and animals possess different forms of intelligence — rather than a level of intelligence that’s superior to humans.2

Ethical Challenges

Ethics present another challenge for AGI. As we mentioned earlier, AI algorithms are prone to human biases because they’re trained on human data.8 Experts question whether AGI could be made to understand human ethics and morals, stressing the importance of human oversight of AGI decisions.10 This oversight might be essential to prevent AGI from making discriminatory decisions or poor moral judgments based entirely on logic, without considering the complexities of human cognition and behavior.

Still, this opens up another can of worms. Even if we focus on AI alignment to ensure AGI systems support our ethics, how do we decide which “human values” to teach these systems?15 Who chooses these values? How do we come to a consensus when people all over the world inevitably disagree on values?

Could AGI Advance Beyond Our Control?

Perhaps one of the most highly debated controversies around AGI is its potential to advance beyond our control and threaten our very existence. Stephen Hawking, OpenAI CEO Sam Altman, and even a U.S. government-commissioned report have warned that AGI could lead to human extinction.

Experts have talked extensively about the need to establish safety measures in the development of AGI to prevent a situation in which developers lose control over AGI systems. This threat is even more serious when we consider that AGI could be used to supercharge bioweapons, autonomous robots, and cyberattacks.16 This sounds like the stuff of science fiction, but it’s a real risk that many notable figures are currently exploring.

Related TDL Content

Combining AI and Behavioral Science Responsibly

This article explores the intersection between AI and behavioral science, discussing how AI systems can be biased and why it’s important to address this issue before leveraging AI to influence human behavior.

How to keep work meaningful in the age of AI

If you want to learn more about how AI might impact the workforce and affect our relationship with work, give this a read. This article dives into the psychological motivations that make work meaningful and offers suggestions for maintaining a sense of value and purpose in the world of AI.

References

  1. What is AGI? - Artificial General Intelligence Explained. (n.d.). AWS. Retrieved April 23, 2024, from https://aws.amazon.com/what-is/artificial-general-intelligence/
  2. Dilmegani, C. (2024, March 10). When will singularity happen? 1700 expert opinions of AGI. Research AIMultiple. Retrieved April 23, 2024, from https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
  3. Labbe, M., & Wigmore, I. (n.d.). What is narrow AI (weak AI)? TechTarget. Retrieved April 23, 2024, from https://www.techtarget.com/searchenterpriseai/definition/narrow-AI-weak-AI
  4. What Is Machine Learning (ML)? (n.d.). IBM. Retrieved April 23, 2024, from https://www.ibm.com/topics/machine-learning
  5. Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. https://doi.org/10.1093/mind/LIX.236.433
  6. AI Winter: The Highs and Lows of Artificial Intelligence. (2021). History of Data Science. Retrieved April 23, 2024, from https://www.historyofdatascience.com/ai-winter-the-highs-and-lows-of-artificial-intelligence/
  7. Marvin Minsky, Ph.D. (n.d.). Academy of Achievement. Retrieved April 23, 2024, from https://achievement.org/achiever/marvin-minsky-ph-d/
  8. Pogla, M. (2024, February 3). What is Artificial General Intelligence (AGI) and Why Should You Care? Auto-GPT. Retrieved April 23, 2024, from https://autogpt.net/what-is-artificial-general-intelligence-agi-and-why-should-you-care/
  9. Sebastian, A. M., & Peter, D. (2022). Artificial Intelligence in Cancer Research: Trends, Challenges and Future Directions. Life (Basel, Switzerland), 12(12), 1991. https://doi.org/10.3390/life12121991
  10. Dilmegani, C. (2024, January 2). Top 9 Dilemmas of AI Ethics in 2024 & How to Navigate Them. Research AIMultiple. Retrieved April 23, 2024, from https://research.aimultiple.com/ai-ethics/
  11. Why there will be plenty of jobs in the future - even with AI. (2024, February 26). The World Economic Forum. Retrieved April 23, 2024, from https://www.weforum.org/agenda/2024/02/artificial-intelligence-ai-jobs-future/
  12. Eisikovits, N. (2023, July 12). AI Is an Existential Threat--Just Not the Way You Think. Scientific American. Retrieved April 23, 2024, from https://www.scientificamerican.com/article/ai-is-an-existential-threat-just-not-the-way-you-think/
  13. Crestodina, A. (2024, April 5). AI Consumer Readiness Survey: Do We Want AI-Powered Doctors? Lawyers? Marketers? We Asked 1000+ Consumers…. Orbit Media. Retrieved April 23, 2024, from https://orbitmedia.com/blog/ai-powered-services/
  14. Topper, N. (2023, January 24). Is Artificial General Intelligence (AGI) Possible? Built In. Retrieved April 23, 2024, from https://builtin.com/artificial-intelligence/is-artificial-general-intelligence-possible
  15. Snoswell, A. J. (2023, July 11). What is 'AI alignment'? Silicon Valley's favourite way to think about AI safety misses the real issues. The Conversation. Retrieved April 23, 2024, from https://theconversation.com/what-is-ai-alignment-silicon-valleys-favourite-way-to-think-about-ai-safety-misses-the-real-issues-209330
  16. AGI Challenges and Debates. (2024). Just Think AI. Retrieved April 23, 2024, from https://www.justthink.ai/artificial-general-intelligence/agi-challenges-and-debates

About the Author

Kira Warje

Kira holds a degree in Psychology with an extended minor in Anthropology. Fascinated by all things human, she has written extensively on cognition and mental health, often leveraging insights about the human mind to craft actionable marketing content for brands. She loves talking about human quirks and motivations, driven by the belief that behavioural science can help us all lead healthier, happier, and more sustainable lives. Occasionally, Kira dabbles in web development and enjoys learning about the synergy between psychology and UX design.
