Is AGI a tangible objective?

News | Apr 2024 | Financial Times

What: Analyst Benedict Evans reviews where generative AI stands today and what we actually know.

Why it is important: This is a refreshingly honest point of view, away from the hysteria and speculation.


The idea of "artificial intelligence," or AGI (artificial general intelligence), capable of human-level reasoning and beyond has been explored in science fiction for decades, with examples like the 1946 story "A Logic Named Joe." While we've made impressive progress on narrow AI capabilities such as superhuman math and memory, we still don't have a coherent theory of what general intelligence is or why humans possess it differently from other animals.

There have been waves of excitement about the potential for AGI breakthroughs, including in the 1970s and more recently with the rapid progress of large language models (LLMs). Some experts believe AGI could be closer than previously thought, while others remain highly skeptical.

The uncertainty over whether and when AGI could be achieved makes analogies and thought experiments difficult: unlike with nuclear fission, there is no equivalent scientific theory to guide us.

The potential risks of advanced AGI systems, sometimes called the "doom" scenario, are being debated, with calls for urgent action. In reality, though, the technology is inherently public and difficult to control.

Ultimately, the most likely outcome is that LLMs and other AI advances will continue to produce more automation and disruption, similar to past technological revolutions, rather than a singular AGI breakthrough. The focus should be on managing the societal impacts rather than speculating about existential risks.

