Artificial intelligence has changed form in recent years.

What began in the public eye as a burgeoning field with promising (yet largely benign) applications has snowballed into a more than US$100 billion industry where the heavy hitters – Microsoft, Google and OpenAI, to name a few – seem intent on out-competing one another.

The result has been increasingly sophisticated large language models, often released in haste and without adequate testing and oversight.

These models can do much of what a human can, and in many cases do it better. They can beat us at advanced strategy games, generate incredible art, diagnose cancers and compose music.



There’s no question AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans?

There’s a term for this: artificial general intelligence (AGI). Although it’s a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. In other words, it’s the point where AI can tackle any intellectual task a human can.

AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness.

We asked five experts if they think AI will ever reach AGI, and five out of five said yes.

But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to surpass humans? And what constitutes “intelligence”, anyway?

Here are their detailed responses:
This article was originally published at theconversation.com