Sam Altman, CEO of ChatGPT maker OpenAI, is reportedly seeking as much as US$7 trillion in investment. He believes the world needs to invest in producing massive numbers of the computer chips required to power artificial intelligence (AI) systems. Altman made a similar point recently: the world will need more energy in the AI-saturated future he envisions – so much more that some kind of technological breakthrough, such as nuclear fusion, may be required.

Altman clearly has big plans for his company’s technology, but is the future of AI really that bright? As a long-time researcher in the field of “artificial intelligence”, I have my doubts.

Today’s AI systems – especially generative AI tools such as ChatGPT – are not truly intelligent. Nor is there any evidence they will become intelligent without fundamental changes to the way they work.

What is AI?

One definition of AI is a computer system that can “perform tasks commonly associated with intelligent beings”.

This definition, like many others, is a little fuzzy: should we call spreadsheets AI, since they can perform calculations that would once have been a demanding human task? What about factory robots, which have not only replaced humans but in many cases surpassed us in their ability to perform complex and delicate tasks?



While spreadsheets and robots can certainly do things that were once the preserve of humans, they do so by following an algorithm – a process or set of rules for approaching and completing a task.
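To make the point concrete, here is a minimal sketch (in Python, with invented numbers) of a spreadsheet-style calculation written out as an explicit algorithm: a fixed sequence of steps applied to the data, with no understanding involved.

```python
# A spreadsheet-style "SUM" and "AVERAGE" expressed as an explicit algorithm:
# a fixed series of steps applied to the data, no intelligence required.

def column_total(values):
    """Add up a column of numbers, one step at a time."""
    total = 0.0
    for v in values:
        total += v
    return total

def column_average(values):
    """Average of a column: total divided by the number of entries."""
    if not values:
        raise ValueError("Cannot average an empty column")
    return column_total(values) / len(values)

if __name__ == "__main__":
    sales = [1200.0, 950.5, 1100.25]   # made-up figures for illustration
    print(column_total(sales))          # 3250.75
    print(column_average(sales))        # 1083.5833...
```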

One thing we can say is that there is no such thing as “AI” in the sense of a single system that can perform the range of intelligent actions a human can. Rather, there are many different AI technologies that can do very different things.

Making decisions versus generating outputs

Perhaps the most important distinction is between “discriminative AI” and “generative AI”.

Discriminative AI helps with making decisions, such as whether a bank should grant a loan to a small business, or whether a doctor should diagnose a patient with disease X or disease Y. AI technologies of this kind have existed for decades, and better and better ones keep cropping up.
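As a rough illustration of the idea – not of any real lending system – here is a minimal sketch of discriminative AI using logistic regression from scikit-learn. The features, training data and applicant are all invented for the example.

```python
# A toy discriminative model: given features of a loan applicant, decide yes/no.
# Assumes scikit-learn is installed; the data below is entirely made up.
from sklearn.linear_model import LogisticRegression

# Each row: [annual_income_thousands, years_trading, credit_score]
# Label: 1 = loan was repaid, 0 = loan defaulted.
X_train = [
    [120, 8, 720],
    [45, 1, 580],
    [80, 5, 690],
    [30, 2, 540],
    [200, 12, 760],
    [55, 3, 600],
]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

applicant = [[70, 4, 650]]
print("Approve?", bool(model.predict(applicant)[0]))
print("Estimated repayment probability:", model.predict_proba(applicant)[0][1])
```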



Generative AI systems, on the other hand – ChatGPT, Midjourney and their relatives – generate outputs in response to inputs: in other words, they create things. Essentially, they are exposed to billions of data points (sentences, for example) and use these to guess a likely response to a prompt. Depending on the source data, the answer may often be “true”, but there are no guarantees.
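The underlying idea can be caricatured with a toy next-word predictor built from simple word-pair counts. Real generative systems use transformer networks trained on vast corpora, but the “guess a likely continuation” principle is similar; the training sentences below are invented for the sketch.

```python
# A toy "generative" model: predict the next word purely from how often
# word pairs appeared in the training text. No notion of truth is involved.
from collections import Counter, defaultdict

training_sentences = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for sentence in training_sentences:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def continue_text(prompt_word, length=4):
    """Repeatedly append the statistically most likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # "the cat sat on the"
```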

With generative AI, there is no difference between a “hallucination” – a false response invented by the system – and a response a human would judge to be true. This appears to be an inherent flaw in the technology, which uses a kind of neural network known as a transformer.

AI, but not intelligent

Another example shows how the “AI” goalposts keep shifting. In the 1980s I worked on a computer system designed to provide expert medical advice on laboratory results. It was written up in the US research literature as one of the top four medical “expert systems” in clinical use, and in 1986 an Australian Government report described it as the most successful expert system developed in Australia.

I was pretty proud of that. It was an AI milestone, accomplishing a task that would normally require highly trained medical professionals. Yet the system was not intelligent at all. It was really just a kind of lookup table that matched lab test results with high-level diagnostic and patient-management advice.
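For flavour, here is a toy sketch of that “lookup table of sorts” idea: a few hand-written rules mapping a lab result to canned advice. The test, thresholds and wording are invented for illustration and are not taken from the actual clinical system described above.

```python
# A toy rule-based adviser: match a lab result to pre-written advice.
# The thresholds and advice text are illustrative only, not clinical guidance.

def thyroid_advice(tsh_level):
    """Return canned advice for a (hypothetical) TSH result in mIU/L."""
    if tsh_level < 0.4:
        return "TSH low: consistent with hyperthyroidism; suggest free T4 and free T3."
    if tsh_level <= 4.0:
        return "TSH within reference range: no further thyroid testing indicated."
    return "TSH high: consistent with hypothyroidism; suggest free T4 and repeat in 6 weeks."

print(thyroid_advice(6.2))
```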

There are now technologies that make building such systems very easy, and there are millions of them in use around the world. (The technology is based on research by myself and colleagues, and is supplied by an Australian company called Beamtree.)

Because they do a task that would otherwise be done by highly trained specialists, they certainly count as “AI”, but they are still not intelligent at all (although the more complex ones may have hundreds and hundreds of rules for finding answers).

The transformer networks used in generative AI systems still operate on sets of rules, although there may be millions or billions of them, and they cannot easily be explained in human terms.

What is real intelligence?

If algorithms can produce results as mind-blowing as ChatGPT’s without being intelligent, what is real intelligence?

We might say intelligence is insight: the judgement that something is a good idea, or that it isn’t. Think of Archimedes leaping from his bath and shouting “Eureka” because he had grasped the principle of buoyancy.

Generative AI has no insight. ChatGPT cannot tell you whether its answer to a question is better than Gemini’s. (Gemini, until recently known as Bard, is Google’s competitor to OpenAI’s GPT family of AI tools.)

Or to put it another way: generative AI might produce amazing Monet-style images, but if it had only been trained on Renaissance art, it would never invent Impressionism.

Nymphéas (Water Lilies). Claude Monet / Google Art Project

Generative AI is extraordinary, and people will undoubtedly find widespread and very valuable uses for it. It already provides extremely useful tools for transforming and presenting (but not discovering) information, and tools for turning specifications into code are already in routine use.

These will keep getting better: Google’s just-released Gemini, for example, appears to try to minimize the hallucination problem by using search and then re-expressing the search results.

However, the more familiar we become with generative AI systems, the more we realize they are not truly intelligent; there is no insight. It is not magic, but a very clever magic trick: an algorithm that is the product of extraordinary human ingenuity.

This article was originally published at theconversation.com