Debates about AI often characterise it as a technology that has come to compete with human intelligence. Indeed, one of the most widely pronounced fears is that AI may achieve human-like intelligence and render humans obsolete in the process.

However, one of the world’s top AI scientists is now describing AI as a new kind of intelligence – one that poses unique risks, and will therefore require unique solutions.

Geoffrey Hinton, a leading AI scientist and winner of the 2018 Turing Award, has just stepped down from his role at Google to warn the world about the dangers of AI. He follows in the steps of more than 1,000 technology leaders who signed an open letter calling for a global halt on the development of advanced AI for at least six months.

Hinton’s argument is nuanced. While he does think AI has the capacity to become smarter than humans, he also proposes it should be considered an altogether different type of intelligence to our own.

Why Hinton’s ideas matter

Although experts have been raising red flags for months, Hinton’s decision to voice his concerns is significant.

Dubbed the “godfather of AI”, he helped pioneer many of the methods underlying the modern AI systems we see today. His early work on neural networks led to him being one of three individuals awarded the 2018 Turing Award. And one of his students, Ilya Sutskever, went on to become a co-founder of OpenAI, the organisation behind ChatGPT.

When Hinton speaks, the AI world listens. And if we’re to seriously consider his framing of AI as an intelligent non-human entity, one could argue we’ve been thinking about it all wrong.

The false equivalence trap

On one hand, large language model-based tools such as ChatGPT produce text that’s very similar to what humans write. ChatGPT even makes stuff up, or “hallucinates”, which Hinton points out is something humans do as well. But we risk being reductive when we consider such similarities a basis for comparing AI intelligence with human intelligence.

We can find a useful analogy in the invention of artificial flight. For hundreds of years, humans tried to fly by imitating birds: flapping their arms with some contraption mimicking feathers. This didn’t work. Eventually, we realised fixed wings create uplift, using a different principle, and this heralded the invention of flight.

Planes are no better or worse than birds; they are different. They do different things and face different risks.

AI (and computation, for that matter) is a similar story. Large language models such as GPT-3 are comparable to human intelligence in some ways, but work differently. ChatGPT crunches vast swathes of text to predict the next word in a sentence. Humans take a different approach to forming sentences. Both are impressive.
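To make the “predict the next word” idea concrete, here is a deliberately toy sketch: a bigram model that counts which word follows which in some training text, then predicts the most frequent continuation. Real large language models use neural networks over tokens and vastly more data, but the training objective – guess what comes next – is the same in spirit. The corpus and function names below are illustrative inventions, not anything from GPT itself.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The gap between this and ChatGPT is enormous, but it shows why the output can look fluent without the system “thinking” the way a human writer does.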

How is AI intelligence unique?

Both AI experts and non-experts have long drawn a link between AI and human intelligence – not to mention the tendency to anthropomorphise AI. But AI is fundamentally different to us in several ways. As Hinton explains:

If you or I learn something and want to transfer that knowledge to someone else, we can’t just send them a copy […] But I can have 10,000 neural networks, each having their own experiences, and any of them can share what they learn instantly. That’s a huge difference. It’s as if there were 10,000 of us, and as soon as one person learns something, all of us know it.
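Hinton’s point about copying is quite literal: a trained neural network’s “knowledge” lives entirely in its numerical weights, so sharing what one network has learned can be a direct copy operation. The sketch below uses plain NumPy arrays as stand-ins for network weights (the shapes and names are made up for illustration):

```python
import numpy as np

# A trained network's knowledge is just its weight values.
trained_weights = {
    "layer1": np.random.randn(4, 8),
    "layer2": np.random.randn(8, 2),
}

# Instantly give 10,000 copies the same knowledge -- no retraining required.
fleet = [
    {name: w.copy() for name, w in trained_weights.items()}
    for _ in range(10_000)
]

# Every copy now behaves identically to the original.
assert all(
    np.array_equal(net["layer1"], trained_weights["layer1"]) for net in fleet
)
```

Humans have no equivalent operation: each of us has to learn from scratch, which is exactly the asymmetry Hinton is pointing at.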

AI outperforms humans on many tasks, including any task that relies on assembling patterns and knowledge gleaned from large datasets. Humans are sluggishly slow in comparison, and have only a fraction of AI’s memory.

Yet humans have the upper hand on some fronts. We make up for our poor memory and slow processing speed by using common sense and logic. We can learn how the world works, and use this knowledge to predict the likelihood of events. AI still struggles with this (although researchers are working on it).

Humans are also very energy-efficient, whereas AI requires powerful computers (especially for learning) that use orders of magnitude more energy than we do. As Hinton puts it:

humans can imagine the future […] on a cup of coffee and a slice of toast.

Okay, so what if AI is different to us?

If AI is fundamentally a different intelligence to ours, then it follows that we can’t (or shouldn’t) compare it to ourselves.

A new intelligence presents new dangers to society, and will require a paradigm shift in the way we talk about and manage AI systems. In particular, we may need to reassess the way we think about guarding against the risks of AI.

One of the fundamental questions that has dominated these debates is how to define AI. After all, AI is not binary; intelligence exists on a spectrum, and the spectrum for human intelligence is very different from that for machine intelligence.

This very point was the downfall of one of the earliest attempts to regulate AI back in 2017 in New York, when auditors couldn’t agree on which systems should be classified as AI. Defining AI when designing regulation is very difficult.

So perhaps we should focus less on defining AI in a binary fashion, and more on the specific consequences of AI-driven actions.

What risks are we facing?

The speed of AI uptake in industries has taken everyone by surprise, and some experts are worried about the future of work.

This week, IBM CEO Arvind Krishna announced the company will be replacing some 7,800 back-office jobs with AI over the next five years. We’ll need to adapt how we manage AI as it becomes increasingly deployed for tasks once completed by humans.

More worryingly, AI’s ability to generate fake text, images and video is leading us into a new age of information manipulation. Our current methods of dealing with human-generated misinformation won’t be enough to address it.

Hinton is also worried about the dangers of AI-driven autonomous weapons, and how bad actors may leverage them to commit all kinds of atrocity.

These are just some examples of how AI – and specifically, the distinct characteristics of AI – can bring risk to the human world. To regulate AI productively and proactively, we need to consider these specific characteristics, and not apply recipes designed for human intelligence.

The good news is humans have learnt to manage potentially harmful technologies before, and AI is no different.
