Doomsaying is an old occupation. Artificial intelligence (AI) is a complex subject. It's easy to fear what you don't understand. These three truths go some way towards explaining the oversimplification and dramatisation plaguing discussions about AI.

Yesterday, outlets worldwide were plastered with news of yet another open letter claiming AI poses an existential threat to humankind. This letter, published through the nonprofit Center for AI Safety, has been signed by industry figureheads including Geoffrey Hinton and the chief executives of Google DeepMind, OpenAI and Anthropic.

However, I'd argue a healthy dose of scepticism is warranted when considering the AI doomsayer narrative. Upon close inspection, we see there are commercial incentives to manufacture fear in the AI space.

And as a researcher of artificial general intelligence (AGI), I find the framing of AI as an existential threat has more in common with 17th-century philosophy than computer science.

Was ChatGPT a ‘breakthrough’?

When ChatGPT was released late last year, people were delighted, entertained and horrified.

But ChatGPT isn't so much a research breakthrough as it is a product. The technology it's based on is several years old. An early version of its underlying model, GPT-3, was released in 2020 with many of the same capabilities. It just wasn't easily accessible online for everyone to play with.

Back in 2020 and 2021, I and many others wrote papers discussing the capabilities and shortcomings of GPT-3 and similar models – and the world carried on as always. Fast-forward to today, and ChatGPT has had an enormous impact on society. What changed?

In March, Microsoft researchers published a paper claiming GPT-4 showed "sparks of artificial general intelligence". AGI is the subject of many competing definitions, but for the sake of simplicity can be understood as AI with human-level intelligence.

Some immediately interpreted the Microsoft research as claiming GPT-4 is an AGI. By the definitions of AGI I'm familiar with, this is certainly not true. Nonetheless, it added to the hype and furore, and it was hard not to get caught up in the panic. Scientists are no more immune to groupthink than anyone else.

The same day that paper was submitted, the Future of Life Institute published an open letter calling for a six-month pause on training AI models more powerful than GPT-4, to allow everyone to take stock and plan ahead. Some of the AI luminaries who signed it expressed concern that AGI poses an existential threat to humans, and that ChatGPT is too close to AGI for comfort.

Soon after, prominent AI safety researcher Eliezer Yudkowsky – who has been commenting on the dangers of superintelligent AI since well before 2020 – took things a step further. He claimed we were on a path to building a "superhumanly smart AI", in which case "the obvious thing that would happen" is "literally everyone on Earth will die". He even suggested countries should be willing to risk nuclear war to enforce compliance with AI regulation across borders.

I don’t consider AI an imminent existential threat

One aspect of AI safety research is to address potential dangers AGI might present. It's a difficult topic to study because there is little agreement on what intelligence is and how it functions, let alone what a superintelligence might entail. As such, researchers must rely as much on speculation and philosophical argument as on evidence and mathematical proof.

There are two reasons I’m not concerned by ChatGPT and its byproducts.

First, it is nowhere near the kind of artificial superintelligence that might conceivably pose a threat to humankind. The models underpinning it are slow learners that require immense volumes of data to construct anything akin to the versatile concepts humans can concoct from only a few examples. In this sense, it is not "intelligent".

Second, many of the more catastrophic AGI scenarios depend on premises I find implausible. For instance, there seems to be a prevailing (but unspoken) assumption that sufficient intelligence amounts to limitless real-world power. If this were true, more scientists would be billionaires.

Moreover, cognition as we understand it in humans takes place as part of a physical environment (which includes our bodies), and this environment imposes limitations. The concept of AI as a "software mind" unconstrained by hardware has more in common with 17th-century dualism (the idea that the mind and body are separable) than with contemporary theories of the mind existing as part of the physical world.

Why the sudden concern?

Still, doomsaying is old hat, and the events of the past few years probably haven't helped – but there may be more to this story than meets the eye.

Among the prominent figures calling for AI regulation, many work for or have ties to incumbent AI firms. This technology is useful, and there is money and power at stake – so fearmongering presents an opportunity.

Almost everything involved in building ChatGPT has been published in research anyone can access. OpenAI's competitors can (and have) replicated the process, and it won't be long before free and open-source alternatives flood the market.

This point was made clearly in a memo purportedly leaked from Google, entitled "We have no moat, and neither does OpenAI". A moat is jargon for a way to secure your business against competitors.

Yann LeCun, who leads AI research at Meta, says these models should be open, since they will become public infrastructure. He and many others are unconvinced by the AGI doom narrative.

Notably, Meta wasn't invited when US President Joe Biden recently met with the leadership of Google DeepMind and OpenAI. That's despite the fact that Meta is almost certainly a leader in AI research; it produced PyTorch, the machine-learning framework OpenAI used to make GPT-3.

At the White House meetings, OpenAI chief executive Sam Altman suggested the US government should issue licences to those who are trusted to responsibly train AI models. Licences, as Stability AI chief executive Emad Mostaque puts it, "are a kinda moat".

Companies such as Google, OpenAI and Microsoft have everything to lose by allowing small, independent competitors to flourish. Bringing in licensing and regulation would help cement their position as market leaders, and hamstring competition before it can emerge.

While regulation is appropriate in some circumstances, regulations that are rushed through will favour incumbents and suffocate small, free and open-source competition.
