The release of the advanced chatbot ChatGPT in 2022 got everyone talking about artificial intelligence (AI). Its sophisticated capabilities amplified concerns that AI could become so advanced that we would soon be unable to control it. This even led some experts and industry leaders to warn that the technology could lead to human extinction.

Other commentators, though, weren’t convinced. Noam Chomsky, a professor of linguistics, dismissed ChatGPT as “hi-tech plagiarism”.

For years, I was relaxed about the prospect of AI's impact on human existence and the environment. That's because I always thought of it as a guide or adviser to humans. But the prospect of AIs taking decisions – exerting executive control – is another matter. And it's one that is now being seriously entertained.

One of the key reasons we shouldn't let AI have executive power is that it entirely lacks emotion, which is crucial for decision-making. Without emotion, empathy and a moral compass, you have created the perfect psychopath. The resulting system may be highly intelligent, but it will lack the human emotional core that enables it to gauge the potentially devastating emotional consequences of an otherwise rational decision.

When AI takes executive control

Importantly, AI shouldn't only be considered an existential threat if we were to put it in charge of nuclear arsenals. There is really no limit to the number of positions of control from which it could inflict unimaginable damage.

Consider, for example, how AI can already identify and organise the information required to build your own conservatory. Current iterations of the technology can guide you effectively through each step of the build and prevent many beginner's mistakes. But in future, an AI might act as project manager and coordinate the build by selecting contractors and paying them directly from your budget.

AI is already being used in virtually all domains of information processing and data analysis – from modelling weather patterns to controlling driverless vehicles to helping with medical diagnoses. But this is where problems start – when we let AI systems take the critical step up from the role of adviser to that of executive manager.

Instead of just suggesting remedies to a company's accounts, what if an AI were given direct control, with the ability to implement procedures for recovering debts, make bank transfers, and maximise profits – with no limits on how to do this? Or imagine an AI system not only providing a diagnosis based on X-rays, but being given the power to directly prescribe treatments or medication.

You might start feeling uneasy about such scenarios – I certainly would. The reason might be your intuition that these machines don't really have "souls". They are just programs designed to digest huge amounts of information in order to simplify complex data into much simpler patterns, allowing humans to make decisions with more confidence. They don't – and can't – have emotions, which are intimately linked to biological senses and instincts.

Emotions and morals

Emotional intelligence is the ability to manage our emotions in order to overcome stress, empathise, and communicate effectively. This arguably matters more in the context of decision-making than intelligence alone, because the best decision is not always the most rational one.

It's likely that intelligence, the capacity to reason and operate logically, can be embedded into AI-powered systems so that they can make rational decisions. But imagine asking a powerful AI with executive capabilities to resolve the climate crisis. The first thing it might be inspired to do is drastically reduce the human population.

This deduction doesn't need much explaining. We humans are, almost by definition, the source of pollution in every possible form. Axe humanity and climate change would be resolved. It's not the choice that human decision-makers would come to, one hopes, but an AI would find its own solutions – impenetrable and unencumbered by a human aversion to causing harm. And if it had executive power, there might be nothing to stop it from proceeding.

Giving an AI the ability to take executive decisions in air traffic control would be a mistake.
Gorodenkoff / Shutterstock

Sabotage scenarios

How about sabotaging the sensors and monitors controlling food farms? This might happen gradually at first, pushing controls just past a tipping point so that no human notices the crops are condemned. Under certain scenarios, this could quickly lead to famine.

Alternatively, how about shutting down air traffic control globally, or simply crashing all planes flying at any one time? Some 22,000 planes are normally in the air simultaneously, which adds up to a potential death toll of several million people.

If you think we're far from being in that situation, think again. AIs already drive cars and fly military aircraft autonomously.

Alternatively, how about shutting down access to bank accounts across vast regions of the world, triggering civil unrest everywhere at once? Or switching off computer-controlled heating systems in the middle of winter, or air-conditioning systems at the peak of summer heat?

In short, an AI system doesn't need to be put in charge of nuclear weapons to represent a serious threat to humanity. But while we're on this topic, if an AI system were powerful and intelligent enough, it could find a way of faking an attack on a country with nuclear weapons, triggering a human-initiated retaliation.

Could AI kill large numbers of humans? The answer has to be yes, in theory. But this depends largely on humans deciding to give it executive control. I can't really think of anything more terrifying than an AI that can make decisions and has the power to implement them.

This article was originally published at theconversation.com