Artificial intelligence can play chess, drive a car and diagnose medical issues. Examples include Google DeepMind’s AlphaGo, Tesla’s self-driving vehicles, and IBM’s Watson.

This form of artificial intelligence is known as Artificial Narrow Intelligence (ANI) – non-human systems that can perform a specific task. We encounter this kind of AI on a daily basis, and its use is growing rapidly.

But while many impressive capabilities have been demonstrated, we’re also starting to see problems. The worst case involved a self-driving test car that hit a pedestrian in March 2018. The pedestrian died and the incident is still under investigation.

The next generation of AI

With the next generation of AI, the stakes will almost certainly be much higher.

Artificial General Intelligence (AGI) will have advanced computational powers and human-level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even do tasks beyond those they were designed for.

Importantly, their rate of improvement could be exponential as they become far more advanced than their human creators. The introduction of AGI could quickly bring about Artificial Super Intelligence (ASI).

While fully functioning AGI systems do not yet exist, it has been estimated that they will be with us anywhere between 2029 and the end of the century.

What appears almost certain is that they will arrive eventually. When they do, there is a great and natural concern that we won’t be able to control them.

The risks related to AGI

There is no doubt that AGI systems could transform humanity. Some of the more powerful applications include curing disease, solving complex global challenges such as climate change and food security, and initiating a worldwide technology boom.

But a failure to implement appropriate controls could lead to catastrophic consequences.

Despite what we see in Hollywood movies, existential threats are not likely to involve killer robots. The problem will not be one of malevolence, but rather one of intelligence, writes MIT professor Max Tegmark in his 2017 book Life 3.0: Being Human in the Age of Artificial Intelligence.

It is here that the science of human-machine systems – known as Human Factors and Ergonomics – will come to the fore. Risks will emerge from the fact that super-intelligent systems will identify more efficient ways of doing things, concoct their own strategies for achieving goals, and even develop goals of their own.

Imagine these examples:

  • an AGI system tasked with preventing HIV decides to eradicate the problem by killing everybody who carries the disease, or one tasked with curing cancer decides to kill everybody who has any genetic predisposition for it

  • an autonomous AGI military drone decides the only way to guarantee an enemy target is destroyed is to wipe out an entire community

  • an environmentally protective AGI decides the only way to slow or reverse climate change is to remove the technologies and humans that induce it.

These scenarios raise the spectre of disparate AGI systems battling one another, none of which take human concerns as their central mandate.

Various dystopian futures have been advanced, including those in which humans eventually become obsolete, with the subsequent extinction of the human race.

Others have put forward less extreme but still significant disruptions, including malicious use of AGI for terrorist and cyber-attacks, the removal of the need for human work, and mass surveillance, to name only a few.

So there is a need for human-centred investigations into the safest ways to design and manage AGI in order to minimise risks and maximise benefits.

How to control AGI

Controlling AGI is not as straightforward as simply applying the same kinds of controls that tend to keep humans in check.

Many controls on human behaviour rely on our consciousness, our emotions, and the application of our moral values. AGIs will not need any of these attributes to cause us harm. Current forms of control are not enough.

Arguably, there are three sets of controls that require development and testing immediately:

  1. the controls required to ensure AGI system designers and developers create safe AGI systems

  2. the controls that need to be built into the AGIs themselves, such as “common sense”, morals, operating procedures, decision-rules, and so on

  3. the controls that need to be added to the broader systems in which AGI will operate, such as regulation, codes of practice, standard operating procedures, monitoring systems, and infrastructure.

Human Factors and Ergonomics offers methods that can be used to identify, design and test such controls well before AGI systems arrive.

For example, it’s possible to model the controls that exist in a particular system, to model the likely behaviour of AGI systems within this control structure, and to identify safety risks.

This will allow us to identify where new controls are required, design them, and then remodel the system to see whether the risks are removed as a result.
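As a toy illustration only, the sketch below models a simplified control structure in Python and flags control links that lack a feedback channel, a classic source of safety risk in systems-theoretic analyses such as STAMP/STPA. The ControlStructure class and the missing-feedback heuristic are hypothetical simplifications invented here, not an established Human Factors and Ergonomics tool.

```python
# Hypothetical sketch: a control structure as pairs of control and
# feedback links, with a simple check for commands that go unobserved.

from dataclasses import dataclass, field

@dataclass
class ControlStructure:
    # (controller, controlled process) pairs for commands and feedback
    control_actions: set[tuple[str, str]] = field(default_factory=set)
    feedback_channels: set[tuple[str, str]] = field(default_factory=set)

    def add_control(self, controller: str, process: str) -> None:
        self.control_actions.add((controller, process))

    def add_feedback(self, process: str, controller: str) -> None:
        self.feedback_channels.add((controller, process))

    def missing_feedback(self) -> set[tuple[str, str]]:
        """Control links with no matching feedback channel: the controller
        cannot observe the effect of its commands."""
        return self.control_actions - self.feedback_channels

# Model a simplified deployment: a regulator oversees an operator,
# who tasks an AGI, which acts on the environment.
model = ControlStructure()
model.add_control("regulator", "operator")
model.add_feedback("operator", "regulator")
model.add_control("operator", "agi")        # no feedback from the AGI
model.add_control("agi", "environment")
model.add_feedback("environment", "agi")

# The operator commands the AGI but receives no feedback from it,
# so that link is flagged as a candidate safety risk.
for controller, process in model.missing_feedback():
    print(f"Risk: '{controller}' controls '{process}' with no feedback loop")
```

In a real analysis the control structure would be far richer, covering regulation, codes of practice and monitoring systems, and each flagged gap would prompt the design and retesting of a new control.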

In addition, our models of cognition and decision making can be used to ensure AGIs behave appropriately and have humanistic values.

Act now, not later

This kind of research is in progress, but there is not nearly enough of it and not enough disciplines are involved.

Even the high-profile tech entrepreneur Elon Musk has warned of the “existential crisis” humanity faces from advanced AI, and has spoken about the need to regulate AI before it’s too late.

The next decade or so represents a critical period. There is an opportunity to create safe and efficient AGI systems that could have far-reaching benefits to society and humanity.

At the same time, a business-as-usual approach in which we play catch-up with rapid technological advances could contribute to the extinction of the human race. The ball is in our court, but it won’t be for much longer.

This article was originally published at theconversation.com