For the most part, the focus of modern emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber attacks.

However, with the increasing availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for.

Over the past 20 years, my colleagues and I — along with many other researchers — have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making.

We are now reaching a turning point where AI is becoming a potential source of risk at a scale that should be incorporated into the phases of risk and emergency management — mitigation or prevention, preparedness, response and recovery.

AI and hazard classification

AI hazards can be classified into two types: intentional and unintentional. Unintentional hazards are those caused by human errors or technological failures.

As the use of AI increases, there will be more adverse events caused by human error in AI models or technological failures in AI-based technologies. These events can occur in all kinds of industries, including transportation (like drones, trains or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health and mining.

Intentional AI hazards are potential threats that are caused by using AI to harm people and property. AI can also be used to gain unlawful benefits by compromising security or safety systems.

In my view, this simple intentional and unintentional classification may not be sufficient in the case of AI. Here, we need to add a new class of emerging threats — the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.

Many AI experts have already warned against such potential threats. A recent open letter by researchers, scientists and others involved in the development of AI called for a moratorium on its further development.

AI pioneer Geoffrey Hinton is interviewed by CBS concerning the dangers of the technology.

Public safety risks

Public safety and emergency management experts use risk matrices to assess and compare risks. Using this method, hazards are qualitatively or quantitatively assessed based on their frequency and consequence, and their impacts are classified as low, medium or high.

Hazards that have low frequency and low consequence or impact are considered low risk, and no additional actions are required to manage them. Hazards that have medium consequence and medium frequency are considered medium risk. These risks need to be closely monitored.

Hazards with high frequency or high consequence, or that are high in both consequence and frequency, are classified as high risks. These risks need to be reduced by taking additional risk reduction and mitigation measures. Failure to take immediate and proper action may result in severe human and property losses.
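To make these classification rules concrete, here is a minimal sketch in Python of such a qualitative risk matrix. The function name, the three-level ratings and the handling of mixed low/medium cases are illustrative assumptions, not a standard emergency management implementation.

```python
# Minimal sketch of a qualitative risk matrix (illustrative assumptions).
# Frequency and consequence are each rated "low", "medium" or "high";
# their combination maps to an overall risk level, per the rules above.

LEVELS = {"low": 0, "medium": 1, "high": 2}

def classify_risk(frequency: str, consequence: str) -> str:
    """Return the overall risk level for a hazard.

    High risk if either dimension is high; low risk if both are low;
    medium risk otherwise (treating the mixed low/medium cells as
    medium is an assumption, as the text describes only the purely
    low, medium and high cells).
    """
    f, c = LEVELS[frequency], LEVELS[consequence]
    if f == 2 or c == 2:
        return "high"    # reduce with additional mitigation measures
    if f == 0 and c == 0:
        return "low"     # no additional action required
    return "medium"      # monitor closely

# Example: a rare but high-consequence hazard is still high risk.
print(classify_risk("low", "high"))  # -> high
```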

Up until now, AI hazards and risks have not been added into risk assessment matrices much beyond the organizational use of AI applications. The time has come when we should quickly start bringing potential AI risks into local, national and global risk and emergency management.

AI risk assessment

AI technologies are becoming more widely used by institutions, organizations and companies in different sectors, and hazards associated with AI are starting to emerge.

In 2018, the accounting firm KPMG developed an “AI Risk and Controls Matrix.” It highlights the risks of using AI by businesses and urges them to recognize these new emerging risks. The report warned that AI technology is advancing very quickly and that risk control measures must be in place before they overwhelm the systems.

Governments have also started developing some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violation of individual rights.

At the government level, the Canadian government issued the “Directive on Automated Decision-Making” to ensure that federal institutions minimize the risks associated with AI systems and create appropriate governance mechanisms.

The main objective of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced. According to this directive, risk assessments must be conducted by each department to make sure that appropriate safeguards are in place in accordance with the Policy on Government Security.

In 2021, the U.S. Congress tasked the National Institute of Standards and Technology with developing an AI risk management framework for the Department of Defense. The proposed voluntary AI risk assessment framework recommends banning the use of AI systems that present unacceptable risks.

A robot forklift and package mover are seen at Deloitte Canada’s Smart Factory AI robotic warehouse showroom in Montréal.
THE CANADIAN PRESS/Ryan Remiorz

Threats and competition

Much of the national-level policy focus on AI has been from national security and global competition perspectives — the national security and economic risks of falling behind in AI technology.

The U.S. National Security Commission on Artificial Intelligence highlighted national security risks associated with AI. These were not the public threats of the technology itself, but the risk of losing out in the global competition for AI development to other countries, including China.

In its 2017 Global Risks Report, the World Economic Forum highlighted that AI is only one of several emerging technologies that can exacerbate global risk. While assessing the risks posed by AI, the report concluded that, at that time, super-intelligent AI systems remained a theoretical threat.

However, the most recent report does not even mention AI or AI-associated risks, which suggests that the leaders of the global companies that provide input to the global risk report did not view AI as an immediate risk.

Faster than policy

AI development is progressing much faster than government and corporate policies in understanding, foreseeing and managing the risks. The current global conditions, combined with market competition for AI technologies, make it difficult to imagine an opportunity for governments to pause and develop risk governance mechanisms.

While we should collectively and proactively strive for such governance mechanisms, we all need to brace for AI's major catastrophic impacts on our systems and societies.

This article was originally published at theconversation.com