You might think flying in a plane would be more dangerous than driving a car. In reality it’s much safer, partly because the aviation industry is heavily regulated.

Airlines must adhere to strict standards for safety, testing, training, policies and procedures, auditing and oversight. And when things do go wrong, we investigate and attempt to rectify the problem to improve safety in the future.

It’s not only airlines, either. Other industries where things can go very badly wrong, such as pharmaceuticals and medical devices, are also heavily regulated.

Artificial intelligence is a relatively new industry, but it’s growing fast and has great capacity to do harm. Like aviation and pharmaceuticals, it needs to be regulated.

AI can do great harm

A wide range of technologies and applications that fit under the rubric of “artificial intelligence” have begun to play a significant role in our lives and social institutions. But they can be used in ways that are harmful, which we’re already beginning to see.

In the “robodebt” affair, for instance, the Australian government welfare agency Centrelink used data-matching and automated decision-making to issue (often incorrect) debt notices to welfare recipients. What’s more, the burden of proof was reversed: individuals were required to prove they didn’t owe the claimed debt.

The New South Wales government has also begun using AI to identify drivers using mobile phones. This involves expanded public surveillance via mobile phone detection cameras that use AI to automatically detect a rectangular object in the driver’s hands and classify it as a phone.

Facial recognition is another AI application under intense scrutiny around the world. This is due to its potential to undermine human rights: it can be used for widespread surveillance and suppression of public protest, and programmed bias can lead to inaccuracy and racial discrimination. Some have even called for a moratorium or outright ban because it is so dangerous.

In several countries, including Australia, AI is being used to predict how likely a person is to commit a crime. Such predictive methods have been shown to affect Indigenous youth disproportionately and lead to oppressive policing practices.

AI that assists train drivers is also coming into use, and in future we can expect to see self-driving cars and other autonomous vehicles on our roads. Lives will depend on this software.

The European approach

Once we’ve decided that AI needs to be regulated, there is still the question of how to do it. Authorities in the European Union have recently made a set of proposals for how to regulate AI.

The first step, they argue, is to assess the risks AI poses in different sectors such as transport, healthcare, and government applications such as migration, criminal justice and social security. They also look at AI applications that pose a risk of death or injury, or have an impact on human rights such as the rights to privacy, equality, liberty and security, freedom of movement and assembly, social security and standard of living, and the presumption of innocence.

The greater the risk an AI application is deemed to pose, the more regulation it would face. The regulations would cover everything from the data used to train the AI and how records are kept, to how transparent the creators and operators of the system must be, testing for robustness and accuracy, and requirements for human oversight. This would include certification and assurances that the use of AI systems is safe and does not lead to discriminatory or dangerous outcomes.

While the EU’s approach has strong points, even apparently “low-risk” AI applications can do real harm. For example, recommendation algorithms in search engines are discriminatory too. The EU proposal has also been criticised for seeking to regulate facial recognition technology rather than banning it outright.

The EU has led the world on data protection regulation. If the same happens with AI, these proposals are likely to serve as a model for other countries and apply to anyone doing business with the EU, or even with EU residents.

What’s happening in Australia?

In Australia there are some applicable laws and regulations, but there are many gaps, and they are not always enforced. The situation is made more difficult by the lack of human rights protections at the federal level.

One prominent attempt at drawing up some rules for AI came last year from Data61, the data and digital arm of CSIRO. It developed an AI ethics framework built around eight ethical principles for AI.

These ethical principles aren’t entirely irrelevant (number two is “do no harm”, for instance), but they are unenforceable and therefore largely meaningless. Ethics frameworks like this one for AI have been criticised as “ethics washing” and as a ploy for industry to avoid hard law and regulation.

Another attempt is the Human Rights and Technology project of the Australian Human Rights Commission. It aims to protect and promote human rights in the face of new technology.

We are likely to see some changes following the Australian Competition and Consumer Commission’s recent inquiry into digital platforms. And a long overdue review of the Privacy Act 1988 (Cth) is slated for later this year.

These initiatives will hopefully strengthen Australian protections in the digital age, but there is still much work to be done. Stronger human rights protections would be an important step in this direction, providing a foundation for regulation.

Before AI is adopted even more widely, we need to understand its impacts and put protections in place. To realise the potential benefits of AI, we must ensure it is governed appropriately. Otherwise, we risk paying a heavy price as individuals and as a society.

This article was originally published at theconversation.com