The word “risk” often appears in the same sentence as “artificial intelligence” these days. While it is encouraging to see world leaders consider the potential problems of AI, alongside its industrial and strategic advantages, we should remember that not all risks are equal.

On Wednesday, June 14, the European Parliament voted to approve its own draft proposal for the AI Act, a piece of legislation two years in the making, with the ambition of shaping global standards in the regulation of AI.

After a final stage of negotiations to reconcile the different drafts produced by the European Parliament, Commission and Council, the law should be approved before the end of the year. It will become the first legislation in the world dedicated to regulating AI in almost all sectors of society – although defence will be exempt.

Of all the ways one could approach AI regulation, it is worth noting that this legislation is framed entirely around the notion of risk. It is not AI itself that is being regulated, but rather the way it is used in specific domains of society, each of which carries different potential problems. The four categories of risk, each subject to different legal obligations, are: unacceptable, high, limited and minimal.

Systems deemed to pose a threat to fundamental rights or EU values will be categorised as carrying an “unacceptable risk” and will be prohibited. An example of such a risk would be AI systems used for “predictive policing”: the use of AI to make risk assessments of individuals, based on personal information, to predict whether they are likely to commit crimes.

A more controversial case is the use of facial recognition technology on live street camera feeds. This has also been added to the list of unacceptable risks, and would only be allowed after the commission of a crime and with judicial authorisation.

Systems classified as “high risk” will be subject to disclosure obligations and are expected to be registered in a special database. They will also be subject to various monitoring and auditing requirements.

The kinds of applications due to be classified as high risk include AI that could control access to services in education, employment, financing, healthcare and other critical areas. Using AI in such areas is not seen as undesirable, but oversight is essential because of its potential to negatively affect safety or fundamental rights.

The idea is that we should be able to trust that any software making decisions about our mortgage will be rigorously checked for compliance with European laws, to ensure we are not being discriminated against on the basis of protected characteristics such as sex or ethnic background – at least if we live in the EU.

Ursula von der Leyen proposed the legislation in 2019.
Olivier Hostel / EPA

“Limited risk” AI systems will be subject to minimal transparency requirements. Similarly, operators of generative AI systems – for example, bots producing text or images – will have to disclose that users are interacting with a machine.

During its long journey through the European institutions, which began in 2019, the legislation has become increasingly specific and explicit about the potential risks of deploying AI in sensitive situations – as well as how these can be monitored and mitigated. Much more work needs to be done, but the idea is clear: we need to be specific if we want to get things done.

Risk of extinction?

By contrast, we have recently seen petitions calling for mitigation of a presumed “risk of extinction” posed by AI, giving no further details. Various politicians have echoed these views. This generic and very long-term risk is quite different from the kind that shapes the AI Act, because it does not provide any detail about what we should be looking out for, nor what we should do now to protect against it.

If “risk” is the “expected harm” that may come from something, then we would do well to focus on possible scenarios that are both harmful and probable, because these carry the highest risk. Very improbable events, such as an asteroid collision, should not take priority over more probable ones, such as the effects of pollution.

Generic risks, like the potential for human extinction, aren’t mentioned in the act.
Zsolt Biczo / Shutterstock

In this sense, the draft legislation that has just been approved by the EU parliament has less flash but more substance than some of the recent warnings about AI. It attempts to walk the fine line between protecting rights and values without preventing innovation, and it specifically addresses both dangers and remedies. While far from perfect, it at least provides concrete actions.

The next stage in the journey of this legislation will be the trilogues – three-way dialogues in which the separate drafts of the parliament, commission and council will be merged into a final text. Compromises are expected in this phase. The resulting law will be voted into force, probably at the end of 2023, before campaigning starts for the next European elections.

After two or three years, the act will take effect, and any business operating within the EU will have to comply with it. This long timeline poses some questions of its own, because we do not know how AI, or the world, will look in 2027.

Let’s remember that the president of the European Commission, Ursula von der Leyen, first proposed this regulation in the summer of 2019, just before a pandemic, a war and an energy crisis. This was also before ChatGPT got politicians and the media talking regularly about an existential risk from AI.

However, the act is written in a sufficiently general way that may help it remain relevant for some time. It may well influence how researchers and businesses approach AI beyond Europe.

What is clear, however, is that every technology poses risks, and rather than waiting for something negative to happen, academic and policymaking institutions are trying to think ahead about the consequences of research. Compared with the way we adopted previous technologies – such as fossil fuels – this does represent a degree of progress.
