The latest generation of artificial intelligence (AI), exemplified by ChatGPT, will revolutionise the way we live and work. AI technologies could significantly improve education, healthcare, transport and welfare. But there are downsides, too: jobs automated out of existence, surveillance abuses, and discrimination, including in healthcare and policing.

There is general agreement that AI must be regulated, given its awesome potential for good and harm. The EU has proposed one approach, based on potential problems. The UK is proposing a different, pro-business approach.

This year, the UK government published a white paper (a policy document setting out plans for future laws) unveiling how it intends to regulate AI, with an emphasis on flexibility to avoid stifling innovation. The document favours voluntary compliance, with five principles meant to tackle AI risks.

Strict enforcement of these principles by regulators could be added later if required. But is such an approach too lenient given the risks?

Crucial components

The UK approach differs from the EU’s risk-based regulation. The EU’s proposed AI Act prohibits certain AI uses in public spaces, such as live facial recognition technology, where people shown on a camera feed are compared against police “watch lists”.

The EU approach creates stringent standards for so-called high-risk AI systems. These include systems used to assess job applications, student admissions, and eligibility for loans and public services.

I believe the UK’s approach better balances AI’s risks and benefits, fostering innovation that benefits the economy and society. However, critical challenges must be addressed.

The EU’s AI Act would prohibit live facial recognition by police forces in public spaces.
Gorodenkoff / Shutterstock

The UK approach to AI regulation has three crucial components. First, it relies on existing legal frameworks such as privacy, data protection and product liability laws, rather than introducing new AI-centred legislation.

Second, five general principles – each consisting of several components – will be applied by regulators alongside existing laws. These principles are (1) “safety, security and robustness”, (2) “appropriate transparency and explainability”, (3) “fairness”, (4) “accountability and governance”, and (5) “contestability and redress”.

During initial implementation, regulators would not be legally required to enforce the principles. A statute imposing these obligations would be enacted later, if considered necessary. Organisations would therefore be expected to comply with the principles voluntarily in the first instance.

Third, regulators could adapt the five principles to the sectors they cover, with support from a central coordinating body. So there will not be a single enforcement authority.

Promising approach?

The UK’s regime is promising for three reasons. First, it promises to use evidence about AI in its proper context, rather than applying an example from one area to another inappropriately.

Second, it is designed so that rules can be easily tailored to the requirements of AI used in different areas of everyday life. Third, there are benefits to its decentralised approach: a single regulatory organisation, were it to underperform, would affect AI use across the board.

Let’s look at how it would use evidence about AI. As AI’s risks are yet to be fully understood, predicting future problems involves guesswork. To fill the gap, evidence with no relevance to a particular use of AI could be appropriated to propose drastic and inappropriate regulatory solutions.

For instance, some US internet companies use algorithms to determine a person’s gender based on their facial appearance. These showed poor performance when presented with photos of darker-skinned women.

This finding has been cited in support of a ban on law enforcement use of facial recognition technology in the UK. However, the two areas are quite different, and problems with gender classification do not imply a similar issue with facial recognition in law enforcement.

These US gender classification algorithms operate under relatively lower legal standards. Facial recognition used by UK law enforcement undergoes rigorous testing and is deployed under strict legal requirements.

Driverless car.
Some AI applications, such as driverless cars, could fall under more than one regulatory regime.
riopatuca / Shutterstock

Another advantage of the UK approach is its adaptability. It can be difficult to predict potential risks, particularly with AI that could be used for purposes other than those foreseen by its developers, and with machine learning systems, which improve in their performance over time.

The framework allows regulators to quickly address risks as they arise, avoiding lengthy debates in parliament. Responsibilities will be spread between different organisations. Centralising AI oversight under a single national regulator could lead to inefficient enforcement.

Regulators with expertise in specific areas such as transport, aviation and financial markets are better suited to oversee the use of AI within their fields.

This decentralised approach could minimise the effects of corruption, of regulators becoming preoccupied with concerns other than the public interest, and of differing approaches to enforcement. It also avoids a single point of enforcement failure.

Enforcement and coordination

Some businesses could resist voluntary standards, so, if and when regulators are granted enforcement powers, they should be able to issue fines. The public should also have the right to seek compensation for harms caused by AI systems.

Enforcement need not undermine flexibility. Regulators can still tighten or loosen standards as required. However, the UK framework could encounter difficulties where AI systems fall under the jurisdiction of multiple regulators, leading to overlaps. For example, transport, insurance and data protection authorities could all issue conflicting guidelines for self-driving cars.

To tackle this, the white paper suggests establishing a central body, which would ensure the harmonious implementation of guidance. It is vital to compel different regulators to consult this organisation rather than leaving the choice up to them.

The UK approach shows promise for fostering innovation and addressing risks. But to strengthen the country’s position as a world leader in this area, the framework must be aligned with regulation elsewhere, especially in the EU.

Fine-tuning the framework can enhance legal certainty for businesses and bolster public trust. It will also foster international confidence in the UK’s system of regulation for this transformative technology.

This article was originally published at theconversation.com