Last week, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.

An open letter published by the Future of Life Institute cautioned that AI systems with “human-competitive intelligence” could become a profound risk to humanity. Among the risks is the possibility of AI outsmarting humans, rendering us obsolete, and taking control of civilisation.

The letter emphasises the need to develop a comprehensive set of protocols to govern the development and deployment of AI. It states:

These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

Typically, battles over regulation have pitted governments and large technology companies against one another. But the recent open letter – so far signed by more than 5,000 signatories including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – seems to suggest more parties are finally converging on one side.

Could we really implement a streamlined, global framework for AI regulation? And if so, what would this look like?

What regulation already exists?

In Australia, the federal government has established the National AI Centre to help develop the nation’s AI and digital ecosystem. Under this umbrella is the Responsible AI Network, which aims to drive responsible practice and provide leadership on laws and standards.

However, there is currently no specific regulation of AI and algorithmic decision-making in place. The government has taken a light-touch approach that broadly embraces the concept of responsible AI, but stops short of setting parameters that would ensure it is achieved.

Similarly, the US has adopted a hands-off strategy. Lawmakers have shown no urgency in attempts to regulate AI, and have relied on existing laws to govern its use. The US Chamber of Commerce recently called for AI regulation, to ensure it doesn’t hurt growth or become a national security risk, but no action has been taken yet.

Leading the way in AI regulation is the European Union, which is racing to create an Artificial Intelligence Act. This proposed law will assign AI applications to three risk categories:

  • applications and systems that create “unacceptable risk” will be banned, such as the government-run social scoring used in China
  • applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and
  • all other applications will be largely unregulated.

Although some groups argue the EU’s approach will stifle innovation, it’s one Australia should closely monitor, because it balances offering predictability with keeping pace with the development of AI.

China’s approach to AI has focused on targeting specific algorithmic applications and writing regulations that address their deployment in certain contexts – for instance, algorithms that generate harmful information. While this approach offers specificity, it risks having rules that quickly fall behind rapidly evolving technology.

The pros and cons

There are several arguments both for and against letting caution drive the regulation of AI.

On one hand, AI is celebrated for being able to generate all kinds of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias, plagiarise and – of course – has some experts worried about humanity’s collective future. Even OpenAI’s CTO, Mira Murati, has suggested there should be movement toward regulating AI.

Some scholars have argued excessive regulation may hinder AI’s full potential and interfere with “creative destruction” – a theory suggesting long-standing norms and practices must be pulled apart in order for innovation to thrive.

Likewise, business groups have pushed over the years for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And industry associations have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to regulate adequately.

But citizens seem to favour more oversight. According to reports by Bristows and KPMG, about two-thirds of Australian and British people believe the AI industry should be regulated and held accountable.

What’s next?

A six-month pause on the development of advanced AI systems could offer welcome respite from an AI arms race that just doesn’t seem to be letting up. However, to date there has been no effective global effort to meaningfully regulate AI. Efforts around the world have been fractured, delayed and generally lax.

A global moratorium would be difficult to enforce, but not impossible. The open letter raises questions about the role of governments, which have largely been silent on the potential harms of extremely capable AI tools.

If anything is to change, governments and national and supra-national regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.

Governments should therefore engage with industry to co-develop a global framework that lays out comprehensive rules governing AI development. This is the best way to protect against harmful impacts and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants struggle for dominance over the future of AI.

This article was originally published at theconversation.com