Artificial intelligence (AI) will have serious societal impacts globally. So it is more urgent than ever that state leaders cooperate to govern the technology.

There have been various calls already: the Bletchley Declaration at a recent UK summit and the 11 AI principles and code of conduct agreed on by G7 leaders, for instance. But these largely state the obvious. The real question is not whether international cooperation on AI is required, but how it can be realised.

The most obvious way to achieve this in a manner that maximises the benefits of AI, and puts in "guardrails" – controls – to manage the many risks posed, is to establish an intergovernmental body.

Indeed, one idea is to create a World Technology Organisation. Others advocate for a body similar to the International Atomic Energy Agency (IAEA), drawing a comparison between AI and nuclear weapons in terms of the risks posed.

Another view is to develop an institutional framework inspired by entities such as Cern, the Human Genome Project, or the International Space Station (ISS).

However, creating an AI or technology-specific international organisation, whatever it might be called, faces three particularly difficult challenges.

Friction between powers

First, with AI being a dual-use technology – meaning it can be deployed for both peaceful and military purposes – it is unlikely that the major powers will be willing to come together to form a global institution that could meaningfully police its development and use.

The so-called chip war between the US and China is in full flow. AI technology is the subject of intense geopolitical competition. This friction between the major powers creates serious hurdles for international cooperation on AI specifically.

In fact, existing international institutions built after the Second World War are already structurally affected by interstate friction. For example, the UN Security Council remains paralysed on the biggest controversies of international concern today.

The Appellate Body of the World Trade Organisation, once one of the most successful international mechanisms for adjudicating trade disputes, is currently dysfunctional because the US refuses to endorse judicial appointments to it. But even before its demise, I argued that it faced significant structural deficits.

Disagreements within the UN Security Council have posed major challenges to resolving armed conflicts.

The major international financial institutions are also facing serious governance challenges. The G20 leaders recently called for reforms to the World Bank and the International Monetary Fund (IMF), and for their roles to become more clearly defined.

With existing international institutions in crisis, it is difficult to imagine that a standalone international organisation to regulate AI could be created any time soon.

What would an AI-focused organisation do?

Second, even if the international community somehow agrees to create an AI- or tech-specific regulatory body, the question remains: what would this organisation actually do? Would it seek to enhance scientific cooperation between different research groups, or would it attempt to coordinate AI regulation across countries?

Would any such organisation create a monitoring regime to ensure that only human-centric, trustworthy and responsible AI is developed? How would such a regime come into operation and carry out enforcement? Would it also be mandated to help developing and least-developed countries realise AI's full potential?

Sovereignty concerns, national security, perceived national interest and, ultimately, divergent approaches to AI mean that reaching a meaningful consensus on what such an organisation should do is likely to remain elusive for some time. Already, we see different decisions being made on AI regulatory frameworks and deployment. While the EU's AI Act outlaws social scoring and real-time facial recognition, authoritarian states take a different approach.

It is thus important not to get carried away by generalised statements from the international community that give the impression a global law on AI may be emerging. Hardly anyone would disagree that society must be protected against the risks posed by AI. Its deployment should not undermine human rights, and it should be safe and trustworthy.

But it is the translation of such generalised principles into specific commitments in international law that poses a significant challenge.

Risk assessments of AI tools may yield different results depending on who carries them out. Which rights should be prioritised – individual rights or security interests – may differ across countries, as would what constitutes ethical forms of AI.

What role for private actors?

The third key challenge in creating a global oversight body relates to the institutional design it should adopt. This includes the role that the private sector is given in any governance framework.

Given the very significant role of the private sector in developing and deploying AI tools, a joint public-private governance model may be the only realistic option. At present, however, it is states that are the central actors in the international community.

Incorporating private corporations into a global governance structure that traditionally privileges states above all else could pose problems. That is a challenge that must be overcome before such an organisation can be created.

Finally, international cooperation on AI already exists to some extent. Organisations including the OECD, UNESCO and the International Organization for Standardization have already developed recommendations or standards within their spheres of expertise.

Other bodies, such as the International Labour Organisation and the World Health Organization, have begun to consider the impact of AI within their mandates.

The UN has also established a High-Level Advisory Body on AI to undertake analysis and advance recommendations for the international governance of this technology. It is too early to say whether this fragmented approach will lead to a well-thought-out and coordinated response.

Until the circumstances are right for creating a standalone AI-focused international organisation, what is almost certain is that powerful actors such as the US – where most major tech companies are based – and the EU, through its AI Act, will have an outsized influence on the content of AI regulation and governance globally.

This article was originally published at theconversation.com