The significant risks that AI poses to global security have become clearer. That’s partly why UK prime minister Rishi Sunak is hosting other world leaders at the AI Safety Summit on November 1-2 at Bletchley Park, the famous second world war code-breaking site. Yet while the technology of AI is developing at an alarming pace, the true threat may come from governments themselves.

The track record of AI development over the past 20 years provides a range of evidence of government misuse of the technology around the globe. This includes excessive surveillance practices and the harnessing of AI for the spread of disinformation.

Although recent focus has been on private firms that develop AI products, governments aren’t the impartial arbiters they might appear to be at this AI summit. Instead, they have played a role that is just as integral to the way AI has developed – and they will continue to do so.

Militarising AI

There are continual reports that the leading technological nations are entering an AI arms race. No single state really began this race. Its development has been complex, and many groups – from inside and outside governments – have played a role.

During the cold war, US intelligence agencies became interested in the use of artificial intelligence for surveillance, nuclear defence and the automated interrogation of spies. It is therefore not surprising that in more recent years, the integration of AI into military capabilities has proceeded apace in other countries, such as the UK.

Automated technologies developed for use in the war on terror have fed into the development of powerful AI-based military capabilities, including AI-powered drones (unmanned aerial vehicles) that are being deployed in current conflict zones.

Russia’s president, Vladimir Putin, has declared that the country that leads in AI technology will rule the world. China has also declared its own intent to become an AI superpower.

Surveillance states

The other major concern here is the use of AI by governments in the surveillance of their own societies. As governments have seen domestic threats to security grow, including from terrorism, they have increasingly deployed AI domestically to enhance the security of the state.

In China, this has been taken to extreme degrees, with the use of facial recognition technologies, social media algorithms and internet censorship to control and surveil populations, including in Xinjiang, where AI forms an integral part of the oppression of the Uyghur population.

But the west’s track record isn’t great either. In 2013, it was revealed that the US government had developed autonomous tools to collect and sift through huge amounts of data on people’s internet usage, ostensibly for counter-terrorism. It was also reported that the UK government had access to these tools. As AI develops, its use in surveillance by governments is a major concern to privacy campaigners.

Meanwhile, borders are policed by algorithms and facial recognition technologies, which are increasingly being deployed by domestic police forces. There are also wider concerns about “predictive policing”: the use of algorithms to predict crime hotspots (often in ethnic minority communities) that are then subjected to extra policing effort.

These recent and current trends suggest governments may not be able to resist the temptation to use increasingly sophisticated AI in ways that raise concerns about surveillance.

Governing AI?

Despite the good intentions of the UK government in convening its safety summit, and its aim to become a world leader in the safe and responsible use of AI, the technology will require serious and sustained efforts at the international level for any kind of regulation to be effective.

Governance mechanisms are starting to emerge, with the US and EU recently introducing significant new regulation of AI.

But governing AI at the international level is fraught with difficulties. There will of course be states that sign up to AI regulations and then ignore them in practice.

Western governments are also faced with arguments that overly strict regulation of AI will allow authoritarian states to fulfil their aspirations to take the lead on the technology. But allowing firms to “rush to release” new products risks unleashing systems that could have huge unforeseen consequences for society. Just look at how advanced text-generating AI such as ChatGPT could increase misinformation and propaganda.

And not even the developers themselves understand exactly how advanced algorithms work. Puncturing this “black box” of AI technology will require sophisticated and sustained investment in testing and verification capabilities by national authorities. But those capabilities and authorities don’t exist at the moment.

The politics of fear

We’re used to hearing in the news about a super-intelligent form of AI threatening human civilisation. But there are reasons to be wary of such a mindset.

As my own research highlights, the “securitisation” of AI – that is, presenting the technology as an existential threat – could be used as an excuse by governments to grab power, to misuse it themselves, or to take narrow, self-interested approaches to AI that fail to harness the potential benefits it could confer on everyone.

Rishi Sunak’s AI summit would be a great opportunity to highlight that governments should keep the politics of fear out of efforts to bring AI under control.

This article was originally published at theconversation.com