On Monday, US President Joe Biden released a wide-ranging and ambitious executive order on artificial intelligence (AI) – catapulting the US to the front of conversations about regulating AI.

In doing so, the US is leapfrogging over other states in the race to regulate AI. Europe previously led the way with its AI Act, which was passed by the European Parliament in June 2023, but which won't take full effect until 2025.

The presidential executive order is a grab bag of initiatives for regulating AI – some of which are good, and others which seem rather half-baked. It aims to address harms ranging from the immediate, such as AI-generated deepfakes, through intermediate harms such as job losses, to longer-term harms such as the much-disputed existential threat AI may pose to humans.

Biden’s ambitious plan

The US Congress has been slow to pass significant regulation of big tech companies. This presidential executive order is likely both an attempt to sidestep an often deadlocked Congress and a move to kick-start action. For example, the order calls on Congress to pass bipartisan data privacy legislation.

Bipartisan support in the present climate? Good luck with that, Mr President.

The executive order will reportedly be implemented over the next three months to one year. It covers eight areas:

  1. safety and security standards for AI
  2. privacy protections
  3. equity and civil rights
  4. consumer rights
  5. jobs
  6. innovation and competition
  7. international leadership
  8. AI governance.

On one hand, the order covers many concerns raised by academics and the public. For example, one of its directives is to issue official guidance on how AI-generated content may be watermarked to reduce the risk from deepfakes.
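The order leaves the technical details of watermarking to future guidance, but the general idea is easy to illustrate. Below is a minimal Python sketch of one widely discussed family of techniques: nudging a text generator toward a keyed, pseudorandom "green list" of tokens, whose over-representation can later be detected statistically. The key, hashing scheme and detection threshold here are illustrative assumptions, not any official standard.

```python
import hashlib

KEY = "secret-watermark-key"  # hypothetical shared key, for illustration only

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign roughly half of all tokens to a 'green list'
    that depends on the previous token and a secret key."""
    digest = hashlib.sha256(f"{KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens on the green list. Watermarked text (generated with
    a bias toward green tokens) should score well above the ~0.5 expected
    by chance; ordinary text should not."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

text = "the quick brown fox jumps over the lazy dog".split()
print(f"green fraction: {green_fraction(text):.2f}")
```

A real scheme would apply this bias inside the model's sampling step and use proper statistical tests for detection; the sketch only shows why the bias is detectable without storing the generated text anywhere.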

It also requires companies developing AI models to prove they are safe before they can be rolled out for wider use. President Biden said:

that means companies must tell the government about the large-scale AI systems they're developing and share rigorous independent test results to prove they pose no national security or safety risk to the American people.

AI’s potentially disastrous use in warfare

At the same time, the order fails to address a number of pressing issues. For instance, it doesn't directly address how to deal with killer AI robots, a vexing topic that was under discussion over the past two weeks at the General Assembly of the United Nations.

This concern shouldn't be ignored. The Pentagon is developing swarms of low-cost autonomous drones as part of its recently announced Replicator program. Similarly, Ukraine has developed homegrown AI-powered attack drones that can identify and attack Russian forces without human intervention.

Could we end up in a world where machines decide who lives or dies? The executive order merely asks for the military to use AI ethically, but doesn't stipulate what that means.

And what about protecting elections from AI-powered weapons of mass persuasion? A number of outlets have reported on how the recent election in Slovakia may have been influenced by deepfakes. Many experts, myself included, are also concerned about the misuse of AI in the upcoming US presidential election.

Unless strict controls are implemented, we risk living in an age where nothing you see or hear online can be trusted. If this sounds like an exaggeration, consider that the US Republican Party has already released a campaign ad that appears to be entirely generated by AI.

Missed opportunities

Many of the initiatives in the executive order could and should be replicated elsewhere, including in Australia. We too should, as the order requires, provide guidance to landlords, government programs and government contractors on how to ensure AI algorithms aren't being used to discriminate against individuals.

We should also, as the order requires, address algorithmic discrimination in the criminal justice system, where AI is increasingly being used in high-stakes settings, including for sentencing, parole and probation, pre-trial release and detention, risk assessments, surveillance and predictive policing, to name just a few.

AI has controversially been used for such applications in Australia, too, such as in the Suspect Targeting Management Plan used to monitor youths in New South Wales.

Perhaps the most controversial aspect of the executive order is that which addresses the potential harms of the most powerful so-called "frontier" AI models. Some experts believe these models – which are being developed by companies such as OpenAI, Google and Anthropic – pose an existential threat to humanity.

Others, including myself, believe such concerns are overblown and can distract from more immediate harms, such as misinformation and inequity, which are already hurting society.

Biden's order invokes extraordinary war powers (specifically the 1950 Defense Production Act, introduced during the Korean War) to require companies to notify the federal government when training such frontier models. It also requires them to share the results of "red-team" safety tests, in which internal hackers use attacks to probe software for bugs and vulnerabilities.
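The order doesn't define red-teaming beyond that, but the idea is simple to illustrate. Here is a minimal, purely illustrative Python sketch: a toy "safety filter" standing in for a deployed system, and a handful of adversarial probes of the kind a red team might try. The filter and the probe strings are assumptions for illustration, not anything drawn from the order or any real model.

```python
def toy_model_filter(prompt: str) -> str:
    """Stand-in for a model's safety filter (hypothetical): refuses
    requests that contain a banned phrase, answers everything else."""
    banned = ["build a weapon"]
    if any(b in prompt.lower() for b in banned):
        return "REFUSED"
    return "ANSWERED"

# Red-team probes: paraphrases and obfuscations of a banned request.
probes = [
    "build a weapon",
    "BUILD A WEAPON",
    "b u i l d a weapon",             # spacing evades the substring match
    "assemble an improvised weapon",  # paraphrase evades the keyword list
]

for p in probes:
    print(f"{p!r:40} -> {toy_model_filter(p)}")
```

The spaced-out and paraphrased probes slip past the naive filter, which is exactly the kind of weakness red-teaming is meant to surface before a system is released.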

I would say it's going to be difficult, and perhaps impossible, to police the development of frontier models. The above directives won't stop companies from developing such models overseas, where the US government has limited power. The open-source community can also develop them in a distributed fashion – one that makes the tech world "borderless".

The executive order will likely have the greatest impact on the government itself, and how it goes about using AI, rather than on businesses.

Nevertheless, it's a welcome piece of action. UK Prime Minister Rishi Sunak's AI Safety Summit, taking place over the next two days, now looks to be something of a diplomatic talkfest in comparison.

It does make one envious of the presidential power to get things done.

This article was originally published at theconversation.com