Today, federal Minister for Industry and Science Ed Husic released the Australian government's interim response on the safe and responsible use of artificial intelligence (AI).

The public, especially the Australian public, have real concerns about AI. And it's right that they should.

AI is a powerful technology entering our lives rapidly. By 2030, it may grow the Australian economy by 40%, adding A$600 billion to our annual gross domestic product. A recent International Monetary Fund report estimates AI will affect 40% of jobs worldwide, and 60% of jobs in advanced economies like Australia.

In half of those jobs, the impacts may be positive, lifting productivity and reducing drudgery. But in the other half, the impacts could be negative, taking away work, even eliminating some jobs entirely. Just as lift attendants and secretaries in typing pools had to move on and find new careers, so might truck drivers and law clerks.

Perhaps not surprisingly then, in a recent Ipsos survey of 31 countries, Australia was the nation most nervous about AI. Some 69% of Australians, compared with just 23% of people in Japan, were apprehensive about the use of AI. And only 20% of us thought it would improve the job market.

The Australian government's recent interim response is therefore to be welcomed. It is a somewhat delayed reply to last year's public consultation on AI, which received over 500 submissions from business, civil society and academia. I contributed to several of those submissions.

Minister for Industry Ed Husic speaks to the media at a press conference at Parliament House in Canberra, Wednesday, January 17, 2024.
AAP Image/Mick Tsikas

What are the main points in the government's response on AI?

Like any good plan, the federal government’s response has three legs.

First, there's a plan to work with industry to develop voluntary AI safety standards. Second, there's a plan to work with industry to develop options for voluntary labelling and watermarking of AI-generated materials. And finally, the government will set up an expert advisory body to "support the development of options for mandatory AI guardrails".

These are all good ideas. The International Organisation for Standardisation has been working on AI standards for several years. For example, Standards Australia recently helped launch a new international standard that supports the responsible development of AI management systems.

An industry group including Microsoft, Adobe, Nikon and Leica has developed open tools for labelling and watermarking digital content. Keep an eye out for the new "Content Credentials" logo that is starting to appear on digital content.

And the New South Wales government set up an 11-member advisory committee of experts back in 2021 to advise it on the appropriate use of artificial intelligence.

Person holding phone with ChatGPT logo displayed
OpenAI's ChatGPT is one of the large language model applications that have sparked concerns about copyright and the mass production of AI-generated content.
Mojahid Mottakin/Unsplash

A little late?

It is hard not to conclude, then, that the government's latest response is a little light and a little late.

Over half the world's democracies get to vote this year. Over four billion people will go to the polls. And we are set to see AI transform those elections.

We have already seen deepfakes used in recent elections in Argentina and Slovakia. The Republican party in the US has put out a campaign advert made entirely with AI-generated imagery.

Are we prepared for a world in which everything you see or hear could be fake? And will voluntary guidelines be enough to protect the integrity of these elections? Sadly, many of the tech companies are cutting staff in this area, just at the time when they are needed the most.

The European Union has led the way in the regulation of AI, having begun drafting legislation back in 2020. Yet the EU AI Act is still a year or so away from coming into force. This emphasises how far behind Australia is.

A risk-based approach

Like the EU, the Australian government's interim response proposes a risk-based approach. There are plenty of harmless uses of AI that are of little concern. For example, you probably get a lot less spam email thanks to AI filters, and little regulation is needed to ensure those filters do a decent job.

But there are other areas, such as the judiciary and policing, where the impact of AI could be more problematic. What if AI discriminates in deciding who gets interviewed for a job? Or if bias in facial recognition technology leads to even more Indigenous people being wrongly incarcerated?

The interim response identifies such risks but takes few concrete steps to avoid them.

Diagram of impacts through the AI lifecycle, as summarised in the Australian government's interim response.
Australian Government

However, the biggest risk the report fails to address is the risk of missing out. AI is a vast opportunity, as great as or greater than the internet.

When the United Kingdom government put out a similar report on AI risks last year, it addressed this risk by announcing another 1 billion pounds (A$1.9 billion) of investment, on top of the more than 1 billion pounds it had already invested.

The Australian government has so far announced less than A$200 million. Our economy and population are around a third the size of the UK's. Yet our investment to date has been 20 times smaller. We risk missing the boat.
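The back-of-the-envelope arithmetic behind that comparison can be checked directly. This is a rough sketch using only the figures quoted above; the exchange rate is inferred from the article's own conversion (1 billion pounds ≈ A$1.9 billion), and the totals are round-number approximations, not official budget figures.

```python
# Rough check of the UK vs Australia AI investment comparison,
# using only the round figures quoted in the article above.

# Two rounds of roughly 1 billion pounds each, converted at the
# article's implied rate of 1 billion pounds ~= A$1.9 billion.
uk_investment_aud = 2 * 1.9e9

# "Less than A$200 million" -- treat A$200 million as the upper bound.
au_investment_aud = 200e6

ratio = uk_investment_aud / au_investment_aud
print(f"UK investment is roughly {ratio:.0f} times Australia's")

# If Australia's economy is about a third the size of the UK's,
# a proportional commitment would be about a third of the UK total.
proportional_aud = uk_investment_aud / 3
print(f"A proportional commitment would be about A${proportional_aud / 1e9:.1f} billion")
```

The ratio comes out just under 20, consistent with the "20 times smaller" claim, and a proportional commitment would be several times larger than what has been announced so far.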
