The world missed the boat with social media. It fuelled misinformation, fake news and polarisation. We saw the harms too late, once they had already begun to have a substantive impact on society.

With artificial intelligence – especially generative AI – we’re earlier to the party. Not a day goes by without a new deepfake, open letter, product release or interview raising the public’s concern.

In response, the Australian government has just released two important documents. One is a report commissioned by the National Science and Technology Council (NSTC) on the opportunities and risks posed by generative AI, and the other is a consultation paper asking for input on possible regulatory and policy responses to those risks.

I was one of the external reviewers of the NSTC report. I’ve read both documents carefully so you don’t have to. Here’s what you need to know.

Trillions of life-changing opportunities

With AI, we see a multi-trillion-dollar industry coming into existence before our eyes – and Australia could be well placed to benefit.

In the past few months, two local unicorns (billion-dollar companies) pivoted to AI. Online graphic design company Canva introduced its “magic” AI tools to generate and edit content, and software development company Atlassian introduced “Atlassian Intelligence” – a new virtual teammate to help with tasks such as summarising meetings and answering questions.

These are only two examples. We see many other opportunities across industry, government, education and health.

AI tools to predict early signs of Parkinson’s disease? Tick. AI tools to predict when solar storms will hit? Tick. Checkout-free, grab-and-go shopping, courtesy of AI? Tick.

The list of ways AI can improve our lives seems endless.

What about the risks?

The NSTC report outlines the most obvious risks: job displacement, misinformation and polarisation, wealth concentration and regulatory misalignment.

For example, are entry-level lawyers going to be replaced by robots? Are we going to drown in a sea of deepfakes and computer-generated tweets? Will big tech companies capture even more wealth? And how can little old Australia have a say on global changes?

The Australian government’s consultation paper looks at how different nations are responding to these challenges. These include the US, which is adopting a light-touch approach with voluntary codes and standards; the UK, which looks to empower existing sector-specific regulators; and Europe, whose forthcoming AI Act is one of the first AI-specific regulations.

Europe’s approach is worth watching if its previous data protection law – the General Data Protection Regulation (GDPR) – is anything to go by. The GDPR has proved somewhat viral; 17 countries outside Europe now have similar privacy laws.

We can expect the European Union’s AI Act to set a similar precedent on how to regulate AI.

The European Union’s GDPR regulations came into effect on May 25 2018, and have become a model for other nations around the world.

Indeed, the Australian government’s consultation paper specifically asks whether we should adopt a similar risk- and audit-based approach to the AI Act. The Act outlaws high-risk AI applications, such as AI-driven social scoring systems (like the system in use in China) and real-time remote biometric identification systems used by law enforcement in public spaces. Other risky applications are allowed only after suitable safety audits.

China stands somewhat apart as far as regulating AI goes. It proposes to implement very strict rules, which would require AI-generated content to reflect the “core value of socialism”, “respect social morality and public order”, and not “subvert state power”, “undermine national unity” or encourage “violence, extremism, terrorism or discrimination”.

In addition, AI tools will need to undergo a “security review” before release, and to verify users’ identities and track usage.

It seems unlikely Australia will have the appetite for such strict state control over AI. Nonetheless, China’s approach reinforces how powerful AI is going to be, and how important it is to get it right.

Existing rules

As the government’s consultation paper notes, AI is already subject to existing rules. These include general regulations (such as privacy and consumer protection laws that apply across industries) and sector-specific regulations (such as those that apply to financial services or therapeutic goods).

One of the main goals of the consultation is to decide whether to strengthen these existing rules or, as the EU has done, to introduce specific AI risk-based regulation – or perhaps some combination of these two approaches.

Government itself is a (potential) major user of AI and therefore has a big role to play in setting regulatory standards. For example, procurement rules used by government can become de facto standards across other industries.

Missing the boat

The biggest risk, in my view, is that Australia misses this opportunity.

A few weeks ago, when the UK government announced its approach to dealing with the risks of AI, it also announced an additional £1 billion of investment in AI, alongside the several billion pounds already committed.

We have yet to see any such ambition from the Australian government.

The technologies that gave us the iPhone, the internet, GPS and wifi came about thanks to government investment in fundamental research and in training scientists and engineers. They didn’t come into existence because of venture funding in Silicon Valley.

We’re still waiting to see the government invest millions (or even billions) of dollars in fundamental research, and in the scientists and engineers who will allow Australia to compete in the AI race. There is still everything to play for.

AI is going to touch everyone’s lives, so I strongly encourage you to have your say. You only have eight weeks to do so.
