This week, federal Minister for Industry and Science Ed Husic announced the Australian government’s response to the Safe and Responsible AI in Australia consultation.

The response addresses feedback from last year’s consultation on artificial intelligence (AI). It received more than 500 submissions, noting “excitement for the opportunities” of AI tools, but also raising concerns about potential risks and Australians’ expectations for “regulatory safeguards to prevent harms”.

Instead of enacting a single AI regulatory law like the European Union has done, the Australian government plans to focus on high-risk areas of AI implementation – ones with the greatest potential for harm. This could include examples such as discrimination in the workplace, the justice system, surveillance or self-driving cars.

The government also plans to create a temporary expert advisory group to support the development of these guardrails.



How do we define ‘high-risk’ AI?

While this proportional response may be welcomed by some, focusing on high-risk areas with only a temporary advisory body raises significant questions:

  • how will high-risk areas be defined – and who makes that call?

  • should low-risk AI applications face similar regulation, when some interventions (such as requiring watermarks for AI-generated content) could broadly combat misinformation?

  • without a permanent advisory board, how can organisations anticipate risks for new AI technologies and new applications of AI tools in the future?

Assessing “risk” in using new technologies is not new. We have many existing principles, guidelines, and regulations that can be adapted to address concerns about AI tools.

For example, many Australian sectors are already highly regulated to address safety concerns, such as vehicles and medical devices.

In all research involving people, Australian researchers must comply with national guidelines where risk assessment practices are well defined:

  • identifying the risks and who might be at risk of harm;

  • assessing the likelihood, severity and magnitude of risk;

  • considering strategies to minimise, mitigate, and/or manage risks;

  • identifying potential benefits, and who may benefit; and

  • weighing the risks and determining whether the risks are justified by potential benefits.

This risk assessment is done before the research begins, with significant review and oversight by Human Research Ethics Committees. The same approach could be used for AI risk assessment.

AI is already in our lives

One significant problem with AI regulation is that many tools are already used in Australian homes and workplaces, but without regulatory guardrails to manage risks.

A recent YouGov report found 90% of Australian workers used AI tools for daily tasks, despite serious limitations and flaws. AI tools can “hallucinate” and present fake information to users. The lack of transparency about training data raises concerns about bias and copyright infringement.

Consumers and organisations need guidance on appropriate adoption of AI tools to manage risks, but many uses are outside “high risk” areas.

Defining “high risk” settings is difficult. The concept of “risk” sits on a spectrum and is not absolute. Risk is not determined by a tool itself, or the setting where it’s used. Risk arises from contextual factors that create potential for harm.

For example, while knitting needles pose little risk in everyday life, knitters are cautioned against carrying metal needles on airplanes. Airport security views these as “dangerous” tools and restricts their use in this setting to prevent harm.

To identify “high risk” settings we must understand how AI tools work. Knowing AI tools can lead to gender discrimination in hiring practices means all organisations must manage risk in recruitment. Not understanding the limitations of AI, like the American lawyer who trusted fake case law generated by ChatGPT, highlights the risk of human error in AI tool use.

Risks posed by people and organisations in using AI tools should be managed alongside risks posed by the technology itself.



Who will advise the federal government?

The government notes in its response that the expert advisory body on AI risks will need “diverse membership and expertise from across industry, academia, civil society and the legal profession”.

Within industry, membership should include various sectors (such as healthcare, banking, law enforcement) with representation from large organisations and small-to-medium enterprises.

Within academia, membership should include not only AI computing experts, but also social scientists with expertise in consumer and organisational behaviour. They can advise on risk evaluation, ethics, and what people worry about when it comes to adopting new technology, including misinformation, trust and privacy concerns.

The government must also determine how to manage potential future AI risks. A permanent advisory body could manage risks for future technologies and for new uses of existing tools.

Such a body could also advise consumers and workplaces on AI applications at lower levels of risk, particularly where limited or no regulations are in place.

Misinformation is one key area where the limitations of AI tools are known, requiring people to have strong critical thinking and information literacy skills. For example, requiring transparency in the use of AI-generated images can ensure consumers are not misled.

Yet the federal government’s current focus for transparency is limited to “high-risk” settings. This is a start, but more advice – and more regulation – will be needed.

This article was originally published at theconversation.com