The generative AI industry could be worth about A$22 trillion by 2030, according to the CSIRO. These systems – of which ChatGPT is currently the best known – can write essays and code, generate music and artwork, and hold entire conversations. But what happens when they’re turned to illegal uses?

Last week, the streaming community was rocked by a headline linked to the misuse of generative AI. Popular Twitch streamer Atrioc issued a teary-eyed apology video after being caught viewing pornography with the superimposed faces of other women streamers.

The “deepfake” technology needed to Photoshop a celebrity’s head onto a porn actor’s body has been around for some time, but recent advances have made it much harder to detect.

And that’s just the tip of the iceberg. In the wrong hands, generative AI could do untold damage. There’s a lot we stand to lose, should laws and regulation fail to keep up.

The same tools used to make deepfake porn videos might be used to fake a US president’s speech. Credit: Buzzfeed.


From controversy to outright crime

Last month, generative AI app Lensa came under fire for allowing its system to create fully nude and hyper-sexualised images from users’ headshots. Controversially, it also whitened the skin of women of colour and made their features more European.

The backlash was swift. But what’s comparatively ignored is the vast potential to use creative generative AI in scams. At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans (the method most of us use to lock our phones).

Criminals are quickly finding new ways to use generative AI to improve the frauds they already perpetrate. The lure of generative AI in scams comes from its ability to find patterns in large amounts of data.

Cybersecurity has seen an increase in “bad bots”: malicious automated programs that mimic human behaviour to conduct crime. Generative AI will make these much more sophisticated and difficult to detect.

Ever received a scam text from the “tax office” claiming you had a refund waiting? Or perhaps you got a call claiming a warrant was out for your arrest?

In such scams, generative AI could be used to improve the quality of the texts or emails, making them much more believable. For example, in recent years we’ve seen AI systems being used to impersonate important figures in “voice spoofing” attacks.

Then there are romance scams, where criminals pose as romantic interests and ask their targets for money to help them out of financial distress. These scams are already widespread and often lucrative. Training AI on actual messages between intimate partners could help create a scam chatbot that’s indistinguishable from a human.

Generative AI could also allow cybercriminals to more selectively target vulnerable people. For instance, training a system on information stolen from major companies, such as in the Optus or Medibank hacks last year, could help criminals target elderly people, people with disabilities, or people in financial hardship.

Further, these systems can be used to improve computer code, which some cybersecurity experts say will make malware and viruses easier to create and harder for antivirus software to detect.

The technology is here, and we aren’t prepared

Australia’s and New Zealand’s governments have published frameworks relating to AI, but these aren’t binding rules. Both countries’ laws relating to privacy, transparency and freedom from discrimination aren’t up to the task as far as AI’s impact is concerned. This puts us behind the rest of the world.

The US has had a legislated National Artificial Intelligence Initiative in place since 2021. And since 2019 it has been illegal in California for a bot to interact with users for commerce or electoral purposes without disclosing it’s not human.

The European Union is also well on the way to enacting the world’s first AI law. The AI Act bans certain types of AI programs posing “unacceptable risk” – such as those used by China’s social credit system – and imposes mandatory restrictions on “high risk” systems.

Although asking ChatGPT to break the law results in warnings that “planning or carrying out a serious crime can result in severe legal consequences”, the fact is there’s no requirement for these systems to have a “moral code” programmed into them.

There may be no limit to what they can be asked to do, and criminals will likely figure out workarounds for any rules intended to prevent their illegal use. Governments need to work closely with the cybersecurity industry to regulate generative AI without stifling innovation, such as by requiring ethical considerations for AI programs.

The Australian government should use the upcoming Privacy Act review to get ahead of potential threats from generative AI to our online identities. Meanwhile, New Zealand’s Privacy, Human Rights and Ethics Framework is a positive step.

We also need to be more cautious as a society about believing what we see online, and remember that humans are traditionally bad at detecting fraud.

Can you spot a scam?

As criminals add generative AI tools to their arsenal, spotting scams will only get trickier. The classic tips will still apply. But beyond those, we’ll learn a lot from assessing the ways in which these tools fall short.

Generative AI is bad at critical reasoning and conveying emotion. It can even be tricked into giving wrong answers. Knowing when and why this happens could help us develop effective methods to catch cybercriminals using AI for extortion.

There are also tools being developed to detect AI outputs from tools such as ChatGPT. These could go a long way towards stopping AI-based cybercrime if they prove effective.
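For a rough sense of how such detectors are used today, here is a minimal Python sketch built on the publicly released GPT-2 output detector available through the Hugging Face transformers library. The model name, its “Real”/“Fake” labels and the example messages are assumptions for illustration only; no detector of this kind gives definitive answers.

```python
# Minimal sketch: classifying short texts with a public AI-output detector.
# Assumes the "transformers" library and the openly released GPT-2 output
# detector model; scores are hints about machine authorship, not proof.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

samples = [
    "Dear customer, your tax refund of $1,250 is ready. Click the link to claim it.",
    "Spent the weekend repainting the fence and arguing with the dog about it.",
]

for text in samples:
    result = detector(text)[0]  # e.g. {'label': 'Fake', 'score': 0.97} (labels assumed)
    print(f"{result['label']:>5} ({result['score']:.2f}): {text}")
```

Even a simple classifier like this illustrates the trade-off: it can flag suspicious text at scale, but false positives and negatives mean it supports, rather than replaces, human judgement.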



This article was originally published at theconversation.com