Earlier this month, a Hong Kong company lost HK$200 million (A$40 million) in a single deepfake fraud. A worker transferred the funds after a video conference call with scammers who looked and sounded like senior company officials.

Generative AI tools can create image, video and voice replicas of real people saying and doing things they never actually said or did. And these tools have become increasingly easy to access and use.

This can perpetuate intimate image abuse (including things like “revenge porn”), as well as disrupt democratic processes. Many jurisdictions are currently grappling with how to regulate AI deepfakes.

But if you are the victim of a deepfake scam, can you obtain compensation or redress for your losses? The law has not yet caught up.

Who is responsible?

In most cases of deepfake fraud, the fraudsters do not try to outsmart banks and their security systems. Instead, they opt for so-called “push payment” scams, in which victims are tricked into instructing their bank to pay the fraudster.

So if you’re looking for a remedy, there are at least four possible targets:

  1. the fraudster (who has often disappeared)

  2. the social media platform that hosted the fake

  3. the bank that paid out the money on the fraud victim’s instructions

  4. the provider of the AI tool that created the fake.

The short answer is that once the scammer disappears, it is currently unclear whether you are entitled to any redress from the other parties (although that could change in the future).

Let’s see why.

The social media platform

In principle, you could claim damages from a social media platform if it hosts a deepfake designed to defraud you. But there are hurdles to overcome.

Platforms often present themselves as mere conduits of content, meaning they are not legally responsible for what they host. In the United States, platforms are explicitly shielded from this kind of liability. However, in most other common law countries, including Australia, no such protection exists.

The Australian Competition and Consumer Commission (ACCC) is taking Meta (Facebook’s parent company) to court, testing the possibility of holding digital platforms directly liable for deepfake crypto scams where they actively target the scam ads at potential victims.

The ACCC also argues that Meta should be held liable as an accessory to the fraud, because it did not promptly remove the misleading ads after being notified of the problem.

At a minimum, platforms should be responsible for promptly removing deepfake content used for fraudulent purposes. They may already claim to do this, but it could soon become a legal requirement.

The ACCC is suing Meta (Facebook’s parent company) to test whether Facebook can be held liable for targeting victims with scam ads.
Jeff Chiu/AP

The Bank

In Australia, whether a bank must reimburse you for losses from a deepfake fraud is not currently regulated.

The issue was recently considered by the Supreme Court of the United Kingdom, in a case likely to be influential in Australia. It held that banks are not obliged to refuse a customer’s payment instructions merely because the recipient is suspected of being a (deepfake) fraudster, although they are generally obliged to act promptly once the fraud is discovered.

That said, the United Kingdom is introducing a mandatory scheme requiring banks to compensate victims of push payment fraud, at least in some circumstances.

In Australia, the ACCC and others have put forward proposals for a similar scheme, although none is currently in place.

Australian banks are unlikely to be held liable for customers’ fraud losses under current law, but new schemes could require them to compensate victims.
TK Kurikawa/Shutterstock

The provider of AI tools

Providers of generative AI tools are not currently under any legal obligation to make their tools unusable for fraud or deception. In law, there is generally no duty of care to the world at large to prevent fraud by others.

However, generative AI providers can use technology to reduce the likelihood of their tools producing convincing deepfakes. Like banks and social media platforms, they may soon be required to do so, at least in some jurisdictions.

The recently proposed EU AI Act requires providers of generative AI tools to design them so that synthetic or fake content can be detected.

One suggestion is that this could be done through digital watermarking, although its effectiveness is still debated. Other proposed measures include time stamps, digital identity verification to confirm who a person is, and better education about the telltale signs of deepfakes.
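To give a concrete sense of the watermarking idea, here is a minimal sketch (not any real provenance standard, and far simpler than the robust schemes the EU AI Act contemplates): a hidden bit pattern is embedded in the least significant bit of each pixel of a synthetic image, where a detector can later read it back. The function names and the 8×8 toy image are illustrative assumptions only.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of each pixel with a mark bit.

    Toy illustration only: real watermarks must survive compression,
    cropping and re-encoding, which this scheme does not.
    """
    return (pixels & 0xFE) | (mark & 1)

def extract_watermark(pixels: np.ndarray) -> np.ndarray:
    """Read back the least significant bit of each pixel."""
    return pixels & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "AI image"
mark = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)     # hidden bit pattern

marked = embed_watermark(image, mark)
# The mark is recoverable, and each pixel changed by at most 1 intensity level,
# so the watermark is imperceptible to a viewer.
assert np.array_equal(extract_watermark(marked), mark)
assert np.max(np.abs(marked.astype(int) - image.astype(int))) <= 1
```

The debate over effectiveness is visible even in this sketch: because the mark lives in the least significant bits, any re-compression or screenshot of the image destroys it.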

Can we stop deepfake scams completely?

None of these legal or technical protections is likely to be entirely effective in stemming the tide of deepfake fraud and deception, particularly as generative AI technology continues to advance.

However, the response does not have to be perfect: slowing the spread of AI-generated fakes and scams can still reduce the harm. We should also keep up the pressure on platforms, banks and technology providers to stay on top of the risks.

While you may never be able to completely avoid becoming the victim of a deepfake scam, with these new legal and technological developments you may soon be able to seek compensation when something goes wrong.

As audio, video and image deepfakes become more realistic, we need multi-layered strategies for prevention, education and compensation.

This article was originally published at theconversation.com