A deepfake video featuring Kari Lake, created by the digital news outlet Arizona Agenda, surfaced online.

In the video, an AI-generated Lake endorses the Arizona Agenda, stating, “Subscribe to the Arizona Agenda for hard-hitting real news.”

It then delivers a message about the role of AI in elections: “And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.”

The video deceived even Hank Stephenson, Arizona Agenda’s co-founder and journalist, who admitted, “When we started doing this, I thought it was going to be so bad it wouldn’t trick anyone, but I was blown away.”

The video, though perhaps benign in its intent, backfired spectacularly. Lake’s team responded with a cease-and-desist letter demanding that the video be removed from all platforms immediately, warning that failure to comply would result in legal action.

In the face of this legal pressure, Stephenson revealed he has consulted with lawyers about how to respond. He believes these deepfakes are vital learning tools, stating, “Fighting this new wave of technological disinformation this election cycle is on all of us.”

A fair point? Possibly, but creating self-promotional deepfakes of individuals without their permission isn’t an ideal way to go about it.

As one commenter on Reddit put it, “It doesn’t matter who it is; deepfakes made to deceive people when it comes to politics are dangerous and wrong.”

The issue extends beyond individual incidents into the broader political arena. Donald Trump himself has previously accused opponents of using AI-generated content against him, showing how deepfakes are both a weapon and a vulnerability.

AI misuse in elections is a global trend, hitting countries from Slovakia to Indonesia. Digital tactics for influencing political outcomes through deception have become more diverse and realistic.

Regulatory moves, including the Federal Communications Commission (FCC) banning certain AI-generated robocalls and the formation of a bipartisan task force to explore AI regulation, indicate steps toward addressing deepfakes.

However, with AI technology advancing swiftly, the Federal Election Commission (FEC) has yet to establish rules governing AI in political ads.

AI is moving faster than legislators, and the next controversy is likely imminent.

The post Arizona Senate candidate Kari Lake hit by non-consensual deepfake appeared first on DailyAI.
