Trump supporters have been caught creating and disseminating AI-generated images that falsely depict him alongside African American voters. 

This tactic, first uncovered by the BBC, aimed to fabricate an impression of Trump’s popularity amongst black voters – a demographic that played an important role in Joe Biden’s 2020 victory. 

Mark Kaye, a conservative radio show host in Florida, was amongst those who created these images, depicting Trump surrounded by black people, and shared them widely on social media. 

An example of one of the AI-generated images.

Kaye’s approach was straightforward. “I’m not a photojournalist. I’m a storyteller,” he explained.

Kaye took to X on March 4 to defend himself, stating: “Guys! The Fake News @BBC has accused me of leading a ‘disinformation’ campaign. Oh, the irony. That’s like me calling them ‘bald!’”

Commenters were largely unsympathetic, with one saying Kaye had been “caught with his pants down.”

Unsurprisingly, these images raised ethical concerns, prompting a response from Cliff Albright, the co-founder of Black Voters Matter. 

Albright criticized the manipulation, stating, “There have been documented attempts to target disinformation to black communities again, especially younger black voters.” 

While it might be tempting to assume these images are easily dismissed as fake, the BBC found many who thought they were real. Awareness of deep fakes remains largely unquantified.

Critics from across the political spectrum argue this tactic not only misrepresents political realities but also deliberately targets vulnerable segments of the electorate.

Deep fakes are taking elections by storm. We’ve seen large-scale campaigns in the Pakistani, Indonesian, Slovakian, and Bangladeshi elections, amongst others. We’ve also observed deep fake campaigns from foreign state actors aiming to influence voting behaviors. 

Now, attention turns to the US election, which has already been a testing ground for deep fakes. The heat will only crank up as we approach polling day. 

AI bot assistant ‘Jennifer’

On the technological frontier of campaign strategies, Peter Dixon, a Democratic congressional candidate from California, employed an AI bot named “Jennifer” to call voters, raising eyebrows within his own team. 

Jennifer’s introduction to voters was clear and upfront: “Hello there. My name is Jennifer and I’m an artificial intelligence volunteer.” This is part of Dixon’s broader strategy to reach a large audience, and Jennifer isn’t the first AI robocaller, with the first being deployed in Pennsylvania last year.

The results of employing Jennifer were surprisingly positive, defying initial skepticism. Dixon himself was bowled over by how well it worked, commenting on the public’s response: “People were shocked at how good the capability was.” 

Deep fake electioneering highlights the dual-edged nature of AI’s role in modern political campaigns.

While AI offers innovative tools for engagement, it also poses ethical challenges, particularly when used to fabricate or manipulate political support. 

Drawing lines on fair usage of AI in political scenarios has proved nigh impossible. US regulators have discussed banning various deep fake campaign materials, but this hasn’t materialized.

This article was originally published at dailyai.com