At the Munich Security Conference, a coalition of 20 tech giants, including OpenAI, Meta, Microsoft and others, declared a joint effort to combat fraudulent AI content influencing elections worldwide.

This comes amid growing concerns that AI-generated deep fakes could manipulate electoral processes, especially with major elections coming up in several countries this year.

We have already seen deep fakes play a role, at least in the elections in Pakistan, Indonesia, Slovakia, and Bangladesh.

The latest agreement includes commitments to develop tools to detect and combat misleading AI-generated media, raise public awareness of misleading content, and take rapid action to remove such content from their platforms.

However, the reality is that we have heard all this before. So what’s different now?

While details on implementation timelines remain vague, the companies stressed the need for a common approach to tackle this evolving threat.

Technology companies have committed to using collaborative tools to detect and mitigate the spread of harmful AI-generated election content, including techniques such as watermarking to certify the origin of content and flag alterations. They also committed to being transparent about their efforts and to assessing the risks posed by their generative AI models.

“I think the utility of this (agreement) is the breadth of companies that are signing it,” said Nick Clegg, president of global affairs at Meta Platforms.

“It’s all well and good if individual platforms develop new policies around detection, provenance, labeling, watermarking, etc., but unless there is a broader commitment to do this in a common, interoperable way, we will be stuck with a hodgepodge of different obligations.”

Again, nothing we haven’t heard before. There have been several cross-industry agreements, but no truly effective plan to stop deep fakes.

For example, MLCommons worked with Big Tech to define safety benchmarks, companies committed to watermarking, and joined the Frontier Model Forum to establish a “unified approach.” Those three industry-wide agreements come to mind immediately, but there are many others.

Deep fakes are not easy to detect, especially at scale. They are so close to reality that it’s exceedingly difficult to identify them using AI or algorithmic techniques.

Tech companies have responded by adding metadata to content, identifying it as AI-generated. But how does that reveal the purpose of the image?

Metadata can also be easily removed from a file. Additionally, there will always be AI companies that don’t adhere to agreements, and ways to bypass existing controls.
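To make that fragility concrete, here is a minimal sketch in Python using Pillow (the file names and the `ai_generated` tag are hypothetical, and real provenance schemes such as C2PA content credentials are far richer). It shows how a metadata label marking an image as AI-generated can be attached, and how a simple re-save of the file silently discards it, because the label lives alongside the pixels rather than in them.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# 1. Label an image as AI-generated via a PNG text chunk
#    (a hypothetical stand-in for richer provenance metadata).
img = Image.new("RGB", (64, 64), color="gray")   # placeholder "generated" image
label = PngInfo()
label.add_text("ai_generated", "true")
img.save("tagged.png", pnginfo=label)

print(Image.open("tagged.png").text)   # {'ai_generated': 'true'}

# 2. "Launder" the image: open it and re-save without carrying the metadata over.
#    The pixels are identical, but the provenance label is gone.
Image.open("tagged.png").save("stripped.png")

print(Image.open("stripped.png").text) # {} -- the label did not survive
```

Anyone with a few lines of code, or simply a screenshot tool, can strip such a label, which is why metadata alone is a weak defense.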

Dana Rao, Adobe’s chief trust officer, explained why such content is so effective: “There’s an emotional connection to audio, video and images,” he said. “Your brain is wired to believe that kind of media.”

In fact, deep fakes tend to keep spreading long after they’ve been declared fake. Although it’s difficult to quantify exactly how much they change our behavior, given the sheer scale of their reach, with content being viewed by hundreds of thousands of people at once, it’s hard to take any chances.

The fact is that we can expect more AI-related deep fake incidents and controversies.

Individual awareness and critical thinking will be humanity’s best weapons in the fight against their negative impacts.

This article was originally published at dailyai.com