The World Economic Forum’s Global Risks Report 2024 has issued a stark warning: misinformation and disinformation, primarily driven by deepfakes, rank as the most severe global short-term risks the world faces over the next two years.

In October 2023, Québec’s Innovation Council reached the same conclusion after months of consultations with experts and the general public.

This digital deception, which leverages artificial intelligence and, more recently, generative AI to create hyper-realistic fabrications, is more than a technological marvel; it poses a profound societal threat.

Because technology and laws alone have proven unable to effectively combat deepfakes, a research project that my team and I led sheds light on a significant part of the solution: human intervention through education.

Technological solutions alone are inadequate

Despite ongoing development of deepfake detection tools, these technological solutions are racing to keep up with the rapidly advancing capabilities of deepfake algorithms.

Legal systems and governments are struggling to maintain pace with this swift advancement of digital deception.

Professor Hany Farid of the University of California, Berkeley, an expert in analyzing digital images and detecting digital manipulation, has shared on LinkedIn an example of the rapid evolution of the technology.

There is an urgent need for education to adopt a more serious, aggressive and strategic approach to equipping youth to combat this imminent threat.

Political disinformation concerns

The potential for political polarization is especially alarming.

Nearly three billion people are expected to vote in countries including Bangladesh, India, Indonesia, Mexico, Pakistan, the United Kingdom and the United States over the next two years.

Disinformation campaigns threaten to undermine the legitimacy of newly elected governments.

Prominent figures like Palestinian American supermodel Bella Hadid have had their images manipulated into deepfakes that falsify their political statements, exemplifying the technology’s capability to sway public opinion and skew political narratives.

A deepfake of Greta Thunberg advocating for “vegan grenades” highlights the nefarious use of this technology.

Meta’s unveiling of an AI assistant featuring celebrities’ likenesses raises concerns about misuse and spreading disinformation.

Financial fraud, pornographic harms

Deepfake videos are also, unsurprisingly, being leveraged to commit financial fraud.

The popular YouTuber MrBeast was impersonated in a deepfake scam on TikTok, falsely promising an iPhone 15 giveaway that led to financial deceit.

These incidents highlight our vulnerability to sophisticated AI-driven frauds and scams targeting people of all ages.

Deepfake pornography represents a grave concern for young people and adults alike, where individuals’ faces are non-consensually superimposed onto explicit content. Sexually explicit deepfake images of Taylor Swift spread on social media before platforms took them down. One was viewed over 45 million times.

Policy and technology approaches

Meta’s policy now mandates political advertisers to reveal any AI manipulation in ads, a move mirrored by Google.

Neil Zhang, a PhD student at the University of Rochester, is developing detection tools for audio deepfakes, including advanced algorithms and watermarking techniques.

The U.S. has introduced several acts: the Deepfakes Accountability Act of 2023, the No AI FRAUD Act safeguarding identities against AI misuse and the Preventing Deepfakes of Intimate Images Act targeting non-consensual pornographic deepfakes.

In Canada, legislators have proposed Bill C-27, which includes the Artificial Intelligence and Data Act (AIDA), emphasizing AI transparency and data privacy.

‘Disinformation may cause harm’ video from the Communications Security Establishment (CSE), a Canadian federal agency dedicated to security and intelligence.

The United Kingdom passed its Online Safety Act. The EU recently announced a provisional deal on its AI Act, and the EU’s AI Liability Directive addresses broader online safety and AI regulation issues.

The Indian government announced plans to draft regulations targeting deepfakes.

These measures reflect a growing global commitment to curbing the pernicious effects of deepfakes. However, these efforts are insufficient to contain, let alone stop, the spread of deepfakes.

Research study with youth

Research I have conducted with colleagues, funded by the Social Sciences and Humanities Research Council (SSHRC) and Canadian Heritage, shows how empowering youth with digital agency can be a force against the rising tide of disinformation fueled by deepfake and artificial intelligence technologies.

Our study focused on how youth perceive the impact of deepfakes on critical issues and on their own processes of constructing knowledge in digital contexts. We explored their capability and willingness to effectively counterbalance disinformation.

Author Nadia Naffi shares some results of a study on youth digital agency and deepfakes.

The study brought together Canadian university students, aged 18 to 24, for a series of hands-on workshops, in-depth individual interviews and focus group discussions.

Participants created deepfakes, gaining a firsthand understanding of how easy it is to access and use this technology, and of its potential for misuse. This experiential learning proved invaluable in demystifying how easily deepfakes are generated.

Participants initially perceived deepfakes as an uncontrollable and inevitable part of the digital landscape.

Through engagement and discussion, they moved from being passive bystanders to a deeper realization of the grave threat deepfakes pose. Critically, they also developed a sense of responsibility for preventing and mitigating the spread of deepfakes, and a readiness to counter them.

Students shared recommendations for concrete actions, including urging educational systems to empower youth and help them recognize that their actions can make a difference. This includes:

  • teaching the detrimental effects of disinformation on society;

  • providing spaces for youth to reflect on and challenge societal norms, informing them about social media policies, and outlining permissible and prohibited content;

  • training students in recognizing deepfakes through exposure to the technology behind them;

  • encouraging involvement in meaningful causes while staying alert to disinformation and guiding youth in respectfully and productively countering disinformation.

Students seen at a laptop.
Educational systems have a vital role in empowering youth and helping them recognize that their actions can make a difference.
(Allison Shelley/EDUimages), CC BY-NC

Multifaceted strategy needed

Based on our research and the participants’ recommendations, we propose a multifaceted strategy to counter the proliferation of deepfakes.

Deepfake education must be integrated into school curricula, along with nurturing critical thinking and digital agency in our youth. Youth must be encouraged to participate actively, yet safely, knowledgeably and strategically, in the fight against malicious deepfakes in digital spaces.

We emphasize the importance of hands-on collaborative learning experiences. We also advocate for an interdisciplinary educational approach that marries technology, psychology, media studies and ethics to fully grasp the implications of deepfakes.

The human element

Our research underscores a crucial realization: the human element, particularly the role of education, is indispensable in the fight against deepfakes. We cannot rely solely on technological and legal fixes.

By equipping younger generations, but also every member of our society, with the skills to critically analyze and challenge disinformation, we are nurturing a digitally literate society resilient enough to withstand the manipulative power of deepfakes.

To accomplish this, we must help people understand that they have roles and agency in safeguarding the integrity of our digital world.
