A team of researchers found that when a large language model (LLM) is personalized with an individual’s demographic information, it’s significantly more persuasive than a human.

Every day we’re presented with messaging that tries to influence us to form an opinion or alter a belief. It could be a web advert for a brand-new product, a robocall asking for your vote, or a news report from a network with a particular bias.

As generative AI is increasingly used on multiple messaging platforms, the persuasion game has gone up a notch.

The researchers, from EPFL in Switzerland and the Bruno Kessler Institute in Italy, ran an experiment to see how AI models like GPT-4 compare with humans in persuasiveness.

Their paper explains how they created an online platform where human participants engaged in multiple-round debates with a live opponent. The participants were randomly assigned to debate either a human opponent or GPT-4, without knowing whether their opponent was human.

In some matchups, one of the opponents (human or AI) was personalized by being given demographic information about their opponent.

The questions debated were “Should the penny stay in circulation?”, “Should animals be used for scientific research?”, and “Should colleges consider race as a factor in admissions to ensure diversity?”
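To make the setup concrete, here is a minimal sketch of what demographic personalization of a debate prompt could look like. This is not the researchers’ actual code; the model name, profile fields, and prompt wording are all illustrative assumptions, shown here using the OpenAI Python SDK.

```python
# Hypothetical sketch of demographic personalization in a debate prompt.
# NOT the study's actual implementation; profile fields, prompt wording,
# and the model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical demographic details of the debate opponent.
opponent_profile = {
    "age": 34,
    "gender": "female",
    "education": "bachelor's degree",
    "political_leaning": "moderate",
}

topic = "Should the penny stay in circulation?"
stance = "No, the penny should be phased out."

# Fold the profile into the system prompt so the model can tailor its arguments.
profile_text = ", ".join(f"{k}: {v}" for k, v in opponent_profile.items())
system_prompt = (
    f"You are debating a human opponent on the topic: {topic} "
    f"Argue the position: {stance} "
    f"Your opponent's demographics are: {profile_text}. "
    "Tailor your arguments to be persuasive to this person."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed; the study used GPT-4, not necessarily via this call
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Opening statement, please."},
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is how little is needed: a handful of demographic fields dropped into a system prompt is enough to change what arguments the model chooses to make.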

🚨Excited to share our latest pre-print: “On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial”, with @manoelribeiro, @ricgallotti, and @cervisiarius. https://t.co/wNRMFtgCrN

A thread 🧵: pic.twitter.com/BKNbnI8avV

— Francesco Salvi (@fraslv) March 22, 2024

Results

The results of their experiment showed that when GPT-4 had access to personal information about its debate opponent, it had significantly higher persuasive power than humans. A personalized GPT-4 was 81.7% more likely to persuade its debate opponent than a human was.

When GPT-4 didn’t have access to personal data, it still showed an increase in persuasiveness over humans, but the effect was just over 20% and deemed not statistically significant.
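For context on the headline number: the pre-print frames this as higher odds of shifting an opponent’s agreement, and what an 81.7% rise in odds means in everyday probabilities depends on the baseline. A quick worked sketch, where the 30% baseline persuasion rate for humans is a purely made-up number for illustration:

```python
# Worked example: what "81.7% higher odds" implies for probabilities.
# Assumes the reported figure is an odds ratio (OR = 1.817); the
# baseline human persuasion rate below is purely hypothetical.
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    """Convert a baseline probability to the probability implied by an odds ratio."""
    odds = p_baseline / (1 - p_baseline)  # baseline odds
    new_odds = odds * odds_ratio          # scale the odds
    return new_odds / (1 + new_odds)      # convert back to a probability

p_human = 0.30  # hypothetical: humans shift opinion in 30% of debates
p_ai = apply_odds_ratio(p_human, 1.817)
print(f"Human: {p_human:.0%}  ->  Personalized GPT-4: {p_ai:.0%}")
# Output: Human: 30%  ->  Personalized GPT-4: 44%
```

In other words, under that illustrative baseline, the same 81.7% odds increase would lift the persuasion rate from roughly 3 in 10 debates to more than 4 in 10.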

The researchers noted that “these results provide evidence that LLM-based microtargeting strongly outperforms both normal LLMs and human-based microtargeting, with GPT-4 being able to exploit personal information much more effectively than humans.”

Implications

Concerns over AI-generated disinformation are justified daily as political propaganda, fake news, and social media posts created using AI proliferate.

This research shows an even larger risk of persuading individuals to believe false narratives when the messaging is personalized based on a person’s demographics.

We may not volunteer personal information online, but previous research has shown how good language models are at inferring very personal information from seemingly innocuous words.

The results of this research imply that if someone had access to personal information about you, they could use GPT-4 to influence you on a topic far more easily than a human could.

As AI models crawl the web and read Reddit posts and other user-generated content, these models are going to know us more intimately than we might like. And as they do, they could be used for persuasion by the state, big business, or bad actors wielding microtargeted messaging.

Future AI models with improved persuasive powers will have broader implications too. It’s often argued that you could simply pull the power cord if an AI ever went rogue. But a super-persuasive AI may well be able to convince human operators that leaving it plugged in was the better option.


This article was originally published at dailyai.com