Anthropic research found that its latest AI model, Claude 3 Opus, can generate arguments as compelling as those written by humans.

The research, led by Esin Durmus, examines the relationship between model scale and persuasiveness across different generations of Anthropic language models.

It focused on 28 complex and emerging topics, such as online content moderation and ethical guidelines for space exploration, where people are less likely to hold concrete or long-held views.

The researchers compared the persuasiveness of arguments generated by various Anthropic models, including Claude 1, 2, and 3, with those written by human participants.

Key findings from the study include:

  • The study used four different prompts to generate AI arguments, capturing a broader range of persuasive writing styles and techniques.
  • Claude 3 Opus, Anthropic’s most advanced model, produced arguments that were statistically indistinguishable from human-written arguments in terms of persuasiveness.
  • A clear upward trend was observed across model generations, with each successive generation showing increased persuasiveness in both the compact and frontier model classes.
Anthropic’s Claude models have become increasingly persuasive over time. Source: Anthropic.

The Anthropic team acknowledges limitations, writing: “Persuasion is difficult to study in a laboratory setting – our results may not generalize to the real world.”

Still, Claude’s powers of persuasion are clearly impressive, and this is not the only study to demonstrate them.

In March 2024, a team from EPFL in Switzerland and the Bruno Kessler Institute in Italy found that when GPT-4 had access to personal information about its debate opponent, it was 81.7% more likely than a human debater to persuade that opponent.

The researchers concluded that “these results provide evidence that LLM-based microtargeting significantly outperforms both regular LLMs and human-based microtargeting, with GPT-4 being able to use personal information far more effectively than humans.”

Persuasive AI for social engineering

The most evident risks of persuasive LLMs are coercion and social engineering.

As Anthropic notes, “The persuasive power of language models raises legitimate societal concerns about safe use and potential misuse. The ability to evaluate and quantify these risks is critical to developing responsible safeguards.”

We also need to consider how the growing persuasive power of AI language models might be combined with cutting-edge voice cloning technology such as OpenAI’s Voice Engine, which OpenAI itself considered too risky to release.

Voice Engine needs just 15 seconds of audio to realistically clone a voice, which could be used for almost anything, including sophisticated fraud or social engineering scams.

Deepfake scams are already widespread and will only become more effective as threat actors combine voice cloning technology with AI’s frighteningly competent persuasion techniques.

This article was originally published at dailyai.com