Does ChatGPT ever give you the eerie sense you’re interacting with another human being?

Artificial intelligence (AI) has reached an astounding level of realism, to the point that some tools can even fool people into thinking they’re interacting with another human.

The eeriness doesn’t stop there. In a study published today in Psychological Science, we found that images of white faces generated by the popular StyleGAN2 algorithm look more “human” than real people’s faces.

AI creates hyperrealistic faces

For our research, we showed 124 participants pictures of many different white faces and asked them to decide whether each face was real or generated by AI.

Half the images were of real faces, while half were AI-generated. If the participants had guessed randomly, we’d expect them to be correct about half the time – much like flipping a coin and getting tails half the time.

Instead, participants were systematically wrong, and were more likely to say AI-generated faces were real. On average, people labelled about two out of three of the AI-generated faces as human.
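To make the chance baseline concrete, here is a minimal sketch of how such a result can be compared against random guessing. This is not the study’s actual analysis, and the counts below are illustrative stand-ins rather than our data:

```python
# Illustrative sketch only: tests whether an observed rate of AI faces
# judged "real" differs from the 50% expected under random guessing.
# The counts below are hypothetical, not the study's data.
from math import comb

def binom_sf(k: int, n: int, p: float = 0.5) -> float:
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n_ai_faces = 50    # hypothetical number of AI-generated faces shown
judged_real = 33   # hypothetical count labelled "human" (about 2 in 3)

print(f"Observed rate: {judged_real / n_ai_faces:.0%}")                  # 66%
print(f"p-value vs 50% chance: {binom_sf(judged_real, n_ai_faces):.4f}")  # well below 0.05
```

A rate this far above 50% would be very unlikely under pure guessing, which is what makes the pattern systematic rather than noise.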

These results suggest AI-generated faces look more real than actual faces; we call this effect “hyperrealism”. They also suggest people, on average, aren’t very good at detecting AI-generated faces. You can compare for yourself the portraits of real people at the top of the page with those embedded below.

But perhaps people are aware of their own limitations, and are therefore unlikely to fall prey to AI-generated faces online?

To find out, we asked participants how confident they felt about their decisions. Paradoxically, the people who were the worst at identifying AI impostors were the most confident in their guesses.

In other words, the people who were most vulnerable to being tricked by AI weren’t even aware they were being deceived.
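For readers curious how such a relationship is typically quantified, the sketch below correlates per-participant accuracy with self-rated confidence using invented numbers; a negative coefficient would mirror the pattern we describe:

```python
# Illustrative sketch with invented data (not the study's): a negative
# correlation between detection accuracy and self-rated confidence would
# reproduce the pattern described above.
from statistics import correlation  # Python 3.10+

# Hypothetical per-participant scores
accuracy   = [0.35, 0.40, 0.45, 0.55, 0.60, 0.70]  # proportion correct
confidence = [0.90, 0.85, 0.80, 0.60, 0.55, 0.40]  # self-rated, 0 to 1

r = correlation(accuracy, confidence)
print(f"Pearson r = {r:.2f}")  # strongly negative for these made-up data
```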



Biased training data produce biased outputs

The fourth industrial revolution – which includes technologies such as AI, robotics and advanced computing – has profoundly changed the kinds of “faces” we see online.

AI-generated faces are readily available, and their use comes with both risks and benefits. Although they’ve been used to help find missing children, they’ve also been used in identity fraud, catfishing and cyber warfare.

People’s misplaced confidence in their ability to detect AI faces could make them more vulnerable to deceptive practices. They may, for example, readily hand over sensitive information to cybercriminals masquerading behind hyperrealistic AI identities.

Another worrying aspect of AI hyperrealism is that it’s racially biased. Using data from another study that also tested Asian and Black faces, we found only white AI-generated faces looked hyperreal.

When asked to decide whether faces of color were human or AI-generated, participants guessed correctly about half the time – equivalent to guessing randomly.

This means white AI-generated faces look more real than AI-generated faces of color, as well as more real than white human faces.

Implications of bias and hyperrealistic AI

This racial bias likely stems from the fact that AI algorithms, including the one we tested, are often trained on images of mostly white faces.

Racial bias in algorithmic training can have serious implications. One recent study found self-driving cars are less likely to detect Black people, placing them at greater risk than white people. Both the companies producing AI and the governments overseeing them have a responsibility to ensure diverse representation and mitigate bias in AI.

The realism of AI-generated content also raises questions about our ability to accurately detect it and protect ourselves.

In our research, we identified several features that make white AI faces look hyperreal. For instance, they often have proportionate and familiar features, and they lack the distinctive characteristics that would make them stand out as “odd” compared with other faces. Participants misinterpreted these features as signs of “humanness”, leading to the hyperrealism effect.

At the same time, AI technology is advancing so rapidly it will be interesting to see how long these findings apply. There’s also no guarantee AI faces generated by other algorithms will differ from human faces in the same ways as those we tested.

Since our study was published, we have also tested the ability of AI detection technology to identify our AI faces. Although this technology claims to identify the particular type of AI faces we used with high accuracy, it performed as poorly as our human participants.

Similarly, software for detecting AI writing has also had high rates of falsely accusing people of cheating – especially people whose native language isn’t English.

Managing the risks of AI

So how can people protect themselves from misidentifying AI-generated content as real?

One way is simply to be aware of how poorly people perform when tasked with separating AI-generated faces from real ones. If we’re more wary of our own limitations in this area, we may be less easily influenced by what we see online – and can take additional steps to verify information when it matters.

Public policy also plays an important role. One option is to require the use of AI to be declared. However, this may not help, or may inadvertently provide a false sense of security when AI is used for deceptive purposes – in which case it is almost impossible to police.

Another approach is to focus on authenticating trusted sources. Similar to the “Made in Australia” label or the European “CE” mark, applying a trusted-source badge – which can be verified and must be earned through rigorous checks – could help users select reliable media.



This article was originally published at theconversation.com