Imagine you are in a job interview. As you answer the recruiter’s questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people’s knowledge or consent.

Emotion recognition technology (ERT) is, in fact, a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial: there are biases built into the systems.

Many firms use ERT to test customer reactions to their products, from cereal to video games. But it can also be used in situations with much higher stakes, such as in hiring, by airport security to flag faces as revealing deception or fear, in border control, in policing to identify “dangerous people” or in education to monitor students’ engagement with their homework.

Shaky scientific ground

Fortunately, facial recognition technology is receiving public attention. The award-winning film Coded Bias, recently released on Netflix, documents the discovery that many facial recognition technologies do not accurately detect darker-skinned faces. And the research team managing ImageNet, one of the largest and most significant datasets used to train facial recognition, was recently forced to blur 1.5 million images in response to privacy concerns.

Revelations about algorithmic bias and discriminatory datasets in facial recognition technology have led large technology firms, including Microsoft, Amazon and IBM, to halt sales. And the technology faces legal challenges regarding its use in policing in the UK. In the EU, a coalition of more than 40 civil society organisations has called for a ban on facial recognition technology entirely.

Like other forms of facial recognition, ERT raises questions about bias, privacy and mass surveillance. But ERT raises another concern: the science of emotion behind it is controversial. Most ERT is based on the theory of “basic emotions”, which holds that emotions are biologically hard-wired and expressed in the same way by people everywhere.
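To make that assumption concrete, here is a minimal, hypothetical sketch of the structure most ERT shares: measurements of the face are mapped onto a small, fixed list of “basic emotions”, and the system must pick one. The feature names, weights and labels below are invented for illustration, not taken from any real product; the point is that the taxonomy is baked in from the start, not discovered from the person.

```python
# Illustrative sketch only (not any vendor's actual system): facial
# measurements go in, and the output is forced into one of a small,
# fixed set of "basic emotion" labels.

BASIC_EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Hypothetical linear weights from facial "action unit" intensities
# (e.g. brow lowering, lip-corner pull) to each emotion score.
WEIGHTS = {
    "anger":     {"brow_lowered": 0.9, "lips_pressed": 0.7, "lip_corner_pull": -0.5},
    "disgust":   {"nose_wrinkle": 1.0, "upper_lip_raise": 0.6},
    "fear":      {"brow_raised": 0.6, "eyes_widened": 0.8, "lips_stretched": 0.4},
    "happiness": {"lip_corner_pull": 1.0, "cheek_raise": 0.8},
    "sadness":   {"brow_inner_raise": 0.7, "lip_corner_depress": 0.9},
    "surprise":  {"brow_raised": 0.9, "jaw_drop": 0.8, "eyes_widened": 0.5},
}

def classify(action_units: dict[str, float]) -> tuple[str, float]:
    """Return the single 'basic emotion' with the highest score.

    Note the built-in assumption: whatever the face is doing, and whatever
    the person actually feels, the answer must be one of the six labels.
    """
    scores = {
        emotion: sum(weight * action_units.get(au, 0.0) for au, weight in aus.items())
        for emotion, aus in WEIGHTS.items()
    }
    best = max(scores, key=scores.get)
    return best, scores[best]

if __name__ == "__main__":
    # A polite interview smile that masks nervousness still gets a label.
    face = {"lip_corner_pull": 0.8, "cheek_raise": 0.3, "brow_lowered": 0.4}
    print(classify(face))  # ('happiness', 1.04)
```

Whether those labels correspond to what people actually feel is exactly the scientific question at stake.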

This is increasingly being challenged, however. Research in anthropology shows that emotions are expressed differently across cultures and societies. In 2019, the Association for Psychological Science conducted a review of the evidence, concluding that there is no scientific support for the common assumption that a person’s emotional state can be readily inferred from their facial movements. In short, ERT is built on shaky scientific ground.

Also, like other forms of facial recognition technology, ERT is encoded with racial bias. A study has shown that systems consistently read black people’s faces as angrier than white people’s faces, regardless of the person’s expression. Although research on racial bias in ERT is limited, racial bias in other forms of facial recognition is well documented.
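The kind of disparity such a study reports can be made concrete with a small audit sketch. The following is a hypothetical illustration, not the study’s actual method or data: run the same ERT system over faces with matched expressions and compare how often each demographic group is labelled “angry”.

```python
# Rough audit sketch (hypothetical data): compare how often an ERT system
# outputs "anger" for matched expressions across demographic groups.
from collections import defaultdict

def anger_rate_by_group(predictions):
    """predictions: iterable of (group, predicted_label) pairs, e.g. from
    running an ERT system on a dataset of faces with matched expressions."""
    totals, angry = defaultdict(int), defaultdict(int)
    for group, label in predictions:
        totals[group] += 1
        if label == "anger":
            angry[group] += 1
    return {g: angry[g] / totals[g] for g in totals}

# Toy, invented data: the same smiling expression, labelled differently.
sample = [("group_a", "happiness"), ("group_a", "anger"),
          ("group_b", "anger"), ("group_b", "anger")]
print(anger_rate_by_group(sample))  # {'group_a': 0.5, 'group_b': 1.0}
```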

There are two ways in which this technology can hurt people, says AI researcher Deborah Raji in an interview with MIT Technology Review: “One way is by not working: by virtue of having higher error rates for people of color, it puts them at greater risk. The second situation is when it does work – where you have the perfect facial recognition system, but it’s easily weaponized against communities to harass them.”

So even if facial recognition technology can be de-biased and made accurate for all people, it still may not be fair or just. We see these disparate effects when facial recognition technology is used in policing and judicial systems that are already discriminatory and harmful to people of color. Technologies can be dangerous when they don’t work as they should. And they can be dangerous when they work perfectly in an imperfect world.

The challenges raised by facial recognition technologies – including ERT – do not have easy or clear answers. Solving the problems presented by ERT requires moving from AI ethics centred on abstract principles to AI ethics centred on practice and effects on people’s lives.

When it comes to ERT, we need to collectively examine the controversial science of emotion built into these systems and analyse their potential for racial bias. And we need to ask ourselves: even if ERT could be engineered to accurately read everyone’s inner feelings, do we want such intimate surveillance in our lives? These are questions that require everyone’s deliberation, input and action.

Citizen science project

ERT has the potential to affect the lives of millions of people, yet there has been little public deliberation about how – and if – it should be used. This is why we have developed a citizen science project.

On our interactive website (which works best on a laptop, not a phone) you can try out a private and secure ERT for yourself, to see how it scans your face and interprets your emotions. You can play games comparing human versus AI skills in emotion recognition and learn about the controversial science of emotion behind ERT.

Most importantly, you can contribute your perspectives and ideas to generate new knowledge about the potential impacts of ERT. As the computer scientist and digital activist Joy Buolamwini says: “If you have a face, you have a place in the conversation.”


This article was originally published at theconversation.com