Digital video surveillance systems don’t just identify who someone is. They may also work out how someone is feeling and what kind of personality they have. They may even predict how they might behave in the future. And the key to unlocking this information about a person is said to be the movement of their head.

That is the claim made by the company behind the VibraImage artificial intelligence (AI) system. (The term “AI” is used here in a broad sense to refer to digital systems that use algorithms and tools such as automated biometrics and computer vision). You may never have heard of it, but digital tools based on VibraImage are being used across a broad range of applications in Russia, China, Japan and South Korea.

But as I show in my recent research, published in Science, Technology and Society, there is very little reliable empirical evidence that VibraImage and systems like it are actually effective at what they claim to do.

Among other things, these applications include identifying “suspect” individuals among crowds of people and assessing the mental and emotional states of employees. Users of VibraImage include police forces, the nuclear industry and airport security. The technology has already been deployed at two Olympic Games, a FIFA World Cup and a G7 Summit.

In Japan, clients of such systems include one of the world’s leading facial recognition providers (NEC) and one of the largest security services firms (ALSOK), as well as Fujitsu and Toshiba. In South Korea, among other uses, it is being developed as a contactless lie-detection system for use in police interrogations. In China, it has already been officially certified for police use to identify suspicious individuals at airports, border crossings and elsewhere.

Across east Asia and beyond, algorithmic security, surveillance, predictive policing and smart city infrastructure are becoming mainstream. VibraImage forms one part of this emerging infrastructure. Like other algorithmic emotion detection systems being developed and deployed globally, it promises to take video surveillance to a new level. As I explain in my paper, it claims to do this by generating information about subjects’ characters and inner lives that they don’t even know about themselves.

VibraImage has been developed by Russian biometrist Viktor Minkin through his company ELSYS Corp since 2001. Other emotion detection systems attempt to calculate people’s emotional states by analysing their facial expressions. By contrast, VibraImage analyses video footage of the involuntary micro-movements, or “vibrations”, of a person’s head, which are caused by muscles and the circulatory system. The analysis of facial expressions to identify emotions has come under growing criticism in recent years. Could VibraImage provide a more accurate approach?
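To make the idea of “vibration” analysis more concrete, below is a minimal, hypothetical sketch of how frame-to-frame movement in a detected head region might be quantified from video using Python and OpenCV. It is purely illustrative: the face detector, the simple frame-differencing metric and the input filename are my own assumptions, not ELSYS’s published method.

```python
# Hypothetical sketch: quantify frame-to-frame movement of a detected head
# region in a video. This is NOT ELSYS's VibraImage algorithm; it only
# illustrates the general idea of measuring micro-movements from footage.
import cv2
import numpy as np

cap = cv2.VideoCapture("subject.mp4")  # assumed input file
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

prev_gray = None
motion_per_frame = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Use the first detected face as a stand-in for the head region.
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        prev_gray = gray
        continue
    x, y, w, h = faces[0]

    if prev_gray is not None:
        head = gray[y:y + h, x:x + w]
        prev_head = prev_gray[y:y + h, x:x + w]
        # Mean absolute pixel difference as a crude proxy for micro-movement.
        motion_per_frame.append(float(np.mean(cv2.absdiff(head, prev_head))))
    prev_gray = gray

cap.release()
print("mean motion:", np.mean(motion_per_frame) if motion_per_frame else "no data")
```

Even a toy measure like this highlights the gap at issue: the numbers describe pixel-level movement, but nothing in them establishes a link between that movement and emotion, intention or personality.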

Surveillance systems can profile individuals in huge crowds.
Csaba Peterdi/Shutterstock

Minkin puts forward two theories apparently supporting the idea that these movements are tied to emotional states. The first is the existence of a “vestibulo-emotional reflex”, based on the idea that the body’s system responsible for balance and spatial orientation is related to psychological and emotional states. The second is a “thermodynamic model of emotions”, which draws a direct link between specific emotional-mental states and the amount of energy expended by muscles. What’s more, Minkin claims this energy can be measured through tiny vibrations of the head.

According to these theories, involuntary movements of the face and head are therefore emotion, intention and personality made visible. In addition to spotting suspect individuals, supporters of VibraImage also believe this data can be used to determine personality type, identify adolescents more likely to commit crimes, or categorise types of intelligence based on nationality and ethnicity. They even suggest it could be used to create a 1984-style test of loyalty to the values of a company or nation, based on how someone’s head vibrations change in response to statements.

But the many claims made about its effects seem unprovable. Very few scientific articles on VibraImage have been published in academic journals with rigorous peer review processes – and many are written by those with an interest in the success of the technology. This research often relies on experiments that already assume VibraImage is effective. How exactly certain head movements are linked to specific emotional-mental states is not explained. One study from Kagawa University in Japan found almost no correlation between the results of a VibraImage assessment and those of existing psychological tests.

In a statement in response to the claims in this article, Minkin says that VibraImage is not an AI technology, but “is based on understandable physics and cybernetics and physiology principles and transparent equations for emotions calculations”. He adds that it may use AI processing in behaviour detection or emotion recognition when there is a “technical necessity for it”.

He also argues that people might assume the technology is “fake” because “contactless and simple technology of psychophysiological detection looks so improbable”, and because it is associated with Russia. Minkin has also published a technical response to my paper.

‘Suspect AI’

One of the main reasons why it is so difficult to prove whether VibraImage works is its underlying premise that the system reveals more about subjects than they know about themselves. But there is no compelling evidence that this is the case.

I propose the term “suspect AI” to describe the growing number of systems that algorithmically classify people as suspects, yet which I argue are themselves deeply suspect. They are opaque, unproven, and developed and implemented without democratic input or oversight. They are also largely unregulated, and have the potential to cause serious harm.

VibraImage is not the only such system out there. Other AI systems designed to detect suspicious or deceptive individuals have been trialled: Avatar, for example, has been tested on the US-Mexico border, and iBorderCtrl at the EU’s borders. Both are designed to detect deception among migrants. In China, VibraImage-based systems and similar products are being used for a growing range of applications in law enforcement, security and healthcare.

The broader algorithmic emotion recognition industry was worth as much as US$12 billion in 2018. It is expected to reach US$37.1 billion by 2026. Amid growing global concern about the need to create rules around the ethical development of AI, we need to look much more closely at such opaque algorithmic systems of surveillance and control.

The European Commission’s recently announced draft AI regulations categorise the use of emotion recognition systems by law enforcement as “high-risk” and subject to a higher level of governance control. This is an important start. Other countries should now follow this lead to ensure that possible harms from these high-risk systems are minimised.

This article was originally published at theconversation.com