Australian police agencies are reportedly using a private, unaccountable facial recognition service that combines machine learning with wide-ranging data-gathering practices to identify members of the public from online photographs.

The service, Clearview AI, is like a reverse image search for faces. You upload a picture of someone's face and Clearview searches its database to find other images that contain the same face. It also tells you where each image was found, which can help you determine the name and other information about the person in the picture.

Clearview AI built this system by collecting several billion publicly available images from the web, including from social media sites such as Facebook and YouTube. It then used machine learning to make a biometric template for each face and to match those templates to the online sources of the images.
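In broad strokes, and purely as an illustration since Clearview AI has not published how its system works, this kind of matching reduces each face to a numerical "template" (an embedding produced by a machine-learning model) and ranks stored templates by how similar they are to a query face. The sketch below uses invented random vectors in place of real templates and an arbitrary similarity threshold.

```python
# Illustration only: the general shape of an embed-and-match pipeline.
# Clearview AI has not published its model, thresholds or indexing, so the
# 128-dimensional "templates" below are invented stand-ins (random vectors).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_template, database, threshold=0.8):
    """Rank stored (template, source_url) pairs by similarity to a query face."""
    hits = [(cosine_similarity(query_template, template), url)
            for template, url in database]
    return sorted((hit for hit in hits if hit[0] >= threshold), reverse=True)

# Pretend templates produced by some face-embedding model.
rng = np.random.default_rng(0)
database = [(rng.standard_normal(128), f"https://example.com/photo/{i}")
            for i in range(5)]
query = database[2][0] + 0.05 * rng.standard_normal(128)  # a new photo of "person 2"

print(search(query, database))  # only the genuine match clears the threshold
```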

It was revealed in January that hundreds of US law enforcement agencies are using Clearview AI, sparking a storm of discussion about the system's privacy implications and the legality of the web-scraping used to build the database.

Australian police agencies initially denied they were using the service. The denial held until a list of Clearview AI's customers was stolen and disseminated, revealing users from the Australian Federal Police as well as the state police forces in Queensland, Victoria and South Australia.

Lack of accountability

This development is particularly concerning as the Department of Home Affairs, which oversees the federal police, is seeking to increase the use of facial recognition and other biometric identity systems. (An attempt to introduce new legislation was knocked back last year for not being adequately transparent or privacy-protecting.)

Gaining trust in the proper use of biometric surveillance technology ought to be important for Home Affairs. And being deceptive about the use of these tools is a bad look.

But the lack of accountability may go beyond poor decision-making at the top. It may be that management at law enforcement agencies did not know their employees were using Clearview AI. The company offers free trials to "active law enforcement personnel", but it is unclear how it verifies this beyond requiring a government email address.

Why aren’t law enforcement agencies enforcing rules about which surveillance tools officers can use? Why aren’t their internal accountability mechanisms working?

There are also very real concerns around security when using Clearview AI. It monitors and logs every search, and we know it has already had one data breach. If police are going to use powerful surveillance technologies, there must be systems in place for ensuring those tools do what they say they do, and in a secure and accountable way.

Is it even accurate?

Relatively little is known about how the Clearview AI system actually works. To be accountable, a technology used by law enforcement should be tested by a standards body to ensure it is fit for purpose.

Clearview AI, on the other hand, has had its own testing done – and as a result its developers claim it is 100% accurate.

That report does not represent the kind of testing an entity seeking to produce an accountable system would undertake. In the US at least, there are agencies such as the National Institute of Standards and Technology that do precisely this kind of accuracy testing. There are also many qualified researchers in universities and labs who could properly evaluate the system.

Instead, Clearview AI gave the task to a trio composed of a retired judge turned private attorney, an urban policy analyst who wrote some open-source software in the 1990s, and a former computer science professor who is now a Silicon Valley entrepreneur. There is no discussion of why those individuals were chosen.

The method used to test the system also leaves a lot to be desired. Clearview AI based its testing on the American Civil Liberties Union's test of Amazon's Rekognition image analysis tool.

However, the ACLU test was a media stunt. The ACLU ran headshots of 28 members of Congress against a mugshot database. None of the politicians were in the database, meaning any returned match would be an error. But the test only required the system to be 80% confident of its results, making it quite likely to return a match.
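To see why that confidence cut-off matters, consider a toy example (the similarity scores below are invented, not outputs from Rekognition or Clearview AI). When none of the database entries actually shows the queried person, any score that clears the bar becomes a false match, and an 80% bar is far easier to clear than a stricter one.

```python
# Toy illustration: a lower confidence threshold produces false matches when
# the queried person is not in the database at all. All scores are invented.

def best_match(scores, threshold):
    """Return the highest-scoring database entry if it clears the threshold."""
    name, score = max(scores.items(), key=lambda item: item[1])
    return name if score >= threshold else None

# Hypothetical similarities between one politician's headshot and a mugshot
# database that does NOT contain that politician.
scores = {"mugshot_0412": 0.84, "mugshot_1187": 0.79, "mugshot_0036": 0.61}

print(best_match(scores, threshold=0.95))  # None -> correctly reports no match
print(best_match(scores, threshold=0.80))  # 'mugshot_0412' -> a false match
```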

The Clearview AI test also used headshots of politicians taken from the web (front-on, nicely framed, well-lit images), but ran them against its database of several billion images, which did include those politicians.

The hits returned by the system were then confirmed visually by the three report authors as 100% accurate. But what does 100% mean here?

The report stipulates that the first two hits provided by the system were accurate. But we don't know how many other hits there were, or at what point they stopped being accurate. Politicians have plenty of smiling headshots online, so finding two accurate images should not be difficult.
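In evaluation terms, checking only the first two results measures precision at rank 2, which says nothing about the rest of the result list. The toy calculation below, using an invented list of correct and incorrect hits rather than Clearview AI's actual output, shows how a system can look "100% accurate" at rank 2 while returning plenty of wrong matches further down.

```python
# Toy example: precision at rank k for an invented list of returned hits.
# True means the returned image really shows the queried person; this list is
# made up and does not reflect Clearview AI's actual results.

def precision_at_k(hits, k):
    """Fraction of the top-k returned hits that are correct."""
    top_k = hits[:k]
    return sum(top_k) / len(top_k)

hits = [True, True, False, True, False, False, False, False]

print(precision_at_k(hits, 2))  # 1.0   -> "100% accurate" on the first two hits
print(precision_at_k(hits, 8))  # 0.375 -> a far less impressive picture
```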

What's more, law enforcement agencies are unlikely to be working with nice clean headshots. Poor-quality images taken from odd angles – the kind you get from surveillance or CCTV cameras – would be more like what those agencies are actually using.

Despite these and other criticisms, Clearview AI CEO Hoan Ton-That stands by the testing, telling BuzzFeed News he believes it is diligent and thorough.

More understanding and accountability are needed

The Clearview AI case shows there is not enough understanding or accountability around how this and other software tools work in law enforcement. Nor do we know enough about the company selling it and its security measures, nor about who in law enforcement is using it or under what conditions.

Beyond the ethical arguments around facial recognition, Clearview AI reveals Australian law enforcement agencies have such limited technical and organisational accountability that we should be questioning their competency even to evaluate, let alone use, this kind of technology.

This article was originally published at theconversation.com