The proliferation of technology in everyday life can be seen in ChatGPT writing term papers or robots serving meals at restaurants.

Technology can also be used for less benign ends. Unfortunately, deepfakes, digitally altered images of individuals, can be used to spread misinformation.

A new edited volume, which I co-edited, considers the use of everyday technologies within the criminal justice system, ranging from deception detection to online sleuthing that helps law enforcement solve crimes.

Technology and policing

Consider the use of body-worn cameras by police, as in the fatal shooting of Ontario Provincial Police Const. Greg Pierzchala in December 2022. Footage from his body camera will provide evidence during the trial of his accused killers.

Police investigations have also been aided by private citizens who use technology to gather evidence and help police identify criminals. This was the case with convicted murderer Luka Magnotta, where an online network identified him in cat torture videos and provided the information to law enforcement agencies.

Another use of technology is public surveillance for crime prevention through the application of facial recognition software.

Security cameras are now a ubiquitous feature of public places. In 2021, it was estimated that one billion security cameras were in use worldwide, with China accounting for about 54 per cent of them.

In 2020, Toronto had roughly 2,000 cameras at city-owned facilities.

Security cameras may or may not be used in conjunction with facial recognition software.

Security cameras are becoming regular features of outdoor public spaces.

Finding faces

Facial recognition software identifies or verifies a person’s identity from an image of their face. Captured faces are compared against a database, often for the purposes of crime prevention.
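The comparison step described above can be sketched in miniature. This is a hypothetical illustration, not any vendor's actual system: real systems convert each face image into a numeric embedding with a learned model, then compare embeddings by similarity. Here the embeddings are made-up toy vectors and the names and threshold are invented for the example.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(probe, database, threshold=0.8):
    """Return the database identity whose embedding is most similar to the
    probe face, or None if no similarity clears the threshold."""
    best_name, best_score = None, threshold
    for name, embedding in database.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy vectors standing in for the output of a face-embedding model.
database = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.15, 0.25]
print(best_match(probe, database))  # prints: person_a
```

The threshold is the crucial design choice: set it too low and innocent people are "flagged" as matches, which is precisely the failure mode in the retail and policing cases described below.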

Some retailers have used facial recognition to help reduce theft. In 2022, Josh Soika, an Indigenous man, was confronted by a security guard after being “flagged” as having previously stolen from the store. It was later determined that Soika had been misidentified by the artificial intelligence (AI) used by Canadian Tire for facial recognition.

In 2023, Canadian Tire Corporation and its dealers agreed to stop using facial recognition technology.

In the United States, the Federal Trade Commission (FTC) recently banned the pharmacy chain Rite Aid for five years from using facial recognition software to identify customers who had stolen merchandise or displayed other problematic behaviours. In some instances, Rite Aid employees would follow “identified” customers around the store, accuse them of stealing and call the police. People of colour were falsely identified at a higher rate than white customers.

It is important to note that someone who has shoplifted in the past is not necessarily planning to shoplift again.

The use of facial recognition software in Canada is controversial. In 2021, it was reported that Toronto police used Clearview AI, a facial recognition software, in 84 investigations, with at least two cases proceeding to prosecution. Once the police chief learned of the practice, however, it was stopped.

Discrimination and AI

Overall accuracy rates for facial recognition software are above 90 per cent, but that number drops sharply within certain demographics. Facial recognition software has been documented to misidentify women, racialized people and people between the ages of 18 and 30, with accuracy falling to as low as 35 per cent.
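The gap described above is an aggregation effect: a headline accuracy figure can hide much worse performance for a smaller group. The sketch below uses hypothetical counts, not real benchmark data; the group labels and numbers are invented purely to show the arithmetic.

```python
# Hypothetical (correct, total) identification counts per group.
results = {
    "majority_group": (96, 100),
    "minority_group": (7, 20),
}

def rate(correct, total):
    """Fraction of identifications that were correct."""
    return correct / total

# Pooled accuracy looks strong: 103 correct of 120 attempts.
overall = rate(
    sum(c for c, _ in results.values()),
    sum(t for _, t in results.values()),
)

# Per-group accuracy tells a different story.
per_group = {g: rate(c, t) for g, (c, t) in results.items()}

print(round(overall, 2))   # prints: 0.86
print(per_group)           # majority: 0.96, minority: 0.35
```

Here the pooled figure of roughly 86 per cent conceals a minority-group accuracy of only 35 per cent, which is why accuracy claims for facial recognition need to be reported by demographic, not just in aggregate.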

In February 2023, Porcha Woodruff, a 32-year-old pregnant Black woman from Detroit, was arrested for robbery and carjacking based on a facial recognition match. Police had run an image of a carjacker caught on video through a mugshot database containing Woodruff’s photo, and the AI incorrectly matched the two.

Woodruff was jailed for 11 hours and went into labour. The charges were dropped, and Woodruff is currently suing the city of Detroit and the Detroit Police Department.

CBS Detroit interviews researcher Dorothy Roberts about Porcha Woodruff’s misidentification by facial recognition technology.

Consequences of misidentification

According to the U.S.-based Innocence Project, mistaken eyewitness identification was a contributing factor in over 70 per cent of known wrongful convictions. The Canadian Registry of Wrongful Convictions finds that roughly a third of its cases involved false identification.

People can show what is known as “own-race bias” when identifying faces: they are more accurate at identifying faces of their own race than faces of other races.

The misidentification of a perpetrator, whether by a human or an AI program, can lead to the same consequences: being charged, prosecuted or wrongfully convicted. Technology, like humans, is not always accurate and can succumb to similar biases.

Legislation must keep pace to protect people’s rights and privacy. As technology evolves, the public must be given adequate information and full transparency on how, when and where a technology is in use. It is also clear that far more research is needed to better understand the impact of technology on the criminal justice system.
