The use of artificial intelligence (AI) by New Zealand police is putting the spotlight on policing tactics in the 21st century.

A recent Official Information Act request by Radio New Zealand revealed the use of SearchX, an AI tool that can draw connections between suspects and their wider networks.

SearchX works by instantly finding connections between people, locations, criminal charges and other factors likely to increase the risk of harm to officers.

Police say SearchX is at the heart of a NZ$200 million front-line safety programme, largely developed after the death of police constable Matthew Hunt in West Auckland in 2020, as well as other recent gun violence.

But the use of SearchX and other AI programmes raises questions about the invasive nature of the technology, inherent biases and whether New Zealand’s current legal framework will be enough to protect the rights of everyone.

Controversial technologies

At this stage, New Zealanders only have a limited view of the AI programmes being used by the police. While some of the programmes are public, others are being kept under wraps.

Police have acknowledged using Cellebrite, a controversial phone hacking technology. This programme extracts personal data from iPhones and Android mobiles and can access more than 50 social media platforms, including Instagram and Facebook.



The police have also acknowledged using BriefCam, which aggregates video footage, including facial recognition and vehicle licence plates.

BriefCam allows police to focus on and track a person or vehicle of interest. Police claim BriefCam can reduce the time spent analysing CCTV footage from three months to two hours.

Other AI tools such as Clearview AI – which takes photographs from publicly accessible social media sites to identify a person – were tested by police before being abandoned.

The use of Clearview was particularly controversial because it was trialled without the clearance of the police leadership team or the Privacy Commissioner.


Eroding privacy?

The promise of AI is that it can predict and prevent crime. But there are also concerns over the use of these tools by police.

Cellebrite and BriefCam are highly intrusive programmes. They enable law enforcement to access and analyse personal data without people realising, much less giving consent.

But under current laws, the use of both programmes by police is legal.

The Privacy Act 2020 allows government agencies – including police – to collect, withhold, use or disclose personal information in a way that would otherwise breach the act, where necessary for the “maintenance of the law”.

AI’s biased decisions

Privacy is not the only issue raised by these programmes. There is a tendency to assume decisions made by AI are more accurate than those made by humans – particularly as tasks become more difficult.

This bias in favour of AI decisions means investigations may harden against the AI-identified perpetrator rather than other suspects.

Some of these mistakes can be tied to biases in the algorithms. Over the past decade, scholars have documented the negative impacts of AI on people with low incomes and the working class, particularly in the justice system.



Research has shown ethnic minorities are more likely to be misidentified by facial recognition software.

AI’s use in predictive policing is also a problem, as AI can be fed data from over-policed neighbourhoods, which fails to record crime occurring in other neighbourhoods.

The bias is compounded further as AI increasingly directs police patrols and other surveillance onto these already over-policed neighbourhoods.

This is not just an issue overseas. Analyses of the New Zealand government’s use of AI have raised a number of concerns, such as problems with transparency and privacy, as well as the management of “dirty data” – data with human biases already baked in before it is entered into AI programmes.

We need updated laws

There is no legal framework for the use of AI in New Zealand, much less for police use of it. This lack of regulation is not unique, though. Europe’s long-awaited AI law still hasn’t been implemented.

That said, New Zealand Police is a signatory to the Australia New Zealand Police Artificial Intelligence Principles. These establish guidelines around transparency, proportionality and justifiability, human oversight, explainability, fairness, reliability, accountability, privacy and security.

The Algorithm Charter for Aotearoa New Zealand covers the moral and responsible use of AI by government agencies.



Under the principles, police are supposed to continually monitor, test and develop AI systems and ensure data are relevant and up to date. Under the charter, police must have a point of contact for public inquiries and a channel for challenging or appealing decisions made by AI.

But these are both voluntary codes, leaving significant gaps in legal accountability and oversight of police.

And it’s not looking good so far. Police have failed to implement one of the first – and most basic – steps of the charter: to establish a point of inquiry for people who are concerned about the use of AI.

There is no dedicated page on the police website dealing with the use of AI, nor is there anything on the main feedback page specifically mentioning the subject.

In the absence of a clear legal framework, with an independent body monitoring the police’s actions and enforcing the law, New Zealanders are left relying on police to police themselves.

AI is barely on the radar ahead of the 2023 election. But as it becomes more pervasive across government agencies, New Zealand needs to follow Europe’s lead and enact AI regulation to ensure police use of AI doesn’t cause more problems than it solves.

This article was originally published at theconversation.com