Artificial intelligence is already helping determine your future – whether it’s your Netflix viewing preferences, your suitability for a mortgage or your compatibility with a prospective employer. But can we agree, at least for now, that having an AI determine your guilt or innocence in a court of law is a step too far?

Worryingly, it seems this may already be happening. When US Chief Justice John Roberts recently attended an event, he was asked whether he could foresee a day “when smart machines, driven with artificial intelligences, will assist with courtroom fact finding or, more controversially even, judicial decision making”. He responded: “It’s a day that’s here and it’s putting a major strain on how the judiciary goes about doing things”.

Roberts may have been referring to the recent case of Eric Loomis, who was sentenced to six years in prison at least in part because of the recommendation of a private company’s secret proprietary software. Loomis, who has a criminal history and was sentenced for fleeing the police in a stolen car, now asserts that his right to due process was violated because neither he nor his representatives were able to scrutinise or challenge the algorithm behind the recommendation.

The report was produced by a software product called Compas, which is marketed and sold to courts by Northpointe Inc. The program is one incarnation of a new trend within AI research: systems designed to help judges make “better” – or at least more data-centric – decisions in court.

While specific details of Loomis’ report remain sealed, the document is likely to contain a number of charts and diagrams quantifying Loomis’ life, behaviour and likelihood of re-offending. It might also include his age, race, gender identity, browsing habits and, I don’t know … measurements of his skull. The point is we don’t know.

What we do know is that the prosecutor in the case told the judge that Loomis displayed “a high risk of violence, high risk of recidivism, high pretrial risk.” This is standard stuff when it comes to sentencing. The judge concurred and told Loomis that he was “identified, through the Compas assessment, as a person who’s a high risk to the community”.

The Wisconsin Supreme Court ruled against Loomis, noting that the Compas report brought valuable information to the decision, but qualified this by saying he would have received the same sentence without it. But how can we know that for sure? What sort of cognitive biases are involved when an all-powerful “smart” system like Compas suggests what a judge should do?

Unknown use

Now let’s be clear, there is nothing “illegal” about what the Wisconsin court did – it’s just a bad idea under the circumstances. Other courts are free to do the same.

Worryingly, we don’t actually know the extent to which AI and other algorithms are being used in sentencing. My own research indicates that several jurisdictions are “trialling” systems like Compas in closed trials, but that they cannot announce details of their partnerships or where and when the systems are being used. We also know that a number of AI startups are competing to build similar systems.

However, the use of AI in law doesn’t start and end with sentencing; it starts with investigation. A system called VALCRI has already been developed to perform the labour-intensive parts of a crime analyst’s job in mere seconds – wading through tonnes of data such as texts, lab reports and police documents to highlight things that may warrant further investigation.

The UK’s West Midlands Police will be trialling VALCRI for the next three years using anonymised data – amounting to some 6.5m records. A similar trial is underway with the police in Antwerp, Belgium. However, past AI and deep learning projects involving massive data sets have been problematic.

Benefits for the few?

Technology has brought many benefits to the courtroom, ranging from photocopiers to DNA fingerprinting and sophisticated surveillance techniques. But that doesn’t mean any technology is an improvement.

Algorithms can be racist, too. Vintage Tone/Shutterstock

While using AI in investigations and sentencing could potentially help save time and money, it raises some thorny issues. A report on Compas from ProPublica made clear that black defendants in Broward County, Florida “were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism”. Recent work by Joanna Bryson, professor of computer science at the University of Bath, highlights that even the most “sophisticated” AIs can inherit the racial and gender biases of those who create them.

What’s more, what’s the point of offloading decision making (at least partly) to an algorithm on matters that are uniquely human? Why do we go to the trouble of selecting juries composed of our peers? The standard in law has never been one of perfection, but rather the best that our abilities as mere humans allow. We make mistakes but, over time, and with practice, we accumulate knowledge on how not to make them again – continually refining the system.

What Compas, and systems like it, represent is the “black boxing” of the legal system. This must be resisted forcefully. Legal systems depend on continuity of information, transparency and the ability to review. What we do not want as a society is a justice system that encourages a race to the bottom for AI startups to deliver products as quickly, cheaply and exclusively as possible. While some AI observers have seen this coming for years, it’s now here – and it’s a terrible idea.

An open source, reviewable version of Compas would be an improvement. However, we must ensure that we first raise standards in the justice system before we begin offloading responsibility to algorithms. AI should not just be an excuse not to invest.

While there is a lot of money to be made in AI, there is also a lot of real opportunity. It can change a lot for the better if we get it right, and ensure that its benefits accrue to everyone and don’t just entrench power at the top of the pyramid.

I have no perfect solutions for all these problems right now. But I do know that when it comes to the role of AI in law, we must ask in which contexts these systems are being used, for what purposes and with what meaningful oversight. Until those questions can be answered with certainty, be very, very sceptical. Or at the very least, know some very good lawyers.

This article was originally published at theconversation.com