Algorithms have taken a lot of flak recently, particularly those used by the federal government and other public bodies in the UK. The controversial algorithm used to award student grades caused an enormous public outcry, but national and local government bodies and several police forces have been withdrawing other algorithms and artificial intelligence tools from use all year in response to legal challenges and design failures.

This has quite rightly brought it home to public sector organisations that a more critical approach to AI and algorithmic decision-making is required. But there are numerous ways in which government bodies can deploy such technology in lower-risk, high-impact scenarios that could improve lives, particularly when they don't directly use personal data.

So before we leap headlong into AI cynicism, we should consider the benefits it offers as well as the risks, and demand a more responsible approach to AI development and deployment.

One example of this is the Intelligent Street Lighting project being trialled by Glasgow City Council. It uses an algorithm to process real-time sensor data on noise, air pollution and footfall around the city and control street lighting in response to people's use of cycle paths and open spaces.

The aim is to immediately improve safety while also allowing for better city planning and environmental protection. Importantly, this project is being properly trialled and is open to public scrutiny, which will help address people's concerns and needs.
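The control logic behind this kind of system is easy to picture. The sketch below is a minimal, hypothetical Python illustration of how sensor readings might be mapped to lamp brightness; the sensor fields, thresholds and brightness levels are invented for illustration and are not drawn from Glasgow's actual design.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Aggregated real-time readings for one stretch of path (illustrative values only)."""
    footfall_per_min: float   # people detected per minute
    noise_db: float           # ambient noise level in decibels
    air_quality_index: float  # higher means more polluted

def lighting_level(reading: SensorReading) -> int:
    """Return a lamp brightness percentage based on how busy the area appears.

    A purely hypothetical policy: brighten well-used paths for safety and
    dim empty ones to save energy. Pollution data is not used for control
    here, but would be logged for planning purposes.
    """
    if reading.footfall_per_min >= 10 or reading.noise_db >= 70:
        return 100   # busy area: full brightness
    if reading.footfall_per_min >= 2:
        return 60    # light use: moderate brightness
    return 20        # empty: dim to a minimum safe level

# Example: a well-used cycle path in the early evening.
print(lighting_level(SensorReading(footfall_per_min=12, noise_db=65, air_quality_index=40)))
```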

Similarly, Liverpool City Council is working with the company Red Ninja on the Life First Emergency Traffic Control project, which aims to cut ambulance journey times by up to 40%. A new algorithm works within the existing traffic light system to prioritise emergency vehicles, aiming to reduce congestion ahead of them and save critical minutes on ambulance response times.
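The underlying idea, sometimes called signal pre-emption or a "green wave", can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration only: the junction names, timings and 20-second pre-emption window are made up, and Red Ninja's actual system will be far more sophisticated.

```python
from typing import Dict, List, Tuple

def plan_green_wave(route_junctions: List[str],
                    eta_seconds: Dict[str, float],
                    pre_empt_window: float = 20.0) -> List[Tuple[str, float]]:
    """Return (junction, switch_time) pairs for an ambulance's route.

    Each signal is switched to green shortly before the ambulance is due,
    so queued traffic has time to clear. This is a toy illustration of
    signal pre-emption: a real system must also handle conflicting phases,
    pedestrian crossings and recovery once the vehicle has passed.
    """
    plan = []
    for junction in route_junctions:
        switch_time = max(0.0, eta_seconds[junction] - pre_empt_window)
        plan.append((junction, switch_time))
    return plan

# Example: three junctions on the route, with estimated arrival times in seconds.
print(plan_green_wave(["J1", "J2", "J3"], {"J1": 45.0, "J2": 90.0, "J3": 150.0}))
```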

Governments can also use AI for many low-risk jobs that do not directly aim to predict human behaviour or make decisions directly affecting individuals. For example, National Grid uses AI and drones to inspect 7,200 miles of overhead power lines in England and Wales.

It is able to assess the steelwork, wear and corrosion, and faults in conductors. This speeds up inspection, saving time and money, and allows human engineers to concentrate on repairs and improvements, producing a more reliable energy supply.

AI can power automation of difficult jobs.
KOHUKU/Shutterstock

The Driver and Vehicle Standards Agency (DVSA) has used AI to improve MOT testing, analysing the vast amount of testing data to develop risk scores for garages and identify potentially underperforming centres. This has reduced enforcement visits by 50%.
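The idea of deriving risk scores from test data can be illustrated with a deliberately simplified sketch. The Python snippet below is hypothetical: it treats the absolute z-score of a garage's MOT pass rate as its "risk score", which is not the DVSA's actual model, and the garages and figures are invented.

```python
import statistics

def garage_risk_scores(pass_rates: dict) -> dict:
    """Score each garage by how far its MOT pass rate deviates from the group norm.

    A deliberately simple stand-in for a real risk model: the score is the
    absolute z-score of a garage's pass rate, so unusually high or unusually
    low pass rates both attract attention.
    """
    rates = list(pass_rates.values())
    mean, stdev = statistics.mean(rates), statistics.stdev(rates)
    return {garage: abs(rate - mean) / stdev for garage, rate in pass_rates.items()}

# Illustrative data only: pass rates (%) for a handful of test centres.
scores = garage_risk_scores({"Garage A": 74.0, "Garage B": 71.5, "Garage C": 97.0, "Garage D": 69.0})
for garage, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{garage}: risk score {score:.2f}")
```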

The counterpart Driver and Vehicle Licensing Agency (DVLA) used a natural language processing algorithm to develop a chatbot to deal with customer enquiries. This is integrated into a single customer service platform so that staff can monitor all customer interactions by phone, email, webchat and social media.

These examples show the potential for government to use AI successfully and responsibly. So how can public sector bodies ensure their algorithms achieve this?

To begin with, there are many sets of guidelines they can follow, such as the OECD Principles on AI. These principles state that AI should be designed in a way that respects human rights, democratic values and diversity, and includes appropriate safeguards and monitoring of risks. There is a requirement for transparency and responsible disclosure so people understand the systems and can challenge them.

But guidelines aren't necessarily enough. The UK government has published its own guidelines for trustworthy use of AI, and has invested significantly in a number of expert AI advisory bodies. Yet it has still managed to get many things wrong in its development of algorithms, as recent events have shown.

One reason for this is that there is still little acceptance that AI technology is not yet good enough to be safely used in high-impact, high-risk cases, such as awarding grades and visas. Sometimes AI isn't the answer.

Laws and nudges

New laws regulating the use of AI could help, but few countries have yet passed specific legislation. There are some good examples in development, such as the proposed US AI Accountability Bill. However, legislation moves slowly, is subject to significant lobbying and is outstripped by the speed of tech innovation. So quicker nudges towards responsible behaviour are needed.

The recent abandonment of certain government algorithms has shown that when the public is aware of poorly developed AI, it can change government behaviour and create demand for more trustworthy use of technology. So one possible solution, called for by the researcher network Women Leading in AI, of which I am a founder, is an AI Infomark.

Any apps, websites or documents relating to government services, systems or decisions that use AI would display the mark to alert people to that fact and point them to information about how the AI works and its potential impact and risk. This is a citizen-first strategy designed to empower people to understand and challenge an algorithm or AI system that has affected them. And this could hopefully push government to make sure it gets things right in the first place.

If government can combine adequate regulation with this sort of empowering, bottom-up approach to ensuring more responsible technology, we can begin to reap the real benefits of greater use of algorithms and AI.

This article was originally published at theconversation.com