The London Underground employed AI surveillance technology in a year-long trial. 

From October 2022 to September 2023, Transport for London (TfL) tested 11 distinct algorithms at Willesden Green Tube station in northwest London. 

According to detailed documents obtained by WIRED, the trial involved monitoring hundreds of passengers’ movements, behaviors, and body language to detect potential criminal activity and safety hazards. 

The AI software, combined with live CCTV footage (an application of the computer vision (CV) branch of machine learning), was trained to detect aggressive behavior, weapons, fare evasion, and accidents, such as people potentially falling onto the tracks.
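TfL hasn’t published its system’s internals, but the general pattern described here, a detector running over live CCTV frames and raising category-specific alerts, looks roughly like the sketch below. The feed URL, alert categories, and the detect_events stub are all assumptions for illustration, not TfL’s actual code.

```python
# Hypothetical sketch of a CCTV alerting loop: pull frames from a live
# feed, run a computer-vision detector, and emit category-specific alerts.
import cv2  # OpenCV, used here only for video capture

ALERT_CATEGORIES = {"weapon", "aggression", "fare_evasion", "track_intrusion"}

def detect_events(frame) -> list[dict]:
    """Placeholder for a trained detection model. A real system would
    return detections such as
    {"category": "weapon", "confidence": 0.91, "bbox": (x, y, w, h)}."""
    return []  # no model bundled here; illustrative only

def monitor(feed_url: str) -> None:
    capture = cv2.VideoCapture(feed_url)  # open the live CCTV stream
    while capture.isOpened():
        ok, frame = capture.read()
        if not ok:
            break  # stream ended or dropped
        for event in detect_events(frame):
            if event["category"] in ALERT_CATEGORIES:
                print(f"ALERT: {event['category']} "
                      f"(confidence {event['confidence']:.2f})")
    capture.release()

if __name__ == "__main__":
    monitor("rtsp://example.local/platform-1")  # hypothetical stream URL
```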

The UK police have previously experimented with AI surveillance and continue to do so at some public events, as they did at a Beyoncé concert last year.

However, it has often proven ineffective, and human rights groups have criticized the technology, calling it a troubling invasion of privacy and a source of prejudice and discrimination.

AI video technology has a problematic history, with numerous projects worldwide underdelivering and, in some cases, associating darker-skinned individuals with crimes they didn’t commit. 

Over TfL’s trial period, some 44,000 alerts were generated, of which roughly 19,000 were relayed directly to staff for intervention.

Officers participated in tests by brandishing weapons such as machetes and guns within the CCTV’s field of view (albeit while the station was closed), aiming to better train the AI.

Here’s the entire list of results:

  1. Total alerts: Over 44,000 alerts were issued by the AI system.
  2. Real-time alerts to station staff: 19,000 alerts were delivered in real time to station staff for immediate action.
  3. Fare evasion alerts: The AI system generated 26,000 alerts related to fare evasion activities.
  4. Wheelchair alerts: There were 59 alerts concerning wheelchair users at the station, which lacks proper wheelchair access facilities.
  5. Safety line alerts: Nearly 2,200 alerts were issued for people crossing the yellow safety lines on platforms.
  6. Platform edge alerts: The system generated 39 alerts for people leaning over the edge of the train platforms.
  7. Extended bench sitting alerts: Almost 2,000 alerts were issued for people sitting on benches for prolonged periods, which could indicate various concerns, including passenger well-being or security risks.
  8. Aggressive behavior alerts: There were 66 alerts related to aggressive behavior, although the AI system struggled to detect such incidents reliably due to insufficient training data.

However, the AI system didn’t perform well in some scenarios, producing erroneous results, such as flagging children passing through ticket barriers as potential fare evaders.
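The documents don’t say how, or whether, alerts were filtered before reaching staff. One common way to curb false positives like these, shown below purely as an illustration and not as TfL’s documented approach, is a per-category confidence threshold; the categories and threshold values are made up.

```python
# Hypothetical triage step: only relay high-confidence alerts to staff.
# Thresholds and categories are illustrative, not TfL's values.
THRESHOLDS = {
    "weapon": 0.60,        # err toward relaying anything plausible
    "aggression": 0.80,    # noisier category, so require more confidence
    "fare_evasion": 0.90,  # prone to false positives (e.g. children at barriers)
}

def should_relay(event: dict) -> bool:
    """Relay an alert to station staff only if it clears its
    category's confidence threshold."""
    threshold = THRESHOLDS.get(event["category"])
    return threshold is not None and event["confidence"] >= threshold

# Example: a low-confidence fare-evasion detection is held back.
print(should_relay({"category": "fare_evasion", "confidence": 0.55}))  # False
print(should_relay({"category": "weapon", "confidence": 0.72}))        # True
```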

According to TfL, the ultimate goal is to foster a safer, more efficient Tube that protects both the public and staff. 

AI surveillance technology isn’t intrinsically harmful when used for public safety, but once the technology is in place, keeping it under control is a difficult endeavor.

There is already evidence of AI misuse in the UK’s public sector, and scandals in other countries indicate this is a slippery slope when not handled with the utmost care.

This article was originally published at dailyai.com