In October 2023, New York City Mayor Eric Adams announced an AI-powered chatbot, built in collaboration with Microsoft, to help business owners understand government regulations.

The project soon went off the rails, providing unlawful advice in response to sensitive questions about housing and consumer rights.

For example, when landlords asked whether they had to accept tenants with Section 8 vouchers, the chatbot advised that they could refuse them.

Under New York City law, discriminating against tenants based on their source of income is prohibited, with very limited exceptions.

Upon examining the chatbot’s outputs, Rosalind Black, Citywide Housing Director at Legal Services NYC, discovered that the chatbot also advised it was permissible to lock out tenants. The chatbot claimed, “There are no restrictions on the amount of rent that you can charge a residential tenant.” 

The chatbot’s flawed advice extended beyond housing. “Yes, you can make your restaurant cash-free,” it advised, contradicting a 2020 city law that requires businesses to accept cash to avoid discriminating against customers without bank accounts. 

Moreover, it wrongly suggested that employers could take a cut of their workers’ tips, and it provided misinformation about the rules for notifying staff of scheduling changes.

Black warned, “If this chatbot is not being done in a way that is responsible and accurate, it should be taken down.”

Andrew Rigie, Executive Director of the NYC Hospitality Alliance, warned that anyone following the chatbot’s advice could incur hefty legal liabilities. “AI can be a powerful tool to support small business…but it can also be a massive liability if it’s providing the wrong legal information,” Rigie said. 

In response to mounting criticism, Leslie Brown of the NYC Office of Technology and Innovation framed the chatbot as a work in progress. 

Brown asserted, “The city has been clear the chatbot is a pilot program and will improve, but has already provided thousands of people with timely, accurate answers.”

But was deploying a “work in progress” in such a sensitive area a good idea in the first place?

AI legal liabilities hit companies

AI chatbots can do many things, but providing legal advice is not yet one of them.

In February 2024, Air Canada found itself at the center of a legal dispute over a misleading refund policy communicated by its AI chatbot. 

Jake Moffatt, seeking clarity on the airline’s bereavement fare policy during a personal crisis, was wrongly informed by the chatbot that he could secure a special discounted rate after booking. This contradicted the airline’s actual policy, which does not permit refunds for bereavement travel requested after booking. 

The ensuing legal battle culminated in Air Canada being ordered to honor the incorrect policy stated by the chatbot, resulting in Moffatt receiving a refund. 

AI has also landed lawyers themselves in trouble. Perhaps most notably, New York lawyer Steven A. Schwartz used ChatGPT for legal research and inadvertently cited fabricated legal cases in a brief. 

With everything we know about AI hallucinations, relying on chatbots for legal advice is not advisable, no matter how trivial the matter may seem.

The post New York City-endorsed AI chatbot provides illegal advice to users appeared first on DailyAI.