The U.S. Federal Trade Commission just fired a shot across the bow of the artificial intelligence industry. On April 19, 2021, a staff attorney at the agency, which serves as the nation's leading consumer protection authority, wrote a blog post about biased AI algorithms that included a blunt warning: "Hold yourself accountable – or be ready for the FTC to do it for you."

The post, titled "Aiming for truth, fairness, and equity in your company's use of AI," was notable for its tough and specific rhetoric about discriminatory AI. The author noted that the commission's authority to prohibit unfair and deceptive practices "would include the sale or use of – for example – racially biased algorithms" and that industry exaggerations about the ability of AI to make fair or unbiased hiring decisions could result in "deception, discrimination – and an FTC law enforcement action."

Bias seems to pervade the AI industry. Companies large and small are selling demonstrably biased systems, and their customers are in turn applying them in ways that disproportionately affect the vulnerable and marginalized. Examples of areas where they are being misused include health care, criminal justice and hiring.

Whatever they say or do, companies seem unable or unwilling to rid their data sets and models of the racial, gender and other biases that suffuse society. Industry efforts to address fairness and equity have come under fire as inadequate or poorly supported by leadership, sometimes collapsing entirely.

As a researcher who studies law and technology and a longtime observer of the FTC, I took particular note of the not-so-veiled threat of agency action. Agencies routinely use formal and informal policy statements to put regulated entities on notice that they are paying attention to a particular industry or issue. But such a direct threat of agency action – get your act together, or else – is comparatively rare for the commission.

What the FTC can do – but hasn’t done

The FTC's approach to discriminatory AI stands in stark contrast to, for example, the early days of internet privacy. In the 1990s, the agency embraced a more hands-off, self-regulatory paradigm, becoming more assertive only after years of privacy and security lapses.

Tech industry critic Lina Khan's nomination to be a commissioner on the FTC is further evidence of the Biden administration's intention to use the agency to regulate the industry.

How much should industry or the public read into a blog post by one government attorney? In my experience, FTC staff generally don't go rogue. If anything, the fact that a staff attorney apparently felt empowered to use such strong rhetoric on behalf of the commission suggests a broader base of support throughout the agency for policing AI.

Can a federal agency, or anyone, define what makes AI fair or equitable? Not easily. But that's not the FTC's charge. The agency only has to determine whether the AI industry's business practices are unfair or deceptive – a standard the agency has nearly a century of experience enforcing – or otherwise in violation of laws that Congress has tasked the agency with enforcing.

Shifting winds on regulating AI

There are reasons to be skeptical of a sea change. The FTC is chronically understaffed, especially with respect to technologists. And the Supreme Court recently dealt the agency a setback by requiring it to clear additional hurdles before it can seek monetary restitution from violators of the FTC Act.

But the winds are also in the commission's sails. Public concern over AI is growing. Current and incoming commissioners – there are five, with three Democratic appointees – have been vocally skeptical of the technology industry, as is President Biden. The same week as that Supreme Court decision, the commissioners found themselves before the U.S. Senate answering the Commerce Committee's questions about how the agency could do more for American consumers.

I don't expect the AI industry to change overnight in response to a blog post. But I would be equally surprised if this blog post were the agency's last word on discriminatory AI.
