Takeaways:

  • A new federal agency to regulate AI sounds helpful but could end up unduly influenced by the tech industry. Instead, Congress can legislate accountability.

  • Instead of licensing firms to release advanced AI technologies, the federal government could license auditors and push for firms to establish institutional review boards.

  • The government hasn’t had great success in curbing technology monopolies, but disclosure requirements and data privacy laws could help check corporate power.


OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The solutions Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new kind of tech monopoly.

As a researcher who studies social media and artificial intelligence, I believe that Altman’s suggestions have highlighted important issues but don’t provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies’ economic power and political sway.

An agency to regulate AI?

Lawmakers and policymakers around the world have already begun to address some of the issues raised in Altman’s testimony. The European Union’s AI Act relies on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than those from using AI in spam filters, for example.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.

Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.

Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers today.

Cognitive scientist and AI developer Gary Marcus explains the need to regulate AI.

Licensing auditors, not firms

Though OpenAI’s Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI systems with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not only a matter of licensing individuals but also requires companywide standards and practices.

Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices, such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.

Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it’s authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It’s also important to recognize that greater data accountability and transparency may impose new restrictions on organizations.

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize the harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.

Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.

AI monopolies?

What was also missing from Altman’s testimony is the scale of investment required to train large-scale AI models, whether it’s GPT-4, one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world’s largest language models.

Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people, including Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.

Proving technology companies’ monopoly power can be difficult, as the Department of Justice’s antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI companies and users of AI alike, to encourage comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.


This article was originally published at theconversation.com