The comprehensive, even sweeping, set of guidelines for artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, show that the U.S. government is attempting to handle the risks posed by AI.

As a researcher of knowledge systems and responsible AI, I imagine the manager order represents a vital step in constructing responsible and trustworthy AI.

The order is just a step, nonetheless, and it leaves unresolved the problem of comprehensive data privacy laws. Without such laws, persons are at greater risk of AI systems revealing sensitive or confidential information.

Understanding AI risks

Technology is often evaluated for performance, cost and quality, but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:

The National Institute of Standards and Technology (NIST) issued a comprehensive AI risk management framework in January 2023 that goals to handle a lot of these issues. The framework serves as the inspiration for much of the Biden administration’s executive order. The executive order also empowers the Department of Commerce, NIST’s home within the federal government, to play a key role in implementing the proposed directives.

Researchers of AI ethics have long cautioned that stronger auditing of AI systems is required to avoid giving the looks of scrutiny without real accountability. As it stands, a recent study public disclosures from firms found that claims of AI ethics practices outpace actual AI ethics initiatives. The executive order could help by specifying avenues for enforcing accountability.

Another necessary initiative outlined in the manager order is probing for vulnerabilities of very large-scale general-purpose AI models trained on massive amounts of knowledge, akin to the models that power OpenAI’s ChatGPT or DALL-E. The order requires firms that construct large AI systems with the potential to affect national security, public health or the economy to perform red teaming and report the outcomes to the federal government. Red teaming is using manual or automated methods to try and force an AI model to provide harmful output – for instance, make offensive or dangerous statements like advice on find out how to sell drugs.

Reporting to the federal government is significant on condition that a recent study found many of the firms that make these large-scale AI systems lacking relating to transparency.

Similarly, the general public is vulnerable to being fooled by AI-generated content. To address this, the manager order directs the Department of Commerce to develop guidance for labeling AI-generated content. Federal agencies shall be required to make use of AI watermarking – technology that marks content as AI-generated to scale back fraud and misinformation – though it’s not required for the private sector.

The executive order also recognizes that AI systems can pose unacceptable risks of harm to civil and human rights and the well-being of people: “Artificial Intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused recent varieties of harmful discrimination, and exacerbated online and physical harms.”

The U.S. government takes steps to handle the risks posed by AI.

What the manager order doesn’t do

A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy laws. The executive order only calls on Congress to adopt privacy laws, but it surely doesn’t provide a legislative framework. It stays to be seen how the courts will interpret the manager order’s directives in light of existing consumer privacy and data rights statutes.

Without strong data privacy laws within the U.S. as other countries have, the manager order could have minimal effect on getting AI firms to spice up data privacy. In general, it’s difficult to measure the impact that decision-making AI systems have on data privacy and freedoms.

It’s also value noting that algorithmic transparency isn’t a panacea. For example, the European Union’s General Data Protection Regulation laws mandates “meaningful information concerning the logic involved” in automated decisions. This suggests a right to an evidence of the standards that algorithms use of their decision-making. The mandate treats the means of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they’ll understand how the system affects them. But knowing how an AI system works doesn’t necessarily let you know why it made a selected decision.

With algorithmic decision-making becoming pervasive, the White House executive order and the international summit on AI safety highlight that lawmakers are starting to grasp the importance of AI regulation, even when comprehensive laws is lacking.

This article was originally published at theconversation.com