Human foibles and a moving target
Combining “soft” and “hard” approaches
Four key questions to ask


Human foibles and a moving target

S. Shyam Sundar, Professor of Media Effects & Director, Center for Socially Responsible AI, Penn State

The reason to regulate AI isn’t that the technology is out of control, but that human imagination is out of proportion. Gushing media coverage has fueled irrational beliefs about AI’s abilities and consciousness. Such beliefs build on “automation bias,” the tendency to let your guard down when machines are performing a task. An example is reduced vigilance among pilots when their aircraft is flying on autopilot.

Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the assumption that machines are accurate, objective, unbiased, infallible and so forth. It clouds the user’s judgment and leads users to trust machines too much. However, simply disabusing people of the notion that AI is infallible isn’t sufficient, because humans are known to unconsciously assume competence even when the technology doesn’t warrant it.

Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanness, such as using conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them blindly. Regulation is needed to ensure that AI products deserve this trust and don’t exploit it.

AI poses a unique challenge because, unlike traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional automobile was shipped out of the factory, engineers knew exactly how it would function. But with self-driving cars, the engineers can never be sure how they will perform in novel situations.

Lately, thousands of people around the world have been marveling at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their prompts. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.

All this means there is plenty of potential for misfires. Therefore, a lot depends on how AI systems are deployed and what provisions for recourse are in place when human sensibilities or welfare are hurt. AI is more of an infrastructure, like a freeway. You can design it to shape human behaviors in the collective, but you will need mechanisms for tackling abuses, such as speeding, and unpredictable occurrences, like accidents.

AI developers will also have to be inordinately creative in envisioning ways that the system might behave and try to anticipate potential violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, though I believe that these frameworks should also recognize that the systems’ designers cannot always be held accountable for mishaps.

Artificial intelligence researcher Joanna Bryson describes how professional organizations can play a role in regulating AI.

Combining ‘soft’ and ‘hard’ approaches

Cason Schmit, Assistant Professor of Public Health, Texas A&M University

Regulating AI is difficult. To regulate AI well, you must first define AI and understand anticipated AI risks and benefits. Legally defining AI is important for identifying what is subject to the law. But AI technologies are still evolving, so it is hard to pin down a stable legal definition.

Understanding the risks and benefits of AI is also vital. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits might be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.

Lawmakers are often too slow to adapt to the rapidly changing technological environment. Some new laws are obsolete by the time they are enacted or even introduced. Without new laws, regulators have to use old laws to address new problems. Sometimes this leads to legal barriers for social benefits or legal loopholes for harmful conduct.

“Soft laws” are the alternative to traditional “hard law” approaches of legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change more rapidly than traditional lawmaking. This makes soft laws promising for emerging technologies because they can adapt quickly to new applications and risks. However, soft laws can mean soft enforcement.

Megan Doerr, Jennifer Wagner and I propose a third way: Copyleft AI with Trusted Enforcement (CAITE). This approach combines two very different concepts in intellectual property: copyleft licensing and patent trolls.

Copyleft licensing allows content to be used, reused or modified easily under the terms of a license – for example, open-source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.

In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and can be funded in part by fines from unethical conduct. This entity is like a patent troll in that it is private rather than governmental and it supports itself by enforcing the legal intellectual property rights that it collects from others. In this case, rather than enforcing for profit, the entity enforces the ethical guidelines defined in the licenses – a “troll for good.”

This model is flexible and adaptable to meet the needs of a changing AI environment. It also enables substantial enforcement options like those of a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.

Though generative AI has been grabbing headlines of late, other types of AI have been posing challenges for regulators for years, particularly in the area of data privacy.

Four key questions to ask

John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management, University of California, Los Angeles

The extraordinary recent advances in large language model-based generative AI are spurring calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:

1) Is new AI-specific regulation necessary? Many of the potentially problematic outcomes from AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory loan decisions, that would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability law provides a framework for pursuing remedies.

2) What are the risks of regulating a rapidly changing technology based on a snapshot in time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-novel digital communication technologies like email. In enacting the SCA, Congress provided substantially less privacy protection for emails more than 180 days old.

The logic was that limited storage space meant that people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. As a result, messages stored for more than 180 days were deemed less important from a privacy standpoint. It’s not clear that this logic ever made sense, and it certainly doesn’t make sense in the 2020s, when the majority of our emails and other stored digital communications are older than six months.

A common rejoinder to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, update it. But this is easier said than done. Most people agree that the SCA became outdated decades ago. But because Congress hasn’t been able to agree on specifically how to revise the 180-day provision, it’s still on the books over a third of a century after its enactment.

3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 was a law passed in 2018 that revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there’s little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on a different group of people: sex workers who used to rely on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad look at the potential effects of proposed regulations.

4) What are the economic and geopolitical implications? If regulators in the United States act to intentionally slow progress in AI, that will simply push investment and innovation, and the resulting job creation, elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services and more.

I believe AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while also ensuring access to its benefits.

This article was originally published at theconversation.com