Over 100 leading AI experts issued an open letter demanding that the companies behind generative AI technologies, such as OpenAI and Meta, open their doors to independent testing. 

Their message is clear: AI developers’ terms and conditions are curbing independent research into the safety of AI tools. 

Signatories include leading experts such as Stanford’s Percy Liang, Pulitzer Prize winner Julia Angwin, the Stanford Internet Observatory’s Renée DiResta, Mozilla Fellow Deb Raji, former European Parliament member Marietje Schaake, and Suresh Venkatasubramanian of Brown University.

The researchers argue that the mistakes of the social media era, when independent research was often marginalized, shouldn’t be repeated.

To combat this risk, they ask that OpenAI, Meta, Anthropic, Google, Midjourney, and others create a legal and technical safe harbor for researchers to evaluate AI products without fear of being sued or banned.

The letter says, “While companies’ terms of service deter malicious use, they also offer no exemption for independent good faith research, leaving researchers vulnerable to account suspension or even legal reprisal.”

AI tools impose strict usage policies to stop them from being manipulated into bypassing their guardrails. For example, OpenAI recently branded investigative efforts by the New York Times as “hacking,” and Meta has threatened to withdraw licenses over intellectual property disputes. 

Other investigations prodded Midjourney into revealing numerous instances of copyright violation, which would have breached the company’s T&Cs.

The problem is that because AI tools are largely unpredictable under the hood, they rely on people using them in a particular way to remain ‘safe.’ 

However, those same policies make it difficult for researchers to probe and understand models. 

The letter, published on MIT’s website, makes two pleas:

1. “First, a legal safe harbor would indemnify good faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.”

2. “Second, companies should commit to more equitable access, by using independent reviewers to moderate researchers’ evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions, and mitigate the concern of companies selecting their own evaluators.”

The letter also introduces a policy proposal, co-drafted by some signatories, which suggests modifications to the companies’ terms of service to accommodate academic and safety research.

This adds to a broadening consensus about the risks associated with generative AI, including bias, copyright infringement, and the creation of non-consensual intimate imagery. 

By advocating for a “safe harbor” for independent evaluation, these experts are championing the cause of public interest, aiming to create an ecosystem where AI technologies can be developed and deployed responsibly, with the well-being of society at the forefront.

The post Researchers join open letter advocating for independent AI evaluations appeared first on DailyAI.