In an open letter titled “Build AI for a Better Future,” leaders from the tech world pledge to use AI to enhance human lives and solve global challenges. 

The document, signed by 179 entities and counting, emphasizes AI’s potential to revolutionize aspects of daily life and work, drawing parallels to historical milestones such as the printing press and the internet.

The letter eloquently articulates the vision that AI should serve as a catalyst for human progress: enhancing learning through AI tutors, bridging linguistic divides with translation tools, advancing healthcare via diagnostic aids, speeding up scientific research, and simplifying everyday tasks with AI assistants. 

Signatories range from investment firm SV Angel, which created the letter, to OpenAI, Meta, Google, Microsoft, and Salesforce, among others.

The letter says, “We all have something to contribute to shaping AI’s future, from those using it to create and learn, to those developing new products and services on top of the technology, to those using AI to pursue new solutions to some of humanity’s biggest challenges, to those sharing their hopes and concerns for the impact of AI on their lives. AI is for all of us, and all of us have a role to play in building AI to improve people’s lives.”

“We, the undersigned, already are experiencing the benefits from AI, and are committed to building AI that will contribute to a better future for humanity – please join us!”

So, how has it been received?

Frostily, to say the least. At just a few hundred words, the letter doesn’t exactly make much of an attempt at disentangling the trajectory and impacts of AI. In fact, it says very little beyond lazy platitudes. 

One critic points out the absence of any explicit mention of AI safety, branding the letter “PR junk.” Another labels the statement “totally vacuous,” criticizing it for not addressing critical issues like AGI extinction risk, the disruption of livelihoods, or the specter of geopolitical arms races.

This joins many other cross-industry agreements, such as MLCommons’ collaboration with Big Tech to define safety benchmarks, recent commitments to create unified watermarking, and the Frontier Model Forum.

And let’s not forget the widely co-signed, seismic Center for AI Safety (CAIS) statement from early 2023, which compared AI risks to pandemics and nuclear war.

Tech companies also recently joined forces to tackle deep fake electioneering and made numerous agreements at key events such as the World Economic Forum (WEF) and the UK AI Safety Summit. 

Generative AI is weathering a media storm

Generative AI has been facing significant scrutiny and controversy, particularly in relation to intellectual property, ethical use, and the potential for reinforcing Big Tech’s dominance. 

One of the main concerns revolves around the legal and ethical implications of using copyrighted content without permission to train AI models. 

Companies like Stability AI and OpenAI claim that “fair use” protects them, but this remains a highly debated and largely untested theory in the era of generative AI. 

The difficulty of defining what constitutes work “in the style of” an artist, and the responsibilities of entities such as the Large-scale Artificial Intelligence Open Network (LAION) in compiling training datasets, have been highlighted as key issues.

Further, tech companies’ rapid deployment of AI-powered products without fully addressing flaws, such as the perpetuation of harmful biases, copyright infringement, and security vulnerabilities, has drawn criticism. 

Open letters signal some level of awareness. But blunting actual risks will take a little less talk and a bit more action.
