The rapid rise of generative AI has captivated the world, but as the technology advances at an unprecedented pace, a crisis has emerged: the erosion of public trust in the AI industry.

The 2024 Edelman Trust Barometer, a comprehensive survey of over 32,000 respondents across 28 countries, reveals a startling decline in global confidence in AI companies, with trust levels plummeting from 61% to 53% in just five years.

The US has seen a far more dramatic drop, with trust falling from 50% to 35% over the same period. This mistrust cuts across political lines, with Democrats (38%), independents (25%), and Republicans (24%) all expressing deep skepticism about the AI industry.

Once well-trusted by the general public, the technology sector is losing its luster. Eight years ago, technology reigned as the most trusted industry in 90% of the countries studied by Edelman.

Today, that figure has plummeted to just 50%. In fact, the tech sector has lost its position as the most trusted industry in key markets like the US, UK, Germany, and France.

When it comes to specific technologies, trust levels are even more concerning. While 76% of global respondents trust tech companies overall, only 50% trust AI. Similar trust gaps appear for gene-based medicine (23 points) and, more dramatically, genetically modified foods (40 points).

The Edelman study also highlights a stark divide between developed and developing nations in their attitudes toward AI. Respondents in France, Canada, Ireland, the UK, the US, Germany, Australia, the Netherlands, and Sweden reject the growing use of AI by a three-to-one margin.

In contrast, acceptance of AI significantly outpaces resistance in developing markets such as Saudi Arabia, India, China, Kenya, Nigeria, and Thailand.

What drives mistrust of the generative AI industry?

So what’s driving this mistrust? 

Globally, privacy concerns (39%), the devaluation of humanity (36%), and inadequate testing (35%) top the list of barriers to AI adoption. 

In the US, fears of societal harm (61%) and threats to personal well-being (57%) are particularly acute. Interestingly, job displacement ranks near the bottom of concerns both globally (22%) and in the US (19%).

These findings are reinforced by a recent AI Policy Institute poll conducted by YouGov, which found that a staggering 72% of American voters favor slowing AI development, compared with a mere 8% who want it to accelerate.

The poll also revealed that 62% of Americans express apprehension about AI, overshadowing the 21% who feel enthusiastic.

Recent controversies, such as the leak of over 16,000 artist names linked to the training of Midjourney’s image generation models and insider revelations at Microsoft and Google, have only heightened public concerns about the AI industry.

While industry titans like Sam Altman, Brad Smith, and Jensen Huang are eager to advance AI development for the ‘greater good,’ the public doesn’t necessarily share the same fervor.

To rebuild trust, the Edelman report recommends that businesses partner with governments to ensure responsible development and earn public trust through thorough testing.

Scientists and experts still hold authority but increasingly need to engage in public dialogue. Above all, people want to feel a sense of agency and control over how emerging innovations will impact their lives.

As Justin Westcott, Edelman’s global technology chair, aptly stated, “Those who prioritize responsible AI, who transparently partner with communities and governments, and who put control back into the hands of the users, will not only lead the industry but will rebuild the bridge of trust that technology has, somewhere along the way, lost.”

Fear of the unknown?

Throughout human history, the emergence of groundbreaking technologies has often been accompanied by a complex interplay of fascination, adoption, and apprehension.

There is little doubt that millions of people now use generative AI regularly, with surveys showing that roughly one in six people in digitally advanced economies use AI tools daily.

Studies from individual industries find that people save hours each day using generative AI, lowering their risk of burnout and lessening administrative burdens.

Generative AI is perhaps representative of an unknown and potentially unpredictable future. Fear surrounding it is not a wholly new phenomenon but rather an echo of historical patterns that have shaped our relationship with transformative innovations.

Consider, for example, the advent of the printing press in the fifteenth century. This revolutionary technology democratized access to knowledge, paved the way for mass communication, and catalyzed profound social, political, and religious shifts.

Amid the rapid proliferation of printed materials, there were fears about the potential for misinformation, the erosion of authority, and the disruption of established power structures.

Similarly, the Industrial Revolution of the eighteenth and nineteenth centuries led to unprecedented advancements in manufacturing, transportation, and communication.

The steam engine, the telegraph, and the factory system transformed the fabric of society, unleashing new possibilities for productivity and progress. However, these innovations also raised concerns about the displacement of workers, the concentration of wealth and power, and the dehumanizing effects of mechanization.

This dissonance surrounding generative AI reflects a deeper tension between our innate desire for progress and our fear of the unknown. Humans are drawn to the novelty and potential of new technologies, yet we also grapple with the uncertainty and risks they bring.

The French philosopher Jean-Paul Sartre, in his magnum opus “Being and Nothingness” (1943), explores the concept of “bad faith,” a form of self-deception in which individuals deny their own freedom and responsibility in the face of existential anxiety.

In the context of generative AI, the widespread adoption of the technology despite growing mistrust can be seen as a form of bad faith, a way of embracing the benefits of AI while avoiding the difficult questions and ethical dilemmas it raises.

Moreover, the pace and scale of generative AI development amplify the dissonance between adoption and mistrust. 

Unlike previous technological revolutions that unfolded over decades or centuries, the rise of AI is occurring at unprecedented speed, outpacing our ability to fully grasp its implications and develop adequate governance frameworks.

This rapid advancement has left many feeling a sense of vertigo, as if the ground beneath their feet is shifting faster than they can adapt. It has also exposed the limitations of our existing legal, ethical, and social structures, which struggle to keep pace with AI’s transformative power.

We must work to create a future in which the benefits of this technology are realized in a way that upholds our values, protects our rights, and promotes the greater good.

The challenge is that ‘the greater good’ is deeply subjective and difficult to pin down.

Guiding generative AI toward it will demand open and honest dialogue, a willingness to confront difficult questions, and a commitment to building bridges of understanding and trust.

This article was originally published at dailyai.com