Artificial intelligence (AI) tools are increasingly used at work to boost productivity, improve decision making and reduce costs, including automating administrative tasks and monitoring security.

But sharing your workplace with AI poses unique challenges, including the question: can we trust the technology?

Our recent 17-country study involving over 17,000 people reveals how much and in what ways we trust AI in the workplace, how we view the risks and benefits, and what people expect to be in place for AI to be trusted.

We find that just one in two employees is willing to trust AI at work. Their attitude depends on their role, what country they live in, and what the AI is used for. However, people across the globe are nearly unanimous in their expectations of what must be in place for AI to be trusted.

Our global survey

AI is rapidly reshaping the way work is done and services are delivered, with all sectors of the global economy investing in AI tools. Such tools can automate marketing activities, assist staff with various queries, and even monitor employees.

To understand people’s trust and attitudes towards workplace AI, we surveyed over 17,000 people from 17 countries: Australia, Brazil, Canada, China, Estonia, Finland, France, Germany, India, Israel, Japan, the Netherlands, Singapore, South Africa, South Korea, the United Kingdom, and the United States. The data, drawn from nationally representative samples, were collected just prior to the release of ChatGPT.

The countries we surveyed are leaders in AI activity within their regions, as evidenced by their investment in AI and AI-specific employment.



Do employees trust AI at work?

We found nearly half of all employees (48%) are wary about trusting AI at work – for instance, by relying on AI decisions and recommendations, or sharing the information AI tools need in order to function.

People have more faith in the ability of AI systems to produce reliable output and provide helpful services than in the safety, security and fairness of those systems, and the extent to which they uphold privacy rights.

However, trust is contextual and depends on the AI’s purpose. As shown in the figure below, most people are comfortable with AI being used at work to augment and automate tasks and assist employees, but they are less comfortable when AI is used for human resources, performance management, or monitoring purposes.

AI as a decision-making tool

Most employees view AI use in managerial decision-making as acceptable, and actually prefer AI involvement to sole human decision-making. However, the preferred option is for humans to retain more control than the AI system, or at least the same amount.

What might this look like? People showed the most support for a 75% human to 25% AI decision-making collaboration, or a 50–50 split. This indicates a clear preference for managers to use AI as a decision aid, and a lack of support for fully automated AI decision-making at work. These decisions could include whom to hire and whom to promote, or how resources are allocated.

While nearly half of the people surveyed believe AI will enhance their competence and autonomy at work, fewer than one in three (29%) believe AI will create more jobs than it will eliminate.

This reflects a prominent fear: 77% of people report feeling concerned about job loss, and 73% say they are concerned about losing important skills because of AI.

However, managers are more likely than those in other occupations to believe that AI will create jobs, and are less concerned about its risks. This reflects a broader trend of managers being more comfortable with, trusting of, and supportive of AI use at work than other employee groups.

Given that managers are typically the drivers of AI adoption at work, these differing views may cause tensions in organisations implementing AI tools.



Trust is a serious concern

Younger generations and people with a university education are also more trusting of and comfortable with AI, and more likely to use it in their work. Over time, this may deepen divisions in employment.

We also found important differences among countries. For example, people in Western countries are among the least trusting of AI use at work, whereas those in emerging economies (China, India, Brazil and South Africa) are more trusting and comfortable with it.

This difference partly reflects the fact that only a minority of people in Western countries believe the benefits of AI outweigh the risks, in contrast to the large majority of people in emerging economies.

How can we make AI trustworthy?

The good news is our findings show people are united on the principles and practices they expect to be in place in order to trust AI. On average, 97% of people report that each of these is important for their trust in AI.

People say they would trust AI more when oversight mechanisms are in place, such as monitoring the AI for accuracy and reliability, AI “codes of conduct”, independent AI ethics review boards, and adherence to international AI standards.

This strong endorsement of trustworthy AI principles and practices across all countries provides a blueprint for how organisations can design, use and govern AI in a way that secures trust.

This article was originally published at theconversation.com