There has been increasing interest in using health “big data” for artificial intelligence (AI) research. It is therefore important to know which uses of health data are supported by the general public and which are not.

Previous studies have shown that members of the general public see health data as an asset that should be used for research, provided there is a public benefit and concerns about privacy, commercial motives and other risks are addressed.

However, this general support may not extend to health AI research due to concerns about the potential for AI-related job losses and other negative impacts.

Our research team conducted six focus groups in Ontario in October 2019 to learn more about how members of the general public perceive using health data for AI research. We found that participants supported using health data in three realistic health AI research scenarios, but their approval had conditions and limits.

Robot fears

Each of our focus groups began with a discussion of participants’ views about AI generally. Consistent with the findings of other studies, people had mixed — but mostly negative — views about AI. There were multiple references to malicious robots, like the Terminator in the 1984 James Cameron film.

“You can create a Terminator, literally, something that’s artificially intelligent, or the matrix … it goes awry, it tries to take over the world and humans got to fight this. Or it may go in absolutely the opposite where it helps … androids … implants.… Like I said, it’s unlimited to go either way.” (Mississauga focus group participant)

Popular culture is filled with tales of AI and robots run amok, feeding into concerns about the use of AI in health-care delivery.

Additionally, several people shared their belief that there is already AI surveillance of their own behaviour, referencing targeted ads that they have received for products they had only spoken about privately.

Some participants commented on how AI could have positive impacts, as in the case of autonomous vehicles. However, participants who said positive things about AI also expressed concern about how AI will affect society.

“It’s portrayed as friendly and helpful, but it’s always watching and listening.… So I’m excited about the possibilities, but concerned about the implications and reaching into personal privacy.” (Sudbury focus group participant)

Supporting scenarios

In contrast, focus group participants reacted positively to three realistic health AI research scenarios. In one of the scenarios, some perceived that health data and AI research could actually save lives, and most people were also supportive of two other scenarios that did not include potential lifesaving benefits.

They commented favourably about the potential for health data and AI research to generate knowledge that would otherwise be impossible to obtain. For example, they reacted very positively to the potential for an AI-based test to save lives by identifying the origin of cancers so that treatment can be tailored. Participants also noted practical benefits of AI, including the ability to sift through large amounts of data, perform real-time analyses and provide recommendations to health-care providers and patients.

“If you can reach out and have a sample size of a group of ten million people and to be able to extract data from that, you can’t do that with the human brain. A group, a team of researchers can’t do that. You need AI.” (Mississauga focus group participant)

A CBC report on the future of AI in health care.

Protecting privacy

The focus group participants were not positively disposed towards all possible uses of health data in AI research.

They were concerned that the health data provided for one health AI purpose might be sold or used for other purposes that they don’t agree with. Participants also worried about the negative impacts if AI research creates products that lead to loss of human touch, job losses and a decrease in human skills over time because people become overly reliant on computers.

The focus group participants also suggested ways to address their concerns. Foremost, they spoke about how important it is to have assurance that privacy will be protected and transparency about how data are used in health AI research. Several people stated the condition that health AI research should create tools that function in support of humans, rather than autonomous decision-making systems.

“As long as it’s a tool, like the doctor uses the tool and the doctor makes the decision … it’s not a computer telling the doctor what to do.” (Sudbury focus group participant)

Involving members of the general public in decisions about health AI

Engaging with members of the general public took time and effort. In particular, considerable work was required to develop, test and refine realistic, plain-language health AI scenarios that deliberately included potentially contentious points. But there was a significant return on investment.

The focus group participants — none of whom were AI experts — had important insights and concrete suggestions about how to make health AI research more responsible and acceptable to members of the general public.

Studies like ours can be important inputs into policies and practice guides for health data and AI research. Consistent with the Montréal Declaration for Responsible Development of AI, we believe that researchers, scientists and policy-makers need to work with members of the general public to take the science of health AI in directions that the public supports.

By understanding and addressing public concerns, we can establish trustworthy and socially beneficial ways of using health data in AI research.

This article was originally published at