According to DeepMind founder Demis Hassabis, Google expects to soon restore the ability of its multimodal generative AI tool Gemini to depict people. The feature, which answers requests for images of individuals, should be back online within the “next few weeks,” he said today.

Google shut down the Gemini feature last week after users identified that the tool produced historically inaccurate images, such as depicting the founding fathers of the United States as a diverse group of individuals rather than simply white men.

Hassabis answered questions about the product snafu during an on-stage interview at Mobile World Congress in Barcelona today.

When asked by a moderator, Wired’s Steven Levy, to explain what went wrong with the image generation feature, Hassabis refrained from giving a detailed technical explanation. Instead, he suggested that the issue was caused by Google’s failure to identify cases where users are essentially after what he called a “universal representation,” and said the episode points to “nuances that come with advanced AI.”

“This is an area that we all grapple with. So, for example, if you type a prompt that asks, ‘Give me an image of a person walking a dog or a nurse in a hospital,’ right, in those cases you clearly want some sort of ‘universal representation.’ Especially when you consider that, as Google, we serve more than 200 countries, you know, every country around the world – so you don’t know where the user is coming from, what their background is, or what context they’re in. So you want to show a kind of universal range of possibilities there.”

Hassabis said the problem boils down to a “well-intentioned feature” – to promote diversity in Gemini’s image outputs of people – that was applied “too bluntly everywhere.”

Prompts that ask for content about historical people should “naturally” result in a “much narrower distribution that you return,” he added, hinting at how Gemini might handle prompts about people in the future.

“Historical accuracy is of course important to us. Therefore, we have taken this feature offline while we fix the problem and hope to have it back online in a very short period of time. Next few weeks, next few weeks.”

In response to a follow-up question about how to prevent generative AI tools from being misused by malicious actors, such as authoritarian regimes that want to spread propaganda, Hassabis didn’t have a straightforward answer. The problem is “very complex,” he said – and likely requires a mobilization and response from society as a whole to set and enforce limits.

“There needs to be really important research and debate – including with civil society and governments, not only with tech firms,” he said. “It is a socio-technical issue that affects everyone, and everyone should participate in the discussion. What values should these systems have? What would they represent? How do you prevent malicious actors from accessing the same technologies and misusing them, as you’re talking about, for harmful purposes that weren’t intended by the designers of those systems?”

He addressed the challenge of open source general-purpose AI models, which Google also offers, adding: “Customers want to use open source systems that they can fully control. But then the question becomes: with these increasingly powerful systems, how do you make sure that what people are using downstream isn’t harmful?”

“I think this isn’t a problem today because the systems are still relatively nascent. But when you fast forward three, four or five years and start talking about next-generation systems with planning capabilities and the ability to act in the world and solve problems and goals, I think society really needs to think seriously about these issues – what happens if this proliferates and then bad actors, from individuals to rogue states, can take advantage of it?”

During the interview, Hassabis was also asked for his thoughts on AI devices and where the mobile market might be headed as generative AI continues to drive new developments here. He predicted a wave of “next-generation intelligent assistants” that will be useful in people’s everyday lives, rather than the “gimmicks” of previous generations of AI assistants – a shift he said could even change the mobile hardware people carry with them.

“I think there will also be questions about which kind of device is the right one,” he suggested. “But will the phone even have the right form factor in more than five years? Maybe we need glasses or other things so that the AI system can actually see some of the context you are in and so be even more helpful in your daily life. So I think there are a lot of amazing things to invent.”

This article was originally published at techcrunch.com