That famous saying, “The more we know, the more we don’t know,” really rings true for AI.

The more we find out about AI, the less we appear to know for certain.

Experts and industry leaders often find themselves at bitter loggerheads about where AI is now and where it’s heading, failing to see eye to eye on seemingly elementary concepts like machine intelligence, consciousness, and safety.

Will machines at some point surpass the intellect of their human creators? Is AI advancement accelerating towards a technological singularity, or are we on the cusp of an AI winter?

And perhaps most crucially, how can we ensure that the development of AI remains safe and beneficial when even the experts can’t agree on what the future holds?

We’re immersed in a fog of uncertainty. The best we can do is explore perspectives and arrive at our own informed yet fluid views on an industry constantly in flux.

Debate one: AI intelligence

With each new generation of generative AI models comes a renewed debate about machine intelligence.

Elon Musk recently fuelled debate on AI intelligence when he said, “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”

Musk was immediately disputed by Meta’s chief AI scientist and eminent AI researcher Yann LeCun, who said, “No. If it were the case, we would have AI systems that could teach themselves to drive a car in 20 hours of practice, like any 17 year-old. But we still don’t have fully autonomous, reliable self-driving, even though we (you) have millions of hours of *labeled* training data.”

This exchange represents just a small part of a vast gulf of opinion among AI experts and tech leaders. It’s a conversation that leads to a never-ending spiral of interpretation with no consensus, as demonstrated by the wildly contrasting views of technologists and AI leaders over the last year or so (info from Improve the News):

  • Geoffrey Hinton: “Digital intelligence” could overtake us within “5 to 20 years.”
  • Yann LeCun: Society is more likely to get “cat-level” or “dog-level” AI years before human-level AI.
  • Demis Hassabis: We may achieve “something like AGI or AGI-like in the next decade.”
  • Gary Marcus: “[W]e will eventually reach AGI… and quite possibly before the end of this century.”
  • Geoffrey Hinton: Current AI like GPT-4 “eclipses a person” in general knowledge and could soon do so in reasoning as well.
  • Geoffrey Hinton: AI is “very close to it now” and will be “much more intelligent than us in the future.”
  • Elon Musk: “We could have, for the first time, something that’s smarter than the smartest human.”
  • Elon Musk: “I’d be surprised if we don’t have AGI by [2029].”
  • Sam Altman: “[W]e could get to real AGI in the next decade.”
  • Yoshua Bengio: “Superhuman AIs” will be achieved “between a few years and a few decades.”
  • Dario Amodei: “Human-level” AI could occur in “two or three years.”
  • Sam Altman: AI could surpass the “expert skill level” in most fields within a decade.
  • Gary Marcus: “I don’t [think] we’re all that close to machines that are more intelligent than us.”

No party is unequivocally right or wrong in the debate over machine intelligence. It ultimately hinges on one’s subjective interpretation of intelligence and how AI systems measure up against that definition.

Pessimists may point to AI’s potential risks and unintended consequences, emphasizing the need for caution and stringent safety measures. They argue that as AI systems become more autonomous and powerful, they could develop goals and behaviors misaligned with human values, leading to catastrophic outcomes.

Conversely, optimists may focus on AI’s transformative potential, envisioning a future in which machines work alongside humans to solve complex problems and drive innovation. They may downplay the risks, arguing that concerns about superintelligent AI are largely hypothetical and that the technology’s benefits far outweigh the potential drawbacks.

The crux of the issue lies in the difficulty of defining and quantifying intelligence, especially when comparing entities as disparate as humans and machines.

For example, calculators demonstrate superior speed and accuracy in mathematical computations, outperforming humans in this narrow domain. A fly has advanced neural circuits and can successfully evade our attempts to swat or catch it.

In these narrow domains and potentially limitless others, humans are bested.

Pick your examples of intelligence, and everyone can be right or wrong.

Debate two: is AI accelerating or slowing?

Is AI advancement set to accelerate, or will it plateau and slow down?

Some argue that we’re in the midst of an AI revolution, with breakthroughs happening faster than ever. Others contend that progress has hit a plateau and that the field faces momentous challenges that could slow innovation in the coming years.

Generative AI is the culmination of decades of research and billions in funding. When ChatGPT landed in 2022, the technology was already highly advanced in research environments, setting the bar high and throwing society in at the deep end.

The resulting hype also drummed up immense funding for AI startups, from Anthropic to Inflection and Stability AI to MidJourney.

This, combined with immense internal efforts from Silicon Valley veterans Meta, Google, Amazon, Nvidia, and Microsoft, resulted in a rapid proliferation of AI tools. GPT-3 quickly morphed into the heavyweight GPT-4, while competing LLMs like Claude 3 Opus, xAI’s Grok, Mistral, and Meta’s open-source models have also made their mark.

Some experts and technologists, such as Sam Altman, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, and Elon Musk, feel that AI acceleration has only just begun.

Musk said generative AI was like “waking the demon,” whereas Altman said AI mind control was imminent in the next few years (something Musk has evidenced with recent advancements in Neuralink; see below for how one man played a game of chess through thought alone).

On the other hand, experts such as Gary Marcus and Yann LeCun feel we’re hitting brick walls, with generative AI facing an introspective period or ‘winter.’

This could be exacerbated by practical obstacles, such as rising energy costs, the limitations of brute-force computing, regulation, and material shortages.

We’ve already seen that AI is exceptionally expensive and monetization isn’t straightforward, so tech companies need to find ways to keep up the momentum so money keeps flowing into the industry.

Debate three: AI safety

Conversations about AI intelligence and progress also have implications for AI safety. If we cannot agree on what constitutes intelligence or how to measure it, how can we ensure that AI systems are designed and deployed in a way that is safe and beneficial to society?

The absence of a shared understanding of intelligence makes it difficult to establish appropriate safety measures and ethical guidelines for AI development.

To underestimate AI intelligence is to underestimate the need for AI safety controls and regulation.

Conversely, overestimating or exaggerating AI’s abilities warps perceptions and risks over-regulation. This could silo power in Big Tech, which has proven clout in lobbying and outmaneuvering legislation.

Last year, protracted X debates among Yann LeCun, Geoffrey Hinton, Max Tegmark, Gary Marcus, Elon Musk, and numerous other prominent figures in the AI community highlighted deep divisions over AI safety. Big Tech has been hard at work self-regulating and creating ‘voluntary guidelines,’ with leaders actively advocating for regulation.

Critics suggest that regulation enables Big Tech to bolster market structures, rid themselves of disruptors, and set the terms of play to their liking.

On that side of the debate, experts like LeCun argue that AI’s existential risks have been overstated and are being used as a smokescreen by Big Tech companies to push for regulations that would stifle competition and consolidate their control over the industry.

LeCun and his supporters also point out that AI’s immediate risks, such as misinformation, deepfakes, and bias, are already harming people and require urgent attention.

On the other hand, Hinton, Bengio, Hassabis, and Musk have sounded the alarm about the potential existential risks of AI.

Bengio, LeCun, and Hinton, often referred to as the ‘godfathers of AI’ for developing neural networks, deep learning, and other AI techniques throughout the 90s and early 2000s, remain highly influential today. Hinton and Bengio, whose views generally align, recently attended a rare meeting between US and Chinese researchers at the International Dialogue on AI Safety in Beijing.

The meeting culminated in a statement: “In the depths of the Cold War, international scientific and governmental coordination helped avert thermonuclear catastrophe. Humanity again needs to coordinate to avert a catastrophe that could arise from unprecedented technology.”

It must be said that Bengio, Hinton, and numerous others are highly unlikely to be disingenuous. They aren’t financially aligned with Big Tech and have no obvious reason to over-egg AI risks.

Hinton raised this point himself in an X spat with LeCun and ex-Google Brain co-founder Andrew Ng, highlighting that he left Google to speak freely about AI risks.

That doesn’t in itself add weight to his views, but it would be far-fetched to question the motive behind his warnings. Indeed, many great scientists have questioned AI safety over the years, including the late Professor Stephen Hawking, who viewed the technology as an existential risk.

This swirling mixture of polemic exchanges leaves little space for people to occupy the middle ground, fueling generative AI’s image as a polarizing technology.

AI regulation, meanwhile, has become a geopolitical issue, with the US and China tentatively collaborating on AI safety despite escalating tensions in other areas.

So, just as experts disagree about when and how AI will surpass human capabilities, they also differ in their assessments of the risks and challenges of developing safe and beneficial AI systems.

Debates surrounding AI intelligence aren’t just principled or philosophical in nature; they’re also a matter of governance.

When experts vehemently disagree over even the fundamental elements of AI intelligence and safety, regulation can’t hope to serve people’s interests.

Creating consensus would require tough realizations from experts, AI developers, governments, and society at large.

However, among many other challenges, steering AI into the future will require some tech leaders and experts to admit they were wrong. And that’s not going to be easy.

This article was originally published at dailyai.com