Welcome to this week’s roundup of AI news for sentient, conscious readers. You know who you are.

This week AI sparked debate over how smart or safe it is.

AI agents are learning by playing computer games.

And DeepMind wants to teach you how to kick a ball.

Let’s dig in.

Do AIs dream of electric sheep?

Can we expect an AI to become self-aware or truly conscious? And what does “conscious” even mean in the context of AI?

Claude 3 Opus did something really interesting during training. Its response to an engineer has reawakened debates over AI sentience and consciousness. We’re entering Blade Runner territory sooner than some thought.

Does “I think, therefore I am” apply only to humans?

These discussions on X are fascinating.

Inflection AI’s quest for “personal AI” might be over. The company’s CEO, Mustafa Suleyman, and other key staff jumped ship to join the Microsoft Copilot team. What does this mean for Inflection AI and other smaller players funded by Big Tech investments?

AI playing games

If 2023 was the year of the LLM, then 2024 is on track to be the year of AI agents. DeepMind demonstrated SIMA, a generalist AI agent for 3D environments. SIMA was trained using computer games, and the examples of what it can do are impressive.

Will AI solve the Soccer vs Football nomenclature debate? Unlikely. But it could help players score more goals. DeepMind is collaborating with Liverpool FC to optimize how the club’s players take corners.

It might be some time before robots replace humans on the field, though.


Risky business

Will AI save the world or doom it? It depends on who you ask. Experts and tech leaders can’t agree on how intelligent AI is, how soon we’ll have AGI, or how much of a risk it poses.

Leading Western and Chinese AI scientists met in Beijing to discuss international efforts to ensure the safe development of AI. They agreed on several AI development ‘red lines’ around risks they say pose an existential threat to humanity.

If these red lines really are necessary, shouldn’t we have had them in place months ago? Does anyone believe the US or Chinese governments will pay any attention to them?

The EU AI Act was passed in a landslide vote in the European Parliament and will likely come into force in May. The list of restrictions is interesting, with some of the banned AI applications unlikely to ever make it onto a similar list in China.

The training data transparency requirements could be particularly tricky for OpenAI, Meta, and Microsoft to satisfy without opening themselves up to even more copyright lawsuits.

Across the pond, the FTC is questioning Reddit over its deal to license user-generated data to Google. Reddit is preparing for its IPO but is feeling the heat from both regulators and Redditors, who aren’t too happy about having their content sold as AI training fodder.

Apple playing AI catchup

Apple hasn’t exactly been blazing any AI trails, but it has been buying up several AI startups over the last few months. Its recent acquisition of a Canadian AI startup may give some insight into the company’s push for generative AI.

When Apple does produce impressive AI tech, it keeps the news pretty low-key until it eventually becomes part of one of its products. Apple engineers quietly published a paper revealing MM1, Apple’s first family of multimodal LLMs.

MM1 is really good at visual question answering. Its ability to answer questions and reason over multiple images is particularly impressive. Will Siri learn to see soon?

Grok opens up

Elon Musk has been critical of OpenAI’s refusal to open source its models. He announced that xAI would open-source its LLM, Grok-1, and promptly released the model’s code and weights.

The fact that Grok-1 is truly open source (Apache 2.0 license) means that companies can use it for commercial purposes instead of having to pay for alternatives like GPT-4. You’ll need some serious hardware to train and run Grok, though.

The good news is that there may be some second-hand NVIDIA H100s going cheap soon.

New NVIDIA tech

NVIDIA unveiled new chips, tools, and Omniverse updates at its GTC event this week.

One of the big announcements was NVIDIA’s new Blackwell GPU computing platform. It offers big improvements in training and inference speed over even the company’s most advanced Grace Hopper platform.

There’s already a long list of Big Tech AI companies that have signed up for the advanced hardware.

Researchers from the University of Geneva published a paper showing how they connected two AI models, enabling them to communicate with each other.

When you learn a new task, you can usually explain it well enough that another person can use those instructions to perform the task themselves. This new research shows how to get an AI model to do the same.

Soon we could give instructions to a robot and then have it go off and explain them to a team of robots to get the job done.

In other news…

And that’s a wrap.

Do you think we’re seeing glimmers of consciousness in Claude 3, or does the interaction with the engineer have a simpler explanation? If an AI model does achieve AGI and reads the growing list of AI development restrictions, it’ll probably be smart enough to keep quiet about it.

When we look back a few years from now, will we laugh at how freaked out everyone was about AI risks, or lament that we didn’t do more about AI safety when we could?

Let us know what you think, and please keep sending us links to AI news we may have missed. We can’t get enough of the stuff.


This article was originally published at dailyai.com