Welcome to this week’s roundup of artisanal handcrafted AI news.

This week we found out AI can’t help you build a bioweapon after all, or maybe it can.

Tay Tay’s army of Swifties fought fake AI porn.

And AI made us trust politicians even less than we already did.

Let’s dig in.

Swift injustice

Taylor Swift found herself the target of explicit AI deepfake images this week. The understandable outrage that ensued was also directed at platforms like X, which seemed to have no defense against this type of content.

There was a widespread response from both industry and members of the public. Taylor’s army of Swifties went into full Sherlock Holmes mode, tracking down and doxxing the guy allegedly behind the images after he confidently claimed, “They’ll never find me.”

It’s already far too easy for anyone to create these sorts of images using AI, and with InstantID it just got a lot easier. The model enables AI image generators to create convincing reproductions from a single image of a person’s face.

The InstantID research paper was published before the Swiftgate drama, but guess whose face the researchers used as an example.


Unbelievable

It’s official, we can’t believe our eyes and ears anymore. Sam’s compilation of the political deepfakes we’ve seen over the past few months highlights the progression in both scale and capability that AI is affording fraudsters.

AI voice clones have improved dramatically. We’ve gone from robotic monotone attempts to full-scale imitation of tone and emotion. The fake George Carlin comedy video on YouTube was a case in point.

Carlin’s estate is now suing the creators of the fake AI comedy show, with some surprising admissions from the people behind the video.

Deepfake audio is getting easier to make and harder to detect. The democratization of these AI tools means that the ordinary man on the street is finding himself a target. A Baltimore principal says the voice in an offensive audio recording wasn’t his but an AI fake. You be the judge.

My first instinct was, “He’s lying,” but then I saw what an audio forensic expert said about the clip.

Open and shut

What’s in a name? The “Open” in OpenAI doesn’t seem to mean what it once did. OpenAI’s drift from its namesake and founding principles makes for interesting reading.

The company claims it’s still transparent, as long as you don’t ask questions about its financial statements, model training data, conflict-of-interest policy, why it fired Altman… You get the picture.

Something OpenAI has been candid about is its ambition to produce its own AI chips. Altman quietly jetted off to South Korea to talk with Samsung and other chip manufacturers for help with this.

OpenAI’s opaque operations may be more in line with the secretive nature of leaders further north of Seoul.

A report that uncovered the dynamics of North Korea’s resurgent AI industry shows that AI is playing a much bigger role there than some may have thought. I’m guessing Kim Jong Un is a big fan of Meta’s open-source strategy.

The Biden administration now requires cloud firms to report foreign users. If you’re a computer scientist in North Korea, you may want to use a good VPN when you connect to AWS.

GPT-4 gets brainy

If you ask GPT-4 to help you brainstorm, it gets a little repetitive with some of the ideas it comes up with. Researchers came up with some clever prompt engineering strategies that may fix that.

If you’re looking for some creative entries to add to your resume, ChatGPT can help with that too. It turns out that AI is widely used by job applicants, and hiring managers encourage it.

This week we discovered something else that GPT-4 is good at. Researchers found that GPT-4 agreed with expert doctors on recommended treatments for stroke victims.

Patients suffering from paralysis or ALS could soon benefit from another one of Elon Musk’s plethora of projects. Musk announced that Neuralink completed its first brain implant in a human subject.

This could eventually allow for direct communication between the brain and devices like cell phones or computers. Are we living the prequel to The Matrix?


Musk has also been trying to raise $6 billion to take his AI project xAI to the next level. Take a look at what this man has done so far and then just give him the money. When does this guy sleep?

Safety first

The RAND Corporation got a lot of criticism for an October report that said LLMs “might” help bad guys make a bioweapon. Its latest report says that may not be true after all. Then OpenAI did a study of its own that concluded that a special version of GPT-4 might help the bad guys a little.

A very real danger could come from AI agents let loose on the internet without supervision. Researchers outlined the potential dangers and proposed three measures that could increase visibility into AI agents to make them safer.

Is your superpower pointing out other people’s mistakes? The CDAO and DoD are organizing events to identify bias in language models. They’ll even pay you bounties for spotting bias bugs.

AI in EU

The upcoming EU AI Act Summit 2024 kicks off next week. The summit will be an ideal opportunity to discuss AI regulation proposals and get to grips with the EU AI Act and its global implications.

Some civil rights groups are calling for the EU to probe OpenAI and Microsoft. The big chunk of money Microsoft invested in OpenAI raises questions about the impact on competition within the AI sector.

It would be tough to argue against that, as Microsoft is expected to post its best quarterly revenue growth in two years. A lot of that comes off the back of AI developments that OpenAI helped it make.

Italy’s data protection authority has raised privacy concerns over ChatGPT’s slip-ups with personal information and the implications of libelous hallucinations.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

To the Swifties in our audience, we hope you’ve recovered from the traumatic week. Were you browsing X when you spotted the AI pics by accident? Or did you have to work hard to find them online?

I don’t think anyone will be making fake nudes using my face, but I may be more careful who I send a voice note to in the future. AI voice cloning is getting crazy.

Have you signed up for the Neuralink trial? Would you let Elon Musk put a chip in your brain? Musk managed to blow up a few SpaceX rockets before he got it right. I think I’ll wait until they’ve worked out the bugs.

Let us know what you think, and send us links to any juicy AI stories we may have missed.


This article was originally published at dailyai.com