January 2024 began with news that Midjourney, a leading force in the AI image-generation world, had used the names and styles of over 16,000 artists without their consent to train its image-generation models.

You can view the artist database under Exhibit J of a lawsuit filed against Midjourney, Stability AI, and DeviantArt.

In the same week as that disclosure, cognitive scientist Dr. Gary Marcus and concept artist Reid Southen published an analysis in IEEE Spectrum titled “Generative AI Has a Visual Plagiarism Problem.”

They conducted a series of experiments with the AI models Midjourney and DALL-E 3 to explore their ability to generate images that might infringe on copyrighted material.

By feeding Midjourney and DALL-E 3 prompts intentionally chosen to be brief and related to commercial movies, characters, and recognizable settings, Marcus and Southen revealed these models’ striking ability to produce blatantly copyrighted content.

They used prompts related to specific movies, such as “Avengers: Infinity War,” without directly naming the characters. This was to test whether the AI would generate images closely resembling the copyrighted material from contextual cues alone.

Remarkably, Midjourney included copyrighted characters in response to simple prompts like “animated toys.” Source: IEEE Spectrum

Cartoons were covered too – they experimented with generating images of “The Simpsons” characters, using prompts that led the AI models to produce distinctly recognizable images from the show.

Finally, Marcus and Southen tested prompts that didn’t allude to copyrighted material at all, demonstrating Midjourney’s ability to recall copyrighted imagery even when it isn’t specifically requested.
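For readers curious to replicate this kind of probe, below is a minimal sketch using the official OpenAI Python client (Midjourney has no public API, so only the DALL-E 3 side is shown). The prompts here are hypothetical illustrations of the indirect, contextual style described above – not the study’s exact wording – and an OPENAI_API_KEY environment variable is assumed.

```python
# pip install openai
# A minimal sketch of an indirect-prompting probe against DALL-E 3.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompts in the indirect, contextual style the study describes:
# no character or franchise names, only scene-setting cues.
indirect_prompts = [
    "movie screencap, armored superheroes gathered for a final cosmic battle",
    "animated toys that come alive when their owner leaves the room",
]

for prompt in indirect_prompts:
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        n=1,                # dall-e-3 accepts only one image per request
        size="1024x1024",
    )
    # Each output must then be inspected manually, as Marcus and Southen did,
    # for resemblance to copyrighted characters or scenes.
    print(f"{prompt!r} -> {result.data[0].url}")
```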

This was more than a technical exposé – it touched the raw nerves of artistic communities worldwide.

Art, after all, is not comparable to mere data. It’s the culmination of lifetimes of emotional investment, personal exploration, and painstaking craft.

Marcus and Southen’s study was about to become part of a protracted debate extending into copyright, intellectual property, AI monetization, and the corporate use of generative AI.

Companies use AI-generated work, and observers aren’t ignoring it

One of generative AI’s marketing taglines for business adoption is “efficiency,” or some derivative thereof.

Whether businesses use the technology to save time, cut costs, or solve problems, we’ve known for some time now that AI “efficiency” comes with some risk of displacing human skills or replacing jobs.

Companies are often encouraged to see this as an opportunity. Replacing a human with AI is commonly viewed as a strategic choice.

However, viewing this trade-off between humans and machines so linearly can prove a grave error, as the following events demonstrate quite candidly.

People aren’t willing to let instances of corporate AI misuse slide when they have the chance to confront them.

ID@Xbox

The ID@Xbox account drew criticism after posting an AI-generated promotional image. Xbox later removed the post but didn’t otherwise follow up.

Game Informer also posted a poor-quality AI-generated image of Master Chief from Halo.

Magic: The Gathering

Fantasy trading card game Magic: The Gathering conjured a storm of criticism after posting a partially AI-generated image promoting a new card release. The background specifically was AI-generated, as evidenced by distorted lines and curves.

MTG initially rejected observers’ criticisms, which picked up pace throughout the week. The situation was worsened by the fact that the company had previously released a statement opposing the use of AI in its “main products.”

This was a promotional social media image, so it didn’t break that promise, but it was MTG’s initial flat denial that got the blood pumping for many.

Later in the week, MTG conceded defeat to the hordes of observers, confirming that the image was indeed AI-generated.

The statement began, “Well, we made a mistake earlier when we said that a marketing image we posted was not created using AI. Read on for more,” and explained how a designer likely used an AI tool like Firefly, integrated into Photoshop, or another AI-powered graphic design tool, rather than simply generating the entire image with Midjourney or similar.

Part of this debate was that MTG probably only used AI to generate the image background.

If Adobe Firefly was used here, which seems possible, it’s worth noting that Adobe is bullish about its ethically and legally sound use of training data, though that claim is debated.

Perhaps it’s not the worst offense among this week’s contenders, speaking of which…

Wacom

One of the biggest blunders of the week came from Wacom, which manufactures drawing tablets for artists and illustrators.

Shockingly, for a brand founded on helping artists create digital art, Wacom used an AI-generated image to promote a discount coupon.

Again, users identified the AI origins of the image from distortions characteristic of the technology, such as the text at the bottom left of the image. Observers later found the dragon in Adobe Stock Images.

The response was brutal, with X users pointedly humiliating the brand and suggesting a boycott of its products.

Wacom apologized, but its attempt to pass off responsibility to a third party wasn’t viewed sympathetically.

League of Legends

League of Legends was another brand to be felled by the distasteful use of AI-generated art.

While this is perhaps a more contentious or borderline example, there is definite evidence of AI use, visible in some awkwardly shaped elements and body parts.

A reckoning for AI firms?

2024 has seen a continuation of lawsuits, with authors Nicholas Basbanes and Nicholas Gage filing a complaint asserting that OpenAI and Microsoft unlawfully leveraged their written works – the latest suit since the New York Times lawsuit filed in December.

The NYT’s lawsuit, in particular, could have monumental consequences for the AI sector.

Alex Connock, a senior fellow at Oxford University’s Saïd Business School, emphasized the potential impact, stating, “If the Times were to win the case, it could be catastrophic for the entire AI industry.”

He elaborated on the implications, noting that “a loss on the principle that fair dealing could enable learning from third-party materials would be a blow to the entire industry.”

Dr. Gary Marcus, involved in the Midjourney IEEE Spectrum study above, also dubbed 2024 the “year of the AI lawsuit,” and there are questions about whether this, combined with regulation and potential hardware shortages, could signal an “AI winter,” where the industry’s fervor for development cools.

Connock also speculated on the broader repercussions of this deluge of lawsuits, explaining, “If OpenAI were to lose the case, it would open up the chance for all other content makers who believe their content has been crawled (which is basically everyone) and produce damage on an industrywide scale.”

Connock theorizes, “What will almost inevitably happen is that the NY Times will settle, having extracted a better monetization deal for use of its content.”

Finding any chinks in the AI industry’s armor would be huge, both for large organizations like the NYT and for independent creators.

As James Grimmelmann, a professor of digital and information law at Cornell, stated, “Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside.”

So, how strong is the industry’s defense? Thus far, AI developers are clinging to their “fair use” arguments while gaining cover from the fact that the most popular datasets were created by entities other than themselves, which obscures their culpability.

Tech firms are adept at fighting off legal liabilities standing in the way of R&D. And let’s not forget that AI presents opportunities for governments seeking out “efficiency” and other benefits, which softens their resistance.

The UK government, for instance, even explored a copyright exception for AI companies, something it U-turned on after huge resistance and pushback from a parliamentary committee.

On the question of strategy, William Fitzgerald, a partner at the Worker Agency and a former member of Google’s public policy team, told the LA Times that big tech would mount a powerful lobbying campaign, perhaps modeled on tactics previously used by tech giants like Google.

This would involve a mix of legal defense, public relations campaigns, and lobbying efforts – tactics that were particularly visible in past high-profile cases like the battle over the Stop Online Piracy Act (SOPA) and the Google Books litigation.

Fitzgerald observes that OpenAI appears to be following a similar path to Google, not only in its approach to handling copyright complaints but also in its hiring practices.

He points out, “It appears OpenAI is replicating Google’s lobbying playbook. They’ve hired former Google advocates to run the same playbook that’s been so successful for Google for decades now.”

Fitzgerald’s analysis implies that the AI industry, like other tech sectors before it, may rely on powerful lobbying efforts and strategic public policy maneuvers to shape the legal landscape in its favor.

How this pans out is impossible to predict. But you can be sure big tech is prepared to grind things out until the bitter end.


This article was originally published at dailyai.com