In 1954, the Guardian’s science correspondent reported on “electronic brains”, which had a type of memory that would allow them to retrieve information, like airline seat allocations, in a matter of seconds.

Nowadays the concept of computers storing information is so commonplace that we don’t even think about what words like “memory” really mean. Back in the 1950s, however, this language was new to most people, and the idea of an “electronic brain” was heavy with possibility.

In 2024, your microwave has more computing power than anything that was called a brain in the 1950s, but the world of artificial intelligence is posing fresh challenges for language – and lawyers. Last month, the New York Times newspaper filed a lawsuit against OpenAI and Microsoft, the owners of popular AI-based text-generation tool ChatGPT, over their alleged use of the Times’ articles in the data they use to train (improve) and test their systems.

The Times claims that OpenAI has infringed its copyright by using its journalism as part of the process of creating ChatGPT. In doing so, the lawsuit claims, OpenAI has created a competing product that threatens the newspaper’s business. OpenAI’s response so far has been very cautious, but a key tenet outlined in a statement released by the company is that its use of online data falls under the principle known as “fair use”. This is because, OpenAI argues, it transforms the work into something new in the process – the text generated by ChatGPT.

At the crux of this issue is the question of data use. What data do companies like OpenAI have a right to use, and what do concepts like “transform” really mean in these contexts? Questions like this, surrounding the data we train AI systems, or models, like ChatGPT on, remain a fierce academic battleground. The law often lags behind the behaviour of industry.

If you’ve used AI to answer emails or summarise work for you, you might see ChatGPT as an end justifying the means. However, it perhaps should worry us if the only way to achieve that is by exempting specific corporate entities from laws that apply to everyone else.

Not only could that change the nature of debate around copyright lawsuits like this one, it also has the potential to change the way societies structure their legal systems.



Fundamental questions

Cases like this can throw up thorny questions about the future of legal systems, but they can also question the future of AI models themselves. The New York Times believes that ChatGPT threatens the long-term existence of the newspaper. On this point, OpenAI says in its statement that it is collaborating with news organisations to provide new opportunities in journalism. It says the company’s goals are to “support a healthy news ecosystem” and to “be a good partner”.

Even if we believe that AI systems are a necessary part of the future for our society, it seems like a bad idea to destroy the sources of information that they were originally trained on. This is a concern shared by creative endeavours like the New York Times, authors like George R.R. Martin, and also the online encyclopedia Wikipedia.

Advocates of large-scale data collection – like that used to power Large Language Models (LLMs), the technology underlying AI chatbots such as ChatGPT – argue that AI systems “transform” the data they train on by “learning” from their datasets and then creating something new.

OpenAI CEO Sam Altman has become a recognised name among Silicon Valley’s tech leaders.
Jamesonwu1972 / Shutterstock

Effectively, what they mean is that researchers provide data written by people and ask these systems to guess the next words in a sentence, as they would when dealing with a real query from a user. By hiding and then revealing these answers, researchers can provide a binary “yes” or “no” signal that helps push AI systems towards accurate predictions. It’s for this reason that LLMs need vast reams of written text.
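To make that hide-and-reveal loop concrete, here is a deliberately simplified sketch in Python. It uses a toy word-pair counter rather than a neural network, and an invented one-sentence corpus, so it only illustrates the general idea of guessing a hidden next word, checking the guess, and updating from the answer – not how OpenAI’s systems actually work.

```python
from collections import Counter, defaultdict

# Hypothetical stand-in for the vast written corpora real LLMs are trained on.
corpus = "the cat sat on the mat because the cat was tired".split()

# A toy "model": for each word, count which words have followed it so far.
following = defaultdict(Counter)

correct = 0
for i in range(len(corpus) - 1):
    current_word, hidden_next_word = corpus[i], corpus[i + 1]

    # Ask the model to guess the hidden next word from what it has seen so far.
    seen = following[current_word].most_common(1)
    predicted = seen[0][0] if seen else None

    # Reveal the answer: a simple yes/no comparison is the training signal.
    if predicted == hidden_next_word:
        correct += 1

    # "Learn" by recording what actually followed.
    following[current_word][hidden_next_word] += 1

print(f"Correct guesses: {correct} out of {len(corpus) - 1}")
```

With only one sentence the toy model guesses almost nothing correctly, which is the point: the more human-written text the loop sees, the better its predictions become.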

If we were to copy the articles from the New York Times’ website and charge people for access, most people would agree this would be “systematic theft on a mass scale” (as the newspaper’s lawsuit puts it). But improving the accuracy of an AI by using data to guide it, as described above, is more complicated than this.

Firms like OpenAI do not store their training data and so argue that the articles from the New York Times fed into the dataset are not actually being reused. A counter-argument to this defence of AI, though, is that there is evidence that systems such as ChatGPT can “leak” verbatim excerpts from their training data. OpenAI says this is a “rare bug”.

However, it suggests that these systems do store and memorise some of the data they are trained on – unintentionally – and can regurgitate it verbatim when prompted in specific ways. This would bypass any paywalls a for-profit publication may put in place to protect its intellectual property.

Language use

But what is likely to have a longer-term impact on the way we approach laws in cases like these is our use of language. Most AI researchers will tell you that the word “learning” is a very weighty and inaccurate word to use to describe what AI is actually doing.

The question must be asked whether the law in its current form is sufficient to protect and support people as society experiences a huge shift into the AI age. Whether something builds on an existing copyrighted piece of work in a manner different from the original is known as “transformative use” and is a defence used by OpenAI.

However, these laws were designed to encourage people to remix, recombine and experiment with work already released into the outside world. They were not really designed to protect multi-billion-dollar technology products that work at a speed and scale many orders of magnitude greater than any human author could aspire to.

The problem with many of the defences of large-scale data collection and usage is that they rely on strange uses of the English language. We say that AI “learns”, that it “understands”, that it can “think”. However, these are analogies, not precise technical language.

Just as in 1954, when people looked at the modern equivalent of a broken calculator and called it a “brain”, we are using old language to grapple with completely new concepts. No matter what we call it, systems like ChatGPT do not work like our brains, and AI systems do not play the same role in society that people play.

Just as we had to develop new words and a new common understanding of technology to make sense of computers in the 1950s, we may need to develop new language and new laws to help protect our society in the 2020s.

This article was originally published at theconversation.com