For the first time in 60 years, the Writers Guild of America (WGA) and the Screen Actors Guild (SAG) are simultaneously facing off against the Alliance of Motion Picture and Television Producers.

The key points of contention? Working conditions, adequate pay, and the increasing encroachment of artificial intelligence (AI) into their professions.

The use of AI in the film and TV industry isn’t new. Many common post-production techniques use AI technology in computer graphics, color grading, animation and video editing.

Not only was the Lord of the Rings trilogy a defining moment of the early 2000s, it also showed how AI could be used in film production. The Battle of Helm’s Deep features computer-generated AI armies in one of the most memorable scenes in cinematic history.

But in the present strike, the specific concern is a subset of AI known as generative AI. A balance must be struck between protections for creative professionals and the application of generative AI as a useful tool.



Remind me, what’s generative AI?

Like all AI, a generative AI model is fed existing data (content) and uses algorithms to process this data, discover patterns and produce outputs – such as an image or a piece of writing. What is novel about generative AI is its capacity to undertake this so-called “learning” process relatively autonomously and to generate original content.
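The “learn patterns from data, then generate from them” idea can be shown with a toy example. The sketch below is not a real generative model – it is a one-word Markov chain in plain Python, with an invented miniature corpus standing in for the vast datasets real models train on – but it captures the same loop: observe which words follow which, then chain together likely continuations.

```python
import random
from collections import defaultdict

def train(text):
    """Record which words follow each word in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length, seed=0):
    """Chain together observed next words, starting from a given word."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Tiny stand-in for the billions of documents real models train on.
corpus = "the actor reads the script and the writer edits the script"
model = train(corpus)
print(generate(model, "the", length=6))
```

Note that every word the toy chain emits already appeared in its training data – a small-scale version of the “remix” point made below about generative output.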

Many of us are most familiar with generative AI as the technology behind increasingly sophisticated deepfakes.

The now-infamous image of Pope Francis wearing an oversized puffer jacket? Courtesy of a 31-year-old construction worker using the AI image generator Midjourney.

Generative AI has taken off in the mainstream through products and companies such as Midjourney, Stable Diffusion, Meta and OpenAI. The latter is now famous for its large image model DALL-E and large language model ChatGPT.

So what is going on in Hollywood?

Hollywood workers have valid reason for their unease. The fear is that AI won’t only be used for supporting technical jobs such as color grading or adding characters in the far background, but that it could also replace creative jobs.

For both the WGA and SAG, there’s also a legitimate worry that entry-level jobs (such as writers’ assistants and background extras on sets) will be largely replaced by AI.

This would significantly reduce opportunities for people entering the workforce to gain vital experience in their craft.

With the staggering improvements in each ChatGPT iteration, screenwriters have also been grappling with the prospect that they will be sharing creative control over scripts with large language models.

Questions arise around how these works would be attributed, who (or what) would be given credit, and consequently how payment would be allocated.

These unions aren’t entirely against using AI. The WGA has proposed a model for human-AI collaboration in which generative AI could produce early versions of a script that human screenwriters would then refine. But many experts and industry professionals see this proposal as alienating writers from the creative process, repositioning them as copy editors.

One of the most dystopian scenarios put on the table by big studios has been termed “performance cloning”. This involves paying background actors a one-off fee to scan their likeness, which can then be owned and used by companies in perpetuity.

As well as creating a regressive payment model, this raises issues of consent: what happens if your AI body double is used in a way you would never agree to?

It’s also a matter of copyright

With generative AI, consent is closely bound up with issues of copyright.

Comedian Sarah Silverman is currently suing OpenAI and Meta for copyright infringement. She alleges their AI models were trained on her work without her consent, and were consequently able to roughly reproduce her comedy style.

That her oeuvre is part of the machine-learning dataset is unsurprising. These datasets encompass billions of data points – essentially everything that has made its way onto the internet.

Though generative AI is said to produce original content, a better way to view this content is as a remix. These models regurgitate what they’ve been trained on.

If these models become foundational to the film and TV industry, the originality of our cultural products is up for debate.

Streaming services have already primed audiences for the algorithmic curation of taste. Generative AI extends this existing trajectory. If studios become overly reliant on these technologies, chances are the “new” content offered to us will only echo what has come before. It may also move us further away from equality in representation, given the well-documented bias of these AI models.



We need collaboration without exploitation

As workers fight for industry regulation banning the replacement of humans by AI, it is important to reiterate that this is not a call to ban the technology outright. Generative AI has already been used in valuable and compelling ways in film.

An early example is David France’s 2020 documentary Welcome to Chechnya, which explores the persecution of LGBTQ+ people in Russia. France did extensive post-production work with AI, producing synthetic voices and superimposed faces to protect his subjects’ anonymity while retaining their humanity.

The question at the heart of copyright – how we balance protecting the rights of creatives with the openness needed for cultural production – resurfaces in this context. We need regulatory measures that enable creative collaboration with generative AI while ensuring creative workers are not exploited to further centralise power.

In June, the Directors Guild of America won protection against being replaced by AI tools in a new labour contract with producers. The hope is that these protections will be extended to screenwriters and actors.

Otherwise, in Hollywood, AI might just steal the show.



This article was originally published at theconversation.com