Each fall, I begin my course on the intersection of music and artificial intelligence by asking my students whether they're concerned about AI's role in composing or producing music.

So far, the question has always elicited a resounding "yes."

Their fears can be summed up in a sentence: AI will create a world where music is plentiful, but musicians get cast aside.

In the upcoming semester, I’m anticipating a discussion about Paul McCartney, who in June 2023 announced that he and a team of audio engineers had used machine learning to uncover a “lost” vocal track of John Lennon by separating the instruments from a demo recording.

But resurrecting the voices of long-dead artists is just the tip of the iceberg in terms of what's possible – and what's already being done.

In an interview, McCartney admitted that AI represents a “scary” but “exciting” future for music. To me, his mixture of consternation and exhilaration is spot on.

Here are three ways AI is changing the way music gets made – each of which could threaten human musicians in different ways:

1. Song composition

Many programs can already generate music with a simple prompt from the user, such as "Electronic Dance with a Warehouse Groove."

Fully generative apps train AI models on extensive databases of existing music. This enables them to learn musical structures, harmonies, melodies, rhythms, dynamics, timbres and form, and to generate new content that stylistically matches the material in the database.

There are many examples of these sorts of apps. But the most successful ones, like Boomy, allow nonmusicians to generate music and then post the AI-generated results on Spotify to earn money. Spotify recently removed many of these Boomy-generated tracks, claiming that this would protect human artists' rights and royalties.

The two companies quickly came to an agreement that allowed Boomy to re-upload the tracks. But the algorithms powering these apps still have a troubling ability to infringe on existing copyright, which could go unnoticed by most users. After all, basing new music on a data set of existing music is bound to produce noticeable similarities between the music in the data set and the generated content.

A poster for the AI music service Boomy in Austin, Texas.
Smith Collection/Gado/Getty Images

Furthermore, streaming services like Spotify and Amazon Music are naturally incentivized to develop their own AI music-generation technology. Spotify, for instance, pays 70% of the revenue from each stream to the artist who created it. If the company could generate that music with its own algorithms, it could cut human artists out of the equation altogether.

Over time, this could mean more money for big streaming services, less money for musicians – and a less human approach to creating music.

2. Mixing and mastering

Machine-learning-enabled apps that help musicians balance all the instruments and clean up the audio in a song – what's known as mixing and mastering – are valuable tools for people who lack the experience, skill or resources to pull off professional-sounding tracks.

Over the past decade, AI's integration into music production has revolutionized how music is mixed and mastered. AI-driven apps like Landr, Cryo Mix and iZotope's Neutron can automatically analyze tracks, balance audio levels and remove noise.

These technologies streamline the production process, allowing musicians and producers to focus on the creative aspects of their work and leave some of the technical drudgery to AI.

While these apps undoubtedly take some work away from professional mixers and producers, they also allow professionals to quickly complete less lucrative jobs, such as mixing or mastering for a local band, and focus on high-paying commissions that require more finesse. These apps also allow musicians to produce more professional-sounding work without involving an audio engineer they can't afford.

3. Instrumental and vocal reproduction

Using "tone transfer" algorithms via apps like Mawf, musicians can transform the sound of one instrument into another.

Thai musician and engineer Yaboi Hanoi's song "Enter Demons & Gods," which won the third international AI Song Contest in 2022, was unique in that it was influenced not only by Thai mythology, but also by the sounds of native Thai musical instruments, which have a non-Western system of intonation. One of the most technically exciting elements of Yaboi Hanoi's entry was the reproduction of a traditional Thai woodwind instrument – the pi nai – which was resynthesized to perform the track.

A variant of this technology lies at the core of the Vocaloid voice synthesis software, which allows users to produce convincingly human vocal tracks with swappable voices.

Unsavory applications of this technique are popping up outside of the musical realm. For example, AI voice swapping has been used to scam people out of money.

But musicians and producers can already use it to realistically reproduce the sound of any instrument or voice conceivable. The downside, of course, is that this technology can rob instrumentalists of the opportunity to perform on a recorded track.

Using tone transfer, a singer's voice is turned into the sound of a trumpet.
Jason Palamara, CC BY

AI’s Wild West moment

While I applaud Yaboi Hanoi's victory, I have to wonder whether it will encourage musicians to use AI to fake a cultural connection where none exists.

In 2021, Capitol Music Group made headlines by signing an "AI rapper" that had been given the avatar of a Black male cyborg, but which was really the work of non-Black software engineers at the company Factory New. The backlash was swift, with the record label roundly excoriated for blatant cultural appropriation.

But AI musical cultural appropriation is easier to stumble into than you might think. With the extraordinary number of songs and samples that make up the data sets used by apps like Boomy – see the open source "Million Song Dataset" for a sense of the scale – there's a good chance that a user may unwittingly upload a newly generated track that draws from a culture that isn't their own, or cribs from an artist in a way that too closely mimics the original. Worse still, it won't always be clear who's responsible for the offense, and current U.S. copyright laws are contradictory and woefully inadequate to the task of regulating these issues.

These are all topics that have come up in my own class, which has allowed me to at least inform my students of the dangers of unchecked AI and how best to avoid these pitfalls.

At the same time, at the end of each fall semester, I'll again ask my students if they're concerned about an AI takeover of music. At that point, with a whole semester's experience investigating these technologies, most of them say they're excited to see how the technology will evolve and where the field will go.

Some dark possibilities do lie ahead for humanity and AI. Still, at least in the realm of musical AI, there is cause for some optimism – assuming the pitfalls are avoided.

This article was originally published at theconversation.com