The past few years have seen an explosion in applications of artificial intelligence to creative fields. A new generation of image and text generators is delivering impressive results. Now AI has found its way into music, too.

Last week, a team of researchers at Google released MusicLM – an AI-based music generator that can convert text prompts into audio segments. It's another example of the rapid pace of innovation in a remarkable few years for creative AI.

With the music industry still adjusting to disruptions caused by the internet and streaming services, there's a lot of interest in how AI might change the way we create and experience music.



Automating music creation

A wide variety of AI tools now allow users to automatically generate musical sequences or audio segments. Many are free and open source, such as Google's Magenta toolkit.

Two of the most familiar approaches in AI music generation are:

  1. continuation, where the AI continues a sequence of notes or waveform data (see the sketch after this list), and

  2. harmonisation or accompaniment, where the AI generates something to complement the input, such as chords to accompany a melody.
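
To give a concrete sense of what continuation looks like in practice, here is a minimal sketch using Google's open-source Magenta library: a short primer melody is handed to a pretrained melody model, which generates a continuation. It loosely follows Magenta's published examples; the module paths and the pretrained bundle file name ("basic_rnn.mag") are assumptions that may vary between library versions.

```python
# A minimal sketch of melody "continuation" with Google's Magenta library,
# loosely following its published examples. Module paths and the pretrained
# bundle file name are assumptions and may differ between versions.
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle
from note_seq.protobuf import generator_pb2, music_pb2

# A four-note primer melody (C D E F), one half-second per note.
primer = music_pb2.NoteSequence()
for i, pitch in enumerate([60, 62, 64, 65]):
    primer.notes.add(pitch=pitch, start_time=i * 0.5,
                     end_time=(i + 1) * 0.5, velocity=80)
primer.total_time = 2.0
primer.tempos.add(qpm=120)

# Load a pretrained melody model from a downloaded .mag bundle file.
bundle = sequence_generator_bundle.read_bundle_file("basic_rnn.mag")
melody_rnn = melody_rnn_sequence_generator.get_generator_map()["basic_rnn"](
    checkpoint=None, bundle=bundle)
melody_rnn.initialize()

# Ask the model to continue the primer for another eight seconds.
options = generator_pb2.GeneratorOptions()
options.args["temperature"].float_value = 1.0   # higher = more adventurous
options.generate_sections.add(start_time=primer.total_time,
                              end_time=primer.total_time + 8.0)
continuation = melody_rnn.generate(primer, options)
```

The temperature parameter is the main creative dial here: lower values stay close to the primer's character, while higher values wander further afield.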

Just as with text- and image-generating AI, music AI systems can be trained on many different data sets. You could, for example, extend a melody by Chopin using a system trained in the style of Bon Jovi – as beautifully demonstrated in OpenAI's MuseNet.

Such tools can be a great source of inspiration for artists with "blank page syndrome", even if the artist themselves provides the final push. Creative stimulation is one of the immediate applications of creative AI tools today.

But where these tools may one day be even more useful is in extending musical expertise. Many people can write a tune, but fewer know how to adeptly manipulate chords to evoke emotion, or how to write music in a range of styles.

Although music AI tools still have a way to go before they can reliably do the work of talented musicians, a handful of companies are already developing AI platforms for music generation.

Boomy takes the minimalist path: users with no musical experience can create a song with a few clicks and then rearrange it. Aiva takes a similar approach but allows finer control; artists can edit the generated music note by note in a custom editor.

There is a catch, however. Machine learning techniques are famously hard to control, and generating music using AI is a bit of a lucky dip for now; you might occasionally strike gold while using these tools, but you may not know why.

An ongoing challenge for the people creating these AI tools is to allow more precise and deliberate control over what the generative algorithms produce.

New ways to manipulate style and sound

Music AI tools also allow users to transform an existing musical sequence or audio segment. Google Magenta's Differentiable Digital Signal Processing (DDSP) library, for example, performs timbre transfer.

Timbre is the technical term for the texture of a sound – the difference between a car engine and a whistle. Using timbre transfer, the timbre of a segment of audio can be changed.
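
Under the hood, DDSP-style timbre transfer strips a recording down to a handful of performance features – pitch and loudness – and resynthesises them with a model trained on a different instrument. Below is a conceptual sketch of that idea: the feature extraction uses the real crepe and librosa libraries, but the pretrained decoder is a hypothetical placeholder standing in for the checkpoint-loading code in the actual DDSP library.

```python
# A conceptual sketch of DDSP-style timbre transfer, not the library's exact API.
# Feature extraction uses real libraries (crepe, librosa); the pretrained
# decoder below is a hypothetical placeholder for ddsp's checkpoint code.
import crepe
import librosa

# Load the source recording - say, someone whistling a tune.
audio, sr = librosa.load("whistle.wav", sr=16000, mono=True)

# 1. Extract the performance features the model conditions on:
#    fundamental frequency (which note is sounding) and loudness (how hard).
_, f0_hz, f0_confidence, _ = crepe.predict(audio, sr, viterbi=True, step_size=10)
loudness_db = librosa.amplitude_to_db(
    librosa.feature.rms(y=audio, frame_length=2048, hop_length=160)[0])

# 2. Resynthesise with a decoder trained on a different instrument (a violin,
#    say). The decoder drives differentiable oscillators and filters, so the
#    output keeps the melody and dynamics but takes on the new timbre.
decoder = load_pretrained_violin_decoder()   # hypothetical helper (assumption)
violin_audio = decoder({"f0_hz": f0_hz, "loudness_db": loudness_db})
```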

Such tools are a great example of how AI can help musicians compose rich orchestrations and achieve completely new sounds. In the first AI Song Contest, held in 2020, Sydney-based music studio Uncanny Valley (with whom I collaborate) used timbre transfer to bring singing koalas into the mix.

Uncanny Valley’s song Beautiful The World won the 2020 AI Song Contest.

Timbre transfer joins a long history of synthesis techniques that have become instruments in themselves.

Taking music apart

Music generation and transformation are just one part of the equation. A longstanding problem in audio work is "source separation": being able to break an audio recording of a track into its separate instruments.
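
Open-source tools already make this surprisingly accessible. Deezer's Spleeter, for example, splits a mixed track into stems with a few lines of Python; below is a minimal sketch of its documented usage, with placeholder file names.

```python
# A minimal sketch of AI source separation with Deezer's open-source Spleeter.
# "spleeter:4stems" loads a pretrained model that splits a mix into vocals,
# drums, bass and "other"; the file names here are placeholders.
from spleeter.separator import Separator

separator = Separator("spleeter:4stems")

# Writes vocals.wav, drums.wav, bass.wav and other.wav into a folder
# under stems/ named after the input track.
separator.separate_to_file("mixed_track.mp3", "stems/")
```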

Although it's not perfect, AI-powered source separation has come a long way. Its use is likely to be a big deal for artists, some of whom won't like that others can "pick the lock" on their compositions.

Meanwhile, DJs and mashup artists will gain unprecedented control over how they mix and remix tracks. Source separation start-up Audioshake claims this will provide new revenue streams for artists who allow their music to be adapted more easily, such as for TV and film.

Artists may have to accept that this Pandora's box has been opened, as was the case when synthesizers and drum machines first arrived and, in some circumstances, replaced the need for musicians in certain contexts.

But watch this space, because copyright law does offer artists protection from the unauthorised manipulation of their work. This is likely to become another grey area in the music industry, and regulation may struggle to keep up.

New musical experiences

The popularity of playlists has revealed how much we like to listen to music that has some "functional" utility, such as music to focus, relax, fall asleep or work out to.

The start-up Endel has made AI-powered functional music its business model, creating infinite streams to help maximise certain cognitive states.

Endel's music can be connected to physiological data such as a listener's heart rate. Its manifesto draws heavily on practices of mindfulness and makes the bold proposal that we can use "new technology to help our bodies and brains adapt to the new world", with its hectic and anxiety-inducing pace.

Other start-ups are also exploring functional music. Aimi is examining how individual electronic music producers can turn their music into infinite and interactive streams.

Aimi's listener app invites fans to manipulate the system's generative parameters, such as "intensity" or "texture", or to decide when a drop happens. The listener engages with the music rather than listening passively.

It's hard to say how much heavy lifting AI is doing in these applications – potentially very little. Even so, such advances are guiding companies' visions of how musical experience might evolve in the future.

The future of music

The initiatives mentioned above are in conflict with several long-established conventions, laws and cultural values regarding how we create and share music.

Will copyright laws be tightened to ensure companies training AI systems on artists' works compensate those artists? And what would that compensation be for? Will new rules apply to source separation? Will musicians using AI spend less time making music, or make more music than ever before?

If there's one thing that's certain, it's change. As a new generation of musicians grows up immersed in AI's creative possibilities, they'll find new ways of working with these tools.

Such turbulence is nothing new in the history of music technology, and neither powerful technologies nor standing conventions should dictate our creative future.



This article was originally published at theconversation.com