AI and the future of human-made music
Full article with interviews from some incredible artists:
http://www.furious.com/perfect/artificialintelligencemusic.html
Question from J. Vognsen:
The ability of AI to make text, images and sound is improving at a rapid pace. Does this affect how you think about your own music? Does it change how you view the value and purpose of making music? For example: On a practical level, do you worry about being unable to reach an audience as AI generated music continues to proliferate? Or, on a more existential level, can you imagine a point where AI gets so good that it will no longer be meaningful for you to continue making music?
Response:
For me, when answering this question, it’s first important to consider what the musical output of AI currently is and what it could be. I think a lot of the AI-generated music currently available isn’t very interesting because it exists to re-create what a human can do, with no space for movement beyond that, so the music stagnates. In these cases, the music the AI is trained on is often represented symbolically (such as in MIDI form), a format too constrained to capture the nuances found in music’s timbre. Though it’s certainly possible to generate infinite amounts of music this way, and though I’m sure this will inevitably be done and pushed by companies, perhaps in the form of stock/library music, I don’t think it will be anything worth actively listening to.
I don’t believe that AI will change the purpose and value of making music as there will always be a need for human-made music because of the lived experience, culture, tension, and interaction tied up with it both for creators and listeners. I do, however, think that AI can be helpful for musicians once we’ve filtered through the buzzwords and soul-sucking corporate approaches. The most straightforward example of this is using AI tools for efficiency when making music. Perhaps an AI tool could exist that generates a MIDI pattern to fit a certain mood or style from a starting point that can then be altered, or one that helps you find the right synthesised sound with text prompts (I learned while writing this that this now exists in the form of Vroom).
The role of AI in music for me revolves more around trying to push it to aesthetic limits that have previously been unattainable in human-made music, while still having a human artist as the main decision maker. AI can recognise and create patterns that are imperceptible to humans, and this is especially useful for timbre because it allows timbre to be altered at a level of detail that is hard for humans to perceive. Leijnen (2014) has suggested using ‘bugs, errors and random numbers’, or in a musical context, audible glitches, as a means of generating and eliminating creative constraints, and suggests that this has the potential to create new ideas that do not fit into a previous style or convention. The output of raw audio neural networks (RANNs) is an example of this. RANNs work on audio at the level of individual samples (e.g. 44,100 samples per second), training a neural network to piece back together the audio it has been fed. Because the network fails to reassemble these tiny samples entirely correctly, the result is an inhuman aesthetic that is not present in the original training material, driven instead by many subtle, intricate glitches. These outputs are unique sounding and are aesthetically beyond something a human could create alone. RANNs cannot generate as much music as the symbolic models mentioned earlier because their training material is more complex (44,100 samples per second adds up to a lot of data in just a few minutes). They also require their training material to be relatively timbrally consistent. The plus side of this limitation is that humans must retain creative control over these systems for them to be useful. The artist’s job is to select or create focused training material for the neural network and spend time curating the output. Jennifer Walshe and Dadabots’ A Late Anthology of Early Music Vol. 1 – Ancient to Renaissance (2020) is a good example of this.
For this album, Dadabots trained a RANN on hours of a cappella voice recordings of Walshe over 40 generations. The final album (curated by Walshe) dips into the different generations in the training process, with the track titles comically following the early history of Western music. The album creates a boundless world of alien-sounding vocal technique with squelching formants, ghostly moving drones, glitches, and bleeps and bloops, each track distinct from the last. Another example is Vicky Clarke’s Aura Machine (2021), which involved training a RANN on field recordings categorised into the classes of ‘Echoes of Industry’ (Manchester mill spaces), ‘Age of Electricity’ (DIY technology, noise & machinery) and ‘Materiality’ (glass fragments and metal sound sculptures). Clarke supplements the RANN’s output with additional contributions of her own, mostly pitched synth material, which helps draw attention to the sounds created by the network. I like this as a cleaner version of what can be done with RANNs. Both these approaches are much more musical to me than symbolic approaches because of how much detail timbral emulation allows for. Being able to use AI to emulate and ultimately transform timbre is a really exciting creative tool that helps rather than threatens the human artist. I am excited when new meaningful music-making methods become available, and tools like this give me more creative drive.
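As a rough illustration of the sample-level modelling described above, here is a toy sketch of the general idea (my own illustration, not Dadabots’ actual method): it fits a simple predictor to raw audio samples and then generates new audio by feeding its own predictions back in as context, so that small prediction errors accumulate and the output drifts away from the training signal. Real RANNs such as SampleRNN use deep neural networks; the linear predictor, reduced sample rate, and sine-wave ‘training material’ here are all simplifying assumptions made for the sake of a short, runnable example.

```python
import numpy as np

# Toy sketch of sample-level autoregressive audio modelling.
# Real RANNs use deep neural networks; a ridge-regularised linear
# predictor stands in for the network here.

SR = 8000      # sample rate (reduced from 44,100 for this demo)
ORDER = 32     # how many past samples the predictor sees

# "Training material": one second of a 220 Hz sine wave.
t = np.arange(SR) / SR
audio = np.sin(2 * np.pi * 220 * t)

# Build (context, next-sample) training pairs from the raw samples.
X = np.lib.stride_tricks.sliding_window_view(audio, ORDER)[:-1]
y = audio[ORDER:]

# Fit the predictor (ridge regression stands in for NN training).
coeffs = np.linalg.solve(X.T @ X + 1e-3 * np.eye(ORDER), X.T @ y)

# Generate new audio one sample at a time, feeding each prediction
# back in as context. Small prediction errors accumulate, which is
# the source of the drift/glitch character described in the text.
context = list(audio[:ORDER])
generated = []
for _ in range(SR):
    nxt = float(np.dot(coeffs, context[-ORDER:]))
    generated.append(nxt)
    context.append(nxt)

generated = np.array(generated)
print(len(generated), "samples generated")
```

The key point the sketch shows is that generation is closed-loop: once the model starts consuming its own output, it is no longer reproducing the training material, which is where the ‘inhuman’ artefacts come from.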
Reflecting on the non-human qualities of AI more generally, I think having clear oppositions in art is productive for shifting perspectives. I’ve seen a lot of AI-generated visual art that ends up looking ‘too perfect’ or uncanny, and I often find it quite beautiful in its emptiness. Though important as standalone work, I also think that art like this draws attention to what’s missing, or rather, what is so important in human or human-machine (rather than just machine) created art. I believe that if we remain critical and creative, we can gain from the outputs of AI in meaningful ways, without taking away from human artists.
References
Leijnen, S. 2014. Creativity & Constraint in Artificial Systems. Ph.D. thesis, Radboud University Nijmegen.
Available from: https://repository.ubn.ru.nl/bitstream/handle/2066/132144/132144.pdf