Using raw audio neural network systems to define musical creativity


Presented at the 3rd Conference on AI Music Creativity.

Abstract:

This paper uses the hacker duo Dadabots (who generate raw audio with SampleRNN) and OpenAI’s Jukebox project (which generates raw audio with a hierarchical VQ-VAE transformer) as case studies to assess whether machines are capable of musical creativity, how they are capable of it, and whether this helps to define what musical creativity is. It also discusses how these systems can support human creative processes. The findings from evaluating Dadabots’ and OpenAI’s work firstly demonstrate that our assumptions about musical creativity, in both humans and machines, revolve too strongly around symbolic models. Secondly, the findings suggest that what Boden describes as ‘transformational creativity’ can take place through unexpected machine consequences.