Algorithmic Composition

I have been teaching a workshop on Algorithmic Composition at Tumo, Centre for Creative Technologies, an amazing school in Yerevan, Armenia. As someone with equal parts enthusiasm for music, design, and code, I have always been intrigued by the idea of using technology to enhance and propel our experiences with music, both in creating it and in enjoying it.

I chose to work on algorithmic music composition with the p5.sound library. p5.sound is an add-on library that equips p5.js with the ability to handle all manner of audio-related tasks, from playing a sound file to applying audio effects directly in the browser.

The p5.sound library provided basic support for synthesizing sounds at different frequencies and gave access to the Web Audio clock, which allows for accurate audio scheduling. These features provided a solid foundation for working with generative music.
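To make that concrete, here is a minimal sketch of how those two pieces might fit together (my own illustration, assuming p5.js and p5.sound are loaded via script tags; the pentatonic scale and timing values are arbitrary choices for the example):

```javascript
let osc;
let loop;

// MIDI note numbers for a C major pentatonic scale (chosen for the example)
const scale = [60, 62, 64, 67, 69, 72];

function setup() {
  createCanvas(400, 400);

  osc = new p5.Oscillator('sine');
  osc.amp(0);   // start silent
  osc.start();

  // p5.SoundLoop hands its callback a precise offset on the Web Audio clock
  // (timeFromNow), so every event can be scheduled sample-accurately.
  loop = new p5.SoundLoop(playNote, 0.25); // one event every 0.25 seconds
}

function playNote(timeFromNow) {
  const midi = random(scale);                 // pick a random scale degree
  osc.freq(midiToFreq(midi), 0, timeFromNow); // schedule the pitch change
  osc.amp(0.5, 0.01, timeFromNow);            // quick attack
  osc.amp(0, 0.2, timeFromNow + 0.05);        // short decay
}

function mousePressed() {
  userStartAudio(); // browsers only start audio after a user gesture
  loop.start();
}
```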

Many Questions

There were so many questions to consider about how a composition program might look in the context of a p5.js sketch: How do we represent musical qualities like pitch and velocity in code? What about timing information? How do we write a program which handles composition tasks and visual animation simultaneously, and how do we make sure both tasks can interact and sync with one another? Most importantly, how do we make all of this simple and intuitive to use?
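One possible set of answers, sketched below as an assumption rather than the workshop's actual code: represent a note as a plain object with pitch, velocity, and duration, let a p5.SoundLoop drive playback on the audio clock, and let draw() read the same data so the visuals stay in sync with the music.

```javascript
let synth;
let loop;
let current = 0; // index of the most recently played note

// A phrase as an array of note objects (pitch, velocity 0-1, length in beats)
const phrase = [
  { pitch: 'C4', velocity: 0.8, beats: 1 },
  { pitch: 'E4', velocity: 0.6, beats: 1 },
  { pitch: 'G4', velocity: 0.7, beats: 2 },
];

function setup() {
  createCanvas(400, 200);
  synth = new p5.MonoSynth();
  loop = new p5.SoundLoop(playStep, '4n'); // advance one step per quarter note
}

function playStep(timeFromNow) {
  const note = phrase[current % phrase.length];
  // p5.MonoSynth.play(note, velocity, secondsFromNow, sustainTime)
  synth.play(note.pitch, note.velocity, timeFromNow, note.beats * 0.4);
  current++;
}

function draw() {
  background(240);
  // The animation reads the same state the scheduler writes, keeping both in sync
  const note = phrase[current % phrase.length];
  circle(width / 2, height / 2, note.velocity * 150);
}

function mousePressed() {
  userStartAudio();
  loop.start();
}
```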

Interestingly, I found that the more I worked on the examples and tried to make them sound more musical, the more I had to hand-engineer ideas from music theory into the code. At the same time, I never really knew what results I would get when I put new rules into the system. This was challenging yet exciting, and it suggests that perhaps the role of algorithms in music will never be to replace humans entirely, but to facilitate new ideas and give us new ways to be creative.
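As one small example of the kind of rule I mean (my own illustration, not code from the workshop), the next pitch can be chosen with weights that favour small melodic steps over large leaps:

```javascript
const scale = [60, 62, 64, 65, 67, 69, 71, 72]; // C major scale as MIDI numbers

// Pick the next note, weighting each candidate by its closeness to the last one
function nextNote(previousMidi) {
  const weights = scale.map(midi => 1 / (1 + Math.abs(midi - previousMidi)));
  const total = weights.reduce((a, b) => a + b, 0);
  let r = Math.random() * total;
  for (let i = 0; i < scale.length; i++) {
    r -= weights[i];
    if (r <= 0) return scale[i];
  }
  return scale[scale.length - 1]; // fallback for floating-point edge cases
}
```

Even a rule this small changes the character of the output in ways that are hard to predict before hearing it.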

