Musical Software Ecosystem

A software ecosystem is a self-contained collection of programs and utilities designed to work tightly together. Each new tool built empowers the rest of the system. A musical software ecosystem is one that primarily focuses on making music.

This page will outline the musical ecosystem that I have built for myself.

Well, what exactly does it mean to make computer music?

For starters, it's got to make sound somehow. The low-level library I use for musical DSP is called Soundpipe. It provides a good collection of low-level sound goodies, but not a particularly friendly way to mix them together. At the C level, every tiny musical idea translates into several dozen actions in Soundpipe + C. Things must therefore be built on top of Soundpipe to do anything practical.
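To give a sense of that verbosity, here is a minimal sketch of rendering a single sine tone with Soundpipe alone. It follows Soundpipe's conventional sp_* create/init/compute pattern, but treat the details as illustrative rather than canonical:

    #include <soundpipe.h>

    /* User data handed to the render callback. */
    typedef struct {
        sp_ftbl *ft;  /* wavetable holding one sine cycle */
        sp_osc *osc;  /* table-lookup oscillator */
    } UserData;

    /* Called for each sample: compute the oscillator, write it out. */
    static void process(sp_data *sp, void *udata) {
        UserData *ud = udata;
        SPFLOAT out = 0;
        sp_osc_compute(sp, ud->osc, NULL, &out);
        sp->out[0] = out;
    }

    int main(void) {
        UserData ud;
        sp_data *sp;

        sp_create(&sp);
        sp->len = sp->sr * 5; /* five seconds of audio */

        sp_ftbl_create(sp, &ud.ft, 2048);
        sp_gen_sine(sp, ud.ft);
        sp_osc_create(&ud.osc);
        sp_osc_init(sp, ud.osc, ud.ft, 0);
        ud.osc->freq = 440;
        ud.osc->amp = 0.5;

        sp_process(sp, &ud, process);

        sp_osc_destroy(&ud.osc);
        sp_ftbl_destroy(&ud.ft);
        sp_destroy(&sp);
        return 0;
    }

Even this "hello world" needs a wavetable, an oscillator, a callback, and explicit teardown. That ceremony is exactly what the layers described below exist to hide.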

A common thing one does in computer music is build virtual modular patches, where things connect into things, which connect into other things, which connect into still more things. This is known as modular synthesis. Based on its analogue equivalent, it is an excellent and time-honored mental model for constructing digital sounds. To do these kinds of operations, Patchwerk is used. Patchwerk builds up what are more formally known as directed audio graphs, using block-based signal processing.
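The core idea behind block-based graph processing is simple enough to sketch. What follows is not Patchwerk's actual API, just a hypothetical illustration: each node owns a buffer, reads from the buffers of the nodes feeding into it, and the whole graph is computed one block at a time in topological order:

    #include <stddef.h>

    #define BLKSIZE 64

    /* A node in a directed audio graph: reads from the buffer of the
     * node feeding into it, fills its own buffer one block at a time. */
    typedef struct node {
        float buf[BLKSIZE];
        struct node *input;                 /* upstream node, or NULL */
        void (*compute)(struct node *self);
    } node;

    /* Example node: scale the upstream signal by half. */
    static void gain_compute(node *self) {
        size_t i;
        for (i = 0; i < BLKSIZE; i++)
            self->buf[i] = 0.5f * self->input->buf[i];
    }

    /* Run the nodes in topological order, so every node's inputs are
     * up to date before the node itself computes. */
    static void graph_compute(node **order, size_t nnodes) {
        size_t n;
        for (n = 0; n < nnodes; n++)
            order[n]->compute(order[n]);
    }

Computing 64 samples per call, rather than one sample at a time, amortizes the per-node overhead, which is why most graph-based audio systems process in blocks.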

Patchwerk, like Soundpipe, is still a bit too low-level to do anything very creative. A lot of menial tasks are needed to construct even the simplest sound. As luck would have it, stack-based systems prove to be an excellent way to expressively notate directed audio graphs. This is where the Runt scripting language comes into play. Built before Patchwerk, Runt has been repurposed as the de-facto way to interact with Patchwerk.
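Why do stack languages fit audio graphs so well? Each word pops the nodes it consumes off a stack and pushes the node it produces, so a postfix program reads as a linearized, already-topologically-sorted description of the graph. Here is a hypothetical sketch of that dispatch, not Runt's actual internals; make_sine_node is a made-up constructor:

    #include <stddef.h>

    typedef struct node node;         /* graph node, as sketched above */
    node *make_sine_node(node *freq); /* hypothetical constructor */

    typedef struct {
        node *items[64];
        size_t top;
    } node_stack;

    static void push(node_stack *s, node *n) { s->items[s->top++] = n; }
    static node *pop(node_stack *s) { return s->items[--s->top]; }

    /* A word like "sine": pop a frequency node, push an oscillator
     * node wired to it. A program like "440 sine" would push a
     * constant node, then replace it with the oscillator. */
    static void word_sine(node_stack *s) {
        node *freq = pop(s);
        push(s, make_sine_node(freq));
    }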

Computer music isn't just about getting sound to come out of a computer. In fact, that's not all that important. The more important bit is how the human tells the computer what kinds of sounds should come out. Enter human-computer interaction, otherwise known as HCI.

The tools mentioned so far focus on making the sounds themselves, but not on the process of actually developing the sounds. For this, you need systems that promote interaction with fast, close-to-realtime feedback, as well as the iteration and general sound-tweaking that come with the craft. There are excellent physical peripherals for these sorts of things, like the grid and the arc. There's also the wonderful world of live and interactive coding. This is the sort of problem space that Monolith aims to solve. In Monolith, Runt-Patchwerk code is evaluated by Scheme code. The resulting patch is able to run in realtime with hotswapping capabilities. This, combined with a Scheme REPL in Emacs, makes for a very capable live-coding environment. Also baked into Monolith is good integration with hardware peripherals like the monome grid and arc, as well as the Griffin PowerMate.

Composing music is a very cerebral act. In addition to tools for making music, there are tools that help organize thoughts around the music-making itself. Most of the tools I write employ a literate programming style, many of them written using a literate programming system I developed called Worgle. For organizing non-linear ideas and concepts, there is weewiki.
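Worgle tangles source code out of org documents. Assuming the usual org-mode noweb conventions, a literate fragment looks something like this (the block names are made up for illustration):

    #+NAME: oscillator-setup
    #+BEGIN_SRC c
    /* set up the sine oscillator */
    sp_osc_create(&osc);
    sp_osc_init(sp, osc, ft, 0);
    #+END_SRC

    #+NAME: main
    #+BEGIN_SRC c
    int main(void) {
        <<oscillator-setup>>
        return 0;
    }
    #+END_SRC

The prose around each block carries the reasoning, and the tangler stitches the named blocks together into compilable C, so the document and the program are the same artifact.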

Monolith, Weewiki, and Worgle all use SQLite as a sort of lingua franca, which I believe to be a significant aspect of my musical ecosystem: any tool that can read a SQLite database can work with data produced by any of the others.
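To make the lingua-franca idea concrete, here is a minimal sketch using the standard sqlite3 C API. The database file and table here are hypothetical stand-ins; the point is that anything one tool writes, another can open and query:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *stmt;

        if (sqlite3_open("ecosystem.db", &db) != SQLITE_OK) return 1;

        /* hypothetical table written by one tool... */
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS wiki(key TEXT, value TEXT);"
            "INSERT INTO wiki VALUES('title', 'Musical Software Ecosystem');",
            NULL, NULL, NULL);

        /* ...read back by another. */
        sqlite3_prepare_v2(db, "SELECT value FROM wiki WHERE key = 'title';",
                           -1, &stmt, NULL);
        if (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *) sqlite3_column_text(stmt, 0));

        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }

Because a SQLite database is a single ordinary file with a stable format, it doubles as both a storage layer and an interchange format between the tools.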

