Weekly Update 4: A Text Editor for the Monome Grid
A Text Editor on the Grid
This week, I found myself building a hyper-minimal "text editor" for the monome grid (previously known as just the "monome" before they started making other hardware). The grid is a 16-by-8 array of backlit silicone buttons whose LEDs can be independently controlled.
One could say I have chosen to build a text editor for one of the world's smallest touch screens.
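To make the hardware a little more concrete: the grid itself holds no display state, so an editor like this typically mirrors the LED matrix in software and pushes changes out to the device (on real hardware, via the serialosc protocol). Here's a minimal sketch of that in-memory mirror; the `GridState` class and its method names are my own invention for illustration, not monome's API:

```python
# In-memory stand-in for the grid's 16x8 LED matrix. The real device
# is driven over serialosc; this sketch only models the state you
# would mirror in software before pushing updates out.
GRID_COLS, GRID_ROWS = 16, 8

class GridState:
    def __init__(self):
        # 0 = off, 1 = on (the hardware also supports brightness levels)
        self.leds = [[0] * GRID_COLS for _ in range(GRID_ROWS)]

    def set_led(self, x, y, state):
        """Set a single LED, independently of all the others."""
        self.leds[y][x] = state

    def lit(self):
        """Count how many LEDs are currently on."""
        return sum(sum(row) for row in self.leds)

grid = GridState()
grid.set_led(0, 0, 1)    # top-left corner
grid.set_led(15, 7, 1)   # bottom-right corner
```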
Here's a video of me interacting with it. Notice how I use a large knob to scroll through a set of symbols:
With some updates, I added some navigation. Now, there's a mix of a mechanical keypad, the knob, and the grid itself to jump around the symbols:
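A knob like this usually arrives in software as a stream of relative ticks, so "scrolling through a set of symbols" reduces to folding those deltas into a cursor position. A hypothetical sketch, assuming the cursor clamps at the ends (the real editor might wrap around instead):

```python
# Fold a knob's relative ticks into a symbol-set cursor, clamped to
# the valid range. (Hypothetical helper, not bitrune's actual code.)
def scroll(index, delta, nsymbols):
    return max(0, min(nsymbols - 1, index + delta))
```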
Here I am now inserting and removing symbols via the mechanical keypad, using the ortho33 input method I designed last week. You'll note that the bottom right-hand portion of the grid is a 3x3 status "box" that shows the state of the keypad, as well as whether I'm in edit or navigation mode:
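For what it's worth, carving out that corner is just index arithmetic. Assuming the conventional top-left origin at (0, 0), a 3x3 box in the bottom right-hand corner of a 16x8 grid spans columns 13–15 and rows 5–7. A hypothetical helper, not bitrune's actual code:

```python
GRID_COLS, GRID_ROWS = 16, 8
BOX = 3  # the status "box" is 3x3

def status_box_cells():
    # Top-left cell of the box, assuming (0, 0) is the grid's top left.
    x0, y0 = GRID_COLS - BOX, GRID_ROWS - BOX
    return [(x, y) for y in range(y0, GRID_ROWS) for x in range(x0, GRID_COLS)]
```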
Finally, I added multi-line support, which at this point almost makes it a "text editor". Except, instead of "text", they are symbols.
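Structurally, "multi-line symbols instead of text" just swaps characters for symbol IDs: lines become lists of IDs, with a cursor for inserting, removing, and splitting lines. A rough sketch of what I imagine such a buffer looks like (the class and method names here are hypothetical):

```python
# Hypothetical multi-line symbol buffer: each line is a list of
# symbol IDs rather than characters, edited at a (row, col) cursor.
class SymbolBuffer:
    def __init__(self):
        self.lines = [[]]
        self.row, self.col = 0, 0

    def insert(self, sym):
        """Insert a symbol ID at the cursor and advance."""
        self.lines[self.row].insert(self.col, sym)
        self.col += 1

    def remove(self):
        """Remove the symbol before the cursor (like backspace)."""
        if self.col > 0:
            self.col -= 1
            self.lines[self.row].pop(self.col)

    def newline(self):
        """Split the current line at the cursor."""
        rest = self.lines[self.row][self.col:]
        self.lines[self.row] = self.lines[self.row][:self.col]
        self.lines.insert(self.row + 1, rest)
        self.row += 1
        self.col = 0
```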
Where's the Sound?: an attempt to explain myself
Before you ask, there is no sound. Not yet, at least. The idea is to have this be a core tool for the gestlings project. But not too much of a tool, since I have a lot of things to do and I have given myself a time budget to finish the project.
The thing is, there's just a lot to build before actual sound can occur. Gestlings, a spiritual descendent of Sporthings, can be thought of as a collection of studies in Gesture Synthesis, a novel technique I've been developing for controlling sound in a procedurally generated way. This by itself is too vague and too stiff and too boring. There needs to be more to it. Gesture synthesis is connected to human singing, the voice apparatus, and lyrical performances in music. So, since it's the voice and everything, let's have each "Gestling" introduce themselves. That's much more interesting than "computer music sound studies". Voices imply character, which implies personality. Also, voices don't come from nowhere; they usually come from a mouth, and mouths are attached to faces. And faces are usually attached to a body, and bodies are in an environment. What are these Gestlings saying? Why are they saying it? How are they saying it? What is their motivation? And so on, and so forth.
Needless to say, Gestlings are more than mere "etudes" or "compositional studies". Now they are "Sounds With Faces", and hopefully they have something to say. I am in over my head.
So, Gestlings are going to be these creatures that have something to say, and I'm going to be the one programming what it is they say. How to do that? Develop a constructed language, or "conlang", write something to say in that language, and make a speech synthesizer that can perform those words. Realize that I'm a composer and not a linguist. I am also lazy and on a (time) budget, so a lot of shortcuts are taken. Firstly, I don't need a formal language, just a blurry shadow of one. It's the tone color of speech that fascinates me, not really the semantics of language. Really, the prosody is all I care about; my initial gestling prototype has subtitles and speaks gibberish. Puppycat does it. R2D2 does it. Charlie Brown's Teacher does it. I think it works out quite well for them.
Even if you build a "language" that isn't actually a language but total gibberish, or "asemic speech", there's still going to be some structure that remains. I've decided to try thinking of this structure as a set of symbols. This editor, which I intend to call "bitrune", aims to be an editor for these symbols.
I've built out a very constrained system for myself. It may end up being too constrained, but I'm going to try to stick to it. The design challenge will be building up symbol sets that allow one to be creatively expressive within these constraints.