Dissertation Draft: Solo Live Electronic Pieces


When the local 2600 group started organising BrumCon 2007, which took place on 3 May 2008, I asked if I could present a musical set. They had never done such a thing before, but they agreed. 2600 is a hacker group (“2600 Meetings”) with roots in phone phreaking (Trigaux), so I decided to reference that history in a piece written for the occasion.

As noted in The Complete Hacker’s Handbook, “phone phreaking” refers to the practice of hacking phone systems in order to get free calls or simply to explore the workings of the system. Phone systems used to be controlled by in-band signalling, that is, audible tones that a user could reproduce in order to gain unauthorised control. For example, a 2600 Hz tone could be used to “seize” control of a line. Other sounds commonly found in telephony are Dual Tone Multi-Frequency [DTMF] tones, which are the ones produced by a landline keypad. (Dr. K.)

I looked up red box phreaking and DTMF signals on Wikipedia and used those tones as the heart of the piece. It starts with a dial tone, then coin-drop sounds, followed by the sound of dialling, a ringback tone and finally a 2600 Hz tone. After that introduction, it plays dialling signals and then a beat. The beat is made up of patterns of sampled drums. The programme picks random beats to be accented; these always carry a drum sound, and further drum sounds are scattered across some of the other beats. The loop is repeated between 8 and 10 times, and then a new pattern is created, retaining the same accents for the duration of the piece. If the randomly generated drum pattern seems too sparse or too dense, the performer can intervene by pressing one joystick button to add drum beats or another to remove them. The idea of randomly accenting beats comes from a lecture by Paul Berg at the Institute of Sonology in The Hague, where he noted that a pattern with randomly accented beats sounds deliberately rhythmic to audiences. This is related to Trevor Wishart’s discussion of Clarence Barlow’s “indispensability factor,” where Wishart notes that changing the accents of a steady beat can alter the listener’s perception of the time signature. (Wishart p 64) It seems that greater randomness in picking accents leads listeners to perceive more complex rhythms.
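
The accenting logic can be sketched as follows. This is an illustrative Python reconstruction of the idea rather than the SuperCollider code of the piece; the pattern length, number of accents and fill density are invented parameters:

```python
import random

def make_accents(n_beats=16, n_accents=4):
    """Pick a fixed set of beats to accent; these persist for the whole piece."""
    return set(random.sample(range(n_beats), n_accents))

def make_pattern(accents, n_beats=16, density=0.3):
    """Accented beats always get a drum hit; the rest get one at random."""
    return [i in accents or random.random() < density
            for i in range(n_beats)]

# The accents stay fixed; the surrounding scatter is rebuilt after
# each loop has repeated 8-10 times.
accents = make_accents()
for section in range(3):
    pattern = make_pattern(accents)
    repeats = random.randint(8, 10)  # loop count before a new pattern
    print(repeats, "".join("X" if hit else "." for hit in pattern))
```

Because the accented positions never move, each new scatter is heard against the same skeleton, which is what lends the random patterns their apparent deliberateness.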

Underneath the beats, a busy signal comes in occasionally. There are also bass frequencies, which are DTMF sine tones transposed down by octaves. Finally, there are samples of operator messages used in the American phone system. These are glitched and stuttered, to a degree controlled with a joystick. Thus, this piece is partly a live-realisation, self-running piece and partly controlled by a performer.
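
The DTMF encoding itself is simple: each key sounds the sum of one low “row” sine and one high “column” sine, and a bass register can be derived by dividing those frequencies by powers of two. The frequency table below is the standard one; the synthesis sketch (Python) is my illustration, not the code of the piece:

```python
import math

# Standard DTMF row/column frequencies in Hz
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

def dtmf_freqs(key):
    """Return the (row, column) frequency pair for a keypad key."""
    for r, row in enumerate(KEYS):
        if key in row:
            return ROWS[r], COLS[row.index(key)]
    raise ValueError(key)

def dtmf_samples(key, octaves_down=0, sr=44100, dur=0.2):
    """Sum of two sines, optionally transposed down by whole octaves."""
    f1, f2 = (f / 2 ** octaves_down for f in dtmf_freqs(key))
    return [0.5 * (math.sin(2 * math.pi * f1 * t / sr)
                   + math.sin(2 * math.pi * f2 * t / sr))
            for t in range(int(sr * dur))]
```

Dropping a key’s pair three octaves, for example, moves the 770/1336 Hz tones of “5” to roughly 96 and 167 Hz, well into bass territory while keeping the DTMF interval intact.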

At the time, I was interested in making computer pieces that necessarily had to be computer pieces and could not be realised with live instruments or an analogue synthesiser. Extremely exact tunings and sample processing are both examples of things that are computer-dependent. I was also interested in having more live control and more visible gesture, in order to, as Paine describes in his paper on gesture in laptop performance, “inject a sense of the now, an engagement with audience in an effort to reclaim the authenticity associated with ‘live’ performance.” (Paine p 4) I thought that having physical motions would engage the audience more than a live realisation would. Conversely and relatedly, I was also interested in the aesthetics of computer failure, within the glitches I was creating. Cascone writes, “‘[Failure]’ has become a prominent aesthetic in many of the arts in the late 20th century, reminding us that our control of technology is an illusion, and revealing digital tools to be only as perfect, precise, and efficient as the humans who build them.” (Cascone) I thought this intentional highlighting of imperfection would especially resonate with an audience that largely worked with computers in a highly technical and professional capacity.

I also find glitches aesthetically appealing and have been influenced by the extremely glitchy work of Ryoji Ikeda, especially pieces like Data.Matrix, which is a sonification of data. (“Datamatics”) Similarly, in-band signalling is literally a sonic encoding of data, designed for computer use.

When I performed the piece at BrumCon, their sound system did not have an even frequency response. Some sine waves sounded far louder than others, and I had no way to adjust for this. I suspect the problem is much more pronounced for sine tones than for richer timbres. Another problem I encountered was that I was using sounds with strong semantic meanings for the audience. Many of them had been phreakers, and the sounds already had a specific meaning and context that I was not accurately reproducing. Listeners without this background have generally been more positive about the piece. One blogger wrote that the piece sounded like a “demonic homage to Gaga’s Telephone,” (Lao) although he did note that my piece was written earlier.


The music of the BBC Radiophonic Workshop has been a major influence on my music for a long time. The incidental music and sound effects of Doctor Who during the Tom Baker years were especially formative. I found time in 2008 to watch every episode of Blake’s 7 and found its sound effects to be equally compelling. I spent some time with my analogue synthesiser and tried to create sounds like the ones used in the series. I liked the sounds I got, but they were a bit too complex to layer into a collage for making a piece that way, yet not complex enough to stand on their own. I wrote a SuperCollider programme to process them through granular synthesis and other means and to create a piece using the effects as source material, mixed with other computer-generated sounds.

The timing on the micro, “beat” and loop levels is all in groups of nine or multiples of nine, which is why I changed the number in the piece’s title. I was influenced to use this number by a London poet, Mendoza, who had a project called ninerrors, which ze* describes as “a sequence of poems constricted by configurations of 9: connected & dis-connected by self-imposed constraint. each has 9 lines or multiples of 9, some have 9 words or syllables per line, others are divisible by 9. ninerrors is presented as a series of 9 pamphlets containing 9 pages of poetry.” (“ninerrors”) I adopted the use of nines not only in the timings, but also in shifting the playback rate of buffers, which are played at rates of 27/25, 9/7, 7/9 or 25/27. The frequencies of the tone clusters are also related to each other by tuning ratios similarly based on nine. I was in contact with Mendoza while writing this piece, and one of the poems in hir ninerrors cycle, an obsessive compulsive disorder, mentions part of the creation of this piece in its first line: “washing machine spin cycle drowns out synth drones.” (“an obsessive compulsive disorder”)
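
For reference, the interval sizes of these playback rates can be computed directly. This small Python sketch (not part of the piece) converts each nine-based ratio to cents:

```python
from fractions import Fraction
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 per octave)."""
    return 1200 * math.log2(float(ratio))

# The four buffer playback rates: each numerator or denominator
# is a power of three (9 or 27), giving a multiple of nine.
rates = [Fraction(27, 25), Fraction(9, 7), Fraction(7, 9), Fraction(25, 27)]
for r in rates:
    print(r, round(cents(r), 1))
```

The rates come in reciprocal pairs, so each upward transposition (27/25 is about a 133-cent shift, 9/7 about 435 cents) has an exact mirror downwards.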

While the ratios based on nines gave me the internal tunings of the tone cluster, I used dissonance curves, as described by William Sethares, to generate the tuning and scale for the base frequencies of the clusters. The clusters should therefore sound as consonant as possible and provide a contrast to the rest of the piece, which is rather glitchy. The glitches come partly from the analogue material, but also from sudden cuts in the playback of buffers. For some parts of the piece, the programme records its own output and then uses that as source material, something that may stutter, especially if the buffer is recording its own output. I used this effect because, as mentioned above, I want to use a computer to do things which only it can do. Writing about glitches, Vanhanen states that their sounds “are sounds of the . . . technology itself.” (p 47) He notes that “if phonography is essentially acousmatic, then the ultimate phonographic music would consist of sounds that have no acoustic origin,” (p 49) thus asserting that skips and “deliberate mistakes” (ibid) are the essential sound of “phonographic styles of music.” (ibid) Similarly, “glitch is the digital equivalent of the phonographic metasound.” (p 50) Glitch is necessarily digital and thus inherently tied to my use of a computer. While my use of glitch is oppositional to the dominant style of BEAST, according to Vanhanen it is also the logical extension of acousmatic music.
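
The dissonance-curve method can be sketched roughly as follows, using the sensory-dissonance model Sethares describes. This Python illustration handles only two pure sine tones; in practice a scale is derived by summing the dissonance over all pairs of partials in a spectrum and reading intervals off the local minima of the resulting curve:

```python
import math

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sethares's parameterisation of Plomp-Levelt sensory dissonance
    for two sine partials of frequencies f1, f2 and amplitudes a1, a2."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19)   # scales with critical bandwidth
    x = abs(f2 - f1)
    return a1 * a2 * (math.exp(-3.5 * s * x) - math.exp(-5.75 * s * x))

def curve(base=440.0, steps=1200):
    """Dissonance of two sine tones as the interval opens to an octave,
    sampled one cent at a time."""
    return [pair_dissonance(base, base * 2 ** (i / steps))
            for i in range(steps + 1)]

c = curve()
```

For pure sines the curve is zero at the unison, peaks around a semitone out, then falls away smoothly; richer spectra instead place sharp minima at ratios of their partials, and it is those minima that define the scale.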

Spatially, the piece was written with the BEAST system in mind. The code was written to allow N-channel realisations. Some gestures are designed with rings of 8 in mind, but others, notably at the very start, are designed for front speakers only. Some of the “recycled” buffers, playing back the piece’s own recordings, were originally intended to be sent to distant speakers not pointed at the audience, thus giving some distance between the audience and those glitches when they are first introduced. I chose to do it this way partly in order to automate the use of spatial gesture. In his paper on gesture, Paine notes that moving a sound in space is a form of gesture, specifically mentioning the BEAST system. (p 11) I think that because this gesture is already physical, it does not necessarily need to rely on the physical gesture of a performer moving faders. Removing my own physical input from the spatialisation process allowed me more control over the physical placement of the sound without diminishing the audience’s experience of the piece as authentic. It also gives me greater separation between sounds, since the stems are generated separately, and lets me use more speakers at once, thus increasing the immersive aspect (p 13) of the performance.

Although this piece is entirely non-interactive, it is a live realisation which makes extensive use of randomisation and can vary significantly between performances. Should I get additional chances to perform it on a large speaker system, I would like the audience to have a fresh experience every time it is played.


When I was a student at Wesleyan, I had my MOTM analogue modular synthesiser mounted in a 6-foot-tall free-standing rack that was intended for use in server rooms. It was not very giggable, but it was visually quite striking. When my colleagues saw it, they launched a campaign for me to do a live-patching concert. I was initially resistant to their encouragement, as it seemed like a terrible idea, but eventually I gave in and spent several days practising getting sounds quickly and then refining them. In performance, as with other types of improvisation, I would find exciting and interesting sounds that I had not previously stumbled on in the studio. Some of my best patches have been live.

I have long been deeply interested in the music of other composers who do live analogue electronics, especially in the American experimental tradition of the 1960s and 70s. Bye Bye Butterfly by Pauline Oliveros is one such piece, although she realised it in a studio. (Bernstein p 30) This piece and others that I find interesting are based on discovering the parameters and limits of a sound phenomenon. Bernstein writes that “She discovered that a beautiful low difference tone would sound” when her oscillators were tuned in a particular way. (ibid) Live patching also seems to be music built on discovery, but perhaps a step more radical for its being performed live.

Even more radical than live patching is Runthrough by David Behrman, which is realised live with DIY electronics. The programme notes for that piece state, “No special skills or training are helpful in turning knobs or shining flashlights, so whatever music can emerge from the equipment is as available to non-musicians as to musicians . . . . Things are going well when all the players have the sensation they are riding a sound they like in harmony together, and when each is appreciative of what the others are doing.” (“Sonic Arts Union”) The piece is based entirely on discovery and has no set plan or written score. (ibid) Its piece-ness relies on the equipment. This is different from live patching because a modular synthesiser is designed to be a more general-purpose tool, and its use does not imply a particular piece. Understanding the interaction between synthesiser modules is also a specialist skill, which implies that expertise is possible. However, the idea of finding a sound and following it is similar.

I have been investigating ways to merge my synthesiser performance
with my laptop performance. The first obvious avenue of exploration
was via live sampling. This works well with a small modular, like the
Evenfall Mini Modular, which is small enough to put into a rucksack
and has many normalised connections. It has enough flexibility to
make interesting and somewhat unexpected music, but is small and
simple enough that I can divide my attention between it and a laptop.
Unfortunately, mine was damaged in a bike accident in Amsterdam in
2008 and has not yet been repaired.

My MOTM synthesiser, however, is too large and too complex for me to divide my attention between it and a laptop screen. I experimented with gamepad control of a live sampler, such that I did not look at the computer screen at all, but relied on being able to hear the state of the programme and a memory of what the different buttons did. I tried this once in concert, at Noise = Noise #19 in April 2010. As is often the case at small concerts, I could not fully hear both monitor speakers, which made it difficult to monitor the programme. Furthermore, as my patch grew in complexity, the computer-added complexity became difficult to perceive, and I stopped being able to tell whether it was still working correctly, or at all. A few minutes into the performance, I stopped the computer programme entirely and switched to all-analogue sounds. While the programme did not perform in the manner I had intended, the set was a success, and the recording also came out quite well; it is included in my portfolio. One blogger compared the track to Jimi Hendrix, (Weidenbaum) which was certainly unexpected.

It is unusual for me to have a live recording come out well. This is because of the live, exploratory aspect of the music. If I discover that I can make the subwoofers shake the room or make the stage rattle, or discover another acoustic phenomenon in the space, I will push the music in that direction. While this is exciting to play, and hopefully to hear, it doesn’t tend to come out well on recordings. I also have a persistent problem with panning. On stage, it’s often difficult to hear both monitors, and judging the relative amplitudes between them requires a level of concentration that I find difficult to sustain while simultaneously patching and altering timbres. To solve this problem, I’ve written a small programme in SuperCollider which monitors the stereo inputs of the computer and pans them to the outputs according to their amplitude. If one input is much louder than the other, it is panned to centre and the other output slowly oscillates between left and right. If the two inputs are close in amplitude, they are panned left and right, with some overlap. I think this is the right answer for how to integrate a computer with my synthesiser. Rather than giving me more things to think about, it silently fixes a problem, thus removing a responsibility. An example of playing with the autopanner, from a small living-room concert in 2011, is included in my portfolio.
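
The panning rule can be sketched like this. The Python fragment below is a hypothetical reconstruction of the logic only (the actual programme is written in SuperCollider); the 2:1 loudness threshold and 20-second oscillation period are my assumptions:

```python
import math

def pan_positions(amp_l, amp_r, t, ratio_threshold=2.0, osc_period=20.0):
    """Return pan positions (-1 = left, 0 = centre, 1 = right) for the
    two inputs at time t, following the rule described above: a much
    louder input sits at centre while the quieter one slowly drifts
    between left and right; inputs close in level are spread left and
    right with some overlap."""
    slow = math.sin(2 * math.pi * t / osc_period)  # slow -1..1 drift
    if amp_l > ratio_threshold * amp_r:
        return 0.0, slow        # left input centred, right input drifts
    if amp_r > ratio_threshold * amp_l:
        return slow, 0.0        # right input centred, left input drifts
    return -0.7, 0.7            # close in level: spread, with overlap
```

In practice the amplitudes would come from envelope followers on each input, and the positions would be smoothed to avoid audible jumps when the rule switches branches.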

For future exploration, I am thinking of returning to the idea of live sampling, but similarly without my interaction. I would tell the computer when the set starts, and it would help me build up a texture through live sampling. Then, as my inputted sound became more complex (or after a set period of time), the computer interventions would fade out, leaving me entirely analogue. This could help me get something interesting going more quickly, although it may violate the “blank canvas” convention of live coding and live patching. In February 2011, there was a very brief discussion on the TopLap email list as to whether live patching was an analogue form of live coding. (Rohrhuber) I do not see that much commonality between them, partly because a synthesiser patch is more like a chainsaw than it is like an idea, (“ManifestoDraft”) and partly because patching is much more tactile than coding. However, some of the same conventions do seem to apply to both.


“2600 Meetings.” 2600: The Hacker Quarterly. Web. 8 September 2011. <http://www.2600.com/meetings/>

Behrman, David. “Runthrough.” 1971. Web. 15 September 2011.

Bernstein, David. “The San Francisco Tape Music Center: Emerging Art Forms and the American Counterculture, 1961–1966.” The San Francisco Tape Music Center: 1960s Counterculture and the Avant-Garde. Ed. David Bernstein. Berkeley, CA, USA: University of California Press, 2008. Print.

Blake’s 7. BBC. BBC One, UK. 2 January 1978 – 21 December 1981. Television.

Cascone, Kim. “The Aesthetics of Failure: ‘Post-Digital’ Tendencies in Contemporary Computer Music.” Resonancias. 2002. Web. 8 September 2011.

Cascone, Kim. “The Microsound Scene: An Interview with Kim Cascone.” Interview with Jeremy Turner. Ctheory.net. 12 April 2001. Web. 11 September 2011. <http://www.ctheory.net/articles.aspx?id=322>

“Datamatics.” ryoji ikeda. Web. 12 September 2011.

Doctor Who. BBC. BBC One, UK. 23 November 1963 – 6 December 1989. Television.

Dr. K. “Chapter 9: Phone Phreaking in the US & UK.” Complete Hacker’s Handbook. Web. 5 September 2011.

Hutchins, Charles Céleste. “Gig report: Edgetone Summit.” Les said, the better. 6 August 2008. Web. 5 September 2011.

Ikeda, Ryoji. “Data.Matrix.” 2005. Web. 12 September 2011.

and Bortz. “Mobilemuse: Integral music control goes mobile.” 2011 NIME. University of Oslo, Norway. 31 May 2011.

Ron. “David Tudor: Live Electronic Music.” Leonardo Music Journal. December 2004. 106-107. Web. 12 September 2011.

Lao, Linus. “The music of Charles Celeste Hutchins.” Hardware Store. 17 June 2010. Web. 8 September 2011.

“ManifestoDraft.” TopLap. 14 November 2010. Web. 12 September 2011.

Mendoza. “an obsessive compulsive disorder.” ninerrors. 2009. Web. 9 September 2011.


Mendoza. “ninerrors: Poetry Series.” Web. 9 September 2011.

Polly. “Pre-concert Q&A Session.” Edgetone New Music Summit.
San Francisco Community Music Center, San Francisco, California, USA.
23 July 2008.

Brian. “Live Electronic Music.”
Web. 12 September 2011.

Oliveros, Pauline. “Bye Bye Butterfly.” 1965. Web. 12 September 2011.

Paine, Garth. “Gesture and Morphology in Laptop Music Performance.” 2008. Web. 8 September 2011.

“Red box (phreaking).” Wikipedia. Web. 5 September 2011.

Rohrhuber, Julian. “[livecode] analogue live coding?” Email to livecode list. 19 February 2011.


Schloss, W. Andrew. “Using Contemporary Technology in Live Performance: The Dilemma of the Performer.” Journal of New Music Research. 2002. Web. 11 September 2011.

Sethares, William. “Relating Tuning and Timbre.” Web. 11 September 2011.

“Sonic Arts Union.” UbuWeb. 1971. Web. 15 September 2011.

Trigaux, Robert. “The bible of the phreaking faithful.” St. Petersburg Times. 15 June 1998. Web. 5 September 2011.

“Transparent Tape Music Festival.” SF Sound. Web. 12 September 2011. <http://sfsound.org/tape/oliveros.html>

“The Truth About Lie Detectors (aka Polygraph Tests).” American Psychological Association. 5 August 2004. Web. 5 September 2011.

United States v. Scheffer, 523 U.S. 303. Supreme Court of the US, 1998. Web. 16 June 2011.

Vanhanen, Janne. “Virtual Sound: Examining Glitch and Production.” Contemporary Music Review (2003): 45-52. Web. 11 September 2011.


Weidenbaum, Mark. “In London, Noise = Noise.” Disquiet. 7 May 2010. Web. 12 September 2011.

Wishart, Trevor. Audible Design. Orpheus the Pantomime Ltd, 1994. Print.

* “ze” and “hir” are gender-neutral pronouns.

Published by Charles Céleste Hutchins
