How to do page numbers and a table of contents in NeoOffice

This may also apply to Open Office.
Go to the Format menu and select Styles and Formatting. A window will come up; one of its tabs is for page styles. Duplicate the default style and call it something else. Then go to the start of your document, at the very top of the page, and under Insert, select Manual Break. A window will open with a drop-down menu of page styles. Change it from [none] to the page style you just created. At the bottom of the window, the second little box from the left lists the page style, so when you’re on your new blank page it should say Default, and when you go to the second page it should say the name of your new style.
Next, go to the first page of your text, open the Insert menu and select Footer; next to that, it will list your two page styles. Tick the one that describes the main body of your text. It will create a footer and put your cursor in it. Then, under the Insert menu, go to Fields and then Page Number. The popup will ask about style; make sure it’s set to 1, 2, 3, etc. It will also have an option for starting with a specific page number; set that to 1.
Then go back to that first, still-blank page and insert two more manual breaks so you have three blank pages at the start of your document. Go to the middle one of those pages, open the Insert menu, select Indexes and Tables, then Indexes and Tables again, and then Table of Contents.
You’re nearly there. Go back to the Styles and Formatting window and go to the paragraph styles tab. Right-click on the one called Heading 1 and select Modify. It will give you a window asking about spacing, fonts, etc. Set it to look like every heading you’ve used for your chapter titles. If they’re 14 point centred Times New Roman, then make it like that. Then go to Heading 2 and modify it so it looks like the subheadings you’ve used for chapter sections, and keep doing this for all the heading levels you’ve used.
Now, lastly, go through your text and highlight all your headings. So at the start of chapter 1, highlight your title “why i’m awesome” (or whatever you called it) and then, in the Styles and Formatting window, double-click on Heading 1. This will switch that title to the Heading 1 formatting, which, if you did it right, will look the same as it did before. Then select your subheading and double-click on Heading 2 in the Styles and Formatting window. Go through your entire document changing your headings to use Heading 1, Heading 2, etc. When you’ve got everything, go back up to your table of contents page and right-click on the table of contents. A menu will pop up and one of the options will be to update the table of contents. Do that and it should list all of your headings and subheadings. If you’re missing some of them, then you forgot to change them in the document.
And of course, the best way to do this is to use the heading styles from when you first start writing your document, so keep that in mind for next time.

Dissertation Draft: Solo Live Electronic Pieces


When the local 2600 group started organising BrumCon 2007, which took
place on 3 May 2008, I asked if I could present a musical set. They
had never done such a thing before, but they agreed. 2600 is a
hacker group (“2600 Meetings”) with roots in phone phreaking
(Trigaux), so I decided to reference that in a piece written for the
event.

As noted in The Complete Hacker’s Handbook, “phone phreaking”
refers to the practice of hacking phone systems in order to get free
calls or just explore the workings of the system. Phone systems used
to be controlled by in-band signalling, that is, audible tones that a
user could reproduce in order to gain unauthorised control. For
example, 2600 Hz was a useful tone to “seize” control of a line.
Other such sounds commonly found in telephony are Dual Tone Multi
Frequency [DTMF] sounds, which are the ones produced by a landline
keypad. (Dr. K.)

I looked up red box phreaking on Wikipedia and also DTMF signals and
used those tones as the heart of the piece. It starts with a dial
tone, then does coin dropping sounds, followed by the sound of
dialling and then a ring back sound, followed by a 2600 Hz tone.
After that introduction, it plays dialling signals and then a beat.
The beat is made up of patterns of sampled drums. The programme
picks random beats to be accented, which will always have a drum
sound on them and then scatters drum sounds on some of the other
beats also. The loop is repeated between 8 and 10 times and then a new
pattern is created, retaining the same accents for the duration of
the piece. If the randomly generated drum pattern seems too sparse
or too full of beats, the performer can intervene by pressing one
joystick button to add some drum beats or another to remove them. The
idea for randomly accenting beats comes from a lecture by Paul Berg at
Sonology in The Hague, where he noted that randomly accented beats
sound as if they form a deliberate rhythm when heard by audiences.
This is related to Trevor Wishart’s discussion of Clarence Barlow’s
“indispensability factor,” where Wishart notes that changing
accents of a steady beat can alter the listener’s perception between
time signatures. (Wishart p 64) It seems that greater randomness in
picking accents leads listeners to perceive more complex rhythms.
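The accent-and-fill scheme described above can be sketched as follows (the piece itself runs in SuperCollider; this Python sketch and its function and parameter names are illustrative assumptions, not the actual code):

```python
import random

def make_pattern(n_beats=16, accents=None, fill_prob=0.3, seed=None):
    """Build one drum loop: accented beats always get a drum hit;
    each other beat gets a hit with probability fill_prob."""
    rng = random.Random(seed)
    if accents is None:
        # Pick random beats to accent; in the piece these persist throughout.
        accents = sorted(rng.sample(range(n_beats), 4))
    hits = [b in accents or rng.random() < fill_prob for b in range(n_beats)]
    return accents, hits

# A new pattern is generated every 8 to 10 loops, keeping the same accents.
accents, hits = make_pattern()
```

The performer's joystick intervention would simply raise or lower `fill_prob` between loops.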

Alongside the beats, a busy signal comes in occasionally. There are also bass
frequencies which are DTMF sine tones transposed by octaves.
Finally, there are samples of operator messages that are used in the
American phone system. These are glitched and stuttered, the degree
of which is controlled with a joystick. Thus, this piece is partly a
self-running live realisation and partly controlled by a performer.
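Since the bass layer is built from octave-transposed DTMF tones, it can be illustrated with the standard DTMF keypad table (the frequency pairs are standard telephony values; the helper function and the choice of three octaves are my illustrative assumptions):

```python
# Standard DTMF keypad frequency pairs (row Hz, column Hz).
DTMF = {
    '1': (697, 1209), '2': (697, 1336), '3': (697, 1477),
    '4': (770, 1209), '5': (770, 1336), '6': (770, 1477),
    '7': (852, 1209), '8': (852, 1336), '9': (852, 1477),
    '*': (941, 1209), '0': (941, 1336), '#': (941, 1477),
}

def bass_frequencies(key, octaves_down=3):
    """Transpose a key's DTMF sine pair down by whole octaves
    (halving the frequency per octave)."""
    return tuple(f / 2 ** octaves_down for f in DTMF[key])
```

For example, the pair for '0' dropped three octaves lands at (117.625, 167.0) Hz, well into bass register.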

At the time, I was interested in making computer pieces that necessarily
had to be computer pieces and could not be realised with live
instruments or with an analogue synthesiser. Extremely exact tunings
and sample processing are both examples of things that are
computer-dependent. I was also interested in having more live control
and more visible gesture, in order to, as Paine describes in his
paper on gesture in laptop performance, “inject
a sense of the now, an engagement with audience in an effort to
reclaim the authenticity associated with ‘live’ performance.”
(Paine p 4) I thought having physical motions would engage the
audience more than a live realisation. Conversely and relatedly, I
was also interested in the aesthetics of computer failure, within the
glitches I was creating. Cascone writes, “'[Failure]’ has become a
prominent aesthetic in many of the arts in the late 20th century,
reminding us that our control of technology is an illusion, and
revealing digital tools to be only as perfect, precise, and efficient
as the humans who build them.” (Cascone) I thought this
intentional highlighting of imperfection would especially resonate
with an audience that largely worked in a highly technical and
professional capacity with computers.

I also find glitches to be aesthetically appealing and have been
influenced by the extremely glitchy work of Ryoji Ikeda, especially
works like Data.Matrix,
which is a sonification of data. (“Datamatics”) Similarly,
in-band signalling is literally a sonic encoding of data, designed
for computer usage.

When I performed the piece at BrumCon, the sound system did not have an
even frequency response. Some sine waves sounded far louder than
others and I did not have a way to adjust. I suspect this problem is
much more pronounced for sine tones than it is for richer
frequencies. Another problem I encountered was that I was using
sounds with strong semantic meanings for the audience. Many of them
had been phreakers and the sounds already had a specific meaning and
context that I was not accurately reproducing. Listeners without
this background have generally been more positive about the piece.
One blogger wrote that the piece sounded like a “demonic
homage to Gaga’s Telephone,”
(Lao) although he did note that my piece was written earlier.


The music of the BBC Radiophonic Workshop has been a major influence on
my music for a long time. The incidental music and sound effects of
Doctor Who during the Tom Baker years was especially
formative. I found time in 2008 to watch every episode of Blake’s 7
and found the sound effects to be equally compelling. I spent
some time with my analogue synthesiser and tried to create sounds like
the ones used in the series. I liked the sounds I got, but they were
a bit too complex to layer into a collage for making a piece that
way, yet not complex enough to stand on their own. I wrote a
SuperCollider programme to process them through granular synthesis
and other means and to create a piece using the effects as source
material, mixed with other computer generated sounds.

The timings at the micro, “beat,” and loop levels are all in groups of
nine or multiples of nine, which is why I changed the number in the
piece title. I was influenced to use this number by a London poet,
Mendoza, who had a project called ninerrors
which ze*
describes as, “a
sequence of poems constricted by configurations of 9: connected &
dis-connected by self-imposed constraint. each has 9 lines
or multiples of 9, some have 9 words or syllables per line, others
are divisible by 9.  ninerrors is presented
as a series of 9 pamphlets containing 9 pages of poetry.”
(“ninerrors”) I adopted a use of nines not only in the timings,
but also in shifting the playback rate of buffers, which are played
at rates of 27/25,
9/7, 7/9 or 25/27. The
tone clusters’ frequencies are also related to each other by tuning
ratios that are similarly based on nine. I was in contact with
Mendoza while writing this piece, and one of the poems in hir
cycle, an obsessive compulsive disorder,
mentions part of the creation of this piece in its first line,
“washing machine spin cycle drowns out synth drones.” (“an
obsessive compulsive disorder”)
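The nine-based playback rates quoted above can be checked numerically; in this sketch the `retune` helper is illustrative, not the piece's actual code:

```python
from fractions import Fraction

# Buffer playback-rate ratios from the piece; each has 9 or a multiple
# of 9 in its numerator or denominator.
RATES = [Fraction(27, 25), Fraction(9, 7), Fraction(7, 9), Fraction(25, 27)]

def retune(freq_hz, rate):
    """Changing a buffer's playback rate scales its pitch by the same ratio."""
    return float(freq_hz * rate)
```

So a 441 Hz buffer played back at 9/7 sounds at 567 Hz, and the reciprocal pairs (27/25 with 25/27, 9/7 with 7/9) undo each other exactly.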

While ratios based on nines gave me the internal tunings of the tone
cluster, I used Dissonance Curves, as described by William Sethares,
to generate the tuning and scale for the base frequencies of the
clusters. The clusters should therefore sound as consonant as
possible and provide a contrast to the rest of the piece, which is
rather glitchy. The glitches come partly from the analogue material,
but also from sudden cuts in the playback of buffers. For some parts
of the piece, the programme records its own output and then uses
that as source material, something that may stutter, especially if
the buffer is recording its own output. I used this effect because,
as mentioned above, I want to use a computer to do things which only
it can do. When writing about glitches, Vanhanen writes that their
sounds “are
sounds of the . . . technology itself.” (p 47) He notes that “if
phonography is essentially acousmatic, then the ultimate phonographic
music would consist of sounds that have no acoustic origin,” (p
49) thus asserting that skips and “deliberate mistakes” (ibid)
are the essential sound of “phonographic styles of music.” (ibid)
Similarly, “glitch is the digital equivalent of the phonographic
metasound.” (p 50) It is necessarily digital and thus is
inherently tied to my use of a computer. While my use of glitch is
oppositional to the dominant style of BEAST, according to Vanhanen,
it is also the logical extension of acousmatic music.
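The dissonance-curve method is the one Sethares publishes in “Relating Tuning and Timbre”; the sketch below uses his curve-fit constants, but the harmonic test spectrum is an illustrative stand-in for the piece's actual cluster spectra:

```python
import math

def pair_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Sensory dissonance of one pair of partials, after Sethares
    (the constants 0.24, 0.021, 19, 3.5 and 5.75 are his published fit)."""
    s = 0.24 * abs(f2 - f1) / (0.021 * min(f1, f2) + 19)
    return a1 * a2 * (math.exp(-3.5 * s) - math.exp(-5.75 * s))

def dissonance(spec_a, spec_b):
    """Total dissonance between two spectra, given as (freq, amp) lists."""
    return sum(pair_dissonance(fa, fb, aa, ab)
               for fa, aa in spec_a for fb, ab in spec_b)

# Sweep one harmonic tone against another; minima in the curve suggest
# consonant scale steps for that timbre.
spectrum = [(440.0 * h, 1.0 / h) for h in range(1, 7)]
curve = [(1 + i / 100,
          dissonance(spectrum, [(f * (1 + i / 100), a) for f, a in spectrum]))
         for i in range(101)]
```

Scale degrees are then read off at the local minima of `curve`, which is how a maximally consonant tuning for a given timbre can be derived.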

The piece was written with the BEAST system in mind. The code was
written to allow N-channel realisations. Some gestures are designed
with rings of 8 in mind, but others, notably at the very start, are
designed to be front speakers only. Some of the “recycled”
buffers, playing back the piece’s own recordings, were originally
intended to be sent to distant speakers, not pointed at the audience,
thus giving some distance between the audience and those glitches when
they are first introduced. I chose to do it this way partly in order
to automate the use of spatial gesture. In his paper on gesture,
Paine notes that moving a sound in space is a form of gesture,
specifically mentioning the BEAST system. (p 11) I think that because
this gesture is already physical, it does not need to necessarily
rely on the physical gesture of a performer moving faders. Removing
my own physical input from the spatialisation process allowed me more
control over the physical placement of the sound, without diminishing
the audience’s experience of the piece as authentic. It also gives
me greater separation between sounds, since the stems are generated
separately and lets me use more speakers at once, thus increasing the
immersive aspect (p 13) of the performance.
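An automated N-channel assignment of the kind described might look like this in outline (a hypothetical Python sketch; the actual routing lives in the SuperCollider code written for BEAST):

```python
import math

def ring_positions(n_speakers=8):
    """Equally spaced azimuths (radians) for a ring of n speakers."""
    return [2 * math.pi * i / n_speakers for i in range(n_speakers)]

def assign_stems(stems, n_speakers=8):
    """Spread separately generated stems around the ring, round-robin,
    so each keeps its own position (a hypothetical routing scheme)."""
    ring = ring_positions(n_speakers)
    return {stem: ring[i % n_speakers] for i, stem in enumerate(stems)}
```

Because the stems are generated separately, each can be given its own fixed or moving position without a performer riding faders.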

While this piece is entirely non-interactive, it is a live realisation
which makes extensive use of randomisations and can vary
significantly between performances. In case I get additional
chances to perform it on a large speaker system, I would like the
audience to have a fresh experience every time it is played.


When I was a student at Wesleyan, I had my MOTM analogue modular
synthesiser mounted into a 6 foot tall free-standing rack that was
intended for use in server rooms. It was not very giggable, but it
was visually quite striking. When my colleagues saw it, they
launched a campaign that I should do a live-patching concert. I was
initially resistant to their encouragement, as it seemed like a
terrible idea, but eventually I gave in and spent several days
practicing getting sounds quickly and then refining them. In
performance, as with other types of improvisation, I would find
exciting and interesting sounds that I had not previously stumbled on
in the studio. Some of my best patches have been live.

I have long been deeply interested in the music of other composers who do live
analogue electronics, especially in the American experimental
tradition of the 1960s and 70s. Bye
Bye Butterfly

by Pauline Oliveros is one such piece, although she realised it in a
studio. (Bernstein p 30) This piece and others that I find
interesting are based on discovering the parameters and limits of a
sound phenomenon. Bernstein writes that “She discovered that a
beautiful low difference tone would sound” when her oscillators
were tuned in a particular way. (ibid) Live patching also seems to
be music built on discovery, but perhaps a step more radical for its
being performed live.

Even more radical than live patching is Runthrough
by David Behrman, which is realised live with DIY electronics. The
programme notes for that piece state, “No
special skills or training are helpful in turning knobs or shining
flashlights, so whatever music can emerge from the equipment is as
available to non-musicians as to musicians . . .. Things
are going well when all the players have the sensation they are
riding a sound they like in harmony together, and when each is
appreciative of what the others are doing.”
(“Sonic Arts Union”) The piece is based entirely on discovery and
has no set plan or written score. (ibid) The piece-ness relies on
the equipment. This is different from live-patching because a
modular synthesiser is designed to be a more general purpose tool and
its use does not imply a particular piece. Understanding the
interaction between synthesiser modules is also a specialist skill
and does imply that expertise is possible. However, the idea of
finding a sound and following it is similar.

I have been investigating ways to merge my synthesiser performance
with my laptop performance. The first obvious avenue of exploration
was via live sampling. This works well with a small modular, like the
Evenfall Mini Modular, which is small enough to put into a rucksack
and has many normalised connections. It has enough flexibility to
make interesting and somewhat unexpected music, but is small and
simple enough that I can divide my attention between it and a laptop.
Unfortunately, mine was damaged in a bike accident in Amsterdam in
2008 and has not yet been repaired.

My MOTM synthesiser, however, is too large and too complex to divide my
attention between it and a laptop screen. I experimented with using
gamepad control of a live sampler, such that I did not look at the
computer screen at all, but relied on being able to hear the state of
the programme and a memory of what the different buttons did. I
tried this once in concert at Noise = Noise #19 in April 2010. As is
often the case in small concerts, I could not fully hear both monitor
speakers, which made it difficult to monitor the programme.
Furthermore, as my patch grew in complexity, the computer-added
complexity became difficult to perceive and I stopped being able to
tell if it was still working correctly or at all. A few minutes into
the performance, I stopped the computer programme entirely and
switched to all analogue sounds. While the programme did not perform
in the manner I had intended, the set was a success and the recording
also came out quite well and is included in my portfolio. One
blogger compared the track to Jimi Hendrix, (Weidenbaum) which was
certainly unexpected.

It is unusual for me to have a live recording come out well. This is
because of the live, exploratory aspect to the music. If I discover
that I can make the subwoofers shake the room or make the stage
rattle, or discover another acoustic phenomenon in the space, I will
push the music in that direction. While this is exciting to play, and
hopefully to hear, it doesn’t tend to come out well on recordings. I
also have a persistent problem with panning. On stage, it’s often
difficult to hear both monitors and judging the relative amplitudes
between them requires a certain concentration that I find difficult
to do while simultaneously patching and altering timbres. In order
to solve this problem, I’ve written a small programme in
SuperCollider which monitors the stereo inputs of the computer and
pans them to the output according to their amplitude. If one is much
louder than the other, it is panned to centre and the other output is
slowly oscillated between left and right. If the two inputs are
close in amplitude, the inputs are panned left and right, with some
overlap. I think this is the right answer for how to integrate a
computer with my synthesiser. Rather than giving me more things to
think about, this silently fixes a problem, thus removing a
responsibility. An example of playing with the autopanner, from a
small living room concert in 2011, is included in my portfolio.
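The autopanner's decision logic, as described, can be sketched like this (the real programme is in SuperCollider and runs on audio amplitude followers; the threshold, pan width and function names here are illustrative assumptions):

```python
import math

def autopan(amp_a, amp_b, lfo_phase, ratio_threshold=4.0):
    """Pan positions (-1 hard left, +1 hard right) for two inputs.
    A much louder input is centred while the quieter one slowly
    oscillates; inputs close in amplitude are panned apart with
    some overlap."""
    wander = math.sin(lfo_phase)  # slow oscillation between L and R
    eps = 1e-9
    if amp_a > ratio_threshold * (amp_b + eps):
        return 0.0, wander
    if amp_b > ratio_threshold * (amp_a + eps):
        return wander, 0.0
    return -0.6, 0.6  # similar levels: left and right, overlapping
```

Run per control block on smoothed amplitudes, this makes the panning decision silently, which is the point: it removes a responsibility rather than adding one.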

For future exploration, I am thinking of returning to the idea of live
sampling, but similarly without my interaction. I would tell the
computer when the set starts and it would help me build up a texture
through live sampling. Then, as my inputted sound became more complex
(or after a set period of time), the computer interventions would
fade out, leaving me entirely analogue. This could help me get
something interesting going more quickly, although it may violate the
“blank canvas” convention of live coding and live patching. In
February 2011, there was a very brief discussion on the TopLap email
list as to whether live patching was an analogue form of live coding
(Rohrhuber). I do not see that much commonality between them, partly
because a synthesiser patch is more like a chainsaw than it is like an
idea, (“ManifestoDraft”) and partly because patching is much more
tactile than coding is. However, some of the same conventions do seem
to apply to both.
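The planned fade-out of the computer's interventions could be as simple as a gain envelope over the set clock (a sketch under assumed timings; nothing here is from an existing programme):

```python
def intervention_gain(elapsed_s, fade_start=120.0, fade_len=60.0):
    """Gain for the computer's live-sampling layer: full at the start
    of the set, then fading linearly to silence over fade_len seconds
    once fade_start is reached. (Times are illustrative placeholders.)"""
    if elapsed_s <= fade_start:
        return 1.0
    return max(0.0, 1.0 - (elapsed_s - fade_start) / fade_len)
```

A complexity measure of the analogue input could substitute for the clock, triggering the fade once the patch can stand on its own.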


“2600 Meetings.” 2600: The Hacker Quarterly. Web. 8 September 2011.

Behrman, David. “Runthrough.” 1971. Web. 15 September 2011.

Bernstein, David. “The San Francisco Tape Music Center: Emerging Art Forms and the American Counterculture, 1961 – 1966.” The San Francisco Tape Music Center: 1960s Counterculture and the Avant-Garde. Ed. David Bernstein. Berkeley, CA, USA: University of California Press, 2008. Print.

Blake’s 7. BBC. BBC One, UK. 2 January
1978 – 21 December 1981. Television.

Cascone, Kim. “The Aesthetics of Failure: ‘Post-Digital’ Tendencies in
Contemporary Computer Music.” Resonancias. 2002. Web. 8
September 2011.

Cascone, Kim. “The Microsound Scene: An Interview with Kim Cascone.”
Interview with Jeremy Turner. 12 April 2001. Web.
11 September 2011.

“Datamatics.” ryoji ikeda. Web. 12 September 2011.

Doctor Who. BBC. BBC One, UK. 23
November 1963 – 6 December 1989. Television.

Dr. K. “Chapter 9: Phone Phreaking in the US & UK.” Complete
Hacker’s Handbook. Web. 5 September 2011.

Hutchins, Charles Céleste. “Gig report: Edgetone Summit.” Les
said, the better. 6 August 2008. Web. 5 September 2011.

Ikeda, Ryoji. “Data.Matrix.” 2005. Web. 12 September 2011.

and Bortz. “Mobilemuse: Integral music control goes mobile.”
2011 NIME. University of Oslo, Norway. 31 May 2011.

Ron. “David Tudor: Live Electronic Music.” Leonardo
Music Journal. December 2004. 106-107. Web. 12 September 2011.

Lao, Linus. “The music of Charles Celeste Hutchins.”
Hardware Store. 17 June 2010. Web. 8 September 2011.


“ManifestoDraft.” TopLap. 14 November 2010. Web. 12 September 2011.

Mendoza. “an obsessive compulsive disorder.” ninerrors.
2009. Web. 9 September 2011.


“ninerrors: Poetry Series.” Web. 9 September 2011.

Polly. “Pre-concert Q&A Session.” Edgetone New Music Summit.
San Francisco Community Music Center, San Francisco, California, USA.
23 July 2008.

Brian. “Live Electronic Music.”
Web. 12 September 2011.

Oliveros, Pauline. “Bye Bye Butterfly.” 1965. Web. 12 September 2011.

Paine, Garth. “Gesture and Morphology in Laptop Music Performance.”
2008. Web. 8 September 2011.

“Red box (phreaking).” Wikipedia.
Web. 5 September 2011.

Rohrhuber, Julian. “[livecode]
analogue live coding?”
Email to livecode list. 19 February 2011.

Schloss, W. Andrew. “Using Contemporary Technology in Live Performance: The
Dilemma of the Performer.” Journal of New Music Research.
2002. Web. 11 September 2011.

Sethares, William. “Relating Tuning and Timbre.” Web. 11 September 2011.

“Sonic Arts Union.” UbuWeb. 1971. Web. 15 September 2011.

Trigaux, Robert. “The bible of the phreaking faithful.” St
Petersburg Times. 15 June 1998. Web. 5 September 2011.

“Transparent Tape Music Festival.” SF Sound. Web. 12 September 2011.

“The Truth About Lie Detectors (aka Polygraph Tests).” American
Psychological Association. 5 August 2004. Web. 5 September 2011.

United States v. Scheffer, 523 U.S. 303. Supreme Court of the US, 1998.
Web. 16 June 2011.

Vanhanen, Janne. “Virtual Sound: Examining Glitch and Production.”
Contemporary Music Review. (2003): 45-52. Web. 11 September 2011.


Weidenbaum, Marc. “In London, Noise = Noise.” Disquiet.
7 May 2010. Web. 12 September 2011.

Wishart, Trevor. Audible Design.
Orpheus the Pantomime Ltd, 1994. Print.

* “ze” and “hir” are gender neutral pronouns.

Dissertation Draft: Copyright, Social Networking and Commissioning Music

When I started at Birmingham, I was in
the midst of a project wherein I was soliciting commissions of
pieces of music around one minute in length. My plan was originally
to compose 45 of these in total and release an album at the end. I
intend to pick up where I left off and finish the project after

This project was an attempt to get a
bit of attention and to address some economic and political issues of
music distribution. The amount of music available to listeners only
continues to rise. Every day, somebody records something and makes
it available in a digital format. The cost of providing copies to
consumers is practically nil. Any composer can upload an mp3 to a service such as Bandcamp at no cost. (“Bandcamp Pricing”)

Because copying and distribution is so
cheap and easy, consumers often share files without paying for them.
The Recording Industry Association of America [RIAA] struck back
against this by suing fans (“For Students Doing Reports”), an
idea which seemed poorly thought out and which has made them
extremely unpopular. (Reisinger) At the same time, the duration of
copyright keeps being extended, seemingly so that Mickey Mouse will
never fall into the public domain. (Springman) What used to be a
system to make sure that creators got their fair share is
increasingly perceived as a way for big companies to control culture.
(Lessig 61) Rather than adapt to new conditions, media companies are
lobbying for new laws to force the genie back in the bottle. (Lessig
48) Some of these, like the Digital Economy Act are fairly draconian
in that households may have their internet switched off if only one
member of the household breaks copyright law. (Digital Economy Act s.
10) Despite the severity of this crackdown, many are sceptical it
will make any difference. (“Q&A: The Digital Economy bill”)
Rather than adapt to changing conditions, powerful,
politically-connected media companies are abusing state power. Their
position is morally bankrupt and their claims of victimhood are
laughably overstated. The RIAA tells students that sharing an mp3 is
worse than maritime hostage-taking: “It’s commonly known as
‘piracy,’ but that’s too benign of a term to adequately describe
the toll that music theft takes . . ..” (“For Students Doing Reports”)

It seems clear that copyright is in
crisis and in need of a major reform. However, the crisis of
copyright does not diminish the notion of authorship. Those engaged
in online music sharing are still deeply invested in who
created the work they’re copying. Artists who choose to
participate in Open Culture models, such as Creative Commons are not
ceding their work to the public domain, but instead protecting the
rights of their fans. (“Frequently Asked Questions”) An artist
choosing this model may be potentially sacrificing some forms of
revenue, but for most emerging artists, getting heard is far more
important than protecting rights which may
one day generate income. Even for successful artists like Bob
Ostertag, this income has failed to materialise. He writes,
“[S]elling recordings in whatever format has been a break-even
proposition at best.” (Ostertag)

However, artists
still need to eat and pay rent. Sustainability is a major concern,
but, economic support for non-mainstream composers will not be coming
from record labels. Indeed, they have historically worked against
the interests of composers. In 2001, composer Judy Dunaway wrote:

Of course, the recording industry does not care at all about
contemporary and experimental music. The sales figures on such CDs
are miniscule compared to popular music. In the words of Foster Reed
at the New Albion label, "The corporate recording industry lives
in a completely different world, of commodity and markets, than the
independents do, who make and publish work that is near and dear to
them." But accessibility to innovative music on the internet may
be blocked by the record industry’s rush to protect and maintain
total control of its own high-profit intellectual property.

Composers are thus
left to their own devices when it comes to both generating revenue
and attracting listeners. Without a budget for publicity, one of the
best ways to gather attention is by word of mouth “buzz.” Social
networking is one venue where this can happen, which has the
advantage of the possibility of fast transmission and direct links to
online content. I suspect people may be motivated to share music they
like or find interesting because it gains them cultural capital. They
would thus take on a curatorial role and hope to gain the respect of
their friends and social contacts. A musician interested in using
this as a path to wider recognition would need to create music that
works in an online context. For example, he or she might want to
include video content, so it can be uploaded to YouTube or create
music that the sharer will identify with in some way. They may also
produce music with the goal of having it sound good in stereo mp3
format out of home speakers. The music should be engineered for the
playback environment in which it is expected to be heard. The music
created must also be accessible in some way, although, obviously, an
artist still wishes to remain interesting.

In my case, I chose
duration as my most accessible component. All of my pieces in this
project are around one minute long. I strongly suspect that if you
ask most people to listen to ten minutes of noise music, they would
refuse unless they were already fans of the genre or the composer.
However, in my experience, people are much more willing to sacrifice
a minute of their time. Many more people are going to be willing to
listen to very short pieces. However, just because people will
listen to something does not mean they will share it. I thought
sharing would be more likely to occur if listeners felt connected to
the music in some way. One way to get that feeling of connection was
to get a listener to commission me.

The commissioner
gets their name attached to a short piece of music, which becomes
integrally linked to them. The piece of music would not exist if it
were not for their financial involvement. This, in return, gives them
cultural capital. They are the proprietor of a new piece of music.
This also solves the dilemma of sustainability. The commissioning
amount should cover costs, at least. The commissioner would be
motivated to share their new piece of music as far and wide as they
can, as every re-sharing increases their own cultural capital.
Instead of fighting the online sharing that people seem inclined to
do, this model requires it and does not require coercive action on
the part of the state.

I started by using
eBay as my sales platform. This allowed me to control how many
commissions I might sell at a time, handle the monetary transaction
and the platform itself made the commissioners feel engaged. The
project also interested some music press. (“Music Commissioning on eBay”)
Much to my surprise, a bidding war erupted on one of my early
offerings, despite the promise of many more to come. However, before
that bidding war could conclude, eBay terminated my account, banning
me from the service. They refused to tell me why they had done this,
so I don’t know if it was because they suspected fraud or because
they objected to my business model. I moved to Etsy, a much less
exciting web store where users sell craft items and resumed.

Over the course of my project, several people did share their short pieces
via their blogs, Facebook or another online medium. One person used
her piece as her ringtone. In 2008, I approached a popular blogger,
Josh Fruhlinger of The Comics Curmudgeon
and asked if he would trade me advertising space for a free
commission. His blog had been ranked #13 by PC Magazine’s “100
Favorite Blogs for 2007” (Heater) and won a Webby Award in 2008 for
Best Humor Blog. (“Best Humor Blog”) His blog was also popular
with composers and was mentioned on Kyle Gann’s blog (Gann) and
others. Fruhlinger agreed to this plan and I composed a short piece
related to the American comic strip Gil Thorp. In order to cope
with the expected server traffic, I created a very simple video of
the face of the titular character slowly zooming in with the piece as
soundtrack and uploaded it to YouTube.
My small advertisement
ran for a week and then Fruhlinger made a post specifically about the
piece. He was very positive, using words like “stunning” and
“masterpiece.” (Fruhlinger) The video got 4000 views in a very
short period of time. However, despite how happy Fruhlinger was and
some positive comments from his readers such as, “I stand amazed,”
(commodorejohn) this got me no new commissions.

Does this mean that
the model fails? I had predicted that I would get some new
commissions out of such a high profile endorsement, but didn’t.
There are a few possible explanations. Consumers may be unused to the
idea of commissioning a composer. In my brief stint in marketing, I
was told that consumers do not absorb an idea until they encounter it
multiple times. This was just one post. Or, conversely, it could have
been their lack of familiarity with me. A better known composer may
have fared better. It may also have been the economy, which was not
doing well at the time and has since gotten worse. Commissioning
music is a luxury and one that might seem eccentric and easy to forgo.

Marketing this
project is actually quite difficult. I found I could do three
commissions in a week. There is no way I could cope with the volume
that mass-market success would imply. Therefore, going after
high-profile general subject bloggers is not the way to draw in new
customers, as success could be as much a disaster as failure.
However, it is a way to draw in new listeners. Most of the visitors
to the blog would not have heard my piece otherwise. My attempts at
an accessible duration did pay off, even if social media buzz didn’t
gain me new customers. Making a piece for one person motivates that
one person to share it, but it does not motivate his or her friends
to share it also.

Musically, I was
interested in very short pieces because of the 60×60 Project, in
which I had participated. I found it very frustrating to make a piece
so short. While my piece Clocker had been accepted, I did not
feel happy with it. I started listening to very short pieces, for
example, tracks from the albums Haikus Urbanis and Snakes
and Ladders
, to get into the right mindset.

When constructing
my very short pieces, I’ve found that it’s best to have three closely
related ideas, and three overdubbed mono tracks. A minute is too long
to only have one idea, but too short to go through a lot of material.
There is also not a lot of time for major density changes, unless
that is the focus of the piece. As I worked on this project, I found
that a minute began to seem longer and longer. A composer could
easily fit over a hundred discrete events in a minute.

In my portfolio, I
have included several of these pieces, listed here with their
programme notes:

Shorts #29: Raining Up

Commissioned and titled by Autumn Looijen

This piece was created using a MOTM
synthesiser and mixed in Ardour.
There were several false starts. I had been doing field recordings of
storms and for a while, every artificial sound I made seemed to also
sound like weather. The title Autumn chose seems to indicate that I
didn’t quite get away from weather-related sounds.

Shorts #28: Untitled
Commissioned by Cecile Moochnek

I wasn’t looking for a
commission when I walked into the Cecile
Moochnek Gallery
on 4th Street in Berkeley, California. I was
looking to do Christmas shopping. But I got talking to the gallery
owner about art and music and she asked me to write her a short
piece. This was in December of 2007. I wrote the piece in 2008.

I made this piece with an Evenfall MiniModular Synthesiser. This
was an all-in-one box modular synthesiser from the 1990s. It’s a
great little synth.

Shorts #27:
Gil Thorp
Commissioned and titled by Josh Fruhlinger

Josh gave me the title before I
started the piece. Gil Thorp is the name of a surreal American
newspaper comic which is supposed to be about high school sports.
Josh runs a blog discussing newspaper comics, called The Comics
Curmudgeon.

I recorded (British) football from my TV, which included my
housemate clapping after a goal. Then, I decided to use white noise,
because it’s very similar to crowd sounds. I filtered it a lot to
make sort of screechy sounds. The football announcers didn’t
exactly have the accent that I would expect Marty Moon to have, so I
kept them in the background. My girlfriend said that it struck her as
very Mark Trail-like, so I raised the volume of the background at the
end, to make the sports connection clearer.

Bird-like sounds remind me of high school sports, but that’s
probably because my high school had a terrible seagull infestation.

Shorts #26: Ecstatic Rivulet
Commissioned and titled by Clyde Nielsen

For this piece, I wanted to use a
field recording that I made while camping over the summer. Visually,
the campground looked like it would make a suitable set for a horror
movie. The animals were correspondingly loud and screechy at night
and so I made a recording with my cell phone.
I listened to the recording a few times and it made me think of
a project that I had intended to abandon. Everything I do with this
always sounds kind of rough and unpolished, which is why I stopped
working with it. But it seems to fit well with my memory of that campground.

Shorts #25: Untitled
Commissioned by Scott Wilson

When approached for a title for this piece, Scott
noted that the piece has a “flatulent quality,” but it would be
better to resist referencing that in a title.
To make this piece, I recorded myself playing a bovine signaling
horn and a didjeridu, both of which I ran through a Sherman
filterbank to use as FX. There’s also a little bit of feedback,
especially the very last sounds. Processing a didjeridu turns out to
be much more straightforward and easy than processing a cow horn.

Shorts #24: College Promo

Commissioned and titled by Jean Sirius

I wanted to make something that started out serious, but got more playful
further in. The opening is square waves, which are pulse-width
modulated and slightly frequency modulated. While I was recording
them, my dog was sleeping nearby. She started barking in her sleep.
She almost never barks when she’s awake, but when she’s asleep,
she barks quiet, airy, high-pitched barks which cause her snout to
slightly inflate, since she doesn’t open her mouth. Maybe she’s
actually dreaming of chasing pigeons? The sleep-barking sounded
really great with the music! I couldn’t record my dog without
accidentally waking her, so instead I tried to mimic the sound with a
Sherman filterbank. I failed miserably, but I like the sounds that I
got. Every time I use this instrument, I have a little more fun with
it and like it a little bit more. It’s frustrating at first, but
the effort is paying off.

Shorts #23: Gamut
Commissioned and titled by Devin Hurd

This piece was made with a MOTM analogue synthesiser.


“Bandcamp Pricing.” Bandcamp.
Web. 12 August 2011.

“Best Humor Blog.” The 2008
Weblog Awards.
31 December 2008.
Web. 11 August 2011.


commodorejohn. Comment on “Metapost: I sing the body beefy.” The
Comics Curmudgeon. 15 April 2008. Web. 11 August 2011.

Digital Economy Act 2010 s. 10. UK.
Web. 11 August 2011.

Dunaway, Judy. “The MP3 Phenomena and
Innovative Music.” ZKM. 9
April 2001. Web. 11 August 2011.


“For Students Doing Reports.” RIAA –
Recording Industry Association of America.
Web. 11 August 2011.

“Frequently Asked Questions.” Creative Commons. 28
July 2011. Web. 11 August 2011.

Fruhlinger, Josh.
“Metapost: I sing the body beefy.” The Comics Curmudgeon.
15 April 2008. Web. 11 August 2011.

Gann, Kyle. “Seriously Off-Topic.” PostClassical: Kyle Gann
on music after the fact
. Arts
Journal. 29 April 2007. Web. 11 August 2011.

Haikus Urbanis.
Cavel2Disks, 1997. CD.

Heater, Brian. “Our 100 Favorite Blogs.” PC Magazine.
15 October 2007. Web. 11 August 2011.

Lessig, Lawrence. “Free Culture – How Big Media Uses Technology and the
Law to Lock Down Culture and Control Creativity.” SiSU
information Structuring Universe
University of Oslo: The Faculty of Law. 2004. Web. 11 August 2011.

Commissioning on eBay.” PodComplex.

14 April 2007. Web. 11 August 2011.

Ostertag, Bob. “The
Professional Suicide of a Recording Musician.” 9 April 2007. Web. 11 August 2011.

The Digital Economy bill.” BBC News.

9 April 2010. Web. 11 August 2011.

Don. “The RIAA speaks – and it gets worse.” Cnet.
14 January 2008. Web. 11 August 2011.

Rubin, Neal and Whigham, Rod. Gil Thorp. Web. 11 August 2011.

Slaw. Snakes and Ladders.
Doubtful Palace, 2002. CD.

Sprigman, Chris. “The Mouse that Ate
the Public Domain: Disney, The Copyright Term Extension Act, and
Eldred v. Ashcroft.”
FindLaw. 5 March 2002.
Web. 11 August 2011.

Dissertation Draft: BiLE Bibliography


“Kinect for Windows: SDK.” 2011. Microsoft
Research. Web. 8 August 2011.

All Watched Over by Machines of Loving Grace.
By Curtis, Adam. BBC. BBC Two. 23 May 2011.

Angwin, Julia and Valentino-Devries, Jennifer. “Apple, Google Collect User
Data.” The Wall
Street Journal.
April 2011. Web. 8 August 2011.

Brautigan, Richard. “All Watched Over by Machines of Loving Grace.” Red
House Books
. Web. 10
August 2010.

Chris. “Indigenous to the Net ~ Early Network Music Band in the San
Francisco Bay Area.” Crossfade.
8 September 2002. Web. 8 August 2011.

Cardew, Cornelius. Treatise. Buffalo: The Gallery Upstairs Press, 1967.

Patrick. “Apple closes the iTunes store for iPhone users who don’t
want to share their location” Google
23 June 2010.
Web. 24 June 2010.


I, Norton: an Opera in
Real-Time by Gino Robair
Web. 10 August 2011.

Frommer, Dan. “Here’s the Amazing ‘Word Lens’ iPhone App Everyone is Talking
About This Morning (AAPL).” The
San Francisco Chronicle
17 December 2010. Web. 10 August 2011.

Hewitt, Scott, et al. “HELO: The Laptop Ensemble as an Incubator for
Individual Laptop Performance Practices.” Huddersfield
Experimental Laptop Orchestra.

June 2010. Web. 9 August 2011.

Judson, Thom. “OpenNI to Max/MSP via OSC.” Thom
Judson: music and media.
12 January 2011. Web. 8 August 2011.

Kyriakides, Yannis. “Scam Spam.” [“LabO III Scam Spam.m4v.”] Perf. Tako
Hyakutome. YouTube.
4 April 2011. Web. 10 August 2011.

“NITE Middleware.” PrimeSense.
2010. Web. 8 August 2011.

No More Twist. “Nice to See You.” Perf. Charles Céleste Hutchins,
Polly Moller. Music by
Charles Céleste Hutchins
18 July 2008. Web. 11 August 2011.


Nathan. “Microsoft releases Kinect for Windows SDK.” Los
Angeles Times.
16 June
2011. Web. 8 August 2011.

Oliveros, Pauline. “Exchanges.” Deep Listening Pieces. Kingston:
Deep Listening Publications, 1990.

Oliveros, Pauline. “Give Sound/Receive Sound.” Deep Listening Pieces.
Kingston: Deep Listening Publications, 1990.

Posner, Eric and Vermeule, Adrian. “Obama Should Raise the Debt Ceiling on
His Own.” The New York Times. 22 July 2011.
Web. 10 August 2011.

Andreas. “Darwiinosc – darwiin remote with OSC extension for OS
X.” Google Code.
Web. 9 August 2011. <>

Stockhausen, Karlheinz. “Right Durations.”
Web. 8 August 2011.

Alexia. “Word Lens Translates Words Inside of Images. Yes Really.”
Tech Crunch.
16 December 2010. Web. 10 August 2011.


Viner, Katharine. “Adam Curtis: Have computers taken away our power?”
The Guardian.
6 May 2011. Web. 10 August 2011.

Dissertation Draft: BiLE – The Death of Stockhausen

My next piece for BiLE is a large-scale piece called The Death of
Stockhausen, which will be approximately an hour long. I’m
calling the piece a “laptopera,” although there are not currently
any singers. Although this may stretch the opera genre a bit, it’s
not unprecedented, as Gino Robair’s opera in real time, I, Norton,
lists singers as an optional part: “A
performance can be done without actors, singers, or even musicians.”

The inspiration for my opera largely comes from the Adam Curtis
documentary All Watched Over by Machines of Loving Grace,
which discusses how individuals stopped feeling like we are in
control of society or the future. A review of the series in the
Guardian describes the premise as,

Without realising it we, and our leaders, have given up the old progressive
dreams of changing the world and instead become like managers –
seeing ourselves as components in a system, and believing our duty is
to help that system balance itself. Indeed, Curtis says, “The
underlying aim of the series is to make people aware that this has
happened – and to try to recapture the optimistic potential of
politics to change the world.” (Viner)

Curtis lays much of the blame for the current state of affairs at the feet
of computers, or at least the mythology of stable systems which was
inspired by computer science. (All Watched Over by Machines of
Loving Grace
) I thought it would be interesting to do a
computer-based piece that addressed his documentary. While I don’t
believe that computers or anything else are a neutral platform, I
think a large part of the problem comes from the way in which we are
using computers and allowing ourselves to be used by technology
companies. Any solution will certainly have to involve computers, so
it seems useful to think about how to deploy them positively rather
than under a politics of invisible corporate control.

The Curtis documentary is also appealing because he addressed some issues
that had been coming up in conversations I have been having with
friends. When we think of the future, we think only of better
gadgets, not a better world. For example, when describing a new
iPhone app, Word Lens, TechCrunch breathlessly stated, “This
is what the future, literally, looks like.”
They were not alone in this pronouncement, which was widely echoed
through major media outlets, including the San Francisco Chronicle
who imagined a consumer reaction of, "holy cow, this is the
future." (Frommer)

The envisioned future is thus one of hypercapitalism: more and more
things to buy while, at the same time, less and less money with which
to buy them. Consumers economise on food, but still buy expensive
iPhone contracts, presumably because they want to own a piece of the
future. Meanwhile, they have less and less control of even that as
Apple’s curatorial role prevents most consumers from being able to
install apps not approved by the corporation. Smart phones
disempower their users further by collecting their private
information. (Angwin and Valentino-Devries) The future is passive
consumers under greater control from the state and from corporations,
such as Google, Apple and Facebook, who win us over with appealing
gadgets. An online contact described this as a "totalitarian
pleasure regime." (Dugan) Thus we envision Huxley’s Brave New
for those who can afford it and Orwell’s 1984 for
those who can’t.

The left seems to have no widely articulated alternative idea of what a
better world would even look like. The Guardian quotes Curtis on the
2011 protests: “’Even
the “march against the cuts”,’ he says, referring to the
TUC march in London in March 2011:
‘it was a noble thing, but it was still a managerial approach. We
mustn’t cut this, we can’t cut that. Not, “There is another way.”’”
Curtis does not hand us a vision for what this other way might be,
but calls on us to imagine one.

The opera will restate the problem outlined by Curtis and go on to link
the end of the future with the current apocalyptic concerns.
Originally, I wanted to focus mostly on the American preoccupation with
the Apocalypse and Rapture. If all the future will be just like now,
but with better gadgets, then we are only waiting for the end of the
world, which might as well come sooner rather than later. However,
various recent secular events seem to also bear inclusion. The New
York Times described a possible outcome of the US debt crisis as a
“Götterdämmerung,” describing a far right wing hope for a
“purifying” fire. (Posner and Vermeule) As the stock market
tumbled, looters set fire to high streets in the UK. Zoe Williams,
writing in the Guardian, noted that the consumer-oriented nature of
the riots is something “we’ve never seen before.” (Williams)
Rather than battle with the police, looters focused on gathering
consumer goods. Williams quotes Alex Hiller, “Consumer
society relies on your ability to participate in it.”
(Williams) Even their ability to be passive consumers was thwarted.
They had minimal access to what we’ve deemed to be the future.
However, setting large, destructive fires seems to imply that there
is more than just this going on. All of these things from religious
beliefs, to economic disaster to civil unrest share a sense of
hopelessness and feeling of things ending.

However, rather than end on a negative note of yearning for oblivion, and the
end of the avant-garde, I do want listeners to consider a better
world. All of us have agency that can be expressed in ways other
than acquiring consumer goods. I do not present a view of what a
better world might look like, but do hope to remind them that one is
possible. There is another way.

I have broken the opera up into four acts with connecting transition
sections. The durations are based on the Fibonacci series. The
structure will be as follows:

min: Act 1 – The Promise: Cooperative Cybernetics
min: Transition 1
min: Act 2 – The Reality: The Rise of the Machines / Hypercapitalism
min: Transition 2
min: Act 3 – The
min: Transition 3
min: Act 4 – A Better World is Possible: Ascension to Sirius

The durations will probably vary slightly from performance to performance
and may evolve with our practice.
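The Fibonacci-derived proportions can be sketched numerically. This is an illustrative Python model only: the section weights and the one-hour scaling below are my own assumptions, since, as noted, the actual durations are flexible and unspecified.

```python
# Hypothetical sketch: distribute a one-hour running time across the
# seven sections (four acts plus three transitions) in Fibonacci
# proportion. The weights and their ordering are illustrative only.
FIB = [1, 1, 2, 3, 5, 8, 13]  # one weight per section, in running order

def section_minutes(weights, total=60.0):
    """Scale the weights so the sections sum to `total` minutes."""
    s = sum(weights)
    return [round(total * w / s, 1) for w in weights]

durations = section_minutes(FIB)
```

Any monotonic slice of the series could be assigned to the sections; the point is only that adjacent durations relate by roughly the golden ratio.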

Act 1 explores the idealistic ideas of self-organising networks. Every
player in BiLE, as is normal, will create their own sound generation
code which will take no more than five shared parameters plus
amplitude to control their sounds. These parameters may be:
granular, sparse, resonant, pitched. Each player would have a slider
going from zero to one where zero means not at all and one means
entirely. Players will not control their sliders directly, but
instead vote for a value to increase or decrease. Their sound will
thus change in response to their own votes and votes of other
players. They can control their own amplitude at will. There is
also another slider, individual to every player, which controls how
anti-social they are. A value of zero will follow the group
decisions entirely and as the value increases, they will deviate more
and more from the group. A value of one should be actively
disruptive. All players should start with anti-social values of zero
and increase that number in a non-linear fashion until at the end the
group is, in general, very anti-social. The idea of group following
in this piece is also present in my earlier piece Partially
Percussive,
but the users have much less agency in carrying it out in this act.
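The voting mechanic described above can be sketched as follows. This is an illustrative Python model, not BiLE's actual code (each player writes their own sound and control code); the parameter names come from the text, while the step size and clamping are my assumptions.

```python
import random

# Illustrative model of the Act 1 voting mechanic. Parameter names are
# from the text; STEP and the 0-1 clamping are assumptions.
PARAMS = ["granular", "sparse", "resonant", "pitched"]
STEP = 0.05  # how far one round of voting can move a shared value

def tally(shared, votes):
    """Apply one round of votes: each cast vote is +1 (increase) or
    -1 (decrease); a shared value moves by STEP times the mean vote."""
    for name, cast in votes.items():
        mean = sum(cast) / len(cast)
        shared[name] = min(1.0, max(0.0, shared[name] + STEP * mean))
    return shared

def player_value(shared_value, antisocial, rng=random):
    """At antisocial=0 a player follows the group value exactly; as it
    rises she deviates further, up to anywhere in [0, 1] at 1."""
    deviation = antisocial * (rng.random() * 2 - 1)
    return min(1.0, max(0.0, shared_value + deviation))

shared = {p: 0.5 for p in PARAMS}
tally(shared, {"granular": [1, 1, -1]})  # two players vote up, one down
```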

opera will be accompanied by video projections from Antonio Roberts.
I would like the start of this section to visually reference Richard
Brautigan’s poem “All Watched Over by Machines of Loving Grace”
from which the Curtis documentary takes its name. The second stanza reads:

I like to think
   (right now, please!)
of a
cybernetic forest
filled with pines and electronics
where deer
stroll peacefully
past computers
as if they were flowers
spinning blossoms.

From there, I would like there to be archival images of advertising and
assembly lines. As anti-social disorder increases, I’d like to see
more archival images of rioting and property destruction.

This act will not begin rehearsals until October 2011.

Act 2 is included in my portfolio. It is the most operatic of all the
acts in that it includes live vocals. Players sample themselves first
reading common subject lines of spam emails, then common lines from
within spam emails and finally start reading an example of “spoetry”
– machine generated text that is sometimes used in an attempt to fool
spam filters. The players manipulate these samples to create a live
piece of text-sound poetry. In order to get material, I mined the
spam folder of my email account. I broke the material into sections
and assigned every line a number. (See attached)

Other composers, such as Yannis Kyriakides in his piece “Scam Spam,”
have used spam emails as source material. However, Kyriakides does
not include a vocal line in his piece. In 2008, composer/performer
Polly Moller approached me to improvise live on KFJC radio in
California. She played flute and pitched noisemakers and read a
“spoem” called “Nice to See You” and I did live
sampling/looping of her sounds and vocals. (No More Twist) I felt
satisfied with the results of this improvisation. Afterwards, I was
interested to keep working with spoetry and to look at doing more
structured text-sound pieces with a greater live component than I had done before.

This act builds on my experiences with Moller, using a larger ensemble,
and asking every member of BiLE to develop programmes specifically
for the manipulation of text sounds. They also manipulate artificial
sounds, which are recordings of my analogue synthesiser. The score is
expressed as rules:

Rules for playing:


Begin immediately with the artificial sounds. You may play these throughout
the piece.


Then start recording and playing from the A section. These can go
throughout the piece.


Then go on to the B section. These can also go throughout the piece, but
should be used more sparingly once this section is passed.


The C section takes up the largest part of the piece. You do not need
to get to the end of all the lines provided.

Players should announce what line they are recording via the chat.

Once a line is recorded, other players may record that line (or fragments
of it) again, but cannot backtrack to a previous line. Players can
also choose to advance to the next line, but, again, backtracking is
not allowed.

When a player is picking a soundfile to process, she can pick from any
section. If she picks from section C, it should normally be a recent
line; however, you can break this rule if you have a good reason, i.e.
you feel a really strong attachment to a previous line or think it
can exist as a counterpoint / commentary to the current line.

Blank lines in the text should be interpreted as pauses in making new recordings.
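The no-backtracking rule above amounts to a small state machine: the ensemble shares a furthest-line pointer that can only move forward, one line at a time. A minimal sketch, with hypothetical line text standing in for the numbered spam lines (the class and its names are my own):

```python
# Illustrative model of the line-advance rule; not the actual score
# logic, which the players enforce by announcing lines in the chat.

class SpamScore:
    """Tracks the ensemble's furthest recorded line. Players may
    re-record the current line or advance to the next, never go back."""
    def __init__(self, lines):
        self.lines = lines
        self.furthest = 0  # index of the most recent line anyone recorded

    def record(self, player, index):
        if index < self.furthest:
            raise ValueError(f"{player} cannot backtrack to line {index}")
        if index > self.furthest + 1:
            raise ValueError("advance at most one line at a time")
        self.furthest = max(self.furthest, index)
        return self.lines[index]
```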

I have not yet thought about videos for this section.

Act 3 will also have text sound, but as a collage on top of other
material. I was originally planning to have this section concentrate
solely on people’s religious or spiritual beliefs surrounding the
rapture, the apocalypse or 2012. I’m hoping to do telephone
interviews of Americans who believe the end is nigh. I’m hoping the
promise of being able to witness to new audiences will be enticing
enough to persuade them to participate. I have not yet found any
rapture believers to record, but as we are not going to start working
on this until October or November of 2011, it’s not yet urgent.

In addition to rapture believers, I hope to do in-person recordings of
people who have New Age beliefs about the winter solstice of 2012. So
far I have interviewed one person and another has agreed to
participate also. I plan to have the piece organised so that the
rapture believers come first in the piece and the 2012 believers after, but
as I have not yet acquired much material, this is subject to change.

I plan to ask interviewees about current events, like rioting, economic
turmoil and climate change, and include those topics in the text
according to how they correlate to religious and spiritual beliefs. The
collage will also be made up of samples referencing these events, such
as fire sounds, windows breaking, sirens, etc. This does present a risk of
being overly dramatic, but appropriate use of heavy processing will
turn the sounds into references that are more indirect. Also drama is
not inappropriate to the medium. The collage should become less and
less alarming towards the end, as the text switches to the generally
more hopeful New Age respondents.

Act 4 will have a graphic score, in the style of Cornelius Cardew’s
Treatise. I recently participated in the first all-vocal
performance of that piece, at the South London Gallery on 16
September 2011. The group I sang with did not have a lot of
experience with free improvisation, and it was interesting to see how
exposure to such open material challenged and inspired them. I hope
that similarly, BiLE will go places we would not have otherwise
without the graphic score.

I do want it to move from the spiritual hope of the previous act to
something more inclusive. My hope is that the audience will leave
not with a sense that the apocalypse is coming in one form or
another, but that it is possible to avert disaster.

Dissertation Draft: BiLE, XYZ and gesture tracking

My colleague Shelly Knotts wrote a piece that uses the network features much more fully. Her piece, XYZ, uses gestural data and state-based OSC messages as well, to create a game system where players “fight” for control of sounds. Because BiLE does not follow the composer-programmer model of most LOrks, it ended up that I wrote most of the non-sound producing code for the SuperCollider implementation of Knotts’s piece. This involved tracking who is “fighting” for a particular value, picking a winner from among them, and specifying the OSC messages that would be used for the piece. I also created the GUI for SuperCollider users. All of this relied heavily on my BileTools classes, especially NetAPI and SharedResource.
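The fight-resolution bookkeeping described above can be sketched as follows. This is a hedged Python illustration: the function names and the random winner-pick are my own assumptions, and the real SuperCollider implementation for Knotts's XYZ may resolve fights differently.

```python
import random

# Illustrative sketch: players contend ("fight") for control of a
# shared parameter; a winner is picked from among the contenders.

fights = {}  # parameter name -> set of players currently fighting for it

def join_fight(param, player):
    """A player announces they are fighting for control of `param`."""
    fights.setdefault(param, set()).add(player)

def resolve(param, rng=random):
    """Pick a winner among the current contenders and clear the fight."""
    contenders = fights.pop(param, set())
    return rng.choice(sorted(contenders)) if contenders else None
```

In the piece itself, this kind of logic sat alongside the agreed OSC message names and a SuperCollider GUI, built on the NetAPI and SharedResource classes.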

Her piece specifically stipulated the use of gestural controllers. Other players used Wiimotes and iPhones, but I was assigned the Microsoft Kinect. There exists an OSC skeleton tracking application, but I was having problems with it segfaulting. Also, full-body tracking sends many more OSC messages than I needed and requires a calibration pose to be held while it waits to recognise a user. This seemed like overkill, so I decided it would be best to write an application to track one hand only.

Microsoft had not released official drivers yet when I started this and, as far as I know, their only drivers so far are Windows-only. I had to use several third-party, open source driver and middleware layers. I did most of my coding at the level of the PrimeSense NITE middleware. There is a NITE sample application called PointViewer that tracks one hand. I modified the programme’s source code so that, in addition to tracking the user’s hand, it sends an OSC message with the hand’s x, y and z coordinates. This allows a user to generate gesture data from hand position with a Kinect.
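The OSC message itself is simple to construct by hand. Below is a Python sketch of what such a tracker might emit; the address pattern /bile/kinect/hand is my assumption (the text does not name the actual message), but the string padding and type-tag layout follow the OSC 1.0 encoding.

```python
import struct

# Minimal sketch of an OSC message carrying a hand position: an
# address pattern, the type tag ",fff", and three big-endian 32-bit
# floats. The address name is a hypothetical stand-in.

def _pad(b: bytes) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def hand_message(x: float, y: float, z: float,
                 address: str = "/bile/kinect/hand") -> bytes:
    return (_pad(address.encode("ascii"))
            + _pad(b",fff")
            + struct.pack(">fff", x, y, z))
```

In the real application this blob would be sent over UDP to the listening synthesis environment.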

In future versions of my Kinect application, I would like to create a “Touch-lessOSC,” similar to the TouchOSC iPhone app, but with dual hand tracking. One hand would simply send its x, y, z coordinates, but the other would move within pre-defined regions to send its location within a particular square, move a slider, or “press” a button. This will require me to create a way for users to define shapes and actions, as well as a way to recognise gestures related to button pressing. I expect to release an alpha version of this around January 2012.

For the SuperCollider side of things, I wrote some classes, OSCHID and OscSlot (see attached), that mimic the Human Interface Device (HID) classes, but for HIDs that communicate via OSC through third-party applications such as the one I wrote. They also work with DarwiinOSC and with TouchOSC on the iPhone. As they have the same structure and methods as the regular HID classes, they should be relatively easy for programmers to adopt, and the Wiimote subclass, WiiOSCClient (see attached), in particular, is drop-in-place compatible with the pre-existing SuperCollider Wiimote class, which, unfortunately, is currently not working.

All of my BiLE-related class libraries have been posted to SourceForge and will be released as a SuperCollider quark. My Kinect code has been posted to my blog only, but I’ve gotten email indicating that at least a few people are using the programme.

Dissertation Draft: BiLE – Partially Percussive

I wrote a GUI class called BileChat, to provide a chat interface allowing typed communication during concerts, and another called BileClock, for a shared stopwatch. We use these tools in every piece that we play.

We played our first gig very shortly after forming and, while we were able to meet the technical challenges, the musical result was not entirely compelling. Our major problems were not looking at each other and not listening to each other; these are the standard problems new ensembles tend to have, but they were exacerbated by the networking tools, especially the chat.

Several years ago, when I was running an ensemble of amateur percussionists, I used Deep Listening pieces by Pauline Oliveros to help focus the group and encourage greater listening. Most of those exercises are very physical, asking the participants to use body percussion or to sing. This worked well for percussionists, but did not seem well suited to a laptop band. Almost all of the members of BiLE have previous experience playing in ensembles. While every group can benefit from listening exercises, we were not starting from scratch and the exercises we use should be ones that are compatible with networked laptop music. In other words, we needed listening skills within the context in which we were trying to perform.

I wrote a piece called Partially Percussive in order to implement Deep Listening-like ideas in a laptop context. I wrote the score on a studio white board as a list of rules:


To start playing, sample the object.
Listen to other players. Are they playing:

  • Percussive vs Sustained
  • Sparse vs Dense
  • Loud vs Soft
  • Pointillistic vs Flowing

Follow the group until you decide to change.
If you hear a change, follow it.
Lay out whenever you want, for how long you want.
Sample the object to come in again.

The score stayed on the white board for two or three weeks. I took a photo of it for my records, however, the score for this piece has never been distributed via paper or email. I do not know what notes, if any, my colleagues took on the score. When describing the score to them, I said that they should drop out (“lay out”) when they “feel it” and return similarly.

I specified live sampling to add transparency to our performance, so audiences can have an idea of where our sounds are coming from. I picked percussion in particular after an IM conversation with Charles Amirkhanian, in which he encouraged me to write for percussion. We originally had a haphazard collection of various metal objects; however, we forgot to bring any of them to one of our gigs, so I went to Poundland and purchased a collection of very cheap but resonant kitchen objects, and wooden spoons to play them with. We also use a fire bell. Because it has a long ringing tail on its sound, which is quite nice, we use it to start and end the piece. Finally, one of the ensemble members owns some cowbells, which we often also use. Each player usually has a single metal object, but is free to borrow objects from the others. When someone is borrowing the cowbell, they typically allow the bell to ring while carrying it.

While the rules, especially in regards to ‘laying out,’ are influenced by Oliveros, our practice of the piece draws heavily on the performance practice of the Anthony Braxton ensemble, in which I played in 2004–5. In this piece, as well as in Braxton’s ensemble, players form spontaneous duos or trios and begin trading gestures. This depends on both eye contact and listening, and thus requires us to develop both of those skills.

When we started playing this piece, I was controlling my own patch with a wireless gamepad, with two analogue sticks and several buttons. This gave me the ability to make physical motions and control my patch while away from my computer, for example, while getting an object from another player. Over time, more BiLE members have incorporated even more gestural controllers, such as iPhones running TouchOSC. Thus, when trading gestures, players will mimic sound quality and physical movement. I believe this aids both our performance practice and audience understanding of the piece.

The technology of this piece does not require more than the chat and the shared stopwatch, but it appeals to audiences and we play it frequently.

Dissertation: BiLE Networking White Paper

This document describes the networking
infrastructure in use by BiLE.

The goal of the infrastructure design
has been flexibility for real time changes in sharing network data
and calling remote methods for users of languages like SuperCollider.
While this flexibility is somewhat lost to users of less flexible
languages like Max, they can nevertheless benefit from having a
structure for data sharing.


If there is a good reason, for
example, a remote user, we support OSCGroups as a means of sharing messages.

If all users are located together on
the same subnet, then we use broadcast on port 57120.


By convention, all OSC messages start
with ‘/bile/’


Strings must all be ASCII. Non ASCII
characters will be ignored.


Upon joining the network, users
should announce their identity:

nickname ipaddress port

nicknames must be ASCII only.


Nick 57120

Note that because broadcast
echoes back, users may see their own ID arrive as an announcement.
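A client-side sketch of the announcement convention follows, in Python for illustration (BiLE members variously use SuperCollider and Max). The address /bile/ID is my assumption, extrapolated from the /bile/ naming convention and the IDQuery below; the argument order and the ASCII rule come from the conventions above, and the returned tuple stands in for however a real client serialises and sends OSC.

```python
# Sketch of announcing identity on joining the network. The address
# "/bile/ID" is an assumption following the /bile/ naming convention.

def ascii_only(s: str) -> str:
    """Strings must be ASCII; non-ASCII characters are ignored."""
    return s.encode("ascii", errors="ignore").decode("ascii")

def id_message(nickname: str, ip: str, port: int):
    return ("/bile/ID", ascii_only(nickname), ip, port)

id_message("Céleste", "192.168.0.5", 57120)
# ("/bile/ID", "Cleste", "192.168.0.5", 57120)
```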


Users should also send out their
ID in response to an IDQuery:


Users can send this message at
any time, in order to compile a list of everyone on the network.

Users can enquire what methods
they can remotely invoke and what data they can request.


To reply to this, users should send /bile/API/Key and /bile/API/Shared
(see below)

Keys represent remote methods.
The user should report their accessible methods in response to a Query:

symbol desc nickname

The symbol is an OSC message
that the user is listening for.
The desc is a text based
description of what this message does. It should include a usage
The nickname is the name of the
user that accepts this message.


/bile/msg "For chatting. Usage: msg, nick, text" Nick
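
To make the Key mechanism concrete, here is a rough Python sketch (my own illustration, not BiLE's actual SuperCollider code, and the names are invented) of a dispatch table that both advertises and invokes remotely callable methods:

```python
class RemoteAPI:
    """Hypothetical registry of remotely invokable methods, keyed by OSC symbol."""

    def __init__(self, nickname):
        self.nickname = nickname
        self.keys = {}  # symbol -> (function, description)

    def register(self, symbol, func, desc):
        """Expose func under an OSC symbol, with a usage description."""
        self.keys[symbol] = (func, desc)

    def advertise(self):
        """Build one /bile/API/Key reply per method: symbol desc nickname."""
        return [("/bile/API/Key", sym, desc, self.nickname)
                for sym, (_, desc) in self.keys.items()]

    def invoke(self, symbol, *args):
        """Call the registered function for an incoming OSC message."""
        func, _ = self.keys[symbol]
        return func(*args)
```

When a /bile/API/Query arrives, the client would send each tuple from advertise() back to the asker.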
Shared represents available data streams. Sources may include input devices, control data sent to running audio processes, or analysis. The user should report their shared data in response to a query:

symbol desc

The symbol is an OSC message that the user sends with. As in the example below, it is formatted /bile/nickname/name, where the nickname identifies the user sharing the data.
The desc is a text-based description of the data. If the range is not between 0 and 1, it should mention this.

Example: /bile/Nick/freq "Frequency. Not scaled."
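
Since shared-data symbols embed the owner's nickname, as in /bile/Nick/freq, a receiver can recover the owner and the parameter name from the address itself. A quick sketch, under the assumption that shared symbols always take the form /bile/nickname/name:

```python
def parse_shared_symbol(symbol):
    """Split a shared-data symbol such as '/bile/Nick/freq' into (nickname, name).

    Returns None if the symbol does not follow the /bile/nickname/name
    convention (for example, plain API messages like /bile/msg)."""
    parts = symbol.strip("/").split("/")
    if len(parts) != 3 or parts[0] != "bile":
        return None
    return parts[1], parts[2]
```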

Shared data will not be sent out if no one has requested it, and it may be sent either directly to interested users or to the entire group, at the sender’s discretion. In order to ensure receiving the data stream, a user must register as a listener:

symbol nickname ip port

The symbol is an OSC message that the user will be listening for. It should correspond with a previously advertised shared item. If the receiver of this message recognises their own nickname in the symbol (which is formatted /bile/nickname/name) but is not sharing that data, they should return an error (see below).
The nickname is the name of the user that will accept the symbol as a message.
The ip is the ip address of the user that will accept the symbol as a message.
The port is the port of the user that will accept the symbol as a message.

Example: /bile/Nick/freq Shelly 57120
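
A sketch of how the owner of a data stream might handle these registration requests, again in illustrative Python with invented names. The error is only returned when the symbol's middle element names us but the data is not actually shared:

```python
class SharedData:
    """Hypothetical owner-side registry of shared data streams and listeners."""

    def __init__(self, nickname):
        self.nickname = nickname
        self.shared = {}     # symbol -> description
        self.listeners = {}  # symbol -> set of (nickname, ip, port)

    def share(self, symbol, desc):
        """Advertise a data stream such as /bile/Nick/freq."""
        self.shared[symbol] = desc
        self.listeners.setdefault(symbol, set())

    def add_listener(self, symbol, nickname, ip, port):
        """Handle a register-listener request.

        Requests naming other users are ignored; if the symbol names us
        but is unknown, reply with the noSuchSymbol error."""
        parts = symbol.strip("/").split("/")
        if len(parts) != 3 or parts[1] != self.nickname:
            return None  # malformed, or someone else's data
        if symbol not in self.shared:
            return ("/bile/API/Error/noSuchSymbol", symbol)
        self.listeners[symbol].add((nickname, ip, port))
        return None
```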


In the case that a user receives a request to register a listener or to remove a listener for data that they are not sharing, they can reply with /bile/API/Error/noSuchSymbol:

symbol

The symbol is an OSC message that the user tried to start or stop listening to. It is formatted /bile/nickname/name. Users should not reply with an error unless they recognise their own nickname as the middle element of the OSC message. This message may be sent directly to the confused user.


To announce an intention to ignore subsequent data, a user can ask to be removed:

symbol nickname ip

The symbol is an OSC message that the user will no longer be listening for. If the receiver of this message sees their own nickname in the symbol (which is formatted /bile/nickname/name) but is not sharing that data, they can reply with /bile/API/Error/noSuchSymbol.
The nickname is the name of the user that will no longer accept the symbol as a message.
The ip is the ip address of the user that will no longer accept the symbol as a message.

Example: /bile/Nick/freq Shelly

Users who are quitting the network can ask to be removed from everything that they were listening to:

nickname ip

The nickname is the name of the user that will no longer accept any shared data.
The ip is the ip address of the user that will no longer accept any shared data.
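
The two removal messages can be handled symmetrically on the sender's side. A sketch, with invented helper names, of removing one listener from one stream and of honouring a quit-the-network removal from everything:

```python
def remove_listener(listeners, symbol, nickname, ip):
    """Drop one (nickname, ip) pair from a symbol's listener set, if present.

    listeners maps symbol -> set of (nickname, ip) pairs."""
    if symbol in listeners:
        listeners[symbol].discard((nickname, ip))

def remove_all(listeners, nickname, ip):
    """A quitting user is removed from every stream they were listening to."""
    for symbol in listeners:
        remove_listener(listeners, symbol, nickname, ip)
```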

Used Messages

This is used for chatting:

nickname text

The nickname is the name of the user who is sending the message.
The text is the text that the user wishes to send to the group.

This is for a shared stopwatch and not for serious timing applications.

Clock start or stop:

symbol

The symbol is either start or stop.

Reset the clock to zero.

Set the clock time:

minutes seconds

Minutes is the number of minutes past zero.
Seconds is the number of seconds past zero.
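
The shared stopwatch holds very little state. Here is a minimal Python sketch of the receiving side (my own illustration of start/stop/reset/set; BiLE members implemented this in their own environments):

```python
import time

class SharedClock:
    """A shared stopwatch: start/stop, reset to zero, or set to minutes:seconds.

    As the white paper says, this is not for serious timing applications."""

    def __init__(self, now=time.monotonic):
        self.now = now          # injectable clock source, for testing
        self.offset = 0.0       # seconds shown when last stopped or set
        self.started_at = None  # time of last start, or None if stopped

    def start(self):
        if self.started_at is None:
            self.started_at = self.now()

    def stop(self):
        if self.started_at is not None:
            self.offset += self.now() - self.started_at
            self.started_at = None

    def reset(self):
        self.offset, self.started_at = 0.0, None

    def set(self, minutes, seconds):
        self.offset = minutes * 60 + seconds
        if self.started_at is not None:
            self.started_at = self.now()

    def elapsed(self):
        running = 0.0 if self.started_at is None else self.now() - self.started_at
        return self.offset + running
```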


Because users can silently join, leave and re-join the network, it could be a good idea to have users time out after a period of silence, perhaps around 30 seconds. To stay active, they would need to send I’m-still-here messages.

There should possibly also be a way for a user to announce that they have just arrived, so that if, for example, a SuperCollider user recompiles, her connection will think of itself as new and other users will know to delete or recreate connections that depend on that user.
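
The time-out idea could look something like this; a sketch only, since BiLE's protocol did not actually implement it, and the class name is mine:

```python
class PresenceTracker:
    """Drop users who have not sent an I'm-still-here message within the timeout."""

    def __init__(self, timeout=30.0):
        self.timeout = timeout
        self.last_seen = {}  # nickname -> time of last message

    def heard_from(self, nickname, now):
        """Record any message from a user as proof of life."""
        self.last_seen[nickname] = now

    def active_users(self, now):
        """Users heard from within the timeout window."""
        return [nick for nick, t in self.last_seen.items()
                if now - t <= self.timeout]
```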

Dissertation Draft: BiLE Tech

In January 2011, five of my colleagues in BEAST and I founded BiLE, the Birmingham Laptop Ensemble. All of the founding members are electroacoustic composers, most of whom have at least some experience with an audio programming language, either SuperCollider or MAX. We decided that our sound would be strongest if every player took responsibility for their own sound and did his or her own audio programming. This is similar to the model used by the Huddersfield Experimental Laptop Orchestra (HELO), who describe their approach as a “Do-It-Yourself (DIY) laptop instrument design paradigm.” (Hewitt et al. p 1) Hewitt et al. write that they “[embrace] a lack of hardware uniformity as a strength” and imply that their software diversity is similarly a strength, granting them a greater musical (rather than technical) focus. (ibid) BiLE started with similar goals, to focus on the music and empower the user, and has had similarly positive results.

My inspiration, however, was largely drawn from The Hub, the first laptop band, some members of which were my teachers at Mills College in Oakland, California. I saw them perform in the mid-1990s, while I was still an undergrad, and had an opportunity then to speak with them about their music. I remember John Bischoff telling me that they did their own sound creation patches, although for complicated network infrastructure, like the Points of Presence Concert in 1987, Chris Brown wrote the networking code. (Cite comments from class?)

One of the first pieces in BiLE’s repertoire was a Hub piece, Stucknote by Scott Gresham-Lancaster. This piece not only requires every player to create their own sound, but also involves several network interactions, including a shared stopwatch, chat messages and shared gestural data for every sound. In Bischoff and Brown’s paper, the score for Stucknote is described as follows:

“Stuck Note” was designed to be easy to implement for everyone, and became a favorite of the late Hub repertoire. The basic idea was that every player can only play one “note”, meaning one continuous sound, at a time. There are only two allowable controls for changing that sound as it plays: a volume control, and an “x-factor”, which is a controller that in some way changes the timbral character or continuity of the instrument. Every player’s two controls are always available to be played remotely by any other player in the group. Players would send streams of MIDI controller messages through the hub to other players’ computer synthesizers, taking over their sounds with two simple control streams. Like in “Wheelies”, this created an ensemble situation in which all players are together shaping the whole sound of the group. An interesting social and sonic situation developed when more than one player would contest over the same controller, resulting in rapid fluctuations between the values of parameters sent by each. The sound of “Stuck Note” was a large complex drone that evolved gradually, even though it was woven from individual strands of sound that might be changing in character very rapidly. (

Because BiLE was a mostly inexperienced group, even the “easy to implement for everyone” Stucknote presented some serious technical hurdles. We were all able to create the sounds needed for the piece, but the networking required was a challenge. Because of our software diversity, there was no pre-existing SuperCollider Quark or MAX external to solve our networking problems. Instead, we decided to use the generic music networking protocol Open Sound Control (OSC). I created a template for our OSC messages. In addition to the gestural data for amplitude and x-factor specified in the score, I thought there was a lot of potential for remote method invocation and wanted a structure that could work with live coding, should that situation ever arise. I wrote a white paper (see attached) which specifies message formatting and messages for users to identify themselves on the network and to advertise remotely invokable functions and shared data.

When a user first joins the network, she advertises her existence with her username, her IP address and the port she is using. Then, she asks for other users to identify themselves, so they broadcast the same kind of message. Thus, every user should be aware of every other user. However, there is currently no structure for users to quit the network. There is an assumption, instead, that the network only lasts as long as each piece. SuperCollider users, for example, tend to re-compile between pieces.

Users can also register a function on the network, specifying an OSC message that will invoke it. They advertise these functions to other users. In addition, they can share data with the network. For example, with Stucknote, everyone is sharing amplitude values such that they are controllable by anyone, including two people at the same time. The person who is using the amplitude data to control sound can be thought of as the owner of the data; however, they or anyone else can broadcast a new value for their amplitude. Typically, this kind of shared data is gestural and used to control sound creation directly. There may be cases where different users disagree about the current value, or where packets get lost, but this does not tend to cause problems: with gestural data, not every packet is important, so packet loss is not a serious issue.

When a user puts shared data on the network, she also advertises it. Users can request to be told of all advertised data and functions. Typically, a user would request functions and shared data after asking for IDs, upon joining the network. She may ask again at any time. Interested users can register as listeners of shared data. The possibility exists (currently unused) for the owner of the data to send its values only to registered users instead of to the network as a whole.

In order to implement the network protocol, I created a SuperCollider class called NetAPI (see attached code and help file). It handles OSC communications and the infrastructure of advertising and requesting IDs, shared functions and shared data. In order to handle notifications for shared data changes, I wrote a class called SharedResource. When writing the code for Stucknote, I had problems with infinite loops in change notifications. The SharedResource class has listeners and actions, but the value-setting method also takes an additional argument specifying what is setting it. The setting object will not have its action called. So, for example, if the change came from the GUI, the SharedResource will notify all listeners except for the GUI. When SharedResources “mount” the NetAPI class, they become shared gestural data, as described above.
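
The SharedResource idea, a value whose listeners are all notified of changes except for whichever object made the change, can be sketched in a few lines. This is my Python illustration of the design, not the actual SuperCollider class:

```python
class SharedResource:
    """A value with change listeners; the object that sets the value is skipped.

    This mirrors the design described above: if the GUI sets the value,
    every listener except the GUI is notified, which prevents the
    infinite update loops encountered while writing Stucknote."""

    def __init__(self, value=None):
        self.value = value
        self.actions = {}  # listener object -> callback(value)

    def add_listener(self, listener, action):
        self.actions[listener] = action

    def set(self, value, setter=None):
        """Update the value, notifying everyone except the setter."""
        self.value = value
        for listener, action in self.actions.items():
            if listener is not setter:
                action(value)
```

For example, a change arriving from the network would be set with the network object as setter, so only the GUI (and any other listeners) redraw, and no notification is echoed back out.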

Forming a plan

Today is 30 July. My dissertation is due on 30 September. I am now planning how to use the time between now and then.
I know that I cannot work every day between now and then. My maximum sprint time is 10 days, so I need to plan on taking one day off per week, which might as well be on the weekend. With BiLE on Wednesdays, that gives me 5 days a week. Planning on working 16-hour days is also not going to work. Instead, I can do 4 hours on music and 4 hours on words. Roughly, I have 160 hours of each to spend.
If I keep to a reasonable sleeping schedule and cut back on facebook, I can still go out occasionally. I am not going to drink unless it is the evening before my one break day per week. Also, since stress levels will be high, that break day needs to actually be spent away from a computer like riding my bike or going to the beach or something worthwhile.
Everything is going to be fine. This will all be over soon. I will get it all done. I just need to focus and work hard.
I may do what I did with my MA and start posting drafts of various bits, looking for feedback.