Engaging and Adjusting

The thing about negative feedback is that it’s extremely useful for knowing how to improve. (Mostly. I’m not counting the guy who wondered if our mothers were proud, though I’d like to think mine would be.) And the topic that stands out most glaringly is audience engagement.
This is a long-standing problem for many groups, dating back to the start of the genre. Somebody left an anonymous comment on my last post comparing us to “geography teachers.” Scot Gresham-Lancaster wrote that The Hub was compared to air traffic controllers. Their solution was to project their chat window, something we’ve talked about but never actually implemented. There are papers written about how the use of gestural controllers can bridge this gap, something we have implemented. But what projected chat, gestural control and synthesised voice all have in common is hiding behind technology.
Thus far, we have usually physically hidden behind technology as well: sat behind tables, behind laptops, not tending to talk to the audience. However, not all of our gigs have been this way. When we played at the Sonic Picnic, we were standing and we had a better connection to the audience, I think because we were behind plinths, which are smaller and left us more exposed. At other concerts, we’ve talked to the audience and even given them some control of our interface. This also helps.
Performers who have good posture and good engagement are not like that naturally; they practice it like all their other skills. A cellist in a conservatory practices in front of a mirror so ze can see how ze looks while ze plays and adjust accordingly.
Also, it turns out that it wasn’t just me who ‘crashed’ due to user error rather than technical failure. There are two solutions for this. One is to have a to-do list reminding the player what they need to do for every piece and to automate as much of that process as possible. The other is to be calmer and more focussed going on stage. When we were getting increasingly nervous waiting to be called on to perform, we could have been taking deep breaths, reassuring each other and finding a point of focus, which is what happens when gigs go really well. Alas, this is not what we did at all.
So, starting next week, we are practising in front of a ‘mirror’ (actually a video projection of ourselves, which we can also watch afterwards to talk about what went right and wrong). We are going to source tall, plinth-like portable tables to stand behind or next to. The composer of every piece will write a short two-sentence summary explaining the piece and, in future, we’ll have microphones at gigs, so that whoever has the fastest changeover will announce the piece, say a bit about it and tell a few bad jokes, like rock bands do between songs. We’re also going to take deep breaths before going on and have checklists to make sure we’re ready.
On the technical side, I’m going to change the networking code to broadcast to multiple ports, so that if SuperCollider does crash and refuse to release the port, the user will not have to restart the computer, just the programme. I’m also hoping that 3.5.1 will have some increased networking stability. My networked interactions tend to crash if left running for long periods of time, which is probably a memory management issue that I’ll attempt to find and fix; in the meantime, we get everything but the networking running ahead of going on stage, then start the networking just before the piece and recompile it between pieces. To make the changeover faster, we’ve changed our practice so that whoever is ready to go first just starts and the other people catch up, which is something we also need to practise.
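Since several of us work in SuperCollider, here is a minimal sketch of what sending to multiple ports might look like. The ports, the broadcast address and the chat message are illustrative rather than the values BiLE actually uses; sclang normally listens on 57120 and, if that port is stuck, a restarted instance will typically end up on a nearby port, so sending every message to a small range of ports covers that case.

NetAddr.broadcastFlag = true;
~ports = [57120, 57121, 57122];
~broadcast = { |path, args|
    ~ports.do { |port|
        // send the same OSC message to every candidate port
        NetAddr("255.255.255.255", port).sendMsg(*([path] ++ args));
    };
};
// usage: ~broadcast.value('/bile/msg', ["Nick", "starting in ten seconds"]);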
A pile of negative feedback, even if uncomfortable, is a tremendous opportunity for improvement. So our last gig was amazingly useful even if not amazingly fun.

Gig Report: The adoration may not be universal

BiLe had two gigs yesterday but I’m just going to talk about the second one. However, first I’m going to talk about some gigs I played a few years ago. One was a cafe gig, or possible several cage gigs. They tend to blend together. I was playing tuba with some free improvisers, including the owner of the cafe. A bunch of people were there talking, we started to play and just about everybody left.
It’s slightly uncomfortable, but it’s a feeling well known to anybody who has ever played in a cafe. There have been times when I’ve gone out for a cup of coffee and a chat with friends and, rather than talk over the music, we’ve moved on when it started. At other times, I’ve been happily surprised by live music, and there have been many times I’ve gone out to a cafe specifically to hear the music that was programmed.
The other was in 2004, when I had just started doing live computer pieces in SuperCollider, but they were not interactive; they were live realisations. (I called them “press the button” pieces.) I was testing out a new one at an open mic night at a restaurant. My friend had organised the evening and asked me to play, but it was me and otherwise all acoustic guitars. It was a very early version of the piece and it still had some major aesthetic problems, which became glaringly apparent as it played. Many people in the room left to go home over the course of the piece. It was not a cafe, it was a restaurant. People had plates of food in front of them, which they apparently abandoned during the longest 11 minutes of my life. (I blogged about this at the time.)
A few things happened as a result of this. One was that a busboy came out and gave me a thumbs up, I’m pretty sure because he liked the music, but you never know. Another was that I instantly got much more respect from my colleagues at the university. For my own part, I pledged to become more aware of how listeners might respond to pieces I was working on, to try to prevent a repeat of this. And finally, I learned the value of playing things in front of people as part of the path to finishing a piece.
The reason for the increased respect from my colleagues is slightly complex. Part of it was simple elitism, but I think a part of it was an encouragement to take risks. Being likeable is not enough. Some fantastic music is loved upon first listening, but a lot is hated. A lot of fantastic and important pieces caused riots at their first performance: 4’33” by John Cage, The Rite of Spring by Stravinsky and Ballet Mecanique by George Antheil are all well-known examples. Of course, causing an uproar does not mean that you’re good. You could just be terrible. But it does mean you’re taking a risk.
Of course, I tend to blunder into risks blindly and be caught a bit by surprise.

TEDxBrum

Localities can put on their own, independent TED conferences. One in Birmingham decided to invite BiLE and, despite having a gig already lined up the same morning, we agreed to play.
I’d been at the LoveBytes festival in Sheffield (which was excellent) the day before and stayed over. Alas, it turned out that the reason my hotel room was so cheap was that it was directly over a reggae club. I think my room must have been right over the bass amp. One song was in the same key as the resonant frequency of the door frame. We woke up early yesterday morning, played a set at a headphone concert at the LoveBytes Festival, then got on a train back to Birmingham and arrived at the MAC just in time to set up and play another set at TEDx.
We waited nervously backstage for our turn, filed in and started to play XYZ by Shelly Knotts. For some reason, there was a lot of crashing. Chris missed the entire piece, trying to recover from a crash. Julien and Shelly both crashed mid-piece but were able to recover quickly. I did not crash, but I was the last to come in. It was sparse and a bit stressful, but we got through it. We’ve played that piece a lot previously. It’s not our first piece, but it’s the first we proposed: we spent our first-ever meeting writing a vague proposal to NIME last year, and this was the piece that we played there.
Then we played Sonnation 2 by Julien Guillamat. We’ve only played that piece a couple of times before, but it’s not difficult. I forgot to plug in my faders and spent the first two minutes trying to figure out what was wrong and then recovering, so it also had some sparseness. The end was not as tight as it could have been and I smiled a bit at the error, but then it was over and we filed back off stage.
We always have problems with having the right sort of game face for playing live. I’ve been working on my posture, but we still sometimes slip into head resting on arm with elbow on the table. And I should have kept a straight face at the end. I typed some lines into the speech synthesiser to announce piece titles, which is something I’ve seen other bands do at laptop concerts. I have mixed feelings about it. It seemed better than not engaging at all (which is what we usually do, alas) and we didn’t have a microphone.
Afterwards, we went outside to wait for the talks to end so we could break down our gear. It was then that somebody pulled out their smart phone to check Twitter.

Reactions

The tweets are below in chronological order (oldest first). While it was clear the performance had some technical issues, it had not seemed unusual in any way. We picked pieces that I thought would be accessible. XYZ has computer-game elements, including players competing for control of sound parameters and lo-fi game-ish graphics. Sonnations also seems accessible in that it uses live sampling of metallic instruments, something that has worked in Partially Percussive, and because it has a physically performative element at the end. Plus it gets nice sounds.
It may be that the difference between reactions to Sonnations and, say, Partially Percussive has to do with managing audience reactions in some way. The bells do sound nicer than the kitchen hardware, but, because they look like instruments, the audience may be expecting something much more conventionally tonal. The resonances of the metal bowls might be a nice surprise, whereas the cowbell sounds might be slightly disappointing. Of course, it’s even more likely that the audience would have found the use of kitchen objects to be unbearably pretentious. It may have been better to play Act 2 of the Laptopera as the second piece. It sounds weirder, but the obvious references to spam email, especially the penis-enlargement ones, are funny and may have engaged them. Or maybe not. It’s hard to know.
We’re playing at Symphony Hall in May and this does have me a bit worried, in that I would not have predicted these crashes and I don’t know what caused them. And I’m worried that we might be too brutal for fans of minimalism, a genre that has caught on much more than other kinds of 21st-century art music and appeals to a mainstream audience. Just because an audience wants to be challenged a bit doesn’t mean they want what we do.
On the other hand, as somebody who often specialises in noise music, I’ve never expected to get mass approval or even approval from the majority of people at any given gig. Probably the only exception here is that I’m not usually as directly exposed to audience reaction. And, indeed, there were people who liked it. So maybe it’s a storm in a teacup? It’s impossible to get perspective on things from the stage, as it were.

Tweets

  • About to find out what a laptop ensemble is at #TEDxBrum @EskimoDalton
  • And the laptop ensemble are (is?) using macbook pros, because they’re the best kind of laptops #ilovemac #tedxbrum @Dr_Bob82 (replies)
  • Oh…it’s BILE! Ha! #TEDxBrum @EskimoDalton
  • What a treat. Watch the bham laptop ensamble being streamed live on #tedxbrum websire now x @JoyOfFengShui
  • Using iphone as a sound control device – motion control + music = electro-weirdness! #tedxbrum @Dr_Bob82
  • It’s like being stuck INSIDE A LAPTOP right now #tedxbrum @Dr_Bob82
  • i got a headache can we get @Flutebox on pls? #tedxbrum @tedxbrum @Flutebox @aerosolali
  • The Birmingham Laptop Ensemble. It could only come out of the University of Birmingham. #tedxbrum #notforme @mrmarksteadman
    • @mrmarksteadman 🙂 @carolinebeavon
    • @carolinebeavon I’m sure it’s all really clever, but just a tad self-indulgent for me @mrmarksteadman
    • @mrmarksteadman I agree. No real musical quality from what I can tell … But then, I went to BCU 😉 @carolinebeavon
    • @carolinebeavon That’s kinda my point! Good, no-nonsense uni 😉 Sad to have missed @flutebox; will defo check out the @civicolive replay @mrmarksteadman
    • @mrmarksteadman yup. They were great. This … Hmmmm, not a fan @carolinebeavon
    • @carolinebeavon Guess you had to be there. Oh no, you are, sorry. And it continues. *sigh* mrmarksteadman
    • @mrmarksteadman let me out!!!!! 🙂 @carolinebeavon
    • @carolinebeavon OH GOD IT’S SO SMUG! I CAN’T TAKE HOW PLEASED THEY ARE WITH THEMSELVES! (Sorry… just… yeah, sorry.) #tedxbrum @mrmarksteadman
  • Not getting the laptop ensemble – will try harder #tedxbrum @mrspicto
  • I was expecting some form of 8bit electro music. This is not that. #tedxbrum @JAWilletts
  • I think the computers have taken over #tedxbrum @dorvago
  • Very impressive technically, although not sure if it’s supposed to be music? #tedxbrum @Dr_Bob82
    • @Dr_Bob82 BiLE = sound art ?! @PostFilm
    • @PostFilm I’d agree that it was ‘sound’ but never been a fan of electro-music 🙂 @Dr_Bob82
    • @Dr_Bob82 sound art: I guess it’s just a matter of taste. You don’t hang someone for not liking coffee, anchovies, or cucumber @PostFilm
    • @PostFilm It’s definitely a matter of taste, although occasionally I have felt socially ostracised for not liking coffee 😉 @Dr_Bob82
    • @Dr_Bob82 harmony, melody, rhythm are culture- and time-specific; but electroacoustic is so broad now that it’s difficult to generalise @PostFilm
  • @BiLEnsemble > visually Kraftwerk/Modified Toy Orchestra minus suits, audibly Aphex Twin via laptops & remote controls. Madness! #TEDxBrum @asmallfurrybear
  • #TEDxBrum Bored already @Keybored_KATz
  • Horrible feeling that this isn’t going down as expected… Please, some melody for the love of god!! #tedxbrum @Dr_Bob82
  • #TEDxBrum Trying to be positive – but really – pass the paracetamol @Pictoontwit
  • Birmingham Laptop Ensemble – using interference to create music! #tedxbrum http://pic.twitter.com/PmCOiBQG @CerasellaChis
  • #TEDxBrum I feel very old right now. @Stephen_Griffin
  • It’s like a game, where I don’t know the rules and can’t tell if it’s glitching or not. #tedxbrum @JAWilletts
  • Think there’s some sort of Kinect-type deal going on here as well with controlling the ‘music’ #tedxbrum @Dr_Bob82
  • No, sorry I tried but not for me ( laptop ensemble) #TEDxBRUM @mrspicto
  • somebody pls where is nathan @Flutebox come back pls! #tedxbrum @aerosolali
  • #TEDxBrum can Flutebox come back on please @Pictoontwit
  • #tedxbrum not sure what to make of this music @simonjenner
  • @BiLEnsemble > a possible contender for @supersonicfest 2012 line up? #TEDxBrum @asmallfurrybear
  • nah not for me… Seems too out of control & random…“@vixfitzgerald: I don’t get it #TEDxBrum Birmingham laptop ensemble 🙁 ??” @Soulsailor
  • Like War of the Worlds meets Aphex Twin meets an over-enthusiastic computer geek #tedxbrum @Dr_Bob82
  • Anyone else not got a clue what’s going on? Even the performers look disinterested! Smile and nod, smile and nod… #TEDxBrum @MykWilliams
  • Its getting an interesting Twitter reaction. Not sure whether it’s quite a bit too revolutionary. #tedxbrum @JAWilletts
  • As if my head didn’t hurt enough from all the ideas #TEDxBrum crammed in, BiLE start their intense sonic assault http://yfrog.com/khb13bqj @orangejon
  • Birmingham Laptop Ensemble at #TEDxBrum http://pic.twitter.com/DdGoqzSJ @stanchers
  • #TEDxBrum that made Kraftwerk look pedestrian Stephen_Griffin
  • Birmingham Laptop Orchestra. Industrial grunge synth from the 70s. A little to atonal for me. #tedxbrum @DaveSussman
  • Please. Melody. Just a little bit. I won’t tell the experimentalist musicians that you did it #tedxbrum @Dr_Bob82
  • Amazing stuff around here. 🙂 #TEDxBrum @CerasellaChis
  • Talk amongst yourselves. #tedxbrum @mrmarksteadman
  • #TEDxBrum I am sure there mothers are very proud – I am now reflecting on the value or otherwise of a University education @Pictoontwit
  • I feel like this needs an explanation #TEDxBrum @chargedatom
  • Hmm sorry but please don’t “play” another “track” /Birmingham laptop Ensemble ;-( #WTF #TEDxBrum @Soulsailor (replies)
  • WE NEED MOAR COWBELL!: http://www.funnyordie.com/videos/80a71ef8cb/more-cowbell #tedxbrum Dr_Bob82
  • …but i do like the guys stickers on his laptop…. #tedxbrum @aerosolali
  • #TEDxBrum the power of social media – and when you die on your feet even faster @Pictoontwit
  • Really not feeling Laptop Ensemble.I’m afraid at #TEDxBrum even they look bored. @carolinebeavon
  • Wouldn’t it be better to just plug an iPod in. #tedxbrum @dorvago
  • The cowbell is a way too understated instrument, let’s get the cowbell trending too! #TEDxBrum #morecowbell @TEDxBrum
  • Is it possible to rehearse this? #seriousquestion #TEDxBrum @chargedatom
    • @chargedatom I think they’re winging it. Most UoB students do 😉 @Dr_Bob82
  • @BiLEnsemble it’s interesting to watch here in the MAC. Physical meets digital, theres so much that could go wrong, it’s working!! #tedxbrum @Ben_R_Murphy
  • If we don’t get more cowbell, we may as well all go home #cowbell #tedxbrum @Dr_Bob82
  • Who spiked my drink with acid? Is this real? #TEDxBrum @craiggumbley
  • I for one, was happy to have #8bit of silence ¦-) #bless RT @mrmarksteadman Talk amongst yourselves. #tedxbrum @Jacattell
  • #TEDxBrum the emperor’s new laptop? @Stephen_Griffin
  • I’m now imagining myself in a rainforest. Away from this. Far away. #tedxbrum @Dr_Bob82
  • #TEDxBrum PLEASE STOP @Pictoontwit
  • Britain ‘s not got talent sorry #TEDxBrum @vixfitzgerald
  • One of them must be checking the twitter feed #tedxbrum #multitasking @dorvago
  • Oh dear twitter generated laughter in danger of breaking out now. At least it is a more positive effect than i expected #tedxbrum @mrspicto
  • Ah, so they played instruments at the start, recorded them, now they’ve digitised and resampled them and are playing them back #tedxbrum @Dr_Bob82
  • I’m not at #TEDxBrum, but finding the tweets about the “Laptop Ensemble” hilarious. It sounds dreadful (but I bet you all clap at the end). @editorialgirl
  • Ordered chaos; #LOVEIT! MT @Soulsailor nah not for me… Seems too out of control & random… /cc @vixfitzgerald #TEDxBrum @Jacattell
  • #morecowbell #lesscowbell would it make a difference?? #TEDxBrum @chargedatom
  • Massive TUNE! #tedxbrum @n_chalmers
    • @n_chalmers will buy u the CD for ur bday! #tedxbrum @J_K_Schofield
    • @n_chalmers going to download this one after for sure @kathpreston1
  • Twitter is my outlet. Can’t keep straight face. #TEDxBrum @karldoody
  • No one said innovation was going to be easy, right? #TEDxBrum @TEDxBrum
  • #TEDxBrum Warming to BiLE – snugly weird. @Stephen_Griffin
    • @Stephen_Griffin Was that smugly weird? #tedxbrum @Dr_Bob82
  • So now the track is on a loop and they’re playing along ‘in real time’ with it. Except it sounds… well… it’s finished now #tedxbrum @Dr_Bob82
  • Balls. #tedxbrum @mrmarksteadman
  • #TEDxBrum …. Laptop Ensemble … Seriously … Is that it 😉 @shuhabtrq
  • I want to see more people preoccupied with the stuff BiLE is doing. #TEDxBrum @CerasellaChis
  • Well I liked it… #TEDxBrum @stanchers
  • Brilliant performance from Laptop Ensemble BiLE – enjoyed watching and listening to them on the live stream #TEDxBrum @PostFilm
  • thinks BiLE upset some #tedxbrum delegates who did not want to open up to sound art and opportunity for digital experimentation @PostFilm
  • Skimmed the #TEDxBrum stream – if that sad reaction to @BiLEnsemble is accurate reflection of audience vibe I’m glad I’m not there. @peteashton
    • @peteashton Actually the reception to it IN THE ROOM in the real world was warm. The dissenters were vocal on Twitter. Go figure. @helgahenry
    • @peteashton We don’t know how much info (if any) was given to the audience about what they were listening to. Tweets sounded… surprised. @editorialgirl
    • @editorialgirl Indeed. I just don’t think I’d enjoy being in an audience which is surprised in that way by their work. Which is fine. @peteashton
    • @peteashton if it’s any consolation at all, I was there, at TEDxBrum & I enjoyed BiLE. New to me, a surprise, yes, but in a good way! @KendaLeeG
  • @hellocatfood I think you v can now legitimately claim to be a misunderstood artist now! The #TedxBrum audience just weren’t ready for you. @AndyPryke
  • @gregmcdougall there was a random laptop music segment that didn’t work for me then more awesomeness #TEDxBrum @Soulsailor
  • for me ‘sound art’ is part of the creative “T” in TED. More radical digitral sonic experimentation please from BiLE #tedxbrum @PostFilm
  • Oddest moment today: watching @BiLEnsemble use modern technology to give the audience a scarily accurate experience of tinnitus. #tedxbrum @catharker
  • #tedxBrum @BiLEnsemble have potential. I heard some cool sounding stuff and was a little jazzy. Maybe mix with instruments/samples/beats? @RenewableSave
  • i see bile at #tedxbrum has caused some controversy. i don’t think any performer has an inherent right to have their performance liked. @simonjgray
    • (& i type this as somebody who has made music which is well far from being universally liked. #tedxbrum ) @simonjgray
  • Really enjoyed playing at #lovebytes and #tedxbrum yesterday… as well as the post-TED discussion 😉 @BiLEnsemble
    • @BiLEnsemble and we enjoyed you! @TEDxBrum, out of interest, was the #lovebytes performance different? @Ben_R_Murphy
    • @BiLEnsemble well done BiLE performing at #tedxbrum !!!! @InterFace_2012
  • @celesteh obvious there were probs at #TEDxBrum, but I enjoyed the pieces – although was brought up on Harvey’s “Mortuos Plango, Vivos Voco” @davidburden
  • BiLE Blog #tedxbrum http://celesteh.blogspot.co.uk/2012/03/gig-report-adoration-may-not-be.html @PostFilm
  • BiLE’s last piece at #TEDxBrum http://dl.dropbox.com/u/8693004/TedxBrum%20BiLE.mp3 Quietat points so some mobile signal interference. @Acuity_Design

Press Release

Download PDF

Birmingham’s first Network Music Festival 27-29th January.

For immediate release: 24th January 2012

Birmingham’s first Network Music Festival presents hi-tech music performances from local and international artists.

On 27-29th January 2012 the first Network Music Festival will showcase some of the most innovative UK and international artists using networking technology. Presenting a broad spectrum of work from laptop bands, to live coding, to online collaborative improvisation, to modified radio networks, audio-visual opera and iPhone battles, Network Music Festival will be a weekend of exciting performances, installations, talks and workshops showcasing over 70 artists!

Network Music Festival is working alongside local organisations Friction Arts, SOUNDkitchen, BEAST, Ort Cafe, The Old Print Works and PST/Kismet in order to bring this new and innovative festival to Birmingham.

With 20 performances, 5 installations, 5 talks and a 2-day workshop, Network Music Festival will be a vibrant and diverse festival presenting musical work where networking is central to the aesthetic, creation or performance practice. Acts include: live-coding laptop quartet Benoit and the Mandelbrots (Germany); algorithmic music duo Wrongheaded (UK); transatlantic network band Glitch Lich (UK/USA); and home-grown laptop bands BiLE (Birmingham Laptop Ensemble) and BEER (Birmingham Ensemble for Electroacoustic Research); as well as many more local, UK, European and international acts programmed from our OPEN CALL for performances, installations and talks.

If that’s not enough, we’ll be kicking off the festival early on Thursday 26th January with a pre-festival party programmed in collaboration with local sound-art collective SOUNDkitchen, showcasing some of Birmingham’s best electronic acts, Freecode, Juneau Brothers and Lash Frenzy, as well as one of SOUNDkitchen’s own sound installations.

There’s also an opportunity for you to get involved as we’re running a 2 day workshop on ‘Collaborative Live Coding Performance’ led by members of the first live coding band [PB_UP] (Powerbooks Unplugged).

“Birmingham has a reputation for being the birth place of new genres of music,” said festival organiser, Shelly Knotts. “We’re excited to be a part of this and to be bringing the relatively new genre of computer network based music to Brum. Some of these concerts are going to be epic!”

Tickets are available from www.brownpapertickets.com. Day and weekend passes available £5-£25. Workshop £20.

For more information visit our website: networkmusicfestival.org and follow us on twitter: @NetMusicFest. To tweet about the festival use the hashtag #NMF2012. We also have a facebook page: www.facebook.com/networkmusicfestival

Network Music Festival // 27-29th January 2012 // The Edge, 79-81 Cheapside, Birmingham, B12 0QH

Web:networkmusicfestival.org

Twitter: @NetMusicFest Hashtag: #NMF2012

Facebook: www.facebook.com/networkmusicfestival

Email: networkmusicfestival@gmail.com

On Friday there will be a sneak preview of an excerpt from Act 2 of The Death of Stockhausen, the world’s first ‘laptopera.’

Dissertation Draft: BiLE Bibliography

Bibliography:

“About.” Kinect for Windows: SDK beta. 2011. Microsoft Research. Web. 8 August 2011. <http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/about.aspx>

All Watched Over by Machines of Loving Grace. By Curtis, Adam. BBC. BBC Two. 23 May 2011.

Angwin, Julia and Valentino-Devries, Jennifer. “Apple, Google Collect User Data.” The Wall Street Journal. 22 April 2011. Web. 8 August 2011. <http://online.wsj.com/article/SB10001424052748703983704576277101723453610.html>

Brautigan, Richard. “All Watched Over by Machines of Loving Grace.” Red House Books. Web. 10 August 2010. <http://www.redhousebooks.com/galleries/freePoems/allWatchedOver.htm>

Brown, Chris. “Indigenous to the Net ~ Early Network Music Band in the San Francisco Bay Area.” Crossfade. 8 September 2002. Web. 8 August 2011. <http://crossfade.walkerart.org/brownbischoff/index.html>

Cardew, Cornelius. Treatise. Buffalo: The Gallery Upstairs Press, 1967.

Dugan, Patrick. “Apple closes the iTunes store for iPhone users who don’t want to share their location.” Google Buzz. 23 June 2010. Web. 24 June 2010. <http://www.google.com/buzz/106577349578207351822/FunaUuPBmzD/lameApple-closes-the-iTunes-store-for-iPhone-users>

“FAQ.” I, Norton: an Opera in Real-Time by Gino Robair. Web. 10 August 2011. <http://www.ginorobair.com/inorton/faq.html>

Frommer, Dan. “Here’s the Amazing ‘Word Lens’ iPhone App Everyone is Talking About This Morning (AAPL).” The San Francisco Chronicle. 17 December 2010. Web. 10 August 2011. <http://www.sfgate.com/cgi-bin/article.cgi?f=/g/a/2010/12/17/businessinsider-word-lens-2010-12.DTL>

Hewitt, Scott, et al. “HELO: The Laptop Ensemble as an Incubator for Individual Laptop Performance Practices.” Huddersfield Experimental Laptop Orchestra. June 2010. Web. 9 August 2011. <http://helo.ablelemon.co.uk/lib/exe/fetch.php/materials/helo-laptop-ensemble-incubator.pdf>

Judson, Thom. “OpenNI to Max/MSP via OSC.” Thom Judson: music and media. 12 January 2011. Web. 8 August 2011. <http://tohmjudson.com/?p=30>

Kyriakides, Yannis. “Scam Spam.” [“LabO III Scam Spam.m4v.”] Perf. Tako Hyakutome. YouTube. 4 April 2011. Web. 10 August 2011. <http://www.youtube.com/watch?v=tMRQOeA_Rk0>

“NITE Middleware.” PrimeSense. 2010. Web. 8 August 2011. <http://www.primesense.com/?p=515>

No More Twist. “Nice to See You.” Perf. Charles Céleste Hutchins and Polly Moller. Music by Charles Céleste Hutchins. 18 July 2008. Web. 11 August 2011. <http://www.berkeleynoise.com/celesteh/podcast/?p=100>

Olivarez-Giles, Nathan. “Microsoft releases Kinect for Windows SDK.” Los Angeles Times. 16 June 2011. Web. 8 August 2011. <http://latimesblogs.latimes.com/technology/2011/06/microsoft-releases-kinect-for-windows-sdk.html>

Oliveros, Pauline. “Exchanges.” Deep Listening Pieces. Kingston: Deep Listening Publications, 1990.

Oliveros, Pauline. “Give Sound/Receive Sound.” Deep Listening Pieces. Kingston: Deep Listening Publications, 1990.

Posner, Eric and Vermeule, Adrian. “Obama Should Raise the Debt Ceiling on His Own.” New York Times. 22 July 2011. Web. 10 August 2011. <https://www.nytimes.com/2011/07/22/opinion/22posner.html?_r=1>

Schlegel, Andreas. “Darwiinosc – darwiin remote with OSC extension for OS X.” Google Code. Web. 9 August 2011. <https://code.google.com/p/darwiinosc/>

Stockhausen, Karlheinz. “Right Durations.”

“TouchOSC.” hexler.net. Web. 8 August 2011. <http://hexler.net/software/touchosc>

Tsotsis, Alexia. “Word Lens Translates Words Inside of Images. Yes Really.” TechCrunch. 16 December 2010. Web. 10 August 2011. <http://techcrunch.com/2010/12/16/world-lens-translates-words-inside-of-images-yes-really/>

Viner, Katharine. “Adam Curtis: Have computers taken away our power?” The Guardian. 6 May 2011. Web. 10 August 2011. <http://www.guardian.co.uk/tv-and-radio/2011/may/06/adam-curtis-computers-documentary>

Dissertation Draft: BiLE – The Death of Stockhausen

My next piece for BiLE is a large-scale piece called The Death of Stockhausen, which will be approximately an hour long. I’m calling the piece a “laptopera,” although there are not currently any singers. Although this may stretch the opera genre a bit, it’s not unprecedented, as Gino Robair’s opera in real time, I, Norton, lists singers as an optional part: “A performance can be done without actors, singers, or even musicians.” (“FAQ”)


The inspiration for my opera largely comes from the Adam Curtis documentary All Watched Over by Machines of Loving Grace, which discusses how individuals stopped feeling like we are in control of society or the future. A review of the series in the Guardian describes the premise as,

[W]ithout realising it we, and our leaders, have given up the old progressive dreams of changing the world and instead become like managers – seeing ourselves as components in a system, and believing our duty is to help that system balance itself. Indeed, Curtis says, “The underlying aim of the series is to make people aware that this has happened – and to try to recapture the optimistic potential of politics to change the world.” (Viner)

Curtis lays much of the blame for the current state of affairs at the feet of computers, or at least the mythology of stable systems which was inspired by computer science. (All Watched Over by Machines of Loving Grace) I thought it would be interesting to do a computer-based piece that addressed his documentary. While I don’t believe that computers or anything else are a neutral platform, I think a large part of the problem comes from the way in which we are using computers and allowing ourselves to be used by technology companies. Any solution will certainly have to involve computers, so it seems useful to think about how to deploy them positively rather than under a politics of invisible corporate control.

The Curtis documentary is also appealing because he addressed some issues that had been coming up in conversations I have been having with friends. When we think of the future, we think only of better gadgets, not a better world. For example, when describing a new iPhone app, Word Lens, TechCrunch breathlessly stated, “This is what the future, literally, looks like.” (Tsotsis) They were not alone in this pronouncement, which was widely echoed through major media outlets, including the San Francisco Chronicle, who imagined a consumer reaction of, "holy cow, this is the future." (Frommer)


Our envisioned future is thus one of hypercapitalism: more and more things to buy and, at the same time, less and less money with which to buy them. Consumers economise on food, but still buy expensive iPhone contracts, presumably because they want to own a piece of the future. Meanwhile, they have less and less control of even that, as Apple’s curatorial role prevents most consumers from being able to install apps not approved by the corporation. Smart phones disempower their users further by collecting their private information. (Angwin and Valentino-Devries) The future is passive consumers under greater control from the state and from corporations, such as Google, Apple and Facebook, who win us over with appealing gadgets. An online contact described this as a "totalitarian pleasure regime." (Dugan) Thus we envision Huxley’s Brave New World for those who can afford it and Orwell’s 1984 for those who can’t.


The left seems to have no widely articulated alternative idea of what a better world would even look like. The Guardian quotes Curtis on 2011 protests: “’Even the “march against the cuts”,’ he says, referring to the TUC march in London in March, ‘it was a noble thing, but it was still a managerial approach. We mustn’t cut this, we can’t cut that. Not, “There is another way.”’” (Viner) Curtis does not hand us a vision for what this other way might be, but calls on us to imagine one.


This opera will restate the problem outlined by Curtis and go on to link the end of the future with current apocalyptic concerns. Originally, I wanted to focus mostly on the American preoccupation with the Apocalypse and Rapture. If all the future will be just like now, but with better gadgets, then we are only waiting for the end of the world, which might as well come sooner rather than later. However, various recent secular events seem to also bear inclusion. The New York Times described a possible outcome of the US debt crisis as a “Götterdämmerung,” describing a far right wing hope for a “purifying” fire. (Posner and Vermeule) As the stock market tumbled, looters set fire to high streets in the UK. Zoe Williams, writing in the Guardian, noted that the consumer-oriented nature of the riots is something “we’ve never seen before.” (Williams) Rather than battle with the police, looters focused on gathering consumer goods. Williams quotes Alex Hiller: “Consumer society relies on your ability to participate in it.” (Williams) Even their ability to be passive consumers was thwarted. They had minimal access to what we’ve deemed to be the future. However, setting large, destructive fires seems to imply that there is more than just this going on. All of these things, from religious beliefs to economic disaster to civil unrest, share a sense of hopelessness and a feeling of things ending.


However, rather than end on a negative note of yearning for oblivion and the end of the avant-garde, I do want listeners to consider a better world. All of us have agency that can be expressed in ways other than acquiring consumer goods. I do not present a view of what a better world might look like, but I do hope to remind listeners that one is possible. There is another way.


I’ve broken the opera up into four acts with connecting transition sections. The durations are based on the Fibonacci series. The structure will be as follows:

8 min: Act 1 – The Promise: Cooperative Cybernetics
2 min: Transition 1
21 min: Act 2 – The Reality: The Rise of the Machines / Hypercapitalism
3 min: Transition 2
13 min: Act 3 – The Apocalypse
1 min: Transition 3
5 min: Act 4 – A Better World is Possible: Ascension to Sirius

The durations will probably vary slightly from performance to performance and may evolve with our practice.


Act 1 explores the idealistic ideas of self-organising networks. Every player in BiLE, as is normal, will create their own sound generation code, which will take no more than five shared parameters plus amplitude to control their sounds. These parameters may be: granular, sparse, resonant, pitched. Each player would have a slider going from zero to one, where zero means not at all and one means entirely. Players will not control their sliders directly, but instead vote for a value to increase or decrease. Their sound will thus change in response to their own votes and the votes of other players. They can control their own amplitude at will. There is also another slider, individual to every player, which controls how anti-social they are. A value of zero will follow the group decisions entirely and, as the value increases, they will deviate more and more from the group. A value of one should be actively disruptive. All players should start with anti-social values of zero and increase that number in a non-linear fashion until, at the end, the group is, in general, very anti-social. The idea of group following in this piece is also present in my earlier piece Partially Percussive, but the users have much less agency in carrying it out in this act.
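To make the voting mechanics concrete, here is a minimal SuperCollider sketch. The message path, the step size and the exact blending are placeholders of my own, not the implementation BiLE will actually use:

~params = (granular: 0.5, sparse: 0.5, resonant: 0.5, pitched: 0.5);
~antisocial = 0.0; // 0 = follow the group entirely, 1 = actively disruptive

// tally incoming votes: +1 to increase a parameter, -1 to decrease it
OSCdef(\vote, { |msg|
    var param = msg[1].asSymbol, direction = msg[2].asFloat;
    ~params[param] = (~params[param] + (direction * 0.05)).clip(0, 1);
}, '/bile/act1/vote');

// the value a player actually synthesises with blends the group value
// with a personal deviation, weighted by the anti-social slider
~effective = { |param| blend(~params[param], 1.0.rand, ~antisocial) };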


The opera will be accompanied by video projections from Antonio Roberts. I would like the start of this section to visually reference Richard Brautigan’s poem “All Watched Over by Machines of Loving Grace,” from which the Curtis documentary takes its name. The second stanza is:

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
(Brautigan)


From there, I would like there to be archival images of advertising and assembly lines. As anti-social disorder increases, I’d like to see more archival images of rioting and property destruction.

This act will not begin rehearsals until October 2011.


Act 2 is included in my portfolio. It is the most operatic of all the acts in that it includes live vocals. Players sample themselves first reading common subject lines of spam emails, then common lines from within spam emails, and finally start reading an example of “spoetry” – machine-generated text that is sometimes used in an attempt to fool spam filters. The players manipulate these samples to create a live piece of text-sound poetry. In order to get material, I mined the spam folder of my email account. I broke the material into sections and assigned every line a number. (See attached)


Other composers, such as Yannis Kyriakides in his piece “Scam Spam,” have used spam emails as source material. However, Kyriakides does not include a vocal line in his piece. In 2008, composer/performer Polly Moller approached me to improvise live on KFJC radio in California. She played flute and pitched noisemakers and read a “spoem” called “Nice to See You,” and I did live sampling/looping of her sounds and vocals. (No More Twist) I felt satisfied with the results of this improvisation. Afterwards, I was interested to keep working with spoetry and to look at doing more structured text-sound pieces with a greater live component than I had previously.


This act builds on my experiences with Moller, using a larger ensemble and asking every member of BiLE to develop programmes specifically for the manipulation of text sounds. They also manipulate artificial sounds, which are recordings of my analogue synthesiser. The score is expressed as rules:


Rules for playing:

Intro: Start immediately with the artificial sounds. You may play these throughout the piece.

A: Then start recording and playing from the A section. These can go throughout the piece.

B: Then go on to the B section. These can also go throughout the piece, but should be used more sparingly once this section is passed.

C: The C section takes up the largest section of the piece. You do not need to get to the end of all the lines provided.

Players should announce what line they are recording via the chat.

Once a line is recorded, other players may record that line (or fragments of it) again, but cannot backtrack to a previous line. Players can also choose to advance to the next line, but, again, backtracking is not allowed.

When a player is picking a soundfile to process, she can pick from any section. If she picks from section C, it should normally be a recent line; however, you can break this rule if you have a good reason, i.e. you feel a really strong attachment to a previous line or think it can exist as a counterpoint / commentary to the current line.

Blank lines in the text should be interpreted as pauses in making new recordings.

I have not yet thought about videos for this section.


Act 3 will also have text sound, but as a collage on top of other material. I was originally planning to have this section concentrate solely on people’s religious or spiritual beliefs surrounding the rapture, the apocalypse or 2012. I’m hoping to do telephone interviews of Americans who believe the end is nigh. I’m hoping the promise of being able to witness to new audiences will be enticing enough to persuade them to participate. I have not yet found any rapture believers to record, but as we are not going to start working on this until October or November of 2011, it’s not yet urgent.


In addition to rapture believers, I hope to do in-person recordings of people who have New Age beliefs about the winter solstice of 2012. So far I have interviewed one person and another has agreed to participate also. I plan to have the piece organised so that the rapture believers come first in the piece, followed by the 2012 believers, but as I have not yet acquired much material, this is subject to change.


I plan to ask interviewees about current events, like rioting, economic turmoil and climate change, and to include those things in the text as they correlate to religious and spiritual beliefs. The collage will also be made up of samples referencing these, such as fire sounds, windows breaking, sirens, etc. This does present a risk of being overly dramatic, but appropriate use of heavy processing will turn the sounds into references that are more indirect. Also, drama is not inappropriate to the medium. The collage should become less and less alarming towards the end, as the text switches to the generally more hopeful New Age respondents.

Act 4 will have a graphic score, in the style of Cornelius Cardew’s Treatise. I recently participated in the first all-vocal performance of that piece, at the South London Gallery on 16 September 2011. The group I sang with did not have a lot of experience with free improvisation, and it was interesting to see how exposure to such open material challenged and inspired them. I hope that, similarly, BiLE will go places we would not have gone without the graphic score.


I do want it to move from the spiritual hope of the previous act to something more inclusive. My hope is that the audience will leave not with a sense that the apocalypse is coming in one form or another, but that it is possible to avert disaster.

Dissertation Draft: BiLE, XYZ and gesture tracking

My colleague Shelly Knotts wrote a piece that uses the network features much more fully. Her piece, XYZ, uses gestural data and state-based OSC messages as well, to create a game system where players “fight” for control of sounds. Because BiLE does not follow the composer-programmer model of most LOrks, it ended up that I wrote most of the non-sound producing code for the SuperCollider implementation of Knotts’s piece. This involved tracking who is “fighting” for a particular value, picking a winner from among them, and specifying the OSC messages that would be used for the piece. I also created the GUI for SuperCollider users. All of this relied heavily on my BileTools classes, especially NetAPI and SharedResource.

Her piece specifically stipulated the use of gestural controllers. Other players used Wiimotes and iPhones, but I was assigned the Microsoft Kinect. There is an existing OSC skeleton-tracking application, but I was having problems with it segfaulting. Also, full-body tracking sends many more OSC messages than I needed and requires a calibration pose to be held while it waits to recognise a user. (http://tohmjudson.com/?p=30) This seemed like overkill, so I decided it would be best to write an application to track one hand only.

Microsoft had not released official drivers yet when I started this and, as far as I know, their only drivers so far are Windows-only. (http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/about.aspx) I had to use several third-party, open source driver and middleware layers. (http://tohmjudson.com/?p=30) I did most of my coding at the level of the PrimeSense NITE middleware. (http://www.primesense.com/?p=515) There is a NITE sample application called PointViewer that tracks one hand. (http://tohmjudson.com/?p=30) I modified the programme’s source code so that, in addition to tracking the user’s hand, it sends an OSC message with the hand’s x, y and z coordinates. This allows a user to generate gesture data from hand position with a Kinect.
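On the receiving end, picking the hand data up in SuperCollider is only a few lines. This is a hedged sketch; the OSC path '/kinect/hand' is a stand-in for whatever path the modified PointViewer actually sends on:

OSCdef(\kinectHand, { |msg|
    var x = msg[1], y = msg[2], z = msg[3];
    // map the hand position onto synthesis parameters here
    [x, y, z].postln;
}, '/kinect/hand');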

In future versions of my Kinect application, I would like to create a “Touch-lessOSC,” similar to the TouchOSC iPhone app, but with dual hand tracking. One hand would simply send its x, y, z coordinates, but the other would move within pre-defined regions to send its location within a particular square, move a slider, or “press” a button. This will require me to create a way for users to define shapes and actions, as well as to recognise gestures related to button pressing. I expect to release an alpha version of this around January 2012.

For the SuperCollider side of things, I wrote some classes, OSCHID and OscSlot (see attached), that mimic the Human Interface Device (HID) classes, but for HIDs that communicate via OSC through third-party applications such as the one I wrote. They also work with DarwiinOSC (http://code.google.com/p/darwiinosc/) and TouchOSC (http://hexler.net/software/touchosc) on the iPhone. As they have the same structure and methods as the regular HID classes, they should be relatively easy for programmers to adopt, and the Wiimote subclass, WiiOSCClient (see attached), in particular, is drop-in-place compatible with the pre-existing SuperCollider Wiimote class, which, unfortunately, is currently not working.

All of my BiLE-related class libraries have been posted to SourceForge (http://sourceforge.net/projects/biletools/) and will be released as a SuperCollider quark. My Kinect code has been posted to my blog only, (http://www.celesteh.com/blog/2011/05/23/xyz-with-kinec/) but I’ve gotten email indicating that at least a few people are using the programme.

Dissertation Draft: BiLE – Partially Percussive

I wrote a GUI class called BileChat to provide a chat interface, allowing typed communication during concerts, and another called BileClock for a shared stopwatch. We use these tools in every piece that we play.

We played our first gig very shortly after forming and, while we were able to meet the technical challenges, the musical result was not entirely compelling. Our major problems were not looking at each other and not listening to each other; this was exacerbated by the networking tools, especially the chat, but these are still the standard problems new ensembles tend to have.

Several years ago, when I was running an ensemble of amateur percussionists, I used Deep Listening pieces by Pauline Oliveros to help focus the group and encourage greater listening. Most of those exercises are very physical, asking the participants to use body percussion or to sing. This worked well for percussionists, but did not seem well suited to a laptop band. Almost all of the members of BiLE have previous experience playing in ensembles. While every group can benefit from listening exercises, we were not starting from scratch and the exercises we use should be ones that are compatible with networked laptop music. In other words, we needed listening skills within the context in which we were trying to perform.

I wrote a piece called Partially Percussive in order to implement Deep Listening-like ideas in a laptop context. I wrote the score on a studio white board as a list of rules:

Rules:

To start playing, sample the object.
Listen to other players. Are they playing:

  • Percussive vs Sustained
  • Sparse vs Dense
  • Loud vs Soft
  • Pointillistic vs Flowing

Follow the group until you decide to change.
If you hear a change, follow it.
Lay out whenever you want, for how long you want.
Sample the object to come in again.

The score stayed on the white board for two or three weeks. I took a photo of it for my records; however, the score for this piece has never been distributed on paper or by email. I do not know what notes, if any, my colleagues took on the score. When describing the score to them, I said that they should drop out (“lay out”) when they “feel it” and return similarly.

I specified live sampling to add transparency to our performance, so audiences can have an idea of where our sounds are coming from. I picked percussion in particular after an IM conversation with Charles Amirkhanian, in which he encouraged me to write for percussion. We originally had a haphazard collection of various metal objects; however, we forgot to bring any of them to one of our gigs, so I went to Poundland and purchased a collection of very cheap but resonant kitchen objects and wooden spoons to play them with. We also use a fire bell. Because it has a long ringing tail to its sound, which is quite nice, we use it to start and end the piece. Finally, one of the ensemble members owns some cowbells, which we often also use. Each player usually has a single metal object, but is free to borrow objects from the others. In the case where someone is borrowing the cowbell, they typically allow the bell to ring while carrying it.
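For readers unfamiliar with live sampling in SuperCollider, a minimal sketch of the record-then-play idea follows. It assumes a mono input on the first input bus and a two-second buffer; it is not the patch any of us actually plays:

s.waitForBoot {
    b = Buffer.alloc(s, s.sampleRate * 2, 1);
    // record the object being struck into the buffer, then free the synth
    SynthDef(\sampleObject, { |buf|
        RecordBuf.ar(SoundIn.ar(0), buf, loop: 0, doneAction: 2);
    }).add;
    // loop the recording back with variable rate and amplitude
    SynthDef(\playObject, { |buf, rate = 1, amp = 0.5|
        var sig = PlayBuf.ar(1, buf, rate * BufRateScale.kr(buf), loop: 1);
        Out.ar(0, sig * amp ! 2);
    }).add;
};
// usage: Synth(\sampleObject, [\buf, b]); then Synth(\playObject, [\buf, b, \rate, 0.8]);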

While the rules, especially in regards to ‘laying out,’ are influenced by Oliveros, our practice of the piece draws heavily on the performance practice of the Anthony Braxton ensemble, in which I played in 2004-5. In this piece, as in Braxton’s ensemble, players form spontaneous duos or trios and begin trading gestures. This depends on both eye contact and listening, and thus requires us to develop both of those skills.

When we started playing this piece, I was controlling my own patch with a wireless gamepad with two analog sticks and several buttons. This gave me the ability to make physical motions and control my patch while away from my computer, for example, while getting an object from another player. Over time, more BiLE members have incorporated gestural controllers, such as iPhones running TouchOSC. Thus, when trading gestures, players will mimic both sound quality and physical movement. I believe this aids both our performance practice and audience understanding of the piece.

The technology of this piece does not require more than the chat and the shared stopwatch, but it appeals to audiences and we play it frequently.

Dissertation: BiLE Networking White Paper

This document describes the networking infrastructure in use by BiLE.

The goal of the infrastructure design has been flexibility for real-time changes in sharing network data and calling remote methods for users of languages like SuperCollider. While this flexibility is somewhat lost to users of inflexible languages like MAX, they can nevertheless benefit from having a structure for data sharing.


Network Models

If there is a good reason, for example a remote user, we support OSCGroups as a means of sharing data.

If all users are located together on the same subnet, then we use broadcast on port 57120.

OSC Prefix

By convention, all OSC messages start with ‘/bile/’.

Data Restrictions

Strings must all be ASCII. Non-ASCII characters will be ignored.

Establishing Communication

Identity

ID

Upon joining the network, users should announce their identity:

/bile/API/ID nickname ipaddress port

Nicknames must be ASCII only.

Example:

/bile/API/ID Nick 192.168.1.66 57120

Note that because broadcast echoes back, users may see their own ID arrive as an announcement.
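As a sketch, the announcement from SuperCollider might look like the following, assuming the subnet’s broadcast address is 192.168.1.255 and the default sclang port; both are examples rather than fixed parts of the protocol:

NetAddr.broadcastFlag = true;
~bile = NetAddr("192.168.1.255", 57120);
~bile.sendMsg('/bile/API/ID', "Nick", "192.168.1.66", 57120);

// re-announce whenever anyone asks who is on the network
OSCdef(\idQuery, {
    ~bile.sendMsg('/bile/API/ID', "Nick", "192.168.1.66", 57120);
}, '/bile/API/IDQuery');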

IDQuery

Users should also send out their ID in response to an IDQuery:

/bile/API/IDQuery

Users can send this message at any time, in order to compile a list of everyone on the network.

API Query

Users can enquire what methods they can remotely invoke and what data they can request:

/bile/API/Query

In reply to this, users should send /bile/API/Key and /bile/API/Shared (see below).

Key

Keys represent remote methods. The user should report their accessible methods in response to a Query:

/bile/API/Key symbol desc nickname

The symbol is an OSC message that the user is listening for.
The desc is a text-based description of what this message does. It should include a usage example.
The nickname is the name of the user that accepts this message.

Example:

/bile/API/Key /bile/msg "For chatting. Usage: msg, nick, text" Nick

Shared

Shared represents available data streams. Sources may include input devices, control data sent to running audio processes, or analysis. The user should report their shared data in response to a Query:

/bile/API/Shared symbol desc

The symbol is an OSC message that the user sends with. The format of this should be /bile/nickname/symbol.
The desc is a text-based description of the data. If the range is not between 0-1, it should mention this.
The nickname embedded in the symbol is the name of the user that sends this data.

Example:

/bile/API/Shared /bile/Nick/freq "Frequency. Not scaled."

Listening

RegisterListener

Shared data will not be sent out if no one has requested it, and it may be sent either directly to interested users or to the entire group, at the sender’s discretion. In order to ensure receiving the data stream, a user must register as a listener:

/bile/API/registerListener symbol nickname ip port

The symbol is an OSC message that the user will listen for. It should correspond with a previously advertised shared item. If the receiver of this message recognises their own nickname in the symbol (which is formatted /bile/nickname/symbol) but is not sharing that data, they should return an error: /bile/API/Error/noSuchSymbol
The nickname is the name of the user that will accept the symbol as a message.
The ip is the IP address of the user that will accept the symbol as a message.
The port is the port of the user that will accept the symbol as a message.

Example:

/bile/API/registerListener /bile/Nick/freq Shelly 192.168.1.67 57120
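As a sketch, subscribing to Nick’s frequency stream from SuperCollider might look like this, assuming the addresses from the examples above:

~nick = NetAddr("192.168.1.66", 57120);
~nick.sendMsg('/bile/API/registerListener', '/bile/Nick/freq', "Shelly", "192.168.1.67", 57120);

// handle the incoming data stream once Nick starts sending it
OSCdef(\nickFreq, { |msg|
    var freq = msg[1];
    // use freq to control a running synth here
    freq.postln;
}, '/bile/Nick/freq');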

Error

noSuchSymbol

In the case that a user receives a request to register a listener or to remove a listener for data that they are not sharing, they can reply with:

/bile/API/Error/noSuchSymbol OSCsymbol

The symbol is the OSC message that the user tried to start or stop listening to. It is formatted /bile/nickname/symbol. Users should not reply with an error unless they recognise their own nickname as the middle element of the OSC message. This message may be sent directly to the confused user.

Example:

/bile/API/Error/noSuchSymbol /bile/Nick/freq
De-listening

RemoveListener

To announce an intention to ignore subsequent data, a user can ask to be removed:

/bile/API/removeListener symbol nickname ip

The symbol is an OSC message that the user will no longer be listening for. If the receiver of this message recognises their nickname in the symbol (which is formatted /bile/nickname/symbol) but is not sharing that data, they can reply with /bile/API/Error/noSuchSymbol symbol.
The nickname is the name of the user that will no longer accept the symbol as a message.
The ip is the IP address of the user that will no longer accept the symbol as a message.

Example:

/bile/API/removeListener /bile/Nick/freq Shelly 192.168.1.67

RemoveAll

Users who are quitting the network can ask to be removed from everything that they were listening to:

/bile/API/removeAll nickname ip

The nickname is the name of the user that will no longer accept any shared data.
The ip is the IP address of the user that will no longer accept any shared data.

Example:

/bile/API/removeAll Nick 192.168.1.66

Commonly Used Messages

Chatting

Msg

This is used for chatting:

/bile/msg nickname text

The nickname is the name of the user who is sending the message.
The text is the text that the user wishes to send to the group.

Clock

This is for a shared stopwatch and not for serious timing applications.

Clock start or stop:

/bile/clock/clock symbol

The symbol is either start or stop.

Reset

Reset the clock to zero:

/bile/clock/reset

Set

Set the clock time:

/bile/clock/set minutes seconds

Minutes is the number of minutes past zero.
Seconds is the number of seconds past zero.
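A rough SuperCollider sketch of a receiver for these clock messages follows; the variable names and the one-second tick are my own choices, and any display code is left out:

~elapsed = 0;
~running = false;
OSCdef(\clockRun, { |msg| ~running = (msg[1].asSymbol == \start) }, '/bile/clock/clock');
OSCdef(\clockReset, { ~elapsed = 0 }, '/bile/clock/reset');
OSCdef(\clockSet, { |msg| ~elapsed = (msg[1] * 60) + msg[2] }, '/bile/clock/set');
// advance the stopwatch once per second while running
Routine {
    loop { 1.wait; if(~running) { ~elapsed = ~elapsed + 1 } }
}.play(AppClock);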


Proposed Additions

Because users can silently join, leave and re-join the network, it could be a good idea to have users time out after a period of silence, maybe around 30 seconds or so. To stay active, they would need to send I’m-still-here messages.

There should possibly also be a way for a user to announce that they have just arrived so that, for example, if a SuperCollider user recompiles, her connection will think of itself as new and other users will know to delete or recreate connections depending on that user.

Dissertation Draft: BiLE Tech

In January 2011, five of my colleagues in BEAST and I founded BiLE, the Birmingham Laptop Ensemble. All of the founding members are electroacoustic composers, most of whom have at least some experience with an audio programming language, either SuperCollider or MAX. We decided that our sound would be strongest if every player took responsibility for their own sound and did his or her own audio programming. This is similar to the model used by the Huddersfield Experimental Laptop Orchestra (HELO), who describe their approach as a "Do-It-Yourself (DIY) laptop instrument design paradigm." (Hewitt p 1 http://helo.ablelemon.co.uk/lib/exe/fetch.php/materials/helo-laptop-ensemble-incubator.pdf) Hewitt et al write that they "[embrace] a lack of hardware uniformity as a strength" and imply that their software diversity is similarly a strength, granting them greater musical (rather than technical) focus. (ibid) BiLE started with similar goals – to focus on the music and empower the user – and has had similarly positive results.

My inspiration, however, was largely drawn from The Hub, the first laptop band, some members of which were my teachers at Mills College in Oakland, California. I saw them perform in the mid 1990s, while I was still an undergraduate, and had an opportunity then to speak with them about their music. I remember John Bischoff telling me that they did their own sound creation patches, although for complicated network infrastructure, like the Points of Presence Concert in 1987, Chris Brown wrote the networking code. (Cite comments from class?)

One of the first pieces in BiLE's repertoire was a Hub piece, Stucknote by Scot Gresham-Lancaster. This piece not only requires every user to create their own sound, but also has several network interactions, including a shared stopwatch, chat messages and shared gestural data for every sound. In Bischoff and Brown's paper, the score for Stucknote is described as follows:

“Stuck Note” was designed to be easy to implement for everyone, and became a favorite of the late Hub repertoire. The basic idea was that every player can only play one “note”, meaning one continuous sound, at a time. There are only two allowable controls for changing that sound as it plays: a volume control, and an “x-factor”, which is a controller that in some way changes the timbral character or continuity of the instrument. Every player’s two controls are always available to be played remotely by any other player in the group. Players would send streams of MIDI controller messages through the hub to other players’ computer synthesizers, taking over their sounds with two simple control streams. Like in “Wheelies”, this created an ensemble situation in which all players are together shaping the whole sound of the group. An interesting social and sonic situation developed when more than one player would contest over the same controller, resulting in rapid fluctuations between the values of parameters sent by each. The sound of “Stuck Note” was a large complex drone that evolved gradually, even though it was woven from individual strands of sound that might be changing in character very rapidly. (http://crossfade.walkerart.org/brownbischoff/hub_texts/stucknote.html)

Because BiLE was a mostly inexperienced group, even the “easy to implement for everyone” Stucknote presented some serious technical hurdles. We were all able to create the sounds needed for the piece, but the networking required was a challenge. Because we have software diversity, there was no pre-existing SuperCollider Quark or MAX external to solve our networking problems. Instead, we decided to use the more generic music networking protocol Open Sound Control (OSC). I created a template for our OSC messages. In addition to the gestural data for amplitude and x-factor, specified in the score, I thought there was a lot of potential for remote method invocation and wanted a structure that could work with live coding, should that situation ever arise. I wrote a white paper (see attached) which specifies message formatting and messages for users to identify themselves on the network and advertise remotely invokable functions and shared data.

When a user first joins the network, she advertises her existence with her username, her IP address and the port she is using. Then, she asks for other users to identify themselves, so they broadcast the same kind of message. Thus, every user should be aware of every other user. However, there is currently no structure for users to quit the network. There is an assumption, instead, that the network only lasts as long as each piece. SuperCollider users, for example, tend to re-compile between pieces.

Users can also register a function on the network, specifying an OSC message that will invoke it. They advertise these functions to other users. In addition, they can share data with the network. For example, with Stucknote, everyone is sharing amplitude values such that they are controllable by anyone, including two people at the same time. The person who is using the amplitude data to control sound can be thought of as the owner of the data; however, they or anyone else can broadcast a new value for their amplitude. Typically, this kind of shared data is gestural and used to control sound creation directly. There may be cases where different users are in disagreement about the current value, or packets may get lost. This does not tend to cause a problem: with gestural data, not every packet is important and packet loss is not a serious issue.
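
As a small illustration of this kind of shared gestural control, the sketch below pushes a new amplitude value for a hypothetical /bile/Nick/amp item and shows the owner mapping it onto a running synth; ~everyone, ~drone and the item name itself are all placeholders, not part of the actual repertoire.

// hypothetical sketch: anyone on the network can reset Nick's amplitude
(
NetAddr.broadcastFlag = true;
~everyone = NetAddr("255.255.255.255", 57120);   // placeholder broadcast address
~everyone.sendMsg("/bile/Nick/amp", 0.3);

// Nick, as the owner of the data, maps incoming values onto his sound
OSCdef(\myAmp, { |msg| ~drone.set(\amp, msg[1]) }, '/bile/Nick/amp');
)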

When a user puts shared data on the network, she also advertises it. Users can request to be told of all advertised data and functions. Typically, a user would request functions and shared data after asking for ids, upon joining the network. She may ask again at any time. Interested users can register as listeners of shared data. The possibility exists (currently unused) for the owner of the data to send its value only to registered users instead of to the network as a whole.

In order to implement the network protocol, I created a SuperCollider class called NetAPI (see attached code and help file). It handles OSC communications and the infrastructure of advertising and requesting ids, shared functions and shared data. In order to handle notifications for shared data changes, I wrote a class called SharedResource. When writing the code for Stucknote, I had problems with infinite loops in change notifications. The SharedResource class has listeners and actions, but the value setting method also takes an additional argument specifying what is setting it. The setting object will not have its action called. So, for example, if the change came from the GUI, the SharedResource will notify all listeners except for the GUI. When SharedResources "mount" the NetAPI class, they become shared gestural data, as described above.
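
The real SharedResource class is attached; the core idea, that whoever sets the value is excluded from the resulting notifications, can be sketched roughly as below. The class and method names here are illustrative rather than the actual interface, and a class definition like this would live in a .sc file in the class library.

// illustrative sketch of notify-everyone-except-the-setter; not the real SharedResource interface
SketchedResource {
	var <value, listeners;

	*new { ^super.new.init }
	init { listeners = IdentityDictionary.new }

	addListener { |who, action| listeners[who] = action }

	setValue { |newValue, setter|
		value = newValue;
		listeners.keysValuesDo({ |who, action|
			if(who !== setter, { action.value(newValue) });   // skip whoever made the change
		});
	}
}

So if the GUI sets a value and passes itself as the setter, the network listener still fires but the GUI's own action does not, which is what breaks the feedback loop.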

BiLE in Venice

Juju and I flew in a day before the Laptops Meet Musicians Festival, because we wanted to go to the Biennale. Our flight was at 6 am, so we slept about 2 hours before having to leave my flat at an ungodly hour. Once we arrived in Venice, the first thing I noticed was that it was about 15 degrees warmer than London. And I wondered why I thought it would be a good idea to wear steel-capped boots!

We found our hotel, which said it could get us a 35% price reduction on tickets for the Biennale, starting the next day, so we spent the first day wandering the narrow streets and looking into churches. It was my 3rd time in the city, but I have always gone during the art show, so had barely been in any of the churches before. They are astounding.

Covered in marble, with monuments many metres tall, the basilicas have no shortage of relics. I saw St Theresa's foot! (Random aside: my mum had a piece of St Theresa in a tiny envelope, which I accidentally dropped into the carpet. Some bit of her was hoovered up and is now sanctifying a California landfill.)

We walked down to San Marco square. It used to be described by The Rough Guide as “pigeon infested,” but this has improved vastly since I was last there. Street vendors no longer sell pigeon food, thank gods.

At about 10, the lack of sleep and the heat were too much for me, so I went to go lie down in the hotel. I had booked a hostel bed, but they had reassigned us to a tiny hotel room with a double bed. It was theoretically a step up. I thought about asking for twin beds, but then didn’t want to bother, as it was only for one night. I lay down on the bed and turned on the fan and lay awake sweating, wearing nothing but my shorts. For hours.
Juju came home at 2 and we both lay on top of the bed in nothing but shorts. It was not the best night of holiday ever.
Shelly and Antonio arrived the next morning, so we checked out and went to meet them at the bus station. We then went to the island of San Giorgio Maggiore, where we were going to be lodged by the Fondazione Giorgio Cini.
Even though it was early in the day, they gave us our room keys and let us check in. During the long process of photocopying passports and signing documents written in Italian, the festival organisers happened by, told us where to meet them for dinner and gave us a sneak peek of the concert hall.
We took a vaporetto boat back to the rest of the islands and went into some of the national pavilions for the Biennale. This year, they're scattered around the city and largely free. The one that I liked best was Taiwan's, which was focused on sound. They had a large listening room and then a smaller room showing two movies side by side that were different perspectives on the same scene. Two sound artists were recording the harvest and processing of some grain or rice. They started in the fields and then tracked its harvest, its transport by train, the processing in a factory, the distribution, and the processing of the chaff. They worked directly with the workers and got recordings from inside the cabs of vehicles and very, very close to things. It was amazing, especially the sound, but also augmented by the video. I think it was my favourite thing at the Biennale this year.
The Festival took us out to dinner that night and the two subsequent nights, always to the same nice restaurant. The food was fantastic.
Antonio was talking about how he always buys travel insurance because he always accidentally eats something that he's allergic to. Then, moments later, he mistook a fish for chicken. Fortunately, medical intervention was not required, although he is allergic to fish. People teased him for this, but I totally understand not recognising something that you never eat. I don't really know many French food words for meat items because I never ate them, so I never made a strong association with the words.
The next day, we had the early sound check slot, so we did our technical stuff and then had some rehearsal time. We switched to using a BT Home Hub, in the hopes that SuperCollider would beachball less often. This was semi-successful. SuperCollider just has a major issue with wifi, as far as I can tell. Also, when there were two iPhones running TouchOSC on the network, data transmission got really blocky and jerky for SC users. I don't remember if Juju was affected on Max or not, but we had to have one of the phone users switch to an ad hoc network to talk to their phone. That fixed that. So after endless faffing, we had a not overly inspiring rehearsal. Then they took us out for lunch at the one café on the island. It ended with coffee and ice cream, as all good summer lunches should.
We spent the entire afternoon writing our 10 minute presentation on the ensemble.
The evening started with a presentation from David Ogborn about the Cybernetic Orchestra, the LOrk he runs. He spoke about how he uses a code-based interface for some pieces. He described this as Live Coding, but I think that term is much more specific and refers to a particular type of on-the-fly code generation, whereas the players in his group start with a programme already written and make changes to it.
Code-based interfaces are, of course, entirely legitimate ways to write and control pieces. They also have some pedagogical value; however, I think it's easy to overstate that case. For example, I can open a CSS file and make a bunch of changes to it in order to get roughly the look I want out of my website, but I cannot say that I know CSS, and would not know CSS unless I actually studied it by reading a book or several help files and coding something from scratch. Being able to modify code does make the user into a kind of power user and does demystify code, so it's a good thing to do, but one needs to keep it in perspective.
After the longish presentation, BiLE then played for about 20 minutes. We played XYZ first and Partially Percussive second. I think that musically, they work best in the opposite order, but Antonio thought it would be best to do my piece without graphics, and as the projection screen was on the other side of the room from where we were playing, we had to play in that order or nobody would have looked back to see the video for XYZ.
Shelly’s piece is normally for 4 audio players, so it scaled down very well for 3. I accidentally hit the mute button instead of fading out, so the end was a bit abrupt, but it was ok. It came out well enough that another band wants to cover it!
My piece is normally for 6 audio players and probably should have been practiced more for the smaller group, as it came out a bit more roughly. One thing that came out very nicely is that the piece ends with a bell sound, and as that rang out, the church bells all over the city were ringing, so the bells sounded like a part of the piece. That was really nice.
Then we gave our presentation, which probably went on for a bit longer than the allocated 10 minutes. I'm not sure if I said anything useful, especially after Ogborn spoke for so long. Normally, I want to differentiate between LOrks and BiLE, which is a laptop ensemble, but every band at LMMF was a laptop ensemble, so I think this distinction was just confusing.
After the concert, they took us all for dinner again and then a bar. We all slept in a bit later than intended the next morning, except for Juju, who flew to France. Shelly and I poked around the Foundation's buildings and then went up the church's clock tower in time for the noon bells. I set up my Zoom recorder, put in ear plugs and waited for the bells to ring. I could feel the vibrations of the big bells on my body and there were amazing partials after the ring. I haven't listened to the recording yet, but I'm hoping it's good.
After lunch we went to the Biennale at Giardini. We didn’t see much of it, actually. There was an unfortunate tendency for the national pavilions to have art pieces that were self-referential and about themselves. Or worse, about the Biennale. I get that it’s a lot of pressure and whatnot to do something for such a prestigious show, but maybe that pressure could be let out via long, rambling blog posts rather than via the art.
One high point was the USA’s pavilion, which had what seemed like some very smart critiques of consumer capitalism, all with a million corporate sponsorships. They had the symbol of liberty in a tanning bed, for example. Given the number of sponsors and the apparent popularity, I am slightly afraid I’m attributing irony and critique where none exists, but for the mean time, I’m impressed by an upside-down army tank with a treadmill on it.
The big pavilion there had a bunch of stuffed pigeons on it. There were some cool things inside, but I was not blown away by anything. We didn’t get very far in before we needed to go back for a concert.
The second night of LMMF was all music and no talking, which is good. All the bands were very good.
After dinner and the bar, we went to the old Greek-style amphitheatre on the foundation grounds and opened a couple of bottles of wine. I crashed out around 4 am, but most everybody else stayed up until 5. Or at least, most everybody younger than me.
We were hanging out a lot with Benoit and the Mandelbrots, a live coding quartet from Karlsruhe, Germany. They were singing songs from YouTube videos. At one point, we were walking along and everybody was singing the theme song from Super Mario Brothers. They have their finger on the pulse of pop culture, or at least internet memes.
The final day, we checked out and then went to the Arsenale to see a last bit of the Biennale. Like the last 2 times I've gone, I liked the Arsenale more than some other parts of the show. (Although, this year, the stuff in the city centre was really the best.) There were a lot of pieces made out of trash, and dealing with waste, refuse and the general disposability of pop culture seemed to be a major theme this year. There was a large hanging dragon made of discarded truck inner tubes and fine embroidery; it was cool.
One very impressive piece was a giant statue, in the style of ancient Greece or Rome. It was as tall as a double decker bus. But instead of being made of marble, which it resembled, it was made of candle wax, was full of wicks and was actually burning. Already the heads of the figures had come off from the burning. The whole thing was gradually being consumed during the course of the exhibit.
Another piece that caught my attention was a bestiality video from Germany called Tierfick. The animals involved were taxidermied. The video was disturbing but also silly. I actually do like stuff that tries to be shocking.
So, I heard a bunch of good music, ate a bunch of good food, stayed in rooms that were a reasonable temperature, talked to a lot of good people and saw a lot of art. I hope gigs like this become a trend for BiLE!