Dissertation Draft: BiLE, XYZ and gesture tracking

My colleague Shelly Knotts wrote a piece that uses the network features much more fully. Her piece, XYZ, uses gestural data and state-based OSC messages as well, to create a game system in which players “fight” for control of sounds. Because BiLE does not follow the composer-programmer model of most LOrks, I ended up writing most of the non-sound-producing code for the SuperCollider implementation of Knotts’s piece. This involved tracking who is “fighting” for a particular value, picking a winner from among them, and specifying the OSC messages that would be used for the piece. I also created the GUI for SuperCollider users. All of this relied heavily on my BileTools classes, especially NetAPI and SharedResource.
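The fight-tracking bookkeeping can be sketched roughly as follows. This is a hypothetical Python illustration, not the SuperCollider code used in the piece; the class name and the uniform-random winner rule are my own assumptions, since Knotts’s actual rules are not reproduced here.

```python
import random

class FightTracker:
    """Track which players are contesting each shared value and pick winners.

    Illustrative sketch only: the real piece's winner-selection rule is not
    documented here, so this version draws uniformly at random.
    """
    def __init__(self):
        self.fights = {}  # parameter name -> set of contending nicknames

    def join(self, param, nickname):
        """Register a player as fighting for a parameter."""
        self.fights.setdefault(param, set()).add(nickname)

    def resolve(self, param):
        """Pick a winner from the current contenders and clear the fight."""
        contenders = self.fights.pop(param, set())
        return random.choice(sorted(contenders)) if contenders else None
```

A resolved fight empties its contender list, so each round of “fighting” starts fresh.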

Her piece specifically stipulated the use of gestural controllers. Other players used Wiimotes and iPhones, but I was assigned the Microsoft Kinect. There exists an OSC skeleton-tracking application, but I was having problems with it segfaulting. Also, full-body tracking sends many more OSC messages than I needed and requires that a calibration pose be held while it waits to recognise a user. (http://tohmjudson.com/?p=30) This seemed like overkill, so I decided it would be best to write an application to track one hand only.

Microsoft had not released official drivers yet when I started this and, as far as I know, their only drivers so far are Windows-only. (http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/about.aspx) I had to use several third-party, open source driver and middleware layers. (http://tohmjudson.com/?p=30) I did most of my coding around the level of the PrimeSense NITE middleware. (http://www.primesense.com/?p=515) There is a NITE sample application called PointViewer that tracks one hand. (http://tohmjudson.com/?p=30) I modified the programme’s source code so that, in addition to tracking the user’s hand, it sends an OSC message with the hand’s x, y and z coordinates. This allows a user to generate gesture data from hand position with a Kinect.
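The message the modified PointViewer emits can be illustrated with a small stdlib-only encoder. This is a hedged sketch: the address "/hand/xyz" and the exact argument layout are assumptions, since the post does not document the message format, but the byte-level padding follows the OSC 1.0 specification.

```python
import struct

def osc_padded(s: bytes) -> bytes:
    """Null-terminate an OSC string and pad it to a 4-byte boundary."""
    s += b"\x00"
    return s + b"\x00" * (-len(s) % 4)

def hand_message(x: float, y: float, z: float,
                 address: str = "/hand/xyz") -> bytes:
    """Encode an OSC message carrying hand coordinates.

    The address "/hand/xyz" is a guess; the actual address the modified
    PointViewer sends is not given in the text.
    """
    msg = osc_padded(address.encode("ascii"))
    msg += osc_padded(b",fff")            # type tags: three 32-bit floats
    msg += struct.pack(">fff", x, y, z)   # big-endian IEEE floats
    return msg
```

The resulting byte string could be sent over UDP to a listening SuperCollider or MAX patch.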

In future versions of my Kinect application, I would like to create a “Touch-lessOSC,” similar to the TouchOSC iPhone app, but with dual hand tracking. One hand would simply send its x, y and z coordinates, but the other would move within pre-defined regions to send its location within a particular square, move a slider, or “press” a button. This will require me to create a way for users to define shapes and actions, as well as to recognise gestures related to button pressing. I expect to release an alpha version of this around January 2012.

For the SuperCollider side of things, I wrote some classes, OSCHID and OscSlot (see attached), that mimic the Human Interface Device (HID) classes, but for HIDs that communicate over OSC via third-party applications such as the one I wrote. They also work with DarwiinOSC (http://code.google.com/p/darwiinosc/) and TouchOSC (http://hexler.net/software/touchosc) on the iPhone. As they have the same structure and methods as the regular HID classes, they should be relatively easy for programmers to adopt, and the Wiimote subclass, WiiOSCClient (see attached), in particular, is drop-in-place compatible with the pre-existing SuperCollider Wiimote class, which, unfortunately, is currently not working.
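A rough Python analogue of the idea behind OSCHID and OscSlot, sketched under the assumption that each slot holds one axis or button value and fires an action when a matching OSC message arrives; the real SuperCollider classes are attached and differ in detail.

```python
class OscSlot:
    """One axis or button of an OSC-speaking controller (illustrative only)."""
    def __init__(self):
        self.value = 0.0
        self.action = None  # callable invoked on each new value

    def set(self, value):
        self.value = value
        if self.action:
            self.action(value)

class OSCHID:
    """Route incoming OSC messages to named slots, HID-style."""
    def __init__(self):
        self.slots = {}  # OSC address -> OscSlot

    def add_slot(self, address):
        slot = OscSlot()
        self.slots[address] = slot
        return slot

    def receive(self, address, value):
        """Dispatch an incoming OSC message; unknown addresses are ignored."""
        if address in self.slots:
            self.slots[address].set(value)
```

This mirrors the HID pattern of attaching an action per control, which is why such classes can be drop-in replacements for HID code.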

All of my BiLE-related class libraries have been posted to SourceForge (http://sourceforge.net/projects/biletools/) and will be released as a SuperCollider quark. My Kinect code has been posted to my blog only (http://www.celesteh.com/blog/2011/05/23/xyz-with-kinec/), but I’ve gotten email indicating that at least a few people are using the programme.

Dissertation Draft: BiLE – Partially Percussive

I wrote a GUI class called BileChat to provide a chat interface, allowing typed communication during concerts, and BileClock for a shared stopwatch. We use these tools in every piece that we play.

We played our first gig very shortly after forming and, while we were able to meet the technical challenges, the musical result was not entirely compelling. Our major problems were not looking at each other and not listening to each other, which was exacerbated by the networking tools, especially the chat, but these were still the standard problems that new ensembles tend to have.

Several years ago, when I was running an ensemble of amateur percussionists, I used Deep Listening pieces by Pauline Oliveros to help focus the group and encourage greater listening. Most of those exercises are very physical, asking the participants to use body percussion or to sing. This worked well for percussionists, but did not seem well suited to a laptop band. Almost all of the members of BiLE have previous experience playing in ensembles. While every group can benefit from listening exercises, we were not starting from scratch and the exercises we use should be ones that are compatible with networked laptop music. In other words, we needed listening skills within the context in which we were trying to perform.

I wrote a piece called Partially Percussive in order to implement Deep Listening-like ideas in a laptop context. I wrote the score on a studio white board as a list of rules:

Rules:

To start playing, sample the object.
Listen to other players. Are they playing:

  • Percussive vs Sustained
  • Sparse vs Dense
  • Loud vs Soft
  • Pointillistic vs Flowing

Follow the group until you decide to change.
If you hear a change, follow it.
Lay out whenever you want, for how long you want.
Sample the object to come in again.

The score stayed on the white board for two or three weeks. I took a photo of it for my records; however, the score for this piece has never been distributed on paper or by email. I do not know what notes, if any, my colleagues took on the score. When describing the score to them, I said that they should drop out (“lay out”) when they “feel it” and return similarly.

I specified live sampling to add transparency to our performance, so audiences can have an idea of where our sounds are coming from. I picked percussion in particular after an IM conversation with Charles Amirkhanian, in which he encouraged me to write for percussion. We originally had a haphazard collection of various metal objects; however, we forgot to bring any of them to one of our gigs, so I went to Poundland and purchased a collection of very cheap but resonant kitchen objects and wooden spoons to play them with. We also use a fire bell. Because it has a long ringing tail on its sound, which is quite nice, we use it to start and end the piece. Finally, one of the ensemble members owns some cowbells, which we often also use. Each player usually has a single metal object, but is free to borrow objects from the others. In the case where someone is borrowing the cowbell, they typically allow the bell to ring while carrying it.

While the rules, especially in regards to ‘laying out,’ are influenced by Oliveros, our practice of the piece draws heavily on the performance practice of the Anthony Braxton ensemble, in which I played in 2004–5. In this piece, as well as in Braxton’s ensemble, players form spontaneous duos or trios and begin trading gestures. This depends on both eye contact and listening, and thus requires us to develop both those skills.

When we started playing this piece, I was controlling my own patch with a wireless gamepad with two analog sticks and several buttons. This gave me the ability to make physical motions and control my patch while away from my computer, for example, while getting an object from another player. Over time, more BiLE members have incorporated gestural controllers, such as iPhones running TouchOSC. Thus, when trading gestures, players will mimic sound quality and physical movement. I believe this aids both our performance practice and audience understanding of the piece.

The technology of this piece does not require more than the chat and the shared stopwatch, but it appeals to audiences and we play it frequently.

Dissertation: BiLE Networking White Paper

This document describes the networking infrastructure in use by BiLE.

The goal of the infrastructure design has been flexibility for real-time changes in sharing network data and calling remote methods for users of languages like SuperCollider. While this flexibility is somewhat lost to users of inflexible languages like MAX, they can nevertheless benefit from having a structure for data sharing.


Network Models

If there is a good reason, for example a remote user, we support OSCGroups as a means of sharing data.

If all users are located together on the same subnet, then we use broadcast on port 57120.

OSC Prefix

By convention, all OSC messages start with ‘/bile/’.

Data Restrictions

Strings must all be ASCII. Non-ASCII characters will be ignored.

Establishing Communication

Identity

ID

Upon joining the network, users should announce their identity:

/bile/API/ID nickname ipaddress port

Nicknames must be ASCII-only.

Example:

/bile/API/ID Nick 192.168.1.66 57120

Note that because broadcast echoes back, users may see their own ID arrive as an announcement.
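A minimal sketch of how a peer might build and handle this announcement, written in Python for illustration (the actual implementations are in SuperCollider and MAX; these function names are my own). The echo-back case noted above is handled by ignoring our own nickname.

```python
def make_id_message(nickname, ip, port):
    """Build the /bile/API/ID announcement a peer sends on joining."""
    return ["/bile/API/ID", nickname, ip, port]

def handle_id(message, peers, own_nickname):
    """Record a peer's address, ignoring the echo of our own broadcast."""
    _, nickname, ip, port = message
    if nickname != own_nickname:
        peers[nickname] = (ip, port)
    return peers
```

A peer would call `handle_id` for every /bile/API/ID it receives, building up the table of everyone on the network.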

IDQuery

Users should also send out their ID in response to an IDQuery:

/bile/API/IDQuery

Users can send this message at any time, in order to compile a list of everyone on the network.

API Query

Users can enquire what methods they can remotely invoke and what data they can request:

/bile/API/Query

In reply to this, users should send /bile/API/Key and /bile/API/Shared (see below).

Key

Keys represent remote methods. The user should report their accessible methods in response to a Query:

/bile/API/Key symbol desc nickname

The symbol is an OSC message that the user is listening for.
The desc is a text-based description of what this message does. It should include a usage example.
The nickname is the name of the user that accepts this message.

Example:

/bile/API/Key /bile/msg "For chatting. Usage: msg, nick, text" Nick
Shared

Shared represents available data streams. Sources may include input devices, control data sent to running audio processes, or analysis. The user should report their shared data in response to a Query:

/bile/API/Shared symbol desc

The symbol is an OSC message that the user sends with. The format of this should be /bile/nickname/symbol.
The desc is a text-based description of the data. If the range is not between 0 and 1, it should mention this.
The nickname embedded in the symbol is the name of the user that sends this data.

Example:

/bile/API/Shared /bile/Nick/freq "Frequency. Not scaled."

Listening

RegisterListener

Shared data will not be sent out if no one has requested it, and it may be sent either directly to interested users or to the entire group, at the sender’s discretion. In order to ensure receipt of the data stream, a user must register as a listener:

/bile/API/registerListener symbol nickname ip port

The symbol is an OSC message that the user will be listening for. It should correspond with a previously advertised shared item. If the receiver of this message recognises their own nickname in the symbol (which is formatted /bile/nickname/symbol) but is not sharing that data, they should return an error: /bile/API/Error/noSuchSymbol
The nickname is the name of the user that will accept the symbol as a message.
The ip is the IP address of the user that will accept the symbol as a message.
The port is the port of the user that will accept the symbol as a message.

Example:

/bile/API/registerListener /bile/Nick/freq Shelly 192.168.1.67 57120

Error

noSuchSymbol

In the case that a user receives a request to register a listener, or to remove a listener, for data that they are not sharing, they can reply with:

/bile/API/Error/noSuchSymbol OSCsymbol

The symbol is an OSC message that the user tried to start or stop listening to. It is formatted /bile/nickname/symbol. Users should not reply with an error unless they recognise their own nickname as the middle element of the OSC message. This message may be sent directly to the confused user.

Example:

/bile/API/Error/noSuchSymbol /bile/Nick/freq
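The registerListener and noSuchSymbol rules can be sketched together. This Python fragment is illustrative only; the function name and data structures are my own, but the behaviour follows the spec: stay silent unless our nickname is the middle element of the symbol, and reply with the error only when we do not share that symbol.

```python
def handle_register_listener(message, my_nickname, my_shared, listeners):
    """Process a /bile/API/registerListener message.

    `my_shared` is the set of symbols we advertise; `listeners` maps each
    symbol to the (nickname, ip, port) tuples registered for it.
    Returns an error reply, or None if nothing needs to be sent.
    """
    _, symbol, nickname, ip, port = message
    owner = symbol.split("/")[2]           # "/bile/Nick/freq" -> "Nick"
    if owner != my_nickname:
        return None                        # not addressed to us: stay silent
    if symbol not in my_shared:
        return ["/bile/API/Error/noSuchSymbol", symbol]
    listeners.setdefault(symbol, []).append((nickname, ip, port))
    return None
```

The same owner check would apply to removeListener, per the error rules above.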
De-listening

RemoveListener

To announce an intention to ignore subsequent data, a user can ask to be removed:

/bile/API/removeListener symbol nickname ip

The symbol is an OSC message that the user will no longer be listening for. If the receiver of this message sees their own nickname in the symbol (which is formatted /bile/nickname/symbol) but is not sharing that data, they can reply with /bile/API/Error/noSuchSymbol symbol.
The nickname is the name of the user that will no longer accept the symbol as a message.
The ip is the IP address of the user that will no longer accept the symbol as a message.

Example:

/bile/API/removeListener /bile/Nick/freq Shelly 192.168.1.67
RemoveAll

Users who are quitting the network can ask to be removed from everything that they were listening to:

/bile/API/removeAll nickname ip

The nickname is the name of the user that will no longer accept any shared data.
The ip is the IP address of the user that will no longer accept any shared data.

Example:

/bile/API/removeAll Nick 192.168.1.66

Commonly Used Messages

Chatting

Msg

This is used for chatting:

/bile/msg nickname text

The nickname is the name of the user who is sending the message.
The text is the text that the user wishes to send to the group.

Clock

This is for a shared stopwatch and not for serious timing applications.

Clock start or stop

/bile/clock/clock symbol

The symbol is either start or stop.

Reset

Reset the clock to zero:

/bile/clock/reset

Set

Set the clock time:

/bile/clock/set minutes seconds

Minutes is the number of minutes past zero.
Seconds is the number of seconds past zero.
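A minimal stopwatch reacting to the clock messages above might look like this. It is an illustrative Python sketch, not BiLE’s BileClock class.

```python
import time

class SharedClock:
    """A stopwatch driven by /bile/clock OSC messages (illustrative only)."""
    def __init__(self):
        self.offset = 0.0       # accumulated seconds while stopped
        self.started_at = None  # wall-clock time of the last start, or None

    def handle(self, address, *args):
        """Dispatch one incoming clock message."""
        if address == "/bile/clock/clock":
            if args[0] == "start" and self.started_at is None:
                self.started_at = time.time()
            elif args[0] == "stop" and self.started_at is not None:
                self.offset += time.time() - self.started_at
                self.started_at = None
        elif address == "/bile/clock/reset":
            self.offset, self.started_at = 0.0, None
        elif address == "/bile/clock/set":
            minutes, seconds = args
            self.offset, self.started_at = minutes * 60 + seconds, None

    def elapsed(self):
        """Seconds past zero, including any currently running interval."""
        running = time.time() - self.started_at if self.started_at is not None else 0.0
        return self.offset + running
```

Because every player handles the same broadcast messages, all stopwatches stay roughly in step, which is all a rehearsal clock needs.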


Proposed Additions

Because users can silently join, leave and re-join the network, it could be a good idea to have users time out after a period of silence, maybe around 30 seconds or so. To stay active, they would need to send I’m-still-here messages.

There should possibly also be a way for a user to announce that they have just arrived, so that, for example, if a SuperCollider user recompiles, her connection will think of itself as new and other users will know to delete or recreate connections depending on that user.

Dissertation Draft: BiLE Tech

In January 2011, five of my colleagues in BEAST and I founded BiLE, the Birmingham Laptop Ensemble. All of the founding members are electroacoustic composers, most of whom have at least some experience with an audio programming language, either SuperCollider or MAX. We decided that our sound would be strongest if every player took responsibility for their own sound and did his or her own audio programming. This is similar to the model used by the Huddersfield Experimental Laptop Orchestra (HELO), who describe their approach as a “Do-It-Yourself (DIY) laptop instrument design paradigm.” (Hewitt p 1 http://helo.ablelemon.co.uk/lib/exe/fetch.php/materials/helo-laptop-ensemble-incubator.pdf) Hewitt et al write that they “[embrace] a lack of hardware uniformity as a strength” and imply that their software diversity is similarly a strength, granting them greater musical (rather than technical) focus. (ibid) BiLE started with similar goals, to focus on the music and empower the user, and has had similarly positive results.

My inspiration, however, was largely drawn from The Hub, the first laptop band, some members of which were my teachers at Mills College in Oakland, California. I saw them perform in the mid-1990s, while I was still an undergrad, and had an opportunity then to speak with them about their music. I remember John Bischoff telling me that they did their own sound creation patches, although for complicated network infrastructure, like the Points of Presence concert in 1987, Chris Brown wrote the networking code. (Cite comments from class?)

One of the first pieces in BiLE’s repertoire was a Hub piece, Stucknote by Scott Gresham-Lancaster. This piece not only requires every user to create their own sound, but also has several network interactions, including a shared stopwatch, sending chat messages and the sharing of gestural data for every sound. In Bischoff and Brown’s paper, the score for Stucknote is described as follows:

“Stuck Note” was designed to be easy to implement for everyone, and became a favorite of the late Hub repertoire. The basic idea was that every player can only play one “note”, meaning one continuous sound, at a time. There are only two allowable controls for changing that sound as it plays: a volume control, and an “x-factor”, which is a controller that in some way changes the timbral character or continuity of the instrument. Every player’s two controls are always available to be played remotely by any other player in the group. Players would send streams of MIDI controller messages through the hub to other players’ computer synthesizers, taking over their sounds with two simple control streams. Like in “Wheelies”, this created an ensemble situation in which all players are together shaping the whole sound of the group. An interesting social and sonic situation developed when more than one player would contest over the same controller, resulting in rapid fluctuations between the values of parameters sent by each. The sound of “Stuck Note” was a large complex drone that evolved gradually, even though it was woven from individual strands of sound that might be changing in character very rapidly. (http://crossfade.walkerart.org/brownbischoff/hub_texts/stucknote.html)

Because BiLE was a mostly inexperienced group, even the “easy to implement for everyone” Stucknote presented some serious technical hurdles. We were all able to create the sounds needed for the piece, but the networking required was a challenge. Because we have software diversity, there was no pre-existing SuperCollider Quark or MAX external to solve our networking problems. Instead, we decided to use the more generic music networking protocol Open Sound Control (OSC). I created a template for our OSC messages. In addition to the gestural data for amplitude and x-factor, specified in the score, I thought there was a lot of potential for remote method invocation and wanted a structure that could work with live coding, should that situation ever arise. I wrote a white paper (see attached) which specifies message formatting and messages for users to identify themselves on the network and advertise remotely invokable functions and shared data.

When a user first joins the network, she advertises her existence with her username, her IP address and the port she is using. Then, she asks for other users to identify themselves, so they broadcast the same kind of message. Thus, every user should be aware of every other user. However, there is currently no structure for users to quit the network. There is an assumption, instead, that the network only lasts as long as each piece. SuperCollider users, for example, tend to re-compile between pieces.

Users can also register a function on the network, specifying an OSC message that will invoke it. They advertise these functions to other users. In addition, they can share data with the network. For example, with Stucknote, everyone shares amplitude values such that they are controllable by anyone, including two people at the same time. The person who is using the amplitude data to control sound can be thought of as the owner of the data; however, they or anyone else can broadcast a new value for their amplitude. Typically, this kind of shared data is gestural and used to control sound creation directly. There may be cases where different users disagree about the current value, or where packets get lost, but this does not tend to cause a problem: with gestural data, not every packet is important, and packet loss is not a serious issue.

When a user puts shared data on the network, she also advertises it. Users can request to be told of all advertised data and functions. Typically, a user would request functions and shared data after asking for IDs, upon joining the network. She may ask again at any time. Interested users can register as listeners of shared data. The possibility exists (currently unused) for the owner of the data to send its value only to registered users instead of to the network as a whole.

In order to implement the network protocol, I created a SuperCollider class called NetAPI (see attached code and help file). It handles OSC communications and the infrastructure of advertising and requesting IDs, shared functions and shared data. In order to handle notifications for shared data changes, I wrote a class called SharedResource. When writing the code for Stucknote, I had problems with infinite loops of change notifications. The SharedResource class has listeners and actions, but the value-setting method also takes an additional argument specifying what is setting it. The setting object will not have its action called. So, for example, if the change came from the GUI, the SharedResource will notify all listeners except for the GUI. When SharedResources “mount” the NetAPI class, they become shared gestural data, as described above.
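The SharedResource notification pattern can be sketched in Python. The class and method names mirror the description above, but the details are assumptions about the attached SuperCollider code; the point is that the setter identifies itself and is skipped during notification, which breaks GUI-to-network-to-GUI loops.

```python
class SharedResource:
    """A shared value whose listeners are notified of changes.

    Illustrative sketch: the setter passes its own identity, and its own
    action is skipped, so a change never echoes back to its originator.
    """
    def __init__(self, value=0.0):
        self.value = value
        self.actions = {}  # listener name -> callable

    def add_listener(self, listener, action):
        self.actions[listener] = action

    def set(self, value, setter=None):
        """Update the value, notifying everyone except the setter."""
        self.value = value
        for listener, action in self.actions.items():
            if listener != setter:   # do not echo back to the setter
                action(value)
```

If the GUI calls `set(0.5, setter="gui")`, the network listener fires but the GUI’s own action does not, so no feedback loop forms.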

Forming a plan

Today is 30 July. My dissertation is due on 30 September. I am now planning how things will go between now and then.
I know that I cannot work every day between now and then. My maximum sprint time is 10 days, so I need to plan on taking one day off per week, which might as well be on the weekend. With BiLE on Wednesdays, that gives me 5 days a week. Also, planning on working 16-hour days is not going to work. Instead, I can do 4 hours on music and 4 hours on words. Roughly, I have 160 hours of each to spend.
If I keep to a reasonable sleeping schedule and cut back on facebook, I can still go out occasionally. I am not going to drink unless it is the evening before my one break day per week. Also, since stress levels will be high, that break day needs to actually be spent away from a computer like riding my bike or going to the beach or something worthwhile.
Everything is going to be fine. This will all be over soon. I will get it all done. I just need to focus and work hard.
I may start doing again what I did with my MA and start posting drafts of various bits, looking for feedback.

Why I'm meh on Google+

I want to quit facebook, but not for Google. Facebook has a lot of faults, most notably privacy-related: they sell your personal data to third parties. They also have a second problem, which is shared with any other provider of a “free” service. They can terminate your account without warning for any perceived TOS violation. This means nursing mums or trans men who post too much nipple can wake up one morning to find they can’t log in. The same thing has been happening lately to activists. Losing your login means losing your data.
If I found one day I’d been arbitrarily deleted from facebook, I would be seriously irritated. For all its faults, it’s been useful in helping me keep in contact with my godmother, for promoting upcoming gigs and for essential goofing off. But a deleted facebook account is not the end of the world. Nothing terribly essential would be lost.
I do worry about arbitrary deletions after I was mysteriously banned from ebay. But I just don’t have that much invested in facebook.
Google has also been known for mysterious and arbitrary deletions, often of activists. They purport to be politically neutral, which seems unlikely. In any case, they have no appeal process and no customer support, so terminated Google accounts are very rarely reinstated. If Google takes a dim view of some exposed nipple or a political opinion, or just gets its wires crossed, it may also delete an account without warning. The newly deleted person is not a customer. Google has no contractual responsibilities. And it’s not just Google+ access that goes.
If I lose my Google account, I lose my email, my contacts, my calendar, my RSS reader and my blog. I have too many eggs in one “free” basket. I am increasingly of the opinion that paying for services is the way forward. “Free” services sell my demographics, force me to look at adverts and have no legal responsibility to me or my data.
I don’t want to go from one privacy-compromising arbitrarily-deleting social network to another. Google+ is not the solution to this problem, it’s just another instance of it. Like the great MySpace migration of a few years back, this offers only new shiny bits and no progress on fundamental issues. If they actually wanted to not be evil, they would fund Diaspora and host a pod. Otherwise, this is just another walled garden.

BiLE in Venice

Juju and I flew in a day before the Laptops Meet Musicians Festival because we wanted to go to the Biennale. Our flight was at 6 am, so we slept about 2 hours before having to leave my flat at an ungodly hour. Once we arrived in Venice, the first thing I noticed was that it was about 15 degrees warmer than London. And I wondered why I’d thought it would be a good idea to wear steel-capped boots!

We found our hotel, which said it could get us a 35% price reduction on tickets for the Biennale, starting the next day, so we spent the first day wandering the narrow streets and looking into churches. It was my 3rd time in the city, but I had always gone during the art show, so had barely been in any of the churches before. They are astounding: covered in marble, with monuments many metres tall. The Basilicas have no shortage of relics. I saw St Theresa’s foot! (Random aside: my mum had a piece of St Theresa in a tiny envelope, which I accidentally dropped into the carpet. Some bit of her was hoovered up and is now sanctifying a California landfill.)

We walked down to San Marco square. It used to be described by The Rough Guide as “pigeon infested,” but this has improved vastly since I was last there. Street vendors no longer sell pigeon food, thank gods.

At about 10, the lack of sleep and the heat were too much for me, so I went to go lie down in the hotel. I had booked a hostel bed, but they had reassigned us to a tiny hotel room with a double bed. It was theoretically a step up. I thought about asking for twin beds, but then didn’t want to bother, as it was only for one night. I lay down on the bed and turned on the fan and lay awake sweating, wearing nothing but my shorts. For hours.
Juju came home at 2 and we both lay on top of the bed in nothing but shorts. It was not the best night of holiday ever.
Shelly and Antonio arrived the next morning, so we checked out and went to meet them at the bus station. We then went to the island of San Giorgio Maggiore, where we were to be lodged by the Fondazione Giorgio Cini.
Even though it was early in the day, they gave us our room keys and let us check in. During the long process of photocopying passports and signing documents written in Italian, the festival organisers happened by, told us where to meet them for dinner and gave us a sneak peek of the concert hall.
We took a vaporetto back to the rest of the islands and went into some of the national pavilions for the Biennale. This year, they’re scattered around the city and largely free. The one that I liked best was Taiwan’s, which was focused on sound. They had a large listening room and then a smaller room showing two films side by side that were different perspectives on the same scene. Two sound artists were recording the harvest and processing of some grain or rice. They started in the fields and then tracked its harvest, its transport by train, the processing in a factory, the distribution and the processing of the chaff. They worked directly with the workers and got recordings from inside the cabs of vehicles and very, very close to things. It was amazing, especially the sound, but also augmented by the video. I think it was my favourite thing at the Biennale this year.
The Festival took us out to dinner that night and the two subsequent nights, always to the same nice restaurant. The food was fantastic.
Antonio was talking about how he always buys travel insurance because he always accidentally eats something that he’s allergic to. Then, moments later, he mistook a fish for a chicken. Fortunately, medical intervention was not required, although he is allergic to fish. People teased him for this, but I totally understand not recognising something that you never eat. I don’t really know many French food words for meat items, because I never ate them, so I never made a strong association with the words.
The next day, we had the early sound-check slot, so we did our technical stuff and then had some rehearsal time. We switched to using a BT Home Hub, in the hopes that SuperCollider would beachball less often. This was semi-successful. SuperCollider just has a major issue with wifi, as far as I can tell. Also, when there were two iPhones running TouchOSC on the network, data transmission got really blocky and jerky for SC users. I don’t remember whether Juju was affected on Max or not, but we had to have one of them switch to using an ad-hoc network to talk to their phone. That fixed that. So after endless faffing, we had a not overly inspiring rehearsal. Then they took us out for lunch at the one café on the island. It ended with coffee and ice cream, as all good summer lunches should.
We spent the entire afternoon writing our 10 minute presentation on the ensemble.
The evening started with a presentation from David Ogborn about the Cybernetic Orchestra, the LOrk he runs. He spoke about how he uses a code-based interface for some pieces. He described this as Live Coding, but I think that term is much more specific and refers to a particular type of on-the-fly code generation, whereas the players in his group start with a programme already written and make changes to it.
Code-based interfaces are, of course, entirely legitimate ways to write and control pieces. They also have some pedagogical value; however, I think it’s easy to overstate that case. For example, I can open a CSS file and make a bunch of changes to it in order to get roughly the look I want out of my website, but I cannot say that I know CSS, and would not know CSS unless I actually studied it by reading a book or several help files and coding something from scratch. Still, being able to modify code makes the user into a kind of power user and demystifies code, so it’s a good thing to do, but one needs to keep it in perspective.
After the longish presentation, BiLE played for about 20 minutes. We played XYZ first and Partially Percussive second. I think that, musically, they work best in the opposite order, but Antonio decided it would be best to do my piece without graphics, and as the projection screen was on the opposite side of the room from where we were playing, we had to use that order or nobody would have looked back to see the video for XYZ.
Shelly’s piece is normally for 4 audio players, so it scaled down very well for 3. I accidentally hit the mute button instead of fading out, so the end was a bit abrupt, but it was ok. It came out well enough that another band wants to cover it!
My piece is normally for 6 audio players and probably should have been practiced more by the smaller group, as it came out a bit more roughly. One thing that came out very nicely is that the piece ends with a bell sound, and as that rang out, the church bells all over the city were ringing, so the bells sounded like a part of the piece. That was really nice.
Then we gave our presentation, which probably went on for a bit longer than the allocated 10 minutes. I’m not sure if I said anything useful, especially after Ogborn spoke for so long. Normally, I want to differentiate between LOrks and BiLE, which is a laptop ensemble, but every band at LMMF was an LE, so I think this distinction was just confusing.
After the concert, they took us all for dinner again and then a bar. We all slept in a bit later than intended the next morning, except for Juju, who flew to France. Shelly and I poked around the Foundation’s buildings and then went up the church’s clock tower in time for the noon bells. I set up my Zoom recorder, put in earplugs and waited for the bells to ring. I could feel the vibrations of the big bells in my body, and there were amazing partials after each ring. I haven’t listened to the recording yet, but I’m hoping it’s good.
After lunch we went to the Biennale at Giardini. We didn’t see much of it, actually. There was an unfortunate tendency for the national pavilions to have art pieces that were self-referential and about themselves. Or worse, about the Biennale. I get that it’s a lot of pressure and whatnot to do something for such a prestigious show, but maybe that pressure could be let out via long, rambling blog posts rather than via the art.
One high point was the USA’s pavilion, which had what seemed like some very smart critiques of consumer capitalism, all with a million corporate sponsorships. They had the symbol of liberty in a tanning bed, for example. Given the number of sponsors and the apparent popularity, I am slightly afraid I’m attributing irony and critique where none exists, but in the meantime, I’m impressed by an upside-down army tank with a treadmill on it.
The big pavilion there had a bunch of stuffed pigeons on it. There were some cool things inside, but I was not blown away by anything. We didn’t get very far in before we needed to go back for a concert.
The second night of LMMF was all music and no talking, which is good. All the bands were very good.
After dinner and the bar, we went to the old Greek-style amphitheatre on the foundation grounds and opened a couple of bottles of wine. I crashed out around 4 am, but most everybody else stayed up until 5. Or at least, most everybody younger than me.
We were hanging out a lot with Benoit and the Mandelbrots, a live coding quartet from Karlsruhe, Germany. They were singing songs from YouTube videos. At one point, we were walking along and everybody was singing the theme song from Super Mario Brothers. They have their finger on the pulse of pop culture, or at least internet memes.
The final day, we checked out and then went to the Arsenale to see a last bit of the Biennale. As in the last 2 times I’ve gone, I liked the Arsenale more than some other parts of the show. (Although, this year, the stuff in the city centre was really the best.) There were a lot of pieces made out of trash, and dealing with waste, refuse and the general disposability of pop culture seemed to be a major theme this year. There was a large hanging dragon made of discarded truck innertubes and fine embroidery, which was cool.
One very impressive piece was a giant statue, in the style of ancient Greece or Rome. It was as tall as a double decker bus. But instead of being made of marble, which it resembled, it was made of candle wax, was full of wicks and was actually burning. Already the heads of the figures had come off from the burning. The whole thing was gradually being consumed during the course of the exhibit.
Another piece that caught my attention was a bestiality video from Germany called Tierfick. The animals involved were taxidermied. The video was disturbing but also silly. I actually do like stuff that tries to be shocking.
So, I heard a bunch of good music, ate a bunch of good food, stayed in rooms that were a reasonable temperature, talked to a lot of good people and saw a lot of art. I hope gigs like this become a trend for BiLE!

The future of BiLE

What will happen to us when we’re famous? Will we lose it like Amy Winehouse? Antonio predicts:
Antonio will crumble under pressure from the ladies and become a porn star.
Juju will go into politics, campaigning for animals.
Shelly will become a talk show host for the culture show.
Chris will go into boxing, venting pressures from his computers. He’ll grow a twirly moustache like an old-fashioned pugilist.
Jorge will be a famous singer in Colombia. Women, children and teenagers will throw their knickers at him. He’ll do occasional BiLE reunions. They will be the most awesome gigs ever.
Norah’s love for pandas will lead her to write a famous blog or become like Janeane Garofalo.
I will become a fashionisto in NYC, wearing a beret, smoking a cigarette out of a long holder and owning a toy poodle.
I think I kind of object to this…..
Antonio makes no apologies and plays the cards as he sees them.
Shelly predicts that 4 of us will end up living in squats until we’re 75, hoping somebody eventually pays us for a gig.

Do you believe in the rapture?

I’m looking for people who believe that the world is going to end soon, or people who pray it ends soon. If you think the rapture is around the corner or that we’re nearing the end times, I would really like to talk to you!
I would like to interview you talking about your beliefs. This can be in person, by phone or by Skype. I’d like to record this interview so I can use it as material in a musical piece that I’m writing. This piece will be played in England. Most of the people who hear it will not have previously heard the rapture described by a believer.
In order to make the music, your words will be put into a collage that makes musical sense. This does require some cutting, but I will preserve your meaning. I want to accurately convey your views, your beliefs and your hopes for the future.
This is for a 13 minute section of a longer piece of music performed by people with laptop computers. The entire thing will be an hour long. I’m calling it a “laptopera,” but it does not actually contain singing. The title of the piece will be The Death of Stockhausen. Your section does not yet have a title, but will probably include the word “Apocalypse.” The section will also include people with New Age beliefs surrounding 2012, but will make sure to differentiate their views from yours. (If you want to say anything about how the New Agers are right or wrong, I’d also like to hear that.)
If you want to help, please leave a comment! Or, would you mind praying that somebody does want to help?

Concert Review: RCM LOrk

Last night, I went to see the Royal College of Music Laptop Orchestra perform in their institution’s main hall. I found out about the concert at the last minute because a friend spotted it on twitter. Until yesterday, I didn’t even know there was a LOrk in London!
The audience was quite small and outnumbered by the performers. There were 6 people on stage and one guy working at a mixing desk, who got up to play piano for one of the pieces. The programme was quite short, with 5 pieces on it. They started with Drone by Dan Trueman, which was the first ever LOrk composition, according to the printed programme. They walked in from the back, carrying laptops and playing from the internal speakers, the tilt of each laptop changing the sound. They then walked around the space, making this drone. It worked well as an introduction and had a good performative element, but I find the piece slightly disturbing because it pains me whenever I see anyone shake a laptop: that kind of treatment can kill a hard disk. Somebody should port this piece to PD and run it via RjDj on an iPhone.
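The tilt-to-sound idea is simple enough to sketch. The snippet below is a hypothetical illustration, not Trueman’s actual code: it reads a 3-axis accelerometer value (the kind of data a laptop’s motion sensor, or an iPhone running an RjDj port, would provide) and maps front-back tilt to a drone frequency and side-to-side tilt to amplitude.

```python
import math

def tilt_to_drone(ax, ay, az, base_hz=110.0, span_octaves=2.0):
    """Map a 3-axis accelerometer reading (in g) to drone parameters.

    A hypothetical mapping, not the original piece's: front-back tilt
    (pitch angle) sweeps the frequency over span_octaves octaves above
    base_hz; side-to-side tilt (roll) scales the amplitude.
    """
    # Pitch angle: rotation about the laptop's left-right axis, in radians.
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Roll angle: rotation about the laptop's front-back axis.
    roll = math.atan2(ay, az)

    # Normalise pitch from [-pi/2, pi/2] to [0, 1] and map it
    # exponentially to frequency, so equal tilts give equal intervals.
    norm = (pitch / math.pi) + 0.5
    freq = base_hz * 2.0 ** (norm * span_octaves)

    # A flat laptop (roll = 0) is loudest; a laptop on its side is silent.
    amp = max(0.0, math.cos(roll))
    return freq, amp

# A laptop lying flat: gravity points straight down the z axis.
freq, amp = tilt_to_drone(0.0, 0.0, 1.0)
```

With the laptop flat, the pitch angle is zero, so the frequency sits at the middle of the two-octave range (220 Hz here) at full amplitude; tilting the screen away or towards you sweeps the drone up or down.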
The next piece they played was Something Completely Different by Charles Mauleverer. It was quite short and was made up of clips from Monty Python. Somebody from the ensemble explained that they were playing YouTube videos directly and using the number keys to skip around in the videos and stutter and glitch in that way. This piece was played through two large monitors on the stage. Because all the clips are in the vocal range, using only two speakers made it a bit muddy. Also, the lack of processing the sounds in any meaningful way could become an issue, but the piece was quite short and therefore mostly avoided the limitations of its simple implementation.
Then, alas, there was a few minutes’ pause for technical issues, and a member of the group stood up and gave a short talk about what was going on in the pieces played so far.
After they got everything going again, they played Synchronicity by Ellis Pecen, which was very well done. The players were given already processed sounds of a guitar and were playing and possibly modifying those further. The programme notes said it used instrumental sounds “process[ed] to such a degree that it would be difficult to discern the original instrument and the listener would … perceive” the source materials only as “a source of sound.” As such, it was acousmatic in its construction and its ideals, but the result was a nice drone/ambient piece. After a few minutes, the sound guy got up and joined the ensemble to play some ambient piano sounds. The result was a piece outside of the normal LOrk genre (as far as one can be said to exist) and was extremely musical.
Spirala by David Rees, the next piece on the programme, was supposed to have a projected element, but the projector crashed just as the piece was about to start. The piece was apparently built in Flash and involved the players turning some sort of crank by drawing circles on their trackpads. The sounds it made (and perhaps the mental image of crank-turning) led me to think of a jack-in-the-box. The programme says the piece is online, but I’m getting a 404 on it, alas.
The last piece was Sisal Red by Tim Yates. It relied on network communication, making groups of three laptops into “distributed instruments.” The piece didn’t seem to match its programme notes, however, as there only seemed to be four people actually playing laptops. One of the players was on a keyboard controller and another was playing the gong with a beater and a microphone, as if it were Mikrophonie by Stockhausen. This piece used 4 channels of sound, with the two monitors on stage and two more behind the audience. It seemed to fill up the hall as if we were swimming in sound. I’m not sure which sounds were computer-generated and which were from the gong or other sources, but I had the impression that the gong sound was swaying around us and was a very strong part of the piece. It certainly harkened back to the practice of combining instruments with electronics and also seemed to be an expansion of the normal LOrk genre. The result was very musical.
According to the programme, this is the only LOrk situated at a conservatory rather than a university. The players were all post graduates, which is also a break with the normal American practice of undergraduate ensembles. All of the pieces except the first one were written by ensemble members. As is the case with most other LOrks, the composer also supplied the “instrument,” so all the players were running particular programmes as specified by (or written by) the composer. Aside from the first piece, there were no gestural controllers present.
I think putting a LOrk into a conservatory is an especially good idea. This will create LOrks that will concentrate heavily on performance practice. In their piece Something Completely Different, they completely de-emphasised the technology and created something that was almost purely performative. However, they obviously still embrace the technical, not only through their choice of medium, but in pieces such as Spirala which required the composer to code in flash.
I was really impressed by the concert overall, and especially by their musicality, and I hope they get larger audiences at their future gigs, as they certainly deserve them.
By the way, if you’re in a LOrk and have not done so already, there is a mailing list for LOrks, Laptop Bands, Laptop Ensembles and any group computer performance: LiGroCoP, which you should join. Please use it to announce your gigs! Also, BiLE will be using it to make announcements regarding our Network Music Festival, which will happen early next year and will have some open calls.