Laptop and Tuba

This post is taken from the lightning talk I gave at AMRO.

Abstract

I have decided to try to solve a problem that I’m sure we’ve all had – it’s very difficult to play a tuba and program a computer at the same time. A tuba can be played one-handed but the form factor makes typing difficult. Of course, it’s also possible to make a tuba into an augmented instrument, but most players can only really cope with two sensors and it’s hard to attach them without changing the acoustics of the instrument.

The solution to this classic conundrum is to unplug the keyboard and ditch the sensors. Use the tuba itself to input code.

Languages

Constructed languages are human languages that were intentionally invented rather than developing via the normal evolutionary processes. One of the most famous constructed languages is Esperanto, but modern Hebrew is also a conlang. One of the early European conlangs is Solresol, invented in 1827 by François Sudre. This is a “whistling language” in that its syllables are all musical pitches. They can be expressed as notes, numbers or via solfège.
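
Since Solresol’s seven syllables are just the solfège degrees, the mapping between syllables, numbers and note names is mechanical. A minimal sketch (the syllable list and note names follow the standard do=1 through si=7 convention):

```python
# Solresol's seven syllables are the solfège degrees, so any word can be
# written interchangeably as syllables, scale degrees or note names.
SYLLABLES = ["do", "re", "mi", "fa", "sol", "la", "si"]
NOTE_NAMES = ["C", "D", "E", "F", "G", "A", "B"]

def word_to_degrees(syllables):
    """A Solresol word as scale degrees 1-7."""
    return [SYLLABLES.index(s) + 1 for s in syllables]

def word_to_notes(syllables):
    """A Solresol word as note names."""
    return [NOTE_NAMES[SYLLABLES.index(s)] for s in syllables]

# "domifare": do-mi-fa-re
degrees = word_to_degrees(["do", "mi", "fa", "re"])  # [1, 3, 4, 2]
notes = word_to_notes(["do", "mi", "fa", "re"])      # ['C', 'E', 'F', 'D']
```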

The “universal languages” of the 19th century were invented to allow different people to speak to each other, but previously to that some philosophers also invented languages to try to remove ambiguity from human speech. These attempts were not successful, but in the 20th century, the need to invent unambiguous language re-emerged in computer languages. Programming languages are based on human languages. This is most commonly English, although many exceptions exist, including Algol, which was always multilingual.

Domifare

I decided to build a programming language out of Solresol, as it’s already highly systematised and has an existing vocabulary I can use. This language, Domifare, is a live coding language very strongly influenced by ixi lang, which is also written in SuperCollider. Statements are entered by playing tuba into a microphone. These can create and modify objects, all of which are loops.

Creating an object causes the interpreter to start recording immediately. The recording starts to play back as a loop as soon as the recording is complete. Loops can be started, stopped or “shaken”. The loop object contains a list of note onsets, so when it’s shaken, the notes played are re-ordered randomly. A future version may use the onsets to play synthesised drum sounds for percussion loops.
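
The “shake” operation can be sketched in a few lines. This is a hypothetical Python simplification (the actual implementation is a SuperCollider class): shaking keeps the loop’s rhythmic grid of detected onsets but shuffles which recorded note sounds at each one.

```python
import random

def shake(onsets, rng=random):
    """Re-order a loop's notes: the detected onset times (the rhythmic
    grid) stay fixed, but which recorded note plays at each onset is
    shuffled."""
    times = [t for t, _ in onsets]   # onset times survive unchanged
    notes = [n for _, n in onsets]   # recorded segments get reshuffled
    rng.shuffle(notes)
    return list(zip(times, notes))

# A three-note loop: the times survive, the notes move around.
loop = [(0.0, "low Bb"), (0.5, "F"), (1.0, "D")]
shaken = shake(loop)
```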

Pitch Detection

Entering code relies on pitch tracking, which is a notoriously error-prone process. Human voices and brass instruments are especially difficult to track because of their overtone content. That is to say, these sounds are extremely rich and have resonances that can confuse pitch trackers. This is especially complicated for the tuba in the low register, where the overtones may be significantly louder than the fundamental frequency. This instrument design is useful for human listeners: our brains can hear the higher frequencies in a sound and use them to identify the fundamental, even if the fundamental itself is obscured by another sound. For example, if a loud train partially obscures a cello, a listener can still tell what note was played. This also works if the fundamental frequency is lower than humans can physically hear! There are tubists who can play notes below the range of human hearing, which listeners perceive through the overtones. This is fantastic for people, but somewhat challenging for most pitch detection algorithms.

I included two pitch detection algorithms: one is a time-based system I’ve blogged about previously, and the other is built into SuperCollider, using a technique called autocorrelation. Much to my surprise, the autocorrelation was the more reliable, although it still makes mistakes the majority of the time.
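
Autocorrelation finds a pitch by comparing a signal with delayed copies of itself: the delay at which the signal best matches itself is one period of the fundamental. A minimal Python sketch of the idea (not the SuperCollider implementation, which runs in real time):

```python
import numpy as np

def autocorrelation_pitch(signal, sample_rate, fmin=30.0, fmax=500.0):
    """Estimate a fundamental frequency by autocorrelation: correlate the
    signal with delayed copies of itself and take the lag of the strongest
    peak (within a plausible pitch range) as one period."""
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]          # keep non-negative lags only
    min_lag = int(sample_rate / fmax)     # shortest period considered
    max_lag = int(sample_rate / fmin)     # longest period considered
    lag = min_lag + np.argmax(corr[min_lag:max_lag])
    return sample_rate / lag

# A pure 110 Hz sine should come out close to 110 Hz.
sr = 8000
t = np.arange(sr) / sr
estimate = autocorrelation_pitch(np.sin(2 * np.pi * 110 * t), sr)
```

On a clean sine this works well; on a real tuba, where the overtones can swamp the fundamental, the strongest peak is often at the wrong lag, which is exactly the failure mode described above.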

Other possibilities for pitch detection might include tightly tuned bandpass filters. This is the technique used by David Behrman for his piece On the Other Ocean, and was suggested by my dad (who, I’ve recently learned, built electronic musical instruments in the 1960s or 70s!). Experimentation is required to see if this would work.
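
As a sketch of how a filter-bank detector might work: each candidate pitch gets a single, tightly tuned “filter” (approximated below with the Goertzel algorithm, which measures the energy at one frequency), and the detector picks the candidate whose filter passes the most energy. This is a hypothetical Python illustration, not Behrman’s circuit:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Energy at a single frequency (the Goertzel algorithm), which acts
    like one very tightly tuned bandpass filter."""
    w = 2 * math.pi * freq / sample_rate
    coeff = 2 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def filter_bank_pitch(samples, sample_rate, candidates):
    """Pick whichever candidate pitch's 'filter' passes the most energy."""
    return max(candidates, key=lambda f: goertzel_power(samples, sample_rate, f))
```

The attraction for a language like Domifare is that detection only has to choose among the handful of pitches the vocabulary actually uses, rather than estimating an arbitrary frequency.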

AI

Another possible technique, likely to be more reliable, is AI. I anticipate it could identify commands correctly more often than not, which would substantially change the experience of performance. Experimentation is needed to see whether this would improve the piece. Use of this technique would also require pre-training variable names, so a player would have to draw on a set of pre-existing names rather than deciding names on the fly. However, in performance, I’ve had a hard time deciding on variable names on the fly anyway and have ended up with random strings.

Learning to play this piece already involves a neural learning process, but a physical one in my brain, as I practice and internalise the methods of the DomifareLoop class. It’s already a good idea for me to pre-decide some variable names and practice them so I have them ready. My current experience of performance is that I’m surprised when a command is recognised, play something weird for the variable name, and am caught unawares again when the loop immediately begins recording. I think this experience would be improved, for both the performer and the listener, with more preparation.

Performance Practice

The theme for AMRO, where this piece premiered, was “debug”, so I included both pitch detection algorithms and left space to switch between them and adjust parameters, instead of launching with the optimal setup. The performance was in Stadtwerkstatt, which is a clubby space, and this nuance didn’t seem to come across. It would probably not be compelling for most audiences.

Audience feedback was entirely positive but this is a very friendly crowd, so negative feedback would not be within the community norms. Constructive criticism also may not be offered.

My plan for this piece is to perform it several more times and then record it as part of an album tentatively titled “Laptop and Tuba” which would come out in 2023 on the Other Minds record label. If you would like to book me, please get in touch. I am hoping that there is a recording of the premiere.

It works!

After many very long days, my project Domifare is working. For me. It won’t work for you, because there is a bug in TuningLib. I have raised an issue, which the package maintainer will get to shortly. The package maintainer, who is me, will fix it shortly. When I get back from Austria. I need to test my fix properly.

Only a subset of the specified commands has been implemented, but I can record a loop and re-order its playback based on detected onsets. Hypothetically, I can also start and stop loops. In practice, pitch detection is terrible and the language is barely usable. Annoyingly, its utility depends on how good my tuba playing sounds.

If I want to use this as an actual tool, the way forward is playing the key phrases in as training data to an AI thing.

While writing this project, I raised three issues with the SuperCollider project over documentation and one issue with the LinuxExternals Quark over PipeWire. That will turn into a merge request. I might update the documentation for it.

If you want to hear this thing in progress, I’ll be using it on Friday. You can turn up in person to Linz, Austria or tune into the live stream. This is part of AMRO, who have a helpful schedule.

I feel like a zombie and will say something more coherent later.

Running an online student concert

I wanted to come up with the most straightforward possible setup, so that students would be able to copy it and run their own events with minimal fuss.

This plan uses Twitch, which has two tremendous advantages. It has a performance rights society license, so everyone is free to do covers with no copyright consequences. (Just don’t save the stream to Twitch.) The other is that the platform is designed around liveness, so gaps in the stream are not a problem. This means that no stream switching is required.

Student skills required

The students need to be able to get their audio into a computer. This might entail using a DAW, such as Reaper, or some sort of performance tool. They need to be able to use their DAW or tool in a real-time way, so that performing with it makes sense. If they can create a piece of music or a performance with software that they are capable of recording, then they have adequate skills.

This checklist covers all the skills and tools that a Mac or Linux user will need to play their piece. It will work for many, but not all, Windows users. This is because Windows setups can vary enormously.

Once everyone is able to stream to their own Twitch channel, they have the skills required to do the concert.

Setup and Organisation

You will need a Twitch account dedicated to your class or organisation. You will also need a chatroom or other text-based chat application to use as a “backstage”. Many students are familiar with Discord, which makes it an obvious choice. Matrix chat is another good possibility. If you go with Discord, students will need to temporarily disable the audio features of that platform.

As the students are already able to stream to Twitch, the only thing that will change for them is the stream key. Schedule tech rehearsals the day of the concert. Arrange for the students to “show up” in your backstage chat. At those rehearsals, give out the stream key for your channel’s stream. Give the students a few minutes to do a test stream and check that their setup is working.

The students should be told to wait until instructed to start their streams, and to announce in the chat when they stop. If they get disconnected by any kind of crash, they should check in via the chat before restarting. Once they finish their performance, they should quit OBS so they do not accidentally restart their stream.

When it’s call time for the concert, they also need to show up in the backstage chat. They should be aware of the concert order, but this may also change as students encounter technical challenges. You or a colleague should broadcast a brief welcome, introductory message which should mention that there will be gaps between performances as the stream switches.

As you stop broadcasting, tell the first student to start and the next student to be ready (but not go yet). The first student will hopefully remember to tell you when done and stop their stream. As their stream ends, you can tell the next student to go. You should be logged into the Twitch web interface so you can post in the chat who is playing or about to play.

After the concert ends, reset the stream key. This will make sure their next Twitch stream doesn’t accidentally come out of your organisation’s channel.

Conclusion

The downside of this setup is that there will be gaps in the stream. If a student goes wildly over time, it’s hard to cut them off. However, the tech requirements do not need any investment from your institution and, again, the students should be able to organise their own events in a similar way using the skills they learned from participating in this one.

Performance Disasters

Some of my students have stage fright and don’t want to perform. This circumstance is highly relatable. I thought it might be helpful to share some stories of stage fright and performances gone wrong.

I used to get terrible stage fright. The way I got over it was to keep going on stage, a lot, despite being absolutely terrified every time. As a youth, I got relatively used to playing in front of strangers, but one time, in a youth group, I was playing trumpet in front of my peers and got so alarmed, I couldn’t get my lips to buzz.

More recently, I wrote a piece performed with a gamepad and when I went to perform it, found my hands were shaking too much to play it!

For me, just giving it a go, despite the fear, was what got me through it. But, perhaps for some, an exercise in “what’s the worst that could happen?” will help.

Let’s watch a John Cage performance. Do you think the audience’s reaction indicates success or failure?

John Cage performs Water Walk on the TV show I’ve Got a Secret

Ok, so the audience laughed but he said he was ok with that and his performance got broadcast out on national television, so perhaps the exposure was worth the mirth. But did you notice anything wrong with the performance?

Things that I noticed going wrong included

  • The radios could not be plugged in. This was due to a disagreement between the unions about whose job it was to plug them in. (The moral of that story is to keep the union on side. Solidarity. Also, if you’re doing something weird, be patient while they go through their normal setup process. They’re used to being talked down to, so don’t offer instructions or suggestions unless their normal setup doesn’t work.)
  • The blender caught fire.
  • The rubber ducky was completely inaudible.

Two of these three things were huge problems. Radios are a key part of the piece. The blender situation was also quite alarming and changed the flow of the piece, as the crushed ice used later on was not available. Cage had to be adaptable and think on his feet in a performance situation that had numerous disasters before and during.

Most people don’t notice the problems because he kept his cool throughout. This kind of composure is the product of experience. Things go wrong, but the show must go on. For people unused to performance, it’s likely that you will seem nervous. You are nervous. But with practice and experience, you too can keep your cool. After all, what’s the worst that can go wrong?

https://twitter.com/etgesofspain/status/1379444619272937472

The SynthDefs for my Christmas EP

As threatened, I have once again made some Christmas music.

If you enjoy (or hate it, or are indifferent), please consider donating to the UKLGIG. cafdonate.cafonline.org/111#/DonationDetails They support LGBTQI+ people through the asylum and immigration process. Their vision is a world where there is equality, dignity, respect and safety for all people in the expression of their sexual or gender identity.

Or, if you are in the US, please donate to the National Center for Transgender Equality secure2.convio.net/ncftge/site/Donation2;jsessionid=00000000.app268b?df_id=1480&mfc_pref=T&1480.donation=form1&NONCE_TOKEN=C5EA18E62F736227261DC4CE5C50ADBE

The notes in the 5 movements all come from the same pop song, but in 4 of the movements, they pass through a class I (accidentally) wrote called MidiMangler. It’s undocumented, but the constructor expects the kind of MIDI events that come from SimpleMIDIFile in wslib, and the .p method spits out a Pbind.

The instruments are some of the samples I used a couple of years ago, but the organ is new. It’s based on one from http://sccode.org/1-5as but modified to be played with a PmonoArtic.

SynthDef(\organ, {| freq = 440, gate=1, amp=0.25, pan=0 |
    // from http://sccode.org/1-5as
    var lagdur, env, saw, panner;

    lagdur = 0.4;

    saw = VarSaw.ar(Lag.kr(freq-440)+440,
        width:LFNoise2.kr(1).range(0.2, 0.8) *
        SinOsc.kr(5, Rand(0.0, 1.0)).range(0.7,0.8));

    env = EnvGen.kr(Env.asr(Rand(0.5, 0.7), 1, Rand(1.0, 2.0), Rand(-10.0, -5.0)), gate, doneAction:2);

    amp = Lag.ar((amp / 4) * (Lag.ar(LFClipNoise.ar(lagdur.reciprocal, 0.1), lagdur) + 1)); // tremolo

    panner = Pan2.ar(saw, Lag.kr(pan,1), env * amp);

    Out.ar(0, panner);
}).add;

The other instruments are the default synthdef *cough*, a Risset bell and Karplus-Strong plucking – taken directly from a help file with no changes. These are presented at the bottom for the sake of completeness. The other sound is a bomb sample I found on freesound.

The video is taken from an atom bomb test video, but slowed down and stretched. I used ffmpeg to do this. The original film was 24 frames per second. I used a ffmpeg filter to create a lot of extra in-between frames and then, separately, changed the frame rate to be much slower. The original film was a bit over 20 seconds and got stretched out to 15 minutes. The really low frame rate is a bit choppy, but I think more tweening would have just increased distortion. The commands for that were:

% ffmpeg -i trees-bomb.mp4 -filter:v "minterpolate='fps=180'" 180trees.mkv
% ffmpeg -i 180trees.mkv -filter:v "setpts=33.4*PTS" strch180.mkv

The other day, I read someone putting forth the idea that apocalyptic thinking is so profoundly unhelpful as to be self-indulgent. Climate change is not going out with a bang but with a very prolonged whimper, while we fail, for the duration, to make any significant changes. We can still address it and avoid many of the worst impacts, but we need to get very serious about it immediately. If we can build thousands of expensive, terrifying bombs just in case there might be a war nobody wants, surely we can afford to spend some of that resource averting a disaster that we know is actually coming.

SynthDef(\bell, // a church bell (by Risset, described in Dodge 1997)
    {arg freq=440, amp=0.1, dur=4.0, out=0, pan;
        var env, partials, addPartial, son, sust, delay;

        freq = freq * 2;
        sust = 4;
        amp = amp/11;
        partials = Array.new(9);
        delay = Rand(0, 0.001);

        //bell = SinOsc(freq);

        addPartial = { |amplitude, rel_duration, rel_freq, detune, pan=0|
            partials.add((
                Pan2.ar(
                    FSinOsc.ar(freq*rel_freq+detune, Rand(0, 2pi), amp * amplitude* (1 + Rand(-0.01, 0.01))), pan)
                * EnvGen.kr(
                    Env.perc(0.01, sust*rel_duration* (1 + Rand(-0.01, 0.01)), 1, -4).delay(delay), doneAction: 0))
            ).tanh /2
        };

        //addPartial.(1, 1, 0.24, 0, Rand(-0.7, 0.7));
        addPartial.(1, 1, 0.95, 0, Rand(-0.7, 0.7));
        addPartial.(0.67, 0.9, 0.64, 1, Rand(-0.7, 0.7));
        addPartial.(1, 0.65, 1.23, 1, Rand(-0.7, 0.7));
        addPartial.(1.8, 0.55, 2, 0, 0); // root
        addPartial.(2.67, 0.325, 2.91, 1, Rand(-0.7, 0.7));
        addPartial.(1.67, 0.35, 3.96, 1, Rand(-0.7, 0.7));
        addPartial.(1.46, 0.25, 5.12, 1, Rand(-0.7, 0.7));
        addPartial.(1.33, 0.2, 6.37, 1, Rand(-0.7, 0.7));

        son = Mix(partials).tanh;
        son = DelayC.ar(son, 0.06, Rand(0, 0.02));
        EnvGen.kr(Env.perc(0.01, sust * 1.01), doneAction:2);

        Out.ar(out, son);
}).add;
SynthDef("plucking", {arg amp = 0.1, freq = 440, decay = 5, coef = 0.1, pan=0;

    var env, snd, panner, verb;

    freq = freq + Rand(-10.0, 10.0);
    env = EnvGen.kr(Env.linen(0, decay, 0).delay(Rand(0, 0.001)), doneAction: 2);
    snd = Pluck.ar(
        in: WhiteNoise.ar(amp),
        trig: Impulse.kr(0),

        maxdelaytime: 0.1,
        delaytime: freq.reciprocal,
        decaytime: decay,
        coef: coef);

    //verb = FreeVerb.ar(snd);
    panner = Pan2.ar(snd, pan);
    Out.ar(0, panner);
}).add;


Canon Fodder

I’ve made Christmas albums the last two years and I feel sort of obligated to do another one, although this year is rather a late start.

I’m just listening to what Spotify tells me were my top tracks of 2018, and lurking in there is the 1812 Overture, which is incredibly cheesy but is redeemed by its cannon fire. I only know of two pieces with cannons in them, which suggests there is rather a shortage.

Obviously, as a composer who feels vaguely compelled to put out an album at short notice, I’m well-positioned to address this dire shortage. Indeed, I can think of no Christmas songs with cannons in them at all.

The other piece I know of with cannons in it is Beethoven’s Wellington’s Victory, which doesn’t wait until the end for the big payout, but has cannons starting early and booming often. I almost hate to admit this, but they get really boring. The more heteronormative* model of musical structure seems to work best for explosions. The piece is just terrible throughout: with themes from God Save the [Monarch], Rule Britannia and For He’s a Jolly Good Fellow, it is unbearable. The lack of adequate build-up for the cannon fire is only one of its sins – although certainly the one with the greatest grinding, grating duration.

This extremely thorough analysis of the use of cannons in music implies that extremely bombastic Christmas music is called for.

This isn’t the world we dreamed of, but it’s the world we’ve got.

* Yeah, I went there. I’ve got a critique of Feminine Endings, in which I make the argument that some of it is inspired by TERF attacks on Sandy Stone, but I was advised that this would not be a brilliant career move and I should let the dated past stay there. But, I dunno, some of that text actually is useful – the metaphors are really apt when it comes to things like this piece in particular.

150 years of Toxic Masculinity in the Arts

Angelica Jade Bastién, writing in the Atlantic, has an article, Hollywood has Ruined Method Acting. In it, she describes how some male Hollywood actors have undertaken extreme preparations for their roles. She notes women actors doing the same would be labelled high maintenance and have their careers suffer. Indeed, it’s considered risky for women to make any change to their appearance that does not increase how ‘conventionally’ attractive they are.

Two things strike me about this article. One is how these extreme methods in the service of ‘art’ are often applied to films that hardly seem worth the effort. Jared Leto engaged in anti-social behaviour over a Batman spin-off. I know we’re at a high point of pop culture, etc., but summer Batman movies are not usually considered the kind of high art in which one needs to be a master of the craft. It’s a silly franchise with some very silly films and, lately, some extremely mediocre films.

Much as Leto’s latest film is a boring retread, so is this entire discourse. Undertaking hollowly desperate manoeuvres to restore masculinity to a supposedly effeminate art is, alas, not forging new ground. I’m reminded of the composer Charles Ives’s horror of being considered anything other than hyper-masculine. Indeed, Ives, despite being a composer, viewed all of music with deep suspicion. When people asked him what he played, he would tell them ‘baseball’.

Ives learned music from his father and, like his father, played church organ. Somehow, this literal patriarchy was not enough for Ives, who sought desperately to distance himself from composers and listeners he felt to be beneath him. This, unsurprisingly, included women, men he felt were effeminate, and people of other races. None of these people performed masculinity as well as Ives, or so he asserted.

Ives imagined a delicate listener, unable to deal with the sheer virility of Ives’s chords. This imaginary audience member was named Rollo. Ives frequently mocked Rollo, demolishing this strawman at every opportunity. Rollo was blamed for Ives’s struggling music career for years, until a younger generation of composers discovered and championed Ives’s work.

Composers such as Henry Cowell, who wrote Ives’s biography, and Lou Harrison, who edited Ives’s work for publication, pushed to get Ives’s work more well known. (Unlike Leto, Ives really was a master of his craft. His work was worth listening to.) Both of these younger composers worked closely with Ives on their project of getting his music out.

Cowell wrote approvingly of Ives’s attacks on Rollo, treating it as a family joke. His recounting is affectionate and warm. Of course it’s humorous to hate the inadequately masculine, he affirmed. He wrote the book before he got caught cruising and sent to prison. Ives and Cowell were on less good terms after Cowell went to San Quentin for homosexual acts. Harrison, too, was gay, although luckier than Cowell.

It was (and is) normal for people in the closet to laugh off jokes about themselves and participate in hatred against them. And there wasn’t much of a chance to be out of the closet at the time.

In addition to the inadequately masculine men, there were, of course, women who were not just listeners but composers. Ives’s assertions that some chords were masculine successfully gained traction. So that, in the early 20th century, when Ruth Crawford Seeger received critical praise for her work, critics wrote that she could ‘sling dissonances like a man’. Seeger understood this as praise and took it as such (and also had the support of Henry Cowell), but still stopped composing within a few years to work on folk music instead.

How much pain have people like Ives been able to cause people like Cowell, Harrison and Seeger, all for the sake of their insecurity? Were he alive now, instead of ‘Rollo’, Ives would certainly attack ‘PC Culture’ in his quest to make music great again. Ives and Leto both use toxic masculinity to boost their self-esteem or their careers or both. Acting like a dickhead for publicity is nothing new. Toxic masculinity has always been, and remains, corrosive and successful.

12 Days of Crimbo

I had a plan in the fortnight before Christmas to write 12 songs in 12 days. I nearly made it in time!

I’ve posted all of the pieces as a free album on Bandcamp. My only request is that if you download it and can afford to, please donate to one of the listed charities, such as Crisis.

While I didn’t get twelve pieces in 12 days, each piece only got a few hours of attention. Because Christmas music tends to be tonal, I looked into more instrumental synthdefs, and because of the constraints on time, I tended to borrow and adapt instead of inventing totally new sounds. I’ve now got a pretty good additive bell, based on Risset, a good Karplus-Strong plucked sound and decent jingle bells. These shall go up shortly on the sccode site.

I used glitched JPEGs as still images for each track on Bandcamp, so when I decided to also upload the tracks to YouTube, I created a glitch movie maker script. It’s based on a workshop I saw Antonio Roberts do at Tate Britain. He opened up a JPEG file, typed some junk into it, and then it glitched. I wrote a script to insert junk into JPEGs. It first looks at the AIFF to decide how many JPEGs will be needed, makes all of them, then turns them into a music video. I’ve posted it to GitHub.
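
The junk-insertion step can be sketched simply. This is a hypothetical simplification of the idea, not the actual script: junk bytes are written into the body of the file, past the header, so the image usually still opens but decodes wrongly. (Sometimes it breaks entirely; for glitch art, that is part of the fun.) The 512-byte safe zone is an assumption, not a property of the JPEG format.

```python
import random

# Writing junk into a JPEG's body (past the header) corrupts the decode
# in visually interesting ways while the file usually still opens.
HEADER_SAFE_ZONE = 512  # assumed size; leave the first bytes alone

def glitch_jpeg_bytes(data, n_glitches=10, rng=random):
    """Insert random junk bytes into a JPEG's body, past the header."""
    data = bytearray(data)
    for _ in range(n_glitches):
        pos = rng.randrange(HEADER_SAFE_ZONE, len(data))
        data[pos:pos] = bytes([rng.randrange(256)])  # splice in one junk byte
    return bytes(data)

# usage sketch (filenames are hypothetical):
# glitched = glitch_jpeg_bytes(open("frame.jpg", "rb").read())
# open("frame_glitched.jpg", "wb").write(glitched)
```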

This is an example of the script’s output. All 12 pieces are up on YouTube also, if you would like to have a look.

Liberationist Agendas and Notation

Graphic notation, the story goes, is meant to be liberating. But for whom?

Not all graphic notation is actually open. Some of it, like the pieces written for David Tudor by Cage and others, was not open at all. Tudor used a ruler to take very precise measurements and worked out a performance score from the score that he received. These scores were graphic, but also very highly specified. When discussing notation in 1976, David Behrman wrote, ‘Learning a new piece can be like learning a new game or a new grammar, and first rehearsals are often taken up by discussions about the rules – about “how” to play rather than “how well” (which must be put off until later).’ (p 74). Indeed, this mining for exactness and rules meant that players needed specificity to approach a new piece. In the same book, but in a different article about the performer’s perspective, Leonard Stein wrote, ‘Little wonder, then, that when first faced with a new score of great apparent ambiguity the performer’s reactions to the music may be seriously inhibited, and he may be discouraged from playing it at all.’ (p 41)

In the era of serialism, every aspect of a piece (from notes, to dynamics, to timbres, to articulations) would be carefully mapped out according to rules. Although he’s framed in opposition to this movement, Cage also often mapped everything out, but used ‘chance operations’ to do so. That is, he cast the I-Ching, which is all a roundabout way of saying he used different algorithms to write very precisely closed music.

When everything is specified, the performer is at risk of falling into very rote renditions of things. He or she may play very mechanically, as if on a grid, or just repeat practices learned in school, trying to get everything right. Musicality is at risk from hyper-specification. Therefore, according to Behrman, when Morton Feldman’s Projection scores have little high boxes in them, specifying a range of possible pitches but not precise notes, this is meant to nudge the performer into greater engagement with the piece and the genre. ‘As a part of his interpretation, the player must ask himself what sort of pitches are appropriate – in effect, what sort of music he is playing.’ (p 79) The performer is liberated from their rote practice and forced to engage. But this liberation is not the performer’s liberation – it is the composer’s. The composer, having broken free from the shackles of European Art Music and serialism, can use any method they want to get something very exact from a performer. Cage draws squiggles and Tudor takes very fine measurements of them. Performers: meet the new boss, same as the old boss.

Meanwhile, European Art Music was also weighing down on people in Europe. But obviously, the political valences of this were completely different. Cage, tired of Americans being compared negatively to dead white European males, joked that the US needed ‘music with less sauerkraut in it’. (Problematic!!) But Europeans who wanted more freedom had much less to prove. Nobody thought British people were somehow culturally incapable of writing large-scale symphonic works worth listening to. They had Elgar! Which is not to say they didn’t also long for freedom, but they did so with much less nationalism.

American experimentalist composers had a project of proving their worth as composers. They rejected the strict, imported methods that came from Europe, but reacting to that by relinquishing control would be risky. Firstly, there was the danger of association with jazz. White supremacy may have pushed some white composers away from engaging with any of the openness suggested by jazz practice. Improvisation would be a step too far. And, indeed, composers trying to prove their worth as masters of their art may have assumed that retaining control would make a stronger case for their own work.

Those not embarking on nationalist projects, who had much less to prove, did turn out to be more open. Cornelius Cardew played in AMM, a small group that improvised, influenced by jazz but trying to play outside of jazz’s generic boundaries. Cagean composers shunned improv, but Cardew embraced it and developed his own squiggly notation. Unlike Feldman, he did not seek exactness or a greater freedom to realise the composer’s vision more precisely. Cardew wrote, ‘A square musician (like myself) might use Treatise as a path to the ocean of spontaneity.’ (1971 p i) What Cardew gives, Feldman takes away. (Of course, when generalising about entire cultures, exceptions abound. Earle Brown argued for performer freedom.)

There is a tendency in musical writing, especially in the popular press, to see graphic notation as a high point of music’s historic embrace of left-wing libertarianism. While Cage certainly did come to embrace anarchism (and his writings on that deserve a fresh look), it would be an error to see most American notational experimentation of the period up to the 70s as embracing any kind of class-conscious liberation. Sure, it was liberationist for composers, but performers had to look abroad if they wanted freedom for themselves.

Works Cited

Behrman, David. ‘What Indeterminate Notation Determines’ (1976) Perspectives on Notation and Performance, ed. Benjamin Boretz and Edward T. Cone. New York: W. W. Norton & Company Inc. [Book]
Cardew, Cornelius. ‘Treatise Handbook’ (1971) London: Edition Peters. [Book]
Stein, Leonard. ‘The Performer’s Point of View’ (1976) Perspectives on Notation and Performance, ed. Benjamin Boretz and Edward T. Cone. New York: W. W. Norton & Company Inc. [Book]

A note about notes

Musical notation, as you may have learned in school, is a lot like a mathematical function. That is, one of those math equations that you can graph. For every x, there is exactly one y. Which means that the graph is a line that may meander up or down, but it will never loop back on itself, nor split in two, nor do anything more interesting than getting more and more to the right as x goes up.

Similarly, unless there is a repeat sign, you read notes strictly left to right. There is no symbol for linked 8th notes (aka: quavers) that play in any order aside from left to right.

And, indeed, the letters of words plot a similar route. But when drawing musical lines, as in the UPIC system, people sometimes want to double back. This impulse is also evident, at least occasionally, in non-musicians.

Wallenda by Penalva at the Irish Museum of Modern Art is a study in naive notation developed by a visual artist. This is an example of a closed and particular form of graphic notation, invented to communicate a monophonic line extracted from the orchestral score of Rite of Spring. Its meaning is specific and fixed.

The artist has divided the movements into sections, each of which has a single page of notation. The third movement alone is 153 pages. The notation is sometimes mnemonic and sometimes drawn lines. It appears to be read left to right, top to bottom. Many of the images resemble piano roll notation as used by some MIDI programs. Some of the lines curve up and down, presumably tracing a melodic line. This has a strong implication of a left-to-right directionality. However, many panels, starting with 69 in the first movement as the first such example, have loops in them.

Loop pages include 69 and 94 in the first set; 16, 74, 107, 110, 111 and 113 in the second set; and 23, 57, 92, 93, 117 and 119 in the third.

While I can only speculate as to the meanings of these gestures, some of the very tight loops do seem as if they may be intended to suggest vibrato. Some of the larger loops appear more mysterious, given their violation of the directionality implied.

Page 44 in set 2 does not loop but does have a gesture that is not a function in a mathematical sense. Instead, it goes down and then backwards. Its meaning is intriguingly mysterious.

I would guess that the reason people tend to want loops (despite making up a system that does not support them) has a lot to do with gestalt psychology. The relationship between it and musical notation is very beautifully illustrated in this analysis of Cardew.

Alas, no pictures are allowed in the museum, so this post is without illustrations of Penalva’s score, but I did do some possibly ambiguous notation of my own in MyPaint. In what order would you play those notes?