Semaphores are awesomesauce

Imagine, if you will, that you are a programmer and somebody has asked you to write an application that counts cars at intersections. You have webcams mounted on top of the traffic lights, and each one sends you a message when it sees a car. You have saved somewhere a count of all the cars so far. So, when a camera tells you it sees a car, you go find that number, add one to it and write down the new number. There is more than one camera at the intersection, though, and while some cars are travelling north, others are travelling south at the same time. What if two cameras see different cars at the same time?

  1. Camera one sees a car and the programme listening to it goes and finds the count of cars so far, which is 75.
  2. Camera two sees a car and the programme listening to it goes and finds the count of cars so far, which is 75.
  3. Camera one’s programme adds one to the total and gets 76.
  4. Camera two’s programme adds one to the total and gets 76.
  5. Camera one’s programme saves its new total.
  6. Camera two’s programme saves its new total.
  7. You go to look at how many cars have been through the intersection and the number recorded is 76.

Camera one and camera two are operating separately from each other at the same time. They are separate threads. When two threads are trying to change the same resource at the same time, you get something called a race condition. Will the first thread finish before the second thread clobbers its changes? The race is on!
Fortunately, there is a solution to this problem: semaphores! Let’s say you are trying to update your traffic count with SuperCollider:

(
 var traffic_count, camera1, camera2, semaphore, action;

 traffic_count = 0;
 semaphore = Semaphore.new(1);

 camera1 = TrafficCounterCamera(\north);
 camera2 = TrafficCounterCamera(\south);

 action = { // this will be called when a car is seen
  Task({
   semaphore.wait; // only one thread can get past this point at a time
   traffic_count = traffic_count +1;
   semaphore.signal; // relinquish control of the semaphore
   traffic_count.postln;
  }).play;
 };

 camera1.action = action;
 camera2.action = action;
)

You need to make a new Semaphore before you use it. By default, a semaphore allows one thread through at a time, but you can change the argument to 2 or 3 or however many threads you want to let through at once.
When your code encounters a semaphore.wait, the thread will pause and wait until it is its turn to proceed. Only one thread is allowed past that line at a time. If both cameras update at the exact same time, one of them will have to wait until the other says it’s good to go ahead.
semaphore.signal is how a thread says it’s good to go ahead. The code in between those two lines can only be run by a single thread at a time. The traffic_count.postln line is outside the semaphore because it’s not making a change to anything, so it’s safe to read it outside of the semaphore.
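Just to show the counting behaviour on its own, here’s a quick sketch (separate from the traffic example) using a Semaphore made with a count of 2: at most two of the four Tasks can be between wait and signal at any moment, and the rest queue up until somebody signals.

(
 var sem = Semaphore.new(2); // at most two tasks inside the critical section at once

 4.do({ |i|
  Task({
   sem.wait; // the third and fourth tasks queue here until a signal arrives
   ("task % entered".format(i)).postln;
   1.wait; // pretend to do some work
   ("task % leaving".format(i)).postln;
   sem.signal; // let the next waiting task through
  }).play;
 });
)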
So when you have two separate threads trying to change something at the same time, semaphores can help you out! This sort of situation can arise with GUI objects, OSC messages, HID objects or anything with an action method.
Be thread-safe!

My life lately (is tl;dr)

Tuesday and Wednesday Last Week

A week ago Tuesday, I taught my module in Cambridge. The next morning, I got on a train to Birmingham for BiLE practice. I’m a co-founder of BiLE, the Birmingham Laptop Ensemble. We formed in February and we have a gig next week. The technical hurdles to getting a laptop ensemble going are not minor, so there has been a lot of energy going into this from everybody. We have got group messaging going, thanks to OSCGroups, and I wrote some SuperCollider infrastructure based on the API quark, plus a small chat GUI and a stopwatch sort of timer controlled with OSC, so there’s been a lot of that sort of tool writing. And much less successful coding of sound-making items, which will eventually be joystick-controllable if I ever get them to work. All my code is written for mono samples and all of the shared samples people are using are in stereo, so I spent a lot of time trying to stereo-ise my code before finally mixing the samples down to mono.
I’m a big believer in mono, actually, in shared playing environments. If I am playing with other people, I’m playing my computer as an instrument, and instruments have set sound-radiation patterns. I could go with a PLOrk-style 6-speaker hemisphere, if I wanted to spend a boatload of money on a single-use speaker to get an instrumental radiation pattern from my laptop, or I could just use the single Genelec 1029 that I already own.
Anyway, after the BiLE rehearsal, a couple students gave a group presentation on Reaper, which is a shareware, cheap, powerful DAW. I’m quite impressed and am pondering switching. My main hesitation is that I expect my next computer will be linux, so I don’t know if I want to get heavily involved with a program that won’t run on that OS. On the other hand, I don’t actually like Ardour very much, truth be told. I haven’t liked any of them since I walked away from ProTools.
After that we went out for socialising and instead of catching a train home, I went to stay on the floor of Julien’s studio. He lives way out in the country, up a lane (British for a single-track country road). It’s quite lovely. I would not be a fan of that commute, but I might do it for that cottage.

Thursday

The next morning, Juju and I headed back to campus quite early so he could meet his supervisor. I ran a couple of errands and got a uni-branded hoodie. I haven’t worn such a garment for years, because fabric clinging to my chest in the bad old days was not a good thing. But now I can wear snug woven fabrics, like T-shirts, hoodies and jumpers! It’s amazing! Also, I remember the major student protests about university-branded clothing made by child labour, but this was actually fairtrade, according to the label, which is fairly impressive.
Then all the postgrads met in the basement of the Barber Institute to start loading speakers into a truck for a gig. We were moving a relatively small system, only 70 speakers, but that’s still a fair amount of gear to muscle around. Then we went to the Midlands Arts Centre to move all the gear into the venue and set it up. The gear is all in heavy flight cases, which needed to be pushed up and down ramps and down hallways, and then the speakers inside needed to be carried to where they would be set up, as did the stands to which they would be attached and the cables that connect them. It’s a lot of gear. We worked until 6 or 7 pm and then went back to the studios at uni to get a two-hour-long presentation from Hans Tutschku about how he does music stuff. I tried desperately to stay awake, because it was interesting and I wanted to hear what he was saying, but I did not entirely succeed in my quest.

Friday

Then, Juju and I went back to his place, 45 minutes away, and then came back to the MAC early the next morning to finish rigging the system. We put up the remainder of the system and then people who were playing in that evening’s concert began to rehearse. I hung around for the afternoon, trying to get my BiLE code working. Kees Tazelaar, who played the next evening, came along to see how things were going, recognised me from Sonology and greeted me by my old name. I like Kees quite a lot, but it was a very awkward moment for me and I wasn’t sure what to do, so I spoke to him only briefly and then mostly avoided him later. This was not the best way to handle it.
There were two concerts in the evening. The second of them was organised by Sound Kitchen and was a continuous hour with no break between pieces. The people diffusing the stereo audio to the 70 speakers took turns, but changed places without interrupting the sound flow. It was extremely successful, I thought. The hour was made up of the work of many different composers, each of whom had contributed only 5 minutes, but somehow this was arranged into a much larger whole that held together quite well, partly because many of the different composers had used similar sound material. A lot of them used bird sounds, for example, so that was a repeating motif throughout the concert.

Saturday

After that, we hung around the bar for a bit. The next morning was not so early, thank goodness, when we went back to the MAC and then back to the uni for the BiLE hack day. The idea was that we would do a long group coding session, where people could write code around each other and ask for clarification or feedback or help or whatever from band mates. However, it started really late and everybody was really tired, so it was not entirely successful in its goals.
Then we went back to the MAC for the concerts. I was sitting in the hallway, trying to figure out why my BiLE code had failed so completely when I got drafted into being in charge of the comp tickets. It turns out that this is actually somewhat stressful, because it requires knowing who is supposed to get comped in, getting tickets for them and then distributing them. Which means approaching Francis Dhomont and speaking to him.
The first concert was curated by Kees Tazelaar and started with a reconstruction of the sounds played in the Philips Pavilion at the Brussels World’s Fair in 1958. He found the source tapes and remixed them. Concret PH sounded much rawer and rougher than other mixes I’ve heard. It had a gritty quality that seemed much more grounded in a physical process. I was surprised by how different it sounded. Then he played Poème électronique and his own work called Voyage dans l’espace. I hope he plays these again on large multi-channel systems, because it was pretty cool.
I was feeling fairly overwhelmed by the lack of sleep, my lack of success with BiLE and getting stuck with all the comp tickets, so I was not happy between concerts. The next one was all pieces by Annette Vande Gorne, a Belgian woman who runs the Espace du son festival in Brussels and who has very definite theories about how to diffuse sounds in space. Some of them are quite sensible; however, she thinks that sound can start at the front of the hall and be panned towards the back of the hall, but that sound cannot originate at the back of the hall and travel to the front. Hearing about this had prejudiced me against her, as it seems rather silly.
She always diffuses standing up, so they had raised the faders for her, with one bank slightly higher than the other, like organ manuals. She started to play her pieces… and it was amazing. It was like being transported to another place. All of my stress was lifted from my shoulders. It was just awe inspiring. The second piece was even better. I was sitting in the back half, so I could see her standing at the mixers, her hands flying across the faders dramatically, like an organist, full of intensity as her music dramatically swelled and travelled around the room. It was awe-inspiring. Then I understood why people listened to her, even when some of her theories sound silly. She might not be right about everything, but there’s quite a lot she is right about. This was one of the best concerts that I’ve ever been to.
The last concert was a surprise booking, so it wasn’t well publicised. It was Jonty Harrison, Francis Dhomont and Hans Tutschku. It was also quite good, but I wouldn’t want to play after Vande Gorne. Tutschku’s piece had several pauses in it that went on just a few moments too long. Its major climax came quite early. It worked as a piece, but seemed like it could be experienced in another order as easily as the way it was actually constructed. I talked to him at the party afterwards and he said that the pauses were climaxes for him and ways of building tension, and that he had held them for too long in order to build suspense. I’m not entirely positive they functioned in this way, but the idea is quite interesting and I may look into it. He also asked me what I thought of his presentation from two days earlier, so I was hoping he hadn’t noticed me dozing off, but I think he did.
After the final concert, there was a large party at Jonty’s house. I got a lift from Jonty, so I was squeezed in the back of a car with Annette Vande Gorne on one side of me and Hans Tutschku on the other side, with Francis Dhomont in the front. They all spoke French the whole way. I’ve been filling out job applications and one of them wants to know about my foreign language skills, and now I can say with certainty that if I’m stuck in a car with several famous composers speaking French, I can follow their conversation fairly well, but would be way too starstruck to contribute anything.
Apparently, the party went on until 4:30 in the morning, but I didn’t stay so late. I talked a lot to Jean-François Denis, the director of empreintes DIGITALes, a Canadian record label. He flew from Canada just for the weekend and showed up without anyone expecting him. He is extraordinarily charming.

Sunday

The next morning, we went back again to the MAC and then there was a long concert with an intermission in the early afternoon. Amazingly, none of the concerts over the entire weekend featured overhead water drops. There were barely any dripping sounds at all.
After the concert, we de-rigged the system, packed all the gear back into cases and loaded it onto the two rented trucks. Then we went for curry in Moseley, which we seem to do after every gig. Shelly was talking about how it was her last BEAST gig and I wasn’t paying much attention until I realised this meant it was my last gig too. I really should have signed up to play something. I thought there was another gig coming later in the year, but it was cancelled. I’m seriously going to graduate from Brum having only played a piece at a BEAST gig one time and never having diffused a stereo piece. That is extremely lame on my part.

Monday

Juju was completely exhausted, so we left the curry early, so he could go home and catch up on sleep. The next morning, we all went back to the Barber Institute to unload the trucks and put everything away. Then we, as usual, went to the senior common room to have cups of terrible coffee. Their tea is alright, so that’s what I had, but most people go for the coffee, which could double as diesel fuel. I guess this was my last time of that also.
Normally, I would then gather my things and go home, but I did not. I worked on code and faffed and worried about my lecture the next day and then in the evening, we had another seminar. Howard Skempton came and talked for two hours about Cardew and Morton Feldman and his own music. It was quite good. We all went to the pub afterwards, but that dissipated quickly as people left to sleep it off.

Tuesday

I got the train home, finally and got in after midnight. There’s a large stack of mail inside my door. I woke up early the next morning to assemble my presentation for my module. As luck would have it, the topic was acousmatic music, so I talked about BEAST and played them some of the music from the weekend. I also pointed them at some tools. I was supposed to have them start their task during the class time, but a surprising number of them wanted to show their works in progress, so that didn’t happen.
As I was on the train back to London from Cambridge, I was wondering whether I should go out to a bar that night to socialise when I fell completely asleep on the train. Asleep and drooling on my backpack. I completely crashed. I woke myself up enough to get the tube home and then thought I would sort out my BiLE code instead of going out, but I couldn’t concentrate, so I just faffed around on the internet instead of sleeping or going out. Meh to me.

Wednesday

Then, the next day, which was Wednesday, a week and a day after all of this started, I got on the train for Birmingham to go to a BiLE rehearsal and to go to a seminar. I got my code working on the train and was feeling somewhat happy about that, but when I got to the rehearsal, it just gave up completely. I managed to make sounds twice during the entire rehearsal, one of which was during a grand pause. When I tried repeating the sound later, it wouldn’t play. Also, Shelly found a crash bug in my chat application when Juju typed a French character. On the bright side, however, all of the Max users got all the way through one of the pieces we’re playing next Thursday, which is quite encouraging. Antonio, our graphics guy, got the projector sort of working, so I was able to glance at what he was doing a couple of times and it looked good.
We took a break and a bunch of the postgrads were dissing live coding, so I guess that might not be a good goal for the ensemble. They thought projected code was self-indulgent and only programmers would care. I need to link them to the TOPLAP manifesto. Actually, they were more dissing the idea of live coding, having barely witnessed any themselves. Non-programmers do seem to care and, while it is a movement that does require some thoughtful understanding to fully appreciate, the same could certainly be said of acousmatic music. I like the danger of live coding, something that I think a laptop ensemble ought to appreciate. It’s a bit like a high wire act.
The presentations at the seminar were interesting and then we went to the pub. I was so tired biking home from the train station that I got confused about which side of the street I’m supposed to be on.

Thursday

I slept until 2 this afternoon and I was supposed to sort out my BiLE code and fix up my CV and write my research portfolio, but all I did was send out email about Monday’s SuperCollider meetup and fix the crash bug in the chat thing. SuperCollider strings are in 7-bit ASCII and fuck up if you give them Unicode, which is really quite shocking and not documented anywhere.
Then I went to Sam’s to get Xena back and I wired up part of the 5.1 system she got for her daughter and sorted out her daughter’s Mac mini so that she could connect to it with VNC and so it was wired to the sound system and the projector and quit asking for the keychain password every 5 seconds. Then I came home and spent ages typing this up. Tomorrow, I will do my CV stuff for real, because I have to get it done, and then work on my BiLE code. Saturday, I’m going back to Brum again for a 5-hour rehearsal in which we sort out the rest of our music for the gig. Sunday, I need to finish job-application-related stuff and write my presentation for Tuesday. Monday is the job application deadline and a SuperCollider meetup. Tuesday, I teach. Wednesday, I need to get Xena back to Sam’s and then go to Brum again for a rehearsal, and will be there overnight to practice the next day and then play the gig and then get stonkingly drunk. Friday, I go home. And then I start sorting out the tech stuff for the next two pieces, which at least are by me and count towards my portfolio. And I need to sort out my stretched piece, which is a disorganised mess, and start writing a 20-minute piece, which I haven’t done at all and which needs to be done very soon, because I need to graduate and I have not spent all this busy time working on my own music, although the tools I’ve written should be kind of valuable. All I can think about now, going over and over in my head, is all the stuff I have to do. And snogging. That thing about men thinking about sex every 7 seconds has never been true for me before, but it is now. And it’s actually quite annoying, except that as the alternative is thinking about everything I have to do, I actually prefer it.

How to Write BBCut FX

First, here’s my file:

CutMask : CutSynth { 
 var <>bits, <>sr, <>bitadd, <>srmult;
 var synthid;
 
 //makes SynthDef for filter FX Synth 
 *initClass { 
 
   StartUp.add({
  
  2.do({arg i;

  SynthDef.writeOnce("cutmaskchan"++((i+1).asSymbol),{ arg inbus=0, outbus=0, bits; 
  var input, fx;
  
  input= In.ar(inbus,i+1);
  
  fx = MantissaMask.ar(input, bits);
  
  ReplaceOut.ar(outbus,fx);
  
  }); 
  });
  
  });
  
 } 
 
 *new{arg bits=16,sr,bitadd=1,srmult=1;
  
 ^super.new.bits_(bits).bitadd_(bitadd).sr_(sr ?? {Server.default.sampleRate/2}).srmult_(srmult);
 }
 
 setup { 
 //tail of cutgroup
 synthid= cutgroup.server.nextNodeID;
  
 cutgroup.server.sendMsg(\s_new, "cutmaskchan"++(cutgroup.numChannels.asSymbol), synthid, 1, cutgroup.fxgroup.nodeID, \inbus, cutgroup.index, \outbus, cutgroup.index, \bits, bits);
   
 } 


//can't assume, individual free required for cut fx
//synth should be freed automatically by group free
 free {
  cutgroup.server.sendMsg(\n_free, synthid); 
 }

 renderBlock {arg block,clock;
  var samprate,bitstart,bitarray,srarray, s;
  
  s= cutgroup.server;

  bitstart= bits.value(block);
  samprate= sr.value(block);
  
  srarray= Array.geom(block.cuts.size,samprate,srmult.value(block));
  bitarray= Array.series(block.cuts.size,bitstart,bitadd.value(block));

  bitarray= 0.5**((bitarray).max(2)-1);

  block.cuts.do({arg cut,i;
  
  block.msgs[i].add([\n_set, synthid, \bits, bits]);
  
  });
  
  //don't need to return block, updated by reference
 }
 

}

What I did there: I took CutBit1.sc and did a saveAs CutMask.sc. Then I changed the name of the class to CutMask.

In initClass

  • I changed the synthdef name to cutmaskchan.
  • I changed the arguments to the SynthDef.
  • I put in my own code for the fx = line. That’s where the magic happens!

In setup

I changed cutgroup.server.sendMsg so it uses my synthdef name and my synthdef arguments.

In renderBlock

  • I changed block.msgs[i].add( to have my synthdef arguments.
  • Since mine doesn’t change, I could skip sending anything, afaik.
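
To make the recipe concrete, here’s a second, made-up variant built the same way: a ring modulator cut FX. The class name CutRingMod, the synthdef name cutringmodchan and the freq argument are all invented for this example and I haven’t tested it, but it follows the same steps described above.

CutRingMod : CutSynth {
 var <>freq;
 var synthid;

 // makes SynthDefs for the FX Synth (mono and stereo versions)
 *initClass {
  StartUp.add({
   2.do({ arg i;
    SynthDef.writeOnce("cutringmodchan" ++ ((i + 1).asSymbol), { arg inbus = 0, outbus = 0, freq = 440;
     var input, fx;
     input = In.ar(inbus, i + 1);
     fx = input * SinOsc.ar(freq); // the fx = line: ring modulation
     ReplaceOut.ar(outbus, fx);
    });
   });
  });
 }

 *new { arg freq = 440;
  ^super.new.freq_(freq);
 }

 setup {
  // tail of cutgroup
  synthid = cutgroup.server.nextNodeID;
  cutgroup.server.sendMsg(\s_new, "cutringmodchan" ++ (cutgroup.numChannels.asSymbol),
   synthid, 1, cutgroup.fxgroup.nodeID,
   \inbus, cutgroup.index, \outbus, cutgroup.index, \freq, freq);
 }

 // individual free required for cut fx
 free {
  cutgroup.server.sendMsg(\n_free, synthid);
 }

 renderBlock { arg block, clock;
  block.cuts.do({ arg cut, i;
   block.msgs[i].add([\n_set, synthid, \freq, freq.value(block)]);
  });
 }
}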

Chiptune, Dub, BBCut

This is based on MCLD’s Dubstep Patch, with a modification to make it output a Pulse wave, and then run through some BBCut from a previous post.

(
var bus, sf, buf, clock, synthgroup, bbgroup, loop, group, cut1, cut2, cut3, stream, pb,
  cut4, out;


SynthDef(\dub, {|out = 0, amp|
 
    var trig, note, son, sweep;

    trig = CoinGate.kr(0.5, Impulse.kr(/*2*/ 1.5.reciprocal));

    note = Demand.kr(trig, 0, Dseq((22,24..44).midicps.scramble, inf));

    sweep = LFSaw.ar(Demand.kr(trig, 0, Drand([1, 2, 2, 3, 4, 5, 6, 8, 16], inf))).exprange(40, 5000);

    son = Pulse.ar(note * [0.99, 1, 1.01]).sum;
    son = LPF.ar(son, sweep);   
    son = Normalizer.ar(son);
    son = son + BPF.ar(son, 2000, 2);

    //////// special flavours:
    // hi manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, 1000) * 4]);
    // sweep manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, sweep) * 4]);
    // decimate
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, son.round(0.1)]);

    son = (son * 5).tanh;
    //son = son + GVerb.ar(son, 10, 0.1, 0.7, mul: 0.3);
    //son.dup;
    Out.ar(out, son.dup * amp);
}).add;


 // groups
 synthgroup= Group.head(Node.basicNew(s,1)); // one at the head
 bbgroup= Group.after(synthgroup); // this one comes after, so it can do stuff with audio
        // from the synthgroup
 bus= Bus.audio(s,1); // a bus to route audio around

 // a buffer holding a breakbeat. The first argument is the filename, the second is the number of
 // beats in the file.
 sf = BBCutBuffer("sounds/drums/breaks/hiphop/22127__nikolat__oldskoolish_90bpm.wav", 16);
 
 // a buffer used by BBCut to hold analysis
 buf = BBCutBuffer.alloc(s,44100,1);
 
 //  The default clock.  180 is the BPM / 60 for the number of seconds in a minute
 TempoClock.default.tempo_(180/60);

 // BBCut uses its own clock class. We're using the default clock as a base
 clock= ExternalClock(TempoClock.default); 
 clock.play;  
 
 // Where stuff actually happens
 Routine.run({

  s.sync; // wait for buffers to load

 loop = (instrument: \dub, out: 0, amp: 0.3,
    group:synthgroup.nodeID).play(clock.tempoclock);

  /* That's an Event, which you can create by using parens like this.  We're using
  an event because of the timing built in to that class.  Passing the clock
  argument to play means that the loop will always start on a beat and thus be 
  synced with other BBCut stuff. */
  
  // let it play for 5 seconds
  5.wait;
  
  group = CutGroup(CutBuf3(sf, 0.5));
  group.add(CutBit1.new(4));
    cut2 = BBCut2(group, BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

  
  // start a process to cut things coming in on the bus
  cut1 = BBCut2(CutGroup(CutStream1(bus.index, buf), bbgroup), 
   BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

 2.wait;
 
   loop.set(\out, bus.index);
   
 "bbcut".postln;
 
 30.wait;
 
   cut4 = BBCut2(CutGroup(CutStream1(bus.index, buf), bbgroup), 
    SQPusher2.new).play(clock);

   
 });
)

I think it would be cool to run the drums through a 4 or 8 bit MantissaMask, but first I have to figure out how to write FX.
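
If I do get the FX working, I’d guess the hookup would look something like the CutBit1 lines already in the code above, just with the mask class swapped in (an untested sketch, assuming a CutMask class like the one in the section above):

// hypothetical: wrap the buffer cutter in a CutGroup and add a 4-bit MantissaMask FX
group = CutGroup(CutBuf3(sf, 0.5));
group.add(CutMask.new(4));
cut2 = BBCut2(group, BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);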

Stupid BBCut Tricks

I’ve been messing about with the BBCut Library and will shortly be generating some documentation for my students. In the meantime, I give you some commented source code and the output which it creates. In order to play along at home, you need a particular sample.

(

 var bus, sf, buf, clock, synthgroup, bbgroup, loop, group, cut1, cut2, cut3, stream, pb,
  cut4, out;

 // this first synth is just to play notes
 SynthDef(\squared, { |out, freq, amp, pan, dur|
  
  var tri, env, panner;
  
  env = EnvGen.kr(Env.triangle(dur, amp), doneAction: 2);
  tri = MantissaMask.ar(Saw.ar(freq, env), 8);
  panner = Pan2.ar(tri, pan);
  Out.ar(out, panner)
 }).add;
 
 
 // a looping buffer player
 SynthDef(\loop, { |out = 0, bufnum = 0, amp = 0.2, loop = 1|

  var player;
  
  player = PlayBuf.ar(2, bufnum, 2 * BufRateScale.kr(bufnum), loop: loop, doneAction:2);
  Out.ar(out, player * amp);
 }).add;
 
 // groups
 synthgroup= Group.head(Node.basicNew(s,1)); // one at the head
 bbgroup= Group.after(synthgroup); // this one comes after, so it can do stuff with audio
        // from the synthgroup
 bus= Bus.audio(s,1); // a bus to route audio around

 // a buffer holding a breakbeat. The first argument is the filename, the second is the number of
 // beats in the file.
 sf = BBCutBuffer("sounds/drums/breaks/hiphop/22127__nikolat__oldskoolish_90bpm.wav", 16);
 
 // a buffer used by BBCut to hold analysis
 buf = BBCutBuffer.alloc(s,44100,1);
 
 //  The default clock.  180 is the BPM / 60 for the number of seconds in a minute
 TempoClock.default.tempo_(180/60);

 // BBCut uses its own clock class. We're using the default clock as a base
 clock= ExternalClock(TempoClock.default); 
 clock.play;  
 
 // Where stuff actually happens
 Routine.run({

  s.sync; // wait for buffers to load
  
  // start playing the breakbeat
  loop = (instrument: \loop, out: 0, bufnum: sf.bufnum, amp: 0.5, loop: 1, 
    group:synthgroup.nodeID).play(clock.tempoclock);

  /* That's an Event, which you can create by using parens like this.  We're using
  an event because of the timing built in to that class.  Passing the clock
  argument to play means that the loop will always start on a beat and thus be 
  synced with other BBCut stuff. */
  
  // let it play for 5 seconds
  5.wait;
  
  // start a process to cut things coming in on the bus
  cut1 = BBCut2(CutGroup(CutStream1(bus.index, buf), bbgroup), 
   BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

  /*  
  We use a cut group to make sure that the BBCut synths get added to the bbgroup.
  This is to make sure that all the audio happens in the right order.
  
  CutStream1 cuts up an audio stream. In this case, from our bus.  It uses a buffer to 
  hold analysis data.
  
  BBCutProc11 is a cut procedure.  
  The arguments are: sdiv, barlength, phrasebars, numrepeats, stutterchance, 
  stutterspeed, stutterarea
  * sdiv - is subdivision. 8 subdivisions gives quaver (eighth-note) resolution.
  * barlength - is normally set to 4 for 4/4 bars. If you give it 3, you get 3/4
  * phrasebars - the length of the current phrase is barlength * phrasebars
  * numrepeats - Total number of repeats for normal cuts. So 2 corresponds to a 
  particular size cut at one offset plus one exact repetition.
  * stutterchance - the tail of a phrase has this chance of becoming a repeating 
  one unit cell stutter (0.0 to 1.0)

  For more on this, see the helpfile.
  
  And we play it with the clock to line everything up
  */

  // wait a bit, so the BBCut2 stuff has a time to start
  2.wait;

  // change the output of the looping synth from 0 to the bus, so the BBCut buffer
  // can start working on it
  loop.set(\out, bus.index);
  
  // let it play for 5 seconds
  5.wait;
  
  // start another BBCut process, this one just using the sound file.
  cut2 = BBCut2(CutBuf3(sf, 0.3), BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);
  // We use CutBuf instead of CutStream, because we're just cutting a buffer
  
  // stop looping the first synth we started
  loop.set(\loop, 0);

  cut1.stop;

  10.wait;
  
  // To add in some extra effects, we can use a CutGroup
  group = CutGroup(CutBuf3(sf, 0.5));
  cut3 = BBCut2(group, BBCutProc11(8, 4, 16, 2, 0.2)).play(clock);

  // play it straight for 5 seconds
  5.wait;

  // add a couple of filters to our cutgroup
  group.add(CutMod1.new);
  group.add(CutBRF1({rrand(1000,5000)},{rrand(0.1,0.9)},{rrand(1.01,1.05)}));

  10.wait;
  
  // we can take the filters back off
  group.removeAt(2);
  group.removeAt(2);
  
  // we can use BBCut cut procedures to control Pbinds
  stream = CutProcStream(BBCutProc11.new);
  
  pb = Pbindf(
   stream,
    \instrument, \squared,
    \scale,  Scale.gong,
    \degree,  Pwhite(0,7, inf),
    \octave,  Prand([2, 3], inf),
    \amp,  0.2,
    \sustain,  0.01,
    \out,  0,
    \group,  synthgroup.nodeID
  ).play(clock.tempoclock);

  // the stream provides durations
  
  10.wait;
  
  // We can also process this is like we did the loop at the start
  
  pb.stop;
  pb = Pbindf(
   stream,
    \instrument, \squared,
    \scale,  Scale.gong,
    \degree,  Pwrand([Pwhite(0,7, inf), \rest], [0.8, 0.2], inf),
    \octave,  Prand([3, 4], inf),
    \amp,  0.2,
    \sustain,  0.01,
    \out,  bus.index,
    \group,  synthgroup.nodeID
  ).play(clock.tempoclock);
  
  
  
  cut4 = BBCut2(CutGroup(CutStream1(bus.index, buf), bbgroup), 
    SQPusher2.new).play(clock);
    
  // SQPusher2 is another cut proc
  
  
  30.wait;
  cut3.stop;
  5.wait;
  cut2.stop;
  1.wait;
  cut4.stop;
  pb.stop;
  
 })
)

Renate Wiesser and Julian Rohrhuber: Meaning without Words

Last conference presentation to live blog from the sc symposium
A sonification project. Alberto de Campo is consulting on the project.
A project 7 years in the making, inspired by a test from the ’70s. You can distinguish educated and uneducated backgrounds based on how people speak. Sociologists picked up on this. There was an essay about this, using Chomsky’s grammar ideas. Learning grammar as a kid may help with maths and programming. Evidence of how programmers speak would seem to contradict this . . .
But these guys had the idea of sonifying grammar and not the words.
Sapir-Whorf: how much does language influence what we think. This also has implications for programming languages. How does your medium influence your message?
(If this stuff came from the ’70s and was used on little kids, I wonder if I got any of this.)
Get unstuck from hearing only the meaning of words.

Corpus Linguistics

Don’t use grammar as a general rule: no top-down. Instead, use bottom-up! Every rule comes with an example. Ambiguous and interesting cases.

Elements
  • syntax categories – noun phrases, prepositional phrases, verb phrases. These make up a recursive tree.
  • word position: verbs, nouns, adverbs
  • morphology: plural/singular, word forms, etc.
  • function: subject, object, predicate. ← This is disputed

The linguistics professor in the audience says everything is disputed. “We don’t even know what a word is.”
They’re showing an XML file of “terminals,” words where the sentence ends.
They’re showing an XML file of non-terminals.
Now a graph of a tree – which represents a sentence diagram. How to sonify a tree? There are several nodes in it. Should you hear the whole sentence the whole time? The first branch? Should the second noun phrase have the same sound as the first, or should it be different because it’s lower in the tree?
Now they have a timeline associated with the tree.
They’re using depth-first traversal.
Now the audience members are being solicited for suggestions.
(My thought is that the tree is implicitly timed, because sentences are spoken over time. So the tree problem should reflect that, I think.)
Ron Kuivila is bringing up Indeterminacy by John Cage. He notes that the pauses have meaning when Cage speaks slowly. One graph could map to many many sentences.
Somebody else is recommending an XML-like approach with only tags sonified.
What they’re thinking is: chord structures by relative step. This is hard for users to understand. Chord structures by assigning notes to categories. They also thought maybe they could build a UGen graph directly from the tree, but programming is not language. Positions can be triggers, syntax as filters.
Ron Kuivila is suggesting substituting other words: noun for noun, etc, but with a small number of them, so they repeat often.
They’re not into this (but I think it’s a brilliant idea, sort of reminiscent of aphasia).
Now a demonstration!
Dan Stowell wants to know about the stacking of harmonics idea. Answer: it could lead to ambiguity.
Somebody else is pointing out that language is recursive, but music is repetitive.
Ron Kuivila points out that the rhythmic regularity is coming from the analysis rather than from the data. Maybe the duration should come from how long it takes to speak the sentence. The beat might be distracting for users, he says.
Sergio Luque felt an intuitive familiarity with the structure.

Martin Carlé / Thomas Noll: Fourier-Scratching

More live blogging
The legacy of Helmholtz.
They’re using slow Fourier transforms instead of FFT. SFT!
They’re running something very sci-fi-ish, playing FM synthesis. (FM is really growing on me lately.) FM is simple and easy: with only two oscillators, you get a lot of possible sounds. They modulate the two modulators to form a sphere or something. You can select the spheres. They project the complex plane onto the sphere.
You can change one Fourier coefficient and it changes the whole sphere. (I think I missed an important step here of how the FM is mapped to the sphere and how changing the coefficients maps back to the FM.)
(Ok, I’m a bit lost.)
(I am still lost.)
Fourier scratching: “you have a rhythm that you like, and you let it travel.”
OK, the spheres are in Fourier-domain / time-domain pairs. Something about the cycle of 5ths. Now he’s changing the phase of the first coefficient. Now there are different timbres, but the rhythm is not changing.
(I am still lost. I should have had a second cup of coffee after lunch.)
(Actually, I frequently feel lost when people present on maths and the like associated with music. Science / tech composers are often smarter than I am.)
You can hear the coefficients, he says. There’s a lot of beeping and some discussion in German between the presenters. The example is starting to sound like you could dance to it, but a timbre is creeping up behind. All this needs is some bass drums.
If you try it out, he says, you’ll dig it.
Finite Fourier analysis with a time domain of 6 beats. Each coefficient is represented by a little ball and the signal is looping on the same beat. The loops move on a complex plane. The magnitude represents something with FM?
The extra dimension from Fourier is used to control any parameter. It is a sonification. This approach could be used to control anything. You could put a mixing board on the sphere.
JMC changed the definition of what it means to exponentiate.
Ron Kuivila is offering useful feedback.

Alo Allik: Audiovisual Composition with Three-Dimensional Continuous Cellular Automata

Still live blogging the supercollider symposium
f(x) – an audiovisual performance environment based on 3d continuous cellular automata. Uses Objective X, but the audio is in scserver.
The continuous cellular automata are values between 0 and 1. The state at the next time step is determined by evaluating the neighbours plus a constant. Now, a demo of a 1-d world of 19 cells. All are 0 except for the middle one, which is 1. Now it’s chugging along. 0.2 added to all. The value is taken modulo 1, to just keep the fractional part. Changing the offset to 0.5 really changes the results. It can have very dramatic transitions, but with very gradual fades. The images he’s showing are quite lovely, and the 3d version is cool.
Then he tried changing the weight of the neighbours. This causes the blobs to sort of scroll to the side. The whole effect is kind of like raindrops falling in a stream or in a moving bit of water in the road. You can also change the effect by changing the added constant over time.
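As a sanity check on my own understanding, here’s a tiny sketch of what I think the 1-d rule is doing (the exact neighbour weighting is my guess):

(
// rough sketch: new value = (weighted neighbours + self + offset) mod 1
var world = Array.fill(19, 0);
var weights = [0.25, 0.5, 0.25]; // skewing these scrolls the blobs sideways
var offset = 0.2;

world[9] = 1; // single seed cell in the middle

8.do({
 world = world.collect({ |val, i|
  var sum = (world.wrapAt(i - 1) * weights[0])
   + (val * weights[1])
   + (world.wrapAt(i + 1) * weights[2]);
  (sum + offset) % 1.0;
 });
 world.collect(_.round(0.01)).postln;
});
)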
Now he’s demoing his program and has allowed us to download his code off his computer. Somehow he’s gotten grids and stuff to dance around based on this. “The ‘World’ button resets the world.” Audience member: “Noooo!”
Now an audio example that’s very clearly tied in. Hopefully this is in the sample code we downloaded. It uses Warp1.ar 8 times.
This is nifty. Now there’s a question I couldn’t hear. Alo’s favourite pastime is to invent new mappings. He uses control specs on data from the visual app. There are many, many cells in the automata, so he only polls certain ones when he wants data.
More examples!

Julian Rohrhuber: Introducing Sonification Variables

More sc symposium live blogging

sonification

Objectivity is considered important in the sciences. The notions of this have changed quite a bit over the last 50 years, however. The old style of imaging has as much data as possible crammed in, like atlas maps. Mechanical reproduction subsequently becomes important – photos are objective. However, perception is somewhat unreliable. So now we have structural objectivity which uses logic + measurements.
We are data-centric.
What’s the real source of a recording? The original recording? The performer? The score? The mind of the composer?
Sound can be just sound, or it can just be a way of conveying information or something in between. You need theory to understand collected data.
What do we notice when we listen that we wouldn’t have noticed by looking? There needs to be collaboration. Sonification needs to integrate the theory.
In sonification, time must be scaled. There is a sonification operator that does something with maths. Now there are some formulas on his slide, but no audio examples.
Waveshaping is applying one function to another.
Theoretical physics. (SuperCollider for supercolliders.) Particles accelerate and a few of them crash. Electrons and protons in this example. There’s a diagram with squiggly lines. Virtual photons are emitted backwards in time? And interact with a proton? And something changes colour. There’s a theory or something called BFKL.
He’s showing an application that displays an equation and has a slider, and does something with the theory, so you can hear how the function would be graphed. Quantum mechanics is now thinking about frequencies. Also, this is a very nice-sounding equation.
Did this enable them to discover anything? No, but it changed the conceptualisation of the theory, very slightly.
Apparently, the scientists are also seeking beauty with sonification, so they involve artists to get that?
(I may be slightly misunderstanding this, I was at the club event until very late last night (this morning, actually).)
Ron Kuivila is saying something meaningful. Something about temporality, metaphilosophy, enumeration of state. Sound allows us to hear proportions with great precision, he says. There may be more interesting dynamical systems. Now about linguistics and mathematics and how linguistics helps you understand equations, and this is like Red Bird by Trevor Wishart.
Sound is therefore a formalisation.

Miguel Negrão: Real time wave field synthesis

Live blogging the sc symposium. I showed up late for this one.
He’s given a summary of the issues of wave field synthesis (using two computers) and is working on a sample-accurate real-time version entirely in SuperCollider. He has a sample-accurate version of SC, provided by Blackrain.
The master computer and slave computer are started at unknown times, but synched via an impulse. The sample number can then be calculated, since you know how long it’s been since each computer started.
All SynthDefs need to be the same on both computers. All must have the same random seed. All buffers must be on both. Etc. So he wrote a Cluster library that handles all of this, making two copies of everything in the background, but looking just like one. It holds an array of stuff. It has no methods of its own, but sends stuff down to the stuff it’s holding.
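That “holds an array, no methods, forwards everything” description sounds like a doesNotUnderstand-based wrapper; here’s a guess at the general shape (hypothetical names, not his actual code, which is linked below):

// hypothetical sketch of an object that holds an array and forwards every message to its items
ClusterSketch {
 var <items;

 *new { arg items;
  ^super.newCopyArgs(items);
 }

 // anything this class doesn't understand gets sent to every item it holds
 doesNotUnderstand { arg selector ... args;
  ^items.collect({ arg item; item.performList(selector, args); });
 }
}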
Applications of real-time wave field synthesis: connecting the synthesis with the place where it is spatialised. He’s doing some sort of spectral-synthesis-ish thing. Putting sine waves close together, you get nifty beating, which creates even more sense of movement. The position of a sine wave in space gives it a frequency. He thus makes a frequency field of the room. When stuff moves, it changes pitch according to location.
This is an artificial restriction that he has imposed. It suggested relationships that were interesting.
The scalar field is selected randomly. Each sine wave oscillator has (x, y) coords. The system is defined by choosing a set of frequencies, a set of scalar fields and groups of closely tuned sine wave oscillators. He’s used this system in several performances, including in the symposium concert. That had a maximum of 60 sine waves at any time. It was about slow changes and slow movements.
His code is available http://github.com/miguel-negrao/Cluster
He prefers the Leiden WFS system to the Berlin one.