Jason Dixon – controlling group laptop improvisation
Problems often stem from performers not listening to each other. Huge cacophony of noise, competitive, lost players. Then things drag on much too long. There is a sameness. People don’t look at each other and miss cues. Also, lack of visual element. The entire frequency spectrum being used by every player makes it impossible to pick out lines or anything.
Sonic example 1: improv gone wrong (have any of us here not heard this at least once?) And the example does indeed sound like a whole lotta noise.
Keys to success: force people to play quietly. Small amps, speakers located very close to the performers.
Alain Renaud developed a good system, the Frequencyliator: www.alainrenaud.net
Frequency spectrum divided among players, like instruments. Filters used to enforce this!
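Not Renaud’s actual code, but a guess at the mechanism in SuperCollider: each player’s signal gets routed through a band-pass region so they can only occupy their assigned slice of the spectrum. The band edges here are made up for illustration.

(
SynthDef("band-enforcer", { arg in = 1, out = 0, lo = 200, hi = 800;
    var sig;
    sig = AudioIn.ar(in);               // this player's input
    sig = HPF.ar(HPF.ar(sig, lo), lo);  // doubled filters for steeper edges
    sig = LPF.ar(LPF.ar(sig, hi), hi);
    Out.ar(out, sig);
}).play;
)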
Presenter has an idea for a genetic algorithm to instruct players.
Tag: SuperCollider
Live!! From the SuperCollider symposium
16:30
Cadavre exquis!
Need to grab my mac!
Site gets slow when everybody in the room tries to download from it.
A Public class sends actual code across the network. Yikes. There’s a message called ‘avoid the worst’ which keeps folks from executing Unix commands. Sorta.
It’s polite in gamelan to not play on each other’s beats, so speed changes are lagged. This clock system sort of models that.
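Not their code, just a minimal sketch of the idea: a requested tempo change glides in over several seconds instead of jumping, gamelan-style.

(
x = SynthDef("lagged-pulse", { arg out = 0, bps = 2;
    var tempo, tick;
    tempo = Lag.kr(bps, 4);  // a new tempo takes 4 seconds to arrive
    tick = Decay2.ar(Impulse.ar(tempo), 0.002, 0.1) * WhiteNoise.ar;
    Out.ar(out, tick * 0.3);
}).play;
)
// later: request a speed change and hear it ease in
x.set(\bps, 4);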
There is a collective class that discovers folks and keeps track of their IP addresses. Broadcasting makes this possible, I think.
SC symposium live blogging
16:00
Thom & Voldemars present on sonic quanta.
How do you deal with many, many control parameters? Understanding, controlling, but not being able to touch them individually.
One method is parameter reduction. However, they seek to be as direct as possible.
They have a matrix at the center of their system, which deals with all their data: a multidimensional data structure.
They have a visual representation. (How do they pick parameters and adjust them?)
The matrix projection has 3d clouds that look sorta chaos based. These clouds can rotate, move along, expand and contract. Also can warp from a plane to a surface.
They use things like speed of movement as control values for things like amplitude. The matrix may relate to spatialization? They are not using statistical controls for their grains. Makes parameters and relationships clear. This GUI is built in GTK, not SuperCollider.
They will use this as an installation. Now working on trajectory mapping, maybe with envelopes. The visualization is done in jitter.
They worked on a controller at steim, but then moved to mathematical controls.
Oops, it IS statistical. Oh, and they do use randomness and parameter reduction. I’m confused, except that there are white dots forming 3d states swooping around on the screen. Woosh! Swoop!
They are not sharing their code as of yet. Too shy.
Live blogging the SuperCollider symposium
15:13
GUIs for live improv
Musical software imitating physical instruments is silly. Screens are 2D, so interfaces should be 2D.
www.ixi-audio.net
The sound scratcher tool allows some mousy ways to modify sound file playback with looping, granulation, scratching, etc. The x/y axes are file selection and pitch.
Live patching. Predators is an awesome game-like algorithmic player which can incorporate live-coded synths and other aspects. Many playful, expressive, imaginative interfaces. Polyrhythm player.
Very interesting environment. Also suggests evolutionary strategies for algorithmic composition.
And another one
Auction 3 got a bid, so I put up auction 5. Auction 4 is still bidless.
The bidder for #3 wanted to know if s/he could remix the track when done. What a fabulous idea! I told hir yes. It feels so collaborative. I love it.
The whole eBay process, though, is kind of nerve-wracking. What if nobody bids in the next 4 days and 22 hours? Ack! Note to self: eBay bids are not a good measure of self-worth.
I contacted an arts blog today about buying advertising space. I’m running out of ideas for free publicity: I’ve gotten mentions on most of the New Music blogs that I know of, I’ve posted on most of my email lists, and I listed on Tribe. The publicity part of this project is kind of weird, but I see it as part of the project in a sort of conceptual-everything-is-art kind of way.
OSC -> CV
In other news, I’ve been investigating interfaces between computers and analog information. I’ve ordered a nifty joystick brain and I’ve just been informed of a cool-looking open source device which can create control voltages to send to a synthesizer. DIY electronics are really big right now. And this is good for lazy people like me, because it means that people are designing and selling little boutique devices. So I don’t have to do my own designing. The Arduino is cheap, open source(!), and made by workers getting a living wage. It’s perfect and somebody has already written a SuperCollider interface. W00t. Now all I need is to decide whether to go with USB or bluetooth. Wireless synthesizer control with a modular might be a little silly, but is still tempting.
Subliminals, Timbre and Convolution
Recently, on Boing Boing, there was a post about a company marketing a subliminal message to gamers. They would hear the message 10000 – 20000 times a second. That’s 10 kHz – 20 kHz. Those repetitions are almost too high to be in the audio range! I can’t hear 20 kHz all that well. Also, what about scaling? To keep from peaking, with up to 20,000 copies sounding at once, the maximum amplitude of each message would have to be between 0.00005 – 0.0001 (that is, 1/20000 – 1/10000) of the total amplitude range. That’s pretty subliminal, all right.
I went to work trying to play a short AIFF file over and over at that rate. My processor crapped out really fast. That’s a lot of addition. As I was falling asleep that night, I calculated that on a CD, each new message would start every 4 – 10 bytes! Why, at that rate, it’s practically convolution.
Indeed, it is more than “practically” convolution, it is convolution and as such it doesn’t need to be done via real-time additions, but can be done via free software like SoundHack. The first step is getting a series of impulses. To try to create a “subliminal” message, you need a series of positive impulses that vary randomly between 10000 – 20000 times per second. I wrote a short SuperCollider program to produce such impulses.
SynthDef("subliminal-impulse", {arg out = 0; var white, ir; white = WhiteNoise.kr; white = white.abs; white = white * 10000; white = white + 10000; ir = Dust.ar(white); Out.ar(out, ir); }).play
The WhiteNoise.kr produces random values between -1 and 1. We take the absolute value of that to just get numbers between 0 – 1. Then we multiply, to make them numbers between 0 – 10000 and add to put them in the range 10k – 20k.
Dust makes impulses at random intervals. The impulses are between 0 – 1. The argument is the average number of impulses per second. So Dust makes 10k – 20k impulses per second. Record the output of that to disk and you’ve got some noise, but it’s noise with some important characteristics – all the impulses are positive and they have zeros between them. This is what we need if we’re going to be subliminal at gamers.
Ok, so I’m going to take that file, open it in SoundHack and save a copy of it as a 16-bit file, rather than a 32-bit file. Then I’ll split the copy into separate mono files. (This is all under the file menu.) Then, to save disk space, I’ll throw away the 32-bit file and the silent right channel. So now I have a 16-bit mono file full of impulses open in SoundHack.
Under the Hack menu, there’s an option called “Convolution.” Pick that. Check the box that says “Normalize” (that will handle the amplitude for you so the result is neither too quiet nor too loud) and then hit the button that says “Pick Impulse.” This will be our recording of spoken text that we want made subliminal. (Fortunately, I had such a message at hand.) In actuality, it doesn’t matter which file is the one with the clicks and which is the one with the text. Convolution treats both files as equal partners. Then it asks us to name the output file. Then it goes, then we’re done. Here’s my result.
If you suddenly feel like forming a militia or running in fear, then it worked. If not, well, the sonic result is still kind of interesting. The timbres are all totally present but the actual sound events are unintelligible (at least to the conscious mind). For every one of our little impulses created by Dust.ar, we’ve got a new copy of Jessica plotting revolution. (The text is actually from Lesbian Philosophy: Explorations by Jeffner Allen (Palo Alto: Institute of Lesbian Studies, 1987) and the piece I originally made with it is here.)
This is actually a lot like granular synthesis, if you think about it. Imagine that instead of convolving the whole audio file, we just did 50 ms bits of it. Every impulse would start a new copy of the 50 ms grain, but with FFTs instead of additions. FFTs are faster, so we can have many, many more grains. And they could be smaller and still be meaningful. Heck, they could be the size of the FFT window.
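To make the analogy concrete, here’s a little sketch (not from the original experiment) using TGrains, where every Dust impulse fires a 50 ms grain of a buffer. At a sane density this is ordinary granular synthesis; crank the density toward 10k – 20k per second and the CPU gives out, which is exactly why the convolution trick is appealing. The sound file is just the standard SC example; substitute your own spoken text.

(
b = Buffer.read(s, "sounds/a11wlk01.wav"); // any mono file of spoken text
SynthDef("impulse-grains", { arg out = 0, bufnum = 0, density = 200;
    var trig, pos, grains;
    trig = Dust.ar(density);                // random grain triggers
    pos = MouseX.kr(0, BufDur.kr(bufnum));  // scrub through the file
    grains = TGrains.ar(2, trig, bufnum, 1, pos, 0.05, 0, 0.2);
    Out.ar(out, grains);
}).play(s, [\bufnum, b.bufnum]);
)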
The FFT version of a convolution involves taking a window of the impulse stream and another of the IR (our subliminal message – normally known as an impulse response). You add the phases together and multiply the amplitudes. The amplitude multiplications give us the right pitch and the phase additions give us the right timing – almost. Some additions will be too big for the window and wrap around to the beginning. You can avoid that by adding zero padding: double the size of the window, but only put input in the first half. Then none of your phases will wrap around.
We can get some very granular-like processes, but with nicer sound and better efficiency. For example, time stretching: we could update the IR only half as often as the impulse stream and do window-by-window convolutions. There are other applications here. I need to spend time thinking of what to do with this. Aside from sublimating revolution.
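SoundHack does all this offline, but here’s a minimal real-time sketch, under the assumption that SC’s Convolution UGen (a plugin that does this kind of zero-padded, block-by-block FFT convolution of two live signals) is installed. The impulse stream from earlier convolves with a looping playback of the message; the sound file is a stand-in for the spoken text.

(
b = Buffer.read(s, "sounds/a11wlk01.wav"); // stand-in for the spoken message
SynthDef("live-convolve", { arg out = 0, bufnum = 0;
    var clicks, message, conv;
    clicks = Dust.ar(LFNoise1.kr(0.2).range(10000, 20000)); // 10k-20k per sec
    message = PlayBuf.ar(1, bufnum, loop: 1);
    conv = Convolution.ar(clicks, message, 2048); // window-by-window FFT
    Out.ar(out, conv * 0.1); // convolution output runs hot; scale it down
}).play(s, [\bufnum, b.bufnum]);
)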
Time domain frequency tracking for voice and tuba
There are certain timbres which are most easily pitch tracked in the time domain, rather than the frequency domain. The tuba is one such timbre. It, like the human voice, tends to have one large impulse followed by several smaller impulses. The large impulse is the fundamental frequency. The following little hills are overtones and formants and whatnot. For the tuba, every large impulse is caused by a single buzz of the lips: lips flap open, impulse happens, impulse echoes, lips flap open again, another impulse, more echoes inside the horn. Your vocal cords work in the same way: they vibrate like buzzing lips and echo in your head. So the fundamental frequency of your voice is the frequency of the large impulses, not the smaller ones that follow.
So to know the pitch, all you have to do is know how often those peaks come. How do you recognize the peaks? Well, they certainly spike up above the average amplitude, while also raising it. A good amplitude-following algorithm is the root mean square (RMS). Since the tuba and the voice have low fundamental frequencies (220 Hz for a typical female voice, and the tuba gets down around 40 Hz and possibly below), it’s important to have a long enough window for the RMS. You want enough samples that you don’t get false positives.
Then you can subtract the RMS from the original signal. All but the high peaks will drop below 0. Then count the zero crossings. You’ll have to divide by two, since each peak crosses zero twice: once on the way up and once on the way down.
If you suspect that all of your pulse energy is negative for some reason, you have two options: You can try multiplying your original signal by -1 before subtracting. Or you can take the absolute value of the signal and use that.
Here’s some sample code for SuperCollider. Use headphones to prevent feedback and then sing into your computer. If your voice is low, you may need to adjust the window size. Note that it’s given in samples.
SynthDef("test-time-domain-freq-tracker", { arg in, out, rmswindow = 200; var rms, xings, inner, peaks, sin; inner = AudioIn.ar(in, 1); rms = (RunningSum.ar(inner.squared, rmswindow)/rmswindow).sqrt; peaks = inner - rms; xings = ZeroCrossing.ar(peaks); sin = SinOsc.ar(xings/2); Out.ar(out, sin); }).send(s);
Tags: SuperCollider, Celesteh
How to write a SC plugin / Biquad filter!!!
The tutorials in the SC distro are not really great on how to do plugins, so here’s my version. After you get the SC source code (an exercise left to the reader), you’re going to want to build your own copy of SC. This should not be the same copy that you use for doing your music, because development tends to break things and you don’t want to break your instrument. First build the Server, then the Plugins, then the Lang.
Once you’ve done that, open the plugin project with Xcode. In the project window, pick a target and ctrl-click on it to duplicate it. Then option-click the new target to rename it to [whatever]. Double-click on it to bring up the target inspector. In the summary, rename Base Product Name to [whatever].scx. Then click on “Settings” in the list on the left. Change Product Name to [whatever].scx. Close the target inspector and go back to the project window. Drag your new target into the list for All, so it gets built when you build all.
Ctrl-click on your new target again and choose Add; add a new C++ file. Don’t generate a header file for it. This is my example, a biquad filter:
/*
 * LesUGens.cpp
 * xSC3plugins
 *
 * Created by Celeste Hutchins on 16/10/06.
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License as published by the
 * Free Software Foundation; either version 2 of the License, or (at your
 * option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
 * Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 */

// all plugins should include SC_PlugIn.h
#include "SC_PlugIn.h"

// and this line
static InterfaceTable *ft;

// here you define the data that your plugin will need to keep around
// biquads have two delayed samples, so that's what we save
struct Biquad : public Unit
{
    // delayed samples
    float m_sa1;
    float m_sa2;
};

// declare the functions that your UGen will need
extern "C"
{
    // this line is required
    void load(InterfaceTable *inTable);

    // calculate the next batch of samples
    void Biquad_next(Biquad *unit, int numsamples);

    // constructor
    void Biquad_Ctor(Biquad* unit);
}

//////////////////////////////////////////////////////////////////////////

// Calculation function for the UGen. This gets called once per x samples
// (usually 64)
void Biquad_next(Biquad *unit, int numsamples)
{
    // pointers to in and out
    float *out = ZOUT(0);
    float *in = ZIN(0);

    // load delayed samples from our struct
    float delay1 = unit->m_sa1;
    float delay2 = unit->m_sa2;

    // the filter coefficients are passed in. These might change at the
    // control rate, so we re-read them every time.
    // the optimizer will stick these in registers
    float amp0 = ZIN0(1);
    float amp1 = ZIN0(2);
    float amp2 = ZIN0(3);
    float amp3 = ZIN0(4);
    float amp4 = ZIN0(5);

    float next_delay;

    // This loop actually does the calculation
    LOOP(numsamples,
        // read in the next sample
        float samp = ZXP(in);
        // calculate
        next_delay = (amp0 * samp) + (amp1 * delay1) + (amp2 * delay2);
        // write out result
        ZXP(out) = next_delay - (amp3 * delay1) - (amp4 * delay2);
        // keep track of data
        delay2 = delay1;
        delay1 = next_delay;
    );

    // write data back into the struct for the next time
    unit->m_sa1 = delay1;
    unit->m_sa2 = delay2;
}

// The constructor function
// This only runs once
// It initializes the struct
// Sets the calculation function
// And, for reasons I don't understand, calculates one sample of output
void Biquad_Ctor(Biquad *unit)
{
    // set the calculation function
    SETCALC(Biquad_next);
    // initialize data
    unit->m_sa1 = 0.f;
    unit->m_sa2 = 0.f;
    // 1 sample of output
    Biquad_next(unit, 1);
}

// This function gets called when the plugin is loaded
void load(InterfaceTable *inTable)
{
    // don't forget this line
    ft = inTable;
    // Nor this line
    DefineSimpleUnit(Biquad);
}
Ok, when you build this, it will get copied into the plugin directory. But that’s not enough. The language side (sclang) also needs to know about your new UGen. Create a new class file called [whatever].sc. You can stick this in your ~/Library; it won’t mess up your other copies of SuperCollider. This is my file:
Biquad : UGen {
    *ar { arg in, a0 = 1, a1 = 0, a2 = 0, a3 = 0, a4 = 0, mul = 1.0, add = 0.0;
        ^this.multiNew('audio', in, a0, a1, a2, a3, a4).madd(mul, add)
    }
    *kr { arg in, a0 = 1, a1 = 0, a2 = 0, a3 = 0, a4 = 0, mul = 1.0, add = 0.0;
        ^this.multiNew('control', in, a0, a1, a2, a3, a4).madd(mul, add)
    }
}
The multiNew part handles multiple channel expansion for you. The .madd adds the convenience variables mul and add. Your users like to have those.
I don’t know if a biquad filter comes with SC or not. I couldn’t find one. They’re useful for Karplus-Strong and a few other things. For more information, check out the Wikipedia article on filter design.
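If you want to hear the new UGen do something, here’s a hypothetical usage sketch. The coefficient mapping is my own derivation, not from any documentation, so double-check it: reading the loop above as w[n] = a0*x[n] + a1*w[n-1] + a2*w[n-2] and y[n] = w[n] - a3*w[n-1] - a4*w[n-2], a standard RBJ-cookbook low-pass (numerator n0, n1, n2 over denominator d0, d1, d2) maps to a0 = n0/d0, a1 = -d1/d0, a2 = -d2/d0, a3 = -n1/n0, a4 = -n2/n0. Assumes the plugin and class file are installed and the server is booted (for s.sampleRate).

(
var freq = 1000, q = 0.707, w0, cw, alpha, n0, n1, n2, d0, d1, d2;
w0 = 2pi * freq / s.sampleRate;
cw = cos(w0);
alpha = sin(w0) / (2 * q);
n0 = (1 - cw) / 2; n1 = 1 - cw; n2 = n0;       // numerator (b) coefficients
d0 = 1 + alpha; d1 = -2 * cw; d2 = 1 - alpha;  // denominator (a) coefficients
{ Biquad.ar(WhiteNoise.ar(0.1),
    n0/d0, (d1/d0).neg, (d2/d0).neg, (n1/n0).neg, (n2/n0).neg) }.play;
)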
Tags: SuperCollider , Celesteh
HID, SuperCollider and whatnot
My HID classes are now in version 1.0 and I’m not going to change them again without good reason. The change is that the callback action passes the changed element back as the first parameter, followed by all the other parameters. It’s redundant, but it matches how the subclasses work. This should be usable for HID applications. A helpfile will be forthcoming.
I am trying to write a Biquad filter UGen and also to compile an FFTW UGen that my friend wrote. Jam fails when I try to compile, on both files. I think there is a step missing in the howto document on UGen writing. My code is simple and the syntax looks OK. I’ve done every step (as far as I know) in the howto. Bah. I could use a better error message than “jam failed.” Sheesh. I like Xcode, but it’s got nothing on the Borland compilers I used back in the ’90s as far as usability and usefulness of error messages.
Workaround: Take an existing target and duplicate it and use that for your target. Rename it and replace the files in it with your own. Ctrl-click on the target to bring up a menu with the option to duplicate.
Tags: Celesteh, SuperCollider
Tutorials
The world has been crying out for my old SuperCollider tutorial. Well, not crying out, exactly. Some of you may recall that I had the idea of doing a tutorial as a thesis project. My advisor said it was disorganized and error-riddled, and so the project was abandoned. However, some stranger on the internet convinced me to send it to him.
This stranger was my host for my first two weeks here (the house with no hot water). His name is Jeremiah and he’s cool. Anyway, he told me that he liked the tutorial and that I should put it on the internet. So here you go. It’s incomplete and disorganized. The errors aren’t serious. (Lines of code are separated by semicolons, not terminated: that means the last line in any block doesn’t need a semicolon, but can have one anyway if you want. Blocks are not defined by parentheses, but rather by curly brackets or by highlighting code with the mouse. These are the two most glaring errors. All the examples should work.)
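For anyone new to SC, here’s a tiny illustration of those two points (this snippet is mine, not from the tutorial):

f = { arg x;     // a block is defined by curly brackets, not parentheses
    var y;
    y = x * 2;   // semicolons separate statements...
    y + 1        // ...so the last one can go without
};
f.value(3);      // returns 7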
Edit
Tutorials have moved to http://www.berkeleynoise.com/celesteh/podcast/?page_id=65 . Please update your links.