My Talk at the Sc Symposium

Picking Musical Tones

One of the great problems in electronic music is picking pitches and tunings.

The TuningLib quark helps manage this process.

First, there is some Scale stuff already in SuperCollider.
How to use a scale in a Pbind:

     a = Scale.ionian;

     p = Pbind(
          \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1, 0, \rest], 2),
          \scale, a,
          \dur, 0.25
     );

     q = p.play;


Key tracks key changes and modulations, so you can keep modulating or back out of modulations:

k = Key(Scale.choose);
k.change(4); // modulate to the 5th scale degree (we start counting with 0)
k.change; // go back


This will keep up through as many layers of modulations as you want.

It also does rounding:

quantizeFreq(freq, base, round, gravity)
Snaps the freq value in Hz to the nearest Hz value in the current key

gravity changes the level of attraction to the in tune frequency.

k.quantizeFreq(660, 440, \down, 0.5) // half way in tune

By changing gravity over time, you can have pitches tend towards being in or out of tune.


There is a huge library of pre-cooked tunings for the Scala program. This class opens those files.

a = Scala("slendro.scl");
b = a.scale;


This is actually a Partchian tuning diamond (and this class may get a new name in a new release):

l = Lattice([ 2, 5, 3, 7, 9])

The array holds the numbers used to generate the tuning ratios, so this gives:

1/1 5/4 3/2 7/4 9/8   for otonality
1/1 8/5 4/3 8/7 16/9  for utonality

otonality is overtones – the numbers you give are in the numerator
utonality is undertones – the numbers you give are in the denominator

All of the other numbers are powers of 2. You can change that to any other number, such as 3, with an optional second argument:

l = Lattice([ 2, 3, 5, 7, 11], 3)
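To see what the lattice is doing with those numbers, here is a quick sketch of the arithmetic in Python (an illustration only, not TuningLib's actual code): each number becomes an otonal ratio and its reciprocal becomes a utonal ratio, folded into one octave.

```python
from fractions import Fraction

def lattice_ratios(numbers, base=2):
    """Fold each number (otonal) and its reciprocal (utonal) into the
    range [1, base) by multiplying or dividing by the base interval."""
    def fold(r):
        while r >= base:
            r /= base
        while r < 1:
            r *= base
        return r
    otonal = [fold(Fraction(n)) for n in numbers]
    utonal = [fold(Fraction(1, n)) for n in numbers]
    return otonal, utonal

oto, uto = lattice_ratios([2, 5, 3, 7, 9])
print([str(r) for r in oto])  # → ['1', '5/4', '3/2', '7/4', '9/8']
print([str(r) for r in uto])  # → ['1', '8/5', '4/3', '8/7', '16/9']
```

This reproduces the otonality and utonality rows above; passing `base=3` folds everything by threes instead of octaves.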

Lattices also generate a table:

1/1  5/4  3/2  7/4  9/8
8/5  1/1  6/5  7/5  9/5
4/3  5/3  1/1  7/6  3/2
8/7  10/7 12/7 1/1  9/7
16/9 10/9 4/3  14/9 1/1

It is possible to walk around this table to make nice triads that are harmonically related:


 (
  var lat, orientation, startx, starty, baseFreq;

  SynthDef("sine", {arg out = 0, dur = 5, freq, amp = 0.2, pan = 0;
   var env, osc;
   env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
   osc = SinOsc.ar(freq, 0, env);
   Out.ar(out, osc * amp);
  }).add;

  lat = Lattice([ 2, 5, 3, 7, 9]);
  orientation = true;
  startx = 0;
  starty = 0;
  baseFreq = 440;

  Pbind(
   \instrument, \sine,
   \amp, 0.3,
   \freq, Pfunc({
    var starts, result;
    orientation = orientation.not;
    starts = lat.d3Pivot(startx, starty, orientation);
    startx = starts.first;
    starty = starts.last;
    result = lat.makeIntervals(startx, starty, orientation);
    (result * baseFreq)
   })
  ).play;
 )
Somewhat embarrassingly, I got confused between 2 and 3 dimensions when I wrote this code. A forthcoming version will have different method names, but the old ones will still be kept around so as not to break your code.


This is not the only quark that does dissonance curves in SuperCollider.

Dissonance curves are used to compute tunings based on timbre, which is to say the spectrum.

d = DissonanceCurve([440], [1])

The high part of the graph is highly dissonant and the low part is not dissonant. (The horizontal axis is cents.) This is for just one pitch, but with additional pitches, the graph changes:

d = DissonanceCurve([335, 440], [0.7, 0.3])

The combination of pitches produces a more complex graph with minima. Those minima are good scale steps.
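The underlying roughness calculation can be sketched like this in Python. This uses the commonly cited Sethares-style sensory-dissonance constants; TuningLib's exact constants and method names may differ:

```python
import math

def dissonance(freqs, amps):
    """Summed roughness over all pairs of partials
    (a Sethares-style model; constants are the commonly
    quoted ones, not necessarily TuningLib's)."""
    total = 0.0
    n = len(freqs)
    for i in range(n):
        for j in range(i + 1, n):
            f1, f2 = sorted((freqs[i], freqs[j]))
            s = 0.24 / (0.021 * f1 + 19)  # critical-bandwidth scaling
            x = s * (f2 - f1)
            total += min(amps[i], amps[j]) * (math.exp(-3.5 * x) - math.exp(-5.75 * x))
    return total

# sweep a transposed copy of the spectrum against the original,
# one point per 10 cents, to trace a dissonance curve over an octave
spectrum = [335, 440]
amps = [0.7, 0.3]
curve = []
for cents in range(0, 1200, 10):
    shifted = [f * 2 ** (cents / 1200) for f in spectrum]
    curve.append(dissonance(spectrum + shifted, amps + amps))
```

The local minima of `curve` are the candidate scale steps.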

This class is currently optimised for FM, but subsequent versions will calculate spectra for Ring Modulation, AM, Phase Modulation and combinations of all of those things.



 (
  var carrier, modulator, depth, curve, scale, degrees;

  SynthDef("fm", {arg out = 0, amp = 0.2, carrier, modulator, depth, dur, midinote = 0;
   var sin, ratio, env;

   ratio = midinote.midiratio;
   carrier = carrier * ratio;
   modulator = modulator * ratio;
   depth = depth * ratio;

   sin = SinOsc.ar(SinOsc.ar(modulator, 0, depth, carrier));
   env = EnvGen.kr(Env.sine(dur)) * amp;
   Out.ar(out, (sin * env).dup);
  }).add;

  carrier = 440;
  modulator = 600;
  depth = 100;
  curve = DissonanceCurve.fm(carrier, modulator, depth, 1200);
  scale = curve.scale;

  degrees = (0..scale.size); // make an array of all the scale degrees

  // We don't know how many pitches per octave there will be until after the
  // DissonanceCurve is calculated. However, degrees outside of the range
  // will be mapped accordingly.

  Pbind(
   \instrument, \fm,
   \octave, 0,
   \scale, scale,
   \degree, Pseq([
    Pseq(degrees, 1), // play one octave
    Pseq([-3, 2, 0, -1, 3, 1], 1) // play other notes
   ], 1),
   \carrier, carrier,
   \modulator, modulator,
   \depth, depth
  ).play;
 )

The only problem here is that this conflicts entirely with Just Intonation!

For just tunings based on spectra, we would calculate dissonance based on the ratios of the partials of the sound. Low numbers are more in tune, high numbers are less in tune.

There’s only one problem with this:
Here’s a graph of just a sine tone:

d = DissonanceCurve([440], [1])
d.just_curve.collect({|diss| diss.dissonance}).plot

How do we pick tuning degrees?

We use a moving window where we pick the most consonant tuning within that window. This defaults to 100 cents, assuming you want something with roughly normal step sizes.
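The window idea can be sketched in Python like this. This is a simplified, tiled-window version of the idea (TuningLib's actual implementation may use a genuinely moving window):

```python
def pick_scale_degrees(curve, window=100):
    """Pick the most consonant (lowest-dissonance) entry in each
    successive window along the cents axis; curve has one entry per cent."""
    degrees = []
    for start in range(0, len(curve), window):
        chunk = curve[start:start + window]
        best = min(range(len(chunk)), key=chunk.__getitem__)
        degrees.append(start + best)
    return degrees

# toy curve with a dip of maximum consonance every 100 cents
diss = [abs((c % 100) - 50) for c in range(1200)]
print(pick_scale_degrees(diss))  # one pick per 100-cent window
```

With a real dissonance curve, the picks land on the curve's local minima rather than on an evenly spaced grid.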

Then, to pick scale steps, we can ask for the n most consonant tunings:

t = d.digestibleScale(100, 7); // pick the 7 most consonant tunings
(
var carrier, modulator, depth, curve, scale, degrees;
carrier = 440;
modulator = 600;
depth = 100;
curve = DissonanceCurve.fm(carrier, modulator, depth, 1200);
scale = curve.digestibleScale(100, 7); // pick the 7 most consonant tunings
degrees = (0..(scale.size - 1)); // make an array of all the scale degrees (you can't assume the size is 7)

Pbind(
 \instrument, \fm,
 \octave, 0,
 \scale, scale,
 \degree, Pseq([
  Pseq(degrees, 1), // play one octave
  Pseq([-7, 2, 0, -5, 4, 1], 1)], 1), // play other notes
 \carrier, carrier,
 \modulator, modulator,
 \depth, depth
).play;
)

Future plans

  • Update the help files!
  • Add the ability to calculate more spectra – PM, RM, AM, etc
  • Make some of the method names more reasonable


Comments from the audience.

  • key – does it recalc the scale or not? Let the user decide
  • just dissonance curve – limit tuning ratios
  • lattice – make n dimensional
  • digestible scale – print scale ratios

Sc symposium – chris brown – ritmos

software to explore perception and performance of polyrhythms
inspired by afro-cuban music
ritmos can play in polyrhythmic modes and can listen to a live input. It deals with the difference between a player and a clave.
this was first implemented in HMSL!! As a piece called Talking Drum.
Branches is in sc2 and is in the same series of pieces.
so now there’s a new version in sc3.


RitmosPlay defines a voice stream hierarchy and scheduling
RitmosXfrm defines interaction algorithms
he uses a genetic algorithm to balance between the specified clave and the input.
he's got presets and sequences that deal with current settings.
he’s going into a lot of detail about how this works. It’s complex.
this has an impressive gui. And indeed an impressive functionality. And sounds great.
graphics library… I wish i’d caught the name of…

Sc symposium – lily play – bernardo barros

music notation with supercollider
he used to use OpenMusic, but then moved to Linux.
OpenMusic is IRCAM software in Common Lisp for algorithmic music composition. It looks like Max, but makes notation.
SC can do everything OM can do except the notation visualisation.
he uses LilyPond. INScore might be faster.
LilyPond is free and cross-platform. It’s simple.
He’s done 3 projects? superFomus and LilyCollider.


Uses Fomus. Works with events and patterns. It outputs to LilyPond and MuseScore
this is cool
he's showing a usage case with Xenakis's sieves. He's got some functions as sieves and then does set operations.
this doesn’t work well with metric structures. You’re stuck wrt bar lengths.


division and addition models of rhythm
rhythm trees can represent almost all kinds of rhythm. It's an array of duration and division that can be nested.
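As I understand it, that nesting can be sketched roughly like this in Python (my own toy model of nested weights, not LilyCollider's actual syntax):

```python
from fractions import Fraction

def unfold(duration, node):
    """Expand a rhythm-tree node into a flat list of durations.
    A node is an int (a leaf) or a (weight, children) pair whose
    children split the node's duration in proportion to their weights."""
    if isinstance(node, int):
        return [duration]
    weight, children = node
    weights = [c if isinstance(c, int) else c[0] for c in children]
    total = sum(weights)
    result = []
    for child, w in zip(children, weights):
        result.extend(unfold(duration * Fraction(w, total), child))
    return result

# a whole note split in two: a half note, then a triplet filling the other half
durs = unfold(Fraction(1), (1, [1, (1, [1, 1, 1])]))
print([str(d) for d in durs])  # → ['1/2', '1/6', '1/6', '1/6']
```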
he is using a syntax that I don't know at all… Someone in front of me has a help file open on list comprehensions.
he’s got a very compelling looking score open.
in future he wants to use spanners from Abjad to handle some markings. And also he wants some help and feedback for future versions.


can you use this to get from MIDI to LilyPond? Yes, with Fomus.
what about includes? You can make a template.

Live blogging the sc symposium – ron kuivila

naming is the fundamental control mechanism of supercollider (unnamed gets garbage collected).
‘play’ generates instances. It returns a new object of a different class, which confuses n00bs. What you see on the screen does not give you a clue.
The object that defines the work gets misidentified as the one doing the work.
JITLib's def classes solve this problem. It makes it easier to share code. Play messages go to the def class. The def class gives it all a name.
node proxies give you a gui for free and are also useful pedagogically.
PatternConductor is an interactive control that's easier than EventStreamPlayer. It deals better with sustain issues.
CV is a control value constrained by an associated Spec. CV can be applied to several different contexts simultaneously. Touch is a companion class that does something complex about a CV's value.
Ron is rewriting Conductor and I should talk to him about this.
yield is a bummer for beginners writing co-routines.


is confusing.
Pspawner is a class that seeks to be less confusing syntactically. It does something with Prouts that's slightly confusing….
Syntactic convenience yields conceptual confusion…
he's asking if Pspawner is a good idea.
Pspawner is a hybrid between patterns and Routines. One of my students would have loved this. He says it’s a compositional strategy about notation and direction in scores. I may also come to love this class.
And he took no questions!

Live Blogging Sc Symposium – Guitar Granulator by Martin Hünniger

He’s got a stomp box that granulates – he has a software and hardware version of this.
2 granulators, fx chains, MIDI controller knobs, patches etc in software
The hardware version has one granulator and some knobs.
It's built on a BeagleBoard, with an Arduino Uno for pots and LEDs.
He uses SC running the BeagleBoard Linux distro from Stanford (Satellite CCRMA). Works out of the box.
Granular synthesis is cool. He uses a ‘ring buffer’ because it’s live sampling. This is a buffer that loops.
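The ring-buffer idea, roughly (a toy Python sketch of the concept, not his actual SC code): the writer keeps recording over the oldest samples while grains can be read from anywhere in the recent past.

```python
class RingBuffer:
    """Fixed-size buffer that wraps around, as used for live sampling."""
    def __init__(self, size):
        self.data = [0.0] * size
        self.write_pos = 0

    def write(self, samples):
        # each new sample overwrites the oldest one
        for s in samples:
            self.data[self.write_pos] = s
            self.write_pos = (self.write_pos + 1) % len(self.data)

    def read(self, pos, n):
        # grains can start anywhere and wrap past the end
        return [self.data[(pos + i) % len(self.data)] for i in range(n)]

rb = RingBuffer(4)
rb.write([1.0, 2.0, 3.0, 4.0, 5.0])  # the fifth sample wraps around
print(rb.data)  # → [5.0, 2.0, 3.0, 4.0]
```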
This is really cool.

Live Blogging the Sc Symposium – Flocking by Colin Clark

Flocking – audio synthesis in javascript on the web

audio synthesis framework written in javascript

specifically intended to support artists

Inspired by SC

Web is everywhere

programming environments that have graphical tools

Flocking is highly declarative

Synth graphs declare trees of named unit generators – you write data structures, not code

Data is easy to manipulate

  synthDef: {
    ugen: "flock.ugen.sinOsc",
    freq: 440,
    mul: 0.25
  }
He skips the rate: "audio" because that's the default.

  synthDef: {
    ugen: "flock.ugen.sinOsc",
    freq: 440,
    mul: {

It handles buffers and whatnot, but not multichannel expansion.
Scheduling is unreliable…but works

A useful script

The best way to remember to do something when you're going to run some important program is to put it in the program itself. Or at the very least, put it in a script that you use to invoke the program.
I have a few things I need to remember for the performance I'm preparing for. One has to do with a projector. I'm using a stylus to draw cloud shapes on my screen. And one way I can do this is to mirror my screen to a projector so the audience can see my GUI. However, doing this usually changes the geometry of my laptop screen, so that instead of extending all the way to the edges, there are empty black bars on either side of the used portion of my display. That's fine, except the stylus doesn't know and doesn't adjust. So to reach the far right edge of the drawn portion of the screen, I need to touch the far right edge of the whole screen, which puts over a centimetre between the stylus tip and the arrow pointer. Suboptimal!
Ideally, I'd like to have any change in screen geometry trigger a script that changes the settings for the stylus (and I have ideas about how that may or may not work, using upstart, xrandr and xsetwacom), but in the absence of that, I just want to launch a manual calibration program. If I launch the settings panel, there's a button on it that launches one. So the top part of my script checks if the calibration is different than normal and launches settings if it is.
The next things I need to remember are audio related. I need to kill pulseaudio. If my soundcard (a Fast Track Ultra) is attached, I need to change the amplitude settings internally so it doesn’t send the input straight to the output. Then I need to start jack using it. Or if it’s not attached, I need to start jack using a default device. Then, because it’s useful, I should start up Jack Control, so I can do some routing, should I need it. (Note: in 12.04 if you start qjackctl after starting jackd, it doesn’t work properly. This is fixed by 13.04.) Finally, I should see if SuperCollider is already running and if not, I should start it.
That's a bit too much to remember for a performance, so I wrote a script. The one thing I need to remember with this script is that if I want to kill jack, it won't die from Jack Control, so I'll need to do a kill -9 from the prompt. Hopefully, this will not be an issue on stage.
This is my script:


#!/bin/bash

# first check the screen

LINE=`xrandr -q | grep Screen`
WIDTH=`echo ${LINE} | awk '{ print $8 }'`
HEIGHT=`echo ${LINE} | awk '{ print $10 }' | awk -F"," '{ print $1 }'`

if [[ ${WIDTH} != 1366 || ${HEIGHT} != 768 ]]; then
 gnome-control-center wacom
else
 echo normal resolution
fi

# now set up the audio

pulseaudio --kill

# is the ultra attached?
if aplay -l | grep -qi ultra; then
 echo ultra
 # adjust amplitude so inputs aren't routed straight to outputs
 for i in $(seq 8); do
  for j in $(seq 8); do
   if [ "$i" != "$j" ]; then
    amixer -c Ultra set "DIn$i - Out$j" 0% > /dev/null
   else
    amixer -c Ultra set "DIn$i - Out$j" 100% > /dev/null
   fi
   amixer -c Ultra set "AIn$i - Out$j" 0% > /dev/null
  done
 done

 #for i in $(seq 4); do
 # amixer -c Ultra set "Effects return $i" 0% > /dev/null
 #done

 # start jack
 jackd -d alsa -d hw:Ultra &
else
 # start jack with default hardware
 jackd -d alsa -d hw:0 &
fi

sleep 2

# jack control
qjackctl &

sleep 1

# is supercollider running?
if ps aux | grep -vi grep | grep -q scide; then
 echo already running
else
 scide test.scd &
fi

Compiling SuperCollider on Ubuntu Studio 13.04 beta 2 (and otherwise setting up audio stuff)

The list of required libraries has changed somewhat from different versions. This is what I did:

sudo apt-get install git cmake libsndfile1-dev libfftw3-dev  build-essential  libqt4-dev libqtwebkit-dev libasound2-dev libavahi-client-dev libicu-dev libreadline6-dev libxt-dev pkg-config subversion libcwiid1 libjack-jackd2-dev emacs gnome-alsamixer  libbluetooth-dev libcwiid-dev netatalk

git clone --recursive

cd supercollider

mkdir build

cd build

cmake ..

make


If all that worked, then you should install it:

sudo make install


If it starts, you’re all good!
Users may note that this version of Ubuntu Studio can compile in Supernova support, so that’s very exciting.
I’ve gone to a beta version of Ubuntu Studio because Jack was giving me a bit of trouble on my previous install, so we’ll see if this sorts it out.
Note in the apt-get part that emacs is extremely optional and netatalk allows me to mount Apple Mac file systems that are shared via AppleTalk, something I need to do with my laptop ensemble, but which not everyone will need. gnome-alsamixer is also optional and is a gnome app. It's a GUI mixer application which lets you set levels on your sound card. Mine was sending the ins straight to the outs, which is not what I wanted, so I could fix it this way or by writing and running a script. Being lazy, I thought the GUI would be a bit easier. There's also a command line application called alsamixer, if you like that retro '80s computing feeling.
It can also be handy to sometimes kill pulse audio without it respawning over and over. Fortunately, it’s possible to do this:

sudo gedit /etc/pulse/client.conf

Add in these two lines:

autospawn = no
daemon-binary = /bin/true

I still want pulse to start by default when I login, though, so I've set it to start automatically. I found the application called Startup Applications and clicked add. For the name, I put pulseaudio. For the command, I put:

pulseaudio --start

Then I clicked the add button on that dialog screen and it’s added. When I want to kill pulseaudio, I will open a terminal and type:

pulseaudio --kill

and when I want it back again, I’ll type:

pulseaudio --start

(I have not yet had a killing and a restarting throw any of my applications into silent confusion, but I’m sure it will happen at some point.)
There’s more on this


It’s time for everybody’s favourite collaborative real time network live coding tool for SuperCollider.
Invented by PowerBooks UnPlugged – granular synthesis playing across a bunch of unplugged laptops.
Then some of them started Republic111, which is named for the number of the room where they taught the workshop.
Code reading is interesting in network music, partly because of stealing, but also to understand somebody else's code quickly, or actively understand it by changing it. Live coding is a public or collective thinking action.
If you evaluate code, it shows up in a history file, and gets sent to everybody else in the Republic. You can stop the sound of everybody on the network. All the SynthDefs are saved. People play ‘really equally’ on everybody’s computer. Users don’t feel obligated to act, but rather to respond. Participants spend most of their time listening
Republic is mainly one big class, which is a weakness; it should be broken up into smaller classes that can be used separately. Scott Wilson is working on a newer version, which is on GitHub. Look up 'The Way Things May Go' on Vimeo.
Graham and Jonas have done a system which allows you to see a map of who is emitting what sound and you can click on it and get the Tdef that made it.
Scott is putting out a call for participation and discussion about how it should be.