Source code for Christmaswave – overlap, duration and repetition

This is the fourth part of a series of posts about how I created my album Christmaswave. Previously, part 1 posted a list of questions and answered the first two of them: ‘What sample am I going to play?’ and ‘How many times am I going to divide it in half?’. Part 2 answered the question, ‘Once it’s chopped into little (or not-so-little) pieces, which one of them am I going to play?’. Part 3 talked endlessly in the meandering way of hangovers about playback rates.

One thing I have not addressed is how I picked which parts of source material to use. Unless the words were compelling in some way, I tended to go for phrases that were instrumental. In some songs, this left me only the intro and the outro. This is why most of the pieces have source material taken from multiple versions of the same song.

How much should an event overlap whatever comes after?

Normally, you would do this by setting the \legato part of the event, which is ‘the ratio of the synth’s duration to the event’s duration’. I set this to 1.1 in most of the pieces, which helped cover the joins a bit in case the slicing was in slightly the wrong place.

In most pieces, I varied it:


\legato, Pwhite(0.8, 1.5)

Some depended on the duration of the bit of sample I’m playing. This is what I did with Sledge Trudge:


\legato, Pfunc({|evt|
	var dur, legato;
	dur = evt[\dur];
	// give very short slices more overlap
	(dur < 0.01).if({
		legato = 2
	}, {
		(dur < 0.07).if({ legato = 1.5 }, { legato = 1 })
	});
	legato
})

How long should I wait before going to the next thing (which might be a repetition of what I just did)?

This is the question of what to put in the \dur part of the event, which depends on the number of frames in the sample we’re playing, the sample rate of the file, and the playback rate.

I pulled a lot of my source material from YouTube, using a browser plugin to download and convert to mp4 and then using Audacity to convert to wav files. These files were often 48k, which is the standard for film, but audio is generally at 44.1k, so I did have to take the source file’s sample rate into account.
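For playback (as opposed to duration), the standard SuperCollider idiom for this compensation is BufRateScale, which scales the rate by the ratio of the buffer’s sample rate to the server’s. A quick sketch, assuming b is a loaded stereo buffer (the code below instead folds buf.sampleRate into the duration maths):


// BufRateScale.kr(b) returns buffer sample rate / server sample rate,
// so a 48k file plays correctly on a 44.1k server; the 0.5 then halves the speed
{ PlayBuf.ar(2, b, rate: BufRateScale.kr(b) * 0.5) }.play;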

It makes sense to calculate the start frame and duration together. This example is from Out in the Cold:


[\startFrame, \dur], Pfunc({|evt|
	var buf, startFrame, dur, div, start, frames, rate;
	start = evt[\start];
	div = evt[\div];
	buf = evt[\buf];
	frames = buf.numFrames;
	startFrame = frames * (start / div);
	rate = evt[\rate];
	// duration of one division of the buffer, corrected for playback rate
	dur = (frames / buf.sampleRate) / rate * div.reciprocal;

	[startFrame, dur]
})

How many times should I repeat this thing?

The glitchy repeating of the same event multiple times is a repetition of the entire event, not just some parts of it. I did this with Pclutch, which has a slightly weird syntax.


Pclutch(
pattern,
connected
).play

The pattern is just your Pbind or similar, but the connected part is slightly odd. If it’s true, the Pbind is evaluated to produce a new value. If it’s false, the previous event is repeated. It’s possible to use a pattern to control this.
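Put together, a minimal sketch might look like this (the Pbind here is made up for illustration):


// repeat each new event three times before generating the next one
Pclutch(
	Pbind(\degree, Pwhite(0, 7), \dur, 0.2),
	Pseq([1, 0, 0, 0], inf) // 1 = new event, 0 = repeat the last one
).play;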

This is the pattern I used for Walking in a Winter No Man's Land. 0 is equivalent to false and 1 is equivalent to true.


Pseq([0, 0, 0, 0, 1,
	Prand([
		Pseq([Pn(0, 2), 1], 1),
		Pseq([Pn(0, 3), 1], 1),
		Pseq([Pn(0, 4), 1], 1)
	], 100)
])

Starting with the Pseq means that the Pclutch will repeat 4 times and then make a new event. After that, it picks randomly from 3 Pseqs. The first repeats twice, the second 3 times and the third 4 times. It picks randomly between these repetition patterns 100 times and then stops.

Source code for Christmaswave – picking playback rates

This is part 3 of a series about how I created my album Christmaswave. Previously, part 1 posted a list of questions and answered the first two of them: ‘What sample am I going to play?’ and ‘How many times am I going to divide it in half?’. Part 2 answered the question, ‘Once it’s chopped into little (or not-so-little) pieces, which one of them am I going to play?’

One thing I’ve talked about in both posts is balancing a desire for predictability in running the code against allowing for random variation. This may seem strange for a studio album, but with my previous Christmas album, 12 Days of Crimbo, I did end up playing one of the pieces, Little Dubstep Boy, live at an algorave in Birmingham, so it is possible some of these pieces may have a live afterlife.

Perhaps more importantly, allowing for variation allows the discovery of good juxtapositions, which can be made more likely in subsequent revisions of the code. It also makes the process of making the pieces more interesting. Listening to the same thing over and over again in a DAW can get tedious, but subtle changes give something to listen carefully for, keeping one’s ears fresh. I have far more skill in coding pieces than in piecing them together manually, and allowing randomness is an important part of my process.

What speed am I going to play the next bit at?

Having decided which sample to play, how small to chop it up and which subsection of the sample to play, the next question is what speed to play it back at. This is a question of the rate at which I play back the buffer, such that both speed and pitch are affected.

A lot of the Vaporwave I’ve listened to seems to play back their samples at slower than the original speed. It also sometimes switches speeds, so that both pitch and timing jump. I like this effect and chose to copy it.

I thought of the timing changes in terms of scales. In Just Intonation, scale steps are given as fractions between the ratios 1/1 and 2/1, where 2/1 is one octave higher than 1/1. Because I wanted slower speeds, I adjusted downwards, so that the ratios I used were either between 1/2 and 1/1 or between 1/4 and 1/2. These ratios can then be used directly as the rate parameter in a PlayBuf.ar UGen.
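For example, assuming b is a loaded stereo buffer, a just fifth dropped an octave looks like this:


// 3/2 is a just perfect fifth; * 0.5 drops it an octave,
// giving a rate of 0.75: slower and lower than the original
{ PlayBuf.ar(2, b, rate: 3/2 * 0.5, doneAction: 2) }.play;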

Obviously, almost all of the source material is in 12TET – that is to say: 12 tone equal temperament, the tuning one finds across the white and black keys of a standardly-tuned piano. Any tuning I apply on top of that is more or less in conflict with the original. This can be minimised by only using one rate at a time and repeating it across several subsequent bits. By sticking to one rate for a while, a change later on becomes roughly analogous to a key change. It would have been possible to have a set of sensible key changes that work with the standardised harmony conventions of the original song, but this is not what I did.

Long story short: I picked some scales and tunings

The first piece I worked on, Have Yourself a Scary Christmas, is based on Have Yourself a Merry Little Christmas, sung by Judy Garland. That piece, and especially that performance of it, has quite a bit of emotional complexity. The ambiguity of the piece is apparent even in the title, which is not ‘Have a Merry Christmas’, but the more emotionally strangled ‘Have Yourself a Merry Little Christmas’, like someone trying unsuccessfully to be friendly to an ex they’re desperately not over. The music itself is ambiguous about whether it’s in a major key or its relative minor. Garland’s performance adds to this further, sounding slightly drunken and sliding the tuning around in a way that sounds like she’s papering over crushing unhappiness. This effect is especially apparent if you listen to the song slowed down by a factor of 4 with Paulstretch.

In order to draw out the sadness a bit, I used a Romanian minor scale. This is part of the scale library included with the SuperCollider Scale object. I then randomly picked ratios from the scale.


\scale, Scale.romanianMinor(\just),
\rate, Pfunc({|evt|
	var scale, degree, rate;

	scale = evt[\scale];
	// any degree is possible, but the low degrees are weighted more heavily
	degree = (Array.series(scale.ratios.size, 0, 1) ++ [0, 0, 0, 0, 1, 1, 1, 2, 2, 3]).choose;
	rate = scale.ratios[degree] * [0.25, 0.5, 0.5].choose;
	rate
})

Support for scales is built into Patterns, but it’s most commonly used to compute frequencies, not playback rates.
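For comparison, the conventional usage, where \scale and \degree combine to produce frequencies rather than playback rates, looks something like this sketch:


Pbind(
	\scale, Scale.romanianMinor(\just),
	\degree, Pwhite(0, 6),
	\dur, 0.25
).play;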

Almost all the pieces I made used minor scales. I used trial and error to pick which scale worked best with which samples. For all the pieces, I picked degrees randomly and never used anything like a Finite State Machine or a random walk, even though those are usually well-suited to picking scale degrees.

In most pieces I repeated the chosen rate, using Pstutter. This is code from Santa Loves You:


\scale, Scale.hungarianMinor(\just),
\degree, Pstutter(8, Pfunc({|evt| evt[\scale].degrees.size.rand}),1),
\rate, Pfunc({|evt| evt[\scale].degreeToRatio(evt[\degree])}) * (2/3),

The Pstutter on the scale degree repeats the degree 8 times before picking a new one with the Pfunc. The rate is calculated from the degree. Because this piece uses voices, I decided to stay closer to a normal playback rate. Using 2/3 instead of 0.5 or smaller means that the samples will sometimes play back faster, and higher, than the original.

Source code for Christmaswave – what segment do I play next?

Previously, I talked about the method I used for creating the Christmaswave pieces and the issues this raises. My last post answered the questions ‘What sample am I going to play?’ and ‘How many times am I going to divide it in half?’ and, alas, the answer for the latter of those was hopelessly overcomplicated. Doing everything in terms of powers of two was not always a good approach, even for this project. Some songs are in 3/4 time and won’t do well with these divisions. Also, a lot of Christmas classics are performed in a jazz style and thus have a strong triplet feel. Play any Bing Crosby Christmas song at 1/2 speed and the swing becomes extremely apparent. It becomes ludicrously strongly swung at quarter speed. We’ll get back to these issues a bit later on.

Once it’s chopped into little (or not-so-little) pieces, which one of them am I going to play?

Randomly

Over the course of the album, I tried a few approaches. For several of the songs, I just picked randomly. This did work, but it means that some runs of the code come out with much better results than others. The random number generators in The Santopticon (He Sees You) meant I had to record that piece 14 times before I got the version I wanted to use.

In order

Another approach I used was to start at the beginning of the sample and move towards the end of it. This copies the arc of the original and makes the playback/recording more reliable, although it does remove some opportunities for serendipity.

This is the code I used for Der Tannenbaumherumtanz:


\env, Pseg(Pseq([0, 1]), 200),
\start, Pfunc({|evt|
	var frac, start, div;
	div = evt[\div];
	frac = evt[\env];
	start = frac * div;
	start = start.asInt;
	"div is %, start is %, frac is %".format(div, start, frac.round(0.001)).postln;
	start;
}) + Pwhite(-1, 1),

\env has a value in it that goes from 0 to 1 over 200 seconds, which was the length of this section.

\start holds the number of the division to start on – for example, if there are 4 divisions, it might hold 0, 1, 2, or 3. It computes this by first getting the \div, computed previously in the event. Then it looks at \env, which, as it goes from 0 to 1, is a fractional representation of how far we are through the section. A quarter of the way through, it should be 0.25. At the half way point, it should be 0.5, etc. It puts this in the variable frac.

I multiplied frac by the number of divisions. Then, because this number will almost always be fractional, I convert it to an integer so as to get a usable index number. asInt truncates it to a whole number.

Finally, in order to make this slightly less predictable, I take the result of that calculation and randomly subtract 1, do nothing or add 1: that’s the Pwhite(-1, 1) added onto the Pfunc.

Random Walk

In Lettuce, No, I picked a section adjacent to the one just played. This tends to meander around the middle of the sample, but the contents of neighbouring divisions are always related to each other, so the results tend to sound good without being too predictable.


\start, Prout({|evt|
	var div, start, pos;

	pos = 0; // starting position

	inf.do({
		div = 2.pow(evt[\pow]).asInt;
		start = (pos * div).asInt;

		// figure out pos for next time
		pos = start + [-1, 1, 1].choose; // tend to increment
		pos = (pos / div).abs.min(1);

		evt = start.yield;
	})
})

The Prout returns the start index, but also, while doing so, computes the position. This position is a fraction, like \env above. However, instead of moving from 0 to 1, it meanders around, tending to move towards 1.

It does this calculation based partially on the number of divisions. So if the sample is cut in half, it will tend to jump from 0 to 0.5 in a single step!

The pos must be between 0 and 1, so it takes the absolute value and then compares it to 1 and takes whichever of the two values is smaller.

It would have been possible to do this in a way less dependent on the divisions, for example, by deciding the position should be an integer between 0 and 127 and then computing a fraction based on that. This would meander quite differently, as it would be freed from the divisions actually used in the piece. I have no examples of this approach from this album, but will give it a try the next time I’m cutting samples!
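A hypothetical sketch of that approach (untested, since nothing on the album works this way) might look like:


\start, Prout({|evt|
	var pos = 64, div, frac;
	inf.do({
		// random-walk over 0-127, independent of the current division count
		pos = (pos + [-1, 1].choose).clip(0, 127);
		frac = pos / 127;
		div = 2.pow(evt[\pow]).asInt;
		evt = (frac * div).asInt.yield;
	})
}),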

Source code for Christmaswave

If you follow my podcast, you’ll note I put out a vaporwave-ish Christmas album, Christmaswave. It’s a free download on Bandcamp, but I’m asking those who can afford it to donate to the Hackney Winter Night Shelter.

Almost all of the pieces are constructed using variations on one algorithm. I found hoary, old baby boomer Christmas favourites and then took the instrumental sections, which was sometimes just the intro or the outro. All of the songs were in 4/4 and most of the instrumental parts were cut into either 2 or 4 bar phrases. This means every sample is divisible by many powers of 2 and can be cut in half several times before it loses musical/rhythmic meaning.

I made these cuts, played the section of the sample with some stuttering and then went on to another section of the sample. This method requires some decision making:

  1. Which sample am I going to play?
  2. How many times am I going to divide it in half?
  3. Once it’s chopped into little (or not-so-little) pieces, which one of them am I going to play?
  4. What speed am I going to play that bit at?
  5. How much should it overlap whatever comes after?
  6. How long should I wait before going to the next thing (which might be a repetition of what I just did)?
  7. How many times should I repeat this thing?

All of the pieces answered these questions in slightly different ways. (Or very different ways, in the case of question 1!) Some of the structure of how I thought about these questions and how I solved them has to do with how the Pattern library works in SuperCollider.

What sample am I going to play?

In almost every case, I switched samples based on how much time had passed since the start of the piece. I used Ptpar to start the different sections.
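A minimal sketch of the idea, with made-up patterns and start times:


// each sub-pattern starts after the given number of beats
Ptpar([
	0, Pbind(\degree, Pwhite(0, 7), \dur, 0.25),  // first section at time 0
	64, Pbind(\degree, Pwhite(-7, 0), \dur, 0.5)  // second section 64 beats in
]).play;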

How many times am I going to divide it in half?

Another way of asking the question is, ‘What power of 2 am I going to use?’ I did this a few different ways. In most cases, I stuffed this into a part of the event I called \pow. Here are some ways I figured out what power of 2 to use:


\pow, Prand([0, 0, 0, 0, 0, 1, 1], inf)

Then, later on, I could go from that to powers of two:


\div, Pfunc({|evt|
2.pow(evt[\pow])
})

(Usually, I would compute the \div in a larger Pfunc that figures out more things.) The advantage of figuring out the power of 2, instead of just having a Prand full of 1, 2, 4, etc., is that it’s harder to screw up. I don’t need to worry about a stray 3 sneaking in, and, if a sample is longer, I can add some number to the \pow to make the \div bigger.

Another way I computed the \pow was using a Finite State Machine. This was completely overkill, but I’ll walk you through how it worked.

What I wanted was to have a possibility of a \pow being as small as 0 or as big as 8, but not to jump from one of those numbers to the other. Instead, I wanted a route going through intermediate numbers, in which it could potentially get to an 8 and have a path back to 0. I wanted a way for it to wander from one extreme to the other.

An FSM offers a way to define such a path. This is what the code looks like from Funky (The Slow Jam):


\pow, Pfsm([
	#[0], // start states
	2, #[3],                        // 0
	Prand([0, 0, 1], 1), #[1, 2],   // 1
	Prand([0, 1, 2]), #[1, 2, 3],   // 2
	Prand([1, 2]), #[3, 4],         // 3
	Prand([0, 1, 2]), #[3, 5],      // 4
	Prand([2, 3]), #[4, 3, 5, 6],   // 5
	Prand([3, 4]), #[4, 3, 5, 7],   // 6
	Prand([3, 4, 5]), #[4, 3, 5, 6] // 7
], inf),

Pfsm is the pattern class that does finite state machines. It takes an array. The first item in the array is an array of the states it can start with. Next come pairs. Each pair is a state. The first pair is state 0. The second pair is state 1. The third pair is state 2, etc. The first item in a pair is the output. In the example above, the output of state 0 is 2 and the output of state 1 is Prand([0, 0, 1], 1).

The second item in the pair is an array of one or more integers. The numbers in the array are the states you can go to next. So with state 0, the array is ‘#[3]’, so it goes on to state 3. When it gets to state 3, it produces the output, which is Prand([1, 2]) and then looks where it can go next, which is ‘#[3, 4]’. That is, it can go state 3 again, or it can go on to state 4.

I could draw a map of this (which would reveal that there is no path to states 1 and 2 – oops).

The FSM described in the code above
In the graph, you can see a lot of arrows pointing up and few pointing down, only along a path that goes through all the states. Thus, it’s relatively unlikely to reach state 7.

Because Pfsm is just another pattern, I could add 1 or 2 to the output of it in the case of a particularly long sample and it would gracefully handle the maths.
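For instance, a sketch with a made-up two-state machine, shifted up a power of 2 for a longer sample:


\pow, Pfsm([
	#[0],
	1, #[1],
	Prand([1, 2], 1), #[0, 1]
], inf) + 1, // every output comes out one power of 2 bigger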

Answers to the other questions will be forthcoming in following posts!

Tested with human voice

Testing showed that for human voice, the frequency domain onsets and pitch tracking were more accurate and faster than the time domain, which is good to know.

Once the frequency is detected, it needs to be mapped to a scale degree. I’ve added this functionality to the Tuning Lib quark. While doing this, I found the help file was confusing and badly laid out, and some of the names of the flags on the quantisations were not helpful, so I fixed the help file, documented the new method and renamed some of the flags (the old ones still work). And then I found it wasn’t handling octaves correctly – it assumed the octave ratio is always 2, which is not true for Bohlen-Pierce scales, or some scales derived by Dissonance Curve. So this was good, because that bug is now fixed after a mere 8 years of lurking there. HOWEVER, the more I think about it, the less I think this belongs in Key….

Pitch detecting is flaky as hell, but onsets are solid, which is going to make the creation of melodic loops difficult, unless they actually just record the tuba and do stuff with it.

This is the code that’s working with my voice:


(

s.waitForBoot({

s.meter;

SynthDef(\domifare_input, { arg gate=0, in=0;

var input, env, fft_pitch, onset, chain, hasfreq;

input = SoundIn.ar(in, 1);
env = EnvGen.kr(Env.asr, gate, doneAction:2);

chain = FFT(LocalBuf(2048), input);
onset = Onsets.kr(chain, odftype:\phase);//odftype:\wphase);
#fft_pitch, hasfreq = Pitch.kr(input);

//send pitch
SendTrig.kr(hasfreq, 2, fft_pitch);

// send onsets
SendTrig.kr(onset, 4, 1);

//sin = SinOsc.ar(xings/2);

//Out.ar(out, sin);

// audio routing
//Out.ar(out, input);

}).add;

k = Key(Scale.major); // A maj
//k.change(6); // C maj - changing to c maj puts degree[0] to 6!

b = [\Do, \Re, \Mi, \Fa, \So, \La, \Si];
(scale:k.scale, note:k.scale.degrees[0]).play;

OSCdef(\domifare_in, {|msg, time, addr, recvPort|
var tag, node, id, value;

#tag, node, id, value = msg;
case
{ id == 2 } {
//value.postln;
//c = k.freqToDegree(value.asFloat).postln;
//b[c.asInt].postln;
b[k.freqToDegree(value.asFloat)].postln;
}
{ id == 4 } { "4 freq dom onset".postln; }

}, '/tr', s.addr);

s.sync;

a = Synth(\domifare_input, [\in, 0 , \out, 3, \rmswindow, 50, \gate, 1, \thresh, 0.01]);

})
)

Domifare input

Entering code requires the ability to determine pitch, and entering data requires both pitch and onset. Ergo, we need a SynthDef to listen for both things. There are also two ways to determine pitch: one in the time domain and the other in the frequency domain.

The frequency domain, of course, refers to FFT and is probably the best method for instruments like flute, which has a fairly pure tone where the loudest partial is the fundamental. However, brass instruments and the human voice both have formants (loud overtones). In the case of the tuba’s low notes, the overtones can be louder than the fundamental. I’ve described time-domain frequency tracking for brass and voice in an old post.

The following is completely untested sample code…. It’s my wife’s birthday and I had to go out before I could try it. It does both time and frequency domain tracking, using the fft code to trigger sending the pitch in both cases. For time domain tracking, it could (and possibly should) use the amplitude follower as a gate/trigger in combination with a frequency change of greater than some threshold. The onset cannot be used as the trigger, as the pitch doesn’t stabilise for some time after the note begins. A good player will get it within two periods, which is still rather a long time in such a low instrument. A less good player will take longer to stabilise on a pitch.

Everything in the code is default values, aside from the RMS window, so some tweaking is probably required. Presumably, every performer of this language would need to make some changes to reflect their instrument and playing technique.


(

s.waitForBoot({

SynthDef(\domifare_input, { arg in=0, out=3, rmswindow = 200;

var rms, xings, input, amp, peaks, sin, time_pitch, fft_pitch, onset, chain, hasfreq;

input = SoundIn.ar(in, 1);
amp = Amplitude.kr(input);
rms = RunningSum.rms(input, rmswindow);
peaks = input - rms;
xings = ZeroCrossing.ar(peaks);
time_pitch = xings * 2;

chain = FFT(LocalBuf(2048), input);
onset = Onsets.kr(chain, odftype:\wphase);
#fft_pitch, hasfreq = Pitch.kr(input);

//send pitch
SendTrig.kr(hasfreq, 0, time_pitch);
SendTrig.kr(hasfreq, 1, fft_pitch);

// send onsets
SendTrig.kr(onset, 2, 1);

//sin = SinOsc.ar(xings/2);

//Out.ar(out, sin);

// audio routing
//Out.ar(out, input);

}).add;

OSCdef(\domifare_in, {|msg, time, addr, recvPort|
var tag, node, id, value;

#tag, node, id, value = msg;
case
{ id == 0 } { "time dom pitch is %".format(value).postln; }
{ id == 1 } { "freq dom pitch is %".format(value).postln; }
{ id == 2 } { "onset".postln; }

}, '/tr', s.addr);

s.sync;

a = Synth(\domifare_input, [\in, 0 , \out, 3, \rmswindow, 200]);

})
)

Building SuperCollider 3.6 on Raspberry Pi

Raspberry Pi Wheezy ships with SuperCollider, but it ships with an old version that has no support for Qt graphics. This post is only slightly modified from this (formerly) handy guide for building an unstable snapshot of 3.7 without graphics support. There are a few differences, however, to add graphics support and maintain Wii support.
This requires the Raspbian operating system and should work if you get it via NOOBS. I could not get this to fit on a 4 gig SD card.
Note: This whole process takes many hours, but has long stretches where it’s chugging away and you can go work on something else.

Preparation

  1. log in and type sudo raspi-config, select expand file system, set timezone, finish and reboot
  2. sudo apt-get update
  3. sudo apt-get upgrade # this might take a while
  4. sudo apt-get remove supercollider # remove old supercollider
  5. sudo apt-get autoremove
  6. sudo apt-get install cmake libasound2-dev libsamplerate0-dev libsndfile1-dev libavahi-client-dev libicu-dev libreadline-dev libfftw3-dev libxt-dev libcwiid1 libcwiid-dev subversion libqt4-dev libqtwebkit-dev libjack-jackd2-dev
  7. sudo ldconfig

Build SuperCollider

  1. wget http://downloads.sourceforge.net/project/supercollider/Source/3.6/SuperCollider-3.6.6-Source.tar.bz2
  2. tar -xvf SuperCollider-3.6.6-Source.tar.bz2
  3. rm SuperCollider-3.6.6-Source.tar.bz2
  4. cd SuperCollider-Source
  5. mkdir build && cd build
  6. sudo dd if=/dev/zero of=/swapfile bs=1MB count=512 # create a temporary swap file
  7. sudo mkswap /swapfile
  8. sudo swapon /swapfile
  9. CC="gcc" CXX="g++" cmake -L -DCMAKE_BUILD_TYPE="Release" -DBUILD_TESTING=OFF -DSSE=OFF -DSSE2=OFF -DSUPERNOVA=OFF -DNOVA_SIMD=ON -DNATIVE=OFF -DSC_ED=OFF -DSC_EL=OFF -DCMAKE_C_FLAGS="-march=armv6 -mtune=arm1176jzf-s -mfloat-abi=hard -mfpu=vfp" -DCMAKE_CXX_FLAGS="-march=armv6 -mtune=arm1176jzf-s -mfloat-abi=hard -mfpu=vfp" ..
    # should add '-ffast-math -O3' here but then gcc 4.6.3 fails
  10. make # this takes hours
  11. sudo make install
  12. cd ../..
  13. sudo rm -r SuperCollider-Source
  14. sudo swapoff /swapfile
  15. sudo rm /swapfile
  16. sudo ldconfig
  17. echo 'export SC_JACK_DEFAULT_INPUTS="system"' >> ~/.bashrc
  18. echo 'export SC_JACK_DEFAULT_OUTPUTS="system"' >> ~/.bashrc
  19. sudo reboot

Test SuperCollider

  1. jackd -p32 -dalsa -dhw:0,0 -p1024 -n3 -s & # built-in sound. change to -dhw:1,0 for usb sound card (see more below)
  2. scsynth -u 57110 &
  3. scide
  4. s.boot;
  5. {SinOsc.ar(440)}.play
  6. Control-.

Optional: Low latency, RealTime, USB Soundcard etc

  1. sudo pico /etc/security/limits.conf
  2. and add the following lines somewhere before it says end of file.
  3. @audio - memlock 256000
  4. @audio - rtprio 99
  5. @audio - nice -19
  6. save and exit with ctrl+o, ctrl+x
  7. sudo halt
  8. power off the rpi and insert the sd card in your laptop.
  9. add dwc_otg.speed=1 to the beginning of /boot/cmdline.txt (see http://wiki.linuxaudio.org/wiki/raspberrypi under force usb1.1 mode)
  10. eject the sd card and put it back in the rpi, make sure usb soundcard is connected and power on again.
  11. log in with ssh and now you can start jack with a lower blocksize
  12. jackd -p32 -dalsa -dhw:1,0 -p256 -n3 -s & # uses an usb sound card and lower blocksize
  13. continue as in step 2 of ‘Test SuperCollider’ above


This post is licensed under the GNU General Public License.

Linux Midi on SuperCollider

This is just how I got it to work and should not be considered a definitive guide.
I started Jack via QjackCtl and then booted the SuperCollider server.
I’ve got a drum machine connected via a MIDI cable to an m-audio fast track ultra.
This code is making some noises:

(
var ultra;

MIDIClient.init;
"init".postln;
MIDIClient.destinations.do({|m, i|
 //m.postln;
 //m.name.postln;
 m.name.contains("Ultra").if({
  ultra = MIDIOut(i);
  ultra.connect(0);
  i.postln;
 });
});

//u = ultra;

Pbind(
 \midinote, Pseq((36..53), inf),
 \amp, 1,
 \type, \midi,
 \midiout, ultra,
 \chan, 1,
 \foo, Pfunc({|e| "tick % %\n".postf(e[\chan], e[\midinote])}),
 \dur, 0.2
).play


)

My Talk at the SC Symposium

Picking Musical Tones

One of the great problems in electronic music is picking pitches and tunings.

The TuningLib quark helps manage this process.

First, there is some Scale stuff already in SuperCollider.
How to use a scale in a Pbind:

(
s.waitForBoot({
     a = Scale.ionian;

     p = Pbind(
          \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1, 0, \rest], 2),
          \scale, a,
          \dur, 0.25
     );

     q = p.play;
})
)

Key

Key tracks key changes and modulations, so you can keep modulating or back out of modulations:

k = Key(Scale.choose);
k.scale.degrees;
k.scale.cents;
k.change(4); // modulate to the 5th scale degree (we start counting with 0)
k.scale.degrees;
k.scale.cents;
k.change; // go back

k.scale.degrees;

It will keep this up through as many layers of modulation as you want.
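A sketch of that, following the semantics shown above:


k = Key(Scale.major);
k.change(4); // modulate to the 5th scale degree
k.change(1); // modulate again, relative to the new key
k.change;    // back out one level
k.change;    // back to the original key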

It also does rounding:

quantizeFreq(freq, base, round, gravity)

Snaps the freq value in Hz to the nearest Hz value in the current key.

gravity changes the level of attraction to the in-tune frequency.

k.quantizeFreq(660, 440, \down, 0.5) // half way in tune

By changing gravity over time, you can have pitches tend towards being in or out of tune.
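For instance, this sketch (assuming the \down flag as above, and that gravity behaves as the comment suggests) steps a pitch from unquantised to fully in tune:


(0.0, 0.25 .. 1.0).do({|gravity|
	k.quantizeFreq(660, 440, \down, gravity).postln;
});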

Scala

There is a huge library of pre-cooked tunings for the Scala program (at http://www.huygens-fokker.org/scala/scl_format.html). This class opens those files.

a = Scala("slendro.scl");
b = a.scale;

Lattice

This is actually a Partchian tuning diamond (and this class may get a new name in a future release):

l = Lattice([ 2, 5, 3, 7, 9])

The array holds the numbers to use in the generated tuning ratios, so this gives:

1/1 5/4 3/2 7/4 9/8   for otonality
1/1 8/5 4/3 8/7 16/9  for utonality

otonality is overtones – the numbers you give are in the numerator
utonality is undertones – the numbers are in the denominator

All of the other numbers are powers of 2. You could change that, with an optional second argument, to any other number, such as 3:

l = Lattice([ 2, 3, 5, 7, 11], 3)

Lattices also generate a table:

1/1  5/4  3/2  7/4  9/8
8/5  1/1  6/5  7/5  9/5
4/3  5/3  1/1  7/6  3/2
8/7  10/7 12/7 1/1  9/7
16/9 10/9 4/3  14/9 1/1

It is possible to walk around this table to make nice triads that are harmonically related:

(
s.waitForBoot({

 var lat, orientation, startx, starty, baseFreq;

 SynthDef("sine", {arg out = 0, dur = 5, freq, amp = 0.2, pan = 0;
  var env, osc;
  env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
  osc = SinOsc.ar(freq, 0, env);
  Out.ar(out, osc * amp);
 }).add;

 s.sync;


 lat = Lattice.new;
 orientation = true;
 startx = 0;
 starty = 0;
 baseFreq = 440;

 Pbind(
  \instrument, \sine,
  \amp, 0.3,
  \freq, Pfunc({
   var starts, result;
   orientation = orientation.not;
   starts = lat.d3Pivot(startx, starty, orientation);
   startx = starts.first;
   starty = starts.last;
   result = lat.makeIntervals(startx, starty, orientation);
   (result * baseFreq)
  })
 ).play
})
)

Somewhat embarrassingly, I got confused between 2 and 3 dimensions when I wrote this code. A forthcoming version will have different method names, but the old ones will still be kept around so as not to break your code.

DissonanceCurve

This is not the only quark that does dissonance curves in SuperCollider.

Dissonance curves are used to compute tunings based on timbre, which is to say the spectrum.

d = DissonanceCurve([440], [1])
d.plot

The high part of the graph is highly dissonant and the low part is not dissonant. (The horizontal axis is cents.) This is for just one pitch, but with additional pitches, the graph changes:

d = DissonanceCurve([335, 440], [0.7, 0.3])
d.plot

The combination of pitches produces a more complex graph with minima. Those minima are good scale steps.

This class is currently optimised for FM, but subsequent versions will calculate spectra for ring modulation, amplitude modulation, phase modulation and combinations of all of those things.

(

s.waitForBoot({

	var carrier, modulator, depth, curve, scale, degrees;

	SynthDef("fm", {arg out, amp, carrier, modulator, depth, dur, midinote = 0;
		var sin, ratio, env;

		ratio = midinote.midiratio;
		carrier = carrier * ratio;
		modulator = modulator * ratio;
		depth = depth * ratio;

		sin = SinOsc.ar(SinOsc.ar(modulator, 0, depth, carrier));
		env = EnvGen.kr(Env.perc(releaseTime: dur)) * amp;
		Out.ar(out, (sin * env).dup);
	}).add;

	s.sync;

	carrier = 440;
	modulator = 600;
	depth = 100;
	curve = DissonanceCurve.fm(carrier, modulator, depth, 1200);
	scale = curve.scale;

	degrees = (0..scale.size); // make an array of all the scale degrees

	// We don't know how many pitches per octave there will be until after the
	// DissonanceCurve is calculated. However, degrees outside of the range
	// will be mapped accordingly.

	Pbind(
		\instrument, \fm,
		\octave, 0,
		\scale, scale,
		\degree, Pseq([
			Pseq(degrees, 1), // play one octave
			Pseq([-3, 2, 0, -1, 3, 1], 1) // play other notes
		], 1),
		\carrier, carrier,
		\modulator, modulator,
		\depth, depth
	).play
});
)

The only problem here is that this conflicts entirely with Just Intonation!

For just tunings based on spectra, we would calculate dissonance based on the ratios of the partials of the sound. Low numbers are more in tune, high numbers are less in tune.

There’s only one problem with this. Here’s a graph of just a sine tone:

d = DissonanceCurve([440], [1])
d.just_curve.collect({|diss| diss.dissonance}).plot

How do we pick tuning degrees?

We use a moving window where we pick the most consonant tuning within that window. This defaults to 100 cents, assuming you want something with roughly normal step sizes.

Then, to pick scale steps, we can ask for the n most consonant tunings:

t = d.digestibleScale(100, 7); // pick the 7 most consonant tunings
(
var carrier, modulator, depth, curve, scale, degrees;
carrier = 440;
modulator = 600;
depth = 100;
curve = DissonanceCurve.fm(carrier, modulator, depth, 1200);
scale = curve.digestibleScale(100, 7); // pick the 7 most consonant tunings
degrees = (0..(scale.size - 1)); // make an array of all the scale degrees (you can't assume the size is 7)

Pbind(
	\instrument, \fm,
	\octave, 0,
	\scale, scale,
	\degree, Pseq([
		Pseq(degrees, 1), // play one octave
		Pseq([-7, 2, 0, -5, 4, 1], 1)], 1), // play other notes
	\carrier, carrier,
	\modulator, modulator,
	\depth, depth
).play
)

Future plans

  • Update the help files!
  • Add the ability to calculate more spectra – PM, RM, AM, etc.
  • Make some of the method names more reasonable

Comments

Comments from the audience.

  • key – does it recalc the scale or not? Let the user decide
  • just dissonance curve – limit tuning ratios
  • lattice – make n dimensional
  • digestible scale – print scale ratios

SC Symposium – Chris Brown – Ritmos

Software to explore perception and performance of polyrhythms, inspired by Afro-Cuban music.
Ritmos can play in polyrhythmic modes and can listen to a live input. It deals with the difference between a player and a clave.
This was first implemented in HMSL!! As a piece called Talking Drum.
Branches is in SC2 and is in the same series of pieces.
So now there’s a new version in SC3.

classes

RitmosPlay defines a voice stream hierarchy and scheduling
RitmosCtlGUI
RitmosSeqGUI
RitmosSynthDefs
RitmosXfrm – interaction algorithms
MIDIListener

He uses a genetic algorithm to balance between the specified clave and the input.
He’s got presets and sequences that deal with current settings.
He’s going into a lot of detail about how this works. It’s complex.
This has an impressive GUI. And indeed impressive functionality. And it sounds great.
Graphics library… I wish I’d caught the name of it…