Domifare input

Entering code requires the ability to determine pitch, and entering data requires both pitch and onset. Ergo, we need a SynthDef that listens for both. There are also two ways to determine pitch: one in the time domain and the other in the frequency domain.

The frequency domain, of course, refers to FFT, and is probably the best method for instruments like flute, which has a relatively pure tone where the loudest partial is the fundamental. However, brass instruments and the human voice both have formants (loud overtones). In the case of tuba, on low notes, the overtones can be louder than the fundamental. I’ve described time-domain frequency tracking for brass and voice in an old post.

The following is completely untested sample code…. It’s my wife’s birthday and I had to go out before I could try it. It does both time- and frequency-domain tracking, using the FFT code to trigger sending the pitch in both cases. For time-domain tracking, it could (and possibly should) use the amplitude follower as a gate/trigger in combination with a frequency change greater than some threshold. The onset cannot be used as the trigger, as the pitch doesn’t stabilise for some time after the note begins. A good player will get it within two periods, which is still rather a long time in such a low instrument. A less good player will take longer to stabilise on a pitch.
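Here is a minimal, equally untested sketch of that gate/trigger idea; the 0.02 amplitude gate and the 10 Hz stability threshold are guesses, not tested values:

(
SynthDef(\domifare_timegate, { arg in=0, ampgate=0.02, freqthresh=10;

var input, amp, freq, delta, trig;

input = SoundIn.ar(in);
amp = Amplitude.kr(input);
freq = A2K.kr(ZeroCrossing.ar(input)); // time-domain frequency estimate
delta = (freq - Delay1.kr(freq)).abs; // change since the last control period
trig = (amp > ampgate) * (delta < freqthresh); // loud enough and stable
SendTrig.kr(trig, 10, freq);

}).add;
)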

Everything in the code uses default values, aside from the RMS window, so some tweaking is probably required. Presumably, every performer of this language would need to make some changes to reflect their instrument and playing technique.


(

s.waitForBoot({

SynthDef(\domifare_input, { arg in=0, out=3, rmswindow = 200;

var rms, xings, input, amp, peaks, sin, time_pitch, fft_pitch, onset, chain, hasfreq;

input = SoundIn.ar(in, 1);
amp = Amplitude.kr(input);
rms = RunningSum.rms(input, rmswindow);
peaks = input - rms;
xings = ZeroCrossing.ar(peaks);
time_pitch = xings * 2;

chain = FFT(LocalBuf(2048), input);
onset = Onsets.kr(chain, odftype:\wphase);
#fft_pitch, hasfreq = Pitch.kr(input);

//send pitch
SendTrig.kr(hasfreq, 0, time_pitch);
SendTrig.kr(hasfreq, 1, fft_pitch);

// send onsets
SendTrig.kr(onset, 2, 1);

//sin = SinOsc.ar(xings/2);

//Out.ar(out, sin);

// audio routing
//Out.ar(out, input);

}).add;

OSCdef(\domifare_in, {|msg, time, addr, recvPort|
var tag, node, id, value;

#tag, node, id, value = msg;
case
{ id == 0 } { "time dom pitch is %".format(value).postln; }
{ id == 1 } { "freq dom pitch is %".format(value).postln; }
{ id == 2 } { "onset".postln; }

}, '/tr', s.addr);

s.sync;

a = Synth(\domifare_input, [\in, 0, \out, 3, \rmswindow, 200]);

})
)

Building SuperCollider 3.6 on Raspberry Pi

Raspberry Pi Wheezy ships with SuperCollider, but it ships with an old version that does not have support for Qt graphics. This post is only slightly modified from this (formerly) handy guide for building an unstable snapshot of 3.7 without graphics support. There are a few differences, however, to add graphics support and maintain Wii support.
This requires the Raspbian operating system, and should work if you get it via NOOBS. I could not get this to fit on a 4 GB SD card.
Note: this whole process takes many hours, but has long stretches where it’s chugging away and you can go work on something else.

Preparation

  1. log in and type sudo raspi-config, select expand file system, set timezone, finish and reboot
  2. sudo apt-get update
  3. sudo apt-get upgrade # this might take a while
  4. sudo apt-get remove supercollider # remove old supercollider
  5. sudo apt-get autoremove
  6. sudo apt-get install cmake libasound2-dev libsamplerate0-dev libsndfile1-dev libavahi-client-dev libicu-dev libreadline-dev libfftw3-dev libxt-dev libcwiid1 libcwiid-dev subversion libqt4-dev libqtwebkit-dev libjack-jackd2-dev
  7. sudo ldconfig

Build SuperCollider

  1. wget http://downloads.sourceforge.net/project/supercollider/Source/3.6/SuperCollider-3.6.6-Source.tar.bz2
  2. tar -xvf SuperCollider-3.6.6-Source.tar.bz2
  3. rm SuperCollider-3.6.6-Source.tar.bz2
  4. cd SuperCollider-Source
  5. mkdir build && cd build
  6. sudo dd if=/dev/zero of=/swapfile bs=1MB count=512 # create a temporary swap file
  7. sudo mkswap /swapfile
  8. sudo swapon /swapfile
  9. CC="gcc" CXX="g++" cmake -L -DCMAKE_BUILD_TYPE="Release" -DBUILD_TESTING=OFF -DSSE=OFF -DSSE2=OFF -DSUPERNOVA=OFF -DNOVA_SIMD=ON -DNATIVE=OFF -DSC_ED=OFF -DSC_EL=OFF -DCMAKE_C_FLAGS="-march=armv6 -mtune=arm1176jzf-s -mfloat-abi=hard -mfpu=vfp" -DCMAKE_CXX_FLAGS="-march=armv6 -mtune=arm1176jzf-s -mfloat-abi=hard -mfpu=vfp" ..
    # should add '-ffast-math -O3' here, but then gcc 4.6.3 fails
  10. make # this takes hours
  11. sudo make install
  12. cd ../..
  13. sudo rm -r SuperCollider-Source
  14. sudo swapoff /swapfile
  15. sudo rm /swapfile
  16. sudo ldconfig
  17. echo 'export SC_JACK_DEFAULT_INPUTS="system"' >> ~/.bashrc
  18. echo 'export SC_JACK_DEFAULT_OUTPUTS="system"' >> ~/.bashrc
  19. sudo reboot

Test SuperCollider

  1. jackd -p32 -dalsa -dhw:0,0 -p1024 -n3 -s & # built-in sound. change to -dhw:1,0 for usb sound card (see more below)
  2. scsynth -u 57110 &
  3. scide
  4. s.boot;
  5. {SinOsc.ar(440)}.play
  6. Control-.

Optional: Low latency, RealTime, USB Soundcard etc

  1. sudo pico /etc/security/limits.conf
  2. and add the following lines somewhere before it says end of file.
  3. @audio - memlock 256000
  4. @audio - rtprio 99
  5. @audio - nice -19
  6. save and exit with ctrl+o, ctrl+x
  7. sudo halt
  8. power off the rpi and insert the sd card in your laptop.
  9. add dwc_otg.speed=1 to the beginning of /boot/cmdline.txt (see http://wiki.linuxaudio.org/wiki/raspberrypi under force usb1.1 mode)
  10. eject the sd card and put it back in the rpi, make sure usb soundcard is connected and power on again.
  11. log in with ssh and now you can start jack with a lower blocksize
  12. jackd -p32 -dalsa -dhw:1,0 -p256 -n3 -s & # uses a USB sound card and a lower blocksize
  13. continue as in step 2 of "Test SuperCollider" above


This post is licensed under the GNU General Public License.

Linux MIDI on SuperCollider

This is just how I got it to work and should not be considered a definitive guide.
I started Jack via QjackCtl and then booted the SuperCollider server.
I’ve got a drum machine connected via a MIDI cable to an M-Audio Fast Track Ultra.
This code makes some noises:

(
var ultra;

MIDIClient.init;
"init".postln;
MIDIClient.destinations.do({|m, i|
 //m.postln;
 //m.name.postln;
 m.name.contains("Ultra").if({
  ultra = MIDIOut(i);
  ultra.connect(0);
  i.postln;
 });
});

//u = ultra;

Pbind(
 \midinote, Pseq((36..53), inf),
 \amp, 1,
 \type, \midi,
 \midiout, ultra,
 \chan, 1,
 \foo, Pfunc({|e| "tick % %\n".postf(e[\chan], e[\midinote]) }),
 \dur, 0.2
).play


)

My Talk at the SC Symposium

Picking Musical Tones

One of the great problems in electronic music is picking pitches and tunings.

The TuningLib quark helps manage this process.

First, there is some Scale support already in SuperCollider. Here is how to use a scale in a Pbind:

(
s.waitForBoot({
     a = Scale.ionian;

     p = Pbind(
          \degree, Pseq([0, 1, 2, 3, 4, 5, 6, 7, 6, 5, 4, 3, 2, 1, 0, \rest], 2),
          \scale, a,
          \dur, 0.25
     );

     q = p.play;
})
)

Key

Key tracks key changes and modulations, so you can keep modulating or back out of modulations:

k = Key(Scale.choose);
k.scale.degrees;
k.scale.cents;
k.change(4); // modulate to the 5th scale degree (we start counting with 0)
k.scale.degrees;
k.scale.cents;
k.change; // go back

k.scale.degrees;

This will keep track through as many layers of modulation as you want.
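For example, a sketch using only the calls shown above:

k = Key(Scale.major);
k.change(4); // modulate to the dominant
k.change(4); // modulate to the dominant of the dominant
k.change; // back out one level
k.change; // back to the original key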

It also does rounding:

quantizeFreq(freq, base, round, gravity)
Snaps the freq value in Hz to the nearest Hz value in the current key.

gravity changes the level of attraction to the in-tune frequency.

k.quantizeFreq(660, 440, \down, 0.5) // halfway in tune

By changing gravity over time, you can have pitches tend towards being in or out of tune.
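Here’s a sketch of that idea; the frequencies, base pitch and gravity increment are arbitrary choices, not anything prescribed by the class:

(
var k, gravity;
k = Key(Scale.ionian);
gravity = 0;
Pbind(
 \freq, Pfunc({
  var raw;
  raw = [660, 550, 495].choose; // deliberately rough frequencies
  gravity = (gravity + 0.05).min(1); // each note is pulled further in tune
  k.quantizeFreq(raw, 440, \down, gravity);
 }),
 \dur, 0.25
).play;
)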

Scala

There is a huge library of pre-cooked tunings for the Scala program (at http://www.huygens-fokker.org/scala/scl_format.html). This class opens those files.

a = Scala("slendro.scl");
b = a.scale;
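A sketch of playing the loaded tuning in a Pbind, assuming slendro.scl is somewhere SuperCollider can find it:

(
var scale;
scale = Scala("slendro.scl").scale;
Pbind(
 \scale, scale,
 \degree, Pseq((0..(scale.degrees.size - 1)), 2), // step through the scale twice
 \dur, 0.25
).play;
)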

Lattice

This is actually a Partchian tuning diamond (and this class may get a new name in a future release).

l = Lattice([ 2, 5, 3, 7, 9])

The array contains the numbers to use in the generated tuning ratios, so this gives:

1/1 5/4 3/2 7/4 9/8   for otonality
1/1 8/5 4/3 8/7 16/9  for utonality

otonality is overtones – the numbers you give are in the numerator
utonality is undertones – the numbers are in the denominator

All of the other numbers are powers of 2. You can change that to any other number, such as 3, with an optional second argument:

l = Lattice([ 2, 3, 5, 7, 11], 3)
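To make that reduction concrete, here is a hand-rolled sketch of the idea (my reimplementation for illustration, not TuningLib’s actual code; the base argument mirrors the optional second argument above):

(
var reduce;
reduce = { |ratio, base = 2|
 // fold the ratio into the interval [1, base)
 { ratio >= base }.while { ratio = ratio / base };
 { ratio < 1 }.while { ratio = ratio * base };
 ratio
};
[2, 5, 3, 7, 9].collect({ |n| reduce.(n) }).postln; // otonality: 1, 5/4, 3/2, 7/4, 9/8
[2, 5, 3, 7, 9].collect({ |n| reduce.(1/n) }).postln; // utonality: 1, 8/5, 4/3, 8/7, 16/9
)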

Lattices also generate a table:

1/1  5/4  3/2  7/4  9/8
8/5  1/1  6/5  7/5  9/5
4/3  5/3  1/1  7/6  3/2
8/7  10/7 12/7 1/1  9/7
16/9 10/9 4/3  14/9 1/1

It is possible to walk around this table to make nice triads that are harmonically related:

(
s.waitForBoot({

 var lat, orientation, startx, starty, baseFreq;

 SynthDef("sine", {arg out = 0, dur = 5, freq, amp=0.2, pan = 0;
  var env, osc;
  env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2); // amp is applied in the envelope
  osc = SinOsc.ar(freq, 0, env);
  Out.ar(out, Pan2.ar(osc, pan));
 }).add;

 s.sync;


 lat = Lattice.new;
 orientation = true;
 startx = 0;
 starty = 0;
 baseFreq = 440;

 Pbind(
  \instrument, \sine,
  \amp, 0.3,
  \freq, Pfunc({
   var starts, result;
     orientation = orientation.not;
     starts = lat.d3Pivot(startx, starty, orientation);
     startx = starts.first;
     starty = starts.last;
   result = lat.makeIntervals(startx, starty, orientation);
   (result * baseFreq)
  })
 ).play
})
)

Somewhat embarrassingly, I got confused between 2 and 3 dimensions when I wrote this code. A forthcoming version will have different method names, but the old ones will still be kept around so as not to break your code.

DissonanceCurve

This is not the only quark that does dissonance curves in SuperCollider.

Dissonance curves are used to compute tunings based on timbre, which is to say the spectrum.

d = DissonanceCurve([440], [1])
d.plot

The high parts of the graph are highly dissonant and the low parts are not. (The horizontal axis is in cents.) This is for just one pitch, but with additional pitches, the graph changes:

d = DissonanceCurve([335, 440], [0.7, 0.3])
d.plot

The combination of pitches produces a more complex graph with minima. Those minima are good scale steps.

This class is currently optimised for FM, but subsequent versions will calculate spectra for ring modulation, amplitude modulation, phase modulation and combinations of all of those.

(

s.waitForBoot({

 var carrier, modulator, depth, curve, scale, degrees;

 SynthDef("fm", {arg out, amp, carrier, modulator, depth, dur, midinote = 0;
  var sin, ratio, env;

  ratio = midinote.midiratio;
  carrier = carrier * ratio;
  modulator = modulator * ratio;
  depth = depth * ratio;

  sin = SinOsc.ar(SinOsc.ar(modulator, 0, depth, carrier));
  env = EnvGen.kr(Env.perc(releaseTime: dur)) * amp;
  Out.ar(out, (sin * env).dup);
 }).add;

 s.sync;

 carrier = 440;
 modulator = 600;
 depth = 100;
 curve = DissonanceCurve.fm(carrier, modulator, depth, 1200);
 scale = curve.scale;


 degrees = (0..scale.size); // make an array of all the scale degrees


// We don't know how many pitches per octave there will be until after the
// DissonanceCurve is calculated. However, degrees outside of the range
// will be mapped accordingly.


 Pbind(

  \instrument, \fm,
  \octave, 0,
  \scale, scale,
  \degree, Pseq([
   Pseq(degrees, 1), // play one octave
   Pseq([-3, 2, 0, -1, 3, 1], 1) // play other notes
  ], 1),

  \carrier, carrier,
  \modulator, modulator,
  \depth, depth
 ).play
});
)

The only problem here is that this conflicts entirely with Just Intonation!

For just tunings based on spectra, we would calculate dissonance based on the ratios of the partials of the sound. Low numbers are more in tune, high numbers are less in tune.

There’s only one problem with this:
Here’s a graph of just a sine tone:

d = DissonanceCurve([440], [1])
d.just_curve.collect({|diss| diss.dissonance}).plot

How do we pick tuning degrees?

We use a moving window where we pick the most consonant tuning within that window. This defaults to 100 cents, assuming you want something with roughly normal step sizes.

Then to pick scale steps, we can ask for the n most consonant tunings

t = d.digestibleScale(100, 7); // pick the 7 most consonant tunings
(
var carrier, modulator, depth, curve, scale, degrees;
carrier = 440;
modulator = 600;
depth = 100;
curve = DissonanceCurve.fm(carrier, modulator, depth, 1200);
scale = curve.digestibleScale(100, 7); // pick the 7 most consonant tunings
degrees = (0..(scale.size - 1)); // make an array of all the scale degrees (you can't assume the size is 7)

Pbind(
 \instrument, \fm,
 \octave, 0,
 \scale, scale,
 \degree, Pseq([
  Pseq(degrees, 1), // play one octave
  Pseq([-7, 2, 0, -5, 4, 1], 1)], 1), // play other notes
 \carrier, carrier,
 \modulator, modulator,
 \depth, depth
).play
)

Future plans

  • Update the help files!
  • Add the ability to calculate more spectra – PM, RM, AM, etc.
  • Make some of the method names more reasonable

Comments

Comments from the audience.

  • key – does it recalc the scale or not? Let the user decide
  • just dissonance curve – limit tuning ratios
  • lattice – make n dimensional
  • digestible scale – print scale ratios

SC Symposium – Chris Brown – Ritmos

software to explore perception and performance of polyrhythms
inspired by Afro-Cuban music
Ritmos can play in polyrhythmic modes and can listen to a live input. It deals with the difference between a player and a clave.
this was first implemented in HMSL!! As a piece called Talking Drum.
Branches is in SC2 and is in the same series of pieces.
so now there’s a new version in SC3.

classes

RitmosPlay defines a voice stream hierarchy and scheduling
RitmosCtlGUI
RitmosSeqGUI
RitmosSynthDefs
RitmosXfrm – interaction algorithms
MIDIListener
he uses a genetic algorithm to balance between the specified clave and the input.
he’s got presets and sequences that deal with current settings.
he’s going into a lot of detail about how this works. It’s complex.
this has an impressive GUI. And indeed impressive functionality. And sounds great.
graphics library… I wish I’d caught the name of it…

SC Symposium – Lily Play – Bernardo Barros

music notation with SuperCollider
he used to use OpenMusic, but then moved to Linux.
OpenMusic is IRCAM software in Common Lisp for algorithmic music composition. It looks like Max, but makes notation.
SC can do everything OM can do except the notation visualisation.
he uses LilyPond. INScore might be faster.
LilyPond is free and cross-platform. It’s simple.
He’s done 3 projects? SuperFomus and LilyCollider.

Fomus

Uses Fomus (fomus.sf.net). Works with events and patterns. It outputs to LilyPond and MuseScore.
this is cool
he’s showing a usage case with Xenakis’s sieves. He’s got some functions as sieves and then does set operations.
this doesn’t work well with metric structures. You’re stuck wrt bar lengths.

LilyCollider

division and addition models of rhythm
rhythm trees can represent almost all kinds of rhythm. It’s an array of duration and division that can be nested.
he is using a syntax that I don’t know at all… Someone in front of me has a help file open on list comprehensions.
he’s got a very compelling-looking score open.
in future he wants to use spanners from Abjad to handle some markings. And also he wants some help and feedback for future versions.

questions

can you use this to get from MIDI to LilyPond? Yes, with Fomus.
what about includes? You can make a template.

Live Blogging the SC Symposium – Ron Kuivila

naming is the fundamental control mechanism of SuperCollider (the unnamed gets garbage collected).
‘play’ generates instances. It returns a new object of a different class, which confuses n00bs. What you see on the screen does not give you a clue.
The object that defines the work gets misidentified as the one doing the work.
JITLib’s def classes solve this problem. They make it easier to share code. Play messages go to the def class. The def class gives it all a name.
node proxies also give you a GUI for free and are useful pedagogically.
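For example, something like this (my illustration, not Ron’s code):

Ndef(\sine, { SinOsc.ar(440, 0, 0.1) }).play; // the name is the persistent handle; re-evaluating replaces the sound
Ndef(\sine).gui; // the free GUI
Ndef(\sine).clear;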
PatternConductor is an interactive control that’s easier than EventStreamPlayer. It deals better with sustain issues.
CV is a control value constrained by an associated Spec. CVs can be applied to several different contexts simultaneously. Touch is a companion class that does something complex with a CV’s value.
Ron is rewriting Conductor and I should talk to him about this.
yield is a bummer for beginners writing co-routines.

x=(x**i).yield

is confusing.
Pspawner is a class that seeks to be less confusing syntactically. It does something with Prouts that’s slightly confusing….
Syntactic convenience yields conceptual confusion…
he’s asking if Pspawner is a good idea.
Pspawner is a hybrid between patterns and Routines. One of my students would have loved this. He says it’s a compositional strategy about notation and direction in scores. I may also come to love this class.
And he took no questions!

Live Blogging the SC Symposium – Guitar Granulator by Martin Hünniger

He’s got a stomp box that granulates – he has a software and hardware version of this.
2 granulators, fx chains, midi controller knobs, patches etc in software
The hardware version has one granulator and some knobs.
It’s built on a BeagleBoard, with an Arduino Uno for pots and LEDs.
He uses SC running Satellite CCRMA, the BeagleBoard Linux distro from Stanford. Works out of the box.
Granular synthesis is cool. He uses a ‘ring buffer’ because it’s live sampling. This is a buffer that loops.
This is really cool.

Live Blogging the SC Symposium – Flocking by Colin Clark

Flocking – audio synthesis in JavaScript on the web

flockingjs.org
github.com/colinbdclark/flocking

an audio synthesis framework written in JavaScript

specifically intended to support artists

Inspired by SC

Web is everywhere

programming environments that have graphical tools

Flocking is highly declarative

Synth graphs declare trees of named unit generators – you write data structures, not code

Data is easy to manipulate

flock.synth({
  synthDef: {
    ugen: "flock.ugen.sinOsc",
    freq: 440,
    mul: 0.25
  }
})

He skips rate: "audio" because that’s the default.
Modulation:

flock.synth({
  synthDef: {
    ugen: "flock.ugen.sinOsc",
    freq: 440,
    mul: {
      ugen: "flock.ugen.line"
....
  }
})

It handles buffers and whatnot, but not multichannel expansion.
Scheduling is unreliable…but works

A useful script

The best way to remember to do something when you’re going to run some important program is to put it in the program itself. Or, at the very least, put it in a script that you use to invoke the program.
I have a few things I need to remember for the performance I’m preparing for. One has to do with a projector. I’m using a stylus to draw cloud shapes on my screen, and one way I can do this is to mirror my screen to a projector so the audience can see my GUI. However, doing this usually changes the geometry of my laptop screen, so that instead of extending all the way to the edges, there are empty black bars on either side of the used portion of my display. That’s fine, except the stylus doesn’t know and doesn’t adjust. So to put the pointer at the far right edge of the drawn portion of the screen, I need to touch the far right edge of the physical screen, which puts over a centimetre between the stylus tip and the arrow pointer. Suboptimal!
Ideally, I’d like to have any change in screen geometry trigger a script that changes the settings for the stylus (and I have ideas about how that may or may not work, using upstart, xrandr and xsetwacom), but in the absence of that, I just want to launch a manual calibration program. If I launch the settings panel, there’s a button on it that launches one. So the top part of my script checks if the resolution is different from normal and launches the settings panel if it is.
The next things I need to remember are audio related. I need to kill pulseaudio. If my soundcard (a Fast Track Ultra) is attached, I need to change the amplitude settings internally so it doesn’t send the input straight to the output. Then I need to start jack using it. Or, if it’s not attached, I need to start jack using the default device. Then, because it’s useful, I should start up Jack Control, so I can do some routing, should I need it. (Note: in 12.04, if you start qjackctl after starting jackd, it doesn’t work properly. This is fixed by 13.04.) Finally, I should see if SuperCollider is already running and, if not, I should start it.
That’s a bit too much to remember for a performance, so I wrote a script. The one thing I need to remember with this script is that if I want to kill jack, it won’t die from Jack Control, so I’ll need to do a kill -9 from the prompt. Hopefully, this will not be an issue on stage.
This is my script:

#!/bin/bash


# first check the screen

LINE=`xrandr -q | grep Screen`
WIDTH=`echo ${LINE} | awk '{ print $8 }'`
HEIGHT=`echo ${LINE} | awk '{ print $10 }' | awk -F"," '{ print $1 }'`

if  [[ ${WIDTH} != 1366 || ${HEIGHT} != 768 ]]
  then
 gnome-control-center wacom
  else
 echo normal resolution
fi


# now setup the audio

pulseaudio --kill

# is the ultra attached?
if aplay -l | grep -qi ultra
  then
 echo ultra
 
 #adjust amplitude
 i=0
 j=0
 for i in $(seq 8); do
         for j in $(seq 8); do
                 if [ "$i" != "$j" ]; then
                         amixer -c Ultra set "DIn$i - Out$j" 0% > /dev/null
                 else
                         amixer -c Ultra set "DIn$i - Out$j" 100% > /dev/null
                 fi
                 amixer -c Ultra set "AIn$i - Out$j" 0% > /dev/null
         done
 done

 #for i in $(seq 4); do 
 # amixer -c Ultra set "Effects return $i" 0% > /dev/null 
 #done 

 #start jack
 jackd -d alsa -d hw:Ultra &
  else
 #start jack with default hardware
 jackd -d alsa -d hw:0 &
fi

sleep 2

# jack control
qjackctl &

sleep 1

# is supercollider running?
if ps aux | grep -vi grep | grep -q scide
  then
 echo already running
  else
 scide test.scd &
fi