Bassdrum Update

Alas, my last bassdrum SynthDef had errors in it. You can’t do a .rand in a SynthDef; you need to use a UGen: Rand(lo, hi). And my sample-and-hold-ish bit was beyond screwed up.
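In case it’s not obvious why that matters: a .rand inside a SynthDef function runs once, while the def is being built, so every Synth made from it gets the same “random” value. The Rand UGen picks a fresh value on the server every time a Synth is created. A quick illustration:

(
 // 10.0.rand runs at build time; every \once synth has the same detuning
 SynthDef(\once, {|out = 0| Out.ar(out, SinOsc.ar(440 + 10.0.rand, 0, 0.1))}).store;
 // Rand runs per-synth on the server; every \fresh synth is detuned differently
 SynthDef(\fresh, {|out = 0| Out.ar(out, SinOsc.ar(440 + Rand(0.0, 10.0), 0, 0.1))}).store;
)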

Alas. To make up for this gruesome oversight, I’ve posted 16 samples of my analog drum patch to my sample library. They’re all public domain, because crediting individual samples is too annoying for users. I hate having to keep track of stuff like that. I use a personal wiki to track this stuff when I use samples that require attribution, and it’s still annoying.
Also, here’s my fixed version:


(
 SynthDef(\bassD, {|out = 0, hit_dur, amp, pan|
 
  var ringmod, noise, lpf, hpf, lpf_env, hpf_env, noise_env, env, panner, 
   pitch_env, slew, trig, sh;
  
  
  lpf_env = EnvGen.kr(Env.perc(0.05, 56.56, 12, -4));
  hpf_env = EnvGen.kr(Env.perc(0.05, 48.54, 12, -6));
  noise_env = EnvGen.kr(Env.perc(0.0001, 0.032, 1, -8));
  pitch_env = EnvGen.kr(Env.perc(0.07, hit_dur, 12, -2));
  
  env = EnvGen.kr(Env.perc(0.00005, hit_dur, amp, -2), doneAction: 2);
  
  trig = Impulse.ar(0.45/hit_dur, Rand(0.0, 1.0)); // retrigger a few times over the hit
  sh = Dwhite(-6.0, 6.0, inf);
  slew =  Lag.ar(Demand.ar(trig, 0, sh), hit_dur/1.7) + Rand(-6.0, 6.0); // lagged sample-and-hold pitch drift
  
  ringmod = LFTri.ar(
     (31 + slew + LFTri.ar((27 + pitch_env).midicps, Rand(0.0, 4.0), 60)).midicps, 
     Rand(0.0, 4.0)); // 5 octave log range
  noise = PinkNoise.ar(noise_env);

  lpf = RLPF.ar(ringmod, (56.56 + lpf_env).midicps, 0.5);
  hpf = RHPF.ar(lpf +  noise, (48.54 + hpf_env).midicps, 0.5);
  
  panner = Pan2.ar(hpf, pan, env);
  
  Out.ar(out, panner);
 }).store;
)

It really doesn’t sound as good as the analog. Frankly, nothing ever sounds as good as analog. My dream system is a real analog synthesiser on a chip or a board, with a physical interface with actual knobs and stuff. However, in my dream system, you’d be able to save knob settings and patches. And then you’d be able to use it with SuperCollider (or a similar system). Of course, I’d need a bunch of these things to get polyphony, so the rig would be outrageously expensive, but a boy can dream.
If you want to tell me that this already exists (in real analog, none of this fake stuff) please also tell me I’ve won the lottery! Er no, do tell me about it. It would certainly give me incentive to start buying lottery tickets.
Speaking of analog, I’m still looking for synth repair in the greater London area. My power supply for my Evenfall Minimodular is dead and there’s just not an off-the-shelf one that I can find. I theoretically know how to build my own, but I’d like my first power supply to power something slightly more replaceable. Meh. At least any new one I get will work with multiple voltages. Hauling around step down converters is a pain.

Bassdrum

There’s no better way to procrastinate than trying to do the perfect anything. You spend days on tiny things.
Um, anyway, completely unrelated to that, I’ve spent the last two days making a bassdrum synthdef.
Bass drum patch
First, I made an awesome bass drum patch on my analog synthesizer. (Well, “awesome” is a strong word, but anyway, I made a patch.) Then, I recorded the patch to my computer using the beta version of Audacity and an older version of OS X. Then I lovingly cut the recordings up perfectly and saved them as AIFF files. Then I threw away my original recording to save space on my overly crowded hard drive. Then I discovered all my AIFF files were empty, because the beta version of Audacity is great if you have the latest version of OS X, but much less fantastic if you don’t.
Hey, but that analog warmth takes up a lot of hard drive space. You need to rotate through several samples or it sounds the same every time. And I don’t have a MIDI->CV converter to use if I carry around my synths thusly patched to all my gigs. Also, it’s kind of hard to transport while patched. And I’d need to buy a second synthesizer for other stuff. So clearly the next best thing to do is replicate the patch as a SuperCollider synthdef.
I have to admit upfront that this is not as nice as the analog version: the RLPF UGen is nice, but it’s not the same kind of sound as the Voyetra8 RLPF, which is Moogy. So if you want to improve the patch, you could use a Moog emulator UGen and make sure the bass is enhanced. I’m not going to bother because of the CPU hit, but I like to keep stupid flash apps running in the background and have a poor sense of priorities.
Um, anyway, here’s the final version:

(
  SynthDef(\bassD, {|out = 0, hit_dur, amp, pan|
 
    var ringmod, noise, lpf, hpf, lpf_env, hpf_env, noise_env, env, panner, 
      pitch_env, slew, trig, sh;
  
  
    lpf_env = EnvGen.kr(Env.perc(0.05, 56.56, 12, -4));
    hpf_env = EnvGen.kr(Env.perc(0.05, 48.54, 12, -6));
    noise_env = EnvGen.kr(Env.perc(0.0001, 0.032, 1, -8));
    pitch_env = EnvGen.kr(Env.perc(0.07, hit_dur, 12, -2));
  
    env = EnvGen.kr(Env.perc(0.00005, hit_dur, amp, -2), doneAction: 2);
  
    trig = Impulse.ar(0.45/hit_dur, Rand(0.0, 1.0));
    sh = Dwhite(-6.0, 6.0, inf);
    slew =  Lag.ar(Demand.ar(trig, 0, sh), hit_dur/1.7) + Rand(-6.0, 6.0);
  
    ringmod = LFTri.ar(
        (31 + slew + LFTri.ar((27 + pitch_env).midicps, Rand(0.0, 4.0), 60)).midicps, 
        Rand(0.0, 4.0)); // 5 octave log range
    noise = PinkNoise.ar(noise_env);

    lpf = RLPF.ar(ringmod, (56.56 + lpf_env).midicps, 0.5);
    hpf = RHPF.ar(lpf +  noise, (48.54 + hpf_env).midicps, 0.5);
  
    panner = Pan2.ar(hpf, pan, env);
  
    Out.ar(out, panner);
  }).store;
)

I use “hit_dur” instead of dur because I want my drum sound to end before the Pbind gets around to playing again, but you can change that to dur.
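For example, a hypothetical test pattern (the numbers are made up): \dur spaces the hits a quarter of a beat apart, while hit_dur keeps each hit just short of that gap.

(
 Pbind(
  \instrument, \bassD,
  \hit_dur, 0.2,
  \dur, 0.25,
  \amp, 0.4,
  \pan, 0
 ).play;
)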
I figured out those values by using the Conductor class, which is in a quark. I’m fond of this as a sound design method because you can mess around with a GUI to change values and you can save them to disk. Here’s what the test code looks like:

(

 SynthDef(\bassD, {|out = 0, freq1, freq2, lpf_f, hpf_f, lpf_d, hpf_d, noise_d, dur, amp|
 
  var ringmod, noise, lpf, hpf, lpf_env, hpf_env, noise_env, env, panner, 
   pitch_env, slew, imp;
  
  
  lpf_env = EnvGen.kr(Env.perc(0.05, lpf_d, 12, -4));
  hpf_env = EnvGen.kr(Env.perc(0.05, hpf_d, 12, -6));
  noise_env = EnvGen.kr(Env.perc(0.0001, noise_d, 1, -8));
  pitch_env = EnvGen.kr(Env.perc(0.07, dur, 12, -2));
  
  env = EnvGen.kr(Env.perc(0.00005, dur, amp, -2), doneAction: 2);
  
  imp = Dust.ar(0.45/dur, 12) - 6;
  slew =  Lag.ar(imp, dur/1.7);
  
  ringmod = LFTri.ar((freq2.cpsmidi + slew + 
     LFTri.ar((freq1.cpsmidi + pitch_env).midicps, 4.0.rand, 60)
    ).midicps, 4.0.rand); // 5 octave log range
  noise = PinkNoise.ar(noise_env);

  lpf = RLPF.ar(ringmod, (lpf_f.cpsmidi + lpf_env).midicps, 0.5);
  hpf = RHPF.ar(lpf +  noise, (hpf_f.cpsmidi + hpf_env).midicps, 0.5);
  
  panner = Pan2.ar(hpf, 0, env);
  
  Out.ar(out, panner);
 }).store;

 Conductor.make({arg cond, freq1, freq2, lpf_f, hpf_f, lpf_d, hpf_d, noise_d, dur, db;
  freq1.spec_(\freq, 200+660.rand);
  freq2.spec_(\freq, 200+660.rand);
  lpf_f.spec_(\freq, 100+200.rand);
  hpf_f.spec_(\freq, 200+660.rand);
  lpf_d.sp(1, 0.0001, 1.5, 0, 'linear');
  hpf_d.sp(1, 0.0001, 1.5, 0, 'linear');
  dur.sp(1, 0.0001, 2, 0, 'linear');
  noise_d.sp(0.1, 0.00001, 1, 0, 'linear');
  db.spec_(\db, 0.2.ampdb);
  

  
  cond.pattern_(
   Pbind(
    \instrument, \bassD,
    \db,   db,
    \freq1,   freq1,
    \freq2,  freq2,
    \lpf_f,   lpf_f,
    \hpf_f,   hpf_f,
    \lpf_d,   lpf_d,
    \hpf_d,   hpf_d,
    \noise_d,  noise_d, 
    \dur,  dur    
   )
  )
 }).show;
 
)

Those of you who like high bass drums will have fun with freq1 and freq2, if you’re bored and feel like messing around with such things.
That synthdef actually has more envelopes than my IRL synth does. I think it’s because I wrote a post about envelopes today like they’re the greatest thing since sliced bread. I also have a post about Conductors that I find somewhat more readable than the official documentation, but that’s probably just because I wrote it.

Edit

I corrected an error in the main synthdef. However, the one next to the conductor is the old version.

TuningLib

Yesterday, I added a new Quark to SuperCollider, called TuningLib. It requires a recent version of MathLib, one with the Bessel.sc file included. There are several classes in the new Quark, all related to tuning.
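If you haven’t dealt with quarks before, getting both should be roughly this (assuming the usual Quarks workflow; recompile the class library afterwards):

 Quarks.install("MathLib");
 Quarks.install("TuningLib");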

Stuff from Jascha

Scala – This class is based on the SCL class from Jascha Narveson, but updated so it’s a subclass of the newer Tuning class. It opens Scala files, which means you can use the large and interesting Scala file library of thousands of tunings.
Key – Jascha’s SCL class also did a bunch of other interesting tuning-related things that the newer Tuning class does not, so I put these features in Key. It tracks your key changes and can interpolate between a given frequency or tuning ratio and the current active Scale.

Dissonance Curves

DissonanceCurve – is, I think, the most interesting part of the TuningLib. It generates Tunings, on the fly, for a given timbre. Give it your spectrum as lists of frequencies and amplitudes, as an FFT buffer, or as the specs for an FM tone, and it makes two different scales.
The first kind of scale it makes is the sort described by Bill Sethares. If you want to see the generated curve, you can plot it. Or you can get a Tuning from it. Or, you can get a scale made up of the n most consonant Tuning ratios. This is used in the second section of my piece Blakes 9.
The other sort of tuning it does is based on a similar idea, but using the classic Just Intonation notions of consonance. As with Sethares’ algorithm, every partial of a timbre’s spectrum is compared against every partial of the proposed tuning. It calculates the ratio between the frequencies. This could be 3/2, for example, or 115/114 or any whole-number ratio. The numerator and denominator of that ratio are summed. In just intonation, smaller numbers are considered more consonant, so the smaller the sum, the more consonant the ratio. (This sum is related to Clarence Barlow‘s ideas of ‘digestibility.’) Then, the resultant sum is scaled by the amplitude of the quieter of the two partials. So if the partials are at 3/2 and one has an amplitude of 0.2 and the other of 0.1, the result will be 0.5 ( = (3 + 2) * 0.1). This process repeats for every pair of partials, and the results are summed, giving the level of dissonance (or digestibility) of the proposed tuning.
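Here’s a minimal sketch of that per-pair score, my own illustration rather than the class’s actual code:

(
 var pairScore = { |freqA, ampA, freqB, ampB|
  var frac = (freqA / freqB).asFraction; // [numerator, denominator]
  (frac[0] + frac[1]) * min(ampA, ampB); // scale the sum by the quieter partial
 };
 // partials a fifth apart (3/2), amps 0.2 and 0.1:
 pairScore.value(300, 0.2, 200, 0.1).postln; // -> 0.5
)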
After computing the relative dissonance of all 1200 possible tunings in an octave, the next step is to figure out which ones to select as members of a scale. For this, the algorithm uses a moving window of n potential tunings. For a given tuning, if it is the most consonant of the n/2 tunings below it and the n/2 tunings above it, then it gets added to the Tuning returned by digestibleTuning.
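As a toy illustration of that window rule, with made-up per-cent dissonance scores (not the class’s internals):

(
 var scores = Array.fill(1200, { 1.0.rand }); // pretend dissonances, one per cent
 var n = 50, picked = [];
 scores.do({ |d, i|
  var lo = max(0, i - (n/2)).asInteger;
  var hi = min(scores.size - 1, i + (n/2)).asInteger;
  (d == scores.copyRange(lo, hi).minItem).if({ picked = picked.add(i) });
 });
 picked.postln; // the cents values that would make it into the tuning
)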
I don’t have any sound examples for this usage yet, but I’m working on some. I don’t know of any pieces by anybody else using this algorithm either, but I’m sure I’m not the first person to think of it. If you know of any prior work using this idea, please leave a comment.

Tuning Tables

Lattice – This is based on some tuning methods that Ellen Fullman showed me a few years ago. Based on the numbers you feed it, which should be an array of 2 and then odd numbers, it generates a tuning table (a sketch of how appears after the table). For [2, 5, 3, 7, 9], it creates:

 1/1  5/4  3/2  7/4  9/8
 8/5  1/1  6/5  7/5  9/5
 4/3  5/3  1/1  7/6  3/2
 8/7  10/7 12/7 1/1  9/7
 16/9 10/9 4/3  14/9 1/1
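Each entry is the column’s number over the row’s number, octave-reduced so it falls between 1/1 and 2/1. A quick sketch that reproduces the table above (my own code, not the Lattice class itself):

(
 var nums = [2, 5, 3, 7, 9];
 // fold a ratio into the octave [1, 2)
 var reduce = { |r| while({ r >= 2 }, { r = r / 2 }); while({ r < 1 }, { r = r * 2 }); r };
 nums.do({ |den|
  nums.collect({ |num|
   var frac = reduce.value(num / den).asFraction;
   "%/%".format(frac[0], frac[1]);
  }).postln;
 });
)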

You can use this class to navigate around in your generated table. For otonality, adjacent fractions are horizontal neighbors, so they share a denominator. For utonality, neighbors are on the vertical axis, so they have the same numerator. Three neighboring ratios make up a triad. You can walk around the table, so that you’re playing a triad, and then pick a member of that triad to be a pivot. Then, create a new triad on the other axis that contains your pivot as one of the members.
For example, one possible walk around the table, starting at 0,0 would be [1/1, 5/4, 3/2], [5/4, 1/1, 5/3], [3/2, 4/3, 5/3], [8/5, 4/3, 8/7], [8/7, 9/7, 1/1] etc. As you can (hopefully) see, the table wraps around at the edges.
I’ve done several pieces using this class, usually initializing it with odd numbers up to 21. Two examples are Beep and Bell Tolls.

Undocumented

There is also a class FMSpectrum that will compute the spectrum for an FM tone if given the carrier frequency, the modulation frequency and the depth (in Hz). I would like to also add a class to calculate the spectrum of phase-modulated signals, but I don’t have the formula for this. If you know it (or where to find it), leave a comment!
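For reference, the standard result a class like this rests on: an FM tone cos(2πct + I sin(2πmt)), with carrier c, modulator m and modulation index I = depth/m, has components at c + nm for every integer n, each with amplitude Jn(I), the nth Bessel function of the first kind. That’s also why the quark wants MathLib’s Bessel.sc.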

More Tuning

While continuing to ponder tuning, I realized that it would be possible to create a dissonance curve for just intonation. Instead of judging how close the frequencies are to each other to look for roughness, you would look at what tuning ratio they described. If one frequency was 1.5 times the other, then that’s a ratio of 3 / 2. Then add the numerator and denominator to get 5. Then scale by amplitude.
In Sethares’ dissonance curves, you get scale degrees by searching for minima in the curve, but that approach is not a meaningful way to sort just ratios. Instead, they can be sorted by their relative dissonance.
I’ve updated my class, DissonanceCurve.sc (and fixed the URL, sorry) so it can do just curves also. I ran it with the following (rough draft-ish) code:

 b = Buffer.alloc(s,1024); // use global buffers for plotting the data
 c = BufferTool.open(s, "sounds/a11wlk01.wav"); 

// when that's loaded, evaluate the following
(

 Task({

   d = SynthDef(\foo, 
    { FFT(b, PlayBuf.ar(1, c.bufnum, BufRateScale.kr(c.bufnum))); 0.0 }).play;

  0.2.rrand(3.7).wait; // wait a random while, so a random moment of the file gets analyzed

   e = DissonanceCurve.newFromFFT(b, 1024, highInterval: 2, action: {arg dis;
 
    var degr, top5;
 
    d.free;

     dis.scale.do({ |deg|
  
      postf(" % / %,",
        deg.numerator, deg.denominator);
     });
    "n---just---".postln;
    dis.scale.size.max(25).do({ |index|
 
     degr = dis.just_scale[index];
        postf(" % / %,",
         degr.numerator, degr.denominator);
     }); 

 
   });
 
 
 }).play
)

And, after seriously heating up my computer, and waiting a bit, I got the following output:

 1 / 1, 29 / 28, 6 / 5 , 5 / 4 , 33 / 26, 4 / 3, 15 / 11, 29 / 21, 7 / 5, 10 / 7, 
3 / 2 , 8 / 5, 5 / 3, 27 / 16, 49 / 29, 12 / 7, 67 / 39, 7 / 4, 17 / 9, 2 / 1, 
---just---
 1 / 1, 3 / 2, 2 / 1, 4 / 3, 5 / 4, 6 / 5, 7 / 6, 9 / 8,  8 / 7, 10 / 9, 5 / 3, 
12 / 11, 15 / 14, 13 / 12, 11 / 10, 16 / 15, 9 / 7, 7 / 5, 14 / 13, 22 / 21, 7 / 4, 
10 / 7, 21 / 20, 11 / 8, 11 / 9,

The top section is the Sethares algorithm dissonance curve. I made a minor adjustment so that it looks at fractions one cent on either side of the minima and grabs the simpler one if it exists. (This is optional; add “simplify: false” to the method invocation to turn it off.)
The bottom section is the 25 least dissonant just ratios. Looking first at those, note that 1/1 is the least dissonant, as one would expect. Usually, 2/1 would be next, but note that in this case, it’s 3/2 instead. The algorithm favors low-number ratios, which is logical. Notice also that there are a lot of (d+1)/d fractions: 4/3, 5/4, 6/5, 7/6, 9/8; it hugely favors these. The top half of the octave is underrepresented. I do not know why this is so.
But Sethares’ algorithm, because it uses the critical band, tends to favor higher pitches as more consonant. However, since we search for minima rather than order the intervals by dissonance, this tendency’s effect on the results is reduced.
Both of these computations of dissonance seem to give meaningful data, and the two lists correlate with each other to some degree: on both we find 6/5, 5/4, 4/3, etc. However, the length of the list of just ratios is arbitrary. If we take only the Sethares intervals that are also among the top 5% most consonant (least dissonant) just intervals, we are left with:

 1/1, 29/28, 6/5, 5/4, 4/3, 15/11, 7/5, 10/7, 3/2, 8/5, 5/3, 12/7, 7/4,
2/1

Of those, 29/28 is the most dissonant, by both measures, so it may not be the best scale degree. If that’s the case, then the top 5% is not the best cutoff. So what is? How do we choose it?
On the other hand, one way that just intonation is corralled is through factorization limits. For example, 7-limit tuning means that all the numbers in the ratios must have prime factors no greater than 7 (a sketch of such a check follows below). So 14 is ok (7 * 2), but 11 and 13 are not, as they’re primes greater than 7. If we were to apply a 7-limit to the Sethares curve, the scale we would have is

1/1, 6/5, 5/4, 4/3, 7/5, 10/7, 3/2, 8/5, 5/3, 27/16, 12/7, 7/4, 2/1

Is that better? Does the 27/16 (aka: (3*3*3)/(4*4)) impact that?
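As an aside, here’s a sketch of the kind of prime-limit check described above; it’s my own helper, not part of DissonanceCurve:

(
 var primeFactors = { |n|
  var f = [], p = 2;
  while({ n > 1 }, {
   while({ (n % p) == 0 }, { f = f.add(p); n = n div: p });
   p = p + 1;
  });
  f;
 };
 var withinLimit = { |num, den, limit = 7|
  (primeFactors.value(num) ++ primeFactors.value(den)).every({ |q| q <= limit });
 };
 withinLimit.value(14, 9).postln;  // true:  14 = 2*7, 9 = 3*3
 withinLimit.value(27, 16).postln; // true:  27 = 3*3*3, 16 = 2*2*2*2
 withinLimit.value(11, 8).postln;  // false: 11 is a prime greater than 7
)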
Alas, we can’t use our ears because we don’t know what moment of the source was measured. But we can use our ears with a synthetic sound whose frequency content is known.

f = [50] ++ ( [50/27, 18/7, 54/25, 25/27, 9/7, 27/25, 25/54, 9/14, 27/50] * 300);
a = [0.055, 0.1, 0.1, 0.1, 0.105, 0.105, 0.105, 0.11, 0.11, 0.11];
e = DissonanceCurve.new(f, a, 2);

With some print statements, abbreviated for the sake of not being too boring, we get a Sethares scale of 1, 7/6, 25/21, 25/18, 36/25, 42/25, 12/7, 2, which, note, falls within a 7-limit. For the top 8 just results, we get 1, 3/2, 6/5, 5/4, 7/5, 5/3, 34/27, 10/9, a list which does not include 2! And if we do the top 5% thing described above, we get 1, 7/6, 25/21, 25/18, 36/25, 2. And we can compare these aurally:

(
 SynthDef("space", {|out = 0, freq = 440, amp 0.2, dur = 1, pan = 0|
  var cluster, env, panner;
 
  // detune
  freq = freq + 2.0.rand2;
 
  cluster = 
  SinOsc.ar(50, 1.0.rand, 0.055 * amp) + 
  SinOsc.ar((freq * 50/27) + 1.0.rand2, 1.0.rand, 0.1 * amp) + 
  SinOsc.ar((freq * 18/7) + 1.0.rand2, 1.0.rand, 0.1 * amp) + 
  SinOsc.ar((freq * 54/25) + 1.0.rand2, 1.0.rand, 0.1 * amp) + 
  SinOsc.ar((freq * 25/27) + 1.0.rand2, 1.0.rand, 0.105 * amp) + 
  SinOsc.ar((freq * 9/7) + 1.0.rand2, 1.0.rand, 0.105 * amp) + 
  SinOsc.ar((freq * 27/25) + 1.0.rand2, 1.0.rand, 0.105 * amp) + 
  SinOsc.ar((freq * 25/54) + 1.0.rand2, 1.0.rand, 0.11 * amp) + 
  SinOsc.ar((freq * 9/14) + 1.0.rand2, 1.0.rand, 0.11 * amp) + 
  SinOsc.ar((freq * 27/50) + 1.0.rand2, 1.0.rand, amp * 0.11);
 
  env = EnvGen.kr(Env.perc(0.05, dur + 1.0.rand, 1, -4), doneAction: 2);
  panner = Pan2.ar(cluster, pan, env);
  Out.ar(out, panner);
 }).send(s);
)
(
   Pbind(
  //Sethares
    \freq,  Prand([1, 7/6, 25/21, 25/18, 36/25, 42/25, 12/7, 2] * 300, 27),
    \dur,  0.3,
    \instrument,  \space,
    \amp,  0.2,
    \pan,  0
   ).play   
)
(
   Pbind(
  // Just
    \freq,  Prand([1, 3/2, 6/5, 5/4, 7/5, 5/3, 34/27, 10/9] * 300, 27),
    \dur,  0.3,
    \instrument,  \space,
    \amp,  0.2,
    \pan,  0
   ).play   
)

(
   Pbind(
  // Top 5%
    \freq,  Prand([1, 7/6, 25/21, 25/18, 36/25, 2] * 300, 27),
    \dur,  0.3,
    \instrument,  \space,
    \amp,  0.2,
    \pan,  0
   ).play   
)

Which of those pbinds do you think sounds best? Leave a comment.

Questions about Differing Approaches to Tuning

Let’s start by all admitting that Equal Temperament is a compromise and that computers can do better. They’re fast at math and nothing physical needs to move, so we can do better and be more in tune. (The next admission is that I haven’t had the attention span to actually read all the way through the Just Intonation Primer, although it is a very informative book and everyone should buy a copy and actually read it. Nor have I read Tuning, Timbre, Spectrum, Scale, alas.)
When we say “in tune,” what does that actually mean? On the one hand, we are talking about beating. When you’re trying to tune two sound-generating objects playing the same note, there’s weird phasing and stuff that happens until you get it right: the beating sound you get when tuning a guitar. There’s also just a sort of roughness you hear when you play two notes that are really close to each other, like a C and a C# together. Both of these things seem to have something to do with being in tune, and both suggest possible approaches.
Just Intonation seems to be all about beating and zero crossings. Note relationships described with ratios in which the numerator and denominator are small whole numbers have less beating. This is because when the waveforms cross, it’s at the low-energy position, so they don’t interfere. 3/2 is thus very in tune. You can compute the amount of dissonance by adding the numerator to the denominator. Lower numbers are more in tune.
Bill Sethares, though, likes ten-tone equal temperament (and writing songs in Klingon) and came up with some timbres that sound good in such a strange tuning. He’s got some math about dissonance curves. The roughness mentioned above has to do with how our ears work and critical bandwidth. If we hear two tones that are close to each other in pitch, the ear hairs they stimulate overlap, so they interfere with each other and create roughness. We can take a timbre and see how much internal roughness it has, then transpose it a bit and measure the roughness of the original and the transposed version played at the same time. Do this a bunch of times and you get a curve. The minima on the curve are good scale degrees.
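A rough sketch of that procedure in SuperCollider, using the pairwise roughness constants from Sethares’ published dissmeasure code as best I recall them; treat it as an illustration, not his exact implementation:

(
 // roughness of one pair of partials, scaled by the quieter amplitude
 var pairDiss = { |f1, a1, f2, a2|
  var fmin = min(f1, f2), fdif = absdif(f1, f2);
  var s = 0.24 / ((0.0207 * fmin) + 18.96);
  min(a1, a2) * ((5 * exp(-3.51 * s * fdif)) + (-5 * exp(-5.75 * s * fdif)));
 };
 var freqs = [500, 1000, 1500], amps = [1.0, 0.5, 0.3]; // a toy timbre
 var curve = (0..1200).collect({ |cents|
  // the timbre plus a transposed copy of itself
  var all = freqs ++ (freqs * (2 ** (cents / 1200)));
  var allAmps = amps ++ amps;
  var total = 0;
  all.do({ |fa, i|
   all.do({ |fb, j|
    (i < j).if({ total = total + pairDiss.value(fa, allAmps[i], fb, allAmps[j]) });
   });
  });
  total;
 });
 curve.plot; // minima are candidate scale degrees
)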
Both of these approaches are perceptual, and the two seem to be in conflict; they use different parts of our perception, one more around the critical band and the other more around amplitude and phase. So I wonder how to get them to work together? I can compute a dissonance curve that goes from 1 to 1200 cents, but if I do it from an FFT’s spectrogram, the data I’m putting in is inexact. It only knows 512 frequencies, each of them slightly blurred, and I’m using it with 1200 transpositions. Also, the transpositions are appropriately logarithmic, but the bins of the FFT are not; they’re linear. Should I do a similarly linear comparison and save myself a lot of unnecessary computation, or does it make sense to do it by cents? Since I know there are artifacts in the spectrogram, should I find the minima and then search for “good” tuning ratios near them? Should the internal dissonance in the sample change the approach that I use?
I ported Sethares’ code to SuperCollider. You can download a working draft of DissonanceCurve.sc, if you desire. It’s quite math-intensive for FFTs, but I have a SynthDef made up of SinOscs, which is easy to analyze, since all the frequencies and amplitudes are known and there aren’t many of them. The freqs are in f and the amps are in a:

f = [50] ++ ( [50/27, 18/7, 54/25, 25/27, 9/7, 27/25, 25/54, 9/14, 27/50] * 300);
a = [0.055, 0.1, 0.1, 0.1, 0.105, 0.105, 0.105, 0.11, 0.11, 0.11];

e = DissonanceCurve.new(f, a, 2);
e.scale.do({ |deg|
  
 postf("Interval % - Dissonance %tRatio % / % n",
  deg.interval, deg.dissonance, deg.numerator, deg.denominator);
});

Which prints

Interval 1 - Dissonance 0.93194694734913 Ratio 1 / 1 
Interval 1.1667536657322 - Dissonance 1.1967977845161 Ratio 7 / 6 
Interval 1.1905817347928 - Dissonance 1.1395899373121 Ratio 25 / 21 
Interval 1.3883134504798 - Dissonance 0.92933737113208 Ratio 118 / 85 
Interval 1.4405968618317 - Dissonance 0.95473900132736 Ratio 85 / 59 
Interval 1.6798510690642 - Dissonance 0.79165734602377 Ratio 42 / 25 
Interval 1.7141578884562 - Dissonance 0.80288033573481 Ratio 12 / 7 
Interval 2 - Dissonance 0.49046268655094 Ratio 2 / 1 

118 / 85 is not a ratio of small whole numbers, but it’s apparently less dissonant than 7/6, or even 85/59, or even the internal dissonance of the source sound! But, if we look in the curve, we can find the ratios 1 cent distant on either side of 118/85:

Interval 1.3875117607442 - Dissonance 0.9386025721761 Ratio 111 / 80 
Interval 1.3883134504798 - Dissonance 0.929337371132 Ratio 118 / 85 
Interval 1.3891156034233 - Dissonance 0.92966297781753 Ratio 25 / 18 

25 / 18 is a much smaller ratio and a distance of 1 cent is not perceivable, so it’s probably a better number. But I am still slightly confused / unconvinced. Note also that sounds closer to 2/1 are all, in general, less dissonant than sounds closer to 1/1, because of the nature of the algorithm / critical bandwidth. But for just intonation, an inversion is barely more or less dissonant than its non-inverted form (3/2 sums to 5, while its inversion 4/3 sums to 7).
Also, an issue: the width of the critical band changes in different frequency ranges and I think it might help to use the Bark scale or something in the Dissonance Curve code, but the math is, as yet, a bit beyond me.
For the purposes of showing off, here’s a silly example with FFTs, which is not at all real time. (WARNING: THIS IS SLOW!)

 b = Buffer.alloc(s,1024); // use global buffers for plotting the data
 c = BufferTool.open(s, "sounds/a11wlk01.wav"); 
 d = { FFT(b, PlayBuf.ar(1, c.bufnum, BufRateScale.kr(c.bufnum))); 0.0 }.play;

// when that's playing, evaluate the following

 e = DissonanceCurve.newFromFFT(b, 1024, highInterval: 2, action: {arg dis;
 
  dis.scale.do({ |deg|
  
   postf("Interval % - Dissonance %tRatio % / % n",
    deg.interval, deg.dissonance, deg.numerator, deg.denominator);
  });
 });

Go and get a snack while that’s going. Make a cup of tea. You won’t be able to do anything else with SuperCollider until it finishes, so leave some comments about tuning. How should I be trying to combine dissonances curves and Just Intonation?
(My result for the code above (timing matters) was:

Interval 1 - Dissonance 2.4284846123288 Ratio 1 / 1 
Interval 1.0346671040459 - Dissonance 2.9055490440413 Ratio 30 / 29 
Interval 1.0557976305092 - Dissonance 2.9396229209406 Ratio 19 / 18 
Interval 1.0588513011885 - Dissonance 2.9394283497832 Ratio 18 / 17 
Interval 1.0625273666152 - Dissonance 2.9404120579786 Ratio 17 / 16 
Interval 1.0767375682475 - Dissonance 2.9248076874065 Ratio 14 / 13 
Interval 1.1114938763335 - Dissonance 2.8528563216285 Ratio 10 / 9 
Interval 1.1250584846888 - Dissonance 2.8384180012931 Ratio 9 / 8 
Interval 1.1302693892732 - Dissonance 2.8422250578475 Ratio 26 / 23 
Interval 1.1335384537169 - Dissonance 2.8404168678269 Ratio 17 / 15 
Interval 1.1667536657322 - Dissonance 2.773742908553 Ratio 7 / 6 
Interval 1.2002486666653 - Dissonance 2.6741210623142 Ratio 6 / 5 
Interval 1.2497735102289 - Dissonance 2.5747254321313 Ratio 5 / 4 
Interval 1.2628354511916 - Dissonance 2.5907859768328 Ratio 24 / 19 
Interval 1.2664879348481 - Dissonance 2.5910058160679 Ratio 19 / 15 
Interval 1.2856518332381 - Dissonance 2.5784271202225 Ratio 9 / 7 
Interval 1.3332986770912 - Dissonance 2.4554262314412 Ratio 4 / 3 
Interval 1.3503499461682 - Dissonance 2.4863589672953 Ratio 27 / 20 
Interval 1.3573881591926 - Dissonance 2.4874055968135 Ratio 19 / 14 
Interval 1.3755418181397 - Dissonance 2.469278592016 Ratio 11 / 8 
Interval 1.3811148862791 - Dissonance 2.4674194148261 Ratio 29 / 21 
Interval 1.3843096285337 - Dissonance 2.4676720796587 Ratio 18 / 13 
Interval 1.3891156034233 - Dissonance 2.4680185402198 Ratio 25 / 18 
Interval 1.4003945316219 - Dissonance 2.4587789993728 Ratio 7 / 5 
Interval 1.4289941397411 - Dissonance 2.432946313225 Ratio 10 / 7 
Interval 1.5000389892858 - Dissonance 2.2587031579717 Ratio 3 / 2 
Interval 1.5262592089606 - Dissonance 2.3003029027958 Ratio 29 / 19 
Interval 1.529789693524 - Dissonance 2.299895874529 Ratio 26 / 17 
Interval 1.5333283446696 - Dissonance 2.2993307022943 Ratio 23 / 15 
Interval 1.555631119012 - Dissonance 2.2871143779032 Ratio 14 / 9 
Interval 1.5619338268699 - Dissonance 2.2878440907054 Ratio 25 / 16 
Interval 1.6002899594453 - Dissonance 2.2385589006945 Ratio 8 / 5 
Interval 1.6114208563635 - Dissonance 2.2381572659306 Ratio 29 / 18 
Interval 1.6188844330948 - Dissonance 2.236239217168 Ratio 34 / 21 
Interval 1.625443414535 - Dissonance 2.2331690259349 Ratio 13 / 8 
Interval 1.6663213678518 - Dissonance 2.1624083601251 Ratio 5 / 3 
Interval 1.68763159226 - Dissonance 2.1765673941027 Ratio 27 / 16 
Interval 1.7141578884562 - Dissonance 2.1648475763908 Ratio 12 / 7 
Interval 1.7501759894904 - Dissonance 2.1359651045669 Ratio 7 / 4 
Interval 1.8004197968362 - Dissonance 2.0659411117752 Ratio 9 / 5 
Interval 1.8340080864093 - Dissonance 2.0362153732133 Ratio 11 / 6 
Interval 2 - Dissonance 1.6432830079835 Ratio 2 / 1 

Yikes)

Some Source Code

Yesterday, I posted some links to a SuperCollider class, BufferTool, and its helpfile. I thought maybe I should also post an example of using the class.
I wrote one piece, “Rush to Excuse,” in 2004 that uses most of the features of the class. Program notes are posted at my podcast, and the code is below. It requires two audio files, geneva-rush.wav and limbaugh-dog.aiff. You will need to modify the source code to point at your local copy of those files.
The piece chops up the dog file into evenly sized pieces and finds the average pitch for each of them. It also finds phrases in Rush Limbaugh’s speech and intersperses his phrases with the shorter grains. This piece is several years old, but I think it’s a good example to post because the code is all cleaned up to be in my MA thesis. Also, listening to this for the first time in a few years makes me feel really happy. Americans finally seem to agree that torture is bad! Yay! (Oh alas, that it was ever a conversation.)

(

 // first run this section

 var sdef, buf;
 
 
 // a callback function for loading pitches
  
 c = {

  var callback, array, count;

    array = g.grains;
    count = g.grains.size - 1;
     
    callback = { 
   
     var next, failed;
      
     failed = true;
      
     {failed == true}. while ({
    
      (count > 0 ). if ({
    
       count = count -1;
       next = array.at(count);
    
       (next.notNil).if ({
        next.findPitch(action: callback);
        failed = false;
       }, {
        // this is bad. 
        "failed".postln;   
        failed = true;
       });
      }, { 
       // we've run out of grains, so we must have succeeded
       failed = false;
       "pitch finding finished".postln;
      });
     });
    };
   
   };

 
 // buffers can take a callback function for when they finish loading
 // so when the buffer loads, we create the BufferTools and then
 // analyze them
  
   buf = Buffer.read(s, "sounds/pundits/limbaugh-dog.aiff", action: {
  
    g = BufferTool.grain(s, buf);
    h = BufferTool.grain(s, buf);
    "buffers read!".postln;
   
  g.calc_grains_num(600, g.dur);
    g.grains.last.findPitch(action: c.value);
    h.prepareWords(0.35, 8000, true, 4000);
 
   });
 
   i = BufferTool.open(s, "sounds/pundits/geneva-rush.wav");

  
   sdef = SynthDef(\marimba, {arg out=0, freq, dur, amp = 1, pan = 0;
  
   var ring, noiseEnv, noise, panner, totalEnv;
   noise = WhiteNoise.ar(1);
   noiseEnv = EnvGen.kr(Env.triangle(0.001, 1));
     ring = Ringz.ar(noise * noiseEnv, freq, dur*5, amp);
     totalEnv = EnvGen.kr(Env.linen(0.01, dur*5, 2, 1), doneAction:2);
     panner = Pan2.ar(ring * totalEnv * amp, pan, 1);
     Out.ar(out, panner);
    }).writeDefFile;
   sdef.load(s);
   sdef.send(s);

   SynthDescLib.global.read;  // pbinds and buffers act strangely if this line is omitted

)

// wait for: pitch finding finished

(

 // this section runs the piece

 var end_grains, doOwnCopy;
 
 end_grains = g.grains.copyRange(g.grains.size - 20, g.grains.size);
 

 // for some reason, Array.copyRange blows up
 // this is better anyway because it creates copies of
 // the array elements

 // also:  why not stress test the garbage collector?
 
 doOwnCopy = { arg arr, start = 0, end = 10, inc = 1;
 
  var new_arr, index;
  
  new_arr = [];
  index = start.ceil;
  end = end.floor;
  
  {(index < end) && (index < arr.size)}. while ({
  
   new_arr = new_arr.add(arr.at(index).copy);
   index = index + inc;
  });
  
  new_arr;
 };

 
 
 Pseq( [


  // The introduction just plays the pitches of the last 20 grains
  
  Pbind(
 
   \instrument, \marimba,
   \amp,   0.4,
   \pan,   0,
   
   \grain,   Pseq(end_grains, 1),
   
   [\freq, \dur],
      Pfunc({ arg event;
     
       var grain;
      
       grain = event.at(\grain);
       [ grain.pitch, grain.dur];
      })
  ),
  Pbind(
  
   \grain, Prout({
   
      var length, loop, num_words, loop_size, max, grains, filler, size,
       grain;
      
      length = 600;
      loop = 5;
      num_words = h.grains.size;
      loop_size = num_words / loop;
      filler = 0.6;
      size = (g.grains.size * filler).floor;
      
      
      grains = g.grains.reverse.copy;

      // then play it straight through with buffer and pitches

      {grains.size > 0} . while ({
        
       //"pop".postln;
       grain = grains.pop;
       (grain.notNil).if({
        grain.yield;
       });
      });
      
      
      loop.do ({ arg index;
      
       "looping".postln;
      

       // mix up some pitched even sizes grains with phrases

       max = ((index +2) * loop_size).floor;
       (max > num_words). if ({ max = num_words});
       
       grains = 
         //g.grains.scramble.copyRange(0, size) ++
         doOwnCopy.value(g.grains.scramble, 0, size) ++
         //h.grains.copyRange((index * loop_size).ceil, max);
         doOwnCopy.value(h.grains, (index * loop_size).ceil, max);
         
       

       // start calculating for the next pass through the loop
       
       length = (length / 1.5).floor;
       g.calc_grains_num(length, g.dur);
       g.grains.last.findPitch(action: c.value);
       
       grains = grains.scramble;
       

       // ok, play them
       
       {grains.size > 0} . while ({
        
        //"pop".postln;
        grain = grains.pop;
        (grain.notNil).if({
         grain.yield;
        });
       });
      });
      
      i.yield;
      "end".postln;
     }),
   [\bufnum, \dur, \grainDur, \startFrame, \freq, \instrument], 
    Pfunc({arg event;
    
     // oddly, i find it easier to figure out the grain in one step
     // and extract data from it in another step
     
     // this gets all the data you might need
    
     var grain, dur, pitch;
     
     grain = event.at(\grain);
     dur = grain.dur - 0.002;
     
     pitch = grain.pitch;
     
     (pitch == nil).if ({
      pitch = 0;
     });
     
     [
      grain.bufnum,
      dur,
      grain.dur,
      grain.startFrame,
      pitch,
      grain.synthDefName
     ];
     
    }),
      
      
   \amp,   0.6,
   \pan,   0,
   \xPan,   0,
   \yPan,   0,
   \rate,   1,
   
   
   \twoinsts, Pfunc({ arg event;
       
     // so how DO you play two different synths in a Pbind
     // step 1: figure out all the data you need for both
     // step 2: give that a synthDef that will get invoked no matter what
     // step 3: duplicate the event generated by the Pbind and tell it to play
       
       var evt, pitch;
       
       pitch = event.at(\freq);
       
       (pitch.notNil). if ({

        // the pitches below 20 Hz do cool things to the 
        // speakers, but they're not really pitches,
        // so screw 'em
        
        (pitch > 20). if ({
         evt = event.copy;
         evt.put(\instrument, \marimba);
         evt.put(\amp, 0.4);
         evt.play;
         true;
        }, {
         false;
        })
       }, {
        // don't let a nil pitch cause the Pbind to halt
        event.put(\freq, \rest);
        false;
       });
      })
        
  )      
      
       
 ], 1).play
)
 

This code is under a Creative Commons Share Music License

BufferTool

A while back, I wrote some code and put it in a class called BufferTool. It’s useful for granulation. Any number of BufferTools may point at a single Buffer. Each of them knows its own startFrame, endFrame and duration. Each one can also hold an array of other BufferTools which are divisions of itself. Each one may also know its own SynthDef for playback and its own amplitude. You can mix and match arrays of them.
You can give them rules for how to subdivide, like a set duration of each grain, a range of allowable durations or even an array of allowed duration lengths. Or, it can detect pauses in itself and subdivide according to them. It can calculate the fundamental pitch of itself.
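A minimal usage sketch, sticking to the calls that already appear in the “Rush to Excuse” code above (the sound file path is just an example):

(
 var buf;
 buf = Buffer.read(s, "sounds/a11wlk01.wav", action: {
  g = BufferTool.grain(s, buf);
  g.calc_grains_num(100, g.dur);       // divide into 100 evenly sized grains
  g.grains.last.findPitch(action: {}); // analyze the grains' fundamentals
 });
)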
I want to release this as a quark, but first I’d like it if some other people used it a bit. The class file is BufferTool.sc, and there’s a helpfile and a quark file.
Leave comments with feedback, if you’d like.

more performance stuff

Vincent Rioux is now talking about his work with SC

he improvised with an avant sort of theatre company. The video documentation was cool. I didn’t know about events like this in Paris. Want to know more.

in another project, he made very simple controllers with Arduino inside. He had 6 controllers. One Arduino for all 6.

Tiny speakers. This is also nifty. Used it at Pixelache festival.

the next project uses a light system. Uses a hypercube thing, which is a huge thing that the dancer stands inside. SC controls it.

the next thing is a street performance asking folks to help clean the street. Part of festival Mal au Pixel. This is mental! Also, near where I used to live. Man, I miss Paris sometimes.

the next one is a crazy steampunk dinner jacket, with a Wiimote thing.

dan’s installation

Dan St. Clair is talking about his awesome installation, which involves speakers hanging from trees doing bird-like renditions of ‘Like a Virgin’, which is utterly tweaking out the local mockingbirds.

when he was an undergrad he did a nifty project with songs stuck in people’s heads. It was conceptual and not musical.

when he lived in Chicago he did a map of muzak in stores on State Street, including genre and delivery method. He made a tourist brochure with muzak maps and put them in visitor centers.

he’s interested in popular music in environmental settings

Max Neuhaus did an unmarked, invisible sound installation in Times Square. Dan dug the sort of invisible, discovery aspect.

his bird emulator is solar powered. Needs no cables. Has an 8-bit microcontroller. They’re cheap as hell.

he’s loaded frequency envelopes in memory. Fixed control rate. Uses a single wavetable oscillator: http://www.myplace.nu/avr/minidds/index.htm

he made recordings of birds and extracted the partials.

he throws this up into trees. However, neighbors got annoyed and called the cops or destroyed the speakers.

he’s working on a new version which is in close proximity to houses. He’s adding a calendar to shut it down sometimes, and amplitude controls.

he has an IFF class to deal with SDIF and MIDI files. The SDIFFrames class works with these files.

there are some cool classes for FFT, like FFTPeaks

he’s written some cool GUIs for finding partials.

his method of morphing between bird calls and pop songs is pretty brilliant.

dan is awesome

live video

Sam Pluta wrote some live video software. It’s inspired by Glitchbot and MEAPSoft.

Glitchbot records sequences and loops and stutters them. Records a 16-bar phrase and loops and tweaks it. I think I have seen this. It can add beats and do subloops, etc.

the sample does indeed sound glitchy

probability control can be clumsy in live performance. Live control of beats is hard.

MEAPSoft does reordering.

his piece from the last symposium used a sample bank which he can interpret, and he records his interpreting and then does stuff with that. So there are two layers of improvisation. It has a small initial parameter space and uses a little source to make a lot of stuff.

I remember his piece from last time

what he learned from that was that it was good, especially for noisy music. And he controlled it by hitting a lot of keys which was awesome

he wrote an acoustic piece using sound blocks. Live instruments can do looping differently; you can make the same note longer.

so he read Michel Chion’s book on film and was influenced. He started finding sound moments in films. And decided to use them for source material.

sci-fi films have the best sound, he says.

playing a lot of video clips in fast succession is hard, because you need a format that renders single frames quickly. Pixlet format is good for that.

audio-video sync is hard with QuickTime, so he loaded audio into SC and did a bridge to video with Quartz Composer.

QC is efficient at rendering

he wanted to make noisy loops, like to change them. You can’t buffer video loops in the same way, so he needed to create metaloops of playback information. So looped data.

a loop contains pointers to movie clips, but starts from where he last stopped. Which sounds right

he organized the loops by category: kissing, car chases, drones, etc.

this is an interesting way of organizing and might help my floundering Blake piece.

he varies loop duration based on the section of the piece.