Unpopular Music

Once in a while, I get the idea of doing algorithmic pop music, labor intensely on it, come up with something, and then walk away horrified. So, um, if anybody’s interested, here’s the latest incarnation of this cycle: S’onewall.
The samples are recordings of the largest-ever transgender rights protest in the UK, which took place last month. And then there are drum beats. The bassline uses a subset of the Bohlen-Pierce scale, in just intonation, with notes chosen according to a variation of Clarence Barlow’s “indigestibility” formula. To determine the relative consonance of two ratios, divide one by the other, reduce the result to lowest terms, and add the numerator to the denominator. A lower number indicates a simpler result and thus a higher degree of consonance. There is ugly code, available for your perusal. Quick examples are at the bottom of this post.
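A quick sketch of that formula (a hypothetical helper for illustration, not the actual code from the piece):

 // Divide one ratio by the other, reduce to lowest terms, and sum the
 // numerator and denominator; lower totals mean greater consonance.
 ~relativeConsonance = { |num1, den1, num2, den2|
     var num = num1 * den2; // (num1/den1) / (num2/den2)
     var den = den1 * num2;
     var divisor = num.gcd(den); // reduce to lowest terms
     (num div: divisor) + (den div: divisor)
 };

 ~relativeConsonance.(3, 2, 5, 4); // 3/2 against 5/4 reduces to 6/5 -> 11
 ~relativeConsonance.(9, 7, 1, 1); // 9/7 against the tonic -> 16
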
This is not on my podcast because I’m not so into it. I have ideas of what might fix it, but I suspect those ideas are wrong and it’s taken up so much time already. However, as un-enthused as I am, I think somebody, someplace might want to remix this. Or maybe I’m flattering myself.
I wish I could offer the parts as separate tracks, but, ha ha, the only way I could get this to record at all was with Audio Hijack, because there’s a logic problem somewhere in the code which causes it to hang right at the end, and chasing that bug is just more trouble than it’s worth.

Code Example

Ok, using the ScaleLattice: First declare a scale with some ratios in it:

 ~scale = ScaleLattice([[1, 1], [11, 9], [9, 7], [7, 5], [5, 3], [9, 5],
     [11, 5], [7, 3], [27, 11], [27, 25], [25, 9]], 3);

That’s not the scale from the piece, but it’s also a nice one. We can then try to construct a melody by getting some step-wise motion:

 ~melody = ~scale.getNsteps(4);

And then maybe jump to the most consonant note from the tonic, followed by one step down:

 ~melody = ~melody ++ ~scale.getIstepsBelowJconsonance(2, 0);

Um, and then let’s get the most consonant pitch from the last one in the melody:

 ~melody = ~melody ++ ~scale.consonanceAtFloat(0, ~melody.last);

Yeah, this probably sounds bad, but we could play it:

 Pbind(\dur, 0.3, \freq, Pseq(~melody * 440, 1)).play;

I have a hypothesis that with the combination of relative consonances and stepwise motion, you could abstract music theory to the point where you could construct a meaningful melody from an arbitrary scale, such that the program doesn’t know the scale ahead of time. The missing piece is handling notes that are too close to each other, which I suspect will have very high relative dissonance. I may think on this further, or I might go back to doing whatever else it is that I do.
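To illustrate that last point with the hypothetical helper from above: two notes that are close together divide to a ratio just above 1/1, whose reduced numerator and denominator are large, so the measure rates the pair as very dissonant:

 ~relativeConsonance.(27, 25, 1, 1); // a tiny step from the tonic -> 52
 ~relativeConsonance.(3, 2, 1, 1);   // a just fifth, for comparison -> 5
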

It’s Alive!!

Remember back in 2004–2005, when I was working on the SuperCollider tutorial of doom? It was going to be my thesis, but, alas, it was not meant to be.

It turns out that writing tutorial chapters is actually a great way to procrastinate. It sort of feels like I’m working on music, but without actually making any sound (alas, this has a lot in common with certain pieces I’m writing). So the project is alive right now.
If you are interested in alpha-testing these chapters as I write them, please leave a comment. The intended audience is people who have never programmed before (and Max users). If you have never before used SuperCollider in your life, I have the tutorial for you! Or, if you’ve tried and become confused. Or if you just want to see a different way of approaching the language.
Alas, most music professors have never taught (or taken) a regular computer science class. My goal is to convey all the important CS concepts, but in a way that’s immediately useful to musicians. Hopefully, if you follow the tutorial, at the end you’ll not only be able to make some cool sounds in SuperCollider, but you’ll be able to quickly grasp other object-oriented languages, like Java (which is actually a very useful second language for SC programmers who want to add visual components to their work).
I’m re-writing them to be more sound-focussed than last time. I’m starting users with Pbinds, which are a way of handling note creation and timing, and which are fast and easy despite being kind of weird. So I need n00bs. Pass it on.
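For the curious, here’s the flavor of the first Pbinds the tutorial builds up to (an illustrative example, not an actual excerpt from the chapters):

 // Each key controls one aspect of the notes; this plays a short
 // phrase on the default synth.
 Pbind(
     \degree, Pseq([0, 2, 4, 7, 4], 1), // scale degrees of the melody
     \dur, 0.25                         // a quarter of a beat per note
 ).play;
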

HIDden Options

I’m trying to plug a new joystick into SuperCollider. I got a Logitech Attack 3 joystick which I want to do a short act with, maybe next week at a drag king bar. But I can’t get SuperCollider to talk to it.

The newer version of SC broke all of my joystick code. That’s fine, except I can’t get the newer stuff to work. When I try running the examples under GeneralHID, it can see my joystick and knows about all the buttons and the XYZ stuff, but it doesn’t seem to notice when I push one of those buttons or wiggle the stick. I tried the joystick briefly with JunXion, so I know it works, but SC just isn’t getting data from it.
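Roughly what I’m doing, following the GeneralHID help examples (quoting from memory, so treat the details as approximate):

 GeneralHID.buildDeviceList;
 GeneralHID.deviceList.postln; // the Attack 3 shows up here
 d = GeneralHID.open(GeneralHID.deviceList[0]);
 d.info.postln; // all the buttons and axes are listed
 // ...but actions like this one never fire when I press anything:
 d.slots[1][288].action_({ |slot| ["button", slot.value].postln });
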
I wonder if there’s some sort of trick or secret to this? I had to switch my audio stuff to an aggregate device to read in and out. Is there something like that for HIDs? Some secret magic?

Algorithmic dance music generation

Nick Collins

His laptop is signed by Stockhausen.

He wrote a techno generator 10 years ago, which was silly. So he’s trying it again, but with synthpop. The new project is called Infno.

When you press play, you want something that’s different every time in a significant way. (This sounds like old school video game music.)

Whoah, it really is different every time! Still video-gamey, though. This has garnered applause from the audience.

The lines all know about each other and share data. The order of generation matters.

This is really cool.

Also, he has the idea of generative karaoke! Ooh, now there is audience participation. More applause.

This is the coolest thing ever.

There is a computer-written pop song from 1956. Kako will be singing the lyrics from that song. The melody here is not known in advance.

This sounds like J-pop. Also like drunken karaoke. Wow, a lovely disaster. I am in love with everything about this. The singer is muddling through. Wow, now she’s getting it, sorta.

Applause and cheering.

Now he’s playing techno.

More applause.

Algorithmic lyric generation is next!

A paper will be forthcoming.

Loris: a SuperCollider implementation

Scott Wilson

Loris is an additive sound modelling method.

A sines-plus-noise approach: noise is assigned to partials, modulating each partial with a filtered noise source. This is a lossy process but is perceptually accurate.
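My rough understanding of a single bandwidth-enhanced partial, as a sketch (my guess, not Scott’s actual UGens): a sine whose amplitude mixes a steady level with low-pass-filtered noise according to a bandwidth parameter:

 (
 {
     var freq = 440, amp = 0.2, bw = 0.3; // bw = noise share of energy
     var noise = LPF.ar(WhiteNoise.ar, 100); // slowly varying modulator
     var partial = SinOsc.ar(freq) * ((1 - bw).sqrt + (bw.sqrt * noise));
     partial * amp ! 2
 }.play;
 )
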

Loris is a class library which can do some interesting things with partials. The Python API is very good.

Data is exportable in several formats. SPEAR, a piece of free software, is nice for editing some of these file types. The command-line tools are also good.

Loris was not developed for real-time use. It’s not fast to compute this kind of analysis. Sometimes you must change params to get a good analysis, which can be a problem for real time. Also, in real time you might not want to listen to every partial, but that’s also computationally expensive.

Analysis yields a partial list with envelopes for freq, amp, bandwidth, phase, etc.

Scott sticks the analysis results in an SC object. There are four classes: some UGens, data-holding classes, and an oscillator.

The oscillator does all the partials. It can do some spectral diffusion.

You can stretch stuff, mess with bandwidth, and do funny things with different partials to move them around. This may work with the previous topic.

New release forthcoming. This is cool.

Dissonance curves

Juan Sebastián Lach

Roughness or beating corresponds to the Hz difference between two sounds. It has to do with physical properties of the ear and the critical bandwidth, which is the width within which you can no longer hear sounds as discrete. You can only hear one sine wave per critical bandwidth. The Bark scale climbs the critical bands.

Dissonance is perceived from the Bark scale and also cultural factors. The Bark scale also applies to partials and overtones. Helmholtz held that acceptable amounts of roughness are cultural.

This speaker has a Dissonance class, which looks to be very interesting. It also has a barkToHerz method.
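For reference, the standard Zwicker/Terhardt approximation going the other way, Hz to Bark (my own sketch, not Lach’s class):

 ~hertzToBark = { |freq|
     (13 * atan(0.00076 * freq)) + (3.5 * atan((freq / 7500).squared))
 };
 ~hertzToBark.(440); // about 4.2 Bark
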

Tenney thought that consonance and dissonance meant different things in different contexts. The terms have a functional usage depending on how music is composed: historical systems.

Barlow has some fancy-sounding theories. He imagines a consonance-dissonance axis.

The Dissonance class can be used to derive scales. I must have this class!!

Sethares holds that tunings are related to the timbres of instruments. Scales are derived according to the roughness of the partials present in the instruments used.
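Sethares’ pairwise roughness measure, as I recall it from his published dissonance-curve work (my transcription, not code from this talk); summing it over all pairs of partials in a timbre gives the sensory dissonance at a given interval:

 ~pairRoughness = { |f1, f2, a1 = 1, a2 = 1|
     var fmin = min(f1, f2);
     var s = 0.24 / ((0.0207 * fmin) + 18.96); // tracks the critical band
     var d = absdif(f1, f2);
     a1 * a2 * (exp(-3.5 * s * d) - exp(-5.75 * s * d))
 };

 ~pairRoughness.(440, 466); // a rough, semitone-ish pair
 ~pairRoughness.(440, 880); // an octave: almost no roughness
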

Computer composers can use a timbral grammar. The presenter has some real-time analysis. He’s been doing this stuff while I’ve been navel-gazing about it. Awesome.

SC symposium

Jason Dixon – controlling group laptop improvisation

Problems often stem from performers not listening to each other: a huge cacophony of noise, competitiveness, lost players. Then things drag on much too long. There is a sameness. People don’t look at each other and miss cues. Also, there’s a lack of visual element. The entire frequency spectrum being used by every player makes it impossible to pick out lines or anything.

Sonic example 1: improv gone wrong (have any of us here not heard this at least once?) And the example does indeed sound like a whole lotta noise.

Keys to success: force people to play quietly. Small amps, speakers located very close to the performers.

Alain Renaud developed a good system, the Frequencyliator: www.alainrenaud.net

Frequency spectrum divided among players, like instruments. Filters used to enforce this!
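Presumably something along these lines (my guess at the idea, not the Frequencyliator itself): each player’s live input confined to an assigned band by a band-pass filter:

 (
 SynthDef(\bandSlot, { |out = 0, lo = 200, hi = 800|
     var center = (lo * hi).sqrt; // geometric-mean center frequency
     var rq = (hi - lo) / center; // bandwidth over center
     var sig = SoundIn.ar(0);     // the player's live input
     sig = BPF.ar(BPF.ar(sig, center, rq), center, rq); // steeper slopes
     Out.ar(out, sig ! 2);
 }).add;
 )

 Synth(\bandSlot, [\lo, 200, \hi, 800]); // one band per player
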

Presenter has an idea for a genetic algorithm to instruct players.

Live!! From the SuperCollider symposium

16:30

Cadavre exquis!

Need to grab my Mac!

Site gets slow when everybody in the room tries to download from it.

The Public class sends actual code across the network. Yikes. There’s a message called ‘avoid the worst’ which keeps folks from executing Unix commands. Sorta.

It’s polite in gamelan to not play on each other’s beats, so speed changes are lagged. This clock system sort of models that.
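I imagine something like this (a sketch of the lagging idea, not the presenters’ actual clock code): ramp the shared clock’s tempo toward its target rather than jumping:

 ~rampTempo = { |clock, target, time = 4, steps = 20|
     Routine({
         var start = clock.tempo;
         steps.do { |i|
             clock.tempo = start + ((target - start) * (i + 1) / steps);
             (time / steps).wait;
         };
     }).play(SystemClock); // SystemClock keeps the waits in seconds
 };

 ~rampTempo.(TempoClock.default, 2); // ease up to double speed
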

There is a collective class that discovers folks and keeps track of their IP addresses. Broadcasting makes this possible, I think.

SC con live blogging

16:00

Thom & Voldemars present on sonic quanta.

How do you deal with many, many control parameters?  Understanding, controlling, but not being able to touch them individually.

One method is parameter reduction. However, they seek to be as direct as possible.

They have a matrix at the center of their system, which deals with all their data: a multidimensional data structure.

They have a visual representation. (How do they pick parameters and adjust them?)

The matrix projection has 3D clouds that look sorta chaos-based. These clouds can rotate, move around, expand and contract. They can also warp from a plane to a surface.

They use things like speed of movement as control values for things like amplitude. The matrix may relate to spatialization? They are not using statistical controls for their grains. This makes parameters and relationships clear. The GUI is built in GTK, not SuperCollider.

They will use this as an installation. They’re now working on trajectory mapping, maybe with envelopes. The visualization is done in Jitter.

They worked on a controller at STEIM, but then moved to mathematical controls.

Oops, it IS statistical. Oh, and they do use randomness and parameter reduction. I’m confused, except that there are white dots forming 3D states swooping around on the screen. Woosh! Swoop!

They are not sharing their code as of yet. Too shy.

Live blogging the SuperCollider symposium

15:13

GUIs for live improv

Musical software imitating physical instruments is silly. Screens are 2D, so interfaces should be 2D.

www.ixi-audio.net

The sound scratcher tool allows some mousy ways to modify sound-file playback with looping, granulation, scratching, etc. The x and y axes are file selection and pitch.
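Not ixi’s code, but a minimal sketch of the mouse-axis idea, with MouseX mapped to playback rate/pitch:

 (
 b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
 {
     var rate = MouseX.kr(0.25, 4, \exponential); // x axis: pitch/rate
     PlayBuf.ar(1, b, rate * BufRateScale.kr(b), loop: 1) ! 2 * 0.3
 }.play;
 )
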

Live patching. Predators is an awesome game-like algorithmic player which can have live-coded synths and other aspects. Many playful, expressive, imaginative interfaces. A polyrhythm player.

A very interesting environment. It also suggests evolutionary strategies for algorithmic composition.