A while back, I wrote some code and put it in a class called BufferTool. It’s useful for granulation. Any number of BufferTools may point at a single Buffer. Each of them knows its own startFrame, endFrame and duration. Each one can also hold an array of other BufferTools which are divisions of itself. Each one may also know its own SynthDef for playback and its own amplitude. You can mix and match arrays of them.
You can give them rules for how to subdivide: a set duration for each grain, a range of allowable durations, or even an array of allowed durations. Or it can detect pauses in itself and subdivide according to them. It can also calculate its own fundamental pitch.
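A session looks something like this (simplified, and the method names here are shorthand for the description above rather than the exact interface, so see the helpfile for the real thing):

// sketch of usage -- method names are illustrative, see the helpfile
(
s.waitForBoot {
    var buf = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");
    var tool = BufferTool.grain(buf, 0, 2); // point at the first 2 seconds (illustrative constructor)
    tool.makeSubdivisions(0.1); // grains of 0.1 seconds each (illustrative method)
    tool.play; // play back with its own SynthDef and amplitude (illustrative method)
};
)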
I want to release this as a quark, but first I’d like it if some other people used it a bit. The class file is BufferTool.sc, and there’s a helpfile and a quark file.
Leave comments with feedback, if you’d like.
Tag: SuperCollider
more performance stuff
Vincent Rioux is now talking about his work with SC.
He improvised with an avant-garde sort of theatre company. The video documentation was cool. I didn’t know about events like this in Paris. I want to know more.
In another project, he made very simple controllers with Arduinos inside. He had 6 controllers, with one Arduino driving all 6.
Tiny speakers. This is also nifty. He used them at the Pixelache festival.
The next project uses a light system built around a hypercube: a huge structure that the dancer stands inside. SC controls it.
The next thing is a street performance asking folks to help clean the street, part of the festival Mal au Pixel. This is mental! Also, near where I used to live. Man, I miss Paris sometimes.
The next one is a crazy steampunk dinner jacket, with a Wiimote built in.
dan’s installation
Dan St. Clair is talking about his awesome installation, which involves speakers hanging from trees doing bird-like renditions of ‘Like a Virgin’, which is utterly tweaking out the local mockingbirds.
When he was an undergrad, he did a nifty project with songs stuck in people’s heads. It was conceptual and not musical.
When he lived in Chicago, he made a map of the muzak in stores on State Street, including genre and delivery method. He made a tourist brochure with muzak maps and put them in visitor centers.
He’s interested in popular music in environmental settings.
Max Neuhaus did an unmarked, invisible sound installation in Times Square. Dan dug the sort of invisible, discovery aspect of it.
His bird emulator is solar powered and needs no cables. It has an 8-bit microcontroller. They’re cheap as hell.
He’s loaded frequency envelopes into memory. Fixed control rate. It uses a single wavetable oscillator: http://www.myplace.nu/avr/minidds/index.htm
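His runs on the chip itself, but the same idea (one wavetable oscillator swept by a stored frequency envelope) is a few lines of SC. My own rough approximation, not his firmware:

// my own SC approximation of the idea, not his AVR code:
// one wavetable oscillator, frequency driven by a stored envelope
(
s.waitForBoot {
    var table = Buffer.alloc(s, 512, 1);
    table.sine1([1], asWavetable: true); // a one-cycle sine in wavetable format
    {
        // made-up chirp contour standing in for an extracted bird-call envelope
        var freq = EnvGen.kr(Env([2000, 4000, 2500], [0.1, 0.2]), doneAction: 2);
        Osc.ar(table, freq, 0, 0.2) ! 2
    }.play;
};
)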
He made recordings of birds and extracted the partials.
He throws these up into trees. However, neighbors got annoyed and called the cops, or destroyed the speakers.
He’s working on a new version which will sit in close proximity to houses. He’s adding a calendar to shut it down at certain times, plus amplitude controls.
He has an IFF class to deal with SDIF and MIDI files. The SDIFFrames class works with these files.
There are some cool classes for FFT, like FFTPeaks.
He’s written some cool GUIs for finding partials.
His method of morphing between bird calls and pop songs is pretty brilliant.
Dan is awesome.
live video
Sam Pluta wrote some live video software. It’s inspired by Glitchbot and MEAPsoft.
Glitchbot records sequences and loops and stutters them. It records a 16-bar phrase, then loops and tweaks it. I think I have seen this. It can add beats and do subloops, etc.
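Not Glitchbot’s actual code, obviously, but the basic record-then-stutter move looks something like this in SC (my own sketch):

// my own record-then-stutter sketch, not Glitchbot's code
(
s.waitForBoot {
    var buf = Buffer.alloc(s, s.sampleRate * 4, 1); // 4 seconds of mono
    { RecordBuf.ar(SoundIn.ar(0), buf, loop: 0, doneAction: 2) }.play;
    SystemClock.sched(4.5, {
        {
            var trig = Impulse.kr(8); // 8 restarts a second: the stutter
            var start = Latch.kr(LFNoise0.kr(8).range(0, 0.8), trig) * BufFrames.kr(buf);
            var slice = 0.25 * SampleRate.ir; // quarter-second subloop
            BufRd.ar(1, buf, Phasor.ar(trig, BufRateScale.kr(buf), start, start + slice, start)) ! 2
        }.play;
        nil
    });
};
)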
The sample does indeed sound glitchy.
Probability control can be clumsy in live performance. Live control of beats is hard.
MEAPsoft does reordering.
His piece from the last symposium used a sample bank which he could interpret, record his interpreting, and then do stuff with that recording. So there are two layers of improvisation. It has a small initial parameter space and uses a little source material to make a lot of stuff.
I remember his piece from last time.
What he learned from that was that it was good, especially for noisy music. And he controlled it by hitting a lot of keys, which was awesome.
He wrote an acoustic piece using sound blocks. Live instruments can do looping differently; you can make the same note longer.
He read Michel Chion’s book on film sound and was influenced by it. He started finding sound moments in films, and decided to use them for source material.
Sci-fi films have the best sound, he says.
Playing a lot of video clips in fast succession is hard, because you need a format that renders single frames quickly. The Pixlet format is good for that.
Audio/video sync is hard with QuickTime, so he loaded the audio into SC and built a bridge to video with Quartz Composer.
QC is efficient at rendering.
He wanted to make noisy loops, and to change them. You can’t buffer video loops the way you buffer audio, so he needed to create metaloops of playback information. So: looped data.
A loop contains pointers to movie clips, but each clip starts from where he last stopped it. Which sounds right.
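As I understand it, a metaloop is just a list of playback records, each remembering which clip it points at and where that clip left off. My own reconstruction of the idea, not his code:

// my own reconstruction of a metaloop: playback records plus resume points
(
var clips = ["clip_a.mov", "clip_b.mov", "clip_c.mov"];
var resume = 0 ! clips.size; // last stop point per clip, in seconds
var metaloop = [(clip: 0, dur: 0.5), (clip: 2, dur: 1.25), (clip: 0, dur: 0.25)];
metaloop.do { |event|
    var i = event[\clip];
    ("play" + clips[i] + "from" + resume[i] + "for" + event[\dur] + "seconds").postln;
    resume[i] = resume[i] + event[\dur]; // next time, pick up where we stopped
};
)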
He organized the loops by category: kissing, car chases, drones, etc.
This is an interesting way of organizing material and might help my floundering Blake piece.
He varies loop duration based on the section of the piece.
tea tracks
Jan is showing TeaTracks, a multi-track sequencer.
http://sampleandhold.org
SC 3.3 is required. It schedules events of many kinds, including sound files, or text files, which can include things like patterns.
He used it to control mxwendler, a video app which is free.
I feel sleepy.
Pfindur stops patterns after a set total duration.
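Worth writing down, since I always forget it: Pfindur wraps a pattern and cuts it off once the total duration is reached. For example:

// Pfindur limits a pattern's total duration: the inner Pbind would
// run forever, but this stops after exactly 4 beats
(
Pfindur(4, Pbind(
    \degree, Pseq([0, 2, 4, 7], inf),
    \dur, 0.25
)).play;
)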
live blog: beast mulch
Scott is talking about BEASTmulch, which is still unreleased.
There are classes for controllers, like hardware. There’s a plugin framework to easily extend stuff: BMPluginSpec(‘name’, {|this| etc. . . .
Multichannel stuff and swarm granulation, etc.
A kd-tree class finds the closest speaker neighbor.
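I don’t know the actual class interface, but conceptually the lookup is just nearest-neighbor search over speaker positions; brute force works fine for small rigs, and a kd-tree makes it fast for big ones. A toy version of the concept:

// brute-force nearest-speaker lookup -- my own sketch of the concept;
// a kd-tree does the same search much faster for large speaker arrays
(
var speakers = [[-1, 1], [1, 1], [-1, -1], [1, -1], [0, 0]]; // x, y positions
var source = [0.3, 0.8]; // virtual source position
var nearest = speakers.minItem { |pos|
    ((pos[0] - source[0]) ** 2) + ((pos[1] - source[1]) ** 2) // squared distance
};
("nearest speaker is at" + nearest).postln;
)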
If you want BEASTmulch, get it from Scott’s website.
There are speaker classes, like BMSpeaker.
BMInOutArray does associations.
BEASTmulch is a big library meant for everyone. Everything must be named. There are time references, like a soundfile player.
It’s trying to be adaptable: 100 channels or 8, make it work on both. It supports JIT and stems.
A usage example: it can be used live. Routing tables, control matrices. Pre- and post-processing use plugins.
I NEED to download this & use it.
http://scottwilson.ca
http://www.beast.bham.ac.uk/research/mulch.shtml
timbral analysis
Dan Stowell is talking about beatboxing and machine listening.
live blogging the supercollider symposium
Analyze one signal and use it to control another. Pitch and amplitude are done, so let’s do timbre remapping.
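The pitch-and-amplitude version is the already-solved one he means; roughly this, where the tracked pitch and loudness of the input drive a different timbre:

// the solved version: track pitch and amplitude of the input and
// remap them onto another timbre (here, a filtered saw)
(
{
    var in = SoundIn.ar(0);
    var amp = Amplitude.kr(in);
    var freq = Pitch.kr(in)[0]; // Pitch returns [freq, hasFreq]
    RLPF.ar(Saw.ar(freq), freq * 2, 0.2) * amp ! 2
}.play;
)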
Extract features from the sound, decorrelate and reduce dimensions, and map that into a space. What features to use? MFCCs and spectral crest factors; the latter is looking at peaks vs flatness.
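In SC terms that's about like this; MFCC is a core UGen, and the crest measure is in sc3-plugins (as FFTCrest, if I remember the name right):

// feature extraction sketch: 13 MFCCs plus a spectral crest measure.
// MFCC is core; FFTCrest is from sc3-plugins, if I remember right
(
{
    var chain = FFT(LocalBuf(1024), SoundIn.ar(0));
    var mfccs = MFCC.kr(chain, 13); // 13 cepstral coefficients
    var crest = FFTCrest.kr(chain); // peakiness vs flatness
    Poll.kr(Impulse.kr(2), crest, \crest);
    Silent.ar
}.play;
)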
His experiments use simulated degradation to make sure it works in performance.
Voice works well with MFCCs, but they are not noise-robust. Spectral crests are complementary and are noise-robust. The two together give you a lot of info.
A lot of different analyses give you useful information about perceptual differences.
Now he’s talking about an 8-bit chip and controlling it. Was this on Boing Boing or something recently?
Spectral centroid; the 95th percentile of energy from the left shows the rolloff frequency.
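Both of those are one-liners with the machine listening UGens, as far as I know:

// centroid and 95th-percentile rolloff from one FFT chain
(
{
    var chain = FFT(LocalBuf(2048), SoundIn.ar(0));
    var centroid = SpecCentroid.kr(chain);
    var rolloff = SpecPcile.kr(chain, 0.95); // freq below which 95% of the energy sits
    Poll.kr(Impulse.kr(2), [centroid, rolloff], [\centroid, \rolloff]);
    Silent.ar
}.play;
)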
He’s showing a video of the inside of his throat.
timbral analysis
Nick Collins is talking about timbral analysis and phase vocoders, which is SuperCollider-ese for FFTs.
I missed the first couple of minutes of this because there is an installation outside of solar-powered speakers in trees, making birdsong-like sounds, which played Madonna’s ‘Like a Virgin’ when I walked by, and I had to fall over laughing. Hahahah.
OK, back to the present. AtsSynth does some cool stuff with pitch shifting.
Scott Wilson’s UGens do Loris stuff, which is noise-modulated sine tones. Sinusoidal peak detection.
The TPV UGen does pure sinusoidal stuff: sines and phases. It takes an FFT chain input and creates sine outputs with resynthesis. It finds n peaks and uses that number of sinusoids. This is cool. And it’s part of SC 3.3.
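Something like this, though I’m writing the argument list from memory, so check the TPV helpfile before trusting it:

// TPV resynthesis sketch -- arguments from memory, check the helpfile:
// (chain, windowsize, hopsize, maxpeaks, currentpeaks, freqmult)
(
{
    var chain = FFT(LocalBuf(1024), SoundIn.ar(0));
    TPV.ar(chain, 1024, 512, 50, 50, 1.5) ! 2 // resynthesize 50 partials, up a fifth
}.play;
)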
SMS is spectral modelling synthesis: sines plus noise. It’s slightly expensive, but it preserves formants when repitching, so it sounds right when shifting speech.
good stuff!
theory continued
Time point synthesis: Babbitt wanted to serialize parameters in addition to pitch. He used durational sets, which become dull and don’t transform well.
Instead, use integers that map to a table of durations. Your grid has 12 durations, just cuz. Andrew Mead did some work on this.
There is a class TimePoints, which is an array.
This is a rhythm lib. I should look into this.
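The mapping itself is easy to sketch with patterns. My own toy version, not the TimePoints class:

// my own toy time-point sketch, not the TimePoints class:
// a 12-tone row's integers index a 12-slot duration grid
(
var row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9];
var grid = (1..12) * 0.125; // 12 durations, just cuz
Pbind(
    \degree, Pseq(row, 2),
    \dur, Pseq(row.collect { |i| grid[i] }, 2)
).play;
)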
We’re listening to ‘Homily’ by Babbitt, which uses these kinds of transformations.
And the code isn’t on the internets.
And now, Virtual Gamelan Graz.
This is an attempt to model everything about gamelan.
Tuning: well, don’t model everything, just the metallophones. The tuning should be an ideal. This requires fieldwork and interviewing builders. Or you could just measure existing instruments.
Pick one instrument. Measure root pitches. You’re good.
Or do more recording, like Sethares. Measure more ensembles. Which partial is the root?
These guys sampled the local gamelan and went with that.
The tuning . . . are we sure of the root pitches? Is it the instruments relative to each other, one in reference to itself, or the partials within a single note?
There is an image on a grid, which is hard to see as a slide.
You can do a lot of retuning.
Sumarsam is raising a point on pelog tuning. The musicologist in the group is absent, so the presenters have to defer.
How to synthesize: samples or synthesis? They use sines and formlet filters.
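SC’s Formlet UGen is presumably what’s meant. A quick sketch of the sines-plus-formlets flavor; the numbers are made up by me, not their model:

// sines plus a formlet filter -- my own guess at the flavor, not their model
(
{
    var freq = 292; // made-up root pitch for the example
    var strike = Impulse.ar(0) * 0.3; // a single strike excites the filter
    var partials = SinOsc.ar(freq * [1, 2.76, 5.4], 0, [0.3, 0.2, 0.1]).sum
        * EnvGen.kr(Env.perc(0.005, 2), doneAction: 2);
    var body = Formlet.ar(strike, freq * 2, 0.01, 0.2);
    (partials + body) ! 2
}.play;
)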
Performance modelling: model human actors, or use contextual knowledge.
They did not go with individuals.
They have an event model. Each note is an event, which holds what you need to know.
Audio demo. It does tempo changes right. They use ListeningClocks to do time right; I need to look at this class. The clocks follow each other, and you can set empathy and confidence to control how much they deviate.
listening to theory
Live blogging the sc symposium
panel: listening to theory
Sound in film makes the film Real and anchors it to the real world. People infer the sources of sounds from visual cues.
Causation: synchresis is synchronization plus synthesis. Does sound exist in a vacuum? This is a philosophical question. A related question is: where does sound come from?
Is an echo one sound or two? Depending on what you think, your perception changes.
What about form and matter? Is it just a medium, or is it the very stuff of sound?
Now we are watching a film of car traffic which looks like it might have been filmed in Germany. It’s got sounds of cars and wind and birds.
But all the sounds were made in SuperCollider!
So what was before about intentions or agency is now about algorithms and effects.
Now Renate Wieser will speak. She did an installation called the Phaedrus Machine. It’s related to a Socratic dialogue, which she is describing. Good people are reincarnated as philosophers, bad people as George Bush. (These are my words, not hers.)
To practice the good life and avoid a bad reincarnation, she has a video game you can play to practice looking for truth. There are sound cues if you reach truth or if you fall from it. The game is audio-only and uses a vertical speaker arrangement. You do get feedback in the form of a spreadsheet at the end, which describes your reincarnation level.
She has another installation called ‘Survival of the Cutest.’ It’s a play with voices coming out of different speakers. SC sends them to whatever channel, semi-randomly.
The Excel thing with the spreadsheet works because SC writes to a tab-delimited file and Excel looks at it from time to time.
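The SC side of that trick is only a few lines. My own minimal version, not her code:

// minimal version of the trick: write tab-delimited rows that a
// spreadsheet app can re-read periodically (my sketch, not hers)
(
var f = File("~/reincarnation.tsv".standardizePath, "w");
f.write("player\tscore\tlevel\n");
f.write("renate\t42\tphilosopher\n");
f.close;
)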
Tom Hall will speak now. He’s talking about 20th century stuff: the legacy of musical modernism. What is a musical object? Instruments vs sounds.
The 20th century had more math stuff in music than any time since the Renaissance. Schoenberg came up with twelve-tone almost a hundred years ago. Stravinsky took it up after Schoenberg died.
Stravinsky said that when he composed with intervals, he was aware of them as objects. Babbitt took up the 12-tone thing; he was into the maximum diversity of permutations.
Set class theory is an American thing. There’s some set class stuff in SuperCollider, though.
A set can be represented by an array. Tones are integers in equal temperament, much like MIDI.
He has a PitchCircle class to visualize sets.
Powersets are all subsets of a set’s elements. A size-n set has a powerset of size 2**n.
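Sets-as-arrays fall out very naturally in SC: transposition is addition mod 12, inversion is subtraction, and a powerset is a few lines. This is just the idea, not whatever class library he’s using:

// pitch-class sets as integer arrays: transpose, invert, powerset
(
var set = [0, 1, 4]; // a trichord
var pow = [[]];
((set + 3) % 12).postln;  // transposition up 3: [3, 4, 7]
((12 - set) % 12).postln; // inversion: [0, 11, 8]
// powerset: every subset; a size-n set yields 2**n subsets
set.do { |pc| pow = pow ++ pow.collect { |sub| sub ++ [pc] } };
pow.postln; // all 8 subsets of the trichord
)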
Tom Johnson wrote a piece called ‘The Chord Catalogue’ which sounds cool. http://www.editions75 . . .
break for 5