Miguel Negrão: Real time wave field synthesis

Live blogging the sc symposium. I showed up late for this one.
He’s given a summary of the issues of wave field synthesis (using two computers) and is working on a sample-accurate real-time version entirely in SuperCollider. He has a sample-accurate version of SC, provided by Blackrain.
The master computer and slave computer are started at unknown times, but are synched via an impulse. The sample number can then be calculated, since you know how long it’s been since each computer started.
All SynthDefs need to be the same on both computers. All must have the same random seed. All buffers must be on both. Etc. So he wrote a Cluster library that handles all of this, making two copies of everything in the background while looking just like one. It holds an array of stuff, has no methods of its own, and just forwards messages down to the stuff it’s holding.
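(I’m guessing at the mechanism here; a minimal sketch of that kind of transparent delegation in sclang, not the actual Cluster implementation, would sit in a class file and look something like this.)

ClusterSketch {
	// Sketch only: hold several objects and forward every unknown message to each of them.
	var <items;
	*new { |... objects| ^super.newCopyArgs(objects) }
	doesNotUnderstand { |selector ... args|
		^items.collect { |item| item.performList(selector, args) }
	}
}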
Applications of real-time wave field synthesis: connecting the synthesis with the place where it is spatialized. He’s doing some sort of spectral-synthesis-ish thing: putting sine waves close together gets you nifty beating, which creates an even stronger sense of movement. The position of a sine wave in space gives it its frequency; he thus makes a frequency field of the room. When stuff moves, it changes pitch according to location.
This is an artificial restriction that he has imposed. It suggested relationships that were interesting.
The scalar field is selected randomly. Each sine wave oscillator has (x, y) coords. The system is defined by choosing a set of frequencies, a set of scalar fields, and groups of closely tuned sine wave oscillators. He’s used this system in several performances, including in the symposium concert. That had a maximum of 60 sine waves at any time. It was about slow changes and slow movements.
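A toy sketch of the frequency-field idea (entirely my own reconstruction, not his code): each oscillator gets an (x, y) position, and a scalar field maps that position to its frequency.

(
var field = { |x, y| 200 + (x * 40) + (y * 70) };   // an arbitrary example scalar field
var positions = { [8.0.rand, 6.0.rand] } ! 12;      // 12 oscillators placed in an 8 x 6 m area
positions.do { |pos|
	{ SinOsc.ar(field.value(pos[0], pos[1]), 0, 0.02) ! 2 }.play;
};
)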
His code is available at http://github.com/miguel-negrao/Cluster
He prefers the Leiden WFS system to the Berlin one.

Julian Rohrhuber: <<> and <>> : Two Simple Operators for Composing Processes at Runtime

Still Live blogging the SC symposium
A proposal for a new thing, which everybody else here seems to already know about.

NamedControl

a = { |freq = 700, t_trig = 1.0| Decay.kr(t_trig) * Blip.ar(freq) * 0.1 }.play;

becomes

a = { Decay.kr(\trig.tr) * Blip.ar(\freq.kr(400)) * 0.1 }.play;
a.set(\trig, 1); // etc.

JITLib

Proxy stuff. (Man, I learned SC 3.0 and now there’s just all this extra stuff from the last 7 years and I should probably learn it.)

ProxySpace.push(s);
~out.play;
~out = {Dust.ar(5000 ! 2, 0.01) };
~out.fadeTime = 4

a = NodeProxy(s);
a.source =  {Dust.ar(5000 ! 2, 0.01) };

Ndef(\x, . . .)

(there are too many fucking syntaxes to do exactly the same thing. Why do we need three different ones? Why?!!)

Ndef(\x, { BPF.ar(Dust.ar(5000 ! 2, 0.01)) }).play;

Ndef(\x, { BPF.ar(Ndef.ar(\y), 2000, 0.1) }).play;
Ndef(\y, { Dust.ar(500) });

. . .

Ndef(\out) <<> Ndef(\k) <<> Ndef(\x)

does routing
NdefMixer(s) opens a GUI.
Ron Kuivila asks: this is mapping input. Notationally, you could pass the Ndef a symbol array. Answer: you could write map(map(Ndef(\out), \in, Ndef(\x) . . .
Ron says this is beautiful and great.
Ndef(\comb) <<>.x nil // adverb action: addresses the \x control
The reverse operator, <>>, just works from the other direction.
Ndefs can feed back into each other, but the feedback signal is delayed by one block size.
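For instance (my own minimal example, not from the talk), two Ndefs can be wired into each other with <<>, and JITLib inserts the one-block delay for you:

(
Ndef(\a, { SinOsc.ar(220 + (\in.ar(0 ! 2) * 200)) * 0.1 });
Ndef(\b, { SinOsc.ar(331 + (\in.ar(0 ! 2) * 200)) * 0.1 });
Ndef(\a) <<> Ndef(\b);   // a listens to b ...
Ndef(\b) <<> Ndef(\a);   // ... and b listens to a: a feedback loop, delayed by one block
Ndef(\a).play;
)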

Hanns Holger Rutz: ScalaCollider

Live blogging the sc symposium
What’s the difference between high-level and low-level?
Why should computer music have a specialised language?
In 2002, JMC rejected the GPL languages that he considered, because they didn’t have the features he needed. But OCaml, Dylan, GOO and Ruby seemed good candidates, which are OOP + FP. They have dynamic typing.
There are a lot of languages that talk to the SC server now. He has a table of several languages and the libraries which extend them to talk to SuperCollider.
Are they dynamically typed or static? Object oriented? Functional? Do the extension libraries handle UGen graphs? Musical scheduling? Do they have an interactive mode? (All do but Java and Processing.) A domain-specific GUI?
And now a slide of UGen graphs in a bunch of other languages. ScalaCollider is virtually identical to SuperCollider.
What’s the Scala language? Invented in 2003 by Martin Odersky, a scientist at EPFL in Switzerland. It has a diverse community of users. It’s a pragmatic language: Scala = scalable language. It draws from Haskell and OCaml, but has Java-like syntax. It runs on top of the JVM (or .NET), so it is interoperable with Java and is cross-platform. It is both OOP and FP.
Type at the prompt, “scala” and it opens an interpreter window.

def isPrime(n: Int) = (2 until n) forall (n % _ != 0)
isPrime: (n: Int)Boolean

If you type “isPrime(3.4)” you get a type error.

def test(n: Float) = isPrime(n)

Also causes a type error
There are also lazy types. Has different names for stuff than sc, but many of the same concepts.
Scala does not easily allow you to add methods to existing classes. You use a wrapper class and need to do explicit class conversions. However, there is a way to tell the interpreter that there’s a method to do the class conversion.
You want to pick a language that will still have a user base in 10 years. How to predict that? Fun tricks with statistics. Scala is less popular than Fortran or Forth. It’s very well designed, though. You can also poll communities on what they think about the language. Users find it expressive and good at concurrency; people like using it; it’s good for distributed computing, reusable code, etc. The downside is that there’s not a lot of stuff written in it.
http://github.com/Sciss/ScalaCollider . http://github.com/Sciss/ScalaColliderSwing
The Swing thing just opens a development environment, which doesn’t have a way to save documents. Really not yet ready for prime time.
Side-effect-free UGens are removed automatically from synth graphs.
ScalaDoc creates javadoc-like files describing APIs.
Now there’s some code with a lot of arrows.
The class Object has 278 methods, not even counting quarks. Subclasses get overwhelmed. Scala’s base object java.lang.Object has only 9 methods.
Scala has a form of multiple inheritance via traits.
(Ok, this talk is all about technical details. The gurus are starting to make decisions about SC4, which will probably include SuperNova server and might switch to Scala Collider and this talk is important for that. However, ScalaCollider is not yet fixed and may or may not be the future of SC, so it’s not at all clear that it’s worthwhile for average users to start learning this, unless, of course, you want to give feedback on the lang, which would make you a very useful part of the SC community. So if you want to help shape the future of SC, go for it. Otherwise, wait and see.)
Latency may be an issue, plus there are no realtime guarantees. In practice, this is OK. The server handles a lot of timing issues. The JIT might also cause latency; you might want to pre-load all the classes.
In conclusion, this might be the future. Sclang is kind of fragmented: classes can’t be made on the fly, some stuff is written in C, etc. In Scala, everything is written in Scala, no primitives, but it is still fast.

Thor Magnusson: ixi lang: A SuperCollider Parasite for Live Coding

Summer project: an Impromptu client for the SC server. Start the server, then fire up Impromptu, which is a live coding environment. Start its server and tell it to talk to the SC server. It’s a different way of making music. To stop a function, you re-define it to make errors.
Impromptu 2.5 is being released in a few days as will Thor’s library on the ixi website.
Now for the main presentation. He has a long-standing interest in making constrained systems, for example, using ixi quarks. These are very cool. He has very elaborate GUIs, modelling predator/prey relationships to control step sequencers. His research shows that people enjoy constraints as a way to explore content.
He’s showing a video taking the piss out of laptop performances, which is funny. How to deal with laptop music: VJing provides visuals. NIME – physical interface controllers. Or live coding. Otherwise, it’s people sitting behind laptops.
ixi lang is an interpreted language that can rewrite its own code in real time and has the power to access SC.
It takes a maximum of 5 seconds of coding to make noise. It’s easy for non-programmers to use and understandable for the audience. The system has constraints just as it has easy features.
Affordances and constraints are two sides of the same coin. “Affordance” is how something is perceived as being usable.
Composing an instrument has both affordances and constraints.
ixi lang live coding window. There are 3 modes.


agent1   -> xylo[1  5  3  2]

spaces are silences, numbers are notes. instrument is xylophone

scale minor
agent1   -> xylo[1  5  3  2] + 12

in minor an octave higher.
“xylo” is a synthdef name

SynthDef(\berlin, { . . . }).add;


scale minor
agent1   -> berlin[1  5  3  2] + 12/2

You can add any Pbind-ready SynthDef. Multiply and divide change the speed.
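(For the record, a hedged sketch of what a “Pbind-ready” SynthDef might look like; the \berlin name is from the talk, but the body is my own guess: the usual freq / amp / pan / gate arguments plus an envelope that frees the synth.)

(
SynthDef(\berlin, { |out = 0, freq = 440, amp = 0.1, pan = 0, gate = 1|
	var env = EnvGen.kr(Env.adsr(0.01, 0.1, 0.7, 0.3), gate, doneAction: 2);
	var sig = RLPF.ar(Pulse.ar(freq, 0.3), freq * 4, 0.3);
	Out.ar(out, Pan2.ar(sig * env * amp, pan));
}).add;
)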


agent1   -> xylo[1  5  3  2]
agent1))

increases amplitude of agent1

percussive mode

ringo -> |t b w b |

can do crazy pattern things
letters correspond to synthdefs, there is a default library

sos -> grill[2 3 5 3 ]

Using pitch shifted samples

Concrete mode

ss -> nully{ 1  3 4  6 6 7 8 0    }

0 is silence

tying it together

rit -> | t  t  t ttt  |
ss ->|ttt t t t     |
sso -> | t t t t    t   t|^482846

>shift ss 1

shake ss
up ss
yoyo ss 
doze ss

future 4:12 >> shake ss

group ringo -> rit ss sso

shake ringo

(um, wow. I think I will try to teach this, if I can get a handle on it fast enough.)

ss -> | o   x  o  x|
xxox -> | osdi f si b b i|!12

xxox >> reverb

mel -> wood[1 5 2 3 ]
xo -> glass[32 5 35 46 3] +12

xo >> distort >> techno

shake mel

snapshot -> sn1

snapshot sn1

future 3:4 >> snapshot

scalepush hungarianMinor


suicide 20:5

The suicide function gives the code an a-percent chance of crashing every b time units.
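(Not ixi lang internals, just the semantics restated in plain sclang; the helper name is mine. suicide 20:5 is roughly:)

(
~suicide = { |chance = 0.2, period = 5|
	fork {
		var dead = false;
		while { dead.not } {
			period.wait;                 // every 5 seconds...
			dead = chance.coin;          // ...a 20% chance of dying
		};
		"suicide!".postln;
		Server.default.freeAll;          // take everything down with it
	};
};
~suicide.(0.2, 5);
)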
User satisfaction survey results are very high. Some people found it too rigid and others thought it was too difficult. Survey respondents were about 1% of users.
www.ixi-audio.net
This makes live coding faster and more understandable. You can put regular SC code in the ixi lang docs. It’s a good educational tool and can be used by children. A successful experiment for a very high-level live coding project.
You can easily add audio plugins. The lang is very extendable.

Richard Hoadley: Implementation and Development of Interfaces for Music Generation and Performance through Analysis of Improvised Movement and Dance

Still liveblogging the sc symposium. This speaker is now my colleague at Anglia Ruskin. He also did a poster presentation on this at AES, iirc

Small devices, easily portable. Appearance and design affect how people interact. Dancers are not so different from regular people.
He makes little arduino-powered boxes with proximity detectors. This is not new tech, but is just gaining popularity due to low cost and ease of use.
He’s got a picture up called “gaggle”, which has a bunch of ultrasonic sensors. The day before the event at which it was demonstrated, the developers were asked if they wanted to collaborate with dancers. (It’s sort of theremin-esque. There was actually a theremin dance troupe, back in the day, and I wonder if their movements looked similar?) The dancers in the video were improvising and not choreographed. They found the device easy to improvise with. Entirely wireless access lets them move freely.
How do sounds map to those movements? How nice are the sounds for the interactors (the dancers)?
Now a video of somebody trying the thing out. (I can say from experience that the device is fun to play with).
He’s showing a picture of a larger version that cannot be packed on an airplane and plans to build even bigger versions. He’s also showing a version with knobs and buttons, and is uncertain whether those features are a good or bad idea.
He also has something where you touch wires, called “wired”. It measures human capacitance. You have to be grounded for it to work. (Is this connected electrically to a laptop?) (He says, “it’s very simple”, and then SuperCollider crashed at that instant.)
The ultrasound thing is called “gaggle” and he’s showing the SC code. The maximum range of the sensor is 3 metres. The GUI he wrote allows for calibration of the device: how far away is the user going to be? How dramatic will the response be to a given amount of movement?
You can use it to trigger a process when something is in range, so it doesn’t need to react dumbly. There is a calibration for “sudden”, which responds to fast, dramatic movements. (This is a really great example of how very much data you can get from a single sensor, using deltas and the like.)
Once you get the delta, average that.
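A hypothetical sketch of that idea in SC, assuming the raw distance reading is being written to control bus 0 (the bus number, thresholds and mapping are all made up, not his code):

(
{
	var raw    = In.kr(0);                       // raw distance, roughly 0..3 m
	var smooth = Lag.kr(raw, 0.2);               // de-jittered position
	var delta  = Slope.kr(smooth).abs;           // speed of movement
	var avg    = LagUD.kr(delta, 0.05, 1.5);     // averaged activity level
	var sudden = Trig.kr(delta > 2, 0.5);        // fires on fast, dramatic movements
	SinOsc.ar(200 + (smooth * 200), 0, avg.clip(0, 1) * 0.2)
		+ (Decay.kr(sudden, 0.3) * PinkNoise.ar(0.2));
}.play;
)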
Showing a video of dancers waving around podium things like you see in art museums.
Now a video of contact dancing with the podiums. There’s a guy with a laptop in the corner of the stage. It does seem to work well, although not as musically dramatically when the dancers do normal dancey stuff without waving their arms over the devices, which actually looks oddly worshipful in a worrying way.
Question: do dancers become players, like bassoonists or whatever? He thinks not because the interactivity is somewhat opaque. Also, violinists practice for years to control only a very few parameters, so it would take the dancers a long time to become players. He sees this as empowering dancers to further express themselves.
Dan Stowell wants to know what the presenter was doing on stage behind the dancers? He was altering the parameters with the GUI to calibrate to what the dancers are doing. A later version uses proximity sensors to control the calibration of other proximity sensors, instead of using the mouse.
Question: could calibration be automated? Probably, but it’s hard.

Daniel Mayer: miSCellaneous lib

still liveblogging the SC symposium
His libs: VarGui, a multi-slider GUI; HS (HelpSynth), HSPar and related.
LFO-like control of synths generated by Pbinds.
Can be discrete or continuous – a perceptual thing in the interval size.
Discrete control can be moved towards continuous by shortening the control interval.

Overview

Can do direct LFO control. Pbind-generated synths that read from or write to control busses.
Or you can do new values per event, which is language-only, or put synth values in a Pbind.

Pbind generated synths

Write a SynthDef that reads from a bus, write a synth that writes to the bus, make a bus, make a Pbind (a fuller sketch follows the code below).

Pbind(
 \instrument, \A1,
 \dur, 0.5,
 \pitchBus, c
).play;
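To make the bus-control recipe concrete, here is a hedged sketch of the manual setup (the SynthDef bodies and the \lfoWriter name are my own guesses, not Daniel’s code): an LFO synth writes to a control bus, a Pbind-ready SynthDef reads its pitch from that bus, and the Pbind above supplies the bus via \pitchBus.

(
c = Bus.control(s);                              // the control bus

SynthDef(\lfoWriter, { |out|                     // a synth that writes an LFO to the bus
	Out.kr(out, SinOsc.kr(0.3).exprange(300, 900));
}).add;

SynthDef(\A1, { |out = 0, pitchBus = 0, amp = 0.1|   // a synth that reads its pitch from the bus
	var freq = In.kr(pitchBus);
	var env  = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
	Out.ar(out, SinOsc.ar(freq) * env * amp);
}).add;
)

x = Synth(\lfoWriter, [\out, c]);                // start the writer; now play the Pbind above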

OK, with his lib: make a sequence of durations. It starts the synths and gets the values at those intervals, with a defined latency. The values are sent back to the language, which adds more latency. Then you have a bunch of values that you can use. If you play audio with it, there is yet another layer of latency.

h = HS(s, { /* ugen graph */ });

p = PHS(h, [], 0.15, [ /* usual Pbind def */ ]).play;

. . .
p.stop; // just stops the PHS
p.stop(true); // also stops the HS

or

// normal synth
..
.
.

(
p = PHS(h, [], 0.2, [ /* pbind list */ ]).play(c, q);
)

PHS is a PHelpSynth: *new(helpSynth, helpSynthArgs, dur1, pbindData1, ..., durN, pbindDataN).
PHSuse has a clock
PHSpar switches between two patterns.
(I do not understand why you would do this instead of just use a Pbind? Apparently, a this is widely used, so I assume there exists a compelling reason.)
download it from http://www.daniel-mayer.at
Ah, apparently, the advantage is that you can easily connect UGens to patterns as input sources, with s.getSharedControl(0).

Dan Stowell and Alex Shaw: SuperCollider and Android

Still live blogging the Sc symposium
their subtitle: “kickass sound, open platform.”
Android is an open platform for phones, curated by Google. It’s Linux with Java; not normal Linux, though. It’s NOT APPLE.
Phones are computers these days. They’re well-connected and have a million sensors, with microphones and speakers. Androids multitask, the platform is more open, and libraries and APKs are sharable.
The downside is that it’s less mature and has some performance issues with audio.
scsynth on Android: the audio engine can be put in all kinds of places, so the server has been ported. The lang has not yet been ported. So to use it, you write a Java app and use scsynth as an audio engine. You can control it remotely, or from another Android app; ScalaCollider, for example.
Alex is an Android developer. Every Android app has an “activity”, which is an app thingee on the desktop. There are also services: a service is like a daemon, deployed as part of an app, that persists in the background. An intent is a loosely-coupled message. AIDL is the Android Interface Definition Language, in which a service says what kinds of messages it understands. The OS will handle the binding.
Things you can do with SuperCollider on Android: write cool apps that do audio, making instruments, for example. He’s playing a demo of an app that says “satan” and is apparently addictive. You can write reactive music players (yay). Since you can multitask, you can keep running this as you text people or whatever.
What languages to use? sclang to pre-prepare synthdefs; OSC and Java for the UI.
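The sclang half of that is just writing synthdefs to disk ahead of time so they can be bundled with the app. A minimal sketch, where the def itself and the copy-into-assets step are my assumptions about the workflow:

(
SynthDef(\androidBeep, { |out = 0, freq = 440, amp = 0.2, gate = 1|
	var env = EnvGen.kr(Env.asr(0.01, 1, 0.3), gate, doneAction: 2);
	Out.ar(out, Pan2.ar(SinOsc.ar(freq) * env * amp));
}).writeDefFile;   // writes androidBeep.scsyndef to the default synthdef dir
)
// then copy the .scsyndef file into the Android project so scsynth can load it on the phone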
A quick demo! Create an activity in Eclipse!
Create a new project. Pick a target with a lower number for increased interoperability. Must create an activity to have a UI. SDK version 4. Associate the project with SuperCollider by telling it to use it as a library. There are some icon collisions, so we’ll use the SC ones. Now open the automatically generated file. Add an SCAudio object. When the activity is created, initialise the object.

 
public void onCreate(Bundle savedInstanceState) {
	. . .
	superCollider = new SCAudio("/data/data/com.hello.world/lib");
	superCollider.start();
	superCollider.sendMessage(OscMessage.createSynthMessage("default", 1000, 1, 0)); // default synth
	…
}

 . . .

@Override
public void onPause(){
 super.onPause();
superCollider.sendQuit();
}

Send it to the phone and holy crap that worked.
Beware of audio latency, around 50 milliseconds; and of multitasking.
Ron Kuivila wants to know if there are provisions for other kinds of hardware IO, kind of like the Arduino. Something called bluesmurf is a possible client.
Getting into the app store: just upload some stuff, fill out a form, and it’s there. No curation.

Tim Blechman: Parallelising SuperCollider

Still live blogging the SC symposium
Single processors are not getting faster, so most development is going towards multicore architectures. But most computer music systems are sequential.
How to parallelise? Pipelining! Split the algorithm into stages. This introduces delay as stuff goes from one processor to the other. It doesn’t scale well, and each stage would need to have around the same computational cost.
You could split blocks into smaller chunks. The pipeline must be filled and then emptied, which is a limit. Not all processors can be working all the time.
SuperCollider has special limitations in that OSC commands come at the control rate and the synth graph changes at that time. Thus no pipelining across control rate blocks. Also, there are small block sizes.
For automatic parallelisation, you have to do dependency analysis. However, there are implicit dependencies with busses. The synth engine doesn’t know which resources are accessed by a synth; this can even depend on other synths. Resources can be accessed at audio rate. It’s very hard to tell dependencies ahead of time, so automatic parallelisation for SuperCollider might be impossible. You can do it with Csound because its instrument graphs are way more limited and the compiler knows what resources each one will be accessing. They just duplicate stuff when it seems like it might be needed on both, and this results in almost no speedup.
The goals for SC are to not change the language and to be real-time safe. Pipelining is not going to work and automatic parallelisation is not feasible. So the solution is to not parallelise automatically and let the user sort it out: parallel groups.
Groups with no node ordering constraint, so they can be executed in parallel.
They’re easy to use and understand, and compatible with the existing group architecture. This doesn’t break existing code; you can mix parallel groups with non-parallel ones.
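Roughly (my example, not from the talk; ParGroup is the class name that later shipped for this, so treat it as an assumption in the context of this proposal): synths placed inside a parallel group have no ordering constraints among themselves.

(
var p = ParGroup.new;                    // like Group, but its children may run on any core
8.do { |i|
	Synth(\default, [\freq, 200 + (i * 60), \amp, 0.05], p);
};
// ordering relative to nodes outside the parallel group behaves like a normal group
)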
The problem is that the user needs to figure stuff out and make sure it’s correct. Each node has two dependency relations: there is a node before every parallel group and a node afterwards.
This is not always optimal. Satellite nodes can be set to run before or after another node, so 2 new add actions.
There is an example that shows how this is cool. It could be optimised, so that some nodes have higher precedence.

Semantics

Satellite nodes are ordered in relation to one other node.
Each node can have multiple satellite predecessors and satellite successors. They may have their own satellite nodes. They can be addressed by the parent group of their reference node. Their lifetime should relate to the lifetime of their reference node.
This is good because it increases the parallelism and is easier, but it is more complicated.
Supernova is a completely rewritten scsynth with a multiprocessor-aware synthesis engine. It has good support for parallel groups; support for satellite nodes is in progress. It loads only slightly patched UGens. Tested on Linux, with more than 20 concerts. It compiles on OS X, might work. We’ll see. (Linux is the future.)
Supernova is designed for low-latency real time. The dependency graph representation has higher overhead; there’s a delay of a few microseconds.
For resource consistency, spinlocks have been added. Reading the same resource from parallel synths is safe. Writing may be safe: Out.ar is safe, ReplaceOut.ar might not be. The infrastructure is already part of the svn trunk.
(I’m wondering if this makes writing UGens harder?)
A graph of benchmarks for supernova: it scales well. Now a graph of average-case speedup: with big synths the speedup is nearly 4.
Proposed extensions: parallel groups, satellite nodes. Supernova is cool.
There is an article about this on the interweb, part of his MA thesis.
Scott Wilson wants to know about dependencies in satellite nodes. All of them have dependencies. Also wants to know if you need parallel nodes if you have satellite nodes. Answer: you need both.

Nick Collins: Acousmatic

continuing live blogging the SC symposium
He’s written anti-aliasing oscillators: BlitB3Saw, a BLIT-derived sawtooth, twice as efficient as the current band-limited sawtooth. There’s a bunch of UGens in the pack. The delay lines are good, apparently.

Auditory Modelling plugin pack – Meddis models cochlear implants. (!)
Try out something called Impromptu, which is a good programming environment for audio-visual programming. You can re-write UGens on the fly. (!)

Kling Klang

(If Nick Collins ever decided to be an evil genius, the world would be in trouble)

{ SinOsc.ar * ClangUgen.ar(SoundIn.ar) }.play

The Clang Ugen is undefined. He’s got a thing that opens a C editor window. He can write the Ugen and then run it. Maybe, I think. His demo has just crashed.
Ok, so you can edit a C file and load it into SC without recompiling, etc. Useful for livecoding gigs, if you’re scarily smart, or for debugging sorts of things.

Auto acousmatic

Automatic generation of electroacoustic works. Integrate machine listening into composition process. Algorithmic processes are used by electroacoustic composers, so take that as far as possible. Also involves studying the design cycle of pieces.
the setup requires knowing the output number of channels the duration and some input samples.
In bottom-up construction, source files are analysed to find interesting bits; those parts are processed and then used again as input. The output files are scattered across the work. It uses onset detection, finding the dominant frequency, excluding silence, and other machine listening UGens.
Generative effect processing like granulations.
Top-down construction imposes musical form; there are cross-synthesis options for this. This needs to run in non-real time, since it takes a lot of processing. There’s a lot of server-to-language communication, currently done with Logger.
How to evaluate the output: tell people that it’s not machine-composed, play it for them, and then ask how they like it. It’s been entered into electroacoustic competitions; you need to know the normal probability of rejection. He normally gets rejected 36% of the time (he’s doing better than me).
He’s sending things he hasn’t listened to, to avoid cherry picking.
Example work: fibbermegibbet20
A self-analysing critic is a hard problem for machine listening
this is only a prototype. The real evil plan to put us all out of business is coming soon.
The example work is 55 seconds long, in ABA form. The program has rules for section overlap to create a sense of drama. It has a database of gestures. The rules are contained in a bunch of SC classes, based on his personal preferences. Will there be presets, i.e., sound like Birmingham? Maybe.
Scott Wilson is hoping this forces people to stop writing electroacoustic works. Phrased as “forces people to think about other things.” He sees it as intelligent batch processing.
The version he rendered during the talk is 60 seconds long, completely different than the other one and certainly adequate as an acousmatic work.
Will this be the end of acousmatic composing? We can only hope.

Live blogging the SuperCollider Symposium: Hannes Hoezl: Sounds, Spaces, Listening

Manifesta, the “European Nomad Art Biennale”, takes place in European non-capital cities every 2 years. The next is in Murcia, Spain, in 2010.
No. 7 was in 2008 in Italy, in 4 locations.
(This talk is having technical issues and it sounds like somebody is drilling the ceiling.)
The locations are along Hannibal’s route with the elephants. Napoleon went through there? It used to be part of the Austrian empire. The locals were not into Napoleon and launched a resistance against him. The “farmer’s army” defeated the French 3 times.
(I think this presentation might also be an artwork. I don’t understand what is going on.)
Every year, the locals light a fire in the shape of a cross on the mountain, commemorating their victories.
The passages were narrow and steep and the locals dropped stones on the army, engaging in “site-specific” tactics. One of the narrowest spots was Fortezza, which was also a site for Manifesta. There is a fortress there, built afterwards, that blocks the entire passage. There is now a lake beside it, created by Mussolini for hydroelectric power. The fortress takes up 1 square kilometre.
There is a very long subterranean tunnel connecting the 3 parts of the fort.
(He has now switched something off and the noise has greatly decreased)
The fortress was built after the 1809 shock, but nobody has ever attacked it. There was a military presence there until 2002; they used it to hold weapons. The border doesn’t need to be guarded anymore.
during ww2, it held the gold reserves from the Bank of Rome
The manifesta was the first major civilian use. None of the nearby villages had previously been allowed to access the space.
The other 3 Manifesta locations were real cities. Each had their own curatorial team. They collaborated on the fortress.
The fortress’ exhibition’s theme was imaginary scenarios, because that’s basically the story of the never-attacked fort.
The fortress has a bunch of rooms around the perimeter, with cannons in them, designed to get the smoke out very quickly.
We live our lives in highly designed spaces, where architects have made up a bunch of scenarios on how the space will be used and then design it to accommodate that purpose.
The exhibition was “immaterial”, using recordings, texts, and light.
There were 10 text contributors. A team did the readings and recordings. Poets, theatre writers, etc.
The sound installations were for active listening, movement, site specific.
He wanted to do small listening stations where a very few people can hear the text clearly, as there are unlikely to be crowds and the space was acoustically weird. The installations needed to have text intelligibility. They needed to be in English, Italian and German; thus there were 30 recordings.
The sound artist involved focusses on sound and space. The dramatic team focusses on the user experience design.
(Now he’s showing a video of setting up a megaphone in a cannon window. It is a consonant cannon: it filters the consonants of one of the texts and just plays the clicks. He was playing this behind him during the first part of the talk, which explains some of the strange noises. In one of the rooms, they buried the speakers in the dirt floor. In another room, they did a tin-can-telephone sort of thing with transducers attached to string. Another room has the speakers in the chairs. Another had transducers on hanging plexiglass. In the last one they had the sound along a corridor, where there was a speaker in every office, so the sound moved from one to the next.)