Compiling SuperCollider on Ubuntu Studio 12.04

For various annoying reasons, I’ve just reformatted my hard drive and freshly installed the latest Ubuntu Studio LTS. As soon as I’d restored my home directory, I set to work compiling SuperCollider.
The README_Linux.txt file covers most of what you need to know, but here’s what I had to type to get going:

sudo apt-get install git cmake libsndfile1-dev libfftw3-dev  build-essential  libqt4-dev libqtwebkit-dev libasound2-dev libavahi-client-dev libicu-dev libreadline6-dev libxt-dev pkg-config subversion libcwiid1 libjack-jackd2-dev emacs

cd ~/Documents

git clone --recursive https://github.com/supercollider/supercollider.git

cd supercollider

mkdir build

cd build

cmake .. -DSUPERNOVA=OFF

make

You have to disable supernova because it requires a newer version of gcc than the one that ships with 12.04. If all that works, then you can install it:

sudo make install
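
Once that finishes, a quick smoke test from sclang (or the new IDE) confirms the install and that the server boots over JACK. Run these one line at a time:

s.boot;
{ SinOsc.ar(440, 0, 0.1).dup }.play; // a quiet sine wave in both channels
s.quit;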

I had something go wrong with my installed SC libs, but that’s a side issue.

Granular Performance Thingee

I’ve been working on a thing where I draw shapes and they turn into granular synthesis clouds. I have some ideas for what I want it to do, but one thing I’d really like to do is be able to record video of me playing it, via recording my desktop – with audio. Indeed, I need to sort this out very shortly, as I’d like to submit a proposal for the SuperCollider Symposium. (If anybody has advice on how to record this on Ubuntu, please do leave a comment.)

As it is, all I've got is a screenshot taken after drawing and an audio file of what it sounded like whilst playing.

Although this has quite a few bugs, some people have expressed some interest in the system, so I'm making the source code available. It's written in SuperCollider. This is nowhere near a release version and the code is ugly, so be forewarned, but it mostly works. It relies on the Conductor quark by Ron Kuivila. There are two files. Put Cloud.sc in your Extensions folder. Then open up test.scd in the lovely new SC 3.6 IDE (it will probably work in earlier versions too). Select the whole file and evaluate it.
One of the windows that opens is a controller with a play and a stop button. One is a large black window. And one is a bunch of blank buttons. I have no idea why they don't have labels. Draw in the black window using click and drag (or a stylus if you're lucky). Press play in the controller window. When the cursor gets to the cloud, you can hopefully hear it playing. There are some sliders in that window to change the speed of the cursor; you can adjust them while it is playing. You can also draw new shapes while it is playing. The blank buttons have sound parameters attached to them, so if you press one, the next cloud you draw will have that button's parameters. (The buttons are supposed to say things like 'Short Sine Grains' or 'Long Saw Grains'.) If you want to modify a cloud, you can right click on it to get a popup window that changes some of the sound settings.
You can't save your work, but the server window has a record button on it. If you press that, then play your drawing, and then hit the 'stop recording' button in the server window, you'll have recorded the audio in real time.
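
For what it's worth, the same thing can be done from code using the standard server recording methods (the output path here is just an example):

s.prepareForRecord("~/cloud-take.aiff".standardizePath);
s.record;         // start capturing the server's output to disk
// ... press play and perform the drawing ...
s.stopRecording;  // close the file
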
I had the idea that I could draw on this with my stylus while on stage in real time, but when I plugged my computer into the projector, it changed my screen parameters and my stylus calibration failed. I'm sure there is a workaround, which, ideally, I'd like to find by next week.

LiveBlogging: Modality – modal control in SuperCollider

by many people

Modality is a loose collaboration to make a toolkit to hook up controllers to SC. It does mapping, including some complex stuff and some on-the-fly stuff.

Marije spoke a bit about how they began collaborating.

Concept – support many devices over many protocols. Make a common interface. Easily remap.

Devices

They currently support MIDI and HID. The common interface is MKtl. It provides a system to process the data. They have templates for common ways of processing. Same interface for MKtl and MDispatch. (They may move to FRP – I don't know what that is.)

Ktl quark is out of date.

(I think I might be interested in contributing to this project – or at least provide templates for stuff)

Different protocols have different transport mechanisms. Things vary by OS. Different controllers have different semantics.

A general solution is not trivial.

Scaling is different on different OSes. Names of devices may have variations. MIDI has some device name issues: real MIDI devices (non-USB) will not report their names, but show up as MIDI ports. Similar issues will arise with OSC or SerialPort.

The device description index is an identity dictionary. It’s got some NanoKontrol stuff in it. I am definitely interested in this…

They’ve got some templates, but it’s still a bit vapourware.

For every button or input on your device, they define what it is, where it is, etc.  This is good stuff.  You can also set the I/O type.

Device descriptions have names, specifications, platform differences, and hierarchical naming (for use in pattern-matching). You can programmatically fill in the description.

nanoKontrol, Gamepad, DanceMat, a bunch of things.
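
Purely as an illustration of the kind of information a description holds per element (the real Modality templates have their own keys and layout), an entry might look something like this:

(
~nanoKontrolSketch = IdentityDictionary[
	\protocol -> \midi,
	\device   -> "nanoKONTROL",
	\elements -> [
		(key: \sl1, type: \slider, io: \in, midiType: \cc, ccNum: 2),
		(key: \kn1, type: \knob,   io: \in, midiType: \cc, ccNum: 14),
		(key: \bt1, type: \button, io: \in, midiType: \cc, ccNum: 23)
	]
];
)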

Events and signals

Functional reactive processing. Events, data flow, change propagation. FRP – functional reactive programming.

These are functions without side effects until you get to the output phase.

In the FP Quark – functional programming Quark.

Events are encoded in an event stream. An EventSource with a do method adds a side effect. When something happens (is “fired”), do the do. Only event sources can be fired.

The network starts with an event source.

Signals are similar but have state? You can ask for the value and change it.

To create the network use combinators.

inject has state internally.

Dynamic Event Switching limits an event depending on a selector. This is kind of like the gate thing in Max.

With Modality, every control has elements, every element has a signal and a source. Controls have keys.

You can combine values, attach stuff to knob changes. Easy to attach event streams to functions.

This is complex to describe, but works intuitively in practice. You can do deltas, accumulators, etc.
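
As a very rough sketch of the idea (the names here follow the talk – EventSource, do, fire – but treat the exact FP quark API as an assumption on my part):

(
var knob = EventSource();
knob.do { |v| "knob moved to %".format(v).postln }; // side effect at the edge of the network
knob.fire(0.3); // something happened: the do gets run
knob.fire(0.5);
)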

Closing remarks

This is on GitHub, but it is not yet released. It depends on the FP quark.

Needs GUI replacements. Needs a backend for OSC devices.

Needs some hacking in the SC source.

Questions

  • Would you be interested in doing the descriptors in JSON, so it can be used by non-SC guys? Yeah, why not.  This is a good plan, even.

Liveblogging the SC Symposium: Overtone Library

Collaborative programmable music. It runs in Clojure, a dialect of Lisp that runs in the JVM. It's got concurrency stuff. It's programmable.

It deals with the SC server. This sort of looks like it's running in Emacs…

All SC UGens are available. He built a bunch of metadata for this, a lot like the SC classes for the UGens. There is in-line documentation, which is nice. The node tree shows all currently running UGens.

MIDI events are received as events and can be used by any function. Wiggle your nano controller. MIDI comes with the JVM, so all Java libraries are supported. OSC support. Serial support.

Synth code and musical expression code can be written in the same language. Specify phrases in a score, concat them. The language is relatively readable, as far as Lisp goes. Most things are immutable, which is good for concurrency. Too many variables can confuse the programmer.

He's using a monome. Every button press calls a function, which gets the X,Y coordinate, whether it's pressed or released, and a history of all other button presses.

Now he's doing some monome-controlled dubstep.

C-Gens are re-usable UGen trees, possibly a bit like SynthDefs. It can do groups also.

This can also use Processing.org stuff, because it's got Java. OpenGL graphics are also supported. They can hook into any UGen.

Anything can be glued together.

This is kind of cool, but you need to deal with both Java and Lisp.

Questions

  • Collaboration?  It helps you deal with shared state, without blocking or locking.

LiveBlogging SC: Mx

by Chris Satinger (aka Felix Crucial)

Mx is a tool for connecting objects together: audio, control, MIDI, etc.

Anything that plays on a bus can go in, and it can be put on a mixer.

This mixer is a GUI thing. You can use it just to glue on things like fadeouts or amplitude control.

Just write a descriptor file.

The system is not the GUI; it's the patching framework.

You can patch SynthDefs together and edit the SynthDefs on the fly.

This patches things a wee bit like PD.

It checks for bad values and prevents explosions.

There is no timeline system. It's a hosting system and only manages connections and starts and stops. You can put in other timelines.

It uses environment variables. ~this is the unit.

~this.sched(32, { … }, { … })

You can put documents in the Mx. Those can change the Mx as it runs, so it’s all very self-modifying. (When I was an undergrad, they told me this was naughty, but like many other naughty things, it can be very cool.)

Things have outlets and inlets that you can connect.   There is apparently a querying system which we will learn about.

He gets good music out of the system despite having no idea what's going on a lot of the time.

Dragging cables is fun for a while, but then…

Questions

  • Adaptors? They describe what an object is and describe the inlets and outlets. There's also a system for announcements. Cable strategies also define behaviours.

Liveblogging SC: live coding with assembler

Dave – 

Esoteric programming languages are an interesting thing we might care about.

CPUs in Minecraft – you can see the processing.

Space invaders assembler with lines showing the order of execution.

Very slow execution can show what’s going on. This can be sonified.

 Till – 

BetaBlocker is a quark in sc3-plugins

(Talk to him if you want to go work in Helsinki.)

BBlocker never crashes, but it  might not do anything.  It has a stack and a heap and a program counter.

This is like Dave’s grid on the DS, where it runs in an infinite loop.

UGens

DetaBlockerBuf is a demand-rate UGen, so you can do weird computations in your UGen? It does a programming step every time it gets triggered.

The programs are stored in buffers. You can do random ones.

There is also a visual thingee.

BBlockerBuf exposes the stack and the program counter.

BBlockerProgram holds a beta blocker program for the assembler. 

You can create a program with the assembler code. You can play the program.

BetaBlockerProgram([NOP, POP, ADD]) etc

Tom Hall – 

John Cage would be 100 this year.

A metaphorically digital, constrained, sonic system. An invitation to listen

Questions

  • Is the heap a wave table? No, the output of the program is the sound.
  • Is it a coincidence that it sounds like putting an induction coil on a laptop? Um, maybe. He says it sounds very 8-bit-y. Maybe because it's 8-bit.
  • Is it easy to write logical-seeming programs, or are they mostly random? It is possible to write things that make sense. The fun of it is the weirdness and things getting trashed by accident. Dave is doing genetic programming with a system like this.
  • The output is one byte at a time? No, each step does something and the output is something I didn’t understand.
  • Graphics question? Not Till’s field.

I think this could be really useful for students or teenagers who are sort of interested in programming.

LiveBlogging the SC Symposium: Keynote – Takeko Akamatsu

Using SC since 2000.

Her main project is Craftwife. (All members are housewives, she says.) It has been going since 2008. There are 5 members now. They are between pop and art culture.

She started initially doing demos of Remkon, an iOS OSC app.  How to make this popular? 

  • Borrow the image of something already famous – Kraftwerk.
  • What is Originality? – SC patterns
  • Crash of music industry – live to record, record to live. Craftwife should be live only

Influenced by “The Work of Art in the Age of Mechanical Reproduction”.

She makes extensive use of PatternProxies

She also works with Craftwife + Kaseo+. Kaseo+ is a circuit bender. She controls strobe lights, analogue synthesisers, etc.

SuperCollider.jp

SC in Japan. They have a meetup in Tokyo. She posts on Twitter. She does workshops.

During her show in the Hague in 2007, she got frustrated and smashed her computer. And then quit making computer music for a year and grew vegetables.

She held a workshop at a place called the WombLounge.  Not everyone was a musician. She covered interaction between many environments.

SuperColliderSpeedCodingShow

She will give people a theme and five minutes and they have to make a sound.

4 people are quickly coding something on the theme of spring.

SuperCollider.future

She wants the book as an eBook in Japanese.

SuperCollider.cycling

She has attached a sensor to her exercise bike and uses this during her workout routine.

She’s tired of loud sounds. And sound systems are annoying.

She played a video of JMC saying what he wants for SC4. It's not client-server and it's a lot smaller.

Liveblogging the SuperCollider Symposium: SC AU UI

by Jan Trüzschler and Zlatko Brackski

This is the SuperCollider Audio Unit User Interface Library, which enables the creation of custom user interfaces for Audio Units built in SC.

You can use AU stuff in Live or Logic, and having a nice GUI can enhance the user experience. Mapping controls can increase the complexity possible with the AU library.

This is Mac-only, as it uses Objective-C.

The interface has some grey boxes and is editable. 

This is not yet added to the main SCAU library, as it needs to be merged with the SCAU lib. The UI library needs some work, and there needs to be some documentation.

Examples

This would be cool, but the GUI is really obtuse. 

You can download this stuff from BCU via TEE DMT. Or this will be released in a more normal way.

Questions

  • Where is the lovely GUI coming from? Objective-C, so you can't do your own version in SuperCollider.
  • Why is this a one-time library install rather than packaged in the component? Jan thought it would be easier to do an installer.  They’re not difficult to distribute.
  • Can the AUUI controller thing use sidechains? Not yet.

Live Blogging the SuperCollider Symposium: Freesound Quark

By Gerard Roma

It uses the Freesound website, www.freesound.org. The sounds are Creative Commons licensed. The website has more than 150,000 sounds from around 4,000 users. Most users only download sounds. All sounds are moderated – listened to by a human.

I'm always charmed when a presenter shows a SuperCollider window rather than using a slide programme. The syntax highlighting of their talk notes is especially good.

Google gave them a grant and they re-wrote the site.  They have a feature extraction library to analyse the sounds.

There is a new Freesound quark based on their API. The quark will give you the sound, the sound's preview, the tags, the spectrogram, and the signal descriptors from Freesound's feature extraction.

You need to get an API key to use the quark. The quark will search stuff for you according to filters. You can find a sound that’s glitchy with a particular duration.  You can search by similarity as well.

The analysis frames of the sound are kept in a separate file, but can be loaded into an IdentityDictionary.

This quark could be really interesting if you want to do stuff with Freesound: you don't need to do your own MIR, and you might be able to make cool pieces in real time.

Questions

  • Are people doing cool things with this outside of SuperCollider?  He doesn’t know.
  • Will the API upload to freesound? No.  The API needs some more authentication stuff put in. Also the moderation creates a delay.
  • Zlatko wants to know about how they know if sounds are copyrighted.  The moderators try to figure it out and respond to complaints.
  • Can the same API key be used across multiple computers? Yes.
  • Does the metadata include the licence terms and the user who uploaded it? Yes
  • Is there a GUI? No, this is a new quark, not the old one.

Dissertation Draft: BiLE Tech

In January 2011, five of my colleagues in BEAST and I founded BiLE, the Birmingham Laptop Ensemble. All of the founding members are electroacoustic composers, most of whom have at least some experience with an audio programming language, either SuperCollider or Max. We decided that our sound would be strongest if every player took responsibility for their own sound and did his or her own audio programming. This is similar to the model used by the Huddersfield Experimental Laptop Orchestra (HELO), who describe their approach as a “Do-It-Yourself (DIY) laptop instrument design paradigm.” (Hewitt p 1 http://helo.ablelemon.co.uk/lib/exe/fetch.php/materials/helo-laptop-ensemble-incubator.pdf) Hewitt et al write that they “[embrace] a lack of hardware uniformity as a strength” and imply that their software diversity is similarly a strength, granting them greater musical (rather than technical) focus. (ibid) BiLE started with similar goals – focus on the music and empower the user – and has had similar positive results.

My inspiration, however, was largely drawn from The Hub, the first laptop band, some members of which were my teachers at Mills College in Oakland California. I saw them perform in the mid 1990s, while I was still an undergrad and had an opportunity then to speak with them about their music. I remember John Bischoff telling me that they did their own sound creation patches, although for complicated network infrastructure, like the Points of Presence Concert in 1987, Chris Brown wrote the networking code. (Cite comments from class?)

One of the first pieces in BiLE's repertoire was a Hub piece, Stucknote by Scott Gresham-Lancaster. This piece not only requires every user to create their own sound, but also has several network interactions, including a shared stopwatch, sending chat messages and the sharing of gestural data for every sound. In Bischoff and Brown's paper, the score for Stucknote is described as follows:

“Stuck Note” was designed to be easy to implement for everyone, and became a favorite of the late Hub repertoire. The basic idea was that every player can only play one “note”, meaning one continuous sound, at a time. There are only two allowable controls for changing that sound as it plays: a volume control, and an “x-factor”, which is a controller that in some way changes the timbral character or continuity of the instrument. Every player’s two controls are always available to be played remotely by any other player in the group. Players would send streams of MIDI controller messages through the hub to other players’ computer synthesizers, taking over their sounds with two simple control streams. Like in “Wheelies”, this created an ensemble situation in which all players are together shaping the whole sound of the group. An interesting social and sonic situation developed when more than one player would contest over the same controller, resulting in rapid fluctuations between the values of parameters sent by each. The sound of “Stuck Note” was a large complex drone that evolved gradually, even though it was woven from individual strands of sound that might be changing in character very rapidly. (http://crossfade.walkerart.org/brownbischoff/hub_texts/stucknote.html)

Because BiLE was a mostly inexperienced group, even the “easy to implement for everyone” Stucknote presented some serious technical hurdles. We were all able to create the sounds needed for the piece, but the networking required was a challenge. Because we have software diversity, there was no pre-existing SuperCollider Quark or MAX external to solve our networking problems. Instead, we decided to use the more generic music networking protocol Open Sound Control (OSC). I created a template for our OSC messages. In addition to the gestural data for amplitude and x-factor, specified in the score, I thought there was a lot of potential for remote method invocation and wanted a structure that could work with live coding, should that situation ever arise. I wrote a white paper (see attached) which specifies message formatting and messages for users to identify themselves on the network and advertise remotely invokable functions and shared data.

When a user first joins the network, she advertises her existence with her username, her IP address and the port she is using. Then, she asks for other users to identify themselves, so they broadcast the same kind of message. Thus, every user should be aware of every other user. However, there is currently no structure for users to quit the network. There is an assumption, instead, that the network only lasts as long as each piece. SuperCollider users, for example, tend to re-compile between pieces.
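
As a hypothetical sketch of that join handshake in plain SuperCollider (the '/bile/...' message names, the placeholder username and the argument order are my own illustration, not the actual format from the white paper):

(
var broadcast = NetAddr("255.255.255.255", 57120);
var players = IdentityDictionary.new;
NetAddr.broadcastFlag = true; // allow sending to the broadcast address

// remember anyone who announces themselves; their address arrives with the message
OSCdef(\bileHello, { |msg, time, addr|
	players.put(msg[1].asSymbol, addr);
	"new player: % at %".format(msg[1], addr).postln;
}, '/bile/hello');

// answer requests for identification by re-announcing ourselves
OSCdef(\bileWho, { |msg, time, addr|
	broadcast.sendMsg('/bile/hello', "player1", NetAddr.langPort);
}, '/bile/whoIsThere');

// on joining: announce yourself, then ask everyone else to do the same
broadcast.sendMsg('/bile/hello', "player1", NetAddr.langPort);
broadcast.sendMsg('/bile/whoIsThere');
)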

Users can also register a function on the network, specifying an OSC message that will invoke it. They advertise these functions to other users. In addition, they can share data with the network. For example, with Stucknote, everyone is sharing amplitude values such that they are controllable by anyone, including two people at the same time. The person who is using the amplitude data to control sound can be thought of as the owner of the data; however, they or anyone else can broadcast a new value for their amplitude. Typically, this kind of shared data is gestural and used to control sound creation directly. There may be cases where different users are in disagreement about the current value, or packets may get lost. This does not tend to cause a problem: with gestural data, not every packet is important and packet loss is not a serious issue.

When a user puts shared data on the network, she also advertises it. Users can request to be told of all advertised data and functions. Typically, a user would request functions and shared data after asking for ids, upon joining the network. She may ask again at any time. Interested users can register as listeners of shared data. The possibility exists (currently unused) for the owner of the data to send its value out only to registered users instead of to the network as a whole.

In order to implement the network protocol, I created a SuperCollider class called NetAPI (see attached code and help file). It handles OSC communications and the infrastructure of advertising and requesting ids, shared functions and shared data. In order to handle notifications for shared data changes, I wrote a class called SharedResource. When writing the code for Stucknote, I had problems with infinite loops with change notifications. The SharedResource class has listeners and actions, but the value setting method also takes an additional argument specifying what is setting it. The setting object will not have it’s action called. So, for example, if the change came from the GUI, the SharedResource will notify all listeners except for the GUI. When SharedResources “mount” the NetAPI class, they become shared gestural data, as described above.