A useful script

The best way to remember to do something when you’re going to run some important program is to put it in the program itself. Or at the very least, put it in a script that you use to invoke the program.
I have a few things I need to remember for the performance I’m preparing for. One has to do with a projector. I’m using a stylus to draw cloud shapes on my screen, and one way I can do this is to mirror my screen to a projector so the audience can see my GUI. However, doing this usually changes the geometry of my laptop screen, so that instead of extending all the way to the edges, there are empty black bars on either side of the used portion of my display. That’s fine, except the stylus doesn’t know and doesn’t adjust. So to put the pointer at the far right edge of the drawn portion of the screen, I need to touch the far right edge of the physical screen, which puts over a centimetre between the stylus tip and the arrow pointer. Suboptimal!
Ideally, I’d like any change in screen geometry to trigger a script that changes the settings for the stylus (and I have ideas about how that may or may not work, using upstart, xrandr and xsetwacom), but in the absence of that, I just want to launch a manual calibration program. If I launch the settings panel, there’s a button on it that launches one. So the top part of my script checks whether the resolution is different from normal and launches the settings panel if it is.
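For the automatic version, something along these lines might be a starting point, though I haven’t tested it – the device name here is hypothetical (xsetwacom --list devices gives the real one), and the output name would come from xrandr:

STYLUS="Wacom ISDv4 E6 Pen stylus"   # hypothetical name; check xsetwacom --list devices
# map the stylus to just the active output, so its coordinates match the drawn area
xsetwacom --set "$STYLUS" MapToOutput LVDS1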
The next things I need to remember are audio related. I need to kill PulseAudio. If my soundcard (a Fast Track Ultra) is attached, I need to change the amplitude settings internally so it doesn’t send the input straight to the output, then start jack using it. If it’s not attached, I need to start jack using a default device. Then, because it’s useful, I should start up Jack Control, so I can do some routing should I need it. (Note: in Ubuntu 12.04, if you start qjackctl after starting jackd, it doesn’t work properly. This is fixed by 13.04.) Finally, I should see if SuperCollider is already running and, if not, start it.
That’s a bit too much to remember for a performance, so I wrote a script. The one thing I still need to remember with this script is that if I want to kill jack, it won’t die from Jack Control, so I’ll need to do a kill -9 from the prompt. Hopefully, this will not be an issue on stage.
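If it comes to that, the kill is a one-liner from the prompt (assuming jackd is the only jack process running):

kill -9 $(pidof jackd)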
This is my script:

#!/bin/bash


# first check the screen
# xrandr's Screen line looks like:
#   Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192

LINE=$(xrandr -q | grep Screen)
WIDTH=$(echo ${LINE} | awk '{ print $8 }')
HEIGHT=$(echo ${LINE} | awk '{ print $10 }' | awk -F"," '{ print $1 }')

# anything other than the native 1366x768 means a projector has changed the
# geometry, so launch the settings panel for a manual stylus calibration
if [[ ${WIDTH} != 1366 || ${HEIGHT} != 768 ]]
then
    gnome-control-center wacom
else
    echo normal resolution
fi


# now setup the audio

pulseaudio --kill

# is the ultra attached?
if aplay -l | grep -qi ultra
then
    echo ultra

    # adjust amplitude: send each digital (computer) channel only to its own
    # output and zero all the analogue inputs, so the card doesn't route the
    # inputs straight to the outputs
    for i in $(seq 8); do
        for j in $(seq 8); do
            if [ "$i" != "$j" ]; then
                amixer -c Ultra set "DIn$i - Out$j" 0% > /dev/null
            else
                amixer -c Ultra set "DIn$i - Out$j" 100% > /dev/null
            fi
            amixer -c Ultra set "AIn$i - Out$j" 0% > /dev/null
        done
    done

    #for i in $(seq 4); do
    #    amixer -c Ultra set "Effects return $i" 0% > /dev/null
    #done

    # start jack using the Ultra
    jackd -d alsa -d hw:Ultra &
else
    # start jack with default hardware
    jackd -d alsa -d hw:0 &
fi

sleep 2

# jack control
qjackctl &

sleep 1

# is supercollider running?
if ps aux | grep -vi grep | grep -q scide
then
    echo already running
else
    scide test.scd &
fi

Live code, code-based interfaces and live patching – theory and practice

Some theory

Not every use of code interaction on stage is an instance of live coding. When I first started working with SuperCollider a decade ago, I didn’t create GUIs. I started and stopped code on stage. I had comments in the code giving me instructions on how to do this. One piece instructed me to count to ten between evaluating code blocks. Another told me to take a deep breath.
Part of this was because I hadn’t yet learned how to invoke callback methods or use tasks. Some of it was to create a musical timing – a deep breath is not the same as a two second pause. This was undoubtedly using code interactions to make music, but in no sense were these programs examples of live coding. Once a block was started, no further intervention was possible aside from halting execution or turning down a fader to slowly mute the output. These pieces were live realisations of generative music, which means they had virtually no interactivity once started, whether by code or by other means.
There is no bright line separating code-based interfaces from live coding, but instead a continuum between pieces like the ones I used to write and blank slate live coding. The more interactive the code, the farther along this continuum a piece falls. Levels of interaction might include starting and stopping, changing parameters on the fly, and changing the logic or signal graph on the fly. Changing the logic or signal graph puts one closer to blank slate coding than just changing numbers on something while it plays.
This argument does imply a value judgement about authenticity; however, that is not my purpose. Different types of code interaction are better suited to different circumstances. A piece that is more live coded isn’t necessarily sonically or objectively better. However, this kind of value judgement is useful in applying the metaphor of live coding to other interactions.
I have been pondering for a while whether or not live synthesiser patching is an analogue form of live coding, a question first posed by Julian Rohrhuber (2011) on the live coding email list. On the one hand, the kind of analogue modules used for modular synthesisers were originally developed for analogue computers. The synthesiser itself is a general purpose tool for sound, although obviously limited to whatever modules are available. (Thus putting it someplace between a chainsaw and an idea. (TopLap 2010)) Both computer programs and live patches can quickly grow in complexity to where the performer can no longer comprehend exactly what’s happening. (Collins 2007)
On the other hand, there is no code. However, I’m not sure how much that matters. A Pd or Max patch created on the fly that creates a signal graph is clearly an example of live coding. If, for some reason, the patching language had hard limits on which unit generators were available and in what quantity, this would still count. Therefore the transition from virtual to physical seems small. Instead of focussing on the code itself, then, let’s look at the metaphor.
Knob twirling is an example of changing numbers on the fly. Modular synthesisers do contain logic in the form of gates and switches. This logic and the musical signal routing can be changed on the fly via re-patching. Therefore, a live patching performance that contained all of these elements would be an example of analogue live coding.

Gig Report

I very recently did some live patching at the Live Code Festival in Karlsruhe. Alas, this reasoning about what is or is not live coding only became clear to me while reflecting on my performance afterwards. This was the first time I patched with goals beyond just making nice sounds, which meant pushing against the places the sounds wanted to settle, and I realised on stage that I was inadequately prepared. Live coding and live patching share the problem of how to prepare for a show, something they have in common with other forms of improvised or partially improvised music.
I had a conversation with Scott Wilson about how to practise improvised music that has an agenda. I should have spent the few days before the show building patches that use gates to control timbral or graph changes, and I should have practised making graph changes in the middle of playing. Instead, I spent the days ahead wrestling with problems with Jack on Linux. I use SuperCollider to manage some panning and recording for me and was having tech problems with it. Mixing analogue and digital systems in this way exposes one to the greater inherent instability of computers. I could make my own stereo autopanner with some envelope followers, a comparator and a panner, so I’ll be looking into putting something together out of guitar pedals or seeing if there is an off-the-shelf solution available.
For this performance, I decided to colour code my cables, following the colour conventions of Sonology in The Hague: black for audio, blue for control voltages and red for triggers. This was so audience members with synthesiser knowledge might be able to at least partly decode my patches. However, it caused a few problems. Normally I play with cables around my neck, and I’d never before done anything live with colour-coded cables; this time, every time I looked down, I saw only red cables and never actually wanted one. For the audience, I tried to make up for the difficulty of seeing a distant synth by using a webcam to project live images of it, but the colour was lost in the webcam’s low resolution. People who tried to just look at the synth directly would have had trouble perceiving black cables on a black synth. If I do colour coding again, I need to swap the colours I use and not wear the cables around my neck. A better webcam might also help.
Aside from the low resolution, the webcam part was successful. I also set the program up so that if I pressed certain buttons on my MIDI device, slides of modules would be displayed. So when I patched an oscillator, I pushed the oscillator button on the MIDI controller and a labelled picture of an oscillator appeared in the upper left corner. I didn’t always remember to push the buttons, but the audience appreciated the slides, and I may extend this in future with more of my modules (I forgot to do the ring mod module) and also with more combinations, so I would have a slide of two oscillators showing FM and one of three showing chaos.
Sonically, the patching seems to have been a success, although it was not fun to do because I had an agenda I was pushing towards but had not rehearsed adequately. I want to spend a lot of time now working this out, getting it right and doing another show, but that was it for the moment. My next presentation will be all SuperCollider, and I need to work on that. I am thinking a lot, though, about what I might do for the Other Minds festival next spring. I wonder if live patching would be adequately ambitious for such a high profile gig….

Citations

Collins, Nick. “Live Coding Practice.” Proceedings of NIME 2007. [E-JOURNAL] Available at: <http://www.nime.org/2007/proceedings.php> [Accessed: 3 March 2012].
Rohrhuber, Julian. “[livecode] analogue live coding?” 19 February 2011. [Email to livecode list]. Available at: <http://lists.lurk.org/mailman/private/livecode/2011-February/001176.html> [Accessed: 1 March 2012].
TopLap. “ManifestoDraft.” 14 November 2010. TopLap. [ONLINE] Available at: <http://toplap.org/index.php/ManifestoDraft> [Accessed: 12 September 2011].

Republic

It’s time for everybody’s favourite collaborative real-time network live coding tool for SuperCollider.
Invented by PowerBooks UnPlugged – granular synthesis playing across a bunch of unplugged laptops.
Then some of them started Republic111, which is named after the number of the room where they taught their workshops.
Code reading is interesting in network music, partly because of stealing, but also to understand somebody else’s code quickly, or to actively understand it by changing it. Live coding is a public or collective thinking action.
If you evaluate code, it shows up in a history file and gets sent to everybody else in the Republic. You can stop the sound of everybody on the network. All the SynthDefs are saved. People play ‘really equally’ on everybody’s computer. Users don’t feel obligated to act, but rather to respond. Participants spend most of their time listening.
Republic is mainly one big class, which is a weakness; it should be broken up into smaller classes that can be used separately. Scott Wilson is working on a newer version, which is on GitHub. Look up ‘The Way Things May Go’ on Vimeo.
Graham and Jonas have made a system that shows a map of who is emitting which sound; you can click on a sound and get the Tdef that made it.
Scott is putting out a call for participation and discussion about how it should be.

David Ogborn: EspGrid

In Canada, laptop orchestras get tons of gigs.
Naive sync methods: redundant packet transmission – send the same value several times in a row. This actually increases the chance of collision, but probably one copy will get through. Or you can schedule further in advance and schedule larger chunks – send a measure instead of just a beat.
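The redundant transmission idea is simple enough to sketch in a few lines of shell, here using oscsend from liblo-tools – the port and the /beat address are made up for illustration and are not EspGrid’s actual messages:

BEAT=42
for n in 1 2 3; do
    # send the same beat value three times as insurance against packet loss;
    # the receiver just ignores duplicates
    oscsend localhost 7777 /beat i $BEAT
done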
Download it from esp.mcmasters.ca. Mac only.
Five design principles:

  • Immediacy – launch it and you’ve got stuff going right away
  • Decentralisation – everything is peer to peer
  • Neutrality – works with ChucK, SuperCollider, whatever
  • Hybridity – they can even use different software on the same computer at the same time
  • Extensibility – it can schedule arbitrary stuff

The grid has public and private parts. EspGrid communicates with other apps via localhost OSC. Your copy of SuperCollider does not talk to the larger network; EspGrid handles all that.
The “private protocol” is not OSC. It’s going to use a binary format for transmission. Interoperability is thus based only on client software, not on the middleware.
Because EspGrid talks OSC to its clients, it can run on a neighbour’s computer and send the OSC messages to Linux users or users of other unsupported OSes.
The program is largely meant to be run in the background. You can turn a beat on or off, and this is shared across the network. You can chat. You can share clipboards. Also, ChucK will dump stuff directly.
Arbitrary OSC messages will be echoed out, with a time stamp. You can schedule them for the future.
You can publish papers on this stuff or use it to test shit for papers – like swapping sync methods and testing which works best.
Reference Beacon does triangulation to figure out latencies.
He wants to add WAN stuff, but not change the UI, so the users won’t notice.

Questions

Have they considered client/server topology for time sync? No. A server is a point of failure.
Security implications? He has not considered the possibility of sending naughty messages or how to stop them.
Licence? Some open source one… maybe GPL2. It’s on Google Code.

Chad McKinney – Lich.js – A Networked Audio / Visual Live Coding Language

They started with SuperCollider and have gone on from there. He’s into recent developments in browser technologies.
He decided to write a language first as a way to start live coding.
Uses Web Audio and WebGL.
The language is GPL2 and is on GitHub.
If you’re on Chrome, go mess with http://www.chadmckinneyaudio.com/Lich.js/Lich.html

Battery dying

Alex McLean

He did command line scripts from 2001 to 2004, then started working in Perl, then Haskell.
slub – writing code to make music to drink beer to.
Feedback.pl – writing code to write code to make music to drink beer to
He read Laurie Spiegel’s paper on manipulations of musical patterns, so he got into pattern languages (e.g. HMSL, Common Music, SuperCollider).
Tidal is embedded in Haskell for pattern manipulation for music. Complexity through combination of simplicity.
Structures could be trees of structures…
So he moved to functions of time. Time is an integer. Give a Pattern a time and it gives you an integer and the periodicity. This is limited because time is an integer.
Now, he thinks of time as cyclic, with repetition. So make time a floating point instead of an int. But this makes lookups hard.
So he thought of having patterns be either a sequence of discrete stuff or a signal, which is indexed by a rational number and is non-discrete. However, mixing analogue and digital in one data type is a problem.
So he made separate types, but then this caused massive code inflation.
He’s gone back to one type again.
Haskell is intense…..
and REALLY concise

Questions

Does the system have state? No, which means he can’t even have random numbers.
Is time still a loop in the current version? Notionally, yes. But representationally it’s a number, just in the functions, so it’s what you make it.

Live Coding as Research

This is new stuff and it combines art and science. Plus you’ve got tons of outcomes including papers, performances, languages, theories, etc.
(thinking about music as research in terms of grant application criteria saps my will to live.)
You can do stuff based on perception – the speaker has used psychology studies as a basis for his work.
Gestalt psychology ideas he uses: grouping, continuation and closure.
He is only working on pitch and time.
Pseudo-jazz fusion can be expressed through surprisingly short Lisp expressions.
Gaussian probabilities handle proximity, some range constraints and some directions.
Ordering of lists leads to closure.
Iteration is repetition.
He wants to interact more with tech communities.

Questions

Scott wants all his numbers. He’s about to publish in the CMJ.
Nick wants to know why he picked Gaussian. It’s good enough and it’s succinct. He’s inspired by processes learned from modelling research.

Live coding in Mexico

Centro Multimedia is a space for arts research in new technologies. They have an audio workshop with a special interest in code and FLOSS (“software libre”).
The history of live code:

  • 3 concerts in 2006 by an experimental laptop band called mU.
  • Another concert in 2009
  • A telematic concert in 2009
  • A SuperCollider course since 2007 in the audio workshop, and also Fluxus since 2010 – this grew a code community
  • They had a collective live coding session in 2010 just after the first fluxus course

They had used Max. Later, SuperCollider changed everything because of the philosophy of open source. It was free and legal to share. They felt a sense of ownership, and it grew a community.
Since 2011, they’ve organised 21 live coding events. They do collaborations with other institutions in Mexico City. This is a local scene.
At the National Autonomous University, they did blank slate coding sessions where everybody had 9 minutes. This was especially beneficial for the participants’ coding practice.
There was a Vivo conference in 2012, which had more participation from overseas, longer time slots and some non-blank-slate code; it also caused an explosion in the community.
Their audiences are very diverse with a lot of new people coming in. They are receptive to new ideas.
They are now doing a series of live coding concerts that also mix practices – with dance, circuit bending, sound art, poetry, etc. There is now a website, hackpact.mx, which has a philosophy of live coding. These projects grow community. Sharing builds personal relationships and knowledge. People from many backgrounds are involved.

Questions

What else goes on at the centre? Lots and lots of new media stuff. They have artistic residency programs: one specific to Germany, one for Latin Americans. There is an electronic and video festival this year with an open call.
The centre is free, so anyone can come and learn without paying. This increases diversity.
Does anybody in the US or Canada pay attention to what’s going on in Mexico? Artists from Canada can come for residencies, so there is some collaboration there. There are some collaborations with the US through other institutions, but not this one.
Do they do any teaching of coding or live coding in schools? There is no official school of electronic music in Mexico, so teaching mostly happens through workshops. Mexicans who want to do electronic music degrees go abroad. There is not a strong programme for children or teenagers during school time; they do some workshops in summer. They may expand this, but need to do some work on pedagogy. They have also been running some workshops with indigenous people who have no background at all with computers. Sometimes they learn faster, because they don’t know it’s supposed to be difficult.
What’s the future of live coding in Mexico? More people, more groups. The future is bright across Mexico for live coding.

Live blogging Live.Code.Festival: Yiorgos Diapoulis – Live Hardware coding

He’s built some sort of binary adding machine that plays sounds based on the current number, which adds to the total every clock cycle. It creates patterns based on the total, not including overflow. The user provides a 3-bit word to the counter. The counter outputs a serial transmission to a decoder. Both of these are connected to an Arduino, which is connected to SuperCollider. The counter outputs 3 bits to the Arduino; the decoder does one bit?
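If I’ve understood the adder right, the counting logic amounts to something like this – a guess at the idea, not his implementation:

WORD=3    # the user’s 3-bit input word
TOTAL=0
while true; do
    TOTAL=$(( (TOTAL + WORD) % 8 ))   # add every clock tick, discard overflow
    echo $TOTAL                       # in the real system this drives the sound
    sleep 1                           # stand-in for the clock
done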

Battery dying!