Collaborative Live Coding via Jitsi Meet for Linux Users

Jitsi Meet works best in Chromium, so these instructions are for that browser. They also assume that your live coding language uses the JACK audio server.

Jack and Pulse

The first thing you need to do is get PulseAudio and JACK running at the same time and talking to each other. I was able to solve this in QjackCtl: in Setup, under the Options tab, go to “Execute script after Startup”. Paste in “pactl load-module module-jack-sink channels=2; pactl load-module module-jack-source channels=2; pacmd set-default-sink jack_out” without the quotes.
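For copy-paste convenience, here is that startup script as separate commands (jack_out is the sink name that module-jack-sink creates by default):

```shell
# Run once JACK is up (QjackCtl: Setup > Options > Execute script after Startup)
pactl load-module module-jack-sink channels=2      # Pulse apps' output -> JACK
pactl load-module module-jack-source channels=2    # JACK -> Pulse apps' input
pacmd set-default-sink jack_out                    # route system audio to JACK
```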

You can test whether both are running at once by starting JACK and then playing audio from a normal system program like your web browser. If you hear sound, it worked.

More information about getting both running is here.

Jack Audio – Pulse Sink

Make Connections

You get two new Pulse devices: a source and a sink. The source is going to be used to send audio into Jitsi and the sink will be used to get audio out.

JACK with the two Pulse devices and the system in/out

Start your live coding language’s audio server (i.e. boot SuperCollider). Go back to QjackCtl. Click on Connections and go to the Audio tab. Make a connection from your live coding language’s output to the PulseAudio JACK Source-01 input. Do this by clicking on the language in the left column and Source-01 in the right column so both are highlighted, then click the “Connect” button on the lower left.

Disconnect the system output from that Source’s input if you want to turn off your microphone. Do this by clicking on system in the left column and Source-01 in the right column and clicking the “Disconnect” button.
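The same patching can also be done from a terminal with JACK’s command-line tools, which is handy if you want to script it. The port names below are a sketch – run jack_lsp first, because the actual names depend on your system:

```shell
# List all JACK ports so you can see the real names on your machine
jack_lsp

# Hypothetical example: route SuperCollider into the source Jitsi reads from
jack_connect "SuperCollider:out_1" "PulseAudio JACK Source-01:front-left"
jack_connect "SuperCollider:out_2" "PulseAudio JACK Source-01:front-right"

# Turn off your microphone by unpatching the system capture ports
jack_disconnect "system:capture_1" "PulseAudio JACK Source-01:front-left"
jack_disconnect "system:capture_2" "PulseAudio JACK Source-01:front-right"
```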

Everything connected correctly

Chromium Settings

(These will also work for Chrome.)

First, open your Jitsi Meet connection. If you are using the server at meet.jit.si, you can skip the next step.

For most other Jitsi servers: in Chromium, go to chrome://settings/content/microphone and change the mic input to PulseAudio JACK Source-01.

Chromium Settings

Jitsi Settings

As we’re live coding, you’ll want to share your screen. Mouse over the Jitsi window so the icons at the bottom appear. The one in the lower left corner looks like a screen. Click on it.

The farthest left is the screen

It lets you pick a video source. If you’re on Wayland, as many linux users now are, you may not be able to share your entire screen, but you should be able to share a single window. If you don’t see your live coding language listed as a window, make sure it and Chromium are on the same virtual desktop.

Share Your Screen

Click the screen icon again to switch back to your webcam.

Fancy New Options

If you’re on a shiny new version of Jitsi, such as the one at meet.jit.si, you’ll see little carets by the mic and video icons in the centre bottom of your browser window.

New Features!!

These allow you to pick your audio source without having to go into the Chrome/Chromium settings. If you have more than one webcam, you can also pick which one you want to use there, without having to go into the browser settings for that either.

Be Careful of Levels!

Jitsi was made assuming that more or less one person would be talking at a time, so multiple streams at full volume can distort. Make sure to keep your output low enough to leave room for your collaborators. Your system volume controls will not affect what Jitsi sends, so be sure to set the levels in your live coding language IDE.

Also be aware that while the compression sounds good for free improvisation on acoustic instruments, the transformations on digital audio will seem more pronounced. Go with it. This is your aesthetic now.

Scores for Quarantine 2: Homage to Norman Rockwell

For four or more players.

Score

Players assign themselves positions in a virtual circle.

Using two different communications devices, players make an audio connection to the players on their left and right, but no other players.

Improvise in this trio.

Hardware

One possible setup for this might be to use two phones, with one earbud from each.

Using a mixer would also be possible, but headphones are still advised, to prevent sounds from the distant players reaching each other.

Software

Just using a real telephone is fine, but any audio software will work.

Documentation

If everyone is using two mobile phones, recordings made by call-recording software on every single one of them could potentially be mixed together into a larger piece, but it may be that this piece is only meaningful to participants.

This score is Creative Commons ShareAlike

Have you tried playing it? How did it go? Were you able to record it?

Telematic Performance and e-learning

I’ve put some resources up for my students and I’m going to copy them here in case they’re of wider interest. I’ve made instructional videos for using some of the tools.

Online Meetings / Online Jams

  • Jitsi Meet – Doesn’t spy on you or sell your data. Can be used on a mobile device with a free app or accessed via a web browser on your computer. Users without either of these can call in using local numbers in several countries. Can record to Dropbox or stream to YouTube. Works best with Chromium/Chrome. Some people have good luck with Firefox. Safari has poor results.

Telematic Performance Software and Platforms

  • OBS Studio – Stream audio and video and/or record your desktop. (How to use on Mac.)
  • UpStage – Cyberformance platform, mostly used by artists.
  • LNX Studio – Collaborative platform for making popular music across a network. Mac only. Last updated in 2016, so it may not work with the newest Macs.
  • Soundflower – Zero-latency audio routing for Mac. (Use it to get audio to and from Jitsi Meet and OBS.)
  • BlackHole – Even more zero-latency audio routing for Mac. (See above.)

Video Tutorials

Made by me. My students like videos. I’ll post text here later. All of these are for Mac.

Scores for Quarantine 1: Jitsi Solos

Score

Players connect to Jitsi Meet.

Everyone plays background textures.

When the textures have gone on long enough, take a solo.

When the solo has gone on long enough, stop.

If anyone starts soloing while you are soloing, stop your solo immediately.

Hardware

Players must have a phone, a tablet, or a laptop.

Headphones are recommended.

Software

Chromium browser (or Chrome), if the players are using laptops.

Players on tablets or smartphones can use the free Jitsi software.

Players can also just dial in to the phone number provided by the Jitsi Meet server.

Documentation

Record to Dropbox or stream to YouTube via the links provided by Jitsi Meet.

This score is Creative Commons ShareAlike

Please let me know if you tried playing it. How did it go? Send me a link to the recording?

Publishing Live Notation

My piece Immrama is a live notation piece. A python script generates image files as the performance is happening, which are put on a web page. Performers connect via any wifi device with a web browser to see the notation. It uses really simple technologies, so nearly any device should work. A Newton won’t (I made enquiries) but an old Blackberry will.
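The moving parts are simple enough to sketch in a couple of lines of shell. Everything here – the script name, the output path, the port – is a made-up stand-in for the real piece, but it shows the architecture: one process writes images, another serves them:

```shell
# Hypothetical sketch: regenerate the notation image every 30 seconds...
mkdir -p scores
( while true; do python3 make_notation.py scores/current.png; sleep 30; done ) &

# ...while any simple web server makes the directory visible to the performers
python3 -m http.server 8080 --directory scores
```

Performers then just point any browser at the server’s address on port 8080.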
Setting it up requires python and a web server and a lot of faff. It could be packaged into a mac app, but I’m working on linux and it seems like more and more people in the arts are turning to windows, as Apple increasingly ignores their former core audience of artists and designers. It runs fine on my laptop, of course, but I don’t want to have to provide that to anybody who wants to do the piece. Nor do I want to force ensembles to have IT people on hand. Fortunately, I think I’ve stumbled on how to package this for the masses.
I’m working right now to get it all running on a Raspberry Pi. This is a tiny, cheap computer. Instead of having a hard drive, it uses SD cards. This means that I can set everything up to run my piece, put it all on an SD card, and then anybody can put that SD card into their Raspberry Pi and the piece will be ready to go! …In principle, at least.
This piece needs wifi, which does not come with the Pi. Pi owners who want wireless networking get their wifi dongles separately. I got mine off a friend who didn’t need it any more. And while setting up the networking bit, I found at least three different sets of instructions depending on what dongle people have. I could try to detect what dongle they have and then auto-install needed software to match, but, yikes, there are many things I would rather do with my life. I think instead, if you order an SD card, by default, it should come with a dongle – the buyer can opt out, but not without understanding they may need to install different libraries and do some reconfiguring.
Or, I dunno, if you want to run the piece and don’t want to buy a dongle, send me yours and I’ll get it working and send it back with an SD card?
My last software job was doing something called being a release engineer – I took people’s stuff that worked on their own machine and packaged it so the rest of the world could use it. I wanted to be a developer, but that was the job I could get. It seems like I’m still release engineering, even as a composer.
Anyway, this is all very techy, but the point here is to prevent end users from having to do all this. When I’m done, I’ll make an image of the card and use that to make new cards, which I can post to people, saving them my woe. Or, even better, some publishing company will send them to people, so I don’t need to do my own order fulfilment, because queuing at the post office, keeping cards and dongles on hand, etc gets very much like running a small business, which is not actually the point.

Tech Notes so far

Later, I’m going to forget how I got this working, so this is what I did:

  1. Get Raspbian Wheezy, put it on a card.
  2. Boot the Pi off the card
    1. Put the card in the Pi
    2. Plug in the HDMI cable to the monitor and the Pi
    3. Connect the Pi to a powered USB hub
    4. Put the dongle on the powered hub.
    5. Plug in a mouse and keyboard
    6. Connect your Pi to the internet via an ethernet cable
    7. Turn on the HDMI monitor and the hub
    8. Plug in the Pi’s power cable (and send electricity to the Pi). Make sure you do this last.
  3. On the setup screen, set it to boot to the desktop and set the locale. Then reboot.
  4. Open a terminal and run:

    sudo apt-get update
    sudo apt-get install aptitude
    sudo aptitude safe-upgrade
    sudo apt-get autoremove
    sudo apt-get clean
    sudo aptitude install rfkill hostapd hostap-utils iw dnsmasq lighttpd

  5. Using your regular computer (not the Pi), find the wifi channel with the least traffic and the least overlap:

    sudo iwlist wlan0 scan | grep Frequency | sort | uniq -c | sort -n

  6. Try to find out what dongle I have
    1. run: iw list
    2. That returns ‘nl80211 not found’
    3. run: lsusb
    4. That says I have a RTL8188CUS 802.11n adaptor
  7. Use this script for a rtl8188CUS dongle
    1. For future, it would be nice to get the location from the system locale
    2. Autoset the SSID to the name of the piece
    3. Autoset a default password
    4. Indeed, remove all interactivity from the script
  8. Reboot
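For future reference, most of what that script sets up boils down to a hostapd config along these lines. The SSID and password are placeholders, and note that the RTL8188CUS won’t work with stock hostapd’s nl80211 driver (consistent with iw list failing above) – it needs Realtek’s patched hostapd and its rtl871xdrv driver:

```
# /etc/hostapd/hostapd.conf – sketch for an RTL8188CUS dongle
interface=wlan0
driver=rtl871xdrv        # Realtek's patched hostapd; stock builds use nl80211
ssid=Immrama             # placeholder: autoset this to the name of the piece
hw_mode=g
channel=6                # pick the quietest channel from the iwlist scan above
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme  # placeholder default password
```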

It might not seem like much, but that was all day yesterday. The first step alone took bloody ages.

To Do

  • Install needed fonts, etc.
  • Try to ensure that the internet remains available over ethernet; if this isn’t possible, you can still check out a github repo to a USB stick and move data that way…
  • Find out what wifi dongle would be best for this application – ideally it has a low power draw, decent range, cheap and commonly owned among people with Pis
  • Set it to hijack all web traffic and serve pages – but not with Apache! Use the lighttpd installed earlier
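The traffic-hijacking item is mostly a dnsmasq setting: answer every DNS query with the Pi’s own address, and let lighttpd serve the notation page from there. A sketch, assuming the access point sits at 192.168.42.1 (the address and interface name are placeholders):

```
# /etc/dnsmasq.conf – captive-portal style DNS for the notation network
interface=wlan0
dhcp-range=192.168.42.10,192.168.42.50,12h
# '#' matches any domain, so every lookup resolves to the Pi
address=/#/192.168.42.1
```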

Making the most of Wifi

‘Wires are not that bad (compared to wireless)’ – Perry R. Cook 2001
Wireless performance is riskier and less reliable than wired, but dancers don’t want to be cabled.
People use Bluetooth, ZigBee and wifi. Everything is in the 2.4 GHz ISM band, so all of these technologies share the same spectrum. Bluetooth has 79 narrowband channels. It will always collide, but will always find a gap, leading to a large variance in latency.
ZigBee has 16 channels and doesn’t hop.
Wifi has 11 channels in the UK. Many of them overlap, but 1, 6, and 11 don’t. It has broad bandwidth, and it will swamp out ZigBee and Bluetooth.
They have developed x-OSC, which sends OSC over wifi and hosts ad-hoc networks. The presenter rubs a device and a fader goes up and down on a screen. The device is configured via a web browser.
You can further optimise on top of wifi, by using a high-gain directional antenna and by tuning router settings to minimise latency.
Normally, access points are omnidirectional, so they pick up signals from the audience, like mobile phone wifi or Bluetooth, and people’s phones will try to connect to the network. A directional antenna does not take in as much of the audience. They tested the antenna patterns of routers. Their custom antenna has three antennas in it, in a line. It is ugly, but solves many problems. The tested results show it has very low gain at the rear, partly because it is mounted on a grounded copper plate.
Even commercial routers can have their settings optimised. This is detailed in their paper.
Packet handling in routers is optimised for web browsing and is biased towards large packets, which means high latency. Tiny packets give more than enough throughput for musical applications.
Under ideal conditions, they can get 5ms of latency.
They found that channel 6 does overlap a bit with 1 and 11, so if you have two different devices, put them on the two outside channels.

Questions

UDP vs TCP – have you studied this wrt latency?
No, they only use UDP.
How many dropped packets do they get when there is interference?
That’s what the graph showed.

Republic

It’s time for everybody’s favourite collaborative real time network live coding tool for SuperCollider.
Invented by PowerBooks UnPlugged – granular synthesis playing across a bunch of unplugged laptops.
Then some of them started Republic111, which is named for the number of the room where they taught workshops.
Code reading is interesting in network music, partly because of stealing, but also to understand somebody else’s code quickly, or to actively understand it by changing it. Live coding is a public or collective thinking action.
If you evaluate code, it shows up in a history file and gets sent to everybody else in the Republic. You can stop the sound of everybody on the network. All the SynthDefs are saved. People play ‘really equally’ on everybody’s computer. Users don’t feel obligated to act, but rather to respond. Participants spend most of their time listening.
Republic is mainly one big class, which is a weakness; it should be broken up into smaller classes that can be used separately. Scott Wilson is working on a newer version, which is on GitHub. Look up ‘The Way Things May Go’ on Vimeo.
Graham and Jonas have done a system which allows you to see a map of who is emitting what sound and you can click on it and get the Tdef that made it.
Scott is putting out a call for participation and discussion about how it should be.

David Ogborn: EspGrid

In Canada, laptop orchestras get tons of gigs.
Naive sync methods do redundant packet transmission – send the same value several times in a row. This actually increases the chance of collision, but probably one copy will get through. Or you can schedule further in advance and schedule larger chunks – so send a measure instead of just sending a beat.
Download it from esp.mcmaster.ca. Mac only.
5 design principles

  • Immediacy – launch it and you’ve got stuff going right away
  • Decentralisation – everything is peer to peer
  • Neutrality – works with chuck, supercollider, whatever
  • Hybridity – they can even use different software on the same computer at the same time
  • Extensibility – it can schedule arbitrary stuff

The grid has public and private parts. EspGrid communicates with other apps via localhost OSC; your copy of SuperCollider does not talk to the larger network. EspGrid handles all that.
The “private protocol” is not OSC. It’s going to use a binary format for transmission. Interoperability is thus based only on the client software, not on the middleware.
Because EspGrid talks OSC to its clients, it can run on a neighbour’s computer and send the OSC messages on to linux users or other unsupported OSes.
The program is largely meant to be run in the background. You can turn a beat on or off, and this is shared across the network. You can chat. You can share clipboards. Also, Chuck will dump stuff directly.
Arbitrary OSC messages will be echoed out, with a time stamp. You can schedule them for the future.
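From a client’s point of view this is just localhost OSC, so it can even be poked at from the shell with oscsend from liblo-tools. The port number and address space here are hypothetical stand-ins – check EspGrid’s documentation for the real ones:

```shell
# Hypothetical example: send an arbitrary message to a local EspGrid
# (5510 and /esp/chat are made-up placeholders, not the documented namespace)
oscsend localhost 5510 /esp/chat s "hello from the shell"
```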
You can publish papers on this stuff or use it to test shit for papers. Like swap sync methods and test which works best.
Reference Beacon does triangulation to figure out latencies.
He wants to add WAN stuff, but not change the UI, so the users won’t notice.

Question

Have they considered client/server topology for time sync? No. A server is a point of failure.
Security implications? He has not considered the possibility of sending naughty messages or how to stop them.
Licence? Some open source one… maybe GPL2. It’s on Google Code.