Running an online student concert

I wanted to come up with the most straightforward possible setup, so that students would be able to copy it and run their own events with minimal fuss.

This plan uses Twitch, which has two tremendous advantages. The first is that it has a performance rights society license, so everyone is free to do covers with no copyright consequences. (Just don’t save the stream to Twitch.) The second is that the platform is designed around liveness, so gaps in the stream are not a problem. This means that no stream switching is required.

Student skills required

The students need to be able to get their audio into a computer. This might entail using a DAW, such as Reaper, or some sort of performance tool. They need to be able to use their DAW or tool in a real-time way, so that performing with it makes sense. If they can create a piece of music or a performance with software that they are capable of recording, then they have adequate skills.

This checklist covers all the skills and tools that a Mac or Linux user will need to play their piece. It will work for many, but not all, Windows users. This is because Windows setups can vary enormously.

Once everyone is able to stream to their own Twitch channel, they have the skills required to do the concert.

Setup and Organisation

You will need a Twitch account dedicated to your class or organisation. You will also need a chatroom or other text-based chat application to use as a “backstage”. Many students are familiar with Discord, which makes it an obvious choice. Matrix chat is another good possibility. If you go with Discord, students will need to temporarily disable the audio features of that platform.

As the students are already able to stream to Twitch, the only thing that will change for them is the stream key. Schedule tech rehearsals the day of the concert and arrange for the students to “show up” in your backstage chat. At those rehearsals, give out your channel’s stream key. Give the students a few minutes to do a test stream and check that their setup is working.

Instruct the students to wait for your cue before starting their streams and to announce in the chat when they stop. If they get disconnected due to any kind of crash, they should check in via the chat before restarting. Once they finish their performance, they should quit OBS so they do not accidentally restart their stream.

When it’s call time for the concert, they also need to show up in the backstage chat. They should be aware of the concert order, but this may also change as students encounter technical challenges. You or a colleague should broadcast a brief introductory welcome message, which should mention that there will be gaps between performances as the stream switches.

As you stop broadcasting, tell the first student to start and the next student to be ready (but not go yet). The first student will hopefully remember to tell you when done and stop their stream. As their stream ends, you can tell the next student to go. You should be logged into the Twitch web interface so you can post in the chat who is playing or about to play.

After the concert ends, reset the stream key. This will make sure a student’s next Twitch stream doesn’t accidentally go out on your organisation’s channel.

Conclusion

The downsides of this setup are that there will be gaps in the stream and that, if a student goes wildly over time, it’s hard to cut them off. However, the tech requirements need no investment from your institution and, again, students should be able to organise their own events in a similar way using the skills they learned from participating in this one.

Running a festival fast with less Google

One day, a little while into lockdown, I rang up my friend Shelly Knotts and suggested we reboot the Network Music Festival, which had last run 6 years earlier. “Leaving aside all the reasons this is a terrible idea,” I began and she revealed that she’d had the same thought. We were in.

She made a Gantt chart, a drawing that shows dependencies for scheduling, and said the soonest reasonable time to do it was roughly three months in the future. So we set that date and set out. Our goals were to do an all-telematic festival with an open call, a few invited acts and no use of Google. We were not to achieve all these goals, but we did put on a festival!

Shelly made a budget and asked me for some estimates. My first mistake was not taking that step seriously enough. We’d applied for funding previously and never gotten it, so I did not put in any pay for myself, and I did not spend enough time looking at how much streaming would actually cost, assuming we could just buy two months of a Vimeo subscription. It turns out you can’t sign up for less than a year.

We already had a WordPress site with a minor malware problem, a Google mailing list and a Gmail account. I removed the malware and installed a bunch of security software on the WordPress site, but didn’t have time to upgrade or replace the theme. I tried to set up a Mailman list, but ran into some weird problem with the anti-spam features. And as a temporary measure, I set the new email address to forward to the old one. All of these things are still on my todo list.

To run the open call, I looked at all the widely used conference management systems. Like many of the formerly no-cost solutions I remembered from 2014, these charge money now. I wasn’t sure how much response we would get, but I was worried about capping it at the free tier. All of these systems are also designed for science conferences, not artistic events. I decided to use Ninja Forms.

To test Ninja Forms, I put up a default WordPress installation at a subdomain and put Ninja Forms on it. My mistake was that this was not a duplicate of our existing site – it had many of the same plugins but not the same theme, and certainly not the same content. It wasn’t a proper testing server. So as the open call was running, I got some email from people who were unable to submit their forms. I couldn’t submit them either – something about the content was screwing things up, and it wasn’t the post length or anything else I could pin down. It worked fine on the testing site. We quickly recreated the forms on Google.

Meanwhile, rather than use Google Sheets or their online text editor, we started using CryptPad because Fossbox has an account there. Their spreadsheet glitched and lost a lot of data. We filed a bug report and they said they fixed the problem, but we did not return to them for spreadsheets. We did use their rich text documents, which work beautifully.

When it came time to combine all the data from the open call forms, I put both batches of data into CSVs and used LibreOffice to create a giant set of sheets. I assigned entries IDs. By hand. This was a mistake. I then broke the entries up into a bunch of sheets associated with submission types. I also looked for duplicates and removed them, sorting by title and email to look for matches. I used a mail merge operation to produce a bunch of HTML documents with anonymised submissions for our reviewers. Because I sorted them after assigning IDs, the names of the documents and the ID numbers didn’t match. Only one reviewer noticed.
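
Doing that collation step in code rather than by hand would have avoided the ID problem. Here is a minimal sketch, assuming two form exports as CSVs; the filenames and column names are made up for illustration:

    import csv

    # Hypothetical filenames and column names, for illustration only.
    rows = []
    for path in ("ninja_forms_export.csv", "google_forms_export.csv"):
        with open(path, newline="", encoding="utf-8") as f:
            rows.extend(csv.DictReader(f))

    # De-duplicate by (title, email) *before* assigning IDs, so the IDs
    # can never drift out of step with a later sort.
    seen, unique = set(), []
    for row in rows:
        key = (row["Title"].strip().lower(), row["Email"].strip().lower())
        if key not in seen:
            seen.add(key)
            unique.append(row)

    for i, row in enumerate(unique, start=1):
        row["ID"] = i

    fieldnames = ["ID"] + [k for k in unique[0] if k != "ID"]
    with open("submissions_with_ids.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(unique)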

Meanwhile, we’d been collecting potential reviewers into an Excel sheet. We tried to have equal numbers of men and people who are not men (read: women and enbies, a group that otherwise should not be lumped in together). We found that men were way more likely to agree to do reviews and ended up asking about twice as many women. It’s extremely fortunate that I decided not to use the free tier of EasyChair, as we had more than double the number of submissions they allow. It was a real scramble trying to find reviewers. I returned to Facebook and leaned on friends for recommendations.

I used the UNO bridge from LibreOffice to Python to email all the reviewers their agreed number of reviews. Each item was to be double reviewed, once by a man and once by someone who is not. Every item did get reviewed at least twice. A few got additional reviews. Reviews went to another Google sheet.
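
I won’t reproduce the UNO bridge code here, but a plain-Python equivalent using the standard library’s smtplib would look roughly like this sketch – the CSV layout, SMTP server and credentials are all placeholders:

    import csv
    import smtplib
    from email.message import EmailMessage

    # Hypothetical CSV of reviewers: columns Name, Email, Assignments.
    with open("reviewers.csv", newline="", encoding="utf-8") as f:
        reviewers = list(csv.DictReader(f))

    with smtplib.SMTP("smtp.example.org", 587) as smtp:  # placeholder host
        smtp.starttls()
        smtp.login("festival", "app-password")           # placeholder login
        for r in reviewers:
            msg = EmailMessage()
            msg["From"] = "opencall@example.org"
            msg["To"] = r["Email"]
            msg["Subject"] = "Your review assignments"
            msg.set_content(
                f"Dear {r['Name']},\n\n"
                f"Here are your agreed reviews:\n{r['Assignments']}\n"
            )
            smtp.send_message(msg)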

Collating the submissions with the reviews is too much to reasonably do in a spreadsheet, so I imported all the submissions and all the review CSVs into sqlite3. I wrote some Python scripts to spit out submissions coupled with their reviews, sorted by score, so we could use them to figure out what to accept and reject. We decided that everything with a score of 4 or 5 should probably be accepted and anything at 2 or lower should probably be rejected. However, we did both look at every single one of the submissions and decide whether to take it, sometimes overriding a low score, but never a high score. We put our decisions into a spreadsheet and then had a meeting to discuss only our disagreements.
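
A minimal sketch of that collation, assuming a submissions table and a reviews table (the schema and names here are illustrative, not the festival’s real ones; the CSVs themselves can be loaded with the sqlite3 shell’s .mode csv and .import commands):

    import sqlite3

    con = sqlite3.connect("opencall.db")

    # Couple each submission with its average review score, best first.
    query = """
        SELECT s.id, s.title, AVG(r.score) AS avg_score, COUNT(r.score) AS n
        FROM submissions AS s
        JOIN reviews AS r ON r.submission_id = s.id
        GROUP BY s.id
        ORDER BY avg_score DESC
    """
    for sub_id, title, avg_score, n in con.execute(query):
        print(f"{sub_id}\t{avg_score:.1f} ({n} reviews)\t{title}")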

Like all our meetings, this was held on Jitsi Meet, one of the platforms we were considering for workshops. And, indeed, it was not just one meeting, but several. We started out imagining getting around 50 submissions and accepting around 25. We got 164 and accepted 101.

After we came to an agreement about who to programme, I spat out sheets with just the reviews and emailed them to all the submitters. It was here that an error came to light. I’d assigned the same ID to two different groups, and they both got back a mixture of reviews meant for both of them. Because I’d exported the submissions via a mail merge in LibreOffice, it hadn’t cared about the duplicated ID: it made a file for every single row of the spreadsheet. So everything had been reviewed, but the reviews had been jumbled up. We got an email from one of the groups saying they thought they’d received the wrong reviews. I double-checked everything then, and this had only happened to the one group.
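
The duplicated ID is exactly the kind of thing a tiny sanity check catches before export; something like this sketch, run against the collated sheet, would have flagged it:

    import csv
    from collections import Counter

    with open("submissions_with_ids.csv", newline="", encoding="utf-8") as f:
        ids = [row["ID"] for row in csv.DictReader(f)]

    dupes = [i for i, n in Counter(ids).items() if n > 1]
    if dupes:
        raise SystemExit(f"Duplicate submission IDs: {dupes}")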

Once we knew who we were accepting, we asked accepted groups to confirm whether they would perform and to use a free web meeting calendar to track their availability. We also asked if they wanted to correct any of their information. I wrote another Python script to copy everyone who accepted into a new sqlite3 database, replacing their information where they offered corrections. Only a tiny number of groups wrote “no” in those forms, so it worked mostly without a hitch.
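
A sketch of that copy step, assuming a hypothetical confirmations table holding the form responses, with corrected fields left NULL where nothing changed:

    import sqlite3

    src = sqlite3.connect("opencall.db")
    dst = sqlite3.connect("festival.db")
    dst.execute(
        "CREATE TABLE IF NOT EXISTS acts (id INTEGER PRIMARY KEY, title TEXT, bio TEXT)"
    )

    # COALESCE prefers a correction where one was offered. Table and
    # column names are illustrative.
    rows = src.execute("""
        SELECT s.id,
               COALESCE(c.corrected_title, s.title),
               COALESCE(c.corrected_bio, s.bio)
        FROM submissions AS s
        JOIN confirmations AS c ON c.submission_id = s.id
        WHERE c.will_perform = 'yes'
    """)
    dst.executemany("INSERT INTO acts VALUES (?, ?, ?)", rows)
    dst.commit()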

I wrote yet another script to upload every database record to our website. While I was doing that and uploading the late responders, Shelly was proofreading everyone’s programme notes and biographies. We shared the database on Dropbox. My internet connection is quite slow, and big databases are big and also not text, so automated merging of file changes went catastrophically wrong. Shelly lost most of her edits. We moved our data back to Google Sheets.
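
I won’t detail how my upload script talked to the site, but one way to do this against a modern WordPress is the REST API, sketched here with the requests library – the URL and credentials are placeholders:

    import sqlite3
    import requests

    con = sqlite3.connect("festival.db")
    endpoint = "https://example.org/wp-json/wp/v2/posts"  # placeholder site
    auth = ("editor", "application-password")             # placeholder auth

    for act_id, title, bio in con.execute("SELECT id, title, bio FROM acts"):
        resp = requests.post(endpoint, auth=auth, json={
            "title": title,
            "content": bio,
            "status": "draft",  # proofread before publishing
        })
        resp.raise_for_status()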

It was then I found out that the meeting-scheduling website would only export a CSV if you bought a year-long subscription. I signed up for an account and tried to get the calendar to associate with my new account; it wouldn’t, so at least I was spared the expense. I used Google Sheets to create a date field incrementing hour by hour for the full duration. I cut and pasted this list into a plain text document and followed every line with a comma. Then, as I moused over every single square of the 100-and-some possible festival hours, a little popup appeared with a list of everyone who said they were available at that time. I copied and pasted these lists into the text document. This gave me a bunch of lines starting with a date and time, followed by a bunch of identifiers.
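
The hour-by-hour date field is also a few lines of Python, trailing commas included; the start time and length here are placeholders:

    from datetime import datetime, timedelta

    start = datetime(2020, 7, 15, 0, 0)   # placeholder festival start
    for i in range(120):                  # placeholder: 120 festival hours
        print((start + timedelta(hours=i)).strftime("%Y-%m-%d %H:%M") + ",")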

I uploaded my new file as a CSV back to Google Sheets. Each row started with a time, and the subsequent cells identified which groups were available. It was a perfect triangle wave of availability. Shelly had made a draft programme, and it did not correspond to the listed availabilities. Worse, many of the groups had put their names in the form rather than their submission ID number, so I had to do a lot of database searching to figure out who “Joe Bloggs” was and eventually resolve everyone to an ID.
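
For resolving free-text names back to IDs, the standard library’s difflib can do a first, fuzzy pass, leaving anything below the cutoff to be searched by hand. A sketch, with an illustrative acts dict:

    import difflib

    acts = {"Joe Bloggs Trio": 17, "Example Ensemble": 42}  # name -> ID
    lookup = {name.lower(): i for name, i in acts.items()}

    def resolve(name, cutoff=0.6):
        """Return the closest act's ID, or None if nothing is close enough."""
        match = difflib.get_close_matches(name.lower(), lookup, n=1, cutoff=cutoff)
        return lookup[match[0]] if match else None

    print(resolve("joe bloggs"))  # -> 17; still worth checking by hand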

Then, once I knew who was who, it looked NP-complete to figure out how and when to programme them. I found the least available bands and made graphs of when they could play. I started looking into what algorithms might cope with this completely intractable problem. It was then that Shelly put her foot down and demanded she take charge of this creative, curatorial problem. She came back the next day with a working schedule. I still have no idea how she did it.

Our old PayPal account still existed, and I started seeing notices of donations. But what was the password? The bank account it was linked to was long dead, but an old email with the account information contained enough detail that PayPal let me back into the account. Thank goodness.

Around then, I went virtually to the Art Meets Radical Openness conference, which went extremely well, so I decided we should copy them as much as possible. They used BigBlueButton, which I liked, so we began looking for a BigBlueButton host. Fo.am runs a server, so we asked them, and because life is uncertain, I also asked a bunch of other people, two of whom agreed. One was Senfcall, which was great but in German, and the other was NixNet in North America. One thing we didn’t know was how many people could be in a single BBB meeting, so I read the FAQ for the software, which said it should be stress tested by getting some volunteers to all open five browser tabs to the service. I rounded up some friends and found definitively that if anyone opens five browser tabs to BigBlueButton, their laptop will crash. Later, our North American host laughed and said he regularly has groups of 100 people!

AMRO had backchannel chat in Matrix, so we did as well. They ran their own server, which made it work way better than using the regular Matrix server. The invite link for that service leads to what looks exactly like an error page. I ended up putting a screenshot of that page on our website with an arrow showing where to click.

AMRO had a stream out via DorfTV and an audio-only stream via a radio station, so we decided to try to have a produced stream and an audio stream. The audio-only component was especially important to me, since my home internet quality is variable and I ended up relying on the audio stream to be able to listen to concerts. I’ll talk more about how we did the AV in another post.

AMRO had a website for the conference that had large portions that were just flat files. No caching. No overhead. Fast to load. No trackers. Holger Ballweg, Shelly’s partner, built a flat site for us that had a video player, a link to the audio player, a chat widget and a scrolling schedule that changed times based on your submitted time zone and told you what was coming when. He also made a bunch of pages for our exhibition of web installations.

I was trying to get something similar on the WordPress site, but the calendar plugins did not do what I wanted. We finally ended up making a bunch of Eventbrite events and told people to use those to get their time zone. Those events also export to Facebook in very few clicks, so we put up a Facebook event for every concert and workshop.

I exported all those events to a Google calendar and then used IFTTT to post about the events to our official Twitter account as they started. I hope this drew people over. Most of our promotion of the open call and the festival was Twitter and Facebook. We also had a presence on Mastodon, which cross-posted to my personal Twitter account. I’d never deleted that account, so I also turned Cheap Bots Done Quick on to it with some sincere left-wing content and a bunch of posts about the festival. Shelly also wrote a press release, and subsequently we were included in some concert listings as well as the Other Minds mailings, which certainly helped.

As the festival drew ever closer, my attention turned to the streaming process and also to providing tech support to people inexperienced with streaming. We had “new to networking” categories and “student” categories and hoped to help people get into telematic performance who might not have done it before. After a long Jitsi meeting with someone new to networking, I wrote the conversation up in a blog post for Fossbox. I learned how to do weird things with system audio in BigBlueButton.

I also started trying to harden the WordPress site against heavy traffic with SuperCache. It says it can’t cache the mobile site without Jetpack, so I installed the Jetpack plugin, only to find it has spyware in it. I set about trying to disable that, something their privacy policy notes the site owner can do. Their privacy policy doesn’t note that this requires either patching the source code or installing yet another plugin to patch the source code. Otherwise, it merrily ignores Do Not Track settings, sharing the data with its parent company. Alas, all of WordPress seems to be this kind of dodgy grift these days.

In the end, we presented 97 groups. We raised more than £1000 in donations. These are meant to go to the independent artists in the festival, of whom I don’t yet have a count – they’ve got yet another form to fill out. I don’t know how many people passed through the stage and backstage, but it was a lot. We streamed roughly two terabytes of festival.

My todo list now is: get away from WordPress to a lower-carbon system such as Hugo; find and deploy a database with a Google-Sheets-like interface, so I can do SQL while Shelly edits; get rid of Google forever, for real.

We’re planning on repeating the festival next year, so we really should start getting on with funding applications. I felt a real sense of camaraderie in the VR algoraves, so it would be nice if the entire festival were in VR next time. Mozilla Hubs is still kind of buggy, but hopefully it will be better by next year. Of course, Second Life supports audio streams, so there’s always that option for the overly old school among us.

It’s been a week and a half since the festival ended and I still feel slightly tired out. This post mostly covers only what I did for the festival, not what Shelly did. It would be more than twice as long if it included all of her intense work. A more how-to post on running the data admin is forthcoming. I’m not sure anyone would want to copy us, but it did work, and I think a documented process is valuable. I’d like to see an AGPL solution that can be customised for the arts. And, as promised, I’ll talk in detail about how we managed the streaming, as that ended up working really well.

If you want to make a donation, all of the money raised via paypal before 10 August will go to our independent artists.

Collaborative Live Coding via Jitsi Meet for Linux Users

Jitsi Meet works best in Chromium, so these instructions are for that browser. They also assume that your live coding language uses JACK audio.

Jack and Pulse

The first thing you need to do is to get PulseAudio and JACK running at the same time and talking to each other. For me, the solution was in Qjackctl: in Settings, under the Options tab, go to “Execute script after Startup” and paste in:

    pactl load-module module-jack-sink channels=2; pactl load-module module-jack-source channels=2; pacmd set-default-sink jack_out

You can test if both are running at once by starting jack and then playing audio from a normal system program like your web browser. If you hear sounds, it worked.

More information about getting both running is here.

Jack Audio – Pulse Sink

Make Connections

You get two sinks. One of them is going to be used to send audio into Jitsi and the other will be used to get audio out.

Jack with two Pulse sinks and the system in/out

Start your live coding language audio server (i.e. boot SuperCollider). Go back to Qjackctl, click on Connections and go to the Audio tab. Make a connection from your live coding language output to the Pulse Audio Jack Source-01 input. Do this by clicking on the language in the left column and Source-01 in the right column so both are highlighted, then clicking the “Connect” button on the lower left.

Disconnect the system output from that Source’s input if you want to turn off your microphone. Do this by clicking on the system in the left column and Source-01 in the right column and clicking the “Disconnect” button.

Everything connected correctly
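
If you’d rather script these connections than click through them, the JACK-Client Python package (pip install JACK-Client) can make the same patches. This is only a sketch: the port names below are typical, but yours may differ, so print them first.

    import jack

    client = jack.Client("patcher")
    client.activate()
    for port in client.get_ports():
        print(port.name)  # check the exact port names on your system

    # Live coding language -> the Pulse source that Jitsi will pick up.
    client.connect("SuperCollider:out_1", "PulseAudio JACK Source:front-left")
    client.connect("SuperCollider:out_2", "PulseAudio JACK Source:front-right")

    # "Turn off the microphone" by unpatching the system capture ports
    # (this raises an error if they weren't connected to begin with).
    client.disconnect("system:capture_1", "PulseAudio JACK Source:front-left")
    client.disconnect("system:capture_2", "PulseAudio JACK Source:front-right")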

Chromium Settings

(These will also work for Chrome.)

First open your Jitsi Meet connection. If you are using the server at Meet.jit.si, you can skip this step.

For most other Jitsi servers, in Chromium, go to chrome://settings/content/microphone and change the mic input to Pulse Audio Jack Source-01.

Chromium Settings

Jitsi Settings

As we’re live coding, you’ll want to share your screen. Mouse over the Jitsi window so that the icons at the bottom appear. The one in the lower left corner looks like a screen. Click on it.

The farthest left is the screen

It lets you pick a video source. If you’re on Wayland, you may not be able to share your entire screen, but you should be able to share a single window. If you don’t see your live coding language listed as a window, make sure it and Chromium are on the same virtual desktop.

Share Your Screen

Click the screen icon again to switch back to your webcam.

Fancy New Options

If you’re on a shiny new version of Jitsi, such as the one at Meet.jit.si, you’ll see little carets by the mic and video icons in the centre bottom of your browser window.

New Features!!

These allow you to pick your audio source without having to go into Chrom/ium settings. If you have more than one webcam, you can also pick which one you want to use there, without having to go into Chrom/ium settings for that either.

Be Careful of Levels!

Jitsi was made assuming that more or less one person would be talking at a time, so multiple streams at full volume can distort. Make sure to leave your output low enough to leave room for your collaborators. Your system volume controls will not work, so be sure to set the levels in your live coding language IDE.

Also be aware that while the compression sounds good for free improvisation on acoustic instruments, the transformations on digital audio will seem more pronounced. Go with it. This is your aesthetic now.

Scores for Quarantine 2: Homage to Norman Rockwell

For four or more players.

Score

Players assign themselves positions in a virtual circle.

Using two different communications devices, players make an audio connection to the players on their left and right, but no other players.

Improvise in this trio.

Hardware

One possible setup might be to use two phones, with one earbud from each.

Using a mixer would also be possible, but headphones are still advised to prevent sounds from the distant players reaching each other.

Software

Just using a real telephone is fine, but any audio software will work.

Documentation

If everyone is using two mobile phones, recordings made by call recording software on every single one of them could potentially be mixed together into a larger piece, but it may be that this piece is only meaningful to its participants.

This score is Creative Commons ShareAlike

Have you tried playing it? How did it go? Were you able to record it?

Telematic Performance and e-learning

I’ve put some resources up for my students and I’m going to copy them here in case they’re of wider interest. I’ve made instructional videos for using some of the tools.

Online Meetings / Online Jams

  • Jitsi Meet – Doesn’t spy on you or sell your data. Can be used via mobile device with a free app or accessed via a web browser on your computer. Users without either of these can call in using local numbers in several countries. Can record to Dropbox or stream to YouTube. Works best with Chromium/Chrome. Some people have good luck with Firefox. Safari has poor results.

Telematic Performance Software and Platforms

  • OBS Studio – Stream audio and video and/or record your desktop. (How to use on Mac.)
  • Upstage – Cyber performance platform, mostly used by artists.
  • LNX Studio – Collaborative platform for making popular music across a network. Mac only. Last updated in 2016, so it may not work with the newest Macs.
  • Soundflower – Zero-latency audio routing for Mac. (Use it to get audio to and from Jitsi Meet and OBS.)
  • BlackHole – Even more zero-latency audio routing for Mac. (See above.)

Video Tutorials

Made by me. My students like videos. I’ll post text here later. All of these are for Mac.

Scores for Quarantine 1: Jitsi Solos

Score

Players connect to Jitsi Meet.

Everyone plays background textures.

When the textures have gone on long enough, take a solo.

When the solo has gone on long enough, stop.

If anyone starts soloing while you are soloing, stop your solo immediately.

Hardware

Players must have a phone, a tablet, or a laptop.

Headphones are recommended.

Software

Chromium browser (or Chrome), if the players are using laptops.

Players on tablets or smartphones can use the free Jitsi software.

Players can also just dial in to the phone number provided by the Jitsi Meet server.

Documentation

Record to Dropbox or stream to YouTube via the links provided by Jitsi Meet.

This score is Creative Commons ShareAlike

Please let me know if you tried playing it. How did it go? Send me a link to the recording?

Publishing Live Notation

My piece Immrama is a live notation piece. A Python script generates image files, as the performance is happening, which are put on a web page. Performers connect via any wifi device with a web browser to see the notation. It uses really simple technologies, so nearly any device should work. A Newton won’t (I made enquiries) but an old Blackberry will.
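
The serving side can be tiny. Here is a minimal sketch of the idea, using only the standard library: it serves whichever image the generator wrote most recently, on an auto-refreshing page. The paths, port and refresh rate are placeholders, not the actual Immrama code.

    import glob
    import os
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    PAGE = """<html><head><meta http-equiv="refresh" content="2"></head>
    <body><img src="/{img}" style="width:100%"></body></html>"""

    class NotationHandler(SimpleHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/":
                # Show the newest notation image the generator has produced.
                newest = max(glob.glob("*.png"), key=os.path.getmtime)
                body = PAGE.format(img=newest).encode()
                self.send_response(200)
                self.send_header("Content-Type", "text/html")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                super().do_GET()  # serve the image files themselves

    HTTPServer(("", 8080), NotationHandler).serve_forever()
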
Setting it up requires Python, a web server and a lot of faff. It could be packaged into a Mac app, but I’m working on Linux, and it seems like more and more people in the arts are turning to Windows as Apple increasingly ignores its former core audience of artists and designers. It runs fine on my laptop, of course, but I don’t want to have to provide that to anybody who wants to do the piece. Nor do I want to force ensembles to have IT people on hand. Fortunately, I think I’ve stumbled on how to package this for the masses.
I’m working right now to get it all running on a Raspberry Pi. This is a tiny, cheap computer. Instead of having a hard drive, it uses SD cards. This means that I can set everything up to run my piece, put it all on an SD card, and then anybody can put that SD card into their Raspberry Pi and the piece will be ready to go! …In principle, at least.
This piece needs wifi, which does not come with the Pi. Pi owners who want wireless networking get their wifi dongles separately. I got mine off a friend who didn’t need it any more. And while setting up the networking bit, I found at least three different sets of instructions depending on which dongle people have. I could try to detect what dongle they have and then auto-install the needed software to match, but, yikes, there are many things I would rather do with my life. I think instead, if you order an SD card, it should come with a dongle by default – the buyer can opt out, but not without understanding that they may need to install different libraries and do some reconfiguring.
Or, I dunno, if you want to run the piece and don’t want to buy a dongle, send me yours and I’ll get it working and send it back with an SD card?
My last software job was doing something called being a release engineer – I took people’s stuff that worked on their own machine and packaged it so the rest of the world could use it. I wanted to be a developer, but that was the job I could get. It seems like I’m still release engineering, even as a composer.
Anyway, this is all very techy, but the point here is to prevent end users from having to do all this. When I’m done, I’ll make an image of the card and use that to make new cards, which I can post to people, saving them my woe. Or, even better, some publishing company will send them to people, so I don’t need to do my own order fulfilment, because queuing at the post office, keeping cards and dongles on hand, etc gets very much like running a small business, which is not actually the point.

Tech Notes so far

Later, I’m going to forget how I got this working, so this is what I did:

  1. Get Raspbian Wheezy, put it on a card.
  2. Boot the Pi off the card
    1. Put the card in the Pi
    2. Plug in the HDMI cable to the monitor and the Pi
    3. Connect the Pi to a powered USB hub
    4. Put the dongle on the powered hub.
    5. Plug in a mouse and keyboard
    6. Connect your Pi to the internet via an ethernet cable
    7. Turn on the HDMI monitor and the hub
    8. Plug in the Pi’s power cable (and send electricity to the Pi). Make sure you do this last.
  3. On the setup screen, set it to boot to the desktop and set the locale, then reboot
  4. Open a terminal and run:

    sudo apt-get update
    sudo apt-get install aptitude
    sudo aptitude safe-upgrade
    sudo apt-get autoremove
    sudo apt-get clean
    sudo aptitude install rfkill hostapd hostap-utils iw dnsmasq lighttpd

  5. Using your regular computer (not the Pi), find the wifi channel with the least traffic and least overlap:

    sudo iwlist wlan0 scan | grep Frequency | sort | uniq -c | sort -n

  6. Try to find out what dongle I have
    1. run: iw list
    2. That returns ‘nl80211 not found’
    3. run: lsusb
    4. That says I have a RTL8188CUS 802.11n adaptor
  7. Use this script for an RTL8188CUS dongle
    1. For the future, it would be nice to get the location from the system locale
    2. Autoset the SSID to the name of the piece
    3. Autoset a default password
    4. Indeed, remove all interactivity from the script (see the sketch after this list)
  8. Reboot
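
A sketch of what removing the interactivity might look like: generate hostapd.conf directly, with the SSID set to the piece’s name and a default password. The config keys are standard hostapd ones and the driver line matches the RTL8188CUS, but all the values are placeholders.

    # Sketch: write /etc/hostapd/hostapd.conf without any prompts.
    lines = [
        "interface=wlan0",
        "driver=rtl871xdrv",        # patched driver for the RTL8188CUS
        "ssid=Immrama",             # the piece's name as the network name
        "channel=6",                # pick from the iwlist scan above
        "wpa=2",
        "wpa_passphrase=changeme",  # default password; document it for players
        "wpa_key_mgmt=WPA-PSK",
    ]
    with open("/etc/hostapd/hostapd.conf", "w") as f:
        f.write("\n".join(lines) + "\n")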

It might not seem like much, but that was all day yesterday. The first step alone took bloody ages.

To Do

  • Install needed fonts, etc.
  • Try to ensure that the internet remains available over ethernet, but if this isn’t possible, you can still check out a GitHub repo to a USB stick and move data that way…
  • Find out what wifi dongle would be best for this application – ideally one with a low power draw, decent range, a low price, and common ownership among people with Pis
  • Set it to hijack all web traffic and serve pages – but not with Apache! Use the lighttpd installed earlier

Making the most of Wifi

‘Wires are not that bad (compared to wireless)’ – Perry R. Cook 2001
Wireless performance is riskier, lower-bandwidth, etc. than wired, but dancers don’t want to be cabled.
People use Bluetooth, ZigBee and wifi. Everything is in the 2.4 GHz ISM band; all of these technologies use the same bands. Bluetooth has 79 narrowband channels. It will always collide, but always find a gap, leading to a large variance in latency.
ZigBee has 16 channels and doesn’t hop.
Wifi has 11 channels in the UK. Many of them overlap, but 1, 6 and 11 don’t. It has broad bandwidth. It will swamp out ZigBee and Bluetooth.
They have developed XOSC, which sends OSC over wifi. It hosts ad-hoc networks. The presenter is rubbing a device and a fader is going up and down on a screen. The device is configured via a web browser.
You can further optimise on top of wifi: by using a high-gain directional antenna, and by optimising router settings to minimise latency.
Normally, access points are omnidirectional, which will pick up signals from the audience, like mobile phone wifi or Bluetooth. People’s phones will try to connect to the network. A directional antenna does not take in as much of the audience. They tested the antenna patterns of routers. Their custom antenna has three antennas in it, in a line. It is ugly, but solves many problems. The tested results show it’s got very low gain at the rear, partly because it is mounted on a grounded copper plate.
Even commercial routers can have their settings optimised. This is detailed in their paper.
Packet size in routers is optimised for web browsing and is biased towards large packets, which have high latency. Tiny packets still give plenty of throughput for musical applications.
Under ideal conditions, they can get 5ms of latency.
They found that channel 6 does overlap a bit with 1 and 11, so if you have two different devices, put them on the far outside channels.

Questions

UDP vs TCP – have you studied this with respect to latency?
No, they only use UDP.
How many dropped packets do they get when there is interference?
That’s what the graph showed.

Republic

It’s time for everybody’s favourite collaborative real time network live coding tool for SuperCollider.
Invented by PowerBooks UnPlugged – granular synthesis playing across a bunch of unplugged laptops.
Then some of them started Republic111, which is named for the room number of the workshop where they taught stuff.
Code reading is interesting in network music partly because of stealing, but also to understand somebody else’s code quickly, or to actively understand it by changing it. Live coding is a public or collective thinking action.
If you evaluate code, it shows up in a history file and gets sent to everybody else in the Republic. You can stop the sound of everybody on the network. All the SynthDefs are saved. People play ‘really equally’ on everybody’s computer. Users don’t feel obligated to act, but rather to respond. Participants spend most of their time listening.
Republic is mainly one big class, which is a weakness; it should be broken up into smaller classes that can be used separately. Scott Wilson is working on a newer version, which is on GitHub. Look up ‘The Way Things May Go’ on Vimeo.
Graham and Jonas have done a system which allows you to see a map of who is emitting what sound and you can click on it and get the Tdef that made it.
Scott is putting out a call for participation and discussion about how it should be.

David Ogborn: EspGrid

In Canada, laptop orchestras get tons of gigs.
Naive sync methods: do redundant packet transmission – send the same value several times in a row. This actually increases the chance of collision, but probably one copy will get through. Or you can schedule further in advance and schedule larger chunks – send a measure instead of just a beat.
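
A toy version of that redundant-send idea in Python – the address, port and payload are placeholders, not EspGrid’s actual protocol:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"beat 42"               # stand-in for a real sync packet
    for _ in range(3):                 # several copies; one will likely land
        sock.sendto(payload, ("192.168.1.20", 57120))  # placeholder peer
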
Download it from esp.mcmasters.ca. Mac only.
5 design principles

  • Immediacy – launch it and you’ve got stuff going right away
  • Decentralisation – everything is peer to peer
  • Neutrality – works with ChucK, SuperCollider, whatever
  • Hybridity – they can even use different software on the same computer at the same time
  • Extensibility – it can schedule arbitrary stuff

The grid has public and private parts. EspGrid communicates with other apps via localhost OSC. Your copy of SuperCollider does not talk to the larger network; EspGrid handles all that.
The “private protocol” is not OSC; it’s going to use a binary format for transmission. Interoperability is thus based only on client software, not on the middleware.
Because the Grid thing runs OSC to clients, it can run on a neighbour’s computer and send the OSC messages to Linux users or others on unsupported OSes.
The program is largely meant to be run in the background. You can turn a beat on or off, and this is shared across the network. You can chat. You can share clipboards. Also, ChucK will dump stuff directly.
Arbitrary OSC messages will be echoed out, with a time stamp. You can schedule them for the future.
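
So, in principle, any client that speaks OSC to localhost can drive it. A hypothetical sketch with the python-osc package – the port and address here are made up, so check EspGrid’s documentation for the real ones:

    from pythonosc.udp_client import SimpleUDPClient

    esp = SimpleUDPClient("127.0.0.1", 5510)  # placeholder port
    # Placeholder address: ask EspGrid to echo a message in the future.
    esp.send_message("/esp/schedule", ["/my/message", 1])
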
You can publish papers on this stuff or use it to test shit for papers. Like swap sync methods and test which works best.
Reference Beacon does triangulation to figure out latencies.
He wants to add WAN stuff, but not change the UI, so the users won’t notice.

Questions

Have they considered a client/server topology for time sync? No. A server is a point of failure.
Security implications? He has not considered the possibility of sending naughty messages or how to stop them.
Licence? Some open source one… maybe GPL2. It’s on Google Code.