Making the most of Wifi

‘Wires are not that bad (compared to wireless)’ – Perry R. Cook 2001
Wireless performance is riskier and lower-bandwidth than wired, but dancers don’t want to be cabled.
People use Bluetooth, ZigBee and WiFi, and all of these technologies share the 2.4 GHz ISM band. Bluetooth has 79 narrowband channels and hops between them. It will always collide, but will always find a gap, leading to a large variance in latency.
ZigBee has 16 channels and doesn’t hop.
WiFi has 13 channels in the UK (11 in the US). Many of them overlap, but 1, 6, and 11 don’t. It has broad bandwidth and will swamp out ZigBee and Bluetooth.
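Why 1, 6 and 11 are the safe picks: 2.4 GHz WiFi channels are spaced 5 MHz apart but each one is much wider than that. A tiny sketch, assuming the classic 22 MHz 802.11b channel width (real spectral masks bleed a bit further, which is consistent with the overlap the speakers found between channel 6 and its neighbours):

```python
def center_mhz(channel):
    """Center frequency of a 2.4 GHz WiFi channel (channels 1-13)."""
    return 2412 + 5 * (channel - 1)

def overlaps(a, b, width_mhz=22):
    """True if two channels' bands overlap, assuming width_mhz-wide channels."""
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

print(overlaps(1, 6))  # centers 25 MHz apart: no overlap in this model
print(overlaps(1, 3))  # centers 10 MHz apart: overlap
```

With this simple model, channels 1/6/11 are exactly the maximal non-overlapping set.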
They have developed XOSC, which sends OSC over WiFi and hosts ad-hoc networks. The presenter is rubbing a device and a fader is going up and down on a screen. The device is configured via a web browser.
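OSC messages are small enough to build by hand, which makes the tiny-packets-over-UDP approach concrete. A minimal sketch (the `/fader/1` address and localhost port are made up for illustration; a real setup would target the device’s address, probably via a library like python-osc):

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC message: padded address, type tags, big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# Fire-and-forget over UDP -- no handshake, no retransmission, minimal latency.
packet = osc_message("/fader/1", 0.75)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))  # placeholder address/port
```

The whole message is 20 bytes, which is the kind of packet routers tuned for web traffic handle badly.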
You can optimise further on top of WiFi: by using a high-gain directional antenna, and by tuning router settings to minimise latency.
Normally, access points are omnidirectional, which picks up signals from the audience, like mobile-phone WiFi or Bluetooth, and people’s phones will try to connect to the network. A directional antenna doesn’t include as much of the audience. They tested the antenna patterns of routers. Their custom antenna has three antennas in it, in a line. It is ugly, but solves many problems. The tested results show it has very low gain at the rear, partly because it is mounted on a grounded copper plate.
Even commercial routers can have their settings optimised. This is detailed in their paper.
Packet handling in routers is optimised for web browsing and biased towards large packets, which adds latency. Musical applications instead send huge numbers of tiny packets and care about latency rather than throughput.
Under ideal conditions, they can get 5ms of latency.
They found that channel 6 does overlap a bit with 1 and 11, so if you have two different devices, put them on the far outside channels.


UDP vs TCP – have you studied this wrt latency?
No, they only use UDP.
How many dropped packets do they get when there is interference?
That’s what the graph showed.

To gesture or not? An analysis of technology in NIME proceedings 2001–2013

How many papers use the word ‘gesture’?
Gesture can mean many different things. (my battery is dying.)
Gesture is defined as movement of the body in dictionaries. (59 slides, 4 minutes of battery)
Research definitions of gesture: communication, control, metaphor (movement of sound or notation).
Who knows what gesture even means??
He downloaded NIME papers and ran searches in them. 62% of all NIME papers have mentioned gesture. (Only 90% of 2009 papers use the word ‘music’.)
Only 32% of SMC papers mention gesture, and 17% of ICMC papers.
He checked which words ‘gesture’ appears next to – collocation analysis.
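The collocation search is easy to sketch. This is a reconstruction of the general technique, not the author’s actual code (window size and tokenisation are guesses):

```python
import re
from collections import Counter

def collocates(text: str, keyword: str = "gesture", window: int = 2) -> Counter:
    """Count words appearing within `window` positions of `keyword`."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for i, w in enumerate(words):
        if w == keyword:
            lo, hi = max(0, i - window), min(len(words), i + window + 1)
            counts.update(words[lo:hi])
            counts[keyword] -= 1  # don't count the keyword occurrence itself
    return counts

corpus = "gesture recognition systems map gesture data to sound"
print(collocates(corpus).most_common(3))
```

Run over a whole proceedings corpus, the top collocates are exactly the ‘gesture recognition’-style pairings the talk discusses.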

NIME papers are good meta-research material
He suggests people define the term when they use it.
Data is available.


I can’t tell if this question is a joke or not….. oh, no, we’re on semiotics…. Maybe the pairing of the word ‘gesture’ with ‘recognition’ says something fundamental about why we care about gesture.
The word ‘gesture’ goes in and out of fashion.
Maybe ‘movement’ is a more meaningful word sometimes.
How often is gesture defined?
He should have checked that, he says.

Harmonic Motion: a toolkit for processing gestural data for interactive sound

They want to turn movement data into music.
This has come out of a collaboration with a dancer, using Kinect. It was an exploration. He added visualisation to his interface, and eventually 240 parameters. The interface ended up taking over compared to the sound design.
They did a user survey to find out what other people were doing. So they wanted to write something that people could use for prototyping, that’s easy, extensible, and re-usable.
They wanted something stable, fast, free and complementary, so you could use your prototype in production. Not GPL, so you can sell stuff.
A patch-based system, because MAX is awesome all of the time.
This system is easily modifiable. He’s making it sound extremely powerful. Parameters are easy to tweak and are saved with the patch, because parameters are important.
It has a simple SDK. You can save a patch as a library and run it in your project without the GUI. This really does sound very cool.
Still in alpha.


CNMAT is doing something he should look at, says a CNMAT guy.

Creating music with Leap Motion and the BigBang rubette

Leap Motion is cool, he says.
Rubato Composer is software that allows people to do stuff with music and maths structures and transforms. It’s Max-ish, but with maths.
The maths are forms and denotators, which is based on category theory and something about vector spaces. You can define vectors and do map stuff with them. He’s giving some examples, which I’m sure are meaningful to a lot of people in this room. Alas, both the music AND the math terms are outside of my experience. …. Oh no wait, you just define things and make associations between them. …. Or maybe not…..
It sounds complicated, but you can learn while doing it. They want to make it intuitive to enter matrices via a visual interface, by drawing stuff.
This is built on ontological levels of embodiment: facts, processes, gestures (and perhaps jargon). Fortunately, he has provided a helpful diagram of triangular planes in different colours, with little hand icons and wavy lines in a different set of colours, all floating in front of a star field.
Now we are looking at graphs that have many colours, which we could interact with.

Leap Motion

A cheap, small device that tracks hands above it. More embodied than a mouse or multitouch, as it’s in 3D and you can use all your fingers.


It is built in Java, as all excellent music software is. You can grab many possible spaces. Here is a straightforward one in a five-dimensional space, which we can draw in with a mouse, but sadly, not in five dimensions. Intuitively, his GUI plays audio from right to left. The undo interface is actually kind of interesting. This also sends MIDI….
The demo seems fun.
Now he’s showing a demo of waving his hands over a MIDI piano.


Is the software available?
Yes, on SourceForge, but that’s crashy. And there will be an Android version.
Are there strategies to use it without looking at the screen?
That’s what was in the video, apparently.
Can you use all 3 dimensions?

Triggering Sounds From Discrete Gestures

Studying air drumming
Air instruments, like the theremin, need no physical contact. The Kinect has expanded this.
Continuous air gestures are like the theremin.
Discrete movements are meant to be triggers.
Air instruments have no tactile feedback, which is hard. They work OK for continuous air gestures, though; discrete ones work less well.
He asked users to air drum along to a recorded rhythm.
Sensorimotor Synchronization research found that people who tap along to metronomes are ahead of the beat by 100ms.
He recorded motion with sensors on the participants.
All participants had musical experience and were right-handed.
They need to analyse the audio to find drum sounds.
The analysis looks for a ‘sudden change of direction’ in the user’s hand motion.
They have slow and fast envelope followers and compare the results. A hit occurs at the velocity minimum (usually).
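A rough sketch of that fast/slow envelope-follower comparison. The smoothing coefficients and the exact comparison rule are my guesses, not the paper’s:

```python
def one_pole(xs, alpha):
    """Simple one-pole low-pass (envelope follower); higher alpha reacts faster."""
    y, out = 0.0, []
    for x in xs:
        y += alpha * (x - y)
        out.append(y)
    return out

def detect_hits(speed, alpha_fast=0.5, alpha_slow=0.1):
    """Flag a hit where the fast envelope has dipped below the slow one
    at a local minimum of hand speed (a sudden change of direction)."""
    fast, slow = one_pole(speed, alpha_fast), one_pole(speed, alpha_slow)
    hits = []
    for i in range(1, len(speed) - 1):
        local_min = speed[i] < speed[i - 1] and speed[i] <= speed[i + 1]
        if local_min and fast[i] < slow[i]:
            hits.append(i)
    return hits
```

The slow envelope lags, so a sustained stroke followed by a sharp deceleration pulls the fast envelope below it right at the velocity minimum.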
Acceleration peaks take place before audio events, but very close to it.
Fast notes and slow notes have different mean velocities, but acceleration is unaffected.


Can this system be used to predict notes to fight latency in the kinect system?
Will results be different if users have drumsticks?

NIME live blog: Conducting analysis

They asked people to conduct however they wanted, to build a data set. Focus on the relationship between motion and loudness.
25 subjects conducted along to a recording. They used Kinect to sample motion data, and libxtract to measure loudness in the recordings.
Users listened to the recording twice and then conducted it three times.
Got joint descriptors; velocity, acceleration and jerk; distance to torso.
Got general descriptors about quality of motion, maximum hand height.
They looked for descriptors highly correlated with loudness, and found none. Some participants said they didn’t follow dynamics; 8 subjects were removed.
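That search is essentially ranking descriptors by correlation with the loudness curve. A sketch with Pearson correlation (the descriptor names and data are invented for illustration; the paper’s actual statistics may differ):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_descriptors(descriptors, loudness):
    """Rank motion descriptors by |correlation| with the loudness series."""
    scored = {name: pearson(series, loudness) for name, series in descriptors.items()}
    return sorted(scored.items(), key=lambda kv: -abs(kv[1]))
```

Finding no descriptor with a high score across all subjects is what pushed them towards per-group models instead of one general model.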
Some users used hand height for loudness; others used larger gestures. They separated users into two groups.

They have been able to find tendencies across users. However, a general model may not be the right approach.


How do they choose users?
People with no musical training were in the group that raised hand height.

My Journey through NIME creation, research and industry (by Sergi Jordà)

He got into computer music in the 80s and was really into George Lewis and the League of Automatic Composers. In 1989 he made PITEL, a machine-listening and improvisation system, in Max.
He collaborated with an artist to make pig-meat sculptures that dance and listen to people. Robots made of pork. They showed this project in food markets. This is horrible and wonderful.
In 1994, his collaborator decided to be in the robot. So they set up a situation where an audience could torture a guy in a robotic exoskeleton. Actuators would poke at him and pull at him. The part attached to the mouth made him suffer a bit. The audience always asked for the mouth interface. This taught him stuff about audience interaction. Lesson 1: it’s not a good idea to let the audience torture people.
In 1997–98, they did an opera with robots and another exoskeleton. The system was too big and difficult to control. This looks like it was amazing. He’s describing it as a “terrible experience.”
In 1995, he did a piece for Disklavier. He built a real-time feedback system: a sequencer, to a piano module, to an fx box, to an amp, to a mic, to a pitch-to-MIDI converter, back to the sequencer. He did this 4 times to make a 4-track MIDI score. This gave him a great piece that took maybe half an hour to realise. Simple things can have wonderful results. He has a metaphor of a truck vs a moped: a system that knows too much has too much inertia and is difficult to drive; something smaller, like a moped, is more versatile.
In one hour he made a “Lowtech QWERTYCaster” in 1996, which was a keyboard, a mouse and joystick attached together like a demented keytar.
In 1997, he did some real-time synthesis controllable via the internet. It was an opera with electronic parts composed by internet users. He formed a trio around this instrument, the FMOL Trio, in 2001–2002 (their recordings are downloadable). He started doing workshops with kids and is showing a video of working with 5-year-olds. They all learned the instrument and then did a concert. This is adorable. He put the different sections in different kinds of hats. The concert is full of electronic sounds and kid noises.
He learned that you have to make things that people like.
Then he got a real job in academia.
Why were so many new interfaces being invented that nobody uses?
In a traditional instrument, the performer has to do everything. In laptop music, the computer does everything and the performer only changes things. In live computer music, the control is shared between the performer and the instrument.
Visual feedback becomes very important. Laptop musicians care more about the screen than the mouse. This inspired the Reactable, which he began in 2003.
Goal: maximised bandwidth – get the most from the human, and the most easily understandable communication from the computer to the human. He decided on a modular, tabletop approach. He wanted to make instruments that were fun to learn rather than hurty.
A round table has a non-leader position. Many people can play at once. You can become highly skilled at it.
When they started conceiving it, they were not thinking about technology. They ended up developing a lot of technology, like reacTIVision, which is open source.
They posted some videos on YouTube and became very popular. They started selling tables to museums. People liked them and the tables are not breaking down.
They started a company. Three people work for the company; the presenter is still at the university. They’ve done some mobile apps.
The team quit going to NIME when the company started; they didn’t have anything new to say, and reviewers didn’t think small steps were important.
Instruments need to be able to make bad sounds as well as good ones, or else it is just a toy.

The Snyderphonics Manta, a Novel USB Touch Controller

What is the Manta?

A USB touch controller for audio and video. It uses the HID spec and does capacitive sensing. 6–8ms latency (with some jitter). Portable and somewhat tough. Bus-powered. It’s slightly like a monome….

Design features

It has a fixed layout because it’s a hardware device, and it is discrete. 48 hexagonal pads, each of which outputs how much of its surface area is covered, at slightly less than 8-bit resolution. The sliders at the top have 12-bit resolution and are single-touch.
The hexagon grid is inspired by just-intonation lattices, based on diagrams in papers by Erv Wilson and R.H. Bosanquet.
If every sensor is a note, you have 6 neighbours
It has LED feedback under the sensors (you can turn this off), inspired by monome.
The touch sensing is inspired by the Buchla 100-series controller.
It has velocity detection, based on two consecutive samples.
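A two-sample velocity scheme might look something like this: the faster the pad reading rises between consecutive scans, the harder the strike. The scaling, `max_delta`, and the MIDI-style output range are assumptions, not Snyderphonics’ actual firmware:

```python
def velocity_from_samples(prev, curr, max_delta=255):
    """Estimate strike velocity from the rise between two consecutive
    sensor readings, scaled to MIDI's 1-127 range (a guess at the scheme)."""
    delta = max(curr - prev, 0)
    return max(1, min(127, round(127 * delta / max_delta)))
```

A hard hit saturates the sensor in one scan interval (large delta, velocity 127); a slow press ramps up gradually (small delta, low velocity).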


Microtonal keyboard, live processing, etc


Something called the MantaMate will allow this to be used to control analogue synthesisers.

Latency improvement in sensor wireless transmission using IEEE 802.15.4

MO- Interlude Project Motivations

A multipurpose handheld unit with RF capabilities and a network-oriented protocol, with a custom messaging schema to reduce latency in a small package.
He’s showing a video of tiny grabbable objects with accelerometers in them. They look nice. You could use them like Reactable elements that send out data, but the ones he’s showing are far more multipurpose.
The unit can be connected to accessories; it’s a radio-controlled wireless device that can stream sensor data and pre-process its own data to cut down on bandwidth usage. They use ZigBee, which is not as fast as WiFi but is low-power.
They used off-the-shelf modules so they didn’t need to mess with radio stuff directly, but this requires some middleware, and digitising is surprisingly slow. So they decided to do an all-in-one solution using an embedded modem. This is 54 times faster! Plus it’s generic and scalable.
Given that this is IRCAM, I suspect that it’s expensive.
The accessories of the device declare themselves to the device and contain their own specs
The presenter wants to make this Open Source, but needs to get that through internal IRCAM politics and to “clean the code” which is a process that seems to sometimes drag on for people.


HIDuino
He’s describing it as driverless MIDI. It’s plug-and-play across many OSes. Large open-source community. Very usable language. Good platform for prototyping.
Arduinos are mostly limited to serial over USB (except the Teensy, according to the last guy). Students had major software issues: the hardware was easy, but the middleware was a pain in the arse and added a lot of latency. They tried a MIDI shield added on to the Arduino, which was not quite good enough.
The 2010 Arduino had a programmable USB chip, so it could speak different protocols.
There is a LUFA API for doing USB programming.
This means they could use an Arduino directly as a HID device. They also have a complete implementation of the MIDI spec.
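For flavour, here is what the USB-MIDI framing involved looks like: the USB MIDI class spec wraps each MIDI message in a 4-byte event packet whose first byte carries a cable number and a code index number (0x9 for note-on). A sketch in Python (the function name is mine; the byte layout is from the spec):

```python
def usb_midi_note_on(note, velocity, channel=0, cable=0):
    """Build a 4-byte USB-MIDI event packet for a note-on,
    per the USB MIDI 1.0 class spec (code index number 0x9)."""
    return bytes([
        (cable << 4) | 0x09,      # cable number + code index number
        0x90 | (channel & 0x0F),  # MIDI note-on status byte
        note & 0x7F,
        velocity & 0x7F,
    ])

print(usb_midi_note_on(60, 100).hex())  # middle C -> '09903c64'
```

Firmware like HIDuino assembles packets of exactly this shape and ships them over the USB chip, which is why the OS sees a standard class-compliant MIDI device with no driver.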
The arduino still needs to be flashed over serial.
HIDuino is quite good for output, especially musical robotics. It creates a standardised interface for robot control, through MIDI, which is actually pretty cool.
They are working on the USB audio class specification. This will require a chip upgrade, as the current Arduino only does 8-bit audio. They want to build a multichannel audio device.
Fortunately, this guy hates MIDI, so they’re looking at ECM (Ethernet Control Model), which would enable OSC over USB.
This looks promising, especially for projects that don’t require the oomph of a purpose-built computer.