Live blogging NIME keynote: Adventures in Phy-gital Space (David Rokeby)

He started from a contrarian response to the basic characteristics of the computer. Instead of being logical, he wanted to be intuitive. He wanted things to be bodily engaged. The experience should be intimate.
Put the computer out into physical space. Enter into a space with the computer.
He did a piece called Reflexions in 1983. He made an 8×8 pixel digital camera that ran at 30 fps, which was quite good for the time.
He made a piece that required very jerky movement. The computer could understand those movements and he made algorithms based on them. This was not accessible to other people. He had internalised the computer's requirements.
In 1987 he made a piece called Very Nervous System. He found it easier to work with amateur dancers because they don’t have a pre-defined agenda of how they want to move. This lets them find a middle space between them and the computer.
The dancer dances to the sound, and the system responds to the sound. This creates a feedback loop. The real delay is not just the framerate, but the speed of the mind. It reinforces particular aspects of the person within the system.
The system seemed to anticipate his movements, because consciousness lags movement by 100 milliseconds. We mentally float behind our actions.
This made him feel like time was sculptable.
He did a piece called Measure in 1991. The only sound source was a ticking clock, but it was transformed based on user movement near the clock and the shape of the gallery space. He felt he was literally working with space and time.
He began to think of analysing the camera data as if it were a sound signal. Time-domain analyses could extract interesting information. Responsive sound behaviours could respond to different parts of the movement spectrum: fast movements were high frequency and mapped to one instrument, mid-speed movements to the midrange, and slow movements to the low frequencies.
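A minimal sketch of that idea, as I understand it (my own illustration in Python, not Rokeby's code): treat the per-frame motion energy from the 8×8 camera as a signal sampled at the frame rate, split it into slow, mid and fast bands, and route each band to a different instrument. The band edges here are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FPS = 30  # camera frame rate; the motion signal is sampled once per frame

def motion_energy(frames):
    """Per-frame motion energy: summed absolute pixel difference of an 8x8 camera."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return diffs.reshape(len(diffs), -1).sum(axis=1)

def band(signal, low, high, fs=FPS):
    """Band-pass the motion signal between low and high Hz."""
    sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

def split_movement_spectrum(frames):
    """Hypothetical band split: slow, mid and fast movement drive different instruments."""
    energy = motion_energy(frames)
    return {
        "slow_instrument": band(energy, 0.1, 1.0),   # slow, sweeping gestures
        "mid_instrument":  band(energy, 1.0, 4.0),   # mid-speed movement
        "fast_instrument": band(energy, 4.0, 14.0),  # fast, jerky movement
    }
```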
With just very blocky 8×8 pixels, he got more responsiveness than a Kinect seems to have now. Of course, this is on the computer's terms rather than tracking each finger, etc.
There is no haptic response. This means that if you throw yourself at a virtual drum, your body has to provide counterforce, using isometric muscular tension. The virtual casts a real shadow into the interacting body.
Proprioception: How the body imagines its place within a virtual context.
This sense changes in response to an interface. Virtual spaces create an artificial state of being.
He did a piece called Dark Matter in 2010 using several cameras to track a large space, defining several “interactive zones.” He used his iPhone to map a virtual sculpture that people could sonically activate by touching it. He ran it in pitch dark using IR cameras to track people.
After spending time in the installation, he began to feel a physical imbalance. It felt like he was moving heavy things, but he wasn’t. To an outside observer it can look like a neurological disorder. The performer performs to the interface, navigating an impenetrable internal space. The audience can see it as an esoteric ritual.
This was a lot like building mirrors. He got kind of tired of doing this.
To what degree should an interface be legible? If people understand that something is interactive, they spend a bunch of time trying to figure out how it works. If they can’t see the interaction, they can more directly engage the work.
The audience has expectations around traditional instruments. New interfaces create problems by removing the context in which performers and audiences communicate.
Does the audience need to know a work is interactive?
Interactivity can bring a performer to a new place, even if the audience doesn’t see the interaction. He gave an example of a theatre company using this for a soundtrack.
He did a piece from 1993-2000 called Silicon Remembers Carbon. Cameras looking at IR shadows of people. Video is projected onto sand. Sometimes pre-recorded shadows accompanied people. And people walking across the space shadowed the video.
If you project a convincing fake shadow, people will think it’s their shadow, and will follow it if it moves subtly.
He did a piece called Taken in 2002. It shows video of all the people that have been there before. And a camera locks on to a person’s head and shows a projection of just the head, right in the middle.
Designing interfaces now will profoundly affect future quality of life, he concludes.

Questions

External feedback loops can make the invisible visible. It helps us see ourselves.

Grid based laptop orchestras

lorks use an orchestral metaphor. They sometimes use real instruments as well. This is a growing art form.
configuration of software for each laptop is a pain in the arse. Custom code, middleware (ChucK, etc), HIDs, system config, etc. This can be a “nightmare.” Painful for audiences to watch. Complex setups and larger ensembles have more problems.
GRENDL: grid enabled deployment for laptop orchestras
these kinds of problems are why grid computing was invented. Rules govern sharing across multiple computers. The groups of shared computers are called organisations. What if a lork was an organisation?
they didn’t want to make musicians learn new stuff. They wanted grendl to be a librarian, not another source of complexity. It would deliver scores and configurations.
it deploys files. It does not get used while playing. Before performance, the scores are put on a master computer which distributes them to the ensemble laptops.
grendl executes scripts on the laptops before each piece. Once the piece finishes, the laptop returns to its pre-performance state. The composer writes the scripts for each piece.
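A minimal sketch of that deploy/run/restore cycle for one laptop (my own illustration in Python; GRENDL itself wraps the SAGA API, and the real setup/teardown scripts are written by the composer, so the file and directory names here are assumptions):

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical layout: one directory per piece holding score files,
# configuration, and composer-written setup/teardown scripts.
PIECE_DIR = Path("pieces/example_piece")
DEPLOY_DIR = Path("/tmp/lork_deploy")

def deploy(piece_dir: Path, deploy_dir: Path) -> None:
    """Copy the piece's scores and configuration onto this laptop."""
    if deploy_dir.exists():
        shutil.rmtree(deploy_dir)
    shutil.copytree(piece_dir, deploy_dir)

def run_setup(deploy_dir: Path) -> None:
    """Run the composer-written setup script before the piece starts."""
    subprocess.run(["sh", str(deploy_dir / "setup.sh")], check=True)

def restore(deploy_dir: Path) -> None:
    """Return the laptop to its pre-performance state after the piece."""
    subprocess.run(["sh", str(deploy_dir / "teardown.sh")], check=True)
    shutil.rmtree(deploy_dir)
```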
grendl is a wrapper for the SAGA API.
they’re trying to make the compositions more portable with tangible control. They have a human- and computer-readable card with QR codes. This will be simpler to deploy.
they’ve been using this for a year. It has surpassed expectations. On their todo list is a server application rather than specifying everything at the command line with a script. They’re going to simplify this by using OSC commands to go from composition to composition.
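A minimal sketch of what that OSC control could look like (my own illustration using the python-osc library; the address, port and argument are assumptions, not GRENDL's actual interface):

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical: a GRENDL agent listens for OSC messages on each ensemble laptop.
client = SimpleUDPClient("192.168.1.10", 9000)

# Tell a laptop to tear down the current piece and load the next one.
client.send_message("/grendl/load_piece", "example_piece_2")
```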
this makes them rethink how to score for a lork, including archiving and metadata.
grid systems do not account for latency and timing issues, so their role in performance is so far limited. They have run a piece from grendl.
how do you recover when things go tits up? How do you debug? Answer: it’s the composer’s problem. Things going wrong means segfaults.
the server version gives better feedback. Each computer will now report back which step borked.
philosophical: Who owns the instrument? The composer? The player? Their goal is to let composers write at the same sort of level as they would for real orchestras.

Live-blogging NIME: MobileMuse: integral music control goes mobile

music sensors and emotion
integral musical control involves state and physical interaction. The performer interacts with other performers, with the audience and with the instrument. Sounds come from emotional states. Performers normally interact by looking and hearing, but these guys have added emotional state.
audiences also communicate in the same way. This guy wants to measure the audience.
temperature, heart rate, respiration, EEG and other things you can’t really attach to an audience.
send measurements to a pattern recognition system. The performer wears sensors.
he’s showing a graph of emotional states where a performer’s state and an audience member’s state track almost exactly.
they actually do attach things to the audience. This turns out to be a pain in the arse. They now have a small sensor thing called a “fuzzball” which attaches to a mobile phone.
despite me blogging this from my phone, I find it hugely problematic that this level of technology and economic privilege would be required to even go to a concert….
they monitor lie-detector sorts of things. The mobile phone demodulates the signals. The phone can plot them. There is a huge mess of licensing issues to connect hardware to the phone, so they encode it into the audio input.
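The talk didn't spell out the encoding scheme, but here is a minimal sketch of one way a slow sensor signal could be smuggled through a phone's audio input (the carrier range, block size and FFT demodulation are my assumptions, not MobileMuse's actual design):

```python
import numpy as np

FS = 44100          # audio sample rate of the phone input
BLOCK = 2048        # samples per demodulation block
F_LOW, F_HIGH = 1000.0, 4000.0  # hypothetical carrier range for the sensor value

def encode(values, fs=FS, block=BLOCK):
    """Map each sensor reading (0..1) to a tone burst within the carrier range."""
    out = []
    for v in values:
        f = F_LOW + v * (F_HIGH - F_LOW)
        t = np.arange(block) / fs
        out.append(np.sin(2 * np.pi * f * t))
    return np.concatenate(out)

def decode(audio, fs=FS, block=BLOCK):
    """Recover sensor readings by finding the dominant frequency per block."""
    values = []
    for i in range(0, len(audio) - block + 1, block):
        spectrum = np.abs(np.fft.rfft(audio[i:i + block]))
        f = np.fft.rfftfreq(block, 1 / fs)[np.argmax(spectrum)]
        values.append((f - F_LOW) / (F_HIGH - F_LOW))
    return np.array(values)
```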
they did a project where a movie’s scenes and their order were set by the audience’s state.
the application is open source.

How musicians create augmented musical instruments

augmented instruments are easy for performers of the pre-existing instruments. Musicians themselves have expertise, so let them do the design. Come up with a system to let them easily do augmentations.
the Augmentalist was designed collaboratively.
gestures go to the instrument, the sensors, or both. Sound goes into a DAW. Processing happens there.
photo of a slider bar taped to a guitar: quick and easy!
instrument design sessions with 10 pop musicians. Experimenters presented the system and updates, then a testing session, then instrumentalists played around, then instrumentalists made suggestions for changes.
guitarists put tilt measurements on the head. Slider on guitar body. Sensors mapped to typical pedal fx.
an MC stuck things to a mic and to himself. Slider on the mic. FSR on the mic body. Accelerometer on his hand. Movements went to a pitch shifter.
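A minimal sketch of that style of sensor-to-effect mapping (my own illustration; the parameter names and ranges are assumptions, not taken from the paper):

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clamp a sensor reading and rescale it into an effect parameter range."""
    value = min(max(value, in_lo), in_hi)
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

# Hypothetical guitar mapping: headstock tilt sweeps a wah, the body slider sets delay mix.
def map_guitar(tilt_deg, slider):
    return {
        "wah_freq_hz": scale(tilt_deg, -45.0, 45.0, 400.0, 2000.0),
        "delay_mix": scale(slider, 0.0, 1.0, 0.0, 0.8),
    }

# Hypothetical MC mapping: hand acceleration drives the pitch shifter.
def map_mc(accel_g):
    return {"pitch_shift_semitones": scale(accel_g, 0.0, 2.0, -12.0, 12.0)}
```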
interesting results: most performers tried to use similar movement, like moving head and body. Only one person kept this. All instruments used tilt.
hundreds of gesture/sound mappings were tried. Most considered successful, but not all were kept. Guitarists tend to develop the same augmentations as each other. But some unusual things were developed also.
musicians start with the technology rather than the gesture. Technology is seen as the limitation, so they start with its limitations.
all musicians believed they could come to master systems.
over time, the musicians make the fx more subtle and musical.
can people swap instruments? Yes, they felt each other’s instruments were easy to use.
one guitarist uses the system with his band and they’re gigging w it.
takes up a lot of brain cycles to use extensions. It takes a lot of practice.
every musician had maximum enjoyment at every session.
this kind of user-led thing can create new avenues for research.

Live blogging NIME – IRCAM assigning gesture to sound

they play a sound and then ask people to represent the sound as a gesture, and then use that gesture to control a new sound. The sound-to-gesture part is an experimental study, which was a very good idea!
in the existing literature: tapping a beat is a well-known gesture. There is body motion to music, and, more recently, mimicking instrumental performances (e.g. air guitar). Sound tracing is sketching a sound?
Gaver says musical listening has a focus on acoustic properties and everyday listening focuses on cause.
categorisation of sounds involves the sound sources. People would categorise door sounds together, even if they are very different sonically.
will subjects try to mimic the origins of causal sounds, e.g. mime slamming a door?
will they trace non causal sounds?
they played kitchen sounds and kitchen sounds convolved with white noise.
track the subject’s hand position. Each subject gets some time to work out and practice her gesture, then records it three times. Ask the subject to watch the video and narrate it.
the transformed sounds are described more metaphorically. For non-transformed sounds, people describe the object and the action rather than the sound.

Live blogging NIME papers – Gamelan Elektrika

a MIDI gamelan at MIT
the slide shows the URL Supercollider.ch
they wanted flexible tuning for the gamelan. Evan Ziporyn worked on this project. They got Alex Rigopulos; something about touring with Kronos. This is going on too long about name dropping and not enough about how it works. I’m not even yet sure what the heck this is, but the Media Lab sure is cool.
this must be a really long time slot because she has not yet talked about a technical issue.
low latency is important. I think she accidentally let slip that this uses Ableton, thus revealing a technical issue.
mit people are often very pleased with themselves.
urethane bars with piezo sensors in version 1 didn’t work well, so they switched to FSRs and a different material. Didn’t catch how it senses damping.
the reong has 5 sensors per pot and FSRs for damping. Hit the middle, touch something to damp.
gongs have capacitive disks on one side, piezo on the other, to sense strikes and damping.
SuperCollider and Ableton on the backend handle tuning and sample playing.
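A minimal sketch of what the flexible-tuning backend has to do (my own illustration; the ratios below are placeholders, not this ensemble's actual tuning): map each incoming MIDI note onto a per-scale frequency table rather than 12-tone equal temperament.

```python
# Hypothetical 5-note slendro-like scale, given as ratios above a base frequency.
# Real gamelan tunings are specific to each ensemble; these numbers are placeholders.
BASE_HZ = 220.0
SCALE_RATIOS = [1.0, 1.14, 1.31, 1.52, 1.74]

def midi_to_gamelan_hz(note, base_note=60, base_hz=BASE_HZ, ratios=SCALE_RATIOS):
    """Map a MIDI note number to a frequency in a non-equal-tempered scale."""
    steps = note - base_note
    octave, degree = divmod(steps, len(ratios))
    return base_hz * (2 ** octave) * ratios[degree]

# Example: the five notes above middle C map onto one cycle of the scale.
for n in range(60, 66):
    print(n, round(midi_to_gamelan_hz(n), 2))
```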
Samsung laughed at them when approached for sponsorship, but they sure showed them! NIME is thus invited to share in gloating over the amazing skillz of MIT.
instrument demo fail.
question about why not use a normal percussion controller? Answer: to play like a regular gamelan.
the monitoring situation is a problem for them. They use 4 speakers to monitor, and the house speakers for playback to the audience.

Live blogging NIME papers – electromagnetically sustained Rhodes piano

Make a Rhodes piano that doesn’t decay. Notes can start from a strike or from the excitation alone. The examples sound like an EBow made for the Rhodes.
The Rhodes has inharmonic overtones, especially on the strike. The pickup gets mostly integer multiples of the fundamental, especially even partials.
the actuator is an electromagnetic coil driven by a sine-tone generator at the fundamental. The pickup also picks up the actuator directly, so they cancel it by adding a phase-inverted copy of the drive signal after the pickup. They can also use feedback to drive the tine, but this causes out-of-control feedback, so they sense just the actuator’s contribution to the output and subtract it out, leaving just the tine, which gets increasingly Rube Goldberg.
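Their implementation is analog, but a minimal digital sketch of the cancellation idea might look like this (frequencies, phases and the leakage gain are placeholders of mine, not values from the paper):

```python
import numpy as np

FS = 48000
f0 = 220.0                      # hypothetical tine fundamental
t = np.arange(FS) / FS

drive = np.sin(2 * np.pi * f0 * t)              # sine drive sent to the actuator coil
tine = 0.3 * np.sin(2 * np.pi * f0 * t + 0.8)   # stand-in for the tine's own vibration
leak_gain = 0.5                                  # how strongly the pickup hears the actuator

# What the pickup actually sees: tine vibration plus actuator leakage.
pickup = tine + leak_gain * drive

# Cancellation: subtract the known drive signal scaled by the measured leakage gain,
# leaving (ideally) only the tine signal to monitor or feed back.
cleaned = pickup - leak_gain * drive

print(np.max(np.abs(cleaned - tine)))  # ~0 if the leakage estimate is right
```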
there are pros and cons of each approach. They measure aftertouch for control using a pressure sensor below each key. Appropriately, all the signal processing is analog.