Why define frameworks? It’s established practice in Human-Computer Interaction, because it is useful for designing. They propose heuristics or interaction dimensions.
Existing frameworks have little use in practice. Also, ‘interactive installations are often missing’ (I don’t understand this). Things are very complex and often arbitrary.
He’s showing some cool instruments/installations and asking us to consider the player’s experience.
Their framework is the Musical Interface for User Experience Tracking – MINUET.
It focuses on player experience, as a design process unfolding in time. It includes DMIs and interactive installations (by which they mean they consider installations and not just instruments).
Design goal: the purpose of the interactions. Design specification: how to fulfil the objectives.
Design goals are about a high-level user story, like a playful experience, or stimulating creativity, education, etc.
Goals can be about people, activities or contexts.
contexts: musical style, physical environment, social environment.
Case study: Hexagon
An educational interface to train musical perception
They wanted to make the interface easy to master but have a high ceiling for the educational bit.
It’s for tonal music….
speeding through the last slides
Maraije (sp) wants to know about player, participant and audience as three user types on a scale – more or less informed participants. She also wants to know about presentation: how to teach people to use the instrument or environment.
Instrument controlled only by rotation of eyeballs!
He’s only found 6 examples of eyeball music.
Eye trackers shine an IR light on your eyeball and then look for a reflection vector with a camera. Calibrate by having users look at spots on a display.
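As a rough sketch of what that calibration amounts to: fit a mapping from pupil–glint offset vectors to known on-screen target positions. The affine least-squares fit below is a minimal illustration (the function names and the linear model are my assumptions, not from the talk); real trackers use fancier models.

```python
import numpy as np

def fit_calibration(gaze_vectors, screen_points):
    """Fit an affine map from pupil-glint offset vectors to screen coordinates.

    gaze_vectors: (N, 2) pupil-minus-glint offsets seen by the camera
    screen_points: (N, 2) known on-screen calibration targets
    """
    g = np.asarray(gaze_vectors, dtype=float)
    s = np.asarray(screen_points, dtype=float)
    # Augment with a constant column so the fit includes an offset term.
    A = np.hstack([g, np.ones((len(g), 1))])
    # Least-squares solve for a 3x2 affine transform.
    M, *_ = np.linalg.lstsq(A, s, rcond=None)
    return M

def gaze_to_screen(M, vec):
    """Map one gaze vector through the fitted transform."""
    v = np.append(np.asarray(vec, dtype=float), 1.0)
    return v @ M
```

With four or more calibration targets, the fit is overdetermined and smooths out some of the measurement noise the presenter mentions.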
Eyes move to look at objects in peripheral vision, and they jump fast. The data is error-prone, and eyes manage only about 4 movements per second. Eyes can only move to focus on something, so an eyeball piece needs visual targets.
Audiences have nothing to see with eyeball instruments. Also, eyeball trackers are super expensive.
People have built eyeball instruments for computer musicians or for people with disabilities.
He is showing us documentation of 6 existing pieces.
“It’s sad to see a person [who] is severely disabled” says the presenter, and yet nobody in the documentation looks sad at all….
Q: Many of the NIME performances were designed for people with disabilities.
Musicians play instruments in unexpected ways. So they decided to build something and see what people did with it.
Appropriation is a process in which a performer develops a working relationship with the instrument. This is exploitation or subversion of the design features of a technology (e.g. turntablism).
Appropriation is related to style. Appropriation is a very different way of working with something.
How does the design of the instrument affect style and appropriation?
They gave 10 musicians a highly constrained instrument to see if they got style diversity. They made a box with a speaker in it, plus a position and pressure sensor. They mapped timbre to pressure and pitch to movement. A control group got only pitch.
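The mapping they describe (position → pitch, pressure → timbre) could look something like this minimal sketch; the exponential pitch scale and the cutoff-as-timbre stand-in are my assumptions, not details from the paper.

```python
def map_controls(position, pressure,
                 pitch_range=(220.0, 880.0),
                 cutoff_range=(200.0, 8000.0)):
    """Map normalized sensor readings (0..1) to synthesis parameters.

    position -> pitch (Hz), scaled exponentially so equal physical
    movement gives equal musical intervals; pressure -> filter cutoff
    as a crude stand-in for timbre.
    """
    lo, hi = pitch_range
    pitch = lo * (hi / lo) ** position          # exponential pitch scale
    c_lo, c_hi = cutoff_range
    cutoff = c_lo + (c_hi - c_lo) * pressure    # linear timbre control
    return pitch, cutoff
```

The control-group version would simply ignore the pressure input and return only the pitch.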
People did some unexpected things, like tapping the box, putting the hand over the speaker and licking the sensor (ewww)
The users developed unconventional techniques because of and in spite of constraints
One degree of freedom had more hidden affordances. The two-degrees-of-freedom group had only 2 additional variations and no additional affordances.
Users of the 1-degree-of-freedom group described it as a richer and more complex device which they had not fully explored. Users of the more complex instrument felt they had explored all the options and were upset about the pitch range.
The presenter believes there is a cognitive bandwidth for appropriation: more built-in options limit exploration of hidden affordances.
This was a very information-rich presentation of a really interesting study.
Q: Is pitch so dominant that it skews everything? What if you did an instrument that did just timbre?
A: Nobody complained about loudness.
Q: If participants were all musicians, did their primary instrument affect their performance?
A: Some participants were NIMErs, others were acoustic players. They’re studying whether backgrounds affected performance style.
Someone is playing a sort of a squeeze box that is electronic, so you can spin the end of it. Plus it has accelerometers. It’s kind of demo-y, but there’s potential there.
This comes from a movement-based design approach. They came up with the movement first. Design for actions.
Movement-based design need not use the whole body, and it need not be Kinect-y and non-touch. The material has a physicality.
Now we see a slide with tiny tiny words. It’s about LMA – Laban Movement Analysis – for doing a taxonomy of movements. They want to enable expressive movement, not just for dancers.
Observe movement, explore movement, devise movement, design to support the devised movement. (Is this how traditional instrument makers work??)
They analysed a violinist and a no-input mixer player and made a shape-change graph. There is a movement called ‘carving’ which he liked. The squeeze box uses that movement and is called ‘the Twister’.
They gave the prototype to user-study participants and told them to try moving it (with no sound). Everybody did the carving movement. People asked for buttons and accelerometers (really??)
the demo video was an artistic commission, specifically meant to be demo-y (obvs)
Laban theory includes ‘effort theory’, about the body. Did the instruments offer resistance?
They looked at interface stiffness, but decided not to focus on it. Effort describes movement dynamics, but is personal to performers.
The Prosthetic Instruments are a family of wearable instruments designed for use by dancers in a professional context. The instruments go on tour without their creators.
The piece was called ‘Les Gestes’ and toured in Canada. The instrument designers and composers were from McGill. The choreography and dancing came from a professional dance company in Montreal, Van Grimde Corps Secret.
There were a fuckton of people involved in the production. Lighting designers, costume designers, etc. all had a stake in instrument design.
One of these instruments was in a concert here and was great. It looks like part of a costume.
The three instruments are called the spine, ribs and visors. They are hypothetical extensions to the body – extra limbs. Dancers wear them in performance, and they are removable in this context.
The ribs and visors are extremely similar; they are touch-sensitive. The spine has vertebrae connected by PVC tubing and a PET-G rod.
Professional artistic considerations: durability, usability, required backups, limited funding and timeframes, small-scale manufacturing. How are these stored and transported? What about batteries? Is there anything that needs special consideration or explanation (how to reboot)?
Collaboration requires iterative design and tweaking.
Bill Buxton talks of the ‘artist spec’, the most demanding standard of design. People have spent years developing a technique, and your tool needs to fit into that technique.
Q: Why mix acrylic and PVC?
A: There is a lot of stress on the instruments, so they use tough materials.
Q: Can you talk about the dancers’ experiences?
A: The dancers did not seek technical knowledge, but they wanted to know how to experience and interact with the instruments. They had preferences for certain ones.
There is a beautiful graphic, and another beautiful graphic. All presentations should be made up of cartoons like this.
Cloud computing keeps you from having to drag your entire Wolfram cluster to a gig. However, the cloud does not have a speaker at your venue – unless you use internet audio streaming.
The graphic is slightly less beautiful
Anyway, there are latency issues, compatibility issues, packet-size issues… You can get fragmentation, and TCP is acked and all that.
OMG, the speed of light is TOO SLOW
Timing can get jittery. Big buffers smooth over jitter, but add delay. HTML5 has big buffers. The compression window size adds delay as well. He says to send raw, uncompressed audio.
Use HTML 5 to play audio in the browser and then you get portability.
He suggests sending 256 or 357 bytes per packet
70 ms delay in sending an HTTP request; 160 ms to Alberta. eduroam is like 300 ms.
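To see why tiny raw packets are attractive, here’s the arithmetic: 256 bytes of 16-bit mono PCM at 44.1 kHz is 128 samples, i.e. under 3 ms of audio per packet, so packetisation delay is small next to those network round trips. (A back-of-envelope helper of my own, not code from the talk.)

```python
def packet_duration_ms(packet_bytes, sample_rate=44100,
                       bytes_per_sample=2, channels=1):
    """Audio duration carried by one raw PCM packet, in milliseconds."""
    samples = packet_bytes / (bytes_per_sample * channels)
    return 1000.0 * samples / sample_rate

# 256-byte packets of 16-bit mono at 44.1 kHz each carry ~2.9 ms of audio;
# stereo halves that. Compare with the 70-300 ms network delays above.
```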
The granular synth is a Csound synth running on 14 virtual machines. (oh my god)
Oh my god
Star networks are fast, but don’t scale. Tree networks scale, but add latency per hub.
the demo is, wow, less than compelling
This is a video of a talk by a guy stuck in visa hell. Alas, I’ve been there.
This is for mobile phone ensembles. Normally devices do not know each other’s physical positions; this system measures them, so performers need not stand in a pre-arranged formation.
They play sounds at each other to measure location. This is a piece in and of itself.
They measure pairwise distances and then compute three-dimensional positions from the distances. 1 sample of error is 1.5 cm of error. Also, clock issues.
One device hears a message, and the other plays a sound back. They have an agreed-upon delay, so as to duck the clock issue. They have an onset detector that uses fast and slow envelopes, like yesterday’s presentations. The measurements were repeatable and accurate.
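A fast/slow envelope onset detector of the kind described might be sketched like this; the smoothing coefficients and threshold ratio are illustrative guesses, not the authors’ values.

```python
def detect_onsets(signal, fast_a=0.5, slow_a=0.05, ratio=2.0):
    """Flag onsets where a fast envelope follower overtakes a slow one.

    Both envelopes track |signal| with exponential smoothing; an onset
    fires when the fast envelope exceeds `ratio` times the slow one.
    Coefficients and threshold are illustrative, not from the paper.
    """
    fast = slow = 0.0
    onsets = []
    armed = True
    for i, x in enumerate(signal):
        a = abs(x)
        fast += fast_a * (a - fast)
        slow += slow_a * (a - slow)
        if armed and fast > ratio * slow and fast > 0.01:
            onsets.append(i)
            armed = False          # wait for the envelopes to converge
        elif fast < 1.1 * slow:
            armed = True           # re-arm once the attack has passed
    return onsets
```

The re-arming logic prevents one sustained sound from registering as many onsets, which matters when each ping is a single measurement.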
They use matrices of distance vectors to estimate positions.
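One standard way to turn a matrix of pairwise distances into relative positions is classical multidimensional scaling (MDS); I’m assuming their method is in this family. The result is only defined up to rotation, reflection and translation, which is exactly the geometric ambiguity mentioned below.

```python
import numpy as np

def positions_from_distances(D, dim=2):
    """Recover relative coordinates from a pairwise distance matrix
    via classical multidimensional scaling (MDS).
    """
    D = np.asarray(D, dtype=float)
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # top `dim` eigenpairs
    # Clamp tiny negative eigenvalues caused by measurement noise.
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Because only the shape is recoverable, a sensible test is whether the reconstructed points reproduce the input distances, not whether they match any particular coordinates.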
The ping-pong notes follow a MIDI score. There are some ways to recover from failure.
There is geometric ambiguity. Background noise creates a problem, as do reflections. They are wondering how to solve this without resorting to harsh noise, but I say embrace it. Why not hardcore mobile phone ensembles?
Q: Will the system work with sounds outside of the human range of hearing?
A: Probably not with iPhones or iPods, but it could work.
Q: Why use the audio clock instead of the CPU clock?
A: The audio clock is much more reliable because it runs in real time. The CPU clock has a lot of jitter.
Use case: Lautlots
People walked around wearing headphones with a mobile phone stuck on top like extra silly cybermen.
They had six rooms, including two with position tracking. They used camera tracking in one room, and Bluetooth plus a step counter in the other. They had LEDs on the headset for the camera tracking.
He is showing a video of the walk.
They used a server/client architecture, so the server knows where everyone is. This is to prevent the guided walk from directing people to sit on each other.
Clients ask for the messages they want to receive.
He is showing his PD code, which makes me happy I never have to code in PD
This is also on GitHub.
Q: What did users think of this?
A: Users were very happy and came out smiling.
Collaborative live coding is more than one performer live coding at the same time, networked or not, he says.
Network music can be synchronous or asynchronous, collocated or remote.
There are many networked live coding environments.
You can add instrumental performers to live-coded stuff, for example by live-generating notation, or by having somebody play an electronic instrument that is being modified on the fly in software.
How can a live coding environment facilitate mixed collaboration? How and what should people share? Code text? State? Clocks? Variables? How to communicate? How do you share control? SO MANY QUESTIONS!!
They have a client/server model where only one machine makes sound, so no synchronisation is required and there is only one master state. However, there are risks of collision, conflict and version-control headaches.
the editor runs in a web browser because every fucking thing is in a browser now.
The editor shows a variables window, a chat window and a big text section. It shows the live value of all variables in the program state, and can also show the network/live value.
Now showing the collision risk in this: if two coders use the same variable name, this creates a conflict. Alice is corrupting Bob’s code – but maybe Bob is actually corrupting her code. Anyway, every coder has their own namespace and can’t access each other’s variables, which seems reductive. Maybe Bob should just be less of a twat. The live variable view shows both Alice’s and Bob’s variables under separate tabs.
His demo slide says at the top: ‘skip this demo if late’.
How do people collaborate if they want to mess around with each other’s variables? They can put some variables in a shared namespace: click your variables, hit the shared button, and woo, shared.
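A toy model of per-performer namespaces with opt-in sharing might look like this; the class and method names are mine, not the actual UrMus API.

```python
class LiveCodeState:
    """Toy model of per-performer variable namespaces with an opt-in
    shared namespace. Illustrative only, not the real UrMus interface.
    """
    def __init__(self):
        self.spaces = {}           # performer name -> private variables
        self.shared = {}           # variables promoted to everyone

    def set(self, who, name, value):
        self.spaces.setdefault(who, {})[name] = value

    def get(self, who, name):
        # A performer sees their own variables first, then shared ones.
        mine = self.spaces.get(who, {})
        if name in mine:
            return mine[name]
        return self.shared[name]

    def share(self, who, name):
        # Promote a private variable into the shared namespace.
        self.shared[name] = self.spaces[who].pop(name)

state = LiveCodeState()
state.set("alice", "tempo", 120)
state.set("bob", "tempo", 90)      # same name, but no collision
state.share("alice", "tempo")      # alice opts in to sharing
```

The lookup order (private first, shared second) is one way to let the Alice/Bob collision above coexist with deliberate sharing.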
How do you share control?
Chat messages show on the mobile instrument screen for the iPad performer. The programmer can submit a function to the performer in such a way that the performer has agency in deciding when to run it.
The tool for all of this is called UrMus.
Q: Would live coders actually be monitoring each other’s variables in a performance?
A: Of course – this is used in general coding… and hand-waving.
This is a distributed performance system for the web. It started out focused on the server, but changed to help with user interface development tools. Anything that uses a browser can use it, but they’re into mobile devices.
They started with things like knobs and sliders, and now offer widgets of various sorts. This is slightly gimmicky, but OK.
NexusUI.js allows you to access the interface. The example is very short and has some toys on it.
They’re being very hand-wavy about how and where audio happens. (They say this runs on a refrigerator with a browser, but the tilt sensor might not be supported in that case.)
They are showing a slide of how to get OSC data from the UI object.
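For reference, an OSC message on the wire follows the OSC 1.0 encoding: a null-padded address string, a type-tag string, then big-endian arguments. The `/slider` address below is a made-up example, not necessarily what NexusUI emits.

```python
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC 1.0 message with float32 arguments.

    OSC strings are null-terminated and padded to 4-byte boundaries;
    floats are big-endian IEEE 754 singles.
    """
    def pad(b):
        # Pad to a multiple of 4 bytes, always adding at least one null.
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)             # big-endian float32
    return msg
```

Sending such bytes over UDP to whatever synth engine is listening is roughly what any browser-to-OSC bridge has to do.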
This is a great competitor to TouchOSC, as far as I can tell from this paper.
However, Nexus is a platform. There is a template for building new interfaces. It’s got nifty core features.
They are showing a demo of a video game for iPhone that uses libpd.
Now they are testifying to its ease of use. They have made a bunch of Max tutorials for each Nexus object, and tutorials on how to set up a local server. Their NexusDrop UI builder makes it very competitive with TouchOSC, but more generally useful. It comes with an included server or something.
NexusUP is a Max thingee that will automagically build a NexusUI based on your pre-existing Max patch. (whoah)
Free and open source software
They are building a bunch of tools for their mobile phone orchestra.
Laser-cut a thingee in the shape of your UI, put it on your iPad, and you get a tactile sense of the interface.
Can they show this at the Friday hackathon?