Algorithms and Authorship

A recent Wall Street Journal article (paywalled; see below for relevant quotes) felt it necessary to quote associate professor Zeynep Tufekci on the seemingly self-evident assertion that ‘Choosing what to highlight in the trending section, whether by algorithms or humans, is an editorial process’. The quote was necessary because Zuckerberg asserts that Facebook is a technology company, one that builds tools but not content. He thus seeks to absolve himself of responsibility for the output of his algorithms.

It’s surprising he’s made this argument, and not just because it didn’t help Microsoft when they tried it after their Twitter bot turned into a Nazi.

Facebook is acting as if the authorship of algorithmic output were an open question, when it has been settled in the arts for decades. Some John Cage scores are lists of operations performers should undertake in order to generate a ‘performance score’, which is then ‘realised’. The 1958 score of Fontana Mix ‘consists of 10 sheets of paper and 12 transparencies’ and a set of instructions on how to use these materials. Any concert programme for a performance of this piece lists Cage as the composer. That is, he assumes authorship of algorithmic output. The question of authorship has had an answer for at least 58 years.

Indeed, other Silicon Valley companies, some located just down the road from Facebook, have quite clearly acknowledged this. The Google-sponsored exhibition within the Digital Revolution show at London’s Barbican in 2014 included artist attribution next to every single piece, including those making copious use of algorithms.

Art has already tackled even the issues of collective and collaborative algorithmic authorship. In 1969 Cornelius Cardew published Nature Study Notes: Improvisation Rites, a collection of text pieces by Scratch Orchestra members. Each of the short pieces, or ‘rites’, has an individually listed author. However, when programmed for performance in 2015 at Cafe Oto, the event was billed as ‘The Scratch Orchestra’s Nature Study Notes’, thus indicating both individual and corporate authorship. Some of these pieces are best described as algorithms, and indeed have been influential in tech circles. As Simon Yuill points out in his paper All Problems of Notation Will Be Solved By The Masses, the anti-copyright notice included with the score uses copyleft mechanisms to encourage modification.

Some may argue that the artist gains authorship through a curatorial process of selecting algorithmic output. Unlike Iannis Xenakis, John Cage never altered the output of his formulas. He did, however, throw away results that he deemed unsatisfactory. Similarly, Nature Study Notes was curated by the listed editor, Cardew. One can assume that performing musicians would make musical choices during performance of algorithmic scores. It’s arguable that these musical choices would also be a form of curation. However, composers have been making music that is played without human performers since the invention of the music box. To take a more recent algorithmic example, Clarence Barlow’s piece Autobusk, first released in 1986, is a fully autonomous music generation program for the Atari. The piece uses algorithms to endlessly noodle out MIDI notes. Although phrasing the description of the piece in this way would seem to bestow some sort of agency upon it, any released recordings of the piece would certainly list Barlow as the composer.
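To make the point concrete, here is a toy sketch of a program that noodles out MIDI note numbers forever. It is emphatically not Barlow’s actual Autobusk algorithm (which draws on his theories of harmonicity and metric indispensability); it only illustrates the general shape of an autonomous note generator, and every name in it is invented for illustration.

```python
import random
from itertools import islice

# A toy generator, NOT Barlow's Autobusk: it just shows the general
# shape of a program that emits MIDI note numbers indefinitely.
def note_stream(scale=(0, 2, 4, 5, 7, 9, 11), root=60, seed=None):
    rng = random.Random(seed)
    degree, octave = 0, 0
    while True:
        # A random walk over scale degrees keeps the line vaguely melodic.
        degree = (degree + rng.choice((-2, -1, 1, 2))) % len(scale)
        octave = max(-1, min(1, octave + rng.choice((-1, 0, 1))))
        yield root + 12 * octave + scale[degree]

notes = list(islice(note_stream(seed=1), 8))  # first eight MIDI note numbers
```

In a real setting each yielded number would be sent out as a MIDI note-on message; here the endless stream is simply truncated for inspection. Whatever the output, the recording would list the programmer as the composer.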

Facebook’s odd attempts to distance itself from its tools fail by any standard I can think of. It’s strange they would try this now, in light not just of Net.Art but also of Algorave – dance music created by algorithms, an art form that is having a moment and which is tied in closely with the ‘live code’ movement. Composer/performers Alex McLean, Nick Collins, and Shelly Knotts are all examples of live-coding artists, who write algorithms on stage to produce music. This is the form of artistic programming that is perhaps the closest analogue to writing code for a live web service. Performers generate algorithms and try them out – live – to see if they work. Algorithms are deployed for as long as they are useful in context and are then tweaked, changed or replaced as needed. Results may be unpredictable or even undesired, but a skilled performer can put a stop to elements that are going awry. Obviously, should someone’s kick drum go out of control in a problematic way, that’s still attributable to the performer, not the algorithm. As the saying goes, ‘a poor craftsman blames his tools.’

Algoraving is a slightly niche art form, but one that is moving towards the mainstream – the BBC covered live-coded dance music in an interview with Dan Stowell in 2009 and has programmed Algorave events since. Given Algorave’s close relationship with technology, it tends to be performed at tech events. For example, the Electromagnetic Field festival of 2016 had an Algorave tent, sponsored by Spotify. As would be expected, acts in the tent were billed by performer, not tools. The performance information for one act thus read ‘Shelly Knotts and Holger Ballweg’, omitting any reference to their programming language or code libraries.

Should someone’s algorithmically generated content somehow run afoul of the Code of Conduct (either that of the festival or of the one used by several live code communities), it is the performer who would be asked to stop or leave, not their laptop. Live coders say that algorithms are more like ideas than tools, but ideas do not have their own agency.

Zuckerberg’s assertion that ‘Facebook builds tools’ is equally true of Algoravers. And, like an Algoraver, it is Facebook who is responsible for the final output. Shrugging their shoulders over clearly settled questions of authorship is a weak defence for a company that has been promoting fascism to racists. Like a live coder, they can surely alter their algorithms when those algorithms go wrong – which they should be doing right now. To mount such a weak defence seems almost an admission that their actions are indefensible.

Like many other young Silicon Valley millionaires, Zuckerberg is certainly aware of his own cleverness and of the willingness of some members of a credulous press to cut and paste his assertions, however unconvincing. Perhaps he expects Wall Street Journal readers to be entirely unaware of the history of algorithmic art and music, but his milieu, which includes Google’s sponsorship of such art, is certainly better informed. His disingenuous assertion insults us all.

The future is a site where you can watch people write code live – like writing a text editor – because this, apparently, is what’s entertaining in the 21st century. It’s meant to be educational.

It is also apparently only men.

Breathing Code is another public coding platform. Or a conference, rather. Or something that lost money.

The FARM workshop, on functional art, music, modeling and design, was a success.

TOPLAP is, or was, a fun community. It needs more participation.

Computational literacy. In 2003 Chris Hancock wrote Real-Time Programming and the Big Ideas of Computational Literacy. It emphasises experience. Real-time code makes for real-time interaction.

Here is a slide showing a continuum between bodies and theories. Live coding is somewhere in the middle. Action thinking with live coding.

What next?

Q: are performances meant to be an academic or scientific exercise, or how should they be curated?

Maybe instead of a conceptual framework, a description of the kind of output or environment?

For an academic conference, things should have some novelty.

Things could be partly open call and partly curated. Curators need to be somewhat neutral.

Dance music is interesting and can be rigorous, or is that even a valuable thing to aspire to?

Livecode.TV is not an open platform. We could take live streaming out into the wild and interact with normal people.

Which performances should be public?

What about other art forms?

Could there be some youth outreach in the next conference?

Kids algorave or some such

Algorave school dances

Trying to avoid product-oriented output.


Collaboratively live coding SuperCollider through the cloud.

Remotely located synchronous interaction. People use Dagstuhl, Gibber, etc.

SuperCopair allows you to remotely and collaboratively edit a document. They used the pusher.com cloud service.

The Pusher cloud service pushes data. Sending a character takes 230ms on average from São Paulo to Ann Arbor.

It is easy to set up. There is no clock synchronisation.

You can run code locally or globally or remotely (only).

They’ve added permission control.

Users tend to want to collaboratively fix bugs.

This package is available in

Live coding as a part of a free improv orchestra by Antonio Goulart and Miguel Antar

He is doing only code, not processing other people’s sound.

He does no beat tracking. He tries to communicate well with other improvisers. They try to be non-idiomatic.

Performers should be able to play together based only on sound with no knowledge of other instruments. They try to play without memory.

Acoustic instruments are immediate, but live coding has lag, which provides opportunity to create future sounds.

He didn’t used to project code, so as not to grab too much attention. But the free-improv acoustic players jammed more together, so he thought showing his screen would help people play more together.

This did help, and does not distract audiences, so they kept it.

It turns out that some knowledge of each other’s instruments is needed to play well together. Should instrumentalists learn a bit of code in order to play with live coders? Or is that too much to ask? Would it be too distracting when they’re trying to play?

There is a discussion about projection in the question section.

Extramuros: a browser coding thingy, by David Ogborn et al.

Uses, which he says is duct tape for network music.

It supports distributed ensembles and has language neutrality. Client-server model.

Browser interface, piped to language from server.

It is aesthetically austere.

There have been several performances.

It is also useful for projecting an ensemble’s live code, and for screen sharing, and for doing workshops with low configuration. (He says no configuration.)

Running this means giving access to a high priority running thread on your machine…

Future work: it allows for JavaScript and OSC right now. They want to do event visualisation. There are synchronisation issues. Phasing is an issue. What about stochastic stuff? When people write unpredictable, untimed code, unpredictable, untimed things happen. Apparently this is bad.

We are being made to participate. In Tidal.

Live writing: asynchronous live coding

People collaborating can be co-located or distant, synchronous or asynchronous.

Asynchronous live coding is a thing, like sending code over email. There are various ways to communicate: open-form scores, static code, screencasts, audio recordings.

Music notation is meant to allow this. He says live coding is improvised or composed in real time. So how do you archive performances or rehearsals?

Audio can be recorded, or there are symbolic recordings, like code or notation. Code and notation are not equivalent. In traditional music, MIDI files sit between the recording and the score: they have timestamps etc.

What is the equivalent of a MIDI file for live coders?

He’s showing Gibber in a web browser. It records his keystrokes with time stamps, saves it to a server, and can be played back.
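The mechanism is simple enough to sketch. The following is a hypothetical reimplementation of the idea, not Gibber’s actual code: record (timestamp, keystroke) pairs, then replay them with the original gaps, so the writing becomes a real-time experience again.

```python
import time

class TypingRecorder:
    """Record (elapsed_seconds, char) events; replay preserves the rhythm."""
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.events = []            # list of (elapsed_seconds, char)
        self.start = self.clock()

    def key(self, char):
        """Log one keystroke with the time elapsed since recording began."""
        self.events.append((self.clock() - self.start, char))

    def replay(self, emit, sleep=time.sleep):
        """Call emit(char) for each event, waiting the recorded gaps."""
        last = 0.0
        for t, char in self.events:
            sleep(t - last)
            emit(char)
            last = t
```

The clock and sleep functions are injectable, so a playback server (or a test) can substitute its own timing source; a real system would stream the events to a server rather than hold them in memory.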

There are other systems like this, such as the Threnoscope by Magnusson.

Show us your screens. This happens outside of live coding too: game streaming, and programmers streaming things other than live coding.

Writing as music performance. Write a poem, sonify the typing, and do stuff based on character content.

Written communication is asynchronous. Recording keystrokes can make writing into a real time experience. Or reading. Or whatever.

As there is no sense of audience, the writing experience is the same as non-live writing. The reading experience is really different, though.

TextAlive Online

YouTube, SoundCloud, etc, are for distributing product, not authoring it. This is less true for programmers.

Kinetic typography is animated text.

He had a system for making music videos etc., with text, or maybe lyrics, appearing in time.

Video is a pure function of time. Exported video projects are rendered images, but you can instead write a function that, given a time, generates an image. This gives real scalability.
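A minimal sketch of that idea, with an invented example (a white square sliding across a tiny greyscale raster). Because `frame` is pure, any time and any frame rate can be sampled without re-exporting anything:

```python
# Video as a pure function of time: frame(t) -> image.
# Rendering at any resolution or frame rate is just sampling this function.
def frame(t, width=16, height=16, duration=4.0):
    """Return a height x width greyscale raster for time t (seconds)."""
    # Left edge of a 4-pixel-wide square that loops across the frame.
    x = int((t / duration) * (width - 4)) % (width - 4)
    return [
        [255 if x <= col < x + 4 and 6 <= row < 10 else 0
         for col in range(width)]
        for row in range(height)
    ]

img = frame(1.0)  # the raster one second in
```

To export a conventional video you would evaluate `frame(k / fps)` for each frame index `k`; to scrub interactively you just evaluate it at the scrub position.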

The TextAlive website does this with text animation of lyrics. The system analyses the song and the lyrics and automatically makes a video draft. The user can then edit it.

‘Everything is interactive’ via a set of sliders.

The video is generated with JavaScript. The faders are automatically generated.

You can modify the JavaScript. And make derivative videos.

This really shows the power of SVG + JavaScript.


Copy-paste tracking in spreadsheets

Spreadsheets are great example of live programming.

Live coding is performance. Live programming is instant interpretation. Live programming languages are needed for live coding.

Spreadsheet formulas are programming.

Copying in spreadsheets is a way to make up for the lack of abstraction.

Managing copy errors is hard. A Canadian company lost millions from an Excel error.

We could ask users to define data types, but then it gets to be like Access.

Their plugin tracks the origin of each formula. If you change one, all clones will update. They allow you to detach clones.
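The idea can be sketched as follows. This is a guess at the mechanism, not the plugin’s actual implementation: each pasted copy remembers its origin cell, edits to the origin propagate to still-attached clones, and a clone can be detached.

```python
class FormulaStore:
    """Toy clone-tracking: remember which cells were copied from which."""
    def __init__(self):
        self.formulas = {}   # cell -> formula text
        self.origin = {}     # clone cell -> origin cell (while attached)

    def set(self, cell, formula):
        self.formulas[cell] = formula
        # Propagate the edit to every clone still attached to this cell.
        for clone, src in self.origin.items():
            if src == cell:
                self.formulas[clone] = formula

    def copy(self, src, dest):
        """Paste src's formula into dest and record its origin."""
        self.formulas[dest] = self.formulas[src]
        self.origin[dest] = src

    def detach(self, cell):
        """Break the link so future edits to the origin no longer apply."""
        self.origin.pop(cell, None)
```

A real spreadsheet would also rewrite relative references on paste; this sketch ignores that and tracks only the propagation/detach behaviour.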

Q: is it fair to distinguish between live coding and live programming?  Is live programming only for business?

What about dynamic programming? Or should we focus instead on a spectrum of liveness.

A: one is for languages and one is for what you do with them. Liveness need not be boolean.

Q: Is the difference social?

A: these communities can learn from each other.

Is it an aspect of the language or the use of it?

Q: live feedback with the machine and artists. Artists understand giving performances and engineering. But engineers don’t understand when they’re performing.

Practice-led craft research

Practice-led research is a problem of epistemology.

Knitting is creating knowledge with our hands. Craft makes knowledge physical.

How do you get knowledge out of craft-based practice as research? The answer is rigour.

Rigour means an awareness of previous work, asking critical questions of that work, hacking and making, and being reflective.

Sonic Pi vs Overtone with Piano Phase.

He’s said a lot of interesting things about development, but the best is that when he uses Sonic Pi, he writes down ideas for tool development, then looks at them later. Playing music and composing is thus separated from development.

Keynote guy again

Rubik’s cube solving as performance. Sometimes the process of problem solving is the goal.

The classical theory of problem solving: state-space, search-based problem solving. Problems have a starting state, a set of methods and a goal state. The method operators have pre-conditions, a transformation method, etc. There is a defined state space.

He is showing the Towers of Hanoi.

The problem state graph for the towers of Hanoi can be an interface for collaborative problem solving. The tree can be computed on the fly.
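Generating the state graph on the fly is straightforward: states are expanded only as they are reached, so the full graph never has to be stored. A sketch for the Towers of Hanoi, using breadth-first search to find an optimal solution:

```python
from collections import deque

# A state is a tuple of 3 tuples, each listing disc sizes on a peg, top first.
def moves(state):
    """Yield every legal successor state, generated on demand."""
    for i, src in enumerate(state):
        if not src:
            continue
        disc = src[0]  # top disc of the source peg
        for j, dst in enumerate(state):
            # A disc may move onto an empty peg or a larger disc.
            if i != j and (not dst or disc < dst[0]):
                new = list(state)
                new[i] = src[1:]
                new[j] = (disc,) + dst
                yield tuple(new)

def solve(n):
    """Breadth-first search from the start state to the goal state."""
    start = (tuple(range(1, n + 1)), (), ())
    goal = ((), (), tuple(range(1, n + 1)))
    prev, queue = {start: None}, deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            path = []
            while s is not None:   # walk back through predecessors
                path.append(s)
                s = prev[s]
            return path[::-1]
        for nxt in moves(s):
            if nxt not in prev:
                prev[nxt] = s
                queue.append(nxt)

path = solve(3)  # optimal solution: 2**3 - 1 = 7 moves, so 8 states
```

The same `moves`/BFS shape works for any state-space problem (missionaries and cannibals, blocks world); only the state encoding and `moves` change, which is what makes the shared graph representation possible.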

The graphs give you a shared representation.

To generate the missionaries-and-cannibals graph you use a two-dimensional graph. This problem is problematic.

Blocks world problem graph has potential layout collisions.

Real-world problems. It could be applied to climate change.

CoSolve supports collaborative problem solving. It has a lot of latency. By ticking up the atmosphere. This is some neoliberal bollocks. But it may have performance applications.

Q: what problem are live coders trying to solve?
A: trying to get people to dance, maybe.

Q: should a live coder think of a problem in advance?

A: formulation is difficult.