Domifare Classes

Key still had some problems with transposition that related to frequency quantisation, so those are (hopefully?) now sorted. I got rid of the gravity argument for freqToDegree because it doesn’t make sense, imo, and calculating it is a bit of a faff.
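For reference, the kind of quantisation freqToDegree has to do looks roughly like this. A minimal sketch, assuming a sorted scale given as semitones above a root; freqToNearestDegree is my own placeholder name, not the project’s actual method:

(
// hedged sketch, not the project's code: map a detected frequency to
// the nearest degree of a scale (semitones above the root, sorted)
var freqToNearestDegree = { |freq, rootNote = 60, scale|
    var semis = freq.cpsmidi - rootNote; // semitones above the root
    var octave = (semis / 12).floor;
    var pc = semis - (octave * 12);      // pitch class within the octave
    [scale.indexIn(pc), octave]          // indexIn finds the nearest entry
};
// e.g. ~330 Hz against C major rooted at middle C -> [2, 0], i.e. mi
freqToNearestDegree.value(330, 60, [0, 2, 4, 5, 7, 9, 11]).postln;
)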

For Domifare, as with a spoken language, breaks between commands are articulated as pauses, so I’ve added a DetectSilence UGen. The threshold will need to be connected to a fader to actually be useful, as the margin of background noise will vary massively based on environment.
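Something like this could work; a minimal sketch rather than the actual Domifare code (the SynthDef name, OSC path and fader mapping are all placeholders):

(
// sketch: send an OSC message as each pause begins, with the silence
// threshold on a fader so it can be tuned to the room's noise floor
var synth, win;
synth = SynthDef(\pauseDetect, { |thresh = 0.01|
    var silent = DetectSilence.ar(SoundIn.ar(0), amp: thresh, time: 0.5);
    SendReply.ar(HPZ1.ar(silent) > 0, '/domifarePause'); // rising edge only
}).play;

OSCdef(\domifarePause, { "end of command".postln }, '/domifarePause');

win = Window("silence threshold", Rect(100, 100, 300, 40)).front;
Slider(win, Rect(10, 10, 280, 20)).action_({ |sl|
    // an exponential mapping across plausible noise floors
    synth.set(\thresh, sl.value.linexp(0, 1, 0.0001, 0.1));
});
)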

The next step is parsing. It’s been a loooong time since I’ve worried about how to do this… IxiLang uses a switch statement with string matching.

I need to draw out how this is going to work, since the repeat and the chance commands both take commands as arguments.

This might work as a statement data array:

[key, min_args, max_args, [types], function]

Types can be: \var, \number, \operator, \data. If it’s \operator, then the operator received will be the key for another statement, and the parser will listen for that too…. If it’s \data, that means start the function asap….

Also, since variables are actually loop holders, I’m going to need to make a class for them.
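Something along these lines, perhaps; a rough sketch of what such a class file might look like (hypothetical names and note format, not the actual implementation):

// sketch: a loop-holder class; notes are assumed to be [degree, dur] pairs
DomifareLoop {
    var <name, <notes, <task;

    *new { |name, notes|
        ^super.newCopyArgs(name, notes ? [])
    }

    play {
        task = Tdef(name, {
            loop {
                notes.do { |note|
                    (degree: note[0], dur: note[1]).play;
                    note[1].wait;
                }
            }
        }).play;
    }

    stop { task.stop }               // dolamido
    resume { task.play }             // domilado
    shake { notes = notes.scramble } // dosolresi
}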

My original plan was to use pitch recognition to enter MIDI notes, but that’s not going to work, so some commands are now defunct.


(
var lang, vars, numbers;
vars = (solfasire:nil, solfasisol:nil, soldosifa:nil);
numbers = (redodo: 1, remimi:2, refafa: 3, resolsol: 4, relala: 5, resisi: 6, mimido: 7, mimire:8);
lang = (
larelasi: [\larelasi, 2, 2, [\var, \data], nil], // func adds the name to the var array, runs the recorder
dolamido: [\dolamido, 0, 1, [\var], nil], // func stops names loop or all loops
domilado: [\domilado, 0, 1, [\var], nil], // func resumes named loop or all loops
mifasol: [\mifasol, 0, 1, [\var], nil], // func raises an octave, which is probably impossible
solfami: [\solfami, 0, 1, [\var], nil], // func lowers an octave- also impossible
lamidore: [\lamidore, 2, 2, [\var, \data], nil], // add notes to existing loop
dosolresi: [\dosolresi, 1, 1, [\var], nil], // shake the loop, which is possible with recordings also...
misisifa: [\misisifa, 0, 1, [\var], nil], // next rhythm
fasisimi: [\fasisimi, 0, 1, [\var], nil], //previous rhythm
misoldola: [\misoldola, 0, 1, [\var], nil], //random rhythm
refamido: [\refamido, 0, 0, [], nil], // die
sifala: [\sifala, 2, 2, [\number, \operator], nil], // repeat N times (1x/bar)
larefami: [\larefami, 2, 2, [\number, \operator], nil] // X in 8 chance of doing the command
);
)
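A hedged sketch of how dispatch over those entries might work. ~parse is a placeholder name, the statement array is passed in explicitly, and the nil function slots above would need filling in before this does anything useful:

(
~parse = { |tokens, lang|
    var entry = lang[tokens[0]];
    if (entry.isNil) {
        "unknown command: %".format(tokens[0]).warn;
    } {
        var minArgs = entry[1], maxArgs = entry[2], types = entry[3], func = entry[4];
        var args = tokens.drop(1);
        if ((args.size < minArgs) or: {
            (args.size > maxArgs) and: { types.last != \operator }
        }) {
            "% takes % to % arguments".format(entry[0], minArgs, maxArgs).warn;
        } {
            // repeat and chance take a whole command as their final
            // argument, so wrap the trailing tokens in a function that
            // re-enters the parser when the command decides to run it
            if (types.last == \operator) {
                args = args.keep(types.size - 1)
                    ++ [{ ~parse.value(args.drop(types.size - 1), lang) }];
            };
            func.value(*args);
        };
    };
};
)

So ~parse.value([\dolamido, \solfasire], lang) would look up the dolamido entry, check the arity and run its function with \solfasire as the argument, while ~parse.value([\sifala, \resolsol, \misisifa], lang) would hand the sifala function the token for 4 (resolsol) plus a function wrapping the misisifa command.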

After pondering this for a bit, I decided to write some classes, because that’s how I solve all my problems. I created a GitHub project. This is the state of the sole file today.

Domifare fonts

Solresol can be notated in many ways: solfège, numbers, notes, and via two competing sets of special glyphs. These glyphs are a proposed addition to the Unicode standard and part of a set of glyphs known as the CSUR. They’re included in some fonts: the amazingly ugly Unifoundry includes the more abstract glyphs

[the Unifoundry solresol glyphs]

and Constructium, which just has single characters of solfège

[the Constructium solresol characters]

(This is boring, but well-rendered and easy to understand, aside from the duplication of the final syllable as both ‘si’ and ‘ti’.)

Both sets of glyphs above should render in modern web browsers, but allow some time for the fonts to load.

Many of my computer music projects seem to quickly get bogged down in font issues, and learning a new script is probably too much to ask of performers (myself included), even if it’s only 8 glyphs. However, Constructium is essentially a monospace font, in the sense that all 4-note words will render at the same length, so it’s my likely choice for a display. It is a lot easier than drawing actual music notation.

Like ixi lang users, who type into a dedicated Document window, Domifare users will be provided with an auto-transcription of their input. This is enough of a problem to solve by itself in early versions, but ixi lang’s page-rewriting properties seem like a good plan for later ones.

Domifare sisidomi

‘Domifare sisidomi’ means ‘live code’ in solresol, the first ever ‘constructed language’. That is, it was the first language to be intentionally designed. And, as this was a new idea, the creator, François Sudre, apparently felt that new syllables were needed. He used musical tones.

This last weekend, I played at an algorave in Newcastle with tuba and algorithms. The idea was to use a foot pedal to control things, but (despite working perfectly at home) it was non-responsive when the gig started, so my set included some live coding. Live coding with one hand while holding a tuba is not terribly efficient, and it’s impossible to live code and play tuba at the same time… unless playing the tuba is the live coding.

And thus, I’ve now specified an ixi lang-like language, domifare sisidomi. It’s a bit sparse, but there’s only so much a player can be expected to remember.

All variables are loops. There are three built in: solfasire, solfasisol and soldosifa (low percussion, high percussion and bassline). These are entered by playing the name of the variable followed by a rhythm or melody. As there is more than one kind of low or high percussion instrument, different ones can be specified by playing different pitches.
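Recognising played names implies pitch tracking on the input. A sketch of the sort of thing involved, assuming the Pitch UGen and placeholder names throughout:

(
// sketch: report detected pitches so played solfège syllables can be
// quantised and matched against the variable names
SynthDef(\domifarePitch, {
    var in, freq, hasFreq;
    in = SoundIn.ar(0);
    # freq, hasFreq = Pitch.kr(in);
    // only report while a pitch is confidently detected
    SendReply.kr(Impulse.kr(20) * hasFreq, '/domifarePitch', freq);
}).play;

OSCdef(\domifarePitch, { |msg|
    msg[3].postln; // the detected frequency, ready for quantisation
}, '/domifarePitch');
)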

The full (rough, unimplemented) specification follows:

// Enter a loop

solfasire [rhythm] //kick & toms
solfasisol [rhythm] // higher drums
soldosifa [melody] // bassline

larelasi [4 notes = the name] [melody] // declare a new loop

// start stop and modify a loop

dolamido [name] -- silence loop
domilado [name] -- resume loop
mifasol [name] -- raise an octave
solfami [name] -- lower an octave

lamidore [name] [rhythm] -- add notes to the loop
dosolresi [name] -- randomise loop // shake in ixilang

// every time a loop is changed by playing in new notes, shaking or adding, it gets added to a list of rhythms

// moving between loops in the list

misisifa / fasisimi [optional name] - move to next or previous rhythms
misoldola [optional name] - move to a random rhythm

// if no name is given, applies to all playing loops

// control structures

refamido - die

sifala dofadore [number] [next/prev/rand/randomise/chance/octave shift] [optional name] -- repeat the command x times

larefami [number] [next/prev/rand/randomise/repeat/octave shift/die] [optional name] - x in 8 chance of doing the command

//numbers

redodo - 1
remimi - 2
refafa - 3
resolsol - 4
relala - 5
resisi - 6
mimido - 7
mimire - 8

Algorithms and Authorship

A recent Wall Street Journal article (paywalled; see below for relevant quotes) felt it necessary to quote associate professor Zeynep Tufekci on the seemingly self-evident assertion that ‘Choosing what to highlight in the trending section, whether by algorithms or humans, is an editorial process’. This quote was necessary because Zuckerberg asserts that Facebook is a technology company, building tools but not content. He thus seeks to absolve himself of responsibility for the output of his algorithms.

It’s surprising he’s taken this line of argument, and not just because it didn’t help Microsoft when they tried it after their Twitter bot turned into a Nazi.

Facebook is acting as if the question of authorship of algorithmic output is an open question, when this has been settled in the arts for decades. Musicians have been using algorithmic processes for years. Some John Cage scores are lists of operations performers should undertake in order to generate a ‘performance score’, which is then ‘realised’. The 1958 score of Fontana Mix ‘consists of 10 sheets of paper and 12 transparencies’ and a set of instructions on how to use these materials (ibid). Any concert programme for a performance of this piece would list Cage as the composer. That is, he assumes authorship of algorithmic output. The question of authorship has had an answer for at least 58 years.

Indeed, other Silicon Valley companies, some located just down the road from Facebook, have quite clearly acknowledged this. The Google-sponsored ‘Net.art’ exhibition, included in the Digital Revolution show at London’s Barbican in 2014, included artist attribution next to every single piece, including those making copious use of algorithms.

Art has already tackled even the issues of collective and collaborative algorithmic authorship. In 1969 Cornelius Cardew published Nature Study Notes: Improvisation Rites, a collection of text pieces by Scratch Orchestra members. Each of the short pieces, or ‘rites’, has individually listed authors. However, when programmed for performance in 2015 at Cafe Oto, the programme was billed as ‘The Scratch Orchestra’s Nature Study Notes’, thus indicating both individual and corporate authorship. Some of these pieces are best described as algorithms, and indeed have been influential in tech circles. As Simon Yuill points out in his paper All Problems of Notation Will Be Solved By The Masses, the anti-copyright notice included with the score uses copyleft mechanisms to encourage modification.

Some may argue that the artist gains authorship through a curatorial process of selecting algorithmic output. Unlike Iannis Xenakis, John Cage never altered the output of his formulas. He did, however, throw away results that he deemed unsatisfactory. Similarly, Nature Study Notes was curated by the listed editor, Cardew. One can assume that performing musicians would make musical choices during performance of algorithmic scores. It’s arguable that these musical choices would also be a form of curation. However, composers have been making music that is played without human performers since the invention of the music box. To take a more recent algorithmic example, Clarence Barlow’s piece Autobusk, first released in 1986, is a fully autonomous music generation program for the Atari. The piece uses algorithms to endlessly noodle out MIDI notes. Although phrasing the description of the piece in this way would seem to bestow some sort of agency upon it, any released recordings of the piece would certainly list Barlow as the composer.

Facebook’s odd claims to distance itself from its tools fail by any standard I can think of. It’s strange they would attempt this now, in light of not just Net.art but also Algorave music: that is, dance music created by algorithms, an art form that is having a moment and which is tied in closely with the live code movement. Composer-performers Alex McLean, Nick Collins and Shelly Knotts are all examples of live-code artists, who write algorithms on stage to produce music. This is the form of artistic programming that is perhaps the closest analogue to writing code for a live web service. Performers generate algorithms and try them out, live, to see if they work. Algorithms are deployed for as long as they are useful in context and then are tweaked, changed or replaced as needed. Results may be unpredictable or even undesired, but a skilled performer can put a stop to elements that are going awry. Obviously, should someone’s kickdrum go out of control in a problematic way, that’s still attributable to the performer, not the algorithm. As the saying goes, ‘the poor craftsman blames his tools.’

Algoraving is a slightly niche art form, but one that is moving towards the mainstream: the BBC covered live-coded dance music in an interview with Dan Stowell in 2009 and has programmed Algorave events since. Given Algorave’s close relationship with technology, it tends to be performed at tech events. For example, the Electromagnetic Field festival of 2016 had an Algorave tent, sponsored by Spotify. As would be expected, acts in the tent were billed by performer, not tools. So the performance information for one act read ‘Shelly Knotts and Holger Ballweg’, omitting reference to their programming language or code libraries.

Should someone’s algorithmically generated content somehow run afoul of a Code of Conduct (either the festival’s or the one used by several live code communities), it is the performer who would be asked to stop or leave, not their laptop. Live coders say that algorithms are more like ideas than tools, but ideas do not have their own agency.

Zuckerberg’s assertion that ‘Facebook builds tools’ is equally true of Algoravers. And, like Algoravers, it is Facebook who is responsible for the final output. Shrugging their shoulders over clearly settled questions of authorship is a weak defence for a company that has been promoting fascism to racists. Like a live coder, surely they can alter their algorithms when they go wrong, which they should be doing right now. To mount such a weak defence seems almost an admission that their actions are indefensible.

Like many other young Silicon Valley millionaires, Zuckerberg is certainly aware of his own cleverness and of the willingness of some members of a credulous press to cut and paste his assertions, however unconvincing. Perhaps he expects Wall Street Journal readers to be entirely unaware of the history of algorithmic art and music, but his milieu, which includes Google’s sponsorship of such art, certainly is better informed. His disingenuous assertion insults us all.

The future

http://livecoding.tv is a site where you can watch people write code live. Like write a text editor. Because this is what’s entertaining in the 21st century. It’s meant to be educational.

It is also apparently only men.

Breathing code is another public coding platform. Or a conference, rather. Or something that lost money.

The FARM workshop on functional art, music, modeling and design was a success.

TOPLAP is, or was, a fun community. It needs more participation.

Computational literacy: in 2003, Chris Hancock wrote Real-Time Programming and the Big Ideas of Computational Literacy. It emphasises experience. Real-time code makes for real-time interaction.

Here is a slide showing a continuum between bodies and theories. Live coding is somewhere in the middle. Action thinking with live coding.

Discussion:
What next?

Q: Are performances meant to be an academic or scientific exercise, or how should they be curated?

Maybe instead of a conceptual framework, a description of the kind of output or environment?

For an academic conference, things should have some novelty.

Things could be partly open call and partly curated. Curators need to be somewhat neutral.

Dance music is interesting and can be rigorous, or is that even a valuable thing to aspire to?

Livecoding.tv is not an open platform. We could take live streaming out to the wild. Interact with normal people.

Which performances should be public?

What about other art forms?

Could there be some youth outreach in the next conference?

Kids algorave or some such

Algorave school dances

Trying to avoid product oriented output.

SuperCopair

Collaboratively live coding SuperCollider through the cloud.

Remotely located synchronous interaction. People use Dagstuhl, Gibber, etc.

SuperCopair uses atom.io, which allows you to remotely and collaboratively edit a document. They used the pusher.com cloud service.

The Pusher cloud service pushes data. Sending a character takes an average of 230ms from São Paulo to Ann Arbor.

It is easy to set up. There is no clock sync.

You can run code locally or globally or remotely (only).

They’ve added permission control.

Users tend to want to collaboratively fix bugs.

This package is available in atom.io.

Live coding as a part of a free improv orchestra by Antonio Goulart and Miguel Antar

He is doing only code, not processing the other players.

He does no beat tracking. The aim is to communicate well with other improvisers. They try to be non-idiomatic.

Performers should be able to play together based only on sound with no knowledge of other instruments. They try to play without memory.

Acoustic instruments are immediate, but live coding has lag, which provides opportunity to create future sounds.

He didn’t use to project code, so as not to grab too much attention. But the free improv acoustic players jammed more together, so he thought screen sharing would help people play more together.

This did help, and does not distract audiences, so they kept it.

It turns out that some knowledge of each other’s instruments is needed to play well together. Should instrumentalists learn a bit of code to play with live coders? Or is that too much to ask? Would it be too distracting when they’re trying to play?

http://soundcloud.com/orquestraerrante

There is a discussion about projection in the question section….

Extramuros: a browser coding thingee by David Ogborn et al

Uses node.js, which he says is duct tape for network music.

It supports distributed ensembles and has language neutrality. Server-client model.

Browser interface, piped to language from server.

It is aesthetically austere.

There have been several performances.

It is also useful for projecting an ensemble’s live code. And for screen sharing. And for doing workshops with low configuration. (He says no configuration.)

Running this means giving access to a high priority running thread on your machine…

Future work: it allows for JavaScript and osc right now. They want to do event visualisation. There are synchronisation issues. Phasing is an issue. What about stochastic stuff? When people write unpredictable, untimed code, unpredictable, untimed things happen. Apparently this is bad.

We are being made to participate. In Tidal.

Live writing: asynchronous live coding

People collaborating can be co-located or distant, synchronous or asynchronous.

Asynchronous live coding is a thing, like sending code over email. There are ways to communicate, etc.: open-form scores, static code, screencasts, audio recordings.

Music notation is meant to allow this. He says live coding is improvised or composed in real time. So how do you archive performances or rehearsals?

Audio can be recorded, or there are symbolic recordings, like code or notation. Code and notation are not equivalent. In traditional music, MIDI files sit between the recording and the score: they have time stamps etc.

What is the equivalent of a midi file for live coders?

He’s showing Gibber in a web browser. It records his keystrokes with time stamps, saves them to a server, and can play them back.

http://livewriting.eecs.umich.edu

There are other systems like this, like the Threnoscope by Magnusson.

‘Show us your screens’ happens outside of live coding too: game streaming, and programmers streaming things other than live coding.

Writing as music performance: write a poem, sonify the typing, and do stuff based on character content.

Written communication is asynchronous. Recording keystrokes can make writing into a real time experience. Or reading. Or whatever.

As there is no sense of audience, the writing experience is the same as non-live writing. The reading experience is really different, though.

TextAlive Online

YouTube, SoundCloud, etc, are for distributing product, not authoring it. This is less true for programmers.

Kinetic typography is animated text.

He had a system for making music videos etc., with text, or maybe lyrics, appearing in time.

Video is a pure function of time. Exported video projects are rendered images, but you can write a function that, given a time, generates an image. This creates real scalability.

The TextAlive website does this with text animation of lyrics. The system analyses the song and the lyrics and automatically makes a video draft. The user can then edit it.

‘Everything is interactive’ via a set of sliders.

The video is generated with JavaScript. The faders are automatically generated.

You can modify the JavaScript. And make derivative videos.

This really shows the power of SVG + JavaScript.

http://www.textalive.jp
