{"id":181,"date":"2011-09-15T15:51:00","date_gmt":"2011-09-15T14:51:00","guid":{"rendered":"http:\/\/www.celesteh.com\/blog\/2011\/09\/15\/dissertation-draft-solo-live-electronic\/"},"modified":"2015-06-19T00:23:21","modified_gmt":"2015-06-18T23:23:21","slug":"dissertation-draft-solo-live-electronic","status":"publish","type":"post","link":"https:\/\/www.celesteh.com\/blog\/2011\/09\/15\/dissertation-draft-solo-live-electronic\/","title":{"rendered":"Dissertation Draft: Solo Live Electronic Pieces"},"content":{"rendered":"<p><i>Phreaking<\/I><\/p>\n<p>When<br \/>\nthe local 2600 group, started organising BrumCon 2007, which took<br \/>\nplace on 3 May 2008, I asked if I could present a musical set.  They<br \/>\nhad never done such a thing before, but they agreed.   2600 is a<br \/>\nhacker group (\u201c2600 Meetings\u201d) with roots in phone phreaking<br \/>\n(Trigaux), so I decided to reference that in a piece written for the<br \/>\ngig.<\/p>\n<p>As<br \/>\nnoted in <i>The Complete Hacker&#8217;s Handbook,<\/I> \u201cPhone phreaking\u201d<br \/>\nrefers to the practice hacking phone systems in order to get free<br \/>\ncalls or just explore the workings of the system. Phone systems used<br \/>\nto be controlled by in-band signalling, that is, audible tones that a<br \/>\nuser could reproduce in order to gain unauthorised control. For<br \/>\nexample, 2600 Hz was a useful tone to \u201cseize\u201d control of a line.<br \/>\nOther such sounds commonly found in telephony are Dual Tone Multi<br \/>\nFrequency [DTMF] sounds, which are the ones produced by a landline<br \/>\nkeypad.   (Dr. K.)<\/p>\n<p>I<br \/>\nlooked up red box phreaking on Wikipedia and also DTMF signals and<br \/>\nused those tones as the heart of the piece. 
It starts with a dial<br \/>\ntone, then plays coin-dropping sounds, followed by the sound of<br \/>\ndialling and then a ring back sound, followed by a 2600 Hz tone.<br \/>\nAfter that introduction, it plays dialling signals and then a beat.<br \/>\nThe beat is made up of patterns of sampled drums.  The programme<br \/>\npicks random beats to be accented, which will always have a drum<br \/>\nsound on them, and then scatters drum sounds on some of the other<br \/>\nbeats also.  The loop is repeated between 8 and 10 times and then a new<br \/>\npattern is created, retaining the same accents for the duration of<br \/>\nthe piece.  If the randomly generated drum pattern seems too sparse<br \/>\nor too full of beats, the performer can intervene by pressing one<br \/>\njoystick button to add drum beats or another to remove them. The<br \/>\nidea for randomly accenting beats comes from a lecture by Paul Berg at<br \/>\nSonology in the Hague, where he noted that randomly accented beats<br \/>\nsound as if they follow a deliberate rhythm when heard by audiences.<br \/>\nThis is related to Trevor Wishart&#8217;s discussion of Clarence Barlow&#8217;s<br \/>\n\u201cindispensability factor,\u201d where Wishart notes that changing<br \/>\naccents of a steady beat can alter the listener&#8217;s perception between<br \/>\ntime signatures. (Wishart p 64)  It seems that greater randomness in<br \/>\npicking accents leads listeners to perceive more complex rhythms.<\/p>\n<p>After<br \/>\nthe beats, a busy signal comes in occasionally. There are also bass<br \/>\nfrequencies which are DTMF sine tones transposed by octaves.<br \/>\nFinally, there are samples of operator messages that are used in the<br \/>\nAmerican phone system.  These are glitched and stuttered, the degree<br \/>\nof which is controlled with a joystick.  
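The accent scheme described above can be sketched in outline. This is a Python paraphrase of the logic rather than the actual SuperCollider programme, and the names and the fill probability are my own assumptions:

```python
import random

def make_accents(n_beats=16, n_accents=5, seed=None):
    # Accented positions are chosen once and kept for the whole piece.
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_beats), n_accents))

def make_pattern(accents, n_beats=16, fill=0.3, rng=random):
    # An accented beat always gets a drum hit; other beats get one
    # with probability `fill`, scattering hits across the bar.
    return [b in accents or rng.random() < fill for b in range(n_beats)]

def pattern_stream(accents, n_beats=16, rng=random):
    # Each pattern repeats 8-10 times, then a fresh pattern is generated
    # that retains the same accents.
    while True:
        pattern = make_pattern(accents, n_beats, rng=rng)
        for _ in range(rng.randint(8, 10)):
            yield pattern
```

The performer's joystick buttons would then amount to nudging `fill` up or down.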
Thus, this piece is partly a<br \/>\nlive-realisation, self-running piece and partly controlled by a<br \/>\nperformer.<\/p>\n<p>At<br \/>\nthe time, I was interested in making computer pieces that necessarily<br \/>\nhad to be computer pieces and could not be realised with live<br \/>\ninstruments or with an analogue synthesiser.  Extremely exact tunings<br \/>\nand sample processing are both examples of things that are<br \/>\ncomputer-dependent. I was also interested in having more live control<br \/>\nand more visible gesture, in order to, as Paine describes in his<br \/>\npaper on gesture in laptop performance, \u201cinject<br \/>\na sense of the now, an engagement with audience in an effort to<br \/>\nreclaim the authenticity associated with \u2018live\u2019 performance.\u201d<br \/>\n(Paine p 4)  I thought having physical motions would engage the<br \/>\naudience more than a live realisation. Conversely and relatedly, I<br \/>\nwas also interested in the aesthetics of computer failure, within the<br \/>\nglitches I was creating. Cascone writes, \u201c'[Failure]&#8217; has become a<br \/>\nprominent aesthetic in many of the arts in the late 20th century,<br \/>\nreminding us that our control of technology is an illusion, and<br \/>\nrevealing digital tools to be only as perfect, precise, and efficient<br \/>\nas the humans who build them.\u201d (Cascone)  I thought this<br \/>\nintentional highlighting of imperfection would especially resonate<br \/>\nwith an audience that largely worked in a highly technical and<br \/>\nprofessional capacity with computers.<\/p>\n<p>I<br \/>\nalso find glitches to be aesthetically appealing and have been<br \/>\ninfluenced by the extremely glitchy work of Ryoji Ikeda, especially<br \/>\nworks like <i>Data.Matrix<\/I>,<br \/>\nwhich is a sonification of data. 
(\u201cDatamatics\u201d) Similarly,<br \/>\nin-band signalling is literally a sonic encoding of data, designed<br \/>\nfor computer usage.<\/p>\n<p>When<br \/>\nI performed the piece at BrumCon, their sound system did not have an<br \/>\neven frequency response. Some sine waves sounded far louder than<br \/>\nothers and I did not have a way to adjust. I suspect this problem is<br \/>\nmuch more pronounced for sine tones than it is for spectrally richer<br \/>\nsounds.  Another problem I encountered was that I was using<br \/>\nsounds with strong semantic meanings for the audience.  Many of them<br \/>\nhad been phreakers and the sounds already had a specific meaning and<br \/>\ncontext that I was not accurately reproducing.  Listeners without<br \/>\nthis background have generally been more positive about the piece.<br \/>\nOne blogger wrote that the piece sounded like a \u201cdemonic<br \/>\nhomage to Gaga\u2019s <i>Telephone<\/I>,\u201d<br \/>\n(Lao) although he did note that my piece was written earlier.<\/p>\n<p><i>Blake&#8217;s<br \/>\n9<\/I><\/p>\n<p>The<br \/>\nmusic of the BBC Radiophonic Workshop has been a major influence on<br \/>\nmy music for a long time.  The incidental music and sound effects of<br \/>\n<i>Doctor Who<\/I> during the Tom Baker years were especially<br \/>\nformative.  I found time in 2008 to watch every episode of <i>Blakes<br \/>\n7 <\/I>and found the sound effects to be equally compelling.  I spent<br \/>\nsome time with my analogue synthesiser and tried to create sounds like<br \/>\nthe ones used in the series.  I liked the sounds I got, but they were<br \/>\na bit too complex to layer into a collage, yet not complex<br \/>\nenough to stand on their own.  
I wrote a<br \/>\nSuperCollider programme to process them through granular synthesis<br \/>\nand other means and to create a piece using the effects as source<br \/>\nmaterial, mixed with other computer-generated sounds.<\/p>\n<p>The<br \/>\ntimings on the micro, beat and loop levels are all in groups of<br \/>\nnine or multiples of nine, which is why I changed the number in the<br \/>\npiece title.  I was influenced to use this number by a London poet,<br \/>\nMendoza, who had a project called <i>ninerrors<\/I><br \/>\nwhich ze<a CLASS=\"sdfootnoteanc\" NAME=\"sdfootnote1anc\" HREF=\"#sdfootnote1sym\" SDFIXED>*<\/A><br \/>\ndescribes as, \u201ca<br \/>\nsequence of poems constricted by configurations of 9: connected &amp;<br \/>\ndis-connected by&nbsp;self-imposed constraint.&nbsp;each has 9 lines<br \/>\nor multiples of 9, some have 9 words or syllables per line, others<br \/>\nare divisible by 9.&nbsp; <i><b>ninerrors&nbsp;<\/B><\/I>is&nbsp;presented<br \/>\nas a series of 9 pamphlets containing 9 pages of poetry.\u201d<br \/>\n(\u201cninerrors\u201d)  I adopted a use of nines not only in the timings,<br \/>\nbut also in shifting the playback rate of buffers, which are played<br \/>\nat rates of 27\/25,<br \/>\n9\/7, 7\/9 or 25\/27. The<br \/>\nfrequencies of the tone clusters are also related to each other by tuning<br \/>\nratios that are similarly based on nine.  
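As an aside of my own (not part of the original programme), the sizes of these playback-rate ratios are easy to compute in cents, which shows the symmetry of the set:

```python
import math
from fractions import Fraction

# The four nine-based playback-rate ratios used for the buffers.
RATIOS = [Fraction(27, 25), Fraction(9, 7), Fraction(7, 9), Fraction(25, 27)]

def cents(ratio):
    # A frequency (or playback-rate) ratio expressed in cents.
    return 1200 * math.log2(ratio)

# 27/25 and 25/27 come to roughly +/-133 cents; 9/7 and 7/9 to roughly +/-435.
sizes = {str(r): round(cents(r), 1) for r in RATIOS}
```

Each upward shift is thus exactly mirrored by a downward one.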
I was in contact with<br \/>\nMendoza while writing this piece and one of the poems in hir<br \/>\n<i>ninerrors<\/I> cycle, <i>an<br \/>\nobsessive compulsive disorder<\/I>,<br \/>\nmentions part of the creation of this piece in its first line,<br \/>\n\u201cwashing machine spin cycle drowns out synth drones.\u201d (\u201can<br \/>\nobsessive compulsive disorder\u201d)<\/p>\n<p>While<br \/>\nratios based on nines gave me the internal tunings of the tone<br \/>\nclusters, I used dissonance curves, as described by William Sethares,<br \/>\nto generate the tuning and scale for the base frequencies of the<br \/>\nclusters.  The clusters should therefore sound as consonant as<br \/>\npossible and provide a contrast to the rest of the piece, which is<br \/>\nrather glitchy.  The glitches come partly from the analogue material,<br \/>\nbut also from sudden cuts in the playback of buffers.  For some parts<br \/>\nof the piece, the programme records its own output and then uses<br \/>\nthat as source material, something that may stutter, especially if<br \/>\nthe buffer is recording its own output.  I used this effect because,<br \/>\nas mentioned above, I want to use a computer to do things which only<br \/>\nit can do.  When writing about glitches, Vanhanen writes that their<br \/>\nsounds \u201care<br \/>\nsounds of the . . . technology itself.\u201d (p 47)  He notes that \u201cif<br \/>\nphonography is essentially acousmatic, then the ultimate phonographic<br \/>\nmusic would consist of sounds that have no acoustic origin,\u201d (p<br \/>\n49) thus asserting that skips and \u201cdeliberate mistakes\u201d (ibid)<br \/>\nare the essential sound of \u201cphonographic styles of music.\u201d (ibid)<br \/>\nSimilarly, \u201cglitch is the digital equivalent of the phonographic<br \/>\nmetasound.\u201d (p 50)  It is necessarily digital and thus is<br \/>\ninherently tied to my use of a computer. 
While my use of glitch is<br \/>\noppositional to the dominant style of BEAST, according to Vanhanen,<br \/>\nit is also the logical extension of acousmatic music.<\/p>\n<p>Indeed,<br \/>\nthe piece was written with the BEAST system in mind.  The code was<br \/>\nwritten to allow N-channel realisations. Some gestures are designed<br \/>\nwith rings of 8 in mind, but others, notably at the very start, are<br \/>\ndesigned to be front speakers only.  Some of the \u201crecycled\u201d<br \/>\nbuffers, playing back the piece&#8217;s own recordings, were originally<br \/>\nintended to be sent to distant speakers, not pointed at the audience,<br \/>\nthus giving some distance between the audience and those glitches when<br \/>\nthey are first introduced.  I chose to do it this way partly in order<br \/>\nto automate the use of spatial gesture. In his paper on gesture,<br \/>\nPaine notes that moving a sound in space is a form of gesture,<br \/>\nspecifically mentioning the BEAST system. (p 11) I think that because<br \/>\nthis gesture is already physical, it does not necessarily need to<br \/>\nrely on the physical gesture of a performer moving faders.  Removing<br \/>\nmy own physical input from the spatialisation process allowed me more<br \/>\ncontrol over the physical placement of the sound, without diminishing<br \/>\nthe audience&#8217;s experience of the piece as authentic.  It also gives<br \/>\nme greater separation between sounds, since the stems are generated<br \/>\nseparately, and lets me use more speakers at once, thus increasing the<br \/>\nimmersive aspect (p 13) of the performance.<\/p>\n<p>Although<br \/>\nthis piece is entirely non-interactive, it is a live realisation<br \/>\nwhich makes extensive use of randomisations and can vary<br \/>\nsignificantly between performances.  
If I get additional<br \/>\nchances to perform it on a large speaker system, I would like the<br \/>\naudience to have a fresh experience every time it is played.<\/p>\n<p><i>Synthesiser<br \/>\nImprovisation<\/I><\/p>\n<p>When<br \/>\nI was a student at Wesleyan, I had my MOTM analogue modular<br \/>\nsynthesiser mounted into a 6-foot-tall free-standing rack that was<br \/>\nintended for use in server rooms.  It was not very giggable, but it<br \/>\nwas visually quite striking.  When my colleagues saw it, they<br \/>\nlaunched a campaign that I should do a live-patching concert.  I was<br \/>\ninitially resistant to their encouragement, as it seemed like a<br \/>\nterrible idea, but eventually I gave in and spent several days<br \/>\npractising getting sounds quickly and then refining them.  In<br \/>\nperformance, as with other types of improvisation, I would find<br \/>\nexciting and interesting sounds that I had not previously stumbled on<br \/>\nin the studio. Some of my best patches have been live.<\/p>\n<p>I&#8217;ve<br \/>\nbeen deeply interested in the music of other composers who do live<br \/>\nanalogue electronics, especially in the American experimental<br \/>\ntradition of the 1960s and 70s.  <i>Bye<br \/>\nBye Butterfly<\/I><br \/>\nby Pauline Oliveros is one such piece, although she realised it in a<br \/>\nstudio. (Bernstein p 30) This piece and others that I find<br \/>\ninteresting are based on discovering the parameters and limits of a<br \/>\nsound phenomenon. Bernstein writes that \u201cShe discovered that a<br \/>\nbeautiful low difference tone would sound\u201d when her oscillators<br \/>\nwere tuned in a particular way. (ibid)  Live patching also seems to<br \/>\nbe music built on discovery, but perhaps a step more radical for its<br \/>\nbeing performed live.<\/p>\n<p>Even<br \/>\nmore radical than live patching is <i>Runthrough<\/I><br \/>\nby David Behrman, which is realised live with DIY electronics. 
The<br \/>\nprogramme notes for that piece state, \u201cNo<br \/>\nspecial skills or training are helpful in turning knobs or shining<br \/>\nflashlights, so whatever music can emerge from the equipment is as<br \/>\navailable to non-musicians as to musicians . . .. Things<br \/>\nare going well when all the players have the sensation they are<br \/>\nriding a sound they like in harmony together, and when each is<br \/>\nappreciative of what the others are doing.\u201d<br \/>\n(\u201cSonic Arts Union\u201d) The piece is based entirely on discovery and<br \/>\nhas no set plan or written score. (ibid)  The piece-ness relies on<br \/>\nthe equipment.  This is different from live patching, because a<br \/>\nmodular synthesiser is designed to be a more general-purpose tool and<br \/>\nits use does not imply a particular piece.  Understanding the<br \/>\ninteraction between synthesiser modules is also a specialist skill<br \/>\nand does imply that expertise is possible.  However, the idea of<br \/>\nfinding a sound and following it is similar.<\/p>\n<p>Recently,<br \/>\nI have been investigating ways to merge my synthesiser performance<br \/>\nwith my laptop performance.  The first obvious avenue of exploration<br \/>\nwas via live sampling. This works well with a small modular, like the<br \/>\nEvenfall Mini Modular, which is small enough to put into a rucksack<br \/>\nand has many normalised connections.  It has enough flexibility to<br \/>\nmake interesting and somewhat unexpected music, but is small and<br \/>\nsimple enough that I can divide my attention between it and a laptop.<br \/>\nUnfortunately, mine was damaged in a bike accident in Amsterdam in<br \/>\n2008 and has not yet been repaired.<\/p>\n<p>My<br \/>\nMOTM synthesiser, however, is too large and too complex to divide my<br \/>\nattention between it and a laptop screen.  
I experimented with using<br \/>\ngamepad control of a live sampler, such that I did not look at the<br \/>\ncomputer screen at all, but relied on being able to hear the state of<br \/>\nthe programme and on a memory of what the different buttons did.  I<br \/>\ntried this once in concert at Noise = Noise #19 in April 2010.  As is<br \/>\noften the case in small concerts, I could not fully hear both monitor<br \/>\nspeakers, which made it difficult to monitor the programme.<br \/>\nFurthermore, as my patch grew in complexity, the computer-added<br \/>\ncomplexity became difficult to perceive and I stopped being able to<br \/>\ntell if it was still working correctly or at all. A few minutes into<br \/>\nthe performance, I stopped the computer programme entirely and<br \/>\nswitched to all analogue sounds.  While the programme did not perform<br \/>\nin the manner I had intended, the set was a success and the recording<br \/>\nalso came out quite well and is included in my portfolio.  One<br \/>\nblogger compared the track to Jimi Hendrix, (Weidenbaum) which was<br \/>\ncertainly unexpected.<\/p>\n<p>It<br \/>\nis unusual for me to have a live recording come out well.  This is<br \/>\nbecause of the live, exploratory aspect of the music.  If I discover<br \/>\nthat I can make the subwoofers shake the room or make the stage<br \/>\nrattle, or discover another acoustic phenomenon in the space, I will<br \/>\npush the music in that direction. While this is exciting to play, and<br \/>\nhopefully to hear, it doesn&#8217;t tend to come out well on recordings.  I<br \/>\nalso have a persistent problem with panning.  On stage, it&#8217;s often<br \/>\ndifficult to hear both monitors, and judging the relative amplitudes<br \/>\nbetween them requires a certain concentration that I find difficult<br \/>\nto maintain while simultaneously patching and altering timbres.  
In order<br \/>\nto solve this problem, I&#8217;ve written a small programme in<br \/>\nSuperCollider which monitors the stereo inputs of the computer and<br \/>\npans them to the outputs according to their amplitude.  If one input is much<br \/>\nlouder than the other, it is panned to centre and the other is<br \/>\nslowly oscillated between left and right.  If the two inputs are<br \/>\nclose in amplitude, they are panned left and right, with some<br \/>\noverlap.  I think this is the right answer for how to integrate a<br \/>\ncomputer with my synthesiser.  Rather than giving me more things to<br \/>\nthink about, it silently fixes a problem, thus removing a<br \/>\nresponsibility.  An example of playing with the autopanner, from a<br \/>\nsmall living room concert in 2011, is included in my portfolio.<\/p>\n<p>For<br \/>\nfuture exploration, I am thinking of returning to the idea of live<br \/>\nsampling, but similarly without my interaction. I would tell the<br \/>\ncomputer when the set starts and it would help me build up a texture<br \/>\nthrough live sampling. Then, as my input sound became more complex<br \/>\n(or after a set period of time), the computer interventions would<br \/>\nfade out, leaving me entirely analogue.  This could help me get<br \/>\nsomething interesting going more quickly, although it may violate the<br \/>\n\u201cblank canvas\u201d convention of live coding and live patching. In<br \/>\nFebruary 2011, there was a very brief discussion on the TopLap email<br \/>\nlist as to whether live patching was an analogue form of live coding<br \/>\n(Rohrhuber).  I do not see that much commonality between them, partly<br \/>\nbecause a synthesiser patch is more like a chainsaw than it is like an<br \/>\nidea, (\u201cManifestoDraft\u201d) and partly because patching is much more<br \/>\ntactile than coding is. 
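Returning to the autopanner described earlier: its decision logic might look like the following in outline. This is a Python paraphrase of my own; the actual programme is in SuperCollider, and the threshold, overlap and LFO values here are illustrative guesses.

```python
import math

def autopan_positions(amp_left, amp_right, t, ratio_threshold=2.0, lfo_hz=0.1):
    """Return pan positions (-1 = left, +1 = right) for the two input channels.

    If one channel is much louder than the other (beyond ratio_threshold),
    the louder one sits at centre while the quieter one drifts slowly
    between left and right on a sine LFO. Otherwise the two inputs are
    spread left and right with some overlap.
    """
    drift = math.sin(2 * math.pi * lfo_hz * t)  # slow oscillation in [-1, 1]
    quiet = 1e-9  # avoid division by zero on silent input
    ratio = (amp_left + quiet) / (amp_right + quiet)
    if ratio > ratio_threshold:        # left input much louder
        return (0.0, drift)
    if ratio < 1.0 / ratio_threshold:  # right input much louder
        return (drift, 0.0)
    return (-0.7, 0.7)                 # comparable levels: spread with overlap
```

In practice the amplitudes would come from an envelope follower on each input, and the positions would drive an equal-power panner.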
However, some of the same conventions do seem<br \/>\nto apply to both.<\/p>\n<p>Bibliography<\/p>\n<p>\u201c2600<br \/>\nMeetings.\u201d <i>2600: The Hacker Quarterly.<\/I> Web. 8 September<br \/>\n2011. &lt;<a HREF=\"http:\/\/www.2600.com\/meetings\/\">http:\/\/www.2600.com\/meetings\/<\/A>&gt;<\/p>\n<p>Behrman,<br \/>\nDavid. \u201cRunthrough.\u201d 1971. Web. 15 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.ubu.com\/sound\/sau.html\">http:\/\/www.ubu.com\/sound\/sau.html<\/A>&gt;<\/p>\n<p>Bernstein, David. \u201cThe San Francisco Tape Music Center: Emerging Art Forms and the American Counterculture, 1961 \u2013 1966.\u201d <i>The San Francisco Tape Music Center: 1960s Counterculture and the Avant-Garde.<\/i> Ed. David Bernstein. Berkeley, CA, USA: University of California Press, 2008. Print.<\/p>\n<p><i>Blakes<br \/>\n7<\/I>. BBC. BBC One, UK. 2 January<br \/>\n1978 \u2013 21 December 1981.  Television.<\/p>\n<p>Cascone,<br \/>\nKim. \u201cThe Aesthetics of Failure: &#8216;Post-Digital&#8217; Tendencies in<br \/>\nContemporary Computer Music.\u201d <i>resonancias<\/I>. 2002. Web. 8<br \/>\nSeptember 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.ccapitalia.net\/reso\/articulos\/cascone\/aesthetics_failure.htm\">http:\/\/www.ccapitalia.net\/reso\/articulos\/cascone\/aesthetics_failure.htm<\/A>&gt;<\/p>\n<p>Cascone,<br \/>\nKim. \u201cThe Microsound Scene: An Interview with Kim Cascone.\u201d<br \/>\nInterview with Jeremy Turner. <i>Ctheory.net<\/I>. 12 April 2001. Web.<br \/>\n11 September 2011. &lt;<a HREF=\"http:\/\/www.ctheory.net\/articles.aspx?id=322\">http:\/\/www.ctheory.net\/articles.aspx?id=322<\/A>&gt;<\/p>\n<p>\u201cDatamatics.\u201d<br \/>\n<i>ryoji ikeda<\/I>. Web. 12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.ryojiikeda.com\/project\/datamatics\/\">http:\/\/www.ryojiikeda.com\/project\/datamatics\/<\/A>&gt;<\/p>\n<p><i>Doctor<br \/>\nWho. <\/I>BBC. BBC One, UK. 23<br \/>\nNovember 1963 \u2013 6 December 1989. 
Television.<\/p>\n<p>Dr.<br \/>\nK. \u201cChapter 9: Phone Phreaking in the US &amp; UK.\u201d <i>Complete<br \/>\nHacker&#8217;s Handbook.<\/I><br \/>\nWeb. 5 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.telefonica.net\/web2\/vailankanni\/HHB\/HHB_CH09.htm\">http:\/\/www.telefonica.net\/web2\/vailankanni\/HHB\/HHB_CH09.htm<\/A>&gt;<\/p>\n<p>Hutchins,<br \/>\nCharles C\u00e9leste. \u201cGig report: Edgetone Summit.\u201d <i>Les<br \/>\nsaid, the better<\/I>.<br \/>\n6 August 2008. Web. 5 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/celesteh.blogspot.com\/2008\/08\/gig-report-edgetone-summit.html\">http:\/\/celesteh.blogspot.com\/2008\/08\/gig-report-edgetone-summit.html<\/A>&gt;<\/p>\n<p>Ikeda,<br \/>\nRyoji. \u201cData.Matrix.\u201d 2005. Web. 12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.youtube.com\/watch?v=F5hhFMSAuf4\">http:\/\/www.youtube.com\/watch?v=F5hhFMSAuf4<\/A>&gt;<\/p>\n<p>Knapp<br \/>\nand Bortz. \u201c<strong>Mobilemuse:<br \/>\nIntegral music control goes mobile<\/STRONG>.\u201d<br \/>\n2011 NIME. University of Oslo, Norway. 31 May 2011.<\/p>\n<p>Kuivila,<br \/>\nRon. \u201cDavid Tudor: Live Electronic Music.\u201d <i>Leonardo<br \/>\nMusic Journal<\/I>.<br \/>\nDecember 2004. 106-107. Web. 12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.jstor.org\/stable\/1513516\">http:\/\/www.jstor.org\/stable\/1513516<\/A>&gt;<\/p>\n<p>Lao,<br \/>\nLinus. \u201cThe<br \/>\nmusic of Charles&nbsp;Celeste&nbsp;Hutchins.\u201d<br \/>\n<i>Pringle&#8217;s<br \/>\nHardware Store<\/I>.<br \/>\n17 June 2010. Web. 8 September 2011.<\/p>\n<p>&lt;<a HREF=\"http:\/\/thelinusblog.wordpress.com\/2010\/06\/17\/the-music-of-charles-celeste-hutchins\/\">http:\/\/thelinusblog.wordpress.com\/2010\/06\/17\/the-music-of-charles-celeste-hutchins\/<\/A>&gt;<\/p>\n<p>\u201cManifestoDraft.\u201d<br \/>\n<i>TopLap<\/I>.<br \/>\n14 November 2010. Web. 
12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/toplap.org\/index.php\/ManifestoDraft\">http:\/\/toplap.org\/index.php\/ManifestoDraft<\/A>&gt;<\/p>\n<p>Mendoza.<br \/>\n\u201can obsessive compulsive disorder.\u201d <i>ninerrors<\/I>.<br \/>\n2009. Web. 9 September 2011.<\/p>\n<p>&lt;<a HREF=\"http:\/\/ninerrors.blogspot.com\/2009\/10\/ninerrors-2.html\">http:\/\/ninerrors.blogspot.com\/2009\/10\/ninerrors-2.html<\/A>&gt;<\/p>\n<p>Mendoza.<br \/>\n\u201cninerrors: Poetry Series.\u201d Web. 9 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/ninerrors.blogspot.com\/\">http:\/\/ninerrors.blogspot.com\/<\/A>&gt;<\/p>\n<p>Moller,<br \/>\nPolly. \u201cPre-concert Q&amp;A Session.\u201d Edgetone New Music Summit.<br \/>\nSan Francisco Community Music Center, San Francisco, California, USA.<br \/>\n23 July 2008.<\/p>\n<p>Olewnick,<br \/>\nBrian. \u201cLive Electronic Music.\u201d<br \/>\n<i>AllMusic<\/I>.<br \/>\nWeb. 12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.allmusic.com\/album\/live-electronic-music-w95765\/review\">http:\/\/www.allmusic.com\/album\/live-electronic-music-w95765\/review<\/A>&gt;<\/p>\n<p>Oliveros,<br \/>\nPauline. \u201cBye Bye Butterfly.\u201d 1965. Web. 12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/open.spotify.com\/track\/3sqvayIhGIvWcYG6G1pf8m\">http:\/\/open.spotify.com\/track\/3sqvayIhGIvWcYG6G1pf8m<\/A>&gt;<\/p>\n<p>Paine,<br \/>\nGarth. \u201cGesture and Morphology in Laptop Music Performance.\u201d<br \/>\n2008. Web. 8 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.activatedspace.com\/papers\/files\/gesture-and-morphology-in-laptop-performance.pdf\">http:\/\/www.activatedspace.com\/papers\/files\/gesture-and-morphology-in-laptop-performance.pdf<\/A>&gt;<\/p>\n<p>\u201cRed<br \/>\nbox (phreaking).\u201d <i>Wikipedia<\/I>.<br \/>\nWeb. 
5 September 2011.<br \/>\n&lt;<a HREF=\"https:\/\/secure.wikimedia.org\/wikipedia\/en\/wiki\/Red_box_%28phreaking%29\">https:\/\/secure.wikimedia.org\/wikipedia\/en\/wiki\/Red_box_%28phreaking%29<\/A>&gt;<\/p>\n<p>Rohrhuber,<br \/>\nJulian. \u201c[livecode]<br \/>\nanalogue live coding?\u201d<br \/>\nEmail to livecode list. 19 February 2011. &lt;<\/p>\n<p><a HREF=\"http:\/\/lists.lurk.org\/mailman\/private\/livecode\/2011-February\/001176.html\">http:\/\/lists.lurk.org\/mailman\/private\/livecode\/2011-February\/001176.html<\/A>&gt;<\/p>\n<p>Schloss,<br \/>\nW. Andrew. \u201cUsing Contemporary Technology in Live Performance: The<br \/>\nDilemma of the Performer.\u201d <i>Journal of New Music Research<\/I>.<br \/>\n2002. Web. 11 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/people.finearts.uvic.ca\/~aschloss\/publications\/JNMR02_Dilemma_of_the_Performer.pdf\">http:\/\/people.finearts.uvic.ca\/~aschloss\/publications\/JNMR02_Dilemma_of_the_Performer.pdf<\/A>&gt;<\/p>\n<p>Sethares,<br \/>\nWilliam. \u201cRelating Tuning and Timbre.\u201d Web. 11 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/sethares.engr.wisc.edu\/consemi.html\">http:\/\/sethares.engr.wisc.edu\/consemi.html<\/A>&gt;<\/p>\n<p>\u201cSonic<br \/>\nArts Union.\u201d <i>UbuWeb<\/I>. 1971. Web. 15 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.ubu.com\/sound\/sau.html\">http:\/\/www.ubu.com\/sound\/sau.html<\/A>&gt;<\/p>\n<p>Trigaux,<br \/>\nRobert. \u201cThe bible of the phreaking faithful.\u201d <i>St<br \/>\nPetersburg Times<\/I>.<br \/>\n15 June 1998. Web. 5 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.sptimes.com\/Hackers\/monhackside.html\">http:\/\/www.sptimes.com\/Hackers\/monhackside.html<\/A>&gt;<\/p>\n<p>\u201cThe<br \/>\nTransparent Tape Music Festival.\u201d <i>SF<br \/>\nSound.<\/I><br \/>\nWeb. 12 September 2011. 
&lt;<a HREF=\"http:\/\/sfsound.org\/tape\/oliveros.html\">http:\/\/sfsound.org\/tape\/oliveros.html<\/A>&gt;<\/p>\n<p>\u201cThe<br \/>\nTruth About Lie Detectors (aka Polygraph Tests)\u201d <i>American<br \/>\nPsychological Association.<\/I><\/p>\n<p>5 August 2004. Web. 5 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.apa.org\/research\/action\/polygraph.aspx\">http:\/\/www.apa.org\/research\/action\/polygraph.aspx<\/A>&gt;<\/p>\n<p>United<br \/>\nStates v. Scheffer , 523 U.S. 303. Supreme Court of the US, 1998.<br \/>\n<i>Cornell<br \/>\nUniversity.<\/I><br \/>\nWeb.16 June 2011.<br \/>\n&lt;<a HREF=\"http:\/\/www.law.cornell.edu\/supct\/html\/96-1133.ZS.html\">http:\/\/www.law.cornell.edu\/supct\/html\/96-1133.ZS.html<\/A>&gt;<\/p>\n<p>Vanhanen,<br \/>\nJanne. \u201cVirtual Sound: Examining Glitch and Production.\u201d<br \/>\n<i>Contemporary<br \/>\nMusic Review <\/I>22:4<br \/>\n(2003) 45-52. Web. 11 September 2011.<\/p>\n<p>&lt;<a HREF=\"http:\/\/dx.doi.org\/10.1080\/0749446032000156946\">http:\/\/dx.doi.org\/10.1080\/0749446032000156946<\/A>&gt;<\/p>\n<p>Weidenbaum,<br \/>\nMark. \u201cIn London, Noise = Noise.\u201d <i>Disquiet<\/I>.<br \/>\n7 May 2010. Web. 12 September 2011.<br \/>\n&lt;<a HREF=\"http:\/\/disquiet.com\/2010\/05\/07\/charles-celeste-hutchins\/\">http:\/\/disquiet.com\/2010\/05\/07\/charles-celeste-hutchins\/<\/A>&gt;<\/p>\n<p>Wishart,<br \/>\nTrevor. <i>Audible<br \/>\nDesign<\/I>.<br \/>\nOrpheus the Pantomime Ltd, 1994. Print.<\/p>\n<div ID=\"sdfootnote1\">\n<p><a CLASS=\"sdfootnotesym\" NAME=\"sdfootnote1sym\" HREF=\"#sdfootnote1anc\">*<\/A>\u201cze\u201d<br \/>\nand \u201chir\u201d are gender neutral pronouns<\/DIV><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Phreaking When the local 2600 group, started organising BrumCon 2007, which took place on 3 May 2008, I asked if I could present a musical set. They had never done such a thing before, but they agreed. 
2600 is a hacker group (\u201c2600 Meetings\u201d) with roots in phone phreaking (Trigaux), so I decided to reference &hellip; <a href=\"https:\/\/www.celesteh.com\/blog\/2011\/09\/15\/dissertation-draft-solo-live-electronic\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Dissertation Draft: Solo Live Electronic Pieces<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"activitypub_content_warning":"","activitypub_content_visibility":"","activitypub_max_image_attachments":4,"activitypub_interaction_policy_quote":"anyone","activitypub_status":"","footnotes":""},"categories":[1],"tags":[76,122],"class_list":["post-181","post","type-post","status-publish","format-standard","hentry","category-uncategorised","tag-celesteh","tag-thesis"],"_links":{"self":[{"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/posts\/181","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/comments?post=181"}],"version-history":[{"count":1,"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/posts\/181\/revisions"}],"predecessor-version":[{"id":2380,"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/posts\/181\/revisions\/2380"}],"wp:attachment":[{"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/media?parent=181"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/categories?post=181"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.celesteh.com\/blog\/wp-json\/wp\/v2\/tags?post=181"}],"curies":[{"name":"wp",
"href":"https:\/\/api.w.org\/{rel}","templated":true}]}}