
Livecoding concerts: Paris (Sat 9th July) and Strasbourg (Mon 11th July)

Two livecoding concerts this weekend in France: Saturday in Paris and then Monday in Strasbourg.

PERFORMANCES FEATURING:

  • Marije Baalman (NL)
  • Charlie Chocolate (NL) (Strasbourg only)
  • Michele Pasin (IT/UK)
  • MCLD (UK)

PARIS, Sat 9th July:
La Generale, 14 Avenue Parmentier (Metro station: Voltaire)
http://lagenerale.fr/
[fb event page] [last.fm event page]

STRASBOURG, Mon 11th July, 8pm--10:30pm:
Place Rouge (parvis de la fac de droit, the law school square) (nearest tube?)
Part of the RMLL free software festival: http://2011.rmll.info
[fb event page] [last.fm event page]

Please do forward, esp to French networks :)

| livecoding |

Awareness and feedthrough in collaborative music-making

Just been in an interesting seminar by Steve Benford, talking about techno-art projects and interaction design. One of the many interesting topics that came up: in musical interactions, how well do we design for awareness and feedthrough? Those terms come from the field of CSCW (computer-supported cooperative work), where researchers have found that in collaborative work it's often beneficial to be aware of what your collaborators are doing moment-by-moment (e.g. maybe I see your cursor moving around as we edit a document remotely at the same time). "Feedthrough", by analogy with "feedback", is more specifically about awareness of how your collaborators have changed the state of the system (e.g. what edits they've just made), or maybe what they're preparing to do, etc.
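
To make the distinction concrete, here's a tiny hypothetical sketch (not any real CSCW toolkit - the class and method names are invented for illustration): when one person edits a shared document, feedback is their own change echoed back to them, while feedthrough is their collaborators being told what just happened to the shared state.

```python
# Hypothetical sketch: feedback vs feedthrough in a shared document.
class Editor:
    def __init__(self, name):
        self.name = name

    def on_feedback(self, text):
        # Feedback: I see the effect of my own action.
        print(f"[{self.name}] my edit applied: {text!r}")

    def on_feedthrough(self, who, text):
        # Feedthrough: I see how a collaborator changed the shared state.
        print(f"[{self.name}] {who} changed the document to: {text!r}")


class SharedDocument:
    def __init__(self):
        self.text = ""
        self.editors = []

    def join(self, editor):
        self.editors.append(editor)

    def apply_edit(self, author, new_text):
        self.text = new_text
        author.on_feedback(self.text)
        for editor in self.editors:
            if editor is not author:
                editor.on_feedthrough(author.name, self.text)


doc = SharedDocument()
alice, bob = Editor("alice"), Editor("bob")
doc.join(alice)
doc.join(bob)
doc.apply_edit(alice, "play :kick, every: 0.5")
```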

In livecoding it's tricky - it's often hard enough for an audience to follow what you're doing, let alone your collaborators (who are busy!). Groups like Powerbooks UnPlugged use text-chat to provide a kind of "backchannel" which is a (poor?) substitute for more direct awareness. Dave Griffiths' Scheme Bricks may be one of the best livecoding interfaces for awareness - the bits of code pulse in different colours as they act, which not only looks über-cool but also helps visually connect the code to the sound.

In a text-based interface the structure is more conceptual than visual, so how would we do similar? Would we be any more "aware" if we saw our collaborators' screens merged into our own, or if the livecode was all done within a single collaboratively-edited document? Would that make a better gig, or a worse one?

When we were interviewed by the BBC about livecoding I remember Alex and Dave seemed a little embarrassed that they didn't use any fancy tech for awareness - "we just listen to each other". It strikes me that this is no deficit: the music should be the focus of the performers' attention, and perhaps too much awareness-by-other-means would be distracting.

When I used to play guitar a lot, my "multimodal awareness" of the bassist (say) was not particularly rich: you get some rhythmic cues (from the movement of the fingers etc) and expressive cues (from facial expressions etc) but most of the awareness is located directly in the sound you're making together.

Livecoding makes this more complex, since the chains of cause-and-effect often become a lot more abstract: rather than pluck a string and hear a note, you type some code and maybe this generates a thousand notes three minutes from now. So it's possible that we need more "awareness mechanisms" to help us manage that. But should those be embedded in the sound we make, contaminating/constraining that sound, or should they be on other channels, distracting us from that sound?
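
As a rough illustration of that stretched cause-and-effect (a hypothetical sketch - the scheduler and play_note are stand-ins, not a real livecoding system): a single typed expression can quietly queue a thousand notes that won't sound until three minutes into the piece, so nothing in the immediate moment reveals what the performer just did.

```python
# Hypothetical sketch: one edit now schedules many notes for much later.
import heapq

def play_note(pitch, time):
    print(f"t={time:7.2f}s  note {pitch}")

event_queue = []  # (time, pitch) pairs, ordered by time

def livecoded_pattern(start_time):
    """One typed expression that quietly queues a thousand future notes."""
    for i in range(1000):
        heapq.heappush(event_queue, (start_time + i * 0.1, 60 + (i % 12)))

livecoded_pattern(start_time=180.0)  # nothing audible until three minutes in

# A real system would dispatch these against a clock; here we just pop a few.
for _ in range(5):
    t, pitch = heapq.heappop(event_queue)
    play_note(pitch, t)
```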

| livecoding | social |