
Performance state

May 1st, 2014

During performance, I enter a special mode of being; something I will refer to as ‘performance mode’. What is a mode, and what makes this mode different from other states?

Entering the Performance State

For me, the transition takes place automatically. No special ritual has to be followed, beyond setting up the necessary equipment, focusing, and beginning. However, the performance itself can be seen as a ritual. Environmental elements of course come into play: for the focusing to be effective, a certain degree of silence is needed; also, if other people are present, they have to be ‘benevolent’ towards the performance. The feeling of disturbing somebody can seriously undermine the performance state.

Sensory perception is altered when in performance mode.

Reflecting on personal experiences, I can distinguish three main changes:

1) Auditory perception is increased dramatically; incoming audio information is treated differently than in other states. Less analyzing and interpreting takes place; associative reaction to sounds is automatically prioritized.

It functions as feedback for the control of musical action, and also as source of inspiration, especially in improvisation.

2) Visual perception decreases dramatically; the eyes are either fully or half closed – but I couldn’t tell which, as conscious interpretation of visual information is minimized and reserved for certain controlling tasks. When leaving the performance state, I cannot say what I have been looking at for most of its duration. This only changes when transitions take place that need visual coordination, like changing instruments, or when a change has to be made in the system where no intuitive control affordances are given.

3) Somatosensory perception: proprioception and haptic perception are altered. Certain parts of haptic perception might get prioritized, especially fine / discriminative touch – not only of the hands, but of all connecting points to non-bodily extensions of the system, like for example the lips and tongue while controlling the reed / mouthpiece combination of a wind instrument. Conscious attention, however, is given only in a training situation, or during ‘problem situations’ – for example, discrepancies between an expected tone result on the saxophone and the actual fingering situation.

Proprioception is heightened and includes – or at least seems to include – certain parts of the hybrid instrument that are perceived as extensions of the body, or as part of the hybrid system of body & instrument. This includes heightened attention to the position of my own body, especially relative to the theremin; the position of the saxophone relative to the microphone; and the feet relative to control switches / pedals.

In gestural control, the orientation and position of the sensed body parts relative to the sensor (or including the sensor) are also part of this heightened proprioceptive attention.

This altered perception in the performance state has certain consequences for the demands, design & functioning of the hybrid instrument and the performance environment.

1) The auditory information at the position of the performer needs to either reflect the salient musical elements well (good monitoring), or – with bad monitoring – it should at least be possible to deduce these elements from the actual auditory sensation, in combination with experience of previous situations and knowledge of how the auditory content is deformed.

2) ‘Performance blindness’ has consequences for the design of visual feedback.

3) Placing all important extension elements into the ‘force field’ of hybrid proprioception requires a certain ergonomic arrangement.

The ‘performance blindness’ might be related to transformed cognitive processes during improvisation, which in my opinion might also be seen as a dense network of moments of creativity – and in the moment that creative thought appears, visual perception is diminished.

However, consciously looking at objects is important at certain moments during performance. As when driving a car, many actions can be ‘deferred’ to muscle memory while visual cognition deals with a wide stream of passing landscape. But at a certain moment, an acoustic warning signal might appear, and visual perception now has to look for an indication of its source (‘blinking light’) and then access the displayed information (‘fuel low’). This is a different kind of looking than the ‘passing landscape’ mode; I call it the ‘state checking’ mode. It also appears during performance / improvisation, especially when electronics are involved that use visual elements for state information. In group improvisation too, a broad stream of gestures might be picked up from the other players, informing and influencing us on an unconscious level; at certain moments, however, visual cues or signs might be given, which have to be dealt with differently.

Switching between different modes of perception might also establish different modes of performance; the immersed, flow-like experience might change into a more self- and situation-conscious mode. This might also be recognized by the audience.

Quick switching of states and modes – including *back* into the previous (for example, flow-like) mode – is thus an important capacity; it enables the incorporation of certain (possibly mode-changing) cues and pieces of information without breaking the flow of the performance.

In my opinion, this ability can be trained by experiencing and practicing it, preferably in a performance setting (or at least in performance mode).

Just as the ability to remember one’s dreams, and to exit and return to the dream state, can be trained – shortcutting the long process that falling asleep and reaching the REM phase normally takes.


STEIM / Sonology experiment – wild mangling

December 30th, 2012

Experimental improvisation to explore my electro-acoustic setup of gesture controllers and sound manglers (M4L, Max, MiniBee) for live electro-acoustic improvisation.
Whistling & humming at the kitchen table transformed into some wilder manipulated sounds; extensively pitch-shifting. No pre-recorded sounds.
I think this might be the first experiment where I use the quadruple version of my sample-mistreating system.

STEIM / Sonology experiment – Pataphone, Theremin, Soprano Sax

December 30th, 2012

Experimental improvisation to explore my electro-acoustic setup of gesture controllers and sound manglers (M4L, Max, MiniBee) for live electro-acoustic improvisation.
Layering Pataphone, Theremin (focus not on tuning 😉) and Soprano Sax, I’m building the soundscape live and manipulating it (with the gestural controllers), sampling the instruments and improvising along. No pre-recorded sounds.

STEIM/Sonology experiment – Soprano improvisation

December 30th, 2012

Experimental improvisation to explore my electro-acoustic setup of gesture controllers and sound manglers (M4L, Max, MiniBee) for live electro-acoustic improvisation.
This is one of the first times I tried it with my soprano saxophone.
Controlling and playing modes are still quite separate here; interaction takes place by reacting to the automated mangling sequence created with the gesture controller, which is then again sampled, mangled, and reacted to.
New version should allow for better integration of modes.
No pre-recorded sounds.

Research description

October 16th, 2012

The goal of my research is to develop an intuitive control and sound-manipulation system for improvised electro-acoustic music performance.

My goal is to improve and expand certain aspects of my instrumental practices and live setup, making it a flexible and expressive system for both solo and ensemble situations.
Specifically, I want to focus on the development of DSP software and a control system, involving gestural and hardware control.
It should allow for the intuitive, spontaneous and instantaneous action / reaction needed in the context of free improvisation.

In my live performance practice, I usually combine a set of instruments in the traditional sense (like saxophone, bass clarinet, theremin) with electronic sound processing.
Originally, I transformed their sounds with hardware effects – a modified delay stomp box, a reverb and a multi effects pedal – into soundscapes.

In recent years, however, I have envisioned a more complex sound universe. When I brought in Max/MSP (which I previously used primarily in the context of live visuals) together with Max for Live, a huge range of sound-shaping and transformation possibilities opened up.
The source sound material that is fed into the system is ‘harvested’ live by sampling the output of my ‘traditional’ instruments. I repurpose these sounds into loops (broken by bufferShuffling) or create layers that accompany further improvisation while morphing and evolving in (semi-)automated and generative ways.
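The ‘bufferShuffling’ idea – cutting a live-recorded buffer into slices and replaying them in scrambled order – can be sketched roughly as follows. This is a minimal illustration in Python, not the actual Max/MSP implementation; all names and the slice-based approach are assumptions for the sake of the example.

```python
import random

def buffer_shuffle(buffer, slice_len, seed=None):
    """Cut `buffer` (a list of audio samples) into slices of
    `slice_len` samples and return them concatenated in random
    order -- a crude stand-in for the bufferShuffling treatment."""
    rng = random.Random(seed)
    slices = [buffer[i:i + slice_len] for i in range(0, len(buffer), slice_len)]
    rng.shuffle(slices)
    return [sample for s in slices for sample in s]

# 'Record' eight samples, then replay them as shuffled 2-sample slices.
recorded = [0, 1, 2, 3, 4, 5, 6, 7]
shuffled = buffer_shuffle(recorded, slice_len=2, seed=42)
```

The sound material itself is untouched; only its temporal order is broken, which is what keeps the result recognizably related to the source.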

This temporal reorganizing of the sounds, together with filtering / shaping, mixing and further DSP, results in an increasingly complex system of variable elements that will have to be controlled in a live performance context.

The use of general-purpose hardware controllers for this purpose, however, makes the system much more detached from mind and body than, for example, the saxophone reed, the keys of the saxophone, or even the knobs of my DD3 delay stomp box – a problem I often encountered in performances where I combined computer-based DSP with ‘traditional’ instruments.
Especially in the context of (free) improvisation it is necessary to facilitate a link as direct as possible between the involved ‘colonized neurons’ (Joel Ryan during Sonic Acts 2012) and the musical action.

My research thus not only involves developing the DSP further, but also condensing the complexity of possible parameters by meaningful mapping into an intuitive control system. By adding the possibilities of gestural controls to my system, I hope to create an expressive and meaningful control-‘vocabulary’, giving my system the properties of a true instrument which will require practicing, experimentation and performance-experience.

Jan Klug, STEIM blog notes about Instrument Lab #3

September 3rd, 2012

I remember a day in spring.

I sat on a bench at the Amstel near STEIM and noticed the pink wires coming out of the palm of my hand, running to a small device attached to its back, and felt that I’d probably have to try to not behave in an overly suspicious way.

Me and the device are harmless, I tried to communicate non-verbally to the numerous pedestrians and the curious police car.

We’re both just part of a hybrid instrument, consisting of some software, two of these wireless MiniBee controllers, and two theremins.

Those buttons at the other end of the pink wires – they snuggled so comfortably in the palm of my hand that I forgot about them when I went out of the dark STEIM studio.

* * *

The Instrument Lab #3 started with introductions to the rich history of STEIM by Jonathan Reus and Takuro Mizuta Lippit (including intriguing footage from the archives) and a tour through the building. This effectively set the tone for the week to come, and fired up the inspiration.

When we first presented our initial research ideas to the Steim staff and our fellow residents, I had the plan to develop a (possibly wireless) device which lets me apply and control effects on the sounds of my theremin, without having to remove my hands from the theremin antennas.

Packing the equipment for my trip to STEIM, I had felt sorry for my WiiMote, thinking “you’ll have to be disassembled, poor little thing”.

In the workshop of Frank Balde however, we discussed the disadvantages of Bluetooth in a performance setup (though generally, the hacking of poor little devices was encouraged). This, and Marije Baalman’s workshop on the Sense/Stage system, saved the WiiMote, but also rendered it superfluous for the time being.

The MiniBee controllers of the Sense/Stage system are easily configurable and remarkably reliable – a seductive combination that made me employ two of these for my setup, one for each hand. Basically, they’re mini-Arduinos with XBee radio communication and onboard accelerometers. This way, I could use the built-in bending and turning possibilities of my hands to communicate something to the software, while the theremin was still able to concentrate on the distance of my hands from the pitch and volume antennas.
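Turning the raw accelerometer readings of a hand-mounted sensor into usable bending/turning values can be sketched like this: with the hand roughly at rest, gravity dominates the three axes, so pitch and roll tilt angles can be derived from them. The axis conventions and scaling here are assumptions for illustration, not the Sense/Stage specification.

```python
import math

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees, estimated from the gravity
    components measured by a 3-axis accelerometer at rest."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

# Flat hand: gravity entirely on the z axis -> both angles are zero.
pitch, roll = tilt_angles(0.0, 0.0, 1.0)
```

Values like these could then be scaled into control ranges for the effect parameters, independently of the theremin’s distance sensing.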

Attaching the MiniBees and their batteries at the back of my hands by means of some white rubber band (widely used to play jumping games on schoolyards) was far from elegant, but I had promised my hands not to force them into gloves while playing the theremin. The same rubber band however, in combination with some strategically applied gaffer tape, also allowed me to fix the three-button contraption in the palm of my hand, so my fingers could click them without confusing the theremin.



The choice of the pink wire for connecting the buttons to the MiniBees made the whole look like a medical device somebody had lost in a park, but unfortunately the extensive wire-and-button boutique of STEIM was closed for the Easter weekend, so I had to go with whatever was to be found in the box that Sam Andreae and I had quickly assembled while the lab was still open.

* * *

To give all the accelerometer and button data a meaningful purpose was like juggling with a couple of agitated pythons, but eventually I managed to map everything I needed through a patient Max patch into my Max4Live device, whose main job it was to record the live input from my instruments (pataphone, sax, theremin-brothers) into a buffer and replay that in a controlled chaotic way, while applying some filters.

Not too much to ask, one could say.

* * *

After the Easter weekend, the STEIM staff returned to check the progress of our projects. The evening of the same day also brought the Concept stage, where we were to present the outcome in a concert, so there was not much time to make big adjustments.

Still I found the comments by Kristina Andersen and Daniel Schorno very useful.

Daniel reminded me that I had to watch out for RSI problems if I used buttons in the way I did, and suggested using specific gestures to do the switching of parameters or even different modes of operation. And indeed, during the performance, I realized that my fingers, which were supposed to swiftly dance over the buttons to control recording and effects, were actually clinging to them like rugby players with a personality disorder.

Maybe that was because I didn’t have a chance yet to teach that part of my brain which extends into my fingers to do its new job (which dramatically differs from pressing saxophone keys), but I’ll definitely keep Daniel’s comments in mind.

And Kristina’s enthusiasm gave me the confidence that I actually could use this system for a performance, even though the software was still figuring out what it was meant to do and my skills for playing this new instrument were not even ordered yet by my consciousness.

* * *

Recapitulating this intensive week of STEIM Instrument Lab #3 of course also brings back memories of my fellow residents and their projects.

Sam Andreae, whose distortion-enhanced saxophone made me jealous for its charming wildness; Hasan Hujairi with his electronically expanded oud, who also played a nice duo with Luigi Pizzaleo and his amazing metal sculpture interface thing; Iris van der Ende with her beautiful harp sounds that even make the stars twinkle; Tim Thompson with his Kinect-powered Space Palette instrument that transformed the audience into happily smiling performers.

And of course the STEIM staff, inhabiting this wonderful place and making the center of Amsterdam move counter-clockwise by boldly rowing against the stream of cultural decay.

I’d like to thank them all.

* * *

Ten years ago, I visited STEIM for the first time – for a research week with Italian director Andrea Paciotto. This residency dramatically changed the course of my life, as I learned for the first time about all the things that I now use on a daily basis, like Max/MSP, sensors, live video and sound processing, Ableton Live and self-made pasta sauce.

Now, as if a loop with some transformation has been applied to my life, I will return to STEIM – and, to my pleasure, for a longer period.

During the Instrument Lab #3, I learned about the Instruments & Interfaces Master course that STEIM and the Institute of Sonology offer together.

The setup for the concert with Knalpot

After the Instrument Lab, I continued working on my setup, to prepare it for use during the theremin festival at the Grand Theatre (another endangered species), where I partially played solo, partially improvised together with Knalpot – and the feeling crept over me that I didn’t want to let go, that I had to research further how to really use this interactive system, how to make it part of my setup for solo and ensemble improvisations.

So I applied for the Instruments & Interfaces master course, and was accepted.

This week, I’ll start into this new phase of STEIM-influenced life.

Exciting times these are!


[previously posted on the STEIM blog]