Seine hohle Form: Artistic Collaboration in an Interactive Dance and Music Performance Environment

Joseph Butch Rovan
Center for Experimental Music and Intermedia (CEMI)
University of North Texas
USA

Robert Wechsler and Frieder Weiß
Palindrome Inter-media Performance Group
Nürnberg
Germany

Abstract. Composers and choreographers face unique and largely unexplored problems as they collaborate on interactive performance works. Not the least of these problems is settling on schemes for mapping the various parameters of human movement to those possible in the world of sound. The authors' collaborative piece, Seine hohle Form, is used as a case study in the development of effective strategies for mapping dance gesture to real-time music synthesis. The perceptual correlation of these mappings is stressed, albeit through varying levels of abstraction.

Introduction

The use of choreographic gesture as a control component in music composition/performance for dance has been a concern of choreographers and musicians for almost half a century. As electronic instrument builders of the twentieth century struggled to devise effective interfaces for their unique instruments, choreographers such as Merce Cunningham offered the surprising option of extending the concept of gestural control to the world of dance. The Cage/Cunningham experiments of the 1960s, which used Theremin technology to sense body motion, are only one example of a line of experimentation that continues today.

When musical control was relinquished to dance gesture, the union of wireless (non-contact) gesture with sound raised many intriguing questions. Even though the technology has progressed to the point where current dance systems rely on sophisticated video tracking instead of the antennae of a Theremin, the cause-and-effect relationship between sound and gesture has remained an elusive problem. To this day, most interactive dance/music systems have relied on fairly simple relationships between gesture and sound, such as the basic presence or absence of sound, volume control, and possibly pitch control.

The lack of progress can to some extent be explained by the tenuous threads of communication between the computer music and dance fields. Indeed, although much work has been done recently in the world of computer music by composers/performers developing and composing for gestural controllers, the world of dance has remained largely isolated from these developments.

Today's tools, however, provide the possibility of rich relationships between dance and music in interactive systems. Real-time software for music synthesis and digital signal processing (e.g., MAX/MSP, developed by Miller Puckette and David Zicarelli, and jMAX, developed at IRCAM in Paris) is readily available and runs on standard desktop and laptop computers (Macintosh and PC Linux). Likewise, comparable developments in video image tracking/processing as a source of gestural information (e.g., Palindrome's EyeCon system) have given composers and choreographers powerful tools with which to harness the expressive gestures of dance. Still, the remarkable lack of communication between the two fields, together with the often narrow concept of interaction in this context, has, in the authors' opinion, limited the expressive possibilities of such collaborative work.

Working alternately in Nürnberg, Germany, and Denton, Texas, Palindrome Inter-media Performance Group and the Center for Experimental Music and Intermedia (CEMI) have explored these issues in their ongoing work together. A body of interactive dance/computer music works is emerging, as well as a dance-specific vocabulary of gesture mappings between movement recognition and real-time digital sound synthesis.

Mapping

In an interactive system, sensors are responsible for ‘translating’ one form of energy into another. Specifically, the physical gestures of dance are translated via sensors, analog/digital converters, and so on into a signal representation inside a computer. Once the gesture is available as an abstract value expressed as computer data, however, the important question arises: what do we do with it?

‘Mapping’ is the process of connecting one data port to another, somewhat like an early telephone operator's patch bay. In our case, the term ‘mapping’ has a very specific connotation – it means the application of a given set of gestural data, obtained via a sensor system, to the control of a given sound synthesis parameter. The dramatic effectiveness of a dance, however, invariably depends on myriad factors: movement dynamics of body parts and torso, movement in space, location on stage, direction of focus, use of weight, muscle tension, and so on. And although sensors may be available to detect all of these parameters, the question remains which of them to apply in a given setting, and then to which of the equally many musical parameters to assign them.
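
To make the term concrete, here is a minimal sketch, in Python purely for illustration (the piece itself uses EyeCon and MAX/MSP, not Python), of the simplest case: a one-to-one scaling of a single gestural value onto a single synthesis parameter. The controller range and cutoff range are hypothetical.

```python
def map_linear(value, in_lo, in_hi, out_lo, out_hi):
    """One-to-one mapping: scale a sensor reading from its input
    range onto the range of a single synthesis parameter."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))              # clamp out-of-range readings
    return out_lo + t * (out_hi - out_lo)

# e.g., route the activity level of a 'dynamic field' (0-127 from the
# tracking side) onto a filter cutoff in Hz (hypothetical ranges):
activity = 96                              # example sensor reading
cutoff_hz = map_linear(activity, 0, 127, 200.0, 8000.0)
```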

Herein lies the basic quandary. Making these mapping choices, it turns out, is anything but trivial. Indeed, designing an interactive system is somewhat of a paradox. The system should have components (dance input, musical output) that are obviously autonomous, but which, at the same time, must show a degree of cause and effect that creates a perceptual interaction. Unless the mapping choices are made with considerable care, the musical composition and choreography can easily end up being slaves to the system. In some cases, interaction might not occur at all. Not in a technical sense – the movement will indeed control the music – but in the sense that no one (except perhaps the performers) will notice that anything special is going on!

Some have argued that it is largely irrelevant whether or not an audience is aware that interaction is taking place (through technological means). Even if the artist is completely alone in experiencing the interactivity, for some it may be enough that the system of interaction ‘privately’ affects the performer's expression within the piece. The audience is thus vicariously part of the interactive experience.

Palindrome Inter-media Performance Group has pursued a different approach. We have attempted instead to design a degree of transparency into our collaborative works, the pursuit of which logically raises two possibilities.

One is for the choreographer and composer to create their work especially for a given technological system. Not, of course, that every dance gesture needs to trigger every musical event – there is actually considerable room for experimentation in this regard. Palindrome's performing experience has shown that, generally speaking, when only part of a piece is really clear and convincing in its interactive relationships, audiences tend to accept additional more complex relationships. They become ‘attuned,’ as it were, to the functionality of the piece.

The second possibility, which does not exclude the first, entails developing deliberate and targeted mapping strategies. This is a more complicated but rewarding approach, since it means that the technical system is born out of a need to serve the artistic vision, instead of the other way around. Herein lies the central focus of our work.

Mapping strategies should focus and harness the decisive qualities or parameters of the movement and sound, while acknowledging the perceptual dimensions of dance and music. The perception of human movement or sound can, after all, differ vastly from the visual or acoustic information actually present. That is, the video camera and computer (or other sensor system) ‘see’ dance differently than we do.

While this distinction may seem somewhat arcane, it lies in fact at the heart of our quest. The first step in assigning mappings is to identify these ‘decisive parameters’ within the dance (or perhaps a small scene thereof). The possibilities that EyeCon makes available are outlined below.

From this point, the work may go in two directions. On the one hand, the tendency of the choreographer is to seek out parallels between these chosen movement artifacts and the control parameters available within the music system (in our case, within the MAX programming environment). On the other hand, there are also compositional concerns. Hence, the choreography may be designed or redesigned to achieve musical phrases according to the demands of the composition.

While the amount of give-and-take in such a collaboration varies (not to mention the direction thereof – who is ‘giving’ and who is ‘taking’), some letting go of habituated methods of working and collaborating is inevitable. Either or both collaborating artists generally need to modify their artistic priorities.

Still, in the best case, such a collaborative endeavor stands to generate a vocabulary, even a semiotic structure, for dance-music communication with enormous expressive potential.

Gestural Coherence

Just as is true of the sound world, we do not perceive the human body in motion in a very objective or scientific way. What we perceive in dance is highly filtered and often illusory – the choreographer and dancer work hard to achieve this effect. A given movement quality, such as ‘flow,’ may dominate our perception of a phrase so thoroughly that the individual shapes of the body go unnoticed. At another moment, geometrical shapes may override our perception of how the body is moving through space. And of course sound – particularly musical sound – has a powerful effect on how we perceive dance.

Our projects in Palindrome have explored these issues of perception and movement. In particular, we have concentrated on the notion of ‘gestural coherence’; that is, the perceptual coherence between sound and the movement that generates it. This search has led us to a number of working postulations about when an audience will perceive movement and sound as causally linked.

Application: Seine hohle Form

The words ‘seine hohle Form’ are a fragment from the poem ‘Gesichter’ by Rainer Maria Rilke, roughly translating to ‘its hollow form.’ As a starting point for this interactive work, premiered at CEMI in November 2000, the title words serve as an emblem for the challenge of creating a musical work that exists only when a dancer moves, and a dance in which movement must be approached both as functional, music-creating gesture and as expressive or decorative element. The collaboration between music and dance on this piece was complete; that is, the movement and sound were not designed separately, but interactively.

The choreography is affected by the live generation of sound through the use of sensors and real-time synthesis, and the resulting music is in turn shaped by these movements. There are no musical cues for the dancers, since without their movements the music is either nonexistent or, at other times, missing key elements. This method of working not only forced an inherent degree of improvisation upon the group, but also prompted a sharing of artistic roles in the working process: dancer became musician, composer became choreographer, and so forth.

Seine hohle Form is not the first interactive computer-controlled dance. As mentioned earlier, interactive dance has a long history. Recent important contributions include the work of David Rokeby, Richard Powell, Troika Ranch, Antonio Camurri, among others. Our work may be unique, however, in the extent to which multi-dimensional mapping strategies are applied within a framework of gestural coherence.

4.1 Technique

In about half of Palindrome's works, the dancers' gestures are tracked using the EyeCon video-tracking system, designed by Frieder Weiß of Palindrome Inter-media Performance Group. EyeCon is based on frame-grabbing technology, i.e., the capturing of video images in the computer's memory. By grabbing and processing frames of a dancer's movement, the system converts gestures into computer data that can then be mapped onto the control of music or other media. For Seine hohle Form, three small video cameras were set up above and diagonally in front of the stage [Figure 1].

Figure 1

4.1.1 Movement Tracking and Analysis (EyeCon)

A look at the EyeCon user interface [Figure 2] reveals five open control windows, labeled Elements, Control, Sequencer, Midi Monitor, and licht.Cfg (the name of the currently loaded EyeCon file). The licht.Cfg window contains the live video image, in which the current positions of the two dancers can be seen. The green lines around the female dancer are ‘touchlines,’ the position-sensitive components of EyeCon; when a dancer ‘touches’ one, a sound (or video) event is triggered. The green box around the male dancer in the background of the image is a ‘dynamic field,’ the simplest form of EyeCon's movement- and shape-sensing apparatus.

Figure 2

Thus, multiple, fundamentally different parameters of dance can be applied independently and simultaneously.

The analysis features of the EyeCon video-tracking system include the following six movement parameters (a rough sketch of how two of them might be computed follows the list):

  1. Changes in the presence or absence of a body part at a given position in space.
  2. Movement dynamics, or amount of movement occurring within a defined field.
  3. Position of the center of the body (or topmost, bottommost, left or rightmost part of the body) in horizontal or vertical space.
  4. Relative positions (closeness of one dancer to another, etc.) of multiple dancers (using costume color recognition).
  5. Degree of right-left symmetry in the body – how similar in shape the two sides of body are.
  6. Degree of expansion or contraction in the body.
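
EyeCon's internal algorithms are not published in this paper, so the following Python sketch is only a plausible approximation, using simple frame differencing, of how parameters 2 and 3 might be computed from grabbed video frames; the thresholds and the use of a reference background image are assumptions.

```python
import numpy as np

def movement_dynamics(prev, curr, threshold=24):
    """Parameter 2 (sketch): amount of movement within a field,
    estimated as the fraction of pixels that changed noticeably
    between two consecutive grayscale frames."""
    changed = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    return changed.mean()                  # 0.0 (still) .. 1.0 (all moving)

def body_center(frame, background, threshold=24):
    """Parameter 3 (sketch): the body's center in horizontal and
    vertical space, taken as the centroid of pixels that differ
    from a reference background image."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                        # no body in view
    return xs.mean(), ys.mean()            # (horizontal, vertical)
```
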
4.1.2 Digital Signal Processing (DSP) and Mapping (implemented in MAX/MSP)

The real-time sound synthesis environment was designed in MAX/MSP by Butch Rovan. A PC running EyeCon is linked to a Macintosh PowerBook running MAX/MSP, sending the gestural data gathered by EyeCon to the real-time sound synthesis parameters.
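
As noted above, EyeCon's interface includes a ‘Midi Monitor’ window, which suggests (though the text does not state outright) that the link between the two machines carries MIDI controller data. A minimal sketch of the receiving side, assuming python-rtmidi and hypothetical controller assignments, might look like this:

```python
import rtmidi  # python-rtmidi, standing in for the MAX/MSP receive side

# Hypothetical controller numbers for three gesture streams:
GESTURE_CCS = {20: "dynamics", 21: "center_x", 22: "height"}

def on_midi(event, data=None):
    message, _delta = event
    if len(message) != 3:                  # only 3-byte messages matter here
        return
    status, cc, value = message
    if (status & 0xF0) == 0xB0 and cc in GESTURE_CCS:  # control change
        normalized = value / 127.0
        print(GESTURE_CCS[cc], normalized)  # hand off to the mapping layer

midi_in = rtmidi.MidiIn()
midi_in.open_port(0)           # whichever port carries EyeCon's data
midi_in.set_callback(on_midi)
```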

The MAX/MSP program for Seine hohle Form is a musical synthesis environment that provides many control parameters, addressing a number of custom-built DSP modules that include granular sampling/synthesis, additive synthesis, spectral filtering, etc. [Figure 3].

Figure 3

All mapping is accomplished within the MAX/MSP environment, and changes throughout the work.

Control of the musical score to Seine hohle Form is accomplished through a cue list that centrally enables and disables the various EyeCon movement-analysis parameters, mappings, and DSP modules. Both EyeCon and MAX/MSP software components are organized as a series of ‘scenes,’ each describing a unique configuration of video tracking, mapping, and DSP. Scene changes for both computers are synchronized and can be initiated by a single keystroke from either station.
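
One way to picture this organization is the data structure sketched below: each scene bundles the tracking elements, mappings, and DSP modules it needs, and a single cursor advances both stations through the list. This is an illustrative Python rendering, not the actual EyeCon/MAX-MSP implementation; all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    """One 'scene': the tracking elements, mappings and DSP modules
    active in a given configuration of the piece."""
    name: str
    tracking: list = field(default_factory=list)  # e.g. touchlines, dynamic fields
    mappings: list = field(default_factory=list)  # gesture -> synthesis parameter
    dsp: list = field(default_factory=list)       # active synthesis modules

cue_list = [
    Scene("opening duo", ["touchlines"], ["limb_extension->tone"], ["additive"]),
    Scene("scene 5", ["dynamic_field"], ["torso_speed->grain_params"], ["granular"]),
    # ... 23 scenes in all
]

cursor = 0
def next_scene():
    """Advance to the next configuration (a single keystroke in performance)."""
    global cursor
    cursor = min(cursor + 1, len(cue_list) - 1)
    return cue_list[cursor]
```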

4.2 Examples from Seine hohle Form

The following description of excerpts from Seine hohle Form is certainly not complete; even within the described scenes there is a good deal more going on than reported here. Nevertheless, it may serve as an introduction to our working methods. In addition, a RealPlayer movie excerpt of Seine hohle Form is available from the Palindrome web site.

The twelve-minute Seine hohle Form is divided into 23 scenes. Some coincide with clear changes in the choreography, such as the end of the female solo and the beginning of the male solo, and noticeably alter the music; others are extremely subtle. In the opening scene, the first dancer (female) controls nine relatively clear and isolated additive synthesis tones with the extension of her limbs into the space around her (an example of one-to-one mapping). An algorithm in MAX/MSP modifies the pitch and timbre slightly with each extension. Meanwhile, the second dancer (male), standing with his back to the audience, uses small, whole-body movements to cut off quieter, whiter sounds, which build continuously as long as he is not moving.
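
The exact synthesis recipe is not given in the paper, but the kind of tone described might look like the following sketch: a fixed set of harmonic partials whose pitch and spectral weighting are nudged slightly on every new extension. All numbers are hypothetical, and the code is illustrative Python rather than the actual MAX/MSP patch.

```python
import numpy as np
import random

SR = 44100  # sample rate in Hz

def additive_tone(base_hz, dur=2.0, n_partials=8):
    """One triggered tone (sketch): a sum of harmonic partials whose
    pitch and timbre drift slightly with each new limb extension."""
    base = base_hz * (1.0 + random.uniform(-0.01, 0.01))   # slight pitch drift
    t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
    tone = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        amp = (1.0 / k) * random.uniform(0.8, 1.2)         # slight timbre drift
        tone += amp * np.sin(2 * np.pi * base * k * t)
    env = np.minimum(1.0, 10.0 * (1.0 - t / dur))          # sustain, then quick fade
    return 0.2 * tone * env
```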

In Scene 5, the male dancer manipulates a stream of loud, aggressive sound fragments derived through granular sampling. He activates the sounds through equally aggressive side-to-side torso movements. The speed of his movements shapes the parameters of the granular sampling engine continuously, with many interactions among the incoming gesture parameters (an example of convergent mapping).
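
Convergent mapping, in contrast to the one-to-one case above, lets several gesture features jointly shape each synthesis parameter, so that no single movement quality moves any parameter in isolation. The sketch below illustrates the idea with two hypothetical gesture inputs (both normalized to 0..1) feeding three granular parameters; the weights and ranges are invented for illustration.

```python
def convergent_grain_mapping(speed, dynamics):
    """Convergent mapping (sketch): two gesture features jointly drive
    three granular-sampling parameters, including interaction terms."""
    grain_size_ms = 120.0 - 100.0 * (0.7 * speed + 0.3 * dynamics)  # faster -> shorter grains
    density_hz = 5.0 + 45.0 * speed * dynamics              # both must rise for dense clouds
    pitch_scatter = 0.02 + 0.3 * dynamics * (1.0 - speed)   # interaction term
    return grain_size_ms, density_hz, pitch_scatter
```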

In Scene 8, the male dancer finally rises from his low stance and approaches the audience. Here, his height (the position of his topmost body part above the floor) controls the parameters of a real-time spectral filter, producing a thinner and more continuous musical texture the higher he rises. The effect is much subtler and less direct than what has come before and lends a sense of disorientation to his part, softening his role following the opening solo and thus opening the way for the female dancer to begin her own solo.
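
The filter's design is not detailed in the paper; one plausible reading, sketched below in Python, is an FFT-based filter whose low-frequency attenuation point rises with the dancer's height, thinning the spectrum as he stands. The cutoff range and attenuation factor are assumptions.

```python
import numpy as np

def spectral_thin(frame, height, sr=44100):
    """Spectral filter (sketch): the higher the dancer's topmost point
    (height normalized to 0..1), the more low-frequency energy is
    attenuated, thinning the texture."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    cutoff = 100.0 + height * 2000.0       # rising body raises the thinning point
    spectrum[freqs < cutoff] *= 0.1        # attenuate rather than hard-zero
    return np.fft.irfft(spectrum, n=len(frame))
```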

Conclusions and Future Work

The basic technical system described in this paper has been operational for almost a year and has been tested in performances in Munich, Dresden, Nürnberg, and Buenos Aires, as well as at the Society for Electro-Acoustic Music in the United States (SEAMUS) 2001 National Conference in Baton Rouge, Louisiana and, most recently, at the International Computer Music Conference (ICMC) in Havana, Cuba. It has, however, become increasingly clear to us that our current process for gestural mapping could be improved by creating a clearer hierarchy among the parameters that govern the relationship between the video-tracking system (EyeCon) and the sound synthesis software (MAX/MSP). In particular, we are working to segregate more clearly the tasks assigned to each of the system's components.

Of course, making use of the inexhaustible number of possible mappings between movement and sound requires an understanding of the different and potentially conflicting goals that drive composers and choreographers. In the past, traditional models of collaboration between composers and choreographers have subjugated either dance or music, or sidestepped the question altogether by removing all correlation between movement and sound. In a collaborative work such as Seine hohle Form, a new opportunity exists, one that results neither in subjugation nor in conceptual abstraction. Rather, this ‘conflict’ in artistic goals is seen in the light of heightened interactivity (in the traditional inter-personal sense) by making the work of choreographer and composer inter-dependent rather than contingent, fused instead of segregated.

Acknowledgments

Material from this paper has previously appeared in COSIGN2001: Conference on Computational Semiotics, Amsterdam, September 2001; ISEA2000: 10th International Symposium on Electronic Art, Paris, France, December 2000; and the 9th New York Digital Salon (Leonardo Magazine). The work will also be presented at the Body/Machine Conference at York University, October 2001; and the Cast01 Conference on Communication of Art, Science and Technology, September 2001 / GMD - Schloss Birlinghoven, Sankt Augustin / Bonn.

A version of this paper is also scheduled for presentation at the MTAC 2001: Multimedia Technology and Applications Conference held on 7-9 November 2001 at the University of California, Irvine and for subsequent publication in the conference proceedings.

The authors would like to thank the Center for Experimental Music and Intermedia (CEMI) at the University of North Texas and the 01plus Institute for Art, Design and New Media, in Nürnberg, Germany, for their assistance and support. Thanks also to Helena Zwiauer and Laura Warren (both of whom danced and contributed to the choreography) and Jon Nelson.

About the Authors

Butch Rovan is a composer, performer, and researcher on the faculty of the College of Music at the University of North Texas (UNT), where he directs the Center for Experimental Music and Intermedia (CEMI). Prior to joining UNT, he founded the computer music studios at Florida State University and was ‘compositeur en recherche’ at the Institut de Recherche et de Coordination Acoustique/Musique (IRCAM) in Paris, where he co-founded the ‘Groupe de Discussion à propos du Geste Musical,’ an ongoing interdisciplinary research group focusing on gesture and its application to the control of real-time synthesis. He is the recipient of numerous awards and fellowships, including the Lester Horton Award for outstanding modern dance score, the George Ladd ‘Prix de Paris’ and the Stephan Wilkes Prize for Polish Music research. Most recently his work ‘Continuities II’ was awarded honorable mention in the 1998 Bourges International Electroacoustic Music Competition. He joined Palindrome in 2000, writing for the group Seine hohle Form, an interactive piece of music (written in MAX/MSP and controlled by EyeCon) that can only exist when two dancers perform the dance of the same name. The piece was selected by both the Society for Electro-Acoustic Music in the United States (SEAMUS) and the International Computer Music Conference (ICMC) for performance at their 2001 conferences.

Robert Wechsler studied biochemistry and molecular genetics at Iowa State University in the US. A transfer to dance and choreography (at the State University of New York at Purchase, BFA, and New York University, MA) did not lessen his interest in science. In New York City (1975-1984) he trained under the tutelage of Merce Cunningham and worked in various New York-based modern dance companies. He became a founding member of the Palindrome Dance Company in 1982. For his choreography he was awarded a Fulbright Fellowship (1983) and grants from the Marshall Fund (1984), the Epstein Foundation (1984) and the city of Nürnberg (1989-present). From 1985 to 1995 he taught dance and choreography at the University of Erlangen in Germany. Starting in 1995 he began a series of collaborative projects with computer engineer Frieder Weiß and in so doing realized a new artistic direction in his work. Palindrome became an ‘Inter-media Performance Group’ – dance seen as an element in a dynamic relationship with other media, realized or augmented by computer-driven systems. This shift in focus, and the new generation of work it has precipitated, has been accompanied by international engagements, workshops and critical acclaim, including six European and four US tours as well as a trip in 1991 to Argentina. He has presented his work at numerous scientific conferences, including the International Computer Music Conference (ICMC), the Seventh International Theater Arts Conference, and the first and third International Conferences on Dance and Technology. He has written numerous articles on dance and new media for Leonardo Magazine, IEEE Technology and Society Magazine, Ballet International, Dance Magazine, Dance Research Journal, Nouvelles de Danse and Der Tanz der Dinge. In 2001 he became the first artist-in-residence at the 01-Plus Institute for Art, Design and Media Technology at the State College of Design in Nürnberg, Germany.

Frieder Weiß is a freelance computer engineer working for various companies in Germany and the United States (for example, Bosch and Siemens). He specializes in quality control and computer-imaging systems. He designs software and hardware, and is also a musician who has performed with the groups Thevomefüme, the American Drama Group Europe, and the Nürnberger Jazz Art Ensemble. Together with installation artist Reiner Hofmann, he developed an interactive installation work for the DATEV company called ‘Lichtbild,’ which uses camera-based interactive technology to track the motion of individuals in an entrance hall and convert it into light patterns on the adjacent wall. Starting in 1995 he has worked with Palindrome as Interactive Systems Designer, and together with Robert Wechsler he has conceived and realized dozens of performance and installation projects. He is the author of the EyeCon motion-tracking and analysis software system, regarded as one of the most flexible and user-friendly systems of its kind and used by artists, singers, dancers and theater companies the world over. Frieder Weiß has also designed miniaturized portable devices that allow the individual muscle contractions of a dancer's body to control other media, as well as a system making the dancer's heartbeat audible and available to control other media (such as the tempo of the music). Since spring 2001, he has been director of the Media Laboratory at the 01-Plus Institute for Art, Design and Media Technology at the State College of Design in Nürnberg, Germany.
