GMD - German National
Research Center for Information Technology
Sankt Augustin, Germany
We introduce the generic term musical artifact to describe the different kinds of representations composers use to communicate their music. By musical artifact we understand any kind of material outcome or physical trace of the compositional process used for a performance. Traditionally, the musical artifact is a text composed of musical signs, generally referred to as musical notation. Since the advent of electronic and computer technology, composers have been confronted with significant alternatives to written music. Sound recording, transmission, storage, and reproduction techniques have revolutionized the composition, production, distribution, and perception of music.
Technological Musical Artifact
For about half a century, composers have used sound transformation and synthesis techniques to create compositions stored directly on magnetic tape or other storage media. In classical electronic music, as it appeared in the middle of our century, the musical text was replaced by the magnetic tape, the first veritable technological musical artifact relevant for our considerations. We employ the attribute technological to underline the fact that the specific characteristics of the artifact depend on modern technology (i.e. technology based on electric energy), as opposed to the classical musical artifact, the score, which relies only on older technologies such as writing implements, paper production, and printing.
With the invention of computers, composers potentially possess an even more interesting means of musical representation: the computer program, so far the most general technological musical artifact known. With the rapid proliferation of computer and network technology in the recent past, the infrastructure necessary to compose and communicate music by means of computer programs is a reality today. Composers use the Internet to distribute compositions represented as computer programs, which make it possible to generate variants of the music on any computer connected to the network.
Role of the Artifact
We base our observations and interpretations of this development on the notion of the musical artifact because we believe that the characteristics of the representations used to create and communicate music determine to a large extent the possibilities of creation and perception. As a basis for further discussion we shall now review some of the idiosyncrasies of the three musical artifacts already mentioned: the score, the tape, and the program. Our analysis will be rather schematic in the sense that it will concentrate on the prototypical attributes and ignore the many existing hybrid forms of artifacts (e.g. compositions for instruments and tape, or live electronics).
The characteristic of musical notation most relevant for our discussion is the fact that it is a special kind of text. Composers write scores primarily as a basis for the communication with musicians. The musical text can be understood as an incomplete symbolic representation of the music, which needs to be interpreted by musicians in order to be perceived by the audience. Musical texts need interpretation by specialists mainly for two reasons. Firstly, the text has to be deciphered, i.e. completed by relating it to the current cultural conventions of interpretation. Secondly, the out-of-time textual representation of the music has to be translated into an in-time acoustical representation, i.e. into sound.
Listening versus Reading
The interpretation process is a complex cultural phenomenon based on the physical abilities of the human body, a music education system, instrument building, architecture, the social conventions of music presentation and perception, and many other aspects. Writing music implies taking into account the particularities of the interpretation process, which determine to a large extent how written music can be composed and perceived. Let us note here a significant difference between the musical and the literary text: the former generally cannot be read directly by the audience, whereas the latter usually can - the best example being poetry. This is important because although the score is a text, not all possibilities generally associated with the reading process are directly available to the audience. This mainly concerns the ability to freely determine the sequence and tempo of reading. When reading a poem, we decide how fast we read it, how many times we read it, and which parts we repeat. Listening to an interpreted musical text is an essentially linear experience, whereas reading a poem may have a rather non-linear and exploratory nature.
The seemingly banal result of our comparison gains significance when related to one of the utopian concepts of twentieth-century music - the vision of open form. Composers tried to escape the linearity of traditional musical form by creating intentionally ambiguous scores and delegating certain (compositional) decisions to the musicians. As an example we may think of a musical text organized in several threads connected through branching nodes. In a performance the musician is asked to create one "closed" version of the text by realizing one particular reading among the set of possible readings. Thus a composition does not have one definite form but rather defines a set of variants, all of which represent valid instances of the piece.
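The branching organization described above can be sketched as a directed graph of threads; the thread names and connections below are invented purely for illustration, not taken from any actual open-form score:

```python
import random

# A hypothetical open-form score: each thread ends in a branching node
# that offers several continuations. One traversal of the graph yields
# one "closed" reading of the piece.
score = {
    "A": ["B", "C"],   # after thread A, the musician chooses B or C
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],           # terminal threads end the piece
    "E": [],
}

def one_reading(score, start="A", rng=random):
    """Realize one closed version by deciding at every branching node."""
    path = [start]
    while score[path[-1]]:
        path.append(rng.choice(score[path[-1]]))
    return path

print(one_reading(score))  # e.g. ['A', 'C', 'E']
```

Each run of `one_reading` corresponds to one performance: a single valid instance drawn from the set of variants the score defines.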
Perception of Openness
The main drawback of this approach is that the audience is not given a real opportunity to experience the openness, since it will always be confronted with one "closed" version at a time. Only when listening to several variants in succession may we gain a glimpse of the formal potential, by comparing the variants and by reconstructing the decisions taken by the musician - a procedure rarely applicable in practice. Thus only musicians may really experience the formal openness of a musical text, since they are confronted with the alternatives and can take decisions while playing. This problem may be one of the reasons why the quest for open form is no longer a main concern among composers today. We will argue later on that employing another type of musical artifact, one which may be made directly explorable by the audience, could be a solution to this problem.
The magnetic tape allows sound signals to be stored by translating their temporal structure into a spatial representation. Sound recorded on tape can therefore be manipulated in ways that were impossible, and hardly imaginable, before the advent of sound recording technology. The volatile sound is turned into an object that can be manipulated, e.g. rearranged in time by cutting and splicing operations. As opposed to the score, the tape is not a text but an analogue representation of sound pressure fluctuations.
Composers started to use magnetic tapes as musical artifacts when they were seeking more precise control over the timbre and the temporal structure of their music. The tape was the welcome alternative to musical notation and interpretation at a moment in music history when absolute control over the musical material and structure was an important driving force of composition. With electronic music, composition turns from a writing activity into a realizing activity. What is created is not a text for interpretation but the (almost) final surface structure of a composition (i.e. its sonic appearance), inscribed onto the magnetic tape - ready to be reproduced at any later time through playback over loudspeakers. Compared to the score, the tape is a flat representation of the music: only audible information is stored. There is no room for the kind of structural information typical of scores (such as groupings of elements or other meta-information).
In electronic music, sound production becomes part of the compositional process. In the early stages of electronic music the tape recorder was one of the most important tools for sound construction. Transposing, cutting, looping, copying, and mixing procedures were used to build complex sounds out of basic sound material (e.g. sine waves). Even the invention of such procedures became part of the compositional process and shaped the musical imagination - a good example of how the artifact and the tools or procedures employed to produce it influence or even determine the compositional process. With the further development of sound synthesis techniques (e.g. voltage control), the tape-based manipulations lost their significance, but the tape remained essential as a storage medium. Since, in most cases, the compositions could not be created directly in the concert situation due to the technical and logistical complexity of the production process, they were produced and recorded in the electronic studio and played back from tape in concert.
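The studio procedures mentioned above (cutting, looping, and mixing basic sine-wave material) can be sketched digitally in a few lines. This is a minimal illustration of the kind of operations involved, not a reconstruction of any particular studio practice; the frequencies and durations are chosen arbitrarily:

```python
import math

SAMPLE_RATE = 8000  # a low sample rate keeps the example small

def sine(freq, seconds):
    """Basic sound material: a sine wave, as in early electronic music."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

def cut(samples, start_s, end_s):
    """Tape cut: keep only the segment between two time points."""
    return samples[int(start_s * SAMPLE_RATE):int(end_s * SAMPLE_RATE)]

def loop(samples, times):
    """Tape loop: splice copies of a segment end to end."""
    return samples * times

def mix(a, b):
    """Mixing: sum two sounds sample by sample."""
    return [x + y for x, y in zip(a, b)]

# A "complex" sound built from basic material, in the spirit of the
# procedures described above: a sustained tone mixed with a looped cut.
sound = mix(sine(440, 1.0), loop(cut(sine(660, 1.0), 0.0, 0.25), 4))
```

What took hours of physical splicing on tape reduces here to list slicing and concatenation, which is precisely the shift the text describes when these techniques moved into software.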
By a program we understand a sequence of instructions to be executed by a computer. The process resulting from this execution can be perceived and controlled by the user through interface devices (e.g. screen, keyboard, mouse, loudspeaker, microphone). During the last two decades, all sound synthesis and transformation techniques developed in the analogue studio were implemented in software. Many intrinsically digital techniques were added to form the very wide palette of digital signal processing tools available to composers today. But the characteristic of the computer program most relevant for our discussion is the fact that it can be designed to react to the actions of one or more users by means of the mentioned interfaces. Thus music represented as a program may allow the audience to influence the manner in which a composition unfolds.
Music conceived this way establishes a new type of relationship between the composer, the listener, and the composition. Listeners may actively drive the exploration of the composition instead of following the performance of a score or the reproduction of a tape. This is a very interesting characteristic of the program as a musical artifact because it allows the audience to experience the openness and variability of musical form. Although these possibilities have existed in theory since the invention of computers (or even older programmable machines), only the recent developments in multimedia and network technology and the widespread availability of personal computers allow composers to really explore this field. Already today, you can access compositions represented as programs on the Internet and perform them on your computer at home.
The required technological infrastructure already exists on almost every personal computer. It was developed to support multimedia applications and to browse multimedia documents, i.e. documents containing (and applications treating) video and audio information as well as text and graphics. Computers of almost any brand today are equipped with CD-quality audio ports. All modern operating systems include software modules for real-time sound synthesis. This software runs on the computer's central processing unit, so no extra special-purpose hardware is needed. The main reason for including sound synthesis software in standard PCs is to allow for more compact multimedia content. Sound components of multimedia documents only need to carry the instructions necessary to produce the sound signals on the local computer system in real time. The bulky sound signal data itself does not need to be transmitted, which reduces transfer time by several orders of magnitude.
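A rough back-of-the-envelope calculation illustrates the orders-of-magnitude claim. The figures for the control stream are assumed for illustration (event rate and encoding size are invented, not taken from any particular format):

```python
# One minute of CD-quality audio transmitted as raw signal data...
SECONDS = 60
SAMPLE_RATE = 44100   # CD-quality sample rate in Hz
BYTES_PER_SAMPLE = 2  # 16-bit samples
CHANNELS = 2          # stereo

raw_bytes = SECONDS * SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS

# ...versus a hypothetical control stream: say 100 synthesis events per
# second, each encoded in roughly 16 bytes (pitch, time, parameters).
instruction_bytes = SECONDS * 100 * 16

print(raw_bytes)          # 10,584,000 bytes of signal data
print(instruction_bytes)  # 96,000 bytes of instructions
print(raw_bytes // instruction_bytes)  # a factor of about 110
```

Even with these generous assumptions for the control stream, the signal data is around two orders of magnitude larger, which is why synthesizing locally from instructions pays off on a slow network.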
We are not going to argue here about the many still existing deficiencies, restrictions, and incompatibilities of current computer sound technology. It is clear that the situation is far from satisfactory for serious composers today. Our perspective is rather oriented towards the near- and medium-term future, which will see many improvements at the level of software, interface, network, and processor technology. In order to illustrate such a perspective we shall now take a closer look at an interesting example of a composition realized as a program, the "Lexikon-Sonate" by the Viennese composer Karlheinz Essl.
This piano piece, which exists in various versions, was originally conceived for a CD-ROM project initiated by the "Libraries of the Mind", a group of artists and media designers from Vienna. The CD-ROM "ELEX" is an electronic version of Andreas Okopenko's "Lexikon-Roman", a novel written in the format of an encyclopedia. The "Lexikon-Roman", which appeared in book form in 1970, is an early example of hypertext literature. When exploring the open text of the "Lexikon-Roman", readers pass from entry to entry depending on their preferences and current interests. In this way they construct their own version of the novel. The "Lexikon-Sonate" was conceived as music to accompany the exploration of the hypertext. Its design is inspired by this special kind of reading experience, which is characterized by the associative context change from one entry to the next.
When the music feature in "ELEX" is activated, a program is started which produces an endless and never repeating stream of piano music. In this case, the software synthesizer built into the operating system of the computer is used to make the music audible. So both the structure and the sound are synthesized in real time, based on the model of the music created by the composer. While the program is running, a display shows the current state of the piece, indicating which structural modules are active at any given time (fig. 1). Every time the program is started, it produces a different and endless variant of the "Lexikon-Sonate". Let us note here that this very interesting feature is easy to realize when using programs as musical artifacts. But there are even more interesting features we can study in another version of the piece. In the "full version", the entire structure of the composition is exposed to the user. Since the "Lexikon-Sonate" is realized with Max, a visual programming environment specially designed for music and multimedia applications, the composer can decide on the way the music model is presented visually. On the top level of the program, only a set of control buttons is available to configure and activate the composition (fig. 2).
But behind each of the object boxes, sub-programs revealing the next lower level are hidden. By opening these boxes, potentially all levels of abstraction and structuring of the piece become available to an interested and knowledgeable audience (fig. 3). Operating the buttons, switches, and sliders on these lower levels allows for an in-depth exploration or analysis of the piece. This extreme possibility shows the range of conceivable ways to interact with the "Lexikon-Sonate". It reaches from passive listening to extending or even changing the composition by modifying the program.
One of the intermediate steps in this range is the concert performance of the "Lexikon-Sonate", for which the composer devised a special interface allowing the piece to be played from the computer keyboard (fig. 4). Different keys are used to activate and deactivate the different modules of the music model, to operate the sustain pedal, and to introduce pauses. With this interface the piece has been performed together with a real piano player in concerts.
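The behaviour described above (structural modules toggled on and off, with every run producing a different but reproducible variant) can be sketched in a few lines. This is an illustrative model only, not Essl's actual Max implementation; the module names are invented:

```python
import random

# Hypothetical structural modules, loosely in the spirit of a
# generative piano piece; these names are not taken from the work.
MODULES = ["melody", "chords", "trills", "clusters"]

def variant(seed, steps=8):
    """One run of the piece: a sequence of active-module states.

    The same seed always reproduces the same variant; different seeds
    generally yield different variants, as different runs of the
    program yield different versions of the music.
    """
    rng = random.Random(seed)
    active = set()
    trace = []
    for _ in range(steps):
        m = rng.choice(MODULES)
        active.symmetric_difference_update({m})  # toggle the module
        trace.append(sorted(active))             # what a display might show
    return trace

print(variant(1))
```

The `trace` plays the role of the status display mentioned in the text: at each step it records which modules are currently contributing to the stream of music.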
The example of the "Lexikon-Sonate" illustrates many of the novel features of the program as a musical artifact. The most interesting ones are the following:
This shows the need to employ more sensible graphical user interface metaphors. The problem consists in identifying an intuitive and culturally accepted device for exploring complex structures and imaginary spaces. An adequate interface should allow for an immediate and seamless immersion of the audience in the music by reducing the cognitive load of the interface to an absolute minimum. Recent advances in virtual environment technology (Dai et al., 1997) permit the employment of architectural structures to immerse users in virtual worlds, which they explore using their everyday competence in spatial navigation and orientation (Eckel, 1997a). The metaphor is as simple as it is powerful - while moving through an architecturally structured space, the audience explores the music. Structural aspects of the music are related to spaces and their attributes, which allows for the representation of rather complex structural relationships. The architectural vocabulary of spatial organization and our highly developed spatial memory provide the means for an intuitive orientation in a composition. In this sense, virtual architecture becomes a vehicle for the exploration of music.
As an example of this approach we shall present the music installation project "Camera Musica", currently carried out at GMD. The project was first presented as a speculative article in a German contemporary music journal in 1994 (Eckel, 1994). A first proof of concept and feasibility study was carried out at the Banff Centre for the Arts in Canada during a 3-month residency in 1995 (Eckel, 1996; Eckel, 1997a). A first sketch of the installation was realized in 1997 with GMD's CyberStage system (Eckel et al., 1997). "Camera Musica Sketch" was presented at the 1997 International Symposium on Electronic Art in Chicago (Eckel, 1997b).
The CyberStage is GMD's CAVE-like (Cruz-Neira, 1993) audio-visual display system, which integrates a four-sided stereoscopic visual display with an 8-channel spatial auditory display and 6 vibration emitters built into the floor (fig. 5). The CyberStage is a highly immersive display ideally suited for creating virtual environments. Viewer-centered stereoscopic imaging and spatial sound rendering provide for an advanced degree of presence in virtual space (fig. 6). The sketch described here was developed with GMD's Avango VR toolkit and a sound server based on IRCAM's Max/FTS system (Puckette, 1991; Dechelle et al., 1995).
In a first sketch of the virtual music installation, a simple building-like structure (fig. 7) can be explored in the CyberStage immersive audio-visual display system.
The visual scene used in the sketch consists of an architectonic structure composed of free-floating walls of various dimensions and colors. A free-floating ceiling unites the walls and forms an interior space. Some of the walls reach out into open space, thus mediating between inside and outside. The different heights of the walls create permeable sections with varying degrees of spatial continuity. Invisible spot lights are used to articulate the spatial structure and mark points of attraction. The light passing through the gaps between the walls and the ceiling enhances the impression of weightlessness and permeability. Global illumination techniques (radiosity) are used to create a strong sense of spatiality. The music in "Camera Musica Sketch" is conceived as a family of various interrelated musical situations composing, in their interplay, what we may call a musical space. It is this space which is to be made accessible in the installation. The audience can move from one situation to another within this space and slowly explore its special features through the relations between individual situations (fig. 8). Each situation is characterized by certain possibilities of choosing and arranging the musical material, thereby determining the particularity of the situation - its mood, atmosphere, form, and air. Depending on the position and orientation of the user, these choices are made by a program whose development is part of the composition.
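The position-dependent choices described above might be sketched, in a deliberately simplified form, as a nearest-situation lookup. The situation names and coordinates below are invented for illustration and do not describe the actual installation:

```python
import math

# Hypothetical layout: each musical situation occupies a region of the
# virtual architecture, identified here by a centre point in the plane.
SITUATIONS = {
    "entrance": (0.0, 0.0),
    "corridor": (4.0, 0.0),
    "inner_room": (4.0, 5.0),
}

def active_situation(listener_xy):
    """Choose the situation whose centre is closest to the listener.

    A real installation would also weigh orientation and blend between
    neighbouring situations; this sketch keeps only the core idea that
    the listener's position drives the musical choices.
    """
    return min(SITUATIONS, key=lambda s: math.dist(listener_xy, SITUATIONS[s]))

print(active_situation((3.5, 4.0)))  # inner_room
```

As the tracked listener walks through the architecture, such a lookup would switch (or crossfade) the generative processes characterizing each situation, making the spatial exploration and the musical exploration one and the same gesture.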
The most interesting aspect of using programs as musical artifacts is that they allow for a "deeper" representation of the music than the tape or the score, i.e. a representation which comprises the structural relationships of the elements the music is composed of. As these relationships can be exposed to the listeners, only the program as musical artifact has the potential to revolutionize the conception and perception of music. By interacting with the program, the audience has, for the first time, the possibility to actively explore the structural relationships of a composition. As suggested earlier, we expect the potential of computer programs as musical artifacts to give rise to a renaissance of the quest for open and variable form in contemporary music composition. With the possibility of creating a model of a composition which defines not one closed piece of music but a field of possible variants - variants which can be produced by the listeners themselves by exploring (i.e. playing with) the model - composers possess a new and powerful means of music creation.
Cruz-Neira, C. (1993) Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE. Computer Graphics Proc., Annual Conference Series, pp. 135-142.
Dai, P., Eckel, G., Göbel, M., Hasenbrink, F., Lalioti, V., Lechner, U., Strassner, J., Tramberend, H., Wesche, G. (1997) Virtual Spaces: VR Projection System Technologies and Applications. Tutorial Notes, Eurographics '97, Budapest, 75 pages.
Dechelle, F., DeCecco, M. (1995) The IRCAM Real-Time Platform and Applications. Proc. of the 1995 International Computer Music Conference, International Computer Music Association, San Francisco.
Eckel, G. (1994) Camera musica. In: Interaktive Musik, Positionen 21, ed. Gisela Nauck, Berlin, pp. 25-28.
Eckel, G. (1996) Camera Musica: Virtual Architecture as Medium for the Exploration of Music. Proc. of the 1996 International Computer Music Conference, International Computer Music Association, San Francisco.
Eckel, G. (1997a) Virtuelle Architektur als Medium zur Exploration von Musik [Virtual Architecture as a Medium for the Exploration of Music]. In: Architektur/Musik, HDA Dokumente 8, ed. Ritter, R., Haberz, M., Haus der Architektur, Graz, pp. 74-87.
Eckel, G., Göbel, M., Hasenbrink, F., Heiden, W., Lechner, U., Tramberend, H., Wesche, G., Wind, J. (1997) Benches and Caves. In: Bullinger, H.J., Riedel, O. (eds), Proc. 1st Int. Immersive Projection Technology Workshop, Springer, Vienna.
Eckel, G. (1997b) Exploring Musical Space by Means of Virtual Architecture. Proc. of the 8th International Symposium on Electronic Art, School of the Art Institute of Chicago, Chicago.
Puckette, M. (1991) Combining Event and Signal Processing in the Max Graphical Programming Environment. Computer Music Journal 15(3): 68-77, MIT Press.