The Preservation, Emulation, Migration, and Virtualization of Live Electronics For Performing Arts - An Overview of Musical and Technical Issues
Centers for electronic music creation now face serious difficulties in reperforming important works considered part of their repertoire. They have come to realize the fragility of works based on electronic modules, which are subject to technological change. Sustainability can be achieved in four different ways, which are examined in this article: preservation, emulation, migration, and virtualization. We discuss these issues and show the first steps toward solutions taken at IRCAM in the framework of the European project CASPAR.
Categories and Subject Descriptors: H.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing—Abstracting
methods; H.5.5 [Information Interfaces and Presentation]: Sound and Music Computing—Modeling, methodologies and
techniques
General Terms: Documentation, Reliability.
Additional Key Words and Phrases: Preservation, performing arts using electronics, reperformance
ACM Reference Format:
Bonardi, A. and Barthélemy, J. 2008. The preservation, emulation, migration, and virtualization of live electronics for performing
arts: An overview of musical and technical issues. ACM J. Comput. Cultur. Heritage 1, 1, Article 6 (June 2008), 16 pages. DOI =
10.1145/1367080.1367086 http://doi.acm.org/10.1145/1367080.1367086
1. INTRODUCTION
Many researchers, users, and composers, especially in electronic studios, have become aware of the
fragility of musical pieces using electronics [Teruggi 2001]. As Bernardini and Vidolin [2005] have
noted, the situation is quite paradoxical: “real-time/performed electro-acoustic music (also known as
live electro-acoustic music) is currently facing a serious sustainability problem: while its production is
indeed considered very recent from the music history point of view, several technological generations
and revolutions have gone by in the meantime”.
This observation may be applied to all artistic works using electronics, not only to electro-acoustic music. Correctly reperforming the most important pieces created in the studios of institutions has become crucial as these institutions try to find a balance between the constitution of a repertoire and the promotion of creation [Tiffon 2005; Roeder 2006].
This work was partially supported by the European Community under the Information Society Technologies (IST) program of the 6th FP for RTD – project CASPAR, contract IST-033572.
The authors are solely responsible for the content of this article. It does not represent the opinion of the European Community, and the European Community is not responsible for any use that might be made of data appearing therein.
Authors’ address: A. Bonardi, J. Barthélemy, STMS, Ircam-CNRS, Institut de Recherche et de Coordination Acoustique/Musique – 1, place Igor-Stravinsky – 75004 Paris, France; email: {alain.bonardi, jerome.barthelemy}@ircam.fr.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org.
© 2008 ACM 1556-4673/2008/06-ART6 $5.00 DOI 10.1145/1367080.1367086 http://doi.acm.org/10.1145/1367080.1367086
ACM Journal on Computing and Cultural Heritage, Vol. 1, No. 1, Article 6, Publication date: June 2008.
Whereas the preservation of music has been studied and practiced for many years on a wide range of media, from manuscripts to instruments, improving the sustainability of live electronic pieces has only recently become a pressing issue. Several recent publications show that many institutions are concerned. A European project named CASPAR (Cultural, Artistic, and Scientific Knowledge for Preservation, Access, and Retrieval) was launched in 2006, bringing together 17 partners, including IRCAM, on the general topic of the preservation of digital data. This project intends to address three different communities by developing three different testbeds: one for scientific knowledge, one for cultural heritage, and one for performing arts.
This article presents an overview of the musical and technical issues raised by the preservation of electronic modules for the performing arts. In the first part, we define the scope of our study and detail the materials involved and the underlying logic of performances. The second part is dedicated to the features of performance works that use electronics from the point of view of preservation activities, and the implications regarding their fragility and sustainability. In the third part, we describe the underlying paradigms of the electronic parts of this kind of creation. The fourth part gives an overview and classification of current activities: preservation, emulation, migration, and virtualization of electronic modules. The last part presents several leads for further developments in the field.
Table I.
Configuration | Example of Work
Tape diffusion | Gesang der Jünglinge by Karlheinz Stockhausen, opus 8, for electronic and concrete sounds (1955–1956)
Solo instrument and live electronics | Anthèmes II by Pierre Boulez, for violin and live electronics (1997)
Soloists, ensemble and live electronics | Répons by Pierre Boulez, for six soloists, chamber ensemble and live electronics (1981–1984)
Dance and music performance | L’écarlate, dance performance designed by choreographer Myriam Gourfink, music by Kasper Toeplitz (2001)
Opera with live sound transformations | K, music and text by Philippe Manoury (2001)
Theatre with live sound transformations | Le Privilège des Chemins by Fernando Pessoa, stage direction by Eric Génovèse, sound transformations by Romain Kronenberg (2004)
Theatre and image generation | La traversée de la nuit by Geneviève de Gaulle-Anthonioz, stage direction by Christine Zeppenfeld, real-time neural networks and multi-agent systems by Alain Bonardi (2003)
Musical and video performance | Sensors Sonic Sights (S.S.S.), music/gestures/images with Atau Tanaka, Laurent Dailleau and Cécile Babiole (performed since 2004)
Installation | Elle et la voix, virtual reality installation by Catherine Ikam and Louis-François Fléri, music by Pierre Charvet (2000)
For example, in Jupiter, Manoury has written a precise score that indicates both the instrumental part, in classical notation, and the behavior of the electronic modules, often described in musical terms. The composer thus delivers a kind of global score, trying to integrate all performing aspects within a single document.
2.2.2 Active Modules. As either hardware (analog devices) or software (patches), active modules provide control and signal-processing functions. Musicians now make intensive use of graphical languages such as Max/MSP or PureData to implement the required processes. In these frameworks, patches are windows in which processes are represented by virtual boxes connected by wires, a metaphor for the hardware devices used in physics labs. These tools enable real-time processing and rapid prototyping: a process can be set up or dismantled quickly.
For example, the boxes in the left column of Figure 1 are subpatches that combine into a kind of electronic orchestra: “reverb” is the reverberation module, “fshift” is the frequency shifter (a signal process that adds and subtracts a certain frequency to the partials of the input signal), and so on.
The horizontal row of numbers (from 0 to 13) at the top of the patch is a series of buttons that trigger the beginning of the corresponding sections of Jupiter. Other boxes are faders to adjust sound levels (input and output), start/stop buttons, and so on.
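The behavior of a module like “fshift” can be sketched in a few lines of ordinary code. The Python fragment below is our own illustration (function names are ours, not Max/MSP’s): it implements ring modulation, which produces both the sum and the difference frequencies; a true frequency shifter keeps only one of the two sidebands, typically by means of a Hilbert transform.

```python
import math

SR = 44100  # sample rate in Hz

def sine(freq, n, sr=SR):
    """Generate n samples of a unit-amplitude sine wave at freq Hz."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def ring_modulate(signal, carrier_freq, sr=SR):
    """Multiply the input by a carrier sine: every partial f of the input
    is replaced by the pair f + fc and f - fc.  A true frequency shifter
    ("fshift") keeps only one of the two sidebands, typically by using a
    Hilbert transform to cancel the unwanted one."""
    return [x * math.sin(2 * math.pi * carrier_freq * i / sr)
            for i, x in enumerate(signal)]

# A 440 Hz tone modulated by a 100 Hz carrier has energy at 340 and 540 Hz.
out = ring_modulate(sine(440, 4096), 100)
```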
2.2.3 Technical Documentation. Technical documentation often includes schemata, instructions,
drawings, test files, and such. For example, Figure 2 illustrates the loudspeaker implementation and
Figure 3 the audio-MIDI setup for Jupiter. This conceptual configuration will be adapted to the concert hall where Jupiter is to be reperformed; it should help audio engineers with the practical implementation of the loudspeakers.
Fig. 1. A screenshot of the main patch of Jupiter by Philippe Manoury (musical assistant Serge Lemouton).
(3) The lack of care in preserving electronic parts. For many years, libraries have kept scores, but composers have been responsible for preserving their own electroacoustic or live electronics parts, as Hannah Bosma explains in the framework of the Donemus/NEAR institution [Bosma 2005]. This is one of the main reasons behind a finding of Michael Longton’s survey “Record Keeping Practices of Composers” [Longton 2004]: 47% of composers have lost files they considered valuable due to hardware or software obsolescence.
(4) The practical conditions of performance. Changing performing conditions may produce unexpected
results. For instance, performing with slightly different devices or in different places always requires
additional time for tests and often requires modifications.
(5) When applicable, the empirical making of patches. The complexity of patches sometimes makes
them very unstable. For instance, in Max/MSP, it is well known that processing priority depends on the position of the objects; moving an object in a patcher may therefore strongly change the results produced. Figure 4, a screenshot of a Max/MSP patch by Olivier Pasquet, musical assistant at IRCAM, gives an idea of this complexity.
On the one hand, each time a work is played, performers face the issue of authenticity, trying to establish a kind of fidelity to the materials and to previous reference performances (which may belong to opposing interpretation traditions), famous or not. This requires storing the results as sound samples in order to keep a minimal memory of previous performances, but that may be insufficient to evaluate a reperformance. Indeed, authenticity criteria seem quite difficult to define, since composers have different conceptions of authenticity. Musical assistant Serge Lemouton reports how the same process of migrating Max/MSP patches (from the NeXT environment to the Macintosh), applied to several works at IRCAM, led to varied reactions: composer Philippe Manoury considered the new result too close to the original, while composer Michael Jarrell found it too far from it.
On the other hand, composers often want to modify their works or performers intend to adapt a
work to other configurations. Maintenance of software patches is a very difficult task since they are
not structured as programs, for instance. Therefore, slight modifications of patches a few months after
completing them may become quite laborious. During an interview in September 2006, Andrew Gerzso,
who is Pierre Boulez’s musical assistant, indicated to us that important patch revisions for technical reasons are opportunities for musical transformations. This was the case, for instance, with Anthèmes II for violin and live electronics (1997) by Boulez, before its recording.
Fig. 5. Adding two signals, with a Max/MSP patch (on the left) or a PD patch (on the right).
generated by a MIDI keyboard. Musically speaking, the objective is to enable the performer to become independent of the electronic part: in a traditional piece including electro-acoustics, the performer is subordinate to the electro-acoustic part, constrained to follow the timing of events that are either prerecorded or strictly dependent on a time scale expressed in milliseconds. The aim is to allow the musical performer(s) to lead the electronic parts, either by specific gestures (e.g., MIDI pedal strokes) or through more elaborate automatic MIDI event-following algorithms. This situation is generally described as score following. Numerous similar devices have been developed, for example, the electronic violin by Max Mathews or the SuperPolm by Suguru Goto.
The future of musical works using these devices is obviously extremely fragile, linked to the future of
the hardware device itself. Emulation or migration of the device is not a trivial task. One can imagine
a simple microphone, coupled with a software system that is able to detect the pitch of the instrument,
and thus able to send NoteOn/NoteOff events similar to those of MIDI. The first obvious problems are
those of pitch detection based on audio analysis, which is not yet completely reliable, and of NoteOff event detection (the end of the note), which is reliable on a MIDI flute but not straightforward using audio analysis. Moreover, the calculation of pitch introduces a delay in the triggering of events, a delay that does not exist with electromechanical devices.
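The audio-analysis approach just described can be made concrete with a minimal sketch, here a naive autocorrelation pitch detector in Python (all names and thresholds are our own illustrative choices). It finds the period of a clean monophonic frame, but says nothing about NoteOff detection or latency, which is precisely where the difficulties lie.

```python
import math

SR = 44100  # sample rate in Hz

def detect_pitch(frame, sr=SR, fmin=60.0, fmax=1000.0):
    """Naive autocorrelation pitch detector: return the frequency whose
    lag maximises the autocorrelation of the frame, or None when the
    frame is (nearly) silent.  Thresholds are arbitrary."""
    if max(abs(x) for x in frame) < 1e-3:        # crude silence gate
        return None
    lo, hi = int(sr / fmax), int(sr / fmin)
    best_lag, best_corr = None, 0.0
    for lag in range(lo, min(hi, len(frame) // 2)):
        corr = sum(frame[i] * frame[i + lag] for i in range(len(frame) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sr / best_lag if best_lag else None

# A pure 440 Hz tone is detected to within a few cents; real instrumental
# signals are far less cooperative, which is the problem noted above.
tone = [math.sin(2 * math.pi * 440 * i / SR) for i in range(2048)]
pitch = detect_pitch(tone)
```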
The optimal solution to this problem can only be envisaged if a complete description of the events and of their resulting effects can be achieved, that is to say, through a complete virtualization of the underlying process.
More recently, the field of computer-human interfaces has further influenced developments in all
graphical languages (Max/MSP/Jitter, PureData, etc.), including new objects for interface purposes
(e.g., multiparameter graphical interfaces). Most of these interfaces rely on common graphical codes and on the general mouse-and-window paradigm, so their obsolescence is tied to the obsolescence of those paradigms and graphical codes.
4.3 Difficulties
Electronic music’s close association with signal processing also has drawbacks.
Difficulties in handling time. Curiously, though patches are used for musical purposes, no coherent time paradigm is available, since patches were originally designed as transfer functions in the field of signal processing. For instance, in Max/MSP, the only way to specify events according to a predefined chronology is to use timelines and timed cue lists, interfaces that enable messages to be set and then sent at precise times (Figure 6). Time is treated here as in a classical sequencer. This means that temporalities such as chronological time, asynchronous time, measured time, or flexible time relations with events performed in real time can only be handled through ad hoc solutions, with no possibility of universal formalization.
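The cue-list model can be sketched as follows. This is our own illustrative Python, not Max/MSP’s actual timeline API: messages are attached to absolute times and flushed in order as a clock advances, exactly the “classical sequencer” view of time described above.

```python
import heapq

class CueList:
    """Toy cue list: messages are registered at absolute times (in ms)
    and delivered in chronological order as the clock advances.  This
    mirrors the sequencer time model only; it is not any actual
    Max/MSP object or API."""

    def __init__(self):
        self._heap = []
        self._count = 0                      # tie-breaker: insertion order

    def schedule(self, time_ms, message):
        heapq.heappush(self._heap, (time_ms, self._count, message))
        self._count += 1

    def advance(self, now_ms):
        """Return every message whose scheduled time has been reached."""
        due = []
        while self._heap and self._heap[0][0] <= now_ms:
            due.append(heapq.heappop(self._heap)[2])
        return due

cues = CueList()
cues.schedule(0, "section-1 start")
cues.schedule(1500, "open reverb")
cues.schedule(1500, "fader output 0.8")
fired = cues.advance(1600)                   # all three cues are now due
```

Flexible or performer-driven temporalities do not fit this model: the clock only moves forward, which is why patches resort to ad hoc solutions.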
The restrictions due to patents. Real-time processing of an image or a sound is strongly based on the
extraction of signal descriptors; there may be patents on some of these descriptors, making the situation
more difficult to handle. Moreover, the patent owners are in a position to claim that the preservation process constitutes a form of reverse engineering, and therefore to dispute it.
5.2 Emulation
Emulation is certainly one of the most difficult approaches to reperformance. Bernardini and Vidolin [2005] cite the example of Stockhausen’s Oktophonie, which requires an Atari 1040 ST computer that is no longer produced. Atari emulators run on other computers, but nobody knows whether the Notator program used by Stockhausen will run on an emulator, though communities of users may be able to help. Drawing on this example, the authors stress the power of communities versus the unreliability of companies.
The emulation of hardware by software devices is also difficult. Bullock and Coccioli [2005] provide an interesting example involving composer Jonathan Harvey: his piece Madonna of Winter and Spring, for large orchestra and three synthesizers, requires a Yamaha TX816 for sound synthesis. The authors plan to emulate the synthesizer with PureData patches.
—Sound processing modules. Gerzso’s approach is based on basic, standard modules that are universally known and scientifically described in a unique way, such as frequency shifters, comb filters, and ring modulators.
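The point about standard modules can be made concrete: a feedback comb filter, for instance, is fully specified by a difference equation, independently of any software implementation. A minimal Python sketch (names are ours):

```python
def comb_filter(signal, delay, feedback):
    """Feedback comb filter, y[n] = x[n] + feedback * y[n - delay]:
    a module entirely specified by two parameters, independently of
    any particular software implementation."""
    out = []
    for n, x in enumerate(signal):
        out.append(x + (feedback * out[n - delay] if n >= delay else 0.0))
    return out

# An impulse produces an exponentially decaying echo train spaced
# `delay` samples apart: 1.0, then 0.5 at sample 10, 0.25 at 20, ...
impulse = [1.0] + [0.0] * 99
echoes = comb_filter(impulse, delay=10, feedback=0.5)
```

Because the description is implementation-independent, the module can be regenerated in any future environment from the equation alone, which is the whole argument for virtualization.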
Fig. 8. The score of the beginning of Anthèmes 2, for violin and live electronics by Pierre Boulez. (With permission of Universal Edition, Vienna, Austria.)
Fig. 9. The symbolic schema of the beginning of Anthèmes 2 for violin and live electronics by Pierre Boulez, completed by Andrew Gerzso. (With permission of Universal Edition, Vienna, Austria.)
—Spatialization modules. With the help of Olivier Warusfel (Head Researcher of IRCAM Room Acous-
tics Team), Andrew Gerzso has identified rather universal descriptors for which values can be induced
from current patches, for instance, source direction, level of direct source, level of first reflections.
—Score-following modules. Nothing is specified in Gerzso’s documentation except what the module
should be able to do.
These effects have been added to the score as parts, exactly as if they were instrumental parts. The first part is notated “Inf. Rev.”, for the infinite reverberation effect. The second part is notated “Sampler IR”, for the sampler completed by infinite reverberation, and the third, “Sampler”, for a sampler without reverberation. Each of these three parts is completed by spatialization effects notated on a single line directly below it. The last part is dedicated to the frequency shifter.
The score follower has the role of a conductor in this score: it is responsible for triggering events at the right time and is notated using circled numbers.
As one may notice, each part is notated using age-old musical paradigms such as staves, notes, measures, and other symbols, keeping as close as possible to traditional music notation. Any person able to read music and familiar with musical performance can read this score and get a precise idea of the expected result without knowing anything about its particular implementation (Figure 9).
At GRAME, Yann Orlarey and his research team [Orlarey et al. 2002] have explored other kinds of formalisms, using algebra to describe block-diagram languages. This led to the Faust system, which allows for the mathematical description of signal-processing functions, up to the compilation of the corresponding modules in C++. Figure 10 shows part of a reverberator description: a monophonic reverberation described in a high-level algebraic language (given in the top yellow rectangle). To aid understanding, a block schema is also given that shows eight comb filters set in parallel, followed by four serial allpass filters.
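This topology, a parallel comb bank followed by serial allpass filters, can itself be sketched directly. The Python below is our own illustration of that block diagram; the delay and gain values are arbitrary placeholders, not those of the actual Faust code.

```python
def comb(signal, delay, feedback):
    """Feedback comb: y[n] = x[n] + feedback * y[n - delay]."""
    out = []
    for n, x in enumerate(signal):
        out.append(x + (feedback * out[n - delay] if n >= delay else 0.0))
    return out

def allpass(signal, delay, gain):
    """Schroeder allpass: y[n] = -gain*x[n] + x[n-delay] + gain*y[n-delay]."""
    out = []
    for n, x in enumerate(signal):
        xd = signal[n - delay] if n >= delay else 0.0
        yd = out[n - delay] if n >= delay else 0.0
        out.append(-gain * x + xd + gain * yd)
    return out

def reverb(signal, comb_delays, allpass_delays):
    """Sum a parallel comb bank, then chain serial allpasses, mirroring
    the block diagram described in the text: combs in parallel, then
    allpasses in series.  Delay and gain values are placeholders."""
    mixed = [sum(v) / len(comb_delays)
             for v in zip(*(comb(signal, d, 0.7) for d in comb_delays))]
    for d in allpass_delays:
        mixed = allpass(mixed, d, 0.5)
    return mixed

# An impulse fed to the network yields a diffuse, decaying tail.
impulse = [1.0] + [0.0] * 2047
tail = reverb(impulse,
              [1116, 1188, 1277, 1356, 1422, 1491, 1557, 1617],
              [225, 556, 441, 341])
```

The same structure that Faust expresses algebraically is here expressed procedurally; the point is that the abstract description, not any one realization, is what deserves preservation.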
Other teams have examined the automatic generation of Max/MSP patches. Paolo Bottoni and his colleagues at the University of Rome La Sapienza [Bottoni et al. 2006] have studied the use of planning agents to transform abstract goal states specified by the performer into potentially complex sequences of Max/MSP control messages, using a programming language named GO/Max.
The CASPAR project will strive to develop an adequate model, building on the results of previous projects like InterPARES. Concepts like integrity, identity, and provenance, though useful, are not sufficient to solve this issue.
7. CONCLUSION
We have explained how fragile performance works based on electronics are when reperformance is considered. Many researchers, users, and composers in the field of performing arts require sustainability, which may be achieved through four complementary activities: preservation, emulation, migration, and virtualization.
Preservation based on standard formats (text format for documentation, WAV for audio files, etc.) is necessary but not sufficient, whereas emulation and migration, though sometimes useful, require a great deal of work for provisional solutions. Virtualization, which is based on abstract descriptions, is the only way to regenerate electronic modules; it will require ontologies of signal processing that are not yet available. The CASPAR project will work on these aspects, especially on a contemporary arts testbed, to describe complex artistic works in their production and organization.
ACKNOWLEDGMENTS
Our thanks to Michael Fingerhut, Andrew Gerzso, Serge Lemouton, Denis Lorrain, and Philippe
Manoury.
REFERENCES
BACHIMONT, B., BLANCHETTE, J.-F., GERZSO, A., SWETLAND, A., LESCURIEUX, O., MORIZET-MAHOUDEAUX, P., DONIN, N., AND TEASLEY, J. 2003. Preserving interactive digital music: A report on the MUSTICA research initiative. In Proceedings of the 3rd International Conference on WEB Delivery of Music (WEB’03). Leeds, England.
BERNARDINI, N. AND VIDOLIN, A. 2005. Sustainable live electro-acoustic music. In Proceedings of the International Sound and
Music Computing Conference. Salerno, Italy.
BOSMA, H. 2005. Documentation and publication of electroacoustic compositions at NEAR. In Proceedings of the Electroacoustic
Music Studies Network International Conference (EMS’05). Montreal, Canada.
BOTTONI, P., FARALLI, S., LABELLA, A., AND PIERRO, M. 2006. Mapping with planning agents in the Max/MSP environment: The
GO/Max language. In Proceedings of the International Conference on New Interfaces for Musical Expression. Paris, France.
BULLOCK, J. AND COCCIOLI, L. 2005. Modernising live electronics technology in the works of Jonathan Harvey. In Proceedings of
the International Computer Music Conference. Barcelona, Spain.
DE SOUSA DIAS, A. 2003. Transcription de fichiers MusicV vers Csound au travers de Open Music. In Proceedings of the 10th
Journées d’Informatique Musicale. AFIM, Montbéliard.
GRAEF, A., KERSTEN, S., AND ORLAREY, Y. 2006. DSP programming with Faust, Q and SuperCollider. In Proceedings of the Linux Audio Conference. Karlsruhe, Germany.
LONGTON, M. 2004. Record keeping practices of composers: A survey. InterPares 2 Website http://www.interpares.org.
ORLAREY, Y., FOBER, D., AND LETZ, S. 2002. An algebra for block diagram languages. In Proceedings of International Computer
Music Conference (ICMA’02). Göteborg, Sweden.
PUCKETTE, M. 2004. New public-domain realizations of standard pieces for instruments and live electronics. In Proceedings of
the International Computer Music Conference. Miami, FL.
RISSET, J.-C., ARFIB, D., DE SOUSA DIAS, A., LORRAIN, D. AND POTTIER, L. 2002. De Inharmonique à Resonant sound spaces. In
Proceedings of the Journées d’Informatique Musicale. AFIM, Marseille, Gmem.
ROEDER, J. 2006. Preserving authentic electroacoustic music: The InterPARES Project. In Proceedings of the IAML-IASA
Congress. Oslo, Norway.
TERUGGI, D. 2001. Preserving and diffusing. J. New Music Res. 30, 4.
TIFFON, V. 2005. Les musiques mixtes: entre pérennité et obsolescence. Musurgia XII/3, Paris.
Received November 2006; revised May 2007, September 2007; accepted November 2007