
Path of Patches: Implementing an Evolutionary Soundscape Art Installation

José Fornari

Interdisciplinary Nucleus for Sound Communication (NICS), University of Campinas (UNICAMP), Campinas, São Paulo, Brazil. tutifornari@gmail.com

Abstract. Computational adaptive methods have already been used in Installation art. Among them, Generative artworks are those that value the artistic process, rather than its final product, which can now be of a multimodal nature. Evolutionary Algorithms (EA) can be successfully applied to create a Generative art process that is self-similar yet always new. EAs allow the creation of dynamic complex systems from which identity can emerge. In computational sonic arts, this corresponds to ecological modeling, here named Evolutionary Soundscape. This article describes the development and application of an EA computing system developed to generate Evolutionary Soundscapes in a Multimodal Art Installation. Its physical structure uses paths of forking pipes attached to fans, with microphones that capture audio to feed the EA system that creates the Soundscape. The article describes the EA system; its design in PureData (PD); its connection with the physical structure; its artistic endeavor and final sonic accomplishments. Keywords: evolutionary computation, sonic art, soundscapes.

1 Introduction
In 2009, I presented a 20-hour workshop on adaptive methods for the creation of sonic art installations. I named it aLive Music, and it was part of my participation as advisor on the project of a music student who had been awarded a grant by the Brazilian art entity MIS-SP (Museu da Imagem e do Som, de São Paulo) to develop his own multimodal project using PD (www.puredata.info). In this workshop, I presented a PD implementation of an adaptive sound system that uses principles of Evolutionary Algorithms (EA) for sound synthesis, which I developed in my Ph.D. [8], [9]. Attending this workshop was Fernando Visockis, a young artist eager to learn adaptive computing methods to create artwork installations that could generate Soundscapes [4]. I was later invited to develop an Evolutionary Soundscape System that would be incorporated into an artwork installation named Vereda de Remendos (VR), which translates as Path of Patches. This artwork was awarded the Primeiras Obras prize, promoted by the Brazilian cultural entity CCJRC (Centro Cultural da Juventude Ruth Cardoso). With this fund, the artists could build the installation. They assembled, tested, and finally exhibited it from 14 to 29 August 2010, in São Paulo, Brazil.


This work describes my contribution to the VR development, which involved the design and implementation of the Evolutionary Soundscape System to which the VR artwork's physical structure was connected, and which was therefore capable of successfully generating a continuous synthesis of sound objects that constitute the Evolutionary Soundscape.

1.1 PA: Processual Artwork Defined

VR follows the Conceptual Art philosophy, which differs from most of the art developed during the history of Western culture. For centuries, the aim of any artist was to produce fine artistic objects, and artists studied and developed techniques to build objects of art as best they could. The object was the final product of an artistic endeavor, and the reason and purpose for the existence of any artistic process. However, in the 1950s, probably due to the arrival of digital electronics and computational science, artistic processes came to be aesthetically noticed, slowly equating and sometimes surpassing the static materiality of the final object of art. This caused, in that decade, a breakthrough of new artistic possibilities and experimentations, most of them gathered under the term Conceptual Art. To mention an example, Lucy Lippard, famous writer and art curator, when analyzing the artistic production of Sol LeWitt in the 1960s, wrote that his work with structures was based on the premise that its concept or idea is more important than the final object [13]. This touches the definition of Processual Artwork (PA): a de-materialized expression of conceptual art that is complemented by the notion of Generative Art, here taking the definition of any form of art where a system with a set of defined rules and some degree of autonomy is put in movement [10].

Generative processes had been extensively explored in music and other forms of sonic arts even before the rise of digital computing technology. Around the 1650s, the priest Athanasius Kircher, based on the belief that musical harmony should reflect the natural proportions of physical laws, wrote a book entitled Musurgia Universalis, in which he described the design of a music-generating machine based on these principles [3]. In 1793, Hummel published a system to generate musical scores whose creation is attributed to Wolfgang Amadeus Mozart. This method generated music notation through a random process, based on the numbers given by tossing dice. It embeds most of the later elements of generative art, where a musician can create, by applying simple rules to building blocks (i.e. predefined musical bars), an astonishing amount of compositions. This method was later regarded as Mozart's Dice Game and has influenced many composers, such as John Cage and Lejaren Hiller, who created a musical piece entitled HPSCHD [11]. Generative art is therefore created by adaptive methods, which differ from deterministic and stochastic ones, as adaptive methods can rearrange their algorithmic structure according to the input.

1.2 From PAs to Soundscapes

PAs are seen as adaptive methods for art creation that use generative processes where the artist is the element who provides the aesthetic judgement, constantly


weighing the artistic value of the product, thus guiding the adaptive artistic process of art creation. In sonic arts, the correspondent of such a process would be the design of sonic landscapes, also known as Soundscapes. This term was originally coined by Murray Schafer, who describes it as natural, self-organized processes usually resulting from an immense quantity of sound sources, that might be correlated or not, but convey a unique sonic experience that is at the same time recognizable and yet always original [14]. This refers to the immersive sonic environment perceived by listeners, who can recognize and even be part of it. Thus, a soundscape is also the fruit of the listener's auditory perception. In this sense, a soundscape is categorized by cognitive units, such as: foreground, background, contour, rhythm, space, density, volume and silence. According to Schafer, a soundscape is sonically defined by five distinct categories of analytical auditory concepts, derived from these cognitive units. They are: Keynotes, Signals, Sound-marks, Sound Objects, and Sound Symbols. Keynotes are the resilient, omnipresent sonic aspects, usually unconsciously sensed by the listener's perception; the term refers to the musical concept of tonality or key. Signals are the foreground sounds that arouse the listener's conscious attention, as they may convey meaningful information for the listener. Sound-marks are singular sounds found only in a specific soundscape, which make it perceptually unique. Sound objects, a term introduced by the French composer Pierre Schaeffer that extends the concept of the musical note, can also be seen as sonic processes: physical events create waveforms (sound waves) carrying perceptual meaning, thus being heard, understood and recognized. Sound symbols are a more generic class that refers to sounds which evoke personal responses based on the sociocultural level of association and identity. In sonic arts, PAs are sometimes used to manipulate the process of creating these five units, to generate Soundscapes.

1.3 Soundscape and ESSynth

Evolutionary Sound Synthesis (ESSynth) was first introduced in [7], which describes its original method. A detailed explanation of the ESSynth method is given in [8, 9, 10]. As an overview, ESSynth is an Evolutionary Algorithm for sound synthesis based on manipulating waveforms within a Population set. It was later implemented as an adaptive computational model in PD. Waveforms are seen as Sound Objects; they are the Individuals of the ESSynth model. Individuals have Sonic Genotypes, formed by time-series of acoustic descriptors expressing perceptually meaningful sonic aspects of sound objects, such as: Loudness, Pitch and Spectral density. The ESSynth model has two sets of Individuals: Population and Target. The Population set has the individuals that undergo the Evolutionary process. The Target set has the individuals that do not evolve but influence the evolution process with their Genotypes. There are two dynamic processes operating on the Population set: Reproduction and Selection. Reproduction uses two Genetic Operators, Crossover (Recombination) and Mutation, to create new Individuals in the Population set, based on the information given by their Genotypes. The Selection process searches for the best Individual within the Population: the one whose Genotype is most similar to the ones in the Target set. This aims to emulate the natural pressure imposed by environmental conditions, which shapes the evolutionary path of biological species.
Selection uses a Fitness Function, a metric distance between Genotypes, to scan the Population set, measuring each Individual's fitness


distance, in comparison with the ones in the Target set. This process may also eliminate individuals that are not well-fit (the ones with fitness distance greater than a predetermined threshold) while selecting the best one, the one presenting the smallest fitness distance. Reproduction uses the genetic operators Crossover and Mutation to create a new generation of individuals: the offspring of the best one with each individual in the Population set. Initially, ESSynth had a Population set with a fixed number of Individuals. There was a Generation cycle in which a new generation of individuals entirely replaced its predecessors in the Population; Reproduction occurred in Generation steps. The new version of ESSynth replaced the fixed-size Population set by a variable-size one, with an asynchronous Reproduction process in which Individuals reproduce in pairs rather than in Generation steps. This also introduced the Lifespan rate, whereby Individuals are now born, reproduce and die. With these developments, the sonic output of ESSynth came to be created by all sound objects (the Individuals) coexisting in the Population set, instead of, as before, by a single queue of best individuals. The most interesting aspect that arose from this development was that the new ESSynth model could now synthesize not only a single stream of sounds, as before, but a whole interaction of sonic events, co-existing as an Adaptive Complex System. This evolutionary process of sound objects seemed to embody Schafer's categories of analytical auditory concepts that define a Soundscape [17].

Soundscapes spontaneously occur in natural self-organized environments, but are very hard to obtain in vitro. Formal mathematical methods and deterministic computational models are normally not able to describe the varying perceptual acoustic nature of a soundscape. In the same way, stochastic methods are normally not able to attain sonic similarity. Deterministic and stochastic methods are thus mutually exclusive in describing soundscapes. Like any mathematical or computational model, adaptive methods are also not complete, which means that they cannot fully describe a phenomenon. In our case, the sonic process that generates soundscapes is a complex adaptive system, modeled by an evolutionary algorithm that is incomplete, like any other computational model, but is able to describe the adaptive process of reproducing and selecting sound objects; this is the case of ESSynth. The sounds we hear from (and/or within) Soundscapes are, at the same time, acoustically new and yet perceptually similar. Several methods aiming to synthesize soundscapes have already been proposed, such as the ones described in [2]. Most of them are deterministic models that control the parameters of sound source location and spatial positioning. However, when compared to natural soundscapes, these methods do not seem capable of creating dynamic sonic processes that are self-organized and self-similar. From a systemic perspective, self-organization is a process that occurs in certain open complex systems, formed by the parallel and distributed interaction of a variable number of agents (in our case, sound objects). The mind perceives the interaction between all internal and external agents as a system presenting emergence and coherence. As an example, the flocking behavior of biological populations is a self-organized process, as it is emergent, coherent and ostensive (i.e. perceptible).
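To make the Population dynamics described in this subsection more concrete, the following is a minimal sketch, written in Python rather than in PD, of a variable-size Population with asynchronous pairwise Reproduction, lifespan-based death and Selection by fitness distance to a Target set. All names (Individual, fitness_distance, evolve_step), the blend-style crossover, the mutation rate, the death threshold and the 10 to 30 second lifespan range are illustrative assumptions, not the actual ESSynth patch.

```python
import random
import numpy as np

ARRAY_LEN = 100          # each genotype array has 100 elements (see Section 3)
N_ARRAYS = 6             # six descriptor arrays per sonic genotype

class Individual:
    """A sound object: a genotype (list of descriptor arrays) plus a lifespan."""
    def __init__(self, genotype, lifespan):
        self.genotype = genotype          # N_ARRAYS arrays of ARRAY_LEN floats
        self.lifespan = lifespan          # seconds left to live
    def age(self, dt):
        self.lifespan -= dt
        return self.lifespan > 0          # False once the individual dies

def fitness_distance(ind, target_set):
    """Smallest summed Euclidean distance between this genotype and the Target set."""
    return min(
        sum(float(np.linalg.norm(a - b)) for a, b in zip(ind.genotype, t.genotype))
        for t in target_set)

def reproduce(a, b, mutation=0.05):
    """Crossover: random blend of the parents' arrays; Mutation: small added noise."""
    child = []
    for ga, gb in zip(a.genotype, b.genotype):
        w = random.random()
        child.append(w * ga + (1.0 - w) * gb + mutation * np.random.randn(ARRAY_LEN))
    return Individual(child, lifespan=random.uniform(10, 30))

def evolve_step(population, target_set, dt=1.0, death_threshold=50.0):
    """One asynchronous step: select, breed one new individual, age and bury the dead."""
    population.sort(key=lambda ind: fitness_distance(ind, target_set))
    best = population[0]                                  # smallest fitness distance
    population = [i for i in population
                  if fitness_distance(i, target_set) < death_threshold]
    if len(population) > 1:                               # pairwise reproduction
        population.append(reproduce(best, random.choice(population[1:])))
    return [i for i in population if i.age(dt)]           # lifespan-based death
```

In the actual VR patch, each Individual is a dynamically instantiated PD abstraction producing audio in real time, and the sonic output comes from all living Individuals sounding together rather than from a returned list; the sketch only mirrors the control flow.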
Soundscapes also have a similarity with gesture (i.e. intentional movement), as both rely on agents' spatial location and self-organization to


exist [15]. In this perspective, the new implementation of ESSynth fits well as a computer model that is able to generate a set of localized agents (a Population of spatially located sound objects) that reproduce new ones with genotypical features of their predecessors, which naturally creates self-similarity. This article describes the usage of this new version of ESSynth, an evolutionary algorithm for sound synthesis, to create the synthetic soundscape for the VR artwork installation. ESSynth is an adaptive method capable of emulating a sonic ecological system in which sound objects are ephemeral beings. During their limited lifespan, they move inside a sonic location field through the usage of sound location cues and generate new sound objects that inherit sonic aspects of their predecessors. When a sound object's lifespan is over, it is removed from the Population set and its genotype is erased from the genotype pool, never to be repeated again. The intrinsic property of the ESSynth sonic output, its varying similarity, is paramount for generating synthetic soundscapes, and thus for its successful use in artwork installations such as VR, the case studied here. VR is an awarded multimodal project for which this customized version of ESSynth was designed. VR was also an ephemeral project per se: it was dismantled after its exhibition period was over. A video showing VR in action, with the evolutionary process of soundscape creation, is available at: http://www.youtube.com/watch?v=3pnFJswizBw. This vanishing characteristic has been common for processual multimodal artworks since the first one on record: the Poème électronique, created by Edgard Varèse for the Philips Pavilion at the 1958 Brussels World's Fair, the first major World's Fair after World War II. Le Corbusier designed the Philips Pavilion, assisted by the architect and composer Iannis Xenakis. The whole project was dismantled shortly after the fair was over, but its memory remains alive and still inspires artistic creations to this date. A virtual tour of this installation was created by the Virtual Electronic Poem project and can be accessed at the following link: http://www.edu.vrmmp.it/vep/.

Section 2 describes the VR structural project: its concept, design and construction. Section 3 explains the development of the customized version of ESSynth for VR and its implementation in the open-source computing environment of PD (www.puredata.info). Section 4 describes the sonic results achieved by VR as a multimodal processual artwork project and discusses the conclusions drawn from this successful artistic project, commenting on an excerpt of a live recording of the generated Soundscape and discussing its correspondence with natural Soundscapes, based on Schafer's categories of analytical auditory concepts.

2 A Plumbing Project
The bulging structure of VR, seen in Figure 1 (left), is here understood as the dwelling of the evolutionary soundscape system described in this article. This plumbing structure is composed of several segments of the PVC plastic pipes used in residential construction. With this material, the artists built several pathways that were connected at one vented end to fans, as seen in Figure 1 (right). These fans created a constant flow of air inside the interconnected and forked pathways, veering the air blowing, which created unusual sonic results. Four microphones were inserted in specific parts of the plumbing system to capture these subtle sonic changes in real time.


The first concept of VR was inspired by a short story written by Jorge Luis Borges, entitled "El jardín de senderos que se bifurcan" (The Garden of Forking Paths), which appeared in 1941 and is arguably claimed to be the first hypertext novel, as it can be read, and thus understood, in multiple ways.

Fig. 1. Image of the VR installation. The plumbing structure (left), and a closeup of one fan attached to one of the vented ends of these pathways (right).

Contrary to what the artists expected, this structure did not generate the predicted sound right after its assemblage. Actually, the system initially generated no sound at all. As in many projects, they had to go through thorough tweaking of structural parts, fan positioning and microphone adjustments in order to find the correct sweet spots for retrieving the veering venting sonic aspects, as initially intended. After the usual period of despair, the sonic system finally came to life, working much as the artists had initially expected, and kept on successfully generating the soundscape during the whole period of its exhibition. When the exhibition was over, VR had to be disassembled, by contract. Following the previously mentioned example of the Philips Pavilion, VR was also dismantled. The artists, dressed as plumbers, took on the role of destroying it themselves, bringing down the structure as an artistic intervention. A video of the artistic disassembling of VR, played in reverse, is available at the following link: http://www.youtube.com/watch?v=tQcTvfhcVb4&feature=player_embedded

3 Evolutionary Synthesized Soundscapes


The dynamic generation of a synthetic soundscape in VR needed a customized version of ESSynth. Although the method was developed in 2003, during the author's Ph.D. dissertation, the first implementation of ESSynth was finished in 2009, for the RePartitura artwork [6]. In this previous implementation, also programmed in PD, individuals were sound objects whose genotypes were given by the mapping of graphic objects from handmade drawings. A new version of ESSynth was later developed to use as


genotype other forms of artistic gesture, such as dance movements, as defined by Rudolf Laban [5]. In RePartitura, ESSynth did not use a Selection process; it was enough to have only the Reproduction process. The soundscape was composed of the unrestrained reproduction of individuals (sound objects) with short lifespans (10 to 30 seconds). This sufficed to create the variant similarity of a soundscape. In this new ESSynth implementation for VR, the Selection process is back, using the same strategy described in the original ESSynth method from 2003. This Selection process basically measures the fitness distance between individuals: it calculates the Euclidean distance of the 6 arrays forming the sonic genotype, as seen in Figure 3.

The author used PD for the implementation of such a complex system because, besides being a free, open-source, multi-platform software environment, it is also a robust computational platform for the implementation of real-time multimedia data-processing algorithms, named patches in PD terms. PD also allows the exploration of meta-programming, meaning that a patch's algorithmic structure can manipulate and create parts of itself, as well as other patches; it is thus possible to develop a patch that is adaptive and even evolutionary, one that can reproduce other patches. The Reproduction process uses some of these strategies to simulate the dynamic creation of new individuals in the Population set, and the individual's inner lifespan process uses them to eliminate individuals and their genotypes from the genetic pool when they reach their life expectancy and die. Figure 2 shows two structures programmed in PD, used in this implementation, depicted here to illustrate these algorithms. PD is a data-flow, or visual, language. Patches are programmed using building blocks, called objects, interconnected so as to describe the programming flow of events. PD was specifically developed for real-time processing. As such, it is possible to create structures in PD that are non-deterministic, allowing situations where two runs of the same patch, each receiving the same inputs, may not produce the same results. This is interesting when simulating dynamic complex systems, such as the evolutionary algorithm for sound synthesis of ESSynth.

VR had four pipe pathways with fans attached at their ends. Initially, each pathway had one piezoelectric sensor inside, to capture vibrations and the air-blowing sound. Each sensor was connected to an audio channel, which summed up to 4 audio inputs for ESSynth. Later, the sensors were replaced by microphones. The audio retrieved by each channel was filtered into three frequency regions: Low, Middle and High (as seen in Figure 2, left), which summed up to 12 frequency regions for the 4 audio channels. The Low region is under 100 Hz (given by a first-order low-pass filter); the Middle region is around 1000 Hz (band-pass with Q=3); and the High region is above 4000 Hz (first-order high-pass). These 12 scalars referred to the 12 spectral frequency regions of the 4 input audio channels, captured by the 4 microphones positioned inside the pipes. These data were inserted in the individuals' sonic genotype, in the parameter arrays denoted by the label prmt-, as seen in Figure 3. Each Individual was designed as an instantiation of a PD abstraction (a coadjutant patch that works as a sub-patch and appears in the main patch as an object). Each active individual generated a sound object, created by computational models of nonlinear sound synthesis.
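As a rough illustration of the band analysis just described, the sketch below splits one audio buffer per channel into the three regions (low below 100 Hz, middle around 1000 Hz with Q = 3, high above 4000 Hz) and reduces each band to a single scalar, yielding the 12 values fed into the prmt- arrays. It is a Python/SciPy approximation of the PD filter chain, not the patch itself; the sample rate, the Butterworth filters and the use of RMS energy as the scalar measure are assumptions, since the text does not specify how each band is reduced to a number.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 44100  # sample rate in Hz (assumed)

def band_energies(channel, fs=FS):
    """Split one channel into the three regions described above and return one
    RMS scalar per region (low < 100 Hz, mid ~1000 Hz with Q = 3, high > 4000 Hz)."""
    b_lo, a_lo = butter(1, 100, btype='lowpass', fs=fs)            # first-order LP
    bw = 1000.0 / 3.0                                               # Q = 3 -> ~333 Hz band
    b_md, a_md = butter(2, [1000 - bw / 2, 1000 + bw / 2],
                        btype='bandpass', fs=fs)                    # band-pass at 1 kHz
    b_hi, a_hi = butter(1, 4000, btype='highpass', fs=fs)           # first-order HP
    def rms(x):
        return float(np.sqrt(np.mean(x ** 2)))
    return [rms(lfilter(b, a, channel))
            for b, a in ((b_lo, a_lo), (b_md, a_md), (b_hi, a_hi))]

def prmt_scalars(channels):
    """4 microphone channels x 3 bands -> the 12 scalars written into the prmt- arrays."""
    assert len(channels) == 4
    return [e for ch in channels for e in band_energies(ch)]
```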
For this implementation, it was decided to use three models of sound synthesis: 1) Granular Synthesis (GS), 2) Karplus-Strong (KS), 3) Ring Modulation (RM). GS is a non-linear synthesis model that generates sound output


Fig. 2. Details of PD patches used in ESSynth. Part of the Selection process (left). Euclidean distance measurement of individuals' genotypes (right).

from the overlapped looping of small waveforms of about 1 to 100 ms, known as sonic grains [16]. KS is a physical model for the sound synthesis of strings; it was later developed into the Digital Waveguide Synthesis model [18]. It consists of a short burst of sound (e.g. a white-noise pulse), a digital filter, and a delay line. The sound is recursively filtered in a feedback loop, which creates a sound output similar to a plucked string [12]. The RM model heterodynes (multiplies in the time domain) two waveforms, normally an input audio signal and another one generated by an oscillator. This is equivalent to the convolution of these audio signals in the frequency domain. It implies that the output sound will contain the sums and differences of the partials of each input sound; thus this method generates new (and normally inharmonic) partials, sometimes delivering a bell-like sound. The artists wanted the Soundscape to be generated by these synthesis models processing the audio from the commuting input of each one of the 4 audio channels. It was later decided to implement a logic that randomly commutes one or more audio channels. When there was more than one audio input, they were mixed together in a way that avoided audio clipping. It may at first seem strange to use synthesis models to process sound input instead of generating it. However, these 3 models depend on external audio data. GS uses sonic grains, which are provided by segments of the audio input. KS uses the same audio input to provide the pulse of audio for its filtered feedback loop. RM performs a frequency-domain convolution of the audio input with a sine wave. The amount of processing applied to the audio input by each synthesis model was determined by 3 arrays of 100 elements each. These arrays determine the processing rate of each synth effect, i.e. its intervention on the audio input during the Individual's lifespan. For each effect, an array of 100 elements was assigned. They are labeled as


intd-, which refers to the amount of each effect along time. The elements of these arrays are normalized to [0,1]: 0 means no effect, and 1 means full effect. These arrays are part of the sonic genotype of each Individual, as seen in Figure 3.
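As an example of how an intd- array could act on one of the synthesis effects during an Individual's lifespan, the sketch below applies Ring Modulation (the time-domain product of the input audio with a sine oscillator, as described above) and cross-fades between the dry input and the modulated signal according to a 100-element intd- envelope stretched over the buffer. This is an illustrative Python rendering rather than the PD patch; the oscillator frequency and the dry/wet reading of "amount of effect" are assumptions.

```python
import numpy as np

FS = 44100  # sample rate in Hz (assumed)

def ring_modulate(audio_in, intd_rm, osc_freq=220.0, fs=FS):
    """Multiply the input by a sine oscillator (RM) and cross-fade between the dry
    input (amount 0) and the fully modulated signal (amount 1), following the
    100-point intd- envelope stretched over the whole buffer."""
    n = len(audio_in)
    t = np.arange(n) / fs
    modulated = audio_in * np.sin(2 * np.pi * osc_freq * t)     # time-domain product
    amount = np.interp(np.linspace(0, 1, n),                    # stretch 100 -> n points
                       np.linspace(0, 1, len(intd_rm)), intd_rm)
    return (1.0 - amount) * audio_in + amount * modulated

# usage: a 5-second noise buffer processed with a rising effect amount
audio = 0.1 * np.random.randn(5 * FS)
out = ring_modulate(audio, np.linspace(0.0, 1.0, 100))
```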

Fig. 3. Example of a Sonic Genotype. It is formed by 6 arrays of 100 elements each. The arrays labeled intd- refer to the amount of each synth effect along time. The arrays labeled prmt- refer to the synth parameters. Each synth effect has 2 arrays, whose labels end with the corresponding synth name: GS, KS and RM.

As seen in Figure 3, there are also 3 other arrays in the sonic genotype, labeled prmt-. These arrays refer to the parameters of each synthesis model. Although they also have 100 elements each, in this implementation only the first 12 elements of each array were used. They are the control parameters related to the 12 frequency regions, as previously described. In future implementations, further parameters may turn out to be necessary, so it seemed reasonable to leave these arrays with spare elements, especially since this does not affect the computation required to run the system.
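Putting Figure 3 together, a genotype can be pictured as six named arrays of 100 elements: three intd- envelopes and three prmt- parameter arrays whose first 12 elements hold the band scalars described above. The helper below is a hypothetical sketch of assembling such a genotype for a newborn Individual. The text does not say whether the three prmt- arrays receive the same 12 scalars or per-synth variants, so the sketch simply copies them, and the random intd- envelopes stand in for whatever the Reproduction process actually writes there.

```python
import numpy as np

ARRAY_LEN = 100
SYNTHS = ("GS", "KS", "RM")

def make_genotype(prmt_scalars, rng=np.random.default_rng()):
    """Assemble the six arrays of Figure 3 for a newborn Individual.
    prmt_scalars: the 12 band values (4 channels x 3 regions)."""
    assert len(prmt_scalars) == 12
    genotype = {}
    for synth in SYNTHS:
        # intd-: effect amount over the lifespan, normalized to [0, 1]
        genotype[f"intd-{synth}"] = rng.random(ARRAY_LEN)
        # prmt-: only the first 12 elements are used; the rest is spare room
        prmt = np.zeros(ARRAY_LEN)
        prmt[:12] = prmt_scalars
        genotype[f"prmt-{synth}"] = prmt
    return genotype
```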

4 Sonic Results, Sound Conclusions


Like any processual artwork, VR was, on several layers, a fleeting piece of art. Its structure was dismantled shortly after the exhibition was over. During its existence, the interacting parallel flows of sound objects that constituted the soundscape were constantly changing and vanishing. Each sound object, although perceptually similar to the others, was never repeated. They were the product of an evolutionary process where individuals were coming into existence, reproducing and eventually dying, never to repeat themselves again, although passing their genetic legacy to future generations. The achieved sonic result of VR was a synthetic soundscape that seems to follow Schafer's formal definition of natural soundscapes, as previously described. When listening to recordings of this installation, such as the one in the video linked in Section 1, it is possible to perceive some of Schafer's categories of analytical auditory concepts. Figure 4 shows the waveform (bottom) and the corresponding spectrogram (top) of an excerpt of this audio recording. The spectrogram depicts the partials conveyed in this waveform. The horizontal coordinate is time. In the waveform, the vertical coordinate is intensity, and in the spectrogram, it is frequency,


where the color changes refer to the partials' intensity (the darker, the louder). Keynotes can be seen in this figure as the structure formed by all the darker horizontal lines below 2 kHz. They correspond to a tonal center, normally associated with the cognitive sensation of rest. Each one of these horizontal lines refers to the concept of Signals. They have clear pitches, which are consciously perceived and can be attributed to corresponding musical notes. Sound-marks are seen as the spread, homogeneous mass of partials found along the spectrogram, which refers to the Soundscape's identity.

Fig. 4. Audio excerpt of a typical soundscape created by the system. The larger figure on top shows its spectrogram. The narrow figure on the bottom is the waveform. The horizontal coordinate is time.

Acknowledgements
The author wishes to thank the artists of Vereda de Remendos, Fernando Visockis and Thiago Parizi (the PirarucuDuo, http://pirarucuduo.tumblr.com), who created, built and dismantled this multimodal installation for the Centro Cultural da Juventude Ruth Cardoso, in São Paulo, Brazil, in August 2010.

References
1. Blauert, J.: Spatial Hearing: The Psychophysics of Human Sound Localization. MIT Press, Cambridge (1997)
2. Chowning, J.M.: The simulation of moving sound sources. In: Audio Engineering Society 39th Convention, New York, NY, USA (1970)
3. Cramer, F.: Words Made Flesh: Code, Culture, Imagination. Piet Zwart Institute, Rotterdam (2005); Truax, B.: Soundscape composition as global music: Electroacoustic music as soundscape. Org. Sound 13(2), 103–109 (2008)


4. Fornari, J., Maia Jr., A., Manzolli, J.: Creating Soundscapes Using Evolutionary Spatial Control. In: Giacobini, M. (ed.) EvoWorkshops 2007. LNCS, vol. 4448, pp. 517–526. Springer, Heidelberg (2007), doi:10.1007/978-3-540-71805-5
5. Fornari, J., Manzolli, J., Shellard, M.: O mapeamento sinestésico do gesto artístico em objeto sonoro. Opus, Goiânia 15(1), 69–84 (2009); printed edition: ISSN 0103-7412; online version (2010): http://www.anppom.com.br/opus, ISSN 1517-7017
6. Fornari, J., Shellard, M., Manzolli, J.: Creating Evolutionary Soundscapes with Gestural Data. In: SBCM - Simpósio Brasileiro de Computação Musical 2009, Recife, September 7-9 (2009)
7. Fornari, J., Shellard, M.: Breeding Patches, Evolving Soundscapes. In: 3rd PureData International Convention - PDCon 2009, São Paulo, July 19-26 (2009)
8. Fornari, J., Maia Jr., A., Damiani, F., Manzolli, J.: The evolutionary sound synthesis method. In: Proceedings of the Ninth ACM International Conference on Multimedia, Ottawa, Canada, Session: Posters and Short Papers, vol. 9, pp. 585–587 (2001), ISBN 1-58113-394-4
9. Fornari, J.: Síntese Evolutiva de Segmentos Sonoros. Dissertação de Doutorado. Faculdade de Engenharia Elétrica e de Computação, FEEC / UNICAMP. Orientador: Prof. Dr. Fúrio Damiani (2003)
10. Galanter, P.: What Is Generative Art? Complexity Theory as a Context for Art Theory. In: Generative Art Proceedings, Milan (2003)
11. Husarik, S.: American Music, vol. 1(2), pp. 1–21. University of Illinois Press, Urbana (Summer 1983)
12. Karplus, K., Strong, A.: Digital Synthesis of Plucked String and Drum Timbres. Computer Music Journal 7(2), 43–55 (1983), doi:10.2307/3680062
13. Lippard, L.: Six Years. Studio Vista, London (1973)
14. Schafer, R.M.: The Soundscape (1977), ISBN 0-89281-455-1
15. Pulkki, V.: Virtual sound source positioning using vector base amplitude panning. Journal of the Audio Engineering Society 45, 456–466 (1997)
16. Roads, C.: Microsound. MIT Press, Cambridge (2001), ISBN 0-262-18215-7
17. Shellard, M., Oliveira, L.F., Fornari, J., Manzolli, J.: Abduction and Meaning in Evolutionary Soundscapes. In: Model-Based Reasoning in Science and Technology: Abduction, Logic, and Computational Discovery. Studies in Computational Intelligence, vol. 314, pp. 407–428. Springer, Heidelberg (2010), ISBN 978-3-642-15222-1, e-ISBN 978-3-642-15223-8
18. https://ccrma.stanford.edu/~jos/swgt/swgt.html
