
A Computational Environment for the Evolutionary Sound Synthesis of Birdsongs

José Fornari
Interdisciplinary Nucleus for Sound Communication (NICS), University of Campinas (UNICAMP), Campinas, São Paulo, Brazil. tutifornari@gmail.com

Abstract. Birdsongs are an integral part of many natural environments. They constitute an ecological network of sonic agents whose interaction is self-organized into an open complex system with self-similar cognitive characteristics, while continuously generating original acoustic data. This work presents a preliminary study on the development of an evolutionary algorithm for the generation of an artificial soundscape of birdsongs. This computational environment is created by genetic operators that dynamically generate sequences of control parameters for computational models of birdsongs, given by the physical model of a syrinx. This system is capable of emulating a wide range of realistic birdsongs and generating with them a network of bird calls. The result presented here is an artificial evolutionary soundscape that is also interactive, as it can receive external data, such as instant text messages like the ones from the micro-blog Twitter, and map them as the genotype of new individuals belonging to a dynamic population of artificial birdsongs. Keywords: evolutionary algorithm, birdsong, soundscape.

Introduction

It is amazing the amount and variety of places where birdsongs can be heard. These chunks of sonic information are exchanged between individuals (birds) through their characteristic calls, composing a network of acoustic information that creates a natural Soundscape; a term coined by Murray Schafer that refers to an immersive sonic environment perceived by listeners who can recognize it and even be part of its composition [14]. Such organic textures of sound, recognizable yet variant, are often merged with urban noises; the sounds of machines and people, which together create an immersive cybernetic environment [15]. This is orchestrated by the interaction between organisms and mechanisms, all acting as agents of an open complex system that creates an emergent sonic environment; a soundscape that is acoustically self-organized and cognitively self-similar. The physiological apparatus that allows birds to generate songs - perceptually diverse and acoustically complex - is of impressive sophistication. Its core is
P. Machado, J. Romero, and A. Carballal (Eds.): EvoMUSART 2012, LNCS 7247, pp. 96-107, 2012. © Springer-Verlag Berlin Heidelberg 2012


found inside an organ named the syrinx. As further described, several researchers have presented computational models emulating the syrinx, aiming to understand its behavior and recreate its sound generation properties. However, these computational models require a large number of independent control parameters to generate a specific sound, which often makes the exploration of new birdsong sonorities (by manually tweaking its parameters) very difficult or even impossible. The simultaneous control of a large number of parameters is a difficult task to achieve with formal mathematical models (such as a linear system of equations) or to handle with gestural interfaces (such as the computer mouse and keyboard). On the other hand, the human brain is capable of performing similar tasks, such as controlling body movements, with ease. The control of all the body joint rotations and limb displacements that altogether deliver bodily balance and compound gestural expression is a task that most of us can do almost automatically, even without noticing it. An alternative for controlling a large number of independent parameters, especially in a non-linear system such as the syrinx model, can be achieved through the usage of adaptive computational models inspired by natural strategies of finding optimal solutions for generic problems. These belong to the computational field of Artificial Intelligence and were mostly inspired by the observation of two natural strategies of problem solving: 1) Neural networks, inspired by the neural architecture of the brain, from which arise methodologies that deal with the non-supervised development of complex structures, such as the brain itself; and 2) Evolutionary Algorithms (EA), based on the Darwinian theory of the natural evolution of species; the approach taken by this work.
Here is presented the preliminary development of an EA system that allows not only the control of artificial birdsongs, but also the simulation of a small social network of them, formed by a population of evolutionary artificial birdsongs, where each individual is a birdsong generated by a computational model of the syrinx, and its genotype is given by a sequence of control parameters. In computational terms, each individual generates a sound object that is the instantiation of a syrinx physical model class. The EA system is implemented in PD (www.puredata.info). PD is a free, open-source, multi-platform computational environment for the design of real-time data processing and synthesis models. The sound of each individual is the phenotypical expression of this procedural sound synthesis model controlled by a sequence of parameters, which represents the individuation of the acoustical behavior of one unique birdsong. This is preferable to using audio samples recorded from actual birds singing. In the work presented here, there are no audio recordings whatsoever. The evolutionary system handles physical models that generate sound, so it has total control over the creation of new birdsongs, such as generating new ones that never existed before; unprecedented in natural soundscapes. For that, each individual has its own genotype, represented by a text file containing a sequence of parameters to control one instance of the sound synthesis model. A slight change of values in this genotype corresponds to a perceptual modification of the generated birdsong in the evolutionary population. Inside this population set, individuals are brought into existence,


reproduce in pairs and (after completing their lifespan) die. At each instant, the sound generated by all living individuals compounds the landscape of sounds; the artificial soundscape that is an evolutionary dynamic process, cognitively similar and acoustically unique. To have this EA behaving as an open system, a second degree of interactivity was created that allows external genotypical data input. In addition to the internal evolutionary process of this population of individuals, this system also allows simultaneous users to interact with its evolutionary process by inserting new genetic material into the population genetic pool. The envisioned process for enabling simultaneous users to interact with the evolving population of birdsongs was to use data from an internet social network, such as the micro-blog Twitter (www.twitter.com). Twitter was itself inspired by the natural social network created by the succession of small sound segments (tweets) generated by birds. The metaphor of its name compares songbird tweets with the small text messages exchanged among users of this virtual social network. This fits into the context of the EA system presented here, and accurately describes the inspirational foundation of this micro-blog service. Jack Dorsey, the creator of Twitter, compares his micro-blog service with a soundscape of birdsongs. In nature, chirps of birds may initially sound like something seemingly devoid of meaning or order; however, context is inferred by the cooperation between these birds, which transmit (sing) and receive (listen to) the birdsongs of others. The same applies to Twitter, where many messages, when taken apart, seem completely random or meaningless, but in a thread of correlated messages they gain a global significance that unifies them into a single context [3]. This work uses the same metaphor for the development of its EA system.
The usage of an external input of genetic material for the evolutionary process of the soundscape created by a population of birdsongs turns this computer model into an open complex system with emergent properties that generates artificial soundscapes. Each model that generates a birdsong is an individual belonging to the population of this EA. The birdsong model uses 16 control parameters that constitute the individual genotype. For simplicity, this first model uses data from the ASCII characters of messages, acting as the DNA of individuals within the population. The EA is implemented as a PD patch (an algorithm in PD). The individual is implemented as a separate patch, where each one inside the population is an instantiation of this patch (in PD terms, an abstraction). When the system receives a new external message, it is transformed into an array of integers corresponding to the original ASCII characters. It is then normalized into a sequence of real numbers between 0 and 1, which corresponds to the DNA used to create a new birdsong for the evolutionary process. Every time a new message is received, a new individual containing this DNA is inserted into the population, through a new instantiation of the individual abstraction. This corresponds to a new birdsong and has its own lifespan rate. While the individual is active (alive) it generates audio (a birdsong). During that time, the individual can also be chosen by the selection process to participate in the reproduction process. After its lifespan is over, the genotype of this


individual is erased from the genetic pool and its sound synthesis process halts, which means that the individual is now dead. Material from the genotype of deceased individuals will only remain in the population as traces of the genotypes of their living successors. This process creates an artificial soundscape, as an emergent process resulting from the sound generated by all interacting individuals in the population of the EA system. This work also aimed to implement a computer model to generate a graphical representation of the soundscape, as a landscape of graphical objects, each one representing one active individual within the population. They are created by the same genotypes used in the soundscape generation, which is intended to offer a complementary visual perspective, thus enhancing the immersive experience of dynamic and interactive soundscapes collectively composed and interactively experienced.
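The message-to-genotype mapping described above (ASCII codes converted to integers, normalized to real numbers between 0 and 1, then split into 16-gene chromosomes) can be sketched as follows. This is an illustrative Python sketch, not the actual PD patch; the helper name `message_to_dna` is hypothetical.

```python
def message_to_dna(message, genes_per_chromosome=16):
    """Map a text message to a genotype: ASCII codes are normalized to
    [0, 1] and split into chromosomes of 16 genes each (a hypothetical
    helper, not part of the PD implementation)."""
    codes = [ord(c) % 128 for c in message]   # ASCII integers, 0..127
    genes = [c / 127.0 for c in codes]        # normalize to [0, 1]
    # keep only complete chromosomes of 16 genes
    n = len(genes) // genes_per_chromosome
    return [genes[i * genes_per_chromosome:(i + 1) * genes_per_chromosome]
            for i in range(n)]

# a 38-character message yields two complete 16-gene chromosomes
dna = message_to_dna("tweet tweet, sang the artificial bird")
```

A 140-character Twitter message would yield up to 8 such chromosomes, matching the description in the following section.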

The Sound of Syrinx

Songbirds belong to the biological order Passeriformes. This group is very large and diverse, with about 5,400 species, representing more than half of all known birds. They are divided into: 1) Tyranni (Suboscines, or shouting birds) and 2) Passeri (Oscines, or singing birds). Both have the syrinx as the main organ responsible for the emission of sounds [2]. Unlike humans, birds can independently control each lung, which allows them to inhale with one lung while exhaling with the other. This lets them sing and breathe simultaneously, allowing them to generate very long melodies; way beyond what the volumetric capacity of their tiny lungs would otherwise permit. In bird anatomy, the syrinx corresponds to the human larynx. It has three groups of muscles, independently controlled; one for the trachea and the other two for the bronchi. By constricting these muscles the bird modifies the features of the sound generated by the syrinx, such as its intensity and fundamental frequency. The syrinx is located at the far end of the trachea, at the union between it and the two bronchi. Inside the syrinx is located the tympanic membrane. This is a membrane suspended in a cartilaginous cavity, a sort of inflated air bag, called the clavicular sac. This membrane can move sideways, freely and fast. It is the main oscillator of the syrinx and can be compared with the reed of a woodwind musical instrument, such as the oboe. Songbirds can also control the airflow entering through the trachea, passing through the clavicular sac, and its release through the bronchi. In addition, they can control the sturdiness of the membrane itself, through subtle muscles similar to those found in human lips (lateral and medial) [1]. There are several computational models developed to emulate the syrinx behavior [13]. The work presented here uses the one introduced by Hans Mikelson, originally developed as Csound programming code [11].
This algorithm was later improved and implemented in PD by Andy Farnell, who created a computer model that emulates both its sound generation and the melodic phrasing of the birdsong [5].


Evolutionary Sound Synthesis

Evolutionary Computation (EC) is an adaptive computing methodology inspired by the biological strategy of non-supervised search for the best solution to a generic problem [4]. Such methods are commonly used in the attempt to find the best possible solution for complex problems, especially when there is insufficient information to solve them by using formal (deterministic) mathematical methods. EC algorithms are usually designed to perform an automatic search for the best solution to an unbounded problem, within the scope of a landscape of possible solutions. The evolutionary system presented here, however, does not aim to achieve a final, or best, solution. Instead, its goal is to carry on the evolutionary process and take advantage of one of its byproducts; the evolutionary march given by the iterative steps, compounded of the reproduction and selection processes, that together always seek the best solution. Convergence time is often seen as a value to be minimized in EA systems. Here, however, this is not an issue, as the system works to keep generating self-similar sound objects, by applying selection and reproduction to generate individuals that are similar but never identical. There are several studies on evolutionary methodologies aligned with the creation of artwork installations and music systems. A few relevant ones are mentioned here: 1) Vox Populi; a system able to generate complex musical phrases and harmony using genetic operators [12], 2) Roboser; a system created in collaboration with the SPECS UPF group, in Barcelona, that uses Adaptive Control Distribution to develop a correlation between adaptive behavior and robotic algorithmic compositions [10], and 3) ESSynth; the evolutionary synthesis of sound segments, an EA method that has a population of waveforms acting as individuals, which undergo reproduction and selection by a fitness function given by psychoacoustic features [6].
ESSynth was used in several artwork installations, in Brazil and abroad, such as in RePartitura; a multimodal evolutionary artwork based on a synesthetic computational system that maps graphic objects from a series of conceptual drawings into sound objects, transforming them into a dynamic population of evolving sound objects [9]. The first version of ESSynth already showed this enticing characteristic; the generation of sound segments perceptually similar but never identical, which is also one of the most important features found in all natural soundscapes, where the sonic components are never repeated but the overall sound is self-similar. This system was then enhanced to also include parameters of spatial sound location for individuals (the sound objects), thus allowing the system to create more realistic soundscapes and enhancing the evolutionary process by linking a reproduction criterion for pairs of individuals within the population, based on their geographic proximity [7]. The EA system for the creation of artificial soundscapes was implemented as a patch (essynth.pd). Individuals are instances of an auxiliary patch (a PD abstraction) named ind.pd. Each instance of ind.pd generates an individual, which is a birdsong belonging to the population of essynth.pd. Each instantiation is an


independent physical model of the syrinx. What makes each individual generate a distinct birdsong is its genotype, a sequence of parameters that control the syrinx model.

Birdsongs DNA

As said before, the DNA of this EA system is compounded of chunks of 16 elements, each one corresponding to a gene. Those elements are taken from the ASCII characters of messages. Each chunk corresponds to a chromosome, which is, in turn, a control state of the syrinx model. When using external data input, such as from a social network, each message will correspond to a DNA. In the case of Twitter, each message has up to 140 elements. This sequence is composed of ASCII characters, corresponding to integers between 0 and 127, each one related to a specific ASCII character. Each numeric sequence of values is then normalized between 0 and 1 and subdivided into sections of 16 elements, each one corresponding to one chromosome of 16 genes, which represents the smallest number of parameters needed to feed the computational model instantiated by each individual that creates a birdsong. With this approach, the system receives several chromosomes for each message. In the case of Twitter messages, up to 8 chromosomes can then be retrieved. In the current implementation, each chromosome corresponds to a state of the birdsong. Further implementations may use the other sequences to create dominant and recessive chromosomes, for the introduction of sexual reproduction between individuals with gender. So far the individuals are genderless and each element of the chromosome corresponds to one of 16 genes; the control parameters of the procedural synthesis of a birdsong, as described by the PD model in [5]. This is an extension of the syrinx control that now also embeds the throat (tracheal cavity) control and beak articulation, so not only the characteristic timbre of each bird tweet is considered in the chromosome, but also its whole melodic phrase.
The 16 parameters that correspond to the chromosome and control this birdsong model are: 1) Ba: Beak articulation (controls the beak openness rate), 2) Rt: Random Tweetyness (controls the rate of the tweet random generator), 3) Fb: Frequency of the first formant (for the first bronchus in the syrinx), 4) Ab: Amplitude of the first formant (for the first bronchus in the syrinx), 5) Fs: Frequency of the second formant (for the second bronchus in the syrinx), 6) As: Amplitude of the second formant (for the second bronchus in the syrinx), 7) Ff: Fundamental frequency (fundamental frequency for the entire birdsong), 8) Fe: Fundamental extent (fundamental sweep extent, for the entire birdsong), 9) Fm: Fundamental frequency modulation amount, 10) Fb: Fundamental frequency modulation base, 11) Ft: Frequency of the first tracheal formant, 12) At: Amplitude of the first tracheal formant, 13) Fj: Frequency of the second tracheal formant, 14) Aj: Amplitude of the second tracheal formant, 15) Tr: Trachea resonance rate (trachea filter resonance), 16) Ao: Overall amplitude (for the entire birdsong).


These 16 genetic parameters are organized into a chromosomic sequence. It is important to note that one single gene is already enough to create a perceptually recognizable birdsong. In order to create this self-organized artificial soundscape of birdsongs, the fitness function considered is given by a psychoacoustic distance. By this metric, individuals inside the population are selected. The selection process measures the distance between each individual in the population and eliminates the ones that are farthest from the average for all individuals; the cluster of active individuals currently in the population. This makes the individuals in the evolutionary population perceptually closer to each other, as far as the perception of their birdsongs is concerned. The fitness function is calculated by the Euclidean distance between the values of three psychoacoustic descriptors: 1) Loudness (L), the perception of sound intensity; 2) Pitch (P), the perception of the fundamental frequency; and 3) Spectral centroid (S), the overall distribution of sound partials. This distance D is given by the following equation:

D = \sqrt{(L_1 - L_2)^2 + (P_1 - P_2)^2 + (S_1 - S_2)^2} \quad (1)
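The fitness metric of Eq. (1) and the selection step can be sketched in Python as follows; the function names are hypothetical, and eliminating the individual farthest from the population centroid is one plausible reading of the averaging described above, not necessarily the exact PD implementation.

```python
import math

def psychoacoustic_distance(a, b):
    """Euclidean distance between two descriptor triples
    (loudness L, pitch P, spectral centroid S), as in Eq. (1)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_out(population):
    """Return the index of the individual farthest from the population
    centroid; the one the selection process would eliminate."""
    n = len(population)
    centroid = tuple(sum(ind[k] for ind in population) / n for k in range(3))
    distances = [psychoacoustic_distance(ind, centroid) for ind in population]
    return distances.index(max(distances))
```

Repeatedly removing the outlier keeps the surviving birdsongs clustered around a common perceptual region, which is the self-similarity property the text describes.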

The Genetic Operators

The reproduction process of this EA system uses two genetic operators: recombination (crossover) and mutation. Acting together, they manipulate the DNA compounded of one or more chromosomes of 16 genes each. Recombination mixes the genotype values of pairs of individuals (e.g. A and B), thus creating a new genotype, corresponding to a new individual (e.g. C). The mixture is given by the element-wise combination of the 16 genes. The mixing ratio is determined by the recombination rate tr; a scalar real value between -1 and 1, which determines the mixing rate between the genes of the pair of individuals A and B. If tr = -1, the element values within the DNA of C are identical to those of A. If tr = 1, the values chosen for the recombination of genes in C will be identical to the ones in B. If tr = 0, the values chosen for the recombination of chromosomes in C will be given by the arithmetic mean of the values of A and B. Usually, tr is kept near 0, to guarantee that the resulting sequence is a uniform mixture of A and B. The equation below shows the calculation of this operator, for the ith element of their respective DNA:

C_i = \begin{cases} ((1 - tr) A_i + (1 + tr) B_i)/2 & , tr < 0 \\ (A_i + B_i)/2 & , tr = 0 \\ ((1 + tr) B_i + (1 - tr) A_i)/2 & , tr > 0 \end{cases} \quad (2)

Mutation is the operator that inserts novelty into the DNA of new individuals. This operator has a genetic mutation rate given by tm, which varies between 0 and 1. This rate determines the amount of variability that will be inserted into the DNA of a new individual (e.g. C). This variation is given by multiplying the sequence of elements by another sequence of random real values (rand) ranging within [(1 - tm), 1]. If tm = 0, there is no novelty inserted into the newly


created DNA (e.g. C′), since it implies the multiplication of the original sequence of C by a sequence of ones. If tm = 1, the sequence of C will be multiplied by a sequence of random values within [0, 1], and thus the resulting sequence C′ will also be a random sequence of values, which means that there will be no traces of the original sequence of C preserved in C′. Usually, the mutation rate is kept around 0.1 (10%), to guarantee a reasonable rate of variability in the resulting sequence C′ without significantly losing the original information of the previous sequence C. The next equation shows the calculation for the genetic operator mutation, where rand is a random variable from 0 to 1 and i is the ith element of the DNA:

C'_i = (1 - (tm \cdot rand)) \cdot C_i \quad (3)
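Equations (2) and (3) can be sketched in Python as follows; this is an illustrative translation of the two operators, not the actual PD implementation, and the function names are hypothetical.

```python
import random

def crossover(A, B, tr):
    """Recombination per Eq. (2): tr = -1 copies A, tr = +1 copies B,
    and tr = 0 averages the two genotypes element-wise."""
    C = []
    for a, b in zip(A, B):
        if tr < 0:
            C.append(((1 - tr) * a + (1 + tr) * b) / 2)
        elif tr > 0:
            C.append(((1 + tr) * b + (1 - tr) * a) / 2)
        else:
            C.append((a + b) / 2)
    return C

def mutate(C, tm):
    """Mutation per Eq. (3): scale each gene by a random factor
    drawn from [1 - tm, 1]."""
    return [(1 - tm * random.random()) * c for c in C]
```

For example, `crossover(A, B, -1)` returns a copy of A, and `mutate(C, 0)` leaves the genotype untouched, matching the boundary cases stated above.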

Both the crossover and mutation rates are continuous variables and can be dynamically modified during the evolutionary process of this EA system. Other important global controls of this system are the lifespan and the proliferation rate. Lifespan controls the average lifespan of each individual within the population. The system includes a random variability of about 10% of the value set by the user. This is done in order to guarantee that the individuals will have nearly (but never exactly) the same lifespan. Usual values for the lifespan of these birdsongs range from 1 to 60 seconds. The proliferation rate controls the time that the reproduction process takes to apply the genetic operators of recombination and mutation in order to generate new individuals, thus influencing the rate of proliferation in the whole EA system. Usual values for the proliferation rate range from 0.5 to 3 seconds.
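The lifespan variability described above can be sketched as a simple jitter function. The name and the uniform distribution are assumptions; the text only specifies a random variability of about 10% around the user-set value.

```python
import random

def jittered_lifespan(lifespan, spread=0.1):
    """Lifespan with ~10% random variability, so individuals never share
    exactly the same lifespan (a sketch of the behavior described above)."""
    return lifespan * (1 + random.uniform(-spread, spread))
```

With a user-set lifespan of 10 seconds, successive calls return values between 9 and 11 seconds.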

Soundscapes of Birdsongs

As previously mentioned, the term Soundscape was created by the composer Murray Schafer, referring to immersive natural sonic environments, perceived by listeners who can interact with them passively, by merely recognizing them, or actively, as one of the agents of their composition [1]. Thus, a soundscape is, in itself, also the result of a process of sound perception and cognition. This process can be classified by the following cognitive aspects: 1) Close-up, 2) Background, 3) Contour, 4) Pace, 5) Space, 6) Density, 7) Volume and 8) Silence. According to Schafer, soundscapes can be formed by five categories of sonic analytical concepts: 1) Tonics, 2) Signs, 3) Sound marks, 4) Sound objects and 5) Sound symbols. Tonics are the active and omnipresent sounds, usually in the background of the listener's perception. Signs are the sounds in the foreground, which draw the listener's attention, as they may contain important information (anything that grabs the listener's attention). Sound marks are the sounds unique to a specific soundscape, which cannot be found elsewhere. A sound object, as defined by Pierre Schaeffer - who coined the term - is an acoustic event that directs the listener into a particular and unique sonic perception. Sound objects are the agents that compose the soundscape. Symbols are sounds that evoke cognitive (memory) and affective (emotional) responses, according to the listener's personal and socio-cultural background.


They are emergent features that imbue contextual meaning into the self-organizing process of the complex open systems that create soundscapes. As such, these units can be retrieved and analyzed to classify soundscapes. However, they are not sufficient to define a process of artificial soundscape generation. In order to do so, it is necessary to have a generating process of symbols with the inherent characteristics of similarity and variability. This can be achieved by an EA system such as the one presented here. Such a computer model is enough to generate an artificial soundscape that, through the interaction of individuals within the evolutionary population, will spontaneously present tonics, signs and sound marks, as previously defined by Schafer. From a systemic viewpoint, a soundscape can be seen as a self-organized complex open system formed by sound objects acting as dynamic agents. Together, they orchestrate a sonic environment rich in interacting sound objects that are always acoustically unique, in spite of perceptually holding enough self-similarity to allow their overall identification and discrimination by any lay listener.

Experimental Discussion

An audio sample of this EA system in operation can be accessed at the following link: http://www.4shared.com/audio/gEsDwkNw/soundsample.html For this sound sample, of about 3 minutes in duration, during the first half of the evolutionary process (about 90 seconds) the mutation rate was kept very low (below 10%). In the second half of the sound sample, the mutation rate was raised to its full range (100%) and kept there until the end of the audio sample. During the whole evolutionary process the crossover rate was kept around 50%. The proliferation and lifespan rates were also kept invariant (about 2 seconds for the proliferation rate and 10 seconds for the lifespan rate). The first half of this audio sample starts with one click, meaning that the EA system has started. Slowly, new birdsongs start to be heard, resembling usual birdsongs found in nature. Some of them seem similar to oscines (singing birds) and others to suboscines (shouting birds), but all are generated by the same syrinx computer model, with different parameters (genotypes). For computing processing reasons, the maximum number of individuals for this EA system was set to 20. However, it seems that above this value we can no longer perceive the distinction between songbirds, as they start to blur into a single cacophony. For this audio sample, individuals lasted about 10 seconds and procreated every 2 seconds. After the first half (about 90 seconds) the population reaches its limit of simultaneous individuals, at which point the mutation rate is abruptly raised to its maximum. From this moment on, unusual birdsongs are heard. Some of them sound quite peculiar; very distinct from the usual birdsongs heard in nature. At the end of the audio sample, the EA engine is halted. The reproduction process stops generating new individuals. The remaining individuals in the population slowly die out, as their lifespans are reached, until there is no more sound left, when the artificial soundscape ceases to exist.


Conclusion

This paper presented a preliminary study on the creation of a computer system that generates artificial soundscapes of birdsongs by means of an evolutionary algorithm that can be extended from the sound field to the visual one, through the usage of a synchronous graphical simulation computer model of a corresponding virtual landscape. The soundscape created by this system is inspired by the social network of birdsongs found in nature, which is originally generated by actual birds through their interactive calls. One of the most important features of this EA is the possibility of creating a similar yet variant sound texture. This is given by a population set of sound objects (birdsongs) that evolve over time, through iterative computational processes that simulate the natural reproduction and selection processes, as observed in biological populations. The resulting sound of all active and interacting sound objects is cognitively similar, whilst always acoustically new, which is a primordial feature of natural soundscapes. This EA is able to generate artificial soundscapes compounded of synthesized birdsongs. The sound objects are individuals belonging to a population set of variable size, where the evolution occurs. Each sound object is generated by an individual that has its own genetic code, which controls the physical model sound emulation of a syrinx. New genotypes are created during the process of reproduction by the genetic operators crossover and mutation, graded by a fitness function (a psychoacoustic distance metric) in the selection process. New individuals are reproduced by pairs of previous ones, although there is no gender separation defined yet. Each pair reproduces a new individual with genotypical characteristics similar to those of its predecessors, but never identical to either of them. Each individual is selected based on the similarity of its genotype in comparison with the population genetic pool, represented by the average of all genotypes.
From time to time, the individual with the farthest genotype is eliminated. This helps the system to keep a certain similarity between all individuals. They also have a limited lifespan. When an individual dies, its genotype is eliminated from the population and is never repeated. This avoids the occurrence of identical individuals (clones) in the population during the evolutionary process. Individuals are instances of a procedural sound synthesis model that generates a specific birdsong based on its genotype. To emulate the sound generation of a syrinx, an adaptation of the computer model originally introduced by Hans Mikelson and extended by Andy Farnell was used, which also incorporates extra parameters for the generation of the whole melodic phrase of a birdsong. This is the computer model used to synthesize the sound of each individual within the population of the EA system. Genotypes are given by sequences of parameters that feed this model. They come into the population in two ways: 1) from the reproduction process, which creates new genotypes from the crossing of pairs of genderless individuals, and 2) from external text messages, such as tweets from Twitter. The Twitter interface is still under development, and the usage of JSON (JavaScript Object Notation), a lightweight data-interchange format, has been investigated to handle the interface between Twitter and PD. There is also the possibility
of using a JSON-based library built for Processing (www.processing.org), called TwitterStream, which, in theory, is capable of receiving the timeline of any Twitter account and sending it via OSC (Open Sound Control). The entry of new genotypes through text messages allows the population of sound objects to behave as an open system, thus characterizing a CAS (Complex Adaptive System) with emergent properties that self-organizes into the form of a soundscape. This is a complex open system with self-similar features, consisting of independent, interacting agents. This CAS presents emergent properties similar to the ones found in natural systems created by natural evolution [8].

The EA system presented here may allow the interactivity of multiple users. This creates a feedback loop between users and the EA system, which can be enhanced by a computer model that generates graphical objects corresponding to the sound objects created by the individuals in the population. The generation of graphical objects is also under development, but there is promising evidence that the PD implementation of the algorithm known as boids (by Craig Reynolds) is a good solution for the graphical representation of the individuals inside a population set. Boids allow the graphical representation of the flocking behavior of social animal species, such as birds in flight. With that graphical rendering, each birdsong can be represented by a simple graphical object (such as a circle) moving inside a canvas. The movement behavior (flight) of each circle (songbird) will influence and be influenced by the other individuals within the population set. This graphical extension may allow users to watch the objects corresponding to their messages in the form of a visual metaphor of the sound object: an animation resembling the development of a birdsong. Users can thus visually identify their insertion in the genetic pool of the evolutionary population.
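The entry point for external genotypes could look like the sketch below: a tweet, delivered as a JSON object with a `text` field (as in the Twitter API), is hashed into a fixed-length list of normalized parameters. The hashing scheme is purely an illustrative assumption; the paper's actual text-to-genotype mapping is not specified.

```python
import json
import hashlib

def tweet_to_genotype(tweet_json, genome_len=8):
    """Map a tweet's text to `genome_len` parameters in [0, 1].

    `tweet_json` is a JSON string with a 'text' field. Hashing the
    text makes the mapping deterministic, so the same message always
    injects the same genotype into the population.
    """
    text = json.loads(tweet_json)["text"]
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    # Each byte of the digest becomes one normalized gene.
    return [digest[i] / 255.0 for i in range(genome_len)]
```

In a PD-based setup, such a function would run on the Twitter-facing side and forward the resulting parameter list to the synthesis patch, e.g. as an OSC message.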
With this, we have two layers of systemic interactivity: internal and external. The internal one is given by the interaction of individuals through the processes of selection and reproduction that compose the soundscape, created by a mesh of synthesis processes corresponding to various sorts of bird calls, similar yet variant, the fruit of an evolutionary process engendered by the artificial breeding and selection of birdsongs emulated by computer models. The external interaction is given by the insertion of messages from multiple users, who influence the genetic pool of the population and (eventually) can visualize the graphical representation of the individuals created by their messages and phenotypically expressed as sound objects. These degrees of interactivity corroborate the initial aim of this work: to create a computer model that emulates the emergent properties of a complex open system composed of agents that self-organize into a recognizable and meaningful context. In our case, the agents are physical models of birdsongs, the context is an artificial soundscape of birdsongs, and the self-organizing process is the EA system described here.

Acknowledgements. This work was funded by FAPESP (www.fapesp.br), project: 2010/06743-7.
References
1. Allison, J.D.: Birdsong and Human Speech: Common Themes and Mechanisms. Neuroscience 22, 567–631 (1999)
2. Clarke, J.A.: Morphology, Phylogenetic Taxonomy, and Systematics of Ichthyornis and Apatornis (Avialae: Ornithurae). Bulletin of the American Museum of Natural History 286, 1–179 (2004)
3. Dorsey, J.: Twitter creator Jack Dorsey illuminates the site's founding document. LA Times, David Sarno (February 18, 2009), http://latimesblogs.latimes.com/technology/2009/02/twitter-creator.html (accessed May 17, 2010)
4. Eiben, A.E., Smith, J.E.: Introduction to Evolutionary Computing, 2nd edn. Springer Natural Computing Series (2007)
5. Farnell, A.: Designing Sound. MIT Press, Cambridge (2010)
6. Fornari, J., Maia, A., Manzolli, J.: Soundscape Design Through Evolutionary Engines. Journal of the Brazilian Computer Society 14(3), 51–64 (2008)
7. Fornari, J., Shellard, M., Manzolli, J.: Creating Soundscapes with Gestural Evolutionary Time. In: SBCM - Brazilian Symposium on Computer Music (2009)
8. Holland, J.: Studying Complex Adaptive Systems. Journal of Systems Science and Complexity 19(1), 1–8 (2006)
9. Manzolli, J., Shellard, M.C., Oliveira, L.F., Fornari, J.: Abduction and Meaning in Evolutionary Soundscapes. In: Model-Based Reasoning in Science and Technology: Abduction, Logic, and Computational Discovery (MBR BRAZIL), Campinas, SP, Brazil, vol. 1, pp. 407–428 (2010)
10. Manzolli, J., Verschure, P.: Roboser: A Real-World Composition System. Computer Music Journal 29(3), 55–74 (2005)
11. Mikelson, H.: Bird Calls. Csound Magazine (Winter 2000)
12. Moroni, A., Manzolli, J., Von Zuben, F., Gudwin, R.: Vox Populi: An Interactive Evolutionary System for Algorithmic Music Composition. Leonardo Music Journal 10, 49–54 (2000)
13. Larsen, O.N., Goller, F.: Role of Syringeal Vibrations in Bird Vocalizations. Proceedings of the Royal Society of London B 266, 1609–1615 (1999)
14. Schafer, R.M.: The Soundscape: Our Sonic Environment and the Tuning of the World. Destiny Books (1977) ISBN 0-89281-455-1
15. Wiener, N.: Cybernetics and Society: The Human Use of Human Beings. Cultrix, New York (1968)
