
Generative Music 2.0: User-driven ad-hoc dance clubs

Work Plan Proposal for the Foundation for Science and Technology

Yago de Quay
Bachelor in Professional Music, Berklee College of Music, Boston, USA

April 30th, 2009



Abstract
Technology and theoretical models in the field of interactive media can provide a sustainable framework
for implementing interactive music in dance clubs. This research proposes investigating users’ ability to
change music to better suit their needs in a dance club environment.

1 Introduction
Generative Music 2.0 is a term defined here as a system that generates music using the Web 2.0 model of
user-generated content [1]. User-driven music – Generative Music 2.0 – will aggregate users' social
behaviors and individual attitudes towards dancing in clubs, enabling users to create, modify, reconfigure
and/or share dance music in real time through live motion-sensing technologies. Moreover, Generative
Music 2.0 is expected to indirectly educate users about music composition and open new ground for self-
expression and collective actions related to dancing.

1.1 Background
The present research proposal is built around an interactive system for dance clubs that enables user
participation while promoting creativity and networking. This project will create a system of motion
capture (mocap) devices that track dancers' movements and map them to various parameters in a song,
so that dancers can interact with and shape, in real time, the music being played. The goal is not to
create a new interface for music design, such as gestural control of a virtual studio; the main concern
is to create a more engaging and involving atmosphere for dancers in a nightclub setting. Instead of the
usual performer (such as a DJ) vs. audience setup, this project proposes that the audience be the performers.

This project will rely upon two successful innovations in the field of interactive media, namely motion
capture and adaptive music. Motion capture refers to capturing or tracking real body movement with
special equipment (e.g. a digital video camera) and translating these movements into a digital model.
Adaptive music refers to music that adapts and/or responds to a stimulus, such as action in a video game.
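
As a minimal sketch of this pipeline (assuming a hypothetical read_joint_positions() stand-in for the mocap driver and a hypothetical set_song_parameter() on the playback side; neither name is a real API), movement features could be normalized and forwarded to the music software once per frame:

    import math

    def read_joint_positions(t):
        """Stand-in for the mocap driver: simulates one dancer's joints (meters)."""
        return {
            "left_hand": (0.3, 1.2 + 0.8 * math.sin(t), 0.0),
            "right_hand": (-0.3, 1.2 + 0.8 * math.sin(1.3 * t), 0.0),
            "hips": (0.2 * math.sin(0.7 * t), 1.0, 0.0),
        }

    def set_song_parameter(name, value):
        """Placeholder for an OSC/MIDI message to the playback software."""
        print(f"{name} -> {value:.2f}")

    def normalize(value, lo, hi):
        """Clamp and rescale a raw reading into the 0..1 range."""
        return max(0.0, min(1.0, (value - lo) / (hi - lo)))

    for frame in range(90):                  # ~3 seconds at 30 fps
        t = frame / 30.0
        joints = read_joint_positions(t)
        # Example mapping: raised hands open the filter, hip sway sets a layer's volume.
        hand_height = max(joints["left_hand"][1], joints["right_hand"][1])
        set_song_parameter("filter_cutoff", normalize(hand_height, 0.8, 2.2))
        set_song_parameter("layer_volume", normalize(abs(joints["hips"][0]), 0.0, 0.2))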

1.2 Issues
Technology is used as leverage for performers and can have the consequence of widening the gap between
the artist and the audience [3]. This problem echoes the debate between User Generated Content (UGC)
and Engineered Content (EC). Engineered content, which involves no interaction and is illustrative of
modern-day dance clubs and other artistic performances, lacks several features that can have a big impact
on the audience and the music, and ultimately on the venue where the music is being played. Engineered, or
non-interactive, music can be a problem for the following reasons [4]: First, there is the limitation in
the method of music creation. Engineered music, contrary to interactive music, only allows the
expert to have a say on the overall creative direction of a song. Second, non-interactive systems miss out on
the plethora of knowledge and input that other users and experts can provide; in this case, the dance club
isn't harvesting its dancers' musical potential. Third, the audience currently cannot share their
interpretation of a song or change its direction if desired. The audience is a mere spectator rather than
a participant in the experience.

This research proposes to solve these problems by offering an interactive music model for dance clubs. This
interactive model is not intended to substitute performers, experts or engineered content, but to add a tool
to the dance club's belt, one that can benefit various businesses and the scientific community.

2 State of the Art


Several components that constitute the art of interactive music for dance clubs have achieved great leaps in
both technology and theoretical approaches. Nonetheless, the synergy of these domains remains a
rough diamond. Most popular concerts forgo audience participation in exchange for carefully calibrated
playback and linear programs [3]. However, isolated interactive performances in the domains of music and
dance have attracted more and more attention. Following the trend of Web 2.0 and user-generated content
websites, the successful strategies of "social networking as platform" have lowered barriers to interaction,
exploration and skill-building, thus providing viable models for interactive media [1, 5].

2.1 Interactive Nightclubs


Interaction in a nightclub is normally limited to people asking the DJ to play a specific song; the current
model for clubs involves little to no user input. There have been, however, some attempts, such as dance
floors with responsive LED lights and bar counters that respond to the pressure of objects placed on them.
There is also the recently created "Energy Generating Dance Floor" that converts crowd movement into
electricity, which then powers LED lights whose colors represent "energy levels" [6]. Some clubs also let
clients send text messages to a TV screen [7].

2.2 Interactive Music


Due to advances in music synthesis and virtual music studios, sound can be altered in countless ways. The
gaming industry is leading the way in interactive music with carefully constructed scores that respond to
user input (what's happening in the game), adding value to the human-to-game interaction. Examples
include best-sellers such as Sony's The Mark of Kri, Ubisoft's Rainbow Six 3 and Bungie Studios' Halo 2
[2]. However, this open interaction also opens the door to a plethora of sounds that are not always
described as pleasant. Presently there are various strategies for mapping interactions to attain more
musical results. Within the interactive spectrum, on the conservative side lies adaptive music. Adaptive
music isn't considered interactive music per se because the user is really interacting with the game; the
musical outcome is an indirect result. But it is non-linear, and the users' decisions affect the
composition, even if indirectly. By definition, adaptive music has to incorporate four characteristics:
1) by design, performance is flexible, 2) interaction is limited to predefined parameters, 3) values of
these parameters are not predetermined, and 4) musical coherence is a priority [2]. As an example,
Stormfront Studios' "The Lord of the Rings: The Two Towers", depending on game play, extracts cues from
the countless hours of material recorded for the film of the same name and assembles them into a coherent
and dynamic result [8]. Applications of adaptive music outside of the gaming industry were also seen in
Synesthesia LLC's "Interactive Dance Club" installation at SIGGRAPH 98 (1998), where various interactive
controls such as orbs, pedals, pads, joysticks and lasers helped shape the music being played [9].
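
The four characteristics can be illustrated with a small sketch (parameter names invented; deferring changes to the next bar line is one common coherence tactic, not necessarily the one used in the titles above):

    # Interaction is limited to predefined parameters (2), their values are free
    # within a range rather than predetermined (1, 3), and changes are applied
    # only at bar boundaries so the music stays coherent (4).
    class AdaptiveScore:
        PARAMS = {"intensity": (0.0, 1.0), "percussion_layers": (0, 4)}

        def __init__(self):
            self.current = {"intensity": 0.5, "percussion_layers": 2}
            self.pending = {}

        def request(self, name, value):
            """Accept a user-driven change, clamped to its allowed range."""
            if name not in self.PARAMS:
                return                      # outside the predefined set: ignored
            lo, hi = self.PARAMS[name]
            self.pending[name] = max(lo, min(hi, value))

        def on_bar_boundary(self):
            """Apply queued changes only when a new bar starts."""
            self.current.update(self.pending)
            self.pending.clear()
            return dict(self.current)

    score = AdaptiveScore()
    score.request("intensity", 1.4)   # clamped to 1.0
    score.request("tempo", 180)       # rejected: not a predefined parameter
    print(score.on_bar_boundary())    # {'intensity': 1.0, 'percussion_layers': 2}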

Dancers and musicians who use body language as a means of music composition are also exploring the state
of the art in interactive music. Dancers can directly and deliberately manipulate rhythm, pitch, timbre and
other mapped musical parameters, and at the same time the mapping of these parameters can also be
modified, in real time, by musicians. On one end a dancer provides input to a computer, and on the other a
musician decides how the input will affect musical output. This framework provides intensive interaction
between the dancers and musicians while enriching improvisational possibilities [10, 11].
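
One way to sketch this division of roles (all feature and parameter names hypothetical): the dancer's side only produces input features, while the musician edits a routing table at run time that decides which feature drives which musical parameter.

    # Stage 1: the dancer produces features; stage 2: a routing table, editable
    # by the musician during the performance, decides what those features control.
    routing = {"arm_speed": "rhythm_density", "torso_lean": "pitch_bend"}

    def apply_mapping(features, routing):
        """Translate dancer features into musical parameter changes."""
        return {routing[f]: v for f, v in features.items() if f in routing}

    features = {"arm_speed": 0.8, "torso_lean": 0.1, "head_nod": 0.4}
    print(apply_mapping(features, routing))  # {'rhythm_density': 0.8, 'pitch_bend': 0.1}

    # Mid-performance, the musician remaps the arms to timbre instead.
    routing["arm_speed"] = "timbre_brightness"
    print(apply_mapping(features, routing))  # {'timbre_brightness': 0.8, 'pitch_bend': 0.1}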

2.3 Interactive Dance


Dance is moving toward the digital realm and becoming increasingly involved with interactive systems.
Dance performers can express their movements through different media such as sound, lighting and video
projections, which offer new ways to captivate the audience and involve them in a performance. Depending
on the context, different interactive technologies are applicable to dancers. Interaction is defined by its
input and output. Dancers can choose how they interact with media by selecting between body-oriented and
environment-oriented input approaches, or a mixture of both. Currently, body-oriented systems involve
extracting data directly from the dancer, such as brain waves, heart rate and skin conductivity [3].
Environment-oriented methods position the dancers in a 3D movement-sensitive grid by using motion
tracking devices such as video cameras, sonar and laser beams. Because dance is mostly concerned with
fluid body movement, the trend goes towards environment-oriented and non-intrusive techniques that
remove the burden of dancers carrying wires and devices attached to their bodies. Technical aspects aside,
for most dancers the question to be addressed is not how to capture motion, but what motion to
capture. Mapping refers to the technique of relating movement to data capture and retrieval. Approaches
offer microscopic (body parts) or macroscopic (crowd) solutions depending on the desired outcome.
Several methods also distinguish between intuitive and functional body movements [12].
Interactive systems that are intuitive ("make sense") provide a simpler interface and the promise of
universal appeal. On the other hand, functional systems provide custom-made interfaces (e.g. virtual
instruments) in which skilled users can manipulate pre-defined parameters [13].

3 Objective
The goal of the present research is to gauge dancers' satisfaction with interactive music/dance systems in
comparison with the non-interactive, traditional music/dance experience.

3.1 Research Questions


Since any client in a dance club can interact with the music, how will quality be ensured?
How can dancers' musical expectations be matched with the musical output?
Which body movements are going to influence the music?

3.2 Hypothesis
To capture dancers' movements, we propose to install mixed types of motion capture systems and
superimpose the motion analysis information. Various signals will be captured from the dancers, ranging
from macroscopic (population, density, energy) to microscopic (arms, head, hips and legs), as sketched
below. As regards music interaction, it should be limited to pre-defined parameters set by the DJ.
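
A toy sketch of the two scales of signal, assuming positions in meters and a naive fusion step that simply averages duplicate estimates from the superimposed systems (both assumptions are illustrative, not a tested design):

    from statistics import mean

    def fuse(estimates):
        """Average (x, y) estimates of one point reported by several sensors."""
        xs, ys = zip(*estimates)
        return (mean(xs), mean(ys))

    # Each dancer is a dict of tracked points, fused across two mocap systems.
    dancers = [
        {"hips": fuse([(0.0, 1.0), (0.1, 1.0)]), "head": (0.0, 1.7)},
        {"hips": fuse([(2.0, 1.0), (2.1, 0.9)]), "head": (2.0, 1.6)},
    ]

    # Macroscopic signals: population and horizontal spread (a density stand-in).
    population = len(dancers)
    spread = max(d["hips"][0] for d in dancers) - min(d["hips"][0] for d in dancers)

    # Microscopic signal: each dancer's head height above the hips.
    head_lift = [d["head"][1] - d["hips"][1] for d in dancers]

    print(population, round(spread, 2), [round(h, 2) for h in head_lift])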

Given the popularity of interactive gaming systems, it is expected that dancers will derive greater
enjoyment from music when their actions have immediate and direct influence on the music’s parameters.
The results will inform research in human-computer interaction (HCI) and help develop sustainable
interactive music models for dance clubs as well as other entertainment technologies.

4 Method
Data collection will include a mixture of qualitative and quantitative procedures. A motion capture and
interactive audio playback system will be used to gather data from nonprofessional dancers moving
spontaneously to music in a nightclub. Parameters of the music such as filter sweeps, layers, sections and
LFOs will be manipulated by the dancers' movements. Upon leaving the floor, dancers will be queried as to
how much they enjoyed the experience and the extent to which they believed their actions directly controlled
the music.
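
A sketch of one such record (field names invented; the two Likert items correspond to the enjoyment and perceived-control questions above):

    import csv
    from dataclasses import dataclass, asdict

    # One record per dancer per set.
    @dataclass
    class SetResponse:
        participant_id: int
        interaction_level: str   # "non-interactive", "interactive", "very interactive"
        enjoyment: int           # 1-5 Likert rating
        perceived_control: int   # 1-5 Likert rating

    responses = [
        SetResponse(1, "interactive", 4, 3),
        SetResponse(1, "non-interactive", 3, 2),
    ]

    with open("responses.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(responses[0])))
        writer.writeheader()
        writer.writerows(asdict(r) for r in responses)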

The experiment will follow these two steps in order:

------- Phase 1: Assembly ------


1) Assemble motion capture system
2) Create a sample population

------- Phase 2: Experiment ------


3) Test hypothesis

There are three questions that pertain to the constituents of this thesis. The methods by which these
questions will be approached are listed below.

4.1 Dance
What movement will be captured?

Literature
Behavior Observation Checklist
Performance Tests

4.2 Music
Which musical parameters will be interactive?


Questionnaire
Opinion Surveys
Self-Ratings
Simulations

4.3 Technology
What technology is best for club environments?

Literature
Simulations

4.4 Tests
Two random samples will be created from a population of club goers. Demographic questionnaires will
filter subjects into a sample representative of club goers. The sample will then be split into two conditions.
Under these conditions, participants will dance a number of sets. Each set lasts 10 minutes, and the music
played will have one of three levels of interaction assigned randomly: non-interactive, interactive and very
interactive.

Under condition A, participants will dance in various sets and rate the experience after every set. This will
be a blind study; they will not be told which sets are interactive.

Under condition B, different participants will dance in various sets and rate the experience after every set.
They will be told which sets are interactive.

Fig. 1 provides a visual representation of the experiment.

Fig 1. Experiment Sets
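
A minimal sketch of the set assignment under the two conditions (set counts and participant labels invented):

    import random

    LEVELS = ["non-interactive", "interactive", "very interactive"]

    def make_schedule(participants, sets_per_participant, disclose):
        """Randomly assign an interaction level to each 10-minute set."""
        schedule = []
        for p in participants:
            for s in range(sets_per_participant):
                level = random.choice(LEVELS)
                schedule.append({
                    "participant": p,
                    "set": s,
                    "level": level,
                    # Condition A is blind: participants are not told the level.
                    "told_level": level if disclose else None,
                })
        return schedule

    condition_a = make_schedule(["A1", "A2"], sets_per_participant=3, disclose=False)
    condition_b = make_schedule(["B1", "B2"], sets_per_participant=3, disclose=True)
    print(condition_a[0])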

Questionnaires and opinion surveys will rate music appreciation as well as dance appreciation; sample
questions might include how much participants enjoyed the music and how much control they felt over it.
Behavior observation checklists will be used to evaluate dancers' movements and provide information on
what movement to capture.
Performance tests will evaluate the degree of sensitivity between the subjects' interaction and the affected
music; these will fine-tune parameters in the music software.
A Likert scale will allow participants to self-rate how much control they had over the music.

Work in the fields of interactive music and interactive dance has been conducted by researchers at the
University of Texas, Universidade do Porto and Universidade Nova. Outside sources such as MIT and McGill
University have presented papers on motion capture methods. Motion capture hardware and software
systems are also available at Universidade do Porto, Universidade Nova and the University of Texas.


Fig 4. Action Plan


5 Results
Hypothetical results may look like the data presented in Figs. 2 and 3.

How dancers rate the music can be represented in this chart:

Fig 2. Music Enjoyment

The following graph illustrates an example of the correlation between perceived interaction and real system
interaction:

Fig 3. Perceived vs. Real Interaction
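
One way such a correlation could be computed from the collected ratings (values invented; plain Pearson correlation is an assumed choice of statistic):

    from math import sqrt

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length lists."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Real interaction level per set (0 = none, 1 = interactive, 2 = very)
    # vs. the participants' 1-5 self-rated sense of control after each set.
    real = [0, 0, 1, 1, 2, 2]
    perceived = [2, 1, 3, 3, 4, 5]
    print(round(pearson(real, perceived), 2))   # e.g. 0.95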

6 Implications
This research will uncover connections between mind, body and music, and create
innovative technological models for the scientific community and the outside world.

From a sociologist's point of view, this research provides a rich source of information on the dynamics of a
population. The human-computer interaction (HCI) system developed in this project will reveal how
humans express themselves through their bodies and contribute to fields such as body language. HCI will
also shed fresh light on the role of music and its effects on the human body. Music cognition also has
much to gain from knowledge of how humans are aware of their musical surroundings, such as the
preference for linear vs. non-linear music.
In this paper's model, disc jockeys (DJs) are given more creative control of the music than before. It will be
up to the DJ to choose which parameters can be altered and to customize how body language is translated into
musical content. The DJ will act as a mediator and as the platform through which dancers and music reciprocate.
The mapping models proposed in this research can also contribute to e-learning strategies by
providing a medium for instruction.


The results of this research can present new avenues and resources for dance clubs and interactive media
installations. Lastly, the impact of this study will also raise more questions that can drive further research
and funding to the realm of interactive content creation.

7 References
[1] Tim O'Reilly. "What Is Web 2.0", O'Reilly Network. http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html. August 2006.

[2] Andrew Clark. “Defining Adaptive Music”, Gamasutra, April 17, 2007.

[3] http://www.palindrome.de/

[4] Paul Chin. “The Value of User-Generated Content”, Intranet Journal. July 3, 2006.

[5] "The growing cost of free online content", Deloitte Media Predictions.

[6] http://www.sustainabledanceclub.com/

[7] MSNBC News. “Your text message on a 40-foot screen: Fans get new ways to interact at a concert”,
http://msnbc.msn.com/id/7952252/.

[8] Andrew Boyd and Robb Mills. “Implementing an Adaptive, Live Orchestral Soundtrack”, lecture at the
Game Developers Conference, San Francisco, 2006.

[9] R. Ulyate and D. Bianciardi. “The Interactive Dance Club: Avoiding Chaos in a Multi-Participant
Environment”, Computer Music Journal, Volume 26, Number 3 (Boston: Massachusetts Institute of
Technology, 2002).

[10] Carlos Guedes. “translating dance movement into musical rhythm in real time: new possibilities for
computer-mediated collaboration in interactive dance performance”, CMC, 2007.

[11] Carlos Guedes. “Mapping movement to musical rhythm: a study in interactive dance”, pages 85-100,
2005.

[12] Wechsler, R. "Artistic Considerations in the Use of Motion Tracking with Live Performers: a
Practical Guide", chapter in "Performance and Technology: Practices of Virtual Embodiment
and Interactivity", edited by Susan Broadhurst and Josephine Machon, Palgrave Macmillan, 2006.

[13] Meta Motion. "Mocap Resources", http://www.metamotion.com/motion-capture/motion-capture-who-1.htm
