
Physical Modeling

Overview

In this discussion, as in the pro audio industry, 'physical modeling' refers to the use of
digital computers to model (simulate) the sounds of musical instruments. While
sometimes the instruments modeled are electronic gizmos like analog synthesizers, the
challenge is to model the tonal characteristics produced by acoustic instruments,
including all of their performance gestures. If you consider an acoustic musical
instrument such as a clarinet, from acoustic and physical principles you can derive
equations that model the properties of sound wave propagation in a pipe, including the
effects of the vibrating reed, the tone holes, and the bell at the end. Similarly for a violin,
you can compute what happens acoustically when a string is made to vibrate over a
resonant chamber. If you have a fast enough computer, you can compute in real time
(while playing) the sound the instrument emits, based on typical performance inputs, such as breath pressure
on a clarinet reed assembly or bow pressure and speed on a violin string.
Understanding the acoustics is not easy, especially in the case of the violin in the
examples above. Deriving clever algorithms that reduce the amount of computation while
remaining faithful enough to the real sound has been a daunting challenge.
However, recent advances in physical modeling algorithms, combined with the blazing
performance of modern digital signal processing (DSP) chips, have enabled the commercial
realization of physical modeling synthesizers, as of about 1994.

This discussion will not delve into the physics and acoustics of physical modeling. Rather, I will explore the
current state of the art, as implemented in the available commercial synthesizers,
focusing on the musical potential of the technology and the issues that affect musicians
who use these synths. At the outset, however, I would like to foster an appreciation for
the awesome amounts of computing power required to execute the algorithms, and the
untold man-months of engineering time that have gone into developing the models that
result in truly playable, expressive physical modeling synths that sound great down to
the last nuance in every performance gesture! We don't have perfection in all of the
models yet, but there are dozens of modeled instruments that are excellent, and very
playable. The implications for musicians seem to be mostly unrecognized at this time.
Read on and see what you are missing!
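
To give a concrete flavor of what "computing the sound" means, here is a minimal sketch of the classic Karplus-Strong plucked-string model, one of the simplest physical models in the literature. This is my own illustration, not anything used in a commercial synth; the function name, sample rate, and damping value are arbitrary assumptions, and real products use far more elaborate models of reeds, bores, bows, and strings.

```python
import numpy as np

def karplus_strong(freq=220.0, sr=44100, dur=2.0, damping=0.996):
    """Minimal Karplus-Strong plucked string: a noise-filled delay line
    whose length sets the pitch, with a gentle lowpass in the feedback
    loop standing in for the energy losses of a real string."""
    period = int(sr / freq)                     # delay-line length = one period
    buf = np.random.uniform(-1, 1, period)      # the 'pluck': a burst of noise
    out = np.zeros(int(sr * dur))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # average adjacent samples (lowpass) and damp slightly on each pass
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out
```

Even this toy responds to its inputs: a softer noise burst or heavier damping changes the character of the tone, not just its volume, which hints at why modeling an instrument behaves so differently from replaying a recording of it.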

Background and Context

Analog synthesizers came to the commercial scene first, using sine waves,
sawtooth waves, triangle waves, and so on to make sound. They modulate these waves
with various signals, filter them, add them together, and so forth to generate a variety of
'electronic' sounds.
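
As a rough illustration (my own sketch, not any particular synth's architecture), this kind of subtractive synthesis boils down to an oscillator feeding a filter; the waveform choice, cutoff value, and function name below are assumptions made for the example.

```python
import numpy as np

def analog_style_note(freq=110.0, sr=44100, dur=1.0, cutoff=800.0):
    """Crude subtractive-synthesis sketch: a naive sawtooth oscillator
    run through a one-pole lowpass filter to tame its upper harmonics."""
    t = np.arange(int(sr * dur)) / sr
    saw = 2.0 * ((t * freq) % 1.0) - 1.0               # sawtooth oscillator
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / sr)   # one-pole filter coefficient
    out = np.empty_like(saw)
    y = 0.0
    for i, x in enumerate(saw):
        y += alpha * (x - y)                           # lowpass 'filter' stage
        out[i] = y
    return out
```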

Modern digital synths can easily simulate the analog sounds, and they can layer them
with other types of sounds as well. Most digital synths rely heavily on sample tables,
which amount to digital recordings of acoustic instrument sounds. Each sample has a
loop part which repeats indefinitely to sustain the sound as long as the key is held
down. Each sample has an envelope -- how it ramps up in volume in the beginning,
perhaps decays from a peak, then sustains, then decays at the release. These
envelopes can be edited by the musician, and many synths allow real-time control of
some envelope attributes while playing, using wheels, sliders, or such. They allow the
musician to control volume using a foot pedal or slider, pitch bend using a spring-
centered wheel, and vibrato depth using a wheel. Vibrato (cyclic variation of pitch at
about 4 to 8 cycles per second) is simulated by varying the pitch automatically, and
sometimes the volume as well. Modern synths allow layering (combining multiple
sounds for each key), and polyphony (hitting multiple keys at once, as with a chord).
They typically allow transposing and processing of the generated sounds: flanging, EQ
(filtering), effects such as chorus and reverb, and quite a few others. More and more,
they allow real-time control, via ribbons, sliders, pedals, and so on, of the filter or effect
parameters, letting the musician alter the sound while playing.

They can send MIDI
to your sequencer to store your performance, including everything that you did with the
mod wheel, volume slider, and so on. Then you can play back the sequence and hear
your performance just as you played it. You can build up a sequence of several parts
and then play it back, complete with drum sounds, rhythm parts, fills, pads, and multiple
instrument melodies, all simultaneously playing through one synth. Many synths come
with over a hundred different sounds preloaded at the factory, and pro synths allow you
to add more sounds, either purchased from the same company or, on some models,
created through built-in sampling. A sampling synth allows you to record and create your own sounds
across the keyboard, or purchase sample disks of professionally-made samples, giving
you an open-ended library of possible sounds. Yes, these synths are marvelous, and
there are lots of them in use out there.
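
The envelope idea mentioned above is easy to picture in code. Here is a bare-bones sketch of a classic ADSR amplitude envelope (my own illustration; the segment times, sustain level, and function name are arbitrary):

```python
import numpy as np

def adsr(sr=44100, attack=0.01, decay=0.1, sustain=0.7,
         hold=1.0, release=0.3):
    """Classic ADSR amplitude envelope: ramp up over `attack` seconds,
    fall to the `sustain` level over `decay` seconds, stay there while
    the key is held (`hold` seconds), then fade out over `release`."""
    a = np.linspace(0.0, 1.0, int(sr * attack), endpoint=False)
    d = np.linspace(1.0, sustain, int(sr * decay), endpoint=False)
    s = np.full(int(sr * hold), sustain)
    r = np.linspace(sustain, 0.0, int(sr * release))
    return np.concatenate([a, d, s, r])
```

Multiplying this curve sample-by-sample into a looping waveform gives the shaped note described above: the looped portion supplies the timbre, and the envelope supplies the dynamics.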

There are limitations, however, and the limitations are quite serious when trying to
create certain kinds of music. These limitations could be summarized under the
heading: expressiveness. When a talented musician steps onto a stage, that one
person playing a single instrument can hold thousands of people spellbound. That
musician is doing lots of interesting things with his instrument, and when you try to do
that with a sampler you will run into difficulties. In fact, you run into trouble even in
simple parts. Take a trumpet, for example (I love the sound of the trumpet). When a
trumpet is played softly, it has a round, mellow sound with mild upper harmonics. As
you play it louder, the upper harmonics increase (more than the fundamental), and it
gains an edge, becoming 'harder'. When played full force, it becomes quite brassy, with
strong upper harmonics. The trumpet timbre also varies quite a bit depending on the
amount of air used and the 'intonation' achieved by the player's lips. As a simple
example, music parts often call for crescendos, where you start playing a note soft and
swell up to a loud level. How are you going to do that with a sampler? Well, first of all,
you can forget about the trumpet patches that came bundled with the synth, as they
never stack up against good pro samples. So you pony up for some good samples,
which can be difficult to find, and expensive. You can get sampled swells, but they have
a fixed length, so they are worthless. Typically you would program a three-layer patch
with soft, medium, and forte trumpet samples, so that when you play softly it triggers the soft
one, when you hit the key hard it plays the brassy sample, and so forth. The patch can also
cross-fade between the layers, but for this 'swell' note you have to drive the layers off
of a controller other than velocity, since velocity is captured only once, at the instant you
strike the key. You could also try setting up a modulated lowpass
filter to simulate the change in 'brassiness', albeit not so well. Things are getting
complicated, and not so easy to play at this point.
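
To make the layering idea concrete, here is a small sketch of how a three-layer soft/medium/forte crossfade might map a controller value to layer gains. The function name and breakpoints are hypothetical, not any synth's actual patch format.

```python
def layer_gains(level):
    """Map a control value in 0.0-1.0 (key velocity, or a breath /
    expression controller during a swell) to gains for three layers:
    (soft, medium, forte), crossfading linearly between neighbors."""
    level = min(max(level, 0.0), 1.0)
    if level <= 0.5:                       # blend soft -> medium
        x = level / 0.5
        return (1.0 - x, x, 0.0)
    x = (level - 0.5) / 0.5                # blend medium -> forte
    return (0.0, 1.0 - x, x)

# A swell simply sweeps the controller: layer_gains(0.1) is mostly the
# soft sample, layer_gains(0.9) mostly the forte one.
```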

Now consider vibrato. Synth vibrato nearly always sounds too regular, too machine-like,
lacking in expression. That's because the synth player usually controls nothing about the
vibrato except its depth, because it usually lacks the correct balance of volume and pitch
components, and because it virtually never has the timbral changes that occur in real acoustic
instruments. Trumpet players have individual characteristic ways of doing vibrato, and
they vary it depending on the requirements of the piece and their feeling at the time.
This gives their performance an integrated, organic 'feel' that you can get carried along
on. Violin players and vocalists employ completely different vibrato techniques, and the
exact vibrato is essential to their performances. Emotion is carried in vibrato, and it must
not be compromised.
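
The mechanical quality is easy to hear if you build the usual LFO vibrato yourself. The sketch below is an illustration of the generic technique, with made-up parameter values: one fixed-rate sine wave wobbles pitch and volume, with exactly the regularity a human player never produces.

```python
import numpy as np

def lfo_vibrato_tone(freq=440.0, sr=44100, dur=2.0, rate=5.5,
                     pitch_depth=0.01, amp_depth=0.1):
    """A sine tone with textbook LFO vibrato: one fixed-rate sine wave
    modulates both the pitch (about +/-1%) and the amplitude. The
    unvarying rate and depth are what make it sound machine-like."""
    n = int(sr * dur)
    t = np.arange(n) / sr
    lfo = np.sin(2 * np.pi * rate * t)             # 5.5 Hz low-frequency oscillator
    inst_freq = freq * (1.0 + pitch_depth * lfo)   # cyclic pitch variation
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr  # integrate frequency to get phase
    return (1.0 + amp_depth * lfo) * np.sin(phase)
```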

How about attacks? Perhaps you have heard of studies showing that most people have
great difficulty distinguishing one instrument sound from another if you edit out the
attacks of the notes. Attacks are very important, and they obviously convey a great deal
of the feeling of a performance. With a sample-based synth, you have control over the
envelope (although it is rarely used during playing), and potentially you could modulate
a filter or a layer fade, but acoustic instruments have very complex attack sounds that
change dramatically depending on how they are played. Try programming that in a
synth! Real acoustic performances are full of sounds that evolve throughout the note
because the musician is playing the note, not just triggering it. The entire note model
used in modern synths, consisting of attack, decay, sustain, and release parts, is
basically artificial and doesn't fit the reality of expressive playing. On sustaining
acoustic instruments, the note is being modulated in pitch, timbre, and volume by the
player throughout its duration, however he chooses -- that's reality, and that's what we
need to do with a synth.

Perhaps the most important weakness of conventional synths for expressive playing is
that they are not as 'playable' as they should be. When you have to use one sample for
some notes, another sample for other notes in the same part, and mess around with
controller assignments and parameter settings 'til you're blue in the face, you are not
having fun. Musicians want to sit there and play their instruments. They want to have a
natural, easy way to achieve the sounds that they want -- it's supposed to be 'play' time,
not physics class. By this time I hope that you are getting the idea that there is a giant
hole in our arsenal of music gear. It's so big, you could drive a semi truck through it. And
that hole is right in the heart of what music is all about: the ability to perform
expressively, to vividly express a musical piece. We're not talking about subtleties and
nuances here, we're talking about the difference between good music and bad music,
the difference between music that moves people to tears and music that rides along on
elevators.

Notice that there is a complementary pattern here. Sample-based synths are great for
background pads and fills; you can even put in rhythm tracks with them; they do well
with piano and organ sounds, and if you are careful you can use them for many kinds of
percussion sounds. For string pads and such, you have to buy very expensive samples,
and even then you are limited to what you can do without it sounding 'synthy'. It is easy
to create chordal or melodic analog synth sounds, complete with expressive filtering,
evolving timbres, and so forth, and you can play all of the other native 'synth' sounds
that manufacturers put in their synths. The imperfections in the tracks mentioned above are
masked by the chorusing together of all of the different sounds, so many people find it
perfectly acceptable to use sampling synths as described, if done skillfully. Then when it
comes to solo or lead parts, you can use acoustic instruments or a physical modeling
(PM) synth. In this situation, polyphony is not necessary. Not only that, but a PM synth
is easy to use when you want to make the sound of, say, 4 instruments playing as a
section. You can make a very convincing section with a single performance, and without
using the chorus effect.

My summary point of the discussion so far is that there is this niche -- it's quite a large
niche, actually -- for the PM synth. When you want great lead parts or convincing small
acoustic sections, look to the PM synth. And you acoustic musicians don't have to worry
about being displaced. You are the ones who can play the PM synths the best! When
you get your hands on the nifty controllers that are available for them, and adapt to the new control
interface (which you can tailor to your taste), I believe you will see some bright new
horizons. The technology is basically empowering. Instead of just being an oboe player,
you could lay down an oboe part, a bassoon part, a couple of clarinet parts, and so forth
for a recording. Acoustic musicians who are skillful with a PM synth will be much more
valuable than they were before. It's sort of like how one guy with a backhoe can dig
more than several people with shovels, and if he's good, he can also dig a straight,
clean trench or hole. And inevitably, when something conveys a clear economic
and performance advantage, it's coming, whether you like it or not.

Physical Modeling Today

Hailed in February '94 in Keyboard magazine as 'The Next Big Thing', physical
modeling has become a confused subject lately. Incidentally, the article by Marans in
that issue of Keyboard is an example of how helpful a gear magazine can be. It did a
nice job of introducing a very technical subject to musicians, and of exposing the
potential of the new technology. He focused on the Yamaha VL1 as the leading
commercial example of the technology, and he pointed up the fact that "the VL1 is so
responsive to performance gestures that playing it is an absolute joy". Then in June of
'94, Ernie Rideout wrote a detailed review of the VL1 for Keyboard that described a
number of nuances of playing a physically modeled instrument with a breath controller.
He spent quite a bit of time actually playing the VL1, and ended his article with:

"The VL1 is a victory for expression, but it's a victory you will have to win. You will find
unique ways to combine parameters and make assignments to achieve your
performance goals, and then it'll take practice to realize them fully. You'll want to
practice, guaranteed, and not only just to program clever assignments but to see how
far you can take things, where the boundaries are, can you exceed them, can you push
them, and so on. Heck, you'll want to just play the thing. It is -- dare we say it? -- a real
musical instrument."

In January of '95, Nick Batzdorf, editor of Recording magazine and a wind player, gave
the VL1 an accurate review that clearly pointed out a number of the strengths of
physical modeling, and some of the strengths and weaknesses of the VL1 in particular.
Elsewhere I read of a staffer who liked the VL1 so much he proclaimed "Give me this
synth, or give me death!" He was faced with the stiff price tag of the original VL1, but he
eventually got one.

In spite of all of the hoopla, and the obvious potential of the technology, the VL1 did not
do well in the marketplace. Other manufacturers, such as Korg, also invested in
physical modeling, but they held back, observed what a tough time Yamaha had
with the VL1, and evidently decided not to go head-to-head against it. The early
feedback from the press was quite positive, and it probably seemed fair to Yamaha to
charge $5000 for the synth. Obviously, few musicians paid the price. Word of mouth
among musicians varied, and it was tough to get a decent demo of a VL synth. While
some of the patches were obviously excellent, others were poor and unusable. Most
synth players are keyboard players who have never used a breath controller. Each
would face a learning curve to use a breath controller, and yet to use a VL synth without
a breath controller for most voices is almost a crime. You would be giving away a lot of
what the technology is all about. A large proportion of keyboard players are happy with
synthy sounds, grunge, distortion, pitchless noise, and so on, and for them the idea of
paying five big ones for a two-voice polyphonic synth must have seemed like a joke.
Add to this the unfamiliar voice editing issues, and we have enough barriers to explain
the lack of sales. Then when word got out that the VL synths weren't moving, dealers
lost interest. I doubt that there are many dealers on the planet with a single
salesperson who can demo a VL synth halfway skillfully using a breath controller or
wind controller. In a typical music store, the only clue the average musician might get
that this was something different would be a salesperson running some of the demos in the
Utility section, which show off the potential of the technology pretty well.

In '96 we got some repackaging, upgrading, significant voice improvements, nifty
editors, and steep price reductions (yeah!) of the VL synths. Later, due to lack of market
response, Yamaha discontinued production of all VL synths except the VL70-m. Other
Yamaha offerings merely incorporate the VL70-m functionality. There are other synths
that use some physical modeling, and I apologize for the fact that my list here is not
really up-to-date. Each synth shown on this site is described in the depth that I judged
appropriate for this survey, keeping the focus on its physical modeling attributes,
including the control interface. Each synth is a significant achievement, so please
understand that the brief treatment here simply reflects that focus.
References are given to more detailed reviews in the pro audio magazines for
those who want more info on a particular synth. The VL line is then examined in
considerable detail, since it carries the technology to a level of refinement that allows a
fairly deep exploration into the true potential of physical modeling.

This site includes some material (with Web links) on PM research labs. For those of you
who are interested in physical modeling theory and algorithms, which have some bearing
on the editing screens of real synths like the VL1-m, I include pages giving a brief
treatment and some links to the literature on this subject. The theory requires at least
some understanding of musical acoustics, though, and the discussions of math and
algorithms are not likely to appeal to most musicians. The rest of this site is easy
reading for most any musician, so stay off the Theory page if you are not technically
inclined.
