


SYNTHESIZERS
A Brief Introduction

David Martínez Zorrilla


2008
David Martínez Zorrilla

Synthesizers: A Brief Introduction


© David Martínez Zorrilla, 2008
ISBN: 978-1-4092-2251-4

Copyright notice: This book is published under a Creative Commons
Attribution-NonCommercial-ShareAlike 2.5 license. This means that you can
freely copy, distribute and modify this book, as long as the following
conditions are met: a) proper credit is given to the author; b) the work is
not used commercially; and c) if derivative works are created, they must be
distributed under the same kind of license.


TABLE OF CONTENTS

0. Introduction
1. Sound
2. What is a synthesizer?
3. Some kinds of synthesizers
4. Types of synthesis
4.1. Subtractive synthesis
4.1.1. Sophistications of the model
4.2. Additive synthesis
4.3. Frequency Modulation synthesis
4.4. Wavetable synthesis
4.5. Physical modelling synthesis
5. Some prominent models
5.1. The Minimoog (Moog Music, 1970)
5.2. The Prophet 5 (Sequential Circuits, 1978)
5.3. The DX-7 (Yamaha, 1983)
5.4. The D-50 (Roland, 1987)
5.5. The M1 (Korg, 1988)
5.6. The VL-1 (Yamaha, 1994)
5.7. The Fantom-X (Roland, 2004)
5.8. A special case: the SID (1981)
6. Bibliography


0. INTRODUCTION

This book aims to be a brief and easy-to-understand approach
to the world of those fascinating musical instruments called
‘synthesizers’, presenting their typology and structure in general
terms, and trying to explain some of the key points behind their
incredible flexibility, versatility and power. Indeed, the synthesizer is
probably the musical instrument (acoustic or electric) with the
greatest and fastest evolution. Despite its rather short history, it has
reached very high levels of success, going from being used only in
electronic and experimental music in the 60s and 70s to being
nowadays a fundamental device in the most diverse musical genres.

The book’s basic structure is as follows. After introducing some
basic concepts about sound in general, which are necessary to
properly understand the subsequent explanations, we’ll see what a
synthesizer is and what sorts of synthesizers we can find. Later, we’ll look
in more depth at the diverse kinds of synthesis that those
devices use (the key to their character and versatility). Finally, we’ll
make a brief historical overview, paying attention to some
representative and prominent models which illustrate quite well their
evolution and typology.


1. SOUND

From the physical point of view, sound is a vibration of the air
(there’s no sound in space or in a vacuum) of a certain sort which,
when it reaches our auditory system, can be perceived by
the brain as an audible signal (what we usually call “a sound” or “a
noise”). Not every air vibration gives rise to a sound perception, due to
limitations of the human ear, which make certain vibrations
inaudible (for instance, those that are beyond the limits of audible
frequencies, or those that are too weak to be perceived).

It’s possible to set out certain properties or basic elements of
sound, which have to be duly differentiated. Specifically, we can stress
the following four: 1) frequency or pitch; 2) amplitude or intensity;
3) length; and 4) timbre.

1) The frequency (or pitch) of a sound is the rate of the cyclical
vibrations of the sound waves (the time lapse between two cycles is the
period). Those vibrations are measured in cycles per second, which are
technically called hertz (Hz). One hertz corresponds to a single cycle
(vibration) per second. The frequency or speed of the vibrations is what
allows us to classify different sounds as higher or lower-pitched: as the
frequency rises, the sound becomes more high-pitched, and vice versa.
A person with normal hearing is able to perceive sounds roughly
between 20 Hz and 20 KHz (20,000 Hz), although the upper limit tends
to get lower as we get old.

Regarding the frequency of the sound waves, a very important
concept in music theory must be emphasized: the octave. An interval or
difference of an octave between two musical notes corresponds exactly
to a doubling of the frequency. For example, if we take as a reference
the ‘A’ of the fourth octave (which is usually taken as a basis for the
tuning of musical instruments), it corresponds to 440.0 Hz. The ‘A’ of
the fifth octave will then be 880.0 Hz, the ‘A’ of the sixth octave
1,760.0 Hz, and so on. Similarly, the ‘A’ of the third octave will be
220.0 Hz.
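To make the arithmetic concrete, the octave relationship can be sketched in a few lines of Python (the function name `a_frequency` is just an illustrative choice, not standard terminology):

```python
def a_frequency(octave, reference=440.0, reference_octave=4):
    """Frequency of the 'A' note in a given octave: each octave up
    doubles the frequency, each octave down halves it."""
    return reference * 2 ** (octave - reference_octave)

print(a_frequency(4))  # 440.0 Hz, the usual tuning reference
print(a_frequency(5))  # 880.0 Hz, one octave higher
print(a_frequency(3))  # 220.0 Hz, one octave lower
```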

2) The amplitude (or intensity) is what we usually call the “volume” or
“strength” of a sound, and it allows us to distinguish between different
sounds depending on their intensity (louder or softer). Amplitude is
measured in decibels (dB), with 0 dB being the lower limit of our sensibility


(for a normal human ear), so all audible sounds have a positive value in
decibels. Theoretically, there’s no upper limit, but from 100 dB upwards
the sound is perceived as unpleasant, and from 130 dB upwards it hurts
and can even cause injuries.

One aspect that can be quite curious for someone who’s not
familiar with this is that if we look at some synths, or at sound
equipment in general (such as an amplifier), we’ll see that the maximum
values are set around 0 dB (or low positive values, such as +3 dB or +6
dB), while the rest are negative values (for instance, -10 dB, -20 dB,
etc.). Those negative values are actually not absolute, but relative to the
original signal (attenuation). Therefore, a value of 0 dB in an amplifier
does not mean that the signal has an absolute amplitude of 0 dB and is
thus inaudible, but that its amplitude has not been attenuated with
regard to its original value.
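The relation between these relative decibel values and the actual change in amplitude can be sketched as follows, using the standard conversion (amplitude ratio = 10^(dB/20)); the function name is illustrative:

```python
def db_to_amplitude_ratio(db):
    """Convert a relative dB value, as found on a level control,
    into a multiplier applied to the original signal's amplitude."""
    return 10 ** (db / 20)

print(db_to_amplitude_ratio(0))    # 1.0: signal left unchanged
print(db_to_amplitude_ratio(-20))  # 0.1: amplitude reduced to 10%
print(db_to_amplitude_ratio(6))    # about 2.0: roughly doubled
```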

3) The length is the time lapse from the beginning to the end of the
sound. Despite seeming a very simple concept, in many cases it can be
quite problematic to determine the length, because sounds can vary
dynamically in their frequency, amplitude and timbre, hence raising the
doubt of whether we’re talking about the same sound or two (or more)
different ones.

4) The timbre is what gives a sound its own unique character, and
allows us to recognize or identify it as, for example, a piano sound (and
not a violin), or as John’s voice (and not Susan’s). Two or more
sounds can be identical in their frequency, amplitude and length, and
despite this, be clearly distinguishable due to their different timbre. The
timbre of a sound depends on its harmonics or partials. Every sound,
be it natural or artificially produced, and with the exception of a pure
sine wave (which totally lacks harmonics), has, in addition to its
fundamental frequency (for instance, 440.0 Hz in the case of an ‘A’ note),
other frequencies with lower amplitudes (the harmonics). Depending on
the amplitudes and frequencies of the harmonics, the timbre will vary.
Indeed, as can be mathematically proved by means of Fourier analysis,
every sound can be reduced to a combination of sine waves of different
frequencies and amplitudes (and that is the basis of
the synthesis process called “additive synthesis”).
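The idea that any sound can be built as a sum of sine waves can be illustrated with a short sketch: summing sine harmonics whose amplitude falls off as 1/n gives an increasingly good approximation of a sawtooth wave (a standard Fourier-series example; the function name is illustrative):

```python
import math

def additive_sample(t, fundamental=440.0, partials=20):
    """One sample (at time t, in seconds) of a sawtooth-like wave
    built by summing sine harmonics; harmonic n has amplitude 1/n.
    With partials=1 the result is just a pure sine wave."""
    return (2 / math.pi) * sum(
        math.sin(2 * math.pi * n * fundamental * t) / n
        for n in range(1, partials + 1)
    )

print(additive_sample(0.0))  # 0.0: every sine partial starts at zero
```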


2. WHAT IS A SYNTHESIZER?

A synthesizer is, in the first place, and prior to any technological
aspects, a musical instrument: a device capable of generating sounds at
the will of the performer, for the sake of being used in music production.
The label “synthesizer” responds to the fact that the sound is created as
the result of a synthesis process: a product which is different from the
mere juxtaposition of the different elements that intervene or are used in
the process.

So, a remarkable feature of synthesizers, contrary to other
musical instruments, in which the sound we hear depends solely on
certain physical aspects, such as the shape, dimensions, materials or
playing techniques, is that in synthesizers the sound is created or
synthesized electronically by the synth itself, which allows for an
extremely wide range of different possible timbres (and that’s an
especially relevant feature of synthesizers). Synthesizers are the
musical instruments with the widest timbre variety, given that a single
device can, according to the way it’s programmed, synthesize and play
lots of (very) different sound textures, often with radical timbre
variations. That’s the main reason why they are extremely flexible and
versatile instruments, and why their creation and evolution have opened
a whole new world of sonic possibilities which couldn’t be dreamed of
before. In the beginning, the main function of a synthesizer, from a
musical point of view, was to open up the possibility of using new and
artificial sounds, never heard before. On the other hand, as technology
evolved, synthesizers greatly improved their capabilities for
emulating or reproducing “real” instrument sounds, such as pianos,
brass, strings, etc., reaching very high levels of fidelity; as a result,
nowadays synths are commonly used as a “substitute” for other
instruments, at least when “real” instruments cannot be used (mainly
due to budget limitations).

Although the 20th century was prolific in the invention of electric
musical instruments, a unique feature of synthesizers, as already noted,
is that the sound is generated fully electronically. In other electric
instruments, such as the electric guitar, the electric piano (Rhodes or
Wurlitzer type), the electric grand piano (such as the Yamaha CP-80), or
the clavinet (such as the Hohner D-6), there’s in the first place an
acoustic process, which is later transformed into an electrical signal that
is then manipulated, processed and amplified. In an electric guitar, for
instance, the player makes the strings vibrate, and this generates an


acoustic signal (the vibration of the strings generates an audible signal).
That vibration is captured by pickups (usually electromagnetic, though
piezoelectric pickups also exist) and turned into an electrical current,
which can be manipulated and amplified. In a synthesizer, the whole
process is electronic, so synthesizers are much less constrained when
generating a sound.

In order to avoid confusion, it must be emphasized that, strictly
speaking, synths are not “keyboards”, or instruments played by means
of a keyboard (similar to a piano), because, actually, the synthesizer is
the device that generates the sound, independently of the mechanism
used by the performer for playing and controlling it. It’s true that most
synths are controlled by a keyboard, but there are also others which can
be controlled by an electric guitar, and even synthesizer “modules”
(synthesizers with no keyboard or playing device at all, suitable to be
controlled by any proper external device, such as a keyboard, a guitar,
a wind controller (much like a flute or a saxophone, albeit electric), or a
computer).


3. SOME KINDS OF SYNTHESIZERS

Depending on some of their features, it’s possible to classify
synthesizers into diverse categories. Among the different classifications
(which are not mutually exclusive), the most relevant one is that related
to the synthesis type, which we’ll see in the next chapter.

Monophonic and polyphonic synthesizers. A synthesizer is monophonic
if it’s only able to reproduce one single voice (note) at once, whereas
it’s polyphonic if it can play two or more voices or notes at once.
Generally, the polyphonic capabilities of synthesizers have increased
with the evolution of technology. Hence, at first (until the mid-70s and
even later), nearly all synthesizers were monophonic; gradually, new
models increased their polyphonic capabilities, modestly at first (with
two, four, six voices, etc.), and then by a more significant amount.
Currently it’s usual to find synths with up to 128 voices of polyphony.

Monotimbral and multitimbral synthesizers. This classification responds
to the ability or inability to reproduce different timbres at once (such as
a bass sound and a strings ensemble sound). It is relatively independent
from the former one, because although a monophonic synthesizer can
only be monotimbral, there are both polyphonic monotimbral synths and
polyphonic multitimbral ones. Each different timbre is usually called a
“part”. Currently there are several multitimbral synthesizers with 16, 32
and even 64 parts, which allows a single synth to virtually operate as a
full orchestra (within the limits of its polyphony, of course). The relation
between parts and voices can be static (each part has an assigned
number of voices) or dynamic (each part can use as many voices as
needed, as long as they are available, and the voices that are not used
by one part are available for other parts).

Analog and digital synthesizers. All synthesizers are electronic
instruments, but some of them use analog electronic components
(electric currents or voltages) to generate the sound, whereas others
use digital technology (the whole process is done using numerical
sequences, which at the end are converted by an electronic circuit
called a DAC, or Digital-to-Analog Converter, into the analog signal that
is sent to the amplifier and then to the speakers). There are also some
hybrid models, most of them from the late 70s to the mid 80s, in which
some components are analog but are digitally controlled by a
microprocessor.


Historically, analog synthesizers were the most widespread
from their beginnings until the mid 80s. Digital synthesizers arrived in
the late 70s and early 80s, and towards the late 80s they were clearly
the dominant models. Although the switch from the analog to the digital
world responds to the evolution of technology, actually each kind of
synthesizer has its strong and weak points. Analog electronic
components suffer slight changes and variations (for instance, in the
intensity of voltages), whereas digital components are much more
precise and reliable. This means that the former, for example, cannot be
so precisely tuned, and tend to drift out of tune as time passes. Moreover,
analog components are more prone to be affected by electrical
interference from other components, adding some “impurities” or
imperfections. Also, in an analog circuit the signal degrades as it makes
its way through the circuit, although the level of degradation will be
lower the better the quality of the components (with its consequences
regarding costs). But those drawbacks do not necessarily have to be
considered a problem, because from the point of view of many
musicians and enthusiasts those little imperfections are what make the
analog sound “fatter” and “warmer” than the digital sound, which is
considered “thinner” and “colder” (mainly because it is more exact).

The advantages of digital technology are, on the one hand, its
higher precision (although this is not necessarily perceived as an
advantage from a musical point of view), as instead of dealing with
voltages, it only has to deal with numerical operations. On the other
hand, another advantage is, strange as it may seem, its lower cost,
because we no longer need such high-quality components to prevent or
minimize signal degradation: it’s enough to keep the numerical data.
The main reason for the high cost of digital synthesizers and digital
audio in general, especially in the beginning of digital technology, was
the need to recover the large amounts of money invested in research
and development. Regarding strict production costs, digital technology
is cheaper than analog.

A quite common mistake is to think that digital audio (like, for
example, the compact disc) has more quality than analog audio (like, for
instance, vinyl records). Actually, it’s much the opposite, although digital
audio has many important advantages and nowadays is of a quality
good enough to be preferable, in general terms, to analog audio.


The digitalization process of an audio signal implies “translating”
it into a numerical sequence. A CD, for example, contains only a
sequence of numbers, which are read by a laser beam and then
“translated” back into an analog signal (through a DAC), which is sent
to the amplifier and the speakers. The digitalization process implies
putting the original sound onto a “grid”, so there will always be a certain
degree of loss (as compensation, the interference and noise commonly
associated with analog technology are vastly reduced, as is the cost).
To better understand the idea, we can look at the following picture,
which represents the original wave:

Fig. 1. Original audio signal

This wave is continuous. By contrast, when digitized, it becomes
fragmented by the need to convert it into numerical values:

Fig. 2. Digitized audio signal

The quality level of digital audio depends mainly on two factors:
the resolution and the sampling frequency. Resolution determines how
many different amplitude levels the wave can contain. So, if the
resolution is for instance 8 bit, there will be a total of 256 amplitude
levels (because in the binary numeral system, used by digital
technology, eight binary digits can represent 256 different values); if the
resolution is 16 bit, there will be 65,536 levels; if it’s 24 bit, 16,777,216

levels, and so on. The higher the resolution, the greater the fidelity with
regard to the original waveform (the squares of the grid are finer), and
therefore, the lower the loss. The problem is that as we increase the
resolution, the amount of information to be processed also increases,
so the calculation power must also be increased to cope with the signal.

Quality also depends on the sampling frequency. This is the number
of times per second that the amplitude is updated (in other words, the
speed at which wave data are read). The greater the frequency, the
greater the global sound quality. Using again the example of the CD, it
operates at a 16 bit resolution and at a sampling frequency of 44.1 KHz
(44,100 samples per second), which is quite good sound quality,
although nowadays professional digital audio usually works at 24 bit
and 96 KHz or even 192 KHz.
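The arithmetic behind these figures is simple enough to sketch; the function names are illustrative, and the stereo data-rate calculation is an addition the text only implies when it says the amount of information grows with resolution:

```python
def quantization_levels(bits):
    """Number of distinct amplitude levels at a given resolution."""
    return 2 ** bits

def data_rate(sample_rate, bits, channels=2):
    """Bits of audio data produced per second (stereo by default)."""
    return sample_rate * bits * channels

print(quantization_levels(8))    # 256
print(quantization_levels(16))   # 65536
print(quantization_levels(24))   # 16777216
print(data_rate(44100, 16))      # 1411200 bits/s for CD-quality stereo
```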

One of the most noticeable problems of low-quality digital sound
(which virtually disappears when we operate with resolutions of 18 bit or
more) is that, as the numerical representation of the original sound lacks
precision and fidelity, some differences with respect to the original wave
are introduced (what is known as the quantization error, which is nothing
but the difference between the original signal and the digitized one),
and that gives rise to the presence of noises and distortions. And
contrary to analog distortion, digital distortion is very unpleasant.
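A rough sketch of quantization and its error, reducing one cycle of a sine wave to 8-bit resolution (illustrative code, not any particular converter's behaviour):

```python
import math

def quantize(sample, bits):
    """Snap a sample in [-1.0, 1.0] to the nearest of 2**bits levels."""
    step = 2.0 / (2 ** bits - 1)
    return round(sample / step) * step

# The quantization error is the difference between the original
# signal and its digitized version; here we find the worst case
# over one cycle of a sine wave.
worst = max(
    abs(math.sin(2 * math.pi * i / 1000)
        - quantize(math.sin(2 * math.pi * i / 1000), 8))
    for i in range(1000)
)
print(worst)  # never more than half a step (about 0.004 at 8 bits)
```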


4. TYPES OF SYNTHESIS

Among the differences between synths, the most relevant ones
are those related to the model or type of synthesis, because they affect
the core of the sound creation, manipulation and reproduction process.
There are many different synthesis types, although most of them are
marginal, because, in general terms, the most common ones (which
cover at least 95% of models) are the following: 1) subtractive
synthesis; 2) additive synthesis; 3) frequency modulation synthesis; 4)
wavetable synthesis; and 5) physical modelling synthesis. Among them,
moreover, the greatest number of models fall into the categories of
subtractive synthesis and wavetable synthesis (and the latter, in turn,
has in many cases finally become an evolution of the former).

4.1. Subtractive synthesis

With no doubt, this is the most widely spread synthesis scheme.
Nearly all analog synthesizers and a great number of digital ones are
included in this category. The main idea (hence the name) is that the
sound is formed through the subtraction or elimination of a part of the
harmonics coming from the main sound generator. Using an analogy, it
would be like the process used by a sculptor who, from a marble block,
removes part of the material in order to shape it in the desired way.

Now we’re going to dig into what could be considered “the
basic model of subtractive synthesis”. Some synthesizers fit
perfectly into this model, although many others are actually evolutions
or sophistications of this basic scheme, in order to provide more
flexibility and greater possibilities in the sound creation process. We’ll
also refer to some of these sophistications later.

In the basic model, the sound synthesis involves four basic
elements: the oscillator, the filter, the amplifier and the LFO (low
frequency oscillator). There’s a signal provided by the oscillator (which
is the sound source properly speaking), which is later manipulated by
the filter, and then processed by the amplifier, which can shape it
dynamically. The LFO, on the other hand, is usually an element that can
be routed at will to one or more of the other elements (the filter, the
oscillator or the amplifier) to cyclically manipulate them.

So, the scheme would be like this:



Fig. 3. Basic model of subtractive synthesis
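As a rough illustration of this signal chain, the following Python sketch pushes samples through an oscillator, a simple filter and an amplifier in sequence. All names, constants and the one-pole filter design are illustrative assumptions, not any particular synth's circuitry:

```python
def oscillator(t, freq=110.0):
    """Sawtooth oscillator: the raw, harmonically rich source signal."""
    phase = (t * freq) % 1.0
    return 2.0 * phase - 1.0

def low_pass(sample, state, alpha=0.1):
    """One-pole low-pass filter: attenuates the higher harmonics."""
    return state + alpha * (sample - state)

def amplifier(sample, gain):
    """Amplifier: scales the filtered signal's amplitude."""
    return sample * gain

# Oscillator -> Filter -> Amplifier, sample by sample:
rate = 44100
state = 0.0
out = []
for n in range(rate // 100):          # 10 ms of audio
    raw = oscillator(n / rate)
    state = low_pass(raw, state)
    out.append(amplifier(state, 0.5))
```

A real synth would add the LFO as a fourth element, cyclically nudging the parameters of the other three.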

The Oscillator

The oscillator is an electronic circuit that generates a waveform at
the desired frequency or pitch (this signal alone, properly amplified, is
already audible). In acoustic instruments, the acoustic signal can be
generated by an air current (wind instruments), by the friction or
plucking of a string (stringed instruments), or by a physical hit on a
surface (percussion instruments). In synthesizers, that task is carried
out by the oscillator. In analog synths, oscillators are usually called
VCOs (voltage-controlled oscillators), whereas in digital ones they are
usually called DCOs (digitally-controlled oscillators). Usually, those
oscillators are able to generate only a few different basic and simple
waveforms. Now we’ll see some of the most common ones. Not all
models are capable of generating all the following waveforms, but
these are the most usual and common ones:

1. Sine

Fig. 4. Graph of a sine waveform

It is the most basic waveform. A sine wave totally lacks harmonics,
although oscillators are never precise enough to produce a perfectly
pure sine waveform, so there will always be some harmonics.
Nevertheless, the fact that it is a “poor” wave means that many models
don’t include it.

2. Triangle

Fig. 5. Graph of a triangle waveform

From the sonic point of view, it’s similar to the sine wave, but it
has harmonics (although only a few, and with low amplitudes), which
makes it a more interesting waveform for subtractive synthesis than the
sine. Contrary to the latter, many models do include the triangle
waveform.

3. Sawtooth

Fig. 6. Graph of a sawtooth waveform

It produces a bright sound, rich in harmonics, and suitable for
generating sound textures which resemble brass and strings. Nearly all
synthesizers include this waveform.

4. Square

Fig. 7. Graph of a square waveform

It’s also very common. The two halves of each cycle are of exactly
equal width (50-50). The harmonics are similar to those of the triangle
wave, although with greater amplitudes. It makes a sound that
resembles reed instruments.

5. Pulse

Fig.8. Graph of a pulse waveform

Pulse is similar to the square waveform (in fact, the square is a
special case of the pulse waveform), but its cycles are not evenly split
(50-50). The kind of sound generated and its harmonics depend on the
pulse width. Some synthesizers offer different fixed pulse widths (for
instance, 10%, 25%, etc.), whereas others have a variable pulse width
that can be freely adjusted by the user, which makes it possible to
perform PWM (pulse width modulation), changing the timbre
dynamically as the pulse width is modified.

6. Noise


Fig. 9. Graph of a noise waveform

Most synthesizers also have a noise generator (implemented
through randomly-generated waveforms), which can be useful for
creating sound effects (such as the wind, or waves at the beach), or for
creating different percussion sounds. Sometimes they are able to
produce different kinds of noise (white noise, pink noise, etc.). White
noise has a constant amplitude throughout the entire frequency
spectrum, so it sounds rather “aggressive” (it’s what can be heard, or at
least what could be heard in the past, on TVs when no channel is
tuned). Pink noise, on the other hand, diminishes in amplitude at a rate
of -3 dB/octave, making it a bit “softer”. Other kinds of noise, less
common in synthesizers, are brown noise (-6 dB/octave), blue noise (+3
dB/octave), and violet noise (+6 dB/octave).
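The waveforms above can be sketched as simple functions of the oscillator's phase (a value cycling through 0..1 once per period). This is an illustrative formulation, not any particular oscillator's implementation:

```python
import math
import random

# Each function returns one sample for a phase value in [0, 1).
def sine(p):
    return math.sin(2 * math.pi * p)

def sawtooth(p):
    return 2.0 * p - 1.0                  # ramps from -1 up to +1

def triangle(p):
    return 4.0 * p - 1.0 if p < 0.5 else 3.0 - 4.0 * p

def pulse(p, width=0.5):
    return 1.0 if p < width else -1.0     # width=0.5 is the square wave

def noise(_p):
    return random.uniform(-1.0, 1.0)      # white-ish noise, phase ignored
```

Sweeping the `width` argument of `pulse` over time is exactly the PWM effect described above.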

The Filter

The filter or VCF (voltage-controlled filter) is a fundamental piece
of subtractive synthesis synths, and is actually the main reason why
this synthesis model is called “subtractive”, given that the main
function of filters is to lower or cut (subtract) the amplitude of certain
frequencies, modifying the harmonics of the original waveform and
changing the timbre. To get an idea of the effect a filter has on the
sound, consider the following example: suppose we make a continuous
“aaaaaaaaaaah” sound and, always keeping the same amplitude and
pitch, gradually close our mouth and lips until we are saying
“ooooooooooh”. Our vocal cords act here as an oscillator, constant
throughout the whole process, but our mouth, tongue and lips have
acted as a filter, modifying the sound’s timbre.


There are several different kinds of filters, with different
characteristics. The most common one, used in nearly all synthesizer
models, is the low-pass filter. This filter attenuates all frequencies that
are higher than the cutoff frequency, determined by the user, leaving
lower frequencies intact. The following pictures show the effect of the
filter:

Fig. 10. Original non-filtered waveform

Fig. 11. Action of a low-pass filter

The intensity of the attenuation of higher-frequency signals
depends on the filter slope. The most common slopes are -12 dB (also
called “two-pole”) and -24 dB (also called “four-pole”). A slope of -12 dB
means that the waveform’s amplitude diminishes at a rate of -12 dB per
octave. Hence, if for example the cutoff frequency is set at 1 KHz, at a
frequency of 2 KHz (an octave higher) the amplitude will be 12 decibels
lower than in the original waveform, and at a frequency of 4 KHz (two
octaves higher), 24 decibels lower. Other less common slopes are -6
dB, -18 dB and -36 dB.
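The attenuation figures follow directly from the slope expressed in dB per octave; a minimal, idealised sketch (real filters roll off gradually around the cutoff, and the function name is illustrative):

```python
import math

def attenuation_db(freq, cutoff, slope_db_per_octave=12):
    """Attenuation (in dB) of a frequency above the cutoff, for an
    idealised low-pass filter with the given slope."""
    if freq <= cutoff:
        return 0.0                        # below the cutoff: untouched
    octaves_above = math.log2(freq / cutoff)
    return slope_db_per_octave * octaves_above

print(attenuation_db(2000, 1000))       # 12.0 dB, one octave above
print(attenuation_db(4000, 1000))       # 24.0 dB, two octaves above
print(attenuation_db(4000, 1000, 24))   # 48.0 dB with a four-pole slope
```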


Most filters we can find in synthesizers are resonant filters.
Resonance is a parameter that allows regulating (typically boosting) the
amplitude of the frequencies close to the cutoff frequency, hence
offering greater possibilities in the configuration of the sound.

Fig. 12. Action of a low-pass resonant filter

Also, there are many other kinds of filters besides the low-pass
one, less common but still quite usual among synthesizers. Among the
more common ones, there’s the high-pass filter, which attenuates
frequencies below the cutoff frequency, leaving the higher ones intact;
the band-pass filter, which attenuates both higher and lower
frequencies, leaving only the ones close to the cutoff frequency; and the
notch filter, which attenuates only the frequencies close to the cutoff
frequency. Both the band-pass and the notch filter can be obtained by
means of a combination of a low-pass and a high-pass filter.

Fig. 13. Action of a high-pass filter


Fig. 14. Action of a band-pass filter

Fig. 15. Action of a notch filter
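As a toy illustration of these filter types, here is a one-pole low-pass filter, with a high-pass signal derived as the input minus the low-passed signal (an illustrative digital sketch, far simpler than a real VCF):

```python
class OnePoleLowPass:
    """A minimal digital low-pass filter (roughly -6 dB per octave)."""
    def __init__(self, alpha):
        self.alpha = alpha   # between 0 and 1; larger = higher cutoff
        self.state = 0.0
    def process(self, sample):
        # Move the output a fraction of the way towards the input:
        # fast changes (high frequencies) get smoothed away.
        self.state += self.alpha * (sample - self.state)
        return self.state

lp = OnePoleLowPass(0.5)
signal = [1.0] * 50                          # a constant (0 Hz) input
low = [lp.process(s) for s in signal]
high = [s - l for s, l in zip(signal, low)]  # high-pass = input - low-pass
```

Chaining a low-pass and a high-pass with different cutoffs yields a band-pass, and summing them with the band in between removed yields a notch, as the text notes.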

The Amplifier

The amplifier or VCA (voltage-controlled amplifier) is, as its
name suggests, a circuit that controls the amplitude (volume or
intensity) of the signal coming from the oscillator and modified by the
filter. In order to allow greater flexibility and expression, this
amplification is not linear or static, but can be dynamically controlled
through an adjustable envelope. An envelope is a structure that
determines the dynamic (through time) behaviour of an element;
therefore, there’s no conceptual link between the existence of an
envelope and the VCA, although, as a matter of fact, it can be said that
if a synthesizer has only one envelope, it will control the amplifier. As
we’ll see, there can also be envelopes for dynamically controlling the
filter or the oscillator.

The most common structure of the VCA envelope is that of four
segments, or ADSR (attack, decay, sustain and release), although there
are also others, with fewer or more segments.


Fig. 16. An ADSR envelope

The attack is the period between the instant a note is played (for
instance, by pressing a key on a keyboard) and the moment in which
the sound reaches its maximum amplitude. Decay is the period between
the end of the attack and the stabilisation of the amplitude level while
the note is being played (sustain). Sustain is the amplitude level of the
wave while the note lasts, once the attack and the decay periods have
passed. Finally, release is the period between the end of the note (for
example, when the key is released), and silence (zero amplitude).

By manipulating the envelope, it’s possible to emulate the
dynamic behaviour of acoustic instruments, or to create entirely new
dynamics. For instance, in the case of a clarinet we would have a fast
attack, no decay, a sustain at the maximum amplitude level and a fast
release:

Fig. 17. The ADSR envelope of a clarinet

In the case of a piano, we would have a fast attack, a slow
decay, no sustain properly speaking, and a fairly fast release:


Fig. 18. The ADSR envelope of a piano

In the case of a snare drum, the dynamics would only have the
attack and release portions, very fast in both cases (especially the
attack):

Fig. 19. The ADSR envelope of a snare drum

As stated before, it’s possible for a synth to have other envelope
generators for dynamically controlling the filter (dynamically changing
the cutoff frequency) or the oscillator (dynamically changing the
frequency or pitch).
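The ADSR scheme can be sketched as a function giving the amplitude at any instant of a note (a linear-segment idealisation; real envelopes are often exponential, and all names here are illustrative):

```python
def adsr_level(t, note_length, attack, decay, sustain, release):
    """Amplitude (0.0-1.0) at time t of a note held for note_length
    seconds. Attack, decay and release are durations in seconds;
    sustain is an amplitude level, not a duration. Assumes the note
    is held at least until the decay segment has finished."""
    if t < attack:                               # rising to maximum
        return t / attack
    if t < attack + decay:                       # falling to the sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_length:                          # key held down
        return sustain
    if t < note_length + release:                # key released, fading out
        return sustain * (1.0 - (t - note_length) / release)
    return 0.0                                   # silence

# A clarinet-like shape (fast attack, no decay, full sustain):
print(adsr_level(0.5, 1.0, 0.02, 0.0, 1.0, 0.05))  # 1.0
```

Setting a short decay and a sustain level of zero gives the percussive, piano- or snare-like shapes shown in the figures.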

The Low Frequency Oscillator

The LFO is, as its name suggests, an oscillator which, as such,
produces certain waveforms (like triangle, sawtooth or pulse), but at
very low frequencies (usually under 20 Hz). The reason for this is that
it’s not aimed at generating sound, but instead at interacting cyclically
with other elements of the synthesis, such as the oscillator, the filter or
the amplifier. Usually, it is routable (the user can decide where to route
it). The results will vary depending on the LFO frequency, the intensity
(amplitude), and, above all, the element that it affects. Hence, if it’s used


to control the main oscillator, it will serve to create a vibrato effect (cyclically altering the tuning of the note) when the LFO waveform is a sine or a triangle, or an alternation between two different notes (by means of a square waveform). If the synthesizer has a variable pulse width waveform, the LFO can be used for PWM (pulse width modulation), cyclically changing the timbre. When it's routed to the amplifier, it can generate a tremolo effect (cyclical alteration of the volume). Finally, when it's routed to the filter, it can create a wah-wah effect.
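The LFO-to-oscillator routing (vibrato) can be sketched as follows; all rates and depths are illustrative values, and the phase is accumulated sample by sample so the pitch change stays smooth:

```python
import math

def render_vibrato(duration=1.0, sr=8000, base_freq=440.0,
                   lfo_rate=6.0, depth_hz=4.0):
    """Render a sine tone with vibrato: an LFO cyclically shifts the
    oscillator's frequency by +/- depth_hz, lfo_rate times per second."""
    out, phase = [], 0.0
    for n in range(int(duration * sr)):
        t = n / sr
        # LFO output (a sine here) offsets the instantaneous frequency
        inst_freq = base_freq + depth_hz * math.sin(2 * math.pi * lfo_rate * t)
        phase += 2 * math.pi * inst_freq / sr
        out.append(math.sin(phase))
    return out
```

Routing the same LFO to a gain factor instead of the frequency would give tremolo, and to a cutoff frequency, the wah-wah effect mentioned above.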

4.1.1. Sophistications of the model

Although some synthesizer models fit very well into what we've called "the basic model of subtractive synthesis", most synths actually have more powerful or sophisticated synthesis schemes, leading to greater possibilities and versatility in sound creation. This greater sophistication follows two possible (non-exclusive) strategies: a) a greater number of elements in the synthesis scheme; and b) more interaction possibilities among the different synthesis elements. We'll briefly refer to both aspects.

1) It’s quite common to include a greater number of oscillators to


generate the sound. Most synthesizers have two, three or more
oscillators per voice. As every oscillator can generate different
waveforms at different frequencies, the sonical possibilities (the
resulting waveform of the combination of the oscillators) are much
greater. Most subtractive synthesis models use two oscillators (each
one with its own VCF and VCA), because it’s a pretty good compromise
between flexibility (it’s possible to create a great number of different
sounds with only two oscillators) and costs (more oscillators allow for
greater possibilities, but it also makes the product more expensive).

2) Another common strategy to improve the synthesis scheme is to use oscillators capable of producing more varied waveforms. Nevertheless, if we're faced with the dilemma of choosing between more oscillators or simply more waveforms (with the same number of oscillators), the first option is generally preferable. For that reason, it's not strange that most synths, both analog and digital, have oscillators capable of generating only a few waveforms (often they can't even produce all the waveforms we've seen), but have at least two oscillators.


3) Other elements can also be increased in number. For instance, a second LFO can be added, so that one is used, for example, to control the frequency of the oscillator while the second one controls the filter. There can also be more than one filter, or more than one ADSR envelope to control the different elements of the synthesis.

4) A key aspect of a synth's sound-shaping capabilities, even more important than the number of elements involved in the process, is the range of possibilities for interacting, linking or connecting all those different elements. For this reason, the capabilities of a synthesizer can be greatly enhanced if, instead of simply replicating the oscillator → filter → amplifier scheme (two oscillators, each one with its own filter and amplifier), we can link or connect those elements in different ways.

For example, the most straightforward way of combining the signals of two oscillators is simple mixing, in which the relative amplitude of each oscillator need not be the same but can be set by the user (for instance, 75% of the signal can come from the first oscillator and 25% from the second one). But there are other more interesting possibilities. One of them is oscillator synchronization (syncing). When oscillators are synchronized, one of them (called "the slave") is forced to restart its cycle whenever the other (called "the master") begins its cycle, as shown in the following picture of two synchronized sawtooth waves at different frequencies:


Fig. 20. Waveforms of two synchronized oscillators

Another even more interesting possibility is ring modulation. It consists in combining two waveforms (so at least two oscillators are needed), not by simple mixing but by "modulating" one waveform with the other, giving rise to a new waveform which is different from the two original ones and very rich in harmonics. Technically, ring modulation consists in multiplying the two original signals, producing a waveform made up of the sum and difference frequencies of all the original harmonics:

Fig. 21. Ring modulation of two sine waveforms


This waveform is rich in high-frequency harmonics which are absent from the original waveforms, so it's very suitable for creating "bell-like" and "metallic" sounds.
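The multiplication described above can be sketched in a few lines; the frequencies below are arbitrary examples:

```python
import math

def ring_mod(freq1, freq2, duration=0.01, sr=44100):
    """Ring-modulate two sine oscillators: the output is their
    sample-by-sample product, which contains the sum and difference
    frequencies (f1+f2 and f1-f2) instead of the original f1 and f2."""
    n = int(duration * sr)
    return [math.sin(2 * math.pi * freq1 * i / sr) *
            math.sin(2 * math.pi * freq2 * i / sr) for i in range(n)]

# The identity behind it: sin(a)*sin(b) = 0.5*(cos(a-b) - cos(a+b)),
# so ring-modulating 440 Hz with 300 Hz yields 140 Hz and 740 Hz.
```

Since the output frequencies are sums and differences of the inputs, they are generally not integer multiples of either input, which is exactly why the result sounds inharmonic and "metallic".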

Some synths, especially more modern ones, are very flexible when it comes to combining different elements in different synthesis "structures". For example, if we have two oscillators which can be ring-modulated, two filters (one per oscillator) and two LFOs per sound or patch, a possible structure is one in which the first filter shapes the signal of the first oscillator, the filtered signal reaches the amplifier and is ring-modulated by the signal of the second oscillator, and the result is then filtered again by the second filter and amplified by the second VCA, while one LFO affects the frequency of one oscillator and the second LFO controls the second filter:

Fig. 22. Advanced structure of subtractive synthesis

5) In recent decades it has become increasingly common to add, as a final stage of the synthesis process, the manipulation of the synthesis result through one or more effects units. Although, strictly speaking, the use of effects is not part of the synthesis process, because they are applied after the sound has already been generated and just before the audio signal is routed from the synth to the external amplifier and the speakers, they can greatly affect the final sound character. Among the most common effects are reverb (an emulation of the reverberation of sound waves in a certain space, such as a room, a concert hall or a cathedral) and chorus (multiplication of the signal at slightly different frequencies to create the illusion of an ensemble instead of a single instrument). Other quite common effects are echo or delay, flanging (variable application of small delays of 20 milliseconds or less, which creates a sensation of "motion"), overdrive, compression, distortion, etc. Moreover, every effect usually has some configurable settings, for even more flexibility. In addition, when there is more than one


effect unit, it’s usual that the user can set the way they are connected
(serial connection, parallel connection, or a combination of both).

Synthesizers that include effects units are usually digital, because effects require considerable processing power, although some effects, such as chorus, are classic and common among analog synths.

4.2. Additive Synthesis

As it’s easy to understand, when compared to subtractive


synthesis, additive synthesis operates through the opposite principle,
although results can be quite similar. Whereas, in subtractive synthesis,
the basic principle is the elimination of harmonics in order to obtain the
desired sound, in additive synthesis new harmonics are added to
configure the final sound. Here, instead of using the metaphor of the
sculptor who eliminates material from the marble block to obtain the
desired shape, the image would be more close to the potter who adds
more and more clay to make and shape his vase.

In previous sections we pointed out a couple of aspects which are now relevant: a) a sine wave has no harmonics at all; and b) every sound, whether naturally or artificially produced, can be reduced to a set or combination of sine waves at different frequencies and amplitudes. Sound "construction" in additive synthesis, therefore, consists in adding a number of sine waveforms at different amplitudes and frequencies in order to shape the timbre. Let's see an example: the creation of a sawtooth waveform from sine waves. Starting from a base frequency of 500 Hz, if we add a first harmonic (an octave higher, 500 Hz x 2 = 1 kHz, at lower amplitude) and a second harmonic (500 Hz x 3 = 1.5 kHz, at even lower amplitude), the resulting waveform begins to look like a sawtooth:

Fig. 23. Construction of a sawtooth wave using additive synthesis



If we use enough partials, the result will be a precise sawtooth waveform. In theory, it's possible to reproduce any sound using this technique, although an extremely long series of harmonics may be needed to exactly re-create a particular sound. On the other hand, this scheme has, as added difficulties, a higher programming complexity and less predictable results when compared to subtractive synthesis. In the latter, with a bit of practice, it's relatively easy and fast to predict how modifying a parameter (the intensity of the filter, the cutoff frequency, the oscillator waveform, etc.) will affect the final sound, and it's possible to try to construct a certain sound without previously knowing the structure of its harmonics. In additive synthesis, though, if our aim is to reproduce a specific sound, we first have to know its harmonic structure. This is probably one of the main reasons why quite a limited number of synthesizers use additive synthesis. Quite surprisingly, there are remarkable predecessors that can be considered the first additive synthesis instruments: organs. In a pipe organ, like those we can find in some churches, the air that circulates through each pipe produces an audible vibration, similar to the sound of a flute. Using the stops, the player can control the pipes through which the air circulates, changing the harmonics and modifying the timbre. In an electric organ such as the Hammond, created in the 1930s as an alternative to pipe organs (heavy, bulky and fragile), the timbre is built up from several electric generators (tonewheels) which produce sine waveforms at different frequencies (9 per note). The amplitude or intensity of each wave (harmonic) is controlled by potentiometers in the shape of drawbars, which allow the player to select from a minimum value of 0 (off) up to 8 (maximum amplitude level).
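The sawtooth construction described above can be sketched directly from the sawtooth's Fourier series, where the k-th partial sits at k times the base frequency with amplitude 1/k (so each added harmonic is quieter than the last, as in the 500 Hz example):

```python
import math

def additive_saw(t, base_freq=500.0, partials=20):
    """Approximate a sawtooth by summing sine partials: the k-th
    harmonic at k * base_freq with amplitude 1/k (the Fourier series
    of the sawtooth, scaled by 2/pi so the ideal peak is 1)."""
    return (2 / math.pi) * sum(
        math.sin(2 * math.pi * k * base_freq * t) / k
        for k in range(1, partials + 1))
```

With 3 partials the wave merely "begins to look like" a sawtooth, as in Fig. 23; with dozens of partials the ramp becomes sharp, illustrating why exact re-creation of an arbitrary timbre may need a very long harmonic series.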

Among modern synthesizers, additive synthesis is used by models such as Kawai's K5 or K5000 series, although the latter combines additive synthesis (with up to 64 partials) with wavetable synthesis. Moreover, it has different kinds of filters, LFOs and other elements that allow even more powerful sound editing.

4.3. Frequency modulation synthesis (FM)

Frequency modulation synthesis (usually called simply "FM") has a quite curious history. It was discovered almost by chance in the 60s by John Chowning, a researcher at Stanford University, while he was working on vibrato techniques. Later, in the mid 70s, when Chowning


and his collaborators already had an advanced FM synthesis model, suitable for commercial exploitation by synthesizer makers, they found that the main American synth companies weren't interested in the technology and didn't see its potential. In a rather desperate move, Chowning offered the technology to the Japanese brand Yamaha, who were very interested and signed a license agreement for the exclusive exploitation of the patent (which was owned by Stanford University). In the following years, Yamaha used this technology in countless products, both professional synths and home keyboards, with huge commercial success, which brought great profits to Stanford University: until its expiration in 1995, it was the most profitable patent in the whole history of the institution. Nowadays, there are several synthesizer models that include FM among their synthesis methods.

The theoretical operation of frequency modulation is actually very similar to vibrato, hence the context in which it was discovered. Vibrato consists in a cyclical variation of the waveform frequency, which (usually slightly) changes its pitch around a certain base frequency at a certain speed, creating that sensation of "vibration". Let's suppose that a violinist plays an A at a frequency of 440.0 Hz. When she applies vibrato, she does so by slightly and cyclically moving the finger that's pressing the string (while that string is being bowed by the other hand). This movement causes the vibrating length of the string, which is what determines the pitch (frequency), to change slightly, making it cyclically a bit longer and shorter, hence lowering and raising the pitch (frequency) of the sound.

In a synthesizer, the vibrato effect can be achieved using a low frequency oscillator (LFO) which affects the signal of the main oscillator (the one that produces the audible frequency). The wave generated by the main oscillator is usually called the carrier, whereas the LFO signal is called the modulator. The carrier frequency is fixed at some value (for instance, a sine waveform at 440.0 Hz). The modulator's signal (usually a sine or a triangle wave when trying to produce a vibrato) is applied to the carrier's, modifying the latter in a cyclical manner, by the amount set by the amplitude (intensity) of the modulator and at the modulator's frequency (cycles per second). By playing with the amplitude and frequency parameters of the modulator, the user can control the depth and the speed of the vibrato effect.


The crucial point for understanding FM synthesis is the following: normally, the frequencies used to create vibrato are very low (between 10 and 15 Hz, below the audible frequency range), because only then is the effect perceived as such. But what happens when the modulator frequency is increased and falls within the range of audible frequencies? The waveform resulting from the interaction between the carrier and the modulator is (or can be) a totally different one, with new waveforms and harmonics (in sum, new sounds) totally different from the originals. The effect of the modulation depends, as is easy to figure out, on the modulator's amplitude (here called the "modulation index") and on the frequency applied to the carrier. In sum, FM synthesis is nothing but a very fast vibrato.

Fig. 24. Basic scheme of the frequency modulation synthesis

FM synthesis allows the generation of sounds impossible to create with the (then common) analog subtractive synths, although, in contrast, it's virtually impossible to emulate the effect of a filter using FM synthesis. Nevertheless, this kind of synthesis is known to be hard to program and to give quite unpredictable results, and it really is so unless one has a firm grasp of the mathematical foundations behind frequency modulation (the results of the interactions between different waves through FM synthesis can be determined using Bessel functions). As a general guideline, the carrier and modulator frequencies have to be in integer ratios to create harmonic sounds; otherwise, the resulting sounds are quite dissonant and "metallic".
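A two-operator FM voice can be sketched in one expression, written in the phase-modulation form commonly used in digital FM implementations. All frequencies and index values below are arbitrary illustrations:

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, index):
    """One sample of two-operator FM: the modulator's sine output is
    added to the carrier's phase.  'index' is the modulation index;
    larger values spread energy into more sidebands (governed by
    Bessel functions) and brighten the timbre."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * modulator_hz * t))

# Harmonic spectrum: modulator frequency is a multiple of the carrier's.
harmonic = [fm_sample(n / 8000, 200.0, 400.0, 2.0) for n in range(8000)]
# Inharmonic, "metallic" spectrum: a non-integer frequency ratio.
metallic = [fm_sample(n / 8000, 200.0, 517.0, 2.0) for n in range(8000)]
```

With index 0 the modulator has no effect and the carrier is a plain sine; raising the index is what turns the "very fast vibrato" into a new timbre.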

Although nothing prevents using frequency modulation with waveforms generated by analog oscillators, those oscillators are usually neither precise nor stable enough to use this type of synthesis satisfactorily, as it's very sensitive to small variations. This is the main reason why FM synthesis has been used almost exclusively (and, in Yamaha's case, exclusively) in digital synthesizers, as in the successful DX series of the early and mid 80s.

In Yamaha's nomenclature, the synthesis is carried out through different operators, which can interact in different ways (routings or algorithms). Yamaha developed synthesizers based on two, four and six operators. An operator is a "block" composed of an oscillator (which only produces sine waveforms) and its corresponding envelopes and amplifiers. Each oscillator (operator) can behave as a carrier or as a modulator, and the carriers' signals (if there's more than one) are mixed in the final sound. Moreover, nothing prevents a modulator from modulating another modulator. For example, in a six-operator scheme, operators 1 and 2 can act as carriers, whereas operator 3 modulates operator 1, and operator 4 modulates operator 5, which in turn modulates operator 6, which in turn modulates operator 2 (a carrier). The signals of operators 1 and 2 are then mixed and sent to the audio output of the synthesizer. This gives us an idea of how powerful and flexible FM synthesis can be.
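The six-operator routing just described can be sketched in a few lines. All frequencies and modulation indices below are arbitrary illustrations, not values from any Yamaha algorithm:

```python
import math

def op(t, freq, index=0.0, mod=0.0):
    """One FM 'operator': a sine oscillator whose phase is offset by
    the (already computed) output of its modulator."""
    return math.sin(2 * math.pi * freq * t + index * mod)

def six_op_sample(t):
    """Op 3 modulates carrier op 1; the chain op 4 -> op 5 -> op 6
    modulates carrier op 2; the two carriers are mixed."""
    m3 = op(t, 440.0)                    # modulator for carrier 1
    c1 = op(t, 220.0, 1.5, m3)           # carrier 1
    m4 = op(t, 110.0)                    # start of the 4 -> 5 -> 6 chain
    m5 = op(t, 330.0, 2.0, m4)
    m6 = op(t, 660.0, 1.0, m5)
    c2 = op(t, 220.0, 1.2, m6)           # carrier 2
    return 0.5 * (c1 + c2)               # mix of the two carriers
```

Rewiring which operators feed which, and with what indices, is exactly what selecting a different "algorithm" does on a DX-style synth.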

4.4. Wavetable synthesis

In their beginnings, synthesizers were conceived as devices that opened the door to a whole new world of sound textures, giving access to sounds never heard before and allowing experimentation with new acoustic possibilities. Soon after, though, it became clear that, with proper programming, it was possible to emulate, with a higher or lower degree of precision, the timbre and dynamics of some acoustic instruments, such as the acoustic bass, the clarinet or the xylophone (although the emulations were never so precise that the sound of the real instrument couldn't be quite easily distinguished from the synthesized one).

Nevertheless, the emulation possibilities of the classic oscillator-based subtractive synthesizers are quite limited: if, for instance, we look at the waveforms of a piano on an oscilloscope, we'll notice that they are very complex and that their harmonics cannot be re-created by those synths with an acceptable level of precision. The evolution and development of digital technology would radically change this, allowing, for the first time, an electronic musical instrument to sound very close to the real thing (its acoustic counterpart). Actually, the basic principle is quite simple: the easiest yet most reliable way to replicate a sound is to record the original sound source and play the recording back later. If we record the sound of a piano with a microphone onto a tape recorder and then play it back, we'll


obtain the same sound (although with the limitations of that technology regarding sound quality, which implies a loss of fidelity compared to the original sound source). If, on the other hand, we replay the recording at different speeds, we can control the pitch (the faster the speed, the higher the pitch; the slower the speed, the lower the pitch).
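Variable-speed playback can be sketched as reading the recording with a fractional step. This is the crudest possible (nearest-neighbour) resampling, shown only to make the speed/pitch relation concrete:

```python
def play_at_speed(sample, speed):
    """Replay a recorded sample (a list of values) at a different
    speed: the read position advances by 'speed' per output sample,
    so speed 2.0 halves the length and raises the pitch one octave,
    while speed 0.5 doubles the length and lowers it one octave."""
    out, pos = [], 0.0
    while pos < len(sample):
        out.append(sample[int(pos)])   # nearest-neighbour read
        pos += speed
    return out
```

Real samplers interpolate between neighbouring values instead of truncating, but the principle is the same one a tape machine exploits mechanically.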

Wavetable synthesis is based on the very same principle: instead of oscillators generating simple waveforms, an electronic circuit reproduces certain digitized sound samples at different rates, depending on the notes to be played. This way, very high levels of realism can be reached, which were previously impossible to achieve. But not everything is good news, as we'll now see.

An important drawback of this technology is that it needs lots of memory to store the sound samples. Nowadays this is hardly a problem, but it was a very serious one in the early days of wavetable digital synthesizers (the mid and late 80s). At that time, memory was very expensive (and also very slow compared with current technology), which meant that synths usually had only a few samples, usually short and low-quality ones, in order to cut costs (or at least not to inflate the final price). In some cases, as in Kawai's K1 or in Roland's D-series synths, only the attack portion of the sound (and maybe the decay) was sampled, with "traditional" synthesis used for the rest of the sound (sustain, release), or there was heavy use of loops, in which the sustain part was made of the continuous repetition of some portion of the same sampled waveform. Nor was sample quality very high at first. The amount of memory used by a sound sample depends directly on three main factors: resolution, sampling frequency and length. The first two determine the overall sound quality. The first digital wavetable synthesizers had 8-bit resolution samples (such as the Ensoniq Mirage) or 12-bit ones (such as Yamaha's SY series), at quite low sampling frequencies (usually not over 22 kHz). Resolution, sampling frequency and sample length have increased over the years thanks to the evolution and price drop of electronic components (especially memory).
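The three factors just listed translate directly into bytes. A small worked example (the figures chosen here are illustrative, not taken from any specific machine):

```python
def sample_bytes(sampling_rate_hz, bit_depth, seconds, channels=1):
    """Memory needed to store an uncompressed sample, in bytes:
    rate (samples/s) x depth (bytes/sample) x length x channels."""
    return int(sampling_rate_hz * (bit_depth / 8) * seconds * channels)

# A 2-second mono sample at 22 kHz / 8 bits (early wavetable machines):
early = sample_bytes(22000, 8, 2)      # 44,000 bytes (~43 KB)
# The same length at CD quality, 44.1 kHz / 16 bits:
cd = sample_bytes(44100, 16, 2)        # 176,400 bytes (~172 KB)
```

Four times the memory for one short sample makes it obvious why 80s designers shortened, looped and down-sampled so aggressively.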

Another drawback of this technology at first, which has nowadays been overcome, is that a digitized waveform is much more difficult to manipulate than the signal of a simple oscillator, as much greater calculation capabilities are needed. For that reason, some early wavetable models had very simple synthesis structures, so they


weren't able to alter the sound in any significant way, contrary to the analog or digital synths based on subtractive synthesis. It was quite common in the 80s for them not even to have a filter (that's the case, for instance, of the Kawai K1 or Roland's U-series), as that would have required heavy real-time calculation, so the only possible "modifications" were adjusting the vibrato (waveform frequency) and the amplifier's envelope (ADSR). Later, thanks to the increase in speed and capacity of microprocessors and digital components in general, all the usual elements of subtractive synthesis were added (filters, LFOs, multiple simultaneous oscillators, etc.), so nowadays wavetable synthesizers can be seen as the "natural" evolution of the old subtractive synths, in which the most relevant difference is the ability to use several hundred or even thousands of complex waveforms instead of a small set of basic ones.

Another negative aspect of this kind of technology is that the sound can be quite static and artificial-sounding. If we consider an acoustic instrument, such as a violin, we'll see that its timbral and expressive capabilities are huge, depending on the playing technique (legato, marcato, etc.), intensity (piano, forte, fortissimo, etc.), speed (slower or faster), and so on. A digitized sample, on the other hand, is simply a recording that always sounds exactly the same. If we, for instance, take a sample of a piano key (the central octave 'C', for example) and play it at different frequencies in order to build the whole scale (from lower to higher octaves), we'll notice that, as we get further from the original key, the sound becomes more artificial and less close to the original instrument's sound (the central octave 'C' will sound quite like a real piano, but a 'C' two octaves lower will be very different, because the harmonics of the original instrument change). This consequently forces us to take several samples of the same instrument at different pitch intervals (with the corresponding use of memory), in order to achieve a certain level of sound consistency throughout the whole scale. But that's not enough. In most acoustic instruments, differences in amplitude (intensity) also imply timbral changes. The sound of a trumpet, for example, is different when it's played softly (milder) and when it's played at full volume (brighter). If we only have a single sample, all we can do is play it at different volume levels, but we won't be able to emulate the real instrument's dynamics. To try to overcome this shortcoming and produce a more convincing sound, synthesizers use different techniques. One of them is the use of a filter (if the synth has one). With a low-pass filter, when the sound is played softly, the cutoff frequency is lower, cutting the higher frequencies and

making the sound milder, whereas as the intensity gets higher, so does the cutoff frequency, and the sound becomes brighter. Another technique (not incompatible with the former) is velocity switching. This consists in obtaining different samples of the original sound source at different intensity (amplitude) levels (for instance, one taken when the instrument is played piano and the other when it's played forte), and using one or the other depending on the intensity at which the synth is being played: when the intensity is below a certain value, the piano sample will be used, and for intensities above that value, the forte sample will be used. The most basic form of velocity switching uses only two different levels, but most current synthesizers can offer three or more, which allows for subtler and smoother changes. If we also add a filter, very smooth and gradual transitions can be achieved, leading to a final result which is very close to the original acoustic instrument. Falling prices of memory and technology have led synthesizers to gradually use more samples, which are also longer and of better quality, resulting in very remarkable improvements in realism and sound quality. As an example, a modern synth such as the Roland Fantom X (released in 2004) uses, for the acoustic piano patch, individual samples of each of the 88 piano keys at four different levels (piano, mezzoforte, forte and fortissimo), for a grand total of over 700 samples for this single patch. This, combined with an advanced synthesis structure, makes it hard to distinguish from the real thing, even for a musician.
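Velocity switching amounts to a threshold lookup. A minimal sketch, assuming MIDI-style velocities (0-127); the layer names and thresholds are hypothetical:

```python
def pick_sample(velocity, layers):
    """Velocity switching: choose the sample layer recorded at the
    dynamic level matching how hard the key was struck.  'layers'
    maps upper velocity bounds (0-127) to sample names."""
    for threshold, name in sorted(layers.items()):
        if velocity <= threshold:
            return name
    # velocities above the highest threshold fall back to the top layer
    return sorted(layers.items())[-1][1]

# Four layers, as in the Fantom X piano example described above.
piano_layers = {40: "piano", 80: "mezzoforte", 110: "forte", 127: "fortissimo"}
```

In a real instrument this switch is combined with a velocity-tracked filter so that the jump between layers is masked by a gradual brightness change.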

4.5. Physical modelling synthesis

Although wavetable synthesis, if suitably sophisticated and evolved (multiple high-quality samples, filters, velocity switching, effects, and so on), can offer very good results and pretty convincing emulations of acoustic instruments, it is nevertheless tied to some important limitations. At its base are the samples, which are recordings, and the possibilities of manipulating recordings to allow for greater expression are quite limited. If we take a saxophone, for example, we'll notice that the timbral expression possibilities it offers are very wide, as the sound depends on multiple factors, such as intensity (amplitude), the pressure of the lips on the mouthpiece, the throat formant, the tongue's position, etc. All those aspects are part of the saxophone's playing technique, and cannot be properly re-created with only a handful of sound samples. As a result, synth performances using a saxophone timbre usually sound quite artificial and not very convincing.


Those shortcomings can be avoided by using a sound synthesis method known as physical modelling. This method consists in a mathematical representation of the instrument to be reproduced, in order to emulate the physical behaviour of its sound waves from a set of variables (or from their mathematical representation, to be more exact), such as the size, the shape, the construction materials, the playing technique (the vibration of a string, a percussive strike, etc.), and other aspects. In sum, it is sound synthesis through a set of equations and algorithms which emulate the physical behaviour of the sound source. This way, it's possible to create precise simulations of musical instruments (or at least more precise than those created through other synthesis methods), whether real or invented ones (because we can play with the parameters and modify the "shape", "size", "materials" and so on). Moreover, the results can be finely tuned to make them even more convincing, introducing slight imperfections like those of real acoustic instruments. This is why the technology is also usually known as "virtual acoustic synthesis".
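To make the idea concrete, here is the Karplus-Strong plucked string, one of the simplest and best-known physical models. It is offered purely as an illustration of the principle; commercial virtual-acoustic synths use far more elaborate waveguide models:

```python
import random

def karplus_strong(freq, sr=44100, duration=0.5, damping=0.995):
    """Karplus-Strong plucked string: a delay line one period long is
    seeded with noise (the 'pluck'); on every pass each sample is
    averaged with its neighbour and slightly damped, mimicking the
    energy loss of a vibrating string."""
    period = int(sr / freq)                       # delay line = one period
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for n in range(int(sr * duration)):
        i = n % period
        out.append(buf[i])
        # averaging adjacent samples low-passes the loop: high
        # harmonics die first, as in a real string
        buf[i] = damping * 0.5 * (buf[i] + buf[(i + 1) % period])
    return out
```

Note that the "instrument" here is defined by physical-style parameters (loop length, loss per cycle), not by a recording: changing the damping is like changing the string material, which is exactly the kind of manipulation sampling cannot offer.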

As we can easily imagine, this synthesis technology requires huge processing power, which explains why it wasn't implemented in commercial synthesizers until the mid 90s, and even then most early models were strictly monophonic or duophonic, in an age when 32- and 64-voice polyphonic wavetable synthesizers were very common. As had happened in the 80s with FM synthesis, Yamaha was again the first to commercially offer the new technology, thanks to an agreement signed in 1989 with Stanford University. The first model launched to the market was the Yamaha VL-1, in 1994.

A surprising aspect of this synthesis technology is that it has been very successful not only in the emulation of acoustic instruments, but also in the virtual re-creation of the classic analog subtractive synths. One of the consequences of the arrival of digital technology was that its greater precision and stability made the "sound character" of the old analog synths disappear, given that their sound was greatly influenced by the imperfections of their technology (inability to generate totally precise frequencies, instability, etc.). Today there are lots of synthesizers that use physical modelling to reproduce the features and behaviour of analog technology, in order to create, by means of digital technology (with its advantages in terms of stability and cost), the closest possible sound to the analog classics.


5. SOME REMARKABLE MODELS

Once the main categories of synths and synthesis technologies have been introduced, in the remaining part of this book we'll look in some detail at some of the most remarkable (for different reasons) synthesizer models. The selection does not follow a single criterion: in some cases, it is because the model is quite representative of a period or of a synthesis type; in others, because of its commercial success; in others, because it embodied a relevant technological advance; etc. It can also be seen as a very generic overview of synthesizer evolution over the last decades. The specific models that we'll look at are the following: 1) Minimoog (Moog Music, 1970); 2) Prophet 5 (Sequential Circuits, 1978); 3) DX-7 (Yamaha, 1983); 4) D-50 (Roland, 1987); 5) M1 (Korg, 1988); 6) VL-1 (Yamaha, 1994); 7) Fantom-X (Roland, 2004); and 8) the SID (MOS Technology, 1981).

5.1. The Minimoog (Moog Music, 1970)

Fig. 25. The Moog Minimoog, model “D”. Image obtained from Wikipedia
(http://en.wikipedia.org)

The Minimoog was a very innovative synthesizer in many respects, besides being one of the very first "commercial successes" in the synth world, with a total production (1970-1982) of over 12,000

units, a very significant figure considering the age and the state of the synthesizer market at that time.

Surprisingly, its launch was to a large degree the result of a complicated and problematic situation at Moog Music, the company founded by Bob Moog, one of the pioneering engineers in the world of electronic musical instruments. Towards the end of the 60s, nearly all synthesizers were modular, and consisted of a keyboard connected to a set of "modules" (boxes with potentiometers, switches, connectors and so on) which were interconnected by cables (an image quite similar to the old telephone switchboards) to create the sound. Modular synthesizers were very powerful and flexible, because there were nearly no limits to the ways they could be linked, but they were also bulky, heavy and expensive, so very few units were sold, usually on demand. In 1969 the sales of Bob Moog's company had fallen alarmingly, and the company was close to bankruptcy. A quick way out of the crisis was needed, and the idea which ultimately proved most successful was to create an integrated (not modular) synthesizer which was compact, portable and cheap enough, while remaining flexible and easy to use. So, in less than a year from the first prototypes to serial production, the Minimoog was created. It was released at US$ 1,495 and soon became a great success among musicians, because of its sound, ease of use and portability. Although it's true that, due to the integration and the fixed connections between the different synthesis components (oscillators, filter, amplifier), it offered fewer possibilities than the older modular models, the ease of use, the remarkable stability of the oscillators (for its time) and its sound warmth were the key aspects of its success. Another remarkable innovation was the introduction of the pitch bend wheel (which allows the player to manually control the note pitch during performance, or to make vibrato) and the modulation wheel, both at the left of the keyboard. The success of these additions was so great that since then nearly all manufacturers have included these controllers in their models.

From the technological point of view, the Minimoog is a classic
example of an analog subtractive synthesizer. It's monophonic
and monotimbral. It has three oscillators (VCOs), of which the third
can act as a "normal" oscillator or as an LFO. The oscillators can
produce triangle, sawtooth, square and two different fixed pulse
waveforms. It also has a white/pink noise generator and an external
audio line input. All audio sources (oscillators, noise, and line) have
independent amplitude (intensity) levels.

The signal mix then goes through a resonant -24 dB/octave low-
pass filter (VCF), and then to the amplifier (VCA). Both the filter and the
amplifier have their own envelope generators, with the peculiarity that
they are ADSD instead of ADSR, because the "decay" parameter also
controls the release value.
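The ADSD idea is easy to illustrate in code. The sketch below is not Moog's circuit, just an illustration with linear segments and invented parameter names: note how the same `decay` time is reused for the release phase after the key is let go.

```python
def adsd_envelope(t, attack, decay, sustain, gate_time):
    """Envelope level (0..1) at time t, with linear segments.

    attack/decay are in seconds, sustain is a 0..1 level; after
    gate_time the key is released and, ADSD-style, the level falls
    back over the same `decay` time (assumes gate_time > attack+decay).
    """
    if t < attack:                        # rising segment
        return t / attack
    if t < attack + decay:                # fall from peak to sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < gate_time:                     # key held: stay at sustain level
        return sustain
    frac = (t - gate_time) / decay        # release reuses the decay time
    return max(0.0, sustain * (1.0 - frac))
```

With `attack=0.1`, `decay=0.2`, `sustain=0.5` and the key released at `t=1.0`, the level peaks at 1.0, holds at 0.5 while the key is down, and fades to zero over the following 0.2 seconds.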

Another remarkable feature is that the third oscillator and the noise
generator can be routed to the oscillator and/or filter inputs. This
allows the third oscillator to be used as an LFO, but since it can also
produce frequencies at audible levels (as can the noise generator), the
Minimoog is able to perform, to some degree, FM synthesis. The
amplitude of the modulation produced by the third oscillator or the noise
generator is controlled by the modulation wheel at the left side of the
keyboard, next to the pitch bend wheel.

In recent years, this model has appreciated considerably, and
large sums of money are paid for it on the second-hand market. But
besides historical reasons or collectors' interest, this model is valued
among musicians for its sound. Although it's hard to pinpoint the
reasons for its quite unique sound, probably the lack of precision
inherent in analog technology, and also the electrical interference from
other components (such as the power supply), which prevent the
oscillators from exactly synchronizing their frequencies, are behind the
"fat" and "live" sound of the Minimoog.

5.2. The Prophet 5 (Sequential Circuits, 1978)


Fig. 26. The Sequential Circuits Prophet 5. Image obtained from Wikipedia
(http://en.wikipedia.org)

This synthesizer stands out for several reasons; among them,
that of being one of the first polyphonic synthesizer models (it has a
polyphony of 5 voices), and, above all, that of being the first synth with
programmable memory, which was an important step in the history of
these instruments. Although the sound generation technology is still
analog, the board is digitally controlled by a microprocessor, which,
among other things, allows different sound configurations to be stored
in memory and recalled at the touch of a button, with no need to
manually set the controls every time the user wants to change the
sound. That way it's possible, for instance, to store the settings for a
synthetic strings timbre, or brass, or bells, etc., and recall them instantly
when needed. This model had a total of 40 different memory registers
(factory pre-programmed but modifiable by the user). Moreover, the
digital technology also allowed better control of the voltages of the
different components (oscillators, filters, and so on), giving the
instrument more precise tuning and better stability.

Nevertheless, the synthesis structure sticks to the classic
scheme of analog subtractive synthesis. It's a monotimbral and
polyphonic (5-voice) synthesizer. It has a total of 10 oscillators (two per
voice, of which the second offers greater flexibility). The first
oscillator can generate a sawtooth or a variable pulse waveform,
whereas the second can also produce a triangle wave and operate at
low frequency (4 to 10 Hz). The oscillators can be synchronized
(oscillator sync), and their signals mixed in the proportions set by the
user. Moreover, it has a noise generator.

The filter or VCF is a classic -24 dB/octave low-pass filter, with
resonance and adjustable cutoff frequency. It has a two-segment (attack
and decay) envelope generator. The amplifier or VCA, on the other hand,
has an ADSR-type four-segment envelope generator. There is also an
LFO, with frequencies from 0.01 Hz to 20 Hz and three waveforms:
triangle, sawtooth and pulse.

From the point of view of its synthesis capabilities, the most
interesting aspect of the Prophet 5 is what its control panel labels
"Poly-Mod", which allows anything from simple pulse width sweeps
(PWM) or filter sweeps to ring modulation or basic FM synthesis.
The "Poly-Mod" routes the output of the filter envelope and of each
voice's second oscillator to the following three destinations
(individually or in any combination): the frequency (pitch) of the first
oscillator, the pulse width, or the cutoff frequency of the filter.
As the second oscillator is not limited to behaving as an LFO, but can
also produce frequencies in the audible range, it's possible to perform
FM synthesis with the Prophet 5.
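Ring modulation, one of the effects reachable through these routings, is easy to illustrate in code: it is simply a sample-by-sample multiplication of two signals, which leaves only the sum and difference frequencies of the inputs. A minimal sketch (the function name and sample rate are illustrative, not anything from the Prophet 5 itself):

```python
import math

def ring_modulate(freq_a, freq_b, duration=0.01, sample_rate=48000):
    """Multiply two sine waves sample by sample (classic ring modulation).

    The product of two sines contains only the sum and the difference
    frequencies: sin(a)*sin(b) = 0.5*(cos(a-b) - cos(a+b)).
    """
    n = int(duration * sample_rate)
    return [
        math.sin(2 * math.pi * freq_a * i / sample_rate)
        * math.sin(2 * math.pi * freq_b * i / sample_rate)
        for i in range(n)
    ]
```

Feeding in 440 Hz and 300 Hz, for instance, yields a signal containing only 140 Hz and 740 Hz, which is why ring modulation is so good at metallic, bell-like timbres.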

It's a synthesizer which achieved quite relevant success in its
time, and it's still highly valued by musicians and collectors today.
Despite its success, however, Sequential Circuits had a rather short
history (barely 10 years, from 1977 to 1987), as later models were not
as successful and sales kept falling. In 1987 the company was acquired
by Yamaha, which obtained all the rights to the "Prophet", "Sequential"
and "Sequential Circuits" trademarks, but has not launched any product
line under those names.

5.3. The DX-7 (Yamaha, 1983)

Fig. 27. The Yamaha DX-7. Image obtained from Wikipedia (http://en.wikipedia.org)

The Yamaha DX-7 is, like all the other models included here, a
synthesizer that stands out for several reasons. Among them, we can
emphasize that it's one of the first totally digital synths, that it has a
(then) generous polyphony of 16 voices, that it's based on an advanced
FM synthesis model as its only sound source, that it was one of the first
models to implement MIDI (the then new standard of communication
between electronic musical instruments, which is still widely used), and,
last but not least, its huge commercial success in the synthesizer
market. Its sound capabilities and moderate price (around US$ 2,000)
made it an immediate success, and nearly all bands and musicians who
used synthesizers in the mid 80s had one. It's estimated that over
300,000 units were sold in total, an astounding figure in the world of
synthesizers. Although Yamaha launched many other models based on
FM synthesis (among them, the rest of the DX series), and many of
them were cheaper (for instance, the DX-9, the DX-11, the DX-21, the
DX-27 or the DX-100), all of them fell far short of the success of the
DX-7, mainly because the more economical models used a 4-operator
FM synthesis scheme, whereas the DX-7 used a 6-operator scheme,
which made the latter a much more powerful and flexible machine.
Yamaha also released two higher-end models, the DX-5 and the DX-1,
but very few units were sold.

From a technological point of view, it's a totally digital
synthesizer, monotimbral and 16-voice polyphonic, with an internal
memory of 32 registers (expandable with optional cartridges). But the
most remarkable aspect is how innovative it was in the synthesis field,
using FM as its only sound generation method, with a much higher level
of flexibility and sophistication than anything seen before. As we saw
earlier, some older analog synths were also capable of some kind of
frequency modulation. Nevertheless, to be fully satisfactory and offer its
full potential, this synthesis method needed to be based on digital
technology, with a more evolved synthesis structure. The DX-7 fully met
those requirements, as it included 6 operators and 32 different
algorithms. Thanks to that, it excelled at certain kinds of sounds, such
as emulations of electric pianos, bells and all kinds of "percussive" and
"metallic" sounds, and it allowed the creation of new sounds never
heard before on traditional subtractive synthesizers.

The different synthesis "blocks" of Yamaha's DX series are the
operators. Each operator is composed of an oscillator which only
generates sine waveforms, an amplifier and an 8-stage envelope
generator which dynamically controls both the amplitude and the
frequency (pitch). The operators can interact through different
connection schemes called algorithms. Each algorithm determines
which operator or operators act as carriers (there will always be at least
one carrier), which ones act as modulators (it may be the case that no
operator acts as a modulator, because all are carriers), and how they
are linked. Specifically, the DX-7 has 32 different algorithms, among
which we find the following ones, as an example:


Fig. 28. Some of the algorithms of the Yamaha DX-7

In each of the schemes above, the operator(s) located at the
bottom are the carriers, whereas the rest (if any) are the modulators.
If we look at algorithm number 32, we'll see that all operators are
carriers, and there are no modulators. The result, therefore, is not FM
synthesis properly speaking, but additive synthesis (remember that the
oscillators only produce sine waves). The synthesis is completed by an
LFO that can generate sine, square, triangle, two different sawtooth and
random waveforms.
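The carrier/modulator relationship can be sketched in a few lines. The toy two-operator "algorithm" below (one sine modulator feeding one sine carrier) uses phase modulation, which is how Yamaha's FM is usually described as being implemented; the function and parameter names are invented for illustration and omit the per-operator envelopes.

```python
import math

def fm_sample(t, carrier_freq, mod_freq, mod_index):
    """One sample of a simple 2-operator FM patch (phase modulation):
    the modulator's output is added to the carrier's phase, and
    mod_index controls how bright/complex the spectrum becomes."""
    modulator = math.sin(2 * math.pi * mod_freq * t)
    return math.sin(2 * math.pi * carrier_freq * t + mod_index * modulator)
```

With `mod_index = 0` the modulator is silent and the carrier degenerates into a plain sine, which is exactly the building block of the all-carrier, additive case such as algorithm 32; raising the index adds sidebands around the carrier frequency.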

As already pointed out, the DX-7 was also one of the first
synthesizer models to implement the MIDI interface (although the very
first one was the Sequential Circuits Prophet 600, also from 1983).
'MIDI' stands for Musical Instrument Digital Interface, and since its
creation it has become the communications standard among electronic
musical instruments. MIDI allows two (or more) instruments to be
connected and one used to control the other (for instance, using the
keyboard of the first synth to play the sound of the second). It also
makes it possible, for example, to use a sequencer or a personal
computer equipped with a MIDI interface to control one or more musical
instruments, determining which notes each instrument plays and when
(among many other parameters). Thanks to MIDI, there have been
many synthesizers in rack module format (with no keyboard), designed
mainly for studio use rather than live performance, and controlled by
other keyboards or by a computer. But MIDI has gone far beyond the
realm of professional synthesizers and is nowadays a common element
not only in home keyboards, but also in some electric guitars and
electronic wind instruments. Nevertheless, the MIDI implementation of
the DX-7 is quite limited, because the synth was released before the
MIDI specification was completed, and many functions weren't included
(for instance, it always transmits on channel 1, whereas the standard
has 16 different channels). This makes the DX-7 a bad choice as a
controller keyboard. In electronic music terminology, a "controller
keyboard" is a keyboard which is not used to produce sound itself, but
to control other sound sources (other keyboards, synth modules, and so
on), which are the ones that actually produce the sound. Even if a synth
becomes technologically obsolete, thanks to MIDI it can still be a good
controller keyboard, provided its MIDI implementation is good.
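At the wire level, MIDI messages are just short sequences of bytes. A note-on message, for example, is three bytes: a status byte (0x90 plus the channel number, 0-15), the note number (0-127) and the velocity (0-127). A minimal sketch (the helper name is invented):

```python
def note_on(channel, note, velocity):
    """Build the three bytes of a MIDI note-on message.

    channel is 0-based (0 corresponds to "channel 1"), note and
    velocity are 7-bit values in the range 0..127."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# A DX-7 always transmitting on channel 1 means its note-on status
# byte is always 0x90: note_on(0, 60, 100) -> bytes 0x90, 0x3C, 0x64.
```

The channel being encoded in the low 4 bits of the status byte is precisely what gives MIDI its 16 channels, and why the DX-7's fixed channel 1 is such a limitation for multi-instrument setups.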

Despite its limitations, the years that have passed and the
evolution of technology, the DX-7 is still quite popular and highly
valued, because even today it's able to produce sounds which are hard
to obtain with other models, and nearly everybody agrees that it is one
of the best synthesizers of all time.

5.4. The D-50 (Roland, 1987)

Fig. 29. The Roland D-50. Image obtained from http://rolandclan.info

In the mid 80s, the reign of the Yamaha DX-7 was nearly absolute,
and the rest of the manufacturers had to settle for trying to keep a
certain market share. The Japanese company Roland had obtained
quite a relevant market niche in the late 70s, but it wasn't until 1987 that
they could put an end to the absolute hegemony of the DX-7. The
synthesizer in question was the D-50, which began a whole line of
synths (the D series, with other models such as the D-5, D-10, D-20 or
MT-32), based on a kind of synthesis that Roland called LA (Linear
Arithmetic) synthesis, which, as we'll see, combines digital subtractive
synthesis with the (then) new wavetable synthesis. Another remarkable
novelty introduced by the D-50 was the inclusion of an effects unit which
allowed effects such as reverberation, chorus or equalization to be
added to the synthesized sound, hence creating a more "refined" sound,
closer to that obtained in a recording studio with dedicated external
effects units.

For most musicians, the first impression after listening to the D-
50 was that it achieved a higher degree of realism in the emulation of
"real" instruments than older subtractive and FM synthesizers, and also
that it had a more "polished" sound thanks to the built-in effects. The
first aspect was directly related to the fact that, besides "classic"
oscillator-based subtractive synthesis, it also held in memory a set
of digital samples of real sounds, which allowed it to produce sounds
that couldn't be obtained with the other synthesis methods of the time,
through a quite simple (by today's standards) wavetable synthesis
scheme.

At that time, the price of memory was very high, which meant
that, in order to keep prices at a reasonable level, there could only be a
small number of samples, which, in general, were neither very long nor
of very high quality (higher sampling quality means more memory use).
The D-50 had only 256 kilobytes of sample memory, a very modest
amount by today's standards, when we're used to talking of hundreds of
megabytes or even gigabytes. In that memory, 100 different samples
were stored, most of them quite short and not of very high quality.
However, the final results were quite good, and the emulations quite
convincing. To achieve that, Roland's engineers used an imaginative
strategy. Studies in psychoacoustics had shown that, in our perception
of sound, the first parts of a sound (the attack, and sometimes the
decay) are the most relevant for recognizing a certain timbral texture
(our ability to recognize a sound as a piano or as a guitar, for example).
With that principle in mind, most sampled sounds contained only the
attack portion (or were short, percussive sounds), and the rest of the
sound was produced through subtractive synthesis of simple
waveforms.
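This attack-plus-synthesis trick can be pictured as a splice: a short sampled attack is crossfaded into a synthesized sustain. The sketch below is purely illustrative; the function name, the linear crossfade and its length are assumptions, not Roland's actual implementation.

```python
def splice_attack(sampled_attack, synth_sustain, crossfade=32):
    """Splice a short sampled attack onto a synthesized sustain,
    linearly crossfading over `crossfade` samples (illustrative only).

    Both inputs are plain lists of samples; the output plays the
    attack, blends into the sustain, then continues with the sustain."""
    out = list(sampled_attack[:-crossfade])
    for i in range(crossfade):
        w = i / crossfade                 # blend weight, 0 -> 1
        a = sampled_attack[len(sampled_attack) - crossfade + i]
        out.append((1 - w) * a + w * synth_sustain[i])
    out.extend(synth_sustain[crossfade:])
    return out
```

Because the ear has already "identified" the instrument during the sampled attack, the much cheaper synthesized tail can be quite simple without breaking the illusion.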

Regarding the technical specifications, the D-50 is a monotimbral
and 16-voice polyphonic synthesizer, with 64 memory registers. For
sound generation (the "linear arithmetic synthesis"), it uses a scheme
that combines subtractive synthesis of simple waveforms (limited to
sawtooth and variable-width pulse) with a fairly basic model of
wavetable synthesis. Specifically, each sound or patch is formed by up
to two partials, and each partial by up to two elements (oscillators), so
each voice can use from 1 to 4 elements. Those elements can be either
simple waveform oscillators or digital samples, with no limitations on
their combination (two of each category, all four of the same category,
etc.). The two partials, moreover, can simply be mixed, or
ring-modulated. Nevertheless, the synthesis scheme differs depending
on the sound source (oscillator waveform or sound sample). In the first
case, it's a fairly classic model of subtractive synthesis: the signal of the
oscillator is processed by a resonant low-pass filter, and then sent to
the amplifier, with three envelope generators: one controls the oscillator
frequency (pitch), another the filter, and the last one the amplifier
(amplitude). If, on the other hand, the sound source is a digital sample,
the synthesis scheme is very basic: only the frequency (pitch) and the
amplitude can be adjusted (there's no filter), with one envelope
generator for the oscillator and one for the amplifier. The scheme is
completed with three LFOs which can produce triangle, sawtooth,
square and random waveforms and which can control the oscillator's
pitch, the amplifier's amplitude, the pulse width and the cutoff frequency
of the filter (the last three parameters only when subtractive synthesis
is used). Finally, the sound can be processed by an effects unit, which
can apply different kinds of reverb, chorus and equalization.

The D-50 had considerable success and began to trace out the
route of the majority trend in synthesizers up to today, which is the
integration of subtractive and wavetable synthesis. Strictly speaking,
there's no true integration of both schemes in the D-50, but rather a
juxtaposition of them, but that integration happened soon after. In later
models, instead of separate sample and waveform oscillators, what we
have is a memory bank with hundreds or even thousands of different
waveforms, including both samples of "real" sounds and instruments
and the classic simple waveforms, which are then processed following
the subtractive synthesis scheme, using filters, envelopes, LFOs and
so on.

5.5. The M1 (Korg, 1988)


Fig. 30. The Korg M1. Image obtained from http://hem.passagen.se

Only a year after the release of the D-50, Korg counterattacked
with one of the most successful synthesizer models not only in the
company's history, but in all synth history: the M1. There are several
keys to its success: in the first place, the sound quality, thanks to 4
megabytes of digital samples (a huge amount of memory at that time);
secondly, the fact of being an 8-part multitimbral synthesizer (up to 8
different sounds can be played at once, within the limits of its 16-voice
polyphony); thirdly, the two independent effects units, with lots of
different effects, including some which were then very unusual in
synthesizers, such as flanger, phaser, overdrive, distortion, rotary
speaker, etc.; and, as an important novelty at the time, the inclusion of
an 8-track sequencer which allowed each track (percussion, bass,
piano, etc.) to be independently recorded and edited to create a whole
song. For all those reasons, the M1 is considered the first "music
workstation", because, at least in theory, it has all the necessary
resources to create a full song with a single apparatus (sound selection,
recording and editing of tracks, and effects such as reverb, chorus and
delay). That is, for the first time it was possible (within certain limits, of
course) to create a complete album from beginning to end with a single
keyboard.

The Korg M1 has 100 waveforms of "melodic" instruments and
44 samples of drum and percussion sounds. Although these are not
very high numbers, the large (for the time) memory made the M1 stand
out for its sound quality and realism. The samples, moreover, were
multisamples, that is, multiple recordings of the original sound source at
different frequencies and/or amplitudes, in order to faithfully reproduce
the original timbre of the instrument (the harmonics of an acoustic
instrument change depending on the pitch and on the amplitude or
intensity, so it's not simply the same waveform played back at different
pitch/amplitude levels). Some of the preset sounds soon became
classics in the recordings of the time, for instance the universe patch or
the acoustic piano sound (nowadays it sounds a bit too "metallic" and
artificial, but it was used in hundreds of pop and dance recordings of
the late 80s and early 90s).
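The multisample idea can be pictured as a lookup table of key and velocity zones, each pointing to a different recording. The zone layout and sample names below are invented for illustration and have nothing to do with the M1's actual data.

```python
# Hypothetical multisample map: each zone covers a key range and a
# velocity range and points to a recording made in that region.
ZONES = [
    {"lo_key": 0,  "hi_key": 59,  "lo_vel": 0,  "hi_vel": 95,  "sample": "piano_low_soft"},
    {"lo_key": 0,  "hi_key": 59,  "lo_vel": 96, "hi_vel": 127, "sample": "piano_low_hard"},
    {"lo_key": 60, "hi_key": 127, "lo_vel": 0,  "hi_vel": 95,  "sample": "piano_high_soft"},
    {"lo_key": 60, "hi_key": 127, "lo_vel": 96, "hi_vel": 127, "sample": "piano_high_hard"},
]

def pick_sample(note, velocity, zones=ZONES):
    """Return the name of the sample whose zone contains the played
    note and velocity, so each region of the keyboard uses a recording
    made close to its own pitch and dynamic level."""
    for z in zones:
        if z["lo_key"] <= note <= z["hi_key"] and z["lo_vel"] <= velocity <= z["hi_vel"]:
            return z["sample"]
    raise ValueError("no zone covers this note/velocity")
```

Playing the same key softly or hard thus selects genuinely different recordings, which is what makes a multisampled piano respond more like the real instrument than a single transposed sample would.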

The synthesis scheme of the M1 is actually quite simple, but it
was a step forward towards the integration of subtractive and wavetable
synthesis. Each sound or patch can use one or two oscillators (wave
generators). The signal of each one is processed (and that's a step
forward compared to earlier wavetable synthesizers) by a low-pass filter
(with no resonance), and then by the amplifier. Each of these basic
blocks (wave, filter and amplifier) has its own envelope generator.
Nevertheless, there are no LFOs, and the oscillators cannot be ring-
modulated. Finally, the signal is processed by up to two effects units,
which can operate in series (the second unit processes the signal
already processed by the first), or in parallel (both units simultaneously
process the original signal).

5.6. The VL-1 (Yamaha, 1994)

Fig. 31. The Yamaha VL-1. Image obtained from http://www.zikinf.com

There are several parallels between this model and the DX-7:
in both cases, the technology used was based on research carried out
at Stanford University (from which Yamaha acquired an exclusive
license to the patents), and both represented a very significant leap
compared with the synthesis schemes of previous synthesizer models.
The main difference, nevertheless, was the market's response: unlike
the DX-7, the VL-1 wasn't very successful commercially, partly due to
its high price (nearly US$ 10,000 when it was released).

This model marked the arrival on the synth market of a new,
though long-awaited (until then, the costs and processing power
required were too high for it to be commercially exploited), synthesis
model: physical modelling synthesis. As seen before, this kind of
synthesis is based on mathematical models that simulate the behaviour
of sound waves depending on different aspects of an instrument, such
as its shape, dimensions, materials, playing pressure or strength, etc. A
computer processes all those variables and generates the waveform in
real time. The result is a very realistic and expressive sound, which
exceeds that of any other scheme, even the most sophisticated
wavetable synthesis models. On the other hand, the greater processing
power requirements imposed severe limitations on features such as
polyphony or multitimbral capabilities (for instance, the VL-1 is
duophonic and bitimbral, and that is because it's actually two
independent monophonic synthesizers in a single case operating
jointly). An interesting aspect of physical modelling synthesis is that it's
not only suitable for faithfully reproducing a "real" acoustic instrument,
such as a violin, a flute or a saxophone, but also allows new
instruments to be "invented" by modifying the parameters (shape, size,
etc.), with the same expressive capabilities. To give an idea of its
flexibility, some of the adjustable parameters (which can also be
controlled in real time, as the musician plays) are the following:

• Air pressure (wind instruments) or bow speed (string instruments)
• Pressure of the lips on the mouthpiece (wind instruments) or bow
pressure (string instruments)
• Pipe length (wind instruments) or string length (string instruments)
• Tongue position (a playing technique of partially obstructing the
mouthpiece with the tongue while blowing)
• Breath noise (wind instruments)
• Throat formant (wind instruments)
• Friction (of the air in the pipe or of the bow on the string)
• Absorption (loss of high frequencies at the end of the pipe or the
string)
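A feel for what "modelling the instrument rather than the waveform" means can be had from the Karplus-Strong algorithm, one of the simplest physical models of a plucked string (far simpler than the Stanford waveguide research behind the VL-1, but in the same spirit): a burst of noise circulates in a delay line whose length sets the pitch, and an averaging filter in the feedback path makes the high harmonics die away first, just as on a real string.

```python
import random

def karplus_strong(freq, duration, sample_rate=44100, damping=0.996):
    """Tiny plucked-string model (Karplus-Strong).

    A delay line of sample_rate/freq samples is filled with noise (the
    "pluck"); each pass, neighbouring samples are averaged and slightly
    damped, so the tone darkens and decays like a vibrating string."""
    period = int(sample_rate / freq)          # delay-line length sets pitch
    line = [random.uniform(-1, 1) for _ in range(period)]
    out = []
    for _ in range(int(duration * sample_rate)):
        first = line.pop(0)
        avg = damping * 0.5 * (first + line[0])   # lowpass + damping
        out.append(first)
        line.append(avg)                          # feed back into the line
    return out
```

Changing `period`, `damping` or the initial excitation is exactly the kind of "shape, size, material" parameter tweaking described above, which is why such models can both imitate real strings and invent impossible ones.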

Although we cannot ignore the important leap that this synthesis
model represented, the VL-1 wasn't very successful, as noted before.
A main reason was no doubt the high price, but it wasn't only a matter
of cost. The VL-1 offered many more expressive possibilities, but this
also made it harder to operate. Most synthesizer players are pianists,
and they are not used to the specific playing techniques of wind or
string instruments (just as most wind and string players are not used to
the specific playing techniques of the piano). To get the most out of the
VL-1, it's not enough to play the keyboard: the player also needs to use
multiple wheels, breath controllers and pedals, in a way that departs
quite far from the "classical" synthesizer playing technique.

Quite surprisingly, rather than in the emulation of acoustic
instruments, physical modelling synthesis has been much more
successful in the virtual "re-creation" of vintage analog synthesizers.
The new technology makes it possible to accurately reproduce the
instability, imprecision and interference associated with analog
technology, bringing back to life the old "character" and "warmth" of
analog synths, which had been lost with the introduction of digital
technology.

5.7. The Fantom-X (Roland, 2004)

Fig. 32. The Fantom X6. Image obtained from http://rolandclan.info

Roland's Fantom-X series includes several models (X6, X7, X8
and XR) which differ only in the number of keys (61, 76 and 88, plus a
module version, respectively), while being identical in terms of sound
capabilities. They are a good example of current wavetable synthesis
technology, which is still predominant today. In recent years, rather than
significant changes in synthesis models, what we have been seeing is a
refinement and sophistication of the already known schemes (mainly
wavetable and physical modelling), which brings better specs, flexibility
and sound quality. As a result, when compared to current models,
synths like the D-50 or the M1 look quite "primitive", despite being
based on the same principles. The Fantom-X is an excellent example of
the total integration of subtractive and wavetable synthesis, with a very
remarkable level of sophistication and flexibility.

It's a synthesizer with a maximum polyphony of 128 voices, and
16-part multitimbrality. The total wave memory is 128 megabytes,
expandable to 384 MB (512 MB in the case of the Fantom XR) using
expansion cards. The 128 MB included from the factory hold a total of
1,460 multisamples. Moreover, the Fantom-X includes a sampler,
meaning that the user is not limited to the samples provided by the
manufacturer, but can record and store new samples, which can also
be used in the synthesis process. The sampler memory is 32 MB (16
MB in the Fantom XR), expandable to 544 MB (528 MB in the Fantom
XR). This means that, when fully expanded, the synthesizer can hold up
to 928 MB (1,040 MB in the Fantom XR) of samples.

The synthesis structure, as already pointed out, is that of
wavetable subtractive synthesis, but with a high level of sophistication.
Up to 4 elements (oscillators) can be used per patch, but these in turn
are stereo and can use different waveforms for the left and right
channels, so actually up to 8 different waveforms can be used
(however, the synthesis always operates as at most 4 different blocks,
even when one or more of them use two waveforms). Each of the 4
available elements (oscillators) has its own dedicated resonant filter
and amplifier. The filter can act as a low-pass (3 different kinds),
high-pass, band-pass or peak type (emphasizing the frequencies close
to the cutoff frequency without cutting the rest). Of course, the
oscillator, the filter and the amplifier have their own envelope
generators. Moreover, each of the up to 4 elements forming the patch
has two LFOs (hence, up to 8 LFOs can be used in each sound), with
13 different waveforms, among which are the sine, the triangle, various
sawtooths, the square, a random wave, and even a user-programmable
type (called the step LFO). In this last programmable mode, when the
LFO is used to control the oscillator's frequency or pitch, it can be used
to create "melodies", so it's possible to play 4-voice songs (one voice
per oscillator) at the press of a single key.
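The step-LFO-as-melody idea can be sketched by treating the LFO as a looping table of values read at a fixed rate and routed to pitch. The sketch below is a hedged illustration only; the semitone interpretation, step values and names are invented, not the Fantom-X's actual parameters.

```python
def step_lfo_pitch(base_freq, steps, step_index):
    """Oscillator frequency while the step LFO sits on step `step_index`,
    treating each step value as a transposition in semitones.

    The table loops, so a held key cycles endlessly through the steps,
    turning the LFO into a tiny melody/arpeggio generator."""
    semitones = steps[step_index % len(steps)]
    return base_freq * 2 ** (semitones / 12)

# e.g. a four-step major-chord arpeggio over a held 220 Hz note:
# [step_lfo_pitch(220, [0, 4, 7, 12], i) for i in range(4)]
```

With four such LFO-driven oscillators running different tables, one per element, a single key press can indeed play a small four-voice pattern.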

On the other hand, the Fantom-X allows selecting among 10
different synthesis "structures" that determine the way the different
elements of the synthesis interact. In this interaction, two other
components can also intervene: the ring modulator and the booster.
The booster increases the waveform's amplitude, which can in some
cases be used to alter its shape (for instance, turning a triangle wave
into a trapezoid one). Some of the synthesis structures are the
following:


Fig. 33. Some of the Roland Fantom-X's synthesis structures

Moreover, different structures can be selected for elements 1
and 2 and for elements 3 and 4, respectively, so there are dozens of
possible combinations.

Finally, there are also effects units to process the sound once
it's been synthesized. There are a total of 7 units, each one with several
effect types and possible configurations: reverb, chorus/delay, three
multi-effects units (78 types each), mastering (compressor) and one
effects unit for the incoming signal of the sampler (an effect applied to
the signal as it comes in).

All this gives an idea of the high degree of versatility and
flexibility in the synthesis process of modern synthesizers, even
considering that they are based on sampled waveforms. The
possibilities are nearly unlimited, which confirms one of the main ideas
behind synthesizers from the beginning: to open a door to a whole new
world of sonic possibilities, with a degree of flexibility never seen before,
and all in a single instrument.


5.8. A special case: the SID (1981)

Fig. 34. The MOS Technology 6581, usually known as SID (Sound Interface Device).
Image obtained from Wikipedia (http://en.wikipedia.org).

The SID (which stands for Sound Interface Device) is the familiar
name of the 6581 chip by MOS Technology (a semiconductor company
owned by the Commodore Business Machines group). It's basically a
synthesizer on a chip, conceived to be used in personal computers and
videogame consoles, but for various reasons it has earned a relevant
place in the history of both computers and synthesizers.

It was designed by the engineer Bob Yannes, who was 24 in
1981. Among the curiosities of the SID's creation process: it was the
first synth designed by Yannes (who, since his childhood, had always
dreamed of creating a synthesizer), and its development took a very
short time, barely five months from the first schematics to the
production phase. Despite that, its specs are quite remarkable, and at
the time it was clearly superior to any other sound chip that could be
found in a computer or videogame console, given that the SID was a
real synthesizer, and not a mere sound generator. It was first used
commercially in the Commodore 64, a low-cost personal computer
released in 1982 that would later become the most successful computer
model in history (estimates are around 20 million units sold). Later, it
was included in other Commodore computer models, and was even
used as the basis of other companies' synthesizers (such as the
Swedish Elektron's Sidstation) or PC soundcards (such as the
HardSID). Shortly after creating the SID, Yannes left Commodore and
founded, with other partners, the synth company Ensoniq (the makers
of some quite successful models such as the Mirage or the ESQ-1),
which some years later would be acquired by Creative, the company
behind the famous Sound Blaster PC sound cards.


The specifications of the SID are quite impressive, considering the
time, the circumstances of its creation and the fact that it was a
low-cost chip targeted at home computing devices. It has three
different oscillators with independent frequency and amplitude, which
can operate in monophonic or polyphonic mode (as three different
voices), with four different waveforms: triangle, sawtooth,
variable-width pulse and noise, covering a range of eight octaves. It
has resonant low-pass, high-pass and band-pass filters with a -12
dB/octave slope, ring modulation, oscillator syncing, four independent
LFOs (with triangle, sawtooth, ramp, square and random waveforms), and
three ADSR envelopes (one for each independent amplifier). Moreover,
it has a signal input line which allows the filters to process an
incoming signal.
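The basic voice architecture just described (a waveform generator whose
output is shaped by an ADSR amplitude envelope) can be sketched in a few
lines of Python. This is only an illustrative approximation of how such a
voice behaves, not the actual logic of the 6581 chip; all function names
and parameter values here are hypothetical:

```python
def pulse(phase, width=0.5):
    """Variable-width pulse wave: +1 while the fractional phase is
    below the pulse width, -1 otherwise (phase and width in 0..1)."""
    return 1.0 if (phase % 1.0) < width else -1.0

def adsr(t, attack=0.01, decay=0.1, sustain=0.6, release=0.2, gate_time=0.5):
    """Piecewise-linear ADSR amplitude envelope (all times in seconds):
    ramp up, fall to the sustain level, hold, then fade out."""
    if t < attack:
        return t / attack
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < gate_time:
        return sustain
    if t < gate_time + release:
        return sustain * (1.0 - (t - gate_time) / release)
    return 0.0

def render(freq=440.0, width=0.25, rate=8000, dur=0.8):
    """One enveloped pulse-wave voice, roughly analogous to a single
    SID voice (oscillator -> amplifier controlled by an envelope)."""
    return [adsr(i / rate) * pulse(freq * i / rate, width)
            for i in range(int(dur * rate))]

samples = render()
```

A real SID voice would additionally pass through the shared filter and
could be ring-modulated or hard-synced against a neighbouring oscillator,
but the oscillator-plus-envelope chain above is the core of each voice.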

On the other hand, being developed so quickly sometimes meant that the
SID did not match its theoretical specifications. But far from being a
problem, this sometimes led to certain “features” or “advantages”
which were exploited by musicians and programmers. For example, due to
a design flaw, the SID produces an audible “click” every time the
volume level changes, with an intensity proportional to the volume
level (there are 16 different levels). This was used to play back
digitized sound samples at 4-bit resolution (16 levels), such as sound
effects, human voices or even sampled instruments. Some years after
the release of the 6581, Commodore created the 8580, basically the
same chip redesigned to better match the initial technical
specifications. However, most musicians prefer the “old” 6581, which
produces a more characteristic sound, quite “dirty” and “shouty”
compared to the “colder” and more “lifeless” sound of the 8580, whose
“enhancements”, moreover, render digitized sounds practically
inaudible.
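The digitized-playback trick can be sketched as follows: each incoming
sample is reduced to one of the 16 master-volume levels, and the
resulting 4-bit values are then written at a steady rate to the SID's
volume register ($D418 on the Commodore 64). A hypothetical Python
illustration of the quantization step (the register write itself
requires the real hardware, so it appears only as a comment):

```python
def to_volume_nibbles(samples):
    """Quantize floating-point samples in -1..1 to the 16 volume
    levels (0..15) of the SID master-volume register."""
    return [min(15, max(0, int((s + 1.0) / 2.0 * 15 + 0.5)))
            for s in samples]

# On a real Commodore 64, a timed loop would write each nibble to the
# volume register at $D418; the per-write "click" of the 6581 is what
# makes the 4-bit sample stream audible.
nibbles = to_volume_nibbles([0.0, 1.0, -1.0, 0.5])
```

This also makes clear why the trick fails on the 8580: without the
volume-change click, writing nibbles to the register produces almost no
sound.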

Even today the SID enjoys some popularity in both the computer and the
synthesizer worlds, and considerable sums are paid both for the chips
themselves and for the musical devices that use them, given that
production ceased long ago and no substitutes or equivalent versions
have been created. Even if the present-day “cult” of the SID is, to a
great extent, a matter of fashion and geek culture, it has to be
acknowledged that the chip was clearly several steps ahead of the
other computer sound chips of its time and that it marked the path for
forthcoming products.


6. BIBLIOGRAPHY

- BAGNALL, B. (2006): On the Edge. The Spectacular Rise and
Fall of Commodore, Winnipeg, Variant Press.
- HURTIG, B. (ed.) (1988): Synthesizer Basics, Milwaukee, Hal
Leonard Books.
- PAVLOV, A. (2006): The Fantom Tweakbook. Getting the Most
out of Roland Fantom-S, Fantom-X and Juno-G Synthesizers
(version 4). Manuscript which can be obtained from the author at
http://www.sinevibes.com/publications/fantom-tb/
- PINCH, T. and TROCCO, F. (2002): Analog Days. The Invention
and Impact of the Moog Synthesizer, Cambridge (Mass.),
Harvard University Press.
- REID, G. (2000): “Synth Secrets. Part 12: An Introduction To
Frequency Modulation”, in Sound On Sound, April 2000,
http://www.soundonsound.com/sos/apr00/articles/synthsecrets.htm
- REID, G. (2000): “Synth Secrets. Part 13: More On Frequency
Modulation”, in Sound On Sound, May 2000,
http://www.soundonsound.com/sos/may00/articles/synth.htm
- RUSS, M. (1994): “Yamaha VL1: Virtual Acoustic Synthesizer”,
in Sound On Sound, July 1994,
http://www.soundonsound.com/sos/1994_articles/jul94/yamahavl1.html
- VAIL, M. (2000): Vintage Synthesizers: Pioneering Designers,
Groundbreaking Instruments, Collecting Tips, Mutants of
Technology, San Francisco, Miller Freeman Books.
- WELSH, F. (2006): Welsh’s Synthesizer Cookbook (3rd edition).
Manuscript that can be obtained from the author, through eBay or
Amazon.

Besides those references, the following Wikipedia entries
(http://en.wikipedia.org) are also of interest: synthesizer, subtractive
synthesis, additive synthesis, frequency modulation synthesis, physical
modelling synthesis, Minimoog, Sequential Circuits Prophet 5, Yamaha
DX7, Roland D-50, Korg M1, MOS Technology SID.
