
Preprint · January 2019. DOI: 10.13140/RG.2.2.24895.97444


Report I Human Perceptual System and its Model (I645)
The Physiology, Mechanism, and Nonlinearities of Hearing

Bagus Tris Atmaja (s182002)


Email: bagus@jaist.ac.jp
School of Information Science
Japan Advanced Institute of Science and Technology
January 23, 2019

Contents
1 The Physiology of Hearing
   1.1 Outer ear
   1.2 Middle ear
   1.3 Inner ear
2 Mechanism of Hearing
   2.1 From sound into vibration
   2.2 From mechanical vibration into fluid vibration
   2.3 From fluid vibration into nerve impulse
3 Nonlinearities of Auditory system
   3.1 What is a linear/nonlinear system?
   3.2 Frequency response of auditory filter
   3.3 Saturating nonlinearity
   3.4 High-sound-level compressions
   3.5 Two-tone suppression
   3.6 Combination tone
   3.7 Otoacoustic Emission
      3.7.1 Cochlear echoes
      3.7.2 Single-tone emissions
      3.7.3 Intermodulation products
      3.7.4 Spontaneous emissions

1 The Physiology of Hearing
The function of each auditory peripheral can be studied through physiology, while its structures and parts can be studied through anatomy. Based on its location and the form of the signal inside it (acoustic, vibrational, or electrical), the human ear can be divided into three parts: the outer ear, the middle ear, and the inner ear. Figure 1 shows the anatomy of the human ear.

Figure 1: Anatomy of the human ear

1.1 Outer ear


• Pinna
The function of the pinna, or auricle, is to gather and focus sound energy onto the tympanic membrane. The one portion of the auricle that has no cartilage is called the lobule, the fleshy lower part of the auricle. The function of the lobule has not yet been determined [1].
• Concha
The concha is the "shell-shaped" cavity of the external ear. The function of the pinna and concha is to selectively filter different sound frequencies in order to provide cues about the elevation of the sound source [3]. In addition, the concha and external auditory canal act as resonators, effectively enhancing the intensity of sound that reaches the tympanic membrane by about 10 to 15 dB. This enhancement is most pronounced for sounds in the frequency range of roughly 2 to 7 kHz and so, in part, determines the frequencies to which the ear is most sensitive [4]. The function of the concha is described here together with the other peripherals, as it is the part between the pinna and the ear canal.

• Ear canal / External auditory meatus


The ear canal, also called the external auditory canal, external auditory meatus, or external acoustic meatus, functions as an entryway for sound waves, which are propelled toward the tympanic membrane, known as the eardrum.

1.2 Middle ear


• Tympanic membrane /ear drum
The tympanic membrane is thin and pliable so that a sound, consisting of compres-
sions and rarefactions of air particles, pulls and pushes at the membrane moving
it inwards and outwards at the same frequency as the incoming sound wave. It is
this vibration that ultimately leads to the perception of sound. The greater the
amplitude of the sound waves, the greater the deflection of the membrane. The
higher the frequency of the sound, the faster the membrane vibrates [4].

• Auditory ossicles
The auditory ossicles consist of the following three bones, the smallest in the human body, which transfer the vibration of the tympanic membrane to the cochlea:
– malleus (hammer): forms a rigid connection with the incus
– incus (anvil): forms a flexible connection with the stapes
– stapes (stirrup): connects to the oval window
The inward-outward movement of the tympanum displaces the malleus and incus, and the action of these two bones alternately drives the stapes deeper into the oval window and retracts it, resulting in a cyclical movement of fluid within the inner ear.

• Eustachian tube
The eustachian tube, or auditory tube, helps ventilate the middle ear and maintain equal air pressure on both sides of the tympanic membrane (inside the middle ear and outside the body) via the nasopharynx (the nasal part of the pharynx, lying behind the nose and above the level of the soft palate).

The overall function of the middle ear is to amplify the vibrations of the tympanic membrane to the oval window. It also matches the low acoustic impedance of air to the high acoustic impedance of the inner-ear fluid (impedance matching). The detailed anatomy of the middle ear is shown in Figure 2.

Figure 2: Anatomy of middle ear

1.3 Inner ear


• Vestibular system
The vestibular system performs two essential tasks. It engages a number of reflex pathways that are responsible for making compensatory movements and adjustments in body position. It also engages pathways that project to the cortex to provide perceptions of gravity and movement. The vestibular system consists of the otolith organs and three semicircular ducts (horizontal, anterior, and posterior).
– Otolith organs
The otolith organs consist of the saccule and utricle, which are perpendicular to each other. They are also called gravity receptors, as they respond to gravitational forces. The receptors, called maculae (meaning "spot"), are patches of hair cells topped by small calcium carbonate crystals called otoconia. They monitor the position of the head relative to the vertical. Because the two maculae are perpendicular to each other, in any position of the head gravity will bend the cilia of one patch of hair cells, due to the weight of the otoconia to which they are attached by a gelatinous layer. This bending of the cilia produces afferent activity going through the nerve to the brainstem.
– Semicircular Canals
Semicircular canals, or semicircular ducts, respond to angular acceleration. There are three pairs of semicircular ducts, which are oriented roughly 90 degrees to each other for maximum ability to detect angular rotation of the head. This organ mediates interactions between the vestibular system and the eye muscles via the cranial nerves. Hence, it enables smooth movement of the eyes toward the left and right, keeping the visual field stable as the head turns. These fluid-filled tubes are the main organ that keeps our body in balance.

• Cochlea
The cochlea is the main peripheral organ in the auditory system, consisting of the following parts:
– Two membranes:
∗ Reissner's membrane
Together with the basilar membrane it creates a compartment in the cochlea filled with endolymph, which is important for the function of the spiral organ of Corti. Based on experimental evidence, Reissner's membrane is believed to play an important role in otoacoustics, as a wave on Reissner's membrane can propagate along the whole extent of the cochlea [5].
∗ Basilar membrane
It forms the division between the scala media and scala tympani. Physical characteristics of the basilar membrane cause different frequencies to reach maximum amplitudes at different positions; the BM performs frequency selectivity through this arrangement. Hence, the BM is effectively a continuous array of filters that decompose a complex sound waveform into its constituent frequency components.
– Three compartments:
∗ Scala vestibuli (vestibular duct): conducts sound vibrations to the cochlear duct.
∗ Scala tympani (tympanic duct): together with the vestibular duct, transduces the movement of air (which causes the tympanic membrane and the ossicles to vibrate) into movement of the liquid and the basilar membrane.
∗ Scala media (cochlear duct): houses the organ of Corti, which transforms fluid vibration into nerve impulses.
– Oval window
It receives vibration from the stapes and transmits it to the base of the basilar membrane.
– Round window
It vibrates with opposite phase to vibrations entering the inner ear through
the oval window. It allows fluid in the cochlea to move, which in turn ensures
that hair cells of the basilar membrane will be stimulated and that audition
will occur.
– Organ of Corti
The organ of Corti transduces auditory signals and minimizes the hair cells' extraction of sound energy. It consists of the following two types of hair cells and the tectorial membrane:
∗ Inner hair cells (IHC): detect the sound and transmit it to the brain via the auditory nerve.
∗ Outer hair cells (OHC): perform an amplifying role.
∗ Tectorial membrane: its function in humans is not yet clear, but the TM may be involved in the longitudinal propagation of energy in the intact cochlea [6]. The stereocilia on the tips of the hair cells respond to fluid motion when the basilar membrane is displaced to the right or left.
• Endolymph and perilymph
Endolymph is the fluid contained in the membranous labyrinth of the inner ear, while perilymph is the extracellular fluid inside the perilymphatic space, located between the outer wall of the membranous labyrinth and the wall of the bony labyrinth. Their function is to regulate the electrochemical impulses of the hair cells. Perilymph resembles extracellular fluid in composition (sodium salts are the predominant positive electrolyte), while endolymph resembles intracellular fluid in composition (potassium is the main cation).
Endolymph also has two other functions:
– Hearing: fluid waves in the endolymph of the cochlear duct stimulate the receptor cells, which in turn translate their movement into nerve impulses that the brain perceives as sound.
– Balance: angular acceleration of the endolymph in the semicircular canals stimulates the vestibular receptors. The semicircular canals of both inner ears act in concert to coordinate balance.

The detailed anatomy of the inner ear explained above is shown in Figure 3.

2 Mechanism of Hearing
Hearing processes sound waves into perceived sounds, translated from nerve impulses (electrical activity). This process is known as auditory transduction. On its way to the nervous system, the energy of the sound wave undergoes three transformations: mechanical vibration, hydraulic motion (fluid vibration), and nerve impulses. The mechanism of hearing is divided below into those three parts.

2.1 From sound into vibration


Sound is a mechanical wave; it needs a medium to propagate. Sound itself is created by something that vibrates, for example a tuning fork. The vibration then creates movement of air particles through changes in air pressure. The sound wave travels through the air and reaches the human ear via the pinna. As shown in Figure 4, the longitudinal sound wave reaches the pinna and concha.

6
Figure 3: Detailed anatomy of the inner ear

Figure 4: Propagation of sound wave from sound source to ear

The minimum detectable level of sound reaching the pinna corresponds to an energy flow (intensity) of $10^{-12}$ W/m$^2$ in a sound pressure wave (the threshold of hearing). Due to the shape of the outer ear, the most sensitive frequencies are from 1000 to 4000 Hz; in the outer ear, sound frequencies from 1500 to 7000 Hz are amplified by about 10-15 dB. The last step in this stage is the sound wave hitting the eardrum, causing mechanical vibration through the ossicles.

2.2 From mechanical vibration into fluid vibration


Vibrations of the eardrum are transmitted to the malleus, incus, and stapes (stirrup). The stapes contacts the oval window, acting like a piston. Figure 5 shows the mechanism of converting mechanical vibration into fluid (hydraulic) vibration in the cochlea. Two mechanisms in the ossicles contribute to the amplification of energy: (1) the area ratio between the tympanic membrane and the stapes footplate; (2) the lever (length) ratio of the malleus and incus. The first mechanism determines the pressure amplification, while the second determines the force amplification.

Figure 5: Vibrations in the middle ear [7]

From the illustration in Figure 5, the area of the stapes footplate is very small compared to the area of the tympanic membrane; the actual ratio is about 17, so the pressure amplification is 17:
\[
\frac{P_\mathrm{cochlea}}{P_\mathrm{eardrum}} = \frac{\text{area of tympanum}}{\text{area of stapes footplate}} = 17
\]
The second ratio, the comparison of the lengths of the malleus and incus, determines the force amplification:
\[
\frac{F_\mathrm{cochlea}}{F_\mathrm{eardrum}} = \frac{\text{length of malleus}}{\text{length of incus}} = 1.2
\]
Hence, the total amplification of sound pressure in the cochlea ($P_c$) relative to the sound pressure at the eardrum ($P_e$) can be calculated as for an ideal transformer:
\[
\frac{P_c}{P_e} = 17 \times 1.2 \approx 20
\]
The signal now takes the form of fluid vibration in the cochlear duct as the stapes pushes the oval window back and forth.
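As a quick sanity check on the arithmetic above, the ideal-transformer gain can be computed and expressed in decibels. This is a sketch: the ratios are the ones quoted in the text, and the dB conversion is the standard 20 log10 rule for pressure ratios.

```python
# Middle-ear amplification as an ideal transformer: the area ratio of
# the tympanic membrane to the stapes footplate (17) times the ossicular
# lever ratio (1.2) gives the total pressure gain.
import math

AREA_RATIO = 17.0   # area of tympanum / area of stapes footplate
LEVER_RATIO = 1.2   # length of malleus / length of incus

def middle_ear_pressure_gain():
    """Total pressure amplification P_cochlea / P_eardrum."""
    return AREA_RATIO * LEVER_RATIO

gain = middle_ear_pressure_gain()
gain_db = 20.0 * math.log10(gain)  # pressure ratio in decibels

print(gain)               # about 20
print(round(gain_db, 1))  # about 26 dB
```

The resulting figure of roughly 26 dB falls in the range commonly quoted for the middle-ear transformer gain.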

2.3 From fluid vibration into nerve impulse


The fluid vibration in the cochlear duct, from the scala vestibuli to the scala tympani, causes movement of the basilar membrane and hair cells. Displacement of the basilar membrane toward the scala vestibuli produces a shearing force between the basilar membrane and the tectorial membrane, causing the stereocilia on the hair cells to be bent to the right. Displacement of the BM toward the scala tympani produces the opposite effect, causing the stereocilia to be bent to the left. When the stereocilia are bent to the right, the tip links are stretched and ion channels are opened. Positively charged potassium ions (K+) enter the cell, causing the interior of the cell to become more positive (depolarization). This stimulates the hair cells to send nerve impulses to the cochlear nerve and on to the brain. When the stereocilia are bent in the opposite direction, the tip links slacken and the channels close. In this way the sounds we perceive are processed into different loudnesses and pitches by the cochlea: the loudness corresponds to the amplitude of the fluid wave, while the perceived frequencies correspond to the frequency selectivity of the basilar membrane.
Figure 6 shows the coiled cochlea and its uncoiled section, where sound transduction is performed to convert fluid vibration into nerve impulses.

Figure 6: Sound transduction in coiled (top) and uncoiled cochlea (bottom)

3 Nonlinearities of Auditory system


3.1 What is a linear/nonlinear system?
A linear system satisfies the superposition principle, which states that for all linear systems the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually. The superposition property can be defined by two simpler properties:
\[
F(x_1 + x_2) = F(x_1) + F(x_2) \qquad \text{(additivity)}
\]
\[
F(\alpha x) = \alpha F(x) \qquad \text{(homogeneity)}
\]

If a system does not satisfy these two properties, it is a nonlinear system. The following subsections describe the evidence that our hearing system is nonlinear.
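The two properties can be checked numerically. In this sketch (the functions are mine, chosen for illustration), a linear system F(x) = 2x passes both tests, while a saturating system G(x) = tanh(x), a crude stand-in for cochlear compression, fails both.

```python
# Numerical check of superposition: the linear system F(x) = 2x
# satisfies additivity and homogeneity, while the saturating system
# G(x) = tanh(x) violates both.
import math

def F(x):
    return 2.0 * x       # linear: constant gain

def G(x):
    return math.tanh(x)  # nonlinear: saturating

x1, x2, alpha = 0.5, 0.8, 3.0

# Additivity and homogeneity hold for F ...
assert abs(F(x1 + x2) - (F(x1) + F(x2))) < 1e-12
assert abs(F(alpha * x1) - alpha * F(x1)) < 1e-12

# ... but fail for G
print(G(x1 + x2), G(x1) + G(x2))     # clearly unequal
print(G(alpha * x1), alpha * G(x1))  # clearly unequal
```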

3.2 Frequency response of auditory filter


The cochlea acts like a bank of band-pass filters. Each filter has a place (position) on the basilar membrane; the response of the basilar membrane depends on the place where it vibrates, when the characteristic frequency of that place matches the frequencies of the input sound (resonance). The input stimulus is split into several frequency bands within which two frequencies are not distinguishable. The ear averages the energies of the frequencies within each critical band and thus forms a compressed representation of the original stimulus. The critical band refers to the bandwidth at which subjective responses, such as loudness, become significantly different. This filter bandwidth is larger for high-frequency filters. It also depends on sound level: higher levels lead to wider bandwidths.
Mathematically, the critical bandwidth is the difference between the upper and lower cutoff frequencies:
\[
\mathrm{BW} = f_{ch} - f_{cl}
\]
As this auditory filter is a bank of filters, the bandwidth above is not a single value but many (overlapping) bandwidths. Psychoacoustic studies reveal that human perception of the frequency content of sound does not follow a linear scale. This can be observed by listening to linearly spaced tones versus nonlinearly spaced tones (e.g., logarithmically spaced tones). Therefore, auditory researchers have proposed several scales for this filter bank: the Bark, ERB, and mel scales.
The bark scale ranges from 1 to 24 Barks, corresponding to the first 24 critical bands
of hearing. The published Bark cut-off frequencies are given in Hertz as [0, 100, 200,
300, 400, 510, 630, 770, 920, 1080, 1270, 1480, 1720, 2000, 2320, 2700, 3150, 3700, 4400,
5300, 6400, 7700, 9500, 12000, 15500]. The published band centers in Hertz are [50, 150,
250, 350, 450, 570, 700, 840, 1000, 1170, 1370, 1600, 1850, 2150, 2500, 2900, 3400, 4000,
4800, 5800, 7000, 8500, 10500, 13500] [9]. The following formula can be used to convert
frequency in Hz to the Bark scale:
\[
F_\mathrm{Bark} = 6 \sinh^{-1}\!\left(\frac{f}{600}\right)
\]
This Bark scale is derived from the critical-band concept (Zwicker's model). Although the critical-band terminology is often used loosely relative to its strict psychoacoustic definition, it is widely used in engineering.
Moore and Glasberg proposed the ERB scale, modifying Zwicker's loudness model. The ERB (equivalent rectangular bandwidth) scale gives an approximation to the bandwidths of the filters in human hearing using rectangular band-pass filters. The formula to compute the ERB-rate for a given frequency f in Hz is:
\[
F_\mathrm{ERB} = 21.4 \log_{10}(0.00437 f + 1)
\]

Figure 7: Saturation of OHC active force production [12]

The last well-known scale used to model the auditory filter is the mel scale. It does not come from experiments modeling the auditory filter, but from measurements of pitches judged by listeners to be equal in distance from one another. The proposed unit, the mel, was defined such that 1000 mels is the pitch of a 1000 Hz tone, and one mel is 1/1000 of that pitch on the subjective scale. The results of the 1940 experiment are summarized in the following formula:
\[
F_\mathrm{mel} = 1127 \ln\!\left(1 + \frac{f}{700}\right)
\]
The auditory-filter scales above show that the human ear processes frequency on a nonlinear, roughly logarithmic scale rather than a linear scale. Moreover, the response of the auditory filter is nonlinearly level-dependent [11]: at low levels it is almost linear, but it is highly nonlinear at high levels. The use of a nonlinear model such as the dual-resonance nonlinear (DRNL) filter also gives better results than a linear gammatone filter bank [10].
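The three scale formulas above can be implemented directly; a minimal sketch in Python (the function names are mine):

```python
# Direct implementations of the three auditory-scale formulas given in
# the text: Bark, ERB-rate, and mel.
import math

def hz_to_bark(f):
    """Bark scale (Zwicker-style critical-band rate)."""
    return 6.0 * math.asinh(f / 600.0)

def hz_to_erb_rate(f):
    """ERB-rate scale of Moore and Glasberg."""
    return 21.4 * math.log10(0.00437 * f + 1.0)

def hz_to_mel(f):
    """Mel scale; 1000 Hz maps to roughly 1000 mels by construction."""
    return 1127.0 * math.log(1.0 + f / 700.0)

for f in (100, 1000, 4000):
    print(f, round(hz_to_bark(f), 2),
          round(hz_to_erb_rate(f), 2), round(hz_to_mel(f), 1))
```

All three curves are roughly linear at low frequencies and logarithmic at high frequencies, which is exactly the nonlinear frequency mapping described above.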

3.3 Saturating nonlinearity


OHCs have been discovered to have a saturation property, yielding nonlinear cochlear responses. This can be measured by observing the transducer current of the outer hair cell as a function of stereocilia deflection. Figure 7 shows the result of such an observation.
The sigmoid (tanh-like) shape of the curve in Figure 7 accounts well for the nonlinear properties of the mechano-sensitive transducer current, as well as for the linear behavior in the proximity of the stereocilia resting position (y = 0). This implies that the undamping of the cochlear oscillations provided by the outer hair cells ceases to be effective outside a narrow range of stereocilia deflection. Let us assume that various tectorial membrane portions behave as a secondary system of oscillators affected by appreciable damping, with resonance frequencies close to the characteristic frequency all over the basilar membrane length.

Figure 8: Response of basilar membrane velocity as a function of sound pressure level [12]
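The saturating transducer behaviour can be caricatured with a tanh function, as suggested by the sigmoid shape above. This is an illustrative sketch: the scale constants are assumptions, not measured values.

```python
# A tanh sketch of the saturating OHC transducer function: nearly
# linear around the stereocilia resting position (y = 0), saturating
# for larger deflections. Scale constants are illustrative.
import math

I_MAX = 1.0  # saturation current (arbitrary units)
Y0 = 0.1     # deflection scale (arbitrary units)

def transducer_current(y):
    """Mechano-electrical transducer current for deflection y."""
    return I_MAX * math.tanh(y / Y0)

# Near rest, the response follows the linear slope I_MAX / Y0 ...
small = 0.01
print(transducer_current(small), (I_MAX / Y0) * small)

# ... while far from rest it saturates toward I_MAX
print(transducer_current(1.0))
```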

3.4 High-sound-level compressions


High-sound-level compression is one of the nonlinearities of the cochlea. Sound signals at low intensities are amplified in a frequency-selective fashion at certain cochlear positions (the cochlea exhibits large gain), while high-level sound signals are hardly amplified (the cochlea exhibits small gain) [8]. Figure 8 shows the high-sound-level compression of the (active) OHC. When the input pressure is less than 80 dB, the OHC provides large gain; a small gain is given in the range of 80-100 dB, and almost no gain is given above 100 dB input pressure. This nonlinearity is also described by the input-output gain, or input-output curve. Besides demonstrating the nonlinearity of the cochlea, this evidence also explains why elderly people cannot perceive low-level sounds, as the ability to provide gain decreases with age.
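The level-dependent gain described above can be caricatured with a broken-stick input-output function. This is a sketch under assumed breakpoints, not the model of [8]: the knee, gain, and compression slope are illustrative values.

```python
# A broken-stick sketch of the cochlear input-output curve: full gain
# at low levels, compressive growth at mid levels, and effectively no
# gain at high levels. Parameter values are illustrative assumptions.
def bm_output_db(input_db, gain_db=40.0, knee=30.0, slope=0.3):
    """Basilar-membrane response level (dB) for a given input level (dB)."""
    if input_db <= knee:
        return input_db + gain_db  # linear region: full cochlear gain
    # compressive region: output grows at a fraction of the input rate
    compressed = knee + gain_db + slope * (input_db - knee)
    return max(compressed, input_db)  # effective gain never drops below 0 dB

for level in (20, 40, 60, 80, 100):
    out = bm_output_db(level)
    print(level, out, out - level)  # input, output, effective gain in dB
```

The printed effective gain shrinks from 40 dB at low input levels toward 0 dB at high levels, mirroring the compression visible in Figure 8.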

3.5 Two-tone suppression


When a pure-tone signal is applied, the cochlear fluid oscillates in phase with the stimulus, causing the whole basilar membrane to vibrate at the stimulating frequency. However, because the mechanical properties of the membrane vary along its length, there will be one place where the resonant frequency of the membrane matches the stimulus frequency, and this place will show the maximum amount of vibration. Thus each frequency can be mapped to a single place of maximum vibration; this is called a tonotopic, or frequency-to-place, mapping. What about two tones? To understand how the properties of such complex stimuli are coded in the auditory system, it is necessary to investigate the neural responses to complex stimuli directly.

Figure 9: Two-tone suppression in auditory nerve [13]

Figure 9 shows the phenomenon of two-tone suppression. The threshold tuning curve (dark red) plots the responses of one auditory nerve (AN) fiber with a characteristic frequency of 8000 Hz. Whenever a second tone is played at the frequencies and levels within the light-red areas on either side, the response of this AN fiber to an 8000-Hz tone is reduced (suppressed) by about 20% [13].

3.6 Combination tone


Although the beat phenomenon suggests that the human ear is linear, since it follows the superposition principle, beats only appear for frequency differences under about 15 Hz.
Consider two tones with frequencies $f_1$ and $f_2$ listened to simultaneously:
\[
\cos(2\pi f_1 t) + \cos(2\pi f_2 t) = 2 \cos\!\left(2\pi \frac{f_1 + f_2}{2} t\right) \cos\!\left(2\pi \frac{f_1 - f_2}{2} t\right)
\]
If the difference between $f_1$ and $f_2$ is very small (under 15 Hz) [16], the $(f_1 - f_2)/2$ component is too low to be perceived as an audible tone or pitch. The sum is then perceived by our ears as amplitude modulation (AM): the frequency of the fine-structure (carrier) wave is the average $(f_1 + f_2)/2$, while the frequency of the slowly varying envelope is $(f_1 - f_2)/2$. Because there are two beats in every period of the envelope, the beat frequency is
\[
f_\mathrm{beat} = |f_1 - f_2|
\]
Figure 10 shows the beats, or amplitude modulation, of a 200 Hz tone with a 210 Hz tone.
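The trigonometric identity behind the beats can be checked numerically for the 200/210 Hz pair of Figure 10; a small pure-Python sketch:

```python
# Numerical check of the beat identity: the sum of a 210 Hz and a
# 200 Hz tone equals a 205 Hz carrier multiplied by a 5 Hz envelope,
# heard as beats at f1 - f2 = 10 Hz.
import math

f1, f2 = 210.0, 200.0
carrier = (f1 + f2) / 2    # 205 Hz fine structure
envelope = (f1 - f2) / 2   # 5 Hz envelope; two beats per period

for t in (0.01, 0.1, 0.25):  # a few sample instants (seconds)
    lhs = math.cos(2 * math.pi * f1 * t) + math.cos(2 * math.pi * f2 * t)
    rhs = 2 * math.cos(2 * math.pi * carrier * t) * \
          math.cos(2 * math.pi * envelope * t)
    assert abs(lhs - rhs) < 1e-9  # identity holds at every instant

print("beat frequency:", f1 - f2, "Hz")
```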
If the frequency difference between the two tones is higher than 50 Hz, combination tones occur. Instead of beats, new tones appear at
\[
f_2 - f_1, \quad 2f_1 - f_2, \quad 3f_1 - 2f_2, \ \ldots, \quad f_1 - k(f_2 - f_1), \quad k = 1, 2, 3, \ldots
\]

Figure 10: Amplitude modulation of two frequencies, 200 Hz (top) and 210 Hz (middle), resulting in a beating waveform (bottom)

Two prominent difference tones are the quadratic difference tone f2 - f1 (sometimes referred to simply as "the difference tone") and the cubic difference tone 2f1 - f2. If f1 remains constant (for example at 1000 Hz) while f2 increases, the quadratic difference tone moves upward with f2, while the cubic difference tone moves in the opposite direction. At low levels (approximately 50 dB), they can be heard from about f2/f1 = 1.2 to 1.4, but at 80 dB they are audible over nearly an entire octave, f2/f1 = 1 to 2. In this case, the quadratic and cubic difference tones cross over at f2/f1 = 1.5. Over part of the frequency range, the higher-order difference tone 3f1 - 2f2 may also be audible.
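The opposite motion of the two difference tones, and their crossover at f2/f1 = 1.5, follows directly from the formulas; a small sketch for a fixed f1 of 1000 Hz:

```python
# Behaviour of the two prominent difference tones for f1 = 1000 Hz:
# as f2 rises, the quadratic tone f2 - f1 moves up while the cubic
# tone 2*f1 - f2 moves down, crossing at f2/f1 = 1.5.
f1 = 1000.0
for ratio in (1.2, 1.3, 1.5, 1.7):
    f2 = ratio * f1
    quadratic = f2 - f1   # "the difference tone"
    cubic = 2 * f1 - f2   # cubic difference tone
    print(ratio, quadratic, cubic)

# At f2/f1 = 1.5 both difference tones land on the same 500 Hz
assert (1.5 * f1 - f1) == (2 * f1 - 1.5 * f1) == 500.0
```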
A demonstration of these combination tones can be performed simply by listening to two different tones through stereo speakers, with each channel delivering a different frequency (to eliminate distortion/nonlinearity of the audio system). For a headphone experiment, the sum of the two tones can be presented on both channels. By performing this experiment, the distortion products f2 - f1, 2f1 - f2, etc. can be heard.
Due to the nonlinearity of the inner ear, distortion occurs whenever a sum of difference tones is heard. However, experiments report that listeners additionally heard beats when the two tones were delivered separately, one to each ear, through headphones. This is likely a neural phenomenon, called the binaural beat, rather than the physics of the inner ear.
A binaural beat is an auditory illusion perceived when two different pure-tone sine
waves, both with frequencies lower than 1500 Hz and with less than a 40 Hz difference between them, are presented to a listener dichotically (one through each ear).
Binaural-beat perception originates in the inferior colliculus of the midbrain and the
superior olivary complex of the brainstem, where auditory signals from each ear are
integrated and precipitate electrical impulses along neural pathways through the reticular
formation up the midbrain to the thalamus, auditory cortex, and other cortical regions.

3.7 Otoacoustic Emission


When a stimulus is presented to normal ears, additional products may be observed in the ear canal. These are known as otoacoustic emissions: our ears not only process input sounds but also generate additional components. The following nonlinear phenomena are examples of such generation.

3.7.1 Cochlear echoes


After a stimulus presented to the ear has subsided, another wide-band signal can be observed in the ear canal. This was first observed in the guinea pig. This transiently evoked otoacoustic emission (TEOAE) is among the least understood phenomena in hearing science [15]. The suggested explanation is that the TEOAE reflects a weighted, summed response of all active generators along the cochlea, and hence much of the response for any frequency component must be generated at the characteristic place for that frequency.

3.7.2 Single-tone emissions


A single-tone emission (STE) is observed when a single-frequency tone burst is presented to the ear and the same frequency is then generated within the cochlea. The STE is demonstrable only by virtue of nonlinearity and the suppression of a second tone [15]. The current interpretation is that the suppressor suppresses the nonlinear component from within the cochlea, and with some arithmetic manipulation the nonlinear component can be revealed by itself.

3.7.3 Intermodulation products


While the distortion of a combination tone is perceived by our hearing system when listening to two stimulus frequencies, it is also possible to measure in the external ear canal a third component at the cubic intermodulation frequency 2f1 - f2. The interpretation of this emission is that the cubic intermodulation tone is generated at the characteristic place of the f2 frequency, due to nonlinear modulation of its BM response at the frequency of f1 [15].

3.7.4 Spontaneous emissions


Spontaneous otoacoustic emissions can be observed without any sound stimulus. They are interpreted as evidence of an active process within the cochlea, limited from growing indefinitely in amplitude by an amplitude-limiting nonlinearity.

References
[1] http://www.madsci.org/posts/archives/aug99/934627537.Ev.r.html

[2] https://www.open.edu/openlearn/science-maths-technology/science/
biology/hearing/content-section-2.1

[3] Neuroscience. 2nd edition. Purves D, Augustine GJ, Fitzpatrick D, et al., editors.
Sunderland (MA): Sinauer Associates; 2001.

[4] https://www.open.edu/openlearn/science-maths-technology/science/
biology/hearing/content-section-2.1

[5] Reichenbach, T., Stefanovic, A., Nin, F., & Hudspeth, A. J. (2012). Waves on Reiss-
ner’s membrane: a mechanism for the propagation of otoacoustic emissions from the
cochlea. Cell reports, 1(4), 374–84.

[6] Meaud, Julien; Grosh, Karl (2010). "The effect of tectorial membrane and basilar membrane longitudinal coupling in cochlear mechanics". The Journal of the Acoustical Society of America, 127(3): 1411. doi:10.1121/1.3290995. ISSN 0001-4966. PMC 2856508.

[7] http://hyperphysics.phy-astr.gsu.edu/hbase/Sound/oss.html#c1

[8] Bo Wen (2006). "Modeling the Nonlinear Active Cochlea: Mathematics and Analog VLSI", PhD dissertation, University of Pennsylvania.

[9] Julius O. Smith III and Jonathan S. Abel (1999). "Bark and ERB Bilinear Transforms", IEEE Transactions on Speech and Audio Processing.

[10] E. Lopez-Poveda and R. Meddis (2001). A human nonlinear cochlear filterbank. J. Acoust. Soc. Am., 110:3107-3118.

[11] Lyon, R. F., Katsiamis, A. G., and Drakakis, E. M. (2010, May). History and future
of auditory filter models. In Circuits and Systems (ISCAS), Proceedings of 2010 IEEE
International Symposium, pp. 3809-3812.

[12] R. S. Chadwick (1998). Compression, gain, and nonlinear distortion in an active cochlear model with subpartitions. Proceedings of the National Academy of Sciences, 95(25): 14594-14599.

[13] Jeremy M. Wolfe et al. (2017). Sensation and Perception. Oxford University Press.

[14] Moore, B. C. (2012). An introduction to the psychology of hearing. Brill.

[15] Moore, B. C. (Ed.). (1995). Hearing. Academic Press.

[16] https://ccrma.stanford.edu/CCRMA/Courses/152/combination_tones.html
