Chapter 15: Cochlear implants

Chapter outline
Auditory physiology
  Hearing
  Sensorineural hearing loss
Speech processing
Clinical need
Historic devices
  Early implantable devices
  Enabling technology: continuous interleaved sampling speech processing strategy
System description and diagram
  Processor
  Implant
  Electrode array
  Programmer
  Improvements in sound recognition
Key features from engineering standards
  Output amplitude
  Protection from harm caused by magnetically-induced force
  Effect of tensile forces
  Effect of direct impact
  Effect of atmospheric pressure
Summary
Exercises
References
In this chapter, we continue our discussion of neurostimulators and delve into the
relevant physiology, history, and key features of cochlear implant (CI) systems. A
cochlear implant system is an electrical stimulator that discharges electrical current
to spiral ganglia, giving rise to action potentials in the auditory nerve fibers. It is
also a medical instrument that can measure intracochlear evoked potentials, electrical
field potentials generated by the electrodes, and electrode impedance. It consists of
three main components: an external sound processor, an implanted stimulator, and an
electrode array.
FIGURE 15.1
MED-EL Cochlear Implant components: RONDO 2 Single-Unit Processor, SONNET
Behind-The-Ear Processor, and SYNCHRONY Implant. Only one processor is paired with
the implant during patient use.
(MED-EL, Innsbruck, Austria).
Auditory physiology
An organ, such as the ear or eye, may act as a sensor to measure external stimuli. The
output of this sensor is transmitted through a sensory nerve to the brain for further
processing. For example, the 12 paired cranial nerves innervate the head, neck, and
trunk. The nerves are named and numbered from anterior to posterior locations.
Cranial nerve VIII is the vestibulocochlear, or auditory, nerve (Fig. 15.2).
We begin this chapter by discussing how the ear (Fig. 15.3) translates sound into
signals that are transmitted by the auditory nerve to the brain for hearing perception.
Hearing
Sound consists of the mechanical vibrations that are transmitted as longitudinal pres-
sure waves in a medium such as air. Sound is measured in decibels, often in sound
pressure level (dB SPL), comparing a measured pressure level to the minimum pressure
fluctuation detected by the ear. Assuming the threshold of audibility, P0, to be
2 × 10−5 Pa at 1000 Hz, sound is measured as 10 times the log of the squared ratio of
the measured pressure, P, to P0:

    SPL = 10 log10[(P/P0)^2] dB = 20 log10(P/P0) dB
FIGURE 15.2
The cranial nerves.
Adapted from A.D.A.M., Atlanta, Georgia.
FIGURE 15.3
The ear.
Reproduced by permission from Guyton and Hall (2006).
Using this equation, the level of the threshold of audibility is about 0–20 dB SPL for
young adults with normal hearing, and the level of a hair dryer is about 80–90 dB SPL.
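The dB SPL calculation can be checked numerically. The sketch below is a minimal illustration of the formula, with the 0 dB reference at P0 = 2 × 10−5 Pa; the example pressures are illustrative and are not taken from the text.

```python
import math

P0 = 2e-5  # reference pressure in Pa (threshold of audibility at 1 kHz)

def spl_db(p_pa: float) -> float:
    """Sound pressure level: 10*log10((P/P0)^2), i.e., 20*log10(P/P0)."""
    return 20.0 * math.log10(p_pa / P0)

print(spl_db(2e-5))  # the reference pressure itself is 0 dB SPL
print(spl_db(2e-4))  # a tenfold pressure increase adds 20 dB
```

Note that a pressure of 1 Pa works out to about 94 dB SPL, a value often used to calibrate microphones.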
Sound pressure waves enter the ear through the cartilage of the auricle and are
directed through the auditory canal to the tympanic membrane. The ridges and hol-
lows of the auricle create additional cavities with resonances that are capable of in-
fluencing sounds with much higher frequencies. These two outer ear structures act as
a natural filter to amplify frequencies critical to human speech by as much as 20 dB
(May & Niparko, 2009).
In the middle ear, the tympanic membrane receives the pressure waves and trans-
mits them to the bones that are collectively known as the ossicles. The ossicles are
the malleus, incus, and stapes. The lateral end of the handle of the malleus is attached
to the center of the tympanic membrane. As the pressure waves move through the
ossicles, they decrease in amplitude but increase in force. The mechanical movement
of the ossicles, as well as the area difference between the tympanic membrane and the
stapes footplate, provides impedance matching. These combined actions overcome
the impedance mismatch between air and the fluid-filled cochlea. Simultaneously,
pressure waves travel through the air to the oval window, but the sensitivity for hear-
ing is about 60 dB less than for ossicular transmission.
Next, the pressure waves begin to move through the incompressible fluid of the
cochlea, or inner ear. The cochlea consists of three adjacent tubes: the scala ves-
tibuli, the scala media, and the scala tympani (Fig. 15.4). The cochlea is coiled upon
itself for about 2.5 turns. The scala tympani and scala media are separated from each
other by the osseous spiral ligament, on top of which lies the basilar membrane. The
basilar membrane supports the organ of Corti, which contains sensory hair cells and
supporting cells.
When pressure waves move the footplate of the stapes inward against the fluid
of the cochlea, the basilar membrane is set in motion (Fig. 15.5). Different pressure
wave frequencies have different patterns of transmission (Fig. 15.6). When a pres-
sure wave reaches the portion of the basilar membrane that has a natural resonant
frequency equal to the input sound frequency, the basilar membrane vibrates at its
maximum, such that the wave energy dissipates and dies. Thus, sound frequencies
from 20 Hz to 20 kHz become mapped by their physical distance of travel, or dis-
tance from the stapes (Fig. 15.7).
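This frequency-to-place mapping is often summarized by the Greenwood function, an empirical fit that is not given in this chapter but is consistent with the 20 Hz to 20 kHz range described above. In the sketch below, x is the fractional distance along the basilar membrane measured from the apex, and the constants are Greenwood's published human values.

```python
def greenwood_hz(x: float, A: float = 165.4, a: float = 2.1, K: float = 0.88) -> float:
    """Characteristic frequency (Hz) at fractional position x (0 = apex, 1 = base)."""
    return A * (10.0 ** (a * x) - K)

print(round(greenwood_hz(0.0)))  # about 20 Hz at the apex
print(round(greenwood_hz(1.0)))  # roughly 20,700 Hz at the base
```

The exponential form captures the observation that each octave of frequency occupies roughly equal distance along the basilar membrane.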
As the basilar membrane vibrates, hair cells within the organ of Corti also move.
There are about 15,000 hair cells at birth, a number that declines with age (May and
Niparko, 2009). The mechanosensitive structure of each hair cell, the hair bundle, is
composed of tens to hundreds of actin-based stereocilia. Hair bundle deflections toward the tallest
stereocilia open transduction channels (Fig. 15.8). Potassium ions pour through the
FIGURE 15.4
Section through one turn of the cochlea.
Drawn by Sylvia Colard Keene. From Fawcett (1986). Figure reproduced by permission from Guyton and Hall
(2006).
FIGURE 15.5
Movement of fluid in the cochlea after forward thrust of the stapes.
Reproduced by permission from Guyton and Hall (2006).
channels, depolarizing the hair bundles and causing the release of the neurotransmit-
ter glutamate. Spiral ganglion neurons innervating the hair bundles respond by firing
action potentials. The spiral ganglion cells give rise to the auditory nerve fibers that
constitute the auditory nerve, which transmits these action potentials to the brain
(Vollrath, Kwan, & Corey, 2007; May and Niparko, 2009).
FIGURE 15.6
Traveling waves along the basilar membrane for high-frequency (A) medium-frequency
(B) and low-frequency (C) waves.
Reproduced by permission from Guyton and Hall (2006).
FIGURE 15.7
Amplitude patterns for sounds of frequencies of 200–8000 Hz, showing the points of
maximum amplitude on the basilar membrane for the different frequencies.
Reproduced by permission from Guyton and Hall (2006).
FIGURE 15.8
Hair cell anatomy and physiology (A) Hair bundle in a bullfrog saccule, comprising ∼60
stereocilia (scale bar = 1 µm). (B) Two stereocilia and the tip link extending between them
(scale bar = 0.1 µm). (C) Deflection of hair bundle. Stereocilia tips remain in contact
but shear with deflection. (D) Schematic model of transduction. Shearing with positive
deflection increases tension in tip links, which pull open a transduction channel at each
end. Myosin motors slip or climb to restore resting tension. An elastic gating spring likely
exists between a channel and the actin cytoskeleton.
Reproduced by permission from Vollrath et al. (2007).
Sensorineural hearing loss

Sensorineural hearing loss arises from many causes, among them infectious disease.
Very often the cause is unknown. Aging generally causes hair cell
loss in the organ of Corti and degeneration of some auditory nerve fibers. Sustained
noise or a single exposure acoustic trauma may cause permanent hair cell loss. Oto-
toxic medications, such as some antibiotics, also cause hair cell loss. Children who
contract mumps or measles might lose hair cells and possibly spiral ganglia and
nerve fibers (Almond & Brown, 2009).
Speech processing
If sufficient spiral ganglia and the auditory nerve are intact, hearing may be aug-
mented by providing electrical stimulation to the cochlea. Exactly reproducing
the original auditory processing of speech is impossible. However, brain plasticity
enables certain patients to decode new electrical stimulation patterns over time as
speech when frequency-place coding (the location of basilar membrane vibration)
and temporal coding (synchronization or phase-locking of ganglia to the period of
a pressure wave) are at least grossly preserved. Adult patients who had language
skills before deafness can associate new stimulation patterns with their memories
of sounded speech. Pediatric patients younger than the age of 2 years can learn spo-
ken language at a normal or near-normal rate because their brains quickly adapt to
electrical stimulation. But older children who did not have aural stimulation during
their early years have a much more difficult time acquiring speech and oral language
skills. What would normally be auditory areas of their brains have been reallocated
for different tasks before cochlear implantation.
In many systems, cochlear implant processing is based on the continuous inter-
leaved sampling (CIS) processing strategy. For this processing strategy, acoustic
sound is first filtered with a preemphasis filter that attenuates frequencies below 1.2 kHz. The
prefiltered speech is bandpass filtered into 4–22 channels. Each frequency range
is then rectified and lowpass filtered (at a relatively high cutoff frequency, such as
200–400 Hz), to obtain a speech envelope. Each envelope signal is compressed
with a nonlinear mapping function in order to map the wide dynamic range of
sound into the narrow dynamic range of electrically evoked hearing. The extracted
envelope is used to modulate a biphasic pulse train. Each biphasic pulse channel
output is directed to a single intracochlear electrode, with low- to high-frequency
channels assigned to apical-to-basal electrodes. This mimics the order of frequen-
cy mapping in the normal cochlea. The pulse trains are interleaved in time, so that
pulses across channels are not simultaneous and channel interaction is reduced
(Fig. 15.9).
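The per-channel steps of CIS (rectification, lowpass envelope detection, and nonlinear compression into the electrical dynamic range) can be sketched in pure Python. The filter here is a crude one-pole lowpass, and the T and C current levels are illustrative values, not drawn from any real device.

```python
import math

def envelope(samples, fs_hz, fc_hz=250.0):
    """Full-wave rectify, then smooth with a one-pole lowpass at fc_hz."""
    alpha = 1.0 - math.exp(-2.0 * math.pi * fc_hz / fs_hz)
    out, y = [], 0.0
    for s in samples:
        y += alpha * (abs(s) - y)  # exponential smoothing of |s|
        out.append(y)
    return out

def compress(e, t_ua, c_ua, e_min=1e-4, e_max=1.0):
    """Logarithmically map envelope amplitude into the narrow [T, C] current range."""
    e = min(max(e, e_min), e_max)
    frac = math.log(e / e_min) / math.log(e_max / e_min)
    return t_ua + frac * (c_ua - t_ua)

# One channel: a 1 kHz tone standing in for an already bandpass-filtered band
fs = 16000
tone = [math.sin(2.0 * math.pi * 1000.0 * n / fs) for n in range(1600)]
env = envelope(tone, fs)
pulse_amp_ua = compress(env[-1], t_ua=100.0, c_ua=800.0)  # microamps
```

In a full implementation, the compressed envelope value would set the amplitude of the channel's biphasic pulses at each interleaved pulse slot.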
Fig. 15.9 simplifies how each envelope signal was originally mapped to an elec-
trode channel. A single pulse train was sent from the transmitter coil to the receiver
coil. Four channels of envelope data were interleaved onto this pulse train, with each
channel using pulse amplitude modulation (PAM). In PAM, the amplitudes of regu-
larly spaced pulses are proportional to the corresponding sample values of a continu-
ous envelope signal, e(t). For each channel, the PAM signal, s(t), is mathematically
expressed as

    s(t) = Σ_n e(nTs) p(t − nTs),

where p(t) is the basic pulse shape and Ts is the pulse repetition interval.
FIGURE 15.9
The continuous interleaved sampling strategy illustrated for only four channels.
(Preemphasis filtering and envelope compression are not illustrated.)
Reproduced by permission from Dorman and Wilson (2004).
At the receiver end, the pulse train was demultiplexed to obtain separate channels of
envelope data.
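The interleaving and demultiplexing of the four PAM channels can be sketched as a simple round-robin time multiplex; the sample values below are placeholders, not real envelope data.

```python
def interleave(channels):
    """Time-multiplex per-channel PAM samples onto one pulse train."""
    n = len(channels[0])
    assert all(len(ch) == n for ch in channels)
    train = []
    for i in range(n):            # one frame per sample instant
        for ch in channels:       # one pulse slot per channel
            train.append(ch[i])
    return train

def demultiplex(train, n_channels):
    """Receiver side: recover each channel by striding through the train."""
    return [train[k::n_channels] for k in range(n_channels)]

chans = [[1, 2], [3, 4], [5, 6], [7, 8]]   # 4 channels, 2 samples each
train = interleave(chans)                  # [1, 3, 5, 7, 2, 4, 6, 8]
assert demultiplex(train, 4) == chans      # lossless round trip
```

Because slots never overlap in time, pulses on different electrodes are never simultaneous, which is the channel-interaction benefit described above.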
Various processing strategies preserve formants, which are groups of overtone
pitches or vocal tract resonances, in spoken words. No word has a unique signature
of specific frequencies. Rather, a formant pattern, which encodes the relationship
between two dominant magnitude peaks, allows speech recognition to be flexible.
This flexibility enables normal speech variations, such as speech patterns from
people in northern versus southern US states, to be accurately recognized. When
normal speech is bandpass filtered and transformed into four amplitude-modulated
sine waves, normal-hearing listeners can understand about 90% of words in simple
sentences (Fig. 15.10) (Dorman & Wilson, 2004; Wilson & Dorman, 2009).
Clinical need
According to the US Census Bureau’s Survey of Income and Program Participation,
about 8 million children (aged 6–17 years) and adults in the United States in 2002
were deaf or hard of hearing. About 1 million of these individuals are functionally
deaf (Mitchell, 2006).
FIGURE 15.10
Human speech is composed of multiple frequencies, as shown for the sentence, “Where
were you last year, Sam?” The waveform for the sentence is shown at top (black trace),
along with the speech envelope (dashed line). In the middle panel, the same sentence
is plotted according to its component frequencies. The energy at each point in the
frequency spectrum is indicated on a scale from low (light gray) to high (dark gray). For
any one word, a formant is the relationship between the dark gray areas. The bottom panel
shows the same audio signal after being processed to remove all information except the
envelopes of four contiguous bands from 300 Hz to 5 kHz, with center frequencies of
0.5, 1, 2, and 4 kHz. Remarkably, this simplified signal is almost as intelligible as actual
speech.
Reproduced by permission from Dorman and Wilson (2004).
In the United States, newborn hearing screening is now routinely conducted in
hospitals and birthing centers, typically using otoacoustic emissions and/or auditory
brainstem response testing before the infant leaves
the hospital. Neither test requires a behavioral response from the newborn. Both tests
can be quickly and easily administered by trained personnel. When the infant does
not pass the screening in the hospital, then follow-up testing is done later on using
additional diagnostic tests.
After cochlear implantation, adult and pediatric patients are tested for recognition
of spoken speech, often using single words and sentences. Adult CI users score, on
average, about 50%–60% on the most difficult monosyllabic word recognition test;
however, variability is high, with patients performing both better and worse. On aver-
age, adult patients with short durations of deafness prior to implantation fare better
than patients with long periods of deafness. Other factors, such as the cause of deaf-
ness and central auditory processing, can also contribute to performance.
For pediatric cochlear implant users, the effect of early auditory deprivation can
be demonstrated in research cortical evoked potentials (CEP). The CEP can be re-
corded as an electroencephalogram that is analyzed in response to a spoken syllable,
such as “ba.” In congenitally deaf children implanted (n = 12) before the age of 4
years, the CEP latency moved into the normal latency range within 5 months. These
children had heard nothing for up to 3.5 years but showed age-appropriate timing of
cortical activity in response to speech. However, in congenitally deaf children im-
planted (n = 8) after the age of 4 years, the CEP latency did not move into the normal
latency range even after 1.5 years of CI use (Fig. 15.11) (Dorman and Wilson, 2004;
Wilson and Dorman, 2009).
Historic devices
Clinical stimulation of the auditory system can be traced to Alessandro Volta’s work
in the late 1700s. Volta, who invented the voltaic battery from dissimilar metals,
believed that the physiologic effects of nerve stimulation are affected by the type of
nerve stimulated, but not by the type of stimulus used. He stimulated his own tongue
to elicit taste, his eye to elicit light, and his ears to elicit sound. Specifically, Volta
produced noise by applying two probes connected to a battery of 30–40 silver-zinc
elements into both of his ears (Piccolino, 2000). He described this sound as “a boom
within the head,” followed by a sound resembling thick soup boiling (Volta, 1800).
FIGURE 15.11
CEPs in congenitally deaf children implanted before (n = 12 early implants) and after
(n = 8 late implants) 4 years of age over time. The area between the two curves is the
normal range as a function of age.
Data courtesy of Anu Sharma. Figure reproduced by permission from Dorman and Wilson (2004).
FIGURE 15.12
The House 3M cochlear implant consisted of a single-channel speech processor,
transmitter (oval piece), microphone (with tie pin–type fastener), and implant (not shown).
The speech processor was worn on the body. The transmitter coil was placed over the
implant behind the ear. The microphone was attached to clothing.
Reproduced by permission from the Hearing Aid Museum, Stewartstown, Pennsylvania.
FIGURE 15.13
The continuous interleaved sampling strategy illustrated for n channels. Note that,
compared to Fig. 15.9, pre-emphasis filtering and nonlinear envelope compression are
included in this system diagram.
Reproduced by permission from Wilson (2015).
System description and diagram

Processor
As shown in Fig. 15.15, sound (in the range of 70–8500 Hz) is converted to an elec-
trical signal by a microphone and processed by the digital signal processor (DSP)
module, according to a preselected processing strategy, such as CIS. The extracted
FIGURE 15.14
The implant and external processor components of a cochlear implant system.
Adapted from Papsin and Gordon (2007).
signal features are encoded for transmission, amplified, and transmitted over radio-
frequency (12 MHz, 0.6 MB/s) by the transceiver coil to the implant. The transceiver
also receives electrode voltage data from the implant, which are sent to the processor
module.
The processor may either be a single unit or an older behind-the-ear unit
(Fig. 15.1). The single unit is held in place with a magnet above the skin, over the
implant, and discreetly beneath the hair. The single unit processor has an attachment
clip (not shown), so that it remains secured to hair or clothing if it comes off the skin.
It is powered by three zinc-air batteries, for up to 75 h of battery life. The magnet is
removable.
When the user switches on the processor, the processor module provides feed-
back by causing two red LEDs to flash on and off. The number of flashes corresponds
to the listening environment program that is currently selected. Four programs are
stored in the processor module. Including four different programs allows users to
select a program for a particular listening situation, such as listening to speech in a
FIGURE 15.15
Cochlear implant system diagram.
noisy background. Adults who wish to select another program, change microphone
sensitivity, or change program volume have access to a remote control (not shown
in Fig. 15.15) that communicates with the processor by radiofrequency. The proces-
sor connects through a connection cable to a mini battery pack, which enables con-
nection to the headphone port of a phone or other audio devices through a 3-pin to
3.5 mm audio jack cable. Connection to a hearing loop in a public venue such as a
theater is made through a remote control selection of “T,” so that only sound from
the telecoil (obtained through magnetic induction), but not microphone, is processed.
Implant
The transceiver coil in the implant receives the encoded signal, as well as about
20–40 mW of power. The power is required by an ASIC that provides the decoder,
waveform circuit, and sensing circuit functions. The received data parameters are de-
coded and used to construct the proper biphasic stimulation waveforms for selected
channels. This circuit includes 24 current sources for 12 pairs of electrodes. Each
channel provides about 0–1.2 mA of current, with a total stimulation rate of 51 kHz.
A channel cannot be stimulated by DC current because a capacitor is serially con-
nected to block its delivery.
By providing about 10 mA current between any two electrodes, the impedance
can be determined from the measured voltage. Evoked compound action poten-
tials (ECAPs), which are discussed in the Electrode Array subsection, can also be
measured.
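Given a known test current and a measured voltage, the electrode impedance follows directly from Ohm's law; the voltage and current values below are illustrative only, not device specifications.

```python
def impedance_ohms(measured_v: float, test_current_a: float) -> float:
    """Ohm's law for the electrode pair under test: Z = V / I."""
    return measured_v / test_current_a

# e.g., 5 V measured while driving 1 mA implies a 5 kOhm path
print(impedance_ohms(5.0, 1e-3))
```

A rising impedance on one channel over time can flag fibrous tissue growth or a failing contact, which is why implants measure it routinely.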
The Synchrony implant is hermetically sealed in titanium housing; the reference
electrode and active electrode array are part of the implant. The transceiver magnet
is removable. The implant is rated MR conditional for MRI scanner field strengths of
0.2 T, 1.0 T, 1.5 T, and 3.0 T.
FIGURE 15.16
MED-EL electrode arrays of different lengths.
Reproduced by permission from Hochmair, et al. (2015).
Electrode array
Stimulation current is transmitted to the active electrode array. The electrode array
is composed of single-strand platinum iridium wire and 24 platinum electrodes, en-
cased in medical grade silicone (Fig. 15.16). The electrodes are arranged as paired
interconnected surfaces, which result in 12 monopolar stimulation channels. For the
standard electrode array, the interelectrode spacing is 2.4 mm, across an active dis-
tance of 26.4 mm. This standard electrode array may be inserted up to 31 mm into
the cochlea. Other electrode array lengths of 28, 24, 20, and 19 mm are available.
Typically, monopolar, rather than bipolar, stimulation is used because monopolar
stimulation requires less current for producing auditory percepts, without a degrada-
tion in performance. Further, differences in the currents required to produce equally
loud percepts across individual electrodes are substantially lower with monopolar
than with bipolar stimulation, which can simplify speech processor fitting for pa-
tients (Wilson and Dorman, 2009; Zeng, Rebscher, Harrison, Sun, & Feng, 2008;
Med-El, 2010).
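The stated geometry is internally consistent: 12 channel centers at 2.4 mm spacing span 11 gaps, giving the 26.4 mm active distance. A quick arithmetic check:

```python
spacing_mm = 2.4     # interelectrode spacing of the standard array
n_channels = 12      # paired contacts give 12 stimulation channels

active_mm = spacing_mm * (n_channels - 1)   # 11 gaps between 12 centers
positions_mm = [k * spacing_mm for k in range(n_channels)]

print(round(active_mm, 1))   # 26.4, matching the stated active distance
```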
FIGURE 15.17
ECAP response acquired with increasing stimulation level, for electrode 9 of patient 43 in a
specific study. The unit nC is nanoCoulomb.
Reproduced by permission from Alvarez et al. (2010).
Programmer
Approximately 4 weeks after successful implant surgery, a patient receives the exter-
nal processor. An audiologist uses fitting software and an interface box to connect a
PC to the processor. Each channel is tested to ensure that it is electrically functional
(neither a short nor an open circuit) and does not cause unpleasant nonauditory
sensations, such as dizziness or facial nerve stimulation. For each viable channel,
the threshold, T, and most comfortable loudness, C, levels are determined. The dif-
ference between these levels is the dynamic range, which is a function of process-
ing strategy and electrode configuration. The C levels are also balanced across the
electrode array. For pediatric patients without previous sound exposure, ECAP levels
provide additional feedback for the fitting procedure, although their correlation with
thresholds and comfort levels is not very strong. The final fitting map, or program, is
downloaded to the external processor (Zeng et al., 2008). Up to four programs may
be stored in the external processor, and accessed with the remote control.
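The dynamic range between the T and C levels is often expressed in decibels of current; the sketch below uses illustrative current values, not clinical data.

```python
import math

def dynamic_range_db(t_level_ua: float, c_level_ua: float) -> float:
    """Electrical dynamic range between threshold (T) and comfort (C) currents."""
    return 20.0 * math.log10(c_level_ua / t_level_ua)

# An 8x current ratio corresponds to about 18 dB of electrical dynamic range
print(round(dynamic_range_db(100.0, 800.0), 1))  # 18.1
```

This narrow electrical range (tens of dB at most) is the reason the processor must compress the much wider acoustic dynamic range, as described under speech processing.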
Key features from engineering standards

Output amplitude
When each channel is programmed to its maximum amplitude and pulse width, the
amplitude is required to be accurate to within ±5%, “taking all errors into consider-
ation.” Testing is conducted with the implant at 37 ± 2°C. An unspecified input is ap-
plied to the microphone, with the processor and implant coils separated by 5 ± 5 mm.
Each output is connected to a load resistor of unspecified value. Measurements of
peak outputs are made using an oscilloscope.
The median of amplitudes and pulse widths and their ranges are recorded. No
specific number of test devices is specified (AAMI, 2017).
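The ±5% accuracy requirement can be captured as a simple pass/fail predicate; this is a sketch of the acceptance criterion only, not the standard's full test method.

```python
def within_tolerance(measured: float, programmed: float, tol: float = 0.05) -> bool:
    """True if the measured amplitude is within +/-5% of the programmed value."""
    return abs(measured - programmed) <= tol * abs(programmed)

assert within_tolerance(1.15, 1.20)       # about -4.2% error: passes
assert not within_tolerance(1.10, 1.20)   # about -8.3% error: fails
```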
Summary
A cochlear implant system is an electrical stimulator that discharges electrical cur-
rent to spiral ganglia, giving rise to action potentials in the auditory nerve fibers. It is
also a medical instrument that can measure intracochlear evoked potentials, electrical
field potentials generated by the electrodes, and electrode impedance.
Acknowledgment
The author thanks Dr. Claude Jolly for his detailed review of the first edition of this chapter.
Dr. Jolly was Director of Cochlear Implant Electrode Array Development for MED-EL.
Exercises
15.1 Using an audio acquisition system provided by your instructor, speak “Where
were you last year, Sam?” into a microphone, and create a spectrogram of
these words. Compare your spectrogram to those of two classmates. Plot all
three spectrograms. How many formants are present? How do the formants
differ among speakers?
15.2 Draw the system diagram for a Phonak Audeo M-312 hearing aid, which is
worn behind the ear. Include access to all accessories. Under what physiologic
conditions in the outer, middle, and inner ears is this hearing aid effective? Be
specific.
15.3 What electrode biocompatibility issues occur during and after cochlear
implantation? Be specific. Is there a combination product we previously
studied that could be useful in this device?
15.4 Can a patient hear music after implantation? Explain your reasoning.
15.5 What are the issues complicating the design of a completely implantable
cochlear implant (i.e., no external processor)? Be specific.
Read Doyle et al. (1963).
15.6 Where did electrode stimulation occur in these patients? What was the
refractory limitation in frequency? How was this overcome? How is the audio
added to the pulse train stimulation?
15.7 How were two electrodes permanently implanted? Describe how sound is
transmitted to this system.
Read Alvarez et al. (2010).
15.8 Describe the ECAP acquisition protocol in detail. Why was the T level not
considered?
15.9 What is the specific hypothesis being tested? Describe the model. Was the
hypothesis proven or disproven? Give reasons.
References
AAMI. (2017). ANSI/AAMI CI86:2017 Cochlear implant systems: Requirements for safety,
functional verification, labeling and reliability reporting. Arlington, VA: AAMI.
Almond, M., & Brown, D. J. (2009). The pathology and etiology of sensorineural hearing loss
and implications for cochlear implantation. In J. K. Niparko (Ed.), Cochlear implants:
Principles and practices (2nd ed.). Philadelphia: Lippincott Williams & Wilkins.
Alvarez, I., De La Torre, A., Sainz, M., Roldan, C., Schoesser, H., & Spitzer, P. (2010). Using
evoked compound action potentials to assess activation of electrodes and predict C-levels
in the Tempo + cochlear implant speech processor. Ear Hearing, 31, 134–145.
BSI. (2010). EN 45502-2-3:2010 Active implantable medical devices—Part 2–3: Particular
requirements for cochlear and auditory brainstem implant systems. Brussels: BSI.
CDC (2019). 2017 Summary of diagnostics among infants not passing hearing screening.
CDRH. (2001). Approval order P000025. Rockville, MD: FDA.
Clark, G. M., Pyman, B. C., & Bailey, Q. R. (1979). The surgery for multiple-electrode co-
chlear implantations. The Journal of Laryngology and Otology, 93(3), 215–223.
Dorman, M. F., & Wilson, B. S. (2004). The design and function of cochlear implants.
American Scientist, 92, 436–445.
Doyle, J. B., Jr., Doyle, J. H., Turnbull, F. M., Abbey, J., & House, L. (1963). Electrical stimu-
lation in eighth nerve deafness. A preliminary report. Bulletin of the Los Angeles Neuro-
logical Society, 28, 148–150.
Eisen, M. D. (2009). The history of cochlear implants. In J. K. Niparko (Ed.), Cochlear im-
plants: Principles and practices (2nd ed.). Philadelphia: Lippincott Williams & Wilkins.
Fawcett, D. W. (1986). Bloom and Fawcett: A textbook of histology (11th ed.). Philadelphia:
WB Saunders.
Guyton, A. C., & Hall, J. E. (2006). Textbook of medical physiology (11th ed.). Philadelphia:
Elsevier Saunders.
Hochmair-Desoyer, I. J., Hochmair, E. S., Fischer, R. E., & Burian, K. (1980). Cochlear pros-
theses in use: Recent speech comprehension results. Archives of Otorhinolaryngology,
229(2), 81–98.
Hochmair, I., Hochmair, E., Nopp, P., Waller, M., & Jolly, C. (2015). Deep electrode insertion
and sound coding in cochlear implants. Hearing Research, 322, 14–23.
Lasker Awards: Former Winners. (2013). Lasker Foundation.
May, B. J., & Niparko, J. K. (2009). Auditory physiology and perception. In J. K. Niparko
(Ed.), Cochlear implants: Principles and practices (2nd ed.). Philadelphia: Lippincott
Williams & Wilkins.
Med-El (2010). Hear. There and everywhere. MAESTRO cochlear implant system. Durham,
NC: MED-EL.
Mitchell, R. E. (2006). How many deaf people are there in the United States? Estimates from
the survey of income and program participation. Journal of Deaf Studies and Deaf Educa-
tion, 11, 112–119.
Papsin, B. C., & Gordon, K. A. (2007). Cochlear implants for children with severe-to-pro-
found hearing loss. New England Journal of Medicine, 357, 2380–2387.
Piccolino, M. (2000). The bicentennial of the Voltaic battery (1800-2000): The artificial elec-
tric organ. Trends in Neurosciences, 23, 147–151.
Vollrath, M. A., Kwan, K. Y., & Corey, D. P. (2007). The micromachinery of mechanotrans-
duction in hair cells. Annual Review of Neuroscience, 30, 339–365.
Volta, A. (1800). On the electricity excited by the mere contact of conducting substances of
different species. Philosophical Transactions of the Royal Society of London, 90, 403–431.
Wilson, B. S. (2015). Getting a decent (but sparse) signal to the brain for users of cochlear
implants. Hearing Research, 322, 24–38.
Wilson, B. S., & Dorman, M. F. (2009). The design of cochlear implants. In J. K. Niparko
(Ed.), Cochlear implants: Principles and practices (2nd ed.). Philadelphia: Lippincott
Williams & Wilkins.
Wilson, B. S., Finley, C. C., Lawson, D. T., Wolford, R. D., Eddington, D. K., & Rabinowitz,
W. M. (1991). Better speech recognition with cochlear implants. Nature, 352, 236–238.
Zeng, F. G., Rebscher, S., Harrison, W. V., Sun, X., & Feng, H. (2008). Cochlear implants:
System design, integration and evaluation. IEEE Transactions on Biomedical Engineer-
ing, 1, 115–142.