2 - S. Lavanya (M.Sc. Audiology) - Technology in Audiology - Biomedical Signal Acquisition and Processing Techniques, High Fidelity DRAFT



PRESENTATION NUMBER: 2

SUBJECT: Technology in Audiology

TOPIC: Biomedical signals and signal processing

NAME: S. Lavanya (M.Sc. Audiology)

SUBMITTED TO: Ms. Rajasuman (Assistant Professor)

DATE OF SUBMISSION: /02/2022

REMARKS:

INDEX

1. Biomedical signal and signal processing (Introduction)
2. Origin of biomedical signals
3. Classification of biomedical signals
4. Biomedical signal processing
   1. Signal acquisition
   2. Signal processing
   3. Signal analysis
5. Biomedical signal processing in hearing aid
6. References

BIOMEDICAL SIGNAL AND SIGNAL PROCESSING

● A signal is a quantity that conveys information about a physical system and/or its
functioning.
● The task of signal processing is to extract the useful information contained in the signal and
make it available in a desired form.

Living organisms are made up of many component systems — the human body, for example,
includes the nervous system, the cardiovascular system, and the musculoskeletal system, among
others. Each system is made up of several subsystems that carry on many physiological
processes.
Biomedical signals are those signals that are used primarily for extracting information
from a biological system under investigation.
Biomedical signals are observations of physiological activities of organisms, ranging from
gene and protein sequences, to neural and cardiac rhythms, to tissue and organ images.

ORIGIN OF BIOMEDICAL SIGNALS


Bioelectric potentials are generated at the cellular level, and the source of these potentials is ionic
in nature. A cell consists of an ionic conductor separated from the outside environment by a
semipermeable membrane which acts as a selective ionic filter: some ions can pass through the
membrane freely whereas others cannot. All living matter is composed of cells of different types.
Human cells may vary from 1 micron to 100 microns in diameter, from 1 mm to 1 m in length,
and have a typical membrane thickness of 0.01 micron (Peter Strong, 1973). Surrounding the
cells of the body are body fluids, which are ionic and which provide a conducting medium for
electric potentials. The principal ions involved in producing cell potentials are sodium (Na+),
potassium (K+) and chloride (Cl-). The membrane of excitable cells readily permits the entry of
K+ and Cl- but impedes the flow of Na+, even though there may be a very high concentration
gradient of sodium across the cell membrane. As a result, the sodium ion is more concentrated on
the outside of the cell membrane than on the inside. Since sodium is a positive ion, a cell in its
resting state has a negative charge along the inner surface of its membrane and a positive charge
along the outer surface.

RESTING POTENTIAL:
When there is no signal, the cell is said to be in a resting state. The concentration of Na+ is
higher outside the cell than inside, and the reverse is true for K+; there is more positive charge
outside than inside, creating an electrical gradient. At rest, when no signals are present, the
membrane potential is approximately -70 mV, i.e., the inside of the cell is 70 mV less positive
than the outside. In the resting state, membranes of excitable cells readily permit the entry of
K+ and Cl- ions, but block the entry of Na+ ions.

DEPOLARIZATION:
If a stimulus, which could be electrical or chemical, is applied to the cell membrane, the
permeability of the membrane suddenly changes at one point, allowing sodium ions to enter.
As sodium flows in, the negative potential within the cell keeps rising until it reaches about
+20 mV with respect to the outside; this process of change in voltage is called
depolarization. Once the stimulus exceeds the threshold, an action potential is created.

REPOLARIZATION:
Once the inside of the cell reaches about +20 mV, the characteristics of the cell membrane change
and it again becomes impervious to sodium ions.
It now allows only potassium ions to leave through the cell membrane. As they leave the cell
boundary, the potential inside the cell starts dropping and can go as low as -90 mV; this process
is called repolarization.
There is an overshoot, called hyperpolarization, after which the membrane slowly returns to the
resting potential.
Recording these biopotentials requires the coordinated electrochemical activity of a large number
of cells, because each individual cell can provide only a very small amount of energy or current.

CLASSIFICATION OF BIOMEDICAL SIGNALS:
1. Bioelectric signals
2. Bioacoustic signals
3. Biomechanical signals
4. Biochemical signals
5. Biomagnetic signals
6. Bio-optic signals
7. Bioimpedance signals

1. Bioelectric Signals:
These are unique to the biomedical systems. They are generated by nerve cells and muscle cells.
Their basic source is the cell membrane potential which under certain conditions may be excited
to generate an action potential. The electric field generated by the action of many cells
constitutes the bioelectric signal.
Examples:
❖ ECG (electrocardiogram)
❖ EEG (electroencephalogram)
❖ EMG (electromyogram)

2. Bioacoustic Signals:
The measurement of acoustic signals created by many biomedical phenomena provides
information about the underlying phenomena.
Examples:
❖ Blood flow in the heart and through the heart's valves
❖ Air flow through the upper and lower airways and in the lungs, which generates typical
acoustic signals

3. Biomechanical Signals:
These signals originate from some mechanical function of the biological system. They include all
types of motion and displacement signals, pressure and flow signals etc.
Example:
❖ The movement of the chest wall in accordance with the respiratory activity

4. Biochemical Signals:
These signals are obtained as a result of chemical measurements from living tissue or
from samples analyzed in the laboratory. Examples:
❖ Measurement of partial pressure of carbon-dioxide (pCO₂)
❖ Measurement of partial pressure of oxygen (pO₂)
❖ Measurement of concentration of various ions in the blood.

5. Biomagnetic Signals:
Extremely weak magnetic fields are produced by various organs such as the brain, heart and
lungs. The measurement of these signals provides information which is not available in other
types of bio-signals such as bioelectric signals.
Example:
❖ Magneto-encephalograph signal from the brain.

6. Bio-optical Signals:
These signals are generated as a result of optical functions of the biological systems, occurring
either naturally or induced by the measurement process.
Example:
❖ Blood oxygenation may be estimated by measuring the transmitted/ back scattered light
from a tissue at different wavelengths.

7. Bio-impedance Signals:
The impedance of the tissue is a source of important information concerning its composition,
blood distribution and blood volume etc.
Example:
❖ The measurement of galvanic skin resistance
The bioimpedance signal is also obtained by injecting sinusoidal current in the tissue and
measuring the voltage drop generated by the tissue impedance.
Example: The measurement of respiration rate

BIOMEDICAL SIGNAL PROCESSING


In biomedical signal processing, the aim is to extract clinically, biochemically or
pharmaceutically relevant information in order to enable an improved medical diagnosis. All
living things, from cells to organisms, deliver signals of biological origin. Such signals can be
electric, mechanical, or chemical. All such signals can be of interest for diagnosis, for patient
monitoring and biomedical research. The main task of processing biomedical signals is to filter
the signal of interest out of the noisy background and to reduce the redundant data stream to only
a few, but relevant parameters.
Biomedical signal processing is mainly about the innovative applications of signal
processing methods in biomedical signals through various creative integrations of the method
and biomedical knowledge. It is a rapidly expanding field with a wide range of applications.
These range from the construction of artificial limbs and aids for the disabled to the development
of sophisticated medical monitoring systems that can operate in a noninvasive manner to give
real time views of the workings of the human body. A number of medical systems are in
common use; these include ultrasound, electrocardiography and plethysmography, which are
widely used for many purposes.
Objectives of Biomedical Signal Analysis
1. Information gathering - measurement of phenomena to interpret a system.
2. Diagnosis - detection of malfunction, pathology, or abnormality.
3. Monitoring - obtaining continuous or periodic information about a system.
4. Therapy and control - modification of the behaviour of a system based upon the outcome
of the activities listed above to ensure a specific result.
5. Evaluation - objective analysis to determine the ability to meet functional requirement,
obtain proof of performance, perform quality control, or quantify the effect of treatment
The processing of biomedical signals usually consists of 3 main stages:
1. Signal acquisition
2. Signal processing
3. Signal analysis

1. Signal Acquisition
Procedures:
● Invasive
○ Placement of transducers or other devices inside the body (e.g., use of needle
electrodes, placed by a physician).
● Noninvasive
○ minimal risk
○ Surface electrodes (Placed on the skin surface)

● Active
○ requires an external stimulus
● Passive
○ does not require an external stimulus

Transducers:
● A transducer is a device that converts energy from one form to another.
● Transducers attached to a patient convert biological signals, like blood pressure, pulse
rate, mechanical movement, and electrical activity, e.g., of heart, muscle and brain, into
electrical signals, which are transmitted to the computer.
● Preamplifiers help in the amplification of the signal prior to the signal processing.
● The analog signal usually needs to be amplified and bandpass or low-pass filtered.
Noise is common in most measurement systems and is considered a limiting
factor in the performance of a medical instrument. The main aim of many signal
processing techniques is to minimize the variability in the measurements. In
biomedical measurements, variability has four different origins:
○ Physiological variability;
○ Environmental noise or interference;
○ Transducer artifact; and
○ Electronic noise.

Lowpass filters allow low frequencies to pass with minimum attenuation whilst higher
frequencies are attenuated. Conversely, highpass filters pass high frequencies, but
attenuate low frequencies. Bandpass filters reject frequencies above and below a
passband region. A bandstop filter passes frequencies on either side of a range of attenuated
frequencies. The bandwidth of a filter is defined by the range of frequencies that are not
attenuated.
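The four filter types above can be illustrated with a minimal sketch. The single-pole low-pass filter below is a hypothetical example written only for this demonstration (the cutoff, sampling rate and test frequencies are invented values): frequencies in the passband come through almost unchanged, while frequencies well above the cutoff are strongly attenuated.

```python
import math

def lowpass(x, fc, fs):
    """Single-pole IIR low-pass filter: y[n] = y[n-1] + a*(x[n] - y[n-1])."""
    a = 1 - math.exp(-2 * math.pi * fc / fs)  # smoothing coefficient from cutoff fc
    y, prev = [], 0.0
    for s in x:
        prev = prev + a * (s - prev)
        y.append(prev)
    return y

fs = 1000.0                                   # sampling rate, Hz
t = [n / fs for n in range(2000)]
slow = [math.sin(2 * math.pi * 5 * ti) for ti in t]    # 5 Hz: inside the passband
fast = [math.sin(2 * math.pi * 200 * ti) for ti in t]  # 200 Hz: far above cutoff
fc = 20.0                                     # cutoff frequency, Hz

# measure output amplitude after the start-up transient has settled
amp_slow = max(abs(v) for v in lowpass(slow, fc, fs)[1000:])
amp_fast = max(abs(v) for v in lowpass(fast, fc, fs)[1000:])
print(amp_slow, amp_fast)   # passband tone survives; high tone is attenuated
```

A high-pass response can be obtained analogously by subtracting the low-passed output from the input, and combining the two yields the band-pass and band-stop behaviours described above.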

Digitization of Bio-signals: Sampling and Quantization


Most naturally occurring signals are analog signals, i.e., signals that vary continuously. A
digital computer stores and processes values in discrete units. Before processing, analog signals
must be converted to discrete units. The conversion process is called analog-to-digital conversion
(ADC). ADC can be thought of as sampling and rounding: the continuous value is observed
(sampled) at fixed intervals and rounded (quantized) to the nearest discrete unit. Two parameters
determine how closely the digital data match the original analog signal: the precision with which
the signal is recorded and the frequency with which the signal is sampled.
Ranging and calibration of the instruments, either manually or automatically, is necessary for
signals to be represented with as much precision as possible. Improper ranging will result in
information loss. For example, a change in a signal that varies between 0.1 and 0.2 volts will be
undetectable if the instrument has been set to record changes between 0.0 and 1.0, in 0.25-volt
steps.
The sampling rate (sampling frequency) is the second parameter that affects the correspondence
between an analog signal and its digital representation. A sampling rate that is too low relative to
the rate at which a signal changes value will produce a poor representation. On the other hand,
oversampling increases the expense of processing and storing the data.
As a general rule, we need to sample at least twice as frequently as the highest-frequency
component needed from a signal. For instance, looking at an ECG, we find that the basic
repetition frequency is at most a few beats per second, but that the QRS complex contains useful
frequency components on the order of 150 Hz [5]. Thus, the data sampling rate should be at least
300 measurements per second. This minimum rate is called the Nyquist rate, and the underlying
rule is the Nyquist sampling theorem.

● Most biomedical signals are low energy signals and their acquisition takes place in the
presence of noise and other signals originating from underlying systems that interfere
with the original one. Noise is characterized by certain statistical properties that facilitate
the estimation of Signal to Noise ratio.
● Once converted, the signal is often stored, or buffered, in memory.
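The ranging example above (a 0.1-0.2 V change lost in 0.25 V steps) and the sampling-rate rule can both be checked with a small sketch. The truncating quantizer below is a hypothetical stand-in for a real A-D converter, not a description of any particular instrument.

```python
def quantize(sample, vmin, vmax, step):
    """Truncate a sample down to the nearest ADC step (a truncating converter)."""
    clipped = min(max(sample, vmin), vmax)
    # the small epsilon guards against floating-point round-off at step edges
    return vmin + step * int((clipped - vmin) / step + 1e-9)

# The text's example: a change between 0.1 V and 0.2 V, with the instrument
# set to record 0.0-1.0 V in 0.25 V steps.
coarse = [quantize(v, 0.0, 1.0, 0.25) for v in (0.10, 0.20)]
fine = [quantize(v, 0.0, 1.0, 0.01) for v in (0.10, 0.20)]
print(coarse)   # both samples land on the same level: the change is lost
print(fine)     # with finer ranging the two samples stay distinguishable

# The sampling-rate rule: at least twice the highest useful component.
f_max = 150        # Hz, highest useful QRS component per the text
fs_min = 2 * f_max
print(fs_min)      # 300 measurements per second
```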

Signal Processing:
● Digital signal processing algorithms applied on the digitized signal are mainly
categorized as artifact removal processing methods and events detection methods.
Artifact removal:
● It is the first building block of the signal processing
● It is the conditioning of the signal.
Artefactual signals arise from several internal and external sources.

● Sources of noise:

Physiological Interference:
(1) Signals from muscles
All muscle activity produces electrical signals. Signals from muscles other than the heart are
called EMG signals and appear on the monitor as narrow, rapid spikes associated with muscle
movement. These signals are sufficiently dissimilar to the ECG signals that they can be
electronically reduced or "filtered" from the trace. This filtering is readily observed as a reduction
in the size of the EMG signals as the monitor is switched from the diagnostic mode to the monitor
mode.

(2) Signals produced in the epidermis


The skin is a source of electrical signals which produce motion artifacts. Studies have revealed
that a voltage of several millivolts can be generated by stretching the epidermis, the outer layer
of the skin. This stretching is the primary source of movement-related artifacts. This type of
artifact is visible as large baseline shifts occurring when the patient changes positions in bed, eats
or ambulates.

○ Instrumentation used
○ Environment of the experiment
○ Power line interference (50Hz or 60Hz)
Another prominent kind of noise is power line interference: 60 Hz in the United States and 50 Hz
in Europe and India. These frequencies are much lower than those of communication signals, but
they fall well within the band of biomedical signals. Wherever a biomedical signal is recorded
near electrical wiring, the power lines act as a source of electromagnetic interference at the
power frequency; the leads used for recording the biomedical signal act as antennas, pick up this
electromagnetic noise, and the interference gets added to the recorded biomedical signal.

Techniques for the removal of artifacts:


● Time-domain digital filter
○ Offset filter: removes an offset introduced during acquisition (e.g., the DC component
from the skin-electrode interface in ECG).
○ Moving average filter: removes high-frequency noise with minimal loss of signal
components in the pass-band (e.g., ECG contaminated with EMG noise).
○ Median filter: ranks the samples in a window from lowest to highest value and picks
the middle one, e.g., the median of [3, 3, 5, 9, 11] is 5. Good for rejecting certain
types of noise in which some samples take extreme values (outliers). Application:
a regular power source leading to ripples riding on top of the signal (e.g.,
power-source interference in ECG).
○ Detrending: uses a polynomial approximation based on a least-squares fit of a
straight line (or a composite line for piecewise linear trends) to the signal, then
subtracts the resulting approximate function from the original signal. Application:
trends/shifts (low-frequency artifacts) leading to improper amplitudes (e.g.,
breathing in ECG).
● Frequency-domain digital filter
○ Butterworth/Chebyshev: simplicity, a monotonically decreasing magnitude
response, and a maximally flat magnitude response in the pass-band. Application:
remove high-frequency noise with minimal loss of signal components in the
pass-band (e.g., ECG contaminated with EMG noise).
○ Notch/Comb: a very selective band-stop filter; for multiple notches (i.e., at
multiple frequencies) use the comb digital filter. Application: remove regular
power-source noise leading to ripples riding on top of the signal (e.g., power-line
interference in ECG).
○ Savitzky-Golay: a method of data smoothing based on local least-squares
polynomial approximation, used to "smooth out" a noisy signal whose frequency
span (without noise) is large. Preserves characteristics such as local maxima and
minima and peak width. Application: trends/shifts (low-frequency artifacts)
leading to improper amplitudes (e.g., breathing in ECG).
● Adaptive filters (Optimal)
○ Wiener: It is commonly used to denoise audio signals, especially speech, as a
preprocessor before speech recognition.
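A minimal pure-Python sketch of two of the time-domain filters above shows why the median filter is preferred for outlier spikes: the moving average only smears a spike across the window, while the median rejects it outright. The window width and sample values are invented for the illustration.

```python
def moving_average(x, width):
    """Replace each sample by the mean of its window: a simple low-pass smoother."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def median_filter(x, width):
    """Replace each sample by the median of its window: rejects outlier spikes."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

print(sorted([3, 3, 5, 9, 11])[2])   # 5: the median example from the text

signal = [1.0] * 11
signal[5] = 40.0                     # a single outlier spike
print(median_filter(signal, 5)[5])   # 1.0: the spike is rejected
print(moving_average(signal, 5)[5])  # 8.8: the spike is only smeared
```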

Event detection
● Biomedical signals carry signatures of physiological events
● The part of a signal related to a specific event of interest is often referred to as an epoch
(e.g., QRS wave in ECG)
● Event detection techniques are normally used in order to identify epochs
● Once an event is identified, the corresponding waveform may be segmented and
characterised in terms of time or frequency (e.g., peak-to-peak amplitude,
waveshape/morphology, time duration, intervals between events, energy distribution,
spectral components, etc.)
Some of the methods:
● Envelope estimation
● Wave delineation
● Peak detection
● Cross-correlation
● Auto-correlation

Envelope estimation: the signal's envelope is equivalent to its outline, and an envelope
detector connects all the peaks in the signal.
Application: detection of burst moments and estimation of the amount of activity in the EMG
signal.
Wave delineation: direct thresholding; boundaries are defined as the instants a wave crosses a
certain amplitude threshold level. Seldom applied in practice, since signals are usually affected
by baseline drifts or offsets.
Application: detection of the QRS complex (largest slope/rate of change in a cardiac cycle).
Peak detection: envelope estimation and wave delineation are frequently used in combination.
Thresholding is used to determine candidate peaks, and a local maxima search is normally needed
to select outstanding peaks. Application: estimation of the ECG R-R distance.
Cross-correlation: measures the similarity of two series as a function of the lag of one relative
to the other. Intended to find common patterns (i.e., their lags) in a pair of signals.
Application: detection of EEG rhythms.
Auto-correlation: the cross-correlation of the signal with itself. Intended to find repeating patterns
(i.e., their lags), such as the presence of a periodic signal obscured by noise.
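As a sketch of the auto-correlation method, the snippet below builds a synthetic train of R-wave-like pulses (the sampling rate, pulse shape and period are all invented for the illustration, not taken from any real recording) and recovers the repetition period as the lag with the largest auto-correlation, which is how an R-R interval could be estimated.

```python
import math

def autocorrelation(x, lag):
    """Unnormalized auto-correlation of x at a given lag."""
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

fs = 250        # Hz, assumed sampling rate
period = 200    # samples between pulses -> 0.8 s, i.e. 75 beats/min

# narrow Gaussian pulses every `period` samples, standing in for R waves
x = [math.exp(-((n % period) - 20) ** 2 / 20.0) for n in range(2000)]

# search for the lag (above a minimum plausible interval) that maximizes
# the auto-correlation: that lag is the repeating period
best_lag = max(range(50, 400), key=lambda lag: autocorrelation(x, lag))
print(best_lag)         # 200 samples: the R-R interval is recovered
print(best_lag / fs)    # 0.8 seconds
```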

Signal Analysis:
Once the data have been acquired and filtered, they typically are processed to reduce their
volume and to abstract information for use by interpretation programs. Often the data are
analyzed to extract important parameters, or features, of the signal, e.g., the duration or intensity
of the ST segment of an ECG. The computer can also analyze and classify the shape of the
waveform by comparing the signal to models of known patterns. Further analysis (in connection
with a suitable knowledge base) is necessary to determine the meaning or importance of the
signals, e.g., to allow automated ECG-based cardiac diagnosis.

Biomedical Signal Processing in Hearing Aids:


Block Diagram:

● The acoustic signal is converted to its electrical analog at the microphone stage of the
hearing aid system.
● After this conversion, a frequency filter is introduced to reduce possible distortion of the
input signal. The signal is then "sampled" a given number of times per second. Normally,
the sampling rate is 10,000 times per second, or greater.
● The analog signal is then converted to its digital equivalent by the analog-to-digital (A/D)
converter. Each sample receives a digital code. Binary numbers (0 and 1) are used to
represent the digital value of each sample.
● Following the digitization of the signal, the digital representations are processed by a
central processing unit (CPU) or microprocessor. The digital values can be multiplied,
divided, added, subtracted and grouped in defined ways. In the microprocessor are
various algorithms. An algorithm is a system of instructions that operates in a manner
determined by a set of mathematical rules and equations. If the algorithm is a dedicated
one, it performs a specific task relative to the processing of the input signal. For example,
one algorithm may control the frequency response of the instrument, another may control
loudness growth, a third may function to enhance the speech signal in a background of
noise, etc.
● After the microprocessor has performed its tasks, the digitized signal must be converted
back to its analog equivalent. This is accomplished at the digital to analog (D/A)
conversion stage.
● When the digitized signal is converted to its analog stage, it is frequency filtered again to
prevent signal distortion. It is then amplified in the conventional manner and delivered to
the receiver (speaker) of the hearing aid.
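The A/D → processing → D/A chain described above can be sketched in a few lines. The flat-gain "algorithm" and the quantization step below are hypothetical stand-ins for a real hearing-aid DSP stage, chosen only to show the shape of the pipeline.

```python
def ad_convert(samples, step=0.001):
    """A/D stage: quantize each analog sample to an integer binary code."""
    return [round(s / step) for s in samples]

def apply_gain_algorithm(codes, gain=4):
    """CPU stage: one 'dedicated algorithm', here simply a flat gain."""
    return [c * gain for c in codes]

def da_convert(codes, step=0.001):
    """D/A stage: map integer codes back to analog values."""
    return [c * step for c in codes]

mic = [0.010, -0.005, 0.020]   # analog samples from the microphone, in volts
out = da_convert(apply_gain_algorithm(ad_convert(mic)))
print(out)   # each sample amplified 4x, roughly [0.04, -0.02, 0.08]
```

In a real instrument the CPU stage would chain several such algorithms (frequency shaping, loudness growth, noise reduction) rather than a single flat gain.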

REFERENCE

Willis J. Tompkins (2004). Biomedical Digital Signal Processing. John Wiley & Sons.
R. S. Khandpur. Handbook of Biomedical Instrumentation, 2nd Edition.
Adam Gacek and Witold Pedrycz. Description, Analysis, and Classification of Biomedical Signals: A Computational Intelligence Approach.
Sri Krishnan. Biomedical Signal Analysis for Connected Healthcare.
Muhammad Ibn Ibrahimy. Biomedical Signal Processing and Applications (article).
Monty A. Escabí. "Introduction to Biomedical Signals," in Introduction to Biomedical Engineering, Second Edition, 2005.

PRESENTATION NUMBER: 2

SUBJECT: Technology in Audiology

TOPIC: SIGNAL ACQUISITION AND PROCESSING TECHNIQUES, HIGH FIDELITY

NAME: S. Lavanya (M.Sc. Audiology)

SUBMITTED TO: Ms. Rajasuman (Assistant Professor)

DATE OF SUBMISSION: /02/2022

REMARKS:

INDEX

1. Signal acquisition and processing techniques
2. Differential amplification
3. Common mode rejection
4. Artifact rejection
5. Filtering
6. Signal averaging
7. Signal acquisition and processing in OAE
8. Auditorium acoustics
9. High fidelity

SIGNAL ACQUISITION AND PROCESSING TECHNIQUES


Signal acquisition is the process of sampling signals and converting the resulting samples into
digital numeric values. Signal-acquisition devices add noise that can be reduced by estimators
using prior information on signal properties.
Signal processing focuses on analyzing, modifying, and synthesizing signals such as sound,
images, and scientific measurements. Signal processing techniques can be used to improve
transmission, storage efficiency and subjective quality, and to emphasize components of interest
in a measured signal.

Signal processing systems include the following components:

1. Signal generating component - generates the desired type of signal, e.g., a click, tone pip, or
tone burst.
2. Amplifier and filter component - amplifies the AEP from the scalp so that the AEP can be
processed easily by the signal averager and filter.
3. Computer averager - its function is to store electrical responses that are time-locked to the
stimulus and to cancel out ongoing EEG activity.
Signal processing must overcome the unwanted potentials from many other sources. The
basic procedures for improving the signal-to-noise ratio (SNR) include:
 Differential recording and amplification
 Band-pass filtering
 Analog-to-digital conversion
 Amplitude-based artifact rejection
 Simple time-domain averaging
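Simple time-domain averaging, the last procedure above, can be sketched as follows: a small, time-locked response is buried in much larger random "EEG" noise, and averaging many sweeps shrinks the noise roughly as the square root of the number of sweeps. The amplitudes, sweep count, and noise level are invented for the illustration.

```python
import math
import random

random.seed(1)
n_points, n_sweeps = 100, 400
# the time-locked response: one cycle of a small sine wave
response = [math.sin(2 * math.pi * i / n_points) for i in range(n_points)]

def sweep():
    """One recorded epoch: the response plus much larger random 'EEG' noise."""
    return [r + random.gauss(0, 2.0) for r in response]

avg = [0.0] * n_points
for _ in range(n_sweeps):
    for i, v in enumerate(sweep()):
        avg[i] += v / n_sweeps

# worst-case residual noise after averaging; the per-point noise has shrunk
# from about 2.0 to roughly 2.0 / sqrt(400) = 0.1
residual = max(abs(avg[i] - response[i]) for i in range(n_points))
print(residual)
```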

DIFFERENTIAL AMPLIFICATION
The term amplifier suggests a device that increases the strength of a signal (acoustic or electrical). 
A differential amplifier is a type of electronic amplifier that amplifies the difference between two
input voltages but suppresses any voltage common to the two inputs. It is an analog circuit with two
inputs (V- and V+) and an output (Vo). The output is ideally proportional to the difference between
the two voltages
Vo = A [( V+) - (V-)] where A is the gain of the amplifier.
For neurofeedback purposes, we use the differential amplifier, which amplifies voltage
differences between two points. This is different from a power amplifier, such as the one used in a
public address system, which is non-differential: it does not discriminate, and thus everything on
the line gets amplified.

Amplification mainly serves two purposes: 


 To reduce background noise through differential recording 
 To bring the signal of interest into the range of the A–D convertor (Durrant and Ferraro, 1999). 
For a basic differential recording, a minimum of three electrodes is required: non-inverting,
inverting and ground electrodes (Hyde, 1994). During amplification, any signal (and noise) that is
common to the positive and negative inputs will be reduced, while signals that differ between these
inputs are amplified (Duffy, Iyer, and Surwillo, 1989; Durrant and Ferraro, 2001).

A differential amplifier with a very high gain and extremely high input impedance is called an
operational amplifier (op-amp). The output node of an op-amp has near-zero resistance, allowing
it to behave like an ideal voltage source that supplies as much current as necessary.
19

Differential amplifiers are used mainly to suppress noise. They can also act as volume control
circuits, automatic gain control circuits, and amplitude modulators.
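A numerical sketch of Vo = A[(V+) - (V-)]: a small response present only at the non-inverting input survives amplification, while interference shared by both inputs cancels. The voltages and the gain below are illustrative values, not figures from the text.

```python
def diff_amp(v_plus, v_minus, gain):
    """Ideal differential amplifier: Vo = A * (V+ - V-)."""
    return gain * (v_plus - v_minus)

signal = 0.00002    # 20 uV of response, present only at the non-inverting input
mains = 0.005       # 5 mV of interference, common to both inputs
v_plus = signal + mains
v_minus = mains

vo = diff_amp(v_plus, v_minus, 100000)
print(vo)   # about 2.0 V: the shared 5 mV interference has vanished,
            # only the 20 uV difference was amplified
```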

COMMON MODE REJECTION


Common mode rejection is a unique property of differential amplifiers. The signals that are
common to both inputs cancel themselves out while the signals that are unique to each input are
preserved for amplification.
In the field of EEG biofeedback this feature of the differential amplifier is important as we tend to
work in an extremely polluted environment from an electromagnetic standpoint. The feature of
common mode rejection helps in attaining clean measurements.
Common-mode signals are rejected by the differential amplifier because the amplifier amplifies
only the difference between the two inputs (V1 - V2), and for common-mode signals this
difference is zero.
The common mode rejection ratio (CMRR) of a differential amplifier is used to quantify the ability
of the device to reject common-mode signals, i.e. those that appear simultaneously and in-phase on
both inputs. For example, if a differential input change of Y volts produces a change of 1 V at the
output, and a common-mode change of X volts produces a similar change of 1 V, then the CMRR is
X/Y. When the common-mode rejection ratio is expressed in dB, it is generally referred to as
common-mode rejection (CMR)
An ideal differential amplifier would have infinite CMRR; however this is not achievable in
practice. A high CMRR is required when a differential signal must be amplified in the presence of a
possibly large common-mode input, such as strong electromagnetic interference (EMI). 
Common mode rejection is an important concept in understanding how relatively small AER
voltages can be detected in the midst of a wide variety of other electrical signals with greater
amplitude. Two electrodes placed at different locations on the head (e.g., the high forehead in the
midline [Fz] and an earlobe) will presumably detect the same amount of electrical interference
(electrical activity that does not include the response) in the region of the head. This
interference is common to, or the same for, each electrode. The differential preamplifier in an
AER system reverses the polarity (positive or negative) of the inverting electrode's input voltage
and adds it to the non-inverting electrode's input. This is, in effect, a subtraction process. In this
way, any activity that is detected identically at each electrode is eliminated. If each electrode
recorded exactly the same AER activity, then common mode rejection would subtract away the
response.
Subtracting the activity detected at the earlobe electrode from the activity at the vertex electrode
reduces the noise interference and also increases the amplitude of the ABR components. The
effectiveness of common mode rejection is usually expressed as the ratio of the amplifier output
(the electrical activity that remains after amplification) with only one input (i.e., without the
benefit of the subtraction process) to the amplifier output when both inputs are the same.
Usually the CMR ratio is more than 10,000. The ratio is often expressed in decibels; a CMR
ratio of 10,000 is equivalent to a value of 80 dB.
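The decibel conversion used in the last sentence can be checked directly: CMRR in decibels is 20 times the base-10 logarithm of the gain ratio.

```python
import math

def cmrr_db(differential_gain, common_mode_gain):
    """Common-mode rejection ratio expressed in decibels."""
    return 20 * math.log10(differential_gain / common_mode_gain)

# the text's example: a ratio of 10,000 corresponds to 80 dB
print(cmrr_db(10000, 1))   # 80.0
```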

ARTIFACT REJECTION
When a sweep occurs that contains excessive voltage amplitudes, excessive noise is included in
the average, which can decrease the quality of a recording. Most evoked potential systems provide
a method by which sweeps containing excessive noise can be excluded from the ongoing average.
This is known as artifact rejection. The artifact rejection level is set so that sweeps containing
voltages well above the voltages of the response of interest are rejected and thus not added into
the summed or averaged response.
An artifact is electrical activity that is not part of the response and that should not be included in
the analysis of the response. In short, artifact rejection is used to differentiate between
electromagnetic (non-patient) activity and the electrophysiological response.
Artifacts can be dealt with in three ways:
 Determine the source of the artifact and eliminate it.
 Modify the test parameters, i.e., filter settings, electrode arrays, number of sweeps.
 Use a technique known as artifact rejection.
Typically, artifact rejection is designed to detect any signal larger in amplitude than specified value
within the sensitive range of the A-D converter or a percentage thereof (eg: 90% of full scale
deflection). When such a signal is detected during an average run, the entire sweep is excluded from
the average. 
Artifact rejection is a technique in which all of the sweeps containing high-amplitude signals
that have exceeded the preset limit are excluded from the average. Each successive digitized trace
first goes to a buffer where it is examined for any voltages that exceed some pre-set limits. If all
voltages are at or below the pre-set level, then the digitized voltages are passed to the memory
unit for averaging with prior and succeeding traces. Conversely, if excessive voltage is found at
any address in the analysis window, then that sample in the buffer is erased instead of being
forwarded to the averaging memory.
Two common clinical limitations are:
 (i) the inability to make progress with averaging because of almost continuous artifact
rejection
 (ii) the obvious artifact contamination of an averaged waveform, despite the use of artifact
rejection
 {Increasing the sensitivity of the amplifier (increasing the gain) to solve the second problem will
also increase the sensitivity of the artifact rejection process and perhaps create the first problem}

FILTERING

In signal processing, a filter is a device or process that removes unwanted components from a signal. Filtering is the process of removing certain portions of the input signal in order to create a new signal free of unwanted components such as background noise.

Filtering is done by passing the input signal through a system function that determines the degree of amplification for each frequency in the signal. The desired frequencies are passed with the instrument's gain, while the unwanted frequencies are attenuated (given a gain of zero).
Phase Response 
Ideally, a filter should have a linear phase response. This means there is a constant time delay from input to output for all input frequencies. If the phase response is not linear, different frequencies are delayed by different amounts, distorting the waveform.

Filters are categorised on the basis of pass band and stop band. A passband is the range of
frequencies or wavelengths that can pass through a filter.  
A stopband is a band of frequencies, between specified limits, through which a circuit, such as a
filter or telephone circuit, does not allow signals to pass. 

 The low-pass filter passes signals with frequencies lower than a selected cut-off frequency and
attenuates signals with frequencies higher than the cut-off frequency. 

 The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter or treble-cut filter in audio applications.
 A low-pass filter is used as an anti-aliasing filter prior to sampling and for reconstruction in digital-to-analog conversion.
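As a rough illustration of low-pass filtering, here is a minimal single-pole IIR smoother in Python. The cut-off frequency and sample rate are assumed values, and real anti-aliasing filters use much steeper designs:

```python
import math

# Minimal first-order (single-pole) IIR low-pass filter sketch.
# Components above cutoff_hz are attenuated; the roll-off is gentle
# (6 dB/octave), unlike the sharper filters used in practice.

def low_pass(samples, cutoff_hz, sample_rate_hz):
    dt = 1.0 / sample_rate_hz
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # analog RC time constant
    alpha = dt / (rc + dt)                 # smoothing factor in (0, 1)
    out = [samples[0]]
    for x in samples[1:]:
        # Each output moves a fraction alpha toward the new input,
        # so slow (low-frequency) trends pass and fast ones are damped.
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

# A steady (DC) signal passes unchanged, while a rapid alternation
# near the Nyquist frequency is strongly attenuated.
steady = low_pass([1.0] * 100, 100.0, 8000.0)
rapid = low_pass([1.0, -1.0] * 200, 100.0, 8000.0)
```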

 The high-pass filter is an electronic filter that passes signals with frequencies higher than a certain cut-off frequency and attenuates signals with frequencies lower than the cut-off frequency. 
 The amount of attenuation for each frequency depends on the filter design.
 A high-pass filter is usually modeled as a linear time-invariant system. It is sometimes called a low-
cut filter or bass-cut filter in the context of audio engineering. 
 High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-zero average
voltages or radio frequency devices. They can also be used in conjunction with a low-pass filter to
produce a bandpass filter.

  Band-pass filter: A band-pass filter is a device that passes frequencies within a certain range and rejects (attenuates) frequencies outside that range. It removes all frequencies outside f1 and f2 (f1 = low cut-off, f2 = high cut-off).
 The bandwidth of the filter is simply the difference between the upper and lower cut off
frequencies.
 Bandpass filters are widely used in wireless transmitters and receivers, and in EEG measurements.
 The main function of such a filter in a transmitter is to limit the bandwidth of the output signal to
the band allocated for the transmission.
  This prevents the transmitter from interfering with other stations. In a receiver, a bandpass filter
allows signals within a selected range of frequencies to be heard or decoded, while preventing
signals at unwanted frequencies from getting through.
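The cascade mentioned earlier (a high-pass combined with a low-pass to produce a band-pass response) can be sketched as follows. The cut-offs f1 and f2 and the sample rate are illustrative, and first-order stages give only a gentle roll-off:

```python
import math

def one_pole_low_pass(x, fc, fs):
    # Same single-pole smoother idea: pass below fc, damp above it.
    alpha = (1 / fs) / (1 / (2 * math.pi * fc) + 1 / fs)
    y = [x[0]]
    for s in x[1:]:
        y.append(y[-1] + alpha * (s - y[-1]))
    return y

def band_pass(x, f1, f2, fs):
    # High-pass at f1: subtract the below-f1 content from the input,
    # then low-pass at f2 to remove the content above f2.
    low = one_pole_low_pass(x, f1, fs)
    high = [a - b for a, b in zip(x, low)]
    return one_pole_low_pass(high, f2, fs)

# DC (below f1) is removed entirely; a 1 kHz tone inside the
# f1..f2 passband is retained with most of its amplitude.
dc_out = band_pass([1.0] * 200, 300.0, 3000.0, 16000.0)
tone = [math.sin(2 * math.pi * 1000 * n / 16000) for n in range(400)]
tone_out = band_pass(tone, 300.0, 3000.0, 16000.0)
```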

 Band-reject filter: a filter that passes most frequencies unaltered but attenuates those in a specific range to very low levels. 
 It is the opposite of a band-pass filter; it removes all frequencies between f1 and f2 (f1 = low cut-off, f2 = high cut-off).
 Mainly used in live sound reproduction (public address systems) and in instrument amplifiers (especially amplifiers or preamplifiers for acoustic instruments such as acoustic guitar, mandolin, bass instrument amplifier, etc.) to reduce or prevent audio feedback, while having little noticeable effect on the rest of the frequency spectrum.

 Notch filter: rejects just one specific frequency; an extreme band-stop filter.
 Comb filter: has multiple regularly spaced narrow passbands, giving its frequency response the appearance of a comb. 

SIGNAL AVERAGING
Signal averaging is a digital technique for separating a repetitive signal from noise without
introducing signal distortion (Tompkins and Webster, 1981).

Signal averaging sums a set of time epochs of the signal together with the superimposed random noise. If the time epochs are properly aligned, the signal waveforms sum directly, while the uncorrelated noise averages out over time. Thus, the signal-to-noise ratio (SNR) is improved.

Signal averaging is based on the following characteristics of the signal and the noise: 
 The signal waveform must be repetitive (although it does not have to be periodic). This means
that the signal must occur more than once but not necessarily at regular intervals. 
 The noise must be random and uncorrelated with the signal. Random means that the noise is not
periodic and that it can only be described statistically (e.g., by its mean and variance).
 The temporal position of each signal waveform must be accurately known.
 Signal averaging is a kind of digital filtering process. The Fourier transform of the transfer function of an averager is composed of a series of discrete frequency components. Because of the appearance of its amplitude response, this type of filter is called a comb filter. 
 The width of each tooth decreases as the number of sweep repetitions increases. The desired signal
has a frequency spectrum composed of discrete frequency components, a fundamental and
harmonics. Noise, on the other hand, has a continuous distribution. 
 As the bandwidth of each of the teeth of the comb decreases, this filter more selectively passes the fundamental and harmonics of the signal while rejecting the random noise frequencies that fall between the comb teeth. The signal averager, therefore, passes the signal while rejecting the noise.
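A minimal Python sketch of the averaging process, with a made-up repetitive waveform and Gaussian noise, shows the noise shrinking roughly as the square root of the number of sweeps:

```python
import random

random.seed(1)                              # deterministic illustration
signal = [0.0, 0.5, 1.0, 0.5, 0.0]          # the repetitive "response"
n_sweeps = 2000

def noisy_sweep():
    # Each sweep = the same response plus uncorrelated, zero-mean noise
    # whose standard deviation (2.0) dwarfs the response itself.
    return [s + random.gauss(0.0, 2.0) for s in signal]

sweeps = [noisy_sweep() for _ in range(n_sweeps)]
average = [sum(col) / n_sweeps for col in zip(*sweeps)]
# Noise std shrinks roughly by sqrt(n_sweeps): 2.0 / sqrt(2000) ≈ 0.045,
# so the averaged trace tracks the underlying response closely even
# though any single sweep is dominated by noise.
```

The requirements listed above are visible in the sketch: the waveform repeats identically in every sweep, the epochs are perfectly aligned, and the noise is random and uncorrelated with the signal.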

APPLICATION

One predominant application area of signal averaging is electroencephalography. The EEG recorded from scalp electrodes is difficult to interpret, in part because it consists of a summation of the activity of billions of brain cells. It is impossible to deduce much about the activity of the visual or auditory parts of the brain from the raw EEG. If we stimulate a part of the brain with a flash of light or an acoustic click, an evoked response occurs in the region of the brain that processes information for the sensory system being stimulated. By summing the signals evoked immediately following many stimuli and dividing by the total number of stimuli, we obtain an averaged evoked response. This signal can reveal a great deal about the performance of a sensory system.

SIGNAL PROCESSING:
Auditory evoked potentials (AEPs) are the electrical responses evoked in the auditory system by an auditory stimulus. These responses, generated by structures from the cochlea to the cortex, can be elicited by different stimuli, although the response pattern may differ with the stimulus. An AEP is a complex response to a particular type of external stimulus, representing neural activity generated at several anatomical sites. The amplitude of the AEP is very small, approximately 0.01–1 µV. This small potential is masked by larger background activity from several sources: the random, ongoing electrical activity (EEG) within the brain; muscular (myogenic) activity of the head and neck; electronic devices in the environment (such as 60 Hz mains interference); and other artifacts produced while generating the stimuli or recording the potentials. Therefore, in order to identify the AEP, a device consisting of three major components is required.
The ABR is recorded from electrodes attached to various positions on the head. The recording is made by measuring the difference in electrical activity between two electrodes, a method known as differential recording.
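Differential recording can be illustrated with a toy example in Python: interference common to both electrodes cancels in the difference, while the response, which differs between electrode sites, survives. All values here are made up:

```python
import math

# Tiny "evoked response" picked up only at the active electrode,
# plus large mains-like hum common to both electrodes.
response = [0.3 * math.sin(2 * math.pi * n / 32) for n in range(64)]
hum = [5.0 * math.sin(2 * math.pi * 60 * n / 1000) for n in range(64)]

active = [r + h for r, h in zip(response, hum)]   # response + hum
reference = list(hum)                             # hum only

# The differential amplifier outputs active minus reference:
# the common-mode hum cancels, leaving the response.
differential = [a - b for a, b in zip(active, reference)]
```

This is why the common-mode interference rejected this way need never dominate the recording, even though the hum here is more than ten times larger than the response.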

Auditorium acoustics
An auditorium is a room built to enable an audience to hear and watch performances. It may be any room intended for listening to music or speech, including theatres, churches, classrooms and meeting rooms.
Factors that affect hearing conditions in auditorium
 Location
 Shape
 Layout of boundary surfaces
 Dimensions
 Seating arrangements
 Volume
 Capacity of audience
 Stage position
 Ventilation
 Materials used for construction

Basic acoustic criteria to be met by any auditorium include:
 a low ambient noise level from internal and external sources
 a reasonable level of acoustic gain
 an appropriate reverberation time
 avoidance of artifacts such as unwanted reflections, excessive reverberation and echo

Reflections
 After the arrival of the direct sound, a series of semi-distinct reflections from various reflecting
surfaces will reach the listener. These early reflections occur within 50ms. The reflections that
reach after the early reflections are of lower amplitude and very closely spaced in time. These
become the late reflections or the reverberant sound.
 A bright and clear-sounding auditorium will provide about 30 separate early reflections, each arriving within the first 1/40th of a second after the initial arrival of the signal. Some will be strong and some weak, but the overall power of the early reflections should be on the order of 60 dBA to provide good speech reinforcement.
 The auditory system determines the direction of a sound source from the direct sound reaching the ears. Early reflections that arrive within 35 ms reinforce the direct sound.
 The acoustic quality of an auditorium to enable the listeners to hear a hi-fi sound depends on factors
such as reverberation time. 
 In larger halls, a longer reverberation time is desirable for music, but this risks making speech unintelligible. Accommodating two or more acoustic uses generally means that a variable reverberation time is desirable. 
 Introducing acoustic absorbent is the simplest approach, but this has the additional effect of
reducing sound level, which may be unacceptable for the lower reverberation time configuration.

Echo
 Echo is a reflection of sound that arrives at the listener with a delay after the direct sound. 
 This delay is directly proportional to the distance of the reflecting surface from the source and the listener; echo depends on the reflection and absorption of sound by the walls.
 An echo is an intelligible repetition, not to be confused with reverberation, which is unintelligible.

Reverberation time

Reverberation is defined as the prolonged reflection of sound from the walls, floor and ceiling of a room. It is the persistence of audible sound after the source has stopped emitting sound.

Reverberation time is defined as the time for the sound to die away to a level 60 decibels below its original level. Reverberation time (RT) shows how long the sound can be heard in the hall after the sound source stops producing sound. The definition of the reverberation time depends on what we define as the end of the sound we hear. 

As our ears are very sensitive to quiet sounds, it was postulated that the sound ends when its intensity I falls to one millionth (10⁻⁶) of its initial intensity I₀, i.e., a 60 dB drop.

The late sound after about 100 ms is called the reverberant sound. It usually decays in a linear manner, and this duration is described by the reverberation time.
An approximate formula for the reverberation time, RT, is given below:
RT = RT60 = time for the level to drop 60 dB below the original level
RT60 = 0.049 V/a
where V = volume of the space (ft³) and a = total room absorption (sabins)
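Sabine's formula as given above can be applied directly; note that the 0.049 constant corresponds to imperial units, with V in cubic feet and a in sabins. The example room below is invented for illustration:

```python
# Sabine reverberation-time formula, RT60 = 0.049 * V / a, using
# imperial units: V in cubic feet, total absorption a in sabins
# (square feet of equivalent perfectly absorbing surface).

def rt60_sabine(volume_ft3, absorption_sabins):
    """Seconds for the sound level to decay 60 dB after the source stops."""
    return 0.049 * volume_ft3 / absorption_sabins

# e.g. a hypothetical 100,000 ft^3 hall with 3,500 sabins of absorption:
rt = rt60_sabine(100_000, 3_500)   # 0.049 * 100000 / 3500 = 1.4 s
```

The formula also makes the trade-off mentioned earlier concrete: adding absorbent (increasing a) shortens the reverberation time, while a larger volume V lengthens it.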

ACOUSTIC ABSORPTION removes acoustic energy. There are three possible mechanisms:
porous absorption, panel absorption and Helmholtz resonance. 
 Porous absorption, as already mentioned, occurs with any porous material. In auditoria the major absorbent surface is the audience, whose clothes act as efficient porous absorbers. Absorption is measured by the absorption coefficient (a), which is simply the fraction of incident energy absorbed. Because effective porous absorption requires a layer that is thick relative to the wavelength, porous absorbers are efficient at high but not low frequencies. 
 Panel absorbers can complement porous absorbers to give absorption over the whole
frequency range, though the maximum absorption coefficient of panel absorbers is not great.
Panel absorption strongly influences the low-frequency reverberation time. 
 A traditional Helmholtz resonator is a rigid-walled cavity with an open neck, resulting in a single acoustic resonance.
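That single resonance follows the standard Helmholtz formula f0 = (c / 2π) · √(A / (V · L)), where A is the neck's cross-sectional area, V the cavity volume and L the effective neck length. The dimensions below are illustrative, not from the text:

```python
import math

def helmholtz_frequency(neck_area_m2, cavity_volume_m3, neck_length_m,
                        speed_of_sound=343.0):
    """Resonance frequency (Hz) of a simple Helmholtz resonator."""
    # The air in the neck acts as a mass, the air in the cavity as a
    # spring, giving one mass-spring resonance.
    return (speed_of_sound / (2 * math.pi)) * math.sqrt(
        neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# A bottle-sized resonator: 1 cm^2 neck, 1 litre cavity, 5 cm neck
f0 = helmholtz_frequency(1e-4, 1e-3, 0.05)   # roughly 77 Hz
```

Tuned arrays of such resonators are used in halls to absorb at the specific low frequencies that porous absorbers miss.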

ACOUSTICS OF SMALL ROOMS

 In small rooms, many reflections begin arriving soon after the direct sound. Thus, early
and late reflections become fused into a single reverberant sound.
 The only means for achieving a long reverberation time in a small room is to have low
absorption at the walls. This, however, results in prominent individual resonances.
 A large room has many more resonances than a small room (within the audible range).
This produces a fairly smooth frequency response. Small rooms, on the other hand, may
produce considerable coloration of sound.

 The reverberant field in a large room builds up much more slowly than that in a small
room, providing an important perceptual cue to room size.

HIGH FIDELITY
High fidelity is a term used to refer to high-quality reproduction of sound. High-fidelity equipment has inaudible noise and distortion and a flat frequency response within the human hearing range. 
Hi-fi is simply a shortened form of "high fidelity" and implies a high degree of accuracy (fidelity) in reproducing sound. In the past few decades, quality audio recording equipment has become fairly easy to obtain, so just about any music or video we listen to will probably sound quite good. The aim of a hi-fi system is to make the audio sound more authentic and real. Many audiophiles focus on headphones because, unlike a speaker system, the sound is not affected by the acoustics of the room, leading to a much more consistent listening experience. 

Features of hi-fi sound reproduction


 The loudness of each frequency present in the (usually complex) sound wave must bear the same relationship to the other frequencies as in the original sound.
 The nature of the sound must remain the same: only the frequencies present in the original sound are present in the reproduced sound.
 The reproduced sound must be free of distortions introduced by time delays in some but not all components of the original sound, i.e., free of phase shifts.
 A good sound reproduction system is relatively non-directional, reproducing the original sound faithfully in all reasonable directions from the source.

REFERENCES

 Beranek, L. L. (1993). Acoustical Measurements (revised edn.).
 Moser, P. (2015). Electronics and Instrumentation for Audiologists. Psychology Press.
 Haughton, P. M. Acoustics for Audiologists.
 https://en.wikipedia.org/wiki/High_fidelity