2 - S.lavanya (Msc. Audiology) - Technology in Audiology - Biomedical Signals Acquisition and Processing Technique, High Fidelity DRAFT
PRESENTATION NUMBER: 2
INDEX
2. Origin of biomedical signals
3. Classification of biomedical signals
4. Biomedical signal processing
   1. Signal acquisition
   2. Signal processing
   3. Signal analysis
6. References
● A signal is a quantity that conveys information about a physical system and/or its
functioning.
● The task of signal processing is to extract useful information contained in the signal and
make it available in a desired form.
Living organisms are made up of many component systems — the human body, for example,
includes the nervous system, the cardiovascular system, and the musculoskeletal system, among
others. Each system is made up of several subsystems that carry on many physiological
processes.
Biomedical signals are those signals that are used primarily for extracting information
from a biological system under investigation.
Biomedical signals are observations of physiological activities of organisms, ranging from
gene and protein sequences, to neural and cardiac rhythms, to tissue and organ images.
RESTING POTENTIAL:
When there is no signal, the cell is said to be in a resting state. The concentration of Na+ is
higher outside the cell than inside, and the reverse is true for K+; that is, there is more positive
charge outside than inside, creating an electrical gradient. At rest, when no signals are present,
the membrane potential is approximately -70 mV; that is, the inside of the cell is 70 mV less
positive than the outside. In the resting state, membranes of excitable cells readily permit the
entry of K+ and Cl- ions, but block the entry of Na+ ions.
DEPOLARIZATION:
If a stimulus, which could be electrical or chemical, is applied to the cell membrane, the
permeability of the membrane suddenly changes at one point, allowing sodium ions to enter the
cell.
As sodium flows in, the potential within the cell rises from its negative resting value and stops
when it reaches about +20 mV with respect to the outside; this process of change in voltage is
called depolarization. Once the stimulus exceeds the threshold, an action potential is created.
REPOLARIZATION:
Once the inside of the cell reaches about +20 mV, the characteristics of the cell membrane
change and it again becomes impervious to sodium ions.
The membrane now allows only potassium ions to leave the cell. As they cross the cell
boundary, the potential inside the cell starts dropping again, and it can go as low as -90 mV;
this process is called repolarization.
There is an overshoot, called hyperpolarization, after which the potential slowly returns to the
resting value.
Because each cell can supply only a very small amount of current, recording these biopotentials
requires the coordinated electrochemical activity of a large number of cells.
CLASSIFICATION OF BIOMEDICAL SIGNALS:
1. Bioelectric signals
2. Bioacoustic signals
3. Biomechanical signals
4. Biochemical signals
5. Biomagnetic signals
6. Bio-optic signals
7. Bioimpedance signals
1. Bioelectric Signals:
These are unique to the biomedical systems. They are generated by nerve cells and muscle cells.
Their basic source is the cell membrane potential which under certain conditions may be excited
to generate an action potential. The electric field generated by the action of many cells
constitutes the bioelectric signal.
Examples:
❖ ECG (electrocardiogram)
❖ EEG (electroencephalogram)
❖ EMG (electromyogram)
2. Bioacoustic Signals:
The measurement of acoustic signals created by many biomedical phenomena provides
information about the underlying phenomena.
Examples:
❖ Blood flow in the heart, through the heart's valves
❖ Air flow through the upper and lower airways and in the lungs, which generates typical
acoustic signals
3. Biomechanical Signals:
These signals originate from some mechanical function of the biological system. They include all
types of motion and displacement signals, pressure and flow signals etc.
Example:
❖ The movement of the chest wall in accordance with the respiratory activity
4. Biochemical Signals:
The signals which are obtained as a result of chemical measurements from the living tissue or
from samples analyzed in the laboratory. Examples:
❖ Measurement of partial pressure of carbon-dioxide (pCO₂)
❖ Measurement of partial pressure of oxygen (pO₂)
❖ Measurement of concentration of various ions in the blood.
5. Biomagnetic Signals:
Extremely weak magnetic fields are produced by various organs such as the brain, heart and
lungs. The measurement of these signals provides information which is not available in other
types of bio-signals such as bioelectric signals.
Example:
❖ Magneto-encephalograph signal from the brain.
6. Bio-optical Signals:
These signals are generated as a result of optical functions of the biological systems, occurring
either naturally or induced by the measurement process.
Example:
❖ Blood oxygenation may be estimated by measuring the transmitted/ back scattered light
from a tissue at different wavelengths.
7. Bio-impedance Signals:
The impedance of the tissue is a source of important information concerning its composition,
blood distribution and blood volume etc.
Example:
❖ The measurement of galvanic skin resistance
The bioimpedance signal is also obtained by injecting sinusoidal current in the tissue and
measuring the voltage drop generated by the tissue impedance.
Example: The measurement of respiration rate
Biomedical signal analysis draws on both engineering techniques and biomedical knowledge. It
is a rapidly expanding field with a wide range of applications. These range from the construction
of artificial limbs and aids for the disabled to the development of sophisticated medical
monitoring systems that can operate in a noninvasive manner to give real-time views of the
workings of the human body. A number of medical systems, including ultrasound,
electrocardiography, and plethysmography, are in common use for many purposes.
Objectives of Biomedical Signal Analysis
1. Information gathering - measurement of phenomena to interpret a system.
2. Diagnosis - detection of malfunction, pathology, or abnormality.
3. Monitoring - obtaining continuous or periodic information about a system.
4. Therapy and control - modification of the behaviour of a system based upon the outcome
of the activities listed above to ensure a specific result.
5. Evaluation - objective analysis to determine the ability to meet functional requirements,
obtain proof of performance, perform quality control, or quantify the effect of treatment.
The processing of biomedical signals usually consists of 3 main stages:
1. Signal acquisition
2. Signal processing
3. Signal analysis
1. Signal Acquisition
Procedures:
● Invasive
○ Placement of transducers or other devices inside the body (e.g., use of needle
electrodes, with the help of a physician).
● Noninvasive
○ minimal risk
○ Surface electrodes (Placed on the skin surface)
● Active
○ require external stimuli
● Passive
○ not require external stimuli
Transducers:
● A transducer is a device that converts energy from one form to another.
● Transducers attached to a patient convert biological signals, like blood pressure, pulse
rate, mechanical movement, and electrical activity, e.g., of heart, muscle and brain, into
electrical signals, which are transmitted to the computer.
● Preamplifiers help in the amplification of the signal prior to the signal processing.
● The analog signal usually needs to be amplified and bandpass or low-pass filtered.
Noise is common in most measurement systems and is considered a limiting
factor in the performance of a medical instrument. The main aim of many signal
processing techniques is to minimize the variability in the measurements. In
biomedical measurements, variability has four different origins:
○ Physiological variability;
○ Environmental noise or interference;
○ Transducer artifact; and
○ Electronic noise.
Lowpass filters allow low frequencies to pass with minimum attenuation whilst higher
frequencies are attenuated. Conversely, highpass filters pass high frequencies, but
attenuate low frequencies. Bandpass filters reject frequencies above and below a
passband region. Bandstop filter passes frequencies on either side of a range of attenuated
frequencies. The bandwidth of a filter is defined by the range of frequencies that are not
attenuated.
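As a concrete sketch of these four filter types, the snippet below designs each one with SciPy Butterworth routines (SciPy assumed available) and checks the gain at a pass-band and a stop-band frequency. The sampling rate and cutoff frequencies are illustrative choices, not values from the text.

```python
import numpy as np
from scipy import signal

fs = 500.0  # sampling rate in Hz (illustrative)

# Each call returns (b, a) coefficients usable with signal.lfilter/filtfilt.
lowpass = signal.butter(4, 40.0, btype="lowpass", fs=fs)
highpass = signal.butter(4, 100.0, btype="highpass", fs=fs)
bandpass = signal.butter(4, [5.0, 40.0], btype="bandpass", fs=fs)
bandstop = signal.butter(4, [45.0, 55.0], btype="bandstop", fs=fs)

def gain_at(ba, f_hz):
    """Magnitude response of a (b, a) filter at a single frequency."""
    w, h = signal.freqz(*ba, worN=[f_hz], fs=fs)
    return float(abs(h[0]))

# Lowpass: 5 Hz passes almost unattenuated, 100 Hz is strongly attenuated.
print(round(gain_at(lowpass, 5.0), 3), round(gain_at(lowpass, 100.0), 3))
# Bandstop: 50 Hz falls inside the attenuated band, 10 Hz does not.
print(round(gain_at(bandstop, 50.0), 3), round(gain_at(bandstop, 10.0), 3))
```

The bandwidth definition above corresponds to the frequencies where `gain_at` stays near one.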
● Most biomedical signals are low energy signals and their acquisition takes place in the
presence of noise and other signals originating from underlying systems that interfere
with the original one. Noise is characterized by certain statistical properties that facilitate
the estimation of Signal to Noise ratio.
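For instance, when the noise is known to be zero-mean with a given variance, the signal-to-noise ratio in decibels follows directly from the signal and noise powers. The waveform and noise level below are illustrative stand-ins, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)

clean = np.sin(2 * np.pi * 10 * t)      # stand-in signal, power 0.5
noise = rng.normal(0.0, 0.1, t.size)    # zero-mean noise, variance ~0.01
noisy = clean + noise

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels."""
    return 10.0 * np.log10(signal_power / noise_power)

# Knowing the noise statistics lets us quantify the acquisition quality.
snr = snr_db(np.mean(clean ** 2), np.var(noise))
print(round(float(snr), 1))   # ~17 dB for these illustrative powers
```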
● Once converted, the signal is often stored, or buffered, in memory.
Signal Processing:
● Digital signal processing algorithms applied on the digitized signal are mainly
categorized as artifact removal processing methods and events detection methods.
Artifact removal:
● It is the first building block of the signal processing
● It is the conditioning of the signal.
Artefactual signals arise from several internal and external sources.
● Sources of noise:
    ○ Physiological interference (e.g., signals from muscles)
    ○ Instrumentation used
    ○ Environment of the experiment
    ○ Power line interference (50 Hz or 60 Hz)
Physiological Interference:
(1) Signals from muscles
All muscle activity produces electrical signals. Signals from muscles other than the heart are
called EMG signals and appear on the monitor as narrow, rapid spikes associated with muscle
movement. These signals are sufficiently dissimilar to the ECG signals that they can be
electronically reduced or "filtered" from the trace. This filtering is readily observed as a
reduction in the size of EMG signals when the monitor is switched from the diagnostic mode to
the monitor mode.
(2) Power line interference
Another prominent kind of noise is power line interference: 60 Hz in the United States, and
50 Hz in Europe and India. These frequencies are much lower than those of communication
signals, but they fall well within the band of biomedical signals. Wherever a biomedical signal is
recorded near electrical wiring, the power lines act as a source of electromagnetic interference at
the power frequency; the leads used for recording act as antennas, pick up this electromagnetic
noise, and the interference is recorded along with, or added to, the biomedical signal.
● Time-domain filtering
    ○ Detrending: uses a polynomial approximation based on a least-squares fit of a
straight line (or a composite line for piecewise linear trends) to the signal, then
subtracts the resulting approximate function from the original signal. Application:
trends/shifts (low-frequency artifacts) leading to improper amplitudes (e.g.,
breathing in ECG)
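The detrending idea, a least-squares line fit followed by subtraction, can be sketched in a few lines of NumPy; the drifting baseline here is a synthetic stand-in for a breathing artifact.

```python
import numpy as np

fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)

clean = np.sin(2 * np.pi * 5 * t)          # stand-in physiological signal
drift = 0.8 * t + 0.3                      # slow baseline trend (e.g., breathing)
measured = clean + drift + 0.05 * rng.normal(size=t.size)

# Least-squares fit of a straight line (degree-1 polynomial) to the record...
coeffs = np.polyfit(t, measured, deg=1)
trend = np.polyval(coeffs, t)
# ...then subtract the approximating function from the original signal.
detrended = measured - trend

print(round(float(np.mean(measured)), 2), round(float(np.mean(detrended)), 6))
```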
● Frequency-domain digital filters
    ○ Butterworth/Chebyshev: simplicity, monotonically decreasing magnitude
response, and maximally flat magnitude response in the pass-band. Application:
remove high-frequency noise with minimal loss of signal components in the
pass-band (e.g., ECG contaminated with EMG noise)
    ○ Notch/Comb: a very selective band-stop filter. For multiple notches (i.e., at
multiple frequencies), use the comb digital filter. Application: remove regular
power source noise leading to ripples riding on top of the signal (e.g., power-line
interference in ECG). (MATLAB: iirnotch, iircomb, filter)
    ○ Savitzky-Golay: method of data smoothing based on local least-squares
polynomial approximation. Used to "smooth out" a noisy signal whose frequency
span (without noise) is large. Preserves characteristics such as local maxima and
minima and peak width. Application: trends/shifts (low-frequency artifacts)
leading to improper amplitudes (e.g., breathing in ECG)
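A sketch of the notch-filter case in Python, using SciPy's `iirnotch` (the counterpart of the MATLAB function named above); the ECG stand-in and the 60 Hz ripple are synthetic.

```python
import numpy as np
from scipy import signal

fs = 360.0                              # a common ECG sampling rate
t = np.arange(0, 2.0, 1.0 / fs)

ecg_like = np.sin(2 * np.pi * 1.5 * t)        # stand-in for the ECG
hum = 0.5 * np.sin(2 * np.pi * 60.0 * t)      # power-line ripple (60 Hz)
noisy = ecg_like + hum

# Very selective band-stop filter centred on the power-line frequency.
b, a = signal.iirnotch(w0=60.0, Q=30.0, fs=fs)
filtered = signal.filtfilt(b, a, noisy)

def power_at(x, f_hz):
    """Power of x in the DFT bin nearest f_hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return float(spectrum[np.argmin(np.abs(freqs - f_hz))])

# The 60 Hz ripple is removed while the low-frequency content survives.
print(power_at(filtered, 60.0) < 0.01 * power_at(noisy, 60.0))
```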
● Adaptive filters (Optimal)
○ Wiener: It is commonly used to denoise audio signals, especially speech, as a
preprocessor before speech recognition.
Event detection
● Biomedical signals carry signatures of physiological events
● The part of a signal related to a specific event of interest is often referred to as an epoch
(e.g., QRS wave in ECG)
● Event detection techniques are normally used in order to identify epochs
● Once an event is identified, the corresponding waveform may be segmented and
characterised in terms of time or frequency (e.g., peak-to-peak amplitude,
waveshape/morphology, time duration, intervals between events, energy distribution,
spectral components, etc.)
Some of the methods:
● Envelope estimation
● Wave delineation
● Peak detection
● Cross-correlation
● Auto-correlation
Envelope estimation: The signal's envelope is equivalent to its outline, and an envelope
detector connects all the peaks in the signal.
Application: detection of burst moments and estimation of the amount of activity in the EMG
signal.
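One common way to trace this outline is the magnitude of the analytic signal (Hilbert transform); the bursting "EMG" below is a synthetic stand-in.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)

# Stand-in for an EMG record: a 100 Hz carrier that bursts mid-record.
burst = ((t > 0.4) & (t < 0.6)).astype(float)
emg_like = burst * np.sin(2 * np.pi * 100 * t)

# The analytic signal's magnitude traces the outline of the waveform,
# connecting the peaks: the envelope.
envelope = np.abs(hilbert(emg_like))

inside = float(np.mean(envelope[(t > 0.45) & (t < 0.55)]))
outside = float(np.mean(envelope[t < 0.3]))
print(round(inside, 2), round(outside, 2))   # high during the burst, near zero elsewhere
```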
Wave delineation: Direct thresholding; boundaries are defined as the instants a wave crosses a
certain amplitude threshold level. Seldom applied in practice, since signals are usually affected
by baseline drifts or offsets.
Application: detection of the QRS complex (largest slope/rate of change in a cardiac cycle).
Peak detection: Envelope estimation and wave delineation are frequently used in combination.
Thresholding is used to determine candidate peaks, and a local maxima search is normally
needed to select outstanding peaks.
Application: estimation of the ECG R-R distance.
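A sketch of this R-peak workflow, thresholding plus a local-maxima search (here SciPy's `find_peaks`), applied to a synthetic ECG with one Gaussian "R wave" per beat; all values are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250.0
t = np.arange(0, 5.0, 1.0 / fs)

# Toy ECG: one narrow Gaussian "R wave" per beat at 75 bpm (0.8 s apart).
beat_times = np.arange(0.5, 5.0, 0.8)
ecg = sum(np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2)) for bt in beat_times)
ecg = ecg + 0.05 * np.random.default_rng(2).normal(size=t.size)

# Thresholding proposes candidates; find_peaks keeps local maxima above it,
# at least 0.3 s apart (a refractory constraint).
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.3 * fs))
rr_intervals = np.diff(peaks) / fs   # R-R distances in seconds

print(len(peaks), round(float(np.mean(rr_intervals)), 2))   # 6 beats, ~0.8 s apart
```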
Cross-correlation: Measures the similarity of two series as a function of the lag of one relative
to the other. Intended to find common patterns (i.e., their lags) in a pair of signals.
Application: detection of EEG rhythms.
Auto-correlation: The cross-correlation of a signal with itself. Intended to find repeating
patterns (i.e., their lags), such as the presence of a periodic signal obscured by noise.
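Both operations can be sketched with NumPy's `correlate`; the delay and the period below are synthetic ground truth recovered from noisy data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
lag_true = 150

# Two noisy recordings of a common pattern, the second delayed by 150 samples.
pattern = rng.normal(size=n)
x = pattern + 0.3 * rng.normal(size=n)
y = np.concatenate([np.zeros(lag_true), pattern])[:n] + 0.3 * rng.normal(size=n)

# Cross-correlation peaks at the lag that best aligns the two series.
xcorr = np.correlate(y, x, mode="full")
lags = np.arange(-n + 1, n)
lag_found = int(lags[np.argmax(xcorr)])
print(lag_found)   # recovers the common pattern's lag

# Auto-correlation of a periodic signal peaks again near its period,
# even when the periodicity is obscured by noise.
period = 100
s = np.sin(2 * np.pi * np.arange(n) / period) + 0.3 * rng.normal(size=n)
acorr = np.correlate(s, s, mode="full")[n - 1:]   # non-negative lags only
period_found = int(np.argmax(acorr[period // 2 : 2 * period])) + period // 2
print(period_found)   # close to the true period of 100 samples
```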
Signal Analysis:
Once the data have been acquired and filtered, they typically are processed to reduce their
volume and to abstract information for use by interpretation programs. Often the data are
analyzed to extract important parameters, or features, of the signal, e.g., the duration or intensity
of the ST segment of an ECG. The computer can also analyze and classify the shape of the
waveform by comparing the signal to models of known patterns. Further analysis (in connection
with a suitable knowledge base) is necessary to determine the meaning or importance of the
signals, e.g., to allow automated ECG-based cardiac diagnosis.
● The acoustic signal is converted to its electrical analog at the microphone stage of the
hearing aid system.
● After this conversion, a frequency filter is introduced to reduce possible distortion of the
input signal. The signal is then "sampled" a given number of times per second. Normally,
the sampling rate is 10,000 times per second, or greater.
● The analog signal is then converted to its digital equivalent by the analog to digital (A/D)
converter. Each sample receives a digital code. Binary numbers (0 and 1) are used to
represent the digital value of each sample.
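The sampling-and-coding step can be sketched as follows; the 8-bit word length and the 440 Hz input are illustrative choices (real hearing aids use their own word lengths and rates).

```python
import numpy as np

fs = 10_000                    # sampling rate: 10,000 samples per second
bits = 8                       # word length of each binary code (illustrative)
levels = 2 ** bits

t = np.arange(0, 0.005, 1 / fs)            # 5 ms of input
analog = np.sin(2 * np.pi * 440 * t)       # electrical analog of a 440 Hz tone

# A/D stage: map the range [-1, 1] onto integer codes 0..255,
# each represented by an 8-bit binary number (0s and 1s).
codes = np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(int)
binary = [format(c, "08b") for c in codes]

# D/A stage: reconstruct the analog value; the residual is quantization error.
reconstructed = codes / (levels - 1) * 2.0 - 1.0
max_error = float(np.max(np.abs(reconstructed - analog)))
print(binary[0], round(max_error, 4))
```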
● Following the digitization of the signal, the digital representations are processed by a
central processing unit (CPU) or microprocessor. The digital values can be multiplied,
divided, added, subtracted and grouped in defined ways. The microprocessor contains
various algorithms. An algorithm is a system of instructions that operates in a manner
determined by a set of mathematical rules and equations. If the algorithm is a dedicated
one, it performs a specific task relative to the processing of the input signal. For example,
one algorithm may control the frequency response of the instrument, another may control
loudness growth, a third may function to enhance the speech signal in a background of
noise, etc.
● After the microprocessor has performed its tasks, the digitized signal must be converted
back to its analog equivalent. This is accomplished at the digital to analog (D/A)
conversion stage.
● When the digitized signal is converted back to its analog form, it is frequency filtered again
to prevent signal distortion. It is then amplified in the conventional manner and delivered to
the receiver (speaker) of the hearing aid.
REFERENCES
1. John Wiley & Sons.
2. Willis J. Tompkins (2004). Biomedical Digital Signal Processing.
HIGH FIDELITY
INDEX
1. Signal acquisition and processing techniques
2. Differential amplification
3. Common mode rejection
4. Artifact rejection
5. Filtering
6. Signal averaging
7. Signal acquisition and processing in OAE
8. Auditorium acoustics
9. High fidelity
DIFFERENTIAL AMPLIFICATION
The term amplifier suggests a device that increases the strength of a signal (acoustic or electrical).
A differential amplifier is a type of electronic amplifier that amplifies the difference between two
input voltages but suppresses any voltage common to the two inputs. It is an analog circuit with two
inputs (V- and V+) and an output (Vo). The output is ideally proportional to the difference between
the two voltages
Vo = A [( V+) - (V-)] where A is the gain of the amplifier.
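A numeric sketch of this relation: a tiny biological signal rides on a large interference voltage common to both inputs, and the subtraction removes the common part. The signal amplitudes and gain are illustrative.

```python
import numpy as np

t = np.arange(0, 1.0, 1e-3)
A = 1000.0                                  # amplifier gain

eeg = 20e-6 * np.sin(2 * np.pi * 10 * t)    # tiny signal of interest (volts)
hum = 50e-3 * np.sin(2 * np.pi * 50 * t)    # interference common to both leads

v_plus = hum + eeg                          # active input: hum + signal
v_minus = hum                               # reference input: hum only

# Ideal differential amplifier: Vo = A [(V+) - (V-)]; the common hum cancels.
vo_diff = A * (v_plus - v_minus)
# Non-differential amplifier: everything on the line gets amplified.
vo_single = A * v_plus

print(float(np.max(np.abs(vo_diff))), float(np.max(np.abs(vo_single))))
```

The differential output is the amplified signal alone, while the single-ended output is dominated by the amplified hum.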
For neurofeedback purposes, we use the differential amplifier, which amplifies voltage
differences between two points. This is different from a power amplifier used for a public
address system, which is non-differential: it does not discriminate, and thus everything on the
line gets amplified.
A differential amplifier with a very high gain and extremely high input impedance is called an
operational amplifier (op-amp). The output node of an op-amp has near-zero resistance, allowing
it to behave like an ideal voltage source that supplies as much current as necessary.
Differential amplifiers are used mainly to suppress noise. They can also act as volume control
circuits, as automatic gain control circuits, and for amplitude modulation.
ARTIFACT REJECTION
When a sweep occurs that contains excessive voltage amplitudes, excessive noise is included in
the average, which can decrease the quality of a recording. Most evoked potential systems
provide a method by which sweeps containing excessive noise can be excluded from the ongoing
average. This is known as artifact rejection. The artifact rejection level is set so that sweeps
containing voltages well above the voltages of the response of interest are rejected and thus not
added into the summed or averaged response.
An artifact is electrical activity that is not part of the response and that should not be included in
the analysis of the response. In short, artifact rejection is used to differentiate between
electromagnetic (non-patient) activity and the electrophysiological response.
Artifacts can be dealt with in three ways:
1. Determine the source of the artifact and eliminate it.
2. Modify the test parameters, i.e., filter settings, electrode arrays, number of sweeps.
3. Use a technique known as artifact rejection.
Typically, artifact rejection is designed to detect any signal larger in amplitude than a specified
value within the sensitive range of the A-D converter, or a percentage thereof (e.g., 90% of
full-scale deflection). When such a signal is detected during an averaging run, the entire sweep is
excluded from the average.
Artifact rejection is a technique in which all of the sweeps containing high-amplitude signals that
have exceeded the preset limit are excluded from the average. Each successive digitized trace
first goes to a buffer, where it is examined for any voltages that exceed some preset limits. If all
voltages are at or below the preset level, then the digitized voltages are passed to the memory
unit for averaging with prior and succeeding traces. Conversely, if excessive voltage is found at
any address in the analysis window, then that sample in the buffer is erased instead of being
forwarded to the averaging memory.
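The buffer-and-compare logic can be sketched in a few lines; the 90% rejection level and the sweep data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
full_scale = 1.0
reject_level = 0.9 * full_scale     # e.g., 90% of full-scale deflection

# Ten digitized sweeps of 256 samples; sweeps 2 and 7 carry a large artifact.
sweeps = 0.1 * rng.normal(size=(10, 256))
sweeps[2, 100] = 1.5
sweeps[7, 30] = -2.0

# A sweep is excluded whenever ANY sample exceeds the preset limit;
# only clean sweeps are forwarded to the averaging memory.
keep = np.max(np.abs(sweeps), axis=1) <= reject_level
average = sweeps[keep].mean(axis=0)

print(int(keep.sum()))   # number of sweeps admitted to the average
```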
Two common clinical limitations are:
(i) the inability to make progress with averaging because of almost continuous artifact
rejection;
(ii) the obvious artifact contamination of an averaged waveform, despite the use of artifact
rejection.
(Increasing the sensitivity of the amplifier (increasing the gain) to solve the second problem will
also increase the sensitivity of the artifact rejection process and perhaps create the first problem.)
FILTERING
In signal processing, a filter is a device or process that removes some unwanted components from a
signal. Filtering is the process of removing certain portions of the input signal in order to create a
new signal without any background noise.
Filtering is done by sending the input signal through a system function which determines the
degree of amplification for each frequency in the signal. The desired frequencies are boosted by
the instrument gain, while the unwanted frequencies are multiplied by a gain of (near) zero.
Phase Response
Ideally, a filter should have a linear phase response. This means that there is a constant time delay
difference from the input for all input frequencies. If the phase response is not linear, then different
frequencies would be delayed by different amounts.
Filters are categorised on the basis of pass band and stop band. A passband is the range of
frequencies or wavelengths that can pass through a filter.
A stopband is a band of frequencies, between specified limits, through which a circuit, such as a
filter or telephone circuit, does not allow signals to pass.
The low-pass filter passes signals with a frequency lower than a selected cutoff frequency and
attenuates signals with frequencies higher than the cutoff frequency.
The exact frequency response of the filter depends on the filter design. The filter is sometimes
called a high-cut filter or treble-cut filter in audio applications.
A low-pass filter is used as an anti-aliasing filter prior to sampling and for reconstruction in digital-
to-analog conversion.
Band pass filter: A band-pass filter is a device that passes frequencies within a certain range
and rejects (attenuates) frequencies outside that range. It removes all frequencies outside f1 and
f2 (f1 = low cutoff, f2 = high cutoff).
The bandwidth of the filter is simply the difference between the upper and lower cut off
frequencies.
Bandpass filters are widely used in wireless transmitters and receivers, and in EEG measurements.
The main function of such a filter in a transmitter is to limit the bandwidth of the output signal to
the band allocated for the transmission.
This prevents the transmitter from interfering with other stations. In a receiver, a bandpass filter
allows signals within a selected range of frequencies to be heard or decoded, while preventing
signals at unwanted frequencies from getting through.
Notch filter rejects just one specific frequency - an extreme band-stop filter
Comb filter has multiple regularly spaced narrow passbands, giving the frequency response the
appearance of a comb.
SIGNAL AVERAGING
Signal averaging is a digital technique for separating a repetitive signal from noise without
introducing signal distortion (Tompkins and Webster, 1981).
Signal averaging sums a set of time epochs of the signal together with the superimposed random
noise. If the time epochs are properly aligned, the signal waveforms sum directly, while the
uncorrelated noise averages out over time. Thus, the signal-to-noise ratio (SNR) is improved.
Signal averaging is based on the following characteristics of the signal and the noise:
The signal waveform must be repetitive (although it does not have to be periodic). This means
that the signal must occur more than once but not necessarily at regular intervals.
The noise must be random and uncorrelated with the signal. Random means that the noise is not
periodic and that it can only be described statistically (e.g., by its mean and variance).
The temporal position of each signal waveform must be accurately known
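A minimal numeric sketch of why averaging works under these conditions: with N properly aligned epochs, the repetitive signal adds coherently while zero-mean noise shrinks by a factor of sqrt(N). The waveform and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sweeps, n_samples = 400, 200

# A repetitive evoked response buried in random, uncorrelated noise.
response = 0.5 * np.sin(np.linspace(0, 2 * np.pi, n_samples))
sweeps = response + rng.normal(0.0, 1.0, size=(n_sweeps, n_samples))

averaged = sweeps.mean(axis=0)   # properly aligned epochs summed and scaled

# The signal survives intact; the noise standard deviation shrinks by sqrt(N).
residual_one = float(np.std(sweeps[0] - response))
residual_avg = float(np.std(averaged - response))
print(round(residual_one / residual_avg, 1))   # ~sqrt(400) = 20
```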
Signal averaging is a kind of digital filtering process. The Fourier transform of the transfer
function of an averager is composed of a series of discrete frequency components. Due to the
appearance of its amplitude response, this type of filter is called a comb filter.
The width of each tooth decreases as the number of sweep repetitions increases. The desired signal
has a frequency spectrum composed of discrete frequency components, a fundamental and
harmonics. Noise, on the other hand, has a continuous distribution.
As the bandwidth of each of the teeth of the comb decreases, this filter more selectively passes
the fundamental and harmonics of the signal while rejecting the random noise frequencies that
fall between the comb teeth. The signal averager, therefore, passes the signal while rejecting the
noise.
APPLICATION
SIGNAL PROCESSING:
The auditory evoked potentials (AEPs) are the electrical responses evoked by an auditory
stimulus from the auditory system. These electrical responses, generated by the structures from
the cochlea to the cortex, are measured. The responses can be elicited by different stimuli;
however, the response pattern may differ with respect to the stimulus. An AEP is a complex
response to a particular type of external stimulus that represents neural activity generated at
several anatomical sites. The amplitude of the AEP is very small, approximately 0.01-1 uV.
This small potential is masked by the larger background activity generated by several sources,
such as the random, ongoing electrical activity (EEG) within the brain, muscular (myogenic)
activities in the skull, electronic devices in the environment (such as 60 Hz interference), and
other artifacts produced while generating the stimuli or recording the potentials. Therefore, in
order to identify the AEP, a device consisting of three major components is required.
The ABR is recorded from electrodes attached to various positions on the head. The recording
occurs by measuring the difference in the electrical activity between two electrodes which is
known as differential recording.
Auditorium acoustics
An auditorium is a room built to enable an audience to hear and watch performances. It may
include any room intended for listening to music, including theatres, churches, classrooms, and
meeting rooms.
Factors that affect hearing conditions in auditorium
Location
Shape
Layout of boundary surfaces
Dimensions
Seating arrangements
Volume
Capacity of audience
Stage position
Ventilation
Materials used for construction
Reflections
After the arrival of the direct sound, a series of semi-distinct reflections from various reflecting
surfaces reaches the listener. These early reflections occur within 50 ms. The reflections that
arrive after the early reflections are of lower amplitude and very closely spaced in time. These
become the late reflections, or the reverberant sound.
A bright and clear-sounding auditorium will provide 30 separate early reflections, each arriving
within the first 1/40th of a second after the initial impact of the signal. Some will be strong and
some weak, but the overall power of the early reflections should be in the range of 60 dBA to
provide good speech reinforcement.
The auditory system determines the direction of a sound source from the direct sound reaching
the surface of the ear. Early reflections that arrive within 35 ms reinforce the direct sound.
The acoustic quality of an auditorium to enable the listeners to hear a hi-fi sound depends on factors
such as reverberation time.
But in larger halls, a longer reverberation time becomes necessary for music but this creates risks of
speech being no longer intelligible. To accommodate two or more acoustic uses generally means
that a reverberation time change is desirable.
Introducing acoustic absorbent is the simplest approach, but this has the additional effect of
reducing sound level, which may be unacceptable for the lower reverberation time configuration.
Echo
An echo is a reflection of the sound that arrives at the listener with a delay after the direct sound.
This delay is directly proportional to the distance of the reflecting surface from the source and
the listener; echo depends on the reflection and absorption of sound by the walls.
An echo is an intelligible repetition and is not to be confused with reverberation, which is
unintelligible.
Reverberation time
Reverberation is defined as the prolonged reflection of sound from the walls, floor, and ceiling of
a room. It is the persistence of audible sound after the source has stopped emitting sound.
Reverberation time is defined as the time for the sound to die away to a level 60 decibels below
its original level. Reverberation time (RT) shows how long the sound can be heard in the hall
after the sound source stops producing sound. The definition of the reverberation time depends
on what we define as the end of the sound we hear.
As our ears are very sensitive to quiet sounds, it was postulated that the sound ends when its
intensity I becomes 10^-6 of its initial intensity Io.
The late sound, after about 100 ms, is called the reverberant sound. It usually decays in a linear
manner, and this duration is described by the reverberation time.
An approximate formula for the reverberation time, RT, is given below:
RT = RT60 = time for the level to drop 60 dB below the original level
RT60 = 0.049 V / a
where V = volume of the space, a = total room absorption
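Worked numerically (with the 0.049 constant, the formula takes V in cubic feet and a in sabins; the hall dimensions below are illustrative):

```python
import math

def rt60(volume_ft3, absorption_sabins):
    """Sabine reverberation time: seconds for the level to drop by 60 dB."""
    return 0.049 * volume_ft3 / absorption_sabins

# The 60 dB criterion matches I falling to 10^-6 of Io:
# 10 * log10(Io / (1e-6 * Io)) = 60 dB.
assert round(10 * math.log10(1e6)) == 60

# Illustrative hall: 100 ft x 60 ft x 25 ft with 6000 sabins of total absorption.
volume = 100 * 60 * 25            # 150,000 cubic feet
print(round(rt60(volume, 6000), 2))   # about 1.2 s
```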
ACOUSTIC ABSORPTION removes acoustic energy. There are three possible mechanisms:
porous absorption, panel absorption and Helmholtz resonance.
Porous absorption, as already mentioned, occurs with any porous material. In auditoria the
major absorbent surface is the audience, whose clothes act as efficient porous absorbers.
Absorption is measured by the absorption coefficient (a), which is simply the fraction of
incident energy absorbed. Because porous material absorbs best when its thickness is
comparable with the sound's wavelength, and low frequencies have long wavelengths, porous
absorbers are efficient at high but not low frequencies.
Panel absorbers can complement porous absorbers to give absorption over the whole
frequency range, though the maximum absorption coefficient of panel absorbers is not great.
Panel absorption strongly influences the low-frequency reverberation time.
A traditional Helmholtz resonator is a rigid-walled cavity and an open neck, which result
in one acoustic resonance
In small rooms, many reflections begin arriving soon after the direct sound. Thus, early
and late reflections become fused into a single reverberant sound.
The only means for achieving a long reverberation time in a small room is to have low
absorption at the walls. This, however, results in prominent individual resonances.
A large room has many more resonances than a small room (within the audible range).
This produces a fairly smooth frequency response. Small rooms, on the other hand, may
produce considerable coloration of sound.
The reverberant field in a large room builds up much more slowly than that in a small
room, providing an important perceptual cue to room size.
HIGH FIDELITY
High fidelity is a term used to refer to high-quality reproduction of sound. High fidelity
equipment has inaudible noise and distortion and a flat frequency response within the human
hearing range.
Hi-Fi is really just a shortened version of "high fidelity" and implies a high degree of accuracy
(fidelity) in reproducing sound. In the past few decades, quality audio recording equipment has
become fairly easy to obtain, so just about any music or video we want to listen to will probably
sound pretty good. The aim of a Hi-Fi system is to make the audio sound more authentic and
real. Many audiophiles focus on headphones because, unlike a speaker system, the sound isn't
affected by the acoustics of the room, leading to a much more consistent listening experience.