
CSE 415: Signal Processing

Course content
1. History and overview
 Explain the purpose and role of digital signal processing and
multimedia in computer engineering.
 Explain some important signal processing areas such as digital
audio, multimedia, image processing, video, signal compression,
signal detection, and digital filters.
 Contrast analog and digital signals using the concepts of sampling
and quantization.
 Draw a digital signal processing block diagram and define its key
components: antialiasing filter, analog to digital converter, digital
signal processing, digital to analog converter, and reconstruction filter.
 Explain the need for using transforms and how they differ for analog
and discrete-time signals.
 Contrast some techniques used in transformations such as Laplace,
Fourier, and wavelet transforms.
 Indicate design criteria for low- and high-pass filters.
2. Relevant tools, standards, and/or engineering constraints
 Describe the tradeoffs involved with increasing the sampling rate.
 Indicate key issues involved with sampling periodic signals including
the sampling period.
 Indicate key issues involved with sampling non-periodic signals
including spectral resolution.
 Prove whether a system is linear, time-invariant, causal, and/or
stable given its input to output mapping.
 Derive non-recursive and recursive difference equations, as
appropriate, given descriptions of input-output behavior for a linear,
time-invariant system.
3. Convolution
 Explain how the concept of impulse response arises from the
combination of linearity and time-invariance.
 Derive the linear convolution summation from the definition of
impulse response and linearity.
 Use the commutative property of convolution as a foundation for
providing two explanations of how a system output depends on the
input and system impulse response.
4. Transform analysis
 State, prove, and apply properties of the z–transform and its
inverse.
 State, prove, and apply properties of the discrete-time Fourier
transform (DTFT) and its inverse.
 Explain how the DTFT may be interpreted as a spectrum.
 Explain the relationship between the original and transformed
domains (e.g., aliasing).
 State, prove, and apply properties of discrete Fourier transform
(DFT) and its inverse.

 Prove and state the symmetries of the Fourier transforms for real
signals.
 State the frequency shift property for Fourier transforms.
 Prove and state how Parseval’s theorem relates power or energy, as
appropriate, for the Fourier transforms.
 Explain the relationship among the z-transform, DTFT, DFT, and
FFTs (fast Fourier transforms).
 Define and calculate the Laplace transform of a continuous signal.
 Define and calculate the inverse Laplace transform.
5. Frequency response
 Interpret the frequency response of an LTI system as an alternative
view from the impulse response.
 Analyze the frequency response of a system using the DTFT and the
DFT.
 Determine pole and zero locations in the z-plane given a difference
equation describing a system.
 Relate the frequency selectivity of filters to the z–transform domain
system representation.
 Describe the repeated time series implication of frequency
sampling.
6. Sampling and aliasing
 State the sampling theorem and the related concepts of the Nyquist
frequency and aliasing.
 Demonstrate aliasing on a sampled sine wave.
 State the relationship between time and frequency domains with
respect to sampling.
 Explain when spectra are discrete vs. continuous.
 Calculate the errors or noise generated by sampling and quantizing.
7. Digital spectra and discrete transforms
 Sketch the spectrum of a periodic signal.
 Contrast the spectra of an impulse and a square wave.
 Calculate spectra of periodic and aperiodic signals.
 Explain how the block size controls the tradeoff between spectral
resolution and density.
 Calculate a spectrogram and explain what its key parameters are.
 Explain filtering as adding spectra in a frequency domain on a
logarithmic scale.
 Design interpolation and reconstruction filters using the sinc
function.
8. Finite and infinite impulse response filter design
 Design finite and infinite impulse response (FIR and IIR) filters that
have specified frequency characteristics including magnitude and
phase responses.
 Explain the general tradeoffs between FIR and IIR filters.
 Demonstrate that not all recursive filters are IIR, using a moving
average as an example.
 Use the DFT to accomplish filtering through (circular) convolution.

 State the condition for linear phase in an FIR filter.
 Explain the tradeoffs between spectral resolution, length, and delay
in an FIR filter.
 Explain why one or more FIR filter design methods work.
 Explain why one or more IIR filter design methods work including
notch filters using pole-zero pairs.
 Design a digital filter using analog techniques (e.g., bilinear
transform) and explain its key parameters.
 Explain physically realizable system issues relevant in filter design
including causality and time shifts, and response truncation.

Supplementary Topics
9. Window functions
 Explain how window functions improve transform properties.
 Explain the periodic assumption in spectral analysis.
 Explain the purpose of a window function and its effect on a
spectrum.
 Discuss the tradeoffs of common window functions such as
rectangular, Blackman, Hann, and Hamming.
 Select an appropriate window function given a problem statement
regarding detection or identification tradeoffs.
10. Multimedia processing
 Define signals that vary in time and/or space and interpret
frequencies in both domains.
 Describe how sampling affects image integrity.
 Explain how low-pass filtering tends to smooth images.
 Contrast between reconstruction and enhancement filters.
 Describe methods for minimizing image noise.
 Describe how digital techniques perceptually or otherwise enhance
speech and audio signals.
 Explain techniques for noise reduction (e.g., Weiner or median
filters) or cancellation (e.g., LMS filters) in audio processing.
 Explain the motivation for audio coding and state key elements of
MPEG or related algorithms including perceptual elements.
11. Control system theory and applications
 Define basic control system concepts (e.g., zero-state response,
zero-input response, stability).
 Contrast design methods (root-locus, frequency-response, state-
space) for control systems.
 Explain limitations and trade-offs associated with microcontroller
implementations of digital control systems.
 Describe potential applications of digital control systems for electro-
mechanical systems, including robotics.
 Implement a simple microcontroller-based motion control system
with sensors and actuators

1. History and overview

 Explain the purpose and role of digital signal processing and
multimedia in computer engineering.
 Explain some important signal processing areas such as digital
audio, multimedia, image processing, video, signal compression,
signal detection, and digital filters.
 Contrast analog and digital signals using the concepts of sampling
and quantization.
 Draw a digital signal processing block diagram and define its key
components: antialiasing filter, analog to digital converter, digital
signal processing, digital to analog converter, and reconstruction filter.
 Explain the need for using transforms and how they differ for analog
and discrete-time signals.
 Contrast some techniques used in transformations such as Laplace,
Fourier, and wavelet transforms.
 Indicate design criteria for low- and high-pass filters.

Explain the purpose and role of digital signal processing and multimedia in computer
engineering.

Multimedia communications concern the technology required to manipulate, transmit, and control audiovisual signals across a networked communications channel. The real-time challenges in multimedia communications are threefold.

First, compared with traditional textual applications, multimedia applications usually require much higher bandwidth. A typical 25-second 320x240 QuickTime movie can take 2.3 MB, which is equivalent to about 1000 screens of textual data.

Second, most multimedia applications require real-time traffic. Audio and video data must be played back continuously at the rate they are sampled. If the data does not arrive in time, playback stops, and human ears and eyes can easily pick up the artefact. In Internet telephony, human beings can tolerate a latency of about 250 milliseconds. If the latency exceeds this limit, the voice will sound like a call routed over a long satellite circuit and users will complain about the quality of the call. In addition to delay, network congestion has more serious effects on real-time traffic. If the network is congested, the only effect on non-real-time traffic is that the transfer takes longer to complete, but real-time data becomes obsolete and will be dropped if it does not arrive in time.

Third, multimedia data streams are usually bursty. Simply increasing the bandwidth will not solve the burstiness problem. For most multimedia applications, the receiver has a limited buffer. If no measure is taken to smooth the data stream, it may overflow or underflow the application's buffer. When data arrives too fast, the buffer overflows and some data packets are lost, resulting in poor quality. When data arrives too slowly, the buffer underflows and the application starves.

Solving these conflicts is the challenge multimedia networking must face.

The possibility of answering this challenge comes from the existing network software architecture and fast-developing hardware.
 Fast networks like Gigabit Ethernet, FDDI, and ATM provide high bandwidth
required by digital audio and video.
 The design of real-time protocols for multimedia networking becomes imperative to
solve the problem.

The main protocols, the Resource Reservation Protocol (RSVP) together with the Real-time Transport Protocol (RTP), the Real-Time Control Protocol (RTCP), and the Real-Time Streaming Protocol (RTSP), provide a working foundation for real-time services.

 Digital signal processing, filtering, and compression of the audio-visual signals

Important signal processing areas such as digital audio, multimedia, image processing,
video, signal compression, signal detection, and digital filters.

AUDIO
Capture, representation, and sampling variables
Speakers, headsets, and other audio output devices rely on sound cards to produce sounds such as music, voice, beeps, and chimes. Sound cards contain the chips and circuitry to record and play back a wide range of sounds using A/D and D/A conversion.

To record a sound, the sound card must be connected to an input device, such as a
microphone or audio CD player. The input device sends the sound to the sound card as an
analog signal. The analog signal flows to the sound card’s analog-to-digital converter (ADC).
The ADC converts the signal into digital (binary) data of 1s and 0s by sampling the signal at
set intervals as shown in the figure below.

The analog sound is a continuous waveform, with a range of frequencies and volumes. To
represent the waveform in a recording, the computer would have to store the waveform’s
value at every instant in time. Because this is not possible, the sound is recorded using a
sampling process.

Sampling involves breaking up the waveform into a set of intervals and representing all the values during each interval with a single value.
Several factors in the sampling process, namely the sampling rate, audio resolution, and mono or stereo recording, affect the quality of the recorded sound during playback.

Sampling rate, also called sampling frequency, refers to the number of times per second the sound is recorded. The more frequently the sound is recorded, the smaller the intervals and the better the quality. The sampling frequency used for audio CD, for example, is 44,100 times per second, which is expressed in hertz (Hz) as 44,100 Hz. Cassette-tape-quality multimedia files use a sampling rate of 22,050 Hz, and basic Windows sounds use a sampling rate of 11,025 Hz.

Audio resolution, defined as a bit depth such as 8-bit, 16-bit, or 24-bit, refers to the number of bits used to represent the sound at any one interval. A sound card using 8-bit resolution, for example, represents a sound sample with one of 256 values. Using a higher resolution provides a finer measurement scale, which results in a more accurate representation of the value of each sample and better sound quality. With 8-bit resolution, the sound quality is like that of AM radio; 16-bit resolution gives CD-quality sound, and 24-bit is used for high-quality digital audio editing.

Mono or stereo recording refers to the number of channels used during recording. Mono means that the same sound emits from both the left and the right speakers during playback; stereo means that two separate channels exist in the recording, one each for the right and left speakers. Most sound cards support stereo recording for better playback.
After the ADC converts the analog sound through sampling, the digital sound flows to the digital signal processor (DSP) on the sound card. The DSP requests instructions from the sound card's memory chip on how to process the digital data, then compresses the digital data to save space. Finally, the DSP sends the compressed data to the computer's main processor, which stores the data in .wav, .MP3, or another audio file format.

To play a recorded sound, such as .wav, .MP3 or any other, the main processor retrieves the
sound file from the hard disk, CD, or other storage device.

The processor then sends the data to the DSP, which decompresses it and consults the memory chip for instructions on how to recreate the sound. The DSP then sends the digital signals to the sound card's DAC (digital-to-analog converter), which converts the digital data to an electrical voltage. An output device, such as a speaker, uses an amplifier to strengthen the electrical voltage. This causes the speaker's cone to vibrate, recreating the sound. All of this happens in an instant.

Sound Card Layout


Let us now look at the sound card, which is an adapter card.

Figure: An ISA sound card
The connectors may look different on different sound cards, but as an example in the back of a simple
sound card you find connectors to:
 Microphone input, a jack
 Line input, a jack
 Phone jacks for active speakers
 A DB15 jack for MIDI or joystick.
Most sound cards typically have a 2-watt amplifier built in, which can drive a set of earphones.
Apart from the above the modern PC sound card contains several hardware systems relating
to the production and capture of audio.
The digital audio section of a sound card consists of a matched pair of 16-bit digital-to-analogue (DAC) and analogue-to-digital (ADC) converters. The DAC, as discussed earlier, takes digital information and converts it to analog. The ADC works the other way around; recording your voice on the system, for example, captures it from our analog world and turns it into a digital format for the computer.
The amplifier chip boosts outbound signals for older systems with speakers that cannot amplify themselves; most speakers today do the job just fine. As extras, some sound boards have additional chips for MIDI controllers and even RAM/ROM chips to store MIDI information and more.
A card’s sound generator is based on a custom DSP (Digital Signal Processor) that
replays the required musical notes by multiplexing reads from different areas of the
wavetable memory at differing speeds to give the required pitches. The maximum number of
notes available is related to the processing power available in the DSP and is referred to as
the card’s ‘polyphony’. DSPs use complex algorithms to create effects such as reverb, chorus
and delay. Reverb gives the impression that the instruments are being played in large concert
halls. Chorus is used to give the impression that many instruments are playing at once when
in fact there’s only one. Adding a stereo delay to a guitar part, for example, can ‘thicken’ the
texture and give it a spacious stereo presence. Most 16-bit sound cards have a feature
connector that can connect to a WaveTable daughter board.
Sound cards can also carry digital interface connectors such as S/PDIF (Sony/Philips Digital InterFace) and Toslink. These output sound in digital format for further decoding and/or digital-to-analog conversion by external equipment, such as home receivers or powered speaker sets, whose converters are sometimes better than cheap on-board sound codecs. On many sound cards the S/PDIF connector takes the form of a minijack.

Synthesis of Sounds
The sound card synthesizer generates the sounds. Here we have three systems:
1) FM synthesis, Frequency Modulation
2) Wave table synthesis
3) Physical modeling

Wave tables - sampling


Wave table is the best and most expensive sound technology. WaveTable doesn’t use carriers
and modulators to create sound, but actual samples of real instruments. A sample is a digital
representation of a waveform produced by an instrument. This means that the sounds on the
sound card are recorded from real instruments. You record, for example, from a real piano
and make a small sample based on the recording. This sample is stored on the sound card.
When the music has to be played, you are actually listening to these samples. When they are
of good quality, the sound card can produce very impressive sounds, where the "piano"
sounds like a piano.
The quality of the instruments is determined by several factors:

 the quality of the original recordings
 the frequency at which the samples were recorded
 the number of samples used to create each instrument
 the compression methods used to store the samples.

Most instrument samples are recorded in 16-bit 44.1 kHz, but many manufacturers compress the data so that more samples, or instruments, can fit into small amounts of memory.
Every instrument produces subtly different timbres depending on how it is played. For
example, when a piano is played softly, you don’t hear the hammers hitting the strings. When
it’s played harder, not only does this become more apparent, but there are also changes in
tone.
Many samples and variations have to be recorded for each instrument to recreate this range of
sound accurately with a synthesiser. Inevitably, more samples require more memory. A
typical sound card may contain up to 700 instrument samples within 4MB ROM. To
accurately reproduce a piano sound alone, however, would require between 6MB and 10MB
of data.

Audio compression

To reduce the audio bit stream, three fundamental methods can be used but they have their
own drawbacks:
 Reduce the sampling rate so that there is less data to send.
This sounds fine, except that the laws of physics get in the way. Nyquist's theorem states that the sample rate must be at least twice the highest audio frequency that must be reproduced, to prevent artefacts from being introduced into the audio. This is the reason that the CD audio sample rate is 44.1 kHz, which gives a maximum frequency of 22.05 kHz, comfortably above the 20 kHz frequencies that most audio systems support. Reducing the sample rate reduces the number of bits needed, but it is not transparent to the listener. (A numerical demonstration of the aliasing caused by under-sampling follows this list.)
 Reduce the sample size so that there is less data.

With this approach, the sample size is reduced from 16 bits to a smaller figure, such as 8 bits or fewer. This again is not a transparent compression technique. The 16-bit sample provides a large signal-to-noise ratio of about 90 dB, which is a good match for the human ear. This means that noise generated during the sampling process due to quantisation effects is not heard by the listener. This is the reason that CD audio sounds better than analogue audio from a vinyl record: quiet passages are quiet and loud passages are loud. Reducing the sample size degrades the signal-to-noise ratio, and thus noise would be heard during quiet passages.
 Use compression/coding techniques.
This is a good technique that preserves the data, but on its own it unfortunately does not achieve the compression ratios that are needed, and therefore it must be augmented by other techniques.
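To see why under-sampling is not transparent, the following sketch (using numpy; the 3 kHz and 4 kHz figures are illustrative choices of ours, not from the text) shows a tone above the Nyquist limit masquerading as a lower one:

    import numpy as np

    # A 3 kHz sine sampled at only 4 kHz (Nyquist would demand at least 6 kHz)
    # yields exactly the same samples as a 1 kHz sine: the alias.
    fs = 4000.0
    n = np.arange(8)
    tone_3k  = np.sin(2 * np.pi * 3000 * n / fs)
    alias_1k = np.sin(2 * np.pi * 1000 * n / fs)
    print(np.allclose(tone_3k, -alias_1k))   # True: identical apart from sign

Once sampled, nothing can distinguish the 3 kHz tone from a phase-inverted 1 kHz tone, so the artefact is unrecoverable.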

IMAGE
The term resolution is most often associated with an image’s degree of detail. It can also refer
to the quality capability of a graphic output device (monitor) or input device (scanner), and is
sometimes used to refer to the number of colors that an image can display.

1) Color Resolution
Color resolution, also called color depth, specifies the number of bits used in an image file to
store color information. So when a computer video display card is called an 8-bit or 24-bit
display, you can figure out easily how many colors the video display card can display and
roughly how much memory the image will take up.
For instance, consider an 8-bit display: the maximum number of colors that such a system can show is 256. That is why it is also called a 256-color display. Since there are only 256 values that a byte can represent, that is the limit of the different color combinations that such a display system can show at any time.
With a 24-bit display, there are three bytes (3 times 8 bits equals 24 bits) available to store color values. You could also look at this from the perspective of an 8-bit display multiplied by three (256 x 256 x 256 colors). This gives a 24-bit display sixteen million (actually 16,777,216) color value possibilities for each pixel on the screen. With such a huge combination, or palette, to choose from, such a display system is often also called true color.
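The color counts quoted above follow directly from the bit depth; a one-line check (illustrative only):

    # Number of distinct colors for a given color depth
    for bits in (8, 16, 24):
        print(f"{bits}-bit display: {2 ** bits:,} colors")
    # 8-bit: 256    16-bit: 65,536    24-bit: 16,777,216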
2) Image resolution
Image resolution is quite a different beast from color resolution. It refers to the number of pixels (picture elements, or dots) used to represent the image. It also has a relationship to device resolution, namely the screen's display resolution.
Every computer image is displayed as a series of pixels arranged in rows and columns. In particular, bitmapped images are stored as a large array of colored dots that from a distance seem to merge together to give a recognizable picture. When you view the picture normally you no longer see just dots, but rather a detailed picture. Your eyes compensate for the lack of visual information between the dots, and your brain's visual interpretation of the image fills in the missing spaces. Image resolution refers to the width and height of the image in pixels, and each pixel needs a certain amount of memory space to store its color.

3) Compression of Images
Joint Photographic Experts Group (JPEG)
This is the most widely adopted standard; it was developed by an international standards body known as the Joint Photographic Experts Group (JPEG). JPEG also forms the basis of most video compression algorithms.

In practice, the standard defines a range of different compression modes, each of which is intended for use in a particular application domain. We shall restrict our discussion here to the lossy sequential mode, also known as the baseline mode, since it is this mode which is intended for the compression of both monochromatic and color digitized pictures/images as used in multimedia communication applications. There are five main stages associated with this mode: image/block preparation, forward DCT, quantization, entropy encoding, and frame building. These are shown in Fig. 1.
Fig. 1: Baseline JPEG encoder
[Block diagram: 8x8 blocks of color components (Y, Cb, or Cr) pass through the DCT and the quantizer (driven by a quantization table); the AC coefficients are zig-zag reordered and Huffman coded, while the DC coefficients are difference encoded and Huffman coded (each using Huffman tables); the outputs form the JPEG bit-stream.]

Image/block preparation:

In its pixel form, the source image/picture is made up of one or more 2-D matrices of values. If the image is represented in an R, G, B format, three matrices are required, one each for the R, G, and B quantized values. For the representation of color images in video signals, the alternative form of representation known as Y, Cb, Cr can optionally be used.
Once the source image format has been selected and prepared, the set of values in each matrix is compressed separately using the DCT. Before performing the DCT on each matrix, however, a second step known as block preparation is carried out. It would be too time consuming to compute the DCT of the total matrix in a single step, so each matrix is first divided into a set of smaller 8x8 submatrices, each known as a block.

Forward DCT (Discrete Cosine Transform):

Blocks are then fed sequentially to the DCT, which transforms each block separately. Block preparation is necessary because computing the transformed value for each position in a matrix requires the values in all the locations of the matrix to be processed. The DCT converts each block from the spatial domain to the frequency domain.
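A minimal sketch of these two stages in Python (using scipy's dctn; the level shift by 128 assumes 8-bit samples, and dimensions that are multiples of 8 are assumed, both our simplifications rather than statements from the text):

    import numpy as np
    from scipy.fft import dctn

    def blocks_8x8(channel):
        # Split one component matrix (Y, Cb, or Cr) into 8x8 blocks
        h, w = channel.shape
        for r in range(0, h, 8):
            for c in range(0, w, 8):
                yield channel[r:r+8, c:c+8]

    def forward_dct(block):
        # 2-D type-II DCT of one block, after level-shifting to centre on zero
        return dctn(block.astype(float) - 128.0, norm='ortho')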

Quantization:

Normally, each pixel value is quantized using 8 bits, which produces a value in the range 0 to 255 for the intensity/luminance values. The human eye responds primarily to the DC coefficient and the lower spatial frequency coefficients. Thus, if the magnitude of a higher-frequency coefficient is below a certain threshold, the eye will not detect it. This property is exploited in the quantisation phase by dropping (in practice, setting to zero) those spatial frequency coefficients in the transformed matrix whose amplitudes are less than a defined threshold value.
 F'[u, v] = round(F[u, v] / q[u, v])

Why? To reduce the number of bits per sample.

Example: 101101 = 45 (6 bits). With q[u, v] = 4, truncate to 4 bits: 1011 = 11.

 Quantization error is the main source of the lossy compression.

The eye is most sensitive to low frequencies (upper left corner of the block) and less sensitive to high frequencies (lower right corner), so coarser quantization is used for the high frequencies.
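A sketch of the quantization step itself (F and q are 8x8 numpy arrays; the table values are whatever the chosen quantization table supplies):

    import numpy as np

    def quantize(F, q):
        # F'[u, v] = round(F[u, v] / q[u, v]); large q values in the
        # high-frequency corner force small coefficients to zero.
        return np.round(F / q).astype(int)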

Entropy encoding:
The entropy encoding stage comprises four steps: vectoring, differential encoding, run-length encoding, and Huffman encoding.

Vectoring: The various entropy encoding algorithms operate on a one-dimensional string of values, that is, a vector. However, the output of the quantization stage is a 2-D matrix of values. Hence, before we can apply any entropy encoding to the set of values in the matrix, we must represent the values in the form of a single-dimension vector. This operation is known as vectoring.

Vectoring (Zig-zag Scan)

 groups the low-frequency coefficients at the top of the vector
 maps the 8 x 8 matrix to a 1 x 64 vector (see the sketch below)
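A sketch of the zig-zag scan (our own implementation of the standard traversal, walking the anti-diagonals and alternating direction):

    import numpy as np

    def zigzag_order(n=8):
        # Visit the n x n block along anti-diagonals, alternating direction,
        # so low-frequency coefficients come first in the output vector.
        order = []
        for s in range(2 * n - 1):
            diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
            if s % 2 == 0:
                diag.reverse()
            order.extend(diag)
        return order

    def vectorize(block):
        # Maps an 8x8 matrix to a 1x64 vector in zig-zag order
        return np.array([block[i, j] for i, j in zigzag_order(len(block))])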

Differential encoding: This is an efficient compression technique because it encodes only the difference in magnitude of the DC coefficient in a quantised block relative to the value in the preceding block. In this way the number of bits required to encode the relatively large magnitudes of the DC coefficients is reduced.

Differential Pulse Code Modulation (DPCM) on the DC component:

The DC component is large and varied, but often close to the previous value, so we encode the difference from the previous 8 x 8 block (DPCM).

Because the DC coefficient contains a lot of energy, it usually has a much larger value than the AC coefficients, and there is a very close connection between the DC coefficients of adjacent blocks. So the JPEG standard encodes the difference between the DC coefficients of consecutive 8x8 blocks rather than their true values. The mathematical representation of the difference is:

 Diff_i = DC_i - DC_(i-1)    (1)

and we set DC_0 = 0. The DC of the current block, DC_i, will be equal to DC_(i-1) + Diff_i. So, in the JPEG file, the first coefficient is actually the difference of DCs.
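In code, the differential encoding is a one-pass subtraction (a minimal sketch; the input is the list of DC coefficients, one per block):

    def dc_differences(dc_values):
        # Diff_i = DC_i - DC_(i-1), with the value before the first block taken as 0
        diffs, prev = [], 0
        for dc in dc_values:
            diffs.append(dc - prev)
            prev = dc
        return diffs

    print(dc_differences([150, 155, 149]))   # [150, 5, -6]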

Run-Length Encoding (RLE) on the AC components:

 The 1 x 64 vector has lots of zeros in it.
 We keep (skip, value) pairs, where skip is the number of zeros and value is the next non-zero component.
 We send (0,0) as an end-of-block sentinel value.

Example: Zero Run Length Coding of AC Coefficient


Now we have the quantized vector with a lot of consecutive zeroes. We can exploit this by run-length coding the consecutive zeroes. Let us consider the 63 AC coefficients in the original 64-element quantized vector first. For example, we have:

57, 45, 0, 0, 0, 0, 23, 0, -30, -16, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,..., 0

For each value which is not 0, we encode the number of consecutive zeroes preceding that value, followed by the value itself. The RLC (run-length coding) is:

(0,57) ; (0,45) ; (4,23) ; (1,-30) ; (0,-16) ; (2,1) ; EOB

EOB (End of Block) is a special coded value. If we have reached a position in the vector from which there are only zeroes until the end of the vector, we mark that position with EOB and finish the RLC of the quantized vector. Note that if the quantized vector does not finish with zeroes (the last element is not 0), we do not add the EOB marker. Actually, EOB is equivalent to (0,0), so we have:

(0,57) ; (0,45) ; (4,23) ; (1,-30) ; (0,-16) ; (2,1) ; (0,0)

Consider another example, the following quantized vector:

57, eighteen zeroes, 3, 0, 0, 0, 0, 2, thirty-three zeroes, 895, EOB

The JPEG Huffman coding restricts the number of preceding 0's to a 4-bit value, so it cannot exceed 15 (0xF). This example would therefore be coded as:

(0,57) ; (15,0) ; (2,3) ; (4,2) ; (15,0) ; (15,0) ; (1,895) ; (0,0)

(15,0) is a special coded value which indicates that there are 16 consecutive zeroes.
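The whole AC coding scheme, including the (15,0) run-splitting and the EOB rule, fits in a few lines (a sketch of ours that reproduces both worked examples above):

    def rle_ac(coeffs):
        # JPEG-style zero run-length coding of the 63 AC coefficients.
        # Runs longer than 15 zeros are split with the (15, 0) symbol.
        pairs, run = [], 0
        for c in coeffs:
            if c == 0:
                run += 1
            else:
                while run > 15:
                    pairs.append((15, 0))   # stands for 16 consecutive zeros
                    run -= 16
                pairs.append((run, c))
                run = 0
        if run > 0:
            pairs.append((0, 0))            # EOB: only zeros remain
        return pairs

Feeding it the first example vector yields (0,57), (0,45), (4,23), (1,-30), (0,-16), (2,1), (0,0), matching the coding shown above.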

Frame building:

The role of the frame builder is to encapsulate all information relating to an encoded image/picture in a particular format. The complete image is encapsulated between a start-of-frame and an end-of-frame delimiter, which allows the receiver to determine the start and end of the image.

ANIMATION
The term computer animation generally refers to any time sequence of visual changes in a scene. In addition to changing object position with translations or rotations, a computer-generated animation can display time variations in object size, color, transparency, or surface texture.

Definition:
Animation is defined as time-based manipulation of a target element or more specifically of
some attribute of the target element called the target attribute.

Types of Animation methods:

1) Frame animation
Frame animation is animation inside a rectangular frame. It is similar to cartoon movies: a
sequence of frames that follow each other at a fast rate, fast enough to convey fluent motion.
Frame animation is an internal animation method. It is typically pre-compiled and non-
interactive. The frame is typically rectangular and non-transparent. Frame animation with
transparency information is also referred to as “Cell animation”

2) Sprite Animation
A sprite, in its simplest form, is a 2D graphic object that moves across the display. Sprites often have transparent areas. By using a mask or a transparent color, sprites are not restricted to rectangular shapes. Sprite animation lends itself well to interactivity: the position of each sprite is controlled by the user or by an application programmer or by both. It
is called “external animation”.

Design of Animation Sequences:


In general, an animation sequence is designed with the following steps:
• Storyboard layout
• Object definitions
• Key-frame specifications
• Generation of in-between frames
The storyboard is an outline of the action. It defines the motion sequence as a set of Basic
events that are to take place. Depending on the type of animation to be produced, the
storyboard could consist of a set of rough sketches or it could be a list of the basic ideas for
the motion.
An object definition is given for each participant in the action. Objects can be defined in
terms of basic shapes, such as polygons or splines. In addition, the associated movements for
each object are specified along with the shape.
A key frame is a detailed drawing of the scene at a certain time in the animation sequence.
Within each key frame, each object is positioned according to the time for that frame. Some
key frames are chosen at extreme positions in the action; others are spaced so that the time
interval between key frames is not too great. More key frames are specified for intricate
motions than for simple, slowly varying motions.

In-betweens are the intermediate frames between the key frames. The number of in-betweens needed is determined by the medium to be used to display the animation: film requires 24 frames per second, and graphics terminals are refreshed at the rate of 30 to 60 frames per second. Typically, time intervals for the motion are set up so that there are three to five in-betweens for each pair of key frames. Depending on the speed specified for the motion, some key frames can be duplicated. For a 1-minute film sequence with no duplication, we would need 1440 frames. With five in-betweens for each pair of key frames, we would need 288 key frames. If the motion is not too complicated, we could space the key frames a little farther apart.
There are several other tasks that may be required, depending on the application. They
include motion verification, editing, and production and synchronization of a soundtrack.
Many of the functions needed to produce general animations are now computer-generated.

VIDEO
PCs deal with information in digits, ones and zeros, to be precise. To store visual information digitally, the hills and valleys of the analogue video signal have to be translated into the digital equivalent by an analogue-to-digital converter (ADC). The conversion process is known as sampling, or video capture. Since computers have the capability to deal with digital graphics information, no other special processing of this data is needed to display digital video on a computer monitor.

Full-Motion Video Compression:


Full-motion video typically means 30 frames per second. Storing the data required for a full-motion, two-hour movie would require approximately 133 GB, and the data to support this video stream would need to be pumped to the video card at about 150 million bits per second (see the worked numbers after the list below). There is a solution to this dilemma: data compression. Due to the nature of the data, video can easily be compressed by a large factor. There are several reasons for this:
 Every image that is displayed has large areas of redundancy that can be represented
by a smaller amount of data using an encoding algorithm. For example, if the bottom
half of the screen in a particular frame is black, that can be represented using a small
number of bytes; we don't need to have 150,000 2-byte pixels all filled with zeroes.
 The human eye is not sensitive to certain details in a video, which can be removed
without appreciable loss to the perceived signal.
 If you take any two consecutive frames, the changes from one to the next are usually
rather small. For example, if you have a scene centered on someone's face, the
background images are probably going to remain static for hundreds or thousands of
consecutive frames. There are special algorithms that can describe a frame only by the
changes that it represents from the one before, which dramatically cuts down both on
the amount of storage required and the time to transmit each frame of data.
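The storage and bandwidth figures quoted above can be reproduced under one plausible set of assumptions of ours: 640x480 pixels at 2 bytes per pixel (consistent with the 150,000-pixel half-screen mentioned in the first bullet), 30 frames per second, and a 2-hour movie:

    # Worked numbers for uncompressed full-motion video
    pixels  = 640 * 480             # 307,200 (half a screen ~ 150,000 pixels)
    frame_b = pixels * 2            # bytes per frame at 2 bytes/pixel
    storage = frame_b * 30 * 2 * 3600
    bitrate = frame_b * 8 * 30
    print(storage / 1e9)            # ~132.7 GB: the "approximately 133 GB"
    print(bitrate / 1e6)            # ~147.5 Mbit/s: the "150 million bits per second"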

Contrast Analog and Digital signals using the concepts of sampling and quantization.

Figure 1 below shows the electronic waveforms of a typical analog-to-digital conversion. Panel (a) is the signal to be digitized: a continuous-time signal with continuous amplitude, which is what we call an analog signal.

As shown by the labels on the graph, this analog signal is a voltage that varies over time. To make the numbers easier, we will assume that the voltage can vary from 3.000 to 3.025 volts, corresponding to the digital numbers between 3000 and 3025 that will be produced by a 12-bit digitizer.

Notice that the block diagram is broken into two sections, the sample-and-hold (S/H) and the analog-to-digital converter (ADC). The sample-and-hold is required to keep the voltage entering the ADC constant while the conversion is taking place. Breaking the digitization into these two stages is an important theoretical model for understanding digitization.

Figure 1:
Waveforms illustrating the digitization process. The conversion is broken into two stages to allow the effects of sampling to be separated from the effects of quantization. The first stage is the sample-and-hold (S/H), where the only information retained is the instantaneous value of the signal when the periodic sampling takes place. In the second stage, the ADC converts the voltage to the nearest integer number. This results in each sample in the digitized signal having an error of up to ±1/2 LSB, as shown in (d). As a result, quantization can usually be modeled as simply adding noise to the signal.
[Figure panels: (a) original analog signal, amplitude in volts vs. time; (b) sampled analog signal after the S/H; (c) digitized signal, digital number vs. sample number; (d) quantization error in LSBs vs. sample number.]

As shown by the difference between (a) and (b), the output of the sample-and- hold is allowed to
change only at periodic intervals, at which time it is made identical to the instantaneous value of the
input signal. Changes in the input signal that occur between these sampling times are completely
ignored. That is, sampling converts the independent variable (time in this example) from continuous
to discrete.

As shown by the difference between (b) and (c), the ADC produces an integer value between 3000 and 3025 for each of the flat regions in (b). This introduces a quantization error. For example, both 2.56000 volts and 2.56001 volts will be converted into the digital number 2560. In other words, quantization converts the dependent variable (voltage in this example) from continuous to discrete.
The process of converting a continuous-valued signal into a discrete-valued signal, called quantization, is basically an approximation process. It may be accomplished simply by rounding or truncation.

First we will look at the effects of quantization. Any one sample in the digitized signal can have a
maximum error of ±1⁄2 LSB (Least Significant Bit, jargon for the distance between adjacent
quantization levels). Figure (d) shows the quantization error for this particular example, found by
subtracting (b) from (c), with the appropriate conversions. In other words, the digital output (c), is
equivalent to the continuous input (b), plus a quantization error (d). An important feature of this
analysis is that the quantization error appears very much like random noise.
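The ±1/2 LSB bound is easy to verify numerically (a sketch; the 0-4.095 V range giving a 1 mV LSB is our assumption for a 12-bit digitizer consistent with the 3000-3025 numbers above):

    import numpy as np

    lsb = 0.001                                            # 1 LSB = 1 mV
    t = np.arange(50)
    analog  = 3.0125 + 0.01 * np.sin(2 * np.pi * t / 25)   # volts
    digital = np.round(analog / lsb).astype(int)           # integers near 3000-3025
    error   = digital - analog / lsb                       # quantization error in LSBs
    print(error.min(), error.max())                        # always within -0.5..+0.5

The error values scatter across the ±0.5 LSB interval with no obvious pattern, which is why quantization is usually modeled as additive random noise.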

In general, for a signal to be processed digitally, it must be discrete in time and its values must be discrete (i.e., it must be a digital signal). If a signal takes on all possible values in a finite or an infinite range, it is said to be a continuous-valued signal. Alternatively, if the signal takes on values from a finite set of possible values, it is said to be a discrete-valued signal. Usually, these values are equidistant and hence can be expressed as an integer multiple of the distance between two successive values. A discrete-time signal having a set of discrete values is called a digital signal.

If the signal to be processed is in analog form, it is converted to a digital signal by sampling the analog signal at discrete instants in time, obtaining a discrete-time signal, and then quantizing its values to a set of discrete values.

In this digital signal processing course we deal mostly with discrete-time rather than digital signals and systems, the latter being a subset of the former.

An example given below:

Table: Describes sampling and quantization

Also, if we were to represent every sample value with infinite precision (for example, x(1) = 2.8..., instead of being approximated as 2 or 3), then we would need registers and memory words of arbitrarily large size. However, owing to a finite word length we round off the sample values (in this case x(1) = 2.8... is rounded to 3), a process called quantization. As seen before, this introduces quantization noise or error.

Mathematical representation of Discrete-time signals:

Definition: A discrete-time signal is a sequence, that is, a function defined on the integers (positive, negative, and zero).
The sequence x(n) = xR(n) + j xI(n) is a complex(-valued) sequence if xI(n) is not zero for all n. Otherwise, it is a real(-valued) sequence.

Examples of discrete-time signals represented in functional form are given below.

x1(n) = 2 cos(3n)
x2(n) = 3 sin(0.2πn)

Definition: A discrete-time signal whose values are from a finite set is called a digital signal.

Examples of discrete-time signals: sequences w(n) and z(n) that take on only a finite number of different values are digital, but sequences x(n) and y(n) that take on a countably infinite number of values are not digital.

The procedure of generating a digital signal from an analog signal and then encoding it into 1s and 0s is shown in the block diagram in figure 2.

Figure 2
Advantages of Digital over Analog Signal Processing
There are many reasons why digital signal processing of an analog signal may be preferable to
processing the signal directly in the analog domain, as mentioned briefly earlier.

 First, a digital programmable system allows flexibility in reconfiguring the digital signal
processing operations simply by changing the program.
Reconfiguration of an analog system usually implies a redesign of the hardware followed by
testing and verification to see that it operates properly.

 Accuracy considerations also play an important role in determining the form of the signal
processor. Tolerances in analog circuit components make it extremely difficult for the system
designer to control the accuracy of an analog signal processing system.

On the other hand, a digital system provides much better control of accuracy requirements.
Such requirements, in turn, result in specifying the accuracy requirements in the A/D
converter and the digital signal processor, in terms of word length, floating-point versus
fixed-point arithmetic, and similar factors.

 Digital signals are easily stored on magnetic media (tape or disk) without deterioration or
loss of signal fidelity beyond that introduced in the A/D conversion. As a consequence, the
signals become transportable and can be processed off-line in a remote laboratory. The digital
signal processing method also allows for the implementation of more sophisticated signal
processing algorithms.
It is usually very difficult to perform precise mathematical operations on signals in analog
form but these same operations can be routinely implemented on a digital computer using
software.

 In some cases a digital implementation of the signal processing system is cheaper than its
analog counterpart. The lower cost may be due to the fact that the digital hardware is cheaper,
or perhaps it is a result of the flexibility for modifications provided by the digital
implementation.

As a consequence of these advantages, digital signal processing has been applied in practical systems
covering a broad range of disciplines.

We cite, for example, the application of digital signal processing techniques in speech processing and
signal transmission on telephone channels, in image processing and transmission, in seismology and
geophysics, in oil exploration, in the detection of nuclear explosions, in the processing of signals
received from outer space, and in a vast variety of other applications.

For example, digital audio has several advantages over analogue audio:


• Digital recordings do not degrade with re-recording. Each copy is an exact reproduction of the
original because the digital information can be compared with the copy and any errors detected and
corrected.
• The recording performance is independent of the recording medium. Digital audio bit streams
retrieved from a CD or hard disk sound the same because the bit streams are exact copies. This is not
the case with analogue recordings, where the characteristics of the recording medium can influence
the reproduced sound.
• Digital audio is easy to process because the signal processing can be performed by
mathematical algorithms. Different effects such as phasing, reverb, echo, and so on, can be achieved
by mathematical manipulation of the audio signals in their digital format. Better performance can be
obtained in terms of filter roll-off characteristics, dynamic range and signal to noise ratios.

As already indicated, however, digital implementation has its limitations. We shall see that signals
having extremely wide bandwidths require fast-sampling-rate A/D converters and fast digital signal
processors. Hence there are analog signals with large bandwidths for which a digital processing
approach is beyond the state of the art of digital hardware.

Digital audio also has several disadvantages compared with analogue audio:
• It requires two conversion stages: one to convert the analogue signals into a digital format and a second to convert the digital signals back to analogue; the speed of operation of A/D converters and digital signal processors imposes limits here.
• These conversion processes can introduce their own types of distortion and defects.
• The digital data requires far higher-density storage than its analogue equivalent. An analogue recording system, such as a cassette recorder, cannot store sufficient digital data to reproduce even equivalent-quality audio, let alone the improved quality achievable with digital techniques.

• Whilst effects are simple to achieve using algorithms, a very fast processor is required, which can be expensive compared with an analogue equivalent, albeit one with far less performance and flexibility.

Digital signal processing block diagram and define its key components: antialiasing
filter, analog to digital converter, digital signal processing, digital to analog filter, and
reconstruction filter.

Basic Elements of an Analog Signal Processing System

Most of the signals encountered in science and engineering are analog in nature. That is, the signals are functions of a continuous variable, such as time or space, and usually take on values in a continuous range. Such signals may be processed directly by appropriate analog systems (such as filters, frequency analyzers, or frequency multipliers) for the purpose of changing their characteristics or extracting some desired information. In such a case we say that the signal has been processed directly in its analog form, as illustrated in Fig. 3. Both the input signal and the output signal are in analog form.

Fig 3 Analog signal processing

Basic Elements of a Digital Signal Processing System


Now, let the three boxes shown above in figure 2 in previous section be represented by an Analog to
Digital Converter (ADC). A complete digital signal processing (DSP) system consists of an ADC, a
DSP algorithm (e.g., a difference equation) and a digital to analog converter (DAC) shown below in
figure 4.

Figure 4. Digital Signal Processing Block diagram

The digital signal processor algorithm may be a large programmable digital computer or a small
microprocessor programmed to perform the desired operations on the input signal. It may also be a
hardwired digital processor configured to perform a specified set of operations on the input signal.
Programmable machines provide the flexibility to change the signal processing operations through a
change in the software, whereas hardwired machines are difficult to reconfigure. Consequently,
programmable signal processors are in very common use. On the other hand, when signal processing
operations are well defined, a hardwired implementation of the operations can be optimized, resulting
in a cheaper signal processor and, usually, one that runs faster than its programmable counterpart. In
applications where the digital output from the digital signal processor is to be given to the user in analog form, such as in speech communications, we must provide another interface from the digital domain to the analog domain. Such an interface is a digital-to-analog (D/A) converter together with its associated filters. Thus the signal is provided to the user in analog form. A detailed block diagram of a DSP system is shown below in Fig. 5.

Fig 5: DSP block diagram with filters

Before encountering the analog-to-digital converter, the input signal is processed with an electronic
low-pass filter to remove all frequencies above the Nyquist frequency (one-half the sampling rate).
This is done to prevent aliasing during sampling, and is correspondingly called an antialias filter.

On the other end, the digitized signal is passed through a digital-to-analog converter and another low-
pass filter set to the Nyquist frequency. This output filter is called a reconstruction filter.

Understanding of the properties of analog filters is important for successful DSP.


 First, the characteristics of every digitized signal you encounter will depend on what type of
antialias filter was used when it was acquired. If you don't understand the nature of the
antialias filter, you cannot understand the nature of the digital signal.
 Second, the future of DSP is to replace hardware with software. If you don't understand the
hardware, you cannot design software to replace it.
 Third, much of DSP is related to digital filter design. A common strategy is to start with an
equivalent analog filter, and convert it into software.

Three types of analog filters are commonly used: Chebyshev, Butterworth, and Bessel (also called a
Thompson filter). Each of these is designed to optimize a different performance parameter. The
complexity of each filter can be adjusted by selecting the number of poles and zeros, mathematical
terms that will be discussed later. The more poles in a filter, the more electronics it requires, and the
better it performs. Each of these names describes what the filter does, not a particular arrangement of resistors and capacitors. For example, a six-pole Bessel filter can be implemented by many different types of circuits, all of which have the same overall characteristics. For DSP purposes, the characteristics of these filters are more important than how they are constructed.

Electronic design of filters


Figure 6 shows a common building block for analog filter design, the modified Sallen-Key circuit. The circuit shown is a two-pole low-pass filter that can be configured as any of the three basic types. Table 1 provides the necessary information to select the appropriate resistors and capacitors. For example, to design a 1 kHz, 2-pole Butterworth filter, Table 1 provides the parameters k1 = 0.1592 and k2 = 0.586. Arbitrarily selecting R1 = 10K and C = 0.01uF (common values for op-amp circuits), R and Rf can be calculated as 15.95K and 5.86K, respectively. Rounding these last two values to the nearest 1% standard resistors results in R = 15.8K and Rf = 5.90K. All of the components should be 1% precision or better.
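The arithmetic, assuming the figure's design equations are R = k1/(fc C) and Rf = k2 R1 (our reading of the standard Sallen-Key design tables; the figure itself is not reproduced here):

    # Component values for one Sallen-Key stage
    k1, k2 = 0.1592, 0.586      # 2-pole Butterworth entries from Table 1
    fc = 1000.0                 # cutoff frequency in Hz
    R1, C = 10e3, 0.01e-6       # arbitrarily chosen starting values
    R  = k1 / (fc * C)          # ~15.9K ohms (quoted as 15.95K above)
    Rf = k2 * R1                # 5.86K ohms
    print(R, Rf)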

Figure 6
The modified Sallen-Key circuit, a building block for active filter design. The circuit shown implements a 2-pole low-pass filter. Higher-order filters (more poles) can be formed by cascading stages. Find k1 and k2 from Table 1, arbitrarily select R1 and C (try 10K and 0.01μF), and then calculate R and Rf from the equations in the figure. The parameter fc is the cutoff frequency of the filter in Hz.

Four-, six-, and eight-pole filters are formed by cascading 2, 3, and 4 of these circuits, respectively. For example, Fig. 7 shows the schematic of a 6-pole Bessel filter created by cascading three stages. Each stage has different values for k1 and k2, as provided by Table 1, resulting in different resistors and capacitors being used.

FIGURE 7
A six pole Bessel filter formed by cascading three Sallen-Key circuits. This is a low-pass filter with a cutoff
frequency of 1 kHz.

Now for the important part: the characteristics of the three classic filter types. The first performance
parameter we want to explore is cutoff frequency sharpness. A low-pass filter is designed to block
all frequencies above the cutoff frequency (the stopband), while passing all frequencies below (the
passband).

Need a high-pass filter?

Simply swap the R and C components in the circuits (leaving Rf and R1 alone).

Selecting The Antialias Filter


Each of the three classic filters optimizes a different performance parameter: the Chebyshev optimizes the roll-off, the Butterworth optimizes the passband flatness, and the Bessel optimizes the step response.

Indicate design criteria for low- and high-pass filters

How Information is Represented in Signals


The most important part of any DSP task is understanding how information is contained in the signals you are working with. Of the many ways information can be contained in a signal, two are common in naturally occurring signals. We will call these: information represented in the time domain, and information represented in the frequency domain.

Information represented in the time domain describes when something occurs and what the
amplitude of the occurrence is. For example, imagine an experiment to study the light output from the
sun. The light output is measured and recorded once each second. Each sample in the signal indicates
what is happening at that instant, and the level of the event. If a solar flare occurs, the signal directly
provides information on the time it occurred, the duration, the development over time, etc. Each
sample contains information that is interpretable without reference to any other sample. Even if
you have only one sample from this signal, you still know something about what you are measuring.
This is the simplest way for information to be contained in a signal.

In contrast, information represented in the frequency domain is more indirect. Many things in our
universe show periodic motion. For example, the pendulum of a clock swings back and forth; stars
and planets rotate on their axis and revolve around each other, and so forth. By measuring the
frequency, phase, and amplitude of this periodic motion, information can often be obtained about the
system producing the motion. Suppose we sample the sound produced by the pendulum. The
fundamental frequency and harmonics of the periodic vibration relate to the mass and elasticity of the
material. A single sample, in itself, contains no information about the periodic motion, and therefore
no information about the pendulum. The information is contained in the relationship between
many points in the signal.

This brings us to the importance of the step and frequency responses.


The step response describes how information represented in the time domain is being modified by the
system.
In contrast, the frequency response shows how information represented in the frequency domain is
being changed.

This distinction is absolutely critical in filter design because it is not possible to optimize a filter
for both applications. Good performance in the time domain results in poor performance in the
frequency domain, and vice versa. If you are designing a filter to remove noise from an ECG signal
(information represented in the time domain), the step response is the important parameter, and the
frequency response is of little concern. If your task is to design a digital filter for a hearing aid (with
the information in the frequency domain), the frequency response is all important, while the step
response doesn't matter.

Now let's look at what makes a filter optimal for time domain or frequency domain applications.

Time Domain Parameters


It may not be obvious why the step response is of such concern in time domain filters. The answer lies
in the way that the human mind understands and processes information. Remember that the step,
impulse and frequency responses all contain identical information, just in different arrangements. The
step response is useful in time domain analysis because it matches the way humans view the
information contained in the signals.

For example, suppose you are given a signal of some unknown origin and asked to analyze it. The
first thing you will do is divide the signal into regions of similar characteristics. You can't help
doing this; your mind will do it automatically. Some of the regions may be smooth; others may have
large amplitude peaks; others may be noisy. This segmentation is accomplished by identifying the
points that separate the regions. This is where the step function comes in. The step function is the
purest way of representing a division between two dissimilar regions. It can mark when an event
starts, or when an event ends. It tells you that whatever is on the left is somehow different from
whatever is on the right. This is how the human mind views time domain information: a group of step
functions dividing the information into regions of similar characteristics. The step response, in turn, is
important because it describes how the dividing lines are being modified by the filter.

The step response parameters that are important in filter design are shown in Fig. 8. To distinguish
events in a signal, the duration of the step response must be shorter than the spacing of the
events. This dictates that the step response should be as fast as possible ("fast" being the DSP jargon for a short risetime).

This is shown in Fig. 8 (a) and (b). The most common way to specify the risetime is to quote the
number of samples between the 10% and 90% amplitude levels. Why isn't a very fast risetime always
possible? There are many reasons: noise reduction, inherent limitations of the data acquisition system,
avoiding aliasing, etc. A minimal measurement sketch follows.
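
Here is a minimal sketch of that 10% to 90% risetime measurement (the function name and the single-pole test response are illustrative assumptions):

import numpy as np

def risetime_samples(step_response, low=0.1, high=0.9):
    # Number of samples between the first crossings of the 10% and
    # 90% levels, measured relative to the final (settled) value.
    s = np.asarray(step_response, dtype=float)
    final = s[-1]
    i_low = np.argmax(s >= low * final)
    i_high = np.argmax(s >= high * final)
    return i_high - i_low

# Example: step response of a single-pole low-pass filter, y[n] = 1 - a**n
a = 0.9
y = 1 - a ** np.arange(64)
print(risetime_samples(y))   # 21 samples for a = 0.9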

Figures (c) and (d) show the next parameter that is important: overshoot in the step response.
Overshoot must generally be eliminated because it changes the amplitude of samples in the signal;
this is a basic distortion of the information contained in the time domain.

Finally, it is often desired that the upper half of the step response be symmetrical with the lower half,
as illustrated in (e) and (f). This symmetry is needed to make the rising edges look the same as the
falling edges. This symmetry is called linear phase, because the frequency response has a phase that
is a straight line.


[Figure 8 plots omitted: six panels of amplitude vs. sample number, comparing POOR and GOOD step responses: (a) slow step response, (b) fast step response, (c) overshoot, (d) no overshoot, (e) nonlinear phase, (f) linear phase.]

FIGURE 8: Parameters for evaluating time domain performance. The step response is used to measure how well a filter performs in the time domain. Three parameters are important: (1) transition speed (risetime), shown in (a) and (b), (2) overshoot, shown in (c) and (d), and (3) phase linearity (symmetry between the top and bottom halves of the step), shown in (e) and (f).

Frequency Domain Parameters


Figure 9 shows the four basic frequency responses. The purpose of these filters is to allow some
frequencies to pass unaltered, while completely blocking other frequencies. The passband refers to
those frequencies that are passed, while the stopband contains those frequencies that are blocked. The
transition band is between. A fast roll-off means that the transition band is very narrow.

The division between the passband and transition band is called the cutoff frequency.
In analog filter design, the cutoff frequency is usually defined to be where the amplitude is reduced to
0.707 (i.e., -3dB).
Digital filters are less standardized, and it is common to see 99%, 90%, 70.7%, and 50% amplitude
levels defined to be the cutoff frequency.
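
A small sketch of locating the cutoff at a chosen amplitude level (the function name and the single-pole test response are illustrative assumptions):

import numpy as np

def cutoff_frequency(freqs, magnitude, level=0.707):
    # First frequency where a low-pass magnitude response falls to
    # `level` times its DC value; 0.707 matches the analog -3 dB
    # convention, while digital designs may use 0.99, 0.90, or 0.50.
    mag = np.asarray(magnitude, dtype=float)
    idx = np.argmax(mag <= level * mag[0])
    return freqs[idx]

# Example: single-pole low-pass |H| = 1/sqrt(1 + (f/fc)^2) with fc = 2.0
f = np.linspace(0.001, 10, 10000)
H = 1 / np.sqrt(1 + (f / 2.0) ** 2)
print(cutoff_frequency(f, H))   # ~2.0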


[Figure 9 plots omitted: four panels of amplitude vs. frequency: (a) low-pass, (b) high-pass, (c) band-pass, (d) band-reject, each marked with its passband, transition band, and stopband.]

FIGURE 9: The four common frequency responses. Frequency domain filters are generally used to pass certain
frequencies (the passband), while blocking others (the stopband). Four responses are the most common: low-
pass, high-pass, band-pass, and band-reject.

Figure 10 shows three parameters that measure how well a filter performs in the frequency domain.
To separate closely spaced frequencies, the filter must have a fast roll-off, as illustrated in (a) and (b).
For the passband frequencies to move through the filter unaltered, there must be no passband ripple,
as shown in (c) and (d). Lastly, to adequately block the stopband frequencies, it is necessary to have
good stopband attenuation, displayed in (e) and (f).
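
These three parameters can be quantified from a sampled magnitude response. The sketch below is illustrative; the band edges f_pass and f_stop are hypothetical inputs, not values from the handout.

import numpy as np

def filter_report(f, mag, f_pass, f_stop):
    # Passband ripple: variation of |H| over the passband, in dB.
    # Stopband attenuation: worst-case |H| in the stopband, in dB,
    # relative to unity gain.
    pb = mag[f <= f_pass]
    sb = mag[f >= f_stop]
    ripple_db = 20 * np.log10(pb.max() / pb.min())
    atten_db = -20 * np.log10(sb.max())
    return ripple_db, atten_db

# Example: a single-pole low-pass (fc = 0.1) scores poorly on both counts
f = np.linspace(0.001, 0.5, 5000)
mag = 1 / np.sqrt(1 + (f / 0.1) ** 2)
print(filter_report(f, mag, f_pass=0.05, f_stop=0.25))  # ~(1.0 dB, ~8.6 dB)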


[Figure 10 plots omitted: six panels comparing POOR and GOOD low-pass frequency responses: (a) slow roll-off, (b) fast roll-off, (c) ripple in passband, (d) flat passband, (e) poor stopband attenuation, (f) good stopband attenuation (amplitude in dB vs. frequency).]

FIGURE 10: Parameters for evaluating frequency domain performance. The frequency responses shown are for low-pass filters. Three parameters are important: (1) roll-off sharpness, shown in (a) and (b), (2) passband ripple, shown in (c) and (d), and (3) stopband attenuation, shown in (e) and (f).

Why is there nothing about the phase in these parameters? First, the phase isn't important in most
frequency domain applications. For example, the phase of an audio signal is almost completely random
and contains little useful information. Second, if the phase is important, it is very easy to make digital
filters with a perfect phase response.

Design of High-Pass, Band-Pass and Band-Reject Filters
High-pass, band-pass and band-reject filters are designed by starting with a low-pass filter, and then
converting it into the desired response. For this reason, most discussions on filter design only give
examples of low-pass filters. There are two methods for the low-pass to high-pass conversion:
spectral inversion and spectral reversal. Both are equally useful.

Figure 11 shows why spectral inversion results in an inverted frequency spectrum. The low-pass kernel
is modified in two steps: the sign of each sample is changed, and then one is added to the sample at the
center of symmetry. In (a), the input signal, x[n], is applied to two systems in parallel. One of these systems is a
low-pass filter, with an impulse response given by h[n]. The other system does nothing to the signal,
and therefore has an impulse response that is a delta function, δ[n]. The overall output, y[n], is equal
to the output of the all-pass system minus the output of the low-pass system. Since the low frequency
components are subtracted from the original signal, only the high frequency components appear in the
output. Thus, a high-pass filter is formed.

This could be performed as a two-step operation in a computer program: run the signal through a low-
pass filter, and then subtract the filtered signal from the original. However, the entire operation can be
performed in a single stage by combining the two filter kernels (δ[n] − h[n]).
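
Here is a minimal NumPy sketch of both routes (the kernel length and test signal are illustrative assumptions); the combined kernel is δ[n] − h[n]:

import numpy as np

def spectral_inversion(h_lp):
    # delta[n] - h_lp[n]: negate every sample of the (odd-length)
    # low-pass kernel, then add one at its center of symmetry.
    h_hp = -np.asarray(h_lp, dtype=float)
    h_hp[len(h_hp) // 2] += 1.0
    return h_hp

h_lp = np.ones(11) / 11           # crude moving-average low-pass kernel
h_hp = spectral_inversion(h_lp)   # combined single-stage high-pass kernel

x = np.random.randn(200)
one_stage = np.convolve(x, h_hp, mode='same')
two_stage = x - np.convolve(x, h_lp, mode='same')
print(np.allclose(one_stage, two_stage))   # True: the two routes agree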

FIGURE 11
Block diagram of spectral inversion. In (a), the input signal, x[n], is applied to two systems in parallel, having
impulse responses of h[n] and δ[n]. As shown in (b), the combined system has an impulse response of δ[n] −
h[n]. This means that the frequency response of the combined system is the inversion of the frequency
response of h[n].

Explain the need for using transforms and how they differ for analog and discrete-time signals.

Transformations are useful because a problem may be easier to understand in one domain than in
another. Transforms are tools that make analysis easier: mathematical tools that engineers,
scientists, and mathematicians have developed to analyze the properties of a signal. It is possible to do
all computation and analysis of a signal in either the time domain or the frequency domain. However,
some operations are much simpler and more intuitive in one than in the other. Transforms make some
types of calculations much simpler and more convenient.

For example:
Are you familiar with logarithms? Using logs, you can change a problem of multiplication into a
problem of addition. More usefully, you can change a problem of exponentiation into one of
multiplication. Most importantly, you can do this with a problem that is impractical to calculate by
hand, such as raising a number to a power like 3.14: you take the logarithm, solve using
ordinary arithmetic, and then look up the anti-logarithm for the solution.
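
A quick worked illustration (values rounded): to compute 2^3.14, take log10(2^3.14) = 3.14 × log10(2) ≈ 3.14 × 0.3010 ≈ 0.945; the anti-logarithm gives 2^3.14 ≈ 10^0.945 ≈ 8.8.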

Importance of Frequency domain


In studying many operations in signal processing, transforming the given signals into the
frequency domain (i.e., a transform domain) simplifies computation and makes the system easier to
study and understand.

 Say, for instance, a system uses an input cosine signal in its operations. For a linear time-
invariant system, the output is the convolution of the input (cosine) and the system's impulse
response (usually represented as h(t)). Transforming all signals to the frequency domain, a
complicated operation like convolution in time reduces to multiplication in the transform
domain, which is far less complex (see the sketch after this list).

 In designing filters, it would be difficult to explain the design to someone, or to analyse the
circuit, in the time domain. It is simply easier to visualize filtering certain frequencies from a
signal when we know that the input signal, represented in the transform domain, has certain
frequency components.

 We can solve an RLC circuit (an electrical circuit consisting of a resistor (R), an inductor (L),
and a capacitor (C), connected in series or in parallel) in the time domain, but doing so means
solving a second-order differential equation. We can certainly do that with calculus.
Alternatively, we can transform the problem into the frequency/s domain (Laplace transform),
solve the circuit with simple algebra, and then convert the result from the s domain back into
the time domain (inverse Laplace transform).
 Differentiation -> Multiply by s
 Integration -> Divide by s
 Convolution of two response functions -> Multiplication of two transfer functions
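
A minimal NumPy sketch of the convolution-to-multiplication equivalence mentioned in the first bullet (signal sizes are arbitrary):

import numpy as np

x = np.random.randn(64)    # input signal
h = np.random.randn(16)    # impulse response

y_time = np.convolve(x, h)   # direct linear convolution in time

N = len(x) + len(h) - 1      # zero-pad both FFTs to the full output length
y_freq = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(h, N), N)

print(np.allclose(y_time, y_freq))   # True: multiplication in frequency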

Practical applications of Transformations


Practical applications include filtering audio signals and signal processing in communications, and
extend to electronic circuits and devices, control systems, image processing, etc.
 Consider for example why we use transformations in images. Transformations give us more
information in terms of the rapidity of change in amplitude levels, which in turn helps us
determine contrast. Another application of image transformations would be in compression of
images.
 There are some cases where frequency is directly important, such as radio communication and
audio reproduction. In an audio application, you may want to know how the filter
affects the amplitude of different frequencies. That is most easily and intuitively done by
expressing the filter in frequency space.
 One reason these transforms are useful in telecommunications is the importance of linear
filters in this field. Linear filters that are represented as convolutions of functions in the time
domain become multiplications of functions in the frequency domain (true for the Laplace,
Fourier, and z-transforms). Multiplication of functions is easier to implement, and easier to
apply intuition to, than convolution. That makes linear filters easier to design and work with in
the frequency domain.

Common Transformations
The Laplace and Fourier transforms act on a continuous variable via integration, whereas the z-
transform acts on a discrete variable (such as time samples) via summation.

In summary, both the time domain and frequency space are complete and consistent ways of looking at a
signal, but each gives different intuitive insights, and each makes different types of problems harder
or easier.

Contrast some techniques used in transformations such as Laplace, Fourier, and Z-transforms.

Transforms are used because the time-domain mathematical models of systems are generally complex
differential equations. Transforming these complex differential equations into simpler algebraic
expressions makes them much easier to solve. Once the solution to the algebraic expression is found,
the inverse transform will give you the time-domain response.

x(t) --Fourier transform--> X(jω)

x(t) --Laplace transform--> X(s)

All of the above-mentioned transforms have different "basis functions"; that is, the mapping from the
time domain to the transform domain is different for each.

The Laplace transform has exp(-st) as the basis function where s is a complex number in the
Cartesian form s=σ+jω. This transformation has properties that help you analyse the stability of a
given system and find the output response to an input signal.

This transformation is used on signals in ‘continuous time’.

The Fourier transform is a specific case of the Laplace transform where s (the complex number) is
purely imaginary and equal to jω, where ω = 2πf is the angular frequency of the signal.
It is used for both continuous signals and discrete signals.

The z-transform is used to analyze discrete-time signals and has the basis function z^(-n), where n is the
discrete time variable and z is a complex number, often represented in its polar form
R·e^(jθ), where θ is the phase of the complex number and R is the magnitude.
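
For reference, the standard (bilateral) definitions are:

X(s) = ∫ x(t) e^(-st) dt (Laplace; integral over all t)
X(jω) = ∫ x(t) e^(-jωt) dt (Fourier; integral over all t)
X(z) = Σ x[n] z^(-n) (z-transform; sum over all n)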

Stability of a system
In control theory, the term "transfer function" is very important; it is one of the most basic concepts in a
control system.

Transfer Function:
It defines the relationship between the input and output of a control system. Let us denote the output C(s)
and the reference input R(s); then the transfer function is G(s) = C(s)/R(s).
If the system involves a feedback loop, the effect of the feedback must also be taken into account.

Poles and Zeroes:


In a control system, poles and zeros are among the most important terms. They are defined by the transfer
function.
Let the transfer function of a second-order system be defined as G(s) = (s + 1) / ((s + 2)(s + 3)).

Here, we can easily recognize the poles and zeros.


Poles: Poles are the roots of the denominator of the given transfer function, found by setting the denominator
equal to 0. Here, the poles of the system are -2 and -3.

Zeros: Zeros are the roots of the numerator of the given transfer function, found by setting the numerator equal
to 0. Here, the system has only one zero, i.e., -1.

Stability: When plotted on the s-plane or z-plane, the positions of the poles determine the stability
of the system. A continuous-time system is stable when all of its poles lie in the left half of the
s-plane; for a discrete-time system, the condition is that all poles lie inside the unit circle of the z-plane.
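
A minimal NumPy sketch of this pole-zero stability check, using the example transfer function above:

import numpy as np

num = [1, 1]       # numerator:   s + 1
den = [1, 5, 6]    # denominator: (s + 2)(s + 3) = s^2 + 5s + 6

zeros = np.roots(num)   # array([-1.])
poles = np.roots(den)   # array([-3., -2.])

# Continuous-time stability: every pole strictly in the left half-plane
print(all(p.real < 0 for p in poles))   # True -> stable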

Mapping of s-plane to Z-plane:
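
The standard mapping is z = e^(sT), where T is the sampling period: the jω axis of the s-plane maps onto the unit circle of the z-plane, the left half of the s-plane maps to the interior of the unit circle, and the right half maps to the exterior. The stability conditions above therefore correspond: poles in the left half-plane become poles inside the unit circle.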


1. LAPLACE TRANSFORM: Used for analysis in the s-plane, through the region of


convergence, of periodic or non-periodic, stable or unstable, continuous-time signals.
It is a mathematical transformation from one domain to another. A time-domain analog signal is
transformed to the Laplace domain using the kernel e^(-st), where t stands for time and s is a complex
number, represented as s = σ + jω. The s-plane is a 2D plane with a
real and an imaginary axis.
The following formula defines the Laplace transform:
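
X(s) = ∫ x(t) e^(-st) dt, integrated from t = 0 to ∞ (unilateral form, for signals defined for t ≥ 0)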

It converts an equation from the time domain to the s-domain. We solve it in the s-domain, then at the end
we use the inverse Laplace transform to convert the result back into the time domain.

2. FOURIER TRANSFORM: Used for the frequency-domain analysis of periodic or


non-periodic, stable signals.
The kernel for the Fourier transform is e^(-jωt). It is a special case of the Laplace transform, evaluated
along the imaginary axis (s = jω).

3. Z-TRANSFORM: Used for analysis in the z-plane, through the region of


convergence, of periodic or non-periodic, stable or unstable, discrete signals.
This is the most useful transform in practice, since it is, in effect, a digitization of the Laplace transform,
and in reality all DSP chips work with digital signals. The mapping takes the imaginary axis of the s-plane
to the unit circle of the z-plane, and instead of integration you perform summation.

4. DISCRETE FOURIER TRANSFORM: A special case of the z-transform, evaluated at discrete
points along the unit circle (just as the Fourier transform is a special case of the Laplace
transform, evaluated on the imaginary axis).

Fourier versus Laplace Transforms


The Fourier transform is a subset of the Laplace transform; Laplace is the more generalized transform.
Fourier is used primarily for steady-state signal analysis, while Laplace is used for transient signal
analysis. Laplace is good at finding the response to pulses, step functions, and delta functions, while
Fourier is good for continuous (steady-state) signals.

The Fourier transform is sufficient for signals that can be synthesized using only sine and cosine basis
functions. But when there are exponential components in the signal as well (e.g., a sine wave whose
amplitude varies exponentially in time), Fourier gives us only half the information: the exponential
behaviour is lost. Laplace, however, deals with complex exponential basis functions. Laplace is used
extensively in filter design and analog circuits; using Laplace, we can decide which exponential terms
should be used so that our system converges and also remains stable.

Why do we use the z-transform and Laplace transform in signal analysis? What's so special about
them? What is their physical significance?

The z-transform and Laplace transform are the most convenient transforms for converting a time-domain
signal into a transform-domain representation. The Laplace or z-transform allows us to solve differential
(or difference) equations algebraically in that domain.

 For some systems, as in control theory, we cannot judge whether the system is stable in the
time domain, so we convert to the frequency domain.
 Likewise, operations like convolution, filtering, and reconstruction of signals are far easier to
carry out in the frequency domain than in the time domain.
 Note that a telecom company does not process our signals in the time domain; the filtering and
encoding are all done in the frequency domain.
So the need for the z-transform and Laplace transform arises because we routinely work in the frequency
domain for signals and control systems.

The main reason that engineers use the Laplace transform and the z-transform is that they allow us
to compute the responses of linear time-invariant systems easily.
1. The Laplace transform provides a convenient way of solving linear differential
equations when the signals involved have simple Laplace transforms (constant signals,
exponentials, and sinusoids).
2. The Z-transform provides a convenient way of solving difference equations when the
signals involved have simple Z-transforms (constant signals, exponentials, and sinusoids).

Moreover, the continuous time Fourier transform is a special case of the Laplace transform and the
Discrete Time Fourier transform is a special case of the Z-transform. This means that the Laplace and
Z transforms can manage systems and equations that the corresponding Fourier transform cannot.

A differential equation (D.E.) is an equation which involves the derivatives (dy/dx) of a function


y = f(x); for example, dy/dx + py = q. A difference equation (d.e.), in contrast, shows the relationship
between consecutive values of a sequence and the differences among them. Difference equations are often
rearranged as a recursive formula so that a system's output can be computed from the input signal and past
outputs, for example y[n] + 7y[n−1] + 2y[n−2] = x[n] − 4x[n−1] (a minimal sketch of computing this
recursion follows).
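
A minimal pure-Python sketch of computing that recursion sample by sample (the function name is illustrative; this particular system happens to be unstable, so its output grows):

def difference_eq(x):
    y = []
    for n in range(len(x)):
        xn1 = x[n - 1] if n >= 1 else 0.0   # x[n-1], zero before the start
        yn1 = y[n - 1] if n >= 1 else 0.0   # y[n-1]
        yn2 = y[n - 2] if n >= 2 else 0.0   # y[n-2]
        # y[n] = x[n] - 4x[n-1] - 7y[n-1] - 2y[n-2]
        y.append(x[n] - 4 * xn1 - 7 * yn1 - 2 * yn2)
    return y

print(difference_eq([1.0, 0.0, 0.0, 0.0]))   # impulse response: [1, -11, 75, -503]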

What is the relation between the Fourier transform, Laplace transform, and z-transform?
Take the Laplace transform and evaluate it on the imaginary axis: you get the continuous-time
Fourier transform.
Take the Laplace transform and sample the signal in the time domain: you get the z-transform.
Take the z-transform and evaluate it on the unit circle: you get the discrete-time Fourier
transform.
Sample the DTFT in the frequency domain: you get the discrete Fourier transform.

Uses of Laplace Transform


The Laplace transform can be used to solve differential equations, so its applications are, in effect, the
daily-life applications of differential equations.
It is used for solving differential and integral equations. In physics and engineering, it is used for the
analysis of linear time-invariant systems such as electrical circuits, harmonic oscillators, optical
devices, and mechanical systems; it is also used in signal processing to access the frequency spectrum of
the signal under consideration. The procedure works as follows:

You apply the transform to a differential equation, turning it into an algebraic equation and thus
making it significantly easier to handle. You then simplify the algebraic expression to the required
extent. Finally, you find the inverse Laplace transform of the simpler expression(s), which solves the
differential equation.

The Laplace transform is applied to each term at first, and the inverse Laplace transform is applied at
the end, after solving, to get the answer back in the original domain. A minimal worked example follows.
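
A minimal worked example: solve dy/dt + 2y = 0 with y(0) = 1. Transforming term by term gives sY(s) - y(0) + 2Y(s) = 0, so Y(s) = 1/(s + 2); taking the inverse Laplace transform gives y(t) = e^(-2t).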

Uses of Z Transform
The z-transform is good for digital systems, which are discrete, while the Laplace (s) transform is good
for analog systems, which are continuous. Technically you can use either transform in both
applications, but it would not be the most natural representation.

The z-transform is used in many applications of mathematics and signal processing. Its applications include:
 Analysis of digital filters.
 Simulation of continuous systems.
 Analysis of linear discrete systems.
 Finding the frequency response of a system.
 Analysis of discrete signals.
 System design and analysis, including checking system stability.
 Automatic control in telecommunications.
 Modeling the dynamic behaviour of electrical and mechanical systems.
 Population science.
 Control theory.
 Digital signal processing.

Uses of Fourier Transform


The Fourier transform (FT) decomposes a function of time (a signal) into its constituent
frequencies. The Fourier transform is also an important image processing tool, used to
decompose an image into its sine and cosine components. The output of
the transformation represents the image in the Fourier (frequency) domain, while the input image
is the spatial-domain equivalent. A minimal sketch of this decomposition follows.
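
A minimal NumPy sketch of that decomposition and its exact inverse (the random array stands in for a grayscale image):

import numpy as np

image = np.random.rand(64, 64)                # stand-in grayscale image

spectrum = np.fft.fft2(image)                 # Fourier (frequency) domain
reconstructed = np.fft.ifft2(spectrum).real   # back to the spatial domain

print(np.allclose(image, reconstructed))      # True: no information lost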
