Signal Processing: Assignment


SIGNAL PROCESSING

ICPC-305
Assignment-Digital Filter Design & D.S.P
SUBMITTED BY:
RAHUL
19110074
B. TECH, ICE-5TH SEMESTER
G4

Q1.
Design of Digital Filters: FIR design by Windowing Techniques, Need and
choice of windows, Linear phase characteristics.
Ans-1:
Unlike IIR filters, FIR filters are not designed from analog prototypes. They are designed by direct
approximation of the desired magnitude or impulse response, often with the additional requirement
that the phase–frequency characteristic be linear.
There are two common methods used for design of FIR filters:
1. The windowed Fourier series method / Windowing Techniques.
2. The frequency sampling method.
We will study “Windowing Design Techniques” in detail.
The basic idea behind the window design is to choose a proper ideal frequency-
selective filter (which always has a noncausal, infinite-duration impulse response)
and then to truncate (or window) its impulse response to obtain a linear-phase and
causal FIR filter. Therefore, the emphasis in this method is on selecting an appropriate
windowing function and an appropriate ideal filter.
The design procedure for FIR filters then involves truncating the infinite-length
impulse response h(n) of ideal filters and shifting/delaying such impulse responses to
make the filter causal. The process of shortening the filter length is referred to as
"truncation or windowing" and is achieved by multiplication by a window function.
In digital signal processing, window functions are widely used for denoising, signal estimation
and signal analysis. An FIR filter designed this way has a linear phase response in both the
passband and the stopband. The windowing technique is especially useful because the filter
coefficients are easily obtained from the desired frequency response.
The design of an FIR filter proceeds in the following five steps:
(a) Specification of the filter: this is the starting point of the design.
(b) Coefficient calculation: the coefficients of a transfer function H(z) that satisfies the given
specification are calculated.
(c) Realization: the transfer function is converted into a suitable filter network or structure.
(d) Analysis of finite word-length effects: the effect of quantizing the filter coefficients and the
input signal data is analysed.
(e) Implementation: the filter is coded in software and/or built in hardware and the actual
filtering is performed.
The windowing technique for FIR filter design involves the following steps:
1. Describe the filter specifications.
2. Define the window function according to the specifications.
3. Compute the filter order for the given set of specifications.
4. Compute the window-function coefficients.
5. Compute the ideal impulse response for the chosen filter order.
6. Multiply the window function with the ideal impulse response to obtain the desired FIR filter
(a code sketch of this workflow is given below).
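
As a minimal sketch of this workflow (not part of the original assignment), the steps above can be carried out with SciPy's firwin, which implements the windowed-sinc method; the sampling rate, band edges and filter length used here are assumed values chosen only for illustration.

```python
# Hedged sketch of the windowing workflow; all specifications below are assumed.
import numpy as np
from scipy import signal

fs = 10_000                      # sampling frequency in Hz (assumed)
wp, ws = 2_000, 3_000            # passband / stopband edge frequencies in Hz (assumed)
wc = 0.5 * (wp + ws)             # cut-off taken midway between the edges

M = 33                           # length from the Hamming rule Δw ≈ 6.6π/M (assumed spec)
h = signal.firwin(M, wc, window="hamming", fs=fs)   # windowed-sinc FIR coefficients

# Verify the design by inspecting the magnitude response at the stopband edge.
w, H = signal.freqz(h, worN=2048, fs=fs)
print("attenuation at ws:", 20 * np.log10(abs(H[np.argmin(abs(w - ws))])), "dB")
```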
MATHEMATICAL PART
We will denote an ideal frequency-selective filter by Hd(e^jw), which has a unity
magnitude gain and linear-phase characteristics over its passband, and zero response
over its stopband. An ideal LPF of bandwidth wc < π is given by:

Hd(e^jw) = e^{-jαw} for |w| ≤ wc, and 0 for wc < |w| ≤ π,

where wc is also called the cut-off frequency, and α is called the sample
delay. (Note that from the DTFT properties, e^{-jαw} implies a shift in the
positive n direction, i.e., a delay.) The impulse response of this filter is of
infinite duration and is given by:

hd(n) = sin[wc(n − α)] / [π(n − α)]
Note that hd(n) is symmetric with respect to α, a fact useful for linear phase
FIR filters.
To obtain an FIR filter from hd(n), one has to truncate hd(n) on both
sides, i.e., perform "windowing". To obtain a causal and linear-phase FIR filter h(n) of length
M, we must have

h(n) = hd(n) for 0 ≤ n ≤ M − 1, and 0 otherwise,

and

α = (M − 1)/2.

In general, h(n) can be thought of as being formed by the product of hd(n) and a
window function w(n) as:

h(n) = hd(n) · w(n)

where

w(n) = some symmetric function of n over 0 ≤ n ≤ M − 1, and 0 otherwise.

How we define w(n) over this interval determines the different window functions, and we
obtain different window designs.
For the given filter specifications, choose the filter length M and a window function
w(n) that give the narrowest main-lobe width and the highest side-lobe attenuation possible.
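
The product h(n) = hd(n) · w(n) can also be formed directly, as in the following minimal NumPy sketch; the length M, the cut-off wc and the choice of a Hamming window are assumptions made only for illustration.

```python
# Minimal sketch of forming h(n) = hd(n) * w(n) by hand (all parameters assumed).
import numpy as np

M = 61                        # filter length (assumed)
alpha = (M - 1) / 2           # sample delay that makes the filter causal and linear phase
wc = 0.4 * np.pi              # desired cut-off in rad/sample (assumed)

n = np.arange(M)
# Delayed ideal low-pass impulse response sin[wc(n - alpha)] / [pi(n - alpha)];
# np.sinc handles the n = alpha sample, where hd equals wc/pi.
hd = (wc / np.pi) * np.sinc(wc * (n - alpha) / np.pi)
w = np.hamming(M)             # window function of the same length
h = hd * w                    # point-wise product: the windowed FIR design
```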

NOTE: When an ideal impulse response hd(n) of infinite length is truncated by
multiplication with a finite-length window function w(n), the frequency response of the
resulting filter, which was originally an ideal rectangular (brick-wall) shape, shows
oscillatory behaviour. This oscillatory behaviour is known as the "Gibbs phenomenon".
Important parameters in the design of FIR filters using window functions:

(i) Let wp and ws be the passband and stopband edge frequencies of the desired filter.
Then the cut-off frequency is wc = (wp + ws)/2 and |H(e^jwc)| = 0.5.
(ii) The transition bandwidth is Δw = ws − wp and is approximately
given by Δw ≈ C/M, where C is a constant that depends on the window
(a worked example follows this list).
(iii) The transition width of the designed filter is always less than the
main-lobe width of the window.
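
As a small worked example of the rule in point (ii), and assuming band edges of 0.3π and 0.4π rad/sample, the required filter lengths can be estimated from the exact-value constants tabulated later in this answer:

```python
# Worked example of the transition-width rule Δw ≈ C/M (band edges are assumed).
import numpy as np

wp, ws = 0.3 * np.pi, 0.4 * np.pi        # assumed passband / stopband edges (rad/sample)
dw = ws - wp                             # transition bandwidth Δw = 0.1π

# Exact-value constants C for each fixed window (see the comparison table below).
C = {"rectangular": 1.8 * np.pi, "hanning": 6.2 * np.pi,
     "hamming": 6.6 * np.pi, "blackman": 11.0 * np.pi}

for name, c in C.items():
    M = int(np.ceil(c / dw))             # smallest filter length meeting the width
    print(f"{name:12s} M >= {M}")
```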
We now briefly discuss the different well-known window functions.

1. RECTANGULAR WINDOW:

This is the simplest window function but provides the worst performance from the
viewpoint of stopband attenuation. It is defined as:

w(n) = 1 for 0 ≤ n ≤ M − 1, and 0 otherwise.

Its frequency response function is:

W(e^jw) = [sin(wM/2) / sin(w/2)] e^{-jw(M−1)/2},

whose amplitude response is:

Wr(w) = sin(wM/2) / sin(w/2).

Because the designed filter's response is the convolution of the ideal response with this
function, the running integral of the window amplitude response is what determines the
transition bandwidth and the stopband attenuation, and is needed for their accurate analysis.

Observations about Rectangular Window:


1. The amplitude response Wr(w) has the first zero at w = w1 where w1= 2π/M.
Hence, the width of the main lobe is 2w1 = 4π/M. Therefore, the approximate
transition bandwidth is 4π/M.
2. The first side lobe (peak side lobe) occurs at approximately w = 3π/M, where its magnitude
is |Wr(3π/M)| ≈ 2M/3π. Compared with the main-lobe amplitude, which is equal to M, the
peak side lobe is therefore about 13 dB below the main lobe.
3. This 13 dB side-lobe level of the window translates into a minimum stopband attenuation of
only 21 dB for the designed filter, irrespective of the window length M.
4. The exact transition bandwidth can be computed from the above and comes out to
be ws − wp = 1.8π/M, which is less than half the approximate bandwidth 4π/M.
Clearly, this is a simple window operation in the time domain and an easy function to
analyse in the frequency domain. However, there are two main problems:
1. The minimum stopband attenuation of 21 dB is insufficient in practical
applications.
2. Because rectangular windowing is a direct truncation of the infinite-length
hd(n), it suffers from the Gibbs phenomenon. If we increase M, the width of
each side lobe decreases, but the area under each lobe remains constant.
Therefore, the relative amplitudes of the side lobes remain constant, and the
minimum stopband attenuation remains at 21 dB (the sketch below illustrates this).
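
The following minimal sketch (an illustration, not part of the assignment; the cut-off 0.4π and the lengths tried are assumed values) checks that the peak stopband ripple of a rectangular-windowed low-pass design stays near 21 dB however large M becomes.

```python
# Gibbs phenomenon check: stopband attenuation of the rectangular window vs. M.
import numpy as np
from scipy import signal

wc = 0.4 * np.pi                                           # assumed cut-off (rad/sample)
for M in (21, 61, 201):
    n = np.arange(M)
    alpha = (M - 1) / 2
    h = (wc / np.pi) * np.sinc(wc * (n - alpha) / np.pi)   # direct truncation = rectangular window
    w, H = signal.freqz(h, worN=8192)
    stopband = np.abs(H[w >= wc + 2 * np.pi / M])          # beyond half the main-lobe width
    print(f"M = {M:3d}: peak stopband ripple ≈ {20 * np.log10(stopband.max()):.1f} dB")
```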

Since the rectangular window is impractical in many applications, we consider other
fixed window functions that provide a fixed amount of attenuation. These window
functions bear the names of the people who first proposed them. They can be analysed in
the same way as the rectangular window, so here I will mention only their window
functions w(n).

2. HANNING WINDOW:

This is the raised-cosine window function, given by:

w(n) = 0.5[1 − cos(2πn/(M − 1))], 0 ≤ n ≤ M − 1, and 0 otherwise.

3. HAMMING WINDOW:

This window is similar to the Hanning window except that it has a small amount of
discontinuity at the boundaries and is given by:

w(n) = 0.54 − 0.46 cos(2πn/(M − 1)), 0 ≤ n ≤ M − 1.

4. BLACKMAN WINDOW:

This window is also similar to the previous two but contains a second harmonic term
and is given by:

w(n) = 0.42 − 0.5 cos(2πn/(M − 1)) + 0.08 cos(4πn/(M − 1)), 0 ≤ n ≤ M − 1.

Going down this list, both the transition widths and the stopband attenuations increase.
The Hamming window therefore offers a good compromise for many applications.
Window Function   Transition Width Δw (Approximate)   Transition Width Δw (Exact)   Minimum Stop-Band Attenuation (dB)
Rectangular       4π/M                                1.8π/M                        21
Hanning           8π/M                                6.2π/M                        44
Hamming           8π/M                                6.6π/M                        53
Blackman          12π/M                               11π/M                         74
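
The stopband figures in this table are driven by each window's peak side-lobe level. As a rough check (M = 51 is an assumed length), those side-lobe levels can be estimated from the zero-padded spectra of the windows; note that the resulting numbers (roughly −13, −31, −43 and −58 dB) are the window side-lobe levels, not the filter stopband attenuations tabulated above, although they determine them.

```python
# Rough peak-side-lobe comparison of the fixed windows (M = 51 is assumed).
import numpy as np

M = 51
windows = {"rectangular": np.ones(M), "hanning": np.hanning(M),
           "hamming": np.hamming(M), "blackman": np.blackman(M)}

for name, win in windows.items():
    W = np.abs(np.fft.fft(win, 8192))[:4096]   # zero-padded magnitude spectrum
    W /= W[0]                                  # normalise to the main-lobe peak
    k = 1
    while k < 4095 and W[k + 1] < W[k]:        # walk down the main lobe to its first null
        k += 1
    print(f"{name:12s} peak side lobe ≈ {20 * np.log10(W[k:].max()):.1f} dB")
```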

5. KAISER WINDOW:

This is an adjustable window function that is widely used in practice. The window
function is due to J. F. Kaiser and is given by:

w(n) = I0[β √(1 − (1 − 2n/(M − 1))²)] / I0[β], 0 ≤ n ≤ M − 1,

where I0[·] denotes the modified zero-order Bessel function given by:

I0(x) = 1 + Σ_{k=1}^{∞} [(x/2)^k / k!]²,

which is positive for all real values of x. The parameter β controls the minimum
stopband attenuation As and can be chosen to yield different transition widths for near-
optimum As. This window can provide different transition widths for the same M,
which is something the fixed windows lack.

For the Kaiser window, it is difficult to work with the Bessel function analytically, so Kaiser
developed empirical design formulas. Given ωp, ωs, Rp, and As, the parameters β and M are given by:

β = 0.1102(As − 8.7) for As ≥ 50,
β = 0.5842(As − 21)^0.4 + 0.07886(As − 21) for 21 < As < 50,
β = 0 for As ≤ 21,

M ≈ (As − 7.95) / (2.285 Δw) + 1, where Δw = ωs − ωp.
Hence, the performance of this window is comparable to that of the Hamming
window. In addition, the Kaiser window provides flexible transition bandwidths, so
we can conclude that the Kaiser window is preferable to the other (fixed) windows.
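
SciPy exposes Kaiser's empirical formulas through kaiserord; the following minimal sketch (the attenuation, transition width and cut-off are assumed values) designs a Kaiser-window FIR filter from such a specification.

```python
# Kaiser-window design via SciPy's empirical formulas (all specifications assumed).
import numpy as np
from scipy import signal

As = 60.0                         # desired minimum stopband attenuation in dB (assumed)
width = 0.1 * np.pi               # transition width Δw in rad/sample (assumed)
wc = 0.35 * np.pi                 # cut-off midway between wp and ws (assumed)

# kaiserord expects the width as a fraction of the Nyquist frequency (π rad/sample).
numtaps, beta = signal.kaiserord(As, width / np.pi)
h = signal.firwin(numtaps, wc / np.pi, window=("kaiser", beta))

print(f"numtaps = {numtaps}, beta = {beta:.3f}")
```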
It is clear that the windowing technique for FIR filter design is straightforward and requires a
minimal amount of computational effort. You just need to follow the 4 steps given below:

1. Specify the 'ideal' or 'desired' frequency response of the filter, Hd(e^jw).
2. Obtain the impulse response hd(n) of the desired filter by evaluating the inverse Fourier
transform.
3. Select the window function that satisfies the passband and attenuation specifications,
and then determine the number of filter coefficients using the appropriate relationship
between the filter length M and the transition width Δw.
4. Obtain the values of the window function w(n) for the chosen window using the
expressions mentioned above, and the actual FIR coefficients h(n) by using the expression
h(n) = hd(n) · w(n).

Comparison of time- and frequency-domain characteristics of the window functions listed
below (figure reproduced from the book by Jervis):
a) Rectangular
b) Hamming
c) Blackman

All the mathematical equations used above were typeset in LaTeX and inserted here.
Q2.
Digital Signal Processors: Architecture, Features, Addressing Formats,
Functional modes and different Commercial Processors.
Ans-2:
A Digital Signal Processor (DSP) is a specialized microprocessor with an architecture
optimized for the operational needs of digital signal processing, which is a subfield
of signal processing.
It is used to mathematically manipulate an information signal, very rapidly, in order to modify
or improve it in some way. It can process data in real time, making it
ideal for applications that cannot tolerate delays.
It takes video, voice, audio, temperature or position signals that have been digitized
and mathematically manipulates them so that the information contained in them can be
displayed or converted to another type of signal. We can also define it as "an
integrated circuit designed for high-speed data manipulations and used in a variety of
applications".
A programmable DSP device should provide instructions similar to a conventional
microprocessor. The instruction set of a typical DSP device should include the
following:
a. Arithmetic operations such as ADD, SUBTRACT, MULTIPLY etc.
b. Logical operations such as AND, OR, NOT, XOR etc.
c. Multiply and Accumulate (MAC) operation.
d. Signal scaling operation.
In addition to the above provisions, the architecture should also include the following:
a. On-chip registers to store intermediate results.
b. On-chip memories to store signal samples (RAM).
c. On-chip memories to store filter coefficients (ROM).
Inside a DSP, we have the following mentioned key components:

• Program Memory: Stores the programs that the DSP will use to process data.
• Data Memory: Stores the information to be processed.
• Compute Engine: Performs the mathematical processing, accessing the program from
the Program Memory and the data from the Data Memory.
• Input/Output: Serves a range of functions to connect to the outside world.
Bus Architecture and Memory Organisation of DSP:

Conventional microprocessors use the Von Neumann architecture for memory management,
wherein the same memory is used to store both the program and the data. Although this
architecture is simple, it takes a greater number of processor cycles to execute a
single instruction because the same bus is used for both data and program.

In order to increase the speed of operation, separate memories are used to store the program
and the data, and a separate set of data and address buses is provided for each memory;
this arrangement is called the Harvard architecture. Since the buses operate independently,
program instructions and data can be fetched at the same time, improving the speed over
the single-bus design.

There is also an improved or modified version of the Harvard architecture, called the Super
Harvard Architecture. It adds an instruction cache and an I/O controller to the
Harvard architecture, which results in an overall improvement in performance.

In addition to this, digital signal processors have on-chip memories in order to increase the
execution speed of DSP functions. As dedicated buses are used to access them, on-chip
memories are faster. "Speed and size" are the two key parameters to be considered with
respect to the on-chip memories. These on-chip memories are organised in the following ways:
a. Since many DSP algorithms require instructions to be executed repeatedly, an instruction
stored in external memory can, once it is fetched, reside in the instruction cache.
b. The access times of on-chip memories should be sufficiently small that they can be
accessed more than once in every execution cycle.
c. On-chip memories can be configured dynamically so that they can serve different
purposes at different times.
A DSP processor must have its hardware architecture optimized for executing DSP functions.
It is characterized by the following:
• Multiple bus structure with separate memory space for data and program
instructions. Typically, the data memories hold input data, intermediate data
values, output samples as well as fixed coefficients. The program instructions are
stored in the program memory.
• The I/O port provides a means of passing data to and from external devices such
as ADC’s and DAC’s or for passing digital data to other processors. Direct
memory access (DMA) allows for rapid transfer of blocks of data directly to or
from data RAM under external control.
• Arithmetic units for logical and arithmetic operations (including an ALU and a
hardware multiplier).
This architecture is necessary because most DSP algorithms (such as filtering, correlation
and FFT) involve repetitive arithmetic operations and heavy data flow through the CPU.
Features of Digital Signal Processors:
DSP processors, unlike other processors, are designed to be able to process signals in real
time. In order to achieve this there are certain features that are unique to DSP processors
only. Architectural features of DSP include on-chip memory, special instruction set, I/O
capability and large memory. Other features are listed as follows:
• DSP processors have the ability to multiply and accumulate (MAC) in one instruction
cycle. This is achieved by embedding the MAC instruction in hardware in the main
data path whereas other processors take several instruction cycles to achieve the same
operation.
• DSP processors have the ability to complete several accesses to memory in a single
instruction cycle. For instance, a processor can fetch an instruction while
simultaneously storing results of the previous instruction.
• Some DSP processors provide special support for repetitive computations, which are
typical in DSP computations. A special loop or repeat instruction is provided. Such
features make DSP processors more suitable for real-time digital signal processing.
• Some DSP processors have dedicated address generation units which work in the
background and allow the arithmetic processing to proceed with maximum speed.
Once the address register is configured it will generate the address required for
accessing the operand in parallel with the execution of the arithmetic instruction.
• Most DSP processors have one or more serial or parallel input and output (I/O)
interfaces and specialized I/O handling mechanisms such as direct memory access
(DMA). The purpose of these peripherals and interfaces is to allow cost-effective,
high-performance input and output.

The various DSP computational building blocks include components such as multipliers,
parallel multipliers, multipliers for signed numbers, buses, shifters, barrel shifters, the MAC
unit, the ALU, etc. (a sketch of the basic MAC operation follows).
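
The MAC unit is the workhorse of most DSP algorithms. The following plain-Python illustration (not DSP assembly, and not tied to any particular processor) shows the multiply-accumulate pattern that a hardware MAC unit executes in a single instruction cycle.

```python
# Illustration of the multiply-accumulate (MAC) pattern behind an FIR output sample.
def fir_sample(coeffs, history):
    """Compute one FIR output sample as a chain of MAC operations."""
    acc = 0.0                          # the accumulator register
    for c, x in zip(coeffs, history):  # each loop iteration is one MAC: acc += c * x
        acc += c * x
    return acc

# Example: a 4-tap moving average over the four most recent samples.
print(fir_sample([0.25, 0.25, 0.25, 0.25], [1.0, 2.0, 3.0, 4.0]))   # -> 2.5
```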

Addressing Formats of Digital Signal Processors:

Addressing modes are various ways of specifying the data. Operands are stored either in the
register files or in the memory (on-chip & off-chip). Their addresses are present either
directly in the instruction, or indirectly in a CPU register. In either case, the addressing mode
should declare if the operand is in the registers or in the memory and provide its address.
The different addressing modes available for digital signal processors are:
• Immediate Addressing Mode: In this mode, the data is contained in the instruction
itself.
• Direct Addressing Mode: The address of the operand in the memory is obtained
from the LSBs of the instruction. In this addressing mode, instruction holds the
memory location of the operand.
• Indirect Addressing Mode: In this addressing mode, the operand is accessed using a
pointer. A pointer is generally a register which holds the address of the location
where the operand resides. Indirect addressing can be extended to incorporate
automatic increment or decrement capabilities, which leads to addressing modes
such as "post increment, post decrement, pre increment, pre decrement,
post_add_subset, post_sub_subset, pre_add_subset, pre_sub_subset". The location
of the operand in memory is pinpointed through a combination of the contents of
an auxiliary register, optional displacements and the available index registers.
The auxiliary addressing register units (AARUs) are functional units that calculate the
effective address of the operand. This technique is useful when blocks of data are
being processed, since provision is made for automatically incrementing or
decrementing the address stored in the register after each reference.
• Register Addressing Mode: In this mode, one of the registers will be holding the
data and the register has to be specified in the instruction.

Special Addressing Modes:


For the implementation of some real time applications in DSP, normal addressing modes will
not completely serve the purpose. Thus, some special addressing modes are required for such
applications.
• Circular Addressing Mode: One of the specialized addressing modes
available for signal processing applications is that of circular addressing. In
most real-time signal processing applications, such as those found in filtering,
the input is an infinite stream of data samples. These samples are windowed
and used in filtering applications. The data samples simulate a tapped-delay
line and the oldest sample is written over by the most recent sample.
The filter coefficients and the data samples are written into two circular buffers. They
are then multiplied and accumulated to form the output sample, which is then
stored. The address pointer for the data buffer is then updated so that the samples appear shifted
by one sample period: the oldest data sample is written out, and the most recent sample is written
into that location.
While processing the data samples coming continuously in a sequential manner, circular
buffers are used. In a circular buffer the data samples are stored sequentially from the initial
location till the buffer gets filled up. Once the buffer gets filled up, the next data samples will
get stored once again from the initial location. This process can go forever as long as the data
samples are processed in a rate faster than the incoming data rate.

Circular addressing mode requires three registers:
a. Pointer register to hold the current location (PNTR)
b. Start Address Register to hold the starting address of the buffer (SAR)
c. End Address Register to hold the ending address of the buffer (EAR)

There are four special cases in this addressing mode. They are:
a. SAR < EAR & updated PNTR > EAR
b. SAR < EAR & updated PNTR < SAR
c. SAR > EAR & updated PNTR > SAR
d. SAR > EAR & updated PNTR < EAR
The buffer length in the first two cases is (EAR − SAR + 1), whereas for the last two cases it is
(SAR − EAR + 1). A small sketch of the wrap-around behaviour is given below.
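
As a minimal, hedged sketch of the circular-buffer idea (plain Python, with the register names SAR/EAR/PNTR kept only for illustration; this is not how a real DSP implements it in hardware):

```python
# Circular buffer sketch mirroring the SAR / EAR / PNTR scheme described above.
class CircularBuffer:
    def __init__(self, length):
        self.buf = [0.0] * length
        self.SAR = 0                   # start address of the buffer
        self.EAR = length - 1          # end address of the buffer
        self.PNTR = 0                  # pointer to the current location

    def write(self, sample):
        self.buf[self.PNTR] = sample   # the newest sample overwrites the oldest
        self.PNTR += 1
        if self.PNTR > self.EAR:       # wrap the pointer back to the start address
            self.PNTR = self.SAR

buf = CircularBuffer(4)
for s in [1, 2, 3, 4, 5, 6]:
    buf.write(s)
print(buf.buf)                         # -> [5, 6, 3, 4]
```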

• Bit-Reversed Addressing Mode: To implement FFT algorithms we need to
access the data in a bit-reversed manner. Hence a special addressing mode
called bit-reversed addressing mode is used to calculate the index of the next
data item to be fetched. It works as follows: start with index 0; the next index
is calculated by adding half the FFT length to the previous index in a bit-
reversed manner, with the carry propagated from the MSB to the LSB:
Current index = Previous index +B ½(FFT size),
where +B denotes bit-reversed addition.
In the computation of the FFT using butterfly-based algorithms, the addresses of the outputs are
bit-reversed with respect to the order of the inputs. Many signal processors can restore the natural
order of the outputs without incurring additional cycles. A modulo addressing feature is also provided.
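
For illustration, the bit-reversed access order can also be generated in software; the sketch below computes it by directly reversing each index's bit pattern, which yields the same sequence as repeatedly adding half the FFT length with the carry propagated from MSB to LSB.

```python
# Generate the bit-reversed access order for an N-point FFT (N a power of two).
def bit_reversed_indices(n_points):
    bits = n_points.bit_length() - 1                  # number of address bits
    def reverse(i):
        return int(format(i, f"0{bits}b")[::-1], 2)   # reverse the bit pattern of i
    return [reverse(i) for i in range(n_points)]

print(bit_reversed_indices(8))                        # -> [0, 4, 2, 6, 1, 5, 3, 7]
```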

Different Commercial Digital Signal Processors:

There are several families of commercial DSP devices. Right from the early eighties, when
these devices began to appear in the market, they have been used in numerous applications,
such as communication, control, computers, Instrumentation, and consumer electronics.

The architectural features and the processing power of these devices have been constantly
upgraded based on advances in technology and on application needs. In their basic versions,
most of them have a Harvard architecture, a single-cycle hardware multiplier, an address
generation unit with dedicated address registers, special addressing modes, and on-chip
peripheral interfaces. Of the various families of programmable DSP devices that are
commercially available, the three most popular ones are those from Texas Instruments,
Motorola, and Analog Devices.
Within the TMS320 family, they can be classified into two categories, namely 16-bit fixed-point
devices (e.g., TMS320C5X and TMS320C54X) and 32-bit floating-point devices (e.g.,
TMS320C6X). These are general-purpose DSPs; special-purpose DSPs also exist that provide
additional functionality.
The first commercial programmable DSP processor was TMS32010 developed by Texas
Instruments in 1982.
List of various commercial digital signal processors available till date in the market:

A. 32-bit Floating Point DSP’s (5% of market):


• TI TMS320C3X, TMS320C67XX
• AT & T DSP32C
• ANALOG DEVICES ADSP21XXX
• HITACHI SH-4 DSP

B. 16-bit Fixed Point DSP’s (95% of market):


• TI TMS320C2X, TMS320C62XX
• Infineon TC1XXX (TriCorel)
• MOTOROLA DSP568XX, MSC810X
• Agere Systems DSP16XXX, Starpro2000
• LSI Logic LSI140X (ZPS400)
• HITACHI SH-3 DSP
• StarCore SC110, SC140

Functional Modes of Digital Signal Processors:

• Address Generation Unit (AGU): The main job of the Address Generation Unit is to
generate the address of the operands required to carry out the operation. They have to
work fast in order to satisfy the timing constraints. As the address generation unit has
to perform some mathematical operations in order to calculate the operand address, it
is provided with a separate ALU.

Address generation typically involves one of the following operations:
a. Getting a value from an immediate operand, a register or a memory location.
b. Incrementing/decrementing the current address.
c. Adding/subtracting an offset to/from the current address.
d. Adding/subtracting an offset to/from the current address and generating the new address
according to the circular addressing mode.
e. Generating new addresses using the bit-reversed addressing mode.

• Arithmetic Logic Unit (ALU): A typical DSP device should be capable of handling
arithmetic instructions like ADD, SUB, INC, DEC etc and logical operations like
AND, OR, NOT, XOR etc. It consists of status flag register, register file and
multiplexers. ALU includes circuitry to generate status flags after arithmetic and logic
operations. These flags include sign, zero, carry and overflow.

• Barrel Shifter: It provides the capability to scale the data during an operand read or
write. No overhead is required to implement the shift needed for the scaling
operations. The barrel shifter and the exponent encoder normalize the values in an
accumulator in a single cycle. An additional shift capability enables the processor to
perform numerical scaling, bit extraction, extended arithmetic, and overflow
prevention operation.

• Floating Point Unit (FPU): A floating-point unit is a part of a computer system
specially designed to carry out operations on floating-point numbers. Typical
operations are addition, subtraction, multiplication, division, and square root. Some
FPUs can also perform various transcendental functions such as exponential or
trigonometric calculations, but the accuracy can be very low.

• Memory Management Unit (MMU): The MMU handles the translation from virtual
into physical addresses. Virtual addresses are issued by the DSP to the MMU, which
converts them into physical addresses. These physical addresses are used to access the
actual resource (memory).

• Translation Lookaside Buffer (TLB): A translation lookaside buffer (TLB) is a
memory cache that is used to reduce the time taken to access a user memory location.
It is a part of the chip's memory-management unit (MMU). The TLB stores the recent
translations of virtual memory to physical memory and can be called an address-
translation cache. A TLB may reside between the CPU and the CPU cache, between
the CPU cache and the main memory, or between the different levels of a multi-level
cache.

• Registers: These are the temporary storage units available with digital signal
processors which can be used for a variety of purposes and addressing modes are
associated with data present in them.

• Back Side Bus (BSB): A backside bus (BSB) is an internal bus that connects the
central processing unit to the cache memory, such as Level 2 (L2) and Level 3 (L3)
cache.

****************
