
VLSI Implementation of LMS Adaptive Filter

By Ravikumar. C (Reg. No. 15803044)

A PROJECT REPORT

Submitted to the department of ELECTRONICS AND COMMUNICATION in the FACULTY OF ENGINEERING & TECHNOLOGY

in partial fulfillment of the requirements for the award of the degree of

MASTER OF TECHNOLOGY IN VLSI DESIGN

S.R.M ENGINEERING COLLEGE
S.R.M INSTITUTE OF SCIENCE AND TECHNOLOGY (DEEMED UNIVERSITY)
MAY 2005

BONAFIDE CERTIFICATE

Certified that this project report titled VLSI Implementation of LMS Adaptive Filter is the bonafide work of Mr. Ravikumar, who carried out the research under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other project report or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

Signature of the Guide [Mr.T.VIGNESWARAN]

Signature of HOD [Dr.S. JAYASHRI]

ABSTRACT

Noise cancellation and echo cancellation are very important in speech processing. In this project I design an adaptive filter that is well suited to these applications. Adaptive signal processing is a relatively new area of DSP in which applications are increasing rapidly; it evolved from techniques developed to enable the adaptive control of time-varying systems.

Since there is no dedicated IC for adaptive filtering, I design the filter in VHDL. The design uses the LMS algorithm, the most popular algorithm for real-time adaptive system implementations; it is employed in many areas such as modelling, control, beamforming and equalization.

ACKNOWLEDGEMENT

It is my great pleasure to thank our chairman, Thiru T.R.Pachamuthu, and our director Dr.T.P.Ganesan.

I take this opportunity to thank our beloved principal Prof. R.Venkatramanai.

I am indebted to our Head of the Department, Prof. S. Jayashree, who has been a source of inspiration for all my activities.

I wish to convey my deepest sense of gratitude to my guide, Mr. T. Vigneswaran, M.E., for his valuable guidance.

I also thank our department faculty members for their great support in all my activities.

Signature of the Student (C.RAVIKUMAR)

TABLE OF CONTENTS

ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS
1. INTRODUCTION
2. DIGITAL FILTERS
3. STRUCTURE OF DIGITAL FILTERS
   3.1 ADVANTAGES OF DIGITAL FILTERS OVER ANALOG FILTERS
4. DIGITAL FILTER DESIGN
   4.1 TYPES OF DIGITAL FILTERS
       4.1.1 FIR FILTER
       4.1.2 IIR FILTER
   4.2 ADVANTAGES OF FIR FILTERS
5. FILTER STRUCTURE
   5.1 TRANSVERSAL STRUCTURE
   5.2 SYMMETRIC TRANSVERSAL STRUCTURE
   5.3 LATTICE STRUCTURE
6. SAMPLING THEORY
7. HISTORY OF ADAPTIVE FILTERS
8. INTRODUCTION TO ADAPTIVE FILTERS
   8.1 OPERATION
   8.2 GENERAL FORM
   8.3 CHARACTERISTICS
9. ALGORITHMS
   9.1 RLS ALGORITHM
   9.2 LMS ALGORITHM
       9.2.1 MSE CRITERION
       9.2.2 LMS CRITERION
10. ADAPTIVE FILTER OVERVIEW
    10.1 INTRODUCTION
    10.2 ADAPTIVE FILTERING PROBLEM
    10.3 APPLICATIONS
    10.4 ADAPTIVE ALGORITHMS
        10.4.1 WIENER FILTERS
        10.4.2 METHOD OF STEEPEST DESCENT
        10.4.3 LEAST-MEAN-SQUARE ALGORITHM
11. CONCLUSIONS
12. SYNTHESIS REPORT
    12.1 LMS REPORT
    12.2 FLOATING POINT ADDER
    12.3 FLOATING POINT MULTIPLIER
13. REFERENCES

LIST OF FIGURES

5.1   Transversal Structure
5.2   Symmetric Transversal Structure
5.3   Lattice Structure
8.1   Architecture of the adaptive filter
8.2   General form of adaptive filters
10.1  A strong narrowband interference N(f) in a wideband signal S(f)
10.2  Block diagram for the adaptive filter problem
10.3a Identification
10.3b Inverse modeling
10.3c Prediction
10.3d Interference cancellation
10.4  Example cross section of an error-performance surface for a two-tap filter

LIST OF ABBREVIATIONS

AGC   Automatic Gain Control
DSP   Digital Signal Processing
FIR   Finite Impulse Response
IIR   Infinite Impulse Response
LMS   Least Mean Square
MSE   Mean Square Error
RLS   Recursive Least Squares
VLSI  Very Large Scale Integration

CHAPTER 1

INTRODUCTION

In recent years, a growing field of research in adaptive systems has resulted in a variety of applications in communications, radar, sonar, seismology, mechanical design, navigation systems and biomedical electronics. Adaptive noise cancelling has been applied to areas such as speech communications, electrocardiography and seismic signal processing. It is in fact applicable to a wide variety of signal-enhancement situations, because noise characteristics are often nonstationary in real-world situations.

An adaptive filter is a system whose structure is adjustable in such a way that its performance improves in accordance with its environment. A simple example of an adaptive system is the automatic gain control (AGC) used in radio and television receivers. This circuit adjusts the sensitivity of the receiver inversely with the average incoming signal strength; the receiver is thus able to adapt to a wide range of input levels and to produce a much narrower range of output intensities. Adaptive systems usually have many of the following characteristics:

1. They can automatically adapt (self-optimize) in the face of changing (nonstationary) environments and changing system requirements.

2. They can be trained to perform specific filtering and decision-making tasks. Synthesis of systems having these capabilities can be accomplished automatically through training; in a sense, adaptive systems can be programmed by a training process.

3. Because of the above, adaptive systems do not require the elaborate synthesis procedures usually needed for nonadaptive systems. Instead, they tend to be self-designing.

4. They can extrapolate a model of behavior to deal with new situations after having been trained on a finite and often small number of training signals or patterns.

5. To a limited extent they can repair themselves; that is, they can adapt around certain kinds of internal defects.

6. They can usually be described as nonlinear systems with time-varying parameters.

CHAPTER 2 DIGITAL FILTERS

Digital filters are a very important part of digital signal processing; in fact, their extraordinary performance is one of the key reasons DSP has become so popular. Digital filters can be used for:

1. Separation of signals that have been combined.

2. Restoration of signals that have been distorted in some way.

Analog filters can be used for the same tasks, but digital filters can achieve far superior results. Signal separation is needed when a signal has been corrupted by interference, noise or other signals. For example, imagine a device for measuring the


electrical activity of a baby's heart while still in the womb: the breathing and heartbeat of the mother corrupt the raw signal, and a filter can separate these signals so that each can be analyzed individually. Signal restoration is used when a signal has been distorted in some way. For example, an audio recording made with poor equipment may be filtered to better represent the sound as it actually occurred; another example is deblurring an image acquired with an improperly focused lens or a shaky camera.

A digital filter is a numerical procedure, or algorithm, that transforms a given sequence of numbers into a second sequence that has some desirable properties, such as less noise or distortion. Equivalently, it can be defined as a digital machine that performs filtering by numerically evaluating a linear difference equation in real time under program control. In radar applications, digital filters are used to improve the detection of airplanes. In speech processing, digital filters are employed to reduce the redundancy in the speech signal, allowing more efficient transmission, and for speech recognition.

The input sequence to a digital filter can be generated in several ways. One common method is to sample a continuous-time signal at a set of equally spaced time intervals. If the continuous-time signal is denoted by x(t), then the values of the discrete-time sequence are

x(nTs) = x(t) at t = nTs ------- (2.1)

where Ts is the sampling period. The implementation of a digital filter depends on the application. In education and research, a digital filter is typically implemented as a program on a general-purpose computer. The computers used vary widely, from personal


computers through larger minicomputers to large time-shared mainframes. In commercial instrumentation and industrial applications, the digital filter program is commonly implemented on a microcomputer that may also be used for control and monitoring purposes. For high-speed or high-volume applications, such as controlling engine operation in automobiles, the digital filter may consist of special-purpose integrated circuits that perform the computation and storage functions required for the filter's operation.

CHAPTER 3

STRUCTURE OF DIGITAL FILTERS

A digital filter consists of three simple elements: adders, multipliers and delays. The adder and multiplier are conceptually simple components that are readily implemented in the arithmetic logic unit of a computer. Delays are components that allow access to past and future values in the sequence, and they come in two basic flavors: positive and negative. A positive delay, or simple delay, is implemented by a memory register that stores the current value of a sequence for one sample interval, making it available for future calculations. A negative delay, or advance, is used to look ahead to the next value in the sequence. Advances are typically used in applications, such as image processing, in which the entire data sequence is available at the start of processing, so that the advance serves to access the next data sample; the availability of the advance also simplifies the analysis of digital filters. A digital filter design involves selecting and interconnecting a finite number of these elements and determining the multiplier coefficient values.
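To make the three elements concrete, here is a minimal Python sketch (illustrative only; the project's own implementation is in VHDL and is not reproduced in this report) of a two-tap averaging filter built from exactly one delay register, two coefficient multipliers and one adder:

```python
import numpy as np

def moving_average_2tap(x):
    """Toy digital filter built from the three basic elements:
    one delay register, two coefficient multipliers, one adder."""
    delay = 0.0                             # positive (simple) delay: holds x[n-1]
    y = np.empty(len(x))
    for n, sample in enumerate(x):
        y[n] = 0.5 * sample + 0.5 * delay   # two multiplies, one add
        delay = sample                      # store current value for the next step
    return y
```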


3.1 ADVANTAGES OF DIGITAL FILTERS OVER ANALOG FILTERS

1. Digital filters are programmable: their design parameters can be easily modified and implemented through a computer.

2. Digital filters are unaffected by temperature, whereas the passive elements of analog filters are temperature-sensitive.

3. Digital filters can handle low-frequency signals, unlike analog filters.

4. Digital filters are more flexible and vastly superior in performance to analog filters.

CHAPTER 4

DIGITAL FILTER DESIGN

One design technique is to force the digital filter to behave very much like some reference analog filter. The analog filter is analyzed by standard techniques to obtain an analog representation in terms of a differential equation. When the input voltage is a sinusoidal, exponential, or step function of time, the equation can be solved analytically for the unknown voltage. When the input signal is a more complicated function of time, it is still possible to obtain an appropriate solution by numerical methods, and these numerical methods provide the basis of digital filtering. Digital filters thus allow digital signal processors to work in real time.


Digital filters use a digital processor to perform numerical calculations on sampled values of the signal; the processor may be a specialized digital signal processor chip. They are highly reliable and predictable, since they work with the binary states zero and one, and they are very flexible. The most basic design step is the selection of the filter type, based on parameters such as linear phase, efficiency and stability. The filter specifications then need to be stated, along with the number of coefficients involved, and the realization structure is analyzed.

4.1 TYPES OF DIGITAL FILTERS

Digital filters are classified as either recursive or nonrecursive. Depending on whether the filter is recursive or nonrecursive, it can be categorized as an Infinite or Finite Impulse Response filter. A nonrecursive filter, which has a finite-duration impulse response, is known as a Finite Impulse Response (FIR) filter. A recursive filter, with an infinite-duration impulse response, is known as an Infinite Impulse Response (IIR) filter.

4.1.1 FIR FILTER

A Finite Impulse Response (FIR) filter is one in which the impulse response h(n) is limited to a finite number of samples defined over the range (0, N-1). For an N-tap FIR filter with coefficients h(k), the output is described by

y(n) = h(0)x(n) + h(1)x(n-1) + h(2)x(n-2) + ... + h(N-1)x(n-N+1) ------- (4.1)

and the filter's z-transform is

H(z) = h(0) + h(1)z^-1 + h(2)z^-2 + ... + h(N-1)z^-(N-1) ------- (4.2)
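As an illustrative sketch (assuming nothing beyond equation (4.1); the coefficient values in a real design come from the filter specification), the N-tap FIR sum can be evaluated directly in Python:

```python
import numpy as np

def fir_filter(h, x):
    """Direct-form FIR filter implementing equation (4.1):
    y(n) = h(0)x(n) + h(1)x(n-1) + ... + h(N-1)x(n-N+1)."""
    N = len(h)
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(N):
            if n - k >= 0:               # taps before the start of x contribute zero
                y[n] += h[k] * x[n - k]
    return y

# For reference: equivalent to np.convolve(h, x)[:len(x)]
```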


4.1.2 IIR FILTER

An Infinite Impulse Response (IIR) filter is one in which the impulse response h(n) has an infinite number of samples.

4.2 ADVANTAGES OF FIR FILTERS

Digital FIR filters are characterized by the following advantages:

1. FIR filters are linear-phase filters.

2. They are simple to implement. On most DSP processors, the FIR calculation can be done by looping a single instruction.

3. They are well suited to multirate applications. By multirate we mean either decimation (reducing the sampling rate), interpolation (increasing the sampling rate), or both. Whether decimating or interpolating, the use of FIR filters allows some of the calculations to be omitted, providing an important computational efficiency (see the sketch after this list).

4. They have desirable numerical properties. In practice, all DSP filters must be implemented using finite-precision arithmetic, that is, a limited number of bits. Finite-precision arithmetic can cause problems in IIR filters because they use feedback; FIR filters have no feedback, so they can be implemented using fewer bits.
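The decimation saving mentioned in advantage 3 can be sketched as follows (an illustrative Python fragment, not from the project): only every Mth output sample is computed, so the multiply-accumulate work drops by roughly a factor of M.

```python
def fir_decimate(h, x, M):
    """Decimating FIR filter: computes y only at the kept output instants,
    skipping the work for the M-1 outputs that would be discarded."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(0, len(x), M)]
```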


CHAPTER 5

FILTER STRUCTURE

Several types of filter structure can be implemented for adaptive filters, either IIR or FIR. The FIR filter has only adjustable zeros, and hence it is free of the stability problems associated with adaptive IIR filters, which have adjustable poles as well as zeros. An adaptive IIR filter, with poles as well as zeros, can offer the same filter characteristics as an FIR filter at lower filter complexity (fewer coefficients). An adaptive FIR filter can be realized using transversal or lattice structures.

5.1 TRANSVERSAL STRUCTURE

The most common implementation of the adaptive filter is the transversal structure (tapped delay line). Figure 5.1 shows the structure of a transversal FIR filter with N tap weights, adjustable during the adaptation process. The filter output y(n) is

y(n) = W^T(n) U(n) = Σ_{i=0}^{N-1} w_i(n) u(n-i) ------- (5.1)

where U(n) = [u(n), u(n-1), ..., u(n-N+1)]^T is the input vector and W(n) = [w_0(n), w_1(n), ..., w_{N-1}(n)]^T is the weight vector.


[Figure 5.1 Transversal Structure: a tapped delay line holding u(n), u(n-1), ..., u(n-N+1), each tap weighted by w_0(n), w_1(n), ..., w_{N-1}(n) and summed to produce y(n).]

5.2 SYMMETRIC TRANSVERSAL STRUCTURE


A transversal filter whose impulse response (weight values) is symmetric about the center weight has a linear phase response. Linear phase is sometimes desirable because it allows a system to reject or shape energy bands of the spectrum while maintaining the basic pulse integrity with a constant filter group delay. Imaging and digital communications are examples where this characteristic is desirable. The adaptive symmetric transversal structure is shown in figure 5.2.

[Figure 5.2 Symmetric Transversal Structure: the delay-line taps u(n), ..., u(n-N+1) are added in symmetric pairs before being weighted by w_0(n), ..., w_{N/2-1}(n) and summed to produce y(n).]

An FIR filter with time-domain symmetry, such as


w_0(n) = w_{N-1}(n), w_1(n) = w_{N-2}(n), ... ------- (5.2)

has a linear phase response in the frequency domain. Consequently, the number of weights is reduced by half in a transversal structure, as shown in figure 5.2 for an even number N of tap weights. The tap-input vector becomes

u(n) = [u(n) + u(n-N+1), u(n-1) + u(n-N+2), ..., u(n-N/2+1) + u(n-N/2)]^T ------- (5.3)

and, as a result, the output y(n) becomes

y(n) = Σ_{i=0}^{N/2-1} w_i(n) [u(n-i) + u(n-N+i+1)] ------- (5.4)
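A short sketch of the folding in equations (5.3)-(5.4) (illustrative Python; w_half and u_taps are assumed to hold the half-length weight vector and the current tap contents): the inputs are pre-added pairwise, so only N/2 multiplications are needed per output.

```python
import numpy as np

def symmetric_fir(w_half, u_taps):
    """Symmetric (linear-phase) transversal filter, equation (5.4).
    w_half holds w_0 ... w_{N/2-1}; u_taps holds [u(n), ..., u(n-N+1)]."""
    N = 2 * len(w_half)
    folded = [u_taps[i] + u_taps[N - 1 - i] for i in range(N // 2)]  # eq (5.3)
    return float(np.dot(w_half, folded))   # N/2 multiplies instead of N
```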

5.3 LATTICE STRUCTURE

The lattice structure offers several advantages over the transversal structure:

1. The lattice structure has good numerical round-off characteristics, making it less sensitive than the transversal structure to round-off errors and parameter variations.

2. The lattice structure orthogonalizes the input signal stage by stage, which leads to faster convergence and efficient tracking when used in an adaptive environment.

3. The various stages are decoupled from each other, so it is relatively easy to increase the prediction order if required.

4. The lattice structure is order-recursive, which allows stages to be added to or deleted from the lattice without affecting the existing stages.


5. The lattice filter (predictor) can be interpreted as wave propagation in a stratified medium. This can represent an acoustical-tube model of the human vocal tract, which is extremely useful in digital signal processing.

The lattice filter has a modular structure with cascaded identical stages, as shown below.

[Figure 5.3 Lattice Structure: a cascade of identical stages 1 ... m driven by the input u(n), with forward signals f_0(n), f_1(n), ..., f_m(n) and backward signals b_0(n), b_1(n), ..., b_m(n).]

CHAPTER 6

SAMPLING THEORY

Sampling is of great importance in DSP: the transformation of a signal from analog to digital and from digital to analog is vital, and a high sampling rate provides a good frequency response. Sampling theory defines how a time-varying quantity can be captured at an instant of time; a signal is sampled at a set of points spaced at equal intervals of time. Sampling is the process by which a continuous-time signal is converted into a discrete-time signal. A sampling method is based on:

20

1. Establishing the rate of sampling.

2. Converting the analog signal to a digital one.

3. Converting the digital numbers back to an analog signal.

The Nyquist theorem, also known as the sampling theorem, states that in order to retain all the information in a signal of one-sided bandwidth B, the signal must be sampled at a rate greater than 2B.
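For example (an illustrative Python fragment; the 50 Hz tone and 1 kHz rate are assumed values, not from the project), sampling per equation (2.1) at a rate comfortably above the Nyquist rate:

```python
import numpy as np

fs = 1000.0                                # sampling rate; must exceed 2B
Ts = 1.0 / fs                              # sampling period
n = np.arange(100)
f0 = 50.0                                  # 50 Hz tone: one-sided bandwidth B = 50 Hz
x = np.sin(2 * np.pi * f0 * n * Ts)        # x(nTs) = x(t) at t = nTs, eq (2.1)
# fs = 1000 Hz > 2B = 100 Hz, so the tone is represented without aliasing
```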

CHAPTER 7

HISTORY OF ADAPTIVE FILTERS

Until the mid-1960s, telephone channel equalizers were fixed equalizers that caused a fixed performance degradation, or manually adjustable equalizers that were cumbersome to adjust. In 1965, Lucky introduced the zero-forcing algorithm for automatic adjustment of the equalizer weights. This algorithm minimizes a certain distortion measure, which has the effect of forcing the intersymbol interference to zero. This breakthrough inspired other researchers to investigate different aspects of the adaptive equalization problem, leading to new and improved solutions. The growing popularity of adaptive signal processing is primarily due to advances in digital technology that have increased computing capabilities and therefore broadened the scope of digital signal processing as a whole.


The field continues to expand due to continuing developments in very large scale integration (VLSI) and other digital hardware, which have made adaptive techniques feasible in real-time applications [1].

CHAPTER 8

INTRODUCTION TO ADAPTIVE FILTERS

A filter selects or controls the characteristics of the signal it produces by conditioning the incoming signal. In some cases, however, the required filter characteristic is unknown; we know only the output required from a test input signal. When a signal is distorted by transmission through a random medium, such as the atmosphere, or through an imperfect communication system, we need a filter that can deconvolve, or equalize, the distorted signal to recover the original, undistorted signal. Such a filter is termed an adaptive filter: it must adapt itself to the distortion present in the system. An example is an echo canceller, where the desired output cancels the echo signal (an output of zero when there is no other input signal). In this case the coefficients cannot be determined in advance, since they depend on changing line or transmission conditions; for applications such as this, it is necessary to rely on adaptive filtering techniques.

An adaptive filter is a filter that varies in time, adapting its coefficients according to some reference. Of course the term 'adaptive filter' is something of a misnomer, since by the strict definition filters must be time-invariant and thus cannot vary at all; we allow the usage because the filter coefficients vary much more slowly than the input signal. A central problem in signal processing is the estimation of some signal of interest from a set of received noisy data signals.


If the signal is deterministic with a known spectrum, and this spectrum does not overlap with that of the noise, then the signal can be recovered by conventional filtering techniques. However, this situation is rare. Instead, we are often faced with the problem of estimating an unknown random signal in the presence of noise. This is usually done so as to minimize the estimation error according to a certain criterion, which leads to the area of adaptive filtering. For example, when a telephone call is initiated, the transfer function of the telephone channel is unknown, and the connection may involve signal reflections that produce undesirable echoes. This can be corrected with a linear filter once the transfer function is measured; the measurement, however, can be made only after the system is running. The correction filter must therefore be designed by the system itself, in a reasonably short time, and this can be done using adaptive filters.

8.1 OPERATION

For designing the adaptive filter we use the Least Mean Square (LMS) algorithm, which has found many applications in noise cancellation, line enhancing, and so on. The design uses the conventional look-ahead technique with the serial LMS algorithm. There is also a relaxed look-ahead technique that uses the delayed LMS algorithm, but relaxed look-ahead does not maintain the input-output mapping and does not lead to a unique final architecture, and its convergence characteristics differ for different combinations. These disadvantages are overcome by the conventional look-ahead technique.


We designed a fourth-order adaptive filter. Designing an adaptive filter does not require any other frequency-response information or specification. To realize the self-learning process, the filter uses the adaptive algorithm to reduce the error between the output signal y(k) and the desired signal d(k); initially the weight values are set to zero. When the LMS performance criterion for e(k) has reached its minimum value through the iterations of the adaptive algorithm, the adaptive filter has finished its weight-update process and its coefficients have converged to a solution; the output then closely matches the desired signal d(k). When the input data characteristics, sometimes called the filter environment, change, the filter adapts by generating a new set of coefficients for the new data. Note that if e(k) goes to zero and remains there we have achieved perfect adaptation, which is unlikely in the real world [2].


[Figure 8.1 Architecture of the adaptive filter: the F-block filters the input u(n) with weights W1(n), ..., WN(n) to produce y(n); the error e(n) = d(n) - y(n) drives the weight-update (WUD) block.]


8.2 GENERAL FORM

An adaptive filter is a filter whose coefficients are updated by an adaptive algorithm to optimize the filter's response to a desired performance criterion.

[Figure 8.2 General form of adaptive filters: the adaptive filter produces y(n) from the input, the error e(n) = d(n) - y(n) is formed against the desired signal, and the adaptive algorithm uses it to adjust the filter coefficients.]

In general, an adaptive filter consists of two distinct parts: a filter, whose structure is designed to perform a desired processing function, and an adaptive algorithm, which adjusts the coefficients of that filter to improve its performance, as illustrated in figure 8.2. The incoming signal u(n) is weighted in a digital filter to produce an output y(n). The adaptive algorithm adjusts the weights in the filter to minimize the error e(n) between the filter output y(n) and the desired response d(n). Because of their robust performance in unknown and time-varying environments, adaptive filters have been widely used in applications from telecommunications to control.


An adaptive filter can eliminate phase errors that arise from distortion in the propagating medium or in the signal processing system. Suppose the distortion experienced by a test impulse u(t) = δ(t), which has uniform amplitude as a function of frequency, when it passes through a transmission medium is H(ω), where

H(ω) = A(ω) exp[-jθ(ω)] ------- (8.1)

and A(ω) is a real function. If a compensating matched filter is constructed with response H*(ω), the resultant output when a signal U(ω) is passed through the medium will be

Y(ω) = U(ω) H(ω) H*(ω) = A²(ω) U(ω) ------- (8.2)

All phase errors due to the transmission medium are removed by this process, and the output for a test impulse, after passing through the distorting medium and the matched filter, has no phase variation with frequency. The transform of this output is y(t), and the combined response of the transmission system and the equalizer is the autocorrelation function of h(t) convolved with u(t). Because the phase errors are removed, an output at t = 0 corresponding to the peak of the autocorrelation function of h(t) can be obtained from an impulse input [u(t) = δ(t)]. If H(ω)H*(ω) is constant in amplitude over a finite frequency range Δω, so that H(ω)H*(ω) = Π[(ω - ω₀)/Δω], the output for an impulse input will be a sinc function of time. If there are major peaks in the frequency spectrum, the time response will exhibit severe ringing; but


provided that the input spectrum is reasonably smooth (ideally, a Gaussian function), the time-domain response corresponding to H(ω)H*(ω) will be reasonably compact, with no severe ringing. Suppose that a reference impulse is sent through a distorting medium. If the distorted pulse h(t) is stored in a storage correlator, phase distortion caused by the medium can be removed. Any other signal u(t) sent along the same path into the correlator will then be correlated with the stored reference H*(ω), i.e. h(-t), and will have its phase distortion removed.
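A small frequency-domain sketch of equation (8.2) (illustrative Python; the random channel h and signal u are assumed, not from the report): multiplying by H(ω)H*(ω) leaves a purely real, non-negative spectral weighting, so no phase variation remains.

```python
import numpy as np

rng = np.random.default_rng(1)
h = rng.standard_normal(64)          # assumed channel impulse response
H = np.fft.fft(h, 256)
u = rng.standard_normal(256)         # signal sent through the medium
U = np.fft.fft(u)
Y = U * H * np.conj(H)               # eq (8.2): A^2(w) U(w); phase of H cancelled
y = np.fft.ifft(Y).real
# np.angle(H * np.conj(H)) is ~0 everywhere: the channel phase is removed
```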

8.3 CHARACTERISTICS

An adaptive system is one designed primarily for adaptive control and adaptive signal processing. Such a system usually has some or all of the following characteristics:

1. They can automatically adapt in the face of changing environments and changing system requirements.

2. They can be trained to perform specific filtering and decision-making tasks, i.e., they can be programmed by a training process. Because of this, adaptive systems do not require the elaborate synthesis procedures usually needed for non-adaptive systems; instead, they tend to be self-designing.

3. They can extrapolate a model of behaviour to deal with new situations after having been trained on a finite, often small, number of training signals or patterns.


4. To a limited extent they can repair themselves, i.e., adapt around certain kinds of internal defects.

5. They are more complex and difficult to analyze than non-adaptive systems, but they offer the possibility of substantially increased performance when the input signal characteristics are unknown or time-varying.

By an adaptive filter we mean a self-designing device with the following characteristics:

1. It contains a set of adjustable filter coefficients.

2. The coefficients are updated in accordance with an algorithm.

3. The algorithm operates with arbitrary initial conditions; each time new samples of the input signal and the desired response are received, appropriate corrections are made to the previous values of the filter coefficients.

4. Adaptation continues until the operating point of the filter on the error-performance surface moves close enough to the minimum point.

CHAPTER 9

ALGORITHMS

Two types of adaptive algorithm are discussed in this chapter:

1. The recursive least squares (RLS) algorithm

2. The least mean square (LMS) algorithm

The RLS algorithm provides faster convergence but has greater computational complexity. The LMS algorithm is based on a gradient-type search for tracking time-varying signal characteristics.

9.1 RLS ALGORITHM

In the recursive least squares algorithm, the order of operations is:

1. Compute the filter output.

2. Find the error signal.

3. Compute the Kalman gain vector.

4. Update the inverse of the correlation matrix.

5. Update the weights.
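One iteration of this recursion can be sketched as follows (illustrative Python; the forgetting factor lam is an assumed parameter, and the report gives no RLS implementation details):

```python
import numpy as np

def rls_step(w, P, u, d, lam=0.99):
    """One RLS iteration in the order listed above.
    lam is an assumed forgetting factor; P is the running estimate of the
    inverse correlation matrix (initialize, e.g., P = np.eye(N) * 1e3)."""
    y = w @ u                            # 1. compute the filter output
    e = d - y                            # 2. find the error signal
    k = (P @ u) / (lam + u @ P @ u)      # 3. compute the Kalman gain vector
    P = (P - np.outer(k, u @ P)) / lam   # 4. update the inverse correlation matrix
    w = w + k * e                        # 5. update the weights
    return w, P
```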

9.2 LMS ALGORITHM

The major advantage of the LMS algorithm is its simplicity, and hence it is widely used in many applications. The LMS algorithm is described by

W(n+1) = W(n) + μ e(n) U(n) ------- (9.1)

e(n) = d(n) - W^T(n) U(n) ------- (9.2)


where W(n) = [w_1(n), w_2(n), ..., w_N(n)]^T, U(n) = [u(n), u(n-1), ..., u(n-N+1)]^T, μ is the adaptation constant, d(n) is the desired output, and e(n) is the error obtained at each iteration. For smaller μ the correction to the filter weights is smaller at each sample and the LMS error falls more slowly; larger μ changes the weights more at each step, so the error falls more rapidly, but the resulting error does not approach the ideal solution as closely.

9.2.1 MSE CRITERION

The adaptation algorithm uses the error signal

e(n) = d(n) - y(n) ------- (9.3)

where d(n) is the desired signal and y(n) is the filter output. The input vector U(n) and the error e(n) are used to update the adaptive filter coefficients according to a criterion. The criterion employed in this section is the mean square error (MSE)

ξ = E[e²(n)] ------- (9.4)

We know that


y(n) = Σ_i w_i(n) u(n-i) = W^T(n) U(n) ------- (9.5)

Substituting equation 9.5 into equation 9.3, and taking the expectation as in 9.4, we obtain

ξ = E[d²(n)] + W^T(n) R W(n) - 2 W^T(n) P ------- (9.6)

where

R = E[U(n) U^T(n)] is the autocorrelation matrix and

P = E[d(n) U(n)] is the cross-correlation vector.

Solving

∂ξ/∂W(n) = 0 ------- (9.7)

yields the optimum solution w* = [w_0*, w_1*, ..., w_{N-1}*]^T, which minimizes the MSE. This leads to the normal equation

R w* = P ------- (9.8)

If R⁻¹ exists, i.e. R has full rank, the optimum weights are obtained by

w* = R⁻¹ P ------- (9.9)

Here the quantities R and P are estimated from a segment of data, and the optimal weights corresponding to each segment are computed; this procedure is called a block-by-block data-adaptive algorithm.
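A sketch of this block-by-block computation (illustrative NumPy; the segmenting and the sample estimators of R and P are one reasonable choice, not the report's):

```python
import numpy as np

def block_wiener(u, d, N):
    """Block estimate of the optimum weights w* = R^{-1} P from one
    data segment, per equations (9.8)-(9.9)."""
    X = np.array([u[n - N + 1:n + 1][::-1] for n in range(N - 1, len(u))])
    R = (X.T @ X) / len(X)              # estimated autocorrelation matrix
    P = (X.T @ d[N - 1:]) / len(X)      # estimated cross-correlation vector
    return np.linalg.solve(R, P)        # solves R w* = P
```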


9.2.2 LMS CRITERION

The LMS algorithm is a steepest-descent method in which the weights are updated on a sample-by-sample basis. Since it avoids the complicated computation of R⁻¹ and P, the algorithm is a practical method for finding close approximate solutions. In this algorithm, the next weight vector W(n+1) is changed in proportion to the negative gradient of the mean-square-error performance surface:

W(n+1) = W(n) - μ ∇(n) ------- (9.10)

where μ is the adaptation step size that controls the stability and convergence rate. For the LMS algorithm, the gradient at the nth iteration, ∇(n), is estimated by taking the squared error e²(n) as an estimate of the MSE in equation (9.4). The expression for the gradient estimate then simplifies to

∇(n) = ∂[e²(n)] / ∂W(n) ------- (9.11)

= -2 e(n) U(n) ------- (9.12)

Substituting this instantaneous gradient estimate into (9.10) yields the Widrow-Hoff LMS algorithm

W(n+1) = W(n) + 2μ e(n) U(n) ------- (9.13)

where 2μ is replaced by μ in practical implementations. μ should be selected such that

0 < μ < 1/λmax ------- (9.14)

where λmax is the largest eigenvalue of the matrix R, bounded by

λmax ≤ Σ λ_i = N r(0) ------- (9.15)

where r(0) = E[u²(n)] is the average input power.
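Equations (9.14)-(9.15) give a conservative, data-driven bound on the step size that is easy to compute, as in this illustrative fragment:

```python
import numpy as np

def lms_step_size_bound(u, N):
    """Conservative step-size bound from equations (9.14)-(9.15):
    lambda_max <= N*r(0), so any 0 < mu < 1/(N*r(0)) is safe."""
    r0 = np.mean(np.asarray(u) ** 2)    # average input power E[u^2(n)]
    return 1.0 / (N * r0)
```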


CHAPTER 10

ADAPTIVE FILTER OVERVIEW

Adaptive filters learn the statistics of their operating environment and continually adjust their parameters accordingly. This chapter presents the theory of the algorithms needed to train the filters.

10.1 INTRODUCTION

In practice, signals of interest often become contaminated by noise or by other signals occupying the same band of frequencies. When the signal of interest and the noise reside in separate frequency bands, conventional linear filters can extract the desired signal. However, when there is spectral overlap between the signal and the noise, or when the statistics of the signal or the interfering signals change with time, fixed-coefficient filters are inappropriate.


Figure 10.1 A strong narrowband interference N(f) in a wideband signal S(f)

This situation occurs frequently when various modulation technologies operate in the same range of frequencies. In fact, in mobile radio systems, co-channel interference is often the limiting factor rather than thermal or other noise sources. Interference may also be the result of intentional jamming, a scenario that regularly arises in military operations when competing sides intentionally broadcast signals to disrupt their enemies' communications. Furthermore, if the statistics of the noise are not known a priori, or change over time, the coefficients of the filter cannot be specified in advance. In these situations, adaptive algorithms are needed to continuously update the filter coefficients.

10.2 ADAPTIVE FILTERING PROBLEM

The goal of any filter is to extract useful information from noisy data. Whereas a normal fixed filter is designed in advance with knowledge of the statistics of both the signal and the unwanted noise, the adaptive filter continuously adjusts itself to a changing environment through the use of recursive algorithms. This is useful when the statistics of the signals are either not known beforehand or change with time.


Figure 10.2 Block diagram for the adaptive filter problem.

The discrete adaptive filter accepts an input u(n) and produces an output y(n) by convolution with the filter's weights w(k). A desired reference signal d(n) is compared with the output to obtain an estimation error e(n), which is used to incrementally adjust the filter's weights for the next time instant. Several algorithms exist for the weight adjustment, such as the least-mean-square (LMS) and recursive least-squares (RLS) algorithms. The choice of training algorithm depends upon the required convergence time and the available computational resources, as well as on the statistics of the operating environment.

10.3 APPLICATIONS

Because of their ability to perform well in unknown environments and to track statistical time variations, adaptive filters have been employed in a wide range of fields. There are essentially four basic classes of applications: identification, inverse modeling, prediction, and interference cancellation, the main difference between them being the manner in which the desired response is extracted. These are presented in figures 10.3 a, b, c, and d, respectively. The adjustable parameters that depend on the application at hand are the number of filter taps, the choice of FIR or IIR, the choice of training algorithm, and the learning rate; beyond these, the underlying architecture required for realization is independent of the application. This thesis therefore focuses on one particular application, namely noise cancellation, as it is the most likely to require an embedded VLSI implementation: adaptive noise cancellation is sometimes needed in communication systems such as handheld radios and satellite systems contained on a single silicon chip, where real-time processing is required. Doing this efficiently is important, because adaptive equalizers are a major component of receivers in modern communication systems and can account for up to 90% of the total gate count.


Figure 10.3a Identification

Figure 10.3b Inverse modeling

Figure 10.3c Prediction

Figure 10.3d Interference cancellation


10.4 ADAPTIVE ALGORITHMS

There are numerous methods for performing the weight update of an adaptive filter. There is the Wiener filter, which is the optimum linear filter in terms of mean squared error, and several algorithms that attempt to approximate it, such as the method of steepest descent. There is the least-mean-square algorithm, developed by Widrow and Hoff originally for use in artificial neural networks. Finally, there are other techniques such as the recursive least-squares algorithm and the Kalman filter. The choice of algorithm is highly dependent on the signals of interest and the operating environment, as well as on the required convergence time and the available computational power.

10.4.1 WIENER FILTERS

The Wiener filter, named after its inventor, was developed in 1949. It is the optimum linear filter in the sense that its output is as close to the desired signal as possible. Although not often implemented in practice because of its computational complexity, the Wiener filter is studied as a frame of reference for the linear filtering of stochastic signals, against which other algorithms can be compared. To formulate the Wiener filter and other adaptive algorithms, the mean squared error (MSE) is used. If the input signal to a filter with M taps is

U(n) = [u(n), u(n-1), ..., u(n-M+1)]^T ------- (10.1)

and the coefficient or weight vector is

w = [w(0), w(1), ..., w(M-1)]^T ------- (10.2)

then the square of the output error can be formulated as


e_n² = d_n² - 2 d_n u_n^T w + w^T u_n u_n^T w ------- (10.3)

The mean square error, J, is obtained by taking expectations of both sides:

J = E[e_n²] = E[d_n²] - 2 E[d_n u_n^T] w + w^T E[u_n u_n^T] w ------- (10.4)

= σ² - 2 p^T w + w^T R w ------- (10.5)

Here σ² is the variance of the desired output, p is the cross-correlation vector and R is the autocorrelation matrix of u. A plot of the MSE against the weights is a non-negative bowl-shaped surface, with the minimum point corresponding to the optimal weights. This is referred to as the error-performance surface, whose gradient is

∇ = dJ/dw = -2p + 2Rw ------- (10.6)


Figure 10.4 Example cross section of an error-performance surface for a two-tap filter.

Determining the optimal Wiener filter for a given signal requires solving the Wiener-Hopf equations. First, let R denote the M-by-M correlation matrix of u:

R = E[u(n) u^H(n)] ------- (10.7)

where the superscript H denotes the Hermitian transpose. In expanded form, R is the M-by-M matrix of autocorrelation lags of the tap inputs.


Also, let p represent the cross-correlation vector between the tap inputs and the desired response d(n):

p = E[u(n) d*(n)] ------- (10.8)

which in expanded form is

p = [p(0), p(-1), ..., p(1-M)]^T ------- (10.9)

Since the lags in the definition of p are either zero or negative, the Wiener-Hopf equation may be written in compact matrix form:

R w_O = p ------- (10.10)

where w_O stands for the M-by-1 optimum tap-weight vector of the transversal filter; that is, the optimum filter coefficients are

w_O = [w_O0, w_O1, ..., w_O,M-1]^T ------- (10.11)

This produces the optimum output in terms of mean square error; however, if the signal's statistics change with time, then the Wiener-Hopf equation must be recalculated. This would require calculating two matrices, inverting one of them, and then multiplying them together, a computation that cannot feasibly be performed in real time, so other algorithms that approximate the Wiener filter must be used.


10.4.2 METHOD OF STEEPEST DESCENT

With the error-performance surface defined previously, one can use the method of steepest descent to converge to the optimal filter weights for a given problem. Since the gradient of a surface (or hypersurface) points in the direction of maximum increase, the direction opposite the gradient (-∇) points towards the minimum point of the surface. One can adaptively reach the minimum by updating the weights at each time step using the equation

w_{n+1} = w_n + μ(-∇_n) ------- (10.12)

where the constant μ is the step-size parameter, which determines how fast the algorithm converges to the optimal weights. A necessary and sufficient condition for the convergence, or stability, of the steepest-descent algorithm is for μ to satisfy

0 < μ < 2/λmax ------- (10.13)

where λmax is the largest eigenvalue of the correlation matrix R. Although it is still less complex than solving the Wiener-Hopf equation directly, the method of steepest descent is rarely used in practice because of the computation required: calculating the gradient at each time step involves calculating p and R, whereas the least-mean-square algorithm performs similarly with far fewer calculations.
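For reference, a sketch of the iteration (10.12) using the true R and p (illustrative Python; in practice these exact statistics are precisely what is unavailable, which is the point made above):

```python
import numpy as np

def steepest_descent(R, p, mu, iters=500):
    """Method of steepest descent, eq (10.12), given the true R and p."""
    w = np.zeros(len(p))
    for _ in range(iters):
        grad = -2 * p + 2 * R @ w       # gradient of the error surface, eq (10.6)
        w = w - mu * grad               # step against the gradient
    return w   # converges to the Wiener solution if 0 < mu < 2/lambda_max
```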


10.4.3 LEAST-MEAN-SQUARE ALGORITHM

The least-mean-square (LMS) algorithm is similar to the method of steepest descent in that it adapts the weights by iteratively approaching the MSE minimum; Widrow and Hoff invented the technique in 1960 for use in training neural networks. The key idea is that instead of calculating the exact gradient at every time step, the LMS algorithm uses a rough approximation to it. The error at the output of the filter can be expressed as

e_n = d_n - w_n^T u_n ------- (10.14)

which is simply the desired output minus the actual filter output. Using this definition for the error, the gradient is approximated by the instantaneous estimate

∇_n ≈ -2 e_n u_n

Substituting this expression for the gradient into the weight-update equation from the method of steepest descent gives

w_{n+1} = w_n + 2μ e_n u_n ------- (10.15)


which is the Widrow-Hoff LMS algorithm. As with the steepest-descent algorithm, it can be shown to converge for values of μ less than the reciprocal of λmax; but λmax may be time-varying, and to avoid computing it another criterion can be used:

0 < μ < 2/(M Smax)

where M is the number of filter taps and Smax is the maximum value of the power spectral density of the tap inputs u. The relatively good performance of the LMS algorithm, given its simplicity, has made it the most widely implemented algorithm in practice. For an N-tap filter, the number of operations is reduced to 2N multiplications and N additions per coefficient update. This is suitable for real-time applications, and is the reason for the popularity of the LMS algorithm [4].

CHAPTER 11

CONCLUSIONS

This project presents the development of an algorithm, architecture, and implementation for speech processing using FPGAs. The VHDL code developed is RTL-compliant and synthesizes with Xilinx tools. An adaptive filter varies in time, adapting its coefficients, here according to the LMS algorithm. We are often faced with the problem of estimating an unknown random signal in the presence of noise; this is usually done so as to minimize the estimation error according to a certain criterion, which leads to the area of adaptive filtering.


CHAPTER 12

SYNTHESIS REPORT

12.1 LMS REPORT

=============================================================
Final Report
=============================================================
Final Results
RTL Top Level Output File Name : lmsvhd.ngr
Top Level Output File Name     : lmsvhd
Output Format                  : NGC
Optimization Goal              : Speed
Keep Hierarchy                 : NO

Design Statistics
# IOs                          : 1

Cell Usage:

=============================================================
Device utilization summary:
---------------------------
Selected Device : 2s15tq144-5

TIMING REPORT NOTE: THESE TIMING NUMBERS ARE ONLY A SYNTHESIS ESTIMATE. FOR ACCURATE TIMING INFORMATION PLEASE REFER TO THE TRACE REPORT GENERATED AFTER PLACE-and-ROUTE.


Clock Information:
------------------
No clock signals found in this design

Timing Summary:
---------------
Speed Grade : -5

Minimum period: No path found Minimum input arrival time before clock: No path found Maximum output required time after clock: No path found Maximum combinational path delay: No path found

Timing Detail: -------------All values displayed in nanoseconds (ns)

CPU: 56.64 / 57.50 s | Elapsed: 56.00 / 57.00 s
--> Total memory usage is 115620 kilobytes

12.2 FLOATING POINT ADDER

=============================================================
Final Report
=============================================================
Final Results
RTL Top Level Output File Name : floatadd.ngr
Top Level Output File Name     : floatadd
Output Format                  : NGC
Optimization Goal              : Speed
Keep Hierarchy                 : NO

Design Statistics
# IOs                          : 96

Macro Statistics:
# Multiplexers                 : 46


# 2-to-1 multiplexer           : 46
# Logic shifters               : 46
# 23-bit shifter logical left  : 23
# 24-bit shifter logical right : 23
# Adders/Subtractors           : 49
# 24-bit adder carry out       : 1
# 26-bit subtractor            : 1
# 8-bit adder                  : 1
# 8-bit subtractor             : 46
# Comparators                  : 29
# 23-bit comparator equal      : 1
# 23-bit comparator less       : 1
# 8-bit comparator equal       : 1
# 8-bit comparator greater     : 25
# 8-bit comparator less        : 1

Cell Usage:
# BELS                         : 2995
# GND                          : 1
# LUT1                         : 8
# LUT2                         : 85
# LUT3                         : 652
# LUT4                         : 1356
# MUXCY                        : 431
# MUXF5                        : 41
# VCC                          : 1
# XORCY                        : 420
# IO Buffers                   : 96
# IBUF                         : 64
# OBUF                         : 32

=============================================================
Device utilization summary:
---------------------------
Selected Device        : 2s15tq144-5
Number of Slices       : 1305 out of 192   679% (*)
Number of 4 input LUTs : 2101 out of 384   547% (*)
Number of bonded IOBs  :   96 out of  90   106% (*)

CPU: 80.20 / 81.20 s | Elapsed: 80.00 / 81.00 s


--> Total memory usage is 110500 kilobytes

Figure 12.1 Synthesized FLOATADD architecture


12.3 FLOATING POINT MULTIPLIER

=============================================================
Final Report
=============================================================
Final Results
RTL Top Level Output File Name : floatmult.ngr
Top Level Output File Name     : floatmult
Output Format                  : NGC
Optimization Goal              : Speed
Keep Hierarchy                 : NO

Design Statistics
# IOs                          : 96

Macro Statistics:
# Multiplexers                 : 23
# 2-to-1 multiplexer           : 23
# Logic shifters               : 23
# 23-bit shifter logical left  : 23
# Adders/Subtractors           : 26
# 8-bit adder                  : 2
# 8-bit subtractor             : 24
# Multipliers                  : 1
# 24x24-bit multiplier         : 1


Cell Usage:
# BELS                         : 3490
# GND                          : 1
# LUT1                         : 55
# LUT2                         : 322
# LUT3                         : 464
# LUT4                         : 712
# MULT_AND                     : 276
# MUXCY                        : 775
# MUXF5                        : 117
# VCC                          : 1
# XORCY                        : 767
# IO Buffers                   : 96
# IBUF                         : 64
# OBUF                         : 32

Device utilization summary:
Selected Device        : 2s15tq144-5
Number of Slices       :  867 out of 192   451% (*)
Number of 4 input LUTs : 1553 out of 384   404% (*)
Number of bonded IOBs  :   96 out of  90   106% (*)

============================================================= CPU: 35.66 / 36.51 s | Elapsed: 35.00 / 36.00 s --> Total memory usage is 79780 kilobytes


Figure 12.2 Synthesized FLOATMULT architecture

CHAPTER 13

REFERENCES

1. S. Haykin, Adaptive Filter Theory, Fourth Edition, Pearson Education Asia, 2002.

2. K. K. Parhi, VLSI Digital Signal Processing Systems: Design and Implementation, John Wiley & Sons, 1999.

3. S. Palnitkar, Verilog HDL: A Guide to Digital Design and Synthesis, Prentice Hall.

4. N. Shirazi, A. Walters, and P. Athanas, "Quantitative Analysis of Floating Point Arithmetic on FPGA Based Custom Computing Machines".
