Frequency based optimal design of FIR zero-phase filters and compensators for robust repetitive control

Conference Paper · January 2006


AAS 05-263

FREQUENCY BASED OPTIMAL DESIGN OF FIR ZERO-PHASE FILTERS AND COMPENSATORS FOR ROBUST REPETITIVE CONTROL

Benjamas Panomruttanarug* and Richard W. Longman†

Spacecraft often have control moment gyros or reaction wheels, and any slight
imbalance impairs the performance of fine pointing equipment. Repetitive
control systems can learn to cancel the resulting vibrations at the location of fine
pointing equipment on board, by using the error in previous periods. In a
previous paper, a very effective and easy to use frequency based optimization
method was developed to design the compensators needed to stabilize the
learning process based on a system model. This paper uses similar methods, applying frequency based optimization to design zero-phase low-pass filters that robustify the design to unmodeled modes, or parasitic poles. The two methods are then integrated to produce a practical and effective approach to creating robust repetitive control systems.

INTRODUCTION

Many control applications have disturbances that are periodic. In spacecraft, small imbalances in
reaction wheels, control moment gyros, or cryogenic pumps produce vibrations that disturb fine pointing
equipment. Active control methods can be used to try to isolate the fine pointing instrument from the
vibrations, but typical feedback control will not do a perfect job, even in theory. On the other hand, repetitive control (RC) has the potential to completely cancel the effects of such disturbances at the location of a fine pointing instrument for disturbances of a known period [1-4]. RC learns to cancel the disturbance effects by using data from previous periods to update the command given to a control system. This paper
builds on previous work [5] that developed particularly effective methods to design the compensators
needed in RC, and develops analogous methods to design the real-time zero-phase low-pass filters needed
to robustify the RC to residual modes. The results are then integrated into an effective and practical method
to design RC systems.

The simplest form of RC can be described as follows. If the position error at a given time step in
the last period is 2 units too low, then add 2 units, or a repetitive control gain times 2 units, to the command
at the appropriate time in the current period of the disturbance. Reference [5] develops a particularly
attractive method of designing repetitive control laws using optimization in the frequency domain. The
optimization aims to minimize the magnitude of a frequency transfer function of the error from one period
to the next. This is a heuristic measure of the decay rate of the error from one period to the next for each
frequency component. The designer can separately adjust some parameters to influence the learning speed,
or rate of convergence, for each frequency. Satisfying the condition guarantees asymptotic stability.

Note that the ability to design the RC directly from the experimental frequency response
information is a very important characteristic of the approach. Because RC aims to converge to zero
tracking error at all frequencies, model accuracy is important at all frequencies in the design. Small
amplitude high frequency modes that are missing from the model will most likely destabilize the RC
process in applications. There are nearly always extra poles, or extra high frequency dynamics that one has
not modeled, such as an extra vibration mode that one could not see in the data. One extra unmodeled pole represents an extra 90º of phase lag at high frequency. This is normally sufficient to make the RC system
unstable. Hence, in order to make repetitive control work in practical applications, one needs to develop
some method of producing robustness to singular perturbations. One can do this with a zero-phase low-pass
filter, and the more ideal the filter, the better the RC system performance [3,6-12]. But an important
constraint for RC is that the filter must function in real time.

The same methods used in [5] to design the compensator are extended here to design simple FIR
zero-phase low-pass filters for this application, to robustify the design to unmodeled modes, or parasitic
poles. Then the compensator and low pass filter design methods are integrated to produce a practical and
effective approach to creating robust repetitive control systems.

MATHEMATICAL FORMULATION AND STABILITY OF REPETITIVE CONTROL SYSTEMS

Figure 1 shows the structure of a typical repetitive control system. The G(z ) represents the
transfer function of a closed loop feedback control system. The d (k ) is a periodic output disturbance of
period p time steps, and the repetitive control system aims to eliminate its influence on the output. In
practice, the periodic disturbance is most likely to enter between the actuator and the plant within the
feedback control system, but a periodic disturbance at this location can be converted to an equivalent
periodic disturbance on the output. The command coming into the repetitive controller is either constant or
of period p , and control system errors in tracking the command are also to be eliminated by repetitive
control. The left hand block in the diagram represents the repetitive controller with the zero-phase low-pass
filter H (z ) . The simplest form of repetitive control described in words above, sets the control compensator
F ( z ) to a constant times a one step ahead shift, and H ( z ) to one in the case of no filtering. It adjusts the
command u (k ) to the feedback control system according to

u(k) = u(k − p) + φ e(k − p + 1)    (1)

Here, φ is a repetitive control gain, and e represents the measured error, the desired output minus the
measured output. The one time step added to the argument of e compensates for the delay of the system
between the time step when an input is changed to the time step it first influences the output. This is
assumed to be one step. Running this equation recursively makes a sum of all errors observed in the past
for the current phase of the periodic disturbance, and hence it is a discrete time equivalent of an integral. In
practice, in order to get well behaved learning transients, it is important to introduce a compensator, F ( z ) ,
operating on the error signal, so that the z -domain repetitive control law takes the form

U(z) = z^{-p} [U(z) + F(z) E(z)] ;   U(z) = [F(z)/(z^p − 1)] E(z)    (2)
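To make the recursion concrete, here is a minimal simulation of the basic law (1) on an assumed first-order plant with one step of delay and a sinusoidal period-p output disturbance; all numbers are illustrative choices, not from the paper:

```python
import numpy as np

# Assumed plant with one step of delay: y(k+1) = 0.9*y(k) + 0.1*u(k),
# plus a period-p sinusoidal output disturbance (illustrative values only).
p, phi, steps = 20, 1.0, 40 * 20            # period, RC gain, 40 periods
d = 0.5 * np.sin(2 * np.pi * np.arange(steps + 1) / p)  # periodic disturbance
u = np.zeros(steps + 1)
y = np.zeros(steps + 1)
e = np.zeros(steps + 1)                     # error = desired output (0) - measured
for k in range(steps):
    e[k] = 0.0 - (y[k] + d[k])
    if k >= p:                              # RC law (1): add last period's error
        u[k] = u[k - p] + phi * e[k - p + 1]
    y[k + 1] = 0.9 * y[k] + 0.1 * u[k]

# error in the final period is a small fraction of the error in the first
print(np.abs(e[:p]).max(), np.abs(e[-p:]).max())
```

Running this shows the error amplitude shrinking period by period as the integral-like sum accumulates.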

For comparison, in the transform domain (1) becomes U(z) = [φ z/(z^p − 1)] E(z), which is (2) with F(z) = φ z. Previous work develops particularly effective methods of designing F(z) to both create asymptotic stability of the RC system and produce good learning transients. From the block diagram in Fig. 1 with H(z) = 1, one can find a difference equation for the error E(z) in terms of the desired output Yc(z) and the disturbance D(z)

{z^p − [1 − G(z)F(z)]} E(z) = (z^p − 1) [Yc(z) − D(z)]    (3)

The right-hand side represents a forcing function, and the left-hand side represents a difference equation
whose characteristic polynomial is given in curly brackets. The difference on the right is zero since D(z)
and Yc(z) are periodic with p time steps. Therefore, the error satisfies a homogeneous equation, and goes to zero provided all roots of the characteristic polynomial are inside the unit circle. One can rewrite the homogeneous difference equation as
homogeneous difference equation as

z^p E(z) = [1 − G(z)F(z)] E(z)    (4)

Using approximate quasi steady state thinking, one can interpret the left hand side as the error in the next
period which is written as the transfer function in square brackets times the error in the current period.
Substituting z = exp(iωT ) into the transfer function makes it into a frequency transfer function, and the
magnitude of this transfer function for each frequency is the change in amplitude of any error component
having that frequency, going from one period to the next. Asking that this magnitude be less than one for all
frequencies from zero to Nyquist, would then suggest monotonic decay of the amplitudes of all error
components, and hence it suggests stability and convergence to zero error. Thus, one can design to satisfy

|1 − G(e^{iωT}) F(e^{iωT})| < 1    (5)

for all ω up to Nyquist frequency. The above argument is heuristic, but one can prove that if this condition
is satisfied, then the repetitive control system is asymptotically stable [3]. Furthermore, one can show that
because p is normally a large number, the difference between this sufficient condition for stability and the
necessary and sufficient condition for stability is very small, and of no importance in practical applications
[13].
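The condition can be checked numerically on a frequency grid. A sketch, using an assumed first-order plant G(z) = 0.1/(z − 0.9) and the basic compensator F(z) = φz with φ = 1 (illustrative values, not the paper's model):

```python
import numpy as np

# Numerical check of the sufficient condition (5) from zero to Nyquist,
# for an assumed plant G(z) = 0.1/(z - 0.9) and compensator F(z) = phi*z.
T = 1.0
w = np.linspace(0.0, np.pi / T, 1000)       # zero to Nyquist
z = np.exp(1j * w * T)
G = 0.1 / (z - 0.9)
F = 1.0 * z                                 # phi = 1
decay = np.abs(1 - G * F)                   # per-period amplitude factor, eq (4)
print(decay.max())                          # below 1 at every frequency
```

For this plant the worst case occurs at Nyquist, where the factor is 0.9·2/1.9 ≈ 0.95, so (5) holds and the RC loop is asymptotically stable.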

Based on this thinking, [5] develops a repetitive control design method to create the compensator
F (z) . The compensator is chosen as an FIR filter containing a chosen number of gains (5 and 12 gains are
used in examples below) which multiply the measured error at time steps around one period back, going
both forward and backward from one period back in time. An optimization function is defined as the square of the left-hand side of (5), summed over evenly spaced discrete frequencies from zero to Nyquist (e.g., one can use 100 values from ωT equal to zero to π). The gains of the compensator are picked to
minimize this objective function, which simply requires computing the solution of a linear set of equations,
with the number of equations equal to the number of gains, which can be quite small. If the resulting gains
make (5) satisfied, then one has a compensator that stabilizes the repetitive control process. Numerical
examples in [5] and below, show that one can make the left hand side very small for all frequencies,
corresponding to very fast decay of the error from one period to the next, according to the quasi steady state
thinking for equation (4). The design method is easy to use and very effective.
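A sketch of this least-squares design step, again using the assumed first-order plant G(z) = 0.1/(z − 0.9); for this plant an exact two-tap inverse exists, so the 5-gain fit is essentially perfect, while realistic models need more gains (5 and 12 are used in the text):

```python
import numpy as np

# Method-1 style design sketch: choose FIR gains so F(e^{iwT}) approximates
# the inverse of G(e^{iwT}), by least squares over a frequency grid.
T = 1.0
w = np.linspace(0.0, np.pi / T, 200)
z = np.exp(1j * w * T)
G = 0.1 / (z - 0.9)                         # assumed plant, not the paper's model

shifts = np.arange(-2, 3)                   # 5 gains centered one period back
A = G[:, None] * z[:, None] ** (shifts[None, :] + 1)  # +1 for one step of delay
Ar = np.vstack([A.real, A.imag])            # real least squares on 1 - G*F ~ 0
br = np.concatenate([np.ones_like(w), np.zeros_like(w)])
fgains, *_ = np.linalg.lstsq(Ar, br, rcond=None)

Fresp = (z[:, None] ** (shifts[None, :] + 1)) @ fgains
print(np.abs(1 - G * Fresp).max())          # left-hand side of (5), tiny here
```

The solve is just an ordinary linear least-squares problem whose size equals the number of gains, as noted above.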

The difficulty in application is that repetitive control asks for zero error not just at low frequency,
but at the fundamental frequency associated with the addressed period, and also all harmonics up to
Nyquist frequency. This requires that one’s model be accurate all the way to Nyquist frequency which is a
very unusual requirement in control system design. More precisely, it must be accurate enough that (5) is
satisfied to Nyquist with the G (z) being the real world behavior, when the compensator F (z) is designed
based on our model of G (z) instead of the real world G (z) . Models normally miss some high frequency
dynamics, sometimes called residual modes or parasitic poles. If our model is missing one vibration mode,
then it is likely to be missing roughly a 180 degree phase lag at high frequency. If it is missing one real
pole, then at high frequency it can be missing roughly 90 degrees phase lag. Either one can be enough to
make (5) violated. Hence, repetitive control designs are not robust to missing high frequency dynamics.

It is the purpose of this paper to address this problem by creating design methods, similar to those
used for the compensator F (z) above, that design a zero-phase low-pass filter H (z) that will cut off the
learning signal at high frequencies. This filter can be applied to the command generated by the repetitive
controller, before it is applied to the system, as in the first of the following equations. It is also possible to
apply it only to the change in the command generated by the repetitive control updates without filtering the
initial command of the desired trajectory. The following equations rewrite equations (2) through (5) to
include the low pass filter influence

U(z) = z^{-p} H(z) [U(z) + F(z) E(z)] ;   U(z) = [F(z)H(z)/(z^p − H(z))] E(z)    (6)

{z^p − H(z)[1 − G(z)F(z)]} E(z) = (z^p − H(z)) [Yc(z) − D(z)]    (7)

z^p E(z) = H(z) [1 − G(z)F(z)] E(z)    (8)

|H(e^{iωT}) [1 − G(e^{iωT}) F(e^{iωT})]| < 1    (9)

One would like an ideal filter so that H (z) equals one below the cutoff with no phase change and no
amplitude change, and it is zero above the cutoff frequency ω c . Then the right hand side of (7) is zero for
all frequency components before the cutoff, and it simply lets the frequency components above the cutoff
come through unaltered. Of course, by using a frequency cutoff we cannot hope to converge to zero error,
and what we converge to is the particular solution associated with the non zero forcing function on the right
hand side of the equation. Stability is of course a property of the homogeneous equation, and equation (8) still represents a quasi steady state approximation predicting the decay of the error from period to period on its way to convergence to the particular solution. Equation (9) has the same status as before, being a
sufficient condition for stability [14], and also being very close to the stability boundary for practical
problems.

One picks the cutoff frequency to be low enough that equation (5) is satisfied with the real world
G (z) for all frequencies up to the cutoff, and then the filter attenuation above the cutoff makes (9) satisfied
because of the filter action. Note that we cannot pick this cutoff in the design stage, since we do not know
what poles or modes are missing in our model. If we knew, we would have used the information in the
design of F (z) . Hence, one has to adjust the cutoff when the RC design for F (z) is implemented in
hardware without a cutoff, and one observes the frequencies involved in the error growth.

CHOICE OF LOW PASS FILTER STRUCTURE

References [6] and [7] make use of zero-phase low-pass filtering in iterative learning control, using a Butterworth IIR filter. One runs the filter forward in time through the data from the last repetition, reverses the time sequence of the output, filters it again, and then reverses the time sequence again. The reverse time pass cancels the phase lag of the forward time pass, and doubles the attenuation above the cutoff. The
approach can be very effective. The fact that it is an IIR filter means that there are initial condition issues to
address [8]. References [9] and [10] consider ways in which one can make use of essentially the same
filtering in the repetitive control problem. Reference [10] distributes batch update computations within the
real time computations of the repetitive control, and makes an update of the repetitive control signal when
ready. Reference [9] finds ways to implement the filter in real time instead of using a batch update.
However, an FIR filter is much easier to implement in a real time repetitive control process than an IIR
filter. It simply requires taking an appropriate linear combination of the errors in the previous period.
References [11] and [12] use triangular window FIR zero phase filters of low order. The usual disadvantage
of an FIR filter is that it often requires a large number of gains in order to get the desired response
properties, and we investigate the use of higher order FIR filters here. In order to get zero phase, a type I
linear-phase FIR filter (symmetric impulse response with odd length) of order N , for some even N , is
used. The filter is defined as

H(z) = Σ_{k=−M}^{M} h(k) z^{−k}    (10)

where M = N/2, and h(k) satisfies the symmetry condition h(k) = h(−k). Note that this property creates zero phase in the passband, but one can have 180 degree phase shifts in the stopband. This is not of concern in designing H(z), but it can be destabilizing when used in F(z). As a result of this symmetry, H(e^{iωT}) is real, equal to the amplitude response Ĥ(ω), which can be written as

Ĥ(ω) = Σ_{k=0}^{M} h̄_k cos(ωkT)    (11)

where h̄_k is given by

h̄_k = 2h(k) for 1 ≤ k ≤ M ;   h̄_0 = h(0)    (12)
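The zero-phase property of the symmetric form (10)-(12) is easy to verify numerically; the coefficient values below are an arbitrary illustrative choice:

```python
import numpy as np

# Check that a symmetric FIR filter h(k) = h(-k) has a purely real (zero
# phase) frequency response matching the cosine form of eqs (11)-(12).
T, M = 1.0, 3
h = np.array([0.5, 0.3, 0.15, 0.05])        # h(0), h(1), h(2), h(3), made up
w = np.linspace(0.0, np.pi / T, 50)

# direct evaluation of H(e^{iwT}) = sum_{k=-M}^{M} h(k) e^{-iwkT}
H = h[0] + sum(h[k] * (np.exp(-1j * w * k * T) + np.exp(1j * w * k * T))
               for k in range(1, M + 1))

# cosine form (11)-(12): Hhat(w) = h(0) + sum_{k=1}^{M} 2 h(k) cos(wkT)
Hhat = h[0] + sum(2 * h[k] * np.cos(w * k * T) for k in range(1, M + 1))

print(np.abs(H.imag).max(), np.abs(H.real - Hhat).max())  # both at round-off
```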

There are many ways to pick the gains of the filter, optimizing them in different objectives. Table 1 from
[15] lists some filter designs together with various properties (see also e.g. [16]). The repetitive control
problem addressed here differs somewhat from the more routine applications of zero-phase low-pass filters.
For example:

(1) The main objective in repetitive control is to eliminate errors at a fundamental frequency, and at
all harmonics of this frequency. Hence, one is most interested in filter performance at a known
discrete set of frequencies. Perhaps the filter design can be made with this in mind.
(2) Taking data on the feedback control system’s performance in the actual disturbance environment,
tells one what the frequency content of the error is. In most cases, the most important frequency is
the fundamental, and next in importance are the next few harmonics. This means that the accuracy
of the passband of the filter is most important for these frequencies.
(3) The inevitable amplification of error in the stopband is at higher frequency where the errors are
likely to be much smaller, and hence one might be willing to suffer more amplification here in
exchange for better performance at chosen frequencies within the passband.

Based on these considerations, we develop filter design methods specialized for the repetitive control
objective.

OPTIMAL FILTER DESIGN FORMULATION

To design the filter, we create an optimality criterion. The ideal filter objective is to have the filter
have zero phase and gain of unity below the cutoff frequency, and to have gain of zero above the cutoff
frequency. Pick N_H discrete frequencies ω_j between zero and Nyquist, with the frequency having subscript j_c being the last frequency before the cutoff, and write the objective function as a quadratic
function of the errors between the actual filter performance and the ideal filter performance at these
frequencies

J = Σ_{j=0}^{j_c} [1 − H(e^{iω_j T})] W_j [1 − H(e^{iω_j T})]* + Σ_{j=j_c+1}^{N_H−1} [H(e^{iω_j T})] V_j [H(e^{iω_j T})]*    (13)

The superscript * indicates the conjugate operation, and T is the sample time interval. Weights W_j and V_j allow one to pick different weights for individual frequencies in the passband and stopband respectively. Often we consider the same weight for all passband frequencies and call it simply w, and the
same weight for all frequencies in the stopband and call it v. This objective function is a quadratic function
of the FIR weights to be chosen. Differentiating with respect to the h (k ) and setting the derivatives to zero,
produces the following M +1 linear equations to solve for the filter coefficients

( Σ_{j=0}^{j_c} W_j A_j + Σ_{j=j_c+1}^{N_H−1} V_j A_j ) x = Σ_{j=0}^{j_c} W_j b_j    (14)

      ⎡ 1             2cos(ω_j T)               …   2cos(Mω_j T)             ⎤
A_j = ⎢ cos(ω_j T)    2cos²(ω_j T)              …   2cos(ω_j T)cos(Mω_j T)   ⎥
      ⎢ ⋮             ⋮                              ⋮                       ⎥
      ⎣ cos(Mω_j T)   2cos(ω_j T)cos(Mω_j T)    …   2cos²(Mω_j T)            ⎦

x = [ h(0)  h(1)  …  h(M) ]^T ;   b_j = [ 1  cos(ω_j T)  …  cos(Mω_j T) ]^T

This forms a linear set of equations to be solved for the M + 1 unknowns in the zero-phase filter gains.
There are two ways in which we can use these equations:

(1) One can use a large N_H so that the cost function (13) approaches representing the error at all
frequencies between zero and Nyquist, and one aims to pick the much smaller number of gains to
get good performance in general, possibly using the weighting factors to emphasize certain
regions.
(2) Another option is to adjust N_H and the number of gains so that one can exactly satisfy the equations at all frequencies considered in (13), i.e. the optimization of (13) produces zero cost. This is equivalent to simply writing the equations that ask for zero error at each of the frequencies, and solving this set of equations. The number of frequencies that can be fit exactly, call it N_E, is equal to

N_E = M + 1    (15)
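A sketch of the design computation for a 51-gain, 30 Hz cutoff, 200 Hz sample rate case; solving the normal equations (14) is equivalent to the weighted least-squares fit of the amplitude response (11) performed here:

```python
import numpy as np

# Weighted design of a 51-gain zero-phase low-pass filter: minimize (13)
# over N_H = 100 evenly spaced frequencies, cutoff 30 Hz, Nyquist 100 Hz.
fs, fc, M, NH = 200.0, 30.0, 25, 100        # 2M + 1 = 51 gains
T = 1.0 / fs
fj = np.linspace(0.0, fs / 2, NH)           # design frequencies, zero to Nyquist
wj = 2 * np.pi * fj
target = (fj <= fc).astype(float)           # ideal filter: 1 passband, 0 stopband
weight = np.ones(NH)                        # W_j = V_j = 1 (uniform weights)

k = np.arange(M + 1)
C = np.cos(np.outer(wj, k) * T)             # cos(w_j k T)
C[:, 1:] *= 2.0                             # row j is [1, 2cos, ..., 2cos(M.)], eq (11)
sw = np.sqrt(weight)
x, *_ = np.linalg.lstsq(C * sw[:, None], target * sw, rcond=None)

Hhat = C @ x                                # achieved amplitude response at the f_j
print(np.abs(Hhat[fj <= fc] - 1).max(), np.abs(Hhat[fj > fc]).max())
```

Raising the stopband entries of `weight` relative to the passband entries reproduces the trade-off explored with the w and v weights below.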

FILTER DESIGN EVALUATION

Example Minimum Square Error Filter Design: Figure 2 shows the result of minimizing equation (13) with N_H = 100 and all weights set to unity, to design a 51 gain FIR zero-phase low-pass filter with a cutoff at 30 Hz and a Nyquist frequency of 100 Hz. Having 51 gains is quite reasonable for a real time computation in most applications. The design is reasonably good. Below the cutoff frequency, at the worst case frequency the factor [z^p − H(z)] eliminates 90% of the forcing function on the right of difference equation (7). Above the cutoff, the amplification of the forcing function is roughly by a factor of less than 1.1 in the worst case. These comments refer to the size of the forcing function in equation (7). What is more important is the size of the
associated particular solution, and this is discussed later with respect to the sensitivity transfer function.
Figure 3 shows the impulse response for the filter in Fig. 2 compared with the middle portions of the
desired ideal impulse response. The impulse response H d (k) of the ideal filter is given by

H_d(k) = (T/2π) ∫_{−π/T}^{π/T} H_d(e^{iωT}) e^{iωkT} dω = (T/2π) ∫_{−ω_c}^{ω_c} e^{iωkT} dω = (ω_c T/π) sinc(ω_c T k/π)    (16)

This filter is not implementable since its impulse response is infinite and noncausal. A usable FIR filter can
be obtained by truncating, retaining the central section of the ideal impulse response. It is seen that the
response of the filter in Fig. 2 lies on top of the ideal frequency response filter, up to the truncation
associated with using only 51 gains. In other words, the uniform weighted filter becomes the rectangular
frequency response filter. For illustration purposes, consider that a sample rate of 200 Hz is used, creating a
100 Hz Nyquist frequency, and one is interested in repetitive control to eliminate frequencies of period one
second, i.e. 1 Hz and every integer above 1 Hz up to Nyquist. Figure 4 shows the error in the filter of Fig. 2 when compared to the ideal low pass filter response, for each of these discrete frequencies, 0 Hz, 1 Hz, …, 99 Hz. The amplitude of error is large in the transition band since the gain cannot abruptly change from unity
to zero over an arbitrarily small frequency range. Figure 5 studies what kind of improvement might be
possible by using more terms in the FIR filter. The maximum deviations from ideal filter behavior below
and above the cutoff, are not much affected, but the worst case errors are pushed closer to the cutoff
frequency.
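The truncation approach described above can be sketched directly: keep the central 51 taps of (16) and inspect the resulting response, for the 200 Hz sample rate and 30 Hz cutoff used in the text:

```python
import numpy as np

# Truncated ideal impulse response (16): central 2M + 1 = 51 taps of the
# sinc sequence, 200 Hz sampling, 30 Hz cutoff.
fs, fc, M = 200.0, 30.0, 25
T = 1.0 / fs
wc = 2 * np.pi * fc * T                     # cutoff in radians per sample
k = np.arange(-M, M + 1)
h = (wc / np.pi) * np.sinc(wc * k / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)

f = np.linspace(0.0, fs / 2, 400)           # evaluation grid, zero to Nyquist
w = 2 * np.pi * f * T
H = (np.exp(-1j * np.outer(w, k)) @ h).real # zero-phase response is real

print(abs(H[0] - 1), np.abs(H[f > 45]).max())  # ~1 at DC, small deep in stopband
```

The ripple visible near the cutoff is the usual Gibbs behavior of a truncated rectangular frequency response.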

Benefit of Adjusting the Objective Function Weights: The cost function (13) allows different weights for
the passband and for the stopband. Figure 6 shows what happens to the above design when the passband
weight is decreased and the stopband weight is increased. One can get close to perfect response in the
stopband, but at a price of poor performance in the passband. On the other hand, Fig. 7 shows the results of
using a much bigger weight on the passband than the stopband. The performance in the passband is quite
good, with no ripples being apparent. The amplification in the stopband is worse, but not dramatically so.
Analogous to Fig. 4, Fig. 8 presents the errors at discrete integer frequencies from zero to 99 Hz. The
amplification in the transition region is worse, because of a somewhat slower decay of the filter at cutoff.
And in this region the forcing function on the right of difference equation (7) can be doubled. Nevertheless,
for repetitive control applications where the major errors to correct are the fundamental and the first few
harmonics, and higher harmonics have small error, this filter could be very attractive. It gets particularly
good performance for the fundamental and all harmonics up to the cutoff.

Filters Designed for Perfect Performance at Chosen Frequencies: According to (15), using 51 gains
allows us to get zero error at any 26 chosen frequencies. Figure 9 creates such a design, picking the
frequencies to start at 0 and progress every 4 Hz up through 28 Hz, and then to start again at 31 Hz and
progress every 4 Hz up to 99 Hz. As indicated in the figure by the circles, the plot does get zero error at the
chosen frequencies. Figure 10 shows the errors at every integer Hz as in Figs. 8 and 4. Of course the plot is
zero at the 26 frequencies, but the errors at other integer frequencies are generally worse than in Fig. 4. Since
the repetitive control problem is mainly interested in the discrete fundamental frequency and the harmonics,
it is natural to ask, is it possible to create a filter that is perfect at all of these frequencies? The answer is
yes. Consider again the case of a 1 Hz fundamental, a sample rate of 200 Hz, so that the frequencies at
which we want perfect performance are 0Hz, 1 Hz, 2 Hz, and every 1 Hz up through 99 Hz. Using equation
(15) indicates that the length of the filter must be 199 . This is composed of 99 gains for past points, one
gain for the present point (one period back), and 99 gains multiplying future points (future to one period
back). Since the period is 200 time steps, the most recent error data needed is just over one half period back
in the data set. This is a general property. To this we need to include some more recent data points because
of the compensator design, which can require for example the next 6 more recent measurements.
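A small-scale sketch of the exact-fit idea of (15): with M + 1 unknown gains h(0..M), the amplitude response (11) can be forced through the ideal values at exactly N_E = M + 1 chosen frequencies. The 8-frequency subset below (200 Hz sample rate assumed) is an arbitrary illustration, not the designs of Figs. 9 or 11:

```python
import numpy as np

# Exact-fit zero-phase filter design: square linear system in h(0..M).
T = 1.0 / 200.0
f_fit = np.array([0.0, 8.0, 16.0, 24.0, 36.0, 52.0, 68.0, 84.0])  # Hz, made up
target = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])       # 30 Hz cutoff
M = len(f_fit) - 1                          # N_E = M + 1 frequencies

k = np.arange(M + 1)
C = np.cos(2 * np.pi * np.outer(f_fit, k) * T)
C[:, 1:] *= 2.0                             # amplitude-response rows, eq (11)
x = np.linalg.solve(C, target)              # square system: zero fitting error

print(np.abs(C @ x - target).max())         # ~ 0 at the chosen frequencies
```

Scaling this up to 100 harmonic frequencies with M = 99 gives the 199-gain perfect filter described above.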

The conclusion is that one can design a zero-phase FIR low-pass filter with perfect performance at the
fundamental and all harmonics up to Nyquist, and it will always be causal in the repetitive control
application, provided the compensator does not use more than half a period of data for its forward points.
One can have a perfect filter for all addressed frequencies up to Nyquist. The limitation is the number of
gains that can be handled in each time step of the real time computation. Figure 11 gives an example of this
perfect filter design. The circles indicate the desired performance at the 100 addressed frequencies, and the
dashed line indicates the filter performance that is perfect at these frequencies, and has some peaks at
intermediate frequencies. Figure 12 presents the computed error at all 100 frequencies. The power on the vertical axis scale is 10^{-14}.

Comparison to Kaiser Filter: Consider one of the standard filters mentioned in Table 1, the Kaiser filter,
and compare it to the filters we have generated. A disadvantage of this filter is that one cannot specify the
number of gains a priori, which one might want to do to ensure that the filter design can be computed in
real time. One specifies a passband cutoff and a stopband cutoff (28.2 and 30.5 Hz are used here) with the gap between them being the transition region, and one specifies the passband ripple and the stopband ripple (0.2
used here for both). With these specified values, the number of gains came out to be 51, matching that in Fig. 2. The Kaiser filter and that in Fig. 2 are very similar. Figure 14 gives the errors at all integer frequencies, to be compared to Fig. 4. Also shown in Fig. 13 are circles at the 26 specified frequencies in Fig. 9, and of course there is no way to specify zero error at specified frequencies with a Kaiser filter, and hence there is larger error at these frequencies.

Evaluation of Effectiveness of Adjusting the Perfect Fit Frequencies: In the event that one cannot handle enough gains to get zero error at all addressed frequencies as in Fig. 11, perhaps one can use a smaller number of gains and ask for zero error at a subset of the desired frequencies. Figure 15 picks more frequencies near
zero and the fundamental (indicated by the circles), and more frequencies just after the cutoff, and robs
from some other frequencies to do this. The approach obviously is not effective. Figure 16 puts in extra
points near zero and the fundamental, and then aims to be evenly spaced above this region. This approach
is more effective, but the performance in the stopband is not good.

COMPENSATOR DESIGN METHODS

There are many approaches to designing compensators for repetitive controllers. Many choices aim to cancel the dynamics of the system. If this can be accomplished, then the left hand side of (5) will be zero, and the repetitive control system will be stable. However, there are difficulties with this, because most physical systems written in discrete time have zeros outside the unit circle, and hence when inverted are unstable and cannot be used as a compensator. For a sufficiently fast sample rate, any continuous time transfer function having at least three more poles than zeros, when fed by a zero order hold, will have zeros outside the unit circle in discrete time. We list three design methods, with the first being the preferred one.

Method 1: As discussed above, the approach from [5] does not try to invert the system dynamics, but
instead tries to make an FIR filter F (z ) whose frequency response is the inverse of the frequency response
of G (z ) . The gains in the FIR filter are optimized using a cost function like that in (13), to minimize the
weighted square of the right hand side of (5) summed over a chosen set of frequencies between zero and
Nyquist. Note that this approach does not require that one have a pole-zero model of the system, and can be
made directly from experimental frequency response data.

Method 2: The approach of [4] cancels all poles and zeros that are inside the unit circle. For any zero outside, a zero at the reciprocal location is introduced into the compensator, and a pole is placed at the origin. Then the DC gain is adjusted to be unity. This creates a compensator with the property that everything has been cancelled that can be cancelled, and any zero outside the unit circle has its phase, but not its magnitude response, cancelled by the compensator. This produces a combination of an FIR and an IIR compensator. Asymptotically as the sample time tends to zero, there is already a zero inside the unit circle at the reciprocal location to each zero that appears outside. Hence, one may not need to cancel the zero inside with a pole, and the compensator can be FIR. This FIR approach is used in the computations described below.

Method 3: A third alternative, cancels all poles and zeros inside the unit circle, and then designs a
compensator using the approach from [5] in Method 1 above, to design an FIR filter that aims to cancel the
frequency response of just the uncancelable zeros. This approximately cancels both the phase and the
amplitude, and results in a combination of FIR and IIR filters.

These methods are investigated using a third order system that is a relatively good model of the
input-output response of the feedback control systems for each link of a Robotics Research Corporation
robot

G(s) = ( a/(s + a) ) ( ω_o² / (s² + 2ζ ω_o s + ω_o²) )    (17)

where the first order term corresponds to a break frequency of 1.4 Hz, the undamped natural frequency is
5.9 Hz, and the damping ratio is 0.5. It is assumed that this continuous time transfer function is fed by a
zero order hold sampling at 200 Hz to create the discrete time z-transfer function G(z). Since the pole excess is three, there are two zeros introduced by the conversion to discrete time, one outside the unit circle
and one inside, located at approximately -3.7 and its reciprocal when the sample time gets small. Figure 17
iω T iω T
shows the plot of the resulting G (e j ) F (e j ) using Method 1, and allowing only 5 gains in the FIR
compensator. Even with this small number of gains, the results are rather good. If one increases the number
of gains in the compensator to 12, the plot differs from unity by a maximum of 0.015. Hence, it is
possible to get an accurate inverse of the frequency response with a small number of gains. Figure 18
shows the resulting pole-zero locations for G(z)F(z). Figures 19 and 20 give the corresponding plots for
Method 2, and Figs. 21 and 22 give the corresponding figures for Method 3.
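The flavor of the Method 1 design can be sketched with a small least-squares fit (an illustration under simplifying assumptions, not the authors' code): choose the gains of a short noncausal FIR filter F so that G F is as close to unity as possible over a frequency grid. As a stand-in for the ZOH-discretized plant, the continuous time response of (17) is used here, and the fit is restricted to a low frequency band, so the numbers differ from the paper's.

```python
import numpy as np

# Third order model of eq. (17): 1.4 Hz break frequency,
# 5.9 Hz natural frequency, damping ratio 0.5.
a = 2 * np.pi * 1.4
w0 = 2 * np.pi * 5.9
zeta = 0.5

def G(s):
    return (a / (s + a)) * (w0**2 / (s**2 + 2 * zeta * w0 * s + w0**2))

T = 1.0 / 200.0                       # 200 Hz sample rate
freqs = np.linspace(0.5, 30.0, 400)   # Hz: fit below an intended cutoff
w = 2 * np.pi * freqs
Gw = G(1j * w)                        # stand-in for G(e^{i w T})

# Noncausal FIR F(z) = sum_{k=-r}^{r} f_k z^k (noncausal taps are
# implementable in repetitive control thanks to the period delay).
# Choose real gains f_k so that G*F is as close to 1 as possible,
# stacking real and imaginary parts into one real least squares fit.
r = 6
k = np.arange(-r, r + 1)
A = Gw[:, None] * np.exp(1j * np.outer(w * T, k))
M = np.vstack([A.real, A.imag])
b = np.concatenate([np.ones_like(w), np.zeros_like(w)])
f, *_ = np.linalg.lstsq(M, b, rcond=None)

err = np.abs(A @ f - 1.0)
print(f"13 gains, max |G F - 1| on the fitted band: {err.max():.2e}")
```

Even this crude version drives the product close to unity on the fitted band with about a dozen gains, echoing the paper's observation that 12 gains bring the compensated response within 0.015 of unity.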

COMBINING THE LOW PASS FILTER AND THE COMPENSATOR

When one has both the compensator and the low pass filter, the z-transform version of the repetitive
control computation is the left equation in (6), which computes the control update with the compensator
and then does the zero-phase low-pass filtering on the result. An alternative is to rewrite the equation in the
form

U(z) = z^{-p} H(z) U(z) + z^{-p} [H(z) F(z)] E(z)    (18)

There are some advantages to this form.

(1) The filter F(z) models the inverse of the steady state frequency response of the feedback control
system G(z), and therefore it gets very large at high frequencies. The product with the low pass
filter H(z) produces a combined filter that is not extreme at high frequencies.
(2) The gains in filter F(z) can be very large, and they tend to alternate from large positive to large
negative from one gain to the next. For the above system (17), the maximum gain is 4580.1; this
gain multiplies a measured error at one time step and is then added to a similarly large negative
gain times the measured error at the next time step. The largest gain in the 51 gain filter of Fig. 2 is
0.2973, and the largest gain in the product filter H(z)F(z) is 45.151. Thus, the conditioning of the
computation is better when the product filter is used.
(3) The number of real time computations needed appears to be less using form (18) with the a priori
computed gains of the product filter. With 51 gains for the low pass filter and 12 gains for the
compensator, the number of multiplies is 113 and the number of additions is 112 for each time
step.
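The gains of the product filter H(z)F(z) are simply the convolution of the two gain sequences, and the operation counts quoted above can be verified directly (a sketch with placeholder gain values, since the actual designed gains are not reproduced here):

```python
import numpy as np

h = np.ones(51)   # placeholder gains for the 51 gain low pass filter
f = np.ones(12)   # placeholder gains for the 12 gain compensator

# Gains of the product filter H(z)F(z): polynomial multiplication,
# i.e. convolution of the two tap sequences.
hf = np.convolve(h, f)
print(len(hf))                     # 51 + 12 - 1 = 62 taps

# Per time step, form (18) needs one pass of H(z) over past inputs
# (51 multiplies) plus one pass of H(z)F(z) over past errors
# (62 multiplies): 113 multiplies and 112 additions in total.
multiplies = len(h) + len(hf)
additions = (len(h) - 1) + (len(hf) - 1) + 1   # +1 to sum the two terms
print(multiplies, additions)
```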

Figure 23 uses Method 1 and the 51 gain filter in Fig. 2, and displays how small the design makes the left hand
side of (9), guaranteeing stability and fast learning. Below the cutoff, the left hand side is small because the
left hand side of (5) is made small by the compensator design. Above the cutoff it is small in addition because
the zero phase filter has cut off. Figure 24 gives the corresponding plot for Method 2. Because this
method does not invert the magnitude response associated with the zero outside the unit circle,
the left hand side of (5) is not kept as small. The ripples after cutoff are also substantially larger, but the
compensator in this design has 4 gains compared to the 12 gains in the Method 1 design. On the other hand,
Method 2 has no provision for increasing the number of gains, while in Method 1 one can choose to use
more gains and obtain still better performance.

SENSITIVITY TRANSFER FUNCTION AND THE WATERBED EFFECT

The previous discussions emphasize the performance of the zero phase filter design at the
fundamental frequency and all harmonics of the period addressed by the repetitive controller design. It is
the main task of a repetitive controller design to do a good job of handling these frequencies. Of course,
there may be other frequencies in the disturbance that do not happen to be of this period, and it is of interest
to understand how the repetitive controller handles these frequencies. The waterbed effect is an important
limitation of real time feedback control systems, saying that if the influence of disturbances is attenuated
for some frequencies by the repetitive controller, then there are other frequencies for which disturbances, if
present, would be amplified. It is of interest to determine how the introduction of a low pass filter behaves
with respect to the waterbed effect. Does it create any significant peaks in the frequency response?

Sensitivity Transfer Function for the Compensator Design Methods: First consider the response of the
repetitive control system to disturbances of all frequencies, before introducing the low pass filter. The error
in response to all frequencies is obtained by getting the transfer function from disturbance to error from
equation (3), called the sensitivity transfer function. Figure 25 plots the magnitude of the frequency
response for Method 1. Note that the optimization in Method 1 automatically picks the DC gain, which for
example (17) is slightly less than unity; for purposes of comparison, it is set to unity in this figure. The
plot is zero at every addressed frequency, and reaches 2 at some frequency between each addressed integer
frequency. If the DC gain were not set to unity, the peaks would be slightly below 2. Figure 26 gives the
corresponding plot for Method 2. The amplification is by a factor of 2 at DC, just as in Method 1, but this
method learns more slowly at high frequency, which results in less amplification there. Figure 27 goes back
to Method 1 and adjusts the DC gain to 0.3 to obtain a plot whose amplification at all frequencies equals
the low amplification of Method 2 at high frequency.
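The zero-at-harmonics, peak-of-2-between pattern in Fig. 25 follows from the factor 1 - z^{-p} that the sensitivity transfer function reduces to in the limiting case of perfect compensation. A small numpy check of |1 - e^{-ip\omega T}| illustrates that limiting case (this is not an evaluation of equation (3) itself, which is not reproduced here):

```python
import numpy as np

p = 200          # period in time steps
T = 1.0 / 200.0  # 200 Hz sample rate -> 1 Hz fundamental

def S(f_hz):
    """|1 - e^{-i p w T}|: the sensitivity magnitude in the
    limiting case of perfect compensation."""
    w = 2 * np.pi * f_hz
    return np.abs(1.0 - np.exp(-1j * p * w * T))

# Zero at every addressed frequency (integer Hz harmonics)...
print(S(1.0), S(10.0))
# ...and a peak of exactly 2 halfway between addressed frequencies.
print(S(0.5), S(10.5))
```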

Sensitivity Transfer Function after the Zero Phase Filter is Introduced: Figure 28 shows what happens if
the low pass filter of Fig. 2 is introduced into the repetitive control law, so that the sensitivity transfer
function is determined from equation (7). It is clear that no serious amplification appears; the main effect
is to stop both amplification and attenuation of the input signal above the cutoff. The ripple imperfections of the
low pass filter in Fig. 2 come through clearly as ripple both below and above the cutoff in the sensitivity
transfer function plot. The sensitivity transfer function contains knowledge of the addressed period in its
z^{-p} terms. Without a zero phase filter, the sensitivity transfer function goes to zero at the frequencies of
the addressed period. When there is a zero phase filter, any imperfection in the filter magnitude
response at these frequencies below cutoff is reflected in a nonzero error at these frequencies. Figure
29 shows only the frequencies at integer values of Hz, which are the addressed frequencies in this problem.

One can try to do better by using the low pass filter design of Fig. 7, which modifies the weights used in the
cost function (13) to emphasize accuracy in the passband. Figure 30 shows how well this design
minimizes the size of the left hand side of (9). Figure 31 gives the plot of the sensitivity transfer function
for only the addressed frequencies of period p time steps. For most repetitive control applications, where the
errors of period p above the cutoff are quite small compared to the main errors below cutoff, this
design would be very effective. The performance of the filter at all addressed frequencies below the
cutoff appears perfect to graphical accuracy.

If one is able to compute the 199 gain filter of Fig. 11 in real time, then one can have perfect performance
at the addressed frequencies. Figure 32 shows the size of the left hand side of (9), and Fig. 33 gives the
sensitivity transfer function plot, which is perfect. This corresponds to the best possible performance; it is
causal, so that it can be implemented, and it requires using a filter with a number of gains approximately
equal to the number of time steps in a period. The actual computations needed in real time include applying
this filter to the first term of (18) and applying the combined filter H(z)F(z), which has a slightly larger
number of gains, to the error term; the results must then be added together to get the new command to the
feedback controller. It is a rare situation in control theory when perfect filter performance is actually possible
in the real world, and it is possible in the repetitive control problem.

CREATING ROBUSTNESS TO RESIDUAL MODES

The purpose of the zero-phase low-pass filter is to make repetitive control robust to model errors
at high frequencies. As pointed out earlier, one must adjust the cutoff of the filter after observing the
behavior of the hardware without the filter, turning the cutoff frequency down until any instability
disappears. We illustrate this process by considering that the repetitive controller is designed based on the
third order model in equation (17), but the real world behaves like a fifth order model. This model is
obtained from (17) by multiplying by another second order term with DC gain of unity, an undamped
natural frequency corresponding to 20 Hz, and a damping ratio of 0.1. The compensator is designed using
Method 1 with 12 gains. There is no disturbance. The desired trajectory is considered to have period
p = 200 time steps, with a 200 Hz sample rate, and is given by cos(0.1πk). This corresponds to a 10
Hz signal, the 10th harmonic of the fundamental of the chosen period.
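The frequency bookkeeping behind that statement is quickly checked (a sketch, with the sample period written out explicitly):

```python
import math

T = 1.0 / 200.0          # sample period for a 200 Hz rate
p = 200                  # period of the repetitive controller

# cos(0.1*pi*k) has digital frequency 0.1*pi rad/sample, i.e.
# f = 0.1*pi / (2*pi*T) Hz in continuous time.
f = 0.1 * math.pi / (2.0 * math.pi * T)
fundamental = 1.0 / (p * T)   # 1 Hz fundamental of the period

print(f, f / fundamental)     # 10 Hz, the 10th harmonic
```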

Figure 34 gives the resulting RMS error for each period when this repetitive controller is used, and
it is clearly unstable. Taking the error history for the third period and finding its frequency content produces
Fig. 35, indicating that the growing error is around 20 Hz, so one must design a low pass filter that cuts off
before this frequency. Figure 36 plots the left hand side of equation (5), which shows that one must cut off
somewhat below 20 Hz. One could simply introduce a zero-phase low-pass filter and keep reducing the
cutoff frequency until stable behavior is established; one can use the experimental result in Fig. 35 to start
the adjustment of the cutoff frequency; or one could multiply an experimental frequency response plot of the
hardware by the frequency response of the desired filter to make the plot in Fig. 36 from data, and use it to
make the cutoff frequency decision. In any case, one finally adjusts the cutoff downward until the model is
sufficiently accurate to satisfy equation (5) below the cutoff.
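The need to cut off below 20 Hz can be seen from the size of the unmodeled factor alone. If the compensator exactly inverted the nominal third order model, the left hand side of (5) would reduce to |1 - G_r(i\omega)|, where G_r is the extra resonant term; a small numpy sketch of that quantity (using the continuous time response as an approximation of the discretized one):

```python
import numpy as np

# Unmodeled second order factor: 20 Hz natural frequency,
# damping ratio 0.1, unity DC gain.
wn = 2 * np.pi * 20.0
zeta = 0.1

def Gr(w):
    s = 1j * w
    return wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)

def mismatch(f_hz):
    """|1 - Gr(iw)|: what remains of the left hand side of (5)
    when the nominal model is inverted perfectly."""
    return abs(1.0 - Gr(2 * np.pi * f_hz))

# Far below the resonance the model error is small, but near
# 20 Hz it far exceeds unity, violating the stability condition
# and forcing the cutoff below this frequency.
print(mismatch(1.0), mismatch(20.0))
```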

Figure 37 presents two different zero-phase low-pass filter designs that could be used. The first is
the 51 gain design corresponding to Fig. 2 but using a cutoff at 16 Hz. The second uses a 51 gain filter as in
Fig. 7, but with the cutoff adjusted to 11 Hz, which is perhaps a little lower than necessary. The cutoff
does have to be lower for this filter than for the design analogous to Fig. 2, because the
transition region from passband to stopband is wider for the filter of Fig. 7 than for the filter of Fig. 2.

Figures 38, 39, and 40 show results using the equal weight low pass filter design. Figure 38 is the
left hand side of (9). Of course, it is no longer small, and the cutoff was used to prevent it from exceeding
unity and violating the stability condition (9). It is clear that the cutoff did what it was intended to do:
stabilize the system. Figure 39 gives the sensitivity transfer function showing all frequencies. The
mismatch between the model used and the real world has resulted in substantial amplification of
unaddressed frequencies approaching the cutoff frequency. But Fig. 40 shows that for the addressed
frequencies the behavior is reasonably good. Finally, Fig. 41 compares the RMS error histories using both
low-pass filters. The filter with the much better response in the passband, the one with the
weights used in Fig. 7, converges to a significantly lower error level than the filter from Fig. 2. Hence, the
good behavior of the Fig. 7 filter below cutoff is reflected in better final error levels.

CONCLUSIONS

In a previous work, a particularly effective and easy to use method was developed to design
compensators to stabilize repetitive controllers using a system model. Before such a design method can be
practical in applications, it is necessary that one have a method to robustify the repetitive control system to
high frequency unmodeled dynamics. This can be accomplished by using a zero-phase low-pass filter to cut
off the learning in the frequency range for which there is significant model mismatch. The objectives for
repetitive control are somewhat specialized compared to the objectives in other applications of low pass
filter design, and as a result we develop methods of designing the low pass filters for the repetitive control
applications. Based on the frequency content of the error of a feedback control system in executing the
desired command, one knows where the majority of the error occurs in the spectrum, and one can design
accordingly. Accuracy is most important in the passband, and is especially important for the fundamental
and the first few harmonics of the period addressed by the repetitive controller. Errors above the cutoff are
likely to be less important since the errors at high frequency are likely to be small and one can tolerate
amplifying them somewhat. The weighted filter design of Fig. 7 is particularly attractive, since the error in
the passband is quite small, and error in the stopband is still reasonable. Small error in the passband
translates directly into reduced final error levels. The repetitive controller can converge to zero filtered
error in this region, and this only translates into zero error if the filter is perfect. This filter design should be
very useful for many repetitive control applications. When one has the ability to do substantial computing
in real time, it is shown that one can design an FIR filter that has perfect performance at the frequencies
being addressed. When one has the real time computing power, this gives the best possible low pass filter.
It is very unusual in control theory to be able to have perfect performance, but in this case it is theoretically
possible at the addressed frequencies.

REFERENCES

1. K. L. Moore and J.-X. Xu, Guest Editors, Special Issue on Iterative Learning Control, International
Journal of Control, Vol. 73, No. 10, July 2000.
2. Z. Bien and J.-X. Xu, Editors, Iterative Learning Control: Analysis, Design, Integration and
Applications, Kluwer Academic Publishers, Boston, 1998.
3. R. W. Longman, “Iterative Learning Control and Repetitive Control for Engineering Practice,”
International Journal of Control, Special Issue on Iterative Learning Control, Vol. 73, No. 10, July
2000, pp. 930-954.
4. M. Tomizuka, T.-C. Tsao, and K. K. Chew, “Analysis and Synthesis of Discrete Time Repetitive
Controllers,” Journal of Dynamic Systems, Measurement, and Control, Vol. 111, 1989, pp. 353-358.
5. B. Panomruttanarug and R. W. Longman, “Repetitive Controller Design Using Optimization in the
Frequency Domain,” AIAA/AAS Astrodynamics Specialist Conference and Exhibit, Providence, Rhode
Island, Aug. 16-19, 2004.
6. H. Elci, M. Phan, R. W. Longman, J.-N. Juang, and R. Ugoletti, “Experiments in the Use of Learning
Control for Maximum Precision Robot Trajectory Tracking,” Proceedings of the 1994 Conference on
Information Science and Systems, Department of Electrical Engineering, Princeton, NJ, 1994, pp. 951-
958.
7. H. Elci, R. W. Longman, M. Phan, J.-N. Juang, and R. Ugoletti, “Discrete Frequency Based Learning
Control for Precision Motion Control,” Proceedings of the 1994 IEEE International Conference on
Systems, Man, and Cybernetics, San Antonio, TX, Oct. 1994, pp. 2767-2773.
8. A. M. Plotnik and R. W. Longman, “Subtleties in the Use of Zero-Phase Low-Pass Filtering and Cliff
Filtering in Learning Control,” Advances in the Astronautical Sciences, Vol. 103, 1999, pp. 673-692.
9. S. J. Oh and R. W. Longman, “Methods of Real-Time Zero-Phase Low-Pass Filtering for Robust
Repetitive Control,” Proceedings of the AIAA/AAS Astrodynamics Specialist Conference, Monterey,
CA, August 2002.
10. Y.-P. Hsin, R. W. Longman, E. J. Solcz, and J. de Jong, “Experimental Comparisons of Four
Repetitive Control Algorithms,” Proceedings of the 31st Annual Conference on Information Sciences
and Systems, Johns Hopkins University, Department of Electrical and Computer Engineering,
Baltimore, Maryland, 1997, pp. 854-860.
11. K. Chew and M. Tomizuka, “Digital Control of Repetitive Errors in Disk-Drive Systems,” IEEE
Control Systems Magazine, Vol. 10, pp. 16-20, Jan. 1990.
12. Y. Wang and R. W. Longman, "Use of Non-Causal Digital Signal Processing in Learning and
Repetitive Control," Advances in the Astronautical Sciences, Vol. 90, 1996, pp. 649-668.
13. S. Songchon and R. W. Longman, “Comparison of the Stability Boundary and the Frequency Response
Stability Condition in Learning and Repetitive Control,” International Journal of Applied Mathematics
and Computer Science, Vol. 13, No. 2, 2003, pp. 169-177.
14. W. Kang and R. W. Longman, “The Effect of Interpolation on Stability and Performance in Repetitive
Control,” Advances in the Astronautical Sciences, this volume, 2005.
15. E. C. Ifeachor and B. W. Jervis, Digital Signal Processing: A Practical Approach, 2nd ed. Pearson
Education Limited, 2002.
16. C. S. Williams, Designing Digital Filters, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1986.
17. T. Songchon and R. W. Longman, “On the Waterbed Effect in Repetitive Control using Zero-Phase
Filtering,” Advances in the Astronautical Sciences, Vol 108, 2002, pp. 1321-1340.

Table 1. Summary of important features of common window functions

Name of       Transition          Passband      Main lobe relative   Stopband attenuation   Window function wd(k), k ≤ (N-1)/2
window        width (Hz)          ripple (dB)   to side lobe (dB)    (dB) (maximum)
function

Rectangular   0.9/N               0.7416        13                   21                     1
Hanning       3.1/N               0.0546        31                   44                     0.5 + 0.5 cos(2πk/N)
Hamming       3.3/N               0.0194        41                   53                     0.54 + 0.46 cos(2πk/N)
Blackman      5.5/N               0.0017        57                   75                     0.42 + 0.5 cos(2πk/(N-1)) + 0.08 cos(4πk/(N-1))
Kaiser        2.93/N (β = 4.54)   0.0274        varies with β        50                     I0(β{1 - [2k/(N-1)]²}^(1/2)) / I0(β)
              4.32/N (β = 6.76)   0.00275                            70
              5.71/N (β = 8.96)   0.00275                            90

Figure 1. Block diagram of repetitive control: the command y_c(k) and output y(k) form the error e(k), which drives the repetitive controller F(z)H(z)/(z^p - H(z)) to produce u(k); the plant G(z) output y_0(k) is summed with the disturbance d(k) to give y(k).

Figure 2. Frequency response of 51 gain filter for w = 1, v = 1 with cutoff at 30 Hz.
Figure 3. Impulse responses of filter in Fig. 2 and ideal filter.
Figure 4. Error for all harmonics corresponding to Fig. 2.
Figure 5. Frequency response of 151 gain filter for w = 1, v = 1 with cutoff at 30 Hz.
Figure 6. Frequency response of 51 gain filter with w = 0.001, v = 5.
Figure 7. Frequency response of 51 gain filter with w = 5, v = 0.001.
Figure 8. Error for all harmonics corresponding to Fig. 7.
Figure 9. Frequency response of 51 gain exact fit filter designed for 26 frequencies.
Figure 10. Error for all harmonics corresponding to Fig. 9.
Figure 11. Frequency response of 199 gain exact fit filter designed for 100 frequencies.
Figure 12. Error for all harmonics corresponding to Fig. 11.
Figure 13. Frequency response of 51 gain Kaiser filter compared with desired response at 26 frequencies.
Figure 14. Error at all harmonics corresponding to Fig. 13.
Figure 15. Frequency response of 51 gain exact fit filter using non-uniformly spaced frequencies (moving points).
Figure 16. Frequency response of 51 gain exact fit filter using non-uniformly spaced frequencies (uniform remaining points).
Figure 17. Nyquist plot of 3rd order system using 5 gain compensator using method 1 with 3 noncausal gains.
Figure 18. Pole-zero map corresponding to Fig. 17.
Figure 19. Nyquist plot of the 3rd order system for method 2.
Figure 20. Pole-zero map corresponding to Fig. 19.
Figure 21. Nyquist plot of the 3rd order system for method 3.
Figure 22. Pole-zero map corresponding to Fig. 21.
Figure 23. Amplitude of H(z)[1 - F(z)G(z)] for method 1 using filter in Fig. 2.
Figure 24. Amplitude of H(z)[1 - F(z)G(z)] for method 2 using filter in Fig. 2.
Figure 25. Sensitivity transfer function for method 1 with unity DC gain.
Figure 26. Sensitivity transfer function for method 2 with unity gain.
Figure 27. Sensitivity transfer function for method 1 with 0.3 DC gain.
Figure 28. Sensitivity transfer function for method 1 using filter in Fig. 2 for all frequencies.
Figure 29. Sensitivity transfer function corresponding to Fig. 28 for all harmonics.
Figure 30. Amplitude of H(z)[1 - F(z)G(z)] for method 1 using filter in Fig. 7.
Figure 31. Sensitivity transfer function corresponding to Fig. 30.
Figure 32. Amplitude of H(z)[1 - F(z)G(z)] for method 1 using filter in Fig. 11.
Figure 33. Sensitivity transfer function corresponding to Fig. 32.
Figure 34. RMS error for the 5th order model using 12 gain compensator from method 1.
Figure 35. Frequency response of error in Fig. 34 from repetition 3.
Figure 36. Amplitude of [1 - F(z)G(z)] for the 5th order system using method 1.
Figure 37. Frequency response of 51 gain filter for equal weights with cutoff at 16 Hz and adjusted weights with cutoff at 11 Hz.
Figure 38. Amplitude of H(z)[1 - F(z)G(z)] for the 5th order system using the equal weight filter.
Figure 39. Sensitivity transfer function corresponding to Fig. 38 showing all frequencies.
Figure 40. Sensitivity transfer function showing all harmonics.
Figure 41. RMS error vs. repetition after applying the filters.
