Z-Transform and Its Application To Development of Scientific Simulation Algorithms

ZVONKO FAZARINC
620 Sand Hill Road, 417D, Palo Alto, California 94304

Received 23 December 2009; accepted 29 March 2010

ABSTRACT: Engineers and educators find in computer simulations a powerful substitute for the inability of calculus to produce closed-form solutions to modern problems. But the validity of quasi-closed-form solutions produced by computers remains suspect. While no magic test exists, pieces of algorithms can individually be examined by the Z-transform [E. I. Jury, Sampled-data control systems, John Wiley & Sons, 1958]. Two examples from engineering and three from education are presented. © 2010 Wiley Periodicals, Inc. Comput Appl Eng Educ 21: 75-88, 2013; View this article online at wileyonlinelibrary.com; DOI 10.1002/cae.20452

Keywords: Z-transform; science simulations; algorithm development; physics algorithms; math algorithms

INTRODUCTION

While natural phenomena often exhibit high complexity, science, engineering, and education are able to penetrate many of their secrets by studying individual, separable behaviors. These are commonly described by differential equations, whose closed-form solutions are of high interest. The Laplace transform, which enables us to derive closed-form solutions, is limited to linear cases of orders four or lower. That is far removed from the practical needs, but there was no way to obtain a closed-form solution for phenomena of order five and higher until the advent of the computer.

Today we are able to attack problems of almost unlimited overall complexity and produce not just static but also dynamic closed-form solutions in the form of science simulations. These are gaining in credibility yet remain untested when predicting unobservable outcomes. There are no alternate methods available to scrutinize predictions of computer simulations, but we can test individual components of the overall algorithmic structure for their validity. To this end we need a tool that generates closed-form solutions of mini algorithmic components of the overall system. We find in the Z-transform such a tool. It is a discrete equivalent of continuous transforms such as Fourier and Laplace. Neither of them has a physical meaning unless we feel comfortable in the imaginary domain. But all of them are valuable mathematical tools. The Laplace transform's ability to convert differential equations into their algebraic equivalents is matched by the Z-transform's ability to do the same for difference equations.

One may legitimately question the focus on difference equations, which were seemingly abandoned a long time ago. But besides the reasons just quoted for dealing with them, here are some additional ones that may justify the elevation of difference equations to a higher rank in computer science, mathematics, and physics.

Computers are completely ignorant about integral-differential equations, while difference equations are native to them. Most algorithms used in scientific computation are in fact difference equations. Predicting what they will do when subjected to repeated evaluations is of high interest to designers of simulation algorithms. The Z-transform can give us an insight into their behavior.

A traditional reason for appreciation of difference equations is the fact that most problems of interest were originally described in terms of differences, resulting in one or more difference equations. The classical differential equations were then derived from them by letting the independent variables approach zero. This process made them less accessible to normal mortals and to computers alike. But leaving this controversial viewpoint aside, let us focus again on the fact that valuable insights can be gained into particular execution problems of troublesome algorithms from closed-form solutions. These can be obtained in two possible ways:

Either we let the independent variables of the algorithm go to zero and then solve the resulting differential equation by means of the Laplace transform,

Correspondence to Z. Fazarinc (z.fazarinc@comcast.net).
© 2010 Wiley Periodicals, Inc.


Or we solve the difference equation directly by means of the Z-transform. We will show that this approach delivers more reliable and relevant answers.

It is obvious that both methods are subject to low order and linear cases and are therefore applicable only to small separable components of more complex algorithms. We will demonstrate the Z-transform approach to create closed-form solutions on two examples in engineering physics and three in mathematical education. First we will show a simple derivation of the Z-transform. Its engineering principles and a formal solution of a generic second order difference equation appearing throughout the paper will be found in the Appendix.

Figure 1 Result of direct digitization of harmonic motion equation. [Plot of h(n) versus n in ns.]

Z-TRANSFORM DIRECTLY FROM LAPLACE TRANSFORM

The causal Laplace transform H(s) of a time function h(t) is defined as H(s) = ∫₀^∞ h(t)e^(−st) dt and its inverse as h(t) = (1/2πj) ∮ H(s)e^(st) ds, where t is time and s is a complex variable. We discretize time t into n integer increments to obtain an equivalent discrete forward transform H(s) = Σ_{n=0}^∞ h(n)e^(−sn). Using the substitution z = e^s we can write an equivalent H(s) function but in terms of z as H(z) = Σ_{n=0}^∞ h(n)z^(−n). This is the causal forward Z-transform.

Let us now turn our attention to the inverse. Replacing again the continuous time t with its discrete equivalent n in the inverse Laplace transform we obtain h(n) = (1/2πj) ∮ H(z)e^(sn) ds. The differential ds is found from the definition z = e^s as ds = dz/z. Substituting this we get h(n) = (1/2πj) ∮ H(z)z^(n−1) dz. The common Z-transform definitions are then

    H(z) = Σ_{n=0}^∞ h(n) z^(−n)    (1)

and

    h(n) = (1/2πj) ∮ H(z) z^(n−1) dz    (2)

More about the practice of Z-transformations is found in the Appendix.

HARMONIC MOTION ALGORITHM DERIVED WITH Z-TRANSFORM

The harmonic motion, which is at the root of many resonance phenomena in physics, is captured in the second order differential equation

    d²h(t)/dt² + ω²h(t) = 0    (3)

In it h(t) is the function subject to harmonic behavior and ω is the circular frequency. The Laplace transform solution of this equation is

    h(t) = h(0)cos ωt + (1/ω)(dh(t)/dt)|_{t=0} sin ωt = h(0)cos ωt + g(0)sin ωt    (4)

An algorithm for computer generated harmonic motion or for evaluation of trigonometric functions in general is desirable and may be found directly from the harmonic motion equation. But this task soon encounters a surprising number of unpredictable obstacles, which we will address with the Z-transform. The first obvious idea that offers itself is to convert the differential harmonic equation into its discrete equivalent and then submit it to the computer for evaluation. It is convenient to first split the second order Equation (3) into two first order equivalents as dh(t)/dt = ωg(t) and dg(t)/dt = −ωh(t). These are now easily converted into discrete forms, in which Ω stands for ω multiplied by the digitization interval,

    h(n + 1) − h(n) = Ωg(n)    g(n + 1) − g(n) = −Ωh(n)    (5)

When presented in a language using arrayed variables these equations are easily digested by the computer, as for example: for (n = 0; n < 30; n++) {h[n + 1] = h[n] + Ω*g[n]; g[n + 1] = g[n] − Ω*h[n];}. The choice of 30 increments is arbitrary and if we think of them as nanoseconds the scaling is nine orders of magnitude. So if we wish the frequency to be say 50 MHz we must choose Ω to be 2π × 50 × 10⁶ scaled down by the power of nine, or Ω = 0.314159. The result of executing this algorithm with these numbers and with initial conditions h = 0 and g = 1 is plotted in Figure 1. There is not much doubt that we are dealing with an unstable algorithm.

This author has met many frustrated designers of physics algorithms, who like him have encountered similar problems in their practice. They have either blamed themselves for not understanding the calculus of discrete mathematics, blamed their computers for misinterpreting their logical inputs, or gave up in despair. We intend to demonstrate that the Z-transform lets us diagnose such misbehaved algorithms and then point to their possible cures. To this end we must derive the numeric expression for h(n) as seen by the computer while executing our simple algorithm. From it we may then gain an insight into the origin of instability.

The closed form solution for h(n) can be derived from (5). We take the backward difference [1] of the first equation, which amounts to h(n + 2) − 2h(n + 1) + h(n) = Ω[g(n + 1) − g(n)]. Then we substitute the second equation for the bracketed term and get after some simple algebraic manipulation the second order

difference equation for h(n) in the generic form

    h(n + 2) − 2h(n + 1)A + h(n)B = 0
    with A = 1 and B = 1 + Ω²    (6)

The closed-form solution to (6) is found in Equation (A15) of the Appendix as

    h(n) = (1 + Ω²)^(n/2) [h(0)cos(βn) + ((h(1) − h(0))/Ω) sin(βn)]    β = tan⁻¹ Ω    (7)

A comparison with (4) demonstrates a reassuring similarity except for the divergence factor (1 + Ω²)^(n/2). This grows with the number of executions n and is obviously responsible for the instability of our algorithm (5). We wish now to address this problem from a generic viewpoint of discrete calculus.
The discrete domain is beset with problems originating in the basic question of what happens before something else. To demonstrate this let us write our algorithm (5) in a few possible ways.

    h(n + 1) = h(n) + Ωg(n)        g(n + 1) = g(n) − Ωh(n)
    h(n + 1) = h(n) + Ωg(n)        g(n + 1) = g(n) − Ωh(n + 1)
    h(n + 1) = h(n) + Ωg(n + 1)    g(n + 1) = g(n) − Ωh(n)
    h(n + 1) = h(n) + Ωg(n + 1)    g(n + 1) = g(n) − Ωh(n + 1)

There are many other possibilities available as combinations of the pairs shown above and we propose a generalized form that includes them all

    h(n + 1) = h(n) + Ω[ag(n) + (1 − a)g(n + 1)]    (8)

    g(n + 1) = g(n) − Ω[bh(n) + (1 − b)h(n + 1)]    (9)

Factors a and b may adopt any values desired. We will now combine these two equations into a second order difference equation containing only h(n) and then derive its closed-form solution. By inspecting it we should then be able to decide what values of the factors a and b we should adopt to match the true solution (4). To accomplish this we take the first difference of (8), which after simple algebraic operations yields

    h(n + 2) = 2h(n + 1) − h(n) + Ω{a[g(n + 1) − g(n)] + (1 − a)[g(n + 2) − g(n + 1)]}

The bracketed expressions are directly available from (9) and when inserted into the above we get, again after some straightforward but messy algebra, the expression

    h(n + 2) − 2h(n + 1)A + h(n)B = 0    (10)

with

    A = [1 + abΩ² − (a + b)Ω²/2] / [1 + (1 − a)(1 − b)Ω²]
    B = [1 + abΩ²] / [1 + (1 − a)(1 − b)Ω²]    (11)

We have encountered this equation earlier and found its solution in (A15) as

    h(n) = h(0)B^(n/2) cos βn + [(h(1) − h(0)A)/√(B − A²)] B^(n/2) sin βn

    β = tan⁻¹(√(B − A²)/A)    (12)

Substituting A and B from (11) we find after some lengthy but elementary algebra the following answer to our function

    h(n) = [(1 + abΩ²)/(1 + (1 − a)(1 − b)Ω²)]^(n/2)
           × [h(0)cos βn + ((h(0)(a − b)Ω/2 + g(0))/√(1 − (a − b)²Ω²/4)) sin βn]

    β = tan⁻¹[ Ω√(1 − (a − b)²(Ω/2)²) / (1 − (a + b − 2ab)Ω²/2) ]    (13)

In deriving (13) we have evaluated the term h(1) from (8) and (9) by setting n equal to zero. Equation (13) exhibits again the divergence term that grows with the number of executions n. It is easy to see that this factor becomes unity when we set a + b = 1. This, in turn, produces the new expression

    h(n) = h(0)cos βn + [(h(0)(a − 1/2)Ω + g(0))/√(1 − (a² − a + 1/4)Ω²)] sin βn    (14)

If we now set a to 0.5 then the above becomes h(n) = h(0)cos βn + g(0)sin βn, which matches the continuous solution (4) quite well with the exception of the argument β. This differs from Ω and is as such a source of frequency error with respect to it. In Figure 2 we have plotted the fractional frequency error (β − Ω)/Ω for the divergent case a = b = 1 and for cases that produce stable, non-divergent solutions with a + b = 1.

Figure 2 Frequency error as a function of digitization interval for various sets of parameters.

With this much knowledge about the effect of our factors a and b we can now return to our algorithm (8) and (9) and make it stable by selecting the proper sequencing. The case a = b = 0.5 matches the continuous solution (4) and would be written as

    h(n + 1) = h(n) + Ω[g(n) + g(n + 1)]/2

    g(n + 1) = g(n) − Ω[h(n) + h(n + 1)]/2

This happens to be an implicit algorithm that the computers do not accept kindly. Of course, there are numerous methods that

deal with implicit algorithms, but we can circumvent them, yet still satisfy the stability condition, in a very straightforward way by choosing a = 1 and b = 0. The resulting algorithm is then

    h(n + 1) − h(n) = Ωg(n)  and  g(n + 1) − g(n) = −Ωh(n + 1)

This can now be cast into an appropriate computer language as we have done before: for (n = 0; n < 30; n++) {h[n + 1] = h[n] + Ω*g[n]; g[n + 1] = g[n] − Ω*h[n + 1];}. Now a keen eye of a computer programmer would quickly note that the arrays can be eliminated for this specific case so that the natural computer sequencing of statement evaluations can be exploited. We then simply write the program as for (n = 0; n < 30; n++) {h = h + Ω*g; g = g − Ω*h;}. This algorithm with the same parameters as before produces Figure 3, which meets our expectations for extended periods of time [2].

Figure 3 Results of correct digitization of the harmonic motion equation. [Plot of h(n) versus n in ns.]

The foregoing example illustrates how the Z-transform may be used to develop algorithms of prescribed behavior. General algebra, though, limits the extraction of poles, required for inverse Z-transformation, to orders of less than five. Consequently the technique is also limited to low orders. But there is no known technique other than trial and error that can be employed in complex simulation algorithm development. In the next section we show another example of the Z-transform's power to accomplish this task.

ACCELERATED MOTION ALGORITHM DERIVED WITH Z-TRANSFORM

In this example we will demonstrate the use of convolution to derive algorithm parameters when the driving function is ill defined. We will apply it to the most frequently encountered task in simulations of dynamics, that is, the algorithm of accelerated motion based on Newton's laws.

A constant mass m, subjected to force F(t), experiences an acceleration

    D(t) = F(t)/m    (15)

This acceleration generates a velocity differential

    dv(t) = D(t)dt    (16)

and a position differential

    ds(t) = v(t)dt    (17)

A direct integration of (16) and (17) produces for the position of the object

    s(t) = s(0) + v(0)t + ∫₀ᵗ dt ∫₀ᵗ D(t)dt    (18)

An algorithm to simulate accelerated motion on a computer can be elicited from Equations (16) and (17), respectively, by equivalent backward differences over the digitization interval Δ

    Δv(n + 1) = v(n + 1) − v(n) = DΔ
    Δs(n + 1) = s(n + 1) − s(n) = vΔ    (19)

In (19) we have intentionally avoided specifications of temporal arguments for acceleration D and velocity v because a number of choices are available. We have demonstrated this earlier before arriving at forms (8) and (9). Here we could also use the acceleration D(n + 1) and velocity v(n + 1) or their previously evaluated values D(n) and v(n), or we could also use some combinations thereof.

Without knowing the outcomes produced by a given choice, we have no reason to prefer any one of them. Therefore we will again employ the parameter optimization procedure, similar to the one encountered in the previous section. This time we will use the expected answer (18) as the target. To this end we choose as yet undefined fractions of the two primary choices with the parameters a and b determining their contributions as

    v(n + 1) = v(n) + Δ[aD(n) + (1 − a)D(n + 1)]    (20)

    s(n + 1) = s(n) + Δ[bv(n) + (1 − b)v(n + 1)]    (21)

In order to make a comparison with (18) possible we need a closed form solution for s(n), which calls for elimination of velocity terms from (21). We do this by taking the first difference of s(n + 1), yielding

    s(n + 2) − s(n + 1) = s(n + 1) − s(n) + Δb[v(n + 1) − v(n)] + Δ(1 − b)[v(n + 2) − v(n + 1)]

The bracketed expressions are easily found from (20) and their substitution produces

    s(n + 2) − 2s(n + 1) + s(n)
    = Δ²[abD(n) + (1 − a)bD(n + 1) + a(1 − b)D(n + 1) + (1 − a)(1 − b)D(n + 2)]    (22)

We recognize Equation (A10) in the Appendix as possessing the form of (22) when A = B = 1 and f(n) is equal to the RHS of (22). The specific solution of this equation is then as given by (A16), in which the asterisk * denotes the convolution in the n domain

    s(n) = s(0)(1 − n) + ns(1) + Δ²[abD(n) + (a + b − 2ab)D(n + 1) + (1 − a)(1 − b)D(n + 2)] *ₙ (n − 1)

The s(1) term can be obtained directly from (20) and (21) for n = 0: s(1) = s(0) + v(0)Δ + Δ²[a(1 − b)D(0) + (1 − a)(1 − b)D(1)]. With this the inverse transform is

    s(n) = s(0) + v(0)nΔ + Δ²[a(1 − b)nD(0) + (1 − a)(1 − b)nD(1)]
           + Δ²[abD(n) + (a + b − 2ab)D(n + 1) + (1 − a)(1 − b)D(n + 2)] *ₙ (n − 1)    (23)

We must now produce an explicit form of the discrete convolution by using Equation (A9) from the Appendix, which for our case takes on the form f(n) *ₙ (n − 1) = Σ_{j=0}^{n} f(j)(n − 1 − j), where f(j) stands for the bracketed term in (23). Then our inverse transform becomes

    s(n) = s(0) + v(0)nΔ + Δ²[a(1 − b)nD(0) + (1 − a)(1 − b)nD(1)]
           + Δ² Σ_{j=0}^{n} [abD(j) + (a + b − 2ab)D(j + 1) + (1 − a)(1 − b)D(j + 2)](n − 1 − j)    (24)

It is convenient to use the following easily proven identities

    Σ_{j=0}^{n} D(j + 1)(n − 1 − j) = Σ_{j=0}^{n} [D(j)(n − j)] − nD(0) − D(n + 1)

    Σ_{j=0}^{n} D(j + 2)(n − 1 − j) = Σ_{j=0}^{n} [D(j)(n + 1 − j)] − (n + 1)D(0) − nD(1) − D(n + 2)

Substitution of these into (24) yields

    s(n) = s(0) + v(0)nΔ + Δ²[a(1 − b)nD(0) + (1 − a)(1 − b)nD(1)]
           + Δ² Σ_{j=0}^{n} [abD(j)(n − 1 − j) + (a + b − 2ab)D(j)(n − j) + (1 − a)(1 − b)D(j)(n + 1 − j)]
           − Δ²(a + b − 2ab)[nD(0) + D(n + 1)]
           − Δ²(1 − a)(1 − b)[(n + 1)D(0) + nD(1) + D(n + 2)]

We will drop the terms D(n + 1) and D(n + 2) from the above expressions because they are meaningless to our goal. This decision has nothing to do with mathematical practice but only with common sense. It is namely quite obvious that the position s(n) at time n cannot possibly depend on accelerations at later times n + 1 or n + 2.

Combining the sums, after an elementary but lengthy algebraic manipulation we end up with

    s(n) = s(0) + v(0)nΔ + Δ²(a − 1)nD(0) − Δ²(1 − a)(1 − b)D(0)
           + Δ² Σ_{j=0}^{n} D(j)[ab(n − 1 − j) + (a + b − 2ab)(n − j) + (1 − a)(1 − b)(n + 1 − j)]    (25)

Without specifying the acceleration function D(j) one cannot further expand expression (25). But we do have the freedom to specify any acceleration within reason to find the resulting position s(n) of the object in question. It should be pointed out that we have arrived at this universal answer by means of the use of convolution. This can be a real time saver whenever we wish to examine the outcome of a variety of excitations. So let us now do it for two kinds of accelerations and see what values the factors a and b have to adopt to force the algorithm (25) into alignment with Newton's laws contained in (18).

Step Acceleration

We define the step acceleration as Ds(n) = Ds·u(n) where u(n) is the unit step function

    u(n) = { 0 if n < 0
           { 1 otherwise

It is illustrated in Figure 4 as a continuous function by dashed lines and as its discrete equivalent by dots appearing at unit time intervals.

Figure 4 Stepwise acceleration in discrete domain.

This type of acceleration can be employed to represent a multitude of cases. If we think of it as having started at minus infinity, we have a constant acceleration case at hand. We can think of an arbitrary force as a sequence of steps, following one another. For the continuous domain, such a sequence would consist of infinitesimally small steps occurring at infinitesimally small time intervals. In the discrete domain, finite steps occurring at unit intervals would represent such an arbitrary acceleration function. Consequently, the results obtained for a single step will be representative of a whole array of acceleration functions, including zero and constant acceleration.

To enable the comparison of results we must limit ourselves to discrete times t = nΔ where the information is available to us. The acceleration is equal to Ds for all times equal to or greater than zero. When we perform the integrations indicated in (18) we obtain for the object's position, expressed at discrete times, the expression

    s(n) = s(0) + v(0)nΔ + Ds (nΔ)²/2 |_{n≥0}    (26)

We will now substitute the same step acceleration into (25). In conformance with its notation we must use the summation index j, which results in the following definition of the step acceleration

for this case

    D(j) = { 0  if j < 0
           { Ds otherwise

Consequently our expression (25) becomes

    s(n) = s(0) + v(0)nΔ + Δ²(a − 1)nDs − Δ²(1 − a)(1 − b)Ds
           + Δ²Ds Σ_{j=0}^{n} [ab(n − 1 − j) + (a + b − 2ab)(n − j) + (1 − a)(1 − b)(n + 1 − j)]

Applying the following identities

    Σ_{j=0}^{n} (n − 1 − j) = n(n − 1)/2
    Σ_{j=0}^{n} (n − j) = n(n + 1)/2
    Σ_{j=0}^{n} (n + 1 − j) = n(n + 1)/2 + n + 1

we get

    s(n) = s(0) + v(0)nΔ − Δ²(1 − a)(1 − b)Ds + Δ²(a − 1)nDs
           + Δ²Ds[ ab·n(n − 1)/2 + (a + b − 2ab)·n(n + 1)/2 + (1 − a)(1 − b)(n(n + 1)/2 + n + 1) ]

After another bout with algebra we end up with this simple answer

    s(n) = s(0) + v(0)nΔ + DsΔ²[ n²/2 + n(0.5 − b) ]    (27)

A comparison with (26) suggests the answer b = 0.5 to obtain a match. No restriction on a is placed by (27). When we plug this into (20) and (21) we get the algorithm for computation of accelerated motion that matches Newton's laws for a step input.

    v(n + 1) = v(n) + Δ[aD(n) + (1 − a)D(n + 1)]
    s(n + 1) = s(n) + Δ[v(n) + v(n + 1)]/2    (28)

Figure 5 Simulation of uniformly accelerated motion.

In Figure 5 we see a plot produced by (28). In it we have used zero initial values for v and s, unity value for Ds, and 0.5 for both a and b. The discrete solution (28) correctly predicts the motion at discrete points in time when a step input is applied. But it also correctly predicts the inertial motion in absence of external acceleration forces.

Independence from a is understandable for our particular step function case and extreme values can be used without ill effects. But fast and large fluctuations of acceleration might not be served well by large values of a. We will return to this question in the next section where we address the impulsive acceleration.

Impulsive Acceleration

The impulsive acceleration is illustrated in Figure 6 by dashed lines for the continuous domain and by discrete points for the discrete time domain. It appears at a given time instant and then vanishes. Such accelerations arise frequently in nature when particles are colliding. The simulation of such motion addresses a number of natural behaviors, thermodynamics being one of them.

Figure 6 Impulsive acceleration in discrete domain.

We denote the impulsive acceleration by Di(n) = Di·δ(n) where δ(n) is the Kronecker delta function, which is defined as

    δ(n) = { 1 if n = 0
           { 0 otherwise

In the above we have again used the discrete notation for time, that is, t = nΔ. When the corresponding acceleration Di(n) is substituted into (18) we get the following answer for the position of the object in discrete notation

    s(n) = s(0) + v(0)nΔ + DiΔ²n |_{n≥0}    (29)

Equation (29) will be used as reference for establishing the parameters a and b in (25) for this case. If we apply the same impulsive acceleration to (25) the summation term has a value only when the index j is zero. Then (25) assumes the following form

after some elementary algebra, whose only pleasure are numerous cancellations of terms,

    s(n) = s(0) + v(0)nΔ + DiΔ²·a(n − b)    (30)

It is apparent that a match between (30) and (29) demands a = 1 and b = 0. Applying these values to (28), with zero initial conditions and Di = 1, results in Figure 7. In it the continuous Newton solution is shown as a dashed line.

Figure 7 Discrete motion following a stepwise acceleration compared to theoretical expectation.

While agreements are achieved at discrete points as demanded, a closer match would be desired. A shift of the graph to the left by 0.5 would certainly produce a desirable pattern and would be achieved by choosing b = 0.5 in (30). Unfortunately the discrete algorithm (28) does not provide any information at half integer points. So we must retain the result of our analysis, which calls for a = 1 and b = 0, but can impose an initial value of 0.5 on s(0), which produces the plot shown in Figure 8.

Figure 8 Improved match to expected motion by initial condition choice.

The object of this analysis was not to computerize the physics of accelerated motion but to demonstrate the technique for analysis of relevant algorithms. We will therefore leave this section and take on a financial example of Z-transform application.

USE OF Z-TRANSFORM TO DEVELOP A FINANCIAL ALGORITHM

This time we want to derive an algorithm that will address an investment issue. Assume that we have at our disposal an amount of money m(0) and know that we need an annual amount of w(0) to live on at today's prices. The available investment offers an annual appreciation rate of 100·p% but we predict an inflation of 100·i% and want our income to follow it. How long will our money last? An alternate question we want the answer to is how much money is left after n years.

We define the variable m(n) to be the money amount after n years, and w(n) the withdrawal in the nth year. With these quantities we can define the following relationships:

    Money after nth withdrawal:  m(n) − w(n)
    Money after appreciation:    m(n + 1) = [m(n) − w(n)]P    (31)
    Inflated withdrawal:         w(n + 1) = w(n)I    (32)

The shorthand used in the above: P = 1 + p; I = 1 + i.

Take the first difference of (31), which is m(n + 2) − m(n + 1) = m(n + 1)P − m(n)P − w(n + 1)P + w(n)P and can be presented in the form

    m(n + 2) − m(n + 1)(1 + P) + m(n)P + [w(n + 1) − w(n)]P = 0    (33)

The square bracketed expression equals w(n)i according to (32). From (31) we also get w(n)iP = m(n)iP − m(n + 1)i. With this we can write (33) as

    m(n + 2) − m(n + 1)(I + P) + m(n)IP = 0    (34)

This is recognized as a second order difference equation encountered in the Appendix as (A10) when f(n) = 0, A = (I + P)/2 and B = IP. According to (A12) the poles of Equation (34) are real and easily evaluated as z₁ = I; z₂ = P. The solution is given in (A13) as

    m(n) = m(0)·(I^(n+1) − P^(n+1))/(I − P) + [m(1) − 2m(0)(I + P)/2]·(Iⁿ − Pⁿ)/(I − P)

m(1) is obtained from (31) by setting n to zero as m(0)P − w(0)P. With this and after some elementary algebra the final solution to our problem is

    m(n) = Pⁿ[ m(0) − w(0)·(1 − (Iⁿ/Pⁿ))/(1 − (I/P)) ]    (35)

Expression (35), divided by the inflated withdrawal from (32), is plotted in Figures 9-11 as a function of n, the number of periods to which the appreciation rate p and the inflation rate i apply. The parameter is the ratio P/I as defined earlier. The three graphs tell us how the initial principal m(0), invested at the rate p, will behave with time if we start drawing one tenth, one twentieth or one fortieth of the initial principal, respectively, while allowing the withdrawals to increase at the inflation rate i. Take an example from Figure 10 and choose the initial principal m(0) to be one million dollars. We start drawing 50,000 dollars the first year but then withdraw every year 3% more to allow for the inflation. If we want to preserve the initial capital forever, the required ratio P/I is 1.053 as seen from the graph. To achieve this we must have invested at 1.053 times 1.03 or about 8.5%. If on the other hand we can get only 4% interest on our investment, the ratio of P to I is 1.04/1.03 = 1.01. The lower curve in the middle graph, which applies to this case, tells us that the money will run out in about 22 years. While interesting, any further discussion of this example would distract from the main thrust of this paper.

Figure 9 Ratio of investment status m to inflated annual withdrawal w as a function of time in years n for different ratios of investment rate P and inflation rate I, for an initial ratio m/w = 10.

Figure 10 Ratio of investment status m to inflated annual withdrawal w as a function of time in years n for different ratios of investment rate P and inflation rate I, for an initial ratio m/w = 20.

Figure 11 Ratio of investment status m to inflated annual withdrawal w as a function of time in years n for different ratios of investment rate P and inflation rate I, for an initial ratio m/w = 40.

APPLICATION OF Z-TRANSFORM TO GAMBLING STATISTICS

Plain intuition would tell a gambler that the odds of winning are steadily improving with the number of attempts at the same game. But the facts are different, as we will prove by an analysis which will at the same time allow us to demonstrate the use of a two-dimensional Z-transform. An example of such in physics is found in Ref. [3]. Here we will attempt to find the probability of the number of successes s in n trials, p(s, n), at a game for which the probability of a success in a single trial, p(1, 1), is known. Let us designate this probability as being equal to p. Then

    Probability of a success in a single trial:  p(1, 1) = p    (36)
    Probability of a failure in a single trial:  p(0, 1) = 1 − p    (37)
    Probability of s successes in n trials:      p(s, n), to be found    (38)

We can say with certainty that there can be no successes without trials, which means that we can set p(1, 0) = p(2, 0) = p(3, 0) = ··· = p(s, 0) = 0. We can also say that the failure is a certainty with no trials, that is, p(0, 0) = 1. Finally we can state that s successes in n trials can be achieved in two possible ways:

Either we had s − 1 successes in the previous n − 1 trials, that is, p(s − 1, n − 1), followed by a success in the nth trial. The probability of this happening is p(s − 1, n − 1) times the probability of success in one trial, which from (36) amounts to p(1, 1) = p.

Or we had s successes in the previous n − 1 trials, that is, p(s, n − 1), followed by a failure in the nth trial. The probability of this happening is p(s, n − 1) times the probability of failure in a single trial given by (37) as p(0, 1) = 1 − p.

The two probabilities add up to yield p(s, n) = p(s − 1, n − 1)·p + p(s, n − 1)·(1 − p). Advance both s and n by one increment and obtain

    p(s + 1, n + 1) = p(s, n)·p + p(s + 1, n)·(1 − p)    (39)

This is a partial difference equation in variables s and n. It can be solved by a two-dimensional Z-transformation. To this end we will introduce some needed z-domain variables

    Z_s→u[p(s, n)] = P(u, n)
    Z_n→z[P(u, n)] = Z_n→z[Z_s→u[p(s, n)]] = P(u, z)    (40)

The last expression is the doubly transformed p(s, n), denoted by P(u, z), where the respective z-domain variables are u for s and z for n. Let us now apply (A4) to do the first transformation of (39) from s into the u domain. This yields

    uP(u, n + 1) − up(0, n + 1) = pP(u, n) + (1 − p)uP(u, n) − (1 − p)up(0, n)

The quantity p(0, n) is the probability of zero successes in n trials. Because the probability of one failure in one trial is (1 − p), we immediately conclude that p(0, n) must be (1 − p)ⁿ. Consequently the term up(0, n + 1) on the left cancels the term (1 − p)up(0, n) on the right. The simplified equation after the first transformation is now uP(u, n + 1) = pP(u, n) + (1 − p)uP(u, n). We transform this now with respect to n, using the notation set down in (40) and by the method suggested in (A4):

zuP(u, z) - zuP(u, 0) = pP(u, z) + (1 - p)uP(u, z)

P(u, z) is easily extracted from this

P(u, z) = zuP(u, 0)/[zu - p - (1 - p)u] = P(u, 0) [zu/(z - 1 + p)]/[u - p/(z - 1 + p)]    (41)

We must now perform a double inverse transformation. First we do it with respect to u into the s domain. We have a single pole at u = p/(z - 1 + p), which makes the transformation quite simple. But the numerator contains a product of two functions of u. The inverse transforms of these two factors must then be convolved in the s domain, and we indicate the intended operation with an asterisk subscripted to identify the domain

P(s, z) = p(s, 0) *_s Z^{-1}_{u→s}{[zu/(z - 1 + p)]/[u - p/(z - 1 + p)]} = p(s, 0) *_s [z/(z - 1 + p)] u^s |_{u = p/(z-1+p)}

After substitution of the pole location for u we get

P(s, z) = p(s, 0) *_s p^s z/(z - 1 + p)^{s+1}

The next step is now to invert the above with respect to z. Note that we are faced with a single but (s + 1)-st order pole at z = 1 - p. There is also just a single residuum, which is easily evaluated when (A7) is invoked. When also combined with (A8) we get

p(s, n) = Z^{-1}_{z→n}[P(s, z)] = (1/s!) (d^s/dz^s)[P(s, z)(z - 1 + p)^{s+1} z^{n-1}] |_{z = 1-p}

The convolution and all functions of s are not affected by this last transformation because they are constants in the z → n domain. Consequently we can write

p(s, n) = p(s, 0) *_s (p^s/s!) (d^s z^n/dz^s) |_{z = 1-p}

The sth order derivative of z^n is easily found to be

d^s z^n/dz^s = n(n - 1)(n - 2) ··· (n - s + 1) z^{n-s} = [n!/(n - s)!] z^{n-s}

Substituting in the above and inserting the z value at the pole we get

p(s, n) = p(s, 0) *_s (p^s/s!) [n!/(n - s)!] (1 - p)^{n-s}

The only task left now is to convolve the two functions as indicated by the asterisk. Invoking (A9) seems appropriate, but the confusion of the many variables involved calls for some handholding. Transcribe (A9) for our case as follows

f1(s) *_s f2(s) = Σ_{j=0}^∞ f1(s - j) f2(j)

where the two functions to be convolved are

f1(s) = p(s, 0)  and  f2(s) = (p^s/s!) [n!/(n - s)!] (1 - p)^{n-s}

With this aid we can write for the probability of s successes in n trials

p(s, n) = Σ_{j=0}^∞ p(s - j, 0) (p^j/j!) [n!/(n - j)!] (1 - p)^{n-j}

The summation extends over all values of j. But we have concluded at the very outset that p(s, 0) is zero for all values of s except zero; that is, it is certain that there will be zero successes with zero trials. Consequently the above summation contributes zero for all values of j except when j = s. This leads instantly to the final answer for the probability of s successes in n trials

p(s, n) = [n!/(s!(n - s)!)] p^s (1 - p)^{n-s}    (42)

Figure 12 Probability of s successes in n trials when the probability of one success in one trial is 30%.

Expression (42) is plotted in Figure 12 for the case p = 0.3. Recall that p is the probability of one success in one trial, or p(1, 1). The case s = 0 is truncated at the top, but it starts out at p(0, 0) = 1. The s = 0 case requires a failure at each of the n trials. Because the probability of a failure is (1 - p), the probability of n failures is this to the power of n. The curve s = 0 exhibits this exponential behavior. As one would expect for p = 0.3, the curve s = 1 has a value of 0.3 at n = 1.

For other probabilities of a single success p we can construct a whole family of graphs. We show in Figure 13 only one additional case, for p = 0.15. The curve s = 0 drops off more slowly than in the previous case because the probability of a failure is 0.85 in this case as compared to 0.7 in the preceding case. The best chances of success, on the other hand, are somewhat lower for obvious reasons.

Figure 13 Probability of s successes in n trials when the probability of one success in one trial p is 15%.

The most important lesson for gamblers to learn from these graphs is that it does not pay to bet on the same game for a long time. For example, if one has not won once after some 8 trials when p = 0.15, the chances of winning are rapidly diminishing beyond that point. The same is true after only three trials if p = 0.3, as one can read from our first graph. From the same graph we

find that if we have not had two successes in six trials the chances of that happening are fading. If this was a gambling lesson there would be much more to say about Equation (42), but the purpose of this exercise was to demonstrate the use of a two-dimensional Z-transform.

APPLICATION OF THE Z-TRANSFORM TO CONNECTIVITY

If we have n nodes connected to each other, how many total links do we have?

It is very easy to tell from the illustration in Figure 14 that a new node added to the previous n nodes creates exactly n new links. If we call the number of initial links l(n), then the number of links between n + 1 nodes will be l(n + 1) = l(n) + n. This is a difference equation for l, for which a closed-form solution can be obtained with the Z-transform. Define Z[l(n)] = L(z); from Table 1 we get for the transform of l(n + 1) the expression Z[l(n + 1)] = zL(z) - zl(0) and for the transform of n the expression z/(z - 1)^2. We can now transcribe our equation in the z-domain as zL(z) - zl(0) = L(z) + z/(z - 1)^2. There are zero links for zero nodes, thus l(0) = 0. This leads to the final transformed number of links

L(z) = z/(z - 1)^3    (43)

Figure 14 Illustration of why the addition of one new node to the network of n nodes produces n new links.

Expression (43) consists of a third-order pole at z = 1. Using (A7) with k = 3 and z1 = 1 we obtain for the inverse transform

l(n) = (1/2!) (d^2/dz^2)[L(z)(z - 1)^3 z^{n-1}] |_{z=1} = (1/2!) (d^2/dz^2) z^n |_{z=1} = (1/2) n(n - 1) z^{n-2} |_{z=1}

Substitute the value of z at the pole and obtain the familiar final answer l(n) = n(n - 1)/2. This answer could have been derived in a less formal way by mathematical induction, but not so for our next example.

APPLICATION OF THE Z-TRANSFORM TO MATHEMATICS

What the factorial function does for integers, Euler's Gamma function does for all real numbers. It is considered to be a rare function that does not originate in the solution of a differential equation. We will derive it by means of the Z-transform.

The factorial function's (f(n) = n! = n · (n - 1)!) basic definition is f(n) = n · f(n - 1). Let us increment n in this expression by one to obtain the following difference equation

f(n + 1) = n · f(n) + f(n)    (44)

Define the transform of f(n) as Z[f(n)] = F(z); using Table 1 we can find the transform of n · f(n) to be -z dF(z)/dz. With these definitions we can write the transform of (44) as a differential equation

dF(z)/dz + F(z) - (1/z)F(z) = 0

Separation of variables yields dF(z)/F(z) = (1/z - 1) dz, which after integration produces ln F(z) = ln z - z + c. This is further developed into

F(z) = e^{ln z - z + c} = k z e^{-z}    (45)

Constant k in (45) arises from the integration constant c. F(z) is the factorial function in the z domain. To bring it back into the n domain we must perform an inverse transformation. Because we have set no restrictions on n, the resulting expression will be applicable to any value of n, including complex numbers. Using (2) and substituting (45) for the transformed function we can write the inverse transform

f(n) = (1/2πj) ∮ F(z) z^{n-1} dz = (k/2πj) ∮ e^{-z} z^n dz

A straightforward probing of the integrand reveals that its region of convergence is limited to positive values of z, meaning that the transform exists only when the lower limit of z is zero. For this case we can establish the constant k from the well-known fact that the factorial of 1 is 1. A per-partem integration of our modified expression yields

f(1) = 1 = (k/2πj) ∫_0^∞ e^{-z} z dz = k/(2πj)

for n = 1. Consequently k = 2πj and we have

f(n) = ∫_0^∞ z^n e^{-z} dz    (46)

Figure 15 Gamma function of a complex number n_r + jn_i.

This is the ultimate factorial function, applicable to any real or complex number. Expression (46) with the argument (n - 1) is

known as the Gamma function [4]

Γ(n) = f(n - 1) = ∫_0^∞ z^{n-1} e^{-z} dz    (47)

It is plotted in Figure 15 for complex values of n. For all positive integers, Γ(n + 1) = n!. As expected it produces the correct answer for, say, Γ(4) = f(3) = 6. An important recursive relationship, Γ(n + 1) = nΓ(n), is found from a per-partem integration of (47)

Γ(n) = [z^n e^{-z}/n]_0^∞ + (1/n) ∫_0^∞ z^n e^{-z} dz = Γ(n + 1)/n

A value of interest is Γ(0.5) = √π. This enables one to compute, for example, the factorial of 3.5 as Γ(4.5) = 3.5 · 2.5 · 1.5 · 0.5 · Γ(0.5) = 11.63, or the factorial of -2.5 as Γ(-1.5) = Γ(-0.5)/(-1.5) = Γ(0.5)/[(-1.5)(-0.5)] = 2.363.

CONCLUSIONS

With the aid of examples from different domains we have shown that the Z-transform can serve as a tool for testing the validity of small segments of scientific simulation algorithms. Its predictive power can benefit the development of new algorithms and the troubleshooting of existing ones. Closed-form solutions of studied algorithms open a window into their long-term behavior and thereby provide clues for their optimization or repair. Furthermore, the Z-transform offers to students of physics, finances, statistics and mathematics educational insights into the origins of tools relevant to their fields.

APPENDIX

The Forward Z-Transformation

As derived in (1), the discrete functions h(n) are causal and as such vanish for negative arguments. Their Z-transform function is then

Z[h(n)] = H(z) = Σ_{n=0}^∞ h(n) z^{-n} = h(0) + h(1)/z + h(2)/z^2 + ··· + h(n)/z^n + ···    (A1)

Its causality demands

h(n) = 0  if n < 0    (A2)

If h(n) happens to be a constant c, its transform is simply

C(z) = c Σ_{n=0}^∞ z^{-n} = c z/(z - 1)    (A3)

If a function h(n) has a Z-transform H(z), then h(n + k), where k is a positive integer, has a transform

Z[h(n + k)] = Σ_{n=0}^∞ h(n + k) z^{-n} = z^k Σ_{n=0}^∞ h(n + k) z^{-(n+k)} = z^k Σ_{n=k}^∞ h(n) z^{-n}

In the above we have brought the expression under the summation sign into conformance with (A1) by changing the lower limit. If we now add and subtract the term z^k Σ_{n=0}^{k-1} h(n) z^{-n}, we can bring the lower limit into alignment with (A1) also. Thus

Z[h(n + k)] = z^k Σ_{n=0}^∞ h(n) z^{-n} - z^k Σ_{n=0}^{k-1} h(n) z^{-n} = z^k H(z) - z^k Σ_{n=0}^{k-1} h(n) z^{-n}    (A4)

From (A4) we get Z[h(n + 1)] = zH(z) - zh(0) and Z[h(n + 2)] = z^2 H(z) - z^2 h(0) - zh(1). We can see that the initial conditions h(0), h(1), etc. are carried over into the z-domain whenever the argument of h(n) is incremented. An equivalent situation is found in the Laplace transform whenever derivatives of the continuous function are present. It is apparent that a difference equation containing h(n), h(n + 1), etc. will transform into the z-domain as a polynomial in powers of z, and solving for H(z) is reduced to algebra.

Often the transform of the argument n itself is needed. Let us apply (A1) to it and then express the summand in terms of a derivative with respect to z as shown next

Z[n] = Σ_{n=0}^∞ n z^{-n} = -z (d/dz) Σ_{n=0}^∞ z^{-n}

Using (A3) with the constant being one, we get for the transform of the timing argument n the following expression

Z[n] = -z (d/dz)[z/(z - 1)] = z/(z - 1)^2

This and some other transforms are summarized in Table 1.

Table 1 Some Z-Transforms

Z[h(n)] = H(z) = Σ_{n=0}^∞ h(n) z^{-n}
Z[h(n + 1)] = zH(z) - zh(0)
Z[h(n + 2)] = z^2 H(z) - z^2 h(0) - zh(1)
Z[c] = cz/(z - 1)
Z[n] = z/(z - 1)^2
Z[n h(n)] = -z dH(z)/dz
Z[n^2] = z(z + 1)/(z - 1)^3
Z[a^n] = z/(z - a)
Z[1/n!] = e^{1/z}
Z[e^{an}] = z/(z - e^a)
Z[sin βn] = z sin β/(z^2 - 2z cos β + 1)

Let us apply the acquired knowledge to a trivial example. We have invested an amount of money at 5% annual interest. We want to know how much money we will have after n years. In 1 year this is h(n + 1) = h(n) · 1.05. Using (A4) we can transform this into zH(z) - zh(0) = H(z) · 1.05. Solving for H(z) we get H(z) = zh(0)/(z - 1.05). While H(z) does carry the answer to our original question, it happens to be in the wrong domain, that is, in the z-domain. We must transform it back into the n-domain, and how to do that we will learn in the next section.
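The entries of Table 1 and the transform of the investment example lend themselves to a quick numerical probe: for a value of z outside the region of convergence, a partial sum of the defining series (A1) should approach the closed-form transform. The following sketch (Python; the function name, the choice z = 2 and the tolerances are ours, not the paper's) checks Z[n], Z[1/n!], and the transform zh(0)/(z - 1.05) of the compound-interest sequence h(n) = h(0) · 1.05^n that the inversion in the next section produces.

```python
# Partial-sum check of the defining series (A1) against closed-form
# transforms from Table 1. Illustrative sketch; names are our own.
import math

def z_transform_partial(h, z, terms=300):
    """Partial sum of H(z) = sum_{n>=0} h(n) * z**(-n)."""
    return sum(h(n) * z**(-n) for n in range(terms))

z = 2.0

# Z[n] = z/(z - 1)^2
assert abs(z_transform_partial(lambda n: n, z) - z/(z - 1)**2) < 1e-9

# Z[1/n!] = e^(1/z)
assert abs(z_transform_partial(lambda n: 1/math.factorial(n), z)
           - math.exp(1/z)) < 1e-12

# Investment example: h(n) = h(0)*1.05^n should transform to z*h(0)/(z - 1.05)
h0 = 100.0
assert abs(z_transform_partial(lambda n: h0*1.05**n, z)
           - z*h0/(z - 1.05)) < 1e-6
```

Note that |z| must exceed the largest pole magnitude (1.05 for the investment sequence) for the series to converge; z = 2 is a convenient such point.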

The Inverse Z-Transformation

We will use the notation h(n) = Z^{-1}[H(z)] to indicate the inverse transformation. We have shown in (2) that the following integral accomplishes this procedure

h(n) = (1/2πj) ∮ H(z) z^{n-1} dz    (A5)

A powerful method for evaluation of integrals of analytic functions is Cauchy's residue theorem [5]. If F(z) happens to be analytic, its integral is found as

(1/2πj) ∮ F(z) dz = Σ Res[F(z)]    (A6)

Res stands for the residuum at a pole of the function F(z). This is defined for the kth order pole at z = z1 as

Res_{z1,k}[F(z)] = [1/(k - 1)!] (d^{k-1}/dz^{k-1})[F(z)(z - z1)^k] |_{z=z1}    (A7)

Using (A6) we can then rewrite (A5) as

h(n) = Σ Res[H(z) z^{n-1}]    (A8)

Expression (A8) in conjunction with (A7) provides a relatively simple inversion method for most practical problems in the natural sciences, statistics and finance. Expression (A7) is by far simpler to use than it may appear at first glance. In Table 2 we see some of the results arising from it.

Table 2 Some Inverse Z-Transforms

Z^{-1}[H(z)] = h(n) = (1/2πj) ∮ H(z) z^{n-1} dz
Z^{-1}[1/(z - z1)] = z1^{n-1}
Z^{-1}[1/((z - z1)(z - z2))] = (z1^{n-1} - z2^{n-1})/(z1 - z2)
Z^{-1}[1/(z - z1)^2] = (n - 1) z1^{n-2}
Z^{-1}[1/(z - z1)^3] = (n - 1)(n - 2) z1^{n-3}/2

A trivial example of its application is our problem, which we left hanging in the previous section. In order to invert H(z) = zh(0)/(z - 1.05), we must first find the residuum of R(z) = H(z) z^{n-1} at the pole located at z1 = 1.05. It happens to be a single pole, thus k = 1. Using these values the residuum is Res_{1.05,1}[H(z) z^{n-1}] |_{z=1.05} = h(0) · 1.05^n. Here h(0) is obviously our initial investment in year 0. Some more serious problems are addressed in the body of this paper.

The Discrete Convolution

Mathematically, the discrete convolution of two functions of n in the n-domain is equivalent to a product of their transforms in the z-domain. Thus the inverse transform of a product of transforms must be their convolution. Those acquainted with the Laplace or Fourier transforms will find this very familiar. But there is more to convolution than this fundamental definition. It allows one to carry over into the z-domain functions of n which either cannot be transformed or are not fully defined yet. Later they can be brought back into the n-domain and incorporated into the inverted solution. This is a powerful technique that cannot be overestimated and that we will employ in our examples.

It is common to use an asterisk (*) to denote the convolution, so we can symbolize our statement as Z^{-1}[H(z)W(z)] = h(n) * w(n). Apply the definition of the Z-transform from (1) to each of the two factors in the above expression and then substitute j = k - n

Z^{-1}[Σ_{n=0}^∞ h(n) z^{-n} Σ_{j=0}^∞ w(j) z^{-j}] = Z^{-1}[Σ_{n=0}^∞ h(n) z^{-n} Σ_{k=n}^∞ w(k - n) z^{-k} z^n]

To the inverse transformation, functions of n are just constants, and after we cancel out the two n powers of z we can position the h(n) term outside the transform operator. Then we have

h(n) * w(n) = Σ_{n=0}^∞ h(n) Z^{-1}[Σ_{k=n}^∞ w(k - n) z^{-k}]

According to (A2), w(k - n) vanishes for n > k. Therefore we can freely set the lower limit of the second sum to k = 0. Then that sum becomes a plain Z-transform of w(k - n). Its inverse is then obviously the function w(k - n) itself. Consequently we end up with the answer

h(n) * w(n) = Σ_{n=0}^∞ h(n) w(k - n)    (A9)

In Table 3 we have summarized some useful inversion formulas employing discrete convolution. Particular attention was paid to the power of convolution when faced with having to transform an as yet undefined function w(n).

Table 3 Discrete Convolution

h(n) * w(n) = Σ_{n=0}^∞ h(n) w(k - n)
Z[h(n) * w(n)] = H(z)W(z)
Z^{-1}[H(z)W(z)] = h(n) * w(n)
Z^{-1}[Z[w(n)]/(z - z1)] = w(n) * z1^{n-1}
Z^{-1}[Z[w(n)]/((z - z1)(z - z2))] = w(n) * (z1^{n-1} - z2^{n-1})/(z1 - z2)
Z^{-1}[Z[w(n)]/(z - z1)^2] = w(n) * (n - 1) z1^{n-2}
Z^{-1}[Z[w(n)]/(z - z1)^3] = w(n) * (n - 1)(n - 2) z1^{n-3}/2

With these preparations we are now ready to apply the Z-transform techniques to solve an all-important difference equation that we will encounter in all our examples.

Generic Solution of the Second Order Difference Equation

The equation we wish to solve for h(n) is

h(n + 2) - 2h(n + 1)A + h(n)B = f(n)    (A10)

where f(n) is an arbitrary function of n. We employ (A4) to perform the transformation of h(n) into H(z) such that Z[h(n)] = H(z). The relevant recipes are found in Table 1

z^2 H(z) - h(0)z^2 - h(1)z - 2H(z)zA + 2h(0)zA + H(z)B = Z[f(n)].

Extract H(z)

H(z) = [h(0)z^2 + h(1)z - 2h(0)zA]/(z^2 - 2zA + B) + Z[f(n)]/(z^2 - 2zA + B)    (A11)

The two poles of this equation are located at

z1,2 = A ± √(A^2 - B)    (A12)

The inverse transformation of (A11) is critically dependent on the nature of the quantities A and B, and we will treat the three possible cases separately.

Case of Real Unequal Poles A^2 > B

Equation (A8) explains the first two terms in the next expression

h(n) = H(z)(z - z1) z^{n-1} |_{z=z1} + H(z)(z - z2) z^{n-1} |_{z=z2} + Z^{-1}[Z[f(n)]] * Z^{-1}[1/((z - z1)(z - z2))]

In the third term the asterisk denotes the convolution operator defined and derived in (A9). The inverse transform of the last expression is found in Table 2 in the form (z1^{n-1} - z2^{n-1})/(z1 - z2), and the tentative solution of our Equation (A10) is then

h(n) = h(0) (z1^{n+1} - z2^{n+1})/(z1 - z2) + [h(1) - 2h(0)A] (z1^n - z2^n)/(z1 - z2) + f(n) * (z1^{n-1} - z2^{n-1})/(z1 - z2)    (A13)

The pole locations z1 and z2 are given in (A12). No further simplifications, other than binomial expansion of the exponents, can be devised for this case.

Case of Imaginary Poles A^2 < B

Expression (A13) applies to this case also except for the value of the poles, which are now complex, located at

z1,2 = A ± j√(B - A^2) = √B e^{±j tan^{-1}[√(B - A^2)/A]} = √B e^{±jβ}    (A14)

Equation (A13) relies on the following expressions

z1^{n+1} = z1 z1^n = [A + j√(B - A^2)] B^{n/2} e^{jβn}

z1^{n+1} - z2^{n+1} = A B^{n/2} (e^{jβn} - e^{-jβn}) + j√(B - A^2) B^{n/2} (e^{jβn} + e^{-jβn})

(z1^{n+1} - z2^{n+1})/(z1 - z2) = [A/√(B - A^2)] B^{n/2} sin βn + B^{n/2} cos βn

(z1^n - z2^n)/(z1 - z2) = B^{n/2} sin βn/√(B - A^2)

Starting from the setup z1^{n-1} = z1^n/z1 = B^{n/2} e^{jβn}/[A + j√(B - A^2)] we obtain after some elementary algebraic gymnastics

(z1^{n-1} - z2^{n-1})/(z1 - z2) = [A B^{n/2-1}/√(B - A^2)] sin βn - B^{n/2-1} cos βn

When the above quantities are substituted into expression (A13) one gets

h(n) = {[h(1) - h(0)A]/√(B - A^2)} B^{n/2} sin βn + h(0) B^{n/2} cos βn + f(n) * {[A B^{n/2-1}/√(B - A^2)] sin βn - B^{n/2-1} cos βn}    (A15)

Case of Identical Poles A^2 = B

In this case z1 = z2 = A and expression (A11) becomes

H(z) = [h(0)z^2 + h(1)z - 2h(0)zA]/(z - A)^2 + Z[f(n)]/(z - A)^2 = H1(z) + H2(z)

We will invert each term separately and start with H1(z), which exhibits a double pole at A and according to (A6) transforms into

Z^{-1}[H1(z)] = (d/dz)[H1(z)(z - A)^2 z^{n-1}] |_{z=A} = h(0) A^n (1 - n) + n h(1) A^{n-1}

The second term H2(z) retransforms into a convolution of f(n) with the inverse transform of 1/(z - A)^2, which is easily evaluated or read from Table 2 as (n - 1) A^{n-2}. Hence Z^{-1}[H2(z)] = f(n) * (n - 1) A^{n-2}. The final solution of our Equation (A10) for this particular case of B = A^2 is now the sum of the two partial answers

h(n) = h(0)(1 - n) A^n + n h(1) A^{n-1} + f(n) * (n - 1) A^{n-2}    (A16)

REFERENCES

[1] F. B. Hildebrand, Introduction to numerical analysis, 2nd edition, Dover Publications, Inc., New York, 1956, p 130.
[2] Z. Fazarinc, Minicomputer as a circuit-design tool, Simulation Councils, February 1972, pp 47-56.
[3] Z. Fazarinc, Discretization of partial differential equations for computer evaluation, Comput Appl Eng Educ 1 (1992/1993), 737-85.
[4] E. Artin, The gamma function, Holt, Rinehart and Winston, New York, 1964.
[5] P. Staric and E. Margan, Wideband amplifiers, Springer, 2006.
[6] E. I. Jury, Sampled-data control systems, John Wiley & Sons, New York, 1958.
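In the spirit of algorithm testing that the article advocates, the generic closed forms (A13) and (A16) can be cross-checked against direct iteration of (A10). The sketch below (Python; the helper names and the test values of A, B, h(0), h(1) and f(n) are our own choices, not the paper's) implements the convolution (A9) with causal kernels: the Table 2 inverses are taken as zero when their argument is below one, since the series expansion of 1/(z - z1) begins at z^{-1}.

```python
# Cross-check of the closed forms (A13) and (A16) against direct iteration
# of (A10): h(n+2) - 2*A*h(n+1) + B*h(n) = f(n). Helper names are our own.
import math

def iterate(A, B, h0, h1, f, N):
    """Direct iteration of h(n+2) = 2*A*h(n+1) - B*h(n) + f(n)."""
    h = [h0, h1]
    for n in range(N - 2):
        h.append(2*A*h[-1] - B*h[-2] + f(n))
    return h

def closed_unequal(A, B, h0, h1, f, n):
    """(A13), real unequal poles (A*A > B)."""
    r = math.sqrt(A*A - B)
    z1, z2 = A + r, A - r
    q = lambda k: (z1**k - z2**k) / (z1 - z2)
    # Convolution per (A9); the kernel q(j-1) vanishes for j <= 1 (causality).
    conv = sum(f(m) * q(n - m - 1) for m in range(n))
    return h0*q(n + 1) + (h1 - 2*h0*A)*q(n) + conv

def closed_identical(A, h0, h1, f, n):
    """(A16), identical poles (B = A*A)."""
    conv = sum(f(m) * (n - m - 1) * A**(n - m - 2) for m in range(n))
    return h0*(1 - n)*A**n + n*h1*A**(n - 1) + conv

f = lambda n: 1.0                        # an arbitrary driving term
h = iterate(1.5, 0.5, 1.0, 2.0, f, 12)   # poles at 1.5 +/- sqrt(1.75)
assert all(abs(h[n] - closed_unequal(1.5, 0.5, 1.0, 2.0, f, n))
           < 1e-8 * max(1.0, abs(h[n])) for n in range(12))

h = iterate(0.9, 0.81, 1.0, 2.0, f, 12)  # double pole at A = 0.9
assert all(abs(h[n] - closed_identical(0.9, 1.0, 2.0, f, n)) < 1e-9
           for n in range(12))
```

The complex-pole form (A15) could be checked the same way; it is algebraically identical to (A13) evaluated with complex z1 and z2.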

BIOGRAPHY

Zvonko Fazarinc received his PhD in electrical


engineering from Stanford University in 1965.
He held a number of management positions at
Hewlett-Packard Laboratories in Palo Alto and
was a consulting professor of EE at Stanford
University. His professional activities involved
the fields of measurements, communications and
computation with a focus on software engineer-
ing. His interest in discrete system analysis led
him into exploration of the computer potential for
science education. He has lectured at more than 100 universities throughout
the world on this topic and has organized some of them into consortia. He
continues his critical evaluation of the classical approaches to teaching in
view of the modern computer technology.
