
An Introduction to Numerical Mechanics

By

J. Wyldmann
The Infinite Series

We will begin our introduction to numerical mechanics with the concept of an infinite series, using the standard 'sigma' notation. Basic algebraic principles allow us to compute the final value of certain infinite series. Working through a few series representations will help you develop a sense of how these numerical objects behave.


$$
\begin{aligned}
Q &= \frac{1}{q} + \frac{1}{q^2} + \frac{1}{q^3} + \dots = \sum_{n=1}^{\infty} \frac{1}{q^n} \\
qQ &= 1 + \frac{1}{q} + \frac{1}{q^2} + \dots \\
qQ &= 1 + Q \qquad (1) \\
(q-1)Q &= 1 \\
Q &= \frac{1}{q-1}, \quad \text{for } q > 1
\end{aligned}
$$

Here, the infinite series began with index 1. In the second example, we show that a similar series starting from index 0 can also be deduced.


$$
\begin{aligned}
Q &= 1 + \frac{1}{q} + \frac{1}{q^2} + \dots = \sum_{n=0}^{\infty} \frac{1}{q^n} \\
qQ &= q + 1 + \frac{1}{q} + \dots \\
qQ &= q + Q \qquad (2) \\
(q-1)Q &= q \\
Q &= \frac{q}{q-1}, \quad \text{for } q > 1
\end{aligned}
$$
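As a quick numerical sanity check of equations (1) and (2) (a minimal sketch of ours, not part of the derivation; the sample value q = 3 and the term count are arbitrary choices), we can sum a finite number of terms and compare against the closed forms:

```python
# Partial sums of the geometric series versus the closed forms in (1) and (2).
q = 3.0
N = 60  # enough terms for double precision to settle

sum_from_1 = sum(1.0 / q**n for n in range(1, N + 1))   # equation (1)
sum_from_0 = sum(1.0 / q**n for n in range(0, N + 1))   # equation (2)

print(sum_from_1, 1.0 / (q - 1.0))   # both ~0.5
print(sum_from_0, q / (q - 1.0))     # both ~1.5
```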

As an amusing example of substitution, we can construct a new series out of the result in equation (2). First we provide the correct substitution. Then we rearrange the result so that its notation is as compact as possible.

$$
\begin{aligned}
q &= \frac{1}{xy} \\
Q &= \frac{q}{q-1} = \frac{\tfrac{1}{xy}}{\tfrac{1}{xy} - 1} \cdot \frac{xy}{xy} = \frac{1}{1-xy} \qquad (3) \\
\frac{1}{1-xy} &= \sum_{n=0}^{\infty} (xy)^n, \quad \text{for } 0 < xy < 1 \ \text{(i.e. } q > 1\text{)}
\end{aligned}
$$

Again, we arrive at an infinite series. This one happens to have a two-dimensional input, whereas the previous examples had one-dimensional inputs only.

For our final series, we first need to introduce a concept from complex arithmetic. Complex arithmetic is a two-dimensional form of mathematics. The underlying principle employed is the multiplicative behavior of the complex unit:

$$
i^2 = i \cdot i = -1 \qquad (4)
$$

We can now use this rule to develop a series. It exploits the fact that repeated squaring results in a flipping of sign. We will start by using the above rule to change the form of the function.


$$
\begin{aligned}
\frac{1}{1+x^2} &= \frac{1}{1-(ix)^2} = \sum_{n=0}^{\infty} \left[ (ix)^2 \right]^n \\
\sum_{n=0}^{\infty} \left[ (ix)^2 \right]^n &= \sum_{n=0}^{\infty} (ix)^{2n} = \sum_{n=0}^{\infty} (-1)^n x^{2n} \qquad (5) \\
\frac{1}{1+x^2} &= 1 - x^2 + x^4 - \dots
\end{aligned}
$$
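A similar sketch (our own check; the sample point x = 0.5 is chosen inside the region of convergence |x| < 1) confirms that the alternating series reproduces 1/(1 + x²):

```python
# Partial sum of 1 - x^2 + x^4 - ... versus 1/(1 + x^2) at a sample point.
x = 0.5
partial = sum((-1)**n * x**(2 * n) for n in range(40))
print(partial, 1.0 / (1.0 + x**2))   # both ~0.8
```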
As the complex form mimics our previous result, we were able to use the correct substitution rule and reverse-engineer the starting series. It should be pointed out that this function can be plotted as a graph. The mathematical standard is that your coordinates are given by (x, y) = (x, f(x)). The function input is the horizontal measure from your specified zero point; the vertical measure is the output of your function.
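For readers who want to see the graph, here is a minimal plotting sketch (it assumes the numpy and matplotlib packages are available; the plotting range is restricted to |x| < 1 so the truncated series stays close to the function):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-0.9, 0.9, 200)                         # horizontal measure (input)
f = 1.0 / (1.0 + x**2)                                  # vertical measure (output)
series = sum((-1)**n * x**(2 * n) for n in range(10))   # 10-term truncation of (5)

plt.plot(x, f, label="1/(1+x^2)")
plt.plot(x, series, "--", label="10-term series")
plt.xlabel("x")
plt.ylabel("f(x)")
plt.legend()
plt.show()
```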

The Smooth Derivative

Say we wanted to measure the algebraic slope of a smooth polynomial function. This is the local rise divided by the local run. The notation for the slope measure is as follows:

$$
\frac{\partial}{\partial x} f(x) = \lim_{h \to 0} \left( \frac{f(x+h) - f(x)}{h} \right) \qquad (6)
$$

This is valid for any and all functions that are smooth. Note that we are ignoring any and all non-smooth functions (read: statistical scatter).

Figure 1: Diagram of the slope calculation used in the classic smooth derivative.

The problem is that we need to algebraically resolve the argument on the right-hand side. Sometimes this is not simple. Fortunately, the basic polynomial provides an algebraic solution. It can be found as follows:

$$
\begin{aligned}
f(x) &= ax \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( \frac{a(x+h) - ax}{h} \right) = \lim_{h \to 0} \left( \frac{ax - ax + ah}{h} \right) = a \qquad (7)
\end{aligned}
$$

To develop a numerical pattern, we will repeat this process twice more. The second function is a basic
quadratic.

$$
\begin{aligned}
f(x) &= ax^2 \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( \frac{a(x+h)^2 - ax^2}{h} \right) \\
&= \lim_{h \to 0} \left( \frac{a(x^2 + 2xh + h^2) - ax^2}{h} \right) = \lim_{h \to 0} \left( \frac{2axh + ah^2}{h} \right) \qquad (8) \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( 2ax + h \right) = 2ax
\end{aligned}
$$

And the final step is for the cubic function below.

$$
\begin{aligned}
f(x) &= ax^3 \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( \frac{a(x+h)^3 - ax^3}{h} \right) \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( \frac{a(x^3 + 3x^2 h + 3xh^2 + h^3) - ax^3}{h} \right) \qquad (9) \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( \frac{3ax^2 h + 3axh^2 + ah^3}{h} \right) \\
\frac{\partial}{\partial x} f(x) &= \lim_{h \to 0} \left( 3ax^2 + 3axh + ah^2 \right) = 3ax^2
\end{aligned}
$$

At this point, we note that the following generalized numerical pattern has been produced:

$$
\frac{\partial}{\partial x} x^n = n x^{(n-1)} \qquad (10)
$$
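Equation (10) can be spot-checked numerically with a small but finite step h, a sketch of the limit rather than the limit itself (the step size, sample point, and helper name below are our own choices):

```python
# Forward-difference approximation of the derivative, compared with n*x**(n-1).
def slope(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

x = 2.0
for n in range(1, 6):
    numeric = slope(lambda t: t**n, x)
    exact = n * x**(n - 1)
    print(n, numeric, exact)   # the two columns agree to several digits
```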

This is the rule we will use to check the functions we develop later in this paper. This rule is always considered true for a smooth function, and random functions all carry smooth measures. One example is the running average.

Multi-function Derivatives

Having defined a rule for the derivative of a basic polynomial, we will now develop derivative rules for more complex functions. We will develop a rule for a two-function product, a two-function quotient, and a two-function nest. We will then point out that these rules can be combined mechanically to handle more complex functions that carry important properties.

First we will consider the derivative of the product of two independent functions taking the same input variable. To keep it simple, we will use the standard powers shown above, as these derivatives take little space to compute. The correct rule will simply be displayed, and a test problem will verify that it duplicates an obvious, known result.

$$
\begin{aligned}
\frac{\partial}{\partial x} \bigl( f_A(x) \cdot f_B(x) \bigr) &= \frac{\partial}{\partial x}(f_A) \cdot f_B + f_A \cdot \frac{\partial}{\partial x}(f_B) \\
f(x) &= 6x^7 \\
\frac{\partial}{\partial x} f(x) &= 6 \cdot 7x^6 = 42x^6 \\
f(x) &= f_1(x) \cdot f_2(x) = (2x^4)(3x^3) \qquad (11) \\
\frac{\partial}{\partial x} (f_1 f_2) &= \frac{\partial}{\partial x}(2x^4) \, 3x^3 + 2x^4 \, \frac{\partial}{\partial x}(3x^3) \\
\frac{\partial}{\partial x} (f_1 f_2) &= 8x^3 \cdot 3x^3 + 2x^4 \cdot 9x^2 = 24x^6 + 18x^6 = 42x^6
\end{aligned}
$$
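The worked example in equation (11) can be verified the same way; this is a finite-step sketch, not an exact computation:

```python
# Numerical slope of the product (2x^4)(3x^3) versus the product-rule answer 42x^6.
def slope(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

x = 1.5
product = lambda t: (2 * t**4) * (3 * t**3)
print(slope(product, x), 42 * x**6)   # agree to several digits
```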

We see that the derivative of the product of two functions is the derivative of the first times the second, plus the first times the derivative of the second. The next rule we will look at is the derivative of a quotient. This is the derivative of one function divided by a second function, each taking the same input variable.

$$
\begin{aligned}
\frac{\partial}{\partial x} \left( \frac{f_{UP}(x)}{f_{DN}(x)} \right) &= \frac{\frac{\partial}{\partial x}(f_{UP}) \cdot f_{DN} - f_{UP} \cdot \frac{\partial}{\partial x}(f_{DN})}{f_{DN}^2} \\
f(x) = 6x^7 &= \frac{f_1(x)}{f_2(x)} = \frac{12x^{10}}{2x^3} \qquad (12) \\
\frac{\partial}{\partial x} \left( \frac{f_1}{f_2} \right) &= \frac{120x^9 \cdot 2x^3 - 12x^{10} \cdot 6x^2}{4x^6} \\
\frac{\partial}{\partial x} \left( \frac{f_1}{f_2} \right) &= \frac{168x^{12}}{4x^6} = 42x^6
\end{aligned}
$$

The third function we will define a derivative rule for is a nested function. This is a function whose input is the output of another function. We will again use a simple nesting of powers. The algebraic presentation of the process is often referred to as the 'chain rule'.

$$
\begin{aligned}
f(x) &= 6 f_A\bigl( f_B(x) \bigr) \\
f_A(x) &= x^{7/4}, \quad f_B(x) = x^4 \\
f(x) &= 6 (x^4)^{7/4} = 6x^7 \\
\frac{\partial f}{\partial x} &= 6 \, \frac{\partial f_A}{\partial f_B} \cdot \frac{\partial f_B}{\partial x} = 6 \cdot \frac{7}{4} (x^4)^{3/4} \cdot 4x^3 \qquad (13) \\
\frac{\partial f}{\partial x} &= 42 \, x^3 x^3 = 42x^6
\end{aligned}
$$
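The quotient and nested examples in equations (12) and (13) pass the same finite-step spot-check (again a sketch, with a step size of our choosing):

```python
# Numerical slopes of the quotient (12) and nested (13) examples versus 42x^6.
def slope(f, x, h=1e-6):
    return (f(x + h) - f(x)) / h

x = 1.2
quotient = lambda t: (12 * t**10) / (2 * t**3)     # equation (12)
nested = lambda t: 6 * (t**4) ** (7.0 / 4.0)       # equation (13)
print(slope(quotient, x), 42 * x**6)
print(slope(nested, x), 42 * x**6)
```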

These three rules successfully reproduce the simple power rule defined prior, and are valid for any and all functions of a similar structure. Note that we have not presented the solution for the derivative of the sum of two functions. The answer can be derived by combining equations (7) and (8), and the result is the sum of the derivatives of the two functions. This is left as an example problem for the reader.

The Infinite Polynomial

Now we will take a look at the infinite polynomial. We can play two games with this function and show that they directly correlate, term by term. This allows us to draw an abstract equivalence that will be useful for defining more complex numerical phenomena later in this paper. We begin with the function and its output for a two-component, stepped input (x + h).

$$
\begin{aligned}
f(x) &= ax + bx^2 + cx^3 + dx^4 + \dots \\
f(x+h) &= a(x+h) + b(x+h)^2 + c(x+h)^3 + \dots \qquad (14) \\
f(x+h) &= ax + ah + bx^2 + 2bxh + bh^2 + cx^3 + 3cx^2 h + 3cxh^2 + ch^3 + \dots
\end{aligned}
$$

Next we will introduce a completely different representation of the same series. This representation is built by taking an infinite differential expansion of our function. It can simply be observed that the series produced are identical, and this is technically numerically legal. The mathematicians who are given historical credit for this expansion are Maclaurin and Taylor; it is often referred to as the Maclaurin or Taylor series or expansion. To set up the numerical contraption, we first define the factorial function as it applies to integer inputs. Then we construct a unique expansion of the desired function about a point.

$$
\begin{aligned}
&\text{given } N = [0, 1, 2, 3, \dots] \\
&\text{for } n \in N; \quad n! = n \cdot (n-1) \cdot (n-2) \cdots 3 \cdot 2 \cdot 1 \\
&1! = 1; \quad 0! = 1; \quad \frac{h^0}{0!} = 1 \\
&\frac{\partial f}{\partial x} = f'(x) = a + 2bx + 3cx^2 + 4dx^3 + \dots \\
&f(x+h) = f(x) + \frac{h}{1!} f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \dots \qquad (15) \\
&f(x+h) =
\begin{array}{llll}
ax & {}+ bx^2 & {}+ cx^3 & {}+ \dots \\
{}+ ah & {}+ 2bxh & {}+ 3cx^2 h & {}+ \dots \\
{}+ 0 & {}+ bh^2 & {}+ 3cxh^2 & {}+ \dots \\
{}+ 0 & {}+ 0 & {}+ ch^3 & {}+ \dots
\end{array}
\end{aligned}
$$
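As a concrete sketch of equation (15), the snippet below builds f(x + h) for a sample cubic both directly and from the derivative expansion; the coefficients a, b, c and the step h are arbitrary choices, and the expansion terminates because the fourth derivative of a cubic is zero:

```python
from math import factorial

# f(x) = a*x + b*x^2 + c*x^3 and its derivatives, written out by hand.
a, b, c = 1.0, -2.0, 0.5
f  = lambda x: a * x + b * x**2 + c * x**3
f1 = lambda x: a + 2 * b * x + 3 * c * x**2      # f'(x)
f2 = lambda x: 2 * b + 6 * c * x                 # f''(x)
f3 = lambda x: 6 * c                             # f'''(x)

x, h = 1.0, 0.3
taylor = f(x) + h * f1(x) + h**2 / factorial(2) * f2(x) + h**3 / factorial(3) * f3(x)
print(taylor, f(x + h))   # the two values agree (to rounding)
```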

It should be clear that this system will perfectly reconstruct the expanded series presented above. We will now set up a method where some initial conditions are used to construct a function. The function's final result will be in an infinite series format. We will use the method in equation (15) to establish this series.

Figure 2: Unit circle showing sin(x) and cos(x) with respect to x in radians.

The Functions: Sin(x) and Cos(x)

Consider the unit circle presented in Figure 2. For any and all positions along the arc, we have a horizontal and a vertical measure. In our system, we are using x as our variable for the angle in radians; the textbook standard is typically the Greek variable 'theta'. As such, x will start at the value 0 and approach the value 2π at the perfect circular close. Therefore, if the horizontal measure is cos(x) and the vertical measure is sin(x), we have a table of known values, presented below:

x        0      π/2     π      3π/2     2π
sin(x)   0      1       0      -1       0
cos(x)   1      0       -1     0        1

The next step in setting up sin(x) and cos(x) is to establish a 'derivative rule' for each function. Note that, at this point, we are playing an implicit game. If our game were explicit, we would be computing the derivatives exactly. The problem is that we have yet to define a function, and so cannot take the derivative. We will instead implicitly define a relationship. We can then get clever with equation (15) and produce the desired series. The derivative rule can later be used to verify that the implicit rule is actually being satisfied exactly. Now, let us set up a derivative rule for sin(x) and cos(x) to follow:

$$
\begin{aligned}
\frac{\partial}{\partial x} \sin(x) &= \cos(x) \\
\frac{\partial}{\partial x} \cos(x) &= -\sin(x) \qquad (16)
\end{aligned}
$$

Now we will illustrate the technique on sin(x). First we lay out equation (15). After computing the correct derivatives for each term, we then set x equal to zero. It can be noted that, for sin(x), half the terms drop out as zero. We then collect every term containing h and, being extremely lazy, relabel h as x. The final series for sin(x) falls out.

$$
\begin{aligned}
f(x+h) &= f(x) + \frac{h}{1!} f'(x) + \frac{h^2}{2!} f''(x) + \frac{h^3}{3!} f'''(x) + \dots \\
\sin(x+h) &= \sin(x) + h\cos(x) - \frac{h^2}{2}\sin(x) - \frac{h^3}{6}\cos(x) + \dots \\
x &\to 0 \\
\sin(h) &= 0 + h - 0 - \frac{h^3}{6} + 0 + \frac{h^5}{120} - \dots \qquad (17) \\
h &\to x \\
\sin(x) &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \dots
\end{aligned}
$$

The same procedure will be carried out on cos(x). To save space, we will only present two steps. It is left to the reader to verify that the derivative rule is now explicitly exact; once you can see the logic, this can be done visually.

$$
\begin{aligned}
\cos(x+h) &= \cos(x) - h\sin(x) - \frac{h^2}{2}\cos(x) + \frac{h^3}{6}\sin(x) + \dots \\
\cos(x) &= 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \dots \qquad (18)
\end{aligned}
$$
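The series in equations (17) and (18) can be checked against the library sine and cosine (the number of terms and the sample point are our choices; a dozen terms is plenty near x = 1):

```python
import math

def sin_series(x, terms=12):
    return sum((-1)**n * x**(2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cos_series(x, terms=12):
    return sum((-1)**n * x**(2 * n) / math.factorial(2 * n) for n in range(terms))

x = 1.1
print(sin_series(x), math.sin(x))   # agree to double precision
print(cos_series(x), math.cos(x))
```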

To close, we can now use our derivative rules to point out a fundamental relationship between sin(x), cos(x), and the value unity:

$$
\begin{aligned}
\sin(x)^2 + \cos(x)^2 &= 1 \\
\sin^2(x) + \cos^2(x) &= 1 \\
\frac{\partial}{\partial x} \bigl( \sin^2(x) + \cos^2(x) \bigr) &= \frac{\partial}{\partial x} 1 \qquad (19) \\
\frac{\partial}{\partial x} \sin^2(x) + \frac{\partial}{\partial x} \cos^2(x) &= 0 \\
2\sin(x)\cos(x) - 2\cos(x)\sin(x) &= 0 \\
0 &= 0 \dots \text{OK!}
\end{aligned}
$$
The first line is the theorem of Pythagoras. Its truth has been verified by the explicit exactness of the derivative. The result is zero, as it should be, and all is well with the logic at hand.

The Functions: Sinh(x) and Cosh(x)

We follow up with another pair of functions, which we will define in a similar, yet different, manner. Again, we have the same goal: peg the starting values at x equal to zero, set up a derivative system, and then perform the expansion/contraction to the final infinite series.

$$
\begin{aligned}
\sinh(0) &= 0 \\
\cosh(0) &= 1 \\
\frac{\partial}{\partial x} \sinh(x) &= \cosh(x) \qquad (20) \\
\frac{\partial}{\partial x} \cosh(x) &= \sinh(x)
\end{aligned}
$$

For brevity, we will express the expansion/contraction as simply as possible. It is recommended that the novice reader fully perform these by hand, to ensure the mechanics are being executed correctly.

$$
\begin{aligned}
\sinh(x+h) &= \sinh(x) + h\cosh(x) + \frac{h^2}{2!}\sinh(x) + \frac{h^3}{3!}\cosh(x) + \dots \\
\sinh(x) &= x + \frac{x^3}{3!} + \frac{x^5}{5!} + \dots \qquad (21) \\
\cosh(x) &= 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \dots
\end{aligned}
$$
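The same style of check works for the hyperbolic series in equation (21):

```python
import math

def sinh_series(x, terms=12):
    return sum(x**(2 * n + 1) / math.factorial(2 * n + 1) for n in range(terms))

def cosh_series(x, terms=12):
    return sum(x**(2 * n) / math.factorial(2 * n) for n in range(terms))

x = 1.1
print(sinh_series(x), math.sinh(x))   # agree to double precision
print(cosh_series(x), math.cosh(x))
```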

It can be observed visually that we are explicitly satisfying the derivative rule and the functions are correctly defined. As before, we will close with a unique relationship between sinh(x) and cosh(x).

$$
\begin{aligned}
\cosh^2(x) - \sinh^2(x) &= 1 \\
\frac{\partial}{\partial x} \cosh^2(x) - \frac{\partial}{\partial x} \sinh^2(x) &= \frac{\partial}{\partial x} 1 \qquad (22) \\
2\cosh(x)\sinh(x) - 2\sinh(x)\cosh(x) &= 0 \\
0 &= 0 \dots \text{OK!}
\end{aligned}
$$

This is the hyperbolic relationship: that the difference of squares is always equal to unity. This closes our development of sinh(x) and cosh(x). Our next step is to use these two functions to define two more functions, each with a very interesting property.

The Natural Exponent

One curious thing about the natural exponent we are going to define is the fact that we will never actually deal with the mechanics of taking an exponent. We are literally going to perform black magic and get away with it. It should be understood that an exponential function is a constant of some value (often referred to as C) raised to the power x. Humorously, we will be raising nothing to the power of x to define our function, yet the fact that the two are identical can be computed to the limit of your sanity. We begin with the root definition:

$$
\begin{aligned}
e^x &= \sinh(x) + \cosh(x) \\
\frac{\partial}{\partial x} e^x &= \frac{\partial}{\partial x} \bigl( \sinh(x) + \cosh(x) \bigr) \\
\frac{\partial}{\partial x} e^x &= \frac{\partial}{\partial x} \sinh(x) + \frac{\partial}{\partial x} \cosh(x) \qquad (23) \\
\frac{\partial}{\partial x} e^x &= \cosh(x) + \sinh(x) = e^x
\end{aligned}
$$

Notice one thing about the function we invented: it is equal to its own derivative. Sadly, the utility of this cannot be expressed fully in this paper. However, anyone who decides to pursue physics will end up relying on it like a crutch, as it appears in almost every function that has direct utility. Why is left for the curious to deduce. One interesting fact is that we already have an infinite series. From this, we can set x equal to 1 and literally compute the value of e to the limit of the machine. This is demonstrated below. The reader is free to take this to any integer power and compare it to the corresponding sum of sinh(x) and cosh(x); you will find they are always identical.

$$
\begin{aligned}
e^x &= 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \dots \\
e = e^1 &= 1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \dots \qquad (24) \\
e &= 2.7182818\dots
\end{aligned}
$$
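The sum in equation (24) is easy to carry out on a machine; a minimal sketch:

```python
import math

# e as a partial sum of 1/n!; twenty terms already exhaust double precision.
e_approx = sum(1.0 / math.factorial(n) for n in range(20))
print(e_approx)   # 2.718281828459045
print(math.e)     # library value, for comparison
```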

We will close our investigation of the natural exponent by computing the inverted power. The construction of this function is remarkably simple, and it is obvious that this function flips sign under differentiation.

$$
\begin{aligned}
e^{-x} &= \cosh(x) - \sinh(x) \\
e^{-x} &= 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \frac{x^4}{4!} - \dots \\
\frac{\partial}{\partial x} e^{-x} &= -e^{-x} \qquad (25) \\
\left( 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots \right) \left( 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!} + \dots \right) &= e^x \cdot e^{-x} = e^0 = 1
\end{aligned}
$$
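Numerically, multiplying the two truncated series gives a product that sits on unity (a sketch; the sample x and the term count are arbitrary):

```python
import math

def exp_series(x, terms=20):
    return sum(x**n / math.factorial(n) for n in range(terms))

x = 0.7
print(exp_series(x) * exp_series(-x))   # prints 1.0 (to rounding), i.e. e^x * e^-x
```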

The closing series product shows the value of compact notation. It is not obvious that this product will yield unity unless you know the functions these infinite series represent. In compact notation, it is obvious that adding the powers results in raising e to the value of zero. This can only produce unity, as that is the fundamental definition of exponentiation: I have multiplied 1 by the value e exactly 0 times.

Closing Argument

We have successfully defined a handful of the most important analytic functions: sin(x), cos(x), sinh(x), cosh(x), the natural exponent, and the negative natural exponent. In the next release we will begin to tackle the concept of inversion. In layman's terms, the inverse function undoes whatever its partner function has already done. Conversely, the function itself will undo the undoer as well. The difference is that you either take one step forward and then one step back, or one step backward and then one step forward; you end up moving nowhere. Another concept is the integral. If I can take a derivative, I should be able to integrate it back to its original state. Likewise, I can always undo an integration by taking a derivative. The problem is that, technically, the derivative is 'simple' and the integral is far from it. We will not be covering this topic in this section; integration itself will be covered in a later release. We can then use it to construct two functions: the natural logarithm and the arctangent.

The natural logarithm will always invert the natural exponent, and vice versa, but the natural logarithm involves an integral I can never take. The arctangent will pseudo-invert the tangent function (sin divided by cos, which is problematic because we repeatedly divide by zero) and vice versa, and the arctangent will offer an integral we can take. If the reader beats us to it, the more the merrier. Both of these functions will be defined mechanically, and the final step involves the potentially problematic integration.


Appendix

In this short appendix, we point out a technical detail about the complex domain that is left as an open puzzle. For the complex unit i, we can do something funny with a power. We can ruin it completely. This is basically a piece of mischief that a mathematician will have to iron out. Consider:

$$
i^{3.2} = \left( i^{1/10} \right)^{32} \neq \left( i^{32} \right)^{1/10} \qquad (\text{A1})
$$

This is presented in the Appendix because it is a dirty trick. Understand it and never abuse it, please.
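The mischief can be reproduced directly with complex arithmetic (this uses Python's principal-branch powers; the numbers are just the ones appearing in (A1)):

```python
# Principal-branch complex powers: the order of exponentiation matters.
i = complex(0, 1)
print((i ** (1 / 10)) ** 32)   # ~ (0.309 - 0.951j)
print((i ** 32) ** (1 / 10))   # (1+0j), since i**32 is exactly 1
print(i ** 3.2)                # matches the first line, not the second
```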
