Taylor Series
where 𝑓^(𝑘)(𝑎) is the 𝑘th derivative of 𝑓, evaluated at 𝑥 = 𝑎. We may truncate the
infinite series at the 𝑛th term and use the resulting 𝑛th degree polynomial 𝑝𝑛(𝑥) as
an approximation for the original function 𝑓(𝑥); i.e., 𝑝𝑛(𝑥) ≅ 𝑓(𝑥).
It can be shown that the truncation error, 𝑅𝑛 = 𝑓(𝑥) − 𝑝𝑛 (𝑥), which is the sum of
all series terms above the 𝑛th term, is given by:
𝑅𝑛 = [𝑓^(𝑛+1)(𝛾) / (𝑛 + 1)!] (𝑥 − 𝑎)^(𝑛+1)
where 𝛾 is an unknown value that lies between 𝑥 and 𝑎. Note that if |𝑥 − 𝑎| < 1,
then only a small number of series terms suffices to give a very accurate
approximation for 𝑓(𝑥) (note the factorial term in the denominator).
If the Taylor series is centered at zero (i.e., the expansion point 𝑎 is set to zero),
then that series is also called a Maclaurin series (or power series), named after the
Scottish mathematician Colin Maclaurin who made extensive use of this special
case of Taylor series in the 18th century. The power series is then,
𝑓(𝑥) = 𝑓(0) + (𝑥/1!) 𝑓^(1)(0) + (𝑥^2/2!) 𝑓^(2)(0) + ⋯ + (𝑥^𝑛/𝑛!) 𝑓^(𝑛)(0) + ⋯
It should be noted that the Maclaurin series (and, as a matter of fact, the Taylor
series centered at any point 𝑎) of a polynomial function is finite, and is identical to
the polynomial itself.
Example. Find and compare the Taylor series approximations of 𝑓(𝑥) = sin(𝑥)
at 𝑎 = 0.
[Figure: sin(𝑥) (red), 𝑝1(𝑥) (green), 𝑝3(𝑥) (magenta) and 𝑝5(𝑥) (blue). Note how
the higher-degree polynomial approximations are closer to sin(𝑥) over a wider
range of 𝑥.]
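These approximations are easy to reproduce numerically. The notes use Matlab; the following is a minimal Python sketch (the function name taylor_sin is ours) that evaluates the odd-degree Taylor polynomials of sin(𝑥) about 𝑎 = 0:

```python
import math

def taylor_sin(x, n):
    """Degree-n Taylor polynomial of sin(x) about a = 0:
    x - x^3/3! + x^5/5! - ... (only odd powers contribute)."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range((n + 1) // 2))

# The absolute error shrinks as the degree grows, and the higher-degree
# polynomials stay accurate over a wider range of x.
for x in (0.5, 1.5, 3.0):
    errs = [abs(taylor_sin(x, n) - math.sin(x)) for n in (1, 3, 5)]
    print(x, errs)
```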
Notes:
The power series can be used to generate infinite sums and their corresponding
values. For example, setting 𝑥 = 1 in the series for 𝑒^𝑥 leads to
𝑒 = 1 + 1 + 1/2! + 1/3! + ⋯, or 𝑒 = ∑_{𝑘=0}^{∞} 1/𝑘!.
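As a quick numerical check (a Python sketch; the notes use Matlab), the partial sums of this series reach 𝑒 very rapidly because of the factorial in the denominator:

```python
import math

# Partial sum of e = sum_{k=0}^inf 1/k!; 18 terms already reach
# double-precision accuracy.
partial = sum(1.0 / math.factorial(k) for k in range(18))
print(partial, math.e)
```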
The following Matlab function cos_approx accepts a scalar 𝑥 and a positive, even
integer 𝑚 and generates a table of values for the approximations of cos(𝑥) using
polynomials of varying order: 0, 2, 4, … and 𝑚. The function also computes the
absolute relative error. This is essentially programming Problem 7 in Lecture 6, but
with an alternative expression for the approximating sum.
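The Matlab listing itself is not reproduced in this excerpt. As a sketch of what such a function does, here is a hypothetical Python equivalent (the name cos_approx and the table layout follow the description above; the details of the original code are assumptions):

```python
import math

def cos_approx(x, m):
    """Table of Maclaurin approximations of cos(x) of degree 0, 2, 4, ..., m,
    with the absolute relative error of each one. A Python sketch of the
    Matlab function described in the text."""
    if m <= 0 or m % 2 != 0:
        raise ValueError("m must be a positive, even integer")
    exact = math.cos(x)
    rows, s = [], 0.0
    for k in range(m // 2 + 1):
        s += (-1)**k * x**(2*k) / math.factorial(2*k)  # add the degree-2k term
        rows.append((2*k, s, abs((exact - s) / exact)))
    return rows

for degree, approx, rel_err in cos_approx(1.0, 8):
    print(f"{degree:2d}  {approx:.15f}  {rel_err:.3e}")
```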
Example from Electric Circuits
The characteristic 𝑖 − 𝑣 relationship for a semiconductor diode can be expressed as
𝑖(𝑣) = 𝑎(𝑒^(𝑏𝑣) − 1)
where 𝑖 is the current through the diode and 𝑣 is the voltage drop. The above
equation is plotted in the following figure. The plot also depicts the graphical
solution for the current when the voltage applied across the diode is
𝑣(𝑡) = 𝐴 cos(𝜔𝑡).
Substituting the Maclaurin series for 𝑒^(𝑏𝑣) into the above equation gives
𝑖(𝑣) = 𝑎 [(1 + 𝑏𝑣 + (𝑏𝑣)^2/2 + (𝑏𝑣)^3/6 + ⋯ ) − 1]
When the voltage 𝑣(𝑡) has small values we may approximate the current as
𝑖(𝑣) ≅ 𝑎𝑏𝑣 + (𝑎𝑏^2/2) 𝑣^2
So, for the case 𝑣(𝑡) = 𝐴 cos(𝜔𝑡) with small 𝐴 we obtain the current
𝑖(𝑡) ≅ 𝑎𝑏𝐴 cos(𝜔𝑡) + (𝑎𝑏^2𝐴^2/2) cos^2(𝜔𝑡)
Now, we may (optionally) employ the trigonometric identity
cos^2(𝑥) = 1/2 + (1/2) cos(2𝑥)
to the above approximation and obtain the truncated (Fourier series) form
𝑖(𝑡) ≅ 𝑎𝑏^2𝐴^2/4 + 𝑎𝑏𝐴 cos(𝜔𝑡) + (𝑎𝑏^2𝐴^2/4) cos(2𝜔𝑡)
The following plot compares the exact current 𝑖(𝑡) = 0.1(𝑒^(cos(𝑡)) − 1) to its
approximation 𝑖(𝑡) ≅ 0.025 + 0.1 cos(𝑡) + 0.025 cos(2𝑡). We have assumed 𝑎 = 0.1,
𝑏 = 1, 𝐴 = 1 and 𝜔 = 1. [Your turn: repeat for 𝐴 = 0.5, 1.5.]
Your turn: Repeat the above example assuming a cubic expansion for 𝑖(𝑡).
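The comparison in the plot can be reproduced with a short script. A Python sketch (the notes use Matlab) for the stated parameter values:

```python
import math

a, b, A, w = 0.1, 1.0, 1.0, 1.0   # parameter values assumed in the text

def i_exact(t):
    """Exact diode current i(t) = a*(exp(b*A*cos(w*t)) - 1)."""
    return a * (math.exp(b * A * math.cos(w * t)) - 1.0)

def i_approx(t):
    """Truncated Fourier form: ab^2A^2/4 + abA cos(wt) + (ab^2A^2/4) cos(2wt)."""
    c = a * b**2 * A**2 / 4.0
    return c + a * b * A * math.cos(w * t) + c * math.cos(2 * w * t)

# Since A = 1 is not particularly small, the quadratic truncation leaves a
# visible (but modest) gap between the two curves.
for t in (0.0, math.pi / 2, math.pi):
    print(f"t = {t:.3f}: exact = {i_exact(t):.4f}, approx = {i_approx(t):.4f}")
```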
Evidence of Digital Machine Rounding Error
Let us consider a polynomial and see what happens when we apply Taylor series
expansion to it. Let 𝑓(𝑥) be simply equal to (𝑥 − 1)^7. If we expand this function,
we obtain a 7th degree polynomial:
Let us now apply Taylor series expansion at 𝑎 = 0, with ‘order’ set to 21:
The blue plot is for (𝑥 − 1)^7 and the green one is for its expanded version:
x^7 - 7*x^6 + 21*x^5 - 35*x^4 + 35*x^3 - 21*x^2 + 7*x - 1
For example, if 𝑥 = 1.01, then the two expressions will return, respectively:
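The effect is easy to demonstrate. In the following Python sketch, the factored form is essentially exact, while the expanded form loses most of its significant digits to cancellation between large terms of alternating sign:

```python
x = 1.01
factored = (x - 1)**7                       # ~1e-14, essentially exact
expanded = (x**7 - 7*x**6 + 21*x**5 - 35*x**4
            + 35*x**3 - 21*x**2 + 7*x - 1)  # same magnitude, few correct digits
print(factored)
print(expanded)
```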
Many microprocessors have an adder circuit (built out of logic gates) which
performs addition. They do not have additional logic circuits to perform the other
arithmetic operations (−, *, /) or elementary functions [such as sin(𝑥), cos(𝑥), 𝑒^𝑥,
ln(𝑥), etc.]. Subtraction is performed via addition after converting negative numbers
using a special binary code called two’s complement. Multiplication and division
can then be performed based on repeated addition and subtraction, respectively.
Exponentiation (say, 𝑥 4 ) can be performed as repeated multiplication (𝑥𝑥𝑥𝑥). But
how does a microprocessor compute a function such as 𝑓(𝑥 ) = √𝑥 or 𝑓(𝑥 ) =
sin(𝑥)?
One method is to use a program that calculates 𝑓(𝑥) using its polynomial
approximation (Taylor series). Notice that a polynomial can be evaluated at a given
point 𝑥 by performing multiplications, additions and subtractions. For example, the
evaluation of the polynomial 𝑥 2 − 2𝑥 + 1 at 𝑥 = 3 can be performed as 3 ∗ 3 – 2 ∗
3 + 1.
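In practice such evaluations are organized as Horner's scheme, which needs only 𝑛 multiplications and 𝑛 additions for a degree-𝑛 polynomial. A Python sketch:

```python
def horner(coeffs, x):
    """Evaluate a polynomial using only multiplications and additions.
    coeffs are ordered from the highest power down; e.g.
    x^2 - 2x + 1  ->  [1, -2, 1]."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

print(horner([1, -2, 1], 3))   # x^2 - 2x + 1 at x = 3 gives 4
```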
For 𝑓(𝑥) = sin(𝑥), if 𝑥 is close to 𝑎 (the point of expansion; set to zero here) then
the polynomial approximation sin(𝑥) ≅ 𝑝5(𝑥) = 𝑥 − 𝑥^3/3! + 𝑥^5/5! can be used
(remember that the angle is measured in radians):
sin(0.2) = 0.198669330795061
𝑝5(0.2) = 0.2 − 0.2^3/6 + 0.2^5/120 = 0.198669333333333 (8 decimal digit accuracy)
Similarly,
sin(1) = 0.841470984807897
𝑝5(1) = 1 − 1/6 + 1/120 = 0.841666666666667 (3 digit accuracy)
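Both comparisons are easy to verify numerically; a Python sketch:

```python
import math

def p5(x):
    # degree-5 Taylor polynomial of sin(x) about a = 0
    return x - x**3 / 6 + x**5 / 120

for x in (0.2, 1.0):
    print(f"sin({x}) = {math.sin(x):.15f}, p5({x}) = {p5(x):.15f}")
```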
For |𝑥 − 𝑎| > 1, more power series terms must be added in order for the polynomial
approximation for 𝑓(𝑥) to be accurate. In fact, we can utilize the Taylor series
truncation error, 𝑅𝑛 = 𝑓(𝑥 ) − 𝑝𝑛 (𝑥), introduced earlier:
𝑅𝑛 = [𝑓^(𝑛+1)(𝛾) / (𝑛 + 1)!] (𝑥 − 𝑎)^(𝑛+1)
to determine beforehand the polynomial order required to achieve a given desired
accuracy. Since every derivative of sin(𝑥) is a sinusoid, with values between −1
and 1 (i.e., −1 ≤ 𝑓^(𝑛+1)(𝛾) ≤ 1), the absolute residual error is bounded from
above by:
|𝑅𝑛| ≤ |𝑥 − 𝑎|^(𝑛+1) / (𝑛 + 1)!
For 𝑥 = 0.2 (and noting that 𝑝5 and 𝑝6 are identical for sin(𝑥), since the 𝑥^6 term
vanishes), this bound gives |𝑅6| ≤ 0.2^7/7! ≅ (2.54)10^(−9). This result is in line
with the absolute true truncation error of (2.54)10^(−9), computed earlier.
Similarly, for 𝑥 = 1, the absolute truncation error is bounded by 1/6! = (14)10^(−4),
and is consistent with the computed absolute true error of (1.96)10^(−4).
What is the smallest degree (𝑛) polynomial approximation of sin(𝜋/2) that leads to
an absolute true error |sin(𝜋/2) − 𝑝𝑛(𝜋/2)| < 10^(−6)? From the above truncation
error formula, we need 𝑛 that satisfies:
(𝜋/2)^(𝑛+1) / (𝑛 + 1)! < 10^(−6)
Here is a table for the value of the left-hand side of the inequality for various 𝑛
values. From this table, we find that 𝑛 should be equal to 11.

𝑛     (𝜋/2)^(𝑛+1)/(𝑛 + 1)!
5     0.020863480763353
7     9.192602748394263e−04
9     2.520204237306060e−05
11    4.710874778818170e−07
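The table entries can be regenerated with a few lines of Python (the notes use Matlab):

```python
import math

# Left-hand side of the inequality (pi/2)^(n+1)/(n+1)! for several n;
# the bound first drops below 1e-6 at n = 11.
x = math.pi / 2
bounds = {n: x**(n + 1) / math.factorial(n + 1) for n in (5, 7, 9, 11)}
for n, bound in bounds.items():
    print(f"{n:2d}  {bound:.15e}")
```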
Matlab verification (note: the argument 'order',10 leads to a 9-degree
polynomial):
Rearrangement of Formulas to Control Rounding Error
A significant loss of accuracy due to machine round-off error occurs when two
numbers of about the same value are subtracted. Therefore, before computing
formulas [mathematical expressions, 𝑦 = 𝑓(𝑥)] using a digital machine, the
formula should be rearranged to avoid severe cancellation (in some region of the
argument 𝑥). For example, for large values of 𝑥, the expression √(𝑥 + 1) − √𝑥 leads
to severe cancellation because √(𝑥 + 1) ≅ √𝑥. One trick from calculus is to
multiply and divide by the conjugate
(√(𝑥 + 1) − √𝑥) · (√(𝑥 + 1) + √𝑥) / (√(𝑥 + 1) + √𝑥)
which leads to the expression
1 / (√(𝑥 + 1) + √𝑥)
that has no cancellation and, therefore, leads to a more accurate numerical answer
(due to reduced round-off error). The following is Matlab verification for 𝑥 = 109 .
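The same verification in Python (the notes show Matlab output for 𝑥 = 10^9):

```python
import math

x = 1e9
naive  = math.sqrt(x + 1) - math.sqrt(x)          # subtracts nearly equal numbers
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))  # conjugate form: no cancellation
print(naive)
print(stable)   # ~1.58113883e-05, accurate to full precision
```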
Another example is the expression 𝑦 = 𝑒^𝑥 − 1, which suffers severe cancellation
for small values of 𝑥. Here, we may use the truncated power series
𝑒^𝑥 ≅ 1 + 𝑥 + 𝑥^2/2 + 𝑥^3/6
Therefore, we employ the expression 𝑦 = 𝑥(1 + 𝑥/2 + 𝑥^2/6) as opposed to the
original expression, 𝑦 = 𝑒^𝑥 − 1. The following is Matlab verification.
The reader might want to verify that including the 𝑥^4/24 term does not change the
numerical result. Doing so will also verify that 1.000000500000167e-06 is the
more accurate answer.
The improved accuracy in the above example is due to the fact that in the Taylor
representation/approximation there is no subtraction of close-valued terms. In the
original representation, 𝑒^𝑥 − 1, however, we subtract very close quantities (when
𝑥 is small).
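A Python check of the same rearrangement for 𝑥 = 10^(−6) (the notes show Matlab output); note that standard libraries also provide expm1 precisely for this situation:

```python
import math

x = 1e-6
naive  = math.exp(x) - 1.0                 # subtracts quantities close to 1
series = x * (1.0 + x / 2.0 + x**2 / 6.0)  # rearranged Taylor form
print(naive)
print(series)   # ~1.000000500000167e-06
# math.expm1 computes exp(x) - 1 accurately for small x and agrees with
# the rearranged series.
print(math.expm1(x))
```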
Your turn: Derive more (numerically) appropriate formulations for the following
expressions. Verify the reduction in round-off error using Matlab (employ
𝑥 = 10^(−9)).
a. sin(𝑥)/𝑥
b. ln(1 − 𝑥)/ln(1 + 𝑥)
Significance of the Expansion Point a
In the above examples that dealt with the Taylor series approximation of the sin(𝑥)
function, we chose to set the expansion point 𝑎 to zero. This generates a polynomial
that provides maximum accuracy for points 𝑥 in the proximity of zero. If we want
the approximation to work well close to 𝜋/2, then we should pick 𝑎 to be 𝜋/2, or
we keep 𝑎 = 0 and use a relatively high-degree polynomial approximation of
sin(𝑥). But the latter choice would slow down the computations and is prone to
numerical rounding error (as we saw earlier).
It would make sense to choose 𝑎 such that we strike a compromise: one polynomial
would provide acceptable approximation for 𝑥 between 0 and 𝜋/2. That suggests
generating the approximation polynomial from a Taylor series expansion halfway,
at 𝑎 = 𝜋/4, and picking a sufficiently large order 𝑛 so that the truncation error at
the end points is acceptable (say, 𝑛 > 11).
But how about the accuracy of the approximation for |𝑥| > 𝜋/2? Fortunately, in
this case, sin(𝑥) is periodic and has odd symmetry. Recall that sin(𝑥 + 2𝑘𝜋)
= sin(𝑥) for any integer 𝑘; also, sin(−𝑥) = −sin(𝑥) and sin(𝑥 ± 𝜋) = −sin(𝑥).
We may then take advantage of these properties so that, for any 𝑥, the
approximation will never be worse than that at 𝑥 = 0 or 𝑥 = 𝜋/2.
One can then write a Matlab function, call it sin_poly, which computes sin(𝑥)
via polynomial approximation. Here is an example that will help in developing such
a function.
Let us apply the above ideas to approximate sin(31𝜋/6). We first generate
(computed only once) a 13-degree polynomial from an “‘order’, 14” Taylor series
expansion at 𝑎 = 𝜋/4. Here is the Matlab-generated polynomial:
Next, we need to reduce the angle 31𝜋/6 to its equivalent positive, acute angle
(i.e., 0 < angle < 𝜋/2). We can accomplish that as follows: 31𝜋/6 = 𝜋/6 + 𝜋 + 4𝜋.
Therefore, sin(31𝜋/6) = sin(𝜋/6 + 𝜋) = −sin(𝜋/6). We can take advantage
of Matlab's command rem(x,2*pi), which returns the remainder (𝑟) of dividing 𝑥 by
2𝜋, and then set 𝑥 = 𝑟 − 𝜋 if 𝜋/2 < 𝑟 ≤ 3𝜋/2. On the other hand, if 𝑟 > 3𝜋/2
we set 𝑥 = 2𝜋 − 𝑟. In either case, the sine of the transformed angle must be
multiplied by −1. The following Matlab code illustrates the angle transformation
for 𝑥 = 31𝜋/6:
And we have to remember to multiply the above answer by −1, since sin(𝑥) =
−sin(𝑥 − 𝜋). Similarly, sin(−𝜋/3) = −sin(𝜋/3), and evaluating the approximation
polynomial at 𝜋/3 and multiplying the result by −1 gives:
A function is provided to you, sin_poly, which incorporates the above properties of
the sine function to compute sin(𝑥) for any real (scalar or vector) 𝑥. The following
is a version that generates a table (sin_poly_table).
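The sin_poly listing itself is Matlab code and is not reproduced in this excerpt. The following Python sketch implements the same idea (the coefficient construction and helper names are ours): range reduction via periodicity and odd symmetry, followed by a 13-degree Taylor polynomial centered at 𝜋/4:

```python
import math

# Taylor coefficients of sin about a = pi/4: f^(k)(pi/4) = sin(pi/4 + k*pi/2),
# since each differentiation of sin shifts its argument by pi/2.
A_PT = math.pi / 4
COEF = [math.sin(A_PT + k * math.pi / 2) / math.factorial(k) for k in range(14)]

def sin_poly(x):
    """Approximate sin(x) for any real x by reducing the angle to [0, pi/2]
    and evaluating the Taylor polynomial centered at pi/4 (Horner form)."""
    sign = 1.0
    r = math.fmod(x, 2 * math.pi)          # sin(x + 2k*pi) = sin(x)
    if r < 0:                              # sin(-x) = -sin(x)
        r, sign = -r, -sign
    if math.pi / 2 < r <= 3 * math.pi / 2:
        r, sign = r - math.pi, -sign       # sin(r) = -sin(r - pi)
    elif r > 3 * math.pi / 2:
        r, sign = 2 * math.pi - r, -sign   # sin(r) = -sin(2*pi - r)
    if r < 0:                              # fold a residual negative angle
        r, sign = -r, -sign
    t, p = r - A_PT, 0.0
    for c in reversed(COEF):               # Horner evaluation in t = r - pi/4
        p = p * t + c
    return sign * p

print(sin_poly(31 * math.pi / 6), math.sin(31 * math.pi / 6))  # both ~ -0.5
```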
Approximation of 𝝅: 𝜋 is most famous for representing the ratio of the
circumference 𝐶 to the diameter 𝑑 for any circle (𝜋 = 𝐶/𝑑). The approximation
𝜋 ≅ 3.16 has been known since the time of ancient Egypt, over 4000 years ago. Since
then, this number (and its approximations) has appeared in many different
mathematical contexts. The first theoretical calculation seems to have been carried
out by Archimedes (an ancient Greek mathematician, physicist, engineer, inventor
and astronomer, 287-212 BC). He obtained the interval approximation 223/71 <
𝜋 < 22/7 based on bounding the circumference of a unit-diameter circle between
96-sided inscribed and circumscribed polygons (see figure below).
Almost 2000 years later, John Machin (1680-1752), a professor
of astronomy at Gresham College, London, discovered the approximation
𝜋 ≅ 16 atan(1/5) − 4 atan(1/239)
which has the true error:
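(The Matlab output is not reproduced in this excerpt.) Evaluated in double precision, the formula reproduces 𝜋 to machine accuracy; a Python check:

```python
import math

# Machin's identity is in fact exact, so the residual below is purely
# floating-point round-off.
machin = 16 * math.atan(1 / 5) - 4 * math.atan(1 / 239)
print(machin - math.pi)
```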
Your turn: Generate the power (Maclaurin) series of order 𝑛 for the atan(𝑥)
function [for instance, for 𝑛 = 5 the approximation is atan(𝑥) ≅ 𝑥 − 𝑥^3/3 + 𝑥^5/5].
Write a Matlab script that approximates 𝜋 using Machin's formula, employing a
power series for atan(𝑥). Your script should generate a table of true errors for
𝑛 = 1, 3, 5, 7 and 9. How many decimal places of accuracy did you find for
𝑛 = 9? (It is interesting to note that, as of October 2014, 𝜋 had been calculated
to 13.3 trillion digits!)
Taylor Expansion of Multivariate Functions
The notation H(𝒂) = H(𝒙)|𝒙=𝒂 = ∇^2𝑓(𝒙)|𝒙=𝒂 stands for the Hessian matrix, whose
(𝑖, 𝑗) entry is the second partial derivative ∂^2𝑓(𝒙)/∂𝑥𝑖∂𝑥𝑗, evaluated at 𝒙 = 𝒂.
Example: Give a quadratic function that approximates the following function in the
vicinity of the origin. Plot the function and its approximation for −1 ≤ 𝑥1 ≤ 1 and
−1 ≤ 𝑥2 ≤ 1.
𝑓(𝑥1, 𝑥2) = (𝑥2 − 𝑥1)^4 + 8𝑥1𝑥2 − 𝑥1 + 𝑥2 + 3
The function evaluated at the origin gives: 𝑓(0,0) = 3
∇𝑓(𝟎) = [∂𝑓(𝒙)/∂𝑥1|𝒙=𝟎  ∂𝑓(𝒙)/∂𝑥2|𝒙=𝟎]^𝑇
= [−4(𝑥2 − 𝑥1)^3 + 8𝑥2 − 1 ;  4(𝑥2 − 𝑥1)^3 + 8𝑥1 + 1]^𝑇 |𝑥1=0, 𝑥2=0
= [−1 ;  1]^𝑇
The Hessian is
𝐻(𝟎) = [∂^2𝑓(𝒙)/∂𝑥1^2  ∂^2𝑓(𝒙)/∂𝑥1∂𝑥2 ;  ∂^2𝑓(𝒙)/∂𝑥2∂𝑥1  ∂^2𝑓(𝒙)/∂𝑥2^2]|𝒙=𝟎
= [12(𝑥2 − 𝑥1)^2  −12(𝑥2 − 𝑥1)^2 + 8 ;  −12(𝑥2 − 𝑥1)^2 + 8  12(𝑥2 − 𝑥1)^2]|𝑥1=0, 𝑥2=0
= [0  8 ;  8  0]
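Collecting the pieces, the quadratic approximation is 𝑞(𝑥1, 𝑥2) = 𝑓(𝟎) + ∇𝑓(𝟎)^𝑇𝒙 + (1/2)𝒙^𝑇𝐻(𝟎)𝒙 = 3 − 𝑥1 + 𝑥2 + 8𝑥1𝑥2. A Python check (the residual 𝑓 − 𝑞 is exactly (𝑥2 − 𝑥1)^4, so the approximation is very tight near the origin):

```python
def f(x1, x2):
    return (x2 - x1)**4 + 8*x1*x2 - x1 + x2 + 3

def q(x1, x2):
    # f(0) + grad^T x + (1/2) x^T H x  =  3 - x1 + x2 + 8*x1*x2
    return 3 - x1 + x2 + 8*x1*x2

print(f(0.1, -0.2), q(0.1, -0.2))   # differ only by (x2 - x1)^4 = 0.0081
```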
Plots superimposed: