A Taylor series, named after the English mathematician Brook Taylor (1685–1731), is an
infinite sum of polynomial terms that approximates a function in the region about a certain
point a. This is only possible if the function is analytic in that neighbourhood.
Such a series about the point a = 0 is known as a Maclaurin series, after the Scottish
mathematician Colin Maclaurin (1698–1746). These series work by ensuring that, when the
function is approximated by a polynomial of degree n, the polynomial matches the function's
value and its first n derivatives at the point of expansion.
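In symbols (standard notation, written here in LaTeX), the degree-n Taylor polynomial about the point a is

p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k
       = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n,

and the Maclaurin case is the same expression with a = 0.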
We know that the higher the degree of a polynomial, the more "turning points" it may have. For
example, a parabola has one "turning point."
I (sort of) understand what Taylor series do: they approximate a function that is infinitely
differentiable. Well, first of all, what does infinitely differentiable mean? Does it mean that the
function has no point where the derivative is constant? Can someone explain that intuitively to
me? Anyway, so the function is infinitely differentiable, and the Taylor polynomial keeps adding
terms which make the polynomial equal to the function at some point, then the derivative of the
polynomial equal to the derivative of the function at that point, then the second derivative, and so on.
Why does making the derivative, second derivative, ..., and every higher derivative of a polynomial
and a function equal at some point ensure that the polynomial will match the function exactly?

Usually, you can tell because you can express the function in terms of other functions you
already know are infinitely differentiable, using constructions you already know produce
infinitely differentiable functions. For example, the sum of two infinitely differentiable
functions is infinitely differentiable.

"what does infinitely differentiable mean?" - if f(x) is "infinitely differentiable", this means
that if I differentiate f(x) to obtain a new function f′(x), then I can differentiate f′(x) to
obtain a new function f′′(x), which I can differentiate again to obtain... well, you get the
drift. Additionally, all those derivatives should evaluate to finite values at the point of
expansion.
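As a small illustration of "differentiate again and again", here is a sketch using sympy (an assumed dependency, not something from the original discussion); it repeatedly differentiates exp(x)*sin(x), a function built from pieces known to be infinitely differentiable, and evaluates each derivative at the expansion point 0:

# Sketch (assumes sympy is installed): exp(x)*sin(x) can be differentiated
# as many times as we like, and every derivative is finite at x = 0.
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.sin(x)

g = f
for k in range(6):
    print(k, g.subs(x, 0))  # k-th derivative of f, evaluated at 0 (always a finite value)
    g = sp.diff(g, x)       # differentiate once more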
Thanks. This made a lot of sense. I see why the first derivative gives the best linear approximation, as
it is the slope of the line tangent to the function at the point (c, f(c)). But why does the second
derivative give the best quadratic approximation? I know this is something simple; this part just still
kind of confuses me, though. – mr real lyfe Aug 15 '12 at 15:45
@ordinary: The tangent line has the same value and derivative as f does at c, right? So maybe the
best second-order approximation should have the same value, derivative, and second derivative.... –
Hurkyl Aug 15 '12 at 17:07
As @Hurkyl said. We get as close as we can without allowing any derivatives higher than the second to
be nonzero. To prove it precisely you could write down the
difference f(x+h) − f(x) − f′(x)h − f′′(x)h²/2. Since the limit of that over h² is zero by the
definitions, it must go to zero faster than any constant times h², so we're not going to get any closer
with a different quadratic approximation. – Kevin Carlson Aug 16 '12 at 4:45
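To see this numerically, here is a quick sketch in plain Python (f = exp is just an arbitrary test function, not anything from the thread); the error of the quadratic approximation divided by h² shrinks as h does:

# Numerical check: the quadratic-approximation error, divided by h**2, tends to 0.
# exp is convenient here because it is its own first and second derivative.
import math

def f(x):
    return math.exp(x)

x0 = 0.3
for h in [0.1, 0.01, 0.001, 0.0001]:
    quad = f(x0) + f(x0) * h + 0.5 * f(x0) * h**2   # value + f'(x0)*h + f''(x0)*h**2/2
    print(h, (f(x0 + h) - quad) / h**2)             # this ratio goes to 0 as h shrinks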
 1
Continuing along @KevinCarlson's line of thought, if we let p₂(x) be the second Maclaurin
polynomial of the function f(x), then p₂(x) is the unique degree-2 polynomial with the property that
lim_{x→0} [f(x) − p₂(x)] / [f(x) − q₂(x)] = 0 for every degree-2 polynomial q₂(x) different from
p₂(x); it is in this sense that the degree-2 Maclaurin polynomial of f(x) is the best quadratic
approximation to f(x). The proof requires L'Hôpital's Rule, but is otherwise straightforward. The
same statement holds for the degree-n Maclaurin polynomial of f(x).
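An equivalent, standard way to phrase this for general n (written here in LaTeX) is that the degree-n Maclaurin polynomial

p_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!}\, x^k

is the unique polynomial of degree at most n satisfying

f(x) - p_n(x) = o(x^n) \quad \text{as } x \to 0.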
This, in a nutshell, is the idea of a Taylor series. The value of a function at a future point
(or a nearby input) can be better and better approximated by starting with its value at
present, adding a linear correction which is the derivative multiplied by the time
difference, adding a quadratic correction which is the second derivative multiplied by the
square of the time difference divided by 2, and so on. Taylor series work well when the
functions involved are “well behaved”, meaning the next derivative up from the one you
asked about doesn’t go haywire. This is why you needed to add a caveat around hitting
the brakes. Of course if I merely hit the brakes I would still be within dozens of meters
from your prediction, but if I ignited my photonic hyperdrive and changed my rate of
acceleration ginormously, you'd have been quite wrong very quickly. Taylor series simply take
this idea all the way through. They are fantastically useful both in theory
and in practice, and the underlying idea is simple: every nice function (in a sense that can be
made precise) can be better and better approximated by constant, linear, quadratic or higher-order
polynomials, because polynomials are flexible enough to mimic everything closely over
short distances.
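As a rough numerical sketch of this "better and better" idea (plain Python, with sin standing in for the nice function and the names chosen only for this example), each added correction term shrinks the error at a nearby input:

# Approximate sin(a + dt) from the value and derivatives of sin at a,
# adding one correction term at a time: constant, linear, quadratic, ...
import math

a, dt = 1.0, 0.5
derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a), math.sin(a)]  # sin, sin', sin'', ...

approx = 0.0
for k, d in enumerate(derivs):
    approx += d * dt**k / math.factorial(k)            # k-th order correction term
    print(k, approx, abs(approx - math.sin(a + dt)))   # error shrinks as k grows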

However, if you want to compute something like sin(1) and get an accurate decimal
approximation, you need to use the method of Taylor series. Again, to
scientists and engineers, things such as sin(1) represent physical quantities that they
need to approximate to a certain accuracy by rational numbers (in decimals). They will
make two demands. First, they need a (recursive) method that successively gives more and
more accurate approximations; this is the method of computing Taylor series. Then they
need an error bound to tell them how many iterations they need to perform to obtain the
required accuracy; this is the Taylor approximation theorem (Taylor's theorem with its
remainder estimate). This is the fundamental significance of the Taylor series: it makes
math useful for science and engineering.
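A minimal sketch of both demands in plain Python (the function name sin_taylor and the tolerance are my own choices for illustration): sum the Maclaurin terms of sin at x = 1 and stop once the Lagrange remainder bound, which for sin is simply |x|^(n+1)/(n+1)! because every derivative of sin is bounded by 1, falls below the requested accuracy.

# Approximate sin(x) by its Maclaurin series, stopping when the Lagrange
# remainder bound |x|**(n+1)/(n+1)! (every derivative of sin has size <= 1)
# guarantees the requested accuracy.
import math

def sin_taylor(x, tol=1e-12):
    total, term, n = 0.0, x, 1
    while abs(x)**(n + 1) / math.factorial(n + 1) > tol:  # remainder bound still too large
        total += term
        n += 2                                  # sin's series has only odd powers
        term *= -x * x / (n * (n - 1))          # next odd-degree term from the previous one
    return total + term

print(sin_taylor(1.0), math.sin(1.0))  # the two values agree to about 12 decimal places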
