
TERM PAPER OF NUMERICAL ANALYSIS

MTH 204
TOPIC
DIFFERENCE BETWEEN INTERPOLATION AND EXTRAPOLATION

Submitted to: MS. ARSHI MERAJ

Submitted by: KUMAR NITIN BHURIA


ROLL NO: B52
SECTION: B5801
TABLE OF CONTENTS:
• INTRODUCTION.
• INTERPOLATION.
• TYPES OF INTERPOLATION.
• EXTRAPOLATION.
• TYPES OF EXTRAPOLATION.
• BIBLIOGRAPHY.
ACKNOWLEDGEMENT

The successful completion of any task would be incomplete without mentioning the people who
made it possible. So it is with gratitude that I acknowledge the help which crowned my
efforts with success.

Life is a process of accumulating and discharging debts, not all of those can be measured. We
cannot hope to discharge them with simple words of thanks but we can certainly acknowledge
them.

I owe my gratitude to Miss ARSHI MERAJ, LSE, for her help in completing my term paper. Last but
not least, I am very much indebted to my family and friends for their warm encouragement
and moral support in conducting this project work.

KUMAR NITIN
INTRODUCTION:
Interpolation, extrapolation and regression:
Interpolation solves the following problem: given the value of some unknown function at a
number of points, what value does that function have at some other point between the given
points? A very simple method is to use linear interpolation, which assumes that the unknown
function is linear between every pair of successive points. This can be generalized to polynomial
interpolation, which is sometimes more accurate but suffers from Runge's phenomenon. Other
interpolation methods use localized functions like splines or wavelets.

Extrapolation is very similar to interpolation, except that now we want to find the value of the
unknown function at a point which is outside the given points.

Regression is also similar, but it takes into account that the data is imprecise. Given some points,
and a measurement of the value of some function at these points (with an error), we want to
determine the unknown function. The least-squares method is one popular way to achieve this.
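To make the least-squares idea concrete, the following short Python sketch fits a straight line to noisy measurements. The data values and the use of NumPy's polyfit routine are illustrative assumptions, not part of the discussion above.

    import numpy as np

    # Hypothetical noisy measurements of an (unknown) roughly linear function.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([0.1, 0.9, 2.2, 2.8, 4.1, 5.2])   # values contain measurement error

    # Least-squares fit of a degree-1 polynomial (a straight line) to the data.
    slope, intercept = np.polyfit(x, y, 1)
    print("fitted line: y =", slope, "* x +", intercept)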

Solving equations:
Another fundamental problem is computing the solution of some given equation. Two cases are
commonly distinguished, depending on whether the equation is linear or not.

Much effort has been put into the development of methods for solving systems of linear equations.
Standard methods are Gauss–Jordan elimination and LU factorization. Iterative methods such as
the conjugate gradient method are usually preferred for large systems.

Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of
a function is an argument for which the function yields zero). If the function is differentiable and
the derivative is known, then Newton's method is a popular choice. Linearization is another
technique for solving nonlinear equations.
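As a small illustration of Newton's method, the hedged sketch below (the test equation x^2 − 2 = 0 is chosen only for demonstration) replaces the current guess x at each step by x − f(x)/f'(x):

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Find a root of f using Newton's method, starting from x0."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)      # Newton update: x_new = x - f(x)/f'(x)
            x -= step
            if abs(step) < tol:      # stop when the correction is negligible
                break
        return x

    # Example: solve x^2 - 2 = 0, whose positive root is sqrt(2).
    root = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.0)
    print(root)   # approximately 1.41421356...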

Optimization:
Optimization problems ask for the point at which a given function is maximized (or minimized).
Often, the point also has to satisfy some constraints.

The field of optimization is further split in several subfields, depending on the form of the
objective function and the constraint. For instance, linear programming deals with the case that
both the objective function and the constraints are linear. A famous method in linear
programming is the simplex method.

The method of Lagrange multipliers can be used to reduce optimization problems with
constraints to unconstrained optimization problems.
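The following sketch illustrates the method of Lagrange multipliers on a toy problem; the objective, the constraint, and the use of SymPy are assumptions made for the example. We maximize f(x, y) = x·y subject to x + y = 1 by solving the stationarity conditions of the Lagrangian L = f − λ(x + y − 1).

    import sympy as sp

    x, y, lam = sp.symbols('x y lam', real=True)

    f = x * y                      # objective function
    g = x + y - 1                  # constraint g(x, y) = 0
    L = f - lam * g                # Lagrangian

    # Stationarity: all partial derivatives of L must vanish.
    solutions = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)],
                         [x, y, lam], dict=True)
    print(solutions)   # [{x: 1/2, y: 1/2, lam: 1/2}] -> maximum value 1/4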

Evaluating integrals:
Numerical integration, also known as numerical quadrature, asks for the value of a
definite integral. Popular methods use some Newton-Cotes formula, for instance the midpoint
rule or the trapezoid rule, or Gaussian quadrature. However, if the dimension of the integration
domain becomes large, these methods become prohibitively expensive. In this situation, one may
use a Monte Carlo method or, in modestly large dimensions, the method of sparse grids.
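For instance, a composite trapezoid rule approximates a definite integral by summing the areas of trapezoids over equal subintervals. The sketch below is minimal; the integrand and interval are chosen only as an example.

    import numpy as np

    def trapezoid(f, a, b, n):
        """Composite trapezoid rule with n subintervals on [a, b]."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        # End points get weight h/2, interior points weight h.
        return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    # Example: the integral of x**2 on [0, 1] is exactly 1/3.
    print(trapezoid(lambda x: x**2, 0.0, 1.0, 100))   # about 0.333350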

Differential equations:
Numerical analysis is also concerned with computing (in an approximate way) the solution
of differential equations, both ordinary differential equations and partial differential equations.

Partial differential equations are solved by first discretizing the equation, bringing it into a finite-
dimensional subspace. This can be done by a finite element method, a finite difference method,
or (particularly in engineering) a finite volume method. The theoretical justification of these
methods often involves theorems from functional analysis. This reduces the problem to the
solution of an algebraic equation.
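As a small illustration of the finite difference idea, the sketch below discretizes the one-dimensional model problem −u'' = f on [0, 1] with u(0) = u(1) = 0 (the model problem and right-hand side are assumptions for the example); discretization turns the differential equation into a tridiagonal linear system.

    import numpy as np

    n = 50                               # number of interior grid points
    h = 1.0 / (n + 1)                    # grid spacing
    x = np.linspace(h, 1.0 - h, n)       # interior nodes

    # Finite-difference matrix for -u'' (second-order central differences).
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2

    f = np.pi**2 * np.sin(np.pi * x)     # right-hand side; exact solution is sin(pi x)

    u = np.linalg.solve(A, f)            # solve the resulting algebraic (linear) system
    print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error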

History:

The field of numerical analysis predates the invention of modern computers by many centuries.
In fact, many great mathematicians of the past were preoccupied by numerical analysis, as is
obvious from the names of important algorithms like Newton's method, Lagrange interpolation
polynomial, Gaussian elimination, or Euler's method.

To facilitate computations by hand, large books were produced with formulas and tables of data
such as interpolation points and function coefficients. Using these tables, often calculated out to
16 decimal places or more for some functions, one could look up values to plug into the formulas
given and achieve very good numerical estimates of some functions. The canonical work in the
field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus-page book of a very
large number of commonly used formulas and functions and their values at many points. The
function values are no longer very useful when a computer is available, but the large listing of
formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators
evolved into electronic computers in the 1940s, and it was then found that these computers were
also useful for administrative purposes. But the invention of the computer also influenced the
field of numerical analysis, since longer and more complicated calculations could now be done.

Interpolation:

In the mathematical subfield of numerical analysis, interpolation is a method of constructing new
data points within the range of a discrete set of known data points.

In engineering and science one often has a number of data points, as obtained
by sampling or experimentation, and tries to construct a function which closely fits those data
points. This is called curve fitting or regression analysis. Interpolation is a specific case of curve
fitting, in which the function must go exactly through the data points.

A different problem which is closely related to interpolation is the approximation of a
complicated function by a simple function. Suppose we know the function but it is too complex
to evaluate efficiently. Then we could pick a few known data points from the complicated
function, creating a lookup table, and try to interpolate those data points to construct a simpler
function. Of course, when using the simple function to calculate new data points we usually do
not obtain the same result as when using the original function, but depending on the problem
domain and the interpolation method used, the gain in simplicity might offset the error.

It should be mentioned that there is another, very different kind of interpolation in mathematics,
namely the "interpolation of operators". The classical results about interpolation of operators are
the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other
subsequent results.

Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5
when the function is known only at a discrete set of points (for example, the seven points used in
the polynomial example below). There are many different interpolation methods, some of which
are described below. Some of the concerns to take into account when choosing an
appropriate algorithm are: How accurate is the method? How expensive is it? How smooth is the
interpolant?
Piecewise constant interpolation:
The simplest interpolation method is to locate the nearest data value and assign the same value.
In one dimension, there are seldom good reasons to choose this over linear interpolation,
which is almost as cheap, but in higher-dimensional multivariate interpolation it can be a
favourable choice for its speed and simplicity.
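A minimal sketch of piecewise constant (nearest-neighbour) interpolation in Python, with illustrative data points assumed for the example:

    import numpy as np

    xs = np.array([0.0, 1.0, 2.0, 3.0])      # known sample locations
    ys = np.array([1.0, 3.0, 2.0, 5.0])      # known sample values

    def nearest(xq):
        """Return the value at the data point closest to xq."""
        return ys[np.argmin(np.abs(xs - xq))]

    print(nearest(1.4))   # 3.0 (closest data point is x = 1)
    print(nearest(1.6))   # 2.0 (closest data point is x = 2)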
Linear interpolation:

Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the
interpolant is not differentiable at the point x_k.

The following error estimate shows that linear interpolation is not very precise. Denote the
function which we want to interpolate by g, and suppose that x lies between x_a and x_b and
that g is twice continuously differentiable. Then the linear interpolation error is

    |f(x) − g(x)| ≤ C (x_b − x_a)^2,   where C = (1/8) max |g''(ξ)| over ξ in [x_a, x_b]

and f denotes the linear interpolant.
In words, the error is proportional to the square of the distance between the data points. The error
of some other methods, including polynomial interpolation and spline interpolation (described
below), is proportional to higher powers of the distance between the data points.
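The quadratic error behaviour can be checked numerically. In the hedged sketch below (the test function sin(x) and the grids are assumptions for the example), halving the spacing between data points reduces the linear-interpolation error by roughly a factor of four:

    import numpy as np

    def max_interp_error(h):
        """Max error of piecewise linear interpolation of sin on [0, pi] with spacing h."""
        xs = np.arange(0.0, np.pi + h, h)          # data points
        xq = np.linspace(0.0, np.pi, 10001)        # fine evaluation grid
        yq = np.interp(xq, xs, np.sin(xs))         # piecewise linear interpolant
        return np.max(np.abs(yq - np.sin(xq)))

    e1 = max_interp_error(0.2)
    e2 = max_interp_error(0.1)
    print(e1, e2, e1 / e2)    # ratio close to 4, consistent with an O(h^2) error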
Polynomial interpolation:

Polynomial interpolation is a generalization of linear interpolation. Note that the linear
interpolant is a linear function. We now replace this interpolant by a polynomial of
higher degree.

Consider again the problem given above. The following sixth-degree polynomial goes through all
seven points:

f(x) = −0.0001521x^6 − 0.003130x^5 + 0.07321x^4 − 0.3577x^3 + 0.2255x^2 + 0.9038x.

Substituting x = 2.5, we find that f(2.5) = 0.5965.

Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going
through all the data points. The interpolation error is proportional to the distance between the
data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely
differentiable. So we see that polynomial interpolation overcomes most of the problems of linear
interpolation.
However, polynomial interpolation also has some disadvantages. Calculating the interpolating
polynomial is computationally expensive (see computational complexity) compared to linear
interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially
at the end points.
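The value quoted above can be reproduced directly from the stated coefficients; constructing such an interpolant from data would typically use a routine like NumPy's polyfit (the call shown in the comment is illustrative, since the data table itself is not reproduced here).

    import numpy as np

    # Coefficients of the sixth-degree interpolant given above (highest degree first).
    coeffs = [-0.0001521, -0.003130, 0.07321, -0.3577, 0.2255, 0.9038, 0.0]

    print(np.polyval(coeffs, 2.5))   # ~0.597, close to the quoted f(2.5) = 0.5965
                                     # (small difference comes from the rounded coefficients)

    # Given seven data points (xs, ys), the interpolating polynomial of degree 6
    # could be obtained with:  coeffs = np.polyfit(xs, ys, 6)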

Spline interpolation:
Remember that linear interpolation uses a linear function on each of the intervals [x_k, x_{k+1}]. Spline
interpolation uses low-degree polynomials on each of the intervals, and chooses the polynomial
pieces such that they fit together smoothly. The resulting function is called a spline.

For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable.
Furthermore, its second derivative is zero at the end points.
Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation
and the interpolant is smoother. However, the interpolant is easier to evaluate than the high-
degree polynomials used in polynomial interpolation. It also does not suffer from Runge's
phenomenon.
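A hedged sketch using SciPy's CubicSpline with natural boundary conditions follows; the data points are assumptions for the example. As described above, the natural spline's second derivative vanishes at the end points.

    import numpy as np
    from scipy.interpolate import CubicSpline

    xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    ys = np.array([0.0, 0.84, 0.91, 0.14, -0.76])    # illustrative sample values

    cs = CubicSpline(xs, ys, bc_type='natural')      # natural cubic spline

    print(cs(2.5))                   # interpolated value at x = 2.5
    print(cs(0.0, 2), cs(4.0, 2))    # second derivative at the end points: both 0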

Interpolation via Gaussian processes:

A Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools
are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only
for fitting an interpolant that passes exactly through the given data points but also for regression,
i.e., for fitting a curve through noisy data. In the geostatistics community Gaussian process
regression is also known as Kriging.
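A minimal sketch of Gaussian process regression with scikit-learn follows; the library, the RBF kernel, and the toy data are all illustrative assumptions rather than part of the text above.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    X = np.array([[0.0], [1.0], [2.0], [3.0]])   # training inputs (2-D array)
    y = np.sin(X).ravel()                         # training targets

    # alpha ~ 0 forces the fit through the data (interpolation);
    # a larger alpha models noisy data (regression / Kriging-style smoothing).
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-10)
    gp.fit(X, y)

    mean, std = gp.predict(np.array([[1.5]]), return_std=True)
    print(mean, std)   # predictive mean and uncertainty at x = 1.5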

Other forms of interpolation:

Other forms of interpolation can be constructed by picking a different class of interpolants. For
instance, rational interpolation is interpolation by rational functions, and trigonometric
interpolation is interpolation by trigonometric polynomials. Another possibility is to
use wavelets.

The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite.
Multivariate interpolation is the interpolation of functions of more than one variable. Methods
include bilinear interpolation and bicubic interpolation in two dimensions, and trilinear
interpolation in three dimensions.
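For instance, bilinear interpolation on a single grid cell combines two one-dimensional linear interpolations; a small sketch (with made-up corner values) follows.

    def bilinear(x, y, q00, q10, q01, q11):
        """Bilinear interpolation on the unit square.

        q00 = f(0,0), q10 = f(1,0), q01 = f(0,1), q11 = f(1,1); 0 <= x, y <= 1.
        """
        bottom = q00 * (1 - x) + q10 * x   # linear interpolation along y = 0
        top = q01 * (1 - x) + q11 * x      # linear interpolation along y = 1
        return bottom * (1 - y) + top * y  # interpolate between the two results

    # Example with arbitrary corner values.
    print(bilinear(0.5, 0.5, 1.0, 2.0, 3.0, 4.0))   # 2.5, the average of the corners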

Sometimes we know not only the value of the function that we want to interpolate at some
points, but also its derivative. This leads to Hermite interpolation problems.

Interpolation in Digital Signal Processing:

In the domain of digital signal processing, the term interpolation refers to the process of
converting a sampled digital signal (such as a sampled audio signal) to a higher sampling rate
using various digital filtering techniques (e.g., convolution with a frequency-limited impulse
signal). In this application there is a specific requirement that the harmonic content of the
original signal be preserved without creating aliased harmonic content of the original signal
above the original Nyquist limit of the signal (i.e., above fs/2 of the original signal sample rate).
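A hedged sketch of such sample-rate conversion using SciPy's polyphase resampler follows; the signal, rates, and routine are assumptions for illustration. The filtering step is what prevents aliased content above the original Nyquist limit.

    import numpy as np
    from scipy.signal import resample_poly

    fs = 8000                                   # original sample rate (Hz)
    t = np.arange(0, 0.01, 1.0 / fs)
    x = np.sin(2 * np.pi * 440 * t)             # a 440 Hz tone sampled at 8 kHz

    # Upsample by a factor of 4 (to 32 kHz); resample_poly inserts samples and
    # applies a low-pass (anti-imaging) filter so no new content appears above fs/2.
    y = resample_poly(x, up=4, down=1)
    print(len(x), len(y))                       # the output has 4x as many samples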
EXTRAPOLATION:

In mathematics, extrapolation is the process of constructing new data points outside a discrete
set of known data points. It is similar to the process of interpolation, which constructs new points
between known points, but the results of extrapolation are often less meaningful and are subject
to greater uncertainty. It may also mean extension of a method, assuming similar methods will be
applicable. Extrapolation may also apply to human experience, to project, extend, or expand
known experience into an area not known or previously experienced, so as to arrive at (usually
conjectural) knowledge of the unknown (e.g., a driver extrapolates road conditions beyond his
sight while driving).

Extrapolation methods:

A sound choice of which extrapolation method to apply relies on prior knowledge of the
process that created the existing data points. Crucial questions are, for example, whether the data
can be assumed to be continuous, smooth, possibly periodic, and so on.

Linear extrapolation:
Linear extrapolation means creating a tangent line at the end of the known data and extending it
beyond that limit. Linear extrapolation will only provide good results when used to extend the
graph of an approximately linear function or not too far beyond the known data.

If the two data points nearest the point x* to be extrapolated are (x_{k−1}, y_{k−1}) and (x_k, y_k), linear
extrapolation gives the function

    y(x*) = y_{k−1} + ((x* − x_{k−1}) / (x_k − x_{k−1})) (y_k − y_{k−1})

(which is identical to linear interpolation if x_{k−1} < x* < x_k). It is possible to include more than two
points, averaging the slope of the linear interpolant by regression-like techniques on the
data points chosen to be included. This is similar to linear prediction.
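A minimal sketch of this formula in Python (the data points are illustrative assumptions):

    def linear_extrapolate(x_star, xk1, yk1, xk, yk):
        """Extrapolate to x_star using the line through (xk1, yk1) and (xk, yk)."""
        return yk1 + (x_star - xk1) / (xk - xk1) * (yk - yk1)

    # The two data points nearest the end of the known data.
    print(linear_extrapolate(6.0, 4.0, 7.5, 5.0, 9.0))   # 10.5: the linear trend continues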
Polynomial extrapolation:
A polynomial curve can be created through the entire known data or just near the end. The
resulting curve can then be extended beyond the end of the known data. Polynomial
extrapolation is typically done by means of Lagrange interpolation or using Newton's method
of finite differences to create a Newton series that fits the data. The resulting polynomial may be
used to extrapolate the data.

High-order polynomial extrapolation must be used with due care. For a typical example data set,
anything above order 1 (linear extrapolation) may yield unusable values; an error estimate of the
extrapolated value will grow with the degree of the polynomial extrapolation. This is related to
Runge's phenomenon.

Conic extrapolation:
A conic section can be created using five points near the end of the known data. If the conic
section created is an ellipse or circle, it will loop back and rejoin itself. A parabolic or hyperbolic
curve will not rejoin itself, but may curve back relative to the X-axis. This type of extrapolation
could be done with a conic sections template (on paper) or with a computer.

French curve extrapolation:


French curve extrapolation is a method suitable for any distribution that has a tendency to be
exponential but with accelerating or decelerating factors. This method has been used
successfully to provide forecast projections of the growth of HIV/AIDS in the UK since 1987
and of variant CJD in the UK for a number of years.

Quality of extrapolation:

Typically, the quality of a particular method of extrapolation is limited by the assumptions about
the function made by the method. If the method assumes the data are smooth, then a non-smooth
function will be poorly extrapolated.

Even for proper assumptions about the function, the extrapolation can diverge strongly from the
function. The classic example is truncated power series representations of sin(x) and
related trigonometric functions. For instance, taking only data from near x = 0, we may
estimate that the function behaves as sin(x) ~ x. In the neighborhood of x = 0 this is an excellent
estimate. Away from x = 0, however, the extrapolation moves arbitrarily far from the x-axis
while sin(x) remains in the interval [−1, 1]; that is, the error increases without bound.
Taking more terms in the power series of sin(x) around x = 0 will produce better agreement over
a larger interval near x = 0, but will produce extrapolations that eventually diverge away from
the x-axis even faster than the linear approximation.
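This behaviour is easy to see numerically; in the sketch below (the evaluation points are arbitrary), the one-term and two-term truncations of the sine series are compared with sin(x) itself.

    import numpy as np

    for x in [0.1, 1.0, 5.0, 10.0]:
        linear = x                        # one-term truncation: sin(x) ~ x
        cubic = x - x**3 / 6.0            # two-term truncation
        print(x, np.sin(x), linear, cubic)
    # Near x = 0 both truncations are excellent; far from x = 0 they leave the
    # interval [-1, 1], and the higher-order truncation diverges even faster.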

This divergence is a specific property of extrapolation methods and is only circumvented when
the functional forms assumed by the extrapolation method (inadvertently or intentionally due to
additional information) accurately represent the nature of the function being extrapolated. For
particular problems, this additional information may be available, but in the general case, it is
impossible to satisfy all possible function behaviors with a workably small set of potential
behaviors.

Extrapolation in the complex plane:

In complex analysis, a problem of extrapolation may be converted into an interpolation problem
by the change of variable z ↦ 1/z. This transform exchanges the part of the complex plane inside
the unit circle with the part of the complex plane outside of the unit circle. In particular,
the compactification point at infinity is mapped to the origin and vice versa. Care must be taken
with this transform, however, since the original function may have had "features", for
example poles and other singularities, at infinity that were not evident from the sampled data.

Another problem of extrapolation is loosely related to the problem of analytic continuation,
where (typically) a power series representation of a function is expanded at one of its points
of convergence to produce a power series with a larger radius of convergence. In effect, a set of
data from a small region is used to extrapolate a function onto a larger region.

Again, analytic continuation can be thwarted by function features that were not evident from the
initial data.

Also, one may use sequence transformations like Padé approximants and Levin-type sequence
transformations as extrapolation methods that lead to a summation of power series that are
divergent outside the original radius of convergence. In this case, one often obtains rational
approximants.
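As an illustration of this re-summation idea, the hedged sketch below builds a rational (Padé) approximant from a few Taylor coefficients using SciPy's pade helper; the function exp(x) and the orders chosen are assumptions for the example.

    import numpy as np
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) about 0: 1 + x + x^2/2 + x^3/6 + x^4/24.
    an = [1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24]

    p, q = pade(an, 2)              # [2/2] Pade approximant: p and q are polynomials
    x = 1.0
    print(p(x) / q(x), np.exp(x))   # about 2.714 versus e = 2.71828...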
BIBLIOGRAPHY:
