
Introduction

In numerical analysis, numerical integration constitutes a broad family of algorithms for
calculating the numerical value of a definite integral, and by extension, the term is also
sometimes used to describe the numerical solution of differential equations. This article focuses
on calculation of definite integrals. The term numerical quadrature is more or less a synonym
for numerical integration, especially as applied to one-dimensional integrals. Some authors refer
to numerical integration over more than one dimension as cubature;[1] others take quadrature to
include higher-dimensional integration.
The basic problem in numerical integration is to compute an approximate solution to a definite
integral
\[
\int_a^b f(x)\,dx
\]

to a given degree of accuracy. If f(x) is a smooth function integrated over a small number of
dimensions, and the domain of integration is bounded, there are many methods for
approximating the integral to the desired precision.

Trapezoidal rule

This article is about the quadrature rule for approximating integrals. For the implicit trapezoidal
rule for solving initial value problems, see Trapezoidal rule (differential equations). For the explicit
trapezoidal rule for solving initial value problems, see Heun's method.

The function f(x) (in blue) is approximated by a linear function (in red).

In mathematics, and more specifically in numerical analysis, the trapezoidal rule (also known as
the trapezoid rule or trapezium rule) is a technique for approximating the definite integral

\[
\int_a^b f(x)\,dx .
\]

The trapezoidal rule works by approximating the region under the graph of the function
as a trapezoid and calculating its area. It follows that
\[
\int_a^b f(x)\,dx \approx (b - a)\,\frac{f(a) + f(b)}{2} .
\]
The trapezoidal rule may be viewed as the result obtained by averaging
the left and right Riemann sums, and is sometimes defined this way.
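A minimal Python sketch (the helper names below are illustrative, not standard library routines) confirms that the single-interval rule produces exactly the average of the one-panel left and right Riemann sums:

import math

def trapezoid_single(f, a, b):
    """Single-interval trapezoidal rule: (b - a) * (f(a) + f(b)) / 2."""
    return (b - a) * (f(a) + f(b)) / 2.0

def riemann_average(f, a, b):
    """Average of the one-panel left and right Riemann sums."""
    left = (b - a) * f(a)    # left Riemann sum with a single panel
    right = (b - a) * f(b)   # right Riemann sum with a single panel
    return (left + right) / 2.0

print(trapezoid_single(math.exp, 0.0, 1.0))   # about 1.8591; the exact integral is e - 1 = 1.7183...
print(riemann_average(math.exp, 0.0, 1.0))    # identical value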

Illustration of the "chained trapezoidal rule" used on an irregularly spaced partition of $[a, b]$.

The integral can be even better approximated by partitioning the integration interval,
applying the trapezoidal rule to each subinterval, and summing the results. In practice,
this "chained" (or "composite") trapezoidal rule is usually what is meant by "integrating

with the trapezoidal rule". Let $\{x_k\}$ be a partition of $[a, b]$ such that
$a = x_0 < x_1 < \cdots < x_{N-1} < x_N = b$, and let $\Delta x_k$ be the length of the
$k$-th subinterval (that is, $\Delta x_k = x_k - x_{k-1}$); then

\[
\int_a^b f(x)\,dx \approx \sum_{k=1}^{N} \frac{f(x_{k-1}) + f(x_k)}{2}\,\Delta x_k .
\]
The approximation becomes more accurate as the resolution of the partition increases
(that is, for larger N). When the partition has a regular spacing, as is often the case, the
formula can be simplified for calculation efficiency.
As discussed below, it is also possible to place error bounds on the accuracy of the
value of a definite integral estimated using the trapezoidal rule.
Numerical implementation

Non-uniform grid
When the grid spacing is non-uniform, one can use the formula

\[
\int_a^b f(x)\,dx \approx \sum_{k=1}^{N} \frac{f(x_{k-1}) + f(x_k)}{2}\,\Delta x_k .
\]
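A short Python sketch of this non-uniform formula (assuming NumPy; the helper name is illustrative):

import numpy as np

def trapezoid_nonuniform(x, y):
    """Composite trapezoidal rule on a sorted, possibly irregular grid x with samples y = f(x)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    dx = np.diff(x)                              # subinterval lengths dx_k = x_k - x_{k-1}
    return np.sum((y[:-1] + y[1:]) / 2.0 * dx)   # sum of (f(x_{k-1}) + f(x_k)) / 2 * dx_k

# Integrate sin(x) over [0, pi] on an irregularly spaced partition; the exact value is 2.
rng = np.random.default_rng(0)
x = np.sort(np.concatenate(([0.0, np.pi], rng.uniform(0.0, np.pi, 20))))
print(trapezoid_nonuniform(x, np.sin(x)))
print(np.trapz(np.sin(x), x))                    # NumPy's built-in (numpy.trapezoid in newer releases) agrees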

Uniform grid
For a domain discretized into N equally spaced panels, considerable simplification may
occur. Let

\[
\Delta x_k = \Delta x = \frac{b - a}{N} .
\]
The approximation to the integral becomes
\[
\int_a^b f(x)\,dx \approx \frac{\Delta x}{2} \sum_{k=1}^{N} \bigl( f(x_{k-1}) + f(x_k) \bigr)
= \frac{\Delta x}{2} \Bigl( f(x_0) + 2 \sum_{k=1}^{N-1} f(x_k) + f(x_N) \Bigr)
= \frac{\Delta x}{2} \bigl( f(x_0) + 2\,( f(x_1) + f(x_2) + \cdots + f(x_{N-1}) ) + f(x_N) \bigr) ,
\]
which requires fewer evaluations of the function to calculate.
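A minimal Python sketch of the uniform-grid formula (again assuming NumPy; the helper name is illustrative):

import numpy as np

def trapezoid_uniform(f, a, b, N):
    """Composite trapezoidal rule with N equal panels of width dx = (b - a) / N."""
    x = np.linspace(a, b, N + 1)        # N panels -> N + 1 equally spaced nodes
    y = f(x)
    dx = (b - a) / N
    # dx/2 * (f(x_0) + 2*(f(x_1) + ... + f(x_{N-1})) + f(x_N)): each node is evaluated once
    return dx / 2.0 * (y[0] + 2.0 * np.sum(y[1:-1]) + y[-1])

print(trapezoid_uniform(np.sin, 0.0, np.pi, 100))   # about 1.99984; the exact value is 2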

Error analysis

An animation showing how the trapezoidal rule approximation improves with more strips for an
interval with a=2 and b=8. As the number of intervals N increases, so too does the accuracy
of the result.
The error of the composite trapezoidal rule is the difference between the value of the
integral and the numerical result:
\[
\text{error} = \int_a^b f(x)\,dx \;-\; \frac{b - a}{N} \left[ \frac{f(a) + f(b)}{2} + \sum_{k=1}^{N-1} f\!\left( a + k\,\frac{b - a}{N} \right) \right]
\]
There exists a number $\xi$ between $a$ and $b$ such that[2]
\[
\text{error} = - \frac{(b - a)^3}{12 N^2} \, f''(\xi) .
\]
It follows that if the integrand is concave up (and thus has a positive second derivative), then
the error is negative and the trapezoidal rule overestimates the true value. This can also be
seen from the geometric picture: the trapezoids include all of the area under the curve and
extend over it. Similarly, a concave-down function yields an underestimate because area is
unaccounted for under the curve, but none is counted above. If the interval of the integral being
approximated includes an inflection point, the error is harder to identify.
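A small Python check of this sign argument, assuming NumPy and the concave-up test integrand f(x) = e^x (a sketch; the helper name is illustrative):

import numpy as np

def trapezoid_uniform(f, a, b, N):
    """Composite trapezoidal rule with N equal panels."""
    x = np.linspace(a, b, N + 1)
    y = f(x)
    return (b - a) / N / 2.0 * (y[0] + 2.0 * np.sum(y[1:-1]) + y[-1])

a, b, N = 0.0, 1.0, 16
exact = np.exp(b) - np.exp(a)                 # integral of e^x over [0, 1]
approx = trapezoid_uniform(np.exp, a, b, N)
print(approx > exact)                         # True: the rule overestimates a concave-up integrand
print(exact - approx)                         # negative, consistent with error = -(b-a)^3 f''(xi) / (12 N^2)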
In general, three techniques are used in the analysis of error:[3]

1. Fourier series
2. Residue calculus
3. Euler–Maclaurin summation formula[4][5]
An asymptotic error estimate for N → ∞ is given by

\[
\text{error} = - \frac{(b - a)^2}{12 N^2} \bigl[ f'(b) - f'(a) \bigr] + O(N^{-3}) .
\]

Further terms in this error estimate are given by the Euler–Maclaurin summation formula.
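A short Python sketch, again assuming NumPy and the test integrand f(x) = e^x on [0, 2], compares the observed error with this leading term:

import numpy as np

def trapezoid_uniform(f, a, b, N):
    """Composite trapezoidal rule with N equal panels."""
    x = np.linspace(a, b, N + 1)
    y = f(x)
    return (b - a) / N / 2.0 * (y[0] + 2.0 * np.sum(y[1:-1]) + y[-1])

a, b = 0.0, 2.0
exact = np.exp(b) - np.exp(a)
for N in (10, 20, 40, 80):
    observed = exact - trapezoid_uniform(np.exp, a, b, N)
    predicted = -(b - a) ** 2 / (12 * N ** 2) * (np.exp(b) - np.exp(a))   # f'(x) = e^x
    print(N, observed, predicted)   # the two agree closely and shrink by about 4x per doubling of N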
It has been argued that the speed of convergence of the trapezoidal rule reflects, and can be
used to define, classes of smoothness of functions.[6]

Periodic functions
The trapezoidal rule converges rapidly for periodic functions integrated over a full period. This
is an easy consequence of the Euler–Maclaurin summation formula, which expresses the
difference between the equispaced trapezoidal sum and the exact integral in terms of the
derivatives of the integrand at the two endpoints, plus a remainder involving the periodic
extension of a Bernoulli polynomial.[7] If the integrand is $p$ times continuously differentiable
and periodic with period equal to the length of the integration interval, the endpoint derivatives
cancel due to the periodicity, and the error is $O(h^p)$, where $h$ is the grid spacing.
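A brief Python sketch, assuming NumPy and the smooth $2\pi$-periodic test integrand $f(x) = e^{\cos x}$, illustrates this rapid convergence:

import numpy as np

def trapezoid_periodic(f, period, N):
    """Equispaced trapezoidal rule over one full period of a periodic function.

    Because f(period) = f(0), the trapezoidal sum reduces to a plain average of N samples.
    """
    x = np.linspace(0.0, period, N, endpoint=False)
    return period * np.mean(f(x))

def f(x):
    return np.exp(np.cos(x))

exact = 2.0 * np.pi * 1.2660658777520084        # 2*pi*I_0(1), I_0 being the modified Bessel function
for N in (4, 8, 16, 32):
    print(N, abs(trapezoid_periodic(f, 2.0 * np.pi, N) - exact))   # error drops far faster than O(N^-2)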
Although some effort has been made to extend the Euler–Maclaurin summation formula to
higher dimensions,[8] the most straightforward proof of the rapid convergence of the
trapezoidal rule in higher dimensions is to reduce the problem to the convergence of Fourier
series. This line of reasoning shows that if $f$ is periodic on a $d$-dimensional space with $p$
continuous derivatives, the error again decays like $O(h^p)$ in the grid spacing $h$; in terms of
the total number of sample points $n$ this is $O(n^{-p/d})$, since $h \propto n^{-1/d}$. For very
large dimension, this shows that Monte Carlo integration, whose error decays like $O(n^{-1/2})$,
is most likely a better choice, but for 2 and 3 dimensions equispaced sampling is efficient. This
is exploited in computational solid-state physics, where equispaced sampling over primitive
cells in the reciprocal lattice is known as Monkhorst–Pack integration.
