
NUMERICAL METHODS

 techniques used in the formulation of mathematical problems so that they can be solved with arithmetic operations.
 a way to solve complex problems which often have no analytical solution.

Approximation
 an approximation of the output of a numerical problem.

Numerical Method vs. Analytical Method

Numerical Method:
 Approximation of the exact solution.
 In the absence of an analytical solution, a numerical solution is used to find the approximate answer.
 Very tedious, due to the large number of computations.

Analytical Method:
 Exact solution.
 Often, they have a structured set of rules used to answer a particular problem.
 Fast and simple calculations.

Ways to Solve Mathematical Problems in the Pre-Computer Era
1. Using exact or analytical methods for simpler problems
2. Graphical methods
3. Early calculators used to implement low-level numerical methods

Katherine Johnson

Why study numerical methods?
 As problem-solving tools, numerical methods are undoubtedly one of the most powerful tools available today.
 As you pursue your career in the future, you will come across commercially available computer programs which involve numerical methods.
 Many problems cannot be solved using the readily available computer programs.
 Numerical methods are an efficient vehicle for learning to use computers.
 Numerical methods deliver a way for you to strengthen your understanding of mathematics.

MATHEMATICAL MODEL

 a formulation or equation that expresses the essential features of a physical system or process in mathematical terms.
 In a very general sense, it can be represented as a functional relationship of the form:

Dependent variable = f(independent variables, parameters, forcing functions)

Dependent variable
 a characteristic that shows the behavior or state of the system.
Independent variable
 indicates the dimensions, such as time (t) and space (x, y, z), along which the system behavior is being determined.
Parameters
 are reflective of the system's properties or composition.
Forcing functions
 are external influences acting upon the system.

Example: F = ma
 a is the dependent variable reflecting the system's behavior
 F is the forcing function
 m is a parameter representing a property of the system

Properties of Mathematical Models
 It describes a natural process or system in mathematical terms.
 It represents an idealization and simplification of reality.
 Finally, it yields reproducible results and, consequently, can be used for predictive purposes.

SIGNIFICANT FIGURES

 The concept of a significant figure, or digit, has been developed to formally designate the reliability of a numerical value.
 The significant digits of a number are those that can be used with confidence. They correspond to the number of certain digits plus one estimated digit.
Significant Figures in Numerical Methods
 Numerical methods yield approximate results. We must, therefore, develop criteria to specify how confident we are in our approximate result.
 Although quantities such as π, e, or √7 represent specific quantities, they cannot be expressed exactly by a limited number of digits.

Rules for Significant Figures
1. All non-zero numbers ARE significant. The number 33.2 has THREE significant figures because all of the digits present are non-zero.
2. Zeros between two non-zero digits ARE significant. 2051 has FOUR significant figures. The zero is between a 2 and a 5.
3. Leading zeros are NOT significant. They're nothing more than "placeholders." The number 0.54 has only TWO significant figures. 0.0032 also has TWO significant figures. All of the zeros are leading.
4. Trailing zeros to the right of the decimal ARE significant. There are FOUR significant figures in 92.00.
5. Trailing zeros in a whole number with the decimal shown ARE significant. Placing a decimal at the end of a number is usually not done. By convention, however, this decimal indicates a significant zero. For example, "540." indicates that the trailing zero IS significant; there are THREE significant figures in this value.
6. Trailing zeros in a whole number with no decimal shown are NOT significant. Writing just "540" indicates that the zero is NOT significant, and there are only TWO significant figures in this value.
7. Exact numbers have an INFINITE number of significant figures. This rule applies to numbers that are definitions. For example, 1 meter = 1.00 meters = 1.0000 meters = 1.0000000000000000000 meters, etc.
8. For a number in scientific notation, N x 10^x, all digits comprising N ARE significant by the first six rules; the "10" and the exponent "x" are NOT significant. 5.02 x 10^4 has THREE significant figures: "5.02." The "10" and the "4" are not significant.

ACCURACY AND PRECISION

 The errors associated with both calculations and measurements can be characterized with regard to their accuracy and precision.
 Accuracy refers to how closely a computed or measured value agrees with the true value.
 Precision refers to how closely individual computed or measured values agree with each other.

ERROR DEFINITIONS

 Numerical errors arise from the use of approximations to represent exact mathematical operations and quantities.

Two Major Forms of Numerical Errors:
1. Round-off Error
 Due to the fact that computers can represent only quantities with a finite number of digits.
2. Truncation Error
 The discrepancy introduced by the fact that numerical methods may employ approximations to represent exact mathematical operations and quantities.

 For both types, the relationship between the exact, or true, result and the approximation can be formulated as:

True value = approximation + error
Et = true value - approximation (true error)

 To consider the magnitude of the error relative to the true value:

εt = (Et / true value) x 100% (true percent relative error)

 When the true value is not known, the error is normalized to the approximate value:

εa = (approximate error / approximation) x 100%

 In iterative computations, the approximate percent relative error is estimated from successive approximations:

εa = ((current approximation - previous approximation) / current approximation) x 100%

 The computation is repeated until |εa| < εs, where εs is a prespecified percent tolerance (stopping criterion).
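As a sketch of how these error definitions are applied in practice (the exponential-series example and all function names here are illustrative, not from the notes): approximate e^x by summing Maclaurin-series terms, tracking εt against the known true value and stopping when |εa| falls below εs.

```python
import math

def maclaurin_exp(x, es_percent=0.05, max_terms=20):
    """Approximate e**x by summing Maclaurin-series terms, stopping
    when the approximate percent relative error drops below es_percent."""
    true_value = math.exp(x)
    approx = 0.0
    term = 1.0  # x**0 / 0!
    for n in range(max_terms):
        previous = approx
        approx += term
        term *= x / (n + 1)  # next series term: x**(n+1) / (n+1)!
        # true percent relative error (only computable when the true value is known)
        et = (true_value - approx) / true_value * 100
        # approximate percent relative error from successive approximations
        ea = (approx - previous) / approx * 100
        if abs(ea) < es_percent:
            break
    return approx, et, abs(ea)

approx, et, ea = maclaurin_exp(0.5)
```

Note that the loop stops on |εa|, not εt: in real problems the true value is unavailable, and εa is the only error estimate the computation can observe.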
ROOTS OF AN EQUATION

 In general form, the root of an equation appears in the form f(x) = 0, where x is the root/solution of the equation f(x).
 Graphically, f(x) = 0 refers to the intersection of f(x) and the x-axis.

Methods
a. Graphical Method
b. Bracketing Methods
1. Bisection Method
2. False-Position Method
c. Open Methods
1. Simple Fixed-Point Iteration
2. Newton-Raphson Method
3. Secant Method
4. Modified Secant Method
5. Multiple Roots (Modified Newton-Raphson Method)

GRAPHICAL METHOD

 Simplest method of finding the root of an equation.
 However, it can only provide a rough approximation of the root/solution.
 By graphing the function, we can observe at what value of x it crosses the x-axis, i.e., where f(x) = 0.

Incremental Search Method
 As observed in the graphs of our functions, f(x) changes sign whenever it crosses a root.
 In general, if f(x) is real and continuous in an interval xL to xU, and f(xL) and f(xU) have opposite signs, that is, f(xL)f(xU) < 0, then there is at least one root between xL and xU.

BRACKETING METHOD

 These methods exploit the fact that a function typically changes sign in the vicinity of a root. These techniques are called bracketing methods because two initial guesses for the root are required.
 As the name implies, these guesses must "bracket," or be on either side of, the root.
 The particular methods described herein employ different strategies to systematically reduce the width of the bracket and, hence, home in on the correct answer.

Bisection Method
 The bisection method, which is alternatively called binary chopping, interval halving, or Bolzano's method, is one type of incremental search method in which the interval is always divided in half.
 If a function changes sign over an interval, the function value at the midpoint is evaluated.
 The location of the root is then determined as lying at the midpoint of the subinterval within which the sign change occurs. The process is repeated to obtain refined estimates.

False Position Method
 Although bisection is a perfectly valid technique for determining roots, its "brute-force" approach is relatively inefficient.
 The false position method is an alternative based on a graphical insight.
 A shortcoming of the bisection method is that, in dividing the interval from xL to xU into equal halves, no account is taken of the magnitudes of f(xL) and f(xU).
 An alternative method that exploits this graphical insight is to join f(xL) and f(xU) by a straight line.
 The intersection of this line with the x-axis represents an improved estimate of the root.
 Using similar triangles, the intersection xr of the straight line with the x-axis can be estimated from:

f(xL) / (xr - xL) = f(xU) / (xr - xU)
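The two bracketing methods above can be sketched as follows (a minimal sketch: the test function f(x) = x² - 2 with root √2, the stopping tolerance, and the function names are assumptions for illustration, not from the notes).

```python
def bisection(f, xl, xu, es=1e-6, max_iter=100):
    """Bisection: repeatedly halve [xl, xu], keeping the half where f changes sign."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = (xl + xu) / 2  # midpoint of the current bracket
        if xr != 0 and abs((xr - xr_old) / xr) < es:  # |ea| < es stopping criterion
            break
        if f(xl) * f(xr) < 0:  # sign change lies in the lower subinterval
            xu = xr
        else:
            xl = xr
    return xr

def false_position(f, xl, xu, es=1e-6, max_iter=100):
    """False position: join (xl, f(xl)) and (xu, f(xu)) by a straight line
    and take its x-intercept as the new root estimate."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        # x-intercept of the chord, from the similar-triangles relation
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
    return xr

f = lambda x: x**2 - 2  # root at sqrt(2)
root_bi = bisection(f, 1.0, 2.0)
root_fp = false_position(f, 1.0, 2.0)
```

Both functions keep the root bracketed at every step; they differ only in how the next estimate xr is chosen (midpoint versus chord intercept).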
Bracketing Methods vs. Open Methods

Bracketing Methods:
 The root is located within an interval prescribed by a lower and an upper bound.
 Repeated application of these methods always results in closer estimates of the true value of the root.
 Such methods are said to be convergent: they move closer to the truth as the computation progresses.

Open Methods:
 Based on formulas that require only a single starting value of x, or two starting values that do not necessarily bracket the root.
 As such, they sometimes diverge, or move away from the true root, as the computation progresses.
 However, when the open methods converge, they usually do so much more quickly than the bracketing methods.

Fixed Point Iteration
 For open methods, we employ a formula to predict the root of the equation.
 We can develop a formula for fixed-point iteration (also called one-point iteration or successive substitution) by rearranging f(x) = 0 so that x is on the left-hand side of the equation: x = g(x).

Newton-Raphson Method
 The most widely used of all root-locating formulas.
 If the initial guess at the root is xi, a tangent can be extended from the point [xi, f(xi)].
 The point where this tangent crosses the x-axis usually represents an improved estimate of the root.

Secant Method
 A potential problem in implementing the Newton-Raphson method is the evaluation of the derivative.
 Although this is not inconvenient for polynomials and many other functions, there are certain functions whose derivatives may be extremely difficult or inconvenient to evaluate.
 For these cases, the derivative can be approximated by a backward finite divided difference.
 This technique is similar to the Newton-Raphson technique in the sense that an estimate of the root is predicted by extrapolating a tangent of the function to the x-axis.
 However, the secant method uses a difference rather than a derivative to estimate the slope.

Modified Secant Method
 Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate f'(x):

xi+1 = xi - δxi f(xi) / (f(xi + δxi) - f(xi))

where δ = a small perturbation fraction.

Multiple Roots Method
 Modified Newton-Raphson Method
 A multiple root corresponds to a point where a function is tangent to the x-axis.
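The open methods above can be sketched in code. This is a minimal illustration, assuming the same test function f(x) = x² - 2 (with f'(x) = 2x) and stopping on the relative step size; none of these names or tolerances come from the notes.

```python
def newton_raphson(f, df, x, es=1e-8, max_iter=50):
    """Newton-Raphson: follow the tangent at (x, f(x)) down to the x-axis."""
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

def secant(f, x0, x1, es=1e-8, max_iter=50):
    """Secant: replace the derivative with a finite divided difference
    through the two most recent estimates."""
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if x2 != 0 and abs((x2 - x1) / x2) < es:
            return x2
        x0, x1 = x1, x2
    return x1

def modified_secant(f, x, delta=1e-6, es=1e-8, max_iter=50):
    """Modified secant: perturb x by the small fraction delta to estimate f'(x)."""
    for _ in range(max_iter):
        x_new = x - delta * x * f(x) / (f(x + delta * x) - f(x))
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

f = lambda x: x**2 - 2
df = lambda x: 2 * x
r1 = newton_raphson(f, df, 1.0)
r2 = secant(f, 1.0, 2.0)
r3 = modified_secant(f, 1.0)
```

Note the trade-off the notes describe: newton_raphson needs df explicitly, secant needs two starting values but no derivative, and modified_secant needs only one starting value plus the perturbation fraction δ.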
SYSTEM OF LINEAR EQUATIONS

 A linear system of n algebraic equations in n unknowns x1, x2, ..., xn is of the form:

a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
an1x1 + an2x2 + ... + annxn = bn

 where aij (i, j = 1, 2, ..., n) are the known coefficients and bk (k = 1, 2, ..., n) are known constants.
 If every bk is zero, the system is homogeneous; otherwise it is nonhomogeneous.
 Numerical methods for solving linear systems of equations are divided into two categories: direct methods and indirect methods.

DIRECT METHODS

 Solution based.
 These methods transform the original system into an equivalent system in which the coefficient matrix is upper-triangular, lower-triangular, or diagonal, making the system much easier to solve.

Gauss Elimination
 The Gauss elimination method is the most familiar method for solving a system of linear equations.
 This method consists of two parts: the elimination phase and the solution phase.
 In this method, a system of equations given in general form is manipulated into upper triangular form, which is then solved by back substitution.
 Gaussian elimination, also known as row reduction, is an algorithm in linear algebra for solving a system of linear equations.
 It is usually understood as a sequence of operations performed on the corresponding matrix of coefficients.
 Once a "solution" has been obtained, Gaussian elimination offers no method of refinement.
 It is sensitive to rounding error.

Gauss-Jordan Method
 The Gauss-Jordan method is a refinement of the Gauss elimination method.
 Gauss-Jordan elimination is a procedure for converting a matrix to reduced row echelon form using elementary row operations.
 The reduced row echelon form of a matrix is unique, but the steps of the procedure are not.

INDIRECT METHODS

 Iteration based.
 Indirect methods are those that use iterations to find the approximate solution.
 This is done with an initial vector; successive approximations are then generated that eventually converge to the actual solution.
 Unlike direct methods, the number of operations required by iterative methods is not known in advance.

Jacobi Method
 Named after Carl Gustav Jacob Jacobi (1804-1851).
 This method makes two assumptions:
1. The system Ax = b has a unique solution.
2. The coefficient matrix A has no zeros in its main diagonal.

Gauss-Seidel Method
 A modification of the Jacobi method.
 Named after Carl Friedrich Gauss (1777-1855) and Philipp L. Seidel (1821-1896).
 This modification is no more difficult to use than the Jacobi method, and it often requires fewer iterations to produce the same degree of accuracy.
 With the Jacobi method, the values of xi obtained in the nth approximation remain unchanged until the entire (n+1)th approximation has been calculated.
 With the Gauss-Seidel method, on the other hand, you use the new values of xi as soon as they are known.
 That is, once you have determined x1 from the first equation, its value is used in the second equation to obtain the new x2. Similarly, the new x1 and x2 are used in the third equation to obtain the new x3, and so on.
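The two categories can be contrasted in code: one direct method (naive Gauss elimination) and one indirect method (Gauss-Seidel). This is a minimal sketch; the 3x3 diagonally dominant example system, the zero initial vector, and the tolerances are assumptions for illustration, and the elimination below does no pivoting.

```python
def gauss_elimination(A, b):
    """Direct method: forward elimination to upper triangular form,
    then back substitution (naive version, no pivoting)."""
    n = len(b)
    A = [row[:] for row in A]  # work on copies so the inputs are untouched
    b = b[:]
    # Elimination phase: zero out the entries below each pivot A[k][k]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Solution phase: back substitution from the last row upward
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

def gauss_seidel(A, b, es=1e-10, max_iter=200):
    """Indirect method: solve each equation for its diagonal unknown,
    reusing the new values of x as soon as they are known."""
    n = len(b)
    x = [0.0] * n  # initial vector
    for _ in range(max_iter):
        max_ea = 0.0
        for i in range(n):
            old = x[i]
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]  # requires A[i][i] != 0
            if x[i] != 0:
                max_ea = max(max_ea, abs((x[i] - old) / x[i]))
        if max_ea < es:  # stop when every |ea| is below the tolerance
            break
    return x

# An assumed diagonally dominant system with solution (1, 1, 2)
A = [[4.0, -1.0, 1.0],
     [2.0,  5.0, 1.0],
     [1.0,  2.0, 4.0]]
b = [5.0, 9.0, 11.0]
x_direct = gauss_elimination(A, b)
x_iter = gauss_seidel(A, b)
```

The direct solver finishes in a fixed number of operations, while the iterative solver's work depends on how many sweeps the stopping criterion requires, which is exactly the distinction the notes draw between the two categories.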