Term Paper of Mathematics
Submitted to: Miss Saloni
Submitted on: 12-11-2010
Submitted by:
• Name: Haobijam Bana
• Roll No.: RG6009A62
• Reg. No.: 11013742
• Section: G6009
• Topic: Application of Eigenvalues
Acknowledgement

Contents:-
• Overview
• Mathematical definition
• Alternative definition
• Computation of Eigenvalues, and the characteristic equation
• Properties of Eigenvalues
• Eigen decomposition
• Properties of eigenvectors
• Non-zero values
• Scalar multiples
• Components of eigenvectors
• Linear independence
• Eigenvectors of a diagonal matrix
• Shear
• Applications
Overview:-

In linear algebra, there are two kinds of objects: scalars, which are just numbers, and vectors, which can be thought of as arrows, and which have both magnitude and direction (though more precisely a vector is a member of a vector space). In place of the ordinary functions of algebra, the most important functions in linear algebra are called "linear transformations," and a linear transformation is usually given by a "matrix," an array of numbers. Thus instead of writing f(x) we write M(v), where M is a matrix and v is a vector. The rules for using a matrix to transform a vector are given in the article on linear algebra.

If the action of a matrix on a (nonzero) vector changes its magnitude but not its direction, then the vector is called an eigenvector of that matrix. Each eigenvector is, in effect, multiplied by a scalar, called the eigenvalue corresponding to that eigenvector. The eigenspace corresponding to one eigenvalue of a given matrix is the set of all eigenvectors of the matrix with that eigenvalue.

An important benefit of knowing the eigenvectors and eigenvalues of a system is that the effects of the action of the matrix on the system can be predicted. Each application of the matrix to an arbitrary vector yields a result which will have rotated towards the eigenvector with the largest eigenvalue.

Many kinds of mathematical objects can be treated as vectors: ordered pairs, functions, harmonic modes, quantum states, and frequencies are examples. In these cases, the concept of direction loses its ordinary meaning and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenface, eigenstate, and eigenfrequency.
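The claim that repeated application of a matrix rotates an arbitrary vector towards the eigenvector with the largest eigenvalue can be checked numerically. A minimal sketch in Python; the matrix M below is a made-up example whose dominant eigenvector is (1, 1)/√2 with eigenvalue 3:

```python
import math

# Made-up 2x2 matrix for illustration: its eigenvectors are
# (1, 1)/sqrt(2) with eigenvalue 3 and (1, -1)/sqrt(2) with eigenvalue 1.
M = [[2.0, 1.0],
     [1.0, 2.0]]

def apply(M, v):
    """Matrix-vector product M v for a 2x2 matrix."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def normalize(v):
    n = math.hypot(v[0], v[1])
    return [v[0]/n, v[1]/n]

# Start from an arbitrary vector and apply M repeatedly.
v = normalize([1.0, 0.0])
for _ in range(50):
    v = normalize(apply(M, v))

# v has rotated towards the dominant eigenvector (1, 1)/sqrt(2).
print(v)
```

This is the power-iteration idea: the component along the dominant eigenvector is amplified by the largest factor at every step, so it eventually swamps the others.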
Alternative definition:-

Mathematicians sometimes alternatively define eigenvalue in the following way, leading to differences of opinion over what constitutes an "eigenvector."

Given a linear transformation T on a linear space V over a field F, a scalar λ in F is called an eigenvalue of T if there exists a nonzero vector x in V such that T(x) = λx.

Any vector u in V which satisfies T(u) = λu for some eigenvalue λ is called an eigenvector of T corresponding to λ. The set of all eigenvectors corresponding to λ is called the eigenspace of λ.[4]

Notice that the above definitions are entirely equivalent, except in regard to eigenvectors, where they disagree on whether the zero vector is considered an eigenvector. The convention of allowing the zero vector to be an eigenvector (although still not allowing it to determine eigenvalues) allows one to avoid repeating the non-zero criterion every time one proves a theorem, and simplifies the definitions of concepts such as eigenspace, which would otherwise be non-intuitively made up of vectors that are not all eigenvectors.

Computation of Eigenvalues, and the characteristic equation:-

For a matrix A, the eigenvalue equation Ax = λx can be rearranged to

(A − λI)x = 0,

where I is the identity matrix. If there exists an inverse (A − λI)⁻¹, then both sides can be left-multiplied by that inverse to obtain the trivial solution x = 0. Thus we require there to be no inverse, which by linear algebra means that the determinant equals zero:

det(A − λI) = 0.
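For a 2×2 matrix, det(A − λI) = 0 is a quadratic in λ and can be solved directly by the quadratic formula. A sketch of that computation; the matrix entries below are an arbitrary illustration, not taken from the text:

```python
import math

# Illustrative 2x2 matrix A (arbitrary example values).
a, b = 2.0, 1.0
c, d = 1.0, 2.0

# det(A - lambda*I) = (a - l)(d - l) - b*c = l^2 - (a + d) l + (a*d - b*c)
trace = a + d
det = a*d - b*c

# Roots of l^2 - trace*l + det = 0 via the quadratic formula
# (this example has real roots; a rotation matrix would not).
disc = math.sqrt(trace**2 - 4*det)
eigenvalues = sorted([(trace - disc)/2, (trace + disc)/2])
print(eigenvalues)  # the two eigenvalues of A
```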
For example, the 2×2 shear matrix

A = [[1, 1], [0, 1]]

has characteristic equation λ² − 2λ + 1 = (1 − λ)² = 0, so λ = 1 is its only eigenvalue.

Scalar multiples:-
As a one-dimensional vector space, consider an elastic string or rubber band tied to an unmoving support at one end. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated, it preserves its original direction.

For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at the fixed point on the balloon surface (the origin) are stretched equally with the same scaling factor λ. This transformation in two dimensions is described by the 2×2 square matrix A = λI. Expressed in words, the transformation is equivalent to multiplying the length of any vector by λ while preserving its original direction. Since the vector taken was arbitrary, every non-zero vector in the vector space is an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if λ < 1, it is shrinking. Negative values of λ correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of λ.

Unequal scaling:-

[Figure: vertical shrink (k2 < 1) and horizontal stretch (k1 > 1) of a unit square. The eigenvectors are u1 and u2 and the eigenvalues are λ1 = k1 and λ2 = k2. This transformation orients all vectors towards the principal eigenvector u1.]

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other direction. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is

A = [[k1, 0], [0, k2]],

and the characteristic equation is (k1 − λ)(k2 − λ) = 0. The eigenvalues, obtained as roots of this equation, are λ1 = k1 and λ2 = k2, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging λ1 = k1 back into the eigenvalue equation gives one of the eigenvectors, a vector along the x-axis; the other eigenvector lies along the y-axis, and a component in that direction (when k2 < 1) will gradually shrink away to nothing under repeated application of the transformation.

Rotation:-

A pure rotation of the plane changes the direction of every nonzero vector, so a rotation matrix has no real eigenvectors; its eigenvalues are complex numbers.
Eigenfunction:-

A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces. As an example, on the space of infinitely differentiable functions, the process of differentiation defines a linear operator, since

d/dt (af + bg) = a df/dt + b dg/dt,

where f(t) and g(t) are differentiable functions, and a and b are constants.

The eigenvalue equation for linear differential operators is then a set of one or more differential equations. The eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function of a single real variable:

df/dt = λf.

If λ is non-zero, the solution is the exponential function f(t) = Ae^(λt), where A is any constant. If we expand our horizons to complex-valued functions, the value of λ can be any complex number. The spectrum of d/dt is therefore the whole complex plane. This is an example of a continuous spectrum.

Waves on a string:-

The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.
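The claim that f(t) = Ae^(λt) is an eigenfunction of d/dt can be checked numerically with a central finite difference; λ = 0.7, A = 1, and the sample point are arbitrary choices for illustration:

```python
import math

lam = 0.7                          # an arbitrary eigenvalue
f = lambda t: math.exp(lam * t)    # candidate eigenfunction (A = 1)

# Central-difference approximation of df/dt at an arbitrary point.
t0, h = 1.3, 1e-6
derivative = (f(t0 + h) - f(t0 - h)) / (2 * h)

# df/dt should equal lam * f, up to discretization error.
print(derivative, lam * f(t0))
```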
The displacement, h(x, t), of a stressed rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation

∂²h/∂t² = c² ∂²h/∂x²,

which is a linear partial differential equation, where c is the constant wave speed. The normal method of solving such an equation is separation of variables. If we assume that h can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:

X'' = −(ω/c)² X    and    T'' = −ω² T,

each of which is an eigenvalue equation with sinusoidal solutions. Imposing the boundary conditions X(0) = X(L) = 0 for a rope of length L, the constant ω is constrained to take one of the values ω = nπc/L, where n is any integer. Thus the clamped string supports a family of standing waves of the form

h(x, t) = sin(nπx/L) sin(nπct/L).
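That the standing wave h(x, t) = sin(nπx/L) sin(nπct/L) solves the wave equation can be verified numerically with finite differences; here n, L, c, and the sample point are all made-up values:

```python
import math

n, L, c = 3, 2.0, 5.0        # arbitrary mode number, length, wave speed
k = n * math.pi / L

def h(x, t):
    """Standing wave clamped at x = 0 and x = L."""
    return math.sin(k * x) * math.sin(k * c * t)

def second_diff(f, a, step=1e-4):
    """Central finite-difference second derivative of f at a."""
    return (f(a + step) - 2 * f(a) + f(a - step)) / step**2

# Check h_tt = c^2 h_xx at an arbitrary interior point.
x0, t0 = 0.37, 0.11
h_tt = second_diff(lambda t: h(x0, t), t0)
h_xx = second_diff(lambda x: h(x, t0), x0)
print(h_tt, c * c * h_xx)
```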
Applications:-
An example of an eigenvalue equation where the transformation T is represented in terms of a differential operator is the time-independent Schrödinger equation in quantum mechanics:

H ψE = E ψE,

where H, the Hamiltonian, is a second-order differential operator and ψE, the wave function, is one of its eigenfunctions corresponding to the eigenvalue E, interpreted as its energy.

However, in the case where one is interested only in the bound state solutions of the Schrödinger equation, one looks for ψE within the space of square integrable functions. Since this space is a Hilbert space with a well-defined scalar product, one can introduce a basis set in which ψE and H can be represented as a one-dimensional array and a matrix respectively. This allows one to represent the Schrödinger equation in a matrix form. (Fig. 8 presents the lowest eigenfunction of the hydrogen atom Hamiltonian.)

Bra-ket notation is often used in this context. A vector that represents a state of the system in the Hilbert space of square integrable functions is written |ΨE⟩, and in this notation the Schrödinger equation reads

H |ΨE⟩ = E |ΨE⟩,

where |ΨE⟩ is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, H |ΨE⟩ in the equation above is understood to be the vector obtained by application of the transformation H to |ΨE⟩.

Molecular orbital:-

In quantum mechanics, and in particular in atomic and molecular physics, within the Hartree–Fock theory, the atomic and molecular orbitals can be defined by the eigenvectors of the Fock operator. The corresponding eigenvalues are interpreted as ionization potentials via Koopmans' theorem. In this case, the term eigenvector is used in a somewhat more general meaning, since the Fock operator is explicitly dependent on the orbitals and their eigenvalues. If one wants to underline this aspect, one speaks of a nonlinear eigenvalue problem. Such equations are usually solved by an iteration procedure, called in this case the self-consistent field method. In quantum chemistry, one often represents the Hartree–Fock equation in a non-orthogonal basis set. This particular representation is a generalized eigenvalue problem called the Roothaan equations.
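In a finite basis, the eigenvalue problem for a self-adjoint operator reduces to a symmetric-matrix problem. The toy 2×2 "Hamiltonian" below is entirely made up (not a physical system); it only illustrates Hψ = Eψ in matrix form:

```python
import math

# Made-up symmetric 2x2 "Hamiltonian" in some two-state basis.
H = [[1.0, 0.5],
     [0.5, 1.0]]

# Energies from the characteristic equation
# det(H - E*I) = E^2 - trace*E + det = 0.
trace = H[0][0] + H[1][1]
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
disc = math.sqrt(trace**2 - 4*det)
E0, E1 = (trace - disc)/2, (trace + disc)/2

# Corresponding (unnormalized) eigenstates of this symmetric matrix.
psi0 = [1.0, -1.0]
psi1 = [1.0, 1.0]

def apply(H, v):
    """Matrix-vector product H v for a 2x2 matrix."""
    return [H[0][0]*v[0] + H[0][1]*v[1],
            H[1][0]*v[0] + H[1][1]*v[1]]

# Check H psi = E psi for both eigenstates.
print(apply(H, psi0), [E0 * x for x in psi0])
print(apply(H, psi1), [E1 * x for x in psi1])
```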
Geology and glaciology:-

In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in a 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram or as a Stereonet on a Wulff Net. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 are in the order E1 ≥ E2 ≥ E3, with E1 being the primary orientation of clast orientation/dip, E2 the secondary, and E3 the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is planar. If E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.

Principal components analysis:-

Main article: Principal components analysis
See also: Positive semidefinite matrix and Factor analysis

[Figure: PCA of the multivariate Gaussian distribution centered at (1, 3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction.]

The eigendecomposition of a symmetric positive semidefinite (PSD) matrix yields an orthogonal basis of eigenvectors, each of which has a nonnegative eigenvalue. The orthogonal decomposition of a PSD matrix is used in multivariate analysis, where the sample covariance matrices are PSD. This orthogonal decomposition is called principal components analysis (PCA) in statistics. PCA studies linear relations among variables. PCA is performed on the covariance matrix or the correlation matrix (in which each variable is scaled to have its sample variance equal to one). For the covariance or correlation matrix, the eigenvectors
correspond to principal components and the eigenvalues to the variance explained by the principal components. Principal component analysis of the correlation matrix provides an orthonormal eigen-basis for the space of the observed data: in this basis, the largest eigenvalues correspond to the principal components that are associated with most of the covariability among a number of observed data.

Principal component analysis is used to study large data sets, such as those encountered in data mining, chemical research, psychology, and marketing. PCA is popular especially in psychology, in the field of psychometrics. In Q-methodology, the eigenvalues of the correlation matrix determine the Q-methodologist's judgment of practical significance (which differs from the statistical significance of hypothesis testing): the factors with eigenvalues greater than 1.00 are considered practically significant, that is, as explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered practically insignificant, as explaining only a negligible portion of the data variability. More generally, principal component analysis can be used as a method of factor analysis in structural equation modeling.

Vibration analysis:-

Eigenvalue problems occur naturally in the vibration analysis of mechanical structures with many degrees of freedom. The eigenvalues are used to determine the natural frequencies (or eigenfrequencies) of vibration, and the eigenvectors determine the shapes of these vibrational modes. The orthogonality properties of the eigenvectors allow decoupling of the differential equations, so that the system can be represented as a linear summation of the eigenvectors. The eigenvalue problem of complex structures is often solved using finite element analysis.

Eigenfaces:-

Main article: Eigenfaces

[Figure: Eigenfaces as examples of eigenvectors.]
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research related to eigen vision systems determining hand gestures has also been conducted.

Similar to this concept, eigenvoices represent the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.

Tensor of inertia:-

In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.

Stress tensor:-

In solid mechanics, the stress tensor is symmetric and so can be decomposed into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors as a basis. Because it is diagonal, in this orientation the stress tensor has no shear components; the components it does have are the principal components.

Eigenvalues of a graph:-

The kth principal eigenvector of a graph is defined as the eigenvector corresponding to the kth largest or kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to simply as the principal eigenvector.

The principal eigenvector is used to measure the centrality of its vertices. An example is Google's PageRank algorithm. The principal eigenvector of a modified adjacency matrix of the World Wide Web graph gives the page ranks as its components. This vector corresponds to the stationary distribution of the Markov chain represented by the row-normalized adjacency matrix; however, the adjacency matrix must first be modified to ensure a stationary distribution exists.
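The PageRank construction above can be sketched on a tiny made-up three-page web graph. The damping factor 0.85 is the usual choice for the "modification" that guarantees a stationary distribution exists; the graph itself is invented for illustration:

```python
# A tiny made-up web graph: adjacency[i][j] = 1 if page i links to page j.
adjacency = [
    [0, 1, 1],  # page 0 links to pages 1 and 2
    [1, 0, 1],  # page 1 links to pages 0 and 2
    [0, 1, 0],  # page 2 links to page 1
]
n = len(adjacency)

# Row-normalize so each row sums to 1 (a Markov transition matrix).
P = [[a / sum(row) for a in row] for row in adjacency]

# The modification: mix in a uniform jump so the chain has a
# unique stationary distribution (the damping factor).
d = 0.85
M = [[d * P[i][j] + (1 - d) / n for j in range(n)] for i in range(n)]

# Power iteration for the stationary distribution x = x M,
# i.e. the principal (left) eigenvector of M, with eigenvalue 1.
x = [1.0 / n] * n
for _ in range(100):
    x = [sum(x[i] * M[i][j] for i in range(n)) for j in range(n)]

print(x)  # page-rank scores; they sum to 1
```

Page 1 receives the strongest links in this toy graph, so it ends up with the largest component of the principal eigenvector.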