MTH 101 Topic: Application of Eigenvalues and Eigenvectors
SUBMITTED TO:
SUBMITTED BY:
MISS. SOFIA SINGHLA
NISHANT RAJPOOT
ROLL NO. RB4003B66
REG. NO. 11011767
Acknowledgement
Yours obediently,
NISHANT RAJPOOT
Contents
1. Introduction
2. Mathematical definition
3. Computation of eigenvalues and the characteristic equation
4. Applications
5. Daily life applications
6. References
INTRODUCTION
In linear algebra, there are two kinds of objects: scalars, which are just
numbers; and vectors, which can be thought of as arrows, and which have
both magnitude and direction (though more precisely a vector is a member of
a vector space). In place of the ordinary functions of algebra, the most
important functions in linear algebra are called "linear transformations," and a
linear transformation is usually given by a "matrix," an array of numbers. Thus
instead of writing f(x) we write M(v) where M is a matrix and v is a vector. The
rules for using a matrix to transform a vector are given in the article linear
algebra.
If the action of a matrix on a (nonzero) vector changes its magnitude but not
its direction (or exactly reverses it), then the vector is called an eigenvector
of that matrix. Each eigenvector is, in effect, multiplied by a scalar, called
the eigenvalue corresponding to that eigenvector. The eigenspace corresponding
to one eigenvalue of a given matrix is the set of all eigenvectors of the
matrix with that eigenvalue.
An important benefit of knowing the eigenvectors and values of a system is
that the effects of the action of the matrix on the system can be predicted.
Each repeated application of the matrix to an arbitrary vector yields a result that
rotates towards the eigenvector whose eigenvalue has the largest absolute value.
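This behaviour is the basis of the power-iteration method. A minimal sketch in Python with NumPy, using an illustrative 2 × 2 matrix (chosen for this demo, not taken from the text) whose dominant eigenvector lies along (1, 1):

```python
import numpy as np

# Illustrative symmetric matrix: eigenvalues 3 and 1, with the dominant
# eigenvector along (1, 1). (The matrix is an assumption for this demo.)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([1.0, 0.0])       # an arbitrary starting vector
for _ in range(50):
    v = A @ v                  # apply the matrix repeatedly...
    v = v / np.linalg.norm(v)  # ...renormalizing to keep unit length

dominant = v                   # has rotated towards (1, 1)/sqrt(2)
```

Any starting vector with a nonzero component along the dominant eigenvector is driven towards that eigenvector, which is why the prediction above works.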
Many kinds of mathematical objects can be treated as vectors: ordered
pairs, functions, harmonic modes, quantum states, and frequencies are
examples. In these cases, the concept of direction loses its ordinary meaning,
and is given an abstract definition. Even so, if this abstract direction is
unchanged by a given linear transformation, the prefix "eigen" is used, as
in eigenfunction, eigenmode, eigenface, eigenstate, and eigenfrequency.
Mathematical definition
Definition. Given a linear transformation A, a non-zero vector x is defined to be
an eigenvector of the transformation if it satisfies the eigenvalue equation

Ax = λx

for some scalar λ. In this situation, the scalar λ is called
an eigenvalue of A corresponding to the eigenvector x.
The key equation in this definition is the eigenvalue equation, Ax = λx. The
vector x has the property that its direction is not changed by the
transformation A, but that it is only scaled by a factor of λ. Most vectors x will
not satisfy such an equation: a typical vector x changes direction when acted
on by A, so that Ax is not a scalar multiple of x. This means that only certain
special vectors x are eigenvectors, and only certain special scalars λ are
eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector
changes direction, and all non-zero vectors are eigenvectors.
Fig. 2. A acts to stretch the vector x, not change its direction, so x is an eigenvector of A.
Geometrically (Fig. 2), the eigenvalue equation means that under the
transformation A eigenvectors experience only changes in magnitude and
sign—the direction of Ax is the same as that of x. The eigenvalue λ is simply
the amount of "stretch" or "shrink" to which a vector is subjected when
transformed by A. If λ = 1, the vector remains unchanged (unaffected by the
transformation). A transformation I under which every vector x remains
unchanged, Ix = x, is called the identity transformation. If λ = −1, the vector
flips to the opposite direction; this is called a reflection.
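These cases can be checked numerically. A small sketch with NumPy, using a made-up diagonal matrix that stretches one axis (λ = 2) and reflects the other (λ = −1):

```python
import numpy as np

# Purely illustrative matrix: stretches the x-axis by 2 (eigenvalue 2)
# and reflects the y-axis (eigenvalue -1).
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])

x1 = np.array([1.0, 0.0])   # eigenvector with eigenvalue 2 (stretch)
x2 = np.array([0.0, 1.0])   # eigenvector with eigenvalue -1 (reflection)
x3 = np.array([1.0, 1.0])   # not an eigenvector: its direction changes

Ax1, Ax2, Ax3 = A @ x1, A @ x2, A @ x3   # Ax3 = (2, -1), not parallel to x3
```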
Alternative definition
Mathematicians sometimes define the concepts in the opposite order: an
eigenvalue is first defined as a scalar λ for which the equation Ax = λx has a
non-zero solution x, and the eigenvectors for λ are then defined as all
vectors x, including the zero vector, that satisfy this equation. The two
definitions are entirely equivalent except in one regard: they disagree on
whether the zero vector counts as an eigenvector. The convention of allowing
the zero vector to be an eigenvector (although still not allowing it to
determine eigenvalues) avoids repeating the non-zero criterion every time one
proves a theorem, and simplifies the definitions of concepts such as
eigenspace, which would otherwise, rather non-intuitively, be made up of
vectors that are not all eigenvectors.
Eigendecomposition
The spectral theorem for matrices can be stated as follows. Let A be a
square n × n matrix. Let q1, ..., qk be an eigenvector basis, i.e. an indexed set
of k linearly independent eigenvectors, where k is the dimension of the space
spanned by the eigenvectors of A. If k = n, then A can be written

A = Q Λ Q^(−1)

where Q is the n × n matrix whose ith column is the eigenvector qi and Λ is
the diagonal matrix whose diagonal entries are the corresponding eigenvalues.
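This decomposition can be verified numerically; a sketch with NumPy, using an arbitrary diagonalizable 2 × 2 example matrix:

```python
import numpy as np

# An arbitrary example with a full set of eigenvectors (k = n = 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, Q = np.linalg.eig(A)      # eigenvalues, and eigenvector columns q1, q2
Lambda = np.diag(lam)          # diagonal matrix of eigenvalues

A_rebuilt = Q @ Lambda @ np.linalg.inv(Q)   # A = Q Lambda Q^-1
```

Rebuilding A from Q and Λ reproduces the original matrix, which is exactly what the eigendecomposition asserts.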
Applications
Schrödinger equation
Fig. 8. The wavefunctions associated with the bound states of an electron in
a hydrogen atom can be seen as the eigenvectors of the hydrogen atom
Hamiltonian as well as of the angular momentum operator. They are associated with
eigenvalues interpreted as their energies (increasing downward: n=1,2,3,...)
and angular momentum (increasing across: s, p, d,...). The illustration shows the
square of the absolute value of the wavefunctions. Brighter areas correspond to
higher probability density for a position measurement. The centre of each figure is
the atomic nucleus, a proton.
Molecular orbitals
In quantum mechanics, and in particular in atomic and molecular physics,
within the Hartree–Fock theory, the atomic and molecular orbitals can be
defined by the eigenvectors of the Fock operator. The corresponding
eigenvalues are interpreted as ionization potentials via Koopmans' theorem.
In this case, the term eigenvector is used in a somewhat more general
meaning, since the Fock operator is explicitly dependent on the orbitals and
their eigenvalues. If one wants to underline this aspect, one speaks of a
nonlinear eigenvalue problem. Such equations are usually solved by an
iteration procedure, called in this case the self-consistent field method.
In quantum chemistry, one often represents the Hartree–Fock equation in a
non-orthogonal basis set. This particular representation is a generalized
eigenvalue problem called the Roothaan equations.
Vibration analysis
Eigenvalue problems occur naturally in the vibration analysis of mechanical
structures with many degrees of freedom. The eigenvalues are used to
determine the natural frequencies (or eigenfrequencies) of vibration, and the
eigenvectors determine the shapes of these vibrational modes. The
orthogonality properties of the eigenvectors allow decoupling of the differential
equations so that the system can be represented as linear summation of the
eigenvectors. The eigenvalue problem of complex structures is often solved
using finite element analysis.
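As a sketch of this, consider a standard textbook system (not taken from this document): two equal masses joined to each other and to two walls by three equal springs. The eigenvalues of M⁻¹K are the squared natural frequencies, and the eigenvectors are the mode shapes:

```python
import numpy as np

# Two unit masses coupled by three unit-stiffness springs (illustrative).
# Free vibration: M x'' + K x = 0, so each mode satisfies K v = w^2 M v.
m, k = 1.0, 1.0
M = np.diag([m, m])
K = np.array([[2*k, -k],
              [-k, 2*k]])

w2, modes = np.linalg.eig(np.linalg.inv(M) @ K)  # squared eigenfrequencies
freqs = np.sqrt(w2)        # natural frequencies sqrt(k/m) and sqrt(3k/m)
```

The in-phase mode (1, 1) vibrates at √(k/m) and the out-of-phase mode (1, −1) at √(3k/m); for larger structures the same eigenvalue problem is assembled from finite elements.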
Eigenfaces
In image processing, processed images of faces can be seen as vectors whose
components are the brightnesses of each pixel; the eigenvectors of the
covariance matrix of a large set of face images, called eigenfaces, are used
in face recognition.
Tensor of inertia
In mechanics, the eigenvectors of the inertia tensor define the principal
axes of a rigid body. The tensor of inertia is a key quantity required in order to
determine the rotation of a rigid body around its centre of mass.
Stress tensor
In solid mechanics, the stress tensor is symmetric and so can be decomposed
into a diagonal tensor with the eigenvalues on the diagonal and eigenvectors
as a basis. Because it is diagonal, in this orientation, the stress tensor has
no shear components; the components it does have are the principal
components.
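A sketch of this diagonalization with NumPy, using made-up plane-stress values; `numpy.linalg.eigh` is intended for symmetric matrices and returns the principal stresses (eigenvalues) and principal directions (eigenvector columns):

```python
import numpy as np

# An illustrative symmetric plane-stress tensor (values in MPa, made up).
sigma = np.array([[50.0, 30.0],
                  [30.0, 20.0]])

principal, directions = np.linalg.eigh(sigma)  # principal stresses / axes

# Rotating into the principal-axis basis diagonalizes the tensor:
# the off-diagonal (shear) components vanish.
rotated = directions.T @ sigma @ directions
```

Using `eigh` rather than `eig` exploits the symmetry: the eigenvalues are guaranteed real and the eigenvectors orthonormal, so `directions` is a rotation of the coordinate frame.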
Eigenvalues of a graph
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue
of the graph's adjacency matrix A, or (increasingly) of the
graph's Laplacian matrix, which is either T−A (sometimes called the
Combinatorial Laplacian) or I−T -1/2AT −1/2 (sometimes called the Normalized
Laplacian), where T is a diagonal matrix with Tv, v equal to the degree of
vertex v, and in T −1/2, the v, vth entry is deg(v)-1/2. The kth principal
eigenvector of a graph is defined as either the eigenvector corresponding to
the kth largest or kth smallest eigenvalue of the Laplacian. The first principal
eigenvector of the graph is also referred to merely as the principal
eigenvector.
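These definitions can be made concrete for a small example graph (a three-vertex path, chosen here purely for illustration):

```python
import numpy as np

# Path graph 0 - 1 - 2: an illustrative adjacency matrix A.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

deg = A.sum(axis=1)              # vertex degrees: 1, 2, 1
T = np.diag(deg)                 # degree matrix
L = T - A                        # combinatorial Laplacian
T_inv_sqrt = np.diag(deg ** -0.5)
L_norm = np.eye(3) - T_inv_sqrt @ A @ T_inv_sqrt   # normalized Laplacian

evals = np.sort(np.linalg.eigvalsh(L))   # Laplacian spectrum: 0, 1, 3
```

The smallest Laplacian eigenvalue of a connected graph is always 0, with the constant vector as its eigenvector.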
The principal eigenvector is used to measure the centrality of its vertices. An
example is Google's PageRank algorithm. The principal eigenvector of a
modified adjacency matrix of the World Wide Web graph gives the page ranks
as its components. This vector corresponds to the stationary distribution of
the Markov chain represented by the row-normalized adjacency matrix;
however, the adjacency matrix must first be modified to ensure a stationary
distribution exists. The eigenvector corresponding to the second smallest
eigenvalue of the Laplacian can be used to partition the graph into clusters,
via spectral clustering. Other methods are also available for clustering.
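A minimal PageRank-style sketch for a made-up three-page web (the link structure is invented for illustration; the damping factor 0.85 is the customary choice). Power iteration converges to the principal eigenvector, which is the stationary distribution of page ranks:

```python
import numpy as np

# Tiny web: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
# M is the column-stochastic link matrix (each column sums to 1).
M = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])

d = 0.85                                    # standard damping factor
G = d * M + (1 - d) / 3 * np.ones((3, 3))   # the "modified" matrix

r = np.ones(3) / 3                          # start from a uniform guess
for _ in range(100):
    r = G @ r                               # power iteration

# r is the principal eigenvector: the PageRank of each page.
```

The damping term is the modification mentioned above: it makes the chain irreducible so a unique stationary distribution exists.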
Application in daily life
College town has two pizza restaurants and a large number of hungry pizza-loving
students and faculty. 5,000 people buy one pizza each week. Joe's Chicago Style
Pizza has the better pizza and 80% of the people who buy pizza each week
at Joe's return the following week. Steve's New York Style Pizza uses lower
quality cheese and doesn't have a very good sauce. As a result only 40% of the
people who buy pizza at Steve's each week return the following week. As usual,
we can represent this situation by a discrete dynamical system

P(n+1) = A P(n)

where P(n) = (number buying at Joe's, number buying at Steve's) in week n and

A = | 0.8  0.6 |
    | 0.2  0.4 |

Each column records where one restaurant's customers go the following week:
80% of Joe's customers return to Joe's and 20% switch to Steve's, while 40%
of Steve's customers return and 60% switch to Joe's.
Next enter the matrix above in the Java applet. Play with the applet a bit to see if
you notice anything worthy of note.
The two screenshots below show some behaviour that is worthy of note.
Notice in the screen shot above that when the first coordinate (Joe's Pizza) of the
vector x is roughly three times its second coordinate (Steve's Pizza) then Ax = x.
Notice in the screen shot below that when the first and second coordinates have the
same absolute value but opposite signs that Ax lines up with x but is roughly one-
fifth as long.
Notice the following lines from the evaluated Mathematica notebook.
Do you notice any connection between this rather cryptic Mathematica output and
our observations above? We will return to this point later, but first we look at
another application.
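Both screenshot observations can be checked with NumPy, assuming the transition matrix implied by the return rates above (80% of Joe's customers return and 20% switch; 40% of Steve's return and 60% switch):

```python
import numpy as np

# Columns: where this week's Joe's / Steve's customers buy next week.
A = np.array([[0.8, 0.6],
              [0.2, 0.4]])

lam, vecs = np.linalg.eig(A)      # eigenvalues: 1.0 and 0.2

# Eigenvector for eigenvalue 1: Joe's coordinate is three times Steve's,
# matching the first screenshot (Ax = x at a 3:1 split).
v1 = vecs[:, np.argmax(lam)]
ratio = v1[0] / v1[1]             # = 3

# The other eigenvalue, 0.2, explains the second screenshot: along the
# (1, -1) direction, Ax lines up with x at one-fifth the length.
```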
Example 2
The three figures below show the current population by age and gender and the
projected population by age and gender in 25 years for three countries. These
figures were obtained from the United States Census Bureau International Data
Base. Demographic information like this is extraordinarily important for
understanding the vitality, needs, and resources of a country. The three examples
given above have very different properties. The United States and Japan are highly
developed countries with quite different demographics. Compare the fraction of
the population that is young in the two countries both currently and, as projected
by the U.S. Census Bureau, in 25 years. Notice how different these two countries
are from Pakistan.
We will look at a very simplified model of population growth for a hypothetical
species and habitat. You can build much more realistic models using data available
at the United States Census Bureau. Our simplified model will ignore gender and
immigration and emigration and will use only two age groups. These are serious
simplifications but the tools we will develop this semester will enable us to look at
much more realistic models. This example is just a starting point. Ignoring
immigration, for example, ignores one of the most important differences between
Japan, whose immigration is close to zero, and the United States.
Our model has two age groups -- the young (less than one year old) and the old
(one or more years old). Thus, each year the population is represented by a two-
dimensional vector
P = (A, B)
whose first coordinate is the young population and whose second coordinate is the
old population. In this example the fertility rate for young people is 30% which
means that on the average each young individual gives birth to 0.30 new (and
hence young) individuals. The fertility rate for old individuals is 80% which means
that on the average each old individual gives birth to 0.80 new (and hence young)
individuals. In our model the survival rate for young individuals is 90% and for old
individuals is 10%. This gives us the model

P(n+1) = A P(n),  where  A = | 0.3  0.8 |
                             | 0.9  0.1 |

The first row produces next year's young (0.30 births per young individual
plus 0.80 births per old individual); the second row produces next year's old
(90% of the young survive to become old, and 10% of the old survive another
year).
Enter the matrix A into the Java applet and experiment. Do you notice anything
noteworthy?
The two screen shots below show two things that are worthy of note. The first one
shows a vector that when multiplied by the matrix A results in another vector that
is in the same direction but somewhat longer. The second one shows a vector that
when multiplied by the matrix A results in another vector in exactly the opposite
direction that is considerably shorter.
Click here for another Mathematica notebook. This Mathematica notebook explores this same
model. Evaluate the notebook and compare the results with your observations above. Notice
that over time the total population is growing at a rate of 5.44% per year and seems to be
settling into a pattern with the young population being 51.4% of the total population.
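The figures quoted here follow directly from the eigenvalues of the model matrix. A sketch with NumPy, building the matrix from the stated fertility rates (0.30 young, 0.80 old) and survival rates (0.90 young, 0.10 old):

```python
import numpy as np

# Rows: next year's young (births) and next year's old (survivors).
A = np.array([[0.3, 0.8],    # young(n+1) = 0.3*young(n) + 0.8*old(n)
              [0.9, 0.1]])   # old(n+1)   = 0.9*young(n) + 0.1*old(n)

lam, vecs = np.linalg.eig(A)
i = np.argmax(lam)                 # dominant eigenvalue ~ 1.0544
growth_rate = lam[i] - 1.0         # long-run growth: ~5.44% per year

v = np.abs(vecs[:, i])             # dominant eigenvector: stable age mix
young_fraction = v[0] / v.sum()    # ~0.514 of the population is young
```

The second eigenvalue is negative (about −0.654), which is what produced the reversed, shorter vector in the second screenshot.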
REFERENCES
www.sosmath.com/matrix
www.miislita.com/.../
www.maplesoft.com/applications
tutorial.math.lamar.edu/Classes
www.cse.unr.edu/~bebis