
Change of basis

In linear algebra, a basis for a vector space is a linearly independent set


spanning the vector space.[1][2][3] This article deals mainly with finite-
dimensional vector spaces, but many of the theorems are also valid for
infinite-dimensional vector spaces.[4] A basis for a vector space of
dimension n is a set of n vectors (α1 , …, αn ), called basis vectors, with
the property that every vector in the space can be expressed as a unique
linear combination of the basis vectors.[5][6][7] The matrix
representations of operators are also determined by the chosen basis.
Since it is often desirable to work with more than one basis for a vector
space, it is of fundamental importance in linear algebra to be able to
easily transform coordinate-wise representations of vectors and
operators taken with respect to one basis to their equivalent
representations with respect to another basis. Such a transformation is
called a change of basis.[8][9][10] For example, if M is a matrix whose columns comprise a basis of Rn, a vector v (in the standard basis) can also be expressed as a linear combination of M's columns by the vector c = M−1v. By definition then, v = Mc. If M's columns form an orthonormal basis, then M's inverse is its transpose MT, and we have the change of basis as c = MTv, i.e. the vector of v's scalar projections onto the columns of M.

[Figure: A linear combination of one basis set of vectors (purple) obtains new vectors (red). If they are linearly independent, these form a new basis set. The linear combinations relating the first set to the other extend to a linear transformation, called the change of basis.]
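The relations above can be sketched numerically; the particular matrix M, vector v, and orthonormal matrix Q below are illustrative choices, not taken from the article:

```python
import numpy as np

# Columns of M form a (non-orthonormal) basis of R^2 -- an illustrative choice.
M = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])       # a vector, written in the standard basis

c = np.linalg.solve(M, v)      # c = M^-1 v: coordinates of v relative to M's columns
# By definition, v = M c.

# Orthonormal case: the inverse is the transpose, so the new coordinates
# are the scalar projections of v onto the columns of Q.
Q = np.array([[1.0, -1.0],
              [1.0,  1.0]]) / np.sqrt(2)
c_orth = Q.T @ v               # same result as np.linalg.inv(Q) @ v
```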

Although the symbol R used below can be taken to mean the field of
real numbers, the results are valid if R is replaced by any field F.
Although the terminology of vector spaces is used below, the results
discussed hold whenever R is a commutative ring and vector space is
everywhere replaced with free R-module.

Contents

Preliminary notions
    Transformation matrix
    Uniqueness of linear transformations
        Theorem
    Coordinate isomorphism
    Matrix of a set of vectors
Change of coordinates of a vector
    Two dimensions
    Three dimensions
    General case
The matrix of a linear transformation
    Change of basis
The matrix of an endomorphism
    Change of basis
The matrix of a bilinear form
    Change of basis
Important instances
See also
Notes
References
External links

[Figure: A vector represented by two different bases (purple and red arrows).]

Preliminary notions

Transformation matrix

The standard basis for Rn is the ordered sequence (e1, …, en), where ei is the element of Rn with 1 in the ith place and 0s elsewhere. For example, the standard basis for R2 would be e1 = (1, 0), e2 = (0, 1).

If T : Rn → Rm is a linear transformation, the m × n matrix t associated with T is the matrix whose jth column is T(ej), for j = 1, …, n, that is

t = [T(e1) T(e2) … T(en)]

In this case we have T(x) = tx for all x in Rn, where we regard x as a column vector and the multiplication on the right side is matrix multiplication. It is a basic fact in linear algebra that the vector space Hom(Rn, Rm) of all linear transformations from Rn to Rm is naturally isomorphic to the space of m × n matrices over R; that is, a linear transformation T is for all intents and purposes equivalent to its matrix t.
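This column-by-column construction can be checked directly; the linear map T below is an illustrative example, not one from the article:

```python
import numpy as np

# An illustrative linear map T : R^2 -> R^3.
def T(x):
    x1, x2 = x
    return np.array([x1 + x2, 2.0 * x1, x2 - x1])

n = 2
E = np.eye(n)
# jth column of the matrix t is T(e_j).
t = np.column_stack([T(E[:, j]) for j in range(n)])

x = np.array([3.0, -1.0])
# Multiplying by t reproduces the action of T on any vector.
y = t @ x
```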

Uniqueness of linear transformations

We will also make use of the following observation.

Theorem

Let V and W be vector spaces, let {α1, …, αn} be a basis for V, and let γ1, …, γn be any vectors in W. Then there exists a unique linear transformation T : V → W with T(αi) = γi, for i = 1, …, n.

This unique T is defined by

T(x1α1 + ⋯ + xnαn) = x1γ1 + ⋯ + xnγn

Of course, if {γ1, …, γn} happens to be a basis for W, then T is bijective as well as linear; in other words, T is an isomorphism. If in this case we also have W = V, then T is said to be an automorphism.

Coordinate isomorphism

Now let V be a vector space over R and suppose {α1, …, αn} is a basis for V. By definition, if ξ is a vector in V, then ξ = x1α1 + ⋯ + xnαn for a unique choice of scalars x1, …, xn, called the coordinates of ξ relative to the ordered basis {α1, …, αn}. The vector x = (x1, …, xn) is called the coordinate tuple of ξ relative to this basis.

The unique linear map φ : Rn → V with φ(ei) = αi for i = 1, …, n is called the coordinate isomorphism for V and the basis {α1, …, αn}. Thus φ(x) = ξ if and only if ξ = x1α1 + ⋯ + xnαn.

Matrix of a set of vectors


A set of vectors can be represented by a matrix in which each column consists of the components of the corresponding vector of the set. As a basis is a set of vectors, a basis can be given by a matrix of this kind. It will be shown below that the change of basis of any object of the space is related to this matrix. For example, vectors transform with its inverse (and they are therefore called contravariant objects).

Change of coordinates of a vector


First we examine the question of how the coordinates of a vector in the vector space change when we select another
basis.

Two dimensions

Given a matrix M whose columns are the vectors of the new basis of the space (described in terms of the original basis), the new coordinates for a column vector v are given by the matrix product M−1v. For this reason, it is said that ordinary vectors are contravariant objects.

Any finite set of vectors can be represented by a matrix in which its columns are the coordinates of the given vectors. As an example in dimension 2, consider the pair of vectors obtained by rotating the standard basis counterclockwise by 45°. The matrix whose columns are the coordinates of these vectors is

M = [[cos 45°, −sin 45°], [sin 45°, cos 45°]] = (1/√2)[[1, −1], [1, 1]]

If we want to change any vector of the space to this new basis, we only need to left-multiply its components by the inverse of this matrix.[11]
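The 45° example can be worked through concretely; the test vector below is an arbitrary illustrative choice:

```python
import numpy as np

theta = np.pi / 4                                  # 45 degrees
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # columns = rotated basis vectors

v = np.array([1.0, 0.0])                           # components in the original basis
v_new = np.linalg.inv(M) @ v                       # components relative to the rotated basis
# Since the rotated basis is orthonormal, inv(M) equals M.T here.
```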

Three dimensions

For example, let a new basis of R3 be given by a rotation specified by its Euler angles. The matrix of the basis will have as columns the components of each new basis vector; this matrix is the corresponding Euler rotation matrix (see the Euler angles article for its explicit entries).

Again, any vector of the space can be changed to this new basis by left-multiplying its components by the inverse of this matrix.
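A sketch of the three-dimensional case, assuming the ZXZ Euler-angle convention (conventions vary; the angles chosen below are arbitrary illustrative values):

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

alpha, beta, gamma = 0.3, 0.5, 0.7        # arbitrary Euler angles (ZXZ convention)
R = Rz(alpha) @ Rx(beta) @ Rz(gamma)      # columns = components of the new basis vectors

v = np.array([1.0, 2.0, 3.0])             # components in the original basis
v_new = np.linalg.inv(R) @ v              # components relative to the new basis
```

Since R is a rotation, its inverse is its transpose, so `R.T @ v` gives the same result.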

General case

Suppose A = (α1, …, αn) and B = (β1, …, βn) are two ordered bases for an n-dimensional vector space V over a field K. Let φA and φB be the corresponding coordinate isomorphisms (linear maps) from Kn to V, i.e. φA(ei) = αi and φB(ei) = βi for i = 1, …, n, where ei denotes the n-tuple with ith entry equal to 1, and all other entries equal to 0.

If x is the coordinate n-tuple of a vector v in V with respect to the basis A, so that v = φA(x), then the coordinate tuple of v with respect to B is the tuple y such that v = φB(y), i.e. y = φB−1(φA(x)), so that for any vector in V, the map φB−1 ∘ φA maps its coordinate tuple with respect to A to its coordinate tuple with respect to B. Since this map is an automorphism on Kn, it therefore has an associated square matrix C. Moreover, the ith column of C is φB−1(φA(ei)) = φB−1(αi), that is, the coordinate tuple of αi with respect to B.

Thus, for any vector v in V, if x is the coordinate tuple of v with respect to A, then the tuple y = Cx is the coordinate tuple of v with respect to B. The matrix C is called the transition matrix from A to B.
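A transition matrix can be computed column by column exactly as described; the two bases of R^2 below are illustrative choices:

```python
import numpy as np

# Two ordered bases of R^2, stored as matrix columns (illustrative choices).
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # basis A = (a1, a2)
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])   # basis B = (b1, b2)

# ith column of C = coordinate tuple of a_i with respect to B,
# i.e. C solves B C = A.
C = np.linalg.solve(B, A)

x = np.array([2.0, -1.0])    # coordinates of some vector v with respect to A
v = A @ x                    # the vector itself, in standard coordinates
y = C @ x                    # coordinates of the same v with respect to B
```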

The matrix of a linear transformation

Now suppose T : V → W is a linear transformation, {α1 , …, αn } is a basis for V and {β1 , …, βm} is a basis for W. Let φ
and ψ be the coordinate isomorphisms for V and W, respectively, relative to the given bases. Then the map
T1 = ψ−1 ∘ T ∘ φ is a linear transformation from Rn to Rm, and therefore has a matrix t; its jth column is ψ−1 (T(αj)) for
j = 1, …, n. This matrix is called the matrix of T with respect to the ordered bases {α1 , …, αn } and {β1 , …, βm}. If η = T(ξ)
and y and x are the coordinate tuples of η and ξ, then y = ψ−1 (T(φ(x))) = tx. Conversely, if ξ is in V and x = φ−1 (ξ) is the
coordinate tuple of ξ with respect to {α1 , …, αn }, and we set y = tx and η = ψ(y), then η = ψ(T1 (x)) = T(ξ). That is, if ξ is
in V and η is in W and x and y are their coordinate tuples, then y = tx if and only if η = T(ξ).

Theorem Suppose U, V and W are vector spaces of finite dimension and an ordered basis is chosen for each. If T : U → V and S : V → W are linear transformations with matrices t and s respectively, then the matrix of the linear transformation S ∘ T : U → W (with respect to the given bases) is st.
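The composition rule can be verified on concrete matrices; the matrices t and s below are illustrative choices:

```python
import numpy as np

t = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])     # matrix of T : R^2 -> R^3
s = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]]) # matrix of S : R^3 -> R^2

st = s @ t                      # matrix of S ∘ T : R^2 -> R^2

x = np.array([2.0, 5.0])
y = s @ (t @ x)                 # applying T, then S
```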

Change of basis

Now we ask what happens to the matrix of T : V → W when we change bases in V and W. Let {α1 , …, αn } and
{β1 , …, βm} be ordered bases for V and W respectively, and suppose we are given a second pair of bases {α′ 1 , …, α′ n } and
{β′ 1 , …, β′ m}. Let φ1 and φ2 be the coordinate isomorphisms taking the usual basis in Rn to the first and second bases for
V, and let ψ1 and ψ2 be the isomorphisms taking the usual basis in Rm to the first and second bases for W.

Let T1 = ψ1 −1 ∘ T ∘ φ1 , and T2 = ψ2 −1 ∘ T ∘ φ2 (both maps taking Rn to Rm), and let t1 and t2 be their respective
matrices. Let p and q be the matrices of the change-of-coordinates automorphisms φ2 −1 ∘ φ1 on Rn and ψ2 −1 ∘ ψ1 on Rm.

The relationships of these various maps to one another can be illustrated in a commutative diagram. Since we have
T2 = ψ2 −1 ∘ T ∘ φ2 = (ψ2 −1 ∘ ψ1 ) ∘ T1 ∘ (φ1 −1 ∘ φ2 ), and since composition of linear maps corresponds to matrix
multiplication, it follows that

t2 = q t1 p−1.

Since the change-of-basis formula involves the basis matrix once and its inverse once, these objects are said to be 1-co-, 1-contravariant.
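The formula t2 = q t1 p−1 can be checked numerically; the matrices t1, p, and q below are illustrative (p and q chosen invertible):

```python
import numpy as np

t1 = np.array([[1.0, 2.0],
               [0.0, 1.0],
               [1.0, -1.0]])            # matrix of T in the first pair of bases
p = np.array([[2.0, 1.0],
              [1.0, 1.0]])              # change-of-coordinates matrix on R^2 (domain)
q = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])         # change-of-coordinates matrix on R^3 (codomain)

t2 = q @ t1 @ np.linalg.inv(p)          # change-of-basis formula

x1 = np.array([3.0, -2.0])              # coordinates w.r.t. the first basis of V
x2 = p @ x1                             # the same vector w.r.t. the second basis
# Both routes must give the coordinates of T(v) w.r.t. the second basis of W:
# t2 applied to x2, or q applied to t1 x1.
```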

The matrix of an endomorphism


An important case of the matrix of a linear transformation is that of an endomorphism, that is, a linear map from a vector
space V to itself: that is, the case that W = V. We can naturally take {β1 , …, βn } = {α1 , …, αn } and
{β′1 , …, β′n } = {α′1 , …, α′n }. The matrix of the linear map T is necessarily square.

Change of basis

We apply the same change of basis, so that q = p and the change of basis formula becomes

t2 = p t1 p−1.

In this situation the invertible matrix p is called a change-of-basis matrix for the vector space V, and the equation above
says that the matrices t1 and t2 are similar.

The matrix of a bilinear form


A bilinear form on a vector space V over a field R is a mapping V × V → R which is linear in both arguments. That is,
B : V × V → R is bilinear if the maps v ↦ B(v, w) and v ↦ B(w, v) are linear for each w in V. This definition applies equally well to modules over a commutative ring, with linear maps being module homomorphisms.

The Gram matrix G attached to a basis {α1, …, αn} is defined by

Gij = B(αi, αj)

If x = (x1, …, xn) and y = (y1, …, yn) are the coordinate tuples of vectors v, w with respect to this basis, then the bilinear form is given by

B(v, w) = xT G y

The matrix G will be symmetric if the bilinear form B is a symmetric bilinear form.
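A small check of the identity B(v, w) = xT G y, taking the standard dot product on R^2 as the bilinear form and an illustrative basis:

```python
import numpy as np

basis = np.array([[1.0, 1.0],
                  [0.0, 1.0]])           # columns a1, a2 (illustrative basis)

def B(v, w):
    return float(v @ w)                  # the standard dot product as a bilinear form

# Gram matrix: G[i, j] = B(a_i, a_j).
G = np.array([[B(basis[:, i], basis[:, j]) for j in range(2)] for i in range(2)])

x = np.array([1.0, 2.0])                 # coordinates of v in the basis
y = np.array([3.0, -1.0])                # coordinates of w in the basis
v = basis @ x
w = basis @ y
# B(v, w) should equal x^T G y.
```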

Change of basis

If P is the invertible matrix representing a change of basis from {α1, …, αn} to {α′1, …, α′n}, then the Gram matrix transforms by the matrix congruence

G′ = PT G P
Important instances
In abstract vector space theory the change of basis concept is innocuous; it seems to add little to science. Yet there are cases
in associative algebras where a change of basis is sufficient to turn a caterpillar into a butterfly, figuratively speaking:

In the split-complex number plane there is an alternative "diagonal basis". The standard hyperbola
x² − y² = 1 becomes xy = 1 after the change of basis. Transformations of the plane that leave the
hyperbolae in place correspond to each other, modulo a change of basis. The contextual difference is
profound enough to then separate Lorentz boost from squeeze mapping. A panoramic view of the literature
of these mappings can be taken using the underlying change of basis.
With the 2 × 2 real matrices one finds the beginning of a catalogue of linear algebras due to Arthur Cayley.
His associate James Cockle put forward in 1849 his algebra of coquaternions or split-quaternions, which
are the same algebra as the 2 × 2 real matrices, just laid out on a different matrix basis. Once again it is the
concept of change of basis that synthesizes Cayley's matrix algebra and Cockle's coquaternions.
A change of basis turns a 2 × 2 complex matrix into a biquaternion.
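The split-complex example above can be checked numerically, assuming the common diagonal-basis convention u = x + y, v = x − y (so that x² − y² = uv):

```python
import numpy as np

# Points on the unit hyperbola x^2 - y^2 = 1, parametrized by cosh/sinh.
for t in np.linspace(-1.0, 1.0, 5):
    x, y = np.cosh(t), np.sinh(t)
    u, v = x + y, x - y           # diagonal-basis coordinates
    # On the hyperbola in the standard basis...
    assert np.isclose(x**2 - y**2, 1.0)
    # ...and on u v = 1 in the diagonal basis.
    assert np.isclose(u * v, 1.0)
```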

See also
Coordinate vector
Integral transform, the continuous analogue of change of basis.
Active and passive transformation

Notes
1. Anton (1987, p. 171)
2. Beauregard & Fraleigh (1973, p. 93)
3. Nering (1970, p. 15)
4. Nering (1970, p. 15)
5. Anton (1987, pp. 74–76)
6. Beauregard & Fraleigh (1973, pp. 194–195)
7. Nering (1970, p. 15)
8. Anton (1987, pp. 221–237)
9. Beauregard & Fraleigh (1973, pp. 240–243)
10. Nering (1970, pp. 50–52)
11. "Change of Basis - HMC Calculus Tutorial" (https://web.archive.org/web/20160716213428/https://math.hmc.edu/calculus/tutorials/changebasis/). www.math.hmc.edu. Archived from the original (https://www.math.hmc.edu/calculus/tutorials/changebasis/) on 2016-07-16. Retrieved 2017-08-22. And the explanation / proof "Why?" (https://www.math.hmc.edu/calculus/tutorials/changebasis/why.html). www.math.hmc.edu. Retrieved 2017-08-22.

References
Anton, Howard (1987), Elementary Linear Algebra (5th ed.), New York: Wiley, ISBN 0-471-84819-0
Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields (https://archive.org/details/firstcourseinlin0000beau), Boston: Houghton Mifflin Company, ISBN 0-395-14017-X
Nering, Evar D. (1970), Linear Algebra and Matrix Theory (2nd ed.), New York: Wiley, LCCN 76091646 (https://lccn.loc.gov/76091646)

External links
MIT Linear Algebra Lecture on Change of Basis (http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-31-change-of-basis-image-compression/), from MIT OpenCourseWare
Khan Academy Lecture on Change of Basis (https://www.youtube.com/watch?v=1j5WnqwMdCk), from Khan Academy

Retrieved from "https://en.wikipedia.org/w/index.php?title=Change_of_basis&oldid=990594888"

This page was last edited on 25 November 2020, at 11:20 (UTC).
