A Study of the Basic Concepts of Linear Equations


INTRODUCTION

ALGEBRA AND LINEAR EQUATIONS

Elementary algebra is a fundamental and relatively basic form of

algebra taught to students who are presumed to have little or no

formal knowledge of mathematics beyond arithmetic. While in

arithmetic only numbers and their arithmetical operations (such as +,

−, ×, ÷) occur, in algebra one also uses symbols (such as x and y, or

a and b) to denote numbers. These are called variables. This is

useful because:

• It allows the generalization of arithmetical equations (and

inequalities) to be stated as laws (such as a + b = b + a for all a

and b), and thus is the first step to the systematic study of the

properties of the real number system.

• It allows reference to numbers which are not known. In the

context of a problem, a variable may represent a certain value

which is not yet known, but which may be found through the

formulation and manipulation of equations.

• It allows the exploration of mathematical relationships between

quantities (such as "if you sell x tickets, then your profit will be

3x − 10 dollars").

These three are the main strands of elementary algebra, which

should be distinguished from abstract algebra, a more advanced topic

generally taught to college students.

In elementary algebra, an "expression" may contain numbers,

variables and arithmetical operations. These are usually written (by

convention) with 'higher-power' terms on the left (see polynomial); a

few examples are:

In more advanced algebra, an expression may also include

elementary functions.

A typical algebra problem that can be found in almost any middle school in the world.

An "equation" is the claim that two expressions are equal. Some

equations are true for all values of the involved variables (such

as a + b = b + a); such equations are called "identities".

"Conditional" equations are true for only some values of the

involved variables, for example x² − 1 = 4. The values of the variables
that make such an equation true are called its solutions.

LAWS OF ELEMENTARY ALGEBRA

Properties of operations

• The operation of addition (+) …

o is written a + b;

o is commutative: a + b = b + a;

o is associative: (a + b) + c = a + (b + c);

o has an inverse operation called subtraction: (a + b) − b =

a, which is the same as adding a negative number, a − b

= a + (−b);

o has a special element 0 which preserves numbers: a + 0

= a.

• The operation of multiplication (×) …

o is written a × b or a ⋅ b;

o is commutative: a × b = b × a;

o is associative: (a × b) × c = a × (b × c);

o is abbreviated by juxtaposition: a × b ≡ ab;

o has a special element 1 which preserves numbers: a × 1

= a;

o has, for non-zero numbers, an inverse operation called

division: (ab)/b = a, which is the same as multiplying by a

reciprocal, a/b = a(1/b);

o distributes over addition: (a + b)c = ac + bc;

• The operation of exponentiation …

o is written a^b;

o means repeated multiplication: a^n = a × a × … × a (n

times);

o is neither commutative nor associative: in general a^b ≠ b^a

and (a^b)^c ≠ a^(b^c);

o has an inverse operation, called the logarithm: a^(log_a b) = b =

log_a(a^b);

o can be written in terms of n-th roots: a^(m/n) ≡ (ⁿ√a)^m and thus

even roots of negative numbers do not exist in the real

number system. (See: complex number system)

o has a special element 1 which preserves numbers: a^1 = a;

o distributes over multiplication: (ab)^c = a^c · b^c;

o has the property: a^b · a^c = a^(b + c);

o has the property: (a^b)^c = a^(bc).

Properties of equality

• The relation of equality (=) is …

o reflexive: a = a;

o symmetric: if a = b then b = a;

o transitive: if a = b and b = c then a = c.

Laws of equality

• The relation of equality (=) has the property …

o that if a = b and c = d then a + c = b + d and ac = bd;

o that if a = b then a + c = b + c;

o that if two symbols are equal, then one can be substituted

for the other.

Laws of inequality

• The relation of inequality (<) has the property …

o of transitivity: if a < b and b < c then a < c;

o that if a < b and c < d then a + c < b + d;

o that if a < b and c > 0 then ac < bc;

o that if a < b and c < 0 then bc < ac.

EXAMPLES

Linear equations in one variable

The simplest equations to solve are linear equations that have only

one variable. They contain only constant numbers and a single

variable without an exponent. For example:

The central technique is add, subtract, multiply, or divide both sides

of the equation by the same number in order to isolate the variable on

one side of the equation. Once the variable is isolated, the other side

of the equation is the value of the variable. For example, by

subtracting 4 from both sides in the equation above:

which simplifies to:

Dividing both sides by 2:

simplifies to the solution:

The general case, ax + b = c with a ≠ 0,

follows the same format for the solution: x = (c − b)/a.
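
As a quick illustration of this procedure, the sketch below isolates the variable with the sympy library (any computer algebra system would do). The concrete equation 2x + 4 = 12 is only an assumed stand-in, since the numbers of the worked example above did not survive, while the general case ax + b = c is solved symbolically.

```python
# Minimal sketch: isolating the variable in a linear equation with sympy.
# The equation 2*x + 4 = 12 is an illustrative stand-in for the worked
# example above; the general case a*x + b = c is solved symbolically.
from sympy import symbols, Eq, solve

x, a, b, c = symbols('x a b c')

print(solve(Eq(2*x + 4, 12), x))   # [4]
print(solve(Eq(a*x + b, c), x))    # [(c - b)/a], the general case
```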

QUADRATIC EQUATIONS

Quadratic equations can be expressed in the form ax² + bx + c = 0,

where a is not zero (if it were zero, then the equation would not be

quadratic but linear). Because of this a quadratic equation must

contain the term ax², which is known as the quadratic term. Hence a

≠ 0, and so we may divide by a and rearrange the equation into the

standard form x² + px = q,

where p = b/a and q = −c/a. Solving this, by a process known as

completing the square, leads to the quadratic formula.

Quadratic equations can also be solved using factorization (the

reverse process of which is expansion, but for two linear terms is

sometimes denoted FOILing). As an example of factoring, consider

x² + 3x − 10 = 0,

which is the same thing as

(x − 2)(x + 5) = 0.

It follows from the zero-product property that either x = 2 or x = −5 are

the solutions, since precisely one of the factors must be equal to

zero. All quadratic equations will have two solutions in the complex

number system, but need not have any in the real number system.

For example, x² + 1 = 0

has no real number solution since no real number squared equals −1.

Sometimes a quadratic equation has a root of multiplicity 2, such as x² + 2x + 1 = 0.

For this equation, −1 is a root of multiplicity 2.
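
A minimal numeric sketch of the quadratic formula x = (−b ± √(b² − 4ac)) / (2a); cmath is used so that a negative discriminant still yields the two complex roots. The three coefficient sets below mirror the factoring example (roots 2 and −5), the case x² + 1 = 0 with no real solution, and the double root −1.

```python
# Minimal sketch of the quadratic formula; cmath handles negative discriminants.
import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b * b - 4 * a * c)          # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, 3, -10))   # roots 2 and -5, as in the factoring example
print(quadratic_roots(1, 0, 1))     # x² + 1 = 0: roots ±i, no real solution
print(quadratic_roots(1, 2, 1))     # x² + 2x + 1 = 0: the double root -1
```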

Exponential and logarithmic equations

An exponential equation is an equation of the form a^x = b for a > 0,

which has solution x = log_a b

when b > 0. Elementary algebraic techniques are used to rewrite a

given equation in the above way before arriving at the solution. For

example, if

then, by subtracting 1 from both sides of the equation, and then

dividing both sides by 3 we obtain

whence

or

A logarithmic equation is an equation of the form log_a x = b for a > 0,

which has solution x = a^b.

For example, if

then, by adding 2 to both sides of the equation, followed by dividing

both sides by 4, we get

whence

from which we obtain
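
The intermediate equations of the two worked examples above were lost, so the sketch below only shows how such solutions are evaluated numerically; the values a = 2 and b = 9 are assumptions for illustration.

```python
# Minimal sketch: evaluating the solutions of a**x = b and log_a(x) = b.
import math

a, b = 2.0, 9.0

x_exp = math.log(b, a)   # solves a**x == b  ->  x = log_a(b)
x_log = a ** b           # solves log_a(x) == b  ->  x = a**b

print(x_exp, a ** x_exp)     # 3.1699...  9.0  (check)
print(math.log(x_log, a))    # recovers b      (check)
```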

Radical equations

A radical equation is an equation of the form x^(m/n) = a, for m, n

integers, which has solution x = a^(n/m)

if m is odd, and solution x = ±a^(n/m)

if m is even and a ≥ 0. For example, if

then

whence either x = 8 − 5 = 3, or x = −8 − 5 = −13.
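
A small sketch of the even-m case, in which both signed roots appear. The shift by 5 mirrors the quoted answers x = 8 − 5 and x = −8 − 5; the exponent 2/3 and right-hand side 4 are assumptions chosen only to reproduce the value 8.

```python
# Minimal sketch for x**(m/n) == a with m even: both signed roots appear.
m, n, a, shift = 2, 3, 4.0, 5.0

root = a ** (n / m)                   # a**(n/m) = 8.0
solutions = [root - shift, -root - shift]
print(solutions)                      # [3.0, -13.0]
```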

SYSTEM OF LINEAR EQUATIONS

In the case of a system of linear equations, for instance two

equations in two variables, it is often possible to find values of

both variables that satisfy both equations.

FIRST METHOD OF FINDING A SOLUTION

An example of a system of linear equations could be the following:

Multiplying the terms in the second equation by 2:

Adding the two equations together to get:

which simplifies to

Since the fact that x = 2 is known, it is then possible to deduce that y

= 3 from either of the original two equations (by substituting 2 for x).

The full solution to this problem is then (x, y) = (2, 3).

Note that this is not the only way to solve this specific system; y could

have been solved before x.
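
A minimal sketch of the same elimination idea done numerically with numpy. The original system's coefficients were lost, so the matrix below is an assumed example chosen to have the solution x = 2, y = 3 quoted above.

```python
# Minimal sketch: solving a 2x2 linear system with numpy.linalg.solve.
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, -1.0]])        # assumed coefficients: 4x + 2y = 14, 2x - y = 1
b = np.array([14.0, 1.0])

print(np.linalg.solve(A, b))       # [2. 3.]
```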

SECOND METHOD OF FINDING A SOLUTION

Another way of solving the same system of linear equations is by

substitution.

An equivalent for y can be deduced by using one of the two

equations. Using the second equation:

Subtracting 2x from each side of the equation:

and multiplying by -1:

Using this y value in the first equation in the original system:

Adding 2 on each side of the equation:

which simplifies to

Using this value in one of the equations, the same solution as in the

previous method is obtained.

Note that this is not the only way to solve this specific system; in this

case as well, y could have been solved before x.
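
The same substitution idea, sketched with sympy on the assumed system used above (2x − y = 1 and 4x + 2y = 14), not the lost original: y is isolated from one equation and substituted into the other.

```python
# Minimal sketch of substitution: isolate y, substitute, solve for x.
from sympy import symbols, Eq, solve

x, y = symbols('x y')

y_expr = solve(Eq(2*x - y, 1), y)[0]          # y = 2*x - 1
x_val = solve(Eq(4*x + 2*y_expr, 14), x)[0]   # substitute and solve -> 2
print(x_val, y_expr.subs(x, x_val))           # 2 3
```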

OTHER TYPES OF SYSTEMS OF LINEAR EQUATIONS

Unsolvable systems

In the above example, it is possible to find a solution. However, there

are also systems of equations which do not have a solution. An

obvious example would be:

The second equation in the system has no possible solution.

Therefore, this system can't be solved. However, not all incompatible

systems are recognized at first sight. As an example, the following

system is studied:

When trying to solve this (for example, by using the method of

substitution above), the second equation, after adding − 2x on both

sides and multiplying by −1, results in:

And using this value for y in the first equation:

No variables are left, and the equality is not true. This means that the

first equation can't provide a solution for the value for y obtained in

the second equation.

Undetermined systems

There are also systems which have multiple or infinite solutions, in

opposition to a system with a unique solution (meaning, two unique

values for x and y). For example:

Isolating y in the second equation:

And using this value in the first equation in the system:

The equality is true, but it does not provide a value for x. Indeed, one

can easily verify (by just filling in some values of x) that for any x

there is a solution as long as y = −2x + 6. There are infinitely many

solutions for this system.

Over and underdetermined systems

Systems with more variables than the number of linear equations do

not have a unique solution. An example of such a system is

Such a system is called underdetermined; when trying to find a

solution, one or more variables can only be expressed in relation to

the other variables, but cannot be determined numerically.

Incidentally, a system with a greater number of equations than

variables is called overdetermined; such a system has a solution

only if some of its equations are sums or multiples of the others.

RELATION BETWEEN SOLVABILITY AND MULTIPLICITY

Given any system of linear equations, there is a relation between

multiplicity and solvability.

If one equation is a multiple of the other (or, more generally, a sum of

multiples of the other equations), then the system of linear equations

is undetermined, meaning that the system has infinitely many

solutions. Example:

When the multiplicity is only partial (meaning that for example, only

the left hand sides of the equations are multiples, while the right hand

sides are not, or not by the same factor) then the system is

unsolvable. For example, in

the second equation yields that x + y = 1/4 which is in contradiction

with the first equation. Such a system is also called inconsistent in the

language of linear algebra. When trying to solve a system of linear

equations it is generally a good idea to check if one equation is a

multiple of the other. If this is precisely so, the solution cannot be

uniquely determined. If this is only partially so, the solution does not

exist.

This, however, does not mean that the equations must be multiples of

each other to have a solution, as shown in the sections above; in

other words: multiplicity in a system of linear equations is not a

necessary condition for solvability.
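
These cases can be checked mechanically by comparing matrix ranks (the Rouché–Capelli criterion): rank(A) < rank([A|b]) means no solution, while rank(A) = rank([A|b]) smaller than the number of unknowns means infinitely many. The three example systems below are assumptions for illustration.

```python
# Minimal sketch: classifying a linear system Ax = b by matrix rank.
import numpy as np

def classify(A, b):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.hstack([A, b]))
    if r != r_aug:
        return "inconsistent (no solution)"
    return "unique solution" if r == A.shape[1] else "infinitely many solutions"

print(classify([[4, 2], [2, -1]], [14, 1]))   # unique solution
print(classify([[1, 1], [2, 2]], [2, 4]))     # one equation a multiple of the other
print(classify([[1, 1], [2, 2]], [2, 1]))     # left sides multiples, right sides not
```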

References

• Leonhard Euler, Elements of Algebra, 1770. English translation

Tarquin Press, 2007, ISBN 978-1899618798, also online

digitized editions 2006, 1822.

• Charles Smith, A Treatise on Algebra, in Cornell University

Library Historical Math Monographs.

LINEAR EQUATION

A linear equation is an algebraic equation in which each term is

either a constant or the product of a constant and (the first power of)

a single variable. Linear equations can have one, two, three or more

variables.

Linear equations occur with great regularity in applied mathematics.

While they arise quite naturally when modeling many phenomena,

they are particularly useful since many non-linear equations may be

reduced to linear equations by assuming that quantities of interest

vary to only a small extent from some "background" state.

LINEAR EQUATIONS IN TWO VARIABLES

Properties of Mathematical Formulas

A common form of a linear equation in the two variables x and y is y = mx + b,

where m and b designate constants (the variable y is multiplied by the

constant 1, which as usual is not explicitly written). The origin of the

name "linear" comes from the fact that the set of solutions of such an

equation forms a straight line in the plane. In this particular equation,

the constant m determines the slope or gradient of that line; and the

constant term b determines the point at which the line crosses the y-

axis.

Since terms of a linear equation cannot contain products of distinct

or equal variables, nor any power (other than 1) or other function of a

variable, equations involving terms such as xy, x², y^(1/3), and sin(x) are

nonlinear.

Graph sample of linear equations.

Examples of Mathematical Formulas

Examples of linear equations in two variables:

Forms for 2D linear equations

Complicated linear equations, such as the ones above, can be

rewritten using the laws of elementary algebra into several different

forms. These equations are often referred to as the "equations of the

straight line". In what follows x, y and t are variables; other letters

represent constants (fixed numbers).

General form: Ax + By + C = 0,

where A and B are not both equal to zero. The equation is usually

written so that A ≥ 0, by convention. The graph of the equation is a

straight line, and every straight line can be represented by an

equation in the above form. If A is nonzero, then the x-intercept, that

is the x-coordinate of the point where the graph crosses the x-axis (y

is zero), is −C/A. If B is nonzero, then the y-intercept, that is the y-

coordinate of the point where the graph crosses the y-axis (x is zero),

is −C/B, and the slope of the line is −A/B.
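
A small sketch reading these quantities off the general form; the coefficients A = 2, B = −1, C = 4 are assumed values describing the line y = 2x + 4.

```python
# Minimal sketch: intercepts and slope from the general form Ax + By + C = 0.
def line_data(A, B, C):
    x_int = -C / A if A != 0 else None    # x-intercept, when A is nonzero
    y_int = -C / B if B != 0 else None    # y-intercept, when B is nonzero
    slope = -A / B if B != 0 else None    # slope, undefined for vertical lines
    return x_int, y_int, slope

print(line_data(2.0, -1.0, 4.0))   # (-2.0, 4.0, 2.0): the line y = 2x + 4
```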

Standard form: Ax + By = C,

where A, B, and C are integers whose greatest common factor

is 1, A and B are not both equal to zero and, A is non-negative

(and if A = 0 then B has to be positive). The standard form can

be converted to the general form, but not always to all the other

forms if A or B is zero.

Slope–intercept form

Y-axis formula: y = mx + b,

where m is the slope of the line and b is the y-intercept, which

is the y-coordinate of the point where the line crosses the y

axis. This can be seen by letting x = 0, which immediately gives

y = b.

X-axis formula: y = m(x − c),

where m ≠ 0 is the slope of the line and c is the x-intercept,

which is the x-coordinate of the point where the line crosses the

x axis. This can be seen by letting y = 0, which immediately

gives x = c.

Point–slope form: y − y1 = m(x − x1),

where m is the slope of the line and (x1, y1) is any point on the

line. The point-slope and slope-intercept forms are easily

interchangeable.

The point-slope form expresses the fact that the difference in

the y coordinate between two points on a line (that is, y − y1) is

proportional to the difference in the x coordinate (that is, x − x1).

The proportionality constant is m (the slope of the line).

Intercept form: x/c + y/b = 1,

where c and b must be nonzero. The graph of the equation has

x-intercept c and y-intercept b. The intercept form can be

converted to the standard form by setting A = 1/c, B = 1/b and

C = 1.

Two-point form: y − k = ((q − k) / (p − h)) · (x − h),

where p ≠ h. The graph passes through the points (h,k) and

(p,q), and has slope m = (q−k) / (p−h).

Parametric form: x = Tt + U

and y = Vt + W.

Two simultaneous equations in terms of a variable parameter t,

with slope m = V / T, x-intercept (VU−WT) / V and y-intercept

(WT−VU) / T.

This can also be related to the two-point form, where T = p−h,

U = h, V = q−k, and W = k: x = (p − h)t + h

and y = (q − k)t + k.

In this case t varies from 0 at point (h,k) to 1 at point (p,q), with

values of t between 0 and 1 providing interpolation and other

values of t providing extrapolation.

Normal form: x cos φ + y sin φ − p = 0,

where φ is the angle of inclination of the normal and p is the

length of the normal. The normal is defined to be the shortest

segment between the line in question and the origin. Normal

form can be derived from general form by dividing all of the

coefficients by ±√(A² + B²).

This form is also called the Hesse standard form, after the German

mathematician Ludwig Otto Hesse.

Special cases

y = b. This is a special case of the standard form where A = 0 and B =

1, or of the slope-intercept form where the slope m = 0. The

graph is a horizontal line with y-intercept equal to b. There is no

x-intercept, unless b = 0, in which case the graph of the line is

the x-axis, and so every real number is an x-intercept.

x = c. This is a special case of the standard form where A = 1 and B =

0. The graph is a vertical line with x-intercept equal to c. The

slope is undefined. There is no y-intercept, unless c = 0, in

which case the graph of the line is the y-axis, and so every real

number is a y-intercept.

and

In this case all variables and constants have canceled out,

leaving a trivially true statement. The original equation,

therefore, would be called an identity and one would not

normally consider its graph (it would be the entire xy-plane). An

example is 2x + 4y = 2(x + 2y). The two expressions on either

side of the equal sign are always equal, no matter what values

are used for x and y.

In situations where algebraic manipulation leads to a statement

such as 1 = 0, then the original equation is called inconsistent,

meaning it is untrue for any values of x and y (i.e. its graph

would be the empty set). An example would be 3x + 2 = 3x − 5.

Connection with linear functions and
operators
In all of the named forms above (assuming the graph is not a vertical

line), the variable y is a function of x, and the graph of this function is

the graph of the equation.

In the particular case that the line crosses through the origin, if the

linear equation is written in the form y = f(x), then f has the properties

f(x1 + x2) = f(x1) + f(x2) and f(ax) = a·f(x),

where a is any scalar. A function which satisfies these properties is

called a linear function, or more generally a linear map. This

property makes linear equations particularly easy to solve and reason

about.

LINEAR EQUATIONS IN MORE THAN
TWO VARIABLES
A linear equation can involve more than two variables. The general

linear equation in n variables is: a1x1 + a2x2 + … + anxn = b.

In this form, a1, a2, …, an are the coefficients, x1, x2, …, xn are the

variables, and b is the constant. When dealing with three or fewer

variables, it is common to replace x1 with just x, x2 with y, and x3 with

z, as appropriate.

Such an equation will represent an (n–1)-dimensional hyperplane in

n-dimensional Euclidean space (for example, a plane in 3-space).


In mathematics, a vector space is a collection of objects (called

vectors) that may be scaled and added. These two operations have

to adhere to a number of axioms that generalize common properties

of tuples of real numbers such as vectors in the plane or three-

dimensional Euclidean space. As much of the theory around vector

spaces is of linear nature, these objects are a keystone of linear

algebra. From this point of view vector spaces are well-understood,

since vector spaces are completely described by a single number

called dimension.

A vector space is a set of objects (called vectors) that can be scaled and added.

Many mathematical entities can be described as vector spaces. The presence of vector spaces in

analysis, mainly in the guise of function spaces, calls for a notion that

goes beyond linear algebra by taking into account convergence

questions. This is done by considering vector spaces with additional

data, mostly spaces endowed with a suitable topology. These

topological vector spaces, in particular Banach spaces and Hilbert

spaces have a richer theory.

Due to their ubiquity, vector spaces are applied throughout

mathematics, science and engineering. They are used in methods such as

Fourier expansion, which serves modern sound and image

compression routines, or provides the framework for solution

techniques of partial differential equations. Vector spaces are also used

in geometry, either to examine local properties of manifolds by

linearization techniques or as an ambient space for geometric

objects.

MOTIVATION AND DEFINITION

The black vector (x, y) = (5, 7) can be expressed as a linear combination of two different pairs of vectors (5·(1, 0) and

7·(0, 1) – blue; 3·(−1, 1) and 4·(2, 1) – yellow).

The space R2 consisting of pairs (x, y) of real numbers is a common

example of a vector space: any two pairs of real numbers can be

added:

(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2),

and any pair (x, y) can be multiplied by a real number s to yield

another vector (sx, sy).

The general vector space notion is a generalization of this idea. It is

more general in several ways: first, instead of the real numbers other

fields, such as complex numbers or finite fields, are allowed.[nb 1]

Second, the dimension of the space, which is two in the above

example, can be arbitrary, even infinite. Another conceptually

important point is that elements of vector spaces are not usually

expressed as linear combinations of a particular set of vectors, i.e.,

there is no preference of representing the vector (x, y) as

(x, y) = x · (1, 0) + y · (0, 1)

over

(x, y) = (−1/3·x + 2/3·y) · (−1, 1) + (1/3·x + 1/3·y) · (2, 1)
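
A quick numeric check of the two representations of the vector (5, 7) described above, using numpy.

```python
# Both representations of (x, y) = (5, 7) give the same vector.
import numpy as np

v = np.array([5.0, 7.0])
std = v[0] * np.array([1.0, 0.0]) + v[1] * np.array([0.0, 1.0])
alt = (-v[0]/3 + 2*v[1]/3) * np.array([-1.0, 1.0]) \
      + (v[0]/3 + v[1]/3) * np.array([2.0, 1.0])
print(std, alt)   # both print [5. 7.]
```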

Definition

The definition of a vector space requires a field F such as the field of

rational, real or complex numbers. A vector space is a set V together

with two operations that combine two elements to a third:

• vector addition: any two vectors, i.e. elements of V, v and w

can be added to yield a third vector v + w

• scalar multiplication: any vector v can be multiplied by a scalar,

i.e. element of F, a. The product is denoted av.

To specify the field F, one speaks of F-vector space, vector space

over F. For F = R or C, they are also called real and complex vector

spaces, respectively. To qualify as a vector space, the addition and

multiplication have to adhere to a number of requirements called

axioms generalizing the situation of Euclidean plane R2 or Euclidean

space R3. For distinction, vectors v will be denoted boldly.[nb 2] In the

formulation of the axioms below, let u, v, w be arbitrary vectors in V,

and a, b be scalars, respectively.

The space V = R2, with the addition and multiplication as above, is

indeed a vector space. Checking the axioms reduces to verifying

simple identities such as

(x, y) + (0, 0) = (x, y),

i.e. the addition of the zero vector (0, 0) to any other vector yields that

same vector. The distributive law amounts to

(a + b) · (x, y) = a · (x, y) + b · (x, y).

ALTERNATIVE FORMULATIONS AND
ELEMENTARY CONSEQUENCES OF
THE DEFINITION
The requirement that vector addition and scalar multiplication be

binary operations includes (by definition of binary operations) a

property called closure, i.e. u + v and a v are in V for all a, u, and v.

Some authors choose to mention these properties as separate

axioms.

The first four axioms can be subsumed by requiring the set of vectors

to be an abelian group under addition, and the rest are equivalent to

a ring homomorphism f from the field into the endomorphism ring of

the group of vectors. Then scalar multiplication a v is defined as (f(a))

(v). This can be seen as the starting point of defining vector spaces

without referring to a field.

There are a number of properties that follow easily from the vector

space axioms. Some of them derive from elementary group theory,

applied to the (additive) group of vectors: for example the zero vector

0 ∈ V and the additive inverse −v of a vector v are unique. Other

properties can be derived from the distributive law, for example scalar

multiplication by zero yields the zero vector and no other scalar

multiplication yields the zero vector.

HISTORY
Vector spaces stem from affine geometry, via the introduction of

coordinates in the plane or three-dimensional space. Around 1636,

French mathematicians Descartes and Fermat founded the bases of

analytic geometry by tying the solutions of an equation with two

variables to the determination of a plane curve. To achieve a

geometric solution without using coordinates, Bernhard Bolzano

introduced in 1804 certain operations on points, lines and planes,

which are predecessors of vectors. This work was made use of in the

conception of barycentric coordinates of August Ferdinand Möbius in

1827. The foundation of the definition of vectors was Bellavitis'

definition of the bipoint, which is an oriented segment, one of whose

ends is the origin and the other one a target.

The notion of vector was reconsidered with the presentation of

complex numbers by Jean-Robert Argand and William Rowan

Hamilton and the inception of quaternions by the latter

mathematician, being elements in R2 and R4, respectively. Treating

them using linear combinations goes back to Laguerre in 1867, who

defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows one to

harmonize and simplify the writing of linear maps between vector

spaces. At the same time, Grassmann studied the barycentric

calculus initiated by Möbius. He envisaged sets of abstract objects

endowed with operations. His work exceeds the framework of vector

spaces, since his introduction of multiplication led him to what is

today called algebras. Nonetheless, the concepts of dimension and

linear independence are present, as well as the scalar product

(1844). The primacy of these discoveries was disputed with Cauchy's

publication Sur les clefs algébriques.

Italian mathematician Peano, one of whose important contributions

was the rigorous axiomatisation of extant concepts, in particular the

construction of sets, was one of the first to give the modern definition

of vector spaces around the end of 19th century.

An important development of this concept is due to the construction

of function spaces by Henri Lebesgue. This was later formalized by

David Hilbert and Stefan Banach, the latter in his 1920 PhD thesis. At this time,

algebra and the new field of functional analysis began to interact,

notably with key concepts such as spaces of p-integrable functions

and Hilbert spaces. Also at this time, the first studies concerning

infinite dimensional vector spaces were done.

Examples
Coordinate and function spaces

The first example of a vector space over a field F is the field itself,

equipped with its standard addition and multiplication. This is

generalized by the vector space known as the coordinate space and

usually denoted Fn, where n is an integer. Its elements are n-tuples

(a1, a2, ..., an), where the ai are elements of F.

Infinite coordinate sequences, and more generally functions from any

fixed set Ω to a field F also form vector spaces, by performing

addition and scalar multiplication pointwise, i.e. the sum of two

functions f and g is given by

(f + g)(w) = f(w) + g(w)

and similarly for multiplication. Such function spaces occur in many

geometric situations, when Ω is the real line or an interval, or other

subsets of Rn. Many notions in topology and analysis, such as

continuity, integrability or differentiability are well-behaved with

respect to linearity, i.e. sums and scalar multiples of functions

possessing such a property will still have that property. Therefore, the

set of such functions are vector spaces. They are studied in more

detail using the methods of functional analysis, see below. Algebraic

constraints also yield vector spaces: the vector space F[x] is given by

polynomial functions, i.e.

f(x) = r_n x^n + r_(n−1) x^(n−1) + ... + r_1 x + r_0, where the coefficients r_n, ..., r_0

are in F.[9]

Power series are similar, except that infinitely many terms are

allowed.[10]

Linear equations

Systems of homogeneous linear equations also lead to vector

spaces.[11] For example, the solutions of

a + 3b + c = 0

4a + 2b + 2c = 0

given by triples with arbitrary a, b = a/2, and c = −5a/2 form a vector

space: sums and scalar multiples of such triples still satisfy the same

ratios of the three variables; thus they are solutions, too. Matrices can

be used to condense multiple linear equations as above into one

vector equation, namely

Ax = 0,

where x is the vector (a, b, c), A is the matrix

( 1  3  1 )
( 4  2  2 ),

and 0 = (0, 0) is the zero vector. In a similar vein, the solutions of

homogeneous linear differential equations form vector spaces. For

example

f ''(x) + 2f '(x) + f (x) = 0

yields f (x) = a · e^(−x) + bx · e^(−x), where a and b are arbitrary constants,

and e=2.718....
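
The solution space of the homogeneous system above can be computed as the null space of its coefficient matrix; a minimal sketch with sympy:

```python
# Null space of the coefficient matrix of a + 3b + c = 0, 4a + 2b + 2c = 0.
from sympy import Matrix

A = Matrix([[1, 3, 1],
            [4, 2, 2]])
print(A.nullspace())   # one basis vector, a scalar multiple of (1, 1/2, -5/2),
                       # matching b = a/2 and c = -5a/2 above
```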

Algebraic number theory

A common situation in algebraic number theory is a field F containing

a smaller field E. By the given multiplication and addition operations

of F, F becomes an E-vector space, also called a field extension of E.[12]

For example, the complex numbers are a vector space over R.

Another example is Q(z), the smallest field containing the rationals

and some complex number z.

Bases and dimension

Bases reveal the structure of vector spaces in a concise way. A basis

is a (finite or infinite) set B = {v_i : i ∈ I} of vectors that spans the whole

space, and is minimal with this property. The former means that any

vector v can be expressed as a finite sum (called linear combination)

of the basis elements

a_1 v_(i1) + a_2 v_(i2) + ... + a_n v_(in),

where the a_k are scalars and v_(ik) (k = 1, ..., n) are elements of the basis B.

The minimality, on the other hand, is made formal by the notion of

linear independence. A set of vectors is said to be linearly

independent if none of its elements can be expressed as a linear

combination of the remaining ones. Equivalently, an equation

a_1 v_(i1) + a_2 v_(i2) + ... + a_n v_(in) = 0

can only hold if all scalars a_1, ..., a_n equal zero. By definition every

vector can be expressed as a finite sum of basis elements. Because

of linear independence any such representation is unique. Vector

spaces are sometimes introduced from this coordinatised viewpoint.
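
Linear independence can be tested mechanically: put the candidate vectors into the columns of a matrix and compare its rank with the number of vectors. The vectors below are assumed examples.

```python
# Minimal sketch: full column rank means the vectors are linearly independent.
import numpy as np

vectors = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [1.0, 1.0]]).T            # columns: (1,0), (0,1), (1,1)
print(np.linalg.matrix_rank(vectors), vectors.shape[1])
# 2 3 -> dependent: (1,1) = (1,0) + (0,1)
```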

Every vector space has a basis. This fact relies on Zorn’s Lemma, an

equivalent formulation of the axiom of choice.[14] The ultrafilter lemma,

which is weaker than the axiom of choice, implies that all bases of a

given vector space have the same "size", i.e. cardinality.[citation needed] It

is called the dimension of the vector space, denoted dim V. Given the

other axioms of Zermelo-Fraenkel set theory, the latter statement is

equivalent to the axiom of choice.[15] Historically, the existence of

bases was first shown by Felix Hausdorff. If the space is

spanned by finitely many vectors, the above statements can be

proven without such fundamental input from set theory.

The dimension of the coordinate space Fn is n, since any vector (x1,

x2, ..., xn) can be uniquely expressed as a linear combination of n

vectors (called coordinate vectors) e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ...,

0), to en = (0, 0, ..., 0, 1), namely the sum

x1e1 + x2e2 + ... + xnen.

The dimension of many function spaces, such as the space of

differentiable functions on some interval, is infinite. Under

suitable regularity assumptions on the coefficients involved, the

dimension of the solution space of an homogeneous ordinary

differential equation equals the degree of the equation. For example,

the above equation has degree 2. The solution space is generated by

e^(−x) and x·e^(−x) (which are linearly independent over R), so the dimension

of this space is two.

The dimension (or degree) of a field extension such as Q(z) over Q

depends on whether or not z is algebraic, i.e. satisfies some

polynomial equation

q_n z^n + q_(n−1) z^(n−1) + ... + q_0 = 0, with rational coefficients q_n, ..., q_0.

If it is algebraic the dimension is finite. More precisely, it equals the

degree of the minimal polynomial having this number as a root.[18] For

example, the complex numbers are a two-dimensional real vector

space, generated by 1 and the imaginary unit i. The latter satisfies i²

+ 1 = 0, an equation of degree two. If z is not algebraic, the

dimension is infinite. For instance, for z = π there is no such equation,

since π is transcendental.

Linear maps and matrices
As with many algebraic entities, the relation of two vector spaces is

expressed by maps between them. In the context of vector spaces,

the corresponding concept is called linear maps or linear

transformations. They are functions f : V → W that are compatible

with the relevant structure—i.e., they preserve sums and scalar

multiplication:

f(v + w) = f(v) + f(w) and f(a · v) = a · f(v).

An isomorphism is a linear map f : V → W such that there exists an

inverse map g : W → V, i.e. a map such that the two possible

compositions f ∘ g : W → W and g ∘ f : V → V are identity maps.

Equivalently, f is both one-to-one (injective) and onto (surjective). If

there exists an isomorphism between V and W, the two spaces are

said to be isomorphic; they are then essentially identical as vector

spaces, since all identities holding in V are, via f, transported to

similar ones in W, and vice versa via g.

Given any two vector spaces V and W, linear maps V → W form a

vector space HomF(V, W), also denoted L(V, W). The space of linear

maps from V to F is called the dual vector space, denoted V∗.

Once a basis of V is chosen, linear maps f : V → W are completely

determined by specifying the images of the basis vectors, because

any element of V is expressed uniquely as a linear combination of

them. If dim V = dim W, one can choose a 1-to-1 correspondence

between two fixed bases of V and W. The map that maps any basis

element of V to the corresponding basis element of W is, by its very

definition, an isomorphism. Thus any vector space is completely

classified (up to isomorphism) by its dimension, a single number. In

particular, any n-dimensional vector space over F is isomorphic to

Fn.

Matrices

A typical matrix

Matrices are a useful notion to encode linear maps. They are written

as a rectangular array of scalars, i.e. elements of some field F. Any

m-by-n matrix A gives rise to a linear map from Fn to Fm, by the

following

or, using the matrix multiplication of the matrix A with the coordinate vector

x: x ↦ Ax.

Moreover, after choosing bases of V and W, any linear map f : V →

W is uniquely represented by a matrix via this assignment.
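
A small numeric sketch of a matrix acting as a linear map x ↦ Ax, checking the two defining properties of linearity on assumed vectors.

```python
# A 2-by-3 matrix as a linear map from F^3 to F^2; linearity checks.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])
v = np.array([1.0, 2.0, 3.0])
w = np.array([0.0, 1.0, 1.0])

print(np.allclose(A @ (v + w), A @ v + A @ w))    # True: preserves sums
print(np.allclose(A @ (2.5 * v), 2.5 * (A @ v)))  # True: preserves scalar multiples
```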

The volume of this parallelepiped is the absolute value of the

determinant of the 3-by-3 matrix formed by the vectors r1, r2, and r3.

The determinant det (A) of a square matrix A is a scalar that tells

whether the associated map is an isomorphism or not: to be so it is

sufficient and necessary that the determinant is nonzero.

Eigenvalues and eigenvectors

A particularly important case are endomorphisms, i.e. maps f : V →

V. In this case, vectors v can be compared to their image under f,

f(v). Any vector v satisfying λ · v = f(v), where λ is a scalar, is called

an eigenvector of f with eigenvalue λ.[nb 4] Equivalently, v is an

element of the kernel of the difference f − λ · Id (the identity map V → V).

In the finite-dimensional case, this can be rephrased using

determinants: f having eigenvalue λ is the same as

det (f − λ · Id) = 0.

By spelling out the definition of the determinant, the expression on

the left hand side turns out to be a polynomial function in λ, called the

characteristic polynomial of f. If the field F is large enough to contain

a zero of this polynomial (which automatically happens for F

algebraically closed, such as F = C) any linear map has at least one

eigenvector. The vector space V may or may not possess an

eigenbasis, i.e. a basis consisting of eigenvectors. This phenomenon

is governed by the Jordan canonical form of the map.[nb 5] The spectral

theorem describes the infinite-dimensional case; to accomplish this

aim, the machinery of functional analysis is needed, see below.
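
A minimal numeric sketch: eigenvalues as the zeros of det(A − λ · Id), computed for an assumed 2-by-2 matrix with numpy.

```python
# Eigenvalues of an assumed symmetric matrix, and the vanishing determinants.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                                  # [3. 1.]
for lam in eigenvalues:
    print(np.linalg.det(A - lam * np.eye(2)))       # ~0.0 at each eigenvalue
```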

Basic constructions
In addition to the above concrete examples, there are a number of

standard linear algebraic constructions that yield vector spaces

related to given ones. In addition to the concrete definitions given

below, they are also characterized by universal properties, which

determine an object X by specifying the linear maps from X to any

other vector space.

Subspaces and quotient spaces

A line passing through the origin (blue, thick) in R3 is a linear

subspace. It is the intersection of two planes (green and yellow).

In general, a nonempty subset W of a vector space V that is closed

under addition and scalar multiplication is called a subspace of V.

Subspaces of V are vector spaces (over the same field) in their own

right. The intersection of all subspaces containing a given set S of

vectors is called its span. Expressed in terms of elements, the span is

the subspace consisting of linear combinations of elements of S.

The counterpart to subspaces are quotient vector spaces. Given any

subspace W ⊂ V, the quotient space V/W ("V modulo W") is defined

as follows: as a set, it consists of v + W = {v + w, w ∈ W}, where v is

an arbitrary vector in V. The sum of two such elements v1 + W and v2

+ W is (v1 + v2) + W, and scalar multiplication is given by a · (v + W) =

(a · v) + W. The key point in this definition is that v1 + W = v2 + W if

and only if the difference of v1 and v2 lies in W.[nb 6] This way, the

quotient space "forgets" information that is contained in the subspace

W.

For any linear map f : V → W, the kernel ker(f ) consists of vectors v

that are mapped to 0 in W. Both kernel and image im(f ) = {f(v), v ∈

V}, are linear subspaces of V and W, respectively. They are related

by an elementary but fundamental isomorphism

V / ker(f ) ≅ im(f ).

The existence of kernels and images is part of the statement that the

category of vector spaces (over a fixed field F) is an abelian category.

An important example is the kernel of a linear map x ↦ Ax for some

fixed matrix A, as above. The kernel of this map, i.e. the subspace of

vectors x such that Ax = 0 are precisely the solutions to the system of

linear equations belonging to A. This concept also comprises linear

differential equations a_0·f + a_1·f′ + ... + a_n·f^(n) = 0, where the coefficients

a_i are functions in x, too.

In the corresponding map D: f ↦ a_0·f + a_1·f′ + ... + a_n·f^(n),

the derivatives of the function f appear linearly (as opposed to f ''(x)2,

for example). Since differentiation is a linear procedure (i.e. (f + g)' =

f ' + g ' and (c·f)' = c·f ' for a constant c) this assignment is linear,

called a linear differential operator. In particular, the solutions to the

differential equation D(f ) = 0 form a vector space (over R or C).

Direct product and direct sum

The direct product of a family of vector spaces Vi, where i runs

through some index set I, consists of tuples (vi)i ∈ I, i.e. for any index i,

one element vi of Vi is given.[38] Addition and scalar multiplication is

performed componentwise. A variant of this construction is the direct

sum (also called coproduct and denoted ⊕), where only

tuples with finitely many nonzero vectors are allowed. If the index set

I is finite, the two constructions agree, but differ otherwise.

Tensor product

The tensor product V ⊗F W, or simply V ⊗ W is one of the central

notions of multilinear algebra. It is a vector space consisting of finite

(formal) sums of symbols

v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn,

subject to certain rules mimicking bilinearity, such as

a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w).

Via the fundamental adjunction isomorphism

HomF (V, W) ≅ V∗ ⊗F W,[citation needed]

matrices, which are essentially the same as linear maps, i.e.

contained in the left hand side, translate into an element of the tensor

product of the dual of V with W. Therefore the tensor product may be

seen as the extension of the hierarchy of scalars, vectors and

matrices.

Vector spaces with additional structure

From the point of view of linear algebra, vector spaces are completely

understood insofar as any vector space is characterized, up to

isomorphism, by its dimension. However, vector spaces per se do

not offer a framework to deal with the question—crucial to analysis—

whether a sequence of functions converges to another function.

Likewise, linear algebra is not per se adapted to deal with infinite

series, since the addition operation allows only finitely many terms to

be added. The needs of functional analysis require considering

additional structures. Much the same way the formal treatment of

vector spaces reveals their essential algebraic features, studying

vector spaces with additional data abstractly turns out to be

advantageous, too.

A first example of an additional datum is an order ≤, a token by which

vectors can be compared. Rn can be ordered by comparing the

vectors componentwise. Ordered vector spaces, for example Riesz

spaces, are fundamental to Lebesgue integration, which relies on

expressing a function as a difference of two positive functions

f = f + − f −,

i.e. the positive f + and negative part f −, respectively.[41]

Normed vector spaces and inner product spaces

"Measuring" vectors is a frequent need, either by specifying a norm, a

datum which measures the lengths of vectors, or by an inner product,

which measures the angles between vectors. The latter entails that

lengths of vectors can be defined too, by defining the associated

norm ‖x‖ = √〈x | x〉. Vector spaces endowed with such data are

known as normed vector spaces and inner product spaces,

respectively.

Coordinate space Fn can be equipped with the standard dot product:

〈x | y〉 = x · y = x1y1 + ... + xnyn.

In R2, this reflects the common notion of the angle between two

vectors x and y, by the law of cosines: x · y = |x| · |y| · cos∠(x, y).

Because of this, two vectors satisfying 〈 x | y 〉 = 0 are called

orthogonal.
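
A short sketch of the dot product, the norm it induces, and orthogonality, on two assumed vectors in R².

```python
# Dot product, induced norm, and an orthogonal pair in R^2.
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([4.0, -3.0])

norm_x = np.sqrt(x @ x)            # ||x|| = sqrt(<x|x>) = 5.0
print(norm_x, x @ y)               # 5.0  0.0 -> x and y are orthogonal
```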

An important variant of the standard dot product is used in Minkowski

space, i.e. R4 endowed with the inner product

〈x | y〉 = x1y1 + x2y2 + x3y3 − x4y4.

It is crucial to the mathematical treatment of special relativity, where

the fourth coordinate—corresponding to time, as opposed to three

space-dimensions—is singled out.

Topological vector spaces

Unit "balls" in R2, i.e. the set of plane vectors of norm 1, in different p-

norms, for p = 1, 2, and ∞. The bigger diamond depicts points of 1-

norm equal to √2.

Convergence questions are addressed by considering vector spaces

V carrying a compatible topology, i.e. a structure that allows one to talk

about elements being close to each other. Compatible here means

that addition and scalar multiplication should be continuous maps, i.e.

if x and y in V, and a in F vary by a bounded amount, then so do x +

y and ax.[nb 7]
To make sense of specifying the amount a scalar

changes, the field F also has to carry a topology in this setting; a

common choice are the reals or the complex numbers.

In such topological vector spaces one can consider infinite sums of

vectors, i.e. series, by writing ∑ f_i

for the limit of the corresponding finite sums of functions. For

example, the fi could be (real or complex) functions, and the limit

takes place in some function space. The type of convergence

obtained depends on the used topology. Pointwise convergence and

absolute convergence are two prominent examples.

A way of ensuring the existence of limits of such infinite series is to

restrict attention to complete vector spaces where any Cauchy

sequence has a limit. Roughly, completeness means the absence of

holes. E.g. the rationals are not complete, since there are series of

rational numbers converging to irrational numbers such as √2. The

following two concepts, Banach and Hilbert spaces, are complete

topological spaces whose topology is given by a norm and an inner

product, respectively. Their study is a key piece of functional analysis.

The theory focusses on infinite-dimensional vector spaces, since all

norms on finite-dimensional topological vector spaces are equivalent,

i.e. give rise to the same notion of convergence. [47] The image at the

right shows the equivalence of the 1-norm and ∞-norm on R2: as the

unit "balls" enclose each other, a sequence converges to zero in one

norm if and only if it so does in the other norm. In the infinite-

dimensional case, however, there will generally be inequivalent

topologies, which makes the study of topological vector spaces richer

than that of vector spaces without additional data.

From a conceptual point of view, all notions related to topological

vector spaces should match the topology. For example, instead of

considering all linear maps (also called functionals) V → W, it is

useful to require maps to be continuous. For example, the dual space

V∗ consists of continuous functionals V → R (or C). Applying the dual

construction twice yields the bidual V∗∗. The natural map V → V∗∗ is

always injective, thanks to the fundamental Hahn-Banach theorem.[49]

If it is an isomorphism, V is called reflexive.

Banach spaces

Banach spaces, introduced by Stefan Banach, are complete normed

vector spaces.[50] A first example is the vector space lp consisting of

infinite vectors with real entries x = (x1, x2, ...) whose p-norm (1 ≤ p ≤

∞) given by

‖x‖_p = (∑ |x_i|^p)^(1/p) for p < ∞, and by ‖x‖_∞ = sup_i |x_i|,

is finite.[nb 8]
The topologies on the infinite-dimensional space l p are

inequivalent for different p. E.g. the sequence of vectors xn = (2−n, 2−n,

..., 2−n, 0, 0, ...), i.e. the first 2n components are 2−n, the following ones

are 0, converges to the zero vector for p = ∞, but does not for p = 1:

, but .
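
The inequivalence of the l^p topologies sketched above can be observed numerically:

```python
# The vector with its first 2**n entries equal to 2**(-n): its sup-norm
# tends to 0 while its 1-norm stays exactly 1.
import numpy as np

for n in range(1, 6):
    x = np.full(2 ** n, 2.0 ** (-n))
    print(n, np.max(np.abs(x)), np.sum(np.abs(x)))   # sup-norm shrinks, 1-norm stays 1.0
```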

More generally, functions f: Ω → R are endowed with a norm that

replaces the above sum by the Lebesgue integral ‖f‖_p = (∫_Ω |f(x)|^p dx)^(1/p).

The space of integrable functions on a given domain Ω (for example

an interval) satisfying ‖f‖_p < ∞, and equipped with this norm are called

Lebesgue spaces, denoted Lp(Ω). Thanks to the use of the Lebesgue

integral, these spaces are complete. [51] (The same space, equipped

with the Riemann integral, does not yield a complete space, which

may be seen as a justification for Lebesgue's integration theory.)

Concretely this means that for any sequence of Lebesgue-

integrable functions f1(x), f2(x), ... with ‖fn‖_p < ∞, satisfying the condition
integrable functions f1(x), f2(x), ... with |fn| < ∞, satisfying the condition

there exists a function f(x) belonging to the vector space Lp(Ω) such

that

Imposing boundedness conditions not only on the function, but also

on its derivatives leads to Sobolev spaces.[52]

Hilbert spaces

The succeeding snapshots show summation of 1, 2, 3, 4, & 5 terms in

approximating a periodic function (blue) by finite sum of sine

functions (red).

Complete inner product spaces are known as Hilbert spaces, in honor

of David Hilbert. A key case is the Hilbert space L2(Ω), with inner

product given by

〈f | g〉 = ∫_Ω conj(f(x)) · g(x) dx, with conj(f(x)) being the complex

conjugate of f(x).[54][55]

By definition, Cauchy sequences in any Hilbert space converge, i.e.

have a limit. Conversely, finding a sequence of functions fn with

desirable properties that approximates a given limit function, is

equally crucial. Early analysis, in the guise of the Taylor

approximation, established an approximation of differentiable

functions f by polynomials. By the Stone-Weierstrass

theorem, every continuous function on [a, b] can be approximated as

closely as desired by a polynomial. [56] More generally, and more

conceptually, the theorem yields a simple description of what "basic

functions" suffice to generate a Hilbert space H, in the sense that the

closure of their span (i.e. finite linear combinations and limits of

those) is the whole space. Such a set of functions is called a Hilbert

basis of H, its cardinality is known as the Hilbert dimension.[nb 9] Not

only does the theorem exhibit suitable basis functions as sufficient for

approximation purposes, together with the Gram-Schmidt process, it

also allows one to construct a basis of orthogonal vectors.

Orthogonal bases are the Hilbert space generalization of the

coordinate axes in (finite-dimensional) Euclidean space. Similar

approximation statements hold for Legendre polynomials, Bessel

functions, hypergeometric and trigonometric functions. The latter

technique, commonly called Fourier expansion, is applied in

engineering, see below.

The solutions to various important differential equations can be

interpreted in terms of Hilbert spaces. For example, a great many

fields in physics and engineering lead to such equations and

frequently solutions with particular physical properties are used as

basis functions, often orthogonal, that serve as the axes in a

corresponding Hilbert space. As an example from physics, the time-

dependent Schrödinger equation in quantum mechanics describes

the change of physical properties in time, by means of a partial

differential equation determining a wavefunction. Definite values for

physical properties such as energy, or momentum, correspond to

eigenvalues of a certain (linear) differential operator and the

associated wavefunctions are called eigenstates. The spectral

theorem decomposes a linear compact operator that acts upon

functions in terms of these eigenfunctions and their eigenvalues.

Algebras over fields

A hyperbola, given by the equation x · y = 1. The coordinate ring of

functions on this hyperbola is given by R[x, y] / (x · y − 1), an infinite-

dimensional vector space over R.

In general, vector spaces do not possess a multiplication operation. A

vector space equipped with an additional bilinear operator defining

the multiplication of two vectors is an algebra over a field.[58] Many

algebras stem from functions on some geometrical object: since

functions with values in a field can be multiplied, these entities form

algebras. The Stone-Weierstrass theorem mentioned above, for

example, relies on Banach algebras which are at the junction of

Banach spaces and algebras.

Commutative algebra makes great use of rings of polynomials in one

or several variables, introduced above, where multiplication is both

commutative and associative. These rings and their quotients form

the basis of algebraic geometry, because they are rings of functions

of algebraic geometric objects.

Another crucial example are Lie algebras, which are neither

commutative, nor associative, but the failure to be so is limited by the

constraints ([x, y] denotes the product of x and y):

anticommutativity [x, y] = −[y, x] and the Jacobi identity [x, [y, z]]

+ [y, [z, x]] + [z, [x, y]] = 0.

The standard example is the vector space of n-by-n matrices, with [x,

y] = xy − yx, the commutator of two matrices.
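
A quick numeric check of the bracket [x, y] = xy − yx on assumed 2-by-2 matrices, verifying anticommutativity and the Jacobi identity.

```python
# Matrix commutator: anticommutativity and Jacobi identity.
import numpy as np

def bracket(x, y):
    return x @ y - y @ x

x = np.array([[0.0, 1.0], [0.0, 0.0]])
y = np.array([[1.0, 0.0], [0.0, -1.0]])
z = np.array([[0.0, 0.0], [1.0, 0.0]])

print(np.allclose(bracket(x, y), -bracket(y, x)))        # True
jacobi = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
          + bracket(z, bracket(x, y)))
print(np.allclose(jacobi, 0))                            # True
```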

A formal way of adding products to any vector space V, i.e. obtaining

an algebra, is the tensor algebra T(V). It is built of symbols, called

tensors

v1 ⊗ v2 ⊗ ... ⊗ vn, where the rank n varies.

The multiplication is given by concatenating two such symbols. In

general, there are no relations between v1 ⊗ v2 and v2 ⊗ v1. Forcing

two such elements to be equal leads to the symmetric algebra,

whereas forcing v1 ⊗ v2 = − v2 ⊗ v1 yields the exterior algebra.[62]

APPLICATIONS
Distributions

A distribution (or generalized function) is a map assigning a number

to functions in a given vector space, in a continuous way. Employing

the general terminology introduced above, they are just elements of

the (continuous) dual space of the space of functions with compact

support. Two standard examples are given by integrating the function

with compact support f over some domain Ω, and the Dirac

distribution: f ↦ ∫_Ω f(x) dx

and δ: f ↦ f(0).

Distributions are a powerful instrument to solve differential equations.

Since all standard analytic notions such as derivatives are linear, they

extend naturally to the space of distributions. Therefore the equation

in question can be transferred to a distribution space, which is

strictly bigger than the underlying function space, so more flexible

methods, for example using Green's functions (which usually aren't

functions, but distributions) can be used to find a solution. The found

solution can then in some cases be proven to be actually a true

function, and a solution to the original equation (e.g. using the Lax-

Milgram theorem, a consequence of the Riesz representation

theorem).

Fourier expansion

The heat equation describes the dissipation of physical properties

over time, such as the decline of the temperature of a hot body

placed in a colder environment (yellow depicts hotter regions than

red).

Resolving a periodic function f(x) into sums of trigonometric functions

is known as the Fourier expansion, a technique much used in physics

and engineering. By the Stone-Weierstrass theorem (see above), any

function f(x) on a bounded, closed interval (or equivalently, any

periodic function) is a limit of functions of the following type

f_N(x) = a_0/2 + a_1·cos(x) + b_1·sin(x) + ... + a_N·cos(Nx) + b_N·sin(Nx)

as N → ∞. The coefficients a_m and b_m are called Fourier coefficients

of f.

The historical motivation for this was differential equations: Fourier, in

1822, used the technique to give the first solution of the heat

equation.[66] Other than that, the discrete Fourier transform is used in

JPEG image compression or, in the guise of the fast Fourier transform,

for high-speed multiplications of large integers.
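
A minimal numeric sketch of Fourier coefficients and the finite sum f_N; the 2π-periodic square wave is an assumed target function.

```python
# Fourier coefficients a_m, b_m of a square wave and its partial sum f_N.
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = np.sign(np.sin(xs))                       # assumed periodic target function
dx = xs[1] - xs[0]

def coeff(m):
    a = np.sum(f * np.cos(m * xs)) * dx / np.pi
    b = np.sum(f * np.sin(m * xs)) * dx / np.pi
    return a, b

N = 5
approx = sum(a * np.cos(m * xs) + b * np.sin(m * xs)
             for m in range(1, N + 1) for a, b in [coeff(m)])
print(np.max(np.abs(approx - f)))             # the largest error sits at the jumps
```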

Geometry

The tangent space to the 2-sphere at some point is the plane

touching the sphere in this point.

The tangent space TxM of a differentiable manifold M (for example a

smooth curve in Rn) at a point x of M is a tool to "linearize" the

manifold at that point. It can be calculated by the partial derivatives of

equations defining M inside the ambient space. It may reveal a great

deal of information about the manifold: the tangent space of Lie

groups, which naturally is a Lie algebra, can be used to classify

compact Lie groups.[70] Getting back from the tangent space to the

manifold is accomplished by the vector flow.

The dual vector space of the tangent space is called cotangent

space. Differential forms are elements of the exterior algebra of the

cotangent space. They generalize the "dx" in standard integration.

Further vector space constructions, in particular tensors, are widely

used in geometry and beyond. Riemannian manifolds are manifolds

whose tangent spaces are endowed with a suitable inner product.[71]

Derived therefrom, the Riemann curvature tensor encodes all

curvatures of a manifold in one object, which finds applications in

general relativity, for example, where the Einstein curvature tensor

describes the curvature of space-time.

The cotangent space is also used in commutative algebra and

algebraic geometry to define regular local rings, an algebraic

adaptation of smoothness in differential geometry, by comparing the

dimension of the cotangent space of the ring (defined in a purely algebraic

manner) to the Krull dimension of the ring.

GENERALIZATIONS
Vector bundles

A Möbius strip. Locally, it looks like U × R, but the total bundle is

different from S1 × R (which is a cylinder instead).

A family of vector spaces, parametrised continuously by some

topological space X, is a vector bundle.[73] More precisely, a vector

bundle E over X is given by a continuous map

π : E → X,

which is locally a product of X with some (fixed) vector space V, i.e.

such that for every point x in X, there is a neighborhood U of x such

that the restriction of π to π^(−1)(U) equals the projection V × U → U (the

"trivialization" of the bundle over U).[nb 10] The case dim V = 1 is called

a line bundle. While the situation is simple to describe locally, there

may be—depending on the shape of the underlying space X—global

twisting phenomena. For example, the Möbius strip can be seen as a

line bundle over the circle S1 (at least if one extends the bounded

interval to infinity). The (non-)existence of vector bundles with certain

properties can tell something about the underlying topological space.

By the hairy ball theorem, for example, there is no tangent vector field

on the 2-sphere S2 which is everywhere nonzero, in contrast to the

circle S1, whose tangent bundle is trivial, i.e. globally isomorphic to S1

× R.[74] K-theory studies the ensemble of all vector bundles over some

topological space.

Modules

Modules are to rings what vector spaces are to fields, i.e. the very

same axioms, applied to a ring R instead of a field F yield modules. In

contrast to the exhaustive understanding of vector spaces offered by

linear algebra, the theory of modules is much more complicated,

because of ring elements that do not possess multiplicative inverses.

For example, modules need not have bases as the Z-module (i.e.

abelian group) Z/2 shows; those modules that do (including all vector

spaces) are known as free modules. The algebro-geometric

interpretation of commutative rings via their spectrum allows one to

develop concepts such as locally free modules, which are the

algebraic counterpart to vector bundles. In the guise of projective

modules they are important in homological algebra and algebraic K-

theory.

Affine and projective spaces

Affine spaces can be thought of as vector spaces whose origin is

not specified. Formally, an affine space is a set with a transitive

vector space action. In particular, a vector space is an affine space

over itself, by the structure map

V2 → V, (a, b) ↦ a − b.

Sets of the form x + Rm (viewed as a subset of some bigger Rn), i.e.

moving some linear subspace by a fixed vector x, yield affine

spaces, too. In particular, this notion encloses all solutions of systems

of (not necessarily homogeneous) linear equations

Ax = b,

generalizing the homogeneous case above.

The set of one-dimensional subspaces of a fixed vector space V is

known as projective space, an important geometric object formalizing

the idea of parallel lines intersecting at infinity. [81] More generally, the

Grassmann manifold consists of linear subspaces of higher (fixed)

dimension n. Finally, flag manifolds parametrize flags, i.e. chains of

subspaces (with fixed dimension)

0 = V0 ⊂ V1 ⊂ ... ⊂ Vn = V.

REFERENCES
1. Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd
ed.), Springer-Verlag, ISBN 0387982590
2. Lay, David C. (August 22, 2005), Linear Algebra and Its
Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137
3. Meyer, Carl D. (February 15, 2001), Matrix Analysis and
Applied Linear Algebra, Society for Industrial and Applied
Mathematics (SIAM), ISBN 978-0898714548,
http://www.matrixanalysis.com/DownloadChapters.html
4. Poole, David (2006), Linear Algebra: A Modern Introduction
(2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
5. Anton, Howard (2005), Elementary Linear Algebra (Applications
Version) (9th ed.), Wiley International
6. Leon, Steven J. (2006), Linear Algebra With Applications (7th
ed.), Pearson Prentice Hall
7. Axler, Sheldon (February 26, 2004), Linear Algebra Done Right
(2nd ed.), Springer, ISBN 978-0387982588
8. Bretscher, Otto (June 28, 2004), Linear Algebra with
Applications (3rd ed.), Prentice Hall, ISBN 978-0131453340
9. Farin, Gerald; Hansford, Dianne (December 15, 2004), Practical
Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-
1568812342
10. Friedberg, Stephen H.; Insel, Arnold J.; Spence,
Lawrence E. (November 11, 2002), Linear Algebra (4th ed.),
Prentice Hall, ISBN 978-0130084514

11. Kolman, Bernard; Hill, David R. (May 3, 2007),
Elementary Linear Algebra with Applications (9th ed.), Prentice
Hall, ISBN 978-0132296540
12. Strang, Gilbert (July 19, 2005), Linear Algebra and Its
Applications (4th ed.), Brooks Cole, ISBN 978-0030105678
13. Bhatia, Rajendra (November 15, 1996), Matrix Analysis,
Graduate Texts in Mathematics, Springer, ISBN 978-
0387948461
14. Demmel, James W. (August 1, 1997), Applied Numerical
Linear Algebra, SIAM, ISBN 978-0898713893
15. Golan, Johnathan S. (January 2007), The Linear Algebra
a Beginning Graduate Student Ought to Know (2nd ed.),
Springer, ISBN 978-1402054945
16. Golub, Gene H.; Van Loan, Charles F. (October 15,
1996), Matrix Computations, Johns Hopkins Studies in
Mathematical Sciences (3rd ed.), The Johns Hopkins University
Press, ISBN 978-0801854149
17. Greub, Werner H. (October 16, 1981), Linear Algebra,
Graduate Texts in Mathematics (4th ed.), Springer, ISBN 978-
0801854149
18. Hoffman, Kenneth; Kunze, Ray (April 25, 1971), Linear
Algebra (2nd ed.), Prentice Hall, ISBN 978-0135367971
19. Halmos, Paul R. (August 20, 1993), Finite-Dimensional
Vector Spaces, Undergraduate Texts in Mathematics, Springer,
ISBN 978-0387900933

20. Horn, Roger A.; Johnson, Charles R. (February 23, 1990),
Matrix Analysis, Cambridge University Press, ISBN 978-
0521386326
21. Horn, Roger A.; Johnson, Charles R. (June 24, 1994),
Topics in Matrix Analysis, Cambridge University Press, ISBN
978-0521467131
22. Lang, Serge (March 9, 2004), Linear Algebra,
Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN
978-0387964126
23. Roman, Steven (March 22, 2005), Advanced Linear
Algebra, Graduate Texts in Mathematics (2nd ed.), Springer,
ISBN 978-0387247663
24. Shilov, Georgi E. (June 1, 1977), Linear algebra, Dover
Publications, ISBN 978-0486635187
25. Shores, Thomas S. (December 6, 2006), Applied Linear
Algebra and Matrix Analysis, Undergraduate Texts in
Mathematics, Springer, ISBN 978-0387331942
26. Smith, Larry (May 28, 1998), Linear Algebra,
Undergraduate Texts in Mathematics, Springer, ISBN 978-
0387984551
Study guides and outlines:
28. Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs
Quick Review), Cliffs Notes, ISBN 978-0822053316
29. Lipschutz, Seymour; Lipson, Marc (December 6, 2000),
Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill,
ISBN 978-0071362009

30. Lipschutz, Seymour (January 1, 1989), 3,000 Solved
Problems in Linear Algebra, McGraw-Hill, ISBN 978-
0070380233
31. McMahon, David (October 28, 2005), Linear Algebra
Demystified, McGraw-Hill Professional, ISBN 978-0071465793
32. Zhang, Fuzhen (April 7, 2009), Linear Algebra:
Challenging Problems for Students, The Johns Hopkins
University Press, ISBN 978-0801891250
