A Study of The Basic Concepts Linear Equations Project
Introduction
ALGEBRA AND LINEAR EQUATIONS
Algebra is useful because:
It allows the general formulation of arithmetical laws (such as a + b = b + a for all a and b), and thus is the first step to the systematic study of the properties of the real number system.
It allows reference to numbers which are not yet known, but which may be found through the formulation and solving of equations.
It allows the exploration of mathematical relationships between
quantities (such as "if you sell x tickets, then your profit will be
3x − 10 dollars").
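A relationship like this can be evaluated directly. A minimal Python sketch (the function name and sample ticket counts are illustrative, not from the text):

```python
def profit(x):
    # Profit in dollars from selling x tickets, per the relation 3x - 10.
    return 3 * x - 10

# Selling 0 tickets loses the 10-dollar fixed cost; each ticket adds 3 dollars.
print(profit(0))   # -10
print(profit(12))  # 26
```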
A typical algebra problem that can be found in almost any middle school in the world.
An "equation" is the claim that two expressions are equal. Some
equations are true for all values of the involved variables (such as a + b = b + a); such equations are called identities. Conditional equations are true for only some values of the involved variables.
Properties of operations
The operation of addition:
o is written a + b;
o is commutative: a + b = b + a;
o is associative: (a + b) + c = a + (b + c);
o has an inverse operation called subtraction: a − b = a + (−b);
o has an identity element 0 which preserves numbers: a + 0 = a.
The operation of multiplication:
o is written a × b or a ⋅ b;
o is commutative: a × b = b × a;
o is associative: (a × b) × c = a × (b × c);
o has an identity element 1 which preserves numbers: a × 1 = a.
The operation of exponentiation:
o is written a^b;
o means repeated multiplication: a^n = a × a × ⋯ × a (n times);
o has an inverse operation called the logarithm: log_a(a^b) = b;
o has a special element 1 which preserves numbers: a^1 = a.
Properties of equality
o reflexive: a = a;
o symmetric: if a = b then b = a;
o transitive: if a = b and b = c then a = c.
Laws of equality
o if a = b then a + c = b + c;
o if a = b then ac = bc.
Laws of inequality
EXAMPLES
The simplest equations to solve are linear equations that have only one variable. The central technique is to add, subtract, multiply, or divide both sides of the equation by the same number in order to isolate the variable on one side of the equation. Once the variable is isolated, the other side of the equation simplifies to the solution.
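The worked example itself did not survive extraction, so the sketch below applies the isolate-and-simplify steps to an assumed equation, 4x + 4 = 12:

```python
# Solve 4x + 4 = 12 by applying the same operation to both sides.
a, b, c = 4, 4, 12          # the equation a*x + b = c

# Subtract b from both sides: a*x = c - b
rhs = c - b                 # 8
# Divide both sides by a: x = (c - b) / a
x = rhs / a                 # 2.0

assert a * x + b == c       # the isolated value satisfies the original equation
print(x)  # 2.0
```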
QUADRATIC EQUATIONS
A quadratic equation can be expressed in the form ax² + bx + c = 0, where a is not zero (if it were zero, then the equation would not be quadratic but linear). A quadratic equation must therefore contain the term ax², which is known as the quadratic term. Hence a ≠ 0, and so we may divide by a and rearrange the equation into the standard form
x² + px + q = 0,
where p = b/a and q = −c/a. Solving this, by a process known as completing the square, leads to the quadratic formula. All quadratic equations will have two solutions in the complex number system, but need not have any in the real number system.
For example,
x² = −1
has no real number solution, since no real number squared equals −1.
For the equation x² + 2x + 1 = 0, that is (x + 1)² = 0, −1 is a root of multiplicity 2.
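Both claims (two complex solutions always exist; a double root is possible) can be checked with the quadratic formula. The sketch below takes the double-root equation to be x² + 2x + 1 = 0, an assumption consistent with the stated root −1 of multiplicity 2:

```python
import cmath

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula (a != 0)."""
    d = cmath.sqrt(b * b - 4 * a * c)   # complex sqrt handles a negative discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x**2 + 1 = 0 has no real solution, but two complex ones:
print(quadratic_roots(1, 0, 1))      # (1j, -1j)

# x**2 + 2x + 1 = 0 has the root -1 with multiplicity 2:
print(quadratic_roots(1, 2, 1))      # ((-1+0j), (-1+0j))
```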
Exponential and logarithmic equations
An exponential equation is an equation of the form a^x = b for a > 0, which has solution x = log_a b when b > 0. Elementary algebraic techniques are used to rewrite a given equation in the above way before arriving at the solution. For example, if
2^x = 8,
whence
x = log₂ 8,
or
x = 3.
A logarithmic equation is an equation of the form log_a x = b for a > 0, which has solution x = a^b. For example, if
log₂ x = 3,
then x = 2³ = 8.
Radical equations
A radical equation is one in which the variable appears under a radical sign or is raised to a rational power. An equation x^(m/n) = a has solution x = a^(n/m) if m is odd, and solution x = ±a^(n/m) (for a ≥ 0) if m is even. For example, if
x^(2/3) = 4,
then
x = ±4^(3/2) = ±8.
FIRST METHOD OF FINDING A SOLUTION
which simplifies to
Note that this is not the only way to solve this specific system; y could have been eliminated before x.
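The system solved in this section did not survive extraction, so the sketch below applies the elimination idea to an assumed system, 4x + 2y = 14 and 2x − y = 1:

```python
# Elimination for a 2x2 system:
#   4x + 2y = 14
#   2x -  y =  1
# Multiply the second equation by 2 and add it to the first to eliminate y.
a1, b1, c1 = 4, 2, 14
a2, b2, c2 = 2, -1, 1

k = 2                               # chosen so that b1 + k*b2 == 0
assert b1 + k * b2 == 0
x = (c1 + k * c2) / (a1 + k * a2)   # 16 / 8 = 2.0
y = (c2 - a2 * x) / b2              # back-substitute into the second equation

assert a1 * x + b1 * y == c1 and a2 * x + b2 * y == c2
print(x, y)  # 2.0 3.0
```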
SECOND METHOD OF FINDING A SOLUTION
The second method of finding a solution is substitution.
Adding 2 on each side of the equation:
which simplifies to
Using this value in one of the equations, the same solution as in the previous method is obtained. Note that this is not the only way to solve this specific system; in this method, too, either variable could have been isolated first.
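Since the original system was lost at a page break, the sketch below demonstrates substitution on the same assumed system as above, 4x + 2y = 14 and 2x − y = 1:

```python
# Substitution for an assumed 2x2 system:
#   4x + 2y = 14
#   2x -  y =  1
# Isolate y in the second equation: y = 2x - 1, then substitute into the first.
from fractions import Fraction

# 4x + 2*(2x - 1) = 14  =>  8x - 2 = 14  =>  x = 16/8 = 2
x = Fraction(14 + 2, 4 + 2 * 2)
y = 2 * x - 1

assert 4 * x + 2 * y == 14 and 2 * x - y == 1
print(x, y)  # 2 3
```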
OTHER TYPES OF SYSTEMS OF LINEAR EQUATIONS
Unsolvable systems
Some systems of equations have no solution; such a system is called unsolvable. To see how this arises, the following system is studied:
And using this value for y in the first equation:
No variables are left, and the equality is not true. This means that the first equation cannot provide a solution for the value of y obtained in the second equation; the system is unsolvable.
Undetermined systems
The equality is true, but it does not provide a value for x. Indeed, one can easily verify (by just filling in some values of x) that for any x there is a solution, provided the other variable is chosen accordingly; the system has infinitely many solutions and is called undetermined.
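The three possibilities (unique solution, unsolvable, undetermined) for a 2×2 system can be told apart from the coefficients alone. A sketch using the determinants of Cramer's rule; the sample systems are assumptions, not the lost originals:

```python
def classify(a1, b1, c1, a2, b2, c2):
    """Classify the system a1*x + b1*y = c1, a2*x + b2*y = c2."""
    det = a1 * b2 - a2 * b1
    if det != 0:
        # Unique solution by Cramer's rule.
        return ("unique", ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det))
    # Zero determinant: the left-hand sides are multiples of each other.
    if a1 * c2 - a2 * c1 == 0 and b1 * c2 - b2 * c1 == 0:
        return ("undetermined", None)   # infinitely many solutions
    return ("unsolvable", None)         # inconsistent right-hand sides

print(classify(1, 1, 3, 1, -1, 1))   # ('unique', (2.0, 1.0))
print(classify(1, 1, 2, 2, 2, 5))    # ('unsolvable', None)
print(classify(1, 1, 2, 2, 2, 4))    # ('undetermined', None)
```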
RELATION BETWEEN SOLVABILITY AND MULTIPLICITY
Given a system of linear equations, there is a relation between multiplicity and solvability. If one equation is a multiple of the other, the system has infinitely many solutions. Example:
When the multiplicity is only partial (meaning that, for example, only the left-hand sides of the equations are multiples, while the right-hand sides are not, or not by the same number), then the system is unsolvable: the second equation conflicts with the first equation. Such a system is also called inconsistent in the language of linear algebra. When trying to solve a system of linear equations, it is generally a good idea to check whether one equation is a multiple of the other. If this is precisely so, the solution cannot be uniquely determined; if this is only partially so, the solution does not exist. This, however, does not mean that the equations must be multiples of each other for one of these situations to occur.
LINEAR EQUATION
A linear equation is an algebraic equation in which each term is either a constant or the product of a constant and (the first power of) a single variable. Linear equations can have one, two, three or more variables.
LINEAR EQUATIONS IN TWO VARIABLES
A common form of a linear equation in the two variables x and y is y = mx + b. The name "linear" comes from the fact that the set of solutions of such an equation forms a straight line in the plane. In this form, the constant m determines the slope or gradient of that line, and the constant term b determines the point at which the line crosses the y-axis.
Because the terms of a linear equation cannot contain products, powers, or other functions of the variables, equations involving terms such as xy, x², y^(1/3), and sin(x) are nonlinear.
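The two roles of m (slope) and b (y-intercept) can be checked numerically. A small sketch with assumed values m = 2, b = −3:

```python
m, b = 2, -3                      # assumed slope and y-intercept

def line(x):
    return m * x + b

# b is where the line crosses the y-axis (x = 0):
assert line(0) == b

# m is the slope: the rise-over-run ratio between any two solution points.
x1, x2 = 1, 4
slope = (line(x2) - line(x1)) / (x2 - x1)
assert slope == m
print(slope)  # 2.0
```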
Graph sample of linear equations.
Examples of Mathematical Formulas
General form
Ax + By + C = 0,
where A and B are not both equal to zero. The equation is usually written so that A ≥ 0, by convention. If A is nonzero, then the x-intercept, that is, the x-coordinate of the point where the graph crosses the x-axis (y is zero), is −C/A. If B is nonzero, then the y-intercept, that is, the y-coordinate of the point where the graph crosses the y-axis (x is zero), is −C/B, and the slope of the line is −A/B.
Standard form
Ax + By = C,
where A, B, and C are usually taken to be integers with no common factor and A non-negative. Every equation in this form can be converted to the general form, but not always to all the other forms if A or B is zero.
Slope–intercept form
Y-axis formula
y = mx + b,
where m is the slope of the line and b is the y-intercept, which is the y-coordinate of the point where the line crosses the y-axis; setting x = 0 gives y = b.
X-axis formula
y = m(x − c),
where m ≠ 0 is the slope of the line and c is the x-intercept, which is the x-coordinate of the point where the line crosses the x-axis; setting y = 0 gives x = c.
Point–slope form
y − y1 = m(x − x1),
where m is the slope of the line and (x1, y1) is any point on the line; any two points on the line are interchangeable in this role.
Intercept form
x/c + y/b = 1,
where c and b must be nonzero; c is the x-intercept and b is the y-intercept. This form can be converted to the standard form by setting A = 1/c, B = 1/b and C = 1.
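The stated conversion (A = 1/c, B = 1/b, C = 1) can be verified on an assumed line with x-intercept c = 2 and y-intercept b = 5:

```python
from fractions import Fraction

c, b = 2, 5                      # assumed x- and y-intercepts (both nonzero)
A, B, C = Fraction(1, c), Fraction(1, b), 1   # standard form Ax + By = C

# The intercept points themselves satisfy the standard-form equation:
assert A * c + B * 0 == C        # point (c, 0)
assert A * 0 + B * b == C        # point (0, b)

# Any point on x/c + y/b = 1 satisfies Ax + By = C, e.g. x = 1:
x = 1
y = b * (1 - Fraction(x, c))
assert A * x + B * y == C
print(y)  # 5/2
```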
Two-point form
y − y1 = ((y2 − y1) / (x2 − x1)) · (x − x1),
where (x1, y1) and (x2, y2) are two points on the line with x2 ≠ x1.
Parametric form
x = Tt + U
and
y = Vt + W.
These are two simultaneous equations in terms of a variable parameter t, with slope m = V / T, x-intercept (VU − WT) / V and y-intercept (WT − VU) / T. Setting T = p − h, U = h, V = q − k, and W = k gives
x = h + (p − h)t
and
y = k + (q − k)t.
In this case t varies from 0 at point (h, k) to 1 at point (p, q), with values of t between 0 and 1 giving points on the segment between them.
Normal form
x cos φ + y sin φ − p = 0,
where φ is the angle of inclination of the normal to the line and p is the perpendicular distance from the origin to the line; it can be obtained from the general form by dividing all of the coefficients by √(A² + B²).
This form is also called the Hesse standard form, after the German mathematician Ludwig Otto Hesse.
Special cases
y = b: the graph is a horizontal line with y-intercept equal to b. There is no x-intercept, unless b = 0, in which case the graph of the line is the x-axis, and so every real number is an x-intercept.
x = a: the graph is a vertical line with x-intercept equal to a, and the slope is undefined. There is no y-intercept, unless a = 0, in which case the graph of the line is the y-axis, and so every real number is a y-intercept.
0 = 0: the two expressions on either side of the equal sign are always equal, no matter what values are used for x and y; the equation is an identity, and its graph is the entire xy-plane.
0 = c (where c is nonzero): the equation is contradictory, meaning it is untrue for any values of x and y (i.e. its graph is the empty set).
Connection with linear functions and operators
In all of the named forms above (assuming the graph is not a vertical line), the variable y is a function of x. In the particular case that the line crosses through the origin, if the linear equation is written in the form y = f(x) then f has the properties
f(x + y) = f(x) + f(y)
and
f(ax) = a f(x),
where a is a scalar. A function satisfying these properties is called a linear function or, more generally, a linear operator: exactly the sort of mapping that linear algebra is about.
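The two properties in question, additivity and homogeneity, can be spot-checked for a line through the origin, f(x) = mx, with an assumed slope:

```python
m = 7                    # assumed slope; the line passes through the origin

def f(x):
    return m * x

# Additivity: f(x + y) = f(x) + f(y)
assert f(2 + 3) == f(2) + f(3)
# Homogeneity: f(a * x) = a * f(x) for any scalar a
assert f(4 * 5) == 4 * f(5)

# A line NOT through the origin (g(x) = m*x + 1) fails additivity:
g = lambda x: m * x + 1
assert g(2 + 3) != g(2) + g(3)
```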
LINEAR EQUATIONS IN MORE THAN
TWO VARIABLES
A linear equation can involve more than two variables. The general linear equation in n variables is
a1x1 + a2x2 + … + anxn = b.
In this form, a1, a2, …, an are the coefficients, x1, x2, …, xn are the variables, and b is the constant. When there are three or fewer variables, the variables are commonly written x, y and z, as appropriate.
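The general form is compactly a dot product a · x = b of the coefficient vector with the variable vector. A sketch with assumed coefficients and values:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# The equation 2*x1 - 1*x2 + 3*x3 = 7 written in the form a . x = b:
a = [2, -1, 3]
b = 7

x = [1, 1, 2]                    # a candidate assignment of the variables
print(dot(a, x))                 # 7  -> x satisfies the equation
assert dot(a, x) == b
```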
A vector space is a collection of objects (called vectors) that may be scaled and added. These two operations have to satisfy a number of axioms, listed below. Because the structure of vector spaces is of linear nature, these objects are a keystone of linear algebra, and vector spaces are largely characterized by a single number called dimension. Many mathematical entities can be regarded as vectors, and analysis, mainly in the guise of function spaces, calls for a notion that adds further structure, such as a topology, to the purely algebraic objects.
MOTIVATION AND DEFINITION
The black vector (x, y) = (5, 7) can be expressed as a linear combination of two different pairs of vectors, for example as 5·(1, 0) + 7·(0, 1).
Vectors in the plane are added componentwise, (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2), and scaled, a · (x, y) = (ax, ay). The concept of a vector space is more general in several ways: first, instead of the real numbers, other fields may serve as scalars; second, the dimension of the space, which is two in this example, can be arbitrary, even infinite. Another conceptually important point is that the vectors themselves need not be pairs of numbers at all; one therefore speaks abstractly of a vector space over F.
Definition
A vector space over a field F is a set V together with two operations, vector addition and scalar multiplication; V is then called a vector space over F. For F = R or C, they are also called real and complex vector spaces, respectively. The two operations of vector addition and scalar multiplication have to adhere to a number of requirements called axioms. For example, in the plane, the zero vector is an identity element for addition, i.e. the addition of the zero vector (0, 0) to any other vector yields that same vector.
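Several of the axioms can be spot-checked for the plane R² with componentwise operations; a sketch:

```python
def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    return (a * v[0], a * v[1])

zero = (0, 0)
v, w, u = (1, 2), (3, -1), (-2, 5)

assert add(v, w) == add(w, v)                          # commutativity
assert add(add(v, w), u) == add(v, add(w, u))          # associativity
assert add(v, zero) == v                               # identity element
assert add(v, scale(-1, v)) == zero                    # additive inverse
assert scale(2, add(v, w)) == add(scale(2, v), scale(2, w))  # distributivity
```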
ALTERNATIVE FORMULATIONS AND
ELEMENTARY CONSEQUENCES OF
THE DEFINITION
The requirement that vector addition and scalar multiplication be binary operations includes (by the definition of binary operations) a property called closure, which some older sources list among the axioms. The first four axioms can be subsumed by requiring the set of vectors to be an abelian group under addition, with the inverse of a vector v written −(v). This can be seen as the starting point of defining vector spaces in the language of abstract algebra.
There are a number of properties that follow easily from the vector space axioms. Some derive from elementary group theory, applied to the (additive) group of vectors: for example, the zero vector of V and the additive inverse −v of any vector v are unique. Further properties can be derived from the distributive law; for example, scalar multiplication by zero yields the zero vector, and no other scalar does so unless the vector itself is zero.
HISTORY
Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, Descartes and Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve. Bolzano later introduced certain operations on points, lines and planes, which are predecessors of vectors. This work was made use of in the conception of barycentric coordinates by Möbius in 1827. Another founding leg of the definition of vectors was Bellavitis' notion of the bipoint, an oriented segment.
In 1857, Cayley introduced the matrix notation which allows one to harmonize and simplify the writing of linear maps. Peano, with his axiomatic construction of sets, was one of the first to give the modern definition of vector spaces. An important later development was the construction of function spaces, formalized by David Hilbert and Stefan Banach, the latter in his 1920 PhD thesis. At this time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces. Also at this time, the first studies concerning infinite-dimensional vector spaces were done.
Examples
Coordinate and function spaces
The first example of a vector space over a field F is the field itself, equipped with its standard addition and multiplication. More generally, the n-tuples of elements of F form a vector space F^n under componentwise addition and scalar multiplication. Functions from any fixed set Ω to F also form a vector space, by performing addition and scalar multiplication pointwise; moreover, sums and scalar multiples of functions possessing a property such as continuity will still have that property. Therefore, the sets of such functions are vector spaces. They are studied in more detail using the methods of functional analysis. Algebraic constraints also yield vector spaces: the vector space F[x] is given by polynomial functions,
f (x) = rn x^n + rn−1 x^(n−1) + ... + r1 x + r0, where the coefficients rn, ..., r0 are in F.[9]
Power series are similar, except that infinitely many terms are
allowed.[10]
Linear equations
Systems of homogeneous linear equations are closely tied to vector spaces. For example, the solutions of
a + 3b + c = 0
4a + 2b + 2c = 0
are given by triples with arbitrary a, b = a/2, and c = −5a/2. They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely
Ax = 0,
where A is the matrix of coefficients, x the vector (a, b, c), and 0 the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces. For example,
f ''(x) + 2f '(x) + f (x) = 0
yields f (x) = a e^(−x) + bx e^(−x), where a and b are arbitrary constants, and e = 2.718….
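That sums and scalar multiples of solutions of the system a + 3b + c = 0, 4a + 2b + 2c = 0 are again solutions can be verified directly; a sketch:

```python
def is_solution(t):
    a, b, c = t
    return a + 3 * b + c == 0 and 4 * a + 2 * b + 2 * c == 0

# Solutions of the system have the form (a, a/2, -5a/2) for arbitrary a:
def solution(a):
    return (a, a / 2, -5 * a / 2)

s, t = solution(2), solution(4)
assert is_solution(s) and is_solution(t)

# Closure under addition and scalar multiplication:
summed = tuple(x + y for x, y in zip(s, t))
scaled = tuple(3 * x for x in s)
assert is_solution(summed) and is_solution(scaled)
```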
Field extensions provide another class of examples of vector spaces: a bigger field containing a smaller one is a vector space over the smaller field.[12] For example, the complex numbers are a vector space over R.
Bases and dimension
A basis is a set B of vectors that spans the whole space and is minimal with this property. The former means that any vector v can be expressed as a finite sum
v = a1vi1 + a2vi2 + ... + anvin,
where the ak are scalars and vik (k = 1, ..., n) elements of the basis B. Minimality is made formal by linear independence: a set of vectors is linearly independent if the equation a1vi1 + ... + anvin = 0 can only hold if all scalars a1, ..., an equal zero. By definition, every vector can then be expressed in exactly one way as a linear combination of basis elements.
Every vector space has a basis. This fact relies on Zorn’s Lemma, an equivalent formulation of the axiom of choice. The ultrafilter lemma, which is weaker than the axiom of choice, implies that all bases of a given vector space have the same "size", i.e. cardinality.[citation needed] It is called the dimension of the vector space, denoted dim V. Given the standard basis of the coordinate space F^n, for example, its dimension is n.
The dimension of the solution space of a homogeneous linear differential equation equals the degree of the equation. For example, the solution space of the differential equation above is generated by e^(−x) and xe^(−x) (which are linearly independent over R), so its dimension is two, as is the degree of the equation. A field extension such as Q(π) over the rationals Q is also a vector space over Q; it is infinite-dimensional, for there is no polynomial equation with rational coefficients satisfied by π, since π is transcendental.
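Linear independence of the two solution functions can be checked via their Wronskian; the sketch below assumes the solutions are e^(−x) and x·e^(−x), consistent with the differential-equation example earlier:

```python
import math

def f(x):        # first assumed solution: e^(-x)
    return math.exp(-x)

def g(x):        # second assumed solution: x * e^(-x)
    return x * math.exp(-x)

def df(x):       # derivative of f
    return -math.exp(-x)

def dg(x):       # derivative of g: e^(-x) - x*e^(-x)
    return math.exp(-x) - x * math.exp(-x)

def wronskian(x):
    return f(x) * dg(x) - df(x) * g(x)

# A nonzero Wronskian at any point shows f and g are linearly independent.
print(wronskian(0.0))   # 1.0
```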
Linear maps and matrices
As with many algebraic entities, the relation of two vector spaces is expressed by linear maps: functions f : V → W that are compatible with the vector space structure, i.e. they preserve vector addition and scalar multiplication:
f (v + w) = f (v) + f (w) and f (a · v) = a · f (v).
An isomorphism is a linear map f : V → W for which there exists an inverse map g : W → V, i.e. a map such that the two possible compositions f ∘ g and g ∘ f are identity maps. Given any two vector spaces V and W, the linear maps V → W form a vector space HomF(V, W), also denoted L(V, W). The space of linear maps from V to F itself is called the dual vector space V∗. Once bases are chosen, a linear map is completely determined by the images of the basis vectors, and a 1-to-1 correspondence between two fixed bases of V and W gives rise to an isomorphism when dim V = dim W. In particular, any n-dimensional vector space over F is isomorphic to F^n.
Matrices
A typical matrix
Matrices are a useful notion to encode linear maps. They are written as a rectangular array of scalars. Any m-by-n matrix A gives rise to a linear map from F^n to F^m, by the following rule:
x = (x1, ..., xn) ↦ (Σj a1j xj, ..., Σj amj xj),
or, using the matrix multiplication of the matrix A with the coordinate vector x:
x ↦ Ax.
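The rule x ↦ Ax can be written out in a few lines; a sketch:

```python
def matvec(A, x):
    """Apply the linear map encoded by matrix A (a list of rows) to vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# A 2-by-3 matrix maps vectors from F^3 to F^2:
A = [[1, 2, 0],
     [0, 1, 3]]
x = [1, 1, 2]
print(matvec(A, x))   # [3, 7]
```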
Moreover, after choosing bases of V and W, any linear map f : V → W is uniquely represented by a matrix via this assignment. The determinant of a square matrix tells whether the associated map is an isomorphism: this is the case if and only if the determinant is nonzero. For instance, the volume of a parallelepiped spanned by three vectors is the absolute value of the determinant of the 3-by-3 matrix formed by the vectors r1, r2, and r3.
In the finite-dimensional case, this can be rephrased using determinants: f having eigenvalue λ is equivalent to
det (f − λ · Id) = 0.
By spelling out the definition of the determinant, the left-hand side turns out to be a polynomial function in λ, called the characteristic polynomial of f.
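For a 2-by-2 matrix the characteristic polynomial det(A − λ·Id) expands to λ² − tr(A)·λ + det(A), which can be computed by hand; a sketch with an assumed matrix:

```python
def char_poly_2x2(A):
    """Coefficients (1, c1, c0) of det(A - t*Id) = t**2 + c1*t + c0."""
    (a, b), (c, d) = A
    trace = a + d
    det = a * d - b * c
    return (1, -trace, det)

A = [[2, 1],
     [1, 2]]
coeffs = char_poly_2x2(A)
print(coeffs)          # (1, -4, 3), i.e. t**2 - 4t + 3 = (t - 1)(t - 3)

# Its roots, 1 and 3, are exactly the eigenvalues of A:
assert 1**2 - 4 * 1 + 3 == 0 and 3**2 - 4 * 3 + 3 == 0
```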
Basic constructions
In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones.
Subspaces and quotient spaces
A nonempty subset of a vector space that is closed under addition and scalar multiplication is called a subspace. Subspaces of V are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set S of vectors is called its span. Expressed in terms of elements, the span is the subspace consisting of all linear combinations of elements of S. The counterpart to subspaces are quotient vector spaces V / W.
For any linear map f : V → W, the kernel ker(f ) consists of the vectors v that are mapped to 0 in W, and the image im(f ) consists of the values f (v) for v in V. They are subspaces of V and W, respectively, and there is a fundamental isomorphism
V / ker(f ) ≅ im(f ).
The existence of kernels and images is part of the statement that the category of vector spaces is an abelian category.
An important example is the kernel of the linear map x ↦ Ax for some fixed matrix A, as above. The kernel of this map is the subspace of vectors x such that Ax = 0, which is precisely the set of solutions to the system of homogeneous linear equations belonging to A. This concept also extends to linear differential equations: since differentiation is a linear procedure (i.e., (f + g)' = f ' + g ' and (c·f )' = c·f ' for a constant c), the assignment sending a function to the left-hand side of such an equation is linear, and the solutions of the equation form the kernel of that linear map.
Direct product and direct sum
The direct product of a family of vector spaces Vi, where i runs through some index set I, consists of tuples (vi)i ∈ I, i.e. for any index i, an element vi of Vi; addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum, where only tuples with finitely many nonzero vectors are allowed. If the index set I is finite, the two constructions agree; in general, they are different.
Tensor product
The tensor product V ⊗F W, or simply V ⊗ W, of two vector spaces V and W consists of finite (formal) sums of symbols
v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn,
subject to rules reflecting bilinearity, such as
a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w).
For finite-dimensional spaces there is a natural isomorphism
HomF (V, W) ≅ V∗ ⊗F W,[citation needed]
under which the linear maps contained in the left-hand side translate into elements of the tensor product on the right; after choosing bases, these elements correspond to matrices.
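For coordinate vectors an elementary tensor v ⊗ w can be modeled as the outer product, and the scaling rule a·(v ⊗ w) = (a·v) ⊗ w = v ⊗ (a·w) checked directly; a sketch:

```python
def outer(v, w):
    """Outer product: the matrix modeling the elementary tensor v (x) w."""
    return [[vi * wj for wj in w] for vi in v]

def scale_vec(a, v):
    return [a * t for t in v]

def scale_mat(a, M):
    return [[a * t for t in row] for row in M]

v, w, a = [1, 2], [3, 4, 5], 7
# Scaling either factor, or the whole tensor, gives the same result:
assert scale_mat(a, outer(v, w)) == outer(scale_vec(a, v), w) == outer(v, scale_vec(a, w))
print(outer(v, w))   # [[3, 4, 5], [6, 8, 10]]
```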
Vector spaces with additional structure
From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space is characterized, up to isomorphism, by its dimension. However, vector spaces per se do not offer a framework to deal with questions of convergence that are crucial to analysis; likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures. An order can be advantageous, too: in an ordered vector space of functions, a function can be written as a difference
f = f + − f −
of a positive part and a negative part. Further structure is given by an inner product, which measures the angles between vectors. The latter entails that lengths of vectors and angles between them can be defined, via the associated norm and the Cauchy–Schwarz inequality, respectively.
In R2, this reflects the common notion of the angle between two plane vectors.
Because of this, two vectors satisfying 〈 x | y 〉 = 0 are called
orthogonal.
Topological vector spaces
Unit "balls" in R2, i.e. the sets of plane vectors of norm 1, for different p-norms.
Convergence questions are addressed by considering vector spaces carrying a compatible topology, a structure that allows one to talk about elements being close to each other. Compatible here means that addition and scalar multiplication are continuous maps: roughly, if x and y in V, and a in F, vary by a bounded amount, then so do x + y and ax.[nb 7] To make sense of specifying the amount a scalar is allowed to vary, the field F also has to carry a topology in this context; a common choice is the reals or the complex numbers.
A topological vector space is complete if every Cauchy sequence has a limit. Roughly, completeness means the absence of holes: e.g. the rationals are not complete, since there are sequences of rationals converging to irrational numbers. On a finite-dimensional space all norms are equivalent, i.e. give rise to the same notion of convergence. [47] The image at the right shows the equivalence of the 1-norm and ∞-norm on R2: as the unit "balls" enclose each other, a sequence converges in one norm if and only if it converges in the other.
In the topological setting, the dual space V∗ consists of continuous functionals V → R (or C). Applying the dual construction twice yields the bidual V∗∗. The natural map V → V∗∗ is injective for normed spaces (by the Hahn–Banach theorem); spaces for which it is an isomorphism are called reflexive.
Banach spaces
Banach spaces, introduced by Stefan Banach, are complete normed vector spaces. A first example is the vector space ℓp, consisting of infinite vectors with real entries x = (x1, x2, ...) whose p-norm (1 ≤ p ≤ ∞), given by
‖x‖p = (Σi |xi|^p)^(1/p) for p < ∞, and ‖x‖∞ = supi |xi|,
is finite.[nb 8]
The topologies on the infinite-dimensional space ℓp are inequivalent for different p. For example, the sequence of vectors xn = (2^−n, 2^−n, ..., 2^−n, 0, 0, ...), whose first 2^n components are 2^−n and whose following ones are 0, converges to the zero vector for p = ∞, but does not for p = 1:
‖xn‖∞ = 2^−n → 0, but ‖xn‖1 = 2^n · 2^−n = 1.
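The inequivalence of the 1-norm and the ∞-norm on these vectors can be computed directly; a sketch of the xn example (using the finite nonzero prefix of each vector):

```python
def x_n(n):
    """The first 2**n components equal 2**-n; the remaining zeros are omitted."""
    return [2.0 ** -n] * (2 ** n)

def norm_1(v):
    return sum(abs(t) for t in v)

def norm_inf(v):
    return max(abs(t) for t in v)

for n in (1, 5, 10):
    v = x_n(n)
    print(n, norm_inf(v), norm_1(v))
# The sup-norm 2**-n tends to 0, while the 1-norm stays 1 for every n.
```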
More generally, functions f : Ω → R are endowed with a norm that replaces the above sum by the Lebesgue integral
‖f ‖p = (∫Ω |f (x)|^p dx)^(1/p).
The spaces of functions on a given domain Ω (for example an interval) satisfying ‖f ‖p < ∞, and equipped with this norm, are called Lebesgue spaces, denoted Lp(Ω). Owing to the use of the Lebesgue integral, these spaces are complete. [51] (The same space, equipped with the Riemann integral, does not yield a complete space, which may be seen as a justification for Lebesgue's integration theory.) Concretely, completeness means that for any sequence of integrable functions f1(x), f2(x), ... with ‖fn‖p < ∞, satisfying the Cauchy condition in this norm, there exists a function f (x) belonging to the vector space Lp(Ω) such that
lim k→∞ ‖f − fk‖p = 0.
Imposing boundedness conditions not only on the function, but also on its derivatives, leads to the notion of Sobolev spaces.
Hilbert spaces
Complete inner product spaces are known as Hilbert spaces, in honor of David Hilbert.
A periodic function (blue) approximated by finite sums of sine functions (red).
A key case is the Hilbert space L2(Ω), with inner product given by
〈 f , g 〉 = ∫Ω f̄(x) g(x) dx,
where f̄(x) denotes the complex conjugate of f(x).[54][55]
By definition, Cauchy sequences in any Hilbert space converge, i.e. Hilbert spaces are complete. Conversely, it is crucial in analysis to find a sequence of functions with desirable properties that approximates a given limit function. Not only does the relevant approximation theorem exhibit suitable basis functions as sufficient for this purpose; together with the Gram–Schmidt process, it also enables the construction of a basis of orthogonal functions. Decomposing a function into such an orthogonal basis, a technique commonly called Fourier expansion, is applied throughout engineering and physics.
Algebras over fields
General vector spaces do not possess a multiplication between vectors. A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field. An important class is that of Lie algebras, whose product [ , ] satisfies anticommutativity, [x, y] = −[y, x], and the Jacobi identity,
[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0.
The standard example is the vector space of n-by-n matrices, with [x, y] = xy − yx, the commutator of two matrices. A formal way of adding products to any vector space V, i.e. obtaining an algebra, is to pass to the tensor algebra T(V), whose elements are sums of tensors.
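Anticommutativity and the Jacobi identity can be checked for the commutator of 2-by-2 matrices; a sketch with assumed sample matrices:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def bracket(A, B):
    """Commutator [A, B] = AB - BA."""
    return sub(mul(A, B), mul(B, A))

x = [[0, 1], [0, 0]]
y = [[0, 0], [1, 0]]
z = [[1, 0], [0, -1]]

neg = lambda A: [[-t for t in row] for row in A]
zero = [[0, 0], [0, 0]]

# Anticommutativity: [x, y] = -[y, x]
assert bracket(x, y) == neg(bracket(y, x))

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jac = madd(madd(bracket(x, bracket(y, z)), bracket(y, bracket(z, x))),
           bracket(z, bracket(x, y)))
assert jac == zero
```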
APPLICATIONS
Distributions
A distribution (or generalized function) is a linear map assigning a number to each "test" function in a continuous way. Standard examples are integration of a test function over some domain Ω, and the Dirac distribution
δ(f ) = f (0),
which assigns to each test function its value at zero. Since all standard analytic notions such as derivatives are linear, they extend naturally to the space of distributions; an equation can therefore be transferred to a distribution space, which offers more flexible methods for solving differential equations (cf. the Malgrange–Ehrenpreis theorem).
Fourier expansion
A periodic function (blue) approximated by finite sums of sine functions (red).
Resolving a periodic function into a sum of trigonometric functions forms a Fourier series, a technique much used in physics and engineering. Any function f(x) on a bounded, closed interval (or equivalently, any periodic function) can be expanded in such a series; the coefficients of the expansion are the Fourier coefficients of f. Fourier, in 1822, used the technique to give the first solution of the heat equation.
Geometry
The tangent plane to a surface at a point is naturally a vector space whose origin is identified with the point of contact. Tangent spaces are central in differential geometry and in the theory of Lie groups, for example compact Lie groups.[70] Getting back from the tangent space to the manifold is achieved by the exponential map. The dual vector space of the tangent space is called the cotangent space.
GENERALIZATIONS
Vector bundles
A vector bundle is a family of vector spaces parametrized continuously by a topological space X. More precisely, a vector bundle over X is a topological space E equipped with a continuous map
π : E → X,
which is locally a product of X with some (fixed) vector space V: for every point x in X there is a neighborhood U of x over which the bundle is identified with U × V (a "trivialization" of the bundle over U).[nb 10] The case dim V = 1 is called a line bundle. The Möbius strip can be seen as a line bundle over the circle S1 (at least if one extends the bounded interval to an infinite one); it is not isomorphic to the trivial bundle S1 × R. By the hairy ball theorem, for example, there is no tangent vector field on the 2-sphere that is everywhere nonzero, so the tangent bundle of the sphere is likewise not trivial, i.e. not globally a product of the base with a fixed vector space.[74] K-theory studies the ensemble of all vector bundles over some topological space.
Modules
Modules are to rings what vector spaces are to fields: the very same axioms, applied to a ring R instead of a field F, yield modules. Compared to vector spaces and linear algebra, the theory of modules is much more complicated, because ring elements need not possess multiplicative inverses. For example, modules need not have bases, as the Z-module (i.e. abelian group) Z/2 shows; those modules that do (including all vector spaces) are called free modules. The representation of a group on a vector space, or more generally on a module, is the subject of representation theory.
Affine and projective spaces
Roughly, affine spaces are vector spaces whose origin is not specified: the difference of two points is still defined, via the map
V² → V, (a, b) ↦ a − b.
Shifting a subspace of V by a fixed vector yields affine subspaces, and these are affine spaces, too. In particular, this notion encloses all solutions of systems of inhomogeneous linear equations
Ax = b,
which form an affine subspace whenever a solution exists (namely a shift of the solution space of Ax = 0). Projective spaces arise from vector spaces by formalizing the idea of parallel lines intersecting at infinity. [81] More generally, the flag variety parametrizes chains of subspaces
0 = V0 ⊂ V1 ⊂ ... ⊂ Vn = V.
REFERENCES
1. Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd
ed.), Springer-Verlag, ISBN 0387982590
2. Lay, David C. (August 22, 2005), Linear Algebra and Its
Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137
3. Meyer, Carl D. (February 15, 2001), Matrix Analysis and
Applied Linear Algebra, Society for Industrial and Applied
Mathematics (SIAM), ISBN 978-0898714548,
http://www.matrixanalysis.com/DownloadChapters.html
4. Poole, David (2006), Linear Algebra: A Modern Introduction
(2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
5. Anton, Howard (2005), Elementary Linear Algebra (Applications
Version) (9th ed.), Wiley International
6. Leon, Steven J. (2006), Linear Algebra With Applications (7th
ed.), Pearson Prentice Hall
7. Axler, Sheldon (February 26, 2004), Linear Algebra Done Right
(2nd ed.), Springer, ISBN 978-0387982588
8. Bretscher, Otto (June 28, 2004), Linear Algebra with
Applications (3rd ed.), Prentice Hall, ISBN 978-0131453340
9. Farin, Gerald; Hansford, Dianne (December 15, 2004), Practical
Linear Algebra: A Geometry Toolbox, AK Peters, ISBN 978-
1568812342
10. Friedberg, Stephen H.; Insel, Arnold J.; Spence,
Lawrence E. (November 11, 2002), Linear Algebra (4th ed.),
Prentice Hall, ISBN 978-0130084514
11. Kolman, Bernard; Hill, David R. (May 3, 2007),
Elementary Linear Algebra with Applications (9th ed.), Prentice
Hall, ISBN 978-0132296540
12. Strang, Gilbert (July 19, 2005), Linear Algebra and Its
Applications (4th ed.), Brooks Cole, ISBN 978-0030105678
13. Bhatia, Rajendra (November 15, 1996), Matrix Analysis,
Graduate Texts in Mathematics, Springer, ISBN 978-
0387948461
14. Demmel, James W. (August 1, 1997), Applied Numerical
Linear Algebra, SIAM, ISBN 978-0898713893
15. Golan, Johnathan S. (January 2007), The Linear Algebra
a Beginning Graduate Student Ought to Know (2nd ed.),
Springer, ISBN 978-1402054945
16. Golub, Gene H.; Van Loan, Charles F. (October 15,
1996), Matrix Computations, Johns Hopkins Studies in
Mathematical Sciences (3rd ed.), The Johns Hopkins University
Press, ISBN 978-0801854149
17. Greub, Werner H. (October 16, 1981), Linear Algebra,
Graduate Texts in Mathematics (4th ed.), Springer, ISBN 978-
0801854149
18. Hoffman, Kenneth; Kunze, Ray (April 25, 1971), Linear
Algebra (2nd ed.), Prentice Hall, ISBN 978-0135367971
19. Halmos, Paul R. (August 20, 1993), Finite-Dimensional
Vector Spaces, Undergraduate Texts in Mathematics, Springer,
ISBN 978-0387900933
20. Horn, Roger A.; Johnson, Charles R. (February 23, 1990),
Matrix Analysis, Cambridge University Press, ISBN 978-
0521386326
21. Horn, Roger A.; Johnson, Charles R. (June 24, 1994),
Topics in Matrix Analysis, Cambridge University Press, ISBN
978-0521467131
22. Lang, Serge (March 9, 2004), Linear Algebra,
Undergraduate Texts in Mathematics (3rd ed.), Springer, ISBN
978-0387964126
23. Roman, Steven (March 22, 2005), Advanced Linear
Algebra, Graduate Texts in Mathematics (2nd ed.), Springer,
ISBN 978-0387247663
24. Shilov, Georgi E. (June 1, 1977), Linear algebra, Dover
Publications, ISBN 978-0486635187
25. Shores, Thomas S. (December 6, 2006), Applied Linear
Algebra and Matrix Analysis, Undergraduate Texts in
Mathematics, Springer, ISBN 978-0387331942
26. Smith, Larry (May 28, 1998), Linear Algebra,
Undergraduate Texts in Mathematics, Springer, ISBN 978-
0387984551
Study guides and outlines
28. Leduc, Steven A. (May 1, 1996), Linear Algebra (Cliffs
Quick Review), Cliffs Notes, ISBN 978-0822053316
29. Lipschutz, Seymour; Lipson, Marc (December 6, 2000),
Schaum's Outline of Linear Algebra (3rd ed.), McGraw-Hill,
ISBN 978-0071362009
30. Lipschutz, Seymour (January 1, 1989), 3,000 Solved
Problems in Linear Algebra, McGraw-Hill, ISBN 978-
0070380233
31. McMahon, David (October 28, 2005), Linear Algebra
Demystified, McGraw-Hill Professional, ISBN 978-0071465793
32. Zhang, Fuzhen (April 7, 2009), Linear Algebra:
Challenging Problems for Students, The Johns Hopkins
University Press, ISBN 978-0801891250