Linear Algebra



Linear algebra


In three-dimensional Euclidean space, these three planes represent solutions of linear equations and
their intersection represents the set of common solutions: in this case, a unique point. The blue line is
the common solution of a pair of linear equations.

Linear algebra is the branch of mathematics concerning linear equations such as

a_{1}x_{1} + \cdots + a_{n}x_{n} = b,

linear functions such as

(x_{1}, \ldots, x_{n}) \mapsto a_{1}x_{1} + \cdots + a_{n}x_{n},
and their representations through matrices and vector spaces.[1][2][3]

Linear algebra is central to almost all areas of mathematics. For instance, it is fundamental in
modern presentations of geometry, including for defining basic objects such as lines, planes and
rotations. Functional analysis may be viewed, in essence, as the application of linear algebra to spaces
of functions. Linear algebra is also used in most sciences and engineering areas, because it allows the
modeling of many natural phenomena and efficient computation with such models. Nonlinear systems,
which cannot be modeled exactly with linear algebra, are often handled by using linear algebra as a
first-order approximation.
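As a small illustration of computing with a linear model, the sketch below finds the common solution of a pair of linear equations in two unknowns by Cramer's rule, i.e. the intersection point of two lines. The function name `solve_2x2` is illustrative, not a standard routine.

```python
# Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule.
# Geometrically, the unique solution is the intersection point of two lines.

def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1          # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("lines are parallel or identical: no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# x + y = 3 and x - y = 1 intersect at (2, 1)
assert solve_2x2(1, 1, 3, 1, -1, 1) == (2.0, 1.0)
```

A zero determinant signals that no unique intersection exists, which is exactly the degenerate geometric case of parallel or coincident lines.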

History

Vector spaces

Matrices

Linear systems

Endomorphisms and square matrices

Duality

Relationship with geometry

Usage and applications

Extensions and generalizations

See also

Notes

References

Further reading

External links

Last edited 22 days ago by Favonian

RELATED ARTICLES

Dual space

Vector space of linear functions of vectors returning scalars; generalizing the dot product
Inner product space

Vector space with an additional structure called an inner product

Vector space

Basic algebraic structure of linear algebra

Content is available under CC BY-SA 3.0 unless otherwise noted.


Vector space


Not to be confused with Vector field.

This article is about linear (vector) spaces. For the structure in incidence geometry, see Linear space
(geometry). For the space technology company, see Vector Space Systems.

Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper
illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2w.

A vector space (also called a linear space) is a collection of objects called vectors, which may be added
together and multiplied ("scaled") by numbers, called scalars. Scalars are often taken to be real numbers,
but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or
generally any field. The operations of vector addition and scalar multiplication must satisfy certain
requirements, called axioms, listed below, in § Definition. For specifying that the scalars are real or
complex numbers, the terms real vector space and complex vector space are often used.

Euclidean vectors are an example of a vector space. They represent physical quantities such as forces:
any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector
by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors
representing displacements in the plane or in three-dimensional space also form vector spaces. Vectors
in vector spaces do not necessarily have to be arrow-like objects as they appear in the mentioned
examples: vectors are regarded as abstract mathematical objects with particular properties, which in
some cases can be visualized as arrows.
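The two defining operations can be sketched in code. The following minimal Python class (the name `Vec2` and its methods are illustrative, not a standard API) models vectors in the plane and checks two of the vector-space axioms:

```python
# A minimal sketch of vector-space operations on R^2.

class Vec2:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):            # vector addition
        return Vec2(self.x + other.x, self.y + other.y)

    def __rmul__(self, scalar):          # scalar multiplication
        return Vec2(scalar * self.x, scalar * self.y)

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

v, w = Vec2(1, 2), Vec2(3, -1)
assert v + w == w + v                    # commutativity of addition
assert 2 * (v + w) == 2 * v + 2 * w      # distributivity over vector addition
```

The same pattern extends to any number of coordinates; the axioms, not the arrow picture, are what make the structure a vector space.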

Vector spaces are the subject of linear algebra and are well characterized by their dimension, which,
roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional
vector spaces arise naturally in mathematical analysis, as function spaces, whose vectors are functions.
These vector spaces are generally endowed with additional structure, such as a topology, which allows
the consideration of issues of proximity and continuity. Among these topologies, those defined by a
norm or inner product are the most commonly used, as they provide a notion of distance between two
vectors. This is particularly the case for Banach spaces and Hilbert spaces, which are fundamental in
mathematical analysis.

Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's
analytic geometry, matrices, systems of linear equations, and Euclidean vectors. The modern, more
abstract treatment, first formulated by Giuseppe Peano in 1888, encompasses more general objects than
Euclidean space, but much of the theory can be seen as an extension of classical geometric ideas like
lines, planes and their higher-dimensional analogs.

Today, vector spaces are applied throughout mathematics, science and engineering. They are the
appropriate linear-algebraic notion to deal with systems of linear equations. They offer a framework for
Fourier expansion, which is employed in image compression routines, and they provide an environment
that can be used for solution techniques for partial differential equations. Furthermore, vector spaces
furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors.
This in turn allows the examination of local properties of manifolds by linearization techniques. Vector
spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract
algebra.

Introduction and definition

History

Examples

Basis and dimension

Linear maps and matrices

Basic constructions

Vector spaces with additional structure

Applications

Generalizations

See also

Notes

Citations

References

External links

Last edited 10 days ago by Jakob.scholbach

RELATED ARTICLES

Linear map
Mapping that preserves the operations of addition and scalar multiplication

Linear subspace

Subset of a vector space that forms a vector space itself

Examples of vector spaces



Linear equation


Two graphs of linear equations in two variables

In mathematics, a linear equation is an equation that may be put in the form

a_{1}x_{1} + \cdots + a_{n}x_{n} + b = 0,

where x_{1}, \ldots, x_{n} are the variables (or unknowns or indeterminates), and b, a_{1}, \ldots, a_{n}
are the coefficients, which are often real numbers. The coefficients may be considered as parameters of
the equation, and may be arbitrary expressions, provided they do not contain any of the variables. To
yield a meaningful equation, the coefficients a_{1}, \ldots, a_{n} are required to not be all zero.

In other words, a linear equation is obtained by equating to zero a linear polynomial over some field,
from which the coefficients are taken (the symbols used for the variables are supposed to not denote
any element of the field).

The solutions of such an equation are the values that, when substituted for the unknowns, make the
equality true.

In the case of just one variable, there is exactly one solution (provided that a_{1} \neq 0). Often, the
term linear equation refers implicitly to this particular case, in which the variable is sensibly called
the unknown.
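In this single-variable case a_{1}x_{1} + b = 0, the unique solution is x_{1} = -b / a_{1}. A minimal sketch (the function name is illustrative):

```python
def solve_one_variable(a1, b):
    """Solve a1*x + b = 0 for x, assuming a1 != 0."""
    if a1 == 0:
        raise ValueError("coefficient a1 must be nonzero")
    return -b / a1

x = solve_one_variable(2.0, -6.0)   # 2x - 6 = 0
assert x == 3.0
```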

In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of
the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and,
conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This
is the origin of the term linear for describing this type of equation. More generally, the solutions of a
linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space
of dimension n.
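The two-variable case can be made concrete by rewriting a x + b y + c = 0 (with b ≠ 0) in slope-intercept form y = m x + k. The helper below is an illustrative sketch, not a standard routine:

```python
def slope_intercept(a, b, c):
    """Rewrite a*x + b*y + c = 0 (with b != 0) as y = m*x + k."""
    if b == 0:
        raise ValueError("b must be nonzero for a non-vertical line")
    return -a / b, -c / b

m, k = slope_intercept(2.0, -1.0, 4.0)   # 2x - y + 4 = 0  ->  y = 2x + 4
assert (m, k) == (2.0, 4.0)

# any point on the line satisfies the original equation
x0 = 1.5
assert 2.0 * x0 - 1.0 * (m * x0 + k) + 4.0 == 0.0
```

Vertical lines (b = 0) have no slope-intercept form, which is why the general form a x + b y + c = 0 is the one that covers every line in the plane.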
Linear equations occur frequently in all mathematics and their applications in physics and engineering,
partly because non-linear systems are often well approximated by linear equations.

This article considers the case of a single equation with coefficients from the field of real numbers, for
which one studies the real solutions. All of its content applies to complex solutions and, more generally,
to linear equations with coefficients and solutions in any field. For the case of several simultaneous
linear equations, see system of linear equations.

One variable

Two variables

More than two variables

See also

Notes

References

External links

Last edited 5 days ago by DVdm

