
Linear Algebra

Student Name: Jihad Hossain Jisan

ID: 231016712
Section: 05
Course title: MATH 207
Faculty name: Mashky Chowdhury Surja
Table of Contents
1. Vector
2. Linear combinations, span, and basis vectors
3. Linear transformations and matrices
4. Matrix multiplication as composition
5. Three-dimensional linear transformations
6. The determinant
7. Inverse matrices, column space and null space
8. Nonsquare matrices as transformations between dimensions
9. Dot products and duality
10. Cross products
11. Cross products in the light of linear transformation
12. Cramer's rule, explained geometrically
13. Change of basis
14. Eigenvectors and eigenvalues
15. A quick trick for computing eigenvalues
16. Abstract vector spaces
1. Vector:
A vector is an ordered list of numbers; "vector" is essentially a fancy word
for a list. A 2D vector has only two components, while a 3D vector has three,
matching the three-dimensional space we live in.

Vector addition works tip to tail: start the journey along the first vector,
then, from its tip, continue in the direction of the second vector. The vector
that runs from the starting point to where you end up is the sum of the two
vectors.
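As a minimal sketch of this idea, here is how vector addition looks in Python
with numpy (the particular vectors are made up for illustration):

import numpy as np

v = np.array([2, 1])   # first 2D vector
w = np.array([1, 3])   # second 2D vector

# Tip-to-tail addition is simply component-wise addition.
print(v + w)           # [3 4]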
2. Linear combinations, span, and basis vectors:
First off, let's talk about linear combinations. Imagine you have two vectors,
call them v and w, and two numbers, say α and β. A linear combination of v and
w means you scale each vector by its number and then add the results. So α
times v plus β times w gives you a new vector.

These special vectors, called basis vectors, are like the building blocks of a
coordinate system. They define the directions and lengths of each axis. Think
of the usual xy-coordinate system - the basis vectors here are i-hat (pointing
right) and j-hat (pointing up). Any vector in this system can be written as a
combo of i-hat and j-hat.

Now, the span of a set of vectors is just all the possible vectors you can
make by combining those vectors. For example, if you have i-hat and j-hat, you
cover the entire xy-plane, because any point on that plane can be reached by
combining i-hat and j-hat in different ways. If two vectors point along the
same line, their span is just that line. If they point in different
directions, their span is a whole plane.

A basis is just a set of vectors that are independent (you can't make one by
combining the others) and that cover the entire space. For the xy-plane, i-hat
and j-hat form a basis since they're not in the same direction and they cover the
whole plane.
What's cool is you can pick different basis vectors for the same space, and it
changes how you describe vectors in that space. Like, if you choose different
vectors for the xy-plane, any point in that plane can be described using these
new vectors.

And it's not just for 2D. You can do this in 3D too! If you have two
independent vectors in 3D, they span a plane, and those vectors can be your
basis for describing any point in that plane.

Understanding all this stuff about linear combinations, span, and basis vectors
is super important because you'll use these ideas a lot in the future. The author
also says it's fun to play around with different basis vectors to see how they
change things up for vectors in space.
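To make this concrete, here is a small numpy sketch (the vectors, coefficients,
and target are all made up for illustration) that forms a linear combination
and then recovers a vector's coordinates with respect to a new basis:

import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

# A linear combination: scale each vector, then add.
alpha, beta = 2.0, 0.5
print(alpha * v + beta * w)        # [3.5 3.5]

# Expressing a target vector in the basis {v, w}: solve [v w] c = target.
B = np.column_stack([v, w])        # basis vectors as columns
target = np.array([5.0, 4.0])
c = np.linalg.solve(B, target)     # coordinates of target in the new basis
print(c)                           # c[0]*v + c[1]*w == target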

3. Linear transformations and matrices:

The video shows that systems of linear equations can be seen as
transformations, just like matrices. This idea was mind-expanding.
Understanding that solving equations isn't just about finding specific values
but rather finding vectors that undergo specific transformations felt like a
paradigm shift.

The concept of matrix equations was a key highlight. It illustrated how matrix-
vector multiplication is akin to expressing a system of equations compactly. The
video revealed that solving Ax = b (where A is a matrix and x, b are vectors) is
equivalent to finding a vector x that, when transformed by matrix A, results in
vector b. It was fascinating to see how this simple representation connects to
solving equations.

The visuals were incredibly helpful in grasping the concept of equation
transformations. Watching how matrices change vectors and how the solutions
to equations relate to specific transformations made abstract algebraic
concepts more concrete.

Understanding the geometric interpretation of equations was a game-changer.
Realizing that the solutions to equations represent points that transform to
specific places after being acted upon by matrices felt empowering. It made
solving equations feel like deciphering the transformations hidden within
them.

Transformation of a vector A = [x, y]:

       [ a  b ] [ x ]   [ ax + by ]
T(A) = [ c  d ] [ y ] = [ cx + dy ]

Moreover, the video showcased how different matrices lead to different types
of transformations, which in turn affect the solutions to equations. This
revelation highlighted the intricate relationship between matrices, equations,
and their solutions.
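A small numpy sketch of this view (the matrix and vectors here are
illustrative, not from the video): applying T(v) = Av, and solving Ax = b to
find the vector that A transforms into b:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# T(v) = Av: the matrix transforms the vector.
v = np.array([1.0, 2.0])
print(A @ v)                  # [4. 7.]

# Solving Ax = b means finding the vector x that A transforms into b.
x = np.linalg.solve(A, b)
print(x)                      # [1. 3.]
print(np.allclose(A @ x, b))  # True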
4. Matrix multiplication as composition:
So, the video explained that a linear transformation, represented by T, takes
a vector v and "transforms" it into another vector, T(v). This new vector is
calculated by multiplying a matrix A with the original vector v, like this:
T(v) = Av. The video goes on to demonstrate how matrices affect vectors, with
the vector v depicted as a combination of basis vectors xi + yj. This
relationship is vital in understanding matrix-vector multiplication, where Av
embodies the application of the linear transformation represented by matrix A
on vector v.

Moreover, I learned about composing linear transformations, denoted as C = AB,
illustrating how the new basis vectors' coordinates, Ci and Cj, are obtained
through sequential transformations A(Bi) and A(Bj), respectively. The fact
that a matrix product such as C = AB is read from right to left, emphasizing
the order of transformations from the innermost to the outermost, was
particularly enlightening. Additionally, understanding the associativity of
matrix multiplication, (AB)C = A(BC), and its application to sequential
transformations, (AB)v = A(Bv), elucidates how matrix operations are grouped
and applied.

The conceptual understanding of matrix multiplication, particularly
visualizing it as a series of sequential transformations where the order
significantly impacts the outcome (AB ≠ BA), provides a deeper insight into
how matrices influence and alter vectors in various contexts. Overall,
comprehending these matrix-related concepts has been a foundational step in
understanding how matrices encode linear transformations and their sequential
applications on vectors in different dimensions.
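These order-of-composition facts can be checked numerically. Here is a hedged
sketch (the rotation and shear matrices are illustrative choices, not from the
video):

import numpy as np

A = np.array([[0, -1],    # 90-degree rotation
              [1,  0]])
B = np.array([[1, 1],     # shear
              [0, 1]])
v = np.array([1, 2])

C = A @ B                                # composition: apply B first, then A
print(np.allclose(C @ v, A @ (B @ v)))   # True: (AB)v = A(Bv)
print(np.array_equal(A @ B, B @ A))      # False: order matters, AB != BA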

5. Three-dimensional linear transformations:
In the context of 3D space, a linear transformation can be visualized as a
process that stretches, shrinks, rotates, or reflects an object in any way
imaginable. Let's consider the following example to illustrate the
construction of a transformation matrix. Imagine we want to rotate a vector by
90 degrees around the y-axis. We can represent this rotation as a 3x3 matrix
as shown below:

[  0  0  1 ]
[  0  1  0 ]
[ -1  0  0 ]

In this matrix, each column represents the transformed coordinates of one of
the basis vectors:
First column: This represents the transformed i-hat vector. After the
90-degree rotation around the y-axis, i-hat moves along the negative z-axis,
resulting in a new coordinate of (0, 0, -1).
Second column: This represents the transformed j-hat vector. As the rotation
happens around the y-axis, j-hat remains unchanged, keeping its coordinate of
(0, 1, 0).
Third column: This represents the transformed k-hat vector. After the
rotation, k-hat moves along the positive x-axis, resulting in a new coordinate
of (1, 0, 0).
The video provided a valuable introduction to the concept of three-dimensional linear
transformations and their representation using matrices. Through clear
explanations and engaging visualizations, the video helped me understand how
these transformations work and how they can be applied in various fields. I am
excited to further explore this world of linear algebra and unlock its potential
for creating and manipulating objects in 3D space.
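Assuming the rotation matrix given above, here is a quick numpy check of where
each basis vector lands:

import numpy as np

# 90-degree rotation about the y-axis (the matrix from the example above).
R_y = np.array([[ 0, 0, 1],
                [ 0, 1, 0],
                [-1, 0, 0]])

i_hat = np.array([1, 0, 0])
j_hat = np.array([0, 1, 0])
k_hat = np.array([0, 0, 1])

print(R_y @ i_hat)   # [ 0  0 -1]  i-hat lands on the negative z-axis
print(R_y @ j_hat)   # [0 1 0]     j-hat is unchanged
print(R_y @ k_hat)   # [1 0 0]     k-hat lands on the positive x-axis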

6. The determinant:
Let's consider a 2x2 matrix:

    [ 3  4 ]
A = [ 1  2 ]

To find the determinant of this matrix, denoted as det(A) or |A|, we use the
formula for a 2x2 matrix:

det(A) = ad − bc

For matrix A:

det(A) = (3 × 2) − (4 × 1) = 6 − 4 = 2

So, the determinant of matrix A is 2.
This determinant value of 2 signifies the scaling factor by which this matrix
transforms areas in space. If we consider a unit square in the original space,
after the transformation by matrix A, the area of the resulting parallelogram will
be twice the original unit square, indicating the impact of the transformation on
area scaling.
This illustrates how determinants provide insight into the effect of matrices on
the spatial content, specifically in terms of area transformations in two
dimensions.
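A one-line numpy check of the determinant from the example above (floating
point makes the result approximate):

import numpy as np

A = np.array([[3.0, 4.0],
              [1.0, 2.0]])

# The unit square spanned by i-hat and j-hat maps to a parallelogram
# spanned by the columns of A; its area equals |det(A)|.
print(np.linalg.det(A))   # ≈ 2.0: areas are scaled by a factor of 2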

7. Inverse matrices, column space and null space

Let's consider a 2x2 matrix:

    [ 2  3 ]
A = [ 1  4 ]

We'll explore its properties regarding inverses, column space, and null space.

Inverse Matrix:
To find the inverse of matrix A (A⁻¹), we can use the formula:

A⁻¹ = (1 / det(A)) · adj(A)

Determinant of A:

det(A) = (2 × 4) − (3 × 1) = 8 − 3 = 5

Adjoint of A:

         [  4  -3 ]
adj(A) = [ -1   2 ]

Inverse of A:

            [  4  -3 ]
A⁻¹ = (1/5) [ -1   2 ]

Column Space and Null Space:


The column space of matrix A is the space spanned by its columns. For matrix
A:

Column 1: [2, 1]
Column 2: [3, 4]

The column space is the entire 2D space since these columns are linearly
independent.

The null space (kernel) of matrix A consists of vectors x such that Ax = 0. To
find this:

     [ 2  3 ] [ x1 ]   [ 0 ]
Ax = [ 1  4 ] [ x2 ] = [ 0 ]

This equation yields only the solution x1 = x2 = 0. So, the null space
contains only the zero vector.
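A hedged numpy sketch of the same computations (using the matrix from this
example):

import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

A_inv = np.linalg.inv(A)
print(A_inv)                              # (1/5)[[4,-3],[-1,2]] = [[0.8,-0.6],[-0.2,0.4]]
print(np.allclose(A @ A_inv, np.eye(2)))  # True

# Rank 2 means the columns span all of 2D space (full column space),
# so the null space contains only the zero vector.
print(np.linalg.matrix_rank(A))           # 2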
8. Nonsquare matrices as transformations between dimensions

A 2x3 matrix B represents a transformation from a 3-dimensional space to a
2-dimensional space:

    [  2  1  3 ]
B = [ -1  0  2 ]

Suppose we have a vector in 3D space:

    [  4 ]
V = [ -2 ]
    [  1 ]

Multiplying matrix B by vector V, each row of B is dotted with V:

      [ (2 × 4) + (1 × (-2)) + (3 × 1)  ]   [  9 ]
B·V = [ (-1 × 4) + (0 × (-2)) + (2 × 1) ] = [ -2 ]

So, the result B·V = [9, -2] is a vector in a 2-dimensional space. This
transformation maps a vector from a 3D space onto a 2D space, showcasing how a
non-square matrix operates as a transformation between dimensions.
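The same computation in numpy, using the matrix and vector from this example:

import numpy as np

B = np.array([[ 2, 1, 3],
              [-1, 0, 2]])   # 2x3: maps 3D vectors to 2D vectors
V = np.array([4, -2, 1])

print(B @ V)                 # [ 9 -2]: a vector in 2D space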
9. Dot products and duality
Dot product:

Let's take two vectors a and b and calculate their dot product.

Consider a = [2, 3, 4] and b = [5, 1, 2].

To find their dot product:

a · b = a1 * b1 + a2 * b2 + a3 * b3

Substituting the values:

a · b = (2 * 5) + (3 * 1) + (4 * 2) = 10 + 3 + 8 = 21

So, the dot product of vectors a and b is 21.


Duality:

Take a vector v = [3, 4] and define the linear functional f(x) = 3x1 + 4x2,
where x = [x1, x2]. The dot product of v and x can be represented using the
functional f:

v · x = v1 * x1 + v2 * x2 = f(x)

Let's verify this with concrete values:

v = [3, 4], x = [1, 2]

v · x = 3 * 1 + 4 * 2 = 3 + 8 = 11

Now, evaluating the linear functional f(x) = 3x1 + 4x2:

f(x) = 3 * 1 + 4 * 2 = 3 + 8 = 11

In this example, v · x = 11, which equals the value obtained by evaluating the
linear functional f at x = [1, 2].

This illustrates the duality between the dot product of vectors and the evaluation
of a corresponding linear functional. The dot product of vectors can be
represented as the result of applying a specific linear functional to another
vector, showcasing the relationship between vectors and linear functionals in
terms of duality.
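A short numpy sketch of both halves of this section, using the vectors from
the examples above:

import numpy as np

a = np.array([2, 3, 4])
b = np.array([5, 1, 2])
print(np.dot(a, b))        # 21

# Duality: the functional f(x) = 3*x1 + 4*x2 is "the same thing"
# as taking the dot product with its dual vector v = [3, 4].
v = np.array([3, 4])
x = np.array([1, 2])
f = lambda x: 3 * x[0] + 4 * x[1]
print(np.dot(v, x), f(x))  # 11 11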

10. Cross products


Let's look at a few math examples to understand this concept:

a = (3, −3, 1) and b = (4, 9, 2)

        | i   j   k |
a × b = | 3  −3   1 |
        | 4   9   2 |

= i((−3)·2 − 1·9) − j(3·2 − 1·4) + k(3·9 − (−3)·4)
= −15i − 2j + 39k

So, the cross product of vector a and vector b is −15i − 2j + 39k.

Let's see another one:

a = (3, −3, 1) and c = (−12, 12, −4)

        |  i    j   k  |
a × c = |  3   −3   1  |
        | −12  12  −4  |

= i(12 − 12) − j(−12 + 12) + k(36 − 36)
= (0, 0, 0)

||a × c|| = 0

This means that the cross product a × c yields a vector whose length is zero.
Geometrically, this tells us that the vectors a and c are either parallel or
anti-parallel to each other (in fact, c = −4a).
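Both cross products can be verified with numpy, using the vectors from the
examples above:

import numpy as np

a = np.array([3, -3, 1])
b = np.array([4, 9, 2])
c = np.array([-12, 12, -4])   # c = -4 * a, anti-parallel to a

print(np.cross(a, b))         # [-15  -2  39]
print(np.cross(a, c))         # [0 0 0]: (anti-)parallel vectors give the zero vector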
11. Cross products in the light of linear
transformation
Linear Transformation and Duality:

A linear transformation L from three-dimensional space to the real number line
is associated with a unique dual vector p, such that for any vector x,

L(x) = p · x

The cross product involves defining a specific linear transformation, and its
dual vector is precisely the cross product of v and w.

Computational Interpretation:
• The cross product can be computationally interpreted through a linear
transformation L from three dimensions to one dimension. Because L is linear,
it can be expressed as multiplication by a 1x3 matrix:

                      [ x ]
L(x) = [ px  py  pz ] [ y ]
                      [ z ]

The resulting vector p encapsulates the essence of this transformation.

Geometric Interpretation:

• Geometrically, the dual vector p corresponds to a vector perpendicular to v
and w with a magnitude equal to the area of the parallelogram spanned by v and
w. This can be expressed as a dot product:

    [ x ]         [ x   y   z  ]
p · [ y ]  =  det [ Vx  Vy  Vz ]
    [ z ]         [ Wx  Wy  Wz ]

The geometric interpretation aligns with the computational perspective,
revealing the fundamental connection between the computation and the geometric
properties of the cross product.
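The identity p · x = det([x; v; w]) can be spot-checked numerically. The
vectors v, w, and x below are arbitrary test values, not from the video:

import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
p = np.cross(v, w)             # the dual vector of the transformation

x = np.array([7.0, -1.0, 2.0])
det_val = np.linalg.det(np.array([x, v, w]))  # det with x, v, w as rows

print(np.dot(p, x), det_val)                  # both -33.0
print(np.allclose(np.dot(p, x), det_val))     # True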

12. Cramer's rule, explained geometrically

Cramer's rule stands as a significant technique for solving systems of
equations. It involves determining variable values by leveraging matrix
determinants. Hence, it's often recognized as the determinant method for
solving these equations.

Now let's understand it with a math example.

Let us consider two linear equations in two variables.

2x - y = 5

x+y=4

Let us write these two equations in the form AX = B.

[ 2  -1 ] [ x ]   [ 5 ]
[ 1   1 ] [ y ] = [ 4 ]

Here,

                         [ 2  -1 ]
Coefficient matrix = A = [ 1   1 ]

                      [ x ]
Variable matrix = X = [ y ]

                      [ 5 ]
Constant matrix = B = [ 4 ]

Now,

D = |A| = | 2  -1 | = 2 + 1 = 3 (not 0)
          | 1   1 |

So, the given system of equations has a unique solution.

Dx = | 5  -1 | = 5 + 4 = 9
     | 4   1 |

Dy = | 2  5 | = 8 − 5 = 3
     | 1  4 |

Therefore,

x = Dx/D = 9/3 = 3

y = Dy/D = 3/3 = 1

Cramer's Rule Conditions

• Cramer's rule fails for a system of equations in which D = 0, since D
appears in the denominator when finding the unknowns, and those values would
be undefined.
• Also, when D = 0, there are two possibilities: the system may have no
solution, or it may have infinitely many solutions.
• If D ≠ 0, we say that the system AX = B has a unique solution.
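A minimal numpy sketch of Cramer's rule applied to the example above:

import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
B = np.array([5.0, 4.0])

D = np.linalg.det(A)
Dx = np.linalg.det(np.column_stack([B, A[:, 1]]))  # replace column 1 with B
Dy = np.linalg.det(np.column_stack([A[:, 0], B]))  # replace column 2 with B

print(Dx / D, Dy / D)   # ≈ 3.0 and 1.0, matching x = 3, y = 1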

13. Change of basis

Changing the basis is basically like changing the language you use to talk
about vectors. In a graph we have the x and y axes; those are the usual way we
talk about vectors, kind of like our default language.

But sometimes we want to talk about vectors using different directions. So,
instead of using x and y, we might want to use u1 and u2. When we do that,
we're changing the basis.

Say we have a vector v = [x, y] in the usual x and y language. To express it
in terms of the new u1 and u2, we need to find out how much of u1 and u2 make
up our vector v.

It's like saying v = c1 * u1 + c2 * u2, where c1 and c2 are like the secret
codes that tell us how much of u1 and u2 we need to add together to get our
original vector v.

To do this properly, we use some math tricks involving matrices. These tricks
help us convert our coordinates from the x and y way of speaking into the u1
and u2 way.
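Here is a hedged numpy sketch of that trick. The basis vectors u1 and u2 and
the vector v are hypothetical choices for illustration; the conversion solves
[u1 u2] c = v for the new coordinates c:

import numpy as np

u1 = np.array([2.0, 1.0])      # hypothetical new basis vector
u2 = np.array([-1.0, 1.0])     # hypothetical new basis vector
v = np.array([3.0, 3.0])       # a vector in standard x-y coordinates

U = np.column_stack([u1, u2])  # basis vectors as columns
c = np.linalg.solve(U, v)
print(c)                                       # [2. 1.]: v = 2*u1 + 1*u2
print(np.allclose(c[0] * u1 + c[1] * u2, v))   # True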

This whole thing about changing the basis is super important in math. It's used
in all sorts of stuff like understanding quantum mechanics and dealing with
signals in things like music and communications.
14. Eigenvectors and eigenvalues
An eigenvector of a square matrix A is a non-zero vector v such that
when A operates on v, the resulting vector is a scaled version of v.
Mathematically, if Av=λv, where λ is a scalar (called the eigenvalue),
then v is an eigenvector of A.

Let's break this down with an example:

Consider the matrix:

    [ 3  1 ]
A = [ 1  3 ]

To find the eigenvectors and eigenvalues of A, we solve the equation


Av=λv.

First, let's find the eigenvalues λ:

det(A − λI) = 0

det [ 3−λ   1  ] = 0
    [  1   3−λ ]

(3 − λ)² − 1 = 0

λ² − 6λ + 8 = 0

(λ − 4)(λ − 2) = 0

So, the eigenvalues are λ1 = 4 and λ2 = 2.

Next, let's find the eigenvectors corresponding to each eigenvalue.

For λ1 = 4:

           [ -1   1 ]
A − λ1 I = [  1  -1 ]

Solving (A − λ1 I) v1 = 0:

[ -1   1 ] [ x ]   [ 0 ]
[  1  -1 ] [ y ] = [ 0 ]

−x + y = 0

x = y

So, an eigenvector corresponding to λ1 = 4 is any non-zero vector of the form
[1, 1].

In the same way, an eigenvector corresponding to λ2 = 2 is any non-zero vector
of the form [1, −1].
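The same eigenpairs can be computed with numpy (eigenvector signs, scaling,
and ordering may vary, since any non-zero multiple of an eigenvector works):

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # [4. 2.] (order may vary)
print(eigenvectors)   # columns are normalized eigenvectors

# Check Av = lambda * v for the first eigenpair.
v = eigenvectors[:, 0]
print(np.allclose(A @ v, eigenvalues[0] * v))  # True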
15. A quick trick for computing
eigenvalues
The formula: λ = m ± √(m² − p)

m = mean of the eigenvalues (half the trace of the matrix)

p = product of the eigenvalues (the determinant of the matrix)

Let's start with a math example:

[ 2  7 ]
[ 1  8 ]

m = (2 + 8)/2 = 5 and p = (2 × 8) − (7 × 1) = 16 − 7 = 9

5 ± √(5² − 9) = 5 ± 4 = 9, 1

That's the quick formula for finding eigenvalues.
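A quick numpy check of the trick against a direct eigenvalue computation,
using the matrix from this example (floating point makes the results
approximate):

import numpy as np

A = np.array([[2.0, 7.0],
              [1.0, 8.0]])

m = np.trace(A) / 2    # mean of the eigenvalues
p = np.linalg.det(A)   # product of the eigenvalues

print(m + np.sqrt(m**2 - p), m - np.sqrt(m**2 - p))  # ≈ 9.0 and 1.0
print(np.linalg.eigvals(A))                          # ≈ [9. 1.] (order may vary)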

16. Abstract vector spaces


An abstract vector space is a set of "vectors" which can be added together and
multiplied by scalars in order to yield new vectors. The rules for addition
and scalar multiplication have to satisfy a number of axioms in order for the
set to count as a vector space. The axioms guarantee that the set will behave
"like" R^n for some n. Some examples of abstract vector spaces are the n×m
matrices (denoted by Mn×m(R)), continuous infinitely differentiable
real-valued functions on R (denoted by C^∞(R)), and polynomials of degree up
to n with coefficients in R (denoted by Pn(R)). A subspace of a vector space
is a subset of the vector space which is "closed" under addition and
multiplication by scalars. This means that, for any two vectors v and w in the
subspace, and for any scalar c, v + w and cv are both in the subspace. Any
subspace of a vector space is a vector space itself.
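As a small illustration of closure, here is a hedged numpy sketch treating
polynomials of degree up to 2 as coefficient arrays (this representation is an
illustrative assumption, not from the source):

import numpy as np

# Represent a polynomial by its coefficients, e.g. 1 + 2x + 3x^2 -> [1, 2, 3].
p = np.array([1.0, 2.0, 3.0])
q = np.array([0.0, -1.0, 4.0])

# Addition and scalar multiplication stay inside the space (closure),
# which is what makes P2(R) behave like a vector space.
print(p + q)      # [1. 1. 7.]   still a degree <= 2 polynomial
print(2.5 * p)    # [2.5 5.  7.5]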

