Linear Algebra
Student Name: Jihad Hossain Jisan
ID: 231016712
Section: 05
Course Title: MATH 207
These special vectors, called basis vectors, are like the building blocks of a
coordinate system. They define the directions and lengths of each axis. Think
of the usual xy-coordinate system - the basis vectors here are i-hat (pointing
right) and j-hat (pointing up). Any vector in this system can be written as a
combo of i-hat and j-hat.
Now, the span of a set of vectors is just all the possible combinations you can
make using those vectors. For example, if you have i-hat and j-hat, you basically
cover the entire xy-plane because any point on that plane can be made by
combining i-hat and j-hat in different ways. If the vectors are in the same
direction, you get a line. If they're not in the same direction, you get a whole
plane.
A basis is just a set of vectors that are independent (you can't make one by
combining the others) and that cover the entire space. For the xy-plane, i-hat
and j-hat form a basis since they're not in the same direction and they cover the
whole plane.
What's cool is you can pick different basis vectors for the same space, and it
changes how you describe vectors in that space. Like, if you choose different
vectors for the xy-plane, any point in that plane can be described using these
new vectors.
And it's not just for 2D. You can do this for 3D too! If you have vectors that
form a plane in 3D, those vectors can be your basis for describing any point in
that plane.
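To check this idea myself, here is a small numpy sketch. The vector (3, 2) and the alternative basis u1, u2 are just made-up examples, not from the video:

```python
import numpy as np

# Standard basis vectors of the xy-plane
i_hat = np.array([1.0, 0.0])
j_hat = np.array([0.0, 1.0])

# Any vector, e.g. (3, 2), is a linear combination of i-hat and j-hat
v = 3 * i_hat + 2 * j_hat

# An alternative basis for the same plane (chosen arbitrarily)
u1 = np.array([1.0, 1.0])
u2 = np.array([-1.0, 1.0])

# Find the coordinates of v in the new basis: c1*u1 + c2*u2 = v
coords = np.linalg.solve(np.column_stack([u1, u2]), v)
```

The same point in the plane gets different coordinates depending on which basis you describe it in, which is exactly the point the video makes.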
Understanding all this stuff about linear combinations, span, and basis vectors
is super important because you'll use these ideas a lot in the future. The video
also says it's fun to play around with different basis vectors to see how they
change things up for vectors in space.
The concept of matrix equations was a key highlight. It illustrated how matrix-
vector multiplication is akin to expressing a system of equations compactly. The
video revealed that solving Ax = b (where A is a matrix and x, b are vectors) is
equivalent to finding a vector x that, when transformed by matrix A, results in
vector b. It was fascinating to see how this simple representation connects to
solving equations.
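To make this concrete, here is a small sketch; the particular matrix and vector are arbitrary examples of mine, not from the video:

```python
import numpy as np

# An example system written as A x = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Solving means finding the vector x that A transforms into b
x = np.linalg.solve(A, b)

# Transforming x by A recovers b, which is exactly what "solving" means
b_check = A @ x
```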
• Transformation of a vector v = (x, y):

          [a  b] [x]   [ax + by]
   T(v) = [c  d] [y] = [cx + dy]
Seeing where vectors land after being acted upon by matrices felt empowering.
It made solving equations feel like deciphering the transformations hidden
within them.
Moreover, the video showcased how different matrices lead to different types
of transformations, which in turn affect the solutions to equations. This
revelation highlighted the intricate relationship between matrices, equations,
and their solutions.
4. Matrix multiplication as composition:
So, the video explained that a linear transformation, represented by T, takes a
vector v and "transforms" it into another vector, T(v). This new vector is
calculated by multiplying a matrix A with the original vector v, like this:
T(v) = Av. The video also explained that a multiplication order such as C = AB
is read from right to left: applying C to a vector means applying B first, then
A. Emphasizing that the order significantly impacts the outcome (AB ≠ BA)
provides a deeper insight into how matrices influence and alter vectors in
various contexts.
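Here is a quick sketch of this right-to-left reading; the rotation and shear matrices are standard examples I chose, not ones from the text:

```python
import numpy as np

# Two example transformations: a 90-degree rotation and a horizontal shear
rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

v = np.array([1.0, 0.0])

# C = A B means "apply B first, then A" (read right to left)
shear_then_rotate = rotation @ shear
rotate_then_shear = shear @ rotation

# Composing into one matrix agrees with applying the steps one at a time
step_by_step = rotation @ (shear @ v)
```

Comparing `shear_then_rotate` and `rotate_then_shear` shows directly that AB ≠ BA.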
5. Three-dimensional linear transformations:
In the context of 3D space, a linear transformation can be visualized as a
process that stretches, shrinks, rotates, or reflects an object in any way
imaginable. Let's
consider the following example to illustrate the construction of a transformation
matrix. Imagine we want to rotate a vector by 90 degrees around the y-axis. We
can represent this rotation as a 3x3 matrix as shown below:
 0  0  1
 0  1  0
-1  0  0

In this matrix, each column represents the transformed coordinates of one of the
basis vectors:
First Column: This represents the transformed i-hat vector. After the 90-degree
rotation around the y-axis, i-hat moves along the negative z-axis, resulting in
new coordinates of (0, 0, -1).
Second Column: This represents the transformed j-hat vector. As the rotation
happens around the y-axis, j-hat remains unchanged, keeping its coordinates
(0, 1, 0).
Third Column: This represents the transformed k-hat vector. After the rotation,
k-hat moves along the positive x-axis, resulting in new coordinates of (1, 0, 0).
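A small numerical check of this rotation, using the convention where i-hat goes to the negative z-axis and k-hat to the positive x-axis:

```python
import numpy as np

# 90-degree rotation about the y-axis: i-hat -> -z, j-hat fixed, k-hat -> +x
R_y = np.array([[ 0.0, 0.0, 1.0],
                [ 0.0, 1.0, 0.0],
                [-1.0, 0.0, 0.0]])

i_hat = np.array([1.0, 0.0, 0.0])
j_hat = np.array([0.0, 1.0, 0.0])
k_hat = np.array([0.0, 0.0, 1.0])

# Each column of R_y is where the corresponding basis vector lands
i_image = R_y @ i_hat
j_image = R_y @ j_hat
k_image = R_y @ k_hat
```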
The video provided a valuable introduction to the concept of three-dimensional linear
transformations and their representation using matrices. Through clear
explanations and engaging visualizations, the video helped me understand how
these transformations work and how they can be applied in various fields. I am
excited to further explore this world of linear algebra and unlock its potential
for creating and manipulating objects in 3D space.
6.The determinant:
Let's consider a 2x2 matrix:

    A = [3  4]
        [1  2]
To find the determinant of this matrix, denoted as det(A) or |A|, we use the
formula for a 2x2 matrix:
det(A)=ad−bc
For matrix A:
det(A) = (3×2) − (4×1) = 6 − 4 = 2
So, the determinant of matrix A is 2.
This determinant value of 2 signifies the scaling factor by which this matrix
transforms areas in space. If we consider a unit square in the original space,
after the transformation by matrix A, the area of the resulting parallelogram will
be twice the original unit square, indicating the impact of the transformation on
area scaling.
This illustrates how determinants provide insight into the effect of matrices on
the spatial content, specifically in terms of area transformations in two
dimensions.
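Just to verify this area-scaling interpretation numerically (a sketch of my own, not from the video):

```python
import numpy as np

A = np.array([[3.0, 4.0],
              [1.0, 2.0]])

# det(A) = ad - bc = 6 - 4 = 2
d = np.linalg.det(A)

# The unit square maps to the parallelogram spanned by the columns of A;
# its area is |ad - bc|, the same number
col1, col2 = A[:, 0], A[:, 1]
area = abs(col1[0] * col2[1] - col1[1] * col2[0])
```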
7. Inverse matrices, column space and null space:
Consider the matrix

    A = [2  3]
        [1  4]

We'll explore its properties regarding inverses, column space, and null space.

Inverse Matrix:
To find the inverse of matrix A, we can use the formula:

    A⁻¹ = (1/det(A)) · adj(A)

Determinant of A:

    det(A) = (2×4) − (3×1) = 8 − 3 = 5

Adjoint of A:

    adj(A) = [ 4  -3]
             [-1   2]

Inverse of A:

    A⁻¹ = (1/5) [ 4  -3]
                [-1   2]
Column Space:

    Column 1: (2, 1)
    Column 2: (3, 4)

The column space is the entire 2D space since these columns are linearly
independent.
Null Space:

    Ax = [2  3] [x1]   [0]
         [1  4] [x2] = [0]

This equation yields only the solution x1 = x2 = 0. So, the null space
only contains the zero vector.
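A quick numpy check of all three facts above (a sketch, using the same matrix A):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# det(A) = 8 - 3 = 5, so A is invertible
d = np.linalg.det(A)
A_inv = np.linalg.inv(A)

# A times its inverse gives the identity matrix
identity_check = A @ A_inv

# Since the columns are independent, Ax = 0 has only the zero solution
x = np.linalg.solve(A, np.zeros(2))
```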
8. Nonsquare matrices as transformations between dimensions:
    B = [ 2  1  3]
        [-1  0  2]

    V = [ 4]
        [-2]
        [ 1]

Since B is a 2×3 matrix, it takes the 3D vector V to a 2D vector:

    BV = [(2×4) + (1×(-2)) + (3×1) ]   [ 9]
         [((-1)×4) + (0×(-2)) + (2×1)] = [-2]
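Checking this dimension-dropping transformation with numpy:

```python
import numpy as np

# A 2x3 matrix maps vectors from 3D down to 2D
B = np.array([[ 2.0, 1.0, 3.0],
              [-1.0, 0.0, 2.0]])
V = np.array([4.0, -2.0, 1.0])

W = B @ V   # a 2D vector: the 3D input landed in 2D
```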
9. Dot products and duality:
Let's take two vectors a = (2, 3, 4) and b = (5, 1, 2) and calculate their dot
product.

    a · b = a1*b1 + a2*b2 + a3*b3
    a · b = (2 × 5) + (3 × 1) + (4 × 2) = 10 + 3 + 8 = 21
The dot product of v and x (v · x) can be represented using the linear
functional f(x):

    v = [3, 4], x = [1, 2]
    v · x = v1*x1 + v2*x2 = f(x)
    v · x = (3 × 1) + (4 × 2) = 3 + 8 = 11
This illustrates the duality between the dot product of vectors and the evaluation
of a corresponding linear functional. The dot product of vectors can be
represented as the result of applying a specific linear functional to another
vector, showcasing the relationship between vectors and linear functionals in
terms of duality.
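This duality can be seen numerically: treating v as a 1×2 matrix and multiplying gives the same number as the dot product (a sketch with the vectors from the example above):

```python
import numpy as np

v = np.array([3.0, 4.0])
x = np.array([1.0, 2.0])

# Dot product as a sum of coordinate products
dot = float(v @ x)

# The same number from treating v as a 1x2 matrix acting on x,
# i.e. the linear functional f(x) = v . x
f_of_x = float(v.reshape(1, 2) @ x)
```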
10. Cross products:
Let a = (3, -3, 1) and b = (4, 9, 2).

            | i   j   k |
    a × b = | 3  -3   1 |
            | 4   9   2 |

    = i((-3)(2) − (1)(9)) − j((3)(2) − (1)(4)) + k((3)(9) − (-3)(4))
    = −15i − 2j + 39k

So, the cross product of vectors a and b is −15i − 2j + 39k.
Let's see another one, with c = (-12, 12, -4):

            | i    j    k |
    a × c = | 3   -3    1 |
            |-12  12   -4 |

    = i(12 − 12) − j(−12 + 12) + k(36 − 36)
    = (0, 0, 0)

    ||a × c|| = 0

This means that the cross product a × c yields a vector whose length is zero.
Geometrically, this suggests that the vectors a and c are either parallel or
anti-parallel to each other (in fact, c = −4a).
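Verifying both cross products with numpy:

```python
import numpy as np

a = np.array([3.0, -3.0, 1.0])
b = np.array([4.0, 9.0, 2.0])
c = np.array([-12.0, 12.0, -4.0])   # c = -4a, so it is anti-parallel to a

ab = np.cross(a, b)          # should be (-15, -2, 39)
ac = np.cross(a, c)          # zero vector, since a and c are anti-parallel
ac_length = np.linalg.norm(ac)
```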
11. Cross products in the light of linear transformations:
Linear Transformation and Duality:

    L(x) = p · x

Defining the cross product involves a specific linear transformation L, and its
dual vector p is precisely the cross product of v and w: p = v × w.
Computational Interpretation:
• The cross product can be computationally interpreted as a
linear transformation L from three dimensions to one
dimension. This can be expressed as matrix multiplication:
                     [x]         [ x    y    z  ]
    [px  py  pz]     [y]  = det  [ Vx   Vy   Vz ]
                     [z]         [ Wx   Wy   Wz ]
Geometric Interpretation:
        [x]         [ x    y    z  ]
    p · [y] = det   [ Vx   Vy   Vz ]
        [z]         [ Wx   Wy   Wz ]
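This duality can be tested numerically: the dot product with p = v × w agrees with the determinant that has x as its first row. The vectors v, w, x below are arbitrary examples of mine:

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

# p is the dual vector of the transformation L(x) = det([x; v; w])
p = np.cross(v, w)

x = np.array([2.0, -1.0, 5.0])

# The dot product with p agrees with the determinant with x as the first row
lhs = float(p @ x)
rhs = float(np.linalg.det(np.array([x, v, w])))
```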
12. Cramer's rule:
Consider the system:

    2x − y = 5
    x + y = 4

In matrix form:

    [2  -1] [x]   [5]
    [1   1] [y] = [4]

Here,

    Coefficient matrix A = [2  -1]
                           [1   1]

    Variable matrix X = [x]
                        [y]

    Constant matrix B = [5]
                        [4]
Now,

    D = |A| = |2  -1| = 2 + 1 = 3 (not 0)
              |1   1|

So, the given system of equations has a unique solution.

    Dx = |5  -1| = 5 + 4 = 9
         |4   1|

    Dy = |2  5| = 8 − 5 = 3
         |1  4|

Therefore,

    x = Dx/D = 9/3 = 3
    y = Dy/D = 3/3 = 1
• If D ≠ 0, the system AX = B has a unique solution.
• If D = 0, there are two possibilities: the system may have no solution, or it
may have infinitely many solutions.
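The whole Cramer's-rule calculation above can be checked in a few lines of numpy:

```python
import numpy as np

# The system 2x - y = 5, x + y = 4 in matrix form
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
B = np.array([5.0, 4.0])

D = np.linalg.det(A)

# Cramer's rule: replace each column of A by B in turn
A_x = A.copy(); A_x[:, 0] = B
A_y = A.copy(); A_y[:, 1] = B
Dx = np.linalg.det(A_x)
Dy = np.linalg.det(A_y)

x = Dx / D   # 9/3 = 3
y = Dy / D   # 3/3 = 1
```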
13.Change of basis
Changing the basis is basically like changing the language you use to talk about
vectors. Think of how in a graph we have the x and y axes. Those are the usual
way we talk about vectors, kind of like our default language.
But sometimes, we want to talk about vectors using different directions. So,
instead of using x and y, we might want to use u1 and u2. When we do that,
we're changing the basis.
It's like saying v = x * u1 + y * u2, where x and y are like the secret codes that
tell us how much of u1 and u2 we need to add together to get our original vector
v.
To do this properly, we use some math tricks involving matrices. These tricks
help us convert our coordinates from the x and y way of speaking into the u1
and u2 way.
This whole thing about changing the basis is super important in math. It's used
in all sorts of stuff like understanding quantum mechanics and dealing with
signals in things like music and communications.
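Those "math tricks involving matrices" can be sketched concretely: put u1 and u2 as the columns of a matrix and solve for the new coordinates. The basis u1, u2 and the vector v here are made-up examples:

```python
import numpy as np

# New basis vectors u1, u2 (an arbitrary example basis)
u1 = np.array([2.0, 1.0])
u2 = np.array([-1.0, 1.0])

# The change-of-basis matrix has u1 and u2 as its columns
U = np.column_stack([u1, u2])

# A vector written in the standard x/y "language"
v = np.array([3.0, 3.0])

# Translate v into the u1/u2 language: find (x, y) with x*u1 + y*u2 = v
coords = np.linalg.solve(U, v)

# Translating back recovers the original vector
v_back = U @ coords
```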
14.Eigenvectors and eigenvalues
An eigenvector of a square matrix A is a non-zero vector v such that
when A operates on v, the resulting vector is a scaled version of v.
Mathematically, if Av=λv, where λ is a scalar (called the eigenvalue),
then v is an eigenvector of A.
Consider a matrix:

    A = [3  1]
        [1  3]

To find the eigenvalues, solve the characteristic equation:

    det(A − λI) = 0

    det [3−λ    1 ] = 0
        [ 1   3−λ]

    (3 − λ)² − 1 = 0
    λ² − 6λ + 8 = 0
    (λ − 4)(λ − 2) = 0

So the eigenvalues are λ1 = 4 and λ2 = 2.
For λ1 = 4:

    A − λ1·I = [-1   1]
               [ 1  -1]

Solving (A − λ1·I)v1 = 0:

    [-1   1] [x]   [0]
    [ 1  -1] [y] = [0]

    −x + y = 0
    x = y

So any vector with x = y, such as v1 = (1, 1), is an eigenvector for λ1 = 4.
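The eigenvalues and eigenvectors found by hand above can be confirmed with numpy:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v of `eigenvectors` satisfies A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```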
A quick trick for 2×2 matrices: let

    m = mean of the eigenvalues = (trace)/2
    p = product of the eigenvalues = determinant

For the matrix

    [2  7]
    [1  8]

    m = (2 + 8)/2 = 5 and p = (2×8) − (7×1) = 16 − 7 = 9

    λ = m ± √(m² − p) = 5 ± √(25 − 9) = 5 ± 4 = 9, 1
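A quick check that the mean/product shortcut agrees with a full eigenvalue computation:

```python
import numpy as np

M = np.array([[2.0, 7.0],
              [1.0, 8.0]])

# m = mean of the eigenvalues = trace/2, p = product = determinant
m = np.trace(M) / 2          # 5
p = np.linalg.det(M)         # 9

# Shortcut formula: eigenvalues are m +/- sqrt(m^2 - p)
lam_hi = m + np.sqrt(m**2 - p)
lam_lo = m - np.sqrt(m**2 - p)
```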