Linear Transformation


LINEAR TRANSFORMATIONS
GROUP MEMBERS

SHIPRA CHOUDHARY 1022125


ANUSHKA DALVI 1022130
DEY RIYANKA 1022134
AARON DSOUZA 1022136
SAMEEP GARUD 1022144
RIYA JADHAV 1022148
CONTENTS

INTRODUCTION
MATRIX REPRESENTATION
INVERSE TRANSFORMATION
APPLICATION
CONCLUSION
INTRODUCTION

LINEAR TRANSFORMATION?
LET'S UNDERSTAND BY ANSWERING THE 3 QUESTIONS:

WHAT?
HOW?
WHERE?
WHAT?
DEFINITION:
A linear transformation (also called a linear map or linear operator) between two vector spaces
is a function that preserves the operations of vector addition and scalar multiplication.

Let's take an example: consider one empty bag (T) and two bags (A) and (B) that contain some numbers.

Now, if I add the contents of A and B and put the result in T:

A + B = T

Or if I first put the contents of A into T and then add the contents of B:

A + B = T

AREN'T THEY THE SAME?
Formally, a function T: V→W is a linear transformation if, for any vectors u and v in the
domain V and any scalar c, the following two properties hold:

• Additivity: T(u + v) = T(u) + T(v)

• Homogeneity: T(cu) = cT(u)
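As an illustration of these two properties (not part of the original slides), here is a minimal numerical check in Python; the specific map T(x, y) = (2x, x + y) and the helper name check_linearity are assumptions made for the example.

import numpy as np

def T(v):
    # Candidate map T(x, y) = (2x, x + y), written as a matrix-vector product.
    A = np.array([[2.0, 0.0],
                  [1.0, 1.0]])
    return A @ v

def check_linearity(T, u, v, c):
    # Additivity: T(u + v) should equal T(u) + T(v).
    additive = np.allclose(T(u + v), T(u) + T(v))
    # Homogeneity: T(c*u) should equal c*T(u).
    homogeneous = np.allclose(T(c * u), c * T(u))
    return additive and homogeneous

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
print(check_linearity(T, u, v, c=4.0))  # True for this T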
HOW TO IDENTIFY A LINEAR TRANSFORMATION?

LET'S TAKE AN EXAMPLE:
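The worked example on the original slide did not survive the text export; the following is a representative check (the specific map T(x, y) = (2x, x + y) is an assumption made for illustration):

T(x, y) = (2x,\; x + y)

T\big((x_1, y_1) + (x_2, y_2)\big) = \big(2(x_1 + x_2),\; (x_1 + x_2) + (y_1 + y_2)\big) = T(x_1, y_1) + T(x_2, y_2)

T\big(c(x, y)\big) = (2cx,\; cx + cy) = c\,T(x, y)

Both additivity and homogeneity hold, so T is a linear transformation.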
LET'S TAKE ANOTHER EXAMPLE:
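As with the previous slide, the original example is not in the extracted text; here is a representative non-example (the map S(x, y) = (x + 1, y) is an assumption made for illustration):

S(x, y) = (x + 1,\; y)

S\big((x_1, y_1) + (x_2, y_2)\big) = (x_1 + x_2 + 1,\; y_1 + y_2), \quad\text{but}\quad S(x_1, y_1) + S(x_2, y_2) = (x_1 + x_2 + 2,\; y_1 + y_2)

Additivity fails (and S(0, 0) ≠ (0, 0)), so S is not a linear transformation.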
MATRIX REPRESENTATION

Let T: V→W be a linear transformation from vector space V to vector space W.
Let v be a vector in V and w be its image under T in W. The linear transformation T can be
represented by a matrix A such that:

T(v) = A·v

Here:
• T(v) is the image of v under T.
• A is the matrix representation of T.
• v is the column vector representing the input to the transformation.
Constructing the matrix A:
1. Basis vectors:
• Choose a basis for V, say {v1, v2, v3, ..., vn}.
• For each basis vector vi, determine its image under T, i.e. T(vi), and express it
as a linear combination of the basis vectors of W, say {w1, w2, w3, ..., wm}.
2. Matrix entries:
• The i-th column of A is formed by stacking the coordinates of T(vi) with respect to the basis
{w1, w2, w3, ..., wm}, as shown in the sketch below.
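A minimal sketch of this construction in Python, assuming V = W = R² with the standard bases and the example map T(x, y) = (2x, x + y) used earlier (all of these choices are assumptions, not part of the original slides):

import numpy as np

def T(v):
    # Example linear map T(x, y) = (2x, x + y).
    x, y = v
    return np.array([2.0 * x, x + y])

# The i-th column of A is the image of the i-th standard basis vector.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
A = np.column_stack([T(e1), T(e2)])

v = np.array([3.0, -1.0])
print(np.allclose(T(v), A @ v))  # True: T(v) = A·v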
Some common transformations are:
1. Translation:
• Translation involves shifting a point by a certain amount in a specific direction.
• Consider a point (x, y) to be translated by (a, b); then the transformation
matrix would be:
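The matrix itself did not survive the export; the standard form, assuming homogeneous coordinates where the point is written as (x, y, 1), is

\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} =
\begin{pmatrix} 1 & 0 & a \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}

(Strictly speaking, translation is an affine rather than a linear map, which is why the homogeneous-coordinate form is needed.)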

2. Scaling:
• Scaling involves enlarging or reducing an object by a certain factor.
• If you want to scale an object by a factor sx in the x-direction and sy in the
y-direction, then the transformation matrix would be:
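The scaling matrix is likewise missing from the export; the standard 2D form is

\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}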
3. Rotation:
• Rotation involves rotating an object by a certain angle about a specific axis.
• If you want to rotate an object by a certain angle x, the transformation matrix would
be:
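The rotation matrix is also missing from the export; the standard form for a counterclockwise rotation about the origin (writing the angle as θ to avoid clashing with the coordinate x) is

\begin{pmatrix} x' \\ y' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}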
Kernel and image of linear transformations:
Let T: V→W be a linear transformation from vector space V to vector space W.
1. Kernel:
• The kernel of a linear transformation, also known as the null space, is the set of all
vectors in the domain that map to the zero vector in the codomain under the linear
transformation.
• It consists of all v ∈ V such that T(v) = 0:
ker(T) = {v ∈ V : T(v) = 0}
2. Image:
• The image of a linear transformation, also known as the range, is the set of all possible
output vectors obtained by applying the linear transformation to vectors from the
domain.
• It consists of all vectors in W that are equal to T(v) for some v ∈ V:
im(T) = {T(v) : v ∈ V}
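A minimal numerical sketch of both sets (not from the original slides), assuming SciPy is available; scipy.linalg.null_space returns an orthonormal basis of the kernel, and the rank of A gives the dimension of the image:

import numpy as np
from scipy.linalg import null_space

# A maps R^3 -> R^2; here its kernel is a line in R^3 and its image is all of R^2.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

kernel_basis = null_space(A)          # columns span ker(T)
image_dim = np.linalg.matrix_rank(A)  # dim im(T)

print(kernel_basis.shape[1])  # 1 -> ker(T) is one-dimensional
print(image_dim)              # 2 -> im(T) = R^2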
INVERSE LINEAR TRANSFORMATION
DEFINITION:-
The inverse of a linear transformation is a mapping that undoes the effects of the original
linear transformation. More formally, let T: V→W be a linear transformation between
vector spaces V and W. The inverse of T, denoted as T^−1:W→V, is another linear
transformation such that the composition of T and T^−1 in either order results in the
identity transformation.

The inverse transformation T^−1 of a linear transformation T: V→W is a mapping from the
codomain W back to the domain V.
The conditions for T^−1 to be the inverse of T are:

• Existence: T^−1 exists if and only if T is a bijective (one-to-one and onto) linear transformation.
• Composition Identity: For every vector v in V and w in W, the following equalities hold:
⚬ T^−1(T(v))=v (applying T^−1 after T yields the original vector).
⚬ T(T^−1(w))=w (applying T after T^−1 yields the original vector).
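A minimal sketch of these identities in Python (the specific invertible matrix A representing T is an assumption made for the example):

import numpy as np

# T is represented by an invertible matrix A, so T^-1 is represented by A^-1.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
A_inv = np.linalg.inv(A)

v = np.array([3.0, -2.0])
w = A @ v

print(np.allclose(A_inv @ w, v))        # T^-1(T(v)) = v
print(np.allclose(A @ (A_inv @ w), w))  # T(T^-1(w)) = w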
APPLICATION:
• Computer Graphics:
⚬ Linear transformations play a crucial role in computer graphics for rendering realistic scenes.
⚬ They are used for transformations like translation, scaling, and rotation to position and orient
objects in 3D space.
• Image Processing:
⚬ Filters and Enhancements: Linear transformations, such as convolution, are used in image
processing to apply filters for tasks like blurring, sharpening, and edge detection. These
operations help enhance or modify digital images.
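A minimal sketch of such a filter (not from the original slides), assuming SciPy is available; scipy.signal.convolve2d applies a 3x3 sharpening kernel to a grayscale image array:

import numpy as np
from scipy.signal import convolve2d

# A small grayscale "image" (random values stand in for pixel intensities).
image = np.random.rand(64, 64)

# A common 3x3 sharpening kernel: boosts the centre pixel relative to its neighbours.
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)

sharpened = convolve2d(image, kernel, mode="same", boundary="symm")
print(sharpened.shape)  # (64, 64)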
APPLICATION:
• Physics and Engineering:
⚬ Coordinate Transformations: Linear transformations are used to convert coordinates from one
reference frame to another in physics and engineering. This is crucial when dealing with
different coordinate systems.
• Machine Learning:
⚬ Feature Engineering: Linear transformations are often applied to features in machine learning to
create new features or transform existing ones. Principal Component Analysis (PCA) is an
example of a linear transformation used for dimensionality reduction.
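A minimal sketch of PCA viewed as a linear transformation (not part of the original slides), assuming scikit-learn is available; the projection onto the principal components is exactly a matrix multiplication applied to the centred data:

import numpy as np
from sklearn.decomposition import PCA

# 100 samples of 5-dimensional data, reduced to 2 dimensions.
X = np.random.rand(100, 5)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

# The same reduction written explicitly as a linear map on the centred data.
X_manual = (X - pca.mean_) @ pca.components_.T
print(np.allclose(X_reduced, X_manual))  # True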
CONCLUSION:
• A function T:V→W is a linear transformation if it satisfies two fundamental properties:
Additivity: T(u+v)=T(u)+T(v) for any vectors u,v in the domain V.
Homogeneity: T(cu)=cT(u) for any scalar c and any vector u in V.
• If a function fails to satisfy either additivity or homogeneity for some vectors or scalars, it is not a
linear transformation.
• Checking these properties is essential when determining whether a given function is a linear
transformation, and it is a fundamental concept in linear algebra.
• Applicable in diverse fields, from computer graphics to machine learning.
• Essential for systematic mathematical problem-solving.
• Crucial for a broad range of mathematical applications.
THANKS
FOR
WATCHING
