Linear Transformations
Linear transformations are mappings between two vector spaces that preserve vector addition and scalar multiplication.
A linear transformation is a mapping from $V$ to $W$, denoted by $T: V \to W$, that obeys the following rules -
- $T(u + v) = T(u) + T(v)$ (Transformation of a sum is the sum of transformations)
- $T(cv) = cT(v)$ (Transformation of a scalar product is a scalar product of the transformation)
Here $u, v \in V$ and $c$ is a scalar.
Each vector can be transformed into a new vector space by taking a linear combination of its components using a specific set of scalars. This linear transformation can be denoted as matrix multiplication too -
$$T(v) = Av$$
The matrix $A$ is used to represent the linear transformation $T$.
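As a minimal sketch (the matrix `A` below is an arbitrary example, not one from the text), the two linearity rules can be checked numerically against a matrix-multiplication implementation of $T$:

```python
# Sketch: a linear transformation on R^2 represented as matrix
# multiplication T(v) = Av. The matrix A is an arbitrary example;
# the asserts check the two linearity rules numerically.

def matvec(A, v):
    """Multiply a 2x2 matrix A by a 2-vector v."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[2, 1],
     [0, 3]]

u, v, c = [1, 2], [4, -1], 5.0

# T(u + v) == T(u) + T(v)
lhs = matvec(A, [u[0] + v[0], u[1] + v[1]])
rhs = [a + b for a, b in zip(matvec(A, u), matvec(A, v))]
assert lhs == rhs

# T(c*v) == c * T(v)
assert matvec(A, [c*v[0], c*v[1]]) == [c*x for x in matvec(A, v)]
```

Any matrix defines a linear transformation this way, which is why the two representations are interchangeable.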
Representation as matrix
Say we have a transformation
$$T\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} x + y \\ x - y \end{pmatrix}$$
and we are interested in finding its matrix representation. The columns of the matrix are the images of the basis vectors under the transformation. We can simply transform the standard basis vectors of $\mathbb{R}^2$ to get the columns of the matrix.
$$T\begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \quad \text{and} \quad T\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$
Thus -
$$A = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$
Similarly, the matrix representation can be obtained for any linear transformation.
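The column-by-column construction for $T(x, y) = (x + y, x - y)$ can be sketched directly in code — each column of the matrix is $T$ applied to a standard basis vector:

```python
# Build the matrix for T(x, y) = (x + y, x - y) by applying T to
# the standard basis vectors; T(e1) and T(e2) become the columns.

def T(v):
    x, y = v
    return [x + y, x - y]

e1, e2 = [1, 0], [0, 1]
A = [[T(e1)[0], T(e2)[0]],
     [T(e1)[1], T(e2)[1]]]
print(A)  # [[1, 1], [1, -1]]

# Check: A applied to any (x, y) agrees with T.
x, y = 3, 7
Av = [A[0][0]*x + A[0][1]*y,
      A[1][0]*x + A[1][1]*y]
assert Av == T([x, y])
```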
Transformation across dimensions
Let $V$ be a lower dimensional vector space and $W$ be a higher dimensional vector space.
Will $T: W \to V$ cover all of $V$? Will $T: V \to W$ cover all of $W$?
- $T: W \to V$ can cover all of $V$ because we are transforming a higher dimensional space into a lower dimensional space. There is enough information in the basis of $W$ to be able to capture all the directions of $V$.
- $T: V \to W$ will not cover all of $W$ because we are transforming a lower dimensional vector space into a higher dimensional one. The number of basis vectors of $V$ is not enough to capture all the directions of $W$. This transformation would instead cover a lower dimensional subspace in $W$.
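A quick sketch of the second case, using an arbitrary example map from $\mathbb{R}^2$ into $\mathbb{R}^3$: every output of $T(x, y) = (x, y, x + y)$ satisfies $z = x + y$, so the image is a 2-dimensional plane inside $\mathbb{R}^3$, not all of $\mathbb{R}^3$.

```python
# Mapping R^2 into R^3 with T(x, y) = (x, y, x + y) (an arbitrary
# example). Every image point lies on the plane z = x + y, so the
# image is a 2D subspace of R^3 rather than all of R^3.

def T(v):
    x, y = v
    return [x, y, x + y]

for v in [[1, 0], [0, 1], [3, -2], [0.5, 4.0]]:
    x, y, z = T(v)
    assert z == x + y  # every output lands on the plane z = x + y
```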
Composition of transformations
Let $T_1: U \to V$ and $T_2: V \to W$ be two linear transformations. Using these we define a composition of linear transformations as $(T_2 \circ T_1)(u) = T_2(T_1(u))$.
If $A_1$ and $A_2$ are the matrices representing $T_1$ and $T_2$, we can represent $T_2 \circ T_1$ as a matrix multiplication of $A_2$ and $A_1$ -
$$(T_2 \circ T_1)(u) = A_2 A_1 u$$
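A minimal numeric check of this, with two arbitrary example matrices `A1` and `A2`: applying the transformations one after another gives the same result as applying the product matrix once.

```python
# Composition as matrix multiplication: if A1 represents T1 and A2
# represents T2, then the product A2·A1 represents T2 ∘ T1.
# A1 and A2 are arbitrary 2x2 examples.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A1 = [[1, 2], [3, 4]]
A2 = [[0, 1], [1, 0]]   # swaps the two coordinates

v = [5, -1]
# Applying T1 then T2 equals applying the product matrix once.
assert matvec(A2, matvec(A1, v)) == matvec(matmul(A2, A1), v)
```

Note the order: $A_2 A_1$, because $T_1$ acts on the vector first.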
Invertibility
For a transformation to be invertible, the mapping needs to be bijective. If multiple inputs map to the zero vector, the mapping isn’t injective, and hence the transformation isn’t invertible (it is singular): in the reverse direction, the zero vector would have more than one pre-image.
Because the transformation is singular, we can say the matrix representing the transformation is singular too.
Why do singular matrices have det = 0?
The determinant measures the “volume” of the space formed by the vectors of a matrix. If one of these vectors is a linear combination of the others, the vectors lie in a lower-dimensional subspace, which causes a loss of dimension. The volume enclosed by vectors lying in a lower-dimensional subspace is always zero, which results in the determinant being zero.
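A small 2×2 sketch of this (both matrices are arbitrary examples): when one column is a multiple of the other, the columns span only a line, the enclosed area collapses, and the determinant is 0.

```python
# det of a 2x2 matrix: ad - bc, the signed area of the
# parallelogram spanned by the columns.

def det2(A):
    return A[0][0]*A[1][1] - A[0][1]*A[1][0]

singular = [[1, 2],
            [3, 6]]   # column 2 = 2 * column 1 -> columns span a line
assert det2(singular) == 0

nonsingular = [[1, 2],
               [3, 4]]  # independent columns -> nonzero area
assert det2(nonsingular) != 0
```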
Isometry Transformations
Any linear transformation where $\|T(v)\| = \|v\|$ for all $v$ is called an isometry transformation.
Example - Rotation and Reflection.
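A numeric sketch of the rotation case (the angle and vector below are arbitrary choices): rotating a vector leaves its length unchanged.

```python
# A rotation by angle theta preserves lengths: ||Rv|| == ||v||.
# Checks the isometry property numerically for one example vector.
import math

theta = math.pi / 6
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

v = [3.0, 4.0]
Rv = [R[0][0]*v[0] + R[0][1]*v[1],
      R[1][0]*v[0] + R[1][1]*v[1]]

norm = lambda w: math.hypot(w[0], w[1])
assert math.isclose(norm(Rv), norm(v))  # both lengths equal 5.0
```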
Column Space
The column space of a matrix is the set of all possible vectors obtained as linear combinations of its columns.
In terms of a transformation - The range of the linear transformation is the column space of the matrix used to represent the transformation.
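This can be seen directly from the multiplication: $Av$ is exactly $x \cdot (\text{column 1}) + y \cdot (\text{column 2})$, so the range of $v \mapsto Av$ is the span of the columns. A sketch with an arbitrary example matrix:

```python
# The product Av is a linear combination of A's columns, so the
# range of v -> Av equals the column space. A is an arbitrary
# 3x2 example.

A = [[1, 0],
     [2, 1],
     [0, 3]]

def matvec(A, v):
    return [row[0]*v[0] + row[1]*v[1] for row in A]

x, y = 4, -2
col1 = [row[0] for row in A]
col2 = [row[1] for row in A]
combo = [x*a + y*b for a, b in zip(col1, col2)]  # x*col1 + y*col2
assert matvec(A, [x, y]) == combo
```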
Null Space
The subspace of vectors that are the solutions to the homogeneous system of linear equations $Ax = 0$ is called the null space.
In terms of a transformation - The subspace of vectors that gets mapped to the zero vector upon performing a linear transformation is called the null space or the kernel of the linear transformation.
- Homogeneous system of equations - a system of equations where no equation has a constant term (all constant terms are 0).
- Non-homogeneous system of equations - a system of equations where at least one equation has a nonzero constant term.
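A sketch of a nontrivial null space, using an arbitrary singular example matrix: for the `A` below, every multiple of $(2, -1)$ solves $Ax = 0$, so the null space is the line spanned by that vector.

```python
# Null space sketch: A is singular (row 2 = 2 * row 1), so Ax = 0
# has nonzero solutions. Every multiple of (2, -1) is mapped to
# the zero vector, i.e. lies in the kernel of A.

A = [[1, 2],
     [2, 4]]

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

for c in [1, -3, 0.5]:
    assert matvec(A, [2*c, -1*c]) == [0, 0]
```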