
1.2 Linear Transformations and Matrices

3 min read • July 22, 2024

Linear transformations are the mathematical superheroes of vector spaces. They preserve vector addition and scalar multiplication, allowing us to map one space to another while keeping the structure intact. These transformations are the backbone of many applications in physics and engineering.

Matrix representations give us a concrete way to work with linear transformations. By expressing transformations as matrices, we can perform calculations and analyze their properties. This powerful tool connects abstract concepts to practical computations, making linear algebra accessible and applicable.

Linear Transformations

Properties of linear transformations

  • Functions between vector spaces that preserve the vector space structure
    • Let $V$ and $W$ be vector spaces over the same field $F$, and let $T: V \to W$ be a function
    • $T$ is a linear transformation if for all vectors $\vec{u}, \vec{v} \in V$ and scalars $c \in F$:
      • Satisfies the additivity property: $T(\vec{u} + \vec{v}) = T(\vec{u}) + T(\vec{v})$
      • Satisfies the homogeneity property: $T(c\vec{u}) = cT(\vec{u})$
  • Preserve vector addition and scalar multiplication operations (verified numerically in the sketch below)
  • Map the zero vector of $V$ to the zero vector of $W$: $T(\vec{0}_V) = \vec{0}_W$
  • The set of all linear transformations from $V$ to $W$ is denoted by $\mathcal{L}(V, W)$
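A quick numerical sketch of the two defining properties, assuming NumPy and an arbitrary, made-up matrix $A$ defining the map $T(\vec{x}) = A\vec{x}$ from $\mathbb{R}^3$ to $\mathbb{R}^2$:

```python
import numpy as np

# Hypothetical example: T(x) = A x for a fixed 2x3 matrix A, mapping R^3 -> R^2.
# Any such matrix map is a linear transformation.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 3.0]])

def T(x):
    """Apply the linear transformation T to a vector x in R^3."""
    return A @ x

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
c = 2.5

# Additivity: T(u + v) == T(u) + T(v)
print(np.allclose(T(u + v), T(u) + T(v)))        # True
# Homogeneity: T(c u) == c T(u)
print(np.allclose(T(c * u), c * T(u)))           # True
# The zero vector of R^3 maps to the zero vector of R^2
print(np.allclose(T(np.zeros(3)), np.zeros(2)))  # True
```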

Matrix representation of transformations

  • Linear transformations between finite-dimensional spaces can be represented using matrices
    • Let $V$ and $W$ be finite-dimensional vector spaces with ordered bases $B = \{\vec{v}_1, \ldots, \vec{v}_n\}$ and $C = \{\vec{w}_1, \ldots, \vec{w}_m\}$, respectively
    • For a linear transformation $T: V \to W$, the matrix representation of $T$ with respect to the bases $B$ and $C$ is the $m \times n$ matrix $A = [a_{ij}]$, where $a_{ij}$ is the $i$-th coordinate of $T(\vec{v}_j)$ with respect to the basis $C$
  • The matrix representation depends on the choice of bases for the domain and codomain
  • Matrix multiplication corresponds to the composition of linear transformations (see the sketch after this list)
    • If $T: U \to V$ and $S: V \to W$ are linear transformations with matrix representations $A$ and $B$, respectively, then the matrix representation of the composition $S \circ T: U \to W$ is the product $BA$
  • Matrix addition corresponds to the pointwise addition of linear transformations
  • Scalar multiplication of matrices corresponds to scalar multiplication of linear transformations
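A short sketch of composition as matrix multiplication, assuming NumPy and two made-up matrices $A$ and $B$ (representing hypothetical maps $T$ and $S$ with respect to the standard bases):

```python
import numpy as np

# Hypothetical matrices: A (3x2) represents T: R^2 -> R^3,
# B (2x3) represents S: R^3 -> R^2, both in the standard bases.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, -1.0, 0.0],
              [0.0,  2.0, 1.0]])

x = np.array([1.0, 2.0])

# Applying T and then S is the same as multiplying once by the product B A,
# which is the matrix representation of the composition S o T.
via_composition = B @ (A @ x)
via_product     = (B @ A) @ x
print(np.allclose(via_composition, via_product))  # True
```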

Kernel and range of transformations

  • The kernel (or null space) of a linear transformation $T: V \to W$ is the set of all vectors in $V$ that are mapped to the zero vector in $W$
    • $\ker(T) = \{\vec{v} \in V : T(\vec{v}) = \vec{0}_W\}$
    • The kernel is a subspace of the domain $V$
  • The range (or image) of a linear transformation $T: V \to W$ is the set of all vectors in $W$ that are the output of $T$ for some input vector in $V$
    • $\text{range}(T) = \{T(\vec{v}) : \vec{v} \in V\}$
    • The range is a subspace of the codomain $W$
  • The nullity of a linear transformation $T$ is the dimension of its kernel: $\text{nullity}(T) = \dim(\ker(T))$
  • The rank of a linear transformation $T$ is the dimension of its range: $\text{rank}(T) = \dim(\text{range}(T))$
  • The rank-nullity theorem relates the dimensions of the domain, kernel, and range of a linear transformation (see the sketch below)
    • For a linear transformation $T: V \to W$, where $V$ is finite-dimensional: $\dim(V) = \text{rank}(T) + \text{nullity}(T)$
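A numerical check of the rank-nullity theorem, assuming NumPy, SciPy, and a made-up rank-deficient matrix $A$ defining $T(\vec{x}) = A\vec{x}$:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical example: the third column of A is the sum of the first two,
# so the map T(x) = A x from R^3 to R^3 has a nontrivial kernel.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

rank    = np.linalg.matrix_rank(A)   # dim(range(T))
kernel  = null_space(A)              # columns form an orthonormal basis for ker(T)
nullity = kernel.shape[1]            # dim(ker(T))

print(rank, nullity)                 # 2 1
# Rank-nullity theorem: dim(V) = rank(T) + nullity(T)
print(A.shape[1] == rank + nullity)  # True
```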

Invertibility of linear transformations

  • A linear transformation $T: V \to W$ is invertible (or bijective) if it is both injective (one-to-one) and surjective (onto)
    • Injective: For all $\vec{u}, \vec{v} \in V$, if $T(\vec{u}) = T(\vec{v})$, then $\vec{u} = \vec{v}$
    • Surjective: For every $\vec{w} \in W$, there exists a $\vec{v} \in V$ such that $T(\vec{v}) = \vec{w}$
  • If $T$ is invertible, there exists a unique linear transformation $T^{-1}: W \to V$ called the inverse of $T$, such that:
    • $T^{-1} \circ T = I_V$ (the identity transformation on $V$)
    • $T \circ T^{-1} = I_W$ (the identity transformation on $W$)
  • When $V$ and $W$ are finite-dimensional with $\dim(V) = \dim(W)$, a linear transformation $T: V \to W$ is invertible if and only if its kernel is trivial (contains only the zero vector)
    • Equivalently, $T$ is invertible if and only if $\text{nullity}(T) = 0$
  • For square matrices (linear transformations from a vector space to itself), a matrix is invertible if and only if its determinant is nonzero
  • The inverse of a matrix $A$, denoted by $A^{-1}$, can be found using various methods (a numerical check follows this list)
    • Gaussian elimination
    • Cramer's rule
    • Adjugate matrix formula
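A minimal sketch of the determinant test and the defining property of the inverse, assuming NumPy and an arbitrary, made-up $2 \times 2$ matrix $A$:

```python
import numpy as np

# Hypothetical 2x2 matrix; an arbitrary choice, not taken from the text.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

# A square matrix is invertible if and only if its determinant is nonzero.
print(np.linalg.det(A))                   # 1.0, so A is invertible

A_inv = np.linalg.inv(A)                  # numerical inverse (LU-based solve)

# The inverse satisfies A^{-1} A = I and A A^{-1} = I.
print(np.allclose(A_inv @ A, np.eye(2)))  # True
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```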
© 2024 Fiveable Inc. All rights reserved.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website.