# Linear Transformation Matrix and Invertibility

Okay, so you know what a matrix is, but what is the matrix of a linear map? In this article, I cover linear transformations and their invertibility. I work through several examples that I hope you enjoy.

## The Matrix of a Linear Map

Let $V$ and $W$ be finite-dimensional linear spaces.

Definition. (The Matrix of a Linear Map) Let $T\in \mathcal{L}(V,W)$, let $b_1=(v_1,\ldots,v_n)$ be a basis for $V$, and let $b_2=(w_1,\ldots,w_m)$ be a basis for $W.$ Then the matrix of $T$ with respect to the bases $b_1$ and $b_2$ is $$\begin{bmatrix} a_{1 1} & \cdots & a_{1 n} \\ \vdots & \ddots & \vdots \\ a_{m 1} & \cdots & a_{m n} \end{bmatrix}$$ where the $a_{i j}\in \mathbb{F}$ are determined by $T v_k=a_{1 k}w_1+\cdots +a_{m k}w_m$ for each $k=1,\ldots,n.$

Example. Consider the linear transformation $T(f)=f'+f''$ from $\mathcal{P}_2$ to $\mathcal{P}_2.$ Since $\mathcal{P}_2$ is isomorphic to $\mathbb{R}^3$, the transformation $T$ can be represented by a $3\times 3$ matrix $B.$ How do we find this matrix $B$?

Solution. Let $f(x)=a+b x+c x^2.$ Then we write $T$ as \begin{align} T(a+b x+c x^2)& =(a+b x+c x^2)'+(a+b x+c x^2)'' \\ & =(b+2c x)+2c=(b+2c)+2cx. \end{align} Next, let's write the input $f(x)=a+b x+c x^2$ and the output $$T(f(x))=(b+2c)+2c x$$ in coordinates with respect to the standard basis $\mathcal{B}=(1, x, x^2)$ of $\mathcal{P}_2.$ Written in $\mathcal{B}$-coordinates, the transformation $T$ takes $[f(x)]_\mathcal{B}$ to $$[T(f(x))]_\mathcal{B}=\begin{bmatrix} b+2c\\ 2c\\ 0\end{bmatrix} = \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix} [f(x)]_\mathcal{B}.$$ The matrix $B= \begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{bmatrix}$ is called the $\mathcal{B}$-matrix of the transformation $T.$
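As a quick sanity check, here is a short Python sketch of my own, representing $a+bx+cx^2$ by its coefficient list `[a, b, c]`, that rebuilds $B$ column-by-column:

```python
# Sketch: compute the B-matrix of T(f) = f' + f'' on P_2, representing
# a + b x + c x^2 by the coefficient list [a, b, c] in the basis (1, x, x^2).

def T(coeffs):
    a, b, c = coeffs
    # f' = b + 2c x and f'' = 2c, so T(f) = (b + 2c) + 2c x
    return [b + 2 * c, 2 * c, 0]

# Build B column-by-column: column k of B is [T(v_k)]_B for basis vector v_k.
basis = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # coordinates of 1, x, x^2
columns = [T(e) for e in basis]
B = [[columns[j][i] for j in range(3)] for i in range(3)]
print(B)  # [[0, 1, 2], [0, 0, 2], [0, 0, 0]]
```

Each column of `B` is $T$ applied to a basis vector, exactly as in the definition of the matrix of a linear map.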

Example. Find the $\mathcal{B}$-matrix for the linear transformation given by $$T(M)=\begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}M-M\begin{bmatrix} 5 & 0 \\ 0 & -1 \end{bmatrix}$$ from $\mathbb{R}^{2\times 2}$ to $\mathbb{R}^{2\times 2}.$ Determine whether $T$ is an isomorphism, and if it is not, find the kernel, image, nullity, and rank of $T.$

Solution. We will use the standard basis of $\mathbb{R}^{2 \times 2}$: $$\mathcal{B}= \left ( \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right )$$ and we will construct the $\mathcal{B}$-matrix column-by-column: $$\begin{array}{rl} B&= \begin{bmatrix} \left [ T\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \right ]_\mathcal{B} & \left [T\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\right ]_\mathcal{B} & \left [ T\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\right ]_\mathcal{B} & \left [ T\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right ]_\mathcal{B} \end{bmatrix} \\ &= \begin{bmatrix} \begin{bmatrix} -4 & 0 \\ 4 & 0 \end{bmatrix}_\mathcal{B} & \begin{bmatrix} 0 & 2 \\ 0 & 4 \end{bmatrix}_\mathcal{B} & \begin{bmatrix} 2 & 0 \\ -2 & 0 \end{bmatrix}_\mathcal{B} & \begin{bmatrix} 0 & 2 \\ 0 & 4 \end{bmatrix}_\mathcal{B} \end{bmatrix} \\ & = \begin{bmatrix} -4 & 0 & 2 & 0 \\ 0 & 2 & 0 & 2 \\ 4 & 0 & -2 & 0 \\ 0 & 4 & 0 & 4 \end{bmatrix}\end{array}$$ with $$\text{rref}(B)= \begin{bmatrix} 1 & 0 & -\frac{1}{2} & 0\\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$ After eliminating redundant columns from $B$ we find that a basis of $\operatorname{im} T$ is $$\left(\, \begin{bmatrix} -4 \\ 0 \\ 4\\ 0\end{bmatrix} ,\begin{bmatrix} 0 \\ 2 \\ 0\\ 4\end{bmatrix} \, \right).$$ To find $\ker T$ we solve $B x=0$; using $\text{rref}(B)$ we find $$\left(\, \begin{bmatrix} \frac{1}{2} \\ 0 \\ 1\\ 0\end{bmatrix}, \begin{bmatrix} 0\\ -1\\ 0\\ 1\end{bmatrix} \, \right)$$ to be a basis for $\ker T.$ Thus the rank of $T$ is $2$ and the nullity of $T$ is $2.$ Since $\ker T\neq \{0\}$, $T$ is not an isomorphism.
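The column computations can be verified numerically. The Python sketch below (an illustration of mine) uses the matrices $\begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}$ and $\begin{bmatrix} 5 & 0 \\ 0 & -1 \end{bmatrix}$, which reproduce the four columns listed above:

```python
# Check the column-by-column construction for T(M) = A M - M D on 2x2
# matrices, with A = [[1, 2], [4, 3]] and D = [[5, 0], [0, -1]].
# M is stored as a 2x2 nested list.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

A = [[1, 2], [4, 3]]
D = [[5, 0], [0, -1]]

def T(M):
    return matsub(matmul(A, M), matmul(M, D))

# Standard basis E11, E12, E21, E22 of R^{2x2}; flattening T(E) row-by-row
# gives its B-coordinates, so column k of B is [T(E_k)]_B.
basis = [[[1, 0], [0, 0]], [[0, 1], [0, 0]],
         [[0, 0], [1, 0]], [[0, 0], [0, 1]]]
cols = [[entry for row in T(E) for entry in row] for E in basis]
B = [[cols[j][i] for j in range(4)] for i in range(4)]
print(B)  # [[-4, 0, 2, 0], [0, 2, 0, 2], [4, 0, -2, 0], [0, 4, 0, 4]]

# Kernel check: B x = 0 for x = (1/2, 0, 1, 0) and x = (0, -1, 0, 1).
for x in ([0.5, 0, 1, 0], [0, -1, 0, 1]):
    assert all(sum(B[i][j] * x[j] for j in range(4)) == 0 for i in range(4))
```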

Here is another example.

Example. Find the matrix of the linear transformation $$T(f(t))=f(3)$$ from $\mathcal{P}_2$ to $\mathcal{P}_2$ with respect to the basis $\mathcal{B}=(1,t-3,(t-3)^2).$ Determine whether the transformation is an isomorphism; if it isn't an isomorphism, determine the kernel and image of $T$, and also determine the nullity and rank of $T.$

Solution. The matrix of $T$ is \begin{align} B & =\begin{bmatrix} [T(1)]_\mathcal{B} & [T(t-3)]_\mathcal{B} & [T((t-3)^2)]_\mathcal{B} \end{bmatrix} \\ & = \begin{bmatrix} [1]_\mathcal{B} & [0]_\mathcal{B} & [0]_\mathcal{B} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \end{align} Notice the vectors $\begin{bmatrix} 0\\ 1\\ 0\end{bmatrix}, \begin{bmatrix} 0 \\ 0\\ 1\end{bmatrix}$ form a basis of the kernel of $B$ and $\begin{bmatrix} 1 \\ 0\\ 0\end{bmatrix}$ is a basis of the image of $B.$ Therefore, the rank is 1 and the nullity is 2; and therefore, $T$ is not an isomorphism.
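A minimal Python check of this matrix (my own sketch, representing each basis polynomial as a callable):

```python
# Matrix of T(f) = f(3) on P_2 in the basis B = (1, t-3, (t-3)^2).
basis = [lambda t: 1, lambda t: (t - 3), lambda t: (t - 3) ** 2]

# T(f) is the constant polynomial f(3); the constant c has B-coordinates
# (c, 0, 0) since 1 is the first basis vector.
columns = [[f(3), 0, 0] for f in basis]
B = [[columns[j][i] for j in range(3)] for i in range(3)]
print(B)  # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Only the first column survives because $t-3$ and $(t-3)^2$ both vanish at $t=3$.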

Definition. Let $b=(v_1,\ldots,v_n)$ be a basis for $V$ and let $v\in V.$ We define the matrix of $v$, denoted by $\mathcal{M}(v)$, to be the $n$-by-1 matrix $$\begin{bmatrix} b_{1}\\ \vdots \\ b_{n}\end{bmatrix}$$ determined by $v=b_1 v_1+\cdots + b_n v_n.$

Theorem. If $T\in \mathcal{L}(V,W)$, then $\mathcal{M}(Tv)=\mathcal{M}(T) \mathcal{M}(v)$ for all $v\in V.$

Proof. Let $(v_1,\ldots,v_n)$ be a basis of $V$ and $(w_1,\ldots,w_m)$ be a basis of $W.$ If $v\in V$, then there exist $b_1,\ldots,b_n\in \mathbb{F}$ such that $$v=b_1 v_1+\cdots + b_n v_n$$ so that $$\mathcal{M}(v)=\begin{bmatrix} b_1 \\ \vdots \\ b_n\end{bmatrix}.$$ For each $k$, $1\leq k \leq n$, we write $T v_k=a_{1k}w_1+ \cdots + a_{m k} w_m$, and so by definition of the matrix of the linear map $T$: $$\mathcal{M}(T)=\begin{bmatrix}a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}.$$ By linearity of $T$: $$\begin{array}{rl} Tv& =b_1 T v_1+\cdots + b_n T v_n \\ & = b_1 \left(\sum_{j=1}^m a_{j 1}w_j \right)+\cdots +b_n \left(\sum_{j=1}^m a_{j n}w_j \right) \\ & =w_1(a_{11}b_1+\cdots + a_{1n}b_n)+\cdots + w_m(a_{m1}b_1+\cdots + a_{mn}b_n). \end{array}$$ Therefore, $$\mathcal{M}(T v)= \begin{bmatrix} a_{11}b_1+\cdots + a_{1n}b_n \\ \vdots \\ a_{m1}b_1 +\cdots + a_{mn}b_n\end{bmatrix} =\mathcal{M}(T)\mathcal{M}(v),$$ where the last equality holds by definition of matrix multiplication.
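Here is a concrete instance of the theorem, using the $\mathcal{B}$-matrix of $T(f)=f'+f''$ computed in the earlier example and the polynomial $v=1+2x+3x^2$ (my own choice):

```python
# Check M(Tv) = M(T) M(v) for T(f) = f' + f'' on P_2 with v = 1 + 2x + 3x^2.
M_T = [[0, 1, 2], [0, 0, 2], [0, 0, 0]]   # M(T) in the basis (1, x, x^2)
M_v = [1, 2, 3]                           # M(v)

# Right side: the matrix-vector product M(T) M(v).
rhs = [sum(M_T[i][j] * M_v[j] for j in range(3)) for i in range(3)]

# Left side: T(v) computed directly; v' = 2 + 6x and v'' = 6, so
# T(v) = 8 + 6x, whose coordinate vector is [8, 6, 0].
lhs = [8, 6, 0]
print(lhs == rhs)  # True
```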

## Invertibility

Let $\overline{u}=(u_1,\ldots,u_p)$ be a basis of $U$, let $\overline{v}=(v_1,\ldots,v_n)$ be a basis of $V$, and let $\overline{w}=(w_1,\ldots,w_m)$ be a basis of $W.$ If $T\in \mathcal{L}(U,V)$ and $S\in \mathcal{L}(V,W)$, then $ST\in \mathcal{L}(U,W)$ and by the definition of matrix multiplication,

$$\label{matrix multiplication} \mathcal{M}(ST,\overline{u},\overline{w}) =\mathcal{M}(S,\overline{v},\overline{w}) \mathcal{M}(T,\overline{u},\overline{v}).$$

Theorem. If $\overline{u}=(u_1,\ldots,u_p)$ and $\overline{v}=(v_1,\ldots,v_n)$ are bases of $V$, then $\mathcal{M}(I,\overline{u},\overline{v})$ is invertible and $$\mathcal{M}(I,\overline{u},\overline{v})^{-1}=\mathcal{M}(I,\overline{v},\overline{u}).$$

Proof. In the identity above, replace $U$ and $W$ with $V$, replace $w_j$ with $u_j$, and replace $S$ and $T$ with $I$, getting $$I=\mathcal{M}(I,\overline{v},\overline{u})\mathcal{M}(I,\overline{u},\overline{v}).$$ Now interchange the roles of the $u$'s and $v$'s, getting $$I=\mathcal{M}(I,\overline{u},\overline{v})\mathcal{M}(I,\overline{v},\overline{u}).$$ These two equations give the desired result.

For example,

$$\mathcal{M}\left(I,\left(\begin{bmatrix} 4\\ 2\end{bmatrix}, \begin{bmatrix} 5\\ 3\end{bmatrix} \right), \left(\begin{bmatrix} 1\\ 0\end{bmatrix},\begin{bmatrix}0\\ 1\end{bmatrix}\right)\right)=\begin{bmatrix} 4 & 5 \\ 2 & 3 \end{bmatrix} .$$

The inverse of the matrix above is $\begin{bmatrix} 3/2 & -5/2 \\ -1 & 2\end{bmatrix} .$ Thus,

$$\mathcal{M} \left(I, \left( \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right), \left( \begin{bmatrix} 4 \\ 2 \end{bmatrix}, \begin{bmatrix} 5 \\ 3 \end{bmatrix} \right) \right) =\begin{bmatrix} 3/2 & -5/2 \\ -1 & 2 \end{bmatrix}.$$
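These two change-of-basis matrices can be checked with exact arithmetic. The sketch below (my own) inverts the $2\times 2$ matrix by the cofactor formula and confirms the product is the identity:

```python
from fractions import Fraction as F

A = [[F(4), F(5)], [F(2), F(3)]]   # M(I, ((4,2),(5,3)), standard basis)

# Inverse of [[a, b], [c, d]] is (1/det) [[d, -b], [-c, a]].
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
A_inv = [[A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det, A[0][0] / det]]
assert A_inv == [[F(3, 2), F(-5, 2)], [F(-1), F(2)]]

# The product A * A_inv must be the identity matrix.
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```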

Theorem. Suppose $T\in\mathcal{L}(V).$ Let $\overline{u}=(u_1,\ldots,u_n)$ and $\overline{v}=(v_1,\ldots,v_n)$ be bases of $V.$ Let $A=\mathcal{M}(I,\overline{u},\overline{v}).$ Then $$\label{change of basis} \mathcal{M}(T,\overline{u})=A^{-1}\mathcal{M}(T,\overline{v})A.$$

Proof. In \eqref{matrix multiplication}, replace $U$ and $W$ with $V$, replace $w_j$ with $v_j$, replace $T$ with $I$, and replace $S$ with $T$, getting $$\label{rchange} \mathcal{M}(T,\overline{u}, \overline{v})=\mathcal{M}(T,\overline{v})A.$$ Next, in \eqref{matrix multiplication}, replace $U$ and $W$ with $V$, replace $w_j$ with $u_j$, and replace $S$ with $I$, getting $$\label{lchange} \mathcal{M}(T,\overline{u})=A^{-1}\mathcal{M}(T,\overline{u},\overline{v}),$$ where $\mathcal{M}(I,\overline{v},\overline{u})=A^{-1}$ by the previous theorem. Substitution of \eqref{rchange} into \eqref{lchange} yields \eqref{change of basis}.
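Here is a numeric sketch of the change-of-basis formula, reusing the basis pair from the example above and an operator with $\mathcal{M}(T,\overline{v})=\begin{bmatrix} 1 & 2 \\ 3 & 4\end{bmatrix}$ (a choice of my own for illustration):

```python
# Sketch of M(T, u) = A^{-1} M(T, v) A, where v is the standard basis of
# R^2 and u = ((4,2), (5,3)), so A = M(I, u, v) = [[4, 5], [2, 3]].
from fractions import Fraction as F

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[F(4), F(5)], [F(2), F(3)]]
A_inv = [[F(3, 2), F(-5, 2)], [F(-1), F(2)]]   # inverse computed earlier
M_T_v = [[F(1), F(2)], [F(3), F(4)]]           # arbitrary illustrative T

M_T_u = matmul(A_inv, matmul(M_T_v, A))

# Consistency check: column k of M(T, u) holds the u-coordinates of T(u_k),
# so A times M(T, u) must equal M(T, v) applied to the u_k themselves.
lhs = matmul(A, M_T_u)     # T(u_k) back in standard coordinates
rhs = matmul(M_T_v, A)     # the same vectors computed directly
assert lhs == rhs
```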

Example. Prove that every linear map from $\text{Mat}(n,1,\mathbb{F})$ to $\text{Mat}(m,1,\mathbb{F})$ is given by matrix multiplication. In other words, prove that if $T$ is a linear transformation from $\text{Mat}(n,1,\mathbb{F})$ to $\text{Mat}(m,1,\mathbb{F})$, then there exists an $m$-by-$n$ matrix $A$ such that $T B=A B$ for every $B\in \text{Mat}(n,1,\mathbb{F}).$

Solution. Let $(e_1,\ldots,e_n)$ be the standard basis for $\text{Mat}(n,1,\mathbb{F}).$ Define the $m \times n$ matrix $A$ column-by-column: $$A=\begin{bmatrix} T e_1 & \cdots & T e_n \end{bmatrix}.$$ If $B\in \text{Mat}(n,1,\mathbb{F})$, there exist $b_1,\ldots,b_n\in \mathbb{F}$ such that $B=b_1 e_1+\cdots + b_n e_n$, and thus $$TB=T(b_1 e_1+\cdots + b_n e_n)=b_1 T e_1+\cdots + b_n T e_n= AB$$ as desired. Note that once the basis has been chosen, the matrix $A$ is unique, since its columns $T e_k$ are determined by $T.$
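The construction in the proof can be sketched in Python with a hypothetical linear map $T$ of my own choosing:

```python
# Build A = [T e_1 ... T e_n] from the standard basis and verify T B = A B.

def T(B):  # a hypothetical linear map from 3x1 columns to 2x1 columns
    b1, b2, b3 = B
    return [b1 + 2 * b2, 3 * b3 - b1]

e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]          # standard basis
cols = [T(ek) for ek in e]                     # columns of A
A = [[cols[j][i] for j in range(3)] for i in range(2)]

B = [2, -1, 5]                                 # an arbitrary column
AB = [sum(A[i][j] * B[j] for j in range(3)) for i in range(2)]
assert T(B) == AB   # matrix multiplication reproduces T
```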

Example. Suppose that $V$ is finite-dimensional and $S,T\in \mathcal{L}(V).$ Prove that $S T$ is invertible if and only if both $S$ and $T$ are invertible.

Solution. Suppose both $S$ and $T$ are invertible. Then both $S$ and $T$ are injective and surjective, so $ST$ is both injective and surjective, showing $ST$ is invertible. Conversely, suppose $ST$ is invertible. Since $\operatorname{ker} T\subseteq \operatorname{ker} ST =\{0\}$ because $ST$ is injective, $T$ is also injective; thus $T$ is invertible. Since $ST$ is surjective, if $w\in V$, then there exists $v\in V$ such that $(ST)v=w.$ Rewriting this as $S(Tv)=w$ shows $S$ is surjective. Thus, $S$ is also invertible.

Example. Suppose that $V$ is finite-dimensional and $S,T\in \mathcal{L}(V).$ Prove that $S T=I$ if and only if $T S=I.$

Solution. Without loss of generality, we will show $ST=I \implies TS=I.$ Suppose $ST=I.$ Since $I$ is invertible, the previous exercise implies $S$ and $T$ are both invertible. Then $ST=I \implies S^{-1}(ST)=S^{-1}I \implies T=S^{-1}.$ Therefore, $TS=S^{-1}S=I.$

Example. Suppose that $V$ is finite-dimensional and $T\in \mathcal{L}(V).$ Prove that $T$ is a scalar multiple of the identity if and only if $S T=T S$ for every $S\in \mathcal{L}(V).$

Solution. If $T$ is a scalar multiple of the identity, say $T=\alpha I$, then for all $v\in V$, $$S T v=S (\alpha v)= \alpha S v= TS v.$$ Conversely, suppose $ST=TS$ for every $S\in \mathcal{L}(V).$ Pick a basis $(v_1,\ldots,v_N)$ for $V.$ For $m=1,\ldots,N$, define linear maps $S_m\in \mathcal{L}(V)$ by

$$S_m v_n = \left\{ \begin{array}{rl} v_m & \text{ if } m=n \\ 0 & \text{ if } m\neq n. \end{array} \right.$$

Now if $v=\sum \alpha_n v_n$, then $$S_m v=S_m\sum \alpha_n v_n=\alpha_m v_m.$$ Thus the only vectors satisfying $S_m v=v$ are $v=\alpha v_m$ for some $\alpha \in \mathbb{F}.$ The condition $S_m T=T S_m$ gives $$S_m T v_m=T S_m v_m=T v_m,$$ and by the above observation $T v_m=\alpha_m v_m$ for some scalar $\alpha_m.$ Now consider another collection of linear maps $A(m,n)$ defined by $$A(m,n) v_m=v_n, \quad A(m,n)v_n=v_m, \quad A(m,n)v_k=0 \text{ when } k\neq m,n.$$ The condition $A(m,n)T v_n=T A(m,n) v_n$ gives $$A(m,n)T v_n=TA(m,n)v_n=T v_m=\alpha_m v_m$$ and $$A(m,n)T v_n=A(m,n)\alpha_n v_n=\alpha_n A(m,n)v_n=\alpha_n v_m.$$ Whence $\alpha_m v_m=\alpha_n v_m$, that is, $\alpha_m=\alpha_n$ for $m,n=1,\ldots,N$; and thus $T$ is a scalar multiple of the identity.
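A small numeric illustration of this argument in $\mathbb{R}^{2\times 2}$ (my own sketch): a scalar matrix commutes with the swap map $A(1,2)$, while a non-scalar diagonal matrix already fails to.

```python
# Scalar matrices commute with everything; a non-scalar diagonal matrix
# fails to commute with the swap map A(1,2) from the argument above.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

swap = [[0, 1], [1, 0]]          # A(1,2): exchanges the two basis vectors

T_scalar = [[3, 0], [0, 3]]      # T = 3I
assert matmul(T_scalar, swap) == matmul(swap, T_scalar)

T_diag = [[1, 0], [0, 2]]        # T v_1 = v_1, T v_2 = 2 v_2 (not scalar)
assert matmul(T_diag, swap) != matmul(swap, T_diag)
```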

Example. Prove that if $V$ is finite-dimensional with $\dim V > 1$, then the set of non-invertible operators on $V$ is not a subspace of $\mathcal{L}(V).$

Solution. Suppose $(v_1,\ldots,v_n)$ is a basis of $V$, with $n \geq 2.$ Define the linear maps $S$ and $T$ by $$S v_1=v_1, \hspace{1cm} S v_k =0, \text{ when } k\geq 2$$ $$T v_1=0, \hspace{1cm} T v_k =v_k, \text{ when } k\geq 2.$$ Since $S$ and $T$ have nontrivial null spaces, they are not invertible. However, $(S+T) v_k=v_k$, for $k=1,\ldots,n,$ so $S+T=I$, which is invertible. Thus the set of noninvertible operators on $V$ with $n\geq 2$ is not closed under addition.
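The construction can be checked directly in coordinates (a sketch with $n=2$):

```python
# S and T below are singular (each kills a basis vector), yet S + T is the
# identity, so the non-invertible operators are not closed under addition.

S = [[1, 0], [0, 0]]   # S v1 = v1, S v2 = 0
T = [[0, 0], [0, 1]]   # T v1 = 0,  T v2 = v2

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

assert det2(S) == 0 and det2(T) == 0            # both non-invertible
SpT = [[S[i][j] + T[i][j] for j in range(2)] for i in range(2)]
assert SpT == [[1, 0], [0, 1]]                  # S + T = I
```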

David Smith (Dave) has a B.S. and M.S. in Mathematics and has enjoyed teaching precalculus, calculus, linear algebra, and number theory at both the junior college and university levels for over 20 years. David is the founder and CEO of Dave4Math.