If you find any mistakes, please make a comment! Thank you.

Solution to Linear Algebra Hoffman & Kunze Chapter 3.4


Exercise 3.4.1

Let $T$ be the linear operator on $\mathbb C^2$ defined by $T(x_1,x_2)=(x_1,0)$. Let $\mathcal B$ be the standard ordered basis for $\mathbb C^2$ and let $\mathcal B'=\{\alpha_1,\alpha_2\}$ be the ordered basis defined by $\alpha_1=(1,i)$, $\alpha_2=(-i,2)$.

(a) What is the matrix of $T$ relative to the pair $\mathcal B$, $\mathcal B'$?
(b) What is the matrix of $T$ relative to the pair $\mathcal B'$, $\mathcal B$?
(c) What is the matrix of $T$ in the ordered basis $\mathcal B'$?
(d) What is the matrix of $T$ in the ordered basis $\{\alpha_2,\alpha_1\}$?

Solution:

(a) According to the comments at the bottom of page 87, the $i$-th column of the matrix is given by $[T\epsilon_i]_{\mathcal B'}$, where $\epsilon_1=(1,0)$ and $\epsilon_2=(0,1)$, the standard basis vectors of $\mathbb C^2$. Now $T\epsilon_1=(1,0)$ and $T\epsilon_2=(0,0)$. To write these in terms of $\alpha_1$ and $\alpha_2$ we use the approach of row-reducing the augmented matrix
$$\left[\begin{array}{cc|cc}1&-i&1&0\\i&2&0&0\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&-i&1&0\\0&1&-i&0\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&0&2&0\\0&1&-i&0\end{array}\right].$$ Thus $T\epsilon_1=2\alpha_1-i\alpha_2$ and $T\epsilon_2=0\cdot\alpha_1+0\cdot\alpha_2$ and the matrix of $T$ relative to $\mathcal B$, $\mathcal B'$ is
$$\left[\begin{array}{cc}2&0\\-i&0\end{array}\right].$$(b) In this case we have to write $T\alpha_1$ and $T\alpha_2$ as linear combinations of $\epsilon_1,\epsilon_2$.
$$T\alpha_1=(1,0)=1\cdot\epsilon_1+0\cdot\epsilon_2$$ $$T\alpha_2=(-i,0)=-i\cdot\epsilon_1+0\cdot\epsilon_2.$$ Thus the matrix of $T$ relative to $\mathcal B'$, $\mathcal B$ is $$\left[\begin{array}{cc}1&-i\\0&0\end{array}\right].$$(c) In this case we need to write $T\alpha_1$ and $T\alpha_2$ as linear combinations of $\alpha_1$ and $\alpha_2$. $T\alpha_1=(1,0)$, $T\alpha_2=(-i,0)$. We row-reduce the augmented matrix:
$$\left[\begin{array}{cc|cc}1&-i&1&-i\\i&2&0&0\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&-i&1&-i\\0&1&-i&-1\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&0&2&-2i\\0&1&-i&-1\end{array}\right].$$
Thus the matrix of $T$ in the ordered basis $\mathcal B'$ is
$$\left[\begin{array}{cc}2&-2i\\-i&-1\end{array}\right].$$(d) Here we need to write $T\alpha_2$ and $T\alpha_1$ as linear combinations of $\alpha_2$ and $\alpha_1$. The matrix we need to row-reduce is the same as in (c) but with the columns switched:
$$\left[\begin{array}{cc|cc}-i&1&-i&1\\2&i&0&0\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&i&1&i\\2&i&0&0\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&i&1&i\\0&-i&-2&-2i\end{array}\right]$$ $$\rightarrow\left[\begin{array}{cc|cc}1&i&1&i\\0&1&-2i&2\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&0&-1&-i\\0&1&-2i&2\end{array}\right]$$Thus the matrix of $T$ in the ordered basis $\{\alpha_2,\alpha_1\}$ is
$$\left[\begin{array}{cc}-1&-i\\-2i&2\end{array}\right].$$
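These four matrices can be cross-checked numerically. The sketch below (assuming numpy is available) builds each one from the standard matrix of $T$ and the change-of-basis matrix $P$ whose columns are $\alpha_1,\alpha_2$:

```python
import numpy as np

# Standard matrix of T(x1, x2) = (x1, 0) on C^2
T = np.array([[1, 0], [0, 0]], dtype=complex)
P = np.array([[1, -1j], [1j, 2]])    # columns are alpha_1 = (1, i), alpha_2 = (-i, 2)
Pinv = np.linalg.inv(P)

M_a = Pinv @ T                       # (a) matrix relative to B, B'
M_b = T @ P                          # (b) matrix relative to B', B
M_c = Pinv @ T @ P                   # (c) matrix in the basis B'
Q = P[:, ::-1]                       # columns reordered: alpha_2, alpha_1
M_d = np.linalg.inv(Q) @ T @ Q       # (d) matrix in {alpha_2, alpha_1}
```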


Exercise 3.4.2

Let $T$ be the linear transformation from $\Bbb R^3$ to $\Bbb R^2$ defined by $$T(x_1,x_2,x_3)=(x_1+x_2,2x_3-x_1).$$(a) If $\mathcal B$ is the standard ordered basis for $\Bbb R^3$ and $\mathcal B'$ is the standard ordered basis for $\Bbb R^2$, what is the matrix of $T$ relative to the pair $\mathcal B$, $\mathcal B'$?
(b) If $\mathcal B=\{\alpha_1,\alpha_2,\alpha_3\}$ and $\mathcal B'=\{\beta_1,\beta_2\}$, where $$\alpha_1=(1,0,-1),\quad\alpha_2=(1,1,1),\quad\alpha_3=(1,0,0),\quad\beta_1=(0,1),\quad\beta_2=(1,0)$$ what is the matrix of $T$ relative to the pair $\mathcal B$, $\mathcal B'$?

Solution:

(a) With respect to the standard bases, the matrix is simply
$$\left[\begin{array}{ccc}1&1&0\\-1&0&2\end{array}\right].$$(b) We must write $T\alpha_1,T\alpha_2,T\alpha_3$ in terms of $\beta_1,\beta_2$. $$T\alpha_1=(1,-3)$$ $$T\alpha_2=(2,1)$$ $$T\alpha_3=(1,-1).$$ We row-reduce the augmented matrix
$$\left[\begin{array}{cc|ccc}0&1&1&2&1\\1&0&-3&1&-1\end{array}\right]
\rightarrow\left[\begin{array}{cc|ccc}1&0&-3&1&-1\\0&1&1&2&1\end{array}\right].$$ Thus the matrix of $T$ with respect to $\mathcal B$, $\mathcal B'$ is
$$\left[\begin{array}{ccc}-3&1&-1\\1&2&1\end{array}\right].$$
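As a numeric cross-check (a sketch assuming numpy), the matrix relative to $\mathcal B$, $\mathcal B'$ can be recomputed directly from the definition of $T$ and the two bases; note that $T\alpha_3=T(1,0,0)=(1,-1)$:

```python
import numpy as np

A_std = np.array([[1, 1, 0], [-1, 0, 2]], dtype=float)         # T in the standard bases
alphas = np.column_stack([(1, 0, -1), (1, 1, 1), (1, 0, 0)])   # columns alpha_1..alpha_3
betas = np.column_stack([(0, 1), (1, 0)])                      # columns beta_1, beta_2
M = np.linalg.inv(betas) @ A_std @ alphas                      # columns are [T alpha_i]_{B'}
```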


Exercise 3.4.3

Let $T$ be a linear operator on $F^n$, let $A$ be the matrix of $T$ in the standard ordered basis for $F^n$, and let $W$ be the subspace of $F^n$ spanned by the column vectors of $A$. What does $W$ have to do with $T$?

Solution: Since $\{\epsilon_1,\dots,\epsilon_n\}$ is a basis of $F^n$, we know $\{T\epsilon_1,\dots,T\epsilon_n\}$ generates the range of $T$. But $T\epsilon_i$ equals the $i$-th column vector of $A$. Thus the column vectors of $A$ generate the range of $T$ (where we identify $F^n$ with $F^{n\times1}$); in other words, $W$ is exactly the range of $T$. We can also conclude that some subset of the columns of $A$ gives a basis for the range of $T$.


Exercise 3.4.4

Let $V$ be a two-dimensional vector space over the field $F$, and let $\mathcal B$ be an ordered basis for $V$. If $T$ is a linear operator on $V$ and
$$[T]_{\mathcal B}=\left[\begin{array}{cc}a&b\\c&d\end{array}\right]$$ prove that $T^2-(a+d)T+(ad-bc)I=0.$

Solution: The coordinate matrix of $T^2-(a+d)T+(ad-bc)I$ with respect to $\mathcal B$ is $$[T^2-(a+d)T+(ad-bc)I]_{\mathcal B}=\ \ \left[\begin{array}{cc}a&b\\c&d\end{array}\right]^2-(a+d)\left[\begin{array}{cc}a&b\\c&d\end{array}\right]+(ad-bc)\left[\begin{array}{cc}1&0\\0&1\end{array}\right]\ $$
Expanding gives
$$=\left[\begin{array}{cc}a^2+bc&ab+bd\\ac+cd&bc+d^2\end{array}\right]-\left[\begin{array}{cc}a^2+ad&ab+bd\\ac+cd&ad+d^2\end{array}\right]+\left[\begin{array}{cc}ad-bc&0\\0&ad-bc\end{array}\right]$$
$$=\left[\begin{array}{cc}0&0\\0&0\end{array}\right].$$
Thus $T^2-(a+d)T+(ad-bc)I$ is represented by the zero matrix with respect to $\mathcal B$. Thus $T^2-(a+d)T+(ad-bc)I=0$.
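The identity can also be spot-checked numerically for a random $2\times2$ matrix standing in for $[T]_{\mathcal B}$ (a sketch assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))            # stands in for [T]_B
a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
# A^2 - (a+d)A + (ad-bc)I should be the zero matrix
residual = A @ A - (a + d) * A + (a * d - b * c) * np.eye(2)
```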


Exercise 3.4.5

Let $T$ be the linear operator on $\mathbb R^3$, the matrix of which in the standard ordered basis is
$$\left[\begin{array}{ccc}1&2&1\\0&1&1\\-1&3&4\end{array}\right].$$ Find a basis for the range of $T$ and a basis for the null space of $T$.

Solution: The range is the column-space, which is the row-space of the following matrix (the transpose):
$$\left[\begin{array}{ccc}1&0&-1\\2&1&3\\1&1&4\end{array}\right]$$a basis of which we can easily determine by putting it in row-reduced echelon form:
$$\left[\begin{array}{ccc}1&0&-1\\2&1&3\\1&1&4\end{array}\right]
\rightarrow\left[\begin{array}{ccc}1&0&-1\\0&1&5\\0&1&5\end{array}\right]
\rightarrow\left[\begin{array}{ccc}1&0&-1\\0&1&5\\0&0&0\end{array}\right].$$So a basis of the range is $\{(1,0,-1), (0,1,5)\}$.

The null space can be found by row-reducing the matrix
$$\left[\begin{array}{ccc}1&2&1\\0&1&1\\-1&3&4\end{array}\right]
\rightarrow\left[\begin{array}{ccc}1&2&1\\0&1&1\\0&5&5\end{array}\right]
\rightarrow\left[\begin{array}{ccc}1&0&-1\\0&1&1\\0&0&0\end{array}\right]$$So
$$\left\{\begin{array}{c}x-z=0\\y+z=0\end{array}\right.$$which implies
$$\left\{\begin{array}{c}x=z\\y=-z\end{array}\right.$$The solutions are parameterized by the one variable $z$, thus the null space has dimension equal to one. A basis is obtained by setting $z=1$. Thus $\{(1,-1,1)\}$ is a basis for the null space.
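A quick numeric check of both bases (a sketch assuming numpy): the claimed null vector should be killed by the matrix, and appending the claimed range vectors to the columns of the matrix should not increase the rank:

```python
import numpy as np

A = np.array([[1, 2, 1], [0, 1, 1], [-1, 3, 4]], dtype=float)
null_vec = np.array([1, -1, 1], dtype=float)
range_basis = np.array([[1, 0, -1], [0, 1, 5]], dtype=float)   # claimed basis, as rows

rank_A = np.linalg.matrix_rank(A)
# rows of A.T are the columns of A; stacking the claimed range basis
# on top of them should leave the rank unchanged
rank_aug = np.linalg.matrix_rank(np.vstack([A.T, range_basis]))
```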


Exercise 3.4.6

Let $T$ be the linear operator on $\mathbb R^2$ defined by
$$T(x_1,x_2)=(-x_2,x_1).$$(a) What is the matrix of $T$ in the standard ordered basis for $\mathbb R^2$?
(b) What is the matrix of $T$ in the ordered basis $\mathcal B=\{\alpha_1,\alpha_2\}$, where $\alpha_1=(1,2)$ and $\alpha_2=(1,-1)$?
(c) Prove that for every real number $c$ the operator $(T-cI)$ is invertible.
(d) Prove that if $\mathcal B$ is any ordered basis for $\mathbb R^2$ and $[T]_{\mathcal B}=A$, then $A_{12}A_{21}\not=0$.

Solution:

(a) We must write $T\epsilon_1=(0,1)$ and $T\epsilon_2=(-1,0)$ in terms of $\epsilon_1$ and $\epsilon_2$. Clearly $T\epsilon_1=\epsilon_2$ and $T\epsilon_2=-\epsilon_1$. Thus the matrix is
$$\left[\begin{array}{cc}0&-1\\1&0\end{array}\right].$$(b) We must write $T\alpha_1=(-2,1)$ and $T\alpha_2=(1,1)$ in terms of $\alpha_1,\alpha_2$. We can do this by row-reducing the augmented matrix
$$\left[\begin{array}{cc|cc}1&1&-2&1\\2&-1&1&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&1&-2&1\\0&-3&5&-1\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&1&-2&1\\0&1&-5/3&1/3\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&0&-1/3&2/3\\0&1&-5/3&1/3\end{array}\right]$$Thus the matrix of $T$ in the ordered basis $\mathcal B$ is
$$[T]_{\mathcal B}=\left[\begin{array}{cc}-1/3&2/3\\-5/3&1/3\end{array}\right].$$(c) The matrix of $T-cI$ with respect to the standard basis is
$$\left[\begin{array}{cc}0&-1\\1&0\end{array}\right]-c\left[\begin{array}{cc}1&0\\0&1\end{array}\right]=\left[\begin{array}{cc}0&-1\\1&0\end{array}\right]-\left[\begin{array}{cc}c&0\\0&c\end{array}\right]=\left[\begin{array}{cc}-c&-1\\1&-c\end{array}\right].$$Row-reducing the matrix
$$\left[\begin{array}{cc}-c&-1\\1&-c\end{array}\right]
\rightarrow\left[\begin{array}{cc}1&-c\\-c&-1\end{array}\right]
\rightarrow\left[\begin{array}{cc}1&-c\\0&-1-c^2\end{array}\right].$$Now $-1-c^2\not=0$ (since $c^2\geq0$). Thus we can continue row-reducing by dividing the second row by $-1-c^2$ to get
$$\rightarrow\left[\begin{array}{cc}1&-c\\0&1\end{array}\right]
\rightarrow\left[\begin{array}{cc}1&0\\0&1\end{array}\right].$$Thus the matrix has rank two, so $T-cI$ is invertible.
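Equivalently, $\det(A-cI)=c^2+1>0$ for every real $c$, which can be sampled numerically (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])    # [T] in the standard basis
cs = [-2.0, 0.0, 0.5, 3.0]                 # a few sample real values of c
# each determinant equals c**2 + 1, which is never zero for real c
dets = [np.linalg.det(A - c * np.eye(2)) for c in cs]
```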

(d) Let $\{\alpha_1,\alpha_2\}$ be any basis. Write $\alpha_1=(a,b)$, $\alpha_2=(c,d)$. Then $T\alpha_1=(-b,a)$, $T\alpha_2=(-d,c)$. We need to write $T\alpha_1$ and $T\alpha_2$ in terms of $\alpha_1$ and $\alpha_2$. We can do this by row reducing the augmented matrix
$$\left[\begin{array}{cc|cc}a&c&-b&-d\\b&d&a&c\end{array}\right].$$Since $\{\alpha_1,\alpha_2\}$ is a basis, the matrix $\left[\begin{array}{cc}a&b\\c&d\end{array}\right]$ is invertible. Thus (recalling Exercise 1.6.8, page 27), $ad-bc\not=0$. Thus the matrix row-reduces to
$$\left[\begin{array}{cc|cc}1&0&-\frac{ac+bd}{ad-bc}&-\frac{c^2+d^2}{ad-bc}\\\rule{0mm}{5mm}0&1&\frac{a^2+b^2}{ad-bc}&\frac{ac+bd}{ad-bc}\end{array}\right].$$Assuming $a\not=0$ this can be shown as follows:
$$\rightarrow\left[\begin{array}{cc|cc}1&c/a&-b/a&-d/a\\b&d&a&c\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&c/a&-b/a&-d/a\\0&\frac{ad-bc}{a}&\frac{a^2+b^2}{a}&\frac{ac+bd}{a}\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&c/a&-b/a&-d/a\\0&1&\frac{a^2+b^2}{ad-bc}&\frac{ac+bd}{ad-bc}\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&0&-\frac{ac+bd}{ad-bc}&-\frac{c^2+d^2}{ad-bc}\\\rule{0mm}{5mm}0&1&\frac{a^2+b^2}{ad-bc}&\frac{ac+bd}{ad-bc}\end{array}\right].$$If $b\not=0$ then a similar computation results in the same thing. Thus
$$[T]_{\mathcal B} =
\left[\begin{array}{cc}-\frac{ac+bd}{ad-bc}&-\frac{c^2+d^2}{ad-bc}\\\rule{0mm}{5mm}\frac{a^2+b^2}{ad-bc}&\frac{ac+bd}{ad-bc}\end{array}\right].$$Now $ad-bc\not=0$ implies that at least one of $a$ or $b$ is non-zero and at least one of $c$ or $d$ is non-zero, so $a^2+b^2>0$ and $c^2+d^2>0$. Thus
$$A_{12}A_{21}=-\frac{c^2+d^2}{ad-bc}\cdot\frac{a^2+b^2}{ad-bc}\not=0.$$
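Part (d) can be spot-checked with a random basis (a sketch assuming numpy): conjugating the standard matrix of $T$ by an invertible change-of-basis matrix should always give off-diagonal entries whose product is non-zero:

```python
import numpy as np

rng = np.random.default_rng(1)
T = np.array([[0.0, -1.0], [1.0, 0.0]])    # [T] in the standard basis
P = rng.standard_normal((2, 2))            # columns: a random basis alpha_1, alpha_2
while abs(np.linalg.det(P)) < 1e-6:        # re-draw in the unlikely singular case
    P = rng.standard_normal((2, 2))
M = np.linalg.inv(P) @ T @ P               # [T]_B for B = {alpha_1, alpha_2}
```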


Exercise 3.4.7

Let $T$ be the linear operator on $\mathbb R^3$ defined by
$$T(x_1,x_2,x_3)=(3x_1+x_3,\ \ -2x_1+x_2,\ \ -x_1+2x_2+4x_3).$$(a) What is the matrix of $T$ in the standard ordered basis for $\mathbb R^3$.
(b) What is the matrix of $T$ in the ordered basis
$$(\alpha_1,\alpha_2,\alpha_3)$$where $\alpha_1=(1,0,1)$, $\alpha_2=(-1,2,1)$, and $\alpha_3=(2,1,1)$?
(c) Prove that $T$ is invertible and give a rule for $T^{-1}$ like the one which defines $T$.

Solution:

(a) As usual we can read the matrix in the standard basis right off the definition of $T$:
$$[T]_{\{\epsilon_1,\epsilon_2,\epsilon_3\}}=\left[\begin{array}{ccc}3&0&1\\-2&1&0\\-1&2&4\end{array}\right].$$(b) $T\alpha_1=(4,-2,3)$, $T\alpha_2=(-2,4,9)$ and $T\alpha_3=(7,-3,4)$. We must write these in terms of $\alpha_1,\alpha_2,\alpha_3$. We do this by row-reducing the augmented matrix
$$\left[\begin{array}{ccc|ccc}1&-1&2&4&-2&7\\0&2&1&-2&4&-3\\1&1&1&3&9&4\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-1&2&4&-2&7\\0&2&1&-2&4&-3\\0&2&-1&-1&11&-3\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-1&2&4&-2&7\\0&2&1&-2&4&-3\\0&0&-2&1&7&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-1&2&4&-2&7\\0&1&1/2&-1&2&-3/2\\0&0&1&-1/2&-7/2&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&5/2&3&0&11/2\\0&1&1/2&-1&2&-3/2\\0&0&1&-1/2&-7/2&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&0&17/4&35/4&11/2\\0&1&0&-3/4&15/4&-3/2\\0&0&1&-1/2&-7/2&0\end{array}\right]$$Thus the matrix of $T$ in the basis $\{\alpha_1,\alpha_2,\alpha_3\}$ is
$$[T]_{\{\alpha_1,\alpha_2,\alpha_3\}}=\left[\begin{array}{ccc}17/4&35/4&11/2\\-3/4&15/4&-3/2\\-1/2&-7/2&0\end{array}\right].$$(c) We row-reduce the augmented matrix (of $T$ in the standard basis). If we achieve the identity matrix on the left of the dividing line then $T$ is invertible and the matrix on the right will represent $T^{-1}$ in the standard basis, from which we will be able to read the rule for $T^{-1}$ by inspection.
$$\left[\begin{array}{ccc|ccc}3&0&1 & 1&0&0\\-2&1&0 & 0&1&0\\-1&2&4 & 0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}-1&2&4 & 0&0&1\\3&0&1 & 1&0&0\\-2&1&0 & 0&1&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-2&-4 & 0&0&-1\\3&0&1 & 1&0&0\\-2&1&0 & 0&1&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-2&-4 & 0&0&-1\\0&6&13 & 1&0&3\\0&-3&-8 & 0&1&-2\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-2&-4 & 0&0&-1\\0&0&-3 & 1&2&-1\\0&-3&-8 & 0&1&-2\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-2&-4 & 0&0&-1\\0&-3&-8 & 0&1&-2\\0&0&-3 & 1&2&-1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-2&-4 & 0&0&-1\\0&1&8/3 & 0&-1/3&2/3\\0&0&-3 & 1&2&-1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&4/3& 0&-2/3&1/3\\0&1&8/3 & 0&-1/3&2/3\\0&0&-3 & 1&2&-1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&4/3& 0&-2/3&1/3\\0&1&8/3 & 0&-1/3&2/3\\0&0&1 & -1/3&-2/3&1/3\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&0& 4/9&2/9&-1/9\\0&1&0 & 8/9&13/9&-2/9\\0&0&1 & -1/3&-2/3&1/3\end{array}\right]$$Thus $T$ is invertible and the matrix for $T^{-1}$ in the standard basis is
$$\left[\begin{array}{ccc}4/9&2/9&-1/9\\8/9&13/9&-2/9\\-1/3&-2/3&1/3\end{array}\right].$$Thus $T^{-1}(x_1,x_2,x_3)=\left(\frac49 x_1+\frac29x_2-\frac19x_3,\frac89x_1+\frac{13}9x_2-\frac29x_3,-\frac13x_1-\frac23x_2+\frac13x_3\right)$.
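The claimed inverse can be verified by multiplying back (a sketch assuming numpy):

```python
import numpy as np

A = np.array([[3, 0, 1], [-2, 1, 0], [-1, 2, 4]], dtype=float)
# the matrix computed above, with every entry written over the denominator 9
Ainv = np.array([[4, 2, -1], [8, 13, -2], [-3, -6, 3]], dtype=float) / 9
```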


Exercise 3.4.8

Let $\theta$ be a real number. Prove that the following two matrices are similar over the field of complex numbers:
$$\left[\begin{array}{cc}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{array}\right],\quad\left[\begin{array}{cc}e^{i\theta}&0\\0&e^{-i\theta}\end{array}\right]$$(Hint: Let $T$ be the linear operator on $\Bbb C^2$ which is represented by the first matrix in the standard ordered basis. Then find vectors $\alpha_1$ and $\alpha_2$ such that $T\alpha_1=e^{i\theta}\alpha_1$, $T\alpha_2=e^{-i\theta}\alpha_2$, and $\{\alpha_1,\alpha_2\}$ is a basis.)

Solution: Let $\mathcal B$ be the standard basis. Following the hint, let $T$ be the linear operator on $\mathbb C^2$ which is represented by the first matrix in the standard ordered basis $\mathcal B$. Thus $[T]_{\mathcal B}$ is the first matrix above. Let $\alpha_1=(i,1)$, $\alpha_2=(i,-1)$. Then $\alpha_1,\alpha_2$ are clearly linearly independent, so $\mathcal B'=\{\alpha_1,\alpha_2\}$ is a basis for $\mathbb C^2$ (as a vector space over $\mathbb C$). Since $e^{i\theta}=\cos\theta+i\sin\theta$, it follows that $$T\alpha_1=(i\cos\theta-\sin\theta,\ \ i\sin\theta+\cos\theta)=(\cos\theta+i\sin\theta)(i,1)=e^{i\theta}\alpha_1$$ and similarly, since $e^{-i\theta}=\cos\theta-i\sin\theta$, it follows that $T\alpha_2=e^{-i\theta}\alpha_2$. Thus the matrix of $T$ with respect to $\mathcal B'$ is
$$[T]_{\mathcal B'}=\left[\begin{array}{cc}e^{i\theta}&0\\0&e^{-i\theta}\end{array}\right].$$By Theorem 14, page 92, $[T]_{\mathcal B}$ and $[T]_{\mathcal B'}$ are similar.
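The similarity can be checked numerically for a sample angle (a sketch assuming numpy): conjugating the rotation matrix by the matrix $P$ with columns $\alpha_1,\alpha_2$ should diagonalize it:

```python
import numpy as np

theta = 0.7                                 # any sample angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
P = np.array([[1j, 1j], [1, -1]])           # columns alpha_1 = (i, 1), alpha_2 = (i, -1)
D = np.linalg.inv(P) @ R @ P                # should be diag(e^{i theta}, e^{-i theta})
```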


Exercise 3.4.9

Let $V$ be a finite-dimensional vector space over the field $F$ and let $S$ and $T$ be linear operators on $V$. We ask: When do there exist ordered bases $\mathcal B$ and $\mathcal B'$ for $V$ such that $[S]_{\mathcal B}=[T]_{\mathcal B'}$? Prove that such bases exist if and only if there is an invertible linear operator $U$ on $V$ such that $T=USU^{-1}$. (Outline of proof: If $[S]_{\mathcal B}=[T]_{\mathcal B'}$, let $U$ be the operator which carries $\mathcal B$ onto $\mathcal B'$ and show that $S=UTU^{-1}$. Conversely, if $T=USU^{-1}$ for some invertible $U$, let $\mathcal B$ be any ordered basis for $V$ and let $\mathcal B'$ be its image under $U$. Then show that $[S]_{\mathcal B}=[T]_{\mathcal B'}$.)

Solution: We follow the hint. Suppose there exist bases $\mathcal B=\{\alpha_1,\dots,\alpha_n\}$ and $\mathcal B'=\{\beta_1,\dots,\beta_n\}$ such that $[S]_{\mathcal B}=[T]_{\mathcal B'}$. Let $U$ be the operator which carries $\mathcal B$ onto $\mathcal B'$. Then by Theorem 14, page 92, $[USU^{-1}]_{\mathcal B'}=[U]_{\mathcal B}^{-1}[USU^{-1}]_{\mathcal B}[U]_{\mathcal B}$, and by the comments at the very bottom of page 90, this equals $[U]_{\mathcal B}^{-1}[U]_{\mathcal B}[S]_{\mathcal B}[U]_{\mathcal B}^{-1}[U]_{\mathcal B}$, which equals $[S]_{\mathcal B}$, which we've assumed equals $[T]_{\mathcal B'}$. Thus $[USU^{-1}]_{\mathcal B'}=[T]_{\mathcal B'}$. Thus $USU^{-1}=T$.

Conversely, assume $T=USU^{-1}$ for some invertible $U$. Let $\mathcal B$ be any ordered basis for $V$ and let $\mathcal B'$ be its image under $U$. Then $[T]_{\mathcal B'}=[USU^{-1}]_{\mathcal B'}=[U]_{\mathcal B'}[S]_{\mathcal B'}[U]_{\mathcal B'}^{-1}$, which by Theorem 14, page 92, equals $[S]_{\mathcal B}$ (because $U^{-1}$ carries $\mathcal B'$ into $\mathcal B$). Thus $[T]_{\mathcal B'}=[S]_{\mathcal B}$.


Exercise 3.4.10

We have seen that the linear operator $T$ on $\mathbb R^2$ defined by $T(x_1,x_2)=(x_1,0)$ is represented in the standard ordered basis by the matrix
$$A=\left[\begin{array}{cc}1&0\\0&0\end{array}\right].$$This operator satisfies $T^2=T$. Prove that if $S$ is a linear operator on $\mathbb R^2$ such that $S^2=S$, then $S=0$, or $S=I$, or there is an ordered basis $\mathcal B$ for $\mathbb R^2$ such that $[S]_{\mathcal B}=A$ (above).

Solution: Suppose $S^2=S$. Let $\epsilon_1,\epsilon_2$ be the standard basis vectors for $\Bbb R^2$. Consider $\{S\epsilon_1,S\epsilon_2\}$.

If both $S\epsilon_1=S\epsilon_2=0$ then $S=0$. Thus suppose WLOG that $S\epsilon_1\not=0$.

First note that if $x\in S(\Bbb R^2)$ then $x=S(y)$ for some $y\in\Bbb R^2$ and therefore $S(x)=S(S(y))=S^2(y)=S(y)=x$. In other words $S(x)=x$ for all $x\in S(\Bbb R^2)$.

Case 1: Suppose $\exists$ $c\in\mathbb R$ such that $S\epsilon_2=cS\epsilon_1$. Then $S(\epsilon_2-c\epsilon_1)=0$. In this case $S$ is singular because it maps a non-zero vector to zero. Thus since $S\epsilon_1\not=0$ we can conclude that $\dim(S(\mathbb R^2))=1$. Let $\alpha_1$ be a basis for $S(\mathbb R^2)$. Let $\alpha_2\in\mathbb R^2$ be such that $\{\alpha_1,\alpha_2\}$ is a basis for $\mathbb R^2$. Then $S\alpha_2=k\alpha_1$ for some $k\in\mathbb R$. Let $\alpha'_2=\alpha_2-k\alpha_1$. Then $\{\alpha_1,\alpha'_2\}$ span $\mathbb R^2$ because if $x=a\alpha_1+b\alpha_2$ then $x=(a+bk)\alpha_1+b\alpha'_2$. Thus $\{\alpha_1,\alpha'_2\}$ is a basis for $\mathbb R^2$. We now determine the matrix of $S$ with respect to this basis. Since $\alpha_1\in S(\mathbb R^2)$ and $S(x)=x$ $\forall$ $x\in S(\mathbb R^2)$, it follows that $S\alpha_1=\alpha_1$. And consequently $S(\alpha_1)=1\cdot\alpha_1 + 0\cdot\alpha'_2$. Thus the first column of the matrix of $S$ with respect to $\alpha_1,\alpha'_2$ is $[1,0]^{\text{T}}$. Also \begin{align*}S\alpha'_2&=S(\alpha_2-k\alpha_1)=S\alpha_2-kS\alpha_1\\&=S\alpha_2-k\alpha_1=k\alpha_1-k\alpha_1=0=0\cdot\alpha_1+0\cdot\alpha'_2.\end{align*} So the second column of the matrix is $[0,0]^{\text{T}}$. Thus the matrix of $S$ with respect to the basis $\{\alpha_1,\alpha'_2\}$ is exactly $A$.

Case 2: There does not exist $c\in\mathbb R$ such that $S\epsilon_2=cS\epsilon_1$. In this case $S\epsilon_1$ and $S\epsilon_2$ are linearly independent. Thus if we let $\alpha_i=S\epsilon_i$ then $\{\alpha_1,\alpha_2\}$ is a basis for $\mathbb R^2$. Now, as shown above, $S(x)=x$ for all $x\in S(\mathbb R^2)$, thus $S\alpha_1=\alpha_1$ and $S\alpha_2=\alpha_2$. Thus the matrix of $S$ with respect to the basis $\{\alpha_1,\alpha_2\}$ is exactly the identity matrix, so $S=I$.
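The Case 1 construction can be illustrated numerically (a sketch assuming numpy, with a hypothetical rank-one idempotent $S$):

```python
import numpy as np

S = np.array([[1.0, 1.0], [0.0, 0.0]])   # satisfies S @ S == S and has rank one
alpha1 = S[:, 0]                         # spans S(R^2); here alpha1 = (1, 0)
alpha2 = np.array([0.0, 1.0])            # completes a basis of R^2
k = (S @ alpha2)[0]                      # S(alpha2) = k * alpha1 (first coordinate, since alpha1 = e1)
alpha2p = alpha2 - k * alpha1            # alpha_2' = alpha_2 - k * alpha_1
P = np.column_stack([alpha1, alpha2p])
M = np.linalg.inv(P) @ S @ P             # matrix of S in the basis {alpha1, alpha2'}
```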


Exercise 3.4.11

Let $W$ be the space of all $n\times1$ column matrices over a field $F$. If $A$ is an $n\times n$ matrix over $F$, then $A$ defines a linear operator $L_A$ on $W$ through left multiplication: $L_A(X)=AX$. Prove that every linear operator on $W$ is left multiplication by some $n\times n$ matrix, i.e., is $L_A$ for some $A$.

Now suppose $V$ is an $n$-dimensional vector space over the field $F$, and let $\mathcal B$ be an ordered basis for $V$. For each $\alpha$ in $V$, define $U\alpha=[\alpha]_{\mathcal B}$. Prove that $U$ is an isomorphism of $V$ onto $W$. If $T$ is a linear operator on $V$, then $UTU^{-1}$ is a linear operator on $W$. Accordingly, $UTU^{-1}$ is left multiplication by some $n\times n$ matrix $A$. What is $A$?

Solution:

Part 1: The first half of this question is essentially Theorem 11, page 87, in the special case $V=W$ where $\mathcal B=\mathcal B'$ is the standard basis of $F^{n\times1}$: if $T$ is a linear operator on $W$ and $A$ is the matrix of $T$ in the standard basis, then $T(X)=AX=L_A(X)$ for every $X\in W$, so $T=L_A$. This special case is discussed on page 88 after Theorem 12, and in particular in Example 13.

Part 2: Since $U(c\alpha_1+\alpha_2)=[c\alpha_1+\alpha_2]_{\mathcal B}=c[\alpha_1]_{\mathcal B}+[\alpha_2]_{\mathcal B}=cU(\alpha_1)+U(\alpha_2)$, $U$ is linear; it remains to show that $U$ is invertible. Suppose $\mathcal B=\{\alpha_1,\dots,\alpha_n\}$. Let $T$ be the function from $W$ to $V$ defined as follows:
$$\left[\begin{array}{c}a_1\\a_2\\ \vdots\\ a_n\end{array}\right]\mapsto a_1\alpha_1+\cdots+a_n\alpha_n.$$ Then $T$ is well-defined and linear, and it is clear by inspection that $TU$ is the identity transformation on $V$ and $UT$ is the identity transformation on $W$. Thus $U$ is an isomorphism from $V$ onto $W$.

It remains to determine the matrix of $UTU^{-1}$. Now $U\alpha_i$ is the standard $n\times1$ matrix with all zeros except in the $i$-th place, which equals one. Let $\mathcal B'$ be the standard basis for $W$. Then the matrix of $U$ with respect to $\mathcal B$ and $\mathcal B'$ is the identity matrix. Likewise the matrix of $U^{-1}$ with respect to $\mathcal B'$ and $\mathcal B$ is the identity matrix. Thus $[UTU^{-1}]_{\mathcal B'}=I[T]_{\mathcal B}I^{-1}=[T]_{\mathcal B}$. Therefore the matrix $A$ is simply $[T]_{\mathcal B}$, the matrix of $T$ with respect to $\mathcal B$.
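The conclusion $A=[T]_{\mathcal B}$ can be illustrated numerically (a sketch assuming numpy): if $P$ has the vectors of a sample basis $\mathcal B$ for $V=\mathbb R^3$ as its columns, then $U$ is $x\mapsto P^{-1}x$, and $UTU^{-1}$ acts on columns as multiplication by $P^{-1}[T]P$:

```python
import numpy as np

rng = np.random.default_rng(2)
T_std = rng.standard_normal((3, 3))                 # a sample operator T via its standard matrix
P = np.array([[1.0, 1, 0], [0, 1, 1], [1, 0, 1]])   # columns: a sample basis B (invertible)
Pinv = np.linalg.inv(P)                             # U(alpha) = [alpha]_B = Pinv @ alpha
A = Pinv @ T_std @ P                                # [T]_B
X = rng.standard_normal(3)                          # an arbitrary column in W
lhs = Pinv @ (T_std @ (P @ X))                      # (U T U^{-1})(X)
```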


Exercise 3.4.12

Let $V$ be an $n$-dimensional vector space over the field $F$, and let $\mathcal B=\{\alpha_1,\dots,\alpha_n\}$ be an ordered basis for $V$.

(a) According to Theorem 1, there is a unique linear operator $T$ on $V$ such that
$$T\alpha_j=\alpha_{j+1},\quad j=1,\dots,n-1,\quad T\alpha_n=0.$$What is the matrix $A$ of $T$ in the ordered basis $\mathcal B$?
(b) Prove that $T^n=0$ but $T^{n-1}\not=0$.
(c) Let $S$ be any linear operator on $V$ such that $S^n=0$ but $S^{n-1}\not=0$. Prove that there is an ordered basis $\mathcal B'$ for $V$ such that the matrix of $S$ in the ordered basis $\mathcal B'$ is the matrix $A$ of part (a).
(d) Prove that if $M$ and $N$ are $n\times n$ matrices over $F$ such that $M^n=N^n=0$ but $M^{n-1}\not=0\not=N^{n-1}$, then $M$ and $N$ are similar.

Solution:

(a) The $i$-th column of $A$ is given by the coefficients obtained by writing $T\alpha_i$ in terms of $\{\alpha_1,\dots,\alpha_n\}$. Since $T\alpha_i=\alpha_{i+1}$ for $i<n$ and $T\alpha_n=0$, the matrix is therefore
$$A=\left[\begin{array}{ccccccc}
0&0&0&0&\cdots&0&0\\
1&0&0&0&\cdots&0&0\\
0&1&0&0&\cdots&0&0\\
0&0&1&0&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&\cdots&1&0\end{array}\right].$$(b) $A$ has all zeros except $1$'s along the diagonal one below the main diagonal. Thus $A^2$ has all zeros except $1$'s along the diagonal that is two diagonals below the main diagonal, as follows:
$$A^2=\left[\begin{array}{ccccccc}
0&0&0&0&\cdots&0&0\\
0&0&0&0&\cdots&0&0\\
1&0&0&0&\cdots&0&0\\
0&1&0&0&\cdots&0&0\\
0&0&1&0&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&0&\cdots&0&0\end{array}\right].$$Similarly $A^3$ has all zeros except the diagonal three below the main diagonal. Continuing we see that $A^{n-1}$ is the matrix that is all zeros except for the bottom left entry which is a $1$:
$$A^{n-1}=\left[\begin{array}{ccccccc}
0&0&0&0&\cdots&0&0\\
0&0&0&0&\cdots&0&0\\
0&0&0&0&\cdots&0&0\\
0&0&0&0&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
1&0&0&0&\cdots&0&0\end{array}\right].$$Multiplying by $A$ one more time then yields the zero matrix, $A^n=0$. Since $A$ represents $T$ with respect to the basis $\mathcal B$, and $A^i$ represents $T^i$, we see that $T^{n-1}\not=0$ and $T^n=0$.
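Parts (a) and (b) can be checked numerically for, say, $n=5$ (a sketch assuming numpy):

```python
import numpy as np

n = 5
A = np.eye(n, k=-1)                       # ones on the subdiagonal: the matrix from (a)
An1 = np.linalg.matrix_power(A, n - 1)    # should have a single 1 in the bottom-left corner
An = np.linalg.matrix_power(A, n)         # should be the zero matrix
```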

(c) We will first show that $\dim(S^{k}(V))=n-k$ for $k=1,\dots,n-1$.

Suppose $\dim(S(V))=n$. Then $S$ is invertible, so $\dim(S^k(V))=n$ for all $k=1,2,\dots$, which contradicts the fact that $S^n=0$. Thus it must be that $\dim(S(V))\leq n-1$.

Now $\dim(S^2(V))$ cannot be greater than $\dim(S(V))$, because a linear transformation cannot map a space onto one of higher dimension. Thus $\dim(S^2(V))\leq n-1$. Suppose that $\dim(S^2(V))=n-1$. Then $$n-1=\dim(S^2(V))\leq \dim(S(V))\leq n-1,$$ so it must be that $\dim(S(V))=n-1$. Thus $S$ is an isomorphism on $S(V)$, because $S(V)$ and $S(S(V))$ have the same dimension. It follows that $S^k$ is also an isomorphism on $S(V)$ for all $k\geq2$, so $\dim(S^k(V))=n-1$ for all $k=2,3,4,\dots$, another contradiction. Thus $\dim(S^2(V))\leq n-2$.

Similarly, if $\dim(S^3(V))=n-2$, then it must be that $\dim(S^2(V))=n-2$, and therefore $S$ is an isomorphism on $S^2(V)$, from which it follows that $\dim(S^k(V))=n-2$ for all $k=3,4,\dots$, a contradiction. Thus $\dim(S^3(V))\leq n-3$. Continuing in this way we see that $\dim(S^k(V))\leq n-k$. In particular $\dim(S^{n-1}(V))\leq 1$, and since we are assuming $S^{n-1}\not=0$, it follows that $\dim(S^{n-1}(V))=1$. We have seen that $\dim(S^k(V))$ cannot equal $\dim(S^{k+1}(V))$ for $k=1,2,\dots,n-1$, so the dimension must go down by exactly one with each application of $S$. In other words $\dim(S^{n-2}(V))=2$, then in turn $\dim(S^{n-3}(V))=3$, and in general $\dim(S^{k}(V))=n-k$.

Now let $\alpha_1$ be any basis vector for $S^{n-1}(V)$, which we have shown has dimension one. Now $S^{n-2}(V)$ has dimension two and $S$ maps this space onto the space $S^{n-1}(V)$ of dimension one. Thus there must be $\alpha_2\in S^{n-2}(V)\setminus S^{n-1}(V)$ such that $S(\alpha_2)=\alpha_1$. Since $\alpha_2$ is not in the space generated by $\alpha_1$, and $\alpha_1,\alpha_2$ lie in the space $S^{n-2}(V)$ of dimension two, it follows that $\{\alpha_1,\alpha_2\}$ is a basis for $S^{n-2}(V)$. Now $S^{n-3}(V)$ has dimension three and $S$ maps this space onto the space $S^{n-2}(V)$ of dimension two. Thus there must be $\alpha_3\in S^{n-3}(V)\setminus S^{n-2}(V)$ such that $S(\alpha_3)=\alpha_2$. Since $\alpha_3$ is not in the space generated by $\alpha_1$ and $\alpha_2$, and $\alpha_1,\alpha_2,\alpha_3$ lie in the space $S^{n-3}(V)$ of dimension three, it follows that $\{\alpha_1,\alpha_2,\alpha_3\}$ is a basis for $S^{n-3}(V)$. Continuing in this way we produce a sequence $\{\alpha_1,\alpha_2,\dots,\alpha_k\}$ that is a basis for $S^{n-k}(V)$ and such that $S(\alpha_i)=\alpha_{i-1}$ for all $i=2,3,\dots,k$. In particular we obtain a basis $\{\alpha_1,\alpha_2,\dots,\alpha_n\}$ for $V$ such that $S(\alpha_i)=\alpha_{i-1}$ for all $i=2,3,\dots,n$. Reverse the ordering of this basis to give $\mathcal B'=\{\alpha_n,\alpha_{n-1},\dots,\alpha_1\}$. Then $\mathcal B'$ is the required basis: the matrix of $S$ with respect to $\mathcal B'$ is the matrix $A$ given in part (a).

(d) Suppose $S$ is the transformation of $F^{n\times1}$ given by $v\mapsto Mv$ and similarly let $T$ be the transformation $v\mapsto Nv$. Then $S^n=T^n=0$ and $S^{n-1}\not=0\not=T^{n-1}$. Then we know from the previous parts of this problem that there is a basis $\mathcal B$ for which $S$ is represented by the matrix from part (a). By Theorem 14, page 92, it follows that $M$ is similar to the matrix in part (a). Likewise there's a basis $\mathcal B'$ for which $T$ is represented by the matrix from part (a) and thus the matrix $N$ is also similar to the matrix in part (a). Since similarity is an equivalence relation (see last paragraph page 94), it follows that since $M$ and $N$ are similar to the same matrix that they must be similar to each other.
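The basis construction in (c), and hence the similarity in (d), can be illustrated numerically (a sketch assuming numpy, with a hypothetical nilpotent $M$ built by conjugating the matrix from part (a)):

```python
import numpy as np

n = 4
A = np.eye(n, k=-1)                          # the model matrix from part (a)
Q = np.array([[1.0, 2, 0, 1], [0, 1, 3, 0],
              [0, 0, 1, 2], [1, 0, 0, 1]])   # an arbitrary invertible matrix
M = Q @ A @ np.linalg.inv(Q)                 # then M^n = 0 but M^(n-1) != 0

# Follow the proof of (c): pick v with M^(n-1) v != 0; then
# {v, Mv, ..., M^(n-1) v} is a basis in which the operator is represented by A.
Mn1 = np.linalg.matrix_power(M, n - 1)
v = next(e for e in np.eye(n) if not np.allclose(Mn1 @ e, 0))
P = np.column_stack([np.linalg.matrix_power(M, j) @ v for j in range(n)])
recovered = np.linalg.inv(P) @ M @ P         # should equal A again
```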


Exercise 3.4.13

Let $V$ and $W$ be finite-dimensional vector spaces over the field $F$ and let $T$ be a linear transformation from $V$ into $W$. If
$$\mathcal B=\{\alpha_1,\dots,\alpha_n\}\ \ \ \text{and}\ \ \ \mathcal B'=\{\beta_1,\dots,\beta_m\}$$are ordered bases for $V$ and $W$, respectively, define the linear transformations $E^{p,q}$ as in the proof of Theorem 5: $E^{p,q}(\alpha_i)=\delta_{i,q}\beta_p$. Then the $E^{p,q}$, $1\leq p\leq m$, $1\leq q\leq n$, form a basis for $L(V,W)$, and so
$$T=\sum_{p=1}^m\sum_{q=1}^nA_{pq}E^{p,q}$$for certain scalars $A_{pq}$ (the coordinates of $T$ in this basis for $L(V,W)$). Show that the matrix $A$ with entries $A(p,q)=A_{pq}$ is precisely the matrix of $T$ relative to the pair $\mathcal B$, $\mathcal B'$.

Solution: Let $E^{p,q}_{\text M}$ be the matrix of the linear transformation $E^{p,q}$ with respect to the bases $\mathcal B$ and $\mathcal B'$. Then by the formula for the matrix associated to a linear transformation, as given in the proof of Theorem 11, page 87, $E^{p,q}_{\text M}$ is the matrix all of whose entries are zero except for the $(p,q)$-th entry, which is one. Thus $A=\sum_{p,q}A_{pq}E^{p,q}_{\text M}$. Since the association between linear transformations and matrices is an isomorphism, $T\mapsto A$ implies $\sum_{p,q}A_{pq}E^{p,q}\mapsto\sum_{p,q}A_{pq}E^{p,q}_{\text M}$. Thus $A$ is exactly the matrix whose entries are the $A_{pq}$'s.

From http://greggrant.org
