1. Solution: Since

$$ T^2(w, z) = T(z, 0) = (0, 0), $$ it follows that $G(0, T) = V$. Therefore every vector in $\mathbb{C}^2$ is a generalized eigenvector of $T$.
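A quick computational sanity check (an added illustration, not part of the original solution), assuming, as in the exercise, that $T(w, z) = (z, 0)$:

```python
# Sanity check (added illustration): with T(w, z) = (z, 0) on C^2,
# T^2 is the zero operator, so G(0, T) = null T^2 = C^2 and every
# vector is a generalized eigenvector of T.

def T(v):
    w, z = v
    return (z, 0)

# T^2 annihilates a few sample vectors, including complex ones.
samples = [(1, 0), (0, 1), (3 + 2j, -1j), (-5, 7)]
T2_is_zero = all(T(T(v)) == (0, 0) for v in samples)
```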

2. Solution: The eigenvalues of $T$ are $i$ and $-i$. Since $\mathbb{C}^2$ has dimension $2$, the generalized eigenspaces are the eigenspaces themselves.

3. Solution: We will prove $\operatorname{null} (T - \lambda I)^n = \operatorname{null} \left(T^{-1} - \dfrac{1}{\lambda} I\right)^n$ for all nonnegative integers $n$ by induction on $n$.

The case $n = 0$ is trivial, and it is easy to check that $\operatorname{null} (T - \lambda I) = \operatorname{null} \left(T^{-1} - \dfrac{1}{\lambda} I\right)$ (see Exercise 9 in section 5C), which settles the case $n = 1$. Let $n > 1$ and assume the result holds for all nonnegative integers less than $n$.

Suppose $v \in \operatorname{null}(T – \lambda I)^n$. Then

$$ (T - \lambda I)v \in \operatorname{null}\left(T - \lambda I\right)^{n-1}. $$ By the induction hypothesis $$ (T - \lambda I)v \in \operatorname{null}\left(T^{-1} - \frac{1}{\lambda} I\right)^{n-1}. $$ Thus $$ 0 = \left(T^{-1} - \frac{1}{\lambda} I\right)^{n-1}(T - \lambda I)v = (T - \lambda I)\left(T^{-1} - \frac{1}{\lambda} I\right)^{n-1}v, $$ where the second equality follows from Theorem 1 below.

Theorem 1. Suppose $T \in \mathcal{L}(V)$ is invertible and $p, q \in \mathcal{P}(\mathbb{F})$. Then $p(T^{-1}) q(T) = q(T) p(T^{-1})$.

Proof. The key idea is that every power of $T$ commutes with every power of $T^{-1}$.

Suppose $p(z) = \sum_{j=0}^m a_j z^j$ and $q(z) = \sum_{k=0}^n b_k z^k$ for $z \in \mathbb{F}$. Then

$$ \begin{aligned} p\left(T^{-1}\right)q\left(T\right) &= \left(\sum_{j=0}^m a_j \left(T^{-1}\right)^j\right)\left(\sum_{k=0}^n b_k T^k\right)\\ &= \sum_{j=0}^m \sum_{k=0}^n a_j b_k \left(T^{-1}\right)^j T^k\\ &= \sum_{j=0}^m \sum_{k=0}^n b_k a_j T^k \left(T^{-1}\right)^j\\ &= \sum_{k=0}^n \sum_{j=0}^m b_k a_j T^k \left(T^{-1}\right)^j\\ &= \left(\sum_{k=0}^n b_k T^k\right)\left(\sum_{j=0}^m a_j \left(T^{-1}\right)^j\right)\\ &= q\left(T\right)p\left(T^{-1}\right). \end{aligned} $$

Therefore $$ \left(T^{-1} - \frac{1}{\lambda}I\right)^{n-1}v \in \operatorname{null} (T - \lambda I). $$ But

$$ \operatorname{null} (T - \lambda I) = \operatorname{null} \left(T^{-1} - \frac{1}{\lambda} I\right). $$ Hence

$$ \left(T^{-1} - \frac{1}{\lambda}I\right)^{n-1}v \in \operatorname{null} \left(T^{-1} - \frac{1}{\lambda} I\right) $$ and so

$$ 0 = \left(T^{-1} - \frac{1}{\lambda} I\right) \left(T^{-1} - \frac{1}{\lambda}I\right)^{n-1}v = \left(T^{-1} - \frac{1}{\lambda}I\right)^n v, $$ which shows that $v \in \operatorname{null} \left(T^{-1} - \frac{1}{\lambda}I\right)^n$. Therefore $$\operatorname{null} (T - \lambda I)^n \subset \operatorname{null} \left(T^{-1} - \frac{1}{\lambda} I\right)^n.$$ To prove the inclusion in the other direction, it suffices to repeat the same argument with the roles of $T - \lambda I$ and $T^{-1} - \frac{1}{\lambda}I$ interchanged.

Now, by 8.11, we have

$$ G(\lambda, T) = \operatorname{null}(T - \lambda I)^{\operatorname{dim} V} = \operatorname{null}\left(T^{-1} - \frac{1}{\lambda} I\right)^{\operatorname{dim} V} = G\left(\frac{1}{\lambda}, T^{-1}\right). $$

4. Solution: Suppose $v \in G(\alpha, T) \cap G(\beta, T)$ and suppose by contradiction that $v \neq 0$. Then the list $v, v$ consists of generalized eigenvectors corresponding to the distinct eigenvalues $\alpha$ and $\beta$ of $T$. Now 8.13 implies that $v, v$ is linearly independent, which is clearly a contradiction. Therefore $v$ must be $0$.

5. Solution: Let $a_0, a_1, \dots, a_{m-1} \in \mathbb{F}$ be such that

$$ 0 = a_0v + a_1Tv + \dots + a_{m-1}T^{m-1}v. $$ Applying $T^{m-1}$ to both sides of the equation above yields

$$ 0 = a_0T^{m-1}v, $$ because $T^mv = 0$ annihilates every other term; since $T^{m-1}v \neq 0$, this shows that $a_0 = 0$. Therefore

$$ 0 = a_1Tv + \dots + a_{m-1}T^{m-1}v. $$ Applying $T^{m-2}$ yields

$$ 0 = a_1T^{m-1}v, $$ which shows that $a_1 = 0$. Continuing in this fashion, we see that $a_0 = a_1 = \dots = a_{m-1} = 0$. Thus $v, Tv, T^2v, \dots, T^{m-1}v$ is linearly independent.
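A concrete instance of the list above (an added illustration, with an operator chosen for this sketch): take the shift $N$ on $\mathbb{F}^3$ with $Ne_1 = 0$, $Ne_2 = e_1$, $Ne_3 = e_2$, $v = e_3$, and $m = 3$; then $N^2v \neq 0$, $N^3v = 0$, and the list $v, Nv, N^2v$ is linearly independent.

```python
# Added example: N e1 = 0, N e2 = e1, N e3 = e2 on F^3, and v = e3.
# The list v, Nv, N^2 v is e3, e2, e1, which is linearly independent.

def N(v):
    x1, x2, x3 = v
    return (x2, x3, 0)

v = (0, 0, 1)    # e3
Nv = N(v)        # e2
N2v = N(Nv)      # e1
N3v = N(N2v)     # 0

def det3(a, b, c):
    # determinant of the 3x3 matrix with rows a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

# nonzero determinant means the three vectors are independent
independent = det3(v, Nv, N2v) != 0
```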

6. Solution: Suppose by contradiction that $R \in \mathcal{L}(\mathbb{C}^3)$ is a square root of $T$, that is, $R^2 = T$. Note that $V = \operatorname{null} T^3$, because $T^3 = 0$. We have

$$ \begin{aligned} V &= \operatorname{null} T^3 = \operatorname{null} R^6\\ &= \operatorname{null} R^3 = \operatorname{null} RT\\ &\subset \operatorname{null} R^2T= \operatorname{null} T^2, \end{aligned} $$ where the equality $\operatorname{null} R^6 = \operatorname{null} R^3$ follows from 8.4 applied to $R$ (since $\operatorname{dim} V = 3$). But this is a contradiction: since $T^2(z_1, z_2, z_3) = (z_3, 0, 0)$, we see that $\operatorname{null} T^2 = \{(z_1, z_2, 0): z_1, z_2 \in \mathbb{C}\}$, so we can’t have $V \subset \operatorname{null} T^2$.
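A numerical check of the two facts used above (an added illustration, assuming, as in the exercise, that $T(z_1, z_2, z_3) = (z_2, z_3, 0)$):

```python
# Added check: T^3 = 0 while T^2(z1, z2, z3) = (z3, 0, 0), so null T^2
# consists only of vectors with third coordinate zero and cannot be all of V.

def T(v):
    z1, z2, z3 = v
    return (z2, z3, 0)

def power(f, k, v):
    for _ in range(k):
        v = f(v)
    return v

samples = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -3, 5)]
T3_is_zero = all(power(T, 3, v) == (0, 0, 0) for v in samples)
# (0, 0, 1) has nonzero third coordinate, so it is *not* in null T^2:
not_in_null_T2 = power(T, 2, (0, 0, 1)) != (0, 0, 0)
```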

7. Solution: This follows directly from 8.19 and 5.32.

8. Solution: False. Let $V = \mathbb{C}^2$. Define $S, T \in \mathcal{L}(\mathbb{C}^2)$ by

$$ \begin{aligned} S(z_1, z_2) &= (0, z_1)\\ T(z_1, z_2) &= (z_2, 0). \end{aligned} $$ Both $S$ and $T$ are nilpotent; however, $S + T$ is not, since $(S + T)^2$ equals the identity.
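The three claims can be verified directly (an added check, not part of the original solution):

```python
# Added check: S and T are nilpotent (their squares vanish), while
# (S + T)^2 is the identity, so S + T is not nilpotent.

def S(v):
    z1, z2 = v
    return (0, z1)

def T(v):
    z1, z2 = v
    return (z2, 0)

def U(v):  # U = S + T
    return tuple(a + b for a, b in zip(S(v), T(v)))

samples = [(1, 0), (0, 1), (2, -3)]
S_nilpotent = all(S(S(v)) == (0, 0) for v in samples)
T_nilpotent = all(T(T(v)) == (0, 0) for v in samples)
U2_is_identity = all(U(U(v)) == v for v in samples)
```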

9. Solution: We have

$$ \operatorname{null} (TS)^{\operatorname{dim} V} = \operatorname{null} (TS)^{\operatorname{dim} V + 1} = \operatorname{null} T(ST)^{\operatorname{dim} V}S = V, $$ where the first equality follows from 8.4, the second because $(TS)^{\operatorname{dim} V + 1} = T(ST)^{\operatorname{dim} V}S$, and the third because $(ST)^{\operatorname{dim} V} = 0$ (by 8.18). Thus $(TS)^{\operatorname{dim} V} = 0$ and so $TS$ is nilpotent.
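A concrete instance (an added illustration, with matrices chosen for this sketch): below, $ST = 0$ is nilpotent, and $TS$, while nonzero, satisfies $(TS)^2 = 0$, as the exercise guarantees.

```python
# Added example: ST = 0 but TS != 0; still, TS is nilpotent ((TS)^2 = 0).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = [[0, 1], [0, 0]]
T = [[1, 0], [0, 0]]

ST = matmul(S, T)
TS = matmul(T, S)
TS2 = matmul(TS, TS)

zero = [[0, 0], [0, 0]]
```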

10. Solution: Write $n = \operatorname{dim} V$. If $T$ is not nilpotent, then $\operatorname{dim} \operatorname{null} T^n < n$ and, by the same reasoning used in 8.4, it follows that $\operatorname{null} T^{n-1} = \operatorname{null} T^n$. Thus, by 8.5, we have

$$ V = \operatorname{null} T^{n-1} + \operatorname{range} T^n. $$ Since $\operatorname{range} T^n \subset \operatorname{range} T^{n-1}$, we must also have

$$ V = \operatorname{null} T^{n-1} + \operatorname{range} T^{n-1}. $$ Then, by the Fundamental Theorem of Linear Maps (3.22),

$$ \operatorname{dim} (\operatorname{null} T^{n-1} + \operatorname{range} T^{n-1}) = \operatorname{dim} V = \operatorname{dim} \operatorname{null} T^{n-1} + \operatorname{dim} \operatorname{range} T^{n-1}. $$ 3.78 now implies that $\operatorname{null} T^{n-1} + \operatorname{range} T^{n-1}$ is a direct sum.

12. Solution: Suppose $v_1, \dots, v_n$ is such a basis. Then $Nv_1 = 0$, because the first column of the matrix has $0$ in all its entries. The definition of the matrix of a linear map shows that $Nv_2 \in \operatorname{span}(v_1)$. But this implies that $N^2v_2 = 0$. Similarly, $Nv_3 \in \operatorname{span}(v_1, v_2)$, so $N^3v_3 = 0$.

Continuing like this, we see that $N^j v_j = 0$, for each $j = 1, \dots, n$. Therefore $N^n = 0$ and so $N$ is nilpotent.
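A concrete check of the conclusion (an added illustration, with an arbitrarily chosen strictly upper-triangular matrix):

```python
# Added check: a 3x3 matrix with zeros on and below the diagonal
# satisfies N^3 = 0 (while N^2 need not vanish).

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

N = [[0, 1, 2],
     [0, 0, 3],
     [0, 0, 0]]

N2 = matmul(N, N)
N3 = matmul(N2, N)
zero = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```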

13. Solution: It is easy when $\mathbb{F} = \mathbb{C}$: by the Spectral Theorem, $V$ then has a basis consisting of eigenvectors of $N$, and for each vector $v$ in this basis we have $0 = N^{\operatorname{dim} V} v = \lambda^{\operatorname{dim} V} v$ for the corresponding eigenvalue $\lambda$, which implies that $\lambda = 0$ and hence $N = 0$.

More generally, without restricting $\mathbb{F}$ to $\mathbb{C}$, we will prove that $N^{\operatorname{dim} V - 1} = 0$; the same argument then shows $N^{\operatorname{dim} V - 2} = 0$, and so on, until we reach $N = 0$.

Let $\mathcal{N} = N^{\operatorname{dim} V - 1}$. Note that $\mathcal{N}$ is also normal (powers of a normal operator are normal, since $N$ commutes with $N^*$) and that $\mathcal{N}^2 = 0$. Then, for all $v \in V$,

$$ \|\mathcal{N}^*\mathcal{N}v\| = \|\mathcal{N}\mathcal{N}v\| = 0, $$ where the first equality comes from 7.20 and the second from $\mathcal{N}^2 = 0$. Thus $\mathcal{N}^*\mathcal{N} = 0$. Therefore

$$ ||\mathcal{N}v||^2 = \langle \mathcal{N}v, \mathcal{N}v \rangle = \langle v, \mathcal{N}^*\mathcal{N}v \rangle = 0, $$ which shows that $\mathcal{N} = 0$.

14. Solution: This follows directly from 8.19 and 6.37.

15. Solution: By the same reasoning used in the proof of 8.4, it follows that $\operatorname{dim} \operatorname{null} N^{\operatorname{dim} V} \ge \operatorname{dim} V$. Thus $\operatorname{dim} \operatorname{null} N^{\operatorname{dim} V} = \operatorname{dim} V$ and so $N$ is nilpotent. We now have $\operatorname{dim} V + 1$ null spaces, each of a different dimension. Since the sequence

$$ \operatorname{dim} \operatorname{null} N^0, \operatorname{dim} \operatorname{null} N^1, \dots, \operatorname{dim} \operatorname{null} N^{\operatorname{dim} V} $$ is strictly increasing and consists of $\operatorname{dim} V + 1$ integers between $0$ and $\operatorname{dim} V$, the only possibility is $\operatorname{dim} \operatorname{null} N^j = j$ for each $j$.

16. Solution: Obviously $V = \operatorname{range} T^0 = \operatorname{range} I$. Let $k$ be a nonnegative integer. Suppose $v \in \operatorname{range} T^{k+1}$. Then $v = T^{k+1}u$ for some $u \in V$. But then $v = T^k(Tu)$. This implies that $v \in \operatorname{range} T^k$.

17. Solution: By the Fundamental Theorem of Linear Maps (3.22), we have

$$ \operatorname{dim} \operatorname{null} T^m + \operatorname{dim} \operatorname{range} T^m = \operatorname{dim} \operatorname{null} T^{m+1} + \operatorname{dim} \operatorname{range} T^{m+1}, $$ which implies that $\operatorname{dim} \operatorname{null} T^m = \operatorname{dim} \operatorname{null} T^{m+1}$ (because $\operatorname{dim} \operatorname{range} T^m = \operatorname{dim} \operatorname{range} T^{m+1}$). Since $\operatorname{null} T^m \subset \operatorname{null} T^{m+1}$, this gives $\operatorname{null} T^m = \operatorname{null} T^{m+1}$. Thus, by 8.3, for every positive integer $k$, we have

$$ \operatorname{dim} \operatorname{null} T^m = \operatorname{dim} \operatorname{null} T^{m+k}. $$ Applying the Fundamental Theorem of Linear Maps again to $T^m$ and $T^{m+k}$ we see that

$$ \operatorname{dim} \operatorname{range} T^m = \operatorname{dim} \operatorname{range} T^{m+k}. $$ Since $\operatorname{range} T^{m+k} \subset \operatorname{range} T^m$, it follows that $\operatorname{range} T^{m+k} = \operatorname{range} T^m$.

18. Solution: This follows directly from the previous exercise and 8.4.

19. Solution: This is just a matter of realizing that $\operatorname{null} T^m \subset \operatorname{null} T^{m+1}$ and $\operatorname{range} T^{m+1} \subset \operatorname{range} T^m$ and applying the Fundamental Theorem of Linear Maps.

20. Solution: By Exercise 19, $\operatorname{null} T^4 \neq \operatorname{null} T^5$. By Exercise 15, this implies that $T$ is nilpotent.

21. Solution: Let $W = \mathbb{F}^\infty \times \mathbb{F}^\infty$ and define $T \in \mathcal{L}(W)$ by

$$ T\bigl((x_1, x_2, x_3, \dots), (y_1, y_2, y_3, \dots)\bigr) = \bigl((x_2, x_3, \dots), (0, y_1, y_2, y_3, \dots)\bigr), $$ that is, $T$ applies the backward shift operator (call it $B$) to the first slot and the forward shift operator (call it $F$) to the second slot. Thus, for each positive integer $k$, we have $$ \operatorname{null} B^k = \{(x_1, x_2, x_3, \dots) \in \mathbb{F}^\infty: x_j = 0 \text{ for all } j > k\} $$ and $$ \operatorname{range} F^k = \{(x_1, x_2, x_3, \dots) \in \mathbb{F}^\infty: x_1 = x_2 = \dots = x_k = 0\}. $$ Moreover $\operatorname{range} B^k = \mathbb{F}^\infty$ and $\operatorname{null} F^k = \{0\}$. Note that $\operatorname{null} B^k \subsetneq \operatorname{null} B^{k+1}$ and $\operatorname{range} F^k \supsetneq \operatorname{range} F^{k+1}$. Thus $$ \operatorname{null} T^k = \{(x, 0) \in \mathbb{F}^\infty \times \mathbb{F}^\infty: x \in \operatorname{null} B^k\} $$ and

$$ \operatorname{range} T^k = \{(x, y) \in \mathbb{F}^\infty \times \mathbb{F}^\infty: y \in \operatorname{range} F^k\}. $$ Hence $\operatorname{null} T^k \subsetneq \operatorname{null} T^{k+1}$ and $\operatorname{range} T^k \supsetneq \operatorname{range} T^{k+1}$.
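The two shifts can be modeled on finitely supported sequences (an added illustration; elements of $\mathbb{F}^\infty$ with finitely many nonzero entries, written as tuples with trailing zeros dropped):

```python
# Added model: B drops the first coordinate, F prepends a zero.
# null B^k grows with k, while F^k never kills a nonzero vector.

def normalize(x):
    x = list(x)
    while x and x[-1] == 0:
        x.pop()
    return tuple(x)

def B(x):  # backward shift: (x1, x2, x3, ...) -> (x2, x3, ...)
    return normalize(x[1:])

def F(x):  # forward shift: (x1, x2, ...) -> (0, x1, x2, ...)
    return normalize((0,) + x)

def power(f, k, x):
    for _ in range(k):
        x = f(x)
    return x

e2 = (0, 1)  # in null B^2 but not in null B^1
in_null_B2 = power(B, 2, e2) == ()
not_in_null_B1 = B(e2) != ()
# F never maps a nonzero sequence to zero:
F_keeps_nonzero = power(F, 3, (1,)) == (0, 0, 0, 1)
```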

## Phi

16 Oct 2020

I give an alternative short proof of exercise 3.

Suppose $\mathbb{F}$ is any field, $V$ is an $n$-dimensional vector space over $\mathbb{F}$, and $T \in \mathcal{L}(V)$ is invertible.

Suppose $\lambda \in \mathbb{F}$ and $\lambda \neq 0$.

$$T^{-1} - \frac{1}{\lambda} I = T^{-1} (T-\lambda I) (-\frac{1}{\lambda} I)$$

All three factors on the right-hand side of the previous equation commute.

Because of this, $(T^{-1} - \frac{1}{\lambda} I)^n = (-\frac{1}{\lambda})^n \cdot T^{-n} \cdot (T - \lambda I)^n$.

Since $T^{-n}$ is invertible, $G(\frac{1}{\lambda}, T^{-1}) = \operatorname{null}(T^{-1}-\frac{1}{\lambda}I)^n = \operatorname{null}(T - \lambda I)^n = G(\lambda, T)$.
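The identity $(T^{-1} - \frac{1}{\lambda} I)^n = (-\frac{1}{\lambda})^n T^{-n} (T - \lambda I)^n$ can be confirmed numerically (an added check, with a $2 \times 2$ matrix, $\lambda$, and $n$ chosen arbitrarily for this sketch), using exact rational arithmetic:

```python
# Added check of the commuting-factors identity, in exact arithmetic.
from fractions import Fraction as Fr

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def matpow(A, k):
    P = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
    for _ in range(k):
        P = matmul(P, A)
    return P

T = [[Fr(2), Fr(1)], [Fr(0), Fr(3)]]
Tinv = [[Fr(1, 2), Fr(-1, 6)], [Fr(0), Fr(1, 3)]]  # inverse of T
I = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]
lam, n = Fr(2), 3

lhs = matpow(matsub(Tinv, scal(1 / lam, I)), n)
rhs = scal((-1 / lam) ** n,
           matmul(matpow(Tinv, n), matpow(matsub(T, scal(lam, I)), n)))
```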

## Mustafa Kemal Turak

8 Jul 2020

I give an alternative proof of Q13. Suppose $N$ is a nilpotent normal operator. By 8.18, $N$ has an upper-triangular matrix, with respect to some basis $v_1, \dots, v_n$, whose diagonal entries all equal zero. Applying the Gram–Schmidt procedure gives an orthonormal basis $e_1, \dots, e_n$ with $\operatorname{span}(v_1, \dots, v_j) = \operatorname{span}(e_1, \dots, e_j)$ for each $j$. Since $e_j \in \operatorname{span}(e_1, \dots, e_j) = \operatorname{span}(v_1, \dots, v_j)$, we get

$N(e_j) \in \operatorname{span}(v_1, \dots, v_{j-1}) = \operatorname{span}(e_1, \dots, e_{j-1})$ for each $j$ (because of the form of $\mathcal{M}\bigl(N, (v_1, \dots, v_n)\bigr)$ given by 8.18).

Finally, we obtain that $N$ has a matrix of the form in 8.18 with respect to an orthonormal basis; denote it $\mathcal{M}(N)$. Since $N$ is a normal operator, $\mathcal{M}(N)\,\mathcal{M}(N^*) = \mathcal{M}(N^*)\,\mathcal{M}(N)$. By 7.10, this implies that $\mathcal{M}(N) = 0$, so that $N = 0$.

## Zixiu Su

24 Jun 2020

For ex 13, we can use the nice property we proved in 7A17: $T$ normal implies $\operatorname{null} T^k = \operatorname{null} T$ for every positive integer $k$.

## Chi Yuan Lau

10 Aug 2019

For ex 3, I have a convenient method: note ex 4, 5 of 3.D, then how about considering

$$\mathrm{dim}\ker(\lambda_iT)^n(T^{-1}-\lambda_iI)^n$$

## Chi Yuan Lau

9 Aug 2019

For ex $16, 17, 18, 19$, why not use theorems $8.2$-$8.4$ on $T^*$, and then consider the orthogonal complement?

## Linearity

9 Aug 2019

~~Did you see any inner product in this section? I have no idea what you are talking about.~~ Yes, you are right.

## Hunter

24 Jul 2019

Any thoughts on 11?

## Linearity

24 Jul 2019

Consider the counterexample $I+N$, where $N$ is a nilpotent operator such that $N\ne 0$ and $N^2=0$. I will update details later.

## Hunter

25 Jul 2019

The nilpotent operator $N$ will have an upper-triangular matrix with zeros on the diagonal. The square of $I + N$ will be $I + 2N$ (because $N^2 = 0$), which will be upper triangular, so all the eigenvalues are $1$ by 5.32.

There's a result that says an upper triangular matrix with all the same entries on the diagonal is diagonalizable iff it's already diagonal, so using that means we're already done.

If we don't want to use that, picking the nilpotent operator $N = \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}$ gives $(I + N)^{2} = \begin{pmatrix} 1 & 2 \\ 0 & 1\end{pmatrix}$, which has only one eigenvalue ($\lambda = 1$). The corresponding eigenspace is $1$-dimensional, so by 5.41 it's not diagonalizable.
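The two matrix facts can be verified directly (an added check, not part of the original comment):

```python
# Added check: with N = [[0, 1], [0, 0]], (I + N)^2 = [[1, 2], [0, 1]],
# and the eigenspace of its only eigenvalue 1 is spanned by (1, 0) alone.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

IN = [[1, 1], [0, 1]]   # I + N
M = matmul(IN, IN)      # (I + N)^2

M_minus_I = [[M[0][0] - 1, M[0][1]], [M[1][0], M[1][1] - 1]]
e1_in_eigenspace = apply(M_minus_I, (1, 0)) == (0, 0)
e2_not_in_eigenspace = apply(M_minus_I, (0, 1)) != (0, 0)
```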

How did you think to consider $I+N$?

## Linearity

26 Jul 2019

If you know the Jordan form, it is natural to consider a single Jordan block. You may also wonder what the Jordan form of a power of a single Jordan block is.

## Kiwi

23 Aug 2019

Not entirely sure if this makes sense, but if $T^n$ is diagonalizable, there exists some basis of eigenvectors $v_1,\dots,v_n$ with corresponding eigenvalues $\lambda_1,\dots,\lambda_n$. Then, we have $T^nv_i=\lambda_iv_i$ and thus $Tv_i$ must equal $\sqrt[n]{\lambda_i}v_i$, and so $T$ is also diagonalizable with respect to this list of eigenvectors. Similarly the other way around, we then see $T^n$ is diagonalizable iff $T$ is diagonalizable. Thus choose some non-diagonalizable $T$ and you have your counter-example.

I feel a little unsure on the $\sqrt[n]{\lambda_i}$ part, and I'm not quite sure why, but if this is flawed I have a feeling it is there.

## Linearity

23 Aug 2019

Your feeling is correct. That part is wrong. $T^n$ being diagonalizable does not imply that $T$ is diagonalizable. You may look at the example $$T=\begin{pmatrix} 0 & 1\\ 0& 0\end{pmatrix}.$$ $T^2=0$ is diagonalizable but $T$ is not.
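The counterexample is easy to verify computationally (an added check, not part of the original comment):

```python
# Added check: T^2 = 0 is the zero operator (trivially diagonalizable),
# while T itself is nonzero, has 0 as its only eigenvalue, and its
# eigenspace for 0 is only 1-dimensional, so T is not diagonalizable.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

T = [[0, 1], [0, 0]]
T2 = matmul(T, T)

# Tv = 0 exactly when the second coordinate of v is 0:
kills_e1 = apply(T, (1, 0)) == (0, 0)
moves_e2 = apply(T, (0, 1)) != (0, 0)
```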

## Chayan Ghosh

29 Nov 2018

Solution to 8.A problem 3

## Marcel Ackermann

24 Mar 2018

8A18) $\dim V = \dim \operatorname{range} T + \dim \operatorname{null} T$ (Fundamental Theorem of Linear Maps), so $\dim V = \dim \operatorname{range} T^{n+i} + \dim \operatorname{null} T^{n+i}$. According to 8.4, with $n = \dim V$ we have $\operatorname{null} T^n = \operatorname{null} T^{n+i}$, so $\dim V = \dim \operatorname{range} T^{n+i} + \dim \operatorname{null} T^n$. Since $\dim V$ and $\dim \operatorname{null} T^n$ are constant, $\dim \operatorname{range} T^{n+i} = \dim \operatorname{range} T^{n}$. According to exercise 16, $\operatorname{range} T^{n+i} \subset \operatorname{range} T^{n}$, so $\operatorname{range} T^{n+i} = \operatorname{range} T^{n}$.

## Marcel Ackermann

21 Sep 2017

8A9) We know $(ST)^{\dim V} = 0$. Then $(TS)^{\dim V+1} = (TS)(TS)\cdots(TS) = T(ST)(ST)\cdots(ST)S = T(ST)^{\dim V}S = T\,0\,S = 0$, so $TS$ is nilpotent.

## The Dark Knight

16 Jan 2021

This is a great answer. Not only is it much simpler, but it also works for infinite-dimensional spaces.

@Linearity Please take it into consideration. I am sure you will like it too.

## Marcel Ackermann

17 Sep 2017

8A5) If a new vector is independent from all vectors in a list and the list is independent, then the list joined with the new vector is independent.

By 8.2 and 8.3: $\{0\} = \operatorname{null} T^0 \subset \operatorname{null} T^1 \subset \dots \subset \operatorname{null} T^{m-1} \subset V$.

$Tv \in \operatorname{null} T^{m-1}$ and $v \notin \operatorname{null} T^{m-1}$; by the previous statement, this means that $Tv$ and $v$ are linearly independent, because otherwise we would have a contradiction (subspaces are closed under scalar multiplication).

This generalizes to all vectors in the form of the exercise: $T^{m-k}v \notin \operatorname{null} T^{k-1}$ but $T^{m-k}v \in \operatorname{null} T^k$.

## Marcel Ackermann

31 Jul 2017

8A1) $G(0,T)=\{(w,z): w,z \in \mathbb{C}\}$, because $T^2(w,z)=(0,0)$