If you find any mistakes, please make a comment! Thank you.

Chapter 7 Exercise A


1. Solution: By definition, we have
\[\begin{align*}\langle (z_1,\cdots,z_n),T^*(w_1,\cdots,w_n)\rangle=&\langle T(z_1,\cdots,z_n),(w_1,\cdots,w_n) \rangle
\\=& z_1\overline{w_2}+\cdots+z_{n-1}\overline{w_n}=\langle (z_1,\cdots,z_n),(w_2,\cdots,w_n,0)\rangle.\end{align*}\]Therefore $T^*(w_1,\cdots,w_n)=(w_2,\cdots,w_n,0)$, or equivalently $T^*(z_1,\cdots,z_n)=(z_2,\cdots,z_n,0)$.
See also Linear Algebra Done Right Solution Manual Chapter 6 Problem 27.
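As a quick numerical sanity check (a sketch only, not part of the proof; the dimension n, the random seed, and the helper inner are our own choices), one can verify the formula for $T^*$ with numpy:

```python
import numpy as np

# T is the shift from Exercise 1: T(z_1, ..., z_n) = (0, z_1, ..., z_{n-1}).
# Its matrix has ones on the subdiagonal; the conjugate transpose is the
# matrix of T*, which acts as the left shift.
n = 5
rng = np.random.default_rng(0)
A = np.diag(np.ones(n - 1), k=-1)     # A @ z = (0, z_1, ..., z_{n-1})

def inner(u, v):
    # LADR's inner product: <u, v> = sum_j u_j * conj(v_j)
    return np.vdot(v, u)

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)

assert np.isclose(inner(A @ z, w), inner(z, A.conj().T @ w))  # <Tz, w> = <z, T*w>
assert np.allclose(A.conj().T @ w, np.append(w[1:], 0))       # T*w = (w_2, ..., w_n, 0)
```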


2. Solution: (This solution requires $\dim V<\infty$, which holds by the chapter's standing assumption 7.1; see the comments below.) Since $(T^*)^*=T$, it suffices to show that if $\lambda$ is an eigenvalue of $T$, then $\bar\lambda$ is an eigenvalue of $T^*$. Let $v$ be an eigenvector of $T$ corresponding to $\lambda$, so that $Tv=\lambda v$. We have \begin{equation}\label{7AP1}0=\langle (T-\lambda I)v,w\rangle=\langle v,(T^*-\bar\lambda I)w\rangle\end{equation}for all $w\in V$. If $\bar\lambda$ were not an eigenvalue of $T^*$, then $T^*-\bar\lambda I$ would be surjective by 5.6. It follows that there exists some $\xi\in V$ such that $(T^*-\bar\lambda I)\xi=v$. By $(\ref{7AP1})$, we have \[0=\langle v,(T^*-\bar\lambda I)\xi\rangle=\langle v,v\rangle.\]But $v\ne 0$ since $v$ is an eigenvector, a contradiction. Hence we get the conclusion.
Solution: See Linear Algebra Done Right Solution Manual Chapter 6 Problem 28.
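The matrix analogue is easy to test numerically; here is a short numpy sketch (the size and seed are arbitrary choices of ours) checking that the eigenvalues of the conjugate transpose are the conjugates of the eigenvalues:

```python
import numpy as np

# Eigenvalues of the conjugate transpose are the conjugates of those of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
ev = np.sort_complex(np.linalg.eigvals(A).conj())
ev_adj = np.sort_complex(np.linalg.eigvals(A.conj().T))
assert np.allclose(ev, ev_adj)
```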


3. Solution: Let $u\in U$ and $w\in U^\perp$; then we have \begin{equation}\label{7AP1.1} \langle Tu,w\rangle=\langle u,T^*w\rangle. \end{equation} If $U$ is invariant under $T$, then $Tu\in U$ for all $u\in U$. Hence for a fixed $w\in U^\perp$, we have \[ 0=\langle Tu,w\rangle=\langle u,T^*w\rangle \]for all $u\in U$. This implies $T^*w\in U^{\perp}$. As $w$ is chosen arbitrarily, we conclude $U^{\perp}$ is invariant under $T^*$. The other direction follows by applying this argument to $T^*$ and $U^\perp$, since $(T^*)^*=T$ and $(U^\perp)^\perp=U$.
Solution: See Linear Algebra Done Right Solution Manual Chapter 6 Problem 29.
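In matrix terms the statement says that a zero lower-left block of $\mathcal{M}(T)$ forces a zero upper-right block of $\mathcal{M}(T^*)$. A small numpy sketch of this, with our own choice $U = \operatorname{span}(e_1, e_2)$ inside $\mathbb{R}^4$:

```python
import numpy as np

# If U = span(e1, e2) is invariant under T, the lower-left 2x2 block of M(T)
# is zero; the transpose (the matrix of T* in this orthonormal basis) then has
# zero upper-right block, i.e. T* maps U-perp = span(e3, e4) into itself.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
M[2:, :2] = 0                        # make U invariant under T
assert np.allclose(M.T[:2, 2:], 0)   # so U-perp is invariant under T*
```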


4. Solution: See Linear Algebra Done Right Solution Manual Chapter 6 Problem 30.


5. Solution: We have
$$ \begin{aligned} \operatorname{dim} \operatorname{null} T^* &= \operatorname{dim} (\operatorname{range} T)^\perp\\ &= \operatorname{dim} W - \operatorname{dim} \operatorname{range} T\\ &= \operatorname{dim} W + \operatorname{dim} \operatorname{null} T - \operatorname{dim} V \end{aligned} $$ where the first line follows from 7.7 (a), the second from 6.50 and the third from 3.22. We also have
$$ \begin{aligned} \operatorname{dim} \operatorname{range} T^* &= \operatorname{dim} (\operatorname{null} T)^\perp\\ &= \operatorname{dim} V - \operatorname{dim} \operatorname{null} T\\ &= \operatorname{dim} \operatorname{range} T \end{aligned} $$ where the first line follows from 7.7 (b), the second from 6.50 and the third from 3.22.
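These dimension formulas can be spot-checked with a random matrix; in the sketch below (our own example, with $\dim V = 5$ and $\dim W = 3$) the rank plays the role of $\dim \operatorname{range} T$:

```python
import numpy as np

# Check the two formulas on the matrix of a random T: V -> W.
rng = np.random.default_rng(3)
dimV, dimW = 5, 3
A = rng.standard_normal((dimW, dimV))
rank = np.linalg.matrix_rank(A)               # dim range T
assert np.linalg.matrix_rank(A.T) == rank     # dim range T* = dim range T
null_T = dimV - rank                          # dim null T, by rank-nullity
assert dimW - rank == dimW + null_T - dimV    # dim null T* formula
```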


6. Solution: (a) If $T$ were self-adjoint, we would have
$$ \langle Tp, q \rangle = \langle p, T^*q \rangle = \langle p, Tq \rangle. $$ However, let $p(x) = a_0 + a_1 x + a_2 x^2$ and $q(x) = b_0 + b_1 x + b_2 x^2$. We have
$$ \begin{aligned} \langle Tp, q \rangle &= \langle a_1 x, q \rangle\\ &= a_1 \int_0^1 b_0x + b_1x^2 + b_2x^3\:dx\\ &= a_1 \left(\frac{b_0}{2}x^2 + \frac{b_1}{3}x^3 + \frac{b_2}{4}x^4\right)\biggr\rvert_0^1\\ &= a_1 \left(\frac{b_0}{2} + \frac{b_1}{3} + \frac{b_2}{4}\right). \end{aligned} $$ Similarly
$$ \langle p, Tq \rangle = \langle p, b_1x \rangle = b_1 \left(\frac{a_0}{2} + \frac{a_1}{3} + \frac{a_2}{4}\right). $$ Thus, taking $a_1 = 0$ and $b_1, a_0, a_2 > 0$ clearly shows $\langle Tp, q \rangle \neq \langle p, Tq \rangle$.
(b) 7.10 requires the chosen basis to be orthonormal, and the basis $1, x, x^2$ is not orthonormal with respect to this inner product, so the conjugate-transpose criterion does not apply.
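For a concrete failure of self-adjointness, exact arithmetic suffices; here is a small Python sketch (the coefficient-tuple encoding and the sample pair $p = 1 + x^2$, $q = x$ are our own choices):

```python
from fractions import Fraction

# <p, q> = integral_0^1 p(x) q(x) dx on P_2(R); T(a0 + a1 x + a2 x^2) = a1 x.
def inner(p, q):
    # p, q are coefficient tuples (c0, c1, c2)
    return sum(Fraction(pi * qj, i + j + 1)
               for i, pi in enumerate(p) for j, qj in enumerate(q))

def T(p):
    return (0, p[1], 0)        # keep only the coefficient of x

p = (1, 0, 1)                  # p(x) = 1 + x^2  (a1 = 0)
q = (0, 1, 0)                  # q(x) = x        (b1 = 1)
print(inner(T(p), q), inner(p, T(q)))   # 0 vs 3/4, so T is not self-adjoint
```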


7. Solution: We have $(ST)^*=T^*S^*$ by 7.6 (e). Since $S,T\in\mathcal L(V)$ are self-adjoint, we have $S^*=S$ and $T^*=T$, so $(ST)^*=TS$. Therefore $ST$ is self-adjoint if and only if $ST=(ST)^*=TS$, that is, if and only if $ST=TS$.
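A two-by-two example (our own pick of symmetric matrices) illustrates both directions at once: here $S$ and $T$ do not commute, and $ST$ is indeed not symmetric:

```python
import numpy as np

# For real symmetric S, T: the product ST is symmetric iff S and T commute.
S = np.array([[2.0, 1.0], [1.0, 3.0]])
T = np.array([[0.0, 1.0], [1.0, 0.0]])
P = S @ T
print(np.allclose(P, P.T), np.allclose(S @ T, T @ S))   # False False
```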


8. Solution: See Linear Algebra Done Right Solution Manual Chapter 7 Problem 3 (a).


9. Solution: See Linear Algebra Done Right Solution Manual Chapter 7 Problem 3 (b).


10. Solution: See Linear Algebra Done Right Solution Manual Chapter 7 Problem 5.


11. Solution: See Linear Algebra Done Right Solution Manual Chapter 7 Problem 4.


12. Solution: Let $u$ be a unit eigenvector (i.e. $\|u\|=1$) of $T$ corresponding to the eigenvalue 3, so that $Tu=3u$. Let $w$ be a unit eigenvector (i.e. $\|w\|=1$) of $T$ corresponding to the eigenvalue 4, so that $Tw=4w$.
By 7.22, we have $\langle u,w\rangle =0$. Let $v=au+bw$, then it follows from 6.25 that $$\|v\|^2=\|au+bw\|^2=|a|^2+|b|^2$$ and $$\|Tv\|^2=\|aTu+bTw\|^2=\|3au+4bw\|^2=9|a|^2+16|b|^2.$$Hence we need to choose $a$ and $b$ such that\[|a|^2+|b|^2=2,\quad 9|a|^2+16|b|^2=25.\]A simple solution is $a=1$ and $b=1$. Hence $v=u+w$ satisfies the requirement.
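A concrete instance (our own choice, with $T = \operatorname{diag}(3,4)$ playing the role of the self-adjoint operator and $u = e_1$, $w = e_2$):

```python
import numpy as np

# v = u + w, the sum of the two unit eigenvectors, has norm sqrt(2) and
# ||Tv|| = ||(3, 4)|| = 5.
T = np.diag([3.0, 4.0])
v = np.array([1.0, 1.0])       # a = b = 1
print(np.linalg.norm(v))       # 1.4142... = sqrt(2)
print(np.linalg.norm(T @ v))   # 5.0
```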


13. Solution: Define $T \in \mathcal{L}(\mathbb{C}^4)$ by
$$ T(z_1, z_2, z_3, z_4) = (z_4, z_1, z_2, z_3). $$ We have
$$ \begin{aligned} \langle (z_1, z_2, z_3, z_4), T^*(x_1, x_2, x_3, x_4) \rangle &= \langle T(z_1, z_2, z_3, z_4), (x_1, x_2, x_3, x_4) \rangle\\ &= \langle (z_4, z_1, z_2, z_3), (x_1, x_2, x_3, x_4) \rangle\\ &= z_4\overline{x_1} + z_1\overline{x_2} + z_2\overline{x_3} + z_3\overline{x_4}\\ &= \langle (z_1, z_2, z_3, z_4), (x_2, x_3, x_4, x_1) \rangle. \end{aligned} $$ Thus $T^*(z_1, z_2, z_3, z_4) = (z_2, z_3, z_4, z_1)$. Note that $T$ is normal, since $T^*T$ and $TT^*$ both equal the identity, but $T \neq T^*$, so $T$ is not self-adjoint.
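Numerically, $T$ is a permutation matrix, so normality is immediate; here is a quick numpy check (a sketch of the computation above):

```python
import numpy as np

# The cyclic shift on C^4 is unitary (T*T = TT* = I), hence normal,
# but it is not self-adjoint.
T = np.roll(np.eye(4), 1, axis=0)   # T @ (z1, z2, z3, z4) = (z4, z1, z2, z3)
I = np.eye(4)
assert np.allclose(T.T @ T, I) and np.allclose(T @ T.T, I)
assert not np.allclose(T, T.T)
```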


14. Solution: Since $v$ and $w$ are eigenvectors corresponding to distinct eigenvalues, by 7.22 they are orthogonal. Thus
$$ \begin{aligned} ||T(v + w)||^2 &= ||Tv + Tw||^2\\ &= ||3v + 4w||^2\\ &= ||3v||^2 + ||4w||^2\\ &= 9||v||^2 + 16||w||^2\\ &= 100, \end{aligned} $$ where the third line follows from the Pythagorean Theorem and the last from the hypothesis that $||v|| = ||w|| = 2$.
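The same diagonal model as in Exercise 12 (our own choice of a concrete normal operator and eigenvectors) confirms the value:

```python
import numpy as np

# T = diag(3, 4); v = (2, 0) and w = (0, 2) are orthogonal eigenvectors of
# norm 2 for the eigenvalues 3 and 4, and ||T(v + w)||^2 = 100.
T = np.diag([3.0, 4.0])
v, w = np.array([2.0, 0.0]), np.array([0.0, 2.0])
print(np.linalg.norm(T @ (v + w)) ** 2)   # 100.0
```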


15. Solution: Let $w_1, w_2 \in V$. We have
$$ \begin{aligned} \langle w_1, T^*w_2 \rangle &= \langle Tw_1, w_2 \rangle\\ &= \langle \langle w_1, u \rangle x, w_2 \rangle\\ &= \langle w_1, u \rangle \langle x, w_2 \rangle\\ &= \langle w_1, \overline{\langle x, w_2 \rangle} u \rangle\\ &= \langle w_1, \langle w_2, x \rangle u \rangle\\ \end{aligned} $$ Hence $T^*v = \langle v, x \rangle u$.

(a) Suppose $T$ is self-adjoint. Then
$$ \langle v, u \rangle x - \langle v, x \rangle u = Tv - T^*v = 0, $$ for all $v \in V$. We can assume $u$ and $x$ are non-zero (otherwise there is nothing to prove). Taking $v = u$ gives $\langle u, u \rangle x = \langle u, x \rangle u$ with $\langle u, u \rangle \neq 0$, so $x = \frac{\langle u, x \rangle}{\langle u, u \rangle} u$, showing that $x$ and $u$ are linearly dependent.

Conversely, suppose $x$ and $u$ are linearly dependent. We can assume $x$ and $u$ are non-zero, since otherwise $T = 0$, which is self-adjoint. Then $u = cx$ for some non-zero $c \in \mathbb{R}$ (recall that in part (a) we have $\mathbb{F} = \mathbb{R}$). Thus
$$ \begin{aligned} Tv &= \langle v, u \rangle x\\ &= \langle v, cx \rangle \frac{1}{c}u\\ &= \langle v, x \rangle u\\ &= T^* v. \end{aligned} $$ Therefore $T = T^*$.

(b) Again, we can assume $u$ and $x$ are both non-zero in both directions of the proof.
We have
$$ \begin{aligned} \langle \langle v, u \rangle x, x \rangle u &= T^*(\langle v, u \rangle x)\\ &= T^*Tv\\ &= TT^*v\\ &= T(\langle v, x \rangle u)\\ &= \langle \langle v, x \rangle u, u \rangle x. \end{aligned} $$ Taking $v = u$ gives $\|u\|^2 \|x\|^2\, u = \langle u, x \rangle \|u\|^2\, x$, i.e. $\|x\|^2 u = \langle u, x \rangle x$, showing that $u$ and $x$ are linearly dependent.

Conversely, suppose $x$ and $u$ are linearly dependent, say $u = cx$ for some non-zero $c \in \mathbb{F}$. Then
$$ \begin{aligned} TT^*v &= T(\langle v, x \rangle u)\\ &= \langle \langle v, x \rangle u, u \rangle x\\ &= \langle \langle v, x \rangle x, cx \rangle cx\\ &= \langle \langle v, cx \rangle x, x \rangle cx\\ &= \langle \langle v, u \rangle x, x \rangle u\\ &= T^*(\langle v, u \rangle x)\\ &= T^*Tv. \end{aligned} $$ Hence $TT^* = T^*T$.
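In matrix form $Tv = \langle v, u \rangle x$ is the rank-one matrix $xu^*$ (since $\langle v, u \rangle = u^*v$), which makes part (b) easy to probe numerically; a sketch with our own random data:

```python
import numpy as np

# T = x u^H, so T* = u x^H; T is normal exactly when u, x are dependent.
rng = np.random.default_rng(4)
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
u_indep = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def is_normal(T):
    return np.allclose(T @ T.conj().T, T.conj().T @ T)

T_dep = np.outer(x, (2j * x).conj())      # u = 2i x: linearly dependent
T_indep = np.outer(x, u_indep.conj())     # generic u: independent
print(is_normal(T_dep), is_normal(T_indep))   # True False
```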


16. Solution: See Linear Algebra Done Right Solution Manual Chapter 7 Problem 6.


17. Solution: See Linear Algebra Done Right Solution Manual Chapter 7 Problem 7.


18. Solution: We give a counterexample. Let $V = \mathbb{R}^2$ and define $T$ by
$$ Te_1 = e_1 + e_2, \quad Te_2 = -e_1 - e_2, $$ where $e_1, e_2$ is the standard (orthonormal) basis of $\mathbb{R}^2$. Its matrix with respect to this basis is
$$ \begin{pmatrix} 1 & -1\\ 1 & -1 \end{pmatrix}. $$ Taking the transpose, which gives the matrix of $T^*$ since the basis is orthonormal, we see that $T^*$ is defined by
$$ T^*e_1 = e_1 - e_2, \quad T^*e_2 = e_1 - e_2. $$ Note that $||Te_1|| = ||T^*e_1||$ and $||Te_2|| = ||T^*e_2||$. However $\mathcal{M}(T^*)\mathcal{M}(T) \neq \mathcal{M}(T)\mathcal{M}(T^*)$, thus $T$ is not normal.
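The two products are quick to display with numpy (a sketch of the counterexample above):

```python
import numpy as np

# Equal norms on the basis vectors, yet M(T) and M(T*) do not commute.
M = np.array([[1.0, -1.0], [1.0, -1.0]])   # columns are Te1, Te2
Ms = M.T                                   # matrix of T*
print(np.linalg.norm(M, axis=0), np.linalg.norm(Ms, axis=0))  # all sqrt(2)
print(M @ Ms)   # [[2., 2.], [2., 2.]]
print(Ms @ M)   # [[ 2., -2.], [-2., 2.]]
```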


19. Solution: We saw in Exercise 16 that $\operatorname{null} T = \operatorname{null} T^*$ (for $T$ normal). Thus if $(z_1, z_2, z_3) \in \operatorname{null} T$, then $T^*(z_1, z_2, z_3) = 0$ and we have $$ \begin{aligned} 0 &= \langle T^*(z_1, z_2, z_3), (1, 1, 1) \rangle\\ &= \langle (z_1, z_2, z_3), T(1, 1, 1) \rangle\\ &= \langle (z_1, z_2, z_3), (2, 2, 2) \rangle\\ &= 2z_1 + 2z_2 + 2z_3. \end{aligned} $$ Dividing by $2$ yields the desired result.
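For intuition, here is one concrete normal operator satisfying the hypotheses (our own example; the exercise of course concerns an arbitrary such $T$):

```python
import numpy as np

# T = (2/3) * (all-ones matrix) is self-adjoint (hence normal) and sends
# (1,1,1) to (2,2,2); its null space is the plane z1 + z2 + z3 = 0.
T = (2.0 / 3.0) * np.ones((3, 3))
assert np.allclose(T @ np.ones(3), 2.0 * np.ones(3))
z = np.array([1.0, -2.0, 1.0])    # z1 + z2 + z3 = 0
assert np.allclose(T @ z, 0.0)
```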


20. Solution: Let $v \in V$ and $w \in W$. Then
$$ \begin{aligned} ((\Phi_V \circ T^*)(w))(v) &= (\Phi_V(T^*w))(v)\\ &= \langle v, T^*w \rangle\\ &= \langle Tv, w \rangle. \end{aligned} $$ On the other hand, we have $$ \begin{aligned} ((T' \circ \Phi_W)(w))(v) &= (T'(\Phi_W(w)))(v)\\ &= (\Phi_W(w) \circ T)(v)\\ &= (\Phi_W(w))(Tv)\\ &= \langle Tv, w \rangle. \end{aligned} $$ Therefore $\Phi_V \circ T^* = T' \circ \Phi_W$.


21. Solution:

(a) Let $e_j = \dfrac{\cos jx}{\sqrt{\pi}}$ and $f_j = \dfrac{\sin jx}{\sqrt{\pi}}$. By Exercise 4 in section 6B, $\frac{1}{\sqrt{2\pi}}, e_1, \dots, e_n, f_1, \dots, f_n$ is an orthonormal basis of $V$. Note that $De_j = -jf_j$ and $Df_j = je_j$. Then, for any $v, w \in V$, we have
$$ \begin{aligned} \langle v, D^*w \rangle &= \langle Dv, w \rangle\\ &= \left\langle D\left(\left\langle v, \frac{1}{\sqrt{2\pi}} \right\rangle \frac{1}{\sqrt{2\pi}} + \sum_{j=1}^n (\langle v, e_j \rangle e_j + \langle v, f_j \rangle f_j)\right), w \right\rangle\\ &= \left\langle \sum_{j=1}^n (-j\langle v, e_j \rangle f_j + j\langle v, f_j \rangle e_j), \left\langle w, \frac{1}{\sqrt{2\pi}} \right\rangle \frac{1}{\sqrt{2\pi}} + \sum_{j=1}^n (\langle w, e_j \rangle e_j + \langle w, f_j \rangle f_j) \right\rangle\\ &= \sum_{j=1}^n (-j\langle v, e_j \rangle \langle w, f_j \rangle + j\langle v, f_j \rangle \langle w, e_j \rangle)\\ &= \left\langle \left\langle v, \frac{1}{\sqrt{2\pi}} \right\rangle \frac{1}{\sqrt{2\pi}} + \sum_{j=1}^n (\langle v, e_j \rangle e_j + \langle v, f_j \rangle f_j), \sum_{j=1}^n (-j\langle w, f_j \rangle e_j + j\langle w, e_j \rangle f_j)\right\rangle\\ &= \langle v, -Dw \rangle. \end{aligned} $$ Hence $D^* = -D$. In particular $D$ is normal, since $DD^* = -D^2 = D^*D$, but $D$ is not self-adjoint, since $D^* = -D \neq D$.
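The identity $\langle Dv, w \rangle = -\langle v, Dw \rangle$ (integration by parts, with the boundary terms vanishing by periodicity) can also be spot-checked symbolically; the sample elements of $V$ below are our own choices:

```python
import sympy as sp

# <f, g> = integral of f*g over [-pi, pi]; check <v', w> = -<v, w'>.
x = sp.symbols('x')
v = 2 + sp.cos(x) - 3 * sp.sin(2 * x)
w = sp.sin(x) + 5 * sp.cos(2 * x)

def inner(f, g):
    return sp.integrate(f * g, (x, -sp.pi, sp.pi))

print(sp.simplify(inner(sp.diff(v, x), w) + inner(v, sp.diff(w, x))))   # 0
```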

(b) Note that $T = D^2$. Thus $T^* = (DD)^* = D^*D^* = (-D)(-D) = D^2 = T$.


This Post Has 12 Comments

  1. About question 2: of course V must be finite-dimensional, because to make sure that the very definition of the adjoint made sense, the author had to rely on the Riesz Representation Theorem, which only works in finite dimensions.

  2. question 12

    4th line and 6th line, you are missing two squares on the left side of the equation

    1. Fixed, thanks!

      1. By the way, can you provide an alternate proof for 7.7 (b): range T* = (null T)^
        (let ^ denote the orthogonal complement; I am unable to type that symbol)?
        I really don’t like the proof the book provides, but I am unable to come up with my own.

        1. I would say that is a good proof, and the simplest one, just checking the definition. Which part don’t you like?

          It is not easy to use the relation between T and T^* directly in (b) and (d), but it is easier in (a) and (c). If you don’t like “replacing T by T^*”, you can prove (c) and then take the orthogonal complement.

          1. Yes, but taking the orthogonal complement requires range T to be finite-dimensional (6.47, if I understand it right), while (a) and (c) have no such restriction.
            Please don’t tell me that (b) and (d) only hold when range T or null T is finite-dimensional!

          2. See Matt Lundy’s comment below. There is a standing assumption in this chapter that everything is finite dimensional (7.1).

          3. I know, but doesn’t the fact that (a) and (c) hold in all circumstances, while we have to rely on such assumptions for (b) and (d), bother you?

            In other words, I think my question can be rephrased as: prove or give a counterexample: do (b) and (d) still hold when V is infinite-dimensional?

          4. Does not always hold.

  3. Solution to 7A6:
    a) Self-adjoint means $ \langle Tv,w \rangle = \langle v,Tw \rangle$ for all $v,w \in V$. Thus $$\int_0^1 v_1 x (w_0+w_1 x + w_2 x^2)dx = \int_0^1 (v_0+v_1 x + v_2 x^2) w_1 x dx,$$ which is obviously not true in general. A trivial counterexample: $v = x$, $w = 1$ gives $1/2 \neq 0$.

    b) Theorem 7.10 requires an orthonormal basis, and $(1,x,x^2)$ isn’t orthonormal.

  4. Regarding your solution to number 2 and your comment on the questionable dimension of V: there is a standing assumption in this chapter that everything is finite-dimensional (7.1).

    1. Thank you! I haven’t read this new edition carefully.
