Chapter 3 Exercise F


1. Solution: For any $\vp\in\ca L(V,\mb F)$, we have $\dim \m{range} \vp\leqslant \dim \mb F=1$, so $\dim \m{range}\vp$ is either $0$ or $1$. If $\dim \m{range} \vp=0$, then $\vp$ is the zero map. If $\dim \m{range} \vp=1$, then $\m{range}\vp=\mb F$ since $\dim\mb F=1$, hence $\vp$ is surjective. These are the only two possibilities.


2. Solution: Let $\vp_1,\vp_2,\vp_3\in\ca L(\R^{[0,1]},\mb F)$ be defined by \[\vp_1(f)=f(0),\quad\vp_2(f)=f(0.5),\quad\vp_3(f)=f(1).\]One can check that each $\vp_i$ is linear, and they are distinct from each other: for $f(x)=x$ we get $\vp_1(f)=0$, $\vp_2(f)=0.5$ and $\vp_3(f)=1$.


3. Solution: Since $v\ne 0$, the list $v$ is linearly independent, so we can extend it to a basis $v,u_2,\cdots,u_n$ of $V$. By 3.96, the first vector of the corresponding dual basis is a functional $\vp\in V'$ with $\vp(v)=1$.


4. Solution: Let $u_1$, $\cdots$, $u_m$ be a basis of $U$. Since $U\ne V$, we can extend it to a basis $u_1$, $\cdots$, $u_m$, $u_{m+1}$, $\cdots$, $u_{m+n}$ of $V$, where $n\geqslant 1$. Define $\vp\in V'$ by \[\vp(u_i)=\left\{ \begin{array}{ll} 0, & \hbox{if $i\ne m+1$;} \\ 1, & \hbox{if $i=m+1$.} \end{array} \right. \]Then $\vp(u)=0$ for every $u\in U$, but $\vp\ne 0$ since $\vp(u_{m+1})=1$.


5. Solution: Define $P_i\in\ca L(V_i,V_1\times\cdots\times V_m)$ by \[P_i(x)=(0,\cdots,0,x,0,\cdots,0)\]with $x$ in the $i$-th component. Define $\vp\in \ca L((V_1\times\cdots\times V_m)',V'_1\times\cdots\times V'_m)$ by \[\vp(f)=(P'_1f,\cdots,P'_mf).\]Now let us check that $\vp$ is an isomorphism.

Injectivity: suppose $(P'_1f,\cdots,P'_mf)=0$. Here and below, $(0,\cdots,x_i,\cdots,0)$ means the tuple whose $i$-th component is $x_i$ and whose other components are zero. By the definitions of $P_i$ and of the dual map, for each $i$ and each $x_i\in V_i$ we have \[ 0=P'_if(x_i)=f(0,\cdots,x_i,\cdots,0). \]This implies \[f(x_1,\cdots,x_m)=\sum_{i=1}^mf(0,\cdots,x_i,\cdots,0)=0\]for any $(x_1,\cdots,x_m)\in V_1\times\cdots\times V_m$, namely $f=0$. Thus $\vp$ is injective.

Surjectivity: for any $(f_1,\cdots,f_m)\in V'_1\times\cdots\times V'_m$, define $f\in (V_1\times\cdots\times V_m)'$ by \[f(x_1,\cdots,x_m)=\sum_{i=1}^mf_i(x_i).\]Then one can easily check that $\vp(f)=(f_1,\cdots,f_m)$.

By the arguments above, it follows that $(V_1\times\cdots\times V_m)'$ and $V'_1\times\cdots\times V'_m$ are isomorphic.


6. Solution: (a) Suppose $v_1,\cdots,v_m$ spans $V$ and $\Gamma(\vp)=0$, that is, \[\vp(v_1)=\cdots=\vp(v_m)=0.\]For any $v\in V$, we can write \[v=\sum_{i=1}^mk_iv_i,\quad k_i\in\mb F.\]Thus \[\vp(v)=\vp\left(\sum_{i=1}^mk_iv_i\right)=\sum_{i=1}^mk_i\vp(v_i)=0.\]This implies $\vp=0$. We conclude $\Gamma$ is injective.

Conversely, suppose $\Gamma$ is injective and $\m{span}(v_1,\cdots,v_m)\ne V$. By Problem 4, there exists $\vp\in V'$ such that $\vp$ vanishes on $\m{span}(v_1,\cdots,v_m)$ but $\vp\ne 0$. Then $\Gamma(\vp)=0$ while $\vp\ne 0$, so $\Gamma$ is not injective. This contradiction shows that $v_1,\cdots,v_m$ spans $V$.

(b) If $v_1,\cdots,v_m$ is linearly independent, then for any $(f_1,\cdots,f_m)\in\mb F^m$, there exists $\vp\in V'$ such that \[\vp(v_i)=f_i,\quad i=1,\cdots,m.\]This follows by extending $v_1,\cdots,v_m$ to a basis of $V$ and using 3.5. Then by the definition of $\Gamma$, we have\[\Gamma(\vp)=(f_1,\cdots,f_m).\]This implies $\Gamma$ is surjective.

Conversely, suppose $\Gamma$ is surjective but $v_1,\cdots,v_m$ is linearly dependent. Then there exist $k_1,\cdots,k_m\in\mb F$, not all zero, such that \[k_1v_1+\cdots+k_mv_m=0.\]Say $k_i\ne 0$; then $v_i$ can be written as a linear combination of $v_1,\cdots,v_{i-1},v_{i+1},\cdots,v_m$. We claim $(0,\cdots,0,1,0,\cdots,0)$, with the $1$ in the $i$-th component, is not in $\m{range}\Gamma$. Otherwise, there is $\vp\in V'$ such that $\Gamma(\vp)=(0,\cdots,0,1,0,\cdots,0)$, that is, \[\vp(v_j)=0 \text{ for } j\ne i,\quad \vp(v_i)=1.\]The first condition implies $\vp(v)=0$ whenever $v$ is a linear combination of $v_1,\cdots,v_{i-1},v_{i+1},\cdots,v_m$; in particular $\vp(v_i)=0$, contradicting $\vp(v_i)=1$. Hence $\Gamma$ is not surjective, a contradiction. Therefore $v_1,\cdots,v_m$ is linearly independent.
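To make parts (a) and (b) concrete when $V=\mb F^n$: a functional can be identified with a row vector $y$, so that $\Gamma(\vp)=(\vp(v_1),\cdots,\vp(v_m))$ becomes $y\mapsto yM$, where the columns of $M$ are $v_1,\cdots,v_m$. A minimal numerical sketch (the matrix below is made-up illustrative data):

```python
import numpy as np

# Identify phi in (F^n)' with a row vector y, so phi(v) = y @ v.
# Then Gamma(phi) = (phi(v_1), ..., phi(v_m)) = y @ M, where the
# columns of M are v_1, ..., v_m.
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])  # columns v_1, v_2, v_3 in R^3

n, m = M.shape
rank = np.linalg.matrix_rank(M)
# Gamma is injective iff y @ M = 0 forces y = 0, i.e. rank M = n,
# i.e. the columns span F^n; Gamma is surjective iff rank M = m,
# i.e. the columns are linearly independent.
print("Gamma injective: ", rank == n)   # False: the v's do not span R^3
print("Gamma surjective:", rank == m)   # False: v_3 = v_1 + v_2
```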


7. Solution: Here $\vp_j(p)=\frac{p^{(j)}(0)}{j!}$. Calculating directly: for $i\geqslant j$, the $j$-th derivative of $x^i$ is $\frac{i!}{(i-j)!}x^{i-j}$, which vanishes at $0$ unless $i=j$, in which case it equals $j!$; for $i<j$ the derivative is zero. Hence \[ \vp_j(x^i)=\delta_{i,j}, \]where $\delta_{i,j}=1$ if $i=j$ and $\delta_{i,j}=0$ if $i\ne j$. Note that the dual basis of a given basis is unique (if it exists). Hence the dual basis of the basis $1,x,\cdots,x^m$ of $\ca P_m(\R)$ is $\vp_0,\vp_1,\cdots,\vp_m$.
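A quick symbolic sanity check of $\vp_j(x^i)=\delta_{i,j}$, with the functional implemented directly from the formula $\vp_j(p)=p^{(j)}(0)/j!$ (a sketch; the choice $m=4$ is arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
m = 4  # check in P_4(R); any m works the same way

def phi(j, p):
    """The functional phi_j(p) = p^(j)(0) / j!."""
    return sp.diff(p, x, j).subs(x, 0) / sp.factorial(j)

# The matrix (phi_j(x^i))_{i,j} should be the identity matrix,
# i.e. phi_j(x^i) = delta_{i,j}.
table = sp.Matrix([[phi(j, x**i) for j in range(m + 1)]
                   for i in range(m + 1)])
assert table == sp.eye(m + 1)
print(table)  # identity: phi_0, ..., phi_m is the dual basis
```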


8. Solution: (a) This is easy; see Problem 10 of Exercise 2C.

(b) The dual basis of the basis $1,x-5,\cdots,(x-5)^m$ of $\ca P_m(\R)$ is $\vp_0,\vp_1,\cdots,\vp_m$, where $\vp_j(p)=\frac{p^{(j)}(5)}{j!}$. Here $p^{(j)}$ denotes the $j^{\m{th}}$ derivative of $p$, with the understanding that the $0^{\m{th}}$ derivative of $p$ is $p$. The proof is similar to Problem 7.


9. Solution: Since $v_1,\cdots,v_n$ is a basis of $V$ and $\vp_1,\cdots,\vp_n$ is the corresponding dual basis of $V'$, we have $\vp_j(v_i)=\delta_{i,j}$, hence \[(\psi(v_1)\vp_1+\cdots+\psi(v_n)\vp_n)(v_i)=\psi(v_i)\]for each $i=1,\cdots,n$. Therefore\[\psi=\psi(v_1)\vp_1+\cdots+\psi(v_n)\vp_n,\]as the two sides coincide on a basis of $V$.


10. Solution: (a) $(S+T)'=S'+T'$ for all $S,T\in\ca L(V,W)$. For each $\vp\in W'$, we have \begin{align*} (S+T)'(\vp)(x)&=\vp((S+T)x)=\vp(Sx+Tx)=\vp(Sx)+\vp(Tx)\\&=S'(\vp)(x)+T'(\vp)(x)=(S'+T')(\vp)(x) \end{align*} for all $x\in V$. The first and fourth equalities hold by the definition of dual map (3.99); the others hold by 3.6. Hence $(S+T)'(\vp)=(S'+T')(\vp)$ for each $\vp\in W'$, namely $(S+T)'=S'+T'$.

(b) $(\lambda T)'=\lambda T'$ for all $\lambda\in\mb F$ and all $T\in\ca L(V,W)$. For each $\vp\in W'$, we have \begin{align*} (\lambda T)'(\vp)(x)&=\vp((\lambda T)x)=\vp(\lambda Tx)=\lambda\vp( Tx)\\&=\lambda T'(\vp)(x)=(\lambda T')(\vp)(x) \end{align*}for all $x\in V$. Here we again use 3.6 and 3.99. As before, we conclude $(\lambda T)'=\lambda T'$.


11. Solution: Suppose that the rank of $A$ is 1, that is, the span of the columns of $A$ is one-dimensional. Then there is a nonzero column vector such that every column of $A$ is a scalar multiple of it, so $A$ can be written in the following form

$$ A = \begin{bmatrix} c_1\\ \vdots\\ c_m \end{bmatrix} \begin{bmatrix} d_1 & \dots & d_n \end{bmatrix} = \begin{bmatrix} c_1 d_1 & \dots & c_1 d_n\\ \vdots & \ddots & \vdots\\ c_m d_1 & \dots & c_m d_n \end{bmatrix} $$

where the first factor is a nonzero column vector spanning the column space of $A$, and the $d_j$'s are the scalars such that $d_j$ times this vector equals the $j$-th column of $A$.

Conversely, suppose there are $(c_1, \dots, c_m) \in \mb F^m$ and $(d_1, \dots, d_n) \in \mb F^n$ such that $A_{j,k} = c_j d_k$. Then $A$ takes the same form as above, so every column of $A$ is a scalar multiple of the single vector $(c_1, \dots, c_m)^T$. Hence the column space of $A$ has dimension at most $1$, and since $A \neq 0$ it has dimension exactly $1$; that is, the rank of $A$ is 1.
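For concreteness, here is a small numerical illustration of the factorization $A_{j,k}=c_jd_k$ (a sketch with made-up data):

```python
import numpy as np

c = np.array([[1.0], [2.0], [3.0]])   # nonzero column vector, shape (3, 1)
d = np.array([[4.0, 5.0, 6.0]])       # row of scalars, shape (1, 3)
A = c @ d                             # the outer product: A[j, k] = c[j] * d[k]

print(np.linalg.matrix_rank(A))       # 1: every column is a multiple of c
# Conversely, for a rank-1 matrix one can take c to be any nonzero
# column of A and read off each scalar d_k from the k-th column.
```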


12. Solution: Suppose that $I$ is the identity map on $V$ and $\varphi$ is any linear functional in $V'$. We have

$$ I'(\varphi) = \varphi \circ I = \varphi $$

Since $\varphi$ was arbitrary, $I'$ is the identity map on $V'$, as desired.


13. Solution: (a) We have

$$ \begin{aligned} T'(\varphi_1)(x, y, z) &= \varphi_1 \circ T(x, y, z)\\ &= \varphi_1(4x + 5y + 6z, 7x + 8y + 9z)\\ &= 4x + 5y + 6z \end{aligned} $$

$$ \begin{aligned} T'(\varphi_2)(x, y, z) &= \varphi_2 \circ T(x, y, z)\\ &= \varphi_2(4x + 5y + 6z, 7x + 8y + 9z)\\ &= 7x + 8y + 9z \end{aligned} $$

(b) Since $\psi_1(x, y, z) = x$, $\psi_2(x, y, z) = y$ and $\psi_3(x, y, z) = z$, substituting these in the results from item (a), we get

$$ T'(\varphi_1)(x, y, z) = 4 \psi_1(x, y, z) + 5 \psi_2(x, y, z) + 6 \psi_3(x, y, z) $$

Thus $T'(\varphi_1) = 4 \psi_1 + 5 \psi_2 + 6 \psi_3$. Similarly, we get $T'(\varphi_2) = 7 \psi_1 + 8 \psi_2 + 9 \psi_3$.
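In coordinates, this computation is the statement that the matrix of $T'$ with respect to the dual bases is the transpose of the matrix of $T$ (3.114). A quick numerical check, identifying functionals with row vectors:

```python
import numpy as np

# Matrix of T : R^3 -> R^2 with respect to the standard bases.
T = np.array([[4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# phi_1, phi_2 (dual basis of the standard basis of R^2) are the rows
# of the 2x2 identity; T'(phi) = phi o T corresponds to the row phi @ T.
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])
print(phi1 @ T)  # [4. 5. 6.] -> T'(phi_1) = 4 psi_1 + 5 psi_2 + 6 psi_3
print(phi2 @ T)  # [7. 8. 9.] -> T'(phi_2) = 7 psi_1 + 8 psi_2 + 9 psi_3
```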


14. Solution: (a) Let $p \in \mathcal{P}(\mathbb{R})$. Then

$$ \begin{aligned} T'(\varphi)(p) &= \varphi(Tp)\\ &= \varphi(x^2 p + p'')\\ &= (x^2 p + p'')' \rvert_{x=4}\\ &= (2xp + x^2p' + p''') \rvert_{x=4}\\ &= 8p(4) + 16p'(4) + p'''(4) \end{aligned} $$

(b) We have $$ \begin{aligned} (T'(\varphi))(x^3) &= \varphi(T(x^3))\\ &= \varphi(x^5 + 6x)\\ &= \int_0^1 (x^5 + 6x) \,dx\\ &= \left(\frac{x^6}{6} + 3x^2\right) \biggr\rvert_{0}^{1}\\ &= \frac{19}{6}. \end{aligned} $$
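Both computations can be confirmed symbolically; here is a sketch (the test polynomial in (a) is an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
T = lambda p: x**2 * p + sp.diff(p, x, 2)   # Tp = x^2 p + p''

# (a) phi(p) = p'(4); check T'(phi)(p) = 8 p(4) + 16 p'(4) + p'''(4).
p = x**4 + 2*x + 1                          # arbitrary test polynomial
lhs = sp.diff(T(p), x).subs(x, 4)           # phi(Tp) = (Tp)'(4)
rhs = (8 * p.subs(x, 4) + 16 * sp.diff(p, x).subs(x, 4)
       + sp.diff(p, x, 3).subs(x, 4))
assert sp.simplify(lhs - rhs) == 0

# (b) phi(p) = integral of p over [0, 1]; then T'(phi)(x^3) = 19/6.
print(sp.integrate(T(x**3), (x, 0, 1)))     # 19/6
```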


15. Solution: If $T=0$, then for any $f\in W'$ and any $v\in V$, we have $$(T'f)v=f(Tv)=f(0)=0.$$Therefore $T'f=0$ for all $f\in W'$ and hence $T'=0$.

Conversely, suppose $T'=0$; we show that $T=0$ by contradiction. Assume $T\ne 0$; then there exists $v\in V$ such that $Tv\ne 0$. Since $W$ is finite-dimensional, it follows from Problem 3 that there exists $\vp\in W'$ such that $\vp(Tv)\ne 0$. Note that $(T'\vp)v=\vp(Tv)\ne 0$, which contradicts the assumption that $T'=0$. Hence $T=0$.


16. Solution: Let $\Gamma:\ca L(V,W)\to \ca L(W',V')$ be defined by \[\Gamma(T)=T'.\]By 3.60, we have $\dim \ca L(V,W)=\dim \ca L(W',V')$. Hence, by 3.69, it suffices to show $\Gamma$ is injective. Suppose $\Gamma(S)=0$ for some $S\in \ca L(V,W)$, that is, $S'=0$. Then for any $\vp\in W'$ and $v\in V$, we have \[S'(\vp)(v)=\vp(Sv)=0.\]By Problem 3, this can only happen when $Sv=0$. Hence $Sv=0$ for all $v\in V$. Thus $S=0$. We conclude $\Gamma$ is injective.


17. Solution: Note that\[\vp(u)=0\text{ for all } u\in U\iff U\subset \m{null}\vp.\]


18. Solution: By Problem 17, $U^0=V'$ if and only if $U\subset \m{null}\vp$ for all $\vp\in V'$. Note that by Problem 3, $v\in\m{null}\vp$ for all $\vp\in V'$ if and only if $v=0$. This implies $U^0=V'$ if and only if $U=\{0\}$.

Alternative solution: by 3.106, we have \[\dim \mathrm{span}(U)+\dim U^0=\dim V.\]Since $\dim V'=\dim V$, it follows that\[\dim U^0=\dim V'\iff \dim \mathrm{span}(U)=0\iff U=\{0\}.\]


19. Solution: By 3.106, we have \[\dim U+\dim U^0=\dim V.\]Hence \[\dim U=\dim V\iff \dim U^0=0.\]That is, $U=V$ if and only if $U^0=\{0\}$.


20. Solution: If $\vp\in W^0$, then $\vp(w)=0$ for all $w\in W$. As $U\subset W$, we in particular have $\vp(u)=0$ for all $u\in U$, hence $\vp\in U^0$. Since $\vp$ was chosen arbitrarily, we deduce that $W^0\subset U^0$.


21. Solution: Since $W^0\subset U^0$, it follows from Problem 22 that $$(U+W)^0=U^0\cap W^0= W^0.$$Since $V$ is finite-dimensional, by 3.106 we have $$\dim (U+W)^0=\dim V-\dim(U+W),\quad \dim W^0=\dim V-\dim W.$$Therefore, we have $\dim (U+W)=\dim W$. As $W\subset U+W$ and $\dim (U+W)=\dim W$, we conclude that $U+W=W$, which implies that $U\subset W$.


22. Solution: Since $U\subset U+W$ and $W\subset U+W$, it follows from Problem 20 that $(U+W)^0\subset U^0$ and $(U+W)^0\subset W^0$. Therefore, $(U+W)^0\subset U^0 \cap W^0$.

On the other hand, for any given $f\in U^0\cap W^0$, we have $f(u)=0$ and $f(w)=0$ for any $u\in U$ and any $w\in W$. Therefore, $$f(u+w)=f(u)+f(w)=0$$for any $u\in U$ and any $w\in W$. Since every vector $x\in U+W$ can be written in the form $u+w$ with $u\in U$ and $w\in W$, this shows that $f(x)=0$ for all $x\in U+W$. Hence $f\in (U+W)^0$, and therefore $U^0 \cap W^0\subset (U+W)^0$.

Therefore, $(U+W)^0=U^0 \cap W^0$.


23. Solution: Since $U\cap W\subset U$ and $U\cap W\subset W$, it follows from Problem 20 that $U^0\subset (U\cap W)^0$ and $W^0\subset (U\cap W)^0$. Hence $U^0+W^0\subset (U\cap W)^0$.

On the other hand, since $V$ is finite-dimensional, it follows from 2.43 that\begin{align*}\dim(U^0+W^0)=& \dim U^0+\dim W^0-\dim (U^0\cap W^0)\\ \text{by Problem 22 and 3.106}\quad=&\dim V-\dim U+\dim V-\dim W-\dim((U+W)^0)\\ \text{by 3.106}\quad=&\dim V-\dim U+\dim V-\dim W-\dim V+\dim(U+W)\\ \text{by 2.43}\quad=&\dim V-\dim U-\dim W+(\dim U+\dim W-\dim (U\cap W))\\ \text{by 3.106}\quad=&\dim V-\dim(U\cap W)=\dim ((U\cap W)^0).\end{align*}Since $\dim(U^0+W^0)=\dim ((U\cap W)^0)$ and $U^0+W^0\subset (U\cap W)^0$, the two subspaces must be equal. Therefore, $U^0+W^0= (U\cap W)^0$.
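When $V=\mb F^n$ and functionals are identified with row vectors, $U^0$ is exactly the null space of a matrix whose rows span $U$, so the identity can be spot-checked numerically. A sketch with random data (the helper `null_basis` and the chosen dimensions are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def null_basis(M, tol=1e-10):
    """Rows form an orthonormal basis of {y : M @ y = 0}, via the SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int((s > tol).sum())
    return vt[rank:]

n = 6
U = rng.standard_normal((2, n))  # rows span a 2-dimensional subspace of R^6
W = rng.standard_normal((3, n))  # rows span a 3-dimensional subspace

U0, W0 = null_basis(U), null_basis(W)             # bases of U^0 and W^0
lhs = np.linalg.matrix_rank(np.vstack([U0, W0]))  # dim(U^0 + W^0)

# dim(U cap W) = dim U + dim W - dim(U + W) by 2.43, and
# dim((U cap W)^0) = n - dim(U cap W) by 3.106.
dim_cap = (np.linalg.matrix_rank(U) + np.linalg.matrix_rank(W)
           - np.linalg.matrix_rank(np.vstack([U, W])))
print(lhs == n - dim_cap)                         # True
```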


24. Solution: Let $u_1, \dots, u_m$ be a basis of $U$. It can be extended to a basis $u_1, \dots, u_m, v_1, \dots, v_n$ of $V$. Let $\psi_1, \dots, \psi_m, \varphi_1, \dots, \varphi_n$ be the corresponding dual basis.

Suppose $\varphi \in \operatorname{span}(\varphi_1, \dots, \varphi_n)$. There are $a_1, \dots, a_n \in \mathbb{F}$ such that

$$ \varphi = a_1 \varphi_1 + \dots + a_n \varphi_n $$

Let $u \in U$. We have

$$ \varphi(u) = (a_1 \varphi_1 + \dots + a_n \varphi_n)(u) = 0 $$

Therefore $\varphi \in U^0$. Hence $\operatorname{span}(\varphi_1, \dots, \varphi_n) \subset U^0$.

Now suppose $\varphi \in U^0$. Because $\varphi \in V'$, there are $c_1, \dots, c_m, a_1, \dots, a_n \in \mathbb{F}$ such that

$$ \varphi = c_1 \psi_1 + \dots + c_m \psi_m + a_1 \varphi_1 + \dots + a_n \varphi_n $$

For every $j \in \{ 1, \dots, m \}$, we have $\varphi(u_j) = c_j$. But $\varphi \in U^0$, so $c_j = 0$ and, hence, $\varphi \in \operatorname{span}(\varphi_1, \dots, \varphi_n)$. Thus $U^0 \subset \operatorname{span}(\varphi_1, \dots, \varphi_n)$.

Since $\varphi_1, \dots, \varphi_n$ is linearly independent, $\operatorname{dim}(U^0) = n$. We get

$$ \begin{aligned} \operatorname{dim} V &= m + n\\ &= \operatorname{dim} U + \operatorname{dim} U^0 .\end{aligned} $$


25. Solution: Let $B = \{v \in V: \varphi(v) = 0 \text{ for every } \varphi \in U^0\}$.

Suppose that $u \in U$. By definition, $\varphi(u) = 0$ for all $\varphi \in U^0$. Thus $u \in B$ and, therefore, $U \subset B$.

For the inclusion in the other direction, we will prove the contrapositive. Suppose that $v \notin U$. Since $0 \in U$, it follows that $v \neq 0$. Let $u_1, \dots, u_n$ be a basis of $U$. Then $v, u_1, \dots, u_n$ is a linearly independent list in $V$. Extend it to a basis $v, u_1, \dots, u_n, v_1, \dots, v_m$ of $V$ and let $\varphi, \psi_1, \dots, \psi_n, \varphi_1, \dots, \varphi_m$ be its dual basis. By the argument in Exercise 24, $\varphi, \varphi_1, \dots, \varphi_m$ is a basis of $U^0$. But $\varphi(v) = 1$, thus $v \notin B$.

Taking the contrapositive, $v \in B$ implies $v \in U$. Therefore $B \subset U$.


26. Solution: Let $U$ be the subspace of $V$ given by

$$ U = \bigcap\limits_{\varphi \in \Gamma} \operatorname{null} \varphi $$

We have that

$$ \begin{aligned} U &= \{v \in V: v \in U\}\\ &= \{v \in V: v \in \bigcap\limits_{\varphi \in \Gamma} \operatorname{null} \varphi\}\\ &= \{v \in V: v \in \operatorname{null} \varphi \text{ for every } \varphi \in \Gamma\}\\ &= \{v \in V: \varphi(v) = 0 \text{ for every } \varphi \in \Gamma\}\\ \end{aligned} $$

Clearly $\Gamma \subset U^0$, since every $\varphi \in \Gamma$ vanishes on $U$ by the definition of $U$. For the reverse inclusion, count dimensions: let $\varphi_1, \dots, \varphi_m$ be a basis of $\Gamma$, so that $U = \operatorname{null} \varphi_1 \cap \dots \cap \operatorname{null} \varphi_m$ (a vector annihilated by each $\varphi_i$ is annihilated by every linear combination of them), and hence $\operatorname{dim} U = \operatorname{dim} V - m$ by Exercise 30. By 3.106,

$$ \operatorname{dim} U^0 = \operatorname{dim} V - \operatorname{dim} U = m = \operatorname{dim} \Gamma, $$

so $\Gamma = U^0$. Therefore

$$ \Gamma = U^0 = \{v \in V: \varphi(v) = 0 \text{ for every } \varphi \in \Gamma\}^0.$$


27. Solution: We have

$$ \begin{aligned} \operatorname{range} T &= \{p \in \mathcal{P}_5(\mathbb{R}): \psi(p) = 0 \text{ for every } \psi \in (\operatorname{range} T)^0\}\\ &= \{p \in \mathcal{P}_5(\mathbb{R}): \psi(p) = 0 \text{ for every } \psi \in \operatorname{null} T'\}\\ &= \{p \in \mathcal{P}_5(\mathbb{R}): \psi(p) = 0 \text{ for every } \psi \in \operatorname{span}(\varphi)\}\\ &= \{p \in \mathcal{P}_5(\mathbb{R}): \varphi(p) = 0\}\\ &= \{p \in \mathcal{P}_5(\mathbb{R}): p(8) = 0\}\\ \end{aligned} $$

Here the first equality follows from Exercise 25 and the second from 3.107; the third uses the hypothesis $\operatorname{null} T' = \operatorname{span}(\varphi)$.


28. Solution: As in Exercise 27, we have

$$ \begin{aligned} \operatorname{range} T &= \{v \in V: \psi(v) = 0 \text{ for every } \psi \in (\operatorname{range} T)^0\}\\ &= \{v \in V: \psi(v) = 0 \text{ for every } \psi \in \operatorname{null} T'\}\\ &= \{v \in V: \psi(v) = 0 \text{ for every } \psi \in \operatorname{span}(\varphi)\}\\ &= \{v \in V: \varphi(v) = 0\}\\ &= \{v \in V: v \in \operatorname{null} \varphi\}\\ &= \operatorname{null} \varphi. \end{aligned} $$


29. Solution: This is almost the same as Exercise 28. Just use 3.109 instead.


30. Solution: We have

$$ \begin{aligned} \operatorname{dim}(\operatorname{null} \varphi_1 \cap \dots \cap \operatorname{null} \varphi_m) &= \operatorname{dim} V - \operatorname{dim}((\operatorname{null} \varphi_1 \cap \dots \cap \operatorname{null} \varphi_m)^0)\\ &= \operatorname{dim} V - \operatorname{dim}((\operatorname{null} \varphi_1)^0 + \dots + (\operatorname{null} \varphi_m)^0)\\ &= \operatorname{dim} V - \operatorname{dim}(\operatorname{span}(\varphi_1) + \dots + \operatorname{span}(\varphi_m))\\ &= \operatorname{dim} V - \operatorname{dim}(\operatorname{span}(\varphi_1, \dots, \varphi_m))\\ &= \operatorname{dim} V - m\\ \end{aligned} $$

Here the first equality follows from 3.106, the second from Exercise 23, the third from Theorem 1 in the Chapter 3 notes, the fourth from the definition of a sum of subspaces, and the fifth holds because $\varphi_1, \dots, \varphi_m$ is linearly independent.
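In coordinates, the result says that $m$ independent linear equations on $\mathbb{F}^n$ cut out a subspace of dimension $n-m$. A quick numerical illustration (random rows are linearly independent with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 7, 3
Phi = rng.standard_normal((m, n))       # rows = functionals phi_1, ..., phi_m
assert np.linalg.matrix_rank(Phi) == m  # linearly independent

# null phi_1 cap ... cap null phi_m is the null space of Phi, and its
# dimension is n - rank(Phi) = n - m by rank-nullity.
print(n - np.linalg.matrix_rank(Phi))   # 4 = dim V - m
```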


31. Solution:

Each $\varphi_j$ is nonzero (it belongs to a basis of $V'$), hence surjective by Exercise 1. Consider the following process:

  • Step 1.

    Choose $v_1 \in \bigcap_{2 \le k \le n} \operatorname{null} \varphi_k$ such that $\varphi_1(v_1) = 1$; such a $v_1$ exists by the argument given in Step $j$ below.

  • Step j.

    If $j = n + 1$, stop the process. Otherwise, by the contrapositive of the statement in Theorem 2 of the Chapter 3 notes, there is a vector $v_j \in V$ such that $v_j \in \bigcap_{1 \le k \le n,\, k \neq j} \operatorname{null} \varphi_k$ and $v_j \notin \operatorname{null} \varphi_j$. Because these null spaces are closed under scalar multiplication, we can rescale $v_j$ and assume without loss of generality that $\varphi_j(v_j) = 1$.

After step $n$ the process stops and we have a list $v_1, \dots, v_n$ such that $\varphi_j(v_k) = 1$ if $j = k$ and $\varphi_j(v_k) = 0$ if $j \neq k$. Since $n = \operatorname{dim} V' = \operatorname{dim} V$, it remains only to prove that $v_1, \dots, v_n$ is linearly independent; a linearly independent list of length $\operatorname{dim} V$ is a basis.

Suppose there are $a_1, \dots, a_n \in \mathbb{F}$ such that

$$ a_1 v_1 + \dots + a_n v_n = 0 $$

Applying $\varphi_j$ to both sides of the equation above gives $a_j = 0$, for each $j = 1, \dots, n$. Hence $v_1, \dots, v_n$ is linearly independent and therefore a basis of $V$; by construction, its dual basis is $\varphi_1, \dots, \varphi_n$.
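In coordinates the whole process amounts to inverting a matrix: if the rows of $M$ represent the basis $\varphi_1, \dots, \varphi_n$ of $(\mathbb{F}^n)'$, then the columns of $M^{-1}$ are the desired $v_1, \dots, v_n$, since $\varphi_j(v_k) = (M M^{-1})_{j,k} = \delta_{j,k}$. A sketch with made-up data:

```python
import numpy as np

# Rows of M represent a basis phi_1, phi_2, phi_3 of (R^3)'.
M = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(M) == 3  # the phi's are indeed a basis

V = np.linalg.inv(M)                  # column k is the vector v_k
print(np.round(M @ V, 10))            # identity: phi_j(v_k) = delta_{j,k}
```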


33. Solution:

Let $t \in \mathcal{L}(\mathbb{F}^{m,n}, \mathbb{F}^{n,m})$ denote the map that takes a matrix to its transpose. For this exercise, assume $1 \le j \le m$ and $1 \le k \le n$.

Suppose $A, C \in \mathbb{F}^{m, n}$. Then

$$ \begin{aligned} (t(A + C))_{k, j} &= ((A + C)^T)_{k, j}\\ &= (A + C)_{j, k}\\ &= A_{j, k} + C_{j, k}\\ &= (A^T)_{k, j} + (C^T)_{k, j}\\ &= (t(A))_{k, j} + (t(C))_{k, j}\\ \end{aligned} $$

Let $\lambda \in \mathbb{F}$. We have

$$ \begin{aligned} (t(\lambda A))_{k, j} &= ((\lambda A)^T)_{k, j}\\ &= (\lambda A)_{j, k}\\ &= \lambda (A_{j, k})\\ &= \lambda (A^T)_{k, j}\\ &= \lambda (t(A))_{k, j}\\ \end{aligned} $$

Therefore $t$ is indeed a linear map.

Since $\operatorname{dim}(\mathbb{F}^{m, n}) = \operatorname{dim}(\mathbb{F}^{n, m})$, to prove that $t$ is invertible we only need to show that it is injective.

Suppose $t(A) = 0$ for some $A \in \mathbb{F}^{m,n}$ (here $0$ denotes the matrix in $\mathbb{F}^{n, m}$ with $0$ in all entries). We have that

$$ 0 = (t(A))_{k, j} = (A^T)_{k, j} = A_{j,k} $$

for all $j, k$. Hence every entry of $A$ is zero, so $A = 0$ and, therefore, $\operatorname{null} t = \{ 0 \}$, which implies that $t$ is injective.


34. Solution: (a) Let $k_1,k_2\in\mb F$ and $v_1,v_2\in V$. For any $\vp\in V'$, we have\begin{align*}(\Lambda(k_1v_1+k_2v_2))(\vp)=&\, \vp(k_1v_1+k_2v_2)\\=&\, k_1\vp (v_1)+k_2\vp(v_2)\\=&\, k_1(\Lambda v_1)(\vp)+k_2(\Lambda v_2)(\vp)\\ =&\, (k_1\Lambda v_1+k_2\Lambda v_2)(\vp).\end{align*}Since this is true for any $\vp$, it follows that $$\Lambda(k_1v_1+k_2v_2)=k_1\Lambda v_1+k_2\Lambda v_2.$$Hence $\Lambda$ is a linear map from $V$ to $V^{\prime\prime}$.

(b) For any given $v\in V$, $(T^{\prime\prime}\circ \Lambda) v=T^{\prime\prime}(\Lambda v)$ and $(\Lambda \circ T)v=\Lambda(Tv)$ are elements of $V^{\prime\prime}$. To show they are equal, it suffices to show that for any $f\in V'$ we have$$(T^{\prime\prime}(\Lambda v))f=(\Lambda(Tv))f.$$To see this, we have\begin{align*}&\,(T^{\prime\prime}(\Lambda v))f\\ \text{by the definition of dual map, 3.99}\quad =&\,(\Lambda v)(T'f)\\ \text{by the definition of }\Lambda \quad=&\, (T'f)v\\ \text{by the definition of dual map, 3.99}\quad =&\, f(Tv).\end{align*}On the other hand, by the definition of $\Lambda$, we also have$$(\Lambda(Tv))f=f(Tv).$$Hence $T^{\prime\prime}(\Lambda v)=\Lambda(Tv)$, and therefore$$(T^{\prime\prime}\circ \Lambda) v=(\Lambda \circ T)v.$$As the vector $v$ was chosen arbitrarily, this proves that $T^{\prime\prime}\circ \Lambda=\Lambda \circ T$.

(c) Since $V$ is finite-dimensional, by 3.95 we have $\dim V=\dim V' =\dim V^{\prime\prime}$. Hence it suffices to show that $\Lambda$ is injective. Suppose $\Lambda v=0$; then for any $f\in V'$ we have $$(\Lambda v)f=f(v)=0.$$Let $U=\{v\}$ as in Problem 18. By our assumption we have $U^0=V'$, hence it follows from Problem 18 that $U=\{0\}$. Therefore $v=0$, which implies that $\Lambda$ is injective.


35. Solution: Define $\tau \in \mathcal{L}((\mathcal{P}(\mathbb{R}))', \mathbb{R}^\infty)$ by

$$ \tau(\sigma) = (\sigma(1), \sigma(x), \sigma(x^2), \dots) $$

One can check that $\tau$ is indeed linear. To prove $\tau$ is injective, suppose there are $\sigma, \eta \in (\mathcal{P}(\mathbb{R}))'$ such that $\tau(\sigma) = \tau(\eta)$. By definition of $\tau$ we have $\sigma(x^j) = \eta(x^j)$ for every $j \ge 0$. Let $p \in \mathcal{P}(\mathbb{R})$. Then $p$ takes the form

$$ p(x) = a_0 + a_1 x + \dots + a_m x^m $$

for some $a_0, \dots, a_m \in \mathbb{R}$, where $\deg p = m$. Then

$$ \begin{aligned} \sigma(p) &= \sigma(a_0 + a_1 x + \dots + a_m x^m)\\ &= \sigma(a_0) + \sigma(a_1 x) + \dots + \sigma(a_m x^m)\\ &= a_0 \sigma(1) + a_1 \sigma(x) + \dots + a_m \sigma(x^m)\\ &= a_0 \eta(1) + a_1 \eta(x) + \dots + a_m \eta(x^m)\\ &= \eta(a_0) + \eta(a_1 x) + \dots + \eta(a_m x^m)\\ &= \eta(a_0 + a_1 x + \dots + a_m x^m)\\ &= \eta(p)\\ \end{aligned} $$

Because $p$ was arbitrary, it follows that $\sigma = \eta$ and thus $\tau$ is injective.

To prove surjectivity, let $a = (a_0, a_1, a_2, \dots)\in \mathbb{R}^\infty$. Define $\sigma \in (\mathcal{P}(\mathbb{R}))'$ by

$$ \sigma(x^n) = a_n $$

for every $n \geq 0$; that is, we define $\sigma$ on the basis $1, x, x^2, \dots$ of $\mathcal{P}(\mathbb{R})$ and extend by linearity, which yields a well-defined linear functional. By definition of $\tau$ we have $\tau(\sigma) = a$ and, because $a$ was arbitrary, it follows that $\tau$ is surjective.

Hence $\tau$ is an isomorphism between $(\mathcal{P}(\mathbb{R}))’$ and $\mathbb{R}^\infty$.


36. Solution: (a) We have

$$ \begin{aligned} \operatorname{null} i' &= \{\varphi \in V': i'(\varphi) = 0\}\\ &= \{\varphi \in V': i'(\varphi)(u) = 0 \text{ for every } u \in U\}\\ &= \{\varphi \in V': (\varphi \circ i)(u) = 0 \text{ for every } u \in U\}\\ &= \{\varphi \in V': \varphi(u) = 0 \text{ for every } u \in U\}\\ &= \{\varphi \in V': \varphi \in U^0\}\\ &= U^0\\ \end{aligned} $$

(b) Suppose $V$ is finite-dimensional; we show $\operatorname{range} i' = U'$. Since $i' \in \mathcal{L}(V', U')$, we have $\operatorname{range} i' \subset U'$. By the fundamental theorem of linear maps (3.22) and part (a),

$$ \begin{aligned} \operatorname{dim} \operatorname{range} i' &= \operatorname{dim} V' - \operatorname{dim} \operatorname{null} i'\\ &= \operatorname{dim} V - \operatorname{dim} U^0\\ &= \operatorname{dim} U, \end{aligned} $$

where the last equality uses 3.106. Since $\operatorname{dim} U' = \operatorname{dim} U$, it follows that $\operatorname{range} i' = U'$.
