If you find any mistakes, please make a comment! Thank you.

Chapter 3 Exercise F

1. Solution: For any $\vp\in\ca L(V,\mb F)$, we have $\dim \m{range} \vp\leqslant \dim \mb F=1$. If $\dim \m{range} \vp=0$, then $\vp$ is the zero map. If $\dim \m{range} \vp=1$, then $\vp$ is surjective since $\dim\mb F=1$. Hence these are the only two possibilities.

2. Solution: Let $\vp_1,\vp_2,\vp_3\in\ca L(\R^{[0,1]},\R)$ be defined by \[\vp_1(f)=f(0),\quad\vp_2(f)=f(0.5),\quad\vp_3(f)=f(1).\]One can check that $\vp_1,\vp_2,\vp_3$ are indeed linear and pairwise distinct.
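
The linearity and distinctness claims can be spot-checked numerically. Below is a minimal sketch in plain Python (the sample functions $f(t)=t^2$ and $g(t)=3t+1$ are this note's choices, not part of the exercise):

```python
# Numeric sketch for Problem 2: the three evaluation functionals on R^[0,1]
# are linear and pairwise distinct.

def phi1(f): return f(0.0)    # evaluation at 0
def phi2(f): return f(0.5)    # evaluation at 0.5
def phi3(f): return f(1.0)    # evaluation at 1

f = lambda t: t * t           # sample "vectors" in R^[0,1]
g = lambda t: 3.0 * t + 1.0

# Linearity: phi(a*f + b*g) = a*phi(f) + b*phi(g) for each evaluation map.
a, b = 2.0, -1.0
h = lambda t: a * f(t) + b * g(t)
for phi in (phi1, phi2, phi3):
    assert phi(h) == a * phi(f) + b * phi(g)

# Distinctness: the three functionals already disagree on f(t) = t^2.
assert phi1(f) != phi2(f) != phi3(f) and phi1(f) != phi3(f)
```

Of course a numeric check on a few samples is only an illustration; the one-line proof of linearity is the actual argument.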

3. Solution: Extend $v$ to a basis of $V$ and use 3.96.

4. Solution: Let $u_1,\cdots,u_m$ be a basis of $U$. Since $U\ne V$, we can extend it to a basis $u_1,\cdots,u_m,u_{m+1},\cdots,u_{m+n}$ of $V$, where $n\geqslant 1$. Hence we can define $\vp\in V’$ by \[\vp(u_i)=\left\{ \begin{array}{ll} 0, & \hbox{if $i\ne m+1$;} \\ 1, & \hbox{if $i=m+1$.} \end{array} \right. \]Then $\vp\in V’$ and $\vp(u)=0$ for every $u\in U$, but $\vp\ne 0$ since $\vp(u_{m+1})=1$.
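
The construction can be illustrated numerically with numpy. In this sketch (the choice $V=\R^3$, $U=\m{span}(u_1,u_2)$ and the particular vectors are illustrative assumptions), the dual-basis functionals are the rows of the inverse of the basis matrix:

```python
import numpy as np

# Problem 4 in V = R^3: extend a basis of U to a basis of V and take the
# dual-basis functional of the added vector, so phi vanishes on U but phi != 0.
u1 = np.array([1.0, 0.0, 1.0])
u2 = np.array([0.0, 1.0, 1.0])          # u1, u2 span U
u3 = np.array([0.0, 0.0, 1.0])          # extension vector; u1, u2, u3 is a basis

B = np.column_stack([u1, u2, u3])
dual = np.linalg.inv(B)                  # rows are the dual basis functionals
phi = dual[2]                            # phi(u1) = phi(u2) = 0, phi(u3) = 1

assert abs(phi @ u1) < 1e-12 and abs(phi @ u2) < 1e-12
assert abs(phi @ u3 - 1.0) < 1e-12       # phi is not the zero functional
```

The rows of $B^{-1}$ are exactly the functionals dual to the columns of $B$, which is why this works.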

5. Solution: Define $P_i\in\ca L(V_i,V_1\times\cdots\times V_m)$ by \[P_i(x)=(0,\cdots,0,x,0,\cdots,0)\]with $x$ in the $i$-th component. Define $\vp\in \ca L((V_1\times\cdots\times V_m)’,V’_1\times\cdots\times V’_m)$ by \[\vp(f)=(P’_1f,\cdots,P’_mf).\]Now let us check that $\vp$ is an isomorphism.

Injectivity: suppose $(P’_1f,\cdots,P’_mf)=0$. Then for each $i$ and any $x_i\in V_i$, we have \[ P’_if(x_i)=0\Longrightarrow f(0,\cdots,x_i,\cdots,0)=0 \]by the definitions of $P_i$ and of the dual map. Here $(0,\cdots,x_i,\cdots,0)$ means the $i$-th component is $x_i$ and all other components are zero. This implies \[f(x_1,\cdots,x_m)=\sum_{i=1}^mf(0,\cdots,x_i,\cdots,0)=0\]for every $(x_1,\cdots,x_m)\in V_1\times\cdots\times V_m$, namely $f=0$. Thus $\vp$ is injective.

Surjectivity: for any $(f_1,\cdots,f_m)\in V’_1\times\cdots\times V’_m$, define $f\in (V_1\times\cdots\times V_m)’$ by \[f(x_1,\cdots,x_m)=\sum_{i=1}^mf_i(x_i).\]Then we can easily check that $\vp f=(f_1,\cdots,f_m)$.

By the arguments above, it follows that $(V_1\times\cdots\times V_m)’$ and $V’_1\times\cdots\times V’_m$ are isomorphic.
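
In finite dimensions the isomorphism has a concrete matrix picture, which may help intuition. In this sketch (the identification of functionals with row vectors and the choice $V_1=\R^2$, $V_2=\R^3$ are conveniences of this note), $P’_if=f\circ P_i$ is just the $i$-th block of the row vector representing $f$:

```python
import numpy as np

# Matrix model of Problem 5: a functional f on V1 x V2 = R^5 is a row vector,
# and f -> (P_1' f, P_2' f) splits that row into its two blocks.
f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # f in (V1 x V2)'

P1 = np.vstack([np.eye(2), np.zeros((3, 2))])  # P1 : V1 -> V1 x V2
P2 = np.vstack([np.zeros((2, 3)), np.eye(3)])  # P2 : V2 -> V1 x V2

f1 = f @ P1                                    # P_1' f, a functional on V1
f2 = f @ P2                                    # P_2' f, a functional on V2
assert np.allclose(f1, [1.0, 2.0]) and np.allclose(f2, [3.0, 4.0, 5.0])

# Surjectivity in this model: glue the blocks back together to recover f.
assert np.allclose(np.concatenate([f1, f2]), f)
```

Note the proof in the solution does not assume finite dimensions; the matrix picture is only an illustration.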

6. Solution: (a) If $v_1,\cdots,v_m$ spans $V$, then $\Gamma(\vp)=0$ implies \[\vp(v_1)=\cdots=\vp(v_m)=0.\]Since $v_1,\cdots,v_m$ spans $V$, for any $v\in V$ we can write \[v=\sum_{i=1}^mk_iv_i,\quad k_i\in\mb F.\]Thus \[\vp(v)=\vp\left(\sum_{i=1}^mk_iv_i\right)=\sum_{i=1}^mk_i\vp(v_i)=0.\]This implies $\vp=0$. We conclude $\Gamma$ is injective.

If $\Gamma$ is injective and $\m{span}(v_1,\cdots,v_m)\ne V$, then by Problem 4, there exists a $\vp\in V’$ such that $\vp(v)=0$ for every $v\in\m{span}(v_1,\cdots,v_m)$ and $\vp\ne 0$. Then $\Gamma(\vp)=0=\Gamma(0)$ with $\vp\ne 0$, so $\Gamma$ is not injective. We get a contradiction. Hence $v_1,\cdots,v_m$ spans $V$.

(b) If $v_1,\cdots,v_m$ is linearly independent, then for any $(f_1,\cdots,f_m)\in\mb F^m$, there exists a $\vp\in V’$ such that \[\vp(v_i)=f_i,\quad i=1,\cdots,m.\]This is easy to show by extending $v_1,\cdots,v_m$ to a basis of $V$ and using 3.5. Then by definition of $\Gamma$, we have\[\Gamma(\vp)=(f_1,\cdots,f_m).\]This implies $\Gamma$ is surjective.

If $\Gamma$ is surjective, suppose $v_1,\cdots,v_m$ is linearly dependent. Then there exist $k_1,\cdots,k_m\in\mb F$, not all zero, such that \[k_1v_1+\cdots+k_mv_m=0.\]Say $k_i\ne 0$; then $v_i$ can be written as a linear combination of $v_1,\cdots,v_{i-1},v_{i+1},\cdots,v_m$. Hence $(0,\cdots,0,1,0,\cdots,0)$, where the $1$ is in the $i$-th component, is not in $\m{range}\Gamma$. Otherwise, we would have $\vp\in V’$ such that $\Gamma(\vp)=(0,\cdots,0,1,0,\cdots,0)$, that is \[\vp(v_j)=0\text{ for }j\ne i,\quad \vp(v_i)=1.\]This implies $\vp(v)=0$ whenever $v$ is a linear combination of $v_1,\cdots,v_{i-1},v_{i+1},\cdots,v_m$. Thus $\vp(v_i)=0$ by our previous argument. However, we also have $\vp(v_i)=1$, a contradiction. Hence $\Gamma$ is not surjective, contradicting our assumption. That means the assumption that $v_1,\cdots,v_m$ is linearly dependent can never hold. Hence $v_1,\cdots,v_m$ is linearly independent.

7. Solution: By direct calculation, we have \[ \vp_j(x^i)=\delta_{i,j}, \]where $\delta_{i,j}=1$ if $i=j$ and $\delta_{i,j}=0$ if $i\ne j$. Note that the dual basis of a given basis is unique (if it exists). Hence the dual basis of the basis $1,x,\cdots,x^m$ of $\ca P_m(\R)$ is $\vp_0,\vp_1,\cdots,\vp_m$.
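
The calculation $\vp_j(x^i)=\delta_{i,j}$ can be spot-checked symbolically with sympy. This sketch assumes, as in the exercise, that $\vp_j(p)=p^{(j)}(0)/j!$, and takes $m=4$ for illustration:

```python
import sympy as sp

# Check for Problem 7: with phi_j(p) = p^(j)(0)/j!, we get phi_j(x^i) = delta_{ij},
# so phi_0, ..., phi_m is the dual basis of 1, x, ..., x^m in P_m(R).
x = sp.symbols('x')
m = 4

def phi(j, p):
    # j-th derivative of p, evaluated at 0, divided by j!
    return sp.diff(p, x, j).subs(x, 0) / sp.factorial(j)

for i in range(m + 1):
    for j in range(m + 1):
        assert phi(j, x**i) == (1 if i == j else 0)
```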

8. Solution: (a) This is easy, see Problem 10 of Exercise 2C.

(b) The dual basis of the basis $1,x-5,\cdots,(x-5)^m$ of $\ca P_m(\R)$ is $\vp_0,\vp_1,\cdots,\vp_m$, where $\vp_j(p)=\frac{p^{(j)}(5)}{j!}$. Here $p^{(j)}$ denotes the $j^{\m{th}}$ derivative of $p$, with the understanding that the $0^{\m{th}}$ derivative of $p$ is $p$. The proof is similar to that of Problem 7.

9. Solution: Since $v_1,\cdots,v_n$ is a basis of $V$ and $\vp_1,\cdots,\vp_n$ is the corresponding dual basis of $V’$, we have \[(\psi(v_1)\vp_1+\cdots+\psi(v_n)\vp_n)(v_1)=\psi(v_1).\]Similarly, we have\[(\psi(v_1)\vp_1+\cdots+\psi(v_n)\vp_n)(v_i)=\psi(v_i)\]for $i=1,\cdots,n$. Hence\[\psi=\psi(v_1)\vp_1+\cdots+\psi(v_n)\vp_n,\]as the two sides coincide on a basis of $V$.
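
The identity $\psi=\sum_i\psi(v_i)\vp_i$ is easy to verify numerically. In this sketch (the identification of $V=\R^3$, of functionals with row vectors, and the random basis are all conveniences of this note), the dual basis consists of the rows of the inverse basis matrix:

```python
import numpy as np

# Numeric check of Problem 9: psi = psi(v1) phi1 + ... + psi(vn) phin.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))        # columns v1, v2, v3: a basis (generically)
dual = np.linalg.inv(B)                # rows phi1, phi2, phi3: the dual basis

psi = rng.standard_normal(3)           # an arbitrary functional, as a row vector
coeffs = psi @ B                       # (psi(v1), psi(v2), psi(v3))
recombined = coeffs @ dual             # sum_i psi(v_i) phi_i

assert np.allclose(recombined, psi)
```

In matrix terms the identity is just $(\psi B)B^{-1}=\psi$, which mirrors the "coincide on a basis" argument.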

10. Solution: (a) $(S+T)’=S’+T’$ for all $S,T\in\ca L(V,W)$. For each $\vp\in W’$, we have \begin{align*} (S+T)'(\vp)(x)&=\vp((S+T)x)=\vp(Sx+Tx)=\vp(Sx)+\vp(Tx)\\&=S'(\vp)(x)+T'(\vp)(x)=(S’+T’)(\vp)(x) \end{align*} for all $x\in V$. The first and fourth equalities hold by the definition of the dual map (3.99). The others hold by 3.6. Hence $(S+T)'(\vp)=(S’+T’)(\vp)$ for each $\vp\in W’$, namely $(S+T)’=S’+T’$.

(b) $(\lambda T)’=\lambda T’$ for all $\lambda\in\mb F$ and all $T\in\ca L(V,W)$. For each $\vp\in W’$, we have \begin{align*} (\lambda T)'(\vp)(x)&=\vp((\lambda T)x)=\vp(\lambda Tx)=\lambda\vp( Tx)\\&=\lambda T'(\vp)(x)=(\lambda T’)(\vp)(x) \end{align*}for all $x\in V$. Here we also use 3.6 and 3.99. As before, we conclude $(\lambda T)’=\lambda T’$.

15. Solution: If $T=0$, then for any $f\in W’$ and any $v\in V$, we have $$(T’f)v=f(Tv)=f(0)=0.$$Therefore $T’f=0$ for all $f\in W’$ and hence $T’=0$.

Conversely, suppose $T’=0$; we show $T=0$ by contradiction. Assume $T\ne 0$; then there exists $v\in V$ such that $Tv\ne 0$. Since $W$ is finite-dimensional, it follows from Problem 3 that there exists $\vp\in W’$ such that $\vp(Tv)\ne 0$. Note that $(T’\vp)v=\vp(Tv)\ne 0$, which contradicts the assumption that $T’=0$. Hence $T=0$.

16. Solution: Let $\Gamma:\ca L(V,W)\to \ca L(W’,V’)$ be defined by \[\Gamma(T)=T’.\]By 3.60, we have $\dim \ca L(V,W)=\dim \ca L(W’,V’)$. Hence, by 3.69, it suffices to show $\Gamma$ is injective. Suppose $\Gamma(S)=0$ for some $S\in \ca L(V,W)$, that is, $S’=0$. Then for any $\vp\in W’$ and $v\in V$, we have \[S'(\vp)(v)=\vp(Sv)=0.\]By Problem 3, this can only happen when $Sv=0$. Hence $Sv=0$ for all $v\in V$, thus $S=0$. We conclude $\Gamma$ is injective.
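
In coordinates, the defining relation $(T’\vp)(v)=\vp(Tv)$ says the dual map acts on row vectors by right multiplication with $\ca M(T)$. The following numpy sketch checks this (the matrix model with standard bases and the particular numbers are this note's illustrative assumptions):

```python
import numpy as np

# Matrix model for Problem 16: identifying functionals with row vectors,
# T' sends the row phi to phi @ M(T). In particular T' = 0 forces M(T) = 0,
# which is the injectivity of T -> T'.
M = np.array([[1.0, 2.0],
              [0.0, 3.0],
              [4.0, 5.0]])             # M(T) for some T : R^2 -> R^3

phi = np.array([1.0, -1.0, 2.0])       # phi in (R^3)'
v = np.array([0.5, 1.5])               # v in R^2

# (T' phi)(v) = phi(T v): both sides computed in the matrix model.
assert np.isclose((phi @ M) @ v, phi @ (M @ v))
```

The design point: in this model $\Gamma$ is essentially $M\mapsto M^{\m{T}}$, which is visibly an isomorphism of matrix spaces.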

17. Solution: Note that\[\vp(u)=0\text{ for all } u\in U\iff U\subset \m{null}\vp.\]

18. Solution: By Problem 17, $U^0=V’$ if and only if $U\subset \m{null}\vp$ for all $\vp\in V’$. Note that by Problem 3, $v\in\m{null}\vp$ for all $\vp\in V’$ if and only if $v=0$. This implies $U^0=V’$ if and only if $U=\{0\}$.

Alternative solution: by 3.106, we have \[\dim \mathrm{span}(U)+\dim U^0=\dim V.\]Hence\[\dim U^0=\dim V’\iff \dim \mathrm{span}(U)=0,\] since $\dim V’=\dim V$.

19. Solution: By 3.106, we have \[\dim U+\dim U^0=\dim V.\]Hence \[\dim U=\dim V\iff \dim U^0=0.\]That is $U=V$ if and only if $U^0=\{0\}$.
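
The dimension formula 3.106 used here can be illustrated numerically. In this sketch (identifying $V=\R^5$, functionals with row vectors, and choosing $U$ at random are all conveniences of this note), $U^0$ is computed from the SVD:

```python
import numpy as np

# Numeric illustration of 3.106 (dim U + dim U^0 = dim V) in V = R^5:
# U^0 = { phi : phi @ A = 0 }, where the columns of A span U.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2))          # columns span U; dim U = 2 generically

# Basis of U^0 from the SVD of A.T: the rows of Vt beyond rank(A)
# annihilate the columns of A.
_, s, Vt = np.linalg.svd(A.T)            # A.T has shape (2, 5)
rank = int(np.sum(s > 1e-10))
U0_basis = Vt[rank:]                     # (5 - rank) rows, each a functional in U^0

assert np.allclose(U0_basis @ A, 0.0)            # every basis functional kills U
assert A.shape[0] == rank + U0_basis.shape[0]    # dim V = dim U + dim U^0
```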

20. Solution: If $\vp\in W^0$, then $\vp(w)=0$ for all $w\in W$. As $U\subset W$, we also have $\vp(u)=0$ for all $u\in U$, hence $\vp\in U^0$. Since $\vp$ was chosen arbitrarily, we deduce that $W^0\subset U^0$.

21. Solution: Since $W^0\subset U^0$, it follows from Problem 22 that $$(U+W)^0=U^0\cap W^0= W^0.$$Note that $V$ is finite-dimensional, by 3.106 we have $$\dim (U+W)^0=\dim V-\dim(U+W),\quad \dim W^0=\dim V-\dim W.$$Therefore, we have $\dim (U+W)=\dim W$. As $W\subset U+W$ and $\dim (U+W)=\dim W$, we conclude that $U+W=W$, which implies that $U\subset W$.

22. Solution: Note that $U\subset U+W$ and $W\subset U+W$, it follows from Problem 20 that $(U+W)^0\subset U^0$ and $(U+W)^0\subset W^0$. Therefore, $(U+W)^0\subset U^0 \cap W^0$.

On the other hand, for any given $f\in U^0\cap W^0$, we have $f(u)=0$ and $f(w)=0$ for any $u\in U$ and any $w\in W$. Therefore, $$f(u+w)=f(u)+f(w)=0$$for any $u\in U$ and any $w\in W$. Note that every vector $x\in U+W$ can be written in the form $u+w$, where $u\in U$ and $w\in W$. Therefore $f(x)=0$ for all $x\in U+W$. This implies that $f\in (U+W)^0$, hence $U^0 \cap W^0\subset (U+W)^0$.

Therefore, $(U+W)^0=U^0 \cap W^0$.

23. Solution: Note that $U\cap W\subset U$ and $U\cap W\subset W$, it follows from Problem 20 that $U^0\subset (U\cap W)^0$ and $W^0\subset (U\cap W)^0$. Hence $U^0+W^0\subset (U\cap W)^0$.

On the other hand, since $V$ is finite-dimensional, it follows from 2.43 that\begin{align*}\dim(U^0+W^0)=& \dim
U^0+\dim W^0-\dim (U^0\cap W^0)\\ \text{by Problem 22 and 3.106}\quad=&\dim V-\dim U+\dim V-\dim W-\dim((U+W)^0)\\ \text{by 3.106}\quad=&\dim V-\dim U+\dim V-\dim W-\dim V+\dim(U+W)\\ \text{by 2.43}\quad=&\dim V-\dim U-\dim W+(\dim U+\dim W-\dim (U\cap W))\\ \text{by 3.106}\quad=&\dim V-\dim(U\cap W)=\dim ((U\cap W)^0).\end{align*}Since $\dim(U^0+W^0)=\dim ((U\cap W)^0)$ and $U^0+W^0\subset (U\cap W)^0$, they must be equal. Therefore, $U^0+W^0= (U\cap W)^0$.

34. Solution: (a) Given $k_1,k_2\in\mb F$ and $v_1,v_2\in V$. For any $\vp\in V’$, we have\begin{align*}(\Lambda(k_1v_1+k_2v_2))(\vp)=&\, \vp(k_1v_1+k_2v_2)\\=&\, k_1\vp (v_1)+k_2\vp(v_2)\\=&\, k_1(\Lambda v_1)(\vp)+k_2(\Lambda v_2)(\vp)\\ =&\, (k_1\Lambda v_1+k_2\Lambda v_2)(\vp).\end{align*}Since this is true for any $\vp$, it follows that $$\Lambda(k_1v_1+k_2v_2)=k_1\Lambda v_1+k_2\Lambda v_2.$$Hence $\Lambda$ is a linear map from $V$ to $V^{\prime\prime}$.

(b) For any given $v\in V$, $(T^{\prime\prime}\circ \Lambda) v=T^{\prime\prime}(\Lambda v)$ and $(\Lambda \circ T)v=\Lambda(Tv)$ are elements of $V^{\prime\prime}$. To show they are equal, it suffices to show that for any $f\in V’$ we have$$(T^{\prime\prime}(\Lambda v))f=(\Lambda(Tv))f.$$To see this, we have\begin{align*}&\,(T^{\prime\prime}(\Lambda v))f\\ \text{by the definition of dual map, 3.99}\quad =&\,(\Lambda v)(T’f)\\ \text{by the definition of }\Lambda \quad=&\, (T’f)v\\ \text{by the definition of dual map, 3.99}\quad =&\, f(Tv).\end{align*}On the other hand, by the definition of $\Lambda$, we also have$$(\Lambda(Tv))f=f(Tv).$$Hence $T^{\prime\prime}(\Lambda v)=\Lambda(Tv)$, therefore$$(T^{\prime\prime}\circ \Lambda) v=(\Lambda \circ T)v.$$As $v$ was chosen arbitrarily, we conclude that $T^{\prime\prime}\circ \Lambda=\Lambda \circ T$.

(c) Since $V$ is finite-dimensional, by 3.95 we have $\dim V=\dim V’ =\dim V^{\prime\prime}$. Hence it suffices to show that $\Lambda$ is injective. Suppose $\Lambda v=0$; then for any $f\in V’$ we have $$(\Lambda v)f=f(v)=0.$$Let $U=\{v\}$ as in Problem 18; by our assumption $U^0=V’$, hence it follows from Problem 18 that $U=\{0\}$. Therefore $v=0$, which implies that $\Lambda$ is injective.
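
The identity in part (b) has a transparent matrix form, which this numpy sketch checks (the matrix model with standard bases of $\R^2$ and the particular numbers are this note's illustrative assumptions):

```python
import numpy as np

# Matrix-model check of Problem 34(b): with functionals as row vectors,
# Lambda v acts on phi by phi @ v, and T' acts by phi -> phi @ M. Then
# (T''(Lambda v))(phi) = (Lambda v)(phi @ M) = phi @ (M @ v) = (Lambda(T v))(phi).
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])              # M(T) for T : R^2 -> R^2
v = np.array([1.0, -1.0])
phi = np.array([2.0, 5.0])              # phi in (R^2)'

lhs = (phi @ M) @ v                     # (T''(Lambda v))(phi) = (Lambda v)(T' phi)
rhs = phi @ (M @ v)                     # (Lambda(T v))(phi)
assert np.isclose(lhs, rhs)
```

In this model the identity $T^{\prime\prime}\circ\Lambda=\Lambda\circ T$ is matrix associativity, which matches the chain of equalities in part (b).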


This website is supposed to help you study Linear Algebra. Please only read these solutions after thinking about the problems carefully. Do not just copy these solutions.

This Post Has 46 Comments

  1. I have a question about Problem 30, I am confused because I think I found a counter case of #30.
    Suppose v1, ... , vn is a basis of V and φ1, ... , φn is the corresponding dual basis of V'.
    Then construct m linearly independent elements in V' by ψi = φi+φ(m+1), i = 1, ... ,m.
    It is easy to find that null ψi = span(v1, ..., v(i-1), v(i+1), ... , vm, v(m+2), ... , vn).
    Suppose U = (null ψ1) ∩ ... ∩ (null ψm), then U = span(v(m+2), ..., vn),
    dimU = n - m - 1 = dimV - m - 1, not dimV - m.
    I think there should be something wrong with that, but I can not find it. Can anyone help?

    1. How about ½(v1+…+vm) − ½v(m+1)? This is in the intersection.

  2. For the last part of problem 6(b) I did a proof not by contradiction, maybe someone will find it useful (if it's wrong please tell me!):

    - Let $\Gamma$ be surjective. Then, for any $w_{1}, ..., w_{m} \in \mathbb{F}^{m}$ there must be some $\phi \in V'$ such that $\Gamma (\phi) = (w_{1}, ..., w_{m})$, thus $\phi (v_{j}) = w_{j}$ for all $1 \leq j \leq m$.

    - Let now $w_{1}, ..., w_{m}$ be a basis of $F^{m}$, thus it is linearly independent. So (by the point above) there must be some $\phi_{1} \in V'$ such that $w_{j} = \phi_{1}(v_{j})$ for all $1 \leq j \leq m$.

    - But now, $\phi_{1}(v_{1}), ..., \phi_{1}(v_{m})$ is also linearly independent, so by exercise 3.A.4, $v_{1}, ..., v_{m}$ is linearly independent.

  3. Can I use the Riesz representation theorem to solve the problems in this chapter? This is my idea for problem 30: each functional phi_i can be expressed in the form phi_i(x) = ⟨x, e_i⟩ for some vector e_i in V. Because the list of phi_i is a basis of V', the list of e_i is a basis of V (easy to prove). Take u_1 to be the spanning vector of the orthogonal complement of span(e_2,e_3,...,e_n), u_2 the spanning vector of the orthogonal complement of span(e_1,e_3,...,e_n), ... , u_n the spanning vector of the orthogonal complement of span(e_1,e_2,...,e_(n-1)). Then ⟨u_i, e_i⟩ can't be equal to 0, because if ⟨u_i, e_i⟩ = 0 then e_i belongs to span(e_1,...,e_(i-1),e_(i+1),...,e_n), so that e_1, e_2, ... , e_n is linearly dependent. This is a contradiction. It's easy to show that the list of u_i is linearly independent, so the vectors u_i/⟨u_i, e_i⟩ form the basis whose dual basis is phi_1, ..., phi_n.

  4. Can anyone check my solution for problem 30: Let U = the intersection of null phi_i (1<=i<=m). We will show that span(phi_1, phi_2, ..., phi_m) = U^0. It's easy to show that span(phi_1, ..., phi_m) is a subset of U^0. Now we prove that U^0 is a subset of span(phi_1, phi_2, ..., phi_m), by showing that if f(u)=0 for every u in U then f belongs to span(phi_1, phi_2, ..., phi_m), where f(x) = ⟨x, e⟩ for some e. By the Riesz representation theorem we have phi_i(x) = ⟨x, e_i⟩ for some e_i. One can prove easily that if phi_1, phi_2, ..., phi_m is a linearly independent list then e_1, e_2, ..., e_m is a linearly independent list too. Now span(e_1, ..., e_m) is the orthogonal complement of U. Because e is orthogonal to every element of U, e belongs to span(e_1, e_2, ..., e_m) => f belongs to span(phi_1, phi_2, ..., phi_m) => span(phi_1, phi_2, ..., phi_m) = U^0 => dim(U) = dim(V) - dim(span(phi_1, phi_2, ..., phi_m)) = dim(V) - m.

  5. can anyone please help for answer of F.11???

  6. Can I solve problem 4 like this?
    Solution: since dimV=dimU+dimU0
    so dimU0

    1. There is dimU0>0

  7. #16, the injectivity of $\Gamma$ can also be derived from the result of #15

  8. For 27, I can prove range T is a subset of the polynomials p in P5(R) with p(8)=0. (For any q in P5(R), (Tq)(8)=phi(Tq)=(T'(phi))(q)=0.) But I wonder whether the reverse inclusion might be false?
    Let 1, (x-8),...,(x-8)^5 be a basis of P5(R). Then we look at matrix of T with regard to this basis. The first row should be 0, because
    null T'=span(phi) if and only if
    (Tq)(8)=0 for all q in P5(R) if and only if
    first row of M(T) are 0.
    Then we let the last row of M(T) be all 1, and any other rows be 0. Now we have a T that satisfies the condition, but any p from span(1,(x-8),(x-8)^2,...,(x-8)^4) is not in range T.
    Can someone explain to me if I'm wrong?

    1. The problem lies in \[\textrm{null} T' = \textrm{span}(\varphi) \Leftrightarrow (Tq)(8) = 0 \textrm{ for all } q\in\mathcal{P}_5(\mathbf{R})\] It is easy to show in one direction \[\textrm{null} T' = \textrm{span}(\varphi) \Rightarrow (Tq)(8) = 0 \textrm{ for all } q\in\mathcal{P}_5(\mathbf{R})\] However, in the other direction, we have \[\begin{aligned}
      &(Tq)(8) = 0 \textrm{ for all } q\in\mathcal{P}_5(\mathbf{R})\\
      \Rightarrow &(T'(\varphi))(q) = 0\Rightarrow \textrm{span}(\varphi)\subset \textrm{null} T'\\
      \end{aligned}\] Thus it should be \[\textrm{null} T' \supset \textrm{span}(\varphi) \Rightarrow (Tq)(8) = 0 \textrm{ for all } q\in\mathcal{P}_5(\mathbf{R})\] In other words, $\textrm{null} T'$ is `amplified'. And the example you give happens to satisfy such a condition. From your example, we have \[\mathcal{M}(T)=\left(\begin{aligned}
      &0\; 0\; 0\; 0\; 0\;\\
      &0\; 0\; 0\; 0\; 0\;\\
      &0\; 0\; 0\; 0\; 0\\
      &0\; 0\; 0\; 0\; 0\\
      &1\; 1\; 1\; 1\; 1\\
      \end{aligned}\right)\] Consider $\psi\in(\mathcal{P}_5(\mathbf{R}))'$ whose matrix is \[\mathcal{M}(\psi) = \left(1\; 1\; 1\; 1\; 0\right)\] Thus \[\mathcal{M}(T'(\psi)) = \mathcal{M}(\psi T) = \mathcal{M}(\psi)\mathcal{M}(T) = \left(0\; 0\; 0\; 0\; 0\right)\] Obviously $\psi\in\textrm{null}T'$. However, considering $p = x-8$, we have $\varphi(p) = 0$ and $\psi(p) = x-8$. Thus $\psi\notin\textrm{span}(\varphi)$, which means \[\textrm{null} T' \neq \textrm{span}(\varphi)\] I hope my explanation helps; please correct me if I have made mistakes.

      1. You're right, thank you! Just a note, \psi(p) = 1, not x-8, by definition that linear functionals take polynomials to F. Also the matrices and vectors are 6-by-6 and 1-by-6. I think your way of thinking about matrices of linear functionals is very clear. Actually we can write matrix of \varphi as (1 0 0 0 0 0), then in order to have nullT'=span(\varphi), any \psi from null T' should have matrix (a 0 0 0 0 0) for some a, which means when defining T, we must make range T span (x-8,(x-8)^2,...,(x-8)^5), otherwise nullT' would be larger than (a 0 0 0 0 0). Thus the reverse direction is proved.

        1. Yes, you are right. Thank you very much

  9. 6b. To prove that 𝛤 surjective implies the list of vectors is linearly independent, there is a simpler, direct way. By surjectivity, each vi corresponds to some 𝛷i such that 𝛷i(vi) = 1 and 𝛷j(vi) = 0 if i ≠ j.

    Consider any 𝝨(ai)(vi) = 0, where ai are coefficients of the vector. Apply 𝛷i to both sides, for all i.

    This forces each ai to be 0. Giving linear independence.

  10. Anyone understand 37 c?

    1. Any injective linear map is an isomorphism between its domain and its range.

  11. Sketch of solution to exercise 35:
    Let φ ∈ (P(R))'
    Define f: (P(R))' → R∞ by:
    f(φ) = (φ(1), φ(x), φ(x²), φ(x³), ...).
    It takes little work to verify that f is an isomorphism, with
    f⁻¹(x₀, x₁, x₂, x₃,...) = ψ, where ψ is defined by
    ψ(xⁿ)=xₙ [this univocally determines ψ, because any polynomial is a finite linear combination of xⁿs, and since ψ must be linear, the image of every polynomial is therefore the corresponding (i.e. with the same scalars) linear combination of xₙs].

    1. Well done ! Thank you very much Janou!

    2. Could you please tell me why is tau a subset of the annihilator set?

  12. Solution to Exercise 30:
    Let Γ = span(φ₁,...,φₘ). Of course, Γ is a subspace of V'.

    Note that the statement "v is an element of (null(φ₁)∩...∩null(φₘ))" is equivalent to

    "v gets annihilated by φ₁, by..., and by φₘ [and thus by all their linear combinations]",

    which in turn is equivalent to

    "φ(v)=0 for all φ in Γ".

    So, by Exercise 26, Γ = (null(φ₁)∩...∩null(φₘ))⁰. This implies:

    dimΓ = dim (null(φ₁)∩...∩null(φₘ))⁰

    and therefore:

    m = dimV - dim(null(φ₁)∩...∩null(φₘ)),

    i.e.

    dim(null(φ₁)∩...∩null(φₘ)) = dimV - m.

    This completes the proof.

  13. Solution to Exercise 29:
    The hypothesis means that, for any ψ in W', there's some scalar λ in F such that:

    (T'(ψ)) = λφ
    i.e. ψ∘T = λφ

    Of course, the value of λ depends on the ψ at issue. Note that, if λ=0 for all ψ in W', then this implies T'(ψ) = ψ∘T = 0 for all ψ in W', so range(T') = {0} and hence, by the hypothesis, span(φ) = {0}, which means φ is the zero functional, and hence Null(φ) = V. Thus, if all λs are zero, we obtain trivially:

    Null(T) ⊂ Null(φ).

    Otherwise, if λ is nonzero for some ψ in W', then we pick any such ψ, and reason as follows:
    if v is in null(T), then
    (ψ∘T)(v) = 0
    λφ(v) = 0
    φ(v) = 0
    So v is in Null(φ). So we obtain

    Null(T) ⊂ Null(φ)


    On the other hand, observe that dim(span(φ)) = dim(range(φ)): both are 0 if φ=0, and 1 otherwise.
    dim(null(T)) = dim(V) - dim(range(T))
    = dim(V) - dim(range(T'))
    = dim(V) - dim(span(φ)) [by the hypothesis]
    = dim(V) - dim(range(φ))
    = dim(null(φ)).

    This completes the proof.

  14. Solution to Exercise 28:
    φ belongs to its own range, so φ is in null(T'). This means T'(φ)=0, i.e. φ∘T=0. This happens iff

    Range(T) ⊂ Null(φ).

    On the other hand, observe that dim(span(φ)) = dim(range(φ)): both are 0 if φ=0, and 1 otherwise.
    dim(null(φ)) = dim(W) - dim(range(φ))
    = dim(W) - dim(span(φ))
    = dim (W) - dim(null(T')) [by the hypothesis]
    = dim(W) - [dim(null(T)) + dim(W) - dim(V)]
    = dim(V) - dim(null(T))
    = dim(range(T)).

    This completes the proof

      (and if you're running the blog, please delete it).


      (This is me, although with a different account).

      1. It is done. Thank you very much for the solutions. I will update them later!

    2. In regard to #26, isn't there a much faster solution using the previous result? That is, letting Γ = U° and applying the result of #25, the question is reduced to: ...show that U° = (U)°...

  15. I'm curious what others have to say about problem 36(b) if anyone here has completed it. Given that i: U -> V is the inclusion map, i.e. i(u) = u, we are asked to show that "If V is finite-dimensional, then range i' = U'." That is, i': V' -> U' is surjective. I don't see why the antecedent that V is finite-dimensional is necessary. Where is the pitfall in the logic that, given any f in U', for all u in U, i'(f(u)) = f(i(u)) = f(u), hence i'(f) = f and thus f is in range i'? It seems too simple and doesn't use the fact that V is finite-dimensional.

    1. That's because the domain of i' is V', not U'.

  16. I have a better solution of exercise 15.

    Here's a list of logically equivalent statements that prove 15:

    1) T=0

    2) (range T)^0 = W'

    by 3.107a this is equivalent to

    3) null T' = W'

    4) T' = 0

    1. You didn't use the fact that W' is finite-dimensional. Could this fact point to any flaw in your proof? Personally, I think it is correct. I couldn't see the need for W' to be finite-dimensional.

      1. By the way, 3.107(a) doesn't require V or W to be finite-dimensional either.

  17. I believe that you have an error in your solution to the first problem. You wrote dim range V. V is a vector space, not a function, hence it does not have a range. I believe you meant dim range $\phi$.

    Also, for the 5th problem, I believe the proof could be much shorter if you show that the dimensions of the two vector spaces are equal, since two vector spaces are isomorphic if and only if they have the same dimension.

    Thank you so much for taking the time to put these solutions up. They have been very helpful.

    1. Fixed, thanks!

    2. Anoden: For #5, I thought the same thing at first, but then I realized that we don't know whether or not any of the V's are finite-dimensional, so we can't use that specific tactic.

  18. Hi. can you post the solution to question 34(B). It took me almost the whole afternoon yielding no results. I have some trouble understanding A, the linear transformation that maps V to V''.

    According to the book (which is something like this)

    wouldn't A map V to P, since phi(x) would equal a number?

    1. The map Λ maps x (in V) to Λx, not φ(x). The element Λx, as an element of V'', acts on V', which gives us a number. I updated Problem 34.

  19. Also for solution to question 6, line 4. The expression of v is missing some parts

    1. Thanks!

  20. The other solution to question 18 is wrong. Since you don't know if U is a subspace or not, "dim U" doesn't make any sense. 3.106 specifies U to be a subspace of V.

    1. It does not matter, since we can substitute U by span(U).

  21. Solution to 3F12:
    $I'(\phi)=\phi \circ I$ (definition of dual map), and
    $(\phi \circ I)(v) = \phi(v)$ for all $v \in V$ by the definition of the identity map. Hence $I'(\phi)=\phi$ for every $\phi \in V'$, i.e. $I'$ is the identity map on $V'$.

  22. Do you happen to have the solutions to 30-33?

  23. Thank you so much, whoever did this.

  24. I think 6. can be done directly by proving that dim span(v1, ..., vm) = dim range gamma (assume for simplicity v1, ..., vn is a basis of the span, and map v using gamma(coordinate_functional(v)), where coordinate_functional(v)(vi) is the i-th coefficient of vi in v. This is obviously invertible as for some v gives you the list of coordinates plus some garbage)... Using that equality both (a) and (b) seem easy. If I'm not mistaken somewhere.
