If you find any mistakes, please make a comment! Thank you.

## Chapter 3 Exercise C

1. Solution: Suppose that for some basis $v_1$, $\cdots$, $v_n$ of $V$ and some basis $w_1$, $\cdots$, $w_m$ of $W$, the matrix of $T$ has at most $\dim \m{range} T-1$ nonzero entries. Each nonzero $Tv_j$ contributes at least one nonzero entry to column $j$, so at most $\dim \m{range} T-1$ of the vectors $Tv_1$, $\cdots$, $Tv_n$ are nonzero. Since $\m{range} T=\m{span}(Tv_1,\cdots,Tv_n)$, and a list of at most $\dim \m{range} T-1$ vectors spans a space of dimension at most $\dim \m{range} T-1$, it follows that $\dim \m{range} T\le \dim \m{range} T-1$. This is a contradiction, hence completing the proof.
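As a quick numerical sanity check (not part of the proof), the statement amounts to: the rank of a matrix never exceeds its number of nonzero entries. A sketch assuming NumPy is available:

```python
import numpy as np

# Sanity check: rank(A) <= number of nonzero entries of A,
# since rank <= number of nonzero columns <= number of nonzero entries.
rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.integers(-2, 3, size=(4, 6)) * (rng.random((4, 6)) < 0.3)
    assert np.linalg.matrix_rank(A) <= np.count_nonzero(A)
print("ok")
```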

2. Solution: A basis of $\ca P_3(\R)$ is $x^3$, $x^2$, $x$, $1$, and the corresponding basis of $\ca P_2(\R)$ is $3x^2$, $2x$, $1$ (in the indicated order). (Check it!)
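To make the check concrete, here is a small NumPy sketch (the coordinate bookkeeping is my own, not the book's notation): the matrix of the differentiation map $D$ in standard monomial coordinates is converted to the bases above by a change of basis on the codomain.

```python
import numpy as np

# Differentiation D: P3(R) -> P2(R) in standard monomial coordinates.
# Domain coordinates: (x^3, x^2, x, 1); codomain coordinates: (x^2, x, 1).
D_std = np.array([[3., 0., 0., 0.],
                  [0., 2., 0., 0.],
                  [0., 0., 1., 0.]])

# Codomain basis 3x^2, 2x, 1 written in standard coordinates, as columns.
B = np.diag([3., 2., 1.])

# Matrix of D with respect to the bases x^3, x^2, x, 1 and 3x^2, 2x, 1:
M = np.linalg.solve(B, D_std)
print(M)
# columns are (1,0,0), (0,1,0), (0,0,1), (0,0,0), matching the claimed form
```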

3. Solution: We use the notation and the proof of 3.22. Extend $Tv_1$, $\cdots$, $Tv_n$ to a basis of $W$ as $Tv_1$, $\cdots$, $Tv_n$, $\mu_1$, $\cdots$, $\mu_s$. Then with respect to the basis $v_1$, $\cdots$, $v_n$, $u_1$, $\cdots$, $u_m$ of $V$ and the basis $Tv_1$, $\cdots$, $Tv_n$, $\mu_1$, $\cdots$, $\mu_s$ of $W$, all entries of $\ca M(T)$ are $0$ except that the entries in row $j$, column $j$ equal $1$ for $1\le j\le \dim \m{range} T$.
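This conclusion can be illustrated numerically. The sketch below (assuming NumPy; it uses an SVD to pick the bases rather than the null-space construction in 3.22) produces bases with respect to which the matrix has $1$ in the first $\dim \m{range} T$ diagonal entries and $0$ elsewhere:

```python
import numpy as np

rng = np.random.default_rng(1)
# A map T: R^5 -> R^4 of rank 2, given in the standard bases.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))
r = np.linalg.matrix_rank(A)

U, s, Vt = np.linalg.svd(A)
P = Vt.T.copy()          # columns: basis of the domain
P[:, :r] /= s[:r]        # rescale so that T v_j = u_j for j <= r
Q = U                    # columns: basis of the codomain

M = np.linalg.solve(Q, A @ P)      # matrix of T w.r.t. these bases
expected = np.zeros((4, 5))
expected[:r, :r] = np.eye(r)
print(np.allclose(M, expected))    # True
```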

4. Solution: If $Tv_1=0$, then any basis $w_1$, $\cdots$, $w_n$ of $W$ will satisfy the desired conditions. If $Tv_1\ne 0$, then any basis $w_1$, $\cdots$, $w_n$ of $W$ such that $w_1=Tv_1$ will satisfy the desired conditions.

5. Solution: Let $\nu_1$, $\cdots$, $\nu_m$ be a basis of $V$, and denote the first row of $\ca M(T)$ with respect to the bases $\nu_1$, $\cdots$, $\nu_m$ and $w_1$, $\cdots$, $w_n$ by $(a_1,\cdots,a_m)$. If $(a_1,\cdots,a_m)=0$, then we can choose $v_i=\nu_i$ for $i=1,\cdots,m$. If $(a_1,\cdots,a_m)\ne 0$, suppose $a_i\ne 0$ and let $$v_1=\frac{\nu_i}{a_i},\quad v_j=\nu_{j-1}-a_{j-1}v_1,\quad v_k=\nu_k-a_kv_1$$ for $j=2,\cdots,i$ and $k=i+1,\cdots,m$. Then one can check that $v_1$, $\cdots$, $v_m$ satisfy the desired conditions.
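Changing the basis of $V$ corresponds to column operations on $\ca M(T)$, so the construction can be traced numerically. A sketch assuming NumPy; `first_row_to_e1` is a helper name I made up:

```python
import numpy as np

def first_row_to_e1(A):
    """Build the change-of-basis matrix P mirroring the solution's construction,
    so that A @ P has first row (1, 0, ..., 0); assumes row 1 of A is nonzero."""
    a = A[0]
    i = np.flatnonzero(a)[0]            # pick some i with a_i != 0
    m = A.shape[1]
    P = np.zeros((m, m))
    P[i, 0] = 1 / a[i]                  # v_1 = nu_i / a_i
    for j in range(1, i + 1):           # v_j = nu_{j-1} - a_{j-1} v_1
        P[j - 1, j] = 1
        P[i, j] = -a[j - 1] / a[i]
    for k in range(i + 1, m):           # v_k = nu_k - a_k v_1
        P[k, k] = 1
        P[i, k] = -a[k] / a[i]
    return P

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))
P = first_row_to_e1(A)
print(np.allclose((A @ P)[0], np.eye(1, 5)[0]))  # True
```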

6. Solution: Suppose there exist a basis $v_1$, $\cdots$, $v_m$ of $V$ and a basis $w_1$, $\cdots$, $w_n$ of $W$ such that, with respect to these bases, all entries of $\ca M(T)$ equal $1$. Then $$Tv_i=w_1+\cdots+w_n,\quad i=1,\cdots,m.$$ Hence $\m{range} T=\m{span}(w_1+\cdots+w_n)$, and since $w_1+\cdots+w_n\ne0$, it follows that $\dim \m{range} T = 1$.

Conversely, if $\dim \m{range} T = 1$, then $\dim \m{null} T =\dim V-1$. Let $\nu_1$, $\nu_2$, $\cdots$, $\nu_m$ be a basis of $V$ such that $\nu_2$, $\cdots$, $\nu_m\in \m{null} T$. Note that $T\nu_1\ne0$, hence we can extend it to a basis $T\nu_1$, $w_2$, $\cdots$, $w_n$ of $W$. Let $w_1=T\nu_1-w_2-\cdots-w_n$, and let $v_1=\nu_1$ and $v_i=\nu_i+\nu_1$ for $i=2,\cdots,m$. Then $$Tv_1=Tv_i=T\nu_1=w_1+w_2+\cdots+w_n,\quad i=2,\cdots,m.$$ Moreover, $v_1$, $\cdots$, $v_m$ is a basis of $V$ and $w_1$, $\cdots$, $w_n$ is a basis of $W$, since each list is obtained from a basis by invertible operations. Now we can directly check that all entries of $\ca M(T)$ with respect to these bases equal $1$.
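The converse construction can be traced numerically. A sketch assuming NumPy, using an SVD only to produce a basis of $\m{null} T$ and to extend $T\nu_1$ to a basis of $W$:

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-1 map T: R^3 -> R^4, given in the standard bases.
T = np.outer(rng.standard_normal(4), rng.standard_normal(3))

U, s, Vt = np.linalg.svd(T)
nu1 = Vt[0]                        # T nu1 != 0
null_basis = Vt[1:]                # nu_2, ..., nu_m span null T
V_basis = np.column_stack([nu1, *null_basis])

Tnu1 = T @ nu1
W_ext = np.column_stack([Tnu1, U[:, 1:]])    # extend T nu1 to a basis of W
w1 = Tnu1 - W_ext[:, 1:].sum(axis=1)         # w_1 = T nu1 - w_2 - ... - w_n
W_basis = np.column_stack([w1, W_ext[:, 1:]])

P = V_basis.copy()
P[:, 1:] += P[:, [0]]              # v_1 = nu_1, v_i = nu_i + nu_1

M = np.linalg.solve(W_basis, T @ P)   # matrix of T w.r.t. the new bases
print(np.allclose(M, 1))              # True: every entry equals 1
```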

7. Solution: Given a basis $v_1$, $\cdots$, $v_m$ of $V$ and a basis $w_1$, $\cdots$, $w_n$ of $W$, denote $\ca M(T)$ and $\ca M(S)$ with respect to these bases by $A$ and $B$, respectively. Then we have $$Tv_j=\sum_{k=1}^n A_{k,j}w_k\quad\text{and}\quad Sv_j=\sum_{k=1}^n B_{k,j}w_k.$$ Hence $$(T+S)v_j=Tv_j+Sv_j=\sum_{k=1}^n (A_{k,j}+B_{k,j})w_k,$$ so the entry in row $k$, column $j$ of $\ca M(T+S)$ with respect to these bases is $A_{k,j}+B_{k,j}$. By 3.35, we deduce that $\ca M(T+S)=\ca M(T)+\ca M(S)$.
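The computation can be mirrored numerically by building each matrix column-by-column from the images of the basis vectors. A sketch assuming NumPy; `matrix_of` is a helper name I made up:

```python
import numpy as np

def matrix_of(f, dim_in, dim_out):
    """Matrix of a linear map w.r.t. the standard bases:
    column j holds the coordinates of f(e_j)."""
    return np.column_stack([f(np.eye(dim_in)[:, j]) for j in range(dim_in)])

A = np.array([[1., 2.], [3., 4.], [5., 6.]])
B = np.array([[0., 1.], [1., 0.], [2., 2.]])
T = lambda v: A @ v
S = lambda v: B @ v

# The matrix of T + S equals M(T) + M(S), column by column.
M_sum = matrix_of(lambda v: T(v) + S(v), 2, 3)
print(np.allclose(M_sum, matrix_of(T, 2, 3) + matrix_of(S, 2, 3)))  # True
```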

8. Verify 3.38.

Solution: It is almost the same as the previous exercise.

9. Prove 3.52.

Solution: It is almost the same as Problem 11. Just consider the entries.

10. Suppose $A$ is an $m$-by-$n$ matrix and $C$ is an $n$-by-$p$ matrix. Prove that $(AC)_{j,\cdot}=A_{j,\cdot}C$ for $1\le j\le m$. In other words, show that row $j$ of $AC$ equals (row $j$ of $A$) times $C$.

Solution: It is almost the same as Problem 11. Just consider the entries.
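Though the proof is omitted above, the identity is easy to spot-check numerically. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 5))   # m-by-n
C = rng.standard_normal((5, 3))   # n-by-p
AC = A @ C
for j in range(4):
    # row j of AC equals (row j of A) times C
    assert np.allclose(AC[j], A[j] @ C)
print("ok")
```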

These exercises are tedious. I prefer solving other, more interesting exercises… If you have problems regarding them, please make a comment.

11. Solution: By 3.41, we have $$(aC)_{1,k}=\sum_{i=1}^na_iC_{i,k}.$$ It is obvious that $(a_iC_{i,\cdot})_{1,k}=a_iC_{i,k}$. Hence $$(aC)_{1,k}=(a_1C_{1,\cdot})_{1,k}+\cdots+(a_nC_{n,\cdot})_{1,k}=(a_1C_{1,\cdot}+\cdots+a_nC_{n,\cdot})_{1,k}$$ by 3.35. Thus we deduce that $aC=a_1C_{1,\cdot}+\cdots+a_nC_{n,\cdot}$.
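A quick numerical check of the identity $aC=a_1C_{1,\cdot}+\cdots+a_nC_{n,\cdot}$, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
a = rng.standard_normal(5)          # a 1-by-n row vector
C = rng.standard_normal((5, 3))     # an n-by-p matrix
# a_1 C_{1,.} + ... + a_n C_{n,.}: a linear combination of the rows of C
combo = sum(a[i] * C[i] for i in range(5))
print(np.allclose(a @ C, combo))    # True
```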

12. Solution: Let $A=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right)$ and $C=\left( \begin{array}{cc} 1 & 0 \\ 0 & 2 \\ \end{array} \right)$. Then we have $$AC=\left( \begin{array}{cc} 0 & 2 \\ 1 & 0 \\ \end{array} \right)\quad\text{and}\quad CA=\left( \begin{array}{cc} 0 & 1 \\ 2 & 0 \\ \end{array} \right).$$ Hence $AC\ne CA$.
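The counterexample can be verified directly, e.g. with NumPy:

```python
import numpy as np

A = np.array([[0, 1], [1, 0]])
C = np.array([[1, 0], [0, 2]])
print(A @ C)                          # rows (0, 2) and (1, 0)
print(C @ A)                          # rows (0, 1) and (2, 0)
print(np.array_equal(A @ C, C @ A))   # False
```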

13. Solution: See Linear Algebra Done Right Solution Manual Chapter 3 Problem 17.

14. Solution: See Linear Algebra Done Right Solution Manual Chapter 3 Problem 18.

15. Solution: Note that $A^3=(AA)A$. Denote $AA=B$; then by definition (3.41) $$\label{3CP151} B_{j,r}=\sum_{p=1}^n A_{j,p}A_{p,r}.$$ Similarly, the entry in row $j$, column $k$ of $A^3$ is $$\sum_{r=1}^n B_{j,r}A_{r,k}.$$ Hence by $(\ref{3CP151})$, we have $$\sum_{r=1}^n B_{j,r}A_{r,k}=\sum_{r=1}^n \sum_{p=1}^n A_{j,p}A_{p,r}A_{r,k}=\sum_{p=1}^n\sum_{r=1}^nA_{j,p}A_{p,r}A_{r,k}.$$
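The double-sum formula for $(A^3)_{j,k}$ can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.standard_normal((n, n))
A3 = A @ A @ A
j, k = 1, 2
# sum over p and r of A_{j,p} A_{p,r} A_{r,k}
entry = sum(A[j, p] * A[p, r] * A[r, k] for p in range(n) for r in range(n))
print(np.isclose(A3[j, k], entry))   # True
```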

### This Post Has 11 Comments

1. Could someone help me understand the proof for Q.5? I went about it this way:
Suppose v1,...,vm is a basis of null(T) and null(T)=V; then v1,...,vm is a basis of V. Applying T to v1,...,vm gives 0, so the entire first row of M(T) has 0 entries for any choice of basis of V.
But I do not understand how to approach the case where row 1 of M(T) contains a 1 in row 1, column 1... can I say: let v2,...,vm be a basis of null(T) and extend this basis to a basis of V so that v1,v2,...,vm is a basis of V, and apply T to this? Can someone explain the solution provided above?
Thanks!

1. The solution says we can just guess a basis of V; then there are two situations if we calculate the matrix using the chosen basis:
1. The basis turned out to be just what we want, where (a_1, ..., a_m) is all zero. This satisfies the condition that the first row is all zero.
2. Some of (a_1, ..., a_m) are not zero, so there are 1 or more nonzero elements in the first row. In this situation, we construct a new basis satisfying the condition that only the first element is one (the choice of i can be arbitrary as long as a_i != 0). The method of constructing the new basis is like this:
a. Divide v_i by a_i and set this as the new v_1, so that the new a_1 will be one.
b. Subtract a_j v_1 from every other vector, so that the new a_j will be zero. Notice that because we moved v_i to the front, the old v_1,...,v_{i-1} are shifted to the new v_2,...,v_i.

The two cases above include all choices of v_1,...,v_m, so no matter what (w_1,...,w_n) is, we will be able to guess or construct a basis of V satisfying the condition.

1. Shouldn't the new vector v_1 be such that its image under T is exactly equal to one of the vectors in the basis of W? This way, a_{1,1}=1 and a_{2,1},... = 0. Then when we subtract the new v_1 from v_2, the image of the new v_2 will have no w_1 component, and thus a_{1,2}=0. Am I correct?

2. Hello,

I think there is a minor issue in your solution to 3.C.3 since T(v1),...,T(vn) is not guaranteed to be linearly independent (since T might not be injective). I think you should instead find a basis of null T, then extend this to a basis of V. Then T is injective on the basis vectors not in null T, so use your solution on these vectors.

1. There are no issues. It's stated that we are using the notation from the proof of 3.22.

3. I'm having a little trouble understanding the solution of 3.C.1. Could you enlighten me on why the matrix of T has at most dim range T - 1 nonzero entries?

4. Is solution 2 correct?

1. Sure.

1. I see now: it is not said to be with respect to the standard bases. If it were, we would have (3,0,0), (0,2,0), (0,0,1) and (0,0,0) for the columns.