Exercise 2.6.1
Let $\alpha_1$, $\alpha_2$, $\dots$, $\alpha_n$ be the columns of $A$. Then $\alpha_i\in F^{s\times1}$ for every $i$, so $\{\alpha_1,\dots,\alpha_n\}$ is a set of $n$ vectors in $F^{s\times1}$. But $F^{s\times1}$ has dimension $s<n$, so by Theorem 4, page 44, $\alpha_1,\dots,\alpha_n$ cannot be linearly independent. Thus there exist scalars $x_1,\dots,x_n\in F$, not all zero, such that $x_1\alpha_1+\cdots+x_n\alpha_n=0$. Thus if
$$X=\left[\begin{array}{c}x_1\\ \vdots \\ x_n\end{array}\right]$$then $X\neq0$ and $AX=x_1\alpha_1+\cdots+x_n\alpha_n=0$.
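To illustrate the argument concretely, here is a short sympy sketch (our own illustration; the $2\times3$ matrix is an arbitrary choice, not from the text) that exhibits a non-zero $X$ with $AX=0$:

```python
import sympy as sp

# An arbitrary matrix with more columns (n = 3) than rows (s = 2),
# chosen only to illustrate the argument above.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])

# The three columns live in a 2-dimensional space, so they are linearly
# dependent and the nullspace of A is non-trivial.
X = A.nullspace()[0]   # a non-zero column with AX = 0
print(X.T)             # Matrix([[1, -2, 1]])
print((A * X).T)       # Matrix([[0, 0]])
```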
Exercise 2.6.2
(a) We use the approach of row-reducing the matrix whose rows are given by the $\alpha_i$:
$$\left[\begin{array}{cccc}1&1&-2&1\\3&0&4&-1\\-1&2&5&2\end{array}\right]$$$$\rightarrow\left[\begin{array}{cccc}1&1&-2&1\\0&-3&10&-4\\0&3&3&3\end{array}\right]$$$$\rightarrow\left[\begin{array}{cccc}1&1&-2&1\\0&0&13&-1\\0&1&1&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{cccc}1&0&-3&0\\0&1&1&1\\0&0&13&-1\end{array}\right]$$$$\rightarrow\left[\begin{array}{cccc}1&0&-3&0\\0&1&1&1\\0&0&1&-1/13\end{array}\right]$$$$\rightarrow\left[\begin{array}{cccc}1&0&0&-3/13\\0&1&0&14/13\\0&0&1&-1/13\end{array}\right]$$Let $\rho_1=(1,0,0,-3/13)$, $\rho_2=(0,1,0,14/13)$ and $\rho_3=(0,0,1,-1/13)$. Thus elements of the subspace spanned by the $\alpha_i$ are of the form $b_1\rho_1+b_2\rho_2+b_3\rho_3$ $$=\left(b_1,\ b_2,\ b_3,\ {\textstyle \frac{1}{13}}(14b_2-3b_1-b_3)\right).$$
- $\alpha=(4,-5,9,-7)$. We have $b_1=4$, $b_2=-5$ and $b_3=9$. Thus if $\alpha$ is in the subspace it must be that $$\frac{1}{13}(14(-5)-3(4)-9)\overset{\text{?}}{=}b_4$$ where $b_4=-7$. Indeed the left hand side does equal $-7$, so $\alpha$ is in the subspace.
- $\beta=(3,1,-4,4)$. We have $b_1=3$, $b_2=1$, $b_3=-4$. Thus if $\beta$ is in the subspace it must be that $$\frac{1}{13}(14-3(3)+4)\overset{\text{?}}{=}b_4$$ where $b_4=4$. But the left hand side equals $9/13\not=4$, so $\beta$ is not in the subspace.
- $\gamma=(-1,1,0,1)$. We have $b_1=-1$, $b_2=1$, $b_3=0$. Thus if $\gamma$ is in the subspace it must be that $$\frac{1}{13}(14-3(-1)-0)\overset{\text{?}}{=}b_4$$ where $b_4=1$. But the left hand side equals $17/13\not=1$, so $\gamma$ is not in the subspace.
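The three membership tests can be double-checked mechanically. The following sympy sketch (our own verification, not part of the hand computation) appends each vector to the matrix of the $\alpha_i$ and compares ranks:

```python
import sympy as sp

# Rows are the alpha_i of part (a); A.rref() reproduces rho_1, rho_2, rho_3.
A = sp.Matrix([[ 1, 1, -2,  1],
               [ 3, 0,  4, -1],
               [-1, 2,  5,  2]])

def in_span(v):
    # v lies in the row space of A iff appending it does not raise the rank
    return A.col_join(sp.Matrix([v])).rank() == A.rank()

print(in_span([4, -5, 9, -7]))   # True:  alpha is in the subspace
print(in_span([3, 1, -4, 4]))    # False: beta is not
print(in_span([-1, 1, 0, 1]))    # False: gamma is not
```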
(b) Nothing in part (a) used the fact that the field was $\mathbb R$ rather than $\mathbb C$: the only equations we had to solve were linear equations with real coefficients, and such a system has a solution in $\mathbb R$ if and only if it has a solution in $\mathbb C$. Thus the same results hold: $\alpha$ is in the subspace while $\beta$ and $\gamma$ are not.
(c) This suggests the following theorem: Suppose $F$ is a subfield of the field $E$, the vectors $\alpha_1,\dots,\alpha_n$ form a basis for a subspace of $F^n$, and $\alpha\in F^n$. Then $\alpha$ is in the subspace of $F^n$ generated by $\alpha_1,\dots,\alpha_n$ if and only if $\alpha$ is in the subspace of $E^n$ generated by $\alpha_1,\dots,\alpha_n$.
Exercise 2.6.3
We use the approach of row-reducing the matrix whose rows are given by the $\alpha_i$:
$$\left[\begin{array}{cccc}-1&0&1&2 \\ 3&4&-2&5 \\ 1&4&0&9\end{array}\right]\rightarrow
\left[\begin{array}{cccc}1&0&-1&-2 \\ 0&4&1&11 \\ 0&4&1&11\end{array}\right]\rightarrow
\left[\begin{array}{cccc}1&0&-1&-2 \\ 0&1&1/4&11/4 \\ 0&0&0&0\end{array}\right].$$Let $\rho_1=(1,0,-1,-2)$ and $\rho_2=(0,1,1/4,11/4)$. Then an arbitrary element of the subspace spanned by the $\alpha_i$ has the form $b_1\rho_1+b_2\rho_2$ for $b_1,b_2\in \mathbb R$. Expanding we get
$$b_1\rho_1+b_2\rho_2=(b_1,\ \ b_2,\ \ -b_1+\frac14b_2,\ \ -2b_1+\frac{11}{4}b_2).$$Thus the equations that must be satisfied for $(x,y,z,w)$ to be in the subspace are
$$\left\{\begin{array}{l}z=-x+\frac14y\\w=-2x+\frac{11}{4}y\end{array}\right.$$ or equivalently
$$\left\{\begin{array}{l}-x+\frac14y-z=0\\-2x+\frac{11}{4}y-w=0\end{array}\right. .$$
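As a quick sanity check (ours, not part of the solution), one can verify in sympy that each $\alpha_i$ satisfies both equations:

```python
import sympy as sp

# Rows are the alpha_i from the exercise.
A = sp.Matrix([[-1, 0,  1, 2],
               [ 3, 4, -2, 5],
               [ 1, 4,  0, 9]])

# Each alpha_i should satisfy both equations derived above.
for x, y, z, w in A.tolist():
    print(-x + sp.Rational(1, 4)*y - z,        # prints 0
          -2*x + sp.Rational(11, 4)*y - w)     # prints 0
```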
Exercise 2.6.4
We use the approach of row-reducing the augmented matrix:
$$\left[\begin{array}{ccc|ccc}1&0&-i & 1&0&0\\1+i&1-i&1 & 0&1&0\\i&i&i & 0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-i & 1&0&0\\0&1-i&i & -1-i&1&0\\0&i&i-1 & -i&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-i & 1&0&0\\0&1&\frac{-1+i}{2} & -i&\frac{1+i}{2}&0\\0&i&i-1 & -i&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-i & 1&0&0\\0&1&\frac{-1+i}{2} & -i&\frac{1+i}{2}&0\\0&0&\frac{-1+3i}{2}\rule{0mm}{4mm} & -1-i&\frac{1-i}{2}&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-i & 1&0&0\\0&1&\frac{-1+i}{2} & -i&\frac{1+i}{2}&0\\0&0&1\rule{0mm}{4mm} & \frac{-2+4i}{5}&\frac{-2-i}{5}&\frac{-1-3i}{5}\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&0 & \frac{1-2i}{5}&\frac{1-2i}{5}&\frac{3-i}{5}\\0&1&0 & \frac{1-2i}{5} & \frac{1+3i}{5}&\frac{-2-i}{5}\rule{0mm}{4mm}\\0&0&1\rule{0mm}{4mm} & \frac{-2+4i}{5}&\frac{-2-i}{5}&\frac{-1-3i}{5}\end{array}\right]$$Since the left side transformed into the identity matrix, $\{\alpha_1,\alpha_2,\alpha_3\}$ is a basis for $\mathbb C^3$. We used the vectors as the rows of the augmented matrix, not the columns, so the matrix on the right is $(P^{\text T})^{-1}=(P^{-1})^{\text T}$, where $P$ is the matrix of (2-17) whose columns are the $\alpha_i$. Transposing it gives $P^{-1}$, so the coordinate matrix of $(a,b,c)$ with respect to the basis $\mathcal B=\{\alpha_1,\alpha_2,\alpha_3\}$ is
$$[(a,b,c)]_{\mathcal B}=P^{-1}\left[\begin{array}{c}a\\b\\c\end{array}\right]$$$$= \left[\begin{array}{c}
\frac{1-2i}{5}a+\frac{1-2i}{5}b+\frac{-2+4i}{5}c\\
\rule{0mm}{4mm}\frac{1-2i}{5}a+ \frac{1+3i}{5}b+\frac{-2-i}{5}c\\
\rule{0mm}{4mm}\frac{3-i}{5}a+\frac{-2-i}{5}b+\frac{-1-3i}{5}c
\end{array}
\right].$$
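This can be checked with sympy (our own verification): with $P$ the matrix whose columns are the $\alpha_i$, $P^{-1}$ should be the transpose of the right-hand block obtained above, and it should send each $\alpha_i$ to the corresponding standard basis vector.

```python
import sympy as sp

i = sp.I
a1 = sp.Matrix([1, 0, -i])
a2 = sp.Matrix([1 + i, 1 - i, 1])
a3 = sp.Matrix([i, i, i])

# P has the alpha_i as its columns; invertibility confirms they form a basis.
P = sp.Matrix.hstack(a1, a2, a3)
Pinv = sp.simplify(P.inv())
print(Pinv)   # transpose of the right-hand block obtained above

# Coordinates of each alpha_i should be the standard basis vectors e1, e2, e3.
for a in (a1, a2, a3):
    print(sp.simplify(Pinv * a).T)
```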
Exercise 2.6.5
We row-reduce the matrix whose rows are given by the $\alpha_i$’s.
$$\left[\begin{array}{ccccc}1&0&2&1&-1\\-1&2&-4&2&0\\2&-1&5&2&1\\2&1&3&5&2\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}1&0&2&1&-1\\0&2&-2&3&-1\\0&-1&1&0&3\\0&1&-1&3&4\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}1&0&2&1&-1\\0&1&-1&3&4\\0&0&0&-3&-9\\0&0&0&3&7\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}1&0&2&1&-1\\0&1&-1&3&4\\0&0&0&1&3\\0&0&0&3&7\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}1&0&2&0&-4\\0&1&-1&0&-5\\0&0&0&1&3\\0&0&0&0&-2\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}1&0&2&0&-4\\0&1&-1&0&-5\\0&0&0&1&3\\0&0&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}1&0&2&0&0\\0&1&-1&0&0\\0&0&0&1&0\\0&0&0&0&1\end{array}\right]$$Let $\rho_1=(1,0,2,0,0)$, $\rho_2=(0,1,-1,0,0)$, $\rho_3=(0,0,0,1,0)$ and $\rho_4=(0,0,0,0,1)$. Then the general linear combination of the $\alpha_i$’s is $$b_1\rho_1+b_2\rho_2+b_3\rho_3+b_4\rho_4=(b_1,\ \ b_2,\ \ 2b_1-b_2,\ \ b_3,\ \ b_4).$$
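The reduction can be reproduced with sympy's rref (our own check):

```python
import sympy as sp

# Rows are the alpha_i.
A = sp.Matrix([[ 1,  0,  2, 1, -1],
               [-1,  2, -4, 2,  0],
               [ 2, -1,  5, 2,  1],
               [ 2,  1,  3, 5,  2]])

R, pivots = A.rref()
print(R)        # rows rho_1, rho_2, rho_3, rho_4 as above
print(pivots)   # (0, 1, 3, 4): the third coordinate is the free one
```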
Exercise 2.6.6
We row-reduce the matrix
$$\left[\begin{array}{ccccc}
3&21&0&9&0\\
1&7&-1&-2&-1\\
2&14&0&6&1\\
6&42&-1&13&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1&7&-1&-2&-1\\
0&0&3&15&3\\
0&0&2&10&3\\
0&0&5&25&6
\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1&7&-1&-2&-1\\
0&0&1&5&1\\
0&0&2&10&3\\
0&0&5&25&6
\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1&7&0&3&0\\
0&0&1&5&1\\
0&0&0&0&1\\
0&0&0&0&1
\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1&7&0&3&0\\
0&0&1&5&0\\
0&0&0&0&1\\
0&0&0&0&0
\end{array}\right]$$(a) A basis for $V$ is given by the non-zero rows of the reduced matrix
$$\rho_1=(1,7,0,3,0),\quad\rho_2=(0,0,1,5,0),\quad\rho_3=(0,0,0,0,1).$$(b) The vectors of $V$ are exactly those of the form $b_1\rho_1+b_2\rho_2+b_3\rho_3$
$$=(b_1,\ \ 7b_1,\ \ b_2,\ \ 3b_1+5b_2,\ \ b_3)$$ for arbitrary $b_1,b_2,b_3\in\mathbb R$.
(c) By part (b), an element $(x_1,x_2,x_3,x_4,x_5)$ of $V$ equals $x_1\rho_1+x_3\rho_2+x_5\rho_3$ (take $b_1=x_1$, $b_2=x_3$, $b_3=x_5$). In other words, if $\mathcal B=\{\rho_1,\rho_2,\rho_3\}$ is the basis for $V$ given in part (a), then the coordinate matrix of $(x_1,x_2,x_3,x_4,x_5)$ is
$$[(x_1,x_2,x_3,x_4,x_5)]_{\mathcal B}=\left[\begin{array}{c}x_1\\x_3\\x_5\end{array}\right].$$
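The parts above can be double-checked with sympy (our own verification); the pivot columns $1,3,5$ are exactly the coordinates $x_1,x_3,x_5$ used in part (c).

```python
import sympy as sp

A = sp.Matrix([[3, 21,  0,  9,  0],
               [1,  7, -1, -2, -1],
               [2, 14,  0,  6,  1],
               [6, 42, -1, 13,  0]])

R, pivots = A.rref()
print(R)        # non-zero rows are rho_1, rho_2, rho_3 of part (a)
print(pivots)   # (0, 2, 4), i.e. columns 1, 3, 5

# Spot check of part (c): the second row of A is (1, 7, -1, -2, -1), so its
# coordinates should be (x_1, x_3, x_5) = (1, -1, -1).
rho1, rho2, rho3 = R.row(0), R.row(1), R.row(2)
print(1*rho1 + (-1)*rho2 + (-1)*rho3)   # equals A.row(1)
```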
Exercise 2.6.7
To solve the system we row-reduce the augmented matrix $[A\,|\,Y]$, obtaining an augmented matrix $[R\,|\,Z]$ where $R$ is in row-reduced echelon form and $Z$ is an $m\times1$ column. Suppose the last $k$ rows of $R$ are zero rows. The system has a solution if and only if the last $k$ entries of $Z$ are also zero.

First suppose the system has a solution, so the only non-zero entries of $Z$ occur in the non-zero rows of $R$. The non-zero rows of $R$ are linearly independent, and they clearly remain independent after the entries of $Z$ are appended, while the remaining rows of $[R\,|\,Z]$ are zero. Thus if there are solutions, the rank of $[R\,|\,Z]$ equals the rank of $R$.

Conversely, suppose $Z$ has a non-zero entry in one of the last $k$ rows, so the system has no solution. We show that this row is linearly independent from the non-zero rows of $R$, so that the rank of $R$ is strictly less than the rank of $[R\,|\,Z]$. Let $S$ consist of the rows of $[R\,|\,Z]$ in which $R$ is non-zero, together with one additional row $r$ in which $R$ is zero but $Z$ is non-zero. Suppose a linear combination of the elements of $S$ equals zero, and suppose some row $r'\in S$ different from $r$ has a non-zero coefficient $c$. Let the leading one of $r'$ be in position $i$. Then the $i$-th coordinate of the linear combination equals $c$, because every other element of $S$ has a zero in column $i$: the other non-zero rows of $R$ because $R$ is in reduced echelon form, and $r$ because its $R$-part is zero. Hence $c=0$, a contradiction. So every row other than $r$ has coefficient zero, and then the coefficient of $r$ is zero as well since $r\neq0$. Thus $S$ is linearly independent, and it contains one more row than the rank of $R$, so the rank of $[R\,|\,Z]$ exceeds the rank of $R$.

Therefore the system has a solution if and only if the rank of $R$ equals the rank of $[R\,|\,Z]$. Now $A$ has the same rank as $R$, and $[A\,|\,Y]$ has the same rank as $[R\,|\,Z]$, since in each case the matrices differ by elementary row operations. Thus the system has a solution if and only if the rank of $A$ equals the rank of $[A\,|\,Y]$.
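To illustrate the criterion (with a small matrix of our own choosing, not from the text):

```python
import sympy as sp

# rank(A) = 1; the second row is twice the first.
A = sp.Matrix([[1, 2],
               [2, 4]])

Y1 = sp.Matrix([3, 6])   # consistent right-hand side
Y2 = sp.Matrix([3, 7])   # inconsistent right-hand side

for Y in (Y1, Y2):
    aug = A.row_join(Y)                    # the augmented matrix [A | Y]
    print(A.rank(), aug.rank(),            # ranks agree iff AX = Y is solvable
          A.rank() == aug.rank())
# prints: 1 1 True   then   1 2 False
```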
From http://greggrant.org