Exercise 2.6.1
Let $A_1, A_2, \dots, A_n$ be the columns of the $s \times n$ matrix $A$, where $s < n$. Then each $A_j \in F^{s \times 1}$. Thus $A_1, \dots, A_n$ are $n$ vectors in $F^{s \times 1}$. But $F^{s \times 1}$ has dimension $s < n$, thus by Theorem 4, page 44, $A_1, \dots, A_n$ cannot be linearly independent. Thus there exist scalars $x_1, \dots, x_n$, not all zero, such that $x_1 A_1 + \cdots + x_n A_n = 0$. Thus if
$$X = \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$
then $X \ne 0$ and $AX = x_1 A_1 + \cdots + x_n A_n = 0$.
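The argument can be sketched numerically; the $2 \times 3$ matrix below is an arbitrary example chosen for illustration, not one from the book.

```python
import sympy as sp

# A 2 x 3 matrix, so s = 2 < n = 3 as in the exercise.
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])

# The columns are 3 vectors in a 2-dimensional space, so they are
# linearly dependent and the nullspace of A is non-trivial.
X = A.nullspace()[0]

print(X.T)    # a non-zero column vector X
print(A * X)  # A X = 0, the conclusion of the exercise
```

Any nonzero multiple of the nullspace vector works equally well, which matches the non-uniqueness of the scalars $x_1, \dots, x_n$ in the proof.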
Exercise 2.6.2
(a) We use the approach of row-reducing the matrix whose rows are given by the $\alpha_i$:
Let $\alpha_1 = (1,1,-2,1)$, $\alpha_2 = (3,0,4,-1)$, and $\alpha_3 = (-1,2,5,2)$. Row-reducing gives
$$\begin{bmatrix} 1 & 1 & -2 & 1 \\ 3 & 0 & 4 & -1 \\ -1 & 2 & 5 & 2 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 0 & 0 & -3/13 \\ 0 & 1 & 0 & 14/13 \\ 0 & 0 & 1 & -1/13 \end{bmatrix}.$$
Thus elements of the subspace spanned by the $\alpha_i$ are of the form $x_1(1,0,0,-\tfrac{3}{13}) + x_2(0,1,0,\tfrac{14}{13}) + x_3(0,0,1,-\tfrac{1}{13})$; equivalently, $(b_1,b_2,b_3,b_4)$ lies in the subspace if and only if $-3b_1 + 14b_2 - b_3 = 13b_4$.
- $\alpha = (4,-5,9,-7)$. We have $b_1 = 4$, $b_2 = -5$, and $b_3 = 9$. Thus if $\alpha$ is in the subspace it must be that
$$-3b_1 + 14b_2 - b_3 = 13b_4,$$
where $b_4 = -7$. Indeed the left hand side does equal $-12 - 70 - 9 = -91 = 13 \cdot (-7)$, so $\alpha$ is in the subspace.
- $\beta = (3,1,-4,4)$. We have $b_1 = 3$, $b_2 = 1$, $b_3 = -4$. Thus if $\beta$ is in the subspace it must be that
$$-3b_1 + 14b_2 - b_3 = 13b_4,$$
where $b_4 = 4$. But the left hand side equals $-9 + 14 + 4 = 9 \ne 52$, so $\beta$ is not in the subspace.
- $\gamma = (-1,1,0,1)$. We have $b_1 = -1$, $b_2 = 1$, $b_3 = 0$. Thus if $\gamma$ is in the subspace it must be that
$$-3b_1 + 14b_2 - b_3 = 13b_4,$$
where $b_4 = 1$. But the left hand side equals $3 + 14 - 0 = 17 \ne 13$, so $\gamma$ is not in the subspace.
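These membership tests can be checked numerically. The sketch below assumes the exercise's vectors are $\alpha_1 = (1,1,-2,1)$, $\alpha_2 = (3,0,4,-1)$, $\alpha_3 = (-1,2,5,2)$ and the candidates are $(4,-5,9,-7)$, $(3,1,-4,4)$, $(-1,1,0,1)$.

```python
import sympy as sp

# Rows are the assumed alpha_1, alpha_2, alpha_3 of the exercise.
M = sp.Matrix([[1, 1, -2, 1],
               [3, 0, 4, -1],
               [-1, 2, 5, 2]])

def in_row_space(v):
    # v lies in the row space iff appending it as a row does not raise the rank
    return sp.Matrix.vstack(M, sp.Matrix([v])).rank() == M.rank()

print(in_row_space([4, -5, 9, -7]))  # alpha: in the subspace
print(in_row_space([3, 1, -4, 4]))   # beta: not in the subspace
print(in_row_space([-1, 1, 0, 1]))   # gamma: not in the subspace
```

The rank criterion is just a repackaging of the row-reduction argument above: a vector is a combination of the rows exactly when adjoining it changes nothing.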
(b) Nowhere in the above did we use the fact that the field was $\mathbb{R}$ instead of $\mathbb{C}$. The only equations we had to solve are linear equations with real coefficients, which have solutions in $\mathbb{R}$ if and only if they have solutions in $\mathbb{C}$. Thus the same results hold: $\alpha$ is in the subspace while $\beta$ and $\gamma$ are not.
(c) This suggests the following theorem: Suppose $F$ is a subfield of the field $E$, the vectors $\alpha_1, \dots, \alpha_k \in F^n$ are a basis for a subspace of $F^n$, and $\beta \in F^n$. Then $\beta$ is in the subspace of $F^n$ generated by $\alpha_1, \dots, \alpha_k$ if and only if $\beta$ is in the subspace of $E^n$ generated by $\alpha_1, \dots, \alpha_k$.
Exercise 2.6.3
We use the approach of row-reducing the matrix whose rows are given by the $\alpha_i$:
$$\begin{bmatrix} -1 & 0 & 1 & 2 \\ 3 & 4 & -2 & 5 \\ 1 & 4 & 0 & 9 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 0 & -1 & -2 \\ 0 & 1 & \tfrac14 & \tfrac{11}{4} \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
Let $\rho_1 = (1,0,-1,-2)$ and $\rho_2 = (0,1,\tfrac14,\tfrac{11}{4})$. Then the arbitrary element of the subspace spanned by $\rho_1$ and $\rho_2$ is of the form $x_1\rho_1 + x_2\rho_2$ for arbitrary $x_1, x_2$. Expanding we get
$$\left(x_1,\; x_2,\; -x_1 + \tfrac14 x_2,\; -2x_1 + \tfrac{11}{4}x_2\right).$$
Thus the equations that must be satisfied for $(b_1, b_2, b_3, b_4)$ to be in the subspace are
$$b_3 = -b_1 + \tfrac14 b_2, \qquad b_4 = -2b_1 + \tfrac{11}{4} b_2,$$
or equivalently
$$4b_1 - b_2 + 4b_3 = 0, \qquad 8b_1 - 11b_2 + 4b_4 = 0.$$
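A quick check of the resulting homogeneous system, assuming the exercise's vectors are $\alpha_1 = (-1,0,1,2)$, $\alpha_2 = (3,4,-2,5)$, $\alpha_3 = (1,4,0,9)$:

```python
import sympy as sp

# Assumed vectors of the exercise.
vectors = [(-1, 0, 1, 2), (3, 4, -2, 5), (1, 4, 0, 9)]
# Coefficient rows of the two homogeneous equations derived above:
# 4*b1 - b2 + 4*b3 = 0  and  8*b1 - 11*b2 + 4*b4 = 0.
eqs = [(4, -1, 4, 0), (8, -11, 0, 4)]

# Every given vector should satisfy both equations.
ok = all(sum(c * b for c, b in zip(eq, v)) == 0
         for eq in eqs for v in vectors)
print(ok)  # True

# The solution space of the system is 2-dimensional, matching the span.
E = sp.Matrix(eqs)
print(len(E.nullspace()))  # 2
```

That the solution space has dimension two (not three) reflects that $\alpha_3 = 2\alpha_1 + \alpha_2$, so the span itself is two-dimensional.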
Exercise 2.6.4
We use the approach of row-reducing the augmented matrix:
$$\left[\begin{array}{ccc|ccc} 1 & 0 & -1 & 1 & 0 & 0 \\ 1 & 2 & 1 & 0 & 1 & 0 \\ 0 & -3 & 2 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{7}{10} & \tfrac{3}{10} & \tfrac15 \\ 0 & 1 & 0 & -\tfrac15 & \tfrac15 & -\tfrac15 \\ 0 & 0 & 1 & -\tfrac{3}{10} & \tfrac{3}{10} & \tfrac15 \end{array}\right].$$
Since the left side transformed into the identity matrix we know that $\alpha_1 = (1,0,-1)$, $\alpha_2 = (1,2,1)$, $\alpha_3 = (0,-3,2)$ form a basis for $\mathbb{R}^3$. We used the vectors to form the rows of the augmented matrix, not the columns, so the matrix $B$ on the right is the transpose of the matrix $P^{-1}$ from (2-17). But $BA = I$, where $A$ is the matrix with rows $\alpha_i$, so $\varepsilon_i = \sum_j B_{ij}\alpha_j$, and the coordinate matrices of the standard basis vectors with respect to the basis $\{\alpha_1, \alpha_2, \alpha_3\}$ are given by
$$[\varepsilon_1] = \begin{bmatrix} 7/10 \\ 3/10 \\ 1/5 \end{bmatrix}, \qquad [\varepsilon_2] = \begin{bmatrix} -1/5 \\ 1/5 \\ -1/5 \end{bmatrix}, \qquad [\varepsilon_3] = \begin{bmatrix} -3/10 \\ 3/10 \\ 1/5 \end{bmatrix}.$$
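The coordinates can be checked by inverting, assuming the exercise's basis vectors are $\alpha_1 = (1,0,-1)$, $\alpha_2 = (1,2,1)$, $\alpha_3 = (0,-3,2)$:

```python
import sympy as sp

# Rows of A are the assumed basis vectors alpha_1, alpha_2, alpha_3.
# Row-reducing [A | I] yields [I | B] with B = A**-1, and row i of B
# holds the coordinates of the i-th standard basis vector.
A = sp.Matrix([[1, 0, -1],
               [1, 2, 1],
               [0, -3, 2]])
B = A.inv()

print(B.row(0))      # coordinates of epsilon_1: [7/10, 3/10, 1/5]
print(B.row(0) * A)  # that combination of the alpha_i is [1, 0, 0]
```

Since $BA = I$, each row of $B$ multiplied against the rows of $A$ reproduces the corresponding standard basis vector, which is exactly the coordinate statement above.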
Exercise 2.6.5
We row-reduce the matrix whose rows are given by the $\alpha_i$'s.
Let $\rho_1, \rho_2, \dots$ be the non-zero rows of the resulting reduced echelon matrix. These span the same subspace as the $\alpha_i$'s, so the general element that is a linear combination of the $\alpha_i$'s is $x_1\rho_1 + x_2\rho_2 + \cdots$ for arbitrary scalars $x_i$. Because each leading one of the reduced matrix is the only non-zero entry in its column, the coefficients $x_i$ can be read off directly from the entries of the element in the pivot positions.
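The recipe of reading the general element off the reduced rows can be sketched as follows; the vectors here are hypothetical stand-ins, not the exercise's actual $\alpha_i$:

```python
import sympy as sp

# Hypothetical stand-in vectors, chosen only to illustrate the method.
M = sp.Matrix([[1, 2, 0, 3],
               [2, 4, 1, 8],
               [3, 6, 1, 11]])

R, pivots = M.rref()
print(R)       # the non-zero rows of R span the same subspace as the rows of M
print(pivots)  # pivot columns (0, 2): coordinates are read off these positions
```

Here the general element is $x_1(1,2,0,3) + x_2(0,0,1,2)$, and for any vector in the span, $x_1$ and $x_2$ appear verbatim in the pivot coordinates.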
Exercise 2.6.6
We row-reduce the matrix
$$A = \begin{bmatrix} 3 & 21 & 0 & 9 & 0 \\ 1 & 7 & -1 & -2 & -1 \\ 2 & 14 & 0 & 6 & 1 \\ 6 & 42 & -1 & 13 & 0 \end{bmatrix} \longrightarrow \begin{bmatrix} 1 & 7 & 0 & 3 & 0 \\ 0 & 0 & 1 & 5 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.$$
(a) A basis for $V$ is given by the non-zero rows of the reduced matrix: $\rho_1 = (1,7,0,3,0)$, $\rho_2 = (0,0,1,5,0)$, $\rho_3 = (0,0,0,0,1)$.
(b) Vectors of $V$ are any of the form
$$c_1\rho_1 + c_2\rho_2 + c_3\rho_3 = (c_1,\; 7c_1,\; c_2,\; 3c_1 + 5c_2,\; c_3)$$
for arbitrary $c_1, c_2, c_3 \in \mathbb{R}$.
(c) By the above, the element $(x_1, x_2, x_3, x_4, x_5)$ in $V$ must be of the form $(x_1,\, 7x_1,\, x_3,\, 3x_1 + 5x_3,\, x_5)$. In other words if $\{\rho_1, \rho_2, \rho_3\}$ is the basis for $V$ given in part (a), then the coordinate matrix of $(x_1, \dots, x_5)$ is
$$\begin{bmatrix} x_1 \\ x_3 \\ x_5 \end{bmatrix}.$$
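The row reduction can be verified directly, assuming the exercise's matrix is the $4 \times 5$ matrix $A$ used in this solution:

```python
import sympy as sp

# Assumed matrix of the exercise; V is spanned by its rows.
A = sp.Matrix([[3, 21, 0, 9, 0],
               [1, 7, -1, -2, -1],
               [2, 14, 0, 6, 1],
               [6, 42, -1, 13, 0]])

R, pivots = A.rref()
print(R)         # non-zero rows (1,7,0,3,0), (0,0,1,5,0), (0,0,0,0,1)
print(A.rank())  # 3, so the basis in part (a) has three vectors
```

The pivot columns are $1$, $3$, and $5$, which is why the coordinate matrix in part (c) consists of $x_1$, $x_3$, and $x_5$.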
Exercise 2.6.7
To solve the system $AX = Y$, where $A$ is an $m \times n$ matrix, we row-reduce the augmented matrix $[A \mid Y]$, resulting in an augmented matrix $[R \mid Z]$ where $R$ is in reduced echelon form and $Z$ is an $m \times 1$ column matrix. If the last $k$ rows of $R$ are zero rows, then the system has a solution if and only if the last $k$ entries of $Z$ are also zeros.

Suppose the system has solutions. Then the only non-zero entries in $Z$ are in the non-zero rows of $R$. These rows are already linearly independent, and they clearly remain independent regardless of the augmented values. Thus if there are solutions then the rank of the augmented matrix $[R \mid Z]$ is the same as the rank of $R$.

Conversely, if there are non-zero entries in $Z$ in any of the last $k$ rows then the system has no solutions. We want to show that those non-zero rows in the augmented matrix are linearly independent from the non-zero rows of $R$, so we can conclude that the rank of $R$ is less than the rank of $[R \mid Z]$. Let $S$ be the set of rows of $[R \mid Z]$ that contains all rows where $R$ is non-zero, plus one additional row $\tau = (0, \dots, 0, z)$ where $R$ is zero but $z \ne 0$. Suppose a linear combination of the elements of $S$ equals zero with not all coefficients zero. Since $z \ne 0$, the coefficient of $\tau$ alone cannot produce zero, so at least one of the elements of $S$ different from $\tau$ must have a non-zero coefficient. Suppose row $\rho_i$ has non-zero coefficient $c_i$ in the linear combination, and suppose the leading one in row $\rho_i$ is in position $k_i$. Then the $k_i$-th coordinate of the linear combination is also $c_i \ne 0$, because except for the one in the $\rho_i$-th position, all other entries in the $k_i$-th column of $[R \mid Z]$ are zero. This is a contradiction, so there can be no non-zero coefficients. Thus the set $S$ is linearly independent and $\operatorname{rank}[R \mid Z] > \operatorname{rank} R$.

Thus the system has a solution if and only if the rank of $[R \mid Z]$ is the same as the rank of $R$. Now $R$ has the same rank as $A$, and $[R \mid Z]$ has the same rank as $[A \mid Y]$, since they differ by elementary row operations. Thus the system has a solution if and only if the rank of $A$ is the same as the rank of the augmented matrix $[A \mid Y]$.
From http://greggrant.org