Exercise 1.6.1
As in Exercise 4, Section 1.5, we row-reduce and keep track of the elementary matrices involved. It takes nine steps to put $A$ in row-reduced echelon form $R$, and the product $P$ of the nine elementary matrices involved is an invertible matrix with $R = PA$.
Exercise 1.6.2
The same story as in Exercise 1: we get to the identity matrix in nine elementary steps. Multiplying those nine elementary matrices together gives an invertible matrix $P$ with $PA = I$, so $P = A^{-1}$.
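The bookkeeping in these two exercises is mechanical and easy to automate. Below is a minimal sketch in Python (using sympy), run on a small hypothetical matrix rather than the matrices from the exercises: every elementary row operation applied to the matrix is also applied to a copy of the identity, so the accumulated product $P$ satisfies $R = PA$.

```python
from sympy import Matrix, eye

def row_reduce_tracking(A):
    """Row-reduce A to rref while accumulating P with R = P*A.

    Each elementary row operation is applied to the identity as well,
    so P ends up being the product of the elementary matrices used.
    """
    A = Matrix(A)
    R, P = A.copy(), eye(A.rows)
    row = 0
    for col in range(R.cols):
        # find a pivot in this column at or below `row`
        pivot = next((r for r in range(row, R.rows) if R[r, col] != 0), None)
        if pivot is None:
            continue
        for M in (R, P):                      # same ops on R and on I
            M.row_swap(row, pivot)            # interchange rows
        scale = R[row, col]
        for M in (R, P):
            M[row, :] = M[row, :] / scale     # scale pivot row to 1
        for r in range(R.rows):
            if r != row and R[r, col] != 0:
                m = R[r, col]
                for M in (R, P):
                    M[r, :] = M[r, :] - m * M[row, :]  # eliminate below/above
        row += 1
    return R, P

# hypothetical example matrix (not the one from the exercise)
A = Matrix([[1, 2, 1], [0, 1, 3], [1, 1, 1]])
R, P = row_reduce_tracking(A)
assert P * A == R
```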
Exercise 1.6.3
For the first matrix we row-reduce the augmented matrix as follows:
At this point we see that the matrix is not invertible, since we have obtained an entire row of zeros on the left side of the augmented matrix.
For the second matrix we row-reduce the augmented matrix as follows:
Thus the inverse matrix is the right half of the final augmented matrix.
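Assuming the two matrices are the ones stated in the textbook, sympy confirms that the first is singular (matching the row of zeros above) and computes the inverse of the second:

```python
from sympy import Matrix

# the two matrices from the exercise (as stated in Hoffman & Kunze;
# an assumption here)
M1 = Matrix([[2, 5, -1], [4, -1, 2], [6, 4, 1]])
M2 = Matrix([[1, -1, 2], [3, 2, 4], [0, 1, -2]])

print(M1.det())   # 0  -> not invertible (row reduction hits a zero row)
print(M2.det())   # -8 -> invertible
print(M2.inv())   # Matrix([[1, 0, 1], [-3/4, 1/4, -1/4], [-3/8, 1/8, -5/8]])
```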
Exercise 1.6.4
Note that
$$AX = \begin{bmatrix} 5 & 0 & 0 \\ 1 & 5 & 0 \\ 0 & 1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = c \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
implies
$$(1)\ 5x_1 = cx_1, \qquad (2)\ x_1 + 5x_2 = cx_2, \qquad (3)\ x_2 + 5x_3 = cx_3.$$
Now if $c \neq 5$ then (1) implies $x_1 = 0$, and then (2) implies $x_2 = 0$, and then (3) implies $x_3 = 0$. So for $c \neq 5$ it is true only for $X = 0$ (which works with any $c$).
If $c = 5$ then (2) implies $x_1 = 0$ and (3) implies $x_2 = 0$. So if $c = 5$ any such vector must be of the form $(0, 0, x_3)^t$, and indeed any such vector works with $c = 5$.
So the final answer is: any vector of the form $(0, 0, x_3)^t$.
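The equation $AX = cX$ says that $X$ is an eigenvector of $A$ with eigenvalue $c$, so the answer can be double-checked with sympy (assuming $A$ is the matrix from the textbook statement used above):

```python
from sympy import Matrix

A = Matrix([[5, 0, 0], [1, 5, 0], [0, 1, 5]])  # the matrix of the exercise
# AX = cX means (A - c*I)X = 0; the only eigenvalue of this triangular
# matrix is 5, and its eigenvectors are exactly the multiples of (0,0,1).
print(A.eigenvects())
# [(5, 3, [Matrix([[0], [0], [1]])])]
```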
Exercise 1.6.5
We row-reduce the augmented matrix as follows:
Thus the matrix does have an inverse, and the inverse is the right half of the final augmented matrix.
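Assuming the matrix is the upper-triangular one stated in the textbook, the inverse can be checked directly:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3, 4],
            [0, 2, 3, 4],
            [0, 0, 3, 4],
            [0, 0, 0, 4]])   # assumed: the matrix of the exercise
print(A.inv())
# Matrix([
# [1,  -1,    0,    0],
# [0, 1/2, -1/2,    0],
# [0,   0,  1/3, -1/3],
# [0,   0,    0,  1/4]])
```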
Exercise 1.6.6
Write $A = \begin{bmatrix} a \\ b \end{bmatrix}$ and $B = \begin{bmatrix} c & d \end{bmatrix}$. Then
$$AB = \begin{bmatrix} ac & ad \\ bc & bd \end{bmatrix}.$$
If any of $a$, $b$, $c$ or $d$ equals zero then $AB$ has an entire row or an entire column of zeros. A matrix with an entire row or column of zeros is not invertible. Thus assume $a$, $b$, $c$ and $d$ are all non-zero. Now if we add $-b/a$ times the first row to the second row we get
$$\begin{bmatrix} ac & ad \\ 0 & 0 \end{bmatrix}.$$
Thus $AB$ is not row-equivalent to the identity, and so by Theorem 12, page 23, $AB$ is not invertible.
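The same computation can be carried out symbolically; the determinant of $AB$ vanishes identically, so no choice of $a$, $b$, $c$, $d$ makes $AB$ invertible:

```python
from sympy import Matrix, symbols

a, b, c, d = symbols('a b c d')
A = Matrix([[a], [b]])      # 2x1
B = Matrix([[c, d]])        # 1x2
C = A * B
print(C)                    # Matrix([[a*c, a*d], [b*c, b*d]])
print(C.det())              # a*c*b*d - a*d*b*c = 0 identically
```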
Exercise 1.6.7
(a) $B = IB = (A^{-1}A)B = A^{-1}(AB) = A^{-1}0 = 0$. Thus $B = 0$.
(b) By Theorem 13 (ii), since $A$ is not invertible the system $AX = 0$ must have a non-trivial solution $X$. Let $B$ be the $n \times n$ matrix all of whose columns are equal to $X$. Then $AB = 0$ but $B \neq 0$.
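A minimal sketch of the construction in part (b), using a hypothetical singular matrix $A$: take a non-trivial solution $X$ of $AX = 0$ and let every column of $B$ equal $X$:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2], [2, 4]])   # hypothetical, not invertible: det = 0
X = A.nullspace()[0]           # a non-trivial solution of AX = 0
B = X.row_join(X)              # 2x2 matrix whose columns both equal X
assert A * B == zeros(2, 2) and B != zeros(2, 2)
```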
Exercise 1.6.8
Suppose $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ has an inverse
$$B = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}.$$
Then $AB = I$, and $AB = I$ implies the following system in $x_1, x_2, x_3, x_4$ has a solution
$$ax_1 + bx_3 = 1, \qquad ax_2 + bx_4 = 0, \qquad cx_1 + dx_3 = 0, \qquad cx_2 + dx_4 = 1,$$
because the entries of $B$ give one such solution. The augmented coefficient matrix of this system is
$$\left[\begin{array}{cccc|c} a & 0 & b & 0 & 1 \\ 0 & a & 0 & b & 0 \\ c & 0 & d & 0 & 0 \\ 0 & c & 0 & d & 1 \end{array}\right]. \tag{$\ast$}$$
As long as $ad - bc \neq 0$ this system row-reduces to the following row-reduced echelon form
$$\left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & d/(ad-bc) \\ 0 & 1 & 0 & 0 & -b/(ad-bc) \\ 0 & 0 & 1 & 0 & -c/(ad-bc) \\ 0 & 0 & 0 & 1 & a/(ad-bc) \end{array}\right].$$
Thus we see that $x_1 = \frac{d}{ad-bc}$, $x_2 = \frac{-b}{ad-bc}$, $x_3 = \frac{-c}{ad-bc}$, and $x_4 = \frac{a}{ad-bc}$.
Now it's a simple matter to check that
$$\frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},$$
so whenever $ad - bc \neq 0$ the matrix $A$ is invertible.
Now suppose that $ad - bc = 0$. We will show the system has no solution, so $A$ has no inverse. If $a = b = c = d = 0$ then obviously $A$ has no inverse. So suppose WLOG that $a \neq 0$ (because by elementary row and column operations we can move any of the four elements to the top left entry, and elementary row and column operations do not change a matrix's status as being invertible or not). Subtracting $c/a$ times the 1st row from the 3rd row, and $c/a$ times the 2nd row from the 4th row, of ($\ast$) gives
$$\left[\begin{array}{cccc|c} a & 0 & b & 0 & 1 \\ 0 & a & 0 & b & 0 \\ 0 & 0 & d - \tfrac{bc}{a} & 0 & -\tfrac{c}{a} \\ 0 & 0 & 0 & d - \tfrac{bc}{a} & 1 \end{array}\right].$$
Now $ad - bc = 0$ and $a \neq 0$ together imply $d - \frac{bc}{a} = \frac{ad - bc}{a} = 0$. Thus the fourth row reads
$$\left[\begin{array}{cccc|c} 0 & 0 & 0 & 0 & 1 \end{array}\right],$$
an inconsistent equation, and it follows that the system has no solution and $A$ is not invertible.
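Both identities, $AB = I$ and $BA = I$, can be verified symbolically:

```python
from sympy import Matrix, eye, symbols, simplify

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])
B = Matrix([[d, -b], [-c, a]]) / (a*d - b*c)   # the inverse found above
assert simplify(A * B) == eye(2)               # AB = I whenever ad - bc != 0
assert simplify(B * A) == eye(2)               # BA = I as well
```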
Exercise 1.6.9
Suppose that $A_{ii} \neq 0$ for all $i$. Since $A$ is upper-triangular we can divide row $i$ by $A_{ii}$ to obtain a row-equivalent upper-triangular matrix which has all ones on the diagonal. Then, working from the bottom row upward, a sequence of elementary row operations turns all off-diagonal elements into zeros. We can therefore row-reduce $A$ to the identity matrix, and so by Theorem 12, page 23, $A$ is invertible.
Now suppose that some $A_{ii} = 0$. If all the $A_{ii}$'s are zero then, since $A$ is upper-triangular, the last row of the matrix is all zeros. A matrix with a row of zeros cannot be row-equivalent to the identity, so it cannot be invertible. Thus we can assume there's at least one $i$ such that $A_{ii} \neq 0$. Let $k$ be the largest index such that $A_{kk} = 0$, so that $A_{jj} \neq 0$ for all $j > k$. We can divide each row $j$ with $j > k$ by $A_{jj}$ to give ones on the diagonal for those rows. We can then add suitable multiples of those rows to row $k$ to turn row $k$ into an entire row of zeros (row $k$ has non-zero entries only in columns $k+1, \dots, n$, since $A_{kk} = 0$ and $A$ is upper-triangular). Since $A$ is again row-equivalent to a matrix with an entire row of zeros, it cannot be invertible.
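The criterion is easy to sanity-check against the determinant on examples:

```python
from sympy import Matrix

def upper_triangular_invertible(A):
    """For an upper-triangular A, invertibility is equivalent to every
    diagonal entry being non-zero (the content of this exercise)."""
    return all(A[i, i] != 0 for i in range(A.rows))

A1 = Matrix([[2, 1, 5], [0, 3, 7], [0, 0, 4]])   # all diagonal entries non-zero
A2 = Matrix([[2, 1, 5], [0, 0, 7], [0, 0, 4]])   # A_22 = 0
assert upper_triangular_invertible(A1) == (A1.det() != 0)   # True on both sides
assert upper_triangular_invertible(A2) == (A2.det() != 0)   # False on both sides
```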
Exercise 1.6.10
Here $A$ is an $m \times n$ matrix and $B$ is an $n \times m$ matrix with $n < m$. There are $n$ columns in $A$, so the vector space generated by those columns has dimension no greater than $n$. Every column of $AB$ is a linear combination of the columns of $A$ (the $j$-th column of $AB$ is $A$ times the $j$-th column of $B$). Thus the vector space generated by the columns of $AB$ is contained in the vector space generated by the columns of $A$, and so the column space of the $m \times m$ matrix $AB$ has dimension no greater than $n < m$. Thus the columns of $AB$ generate a space of dimension strictly less than $m$, and therefore $AB$ is not invertible.
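A quick randomized check with $m = 3$ and $n = 2$, using sympy's randMatrix:

```python
from sympy import randMatrix

# m = 3, n = 2: A is m x n, B is n x m with n < m
A = randMatrix(3, 2)
B = randMatrix(2, 3)
C = A * B                  # 3 x 3
assert C.rank() <= 2       # rank(AB) <= rank(A) <= n < m
assert C.det() == 0        # so AB is never invertible
```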
Exercise 1.6.11
First put $A$ in row-reduced echelon form $R$; so there is an invertible matrix $P$ such that $R = PA$. Each row of $R$ is either all zeros, or starts (on the left) with zeros, then has a one, and then may have non-zero entries after the one. Suppose row $i$ has its leading one in the $j$-th column. The $j$-th column has zeros in all places other than the $i$-th, so if we add a multiple of this column to another column then only entries in the $i$-th row are affected. Therefore a sequence of such column operations can turn row $i$ into a row of all zeros except for its single leading one.
Let $E$ be the matrix with ones on the diagonal and zeros elsewhere, except that $E_{k\ell} = c$ for some fixed $k \neq \ell$. Then $ME$ equals $M$ with $c$ times column $k$ added to column $\ell$. $E$ is invertible, since any such operation can be undone by another such operation (replace $c$ with $-c$). By a sequence of such operations we can turn all entries after each leading one into zeros. Let $Q$ be the product of all of the elementary matrices involved in this transformation. Then $R' = RQ = PAQ$ is in both row-reduced and column-reduced form.
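A compact way to compute this normal form with sympy on a hypothetical example: row-reduce $A$, then row-reduce the transpose of the result. Row operations on the transpose are exactly the column operations described above, so the final matrix has ones in the first $r$ diagonal positions and zeros elsewhere.

```python
from sympy import Matrix, eye, zeros

A = Matrix([[1, 2, 1, 0], [-1, 0, 3, 5], [0, 2, 4, 5]])  # hypothetical, rank 2
R = A.rref()[0]        # row-reduce:    R = P*A for some invertible P
N = R.T.rref()[0].T    # column-reduce: row-reduce the transpose
r = A.rank()
# N should have 1 in positions (i, i) for i < r and 0 everywhere else
expected = eye(r).row_join(zeros(r, A.cols - r)).col_join(zeros(A.rows - r, A.cols))
assert N == expected
```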
Exercise 1.6.12
This problem seems a bit hard for this book. There is a class of theorems like this; in particular, such matrices are called Hilbert matrices, and a proof is given in the arXiv article by Christian Berg, "Fibonacci numbers and orthogonal polynomials" (see Theorem 4.1). A more elementary proof may be found in this discussion on mathoverflow.net, where several proofs are given: "Deriving inverse of Hilbert matrix". Also see http://vigo.ime.unicamp.br/HilbertMatrix.pdf, where a general formula for the $(i,j)$ entry of the inverse of the $n \times n$ Hilbert matrix is given explicitly as
$$\left(A^{-1}\right)_{ij} = (-1)^{i+j}\,(i+j-1)\binom{n+i-1}{n-j}\binom{n+j-1}{n-i}\binom{i+j-2}{i-1}^{2}.$$
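Assuming the matrix in question is the $n \times n$ Hilbert matrix $A_{ij} = 1/(i+j-1)$, sympy confirms for $n = 5$ both that $A^{-1}$ has integer entries and that those entries agree with the formula above:

```python
from sympy import Matrix, Rational, binomial

n = 5
H = Matrix(n, n, lambda i, j: Rational(1, i + j + 1))   # Hilbert matrix
Hinv = H.inv()
assert all(x.is_Integer for x in Hinv)                  # integer entries

def formula(i, j):   # the explicit formula, with 1-based indices i, j
    return (-1)**(i + j) * (i + j - 1) \
        * binomial(n + i - 1, n - j) * binomial(n + j - 1, n - i) \
        * binomial(i + j - 2, i - 1)**2

assert Hinv == Matrix(n, n, lambda i, j: formula(i + 1, j + 1))
```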
From http://greggrant.org