If you find any mistakes, please make a comment! Thank you.

Solution to Linear Algebra Hoffman & Kunze Chapter 1.6


Exercise 1.6.1

As in Exercise 4, Section 1.5, we row-reduce and keep track of the elementary matrices involved. It takes nine steps to put $A$ in row-reduced form, and multiplying the nine elementary matrices together gives the matrix
\[P=\begin{bmatrix} 3/8 & -1/4 & 3/8 \\ 1/4 & 0 & -1/4 \\ 1/8 & 1/4 & 1/8 \end{bmatrix}.\]
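As a sanity check, we can multiply $P$ against the matrix from Exercise 4, Section 1.5 (quoted here from the book; this is an assumption about which matrix the solution refers to) and confirm that $PA$ is row-reduced. A sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

# A as quoted from Exercise 4, Section 1.5 (assumed to be the matrix
# this solution refers to), and P as computed above.
A = [[1, 2, 1, 0],
     [-1, 0, 3, 5],
     [1, -2, 1, 1]]
P = [[F(3, 8), F(-1, 4), F(3, 8)],
     [F(1, 4), F(0),     F(-1, 4)],
     [F(1, 8), F(1, 4),  F(1, 8)]]

# R = PA should be row-reduced: an identity block in the first
# three columns, since A has rank 3.
R = [[sum(P[i][k] * A[k][j] for k in range(3)) for j in range(4)]
     for i in range(3)]
assert [row[:3] for row in R] == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```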


Exercise 1.6.2

Same story as in Exercise 1: we get to the identity matrix in nine elementary steps. Multiplying those nine elementary matrices together gives
\[P=\begin{bmatrix} \dfrac{1}{3} & \dfrac{1-3i}{30} & \dfrac{1-3i}{10} \\ 0 & -\dfrac{3+i}{10} & \dfrac{1-3i}{10} \\ -\dfrac{i}{3} & \dfrac{3+i}{15} & \dfrac{3+i}{5} \end{bmatrix}.\]
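Since $A$ row-reduces to the identity, $P$ must equal $A^{-1}$, which gives a quick numerical check (the matrix $A$ is quoted from Exercise 1.6.2 of the book, an assumption about the edition at hand):

```python
# A as quoted from Exercise 1.6.2 (assumed from the text) and P as above.
# Since A row-reduces to the identity, P should be the inverse of A.
A = [[2, 0, 1j],
     [1, -3, -1j],
     [1j, 1, 1]]
P = [[1 / 3, (1 - 3j) / 30, (1 - 3j) / 10],
     [0, -(3 + 1j) / 10, (1 - 3j) / 10],
     [-1j / 3, (3 + 1j) / 15, (3 + 1j) / 5]]

# PA should be the 3x3 identity, up to floating-point round-off.
prod = [[sum(P[i][k] * A[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
for i in range(3):
    for j in range(3):
        assert abs(prod[i][j] - (1 if i == j else 0)) < 1e-12
```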


Exercise 1.6.3

For the first matrix we row-reduce the augmented matrix as follows:
\[\left[\begin{array}{ccc|ccc} 2&5&-1&1&0&0\\ 4&-1&2&0&1&0\\ 6&4&1&0&0&1 \end{array}\right]\rightarrow\left[\begin{array}{ccc|ccc} 2&5&-1&1&0&0\\ 0&-11&4&-2&1&0\\ 0&-11&4&-3&0&1 \end{array}\right]\rightarrow\left[\begin{array}{ccc|ccc} 2&5&-1&1&0&0\\ 0&-11&4&-2&1&0\\ 0&0&0&-1&-1&1 \end{array}\right]\]
At this point we see that the matrix is not invertible, since we have obtained an entire row of zeros on the left-hand side.

For the second matrix we row-reduce the augmented matrix as follows:
\[\left[\begin{array}{ccc|ccc} 1&-1&2&1&0&0\\ 3&2&4&0&1&0\\ 0&1&-2&0&0&1 \end{array}\right]\rightarrow\left[\begin{array}{ccc|ccc} 1&-1&2&1&0&0\\ 0&5&-2&-3&1&0\\ 0&1&-2&0&0&1 \end{array}\right]\rightarrow\left[\begin{array}{ccc|ccc} 1&-1&2&1&0&0\\ 0&1&-2&0&0&1\\ 0&5&-2&-3&1&0 \end{array}\right]\]
\[\rightarrow\left[\begin{array}{ccc|ccc} 1&0&0&1&0&1\\ 0&1&-2&0&0&1\\ 0&0&8&-3&1&-5 \end{array}\right]\rightarrow\left[\begin{array}{ccc|ccc} 1&0&0&1&0&1\\ 0&1&-2&0&0&1\\ 0&0&1&-3/8&1/8&-5/8 \end{array}\right]\rightarrow\left[\begin{array}{ccc|ccc} 1&0&0&1&0&1\\ 0&1&0&-3/4&1/4&-1/4\\ 0&0&1&-3/8&1/8&-5/8 \end{array}\right]\]
Thus the inverse matrix is
\[\begin{bmatrix} 1&0&1\\ -3/4&1/4&-1/4\\ -3/8&1/8&-5/8 \end{bmatrix}.\]
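To double-check, we can multiply the original matrix against this inverse using exact rational arithmetic (a small sketch):

```python
from fractions import Fraction as F

# The second matrix of Exercise 1.6.3 and the inverse found above.
A = [[1, -1, 2],
     [3, 2, 4],
     [0, 1, -2]]
A_inv = [[F(1),     F(0),    F(1)],
         [F(-3, 4), F(1, 4), F(-1, 4)],
         [F(-3, 8), F(1, 8), F(-5, 8)]]

def matmul(X, Y):
    """Exact 3x3 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

identity = [[F(int(i == j)) for j in range(3)] for i in range(3)]
assert matmul(A, A_inv) == identity
assert matmul(A_inv, A) == identity
```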


Exercise 1.6.4

Note that
\[\begin{bmatrix} 5&0&0\\ 1&5&0\\ 0&1&5 \end{bmatrix}\begin{bmatrix} x\\ y\\ z \end{bmatrix}=c\begin{bmatrix} x\\ y\\ z \end{bmatrix}\]
implies
\begin{align*}
5x&=cx \tag{1}\\
x+5y&=cy \tag{2}\\
y+5z&=cz \tag{3}
\end{align*}
Now if $c\neq5$ then (1) implies $x=0$, and then (2) implies $y=0$, and then (3) implies $z=0$. So when $c\neq5$ the only such vector is the zero vector $\begin{bmatrix}0\\0\\0\end{bmatrix}$, which trivially satisfies the equation for any $c$.

If $c=5$ then (2) implies $x=0$ and (3) implies $y=0$. So if $c=5$ any such vector must be of the form $\begin{bmatrix}0\\0\\z\end{bmatrix}$, and indeed any such vector works with $c=5$.

So the final answer is: any vector of the form $\begin{bmatrix}0\\0\\z\end{bmatrix}$.
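A small numerical check of the conclusion, with an arbitrary choice of $z$:

```python
# The matrix of Exercise 1.6.4; vectors (0, 0, z) should satisfy Av = 5v,
# and vectors not of that form should fail for every candidate c.
A = [[5, 0, 0],
     [1, 5, 0],
     [0, 1, 5]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

v = [0, 0, 7]  # arbitrary vector of the form (0, 0, z)
assert matvec(A, v) == [5 * x for x in v]

# A vector not of that form fails for every integer c in a test range:
w = [0, 1, 0]
assert all(matvec(A, w) != [c * x for x in w] for c in range(-10, 11))
```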


Exercise 1.6.5

We row-reduce the augmented matrix as follows:
\[\left[\begin{array}{cccc|cccc} 1&2&3&4&1&0&0&0\\ 0&2&3&4&0&1&0&0\\ 0&0&3&4&0&0&1&0\\ 0&0&0&4&0&0&0&1 \end{array}\right]\rightarrow\left[\begin{array}{cccc|cccc} 1&0&0&0&1&-1&0&0\\ 0&2&3&4&0&1&0&0\\ 0&0&3&4&0&0&1&0\\ 0&0&0&4&0&0&0&1 \end{array}\right]\rightarrow\left[\begin{array}{cccc|cccc} 1&0&0&0&1&-1&0&0\\ 0&2&0&0&0&1&-1&0\\ 0&0&3&4&0&0&1&0\\ 0&0&0&4&0&0&0&1 \end{array}\right]\]
\[\rightarrow\left[\begin{array}{cccc|cccc} 1&0&0&0&1&-1&0&0\\ 0&2&0&0&0&1&-1&0\\ 0&0&3&0&0&0&1&-1\\ 0&0&0&4&0&0&0&1 \end{array}\right]\rightarrow\left[\begin{array}{cccc|cccc} 1&0&0&0&1&-1&0&0\\ 0&1&0&0&0&1/2&-1/2&0\\ 0&0&1&0&0&0&1/3&-1/3\\ 0&0&0&1&0&0&0&1/4 \end{array}\right]\]
Thus $A$ is invertible and
\[A^{-1}=\begin{bmatrix} 1&-1&0&0\\ 0&1/2&-1/2&0\\ 0&0&1/3&-1/3\\ 0&0&0&1/4 \end{bmatrix}.\]
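Again the result is easy to confirm with exact rational arithmetic (a sketch):

```python
from fractions import Fraction as F

# The matrix of Exercise 1.6.5 and the inverse found above.
A = [[1, 2, 3, 4],
     [0, 2, 3, 4],
     [0, 0, 3, 4],
     [0, 0, 0, 4]]
A_inv = [[F(1), F(-1),   F(0),     F(0)],
         [F(0), F(1, 2), F(-1, 2), F(0)],
         [F(0), F(0),    F(1, 3),  F(-1, 3)],
         [F(0), F(0),    F(0),     F(1, 4)]]

def matmul(X, Y):
    """Exact 4x4 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

I4 = [[F(int(i == j)) for j in range(4)] for i in range(4)]
assert matmul(A, A_inv) == I4
assert matmul(A_inv, A) == I4
```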


Exercise 1.6.6

Write $A=\begin{bmatrix}a_1\\a_2\end{bmatrix}$ and $B=\begin{bmatrix}b_1&b_2\end{bmatrix}$. Then
\[AB=\begin{bmatrix} a_1b_1 & a_1b_2\\ a_2b_1 & a_2b_2 \end{bmatrix}.\]
If any of $a_1,a_2,b_1$ or $b_2$ equals zero then $AB$ has an entire row or an entire column of zeros, and a matrix with an entire row or column of zeros is not invertible. Thus assume $a_1,a_2,b_1$ and $b_2$ are all non-zero. Now if we subtract $a_2/a_1$ times the first row from the second row we get
\[\begin{bmatrix} a_1b_1 & a_1b_2\\ 0 & 0 \end{bmatrix}.\]
Thus $AB$ is not row-equivalent to the identity, and so by Theorem 12, page 23, $AB$ is not invertible.
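Determinants are only introduced later in the book, but they give a quick numeric illustration: any such product has determinant $(a_1b_1)(a_2b_2)-(a_1b_2)(a_2b_1)=0$. A spot check with arbitrary values:

```python
# Product of a 2x1 column and a 1x2 row; the arbitrary values below
# are just for illustration.
a1, a2 = 3, -5
b1, b2 = 2, 7
AB = [[a1 * b1, a1 * b2],
      [a2 * b1, a2 * b2]]

# The determinant of any such product vanishes identically.
det = AB[0][0] * AB[1][1] - AB[0][1] * AB[1][0]
assert det == 0
```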


Exercise 1.6.7

(a) $0=A^{-1}0=A^{-1}(AB)=(A^{-1}A)B=IB=B$. Thus $B=0$.

(b) By Theorem 13 (ii), since $A$ is not invertible, $AX=0$ must have a non-trivial solution $v$. Let $B$ be the matrix all of whose columns are equal to $v$. Then $B\neq0$ but $AB=0$.
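A concrete illustration of part (b), with a hypothetical singular $A$ and a null vector $v$ chosen for this example:

```python
from fractions import Fraction as F

# A hypothetical singular A (its second row is twice the first) and a
# non-trivial solution v of Ax = 0; B has every column equal to v.
A = [[1, 2],
     [2, 4]]
v = [F(2), F(-1)]               # indeed A v = (0, 0)
B = [[v[0], v[0]],
     [v[1], v[1]]]

AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert any(x != 0 for row in B for x in row)   # B is non-zero
assert AB == [[0, 0], [0, 0]]                  # yet AB = 0
```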


Exercise 1.6.8

Suppose
\[A=\begin{bmatrix}a&b\\c&d\end{bmatrix},\qquad B=\begin{bmatrix}x&y\\z&w\end{bmatrix}.\]
Then
\[AB=\begin{bmatrix} ax+bz & ay+bw\\ cx+dz & cy+dw \end{bmatrix}.\]
Then $AB=I$ implies the following system in $u,r,s,t$ has a solution
\begin{align*}
au+bs&=1\\
cu+ds&=0\\
ar+bt&=0\\
cr+dt&=1
\end{align*}
because $(x,y,z,w)$ is one such solution. The augmented coefficient matrix of this system is
\[\left[\begin{array}{cccc|c} a&0&b&0&1\\ c&0&d&0&0\\ 0&a&0&b&0\\ 0&c&0&d&1 \end{array}\right].\tag{4}\]
As long as $ad-bc\neq0$ this system row-reduces to the following row-reduced echelon form
\[\left[\begin{array}{cccc|c} 1&0&0&0&d/(ad-bc)\\ 0&1&0&0&-b/(ad-bc)\\ 0&0&1&0&-c/(ad-bc)\\ 0&0&0&1&a/(ad-bc) \end{array}\right]\]
Thus we see that $x=d/(ad-bc)$, $y=-b/(ad-bc)$, $z=-c/(ad-bc)$, $w=a/(ad-bc)$ and
\[A^{-1}=\begin{bmatrix} d/(ad-bc) & -b/(ad-bc)\\ -c/(ad-bc) & a/(ad-bc) \end{bmatrix}.\]
Now it’s a simple matter to check that
\[\begin{bmatrix} d/(ad-bc) & -b/(ad-bc)\\ -c/(ad-bc) & a/(ad-bc) \end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix}.\]
Now suppose that $ad-bc=0$. We will show there is no solution. If $a=b=c=d=0$ then obviously $A$ has no inverse. So suppose WLOG that $a\neq0$ (because by elementary row and column operations we can move any of the four elements to the top left entry, and elementary row and column operations do not change a matrix’s status as being invertible or not). Subtracting $\frac{c}{a}$ times the 3rd row from the 4th row of (4) gives
\[\left[\begin{array}{cccc|c} a&0&b&0&1\\ c&0&d&0&0\\ 0&a&0&b&0\\ 0&c-\frac{c}{a}a&0&d-\frac{c}{a}b&1 \end{array}\right].\]
Now $c-\frac{c}{a}a=0$ and, since $ad-bc=0$, also $d-\frac{c}{a}b=0$. Thus we get
\[\left[\begin{array}{cccc|c} a&0&b&0&1\\ c&0&d&0&0\\ 0&a&0&b&0\\ 0&0&0&0&1 \end{array}\right]\]
whose last row represents the inconsistent equation $0=1$. The system has no solution, and it follows that $A$ is not invertible.
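The closed form can be checked mechanically with exact rational arithmetic; a sketch (the numeric values of $a,b,c,d$ below are arbitrary):

```python
from fractions import Fraction as F

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]], valid when ad - bc != 0."""
    delta = F(a * d - b * c)
    return [[F(d) / delta, F(-b) / delta],
            [F(-c) / delta, F(a) / delta]]

a, b, c, d = 2, 5, 1, 3            # ad - bc = 1, so A is invertible
inv = inverse_2x2(a, b, c, d)

# A times its computed inverse should be the identity.
prod = [[a * inv[0][0] + b * inv[1][0], a * inv[0][1] + b * inv[1][1]],
        [c * inv[0][0] + d * inv[1][0], c * inv[0][1] + d * inv[1][1]]]
assert prod == [[1, 0], [0, 1]]
```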


Exercise 1.6.9

Suppose that $a_{ii}\neq0$ for all $i$. Then we can divide row $i$ by $a_{ii}$, giving a row-equivalent matrix which has all ones on the diagonal. Since the matrix is upper-triangular, adding suitable multiples of lower rows to the rows above them then turns all off-diagonal entries into zeros. We can therefore row-reduce the matrix to the identity matrix. By Theorem 12, page 23, $A$ is invertible.

Now suppose that some $a_{ii}=0$. If all the $a_{ii}$'s are zero then the last row of the (upper-triangular) matrix is all zeros; a matrix with a row of zeros cannot be row-equivalent to the identity, so it cannot be invertible. Thus we can assume there is at least one $i$ such that $a_{ii}\neq0$. Let $k$ be the largest index such that $a_{kk}=0$, so that $a_{ii}\neq0$ for all $i>k$. We can divide each row $i$ with $i>k$ by $a_{ii}$ to get ones on the diagonal in those rows. Since the matrix is upper-triangular, the non-zero entries of row $k$ all lie in columns to the right of the diagonal, so we can add suitable multiples of the rows below to row $k$ to turn row $k$ into an entire row of zeros. Since again $A$ is row-equivalent to a matrix with an entire row of zeros, it cannot be invertible.
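The statement can be turned into a small check: decide invertibility of an upper-triangular matrix from its diagonal alone, and compare against the rank computed by exact Gaussian elimination (the example matrices are hypothetical):

```python
from fractions import Fraction as F

def triangular_invertible(A):
    """Exercise 1.6.9: an upper-triangular matrix is invertible
    iff every diagonal entry is non-zero."""
    return all(A[i][i] != 0 for i in range(len(A)))

def rank(A):
    """Rank via exact Gaussian elimination, for comparison."""
    M = [[F(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[2, 1, 4], [0, 3, 5], [0, 0, 7]]   # non-zero diagonal: invertible
B = [[2, 1, 4], [0, 0, 5], [0, 0, 7]]   # zero at (2, 2): singular
assert triangular_invertible(A) and rank(A) == 3
assert not triangular_invertible(B) and rank(B) < 3
```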


Exercise 1.6.10

There are $n$ columns in $A$, so the vector space generated by those columns has dimension no greater than $n$. All columns of $AB$ are linear combinations of the columns of $A$, so the vector space generated by the columns of $AB$ is contained in the vector space generated by the columns of $A$, and hence also has dimension no greater than $n$. Since $n<m$, the columns of the $m\times m$ matrix $AB$ generate a space of dimension strictly less than $m$. Thus $AB$ is not invertible.
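A concrete instance with hypothetical values: a $3\times2$ matrix times a $2\times3$ matrix yields a $3\times3$ product of rank at most $2$, hence singular. Determinants appear later in the book, but they give a quick numeric check:

```python
# Hypothetical example: m = 3, n = 2, so AB is 3x3 but cannot be invertible.
A = [[1, 2], [3, 4], [5, 6]]
B = [[7, 8, 9], [10, 11, 12]]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(3)]
      for i in range(3)]

# Cofactor expansion of the 3x3 determinant along the first row.
det = (AB[0][0] * (AB[1][1] * AB[2][2] - AB[1][2] * AB[2][1])
       - AB[0][1] * (AB[1][0] * AB[2][2] - AB[1][2] * AB[2][0])
       + AB[0][2] * (AB[1][0] * AB[2][1] - AB[1][1] * AB[2][0]))
assert det == 0
```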


Exercise 1.6.11

First put $A$ in row-reduced echelon form, $R$; so there is an invertible $m\times m$ matrix $P$ such that $R=PA$. Each row of $R$ is either all zeros, or starts (on the left) with zeros, then has a one, then may have non-zero entries after the one. Suppose row $i$ has its leading one in the $j$-th column. The $j$-th column has zeros in all places other than the $i$-th, so if we add a multiple of this column to another column then it only affects entries in the $i$-th row. Therefore a sequence of such column operations can turn this row into a row of all zeros and a single one.

Let $B$ be the $n\times n$ matrix with $B_{rr}=1$ for all $r$ and $B_{rs}=0$ for $r\neq s$, except for a single off-diagonal entry $B_{jk}\neq0$. Then $AB$ equals $A$ with $B_{jk}$ times column $j$ added to column $k$. $B$ is invertible since any such column operation can be undone by another such operation. By a sequence of such operations we can turn all values after the leading one into zeros. Let $Q$ be the product of all of the elementary matrices $B$ involved in this transformation. Then $PAQ$ is in row-reduced and column-reduced form.
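A small illustration of the column operation (hypothetical values): right-multiplying by $B=I+t\,E_{jk}$ adds $t$ times column $j$ to column $k$.

```python
# Right-multiplying by B = I + t*E_{jk} adds t times column j to column k.
t, j, k = 5, 0, 2          # add 5 * column 0 to column 2 (0-indexed)
A = [[1, 2, 3],
     [4, 5, 6]]

# B is the identity with a single extra off-diagonal entry B[j][k] = t.
B = [[1 if r == s else 0 for s in range(3)] for r in range(3)]
B[j][k] = t

AB = [[sum(A[i][r] * B[r][s] for r in range(3)) for s in range(3)]
      for i in range(2)]
assert AB == [[1, 2, 3 + 5 * 1], [4, 5, 6 + 5 * 4]]
```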


Exercise 1.6.12

This problem seems a bit hard for this book. There is a whole class of theorems like this; in particular, these are called Hilbert matrices, and a proof is given in the arXiv article by Christian Berg, Fibonacci numbers and orthogonal polynomials (see Theorem 4.1). There may also be a more elementary proof in this discussion on mathoverflow.net, where several proofs are given: Deriving inverse of Hilbert matrix. Also see http://vigo.ime.unicamp.br/HilbertMatrix.pdf, where a general formula for the $(i,j)$ entry of the inverse is given explicitly as
\[(-1)^{i+j}\,(i+j-1)\binom{n+i-1}{n-j}\binom{n+j-1}{n-i}\binom{i+j-2}{i-1}^2.\]
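The closed form is easy to verify for small $n$ with exact rational arithmetic. A sketch (note the last factor is $\binom{i+j-2}{i-1}^2$, with indices $i,j$ starting at $1$; `math.comb` is the binomial coefficient):

```python
from fractions import Fraction as F
from math import comb

def hilbert(n):
    """The n x n Hilbert matrix H with H[i][j] = 1/(i + j + 1), 0-indexed."""
    return [[F(1, i + j + 1) for j in range(n)] for i in range(n)]

def hilbert_inv_entry(n, i, j):
    """Closed-form (i, j) entry (1-indexed) of the inverse Hilbert matrix."""
    return ((-1) ** (i + j) * (i + j - 1)
            * comb(n + i - 1, n - j) * comb(n + j - 1, n - i)
            * comb(i + j - 2, i - 1) ** 2)

n = 4
H = hilbert(n)
H_inv = [[F(hilbert_inv_entry(n, i + 1, j + 1)) for j in range(n)]
         for i in range(n)]
prod = [[sum(H[i][k] * H_inv[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
assert prod == [[F(int(i == j)) for j in range(n)] for i in range(n)]
```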

From http://greggrant.org


This website is supposed to help you study Linear Algebra. Please only read these solutions after thinking about the problems carefully. Do not just copy these solutions.