If you find any mistakes, please make a comment! Thank you.

Solution to Linear Algebra Hoffman & Kunze Chapter 2.3


Exercise 2.3.1

Suppose $v_1$ and $v_2$ are linearly dependent. If one of them, say $v_1$, is the zero vector, then it is a scalar multiple of the other: $v_1=0\cdot v_2$. So we may assume both $v_1$ and $v_2$ are non-zero. Since they are dependent, there exist scalars $c_1,c_2$, not both zero, such that $c_1v_1+c_2v_2=0$. In fact both $c_1$ and $c_2$ must be non-zero: if, say, $c_1=0$, then $c_2v_2=0$ with $c_2\neq0$ would force $v_2=0$. Therefore we can write $v_1=-\frac{c_2}{c_1}v_2$.


Exercise 2.3.2

By Corollary 3, page 46, it suffices to determine whether the matrix whose rows are the $\alpha_i$'s is invertible. By Theorem 12 (ii) we can do this by row-reducing the matrix
$$\begin{bmatrix}1&1&2&4\\2&-1&-5&2\\1&-1&-4&0\\2&1&1&6\end{bmatrix}.$$
We row-reduce as follows:
$$\begin{bmatrix}1&1&2&4\\2&-1&-5&2\\1&-1&-4&0\\2&1&1&6\end{bmatrix}
\rightarrow
\begin{bmatrix}1&1&2&4\\0&-3&-9&-6\\0&-2&-6&-4\\0&-1&-3&-2\end{bmatrix}
\xrightarrow{\text{swap rows}}
\begin{bmatrix}1&1&2&4\\0&-1&-3&-2\\0&-3&-9&-6\\0&-2&-6&-4\end{bmatrix}$$
$$\rightarrow
\begin{bmatrix}1&1&2&4\\0&1&3&2\\0&3&9&6\\0&2&6&4\end{bmatrix}
\rightarrow
\begin{bmatrix}1&1&2&4\\0&1&3&2\\0&0&0&0\\0&0&0&0\end{bmatrix}.$$
Thus the four vectors are not linearly independent.
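The reduction can also be double-checked mechanically. The sketch below (assuming SymPy is installed) computes the rank of the matrix whose rows are the four vectors:

```python
from sympy import Matrix

# Rows are the four vectors from the exercise.
A = Matrix([
    [1,  1,  2, 4],
    [2, -1, -5, 2],
    [1, -1, -4, 0],
    [2,  1,  1, 6],
])

# Rank 2 < 4 confirms the vectors are linearly dependent.
assert A.rank() == 2
```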


Exercise 2.3.3

In Section 2.5, Theorem 9, page 56, it will be proven that row-equivalent matrices have the same row space. The proof of this is almost immediate, so there seems no easier way to proceed than to use that fact. If you multiply a matrix $A$ on the left by another matrix $P$, the rows of the new matrix $PA$ are linear combinations of the rows of the original matrix. Thus the rows of $PA$ generate a subspace of the space generated by the rows of $A$. If $P$ is invertible, then the two spaces must be contained in each other, since we can go backwards with $P^{-1}$. Thus the rows of row-equivalent matrices generate the same space. Using the row-reduced form of the matrix in Exercise 2, it follows that the space is two-dimensional and generated by $(1,1,2,4)$ and $(0,1,3,2)$.
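As a quick cross-check (a sketch assuming SymPy is installed), the two claimed generators are independent, and adjoining any of the four original rows does not increase the rank:

```python
from sympy import Matrix

basis = [[1, 1, 2, 4], [0, 1, 3, 2]]                              # claimed generators
rows = [[1, 1, 2, 4], [2, -1, -5, 2], [1, -1, -4, 0], [2, 1, 1, 6]]

assert Matrix(basis).rank() == 2          # the generators are independent
for r in rows:
    # Each original row already lies in the span of the generators.
    assert Matrix(basis + [r]).rank() == 2
```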


Exercise 2.3.4

By Corollary 3, page 46, to show the vectors are linearly independent it suffices to show the matrix whose rows are the $\alpha_i$'s is invertible. By Theorem 12 (ii) we can do this by row-reducing the matrix
$$A=\begin{bmatrix}1&0&-1\\1&2&1\\0&-3&2\end{bmatrix}.$$
We have
$$\begin{bmatrix}1&0&-1\\1&2&1\\0&-3&2\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&-1\\0&2&2\\0&-3&2\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&-1\\0&1&1\\0&-3&2\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&-1\\0&1&1\\0&0&5\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&-1\\0&1&1\\0&0&1\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}.$$
Now to write the standard basis vectors in terms of these vectors, by the discussion at the bottom of page 25 through page 26, we can row-reduce the augmented matrix
$$\left[\begin{array}{ccc|ccc}1&0&-1&1&0&0\\1&2&1&0&1&0\\0&-3&2&0&0&1\end{array}\right].$$
This gives
$$\left[\begin{array}{ccc|ccc}1&0&-1&1&0&0\\1&2&1&0&1&0\\0&-3&2&0&0&1\end{array}\right]
\rightarrow\left[\begin{array}{ccc|ccc}1&0&-1&1&0&0\\0&2&2&-1&1&0\\0&-3&2&0&0&1\end{array}\right]
\rightarrow\left[\begin{array}{ccc|ccc}1&0&-1&1&0&0\\0&1&1&-1/2&1/2&0\\0&-3&2&0&0&1\end{array}\right]$$
$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-1&1&0&0\\0&1&1&-1/2&1/2&0\\0&0&5&-3/2&3/2&1\end{array}\right]
\rightarrow\left[\begin{array}{ccc|ccc}1&0&-1&1&0&0\\0&1&1&-1/2&1/2&0\\0&0&1&-3/10&3/10&1/5\end{array}\right]
\rightarrow\left[\begin{array}{ccc|ccc}1&0&0&7/10&3/10&1/5\\0&1&0&-1/5&1/5&-1/5\\0&0&1&-3/10&3/10&1/5\end{array}\right].$$
Thus if
$$P=\begin{bmatrix}7/10&3/10&1/5\\-1/5&1/5&-1/5\\-3/10&3/10&1/5\end{bmatrix}$$
then $PA=I$, so we have
$$\tfrac{7}{10}\alpha_1+\tfrac{3}{10}\alpha_2+\tfrac{1}{5}\alpha_3=(1,0,0)$$
$$-\tfrac{1}{5}\alpha_1+\tfrac{1}{5}\alpha_2-\tfrac{1}{5}\alpha_3=(0,1,0)$$
$$-\tfrac{3}{10}\alpha_1+\tfrac{3}{10}\alpha_2+\tfrac{1}{5}\alpha_3=(0,0,1).$$
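The matrix $P$ can be verified directly, since $PA=I$ means $P$ is the inverse of $A$ (a sketch assuming SymPy is installed; `Rational` keeps the arithmetic exact):

```python
from sympy import Matrix, Rational, eye

A = Matrix([[1, 0, -1], [1, 2, 1], [0, -3, 2]])   # rows are alpha_1, alpha_2, alpha_3
P = Matrix([[Rational(7, 10),  Rational(3, 10), Rational(1, 5)],
            [Rational(-1, 5),  Rational(1, 5),  Rational(-1, 5)],
            [Rational(-3, 10), Rational(3, 10), Rational(1, 5)]])

assert P * A == eye(3)   # PA = I: row i of P combines the rows of A into e_i
assert P == A.inv()
```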


Exercise 2.3.5

Let $v_1=(1,0,0)$, $v_2=(0,1,0)$ and $v_3=(1,1,0)$. Then $v_1+v_2-v_3=(0,0,0)$, so they are linearly dependent. We know $v_1$ and $v_2$ are linearly independent as they are two of the standard basis vectors (see Example 13, page 41). Suppose $av_1+bv_3=0$. Then $(a+b,b,0)=(0,0,0)$. The second coordinate implies $b=0$, and then the first coordinate in turn implies $a=0$. Thus $v_1$ and $v_3$ are linearly independent. Analogously $v_2$ and $v_3$ are linearly independent.
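These rank computations are small enough to confirm by machine (a sketch assuming SymPy is installed):

```python
from sympy import Matrix

v1, v2, v3 = [1, 0, 0], [0, 1, 0], [1, 1, 0]

assert Matrix([v1, v2, v3]).rank() == 2     # all three together: dependent
for pair in ([v1, v2], [v1, v3], [v2, v3]):
    assert Matrix(pair).rank() == 2         # every pair: independent
```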


Exercise 2.3.6

Let
$$v_{11}=\begin{bmatrix}1&0\\0&0\end{bmatrix},\quad v_{12}=\begin{bmatrix}0&1\\0&0\end{bmatrix},\quad v_{21}=\begin{bmatrix}0&0\\1&0\end{bmatrix},\quad v_{22}=\begin{bmatrix}0&0\\0&1\end{bmatrix}.$$
Suppose
$$av_{11}+bv_{12}+cv_{21}+dv_{22}=\begin{bmatrix}0&0\\0&0\end{bmatrix}.$$
Then
$$\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix},$$
from which it follows immediately that $a=b=c=d=0$. Thus $v_{11}$, $v_{12}$, $v_{21}$, $v_{22}$ are linearly independent.

Now let $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ be any $2\times2$ matrix. Then $\begin{bmatrix}a&b\\c&d\end{bmatrix}=av_{11}+bv_{12}+cv_{21}+dv_{22}$. Thus $v_{11}$, $v_{12}$, $v_{21}$, $v_{22}$ span the space of $2\times2$ matrices.

Thus $v_{11}$, $v_{12}$, $v_{21}$, $v_{22}$ are both linearly independent and span the space of all $2\times2$ matrices. Thus they constitute a basis for the space of all $2\times2$ matrices.


Exercise 2.3.7

(a) Let $A=\begin{bmatrix}x&-x\\y&z\end{bmatrix}$ and $B=\begin{bmatrix}x'&-x'\\y'&z'\end{bmatrix}$ be two elements of $W_1$ and let $c\in F$. Then
$$cA+B=\begin{bmatrix}cx+x'&-(cx+x')\\cy+y'&cz+z'\end{bmatrix}=\begin{bmatrix}a&-a\\u&v\end{bmatrix}$$
where $a=cx+x'$, $u=cy+y'$ and $v=cz+z'$. Thus $cA+B$ has the form of an element of $W_1$, so $cA+B\in W_1$. By Theorem 1 (page 35), $W_1$ is a subspace.

Now let $A=\begin{bmatrix}a&b\\-a&d\end{bmatrix}$ and $B=\begin{bmatrix}a'&b'\\-a'&d'\end{bmatrix}$ be two elements of $W_2$ and let $c\in F$. Then
$$cA+B=\begin{bmatrix}ca+a'&cb+b'\\-(ca+a')&cd+d'\end{bmatrix}=\begin{bmatrix}x&y\\-x&z\end{bmatrix}$$
where $x=ca+a'$, $y=cb+b'$ and $z=cd+d'$. Thus $cA+B$ has the form of an element of $W_2$, so $cA+B\in W_2$. By Theorem 1 (page 35), $W_2$ is a subspace.

(b) Let
A1=[1100],A2=[0010],A2=[0001].Then A1,A2,A3W1 and
c1A1+c2A2+c3A3=[c1c1c2c3]=[0000]implies c1=c2=c3=0. So A1, A2, A3 are linearly independent. Now let A=[xxyz] be any element of W1. Then A=xA1+yA2+zA3. Thus A1, A2, A3 span W1. Thus {A1,A2,A3} form a basis for W1. Thus W1 has dimension three.

Let
$$A_1=\begin{bmatrix}1&0\\-1&0\end{bmatrix},\quad A_2=\begin{bmatrix}0&1\\0&0\end{bmatrix},\quad A_3=\begin{bmatrix}0&0\\0&1\end{bmatrix}.$$
Then $A_1,A_2,A_3\in W_2$ and
$$c_1A_1+c_2A_2+c_3A_3=\begin{bmatrix}c_1&c_2\\-c_1&c_3\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
implies $c_1=c_2=c_3=0$. So $A_1$, $A_2$, $A_3$ are linearly independent. Now let $A=\begin{bmatrix}x&y\\-x&z\end{bmatrix}$ be any element of $W_2$. Then $A=xA_1+yA_2+zA_3$. Thus $A_1$, $A_2$, $A_3$ span $W_2$. Thus $\{A_1,A_2,A_3\}$ is a basis for $W_2$, and $W_2$ has dimension three.

Let $V$ be the space of $2\times2$ matrices. We showed in Exercise 6 that $\dim(V)=4$. Now $W_1\subseteq W_1+W_2\subseteq V$, so by Corollary 1, page 46, $3\le\dim(W_1+W_2)\le4$. Let $A=\begin{bmatrix}1&0\\-1&0\end{bmatrix}$. Then $A\in W_2$ and $A\notin W_1$. Thus $W_1+W_2$ is strictly bigger than $W_1$, so $4\ge\dim(W_1+W_2)>\dim(W_1)=3$. Thus $\dim(W_1+W_2)=4$.

Suppose $A=\begin{bmatrix}a&b\\c&d\end{bmatrix}$ is in $W_1\cap W_2$. Then $A\in W_1$ implies $b=-a$, and $A\in W_2$ implies $c=-a$. So $A=\begin{bmatrix}a&-a\\-a&b\end{bmatrix}$ for some scalars $a,b$. Let $A_1=\begin{bmatrix}1&-1\\-1&0\end{bmatrix}$, $A_2=\begin{bmatrix}0&0\\0&1\end{bmatrix}$. Suppose $aA_1+bA_2=0$. Then
$$\begin{bmatrix}a&-a\\-a&b\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix},$$
which implies $a=b=0$. Thus $A_1$ and $A_2$ are linearly independent. Let $A=\begin{bmatrix}a&-a\\-a&b\end{bmatrix}\in W_1\cap W_2$. Then $A=aA_1+bA_2$. So $A_1,A_2$ span $W_1\cap W_2$. Thus $\{A_1,A_2\}$ is a basis for $W_1\cap W_2$, and $\dim(W_1\cap W_2)=2$.
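The dimension counts can be cross-checked by flattening each $2\times2$ matrix $\begin{bmatrix}p&q\\r&s\end{bmatrix}$ into the vector $(p,q,r,s)$ (a sketch assuming SymPy is installed):

```python
from sympy import Matrix

# Bases of W1 and W2, each matrix flattened row-wise to a vector in R^4.
W1 = [[1, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
W2 = [[1, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]

dim_sum = Matrix(W1 + W2).rank()
assert dim_sum == 4                  # dim(W1 + W2) = 4
assert 3 + 3 - dim_sum == 2          # dim(W1 ∩ W2), by the dimension formula
# The claimed intersection basis lies in the span of both W1 and W2.
for v in ([1, -1, -1, 0], [0, 0, 0, 1]):
    assert Matrix(W1 + [v]).rank() == 3
    assert Matrix(W2 + [v]).rank() == 3
```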


Exercise 2.3.8

Let $V$ be the space of all $2\times2$ matrices. Let
$$A_1=\begin{bmatrix}1&0\\0&0\end{bmatrix},\quad A_2=\begin{bmatrix}0&0\\0&1\end{bmatrix},\quad A_3=\begin{bmatrix}1&1\\0&0\end{bmatrix},\quad A_4=\begin{bmatrix}0&0\\1&1\end{bmatrix}.$$
Then $A_i^2=A_i$ for all $i$. Now
$$aA_1+bA_2+cA_3+dA_4=\begin{bmatrix}a+c&c\\d&b+d\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
implies $c=d=0$, which in turn implies $a=b=0$. Thus $A_1,A_2,A_3,A_4$ are linearly independent. Thus they span a subspace of $V$ of dimension four. But by Exercise 6, $V$ also has dimension four. Thus by Corollary 1, page 46, the subspace spanned by $A_1,A_2,A_3,A_4$ is the entire space. Thus $\{A_1,A_2,A_3,A_4\}$ is a basis.
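Both claims, idempotency and independence, are easy to confirm by machine (a sketch assuming SymPy is installed):

```python
from sympy import Matrix

A1 = Matrix([[1, 0], [0, 0]])
A2 = Matrix([[0, 0], [0, 1]])
A3 = Matrix([[1, 1], [0, 0]])
A4 = Matrix([[0, 0], [1, 1]])

for A in (A1, A2, A3, A4):
    assert A * A == A                       # each basis matrix is idempotent
# Flatten each matrix to a row vector; rank 4 means linear independence.
M = Matrix([list(A) for A in (A1, A2, A3, A4)])
assert M.rank() == 4
```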


Exercise 2.3.9

Suppose $a(\alpha+\beta)+b(\beta+\gamma)+c(\gamma+\alpha)=0$. Rearranging gives $(a+c)\alpha+(a+b)\beta+(b+c)\gamma=0$. Since $\alpha$, $\beta$, and $\gamma$ are linearly independent, it follows that $a+c=a+b=b+c=0$. This gives a system of equations in $a,b,c$ with coefficient matrix
$$\begin{bmatrix}1&1&0\\1&0&1\\0&1&1\end{bmatrix}.$$
This row-reduces as follows:
$$\begin{bmatrix}1&1&0\\1&0&1\\0&1&1\end{bmatrix}
\rightarrow\begin{bmatrix}1&1&0\\0&-1&1\\0&1&1\end{bmatrix}
\rightarrow\begin{bmatrix}1&1&0\\0&1&-1\\0&1&1\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&1\\0&1&-1\\0&0&2\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&1\\0&1&-1\\0&0&1\end{bmatrix}
\rightarrow\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}.$$
Since this row-reduces to the identity matrix, by Theorem 7, page 13, the only solution is $a=b=c=0$. Thus $\alpha+\beta$, $\beta+\gamma$, and $\gamma+\alpha$ are linearly independent.
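The coefficient matrix is small enough to check directly (a sketch assuming SymPy is installed; the arithmetic here is over the rationals):

```python
from sympy import Matrix, eye

# Coefficient matrix of a+b = 0, a+c = 0, b+c = 0 in the unknowns (a, b, c).
M = Matrix([[1, 1, 0], [1, 0, 1], [0, 1, 1]])

assert M.det() != 0              # invertible, so only the trivial solution
assert M.rref()[0] == eye(3)     # row-reduces to the identity matrix
```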


Exercise 2.3.10

The statement follows from Theorem 4, page 44.


Exercise 2.3.11

(a) It is clear from inspection of the definition of a vector space (pages 28-29) that a vector space over a field $F$ is a vector space over every subfield of $F$, because all the required properties (e.g. commutativity and associativity) are inherited from the operations in $F$. Let $M$ be the vector space of all $2\times2$ matrices over $\mathbb{C}$ ($M$ is a vector space; see Example 2, page 29). We will show $V$ is a subspace of $M$ as a vector space over $\mathbb{C}$. It will follow from the comment above that $V$ is a vector space over $\mathbb{R}$. Now $V$ is a subset of $M$, so using Theorem 1 (page 35) we must show that whenever $A,B\in V$ and $c\in\mathbb{C}$, then $cA+B\in V$. Let $A,B\in V$. Write $A=\begin{bmatrix}x&y\\z&w\end{bmatrix}$ and $B=\begin{bmatrix}x'&y'\\z'&w'\end{bmatrix}$. Then
$$x+w=x'+w'=0.\tag{1}$$
Now
$$cA+B=\begin{bmatrix}cx+x'&cy+y'\\cz+z'&cw+w'\end{bmatrix}.$$
To show $cA+B\in V$ we must show $(cx+x')+(cw+w')=0$. Rearranging the left-hand side gives $c(x+w)+(x'+w')$, which equals zero by (1).

(b) We can write the general element of $V$ as
$$A=\begin{bmatrix}a+bi&e+fi\\g+hi&-a-bi\end{bmatrix}.$$
Let
$$v_1=\begin{bmatrix}1&0\\0&-1\end{bmatrix},\ v_2=\begin{bmatrix}i&0\\0&-i\end{bmatrix},\ v_3=\begin{bmatrix}0&1\\0&0\end{bmatrix},\ v_4=\begin{bmatrix}0&i\\0&0\end{bmatrix},\ v_5=\begin{bmatrix}0&0\\1&0\end{bmatrix},\ v_6=\begin{bmatrix}0&0\\i&0\end{bmatrix}.$$
Then $A=av_1+bv_2+ev_3+fv_4+gv_5+hv_6$, so $v_1$, $v_2$, $v_3$, $v_4$, $v_5$, $v_6$ span $V$. Suppose $av_1+bv_2+ev_3+fv_4+gv_5+hv_6=0$. Then
$$av_1+bv_2+ev_3+fv_4+gv_5+hv_6=\begin{bmatrix}a+bi&e+fi\\g+hi&-a-bi\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
implies $a=b=e=f=g=h=0$, because a complex number $u+vi=0$ if and only if $u=v=0$. Thus $v_1$, $v_2$, $v_3$, $v_4$, $v_5$, $v_6$ are linearly independent. Thus $\{v_1,\dots,v_6\}$ is a basis for $V$ as a vector space over $\mathbb{R}$, and $\dim(V)=6$.

(c) Let $A,B\in W$ and $c\in\mathbb{R}$. By Theorem 1 (page 35) we must show $cA+B\in W$. Write $A=\begin{bmatrix}x&y\\-\bar y&-x\end{bmatrix}$ and $B=\begin{bmatrix}x'&y'\\-\bar y'&-x'\end{bmatrix}$, where $x,y,x',y'\in\mathbb{C}$. Then
$$cA+B=\begin{bmatrix}cx+x'&cy+y'\\-c\bar y-\bar y'&-cx-x'\end{bmatrix}.$$
Since $-c\bar y-\bar y'=-\overline{cy+y'}$ (using the fact that $c$ is real), it follows that $cA+B\in W$. Note that we definitely need $c\in\mathbb{R}$ for this to be true.

It remains to find a basis for $W$. We can write the general element of $W$ as
$$A=\begin{bmatrix}a+bi&e+fi\\-e+fi&-a-bi\end{bmatrix}.$$
Let
$$v_1=\begin{bmatrix}1&0\\0&-1\end{bmatrix},\quad v_2=\begin{bmatrix}i&0\\0&-i\end{bmatrix},\quad v_3=\begin{bmatrix}0&1\\-1&0\end{bmatrix},\quad v_4=\begin{bmatrix}0&i\\i&0\end{bmatrix}.$$
Then $A=av_1+bv_2+ev_3+fv_4$, so $v_1$, $v_2$, $v_3$, $v_4$ span $W$. Suppose $av_1+bv_2+ev_3+fv_4=0$. Then
$$av_1+bv_2+ev_3+fv_4=\begin{bmatrix}a+bi&e+fi\\-e+fi&-a-bi\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
implies $a=b=e=f=0$, because a complex number $u+vi=0$ if and only if $u=v=0$. Thus $v_1$, $v_2$, $v_3$, $v_4$ are linearly independent. Thus $\{v_1,\dots,v_4\}$ is a basis for $W$ as a vector space over $\mathbb{R}$, and $\dim(W)=4$.
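Viewing $W$ as a real vector space, each matrix can be flattened into the eight real numbers given by the real and imaginary parts of its entries; the four basis matrices should then give four independent vectors in $\mathbb{R}^8$ (a sketch assuming SymPy is installed; `realize` is our own helper):

```python
from sympy import Matrix, I, re, im

v1 = Matrix([[1, 0], [0, -1]])
v2 = Matrix([[I, 0], [0, -I]])
v3 = Matrix([[0, 1], [-1, 0]])
v4 = Matrix([[0, I], [I, 0]])

def realize(M):
    # Real coordinates of a complex 2x2 matrix: (Re, Im) of each entry.
    return [f(z) for z in M for f in (re, im)]

# Rank 4 over the reals confirms linear independence over R.
assert Matrix([realize(v) for v in (v1, v2, v3, v4)]).rank() == 4
```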


Exercise 2.3.12

Let $M$ be the space of all $m\times n$ matrices. Let $M^{ij}$ be the matrix whose entries are all zero except for the $i,j$-th entry, which is one. We claim $\{M^{ij}\mid 1\le i\le m,\ 1\le j\le n\}$ constitutes a basis for $M$. Let $A=(a_{ij})$ be an arbitrary matrix in $M$. Then $A=\sum_{i,j}a_{ij}M^{ij}$. Thus the $M^{ij}$ span $M$. Suppose $\sum_{i,j}a_{ij}M^{ij}=0$. The left-hand side equals the matrix $(a_{ij})$, and this equals the zero matrix if and only if every $a_{ij}=0$. Thus the $M^{ij}$ are linearly independent as well. Thus the $mn$ matrices $M^{ij}$ constitute a basis, and $M$ has dimension $mn$.


Exercise 2.3.13

If $F$ has characteristic two, then $(\alpha+\beta)+(\beta+\gamma)+(\gamma+\alpha)=2\alpha+2\beta+2\gamma=0+0+0=0$, since in a field of characteristic two $2=0$. Thus in this case $\alpha+\beta$, $\beta+\gamma$ and $\gamma+\alpha$ are linearly dependent. However, any two of them are linearly independent. For example, suppose $a_1(\alpha+\beta)+a_2(\beta+\gamma)=0$. The left-hand side equals $a_1\alpha+(a_1+a_2)\beta+a_2\gamma$. Since $\alpha$, $\beta$, $\gamma$ are linearly independent, this is zero only if $a_1=0$, $a_2=0$ and $a_1+a_2=0$. In particular $a_1=a_2=0$, so $\alpha+\beta$ and $\beta+\gamma$ are linearly independent.
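The characteristic-two computation can be imitated with plain integer arithmetic mod 2, writing each sum in coordinates with respect to the independent triple $(\alpha,\beta,\gamma)$ (a pure-Python sketch):

```python
# Coordinates w.r.t. (alpha, beta, gamma); arithmetic mod 2 models char 2.
u = (1, 1, 0)   # alpha + beta
v = (0, 1, 1)   # beta + gamma
w = (1, 0, 1)   # gamma + alpha

# The three sums are dependent: u + v + w = 0 in characteristic two.
assert tuple((a + b + c) % 2 for a, b, c in zip(u, v, w)) == (0, 0, 0)

# Over GF(2) the only scalars are 0 and 1, so two non-zero vectors are
# dependent only if they coincide; these are pairwise distinct.
assert len({u, v, w}) == 3
```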


Exercise 2.3.14

We know that $\mathbb{Q}$ is countable and $\mathbb{R}$ is uncountable. Since the set of $n$-tuples of elements from a countable set is countable, $\mathbb{Q}^n$ is countable for all $n$. Now suppose $\{r_1,\dots,r_n\}$ is a basis for $\mathbb{R}$ over $\mathbb{Q}$. Then every element of $\mathbb{R}$ can be written as $a_1r_1+\cdots+a_nr_n$ with $a_i\in\mathbb{Q}$. Thus the map $(a_1,\dots,a_n)\mapsto a_1r_1+\cdots+a_nr_n$ sends $\mathbb{Q}^n$ onto $\mathbb{R}$, so the cardinality of $\mathbb{R}$ must be less than or equal to that of $\mathbb{Q}^n$. But the former is uncountable and the latter is countable, a contradiction. Thus there can be no such finite basis.

From http://greggrant.org
