If you find any mistakes, please make a comment! Thank you.

Chapter 2 Exercise C

1. Solution: Let $u_1,u_2,\cdots,u_n$ be a basis of $U$. Thus $n=\dim U=\dim V$. Hence $u_1,u_2,\cdots,u_n$ is a linearly independent list of vectors in $V$ whose length equals $\dim V$. By 2.39, $u_1,u_2,\cdots,u_n$ is a basis of $V$. In particular, every vector in $V$ can be written as a linear combination of $u_1,u_2,\cdots,u_n$. Since each $u_i\in U$, it follows that $V\subset U$. Hence $U=V$.

2. Solution: The dimension of a subspace $U$ of $\R^2$ can only be $0$, $1$, or $2$. If $\dim U=0$, then $U=\{0\}$. If $\dim U=2$, then $U=\R^2$ by Problem 1. If $\dim U=1$, then for any nonzero $x\in U$, it follows that \[U=\{kx:k\in\R\},\]which is the line through $x$ and the origin.

3. Solution: This is similar to Problem 2; the cases $\dim U=0,1,3$ are handled as before. If $\dim U=2$, then there exist two linearly independent vectors $x,y\in U$, and \[U=\{k_1x+k_2y:k_1\in\R,k_2\in\R\},\]which is the plane through $x$, $y$ and the origin.

4. Solution: (a) A basis of $U$ is $x-6$, $x^2-6x$, $x^3-6x^2$ and $x^4-6x^3$. Of course, $x-6$, $x^2-6x$, $x^3-6x^2$ and $x^4-6x^3$ is linearly independent since these polynomials have different degrees (it is easy to check). Moreover, if $p(6)=0$, then $p(x)$ is divisible by $x-6$, hence \begin{align*}p(x)=&(x-6)(k_3x^3+k_2x^2+k_1x+k_0)\\=&k_3(x^4-6x^3)+k_2(x^3-6x^2)+k_1(x^2-6x)+k_0(x-6)\end{align*} is a linear combination of $x-6$, $x^2-6x$, $x^3-6x^2$ and $x^4-6x^3$.
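A quick Python sanity check (illustrative only, not part of the proof) confirms that each polynomial in the claimed basis indeed lies in $U$, i.e. vanishes at $6$:

```python
# Each claimed basis polynomial of U = {p in P4(F) : p(6) = 0} should
# vanish at x = 6; linear independence follows from the distinct degrees.
basis = [
    lambda x: x - 6,
    lambda x: x**2 - 6*x,
    lambda x: x**3 - 6*x**2,
    lambda x: x**4 - 6*x**3,
]

values_at_6 = [p(6) for p in basis]
print(values_at_6)  # [0, 0, 0, 0]
```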

(b) Of course, $1$, $x-6$, $x^2-6x$, $x^3-6x^2$ and $x^4-6x^3$ is a basis of $\ca P_4(\mb F)$.

(c) Let $W=\{c:c\in\mb F\}$, then $\ca P_4(\mb F)=U\oplus W$ by (b).

5. Solution: (a) For a polynomial $f(x)=ax^4+bx^3+cx^2+dx+e$, the condition $f''(6)=0$ gives a linear equation in $a,b,c,d,e$. Find a basis of the solution space of this linear equation, then substitute it back into $f(x)=ax^4+bx^3+cx^2+dx+e$ to get a basis of $U$ (why?). I skip the details and give an example of a basis: $1$, $x$, $x^3-18x^2$, $x^4-12x^3$ is a basis of $U$.
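As an illustrative Python check (not part of the solution), one can verify that the second derivative of each claimed basis element vanishes at $6$:

```python
# Coefficient lists [a0, a1, a2, a3, a4] represent a0 + a1 x + ... + a4 x^4.
def second_derivative_at(coeffs, x):
    # d^2/dx^2 of c x^k is k (k-1) c x^(k-2) for k >= 2.
    return sum(k * (k - 1) * c * x ** (k - 2) for k, c in enumerate(coeffs) if k >= 2)

basis = {
    "1":           [1, 0, 0, 0, 0],
    "x":           [0, 1, 0, 0, 0],
    "x^3 - 18x^2": [0, 0, -18, 1, 0],
    "x^4 - 12x^3": [0, 0, 0, -12, 1],
}
results = {name: second_derivative_at(c, 6) for name, c in basis.items()}
print(results)  # every value is 0
```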

(b) Of course, $1$, $x$, $x^2$, $x^3-18x^2$ and $x^4-12x^3$ is a basis of $\ca P_4(\mb R)$.

(c) Let $W=\{cx^2:c\in\mb R\}$, then $\ca P_4(\mb R)=U\oplus W$ by (b).

6. Solution: (a) For a polynomial $f(x)=ax^4+bx^3+cx^2+dx+e$, the condition $f(2)=f(5)$ gives a linear equation in $a,b,c,d,e$. The dimension of the solution space of this linear equation is $4$, and so is $\dim U$ (why?). Thus we only have to give $4$ linearly independent polynomials in $\ca P_4(\mb F)$ such that each of them attains the same value at $x=2$ and $x=5$. A good example is $1$, $x^2-7x+10$, $x(x^2-7x+10)$ and $x^2(x^2-7x+10)$.
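A small Python check (illustrative, not part of the solution) confirms the defining property $p(2)=p(5)$ for each element, where $q(x)=x^2-7x+10=(x-2)(x-5)$:

```python
# Each claimed basis element of U = {p : p(2) = p(5)} should take the
# same value at x = 2 and x = 5; q vanishes at both points.
def q(x):
    return x**2 - 7*x + 10

basis = [lambda x: 1, q, lambda x: x * q(x), lambda x: x**2 * q(x)]
pairs = [(p(2), p(5)) for p in basis]
print(pairs)  # [(1, 1), (0, 0), (0, 0), (0, 0)]
```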

(b) Of course, $1$, $x$, $x^2-7x+10$, $x(x^2-7x+10)$ and $x^2(x^2-7x+10)$ is a basis of $\ca P_4(\mb F)$.

(c) Let $W=\{cx:c\in\mb F\}$, then $\ca P_4(\mb F)=U\oplus W$ by (b).

7. Solution: (a) For a polynomial $f(x)=ax^4+bx^3+cx^2+dx+e$, the condition $f(2)=f(5)=f(6)$ gives $2$ linear equations in $a,b,c,d,e$. The dimension of the solution space of these linear equations is $3$, and so is $\dim U$ (why?). Thus we only have to give $3$ linearly independent polynomials in $\ca P_4(\mb F)$ such that each of them attains the same value at $x=2$, $x=5$ and $x=6$. A good example is $1$, $(x-2)(x-5)(x-6)$ and $x(x-2)(x-5)(x-6)$.
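Again, an illustrative Python check (not part of the solution) that each element takes the same value at $2$, $5$ and $6$:

```python
# r vanishes at all three sample points, so each claimed basis element of
# U = {p : p(2) = p(5) = p(6)} is constant on {2, 5, 6}.
def r(x):
    return (x - 2) * (x - 5) * (x - 6)

basis = [lambda x: 1, r, lambda x: x * r(x)]
triples = [(p(2), p(5), p(6)) for p in basis]
print(triples)  # [(1, 1, 1), (0, 0, 0), (0, 0, 0)]
```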

(b) Of course, $1$, $x$, $x^2$, $(x-2)(x-5)(x-6)$ and $x(x-2)(x-5)(x-6)$ is a basis of $\ca P_4(\mb F)$.

(c) Let $W=\{cx+dx^2:c\in\mb F,d\in\mb F\}$, then $\ca P_4(\mb F)=U\oplus W$ by (b).

8. Solution: (a) For a polynomial $f(x)=ax^4+bx^3+cx^2+dx+e$, the condition $\int_{-1}^1 f=0$ gives a linear equation in $a,b,c,d,e$, namely $a/5+c/3+e=0$. Find a basis of the solution space of this linear equation, then substitute it back into $f(x)=ax^4+bx^3+cx^2+dx+e$ to get a basis of $U$ (why?). I skip the details and give an example of a basis: $x$, $3x^2-1$, $x^3$ and $5x^4-1$ is a basis of $U$.
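An exact Python check (illustrative only) that each claimed basis element integrates to $0$ over $[-1,1]$, using the fact that odd powers integrate to $0$ and $\int_{-1}^1 x^k\,dx=2/(k+1)$ for even $k$:

```python
from fractions import Fraction

# Integrate p(x) = sum_k c_k x^k exactly over [-1, 1].
def integral_on_interval(coeffs):
    return sum(Fraction(2 * c, k + 1) for k, c in enumerate(coeffs) if k % 2 == 0)

basis = {
    "x":        [0, 1, 0, 0, 0],
    "3x^2 - 1": [-1, 0, 3, 0, 0],
    "x^3":      [0, 0, 0, 1, 0],
    "5x^4 - 1": [-1, 0, 0, 0, 5],
}
integrals = {name: integral_on_interval(c) for name, c in basis.items()}
print(integrals)  # every value is 0
```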

(b) Of course, $1$, $x$, $3x^2-1$, $x^3$ and $5x^4-1$ is a basis of $\ca P_4(\mb R)$.

(c) Let $W=\{c:c\in\mb R\}$, then $\ca P_4(\mb R)=U\oplus W$ by (b).

9. Solution: Since \[v_2-v_1=(v_2+w)-(v_1+w),\]it follows that $v_2-v_1\in \mathrm{span}(v_1+w,\cdots,v_m+w)$. Similarly, $v_i-v_1\in \mathrm{span}(v_1+w,\cdots,v_m+w)$ for all $2\leqslant i\leqslant m$.

Moreover, $v_2-v_1$, $\cdots$, $v_m-v_1$ is linearly independent since $v_1$, $\cdots$, $v_m$ is linearly independent in $V$. (This is easy to prove; see examples in Exercises 2.A and 2.B.) By 2.33, it follows that \[\dim\mathrm{span}(v_1+w,\cdots,v_m+w)\geqslant m-1.\]

10. Solution: Because $p_0$ has degree $0$, we have $\mathrm{span}(p_0)=\mathrm{span}(1)$. Assume inductively that \[ \mathrm{span}(p_0,p_1,\cdots,p_i)=\mathrm{span}(1,x,\cdots,x^i). \] Then it is clear that \[ \mathrm{span}(p_0,p_1,\cdots,p_i,p_{i+1})\subset \mathrm{span}(1,x,\cdots,x^i,x^{i+1}). \]On the other hand, $p_{i+1}$ has degree $i+1$, hence it can be written as \[p_{i+1}=a_{i+1}x^{i+1}+f_{i+1}(x),\]where $a_{i+1}\ne0$ and $\deg f_{i+1}(x)\leqslant i$. Then \[x^{i+1}=\frac{1}{a_{i+1}}(p_{i+1}-f_{i+1}(x))\in \mathrm{span}(1,x,\cdots,x^i,p_{i+1}).\]Since $\mathrm{span}(p_0,p_1,\cdots,p_i)=\mathrm{span}(1,x,\cdots,x^i)$, we conclude \[\mathrm{span}(1,x,\cdots,x^i,p_{i+1})=\mathrm{span}(p_0,p_1,\cdots,p_i,p_{i+1}).\]Thus \[x^{i+1}\in \mathrm{span}(p_0,p_1,\cdots,p_i,p_{i+1}),\]and therefore \[\mathrm{span}(1,x,\cdots,x^i,x^{i+1})\subset \mathrm{span}(p_0,p_1,\cdots,p_i,p_{i+1}).\] By induction, we have \[ \mathrm{span}(p_0,p_1,\cdots,p_i)=\mathrm{span}(1,x,\cdots,x^i) \]for all $0\leqslant i\leqslant m$. In particular, \[ \mathrm{span}(p_0,p_1,\cdots,p_m)=\mathrm{span}(1,x,\cdots,x^m)=\ca P_m(\mb F), \]so $p_0$, $p_1$, $\cdots$, $p_m$ is a spanning list of $\ca P_m(\mb F)$ whose length equals $\dim \ca P_m(\mb F)$, hence a basis of $\ca P_m(\mb F)$ (2.42).
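The engine of this induction is that $\deg p_j=j$ makes the coefficient matrix of $p_0,\cdots,p_m$ (written in the basis $1,x,\cdots,x^m$) triangular with nonzero diagonal, hence invertible. A short Python illustration with the hypothetical choice $p_j=(x+1)^j$:

```python
from math import comb

# For p_j = (x + 1)^j, M[j][k] is the coefficient of x^k in p_j.
m = 4
M = [[comb(j, k) for k in range(m + 1)] for j in range(m + 1)]

# deg p_j = j means M[j][k] = 0 for k > j (lower triangular), and the
# diagonal holds the nonzero leading coefficients, so M is invertible and
# p_0, ..., p_m spans the same space as 1, x, ..., x^m.
diagonal = [M[j][j] for j in range(m + 1)]
above_diagonal = [M[j][k] for j in range(m + 1) for k in range(j + 1, m + 1)]
print(diagonal)        # [1, 1, 1, 1, 1]
print(above_diagonal)  # all zeros
```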

11. Solution: By 2.43, we have \[\dim(U\cap W)=\dim U+\dim W-\dim(U+W)=8-\dim(\mb R^8)=0.\]Hence $U \cap W=\{0\}$; combining this with $U+W=\mb R^8$, it follows that $\mb R^8=U\oplus W$.

12. Solution: By 2.43, we have \[\dim(U\cap W)=\dim U+\dim W-\dim(U+W)=10-\dim(U+W).\]Since $U+W$ is a subspace of $\mb R^9$, it follows that $\dim(U+W)\leqslant 9$ (by 2.38). Hence $\dim(U\cap W)\geqslant 1$, i.e. $U \cap W\ne\{0\}$.

13. Solution: By 2.43, we have \[\dim(U\cap W)=\dim U+\dim W-\dim(U+W)=8-\dim(U+W)\geqslant 8-\dim(\mb C^6)=2.\]Hence there exist $e_1, e_2\in U\cap W$ such that $e_1$ and $e_2$ are linearly independent. Then neither of $e_1$ and $e_2$ is a scalar multiple of the other.

14. Solution: Choose a basis $\ca W_i$ of each $U_i$; then, by the definition of sum, $U_1+\cdots+U_m$ is spanned by the union of $\ca W_1$, $\cdots$, $\ca W_m$. Since the combined list $\ca W_1$, $\cdots$, $\ca W_m$ has length $\dim U_1+\cdots+\dim U_m$, we conclude from 2.31 that \[\dim(U_1+\cdots+U_m)\leqslant \dim U_1+\cdots+\dim U_m.\]In particular, $U_1+\cdots+U_m$ is finite-dimensional.

15. Solution: Let $(v_1,\cdots,v_n)$ be a basis of $V$. For each $j$, let $U_j$ equal $\mathrm{span}(v_j)$; in other words, $U_j=\{av_j:a\in\mathbb F\}$. It is easy to see that $\dim U_j=1$ for all $j=1,\cdots,n$. Because $(v_1,\cdots,v_n)$ is a basis of $V$, each vector in $V$ can be written uniquely in the form \[a_1v_1+\cdots+a_nv_n,\]where $a_1$, $\cdots$, $a_n\in\mathbb F$. By the definition of direct sum, this means that $V=U_1\oplus \cdots \oplus U_n$.

16. Solution: Since $U_1+\cdots+U_m$ is a direct sum, we have\[U_1\oplus \cdots \oplus U_m=U_1+\cdots+U_m.\]Hence $U_1\oplus \cdots \oplus U_m$ is finite-dimensional by Problem 14. Now we use induction on $m$ to show\[\dim U_1\oplus \cdots \oplus U_m= \dim U_1+\cdots+\dim U_m.\]For $m=2$, by 2.43 we have\[\dim(U_1+U_2)=\dim U_1+\dim U_2-\dim(U_1\cap U_2)=\dim U_1+\dim U_2\]since $U_1\cap U_2=\{0\}$, as $U_1+U_2$ is a direct sum.

Suppose the equality holds for $m-1$. Now consider the case $m$: if $U_1+\cdots+U_m$ is a direct sum, then the only way to write $0$ as a sum $u_1+\cdots+u_m$, where each $u_j$ is in $U_j$, is by taking each $u_j$ equal to $0$. Therefore the only way to write $0$ as a sum $u_1+\cdots+u_{m-1}$, where each $u_j$ is in $U_j$, is by taking each $u_j$ equal to $0$. It follows that $U_1+\cdots+U_{m-1}$ is a direct sum, hence by the inductive assumption\[\dim U_1\oplus \cdots \oplus U_{m-1}= \dim U_1+\cdots+\dim U_{m-1}.\]On the other hand, let $W=U_1\oplus \cdots \oplus U_{m-1}$, so that $U_1\oplus \cdots \oplus U_m=W+U_m$. Suppose $0=x+y$, where $x=x_1+\cdots+x_{m-1}\in W$ with each $x_j\in U_j$, and $y\in U_m$. Since $U_1+\cdots+U_m$ is a direct sum, each $x_j=0$ and $y=0$; hence $x=0$ and $y=0$, so $W+U_m$ is a direct sum by 1.44. Therefore, using the case $m=2$, we have\begin{align*}&\ \dim U_1\oplus \cdots \oplus U_m\\ =&\ \dim(W+U_m)=\dim W+\dim U_m\\ =&\ \dim U_1+\cdots+\dim U_{m-1}+\dim U_m.\end{align*}

17. Solution: To give a counterexample, let $V=\mb R^2$, and let \[U_1=\{(x,0):x\in\R\},\]\[U_2=\{(0,y):y\in\R\},\]\[U_3=\{(x,x):x\in\R\}.\]Then $U_1+U_2+U_3=\R^2$, so $\dim(U_1+U_2+U_3)=2$. However, \[\dim U_1=\dim U_2=\dim U_3=1\]and \[\dim(U_1\cap U_2)=\dim (U_2\cap U_3)=\dim (U_3\cap U_1)=\dim (U_1\cap U_2\cap U_3)=0.\]Thus in this case our guess would reduce to the formula $2=3$, which is obviously false.
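For completeness, the arithmetic of the counterexample, written out as a trivial Python check:

```python
# Inclusion-exclusion-style guess vs. the actual dimension for the
# subspaces U1, U2, U3 of R^2 above.
dims = [1, 1, 1]        # dim U1, dim U2, dim U3
pairwise = [0, 0, 0]    # dim(U1 cap U2), dim(U2 cap U3), dim(U3 cap U1)
triple = 0              # dim(U1 cap U2 cap U3)

guess = sum(dims) - sum(pairwise) + triple
actual = 2              # dim(U1 + U2 + U3) = dim R^2
print(guess, actual)  # 3 2
```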


This website is supposed to help you study Linear Algebra. Please only read these solutions after thinking about the problems carefully. Do not just copy these solutions.

This Post Has 47 Comments

  1. 15 seems incomplete, it should be noted (demonstrated?) that because the intersection of any two of the one dimensional subspaces we constructed only contains 0, then a special case of the solution in 14 can be derived where the inequality is replaced by an equality. This means the sum of the subspaces has the same dimension as V and thus the list v_1, … , v_m is also a basis in the sum of the subspaces.

    You note that the dimension of the 1-D subspaces is clearly one, but without noting the above, Q14 implies the dimension of the sum of subspaces could be strictly less than the sum of the dimensions of each individual subspace. If this were the case then the basis you gave for V would span the sum of the subspaces, but it would not be linearly independent in it. That would mean V is the sum of the subspaces, but it could not be the direct sum.

  2. I think the solution to question 5 is wrong. I believe you are imposing the additional condition that x^3 -18x^2 = 0, this makes one of the vectors in your basis the zero vector; so the list is necessarily linearly dependent and thus cannot be a basis.

    I will find the constraint imposed by p’’(6)=0 and then plug it into the original equation:
    For f(x) = ax^4 + bx^3 + cx^2 + dx + e
    f’’(x) = 12ax^2 + 6bx + 2c
    f’’(6) = 0 = 432a + 36b + 2c, so c = -18b - 216a
    Putting this c into the first equation gives:
    f(x) = ax^4 + bx^3 + (-18b -216a)x^2 + dx + e
    f(x) = a(x^4 - 216x^2) + b(x^3 - 18x^2) + dx + e

    If the basis you supplied are suitable then we can construct an f(x) constrained in the way given above but using your basis:
    f(x) = q(x^4 -12x^3) + r(x^3 -18x^2) + sx + t (q,r,s,t are scalars)
    Additive identity is in the vector space, so let f(x) in particular be 0. Then:
    a(x^4 - 216x^2) + b(x^3 - 18x^2) + dx + e = q(x^4 -12x^3) + r(x^3 -18x^2) + sx + t
    There is only one x^4 on each side, so a=q, similarly it is clear that b=r, d=s, e=t. Then:
    a(x^4 - 216x^2) = a(x^4 -12x^3) and it follows that x^3 - 18x^2 = 0.

    1. Oops, taking f(x) = 0 is actually not required, but does not impact my post.

  3. Hello! This is my first contribution on this great site. I think I found one more solution to problem #9.
    We can identify the following subspaces of V (using the fact that w \in V)

    U = span(v_1, ..., v_m)
    U1 = span(v_1 + w, ... , v_m+ w)
    U2 = span(w)

    Note that U is a subset of U1+U2, since for each u in U:
    u = a_1*v_1 + ... + a_m * v+m
    = a_1*(v_1+w) + ... + a_m*(v_m+w) - (a_1+...+a_m) * w
    We can then see that U is a subspace of U1 + U2.

    Then m = dim(U) <= dim(U1+U2) [by 2.38]
    = dimU1 + dimU2 - dim(U1 \intersection U2) [by 2.43]
    = m - 1

    1. (typo in the last line: should be <= m -1)

        1. You have shown dimU <= m-1, but you are asked to show dimU1 >= m-1. Also it appears to be wrong, as dimU = m, so your conclusion is a contradiction: m <= m-1.

  4. For questions 2 and 3, to complete the questions don't we have to show that those subspaces are precisely $R^{2}$ and $R^{3}$, respectively? So for example for question 2, we show that those are precisely the subspaces of $R^{2}$ by showing {0}, $R^{2}$, and $U = \{kx, k \in R\}$ is a direct sum equal to $R^{2}$?

  5. For #17, the counterexample is good but the text "U1+U2+U3=R2" is inaccurate.
    R2 is obviously a plane, and U1, U2, U3 are three axes (lines); we had better say dim(U1+U2+U3)=dim(R2)

  6. I do not understand how to approach question 5. I started by letting p(x)=a+bx+cx^2+dx^3+ex^4, then came up with an expression for p''(6), that is 2c+36d+432e=0. I do not know where to go from here on. Can someone explain what to do next? Thanks.

    1. Compute the second derivative $$p''(x)=12ex^2+6dx+2c.$$Plug in $6$ and use $p''(6)$ to get it.

      1. That is what she did - she computed the second derivative, plugged in 6 and got a linear equation that links c, d and e. The question was about what to do next.

        I'm also not sure

        1. I think at this point of the book, we shouldn't introduce linear equation systems to solve problems #5, #6 and #7. Analogous to examples in the book, I think what the author might expect is ideas like this:
          For #5, it's easy to find 4 linearly independent elements: 1, x, (x-6)^3, (x-6)^4, and since the whole space has dimension 5, the dimension of this subspace must be less than 5, so these 4 linearly independent elements are a basis of the given subspace.
          For #6, we can find 1, (x-7/2)^2, x(x-7/2)^2, x^2(x-7/2)^2, and the remaining part is analogous to #5.
          For #7, we can find 1, (x-2)(x-5)(x-6), x(x-2)(x-5)(x-6), and since the subspace defined in #7 is a subspace of the one defined in #6, its dimension should be less than 4; since we have 3 linearly independent elements, it's done.

          1. I don't understand why the subspace must be less than 5, can you show me why the p''(6) = 0 property of the subspace makes this true? By equation 2.38 a subspace U of a vector space V has dimension s.t. dim(U) <= dim(V).

          2. @sam: see exercise 1, where we prove that if U is a subspace of V and dim U = dim V, then U = V. Since in this case we know U is not V its dimension must be less than 5.

          3. Hey Lucida. Your basis for #6 is indubitably invalid.
            I made the same mistake at first too, but $p(2) \neq p(5)$ for $x(x-7/2)^2$. $(-1)^x (x-7/2)^3$ works, but it obviously doesn't fit the standard polynomial definition. Intuitively there must be a third-degree polynomial for which $p(2) = p(5)$, but I don't know how to find it now.
            The same issue can be found with your next polynomial in the basis list beginning with $x^2$. A valid fourth-degree polynomial with $p(2)=p(5)$ would be $(x-7/2)^4$.
            I'm unfortunately not sure how to find the third-degree polynomial for this basis.

      2. I think an easier solution to problem #4 would be as follows:

        To prove linear independence, we can write each of the polynomials (t-6),(t^2-6t),(t^3-6t^2),(t^4-6t^3) with respect to the standard basis of P4 (i.e. as column vectors in R5). This will give you a 5-by-4 matrix, and then Gaussian elimination will ascertain linear independence.

        Because the length of any linearly independent list is at most the length of any spanning list, we have dim(U) is at least 4. And because U is a subspace of P4(F), we have dim(U) is at most 5. Thus 4<=dim(U)<=5. Clearly dim(U) cannot equal 5 since U is a subspace of P4(F); that is, if dim(U)=dim(P4(F))=5, then U=P4(F). However, U does not equal P4(F). Thus dim(U)=4, which means that we have a linearly independent list in U with length dim(U) and hence this list must be a basis of U. Therefore, the list (t-6),(t^2-6t),(t^3-6t^2),(t^4-6t^3) is a basis of U.

  7. #10 has a much simpler solution.

    Suppose p0,p1,...,pm is linearly dependent. Then by the linear dependence lemma, some pj = c0p0 + ... + c_{j-1}p_{j-1}. But the left side has degree j and the right side has degree at most j-1, a contradiction. Therefore p0,p1,...,pm is linearly independent, and it has the right length to be a basis for Pm(F), therefore it is a basis for Pm(F).

    1. Yes, nice solution.

  8. Is there an intuitive reason to think 17 won't work?

  9. Hi,

    In exercise #4 shouldn't we choose the vectors of the basis in P4(F) since U is defined by U = {p belongs to P4(F) : p(6) = 0} ? The basis given in the solution has vectors not in P4(F), so it can not be a basis of U. Or am I missing something ?

    A basis would be : x^3-(x^4)/6, x^2-(x^4)/(6^2), x-(x^4)/(6^3) and 1-(x^4)/(6^4). All those vectors belong to P4(F) and are 0 for x=6, then they belong to U.

    Hope someone responds. I'm trying to learn linear algebra and need to understand whether I'm making a mistake!


    1. Vectors in a basis do not have to be of degree 4.

  10. #8 should be a/5 + c/3 + e = 0 right?

    1. Yes, thanks!

  11. I think there is an easier solution for problem 10 that uses previous results of the chapter. Please correct me if I am wrong.

    We first show that the list p_0, p_1, ..., p_m is linearly independent. Indeed if there exist scalars a0, a1,...,am in F such that a0*p_0 + a1*p_1 + .... + am*p_m = 0, then since the degree of p_j is j for all j, equating degrees of the LHS and RHS yields that all ai must be 0.

    Now since (1, z, z^2, ...., z^m) is trivially a basis for Pm(F), dim(Pm(F)) = m +1. Then since the length of p_0,...,p_m is also m + 1, it must be a basis for Pm(F), by 2.39.

  12. It seems to me that problem 16 has an easier solution.

    U_1\oplus \cdots \oplus U_m is finite dimensional by 14.
    Let u_1 be a basis of U_1,..., u_m be a basis of U_m.
    So if some u \in U_1\oplus \cdots \oplus U_m, then u=(some linear comb. of u_1)+\cdots + (some linear comb. of u_m), so u_1,\cdots,u_m spans U_1\oplus \cdots \oplus U_m. Since we are dealing with a direct sum, u_1,\cdots,u_m is also linearly independent, so it is a basis of U_1\oplus \cdots \oplus U_m.
    Clearly, the dimension is the number of elements in a basis, so \dim U_1\oplus \cdots \oplus U_m= \dim U_1+\cdots+\dim U_m.

  13. #6/7/8 Those constructions are very elegant.

  14. #17
    I don't quite get the idea of how we get the counterexample to prove it is wrong; can anyone give me advice? Any information is appreciated.

  15. can someone explain those "why?"s in question 5,6,7,8 or at least direct me to some resources pls?

  16. Regarding exercise 9. I have a feeling that whenever dim = m-1, w = -v_i for some i = 1, ..., m. However, I fail to prove it. Any advice is appreciated!

    1. This is wrong. If the dimension is $m-1$, then $v_1+w$, $\cdots$, $v_m+w$ is linearly dependent. Hence there exist numbers $a_1,\dots,a_m$, not all zero, such that $$a_1(v_1+w)+\cdots+a_m(v_m+w)=0.$$Hence $$a_1v_1+\cdots+a_mv_m+(a_1+\cdots+a_m)w=0.$$Clearly, we must have $a_1+\cdots+a_m\ne 0$; otherwise, by the linear independence of $v_1,\dots,v_m$, we would have $a_1=\cdots=a_m=0$, a contradiction. Therefore, we have$$w=-\frac{a_1v_1+\cdots+a_mv_m}{a_1+\cdots+a_m}.$$In other words, the dimension is equal to $m-1$ if and only if there exist $k_1,\dots,k_m$ with $k_1+\cdots+k_m=-1$ such that $$w=k_1v_1+\cdots+k_mv_m.$$

      1. Yes but the only if direction doesn't hold true. If $$w \in \mathrm{span}(v_1+w,...,v_m+w)$$ then for any $v \in \mathrm{span}(v_1+w,...,v_m+w)$ we have that $$v=a_1(v_1+w)+...+a_m(v_m+w)$$ which gives $$v=a_1v_1+...+a_mv_m+(a_1+...+a_m)w$$ $$=a_1v_1+...+a_mv_m+(a_1+...+a_m)(b_1v_1+...+b_mv_m)$$ $$ \in \mathrm{span}(v_1,...,v_m)$$ Since $v_1,...,v_m$ is linearly independent and spans $V$ we thus get that
        $\dim{\mathrm{span}(v_1+w,...,v_m+w)} = m$ not $m-1$. So actually the span can never have dimension $m-1$.

        1. Unfortunately, your argument is wrong. See the following counterexample. Take $m=2$ and let $w=-(v_1+v_2)/2$, then
          $$\mathrm{span}(v_1+w,v_2+w)=\mathrm{span}(v_1-v_2),$$which is one-dimensional.

        2. It seems that "w∈span(v1+w,…,vm+w)" in your argument doesn't hold in the first place. We'd only know that w∈V, but V is neither equal to span(v1+w,…,vm+w) nor span(v1,…,vm)

    1. The proof is correct. I showed this span contains a linearly independent list of length $m-1$. By 2.33, every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space. Therefore, a basis has length no shorter than $m-1$.

      1. Oh yes, I get it now. Thanks for clarification.

  17. #14
    Hi. Your solution of exercise #14 uses the dimension of V, but the exercise does not mention anything about dim V.

    1. Yes, you are correct. Thank you.

  18. You don't need to use induction for 16; more insight ensues if you use another method

  19. #10
    No two polynomials are of the same degree => linearly independent.
    There are m+1 of those => the length is the same as the dimension of P_m.
    Linearly independent + right length => basis.
    An induction proof is also simple.
    As for #17, it's only true iff the following holds:
    (U1 + U2) ∩ U3 = (U1 ∩ U3) + (U2 ∩ U3)
    which is true iff U1 ⊆ U3 and U2 ⊆ U3

    1. It is also true if $U_1 = \{ 0 \}$ or $U_2 = \{ 0 \}$.

  20. The solution to 15 seems incorrect. We are supposed to prove that ONE-DIMENSIONAL subspaces exist, and you don't seem to mention the dimension. I think you are supposed to apply 2.34 recursively: start with some one-dimensional subspace of V called U1. We know from 2.34 that there is some corresponding subspace to U1, let's call it W1, such that U1 ++ W1 = V (where ++ is direct sum).

    Then, we replace V with W1 and do the same thing. Pick a one-dimensional subspace of W1 called U2 and apply 2.34 to get some corresponding subspace of W1 and call it W2. U2 ++ W2 = W1. Now we have U1 ++ U2 ++ W2 = V.

    Repeat this process until you have Un-1 ++ Wn-1. Wn-1 will be our Un.

    1. It is correct since U_j is obviously one-dimensional.

  21. The solution to 16 is incomplete because the text is wrong. It's an equality, not an inequality.
