If you find any mistakes, please make a comment! Thank you.

Chapter 6 Exercise B


1. Solution:

(a) One can easily check that each of the four vectors has norm $\sqrt{\sin^2 \theta + \cos^2 \theta}$, which equals $1$. Moreover, we have
$$ \begin{aligned} \langle (\cos\theta, \sin\theta), (-\sin\theta, \cos\theta) \rangle &= -\cos\theta \sin\theta + \sin\theta \cos\theta = 0\\ \langle (\cos\theta, \sin\theta), (\sin\theta, -\cos\theta) \rangle &= \cos\theta \sin\theta - \sin\theta \cos\theta = 0, \end{aligned} $$ which shows that they are orthogonal.

(b) Clearly, for any $v$ and $u$ in $\mathbb{R}^2$ with $||v|| = ||u|| = 1$, we can write $v = (\cos \theta, \sin \theta)$ and $u = (\cos \alpha, \sin \alpha)$ for some angles $\theta$ and $\alpha$. If $v, u$ is an orthonormal basis, then we must have
$$ 0 = \langle v, u \rangle = \langle (\cos \theta, \sin \theta), (\cos \alpha, \sin \alpha) \rangle = \cos\theta \cos\alpha + \sin\theta \sin\alpha = \cos(\theta - \alpha). $$ One solution is to choose $\theta$ and $\alpha$ such that $\alpha = \theta + \frac{\pi}{2}$. Then
$$ \begin{aligned} (\cos \alpha, \sin \alpha) &= (\cos(\theta + \tfrac{\pi}{2}), \sin(\theta + \tfrac{\pi}{2}))\\ &= (\cos\theta \cos\tfrac{\pi}{2} - \sin\theta\sin\tfrac{\pi}{2}, \sin\theta \cos\tfrac{\pi}{2} + \sin\tfrac{\pi}{2} \cos\theta)\\ &= (-\sin\theta, \cos\theta), \end{aligned} $$ which shows that $v, u$ is of the first form given in part (a). The other solution of $\cos(\theta - \alpha) = 0$, namely $\alpha = \theta - \frac{\pi}{2}$, similarly yields $(\cos \alpha, \sin \alpha) = (\sin\theta, -\cos\theta)$, the second form.
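
For a quick sanity check of part (a), here is a minimal symbolic sketch (sympy assumed available) verifying that the first pair is orthonormal for every $\theta$:

```python
import sympy as sp

theta = sp.symbols('theta', real=True)
v = sp.Matrix([sp.cos(theta), sp.sin(theta)])
u = sp.Matrix([-sp.sin(theta), sp.cos(theta)])

print(sp.simplify(v.dot(v)), sp.simplify(u.dot(u)))  # 1 1  (squared norms)
print(sp.simplify(v.dot(u)))                         # 0    (orthogonal)
```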


2. Solution: If $v\in \operatorname{span}(e_1,\cdots,e_m)$, then $e_1$, $\cdots$, $e_m$ is an orthonormal basis of $\operatorname{span}(e_1,\cdots,e_m)$ by 6.26. By 6.30, it follows that\[\|v\|^2=|\langle v,e_1\rangle|^2+\cdots+|\langle v,e_m\rangle|^2.\] Conversely, if $\|v\|^2=|\langle v,e_1\rangle|^2+\cdots+|\langle v,e_m\rangle|^2$, we denote \[ \xi=v-(\langle v,e_1\rangle e_1 +\cdots+\langle v,e_m\rangle e_m). \]It is easily seen that \[ \langle \xi,e_i\rangle=\langle v,e_i\rangle-\langle v,e_i\rangle=0 \]for $i=1,\cdots,m$. This implies\[\langle \xi,\langle v,e_1\rangle e_1 +\cdots+\langle v,e_m\rangle e_m\rangle=0.\]By 6.13, we have \begin{align*} \|v\|^2=&\|\xi\|^2+\|\langle v,e_1\rangle e_1 +\cdots+\langle v,e_m\rangle e_m\|^2\\ =&\|\xi\|^2+|\langle v,e_1\rangle|^2+\cdots+|\langle v,e_m\rangle|^2.\end{align*} It follows that $\|\xi\|^2=0$, hence $\xi=0$. Thus $v=\langle v,e_1\rangle e_1 +\cdots+\langle v,e_m\rangle e_m$, namely $v\in \operatorname{span}(e_1,\cdots,e_m)$.
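
A small numeric illustration of this dichotomy (a minimal sketch with numpy; the vectors below are hypothetical examples): equality holds for a vector in the span, while Bessel's inequality $\sum_i |\langle v,e_i\rangle|^2 \leqslant \|v\|^2$ is strict otherwise.

```python
import numpy as np

# Two orthonormal vectors in R^3 (they span the xy-plane).
e = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]

def coeff_norm_sq(v):
    """Sum of squared coefficients |<v, e_i>|^2."""
    return sum(np.dot(v, ei) ** 2 for ei in e)

v_in = np.array([3.0, -4.0, 0.0])   # lies in span(e_1, e_2)
v_out = np.array([3.0, -4.0, 2.0])  # does not

print(np.dot(v_in, v_in), coeff_norm_sq(v_in))    # 25.0 25.0  (equality)
print(np.dot(v_out, v_out), coeff_norm_sq(v_out)) # 29.0 25.0  (strict inequality)
```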


3. Solution: Applying the Gram-Schmidt Procedure to the given basis $(1, 0, 0), (1, 1, 1), (1, 1, 2)$, we get the following orthonormal basis
$$ (1, 0, 0),\:\frac{1}{\sqrt{2}}(0, 1, 1),\:\frac{1}{\sqrt{2}}(0, -1, 1). $$ As in the proof of 6.37, we see that the matrix of $T$ with respect to this basis is upper triangular.
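
A minimal numeric sketch of the procedure (numpy assumed), applied to the basis from the exercise, reproduces this orthonormal basis:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)  # subtract projections onto earlier e's
        basis.append(w / np.linalg.norm(w))           # normalize
    return basis

vs = [np.array(v, dtype=float) for v in [(1, 0, 0), (1, 1, 1), (1, 1, 2)]]
for e in gram_schmidt(vs):
    print(e)  # (1,0,0), (0,1,1)/sqrt(2), (0,-1,1)/sqrt(2)
```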


4. Solution: See Linear Algebra Done Right Solution Manual Chapter 6 Problem 9.


5. Solution: Applying the Gram-Schmidt Procedure to the basis $1, x, x^2$, we get the following orthonormal basis
$$ 1, \: 2\sqrt{3}\left(x - \frac{1}{2}\right), \: 6\sqrt{5}\left(x^2 - x + \frac{1}{6}\right). $$
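
The computation can be replayed symbolically; here is a minimal sympy sketch of the Gram-Schmidt Procedure under this exercise's inner product $\langle p, q\rangle = \int_0^1 p(x)q(x)\,dx$:

```python
import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))      # inner product on P_2(R)

basis = []
for v in [sp.Integer(1), x, x**2]:                    # standard basis 1, x, x^2
    w = v - sum(ip(v, e) * e for e in basis)          # subtract projections
    basis.append(sp.expand(w / sp.sqrt(ip(w, w))))    # normalize

print(basis)  # [1, 2*sqrt(3)*x - sqrt(3), 6*sqrt(5)*x**2 - 6*sqrt(5)*x + sqrt(5)]
```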


6. Solution: Let $D$ denote the differentiation operator. Note that the matrix of $D$ with respect to the standard basis of $\mathcal{P}_2(\mathbb{R})$ is already upper triangular. Therefore, by the same reasoning used in the proof of 6.37, $\mathcal{M}(D)$ is upper triangular with respect to the basis found in Exercise 5.
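
This can be checked concretely: with respect to an orthonormal basis, the $(i,j)$ entry of $\mathcal{M}(D)$ is $\langle De_j, e_i\rangle$. A minimal sympy sketch using the Exercise 5 basis:

```python
import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))
e = [sp.Integer(1),
     2*sp.sqrt(3)*(x - sp.Rational(1, 2)),
     6*sp.sqrt(5)*(x**2 - x + sp.Rational(1, 6))]

# Entry (i, j) of M(D) in an orthonormal basis is <D e_j, e_i>.
M = sp.Matrix(3, 3, lambda i, j: sp.simplify(ip(sp.diff(e[j], x), e[i])))
print(M)  # Matrix([[0, 2*sqrt(3), 0], [0, 0, 2*sqrt(15)], [0, 0, 0]]) -- upper triangular
```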


7. Solution: Defining $\varphi(p) = p(\frac{1}{2})$ and $\langle p, q \rangle = \int_{0}^{1} p(x)q(x)\ dx$, and using the formula from 6.43 together with the basis found in Exercise 5, we find that
$$ q(x) = -15x^2 + 15x - \frac{3}{2}. $$
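
For readers who want to replay the computation, here is a minimal sympy sketch; over $\mathbb{R}$, 6.43 gives $q = \sum_j \varphi(e_j)\, e_j$ with $e_j$ the basis from Exercise 5.

```python
import sympy as sp

x = sp.symbols('x')
ip = lambda p, q: sp.integrate(p * q, (x, 0, 1))
e = [sp.Integer(1),
     2*sp.sqrt(3)*(x - sp.Rational(1, 2)),
     6*sp.sqrt(5)*(x**2 - x + sp.Rational(1, 6))]

phi = lambda p: p.subs(x, sp.Rational(1, 2))   # the functional p -> p(1/2)
q = sp.expand(sum(phi(ej) * ej for ej in e))   # 6.43 (real case): q = sum phi(e_j) e_j
print(q)                                       # -15*x**2 + 15*x - 3/2

# Sanity check: <p, q> = p(1/2) for a general quadratic p.
a, b, c = sp.symbols('a b c')
p = a*x**2 + b*x + c
print(sp.simplify(ip(p, q) - phi(p)))          # 0
```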


8. Solution: Using the orthonormal basis found in Exercise 5 and the formula in 6.43, we get $q(x) = \dfrac{-24}{\pi^2}\left(x - \dfrac{1}{2}\right)$.
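
The analogous sympy sketch for this exercise's functional $\varphi(p) = \int_0^1 p(x)\cos(\pi x)\,dx$ (self-contained, reusing the Exercise 5 basis):

```python
import sympy as sp

x = sp.symbols('x')
# Orthonormal basis of P_2(R) from Exercise 5.
e = [sp.Integer(1),
     2*sp.sqrt(3)*(x - sp.Rational(1, 2)),
     6*sp.sqrt(5)*(x**2 - x + sp.Rational(1, 6))]

phi = lambda p: sp.integrate(p * sp.cos(sp.pi * x), (x, 0, 1))
q = sp.expand(sum(phi(ej) * ej for ej in e))
print(q)  # -24*x/pi**2 + 12/pi**2, i.e. (-24/pi**2)*(x - 1/2)
```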


9. Solution: Suppose $v_1, \dots, v_m$ is a linearly dependent list in $V$. Let $k$ be the smallest integer such that $v_k \in \operatorname{span}(v_1, \dots, v_{k-1})$. Then $v_1, \dots, v_{k-1}$ is linearly independent and we can apply the Gram-Schmidt Procedure to produce an orthonormal list $e_1, \dots, e_{k-1}$ with the same span. Therefore $v_k \in \operatorname{span}(e_1, \dots, e_{k-1})$ and, by 6.30,
$$ v_k = \langle v_k, e_1 \rangle e_1 + \dots + \langle v_k, e_{k-1} \rangle e_{k-1}. $$ But the right-hand side is exactly what we subtract from $v_k$ when computing $e_k$, hence the Gram-Schmidt Procedure cannot continue because we cannot divide by $0$. If, however, we discard $v_k$ (and every other vector for which the same thing happens), we end up producing an orthonormal list whose span equals $\operatorname{span}(v_1, \dots, v_m)$.
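
In code, the fix amounts to skipping any vector whose residual is (numerically) zero; a minimal numpy sketch with a hypothetical dependent list:

```python
import numpy as np

def gram_schmidt_discarding(vectors, tol=1e-12):
    """Gram-Schmidt that skips vectors in the span of their predecessors
    (whose residual would be 0, making the normalization divide by zero)."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)
        if np.linalg.norm(w) > tol:        # w == 0 exactly when v is dependent
            basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array(v, dtype=float) for v in [(1, 0, 0), (1, 1, 0), (2, 1, 0), (0, 0, 3)]]
# (2,1,0) = (1,0,0) + (1,1,0) is discarded; we still get an orthonormal
# basis of span(v_1, ..., v_4), here all of R^3.
for e in gram_schmidt_discarding(vs):
    print(e)
```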


10. Solution: Apply the Gram-Schmidt Procedure once to $v_1, \dots, v_m$ to get an orthonormal list $e_1, \dots, e_m$. Note that if we change any $e_i$ to $-e_i$ without relabeling the vectors, $e_1, \dots, e_m$ is still orthonormal and we still have $\operatorname{span}(v_1, \dots, v_j) = \operatorname{span}(e_1, \dots, e_j)$ for all $j \in \{1, \dots, m\}$. Because we have $m$ vectors and for each vector we are free to choose a $\pm$ sign, this produces $2^m$ such lists. Conversely, suppose $s_1, \dots, s_m$ is any orthonormal list satisfying the span condition. Induction on $j$ shows $s_j = \pm e_j$: indeed $s_j \in \operatorname{span}(e_1, \dots, e_j)$ and $s_j$ is orthogonal to $\operatorname{span}(s_1, \dots, s_{j-1}) = \operatorname{span}(e_1, \dots, e_{j-1})$, so $s_j = c\,e_j$ for some scalar $c$ with $|c| = \|s_j\| = 1$; since $V$ is a real inner product space, $c = \pm 1$. Hence there are exactly $2^m$ such lists.
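
A small enumeration makes the count concrete (a numpy sketch with a hypothetical $m = 2$ example):

```python
import itertools
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array(v, dtype=float) for v in [(1, 0), (1, 1)]]  # m = 2
e = gram_schmidt(vs)

# The 2^m sign patterns give the 2^m orthonormal lists; each keeps
# span(v_1, ..., v_j) = span(e_1, ..., e_j) for every j.
for signs in itertools.product([1, -1], repeat=len(e)):
    print([s * ej for s, ej in zip(signs, e)])
```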


11. Solution: Let $w \in V$. Define $\varphi(v) = \langle v, w \rangle_1$ and $\psi(v) = \langle v, w \rangle_2$. Since $\varphi(v) = 0$ if and only if $\psi(v) = 0$, it follows that $\operatorname{null} \varphi =\operatorname{null} \psi$. By Theorem 1 in Chapter 3 notes we have
$$ \operatorname{span}(\varphi) = (\operatorname{null} \varphi)^0 = (\operatorname{null} \psi)^0 = \operatorname{span}(\psi). $$ Thus $\varphi = c\psi$ for some $c \in \mathbb{F}$. Hence, for each fixed $w$ we have $\langle v, w \rangle_1 = c\langle v, w \rangle_2$ for every $v \in V$. Choosing $v = w$ now implies that $c$ is real and positive (provided $w \ne 0$; for $w = 0$ any $c$ works). Fix nonzero $w_1, w_2 \in V$ and let $c_1, c_2 \in \mathbb{F}$ be such that
$$ \begin{aligned} \langle v, w_1 \rangle_1 &= c_1 \langle v, w_1 \rangle_2\\ \langle v, w_2 \rangle_1 &= c_2 \langle v, w_2 \rangle_2.\\ \end{aligned} $$ Plugging $v = w_2$ into the first equation and $v = w_1$ into the second yields
$$ \begin{aligned} \langle w_2, w_1 \rangle_1 &= c_1 \langle w_2, w_1 \rangle_2\\ \langle w_1, w_2 \rangle_1 &= c_2 \langle w_1, w_2 \rangle_2.\\ \end{aligned} $$ Then
$$ c_1 \langle w_2, w_1 \rangle_2 = \langle w_2, w_1 \rangle_1 = \overline{\langle w_1, w_2 \rangle_1} = \overline{c_2 \langle w_1, w_2 \rangle_2} = \bar{c_2} \langle w_2, w_1 \rangle_2. $$ If $\langle w_2, w_1 \rangle_2 \ne 0$, this gives $c_1 = \bar{c_2}$; because both are real, it follows that $c_1 = c_2$. If instead $\langle w_2, w_1 \rangle_2 = 0$, apply the same argument to the pairs $(w_1, w_1 + w_2)$ and $(w_2, w_1 + w_2)$: since $\langle w_1 + w_2, w_1 \rangle_2 = \|w_1\|_2^2 \ne 0$ and $\langle w_1 + w_2, w_2 \rangle_2 = \|w_2\|_2^2 \ne 0$, both $c_1$ and $c_2$ equal the constant associated with $w_1 + w_2$, and hence each other. Therefore, the constant is the same for all $v, w \in V$.


13. Solution: We argue by induction on the length $n$ of the list. The case $n=1$ is clear: take $w=v_1$. For the inductive step, suppose $v_1,\dots,v_n$ is linearly independent. Applying the Gram-Schmidt Procedure (6.31), we get an orthonormal list $e_1,\dots,e_n$ with $\operatorname{span}(e_1,\dots,e_j)=\operatorname{span}(v_1,\dots,v_j)$ for each $j$. By the induction hypothesis, there exists $w'$ such that $\langle w',v_i\rangle >0$ for $i=1,\dots,n-1$. Let $w=w'+k e_n$, where $k\in\mathbb F$ is to be chosen. Since $v_i\in\operatorname{span}(e_1,\dots,e_{n-1})$ for $i\leqslant n-1$, we have $\langle e_n,v_i\rangle =0$ for $i=1,\dots,n-1$, hence $$\langle w,v_i\rangle =\langle w',v_i\rangle >0$$for $i=1,\dots,n-1$. It remains to choose $k$ so that $\langle w,v_n\rangle >0$, and for this it suffices to show that $\langle e_n,v_n\rangle \ne 0$, since then $\langle k e_n,v_n\rangle=k\langle e_n,v_n\rangle$ takes every value in $\mathbb F$ as $k$ varies. Suppose, for contradiction, that $\langle e_n,v_n\rangle = 0$. Combined with $\langle e_n,v_i\rangle =0$ for $i=1,\dots,n-1$, this gives $\langle e_n,v\rangle=0$ for all $v\in \mathrm{span}(v_1,\dots,v_n)$. In particular, since $e_n\in\mathrm{span}(v_1,\dots,v_n)$, we get $\langle e_n,e_n\rangle =0$, which is a contradiction. Hence we are done.
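
The existence claim can also be checked constructively: writing $w=\sum_i a_iv_i$ and demanding $\langle w,v_j\rangle=1$ for every $j$ gives a linear system whose matrix is the Gram matrix of $v_1,\dots,v_n$, invertible because the list is independent. A numeric sketch of this alternative route (random vectors as a hypothetical example):

```python
import numpy as np

rng = np.random.default_rng(0)
vs = rng.standard_normal((4, 6))    # rows: 4 (almost surely independent) vectors in R^6

G = vs @ vs.T                       # Gram matrix, G[j, i] = <v_i, v_j>
a = np.linalg.solve(G, np.ones(4))  # solve for <w, v_j> = 1 for every j
w = a @ vs                          # w = sum_i a_i v_i

print(vs @ w)                       # [1. 1. 1. 1.] -- all positive, as claimed
```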


14. Solution: Since $e_1,\cdots,e_n$ is an orthonormal basis of $V$, we have $\dim\,V=n$. To show that $v_1,\cdots,v_n$ is a basis of $V$, it suffices to show that $v_1,\cdots,v_n$ is linearly independent. We prove it by contradiction.
Suppose $v_1,\cdots,v_n$ is linearly dependent, then there exist $a_1,\cdots,a_n\in\mathbb F$ such that $a_k\ne 0$ for some $k\in\{1,\cdots,n\}$ and \[\sum_{i=1}^na_iv_i=0.\]On one hand, by 6.25, we have $$\Big\|\sum_{i=1}^na_i(e_i-v_i)\Big\|^2=\Big\|\sum_{i=1}^na_ie_i\Big\|^2=\sum_{i=1}^n|a_i|^2.$$On the other hand, we also have\begin{align*}\Big\|\sum_{i=1}^na_i(e_i-v_i)\Big\|^2=&\,\Big\langle \sum_{i=1}^n a_i(e_i-v_i),\sum_{j=1}^n a_j(e_j-v_j)\Big\rangle\\ = &\,\sum_{i=1}^n\sum_{j=1}^n\Big\langle a_i(e_i-v_i),a_j(e_j-v_j)\Big\rangle\\ \leqslant &\, \left|\sum_{i=1}^n\sum_{j=1}^n\Big\langle a_i(e_i-v_i),a_j(e_j-v_j)\Big\rangle\right|\\ \leqslant &\, \sum_{i=1}^n\sum_{j=1}^n\left|\Big\langle a_i(e_i-v_i),a_j(e_j-v_j)\Big\rangle\right| \\ \text{by 6.15}\quad \leqslant &\,\sum_{i=1}^n\sum_{j=1}^n\|a_i(e_i-v_i)\|\|a_j(e_j-v_j)\|\\ = &\,\sum_{i=1}^n\sum_{j=1}^n|a_i||a_j|\|e_i-v_i\|\|e_j-v_j\|\\ \text{by assumption and }a_k\ne 0 \quad <&\,\sum_{i=1}^n\sum_{j=1}^n\frac{1}{n}|a_i||a_j|=\frac{1}{n}\Big(\sum_{i=1}^n|a_i|\Big)^2\\ \text{by Problem 6.A.12}\quad\leqslant &\sum_{i=1}^n|a_i|^2.\end{align*}Hence we get $$\sum_{i=1}^n|a_i|^2<\,\sum_{i=1}^n|a_i|^2,$$which is impossible, hence completing the proof.
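
A quick numeric experiment matching the statement (numpy, random data as a hypothetical example): perturb an orthonormal basis of $\mathbb{R}^n$ by strictly less than $1/\sqrt{n}$ per vector and confirm that the perturbed list is still a basis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
e, _ = np.linalg.qr(rng.standard_normal((n, n)))  # columns e[:, i]: orthonormal basis

pert = rng.standard_normal((n, n))
pert *= 0.9 / (np.sqrt(n) * np.linalg.norm(pert, axis=0))  # column norms 0.9/sqrt(n)
v = e - pert                       # so ||e_i - v_i|| = 0.9/sqrt(n) < 1/sqrt(n)

print(np.linalg.matrix_rank(v))    # n -- the v_i still form a basis
```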


15. Solution: Suppose there exists $g$ such that $\varphi(f)=\langle f,g\rangle$ for all $f\in C_{\mathbb{R}}[-1,1]$. We would like to derive a contradiction.
For any positive integer $n$ and integer $-n\leqslant i\leqslant n-1$, define\[f_{n,i}(x)=\begin{cases}4n^2(x-i/n),\quad &\text{if }x\in [i/n,i/n+1/(2n)]\\ 4n^2((i+1)/n-x),\quad &\text{if }x\in [i/n+1/(2n),(i+1)/n]\\ 0,\quad &\text{otherwise},\end{cases}\]then $f_{n,i}\in C_{\mathbb{R}}[-1,1]$ and $f_{n,i}(0)=0$.
Given any $\epsilon>0$, since $g\in C_{\mathbb{R}}[-1,1]$ and a continuous function on a closed interval is uniformly continuous, there exists $N$ such that for any $n\geqslant N$ we have \begin{equation}\label{6B151}|g(x)-g(y)|\leqslant \epsilon\end{equation} whenever $|x-y|\leqslant 1/n$.
Note that \begin{equation}\label{6B152}\int_{-1}^{1}f_{n,i}(x)dx=\int_{i/n}^{(i+1)/n}f_{n,i}(x)dx=1.\end{equation} For any $y\in [i/n,(i+1)/n]$ we have \begin{align*}&\left|g\left(y\right)-\int_{-1}^1 f_{n,i}(x)g(x)dx\right|\\=& \left|\int_{i/n}^{(i+1)/n}f_{n,i}(x)\left(g\left(y\right)-g(x)\right)dx\right|\\ \leqslant& \int_{i/n}^{(i+1)/n}f_{n,i}(x)\left|g\left(y\right)-g(x)\right|dx \\ \text{by \eqref{6B151} and \eqref{6B152}}\quad \leqslant& \int_{i/n}^{(i+1)/n}f_{n,i}(x)\epsilon\, dx=\epsilon.\end{align*} On the other hand, we also have$$0=f_{n,i}(0)=\varphi(f_{n,i})=\langle f_{n,i},g\rangle=\int_{-1}^1 f_{n,i}(x)g(x)dx.$$ Hence $$|g(y)|=\left|g(y)-\int_{-1}^1 f_{n,i}(x)g(x)dx\right|\leqslant \epsilon$$ for any $y\in [i/n,(i+1)/n]$. Taking all $-n\leqslant i\leqslant n-1$ with $n\geqslant N$ shows $|g(y)|\leqslant \epsilon$ for every $y\in[-1,1]$.
Since $\epsilon$ was chosen arbitrarily, $g\equiv 0$. But then $\varphi(f)=\langle f,g\rangle=0$ for all $f\in C_{\mathbb{R}}[-1,1]$, which is impossible since $\varphi(f)=f(0)$ is not identically zero. Therefore no such $g$ exists, completing the proof.
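
Numerically, the $f_{n,i}$ behave like a "good kernel": each has integral $1$ while concentrating on an interval of length $1/n$, so $\int f_{n,i}\,g$ approaches a value of $g$ inside that interval. A sketch (assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad

def tent(n, i):
    """f_{n,i}: the tent of height 2n supported on [i/n, (i+1)/n], with integral 1."""
    l, m, r = i / n, i / n + 1 / (2 * n), (i + 1) / n
    def f(x):
        if l <= x <= m:
            return 4 * n**2 * (x - l)
        if m < x <= r:
            return 4 * n**2 * (r - x)
        return 0.0
    return f

g = np.cos  # any continuous function on [-1, 1]

for n in (5, 50, 500):
    f = tent(n, 0)
    mass, _ = quad(f, 0, 1 / n, points=[1 / (2 * n)])
    avg, _ = quad(lambda x: f(x) * g(x), 0, 1 / n, points=[1 / (2 * n)])
    print(n, round(mass, 6), round(avg, 6))  # mass stays 1, avg -> g(0) = 1
```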


17. Solution:

(a) For additivity, suppose $u_1, u_2 \in V$. Then, for $v \in V$, we have
$$ (\Phi(u_1 + u_2))(v) = \langle v, u_1 + u_2 \rangle = \langle v, u_1 \rangle + \langle v, u_2 \rangle = (\Phi u_1)(v) + (\Phi u_2)(v). $$ For homogeneity, suppose $u \in V$ and $c \in \mathbb{R}$. Then, for $v \in V$, we have
$$ (\Phi(cu))(v) = \langle v, cu \rangle = c\langle v, u \rangle = c(\Phi u)(v). $$

(b) If $\mathbb{F} = \mathbb{C}$, then the homogeneity property of linear maps is not satisfied, because we would have $(\Phi(cu))(v) = \bar{c}(\Phi u)(v)$, and $c = \bar{c}$ if and only if $c$ is a real number.

(c) This is the same as the second part of the proof of 6.42. Suppose there are $u_1$ and $u_2$ in $V$ such that $\Phi u_1 = \Phi u_2$. Then
$$ 0 = (\Phi u_1 - \Phi u_2)(v) = (\Phi(u_1 - u_2))(v) = \langle v, u_1 - u_2 \rangle $$ for all $v \in V$. Choosing $v = u_1 - u_2$ shows that $u_1 - u_2 = 0$ and thus $u_1 = u_2$.

(d) From (c), we get that $\dim \operatorname{null} \Phi = 0$. Thus, from 3.22, we have $$ \operatorname{dim} V = \operatorname{dim} \operatorname{null} \Phi + \operatorname{dim} \operatorname{range} \Phi = \operatorname{dim} \operatorname{range} \Phi. $$ Since $\operatorname{dim} V = \operatorname{dim} V'$, this shows that $\Phi$ is also surjective. Hence $\Phi$ is invertible, that is, an isomorphism from $V$ onto $V'$.



Comments

  1. I have a solution to 13 not using induction. (I use ' to denote complex conjugate.) Please correct me if something is wrong.
    Apply Gram-Schmidt to get e1,e2,...,em. Let w=a1e1+...+amem, and construct an operator on Fm by T(a1,...,am)=(⟨w,v1⟩,⟨w,v2⟩,...,⟨w,vm⟩). We can verify that it is a linear map.
    Then we prove it's injective. Say T(a1,...,am)=0. Then ⟨w,vi⟩=0 for i=1,2,...,m. Because w is in span(e1,...,em), w is also in span(v1,...,vm) by Gram-Schmidt. Say w=b1v1+...+bmvm. Then
    0=b1'⟨w,v1⟩+...+bm'⟨w,vm⟩=⟨w,b1v1+...+bmvm⟩=⟨w,w⟩, which leads to w=0. So ai=0 for all i=1,2,...,m. So it's injective. Then it's also surjective, and we're done.


  2. I would appreciate if you could give feedback on my solution to problem 15. Your proof is much more involved which makes me feel I might be missing something obvious here.

    Suppose such a g exists.

    Define f_d(x) as the triangular function centered on 0 with base width d (where we choose d < 2) and height 1 (so f(0) = 1).

    Then 1 = f(0)^2 = ɸ(f)^2 = ⟨f, g⟩^2 ≤ ||f||^2 ||g||^2 = (0.5 d) ||g||^2
    (The inequality follows from Cauchy-Schwarz and the last equality comes from the formula for the area of a triangle.)

    So ||g||^2 ≥ 2/d. Since we can choose d arbitrarily small, ||g||^2 can be made arbitrarily large, which contradicts the extreme value theorem.


    1. I just realised $\lVert f \rVert^2$ is not $\frac{d}2$ but $\frac{d}3$. This shouldn't change the main argument though. (It also seems I don't need to invoke the extreme value theorem since $\lVert g \rVert$ needs to be finite anyway if the given inner product is well defined.)

    2. I only partially understood your solution, and I don't care enough to try to understand it completely, but I solved this problem in a very similar way.
      Suppose such g exists.
      Let f be such that f(0) ≠ 0.
      Consider f' = f + δ_ε, where δ_ε is the triangular function with base width ε, centered at 0, with δ_ε(0) = -f(0).
      Then as ε→0, ||δ_ε||→0.
      Then we apply Cauchy-Schwarz and show that as ε→0, ⟨f', g⟩ = f'(0) remains close to ⟨f, g⟩ = f(0).
      But at the same time, we know that f'(0) = 0. Contradiction.

  3. In your solution of the exercise 11, we have
    span φ ⊆ 𝔽,
    (null φ)^0 ⊆ V',
    so I don't understand what you mean when you say they are equal

    1. Oops, nevermind, I confused span with range.

  4. Exercise 1 $\theta - \alpha = \pi/2$

    I think you meant, $\alpha = \pi/2 + \theta$.

  5. Number 10 requires you to prove there are exactly 2^m orthogonal lists. I think you've only shown that there are at least 2^m orthogonal lists.

    1. I've commented my solution in another comment below. The general idea: H(n) := (*) holds for j<=n, J(n) := (**) holds for j<=n, and P(n) := H(n) -> J(n). And we have shown that forall m, P(m), by first showing P(1) and then forall m, P(m) -> P(m+1). Hence, since in the problem H(m) is assumed, we have J(m), so e_j = +- s_j for any j; together with the answer above, we have exactly 2^m such bases.

  6. Also, for ex 11, one can use 3.D ex4; just change $W$ to $\mathbb{F}$.
    However, how to solve ex 13?

    1. It has been updated.

    2. 3.D ex4 talks only about finite dimensional vector spaces.

      1. Oops, nevermind, it only constrains W to be finite dimensional.

  7. I think 10 is wrong.
    First of all, there are n! such permutations. Second, if you swap, say, e1 and e2, then the claim is that e2 spans span(v1), which is clearly wrong since they're orthogonal by Gram-Schmidt.

    1. Yes, you are right. You can't switch the order, but you can switch the signs.

      1. Applying Gram to v1, ..., vm, we get e1, ..., em.

        For each ej for 1 <= j <= m, we have 2 choices, pick it or not, hence we can construct 2^m distinct orthonormal lists.

        To show that each vj, for 1 <= j <= m, is actually in the span of e1, ..., ej, we just move each e to the other side and we see that vj is a linear combination of e1, ..., ej.

        Am I correct?
        Could you explain why I need to switch the sign?

    2. My suggestion for the answer: let (*) be span(e_1,...,e_j) = span(v_1,...,v_j). One way to see how many possibilities there are for such orthonormal bases is to construct them inductively. As shown above, there are 2^n such orthonormal bases, so we only need to show that if (*) holds then we must have the vectors stated in the answer above, namely +- e_j. Assume we have e_1,...,e_m from the Gram-Schmidt Procedure, and try to construct s_1,...,s_m with property (*). We claim something stronger: if s_1,...,s_m have property (*), then we also have (**) span s_j = span e_j for every j. For the induction basis, j = 1, suppose we have found a normal vector s_1 fulfilling (*); then span s_1 = span v_1 = span e_1, so s_1 = k*e_1, and since both are normal we have |k| = 1, hence 2 possibilities for s_1 (+-). For the inductive step, assume (*) holds for j <= n+1 and (**) holds for j <= n; then, similarly to the basis case, span e_(n+1) = span s_(n+1), so again s_(n+1) = k * e_(n+1) with |k| = 1, completing the inductive construction. Together with the answer on this page, this shows that the total number of possibilities for s_1,...,s_n is exactly 2^n. Please tell me if I've got something wrong, thank you.

  8. Hi, can you provide a solution to question 15?

    I solved this question by using the fact that the Riesz Representation Theorem states that such q is unique for a subspace spanned by some orthonormal vectors. Since the vector space is infinite-dimensional, we can always add some vectors to the basis to produce a new q.

    So I tried to prove that we can always choose a basis vector en+1 such that en+1(0) does not equal zero (so that q has to change every time we add a vector to the basis; the new q has to be different from the old q). {The choice of basis doesn't affect the result of equation (6.43).}

    Let e1 be a constant term. We can generate a new vector using the Gram-Schmidt Procedure, and it is easy to prove that if we use f to generate en+1 and get en+1(0) = 0, using f+e1 would make en+1(0) nonzero. So there exists no such q that fits the requirement.

    I really don't think this is a good proof and I feel like it is flawed somewhere. Can you help me check it and provide a better proof?

    1. In my opinion, this is not a linear algebra problem, but a calculus one.

      I cannot understand your logic without detail. If you could provide a complete proof of yours (by taking a picture and uploading it here or sending it to me linearalgebras[at]gmail), then I might be able to check it.

      1. Never mind, I spotted one fatal error when I tried to type it down.
        For this question, is it impossible to solve it using mostly linear algebra? I found it extremely hard to understand the solution you provided. Should I learn some real analysis first?

        1. I guess so. The idea is that the integral can vary tremendously without changing the function at some given point. If you know something about "good kernel" in analysis, this is a common idea. See page 48 (Theorem 4.1) of the following book.

          Don't worry if you have trouble with this problem; it does not reflect your understanding of linear algebra.

          1. Thank you. By the way, do you have complete answers to this series of books? I planned to study some analysis after finishing "Linear Algebra Done Right", and this series seems to be a good choice (or maybe "baby Rudin" is a better choice?)

          2. Thanks. By the way, if I want to go into more depth in linear algebra, which book should I use? I felt some (or most) of the exercises in this book are extremely challenging, and they took me a lot of effort to work through, especially the parts related to inner products and orthogonality. I always think that part of the book needs some rework; the author hides everything important in the exercises. Exercise 6.B is my forever nightmare.

            This is a website I found that contains some good questions
            http://www.math.harvard.edu...

            Yet, even after I studied the related chapters, some of the questions are still undoable for me (e.g. 6.B question 13: they put it in the final, use conjugate transpose in the solution, totally unintelligible). Any book you recommend that would fill the holes this one leaves out?

          3. Linear Algebra Done Right treats everything without determinants. This is quite abnormal from a classical point of view, though it helps a lot for you to understand the linear structure behind it.

            For a classical one, I would recommend Linear Algebra (2nd Edition) by Kenneth M. Hoffman and Ray Kunze (at the same level as Linear Algebra Done Right).

            For example, 6.B question 13 is not hard if you know Cramer's Rule. This is usually treated with linear equations; Linear Algebra Done Right does not talk about this much.

            In my opinion, it might not be proper to use Linear Algebra Done Right as your first linear algebra textbook. A classical one would be better. The book you need is a good classical one.

          4. Thanks, and one more thing: what's your point of view on "Real Analysis" by N.L. Carothers? Is it at an introductory level, or in other words, is it friendly to beginners?

          5. It seems this is a book containing stuff from both Mathematical Analysis and Real Analysis. It is something like baby Rudin.

  9. In the solution to 6B14, the first inequality would be clearer if the absolute value signs were included.

    1. Thank you! Actually, it should be an equality.

      1. The inequality was technically true ;) But I do think it would be easier to understand if you leave the inequality but include the absolute value, or add another line with the absolute value around the inner product.

        1. Yes, you are right. Thanks.
