If you find any mistakes, please make a comment! Thank you.

Solution to Linear Algebra Hoffman & Kunze Chapter 3.5

Exercise 3.5.1

In $\mathbb R^3$ let $\alpha_1=(1,0,1)$, $\alpha_2=(0,1,-2)$, $\alpha_3=(-1,-1,0)$.

(a) If $f$ is a linear functional on $\mathbb R^3$ such that$$f(\alpha_1)=1,\quad f(\alpha_2)=-1,\quad f(\alpha_3)=3,$$and if $\alpha=(a,b,c)$, find $f(\alpha)$.
(b) Describe explicitly a linear functional $f$ on $\mathbb R^3$ such that$$f(\alpha_1)=f(\alpha_2)=0\quad\text{but $\quad f(\alpha_3)\not=0$}.$$
(c) Let $f$ be any linear functional such that $$f(\alpha_1)=f(\alpha_2)=0\quad\text{and $\quad f(\alpha_3)\not=0$}.$$If $\alpha=(2,3,-1)$, show that $f(\alpha)\not=0$.


(a) We need to write $(a,b,c)$ in terms of $\alpha_1,\alpha_2,\alpha_3$. We can do this by row reducing the following augmented matrix whose columns are the $\alpha_i$'s.
$$\left[\begin{array}{ccc|c}1&0&-1&a\\0&1&-1&b\\1&-2&0&c\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|c}1&0&-1&a\\0&1&-1&b\\0&-2&1&c-a\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|c}1&0&-1&a\\0&1&-1&b\\0&0&-1&c-a+2b\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|c}1&0&-1&a\\0&1&-1&b\\0&0&1&a-2b-c\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|c}1&0&0&2a-2b-c\\0&1&0&a-b-c\\0&0&1&a-2b-c\end{array}\right]$$Thus if $(a,b,c)=x_1\alpha_1+x_2\alpha_2+x_3\alpha_3$ then $x_1=2a-2b-c$, $x_2=a-b-c$ and $x_3=a-2b-c$. Now \begin{align*}f(a,b,c)&=f(x_1\alpha_1+x_2\alpha_2+x_3\alpha_3) \\&= x_1f(\alpha_1)+x_2f(\alpha_2)+x_3f(\alpha_3)\\&=(2a-2b-c)\cdot 1 + (a-b-c) \cdot (-1) + (a-2b-c) \cdot 3\\&=(2a-2b-c)-(a-b-c)+(3a-6b-3c)\\&=4a-7b-3c.\end{align*} In summary
$$f(\alpha)=4a-7b-3c.$$
(b) Let $f(x,y,z)=x-2y-z$. Then $f(1,0,1)=0$, $f(0,1,-2)=0$, and $f(-1,-1,0)=1$.

(c) Using part (a) we know that $\alpha=(2,3,-1)=-\alpha_1-3\alpha_3$ (plug $a=2$, $b=3$, $c=-1$ into the formulas for $x_1,x_2,x_3$). Thus $f(\alpha)=-f(\alpha_1)-3f(\alpha_3)=0- 3f(\alpha_3)$ and since $f(\alpha_3)\not=0$, $-3f(\alpha_3)\not=0$ and thus $f(\alpha)\not=0$.
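The coordinate formulas and the resulting expression for $f$ can be sanity-checked with a short Python snippet (not part of the original solution, pure standard library):

```python
# Basis vectors from the exercise
a1, a2, a3 = (1, 0, 1), (0, 1, -2), (-1, -1, 0)

def coords(a, b, c):
    # Coordinates (x1, x2, x3) of (a, b, c) in the basis {a1, a2, a3},
    # read off the row reduction above.
    return (2*a - 2*b - c, a - b - c, a - 2*b - c)

for (a, b, c) in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, 3, -1), (5, -7, 11)]:
    x1, x2, x3 = coords(a, b, c)
    # The combination x1*a1 + x2*a2 + x3*a3 must reproduce (a, b, c)
    assert tuple(x1*p + x2*q + x3*r for p, q, r in zip(a1, a2, a3)) == (a, b, c)
    # f is determined by f(a1) = 1, f(a2) = -1, f(a3) = 3
    assert x1*1 + x2*(-1) + x3*3 == 4*a - 7*b - 3*c
```

In particular `coords(2, 3, -1)` returns `(-1, 0, -3)`, the coefficients used in part (c).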

Exercise 3.5.2

Let $\mathcal B=\{\alpha_1,\alpha_2,\alpha_3\}$ be the basis for $\mathbb C^3$ defined by
$$\alpha_1=(1,0,-1),\quad\alpha_2=(1,1,1),\quad\alpha_3=(2,2,0).$$Find the dual basis of $\mathcal B$.

Solution: The dual basis $\{f_1,f_2,f_3\}$ is given by $f_i(x_1,x_2,x_3)=\sum_{j=1}^3 A_{ij}x_j$ where $(A_{1,1},A_{1,2},A_{1,3})$ is the solution to the system
$$\left[\begin{array}{ccc|c}1&0&-1&1\\1&1&1&0\\2&2&0&0\end{array}\right],$$$(A_{2,1},A_{2,2},A_{2,3})$ is the solution to the system
$$\left[\begin{array}{ccc|c}1&0&-1&0\\1&1&1&1\\2&2&0&0\end{array}\right],$$and $(A_{3,1},A_{3,2},A_{3,3})$ is the solution to the system
$$\left[\begin{array}{ccc|c}1&0&-1&0\\1&1&1&0\\2&2&0&1\end{array}\right].$$We row reduce the generic matrix
$$\left[\begin{array}{ccc|c}1&0&-1&a\\1&1&1&b\\2&2&0&c\end{array}\right]\rightarrow\left[\begin{array}{ccc|c}1&0&-1&a\\0&1&2&b-a\\0&2&2&c-2a\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|c}1&0&-1&a\\0&1&2&b-a\\0&0&-2&c-2b\end{array}\right]\rightarrow\left[\begin{array}{ccc|c}1&0&0&a+b-\frac12c\\0&1&0&c-b-a\\0&0&1&b-\frac12c\end{array}\right].$$$a=1$, $b=0$, $c=0$ $\Rightarrow$ $f_1(x_1,x_2,x_3)=x_1-x_2$.

$a=0$, $b=1$, $c=0$ $\Rightarrow$ $f_2(x_1,x_2,x_3)=x_1-x_2+x_3$.

$a=0$, $b=0$, $c=1$ $\Rightarrow$ $f_3(x_1,x_2,x_3)=-\frac12x_1+x_2-\frac12x_3$.

Then $\{f_1,f_2,f_3\}$ is the dual basis to $\{\alpha_1,\alpha_2,\alpha_3\}$.
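The defining property $f_i(\alpha_j)=\delta_{ij}$ can be verified directly; a minimal Python check (fractions are used for the $\frac12$ coefficients):

```python
from fractions import Fraction as Fr

alphas = [(1, 0, -1), (1, 1, 1), (2, 2, 0)]
# Coefficient vectors of the three functionals found above
fs = [(1, -1, 0), (1, -1, 1), (Fr(-1, 2), 1, Fr(-1, 2))]

def apply(f, v):
    # Evaluate the functional with coefficient vector f at the vector v
    return sum(c * x for c, x in zip(f, v))

for i, f in enumerate(fs):
    for j, a in enumerate(alphas):
        assert apply(f, a) == (1 if i == j else 0)
```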

Exercise 3.5.3

If $A$ and $B$ are $n\times n$ matrices over the field $F$, show that trace$(AB)$ $=$ trace$(BA)$. Now show that similar matrices have the same trace.

Solution: We have $(AB)_{ij}=\sum_{k=1}^n A_{ik}B_{kj}$ and $(BA)_{ij}=\sum_{k=1}^n B_{ik}A_{kj}$. Thus
\begin{align*}\text{trace}(AB)&=\sum_{i=1}^n(AB)_{ii}=\sum_{i=1}^n\sum_{k=1}^n A_{ik}B_{ki}\\&=\sum_{i=1}^n\sum_{k=1}^n B_{ki}A_{ik}
=\sum_{k=1}^n\sum_{i=1}^n B_{ki}A_{ik}
\\&=\sum_{k=1}^n(BA)_{kk}=\text{trace}(BA).\end{align*}Suppose $A$ and $B$ are similar. Then $\exists$ an invertible $n\times n$ matrix $P$ such that $A=PBP^{-1}$. Thus \begin{align*}\text{trace}(A)&=\text{trace}(PBP^{-1})=\text{trace}((P)(BP^{-1}))\\&=\text{trace}((BP^{-1})(P))=\text{trace}(B).\end{align*}
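A concrete illustration: even when $AB\not=BA$, the traces agree. For example:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [-1, 7]]

assert matmul(A, B) != matmul(B, A)                 # the products differ...
assert trace(matmul(A, B)) == trace(matmul(B, A))   # ...but the traces agree
```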

Exercise 3.5.4

Let $V$ be the vector space of all polynomial functions $p$ from $\mathbb R$ into $\mathbb R$ which have degree $2$ or less:
$$p(x)=c_0+c_1x+c_2x^2.$$Define three linear functionals on $V$ by
$$f_1(p)=\int_0^1p(x)dx,\quad f_2(p)=\int_0^2p(x)dx,\quad f_3(p)=\int_0^3p(x)dx.$$Show that $\{f_1,f_2,f_3\}$ is a basis for $V^{*}$ by exhibiting the basis for $V$ of which it is the dual.

Solution: We have $$\int_0^a(c_0+c_1x+c_2x^2)dx$$$$=c_0x+\frac12c_1x^2+\frac13c_2x^3 \Big|_0^a$$$$=c_0a+\frac12c_1a^2+\frac13c_2a^3.$$Thus
$$\int_0^1p(x)dx=c_0+\frac12c_1+\frac13c_2$$$$\int_0^2p(x)dx=2c_0+2c_1+\frac83c_2$$$$\int_0^3p(x)dx=3c_0+\frac92c_1+9c_2.$$Thus we need to solve the following system three times:
$$\left[\begin{array}{ccc|c}1&\frac12&\frac13&u\\2&2&\frac83&v\\3&\frac92&9&w\end{array}\right].$$Once when $(u,v,w)=(1,0,0)$, once when $(u,v,w)=(0,1,0)$ and once when $(u,v,w)=(0,0,1)$.

We therefore row reduce the following matrix
$$\left[\begin{array}{ccc|ccc}1&\frac12&\frac13&1&0&0\\2&2&\frac83&0&1&0\\3&\frac92&9&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&\frac12&\frac13&1&0&0\\0&1&2&-2&1&0\\0&3&8&-3&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-\frac23&2&-\frac12&0\\0&1&2&-2&1&0\\0&0&2&3&-3&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&-\frac23&2&-\frac12&0\\0&1&2&-2&1&0\\0&0&1&\frac32&-\frac32&\frac12\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&0&3&-\frac32&\frac13\\0&1&0&-5&4&-1\\0&0&1&\frac32&-\frac32&\frac12\end{array}\right].$$The columns of the right-hand block give the coefficients $(c_0,c_1,c_2)$ for the three choices of $(u,v,w)$. Thus the desired basis for $V$ is $\{p_1,p_2,p_3\}$, where
$$p_1(x)=3-5x+\frac32x^2,\quad p_2(x)=-\frac32+4x-\frac32x^2,\quad p_3(x)=\frac13-x+\frac12x^2,$$and $\{f_1,f_2,f_3\}$ is the dual basis of $\{p_1,p_2,p_3\}$; in particular $\{f_1,f_2,f_3\}$ is a basis for $V^*$.
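Reading the coefficient triples $(c_0,c_1,c_2)$ off the columns of the right-hand block gives three polynomials; the defining property $f_i(p_j)=\delta_{ij}$ can be checked with exact arithmetic:

```python
from fractions import Fraction as Fr

# Candidate dual basis, read off the columns of the inverse computed above
p1 = (Fr(3), Fr(-5), Fr(3, 2))       # 3 - 5x + (3/2)x^2
p2 = (Fr(-3, 2), Fr(4), Fr(-3, 2))   # -3/2 + 4x - (3/2)x^2
p3 = (Fr(1, 3), Fr(-1), Fr(1, 2))    # 1/3 - x + (1/2)x^2

def integral(p, a):
    # Integral of c0 + c1 x + c2 x^2 from 0 to a
    c0, c1, c2 = p
    return c0*a + Fr(1, 2)*c1*a**2 + Fr(1, 3)*c2*a**3

for i, p in enumerate([p1, p2, p3]):
    for j, upper in enumerate([1, 2, 3]):
        assert integral(p, upper) == (1 if i == j else 0)
```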

Exercise 3.5.5

If $A$ and $B$ are $n\times n$ complex matrices, show that $AB-BA=I$ is impossible.

Solution: Recall for $n\times n$ matrices $M$, $\text{trace}(M)=\sum_{i=1}^nM_{ii}$. The trace is clearly additive $$\text{trace}(M_1+M_2)=\text{trace}(M_1)+\text{trace}(M_2).$$ We know from Exercise $3$ that $\text{trace}(AB)=\text{trace}(BA)$. Thus \begin{align*}\text{trace}(AB-BA)&=\text{trace}(AB)-\text{trace}(BA)\\&=\text{trace}(AB)-\text{trace}(AB)=0.\end{align*} But $\text{trace}(I)=n$ and $n\not=0$ in $\mathbb C$.
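The two facts used here, $\text{trace}(AB-BA)=0$ and $\text{trace}(I)=n$, can be illustrated on a small example:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

A = [[2, 1], [0, 3]]
B = [[1, 4], [5, 6]]
AB, BA = matmul(A, B), matmul(B, A)
C = [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

assert trace(C) == 0                 # trace(AB - BA) is always 0
assert trace([[1, 0], [0, 1]]) == 2  # trace(I) = n, which is nonzero
```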

Exercise 3.5.6

Let $m$ and $n$ be positive integers and $F$ a field. Let $f_1,\dots,f_m$ be linear functionals on $F^n$. For $\alpha$ in $F^n$ define
$$T(\alpha)=(f_1(\alpha),\dots,f_m(\alpha)).$$Show that $T$ is a linear transformation from $F^n$ into $F^m$. Then show that every linear transformation from $F^n$ into $F^m$ is of the above form, for some $f_1,\dots,f_m$.

Solution: Clearly $T$ is a well-defined function from $F^n$ into $F^m$. We must just show it is linear. Let $\alpha,\beta\in F^n$, $c\in F$. Then
$$T(c\alpha+\beta)=(f_1(c\alpha+\beta),\dots,f_m(c\alpha+\beta))$$$$=(cf_1(\alpha)+f_1(\beta),\dots,cf_m(\alpha)+f_m(\beta))$$$$=c(f_1(\alpha),\dots,f_m(\alpha))+(f_1(\beta),\dots,f_m(\beta))$$$$=cT(\alpha)+T(\beta).$$Thus $T$ is a linear transformation.

Let $S$ be any linear transformation from $F^n$ to $F^m$. Let $M$ be the matrix of $S$ with respect to the standard bases of $F^n$ and $F^m$. Then $M$ is an $m\times n$ matrix and $S$ is given by $X\mapsto MX$ where we identify $F^n$ with $F^{n\times1}$ and $F^m$ with $F^{m\times1}$. Now for each $i=1,\dots,m$ let $$f_i(x_1,\dots,x_n)=\sum_{j=1}^nM_{ij}x_j.$$ Then $X\mapsto MX$ is the same as $$X\mapsto(f_1(X),\dots,f_m(X))$$ (keeping in mind our identification of $F^m$ with $F^{m\times1}$). Thus $S$ has been written in the desired form.

Exercise 3.5.7

Let $\alpha_1=(1,0,-1,2)$ and $\alpha_2=(2,3,1,1)$, and let $W$ be the subspace of $\mathbb R^4$ spanned by $\alpha_1$ and $\alpha_2$. Which linear functionals $f$:
$$f(x_1,x_2,x_3,x_4)=c_1x_1+c_2x_2+c_3x_3+c_4x_4$$are in the annihilator of $W$?

Solution: The two vectors $\alpha_1$ and $\alpha_2$ are linearly independent since neither is a multiple of the other. Thus $W$ has dimension $2$ and $\{\alpha_1,\alpha_2\}$ is a basis for $W$. Therefore a functional $f$ is in the annihilator of $W$ if and only if $f(\alpha_1)=f(\alpha_2)=0$. We find such $f$ by solving the system
$$\left\{\begin{array}{l}f(\alpha_1)=0\\f(\alpha_2)=0\end{array}\right.$$or equivalently
$$\left\{\begin{array}{l}c_1-c_3+2c_4=0\\2c_1+3c_2+c_3+c_4=0\end{array}\right.$$We do this by row reducing the matrix
$$\left[\begin{array}{cccc}1 & 0 & -1 & 2\\2 & 3 & 1 & 1\end{array}\right]\rightarrow\left[\begin{array}{cccc}1 & 0 & -1 & 2\\0 & 1 & 1 & -1\end{array}\right].$$
The general element of $W^0$ is therefore
$$f(x_1,x_2,x_3,x_4)=(c_3-2c_4)x_1+(c_4-c_3)x_2+c_3x_3+c_4x_4,$$for arbitrary scalars $c_3$ and $c_4$. Thus $W^0$ has dimension $2$, as expected.
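A quick check that every functional of this form kills $\alpha_1$ and $\alpha_2$ (note that the $x_2$ coefficient must be $c_4-c_3$):

```python
a1 = (1, 0, -1, 2)
a2 = (2, 3, 1, 1)

def annihilator(c3, c4):
    # Coefficient vector of the general element of W^0
    return (c3 - 2*c4, c4 - c3, c3, c4)

def apply(c, v):
    return sum(x * y for x, y in zip(c, v))

for c3, c4 in [(1, 0), (0, 1), (5, -7)]:
    c = annihilator(c3, c4)
    assert apply(c, a1) == 0 and apply(c, a2) == 0
```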

Exercise 3.5.8

Let $W$ be the subspace of $\mathbb R^5$ which is spanned by the vectors
$$\alpha_1=\epsilon_1+2\epsilon_2+\epsilon_3,\quad \alpha_2=\epsilon_2+3\epsilon_3+3\epsilon_4+\epsilon_5$$$$\alpha_3=\epsilon_1+4\epsilon_2+6\epsilon_3+4\epsilon_4+\epsilon_5.$$Find a basis for $W^0$.

Solution: The vectors $\alpha_1$, $\alpha_2$, $\alpha_3$ are linearly independent, as can be seen by row reducing the matrix
$$\left[\begin{array}{ccccc}
1 & 2 & 1 & 0 & 0\\
0 & 1 & 3 & 3 & 1\\
1 & 4 & 6 & 4 & 1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1 & 2 & 1 & 0 & 0\\
0 & 1 & 3 & 3 & 1\\
0 & 2 & 5 & 4 & 1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1 & 0 & -5 & -6 & -2\\
0 & 1 & 3 & 3 & 1\\
0 & 0 & -1 & -2 & -1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1 & 0 & -5 & -6 & -2\\
0 & 1 & 3 & 3 & 1\\
0 & 0 & 1 & 2 & 1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccccc}
1 & 0 & 0 & 4 & 3\\
0 & 1 & 0 & -3 & -2\\
0 & 0 & 1 & 2 & 1\end{array}\right].$$Thus $W$ has dimension $3$ and $\{\alpha_1,\alpha_2,\alpha_3\}$ is a basis for $W$. We know every functional is given by $$f(x_1,x_2,x_3,x_4,x_5)=c_1x_1+c_2x_2+c_3x_3+c_4x_4+c_5x_5$$ for some $c_1,\dots,c_5$. From the row reduced matrix we see that the general solution for an element of $W^0$ is $c_1=-4c_4-3c_5$, $c_2=3c_4+2c_5$, $c_3=-2c_4-c_5$, with $c_4,c_5$ free. Taking $(c_4,c_5)=(1,0)$ and $(0,1)$ gives the basis $\{f_1,f_2\}$ for $W^0$, where$$f_1(x_1,\dots,x_5)=-4x_1+3x_2-2x_3+x_4,\quad f_2(x_1,\dots,x_5)=-3x_1+2x_2-x_3+x_5.$$
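That the two functionals read off the reduced matrix, $f_1=-4x_1+3x_2-2x_3+x_4$ and $f_2=-3x_1+2x_2-x_3+x_5$, indeed annihilate $\alpha_1,\alpha_2,\alpha_3$ is a one-line check:

```python
alphas = [(1, 2, 1, 0, 0), (0, 1, 3, 3, 1), (1, 4, 6, 4, 1)]
# Candidate basis of W^0, read off the reduced matrix above
f1 = (-4, 3, -2, 1, 0)
f2 = (-3, 2, -1, 0, 1)

def apply(f, v):
    return sum(c * x for c, x in zip(f, v))

assert all(apply(f, a) == 0 for f in (f1, f2) for a in alphas)
```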

Exercise 3.5.9

Let $V$ be the vector space of all $2\times2$ matrices over the field of real numbers, and let
$$B=\left[\begin{array}{cc}2&-2\\-1&1\end{array}\right].$$Let $W$ be the subspace of $V$ consisting of all $A$ such that $AB=0$. Let $f$ be a linear functional on $V$ which is in the annihilator of $W$. Suppose that $f(I)=0$ and $f(C)=3$, where $I$ is the $2\times2$ identity matrix and
$$C=\left[\begin{array}{cc}0&0\\0&1\end{array}\right].$$Find $f(B)$.

Solution: The general linear functional on $V$ is of the form $f(A)=aA_{11}+bA_{12}+cA_{21}+dA_{22}$ for some $a,b,c,d\in\mathbb R$. If $A\in W$ then
$$\left[\begin{array}{cc} x & y\\z & w\end{array}\right]\left[\begin{array}{cc} 2 & -2\\-1 & 1\end{array}\right]=\left[\begin{array}{cc} 0 & 0\\0 & 0\end{array}\right]$$implies $y=2x$ and $w=2z$. So (renaming $z$ as $y$) $W$ consists of all matrices of the form
$$\left[\begin{array}{cc} x & 2x\\y & 2y\end{array}\right]$$Now $f\in W^0$ $\Rightarrow$ $f\left(\left[\begin{array}{cc} x & 2x\\y & 2y\end{array}\right]\right)=0$ $\forall$ $x,y\in\mathbb R$ $\Rightarrow$ $ax+2bx+cy+2dy=0$ $\forall$ $x,y\in\mathbb R$ $\Rightarrow$ $(a+2b)x+(c+2d)y=0$ $\forall$ $x,y\in\mathbb R$ $\Rightarrow$ $b = -\frac12a$ and $d=-\frac12c$. So the general $f\in W^0$ is of the form $$f\left(A\right)=aA_{11}-\frac12aA_{12}+cA_{21}-\frac12cA_{22}.$$Now $f(C)=3$ $\Rightarrow$ $d=3$ $\Rightarrow$ $-\frac12c=3$ $\Rightarrow$ $c=-6$. And $f(I)=0$ $\Rightarrow$ $a-\frac12c=0$ $\Rightarrow$ $c=2a$ $\Rightarrow$ $a=-3$. Thus $$f(A)=-3A_{11}+\frac32A_{12}-6A_{21}+3A_{22}.$$ Thus $$f(B)=-3\cdot2+\frac32\cdot(-2)-6\cdot(-1)+3\cdot1=0.$$
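The functional found above can be verified against all of the given constraints:

```python
from fractions import Fraction as Fr

def f(A):
    # The functional found above: f(A) = -3 A11 + (3/2) A12 - 6 A21 + 3 A22
    return -3*A[0][0] + Fr(3, 2)*A[0][1] - 6*A[1][0] + 3*A[1][1]

B = [[2, -2], [-1, 1]]
I = [[1, 0], [0, 1]]
C = [[0, 0], [0, 1]]

# f vanishes on the two spanning matrices of W
assert f([[1, 2], [0, 0]]) == 0
assert f([[0, 0], [1, 2]]) == 0
# f satisfies the given constraints, and f(B) = 0
assert f(I) == 0 and f(C) == 3
assert f(B) == 0
```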

Exercise 3.5.10

Let $F$ be a subfield of the complex numbers. We define $n$ linear functionals on $F^n$ $(n\geq2)$ by
$$f_k(x_1,\dots,x_n)=\sum_{j=1}^n(k-j)x_j,\quad 1\leq k\leq n.$$What is the dimension of the subspace annihilated by $f_1,\dots,f_n$?

Solution: Note that for each $k$,$$f_k(x_1,\dots,x_n)=k\sum_{j=1}^nx_j-\sum_{j=1}^njx_j=kg_1-g_2,$$where $g_1(x_1,\dots,x_n)=\sum_{j=1}^nx_j$ and $g_2(x_1,\dots,x_n)=\sum_{j=1}^njx_j$. Conversely $g_1=f_2-f_1$ and $g_2=f_2-2f_1$, so the span of $f_1,\dots,f_n$ equals the span of $\{g_1,g_2\}$. Since $n\geq2$, $g_1$ and $g_2$ are linearly independent: if $ag_1+bg_2=0$ then evaluating at $\epsilon_1$ and $\epsilon_2$ gives $a+b=0$ and $a+2b=0$, so $a=b=0$. Thus a vector is annihilated by all of $f_1,\dots,f_n$ if and only if it is annihilated by $g_1$ and $g_2$, i.e. if and only if it lies in $N_{g_1}\cap N_{g_2}$, the intersection of two distinct hyperspaces. Therefore the subspace annihilated by $f_1,\dots,f_n$ has dimension $n-2$.
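The claim that $\text{span}\{f_1,\dots,f_n\}$ has dimension $2$ (so the annihilated subspace has dimension $n-2$, as the comment thread below also points out) can be checked by computing the rank of the coefficient matrix $M_{kj}=k-j$ for several $n$:

```python
from fractions import Fraction as Fr

def rank(M):
    # Rank via Gaussian elimination over the rationals
    M = [[Fr(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][col] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                M[i] = [a - M[i][col] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

for n in (2, 3, 4, 6):
    M = [[k - j for j in range(1, n + 1)] for k in range(1, n + 1)]
    assert rank(M) == 2   # so the annihilated subspace has dimension n - 2
```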

Exercise 3.5.11

Let $W_1$ and $W_2$ be subspaces of a finite-dimensional vector space $V$.

(a) Prove that $(W_1+W_2)^0=W_1^0\cap W_2^0$.
(b) Prove that $(W_1\cap W_2)^0=W_1^0+ W_2^0$.


(a) $f\in(W_1+W_2)^0$ $\Rightarrow$ $f(v)=0$ $\forall$ $v\in W_1+W_2$ $\Rightarrow$ $f(w_1+w_2)=0$ $\forall$ $w_1\in W_1$, $w_2\in W_2$ $\Rightarrow$ $f(w_1)=0$ $\forall$ $w_1\in W_1$ (take $w_2=0$) and $f(w_2)=0$ $\forall$ $w_2\in W_2$ (take $w_1=0$). Thus $f\in W_1^0$ and $f\in W_2^0$. Thus $f\in W_1^0\cap W_2^0$. Thus $(W_1+W_2)^0\subseteq W_1^0\cap W_2^0$.

Conversely, let $f\in W_1^0\cap W_2^0$. Let $v\in W_1+W_2$. Then $v=w_1+w_2$ where $w_i\in W_i$. Thus $$f(v)=f(w_1+w_2)=f(w_1)+f(w_2)=0+0$$ (since $f\in W_1^0$ and $f\in W_2^0$). Thus $f(v)=0$ $\forall$ $v\in W_1+W_2$. Thus $f\in(W_1+W_2)^0$. Thus $W_1^0\cap W_2^0\subseteq (W_1+W_2)^0$.

Since $(W_1+W_2)^0\subseteq W_1^0\cap W_2^0$ and $W_1^0\cap W_2^0\subseteq (W_1+W_2)^0$ it follows that $$W_1^0\cap W_2^0= (W_1+W_2)^0.$$(b) $f\in W_1^0+W_2^0$ $\Rightarrow$ $f=f_1+f_2$, for some $f_i\in W_i^0$. Now let $v\in W_1\cap W_2$. Then $$f(v)=(f_1+f_2)(v)=f_1(v)+f_2(v)=0+0.$$ Thus $f\in(W_1\cap W_2)^0$. Thus $W_1^0+W_2^0 \subseteq (W_1\cap W_2)^0$.

Now let $f\in(W_1\cap W_2)^0$. In the proof of Theorem 6 on page 46 it was shown that we can choose a basis for $W_1+W_2$
$$\{\alpha_1,\dots,\alpha_k,\quad \beta_1,\dots,\beta_m,\quad \gamma_1,\dots,\gamma_n\}$$where $\{\alpha_1,\dots,\alpha_k\}$ is a basis for $W_1\cap W_2$, $\{\alpha_1,\dots,\alpha_k,\quad \beta_1,\dots,\beta_m\}$ is a basis for $W_1$ and $\{\alpha_1,\dots,\alpha_k,\,\gamma_1,\dots,\gamma_n\}$ is a basis for $W_2$. We expand this to a basis for all of $V$
$$\{\alpha_1,\dots,\alpha_k,\quad \beta_1,\dots,\beta_m,\quad \gamma_1,\dots,\gamma_n,\quad \lambda_1,\dots,\lambda_{\ell}\}.$$Now the general element $v\in V$ can be written as
\begin{equation}\label{wefe2}
v=\sum_{i=1}^k x_i\alpha_i+\sum_{i=1}^m y_i\beta_i+\sum_{i=1}^n z_i\gamma_i + \sum_{i=1}^{\ell}w_i\lambda_i
\end{equation}and $f$ is given by
$$f(v)=\sum_{i=1}^k a_ix_i+\sum_{i=1}^m b_iy_i+\sum_{i=1}^n c_iz_i+ \sum_{i=1}^{\ell}d_iw_i$$for some constants $a_i$, $b_i$, $c_i$, $d_i$. Since $f(v)=0$ for all $v\in W_1\cap W_2$, it follows that $a_1=\cdots=a_k=0$. So
$$f(v)=\sum b_iy_i+\sum c_iz_i+ \sum d_iw_i.$$ Define
$$f_1(v)=\sum c_iz_i+ \sum d_iw_i$$and
$$f_2(v)=\sum b_iy_i.$$Then $f=f_1+f_2$. Now if $v\in W_1$ then
$$v=\sum_{i=1}^k x_i\alpha_i+\sum_{i=1}^m y_i\beta_i,$$so that the coefficients $z_i$ and $w_i$ in (\ref{wefe2}) are all zero. Thus $f_1(v)=0$. Thus $f_1\in W_1^0$. Similarly if $v\in W_2$ then the coefficients $y_i$ and $w_i$ in (\ref{wefe2}) are all zero and thus $f_2(v)=0$. So $f_2\in W_2^0$. Thus $f=f_1+f_2$ where $f_1\in W_1^0$ and $f_2\in W_2^0$. Thus $f\in W_1^0+W_2^0$. Thus $(W_1\cap W_2)^0\subseteq W_1^0+W_2^0$.

Combining the two inclusions, we conclude $(W_1\cap W_2)^0=W_1^0+W_2^0$.

Exercise 3.5.12

Let $V$ be a finite-dimensional vector space over the field $F$ and let $W$ be a subspace of $V$. If $f$ is a linear functional on $W$, prove that there is a linear functional $g$ on $V$ such that $g(\alpha)=f(\alpha)$ for each $\alpha$ in the subspace $W$.

Solution: Let $\mathcal B$ be a basis for $W$ and let $\mathcal B'$ be a basis for $V$ such that $\mathcal B\subseteq \mathcal B'$. A linear function on a vector space is uniquely determined by its values on a basis, and conversely any function on the basis can be extended to a linear function on the space. Thus we define $g$ on $\mathcal B$ by $g(\beta)=f(\beta)$ $\forall$ $\beta\in\mathcal B$. Then define $g(\beta)=0$ for all $\beta\in\mathcal B'\setminus\mathcal B$. Since we have defined $g$ on $\mathcal B'$ it defines a linear functional on $V$ and since it agrees with $f$ on a basis for $W$ it agrees with $f$ on all of $W$.

Exercise 3.5.13

Let $F$ be a subfield of the field of complex numbers and let $V$ be any vector space over $F$. Suppose that $f$ and $g$ are linear functionals on $V$ such that the function $h$ defined by $h(\alpha)=f(\alpha)g(\alpha)$ is also a linear functional on $V$. Prove that either $f=0$ or $g=0$.

Solution: Suppose neither $f$ nor $g$ is the zero function. We will derive a contradiction. Let $v\in V$. Then $h(2v)=f(2v)g(2v)=4f(v)g(v)$. But also $h(2v)=2h(v)=2f(v)g(v)$. Therefore $f(v)g(v)=2f(v)g(v)$ $\forall$ $v\in V$. Thus $f(v)g(v)=0$ $\forall$ $v\in V$. Let $\mathcal B$ be a basis for $V$. Let $\mathcal B_1=\{\beta\in\mathcal B\mid f(\beta)=0\}$ and $\mathcal B_2=\{\beta\in\mathcal B\mid g(\beta)=0\}$. Since $f(\beta)g(\beta)=0$ $\forall$ $\beta\in\mathcal B$, we have $\mathcal B=\mathcal B_1\cup\mathcal B_2$. Suppose $\mathcal B_1\subseteq \mathcal B_2$. Then $\mathcal B_2=\mathcal B$ and consequently $g$ is the zero function. Thus $\mathcal B_1\not\subseteq \mathcal B_2$. And similarly $\mathcal B_2\not\subseteq \mathcal B_1$. Thus we can choose $\beta_1\in\mathcal B_1\setminus\mathcal B_2$ and $\beta_2\in\mathcal B_2\setminus\mathcal B_1$. So we have $f(\beta_2)\not=0$ and $g(\beta_1)\not=0$. Then $$f(\beta_1+\beta_2)g(\beta_1+\beta_2)=f(\beta_1)g(\beta_1)+f(\beta_2)g(\beta_1)+f(\beta_1)g(\beta_2)+f(\beta_2)g(\beta_2).$$Since $f(\beta_1)=g(\beta_2)=0$, this equals $f(\beta_2)g(\beta_1)$ which is non-zero since each term is non-zero. And this contradicts the fact that $f(v)g(v)=0$ $\forall$ $v\in V$.

Exercise 3.5.14

Let $F$ be a field of characteristic zero and let $V$ be a finite-dimensional vector space over $F$. If $\alpha_1,\dots,\alpha_m$ are finitely many vectors in $V$, each different from the zero vector, prove that there is a linear functional $f$ on $V$ such that
$$f(\alpha_i)\not=0,\quad i=1,\dots,m.$$

Solution: Re-index if necessary so that $\{\alpha_1,\dots,\alpha_k\}$ is a basis for the subspace generated by $\{\alpha_1,\dots,\alpha_m\}$. So each $\alpha_{k+1},\dots,\alpha_{m}$ can be written in terms of $\alpha_1,\dots,\alpha_k$. Extend $\{\alpha_1,\dots,\alpha_k\}$ to a basis for $V$
$$\{\alpha_1,\dots,\alpha_k,\beta_1,\dots,\beta_n\}.$$ For each $i=k+1,\dots,m$ write $\alpha_i=\sum_{j=1}^kA_{ij}\alpha_j$. Since $\alpha_{k+1},\dots,\alpha_{m}$ are all non-zero, for each $i=k+1,\dots,m$ $\exists$ $j_i\leq k$ such that $A_{ij_i}\not=0$. Now define $f$ by mapping $\alpha_1,\dots,\alpha_k$ to $k$ arbitrary non-zero values and map $\beta_i$ to zero $\forall$ $i$. Then $f(\alpha_{k+1})=\sum_{j=1}^kA_{k+1,j}f(\alpha_j)$. If $f(\alpha_{k+1})=0$ then leaving $f(\alpha_i)$ fixed for all $i\leq k$ and adjusting $f(\alpha_{j_{k+1}})$, it equals zero for exactly one possible value of $f(\alpha_{j_{k+1}})$ (since $A_{k+1,j_{k+1}}\not=0$). Thus we can redefine $f(\alpha_{j_{k+1}})$ so that $f(\alpha_{k+1})\not=0$ while maintaining $f(\alpha_{j_{k+1}})\not=0$.

Now if $f(\alpha_{k+2})=0$, then leaving $f(\alpha_i)$ fixed for $i\not=j_{k+2}$, it equals zero for exactly one possible value of $f(\alpha_{j_{k+2}})$ (since $A_{k+2,j_{k+2}}\not=0$). So we can adjust $f(\alpha_{j_{k+2}})$ so that $f(\alpha_{k+1})\not=0$ and $f(\alpha_{k+2})\not=0$ simultaneously. (Since $F$ has characteristic zero it is infinite, so at each step there are infinitely many available values avoiding the finitely many forbidden ones.)

Continuing in this way we can adjust $f(\alpha_{j_{k+3}}),\dots,f(\alpha_{j_m})$ as necessary until all $f(\alpha_{k+1}),\dots,f(\alpha_{m})$ are non-zero and also all of $f(\alpha_1),\dots,f(\alpha_k)$ are non-zero.

Exercise 3.5.15

According to Exercise 3, similar matrices have the same trace. Thus we can define the trace of a linear operator on a finite-dimensional space to be the trace of any matrix which represents the operator in an ordered basis. This is well-defined since all such representing matrices for one operator are similar.

Now let $V$ be the space of all $2\times2$ matrices over the field $F$ and let $P$ be a fixed $2\times2$ matrix. Let $T$ be the linear operator on $V$ defined by $T(A)=PA$. Prove that $\text{trace}(T)=2\text{trace}(P)$.

Solution: Write
$$e_{11}=\left[\begin{array}{cc}1&0\\0&0\end{array}\right],\quad e_{12}=\left[\begin{array}{cc}0&1\\0&0\end{array}\right]$$$$e_{21}=\left[\begin{array}{cc}0&0\\1&0\end{array}\right],\quad e_{22}=\left[\begin{array}{cc}0&0\\0&1\end{array}\right]$$Then $\mathcal B=\{e_{11},e_{12},e_{21},e_{22}\}$ is an ordered basis for $V$. We find the matrix of the linear transformation with respect to this basis.
$$T(e_{11})=\left[\begin{array}{cc}P_{11}&0\\P_{21}&0\end{array}\right]=P_{11}e_{11}+P_{21}e_{21}$$$$T(e_{12})=\left[\begin{array}{cc}0&P_{11}\\0&P_{21}\end{array}\right]=P_{11}e_{12}+P_{21}e_{22}$$$$T(e_{21})=\left[\begin{array}{cc}P_{12}&0\\P_{22}&0\end{array}\right]=P_{12}e_{11}+P_{22}e_{21}$$$$T(e_{22})=\left[\begin{array}{cc}0&P_{12}\\0&P_{22}\end{array}\right]=P_{12}e_{12}+P_{22}e_{22}.$$Thus the matrix of $T$ with respect to $\mathcal B$ is
$$\left[\begin{array}{cccc}P_{11} & 0 & P_{12} & 0\\
0 & P_{11} & 0 & P_{12}\\
P_{21} & 0 & P_{22} & 0\\
0 & P_{21} & 0 & P_{22}
\end{array}\right].$$The trace of this matrix is $2P_{11} + 2P_{22}=2\text{trace}(P)$.
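The computation can be repeated mechanically for a sample $P$ (the particular entries below are an arbitrary choice):

```python
# Build the 4x4 matrix of T(A) = PA in the ordered basis {e11, e12, e21, e22}
P = [[7, 2], [-3, 5]]
basis = [(0, 0), (0, 1), (1, 0), (1, 1)]   # index pairs (i, j) for e_ij

def T_matrix(P):
    M = [[0] * 4 for _ in range(4)]
    for col, (i, j) in enumerate(basis):
        E = [[1 if (r, c) == (i, j) else 0 for c in range(2)] for r in range(2)]
        PE = [[sum(P[r][k] * E[k][c] for k in range(2)) for c in range(2)]
              for r in range(2)]
        for row, (a, b) in enumerate(basis):
            M[row][col] = PE[a][b]   # coordinate of T(e_ij) along e_ab
    return M

M = T_matrix(P)
# trace(T) = 2 * trace(P)
assert sum(M[i][i] for i in range(4)) == 2 * (P[0][0] + P[1][1])
```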

Exercise 3.5.16

Show that the trace functional on $n\times n$ matrices is unique in the following sense. If $W$ is the space of $n\times n$ matrices over the field $F$ and if $f$ is a linear functional on $W$ such that $f(AB)=f(BA)$ for each $A$ and $B$ in $W$, then $f$ is a scalar multiple of the trace function. If, in addition, $f(I)=n$ then $f$ is the trace function.

Solution: Let $A$ and $B$ be $n\times n$ matrices. The $\ell,m$ entry in $AB$ is
\begin{equation}
(AB)_{\ell m}=\sum_{k=1}^nA_{\ell k}B_{km}
\end{equation}and the $\ell,m$ entry in $BA$ is
\begin{equation}\label{fjw9320}
(BA)_{\ell m}=\sum_{k=1}^nB_{\ell k}A_{km}.
\end{equation}Fix $i,j\in\{1,\dots,n\}$ such that $i>j$. Let $A$ be the matrix where $A_{ij}=1$ and all other entries are zero. Let $B$ be the matrix where $B_{ii}=1$ and all other entries are zero. Consider the general element of $AB$
$$(AB)_{\ell m}=\sum_{k=1}^nA_{\ell k}B_{km}.$$The only non-zero entry of $A$ appearing in the sum on the right is $A_{ij}$, which forces $\ell=i$ and $k=j$. But then $B_{jm}=0$ for every $m$, since $j\not=i$ and only $B_{ii}\not=0$. Thus $AB$ is the zero matrix.

Now we compute $BA$. From (\ref{fjw9320}) the only non-zero term is when $\ell=i$, $m=j$ and $k=i$.

Thus the matrix $BA$ has zeros in every position except for the $i,j$ position, where it equals one.

Now the general functional on $n\times n$ matrices is of the form
$$f(M)=\sum_{\ell=1}^n\sum_{m=1}^n c_{\ell m}M_{\ell m}$$for some constants $c_{\ell m}$. Now $f(AB)=f(0)=0$ and $f(BA)=c_{ij}$. So if $f(AB)=f(BA)$ then it follows that $c_{ij}=0$.

Thus we have shown that $c_{ij}=0$ for all $i>j$. Similarly $c_{ij}=0$ for all $i<j$. Thus the only possible non-zero coefficients are $c_{11},\dots,c_{nn}$, so$$f(M)=\sum_{i=1}^n c_{ii}M_{ii}.$$We will be done if we show $c_{11}=c_{mm}$ for all $m=2,\dots,n$. Fix $2\leq i\leq n$. Let $A$ be the matrix such that $A_{11}=A_{i1}=1$ and $A_{\ell m}=0$ in all other positions. Let $B=A^{\text{T}}$. Then $AB$ is zero in every position except $(AB)_{11}=(AB)_{1i}=(AB)_{i1}=(AB)_{ii}=1$. And $BA$ is zero in every position except $(BA)_{11}=2$. Thus $f(AB)=c_{11}+c_{ii}$ and $f(BA)=2c_{11}$. Thus if $f(AB)=f(BA)$ then $c_{11}+c_{ii}=2c_{11}$, which implies $c_{11}=c_{ii}$. Thus there is a constant $c$ such that $c_{ii}=c$ for all $i$.

Thus $f$ is given by
$$f(M)=c\sum_{i=1}^n M_{ii}=c\,\text{trace}(M).$$If $f(I)=n$ then $cn=n$, so $c=1$ and $f$ is the trace function.
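The two matrix constructions used in the proof can be checked concretely (here for $n=3$, with the $0$-indexed pair $i=2$, $j=0$):

```python
def E(n, i, j):
    # Matrix unit with a 1 in position (i, j)
    return [[1 if (r, c) == (i, j) else 0 for c in range(n)] for r in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, i, j = 3, 2, 0                     # i > j
zero = [[0] * n for _ in range(n)]

# First construction: A = E_ij, B = E_ii gives AB = 0 while BA = E_ij
A, B = E(n, i, j), E(n, i, i)
assert matmul(A, B) == zero
assert matmul(B, A) == E(n, i, j)

# Second construction: A has 1s at positions (0, 0) and (i, 0), B = A^T;
# then AB has four 1s while BA has a single 2 in the (0, 0) position
A2 = [[1 if (r, c) in ((0, 0), (i, 0)) else 0 for c in range(n)] for r in range(n)]
B2 = [list(row) for row in zip(*A2)]
AB, BA = matmul(A2, B2), matmul(B2, A2)
assert AB[0][0] == AB[0][i] == AB[i][0] == AB[i][i] == 1
assert BA[0][0] == 2
assert all(BA[r][c] == 0 for r in range(n) for c in range(n) if (r, c) != (0, 0))
```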

Exercise 3.5.17

Let $W$ be the space of $n\times n$ matrices over the field $F$, and let $W_0$ be the subspace spanned by the matrices $C$ of the form $C=AB-BA$. Prove that $W_0$ is exactly the subspace of matrices which have trace zero. (Hint: What is the dimension of the space of matrices of trace zero? Use the matrix 'units,' i.e., matrices with exactly one non-zero entry, to construct enough linearly independent matrices of the form $AB-BA$.)

Solution: Let $W'=\{w\in W\mid \text{trace}(w)=0\}$. We want to show $W'=W_0$. We know from Exercise 3 that $\text{trace}(AB-BA)=0$ for all matrices $A,B$. Since matrices of the form $AB-BA$ span $W_0$, it follows that $\text{trace}(M)=0$ for all $M\in W_0$. Thus $W_0\subseteq W'$.

Since the trace function is a non-zero linear functional, the dimension of $W'$ is $\dim(W)-1=n^2-1$. Thus if we show the dimension of $W_0$ is also $n^2-1$ then we will be done. We do this by exhibiting $n^2-1$ linearly independent elements of $W_0$. Denote by $E_{ij}$ the matrix with a one in the $i,j$ position and zeros in all other positions. Let $H_{ij}=E_{ii}-E_{jj}$. Let $$\mathcal B=\{E_{ij}\mid i\not=j\}\cup\{H_{1,i}\mid 2\leq i\leq n\}.$$We will show that $\mathcal B\subseteq W_0$ and that $\mathcal B$ is a linearly independent set. First, it is clear that these matrices are linearly independent, because $E_{ij}$ is the only vector in $\mathcal B$ with a non-zero value in the $i,j$ position and $H_{1,i}$ is the only vector in $\mathcal B$ with a non-zero value in the $i,i$ position. Now $E_{ij}=E_{ii}E_{ij}-E_{ij}E_{ii}$ for $i\not=j$ (an identity valid in any characteristic) and $H_{ij}=E_{ij}E_{ji}-E_{ji}E_{ij}$. Thus $E_{ij}\in W_0$ and $H_{ij}\in W_0$. Now $$|\mathcal B|=|\{E_{ij}\mid i\not=j\}|+|\{H_{1,i}\mid 2\leq i\leq n\}|=(n^2-n)+(n-1)=n^2-1.$$Thus we are done.

From http://greggrant.org


This website is supposed to help you study Linear Algebra. Please only read these solutions after thinking about the problems carefully. Do not just copy these solutions.

This Post Has 4 Comments

  1. 3.5.10 is incorrect. Here is a counter example; Let f1, f2, f3 be linear functionals such that
    f1(x, y, z) = y - z
    f2(x, y, z) = x + z
    f3(x, y, z) = x + y
    If ei is the i-th standard basis of R^3, then clearly fi(ei) = 0. By the proof given, the subspace annihilated by the given functionals is the zero subspace. However, if v = (-x, x, x), then f1(v) = f2(v) = f3(v) = 0 which is a contradiction.

    1. The dimension appears to be (n - 2), and this is how I proved it:
      Let A be the matrix representing the system of equations f1(v) = 0, ..., fn(v) = 0. I don't know how to properly format matrices in the comments so I will skip the actual reduction, but it is easy to see the rank of A is 2 and therefore the null space (and therefore, the subspace annihilated by our given functionals) must have dimension n - 2.

      1. The idea is clear. These linear functionals can be expressed by two linear functionals, namely $$x_1+x_2+\cdots+x_n$$ and $$x_1+2x_2+\cdots+nx_n.$$Clearly, they are linearly independent. Hence the dimension of the subspace annihilated by $f_1,\dots,f_n$ is $n-2$.

    2. Yes, you are right. Some of the solutions from http://greggrant.org/ are of low quality. I will consider double-checking and rewriting most of them.
