Exercise 2.4.1
Using Theorem 7, page 52, if we calculate the inverse of
$$P=\left[\begin{array}{cccc}1&0&1&0\\1&0&0&0\\0&1&0&0\\0&1&4&2\end{array}\right].$$then the columns of $P^{-1}$ will give the coefficients to write the standard basis vectors in terms of the $\alpha_i$’s. We do this by row-reducing the augmented matrix
$$\left[\begin{array}{cccc|cccc}1&0&1&0&1&0&0&0\\1&0&0&0&0&1&0&0\\0&1&0&0&0&0&1&0\\0&1&4&2&0&0&0&1\end{array}\right].$$ The left side must reduce to the identity while the right side transforms into the inverse of $P$. Row reduction gives
$$\left[\begin{array}{cccc|cccc}1&0&1&0&1&0&0&0\\1&0&0&0&0&1&0&0\\0&1&0&0&0&0&1&0\\0&1&4&2&0&0&0&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{cccc|cccc}1&0&1&0&1&0&0&0\\0&0&-1&0&-1&1&0&0\\0&1&0&0&0&0&1&0\\0&1&4&2&0&0&0&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{cccc|cccc}1&0&1&0&1&0&0&0\\0&1&0&0&0&0&1&0\\0&0&-1&0&-1&1&0&0\\0&1&4&2&0&0&0&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{cccc|cccc}1&0&1&0&1&0&0&0\\0&1&0&0&0&0&1&0\\0&0&-1&0&-1&1&0&0\\0&0&4&2&0&0&-1&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{cccc|cccc}1&0&1&0&1&0&0&0\\0&1&0&0&0&0&1&0\\0&0&1&0&1&-1&0&0\\0&0&4&2&0&0&-1&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{cccc|cccc}1&0&0&0&0&1&0&0\\0&1&0&0&0&0&1&0\\0&0&1&0&1&-1&0&0\\0&0&0&2&-4&4&-1&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{cccc|cccc}1&0&0&0&0&1&0&0\\0&1&0&0&0&0&1&0\\0&0&1&0&1&-1&0&0\\0&0&0&1&-2&2&-1/2&1/2\end{array}\right].$$Since $P$ row-reduces to the identity, $P$ is invertible, so $\{\alpha_1,\dots,\alpha_4\}$ is a basis. Call this basis $\beta$. Reading off the columns of $P^{-1}$: $(1,0,0,0)= \alpha_3-2\alpha_4$, $(0,1,0,0)=\alpha_1-\alpha_3+2\alpha_4$, $(0,0,1,0)=\alpha_2-\frac12\alpha_4$ and $(0,0,0,1)=\frac12\alpha_4$.
Thus $[(1,0,0,0)]_\beta=(0,0,1,-2)$, $[(0,1,0,0)]_\beta=(1,0,-1,2)$, $[(0,0,1,0)]_\beta=(0,1,0,-1/2)$ and $[(0,0,0,1)]_\beta=(0,0,0,1/2)$.
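As a quick check, the $\alpha_i$'s are the columns of $P$, so $\alpha_1=(1,1,0,0)$, $\alpha_2=(0,0,1,1)$, $\alpha_3=(1,0,0,4)$ and $\alpha_4=(0,0,0,2)$, and indeed, for example,
$$\alpha_3-2\alpha_4=(1,0,0,4)-(0,0,0,4)=(1,0,0,0),\qquad \alpha_2-\tfrac12\alpha_4=(0,0,1,1)-(0,0,0,1)=(0,0,1,0).$$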
Exercise 2.4.2
Using Theorem 7, page 52, the answer is $P^{-1}\left[\begin{array}{c}1\\0\\1\end{array}\right]$ where
$$P=\left[\begin{array}{ccc}2i&2&0\\1&-1&1+i\\0&0&1-i\end{array}\right].$$We find $P^{-1}$ by row-reducing the augmented matrix
$$\left[\begin{array}{ccc|ccc}2i&2&0&1&0&0\\1&-1&1+i&0&1&0\\0&0&1-i&0&0&1\end{array}\right].$$The right side will transform into $P^{-1}$. Row reducing:
$$\left[\begin{array}{ccc|ccc}2i&2&0&1&0&0\\1&-1&1+i&0&1&0\\0&0&1-i&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-1&1+i&0&1&0\\2i&2&0&1&0&0\\0&0&1-i&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-1&1+i&0&1&0\\0&2+2i&2-2i&1&-2i&0\\0&0&1-i&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&-1&1+i&0&1&0\\0&1&-i&\frac{1-i}{4}&\frac{-1-i}{2}&0\\0&0&1-i&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&1&\frac{1-i}{4}&\frac{1-i}{2}&0\\0\rule{0mm}{4mm}&1&-i&\frac{1-i}{4}&\frac{-1-i}{2}&0\\0\rule{0mm}{4mm}&0&1-i&0&0&1\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&1&\frac{1-i}{4}&\frac{1-i}{2}&0\\0\rule{0mm}{4mm}&1&-i&\frac{1-i}{4}&\frac{-1-i}{2}&0\\0\rule{0mm}{4mm}&0&1&0&0&\frac{1+i}{2}\end{array}\right]$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&0&\frac{1-i}{4}&\frac{1-i}{2}&\frac{-1-i}{2}\\0\rule{0mm}{4mm}&1&0&\frac{1-i}{4}&\frac{-1-i}{2}&\frac{-1+i}{2}\\0\rule{0mm}{4mm}&0&1&0&0&\frac{1+i}{2}\end{array}\right]$$Therefore$$P^{-1}\left[\begin{array}{c}1\\0\\1\end{array}\right]=\left[\begin{array}{ccc}\frac{1-i}{4}&\frac{1-i}{2}&\frac{-1-i}{2}\\\rule{0mm}{4mm}\frac{1-i}{4}&\frac{-1-i}{2}&\frac{-1+i}{2}\\0&0&\rule{0mm}{4mm}\frac{1+i}{2}\end{array}\right]\left[\begin{array}{c}1\\0\rule{0mm}{4mm}\\1\rule{0mm}{4mm}\end{array}\right]=\left[\begin{array}{c}\frac{-1-3i}{4}\\ \frac{-1+i}{4}\rule{0mm}{4mm}\\\rule{0mm}{4mm} \frac{1+i}{2}\end{array}\right]$$Thus $(1,0,1)=\frac{-1-3i}{4}(2i,1,0)+\frac{-1+i}{4}(2,-1,0)+\frac{1+i}{2}(0,1+i,1-i)$.
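As a check on the arithmetic, the first coordinate of the claimed combination is
$$\frac{-1-3i}{4}(2i)+\frac{-1+i}{4}(2)=\frac{6-2i}{4}+\frac{-2+2i}{4}=1$$and the third is $\frac{1+i}{2}(1-i)=\frac{2}{2}=1$, matching $(1,0,1)$; the middle coordinate works out to $0$ similarly.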
Exercise 2.4.3
Using Theorem 7, page 52, the answer is $P^{-1}\left[\begin{array}{c}a\\b\\c\end{array}\right]$ where
$$P=\left[\begin{array}{ccc}1&1&1\\0&1&0\\-1&1&0\end{array}\right].$$We find $P^{-1}$ by row-reducing the augmented matrix
$$\left[\begin{array}{ccc|ccc}1&1&1&1&0&0\\0&1&0&0&1&0\\-1&1&0&0&0&1\end{array}\right].$$The right side will transform into $P^{-1}$. Row reducing:
$$\left[\begin{array}{ccc|ccc}1&1&1&1&0&0\\0&1&0&0&1&0\\-1&1&0&0&0&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&1&1&1&0&0\\0&1&0&0&1&0\\0&2&1&1&0&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&1&1&-1&0\\0&1&0&0&1&0\\0&0&1&1&-2&1\end{array}\right].$$$$\rightarrow\left[\begin{array}{ccc|ccc}1&0&0&0&1&-1\\0&1&0&0&1&0\\0&0&1&1&-2&1\end{array}\right].$$Therefore,
$$P^{-1}\left[\begin{array}{c}a\\b\\c\end{array}\right]=\left[\begin{array}{ccc}0&1&-1\\0&1&0\\1&-2&1\end{array}\right]\left[\begin{array}{c}a\\b\\c\end{array}\right]=\left[\begin{array}{c}b-c\\b\\a-2b+c\end{array}\right]$$Thus the answer is$$[(a,b,c)]_{\mathcal B}=(b-c,\ b,\ a-2b+c).$$
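As a check, with $\alpha_1=(1,0,-1)$, $\alpha_2=(1,1,1)$ and $\alpha_3=(1,0,0)$ (the columns of $P$),
$$(b-c)\alpha_1+b\alpha_2+(a-2b+c)\alpha_3=\big((b-c)+b+(a-2b+c),\ b,\ -(b-c)+b\big)=(a,b,c).$$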
Exercise 2.4.4
(a) To show $\alpha_1$ and $\alpha_2$ form a basis of the space they generate we must show they are linearly independent; in other words, that $a\alpha_1+b\alpha_2=0$ $\Rightarrow$ $a=b=0$. Since both vectors are non-zero, it suffices to show that neither is a scalar multiple of the other, and for that it is enough to rule out $\alpha_1=c\alpha_2$. If $\alpha_1=c\alpha_2$ then comparing second coordinates gives $c=0$, which would imply $\alpha_1=(0,0,0)$, which it is not. So $\{\alpha_1,\alpha_2\}$ is a basis for the space they generate.
(b) Since the first coordinate of both $\beta_1$ and $\beta_2$ is one, if one were a scalar multiple of the other the scalar would have to be $1$, forcing $\beta_1=\beta_2$, which is false. So neither is a multiple of the other, and they generate a two-dimensional subspace of $\mathbb C^3$. If we show $\beta_1$ and $\beta_2$ can be written as linear combinations of $\alpha_1$ and $\alpha_2$, then since the spaces generated by the two pairs both have dimension two, by Corollary 1, page 46, they must be equal. To show $\beta_1$ and $\beta_2$ can be written as linear combinations of $\alpha_1$ and $\alpha_2$ we row-reduce the augmented matrix
$$\left[\begin{array}{cc|cc}1&1+i&1&1\\0&1&1&i\\i&-1&0&1+i\end{array}\right].$$Row reduction follows:
$$\left[\begin{array}{cc|cc}1&1+i&1&1\\0&1&1&i\\i&-1&0&1+i\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&1+i&1&1\\0&1&1&i\\0&-i&-i&1\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&0&-i&2-i\\0&1&1&i\\0&0&0&0\end{array}\right]$$Thus $\beta_1=-i\alpha_1+\alpha_2$ and $\beta_2=(2-i)\alpha_1+i\alpha_2$.
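As a check, reading the columns of the original augmented matrix as $\alpha_1=(1,0,i)$, $\alpha_2=(1+i,1,-1)$, $\beta_1=(1,1,0)$, $\beta_2=(1,i,1+i)$:
$$-i\alpha_1+\alpha_2=(-i,0,1)+(1+i,1,-1)=(1,1,0)=\beta_1.$$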
(c) We have to write the $\alpha_i$’s in terms of the $\beta_i$’s, the reverse of what we did in part (b). In this case we row-reduce the augmented matrix
$$\left[\begin{array}{cc|cc}1&1&1&1+i\\1&i&0&1\\0&1+i&i&-1\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&1&1&1+i\\0&-1+i&-1&-i\\0&1+i&i&-1\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&1&1&1+i\\0&-1+i&-1&-i\\0&0&0&0\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|cc}1&1&1&1+i\\0&1&\frac{1+i}{2}&\frac{-1+i}{2}\\0&0&0&0\rule{0mm}{4mm}\end{array}\right]
\rightarrow\left[\begin{array}{cc|cc}1&0&\frac{1-i}{2}&\frac{3+i}{2}\\0\rule{0mm}{4mm}&1&\frac{1+i}{2}&\frac{-1+i}{2}\\0\rule{0mm}{4mm}&0&0&0\end{array}\right]$$Thus $\alpha_1=\frac{1-i}{2}\beta_1+\frac{1+i}{2}\beta_2$ and $\alpha_2=\frac{3+i}{2}\beta_1+\frac{-1+i}{2}\beta_2$. So finally, if $\mathcal B$ is the basis $\{\beta_1,\beta_2\}$ then
$$[\alpha_1]_{\mathcal B}=\left(\frac{1-i}{2},\frac{1+i}{2}\right)$$$$[\alpha_2]_{\mathcal B}=\left(\frac{3+i}{2},\frac{-1+i}{2}\right).$$
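As a check,
$$\frac{1-i}{2}\beta_1+\frac{1+i}{2}\beta_2=\frac{1-i}{2}(1,1,0)+\frac{1+i}{2}(1,i,1+i)=\left(1,\ \frac{(1-i)+(i-1)}{2},\ \frac{(1+i)^2}{2}\right)=(1,0,i)=\alpha_1.$$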
Exercise 2.4.5
It suffices by Corollary 1, page 46, to show $\alpha$ and $\beta$ are linearly independent, because then they generate a subspace of $\mathbb R^2$ of dimension two, which must therefore be all of $\mathbb R^2$. The second condition on $x_1,x_2,y_1,y_2$ implies that neither $\alpha$ nor $\beta$ is the zero vector. Two non-zero vectors are linearly dependent if and only if one is a (necessarily non-zero) scalar multiple of the other. So suppose, WLOG, that $\beta=c\alpha$ for some $c\in\mathbb R$; since neither vector is the zero vector, $c\not=0$. Then $y_1=cx_1$ and $y_2=cx_2$. Thus the conditions on $x_1,x_2,y_1,y_2$ imply
$$0=x_1y_1+x_2y_2=cx_1^2+cx_2^2=c(x_1^2+x_2^2)=c\cdot 1=c.$$Thus $c=0$, a contradiction.
It remains to find the coordinates of an arbitrary vector $(a,b)$ in the ordered basis $\{\alpha,\beta\}$, which we do by row-reducing the augmented matrix
$$\left[\begin{array}{cc|c}x_1&y_1&a\\x_2&y_2&b\end{array}\right].$$Since $x_1^2+x_2^2=1$, it cannot be that both $x_1=x_2=0$, so assume WLOG that $x_1\not=0$. Likewise it cannot be that both $y_1=y_2=0$; but since the roles of $y_1$ and $y_2$ in what follows are not symmetric, we must consider both cases separately. Assume first that $y_1\not=0$. Then note that $x_1y_1+x_2y_2=0$ implies
\begin{equation}
\frac{x_2y_2}{x_1y_1}=-1
\label{2.5.5}
\end{equation} Thus if $x_1y_2-x_2y_1=0$ then $\frac{x_2}{x_1}=\frac{y_2}{y_1}$ from which (\ref{2.5.5}) implies $\left(\frac{x_2}{x_1}\right)^2=-1$, a contradiction. Thus we can conclude that $x_1y_2-x_2y_1\not=0$. We use this in the following row reduction to be sure we are not dividing by zero.
$$\left[\begin{array}{cc|c}x_1&y_1&a\\x_2&y_2&b\end{array}\right]
\rightarrow\left[\begin{array}{cc|c}1&y_1/x_1&a/x_1\\x_2&y_2&b\end{array}\right]
\rightarrow\left[\begin{array}{cc|c}1&y_1/x_1&a/x_1\\0&y_2-\frac{x_2y_1}{x_1}&b-\frac{x_2a}{x_1}\end{array}\right]
=\left[\begin{array}{cc|c}1&y_1/x_1&a/x_1\\0&\frac{x_1y_2-x_2y_1}{x_1}&\frac{bx_1-ax_2}{x_1}\end{array}\right]$$$$\rightarrow\left[\begin{array}{cc|c}1&y_1/x_1&a/x_1\\0&1&\frac{bx_1-ax_2}{x_1y_2-x_2y_1}\end{array}\right]
\rightarrow\left[\begin{array}{cc|c}1&0&\frac{ay_2-by_1}{x_1y_2-x_2y_1}\\0&1&\frac{bx_1-ax_2}{x_1y_2-x_2y_1}\rule{0mm}{5mm}\end{array}\right]$$Now if we substitute $y_1=-x_2y_2/x_1$ into the numerator and denominator of $\frac{ay_2-by_1}{x_1y_2-x_2y_1}$ and use $x_1^2+x_2^2=1$, it simplifies to $ax_1+bx_2$. Similarly $\frac{bx_1-ax_2}{x_1y_2-x_2y_1}$ simplifies to $ay_1+by_2$; here one also needs $y_1^2+y_2^2=1$, which together with $y_1=-x_2y_2/x_1$ and $x_1^2+x_2^2=1$ gives $y_2^2=x_1^2$. So we get
$$\left[\begin{array}{cc|c}1&0&ax_1+bx_2\\0&1&ay_1+by_2\end{array}\right].$$Now assume instead that $y_2\not=0$ (we continue to assume WLOG that $x_1\not=0$). In this case $x_1y_1+x_2y_2=0$ gives
\begin{equation}
\frac{y_1}{y_2}=-\frac{x_2}{x_1}
\label{2323r2}
\end{equation}So if $x_1y_2-x_2y_1=0$ then $\frac{x_2y_1}{x_1y_2}=1$. But then (\ref{2323r2}) implies $\left(\frac{x_2}{x_1}\right)^2=-1$, a contradiction. So in this case too we conclude $x_1y_2-x_2y_1\not=0$, and the same row-reduction as before applies. Thus in all cases
$$(ax_1+bx_2)\alpha+(ay_1+by_2)\beta=(a,b)$$or equivalently
$$(ax_1+bx_2)(x_1,x_2)+(ay_1+by_2)(y_1,y_2)=(a,b).$$
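For a concrete instance, take $\alpha=\left(\frac35,\frac45\right)$ and $\beta=\left(-\frac45,\frac35\right)$, which satisfy both conditions. The formula gives $[(a,b)]_{\{\alpha,\beta\}}=\left(\frac{3a+4b}{5},\frac{-4a+3b}{5}\right)$, and indeed
$$\frac{3a+4b}{5}\left(\frac35,\frac45\right)+\frac{-4a+3b}{5}\left(-\frac45,\frac35\right)=\left(\frac{9a+12b+16a-12b}{25},\ \frac{12a+16b-12a+9b}{25}\right)=(a,b).$$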
Exercise 2.4.6
Suppose $a+be^{ix}+ce^{-ix}=0$ as functions of $x\in\mathbb R$; in other words, $a+be^{ix}+ce^{-ix}=0$ for all $x\in\mathbb R$. Let $y=e^{ix}$. Then $y\not=0$ and $a+by+\frac{c}{y}=0$, which implies $ay+by^2+c=0$. If $a,b,c$ are not all zero, this is a non-zero polynomial of degree at most two in $y$, so it can vanish for at most two values of $y$. But $e^{ix}$ takes infinitely many different values as $x$ varies in $\mathbb R$, a contradiction. Hence $a=b=c=0$ and the three functions are linearly independent.
We know that $e^{ix}=\cos(x)+i\sin(x)$. Thus $e^{-ix}=\cos(x)-i\sin(x)$. Adding these gives $2\cos(x)=e^{ix}+e^{-ix}$. Thus $\cos(x)=\frac12e^{ix}+\frac12e^{-ix}$. Subtracting instead of adding the equations gives $e^{ix}-e^{-ix}=2i\sin(x)$. Thus $\sin(x)=\frac{1}{2i}e^{ix}-\frac{1}{2i}e^{-ix}$ or equivalently $\sin(x)=-\frac{i}{2}e^{ix}+\frac{i}{2}e^{-ix}$. Thus the requested matrix is
$$P=\left[\begin{array}{ccc}1 & 0 & 0\\ 0 & 1/2 & -i/2\\ 0 & 1/2 & i/2\end{array}\right].$$
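One can check that this $P$ is invertible, with
$$P^{-1}=\left[\begin{array}{ccc}1&0&0\\0&1&1\\0&i&-i\end{array}\right],$$whose columns are just the coordinates of $1$, $e^{ix}$ and $e^{-ix}$ in $\{1,\cos(x),\sin(x)\}$, read off from Euler's formula: $e^{ix}=\cos(x)+i\sin(x)$ and $e^{-ix}=\cos(x)-i\sin(x)$.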
Exercise 2.4.7
We know $V$ has dimension three (it follows from Example 16, page 43, that $\{1,x,x^2\}$ is a basis). Thus by Corollary 2 (b), page 45, it suffices to show that $g_1,g_2,g_3$ span $V$. We need to solve for $u,v,w$ in the equation
$$c_2x^2+c_1x+c_0=u+v(x+t)+w(x+t)^2.$$Expanding the right side and collecting powers of $x$,
$$c_2x^2+c_1x+c_0=wx^2+(v+2wt)x+(u+vt+wt^2).$$It follows that
$$w=c_2,\quad v=c_1-2c_2t,\quad u=c_0-c_1t+c_2t^2.$$Thus $g_1,g_2,g_3$ do span $V$, and the coordinates of $f(x)=c_2x^2+c_1x+c_0$ in the ordered basis $\{g_1,g_2,g_3\}$ are $$(u,\ v,\ w)=(c_0-c_1t+c_2t^2,\ \ c_1-2c_2t,\ \ c_2).$$
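For example, with $t=1$ and $f(x)=x$ (so $c_0=c_2=0$, $c_1=1$) the coordinates are $(-1,1,0)$, and indeed $-1\cdot 1+1\cdot(x+1)+0\cdot(x+1)^2=x$.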
From http://greggrant.org