If you find any mistakes, please make a comment! Thank you.

Solution to Principles of Mathematical Analysis Chapter 10


Chapter 10 Integration of Differential Forms

Exercise 2

(By analambanomenos) The $\varphi_i(x)\varphi_i(y)$ term has support in the square $2^{-i}<x<2^{-i+1}$, $2^{-i}<y<2^{-i+1}$, and the $\varphi_{i+1}(x)\varphi_i(y)$ term has support in the rectangle $2^{-i-1}<x<2^{-i}$, $2^{-i}<y<2^{-i+1}$, so the support of $f$ is contained in the closed unit square $0\le x\le 1$, $0\le y\le 1$. Each $(x,y)\ne(0,0)$ has a neighborhood small enough that at most three of the terms in the sum are nonzero. Since these terms are continuous, $f$ is continuous away from the origin.

Let $M_i$ be the maximum value of $\varphi_i$, attained at some $x_i\in(2^{-i},2^{-i+1})$. Since $1=\int\varphi_i<M_i2^{-i}$, we have $M_i>2^i$. At $(x_i,x_i)$ only the $i$-th term of the sum is nonzero, so $f(x_i,x_i)=M_i^2>2^{2i}$ diverges to $\infty$ as $i\rightarrow\infty$; hence $f$ is not continuous at $(0,0)$ and is unbounded in every neighborhood of $(0,0)$.

We have
\begin{align*}
\int dy\int f(x,y)\,dx &= \sum_{i=1}^\infty\Bigg(\int\varphi_i(y)\,dy\Bigg)\Bigg(\int\varphi_i(x)\,dx-\int\varphi_{i+1}(x)\,dx\Bigg) \\
&= \sum_{i=1}^\infty 1\cdot 0 = 0 \\
\int dx\int f(x,y)\,dy &= \Bigg(\int\varphi_1(x)\,dx\Bigg)\Bigg(\int\varphi_1(y)\,dy\Bigg)\;+ \\
&\qquad\sum_{i=2}^\infty\Bigg(\int\varphi_i(x)\,dx\Bigg)\Bigg(\int\varphi_i(y)\,dy-\int\varphi_{i-1}(y)\,dy\Bigg) \\
&= 1\cdot 1 + \sum_{i=2}^\infty 1\cdot 0 = 1
\end{align*}
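Equivalently, the value of the second iterated integral can be obtained by telescoping: for fixed $x$ only finitely many terms of the sum are nonzero, so
\[
\int f(x,y)\,dy=\sum_{i=1}^\infty\big(\varphi_i(x)-\varphi_{i+1}(x)\big)\int\varphi_i(y)\,dy=\sum_{i=1}^\infty\big(\varphi_i(x)-\varphi_{i+1}(x)\big)=\varphi_1(x),
\]and integrating over $x$ gives $\int\varphi_1(x)\,dx=1$.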


Exercise 3

(By analambanomenos)

(a) (Much of this solution just repeats the proof of Theorem 10.7.) Assume $1\le m\le n-1$, and make the following induction hypothesis (which evidently holds for $m=1$):

$V_m$ is a neighborhood of $\mathbf 0$, $\mathbf F_m\in\mathscr C'(V_m)$, $\mathbf F_m(\mathbf 0)=\mathbf 0$, $\mathbf F_m'(\mathbf 0)=I$, and for $\mathbf x\in V_m$,
\begin{equation}\label{10.3.1}
P_{m-1}\mathbf F_m(\mathbf x)=P_{m-1}\mathbf x.
\end{equation}By \eqref{10.3.1}, we have
\[
\mathbf F_m(\mathbf x)=P_{m-1}\mathbf x+\sum_{i=m}^n\alpha_i(\mathbf x)\mathbf e_i,
\]where $\alpha_m,\ldots,\alpha_n$ are real $\mathscr C'$-functions in $V_m$. Hence, for $j=m,\ldots,n$,
\[
\mathbf e_j = \mathbf F_m'(\mathbf 0)\mathbf e_j = \sum_{i=m}^n(D_j\alpha_i)(\mathbf 0)\mathbf e_i.
\]Since $\mathbf e_m,\ldots,\mathbf e_n$ are independent, we must have
\begin{equation}\label{10.3.2}
(D_m\alpha_m)(\mathbf 0)=1\qquad(D_{m+1}\alpha_m)(\mathbf 0)=\cdots=(D_n\alpha_m)(\mathbf 0)=0.
\end{equation}Define, for $\mathbf x\in V_m$,
\[
\mathbf G_m(\mathbf x)=\mathbf x+\big(\alpha_m(\mathbf x)-x_m\big)\mathbf e_m.
\]Then $\mathbf G_m\in\mathscr C'(V_m)$, $\mathbf G_m$ is primitive, and $\mathbf G_m'(\mathbf 0)=I$ by \eqref{10.3.2}. The inverse function theorem shows therefore that there is an open set $U_m$, with $\mathbf 0\in U_m\subset V_m$, such that $\mathbf G_m$ is a 1-1 mapping of $U_m$ onto a neighborhood $V_{m+1}$ of $\mathbf 0$, in which $\mathbf G_m^{-1}$ is continuously differentiable, and
\[
{\mathbf G_m^{-1}}'(\mathbf 0)=\mathbf G_m'(\mathbf 0)^{-1}=I.
\]Define $\mathbf F_{m+1}(\mathbf y)$, for $\mathbf y\in V_{m+1}$, by
\[
\mathbf F_{m+1}(\mathbf y)=\mathbf F_m\circ\mathbf G_m^{-1}(\mathbf y).
\]Then $\mathbf F_{m+1}\in\mathscr C'(V_{m+1})$, $\mathbf F_{m+1}(\mathbf 0)=\mathbf 0$, and $\mathbf F_{m+1}'(\mathbf 0)=I$ by the chain rule. Also, for $\mathbf x\in U_m$,
\begin{align*}
P_m\mathbf F_{m+1}\big(\mathbf G_m(\mathbf x)\big) &= P_m\mathbf F_m(\mathbf x) \\
&= P_m\big(P_{m-1}\mathbf x+\alpha_m(\mathbf x)\mathbf e_m+\cdots\big) \\
&= P_{m-1}\mathbf x+\alpha_m(\mathbf x)\mathbf e_m \\
&= P_m\mathbf G_m(\mathbf x)
\end{align*}so that, for $\mathbf y\in V_{m+1}$, $P_m\mathbf F_{m+1}(\mathbf y)=P_m\mathbf y$. Our induction hypothesis holds therefore with $m+1$ in place of $m$.

Note that, for $\mathbf y=\mathbf G_m(\mathbf x)$, we have
\[
\mathbf F_{m+1}\big(\mathbf G_m(\mathbf x)\big)=\mathbf F_m(\mathbf x).
\]If we apply this with $m=1,\ldots,n-1$, we successively obtain
\[
\mathbf F_1 = \mathbf F_2\circ\mathbf G_1 = \mathbf F_3\circ\mathbf G_2\circ\mathbf G_1 = \cdots = \mathbf F_n\circ\mathbf G_{n-1}\circ\cdots\circ\mathbf G_1
\]in some neighborhood of $\mathbf 0$. By \eqref{10.3.1}, $\mathbf F_n$ is primitive, so we can let $\mathbf G_n=\mathbf F_n$.

(b) Let $\mathbf F$ be the mapping $(x,y)\rightarrow(y,x)$ and suppose $\mathbf F=\mathbf G_2\circ\mathbf G_1$ in some neighborhood of the origin, where
\[
\mathbf G_1(x,y)=\big(f(x,y),y\big)\qquad\mathbf G_2(u,v)=\big(u,g(u,v)\big)
\]are primitive mappings. Then we would have
\begin{align*}
(y,x) &= \mathbf G_2\circ\mathbf G_1(x,y) \\
&= \mathbf G_2\big(f(x,y),y\big) \\
&= \big(f(x,y),g\big(f(x,y),y\big)\big)
\end{align*}so that
\[
y=f(x,y)\qquad x=g\big(f(x,y),y\big)=g(y,y)
\]which is impossible, since for fixed $y$ the right side $g(y,y)$ does not depend on $x$. Trying $\mathbf F=\mathbf G_1\circ\mathbf G_2$ leads to a similar contradiction.


Exercise 4

(By analambanomenos) We have
\begin{align*}
\mathbf G_2\circ\mathbf G_1(x,y) &= \mathbf G_2(e^x\cos y-1,y) \\
&= (e^x\cos y-1,e^x\cos y\tan y) \\
&= (e^x\cos y-1,e^x\sin y) \\
&= \mathbf F(x,y)
\end{align*}The derivative matrices are
\begin{align*}
\mathbf G_1'(x,y) &=
\begin{pmatrix}
e^x\cos y & -e^x\sin y \\
0 & 1
\end{pmatrix} \\
\mathbf G_2'(u,v) &=
\begin{pmatrix}
1 & 0 \\
\tan v & (1+u)\sec^2v
\end{pmatrix}
\end{align*}so that $\mathbf G_1'(0,0)=\mathbf G_2'(0,0)=I$, hence $J_{\mathbf G_1}(0,0)=J_{\mathbf G_2}(0,0)=1$. By the chain rule and the multiplicative property of determinants, we also have $J_{\mathbf F}(0,0)=1$.

Let $h(u,v)=\sqrt{e^{2u}-v^2}-1$, which is defined near the origin since $e^{2u}-v^2>0$ there, and let $\mathbf H_1(u,v)=\big(h(u,v),v\big)$ and $\mathbf H_2(x,y)=(x,e^x\sin y)$ be the primitive mappings. Then, for $(x,y)$ near the origin,
\begin{align*}
\mathbf H_1\circ\mathbf H_2(x,y) &= \mathbf H_1(x,e^x\sin y) \\
&= \Big(\sqrt{e^{2x}-e^{2x}\sin^2y}-1,e^x\sin y\Big) \\
&=(e^x\cos y-1,e^x\sin y) \\
&= \mathbf F(x,y)
\end{align*}
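Both factorizations can be spot-checked numerically near the origin. Below is a quick Python sketch (the sample points are arbitrary choices; $h$ is taken as $\sqrt{e^{2u}-v^2}-1$ so that the square root is real near the origin):

```python
import math

def F(x, y):
    # F(x, y) = (e^x cos y - 1, e^x sin y)
    return (math.exp(x) * math.cos(y) - 1, math.exp(x) * math.sin(y))

def G1(x, y):
    # primitive: changes only the first coordinate
    return (math.exp(x) * math.cos(y) - 1, y)

def G2(u, v):
    # primitive: changes only the second coordinate, G2(u, v) = (u, (1+u) tan v)
    return (u, (1 + u) * math.tan(v))

def H2(x, y):
    # primitive: changes only the second coordinate
    return (x, math.exp(x) * math.sin(y))

def H1(u, v):
    # primitive: changes only the first coordinate, h(u, v) = sqrt(e^{2u} - v^2) - 1
    return (math.sqrt(math.exp(2 * u) - v ** 2) - 1, v)

for (x, y) in [(0.1, 0.2), (-0.3, 0.1), (0.05, -0.25)]:
    fx, fy = F(x, y)
    gx, gy = G2(*G1(x, y))
    hx, hy = H1(*H2(x, y))
    assert abs(gx - fx) < 1e-12 and abs(gy - fy) < 1e-12
    assert abs(hx - fx) < 1e-12 and abs(hy - fy) < 1e-12
```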


Exercise 5

(By analambanomenos) We want to show: Suppose $K$ is a compact subset of a metric space $X$, and $\{V_\alpha\}$ is an open cover of $K$. Then there exist functions $\psi_1,\ldots,\psi_s\in\mathscr C(X)$ such that

(a) $0\le\psi_i\le 1$ for $1\le i\le s$;
(b) each $\psi_i$ has its support in some $V_\alpha$, and
(c) $\psi_1(x)+\cdots+\psi_s(x)=1$ for every $x\in K$.

Repeating the proof of Theorem 10.8 in the text and following the hint, associate with each $x\in K$ an index $\alpha(x)$ so that $x\in V_{\alpha(x)}$. Then there are open balls $B(x)$ and $W(x)$ centered at $x$, with
\[
\overline{B(x)}\subset W(x)\subset\overline{W(x)}\subset V_{\alpha(x)}.
\]Since $K$ is compact, there are points $x_1,\ldots,x_s$ in $K$ such that
\[
K\subset B(x_1)\cup\cdots\cup B(x_s).
By Exercise 4.22, there are functions $\varphi_1,\ldots,\varphi_s\in\mathscr C(X)$ such that $\varphi_i(x)=1$ on $\overline{B(x_i)}$, $\varphi_i(x)=0$ outside $W(x_i)$, and $0\le\varphi_i(x)\le 1$ on $X$, namely,
\[
\varphi_i(x)=\frac{\rho_{i1}(x)}{\rho_{i1}(x)+\rho_{i2}(x)}
\]where $\rho_{i1}(x)$ is the distance from $x$ to the complement of $W(x_i)$, a closed set, and $\rho_{i2}(x)$ is the distance from $x$ to $\overline{B(x_i)}$. Letting $\psi_1=\varphi_1$, and
\[
\psi_{i+1}=(1-\varphi_1)\cdots(1-\varphi_i)\varphi_{i+1}
\]for $i=1,\ldots,s-1$, the remainder of the proof follows exactly as in the proof of Theorem 10.8.
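The construction can be sketched in one dimension. In the Python sketch below, the compact set $K=[0,1]$, the three ball centers, and the radii $a=0.3<b=0.5$ are all illustrative choices:

```python
# Cover of K = [0, 1] by balls B_i of radius a inside balls W_i of radius b,
# centered at illustrative points 0, 0.5, 1 (so the B_i already cover K).
centers, a, b = [0.0, 0.5, 1.0], 0.3, 0.5

def phi(i, x):
    # phi_i = rho_{i1} / (rho_{i1} + rho_{i2}), where rho_{i1} is the distance
    # to the complement of W_i and rho_{i2} is the distance to the closure of B_i
    rho1 = max(0.0, b - abs(x - centers[i]))
    rho2 = max(0.0, abs(x - centers[i]) - a)
    return rho1 / (rho1 + rho2)

def psi(i, x):
    # psi_1 = phi_1 and psi_{i+1} = (1 - phi_1) ... (1 - phi_i) phi_{i+1}
    p = phi(i, x)
    for j in range(i):
        p *= 1.0 - phi(j, x)
    return p

# On K the psi_i form a partition of unity
for k in range(101):
    x = k / 100.0
    assert abs(sum(psi(i, x) for i in range(3)) - 1.0) < 1e-12
    assert all(0.0 <= psi(i, x) <= 1.0 for i in range(3))
```

The last loop checks the telescoping identity $\sum\psi_i=1-\prod(1-\varphi_i)$: every $x\in K$ lies in some $B(x_i)$, where $\varphi_i=1$, so the product vanishes.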


Exercise 6

(By analambanomenos) Following the hint, recall that Exercise 8.1 defined an infinitely differentiable function $f$ on $\mathbf R^1$ such that $f(x)=0$ for $x\le0$ and $0\le f(x)<1$ for all $x$. Let $a<b$. Then the function $g_{a,b}(x)=f(x-a)f(b-x)$ is also infinitely differentiable, equals 0 for $x\le a$ and $x\ge b$, and satisfies $0\le g_{a,b}(x)<1$ for all $x$. Since $g_{a,b}$ has compact support, we can define a function
\[
h_{a,b}(x)=\frac{1}{A}\int_x^\infty g_{a,b}(t)\,dt\qquad\hbox{where}\qquad A=\int_{-\infty}^\infty g_{a,b}(t)\,dt
\]which is infinitely differentiable, equals 1 for $x\le a$, equals 0 for $x\ge b$, and $0\le h_{a,b}(x)\le 1$ for all $x$.

Now let $\mathbf x\in\mathbf R^n$, and let $B(\mathbf x)$ and $W(\mathbf x)$ be open balls centered at $\mathbf x$ with radii $a$ and $b$ respectively, where $a<b$. Define the function $r(\mathbf y)$ for $\mathbf y\in\mathbf R^n$ to be the distance between $\mathbf x$ and $\mathbf y$, that is,
\[
r(\mathbf y)=\sqrt{\sum(x_i-y_i)^2}
\]which is infinitely differentiable for $\mathbf y\ne\mathbf x$. Then the function $\varphi=h_{a,b}\circ r$ is infinitely differentiable on $\mathbf R^n$ (near $\mathbf y=\mathbf x$ it is identically 1, since $h_{a,b}=1$ on $[0,a]$), equals 1 on $\overline{B(\mathbf x)}$, equals 0 outside $W(\mathbf x)$, and $0\le\varphi(\mathbf y)\le 1$ for all $\mathbf y$. We can use these functions in the proof of Theorem 10.8 to get infinitely differentiable functions $\psi_i$.
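The cutoff $h_{a,b}$ can be sketched numerically, assuming the standard choice $f(x)=e^{-1/x^2}$ for $x>0$ (and $0$ for $x\le0$) from Exercise 8.1, with the integrals approximated by the midpoint rule:

```python
import math

def f(x):
    # an assumed Exercise 8.1-style function: smooth, 0 for x <= 0, in [0, 1)
    return math.exp(-1.0 / x**2) if x > 0 else 0.0

def g(x, a, b):
    # g_{a,b} = f(x - a) f(b - x): smooth, positive exactly on (a, b)
    return f(x - a) * f(b - x)

def h(x, a, b, n=2000):
    # h_{a,b}(x) = (1/A) * integral from x to infinity of g_{a,b};
    # g vanishes outside [a, b], so midpoint sums over [a, b] suffice
    A = sum(g(a + (k + 0.5) * (b - a) / n, a, b) for k in range(n)) * (b - a) / n
    lo = max(x, a)
    if lo >= b:
        return 0.0
    m = sum(g(lo + (k + 0.5) * (b - lo) / n, a, b) for k in range(n)) * (b - lo) / n
    return m / A

# h is 1 for x <= a, 0 for x >= b, and lies strictly between in the middle
assert h(-1.0, 0.0, 1.0) == 1.0 and h(2.0, 0.0, 1.0) == 0.0
assert 0.0 < h(0.5, 0.0, 1.0) < 1.0
```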


Exercise 7

(By analambanomenos)

(a) First we need to show that $Q^k$ is convex. Let $\mathbf x,\mathbf y\in Q^k$, so that the components satisfy
\[
x_i\ge 0,\quad y_i\ge 0,\quad\sum x_i\le 1,\quad\sum y_i\le 1.
\]Let $0\le\lambda\le 1$, and let $\mathbf z=\lambda\mathbf x+(1-\lambda)\mathbf y$. Then
\[
z_i=\lambda x_i+(1-\lambda)y_i\qquad\sum z_i=\lambda\sum x_i+(1-\lambda)\sum y_i
\]so that $z_i$ lies between $x_i$ and $y_i$, and $\sum z_i$ lies between $\sum x_i$ and $\sum y_i$. Hence $\mathbf z\in Q^k$.

Let $C$ be a convex subset of $\mathbf R^k$ containing $\mathbf 0,\mathbf e_1,\ldots,\mathbf e_k$; we need to show that $Q^k\subset C$. We can consider $Q^i\subset Q^j$ for $i<j$ by letting the components with index greater than $i$ be 0. We show by induction that $Q^i\subset C$ for $i=1,\ldots,k$. Let $\mathbf x\in Q^1$. Then $\mathbf x=x_1\mathbf e_1+(1-x_1)\mathbf 0$ with $0\le x_1\le 1$, so that $\mathbf x\in C$. Now suppose that $Q^{i-1}\subset C$ and let $\mathbf x\in Q^i$. If $x_i=1$ then $\mathbf x=\mathbf e_i\in C$, so assume $x_i<1$. Then $x_1+\cdots+x_i\le 1$ implies
\[
\frac{x_1+\cdots+x_{i-1}}{1-x_i}\le 1
\]so that
\[
\mathbf x'=(1-x_i)^{-1}(x_1,\ldots,x_{i-1},0,\ldots,0)\in Q^{i-1}\subset C.
\]Hence $\mathbf x=(1-x_i)\mathbf x'+x_i\mathbf e_i\in C$ since $0\le x_i\le 1$, which shows that $Q^i\subset C$.
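The convexity of $Q^k$ can be spot-checked numerically. Below is a Python sketch for $k=3$; rejection sampling is just an arbitrary way to draw points of the simplex:

```python
import random

random.seed(0)

def sample_Q3():
    # rejection-sample a point of Q^3 = {x : x_i >= 0, x_1 + x_2 + x_3 <= 1}
    while True:
        x = [random.random() for _ in range(3)]
        if sum(x) <= 1.0:
            return x

for _ in range(1000):
    x, y = sample_Q3(), sample_Q3()
    lam = random.random()
    z = [lam * a + (1.0 - lam) * b for a, b in zip(x, y)]
    # convex combinations of points of the simplex stay in the simplex
    assert all(c >= 0.0 for c in z) and sum(z) <= 1.0 + 1e-12
```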

(b) Let $X,Y$ be vector spaces and let $\mathbf f=\mathbf f(\mathbf 0)+A$, for some $A\in L(X,Y)$, be an affine mapping from $X$ to $Y$. Let $C$ be a convex subset of $X$, and let $\mathbf y_1=\mathbf f(\mathbf x_1)$, $\mathbf y_2=\mathbf f(\mathbf x_2)$ be elements of $\mathbf f(C)$ for some $\mathbf x_1\in C$ and $\mathbf x_2\in C$. Then for $0\le\lambda\le 1$, we have $$\lambda\mathbf x_1+(1-\lambda)\mathbf x_2\in C,$$ so that
\[
\lambda\mathbf y_1+(1-\lambda)\mathbf y_2 = \mathbf f(\mathbf 0)+\lambda A(\mathbf x_1)+(1-\lambda)A(\mathbf x_2) = \mathbf f(\mathbf 0)+A\big(\lambda\mathbf x_1+(1-\lambda)\mathbf x_2\big)\in\mathbf f(C).
\]Hence $\mathbf f(C)$ is convex.


Exercise 8

(By analambanomenos) Since $(3,2)=(1,1)+(2,1)$ and $(2,4)=(1,1)+(1,3)$, the linear part of the affine map is $A(u,v)=(2u+v,u+3v)$, so
\begin{align*}
T(u,v) &= (1,1)+A(u,v)=(2u+v+1,u+3v+1) \\
J_T &=
\begin{vmatrix}
2 & 1 \\
1 & 3
\end{vmatrix}
=5 \\
\int_He^{x-y}\,dx\,dy &= \int_0^1\int_0^1e^{(2u+v+1)-(u+3v+1)}J_T\,du\,dv \\
&= 5\bigg(\int_0^1e^u\,du\bigg)\bigg(\int_0^1e^{-2v}\,dv\bigg) \\
&= \textstyle\frac{5}{2}(e-e^{-1}+e^{-2}-1)
\end{align*}
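As a sanity check, the transformed integral can be evaluated numerically; below is a midpoint-rule sketch in Python (the grid size $n=400$ is an arbitrary choice):

```python
import math

# Midpoint-rule check of the change of variables: on the unit square,
# e^{x-y} pulled back by T(u, v) = (2u + v + 1, u + 3v + 1) is e^{u - 2v},
# and the Jacobian factor is |J_T| = 5.
n = 400
step = 1.0 / n
total = sum(5.0 * math.exp((i + 0.5) * step - 2.0 * (j + 0.5) * step)
            for i in range(n) for j in range(n)) * step * step
closed_form = 2.5 * (math.e - math.exp(-1) + math.exp(-2) - 1.0)
assert abs(total - closed_form) < 1e-4
```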
