### Chapter 7 Sequences and Series of Functions

- Part A: Exercise 1 – Exercise 12
- Part B: Exercise 13 – Exercise 17
- Part C: Exercise 18 – Exercise 26

#### Exercise 18

(By analambanomenos) This follows from Theorem 7.25 if we show that $\{F_n\}$ is pointwise-bounded and equicontinuous on $[a,b]$. Let $|f_n|\le K$ on $[a,b]$. Then for all $n$ $$\big|F_n(x)\big|\le\int_a^x\big|f_n(t)\big|\,dt\le K(x-a),$$ so $\{F_n\}$ is pointwise-bounded on $[a,b]$. And if $\varepsilon>0$ and $x<y$ are points of $[a,b]$ such that $y-x\le\varepsilon/K$, then $$\big|F_n(x)-F_n(y)\big|\le\int_x^y\big|f_n(t)\big|\,dt\le K(y-x)\le\varepsilon,$$ showing that $\{F_n\}$ is equicontinuous on $[a,b]$.
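As a quick numerical sanity check (not part of the proof), the two estimates above can be observed for the hypothetical sample family $f_n(t)=\sin(nt)$, which satisfies $|f_n|\le K=1$ on $[0,1]$: the primitives $F_n$ obey $|F_n(x)-F_n(y)|\le K|x-y|$, which is exactly the equicontinuity bound.

```python
import numpy as np

# Illustration only: f_n(t) = sin(n t) is a hypothetical choice with |f_n| <= K = 1.
# We build F_n(x) = \int_0^x f_n(t) dt by the trapezoid rule and verify the
# Lipschitz bound |F_n(x) - F_n(y)| <= K |x - y| on a sample of pairs (x, y).
a, b, K = 0.0, 1.0, 1.0
t = np.linspace(a, b, 2001)
dt = t[1] - t[0]

worst = 0.0
for n in range(1, 21):
    fn = np.sin(n * t)                                            # |f_n| <= 1
    Fn = np.concatenate(([0.0], np.cumsum((fn[1:] + fn[:-1]) / 2) * dt))
    for i in range(0, 2001, 200):                                 # sample y = t[i]
        excess = np.abs(Fn - Fn[i]) - K * np.abs(t - t[i])
        worst = max(worst, float(excess.max()))
assert worst <= 1e-9  # the bound holds up to floating-point rounding
```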

#### Exercise 19

(By analambanomenos) Suppose $S$ is uniformly closed, pointwise bounded, and equicontinuous, and let $\{f_n\}$ be a sequence of elements of $S$. By Theorem 7.25, $\{f_n\}$ has a subsequence converging uniformly to some $f\in\mathscr C(K)$, and since $S$ is uniformly closed, we have $f\in S$. Hence by Exercise 2.26, $S$ is compact.

Conversely, suppose that $S$ is compact. By Theorem 2.34 it is closed with respect to the supremum norm, that is, $S$ is uniformly closed. Let $x\in K$ and define the complex-valued function $F_x$ on $\mathscr C(K)$ by $F_x(f)=f(x)$. $F_x$ is continuous, since if $g\rightarrow f$ then $$\big|F_x(f)-F_x(g)\big|=\big|f(x)-g(x)\big|\le||f-g||\rightarrow 0.$$ Hence $F_x(S)$ is bounded by Theorem 4.15, that is, $S$ is pointwise bounded. Since $S$ is compact, by Theorem 2.37 every infinite subset of $S$ has a limit point in $S$; in particular, every sequence in $S$ has a subsequence which converges uniformly on $K$. If $S$ were not equicontinuous, then there would exist an $\varepsilon>0$ such that for every positive integer $n$ there exist $x_n,y_n\in K$ and $f_n\in S$ such that $$d(x_n,y_n)<1/n\hbox{ but }\big|f_n(x_n)-f_n(y_n)\big|\ge\varepsilon.$$ Hence $\{f_n\}$ would have no equicontinuous subsequence, so by Theorem 7.24 it would have no subsequence which converges uniformly on $K$, which, as we have seen, contradicts the compactness of $S$.

#### Exercise 20

(By analambanomenos) Following the hint, note that by the linearity of Riemann integration, $\int_0^1f(x)P(x)\,dx=0$ for all polynomials $P$. By Theorem 7.26 there is a sequence of polynomials $P_n$ which converges to $f$ uniformly on $[0,1]$. Since $[0,1]$ is compact, $f$ and each $P_n$ are bounded on $[0,1]$, so $f(x)P_n(x)$ converges uniformly to $f^2(x)$ by Exercise 7.2. By Theorem 7.16, $0=\int_0^1f(x)P_n(x)\,dx$ must converge to $\int_0^1f^2(x)\,dx$, so this integral equals 0. Hence $f(x)=0$ on $[0,1]$ by Exercise 6.2.
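The key limit $\int_0^1 f P_n\,dx\rightarrow\int_0^1 f^2\,dx$ can be illustrated numerically. The sketch below (an illustration only, with the hypothetical sample choice $f(x)=x(1-x)$) uses Bernstein polynomials $B_n(f)(x)=\sum_k f(k/n)\binom nk x^k(1-x)^{n-k}$, one concrete realization of the uniform approximation that Theorem 7.26 guarantees.

```python
import numpy as np
from math import comb

# Illustration only: f(x) = x(1-x) is a hypothetical sample function.
# Bernstein polynomials B_n(f) converge uniformly to f on [0, 1], so the
# integrals of f * B_n(f) approach the integral of f^2, as in the argument above.
f = lambda x: x * (1 - x)
x = np.linspace(0.0, 1.0, 4001)

def trap(y):  # composite trapezoid rule on the fixed grid x
    return float(np.sum((y[1:] + y[:-1]) / 2) * (x[1] - x[0]))

def bernstein(g, n, xs):
    # B_n(g)(x) = sum_k g(k/n) C(n,k) x^k (1-x)^(n-k)
    return sum(g(k / n) * comb(n, k) * xs**k * (1 - xs)**(n - k)
               for k in range(n + 1))

I_f2 = trap(f(x)**2)
errs = [abs(trap(f(x) * bernstein(f, n, x)) - I_f2) for n in (5, 20, 80)]
assert errs[0] > errs[1] > errs[2]   # the gap shrinks as n grows
assert errs[2] < 1e-3                # ...and is already tiny at n = 80
```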

#### Exercise 21

(By analambanomenos) Since $e^{in\theta}e^{im\theta}=e^{i(m+n)\theta}$, it is clear that $\mathscr A$ is an algebra, and since it contains the identity function $f(e^{i\theta})=e^{i\theta}$, $\mathscr A$ separates points on $K$ and vanishes at no point of $K$. Following the hint, for $f=\sum_0^Nc_ne^{in\theta}\in\mathscr A$ we have

\begin{align*}
\int_0^{2\pi}f(e^{i\theta})e^{i\theta}\,d\theta &= \sum_{n=0}^Nc_n\int_0^{2\pi}e^{i(n+1)\theta}\,d\theta \\
&= \sum_{n=0}^Nc_n\int_0^{2\pi}\cos\bigl((n+1)\theta\bigr)\,d\theta + \sum_{n=0}^Nic_n\int_0^{2\pi}\sin\bigl((n+1)\theta\bigr)\,d\theta \\
&= 0.
\end{align*}Hence if $g$ is in the uniform closure of $\mathscr A$, we have by the same reasoning used in Exercise 7.20 that $\int_0^{2\pi}g(e^{i\theta})e^{i\theta}\,d\theta=0$. However, $e^{-i\theta}$ is a continuous function on $K$ such that $$\int_0^{2\pi}e^{-i\theta}e^{i\theta}\,d\theta=2\pi,$$ so that $e^{-i\theta}$ is not in the uniform closure of $\mathscr A$.
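The two integral evaluations that drive the argument can be checked numerically. A minimal sketch (illustration only, using the trapezoid rule, which is very accurate for periodic integrands over a full period):

```python
import numpy as np

# Check the two facts used above: \int_0^{2pi} e^{i(n+1)\theta} d\theta = 0
# for n >= 0, while the pairing of e^{-i\theta} with e^{i\theta} gives 2*pi.
theta = np.linspace(0.0, 2 * np.pi, 100001)

def integrate(y):  # composite trapezoid rule on [0, 2*pi]
    return complex(np.sum((y[1:] + y[:-1]) / 2) * (theta[1] - theta[0]))

vals = [integrate(np.exp(1j * (n + 1) * theta)) for n in range(5)]
pairing = integrate(np.exp(-1j * theta) * np.exp(1j * theta))

assert all(abs(v) < 1e-8 for v in vals)       # each oscillatory integral vanishes
assert abs(pairing - 2 * np.pi) < 1e-8        # the pairing equals 2*pi
```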

#### Exercise 22

(By analambanomenos) Recall the notation of Exercise 6.11: $||u||_2=\bigl(\int_a^b|u|^2\,d\alpha\bigr)^{1/2}$. We want to show that, if $\varepsilon>0$, there is a polynomial $P$ such that $||f-P||_2<\varepsilon$. By Exercise 6.12, there is a continuous function $g$ on $[a,b]$ such that $||f-g||_2<\varepsilon/2$. By Theorem 7.26, there is a polynomial $P$ such that $\sup\big|g(x)-P(x)\big|<\varepsilon/\bigl(2\sqrt{\alpha(b)-\alpha(a)}\bigr)$. Then $$||g-P||_2^2=\int_a^b\big|g(x)-P(x)\big|^2\,d\alpha<\frac{\varepsilon^2}{4\bigl(\alpha(b)-\alpha(a)\bigr)}\bigl(\alpha(b)-\alpha(a)\bigr)=\frac{\varepsilon^2}{4}.$$ Hence, by the triangle inequality of Exercise 6.11, $||f-P||_2\le||f-g||_2+||g-P||_2<\varepsilon$.
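The two-step approximation chain can be sketched numerically for $\alpha(x)=x$. In the illustration below every concrete choice is hypothetical: $f$ is a step function (Riemann integrable but not continuous), $g$ is a continuous piecewise-linear smoothing, and a least-squares polynomial fit stands in for the polynomial of Theorem 7.26. The final assertion is the triangle inequality used in the last step.

```python
import numpy as np

# Hypothetical choices for the demo: f = step function, g = linear smoothing
# of f over [0.45, 0.55], P = degree-10 least-squares polynomial fit of g.
x = np.linspace(0.0, 1.0, 20001)
f = (x <= 0.5).astype(float)                 # Riemann integrable, discontinuous
g = np.clip((0.55 - x) / 0.1, 0.0, 1.0)      # continuous, equals f off [0.45, 0.55]
P = np.polyval(np.polyfit(x, g, 10), x)      # polynomial stand-in for Theorem 7.26

def norm2(u):                                # discrete version of ||u||_2, alpha(x) = x
    return float(np.sqrt(np.sum((u[1:]**2 + u[:-1]**2) / 2) * (x[1] - x[0])))

assert norm2(f - P) <= norm2(f - g) + norm2(g - P) + 1e-12  # triangle inequality
assert norm2(f - g) < 0.2                    # the smoothing step is cheap in L^2
```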

#### Exercise 23

(By analambanomenos) From the definition, it is easy to see that if $P_n$ is an even function, then so is $P_{n+1}$. Hence by induction, since $P_0$ is trivially an even function, all the $P_n$ are even functions. Since $|x|$ is also even, it suffices to show that the $P_n$ converge uniformly to $x$ on $[0,1]$. In what follows, it is always understood that the variable $x$ lies in $[0,1]$.

Following the hint, note that \begin{equation}\label{7.23.1}\bigl(x-P_n(x)\bigr)\biggl(1-\frac{x+P_n(x)}{2}\biggr)=x-P_n(x)-\frac{x^2-P_n^2(x)}{2}=x-P_{n+1}(x).\end{equation}I want to show that $0\le P_n(x)\le P_{n+1}(x)\le x$, by induction. Since $P_0(x)=0$ and $P_1(x)=x^2/2$, this is true for the case $n=0$. Suppose it is true for the case $n$. Then the factors

$$x-P_n(x)\quad\hbox{and}\quad 1-\biggl(\frac{x+P_n(x)}{2}\biggr)$$ in \eqref{7.23.1} are nonnegative, so $x-P_{n+1}(x)\ge 0$, or $P_{n+1}(x)\le x$. Also, the terms $$P_n(x)\quad\hbox{and}\quad\frac{x^2-P_n^2(x)}{2}$$ in the definition of $P_{n+1}$ are nonnegative, so $P_{n+1}(x)\ge 0$. Hence the factor $$1-\frac{x+P_n(x)}{2}$$ in \eqref{7.23.1} lies between 0 and 1, so that $x-P_{n+1}(x)\le x-P_n(x)$, or $P_n(x)\le P_{n+1}(x)$. Putting this all together, $\{P_n\}$ is a monotonically increasing sequence of polynomials on $[0,1]$ which lie between $0$ and $x$.

By \eqref{7.23.1} we have

\begin{align*}
x-P_n(x) &= \biggl(1-\frac{x+P_{n-1}(x)}{2}\biggr) \bigl(x-P_{n-1}(x)\bigr) \\
&= \biggl(1-\frac{x+P_{n-1}(x)}{2}\biggr) \biggl(1-\frac{x+P_{n-2}(x)}{2}\biggr) \bigl(x-P_{n-2}(x)\bigr) \\
&= \cdots \\
&= x\prod_{i=0}^{n-1}\biggl(1-\frac{x+P_i(x)}{2}\biggr) \\
&\le x\biggl(1-\frac{x}{2}\biggr)^n.
\end{align*}By elementary calculus, the function $x(1-x/2)^n$ has a maximum value at $x=2/(n+1)$ of $$\frac{2}{n+1}\biggl(\frac{n}{n+1}\biggr)^n<\frac{2}{n+1}\rightarrow 0\hbox{ as }n\rightarrow\infty.$$ Hence $P_n(x)$ increases monotonically and uniformly to $x$ as $n\rightarrow\infty$.
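The recursion and the error bound above are easy to watch numerically. A minimal sketch (illustration only, iterating on a grid in $[0,1]$):

```python
import numpy as np

# Iterate P_0 = 0, P_{n+1} = P_n + (x^2 - P_n^2)/2 on [0, 1] and check the
# claims proved above: the P_n increase monotonically, stay below x, and
# satisfy x - P_n(x) <= x (1 - x/2)^n, whose maximum is below 2/(n+1).
x = np.linspace(0.0, 1.0, 1001)
p = np.zeros_like(x)
for n in range(1, 101):
    p_next = p + (x**2 - p**2) / 2
    assert np.all(p_next >= p - 1e-15)       # monotone increase
    assert np.all(p_next <= x + 1e-15)       # stays below x
    p = p_next
    assert np.all(x - p <= x * (1 - x / 2)**n + 1e-12)
assert np.max(x - p) <= 2 / 101              # uniform bound after n = 100 steps
```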

#### Exercise 24

(By analambanomenos) Note that the triangle inequality gives us, for all $x\in X$, $y\in X$, and $z\in X$,

\begin{align*}
d(x,z)-d(x,y) &\le d(y,z) \\
d(x,y)-d(x,z) &\le d(z,y)=d(y,z),
\end{align*}so that $$\big|d(x,z)-d(x,y)\big|\le d(y,z).$$Hence, for all $x\in X$, $$\big|f_p(x)\big|=\big|d(x,p)-d(x,a)\big|\le d(a,p).$$To show that $f_p$ is continuous, let $\varepsilon>0$ and let $x\in X$ and $y\in X$ such that $d(x,y)<\delta=\varepsilon/2$. Then

\begin{align*}
\big|f_p(x)-f_p(y)\big| &= \big|d(x,p)-d(x,a)-d(y,p)+d(y,a)\big| \\
&\le \big|d(x,p)-d(y,p)\big|+\big|d(y,a)-d(x,a)\big| \\
&\le d(x,y)+d(x,y) \\
&< \varepsilon.
\end{align*}(This shows that $f_p$ is uniformly continuous.)

Note that if $p\in X$ and $q\in X$, then for all $x\in X$ we have $$f_p(x)-f_q(x) = d(x,p)-d(x,a)-d(x,q)+d(x,a) = d(x,p)-d(x,q),$$ so that

\begin{align*}
||f_p-f_q|| &= \sup_{x\in X}\big|f_p(x)-f_q(x)\big| \\
&= \sup_{x\in X}\big|d(x,p)-d(x,q)\big| \\
&\le d(p,q).
\end{align*}And since $f_p(q)-f_q(q)=d(p,q)$, we have $$||f_p-f_q||=d(p,q)$$ for all $p,q\in X$.

By Theorem 7.15, $\mathscr C(X)$ with the uniform convergence metric is complete. Any closed subset of a complete metric space is itself complete: a Cauchy sequence in the closed subset converges to an element of the ambient complete space, and that limit lies in the subset because the subset is closed. Hence the closure $Y$ of $\Phi(X)$ is complete.
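The isometry identity $||f_p-f_q||=d(p,q)$ can be checked numerically on a hypothetical finite metric space (random points in the plane with the Euclidean metric). Since $f_p(x)-f_q(x)=d(x,p)-d(x,q)$ and the sup is attained at $x=q$, the supremum over the points equals $d(p,q)$ exactly:

```python
import numpy as np

# Hypothetical finite metric space X: 30 random points in R^2 with the
# Euclidean metric. Verify sup_x |d(x,p) - d(x,q)| = d(p,q) for random p, q.
rng = np.random.default_rng(0)
pts = rng.standard_normal((30, 2))                 # the space X
d = lambda u, v: float(np.linalg.norm(u - v))

dev = 0.0
for _ in range(100):
    i, j = rng.integers(0, 30, size=2)
    p, q = pts[i], pts[j]
    sup = max(abs(d(x, p) - d(x, q)) for x in pts)  # sup norm of f_p - f_q over X
    dev = max(dev, abs(sup - d(p, q)))
assert dev < 1e-12  # the sup is attained at x = q, giving exact equality
```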

#### Exercise 25

(By analambanomenos) Following the hint, let $n$ be a positive integer. For $i=0,\ldots,n,$ put $x_i=i/n$. Let $f_n$ be a continuous, piecewise-linear function on $[0,1]$ such that $f_n(0)=c$ and $f_n'(t)=\phi\bigl(x_i,f_n(x_i)\bigr)$ if $x_i<t<x_{i+1}$.

Let $|\phi|$ be bounded by $M$, so that $|f_n'|\le M$. Note that $$\int_{x_i}^{x_{i+1}}f_n'(t)\,dt=f_n(x_{i+1})-f_n(x_i),$$ so that for $0\le x\le 1$, $f_n(x)-c$ is the sum of integrals of $f_n'$ over intervals where it is defined. Hence $$\big|f_n(x)\big|\le|c|+\sum_{i=0}^{n-1}\int_{x_i}^{x_{i+1}}\big|f_n'(t)\big|\,dt\le|c|+M=M_1,$$ so that $\{f_n\}$ is uniformly bounded on $[0,1]$. Also, since the continuous, piecewise-linear functions $f_n$ have slopes lying between $-M$ and $M$ on their linear parts, if $\varepsilon>0$ then for $0\le x\le 1$, $0\le y\le 1$, $|x-y|\le\varepsilon/M$, we have $\big|f_n(x)-f_n(y)\big|\le\varepsilon$. That is, $\{f_n\}$ is equicontinuous on $[0,1]$. Hence by Theorem 7.25, there is a subsequence $\{f_{n_k}\}$ which converges uniformly to a continuous function $f$ on $[0,1]$.

By Theorem 4.19, $\phi$ is uniformly continuous on the compact rectangle $R$ given by $0\le x\le 1$, $|y|\le M_1$. That is, if $\varepsilon>0$, there is a $\delta>0$ such that if the distance between the points $(x_1,y_1)$ and $(x_2,y_2)$ in $R$ is less than $\delta$, then $\big|\phi(x_1,y_1)-\phi(x_2,y_2)\big|<\varepsilon$. Since there is a $K$ such that for all $k\ge K$ and all $t\in[0,1]$ we have $\big|f_{n_k}(t)-f(t)\big|<\delta$, we have $$\big|\phi\bigl(t,f_{n_k}(t)\bigr)-\phi\bigl(t,f(t)\bigr)\big|<\varepsilon.$$ That is, $\phi\bigl(t,f_{n_k}(t)\bigr)$ converges uniformly to $\phi\bigl(t,f(t)\bigr)$ as $k\rightarrow\infty$. Hence, if we let $$\Delta_n(t)=\begin{cases}\phi\bigl(x_i,f_n(x_i)\bigr)-\phi\bigl(t,f_n(t)\bigr) & x_i<t<x_{i+1},\quad i=0,\ldots,n-1, \\ 0 & t=x_i,\quad i=0,\ldots,n,\end{cases}$$ then, since $\phi\bigl(x_i,f_{n_k}(x_i)\bigr)$ converges to $\phi\bigl(x_i,f(x_i)\bigr)$, and $\phi\bigl(t,f_{n_k}(t)\bigr)$ converges to $\phi\bigl(t,f(t)\bigr)$, and $\phi\bigl(t,f(t)\bigr)$ is uniformly continuous on $[0,1]$, and the distance between the $x_i$ and the $t$ in the definition of $\Delta_n$ is less than $1/n$, it’s not hard to see that $\Delta_{n_k}(x)$ will converge uniformly to 0 as $k\rightarrow\infty$.

Here are the gory details. Let $\varepsilon>0$. There is a $\delta>0$ such that if $|t_1-t_2|<\delta$ we have $$\big|\phi\big(t_1,f(t_1)\big)-\phi\big(t_2,f(t_2)\big)\big|<\varepsilon/3.$$

There is a $K$ such that for $k>K$ we have $1/n_k<\delta$ and such that for all $t\in[0,1]$, $$\big|\phi\big(t,f_{n_k}(t)\big)-\phi\big(t,f(t)\big)\big|<\varepsilon/3.$$ Then, for $k>K$ and for $t$ such that $x_i<t<x_{i+1}$,
\begin{align*}
\big|\Delta_{n_k}(t)\big| &= \big|\phi\big(x_i,f_{n_k}(x_i)\big)-\phi\big(t,f_{n_k}(t)\big)\big| \\
&\le\big|\phi\big(x_i,f_{n_k}(x_i)\big)-\phi\big(x_i,f(x_i)\big)\big| + \big|\phi\big(x_i,f(x_i)\big)-\phi\big(t,f(t)\big)\big| + \big|\phi\big(t,f(t)\big)-\phi\big(t,f_{n_k}(t)\big)\big| \\
&<\varepsilon.
\end{align*}By the definition of $f_n$, $\Delta_n(t)=f_n'(t)-\phi\bigl(t,f_n(t)\bigr)$ for $x_i<t<x_{i+1}$, so \begin{equation}\label{7.25.1}f_{n_k}(x)=c+\int_0^x\Bigl[\phi\bigl(t,f_{n_k}(t)\bigr)+\Delta_{n_k}(t)\Bigr]\,dt.\end{equation}Since $f_{n_k}(x)$ converges uniformly to $f(x)$ on $[0,1]$, and $\phi\bigl(t,f_{n_k}(t)\bigr)$ converges uniformly to $\phi\bigl(t,f(t)\bigr)$ on $[0,1]$, and $\Delta_{n_k}(t)$ converges uniformly to 0 on $[0,1]$, letting $k\rightarrow\infty$ in \eqref{7.25.1}, by Theorem 7.16 we have $$f(x)=c+\int_0^x\phi\bigl(t,f(t)\bigr)\,dt.$$ Hence $f(0)=c$, and by Theorem 6.20, $f'(x)=\phi\bigl(x,f(x)\bigr)$ for $0\le x\le 1$.
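The Euler-polygon construction above is easy to run numerically. In the sketch below, everything concrete is a hypothetical choice for illustration: $\phi(t,y)=\cos t$ (bounded by $M=1$) and $c=0$, so the limit solves $f'=\cos x$, $f(0)=0$, i.e. $f(x)=\sin x$; the polygons $f_n$ are seen to converge uniformly to it.

```python
import numpy as np

# Hypothetical sample problem: phi(t, y) = cos(t), c = 0, so f(x) = sin(x).
phi = lambda t, y: np.cos(t)
c = 0.0

def euler_polygon(n):
    # Values of the piecewise-linear f_n at the grid points x_i = i/n:
    # on (x_i, x_{i+1}) the slope is phi(x_i, f_n(x_i)), as in the proof.
    vals = [c]
    for i in range(n):
        vals.append(vals[-1] + phi(i / n, vals[-1]) / n)
    return np.array(vals)

errs = []
for n in (10, 100, 1000):
    xs = np.linspace(0.0, 1.0, n + 1)
    errs.append(float(np.max(np.abs(euler_polygon(n) - np.sin(xs)))))
assert errs[0] > errs[1] > errs[2]   # uniform error decreases with n
assert errs[2] < 1e-3                # ...and is small for n = 1000
```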

#### Exercise 26

(By analambanomenos) Repeating the argument of the solution to Exercise 7.25, making changes where necessary, let $n$ be a positive integer. For $i=0,\ldots,n,$ put $x_i=i/n$. Let $\mathbf f_n$ be a continuous, piecewise-linear function on $[0,1]$ into $\mathbf R^k$ such that $\mathbf f_n(0)=\mathbf c$ and $\mathbf f_n'(t)=\mathbf\Phi\bigl(x_i,\mathbf f_n(x_i)\bigr)$ if $x_i<t<x_{i+1}$.

Let $||\mathbf\Phi||$ be bounded by $M$, so that $||\mathbf f_n'||\le M$. Note that $$\int_{x_i}^{x_{i+1}}\mathbf f_n'(t)\,dt=\mathbf f_n(x_{i+1})-\mathbf f_n(x_i),$$ so that for $0\le x\le 1$, $\mathbf f_n(x)-\mathbf c$ is the sum of integrals of $\mathbf f_n'$ over intervals where it is defined. Hence $$\big|\big|\mathbf f_n(x)\big|\big|\le||\mathbf c||+\sum_{i=0}^{n-1}\int_{x_i}^{x_{i+1}}\big|\big|\mathbf f_n'(t)\big|\big|\,dt\le||\mathbf c||+M=M_1,$$ so that $\{\mathbf f_n\}$ is uniformly bounded on $[0,1]$. Let $\mathbf f_n=(f_{n1},\ldots,f_{nk})$. Then the continuous, piecewise-linear $f_{nm}$ have slopes lying between $-M$ and $M$ on their linear parts, so if $\varepsilon>0$ then for $0\le x\le 1$, $0\le y\le 1$, $|x-y|\le\varepsilon/(Mk)$, we have $\big|f_{nm}(x)-f_{nm}(y)\big|\le\varepsilon/k$, so $\big|\big|\mathbf f_n(x)-\mathbf f_n(y)\big|\big|\le\varepsilon$. That is, $\{\mathbf f_n\}$ is equicontinuous on $[0,1]$, using the extended definition of *equicontinuous* given in Exercise 17. Hence by the extended version of Theorem 7.25, also given in Exercise 17, there is a subsequence $\{\mathbf f_{n_k}\}$ which converges uniformly to a continuous function $\mathbf f$ on $[0,1]$.

By Theorem 4.19, $\mathbf\Phi$ is uniformly continuous on the compact parallelepiped $R$ given by $0\le x\le 1$, $||\mathbf y||\le M_1$. That is, if $\varepsilon>0$, there is a $\delta>0$ such that if the distance between the points $(x_1,\mathbf y_1)$ and $(x_2,\mathbf y_2)$ in $R$ is less than $\delta$, then $\big|\big|\mathbf\Phi(x_1,\mathbf y_1)-\mathbf\Phi(x_2,\mathbf y_2)\big|\big|<\varepsilon$. Since there is a $K$ such that for all $k\ge K$ and all $t\in[0,1]$ we have $\big|\big|\mathbf f_{n_k}(t)-\mathbf f(t)\big|\big|<\delta$, we have $$\big|\big|\mathbf\Phi\bigl(t,\mathbf f_{n_k}(t)\bigr)-\mathbf\Phi\bigl(t,\mathbf f(t)\bigr)\big|\big|<\varepsilon.$$ That is, $\mathbf\Phi\bigl(t,\mathbf f_{n_k}(t)\bigr)$ converges uniformly to $\mathbf\Phi\bigl(t,\mathbf f(t)\bigr)$ as $k\rightarrow\infty$. Hence, if we let $$\mathbf\Delta_n(t)=\begin{cases}\mathbf\Phi\bigl(x_i,\mathbf f_n(x_i)\bigr)-\mathbf\Phi\bigl(t,\mathbf f_n(t)\bigr) & x_i<t<x_{i+1},\quad i=0,\ldots,n-1, \\ \mathbf 0 & t=x_i,\quad i=0,\ldots,n,\end{cases}$$ then, since $\mathbf\Phi\bigl(x_i,\mathbf f_{n_k}(x_i)\bigr)$ converges to $\mathbf\Phi\bigl(x_i,\mathbf f(x_i)\bigr)$, and $\mathbf\Phi\bigl(t,\mathbf f_{n_k}(t)\bigr)$ converges to $\mathbf\Phi\bigl(t,\mathbf f(t)\bigr)$, and $\mathbf\Phi\bigl(t,\mathbf f(t)\bigr)$ is uniformly continuous on $[0,1]$, and the distance between the $x_i$ and the $t$ in the definition of $\mathbf\Delta_n$ is less than $1/n$, it’s not hard to see that $\mathbf\Delta_{n_k}(x)$ will converge uniformly to $\mathbf 0$ as $k\rightarrow\infty$.

Here are the gory details. Let $\varepsilon>0$. There is a $\delta>0$ such that if $|t_1-t_2|<\delta$ we have $$\big|\big|\mathbf\Phi\big(t_1,\mathbf f(t_1)\big)-\mathbf\Phi\big(t_2,\mathbf f(t_2)\big)\big|\big|<\varepsilon/3.$$ There is a $K$ such that for $k>K$ we have $1/n_k<\delta$ and such that for all $t\in[0,1]$, $$\big|\big|\mathbf\Phi\big(t,\mathbf f_{n_k}(t)\big)-\mathbf\Phi\big(t,\mathbf f(t)\big)\big|\big|<\varepsilon/3.$$ Then, for $k>K$ and for $t$ such that $x_i<t<x_{i+1}$,
\begin{align*}
\big|\big|\mathbf\Delta_{n_k}(t)\big|\big| &= \big|\big|\mathbf\Phi\big(x_i,\mathbf f_{n_k}(x_i)\big)-\mathbf\Phi\big(t,\mathbf f_{n_k}(t)\big)\big|\big| \\
&\le\big|\big|\mathbf\Phi\big(x_i,\mathbf f_{n_k}(x_i)\big)-\mathbf\Phi\big(x_i,\mathbf f(x_i)\big)\big|\big| + \big|\big|\mathbf\Phi\big(x_i,\mathbf f(x_i)\big)-\mathbf\Phi\big(t,\mathbf f(t)\big)\big|\big|\;+ \\
&\phantom{\le}\;\;\big|\big|\mathbf\Phi\big(t,\mathbf f(t)\big)-\mathbf\Phi\big(t,\mathbf f_{n_k}(t)\big)\big|\big| \\
&<\varepsilon.
\end{align*}By the definition of $\mathbf f_n$, $\mathbf\Delta_n(t)=\mathbf f_n'(t)-\mathbf\Phi\bigl(t,\mathbf f_n(t)\bigr)$ for $x_i<t<x_{i+1}$, so \begin{equation}\label{7.26.1}\mathbf f_{n_k}(x)=\mathbf c+\int_0^x\Bigl[\mathbf\Phi\bigl(t,\mathbf f_{n_k}(t)\bigr)+\mathbf\Delta_{n_k}(t)\Bigr]\,dt.\end{equation}Since $\mathbf f_{n_k}(x)$ converges uniformly to $\mathbf f(x)$ on $[0,1]$, and $\mathbf\Phi\bigl(t,\mathbf f_{n_k}(t)\bigr)$ converges uniformly to $\mathbf\Phi\bigl(t,\mathbf f(t)\bigr)$ on $[0,1]$, and $\mathbf\Delta_{n_k}(t)$ converges uniformly to $\mathbf 0$ on $[0,1]$, letting $k\rightarrow\infty$ in \eqref{7.26.1}, by the extended version of Theorem 7.16 given in Exercise 17 we have $$\mathbf f(x)=\mathbf c+\int_0^x\mathbf\Phi\bigl(t,\mathbf f(t)\bigr)\,dt.$$ Hence $\mathbf f(0)=\mathbf c$, and by Theorem 6.20, $\mathbf f'(x)=\mathbf\Phi\bigl(x,\mathbf f(x)\bigr)$ for $0\le x\le 1$.
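The same Euler-polygon scheme runs unchanged in the vector-valued setting. The sketch below is an illustration with hypothetical choices: $\mathbf\Phi(x,\mathbf y)=(-y_2,y_1)$ and $\mathbf c=(1,0)$, so the limit solves $\mathbf f'=\mathbf\Phi(x,\mathbf f)$, $\mathbf f(0)=\mathbf c$, i.e. $\mathbf f(x)=(\cos x,\sin x)$.

```python
import numpy as np

# Hypothetical sample system: Phi(x, y) = (-y_2, y_1), c = (1, 0),
# whose solution on [0, 1] is f(x) = (cos x, sin x).
Phi = lambda x, y: np.array([-y[1], y[0]])
c = np.array([1.0, 0.0])

def euler_polygon(n):
    # Vector-valued polygon: constant slope Phi(x_i, f_n(x_i)) on (x_i, x_{i+1}).
    vals = [c]
    for i in range(n):
        vals.append(vals[-1] + Phi(i / n, vals[-1]) / n)
    return np.array(vals)            # shape (n+1, 2)

errs = []
for n in (10, 100, 1000):
    xs = np.linspace(0.0, 1.0, n + 1)
    exact = np.column_stack([np.cos(xs), np.sin(xs)])
    errs.append(float(np.max(np.linalg.norm(euler_polygon(n) - exact, axis=1))))
assert errs[0] > errs[1] > errs[2]   # uniform error decreases with n
assert errs[2] < 5e-3                # small for n = 1000
```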

**Baby Rudin (*Principles of Mathematical Analysis*): complete solutions to the Chapter 7 exercises**