Chapter 9 Functions of Several Variables
- Part A: Exercise 1 – Exercise 12
- Part B: Exercise 13 – Exercise 22
- Part C: Exercise 23 – Exercise 31
Exercise 23
(By analambanomenos) Note that
\begin{align*}
D_1f(x,y_1,y_2) &= 2xy_1+e^x \\
D_2f(x,y_1,y_2) &= x^2 \\
D_3f(x,y_1,y_2) &= 1.
\end{align*}Hence
\begin{align*}
f(0,1,-1) &= 0+1-1 = 0 \\
D_1f(0,1,-1) &= 0+1=1 \\
D_2f(0,1,-1) &= 0 \\
D_3f(0,1,-1) &= 1.
\end{align*}We can apply the implicit function theorem with $n=1$, $m=2$, where
\begin{align*}
A_x &=
\begin{pmatrix}
1
\end{pmatrix} \\
A_y &=
\begin{pmatrix}
0 & 1
\end{pmatrix}
\end{align*}to conclude that there is a continuously differentiable function $g$, defined in some neighborhood of $(1,-1)$ with $g(1,-1)=0$, such that
\begin{align*}
f\big(g(y_1,y_2),y_1,y_2\big) &= 0 \\
g'(1,-1) &= -A_x^{-1}A_y \\
&= -
\begin{pmatrix}
1
\end{pmatrix}
^{-1}
\begin{pmatrix}
0 & 1
\end{pmatrix} \\
&=
\begin{pmatrix}
0 & -1
\end{pmatrix}
\end{align*}so that $D_1g(1,-1)=0$ and $D_2g(1,-1)=-1$.
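As a check, differentiating the identity $f\big(g(y_1,y_2),y_1,y_2\big)=0$ with respect to $y_1$ and $y_2$ by the chain rule and evaluating at $(1,-1)$ (where $g=0$) gives the same values:
\begin{align*}
D_1f\cdot D_1g+D_2f=0 &\implies D_1g(1,-1)=-\frac{D_2f(0,1,-1)}{D_1f(0,1,-1)}=0 \\
D_1f\cdot D_2g+D_3f=0 &\implies D_2g(1,-1)=-\frac{D_3f(0,1,-1)}{D_1f(0,1,-1)}=-1.
\end{align*}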
Exercise 24
(By analambanomenos) The Jacobian of $\mathbf f$ is
\begin{align*}
\begin{vmatrix}
D_1f_1(x,y) & D_2f_1(x,y) \\
D_1f_2(x,y) & D_2f_2(x,y)
\end{vmatrix}
&=
\frac{1}{(x^2+y^2)^4}\begin{vmatrix}
4xy^2 & -4x^2y \\
y(y^2-x^2) & x(x^2-y^2)
\end{vmatrix} \\
&= \frac{4x^4y^2-4x^2y^4+4x^2y^4-4x^4y^2}{(x^2+y^2)^4} \\
&= 0
\end{align*}so the rank of $\mathbf f'(x,y)$ is less than 2 at every point. On the other hand, at each point of the domain at least one entry of the Jacobian matrix is nonzero: $D_1f_1(x,y)=4xy^2/(x^2+y^2)^2\ne 0$ when $x\ne 0$ and $y\ne 0$, $D_2f_2(x,0)=1/x\ne 0$ when $y=0$, and $D_1f_2(0,y)=1/y\ne 0$ when $x=0$. Hence the rank of $\mathbf f'$ is exactly 1 at every point.
Converting to polar coordinates, we get
\[
\mathbf f(r\cos\theta,r\sin\theta) = \bigg(\cos(2\theta),\frac{\sin(2\theta)}{2}\bigg)
\]so we see that the range of $\mathbf f$ is an ellipse centered at the origin and intersecting the coordinate axes at $(\pm 1,0)$ and $(0,\pm 1/2)$.
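Explicitly, if we write $(u,v)=\big(\cos(2\theta),\sin(2\theta)/2\big)$ for the image point, then
\[
u^2+4v^2=\cos^2(2\theta)+\sin^2(2\theta)=1,
\]and as $\theta$ runs through $[0,2\pi)$ the image point traces out all of this curve, so the range is exactly the ellipse $u^2+4v^2=1$.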
Exercise 25
(By analambanomenos)
(a) If $r$ equals the rank of $A$, and $\mathscr R(A)$ is spanned by the independent set $\mathbf y_1,\ldots,\mathbf y_r$, then $\mathbf y_i=A\mathbf z_i$, $i=1,\ldots,r$, for some independent set $\mathbf z_1,\ldots,\mathbf z_r$ in $\mathbf R^n$, and $S$ is a map of $\mathscr R(A)$ into $\mathbf R^n$ defined as
\[
S(c_1\mathbf y_1+\cdots+c_r\mathbf y_r)=c_1\mathbf z_1+\cdots+c_r\mathbf z_r.
\]Note that $S$ is a one-to-one map of $\mathscr R(A)$ into $\mathbf R^n$. Following the hint, note that
\[
AS(c_1\mathbf y_1+\cdots+c_r\mathbf y_r)=A(c_1\mathbf z_1+\cdots+c_r\mathbf z_r)=c_1\mathbf y_1+\cdots+c_r\mathbf y_r,
\]that is, $AS(\mathbf y)=\mathbf y$ for $\mathbf y\in\mathscr R(A)$. Hence, for $\mathbf x\in \mathbf R^n$,
\[
SASA(\mathbf x)=S\Big(AS\big(A(\mathbf x)\big)\Big)=S\big(A(\mathbf x)\big)=SA(\mathbf x)
\]so $SASA$ is a projection on $\mathbf R^n$.
If $SASA(\mathbf x)=SA(\mathbf x)=\mathbf 0$, then $A(\mathbf x)=\mathbf 0$ since $S$ is one-to-one, so the null space of $SASA$ is $\mathscr N(A)$. The range of $SASA$ is clearly a subset of $\mathscr R(S)$, and if $\mathbf z=c_1\mathbf z_1+\cdots+c_r\mathbf z_r\in\mathscr R(S)$, then\[
SASA(\mathbf z)=SA(\mathbf z)=S(c_1\mathbf y_1+\cdots+c_r\mathbf y_r)=\mathbf z
\]so that the range of $SASA$ is $\mathscr R(S)$.
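To make the construction concrete, here is a small example (the particular matrix is chosen only for illustration). Take $n=m=2$ and $A(x_1,x_2)=(x_1,x_1)$, so $r=1$ and $\mathscr R(A)$ is spanned by $\mathbf y_1=(1,1)=A\mathbf z_1$ with $\mathbf z_1=(1,0)$. Then $S(c\mathbf y_1)=c\mathbf z_1$, and
\[
SA(x_1,x_2)=S(x_1,x_1)=(x_1,0),
\]so $SA$ is the projection of $\mathbf R^2$ onto $\mathscr R(S)$ (the $x_1$-axis) along $\mathscr N(A)$ (the $x_2$-axis), and $SASA=SA$ can be verified directly.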
(b) From the discussion in 9.31, you can conclude that if $P$ is a projection in $\mathbf R^n$, then $n=\dim\mathscr N(P)+\dim\mathscr R(P)$. Hence from part (a), we have
\begin{align*}
n &= \dim\mathscr N(SASA)+\dim\mathscr R(SASA) \\
&= \dim\mathscr N(A)+\dim\mathscr R(S) \\
&= \dim\mathscr N(A)+\dim\mathscr R(A)
\end{align*}where the last equality follows from the fact that $S$ is a one-to-one map on $\mathscr R(A)$.
Exercise 26
(By analambanomenos) If we let $f(x,y)=g(x)$ be the function given in the example, then $D_2f(x,y)=0$, so $D_{12}f(x,y)=0$, for all $(x,y)$. However, $D_1f(x,y)=g'(x)$ does not exist at any point, since $g$ is nowhere differentiable. For $g$ you can use the continuous, nowhere-differentiable function constructed in Theorem 7.18.
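If one only wants the failure at a single point rather than everywhere, a simpler illustration (not the function the exercise asks for) is $f(x,y)=|x|$:
\[
D_2f(x,y)=0\quad\hbox{and}\quad D_{12}f(x,y)=0\quad\hbox{for all $(x,y)$,}
\]so $D_{12}f$ exists and is continuous everywhere, while $D_1f(0,y)$ does not exist for any $y$.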
Exercise 27
(By analambanomenos)
(a) Converting to polar coordinates for $(x,y)\ne(0,0)$, we have
\begin{align*}
f(r\cos\theta,r\sin\theta) &= \frac{r^4\cos\theta\sin\theta(\cos^2\theta-\sin^2\theta)}{r^2} \\
&= \frac{r^2\sin2\theta\cos2\theta}{2} \\
&= \frac{r^2\sin4\theta}{4} \\
\big|f(x,y)\big| &\le\frac{r^2}{4}=\frac{x^2+y^2}{4}
\end{align*}which converges to $0=f(0,0)$ as $(x,y)\rightarrow(0,0)$.
We have
\[
D_1f(0,0) = \lim_{h\rightarrow 0}\frac{f(h,0)}{h} = \lim_{h\rightarrow 0}\frac{0}{h}=0.
\]For $(x,y)\ne(0,0)$, we have
\begin{align*}
D_1f(x,y) &= \frac{(x^2+y^2)(3x^2y-y^3)-2x(x^3y-xy^3)}{(x^2+y^2)^2} \\
&= \frac{x^4y+4x^2y^3-y^5}{(x^2+y^2)^2} \\
D_1f(r\cos\theta,r\sin\theta) &= \frac{r^5(\cos^4\theta\sin\theta+4\cos^2\theta\sin^3\theta-\sin^5\theta)}{r^4} \\
\big|D_1f(x,y)\big| &\le 6r = 6\sqrt{x^2+y^2}
\end{align*}which converges to $0=D_1f(0,0)$ as $(x,y)\rightarrow(0,0)$.
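The factor of 6 here (and in the analogous bound for $D_2f$ below) just comes from the triangle inequality:
\[
\big|\cos^4\theta\sin\theta+4\cos^2\theta\sin^3\theta-\sin^5\theta\big|\le 1+4+1=6.
\]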
We have
\[
D_2f(0,0) = \lim_{h\rightarrow 0}\frac{f(0,h)}{h} = \lim_{h\rightarrow 0}\frac{0}{h}=0.
\]For $(x,y)\ne(0,0)$, we have
\begin{align*}
D_2f(x,y) &= \frac{(x^2+y^2)(x^3-3xy^2)-2y(x^3y-xy^3)}{(x^2+y^2)^2} \\
&= \frac{x^5-4x^3y^2-xy^4}{(x^2+y^2)^2} \\
D_2f(r\cos\theta,r\sin\theta) &= \frac{r^5(\cos^5\theta-4\cos^3\theta\sin^2\theta-\cos\theta\sin^4\theta)}{r^4} \\
\big|D_2f(x,y)\big| &\le 6r = 6\sqrt{x^2+y^2}
\end{align*}which converges to $0=D_2f(0,0)$ as $(x,y)\rightarrow(0,0)$.
(b) For $(x,y)\ne(0,0)$, we have
\begin{align*}
D_{12}f(x,y) &= \frac{(x^2+y^2)(5x^4-12x^2y^2-y^4)-4x(x^5-4x^3y^2-xy^4)}{(x^2+y^2)^3} \\
&= \frac{x^6+9x^4y^2-9x^2y^4-y^6}{(x^2+y^2)^3} \\
D_{12}f(r\cos\theta,r\sin\theta) &= \frac{r^6(\cos^6\theta+9\cos^4\theta\sin^2\theta-9\cos^2\theta\sin^4\theta-\sin^6\theta)}{r^6} \\
&= \cos^6\theta+9\cos^4\theta\sin^2\theta-9\cos^2\theta\sin^4\theta-\sin^6\theta
\end{align*} So $D_{12}f$ has a constant value along the rays emanating from the origin. Since this value is not a constant function of $\theta$, we see that $D_{12}f(x,y)$ does not converge to a limit as $(x,y)\rightarrow(0,0)$, and so $D_{12}f(x,y)$ cannot be continuous at the origin. Also, for $(x,y)\ne(0,0)$ we can apply Theorem 9.41 and conclude that $D_{21}f(x,y)=D_{12}f(x,y)$ is also not continuous at the origin.
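For instance, the limiting values along the positive $x$-axis ($\theta=0$) and along the positive $y$-axis ($\theta=\pi/2$) already differ:
\[
D_{12}f(x,0)=1\quad(x\ne 0),\qquad D_{12}f(0,y)=-1\quad(y\ne 0).
\]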
(c) We have
\begin{gather*}
D_{12}f(0,0)=\lim_{h\rightarrow 0}\frac{D_2f(h,0)}{h}=\lim_{h\rightarrow 0}\frac{h^5/h^4}{h}=\lim_{h\rightarrow 0}1=1 \\
D_{21}f(0,0)=\lim_{h\rightarrow 0}\frac{D_1f(0,h)}{h}=\lim_{h\rightarrow 0}\frac{-h^5/h^4}{h}=\lim_{h\rightarrow 0}-1=-1
\end{gather*}
Exercise 28
(By analambanomenos) Away from the origin, $\varphi$ is continuous: for $t>0$ the pieces in the definition are continuous and agree where they meet, at $x=0$, $x=\sqrt{t}$ and $x=2\sqrt{t}$; the case $t<0$ then follows from $\varphi(x,t)=-\varphi(x,|t|)$; and near a point $(x,0)$ with $x\ne 0$ we have $\varphi=0$ identically (see the next paragraph). Since $\big|\varphi(x,t)\big|\le\sqrt{|t|}$, we have for $\varepsilon>0$, $-\varepsilon<x<\varepsilon$, and $-\varepsilon<t<\varepsilon$, $\big|\varphi(x,t)\big|<\sqrt{\varepsilon}$, so $\varphi$ is also continuous at the origin.
For $x\le 0$ we have $\varphi(x,t)=0$ for all $t$, and for $x>0$ we have $\varphi(x,t)=0$ for $-\frac{1}{4}x^2\le t\le\frac{1}{4}x^2$, so $D_2\varphi(x,0)=0$ for all $x$.
If $0\le t<\frac{1}{4}$, then
\begin{align*}
f(t) &= \int_{-1}^1\varphi(x,t)\,dx \\
&= \int_0^{\sqrt{t}}x\,dx + \int_{\sqrt{t}}^{2\sqrt{t}}\big(-x+2\sqrt{t}\big)\,dx \\
&= \frac{(\sqrt{t})^2}{2}-0-\frac{(2\sqrt{t})^2}{2}+2\sqrt{t}\big(2\sqrt{t}\big)+\frac{(\sqrt{t})^2}{2}-2\sqrt{t}\big(\sqrt{t}\big) \\
&= t
\end{align*}and if $-\frac{1}{4}<t\le 0$, then
\begin{align*}
f(t) &= \int_{-1}^1\varphi(x,t)\,dx \\
&= \int_0^{\sqrt{-t}}(-x)\,dx + \int_{\sqrt{-t}}^{2\sqrt{-t}}\big(x-2\sqrt{-t}\big)\,dx \\
&= -\frac{(\sqrt{-t})^2}{2}+0+\frac{(2\sqrt{-t})^2}{2}-2\sqrt{-t}\big(2\sqrt{-t}\big)-\frac{(\sqrt{-t})^2}{2}+2\sqrt{-t}\big(\sqrt{-t}\big) \\
&= t
\end{align*}Hence $f'(0)=1$, while $\int_{-1}^1D_2\varphi(x,0)\,dx=0$.
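A quick geometric check: for $0<t<\frac14$ the graph of $x\mapsto\varphi(x,t)$ is a triangle with base $[0,2\sqrt t\,]$ and height $\sqrt t$ (the peak is at $x=\sqrt t$), so
\[
f(t)=\tfrac12\big(2\sqrt t\,\big)\big(\sqrt t\,\big)=t,
\]in agreement with the computation above; for $-\frac14<t<0$ the graph is the mirror image below the $x$-axis and $f(t)=-|t|=t$.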
Exercise 29
(By analambanomenos) If we let $g=D_{i_{n+1}\cdots i_k}f$, then by Theorem 9.41 we have
\begin{align*}
D_{i_1\cdots i_k}f &= D_{i_1\cdots i_{n-2}}(D_{i_{n-1}i_n}g) \\
&= D_{i_1\cdots i_{n-2}}(D_{i_ni_{n-1}}g) \\
&= D_{i_1\cdots i_{n-2}i_ni_{n-1}i_{n+1}\cdots i_k}f,
\end{align*}that is, we can swap any two adjacent indices and the derivative remains unchanged. We can apply this result to show that we can swap any two indices:
\begin{align*}
D_{i_1\cdots i_m\cdots i_n\cdots i_k}f &= D_{i_1\cdots i_m\cdots i_ni_{n-1}i_{n+1}\cdots i_k}f \\
&= \hbox{(keep swapping adjacent indices until $i_n$ comes immediately before $i_m$)} \\
&= D_{i_1\cdots i_ni_m\cdots i_{n-1}i_{n+1}\cdots i_k}f \\
&= D_{i_1\cdots i_ni_{m+1}i_m\cdots i_{n-1}i_{n+1}\cdots i_k}f \\
&= \hbox{(keep swapping adjacent indices until $i_m$ comes immediately after $i_{n-1}$)} \\
&= D_{i_1\cdots i_{m-1}i_ni_{m+1}\cdots i_{n-1}i_mi_{n+1}\cdots i_k}f
\end{align*}Since any permutation is the result of pairwise swaps (this is usually shown in elementary abstract algebra courses when discussing the permutation group), we see that in the case of $f\in \mathscr C^{(k)}$, we can permute the order of partial differentiation without changing the derivative.
Exercise 30
(By analambanomenos)
(a) I am going to show this by induction on $k$. For the case $k=1$, we have by Theorem 9.15 and Theorem 9.17 (here $\mathbf p(t)=\mathbf a+t\mathbf x$, so $\mathbf p'(t)=\mathbf x$)
\[
h'(t) = f'\big(\mathbf p(t)\big)\mathbf p'(t) = \sum_{i=1}^nD_if\big(\mathbf p(t)\big)x_i
\]which is the assertion in the case $k=1$. Now assume the assertion is true for the case $k-1$. Then we have
\begin{align*}
h^{(k)}(t) &= \frac{d}{dt}h^{(k-1)}(t) \\
&= \frac{d}{dt}\sum_{i_1,\ldots,i_{k-1}=1}^n D_{i_1\cdots i_{k-1}}f\big(\mathbf p(t)\big)x_{i_1}\cdots x_{i_{k-1}} \\
&= \sum_{i_1,\ldots,i_{k-1}=1}^n \frac{d}{dt}D_{i_1\cdots i_{k-1}}f\big(\mathbf p(t)\big)x_{i_1}\cdots x_{i_{k-1}} \\
&= \sum_{i_1,\ldots,i_{k-1}=1}^n\Bigg(\sum_{i_k=1}^nD_{i_k}D_{i_1\cdots i_{k-1}}f\big(\mathbf p(t)\big)x_{i_k}\Bigg)x_{i_1}\cdots x_{i_{k-1}} \\
&= \sum_{i_1,\ldots,i_k=1}^n D_{i_1\cdots i_k}f\big(\mathbf p(t)\big)x_{i_1}\cdots x_{i_k}
\end{align*}where the last equality follows from Exercise 29.
(b) Plugging the results of part (a) into Taylor's theorem for $h$, we get, for some $t\in(0,1)$,
\begin{align*}
f(\mathbf a+\mathbf x) &= h(1) \\
&= \sum_{k=0}^{m-1}\frac{h^{(k)}(0)}{k!}+\frac{h^{(m)}(t)}{m!} \\
&= \sum_{k=0}^{m-1}\frac{1}{k!}\sum_{i_1,\ldots,i_k=1}^nD_{i_1\cdots i_k}f(\mathbf a)x_{i_1}\cdots x_{i_k}+\frac{1}{m!}\sum_{i_1,\ldots,i_m=1}^nD_{i_1\cdots i_m}f(\mathbf a+t\mathbf x)x_{i_1}\cdots x_{i_m}
\end{align*}Since $f\in\mathscr C^{(m)}(E)$, there is a bound $M$ such that $\big|D_{i_1\cdots i_m}f(\mathbf a+t\mathbf x)\big|<M$ for all $t\in(0,1)$ and all partial derivatives of $f$ of order $m$. Because each $|x_{i_j}|\le|\mathbf x|$ and the remainder sum has $n^m$ terms, we have
\begin{align*}
\big|r(\mathbf x)\big| &\le \frac{1}{m!}\sum_{i_1,\ldots,i_m=1}^n\big|D_{i_1\cdots i_m}f(\mathbf a+t\mathbf x)\big|\cdot|x_{i_1}|\cdots|x_{i_m}| \\
&\le \frac{n^mM}{m!}\,|\mathbf x|^m \\
\lim_{\mathbf x\rightarrow\mathbf 0}\frac{\big|r(\mathbf x)\big|}{|\mathbf x|^{m-1}} &\le \lim_{\mathbf x\rightarrow\mathbf 0}\frac{n^mM}{m!}\,|\mathbf x|=0
\end{align*}(c) By simple combinatorics, the number of ways to arrange $k$ distinct objects in an ordered sequence is $k!$. If $s$ of these objects are identical, this reduces the number of distinct ordered sequences by a factor of $s!$, since there are $s!$ ways of rearranging the identical objects in a given sequence. Hence the number of times a given partial derivative $D_1^{s_1}\cdots D_n^{s_n}$ of order $k=s_1+\cdots+s_n$ occurs in the Taylor polynomial is $k!/(s_1!\cdots s_n!)$, so we can rewrite the Taylor polynomial as
\begin{align*}
\sum_{k=0}^{m-1}\frac{1}{k!}\sum_{i_1,\ldots,i_k=1}^nD_{i_1\cdots i_k}f(\mathbf a)x_{i_1}\cdots x_{i_k} &= \sum_{k=0}^{m-1}\frac{1}{k!}\sum_{s_1+\cdots+s_n=k}\frac{k!\,D_1^{s_1}\cdots D_n^{s_n}f(\mathbf a)}{s_1!\cdots s_n!}x_1^{s_1}\cdots x_n^{s_n} \\
&= \sum_{k=0}^{m-1}\sum_{s_1+\cdots+s_n=k}\frac{D_1^{s_1}\cdots D_n^{s_n}f(\mathbf a)}{s_1!\cdots s_n!}x_1^{s_1}\cdots x_n^{s_n}
\end{align*}
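For example, with $k=3$ the three orderings $D_{112}f$, $D_{121}f$ and $D_{211}f$ all equal $D_1^2D_2f$, and $3!/(2!\,1!)=3$, so the corresponding terms of the original sum combine into
\[
\frac{3}{3!}\,D_1^2D_2f(\mathbf a)\,x_1^2x_2=\frac{D_1^2D_2f(\mathbf a)}{2!\,1!}\,x_1^2x_2.
\]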
Exercise 31
(By analambanomenos) I am going to give the results, but not prove them. You can find a proof in any good advanced calculus text, and it’s easy to find online. (For example, see Theorem 16.4 of Loomis and Sternberg’s Advanced Calculus, which is legally available online now. Actually, why don’t you just read the whole book; it wouldn’t be difficult for you at this point, and it would introduce you to Differential Geometry and Mechanics. They actually used to assign it in advanced Freshman Calculus courses long ago, which must have been a good way to generate a lot of pre-meds.)
For a function $f$ satisfying the conditions of the exercise (in particular, the first-order partial derivatives of $f$ vanish at $\mathbf a$, so no linear terms appear), we have from Exercise 30 that, near $\mathbf a$, $f$ is approximated, up to an error that is small compared with $|\mathbf x-\mathbf a|^2$, by the following quadratic function in two variables:
\[
f(\mathbf a) +\frac{D_{11}f(\mathbf a)}{2}(x_1-a_1)^2 + D_{12}f(\mathbf a)(x_1-a_1)(x_2-a_2) + \frac{D_{22}f(\mathbf a)}{2}(x_2-a_2)^2.
\]Let $D=\big(D_{11}f(\mathbf a)\big)\big(D_{22}f(\mathbf a)\big)-\big(D_{12}f(\mathbf a)\big)^2$. Then the quadratic function above has a local maximum at $\mathbf a$ if $D$ is positive and $D_{11}f(\mathbf a)$ is negative, a local minimum at $\mathbf a$ if $D$ is positive and $D_{11}f(\mathbf a)$ is positive, and neither if $D$ is negative (the quadratic form then takes both signs arbitrarily close to $\mathbf a$). Since the error in the approximation is small compared with $|\mathbf x-\mathbf a|^2$, the same conclusions hold for $f$ itself at $\mathbf a$; if $D=0$ the second-order terms alone do not decide.
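For example (these particular functions are only illustrations), $f(x,y)=x^2+2y^2$ at the origin has $D=(2)(4)-0^2=8>0$ and $D_{11}f=2>0$, hence a local minimum, while $f(x,y)=x^2-y^2$ at the origin has
\[
D=(2)(-2)-0^2=-4<0,
\]hence neither a local maximum nor a local minimum (a saddle point).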
For $n$ variables, we need to consider the eigenvalues of the Hessian matrix $\big(D_{ij}f(\mathbf a)\big)$, which are all real since the matrix is symmetric. If they are all negative, $f$ has a local maximum at $\mathbf a$; if they are all positive, $f$ has a local minimum at $\mathbf a$; if eigenvalues of both signs occur, $f$ has neither; and if some eigenvalue is 0 and the rest all have the same sign, the second-order terms alone do not decide.
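For instance, at the origin (where the gradients vanish) the Hessians of $x^2+2y^2+3z^2$ and of $x^2+y^2-z^2$ are
\[
\begin{pmatrix} 2&0&0 \\ 0&4&0 \\ 0&0&6 \end{pmatrix}
\qquad\hbox{and}\qquad
\begin{pmatrix} 2&0&0 \\ 0&2&0 \\ 0&0&-2 \end{pmatrix},
\]with eigenvalues $2,4,6$ and $2,2,-2$ respectively, so the first function has a local minimum there and the second has neither a local maximum nor a local minimum.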