Exercise 5.2.1
Each of the following expressions defines a function $D$ on the set of $3\times3$ matrices over the field of real numbers. In which of these cases is $D$ a $3$-linear function?
(a) $D(A)=A_{11}+A_{22}+A_{33}$;
(b) $D(A)=(A_{11})^2+3A_{11}A_{22}$;
(c) $D(A)=A_{11}A_{12}A_{33}$;
(d) $D(A)=A_{13}A_{22}A_{32}+5A_{12}A_{22}A_{32}$;
(e) $D(A)=0$;
(f) $D(A)=1$;
Solution:
(a) No, $D$ is not $3$-linear. Let
$$A=\left[\begin{array}{ccc}2&0&0\\0&1&0\\0&0&1\end{array}\right].$$If $D$ were $3$-linear it would be linear in the first row, and since the first row of $A$ is the sum of the first row of $I$ with itself (the other rows agreeing), we would have $D(A)=D(I)+D(I)$. But $D(A)=4$ and $D(I)=3$, so $D(A)\not=D(I)+D(I)$.
(b) No, $D$ is not $3$-linear. Let $A$ be the same matrix as in part (a). Then $D(A)=10$ and $D(I)=4$, so $D(A)\not=D(I)+D(I)$.
(c) No, $D$ is not $3$-linear. Let
$$A=\left[\begin{array}{ccc}2&2&0\\0&0&0\\0&0&1\end{array}\right],$$$$B=\left[\begin{array}{ccc}1&1&0\\0&0&0\\0&0&1\end{array}\right].$$Then if $D$ were $3$-linear we’d have to have $D(A)=D(B)+D(B)$. But $D(A)=4$ and $D(B)=1$. Thus $D(A)\not=D(B)+D(B)$.
(d) Yes, $D$ is $3$-linear. The two functions $A\mapsto A_{13}A_{22}A_{32}$ and $A\mapsto A_{12}A_{22}A_{32}$ each take one factor from each row, so both are $3$-linear by Example 1, page 142. Since $D$ is a linear combination of these two functions (the first plus $5$ times the second), $D$ is $3$-linear by the Lemma on page 143.
(e) Yes, $D$ is $3$-linear. We must show that (5-1) on page 142 holds for all matrices $A$. But since $D(A)=0$ for every $A$, both sides of (5-1) are always equal to zero, so (5-1) holds for all $A$.
(f) No, $D$ is not $3$-linear. Let $A$ be the matrix from part (a) again. Then $D(A)=1$ but $D(I)+D(I)=2$, so $D(A)\not=D(I)+D(I)$ and $D$ is not $3$-linear.
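As an aside, these answers can be spot-checked numerically. The following Python snippet (the function and helper names are my own) tests additivity in each row on random integer matrices; passing the test is only consistent with $3$-linearity, not a proof of it, and homogeneity is not tested.
\begin{verbatim}
# Numerical spot-check of the six answers above (function and helper names are
# mine).  For each candidate D we test additivity in each row on random integer
# matrices; this is a sanity check, not a proof.
import numpy as np

fa = lambda A: A[0, 0] + A[1, 1] + A[2, 2]
fb = lambda A: A[0, 0]**2 + 3*A[0, 0]*A[1, 1]
fc = lambda A: A[0, 0]*A[0, 1]*A[2, 2]
fd = lambda A: A[0, 2]*A[1, 1]*A[2, 1] + 5*A[0, 1]*A[1, 1]*A[2, 1]
fe = lambda A: 0.0
ff = lambda A: 1.0

def looks_row_additive(D, trials=200, rng=np.random.default_rng(0)):
    for _ in range(trials):
        B = rng.integers(-5, 6, (3, 3)).astype(float)
        for i in range(3):
            C = B.copy()
            C[i] = rng.integers(-5, 6, 3)   # B and C differ only in row i
            A = B.copy()
            A[i] = B[i] + C[i]              # row i of A = row i of B + row i of C
            if not np.isclose(D(A), D(B) + D(C)):
                return False
    return True

for name, D in [("a", fa), ("b", fb), ("c", fc), ("d", fd), ("e", fe), ("f", ff)]:
    print(name, looks_row_additive(D))      # only (d) and (e) print True
\end{verbatim}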
Exercise 5.2.2
Verify directly that the three functions $E_1$, $E_2$, $E_3$ defined by (5-6), (5-7), and (5-8) are identical.
Solution: We have$$E_1(A)=A_{11}(A_{22}A_{33}-A_{23}A_{32})-A_{21}(A_{12}A_{33}-A_{13}A_{32})+A_{31}(A_{12}A_{23}-A_{13}A_{22})$$$$=\underset{\text{term $1$}}{A_{11}A_{22}A_{33}}-\underset{\text{term $2$}}{A_{11}A_{23}A_{32}}-\underset{\text{term $3$}}{A_{21}A_{12}A_{33}}+\underset{\text{term $4$}}{A_{21}A_{13}A_{32}}+\underset{\text{term $5$}}{A_{31}A_{12}A_{23}}-\underset{\text{term $6$}}{A_{31}A_{13}A_{22}}.$$$$E_2(A)=-A_{12}(A_{21}A_{33}-A_{23}A_{31})+A_{22}(A_{11}A_{33}-A_{13}A_{31})-A_{32}(A_{11}A_{23}-A_{13}A_{21})$$$$=\underset{\text{term $3$}}{-A_{12}A_{21}A_{33}}+\underset{\text{term $5$}}{A_{12}A_{23}A_{31}}+\underset{\text{term $1$}}{A_{22}A_{11}A_{33}}-\underset{\text{term $6$}}{A_{22}A_{13}A_{31}}-\underset{\text{term $2$}}{A_{32}A_{11}A_{23}}+\underset{\text{term $4$}}{A_{32}A_{13}A_{21}}.$$$$E_3(A)=A_{13}(A_{21}A_{32}-A_{22}A_{31})-A_{23}(A_{11}A_{32}-A_{12}A_{31})+A_{33}(A_{11}A_{22}-A_{12}A_{21})$$$$=\underset{\text{term $4$}}{A_{13}A_{21}A_{32}}-\underset{\text{term $6$}}{A_{13}A_{22}A_{31}}-\underset{\text{term $2$}}{A_{23}A_{11}A_{32}}+\underset{\text{term $5$}}{A_{23}A_{12}A_{31}}+\underset{\text{term $1$}}{A_{33}A_{11}A_{22}}-\underset{\text{term $3$}}{A_{33}A_{12}A_{21}}.$$I've expanded the three expressions and labelled corresponding terms. We see each of the six terms appears exactly once in each expansion, and always with the same sign. Therefore the three expressions are equal.
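The same bookkeeping can be delegated to a computer. Here is an optional sympy check (variable names are mine) that the three column expansions have identical expansions, and that each agrees with $\det A$.
\begin{verbatim}
# Cross-check of Exercise 5.2.2 with sympy (variable names are mine):
# build E1, E2, E3 as the cofactor expansions along columns 1, 2, 3 of a
# generic 3x3 symbolic matrix and verify that their expansions coincide.
import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"A{i + 1}{j + 1}"))

# E[j] = sum_i A_{i,j+1} * cofactor_{i,j+1}, i.e. expansion along column j+1
E = [sum(A[i, j] * A.cofactor(i, j) for i in range(3)) for j in range(3)]

print(sp.expand(E[0] - E[1]), sp.expand(E[1] - E[2]))  # 0 0
print(sp.expand(E[0] - A.det()))                       # 0 (all three equal det A)
\end{verbatim}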
Exercise 5.2.3
Let $K$ be a commutative ring with identity. If $A$ is a $2\times2$ matrix over $K$, the {\bf classical adjoint} of $A$ is the $2\times2$ matrix adj $A$ defined by
$$\text{adj $A$}=\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right].$$If det denotes the unique determinant function on $2\times2$ matrices over $K$, show that
(a) $(\text{adj $A$})A = A(\text{adj $A$})=(\det A)I$;
(b) $\det(\text{adj $A$})=\det(A)$;
(c) adj $(A^t)=(\text{adj }A)^t$.
($A^t$ denotes the transpose of $A$.)
Solution:
(a) We have
$$(\text{adj $A$})A=\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right]
\left[\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right]$$$$=\left[\begin{array}{cc}A_{11}A_{22}-A_{12}A_{21} & A_{12}A_{22}-A_{12}A_{22}\\-A_{11}A_{21}+A_{11}A_{21} & -A_{12}A_{21}+A_{11}A_{22}\end{array}\right]$$$$=\left[\begin{array}{cc}A_{11}A_{22}-A_{12}A_{21} & 0\\0 & A_{11}A_{22}-A_{12}A_{21}\end{array}\right]$$$$=\left[\begin{array}{cc}\det(A) & 0\\0 & \det(A)\end{array}\right].$$$$A(\text{adj $A$})=\left[\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right]
\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right]$$$$=\left[\begin{array}{cc}A_{11}A_{22}-A_{12}A_{21} & -A_{11}A_{12}+A_{12}A_{11}\\A_{21}A_{22}-A_{22}A_{21} & -A_{21}A_{12}+A_{22}A_{11}\end{array}\right]$$$$=\left[\begin{array}{cc}\det(A) & 0\\0 & \det(A)\end{array}\right].$$Thus both $(\text{adj $A$})A$ and $A(\text{adj $A$})$ equal $(\det A)I$.
(b) We have
$$\det(\text{adj $A$})=\det\left(\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right]\right)$$$$=A_{11}A_{22}-A_{12}A_{21}=\det(A).$$(c) We have $$\text{adj}(A^t)
=\text{adj}\left(\left[\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right]^t\right)$$$$=\text{adj}\left(\left[\begin{array}{cc}A_{11}&A_{21}\\A_{12}&A_{22}\end{array}\right]\right)$$\begin{equation}
=\left[\begin{array}{cc}A_{22}&-A_{21}\\-A_{12}&A_{11}\end{array}\right]
\label{fjffff97779}
\end{equation}And$$(\text{adj} A)^t
=\left(\text{adj}\left[\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right]\right)^t$$$$=\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right]^t$$\begin{equation}
=\left[\begin{array}{cc}A_{22}&-A_{21}\\-A_{12}&A_{11}\end{array}\right]
\label{fwfwfa000}
\end{equation}
Comparing (\ref{fjffff97779}) and (\ref{fwfwfa000}) gives the result.
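All three parts can also be spot-checked symbolically; sympy's Matrix.adjugate() uses the same classical-adjoint convention as this exercise (the symbol names below are mine).
\begin{verbatim}
# Optional sympy spot-check of (a), (b), (c) for a generic 2x2 matrix.
import sympy as sp

A11, A12, A21, A22 = sp.symbols("A11 A12 A21 A22")
A = sp.Matrix([[A11, A12], [A21, A22]])
adjA = A.adjugate()                      # [[A22, -A12], [-A21, A11]]

print((adjA * A - A.det() * sp.eye(2)).applyfunc(sp.expand))  # zero matrix -> (a)
print((A * adjA - A.det() * sp.eye(2)).applyfunc(sp.expand))  # zero matrix -> (a)
print(sp.expand(adjA.det() - A.det()))                        # 0           -> (b)
print(A.T.adjugate() - adjA.T)                                # zero matrix -> (c)
\end{verbatim}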
Exercise 5.2.4
Let $A$ be a $2\times2$ matrix over a field $F$. Show that $A$ is invertible if and only if $\det A\not=0$. When $A$ is invertible, give a formula for $A^{-1}$.
Solution: We showed in Example 3, page 143, that $\det(A)=A_{11}A_{22}-A_{12}A_{21}$. Therefore the first part was already done in Exercise 8 of Section 1.6 (page 27). We just need a formula for $A^{-1}$. The formula is
$$A^{-1}=\frac1{\det(A)}\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right].$$Checking:
$$A\cdot \frac1{\det(A)}\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right]=\frac1{\det(A)}\left[\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right]\left[\begin{array}{cc}A_{22}&-A_{12}\\-A_{21}&A_{11}\end{array}\right]$$$$=\frac1{\det(A)}\left[\begin{array}{cc}A_{11}A_{22}-A_{12}A_{21}&-A_{11}A_{12}+A_{12}A_{11}\\A_{21}A_{22}-A_{22}A_{21}&-A_{21}A_{12}+A_{22}A_{11}\end{array}\right]$$$$=\frac1{\det(A)}\left[\begin{array}{cc}\det(A)&0\\0&\det(A)\end{array}\right]=\left[\begin{array}{cc}1&0\\0&1\end{array}\right].$$The same computation in the other order (or part (a) of Exercise 3 above) shows that $\frac1{\det(A)}(\text{adj }A)A=I$ as well, so this matrix is indeed $A^{-1}$.
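Here is an optional sympy check of the formula (symbol names are mine; $\det(A)$ is treated as a generic non-zero expression, which is exactly the invertible case).
\begin{verbatim}
# Quick sympy check of the inverse formula (symbols mine).
import sympy as sp

A11, A12, A21, A22 = sp.symbols("A11 A12 A21 A22")
A = sp.Matrix([[A11, A12], [A21, A22]])
Ainv = sp.Matrix([[A22, -A12], [-A21, A11]]) / A.det()

print((A * Ainv).applyfunc(sp.simplify))   # identity matrix
print((Ainv * A).applyfunc(sp.simplify))   # identity matrix
\end{verbatim}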
Exercise 5.2.5
Let $A$ be a $2\times2$ matrix over a field $F$, and suppose that $A^2=0$. Show for each scalar $c$ that $\det(cI-A)=c^2$.
Solution: One has to be careful in proving this not to use implications such as $2x=0$ $\Rightarrow$ $x=0$, or $x^2+y=0$ $\Rightarrow$ $y=0$. These implications are not valid in a general field. However, we will need to use the fact that $xy=0$ $\Rightarrow$ $x=0$ or $y=0$, which is true in any field.
Let $A=\left[\begin{array}{cc}x&y\\z&w\end{array}\right]$. Then
$$A^2=\left[\begin{array}{cc}x^2+yz&xy+yw\\xz+wz&yz+w^2\end{array}\right].$$If $A^2=0$ then
\begin{equation}
x^2+yz=0
\label{k1}
\end{equation}\begin{equation}
y(x+w)=0
\label{k2}
\end{equation}\begin{equation}
z(x+w)=0
\label{k3}
\end{equation}\begin{equation}
yz+w^2=0.
\label{k4}
\end{equation}Now $\det(cI-A)=\det\left[\begin{array}{cc}c-x&-y\\-z&c-w\end{array}\right]=(c-x)(c-w)-yz=c^2-c(x+w)+xw-yz.$
Thus
\begin{equation}
\det(cI-A)=c^2-c(x+w)+\det(A).
\label{f322}
\end{equation}Suppose $x+w\not=0$. Then (\ref{k2}) and (\ref{k3}) imply $y=z=0$. Thus $A=\left[\begin{array}{cc}x&0\\0&w\end{array}\right]$. But then $A^2=\left[\begin{array}{cc}x^2&0\\0&w^2\end{array}\right]$. So if $A^2=0$ then it must be that also $x=w=0$, which contradicts the assumption that $x+w\not=0$.
Thus necessarily $A^2=0$ implies $x+w=0$. This implies $A=\left[\begin{array}{cc}x&y\\z&-x\end{array}\right]$. Thus $\det(A)=-x^2-yz$, which equals zero by (\ref{k1}). Thus $A^2=0$ implies $x+w=0$ and $\det(A)=0$. Thus by (\ref{f322}) $A^2=0$ implies $\det(cI-A)=c^2$.
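For what it's worth, the last two steps can be replayed in sympy (symbols as in the solution): once $x+w=0$ is known, $A^2$ is the scalar matrix $(x^2+yz)I$, so $A^2=0$ forces $yz=-x^2$, and substituting this into $\det(cI-A)$ leaves $c^2$.
\begin{verbatim}
# sympy rendering of the last two steps (symbols as in the solution).
import sympy as sp

x, y, z, c = sp.symbols("x y z c")
A = sp.Matrix([[x, y], [z, -x]])           # trace(A) = 0 already imposed

print(A * A)                               # Matrix([[x**2 + y*z, 0], [0, x**2 + y*z]])
p = sp.expand((c * sp.eye(2) - A).det())   # c**2 - x**2 - y*z
print(p.subs(y * z, -x**2))                # c**2
\end{verbatim}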
Exercise 5.2.6
Let $K$ be a subfield of the complex numbers and $n$ a positive integer. Let $j_1,\dots,j_n$ and $k_1,\dots,k_n$ be positive integers not exceeding $n$. For an $n\times n$ matrix $A$ over $K$ define
$$D(A)=A(j_1,k_1)A(j_2,k_2)\cdots A(j_n,k_n).$$Prove that $D$ is $n$-linear if and only if the integers $j_1,\dots,j_n$ are distinct.
Solution: First assume the integers $j_1,\dots,j_n$ are distinct. Since these $n$ integers all satisfy $1\leq j_i\leq n$, their being distinct implies $\{j_1,\dots,j_n\}=\{1,2,\dots,n\}$. Thus the factors of $A(j_1,k_1)A(j_2,k_2)\cdots A(j_n,k_n)$ can be reordered by row index, so the product can be written as $A(1,m_1)A(2,m_2)\cdots A(n,m_n)$, where $m_1,\dots,m_n$ is a rearrangement of $k_1,\dots,k_n$. It follows from Example 1 on page 142 that $D$ is $n$-linear.
Now assume the $j_i$'s are not all distinct, say exactly $\ell$ of them equal some value $m$, where $\ell\geq2$. Let $A$ be the matrix with all $2$'s in row $m$ and all $1$'s in the other rows, and let $B$ be the matrix of all $1$'s. Then $D(A)=2^{\ell}$ and $D(B)=1$. If $D$ were $n$-linear then, since row $m$ of $A$ is the sum of row $m$ of $B$ with itself and the other rows agree, we would have $D(A)=D(B)+D(B)=2$. But $\ell\geq2$ gives $2^{\ell}\not=2$. Thus $D$ is not $n$-linear.
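The same numerical spot-check used after Exercise 1 applies here (repeated so the snippet is self-contained; the helper names are mine): a product built from distinct row indices passes the row-additivity test, while a repeated row index fails it. Again, this is only a sanity check, not a proof.
\begin{verbatim}
# Numerical spot-check (helper names are mine): build D from the index lists
# (j_1,...,j_n), (k_1,...,k_n) and test additivity in each row.
import numpy as np

def make_D(js, ks):
    """D(A) = A(j_1,k_1)...A(j_n,k_n), indices 1-based as in the text."""
    return lambda A: float(np.prod([A[j - 1, k - 1] for j, k in zip(js, ks)]))

def looks_row_additive(D, n, trials=200, rng=np.random.default_rng(1)):
    for _ in range(trials):
        B = rng.integers(-3, 4, (n, n)).astype(float)
        for i in range(n):
            C = B.copy()
            C[i] = rng.integers(-3, 4, n)     # B and C differ only in row i
            A = B.copy()
            A[i] = B[i] + C[i]                # row i of A = row i of B + row i of C
            if not np.isclose(D(A), D(B) + D(C)):
                return False
    return True

print(looks_row_additive(make_D([2, 1, 3], [1, 1, 2]), 3))  # True  (distinct j's)
print(looks_row_additive(make_D([1, 1, 3], [1, 2, 2]), 3))  # False (j_1 = j_2)
\end{verbatim}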
Exercise 5.2.7
Let $K$ be a commutative ring with identity. Show that the determinant function on $2\times2$ matrices $A$ over $K$ is alternating and $2$-linear as a function of the columns of $A$.
Solution: We have$$\det\left[\begin{array}{cc}ra_1+a_2&b\\rc_1+c_2&d\end{array}\right]$$$$=(ra_1+a_2)d-(rc_1+c_2)b$$$$=ra_1d+a_2d-rc_1b-c_2b$$$$=r(a_1d-bc_1)+(a_2d-bc_2)$$$$=r\det\left[\begin{array}{cc}a_1&b\\c_1&d\end{array}\right]+\det\left[\begin{array}{cc}a_2&b\\c_2&d\end{array}\right].$$Likewise
$$\det\left[\begin{array}{cc}a&rb_1+b_2\\c&rd_1+d_2\end{array}\right]$$$$=a(rd_1+d_2)-(rb_1+b_2)c$$$$=rad_1+ad_2-rcb_1-cb_2$$$$=(rad_1-rcb_1)+(ad_2-cb_2)$$$$=r\det\left[\begin{array}{cc}a&b_1\\c&d_1\end{array}\right]+\det\left[\begin{array}{cc}a&b_2\\c&d_2\end{array}\right].$$Thus the determinant function is $2$-linear on columns. Now
consider the alternating property. If the two columns are equal, then$$\det\left[\begin{array}{cc}a&a\\c&c\end{array}\right]=ac-ca=0,$$so $\det$ vanishes whenever the two columns of the matrix are equal. Moreover, interchanging the two columns changes the sign:$$\det\left[\begin{array}{cc}b&a\\d&c\end{array}\right]=bc-ad=-(ad-bc)=-\det\left[\begin{array}{cc}a&b\\c&d\end{array}\right].$$Thus the determinant function is alternating on columns.
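Both computations can be cross-checked symbolically with sympy (the symbol names are mine).
\begin{verbatim}
# sympy cross-check of the two column computations above (symbol names mine).
import sympy as sp

a, b, c, d, r = sp.symbols("a b c d r")
a1, a2, b1, b2, c1, c2, d1, d2 = sp.symbols("a1 a2 b1 b2 c1 c2 d1 d2")
det = lambda rows: sp.Matrix(rows).det()

# Linearity in the first column and in the second column:
print(sp.expand(det([[r*a1 + a2, b], [r*c1 + c2, d]])
                - r*det([[a1, b], [c1, d]]) - det([[a2, b], [c2, d]])))   # 0
print(sp.expand(det([[a, r*b1 + b2], [c, r*d1 + d2]])
                - r*det([[a, b1], [c, d1]]) - det([[a, b2], [c, d2]])))   # 0

# Alternating: det vanishes on equal columns and changes sign under a swap.
print(det([[a, a], [c, c]]))                                       # 0
print(sp.expand(det([[b, a], [d, c]]) + det([[a, b], [c, d]])))    # 0
\end{verbatim}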
Exercise 5.2.8
Let $K$ be a commutative ring with identity. Define a function $D$ on $3\times3$ matrices over $K$ by the rule
$$D(A)=A_{11}\det\left[\begin{array}{cc}A_{22}&A_{23}\\A_{32}&A_{33}\end{array}\right]-A_{12}\det\left[\begin{array}{cc}A_{21}&A_{23}\\A_{31}&A_{33}\end{array}\right]+A_{13}\det\left[\begin{array}{cc}A_{21}&A_{22}\\A_{31}&A_{32}\end{array}\right].$$Show that $D$ is alternating and $3$-linear as a function of the columns of $A$.
Solution: This is exactly Theorem 1, page 146, but with respect to columns instead of rows. The statement and proof go through without change except for changing the word “row” to “column” everywhere. To make it work, however, we must know that $\det$ is an alternating $2$-linear function on the columns of $2\times2$ matrices over $K$. This is exactly what was shown in the previous exercise.
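Since the displayed $D$ is the cofactor expansion of the determinant along the first row, here is an optional sympy consistency check (the names are mine) that $D$ agrees with the full $3\times3$ determinant; it is only a cross-check, not a substitute for the argument above.
\begin{verbatim}
# Consistency check with sympy (names mine): D as defined above is the cofactor
# expansion of det along the first row, so it should agree with det(A).
import sympy as sp

A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f"A{i + 1}{j + 1}"))
D = (A[0, 0] * A.extract([1, 2], [1, 2]).det()
     - A[0, 1] * A.extract([1, 2], [0, 2]).det()
     + A[0, 2] * A.extract([1, 2], [0, 1]).det())

print(sp.expand(D - A.det()))   # 0
\end{verbatim}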
Exercise 5.2.9
Let $K$ be a commutative ring with identity and $D$ an alternating $n$-linear function on $n\times n$ matrices over $K$. Show that
(a) $D(A)=0$, if one of the rows of $A$ is $0$.
(b) $D(B)=D(A)$, if $B$ is obtained from $A$ by adding a scalar multiple of one row of $A$ to another.
Solution: Let $A$ be an $n\times n$ matrix with one row all zeros, say the $i^{\text{th}}$ row $\alpha_i$ is all zeros. Then $\alpha_i+\alpha_i=\alpha_i$, so by the linearity of $D$ in the $i^{\text{th}}$ row we have $$D(A)=D(A)+D(A).$$ Subtracting $D(A)$ from both sides gives $D(A)=0$.
Now suppose $B$ is obtained from $A$ by adding a scalar multiple of one row of $A$ to another. Say row $i$ of $B$ equals $\alpha_i+c\alpha_j$, where $j\not=i$, $\alpha_i$ is the $i^{\text{th}}$ row of $A$ and $\alpha_j$ is the $j^{\text{th}}$. Then the rows of $B$ are $\alpha_1,\dots,\alpha_{i-1},\alpha_i+c\alpha_j,\alpha_{i+1},\dots,\alpha_n$. Thus, by the linearity of $D$ in the $i^{\text{th}}$ row,
$$D(B)=D(\alpha_1,\dots,\alpha_{i-1},\alpha_i+c\alpha_j,\alpha_{i+1},\dots,\alpha_n)$$$$=D(\alpha_1,\dots,\alpha_{i-1},\alpha_i,\alpha_{i+1},\dots,\alpha_n)
+c\cdot D(\alpha_1,\dots,\alpha_{i-1},\alpha_j,\alpha_{i+1},\dots,\alpha_n).$$The first term is $D(A)$. The second term is zero because $D$ is alternating and the matrix appearing there has two equal rows: $\alpha_j$ occupies both the $i^{\text{th}}$ and the $j^{\text{th}}$ positions. Thus $D(B)=D(A)$.
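Both facts are easy to illustrate numerically for $\det$, which is one particular alternating $n$-linear function (the $4\times4$ size, the random matrix, and the scalar $7$ below are arbitrary choices of mine).
\begin{verbatim}
# Illustration of (a) and (b) for det, one particular alternating n-linear
# function, on a random 4x4 integer matrix.
import sympy as sp

A = sp.randMatrix(4, 4, min=-5, max=5, seed=0)

Z = A.copy()
Z[2, :] = sp.zeros(1, 4)                  # make row 3 zero          -> part (a)
B = A.copy()
B[1, :] = B[1, :] + 7 * A[3, :]           # add 7*(row 4) to row 2   -> part (b)

print(Z.det())                            # 0
print(B.det() == A.det())                 # True
\end{verbatim}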
Exercise 5.2.10
Let $F$ be a field, $A$ a $2\times3$ matrix over $F$, and $(c_1,c_2,c_3)$ the vector in $F^3$ defined by
$$c_1=\left|\begin{array}{cc}A_{12}&A_{13}\\A_{22}&A_{23}\end{array}\right|,\quad c_2=\left|\begin{array}{cc}A_{13}&A_{11}\\A_{23}&A_{21}\end{array}\right|,\quad c_3=\left|\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right|.$$Show that
(a) $\text{rank}(A)=2$ if and only if $(c_1,c_2,c_3)\not=0$;
(b) if $A$ has rank $2$, then $(c_1,c_2,c_3)$ is a basis for the solution space of the system of equations $AX=0$.
Solution: We will use the fact that the rank of a $2\times2$ matrix is $2$ $\Leftrightarrow$ the matrix is invertible $\Leftrightarrow$ the determinant is non-zero. For the first equivalence: a matrix $M$ with rank $2$ has two linearly independent rows, so the row space of $M$ is all of $F^2$, which is the same as the row space of the identity matrix. Thus by the Corollary on page 58, $M$ is row-equivalent to the identity matrix, and so by Theorem 12 (page 23) $M$ is invertible. (Conversely, an invertible matrix is row-equivalent to the identity, so its row space is all of $F^2$ and its rank is $2$.) The second equivalence is Exercise 4 of Section 5.2 (page 149), done above.
(a) If $\text{rank}(A)=0$ then $A$ is the zero matrix and clearly $c_1=c_2=c_3=0$.
If $\text{rank}(A)=1$ then one row of $A$ must be a multiple of the other row. The same is then true for each of the $2\times2$ matrices
\begin{equation}
\left[\begin{array}{cc}A_{12}&A_{13}\\A_{22}&A_{23}\end{array}\right],\quad \left[\begin{array}{cc}A_{13}&A_{11}\\A_{23}&A_{21}\end{array}\right],\quad \left[\begin{array}{cc}A_{11}&A_{12}\\A_{21}&A_{22}\end{array}\right]
\label{fffhnbv}
\end{equation}because each one is obtained from $A$ by deleting one column (and in the case of the second one, switching the two remaining columns). Thus each of them has rank $\leq 1$. Therefore the determinant of each of these three matrices is zero. Thus $(c_1,c_2,c_3)$ is the zero vector.
If $\text{rank}(A)=2$ we must show that at least one of the matrices in (\ref{fffhnbv}) has non-zero determinant. We prove the contrapositive: suppose $c_1=c_2=c_3=0$; we show $\text{rank}(A)\leq1$. If the first row of $A$ is zero then $\text{rank}(A)\leq1$ immediately. Otherwise some entry of the first row is non-zero; say $A_{11}\not=0$ (the cases $A_{12}\not=0$ and $A_{13}\not=0$ are handled the same way, using the two minors that involve that column). Let $c=A_{21}/A_{11}$. Then $c_3=0$ gives $A_{22}=cA_{12}$, and $c_2=0$ gives $A_{23}=cA_{13}$, while $A_{21}=cA_{11}$ by the choice of $c$. Thus the second row of $A$ is $c$ times the first row, so $\text{rank}(A)\leq1$. Therefore $\text{rank}(A)=2$ forces at least one of the determinants $c_1,c_2,c_3$ to be non-zero. Combined with the rank $0$ and rank $1$ cases above, this proves (a).
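Here is a quick numerical illustration of part (a) with sympy (the helper name and the two sample matrices are mine).
\begin{verbatim}
# Numerical illustration of part (a): (c1, c2, c3) is non-zero exactly for
# the rank-2 example below.
import sympy as sp

def c_vector(A):
    c1 = sp.Matrix([[A[0, 1], A[0, 2]], [A[1, 1], A[1, 2]]]).det()
    c2 = sp.Matrix([[A[0, 2], A[0, 0]], [A[1, 2], A[1, 0]]]).det()
    c3 = sp.Matrix([[A[0, 0], A[0, 1]], [A[1, 0], A[1, 1]]]).det()
    return sp.Matrix([c1, c2, c3])

for A in (sp.Matrix([[1, 0, 2], [2, 0, 1]]),    # rank 2
          sp.Matrix([[1, 2, 3], [2, 4, 6]])):   # rank 1
    print(A.rank(), c_vector(A).T)
# prints:  2 Matrix([[0, 3, 0]])   then   1 Matrix([[0, 0, 0]])
\end{verbatim}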
(b) Identify $F^3$ with the space of $3\times1$ column vectors and $F^2$ with the space of $2\times1$ column vectors. Let $T:F^3\rightarrow F^2$ be the linear transformation given by $TX=AX$. Then by Theorem 2, page 71 (the rank-nullity theorem), $\text{rank}(T)+\text{nullity}(T)=3$. It was shown in the proof of Theorem 3, page 72 (the third displayed equation in the proof) that $$\text{rank}(T)=\text{column rank}(A).$$ And the null space of $T$ is exactly the solution space of $AX=0$, so $\text{nullity}(T)$ is the dimension of that solution space. Thus $\text{column rank}(A)$ plus the dimension of the solution space of $AX=0$ equals three. So if $\text{rank}(A)=2$ then the solution space of $AX=0$ has dimension one, and any non-zero vector in it is a basis for it. Thus we need only show that $(c_1,c_2,c_3)$ lies in this space. In other words we must show
$$\left[\begin{array}{ccc}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\end{array}\right]\left[\begin{array}{c}c_1\\c_2\\c_3\end{array}\right]=0.$$It feels like we’re supposed to apply Exercise 8 to the following matrix
$$\left[\begin{array}{ccc}A_{11}&A_{12}&A_{13}\\A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\end{array}\right],$$but the problem with that is that we do not know that a function which is alternating and $3$-linear in the columns is necessarily zero on a matrix with a repeated row. That is true, but rather than prove it, it's easier to just prove the claim directly:
$$c_1=A_{12}A_{23}-A_{22}A_{13}$$$$c_2=A_{13}A_{21}-A_{11}A_{23}$$$$c_3=A_{11}A_{22}-A_{12}A_{21}.$$Therefore
$$\left[\begin{array}{ccc}A_{11}&A_{12}&A_{13}\\A_{21}&A_{22}&A_{23}\end{array}\right]\left[\begin{array}{c}c_1\\c_2\\c_3\end{array}\right]$$$$=\left[\begin{array}{c}A_{11}c_1+A_{12}c_2+A_{13}c_3\\A_{21}c_1+A_{22}c_2+A_{23}c_3\end{array}\right]$$Expanding the first entry
$$A_{11}c_1+A_{12}c_2+A_{13}c_3$$$$=A_{11}(A_{12}A_{23}-A_{22}A_{13})+A_{12}(A_{13}A_{21}-A_{11}A_{23})+A_{13}(A_{11}A_{22}-A_{12}A_{21})$$$$=A_{11}A_{12}A_{23}-A_{11}A_{22}A_{13}+A_{12}A_{13}A_{21}-A_{12}A_{11}A_{23}+A_{13}A_{11}A_{22}-A_{13}A_{12}A_{21}$$Matching up terms, we see everything cancels.
$$=\underset{\text{term 1}}{A_{11}A_{12}A_{23}}-\underset{\text{term 2}}{A_{11}A_{22}A_{13}}+\underset{\text{term 3}}{A_{12}A_{13}A_{21}}-\underset{\text{term 1}}{A_{12}A_{11}A_{23}}+\underset{\text{term 2}}{A_{13}A_{11}A_{22}}-\underset{\text{term 3}}{A_{13}A_{12}A_{21}}=0.$$Expanding the second entry
$$A_{21}c_1+A_{22}c_2+A_{23}c_3$$$$=A_{21}(A_{12}A_{23}-A_{22}A_{13})+A_{22}(A_{13}A_{21}-A_{11}A_{23})+A_{23}(A_{11}A_{22}-A_{12}A_{21})$$$$=A_{21}A_{12}A_{23}-A_{21}A_{22}A_{13}+A_{22}A_{13}A_{21}-A_{22}A_{11}A_{23}+A_{23}A_{11}A_{22}-A_{23}A_{12}A_{21}$$Matching up terms, we see everything cancels.
$$=\underset{\text{term 1}}{A_{21}A_{12}A_{23}}-\underset{\text{term 2}}{A_{21}A_{22}A_{13}}+\underset{\text{term 2}}{A_{22}A_{13}A_{21}}-\underset{\text{term 3}}{A_{22}A_{11}A_{23}}+\underset{\text{term 3}}{A_{23}A_{11}A_{22}}-\underset{\text{term 1}}{A_{23}A_{12}A_{21}}=0.$$
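The final computation can also be confirmed symbolically with sympy (the variable names are mine): for a generic $2\times3$ matrix, $A$ times the column vector $(c_1,c_2,c_3)$ is identically zero.
\begin{verbatim}
# Symbolic confirmation of the final computation for a generic 2x3 matrix.
import sympy as sp

A = sp.Matrix(2, 3, lambda i, j: sp.Symbol(f"A{i + 1}{j + 1}"))
c1 = sp.Matrix([[A[0, 1], A[0, 2]], [A[1, 1], A[1, 2]]]).det()
c2 = sp.Matrix([[A[0, 2], A[0, 0]], [A[1, 2], A[1, 0]]]).det()
c3 = sp.Matrix([[A[0, 0], A[0, 1]], [A[1, 0], A[1, 1]]]).det()

print((A * sp.Matrix([c1, c2, c3])).applyfunc(sp.expand))   # Matrix([[0], [0]])
\end{verbatim}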