1009_(GTM175)An Introduction to Knot Theory
26
of two trivial 2-string tangles. The double cover of a ball branched over a trivial 2-string tangle is a solid torus. Thus the double cover of \( {S}^{3} \) branched over the link is two solid tori with their boundaries glued together. That is a lens space or, exceptionally, \( {S}^{3} \) or \( {S}^{1} \times {S}^{2} \). In fact, \( {L}_{p, q} \) is the double cover of \( {S}^{3} \) branched over the \( \left( {p, q}\right) \) 2-bridge link. It has been shown that only this link has \( {L}_{p, q} \) as its double branched cover [46].

Further facts about covering spaces will be useful in Chapter 11. As has been noted, a covering map \( p : \left( {E,{e}_{0}}\right) \rightarrow \left( {B,{b}_{0}}\right) \) induces an injection on fundamental groups, and \( {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \) is called the group of the covering. The chief further result is that provided \( B \) is "semi-locally simply connected" (locally contractible will do fine), then for any given subgroup \( G \) of \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \) there exists a covering space with \( G \) as group. Conjugate subgroups produce equivalent (base point free) covers in the sense that \( p : E \rightarrow B \) and \( {p}^{\prime } : {E}^{\prime } \rightarrow B \) are equivalent if there is a homeomorphism \( h : E \rightarrow {E}^{\prime } \) such that \( {p}^{\prime }h = p \). All this is fairly simple once one can do the theory when \( G \) is the trivial one-element subgroup of \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \).

Definition 7.11. A covering \( p : \widetilde{B} \rightarrow B \) in which \( \widetilde{B} \) is simply connected is called the universal covering of \( B \).

Note that by Lemma 7.8 a space \( B \) has at most one universal covering up to equivalence. The aim now will be to show that a path-connected and locally path-connected space \( B \), with one extra property, does have a simply connected covering space.
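A minimal numerical illustration of Definition 7.11 (a sketch; the standard example \( B = {S}^{1} \), \( \widetilde{B} = \mathbb{R} \), \( p\left( x\right) = {e}^{{2\pi }{ix}} \) is assumed here, not notation from the text): the real line is the universal covering of the circle, and the integer translations are covering translations acting freely.

```python
import cmath

def p(x):
    # The universal covering map p : R -> S^1, p(x) = exp(2*pi*i*x).
    return cmath.exp(2j * cmath.pi * x)

x = 0.3
for n in range(-3, 4):
    # Each integer translation x -> x + n satisfies p(x + n) = p(x),
    # so it is a homeomorphism of R covering the identity of S^1.
    assert abs(p(x + n) - p(x)) < 1e-12

# The Z-action is free: a non-zero translation moves every point, matching
# the free action of the group Pi_1(S^1) on the universal cover below.
assert all(x + n != x for n in range(-3, 4) if n != 0)
```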
The definition of the extra property is as follows:

Definition 7.12. A space \( B \) is semi-locally simply connected if for each \( b \in B \) there exists a neighbourhood \( V \) of \( b \) with the property that every closed curve in \( V \) is null-homotopic in \( B \).

Note that if \( B \) has this property, then the set \( V \) can be taken to be open and path-connected. The property is then the same as the assertion that inclusion induces the constant map \( {\Pi }_{1}\left( V\right) \rightarrow {\Pi }_{1}\left( B\right) \). Suppose there does exist a covering map \( p : E \rightarrow B \) for some simply connected space \( E \). If \( b \in B \), there is (from the definition of a covering map) an open set \( U \subset E \) such that \( p \mid U : U \rightarrow V \) is a homeomorphism onto some open neighbourhood \( V \) of \( b \). Then \( p{i}_{U} = {i}_{V}\left( {p \mid U}\right) \), where \( {i}_{U} \) and \( {i}_{V} \) are inclusion maps. Of course, \( {\left( {i}_{U}\right) }_{ \star } : {\Pi }_{1}\left( U\right) \rightarrow {\Pi }_{1}\left( E\right) \) is constant because \( {\Pi }_{1}\left( E\right) \) is trivial, and as \( {\left( p \mid U\right) }_{ \star } \) is an isomorphism, it follows that \( {\left( {i}_{V}\right) }_{ \star } \) is the trivial constant map. Thus the semi-locally simply connected condition is certainly needed if \( B \) is to have a simply connected cover. Note that if \( B \) is a manifold or a finite complex, it certainly has this property (as \( V \) can be taken to be contractible).

Theorem 7.13. Let \( B \) be a path-connected, locally path-connected, semi-locally simply connected space. Then there exists a simply connected space \( \widetilde{B} \) and covering map \( p : \widetilde{B} \rightarrow B \).
Furthermore, the group \( {\Pi }_{1}\left( B\right) \) acts freely as a group of homeomorphisms on (the left of) \( \widetilde{B} \), the quotient map \( q : \widetilde{B} \rightarrow \widetilde{B}/{\Pi }_{1}\left( B\right) \) is a covering map and there is a homeomorphism \( h : \widetilde{B}/{\Pi }_{1}\left( B\right) \rightarrow B \) such that \( {hq} = p \).

Proof. Let \( {b}_{0} \in B \) be a base point and let \( X \) be the set of all paths \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow B \) such that \( \alpha \left( 0\right) = {b}_{0} \). Define an equivalence relation on \( X \) by letting \( \alpha \sim \beta \) if and only if \( \alpha \left( 1\right) = \beta \left( 1\right) \) and \( \alpha \approx \beta \), where " \( \approx \) " denotes homotopy of paths in \( B \) keeping the end points \( \{ 0,1\} \) fixed. Let \( \widetilde{B} \) be the quotient set \( X/ \sim \) and define \( p : \widetilde{B} \rightarrow B \) by \( p\left\lbrack \alpha \right\rbrack = \alpha \left( 1\right) \), where \( \left\lbrack \alpha \right\rbrack \) is the equivalence class of \( \alpha \). Suppose that \( \alpha \in X \) and that \( V \) is an open neighbourhood of \( \alpha \left( 1\right) \) in \( B \). Let \( \langle \alpha, V\rangle \subset \widetilde{B} \) be defined by \[ \langle \alpha, V\rangle = \{ \left\lbrack {\alpha \cdot \beta }\right\rbrack : \beta : \left\lbrack {0,1}\right\rbrack \rightarrow V,\beta \left( 0\right) = \alpha \left( 1\right) \} . \] Take all possible \( \langle \alpha, V\rangle \) to be a base for a topology on \( \widetilde{B} \) (so that a subset of \( \widetilde{B} \) is defined to be open if and only if it is a union of some of these basic sets).
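For \( B = {S}^{1} \) this construction can be made completely explicit (a sketch; the encoding of a path by a continuous angle function is an assumption of the example, not part of the proof): a path \( \alpha \) from the base point \( 1 \) can be written \( \alpha \left( t\right) = {e}^{{2\pi i}f\left( t\right) } \) with \( f\left( 0\right) = 0 \), and its class \( \left\lbrack \alpha \right\rbrack \) is determined by the single real number \( f\left( 1\right) \), so \( \widetilde{B} \) is identified with \( \mathbb{R} \).

```python
import cmath

def p_of_class(f1):
    # p[alpha] = alpha(1) for the class of the path t -> exp(2*pi*i*f(t)),
    # a class being determined by the lift endpoint f(1).
    return cmath.exp(2j * cmath.pi * f1)

# f(1) = 0.25 and f(1) = 1.25 give paths ending at the same point of S^1 ...
assert abs(p_of_class(0.25) - p_of_class(1.25)) < 1e-12
# ... but they are distinct points of B~: the classes (windings) differ.
assert 0.25 != 1.25

# A basic open set <alpha, V>, for V a small arc around alpha(1), corresponds
# to the interval (f(1) - eps, f(1) + eps); the other classes over V give the
# disjoint translated sheets (f(1) - eps + n, f(1) + eps + n).
eps = 0.1
sheets = [(0.25 - eps + n, 0.25 + eps + n) for n in range(3)]
assert all(lo2 > hi1 for (_, hi1), (lo2, _) in zip(sheets, sheets[1:]))
```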
Note that if \( \left\lbrack \alpha \right\rbrack \in \left\langle {{\alpha }_{1},{V}_{1}}\right\rangle \cap \left\langle {{\alpha }_{2},{V}_{2}}\right\rangle \), then \[ \left\langle {\alpha ,{V}_{1} \cap {V}_{2}}\right\rangle \subset \left\langle {{\alpha }_{1},{V}_{1}}\right\rangle \cap \left\langle {{\alpha }_{2},{V}_{2}}\right\rangle \] so that the given sets do form a base of a genuine topology. Now \( p\langle \alpha, V\rangle \) is the path component of \( V \) that contains \( \alpha \left( 1\right) \). This is open in \( B \), since \( B \) is locally path-connected, so \( p \) maps open sets to open sets. Further, if \( V \) is open in \( B \), then \[ {p}^{-1}V = \mathop{\bigcup }\limits_{\alpha }\{ \langle \alpha, V\rangle : \alpha \left( 1\right) \in V\} . \] By definition this is open, and so \( p \) is continuous. The space \( \widetilde{B} \) is path-connected, since \( \left\lbrack \alpha \right\rbrack \) is joined to the class of the path that is constant at \( {b}_{0} \) by \( \left\{ {\left\lbrack {\alpha }_{s}\right\rbrack : s \in \left\lbrack {0,1}\right\rbrack }\right\} \), where \( {\alpha }_{s}\left( t\right) = \alpha \left( {st}\right) \). If \( V \) is open in \( B \) and \( \left\lbrack \gamma \right\rbrack \in \langle \alpha, V\rangle \), then \( \langle \gamma, V\rangle = \langle \alpha, V\rangle \). Thus any \( \langle \alpha, V\rangle \) and \( \langle \beta, V\rangle \) are either disjoint or identical, and so \( {p}^{-1}V \) is the disjoint union of open sets of the form \( \langle \alpha, V\rangle \). If \( b \in B \), use the given properties of \( B \) to select an open path-connected neighbourhood \( V \) of \( b \) for which \( {\Pi }_{1}\left( V\right) \rightarrow {\Pi }_{1}\left( B\right) \) is the trivial map. Then \( p \) is injective on \( \langle \alpha, V\rangle \).
This is because if \( p\left\lbrack {\alpha \cdot \beta }\right\rbrack = p\left\lbrack {\alpha \cdot {\beta }^{\prime }}\right\rbrack \), where \( \beta \) and \( {\beta }^{\prime } \) are paths in \( V \) with the same end points, then \( \beta \approx {\beta }^{\prime } \), so that \( \alpha \cdot \beta \approx \alpha \cdot {\beta }^{\prime } \) and hence \( \left\lbrack {\alpha \cdot \beta }\right\rbrack = \left\lbrack {\alpha \cdot {\beta }^{\prime }}\right\rbrack \) . Thus \( p : \langle \alpha, V\rangle \rightarrow V \) is a homeomorphism and, as \( {p}^{-1}\left( V\right) \) is a disjoint union of sets of the form \( \langle \alpha, V\rangle, p \) is a covering map. Suppose that \( \left\lbrack \gamma \right\rbrack \in {\Pi }_{1}\left( {B,{b}_{0}}\right) \), where \( \gamma \) is a loop based at \( {b}_{0} \) . Define a map \( \left\lbrack \gamma \right\rbrack : \widetilde{B} \rightarrow \widetilde{B} \) by \( \left\lbrack \gamma \right\rbrack \left( \left\lbrack \alpha \right\rbrack \right) = \left\lbrack {\gamma \cdot \alpha }\right\rbrack \) . This gives a well-defined map that sends basic open sets to basic open sets and, as it has \( \left\lbrack \overline{\gamma }\right\rbrack \) as an inverse, it is a homeomorphism. Thus the group \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \) acts on \( \widetilde{B} \) . Note that \( \left\lbrack {\gamma \cdot \alpha }\right\rbrack = \left\lbrack \alpha \right\rbrack \) only if \( \left\lbrack \gamma \right\rbrack \) is the identity of \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \) so that the action is a free action. The projection \( p : \widetilde{B} \rightarrow B \) commutes with this action and so induces a map \( h : \widetilde{B}/{\Pi }_{1}\left( {B,{b}_{0}}\right) \rightarrow B \) . If \( q \) denotes the quotient map \( q : \widetilde{B} \rightarrow \widetilde{B}/{\Pi }_{1}\left( {B,{b}_{0}}\right) \), then \( {hq} = p \) . 
As \( p \) has these properties, this \( h \) is continuous and open, and it is easy to check that \( h \) is a bijection. Thus \( h \) is a homeomorphism, and the fact that \( p \) is a covering implies that \( q \) is a covering. Finally, it is necessary to check that \( \widetilde{B} \) is simply connected. By the injectivity of \( {p}_{ \star } \), it suffices to show that for any loop \( \gamma : \left\lbrack {0,1}\right\rbrack \rightarrow \widetilde{B} \), the loop \( {p\gamma } \) is null-homotopic in \( B \) by a homotopy that keeps \( \{ 0,1\} \) fixed. For each \( t \), \( \gamma \left( t\right) = \left\lbrack {\alpha }_{t}\right\rbrack \) for some path \( {\alpha }_{t} \) in \( B \) from \( {b}_{0} \) to \( {p\gamma }\left( t\right) \). By the continuity of \( \gamma \) and the compactness of \( \left\lbrack {0,1}\right\rbrack \), there is a dissection of the interval \[ 0 = {t}_{0} \leq {t}_{1} \leq \ldots \leq {t}_{n} = 1 \] so that \( \gamma \left( \left\lbrack {{t}_{i},{t}_{i + 1}}\right\rbrack \right) \subset \left\langle {{\alpha }_{{t}_{i}},{V}_{i}}\right\rangle \) for each \( i \), each \( {V}_{i} \) is open in \( B \) and path-connected, and the map \( {\Pi }_{1}\left( {V}_{i}\right) \rightarrow {\Pi }_{1}\left( B\right) \) induced by inclusion is the constant map. Now, \( {\alpha }_{{t}_{i + 1}} \approx {\alpha }_{{t}_{i}} \cdot {\beta }_{i} \) for some path \( {\beta }_{i} \) in \( {V}_{i} \) from \( {\alpha }_{{t}_{i}}\left( 1\right) \) to \( {\alpha }_{{t}_{i + 1}}\left( 1\right) \). Because \( {\Pi }_{1}\left( {V}_{i}\right) \rightarrow {\Pi }_{1}\left( B\right) \) is constant, \( {\beta }_{i} \) can be chosen to be any path in \( {V}_{i} \) between these end points.
Thus, choose \( {\beta }_{i} \) to be a reparametrisation of the restriction of \( {p\gamma } \) to the subinterval \( \left\lbrack {{t}_{i},{t}_{i + 1}}\right\rbrack \). But \( {\alpha }_{{t}_{i + 1}} \approx {\alpha }_{{t}_{i}} \cdot {\beta }_{i} \) implies that \( \overline{{\alpha }_{{t}_{i}}} \cdot {\alpha }_{{t}_{i + 1}} \approx {\beta }_{i} \), and so \( {p\gamma } \approx {\beta }_{0} \cdot {\beta }_{1} \cdot \cdots \cdot {\beta }_{n - 1} \approx \overline{{\alpha }_{{t}_{0}}} \cdot {\alpha }_{{t}_{n}} \), and as \( \left\lbrack {\alpha }_{{t}_{0}}\right\rbrack = \left\lbrack {\alpha }_{{t}_{n}}\right\rbrack \), this is homotopic to a constant loop keeping \( \{ 0,1\} \) fixed.

A further remark is in order using the notation of the above proof. Suppose that \( \left\lbrack \gamma \right\rbrack \in {\Pi }_{1}\left( {B,{b}_{0}}\right) \). By definition of the group action, \( \left\lbrack \gamma \right\rbrack \langle \alpha, V\rangle = \langle \gamma \cdot \alpha, V\rangle \). Suppose that \( {\Pi }_{1}\left( V\right) \rightarrow {\Pi }_{1}\left( B\right) \) is constant. If \( \langle \gamma \cdot \alpha, V\rangle = \langle \alpha, V\rangle \), then \( \gamma \cdot \alpha \approx \alpha \); this means that \( \left\lbrack \gamma \right\rbrack \) is the identity element of \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \). Otherwise \( \left\lbrack \gamma \right\rbrack \langle \alpha, V\rangle \) and \( \langle \alpha, V\rangle \) are disjoint. Thus the action of \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \) (or of any of its subgroups) on the universal cover \( \widetilde{B} \) of \( B \) has the following property: Each point of \( \widetilde{B} \) has an open neighbourhood that is disjoint from every one of its translates by a non-trivial element of the group. This property will now be explored.

Theorem 7.14. Suppose that a group \( G \) acts as a group of homeomorphisms on a path-connected, locally path-connected space \( Y \).
Suppose that each \( y \) belonging to \( Y \) has an open neighbourhood \( U \) such that \( U \cap {gU} = \varnothing \) for all \( g \in G - \{ 1\} \). Then the quotient map \( q : Y \rightarrow Y/G \) is a covering map. If \( Y \) is simply connected, then \( {\Pi }_{1}\left( {Y/G}\right) \) is isomorphic to \( G \).

Proof. If \( y \in Y \), there is an open neighbourhood \( U \) of \( y \) such that \( U \cap {gU} = \varnothing \) for all \( g \in G - \{ 1\} \). Now \( {q}^{-1}\left( {qU}\right) = \mathop{\bigcup }\limits_{{g \in G}}{gU} \). This is open because each \( {gU} \) is open (because \( g \) is a homeomorphism). Hence \( {qU} \) is open in the quotient topology on \( Y/G \). Similarly, if \( {U}^{\prime } \) is any open subset of \( U \), then \( q{U}^{\prime } \) is open. The map \( q : U \rightarrow {qU} \) is an injection because \( U \cap {gU} = \varnothing \) for all \( g \neq 1 \), and so it is a homeomorphism. Of course, \( q{g}^{-1} = q \), so that \( q : {gU} \rightarrow {qU} \) is also a homeomorphism. Thus \( q \) is a covering map. Suppose now that \( Y \) is simply connected. Let \( {y}_{0} \) be a base point in \( Y \) and let \( g \) belong to \( G \). Define a function \( \phi : G \rightarrow {\Pi }_{1}\left( {Y/G, q\left( {y}_{0}\right) }\right) \) as follows: Let \( \alpha \) be a path in \( Y \) from \( {y}_{0} \) to \( g{y}_{0} \) and let \( \phi \left( g\right) = \left\lbrack {q\alpha }\right\rbrack \). If \( \beta \) is another such path, \( \alpha \approx \beta \) as \( Y \) is simply connected. So \( \left\lbrack {q\alpha }\right\rbrack = \left\lbrack {q\beta }\right\rbrack \), and \( \phi \) is well defined. Let \( {\alpha }_{1} \) be a path from \( {y}_{0} \) to \( {g}_{1}{y}_{0} \) and \( {\alpha }_{2} \) be a path from \( {y}_{0} \) to \( {g}_{2}{y}_{0} \). Then \( {\alpha }_{1} \cdot {g}_{1}{\alpha }_{2} \) is a path from \( {y}_{0} \) to \( {g}_{1}{g}_{2}{y}_{0} \).
Thus \( \phi \left( {{g}_{1}{g}_{2}}\right) = \left\lbrack {q\left( {{\alpha }_{1} \cdot {g}_{1}{\alpha }_{2}}\right) }\right\rbrack = \left\lbrack {q\left( {\alpha }_{1}\right) \cdot q\left( {\alpha }_{2}\right) }\right\rbrack = \phi \left( {g}_{1}\right) \phi \left( {g}_{2}\right) \), and so \( \phi \) is a group homomorphism. The path lifting property of a covering (Lemma 7.4) implies at once that \( \phi \) is surjective, and the homotopy lifting property (Lemma 7.5) implies it is injective.

This theorem can sometimes be used in an elementary way to determine the fundamental group of a space if that space can easily be expressed as \( Y/G \), where \( G \) acts on a simply connected space \( Y \) as in the theorem. Thus, referring back to Examples 7.2, it is clear that \[ {\Pi }_{1}\left( {S}^{1}\right) \cong \mathbb{Z},\;{\Pi }_{1}\left( {\mathbb{R}{P}^{n}}\right) \cong \mathbb{Z}/2\mathbb{Z},\text{ and }{\Pi }_{1}\left( {L}_{p, q}\right) \cong \mathbb{Z}/p\mathbb{Z}. \] A good exercise is to construct the famous Klein bottle as the quotient of the plane with respect to a group action, and then to use the theorem to determine, as a subgroup of the isometries of the plane, the (non-abelian) fundamental group of the Klein bottle. This theorem must be accompanied by the usual caution that when considering quotient spaces, it is possible that \( Y \) may be Hausdorff but that \( Y/G \) may fail to be Hausdorff.

Suppose that a group \( G \) acts on a space \( Y \) as in the last theorem and that \( H \) is a subgroup of \( G \). Then there is a commutative diagram of maps \[ \begin{array}{ccc} Y & \overset{{q}_{H}}{\longrightarrow } & Y/H \\ {\downarrow 1} & & {\downarrow p} \\ Y & \overset{{q}_{G}}{\longrightarrow } & Y/G \end{array} \] where \( {q}_{H} \) and \( {q}_{G} \) are the two quotient maps and \( p \) is the map that makes the diagram commute (it exists as \( H \subset G \)).
Of course, as the action of \( G \) satisfies the condition in the theorem, so does the action of \( H \), so that \( {q}_{H} \) is a covering map. If \( U \) is an open neighbourhood of \( y \in Y \) such that \( U \cap {gU} = \varnothing \) for all \( g \in G - \{ 1\} \) , then \( {p}^{-1}\left( {{q}_{G}U}\right) = {GU}/H = \mathop{\bigcup }\limits_{{g \in G}}{q}_{H}\left( {HgU}\right) \) . For any right coset \( {Hg} \) of \( H \), the set \( {q}_{H}\left( {HgU}\right) \) is open in \( Y/H \), it projects by \( p \) homeomorphically onto \( {q}_{G}U \), and distinct cosets give distinct open sets in \( Y/H \) . Thus \( p : Y/H \rightarrow Y/G \) is a covering map. By the previous theorem, if \( Y \) is simply connected, then \( H \cong {\Pi }_{1}\left( {Y/H}\right) \), and the inclusion \( H \subset G \) corresponds to the injection \( {p}_{ \star } : {\Pi }_{1}\left( {Y/H}\right) \rightarrow {\Pi }_{1}\left( {Y/G}\right) \) . It might be pleasing if the group action of \( G \) on \( Y \) were to induce a group action on \( Y/H \) with \( Y/G \) as the resulting quotient. For that to happen, one requires that \( {q}_{H}\left( {y}_{1}\right) = {q}_{H}\left( {y}_{2}\right) \) imply that \( {q}_{H}\left( {g{y}_{1}}\right) = {q}_{H}\left( {g{y}_{2}}\right) \) for all \( g \in G \) . Trivially, \( {y}_{1} = h{y}_{2} \) if and only if \( g{y}_{1} = {gh}{g}^{-1}g{y}_{2} \) for all \( g \in G \) . Thus the requirement is that \( H \) should be a normal subgroup of \( G \) . Then the quotient group \( G/H \) does act on \( Y/H \) with quotient space \( Y/G \) . These remarks and the previous two theorems (starting with subgroups of \( {\Pi }_{1}\left( B\right) \) acting on the universal cover \( \widetilde{B} \) ) produce the following theorem: Theorem 7.15. Let \( B \) be a path-connected, locally path-connected, semi-locally simply connected space. 
Then for any subgroup \( G \) of \( {\Pi }_{1}\left( B\right) \), there exists a covering map \( p : {E}_{G} \rightarrow B \), unique up to equivalence, such that \( {p}_{ \star }{\Pi }_{1}\left( {E}_{G}\right) = G \). If \( H \) is a subgroup of \( G \), then \( {E}_{H} \) covers \( {E}_{G} \) and the covering maps compose in a natural way. If \( G \) is a normal subgroup of \( {\Pi }_{1}\left( B\right) \), then \( {\Pi }_{1}\left( B\right) /G \) acts freely on \( {E}_{G} \) and the quotient map is equivalent to \( p \).

When \( G \) is a normal subgroup of \( {\Pi }_{1}\left( B\right) \), the covering \( {E}_{G} \rightarrow B \) is called a regular covering. Note that the universal cover \( \widetilde{B} \) is so called because it covers any other cover of \( B \). From Theorem 7.15, it is clear that understanding all covering spaces of \( B \) is, in some sense, equivalent to understanding all subgroups of \( {\Pi }_{1}\left( B\right) \). That will not always be easy. However, a little practice can be obtained from consideration of covering spaces of the space \( {S}^{1} \vee {S}^{1} \), two circles with one point in common, shown in Figure 7.2(i).
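An \( n \)-sheeted cover of \( {S}^{1} \vee {S}^{1} \) can be encoded by two permutations of the sheets, recording how lifts of the loops \( a \) and \( b \) permute them; a word in \( a \) and \( b \) lies in the group of the cover exactly when its permutation fixes the base sheet. The sketch below uses an illustrative 2-sheeted cover (the permutations are a sample choice, not read off from Figure 7.2).

```python
# Sheets {0, 1}; lifts of a swap the sheets, lifts of b preserve them.
sigma = {
    'a': {0: 1, 1: 0},
    'b': {0: 0, 1: 1},
}
sigma['A'] = {v: k for k, v in sigma['a'].items()}   # a^{-1}
sigma['B'] = {v: k for k, v in sigma['b'].items()}   # b^{-1}

def lift_end(word, sheet=0):
    """Sheet reached by lifting the loop spelt by `word`, starting at `sheet`."""
    for letter in word:
        sheet = sigma[letter][sheet]
    return sheet

def in_group(word):
    # The word lifts to a loop iff it returns to the base sheet.
    return lift_end(word) == 0

assert in_group('aa') and in_group('b') and in_group('aba')  # lift to loops
assert not in_group('a') and not in_group('ab')              # lift to paths

# This subgroup has index 2 (the number of sheets), so it is normal
# and the corresponding covering is regular.
```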
Figure 7.2

The space, of course, covers itself, and the group of the cover is the whole of \( {\Pi }_{1}\left( {{S}^{1} \vee {S}^{1}}\right) \), which is the free group on two generators \( a \) and \( b \). The next six parts of Figure 7.2 show other covering spaces of \( {S}^{1} \vee {S}^{1} \) where the covering maps are defined in a sensible and obvious way. As an exercise, determine in terms of \( a \) and \( b \) the groups of these various covers, and determine whether each covering is regular.

A final useful example concerns manifolds. Suppose \( B \) is now a connected nonorientable manifold. The subgroup of \( {\Pi }_{1}\left( B\right) \) consisting of all elements represented by loops that preserve orientation is a normal subgroup of index 2. The covering space corresponding to this, for which the covering map is a two-to-one map, is an orientable manifold called the orientable double cover of \( B \).

## Exercises

1. Write out the details of a proof of the exactness of the homotopy sequence (7.6) associated with a covering map.

2. Figure 7.1 illustrates a method for finding the Alexander polynomial of the knot \( {4}_{1} \). Use this method to find the Alexander polynomial of \( {6}_{3} \).

3. Let \( B \) be a \( \theta \)-curve, that is, a graph of two vertices and three edges, each edge having the two vertices as its end points. Describe (i) the universal cover of \( B \), (ii) a cover of \( B \) with infinite cyclic fundamental group and (iii) a finite cover of \( B \).

4. Work through the exercises suggested in association with Figure 7.2.

5. What is the orientable double cover of (i) the Möbius band, (ii) the real projective plane, (iii) the Klein bottle and (iv) the connected sum of \( n \) real projective planes?

6. The group \( \mathbb{Z} \oplus \mathbb{Z} \) acts on \( {\mathbb{R}}^{2} \) by \( \left( {m, n}\right) \left( {x, y}\right) = \left( {x + m, y + n}\right) \). The quotient space is a torus, and the quotient map \( q : {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{2}/\mathbb{Z} \oplus \mathbb{Z} \) is the universal covering map of the torus. By considering the projection of the straight line in \( {\mathbb{R}}^{2} \) from the origin to the point \( \left( {p, q}\right) \), show that if \( p \) and \( q \) are coprime, then the element \( \left( {p, q}\right) \in \mathbb{Z} \oplus \mathbb{Z} \equiv {H}_{1}\left( {{S}^{1} \times {S}^{1}}\right) \) is represented by a simple closed curve. By cutting the torus along any given non-separating simple closed curve and noting that the result is an annulus, prove the converse is also true.

7. Find a specific action of a group \( G \) on the plane \( {\mathbb{R}}^{2} \) so that the quotient space \( {\mathbb{R}}^{2}/G \) is a Klein bottle and the action satisfies the condition of Theorem 7.14. Prove that the fundamental group of the Klein bottle is non-abelian.

8. A diagram of a (null-homotopic) simple closed curve \( C \) in a solid torus \( T \) is shown in Figure 6.5. By considering linking numbers between different lifts of \( C \) to the universal cover of \( T \), show that \( C \) is not the boundary of any disc embedded in \( T \). Let \( \bar{C} \) be the curve in \( T \) represented by this diagram reflected in the plane of the paper (that is, with the two crossings changed). Show, again by considering lifts to the universal cover, that there is no orientation preserving (piecewise linear) homeomorphism of \( T \) to itself sending \( C \) to \( \bar{C} \).

9. Suppose that \( p : \widetilde{B} \rightarrow B \) is a universal covering map and \( X \subset \widetilde{B} \). Show that \( p \mid X \) is an injection if and only if \( {gX} \cap X = \varnothing \) for all \( g \in {\Pi }_{1}\left( B\right) \) with \( g \) not equal to the identity. Suppose that \( {\mathbb{R}}^{3} \) is the universal covering space of a closed connected 3-manifold \( M \). Show that any 2-sphere piecewise linearly embedded in \( M \) separates \( M \) into two components, the closure of one of which is a 3-ball.

10. The fundamental group of a graph (a possibly infinite 1-dimensional complex) is a free group. Prove that any subgroup of a free group is a free group.

11. Prove that if knots \( {K}_{1} \) and \( {K}_{2} \) are related by mutation, then the double cover of \( {S}^{3} \) branched over \( {K}_{1} \) and the double cover of \( {S}^{3} \) branched over \( {K}_{2} \) are homeomorphic.

# The Conway Polynomial, Signatures and Slice Knots

The Conway polynomial [20] for an oriented link is really just the Alexander polynomial without the ambiguity concerning multiplication by units of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \). Although that might seem a small improvement, it enables two such polynomials to be added together, which would be meaningless if the signs were in doubt, and this in turn permits a "skein formula" for the Alexander polynomials of links to be produced. The method for this given below uses Seifert matrices as before, but it abandons any interpretation by means of the homology of the infinite cyclic cover. (Use of the \( L \)-matrix of Reidemeister, as in [107] or [108], can also produce this theory.)

Definition 8.1. Suppose that \( F \) is a Seifert surface for an oriented link \( L \) in \( {S}^{3} \).
Suppose there is a solid cylinder, parametrised as \( \left\lbrack {0,1}\right\rbrack \times {D}^{2} \), in \( {S}^{3} \) such that \( \left( {\left\lbrack {0,1}\right\rbrack \times {D}^{2}}\right) \cap F = \{ 0,1\} \times {D}^{2} \), the solid cylinder being on the same side of \( F \) near \( \{ 0,1\} \times {D}^{2} \). Let \( {F}^{\prime } = \left( {F-\{ 0,1\} \times {D}^{2}}\right) \cup \left\lbrack {0,1}\right\rbrack \times \partial {D}^{2} \). Then \( {F}^{\prime } \) is said to be obtained from \( F \) by means of (embedded) surgery along the arc \( \left\lbrack {0,1}\right\rbrack \times 0 \).

Theorem 8.2. Suppose that \( {F}_{1} \) and \( {F}_{2} \) are Seifert surfaces for an oriented link \( L \) in \( {S}^{3} \). Then there is a sequence of Seifert surfaces \( {\Sigma }_{1},{\Sigma }_{2},\ldots ,{\Sigma }_{N} \), with \( {\Sigma }_{1} = {F}_{1} \) and \( {\Sigma }_{N} = {F}_{2} \), such that for each \( i \), either \( {\Sigma }_{i} \) is obtained from \( {\Sigma }_{i - 1} \) or \( {\Sigma }_{i - 1} \) is obtained from \( {\Sigma }_{i} \) by surgery along an arc embedded in \( {S}^{3} \), or they are related by an isotopy of \( {S}^{3} \).

Proof. After a small homeomorphism (which is isotopic to the identity) of \( {S}^{3} \), it may be assumed that \( {F}_{1} \) and \( {F}_{2} \) intersect transversely in finitely many simple closed curves, including their common boundary \( L \). Suppose that the closure \( M \) of some component of \( {S}^{3} - \left( {{F}_{1} \cup {F}_{2}}\right) \) is a 3-manifold with the property that wherever \( M \) abuts either \( {F}_{i} \), it always does so from the same side of \( {F}_{i} \). Let \( \partial M = {\partial }_{1}M \cup {\partial }_{2}M \), where \( {\partial }_{i}M = \partial M \cap {F}_{i} \). Any triangulation of \( {S}^{3} \) with \( {F}_{1} \) and \( {F}_{2} \) as subcomplexes includes a triangulation \( T \) of \( M \).
Let \( A \) be a (collar) neighbourhood of \( {\partial }_{1}M \) in \( M \) together with a neighbourhood in \( M \) of all the 1-simplexes of \( T \). Let \( B \) be the closure of \( M - A \). To be more precise, \( A \) is the simplicial neighbourhood, in the second derived subdivision \( {T}^{\left( 2\right) } \), of the union of \( {\partial }_{1}M \) with the 1-simplexes of \( T \). Then \( B \) is the simplicial neighbourhood in \( {T}^{\left( 2\right) } \) of the union of all cones with vertex the barycentre of some 3-simplex \( \sigma \) of \( T \) and base the barycentres of the 2-simplexes in \( \partial \sigma - {\partial }_{1}M \). Change \( {F}_{1} \) to \( {F}_{1}^{\prime } \) by removing \( {\partial }_{1}M \) and inserting the closure of \( \partial A - {\partial }_{1}M \). Change \( {F}_{2} \) to \( {F}_{2}^{\prime } \) similarly by removing \( B \cap {F}_{2} \) and inserting the closure of \( \partial B - \left( {B \cap {F}_{2}}\right) \). These changes can be achieved by moving \( {F}_{1} \) by an isotopy across the collar and then across the neighbourhood of the graph of 1-simplexes by isotopies and by surgeries along embedded arcs. Similarly (only without the collar), \( {F}_{2} \) can be changed to \( {F}_{2}^{\prime } \).
Now, \( {F}_{1}^{\prime } \cap {F}_{2}^{\prime } = {F}_{1} \cap {F}_{2} \cup \left( {\partial A - {\partial }_{1}M}\right) \), and a small displacement of \( {F}_{1}^{\prime } \) removes \( \partial A - {\partial }_{1}M \) from this intersection and so reduces the number of components of \( {F}_{1}^{\prime } \cap {F}_{2}^{\prime } \) to less than the number of components of \( {F}_{1} \cap {F}_{2} \) . If \( L \subset M \), readjust \( {F}_{1}^{\prime } \) by an isotopy that slides \( \partial {F}_{1}^{\prime } \) back down the collar until \( \partial {F}_{1}^{\prime } = \partial {F}_{2}^{\prime } = L \) . In this inductive way the number of components of intersection of the two Seifert surfaces can be steadily reduced until \( {F}_{1} \cap {F}_{2} = L \) . Then one more application of the above procedure finishes the proof. However, it is important to show that at any stage of this induction, the manifold \( M \) that abuts each \( {F}_{i} \) from one side only does indeed exist. How to find \( M \) ? Recall the infinite cyclic cover \( {X}_{\infty } \) of the exterior \( X \) of \( L \) that is constructed by gluing together infinitely many copies of \( Y \), where \( Y \) is \( X \) -cut-along- \( {F}_{1} \) . As proved in Theorem 7.9, this is the same as the cover constructed in a similar way by cutting along \( {F}_{2} \) . Thus \( {X}_{\infty } \) contains infinitely many copies of \( {F}_{2} \) (less a small neighbourhood of \( L \) ) which are the lifts to the cover of the second Seifert surface. The infinite cyclic group \( \langle t\rangle \) acts on \( {X}_{\infty } \) ; the homeomorphism \( t \) moves one lift of \( {F}_{i} \) to the next lift. Let \( {\widehat{F}}_{1} \subset {X}_{\infty } \) be a fixed lift of \( {F}_{1} \) and let \( {\widehat{F}}_{2} \subset {X}_{\infty } \) be a fixed lift of \( {F}_{2} \) . Suppose that \( \left( {{F}_{1} \cap {F}_{2}}\right) - L \neq \varnothing \) . 
Let \( n \) be the maximal integer such that \( {\widehat{F}}_{2} \cap {t}^{n}{\widehat{F}}_{1} \neq \varnothing \) . The surface \( {\widehat{F}}_{2} \) separates \( {X}_{\infty } \) into two components \( {C}_{L} \) and \( {C}_{R} \), with \( {t}^{r}{\widehat{F}}_{2} \subset {C}_{L} \) if and only if \( r < 0 \) . Let \( {Y}_{n} \) be the copy of \( Y \) between \( {t}^{n}{\widehat{F}}_{1} \) and \( {t}^{n + 1}{\widehat{F}}_{1} \), and let \( \widehat{M} \) be the closure of some component of \( {C}_{L} \cap {Y}_{n} \) . The boundary of \( \widehat{M} \) is contained in \( {t}^{n}{\widehat{F}}_{1} \cup {\widehat{F}}_{2} \cup \partial {X}_{\infty } \), and clearly \( \widehat{M} \) lies on only one side of \( {t}^{n}{\widehat{F}}_{1} \) and one side of \( {\widehat{F}}_{2} \) . The projection map \( p : {X}_{\infty } \rightarrow X \) is injective when restricted to \( \widehat{M} \), as \( \widehat{M} \subset {Y}_{n} - {t}^{n + 1}{\widehat{F}}_{1} \) . Let \( M \) be \( p\widehat{M} \) . Now observe that \( M \) is just the closure of some component of \( X - \left( {{F}_{1} \cup {F}_{2}}\right) \) . If this were not so, some lift \( {t}^{r}{\widehat{F}}_{2} \) of \( {F}_{2} \) would intersect \( \widehat{M} \) for some \( r \neq 0 \) . But if \( r > 0 \), \( {t}^{r}{\widehat{F}}_{2} \) is disjoint from the closure of \( {C}_{L} \) and so disjoint from \( \widehat{M} \) . If \( r < 0 \), \( {t}^{r}{\widehat{F}}_{2} \cap {t}^{n}{\widehat{F}}_{1} = \varnothing \) by the maximality of \( n \) . Definition 8.3. Let \( A \) be a square matrix over \( \mathbb{Z} \) . An elementary enlargement of \( A \) is a matrix \( B \) of the form \[ B = \left( \begin{matrix} A & \xi & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{matrix}\right) \text{ or }\left( \begin{matrix} A & 0 & 0 \\ {\eta }^{\tau } & 0 & 0 \\ 0 & 1 & 0 \end{matrix}\right) \] for some column \( \xi \) or row \( {\eta }^{\tau } \) . The matrix \( A \) is called an elementary reduction of \( B \) . 
Square matrices \( A \) and \( B \) over \( \mathbb{Z} \) are called \( S \) -equivalent if they are related by a sequence of elementary enlargements, elementary reductions and unimodular congruences (this last being a relation of the form \( B = {P}^{\tau }{AP} \), where \( \det P = \pm 1 \) ). Theorem 8.4. Let \( A \) and \( B \) be Seifert matrices for an oriented link \( L \) . Then \( A \) and \( B \) are \( S \) -equivalent. Proof. Suppose that \( A \) is an \( n \times n \) matrix corresponding to a Seifert surface \( F \), with respect to some base of \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) . Changing the base used for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) changes \( A \) to a matrix of the form \( {P}^{\tau }{AP} \), where \( P \) is the unimodular base-change matrix. Thus it suffices to check what happens when the Seifert surface is changed, and to do that it suffices, by Theorem 8.2, to check (with respect to any base) the effect of surgery along an arc. Suppose \( F \) is changed to \( {F}^{\prime } \) by surgery along an arc. A base for \( {H}_{1}\left( {{F}^{\prime };\mathbb{Z}}\right) \) can be chosen to be the homology classes of curves \( \left\{ {f}_{i}\right\} \) that constitute a base for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) together with the classes of a curve \( {f}_{n + 1} \) that goes once over the solid cylinder defining the surgery and of a curve \( {f}_{n + 2} \) around the middle of the cylinder (that is, \( {f}_{n + 2} = 1/2 \times \partial {D}^{2} \) in the notation of Definition 8.1). Then, because \( {f}_{n + 2} \) bounds a disc \( \left( {1/2 \times {D}^{2}}\right) \) that is disjoint from \( \bigcup \left\{ {{f}_{i} : i \leq n}\right\} \), \( \operatorname{lk}\left( {{f}_{n + 2}^{ \pm },{f}_{i}}\right) = 0 \) for all \( i \neq n + 1 \) . 
Further, as \( {f}_{n + 1} \) meets this disc at one point in its boundary, choosing orientations carefully gives either \( \operatorname{lk}\left( {{f}_{n + 1}^{ + },{f}_{n + 2}}\right) = 0 \) and \( \operatorname{lk}\left( {{f}_{n + 1}^{ - },{f}_{n + 2}}\right) = 1 \), or \( \operatorname{lk}\left( {{f}_{n + 1}^{ + },{f}_{n + 2}}\right) = 1 \) and \( \operatorname{lk}\left( {{f}_{n + 1}^{ - },{f}_{n + 2}}\right) = 0 \) . In the first case the new Seifert matrix is of the form \[ \left( \begin{matrix} A & \xi & 0 \\ ? & ? & 1 \\ 0 & 0 & 0 \end{matrix}\right) ,\;\text{ which is congruent to }\;\left( \begin{matrix} A & \xi & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{matrix}\right) . \] The second case leads to a Seifert matrix of the form \[ \left( \begin{matrix} A & 0 & 0 \\ {\eta }^{\tau } & 0 & 0 \\ 0 & 1 & 0 \end{matrix}\right) . \] It follows from this theorem that any invariant well-defined on \( S \) -equivalence classes of square matrices of integers gives at once an invariant of oriented links. For example, let \( A \) be a Seifert matrix for \( L \), and define \( {\Delta }_{L}\left( t\right) \in \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \) to be \( \det \left( {{t}^{1/2}A - {t}^{-1/2}{A}^{\tau }}\right) \) . Here \( {t}^{1/2} \) is just an indeterminate for the ring of Laurent polynomials \( \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \), but it should be thought of as a formal square root of \( t \), so that \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \subset \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \) . Note that if \( A \) is an \( r \times r \) matrix, then \( {\Delta }_{L}\left( t\right) = {t}^{-r/2}\det \left( {{tA} - {A}^{\tau }}\right) \), so that, up to a unit of \( \mathbb{Z}\left\lbrack {{t}^{-\frac{1}{2}},{t}^{\frac{1}{2}}}\right\rbrack \), \( {\Delta }_{L}\left( t\right) \) is just the Alexander polynomial of \( L \) . 
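This determinant is easy to compute by machine. The following sketch (Python with sympy; the Seifert matrix used is a commonly quoted one for one chirality of the trefoil, chosen here for illustration rather than taken from the text) evaluates \( \det \left( {{t}^{1/2}A - {t}^{-1/2}{A}^{\tau }}\right) \):

```python
# Compute Delta_L(t) = det(t^{1/2} A - t^{-1/2} A^T) from a Seifert matrix A.
from sympy import Matrix, expand, simplify, sqrt, symbols

t = symbols('t', positive=True)

def conway_normalised_alexander(A):
    A = Matrix(A)
    s = sqrt(t)  # the formal square root of t
    return expand((s * A - A.T / s).det())

# Assumed Seifert matrix for one chirality of the trefoil (illustrative).
delta = conway_normalised_alexander([[-1, 1], [0, -1]])
# delta equals t - 1 + 1/t, with delta = 1 at t = 1 and no sign ambiguity.
```

For this matrix the result is \( t - 1 + {t}^{-1} \), symmetric between \( t \) and \( {t}^{-1} \).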
However, it will now be shown that this normalised \( {\Delta }_{L}\left( t\right) \) has no ambiguity of sign or units. Thus call this the Conway normalisation of the Alexander polynomial. Theorem 8.5. The Conway-normalised Alexander polynomial is a well-defined invariant of the oriented link \( L \) . Proof. It is only necessary to check the invariance of the Conway-normalised polynomial when \( A \) changes by \( S \) -equivalence. Firstly, note that \[ \det \left( {{t}^{1/2}{P}^{\tau }{AP} - {t}^{-1/2}{P}^{\tau }{A}^{\tau }P}\right) = {\left( \det P\right) }^{2}\det
\left( {{t}^{1/2}A - {t}^{-1/2}{A}^{\tau }}\right) , \] so that the normalised \( {\Delta }_{L}\left( t\right) \) is invariant under unimodular congruence. If now \[ B = \left( \begin{array}{lll} A & \xi & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right) , \] then \[ \left( {{t}^{1/2}B - {t}^{-1/2}{B}^{\tau }}\right) = \left( \begin{matrix} {t}^{1/2}A - {t}^{-1/2}{A}^{\tau } & {t}^{1/2}\xi & 0 \\ - {t}^{-1/2}{\xi }^{\tau } & 0 & {t}^{1/2} \\ 0 & - {t}^{-1/2} & 0 \end{matrix}\right) , \] which has the same determinant as \( \left( {{t}^{1/2}A - {t}^{-1/2}{A}^{\tau }}\right) \) . Similarly, the other type of elementary enlargement of \( A \) has no effect on this determinant. Note that for a knot \( K \) the Conway-normalised Alexander polynomial belongs to \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \), it is symmetric between \( t \) and \( {t}^{-1} \) and \( {\Delta }_{K}\left( 1\right) = + 1 \) by the proof of Theorem 6.10. 
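This invariance under elementary enlargement can also be checked symbolically. In the sketch below the sample matrix and column are arbitrary choices (not from the text); the first type of enlargement from Definition 8.3 is built explicitly and the two determinants compared:

```python
# Check that an elementary enlargement B of A leaves
# det(t^{1/2} B - t^{-1/2} B^T) unchanged, as in the proof of Theorem 8.5.
from sympy import Matrix, simplify, sqrt, symbols, zeros

t = symbols('t', positive=True)

def normalised_det(A):
    s = sqrt(t)
    return (s * A - A.T / s).det()

def elementary_enlargement(A, xi):
    # Border A with a column xi and two extra rows/columns (Definition 8.3).
    n = A.shape[0]
    B = zeros(n + 2, n + 2)
    B[:n, :n] = A
    for i in range(n):
        B[i, n] = xi[i]
    B[n, n + 1] = 1
    return B

A = Matrix([[-1, 1], [0, -1]])          # sample Seifert matrix (hypothetical)
B = elementary_enlargement(A, [3, -2])  # arbitrary integer column xi
assert simplify(normalised_det(B) - normalised_det(A)) == 0
```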
The polynomials quoted in the table in Chapter 6 are indeed Conway-normalised. The sign needed for the normalisation cannot be determined in this simple way for an oriented link \( L \) of two or more components because \( {\Delta }_{L}\left( 1\right) = 0 \) . Theorem 8.6. For oriented links \( L \), the Conway-normalised Alexander polynomial \( {\Delta }_{L}\left( t\right) \in \mathbb{Z}\left\lbrack {{t}^{-\frac{1}{2}},{t}^{\frac{1}{2}}}\right\rbrack \) is characterised by (i) \( {\Delta }_{\text{unknot }}\left( t\right) = 1 \) , (ii) whenever three oriented links \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are the same except in the neighbourhood of a point where they are as shown in Figure 3.2, then \[ {\Delta }_{{L}_{ + }} - {\Delta }_{{L}_{ - }} = \left( {{t}^{-1/2} - {t}^{1/2}}\right) {\Delta }_{{L}_{0}}. \] Proof. Construct a Seifert surface \( {F}_{0} \) for \( {L}_{0} \) that meets the neighbourhood of the point in question as shown in Figure 8.1. The Seifert circuit method described in Chapter 2 will do this. Now form Seifert surfaces \( {F}_{ + } \) for \( {L}_{ + } \) and \( {F}_{ - } \) for \( {L}_{ - } \) by adding short twisted strips to \( {F}_{0} \), as also shown in Figure 8.1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_92_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_92_0.jpg) Figure 8.1 Let \( {H}_{1}\left( {{F}_{0};\mathbb{Z}}\right) \) be generated by the classes of oriented closed curves \( \left\{ {{f}_{2},{f}_{3},\ldots ,{f}_{n}}\right\} \), and for generators of \( {H}_{1}\left( {{F}_{ \pm };\mathbb{Z}}\right) \), take the classes of the same curves together with the class of an extra curve \( {f}_{1} \) that goes once along the twisted strip. 
If \( {A}_{0} \) is the resulting Seifert matrix for \( {L}_{0} \), the Seifert matrix for \( {L}_{ - } \) is of the form \( \left( \begin{matrix} N & {\xi }^{\tau } \\ \eta & {A}_{0} \end{matrix}\right) \) for some integer \( N \) and columns \( \xi \) and \( \eta \), whereas that for \( {L}_{ + } \) is \( \left( \begin{matrix} N - 1 & {\xi }^{\tau } \\ \eta & {A}_{0} \end{matrix}\right) \) . Consideration of \( \det \left( {{t}^{1/2}A - {t}^{-1/2}{A}^{\tau }}\right) \) when \( A \) is each of these three Seifert matrices immediately produces the required formula. The formula of this theorem is the promised analogue of the similar formula (Proposition 3.7) for the Jones polynomial. Just as for the Jones polynomial, repeated use of this formula allows \( {\Delta }_{L}\left( t\right) \) to be calculated for any oriented link \( L \) . It is easy to see that the result is always a polynomial (not a Laurent polynomial) in \( \left( {{t}^{-1/2} - {t}^{1/2}}\right) \), so make the substitution \( \left( {{t}^{-1/2} - {t}^{1/2}}\right) = z \), and define the Conway polynomial, or potential, for \( L \) to be \( {\nabla }_{L}\left( z\right) \in \mathbb{Z}\left\lbrack z\right\rbrack \), where \( {\nabla }_{L}\left( {{t}^{-1/2} - {t}^{1/2}}\right) = {\Delta }_{L}\left( t\right) \), the Conway-normalised Alexander polynomial. A paraphrase of the last theorem is that, using the theory of \( S \) -equivalence of Seifert matrices, the Conway polynomial invariant of oriented links is well defined. It is characterised by \( {\nabla }_{\text{unknot }}\left( z\right) = 1 \) and (with reference to Figure 3.2) the skein formula \[ {\nabla }_{{L}_{ + }}\left( z\right) - {\nabla }_{{L}_{ - }}\left( z\right) = z{\nabla }_{{L}_{0}}\left( z\right) . \] In theory at least, this suffices for calculation of \( {\nabla }_{L}\left( z\right) \) . 
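The substitution \( z = {t}^{-1/2} - {t}^{1/2} \) can be illustrated symbolically. In this sketch the Seifert matrices used for the trefoil and the figure-eight knot are assumed, commonly quoted ones (not given in the text), and candidate Conway polynomials are checked against the determinant formula:

```python
# Verify that candidate Conway polynomials nabla_L(z) satisfy
# nabla_L(t^{-1/2} - t^{1/2}) = Delta_L(t), where Delta_L is computed
# from a Seifert matrix as det(t^{1/2} A - t^{-1/2} A^T).
from sympy import Matrix, simplify, sqrt, symbols

t = symbols('t', positive=True)
s = sqrt(t)
zt = 1 / s - s                    # the substitution z = t^{-1/2} - t^{1/2}

def delta(A):
    A = Matrix(A)
    return (s * A - A.T / s).det()

delta_trefoil = delta([[-1, 1], [0, -1]])  # trefoil (assumed Seifert matrix)
delta_fig8 = delta([[1, 1], [0, -1]])      # figure-eight knot (assumed)

nabla_trefoil = zt**2 + 1          # claim: nabla = z^2 + 1
nabla_fig8 = 1 - zt**2             # claim: nabla = 1 - z^2

assert simplify(nabla_trefoil - delta_trefoil) == 0
assert simplify(nabla_fig8 - delta_fig8) == 0
# Both have constant term 1 and only even powers of z, as expected for knots.
```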
Some easily deduced properties follow; \( {\nabla }_{L}\left( z\right) \) is written as \( {\nabla }_{L}\left( z\right) = \mathop{\sum }\limits_{{i \geq 0}}{a}_{i}\left( L\right) {z}^{i} \), where \( {a}_{i}\left( L\right) \in \mathbb{Z} \) . Proposition 8.7. For an oriented link \( L \) with \( \# L \) components, the Conway polynomial has the following properties. (i) If \( L \) is a split link, then \( {\nabla }_{L}\left( z\right) = 0 \) . (ii) \( {a}_{i}\left( L\right) = 0 \) for \( i \equiv \# L \) modulo 2 and also for \( i < \# L - 1 \) . (iii) If \( L \) is a knot, so \( \# L = 1 \), then \( {a}_{0}\left( L\right) = 1 \) . (iv) If \( \# L = 2 \), then \( {a}_{1}\left( L\right) = \operatorname{lk}\left( L\right) \), where \( \operatorname{lk}\left( L\right) \) is the linking number of the two components of \( L \) . (v) If \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are related in the manner of Figure 3.2 and \( \# {L}_{ + } = \# {L}_{ - } = 1 \), then \( {a}_{2}\left( {L}_{ + }\right) - {a}_{2}\left( {L}_{ - }\right) = \operatorname{lk}\left( {L}_{0}\right) \) . Proof. (i) This follows from the stronger Proposition 6.14. However, it also follows at once by applying the skein formula to the links \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) shown in Figure 8.2. As \( {L}_{ + } \) and \( {L}_{ - } \) are here the same link, \( {\nabla }_{{L}_{0}}\left( z\right) = 0 \) . (ii) This follows by induction on the number of crossings in a diagram and the number of crossing changes needed to make it a diagram of the trivial link of unknots. (iii) When \( z = 0 \), the skein formula becomes \( {\nabla }_{{L}_{ + }}\left( 0\right) = {\nabla }_{{L}_{ - }}\left( 0\right) \), so any crossings can be changed without altering \( {\nabla }_{L}\left( 0\right) \) . Of course, \( {a}_{0}\left( \text{unknot}\right) = 1 \) . 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_94_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_94_0.jpg) Figure 8.2 (iv) Suppose the skein formula is applied at a crossing between the two components of \( L \) . Using (iii), consideration of the coefficient of \( z \) shows that \( {a}_{1}\left( {L}_{ + }\right) - {a}_{1}\left( {L}_{ - }\right) = 1 \) . But \( \operatorname{lk}\left( {L}_{ + }\right) - \operatorname{lk}\left( {L}_{ - }\right) = 1 \), so the result follows by using (i) and considering a collection of crossing changes that yield a split link. (v) This follows at once from (iv). A good exercise is to use the skein formula to show that the Conway polynomial is equal to 1 for the generalised Kinoshita-Terasaka knot shown in Figure 8.3. The symbols in Figure 8.3 denote the numbers of crossings in the tassels, and \( d \) is required to be even. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_94_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_94_1.jpg) Figure 8.3 Although this completes the theory that establishes the Conway polynomial, it is convenient here to establish also the existence of the \( \omega \) -signature of an oriented link. This will be done in a direct matrix-oriented manner. This \( \omega \) -signature was first introduced by A. G. Tristram [123], generalising work of Murasugi [102]. Definition 8.8. Let \( L \) be an oriented link in \( {S}^{3} \) and let \( \omega \) be a unit modulus complex number, \( \omega \neq 1 \) . The \( \omega \) -signature \( {\sigma }_{\omega }\left( L\right) \) of \( L \) is defined to be the signature of the Hermitian matrix \[ \left( {1 - \omega }\right) A + \left( {1 - \bar{\omega }}\right) {A}^{\tau }, \] where \( A \) is a Seifert matrix for \( L \) . Theorem 8.9. The \( \omega \) -signature \( {\sigma }_{\omega }\left( L\right) \) is well defined as an invariant of \( L \) . Proof. 
The signature of a Hermitian matrix is not changed by congruence (that fact is Sylvester's famous law of inertia), so it is only necessary to see whether the definition changes under an elementary enlargement of a Seifert matrix \( A \) . Suppose \[ B = \left( \begin{array}{lll} A & \xi & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array}\right) \] then \[ \left( {1 - \omega }\right) B + \left( {1 - \bar{\omega }}\right) {B}^{\tau } = \left( \begin{matrix} \left( {1 - \omega }\right) A + \left( {1 - \bar{\omega }}\right) {A}^{\tau } & \left( {1 - \omega }\right) \xi & 0 \\ \left( {1 - \bar{\omega }}\right) {\xi }^{\tau } & 0 & \left( {1 - \omega }\right) \\ 0 & \left( {1 - \bar{\omega }}\right) & 0 \end{matrix}\right) . \] As \( \left( {1 - \omega }\right) \neq 0 \),
the terms in \( \xi \) and \( {\xi }^{\tau } \) can be removed by congruence (subtracting multiples of the last row and column from predecessors), so that the signature of \( \left( {1 - \omega }\right) A + \left( {1 - \bar{\omega }}\right) {A}^{\tau } \) and the signature of \( \left( {1 - \omega }\right) B + \left( {1 - \bar{\omega }}\right) {B}^{\tau } \) differ by the signature of \( \left( \begin{matrix} 0 & \left( {1 - \omega }\right) \\ \left( {1 - \bar{\omega }}\right) & 0 \end{matrix}\right) \) . Of course, this last signature is zero, as the matrix clearly has one positive eigenvalue and one negative one. Consideration of the other type of elementary enlargement is exactly the same. Note that \( \left( {1 - \omega }\right) A + \left( {1 - \bar{\omega }}\right) {A}^{\tau } = - \left( {1 - \bar{\omega }}\right) \left( {{\omega A} - {A}^{\tau }}\right) \), so that the Hermitian matrix is non-singular except when \( \omega \) is a zero of the Alexander polynomial of \( L \) . 
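The signature of Definition 8.8 is directly computable numerically. In this sketch the trefoil Seifert matrix \( \left( \begin{matrix} -1 & 1 \\ 0 & -1 \end{matrix}\right) \) is an assumed example; with it \( {\sigma }_{-1} \) comes out as \( -2 \), and negating the Seifert matrix (which corresponds to reflecting the link) negates the signature:

```python
# sigma_omega(L): signature of the Hermitian matrix
# (1 - omega) A + (1 - conj(omega)) A^T, for a Seifert matrix A.
import numpy as np

def omega_signature(A, omega):
    A = np.array(A, dtype=complex)
    H = (1 - omega) * A + (1 - np.conj(omega)) * A.T
    eigs = np.linalg.eigvalsh(H)      # H is Hermitian, so eigenvalues are real
    return int(np.sum(eigs > 1e-9)) - int(np.sum(eigs < -1e-9))

A = [[-1, 1], [0, -1]]                # trefoil Seifert matrix (assumed)
neg_A = [[1, -1], [0, 1]]             # Seifert matrix for the reflection

sig = omega_signature(A, -1)          # sigma_{-1}, "the" signature: gives -2
sig_reflected = omega_signature(neg_A, -1)  # gives +2
```

Evaluating `omega_signature` at other unit-modulus values of \( \omega \) illustrates the jumps at zeros of the Alexander polynomial discussed next.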
In fact, it can be shown that for a fixed link \( L \), the invariant \( {\sigma }_{\omega }\left( L\right) \), when viewed as a function of \( \omega \), is continuous except at zeros of the Alexander polynomial. As signatures are integers, this means that \( {\sigma }_{\omega }\left( L\right) \) takes finitely many values as \( \omega \) varies on \( {S}^{1} \), with possible jumps at roots of \( {\Delta }_{L}\left( t\right) = 0 \) . Sometimes \( {\sigma }_{-1}\left( L\right) \) is called the signature of \( L \) . Table 8.1 records the value of this signature for the knots up to eight crossings as depicted in Chapter 1. Theorem 8.10. If \( L \) is an oriented link in \( {S}^{3} \) and \( \bar{L} \) is its reflection, then for any unit complex number \( \omega \neq 1 \) , \[ {\sigma }_{\omega }\left( L\right) = - {\sigma }_{\omega }\left( \bar{L}\right) . \] Proof. If \( A \) is a Seifert matrix for \( L \), then \( - A \) is a Seifert matrix for \( \bar{L} \) . TABLE 8.1. Signatures of Knots <table><tr><td>\( {3}_{1} \)</td><td>2</td><td>\( {7}_{1} \)</td><td>6</td><td>\( {8}_{1} \)</td><td>0</td><td>\( {8}_{8} \)</td><td>0</td><td>\( {8}_{15} \)</td><td>4</td></tr><tr><td>\( {4}_{1} \)</td><td>0</td><td>\( {7}_{2} \)</td><td>2</td><td>\( {8}_{2} \)</td><td>4</td><td>\( {8}_{9} \)</td><td>0</td><td>\( {8}_{16} \)</td><td>2</td></tr><tr><td>\( {5}_{1} \)</td><td>4</td><td>\( {7}_{3} \)</td><td>\( -4 \)</td><td>\( {8}_{3} \)</td><td>0</td><td>\( {8}_{10} \)</td><td>\( -2 \)</td><td>\( {8}_{17} \)</td><td>0</td></tr><tr><td>\( {5}_{2} \)</td><td>2</td><td>\( {7}_{4} \)</td><td>\( -2 \)</td><td>\( {8}_{4} \)</td><td>2</td><td>\( {8}_{11} \)</td><td>2</td><td>\( {8}_{18} \)</td><td>0</td></tr><tr><td>\( {6}_{1} \)</td><td>0</td><td>\( {7}_{5} \)</td><td>4</td><td>\( {8}_{5} \)</td><td>\( -4 \)</td><td>\( {8}_{12} \)</td><td>0</td><td>\( {8}_{19} \)</td><td>\( -6 \)</td></tr><tr><td>\( {6}_{2} \)</td><td>2</td><td>\( {7}_{6} \)</td><td>2</td><td>\( {8}_{6} \)</td><td>2</td><td>\( {8}_{13} \)</td><td>0</td><td>\( {8}_{20} \)</td><td>0</td></tr><tr><td>\( {6}_{3} \)</td><td>0</td><td>\( {7}_{7} \)</td><td>0</td><td>\( {8}_{7} \)</td><td>\( -2 \)</td><td>\( {8}_{14} \)</td><td>2</td><td>\( {8}_{21} \)</td><td>2</td></tr></table> A corollary is, of course, that if \( L = \bar{L} \) then \( {\sigma }_{\omega }\left( L\right) = 0 \) . A direct calculation shows that the signature of the trefoil knot is 2 (or \( -2 \) if the reflected diagram is used). That is a pre-Jones polynomial proof of the fact that the trefoil and its reflection are distinct knots. The remainder of this chapter takes a brief look at 4-dimensional topology. The Alexander polynomial and the signatures of knots give information concerning whether a knot \( K \) in \( {S}^{3} \) bounds some disc embedded in \( {B}^{4} \), the 4-ball bounded by \( {S}^{3} \) . Definition 8.11. A knot \( K \subset {S}^{3} \) is a slice knot if there is a flat disc \( D \) contained in \( {B}^{4} \) such that \( K = \partial D = D \cap {S}^{3} \) . Such a disc is called a slicing disc for \( K \) . Here "flat" means that \( D \) has a neighbourhood \( N \) that is a copy of \( D \times {I}^{2} \) meeting \( {S}^{3} \) in \( \partial D \times {I}^{2} \) (of course, \( {I}^{2} = I \times I \), and this is just another disc). To avoid triviality such a restriction is needed, for \( {B}^{4} \) can be regarded as the cone on \( {S}^{3} \), and this contains the cone on any knot in \( {S}^{3} \) . Such a subcone is not flat unless the knot is trivial. It is known that a locally flat condition for \( D \) implies flatness. Similarly, if everything is interpreted in terms of differential topology and the disc \( D \) is a smooth submanifold of \( {B}^{4} \), then it has a trivial normal bundle and so is flat. Slice knots seem to be fairly rare. In Table 1.1, the knots \( {6}_{1},{8}_{8},{8}_{9} \) and \( {8}_{20} \) are slice. The sum of any knot \( K \) with the reverse of its reflection is also slice. 
This can be seen by creating \( \left( {{B}^{4}, D}\right) \) from \( \left( {{S}^{3} \times I, K \times I}\right) \) by removing a neighbourhood of \( \{ x\} \times I \), where \( x \in K \) . An explicit example of a slice knot is needed. Consider, as an analogue, the contour-map description of a mountain with two peaks and one pass (or col) somewhere between the peaks. At low levels there is just one simple closed curve as the contour line. This becomes a curve with one self-intersection point at the level of the pass. Above that, the contour consists of two simple closed curves, which finally become single points at the peaks. Figure 8.4 shows a disc evolving in \( {S}^{3} \times \lbrack 0,\infty ) \) . The disc meets low levels in a copy of the knot \( {8}_{20} \), then meets a critical level in a curve with one self-intersection and meets levels just above that in two curves. The important thing is that these two curves are unknotted and unlinked, and hence they can be capped off with two discs in a standard way. As will be shown below, any attempt to imitate this with the knot \( {3}_{1} \) will fail. It is known that any slicing disc is obtained in this way, though it may have minima (levels where a curve is "born" unknotted and unlinked from everything else) as well as many passes and maxima. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_96_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_96_0.jpg) The "slice knots are ribbon knots" conjecture opines that minima are unnecessary. To obtain necessary conditions for sliceness from the theory of the Seifert form, some preliminary lemmas are needed. Lemma 8.12. Suppose that for some knot \( K \) in \( {S}^{3} \), there is a flat surface \( F \) in \( {B}^{4} \) with \( F \cap {S}^{3} = \partial F = K \) . Then the inclusion map induces an isomorphism \( {H}_{1}\left( {{S}^{3} - K}\right) \rightarrow {H}_{1}\left( {{B}^{4} - F}\right) \cong \mathbb{Z}. \) Proof. 
Let \( N \), a copy of \( F \times {I}^{2} \), be a neighbourhood of \( F \) meeting \( {S}^{3} \) in \( \partial F \times {I}^{2} \) . The Mayer-Vietoris theorem gives an exact sequence \[ 0 = {H}_{2}\left( {B}^{4}\right) \rightarrow {H}_{1}\left( {F \times \partial {I}^{2}}\right) \rightarrow {H}_{1}\left( N\right) \oplus {H}_{1}\left( \overline{{B}^{4} - N}\right) \rightarrow {H}_{1}\left( {B}^{4}\right) = 0. \] Exactness implies that the middle map of this must be an isomorphism. Of course, \[ {H}_{1}\left( {F \times \partial {I}^{2}}\right) = {H}_{1}\left( F\right) \oplus {H}_{1}\left( {\partial {I}^{2}}\right) , \] and the \( {H}_{1}\left( F\right) \) component is mapped isomorphically to \( {H}_{1}\left( N\right) \) (and each is the direct sum of copies of \( \mathbb{Z} \) ); \( {H}_{1}\left( {\partial {I}^{2}}\right) \) is mapped to zero in \( {H}_{1}\left( N\right) \) . As \( {H}_{1}\left( {\partial {I}^{2}}\right) = \mathbb{Z} \) , it follows that \( {H}_{1}\left( \overline{{B}^{4} - N}\right) \) is also a copy of \( \mathbb{Z} \) . The map \( {H}_{1}\left( {\partial {I}^{2}}\right) \rightarrow {H}_{1}\left( \overline{{B}^{4} - N}\right) \) must send generator to generator, as otherwise a matrix representing the map in the above sequence will not have unit determinant. However, a generator of this copy of \( {H}_{1}\left( {\partial {I}^{2}}\right) \) is a meridian of the knot \( K \) . Thus the inclusion map from the knot exterior to \( \overline{{B}^{4} - N} \) induces an isomorphism on the first homology, and that is, up to adjustment by a small homotopy equivalence, the required statement. Lemma 8.13. Suppose that \( {f}_{1} : {F}_{1} \rightarrow {B}^{4} \) and \( {f}_{2} : {F}_{2} \rightarrow {B}^{4} \) are maps, of orientable surfaces into the 4-ball, which have disjoint images. Suppose that on \( \partial {F}_{i} \) the map \( {f}_{i} \) is a homeomorphism onto a knot \( {K}_{i} \) in \( {S}^{3} = \partial {B}^{4} \) . 
Then \( \operatorname{lk}\left( {{K}_{1},{K}_{2}}\right) = 0 \) . Proof. After moving the maps into general position,
it may be assumed that each \( {f}_{i} \) has only double points as singularities. That means that near the image of such a singularity in \( {B}^{4} \), the image of \( {F}_{i} \) looks like two standard planes in \( {\mathbb{R}}^{4} \) meeting in a point \( P \) . That is, near \( P \) it is the cone from \( P \) on a standard Hopf link (a non-trivial two-crossing link) in a copy of \( {S}^{3} \) . Replace the cone on that link with a Seifert surface of the link. This changes \( {F}_{i} \) by removing two discs and inserting an annulus, but there is no longer a point of self-intersection. There may also be points at which the image of \( {f}_{i} \) is locally knotted, points \( P \) near which the image is the cone on a knot in a copy of \( {S}^{3} \) ; replace that cone with a Seifert surface of the knot, changing \( {F}_{i} \) but gaining flatness. In this way it may be assumed that each \( {f}_{i} \) is an embedding onto a flat surface. 
Then the existence of \( {f}_{1}\left( {F}_{1}\right) \) asserts that \( {K}_{1} \) represents the zero homology class in \( {H}_{1}\left( {{B}^{4} - {F}_{2}}\right) \), and so, by the last lemma, \( {K}_{1} \) represents zero in \( {H}_{1}\left( {{S}^{3} - {K}_{2}}\right) \) . Lemma 8.14. Suppose that \( F \) is a Seifert surface for a knot \( K \) in \( {S}^{3} \) that has a slicing disc \( D \) . Then \( F \cup D \) bounds some two-sided 3-manifold \( {M}^{3} \subset {B}^{4} \) with \( {M}^{3} \cap {S}^{3} = F \) . Proof. The idea here is that \( {M}^{3} \) should be \( {\phi }^{-1} \) (one point), where \( \phi : {B}^{4} - D \rightarrow {S}^{1} \) is a carefully chosen map inducing an isomorphism of first homology groups. It will be more convenient to define \( \phi \) on \( \overline{{B}^{4} - N} \), where \( N \) is a standard neighbourhood of \( D \) as considered above, with \( \left( \overline{{B}^{4} - N}\right) \cap {S}^{3} \) being a copy of \( X \), the knot exterior. Define, in the following way, \( \phi : X \rightarrow {S}^{1} \) so that \( {\phi }_{ \star } : {H}_{1}\left( X\right) \rightarrow {H}_{1}\left( {S}^{1}\right) \) is an isomorphism and \( {\phi }^{-1} \) (one point) \( = F \) . On a product neighbourhood of \( F \) in \( X \), define \( \phi \) to be the projection \( F \times \left\lbrack {-1,1}\right\rbrack \rightarrow \left\lbrack {-1,1}\right\rbrack \) followed by the map \( t \mapsto {e}^{i\pi t} \in {S}^{1} \), and let \( \phi \) map the remainder of \( X \) to \( - 1 \in {S}^{1} \) . Extend \( \phi \) over the rest of \( \partial \left( \overline{{B}^{4} - N}\right) \) so that the inverse image of \( 1 \in {S}^{1} \) is \( F \cup \left( {D \times \star }\right) \) for some point \( \star \in \partial {I}^{2} \), where \( N = D \times {I}^{2} \) (note that \( \partial D \times \star \) is a longitude of \( K \) by Lemma 8.13). This map must now be extended over the whole of \( \overline{{B}^{4} - N} \) . 
Consider the simplexes of some triangulation of \( \overline{{B}^{4} - N} \) . Let \( T \) be a tree in the 1-skeleton that contains all the vertices of this triangulation and contains a maximal tree of the induced triangulation of \( \partial \left( \overline{{B}^{4} - N}\right) \) . Extend \( \phi \) over all of \( T \) in an arbitrary way. Then on a 1-simplex \( \sigma \) not in \( T \) define \( \phi \) so that if \( c \) is a 1-cycle consisting of \( \sigma \) summed with a 1-chain in \( T \) (joining up the ends of \( \sigma \) ), \( \left\lbrack {\phi c}\right\rbrack \in {H}_{1}\left( {S}^{1}\right) \) is the image of \( \left\lbrack c\right\rbrack \) under the isomorphism \[ {H}_{1}\left( \overline{{B}^{4} - N}\right) \overset{ \cong }{ \leftarrow }{H}_{1}\left( X\right) \overset{{\phi }_{ \star }}{ \rightarrow }{H}_{1}\left( {S}^{1}\right) . \] Trivially, the boundary of a 2-simplex \( \tau \) of \( \overline{{B}^{4} - N} \) represents zero in \( {H}_{1}\left( \overline{{B}^{4} - N}\right) \) , so \( \left\lbrack {\phi \left( {\partial \tau }\right) }\right\rbrack = 0 \in {H}_{1}\left( {S}^{1}\right) \) . Hence \( \phi \) is null-homotopic on \( \partial \tau \) and so extends over \( \tau \) . Finally, \( \phi \) extends over the 3-simplexes and 4-simplexes, as any map from the boundary of an \( n \) -simplex to \( {S}^{1} \) is null-homotopic when \( n \geq 3 \) . Now, regard \( \phi : \overline{{B}^{4} - N} \rightarrow {S}^{1} \) as a simplicial map to some triangulation of \( {S}^{1} \) in which 1 is not a vertex. Then \( {\phi }^{-1}\left( 1\right) \) is a 3-manifold \( {M}^{3} \), with a neighbourhood \( {M}^{3} \times I \), in \( \overline{{B}^{4} - N} \) . To see this just consider how \( {\phi }^{-1} \) (a non-vertex) meets the neighbourhood of any simplex in \( \overline{{B}^{4} - N} \) . Of course, \( \phi \) was constructed so that \( \partial {M}^{3} = F \cup \left( {D \times \star }\right) \) . 
The method used to extend \( \phi \) in this last proof is a very elementary example of the use of "obstruction theory". The proof can be interpreted by saying that \( {H}^{1}\left( {\overline{{B}^{4} - N};\mathbb{Z}}\right) \) corresponds naturally to the homotopy classes of maps from \( \overline{{B}^{4} - N} \) to \( {S}^{1} \) and \( \phi \) corresponds to a generator of \( {H}^{1}\left( {\overline{{B}^{4} - N};\mathbb{Z}}\right) \) . If working with smooth manifolds, the final manoeuvre of the proof should be replaced by the procedure of changing \( \phi \) by a homotopy to make it transverse to \( 1 \in {S}^{1} \) and then considering \( {\phi }^{-1}\left( 1\right) \) as before. One more lemma is now needed. It concerns the way in which the homology of the boundary of a 3-manifold is related to that of the manifold itself. There seems to be no escape from cohomology theory here, and the proof given below is perhaps a little terse. Lemma 8.15. Let \( M \) be a compact orientable 3-manifold such that \( \partial M \) is a connected surface of genus \( g \) . Suppose that \( i : \partial M \rightarrow M \) is the inclusion map. Then the kernel of \( {i}_{ \star } : {H}_{1}\left( {\partial M;\mathbb{Q}}\right) \rightarrow {H}_{1}\left( {M;\mathbb{Q}}\right) \) is a vector subspace of dimension \( g \) . Proof. The following commutative diagram has rows that are parts of the homology and cohomology exact sequences of the pair \( \left( {M,\partial M}\right) \) . Of the vertical arrows, the first and third are Lefschetz duality isomorphisms, and the central one is a Poincaré duality isomorphism. 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_99_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_99_0.jpg) Now, \( {H}^{1}\left( {\partial M;\mathbb{Q}}\right) \) is the vector space dual to \( {H}_{1}\left( {\partial M;\mathbb{Q}}\right) \), \( {H}^{1}\left( {M;\mathbb{Q}}\right) \) is the space dual to \( {H}_{1}\left( {M;\mathbb{Q}}\right) \), and \( {i}^{ \star } \) and \( {i}_{ \star } \) are dual linear maps. (This follows from the universal coefficient theorem for homology and cohomology and the fact that there is no torsion when coefficients are in the field \( \mathbb{Q} \) .) Thus, if \( r\left( \right) \) denotes the rank of a linear map, \( r\left( {i}^{ \star }\right) = r\left( {i}_{ \star }\right) \) . The vertical isomorphisms imply that \( {i}^{ \star } \) and \( \partial \) have the same rank, and exactness of the homology row gives \( r\left( \partial \right) = {2g} - r\left( {i}_{ \star }\right) \) ; thus \( r\left( {i}^{ \star }\right) = {2g} - r\left( {i}_{ \star }\right) \) . Hence \( r\left( {i}_{ \star }\right) = g \), and so \( g \) is also the nullity of \( {i}_{ \star } \) . Corollary 8.16. There is a base \( \left\lbrack {f}_{1}\right\rbrack ,\left\lbrack {f}_{2}\right\rbrack ,\ldots ,\left\lbrack {f}_{2g}\right\rbrack \) over \( \mathbb{Z} \) for \( {H}_{1}\left( {\partial M;\mathbb{Z}}\right) \) so that \( \left\lbrack {f}_{1}\right\rbrack ,\left\lbrack {f}_{2}\right\rbrack ,\ldots ,\left\lbrack {f}_{g}\right\rbrack \) map to zero in \( {H}_{1}\left( {M;\mathbb{Q}}\right) \) . Proof. One may consider \( {H}_{1}\left( {\partial M;\mathbb{Z}}\right) \) to be \( {\mathbb{Z}}^{2g} \subset {\mathbb{Q}}^{2g} = {H}_{1}\left( {\partial M;\mathbb{Q}}\right) \) . The \( g \) -dimensional subspace \( U \) of \( {\mathbb{Q}}^{2g} \), given by Lemma 8.15, has a base consisting of elements in \( {\mathbb{Z}}^{2g} \) . Let \( \widetilde{U} \) be the \( \mathbb{Z} \) -span of those elements. 
As a \( \mathbb{Z} \) -module \( {\mathbb{Z}}^{2g}/\widetilde{U} = \) \( A/\widetilde{U} \oplus B/\widetilde{U} \), where \( A \) and \( B \) are submodules of \( {\mathbb{Z}}^{2g}, A/\widetilde{U} \) is free and \( B/\widetilde{U} \) is a torsion module over \( \mathbb{Z} \) . Thus if \( b \in B \) then \( {nb} \in \widetilde{U} \) for some \( n \in \mathbb{Z} \) ; hence \( b \in U \) . Thus a \( \mathbb{Z} \) -base for \( B \) is a \( \mathbb{Q} \) -base for \( U \) and it extends, using a base of \( A/\widetilde{U} \), to a \( \mathbb{Z} \) -base of \( {\mathbb{Z}}^{2g} \) . Proposition 8.17. Suppose that \( F \) is a genus \( g \) Seifert surface for a slice knot \( K \) in \( {S}^{3} \) . Then a base may be chosen for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) with respect to which the corresponding Seifert matrix has the form \[ \left( \begin{matrix} 0 & P \\ Q & R \end{matrix}\right) \] consisting of a \( g \times g \) block of zeros together with \( g \times g \) blocks of integers \( P, Q \) and \( R \) . Proof. Let \( D \) be a slicing disc for \( K \) contained in \( {B}^{4} \) . By Lemma 8.14 there is contained in \( {B}^{4} \) a 3-manifold \( M \) having an \( M \times \left\lbrack {-1,1}\right\rbrack \) neighbourhood such that \( \partial M = D \cup F \) . Corollary 8.16 gives a certain base \( \left\lbrack {f}_{1}\right\rbrack ,\left\lbrack {f}_{2}\right\rbrack ,\ldots ,\left\lbrack {f}_{2g}\right\rbrack \) for \( {H}_{1}\left( {\partial M;\mathbb{Z}}\right) \) . It may be assumed that each \( \left\lbrack {f}_{i}\right\rbrack \) is represented by an oriented closed curve \( {f}_{i} \) in \( F \) . 
Consider the Seifert matrix \( A \) with respect to this basis. In the notation of Chapter \( 6,{A}_{ij} = \operatorname{lk}\left( {{f}_{i}^{ - },{f}_{j}}\right) \) . (If the \( {f}_{i} \) are not simple curves, they should here be changed by a very small amount in \( {S}^{3} \) to become simple so that "linking number" makes sense.) Now the property of the base proved in Corollary 8.16 means that for \( i \leq g \), there exists a non-zero integer \( {n}_{i} \) so that \( {n}_{i}\left\lbrack {f}_{i}\right\rbrack \) is zero in \( {H}_{1}\left( {M;\mathbb{Z}}\right) \) . But \( {n}_{i}\left\lbrack {f}_{i}\right\rbrack \) can be represented by a closed curve that will be denoted \( {n}_{i}{f}_{i} \), and as this bounds a 2-chain with integer coefficients, it bounds a surface mapped into \( M \) (dangerous reasoning in higher dimensions). When \( {n}_{i}{f}_{i} \) is moved to \( {\left( {n}_{i}{f}_{i}\right) }^{ - } \), the mapped-in surface can likewise be moved across the neighbourhood of \( M \) into \( M \times - 1 \) . Thus, for \( 1 \leq i, j \leq g \), the curves \( {\left( {n}_{i}{f}_{i}\right) }^{ - } \) and \( {n}_{j}{f}_{j} \) bound disjoint surfaces mapped into \( {B}^{4} \) . By Lemma 8.13, \( 0 = \operatorname{lk}\left( {{\left( {n}_{i}{f}_{i}\right) }^{ - },\left( {{n}_{j}{f}_{j}}\right) }\right) = \) \( {n}_{i}{n}_{j}\operatorname{lk}\left( {{f}_{i}^{ - },{f}_{j}}\right) \), and so \( {A}_{ij} = 0 \) for \( 1 \leq i, j \leq g \) . Now that it has been established that slice knots have Seifert matrices as described in Proposition 8.17, it is easy to produce some necessary conditions for a knot to be a slice knot. Theorem 8.18. If \( K \) is a slice knot, then the Conway-normalised Alexander polynomial of \( K \) is of the form \( f\left( t\right) f\left( {t}^{-1}\right) \), where \( f \) is a polynomial with integer coefficients. Proof. 
Using the Seifert matrix of Proposition 8.17, the required Alexander polynomial is the determinant of \[ \left( \begin{matrix} 0 & {t}^{1/2}P - {t}^{-1/2}{Q}^{\tau } \\ {t}^{1/2}Q - {t}^{-1/2}{P}^{\tau } & {t}^{1/2}R - {t}^{-1/2}{R}^{\tau } \end{matrix}\right) , \] which is \( \det \left( {{tP} - {Q}^{\tau }}\right) \det \left( {{t}^{-1}P - {Q}^{\tau }}\right) \) . Theorem 8.19. If \( K \) is a slice knot, then the signature of \( K \) is zero and, if the unit complex number \( \omega \) is not a zero of the Alexander polynomial, then \( {\sigma }_{\omega }\left( K\right) = 0 \) . Proof. This follows at once from the fact that the signature is zero for a quadratic form coming from a non-singular symmetric bilinear form that vanishes on a subspace of half the dimension of the space concerned. A similar result holds for Hermitian forms. These two theorems give considerable help in establishing that a knot fails to be a slice knot. A glance at Table 8.1 immediately reveals very many non-slice knots. If the signature is zero, one can wonder if the factorisation of Theorem 8.18 occurs. Note that Theorem 8.18 implies that for a slice knot \( K \), the determinant of \( K \), equal by definition to \( \left| {{\Delta }_{K}\left( {-1}\right) }\right| \) (see Chapter 9), is the square of an integer. As \( \left| {{\Delta }_{K}\left( {-1}\right) }\right| \) is an odd integer (see Corollary 6.11), this means that \( \left| {{\Delta }_{K}\left( {-1}\right) }\right| \equiv 1 \) modulo 8 . The knot \( {4}_{1} \), for example, has zero signature, but its determinant is 5 and so it cannot be a slice knot. However, the two knots shown in Figure 3.3, the Kinoshita-Terasaka and Conway knots, both have trivial Alexander polynomials and signatures, and so the above results give no information. The Kinoshita-Terasaka knot is a slice knot, but the slice status of the Conway knot appears to be unknown. 
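The two obstructions above (Theorems 8.18 and 8.19) can be checked numerically from a Seifert matrix. The short sketch below computes the knot determinant \( \left| {{\Delta }_{K}\left( {-1}\right) }\right| = \left| {\det \left( {A + {A}^{\tau }}\right) }\right| \), its residue modulo 8 and the signature for three knots; the genus-1 Seifert matrices are assumptions taken from standard knot tables, not derived here.

```python
# Checking the slice obstructions: determinant = 1 mod 8 and signature 0.
# The Seifert matrices below are assumed from standard knot tables.

def symmetrise(A):
    """Return A + A^T, the matrix of the symmetric form in Theorem 8.19."""
    n = len(A)
    return [[A[i][j] + A[j][i] for j in range(n)] for i in range(n)]

def det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

def signature2(S):
    """Signature of a nondegenerate symmetric 2x2 integer matrix."""
    d, t = det2(S), S[0][0] + S[1][1]
    if d < 0:
        return 0                      # eigenvalues of opposite sign
    return 2 if t > 0 else -2

seifert = {
    "3_1": [[-1, 1], [0, -1]],        # trefoil
    "4_1": [[1, 1], [0, -1]],         # figure-eight
    "6_1": [[1, 0], [1, -2]],         # a slice knot
}

for name, A in seifert.items():
    S = symmetrise(A)
    det_K = abs(det2(S))              # |Delta_K(-1)|, the knot determinant
    print(name, det_K, det_K % 8, signature2(S))
```

The trefoil fails the signature test (determinant 3, signature \( -2 \)), the figure-eight has zero signature but determinant 5, which is not 1 modulo 8 (the observation made in the text), while \( {6}_{1} \), which is slice, passes both tests with determinant \( 9 \equiv 1 \) modulo 8 and signature 0.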
The topic of slice knots has here given a glimpse of knot theory in dimensions higher than 3 . In general, it is quite possible to study knots of any space \( X \) in another \( Y \) . Usually the spaces are taken to be manifolds. Results in this generality are described in [47], at least in the piecewise linear framework. In that context all knots of an \( r \) -sphere \( {S}^{r} \) in an \( n \) -sphere \( {S}^{n} \) are trivial if \( n - r > 2 \) (see [140]). Knots of \( {S}^{n - 2} \) in \( {S}^{n} \) have a well-developed theory, with an Alexander polynomial very similar to that for \( {S}^{1} \) in \( {S}^{3} \) (see [44] or [35]). A motivation for a study of slice knots is their relevance to problems of creating smooth surfaces in 4-manifolds. Suppose a surface embedded in a 4-manifold is locally knotted at a point \( P \) . In a neighbourhood of \( P \), the surface is the cone on a knot \( K \) . If the knot is a slice knot, the cone on the knot can be replaced by the slicing disc, thus removing a point of local knottedness. Considerable progress has been made in the study of slice knots (for example, see [18]) and the theory of smooth 4-manifolds has virtually become a distinct subject on its own following spectacular progress coming from the use of differential geometry and differential equations (surveys are given in [66] and [25]). The removal of differential or piecewise linear restrictions has a remarkable effect on slice knot theory; the resulting topological slice theory is described in [30]. An extension of the slicing idea is the concept of the 4-ball genus \( {g}^{ \star }\left( K\right) \) of a knot \( K \) . This is the minimal genus of a surface \( F \) with the property that \( F \) includes in \( {B}^{4} \) as a flat surface and \( F \cap {S}^{3} = \partial F \cap {S}^{3} = K \) . 
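The 4-ball genus just defined is explicitly computable for torus knots: the Kronheimer-Mrowka result quoted in the next paragraph gives \( {g}^{ \star }\left( K\right) = \frac{1}{2}\left( {p - 1}\right) \left( {q - 1}\right) \) for the \( \left( {p, q}\right) \) torus knot, and this value is also the unknotting number. A one-line arithmetic sketch:

```python
# Kronheimer-Mrowka: g*(T(p,q)) = (p-1)(q-1)/2 for the (p,q) torus knot;
# this also equals its unknotting number.

def torus_knot_four_ball_genus(p, q):
    """4-ball genus (= unknotting number) of the (p, q) torus knot."""
    return (p - 1) * (q - 1) // 2

print(torus_knot_four_ball_genus(2, 3))   # trefoil: 1
print(torus_knot_four_ball_genus(2, 5))   # 5_1: 2
print(torus_knot_four_ball_genus(3, 4))   # 3
```

Note that a slice knot is exactly a knot with \( {g}^{ \star }\left( K\right) = 0 \), so this formula shows in particular that no non-trivial torus knot is slice.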
A slight generalisation of Theorem 8.19 shows that \( \left| {{\sigma }_{\omega }\left( K\right) }\right| \leq 2{g}^{ \star }\left( K\right) \) ; see [101] and [123]. It is easy to see that \( {g}^{ \star }\left( K\right) \) is a lower bound for the unknotting number \( u\left( K\right) \) . Recent work of P. B. Kronheimer and T. S. Mrowka [72], using gauge theory for smooth manifolds, shows that for \( K \) the \( \left( {p, q}\right) \) torus knot, \( {g}^{ \star }\left( K\right) = \frac{1}{2}\left( {p - 1}\right) \left( {q - 1}\right) \) . As it is easy to show that this number of crossing changes will undo that knot, \( \frac{1}{2}\left( {p - 1}\right) \left( {q - 1}\right) \) is the unknotting number. ## Exercises 1. Let \( {L}_{n} \) be the \( \left( {{2n},2}\right) \) torus link as described in Chapter 1. It has two components, the linking number between them being \( n \) . Use the Conway skein formula to calculate, by means of a recurrence formula, the Conway polynomial of this link. If \( {L}_{n}^{\prime } \) is \( {L}_{n} \) with the orientation of one of its components reversed, calculate in a similar way the Conway polynomial of \( {L}_{n}^{\prime } \) . 2. Show that the Conway knot (shown in Figure 3.3) has Conway polynomial equal to 1. 3. Show that if knot \( {K}_{1} \) is a mutant of knot \( {K}_{2} \), then \( {K}_{1} \) and \( {K}_{2} \) have the same Conway polynomial. 4. Let \( A \) and \( B \) be diagrams of links of oriented arcs and simple closed curves in balls that meet the boundary at the four oriented points as shown below. The sum of \( A \) and \( B \) is also a diagram of such a link defined in the way shown. The "numerator" \( {A}^{N} \) and "denominator" \( {A}^{D} \) of \( A \) are the Conway polynomials of the links formed by joining up the entry and exit points in the way shown. 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_102_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_102_0.jpg) Prove that \( {\left( A + B\right) }^{D} = {A}^{D}{B}^{D} \) and \( {\left( A + B\right) }^{N} = {A}^{N}{B}^{D} + {B}^{N}{A}^{D} \) . 5. Prove that the knots \( {6}_{1},{8}_{8},{8}_{9} \) and \( {3}_{1} + \overline{{3}_{1}} \) are all slice knots. 6. Calculate the signature of the pretzel knot \( P\left( {3,3, - 3}\right) \) . 7. Two knots \( {K}_{0} \) and \( {K}_{1} \) are said to be cobordant if there is a (piecewise linear) embedding \( e : \left( {{S}^{1} \times {D}^{2}}\right) \times \left\lbrack {0,1}\right\rbrack \rightarrow {S}^{3} \times \left\lbrack {0,1}\right\rbrack \) so that \( {e}^{-1}\left( {{S}^{3}\times \{ i\} }\right) = \left( {{S}^{1} \times {D}^{2}}\right) \times \{ i\} \) for \( i = 0,1 \) and \( e\left( {{S}^{1} \times 0}\right) \times \{ i\} = {K}_{i} \) for \( i = 0,1 \) . Prove that cobordant knots have the same signatures. 8. Prove that the unknotting number of the knot \( {8}_{2} \) is 2 . 9. Show that the knot produced by summing together \( n \) copies of the trefoil knot \( {3}_{1} \) has unknotting number \( n \) . [Note. It is not known, in general, whether or not \( u\left( {{K}_{1} + {K}_{2}}\right) = \) \( \left. {u\left( {K}_{1}\right) + u\left( {K}_{2}\right) \text{.}}\right\rbrack \) 10. 
Show that the 4-ball genus \( {g}^{ \star }\left( K\right) \) of a knot \( K \) does indeed satisfy the inequality \( \left| {\sigma \left( K\right) }\right| \leq 2{g}^{ \star }\left( K\right) . \) # Cyclic Branched Covers and the Goeritz Matrix Most of this chapter will be concerned with a study of the twofold cyclic cover \( {X}_{2} \rightarrow {S}^{3} \) branched over an \( n \) component link \( L \) . The link \( L \) does not need to be oriented for this to make sense, but it will sometimes be convenient to select an arbitrary orientation in order to consider a Seifert surface. The principal result here is that the order of the first homology group \( {H}_{1}\left( {X}_{2}\right) \) is \( \det L \), the determinant of the link (where \( \det L = \left| {{\Delta }_{L}\left( {-1}\right) }\right| \) ), and that this number is often easy to calculate. As will be explained, the link determinant is, up to sign, the determinant of any Goeritz matrix [34] of the link, a matrix which is easy to write down starting from any diagram of the link. As explained at the end of Chapter 7, \( {X}_{2} \) can be constructed by gluing together \( n \) solid tori and two copies \( {Y}_{0} \) and \( {Y}_{1} \) of \( Y \), where \( Y \) is the link exterior cut along a (connected, orientable) Seifert surface \( F \) . In the boundary of \( {Y}_{i} \) are copies \( {F}_{i, + } \) and \( {F}_{i, - } \) of \( F \) . The twofold cyclic cover \( {\widehat{X}}_{2} \) of the link exterior is formed from the disjoint union \( {Y}_{0} \sqcup {Y}_{1} \) by identifying, in the natural way, \( {F}_{0, + } \) with \( {F}_{1, - } \) and \( {F}_{1, + } \) with \( {F}_{0, - } \) . Then \( {X}_{2} \) is created by gluing a solid torus to each boundary component of \( {\widehat{X}}_{2} \), identifying a meridian of a solid torus with a lift of a square of a meridian of each component of \( L \) . 
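The principal result just stated, \( \left| {{H}_{1}\left( {X}_{2}\right) }\right| = \left| {\det \left( {A + {A}^{\tau }}\right) }\right| \), is easy to test on small examples. In the sketch below, the Seifert matrices are assumptions taken from standard tables: \( \left( {-1}\right) \) for the Hopf link (the core of its annular Seifert surface, for one choice of orientation) and the usual genus-1 matrices for the trefoil and figure-eight knots.

```python
# Sanity check of |H1(X2)| = |det(A + A^T)| for small Seifert matrices
# (the matrices themselves are assumed from standard tables).

def det(M):
    """Determinant of a small integer matrix by cofactor expansion."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def order_H1_double_branched_cover(A):
    """|H1(X2)| from a Seifert matrix A; 0 would mean infinite homology."""
    n = len(A)
    S = [[A[i][j] + A[j][i] for j in range(n)] for i in range(n)]
    return abs(det(S))

print(order_H1_double_branched_cover([[-1]]))             # Hopf link: 2
print(order_H1_double_branched_cover([[-1, 1], [0, -1]])) # trefoil: 3
print(order_H1_double_branched_cover([[1, 1], [0, -1]]))  # figure-eight: 5
```

The orders 2, 3 and 5 match the double branched covers being the lens spaces \( {L}_{2,1} \), \( {L}_{3,1} \) and \( {L}_{5,2} \), as these links are the \( \left( {2,1}\right) \), \( \left( {3,1}\right) \) and \( \left( {5,2}\right) \) 2-bridge links.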
Of course, \( {H}_{1}\left( {X}_{2}\right) \) (with coefficients understood to be \( \mathbb{Z} \) ), being an abelian group, is a \( \mathbb{Z} \) -module. The next result gives a presentation matrix for \( {H}_{1}\left( {X}_{2}\right) \), in the sense of Theorem 6.1, as a \( \mathbb{Z} \) -module. Theorem 9.1. Let \( {X}_{2} \) be the cyclic double cover of \( {S}^{3} \) branched over a link \( L \) and suppose that \( A \) is a Seifert matrix for \( L \) with respect to some orientation and some Seifert surface. Then \( {H}_{1}\left( {X}_{2}\right) \) is presented, as an abelian group, by the matrix \( \left( {A + {A}^{\tau }}\right) \) . Proof. In the above notation, \( {\widehat{X}}_{2} = {Y}_{0} \cup {Y}_{1} \), where \( {Y}_{0} \cap {Y}_{1} \) is two disjoint copies of \( F \) . A presentation of \( {H}_{1}\left( {\widehat{X}}_{2}\right) \) can be obtained from the following exact Mayer-Vietoris sequence: \[ \rightarrow {H}_{1}\left( {{Y}_{0} \cap {Y}_{1}}\right) \overset{{\alpha }_{ \star }}{ \rightarrow }{H}_{1}\left( {Y}_{0}\right) \oplus {H}_{1}\left( {Y}_{1}\right) \overset{{\beta }_{ \star }}{ \rightarrow }{H}_{1}\left( {\widehat{X}}_{2}\right) \rightarrow \] \[ \rightarrow {H}_{0}\left( {{Y}_{0} \cap {Y}_{1}}\right) \overset{{\alpha }_{ \star }}{ \rightarrow }{H}_{0}\left( {Y}_{0}\right) \oplus {H}_{0}\left( {Y}_{1}\right) . \] The situation is here very similar to that of Theorem 6.5, and the same sign conventions will be used. There is now a homeomorphism \( t : {\widehat{X}}_{2} \rightarrow {\widehat{X}}_{2} \) with \( {t}^{2} = 1 \) which interchanges \( {Y}_{0} \) and \( {Y}_{1} \) . As in Theorem 6.5, one can take a base \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( F\right) \), with corresponding Seifert matrix \( A \) and dual base \( \left\{ \left\lbrack {e}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( Y\right) \) . 
Transferring to \( {\widehat{X}}_{2} \), this gives a base \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \cup \left\{ \left\lbrack {t{f}_{i}}\right\rbrack \right\} \) for \( {H}_{1}\left( {{Y}_{0} \cap {Y}_{1}}\right) \) (since \( {Y}_{0} \cap {Y}_{1} \) is two copies of \( F \) ), a base \( \left\{ \left\lbrack {e}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( {Y}_{0}\right) \) and a base \( \left\{ \left\lbrack {t{e}_{i}}\right\rbrack \right\} \) for \( {H}_{1}\left( {Y}_{1}\right) \) . Then, with respect to these bases, \( {\alpha }_{ \star } \) is represented by the matrix \[ \left( \begin{array}{rr} - A & {A}^{\tau } \\ {A}^{\tau } & - A \end{array}\right) \] Similarly, using bases represented by single points, the map \( {H}_{0}\left( {{Y}_{0} \cap {Y}_{1}}\right) \rightarrow \) \( {H}_{0}\left( {Y}_{0}\right) \oplus {H}_{0}\left( {Y}_{1}\right) \) is represented by \( \left( \begin{array}{rr} - 1 & 1 \\ 1 & - 1 \end{array}\right) \) . Thus the kernel of this last map is a copy of \( \mathbb{Z} \), and (recalling the definition of the maps in the Mayer-Vietoris sequence) any loop in \( {\widehat{X}}_{2} \) that cuts each of the two components of \( {Y}_{0} \cap {Y}_{1} \) at one point maps to a generator of this copy of \( \mathbb{Z} \) . Suppose that \( L \) has \( n \) components and that \( {c}_{i} \) is a closed curve in \( {\widehat{X}}_{2} \) which projects to the square of the meridian of \( {L}_{i} \), the \( i \) th component of \( L \) . Then \( {H}_{1}\left( {X}_{2}\right) \) is obtained from \( {H}_{1}\left( {\widehat{X}}_{2}\right) \) by equating each \( \left\lbrack {c}_{i}\right\rbrack \) to zero. Consider the genus \( g \) surface \( F \) with "standard" curves \( \left\{ {f}_{i}\right\} \) as shown in Figure 6.1. Suppose that the "outer" boundary in the diagram is \( {L}_{1} \) . 
The relation \( \left\lbrack {c}_{1}\right\rbrack = 0 \) simply removes from \( {H}_{1}\left( {\widehat{X}}_{2}\right) \) the copy of \( \mathbb{Z} \) mentioned above. To achieve \( {H}_{1}\left( {X}_{2}\right) \), it is then necessary to add in the relations \( \left\lbrack {c}_{i}\right\rbrack = \left\lbrack {c}_{1}\right\rbrack \) for \( i \geq 2 \) . Now, for \( i \geq 2 \), the curve \( {e}_{{2g} + i - 1} \) in \( Y \) encircles the band of \( F \) that has \( {L}_{i} \) as part of its boundary; when regarded as a curve in the exterior of \( L,\left\lbrack {e}_{{2g} + i - 1}\right\rbrack \) represents the difference between the first and the \( i \) th meridians of \( L \) . Thus the element \( \left\lbrack {e}_{{2g} + i - 1}\right\rbrack \oplus \left\lbrack {t{e}_{{2g} + i - 1}}\right\rbrack \in {H}_{1}\left( {Y}_{0}\right) \oplus {H}_{1}\left( {Y}_{1}\right) \) is mapped by \( {\beta }_{ \star } \) to the difference between \( \left\lbrack {c}_{i}\right\rbrack \) and \( \left\lbrack {c}_{1}\right\rbrack \) in \( {H}_{1}\left( {\widehat{X}}_{2}\right) \) . This means that \( {H}_{1}\left( {X}_{2}\right) \) is presented by the matrix \[ \left( \begin{array}{rrr} - A & {A}^{\tau } & B \\ {A}^{\tau } & - A & B \end{array}\right) \] where \( B \) is the \( \left( {{2g} + n - 1}\right) \times \left( {n - 1}\right) \) matrix \( \left( \begin{array}{l} 0 \\ I \end{array}\right), I \) here being the \( \left( {n - 1}\right) \times \) \( \left( {n - 1}\right) \) identity matrix. The permitted rules for changing a presentation matrix are described in Theorem 6.1. 
The operation of subtracting the first row of the blocks from the second, adding the first column of blocks to the second, and then adding each of the last \( n - 1 \) columns to the column preceding it by \( n - 1 \) places gives as an equivalent presentation matrix \[ \left( \begin{matrix} - A & - A + {A}^{\tau } + \left( {0 \oplus B}\right) & B \\ A + {A}^{\tau } & 0 & 0 \end{matrix}\right) , \] where \( \left( {0 \oplus B}\right) \) is \( B \) preceded by \( {2g} \) zero columns. Now \( \left( {-A + {A}^{\tau } + \left( {0 \oplus B}\right) }\right) \) consists (see Theorem 6.10) of \( g \) blocks of the form \( \left( \begin{array}{rr} 0 & - 1 \\ 1 & 0 \end{array}\right) \) followed by the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) identity along its diagonal and zeros elsewhere. This matrix is clearly invertible over \( \mathbb{Z} \) . Thus the first row of blocks may be discarded, and \( {H}_{1}\left( {X}_{2}\right) \) is presented by \( \left( {A + {A}^{\tau }}\right) \) . Of course, the Seifert matrix for \( L \) with respect to a different basis of \( {H}_{1}\left( F\right) \) is of the form \( {P}^{\tau }{AP} \) for some invertible matrix \( P \) , and \( \left( {{P}^{\tau }{AP} + {P}^{\tau }{A}^{\tau }P}\right) \) presents the same group as \( \left( {A + {A}^{\tau }}\right) \) . Corollary 9.2. Let \( {X}_{2} \) be the double cover of \( {S}^{3} \) branched over a link \( L \) . The order of the group \( {H}_{1}\left( {X}_{2}\right) \) is the modulus of the determinant of \( \left( {A + {A}^{\tau }}\right) \), that is \[ \left| {{H}_{1}\left( {X}_{2}\right) }\right| = \left| {\det \left( {A + {A}^{\tau }}\right) }\right| = \left| {{\Delta }_{L}\left( {-1}\right) }\right| . \] Proof. Any finitely generated abelian group can be expressed as a direct sum of cyclic groups. 
Thus it has as a presentation matrix a diagonal matrix, the entries on the diagonal being the orders of the summands, with the convention that an infinite group has order zero. By Theorem 6.1, the determinant of a square presentation matrix is unique up to multiplication by a unit (that is, by \( \pm 1 \) ), so the result follows at once. The statement about the Alexander polynomial then follows from Theorem 6.5. Note the caveat that here a zero corresponds to an infinite order group. However, for a knot it has already been shown (Corollary 6.11) that the determinant of \( \left( {A + {A}^{\tau }}\right) \) is an odd integer. Thus the double cover of \( {S}^{3} \) branched over a knot always has finite first homology of odd order. Whenever the exterior \( X \) of a link \( L \) has been cut by a spanning surface \( F \), it has been required that \( F \) be orientable. What happens when \( F \) is a non-orientable spanning surface? Suppose, then, that \( F \) is a non-orientable connected surface that has the link \( L \) as boundary, and let \( W \) be \( X \) -cut-along- \( F \) . Recall that \( X \) is \( {S}^{3} \) less the interior of a regular neighbourhood \( N\left( L\right) \) of \( L \) . If (by removing a small neighbourhood of \( \partial F \) in \( F \) ) \( F \) is regarded as being in \( X \), then \( W \) is formed by removing from \( X \) the interior of a regular neighbourhood \( N\left( F\right) \) of \( F \) . Locally \( N\left( F\right) \) is a product of part of \( F \) with the unit interval \( I \) . Thus the orientable manifold \( N\left( F\right) \) is an \( I \) -bundle over the non-orientable surface \( F \) . The associated \( \partial I \) bundle gives a two-to-one covering map from a connected orientable surface \( \widetilde{F} \) to \( F \) . Thus \( \widetilde{F} \) is the orientable double covering space (see Chapter 7) of \( F \), and \( N\left( F\right) \) is the mapping cylinder of the covering map \( p : \widetilde{F} \rightarrow F \) . 
If \( f \) is a closed loop in \( F \) , then \( {p}^{-1}f \) is a single loop (that double covers \( f \) ) if \( f \) is orientation reversing and is the union of two loops if \( f \) preserves orientation. In [36], Gordon and Litherland defined a quadratic form \[ {\mathcal{G}}_{F} : {H}_{1}\left( F\right) \times {H}_{1}\left( F\right) \rightarrow \mathbb{Z} \] by \( {\mathcal{G}}_{F}\left( {\left\lbrack f\right\rbrack ,\left\lbrack g\right\rbrack }\right) = \operatorname{lk}\left( {{p}^{-1}f, g}\right) \), where \( f \) and \( g \) are oriented loops in \( F \) . (Thus \( {\mathcal{G}}_{F}\left( {\left\lbrack f\right\rbrack ,\left\lbrack g\right\rbrack }\right) \) is the linking number of \( g \) with \( f \) pushed off \( F \) locally in "both directions".) It is clear that this Gordon-Litherland form gives a well-defined bilinear map and, by considering signs of crossings, that \( {\mathcal{G}}_{F} \) is symmetric. Of course, this definition still makes sense when \( F \) is orientable, and \( {p}^{-1}f \) is always two copies of \( f \), one on either side of \( F \) . The form is then sometimes called the Trotter form [124], and it is represented by \( A + {A}^{\tau } \) where \( A \) is any Seifert matrix. It has already been seen above that \( A + {A}^{\tau } \) is a presentation matrix for \( {H}_{1}\left( {X}_{2}\right) \) . This will be extended to the non-orientable surfaces in Theorem 9.3. Returning to the situation where \( F \) is a non-orientable connected surface spanning \( L \), the surface \( \widetilde{F} \) is a connected subspace of \( \partial W \) . The map that interchanges the two end points of each fibre of the above \( I \) bundle gives a homeomorphism \( t : \widetilde{F} \rightarrow \widetilde{F} \) such that \( {t}^{2} = 1 \) and \( W/t = X \) . 
One cannot imitate the orientable situation, taking infinitely many copies of \( W \) and gluing them together in a sequence, in any sensible way, for \( \partial W \) does not contain two copies of \( F \) . However, one can take two copies of \( W \), \( {W}_{0} \) and \( {W}_{1} \), with copies \( {\widetilde{F}}_{0} \) and \( {\widetilde{F}}_{1} \) of \( \widetilde{F} \) in their boundaries, and for each \( x \in \widetilde{F} \) identify the copy of \( x \) in \( {\widetilde{F}}_{0} \) with the copy of \( {tx} \) in \( {\widetilde{F}}_{1} \) . This constructs a cover of \( X \) . A loop in \( X \) lifts to a loop in this cover if and only if it meets \( F \) at an even number of points - that is, if and only if it has even linking number with \( L \) . As this property characterises the double cyclic cover \( {\widehat{X}}_{2} \) of \( X \), it is precisely that cover which has been constructed from the two copies of \( W \) . Solid tori can then be added, if desired, to obtain the double branched cover \( {X}_{2} \) . Theorem 9.3. Suppose that \( F \) is a connected surface spanning a link \( L \) ; then any matrix representing the form \( {\mathcal{G}}_{F} : {H}_{1}\left( F\right) \times {H}_{1}\left( F\right) \rightarrow \mathbb{Z} \) is a presentation matrix for \( {H}_{1}\left( {X}_{2}\right) \) . Proof. The previous theorem dealt with the case when \( F \) is orientable, so suppose that \( F \) is a connected non-orientable surface spanning the \( n \) -component link \( L \) . To calculate \( {H}_{1}\left( {X}_{2}\right) \) from this, consider the exact Mayer-Vietoris sequence \[ \rightarrow {H}_{1}\left( {{W}_{0} \cap {W}_{1}}\right) \overset{{\alpha }_{ \star }}{ \rightarrow }{H}_{1}\left( {W}_{0}\right) \oplus {H}_{1}\left( {W}_{1}\right) \overset{{\beta }_{ \star }}{ \rightarrow }{H}_{1}\left( {\widehat{X}}_{2}\right) \rightarrow . 
\] Because \( {W}_{0} \cap {W}_{1} \) is a copy of the connected surface \( \widetilde{F} \), the map \( {\beta }_{ \star } \) is a surjection. In the abstract, regard \( F \) as the surface (together with generating curves) shown in Figure 6.1, with the addition of one twisted band, or of two interlocking bands one of which is twisted, together with the extra generating curves as shown in Figure 9.1. Any non-orientable surface can be regarded as being of one of these two types. Consider the first type of surface with a generating curve \( g \) as shown. Exactly as in the orientable case, \( {H}_{1}\left( F\right) \) is freely generated by \( \left\{ {\left\lbrack {f}_{i}\right\rbrack : i = 1,2,\ldots ,{2g} + }\right. \) \( n - 1\} \cup \{ \left\lbrack g\right\rbrack \} \), and there is a dual base \( \left\{ {\left\lbrack {e}_{j}\right\rbrack : j = 1,2,\ldots ,{2g} + n}\right\} \) freely ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_106_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_106_0.jpg) generating \( {H}_{1}\left( W\right) \) . For each \( i \) let \( {\widetilde{f}}_{i} \) and \( t{\widetilde{f}}_{i} \) be the two lifts of \( {f}_{i} \) to \( \widetilde{F} \), and let \( \widetilde{g} \) be \( {p}^{-1}g \) . The classes of these curves are a free base for \( {H}_{1}\left( \widetilde{F}\right) \) . If \( \iota : \partial W \rightarrow W \) is the inclusion, there are two \( \left( {{2g} + n}\right) \times \left( {{2g} + n - 1}\right) \) matrices \( R \) and \( S \) and a \( \left( {{2g} + n}\right) \times 1 \) matrix \( \lambda \) such that \[ {\iota }_{ \star }\left\lbrack {\widetilde{f}}_{i}\right\rbrack = \mathop{\sum }\limits_{j}{R}_{ji}\left\lbrack {e}_{j}\right\rbrack ,\;{\iota }_{ \star }\left\lbrack {t{\widetilde{f}}_{i}}\right\rbrack = \mathop{\sum }\limits_{j}{S}_{ji}\left\lbrack {e}_{j}\right\rbrack \text{ and }{\iota }_{ \star }\left\lbrack \widetilde{g}\right\rbrack = \mathop{\sum }\limits_{j}{\lambda }_{j}\left\lbrack {e}_{j}\right\rbrack . 
\] Hence the map \( {\alpha }_{ \star } \) in the above Mayer-Vietoris sequence is represented by \[ \left( \begin{array}{rrr} R & S & \lambda \\  - S & - R & - \lambda \end{array}\right) \] which is thus a presentation matrix for \( {H}_{1}\left( {\widehat{X}}_{2}\right) \). It remains to consider the effect of gluing solid tori to \( {\widehat{X}}_{2} \). Consider the curves \( x \) and \( y \) on the boundary of \( N\left( F\right) \), as shown in Figure 9.2. Suppose that \( \xi \) and \( \eta \) are column matrices such that \( {\iota }_{ \star }\left\lbrack x\right\rbrack = \mathop{\sum }\limits_{j}{\xi }_{j}\left\lbrack {e}_{j}\right\rbrack \) and \( {\iota }_{ \star }\left\lbrack y\right\rbrack = \mathop{\sum }\limits_{j}{\eta }_{j}\left\lbrack {e}_{j}\right\rbrack \). Inspection of Figure 9.2 shows that \( \eta - \xi = \lambda \), and \( \eta + \xi \) is the column with 1 in the final place and zeros elsewhere (being the coordinates of \( \left\lbrack {e}_{{2g} + n}\right\rbrack \) ).
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_107_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_107_0.jpg) Figure 9.2 The effect of gluing the first solid torus to \( {\widehat{X}}_{2} \) is to equate to zero the element \( {\iota }_{ \star }\left\lbrack x\right\rbrack \oplus {\iota }_{ \star }\left\lbrack y\right\rbrack \in {H}_{1}\left( {W}_{0}\right) \oplus {H}_{1}\left( {W}_{1}\right) \). Gluing on any of the other solid tori has an effect analogous to that observed in Theorem 9.1; it equates to zero the elements of the form \( \left\lbrack {e}_{{2g} + i - 1}\right\rbrack \oplus \left\lbrack {e}_{{2g} + i - 1}\right\rbrack \in {H}_{1}\left( {W}_{0}\right) \oplus {H}_{1}\left( {W}_{1}\right) \) for \( 2 \leq i \leq n \). Hence \( {H}_{1}\left( {X}_{2}\right) \) has a presentation matrix of the form \[ \left( \begin{array}{rrrrr} R & S & \lambda & \xi & B \\  - S & - R & - \lambda & \eta & B \end{array}\right) \] where \( B \) is the \( \left( {{2g} + n}\right) \times \left( {n - 1}\right) \) matrix with \( {B}_{{2g} + j, j} = 1 \) for \( j = 1,2,\ldots, n - 1 \) and \( {B}_{i, j} = 0 \) otherwise. Subtracting the second row of blocks from the first produces \[ \left( \begin{matrix} R + S & R + S & {2\lambda } & - \lambda & 0 \\  - S & - R & - \lambda & \eta & B \end{matrix}\right) . \] Subtracting the first column of blocks from the second and adding twice the fourth column to the third gives \[ \left( \begin{matrix} R + S & 0 & 0 & - \lambda & 0 \\  - S & S - R & \xi + \eta & \eta & B \end{matrix}\right) . \] As in the proof of Theorem 9.2, the \( \left( {{2g} + n}\right) \times \left( {{2g} + n}\right) \) matrix \( \left( \begin{array}{ll} S - R & \xi + \eta \end{array}\right) \) consists of \( g \) blocks, up to sign, of the form \( \left( \begin{array}{rr} 0 & 1 \\  - 1 & 0 \end{array}\right) \) down the diagonal, a 1 in the final place on the diagonal and zeros elsewhere.
Hence, if \( \left( {0 \oplus B \oplus 0}\right) \) is \( B \) preceded by \( {2g} \) zero columns and followed by one zero column, \( \left( {S - R\;\xi + \eta }\right) + \left( {0 \oplus B \oplus 0}\right) \) is invertible. Thus another presentation matrix of the same group is \( \left( \begin{array}{ll} R + S & - \lambda \end{array}\right) \) or equivalently \( \left( \begin{array}{ll} R + S & \lambda \end{array}\right) \). However, this matrix represents the quadratic form \( {\mathcal{G}}_{F} \) with respect to the given base of \( {H}_{1}\left( F\right) \). As with any quadratic form, changing the base changes the matrix to one of the form \( {P}^{\tau }\left( {R + S\;\lambda }\right) P \), where \( P \) is invertible, and this presents the same group. Finally, it remains to consider what happens when \( F \) is a surface of the second type shown in Figure 9.1. The situation is much the same as before, except that now \( {H}_{1}\left( F\right) \) and \( {H}_{1}\left( W\right) \) each has \( {2g} + n + 1 \) generators, so that \( R \) and \( S \) are \( \left( {{2g} + n + 1}\right) \times \left( {{2g} + n}\right) \) matrices. However, \( \xi + \eta \) is a column with a 1 in each of the last two places and zeros elsewhere. Now, \( \left( {S - R}\right) \) has \( g \) blocks each up to sign of the form \( \left( \begin{array}{rr} 0 & 1 \\  - 1 & 0 \end{array}\right) \) down the diagonal and a 1 in the \( \left( {{2g} + n + 1,{2g} + n}\right) \) place. Thus \( \left( {S - R : \xi + \eta }\right) + \left( {0 \oplus B \oplus 0}\right) \) is again invertible (where the second " \( \oplus 0 \) " is two columns of zeros). The discussion then proceeds as before.

A Goeritz matrix for a link is a matrix of integers constructed in the following way: Let \( D \) be a connected diagram of a link \( L \) and let the regions of the diagram be coloured black and white in chessboard fashion.
Given this colouring, an incidence number \( \zeta \left( c\right) = \pm 1 \) can be allocated to any crossing \( c \), as in Figure 9.3. Let \( {R}_{0},{R}_{1},\ldots ,{R}_{n} \) be the white regions of the diagram. Define a "pre-Goeritz matrix" to be the \( \left( {n + 1}\right) \times \left( {n + 1}\right) \) matrix having terms \( \left\{ {g}_{ij}\right\} \) given, for \( i \neq j \), by \[ {g}_{ij} = \sum \zeta \left( c\right) \] where the sum is over all crossings at which \( {R}_{i} \) and \( {R}_{j} \) come together. Define diagonal terms by \[ {g}_{ii} = - \mathop{\sum }\limits_{{j \neq i}}{g}_{ij} \] The related Goeritz matrix \( G \) is this matrix with a row and corresponding column deleted. It may be assumed that the labelling is such that it is the row and column indexed by zero that are deleted. Thus \( G \) is the \( n \times n \) matrix \( \left\{ {{g}_{ij} : 1 \leq i, j \leq n}\right\} \) . Of course \( G \) depends on the diagram chosen for \( L \), on which regions are called white, and on the labelling of those white regions. The following result is taken from [36]. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_108_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_108_0.jpg) Theorem 9.4. Any Goeritz matrix for a link \( L \), associated with the white regions of a diagram of \( L \), represents, with respect to some base, the Gordon-Litherland form \[ {\mathcal{G}}_{F} : {H}_{1}\left( F\right) \times {H}_{1}\left( F\right) \rightarrow \mathbb{Z}, \] where \( F \) is the spanning surface for \( L \) given by the black regions of the diagram. Proof. Let the white regions, \( {R}_{0},{R}_{1},\ldots ,{R}_{n} \), of the diagram inherit an orientation from the sphere \( {S}^{2} \) in which they are assumed to lie; thus each \( \partial {R}_{i} \) has an orientation. Let \( {f}_{i} \) be the oriented simple closed curve in \( F \) that consists of \( \partial {R}_{i} \) pushed into the union of the black regions. 
Then \( \left\{ {\left\lbrack {f}_{i}\right\rbrack : 0 \leq i \leq n}\right\} \) forms a set of generators for \( {H}_{1}\left( F\right) \) ; any subset of \( n \) of the \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) forms a base for \( {H}_{1}\left( F\right) \) . Suppose that the white regions \( {R}_{i} \) and \( {R}_{j} \) are both incident at a crossing \( c \) where \( \zeta \left( c\right) = + 1 \) . Then in the above notation, the curve or curves \( {p}^{-1}{f}_{j} \), namely the push-off of \( {f}_{j} \) from \( F \) locally to both sides of \( F \), meet \( {R}_{i} \) in a positive point of intersection and meet \( {R}_{j} \) in a negative point of intersection near to \( c \) . See Figure 9.4. The sign is positive if the orientation of the region is in the sense of a right-hand screw with respect to the orientation of \( {p}^{-1}{f}_{j} \) . The signs are reversed if \( \zeta \left( c\right) = - 1 \) . Thus for \( i \neq j,\operatorname{lk}\left( {{p}^{-1}{f}_{j},{f}_{i}}\right) = \sum \zeta \left( c\right) \), where the sum is over all crossings at which \( {R}_{i} \) and \( {R}_{j} \) come together, and \( \operatorname{lk}\left( {{p}^{-1}{f}_{j},{f}_{j}}\right) = - \sum \zeta \left( c\right) \) , the sum being over all \( c \) at which \( {R}_{j} \) is incident with other regions. Note the two points of \( {p}^{-1}{f}_{j} \cap {R}_{j} \) near a crossing at which \( {R}_{j} \) is incident with itself cancel each other. Hence the quadratic form \( {\mathcal{G}}_{F} \) is represented with respect to the base \( \left\lbrack {f}_{1}\right\rbrack ,\left\lbrack {f}_{2}\right\rbrack ,\ldots ,\left\lbrack {f}_{n}\right\rbrack \) by the Goeritz matrix of the diagram with the above labelling of the white regions. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_109_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_109_0.jpg) Figure 9.4 Corollary 9.5. 
The determinant of \( L \), \( \left| {{\Delta }_{L}\left( {-1}\right) }\right| \), is equal to \( \left| {\det G}\right| \), where \( G \) is any Goeritz matrix for \( L \). The proof of this is immediate from the last three theorems. It follows that \( \left| {\det G}\right| \) is an invariant of \( L \), and, as a Goeritz matrix is often easy to write down, it can be a useful invariant. As an example, consider the diagram with \( n + 2 \) crossings of a twisted double of the unknot shown in Figure 6.3. That diagram has its regions coloured in chessboard fashion with three white regions. Suppose the outer region is \( {R}_{0} \), that \( {R}_{1} \) is the region abutting only two crossings, and that \( {R}_{2} \) is the other region. The pre-Goeritz matrix is \[ \left( \begin{matrix} n + 1 & - 1 & - n \\  - 1 & 2 & - 1 \\  - n & - 1 & n + 1 \end{matrix}\right) , \] so that \( \left( \begin{matrix} 2 & - 1 \\  - 1 & n + 1 \end{matrix}\right) \) is a Goeritz matrix and the determinant of the knot is \( \left| {{2n} + 1}\right| \). Note that this simple invariant is enough to distinguish all these knots from each other when \( n \geq 0 \). As a second favourite example, take the pretzel knot or link \( P\left( {p, q, r}\right) \) shown by a coloured diagram in Figure 6.4. The pre-Goeritz matrix is \[ \left( \begin{matrix} r + p & - p & - r \\  - p & p + q & - q \\  - r & - q & q + r \end{matrix}\right) , \] the Goeritz matrix is \( \left( \begin{matrix} p + q & - q \\  - q & q + r \end{matrix}\right) \) and the determinant is \( \left| {{pq} + {qr} + {rp}}\right| \).
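The Goeritz recipe is easy to mechanise. In the sketch below each crossing is recorded as the pair of white regions meeting there together with its incidence number \( \zeta \left( c\right) \); the region labels and the choice of all-negative incidence numbers are illustrative assumptions (not read off the figures), chosen so that the two pre-Goeritz matrices above are reproduced:

```python
import numpy as np

def goeritz(num_white, crossings):
    """Pre-Goeritz matrix from crossings (i, j, zeta); delete row and column 0."""
    g = np.zeros((num_white, num_white), dtype=int)
    for i, j, zeta in crossings:
        g[i, j] += zeta
        g[j, i] += zeta
    for i in range(num_white):
        g[i, i] = -(g[i].sum() - g[i, i])   # diagonal: g_ii = -sum_{j != i} g_ij
    return g[1:, 1:]                        # a Goeritz matrix G

def band(i, j, k):
    """k half-twists between white regions i and j (sign convention assumed)."""
    return [(i, j, -1)] * k if k >= 0 else [(i, j, 1)] * (-k)

# Twisted double of the unknot (Figure 6.3) with n = 4: determinant |2n + 1|.
n = 4
det_twist = abs(round(np.linalg.det(
    goeritz(3, band(0, 1, 1) + band(1, 2, 1) + band(0, 2, n)))))
# det_twist == 9

# Pretzel knot P(-3, 5, 7): determinant |pq + qr + rp| = 1.
p, q, r = -3, 5, 7
det_pretzel = abs(round(np.linalg.det(
    goeritz(3, band(0, 1, p) + band(1, 2, q) + band(0, 2, r)))))
# det_pretzel == 1
```

Deleting a different row and column, or relabelling the white regions, changes \( G \) but not \( \left| {\det G}\right| \).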
Note that this last determinant can be equal to 1 (for example when \( \left( {p, q, r}\right) = \left( {-3,5,7}\right) \)). Then the double cover of \( {S}^{3} \) branched over the link has trivial first homology group; standard results in homology theory then imply that it has all the same homology groups as \( {S}^{3} \). More information about the Goeritz matrix can be found in [36]. In particular, the signature of the link can be calculated from the signature of \( G \) together with a simple "correction term". A variant of the proof given here for Theorem 8.2 shows that the Goeritz matrix of a knot is well defined up to moves that change \( G \) to \( {P}^{\tau }{GP} \) for an invertible matrix of integers \( P \), or to \( \left( \begin{array}{rr} G & 0 \\ 0 & \pm 1 \end{array}\right) \) or the reverse move. Of course, from this the invariance of \( \left| {\det G}\right| \) follows at once. One can likewise easily check this invariance directly from the Reidemeister moves. These last remarks must be qualified a little if links are considered rather than knots (see [36]). A result that connects the idea of the determinant with link polynomials is the following. In the language of Chapters 15 and 16, it states that \( {\left( \det L\right) }^{2} \) is the value of the Kauffman polynomial of \( L \) when \( \left( {1,2}\right) \) is substituted for the pair of variables of that polynomial. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_110_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_110_0.jpg) Figure 9.5

Theorem 9.6. Suppose that \( {L}_{ + },{L}_{ - },{L}_{0} \) and \( {L}_{\infty } \) are four links that have identical diagrams except near a point where they are as shown in Figure 9.5. Then \[ {\left( \det {L}_{ + }\right) }^{2} + {\left( \det {L}_{ - }\right) }^{2} = 2\left( {{\left( \det {L}_{0}\right) }^{2} + {\left( \det {L}_{\infty }\right) }^{2}}\right) . \] Proof.
The diagram shows the four links together with connected shaded spanning surfaces \( {F}_{i} \) for \( i = + , - ,0,\infty \) . These can always be constructed by using Seifert’s method (see Chapter 2) for \( {F}_{0} \) and adding bands to get the other three surfaces. The four surfaces are taken to be identical outside the areas shown. Take closed curves in \( {F}_{0} \) representing a base of \( {H}_{1}\left( {F}_{0}\right) \) and, for bases of \( {H}_{1}\left( {F}_{i}\right) \) for \( i = + , - ,\infty \), take the classes of the extra curves shown in the diagrams (the ends of them are joined up outside the diagrams) together with the set of curves already chosen for \( {F}_{0} \) . Matrices \( {M}_{i} \) for the Gordon-Litherland forms \( {\mathcal{G}}_{{F}_{i}} \) with respect to these bases are of the following form: \[ {M}_{\infty } = \left( \begin{array}{rr} n & \rho \\ {\rho }^{\tau } & {M}_{0} \end{array}\right) ,\;{M}_{ \pm } = \left( \begin{matrix} n \mp 1 & \rho \\ {\rho }^{\tau } & {M}_{0} \end{matrix}\right) . \] Thus \[ \det {M}_{ \pm } = \det {M}_{\infty } \mp \det {M}_{0} \] Squaring and adding give the required result. Recall that the \( r \) -fold cyclic cover of \( {S}^{3} \) branched over an \( n \) -component link is constructed by adding \( n \) solid tori to the space formed by gluing together, in a cyclic fashion, \( r \) copies of the link’s exterior cut along a (connected) Seifert surface. The following result and its proof are direct generalisations of those of Theorem 9.1. The details will thus not be pursued; note however that everything simplifies a little when the link is a knot (that is, when \( n = 1 \) ). Theorem 9.7. Let \( {X}_{r} \) be the cyclic \( r \) -fold cover of \( {S}^{3} \) branched over an \( n \) - component oriented link \( L \), and suppose that \( A \) is a Seifert matrix for \( L \) coming from a genus \( g \) Seifert surface. 
Then \( {H}_{1}\left( {X}_{r}\right) \) is presented, as an abelian group, by the \( r \times \left( {r + 1}\right) \) matrix of blocks \[ \left( \begin{matrix} - {A}^{\tau } & & & & A & B \\ A & - {A}^{\tau } & & & & B \\ & A & - {A}^{\tau } & & & B \\ & & \ddots & \ddots & & \vdots \\ & & & A & - {A}^{\tau } & B \end{matrix}\right) , \] where \( B \) is the \( \left( {{2g} + n - 1}\right) \times \left( {n - 1}\right) \) matrix \( \left( \begin{array}{l} 0 \\ I \end{array}\right) \) . Corollary 9.8. The order of the first homology group of \( {X}_{r} \), the cyclic \( r \) -fold cover of \( {S}^{3} \) branched over \( L \), is given by \[ \left| {{H}_{1}\left( {X}_{r}\right) }\right| = \left| {\mathop{\prod }\limits_{{v = 1}}^{{r - 1}}{\Delta }_{L}\left( {e}^{{2\pi }\imath \frac{v}{r}}\right) }\right| . \] Assuming that a "standard" base has been used for the homology of the Seifert surface, the presentation matrix given in the above theorem can be manipulated in the following way: Add to the first column of blocks all the other columns of blocks except the last one, so that every block entry in the first column becomes \( A - {A}^{\tau } \) . Then by rearranging the first \( {2g} + n - 1 \) columns of the matrix and the last \( n - 1 \) columns, deleting zero columns and changing some signs of columns, obtain, as an alternative presentation matrix, \[ \left( \begin{matrix} I & & & & A \\ I & - {A}^{\tau } & & & \\ I & A & - {A}^{\tau } & & \\ \vdots & & \ddots & \ddots & \\ I & & & A & - {A}^{\tau } \end{matrix}\right) . \] This is an \( r \times r \) matrix of \( \left( {{2g} + n - 1}\right) \times \left( {{2g} + n - 1}\right) \) blocks. The proof of the corollary consists of the (not entirely trivial) exercise in linear algebra of evaluating the determinant of this matrix, using the fact that \( {\Delta }_{L}\left( t\right) \) is, up to a unit, \( \det \left( {{tA} - {A}^{\tau }}\right) \) . ## Exercises 1. 
Use the Goeritz matrix to find the determinant of the knot \( {8}_{18} \).
2. Find some knots \( K \) for which the double cover of \( {S}^{3} \) branched over \( K \) has zero first homology group (and hence has the same homology groups as \( {S}^{3} \)).
3. Show that Theorem 9.6, together with the fact that the determinant of the unknot is 1, can be used to calculate the determinant of any link. Illustrate the method with a calculation for the knot \( {4}_{1} \).
4. Let \( C \) be a knot. Let \( K \) be the (cable) satellite of \( C \) that consists of a simple closed curve, on the boundary of the solid torus neighbourhood \( N\left( C\right) \) of \( C \), which is homologous to two longitudes plus \( {2n} + 1 \) meridians. Thus \( K \) bounds a knotted Möbius band contained in a neighbourhood of \( C \). Use the Gordon-Litherland form associated with this Möbius band to find the determinant of \( K \).
5. Use the Gordon-Litherland form to determine the determinant of the pretzel knot \( P\left( {p, q, r}\right) \) when \( p \) is an even integer and \( q \) and \( r \) are both odd.
6. Show directly that the modulus of the determinant of \( G \), the Goeritz matrix associated to the white regions of a knot diagram, does not change if the diagram is changed by a Reidemeister move. What happens if attention is switched to the black regions?
7. If two knots have diagrams giving rise to the same Goeritz matrix, in what way are the knots related?

## 10. The Arf Invariant and the Jones Polynomial

The original Arf invariant was an invariant of certain quadratic forms on a vector space over a field of characteristic 2. This can be applied to a quadratic form, closely associated to the Seifert form, on the first homology with \( \mathbb{Z}/2\mathbb{Z} \) coefficients of a Seifert surface of an oriented link \( L \).
The result is a fairly classical link invariant \( \mathcal{A}\left( L\right) \in \mathbb{Z}/2\mathbb{Z} \) called the Arf (or Robertello) invariant of \( L \) ([111], [114]). It must, however, be stated at once that for this theory to work, that is, for \( \mathcal{A}\left( L\right) \) to be defined, \( L \) must satisfy the condition that the linking number of any component with the remainder of the link should be an even number. Before the discovery of the Jones polynomial, efforts to find a sensible generalisation of the Arf invariant to all links met with no success. The Jones polynomial \( V\left( L\right) \) is, of course, always defined. As will be shown in what follows, evaluating \( V\left( L\right) \) when \( t = i \) (with \( {t}^{1/2} = {e}^{{i\pi }/4} \) ) gives \[ V{\left( L\right) }_{\left( t = i\right) } = \left\{ \begin{array}{ll} {\left( -\sqrt{2}\right) }^{\# L - 1}{\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }\mathcal{A}\left( L\right) \text{ is defined,} \\ 0 & \text{ otherwise,} \end{array}\right. \] where \( \# L \) is the number of components of \( L \). In a sense, this shows why a definition of \( \mathcal{A}\left( L\right) \) for any link could not be found. Interpreted from the point of view of the Jones polynomial, this result gives one of the very few evaluations of the polynomial in terms of previously known invariants that can be calculated in "polynomial time" (see Chapter 16).
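This evaluation can be checked on a small example. Assuming the Jones polynomial of a trefoil in one chirality convention, \( V\left( t\right) = - {t}^{-4} + {t}^{-3} + {t}^{-1} \), the formula with \( \# L = 1 \) and Arf invariant 1 predicts \( V\left( i\right) = - 1 \):

```python
t = 1j  # t = i
# Jones polynomial of a trefoil in one chirality convention (an assumption).
V_trefoil = -t**-4 + t**-3 + t**-1
# -t^-4 = -1, t^-3 = i, t^-1 = -i, so V_trefoil == -1,
# agreeing with (-sqrt 2)^(#L - 1) * (-1)^Arf = (-sqrt 2)^0 * (-1)^1 = -1.
```

The mirror convention \( - {t}^{4} + {t}^{3} + t \) gives the same value at \( t = i \), as it must, since the Arf invariant does not distinguish mirror images.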
This chapter will first explore the Arf invariant for vector spaces over \( \mathbb{Z}/2\mathbb{Z} \) and then effect liaison with the Jones polynomial. In what follows, let \( V \) be a finite-dimensional vector space over \( \mathbb{Z}/2\mathbb{Z} \), the field of two elements \( \{ 0,1\} \) . Definition 10.1. A function \( \psi : V \rightarrow \mathbb{Z}/2\mathbb{Z} \) is a quadratic form if for some bilinear map \( \mathcal{F} : V \times V \rightarrow \mathbb{Z}/2\mathbb{Z} \) , \[ \psi \left( {u + v}\right) + \psi \left( u\right) + \psi \left( v\right) = \mathcal{F}\left( {u, v}\right) \] for all \( u, v \in V \) . The quadratic form is called non-singular if \( \mathcal{F} \) is non-singular (that is, for each non-zero \( u \in V,\mathcal{F}\left( {u, v}\right) \neq 0 \) for some \( v \in V \) ). Note that \( \psi \left( 0\right) = 0,\mathcal{F}\left( {u, u}\right) = 0,\mathcal{F} \) is symmetric (which is here the same as skew-symmetric) and \( \psi \left( {\lambda u}\right) = {\lambda \psi }\left( u\right) = {\lambda }^{2}\psi \left( u\right) \) for \( \lambda \in \{ 0,1\} \) . If \( \mathcal{F} \) is non-singular, the usual arguments for real skew-symmetric forms imply that there is a base \( {e}_{1},{f}_{1},{e}_{2},{f}_{2},\ldots ,{e}_{n},{f}_{n} \) for \( V \) with respect to which \( \mathcal{F} \) is represented by a matrix of the form \[ \left( \begin{matrix} \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) & 0 & \ldots & 0 \\ 0 & \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \end{matrix}\right) . \] This implies that \( V \) must have even dimension. Such a base is called symplectic. 
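The compatibility condition of Definition 10.1 can be confirmed by brute force for any form given in coordinates: if \( \psi \left( u\right) = {u}^{\tau }{Au} \) modulo 2 for an upper-triangular mod-2 matrix \( A \) (a standard coordinate description, used here purely as an illustration, with an arbitrary choice of \( A \) ), the associated bilinear map is \( \mathcal{F}\left( {u, v}\right) = {u}^{\tau }\left( {A + {A}^{\tau }}\right) v \):

```python
import numpy as np
from itertools import product

# An arbitrary illustrative upper-triangular mod-2 matrix defining psi.
A = np.array([[1, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 0]])
F = (A + A.T) % 2

def psi(u):
    u = np.array(u)
    return int(u @ A @ u) % 2

# Definition 10.1: psi(u + v) + psi(u) + psi(v) = F(u, v) for all u, v.
vectors = list(product((0, 1), repeat=4))
polarises = all(
    (psi((np.array(u) + np.array(v)) % 2) + psi(u) + psi(v)) % 2
    == int(np.array(u) @ F @ np.array(v)) % 2
    for u, v in product(vectors, repeat=2)
)
# polarises == True
```

Note that \( A + {A}^{\tau } \) has zero diagonal modulo 2, reflecting \( \mathcal{F}\left( {u, u}\right) = 0 \).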
Using a symplectic base, it follows (for instance, by induction on \( n \) ), that \[ \psi \left( {{x}_{1}{e}_{1} + {y}_{1}{f}_{1} + \cdots + {x}_{n}{e}_{n} + {y}_{n}{f}_{n}}\right) = \mathop{\sum }\limits_{1}^{n}{x}_{i}^{2}\psi \left( {e}_{i}\right) + \mathop{\sum }\limits_{1}^{n}{y}_{i}^{2}\psi \left( {f}_{i}\right) + \mathop{\sum }\limits_{1}^{n}{x}_{i}{y}_{i}. \] Consider the identity \( {x}_{1}^{2} + {x}_{1}{y}_{1} = {x}_{1}\left( {{x}_{1} + {y}_{1}}\right) \) . If \( \psi \left( {e}_{1}\right) = 1 \) and \( \psi \left( {f}_{1}\right) = 0 \), this identity can be used to construct a new symplectic base (starting with \( \left\{ {{e}_{1} + {f}_{1},{f}_{1}}\right\} \) ) so that with the new base neither the term in \( {x}_{1}^{2} \) nor the term in \( {y}_{1}^{2} \) appears. Similarly, when \( \psi \left( {e}_{1}\right) ,\psi \left( {e}_{2}\right) ,\psi \left( {f}_{1}\right) \) and \( \psi \left( {f}_{2}\right) \) are all non-zero, a new symplectic base starting with \[ \left\{ {\left( {{e}_{1} + {e}_{2} + {f}_{1}}\right) ,\left( {{e}_{1} + {f}_{1} + {f}_{2}}\right) ,\left( {{e}_{1} + {e}_{2} + {f}_{2}}\right) ,\left( {{e}_{2} + {f}_{1} + {f}_{2}}\right) }\right\} \] can be chosen to remove the squared terms; this corresponds to the identity \[ {x}_{1}^{2} + {x}_{1}{y}_{1} + {y}_{1}^{2} + {x}_{2}^{2} + {x}_{2}{y}_{2} + {y}_{2}^{2} \] \[ = \left( {{x}_{1} + {y}_{1} + {x}_{2}}\right) \left( {{x}_{1} + {y}_{1} + {y}_{2}}\right) + \left( {{x}_{1} + {x}_{2} + {y}_{2}}\right) \left( {{y}_{1} + {x}_{2} + {y}_{2}}\right) \text{.} \] Thus, a symplectic base can be chosen with respect to which \( \psi \left( {{x}_{1}{e}_{1} + {y}_{1}{f}_{1} + }\right. \) \( \left. {\cdots + {x}_{n}{e}_{n} + {y}_{n}{f}_{n}}\right) \) is of one of the two following "Types". Type 1: \( {x}_{1}{y}_{1} + {x}_{2}{y}_{2} + \cdots + {x}_{n}{y}_{n} \) , Type 2: \( {x}_{1}{y}_{1} + {x}_{2}{y}_{2} + \cdots + {x}_{n}{y}_{n} + {x}_{n}^{2} + {y}_{n}^{2} \) . Definition 10.2. 
The Arf invariant \( c\left( \psi \right) \) of the non-singular quadratic form \( \psi : V \rightarrow \mathbb{Z}/2\mathbb{Z} \) is the value,0 or 1, taken more often by \( \psi \left( u\right) \) as \( u \) varies over the \( {2}^{2n} \) elements of \( V \) . It is easy to show, by induction on \( n \), that the value 1 is taken \( {2}^{{2n} - 1} - {2}^{n - 1} \) times by \( \psi \left( u\right) \) if \( \psi \) is of Type 1 and \( {2}^{{2n} - 1} + {2}^{n - 1} \) times if \( \psi \) is of Type 2 . Hence \( c\left( \psi \right) \) is always defined, no choice is involved in its definition and \[ c\left( \psi \right) = \left\{ \begin{array}{ll} 0 & \text{ if }\psi \text{ is of Type }1 \\ 1 & \text{ if }\psi \text{ is of Type }2 \end{array}\right. \] Note that if \( {\psi }_{1} \) and \( {\psi }_{2} \) are quadratic forms on \( {V}_{1} \) and \( {V}_{2} \), respectively, then the form \( {\psi }_{1} \oplus {\psi }_{2} \) on \( {V}_{1} \oplus {V}_{2} \) has \( c\left( {{\psi }_{1} \oplus {\psi }_{2}}\right) = c\left( {\psi }_{1}\right) + c\left( {\psi }_{2}\right) \) modulo 2 . This follows by checking the possible Types. Note also that if \( {e}_{1},{f}_{1},{e}_{2},{f}_{2},\ldots ,{e}_{n},{f}_{n} \) is any symplectic base, then \[ c\left( \psi \right) = \mathop{\sum }\limits_{{i = 1}}^{n}\psi \left( {e}_{i}\right) \psi \left( {f}_{i}\right) \] The above theory of \( \mathbb{Z}/2\mathbb{Z} \) quadratic forms is applied to links in the following way: Let \( L \) be an oriented link in \( {S}^{3} \) with Seifert surface \( F \), the orientation being needed to define \( F \) . Define \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) by \( q\left( x\right) = {\alpha }_{2}\left( {x, x}\right) \in \) \( \mathbb{Z}/2\mathbb{Z} \), where \( {\alpha }_{2} \) is the Seifert form \( \alpha \) (see Chapter 6) reduced modulo 2 . 
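The value counts behind Definition 10.2 are easy to confirm by enumeration; a quick sketch for \( n = 2 \) (so \( \dim V = 4 \)), using the Type 1 and Type 2 normal forms above:

```python
from itertools import product

def ones(psi, dim):
    """Number of vectors u in (Z/2Z)^dim with psi(u) = 1."""
    return sum(psi(v) for v in product((0, 1), repeat=dim))

# Type 1 and Type 2 forms for n = 2, written in symplectic coordinates
# (x1, y1, x2, y2).
type1 = lambda v: (v[0] * v[1] + v[2] * v[3]) % 2
type2 = lambda v: (v[0] * v[1] + v[2] * v[3] + v[2] ** 2 + v[3] ** 2) % 2

# Type 1 takes the value 1 exactly 2^(2n-1) - 2^(n-1) = 6 times (Arf = 0);
# Type 2 takes it 2^(2n-1) + 2^(n-1) = 10 times (Arf = 1).
counts = (ones(type1, 4), ones(type2, 4))
# counts == (6, 10)
```

Since \( 6 < {2}^{3} < {10} \), the majority value is 0 for Type 1 and 1 for Type 2, as the definition requires.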
Thus if \( x \) is (represented by) a simple closed curve on \( F, q\left( x\right) \) is the number, modulo 2, of twists in an annular neighbourhood of \( x \) in \( F \) . Then \[ q\left( {x + y}\right) + q\left( x\right) + q\left( y\right) = {\alpha }_{2}\left( {x, y}\right) + {\alpha }_{2}\left( {y, x}\right) = \mathcal{F}\left( {x, y}\right) , \] where \( \mathcal{F} \) is the intersection form (which just counts the number of intersection points of transverse curves) modulo 2 on \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) \) . However, a glance at the base shown in Figure 6.1 reveals that \( \mathcal{F} \) is non-singular only when \( L \) has one component. A second glance shows that \( \mathcal{F} \) induces a non-singular form on the quotient \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \), where \( \iota \) is the inclusion map. Suppose that \( L \) has components \( \left\{ {L}_{i}\right\} \) and that \( L \) has the property that (*) \[ \operatorname{lk}\left( {{L}_{i}, L - {L}_{i}}\right) \equiv 0{\;\operatorname{modulo}\;2}. \] Then \( q\left( \left\lbrack {L}_{i}\right\rbrack \right) \equiv \operatorname{lk}\left( {{L}_{i}^{ - },{L}_{i}}\right) = \operatorname{lk}\left( {{L}_{i}, L - {L}_{i}}\right) \equiv 0 \) modulo 2, as \( {L}_{i} \) is homologous to \( L - {L}_{i} \) in the complement of \( {L}_{i}^{ - } \) . For any \( x \in {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) \), clearly \( \mathcal{F}\left( {x,\left\lbrack {L}_{i}\right\rbrack }\right) = 0 \), so \( q\left( {x + \left\lbrack {L}_{i}\right\rbrack }\right) = q\left( x\right) \), and hence \( q \) induces a well-defined non-singular quadratic form \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) . Definition 10.3. 
The Arf invariant \( \mathcal{A}\left( L\right) \) of an oriented link \( L \) having the property \( \left( \star \right) \) is \( c\left( q\right) \), where \( q : {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \rightarrow \mathbb{Z}/2\mathbb{Z} \) is the quadratic form described above. Proposition 10.4. The Arf invariant \( \mathcal{A}\left( L\right) \) for an oriented link \( L \) having property \( \left( \star \right) \) is well defined. Proof. It is necessary to check that \( \mathcal{A}\left( L\right) \) does not depend on the choice of Seifert surface \( F \). By Theorem 8.2, it is only necessary to check what happens when \( F \) is changed to \( {F}^{\prime } \) by embedded surgery along an arc in \( {S}^{3} \). Suppose that \( \left\{ {{e}_{1},{f}_{1},{e}_{2},{f}_{2},\ldots ,{e}_{n},{f}_{n}}\right\} \) is a symplectic base for \( {H}_{1}\left( {F;\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial F;\mathbb{Z}/2\mathbb{Z}}\right) \) represented by simple closed curves (for example the first \( {2g} \) curves, renamed, of Figure 6.1).
That base can be augmented by \( \left\{ {{e}_{n + 1},{f}_{n + 1}}\right\} \) to give a symplectic base for \( {H}_{1}\left( {{F}^{\prime };\mathbb{Z}/2\mathbb{Z}}\right) /{\iota }_{ \star }{H}_{1}\left( {\partial {F}^{\prime };\mathbb{Z}/2\mathbb{Z}}\right) \) : Choose \( {e}_{n + 1} \) to be represented by a simple closed curve encircling once the solid cylinder defining the embedded surgery, that curve being met at exactly one point by a simple closed curve representing \( {f}_{n + 1} \). Note that an isotopy of the end points of the surgery arc \( \alpha \) ensures that the two points of \( \partial \alpha \) are not separated by any base curve. Then \( q\left( {e}_{n + 1}\right) = 0 \), and so \( \mathop{\sum }\limits_{{i = 1}}^{n}q\left( {e}_{i}\right) q\left( {f}_{i}\right) = \mathop{\sum }\limits_{{i = 1}}^{{n + 1}}q\left( {e}_{i}\right) q\left( {f}_{i}\right) \). Note that \( \mathcal{A} \) (the unknot) \( = 0 \) and \( \mathcal{A} \) (the trefoil) \( = 1 \), for as shown in Figure 6.3 (when \( n = 1 \) ), the trefoil has a symplectic base \( \left\{ {{e}_{1},{f}_{1}}\right\} \) for which \( q\left( {e}_{1}\right) = \) \( q\left( {f}_{1}\right) = 1 \). Note, too, that the addition formula for the Arf invariant of the direct sum of two quadratic forms implies that \( \mathcal{A}\left( {L + {L}^{\prime }}\right) = \mathcal{A}\left( L\right) + \mathcal{A}\left( {L}^{\prime }\right) \) for any two links \( L \) and \( {L}^{\prime } \) having property \( \left( \star \right) \) (whatever components are chosen for the summing operation). Lemma 10.5. Suppose that \( L \) and \( {L}^{\prime } \) are oriented links having property \( \left( \star \right) \) which are the same except near one point, where they are as shown in Figure 10.1; then \( \mathcal{A}\left( L\right) = \mathcal{A}\left( {L}^{\prime }\right) \). Proof. The two segments shown on one of the two sides of Figure 10.1 must belong to the same component of the link.
Suppose, without loss of generality, it is the two segments on the left side. Then using the Seifert circuit method of Theorem 2.2, a Seifert surface can be constructed for the left link that meets the neighbourhood of the point in question in the way indicated by the shading. Adding a band to that produces a Seifert surface for the right link as indicated. Now, as these two surfaces just differ by a band added to the boundary, the \( \mathbb{Z}/2\mathbb{Z} \) -homology of the second surface is just that of the first surface with an extra \( \mathbb{Z}/2\mathbb{Z} \) summand. However, that summand is in the image of the homology of the boundary of the surface; this image is disregarded (by means of the quotienting) in the construction of the quadratic form that gives the Arf invariant. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_116_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_116_0.jpg) Figure 10.1 Note that elementary consideration of linking numbers shows the following: If the two segments of the link \( L \) shown on one side of Figure 10.1 belong to distinct components, and if \( L \) has the property \( \left( \star \right) \), then \( {L}^{\prime } \) also has the property \( \left( \star \right) \). With the definition of the Arf invariant and its elementary properties now established, its relevance to the Jones polynomial can now be considered. The result linking the two topics is as follows: Theorem 10.6. The Jones polynomial of any oriented link \( L \) in \( {S}^{3} \), evaluated at \( t = i \) (with \( {t}^{1/2} = {e}^{{i\pi }/4} \)), is given by \[ V{\left( L\right) }_{\left( t = i\right) } = \left\{ \begin{array}{ll} {\left( -\sqrt{2}\right) }^{\# L - 1}{\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }L\text{ has property }\left( \star \right) , \\ 0 & \text{ otherwise,} \end{array}\right. \] where \( \# L \) is the number of components of \( L \) and \( \mathcal{A}\left( L\right) \) is its Arf invariant. Proof.
Define \( A\left( L\right) \) to be the integer given by \[ A\left( L\right) = \left\{ \begin{array}{ll} {\left( -1\right) }^{\mathcal{A}\left( L\right) } & \text{ if }L\text{ has property }\left( \star \right) , \\ 0 & \text{ otherwise. } \end{array}\right. \] Now suppose that \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are three oriented links that are exactly the same except near a point where they are as shown in Figure 3.2 (the usual relationship). The proof considers two cases as follows: Suppose first that the two segments of \( {L}_{ + } \) near the point in question are parts of the same component of \( {L}_{ + } \). (Then either both \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \) or neither of them does.) If \( {L}_{0} \) has property \( \left( \star \right) \), then so, by the above remark, do \( {L}_{ + } \) and \( {L}_{ - } \), and by Lemma 10.5, \( \mathcal{A}\left( {L}_{0}\right) = \mathcal{A}\left( {L}_{ + }\right) = \mathcal{A}\left( {L}_{ - }\right) \). Thus certainly \[ A\left( {L}_{ + }\right) + A\left( {L}_{ - }\right) - {2A}\left( {L}_{0}\right) = 0, \] an equation that also, trivially, holds if none of \( {L}_{ + },{L}_{ - } \) or \( {L}_{0} \) has property \( \left( \star \right) \). There remains the possibility that \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \) but that \( {L}_{0} \) does not. Consider the two links shown in Figure 10.2. It is easy to check that the first link, \( X \) say, has property \( \left( \star \right) \), and so its Arf invariant exists and by Lemma 10.5, \( \mathcal{A}\left( {L}_{ + }\right) = \mathcal{A}\left( X\right) \). The second link is just \( {L}_{ - } \) in disguise. It can also be thought of as \( X \) first summed with a trefoil knot and then having two components banded together. Thus, again using Lemma 10.5, \( \mathcal{A}\left( X\right) + 1 = \mathcal{A}\left( {L}_{ - }\right) \) modulo 2.
Hence again it is true that \( A\left( {L}_{ + }\right) + A\left( {L}_{ - }\right) - {2A}\left( {L}_{0}\right) = 0 \). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_117_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_117_0.jpg) Figure 10.2 Secondly, suppose that the two segments of \( {L}_{ + } \) near the point in question are parts of different components of \( {L}_{ + } \). If \( {L}_{0} \) does not have property \( \left( \star \right) \) then neither do \( {L}_{ + } \) and \( {L}_{ - } \), and so trivially \[ A\left( {L}_{ + }\right) + A\left( {L}_{ - }\right) - A\left( {L}_{0}\right) = 0. \] Otherwise \( {L}_{0} \) and one of \( {L}_{ + } \) and \( {L}_{ - } \) have property \( \left( \star \right) \), and this formula is again true (using Lemma 10.5). If \( \widehat{A}\left( L\right) \) denotes \( {\left( -\sqrt{2}\right) }^{\# L - 1}A\left( L\right) \), then the two preceding displayed formulae both become \[ \widehat{A}\left( {L}_{ + }\right) + \widehat{A}\left( {L}_{ - }\right) + \sqrt{2}\widehat{A}\left( {L}_{0}\right) = 0, \] and of course if \( L \) is the unknot, \( \widehat{A}\left( L\right) = 1 \). However, as discussed in Chapter 3, the Jones polynomial \( V\left( L\right) \in \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \) is characterised by being 1 on the unknot and by satisfying \[ {t}^{-1}V\left( {L}_{ + }\right) - {tV}\left( {L}_{ - }\right) + \left( {{t}^{-1/2} - {t}^{1/2}}\right) V\left( {L}_{0}\right) = 0. \] Substituting \( {t}^{1/2} = {e}^{{i\pi }/4} \) reduces this to exactly the above formula for \( \widehat{A} \). If, in the notation used in the above proof, \( {L}_{ + } \) is a knot, then so is \( {L}_{ - } \), and \( {L}_{0} \) is a link of two components. Of course, \( {L}_{0} \) has the property \( \left( \star \right) \) if and only if \( \operatorname{lk}\left( {L}_{0}\right) \), the linking number of the two components of \( {L}_{0} \), is even.
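Theorem 10.6 can be sanity-checked numerically. The sketch below evaluates known Jones polynomials at \( t = i \) (with \( t^{1/2} = e^{i\pi/4} \)); the polynomials used for the trefoil, the figure-eight knot and the Hopf link are the standard ones for one choice of orientation and chirality (an assumption that does not affect the check, since reflection replaces \( t \) by \( t^{-1} \) and the predicted values are real).

```python
import cmath

# Numerical check of Theorem 10.6 at t = i, taking t^{1/2} = e^{i*pi/4}.
# Each Jones polynomial is stored as {exponent of t^{1/2}: coefficient}.

root = cmath.exp(1j * cmath.pi / 4)  # t^{1/2}

def evaluate(poly):
    return sum(c * root ** e for e, c in poly.items())

trefoil = {-8: -1, -6: 1, -2: 1}              # V = -t^{-4} + t^{-3} + t^{-1}
figure8 = {-4: 1, -2: -1, 0: 1, 2: -1, 4: 1}  # V = t^{-2} - t^{-1} + 1 - t + t^2
hopf    = {-5: -1, -1: -1}                    # V = -t^{-5/2} - t^{-1/2}

# Knots automatically satisfy (star); A(trefoil) = A(figure-eight) = 1,
# so the theorem predicts (-sqrt(2))^0 * (-1)^1 = -1 for each.
assert abs(evaluate(trefoil) - (-1)) < 1e-12
assert abs(evaluate(figure8) - (-1)) < 1e-12
# The Hopf link has linking number 1, so (star) fails and V(i) must be 0.
assert abs(evaluate(hopf)) < 1e-12
```

The Hopf link computation illustrates the "otherwise" clause of the theorem: an odd linking number forces the value 0, whatever the rest of the polynomial looks like.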
The second paragraph of the above proof shows that \( \mathcal{A}\left( {L}_{ + }\right) - \mathcal{A}\left( {L}_{ - }\right) \equiv \operatorname{lk}\left( {L}_{0}\right) \) modulo 2. Theorem 10.7. Let \( K \) be a knot. Then \( \mathcal{A}\left( K\right) \equiv {a}_{2}\left( K\right) \) modulo 2, where \( {a}_{2}\left( K\right) \) is the coefficient of \( {z}^{2} \) in the Conway polynomial \( {\nabla }_{K}\left( z\right) \). The Arf invariant of \( K \) is related to the Alexander polynomial by \[ \mathcal{A}\left( K\right) = \left\{ \begin{array}{ll} 0 & \text{ if }{\Delta }_{K}\left( {-1}\right) \equiv \pm 1\text{ modulo }8, \\ 1 & \text{ if }{\Delta }_{K}\left( {-1}\right) \equiv \pm 3\text{ modulo }8. \end{array}\right. \] If \( K \) is a slice knot, then \( \mathcal{A}\left( K\right) = 0 \). Proof. The formula \( \mathcal{A}\left( {L}_{ + }\right) - \mathcal{A}\left( {L}_{ - }\right) \equiv \operatorname{lk}\left( {L}_{0}\right) \) modulo 2, valid when \( {L}_{ + } \) has one component, allows calculation of \( \mathcal{A}\left( K\right) \) from \( \mathcal{A} \) (unknot) \( = 0 \). However, this gives the same answer as the calculation, modulo 2, of \( {a}_{2}\left( K\right) \) using Proposition 8.7 (v). With the Conway normalisation, \( {\Delta }_{K}\left( {-1}\right) = {\nabla }_{K}\left( {-{2i}}\right) \).
However, \( {\nabla }_{K}\left( z\right) = 1 + {a}_{2}\left( K\right) {z}^{2} + {a}_{4}\left( K\right) {z}^{4} + \cdots \), and so \( {\nabla }_{K}\left( {-{2i}}\right) \equiv 1 - 4{a}_{2}\left( K\right) \) modulo 8. Thus, modulo 8, \[ {\nabla }_{K}\left( {-{2i}}\right) \equiv \left\{ \begin{array}{ll} 1 & \text{ if }{a}_{2}\left( K\right) \equiv 0\text{ modulo }2, \\ -3 & \text{ if }{a}_{2}\left( K\right) \equiv 1\text{ modulo }2. \end{array}\right. \] This gives the required result. As remarked after Theorem 8.19, if \( K \) is a slice knot then \( {\Delta }_{K}\left( {-1}\right) \equiv \pm 1 \) modulo 8, and so, from the above discussion, \( \mathcal{A}\left( K\right) = 0 \). The result given above (due to J. Levine [75]), relating the determinant of a knot with the Arf invariant, has been stated with the Alexander polynomial determined only up to multiplication by \( \pm {t}^{\pm n} \). However, as shown in the proof, \( {\Delta }_{K}\left( {-1}\right) \equiv 1 \) modulo 4 when the Conway normalisation is employed. The vanishing of the Arf invariant on slice knots does suggest that the Arf invariant is connected with 4-dimensional topology. In fact, the Arf invariant of a link is intimately related to the Rohlin invariant of a 4-manifold with spin structure. Indeed, A. J. Casson has given a proof of the Rohlin theorem based on the Arf invariant of a link. This theorem asserts that the signature of a closed smooth orientable spin 4-manifold is divisible by 16 (see [29] and [66]). ## Exercises 1. Make a table of the Arf invariants of the prime knots with at most eight crossings. 2. Determine, directly from a Seifert matrix, the Arf invariant of the pretzel knot \( P\left( {p, q, r}\right) \), where \( p, q \) and \( r \) are odd integers. 3. Prove that cobordant knots have the same Arf invariant (see Exercise 7 of Chapter 8). 4.
Use Lemma 10.5 to show that if \( L \) is a trivial link of unknotted unlinked components, then the Arf invariant of \( L \) is zero. By considering the maxima, minima and saddles of a slice disc for a slice knot \( K \) (as for example in Figure 8.4), show directly from Lemma 10.5 that a slice knot has zero Arf invariant. 5. Suppose \( L \) is an oriented link for which the Arf invariant is defined. Suppose that \( {L}^{\prime } \) is obtained by reversing the orientation of one component of \( L \). Is the Arf invariant of \( {L}^{\prime } \) defined? If so, how is it related to the Arf invariant of \( L \) ?

## 11

## The Fundamental Group

It is in its interaction with the theory of the fundamental group that the theory of knots and links becomes almost a part of the general theory of 3-manifolds. It is the exterior of a link (that is, the closure of the complement in \( {S}^{3} \) of a small regular neighbourhood of the link) that is studied, by means of its group, as a compact 3-manifold with torus boundary components. In the theory of 3-manifolds this is a very important example, but perhaps not much more than that. Here the view has been taken that to a mathematician it is the proving of results that brings satisfaction, and that this is particularly important in knot theory, wherein a cheerful punter might be satisfied by a good diagram. However, 3-manifold theory is well documented at length elsewhere ([43], [49]), and other more established treatises on knots have dwelt comprehensively on the relationship between links and the fundamental group. Thus what follows in this chapter is but an essay on this topic. It tries to interpret the Alexander polynomial in terms of the fundamental group and to explain what is available in more detail elsewhere. The fundamental group of a space has, of course, already featured in the discussion of covering spaces in Chapter 7.
The group of a link \( L \) in \( {S}^{3} \) is defined to be \( {\Pi }_{1}\left( {{S}^{3} - L}\right) \), the fundamental group of the complement of \( L \) ; this is the same as \( {\Pi }_{1}\left( X\right) \), where \( X \) is the exterior of \( L \). It is easy to write down a presentation of \( {\Pi }_{1}\left( {{S}^{3} - L}\right) \) from a given diagram of the link in the following way: Select an orientation of \( L \) just for convenience. Now, corresponding to the \( {i}^{\text{th }} \) segment of the diagram with the usual breaks at under-passes (that is, an "over-passing" section of the link traversing, maybe, many under-passes or maybe none), take a group generator \( {g}_{i} \). Corresponding to each crossing \( c \), take a relator \( {r}_{c} \) as follows: Suppose at the crossing \( c \) the over-pass arc is labelled \( {g}_{k} \) and the under-pass is labelled \( {g}_{i} \) as it approaches \( c \) and \( {g}_{j} \) as it leaves \( c \). Then \( {r}_{c} = {g}_{k}{g}_{i}{g}_{k}^{-1}{g}_{j}^{-1} \) if the sign of the crossing is negative and \( {r}_{c} = {g}_{k}^{-1}{g}_{i}{g}_{k}{g}_{j}^{-1} \) if the sign is positive. This is indicated in Figure 11.1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_120_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_120_0.jpg) Figure 11.1 Each relator, when equated to the identity, asserts that the two generators corresponding to the under-passing arc are conjugate by means of the over-passing generator or its inverse (that choice being determined by the sign of the crossing). The symbol \( {g}_{i} \) represents the loop that, starting from a base point (the eye of the reader) above the diagram, goes straight to the \( {i}^{\text{th }} \) over-passing arc, encircles it in a positive direction (to achieve linking number 1) and returns immediately to the base point. The resulting presentation is called the Wirtinger presentation of the link group.
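The Wirtinger recipe lends itself to direct computation. The following sketch builds the relators of a trefoil diagram from crossing data and then counts homomorphisms to the symmetric group on three symbols by brute force, in the spirit of the homomorphism-counting method described later in this chapter. The crossing encoding and all function names here are illustrative assumptions, not notation from the text; the diagram data describes a trefoil diagram whose three crossings are all negative.

```python
from itertools import permutations, product

# Assumed encoding: (k, i, j, sign) means the over-pass is g_k and the
# under-pass enters the crossing as g_i and leaves it as g_j.
TREFOIL = [(3, 1, 2, -1), (1, 2, 3, -1), (2, 3, 1, -1)]

def relator(k, i, j, sign):
    """Wirtinger relator as a word of (generator, exponent) pairs:
    g_k g_i g_k^{-1} g_j^{-1} (negative) or g_k^{-1} g_i g_k g_j^{-1} (positive)."""
    return [(k, -sign), (i, 1), (k, sign), (j, -1)]

S3 = list(permutations(range(3)))   # permutations of {0, 1, 2} as tuples
ID = (0, 1, 2)

def compose(p, q):                  # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for x in range(3):
        inv[p[x]] = x
    return tuple(inv)

def kills(assign, word):
    """Does this assignment of permutations to generators send the word to 1?"""
    out = ID
    for g, e in word:
        out = compose(out, assign[g] if e == 1 else inverse(assign[g]))
    return out == ID

def generated(gens):
    elems = {ID, *gens}
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

words = [relator(*c) for c in TREFOIL]
homs = [dict(zip((1, 2, 3), gs))
        for gs in product(S3, repeat=3)
        if all(kills(dict(zip((1, 2, 3), gs)), w) for w in words)]
onto = [h for h in homs if len(generated(h.values())) == 6]

# Surjections onto the symmetric group exist, so the trefoil group is
# non-abelian and in particular not infinite cyclic.
assert onto
```

The surjective homomorphisms found here are exactly the assignments of distinct transpositions to the generators, matching the explicit homomorphism written down for the trefoil later in the chapter.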
If there are \( m \) segments in the diagram and \( n \) crossings (\( m = n \) unless some link component contains no under-pass), then the group of the link is isomorphic to the group \( G \) that has presentation \[ G = \left\langle {{g}_{1},{g}_{2},\ldots ,{g}_{m};{r}_{1},{r}_{2},\ldots ,{r}_{n}}\right\rangle , \] this meaning that \( G \) is the quotient of the free group on generators \( \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{m}}\right\} \) by the smallest normal subgroup containing \( \left\{ {{r}_{1},{r}_{2},\ldots ,{r}_{n}}\right\} \). A proof of this result follows from finding a suitable 2-complex that is a deformation retract of the link complement and using some algorithm for writing down a presentation of the fundamental group of a 2-complex. It is, of course, clear from the geometric interpretation of the \( {g}_{i} \) that the stated relators are indeed trivial elements of the group; the difficulty is in seeing that no more relators are required. In fact, for \( n \geq 1 \), at most \( \left( {n - 1}\right) \) of the relators are actually needed, for it is easy to see that the product of certain conjugates of any \( \left( {n - 1}\right) \) of the relators, in a suitable order, gives the remaining relator. That follows, for a connected diagram, from the fact that the dual graph in \( {\mathbb{R}}^{2} \cup \infty \) to the link projection has a four-sided region containing each of the original crossings. The boundary of each such four-sided region gives one of the relators. The boundary of the union of \( \left( {n - 1}\right) \) of these dual regions is the boundary of the \( {n}^{\text{th }} \) region. It is clear that all the generators of a Wirtinger presentation that correspond to a single link component belong to the same conjugacy class in the group.
Further, if the group is abelianised by adding in relations that assert that the \( {g}_{i} \) all commute with each other, then the group becomes just the direct sum of copies of \( \mathbb{Z} \), one for each link component, with all the generators that correspond to a single link component becoming the generator of one of the \( \mathbb{Z} \) ’s. As expected, this is the first homology group of \( {S}^{3} - L \) ; the loops representing generators of \( {\Pi }_{1}\left( {{S}^{3} - L}\right) \) in the above presentation also represent meridian generators of \( {H}_{1}\left( {{S}^{3} - L}\right) \). The group of the unknot is, of course, infinite cyclic. As a simple non-trivial example, consider the trefoil knot \( {3}_{1} \) with the three generators allocated as in Figure 11.2. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_122_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_122_0.jpg) Figure 11.2 By the above remark, only two relators are needed, and the group of the trefoil knot is given by \[ G = \left\langle {{g}_{1},{g}_{2},{g}_{3};{g}_{3}{g}_{1}{g}_{3}^{-1}{g}_{2}^{-1},{g}_{1}{g}_{2}{g}_{1}^{-1}{g}_{3}^{-1}}\right\rangle .
\] In this case a group homomorphism can be defined from \( G \) to \( {\sum }_{3} \), the group of permutations of \( \{ 1,2,3\} \), by \[ {g}_{1} \mapsto \left( {1,2}\right) ,\;{g}_{2} \mapsto \left( {2,3}\right) ,\;{g}_{3} \mapsto \left( {3,1}\right) , \] where as usual \( \left( {1,2}\right) \) is the permutation that interchanges 1 and 2 and fixes 3 . That this does give a homomorphism follows from the observation that the two relators do indeed map to the trivial element of \( {\sum }_{3} \) . It is clear that this homomorphism is surjective; hence \( G \) is non-abelian and so certainly it is not cyclic. This proves that the trefoil knot is not the unknot. It is easy to verify that the group of \( {4}_{1} \), the 4-crossing knot, has no surjective homomorphism onto \( {\sum }_{3} \) . All the generators of a Wirtinger presentation of a knot group are conjugate, so any homomorphism will map them into a single conjugacy class. Of course, in a permutation group such a class is determined by the cycle type of a permutation. Homomorphisms can, then, be constructed to \( {\sum }_{n} \) by assigning permutations in some conjugacy class to the \( {g}_{i} \) and verifying that the relators map to the identity. This can be done in a systematic way with a computer, and a count can be made of all possible homomorphisms. The count for different knots can then be compared. Thistlethwaite has found such a method to be most effective for distinguishing knots from one another when compiling tables of knots with diagrams of up to fifteen crossings. Two theorems, basic to the study of 3-manifolds, were proved by C. D. Papakyriakopoulos and published in 1957 ([105]). These are the Loop Theorem and the Sphere Theorem. They are both concerned with changing the assertion that a certain map of a surface into a 3-manifold exists to a statement that an embedding (an injective map) of the surface exists. 
The proofs are similar (see [43]) and employ the idea of lifting the map up to a succession of covering spaces until the self-intersections of the map can be reduced. Theorem 11.1 (The Loop Theorem). Let \( M \) be a (possibly non-compact) 3-manifold with boundary \( \partial M \) such that the inclusion-induced homomorphism \( {\Pi }_{1}\left( {\partial M}\right) \rightarrow {\Pi }_{1}\left( M\right) \) is not injective. Then there exists a (piecewise linear) embedding of the disc \( e : {D}^{2} \rightarrow M \), with \( {e}^{-1}\left( {\partial M}\right) = \partial {D}^{2} \), such that the restriction \( e : \partial {D}^{2} \rightarrow \partial M \) is not homotopic to a constant map. There follows the application of this to knots; a version of this is sometimes known as Dehn's lemma. Theorem 11.2. Let \( X \) be the exterior of a knot \( K \) in \( {S}^{3} \). If \( K \) is not the unknot, then the inclusion map induces an injection \( {\Pi }_{1}\left( {\partial X}\right) \rightarrow {\Pi }_{1}\left( X\right) \). Proof. Suppose \( {\Pi }_{1}\left( {\partial X}\right) \rightarrow {\Pi }_{1}\left( X\right) \) is not injective. Then, by the loop theorem, there is an embedding \( e : {D}^{2} \rightarrow X \) sending \( \partial {D}^{2} \) into the torus \( \partial X \), to a simple closed curve not homotopically trivial in the torus. Now \( e\left( {\partial {D}^{2}}\right) \) is certainly the boundary of the disc \( e\left( {D}^{2}\right) \) and so represents a non-trivial element of the kernel of the map \( {H}_{1}\left( {\partial X}\right) \rightarrow {H}_{1}\left( X\right) \) ; the longitude of the knot \( K \) (with either orientation) is the only simple closed curve representing an element in this kernel (see Definition 1.6). The longitude is parallel to \( K \) in a small solid torus neighbourhood of \( K \), so expanding the disc \( e\left( {D}^{2}\right) \) by an annulus gives a disc embedded in \( {S}^{3} \) with \( K \) as its boundary.
Thus \( e\left( {D}^{2}\right) \) when so expanded is a Seifert surface for \( K \). This shows that \( K \) is unknotted. Corollary 11.3. A knot \( K \) is the unknot if and only if \( {\Pi }_{1}\left( {{S}^{3} - K}\right) \) is infinite cyclic. Proof. If \( {\Pi }_{1}\left( {{S}^{3} - K}\right) \) is isomorphic to \( \mathbb{Z} \), there can be no injection \( {\Pi }_{1}\left( {\partial X}\right) \rightarrow \) \( {\Pi }_{1}\left( X\right) \) (as \( {\Pi }_{1}\left( {\partial X}\right) \) is isomorphic to \( \mathbb{Z} \oplus \mathbb{Z} \) ), and so, by Theorem 11.2, \( K \) is the unknot. Corollary 11.4. Let \( {X}_{1} \) and \( {X}_{2} \) be the exteriors of two non-trivial knots and let \( M \) be a 3-manifold formed by identifying their boundaries together using any homeomorphism. Then the inclusion into \( M \) of the torus \( T \) that comes from the identified boundaries induces an injection \( {\Pi }_{1}\left( T\right) \rightarrow {\Pi }_{1}\left( M\right) \). Proof. This follows at once from the above theorem and from the Van Kampen theorem, which describes how fundamental groups behave when a space is described as a union of subspaces. Of course, as there are many invariants for showing that a knot is non-trivial, this corollary provides, if required, a source of orientable 3-manifolds containing tori for which the fundamental group injects. As stated in Definition 4.7, such tori are called incompressible. Thus if the exteriors of two non-trivial knots are glued together by some homeomorphism between their bounding tori, then the result is a 3-manifold, without boundary, containing an incompressible torus. Theorem 11.5 (The Sphere Theorem). Suppose that \( M \) is an orientable 3-manifold and that there exists a map \( {S}^{2} \rightarrow M \) that is not homotopic to a constant map (that is, \( {\Pi }_{2}\left( M\right) \neq 0 \) ). Then there exists a (piecewise linear) embedding \( {S}^{2} \rightarrow M \) that is not homotopic to a constant map.
The theorem does not assert that the embedding is homotopic to the given map. A slightly stronger version of the theorem can be found in [43]. The applications to knots (or to non-split links) are the following two results: Theorem 11.6. If \( K \) is a knot in \( {S}^{3} \), any map \( {S}^{2} \rightarrow {S}^{3} - K \) is homotopic to a constant map (that is, \( {\Pi }_{2}\left( {{S}^{3} - K}\right) = 0 \) ). Proof. If the statement is false then, by the sphere theorem, there exists a piecewise linear embedding \( e : {S}^{2} \rightarrow {S}^{3} - K \) that is not homotopic to a constant in \( \left( {{S}^{3} - K}\right) \). Then, by the Schönflies theorem, \( e\left( {S}^{2}\right) \) separates \( {S}^{3} \) into two components, the closure of each of which is a ball with boundary \( e\left( {S}^{2}\right) \). The knot \( K \), being connected and disjoint from \( e\left( {S}^{2}\right) \), lies in one of these balls, so \( e \) is homotopic to a constant using the other ball. Theorem 11.7. If \( K \) is a knot in \( {S}^{3} \), any map \( {S}^{r} \rightarrow {S}^{3} - K \) is homotopic to a constant map (that is, \( {\Pi }_{r}\left( {{S}^{3} - K}\right) = 0 \) ) for all \( r \geq 2 \). Proof. Let \( X \) be the exterior of \( K \) and let \( \widetilde{X} \) be the universal cover of \( X \). Thus \( \widetilde{X} \) is the simply connected cover of \( X \), it is acted upon by \( {\Pi }_{1}\left( X\right) \), and the quotient of \( \widetilde{X} \) by this action is \( X \). The operation of lifting maps and homotopies from \( X \) to \( \widetilde{X} \) shows that, for \( r \geq 2 \), \( {\Pi }_{r}\left( X\right) = 0 \) if and only if \( {\Pi }_{r}\left( \widetilde{X}\right) = 0 \) (or equivalently just use the homotopy long exact sequence of the covering). So certainly \( {\Pi }_{2}\left( \widetilde{X}\right) = 0 \). Now the third homology of any non-compact connected 3-manifold is zero.
A simplicial argument for this uses the fact that any 3-cycle would be a finite sum of oriented 3-simplexes; a neighbourhood of the union of those 3-simplexes is a compact 3-manifold \( N \) with non-empty boundary, which can be taken to be connected; any such \( N \) deformation retracts to a 2-dimensional complex (by collapsing 3-simplexes from the boundary), and so \( {H}_{3}\left( N\right) = 0 \). Of course, \( \widetilde{X} \) is non-compact because \( {\Pi }_{1}\left( X\right) \) is infinite (as \( {H}_{1}\left( X\right) \) is infinite), and so each simplex has infinitely many different lifts in \( \widetilde{X} \). Thus \( {H}_{3}\left( \widetilde{X}\right) = 0 \) and \( {H}_{r}\left( \widetilde{X}\right) = 0 \) for \( r > 3 \), as then \( \widetilde{X} \) has no \( r \) -simplex and so its \( {r}^{\text{th }} \) chain group is zero. Now, for a simply connected cell complex, the Hurewicz isomorphism theorem asserts that the first non-vanishing homology group and the first non-vanishing homotopy group occur in the same dimension and are isomorphic. Thus \( {\Pi }_{r}\left( \widetilde{X}\right) = 0 \) for all \( r \), and so \( \widetilde{X} \) is a contractible space. The above remark about lifting ensures that \( {\Pi }_{r}\left( X\right) = 0 \) for \( r \geq 2 \). Another way of stating the last theorem is to say that \( \left( {{S}^{3} - K}\right) \) is an Eilenberg-MacLane space \( \mathbf{K}\left( {G,1}\right) \), where \( G \) is the knot group.
An Eilenberg-MacLane space \( \mathbf{K}\left( {G, n}\right) \) is a path-connected space that has homotopy group \( G \) in dimension \( n \) and all other homotopy groups zero. It is a routine task in homotopy theory to establish that two cell complexes that are both \( \mathbf{K}\left( {G, n}\right) \) ’s are homotopy equivalent. Thus the group of a knot \( K \) determines the homotopy type of \( \left( {{S}^{3} - K}\right) \) ; any isomorphism between the groups of two knots is induced by some homotopy equivalence between the knot complements. In fact, this result and the given proof of it extend at once to a theorem stating that an irreducible 3-manifold with infinite fundamental group is determined up to homotopy type by that group. Knots themselves (when not prime) may however not be determined by the homotopy types of their complements. Suppose \( {X}_{1} \) and \( {X}_{2} \) are the exteriors of oriented knots \( {K}_{1} \) and \( {K}_{2} \) . Consider the knots \( {K}_{1} + {K}_{2} \) and \( {K}_{1} + r{\bar{K}}_{2} \), where as usual \( r{\bar{K}}_{2} \) is the reverse of the reflection of \( {K}_{2} \) . The exterior of either of these two composite knots is formed by identifying an annulus in the boundary of \( {X}_{1} \) with an annulus in the boundary of \( {X}_{2} \) . The two identifications needed are homotopic (they differ by reversing the \( I \) factor in the annulus \( {S}^{1} \times I \) ), and so the two spaces obtained are homotopy equivalent. However, in general the two composite knots are distinct; if \( {K}_{1} \) and \( {K}_{2} \) are each the trefoil knot \( {3}_{1} \), the two composites are distinguished by the Jones polynomial. Suppose that \( {X}_{i} \) is the exterior of an oriented knot \( {K}_{i} \) . On the boundary of \( {X}_{i} \) are the longitude \( {\lambda }_{i} \) and meridian \( {\mu }_{i} \), simple closed oriented curves that meet at a single point. They are well defined up to homotopy in \( \partial {X}_{i} \) . 
Taking \( {\lambda }_{i} \cap {\mu }_{i} \) as a base point, let \( \left\lbrack {\lambda }_{i}\right\rbrack \) and \( \left\lbrack {\mu }_{i}\right\rbrack \) be the elements of \( {\Pi }_{1}\left( {\partial {X}_{i}}\right) \) represented by these two curves. If \( {K}_{1} \) and \( {K}_{2} \) are equivalent oriented knots, there is a homeomorphism \( h : {X}_{1} \rightarrow {X}_{2} \) such that the following diagram commutes. \[ \left\lbrack {\lambda }_{1}\right\rbrack ,\left\lbrack {\mu }_{1}\right\rbrack \in {\Pi }_{1}\left( {\partial {X}_{1}}\right) \rightarrow {\Pi }_{1}\left( {{S}^{3} - {K}_{1}}\right) \] \[ \downarrow {h}_{ \star }\;\; \downarrow {h}_{ \star } \] \[ \left\lbrack {\lambda }_{2}\right\rbrack ,\left\lbrack {\mu }_{2}\right\rbrack \in {\Pi }_{1}\left( {\partial {X}_{2}}\right) \rightarrow {\Pi }_{1}\left( {{S}^{3} - {K}_{2}}\right) \] This is immediate. The following converse is, however, also true. It is a consequence of the somewhat lengthy theory of homotopy-equivalent Haken manifolds created by F. Waldhausen circa 1966 ([132]). An account is also in [43]. Theorem 11.8. If there exists an isomorphism from \( {\Pi }_{1}\left( {{S}^{3} - {K}_{1}}\right) \) to \( {\Pi }_{1}\left( {{S}^{3} - {K}_{2}}\right) \) that sends \( \left\lbrack {\lambda }_{1}\right\rbrack \) to \( \left\lbrack {\lambda }_{2}\right\rbrack \) and \( \left\lbrack {\mu }_{1}\right\rbrack \) to \( \left\lbrack {\mu }_{2}\right\rbrack \), then \( {K}_{1} \) and \( {K}_{2} \) are equivalent knots. Much more recently, the following has been proved by W. Whitten and F. González-Acuña [134]. Theorem 11.9. If \( {K}_{1} \) and \( {K}_{2} \) are prime knots in \( {S}^{3} \) and \( {\Pi }_{1}\left( {{S}^{3} - {K}_{1}}\right) \) and \( {\Pi }_{1}\left( {{S}^{3} - {K}_{2}}\right) \) are isomorphic groups, then \( \left( {{S}^{3} - {K}_{1}}\right) \) and \( \left( {{S}^{3} - {K}_{2}}\right) \) are homeomorphic spaces. 
Thus, for prime knots, the knot group determines the complement of the knot. It is by no means obvious that this means that the knots are the same. Perhaps the homeomorphism might send a meridian to a non-meridian. That this is not so is the substance of one of the most impressive results in knot theory of the 1980's. It is due to Gordon and J. Luecke [37] and the proof is lengthy and intricate: Theorem 11.10. If \( {K}_{1} \) and \( {K}_{2} \) are unoriented knots in \( {S}^{3} \) and there is an orientation preserving homeomorphism between their complements, then \( {K}_{1} \) and \( {K}_{2} \) are equivalent (as unoriented knots). These results proclaim the importance of the group of a knot. It should, however, be observed that nothing as sophisticated as the last two results is needed to show that the knot group determines the Alexander polynomial of the knot. For suppose that a knot \( K \) has exterior \( X \) and group \( G \) . As has already been noted, \( {H}_{1}\left( X\right) \) is the infinite cyclic group \( G/{G}^{\prime } \) (a generator of which was previously called \( t \) ), where \( {G}^{\prime } \) is the commutator subgroup of \( G \) . The infinite cyclic cover \( {X}_{\infty } \) of \( X \) has its fundamental group equal to \( {G}^{\prime } \) because a loop in \( X \) lifts to a loop in \( {X}_{\infty } \) if and only if it has zero linking number with \( K \) and so represents an element of \( {G}^{\prime } \) . Now \( {H}_{1}\left( {X}_{\infty }\right) \), the abelianisation of \( {\Pi }_{1}\left( {X}_{\infty }\right) \), is \( {G}^{\prime }/{G}^{\prime \prime } \) where \( {G}^{\prime \prime } \) is the commutator subgroup of \( {G}^{\prime } \) . Group-theoretic conjugation gives an action of \( G \) on \( G \) and this passes to quotients to give an action of \( G/{G}^{\prime } \) on \( {G}^{\prime }/{G}^{\prime \prime } \) . 
This is the action of \( {H}_{1}\left( X\right) \) on \( {H}_{1}\left( {X}_{\infty }\right) \) that defines the latter as a \( \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack \) module. Roughly, that is because conjugacy in \( G \) corresponds to moving a base point around a loop, and this operation lifts to the idea of translating \( {X}_{\infty } \) along the lift of that loop. The module \( {H}_{1}\left( {X}_{\infty }\right) \) can thus be defined entirely in terms of \( G \), and then the definition of the Alexander polynomial (up to a unit) can be given as before. This means that starting with a presentation of \( G \) it should be possible to calculate the Alexander polynomial (it being understood that the abelianisation of \( G \) is known to be infinite cyclic). This can be done in the following way, using the free differential calculus devised by R. H. Fox. Suppose that \( G \) is the group of a knot \( K \), given by any presentation \[ G = \left\langle {{x}_{1},{x}_{2},\ldots ,{x}_{n};{r}_{1},{r}_{2},\ldots ,{r}_{m}}\right\rangle , \] and let \( \alpha : G \rightarrow G/{G}^{\prime } \cong \langle t\rangle \) be the abelianisation homomorphism. If \( P \) is any space with \( {\Pi }_{1}\left( P\right) = G \), and \( \widetilde{P} \) is the cover of \( P \) corresponding to \( {G}^{\prime } \) , then the above reasoning transferred from \( X \) to \( P \) shows that \( {H}_{1}\left( {\widetilde{P};\mathbb{Z}}\right) \), regarded as a module over \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \), is also \( {G}^{\prime }/{G}^{\prime \prime } \) with action by \( \mathbb{Z}\left\lbrack {G/{G}^{\prime }}\right\rbrack \), and so it is equivalent to the Alexander module of \( K \) . 
Take for \( P \) a complex consisting of one 0-cell \( V \), \( n \) oriented 1-cells labelled \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \), having all their end points identified with \( V \) to form \( n \) loops, and \( m \) oriented 2-cells \( {c}_{1},{c}_{2},\ldots ,{c}_{m} \), with each \( \partial {c}_{i} \) glued to the 1-cells according to the word \( {r}_{i} \) . All the lifts to \( \widetilde{P} \) of all the cells of \( P \) give a cell structure on \( \widetilde{P} \) which can be used in the following way to investigate the homology of \( \widetilde{P} \) . Let \( \widetilde{V} \) be a chosen lift of the point \( V \), let \( {\widetilde{x}}_{i} \) be the lift of \( {x}_{i} \) that starts at \( \widetilde{V} \) and let \( \widetilde{{c}_{i}} \) be the lift of \( {c}_{i} \) that has as its boundary the lift of \( {r}_{i} \) that begins at \( \widetilde{V} \) . The whole of \( \widetilde{P} \) is just the union of all translates of these cells under the action of \( \langle t\rangle \) . 
Thus the chain complex (with integer coefficients) of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) modules for \( \widetilde{P} \) , \[ {C}_{2}\left( \widetilde{P}\right) \overset{{d}_{2}}{ \rightarrow }{C}_{1}\left( \widetilde{P}\right) \overset{{d}_{1}}{ \rightarrow }{C}_{0}\left( \widetilde{P}\right) \] has each \( {C}_{i}\left( \widetilde{P}\right) \) freely generated as a module by the above specified \( i \) -cells in \( \widetilde{P} \) . In this chain complex, the boundary map \( {d}_{2} \) sends \( \widetilde{{c}_{i}} \) to the lift of \( {r}_{i} \) beginning at \( \widetilde{V} \) now regarded as an element of the module \( {C}_{1}\left( \widetilde{P}\right) \) . Any occurrence of \( {x}_{j} \) in \( {r}_{i} \) contributes some \( \langle t\rangle \) -translate of \( \widetilde{{x}_{j}} \) to \( {d}_{2}\left( \widetilde{{c}_{i}}\right) \) . In fact, if \( {r}_{i} = {w}_{1}{x}_{j}{w}_{2} \) this occurrence of \( {x}_{j} \) contributes to \( {d}_{2}\left( {\widetilde{c}}_{i}\right) \) the lift of \( {x}_{j} \) that begins at the final point of the lift of \( {w}_{1} \) which starts at \( \widetilde{V} \) ; thus the contribution is \( \alpha \left( {w}_{1}\right) \) acting on \( \widetilde{{x}_{j}} \) . If \( {r}_{i} = {v}_{1}{x}_{j}^{-1}{v}_{2} \), this occurrence of \( {x}_{j}^{-1} \) contributes \( - \alpha \left( {{v}_{1}{x}_{j}^{-1}}\right) \widetilde{{x}_{j}} \) . The \( \widetilde{{x}_{j}} \) term in \( {d}_{2}\left( \widetilde{{c}_{i}}\right) \) is thus the sum, over all occurrences of \( {x}_{j}^{-1} \) and \( {x}_{j} \) in \( {r}_{i} \), of these contributions. 
It is a simple formality to write down a procedure to determine this sum as \[ {d}_{2}\left( \widetilde{{c}_{i}}\right) = \mathop{\sum }\limits_{j}{\alpha \phi }\left( \frac{\partial {r}_{i}}{\partial {x}_{j}}\right) \widetilde{{x}_{j}} \] where the meaning of the terms is as follows: The quotient map (given by the presentation) from \( F \), the free group on generators \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \), to \( G \) induces a homomorphism of group-rings \( \phi : \mathbb{Z}\left( F\right) \rightarrow \mathbb{Z}\left( G\right) \) . The map \( \frac{\partial }{\partial {x}_{j}} : \mathbb{Z}\left( F\right) \rightarrow \) \( \mathbb{Z}\left( F\right) \) is the linear extension of the map defined on elements of \( F \) by \[ \frac{\partial {x}_{i}}{\partial {x}_{j}} = {\delta }_{ij},\;\frac{\partial {x}_{i}{}^{-1}}{\partial {x}_{j}} = - {\delta }_{ij}{x}_{i}{}^{-1}, \] \[ \frac{\partial \left( {uv}\right) }{\partial {x}_{j}} = \frac{\partial u}{\partial {x}_{j}} + u\frac{\partial v}{\partial {x}_{j}}. \] (In practice, this last formula should be used on a word in which \( v \) is the last letter of the word.) Thus the transpose of \( {\alpha \phi }\left( \frac{\partial {r}_{i}}{\partial {x}_{j}}\right) \) is a matrix representing \( {d}_{2} \), and so that is a presentation matrix for the module \( {C}_{1}\left( \widetilde{P}\right) /{d}_{2}\left( {{C}_{2}\left( \widetilde{P}\right) }\right) \) . Now, as usual, there is a short exact sequence of modules \[ 0 \rightarrow {H}_{1}\left( \widetilde{P}\right) \rightarrow {C}_{1}\left( \widetilde{P}\right) /{d}_{2}\left( {{C}_{2}\left( \widetilde{P}\right) }\right) \overset{{d}_{1}}{ \rightarrow }{d}_{1}{C}_{1}\left( \widetilde{P}\right) \rightarrow 0. \] So it is useful to investigate \( {d}_{1}{C}_{1}\left( \widetilde{P}\right) \) . The boundary map \( {d}_{1} \) is determined by \( {d}_{1}\left( {\widetilde{x}}_{j}\right) = \left( {{\alpha \phi }\left( {x}_{j}\right) - 1}\right) \widetilde{V} \) . 
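The free derivative rules above can be implemented directly. The following is a minimal sketch (the representation and the function name are mine, not the book's): a word in the free group is encoded as a list of (generator, ±1) letters, and the derivative is returned as an element of \( \mathbb{Z}\left( F\right) \), here a dictionary from prefix words (tuples of letters) to integer coefficients.

```python
def fox_derivative(word, gen):
    """Fox free derivative d(word)/d(gen), using dx/dx = 1,
    d(x^-1)/dx = -x^-1 and the product rule d(uv)/dx = du/dx + u dv/dx.

    word: list of (generator_name, +1 or -1) letters.
    Returns a dict mapping words in F (tuples of letters) to integers."""
    result = {}
    prefix = []  # the group element u read so far
    for g, e in word:
        if g == gen:
            if e == 1:
                key = tuple(prefix)                  # contributes u . 1
                result[key] = result.get(key, 0) + 1
            else:
                key = tuple(prefix) + ((g, -1),)     # contributes -(u x^-1)
                result[key] = result.get(key, 0) - 1
        prefix.append((g, e))
    return {k: v for k, v in result.items() if v != 0}

# Example: the commutator x y x^-1 y^-1 has d/dx = 1 - x y x^-1.
comm = [("x", 1), ("y", 1), ("x", -1), ("y", -1)]
# fox_derivative(comm, "x") -> {(): 1, (("x", 1), ("y", 1), ("x", -1)): -1}
```

Applying \( \alpha \phi \) to each dictionary key (each Wirtinger generator mapping to \( t \)) then yields the Laurent-polynomial matrix entries used below.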
Now, \( {\alpha \phi }\left( {x}_{j}\right) = {t}^{{a}_{j}} \) for some \( {a}_{j} \in \mathbb{Z} \), so \( {d}_{1}{C}_{1}\left( \widetilde{P}\right) \) is \( \mathcal{I}\widetilde{V} \) where \( \mathcal{I} \) is the ideal of \( \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack \) generated by \( \left\{ {\left( {{t}^{{a}_{j}} - 1}\right) : j = 1,2,\ldots, n}\right\} \) . However, observe that \[ \left( {{t}^{a} - 1}\right) + {t}^{a}\left( {{t}^{b} - 1}\right) = \left( {{t}^{a + b} - 1}\right) , \] \[ \left( {{t}^{a} - 1}\right) - {t}^{a - b}\left( {{t}^{b} - 1}\right) = \left( {{t}^{a - b} - 1}\right) . \] As the \( {t}^{{a}_{j}} \) generate \( \langle t\rangle \), it must be that \( t = {t}^{\sum {v}_{j}{a}_{j}} \) for some \( {v}_{j} \in \mathbb{Z} \) . Thus, using the above observation, \( \left( {t - 1}\right) \in \mathcal{I} \) and, as \( \left( {t - 1}\right) \) divides each \( \left( {{t}^{{a}_{j}} - 1}\right) \) , it follows that \( \mathcal{I} \) is just the principal ideal generated by \( \left( {t - 1}\right) \) . Then \( {d}_{1}{C}_{1}\left( \widetilde{P}\right) \) is the free rank-one module generated by the element \( \left( {t - 1}\right) \widetilde{V} \) . As \( {d}_{1}{C}_{1}\left( \widetilde{P}\right) \) is free, the above short exact sequence splits and \[ {C}_{1}\left( \widetilde{P}\right) /{d}_{2}\left( {{C}_{2}\left( \widetilde{P}\right) }\right) \cong {H}_{1}\left( \widetilde{P}\right) \oplus \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack . \] Now if \( A \) is a presentation matrix for a module \( M \) over a ring \( R \), a presentation matrix for \( M \oplus R \) is \( A \) with an extra row of zeros appended. This has the same non-zero minors as has \( A \), but the number of rows deleted to obtain a minor of the new matrix is one more than the number so deleted from \( A \) . Thus the \( {r}^{\text{th }} \) elementary ideal of \( M \) is the \( {\left( r + 1\right) }^{\text{th }} \) elementary ideal of \( M \oplus R \) . 
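The two identities above, and their consequence that the ideal generated by elements \( \left( {{t}^{a} - 1}\right) \) is \( \left( {{t}^{g} - 1}\right) \) with \( g \) the greatest common divisor of the exponents, can be checked symbolically. The sketch below (mine, not the book's) uses sympy; iterating the identities is just Euclid's algorithm performed on exponents.

```python
from math import gcd as igcd
from sympy import symbols, gcd, expand

t = symbols('t')

# The two identities from the text, checked for sample exponents a > b > 0.
for a, b in [(5, 3), (7, 2), (9, 4)]:
    assert expand((t**a - 1) + t**a * (t**b - 1)) == expand(t**(a + b) - 1)
    assert expand((t**a - 1) - t**(a - b) * (t**b - 1)) == expand(t**(a - b) - 1)

# Iterating them runs Euclid's algorithm on the exponents, so the ideal
# generated by (t^a - 1) and (t^b - 1) contains t^gcd(a,b) - 1; indeed
# the polynomial gcd is exactly t^gcd(a,b) - 1.
for a, b in [(6, 4), (15, 10), (12, 7)]:
    assert gcd(t**a - 1, t**b - 1) == t**igcd(a, b) - 1
```

When the exponents \( {a}_{j} \) are coprime as a set, this reduction terminates at \( \left( {t - 1}\right) \), which is the case used in the text.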
The transpose of the \( m \times n \) matrix \( {\alpha \phi }\left( {\partial {r}_{i}/\partial {x}_{j}}\right) \) presents \( {H}_{1}\left( \widetilde{P}\right) \oplus \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack \), so the \( {r}^{\text{th }} \) elementary ideal \( {\mathcal{E}}_{r} \) of \( \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack \) for the module \( {H}_{1}\left( \widetilde{P}\right) \) is generated by the \( \left( {n - r}\right) \times \left( {n - r}\right) \) minors of this matrix. In particular \( {\mathcal{E}}_{1} \) is generated by the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) minors, and this is the ideal that is known (from Chapter 6) to be a principal ideal generated by the Alexander polynomial \( {\Delta }_{K}\left( t\right) \) . As a simple example of the way that this can be used, consider the Wirtinger presentation of the trefoil knot given by \[ G = \left\langle {{g}_{1},{g}_{2},{g}_{3};{g}_{3}{g}_{1}{g}_{3}^{-1}{g}_{2}^{-1},{g}_{1}{g}_{2}{g}_{1}^{-1}{g}_{3}^{-1}}\right\rangle . \] The formalism of the free differential calculus gives \[ \frac{\partial {r}_{i}}{\partial {g}_{j}} = \left( \begin{matrix} {g}_{3} & - {g}_{3}{g}_{1}{g}_{3}^{-1}{g}_{2}^{-1} & 1 - {g}_{3}{g}_{1}{g}_{3}^{-1} \\ 1 - {g}_{1}{g}_{2}{g}_{1}^{-1} & {g}_{1} & - {g}_{1}{g}_{2}{g}_{1}^{-1}{g}_{3}^{-1} \end{matrix}\right) . \] On abelianising, each generator of the Wirtinger presentation is mapped to \( t \), so \[ {\alpha \phi }\left( \frac{\partial {r}_{i}}{\partial {g}_{j}}\right) = \left( \begin{matrix} t & - 1 & 1 - t \\ 1 - t & t & - 1 \end{matrix}\right) . \] Up to sign, all the \( \left( {2 \times 2}\right) \) minors of this are \( 1 - t + {t}^{2} \), and so this is the Alexander polynomial of the trefoil knot. This general method of calculating the Alexander polynomial, when applied to a Wirtinger presentation, is essentially the " \( L \) -matrix" method of Reidemeister ([107] or [108]). 
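The trefoil calculation above is easily reproduced by machine. The following sympy sketch (mine, not the book's) enters the abelianised Fox matrix displayed in the text and confirms that every \( 2 \times 2 \) minor is, up to sign, \( 1 - t + {t}^{2} \).

```python
from sympy import symbols, Matrix, expand

t = symbols('t')

# alpha-phi applied to the Fox derivative matrix of the Wirtinger
# presentation of the trefoil, as displayed in the text.
A = Matrix([[t,     -1, 1 - t],
            [1 - t,  t,    -1]])

delta = t**2 - t + 1  # the Alexander polynomial 1 - t + t^2

# Delete each column in turn and take the determinant of the 2x2 remainder.
minors = [A.extract([0, 1], [k for k in range(3) if k != j]).det()
          for j in range(3)]

# Every minor generates the same ideal: each is +delta or -delta.
assert all(expand(m - delta) == 0 or expand(m + delta) == 0 for m in minors)
```

The same few lines, with the matrix replaced, check the minors of any Wirtinger presentation in which every generator abelianises to \( t \).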
The free differential calculus excels in the case of a torus knot. Let \( T \) be a standard unknotted torus in \( {S}^{3} \) ( \( T \) can be thought of as the boundary of a neighbourhood of the unknot). A \( \left( {p, q}\right) \) torus knot is the knot \( K \) contained in \( T \) that represents \( p \) longitudes and \( q \) meridians of the unknot. Such a simple closed curve exists if and only if \( p \) and \( q \) are coprime (an exercise). The exterior of \( K \) consists of two solid tori (one inside \( T \) and one outside) glued together along an annulus that goes around \( T \) as \( p \) longitudes and \( q \) meridians. The Van Kampen theorem, which describes the fundamental group of a space obtained by gluing two other spaces together, can then be used. In this case it shows that the group of \( K \) has a one-relator presentation as \[ \left\langle {{x}_{1},{x}_{2};{x}_{1}^{p}{x}_{2}^{-q}}\right\rangle \] where \( {x}_{1} \) and \( {x}_{2} \) are represented by cores of the two solid tori. The relation occurs because the core of the gluing annulus represents both \( {x}_{1}^{p} \) and \( {x}_{2}^{q} \) . Note that \( {x}_{1} \) links \( K \) with linking number \( q \) and \( {x}_{2} \) links \( K \) with linking number \( p \) . Hence \( \alpha {x}_{1} = {t}^{q} \) and \( \alpha {x}_{2} = {t}^{p} \) . Thus \[ \frac{\partial {r}_{1}}{\partial {x}_{j}} = \left( {1 + {x}_{1} + {x}_{1}^{2} + \cdots + {x}_{1}^{p - 1}\;{x}_{1}^{p}\left( {-{x}_{2}^{-1} - {x}_{2}^{-2} - \cdots - {x}_{2}^{-q}}\right) }\right) , \] \[ {\alpha \phi }\left( \frac{\partial {r}_{1}}{\partial {x}_{j}}\right) = \left( {\frac{1 - {t}^{pq}}{1 - {t}^{q}}\;\frac{-{t}^{pq}{t}^{-p}\left( {1 - {t}^{-{pq}}}\right) }{1 - {t}^{-p}}}\right) . 
\] So the Alexander polynomial of \( K \) is a generator of the (principal) ideal of \( \mathbb{Z}\left\lbrack {t,{t}^{-1}}\right\rbrack \) generated by the two elements \( \left( {1 - {t}^{pq}}\right) /\left( {1 - {t}^{q}}\right) \) and \( \left( {1 - {t}^{pq}}\right) /\left( {1 - {t}^{p}}\right) \) . As \( p \) and \( q \) are coprime, the technique used above shows that \( \left( {1 - t}\right) \) is in the ideal generated by \( \left( {1 - {t}^{p}}\right) \) and \( \left( {1 - {t}^{q}}\right) \) . Then it is not hard to see that a highest common factor of \( \left( {1 - {t}^{pq}}\right) /\left( {1 - {t}^{q}}\right) \) and \( \left( {1 - {t}^{pq}}\right) /\left( {1 - {t}^{p}}\right) \) is \[ \frac{\left( {1 - t}\right) \left( {1 - {t}^{pq}}\right) }{\left( {1 - {t}^{p}}\right) \left( {1 - {t}^{q}}\right) } \] and so this is (up to multiplication by \( \pm {t}^{\pm n} \) ) the Alexander polynomial of the torus knot. This discussion of the torus knot depends for its simplicity on the fact that the group of a torus knot has a presentation with just two generators and one relator. Any 2-bridge link also has a two-generator, one-relator presentation, as can be seen from its description as a union of two trivial 2-string tangles. In general, this relator is more complicated than that for the torus knot. The Alexander polynomial of an oriented link is (up to units) a Laurent polynomial in one variable \( t \) . If \( \# L \), the number of components of \( L \), is two or more, the theory can be amplified to give a multi-variable Alexander polynomial. Suppose that \( l : \{ 1,2,\ldots ,\# L\} \rightarrow \{ 1,2,\ldots, v\} \), for some integer \( v \geq 2 \), is a surjection, thought of as a labelling (or colouring) of the components \( \left\{ {{L}_{i} : i = 1,2,\ldots ,\# L}\right\} \) of \( L \) . Let \( G \) be, as usual, the group of \( L \) . Then \( G/{G}^{\prime } \) is a free abelian group on \( \# L \) meridian generators. 
Map this on to the free abelian group on \( v \) generators (written multiplicatively as \( \left\langle {{t}_{1},{t}_{2},\ldots ,{t}_{v};{t}_{i}{t}_{j}{t}_{i}^{-1}{t}_{j}^{-1}}\right\rangle \) ) by sending the \( {i}^{\text{th }} \) oriented meridian to \( {t}_{l\left( i\right) } \), and let \( \alpha \) be the composition \[ \alpha : G \rightarrow G/{G}^{\prime } \rightarrow \left\langle {{t}_{1},{t}_{2},\ldots ,{t}_{v};{t}_{i}{t}_{j}{t}_{i}^{-1}{t}_{j}^{-1}}\right\rangle . \] Then \( X \), the exterior of \( L \), has a covering \( \widehat{X} \) corresponding to the kernel of \( \alpha \) that is acted upon freely by the group \( \left\langle {{t}_{1},{t}_{2},\ldots ,{t}_{v};{t}_{i}{t}_{j}{t}_{i}^{-1}{t}_{j}^{-1}}\right\rangle \) . The group ring of this group will be written as \( \mathbb{Z}\left\lbrack {{t}_{1}^{\pm 1},{t}_{2}^{\pm 1},\ldots ,{t}_{v}^{\pm 1}}\right\rbrack \), and \( {H}_{1}\left( {\widehat{X};\mathbb{Z}}\right) \) is a module over this ring. This module is an invariant of the oriented labelled link \( L \) . Again, the first elementary ideal of the module is principal, and a generator (well defined up to multiplication by \( \pm {t}_{1}^{\pm {m}_{1}}{t}_{2}^{\pm {m}_{2}}\cdots {t}_{v}^{\pm {m}_{v}} \) ) is called the multi-variable Alexander polynomial of the oriented labelled link. In [21] a method is given for finding a square matrix that presents the module \( {H}_{1}\left( {\widehat{X};\mathbb{Z}}\right) \) ; it is a generalisation of the Seifert surface method of Chapter 6 to the multi-variable situation. Alternatively, one can follow the formalism of the free differential calculus discussed in this chapter. Starting with a Wirtinger presentation of \( G \) as \( \left\langle {{x}_{1},{x}_{2},\ldots ,{x}_{n};{r}_{1},{r}_{2},\ldots ,{r}_{n}}\right\rangle \), form the cell complex \( P \) from the presentation as before, and let \( \widetilde{P} \) be the cover corresponding to the kernel of \( \alpha \) . 
As before, the transpose of the square matrix \( {\alpha \phi }\left( {\partial {r}_{i}/\partial {x}_{j}}\right) \) is a presentation matrix for the module \( {C}_{1}\left( \widetilde{P}\right) /{d}_{2}\left( {{C}_{2}\left( \widetilde{P}\right) }\right) \) . Now, however, the module \( {d}_{1}{C}_{1}\left( \widetilde{P}\right) \) is not free; it is isomorphic to the ideal of \( \mathbb{Z}\left\lbrack {{t}_{1}^{\pm 1},{t}_{2}^{\pm 1},\ldots ,{t}_{v}^{\pm 1}}\right\rbrack \) generated by \( \left\{ {\left( {{t}_{i} - 1}\right) : i = 1,2,\ldots, v}\right\} \) . Thus the short exact sequence relating these two modules and \( {H}_{1}\left( \widetilde{P}\right) \) cannot be split. Nevertheless, it can be shown that a multi-variable Alexander polynomial is obtained by taking the matrix \( {\alpha \phi }\left( {\partial {r}_{i}/\partial {x}_{j}}\right) \) , deleting any row and the \( {j}^{\text{th }} \) column, evaluating the determinant of this smaller matrix, and then dividing by \( \left( {{\alpha \phi }\left( {x}_{j}\right) - 1}\right) \), which will indeed be a factor. More details are in [17]. This has been refined in [39] in order to obtain a canonical normalisation for multi-variable polynomials, which fits into the generalised skein approach described by Conway [20] (see also [100] and [99]). A final example of the direct use of the fundamental group of a knot's exterior will now be given. It is a proof found by H. F. Trotter [125] that there are oriented knots that are not equivalent to their reverses. It is included here because the result seems important and because no invariant has been found that can ever prove a knot to be a non-reversible knot (though see [70]). Most known methods of achieving such a result are rather ad hoc. In this case the technique consists of a detailed investigation of a particular group, understanding it in terms of isometries of the hyperbolic plane. Trotter's result is the following: Theorem 11.11. 
Let \( p, q \) and \( r \) be odd integers such that \( \left| p\right| ,\left| q\right| \) and \( \left| r\right| \) are distinct and greater than one. Then the oriented pretzel knot \( P\left( {p, q, r}\right) \) is not equivalent to its reverse. Sketch PROOF. Suppose that \( p = {2k} + 1, q = {2l} + 1 \) and \( r = {2m} + 1 \) . Many of the generators of the Wirtinger presentation of the group of \( P\left( {p, q, r}\right) \) can easily be eliminated to give a presentation with meridian generators \( x, y \) and \( z \) as indicated (for \( \left( {p, q, r}\right) = \left( {7,3,5}\right) \) ) in Figure 11.3 and relations \[ {\left( x{y}^{-1}\right) }^{m}x{\left( x{y}^{-1}\right) }^{-m} = {\left( y{z}^{-1}\right) }^{k + 1}z{\left( y{z}^{-1}\right) }^{-k - 1}, \] \[ {\left( y{z}^{-1}\right) }^{k}y{\left( y{z}^{-1}\right) }^{-k} = {\left( z{x}^{-1}\right) }^{l + 1}x{\left( z{x}^{-1}\right) }^{-l - 1}, \] \[ {\left( z{x}^{-1}\right) }^{l}z{\left( z{x}^{-1}\right) }^{-l} = {\left( x{y}^{-1}\right) }^{m + 1}y{\left( x{y}^{-1}\right) }^{-m - 1}. \] A longitude \( w \) represents the element \[ {\left( x{y}^{-1}\right) }^{-m}{\left( y{z}^{-1}\right) }^{k + 1}{\left( z{x}^{-1}\right) }^{-l}{\left( x{y}^{-1}\right) }^{m + 1}{\left( y{z}^{-1}\right) }^{-k}{\left( z{x}^{-1}\right) }^{l + 1}. \] If \( P\left( {p, q, r}\right) \) is reversible, there exists an automorphism \( \alpha \) of the group that sends meridians to inverse meridians and \( w \) to \( {w}^{-1} \) . (Here a meridian is an element of the group represented by a loop that goes from the base point along some path to the knot, around the knot and back along the same path.) Thus if \( H \) is the subgroup generated by the squares of meridians, \( H \) is normal and invariant under \( \alpha \) . 
Then \( \alpha \) induces an automorphism of \( G/H \), and \( G/H \) has a presentation with generators \( x, y \) and \( z \), with the above three relations and the relations \[ {x}^{2} = {y}^{2} = {z}^{2} = 1. \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_130_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_130_0.jpg) Figure 11.3 That easily simplifies to become \[ G/H = \left\langle {x, y, z;{x}^{2} = {y}^{2} = {z}^{2} = 1,{\left( xy\right) }^{r} = {\left( yz\right) }^{p} = {\left( zx\right) }^{q}}\right\rangle . \] Now, abelianising \( G/H \) produces a cyclic group of order 2 (as \( p, q \) and \( r \) are odd) with \( x, y \) and \( z \) all mapping to the generator. Thus the commutator subgroup of \( G/H \) is all elements expressible as words of even length in \( x, y \) and \( z \) . It is thus the subgroup generated by \( {xy},{yz} \) and \( {zx} \) (note that \( \left( {xy}\right) \left( {yz}\right) = {xz} \) ), and each of these three elements commutes with \( {\left( xy\right) }^{r} \) . Thus if \( U \) is the subgroup of \( G/H \) generated by \( {\left( xy\right) }^{r} \), then \( U \) is contained in the centre of the commutator subgroup of \( G/H \) . Let \( W \) be the quotient \( \left( {G/H}\right) /U \) so that \( W \) is \[ \left\langle {x, y, z;{x}^{2} = {y}^{2} = {z}^{2} = {\left( xy\right) }^{r} = {\left( yz\right) }^{p} = {\left( zx\right) }^{q} = 1}\right\rangle . 
\] This is well known to be a triangle group of isometries of the hyperbolic plane, where \( x, y \) and \( z \) now represent the reflections in the three sides of a hyperbolic triangle having vertex angles \( \pi /p,\pi /q \) and \( \pi /r \) . This means that \( {xy},{yz} \) and \( {zx} \) represent rotations about the three vertices. (It may be assumed, without affecting the algebra, that \( p, q \) and \( r \) are now all positive.) It is easy to see that the subgroup generated by those rotations has trivial centre. Hence \( U \) is the centre of the commutator subgroup of \( G/H \), and so \( U \) is mapped to itself by any automorphism of \( G/H \) . Hence \( \alpha \) induces an automorphism \( \bar{\alpha } \) of \( W \) . In \( W \), the longitude element \( w \) has become \( {\left( {\left( xy\right) }^{-m}{\left( yz\right) }^{-k}{\left( zx\right) }^{-l}\right) }^{2} \), which represents a translation of the hyperbolic plane along the direction of one of the sides of the triangle. Then \( \bar{\alpha } \) must send this element to its inverse, the translation in the opposite direction. However, a little consideration of the triangular tessellation of the hyperbolic plane [125] shows that this would mean that the direction of the side of the triangle would be reversed by some element of \( W \), and this is not possible when \( p, q \) and \( r \) are all odd. ## Exercises 1. Find a presentation of the group of the knot \( {8}_{2} \) with the minimum possible number of generators and the minimum possible number of relators. 2. Use the Loop Theorem (or Dehn’s Lemma) to show that a non-trivial knot \( K \) and a longitude of \( K \) never constitute a split link. 3. Use the Loop Theorem and the Schönflies theorem to show that any torus \( T \), piecewise linearly embedded in \( {S}^{3} \), bounds, on one side in \( {S}^{3} \), a solid torus. [It should be assumed that \( {S}^{3} - T \) has two components.] 4. 
For a given positive integer \( n \) find a knot for which the group has no presentation with fewer than \( n \) generators. 5. Explore the way in which the free differential calculus, applied to the Wirtinger presentation of a knot group, provides a way of deriving the Alexander polynomial of a knot from the determinant of a matrix directly associated with a diagram of the knot. 6. A knot \( K \) has tunnel number 1 if an arc \( \alpha \) can be embedded (piecewise linearly) in \( {S}^{3} \), meeting \( K \) precisely in \( \partial \alpha \), so that the closure of the complement of a regular neighbourhood of the \( \theta \) -curve \( K \cup \alpha \) is a handlebody of genus 2 (see Chapter 12 for discussion of handlebodies). Show that a 2-bridge knot has tunnel number 1 and so does a torus knot. Prove that the group of a tunnel number 1 knot has a presentation with two generators and one relator. What does that imply about the Alexander ideals of the knot? Prove that the pretzel knot \( P\left( {3,3, - 3}\right) \) does not have tunnel number 1 . 7. The dihedral group \( {D}_{2n} \) of \( {2n} \) elements is the group of symmetries of a regular \( n \) -gon; it has a presentation \( \left\langle {x, y;{x}^{n},{y}^{2},{yxyx}}\right\rangle \) . Suppose that there is given an \( n \) -colouring of a diagram of a knot \( K \) . This is a function \( c \) from the segments of the diagram to \( \mathbb{Z}/n\mathbb{Z} \) so that at any crossing, the over-pass is labelled with the average, modulo \( n \), of the labels of the two segments on either side. If \( {g}_{i} \) is the generator of the Wirtinger presentation of \( {\Pi }_{1}\left( {{S}^{3} - K}\right) \) corresponding to the \( i \) th segment of the diagram, show that \( {g}_{i} \mapsto y{x}^{c\left( i\right) } \) defines a homomorphism \( {\Pi }_{1}\left( {{S}^{3} - K}\right) \rightarrow {D}_{2n} \) . 
Show that any surjective homomorphism \( {\Pi }_{1}\left( {{S}^{3} - K}\right) \rightarrow {D}_{2n} \) must arise in this way. [When \( n \) is odd, such a surjection exists if and only if \( n \) divides the exponent (the lowest common multiple of the orders of the elements) of the first homology group of the double cover of \( {S}^{3} \) branched over \( K \) . A necessary condition for the existence of an \( n \) -colouring is that \( n \) divide the determinant of \( K \) .] 8. Show that the genus of the \( \left( {p, q}\right) \) torus knot, where \( p \) and \( q \) are coprime, is \( \frac{1}{2}\left( {p - 1}\right) \left( {q - 1}\right) \) . 9. Suppose that \( X \) is the exterior of a knot \( K \) . The 3-manifold \( \left( {{S}^{1} \times {D}^{2}}\right) { \cup }_{h}X \), where \( h : \partial \left( {{S}^{1} \times {D}^{2}}\right) \rightarrow \partial X \) is a homeomorphism with \( h \) (point \( \times \partial {D}^{2} \) ) homologous to the sum of \( \alpha \) meridians and \( \beta \) longitudes, is said to be obtained by \( \alpha /\beta \) Dehn surgery on \( K \) . Show that if \( \alpha /\beta \) Dehn surgery on a torus knot produces a simply connected manifold, then \( \left( {\alpha ,\beta }\right) = \left( {\pm 1,0}\right) \) and the manifold produced is just \( {S}^{3} \) . [A knot with this property is said to have "Property P"; it is not known if all knots have this property.] 10. Prove that the trefoil knot \( {3}_{1} \) and its reflection are distinct by showing that there is no isomorphism, from the group of one knot to the group of the other, that maps the elements corresponding to meridian and longitude in one group to those corresponding to meridian and longitude in the other. ## 12 ## Obtaining 3-Manifolds by Surgery on \( {S}^{3} \) The aim of this chapter is to show, in Theorem 12.14, that every closed connected orientable 3-manifold can be obtained by "surgery" on \( {S}^{3} \) . The method used is a version of that of [77]. 
An elementary \( r \) -surgery on a general \( n \) -manifold \( M \) is the operation of removing from \( M \) an embedded copy of \( {S}^{r} \times {D}^{n - r} \) and replacing it with a copy of \( {D}^{r + 1} \times {S}^{n - r - 1} \), the replacement being effected by means of the obvious homeomorphism between the boundaries of the removed set and its replacement. Surgery in general is a sequence of elementary surgeries. In the case of surfaces, instances of 1-surgery and 0-surgery have already been employed in earlier chapters, usually when the surface was contained in \( {S}^{3} \) . The only surgeries needed in this chapter are 1-surgeries on a 3-manifold, and it is easy to see they can be performed "simultaneously". The surgery process will consist of the removal from \( {S}^{3} \) of disjoint copies of \( {S}^{1} \times {D}^{2} \) and their replacement by copies of \( {D}^{2} \times \) \( {S}^{1} \) . Of course, the set removed and its replacement are homeomorphic, but the parametrisation of the removed set as disjoint copies of \( {S}^{1} \times {D}^{2} \), and the canonical method of replacement with respect to that, ensure that the new manifold is usually not \( {S}^{3} \) . A collection of disjoint solid tori in \( {S}^{3} \) is just a regular neighbourhood of a link, and a parametrisation of a neighbourhood of each component by \( {S}^{1} \times {D}^{2} \) is called a framing of the link. Thus it will be shown that 3-manifolds can be interpreted by means of framed links in \( {S}^{3} \) . The fact that any 3-manifold \( M \) is triangulable, and so can be regarded as a simplicial complex, will be assumed. It is hoped that piecewise linearity, though assumed throughout, will not be obtrusive. When \( M \) is closed (that is, compact and with empty boundary) and orientable, a triangulation will lead easily to the fact that \( M \) has a Heegaard splitting. 
This will mean that \( M \) is just two "handle-bodies" (see Definition 12.10) with their boundary surfaces identified by some homeomorphism between them. Philosophically, complete knowledge of surface homeomorphisms should tell all about 3-manifolds. Thus a little investigation of surface homeomorphisms is in order. Firstly, it is desirable to divide homeomorphisms into isotopy classes. As
already mentioned in Chapter 1, homeomorphisms are isotopic if one can be "slid" to the other. The definition of Chapter 1 is amplified below. If two homeomorphisms between surfaces do not differ significantly, one would not expect much difference between 3-manifolds formed by operations using those homeomorphisms. Definition 12.1. Piecewise linear homeomorphisms \( {h}_{0} \) and \( {h}_{1} \) between complexes \( X \) and \( Y \) are isotopic if they are connected by a path of homeomorphisms \( \left\{ {h}_{t}\right. \) : \( X \rightarrow Y, t \in \left\lbrack {0,1}\right\rbrack \} \) such that the map \( H : X \times \left\lbrack {0,1}\right\rbrack \rightarrow Y \times \left\lbrack {0,1}\right\rbrack \) defined by \( H\left( {x, t}\right) = \left( {{h}_{t}\left( x\right), t}\right) \) is a piecewise linear homeomorphism. If preferred, "smooth" could be substituted for "piecewise linear" in the above definition when \( X \) is a smooth manifold. 
However, it is important that the homeomorphism \( H \) should indeed belong in the category of choice. A classical result of Alexander ([113], [47]) states that any piecewise linear homeomorphism of the \( n \) -dimensional ball to itself, that is fixed on the boundary, is isotopic to the identity keeping the boundary fixed (by all the \( {h}_{t} \) ). This leads to the result that any piecewise linear orientation-preserving homeomorphism of the \( n \) -sphere to itself is isotopic to the identity. (Although the smooth versions of these results are, in general, false, they are true when \( n = 2 \) .) For surfaces it is, in fact, known that homotopic homeomorphisms are isotopic. It is easy to show that for any complex \( X \), the set of all self-homeomorphisms that are isotopic to the identity forms a normal subgroup of the group of all self-homeomorphisms of \( X \) . The quotient of the group of all self-homeomorphisms by this normal subgroup is called the mapping class group of \( X \) . The present motivation for thinking about isotopy comes from the following elementary lemma. Lemma 12.2. Suppose that \( U \) and \( V \) are 3-manifolds with homeomorphic boundaries, and that \( {h}_{0} : \partial U \rightarrow \partial V \) and \( {h}_{1} : \partial U \rightarrow \partial V \) are isotopic homeomorphisms. Then \( U{ \cup }_{{h}_{0}}V \) and \( U{ \cup }_{{h}_{1}}V \) are homeomorphic. Proof. Choose ([113],[47]) a collar neighbourhood \( C \) of \( \partial U \) in \( U \) ; \( C \) is a neighbourhood of \( \partial U \) homeomorphic to \( \partial U \times \left\lbrack {0,1}\right\rbrack \), with \( \partial U \) identified with \( \partial U \times 0 \) . A homeomorphism \( f : U{ \cup }_{{h}_{0}}V \rightarrow U{ \cup }_{{h}_{1}}V \) can be constructed by defining \( f \) to be the identity on \( \left( {U - C}\right) \cup V \) and on \( C \) defining \( f\left( {x, t}\right) = \) \( \left( {{h}_{1}^{-1}{h}_{t}x, t}\right) \) . 
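The gluing formula at the end of this proof can be watched in a toy case. The following Python sketch is purely illustrative and not part of the text's argument: it takes the boundary to be a circle and assumes the isotopy \( {h}_{t} \) is rotation by \( {t\alpha } \), then checks that the map \( f\left( {x, t}\right) = \left( {{h}_{1}^{-1}{h}_{t}x, t}\right) \) is the identity at the inner end of the collar (so it extends by the identity) and is compatible with the two gluings at the boundary.

```python
import math

# Toy model of the proof of Lemma 12.2: the boundary is a circle and the
# isotopy h_t is assumed to be rotation by t * ALPHA (an arbitrary angle).
ALPHA = 0.7

def h(t, theta):
    """The isotopy h_t on the boundary circle, here a rotation by t * ALPHA."""
    return (theta + t * ALPHA) % (2 * math.pi)

def h1_inverse(theta):
    """Inverse of h_1, i.e. rotation by -ALPHA."""
    return (theta - ALPHA) % (2 * math.pi)

def f(theta, t):
    """The map on the collar C = boundary x [0,1]: f(x, t) = (h_1^{-1} h_t x, t)."""
    return (h1_inverse(h(t, theta)), t)

theta = 1.234
# At t = 1 (the inner end of the collar), f is the identity, so it extends
# by the identity over the rest of U and over V.
assert abs(f(theta, 1)[0] - theta) < 1e-12
# At t = 0, following f and then the new gluing h_1 agrees with the old
# gluing h_0, so f is well defined from U glued by h_0 to U glued by h_1.
assert abs(h(1, f(theta, 0)[0]) - h(0, theta)) < 1e-12
```

The same two checks are exactly what make \( f \) a homeomorphism \( U{ \cup }_{{h}_{0}}V \rightarrow U{ \cup }_{{h}_{1}}V \) in the proof.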
In what follows, let \( F \) be a connected compact oriented surface, possibly with non-empty boundary. Let \( C \) be a simple closed curve embedded in \( F \), and let \( A \) be an annulus neighbourhood of \( C \) . The standard annulus is \( {S}^{1} \times \left\lbrack {0,1}\right\rbrack \) with some fixed orientation. Definition 12.3. A twist about \( \mathrm{C} \) is any homeomorphism isotopic to the homeomorphism \( \tau : F \rightarrow F \) defined such that \( \tau \mid F - A \) is the identity and, parametrising \( A \) as \( {S}^{1} \times \left\lbrack {0,1}\right\rbrack \) in an orientation-preserving manner, \( \tau \mid A \) is given by \( \tau \left( {{e}^{i\theta }, t}\right) = \left( {{e}^{i\left( {\theta - {2\pi t}}\right) }, t}\right) \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_0.jpg) Figure 12.1 Note that the effect of \( \tau \) on a path crossing \( C \) is to sweep that path all the way around the annulus. See Figure 12.1. Strictly, of course, a twist homeomorphism should here be piecewise linear; the fourth power of the piecewise linear homeomorphism shown in Figure 12.2 (which fixes the inner boundary component and moves each vertex on the outer boundary to the next vertex in a clockwise direction) is an appropriate piecewise linear model for a twist rather than the homeomorphism of Figure 12.1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_135_1.jpg) Figure 12.2 Definition 12.4. Oriented simple closed curves \( p \) and \( q \) contained in the interior of the surface \( F \) are called twist-equivalent, written \( p{ \sim }_{\tau }q \), if \( {hp} = q \) for some homeomorphism \( h \) of \( F \) that is in the group of homeomorphisms generated by all twists of \( F \) (which includes homeomorphisms isotopic to the identity). In this definition \( h \) is required to carry the orientation of one curve to that of the other. 
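The twist formula of Definition 12.3 can be checked directly: since \( {e}^{-{2\pi i}} = 1 \), the map \( \tau \left( {{e}^{i\theta }, t}\right) = \left( {{e}^{i\left( {\theta - {2\pi t}}\right) }, t}\right) \) restricts to the identity on both boundary circles of the annulus, which is what allows it to be glued to the identity on \( F - A \). A short Python sketch (illustrative only) verifying this, and the fact that an arc crossing the annulus is swept once around:

```python
import cmath
import math

def twist(z, t):
    """Dehn twist on the annulus S^1 x [0,1]:
    (e^{i*theta}, t) -> (e^{i*(theta - 2*pi*t)}, t)."""
    theta = cmath.phase(z)
    return (cmath.exp(1j * (theta - 2 * math.pi * t)), t)

z = cmath.exp(1j * 0.4)
# Identity on both boundary circles (t = 0 and t = 1), since e^{-2*pi*i} = 1;
# hence the twist glues with the identity on F - A.
assert abs(twist(z, 0)[0] - z) < 1e-12
assert abs(twist(z, 1)[0] - z) < 1e-12
# A straight arc {theta = 0.4} crossing the annulus has its angular
# coordinate decreased by one full turn as t runs from 0 to 1.
turn = (0.4 - 2 * math.pi) - 0.4
assert abs(turn + 2 * math.pi) < 1e-12
```

This is the precise sense in which \( \tau \) sweeps a transverse path all the way around the annulus, as in Figure 12.1.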
Of course, in general there may be no homeomorphism of any sort that sends \( p \) to \( q \) ; that is certainly the case if \( p \) separates \( F \) and \( q \) does not. Lemma 12.5. Suppose oriented simple closed curves \( p \) and \( q \), contained in the interior of the surface \( F \), intersect transversely at precisely one point. Then \( p{ \sim }_{\tau }q \) . Proof. The first diagram of Figure 12.3 shows the intersection point of \( p \) and \( q \) and also a simple closed curve \( {C}_{1} \) that runs parallel to, and is slightly displaced from, \( q \) . Similarly, \( {C}_{2} \) is a slightly displaced copy of \( p \) . The second diagram shows \( {\tau }_{1}p \), where \( {\tau }_{1} \) is a twist about \( {C}_{1} \) . The third diagram shows \( {\tau }_{2}{\tau }_{1}p \), where \( {\tau }_{2} \) is a twist about \( {C}_{2} \) . In this diagram \( {\tau }_{2}{\tau }_{1}p \) has a doubled-back portion that can easily be moved by a homeomorphism isotopic to the identity (that is, a slide in \( F \) ) to change \( {\tau }_{2}{\tau }_{1}p \) to \( q \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_0.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_1.jpg) Figure 12.3 Lemma 12.6. Suppose that oriented simple closed curves \( p \) and \( q \) contained in the interior of the surface \( F \) are disjoint and that neither separates \( F \) (that is, \( \left\lbrack p\right\rbrack \neq 0 \neq \left\lbrack q\right\rbrack \) in \( \left. {{H}_{1}\left( {F,\partial F}\right) }\right) \) . Then \( p{ \sim }_{\tau }q \) . Proof. Consideration of the surface obtained by cutting \( F \) along \( p \cup q \) shows at once that there is a simple closed curve \( r \) in \( F \) that intersects each of \( p \) and \( q \) transversely at one point. Then, by Lemma 12.5, \( p{ \sim }_{\tau }r{ \sim }_{\tau }q \) . Proposition 12.7. 
Suppose that oriented simple closed curves \( p \) and \( q \) are contained in the interior of the surface \( F \) and that neither separates \( F \) . Then \( p{ \sim }_{\tau }q \) . Proof. Changing \( q \) by means of a homeomorphism of \( F \) that is (close to and) isotopic to the identity, it can be assumed that \( p \) and \( q \) intersect transversely at \( n \) points. The proof is by induction on \( n \) ; Lemmas 12.5 and 12.6 start the induction, so assume that \( n \geq 2 \) and that the result is true for fewer than \( n \) points of intersection. Let \( A \) and \( B \) be consecutive points along \( p \) of \( p \cap q \) . Suppose firstly that \( p \) leaves \( A \) on one side of \( q \) and returns to \( B \) from the other side of \( q \) . Let \( r \) be a simple closed curve in \( F \) that starts near \( A \), follows close to \( p \) until near \( B \) and then returns to its start in a neighbourhood of \( q \) . As shown in the first diagram of Figure 12.4, \( r \) can be chosen so that \( p \cap r \) contains fewer than \( n \) points and \( q \cap r \) is one point. Hence \( p{ \sim }_{\tau }r \) by the induction hypothesis, and \( r{ \sim }_{\tau }q \) by Lemma 12.5. Suppose now that \( p \) leaves \( A \) on one side of \( q \) and returns to \( B \) from the same side of \( q \) . Let \( {r}_{1} \) and \( {r}_{2} \) be the two simple closed curves shown in the second ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_2.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_3.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_136_3.jpg) Figure 12.4 diagram of Figure 12.4. Each starts near \( A \), proceeds near \( p \) until close to \( B \) and then back to its start following near to \( q \) . However, \( {r}_{1} \) starts on the right of \( p \) and \( {r}_{2} \) starts on the left. 
Now in \( {H}_{1}\left( {F,\partial F}\right) ,\left\lbrack {r}_{1}\right\rbrack - \left\lbrack {r}_{2}\right\rbrack = \left\lbrack q\right\rbrack \), and hence at least one of \( {r}_{1} \) and \( {r}_{2}
\) does not separate (as \( \left\lbrack q\right\rbrack \neq 0 \) ). Let that curve be defined to be \( r \) . Then \( r \) is disjoint from \( q \), so \( r{ \sim }_{\tau }q \) by Lemma 12.6 and, as \( r \cap p \) has at most \( n - 2 \) points, \( p{ \sim }_{\tau }r \) by the induction hypothesis. Corollary 12.8. Let \( {p}_{1},{p}_{2},\ldots ,{p}_{n} \) be disjoint simple closed curves in the interior of \( F \) the union of which does not separate \( F \) . Let \( {q}_{1},{q}_{2},\ldots ,{q}_{n} \) be another set of curves with the same properties. Then there is a homeomorphism \( h \) of \( F \) that is in the group generated by twists, so that \( h{p}_{i} = {q}_{i} \) for each \( i = 1,2,\ldots, n \) . Proof. Suppose inductively that such an \( h \) can be found so that \( h{p}_{i} = {q}_{i} \) for each \( i = 1,2,\ldots, n - 1 \) . 
Apply Proposition 12.7 to \( h{p}_{n} \) and \( {q}_{n} \) in \( F \) cut along \( {q}_{1} \cup {q}_{2} \cup \ldots \cup {q}_{n - 1} \) . The theory of homeomorphisms of surfaces will be left at that point and attention turned back to \( n \) -manifolds with particular interest in \( n = 3 \) . Definition 12.9. Let \( M \) be an \( n \) -manifold, and let \( e : \partial {D}^{r} \times {D}^{n - r} \rightarrow \partial M \) be an embedding (where, as usual, \( {D}^{s} \) is the standard \( s \) -dimensional disc or ball). Then \( M{ \cup }_{e}\left( {{D}^{r} \times {D}^{n - r}}\right) \) is called “ \( M \) with an \( r \) -handle added”. Note that the boundary of this new manifold is \( \partial M \) changed by an \( \left( {r - 1}\right) \) -surgery. Definition 12.10. A handlebody of genus \( g \) is an orientable 3-manifold that is a 3-ball with \( g \) 1-handles added. Here, "orientable" can be taken to mean that every simple closed curve in the manifold has a solid torus neighbourhood. It is a straightforward exercise in the elementary technicalities of piecewise linear manifold theory to show that, up to homeomorphism, there is only one genus \( g \) handlebody. It is indeed, as already stated, the product of an interval with a \( g \) -holed disc. A regular neighbourhood of any finite connected graph embedded in an orientable 3-manifold is a handlebody. This follows by taking the neighbourhood of a maximal tree as the 3-ball and neighbourhoods of the midpoints of the remaining edges as 1-handles. Definition 12.11. A Heegaard splitting of a (closed, connected, orientable) 3-manifold \( M \) is a pair of handlebodies \( X \) and \( Y \) contained in \( M \) such that \( X \cup Y = M \) and \( X \cap Y = \partial X = \partial Y \) . Note that \( X \) and \( Y \) have the same genus; namely, the genus of their common boundary surface. Lemma 12.12. Any closed connected orientable 3-manifold has a Heegaard splitting. Proof. 
This is similar to the first part of the proof of Theorem 8.2. Take a triangulation of \( M \) as a simplicial complex \( K \) . The vertices of the first derived subdivision \( {K}^{\left( 1\right) } \) of \( K \) are the barycentres \( \widehat{A} \) of the simplexes \( A \) of \( K \) . The second derived subdivision \( {K}^{\left( 2\right) } \) of \( K \) is, of course, just \( {\left( {K}^{\left( 1\right) }\right) }^{\left( 1\right) } \) . The 1-skeleton of \( K \) (that is, the sub-complex consisting of the 0 -simplexes and 1 -simplexes of \( K \) ), being a graph, has, as intimated above, for its simplicial neighbourhood in \( {K}^{\left( 2\right) } \), a handlebody. The closure of the complement of this is the simplicial neighbourhood in \( {K}^{\left( 2\right) } \) of another graph. That graph, called the dual 1-skeleton of \( K \), is the sub-complex \( \mathop{\bigcup }\limits_{A}{C}_{A} \) of \( {K}^{\left( 1\right) } \), where the union is over all 3-simplexes \( A \), and \( {C}_{A} \) is the cone with vertex \( \widehat{A} \) on the barycentres of the 2-dimensional faces of \( A \) . Thus \( {K}^{\left( 2\right) } \) is expressed as the union of two handlebodies that intersect in their common boundary, and this is the required Heegaard splitting. Theorem 12.13. Let \( M \) be a closed connected orientable 3-manifold. There exist finite sets of disjoint solid tori \( {T}_{1}^{\prime },{T}_{2}^{\prime },\ldots ,{T}_{N}^{\prime } \) in \( M \) and \( {T}_{1},{T}_{2},\ldots ,{T}_{N} \) in \( {S}^{3} \) such that \( M - { \cup }_{1}^{N}\operatorname{Int}\left( {T}_{i}^{\prime }\right) \) and \( {S}^{3} - { \cup }_{1}^{N}\operatorname{Int}\left( {T}_{i}\right) \) are homeomorphic. Proof. By Lemma 12.12, \( M \) has a Heegaard splitting, so for handlebodies \( U \) and \( V \) of some genus \( g \), and some homeomorphism \( h : \partial U \rightarrow \partial V, M = \) \( U{ \cup }_{h}V \) . 
Let \( {p}_{1}^{\prime },{p}_{2}^{\prime },\ldots ,{p}_{g}^{\prime } \) be disjoint simple closed curves in \( \partial U \) that bound disjoint discs in \( U \), and let \( {q}_{1},{q}_{2},\ldots ,{q}_{g} \) be disjoint simple closed curves in \( \partial V \) (one around each "hole" of the handlebody) as shown in Figure 12.5, so that if \( \phi \) is any homeomorphism \( \phi : \partial U \rightarrow \partial V \) such that \( \phi \left( {p}_{i}^{\prime }\right) = {q}_{i} \) for each \( i \), then \( U{ \cup }_{\phi }V = {S}^{3} \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_138_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_138_0.jpg) Figure 12.5 Let \( h\left( {p}_{i}^{\prime }\right) = {p}_{i} \) for each \( i \) . If there were a homeomorphism of \( V \) sending each \( {p}_{i} \) to \( {q}_{i} \), then \( U{ \cup }_{h}V \) would be \( {S}^{3} \) . However, by Corollary 12.8, there is a product \( \psi \), of twists and inverses of twists, of \( \partial V \) that sends each \( {p}_{i} \) to \( {q}_{i} \) . Up to isotopy a twist \( \tau \) of \( \partial V \) is, by definition, supported on an annulus \( A \) . By Lemma 12.2 (and using the normality of the subgroup of all homeomorphisms isotopic to the identity) it may be assumed that all the twists concerned are so supported. As in Lemma 12.2, \( \partial V \) has a collar neighbourhood in \( V \), a neighbourhood homeomorphic to \( \partial V \times \left\lbrack {0,1}\right\rbrack \) with \( \partial V \) identified with \( \partial V \times 0 \) . Of course, \( A \times \left\lbrack {0,1}\right\rbrack \subset \partial V \times \left\lbrack {0,1}\right\rbrack \), and \( \tau \) extends to \( \tau \times 1 \) on \( A \times \left\lbrack {0,1/2}\right\rbrack \) . Then \( \tau \) extends, by the identity, over the remainder of the closure of \( V - \left( {A \times \left\lbrack {1/2,1}\right\rbrack }\right) \) . Thus \( \tau \) extends over \( V \) after the removal of the interior of a solid torus. 
This means that the product \( \psi \), of twists and inverse twists supported on annuli in \( \partial V \), extends to a homeomorphism from \( V \) less the interiors of solid tori to \( V \) less the interiors of (in general, different) solid tori. The solid tori that permit successive twists to extend are removed from successively narrower collars of \( \partial V \) . Thus, at the cost of removing these solid tori, there is a homeomorphism of \( V \) to \( V \) sending each \( {p}_{i} \) to \( {q}_{i} \), so gluing on copies of \( U \) by means of \( h \) to the first copy of \( V \) and by \( {\psi h} \) to the second copy gives the required result. Note that, with the notation of the above proof, \( \tau \) maps the boundary of the meridian disc of the solid torus \( A \times \left\lbrack {1/2,1}\right\rbrack \) to a curve that is homologous to one longitude plus some number of meridians of the boundary of the solid torus. The solid torus can, then, be parametrised as \( {S}^{1} \times {D}^{2} \) so that \( \tau \) maps \( \{ \star \} \times \partial {D}^{2} \) to \( {S}^{1} \times \{ \star \} \) . This translates at once into the following result: Theorem 12.14. Any closed connected orientable 3-manifold \( M \) can be obtained from \( {S}^{3} \) by a collection of 1 -surgeries, that is, by removing disjoint copies of \( {S}^{1} \times {D}^{2} \) and replacing them with copies of \( {D}^{2} \times {S}^{1} \) in the canonical way. Thus \( M \) bounds a 4-manifold that is a 4-ball to which a collection of 2-handles has been added. In using this result the disjoint copies of \( {S}^{1} \times {D}^{2} \) that are to be removed from \( {S}^{3} \) are thought of as a neighbourhood of a link in \( {S}^{3} \) . In order to specify the parametrisation of this neighbourhood by copies of \( {S}^{1} \times {D}^{2} \), parallels (in the \( {S}^{1} \times {D}^{2} \) structures) to the link components (the cores of the solid tori) are specified. 
Each parallel, or framing curve, is a simple closed curve on the boundary of a solid torus neighbourhood of a link component that will bound a disc when \( {S}^{1} \times {D}^{2} \) is replaced by \( {D}^{2} \times {S}^{1} \) . Each parallel can be specified by an integer
, allocated to the component of the link, that specifies the linking number in \( {S}^{3} \) of the component and its parallel (both oriented in the same direction around the solid torus neighbourhood). Alternatively the framed link can be taken to be a link of thin bands (annuli), the two boundary components of each annulus being a component of the link and its parallel. Sometimes the link is drawn with crossovers in the plane (or some other surface), and it is assumed that the designated parallel always runs beside the link component in the 2-dimensional projection. The framing so encoded by a diagram is sometimes colloquially described as the "blackboard framing". The representation of a closed connected orientable 3-manifold by means of surgery on a framed link is by no means unique. That certainly seems likely from the proof of Theorem 12.13. There is no unique way of expressing a homeomorphism as a product of twists, for there are relations in the mapping class group of a surface. 
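To make the integer encoded by a blackboard framing concrete: for a knot diagram, the parallel drawn beside the diagram has linking number with the knot equal to the writhe of the diagram, the sum of the signs of its self-crossings. A minimal Python sketch (the crossing signs are supplied by hand and are illustrative):

```python
def blackboard_framing(self_crossing_signs):
    """Linking number of a diagram's blackboard parallel with the knot:
    the writhe, i.e. the sum of the signs (+1 or -1) of its self-crossings."""
    return sum(self_crossing_signs)

# The standard diagram of the right-handed trefoil has three positive
# self-crossings, so its blackboard framing is 3.
assert blackboard_framing([+1, +1, +1]) == 3
# The usual figure-eight diagram has two positive and two negative
# crossings, giving blackboard framing 0.
assert blackboard_framing([+1, +1, -1, -1]) == 0
```

In particular, the blackboard framing depends on the chosen diagram, not just on the link, which is one reason the representation of a 3-manifold by a framed link is so far from unique.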
The following theorem, due to Kirby [65], describes two ways in which the framed links can be changed without changing the 3-manifolds that result from them by means of surgery. It is fairly easy to see that the changes of links by such Kirby moves do not change the 3-manifold. What is not obvious is the fact that iterations of these two types of move relate any two framed links representing the same 3-manifold. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_0.jpg) Figure 12.6 Theorem 12.15. Two framed links in \( {S}^{3} \) give, by surgery, the same oriented 3-manifold if and only if they are related by a sequence of moves of two types. In a move of type 1, an extra unknotted component, unlinked from all other components, with framing 1 or -1 is added to or removed from the link. In a move of type 2, any two components that are, together with their framing curves, contained in a doubly punctured disc (itself possibly knotted up and linked with other components) in \( {S}^{3} \), as on the left of Figure 12.6, can be changed to the two curves on the right, the new framing curves again being on the punctured disc. For the proof, which uses 4-dimensional Cerf theory, refer to [65]. If one considers the surgery information as a recipe for adding 2-handles on to a 4-ball to create a 4-manifold with the 3-manifold as its boundary, a move of type 2 corresponds to sliding one 2-handle over another. A type 1 move changes the 4-manifold by taking the connected sum with a complex projective plane (oriented in either way), or by removing such a summand. Neither manoeuvre changes the boundary of the 4-manifold. The two moves of Theorem 12.15 can be, and indeed have been, explored at length to give many examples of different framed links representing the same manifold [65]. 
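The effect of the two moves can be followed on the linking matrix of a framed link (framings on the diagonal, linking numbers elsewhere), which is a standard presentation matrix for \( {H}_{1} \) of the surgered manifold. A type 2 move replaces \( L \) by \( {E}^{T}{LE} \) for an elementary matrix \( E \), and a type 1 move adjoins a diagonal entry \( \pm 1 \); neither changes \( \left| {\det L}\right| \). A Python sketch of this bookkeeping (the sample matrix is arbitrary):

```python
from fractions import Fraction

def det(M):
    """Determinant by Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i]), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def slide(L, j, i):
    """Type 2 move: slide component j over component i. Row i is added to
    row j and column i to column j, i.e. L -> E^T L E; the new framing is
    the old one plus the framing of i plus twice their linking number."""
    M = [row[:] for row in L]
    n = len(M)
    for c in range(n):
        M[j][c] += M[i][c]
    for r in range(n):
        M[r][j] += M[r][i]
    return M

def blow_up(L, eps):
    """Type 1 move: add a split unknotted component with framing eps = +1 or -1."""
    n = len(L)
    return [row + [0] for row in L] + [[0] * n + [eps]]

L = [[3, 1], [1, -2]]  # an arbitrary 2-component framed link's linking matrix
assert abs(det(slide(L, 0, 1))) == abs(det(L))
assert abs(det(blow_up(L, -1))) == abs(det(L))
```

Since \( \left| {\det L}\right| \) is the order of \( {H}_{1} \) of the surgered manifold when that group is finite, this gives a quick sanity check that two framed links could conceivably be related by Kirby moves.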
An interesting exercise is to show that any closed connected orientable 3-manifold can be obtained by surgery on \( {S}^{3} \) using a framed link with all its components unknotted (a crossing in a link diagram can be changed by introducing, by a type 1 move, a new component and then employing two type 2 moves). (a) (b) (c) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_1.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_2.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_3.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_3.jpg) (d) (e) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_4.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_4.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_5.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_140_5.jpg) Figure 12.7 A few examples of framed links that yield, by surgery, certain well-known 3-manifolds are shown in Figure 12.7, where the integers indicate the linking numbers of framing curves. Diagram (a) is \( {S}^{1} \times {S}^{2} \), (b) is \( {S}^{1} \times {S}^{1} \times {S}^{1} \), (c) is the Poincaré homology 3-sphere with finite fundamental group, (d) is the lens space \( {L}_{p, q} \) where \( q/p \) has continued fraction expansion \( 1/\left\{ {{a}_{1} + 1/\left\{ {{a}_{2} + 1/\left\{ {{a}_{3} + \cdots + 1/{a}_{n}}\right\} }\right\} }\right\} \) , and (e) is the connected sum of a homology 3-sphere and real projective 3-space. The manifold obtained by surgery on a \( \left( {p, q}\right) \) cable knot with framing \( {pq} \) always has \( {L}_{p, q} \) as a connected summand. At the beginning of this chapter, the mapping class group (self-homeomorphisms up to isotopy) of a space was introduced, and the twist homeomorphisms of a surface were discussed. 
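For diagram (d) of Figure 12.7, the dependence of \( {L}_{p, q} \) on the framings \( {a}_{1},{a}_{2},\ldots ,{a}_{n} \) can be made concrete by evaluating the continued fraction with exact rational arithmetic. In the simplest case a single unknot with framing \( p \) gives \( q/p = 1/p \), that is, the lens space \( {L}_{p,1} \). A short Python sketch, following the convention of the text:

```python
from fractions import Fraction

def q_over_p(framings):
    """Evaluate q/p = 1/(a_1 + 1/(a_2 + ... + 1/a_n)) for the chain of
    unknots with framings a_1, ..., a_n in diagram (d) of Figure 12.7."""
    value = Fraction(framings[-1])
    for a in reversed(framings[:-1]):
        value = a + 1 / value
    return 1 / value

# One unknot with framing 5: q/p = 1/5, so the surgery gives L_{5,1}.
assert q_over_p([5]) == Fraction(1, 5)
# A chain with framings 2, 3: 1/(2 + 1/3) = 3/7, giving L_{7,3}.
assert q_over_p([2, 3]) == Fraction(3, 7)
```

Reading off \( p \) and \( q \) as the denominator and numerator of the resulting fraction recovers which lens space the chain of framed unknots describes.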
For a closed orientable surface the isotopy classes of orientation-preserving homeomorphisms form a subgroup of index 2 in the mapping class group (beware that sometimes it is that subgroup that is named the mapping class group). It can be shown that that subgroup is generated by all twists ([77], [24]). Further, a finite collection of twists generate ([78], [80]) this subgroup; a minimal collection of twist generators, found by S. P. Humphries [48], consists of the twists about the set of \( \left( {{2g} + 1}\right) \) curves shown in Figure 12.8. For a torus \( T \) these are just the familiar longitude and meridian curves; twists about them induce standard generators of the group of automorphisms of \( {H}_{1}\left( T\right) \) of determinant 1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_141_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_141_0.jpg) Figure 12.8 A finite presentation for the mapping class group of a surface was given by B. Wajnryb [131]. Note that for a surface of genus 2 the five generators for the orientation-preserving group commute with a \( \pi \) rotation about the "horizontal axis" (see Figure 12.9), so this rotation is in the centre of the group. This implies that a 3-manifold with a Heegaard splitting of genus 2 has a self-homeomorphism of period 2. In turn, using results of Thurston, that can be shown to lead to the result that any simply connected closed 3-manifold with a Heegaard splitting of genus two is \( {S}^{3} \) ; that is, the famous Poincaré conjecture is true for genus two 3-manifolds. Studies of the mapping class group of a closed non-orientable surface can be found in [79], [19] and [8]. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_141_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_141_1.jpg) Figure 12.9 ## Exercises 1. Prove that any piecewise linear homeomorphism \( h : {B}^{n} \rightarrow {B}^{n} \), with \( h \mid \partial {B}^{n} \) the identity, is isotopic, keeping the boundary fixed, to the identity homeomorphism. 
[Hint: Consider the \( \left( {n + 1}\right) \) -ball \( {B}^{n} \times \left\lbrack {0,1}\right\rbrack \) as the cone on its boundary, with an interior point for the vertex of the cone.] 2. Let \( X \) be a disc with \( n \) holes (that is, a 2-sphere from which the interiors of \( n + 1 \) disjoint discs have been removed). Suppose that \( h : X \rightarrow X \) is a (piecewise linear) homeomorphism that is the identity on \( \partial X \) . Prove that \( h \) can be expressed as a product of finitely many twists. [Hint: If \( \alpha \) is an arc in \( X \) from one boundary component to another, consider the intersection of \( {h\alpha } \) with other such arcs.] 3. Prove that any orientation-preserving homeomorphism of a closed connected surface \( F \) to itself is expressible as a product of finitely many twists. 4. Consider the mapping class group of isotopy classes of orientation-preserving homeomorphisms of the torus to itself. Show that this is isomorphic to the group of \( 2 \times 2 \) matrices over \( \mathbb{Z} \) generated by \( \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) \) and \( \left( \begin{array}{ll} 1 & 0 \\ 1 & 1 \end{array}\right) \) . What is the mapping class group of isotopy classes of all homeomorphisms of the Klein bottle? 5. Suppose that \( C \) is a simple closed curve in an orientable surface \( X \) and that \( h : X \rightarrow X \) is an orientation-preserving homeomorphism. How are the twist about \( C \) and the twist about \( {hC} \) related? Suppose that \( p \) and \( q \) are simple closed curves in \( X \) that intersect transversely in precisely one point; \( {\tau }_{p} \) and \( {\tau }_{q} \) are the twists about \( p \) and \( q \) . Show that in the mapping class group of \( X,{\tau }_{p}{\tau }_{q}{\tau }_{p} = {\tau }_{q}{\tau }_{p}{\tau }_{q} \) . 6. Prove that surgery on the unknot in \( {S}^{3} \) with \( \pm 1 \) framing just produces \( {S}^{3} \) . 7. 
Find two distinct non-trivial framed knots in \( {S}^{3} \) that describe, by means of surgery, the same 3-manifold. 8. Verify that diagrams (a) and (b) of Figure 12.7 are indeed surgery diagrams for \( {S}^{1} \times {S}^{2} \) and \( {S}^{1} \times {S}^{1} \times {S}^{1} \). Find surgery diagrams for real projective 3-space \( \mathbb{R}{P}^{3} \) and (harder) for \( {S}^{1} \times F \), where \( F \) is a closed connected orientable surface. 9. What is the effect on \( {S}^{3} \) of (i) a 0-surgery and (ii) a 2-surgery? 10. Show that any closed connected orientable 3-manifold can be obtained by surgery on a framed link in any other such manifold. 11. If \( M \) is a 3-manifold with a genus \( g \) Heegaard splitting, show that the fundamental group of \( M \) has a presentation with \( g \) generators and \( g \) relators. 12. Suppose an orientable connected surface is described in terms of a handle decomposition with just one 0-handle and some 1-handles. Use the idea of sliding a 1-handle over other 1-handles (a 2-dimensional version of the 4-dimensional handle sliding described in the second type of Kirby move) to produce a canonical form for the surface as depicted in Figure 6.1. What happens if there are more 0-handles and some 2-handles?

1009_(GTM175)An Introduction to Knot Theory
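For Exercises 4 and 5 the torus case can be checked by direct matrix computation. Taking the twist matrices on \( H_1(T) \) to be \( \left( \begin{array}{ll} 1 & 1 \\ 0 & 1 \end{array}\right) \) and \( \left( \begin{array}{rr} 1 & 0 \\ -1 & 1 \end{array}\right) \) (the second is the inverse of the generator stated in Exercise 4; which sign occurs depends on the chosen twist direction), the braid relation \( {\tau }_{p}{\tau }_{q}{\tau }_{p} = {\tau }_{q}{\tau }_{p}{\tau }_{q} \) has the following matrix counterpart:

```python
def mat_mul(x, y):
    """Multiply two 2x2 integer matrices given as tuples of rows."""
    return tuple(
        tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Matrices induced on H_1(torus) by the twists about the two standard curves;
# the sign in b depends on the chosen twist direction.
a = ((1, 1), (0, 1))
b = ((1, 0), (-1, 1))

aba = mat_mul(mat_mul(a, b), a)
bab = mat_mul(mat_mul(b, a), b)
assert aba == bab == ((0, 1), (-1, 0))          # the braid relation
assert det(a) == det(b) == 1                    # both lie in SL(2, Z)
assert mat_mul(aba, aba) == ((-1, 0), (0, -1))  # (aba)^2 = -I, so aba has order 4
```

The common element \( aba = bab \) is a rotation-like matrix of order 4, reflecting the well-known presentation of the mapping class group of the torus.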
# 13. 3-Manifold Invariants from the Jones Polynomial

As proved in Chapter 12, any closed connected orientable 3-manifold can be obtained by the process of surgery on a framed link in \( {S}^{3} \). Any invariant of framed links can be applied to such a surgery prescription in the hope of finding an invariant of the 3-manifold. That would need to be some entity associated to the 3-manifold and not just to the particular surgery description; it would need to be unchanged by all possible Kirby moves. An elementary example comes from the idea of linking numbers. A framed link (with components temporarily ordered and oriented) has a linking matrix. This is the symmetric matrix with entries the linking numbers between the pairs of components of the link. The linking number of a component with itself (a diagonal term of the matrix) is taken to be the integer that gives the framing of that component. This linking matrix can easily be seen to be a presentation matrix (in the sense of Chapter 6) for the first homology of the 3-manifold arising from surgery on the framed link. Thus the modulus of the determinant of the matrix, if it is non-zero, is the order of that homology group, and the nullity of the matrix is the first Betti number of the manifold. It is easy to check that these numerical invariants do indeed remain unchanged by Kirby moves on the framed link. This, however, is not too exciting, as homology has long been understood by other means. One might hope to emulate this procedure by a simple direct application of some link invariant. The Alexander polynomial and the Jones polynomial fail in that respect. This chapter explains how the Jones polynomial can nevertheless be amplified to achieve a 3-manifold invariant. Roughly, the idea is to take a linear sum of the Jones polynomials, evaluated at a complex root of unity, of copies of the link with the components replaced by various parallels of the original components.
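The homology computation just described is easy to mechanise. A minimal sketch (the function name `linking_matrix_invariants` is ours) performs exact Gaussian elimination on an integer linking matrix, returning the determinant and the nullity:

```python
from fractions import Fraction

def linking_matrix_invariants(matrix):
    """Return (det, nullity) of a square integer matrix, computed by exact
    Gaussian elimination over the rationals.  For a linking matrix, |det|
    (when non-zero) is the order of H_1 of the surgered manifold and the
    nullity is its first Betti number."""
    n = len(matrix)
    m = [[Fraction(x) for x in row] for row in matrix]
    det, rank = Fraction(1), 0
    for col in range(n):
        pivot = next((r for r in range(rank, n) if m[r][col]), None)
        if pivot is None:
            det = Fraction(0)       # a pivotless column forces determinant zero
            continue
        if pivot != rank:
            m[pivot], m[rank] = m[rank], m[pivot]
            det = -det              # a row swap changes the sign
        det *= m[rank][col]
        for r in range(rank + 1, n):
            factor = m[r][col] / m[rank][col]
            for c in range(col, n):
                m[r][c] -= factor * m[rank][c]
        rank += 1
    return int(det), n - rank
```

For example, the 0-framed Hopf link has linking matrix \( \left( \begin{array}{ll} 0 & 1 \\ 1 & 0 \end{array}\right) \) with determinant \( -1 \), so surgery on it gives a homology sphere (in fact \( {S}^{3} \)), while the single 0-framed unknot gives nullity 1, matching \( b_1\left( {S}^{1} \times {S}^{2}\right) = 1 \).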
The resulting invariants are known as Witten's quantum \( S{U}_{q}\left( 2\right) \) 3-manifold invariants. The details are somewhat intricate and, as might be expected, will here be eased by the simplifying approach of the Kauffman bracket and the linear skein theory associated with it. The Temperley-Lieb algebras appear as instances of that theory. E. Witten's initiation of this topic can be found in [135]. A proof, using quantum groups, of the existence of these \( S{U}_{q}\left( 2\right) \) 3-manifold invariants was given first by N. Y. Reshetikhin and V. G. Turaev [109]; it was amplified by Kirby and P. Melvin [68]. Early proofs using skein theory, or the Temperley-Lieb algebra, appeared in [82] and [83]; the proof that follows here first appeared in [86]. When thinking about surgery, framed links are needed. As remarked in Chapter 12, a framing can be interpreted as annuli twisting along the components of a link, and this can be encoded by a planar diagram of the link. The understanding then is that each annulus is a widening of each component in the plane of the diagram. Extra twists of the annulus correspond to extra 'kinks' in the diagram. This means that the writhe of any component of the diagram is the linking number of a boundary curve of the annulus with that link component. This integer is the framing of the link component. A diagram so encoding the framing will be called a diagram for the framed link. Of course, moving a framed link around by isotopy in \( {S}^{3} \) will not change at all the result of surgery upon it. This moving around corresponds to the equivalence of regular isotopy on representing diagrams in \( {\mathbb{R}}^{2} \cup \infty \). Recall that regular isotopy is generated by the Reidemeister moves of Types II and III. A Kirby type 2 move on a diagram of a framed link can be thought of as dragging a segment of one component of the link up to another and then passing it over to the far side of that component.
The framings so encoded by the diagrams are then correct for such a move. A Kirby type 1 move consists of adding to a diagram, or subtracting from it, a curve with precisely one crossing. The theorem of Kirby [65], Theorem 12.15, is then that closed connected oriented 3-manifolds are equivalent if and only if any link diagrams that represent them (with respect to surgery) differ by regular isotopy and a sequence of Kirby moves of the above two types. Thus, to construct a 3-manifold invariant, it is necessary only to associate with each link diagram some algebraic concept that does not change when the diagram changes under regular isotopy or Kirby moves. Of course any link invariant is unchanged under (regular) isotopy. It is in accommodating the type 2 move that difficulty arises; the type 1 move turns out to be almost a piece of administration. Consider now, for a surface, the following version of the linear skein theory associated to the Kauffman bracket. Let \( F \) be an oriented surface with a finite collection (possibly empty) of points specified in its boundary \( \partial F \) . A link diagram in the surface \( F \) consists of finitely many arcs and closed curves in \( F \), with just finitely many transverse crossings with the usual "over and under" information; the end points of the arcs must be precisely the specified points in \( \partial F \) . This definition is meant to contain no surprise. Two diagrams are regarded as the same if they differ by a homeomorphism of \( F \) that is isotopic to the identity always keeping \( \partial F \) fixed. The required linear skein theory of \( F \) (inspired by the Kauffman bracket) is defined as follows: Definition 13.1. Let \( \mathrm{A} \) be a fixed complex number. 
The linear skein \( \mathcal{S}\left( F\right) \) of \( F \) is the vector space of formal linear sums, over \( \mathbb{C} \), of (unoriented) link diagrams in \( F \) quotiented by the relations

(i) \( D\; \cup \) (a trivial closed curve) \( = \left( {-{A}^{-2} - {A}^{2}}\right) D \),

(ii) (a diagram with a crossing) \( = A \) (that diagram with the crossing smoothed in one way) \( + \;{A}^{-1} \) (that diagram with the crossing smoothed in the other way).

Here a trivial closed curve in a diagram is one that is null-homotopic and that contains no crossing. The empty set is a permitted diagram if no point is specified in \( \partial F \). The equation in (ii) refers to three diagrams that are identical except where shown. It follows, exactly as in Lemma 3.3, that diagrams that are regularly isotopic in \( F \) (that is, related by the Reidemeister Type II and III moves in \( F \)) represent the same element of \( \mathcal{S}\left( F\right) \). Although a linear skein space is in this way associated with any oriented surface, the only surfaces needed in what follows are the plane, the sphere, the annulus and the disc. The linear skein of the plane, \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \), is easily seen to be a 1-dimensional vector space with the empty diagram as a (fairly natural) base. \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) will thus be identified with \( \mathbb{C} \). This is because, by use of (ii), any link diagram in any surface can be expressed uniquely as a linear sum of diagrams with no crossing at all, and, in this case, it follows from (i) that those diagrams are multiples of the empty diagram. Of course, this is the Kauffman bracket approach to the Jones polynomial; the Kauffman bracket of a diagram is the coordinate of the diagram in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) if the zero-crossing diagram of the unknot were the base. The inclusion of \( {\mathbb{R}}^{2} \) in \( {\mathbb{R}}^{2} \cup \infty \) induces an isomorphism of the skein spaces of the plane and the sphere.
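Relations (i) and (ii) already determine the coordinate in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) of any planar diagram by a finite state sum over smoothings. The following sketch uses our own encoding: each crossing is a 4-tuple of edge labels in cyclic order, and which of its two smoothings counts as the \( A \)-smoothing depends on the over/under data of the actual diagram, so the tuples are assumed already consistently ordered. Laurent polynomials in \( A \) are stored as dicts from exponent to coefficient.

```python
from itertools import product

DELTA = {2: -1, -2: -1}   # delta = -A^2 - A^(-2), as {exponent: coefficient}

def _pmul(p, q):
    """Multiply Laurent polynomials in A given as {exponent: coefficient}."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c}

def _padd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
    return {e: c for e, c in r.items() if c}

def bracket(crossings):
    """State sum from relations (i) and (ii): each crossing is a 4-tuple of
    edge labels in cyclic order; one smoothing joins (a,b) and (c,d), the
    other joins (b,c) and (d,a).  Each state contributes A^(smoothing count
    difference) times delta^(loops - 1)."""
    edges = {e for x in crossings for e in x}
    total = {}
    for state in product((1, -1), repeat=len(crossings)):
        parent = {e: e for e in edges}
        def find(e):
            while parent[e] != e:
                parent[e] = parent[parent[e]]
                e = parent[e]
            return e
        for s, (a, b, c, d) in zip(state, crossings):
            pairs = ((a, b), (c, d)) if s == 1 else ((b, c), (d, a))
            for x, y in pairs:
                parent[find(x)] = find(y)
        loops = len({find(e) for e in edges})   # closed curves in this state
        term = {sum(state): 1}
        for _ in range(loops - 1):
            term = _pmul(term, DELTA)
        total = _padd(total, term)
    return total

kink = [("a", "a", "b", "b")]   # unknot diagram with a single kink
hopf = [("u1", "v1", "u2", "v2"), ("u2", "v2", "u1", "v1")]
assert bracket(kink) == {3: -1}            # a kink multiplies the bracket by -A^3
assert bracket(hopf) == {4: -1, -4: -1}    # bracket of the Hopf link
```

The kink value \( -{A}^{3} \) is exactly the framing-change behaviour of diagram 'kinks' mentioned above, and it is what makes the Kirby type 1 move a matter of bookkeeping later.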
The linear skein of the annulus, \( {S}^{1} \times I \), similarly has a base consisting of diagrams with no crossing and no null-homotopic closed curve. Each base element is then just a number of parallel curves encircling the annulus. A product of a diagram in an annulus with a diagram in another annulus can be defined by identifying together one boundary component from each annulus. This produces a third annulus containing a diagram that is the union of the two original diagrams. It is easy to see that this operation induces a well-defined bilinear product on \( \mathcal{S}\left( {{S}^{1} \times I}\right) \) that turns it into a commutative algebra. Let \( \alpha \) denote the base element that consists of one single curve encircling the annulus once with no crossing. Then the base mentioned above is \( \left\{ {{\alpha }^{0},{\alpha }^{1},{\alpha }^{2},\ldots }\right\} \), where \( {\alpha }^{0} \) denotes the empty diagram in the annulus, and \( {\alpha }^{n} \) is represented by \( n \) parallel curves all encircling the annulus. \( \mathcal{S}\left( {{S}^{1} \times I}\right) \) is thus \( \mathbb{C}\left\lbrack \alpha \right\rbrack \), the polynomial algebra in \( \alpha \) with complex coefficients. Next consider the linear skein \( \mathcal{S}\left( {{D}^{2},{2n}}\right) \) of a disc with \( {2n} \) points in its boundary. Again, this has a base consisting of all diagrams with no crossing and no closed curve.
(A combinatorial exercise shows there are \( \frac{1}{n + 1}\left( \begin{matrix} {2n} \\ n \end{matrix}\right) \) such diagrams, this number being the \( {n}^{\text{th }} \) Catalan number.) Regarding the disc as a square with \( n \) standard points on the left edge and \( n \) on the right, a product of diagrams can be defined by juxtaposing squares, identifying the right edge of one (with its \( n \) special points) with the left edge of the other. This product of diagrams extends to a well-defined bilinear map that turns \( \mathcal{S}\left( {{D}^{2},{2n}}\right) \) into an algebra \( T{L}_{n} \), the \( {n}^{\text{th }} \) Temperley-Lieb algebra. As an algebra \( T{L}_{n} \) is generated by \( n \) elements \( 1,{e}_{1},{e}_{2},\ldots ,{e}_{n - 1} \) shown in Figure 13.1, for any of the above base elements is a product of these (an easy exercise). In this and later diagrams, an integer \( n \) beside an arc signifies \( n \) copies of that arc all parallel in the plane so that, for example, the identity element \( \mathbf{1} \in T{L}_{n} \) is \( n \) parallel arcs going from one side of the square to the other. Note that in practice, some figures will, for convenience, show the square as a rectangle! ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_145_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_145_0.jpg) Figure 13.1 ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_146_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_146_0.jpg) Figure 13.2 Nothing subtle has yet occurred. It is now, however, essential to understand the definition of the Jones-Wenzl idempotent \( {f}^{\left( n\right) } \in T{L}_{n} \) as defined in [133]. In the following figures \( {f}^{\left( n\right) } \) will be shown as a small blank square with \( n \) arcs entering and \( n \) leaving (see Figure 13.2); indeed, the number of such arcs is used to determine to which value of \( n \), and hence to which Temperley-Lieb algebra, such a blank square refers. 
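The diagram multiplication in \( T{L}_{n} \) can be made concrete. In the sketch below (our own encoding), a crossingless diagram is a dict pairing the points \( 0,\ldots ,n-1 \) on the left edge with the points \( n,\ldots ,{2n}-1 \) on the right; composing two diagrams glues the squares and counts the closed loops created, each of which contributes a factor \( \delta = -{A}^{-2} - {A}^{2} \) by relation (i):

```python
from math import comb

def compose(m1, m2, n):
    """Glue the right edge of diagram m1 to the left edge of diagram m2.
    Returns (matching, loops): the resulting crossingless diagram and the
    number of closed loops produced by the gluing."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for p, q in m1.items():
        union((1, p), (1, q))
    for p, q in m2.items():
        union((2, p), (2, q))
    for i in range(n):                 # glue right edge of m1 to left edge of m2
        union((1, n + i), (2, i))
    outer = [(1, i) for i in range(n)] + [(2, n + i) for i in range(n)]
    groups = {}
    for node in outer:
        groups.setdefault(find(node), []).append(node)
    matching = {}
    for x, y in groups.values():       # each component meets exactly two outer points
        matching[x[1]], matching[y[1]] = y[1], x[1]
    inner_roots = {find((1, n + i)) for i in range(n)}
    loops = len([r for r in inner_roots if r not in groups])
    return matching, loops

e1 = {0: 1, 1: 0, 2: 3, 3: 2}          # the generator e_1 of TL_2: a cap and a cup
identity = {0: 2, 2: 0, 1: 3, 3: 1}    # the identity diagram: two parallel arcs
assert compose(e1, e1, 2) == (e1, 1)   # e_1 e_1 = delta e_1 (the loop gives delta)
assert compose(identity, e1, 2) == (e1, 0)
assert comb(6, 3) // 4 == 5            # the 3rd Catalan number: dim TL_3 = 5
```

The single loop produced in `compose(e1, e1, 2)` is the diagrammatic content of the relation \( {e}_{i}{e}_{i} = \delta {e}_{i} \), and the Catalan count matches the dimension formula quoted above.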
Although \( {f}^{\left( n\right) } \) will be represented by a linear sum of diagrams, it is sometimes helpful to pretend it is just one diagram! The complex number \( {\Delta }_{n} \) will be that obtained by placing \( {f}^{\left( n\right) } \) in the plane, joining the \( n \) points on the left of the square by parallel arcs to those on the right (see Figure 13.3) and interpreting the result in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \equiv \mathbb{C} \) . This type of definition will occur again. More pedantically, \( {\Delta }_{n} \) is the image of \( {f}^{\left( n\right) } \) under the linear map \( T{L}_{n} \rightarrow \mathcal{S}\left( {\mathbb{R}}^{2}\right) \equiv \mathbb{C} \) induced by mapping each diagram in the square (with \( {2n} \) boundary points) to a planar diagram formed by the above standard joining-up process. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_146_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_146_1.jpg) Figure 13.3 The element \( {f}^{\left( n\right) } \) is defined and characterised in the following lemma: Lemma 13.2. Suppose that \( {A}^{4} \) is not a \( {k}^{\text{th }} \) root of unity for \( k \leq n \) . Then there is a unique element \( {f}^{\left( n\right) } \in T{L}_{n} \) such that (i) \( {f}^{\left( n\right) }{e}_{i} = 0 = {e}_{i}{f}^{\left( n\right) } \) for \( 1 \leq i \leq n - 1 \) , (ii) \( \left( {{f}^{\left( n\right) } - 1}\right) \) belongs to the algebra generated by \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n - 1}}\right\} \) , (iii) \( {f}^{\left( n\right) }{f}^{\left( n\right) } = {f}^{\left( n\right) } \) and (iv) \( {\Delta }_{n} = {\left( -1\right) }^{n}\left( {{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) /\left( {{A}^{2} - {A}^{-2}}\right) \) . Proof. 
Note that if \( {f}^{\left( n\right) } \) exists, \( \mathbf{1} - {f}^{\left( n\right) } \) is the identity of the algebra generated by \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{n - 1}}\right\} \), and so \( {f}^{\left( n\right) } \) is then certainly unique. Let \( {f}^{\left( 0\right) } \) be the empty diagram (so that \( {\Delta }_{0} = 1 \) ), let \( {f}^{\left( 1\right) } = \mathbf{1} \), and inductively assume that \( {f}^{\left( 2\right) },{f}^{\left( 3\right) },\ldots ,{f}^{\left( n\right) } \) have been defined with the above properties (i),(ii),(iii) and (iv). Observe that (i) and (ii) immediately imply (iii) and that this generalises to the identity shown in Figure 13.4 provided that \( \left( {i + j}\right) \leq n \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_146_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_146_2.jpg) Figure 13.4 Now consider the element \( x \), say, of \( T{L}_{n - 1} \) shown at the start of Figure 13.5. The identity of Figure 13.4 implies that \( {f}^{\left( n - 1\right) }x = x \) . But \( {f}^{\left( n - 1\right) }x \) is, by (i), just some scalar multiple \( \lambda \) of \( {f}^{\left( n - 1\right) } \) (because \( x \) is a linear sum of 1 ’s and products of \( {e}_{i} \) ’s); the trick of placing squares in the plane and joining points on the left to points on the right, in the standard way, implies that the scalar \( \lambda \) is \( {\Delta }_{n}/{\Delta }_{n - 1} \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_147_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_147_0.jpg) Figure 13.5 Suppose now that \( {A}^{4k} \neq 1 \) for \( k \leq n + 1 \), so that \( {\Delta }_{k} \neq 0 \) for \( k \leq n \) . Define \( {f}^{\left( n + 1\right) } \in T{L}_{n + 1} \) inductively by the equation of Figure 13.6. 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_147_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_147_1.jpg)

Figure 13.6

Properties (i) and (ii) (and hence (iii)) for \( {f}^{\left( n + 1\right) } \) follow immediately, except perhaps for the fact that \( {f}^{\left( n + 1\right) }{e}_{n} = 0 \). However, Figure 13.7 shows, using the identities of Figure 13.5 and Figure 13.4, why that also is true.

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_147_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_147_2.jpg)

Figure 13.7

It remains to investigate \( {\Delta }_{n + 1} \). Consider the operation of placing a square in an annulus and joining \( k \) points on one side to \( k \) points on the other by parallel arcs encircling the annulus. For each \( k \), this gives a linear map \( T{L}_{k} \rightarrow \mathcal{S}\left( {{S}^{1} \times I}\right) \). The image of \( {f}^{\left( k\right) } \) is some polynomial \( {S}_{k}\left( \alpha \right) \) in the generator \( \alpha \) of \( \mathcal{S}\left( {{S}^{1} \times I}\right) \); \( {S}_{0}\left( \alpha \right) = {\alpha }^{0} \) and \( {S}_{1}\left( \alpha \right) = \alpha \). Inserting into the annulus, in this way, the defining relation of Figure 13.6 for \( {f}^{\left( n + 1\right) } \) gives the formula of Figure 13.8. However, in the last diagram in Figure 13.8 the two small squares representing \( {f}^{\left( n\right) } \) can be slid together to become one small square (using \( {f}^{\left( n\right) }{f}^{\left( n\right) } = {f}^{\left( n\right) } \)), and an application of the formula of Figure 13.5 gives \[ {S}_{n + 1}\left( \alpha \right) = \alpha {S}_{n}\left( \alpha \right) - {S}_{n - 1}\left( \alpha \right) . \]

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_148_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_148_0.jpg)

Figure 13.8

This, with the above initial conditions, is the recurrence formula for the \( {n}^{th} \) Chebyshev polynomial (of the second kind, renormalised) in \( \alpha \).
If now an annulus is placed in the plane, the ensuing linear map \( \mathcal{S}\left( {{S}^{1} \times I}\right) \rightarrow \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) sends \( {\alpha }^{k} \) to \( {\left( -{A}^{-2} - {A}^{2}\right) }^{k} \), and by definition it maps \( {S}_{k}\left( \alpha \right) \) to \( {\Delta }_{k} \). Thus
\[ {\Delta }_{n + 1} = \left( {-{A}^{-2} - {A}^{2}}\right) {\Delta }_{n} - {\Delta }_{n - 1}. \]
An induction argument then easily shows that
\[ {\Delta }_{n + 1} = \frac{{\left( -1\right) }^{n + 1}\left( {{A}^{2\left( {n + 2}\right) } - {A}^{-2\left( {n + 2}\right) }}\right) }{{A}^{2} - {A}^{-2}}. \]
The proof of Lemma 13.2 could have been slightly shortened by inserting the squares directly into the plane, but consideration of the annulus is important later. Also, attention has been drawn to the Chebyshev polynomial \( {S}_{n} \), which, with indeterminate \( x \) and integer coefficients, is defined by
\[ {S}_{n + 1}\left( x\right) = x{S}_{n}\left( x\right) - {S}_{n - 1}\left( x\right) ;\;{S}_{0}\left( x\right) = 1,\;{S}_{1}\left( x\right) = x. \]
It has the important (easy) properties that
\[ {S}_{n}\left( x\right) = {\left( -1\right) }^{n}{S}_{n}\left( {-x}\right) \text{ and }\left( {t - {t}^{-1}}\right) {S}_{n}\left( {t + {t}^{-1}}\right) = {t}^{n + 1} - {t}^{-\left( {n + 1}\right) }.
\] Further, it has been seen that \( {f}^{\left( n\right) } \) inserted into \( {S}^{1} \times I \) with the boundary points of \( {f}^{\left( n\right) } \) connected up by arcs encircling the annulus is \( {S}_{n}\left( \alpha \right) \in \mathcal{S}\left( {{S}^{1} \times I}\right) \) . This features in the next most important definition, soon to be extensively employed. Definition 13.3. For a given integer \( r \), let \( \omega \in \mathcal{S}\left( {{S}^{1} \times I}\right) \) be defined by \[ \omega = \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}{S}_{n}\left( \alpha \right) \] As a final instance of skein theory, consider the linear skein of an annulus with two points specified on one of its boundary components, \( \mathcal{S}\left( {{S}^{1} \times I,2\text{points}}\right) \) . Let \( {a\omega } \) and \( {b\omega } \) be the elements of \( \mathcal{S}\left( {{S}^{1} \times I,2\text{points}}\right) \) that consist of \( \omega \) inserted into the annulus together with an arc, joining the two boundary points of the annulus; the arc goes "above" \( \omega \) for \( {a\omega } \) or "below" \( \omega \) for \( {b\omega } \) (see Figure 13.9). Lemma 13.4. In \( \mathcal{S}\left( {{S}^{1} \times I,2\text{points}}\right) ,{a\omega } - {b\omega } \) is a linear sum of two elements, each of which contains a copy of \( {f}^{\left( r - 1\right) } \) . (That is, each of the two elements is ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_149_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_149_0.jpg) Figure 13.9 the image of \( {f}^{\left( r - 1\right) } \) under some map \( T{L}_{r - 1} \rightarrow \mathcal{S}\left( {{S}^{1} \times I,2\text{points}}\right) \) formed by including a square into an annulus and joining up boundary points in some way.) Proof. Consider the inclusion, shown in Figure 13.10, of the \( T{L}_{n + 1} \) recurrence relation of Figure 13.6 into the annulus. 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_149_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_149_1.jpg)

Figure 13.10

The top boundary points on either side of the square are joined to the two points on the annulus boundary, and the other \( n \) points on the left of the square are joined to the \( n \) on the right by parallel arcs encircling the annulus. As in the proof of Lemma 13.2, the two small squares in the final diagram of Figure 13.10 can be slid together (using \( {f}^{\left( n\right) }{f}^{\left( n\right) } = {f}^{\left( n\right) } \)) to become one square, and the equality can then be rearranged to become that of Figure 13.11. Sum these equalities from \( n = 0 \) to \( n = r - 2 \) (here \( {\Delta }_{-1} = 0 \)). The right-hand side is \( {a\omega } \). Rotate each annulus of Figure 13.11 through \( \pi \) and sum again. The right-hand side is now \( {b\omega } \). The left-hand sides of the formulae so obtained are almost the same; recalling that \( {\Delta }_{-1} = 0 \), the difference of these left-hand sides is the difference of the first term of Figure 13.11, when \( n = r - 2 \), and its rotation; in each is a copy of \( {f}^{\left( r - 1\right) } \).

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_149_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_149_2.jpg)

Figure 13.11

If \( D \) is a planar diagram of a link of \( n \) ordered components, \( D \) defines a multilinear map
\[ { < }\;,\ldots ,\;{ > }_{D} : \mathcal{S}\left( {{S}^{1} \times I}\right) \times \mathcal{S}\left( {{S}^{1} \times I}\right) \times \cdots \times \mathcal{S}\left( {{S}^{1} \times I}\right) \rightarrow \mathcal{S}\left( {\mathbb{R}}^{2}\right) . \]
This map is defined, by multilinearity, using the following construction with diagrams: Take \( n \) link diagrams in \( n \) annuli and immerse the annuli, with their diagrams, in the plane as a regular neighbourhood of the \( n \) (ordered) components of \( D \).
Over- and under-crossings of \( D \) become over- and under-crossings of the immersed annuli and of the diagrams that they contain. In this way the diagrams in the annuli are made to run parallel to the original components of \( D \) . Then the \( n \) annulus diagrams have produced a diagram in \( {\mathbb{R}}^{2} \) representing an element of \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \equiv \mathbb{C} \) . It is easy to check that this induces a well-defined map of the required form. As a simple example, let \( D \) be the diagram on the left of Figure 13.12 ; then \( < {\alpha }^{2},\alpha ,1{ > }_{D} \) is represented in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) by the diagram on the right of Figure 13.12 (remember that in \( \mathcal{S}\left( {{S}^{1} \times I}\right) ,\alpha \) is the generator and 1 is represented by the empty set). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_150_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_150_0.jpg) Figure 13.12 Lemma 13.5. Suppose that \( A \) is chosen so that \( {A}^{4} \) is a primitive \( {r}^{\text{th }} \) root of unity, \( r \geq 3 \) . Suppose that \( D \) is a planar diagram of a link of \( n \) (ordered) components. Suppose that \( {D}^{\prime } \) is another such diagram, obtained from \( D \) by a Kirby type 2 move, in which a parallel of the first component of \( D \) is joined by some band to another component (or, equivalently, a segment of the second component is moved up to and over the first). Then \[ < \omega ,\ldots ,\;{ > }_{D} = < \omega ,\ldots ,\;{ > }_{{D}^{\prime }}. \] Proof. It must be checked that the elements of \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \), produced as described above from \( D \) and from \( {D}^{\prime } \), with \( \omega \) as the "diagram" around the first component and with any given diagrams around the others, are in fact the same element. 
The difference between these elements is the result of a sequence of moves, each consisting of moving, by regular isotopy, an arc of some component up to that labelled with \( \omega \), and changing from an immersed copy of \( {a\omega } \) to one of \( {b\omega } \). By Lemma 13.4, this difference is a linear sum of elements of \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \), each containing a copy of \( {f}^{\left( r - 1\right) } \). However, in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) any element containing a copy of \( {f}^{\left( r - 1\right) } \) is zero if \( {A}^{4} \) is a primitive \( {r}^{th} \) root of unity. That is because such an element is, for some \( x \in T{L}_{r - 1} \), the image in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) of \( {f}^{\left( r - 1\right) }x \) under the map induced by placing the square in the plane and joining the \( r - 1 \) points on the left to those on the right by parallel arcs. As usual, \( {f}^{\left( r - 1\right) }x \) is a scalar multiple of \( {f}^{\left( r - 1\right) } \), but \( {f}^{\left( r - 1\right) } \) maps to \( {\Delta }_{r - 1} \) and \( {\Delta }_{r - 1} = 0 \) because \( {A}^{4r} = 1 \). Corollary 13.6. If \( {A}^{4} \) is a primitive \( {r}^{\text{th }} \) root of unity, \( r \geq 3 \), and planar diagrams \( D \) and \( {D}^{\prime } \) are related by a sequence of Kirby moves of type 2, then
\[ < \omega ,\omega ,\ldots ,\omega { > }_{D} = < \omega ,\omega ,\ldots ,\omega { > }_{{D}^{\prime }}. \]
In what follows, \( {U}_{ + } \) and \( {U}_{ - } \) will be planar figure-eight diagrams, each with one crossing, representing the unknot with framings +1 and -1 respectively; \( U \) will denote the diagram of the 0-framed unknot with no crossing at all. The definition of \( \omega \) implies at once that \( \langle \omega {\rangle }_{U} = \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}{}^{2} \).
When \( {A}^{4} \) is a primitive \( {r}^{th} \) root of unity, the substitution \[ {\Delta }_{n} = \frac{{\left( -1\right) }^{n}\left( {{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) }{{A}^{2} - {A}^{-2}} \] and the summation of the ensuing geometric progression produce the formula \[ {\left\langle \omega \right\rangle }_{U} = \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}{}^{2} = \frac{-{2r}}{{\left( {A}^{2} - {A}^{-2}\right) }^{2}}. \] Note that \( \langle \omega {\rangl
1009_(GTM175)An Introduction to Knot Theory
Note that \( \langle \omega {\rangle }_{U} \neq 0 \) . The next result implies that \( \langle \omega {\rangle }_{{U}_{ + }} \) and \( \langle \omega {\rangle }_{{U}_{ - }} \) are also non-zero. The proof of this lemma will be given a little later. Lemma 13.7. Suppose \( r \geq 3 \) and \( A \) is a primitive \( 4{r}^{th} \) root of unity. Then \[ < \omega { > }_{{U}_{ + }} < \omega { > }_{{U}_{ - }} = < \omega { > }_{U} = \frac{-{2r}}{{\left( {A}^{2} - {A}^{-2}\right) }^{2}}. \] Now comes the theorem (first proved in another form in [109]) asserting the existence of certain 3-manifold invariants that, up to normalisation, are often called the quantum \( S{U}_{q}\left( 2\right) \) invariants. First though, recall the linking matrix of a framed link with ordered oriented components mentioned at the start of this chapter.
This matrix changes by congruence under Kirby type 2 moves, so its numbers of positive and of negative eigenvalues do not change under such moves, nor do they change if different orientations or orderings on the link's components are chosen. Theorem 13.8. Suppose that a closed oriented 3-manifold \( M \) is obtained by surgery on a framed link that is represented by a planar diagram D. Let \( {b}_{ + } \) be the number of positive eigenvalues and \( {b}_{ - } \) be the number of negative eigenvalues of the linking matrix of this link. Suppose \( r \geq 3 \) and that \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity. Then \[ < \omega ,\omega ,\ldots ,\omega { > }_{D} < \omega { > }_{{U}_{ + }}^{-{b}_{ + }} < \omega { > }_{{U}_{ - }}^{-{b}_{ - }} \] is a well-defined invariant of \( M \) . Proof. Note that \( A \) is a primitive \( 4{r}^{th} \) root of unity, and so, by Lemma 13.7, \( < \omega { > }_{{U}_{ + }} \) and \( < \omega { > }_{{U}_{ - }} \) are non-zero. It follows from Corollary 13.6 and the preceding remarks about the linking matrix that the given expression is invariant under Kirby type 2 moves. The last two factors make it invariant under Kirby type 1 moves, and regular isotopy of \( D \) just induces regular isotopies of all the diagrams used in defining the expression. The invariant just defined is essentially the \( S{U}_{q}\left( 2\right) \) invariant of \( M \) at a "level" corresponding to \( r \) . Observe however that if \( \omega \) is replaced throughout by \( {\mu \omega } \), where \( \mu \) is a constant complex number, then clearly another slightly different invariant is obtained. (The new invariant is the old one multiplied by \( \mu \) raised to the power of the first Betti number of \( M \), which is the nullity of the above linking matrix.) It may often be more convenient to use some such renormalisation. Some small generalisations to this whole approach can be made in several directions.
One can take \( A \) to be a primitive \( 2{r}^{th} \) root of unity when \( r \) is odd ([12], or see [86]). One can take \( A \) to be an indeterminate symbol rather than a complex number and work with modules over \( \mathbb{Z}\left\lbrack {A,{A}^{-1}}\right\rbrack \) rather than vector spaces, quotienting when appropriate by a cyclotomic polynomial. One can also rephrase the exposition in terms of the skein theory of framed links in 3-manifolds rather than using link diagrams in surfaces. The invariant generalises at once to become an invariant of framed links in 3-manifolds; just add extra components to the surgery link (see [86]). A more subtle extension to the theory comes from expressing \( \omega \) as \( {\omega }_{0} + {\omega }_{1} \) , where \[ {\omega }_{0} = \mathop{\sum }\limits_{\substack{{n = 0} \\ {n\text{ even }} }}^{{r - 2}}{\Delta }_{n}{S}_{n}\left( \alpha \right) ,\;{\omega }_{1} = \mathop{\sum }\limits_{\substack{{n = 0} \\ {n\text{ odd }} }}^{{r - 2}}{\Delta }_{n}{S}_{n}\left( \alpha \right) . \] If \( \omega \) is replaced by \( {\omega }_{0} \) or \( {\omega }_{1} \) in Figure 13.9, the result analogous to that of Lemma 13.4 is that each of \( a{\omega }_{0} - b{\omega }_{1} \) and \( a{\omega }_{1} - b{\omega }_{0} \) is a multiple of an element containing a copy of \( {f}^{\left( r - 1\right) } \) . The theory just described can be altered by decorating some subset of the components of the surgery link with \( {\omega }_{0} \) and the remainder with \( {\omega }_{1} \) . Careful choice of those subsets leads ([11], or see [86]) to invariants of a 3-manifold \( M \) with spin structure or with a preferred element of \( {H}^{1}\left( {M;\mathbb{Z}/2\mathbb{Z}}\right) \) . To complete this chapter, a proof of Lemma 13.7 is needed.
If the square, with \( n \) points specified on each of its two sides, is placed in the plane or in \( {S}^{2} \), each element of \( T{L}_{n} \) (a linear sum of diagrams inside the square) can be regarded as a linear map to \( \mathbb{C} \) of the linear skein of diagrams outside the square. This map is induced by taking a diagram inside and a diagram outside the square and regarding the union of the two as an element of \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) = \mathbb{C} \) . As has already been noted, if \( r \geq 3 \) and \( {A}^{4} \) is a primitive \( {r}^{th} \) root of unity, \( {f}^{\left( r - 1\right) } \) defines the zero map of outsides although it is not the zero element of \( T{L}_{r - 1} \) . Consider now the element of \( T{L}_{n} \) shown in Figure 13.13, regarded as a map of outsides, that consists of \( {f}^{\left( n\right) } \) encircled by an \( \omega \) . Lemma 13.9. Suppose \( r \geq 3 \) and \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity. The element of \( T{L}_{n} \) shown in Figure 13.13 is the zero map of outsides if \( 1 \leq n \leq r - 2 \) . When \( n = 0 \), the element acts as multiplication by \( \langle \omega {\rangle }_{U} \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_153_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_153_0.jpg) Figure 13.13 Proof. Consider first the element of \( T{L}_{n} \) that consists of \( {f}^{\left( n\right) } \) encircled by one simple closed curve. This is shown in Figure 13.14 for \( n = 4 \) . Figure 13.14 shows a calculation for that element. Firstly one crossing is removed in the two standard ways, the results being multiplied by \( A \) and \( {A}^{-1} \) and added. The two elements obtained are then simplified by removing kinks and multiplying by \( - {A}^{\pm 3} \) . 
Now, in the two resulting elements, removal of any of the crossings depicted in one of the standard ways gives zero (as \( {f}^{\left( n\right) }{e}_{i} = 0 \) ), so only the other standard way need be considered. It follows that \( {f}^{\left( n\right) } \) encircled by one simple closed curve is equal, in \( T{L}_{n} \), to \( \left( {-{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) {f}^{\left( n\right) } \) . Now the element required in this Lemma is \( {f}^{\left( n\right) } \) encircled by an \( \omega \), regarded as a map of outsides. Let this be denoted \( x \) . A small single unknotted simple closed curve inserted into this changes \( x \), in the usual way, to \( \left( {-{A}^{-2} - {A}^{2}}\right) x \) . However, that small curve can be slid right over the \( \omega \) without (by Lemma 13.5) changing the map of outsides, and then removed altogether (by the preceding paragraph) at the cost of multiplying by \( \left( {-{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) \) . Thus \( \left( {-{A}^{-2} - {A}^{2}}\right) x = \) \( \left( {-{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) x \) . Hence either \( x = 0 \) or \( {A}^{2\left( {n + 1}\right) } = {A}^{2} \) or \( {A}^{2\left( {n + 1}\right) } = {A}^{-2} \) . The two latter possibilities do not occur for \( 1 \leq n \leq r - 2 \), as \( A \) is a primitive \( 4{r}^{th} \) root of unity, so then \( x = 0 \) . When \( n = 0 \), it is trivial that \( x \) acts as multiplication by \( \langle \omega {\rangle }_{U} \) because there is nothing but the curve labelled \( \omega \) to consider. Proof of Lemma 13.7. By Corollary 13.6, \( \langle \omega {\rangle }_{{U}_{ + }}\langle \omega {\rangle }_{{U}_{ - }} \) is equal to the evaluation of a diagram in which a component with one crossing, labelled \( \omega \), is simply linked with a component with no self-crossing, also labelled \( \omega \) (see Figure 13.15).
By definition the \( \omega \) on the first component is \( \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}{S}_{n}\left( \alpha \right) \), and \( {S}_{n}\left( \alpha \right) \) is \( {f}^{\left( n\right) } \) inserted into the annulus and joined around the annulus by \( n \) parallel arcs. By Lemma 13.9, the linking curve ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_153_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_153_1.jpg) Figure 13.14 ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_154_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_154_0.jpg) Figure 13.15
labelled \( \omega \) converts to zero each term of the summation except the first (when \( n = 0 \) ). Thus \( \langle \omega {\rangle }_{{U}_{ + }}\langle \omega {\rangle }_{{U}_{ - }} = \langle \omega {\rangle }_{U} \) . That completes the proof of the existence of the \( S{U}_{q}\left( 2\right) \) 3-manifold invariants associated with the Jones polynomial. The first proof by V. G. Turaev and H. Wenzl, based on representation theory, of the existence of \( S{U}_{q}\left( n\right) \) invariants (associated with the HOMFLY polynomial of Chapter 15) can be found in [128]. A skein theory \( S{U}_{q}\left( n\right) \) proof, a much harder version of the proof of this chapter, is given by Y. Yokota [139].

## Exercises

1. Prove that the elements \( 1,{e}_{1},{e}_{2},\ldots ,{e}_{n - 1} \) do indeed generate the Temperley-Lieb algebra \( T{L}_{n} \) . 2.
Draw five diagrams that form a base of \( T{L}_{3} \) and determine a specific expression for the idempotent \( {f}^{\left( 3\right) } \) as a linear sum of these base elements. 3. Consider the \( \pi \) -rotations of the square, used to define the Temperley-Lieb algebra \( T{L}_{n} \) , about axes from north to south, from east to west and perpendicular to the plane of the square. Show that for \( n \geq 3 \), these rotations induce involutions of \( T{L}_{n} \) which are not the identity but which fix the element \( {f}^{\left( n\right) } \) . 4. Let \( \rho \) be an element of the permutation group \( {S}_{n} \) . Let \( {D}_{\rho } \) be the element of the Temperley-Lieb algebra \( T{L}_{n} \) that consists of precisely \( n \) arcs, one arc joining the point labelled \( i \) on the left edge of the square to the point labelled \( {\rho i} \) on the right edge (where labellings start at the top). In \( {D}_{\rho } \), if \( i < j \) and \( {\rho i} > {\rho j} \), there is one crossing between the arc starting at \( i \) and that starting at \( j \) and there the first arc is over the second. There are no other crossings. Let \( \left| \rho \right| \) denote the number of crossings in \( {D}_{\rho } \) . Show that the idempotent \( {f}^{\left( n\right) } \in T{L}_{n} \) is a scalar multiple of \( \mathop{\sum }\limits_{{\rho \in {S}_{n}}}{A}^{3\left| \rho \right| }{D}_{\rho } \) and determine that scalar. 5. The operation of placing a square, containing a generating diagram of \( T{L}_{n} \), in the plane, joining the \( n \) points on the left to the \( n \) on the right (introducing no new crossing) and evaluating the result in the skein of the plane induces a linear map tr : \( T{L}_{n} \rightarrow \mathbb{C} \) . (Thus \( \operatorname{tr}\left( {f}^{\left( n\right) }\right) = {\Delta }_{n} \) .) 
Show that \( \operatorname{tr}\left( {xy}\right) = \operatorname{tr}\left( {yx}\right) \) and that \( \left( {x, y}\right) \mapsto \operatorname{tr}\left( {xy}\right) \) defines a bilinear form on \( T{L}_{n} \) . If \( A \) is not a root of unity show that this form is non-degenerate. 6. Prove that the Chebyshev polynomials have a product formula of the form \( {S}_{m}\left( x\right) {S}_{n}\left( x\right) = \mathop{\sum }\limits_{r}{S}_{r}\left( x\right) \), and determine the range of \( r \) for given \( m \) and \( n \) . 7. The collections of elements \( \left\{ {{\alpha }^{n} : n \geq 0}\right\} \) and \( \left\{ {{S}_{n}\left( \alpha \right) : n \geq 0}\right\} \) are bases of the space \( \mathcal{S}\left( {{S}^{1} \times I}\right) \) . Find an expression for \( {\alpha }^{n} \) as a linear sum of elements in the second base. [Hint: Prove that \( {x}^{n + 1} = \mathop{\sum }\limits_{{r = 0}}^{n}\left( \begin{array}{l} n \\ r \end{array}\right) {S}_{n - {2r} + 1}\left( x\right) \) .] 8. Suppose a diagram \( D \) of a framed knot can be changed by mutation to become a diagram \( {D}^{\prime } \), where the mutation is effected by rotating, in the usual way, a disc in the plane whose boundary meets \( D \) at just four points. Prove that \( {\left\langle {S}_{n}\left( \alpha \right) \right\rangle }_{D} = {\left\langle {S}_{n}\left( \alpha \right) \right\rangle }_{{D}^{\prime }} \) . Is it true that \( {\left\langle {\alpha }^{n}\right\rangle }_{D} = {\left\langle {\alpha }^{n}\right\rangle }_{{D}^{\prime }} \) ? 9. Prove that the signature of the linking matrix of a framed link is not changed when the link is changed by a Kirby move of the second type. 10. Let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity. 
Suppose that \( \phi \in \mathcal{S}\left( {{S}^{1} \times I}\right) \) is an element of the skein of the annulus with the property that if \( H \) is a two-crossing diagram of a non-trivial (Hopf) link, \( \langle \phi ,\psi {\rangle }_{H} = 0 \) for all \( \psi \in \mathcal{S}\left( {{S}^{1} \times I}\right) \) . Show that if \( D \) is a diagram of any other two-component link, then \( \langle \phi ,\psi {\rangle }_{D} = 0 \) for all \( \psi \in \mathcal{S}\left( {{S}^{1} \times I}\right) \) . [Hint: Use \( \omega \) .] 11. Let \( D \) be a planar link diagram, \( {D}_{1},{D}_{2},\ldots ,{D}_{n} \) being the sub-diagrams of the individual components. Let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity and suppose that \( k \leq r - 2 \) . Let \( w\left( {D}_{1}\right) \) be the writhe of \( {D}_{1} \) . If \( i\left( 2\right), i\left( 3\right) ,\ldots, i\left( n\right) \) are non-negative integers, show that \[ {\left\langle {S}_{k}\left( \alpha \right) ,{\alpha }^{i\left( 2\right) },{\alpha }^{i\left( 3\right) },\ldots ,{\alpha }^{i\left( n\right) }\right\rangle }_{D} \] \[ = {\left( -1\right) }^{\Lambda + r}{\left( {\left( -1\right) }^{k + r + 1}{A}^{-{r}^{2}}\right) }^{w\left( {D}_{1}\right) }{\left\langle {S}_{r - 2 - k}\left( \alpha \right) ,{\alpha }^{i\left( 2\right) },{\alpha }^{i\left( 3\right) },\ldots ,{\alpha }^{i\left( n\right) }\right\rangle }_{D}, \] where \( \Lambda = \mathop{\sum }\limits_{{j = 2}}^{n}i\left( j\right) \operatorname{lk}\left( {{D}_{1},{D}_{j}}\right) \) .

## 14. Methods for Calculating Quantum Invariants

The quantum \( S{U}_{q}\left( 2\right) \) 3-manifold invariants associated with a primitive \( 4{r}^{\text{th }} \) root of unity, described in the previous chapter, are fairly new and mysterious. Their use has so far been exceedingly limited in knot theory and in 3-manifold theory.
Certainly they do distinguish many pairs of 3-manifolds, even pairs with the same homotopy type, but that has usually been more simply achieved by other means. However, there exist pairs of distinct manifolds with the same invariants for all \( r \) (see [85], [55] and [62]). For some manifolds, for some values of \( r \) the invariant is known by direct calculation to be zero. Superficially it might seem to be almost impossible to calculate any of these invariants. The calculation, from first principles, of the invariant corresponding to a \( 4{r}^{\text{th }} \) root of unity involves taking an \( \left( {r - 2}\right) \) -parallel of a surgery link giving the 3-manifold. If the link’s diagram has \( n \) crossings, that of the parallel has \( n{\left( r - 2\right) }^{2} \) crossings; calculating a Jones polynomial by naive means soon becomes impractical when many crossings are involved. It will be shown here that it is in principle fairly easy to give a formula, as a summation, for the invariants of lens spaces and, more generally, for certain Seifert fibrations. Although in theory any of the invariants can always be calculated, it is sensible to use various simplifying procedures whenever possible. Some of those will be described in this chapter. Tables of specific computer calculations appear in [104] and in [62], where one can search for patterns in the resulting lists of complex numbers. The basic strategy in making calculations of the quantum \( S{U}_{q}\left( 2\right) \) invariants is to make calculations of elements of the skein of \( {S}^{2} \), making as much use as possible of the idempotents \( {f}^{\left( n\right) } \) of the Temperley-Lieb algebras \( T{L}_{n} \) . The methods for doing this were first developed by several authors; original accounts can be found in [87], [86], [84], [61], [62] and [137]. The next two important preparatory results relate to the Temperley-Lieb algebras. Lemma 14.1. 
The element of \( T{L}_{n} \) shown on the left of Figure 14.1, which consists of the idempotent \( {f}^{\left( n\right) } \) followed by a complete positive "kink" in all \( n \) strands, is \( {\left( -1\right) }^{n}{A}^{{n}^{2} + {2n}}{f}^{\left( n\right) } \) . Proof. As shown in Figure 14.1, one strand can be separated a little from the other \( n - 1 \) strands. Now removing the kink in that single strand contributes \( - {A}^{3} \) , ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_157_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_157_0.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_157_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_157_1.jpg) Figure 14.1
and (as in the proof of Lemma 13.9) removing all the other crossings of the single strand with the other \( n - 1 \) strands contributes only a multiplying factor of \( {A}^{2\left( {n - 1}\right) } \) (any removal of a crossing in a negative manner gives zero on interacting with \( \left. {f}^{\left( n\right) }\right) \) . Thus the first diagram of Figure 14.1 is equal to \( - {A}^{{2n} + 1} \) times the third diagram, but that is \( {\left( -1\right) }^{n - 1}{A}^{{n}^{2} - 1}{f}^{\left( n\right) } \) by induction on \( n \) . The result follows at once. Note that this implies that the removal of a negative kink adjacent to an \( {f}^{\left( n\right) } \) entails multiplying by a factor of \( {\left( -1\right) }^{n}{A}^{-\left( {{n}^{2} + {2n}}\right) } \) . Lemma 14.2.
The element of \( T{L}_{n} \) shown in Figure 14.2, which consists of the idempotent \( {f}^{\left( n\right) } \) with all its strands encircled by \( a \) parallel strands that join up the ends of an idempotent \( {f}^{\left( a\right) } \), is \[ {\left( -1\right) }^{a}\frac{{A}^{2\left( {n + 1}\right) \left( {a + 1}\right) } - {A}^{-2\left( {n + 1}\right) \left( {a + 1}\right) }}{{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}{f}^{\left( n\right) }. \] Proof. The \( a \) parallel strands and the idempotent \( {f}^{\left( a\right) } \) can, as explained in Chapter 13, be thought of as \( {S}_{a}\left( \alpha \right) \) contained in an annulus encircling the strands of \( {f}^{\left( n\right) } \), where \( {S}_{a} \) is the \( {a}^{\text{th }} \) Chebyshev polynomial. Now, as in the proof of Lemma 13.9, \( {f}^{\left( n\right) } \) with a single strand encircling it (to be thought of as \( \alpha \) in the annulus) is \( \left( {-{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) {f}^{\left( n\right) } \) . Hence the element required here is \( {S}_{a}\left( {-{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) {f}^{\left( n\right) } \) . This immediately gives the result using the remarks about Chebyshev polynomials after Lemma 13.2. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_157_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_157_2.jpg) Figure 14.2 The first of these results is sometimes interpreted by saying that the operation of inserting a positive kink induces a linear map \( \mathcal{S}\left( {{S}^{1} \times I}\right) \rightarrow \mathcal{S}\left( {{S}^{1} \times I}\right) \) and, with respect to this, each \( {S}_{n}\left( \alpha \right) \) is an eigenvector with corresponding eigenvalue \( {\left( -1\right) }^{n}{A}^{{n}^{2} + {2n}} \) . A direct application of this result is the following: Lemma 14.3. Suppose \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity.
Then \[ < \omega { > }_{{U}_{ + }} = \frac{G}{2{A}^{\left( 3 + {r}^{2}\right) }\left( {{A}^{2} - {A}^{-2}}\right) }, \] where \( G \) is the Gauss sum given by \( G = \mathop{\sum }\limits_{{n = 1}}^{{4r}}{A}^{{n}^{2}} \) . Proof. Recall that \( {U}_{ + } \) is the diagram of the unknot with one positive crossing, \[ \omega = \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}{S}_{n}\left( \alpha \right) \text{ and }{\Delta }_{n} = \frac{{\left( -1\right) }^{n}\left( {{A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }}\right) }{{A}^{2} - {A}^{-2}}. \] So use of Lemma 14.1 to remove the kink in each \( {S}_{n}\left( \alpha \right) \) shows that \( \langle \omega {\rangle }_{{U}_{ + }} \) is \[ \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}^{2}{\left( -1\right) }^{n}{A}^{{n}^{2} + {2n}} = {\left( {A}^{2} - {A}^{-2}\right) }^{-2}\mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\left( -1\right) }^{n}{A}^{{n}^{2} + {2n}}{\left( {A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) }\right) }^{2}. \] Now elementary manoeuvres of algebraic number theory (see [74] for example) show that the summation in the last term is \( \frac{1}{2}{A}^{-\left( {3 + {r}^{2}}\right) }\left( {{A}^{2} - {A}^{-2}}\right) \mathop{\sum }\limits_{{n = 1}}^{{4r}}{A}^{{n}^{2}} \) . The fact that reflecting diagrams induces in \( \mathcal{S}\left( {S}^{2}\right) \) an interchange of \( A \) with \( {A}^{-1} \) , and that \( {\Delta }_{n} \) is unaltered by such interchange, means that \[ < \omega { > }_{{U}_{ - }} = \frac{-\bar{G}}{2{A}^{-\left( {3 + {r}^{2}}\right) }\left( {{A}^{2} - {A}^{-2}}\right) }. \] Thus \[ - \bar{G}G/4{\left( {A}^{2} - {A}^{-2}\right) }^{2} = < \omega { > }_{{U}_{ + }} < \omega { > }_{{U}_{ - }} \] and this has already been shown, in Lemma 13.7, to be \( - {2r}{\left( {A}^{2} - {A}^{-2}\right) }^{-2} \) . Thus \( \bar{G}G = {8r} \) . In fact, when \( A = {e}^{{i\pi }/{2r}} \), it can be shown that \( G = 2\sqrt{2r}{e}^{{i\pi }/4} \) . 
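Both statements about the Gauss sum can be confirmed numerically for the particular root \( A = {e}^{{i\pi }/{2r}} \); a small sketch:

```python
import cmath, math

def gauss_sum(r):
    # G = sum_{n=1}^{4r} A^{n^2}  with  A = e^{i pi / 2r}
    A = cmath.exp(1j * cmath.pi / (2 * r))
    return sum(A**(n * n) for n in range(1, 4 * r + 1))

for r in range(3, 10):
    G = gauss_sum(r)
    # \bar{G} G = 8r, as derived above from Lemma 13.7
    assert abs(G.conjugate() * G - 8 * r) < 1e-8
    # G = 2 sqrt(2r) e^{i pi / 4} for this particular choice of A
    assert abs(G - 2 * math.sqrt(2 * r) * cmath.exp(1j * cmath.pi / 4)) < 1e-8
```

The exact value \( G = 2\sqrt{2r}{e}^{{i\pi }/4} \) depends on the choice \( A = {e}^{{i\pi }/{2r}} \); for other primitive \( 4{r}^{th} \) roots only \( \bar{G}G = {8r} \) persists.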
It was remarked in Chapter 12 that a lens space has a surgery diagram that consists of a chain of unknotted simple closed curves, each with some framing, each simply linking the curve before it in the chain and the curve after it (except that the curves at the two ends of the chain only link one other curve). Calculation of the invariant of Chapter 13 involves evaluating the element of \( \mathcal{S}\left( {S}^{2}\right) \) that arises from allocating \( \omega \) to each component of the chain. That can be done by expanding each \( \omega \) as \( \mathop{\sum }\limits_{{n = 0}}^{{r - 2}}{\Delta }_{n}{S}_{n}\left( \alpha \right) \) and using multilinearity, next changing all framings to zero by removing "kinks" using Lemma 14.1, and then removing components from the end of the chain using Lemma 14.2. Factors involving powers of \( < \omega { > }_{{U}_{ + }} \) and \( < \omega { > }_{{U}_{ - }} \) can be evaluated using Lemma 14.3. There results a formula that can be given to a computer for determination (see [104] for details). An extensive analysis of such a formula appears in [76]. Work on the formula shows, for example, that for all \( r \) the lens spaces \( {L}_{{65},8} \) and \( {L}_{{65},{18}} \) have the same invariant, but that the invariant is not a function of the fundamental group. The lens space invariants were also explored in [28] and [51]. The same method works if the unknotted components of the surgery diagram are linked not just in a linear chain but in a tree-like configuration. The 3-manifold then has the structure of the union of Seifert fibre spaces (see [104]). It has already been intimated that it is expedient to renormalise the \( S{U}_{q}\left( 2\right) \) invariant discussed so far by replacing \( \omega \) with \( {\mu \omega } \), for some carefully chosen \( \mu \in \mathbb{C} \) . 
Now choose \( \mu \in \mathbb{C} \) so that \[ {\mu }^{-2} = \langle \omega {\rangle }_{{U}_{ + }}\langle \omega {\rangle }_{{U}_{ - }} = \langle \omega {\rangle }_{U} = \frac{-{2r}}{{\left( {A}^{2} - {A}^{-2}\right) }^{2}} \] (quoting Lemma 13.7 in the last equality). This means that \( \langle {\mu \omega }{\rangle }_{{U}_{ + }} = \langle {\mu \omega }{\rangle }_{{U}_{ - }}^{-1} \) . The renormalisation of the invariant can then be written in terms of the signature of the linking matrix of the surgery link; it is this renormalisation, which will now be defined, that produces some elegant evaluations. Definition 14.4. Suppose \( r \geq 3 \) and \( A \) is a primitive \( 4{r}^{th} \) root of unity. Let \( M \) be a closed oriented 3-manifold. Define the invariant \( {\mathcal{I}}_{A}\left( M\right) \) by \[ {\mathcal{I}}_{A}\left( M\right) = < {\mu \omega },{\mu \omega },\ldots ,{\mu \omega }{ > }_{D} < {\mu \omega }{ > }_{{U}_{ - }}^{\sigma }\mu , \] where \( \sigma \) is the signature of the linking matrix of a link diagram \( D \) that is a surgery diagram for \( M \) . It follows at once that when \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity, \[ {\mathcal{I}}_{A}\left( {S}^{3}\right) = \frac{{A}^{2} - {A}^{-2}}{\sqrt{-{2r}}}\;\text{ and }\;{\mathcal{I}}_{A}\left( {{S}^{1} \times {S}^{2}}\right) = 1. \] This is because the empty diagram represents \( {S}^{3} \), so \( {\mathcal{I}}_{A}\left( {S}^{3}\right) = \mu \), this being the term inserted somewhat gratuitously at the end of the above definition. From the definition of \( \mu , < {\mu \omega }{ > }_{U}^{-1} = \mu = \left( {{A}^{2} - {A}^{-2}}\right) /\sqrt{-{2r}} \) . The diagram \( U \), the zero-crossing diagram of the unknot, represents \( {S}^{1} \times {S}^{2} \), so \( {\mathcal{I}}_{A}\left( {{S}^{1} \times {S}^{2}}\right) = 1 \) .
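The relations used here, namely \( {\mu }^{-2} = \langle \omega {\rangle }_{U} \), \( \langle {\mu \omega }{\rangle }_{{U}_{ + }} = \langle {\mu \omega }{\rangle }_{{U}_{ - }}^{-1} \) and \( \langle {\mu \omega }{\rangle }_{U}^{-1} = \mu \), can all be checked numerically, computing \( \langle \omega {\rangle }_{{U}_{ \pm }} \) from the kink eigenvalues of Lemma 14.1. A sketch, assuming the particular root \( A = {e}^{{i\pi }/{2r}} \) and the principal branch of the complex square root:

```python
import cmath

def Delta(n, A):
    return (-1)**n * (A**(2*(n + 1)) - A**(-2*(n + 1))) / (A**2 - A**-2)

def omega_unknot(r, A, writhe):
    # <omega> on an unknot with `writhe` kinks: by Lemma 14.1 each S_n(alpha)
    # picks up its kink eigenvalue (-1)^n A^{n^2+2n} once per kink
    return sum(Delta(n, A)**2 * ((-1)**n * A**(n*n + 2*n))**writhe
               for n in range(r - 1))

r = 5
A = cmath.exp(1j * cmath.pi / (2 * r))      # one primitive 4r-th root of unity
mu = (A**2 - A**-2) / cmath.sqrt(-2 * r)

assert abs(mu**-2 - omega_unknot(r, A, 0)) < 1e-9          # mu^{-2} = <omega>_U
assert abs(mu * omega_unknot(r, A, 1)
           - 1 / (mu * omega_unknot(r, A, -1))) < 1e-9     # <mu w>_{U+} = <mu w>_{U-}^{-1}
assert abs(mu - 1 / (mu * omega_unknot(r, A, 0))) < 1e-9   # I_A(S^3) = mu = <mu w>_U^{-1}
```

The first assertion also re-verifies Lemma 13.7 numerically, since \( \langle \omega {\rangle }_{{U}_{ + }}\langle \omega {\rangle }_{{U}_{ - }} \) equals `omega_unknot(r, A, 1) * omega_unknot(r, A, -1)`.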
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_159_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_159_0.jpg) Figure 14.3 To make more general progress in calculating these 3-manifold invariants, it is necessary to develop some expertise in evaluating, in \( \mathcal{S}\left( {S}^{2}\right) \), certain "diagrams" that consist of idempotents of various Temperley-Lieb algebras joined together by arcs in very simple ways. Consider first the diagram shown on the left of Figure 14.3. It consists of \( x \) parallel copies of a circle, \( y \) of another circle and \( z
of a third with \( {f}^{\left( x + y\right) },{f}^{\left( y + z\right) } \) and \( {f}^{\left( z + x\right) } \) inserted as shown. Let \( \Gamma \left( {x, y, z}\right) \) be the element of \( \mathcal{S}\left( {S}^{2}\right) \) that this diagram represents. This element will now be determined, for it will be important to know when \( \Gamma \left( {x, y, z}\right) \) is and is not zero. In what follows, \( {\Delta }_{n} \) ! denotes \( {\Delta }_{n}{\Delta }_{n - 1}{\Delta }_{n - 2}\ldots {\Delta }_{1} \), this being interpreted as 1 if \( n \) is -1 or zero. Lemma 14.5. \[ \Gamma \left( {x, y, z}\right) = \frac{{\Delta }_{x + y + z}!{\Delta }_{x - 1}!{\Delta }_{y - 1}!{\Delta }_{z - 1}!}{{\Delta }_{y + z - 1}!{\Delta }_{z + x - 1}!{\Delta }_{x + y - 1}!}. \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_160_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_160_0.jpg) Figure 14.4 Proof.
Consider the equations depicted in Figure 14.4; as usual a symbol beside a line is a count of the number of parallel arcs that it represents. The first equality follows from the defining relation of Figure 13.6 for \( {f}^{\left( y + z - 1\right) } \) (together with \( {f}^{\left( z\right) }{e}_{z - 1} = 0 \) ), and the second line follows by iterating the first line. Next, the defining relation for \( {f}^{\left( y + z\right) } \) followed by a double application of Figure 14.4 produces the identity of Figure 14.5. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_160_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_160_1.jpg) Figure 14.5 Now apply this last identity to Figure 14.3, using the formulae of Figures 13.4 and 13.5. The following recurrence relation results: \[ \Gamma \left( {x, y, z}\right) = \] \[ \Gamma \left( {x, y, z - 1}\right) {\Delta }_{x + z}/{\Delta }_{x + z - 1} - \Gamma \left( {x + 1, y - 1, z - 1}\right) {\left( {\Delta }_{y - 1}\right) }^{2}/\left( {{\Delta }_{y + z - 1}{\Delta }_{y + z - 2}}\right) . \] This is ready for a verification of the given formula by induction on \( z \) . That formula is clearly true when \( z = 0 \), and inserting it into this recurrence relation reduces the proof to a demonstration of the equality \[ {\Delta }_{x + y + z}{\Delta }_{z - 1} = {\Delta }_{x + z}{\Delta }_{y + z - 1} - {\Delta }_{y - 1}{\Delta }_{x} \] The truth of this can however easily be checked either directly from the formula for \( {\Delta }_{n} \) or using a double induction on \[ {\Delta }_{x + y} = {\Delta }_{x}{\Delta }_{y} - {\Delta }_{x - 1}{\Delta }_{y - 1} \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_161_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_161_0.jpg) Figure 14.6 Consider the skein space of the disc \( D \) with \( a + b + c \) specified points in its boundary. The points are partitioned into three sets of \( a, b \) and \( c \) (consecutive) points. 
The effect of adding the idempotents \( {f}^{\left( a\right) },{f}^{\left( b\right) } \) and \( {f}^{\left( c\right) } \) just outside every diagram in such a disc with specified points (and so slightly enlarging the disc) is to map the skein space of the disc into a subspace of itself. That subspace will be denoted \( {T}_{a, b, c} \). Thus \( {T}_{a, b, c} \) is spanned by all diagrams inserted into the inner disc of Figure 14.6. The dimension of \( {T}_{a, b, c} \) is either one or zero, for the only chance of obtaining a non-zero skein element on inserting a diagram without crossings into Figure 14.6 is when the element obtained is a multiple of that on the left of Figure 14.7. This element, if it exists, will be denoted \( {\tau }_{a, b, c} \). (The insertion of any other zero-crossing diagram into Figure 14.6 always gives zero on interacting with the idempotents.) For \( {\tau }_{a, b, c} \) to exist, it is necessary that there should be non-negative integers \( x, y \) and \( z \) defined by \( a = y + z, b = z + x \) and \( c = x + y \). This occurs precisely when \( a, b \) and \( c \) are admissible in the following sense: ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_161_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_161_1.jpg) Figure 14.7 Definition 14.6. The triple \( \left( {a, b, c}\right) \) of non-negative integers will be called admissible if \( a + b + c \) is even, \( a \leq b + c, b \leq c + a \) and \( c \leq a + b \). When \( \left( {a, b, c}\right) \) is admissible, it is easy to see that \( {\tau }_{a, b, c} \) is not the zero element of \( {T}_{a, b, c} \) by considering the 1-terms of the expansions of the three idempotents as sums of base elements of the various Temperley-Lieb algebras. When \( \left( {a, b, c}\right) \) is admissible, define \[ \theta \left( {a, b, c}\right) = \Gamma \left( {x, y, z}\right) , \] where the non-negative integers \( x, y \) and \( z \) are defined in the above way.
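The arithmetic of admissibility, the correspondence \( a = y + z, b = z + x, c = x + y \), and the closed formula of Lemma 14.5 are easy to experiment with numerically. The following sketch (not from the book; the function names and the generic choice of \( A \) are illustrative assumptions) computes \( {\Delta }_{n} \), the factorials \( {\Delta }_{n}! \), and \( \Gamma \left( {x, y, z}\right) \); it can be used to check, for instance, that \( \Gamma \left( {0,0, a}\right) = {\Delta }_{a} \), which is the identity \( \theta \left( {a, a,0}\right) = {\Delta }_{a} \) used later in the chapter.

```python
# Numerical sketch of Delta_n, admissibility and Gamma(x, y, z) (Lemma 14.5).
# A is taken to be a generic real number; all names here are illustrative.
A = 1.3

def delta(n):
    # Delta_n = (-1)^n (A^{2(n+1)} - A^{-2(n+1)}) / (A^2 - A^{-2}); Delta_0 = 1.
    return (-1) ** n * (A ** (2 * (n + 1)) - A ** (-2 * (n + 1))) / (A ** 2 - A ** -2)

def delta_fact(n):
    # Delta_n! = Delta_n Delta_{n-1} ... Delta_1, interpreted as 1 if n is -1 or 0.
    out = 1.0
    for k in range(1, n + 1):
        out *= delta(k)
    return out

def is_admissible(a, b, c):
    # Definition 14.6: even total and the three triangle inequalities.
    return (a + b + c) % 2 == 0 and a <= b + c and b <= c + a and c <= a + b

def internal_arcs(a, b, c):
    # Solve a = y + z, b = z + x, c = x + y for the internal arc counts (x, y, z).
    return (b + c - a) // 2, (c + a - b) // 2, (a + b - c) // 2

def gamma(x, y, z):
    # The closed formula of Lemma 14.5.
    return (delta_fact(x + y + z) * delta_fact(x - 1) * delta_fact(y - 1)
            * delta_fact(z - 1)) / (delta_fact(y + z - 1) * delta_fact(z + x - 1)
                                    * delta_fact(x + y - 1))
```

Evaluating both sides of the recurrence relation in the proof of Lemma 14.5 at, say, \( \left( {x, y, z}\right) = \left( {1,2,2}\right) \) confirms the formula numerically.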
In the diagrams that follow, a triad consisting of a black dot with three arcs emerging from it labelled \( a, b \) and \( c \) will be an abbreviation for the triad diagram \( {\tau }_{a, b, c} \) (see Figure 14.7); it is always then to be assumed that \( \left( {a, b, c}\right) \) is admissible. Note that \( \theta \left( {a, b, c}\right) \) is the evaluation of the diagram consisting of two black dots joined together by three simple disjoint arcs labelled \( a, b \) and \( c \); see Figure 14.3. It should be observed that each arc emerging from a black dot is automatically decorated with the relevant idempotent. A useful identity that uses this notation is shown in Figure 14.8. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_162_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_162_0.jpg) The Kronecker delta function occurs because if, say, \( a > d \), then the copy of \( {f}^{\left( a\right) } \) in the left-hand triad must, in the expansion of the remainder as a sum of diagrams with no crossing, always abut some \( {e}_{i} \) (in each such diagram, some curve leaving the left must return to the left). When \( a = d \), the left diagram must be some scalar multiple of \( {f}^{\left( a\right) } \), and the multiplier is readily found by joining, in the plane, points on the left of the diagram to points on the right. Suppose that \( D \) is in \( {S}^{2} \) and \( {D}^{\prime } \) is the disc complementary to \( D \) with the same specified \( a + b + c \) boundary points. Taking unions of diagrams in the two discs induces a bilinear form \( \mathcal{S}D \times \mathcal{S}{D}^{\prime } \rightarrow \mathcal{S}\left( {S}^{2}\right) = \mathbb{C} \), and using this, \( {\tau }_{a, b, c} \) corresponds to the element \( {\tau }_{a, b, c}^{ * } \) of the dual space to \( \mathcal{S}{D}^{\prime } \).
In this way \( {T}_{a, b, c} \) can be regarded as a space \( {T}_{a, b, c}^{ * } \) of linear maps of the skein outside \( D \); an element of \( {T}_{a, b, c} \) is thus a "map of outsides". Strictly, \( {T}_{a, b, c}^{ * } \) is the quotient of \( {T}_{a, b, c} \) by the kernel of the bilinear form. This is almost unnecessary sophistry for generic \( A \), but it is significant when \( A \) is a root of unity. Lemma 14.7. Let \( \left( {a, b, c}\right) \) be admissible and let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity. Then \( {\tau }_{a, b, c}^{ * } \) is non-zero if and only if \( a + b + c \leq 2\left( {r - 2}\right) \). Proof. \( \mathcal{S}{D}^{\prime } \) has a base consisting of all diagrams in \( {D}^{\prime } \) with no crossing. For all but one of these diagrams there is an arc from a point of one of the three specified subsets (for example, that with \( a \) points) to another point of the same subset. As usual (using \( {f}^{\left( a\right) }{e}_{i} = 0 \) ), \( {\tau }_{a, b, c}^{ * } \) annihilates such an element. It remains to consider the base element of \( \mathcal{S}{D}^{\prime } \) that consists of \( z \) arcs from the first boundary subset to the second such subset, \( x \) from the second to the third and \( y \) from the third to the first. Of course, \( {\tau }_{a, b, c}^{ * } \) maps this element to \( \Gamma \left( {x, y, z}\right) \). It follows from Lemma 14.5 that, as \( x + y + z \) increases, this is non-zero until \( {\Delta }_{x + y + z}! = 0 \), and that this first occurs when \( x + y + z = r - 1 \). Definition 14.8. A triple \( \left( {a, b, c}\right) \) of non-negative integers will be called \( r \) -admissible if it is admissible and \( a + b + c \leq 2\left( {r - 2}\right) \).
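The threshold in Lemma 14.7 can also be seen numerically: at a primitive \( 4{r}^{\text{th }} \) root of unity, \( {\Delta }_{n} \) reduces to \( {\left( -1\right) }^{n}\sin \left( {\left( {n + 1}\right) \pi /r}\right) /\sin \left( {\pi /r}\right) \), which first vanishes at \( n = r - 1 \). A small sketch (illustrative, not from the book):

```python
import cmath
import math

r = 7
A = cmath.exp(1j * math.pi / (2 * r))  # a primitive 4r-th root of unity

def delta(n):
    # Delta_n = (-1)^n (A^{2(n+1)} - A^{-2(n+1)}) / (A^2 - A^{-2}).
    return (-1) ** n * (A ** (2 * (n + 1)) - A ** (-2 * (n + 1))) / (A ** 2 - A ** -2)

# Delta_n is non-zero for 0 <= n <= r - 2 but Delta_{r-1} = 0, so the factorial
# Delta_{x+y+z}! first vanishes when x + y + z = r - 1.  Since a + b + c = 2(x + y + z),
# this is exactly the r-admissibility bound a + b + c <= 2(r - 2).
```

Here \( {A}^{2\left( {n + 1}\right) } - {A}^{-2\left( {n + 1}\right) } = 2i\sin \left( {\left( {n + 1}\right) \pi /r}\right) \), so the claimed trigonometric form follows at once.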
The substance of the last result is that for \( A \) a primitive \( 4{r}^{\text{th }} \) root of unity, the space of maps \( {T}_{a, b, c}^{ * } \) is zero unless \( \left( {a, b, c}\right) \) is \( r \) -admissible, and in that case it has dimension 1. These ideas are now to be generalised to the disc with an even number, \( \left( {a + b + c + d}\right) \), of points specified in its boundary, partitioned consecutively into \( a, b, c \) and \( d \) points. Let \( {Q}_{a, b, c, d} \) denote the subspace of the skein space of such a disc that comes from placing the idempotents \( {f}^{\left(
a\right) },{f}^{\left( b\right) },{f}^{\left( c\right) } \) and \( {f}^{\left( d\right) } \) just outside every diagram that generates this space (see Figure 14.9). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_163_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_163_0.jpg) Figure 14.9 Lemma 14.9. Suppose \( A \) is not a root of unity. A base for \( {Q}_{a, b, c, d} \) is the set of elements as in Figure 14.10 (the boundary of the disc is not shown), where \( j \) takes all values such that \( \left( {a, b, j}\right) \) and \( \left( {c, d, j}\right) \) are both admissible. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_163_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_163_1.jpg) Figure 14.10 Proof. Note that the proposed base elements each consist of two triads glued together; there is an \( {f}^{\left( j\right) } \) on the central line.
Certainly \( {Q}_{a, b, c, d} \) is spanned by all elements of the form shown in Figure 14.11, where the lines all represent multiple parallel arcs, for, as usual, any other diagrams interact with the idempotents to give zero. Without loss of generality, it is assumed that \( b + d \geq a + c \), and it is clear that the diagonal line represents \( \frac{1}{2}\{ b + d - a - c\} \) parallel arcs. The number of arcs represented by the other lines can vary. Suppose there are \( j \) arcs crossing the vertical dotted line. In the Temperley-Lieb algebra \( T{L}_{j} \), recall that \( \mathbf{1} - {f}^{\left( j\right) } \) is in the ideal generated by the \( {e}_{i} \) . Thus a diagram with \( j \) arcs crossing the dotted line can be replaced with a linear sum of diagrams, one with \( j \) arcs containing an \( {f}^{\left( j\right) } \) and others that cross the vertical line fewer than \( j \) times (coming from the \( {e}_{i} \) ). Thus, by induction on the number of arcs crossing the vertical line, it is seen that the given elements span the space. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_164_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_164_0.jpg) Figure 14.11 Gluing a triad \( {\tau }_{c, d, k} \) onto the right of the disc under consideration produces a linear map from \( {Q}_{a, b, c, d} \) to \( {T}_{a, b, k} \) . Of course this operation maps the element of Figure 14.10 to \( {\delta }_{j, k}\left( {\theta \left( {c, d, k}\right) /{\Delta }_{k}}\right) {\tau }_{a, b, k} \), using the formula of Figure 14.8. When \( j = k \), this is non-zero, and so the proposed base elements are indeed independent. Lemma 14.10. Suppose \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity. 
A base for \( {Q}_{a, b, c, d}^{ * } \) (this being \( {Q}_{a, b, c, d} \) regarded as maps of diagrams outside the disc) is the set of elements as in Figure 14.10 where \( j \) takes all values such that \( \left( {a, b, j}\right) \) and \( \left( {c, d, j}\right) \) are both \( r \) -admissible. Proof. The proof that the given elements span is the same as in Lemma 14.9 with a small modification. Now, \( {f}^{\left( n\right) } \) does not exist for \( n \geq r \) . However, \( {f}^{\left( r - 1\right) } \) is the zero map of outsides. Thus working in this dual context, any diagram as in the above proof, with at least \( \left( {r - 1}\right) \) arcs crossing the dotted vertical line, can be replaced by a sum of diagrams with fewer such arcs. Further, any triad encountered that is not \( r \) -admissible may be discarded, since it represents the zero map. The proof of independence is essentially the same as before (though the map used now goes from and to the dual spaces). The bases for \( {Q}_{a, b, c, d} \) and \( {Q}_{a, b, c, d}^{ * } \) given in the last two lemmas have a "horizontal" bias. There is, by symmetry, a base in each case with a "vertical" bias. The change-of-base equation is depicted in Figure 14.12, where the summation is over all \( i \) for which the triples \( \left( {b, c, i}\right) \) and \( \left( {a, d, i}\right) \) are admissible (or, respectively, \( r \) - admissible). The terms \( \left\{ \begin{array}{lll} a & b & i \\ c & d & j \end{array}\right\} \) of this change-of-base matrix are sometimes called \( {6j} \) -symbols. The \( {6j} \) -symbols can be evaluated in terms of a diagrammatic presentation by adjoining a triad \( {\tau }_{a, d, k} \) beneath the diagrams of both sides of the equation of Figure 14.12. This produces zero for every term on the right hand side except the \( {k}^{\text{th }} \) term. 
That term becomes (with no summation convention) \( \left\{ \begin{array}{lll} a & b & k \\ c & d & j \end{array}\right\} \theta \left( {a, d, k}\right) {\Delta }_{k}^{-1}{\tau }_{b, c, k} \). Now place a copy of \( {\tau }_{b, c, k} \) on the outside of both sides of the equation. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_165_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_165_0.jpg) Figure 14.12 It transpires (keeping track of what has happened to the left hand side) that the labelled diagram in the shape of the edges of a tetrahedron, as in Figure 14.13, is equal to \( \left\{ \begin{array}{lll} a & b & k \\ c & d & j \end{array}\right\} \theta \left( {a, d, k}\right) \theta \left( {b, c, k}\right) {\Delta }_{k}^{-1} \). A lengthy closed formula is known for this labelled tetrahedral diagram and hence for the \( {6j} \) -symbol, but it is not very attractive (except to a computer). It is quoted in [62], a proof is in [12], and the form of the answer is known from quantum field theory [69]. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_165_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_165_1.jpg) Figure 14.13 One more general formula of use is the identity shown in Figure 14.14. The 1-dimensional nature of \( {T}_{a, b, c} \) asserts that the element on the left of Figure 14.14 is some multiple of \( {\tau }_{a, b, c} \). That the multiplier is \( {\left( -1\right) }^{\left( {a + b - c}\right) /2}{A}^{a + b - c + \left( {\left( {{a}^{2} + {b}^{2} - {c}^{2}}\right) /2}\right) } \) is left as a fairly easy exercise. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_165_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_165_2.jpg) Figure 14.14 Suppose now it is desired to evaluate in \( \mathcal{S}\left( {S}^{2}\right) = \mathbb{C} \) the complex number represented by a link diagram in which each component is replaced by some \( {S}_{n}\left( \alpha \right) \) as encountered in the definition of the 3-manifold invariants.
This can be thought of as a diagram with segments representing multiple parallel arcs with some \( {f}^{\left( n\right) } \) included (and note that because \( {f}^{\left( n\right) }{f}^{\left( n\right) } = {f}^{\left( n\right) } \), as many copies of \( {f}^{\left( n\right) } \) as might be desired may be inserted around any component of the link). Near each crossing, two such parallel multiple arcs with labels \( a \) and \( b \) can be replaced by a linear sum of the above base elements of \( {Q}_{a, b, b, a}^{ * } \), and then the crossing can be removed in each summand using the equality of Figure 14.14. What emerges is a linear sum of weighted trivalent graphs in \( {S}^{2} \) with a black dot at each vertex. Again at the expense of taking linear sums, the graphs can be changed using the \( {6j} \) -symbol equation to reduce the number of edges of the graph around a region of the graph’s complement in \( {S}^{2} \). When a region is bounded by just two edges, it can be removed using Figure 14.8. This eventually simplifies completely all the graphs to be considered. Although in principle such a method of calculation will always work, in general it desperately needs computer assistance. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_166_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_166_0.jpg) Figure 14.15 Sometimes the \( S{U}_{q}\left( 2\right) \) 3-manifold invariant does have a compact formulation with an elegant method of calculation. An example, which will now be described, is the manifold consisting of the product of a closed orientable surface and a circle. In Figure 14.15 is a picture of \( a \) strands with an \( {f}^{\left( a\right) } \) inserted beside \( b \) strands with an \( {f}^{\left( b\right) } \). This can be regarded as an element of \( {Q}_{b, a, a, b} \) or of \( {Q}_{b, a, a, b}^{ * } \). In either case, it must be expressible as a linear sum of the basis elements.
The summation is over all \( c \) for which \( \left( {a, b, c}\right) \) is admissible (or \( r \) -admissible), and the coefficients of the sum are determined by adjoining the triad \( {\tau }_{a, b, c} \) and using Figure 14.8. Lemma 14.11. In \( \mathcal{S}\left( {{S}^{1} \times I}\right) \), \( {S}_{a}\left( \alpha \right) {S}_{b}\left( \alpha \right) = \mathop{\sum }\limits_{c}{S}_{c}\left( \alpha \right) \), where the summation is over all \( c \) such that \( \left( {a, b, c}\right) \) is admissible. If \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity, regarding both sides of the equation as maps of outsides (of immersed annuli as in Chapter
13), \( {S}_{a}\left( \alpha \right) {S}_{b}\left( \alpha \right) = \mathop{\sum }\limits_{c}{S}_{c}\left( \alpha \right) \), where now the sum is over all \( c \) such that \( \left( {a, b, c}\right) \) is \( r \) -admissible. Proof. In fact the first part of this lemma is almost immediate. This is because it is a result on Chebyshev polynomials that \( {S}_{a}\left( x\right) {S}_{b}\left( x\right) = \mathop{\sum }\limits_{c}{S}_{c}\left( x\right) \), the sum being over all \( c \) such that \( \left( {a, b, c}\right) \) is admissible. This follows by induction on \( b \). However, another proof is shown in Figure 14.16, where the result of Figure 14.15 is first applied at the top of the diagram and then the result of Figure 14.8 is applied at the bottom. The advantage of this alternative proof is that it works in the \( r \) -admissible case as well. Figure 14.17 shows an element of \( \mathcal{S}\left( {{S}^{1} \times I}\right) \) that will temporarily be denoted \( \beta \).
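The Chebyshev identity underlying the first part of Lemma 14.11 can be checked directly (an illustrative sketch, not from the book, assuming the usual recurrence \( {S}_{n}\left( x\right) = x{S}_{n - 1}\left( x\right) - {S}_{n - 2}\left( x\right) \) with \( {S}_{0} = 1 \) and \( {S}_{1}\left( x\right) = x \)):

```python
import math

def S(n, x):
    # Chebyshev-like polynomials: S_0(x) = 1, S_1(x) = x, S_n(x) = x S_{n-1}(x) - S_{n-2}(x).
    prev, cur = 1.0, x
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, x * cur - prev
    return cur

def admissible_c(a, b):
    # The c with (a, b, c) admissible: |a - b| <= c <= a + b with c = a + b (mod 2).
    return range(abs(a - b), a + b + 1, 2)

x = 2 * math.cos(0.37)  # a generic evaluation point
lhs = S(2, x) * S(3, x)
rhs = sum(S(c, x) for c in admissible_c(2, 3))  # c runs over 1, 3, 5
```

At \( x = 2\cos \theta \) one has \( {S}_{n}\left( x\right) = \sin \left( {\left( {n + 1}\right) \theta }\right) /\sin \theta \), and the product-to-sum formula for sines gives the identity in general.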
Regarding \( \beta \) as a map of outsides, expanding one of the \( \omega \) ’s as \( \mathop{\sum }\limits_{{a = 0}}^{{r - 2}}{\Delta }_{a}{S}_{a}\left( \alpha \right) \) and using Figure 14.15, a summation expression for \( \beta \) is obtained. This is also depicted in Figure 14.17, where the \( a \) ’s at the top are to be understood to be joined to those at the bottom by arcs that encircle the annulus. But, by Lemma 13.9, if \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity, then the only non-zero contribution to that expression is when \( c = 0 \). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_167_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_167_0.jpg) Figure 14.16 Thus, as maps of outsides (recall that \( \theta \left( {a, a,0}\right) = {\Delta }_{a} \) ), \[ \beta = \mathop{\sum }\limits_{{a = 0}}^{{r - 2}} < \omega { > }_{U}{\left( {S}_{a}\left( \alpha \right) \right) }^{2}. \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_167_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_167_1.jpg) Figure 14.17 Theorem 14.12. Let \( {F}_{g} \) be the closed orientable surface of genus \( g \) and let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity, \( r \geq 3 \). Then \( {\mathcal{I}}_{A}\left( {{S}^{1} \times {F}_{g}}\right) \) is an integer. It is \( r - 1 \) when \( g = 1 \). Otherwise it is the number of ways of labelling the \( 3\left( {g - 1}\right) \) edges of the graph of Figure 14.18 with integers \( {a}_{i},0 \leq {a}_{i} \leq r - 2 \), so that the three labels at any node form an \( r \) -admissible triple. Proof. The 3-manifold \( {S}^{1} \times {F}_{g} \) is obtained by surgery on a link that consists of \( g \) copies of the Borromean rings summed together on one component, each component having the zero framing. (Proving this is an interesting exercise.)
A diagram \( D \) for such a link is obtained by taking \( g \) annuli, each containing a link as on the left of Figure 14.17, threading an unknotted closed curve through these annuli and then taking the resultant diagram of \( {2g} + 1 \) components. Then \( < \omega ,\omega ,\ldots ,\omega { > }_{D} = < \omega ,{\beta }^{g}{ > }_{H} \), where \( H \) is just the two-crossing diagram of the simple Hopf link of two curves. Thus, as the signature of the linking matrix of this link is zero (and using \( < {\mu \omega }{ > }_{U} = {\mu }^{-1} \) ), \[ {\mathcal{I}}_{A}\left( {{S}^{1} \times {F}_{g}}\right) = {\mu }^{{2g} + 2} < \omega ,{\beta }^{g}{ > }_{H}. \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_168_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_168_0.jpg) Figure 14.18 Now \( \beta = \mathop{\sum }\limits_{{a = 0}}^{{r - 2}}{\mu }^{-2}{\left( {S}_{a}\left( \alpha \right) \right) }^{2} \), so \( {\beta }^{g} \) can be expressed as a sum of the \( {S}_{n}\left( \alpha \right) \) ’s by Lemma 14.11. Then, by Lemma 13.9, \( \langle \omega ,{\beta }^{g}{\rangle }_{H} \) is \( {\mu }^{-2 - {2g}}N \), where \( N \) is the number of times \( {S}_{0}\left( \alpha \right) \) appears in the expansion (by Lemma 14.11) for \( {\left( \mathop{\sum }\limits_{{a = 0}}^{{r - 2}}{\left( {S}_{a}\left( \alpha \right) \right) }^{2}\right) }^{g} \) as a sum of the \( {S}_{n}\left( \alpha \right) \) ’s. This \( N \) is the number of \( r \) -admissible labellings of the edges of Figure 14.18. The last result can, in general terms, be anticipated. It was shown in [13] that these \( S{U}_{q}\left( 2\right) \) invariants can be regarded as emanating from a topological quantum field theory. These theories will not be described here in any detail (but see [4]). Roughly, such a theory is a functor from the category of oriented surfaces and cobordisms to that of vector spaces and linear maps that sends disjoint unions to tensor products.
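For genus 2, the count in Theorem 14.12 can be carried out by brute force: the graph then has \( 3\left( {g - 1}\right) = 3 \) edges joining two trivalent vertices, so a labelling is a single \( r \) -admissible triple. The sketch below (illustrative, not from the book) performs that count and also cross-checks it, at \( g = 2 \) and \( A = {e}^{i\pi /2r} \), against the closed sum from [86] for \( {\mathcal{I}}_{A}\left( {{S}^{1} \times {F}_{g}}\right) \); both give \( \left( {{r}^{3} - r}\right) /6 \).

```python
import cmath
import itertools
import math

def r_admissible(a, b, c, r):
    # Definition 14.8: admissible (even sum, triangle inequalities) and a+b+c <= 2(r-2).
    return ((a + b + c) % 2 == 0 and a <= b + c and b <= c + a and c <= a + b
            and a + b + c <= 2 * (r - 2))

def genus2_count(r):
    # For g = 2 a labelling of the graph is one r-admissible triple (a, b, c),
    # with each label between 0 and r - 2.
    return sum(1 for a, b, c in itertools.product(range(r - 1), repeat=3)
               if r_admissible(a, b, c, r))

def analytic_genus2(r):
    # (-2r)^{g-1} * sum_a (A^{2(a+1)} - A^{-2(a+1)})^{2-2g} at g = 2, A = e^{i pi / 2r}.
    A = cmath.exp(1j * math.pi / (2 * r))
    return (-2 * r) * sum((A ** (2 * (a + 1)) - A ** (-2 * (a + 1))) ** (-2)
                          for a in range(r - 1))
```

For \( r = 3 \) the count is 4, in agreement with \( \left( {{3}^{3} - 3}\right) /6 \); the agreement of the two computations illustrates the surprising integrality noted below.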
It follows from such an abstract formulation [4] that the invariant of the mapping torus of an automorphism of a surface \( F \) is the trace of some linear map. The invariant for \( {S}^{1} \times F \) is the trace of the identity map, and that is certainly an integer. When the surface has genus equal to 1 or 2 , it is easy to make a count of this integer from the theorem. The results obtained are \[ {\mathcal{I}}_{A}\left( {{S}^{1} \times {S}^{1} \times {S}^{1}}\right) = \left( {r - 1}\right) ,\;{\mathcal{I}}_{A}\left( {{S}^{1} \times {F}_{2}}\right) = \frac{{r}^{3} - r}{6}. \] Working from the same surgery diagram, with \( A \) still a primitive \( 4{r}^{\text{th }} \) root of unity, \( r \geq 3 \), it can also be shown [86] that \[ {\mathcal{I}}_{A}\left( {{S}^{1} \times {F}_{g}}\right) = {\left( -2r\right) }^{g - 1}\mathop{\sum }\limits_{{a = 0}}^{{r - 2}}{\left( {A}^{2\left( {a + 1}\right) } - {A}^{-2\left( {a + 1}\right) }\right) }^{2 - {2g}}. \] It is surprising that this last expression must be an integer as, indeed, it has just been proved to be. The mapping torus of the automorphism of \( {S}^{1} \times {S}^{1} \) that reverses the sign of every element of \( {H}_{1}\left( {{S}^{1} \times {S}^{1}}\right) \) has a surgery diagram as shown in Figure 14.19. It is clearly very similar to the diagram of the Borromean rings with zero framings, considered above, for \( {S}^{1} \times {S}^{1} \times {S}^{1} \) . It is an easy exercise using the above methods to show ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_169_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_169_0.jpg) Figure 14.19 that this manifold also has \( {\mathcal{I}}_{A} = \left( {r - 1}\right) \) . That can, in fact, also be deduced from considerations of the topological quantum field theory. Of course this manifold is not equal to \( {S}^{1} \times {S}^{1} \times {S}^{1} \) ; its first homology is \( \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \) . 
Thus two manifolds can have all \( S{U}_{q}\left( 2\right) \) invariants the same and yet be distinguished by their first homology groups. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_169_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_169_1.jpg) Figure 14.20 Less obvious, whether by calculation or philosophy, is the fact that the 3- manifolds (described by Kauffman [62]), obtained by surgery on the framed links shown in the diagrams at the top of Figure 14.21 and the top of Figure 14.22, also have the same integer invariants. To see this, first note the equalities shown in Figure 14.20, applicable when \( A \) is a primitive \( 4{r}^{\text{th }} \) root of unity. The first of these identities has, essentially, already been used. It follows at once from Figure 14.15 and Lemma 13.9. The second one follows by using Figure 14.15 twice and then Lemma 13.9, but note that the right-hand side is to be interpreted as zero unless \( \left( {a, b, c}\right) \) is an \( r \) -admissible triple. The diagram \( D \) at the top of Figure 14.21 has three zero-framed components. With appropriate orientations it has linking matrix \[ \left( \begin{array}{lll} 0 & 3 & 0 \\ 3 & 0 & 3 \\ 0 & 3 & 0 \end{array}\right) \] and this matrix has signature zero. The matrix represents the first homology of the 3-manifold \( M \) obtained by surgery on this diagram, and so this homology group is \( \mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/3\mathbb{Z} \) . The remainder of Figure 14.21 considers the above identities (together with that of Figure 14.14) applied to the diagram, with \( \omega \) decorating the left and right components and \( {\Delta }_{a}{S}_{a}\left( \alpha \right) \) decorating the other component. The result is \( {\left( < \omega { > }_{U}\right) }^{2} = {\mu }^{-4} \) when \( \left( {a, a, a}\right) \) is an \( r \) -admissible triple and zero otherwise. 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_170_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_170_0.jpg) Figure 14.21 Summing this over all \( a,0 \leq a \leq r - 2 \), shows that \( {\mu }^{4} < \omega ,\omega ,\omega { > }_{D} \) is the number of \( a \) such that \( \left( {a, a, a}\right) \) is \( r \) -admissible. Thus \( {\mathcal{I}}_{A}\left( M\right) \) is equal to the number of such \( a \). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_170_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_170_1.jpg) Figure 14.22 The manifold \( M \) obtained by surgery on the zero-framed reef knot shown at the top of Figure 14.22 has \( \mathbb{Z} \) as its first homology group. The second line of Figure 14.22 shows the effect of changing this to a different diagram by some adroit Kirby moves. The first moves introduce a \( -1 \) -framed unknot and a \( +1 \) -framed unknot and slide parts of the knot over them, then the first of those unknots is slid over the second; the last move is an isotopy. Figure 14.22 then analyses the resulting diagram, with \( \omega \) decorating two components and \( {\Delta }_{a}{S}_{a}\left( \alpha \right) \) decorating the third component.
Again, summing this over all \( a \) shows that the result of evaluating \( \omega \) decorating the original reef knot diagram is the product of \( {\left( < \omega { > }_{U}\right) }^{2}{\left\{ { < \omega { > }_{{U}_{ + }} < \omega { > }_{{U}_{ - }}}\right\} }^{-1} \) with the number of \( a \) such that \( \left( {a, a, a}\right) \) is \( r \) -admissible. Hence again that number is the invariant \( {\mathcal{I}}_{A}\left( M\right) \). One more example of these "recombination techniques" will conclude this chapter. It is not actually concerned with calculating a 3-manifold invariant, but with calculating the Jones polynomial of a torus knot. The method used here is modelled on a paper by P. M. Strickland [117]. In what follows, \( A \) is a generic complex number. Theorem 14.13. If \( p \) and \( q \) are coprime positive integers, then the Jones polynomial of the \( \left( {p, q}\right) \) -torus knot is \[ {t}^{\left( {p - 1}\right) \left( {q - 1}\right) /2}{\left( 1 - {t}^{2}\right) }^{-1}\left( {1 - {t}^{p + 1} - {t}^{q + 1} + {t}^{p + q}}\right) . \] Proof. Consider the diagram of Figure 14.23, which shows \( p \) arcs traversing a rectangle. Suppose \( q \) copies of this are placed side by side and the result is closed up by joining the \( p \) points on the left to those on the right, using \( p \) crossing-free arcs encircling an annulus \( {S}^{1} \times I \), to form a diagram \( T\left( {p, q}\right) \) in that annulus. It is desired to evaluate this diagram in the skein of the annulus in terms of the base elements \( \left\{ {{S}_{n}\left( \alpha \right) }\right\} \). Then, placing the annulus in the plane will at once give a value for the Jones polynomial of the \( \left( {p, q}\right) \) -torus knot.
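Before working through the proof, the closed formula of Theorem 14.13 is easy to sanity-check (an illustrative sketch; exact rational arithmetic avoids rounding). Dividing out \( 1 - {t}^{2} \) gives, for the trefoil \( T\left( {2,3}\right) \), the polynomial \( t + {t}^{3} - {t}^{4} \), and for \( T\left( {3,4}\right) \) the polynomial \( {t}^{3} + {t}^{5} - {t}^{8} \), standard forms of those Jones polynomials up to the mirror convention \( t \leftrightarrow {t}^{-1} \).

```python
from fractions import Fraction

def jones_torus(p, q, t):
    # Theorem 14.13: t^{(p-1)(q-1)/2} (1 - t^{p+1} - t^{q+1} + t^{p+q}) / (1 - t^2).
    # For coprime p and q the exponent (p-1)(q-1)/2 is an integer.
    e = ((p - 1) * (q - 1)) // 2
    return t ** e * (1 - t ** (p + 1) - t ** (q + 1) + t ** (p + q)) / (1 - t ** 2)

t = Fraction(2, 3)  # an arbitrary rational evaluation point
```

Evaluating at rational \( t \) and comparing with the expanded polynomials confirms the divisions above.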
For some fixed \( k \) (which will later be taken to be 1), consider \( p \) arcs side by side in a diagram, each labelled with an \( {f}^{\left( k\right) } \), so that as usual each arc represents \( k \) parallel arcs with the idempotent inserted. Applying the identity of Figure 14.15 \( \left( {p - 1}\right) \) times shows that this is of the form of Figure 14.24, where the coefficient \( \Lambda \left( {{i}_{1},{i}_{2},\ldots ,{i}_{p - 2}, a}\right) \) is the quotient of a product of \( \Delta \) ’s by a product of \( \theta \) ’s and the summation is over all \( \left( {{i}_{1},{i}_{2},\ldots ,{i}_{p - 2}, a}\right) \) that produce an admissible triple at each vertex of the diagram. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_171_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_171_0.jpg) Figure 14.23 ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_172_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_172_0.jpg) Figure 14.24 The diagram of Figure 14.25 is, of course, a multiple of \( {f}^{\left( a\right) } \); let it be denoted, without any summation convention, by \[ {\left\{ \Lambda \left( {i}_{1},{i}_{2},\ldots ,{i}_{p - 2}, a\right) \Lambda \left( {j}_{1},{j}_{2},\ldots ,{j}_{p - 2}, a\right) \right\} }^{-1/2}M{\left( a\right) }_{\mathbf{j}}^{\mathbf{i}}{f}^{\left( a\right) }. \] The \( \left\{ {M{\left( a\right) }_{\mathbf{j}}^{\mathbf{i}}}\right\} \) will be regarded as a matrix \( M\left( a\right) \) with rows and columns indexed by \( \mathbf{i} \) and \( \mathbf{j} \), each representing a multi-suffix \( \left( {{i}_{1},{i}_{2},\ldots ,{i}_{p - 2}}\right) \) or \( \left( {{j}_{1},{j}_{2},\ldots ,{j}_{p - 2}}\right) \). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_172_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_172_1.jpg) Figure 14.25 Suppose \( T{\left( p, q\right) }^{\left( k\right) } \) is the diagram \( T\left( {p, q}\right) \) decorated by \( {S}_{k}\left( \alpha \right) \).
Suppose that in \( T{\left( p, q\right) }^{\left( k\right) } \), between consecutive occurrences of (the \( k \) -weighted) Figure 14.23, the \( p \) parallel strings are "combined" as in Figure 14.24. Note that terms with distinct values of \( a \) compose to give zero. The terms with a given value of \( a \) combine to be the trace of the matrix \( {\left( M\left( a\right) \right) }^{q} \) multiplying \( {S}_{a}\left( \alpha \right) \). This follows just from the above notation. Now, \( {\left( {\left( M\left( a\right) \right) }^{p}\right) }_{\mathbf{j}}^{\mathbf{i}}{f}^{\left( a\right) } \) is \( {\left\{ \Lambda \left( {i}_{1},{i}_{2},\ldots ,{i}_{p - 2}, a\right) \Lambda \left( {j}_{1},{j}_{2},\ldots ,{j}_{p - 2}, a\right) \right\} }^{1/2} \) times the diagram of Figure 14.26, and that diagram in turn is zero unless \( \mathbf{i} = \mathbf{j} \) and is then, by Lemma 14.1 and Figure 14.8, equal to \[ {\left\{ \Lambda \left( {i}_{1},{i}_{2},\ldots ,{i}_{p - 2}, a\right) \Lambda \left( {j}_{1},{j}_{2},\ldots ,{j}_{p - 2}, a\right) \right\} }^{-1/2}{\left( -1\right) }^{a}{A}^{{a}^{2} + {2a}}{f}^{\left( a\right) }. \] Hence \( {\left( M\left( a\right) \right) }^{p} \) is \( {\left( -1\right) }^{a}{A}^{{a}^{2} + {2a}} \) times an identity matrix, of size dependent on the number of admissible labellings of the diagram of Figure 14.24. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_172_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_172_2.jpg) Figure 14.26 This means that the eigenvalues of \( M\left( a\right) \) are all \( {p}^{\text{th }} \) roots of \( {\left( -1\right) }^{a}{A}^{{a}^{2} + {2a}} = {\left( -A\right) }^{{a}^{2} + {2a}} \) and, of course, the trace is the sum of those eigenvalues. Each eigenvalue can be regarded as \( {\xi }_{j}\rho \), where \( \rho \) is some fixed \( {p}^{\text{th }} \) root of \( {\left( -A\right) }^{{a}^{2} + {2a}} \) and \( {\xi }_{j} \) is some \( {p}^{\text{th }} \) root of unity.
The trace of \( M\left( a\right) \) is, as explained above, the coefficient of \( {S}_{a}\left( \alpha \right) \) in the expansion of \( T{\left( p,1\right) }^{\left( k\right) } \) in terms of the \( {S}_{n}\left( \alpha \right) \) . Suppose (see below) that \( \rho \) can be chosen so that \( \mathop{\sum }\limits_{j}{\xi }_{j} \) is an integer; it will be denoted \( {N}_{a} \) . Then, if \( p \) and \( q \) are coprime, \( \mathop{\sum }\limits_{j}{\left( {\xi }_{j}\right) }^{q} \) is also equal to \( {N}_{a} \) (because if a primitive \( {p}^{\text{th }} \) root of unity is a zero of a polynomial with integer coefficients, then any other primitive \( {p}^{\text{th }} \) root of unity is a zero of the same polynomial). From this it emerges that \( T{\left( p,1\right) }^{\left( k\right) } = \mathop{\sum }\limits_{a}{N}_{a}{\left( -A\right) }^{\left( {{a}^{2} + {2a}}\right) /p}{S}_{a}\left( \alpha \right) \) for some integers \( \left\{ {N}_{a}\right\} \) and then, using the same integers, \( T{\left( p, q\right) }^{\left( k\right) } = \mathop{\sum }\limits_{a}{N}_{a}{\left( -A\right) }^{q\left( {{a}^{2} + {2a}}\right) /p}{S}_{a}\left( \alpha \right) \) . Thus it remains to evaluate \( T{\left( p,1\right) }^{\left( k\right) } \), at least when \( k = 1 \) . The element \( T\left( {p,1}\right) \) of \( \mathcal{S}\left( {{S}^{1} \times I}\right) \) contains a copy of Figure 14.23. The process of removing the top crossing of Figure 14.23 by using the defining skein relation and removing a kink leads at once to the recurrence relation \[ T\left( {p,1}\right) = {A\alpha T}\left( {p - 1,1}\right) - {A}^{2}T\left( {p - 2,1}\right) . \] Letting \( {x}_{p} = {A}^{-p}T\left( {p,1}\right) \), this becomes \( {x}_{p} = \alpha {x}_{p - 1} - {x}_{p - 2} \) . This is, of course, the recurrence relation which has as solution the Chebyshev polynomials \( {S}_{n}\left( \alpha \right) \) . Now \( {x}_{1} = - {A}^{2}\alpha \) and \( {x}_{0} = - {A}^{-2} - {A}^{2} \) . 
Thus \( {x}_{p} = - {A}^{2}{S}_{p}\left( \alpha \right) + {A}^{-2}{S}_{p - 2}\left( \alpha \right) \) . Hence, for \( k = 1 \), \( \rho \) can indeed be chosen for each \( a \) so that \( \mathop{\sum }\limits_{j}{\xi }_{j} \) is an integer \( {N}_{a} \) : when \( a = p \), choose \( \rho = {\left( -A\right) }^{p + 2} \) and then \( {N}_{p} = {\left( -1\right) }^{p + 1} \), and when \( a = p - 2 \), choose \( \rho = {\left( -A\right) }^{p - 2} \) and then \( {N}_{p - 2} = {\left( -1\right) }^{p} \), otherwise
\( {N}_{a} = 0 \) . Hence \[ T{\left( p, q\right) }^{\left( 1\right) } = {\left( -1\right) }^{p + 1}{\left( -A\right) }^{q\left( {p + 2}\right) }{S}_{p}\left( \alpha \right) + {\left( -1\right) }^{p}{\left( -A\right) }^{q\left( {p - 2}\right) }{S}_{p - 2}\left( \alpha \right) . \] Placing \( {S}^{1} \times I \) in the standard way in the plane sends \( T{\left( p, q\right) }^{\left( 1\right) } \) to the element \[ {\left( -1\right) }^{p + 1}{\left( -A\right) }^{q\left( {p + 2}\right) }{\Delta }_{p} + {\left( -1\right) }^{p}{\left( -A\right) }^{q\left( {p - 2}\right) }{\Delta }_{p - 2} \] in the skein of the plane. This planar diagram is a diagram with writhe \( {pq} \) of the \( \left( {p, q}\right) \) torus knot. 
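The recurrence \( {x}_{p} = \alpha {x}_{p - 1} - {x}_{p - 2} \) and its closed form in the Chebyshev polynomials can be checked symbolically. The following is a minimal sketch (the helper name `S` is illustrative; the convention \( {S}_{-1} = 0 \) and \( {S}_{-2} = - 1 \), obtained by running the recurrence backwards, is an assumption needed to make the closed form valid at \( p = 0, 1 \)):

```python
import sympy as sp

A, alpha = sp.symbols('A alpha')

def S(n):
    # Chebyshev-like polynomials: S_0 = 1, S_1 = alpha, S_n = alpha*S_{n-1} - S_{n-2},
    # extended backwards so that S_{-1} = 0 and S_{-2} = -1
    s_prev, s = sp.Integer(-1), sp.Integer(0)   # S_{-2}, S_{-1}
    for _ in range(n + 1):
        s_prev, s = s, sp.expand(alpha*s - s_prev)
    return s_prev if n == -2 else s

# x_p generated by x_p = alpha*x_{p-1} - x_{p-2}, with x_0 = -A^{-2} - A^2, x_1 = -A^2*alpha
x = [-A**-2 - A**2, -A**2*alpha]
for p in range(2, 9):
    x.append(sp.expand(alpha*x[p - 1] - x[p - 2]))

# claimed closed form: x_p = -A^2 S_p(alpha) + A^{-2} S_{p-2}(alpha)
for p in range(9):
    assert sp.expand(x[p] - (-A**2*S(p) + A**-2*S(p - 2))) == 0
```

Since both sides satisfy the same two-step recurrence, agreement at \( p = 0 \) and \( p = 1 \) already forces agreement for all \( p \); the loop simply confirms this for small cases.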
To obtain the Jones polynomial, the above expression must be multiplied by \( {\left( -A\right) }^{-{3pq}} \) to account for the writhe, then by \( {\left( -{A}^{-2} - {A}^{2}\right) }^{-1} \) because the Jones polynomial is the coordinate of the skein element with the zero-crossing unknot as base, and then the substitution \( t = {A}^{-4} \) must be made. Thus, with \( t = {A}^{-4} \), the Jones polynomial of the knot is \[ {\left( {A}^{-4} - {A}^{4}\right) }^{-1}{\left( -A\right) }^{-{2pq}}\left\lbrack {-{A}^{2q}\left( {{A}^{2\left( {p + 1}\right) } - {A}^{-2\left( {p + 1}\right) }}\right) + {A}^{-{2q}}\left( {{A}^{2\left( {p - 1}\right) } - {A}^{-2\left( {p - 1}\right) }}\right) }\right\rbrack . \] This is \[ {\left( {A}^{-4} - {A}^{4}\right) }^{-1}{\left( -A\right) }^{-{2pq}}{A}^{2\left( {p + 1}\right) }{A}^{2q}\left\lbrack {-1 + {A}^{-4\left( {p + 1}\right) } + {A}^{-4\left( {q + 1}\right) } - {A}^{-4\left( {p + q}\right) }}\right\rbrack \] \[ = - {\left( 1 - {A}^{-8}\right) }^{-1}{A}^{-2\{ {pq} - p - q + 1\} }\left\lbrack {-1 + {A}^{-4\left( {p + 1}\right) } + {A}^{-4\left( {q + 1}\right) } - {A}^{-4\left( {p + q}\right) }}\right\rbrack , \] and this is the stated result. This chapter has really been about the calculation of \( S{U}_{q}\left( 2\right) \) invariants of framed coloured links in \( {S}^{3} \) . Such a link is a framed link \( L \) together with a non-negative integer \( n\left( i\right) \) assigned to each component \( {L}_{i} \) . The coloured invariant is then the element of \( \mathbb{Z}\left\lbrack {{A}^{-1}, A}\right\rbrack \) that results from decorating every \( {L}_{i} \) with \( {S}_{n\left( i\right) }\left( \alpha \right) \) and evaluating the result in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) . Applications have been found for these link invariants. They are used in [96], [138] and [71] to give information about the tunnel number of a knot. 
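The stated result can be sanity-checked on small torus knots. Since \( {pq} - p - q + 1 = \left( {p - 1}\right) \left( {q - 1}\right) \), with \( t = {A}^{-4} \) the formula reads \( V = {t}^{\left( {p - 1}\right) \left( {q - 1}\right) /2}\left( {1 - {t}^{p + 1} - {t}^{q + 1} + {t}^{p + q}}\right) /\left( {1 - {t}^{2}}\right) \). A quick symbolic check (`jones_torus` is an illustrative name; the mirror convention here yields positive powers of \( t \)):

```python
import sympy as sp

t = sp.symbols('t')

def jones_torus(p, q):
    # the stated formula with t = A^{-4}; note pq - p - q + 1 = (p-1)(q-1)
    num = 1 - t**(p + 1) - t**(q + 1) + t**(p + q)
    return sp.expand(sp.cancel(t**sp.Rational((p - 1)*(q - 1), 2) * num / (1 - t**2)))

# the (2,3) torus knot is a trefoil: V = t + t^3 - t^4 (in this mirror convention)
assert sp.expand(jones_torus(2, 3) - (t + t**3 - t**4)) == 0
# the (2,5) torus knot (Solomon's seal knot): V = t^2 + t^4 - t^5 + t^6 - t^7
assert sp.expand(jones_torus(2, 5) - (t**2 + t**4 - t**5 + t**6 - t**7)) == 0
```

The division by \( 1 - {t}^{2} \) is always exact, so the result is a genuine Laurent polynomial, as a knot invariant must be.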
(The tunnel number is the minimal number of arcs that can be embedded in the knot exterior \( X \), with each arc meeting \( \partial X \) at its end points, so that \( X \) less a regular neighbourhood of the arcs is a handlebody.) They have also been employed in [73] to give information about a generalised unknotting operation.

## Exercises

1. Prove the formula displayed in Figure 14.14.

2. Let \( D \) be a diagram in \( {S}^{2} \) of the \( p \) -framed \( \left( {p,2}\right) \) torus knot. Calculate \( {\left\langle {S}_{n}\left( \alpha \right) \right\rangle }_{D} \), where \( {S}_{n} \) is the usual Chebyshev polynomial.

3. Suppose that \( D \) is the usual three-crossing diagram of the trefoil knot and that surgery prescribed by this diagram gives a 3-manifold \( M \) . Calculate the invariant \( {\mathcal{I}}_{A}\left( M\right) \) when \( A = \exp \left( {{\pi i}/{10}}\right) \) .

4. Let \( A = \exp \left( {{\pi i}/{2r}}\right) \) . If \( M \) and \( \bar{M} \) are the same closed connected 3-manifold but with opposite orientations, show that \( {\mathcal{I}}_{A}\left( \bar{M}\right) \) is the complex conjugate of \( {\mathcal{I}}_{A}\left( M\right) \) . Let \( {M}_{1} + {M}_{2} \) be the connected sum of two oriented closed connected 3-manifolds \( {M}_{1} \) and \( {M}_{2} \) . (This sum is formed by removing a 3-ball from each manifold and identifying the 2-sphere boundaries together so that orientations match up.) Show that \[ \mu {\mathcal{I}}_{A}\left( {{M}_{1} + {M}_{2}}\right) = {\mathcal{I}}_{A}\left( {M}_{1}\right) {\mathcal{I}}_{A}\left( {M}_{2}\right) . \]

5. Let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity. Find, in the \( r \) -admissible situation, expressions for all the \( {6j} \) -symbols when \( r = 4 \) .

6. Let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity. 
Suppose that \( {\omega }^{\prime } \) is another element of \( \mathcal{S}\left( {{S}^{1} \times I}\right) \) with the property of invariance under type \( 2 \) Kirby moves that is described (for \( \omega \) ) in Lemma 13.5. Let \( {\mu }^{\prime } \) be defined so that \( {\left( {\mu }^{\prime }\right) }^{-2} = {\left\langle {\omega }^{\prime }\right\rangle }_{U} \) and suppose that this is non-zero. Suppose that \( {\omega }^{\prime } \) and \( {\mu }^{\prime } \) are used to define an invariant \( {\mathcal{I}}_{A}^{\prime }\left( M\right) \) of closed, connected, oriented 3-manifolds \( M \) exactly as in Definition 14.4. By considering \[ {\left\langle \mu {\mu }^{\prime }\omega {\omega }^{\prime },\mu {\mu }^{\prime }\omega {\omega }^{\prime },\ldots ,\mu {\mu }^{\prime }\omega {\omega }^{\prime }\right\rangle }_{D} \] for a link diagram \( D \), determine the relationship between \( {\mathcal{I}}_{A}^{\prime }\left( M\right) \) and \( {\mathcal{I}}_{A}\left( M\right) \) .

7. Show that any compact connected oriented 3-manifold with boundary a torus can be obtained by surgery on a framed link in a solid torus. [Glue a solid torus to the boundary and use the surgery result for closed 3-manifolds.] Suppose that \( {X}_{1} \) and \( {X}_{2} \) are knot exteriors and \( h : \partial {X}_{1} \rightarrow \partial {X}_{2} \) is a homeomorphism. Let \( - h \) be the composition of \( h \) and \( - \mathrm{{id}} : \partial {X}_{1} \rightarrow \partial {X}_{1} \), where \( - \mathrm{{id}} \) is a homeomorphism that sends longitude to longitude and meridian to meridian but reverses the directions of them both. Show that for every primitive \( 4{r}^{\text{th }} \) root of unity \( A \), \[ {\mathcal{I}}_{A}\left( {{X}_{1}{ \cup }_{h}{X}_{2}}\right) = {\mathcal{I}}_{A}\left( {{X}_{1}{ \cup }_{-h}{X}_{2}}\right) . \]

8. Let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity. 
In Lemma 14.10, a base is described for the space of skeins of a disc, with points in its boundary partitioned into four sets and with an idempotent (of the relevant Temperley-Lieb algebra) adjacent to each set, when that space is regarded as a dual space (as "maps of outside diagrams"). Generalise this from a partition into four sets to a partition into \( n \) sets of points, finding a base corresponding to labelled trivalent graphs with \( r \) -admissibility for the labels at every vertex.

9. Let \( A \) be a primitive \( 4{r}^{\text{th }} \) root of unity. Let \[ N = \left\{ {\phi \in \mathcal{S}\left( {{S}^{1} \times I}\right) : \langle \phi ,\psi {\rangle }_{H} = 0\text{ for all }\psi \in \mathcal{S}\left( {{S}^{1} \times I}\right) }\right\} . \] Here, again, \( H \) is a two-crossing diagram of a non-trivial (Hopf) link. Show that the dimension of the quotient space \( \mathcal{S}\left( {{S}^{1} \times I}\right) /N \) is \( r - 1 \), and find for this space two bases, represented by sets \( \left\{ {\beta }_{i}\right\} \) and \( \left\{ {\gamma }_{j}\right\} \) of elements of \( \mathcal{S}\left( {{S}^{1} \times I}\right) \), such that \( {\left\langle {\beta }_{i},{\gamma }_{j}\right\rangle }_{H} = {\delta }_{i, j} \) .

10. Repeat the previous exercise with a \( g \) -holed disc replacing the annulus, the bilinear form being given by placing skein space elements in the diagram of two linked \( g \) -holed discs, shown below, and evaluating the result in the skein space of the plane. The dimension of the quotient space is to be shown to be the integer \( {\mathcal{I}}_{A}\left( {{S}^{1} \times {F}_{g}}\right) \) obtained in Theorem 14.12. Show that a base is all labelled diagrams in the \( g \) -holed disc, of the form of Figure 14.18 with \( r \) -admissibility at each vertex. 
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_175_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_175_0.jpg)

## 15

## Generalisations of the Jones Polynomial

The Jones polynomial invariant of oriented links has already been expressed by means of a so-called skein formula in Proposition 3.7, and a similar, but different, formula was given for the Conway polynomial in Theorem 8.6. It will now be shown that those are two instances of a more general polynomial invariant in two indeterminates, sometimes called the HOMFLY polynomial ([31], [90], [106]). This is one of two two-variable generalisations of the Jones invariant. The other is the Kauffman polynomial invariant ([60], [58], [16], [45]). The main aim of this chapter is to show that these two invariants exist; that is, that they are indeed well defined. These proofs of existence are harder than the one given for the Jones polynomial in Chapter 3. The simple defining formulae of these invariants are in the statements of the next two theorems and the proofs of the two are very similar. First, however, there follows a preparatory and slightly technical result of planar geometry. It investigates the way in which a lens-shaped region \( R \) of the plane is divided into regions by a collection of transversals \( \left\{ {t}_{i}\right\} \) . Lemma 15.1. Suppose that \( p \) and \( q \) are two arcs in \( {\mathbb{R}}^{2} \) meeting only at their end points \( A \) and \( B \), and let \( R \) be the compact region bounded by \( p \cup q \) . Suppose that \( {t}_{1},{t}_{2},\ldots ,{t}_{n} \) are arcs in \( R \), each meeting \( p \cup q \) at just its end points, one in \( p \) and one in \( q \) . 
Suppose that every \( {t}_{i} \cap {t}_{j} \) is at most one point, that intersections of arcs are transverse and that there are no triple points. The graph, with vertices all intersections of these arcs and edges comprising \( p \cup q \cup \mathop{\bigcup }\limits_{i}{t}_{i} \), separates \( R \) into a collection of \( v \) -gons; amongst these \( v \) -gons there is a 3-gon with an edge in \( p \) and a 3-gon with an edge in \( q \) . Proof. Proceed by induction on the number \( n \) of arcs. The result is trivial if \( n = 1 \), so assume \( n > 1 \) . Amongst the end points of the \( {t}_{i} \) that lie on \( p \), let \( X \) be the nearest to \( A \) . If, then, \( X \) is an end of \( {t}_{j} \), let \( {B}^{\prime } \) be the other end of \( {t}_{j} \), on \( q \) . If possible, from \( \left\{ {{t}_{i} : i \neq j}\right\} \) select a \( {t}_{k} \) with \( {t}_{k} \cap {t}_{j} = {X}^{\prime } \) and \( {t}_{k} \cap q = {A}^{\prime } \), such that \( {t}_{k} \) has no point of intersection with a \( {t}_{i} \) between \( {A}^{\prime } \) and \( {X}^{\prime } \) . Select such a \( {t}_{k} \) with \( {X}^{\prime } \) as near as possible to \( {B}^{\prime } \) (see Figure 15.1). If there is no such \( {t}_{k} \), select \( p \) instead, taking \( {A}^{\prime } = A \) and \( {X}^{\prime } = X \) . Now let \( {p}^{\prime } \) be an arc starting at \( {A}^{\prime } \), proceeding along ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_177_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_177_0.jpg) Figure 15.1 \( {t}_{k} \) (or \( p \) if there is no \( {t}_{k} \) ) to \( {X}^{\prime } \) and then along \( {t}_{j} \) to \( {B}^{\prime } \) (it helps not to think of a corner at \( {X}^{\prime } \) ). Let \( {q}^{\prime } \) be the sub-arc of \( q \) from \( {A}^{\prime } \) to \( {B}^{\prime } \) and let \( {R}^{\prime } \) be the region bounded by \( {p}^{\prime } \cup {q}^{\prime } \) . 
If no \( {t}_{i} \) meets the interior of \( {R}^{\prime } \), then \( {R}^{\prime } \) is a 3 -gon with an edge in \( q \) . Otherwise \( {R}^{\prime } \) meets \( \left\{ {{t}_{i} : i \neq j, i \neq k}\right\} \) in fewer than \( n \) arcs and so, by induction, there is a 3-gon in \( {R}^{\prime } \) with an edge in \( {q}^{\prime } \) . The choice made for \( {t}_{k} \) ensures that \( {A}^{\prime } \) is not a vertex of this 3-gon (which is important, as \( {X}^{\prime } \) is not a vertex of a \( v \) -gon of \( {R}^{\prime } \) ), and so it is one of the original 3-gons having an edge contained in \( q \) . Similarly, there is a 3-gon with an edge in \( p \) . In the course of the proof of the existence of the HOMFLY and Kauffman polynomials, the idea of an ascending link diagram will be used. The idea is as follows: A diagram \( D \) of an oriented link is ordered if an ordering is chosen for the link components and \( {based} \) if a base point is selected in \( D \) on each link component. If \( D \) is so ordered and based, the associated ascending diagram \( {\alpha D} \) is formed from \( D \) by changing the crossings so that on a journey around all the components in the given order, always beginning at the base point of each component, each crossing is first encountered as an under-pass. That means that the link represented by \( {\alpha D} \) can be thought of as lying in \( {\mathbb{R}}^{3} \) above the diagram, with each component entirely below those following it in the given order, and with each component ascending as one moves around it away from its base point, but eventually dropping vertically back to that base point. Thus \( {\alpha D} \) represents a trivial link. It is important to remember, given \( D \), that \( {\alpha D} \) depends on those two choices, component order and base points. 
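The rule defining \( {\alpha D} \) can be sketched in code. The input format below is an assumption made for illustration only: each component, taken in the chosen order, is listed as the sequence of crossing labels met when travelling from its base point, with each crossing label occurring exactly twice in total. The rule that each crossing is first encountered as an under-pass then determines the over/under choice at every crossing:

```python
def ascending_under_passes(components):
    # components: list (in the chosen component order) of lists of crossing
    # labels, each component's labels in the order met from its base point.
    # Returns, for each crossing, the visit (component index, position along
    # that component) that is made the under-pass: the first encounter.
    under = {}
    for ci, comp in enumerate(components):
        for pos, label in enumerate(comp):
            if label not in under:      # first encounter: pass under
                under[label] = (ci, pos)
    return under

# a one-component example with the crossing pattern of a trefoil diagram
print(ascending_under_passes([["a", "b", "c", "a", "b", "c"]]))
# → {'a': (0, 0), 'b': (0, 1), 'c': (0, 2)}
```

With every first encounter an under-pass, each component lies below all later components and ascends as it is traversed away from its base point, which is why \( {\alpha D} \) always represents a trivial link.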
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_177_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_177_1.jpg) Type III Figure 15.2 Also needed in the proof is the idea of Reidemeister moves that do not increase the number of crossings of a diagram above a certain bound. In principle it is clear what that means, but the moves will be taken to be the usual Type II move together with those of the Figure 15.2, which include two forms of the Type I and Type III moves and a Type IV move. The usual proofs that this is a redundant list involve increasing the number of crossings. Theorem 15.2. There is a unique function \[ P : \left\{ {\text{ Oriented links in }{S}^{3}}\right\} \rightarrow \mathbb{Z}\left\lbrack {{l}^{\pm 1},{m}^{\pm 1}}\right\rbrack \] such that \( P \) takes the value 1 on the unknot and, if \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are links that have diagrams \( {D}_{ + },{D}_{ - } \) and \( {D}_{0} \) that are the same except near a single point where they are as in Figure 15.3, then \[ {lP}\left( {L}_{ + }\right) + {l}^{-1}P\left( {L}_{ - }\right) + {mP}\left( {L}_{0}\right) = 0. \] \( P\left( L\right) \) is called the HOMFLY polynomial (see [31]) of the oriented link \( L \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_178_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_178_0.jpg) Figure 15.3 Proof. In outline, the proof consists of defining \( P \) on link diagrams (using induction on the number of crossings), ensuring the validity of the skein relation (*) \[ {lP}\left( {D}_{ + }\right) + {l}^{-1}P\left( {D}_{ - }\right) + {mP}\left( {D}_{0}\right) = 0 \] for diagrams related as in Figure 15.3, and verifying invariance under Reidemeister moves. Note first that the equation \( \left( \star \right) \) determines uniquely any one of \( P\left( {D}_{ + }\right) \) , \( P\left( {D}_{ - }\right) \) and \( P\left( {D}_{0}\right) \) from knowledge of the other two. 
Note, too, that a solution to (*) is \[ \left( {P\left( {D}_{ + }\right), P\left( {D}_{ - }\right), P\left( {D}_{0}\right) }\right) = \left( {x, x,{\mu x}}\right) , \] where \( x \) is arbitrary and \( \mu = - {m}^{-1}\left( {l + {l}^{-1}}\right) \) . Let \( {\mathcal{D}}_{n} \) be the set of all oriented link diagrams in the plane with at most \( n \) crossings (two diagrams are regarded as being identical if they differ by an orientation-preserving homeomorphism of the plane). Suppose inductively that \( P : {\mathcal{D}}_{n - 1} \rightarrow \mathbb{Z}\left\lbrack {{l}^{\pm 1},{m}^{\pm 1}}\right\rbrack \) has been defined such that on \( {\mathcal{D}}_{n - 1} \) (i) the skein relation \( \left( \star \right) \) holds for any three diagrams in \( {\mathcal{D}}_{n - 1} \) related in the usual way; (ii) \( P\left( D\right) \) is unchanged by Reidemeister moves on \( D \) that never involve more than \( n - 1 \) crossings; (iii) if \( D \) is any ascending diagram of a link in \( {\mathcal{D}}_{n - 1} \) with \( \# D \) components, then \( P\left( D\right) = {\mu }^{\# D - 1} \) . The induction starts with \( {\mathcal{D}}_{0} \) in which any diagram is, trivially, ascending, and there is nothing to prove. Now extend the definition of \( P \) over \( {\mathcal{D}}_{n} \) in the following way: If \( D \) is an \( n \) - crossing diagram, select an ordering of its components, select a base point on each component and let \( {\alpha D} \) be the associated ascending diagram. Define \( P\left( {\alpha D}\right) = \) \( {\mu }^{\# D - 1} \), where \( \# D \) is the number of link components of \( D \) . The crossings of \( {\alpha D} \) can be changed one at a time to achieve \( D \) . 
If \( {D}_{ + } \) and \( {D}_{ - } \) are the diagrams before and after such a crossing change, and \( {D}_{0} \) is the diagram with the crossing annulled, the value of \( P \) on the diagram after the change can be calculated, using \( \left( \star \right) \), from \( P\left( {D}_{0}\right) \) and the value on the diagram before the change. The value of \( P\left( {D}_{0}\right) \) is known by induction. The value for \( P\left( D\right) \) is then defined to be the value thus calculated from \( P\left( {\alpha D}\right) \) by changing crossings of \( {\alpha D} \) to produce \( D \) . It is easy to see that \( P\left( D\right) \) does not depend on the ordering of the sequence of crossing changes chosen to get from \( {\alpha D} \) to \( D \) (consider transposing one crossing change and the next one in the sequence). However, the problem is to show that \( P\left( D\right) \) does not depend on component order and choice of base points. It is fairly easy to deal with base points. Suppose, keeping fixed the order of link components, the base point \( b \) of a certain link component of \( D \) is moved from just before a crossing to \( {b}^{\prime } \), a point just after the crossing. Let \( {\beta D} \) be the ascending diagram using \( {b}^{\prime } \) instead of \( b \) . If the other segment involved at the crossing is from a different component, then \( {\alpha D} = {\beta D} \) . Otherwise \( {\beta D} \) is constructed from \( {\alpha D} \) by simply changing this crossing. 
However, the diagram \( {D}_{0} \) obtained by annulling this crossing is also an ascending diagram with \( \# D + 1 \) link components and is, of course, in \( {\mathcal{D}}_{n - 1} \) . Thus by the induction, \( P\left( {D}_{0}\right) = {\mu }^{\# D} \) and, as \( P\left( {\alpha D}\right) = {\mu }^{\# D - 1} \), the skein formula gives \( P\left( {\beta D}\right) = {\mu }^{\# D - 1} \) . This means that if one had defined \( P\left( {\beta D}\right) = {\mu }^{\# D - 1} \), next calculated that \( P\left( {\alpha D}\right) = {\mu }^{\# D - 1} \) and then calculated \( P\left( D\right) \), one would have obtained the same value for \( P\left( D\right) \) as before. Hence the definition of \( P\left( D\right) \) is independent of choice of base points. At this stage \( P \) is well defined on \( n \) -crossing diagrams with an ordering of their components. For such diagrams the identity \( \left( \star \right) \) is satisfied (assuming \( {D}_{ + } \) and \( {D}_{ - } \) have the "same" orderings), for \( \left( \star \right) \) may be regarded as the first step in a calculation of \( P\left( {D}_{ + }\right) \) from \( P\left( {\alpha D}\right) \) . Reidemeister moves that never involve more than \( n \) crossings, on an ordered diagram \( D \), will now be considered. Note that if the diagram before a Reidemeister move has an ordering on its components, then this clearly induces an ordering on the components of the diagram after the move. A move is to be interpreted with respect to such associated orderings. Suppose a crossing of \( D \) is to be removed by a Type I move on some component. It can be assumed (as position of base points is immaterial) that in selecting base points, the base point on the component in question is immediately before the crossing. Then this crossing is not changed in obtaining \( D \) from \( {\alpha D} \) . Thus the calculation of \( P \) for \( D \) is exactly the same as the calculation for the diagram after the move. 
With reference to a Reidemeister move of Type II, consider the two triples of diagrams shown in Figure 15.4(a). The two diagrams labelled \( {D}_{ - } \) are the same, as are the two labelled \( {D}_{0} \) . Thus, by \( \left( \star \right), P \) takes the same value on the two labelled \( {D}_{ + } \) . The same is true for the two triples of Figure 15.4(b), using in addition invariance of \( P \) under Type I moves. Hence when considering removing two crossings of \( D \) by a Type II move, choose base points away from the area concerned, and note that the above remarks on Figure 15.4 imply that, without loss of generality, both crossings may be changed before even considering the move. Thus it may be assumed that the crossings are such that neither has to be changed in obtaining \( D \) from \( {\alpha D} \) . As before, the calculations of \( P \) on the diagrams before and after the move are the same. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_180_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_180_0.jpg) Figure 15.4 The same general idea works for Type III moves. Consider the diagrams of Figure 15.5 where it is supposed that the components are in some way oriented. The task of showing that \( P\left( {D}_{1}\right) = P\left( {{D}_{1}{}^{\prime }}\right) \) is equivalent to the task of showing that \( P\left( {D}_{2}\right) = P\left( {{D}_{2}{}^{\prime }}\right) \) . This is because \( {D}_{3} = {D}_{3}^{\prime } \), and \( {D}_{4} \) and \( {D}_{4}^{\prime } \) are related by a Type II move; \( \left( \star \right) \) gives the usual relationship between \( P\left( {D}_{1}\right), P\left( {D}_{2}\right) \), and one of \( P\left( {D}_{3}\right) \) and \( P\left( {D}_{4}\right) \) (according to the orientation situation), and it also gives exactly the same relation between the \( P\left( {D}_{i}^{\prime }\right) \) . 
In this way the three crossings under consideration in the diagrams before and after a contemplated Type III move may be adjusted so that (choosing base points well out of the way) no crossing needs to be changed to achieve the ascending diagram. As before, the calculations of \( P \) before and after the move are the same. Finally, note that the fourth type of move, introduced (temporarily) above, clearly does not affect calculations of \( P \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_180_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_180_1.jpg) One thing remains to be proved. Suppose \( D \) is any \( n \) -crossing diagram with an ordering of its components and \( {\alpha D} \) is the associated ascending diagram (with respect to any base points). Suppose that \( {\beta D} \) is an ascending diagram constructed from \( D \) with reference to a different ordering. Give the components of \( {\beta D} \) the original ordering, so that \( P\left( {\beta D}\right) \) is defined by calculating it from \( P\left( {\alpha D}\right) \) . It is required to show that \( P\left( {\beta D}\right) = {\mu }^{\# D - 1} \) . If this is so, then a calculation of a value of \( P\left( D\right) \) could be started from knowing \( P\left( {\beta D}\right) = {\mu }^{\# D - 1} \), calculating \( P\left( {\alpha D}\right) = {\mu }^{\# D - 1} \) and then calculating the value of \( P\left( D\right) \) as prescribed in the definition (with \( D \) having its original \( \alpha \) -given order). That would mean that \( P\left( D\right) \) would be well defined, independent of the ordering of its components. It would also complete the induction as \( {\beta D} \) is an arbitrary ascending \( n \) -crossing diagram. To make this final check, consider the ascending diagram \( {\beta D} \) . 
Any component with no crossing that bounds a disc whose interior is disjoint from the diagram may be moved away from the rest of the diagram (into the unbounded complementary region) using the fourth type of move. Now, consider an innermost loop of the diagram (it may help to forget the over-crossing information for a while), a loop being a sub-arc of the diagram starting and stopping at the same crossing. If this loop contains no crossing (except at its ends) it can be removed using a Type I move (in this case, "innermost" and the remark on zero-crossing components imply there is no component totally within the area bounded by the loop). That move leaves the value of \( P \) unchanged. However, the new diagram has \( n - 1 \) crossings and is still ascending. Thus, by the induction, \( P\left( {\beta D}\right) = {\mu }^{\# D - 1} \) . Otherwise other arcs of \( {\beta D} \) traverse the loop; these transversals are simple arcs, as the loop is innermost and each meets the loop at two points. One transversal and part of the loop bound a 2-gon, which is probably crossed by many transversals. Amongst such 2-gons, and similar 2-gons bounded by pairs of the transversals, choose an innermost one. Let the two arcs involved, denoted \( p \) and \( q \), meet at points \( A \) and \( B \) and bound the region \( R \) . (Again, the innermost condition implies there is no component entirely within \( R \) .) Any of the remaining transversals that meets \( R \), meets each of \( p \) and \( q \) , and within \( R \), transversals meet each other in at most one point (as \( R \) is innermost). This is the situation described in Lemma 15.1, so considering the pattern of \( v \) -gons in \( R \) formed by the complementary regions of \( {\beta D} \), there is a 3-gon having an edge in \( p \) . 
Assuming all the base points are outside \( R \), the fact that \( {\beta D} \) is ascending means that the 3-gon has cross-overs at its three vertices that are appropriate for a Type III Reidemeister move. Thus change the diagram by such a move, moving the part of \( p \) across the 3-gon. This changes \( R \) to a new region, and the procedure can be repeated; at each stage the diagram is still ascending. Eventually there are no 3-gons in the new region \( R \), in which case that region can be removed completely by a Type II move. Thus \( {\beta D} \) can be changed by Reidemeister moves, which never involve more than \( n \) crossings, to an ascending diagram with \( n - 2 \) crossings. Thus \( P\left( {\beta D}\right) = {\mu }^{\# D - 1} \) by induction. This means that choosing the base points outside \( R \) was valid, as position of base points is irrelevant in ascending diagrams with this value of \( P \) . This completes the proof of the induction hypothesis. Thus \( P \) is, finally, well defined on \( {\mathcal{D}}_{n} \) for all \( n \), the skein formula \( \left( \star \right) \) is always satisfied, and as any collection of Reidemeister moves remains within \( {\mathcal{D}}_{n} \) for some \( n, P\left( D\right) \) is unchanged by all Reidemeister moves. Thus a link invariant is produced by taking \( P\left( L\right) = P\left( D\right) \), where \( D \) is any diagram for the link \( L \) . There is only one such \( P \) as described in the theorem, for the properties of the statement of the theorem always allow \( P\left( L\right) \) to be calculated from an ascending diagram. 
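As a concrete instance of such a calculation from an ascending diagram, the following sketch resolves a positive Hopf link and then a trefoil diagram using the skein relation \( lP(L_{+}) + l^{-1}P(L_{-}) + mP(L_{0}) = 0 \) and the unlink values \( \mu^{\#L-1} \). The helper name `skein_P_plus` and the choice of crossing conventions are assumptions for illustration; sympy does the algebra.

```python
import sympy as sp

l, m = sp.symbols('l m')

def skein_P_plus(P_minus, P_zero):
    # Solve l*P(L+) + (1/l)*P(L-) + m*P(L0) = 0 for P(L+).
    return sp.expand(-(P_minus / l + m * P_zero) / l)

mu = -(l + 1/l) / m            # value of P on a 2-component unlink
P_unknot = sp.Integer(1)

# Positive Hopf link: switching one crossing gives the 2-unlink (L-),
# smoothing it gives the unknot (L0).
P_hopf = skein_P_plus(mu, P_unknot)

# Trefoil with three positive crossings: switching one crossing gives
# the unknot (L-), smoothing it gives the positive Hopf link (L0).
P_trefoil = skein_P_plus(P_unknot, P_hopf)

print(sp.simplify(P_trefoil - (-l**-4 - 2*l**-2 + l**-2 * m**2)))  # 0
```

The two-step descent mirrors the proof: each crossing change moves the diagram closer to an ascending one, whose value is known.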
An ascending diagram with \( c \) components is an unlink and so is represented by a zero-crossing diagram of \( c \) components. It then follows (an easy exercise) from the statement of the theorem that the value of \( P \) on that diagram is \( {\mu }^{c - 1} \) . There is nothing sacrosanct about the notation used here for the HOMFLY polynomial. The skein formula simply expresses a linear relation between the values of \( P \) on three oriented diagrams related in the usual way. It is equally valid to regard \( P \) as having values in the Laurent polynomials in three (projective) variables \( x, y \) and \( z \), with the skein relation being \( {xP}\left( {L}_{ + }\right) + {yP}\left( {L}_{ - }\right) + {zP}\left( {L}_{0}\right) = 0 \) . A helpful custom has been established to the effect that any use of the HOMFLY polynomial is accompanied by a declaration of notational conventions. In earlier chapters the skein formulae for the Jones polynomial and the Conway polynomial have already been considered, so the general procedures that might be applied to the HOMFLY polynomial are not unfamiliar. The details of the proof of the above theorem explain how \( P\left( L\right) \) can be calculated by reference to an ascending diagram. Although the length of such a calculation depends exponentially on the number of crossings of a diagram, it is easy to calculate with diagrams of only a few crossings. It is also easy to make trivial errors in such calculations; several computer programs have been written to obviate this and to manage an inhuman number of crossings. Further exploration of the HOMFLY polynomial will be postponed to the following chapter. Instead, the preceding existence proof will first be adapted to give a proof of the existence of the Kauffman polynomial. That is the other two-variable polynomial invariant that generalises the Jones polynomial; it should not be confused with the Kauffman bracket. 
Parts of the two existence proofs are the same, including the final tricky component re-ordering section. Thus emphasis will be placed on places where the proofs differ. One difference is that the Kauffman polynomial is really not defined on links but on framed links. The HOMFLY polynomial can be regarded as referring to framed links but that can seem, initially, to be an unnecessary sophistication. For the Kauffman polynomial it is necessary. In what follows, framings will be interpreted by means of diagrams, the framing on a component being the sum of the signs of the crossings at which that component crosses itself. The work of the second existence proof consists in defining a two-variable Laurent polynomial invariant \( \Lambda \left( D\right) \in \mathbb{Z}\left\lbrack {{a}^{\pm 1},{z}^{\pm 1}}\right\rbrack \) for unoriented link diagrams \( D \) . This is the burden of the next theorem. If a diagram \( D \) happens to have an orientation, it should be forgotten when evaluating \( \Lambda \left( D\right) \) . Given this, the Kauffman polynomial \( F\left( L\right) \) of an oriented link \( L \) has the following simple definition: Definition 15.3. The Kauffman polynomial is the function \[ F : \left\{ {\text{ Oriented links in }{S}^{3}}\right\} \rightarrow \mathbb{Z}\left\lbrack {{a}^{\pm 1},{z}^{\pm 1}}\right\rbrack \] defined by \( F\left( L\right) = {a}^{-w\left( D\right) }\Lambda \left( D\right) \), where \( D \) is a diagram with writhe \( w\left( D\right) \) of the oriented link \( L \) and \( \Lambda \) is the function of the next theorem. In the course of the proof of the next theorem, it will be useful to use a self-writhe \( \bar{w}\left( D\right) \) of a link diagram \( D \), defined to be the sum of the signs of crossings at which all link components cross themselves (not other components). This self-writhe can be considered as the sum of the framings of all the components; note that it does not depend on a choice of orientation on the link. 
For an oriented link diagram \( D \) , the difference \( w\left( D\right) - \bar{w}\left( D\right) \) is twice the sum of the linking numbers between all pairs of link components. (It might have been better, and equally valid, had the Kauffman polynomial been defined using \( \bar{w}\left( D\right) \) instead of \( w\left( D\right) \), for \( {a}^{-\bar{w}\left( D\right) }\Lambda \left( D\right) \) is just an invariant of unoriented links.) The proof of the next theorem will again use a definition involving induction on the number of crossings in a diagram and reference to ascending diagrams. However, a slight generalisation of an ascending diagram will be needed. This consists of the idea of a link diagram having an untying function (to be thought of as a "height") in the following sense: Definition 15.4. Suppose \( D \) is a diagram for a link \( L \) with ordered components. An untying function for \( D \) is a real-valued function \( h \) on \( D \), two-valued at the crossings, that corresponds to a continuous function \( h : L \rightarrow \mathbb{R} \), with the following properties: (i) If component \( {c}_{i} \) precedes component \( {c}_{j} \) in the ordering, then \( h\left( {x}_{i}\right) < h\left( {x}_{j}\right) \) for any \( {x}_{i} \in {c}_{i} \) and \( {x}_{j} \in {c}_{j} \) . (ii) On each link component \( {c}_{i} \), the function \( h \) is monotonically strictly increasing from some base point \( {b}_{i} \in {c}_{i} \) to some top point \( {t}_{i} \in {c}_{i} \), in both directions around \( {c}_{i} \) . (iii) At a crossing the value of \( h \) on the over-pass exceeds that on the under-pass. Note that any ascending (oriented) diagram has an untying function in which the top points of components always just precede the base points. Note, too, that if \( D \) has an untying function, it represents the unlink. 
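The bookkeeping of writhe, self-writhe and linking numbers can be illustrated on a diagram encoded as a list of signed crossings. The encoding below (a component label for each strand of a crossing, plus the crossing's sign) is an illustrative assumption, not notation from the text.

```python
# Each crossing: (component of over-strand, component of under-strand, sign).
def writhe(crossings):
    return sum(sign for _, _, sign in crossings)

def self_writhe(crossings):
    # Sum of signs of crossings at which a component crosses itself.
    return sum(sign for over, under, sign in crossings if over == under)

def total_linking(crossings):
    # w(D) - self_writhe(D) is twice the sum of linking numbers
    # between all pairs of components, so the division is exact.
    return (writhe(crossings) - self_writhe(crossings)) // 2

# A positive Hopf link diagram: two positive crossings between components 0 and 1.
hopf = [(0, 1, +1), (1, 0, +1)]
print(writhe(hopf), self_writhe(hopf), total_linking(hopf))  # 2 0 1
```

For the positive Hopf link the writhe is 2, the self-writhe 0, and the sum of linking numbers 1, in accordance with \( w(D) - \bar{w}(D) = 2\sum \operatorname{lk} \).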
This is because it represents a link \( L \) in which \( h \) is the height function of \( L \) above the plane of the diagram \( D \) (just "lift \( D \) up" to the height specified by \( h \) ). Then \( L \) attains each height at most twice (by the monotonicity of \( h \) ), so that the union of line segments joining points of \( L \) of equal height gives a collection of disjoint discs bounded by \( L \) . Theorem 15.5. There exists a function \[ \Lambda : \left\{ {\text{ Unoriented link diagrams in }{S}^{2}}\right\} \rightarrow \mathbb{Z}\left\lbrack {{a}^{\pm 1},{z}^{\pm 1}}\right\rbrack \] that is defined uniquely by the following: (i) \( \Lambda \left( U\right) = 1 \), where \( U \) is the zero-crossing diagram of the unknot; (ii) \( \Lambda \left( D\right) \) is unchanged by Reidemeister moves of Types II and III on the diagram \( D \) ; (iii) if \( {D}^{\prime } \) is obtained from \( D \) by removing a positive kink (a Type I Reidemeister move), then \( \Lambda \left( D\right) = a\Lambda \left( {D}^{\prime }\right) \) ; (iv) If \( {D}_{ + },{D}_{ - },{D}_{0} \) and \( {D}_{\infty } \) are four diagrams exactly the same except near a point where they are as shown in Figure 15.6, then \[ \left( {\star \star }\right) \;\Lambda \left( {D}_{ + }\right) + \Lambda \left( {D}_{ - }\right) = z\left( {\Lambda \left( {D}_{0}\right) + \Lambda \left( {D}_{\infty }\right) }\right) . \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_184_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_184_0.jpg) Figure 15.6 Proof. Note that, when considering a crossing in an unoriented diagram, it has no claim to be termed \( {D}_{ + } \) rather than \( {D}_{ - } \) in the above notation. However, this never matters, since \( {D}_{ + } \) and \( {D}_{ - } \) feature symmetrically in the formula \( \left( {\star \star }\right) \) ; the treatment of \( {D}_{0} \) and \( {D}_{\infty } \) is likewise symmetric. 
Observe that the equation \( \left( {\star \star }\right) \) determines uniquely any one of \( \Lambda \left( {D}_{ + }\right) ,\Lambda \left( {D}_{ - }\right) ,\Lambda \left( {D}_{0}\right) \) and \( \Lambda \left( {D}_{\infty }\right) \) from knowledge of the other three. Observe also that a solution to \( \left( {\star \star }\right) \) is \( \left( {\Lambda \left( {D}_{ + }\right) ,\Lambda \left( {D}_{ - }\right) ,\Lambda \left( {D}_{0}\right) ,\Lambda \left( {D}_{\infty }\right) }\right) = \left( {{ax},{a}^{-1}x, x,{\delta x}}\right) \), where \( x \) is arbitrary and \( \delta = \left( {a + {a}^{-1}}\right) {z}^{-1} - 1 \). Now follow the pattern of the proof of Theorem 15.2. Let \( {\mathcal{D}}_{n} \) be the set of all unoriented link diagrams in the plane with at most \( n \) crossings. 
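That the quadruple \( (ax, a^{-1}x, x, \delta x) \) really solves \( (\star\star) \) reduces to the identity \( a + a^{-1} = z(1 + \delta) \), which follows at once from the definition of \( \delta \). A quick symbolic check (a sketch using sympy):

```python
import sympy as sp

a, z, x = sp.symbols('a z x')
delta = (a + 1/a) / z - 1

# (**): Lambda(D+) + Lambda(D-) = z * (Lambda(D0) + Lambda(D_inf))
lhs = a * x + x / a
rhs = z * (x + delta * x)
print(sp.simplify(lhs - rhs))  # 0
```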
Suppose inductively that \( \Lambda : {\mathcal{D}}_{n - 1} \rightarrow \mathbb{Z}\left\lbrack {{a}^{\pm 1},{z}^{\pm 1}}\right\rbrack \) has been defined such that on \( {\mathcal{D}}_{n - 1} \) (a) the skein relation \( \left( {\star \star }\right) \) holds for any four diagrams in \( {\mathcal{D}}_{n - 1} \) related as in Figure 15.6; (b) if \( {D}^{\prime } \) is obtained from \( D \) by removing a kink with a Type I move, then \( \Lambda \left( D\right) = {a}^{\pm 1}\Lambda \left( {D}^{\prime }\right) \), the exponent being the sign of the kink; (c) \( \Lambda \left( D\right) \) is unchanged by Reidemeister moves of Types II, III and IV on \( D \) that never involve more than \( n - 1 \) crossings (see Figure 15.2); (d) if \( D \) is any diagram of a link in \( {\mathcal{D}}_{n - 1} \) with \( \# D \) link components that has an untying function, then \( \Lambda \left( D\right) = {a}^{\bar{w}\left( D\right) }{\delta }^{\# D - 1} \) . The induction starts with \( {\mathcal{D}}_{0} \) in which any diagram has, trivially, an untying function. Now extend the definition of \( \Lambda \) over \( {\mathcal{D}}_{n} \) in the following way: If \( D \) is an \( n \) -crossing diagram, select an orientation on each component, an ordering of the components and a base point on each component. Let \( {\alpha D} \) be the associated ascending diagram (it is just to define this that the orientation is needed). Define \( \Lambda \left( {\alpha D}\right) = {a}^{\bar{w}\left( {\alpha D}\right) }{\delta }^{\# D - 1} \), where \( \# D \) is the number of link components of \( D \) (and of \( {\alpha D} \) ). The value for \( \Lambda \left( D\right) \) is defined to be the value calculated from \( \Lambda \left( {\alpha D}\right) \) by changing one by one the crossings of \( {\alpha D} \) to produce \( D \), using \( \left( {\star \star }\right) \) and inductive knowledge of \( \Lambda \left( {D}_{0}\right) \) and \( \Lambda \left( {D}_{\infty }\right) \) at each crossing change. 
It is easy to see that \( \Lambda \left( D\right) \) does not depend on the ordering of the sequence of crossing changes chosen to change \( {\alpha D} \) to \( D \) . The problem is to show that \( \Lambda \left( D\right) \) does not depend on component order, component orientation and choice of base points. Suppose, keeping fixed the orientations and the order of link components, the base point \( b \) of a certain link component of \( D \) is moved from just before a crossing to \( {b}^{\prime } \), a point just after the crossing. Let \( {\beta D} \) be the ascending diagram using \( {b}^{\prime } \) instead of \( b \) . If the other segment involved at the crossing is from a different component, then \( {\alpha D} = {\beta D} \) . Otherwise \( {\beta D} \) is constructed from \( {\alpha D} \) by simply changing this crossing. However, the diagram \( {D}_{0} \), obtained from \( {\alpha D} \) by annulling that crossing in the way consistent with orientation, is also an ascending diagram with \( \# D + 1 \) link components, and the diagram \( {D}_{\infty } \), obtained by annulling in the other way, is easily seen to have an untying function. Both \( {D}_{0} \) and \( {D}_{\infty } \) are in \( {\mathcal{D}}_{n - 1} \) . Hence \( \Lambda \left( {D}_{0}\right) = {a}^{\bar{w}\left( {D}_{0}\right) }{\delta }^{\# D} \) and \( \Lambda \left( {D}_{\infty }\right) = {a}^{\bar{w}\left( {D}_{\infty }\right) }{\delta }^{\# D - 1} \) . However, as in all four diagrams there are zero linking numbers between components, \( \bar{w}\left( {D}_{\infty }\right) = \) \( \bar{w}\left( {D}_{0}\right) \), and \( \bar{w}\left( {\alpha D}\right) \) and \( \bar{w}\left( {\beta D}\right) \) are \( \bar{w}\left( {D}_{0}\right) + 1 \) and \( \bar{w}\left( {D}_{0}\right) - 1 \) (which is which depending on the sign of the crossing). 
Hence the skein relation \( \left( {\star \star }\right) \) shows that \( \Lambda \left( {\beta D}\right) = {a}^{\bar{w}\left( {\beta D}\right) }{\delta }^{\# D - 1} \), as this value substituted in \( \left( {\star \star }\right) \) gives \[ {a}^{\bar{w}\left( {D}_{0}\right) }\left\{ {a{\delta }^{\# D - 1} + {a}^{-1}{\delta }^{\# D - 1}}\right\} = z{a}^{\bar{w}\left( {D}_{0}\right) }\left\{ {{\delta }^{\# D} + {\delta }^{\# D - 1}}\right\} , \] and that accords with the definition of \( \delta \) . This value for \( \Lambda \left( {\beta D}\right) \) is that calculated using \( b \) as the base point; it is, of course, equal to the value it would have, by definition, if \( {b}^{\prime } \) were the base point. Thus \( \Lambda \left( D\right) \) does not depend on choice of base points. At this stage \( \Lambda \) is well defined on \( n \) -crossing diagrams with an ordering of their components and an orientation on each component. For such diagrams the \( \left( {\star \star }\right) \) identity of statement (a) of the induction hypothesis is satisfied exactly as in the proof of Theorem 15.2. To see that the formulae of statement (b) are satisfied, let \( D \) be the diagram on the left-hand side of such a formula and \( {D}^{\prime } \) that on the right. Placing the base point just before the crossing shown ensures that the crossing is unchanged in the ascending diagram \( {\alpha D} \) . But then \( \bar{w}\left( {\alpha D}\right) = \bar{w}\left( {\alpha {D}^{\prime }}\right) \pm 1 \), the choice of sign depending on the sign of the crossing. Thus by definition, \( \Lambda \left( {\alpha D}\right) = {a}^{\pm 1}\Lambda \left( {\alpha {D}^{\prime }}\right) \), and the factor \( {a}^{\pm 1} \) persists throughout the calculations to show that \( \Lambda \left( D\right) = {a}^{\pm 1}\Lambda \left( {D}^{\prime }\right) \) . Reidemeister moves other than of Type I, and never involving more than \( n \) crossings, on ordered oriented diagrams must now be considered. 
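The base-point computation just carried out can be checked symbolically in the simplest case: a knot diagram (\( \# D = 1 \)) with \( \bar{w}(D_0) = 0 \). The helper `switch_crossing` is just \( (\star\star) \) rearranged; the names and the simplifying choices are mine (a sketch using sympy).

```python
import sympy as sp

a, z = sp.symbols('a z')
delta = (a + 1/a) / z - 1

def switch_crossing(lam_other, lam_zero, lam_inf):
    # (**) rearranged: after switching a crossing, the unknown value is
    # z*(Lambda(D0) + Lambda(D_inf)) - Lambda(diagram before the switch).
    return z * (lam_zero + lam_inf) - lam_other

# Situation of the base-point argument with #D = 1 and w(D0)-bar = 0:
# alpha-D is ascending with self-writhe +1, D0 is ascending with one
# extra component, D_inf has an untying function.
lam_alphaD = a            # a^{+1} * delta^{0}
lam_D0 = delta            # a^{0} * delta^{1}
lam_Dinf = sp.Integer(1)  # a^{0} * delta^{0}
lam_betaD = switch_crossing(lam_alphaD, lam_D0, lam_Dinf)
print(sp.simplify(lam_betaD - 1/a))  # 0, i.e. a^{-1} * delta^{0} as claimed
```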
The invariance of \( \Lambda \left( D\right) \) under a Reidemeister move of Type II is shown in exactly the same way as in Theorem 15.2. The diagrams of Figure 15.4 should be considered without arrows, and it should be noted that the values of \( \Lambda \) on the two diagrams labelled \( {D}_{0} \) in Figure 15.4(b) are the same by means of two applications of the formula (b) that has just been proved. Similarly, invariance under a Reidemeister move of Type III follows as before; it is simply required to use all of the \( {D}_{i} \) and \( {D}_{i}{}^{\prime } \) shown in Figure 15.5. Again invariance under a Type IV move follows trivially. It is necessary, for (d), to check that \( \Lambda \left( D\right) = {a}^{\bar{w}\left( D\right) }{\delta }^{\# D - 1} \) for an \( n \) -crossing oriented ordered diagram \( D \) with an untying function \( h \) . This is true if the top points on all components immediately precede the base points, as then the diagram is ascending with respect to its ordering and orientation. Thus proceed by a sub-induction on the total number of self-crossings of all components from top points to base points in the directions of the orientations. On a component \( c \), let \( X \) be the first self-crossing on \( c \) after top point \( t \) ( before the base point \( b \) is reached). If, on travelling from \( t \), the crossing is an over-pass, \( h \) can be changed to be increasing from \( t \) to just beyond \( X \) and still be an untying function. Then \( \Lambda \left( D\right) = {a}^{\bar{w}\left( D\right) }{\delta }^{\# D - 1} \) by the sub-induction. Otherwise \( X \) is encountered as an under-pass. It must be an under-passing of part of \( c \) from \( b \) to \( t \) ; this follows from the monotonicity properties of \( h \) . The situation is illustrated in Figure 15.7(a), where \( h \) is to be thought of as decreasing along broken lines and increasing along unbroken lines. 
Calculate \( \Lambda \left( D\right) \) using \( \left( {\star \star }\right) \) applied to the crossing \( X \) . Changing the crossing gives a diagram \( {D}^{\prime } \), and this diagram has an untying function with the top point moved nearer to base point \( b \) . Thus, by the sub-induction, \( \Lambda \left( {D}^{\prime }\right) = {a}^{\bar{w}\left( {D}^{\prime }\right) }{\delta }^{\# D - 1} \) . The diagrams \( {D}_{0} \) and \( {D}_{\infty } \) have \( n - 1 \) crossings and are as in Figure 15.7(c) and Figure 15.7(d); these diagrams have untying functions as indicated by the broken and unbroken lines. Thus \( \Lambda \left( {D}_{0}\right) \) and \( \Lambda \left( {D}_{\infty }\right) \) are known by induction on \( n \) . Of course, \( \bar{w}\left( {D}_{\infty }\right) = \bar{w}\left( {D}_{0}\right) \), and \( \bar{w}\left( D\right) \) and \( \bar{w}\left( {D}^{\prime }\right) \) are \( \bar{w}\left( {D}_{0}\right) + 1 \) and \( \bar{w}\left( {D}_{0}\right) - 1 \) . Further, \( {D}_{0} \) has one more component than the other diagrams. It then follows at once from \( \left( {\star \star }\right) \) that \( \Lambda \left( D\right) = {a}^{\bar{w}\left( D\right) }{\delta }^{\# D - 1} \), and the induction argument is complete. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_186_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_186_0.jpg) Figure 15.7 Consider an ordered oriented based diagram \( D \) and the associated ascending diagram \( {\alpha D} \) . Suppose the component ordering is kept fixed but that the orientation on some component \( c \) is reversed. Let \( {\beta D} \) be the resulting ascending diagram. Then \( {\beta D} \) has an untying function (and untying functions ignore orientation). 
Thus, with respect to the original orientation on \( {\beta D},\Lambda \left( {\beta D}\right) = {a}^{\bar{w}\left( {\beta D}\right) }{\delta }^{\# D - 1} \) . This means that the definition of \( \Lambda \left( D\right) \) does not depend on choice of the orientations used in defining ascending diagrams. At this stage any possible ambiguity in \( \Lambda \left( D\right) \) depends only on the chosen ordering of components. However, that this choice, too, is irrelevant follows exactly as in Theorem 15.2. So with the induction on \( n \) complete, the theorem is proved, for uniqueness follows easily as before. A variant in the signs for the Kauffman polynomial is sometimes useful. (The resulting polynomial is sometimes called the "Dubrovnik" polynomial [60].) This is based on a function \[ {\Lambda }^{ \star } : \left\{ {\text{ Unoriented link diagrams in }{S}^{2}}\right\} \rightarrow \mathbb{Z}\left\lbrack {{\alpha }^{\pm 1},{\omega }^{\pm 1}}\right\rbrack \] that is defined in exactly the same way as is \( \Lambda \) in Theorem 15.5 (with \( \alpha \) in place of \( a \) and \( \omega \) in place of \( z \) ) except that (iv) is replaced by \[ {\Lambda }^{ \star }\left( {D}_{ + }\right) - {\Lambda }^{ \star }\left( {D}_{ - }\right) = \omega \left( {{\Lambda }^{ \star }\left( {D}_{0}\right) - {\Lambda }^{ \star }\left( {D}_{\infty }\right) }\right) . \] If an oriented link \( L \) is represented by diagram \( D \), define \( {F}^{ \star }\left( L\right) = {\alpha }^{-w\left( D\right) }{\Lambda }^{ \star }\left( D\right) \) . It is, however, fairly easy to verify that if \( L \) has \( \# L \) components, then \[ {F}^{ \star }\left( L\right) = {\left( -1\right) }^{\# L - 1}{\left\lbrack F\left( L\right) \right\rbrack }_{\left( {a, z}\right) = \left( {{i\alpha }, - {i\omega }}\right) }, \] where \( {i}^{2} = - 1 \) . Thus this variant contains no additional information.

## Exercises

1.
Evaluate the HOMFLY and Kauffman polynomials for each of the three knots with crossing number 6.

2. Suppose that the HOMFLY polynomial exists and satisfies the criteria of the statement of Theorem 15.2. Show that if \( L \) is the trivial link with \( \# L \) components then \( P\left( L\right) = {\mu }^{\# L - 1} \), where \( \mu = - {m}^{-1}\left( {l + {l}^{-1}}\right) \).

3. Suppose that an oriented link \( {L}^{\prime } \) is obtained from oriented link \( L \) by reversing the direction of one of the components of \( L \). Show, by considering specific examples, that there is no simple multiplicative formula relating \( P\left( L\right) \) and \( P\left( {L}^{\prime }\right) \) of the type that exists for the Jones and Kauffman polynomials.

4. Show that there exists a version \( {P}^{ \star }\left( L\right) \) of the HOMFLY polynomial invariant of oriented links that is a function of indeterminates \( x, y \), and \( z \), with the property that \( {P}^{ \star }\left( \text{unknot}\right) = 1 \) and \[ x{P}^{ \star }\left( {L}_{ + }\right) + y{P}^{ \star }\left( {L}_{ - }\right) + z{P}^{ \star }\left( {L}_{0}\right) = 0. \] Here, as usual, \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are three oriented links identical except within a ball where, respectively, they have a positive crossing, a negative crossing and no crossing. Show that \( {P}^{ \star }\left( L\right) \) is homogeneous in \( x, y \), and \( z \) and determine the relationship between \( {P}^{ \star }\left( L\right) \) and \( P\left( L\right) \).

5. Show that the existence of the HOMFLY polynomial implies that \( X\left( L\right) \), an invariant of an oriented link \( L \), that is a function of \( l, m, a \) and \( z \), can be defined by \[ X\left( \text{unknot}\right) = a\text{ and }{lX}\left( {L}_{ + }\right) + {l}^{-1}X\left( {L}_{ - }\right) + {mX}\left( {L}_{0}\right) + z = 0. \] Here \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are related in the usual way.

6.
Show that an invariant \( Y\left( L\right) \) of an oriented link \( L \), with \( Y\left( L\right) \) a function of an indeterminate \( x \), can be defined by \[ Y\left( \text{unknot}\right) = 1\text{ and }Y\left( {L}_{ - }\right) Y\left( {L}_{0}\right) = x\left( {Y\left( {L}_{ + }\right) - Y\left( {L}_{ - }\right) - Y\left( {L}_{0}\right) }\right) . \]

7. Show that if \( L \) is a split link, then under the substitution \( \left( {a, z}\right) = \left( {q,{q}^{-1} + q}\right) \), the polynomial \( F\left( L\right) \) is zero.

8. For an oriented diagram \( D \) of an oriented link \( L \), define \( \widehat{P}\left( D\right) \) by \( \widehat{P}\left( D\right) = {l}^{w\left( D\right) }P\left( L\right) \). Show that (i) if \( D \) is changed by regular isotopy, then \( \widehat{P}\left( D\right) \) does not change; (ii) if \( {D}_{ + },{D}_{ - } \) and \( {D}_{0} \) are oriented diagrams related in the usual way, then \[ \widehat{P}\left( {D}_{ + }\right) + \widehat{P}\left( {D}_{ - }\right) + m\widehat{P}\left( {D}_{0}\right) = 0; \] (iii) if \( {D}^{\prime } \) is \( D \) with a positive kink removed, then \( \widehat{P}\left( D\right) = l\widehat{P}\left( {D}^{\prime }\right) \).

9. What is the value of the "Dubrovnik" polynomial \( {F}^{ * }\left( L\right) \) of the oriented link \( L \) when \( \alpha = 1 \)?

10. For a knot \( K \), let \( Q\left( K\right) \) be the polynomial in \( z \) obtained by substituting \( a = 1 \) in \( F\left( K\right) \). If \( K \) has an alternating diagram with \( n \) crossings, show that the degree of \( Q\left( K\right) \) is at most \( n - 1 \).

## 16 Exploring the HOMFLY and Kauffman Polynomials

Elementary properties of the Jones polynomial have already been discussed in Chapter 3. Versions of some of those results which hold equally well for the HOMFLY and Kauffman polynomials are given below. Where proofs are essentially the same as those relating to the Jones polynomial, they are left as an exercise. 
Proposition 16.1. If \( L \) is an oriented link and \( \bar{L} \) is its reflection, then (i) changing the signs of both variables leaves \( P\left( L\right) \) and \( F\left( L\right) \) unchanged; (ii) \( \overline{P\left( L\right) } = P\left( \bar{L}\right) \) where \( \bar{l} = {l}^{-1} \) and \( \bar{m} = m \) ; (iii) \( \overline{F\left( L\right) } = F\left( \bar{L}\right) \) where \( \bar{a} = {a}^{-1} \) and \( \bar{z} = z \) . Proposition 16.2. If \( {L}_{1} \) and \( {L}_{2} \) are oriented links, then (i) \( P\left( {{L}_{1} + {L}_{2}}\right) = P\left( {L}_{1}\right) P\left( {L}_{2}\right) \) ; (ii) \( F\left( {{L}_{1} + {L}_{2}}\right) = F\left( {L}_{1}\right) F\left( {L}_{2}\right) \) ; (iii) \( P\left( {{L}_{1} \sqcup {L}_{2}}\right) = - \left( {l + {l}^{-1}}\right) {m}^{-1}P\left( {L}_{1}\right) P\left( {L}_{2}\right) \) ; (iv) \( F\left( {{L}_{1} \sqcup {L}_{2}}\right) = \left( {\left( {a + {a}^{-1}}\right) {z}^{-1} - 1}\right) F\left( {L}_{1}\right) F\left( {L}_{2}\right) \) . Note that in the above " \( \sqcup \) " denotes the separated (or split) union of links. Note too that the sum of oriented links is not well defined; it depends upon which components are used to produce the sum. The above results are true whatever components are selected, and so by varying that selection, different links are obtained with the same polynomials. Proposition 16.3. Both \( P\left( L\right) \) and \( F\left( L\right) \) are unchanged by mutation of \( L \) . Mutation was discussed in Chapter 3, with a famous example shown in Figure 3.3. That example thus produces different knots with the same polynomials. In [54], T. Kanenobu gave infinitely many distinct knots all with the same HOMFLY polynomial (and hence also the same Jones polynomial). Proposition 16.4. 
If the oriented link \( {L}^{ * } \) is obtained from the oriented link \( L \) by reversing the orientation of one component \( K \), then \[ F\left( {L}^{ * }\right) = {a}^{4\operatorname{lk}\left( {K, L - K}\right) }F\left( L\right) . \] Changing the orientation of all components of \( L \) leaves both \( P\left( L\right) \) and \( F\left( L\right) \) unchanged. The HOMFLY polynomial and the Kauffman polynomial are independent invariants in the sense that they distinguish different pairs of knots. Thus neither polynomial can be seen to be trivially contained in the other by means of some subtle change of variables. Examples are shown in Figure 16.1. The knots \( {8}_{8} \) and \( {10}_{129} \) have the same HOMFLY polynomial but distinct Kauffman polynomials (even when taking the variable \( a = 1 \) ). Knots \( {11}_{255} \) and \( {11}_{257} \) have the same Kauffman polynomial but distinct HOMFLY polynomials (and even have distinct Alexander polynomials). ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_190_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_190_0.jpg) Figure 16.1 Skein formulae have already been given for the Jones polynomial, in Proposition 3.7, and for the Conway-normalised Alexander polynomial in Theorem 8.6. Those results just mean that certain substitutions of variables in the HOMFLY polynomial give the Jones polynomial on the one hand or the Alexander polynomial on the other. 
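Proposition 16.1(ii), from earlier in the chapter, can be seen in action on the trefoil and its reflection. The two HOMFLY polynomials below are the standard ones for the two trefoils (which chirality gets which is a matter of sign convention, taken here as an assumption); sympy confirms that they differ exactly by \( l \mapsto l^{-1} \).

```python
import sympy as sp

l, m = sp.symbols('l m')

# HOMFLY polynomial of one trefoil and of its reflection (mirror image).
P_trefoil = -l**-4 - 2*l**-2 + l**-2 * m**2
P_mirror = -l**4 - 2*l**2 + l**2 * m**2

# Proposition 16.1(ii): P of the reflection is P with l replaced by l^{-1}.
print(sp.simplify(P_trefoil.subs(l, 1/l) - P_mirror))  # 0
```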
The precise results, in the notation used here, are as follows: Proposition 16.5. For an oriented link \( L \), the Conway-normalised Alexander polynomial \( {\Delta }_{L}\left( t\right) \) and the Jones polynomial \( V\left( L\right) \) are related to the HOMFLY polynomial \( P\left( L\right) \) by \[ {\Delta }_{L}\left( t\right) = P{\left( L\right) }_{\left( i, i\left( {t}^{1/2} - {t}^{-1/2}\right) \right) }\;\text{ and }\;V\left( L\right) = P{\left( L\right) }_{\left( i{t}^{-1}, i\left( {t}^{-1/2} - {t}^{1/2}\right) \right) }, \] where \( {i}^{2} = - 1 \) . The Alexander polynomial is not contained within the Kauffman polynomial, as the example above shows. However, the Jones polynomial is hidden in the Kauffman polynomial in two ways. Proposition 16.6. For an oriented link \( L \) , \[ V\left( L\right) = F\left( L\right) \text{ when }\left( {a, z}\right) = \left( {-{t}^{-3/4},\;{t}^{-1/4} + {t}^{1/4}}\right) , \] \[ {\left( V\left( L\right) \right) }^{2} = {\left( -1\right) }^{\# L - 1}F\left( L\right) \text{ when }t = - {q}^{-2},\;\left( {a, z}\right) = \left( {{q}^{3},{q}^{-1} + q}\right) . \] Proof. Underlying the Jones polynomial is the Kauffman bracket. Reverting to the notation of Definition 3.1, and writing \( {D}_{ + },{D}_{ - },{D}_{0} \) and \( {D}_{\infty } \) for diagrams that differ only at one crossing in the usual way, the defining relations of the bracket are \[ \langle {D}_{ + }\rangle = A\langle {D}_{0}\rangle + {A}^{-1}\langle {D}_{\infty }\rangle \] \[ \langle {D}_{ - }\rangle = {A}^{-1}\langle {D}_{0}\rangle + A\langle {D}_{\infty }\rangle . \] Adding these equations gives \[ \langle {D}_{ + }\rangle + \langle {D}_{ - }\rangle = \left( {A + {A}^{-1}}\right) \left( {\langle {D}_{0}\rangle + \langle {D}_{\infty }\rangle }\right) . \] Moreover, a diagram with a positive kink has bracket \( - {A}^{3} \) times that of the diagram with the kink removed, and the Kauffman bracket is invariant under regular isotopy (Reidemeister Type II and III moves). These are the defining rules for the polynomial \( \Lambda \left( D\right) \) of Theorem 15.5 with the variables changed to \( \left( {a, z}\right) = \left( {-{A}^{3}, A + {A}^{-1}}\right) \) . 
The substitution \( t = {A}^{-4} \) gives the first result. Subtracting the square of one of the above two equations from the square of the other gives \[ {\langle {D}_{ + }\rangle }^{2} - {\langle {D}_{ - }\rangle }^{2} = \left( {{A}^{2} - {A}^{-2}}\right) \left( {{\langle {D}_{0}\rangle }^{2} - {\langle {D}_{\infty }\rangle }^{2}}\right) . \] Of course, a diagram with a positive kink has squared bracket \( {A}^{6} \) times that of the diagram with the kink removed, and so the square of the Kauffman bracket is an instance of the \( {\Lambda }^{ \star }\left( D\right) \) polynomial, defined at the very end of Chapter 15, that satisfies \[ {\Lambda }^{ \star }\left( {D}_{ + }\right) - {\Lambda }^{ \star }\left( {D}_{ - }\right) = \omega \left( {{\Lambda }^{ \star }\left( {D}_{0}\right) - {\Lambda }^{ \star }\left( {D}_{\infty }\right) }\right) . \] Now, translating the notation by \( \omega = {A}^{2} - {A}^{-2},\;\alpha = {A}^{6},\; t = - {q}^{-2} = {A}^{-4} \) and (from Chapter 15) \( \left( {a, z}\right) = \left( {{i\alpha }, - {i\omega }}\right) \), the second result follows. Many of the best simple applications of the skein theoretic polynomials refer to the Jones and Alexander polynomials and have already been considered (particularly in Chapter 5). However, the following result is a significant direct application of the HOMFLY polynomial; it is one of the best applications of that polynomial to geometric questions. Versions of it first appeared in [27] and [97]. The result gives, particularly in its corollary, some information about the complexity that must exist in a diagram of a specific link. As convenient notation, let \( {E}_{l}\left( {P\left( L\right) }\right) \) and \( {e}_{l}\left( {P\left( L\right) }\right) \) be the maximum and minimum exponents of \( l \) that appear in the HOMFLY polynomial \( P\left( L\right) \) of an oriented link \( L \) . Theorem 16.7. 
Suppose that an oriented link \( L \) is represented by a diagram \( D \) with writhe \( w\left( D\right) \), having \( s\left( D\right) \) Seifert circuits and \( n\left( D\right) \) crossings. Then the degrees of \( m \) that occur in \( P\left( L\right) \) are bounded above by \( n\left( D\right) - s\left( D\right) + 1 \), and \[ - w\left( D\right) - s\left( D\right) + 1 \leq {e}_{l}\left( {P\left( L\right) }\right) \leq {E}_{l}\left( {P\left( L\right) }\right) \leq - w\left( D\right) + s\left( D\right) - 1. \] Proof. The first inequality asserts that \( {E}_{m}\left( {P\left( L\right) }\right) \leq n\left( D\right) - s\left( D\right) + 1 \), where \( {E}_{m}\left( {P\left( L\right) }\right) \) is the maximum exponent of \( m \) occurring in \( P\left( L\right) \) . This will be proved by induction on \( n\left( D\right) \) . If \( n\left( D\right) = 0 \), then \( s\left( D\right) = \# D \) and \( P\left( D\right) = {\mu }^{\# D - 1} \) where \( \mu = - {m}^{-1}\left( {l + {l}^{-1}}\right) \), and the result follows. The skein relation for \( P\left( D\right) \) is \( {lP}\left( {D}_{ + }\right) + {l}^{-1}P\left( {D}_{ - }\right) + {mP}\left( {D}_{0}\right) = 0 \), where \( {D}_{ + },{D}_{ - } \) and \( {D}_{0} \), being diagrams related in the usual way, have the same number of Seifert circuits. The inequality is by induction true on \( {D}_{0} \), so using induction again on the number of crossings that need to be changed to achieve an ascending diagram, it is just necessary to prove the result for ascending diagrams. That is, it is required to prove for an ascending diagram \( D \) that \( - \left( {\# D - 1}\right) \leq n\left( D\right) - s\left( D\right) + 1 \), or that \( s\left( D\right) \leq n\left( D\right) + \# D \) . However, this inequality is true for any link diagram, as can be seen in the following way, again by induction on \( n\left( D\right) \) . It is clear when \( n\left( D\right) = 0 \) . 
Let \( {D}^{\prime } \) be the result of annulling a crossing of \( D \) . Then the inequality is true for \( {D}^{\prime } \), and clearly \( s\left( D\right) = \) \( s\left( {D}^{\prime }\right) ,\# D = \# {D}^{\prime } \pm 1 \) and \( n\left( D\right) = n\left( {D}^{\prime }\right) + 1 \) . That gives the inequality for \( D \) . To consider the second inequality, define for an oriented link diagram \( D \) the Laurent polynomial \( X\left( D\right) = {l}^{w\left( D\right) }P\left( D\right) \) . It is required to show that \( {E}_{l}\left( {X\left( D\right) }\right) \leq \) \( s\left( D\right) - 1 \) . Proceed again by induction on \( n \) ; the inequality is clearly true when \( n = 0 \) . The skein relation for \( X\left( D\right) \) is \( X\left( {D}_{ + }\right) + X\left( {D}_{ - }\right) + {mX}\left( {D}_{0}\right) = 0 \) . Again, \( {D}_{ + },{D}_{ - } \) and \( {D}_{0} \) all have the same number of Seifert circuits and, as the required inequality is true by induction for \( {D}_{0} \), it is sufficient to prove it for ascending diagrams. Suppose \( D \) is ascending so that \( P\left( D\right) = {\mu }^{\# D - 1} \) . It is required to show that \( w\left( D\right) + \# D \leq s\left( D\right) \) . Suppose that in \( D \), some crossing of a component with itself is annulled to give another ascending diagram \( {D}^{\prime } \) (the self-crossing following a base point will do). By induction on the number of crossings, \( w\left( {D}^{\prime }\right) + \# {D}^{\prime } \leq \) \( s\left( {D}^{\prime }\right) = s\left( D\right) \) . However, \( \# {D}^{\prime } = \# D + 1 \) and \( w\left( {D}^{\prime }\right) = w\left( D\right) \pm 1 \), and so the inequality is true for \( D \) . If there is no crossing at which a component crosses itself, let \( {D}^{\prime \prime } \) be obtained from \( D \) by annulling a negative crossing where one component crosses another. This can be done, as all linking numbers are zero. 
Choose the two components as close as possible, in the ordering of the components of \( D \) as an ascending diagram, and then \( {D}^{\prime \prime } \) is also ascending. By induction on the number of crossings, \( w\left( {D}^{\prime \prime }\right) + \# {D}^{\prime \prime } \leq s\left( {D}^{\prime \prime }\right) = s\left( D\right) \) . Now \( w\left( {D}^{\prime \prime }\right) = w\left( D\right) + 1 \) and \( \# {D}^{\prime \prime } = \# D - 1 \), and so the inequality holds for \( D \) . The inequality \( - w\left( D\right) - s\left( D\right) + 1 \leq {e}_{l}\left( {P\left( L\right) }\right) \) can be proved similarly or deduced from the above by reflection of the diagram. Corollary 16.8. The \( l \) -breadth of \( P\left( L\right) \) satisfies \( {E}_{l}\left( {P\left( L\right) }\right) - {e}_{l}\left( {P\left( L\right) }\right) \leq 2\left( {s\left( D\right) - 1}\right) \) . The significance of this corollary is that for an oriented link \( L \) it gives a lower bound on the number of Seifert circuits in any diagram that might represent \( L \) . 
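Proposition 16.5 and Corollary 16.8 can be checked by direct computation. The following is a minimal sketch, assuming the sympy library is available; the HOMFLY polynomial of the trefoil, \( P = -2l^2 - l^4 + l^2 m^2 \), is taken from Table 16.1 (knot \( 3_1 \)), and the variable names are illustrative.

```python
# A sketch (using sympy) checking Proposition 16.5 and Corollary 16.8
# on a trefoil, whose HOMFLY polynomial (Table 16.1, knot 3_1) is
#   P = -2 l^2 - l^4 + l^2 m^2.
import sympy as sp

l, m, t = sp.symbols('l m t')
P = -2*l**2 - l**4 + l**2*m**2

# Proposition 16.5: (l, m) = (i, i(t^(1/2) - t^(-1/2))) recovers the
# Conway-normalised Alexander polynomial, here t - 1 + t^(-1).
Delta = sp.simplify(P.subs({l: sp.I, m: sp.I*(sp.sqrt(t) - 1/sp.sqrt(t))}))

# (l, m) = (i t^(-1), i(t^(-1/2) - t^(1/2))) recovers the Jones polynomial,
# here -t^(-4) + t^(-3) + t^(-1), that of a left-handed trefoil.
V = sp.simplify(P.subs({l: sp.I/t, m: sp.I*(1/sp.sqrt(t) - sp.sqrt(t))}))

# Corollary 16.8: the l-breadth of P bounds the Seifert circuit count.
exponents = [mono[0] for mono in sp.Poly(P, l).monoms()]
breadth = max(exponents) - min(exponents)   # E_l - e_l = 2
seifert_lower_bound = breadth // 2 + 1      # s(D) >= 2 for every diagram
```

The bound obtained is sharp here: every diagram of the trefoil has at least two Seifert circuits.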
The minimum number of such Seifert circuits is known to be equal to another invariant, the "braid index" of \( L \), which is defined to be the minimal \( n \) for which \( L \) can be described as the closure of an \( n \) -string braid (see [136]), so the corollary gives a lower bound for the braid index. Applications of the Kauffman polynomial have been explored by Thistlethwaite in [119] and [120] and by Kidwell [64]. Some, though not all, of those results now follow from the technique of Chapter 5. One result of [64] is that if \( Q\left( L\right) \) is the polynomial in \( z \) obtained by the substitution \( a = 1 \) in \( F\left( L\right) \), and \( L \) has a diagram with \( n\left( D\right) \) crossings, then \[ \text{degree}Q\left( L\right) \leq n\left( D\right) - b\left( D\right) \text{,} \] where \( b\left( D\right) \) is the maximum number of consecutive over-passes that occur anywhere in the diagram. This \( b\left( D\right) \) is called the bridge length of the diagram \( D \) ; if \( D \) is alternating then \( b\left( D\right) = 1 \) . It may be helpful to have a rough idea of the appearance of these polynomial invariants for knots. There follow two tables giving the values of the HOMFLY polynomial for knots up to eight crossings (as depicted in Chapter 1) and the Kauffman polynomial for knots up to \( {8}_{7} \) . The HOMFLY polynomial of a knot, in the notation of the last chapter, is of the form \( \mathop{\sum }\limits_{{i \geq 0}}{p}_{i}\left( {l}^{2}\right) {m}^{i} \), where each \( {p}_{i}\left( {l}^{2}\right) \) is a Laurent polynomial in \( {l}^{2} \); \( {p}_{i}\left( {l}^{2}\right) \) is zero if \( i \) is odd and also if \( i \) is sufficiently large. These simple facts are exploited in the table, which gives (in notation due to Thistlethwaite), for each knot listed, the coefficients in the polynomial. 
The numbers in the \( i \) th brackets give the coefficients in \( {p}_{2\left( {i - 1}\right) }\left( {l}^{2}\right) \), the bold face number being the coefficient of \( {l}^{0} \) . Thus, for example, the knot \( {7}_{7} \) has polynomial \( \left( {{l}^{-4} + 2{l}^{-2} + 2}\right) + \left( {-2{l}^{-2} - }\right. \) \( \left. {2 - {l}^{2}}\right) {m}^{2} + {m}^{4} \), and this is abbreviated in the table to \( \left( {122}\right) \left( {-2 - 2 - 1}\right) \left( 1\right) \) . The Kauffman polynomial of a knot, in the notation of the last chapter, is of the form \( \mathop{\sum }\limits_{{i \geq 0}}{q}_{i}\left( a\right) {z}^{i} \), where \( {q}_{i}\left( a\right) \) is a Laurent polynomial in \( a \) . However, \( {q}_{i}\left( a\right) \) contains only odd powers of \( a \) if \( i \) is odd and only even powers if \( i \) is even. If \( i \) is sufficiently large, \( {q}_{i} \) is zero. The table given here for Kauffman polynomials uses these elementary facts. Again in notation due to Thistlethwaite, the conventions are as follows: For a knot listed, the numbers in the \( {i}^{\text{th }} \) bracket give the coefficients in \( {q}_{i}\left( a\right) \) . If \( i \) is even, the coefficients listed are of the even powers of \( a \) (for the others are zero), the bold face number being the coefficient of \( {a}^{0} \) . If \( i \) is odd, the coefficients listed are of the odd powers of \( a \), the star denoting the divide between negative and positive powers. Many of the listings occupy two lines. 
Thus, for example, the knot \( {6}_{1} \) has polynomial \[ \left( {-{a}^{-2} + {a}^{2} + {a}^{4}}\right) + \left( {{2a} + 2{a}^{3}}\right) z + \left( {{a}^{-2} - 4{a}^{2} - 3{a}^{4}}\right) {z}^{2} \] \[ + \left( {{a}^{-1} - {2a} - 3{a}^{3}}\right) {z}^{3} + \left( {1 + 2{a}^{2} + {a}^{4}}\right) {z}^{4} + \left( {a + {a}^{3}}\right) {z}^{5}, \] and this is encoded as \[ \left( {-{1011}}\right) \left( {\star {22}}\right) \left( {{10} - 4 - 3}\right) \left( {1 \star - 2 - 3}\right) \left( {121}\right) \left( {\star {11}}\right) \text{.} \] A glance at the tables of the HOMFLY and Kauffman polynomials (Tables 16.1 and 16.2) reveals that for any knot the first entry is the same in the two tables. That is an instance of the following result, the proof of which is left as an easy exercise. Proposition 16.9. Suppose \( L \) is an oriented link with \( \# L \) components. Then \( (1 - \) \( \# L) \) is the lowest power both of \( m \) in \( P\left( L\right) \) and of \( z \) in \( F\left( L\right) \), and \[ {\left\lbrack {z}^{\# L - 1}F\left( L\right) \right\rbrack }_{\left( {a, z}\right) = \left( {l,0}\right) } = {\left\lbrack {\left( -m\right) }^{\# L - 1}P\left( L\right) \right\rbrack }_{m = 0}. \] Perhaps the elegance of this result is a quirk of notation, but it serves to focus attention on the polynomial \( {p}_{0}\left( {l}^{2}\right) \) . That invariant has been used by P. Traczyk [122] to provide a necessary condition that a knot should have a certain type of symmetry. 
The symmetry envisaged is that the knot might be (set-wise) invariant <table><thead><tr><th colspan="5">TABLE 16.1.HOMFLY Polynomial Table</th></tr><tr><th>3</th><th>\( \left( {0 - 2 - 1}\right) \)</th><th>(01)</th><td rowspan="2"></td><td rowspan="7"></td></tr><tr><td>\( {4}_{1} \)</td><td>\( \left( {-1 - 1 - 1}\right) \)</td><td>(1)</td></tr><tr><td>\( {5}_{1} \)</td><td>(0 0 3 2)</td><td>\( \left( {{00} - 4 - 1}\right) \)</td><td rowspan="3">\( \left( \begin{array}{lll} \mathbf{0} & 0 & 1 \end{array}\right) \)</td></tr><tr><td>\( {5}_{2} \)</td><td>\( \left( {0 - {111}}\right) \)</td><td>\( \left( {{01} - 1}\right) \)</td></tr><tr><td>\( {\mathbf{6}}_{1} \)</td><td>(-1 0 1 1)</td><td>(1 -1)</td></tr><tr><td>\( {\mathbf{6}}_{2} \)</td><td>(2 2 1)</td><td>\( \left( {-1 - 3 - 1}\right) \)</td><td>(01)</td></tr><tr><td>\( {\mathbf{6}}_{3} \)</td><td>(13 1)</td><td>\( \left( {-1 - 3 - 1}\right) \)</td><td>(1)</td></tr></thead><tr><td>\( {7}_{1} \) \( {7}_{2} \)</td><td>\( \left( {{000} - 4 - 3}\right) \) \( \left( {\left\lbrack \begin{matrix} \mathbf{0} & - 1 \end{matrix}\right\rbrack \left\lbrack \begin{matrix} 0 & - 1 \end{matrix}\right\rbrack \left\lbrack \begin{matrix} - 1 \end{matrix}\right\rbrack }\right) \)</td><td>\( \left( \begin{array}{llllll} \mathbf{0} & 0 & 0 & 1 & 0 & 4 \end{array}\right) \) (01 -11 1)</td><td>\( \left( {{000} - 6 - 1}\right) \)</td><td rowspan="6">\( \left( \begin{array}{llll} \mathbf{0} & 0 & 0 & 1 \end{array}\right) \)</td></tr><tr><td>\( {7}_{3} \) 74</td><td>\( \left( {-2 - {2100}}\right) \) (-1 0 2 0 0)</td><td>(1 3 -3 0 0) (1 -2 1 0)</td><td>\( \left( {-{1100}}\right) \)</td></tr><tr><td>7s</td><td>(0 0 2 0 -1)</td><td>(0 0 -3 2 1)</td><td>\( \left( {{001} - 1}\right) \)</td></tr><tr><td>76</td><td>(1121)</td><td>\( \left( {-1 - 2 - 2}\right) \)</td><td>(01)</td></tr><tr><td>\( {7}_{7} \)</td><td>(1 2 2)</td><td>\( \left( {-2 - 2 - 1}\right) \)</td><td>(1)</td></tr><tr><td>\( {8}_{1} \)</td><td>\( \left( {-{100} - 1 - 1}\right) 
\)</td><td>\( \left( {1 - {11}}\right) \)</td><td></td></tr><tr><td>\( {8}_{2} \)</td><td>\( \left( {0 - 3 - 3 - 1}\right) \)</td><td>(0 4 7 3)</td><td>\( \left( {0 - 1 - 5 - 1}\right) \)</td><td rowspan="3">(001)</td></tr><tr><td>\( {8}_{3} \)</td><td>(1 0 -1 0 1)</td><td>\( \left( {-{12} - 1}\right) \)</td><td></td></tr><tr><td>\( {8}_{4} \)</td><td>(-2 -2 0 1)</td><td>(1 3 -2 -1)</td><td>(-1 1)</td></tr><tr><td>\( {8}_{5} \)</td><td>\( \left( {-2 - 5 - 4\mathbf{0}}\right) \)</td><td>(3 8 4 0)</td><td>\( \left( {-1 - 5 - 1\mathbf{0}}\right) \)</td><td rowspan="2">\( \left( \begin{array}{lll} 1 & 0 & \mathbf{0} \end{array}\right) \)</td></tr><tr><td>\( {8}_{6} \)</td><td>\( \left( {{21} - 1 - 1}\right) \)</td><td>(-1 -2 2 1)</td><td>(01 -1)</td></tr><tr><td>\( {8}_{7} \)</td><td>\( \left( {-2 - 4 - 1}\right) \)</td><td>(3 8 3)</td><td>\( \left( {-1 - 5 - 1}\right) \)</td><td rowspan="2">(1 0)</td></tr><tr><td>\( {8}_{8} \)</td><td>\( \left( {-1 - {121}}\right) \)</td><td>(1 2 -2 -1)</td><td>(-1 1)</td></tr><tr><td>\( {8}_{9} \)</td><td>\( \left( {-2 - 3 - 2}\right) \)</td><td>(3 8 3)</td><td>\( \left( {-1 - 5 - 1}\right) \)</td><td>(1)</td></tr><tr><td>\( {8}_{10} \)</td><td>\( \left( {-3 - 6 - 2}\right) \)</td><td>(3 9 3)</td><td>\( \left( {-1 - 5 - 1}\right) \)</td><td rowspan="6">(1 0)</td></tr><tr><td>\( {8}_{11} \)</td><td>\( \left( {1 - 1 - 2 - 1}\right) \)</td><td>(-1 -1 2 1)</td><td>\( \left( {{01} - 1}\right) \)</td></tr><tr><td>\( {8}_{12} \)</td><td>(11111)</td><td>\( \left( {-2 - 1 - 2}\right) \)</td><td>(1)</td></tr><tr><td>\( {8}_{13} \)</td><td>(0 -2 -1)</td><td>(-1 -1 2 1)</td><td>(1 -1)</td></tr><tr><td>\( {8}_{14} \)</td><td>(1)</td><td>\( \left( {-1 - {111}}\right) \)</td><td>(01 -1)</td></tr><tr><td>\( {8}_{15} \)</td><td>\( \left( {{001} - 3 - 4 - 1}\right) \)</td><td>(0 0 -2 5 3)</td><td>(0 0 1 -2)</td></tr><tr><td>\( {8}_{16} \)</td><td>(0 -2 -1)</td><td>(25 2)</td><td>\( \left( {-1 - 4 - 1}\right) \)</td><td>(01)</td></tr><tr><td>\( {8}_{17} \)</td><td>\( \left( {-1 - 1 - 1}\right) \)</td><td>(25 2)</td><td>\( \left( {-1 - 4 - 1}\right) \)</td><td>(1)</td></tr><tr><td>\( {8}_{18} \)</td><td>(13 1)</td><td>(11 1)</td><td>\( \left( {-1 - 3 - 1}\right) \)</td><td>(1)</td></tr><tr><td>\( {8}_{19} \)</td><td>(-1 -5 -5 0 0 0)</td><td>(5 10 0 0 0)</td><td>\( \left( {-1 - {6000}}\right) 
\)</td><td rowspan="3">\( \left( \begin{array}{llll} 1 & 0 & 0 & \mathbf{0} \end{array}\right) \)</td></tr><tr><td>\( {\mathbf{8}}_{\mathbf{{20}}} \)</td><td>\( \left( {-1 - 4 - 2}\right) \)</td><td>(14 1)</td><td>\( \left( {0 - 1}\right) \)</td></tr><tr><td>\( {8}_{21} \)</td><td>\( \left( {0 - 3 - 3 - 1}\right) \)</td><td>(0 2 3 1)</td><td>\( \left( {{00} - 1}\right) \)</td></tr></table> TABLE 16.2. Kauffman Polynomial Table <table><tr><td colspan="6">--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------</td></tr><tr><td>\( {3}_{1} \)</td><td>\( \left( {0 - 2 - 1}\right) \)</td><td>\( \left( \begin{array}{llll} \star & 0 & 1 & 1 \end{array}\right) \)</td><td>(011)</td><td></td><td></td></tr><tr><td>\( {4}_{1} \)</td><td>\( \left( {-1 - 1 - 1}\right) \)</td><td>\( \left( {-1 \star - 1}\right) \)</td><td>(1 2 1)</td><td>\( \left( {1 \star 1}\right) \)</td><td></td></tr><tr><td>\( {5}_{1} \)</td><td>( 0 0 3 2) \( \left( \begin{array}{llll} \mathbf{0} & 0 & 1 & 1 \end{array}\right) \)</td><td>\( \left( {\star {00} - 2 - {11}}\right) \)</td><td>(0 0 -4 -3 1)</td><td>\( \left( {\star {0011}}\right) \)</td><td></td></tr><tr><td>\( {5}_{2} \)</td><td>(0 -1 1 1) \( \left( \begin{array}{llll} \mathbf{0} & 0 & 1 & 1 \end{array}\right) \)</td><td>\( \left( {\star {00} - 2 - 2}\right) \)</td><td>\( \left( {{01} - 1 - 2}\right) \)</td><td>\( \left( {\star {0121}}\right) \)</td><td></td></tr><tr><td>\( {\mathbf{6}}_{1} \)</td><td>(-1 0 1 1) (1 2 1)</td><td>\( \left( \begin{array}{lll} \star & 2 & 2 \end{array}\right) \) \( \left( \begin{array}{lll} \star & 1 & 1 \end{array}\right) \)</td><td>(1 0 -4 -3)</td><td>\( \left( {1 \star - 2 - 3}\right) \)</td><td></td></tr><tr><td>\( {6}_{2} \)</td><td>(2 2 1) (13 2)</td><td>\( \left( {\star 0 - 1 - 1}\right) \) \( \left( 
\begin{array}{lll} \star & 1 & 1 \end{array}\right) \)</td><td>\( \left( {-3 - 6 - {21}}\right) \)</td><td>(* -2 0 2)</td><td></td></tr><tr><td>\( {6}_{3} \)</td><td>(1 3 1) (24 2)</td><td>\( \left( {-1 - 2 \star - 2 - 1}\right) \) \( \left( {1 \star 1}\right) \)</td><td>\( \left( {-3 - 6 - 3}\right) \)</td><td>\( \left( {{11} \star {11}}\right) \)</td><td></td></tr><tr><td>\( {7}_{1} \)</td><td>\( \left( {{000} - 4 - 3}\right) \)</td><td>\( \left( {\star {00031} - {11}}\right) \)</td><td>\( \left( {{000107} - {21}}\right) \)</td><td>\( \left( {\star {000} - 4 - {31}}\right) \)</td><td></td></tr><tr><td></td><td>\( \left( {{000} - 6 - {51}}\right) \)</td><td>\( \left( {\star {00011}}\right) \)</td><td>\( \left( \begin{array}{lllll} \mathbf{0} & 0 & 0 & 1 & 1 \end{array}\right) \)</td><td></td><td></td></tr><tr><td>\( {7}_{2} \)</td><td>\( \left( {\left\lbrack \begin{matrix} 0 & - 1 \end{matrix}\right\rbrack \left\lbrack \begin{matrix} 0 & - 1 \end{matrix}\right\rbrack \left\lbrack \begin{matrix} - 1 \end{matrix}\right\rbrack }\right) \)</td><td>\( \left( {\star {00033}}\right) \)</td><td>(0103 4)</td><td>\( \left( {\star {01} - 1 - 6 - 4}\right) \)</td><td></td></tr><tr><td></td><td>(0 0 1 -3 -4)</td><td>\( \left( {\star {00121}}\right) \)</td><td>\( \left( \begin{array}{lllll} \mathbf{0} & 0 & 0 & 1 & 1 \end{array}\right) \)</td><td></td><td></td></tr><tr><td>\( {7}_{3} \)</td><td>\( \left( {-2 - {2100}}\right) \)</td><td>\( \left( {-{213000} \star }\right) \)</td><td>\( \left( {-{164} - {300}}\right) \)</td><td>\( \left( {1 - 1 - 4 - {200} \star }\right) \)</td><td></td></tr><tr><td>\( {7}_{4} \)</td><td>(1 -3 -3 1 0 0) (-1 0 2 0 0)</td><td>\( \left( {{12100} \star }\right) \) \( \left( {4\left\lbrack \begin{matrix} 4 & 0 \\ 0 & 0 \end{matrix}\right\rbrack \star }\right) \)</td><td>\( \left( \begin{array}{lllll} 1 & 1 & 0 & 0 & \mathbf{0} \end{array}\right) \) (2 -3 -4 1 0)</td><td>\( \left( {-4 - 8 - {220} \star }\right) \)</td><td></td></tr><tr><td></td><td>(-3 
0 3 0 0)</td><td>\( \left( {1\;3\;2\;0\;0\; \star }\right) \)</td><td>\( \left( \begin{array}{lllll} 1 & 1 & 0 & 0 & \mathbf{0} \end{array}\right) \)</td><td></td><td></td></tr><tr><td>\( {7}_{5} \)</td><td>(0 0 2 0 -1) \( \left( {{001} - {102}}\right) \)</td><td>\( \left( {\star {00} - {111} - 1}\right) \) \( \left( {\star {001132}}\right) \)</td><td>\( \left( {{00} - {301} - 2}\right) \) \( \left( \begin{array}{lllll} \mathbf{0} & 0 & 0 & 1 & 1 \end{array}\right) \)</td><td>\( \left( {\star {00} - 1 - 4 - {21}}\right) \)</td><td></td></tr><tr><td>76</td><td>(1122 1) (1122)</td><td>\( \left( {\star {120} - 1}\right) \) \( \left( {\star {242}}\right) \)</td><td>\( \left( {-2 - 4 - 4 - 2}\right) \) \( \left( \begin{array}{lll} \mathbf{0} & 1 & 1 \end{array}\right) \)</td><td>\( \left( {\star - 4 - 6 - {11}}\right) \)</td><td></td></tr><tr><td>\( {7}_{7} \)</td><td>(12 2)</td><td>\( \left( {{23} \star 1}\right) \)</td><td>\( \left( {-2 - 6 - 7 - 3}\right) \)</td><td>\( \left( {-4 - 8 \star - {31}}\right) \)</td><td></td></tr><tr><td></td><td>(1 2 4 3)</td><td>\( \left( {{25} \star 3}\right) \)</td><td>(11)</td><td></td><td></td></tr><tr><td>\( {8}_{1} \)</td><td>\( \left( {-1\mathbf{0}0 - 1 - 1}\right) \)</td><td>\( \left( {\star 0 - 3 - 3}\right) \)</td><td>(10076)</td><td>(1 * -1 5 7)</td><td></td></tr><tr><td></td><td>(1 -2 -8 -5)</td><td>\( \left( {\star 1 - 4 - 5}\right) \)</td><td>(0121)</td><td>\( \left( \begin{array}{llll} \star & 0 & 1 & 1 \end{array}\right) \)</td><td></td></tr><tr><td>\( {8}_{2} \)</td><td>(0 -3 -3 -1)</td><td>\( \left( {\star {011} - 1 - 1}\right) \)</td><td>(0 7 12 3 -1 1)</td><td>\( \left( {\star 0\text{ }3\text{ -1 -2 }2}\right) \)</td><td></td></tr><tr><td></td><td>(0 -5 -12 -5 2)</td><td>\( \left( {\star 0 - 4 - 2}\right) \) 。</td><td>(0132)</td><td>\( \left( \begin{array}{llll} \star & 0 & 1 & 1 \end{array}\right) \)</td><td></td></tr><tr><td>\( {8}_{3} \)</td><td>(1 0 -1 0 1)</td><td>\( \left( {-4 \star - 4}\right) \)</td><td>\( 
\left( {-{3181} - 3}\right) \)</td><td>\( \left( {-{28} \star 8 - 2}\right) \)</td><td></td></tr><tr><td></td><td>\( \left( {1 - 2 - 6 - 2\text{ }1}\right) \)</td><td>\( \left( {1 - 4 \star - {41}}\right) \)</td><td>(1 2 1)</td><td>\( \left( {1 \star 1}\right) \)</td><td></td></tr><tr><td>\( {8}_{4} \)</td><td>(-2 -2 0 1)</td><td>\( \left( {-1 \star {12}}\right) \)</td><td>(7 10 -1 -3 1)</td><td>(4 + -3 -5 2)</td><td></td></tr><tr><td></td><td>(-5 -11 -3 3)</td><td>\( \left( {-4 \star - {13}}\right) \)</td><td>(13 2)</td><td>\( \left( {1 \star 1}\right) \)</td><td></td></tr><tr><td>\( {8}_{5} \)</td><td>\( \left( {-2 - 5 - 4\mathbf{0}}\right) \)</td><td>(4 7 3 0 *)</td><td>(1 -2 4 15 8 0)</td><td>\( \left( {2 - 8 - {1000} \star }\right) \)</td><td></td></tr><tr><td></td><td>(3 -7 -15 -5 0)</td><td>(41 -3 0 *)</td><td>(3410)</td><td>\( \left( {{110} \star }\right) \)</td><td></td></tr><tr><td>86</td><td>\( \left( {{21} - 1 - 1}\right) \)</td><td>\( \left( {\star - 1 - 3 - 1\text{ }1}\right) \)</td><td>\( \left( {-3 - {263} - 2}\right) \)</td><td>\( \left( {\star - {152} - 4}\right) \)</td><td></td></tr><tr><td></td><td>(1 0 -6 -4 1)</td><td>\( \left( {\star 1 - 2 - {12}}\right) \)</td><td>(0132)</td><td>\( \left( \begin{array}{llll} \star & 0 & 1 & 1 \end{array}\right) \)</td><td></td></tr><tr><td>\( {8}_{7} \)</td><td>\( \left( {-2 - 4 - 1}\right) \)</td><td>\( \left( {-{1022} \star 1}\right) \)</td><td>(-2 4 12 6)</td><td>\( \left( {1 - 1 - 2 - 3 \star - 3}\right) \)</td><td></td></tr><tr><td></td><td>(2 -3 -12 -7)</td><td>\( \left( {{20} - 1 \star 1}\right) \)</td><td>(24 2)</td><td>\( \left( {{11} \star }\right) \)</td><td></td></tr></table> under a \( {2\pi }/p \) rotation about some axis, for some prime \( p \) ; the condition is in terms of the coefficients modulo \( p \) of \( {p}_{0}\left( {l}^{2}\right) \) . 
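The Thistlethwaite coefficient notation used in Tables 16.1 and 16.2 is mechanical enough to decode by program. The sketch below is illustrative only: the function name is invented, and since bold face (which marks the \( l^0 \) coefficient) is lost in plain text, the position of the \( l^0 \) entry in each bracket is supplied explicitly. It reconstructs the HOMFLY polynomial of \( 7_7 \) from its encoding \( (122)(-2\,{-2}\,{-1})(1) \) given in the text.

```python
# Sketch: decoding the Thistlethwaite-style HOMFLY notation of Table 16.1.
# The i-th bracket (i = 0, 1, ...) lists the coefficients of p_{2i}(l^2),
# successive entries differing by l^2; coeffs[z] is the coefficient of l^0.

def decode_homfly(brackets):
    """brackets: list of (coeffs, z) pairs, one per power m^(2i).
    Returns a dict mapping (l-exponent, m-exponent) -> coefficient."""
    poly = {}
    for i, (coeffs, z) in enumerate(brackets):
        for j, c in enumerate(coeffs):
            if c:
                poly[(2 * (j - z), 2 * i)] = c
    return poly

# The knot 7_7, encoded (122)(-2 -2 -1)(1), with the bold l^0 entries at
# positions 2, 1 and 0 respectively; this reconstructs
#   (l^-4 + 2l^-2 + 2) + (-2l^-2 - 2 - l^2) m^2 + m^4.
seven_7 = decode_homfly([([1, 2, 2], 2), ([-2, -2, -1], 1), ([1], 0)])
```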
Table 16.3 shows the values taken by the HOMFLY, Jones and Kauffman polynomials for an arbitrary link \( L \) when various specific values are substituted for the variables of the polynomial. The items of information shown here are not always independent of one another. This follows from Propositions 16.5 and 16.6. The values obtained are all simple functions of classical invariants, of the number of components \( \# L \) of \( L \), of homology data of various covers branched over the link, of the Arf invariant of \( L \), and some intricacies of sign. Immediate additional information can, of course, be obtained by changing the signs of variables and also by taking complex conjugates. It is thought that it may not be possible to produce any more such valuations. This is because complexity theory (see [50]) suggests that bounds on the length of the evaluation process at other choices of the variables may not be expressible as a polynomial in the number of crossings of a link diagram. That many of these specific evaluations should exist was first suggested by Jones. 
Once it is suspected what one of these evaluations ought to be, it is usually not too hard to give a proof of the result. That has already been done here in the case of the Arf invariant in Chapter 10. The nature of the proofs is always the same: just check that the postulated evaluation satisfies the relevant skein formula for the given values of the polynomial variables. The proof is then an exercise in the understanding of some classical invariant (the Arf invariant or the homology of a branched cover).

TABLE 16.3. Evaluations of Polynomials

<table><thead><tr><th></th><th>\( P(L) \) at \( \left( {l, m}\right) \)</th><th>\( V(L) \) at \( t \)</th><th>\( F(L) \) at \( \left( {a, z}\right) \)</th><th>Value</th></tr></thead><tr><td>A</td><td>\( \left( {l, - l - {l}^{-1}}\right) \)</td><td>\( {e}^{{4\pi i}/3} \)</td><td>\( \left( {1,1}\right) \)</td><td>1</td></tr><tr><td>A</td><td></td><td>1</td><td>\( \left( {1, - 2}\right) \)</td><td>\( {\left( -2\right) }^{\# L - 1} \)</td></tr><tr><td>A</td><td></td><td>\( {e}^{{2\pi i}/3} \)</td><td>\( \left( {i, z}\right) \)</td><td>\( {\left( -1\right) }^{\# L - 1} \)</td></tr><tr><td>B</td><td>\( \left( {i, - 2}\right) \)</td><td>\( - 1 \)</td><td></td><td>\( {\Delta }_{L}\left( {-1}\right) \)</td></tr><tr><td>B</td><td></td><td></td><td>\( \left( {1,2}\right) \)</td><td>\( {\left( \det L\right) }^{2} \)</td></tr><tr><td>C</td><td>\( \left( {1,\sqrt{2}}\right) \)</td><td>\( i \)</td><td></td><td>\( {\left( -\sqrt{2}\right) }^{\# L - 1}{\left( -1\right) }^{\operatorname{Arf}\left( L\right) } \); 0 if \( \operatorname{Arf}\left( L\right) \) undefined</td></tr><tr><td>D</td><td>\( \left( {1,1}\right) \)</td><td></td><td></td><td>\( {\left( i\sqrt{2}\right) }^{{d}_{2}\left( {T\left( L\right) }\right) } \)</td></tr><tr><td>E</td><td>\( \left( {{e}^{{\pi i}/6},1}\right) \)</td><td>\( {e}^{{\pi i}/3} \)</td><td></td><td>\( {\delta }_{3}{i}^{\# L - 1}{\left( i\sqrt{3}\right) }^{{d}_{3}\left( {D\left( L\right) }\right) } \)</td></tr><tr><td>E</td><td></td><td></td><td>\( \left( {1, - 1}\right) \)</td><td>\( {\left( -3\right) }^{{d}_{3}\left( {D\left( L\right) }\right) } \)</td></tr><tr><td>F</td><td></td><td></td><td>\( \left( {1,\frac{\sqrt{5} - 1}{2}}\right) \)</td><td>\( {\delta }_{5}{\sqrt{5}}^{{d}_{5}\left( {D\left( L\right) }\right) } \)</td></tr><tr><td>G</td><td></td><td></td><td>\( \left( {-q,{q}^{-1} + q}\right) \)</td><td>\( \frac{1}{2}{\left( -1\right) }^{\# L - 1}\mathop{\sum }\limits_{{X \subset L}}{q}^{4\operatorname{lk}\left( {X, L - X}\right) } \)</td></tr></table>

Some explanation of the notation used in the table of evaluations is required. The results of row A of the table are elementary. They do, however, assert that the number of components of a link is incorporated in its polynomial invariants. Row B refers to \( {\Delta }_{L}\left( {-1}\right) \), the value at \( t = - 1 \) of the Conway-normalised Alexander polynomial, and \( \det L = \left| {{\Delta }_{L}\left( {-1}\right) }\right| \) (see Chapter 9). For a knot \( K \), \( \det K \) is the order of \( {H}_{1}\left( {D\left( K\right) ;\mathbb{Z}}\right) \), where \( D\left( K\right) \) is the double cover of \( {S}^{3} \) branched over \( K \) (for a link, the determinant is zero if the homology group is infinite). Row C has already been discussed in Chapter 10. In the remaining rows, \( D\left( L\right) \) is again the double branched cover and \( T\left( L\right) \) is the threefold cyclic cover of \( {S}^{3} \) branched over the oriented link \( L \). The prefix \( {d}_{r} \) denotes the dimension, as a \( \mathbb{Z}/r\mathbb{Z} \)-vector space, of the first homology with \( \mathbb{Z}/r\mathbb{Z} \) coefficients of the space in question. Details of proofs relating to row D and row E can be found in [88]. The coefficients \( {\delta }_{3} \) and \( {\delta }_{5} \) are both \( \pm 1 \) and can be evaluated in terms of Legendre symbols (see [91]).
Row G is a result to be found in [89]; the summation is over all sublinks \( X \) of \( L \), including the empty sublink and \( L \) itself, and \( \operatorname{lk}\left( {X, L - X}\right) \) is the sum of the linking numbers of every component of \( X \) with every component of \( L - X \) . The representation of links as closed braids (see Chapter 1) was the original starting point for the invention of the Jones polynomial [53]. Fundamental were the theorems of Alexander and Markov that, combined, constitute the following proposition. Modern proofs can be found in [7] and [98]. Proposition 16.10. Any oriented link in \( {S}^{3} \) is the closure \( \widehat{\xi } \) of some \( \xi \) belonging to the braid group \( {B}_{n} \), for some \( n \) . Oriented links \( \widehat{\xi } \) and \( \widehat{\eta } \) are equivalent if \( \xi \) and \( \eta \) differ by a sequence of (Markov) moves of the following two types and inverses of such moves: (i) Change an element of \( {B}_{n} \) to a conjugate element in that group; (ii) Change \( \xi \in {B}_{n} \) to \( {i}_{n}\left( \xi \right) {\sigma }_{n}^{\pm 1} \in {B}_{n + 1} \), where \( {i}_{n} : {B}_{n} \rightarrow {B}_{n + 1} \) is the inclusion (that disregards the \( \left( {n + 1}\right) \) th string). The braid approach has also been used in an entirely different way to give another existence proof for the HOMFLY polynomial [52], [126]. (A version also works for the Kauffman polynomial [126].) This method, which involves "R-matrices" and the Yang-Baxter equations, is of particular interest as it employs some of the same mathematics as is used in quantum statistical mechanics [6]. The method is amenable to considerable extension, abstraction and generalisation; so much so, in fact, that it has led to the birth of the subject of quantum groups, now a branch of abstract algebra. That subject is now the main topic of several books, for example [127] and [56].
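Several of the tabulated values can be sampled on a concrete example. The following Python check (an illustration, not from the book) takes as known the Jones polynomial of a trefoil, \( V(t) = -t^4 + t^3 + t \) in one chirality convention, and confirms rows A, B, C and E of Table 16.3 for it: the trefoil is a knot (so \( \#L = 1 \)) with determinant 3, Arf invariant 1, and double branched cover the lens space \( L(3,1) \), so \( d_3(D(L)) = 1 \).

```python
import cmath
import math

# Jones polynomial of a trefoil, taken as known (one chirality convention;
# the mirror image corresponds to t -> 1/t and gives complex-conjugate values below)
def V(t):
    return -t**4 + t**3 + t

# Row A: V(1) = (-2)^(#L - 1) = 1 and V(e^{2 pi i/3}) = (-1)^(#L - 1) = 1 for a knot
assert abs(V(1) - 1) < 1e-12
assert abs(V(cmath.exp(2j * cmath.pi / 3)) - 1) < 1e-12

# Row B: V(-1) = Delta(-1), and |Delta(-1)| = det(trefoil) = 3
assert abs(V(-1) + 3) < 1e-12

# Row C: V(i) = (-sqrt 2)^(#L - 1) * (-1)^Arf(L) = -1, since Arf(trefoil) = 1
assert abs(V(1j) + 1) < 1e-12

# Row E: V(e^{pi i/3}) = delta_3 * (i sqrt 3)^(d_3(D(L))); here D(trefoil) = L(3,1), d_3 = 1,
# so the value is purely imaginary of modulus sqrt(3)
v = V(cmath.exp(1j * cmath.pi / 3))
assert abs(abs(v) - math.sqrt(3)) < 1e-12 and abs(v.real) < 1e-12
```

The mirror trefoil yields the complex-conjugate values at these roots of unity, so the same checks pass for either chirality.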
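The closure operation in Proposition 16.10 is easy to experiment with. As a small sketch (not from the book), the number of components of \( \widehat{\xi } \) can be read off from the permutation that \( \xi \) induces on its strings, one component per cycle; a braid word is assumed encoded as a list of signed integers \( \pm k \) standing for \( {\sigma }_{k}^{\pm 1} \).

```python
# The closure of a braid on n strings has one link component per cycle of the
# permutation that the braid induces on the strings.
def closure_components(word, n):
    strand = list(range(n))                  # strand occupying each position
    for letter in word:
        k = abs(letter) - 1                  # sigma_k^{+-1} swaps positions k, k+1
        strand[k], strand[k + 1] = strand[k + 1], strand[k]
    seen, cycles = set(), 0
    for start in range(n):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = strand[j]
    return cycles

assert closure_components([1, 1, 1], 2) == 1      # sigma_1^3 closes to the trefoil, a knot
assert closure_components([1, 1], 2) == 2         # sigma_1^2 closes to the Hopf link
# Markov move (ii): stabilising sigma_1^3 in B_2 to sigma_1^3 sigma_2 in B_3
# leaves the closure, and hence the component count, unchanged
assert closure_components([1, 1, 1, 2], 3) == closure_components([1, 1, 1], 2)
```

The sign of a letter does not affect the underlying permutation, and conjugation (Markov move (i)) never changes cycle structure, so both moves are visibly consistent with this count.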
The method can also be interpreted in terms of a "states model" for a sequence of values of the HOMFLY polynomial. This gives a complete, immediate (though complicated) definition of \( P{\left( L\right) }_{\left( i{q}^{-\left( {n + 1}\right) }, i\left( q - {q}^{-1}\right) \right) } \) as a Laurent polynomial in \( q \), without recourse to any existence theorem. A brief outline follows. Let \( V \) be a free module with base \( {e}_{1},{e}_{2},\ldots ,{e}_{m} \) over a commutative ring \( \mathcal{K} \) . As usual, let \( {V}^{\otimes n} \) denote the \( n \) -fold tensor product \( V \otimes V \otimes \cdots \otimes V \) . Suppose \( R : V \otimes V \rightarrow V \otimes V \) is an automorphism; in suffix notation \( R \) maps \( {e}_{i} \otimes {e}_{j} \) to \( {R}_{i, j}^{p, q}{e}_{p} \otimes {e}_{q} \), summing over the repeated suffixes. Let \( {R}_{i} : {V}^{\otimes n} \rightarrow {V}^{\otimes n} \) be \( 1 \otimes 1 \otimes \cdots \otimes 1 \otimes R \otimes 1 \otimes \cdots \otimes 1 \), where the \( R \) operates on the tensor product of the \( i \) th and \( \left( {i + 1}\right) \) th copies of \( V \) . This \( R \) is called a Yang-Baxter operator if it satisfies the (quantum) Yang-Baxter equations \[ {R}_{1}{R}_{2}{R}_{1} = {R}_{2}{R}_{1}{R}_{2}. \] Suppose that \( \mu : V \rightarrow V \) is represented by the diagonal matrix \( \operatorname{diag}\left( {{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{m}}\right) \), and \[ R\left( {\mu \otimes \mu }\right) = \left( {\mu \otimes \mu }\right) R,\;\mathop{\sum }\limits_{j}{R}_{i, j}^{k, j}{\mu }_{j} = {\alpha \beta }{\delta }_{i}^{k}\text{ and }\mathop{\sum }\limits_{j}{\left( {R}^{-1}\right) }_{i, j}^{k, j}{\mu }_{j} = {\alpha }^{-1}\beta {\delta }_{i}^{k}, \] where \( \alpha \) and \( \beta \) are fixed units in \( \mathcal{K} \) . If this occurs, \( \mu \) is called an enhancement of \( R \) .
Often it is not too difficult to find such a \( \mu \) once a solution is known for the Yang-Baxter equations. Given such \( R \) and \( \mu \), a representation of the braid group can be found as follows: Define \( \phi : {B}_{n} \rightarrow \) Aut \( {V}^{\otimes n} \) by \( \phi \left( {\sigma }_{i}\right) = {R}_{i} \) . The Yang-Baxter equations imply that \( \phi \) is compatible with the relations of the braid group (quoted in Chapter 1), and so \( \phi \) gives a well-defined
group homomorphism. Define \( T : \mathop{\bigcup }\limits_{n}{B}_{n} \rightarrow \mathcal{K} \) by \[ T\left( \xi \right) = {\alpha }^{-w\left( \xi \right) }{\beta }^{-n}\operatorname{Trace}\left( {\phi \left( \xi \right) {\mu }^{\otimes n}}\right) , \] where \( \xi \in {B}_{n} \) and \( w : {B}_{n} \rightarrow \mathbb{Z} \) is the homomorphism defined by \( w\left( {\sigma }_{i}\right) = 1 \) . Theorem 16.11. If an oriented link \( L \) is the closure of \( \xi \in {B}_{n} \), let \( T\left( L\right) \) be defined to be \( T\left( \xi \right) \) . This is a well-defined link invariant. Proof. Because \( T \) is essentially a trace function, if \( \xi ,\eta \in {B}_{n} \) then \( T\left( {{\eta }^{-1}{\xi \eta }}\right) = T\left( \xi \right) \) . Using the properties of \( \mu \), it is easy to show that \( T\left( {\xi {\sigma }_{n}}\right) = T\left( {\xi {\sigma }_{n}^{-1}}\right) = T\left( \xi \right) \) . The result then follows from Proposition 16.10.
Proposition 16.12. Suppose the minimal polynomial equation satisfied by the automorphism \( R : V \otimes V \rightarrow V \otimes V \) is \( \mathop{\sum }\limits_{{i = p}}^{q}{k}_{i}{R}^{i} = 0 \) for some \( {k}_{i} \in \mathcal{K} \) . Then \( \mathop{\sum }\limits_{{i = p}}^{q}{k}_{i}{\alpha }^{i}T\left( {L}_{i}\right) = 0 \) whenever \( {L}_{i} \) are links identical except near a point where \( {L}_{i} \) has a "tassel" of \( i \) crossings. Proof. The "tassel" of \( i \) crossings can be taken to be an occurrence of \( {\sigma }_{1}^{i} \) in a braid word representing \( {L}_{i} \) . Now if \( \eta \in {B}_{n} \), the result follows from \[ T\left( {{\sigma }_{1}^{i}\eta }\right) = {\alpha }^{-i - w\left( \eta \right) }{\beta }^{-n}\operatorname{Trace}\left( {{R}^{i}\phi \left( \eta \right) {\mu }^{\otimes n}}\right) . \] The example that leads to the HOMFLY polynomial is as follows: Let \( \mathcal{K} \) be \( \mathbb{Z}\left\lbrack {{q}^{-1}, q}\right\rbrack \) and let \( m \geq 1 \) . Let \( {E}_{i, j} \) be the endomorphism of \( V \) that maps \( {e}_{i} \) to \( {e}_{j} \) and maps the other base elements to zero. A solution to the Yang-Baxter equations is given by \[ R = - q\mathop{\sum }\limits_{i}{E}_{i, i} \otimes {E}_{i, i} + \mathop{\sum }\limits_{{i \neq j}}{E}_{i, j} \otimes {E}_{j, i} + \left( {{q}^{-1} - q}\right) \mathop{\sum }\limits_{{i < j}}{E}_{i, i} \otimes {E}_{j, j}. \] It is arduous but straightforward to check directly that this is a solution. It is, however, not hard to see that \[ {R}^{-1} = - {q}^{-1}\mathop{\sum }\limits_{i}{E}_{i, i} \otimes {E}_{i, i} + \mathop{\sum }\limits_{{i \neq j}}{E}_{i, j} \otimes {E}_{j, i} + \left( {q - {q}^{-1}}\right) \mathop{\sum }\limits_{{i > j}}{E}_{i, i} \otimes {E}_{j, j}, \] so that \( R - {R}^{-1} = \left( {{q}^{-1} - q}\right) {1}_{V \otimes V} \) . This is then the minimal polynomial equation for \( R \) .
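For the case \( m = 2 \) this check is small enough to mechanise. The following Python sketch (an illustration, not from the book) builds the matrix of \( R \) at a sample numerical value of \( q \), using the text's convention that \( {E}_{i,j} \) maps \( {e}_{i} \) to \( {e}_{j} \), and verifies both the Yang-Baxter equations and the minimal polynomial relation.

```python
q = 1.3                      # sample numerical value of the variable q
m = 2                        # dimension of V (the construction is the same for any m >= 1)

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def kron(A, B):
    nb, n = len(B), len(A) * len(B)
    return [[A[i // nb][j // nb] * B[i % nb][j % nb] for j in range(n)] for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

# R maps e_i x e_i to -q e_i x e_i, and for i != j maps e_i x e_j to
# e_j x e_i, plus an extra (q^-1 - q) e_i x e_j when i < j
R = [[0.0] * (m * m) for _ in range(m * m)]
for i in range(m):
    for j in range(m):
        col = i * m + j                  # column indexed by the input vector e_i x e_j
        if i == j:
            R[col][col] = -q
        else:
            R[j * m + i][col] = 1.0      # output component e_j x e_i
            if i < j:
                R[col][col] = 1.0 / q - q

# Yang-Baxter equations R1 R2 R1 = R2 R1 R2 on V x V x V
R1, R2 = kron(R, identity(m)), kron(identity(m), R)
lhs = matmul(R1, matmul(R2, R1))
rhs = matmul(R2, matmul(R1, R2))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(m**3) for j in range(m**3))

# minimal polynomial relation: R^2 - (q^-1 - q)R - 1 = 0, i.e. R - R^-1 = (q^-1 - q)1
Rsq = matmul(R, R)
I4 = identity(m * m)
assert all(abs(Rsq[i][j] - (1.0 / q - q) * R[i][j] - I4[i][j]) < 1e-9
           for i in range(m * m) for j in range(m * m))
```

Exercise 8 at the end of the chapter asks for a direct proof of the first of these identities in this same \( m = 2 \) case.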
Let \( \mu = \operatorname{diag}\left( {{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{m}}\right) \) where \( {\mu }_{i} = {q}^{{2i} - m - 1} \), and let \( \alpha = - {q}^{m} \) and \( \beta = 1 \) . A routine check shows that this provides an enhancement for \( R \) . Thus by Theorem 16.11, these data provide an oriented link invariant \( T\left( L\right) \) which, by Proposition 16.12, satisfies \[ {q}^{m}T\left( {L}_{ + }\right) - {q}^{-m}T\left( {L}_{ - }\right) + \left( {{q}^{-1} - q}\right) T\left( {L}_{0}\right) = 0, \] where \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are related in the usual way. Re-normalising so that the polynomial of the unknot is one gives, for \( \xi \in {B}_{n} \), \[ P{\left( \widehat{\xi }\right) }_{\left( i{q}^{m}, i\left( {q}^{-1} - q\right) \right) } = {\left( -{q}^{m}\right) }^{-w\left( \xi \right) }\frac{\operatorname{Trace}\left( {\phi \left( \xi \right) {\mu }^{\otimes n}}\right) }{\operatorname{Trace}\mu }. \] As \( m \) varies, the evaluations of the HOMFLY polynomial at these special values of the variables do, of course, determine the whole two-variable polynomial. It is interesting to note that the Alexander polynomial \( {\Delta }_{L}\left( t\right) = P{\left( L\right) }_{\left( i, i\left( {t}^{1/2} - {t}^{-1/2}\right) \right) } \) does not feature as one of the special values (as \( m \geq 1 \) ); from this standpoint \( {\Delta }_{L}\left( t\right) \) occurs only by way of creating the entire two-variable polynomial from the whole sequence of special values. A full version of this Yang-Baxter equation approach to the HOMFLY and Kauffman polynomials is given in [127]. More complicated \( R \) -matrices lead to descriptions of those invariants for "coloured" links that are linear combinations of invariants for satellites and parallels. From the above example, Jones [52] produced a "states model" for each of the above values of the HOMFLY polynomial.
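These claims can be tested numerically. The sketch below (an illustration, not from the book) takes \( m = 2 \), so \( \mu = \operatorname{diag}(q^{-1}, q) \), \( \alpha = -q^2 \) and \( \beta = 1 \), verifies the enhancement conditions, and then evaluates the normalised invariant \( T(\sigma_1^3)/T(\text{trivial braid in } B_1) \) for the closure of \( \sigma_1^3 \in B_2 \), a trefoil. The result agrees with \( -q^{-8} + q^{-6} + q^{-2} \), which is a Jones-polynomial value of the trefoil (namely \( -t^{-4}+t^{-3}+t^{-1} \) at \( t = q^2 \), in one chirality convention), as expected for \( m = 2 \).

```python
q = 1.3
m = 2
delta = 1.0 / q - q
alpha, beta = -q**m, 1.0
mu = [q**(2 * (i + 1) - m - 1) for i in range(m)]    # mu_i = q^(2i - m - 1), i = 1..m

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

# R for m = 2 on the basis e1 x e1, e1 x e2, e2 x e1, e2 x e2, as defined in the text
R = [[-q, 0.0, 0.0, 0.0],
     [0.0, delta, 1.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, -q]]
mu2 = [[mu[i // m] * mu[i % m] if i == j else 0.0 for j in range(m * m)]
       for i in range(m * m)]                        # matrix of mu x mu

# enhancement conditions: R commutes with mu x mu, and the partial-trace identity
RM, MR = matmul(R, mu2), matmul(mu2, R)
assert all(abs(RM[i][j] - MR[i][j]) < 1e-9 for i in range(4) for j in range(4))
for i in range(m):
    for k in range(m):
        s = sum(R[k * m + j][i * m + j] * mu[j] for j in range(m))
        assert abs(s - (alpha * beta if i == k else 0.0)) < 1e-9

# T(sigma_1^3) / T(trivial braid in B_1): the normalised invariant of the trefoil
R3 = matmul(R, matmul(R, R))
trace = sum(matmul(R3, mu2)[i][i] for i in range(4))
value = alpha**(-3) * beta**(-2) * trace / (beta**(-1) * sum(mu))
expected = -q**(-8) + q**(-6) + q**(-2)              # a trefoil Jones value at t = q^2
assert abs(value - expected) < 1e-9
```

The first crossing-number check here is exactly the property used in the proof of Theorem 16.11 to show invariance under the second Markov move.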
His result is as follows: Fix \( n \geq 0 \), let \( D \) be a diagram of an oriented link \( L \), and let \( {D}^{ \star } \) be \( D \) less the crossings of \( D \) . A map \( s : \left\{ {\text{segments of}{D}^{ \star }}\right\} \rightarrow \{ - n, - n + 2, - n + 4,\ldots, n - 2, n\} \) is a labelling of \( D \) if near each crossing the values of \( s \) conform to one of the three types shown in Figure 16.2. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_199_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_199_0.jpg) Figure 16.2 Let \( \left| {{C}_{i}^{ \pm }s}\right| \) be the number of \( \pm \) crossings of type \( i \) with respect to \( s \) for \( i \in \) \( \{ 1,2,3\} \), and let \( \left| {{C}_{i}s}\right| = \left| {{C}_{i}^{ + }s}\right| + \left| {{C}_{i}^{ - }s}\right| \) . Then \[ {\left( -1\right) }^{1 + \operatorname{rot}\left( D\right) }\left( {{q}^{-n} + {q}^{-n + 2} + \cdots + {q}^{n}}\right) P{\left( L\right) }_{\left( i{q}^{-\left( {n + 1}\right) }, i\left( q - {q}^{-1}\right) \right) } \] \[ = {q}^{\left( {n + 1}\right) w\left( D\right) }\mathop{\sum }\limits_{s}{\left( -1\right) }^{\left| {{C}_{3}^{ - }s}\right| + \left| {{C}_{2}s}\right| }{q}^{\left| {{C}_{2}^{ - }s}\right| - \left| {{C}_{2}^{ + }s}\right| -\int {sd\theta }}{\left( q - {q}^{-1}\right) }^{\left| {C}_{3}s\right| }. \] In this formula, \( \int {sd\theta } \) is the integer obtained by smoothing \( D \) (that is, changing \( {D}_{ \pm } \) to \( {D}_{0} \) ) at all crossings of types 2 and 3 (so that each resulting link component has a constant label) and subtracting the sum of the labels on clockwise components from the sum of those on anti-clockwise components. The term \( \operatorname{rot}\left( D\right) \) is \( \int {1d\theta } \) , where 1 is the labelling with constant value 1 ; the summation is over all labellings \( s \) . 
This formula could be used as a definition of \( P{\left( L\right) }_{\left( i{q}^{-\left( {n + 1}\right) }, i\left( q - {q}^{-1}\right) \right) } \) and then the \( \left( {{L}_{ + },{L}_{ - },{L}_{0}}\right) \) -formula and invariance under Reidemeister moves could be checked directly. Note that, although this "states model" was certainly derived by means of consideration of representations of the braid group, the final formulation contains no mention of braids. The structure and significance of the HOMFLY and Kauffman polynomials are frequently interpreted in the language of Vassiliev invariants, sometimes called invariants of finite type. A final remark will give the rough idea of these, but for many more details, see [5] and [10]. Suppose \( V \) is any invariant of oriented links taking values in some abelian group. This \( V \) can be extended to be an invariant of singular links in the following way: A singular link is an immersion of simple closed curves in \( {S}^{3} \) with finitely many transverse double points. These self-intersections are required to remain transverse in any isotopy demonstrating the equivalence of such singular links. If the definition of \( V \) has been extended over singular links with \( n - 1 \) double points, define it on a singular link \( {L}_{ \times } \) with \( n \) singularities by \[ V\left( {L}_{ \times }\right) = V\left( {L}_{ + }\right) - V\left( {L}_{ - }\right) \] where \( {L}_{ \times }, {L}_{ + } \) and \( {L}_{ - } \) are identical except near a point where they are as in Figure 16.3. Note that \( {L}_{ + } \) and \( {L}_{ - } \) each has \( n - 1 \) double points. Then \( V \) is called a Vassiliev invariant of order \( n \), or an invariant of finite type \( n \), if \( V\left( L\right) = 0 \) for every \( L \) with \( n + 1 \) or more singularities.
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_200_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_200_0.jpg) Figure 16.3 Recall the Conway polynomial invariant, \( {\nabla }_{L}\left( z\right) \in \mathbb{Z}\left\lbrack z\right\rbrack \), of oriented links defined by \( {\nabla }_{\text{unknot }}\left( z\right) = 1 \) and \[ {\nabla }_{{L}_{ + }}\left( z\right) - {\nabla }_{{L}_{ - }}\left( z\right) = z{\nabla }_{{L}_{0}}\left( z\right) . \] Extend this over singular links by the above method. Then if \( {L}_{ \times } \) is a link with \( r \) singularities, \( {\nabla }_{{L}_{ \times }}\left( z\right) = z{\nabla }_{{L}_{0}}\left( z\right) \) where \( {L}_{0} \) is a link with \( r - 1 \) singularities. Thus by induction on \( r \), if \( L \) has \( r \) singularities then \( {\nabla }_{L}\left( z\right) \) has a factor of \( {z}^{r} \) . This implies at once that the coefficient of \( {z}^{n} \) in the Conway polynomial of a link is a Vassiliev invariant of order \( n \) . Now suppose one considers the HOMFLY polynomial and makes the substitution \( \left( {l, m}\right) = \left( {i{t}^{N/2}, i\left( {{t}^{-1/2} - {t}^{1/2}}\right) }\right) \) . The characterising skein relation becomes \[ {t}^{N/2}P\left( {L}_{ + }\right) - {t}^{-N/2}P\left( {L}_{ - }\right) = \left( {{t}^{1/2} - {t}^{-1/2}}\right) P\left( {L}_{0}\right) .
\] Note that this becomes the Jones polynomial when \( N = 2 \) . Now make the further substitution \( t = \exp x \) . Here \( \exp x \) should be thought of as the classical power series expansion. Of course, \( \exp x/2 \) and \( \exp \left( {-x/2}\right) \) have power series expansions; power series can be multiplied and added to give power series. Thus \( P\left( L\right) \) has a power series expansion in powers of \( x \) . It follows immediately that \( P\left( {L}_{ + }\right) - \) \( P\left( {L}_{ - }\right) = {xS}\left( x\right) \) for some power series \( S\left( x\right) \) . Hence the proof used above for the Conway polynomial shows at once that the coefficient of \( {x}^{n} \) in the power series expansion of \( P\left( L\right) \) is a Vassiliev invariant of order \( n \) . Vassiliev invariants have attracted much attention, partly because they seem to give a structured view of the polynomial invariants discussed here. They also have associated with them a pleasing blend of linear algebra and diagrammatic combinatorics and an interaction with Lie algebras. This is described in some detail in [5]. They can also be interpreted in terms of the configuration space of immersions of closed curves into \( {S}^{3} \) ([129],[130]). ## Exercises 1. Generalise to the theories of the HOMFLY and Kauffman polynomials the "numerator" and "denominator" ideas that work so neatly for the Conway polynomial (see Exercise 4 of Chapter 8). 2. Prove Proposition 16.9 concerning the "first" terms in the HOMFLY and Kauffman polynomials. 3. Suppose that in a diagram of an oriented knot \( K \), some crossings labelled \( 1,2,\ldots, n \) , with crossing \( i \) having sign \( {\epsilon }_{i} \), are changed one by one to obtain the unknot. Let \( {K}_{i} \) be the knot created from \( K \) by the first \( i \) changes so that \( K = {K}_{0} \) and \( {K}_{n} \) is the unknot. 
Let \( {L}_{i} \) be the oriented two-component link obtained from \( {K}_{i - 1} \) by nullifying the crossing \( i \) (that is, by replacing it with no crossing in the way that respects orientations). Let the total twisting \( \tau \left( K\right) \) of \( K \) be defined by \( \tau \left( K\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{\epsilon }_{i}\operatorname{lk}\left( {L}_{i}\right) \), where \( \operatorname{lk}\left( {L}_{i}\right) \) is the linking number of the two components of \( {L}_{i} \) . If the HOMFLY polynomial of \( K \) is written \( P\left( K\right) = \mathop{\sum }\limits_{j}{p}_{j}\left( l\right) {m}^{j} \), prove that the derivatives of the Laurent polynomial \( {p}_{0}\left( l\right) \) satisfy (i) \( {p}_{0}^{\prime }\left( \sqrt{-1}\right) = 0 \) , (ii) \( {p}_{0}^{\prime \prime }\left( \sqrt{-1}\right) = {8\tau }\left( K\right) \) . Deduce that \( \tau \left( K\right) \) is well defined independent of the chosen crossing changes. 4. In the notation of the previous question, show that \( {p}_{2}\left( \sqrt{-1}\right) = - \tau \left( K\right) \) . 5. The HOMFLY skein of the solid torus \( {S}^{1} \times {D}^{2} \), denoted \( {\mathcal{S}}_{H}\left( {{S}^{1} \times {D}^{2}}\right) \), is the free module over \( \mathbb{Z}\left\lbrack {{l}^{\pm 1},{m}^{\pm 1}}\right\rbrack \) generated by all oriented links in \( {S}^{1} \times {D}^{2} \) quotiented by all relations of the form \( l{L}_{ + } + {l}^{-1}{L}_{ - } + m{L}_{0} = 0 \), where \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are related in the usual way. Show that embedding two solid tori in one (by taking the product with \( {S}^{1} \) of two discs embedded in one disc) induces a product structure on \( {\mathcal{S}}_{H}\left( {{S}^{1} \times {D}^{2}}\right) \) that turns it into a commutative algebra. Show that this algebra is generated by the closure in the solid torus of all braids of the form \( {\sigma }_{1}{\sigma }_{2}\ldots {\sigma }_{r} \) with either orientation. 
Consider the order-2 homeomorphism \( h \) of \( {S}^{1} \times {D}^{2} \) to itself that rotates the solid torus through an angle \( \pi \) about an axis meeting it in two intervals ( \( h \) reverses the \( {S}^{1} \) direction but preserves the orientation of the solid torus). Show that \( h \) induces a linear map of \( {\mathcal{S}}_{H}\left( {{S}^{1} \times {D}^{2}}\right) \) which sends one of the above generators to itself but with reversed direction. Show that this fact can be used to construct different links with the same HOMFLY polynomial by rotating a solid torus containing some components of a link and changing their directions. 6. Suppose that \( {K}_{1} \) and \( {K}_{2} \) are two oriented knots that are related by a mutation. For \( i = 1,2 \), let \( {K}_{i}^{\left( 2\right) } \) be the 2-parallel of \( {K}_{i} \) (that is, the link consisting of \( {K}_{i} \) and a longitude of \( {K}_{i} \) with parallel orientation). Prove that \( P\left( {K}_{1}^{\left( 2\right) }\right) = P\left( {K}_{2}^{\left( 2\right) }\right) \) . Extend this result to anti-parallels, where now the other orientation is chosen for the longitude. 7. Prove the theorem of Alexander that asserts that any oriented link in \( {S}^{3} \) is the closure of some element of some braid group \( {B}_{n} \) . 8. Suppose that \( V \) is a free module of dimension 2 over \( \mathbb{Z}\left\lbrack {{q}^{-1}, q}\right\rbrack \) and that \( R : V \otimes V \rightarrow \) \( V \otimes V \) is defined by \[ R = - q\mathop{\sum }\limits_{i}{E}_{i, i} \otimes {E}_{i, i} + \mathop{\sum }\limits_{{i \neq j}}{E}_{i, j} \otimes {E}_{j, i} + \left( {{q}^{-1} - q}\right) \mathop{\sum }\limits_{{i < j}}{E}_{i, i} \otimes {E}_{j, j}. \] Show that \( R \) satisfies the Yang-Baxter equation \[ {R}_{1}{R}_{2}{R}_{1} = {R}_{2}{R}_{1}{R}_{2} \] ## References [1] C. C. Adams. The Knot Book, W. H. Freeman (1994). [2] C. C. Adams. Toroidally alternating knots and links, Topology 33 (1994) 353-369. [3] J. W. Alexander.
Topological invariants of knots and links, Trans. Amer. Math. Soc. 30 (1928) 275-306. [4] M. F. Atiyah. The Geometry and Physics of Knots, Cambridge University Press (1990). [5] D. Bar-Natan. On the Vassiliev knot invariants, Topology 34 (1995) 423-472. [6] R. J. Baxter. Exactly Solved Models in Statistical Mechanics, Academic Press London (1982). [7] J. S. Birman. Braids, links and mapping class groups, Ann. of Math. Studies 82 Princeton University Press (1974). [8] J. S. Birman and D. R. J. Chillingworth. On the homeotopy group of a non-orientable surface, Proc. Cambridge Philos. Soc. 71 (1972) 437-448. [9] J. S. Birman and A. Libgober. Braids, Contemporary Math., 78 Amer. Math. Soc., Providence, R.I., (1988). [10] J. S. Birman and X.-S. Lin. Knot polynomials and Vassiliev invariants, Invent. Math. 111 (1993) 225–270. [11] C. Blanchet. Invariants on three-manifolds with spin structure, Comment. Math. Helv. 67 (1992) 406-427. [12] C. Blanchet, N. Habegger, G. Masbaum and P. Vogel. Three-manifold invariants derived from the Kauffman bracket, Topology 31 (1992) 685-699. [13] C. Blanchet, N. Habegger, G. Masbaum and P. Vogel. Topological quantum field theories derived from the Kauffman bracket, Topology 34 (1995) 883-927. [14] F. Bonahon and L. C. Siebenmann. The characteristic tori splitting of irreducible compact 3-orbifolds, Math. Ann. 278 (1987) 441-479. [15] F. Bonahon and L. C. Siebenmann. New geometric splittings of classical knots, London Math. Soc. Lecture Notes 75 (to appear), Cambridge University Press. [16] R. D. Brandt, W. B. R. Lickorish and K. C. Millett. A polynomial invariant for unoriented knots and links, Invent. Math. 84 (1986) 563-573. [17] G. Burde and H. Zieschang. Knots, de Gruyter (1986). [18] A. J. Casson and C. M. Gordon. On slice knots in dimension three, Geometric Topology. Proc. Symp. Pure Math. XXXII Amer. Math. Soc. Providence, R.I. (1978) 39-53. [19] D. R. J. Chillingworth.
A finite set of generators for the homeotopy group of a nonorientable surface, Proc. Cambridge Philos. Soc. 65 (1969) 409-430. [20] J. H. Conway. "An enumeration of knots and links," Computational problems in abstract algebr
a, Ed. J. Leech, Pergamon Press, (1969), 329-358. [21] D. Cooper. The universal abelian cover of a link, London Math. Soc. Lecture Notes 48 (1982) 51-66. [22] R. H. Crowell. Genus of alternating link types, Ann. of Math. 69 (1959) 258-275. [23] R. H. Crowell and R. H. Fox. Introduction to Knot Theory, Graduate Texts in Mathematics 57 Springer-Verlag (1977). [24] M. Dehn. Die Gruppe der Abbildungsklassen, Acta Math. 69 (1938) 135-206. [25] S. K. Donaldson. The Seiberg-Witten equations and 4-manifold topology, Bull. Amer. Math. Soc. 33 (1996) 45–70. [26] C. H. Dowker and M. B. Thistlethwaite. On the classification of knots, C. R. Math. Rep. Acad. Sci. Canada IV (1982) 129-131. [27] J. Franks and R. Williams. Braids and the Jones polynomial, Trans. Amer. Math. Soc. 303 (1987) 97–107. [28] D. S. Freed and R. E. Gompf. Computer calculations of Witten’s 3-manifold invariants, Comm. Math. Phys. 141 (1991) 79-117. [29] M. H. Freedman and R. C. Kirby. A geometric proof of Rohlin's theorem, Proc. Symp.
Pure Math. 32 (1978) 85–97. [30] M. H. Freedman and F. S. Quinn. Topology of 4-manifolds, Princeton Math. Ser. 39, Princeton University Press, N.J. (1990). [31] P. Freyd, D. Yetter, J. Hoste, W. B. R. Lickorish, K. Millett and A. Ocneanu. A new polynomial invariant of knots and links, Bull. Amer. Math. Soc. 12 (1985) 239-246. [32] D. Gabai. Genera of arborescent links, Mem. Amer. Math. Soc. 339 (1986) 1-98. [33] N. D. Gilbert and T. Porter. Knots and Surfaces, Oxford University Press (1994). [34] L. Goeritz. Knoten und quadratische Formen, Math. Z. 36 (1933) 647-654. [35] C. M. Gordon. "Some aspects of classical knot theory," Lecture Notes in Mathematics 685, Springer-Verlag (1978) 1-60. [36] C. M. Gordon and R. A. Litherland. On the signature of a link, Invent. Math. 47 (1978) 53-69. [37] C. M. Gordon and J. Luecke. Knots are determined by their complements, J. Amer. Math. Soc. 2 (1989) 371–415. [38] B. Hartley and T. O. Hawkes. Rings, Modules and Linear Algebra, Chapman and Hall (1970). [39] R. Hartley. The Conway potential function for links, Comment. Math. Helv. 58 (1983) 365-378. [40] R. Hartley. Identifying non-invertible knots, Topology 22 (1983) 137-145. [41] C. Hayashi. Links with alternating diagrams on closed surfaces of positive genus, Math. Proc. Cambridge Philos. Soc. 117 (1995) 113-128. [42] G. Hemion. The Classification of Knots and 3-Dimensional Spaces, Oxford University Press (1992). [43] J. Hempel. 3-manifolds, Ann. of Math. Studies 86 Princeton University Press (1976). [44] J. A. Hillman. Alexander ideals of links, Lecture Notes in Math. 895 Springer-Verlag (1981). [45] C. F. Ho. A new polynomial of knots and links - preliminary report, Abstracts Amer. Math. Soc. 6 (1985) 300. [46] C. D. Hodgson and J. H. Rubinstein. Involutions and isotopies of lens spaces, Lecture Notes in Math. 1144 Springer-Verlag (1985) 60-96. [47] J. F. P. Hudson. Lecture Notes on Piecewise Linear Topology, Benjamin New York (1969). [48] S. P. Humphries.
Generators for the mapping class group, Topology of low-dimensional manifolds, Ed. R. A. Fenn, Springer-Verlag (1979) 44-47. [49] W. H. Jaco. Lectures on three-manifold topology, C.B.M.S. Regional Conference Series 43 Amer. Math. Soc. Providence R.I. (1980). [50] F. Jaeger, D. L. Vertigan and D. J. A. Welsh. On the computational complexity of the Jones and Tutte polynomials, Math. Proc. Cambridge Philos. Soc. 108 (1990) 35–53. [51] L. C. Jeffrey. Chern-Simons-Witten invariants of lens spaces and torus bundles, and the semi-classical approximation, Comm. Math. Phys. 147 (1992) 563-604. [52] V. F. R. Jones. Notes on a talk to the Atiyah seminar (1986). [53] V. F. R. Jones. Hecke algebra representations of braid groups and link polynomials, Ann. of Math. 126 (1987) 335-388. [54] T. Kanenobu. Infinitely many knots with the same polynomial, Proc. Amer. Math. Soc. 97 (1986) 158-161. [55] J. Kania-Bartoszynska. Examples of different 3-manifolds with the same invariants of Witten and Reshetikhin, Topology 32 (1993) 47-54. [56] C. Kassel. Quantum Groups, Springer-Verlag (1995). [57] L. H. Kauffman. Formal Knot Theory, Math. Notes 30 Princeton University Press (1983). [58] L. H. Kauffman. On knots, Ann. of Math. Studies 115 Princeton University Press (1987). [59] L. H. Kauffman. State models and the Jones polynomial, Topology 26 (1987) 395–407. [60] L. H. Kauffman. An invariant of regular isotopy, Trans. Amer. Math. Soc. 318 (1990) 417-471. [61] L. H. Kauffman. Knots and physics, World Scientific (1991). [62] L. H. Kauffman and S. Lins. Temperley-Lieb recoupling theory and invariants of 3-manifolds, Ann. of Math. Studies, 134 Princeton University Press (1994). [63] A. Kawauchi. The invertibility problem for amphicheiral excellent knots, Proc. Japan Acad. 55 (1979) 399-402. [64] M. E. Kidwell. On the degree of the Brandt-Lickorish-Millett polynomial, Proc. Amer. Math. Soc. 100 (1987) 755-762. [65] R. C. Kirby. A calculus for framed links in \( {S}^{3} \), Invent. Math.
45 (1978) 35-56. [66] R. C. Kirby. The topology of 4-manifolds, Lecture Notes in Math. 1374, Springer Verlag (1989). [67] R. C. Kirby. Problems in low dimensional topology, Geometric Topology, Ed. W. H. Kazez, Amer. Math. Soc. (1997). [68] R. C. Kirby and P. Melvin. The 3-manifold invariants of Witten and Reshetikhin-Turaev for \( {sl}\left( {2,\mathbb{C}}\right) \), Invent. Math. 105 (1991) 473-545. [69] A. N. Kirillov and N. Y. Reshetikhin. "Representations of the algebra \( {U}_{q}\left( {{Sl}\left( 2\right) }\right), q \) - orthogonal polynomials and invariants of links," Infinite dimensional Lie algebras and groups Adv. Ser: Math. Phys. 7, World Scientific (1989) 285-339. [70] P. Kirk and C. Livingston. Twisted knot polynomials, inversion, mutation and concordance, Indiana University preprint (1996). [71] T. Kohno. Tunnel numbers of knots and Jones-Witten invariants. Braid groups, knot theory and statistical mechanics II, Adv. Ser. Math. Phys. 17, World Scientific (1994) 275-293. [72] P. B. Kronheimer and T. S. Mrowka. Gauge theory for embedded surfaces I, Topology 32 (1993) 773–826. [73] M. Lackenby. Fox’s congruence classes and the quantum- \( {SU}\left( 2\right) \) invariants of links in 3-manifolds, Comment. Math. Helv. 71 (1996) 664-677. [74] S. Lang. Algebraic Number Theory, Springer-Verlag (1986). [75] J. Levine. Polynomial invariants of knots of codimension 2, Ann. of Math. 84 (1966) 537-544. [76] B.-H. Li and T.-J. Li. Generalized Gaussian sums and Chern-Simons-Witten-Jones invariants of lens spaces, J. Knot Theory and its Ramifications 5 (1996) 183-224. [77] W. B. R. Lickorish. A representation of orientable combinatorial 3-manifolds, Ann. of Math. 76 (1962) 531–540. [78] W. B. R. Lickorish. A finite set of generators for the homeotopy group of a 2-manifold, Proc. Cambridge Philos. Soc. 60 (1964) 769-778. [79] W. B. R. Lickorish. On the homeomorphisms of a non-orientable surface, Proc. Cambridge Philos. Soc. 61 (1965) 61-64. [80] W. B. R. Lickorish. 
A finite set of generators for the homeotopy group of a 2-manifold. (Corrigendum), Proc. Cambridge Philos. Soc. 62 (1966) 679-681. [81] W. B. R. Lickorish. The irreducibility of the three-sphere, Michigan Math. J. 36 (1989) 345-349. [82] W. B. R. Lickorish. Invariants for 3-manifolds from the combinatorics of the Jones polynomial, Pacific J. Math. 149 (1991) 337-347. [83] W. B. R. Lickorish. Three-manifolds and the Temperley-Lieb algebra, Math. Ann. 290 (1991) 657-670. [84] W. B. R. Lickorish. Calculations with the Temperley-Lieb algebra, Comment. Math. Helv. 67 (1992) 571-591. [85] W. B. R. Lickorish. Distinct 3-manifolds with all \( SU(2)_q \) invariants the same, Proc. Amer. Math. Soc. 117 (1993) 285-292. [86] W. B. R. Lickorish. The skein method for three-manifold invariants, J. Knot Theory and its Ramifications 2 (1993) 171-194. [87] W. B. R. Lickorish. Skeins and handlebodies, Pacific J. Math. 159 (1993) 337-349. [88] W. B. R. Lickorish and K. C. Millett. Some evaluations of link polynomials, Comment. Math. Helv. 61 (1986) 349-359. [89] W. B. R. Lickorish and K. C. Millett. An evaluation of the \( F \)-polynomial of a link, Lecture Notes in Math. 1350, Springer-Verlag (1987) 104-108. [90] W. B. R. Lickorish and K. C. Millett. A polynomial invariant of oriented links, Topology 26 (1987) 107-141. [91] A. S. Lipson. An evaluation of a link polynomial, Math. Proc. Cambridge Philos. Soc. 100 (1986) 361-364. [92] C. N. Little. Non-alternate \( \pm \) knots, Trans. Roy. Soc. Edinburgh 39 (1900) 771-778. [93] C. Livingston. Knot theory, Carus Mathematical Monographs 24, Math. Assoc. Amer. (1993). [94] W. W. Menasco. Closed incompressible surfaces in alternating knot and link complements, Topology 23 (1984) 37-44. [95] W. W. Menasco and M. B. Thistlethwaite. The classification of alternating links, Ann. of Math. 138 (1993) 113-171. [96] K. Morimoto, M. Sakuma and Y. Yokota. Identifying tunnel number one knots, J. Math. Soc. Japan 48 (1996) 667-688. [97] H. R. Morton. Seifert circles and knot polynomials, Math. Proc. Cambridge Philos. Soc. 99 (1986) 107-109. [98] H. R. Morton. Threading knot diagrams, Math. Proc. Cambridge Philos. Soc. 99 (1986) 247-260. [99] H. Murakami. A weight system derived from the Conway potential function, J. London Math. Soc. (1997) (to appear). [100] J. Murakami. A state model for the multi-variable Alexander polynomial, Pacific J. Math. 157 (1993) 109-135. [101] K. Murasugi. On a certain numerical invariant of link types, Trans. Amer. Math. Soc. 117 (1965) 387-422. [102] K. Murasugi. On the signature of links, Topology 9 (1970) 283-298. [103] Y. Nakanishi. A note on unknotting number, Math. Sem. Notes Kobe Univ. 9 (1981) 99-108. [104] J. R. Neil. Combinatorial calculations of the various normalisations of the Witten invariants for 3-manifolds, J. Knot Theory and its Ramifications 1 (1992) 407-499. [105] C. D. 
Papakyriakopoulos. On Dehn's lemma and the asphericity of knots, Ann. of Math. 66 (1957) 1-26. [106] J. H. Przytycki and P. Traczyk. Invariants of links of Conway type, Kobe J. Math. 4 (1987) 115-139. [107] K. Reidemeister. Knotentheorie, Springer-Verlag New York (1948). [108] K. Reidemeister. Knot Theory (Translation of Knotentheorie), BSC Associates Moscow, Idaho (1983). [109] N. Y. Reshetikhin and V. G. Turaev. Invariants of 3-manifolds via link polynomials and quantum groups, Invent. Math. 103 (1991) 547-597. [110] R. Riley. Homomorphisms of knot groups on finite groups, Math. Comp. 25 (1971) 603-619. [111] R. A. Robertello. An invariant of knot cobordism, Comm. Pure and App. Math. 18 (1965) 543-555. [112] D. Rolfsen. Knots and Links, Publish or Perish (1976). [113] C. P. Rourke and B. J. Sanderson. Introduction to piecewise-linear topology, Ergebnisse der Mathematik 69, Springer-Verlag (1972). [114] C. P. Rourke and D. P. Sullivan. On the Kervaire construction, Ann. of Math. 94 (1971) 397-413. [115] H. Schubert. Knoten und Vollringe, Acta Math. 90 (1953) 131-286. [116] R. A. Stong. The Jones polynomial of parallels and applications to crossing number, Pacific J. Math. 164 (1994) 383-395. [117] P. M. Strickland. On the quantum group invariants of cables, Preprint (University of Liverpool) (1990). [118] P. G. Tait. On knots I, II, III., Scientific papers I. Cambridge University Press London (1898) 273-437. [119] M. B. Thistlethwaite. Kauffman's polynomial and alternating links, Topology 27 (1988) 311-318. [120] M. B. Thistlethwaite. On the Kauffman polynomial of an adequate link, Invent. Math. 93 (1988) 285-298. [121] W. P. Thurston. Geometry and topology of 3-manifolds, Princeton University notes (1979). [122] P. Traczyk. Periodic knots and the skein polynomial, Invent. Math. 106 (1991) 73-84. [123] A. G. Tristram. Some cobordism invariants for links, Proc. Cambridge Philos. Soc. 66 (1969) 251-264. [124] H. F. Trotter. Homology of group systems with applications to knot theory, Ann. of Math. 76 (1962) 464-498. [125] H. F. Trotter. Non-invertible knots exist, Topology 2 (1964) 275-280. [126] V. G. Turaev. The Yang-Baxter equations and invariants of links, Invent. Math. 92 (1988) 527-553. [127] V. G. Turaev. Quantum invariants for knots and 3-manifolds, de Gruyter Berlin (1994). [128] V. G. Turaev and H. Wenzl. Quantum invariants of 3-manifolds associated with classical simple Lie algebras, International J. Math. 4 (1993) 323-358. [129] V. A. Vassiliev. Cohomology of knot spaces, "Theory of singularities and its applications," Amer. Math. Soc., Providence (1990). [130] V. A. Vassiliev. Complements of discriminants of smooth maps: topology and applications, Amer. Math. Soc. Translations of Math. 98 (1992). [131] B. Wajnryb. A simple presentation for the mapping class group of an orientable surface, Israel J. Math. 45 (1983) 157-174. [132] F. Waldhausen. On irreducible 3-manifolds which are sufficiently large, Ann. of Math. 87 (1968) 56-88. [133] H. Wenzl. On sequences of projections, C. R. Math. Rep. Acad. Sci. IX (1987) 5-9. [134] W. Whitten. Knot complements and groups, Topology 26 (1987) 41-44. [135] E. Witten. Quantum field theory and the Jones polynomial, Comm. Math. Phys. 121 (1989) 351-399. [136] S. Yamada. The minimum number of Seifert circuits equals the braid index of a link, Invent. Math. 89 (1987) 347-356. [137] S. Yamada. A topological invariant of spatial regular graphs, Knots 90, Ed. A. Kawauchi, de Gruyter (1992) 447-454. [138] Y. Yokota. On quantum SU(2) invariants and generalised bridge numbers of knots, Math. Proc. Cambridge Philos. Soc. 117 (1995) 545-557. [139] Y. Yokota. Skeins and quantum \( SU(N) \) invariants of 3-manifolds, Math. Ann. 307 (1997) 109-138. [140] E. C. Zeeman. Unknotting combinatorial balls, Ann. of Math. 78 (1963) 501-526. 
## Index

\( 6j \)-symbols, 154
\( (n+1) \)-ball \( B^{n+1} \), 1
\( D^{n+1} \), 1
\( f^{(n)} \in TL_n \), 136
\( n \)-dimensional sphere \( S^n \), 1
\( R \)-matrices, 187
\( r \)-parallel, 46
\( r^{\text{th}} \) Alexander ideal, 55
\( r^{\text{th}} \) Alexander polynomial, 55
\( S \)-equivalence, 81
\( S_n(x) \), 138
\( SU_q(2) \) three-manifold invariants, 133
\( \Gamma(x, y, z) \), 150
\( \omega \)-signature, 84
\( \omega \in \mathcal{S}(S^1 \times I) \), 138
\( \bar{L} \), 4
\( \partial \), 11
\( \mathcal{I}_A(S^1 \times F_g) \), 157
2-bridge link, 8
4-ball genus, 91
adequate diagram, 42
admissible triple, 152
Alexander module, 55
Alexander polynomial, 49
Alexander polynomial table, 59
alternating diagram, 32
alternating knots, 7
amphicheiral knots, 29
arborescent link, 9
arborescent part, 39
Arf invariant, 103
ascending, 167
ball-arc pair, 19
Betti number, 133
boundary, 11
braid group, 9
braid index, 182
braids, 9
branched covers, 93
branched, 72
breadth, 45
bubbles, 33
cable knot, 10-11
Catalan number, 135
characteristic polynomial, 62
Chebyshev polynomial, 138
closed braid, 10
coloured links, 164
commutator subgroup, 68
companion, 10
complexity theory, 186
components, 1
connected, 43
Conway polynomial, 79, 82, 166
Conway sphere, 38
covering map, 66
covering spaces, 66
crossing number, 6
Dehn's Lemma, 113
determinant of \( K \), 90
determinant of \( L \), 99
"Dubrovnic" polynomial, 176
Eilenberg-MacLane space, 114
elementary enlargement, 80
elementary ideal, 51
equivalent links, 2
evaluations of polynomials, 186
exterior, 11
fibre map, 66
finite presentation, 49
flat, 86
framed link, 129
framing curve, 129
framing, 123
free action, 74
free differential calculus, 116
fundamental group, 67, 110
gauge theory, 91
genus \( g \) handlebody, 127
genus of a knot, 16
Goeritz matrix, 93, 98
Gordon-Litherland form, 95
group of a covering, 69
group of a link, 110
Haken manifolds, 115
handlebody, 127
Heegaard splitting, 123, 127
HOMFLY polynomial, 166, 179
HOMFLY polynomial table, 184
homotopy exact sequence, 68
homotopy lifting property, 67
homotopy type, 114
Hopf link, 87
horned spheres, 19
Hurewicz Isomorphism Theorem, 68
hyperbolic plane, 120
incompressible torus, 113
infinite cyclic covering, 70
innermost curves, 18
invariant, 6
invariants of finite type, 190
isotopy, 2
isotopy classes, 123
Jones polynomial of a torus knot, 161
Jones polynomial, 23, 26, 106
Jones polynomial table, 27
Jones-Wenzl idempotent, 136
Kauffman bracket, 23-24, 133
Kauffman polynomial, 100, 166, 179
Kauffman polynomial table, 185
kink, 3
Kirby moves, 129, 133
knot, 1
'\( L \)-matrix' method, 118
lens space, 67, 146
'level', 142
linear skein \( \mathcal{S}(F) \), 134
linear skein theory, 133
link determinant, 93
link diagram, 3
link, 1
linking matrix, 133
linking number, 11, 13
longitude, 13
Loop Theorem, 112
map of outsides, 152
mapping class group, 124, 131
Markov moves, 10
meridian, 13
minimal polynomial, 188
minus-adequate, 42
multi-variable Alexander polynomial, 119
mutation, 29, 179
non-alternating, 45
non-orientable spanning surface, 95
non-reversible knot, 120
non-singular, 103
nugatory, 42
obstruction theory, 88
obverse, 4
orientable double cover, 77
orientable, 127
path lifting property, 67
pattern, 10
peripheral torus, 38
plumbing, 9
plus-adequate, 42
Poincaré conjecture, 131
pre-Goeritz matrix, 98
presentation matrix, 49
pretzel link, 8, 56
prime diagram, 33
prime, 6, 33
quadratic form, 103
quantum \( SU_q(2) \) invariants, 141, 146
quantum groups, 187
rational link, 8
recombination techniques, 161
reduced diagram, 42
reflection, 4
regular covering, 76
regular isotopy, 3, 134
regular neighbourhood, 11
Reidemeister moves, 3
removable crossing, 42
reverse, 4
ribbon knots, 86
satellite knots, 10, 60
Schönflies theorem, 19
Seifert circuits, 16, 106
Seifert fibrations, 146
Seifert form, 53
Seifert matrix, 53
Seifert surface, 15, 93
semi-locally simply connected, 73
sign of a crossing, 11
signature of a link, 85
singular links, 190
skein formula, 83, 166
skew-symmetric forms, 104
slice knot, 86
slicing disc, 86
Sphere Theorem, 112
spin structure, 142
split link, 32
standard position, 33
state, 41
states model, 187, 189
statistical mechanics, 187
strongly prime, 33
substitutions of variables, 180
sum of knots, 4
surgery along an arc, 79
surgery, 18, 70, 123
symplectic base, 104
table of diagrams, 5
table of signatures, 85
tassel, 8
Temperley-Lieb algebras, 133, 135
torus knot, 118
torus link, 10
tunnel number, 164
twist homeomorphism, 124
twist-equivalent, 125
twisted double, 56
two-bridge link, 119
unimodular congruences, 81
universal covering, 73
unknot, 4
unknotting number, 7, 72, 91
untying function, 173
Vassiliev invariants, 190
Whitehead double, 62
wild embeddings, 1
Wirtinger presentation, 111
writhe \( w(D) \), 25
Yang-Baxter equations, 187
Yang-Baxter operator, 188

## Graduate Texts in Mathematics

61 Whitehead. Elements of Homotopy Theory.
62 Kargapolov/Merzljakov. Fundamentals of the Theory of Groups.
63 Bollobas. Graph Theory.
64 Edwards. Fourier Series. Vol. I. 2nd ed.
65 Wells. Differential Analysis on Complex Manifolds. 2nd ed.
66 Waterhouse. Introduction to Affine Group Schemes.
67 Serre. Local Fields.
68 Weidmann. Linear Operators in Hilbert Spaces.
69 Lang. Cyclotomic Fields II.
70 Massey. Singular Homology Theory.
71 Farkas/Kra. Riemann Surfaces. 2nd ed.
72 Stillwell. Classical Topology and Combinatorial Group Theory. 2nd ed.
73 Hungerford. Algebra.
74 Davenport. Multiplicative Number Theory. 2nd ed.
75 Hochschild. Basic Theory of Algebraic Groups and Lie Algebras.
76 Iitaka. Algebraic Geometry.
77 Hecke. Lectures on the Theory of Algebraic Numbers.
78 Burris/Sankappanavar. A Course in Universal Algebra.
79 Walters. An Introduction to Ergodic Theory.
80 Robinson. A Course in the Theory of Groups. 2nd ed.
81 Forster. Lectures on Riemann Surfaces.
82 Bott/Tu. Differential Forms in Algebraic Topology.
83 Washington. Introduction to Cyclotomic Fields. 2nd ed.
84 Ireland/Rosen. A Classical Introduction to Modern Number Theory. 2nd ed.
85 Edwards. Fourier Series. Vol. II. 2nd ed.
86 van Lint. Introduction to Coding Theory. 2nd ed.
87 Brown. Cohomology of Groups.
88 Pierce. Associative Algebras.
89 Lang. Introduction to Algebraic and Abelian Functions. 2nd ed.
90 Brøndsted. An Introduction to Convex Polytopes.
91 Beardon. On the Geometry of Discrete Groups.
92 Diestel. Sequences and Series in Banach Spaces.
93 Dubrovin/Fomenko/Novikov. Modern Geometry-Methods and Applications. Part I. 2nd ed.
94 Warner. Foundations of Differentiable Manifolds and Lie Groups.
95 Shiryaev. Probability. 2nd ed.
96 Conway. A Course in Functional Analysis. 2nd ed.
97 Koblitz. Introduction to Elliptic Curves and Modular Forms. 2nd ed.
98 Bröcker/tom Dieck. Representations of Compact Lie Groups.
99 Grove/Benson. Finite Reflection Groups. 2nd ed.
100 Berg/Christensen/Ressel. Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions.
101 Edwards. Galois Theory.
102 Varadarajan. Lie Groups, Lie Algebras and Their Representations.
103 Lang. Complex Analysis. 3rd ed.
104 Dubrovin/Fomenko/Novikov. Modern Geometry-Methods and Applications. Part II.
105 Lang. \( SL_2(\mathbf{R}) \).
106 Silverman. The Arithmetic of Elliptic Curves.
107 Olver. Applications of Lie Groups to Differential Equations. 2nd ed.
108 Range. Holomorphic Functions and Integral Representations in Several Complex Variables.
109 Lehto. Univalent Functions and Teichmüller Spaces.
110 Lang. Algebraic Number Theory.
111 Husemöller. Elliptic Curves.
112 Lang. Elliptic Functions.
113 Karatzas/Shreve. Brownian Motion and Stochastic Calculus. 2nd ed.
114 Koblitz. A Course in Number Theory and Cryptography. 2nd ed.
115 Berger/Gostiaux. Differential Geometry: Manifolds, Curves, and Surfaces.
116 Kelley/Srinivasan. Measure and Integral. Vol. I.
117 Serre. Algebraic Groups and Class Fields.
118 Pedersen. Analysis Now.
119 Rotman. An Introduction to Algebraic Topology.
120 Ziemer. Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation.
121 Lang. Cyclotomic Fields I and II. Combined 2nd ed.
122 Remmert. Theory of Complex Functions. Readings in Mathematics
123 Ebbinghaus/Hermes et al. Numbers. Readings in Mathematics
124 Dubrovin/Fomenko/Novikov. Modern Geometry-Methods and Applications. Part III.
125 Berenstein/Gay. Complex Variables: An Introduction.
126 Borel. Linear Algebraic Groups. 2nd ed.
127 Massey. A Basic Course in Algebraic Topology.
128 Rauch. Partial Differential Equations.
129 Fulton/Harris. Representation Theory: A First Course. Readings in Mathematics
130 Dodson/Poston. Tensor Geometry.
131 Lam. A First Course in Noncommutative Rings.
132 Beardon. Iteration of Rational Functions.
133 Harris. Algebraic Geometry: A First Course.
134 Roman. Coding and Information Theory.
135 Roman. Advanced Linear Algebra.
136 Adkins/Weintraub. Algebra: An Approach via Module Theory.
137 Axler/Bourdon/Ramey. Harmonic Function Theory.
138 Cohen. A Course in Computational Algebraic Number Theory.
139 Bredon. Topology and Geometry.
140 Aubin. Optima and Equilibria. An Introduction to Nonlinear Analysis.
141 Becker/Weispfenning/Kredel. Gröbner Bases. A Computational Approach to Commutative Algebra.
142 Lang. Real and Functional Analysis. 3rd ed.
143 Doob. Measure Theory.
144 Dennis/Farb. Noncommutative Algebra.
145 Vick. Homology Theory. An Introduction to Algebraic Topology. 2nd ed.
146 Bridges. Computability: A Mathematical Sketchbook.
147 Rosenberg. Algebraic \( K \)-Theory and Its Applications.
148 Rotman. An Introduction to the Theory of Groups. 4th ed.
149 Ratcliffe. Foundations of Hyperbolic Manifolds.
150 Eisenbud. Commutative Algebra with a View Toward Algebraic Geometry.
151 Silverman. Advanced Topics in the Arithmetic of Elliptic Curves.
152 Ziegler. Lectures on Polytopes.
153 Fulton. Algebraic Topology: A First Course.
154 Brown/Pearcy. An Introduction to Analysis.
155 Kassel. Quantum Groups.
156 Kechris. Classical Descriptive Set Theory.
157 Malliavin. Integration and Probability.
158 Roman. Field Theory.
159 Conway. Functions of One Complex Variable II.
160 Lang. Differential and Riemannian Manifolds.
161 Borwein/Erdélyi. Polynomials and Polynomial Inequalities.
162 Alperin/Bell. Groups and Representations.
163 Dixon/Mortimer. Permutation Groups.
164 Nathanson. Additive Number Theory: The Classical Bases.
165 Nathanson. Additive Number Theory: Inverse Problems and the Geometry of Sumsets.
166 Sharpe. Differential Geometry: Cartan's Generalization of Klein's Erlangen Program.
167 Morandi. Field and Galois Theory.
168 Ewald. Combinatorial Convexity and Algebraic Geometry.
169 Bhatia. Matrix Analysis.
170 Bredon. Sheaf Theory. 2nd ed.
171 Petersen. Riemannian Geometry.
172 Remmert. Classical Topics in Complex Function Theory.
173 Diestel. Graph Theory.
174 Bridges. Foundations of Real and Abstract Analysis.
175 Lickorish. An Introduction to Knot Theory.
176 Lee. Riemannian Manifolds.
177 Newman. Analytic Number Theory.
178 Clarke/Ledyaev/Stern/Wolenski. Nonsmooth Analysis and Control Theory.
100_S_Fourier Analysis
0
# GraduateTexts inMathematics Rajendra Bhatia Matrix Analysis Springer # Graduate Texts in Mathematics 169 Editorial Board S. Axler F.W. Gehring P.R. Halmos ## Graduate Texts in Mathematics 1 TAKEUTI/ZARING. Introduction to Axiomatic Set Theory. 2nd ed. 2 Oxtoby. Measure and Category. 2nd ed. 3 Schaefer. Topological Vector Spaces. 4 Hilton/Stammbach. A Course in Homological Algebra. 5 MAC LANE. Categories for the Working Mathematician. 6 Hughes/Piper. Projective Planes. 7 Serre. A Course in Arithmetic. 8 TAKEUTI/ZARING. Axiomatic Set Theory. 9 Humphreys. Introduction to Lie Algebras and Representation Theory. 10 COHEN. A Course in Simple Homotopy Theory. 11 Conway. Functions of One Complex Variable I. 2nd ed. 12 Beals. Advanced Mathematical Analysis. 13 Anderson/Fuller. Rings and Categories of Modules. 2nd ed. 14 Golubitsky/Guillemin. Stable Mappings and Their Singularities. 15 Berberlan. Lectures in Functional Analysis and Operator Theory. 16 WINTER. The Structure of Fields. 17 Rosenblatt. Random Processes. 2nd ed. 18 Halmos. Measure Theory. 19 Halmos. A Hilbert Space Problem Book. 2nd ed. 20 Husemoller. Fibre Bundles. 3rd ed. 21 Humphreys. Linear Algebraic Groups. 22 BARNES/MACK. An Algebraic Introduction to Mathematical Logic. 23 Greub. Linear Algebra. 4th ed. 24 Holmes. Geometric Functional Analysis and Its Applications. 25 Hewitt/Stromberg. Real and Abstract Analysis. 26 Manes. Algebraic Theories. 27 Kelley. General Topology. 28 ZARISKI/SAMUEL. Commutative Algebra. Vol.I. 29 Zariski/Samuel. Commutative Algebra. Vol.II. 30 JACOBSON. Lectures in Abstract Algebra I. Basic Concepts. 31 JACOBSON. Lectures in Abstract Algebra II. Linear Algebra. 32 JACOBSON. Lectures in Abstract Algebra III. Theory of Fields and Galois Theory. 33 Hirsch. Differential Topology. 34 Spitzer. Principles of Random Walk. 2nd ed. 35 Wermer. Banach Algebras and Several Complex Variables. 2nd ed. 36 Kelley/Namioka et al. Linear Topological Spaces. 37 MONK. Mathematical Logic. 
38 Grauert/Fritzsche. Several Complex Variables. 39 Arveson. An Invitation to \( {C}^{ * } \) -Algebras. 40 Kemeny/Snell/Knapp. Denumerable Markov Chains. 2nd ed. 41 Apostol. Modular Functions and Dirichlet Series in Number Theory. 2nd ed. 42 Serre. Linear Representations of Finite Groups. 43 GILLMAN/JERISON. Rings of Continuous Functions. 44 KENDIG. Elementary Algebraic Geometry. 45 Loève. Probability Theory I. 4th ed. 46 Loève. Probability Theory II. 4th ed. 47 MOISE. Geometric Topology in Dimensions 2 and 3. 48 SACHS/WU. General Relativity for Mathematicians. 49 Gruenberg/Weir. Linear Geometry. 2nd ed. 50 Edwards. Fermat's Last Theorem. 51 KLINGENBERG. A Course in Differential Geometry. 52 Hartshorne. Algebraic Geometry. 53 Manin. A Course in Mathematical Logic. 54 Graver/Watkins. Combinatorics with Emphasis on the Theory of Graphs. 55 Brown/Pearcy. Introduction to Operator Theory I: Elements of Functional Analysis. 56 Massey. Algebraic Topology: An Introduction. 57 Crowell/Fox. Introduction to Knot Theory. 58 KOBLITZ. \( p \) -adic Numbers, \( p \) -adic Analysis, and Zeta-Functions. 2nd ed. 59 LANG. Cyclotomic Fields. 60 Arnold. Mathematical Methods in Classical Mechanics. 2nd ed. Rajendra Bhatia ## Matrix Analysis Rajendra Bhatia Indian Statistical Institute New Delhi 110016 India Editorial Board S. Axler F.W. Gehring Department of Department of Mathematics Mathematics Michigan State University University of Michigan East Lansing, MI 48824 Ann Arbor, MI 48109 USA USA P.R. Halmos Department of Mathematics Santa Clara University Santa Clara, CA 95053 USA ## Mathematics Subject Classification (1991): 15-01, 15A16, 15A45, 47A55, 65F15 Library of Congress Cataloging-in-Publication Data Bhatia, Rajendra, 1952- Matrix analysis / Rajendra Bhatia. p. cm. - (Graduate texts in mathematics; 169) Includes bibliographical references and index. ISBN 978-1-4612-6857-4 ISBN 978-1-4612-0653-8 (eBook) DOI 10.1007/978-1-4612-0653-8 1. Matrices. I. Title. II. Series. 
QA188.B485 1996 \( {512.9}{}^{\prime }{434} - \mathrm{{dc}}{20} \) 96-32217 Printed on acid-free paper. (C) 1997 Springer Science+Business Media New York Originally published by Springer-Verlag New York Berlin Heidelberg in 1997 Softcover reprint of the hardcover 1st edition 1997 All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone. Production managed by Victoria Evarretta; manufacturing supervised by Jeffrey Taub. Photocomposed pages prepared from the author's LaTeX files. 987654321 ## Preface A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. 
A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to acquire hard tools and then learn how to use them delicately. The reader is expected to be very thoroughly familiar with basic linear algebra. The standard texts Finite-Dimensional Vector Spaces by P.R. Halmos and Linear Algebra by K. Hoffman and R. Kunze provide adequate preparation for this. In addition, a basic knowledge of functional analysis, complex analysis and differential geometry is necessary. The usual first courses in these subjects cover all that is used in this book. The book is divided, conceptually, into three parts. The first five chapters contain topics that are basic to much of the subject. (Of these, Chapter 5 is more advanced and also more special.) Chapters 6 to 8 are devoted to perturbation of spectra, a topic of much importance in numerical analysis, physics and engineering. The last two chapters contain inequalities and perturbation bounds for other matrix functions. These too have been of broad interest in several areas. In Chapter 1, I have given a very brief and rapid review of some basic topics. The aim is not to provide a crash course but to remind the reader of some important ideas and theorems and to set up the notations that are used in the rest of the book. The emphasis, the viewpoint, and some proofs may be different from what the reader has seen earlier. Special attention is given to multilinear algebra; and inequalities for matrices and matrix functions are introduced rather early. After the first chapter, the exposition proceeds at a much more leisurely pace. The contents of each chapter have been summarised in its first paragraph. The book can be used for a variety of graduate courses. Chapters 1 to 4 should be included in any course on Matrix Analysis. 
After this, if perturbation theory of spectra is to be emphasized, the instructor can go on to Chapters 6, 7 and 8. With a judicious choice of topics from these chapters, she can design a one-semester course. For example, Chapters 7 and 8 are independent of each other, as are the different sections in Chapter 8. Alternatively, a one-semester course could include much of Chapters 1 to 5, Chapter 9, and the first part of Chapter 10. All topics could be covered comfortably in a two-semester course. The book can also be used to supplement courses on operator theory, operator algebras and numerical linear algebra. The book has several exercises scattered in the text and a section called Problems at the end of each chapter. An exercise is placed at a particular spot with the idea that the reader should do it at that stage of his reading and then proceed further. Problems, on the other hand, are designed to serve different purposes. Some of them are supplementary exercises, while others are about themes that are related to the main development in the text. Some are quite easy while others are hard enough to be the contents of research papers. From Chapter 6 onwards, I have also used the problems for another purpose. There are results, or proofs, which are a bit too special to be placed in the main text. At the same time, they are interesting enough to merit the attention of anyone working, or planning to work, in this area. I have stated such results as parts of the Problems sections, often with hints about their solutions. This should enhance the value of the book as a reference, and provide topics for a seminar course as well. The reader should not be discouraged if he finds some of these problems difficult. At a few places I have drawn attention to some unsolved research problems. At some others, the existence of such problems can be inferred from the text. I hope the book will encourage some readers to solve these problems too. While most of the notations used are the standard ones, some need a little explanation: Almost all functional analysis books written by mathematicians adopt the convention that an inner product \( \langle u, v\rangle \) is linear in the variable \( u \) and conjugate-linear in the variable \( v \) . Physicists and numerical analysts adopt the opposite convention, and different notations as well. There would be no special reason to prefer one over the other, except that certain calculations and manipulations become much simpler in the latter notation. If \( u \) and \( v \) are column vectors, then \( {u}^{ * }v \) is the product of a row vector and a column vector, hence a number. This is the inner product of \( u \) and \( v \) . Combined with the usual rules of matrix multiplication, this facilitates computations. For this reason, I have chosen the second convention about inner products, with the belief that the initial discomfort this causes some readers will be offset by the eventual advantages.
(Dirac's bra and ket notation, used by physicists, is different typographically but has the same idea behind it.) The \( k \) -fold tensor power of an operator is represented in this book as \( { \otimes }^{k}A \), the antisymmetric and the symmetric tensor powers as \( { \land }^{k}A \) and \( { \vee }^{k}A \), respectively. This helps in thinking of these objects as maps, \( A \rightarrow { \otimes }^{k}A \), etc. We often study the variational behaviour of, and perturbation bounds for, functions of operators. In such contexts, this notation is natural. Very often we have to compare two \( n \) -tuples of numbers after rearranging them. For this I have used a pictorial notation that makes it easy to remember the order that has been chosen. If \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is a vector with real coordinates, then \( {x}^{ \downarrow } \) and \( {x}^{ \uparrow } \) are vectors whose coordinates are obtained by rearranging the numbers \( {x}_{j} \) in decreasing order and in increasing order, respectively. We write \( {x}^{ \downarrow } = \left( {{x}_{1}^{ \downarrow },\ldots ,{x}_{n}^{ \downarrow }}\right) \) and \( {x}^{ \uparrow } = \left( {{x}_{1}^{ \uparrow },\ldots ,{x}_{n}^{ \uparrow }}\right) \), where \( {x}_{1}^{ \downarrow } \geq \cdots \geq {x}_{n}^{ \downarrow } \) and \( {x}_{1}^{ \uparrow } \leq \cdots \leq {x}_{n}^{ \uparrow } \) . The symbol \( ||| \cdot ||| \) stands for a unitarily invariant norm on matrices: one that satisfies the equality \( |||{UAV}||| = |||A||| \) for all \( A \) and for all unitary \( U, V \) . A statement like \( |||A||| \leq |||B||| \) means that, for the matrices \( A \) and \( B \), this inequality is true simultaneously for all unitarily invariant norms. The supremum norm of \( A \), as an operator on the space \( {\mathbb{C}}^{n} \), is always written as \( \parallel A\parallel \) .
Other norms carry special subscripts. For example, the Frobenius norm, or the Hilbert-Schmidt norm, is written as \( \parallel A{\parallel }_{2} \) . (This should be noted by numerical analysts who often use the symbol \( \parallel A{\parallel }_{2} \) for what we call \( \parallel A\parallel \) .) A few symbols have different meanings in different contexts. The reader's attention is drawn to three such symbols. If \( x \) is a complex number, \( \left| x\right| \) denotes the absolute value of \( x \) . If \( x \) is an \( n \) -vector with coordinates \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) , then \( \left| x\right| \) is the vector \( \left( {\left| {x}_{1}\right| ,\ldots ,\left| {x}_{n}\right| }\right) \) . For a matrix \( A \), the symbol \( \left| A\right| \) stands for the positive semidefinite matrix \( {\left( {A}^{ * }A\right) }^{1/2} \) . If \( J \) is a finite set, \( \left| J\right| \) denotes the number of elements of \( J \) . A permutation on \( n \) indices is often denoted by the symbol \( \sigma \) . In this case, \( \sigma \left( j\right) \) is the image of the index \( j \) under the map \( \sigma \) . For a matrix \( A,\sigma \left( A\right) \) represents the spectrum of \( A \) . The trace of a matrix \( A \) is written as \( \operatorname{tr}A \) . In analogy, if \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is a vector, we write \( \operatorname{tr}x \) for the sum \( \sum {x}_{j} \) . The words matrix and operator are used interchangeably in the book. When a statement about an operator is purely finite-dimensional in content, I use the word matrix. If a statement is true also in infinite-dimensional spaces, possibly with a small modification, I use either the word matrix or the word operator. Many of the theorems in this book have extensions to infinite-dimensional spaces. Several colleagues have contributed to this book, directly and indirectly. I am thankful to all of them. T. Ando, J.S. Aujla, R.B. Bapat, A. Ben Israel, I. 
Ionascu, A.K. Lal, R.-C. Li, S.K. Narayan, D. Petz and P. Rosenthal read parts of the manuscript and brought several errors to my attention. Fumio Hiai read the whole book with his characteristic meticulous attention and helped me eliminate many mistakes and obscurities. Long-time friends and coworkers M.D. Choi, L. Elsner, J.A.R. Holbrook, R. Horn, F. Kittaneh, A. McIntosh, K. Mukherjea, K.R. Parthasarathy, P. Rosenthal and K.B. Sinha have generously shared with me their ideas and insights. These ideas, collected over the years, have influenced my writing. I owe a special debt to T. Ando. I first learnt some of the topics presented here from his Hokkaido University lecture notes. I have also learnt much from discussions and correspondence with him. I have taken a lot from his notes while writing this book. The idea of writing this book came from Chandler Davis in 1986. Various logistic difficulties forced us to abandon our original plans of writing it together. The book is certainly the poorer for it. Chandler, however, has contributed so much to my mathematics, to my life, and to this project, that this is as much his book as it is mine. I am thankful to the Indian Statistical Institute, whose facilities have made it possible to write this book. I am also thankful to the Department of Mathematics of the University of Toronto and to NSERC Canada, for several visits that helped this project take shape. It is a pleasure to thank V.P. Sharma for his LaTeX typing, done with competence and with good cheer, and the staff at Springer-Verlag for their help and support. My most valuable resource while writing has been the unstinting and ungrudging support from my son Gautam and wife Irpinder. Without that, this project might have been postponed indefinitely.

## Contents

I A Review of Linear Algebra 1
I.1 Vector Spaces and Inner Product Spaces 1
I.2 Linear Operators and Matrices 3
I.3 Direct Sums 9
I.4 Tensor Products 12
I.5 Symmetry Classes 16
I.6 Problems 20
I.7 Notes and References 26
II Majorisation and Doubly Stochastic Matrices 28
II.1 Basic Notions 28
II.2 Birkhoff's Theorem 36
II.3 Convex and Monotone Functions 40
II.4 Binary Algebraic Operations and Majorisation 48
II.5 Problems 50
II.6 Notes and References 54
III Variational Principles for Eigenvalues 57
III.1 The Minimax Principle for Eigenvalues 57
III.2 Weyl's Inequalities 62
III.3 Wielandt's Minimax Principle 65
III.4 Lidskii's Theorems 68
III.5 Eigenvalues of Real Parts and Singular Values 73
III.6 Problems 75
III.7 Notes and References 78
IV Symmetric Norms 84
IV.1 Norms on \( {\mathbb{C}}^{n} \) 84
IV.2 Unitarily Invariant Norms on Operators on \( {\mathbb{C}}^{n} \) 91
IV.3 Lidskii's Theorem (Third Proof) 98
IV.4 Weakly Unitarily Invariant Norms 101
IV.5 Problems 107
IV.6 Notes and References 109
V Operator Monotone and Operator Convex Functions 112
V.1 Definitions and Simple Examples 112
V.2 Some Characterisations 117
V.3 Smoothness Properties 123
V.4 Loewner's Theorems 131
V.5 Problems 147
V.6 Notes and References 149
VI Spectral Variation of Normal Matrices 152
VI.1 Continuity of Roots of Polynomials 153
VI.2 Hermitian and Skew-Hermitian Matrices 155
VI.3 Estimates in the Operator Norm 159
VI.4 Estimates in the Frobenius Norm 165
VI.5 Geometry and Spectral Variation: the Operator Norm 168
VI.6 Geometry and Spectral Variation: wui Norms 173
VI.7 Some Inequalities for the Determinant 181
VI.8 Problems 184
VI.9 Notes and References 190
VII Perturbation of Spectral Subspaces of Normal Matrices 194
VII.1 Pairs of Subspaces 195
VII.2 The Equation \( {AX} - {XB} = Y \) 203
VII.3 Perturbation of Eigenspaces 211
VII.4 A Perturbation Bound for Eigenvalues 212
VII.5 Perturbation of the Polar Factors 213
VII.6 Appendix: Evaluating the (Fourier) constants 216
VII.7 Problems 221
VII.8 Notes and References 223
VIII Spectral Variation of Nonnormal Matrices 226
VIII.1 General Spectral Variation Bounds 227
VIII.4 Matrices with Real Eigenvalues 238
VIII.5 Eigenvalues with Symmetries 240
VIII.6 Problems 244
VIII.7 Notes and References 249
IX A Selection of Matrix Inequalities 253
IX.1 Some Basic Lemmas 253
IX.2 Products of Positive Matrices 255
IX.3 Inequalities for the Exponential Function 258
IX.4 Arithmetic-Geometric Mean Inequalities 262
IX.5 Schwarz Inequalities 266
IX.6 The Lieb Concavity Theorem 271
IX.7 Operator Approximation 275
IX.8 Problems 279
IX.9 Notes and References 285
X Perturbation of Matrix Functions 289
X.1 Operator Monotone Functions 289
X.2 The Absolute Value 296
X.3 Local Perturbation Bounds 301
X.4 Appendix: Differential Calculus 310
X.5 Problems 317
X.6 Notes and References 320
References 325
Index 339

## I A Review of Linear Algebra

In this chapter we review, at a brisk pace, the basic concepts of linear and multilinear algebra. Most of the material will be familiar to a reader who has had a standard Linear Algebra course, so it is presented quickly with no proofs.
Some topics, like tensor products, might be less familiar. These are treated here in somewhat greater detail. A few of the topics are quite advanced and their presentation is new. ## I. 1 Vector Spaces and Inner Product Spaces Throughout this book we will consider finite-dimensional vector spaces over the field \( \mathbb{C} \) of complex numbers. Such spaces will be denoted by symbols \( V, W,{V}_{1},{V}_{2} \), etc. Vectors will, most often, be represented by symbols \( u, v \) , \( w, x \), etc., and scalars by \( a, b, s, t \), etc. The symbol \( n \), when not explained, will always mean the dimension of the vector space under consideration. Most often, our vector space will be an inner product space. The inner product between the vectors \( u, v \) will be denoted by \( \langle u, v\rangle \) . We will adopt the convention that this is conjugate-linear in the first variable \( u \) and linear in the second variable \( v \) . We will always assume that the inner product is definite; i.e., \( \langle u, u\rangle = 0 \) if and only if \( u = 0 \) . A vector space with such an inner product is then a finite-dimensional Hilbert space. Spaces of this type will be denoted by symbols \( \mathcal{H},\mathcal{K} \), etc. The norm arising from the inner product will be denoted by \( \parallel u\parallel \) ; i.e., \( \parallel u\parallel = \langle u, u{\rangle }^{1/2} \) . As usual, it will sometimes be convenient to deal with the standard Hilbert space \( {\mathbb{C}}^{n} \) . Elements of this vector space are column vectors with \( n \) coordinates. In this case, the inner product \( \langle u, v\rangle \) is the matrix product \( {u}^{ * }v \) obtained by multiplying the column vector \( v \) on the left by the row vector \( {u}^{ * } \) . The symbol \( * \) denotes the conjugate transpose for matrices of any size. The notation \( {u}^{ * }v \) for the inner product is sometimes convenient even when the Hilbert space is not \( {\mathbb{C}}^{n} \) . 
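The inner product convention chosen above happens to be the one NumPy's `vdot` follows (conjugation in the first argument), so it can be checked numerically. A minimal sketch, assuming NumPy is available (none of this code is part of the book's text):

```python
import numpy as np

# Inner product <u, v> = u* v: conjugate-linear in u, linear in v.
# np.vdot conjugates its first argument, matching this convention.
u = np.array([1 + 1j, 2.0])
v = np.array([3j, 4.0])

ip = np.vdot(u, v)                      # <u, v> = sum conj(u_j) v_j
assert np.isclose(ip, np.conj(u) @ v)

# Conjugate-linearity in the first variable: <a u, v> = conj(a) <u, v>
a = 2 - 3j
assert np.isclose(np.vdot(a * u, v), np.conj(a) * ip)
# Linearity in the second variable: <u, a v> = a <u, v>
assert np.isclose(np.vdot(u, a * v), a * ip)

# Viewed as column vectors, <u, v> is the matrix product u* v:
uc, vc = u.reshape(-1, 1), v.reshape(-1, 1)
assert np.isclose((uc.conj().T @ vc).item(), ip)
```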
The distinction between column vectors and row vectors is important in manipulations involving products. For example, if we write elements of \( {\mathbb{C}}^{n} \) as column vectors, then \( {u}^{ * }v \) is a number, but \( u{v}^{ * } \) is an \( n \times n \) matrix (sometimes called the "outer product" of \( u \) and \( v \) ). However, it is typographically inconvenient to write column vectors. So, when the context does not demand this distinction, we may write a vector \( x \) with scalar coordinates \( {x}_{1},\ldots ,{x}_{n} \), simply as \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) . This will often be done in later chapters. For the present, however, we will maintain the distinction between row and column vectors. Occasionally our Hilbert spaces will be real, but we will use the same notation for them as for the complex ones. Many of our results will be true for infinite-dimensional Hilbert spaces, with appropriate modifications at times. We will mention this only in passing. Let \( X = \left( {{x}_{1},\ldots ,{x}_{k}}\right) \) be a \( k \) -tuple of vectors. If these are column vectors, then \( X \) is an \( n \times k \) matrix. This notation suggests matrix manipulations with \( X \) that are helpful even in the general case. For example, let \( X = \left( {{x}_{1},\ldots ,{x}_{k}}\right) \) be a linearly independent \( k \) -tuple. We say that a \( k \) -tuple \( Y = \left( {{y}_{1},\ldots ,{y}_{k}}\right) \) is biorthogonal to \( X \) if \( \left\langle {{y}_{i},{x}_{j}}\right\rangle = {\delta }_{ij} \) . This condition is expressed in matrix terms as \( {Y}^{ * }X = {I}_{k} \), the \( k \times k \) identity matrix. Exercise I.1.1 Given any \( k \) -tuple of linearly independent vectors \( X \) as above, there exists a \( k \) -tuple \( Y \) biorthogonal to it. If \( k = n \), this \( Y \) is unique. The Gram-Schmidt procedure, in this notation, can be interpreted as a matrix factoring theorem. 
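Exercise I.1.1 can be illustrated numerically: one explicit choice of a biorthogonal tuple is \( Y = X{\left( {X}^{ * }X\right) }^{-1} \), since then \( {Y}^{ * }X = {\left( {X}^{ * }X\right) }^{-1}{X}^{ * }X = {I}_{k} \). A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# A (generically) linearly independent k-tuple X of column vectors in C^n.
n, k = 4, 2
X = rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))

# One choice of a biorthogonal tuple: Y = X (X*X)^{-1}.
# X*X is invertible because the columns of X are linearly independent.
Y = X @ np.linalg.inv(X.conj().T @ X)

# Biorthogonality in matrix form: Y*X = I_k.
assert np.allclose(Y.conj().T @ X, np.eye(k))
```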
Given an \( n \) -tuple \( X = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) of linearly independent vectors the procedure gives another \( n \) -tuple \( Q = \left( {{q}_{1},\ldots ,{q}_{n}}\right) \) whose entries are orthonormal vectors. For each \( k = 1,2,\ldots, n \), the vectors \( \left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) and \( \left\{ {{q}_{1},\ldots ,{q}_{k}}\right\} \) have the same linear span. In matrix notation this can be expressed as an equation, \( X = {QR} \), where \( R \) is an upper triangular matrix. The matrix \( R \) may be chosen so that all its diagonal entries are positive. With this restriction the factors \( Q \) and \( R \) are both unique. If the vectors \( {x}_{j} \) are not linearly independent, this procedure can be modified. If the vector \( {x}_{k} \) is linearly dependent on \( {x}_{1},\ldots ,{x}_{k - 1} \), set \( {q}_{k} = 0 \) ; otherwise proceed as in the Gram-Schmidt process. If the \( k \) th column of the matrix \( Q \) so constructed is zero, put the \( k \) th row of \( R \) to be zero. Now we have a factorisation \( X = {QR} \), where \( R \) is upper triangular and \( Q \) has orthogonal columns, some of which are zero. Take the nonzero columns of \( Q \) and extend this set to an orthonormal basis. Then, replace the zero columns of \( Q \) by these additional basis vectors. The new matrix \( Q \) now has orthonormal columns, and we still have \( X = {QR} \), because the new columns of \( Q \) are matched with zero rows of \( R \) . This is called the \( \mathbf{{QR}} \) decomposition. Similarly, a change of orthogonal bases can be conveniently expressed in these notations as follows. Let \( X = \left( {{x}_{1},\ldots ,{x}_{k}}\right) \) be any \( k \) -tuple of vectors and \( E = \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) any orthonormal basis. Then, the columns of the matrix \( {E}^{ * }X \) are the representations of the vectors comprising \( X \), relative to the basis \( E \) . 
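The QR factorisation with positive diagonal can be checked numerically, and the same computation verifies Hadamard's inequality of Exercise I.1.3. A sketch assuming NumPy; note that `np.linalg.qr` does not itself promise positive diagonal entries for \( R \), so the unimodular rescaling is done by hand:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# A random complex matrix; its columns are generically linearly independent.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

Q, R = np.linalg.qr(X)          # X = QR, R upper triangular

# Rescale by a unitary diagonal matrix D to make diag(R) positive:
# X = QR = (QD)(D^{-1}R), and D^{-1}R has diagonal |r_jj| > 0.
d = np.diag(R).copy()
d = d / np.abs(d)               # unimodular factors r_jj / |r_jj|
Q, R = Q * d, (R.T / d).T       # Q -> QD, R -> D^{-1}R; product unchanged

assert np.allclose(Q @ R, X)
assert np.allclose(Q.conj().T @ Q, np.eye(n))      # orthonormal columns
assert np.allclose(R, np.triu(R))                  # upper triangular
assert np.all(np.diag(R).real > 0) and np.allclose(np.diag(R).imag, 0)

# Hadamard's inequality: |det X| <= product of the column norms.
assert abs(np.linalg.det(X)) <= np.prod(np.linalg.norm(X, axis=0)) + 1e-9
```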
When \( k = n \) and \( X \) is an orthonormal basis, then \( {E}^{ * }X \) is a unitary matrix. Furthermore, this is the matrix by which we pass between coordinates of any vector relative to the basis \( E \) and those relative to the basis \( X \) . Indeed, if \[ u = {a}_{1}{e}_{1} + \cdots + {a}_{n}{e}_{n} = {b}_{1}{x}_{1} + \cdots + {b}_{n}{x}_{n} \] then we have \[ u = {Ea},\;{a}_{j} = {e}_{j}^{ * }u,\;a = {E}^{ * }u, \] \[ u = {Xb},\;{b}_{j} = {x}_{j}^{ * }u,\;b = {X}^{ * }u. \] Hence, \[ a = {E}^{ * }{Xb}\;\text{ and }\;b = {X}^{ * }{Ea}. \] Exercise I.1.2 Let \( X \) be any basis of \( \mathcal{H} \) and let \( Y \) be the basis biorthogonal to it. Using matrix multiplication, \( X \) gives a linear transformation from \( {\mathbb{C}}^{n} \) to \( \mathcal{H} \) . The inverse of this is given by \( {Y}^{ * } \) . In the special case when \( X \) is orthonormal (so that \( Y = X \) ), this transformation is inner-product-preserving if the standard inner product is used on \( {\mathbb{C}}^{n} \) . Exercise I.1.3 Use the QR decomposition to prove Hadamard’s inequality: if \( X = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \), then \[ \left| {\det X}\right| \leq \mathop{\prod }\limits_{{j = 1}}^{n}\begin{Vmatrix}{x}_{j}\end{Vmatrix} \] Equality holds here if and only if either the \( {x}_{j} \) are mutually orthogonal or some \( {x}_{j} \) is zero. ## I. 2 Linear Operators and Matrices Let \( \mathcal{L}\left( {V, W}\right) \) be the space of all linear operators from a vector space \( V \) to a vector space \( W \) . If bases for \( V, W \) are fixed, each such operator has a unique matrix associated with it. As usual, we will talk of operators and matrices interchangeably. For operators between Hilbert spaces, the matrix representations are especially nice if the bases chosen are orthonormal. 
Let \( A \in \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \), and let \( E = \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) be an orthonormal basis of \( \mathcal{H} \) and \( F = \left( {{f}_{1},\ldots ,{f}_{m}}\right) \) an orthonormal basis of \( \mathcal{K} \) . Then, the \( \left( {i, j}\right) \) -entry of the matrix of \( A \) relative to these bases is \( {a}_{ij} = {f}_{i}^{ * }A{e}_{j} = \left\langle {{f}_{i}, A{e}_{j}}\right\rangle \) . This suggests that we may say that the matrix of \( A \) relative to these bases is \( {F}^{ * }{AE} \) . In this notation, composition of linear operators can be identified with matrix multiplication as follows. Let \( \mathcal{M} \) be a third Hilbert space with orthonormal basis \( G = \left( {{g}_{1},\ldots ,{g}_{p}}\right) \) . Let \( B \in \mathcal{L}\left( {\mathcal{K},\mathcal{M}}\right) \) . Then \[ \text{ (matrix of }B \cdot A\text{ ) } = {G}^{ * }\left( {B \cdot A}\right) E \] \[ = {G}^{ * }{BF}{F}^{ * }{AE} \] \[ = \left( {{G}^{ * }{BF}}\right) \left( {{F}^{ * }{AE}}\right) \] \[ = \text{(matrix of}B\text{) (matrix of}A\text{).} \] The second step in the above chain is justified by Exercise I.1.2.
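The identity (matrix of \( {BA} \)) \( = \) (matrix of \( B \))(matrix of \( A \)) can be tested with random orthonormal bases. A sketch assuming NumPy; the helper `random_onb` is ad hoc, not from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_onb(n):
    """An orthonormal basis of C^n, given as the columns of a unitary matrix."""
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q, _ = np.linalg.qr(Z)
    return Q

n, m, p = 3, 4, 2                  # dim H, dim K, dim M
E, F, G = random_onb(n), random_onb(m), random_onb(p)
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))  # A : H -> K
B = rng.standard_normal((p, m)) + 1j * rng.standard_normal((p, m))  # B : K -> M

# matrix of BA = (matrix of B)(matrix of A); the middle step uses F F* = I.
lhs = G.conj().T @ (B @ A) @ E
rhs = (G.conj().T @ B @ F) @ (F.conj().T @ A @ E)
assert np.allclose(lhs, rhs)
```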
The adjoint of an operator \( A \in \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \) is the unique operator \( {A}^{ * } \) in \( \mathcal{L}\left( {\mathcal{K},\mathcal{H}}\right) \) that satisfies the relation \[ \langle z,{Ax}{\rangle }_{\mathcal{K}} = {\left\langle {A}^{ * }z, x\right\rangle }_{\mathcal{H}} \] for all \( x \in \mathcal{H} \) and \( z \in \mathcal{K} \) . Exercise I.2.1 For fixed bases in \( \mathcal{H} \) and \( \mathcal{K} \), the matrix of \( {A}^{ * } \) is the conjugate transpose of the matrix of \( A \) . For the space \( \mathcal{L}\left( {\mathcal{H},\mathcal{H}}\right) \) we use the more compact notation \( \mathcal{L}\left( \mathcal{H}\right) \) . In the rest of this section, and elsewhere in the book, if no qualification is made, an operator would mean an element of \( \mathcal{L}\left( \mathcal{H}\right) \) . An operator \( A \) is called self-adjoint or Hermitian if \( A = {A}^{ * } \), skew-Hermitian if \( A = - {A}^{ * } \), unitary if \( A{A}^{ * } = I = {A}^{ * }A \), and normal if \( A{A}^{ * } = {A}^{ * }A \) . A Hermitian operator \( A \) is said to be positive or positive semidefinite if \( \langle x,{Ax}\rangle \geq 0 \) for all \( x \in \mathcal{H} \) . The notation \( A \geq 0 \) will be used to express the fact that \( A \) is a positive operator. If \( \langle x,{Ax}\rangle > 0 \) for all nonzero \( x \), we will say \( A \) is positive definite, or strictly positive . We will then write \( A > 0 \) . A positive operator is strictly positive if and only if it is invertible. If \( A \) and \( B \) are Hermitian, then we say \( A \geq B \) if \( A - B \geq 0 \) . Given any operator \( A \) we can find an orthonormal basis \( {y}_{1},\ldots ,{y}_{n} \) such that for each \( k = 1,2,\ldots, n \), the vector \( A{y}_{k} \) is a linear combination of \( {y}_{1},\ldots ,{y}_{k} \) . This can be proved by induction on the dimension \( n \) of \( \mathcal{H} \) . 
Let \( {\lambda }_{1} \) be any eigenvalue of \( A \) and \( {y}_{1} \) a unit eigenvector corresponding to \( {\lambda }_{1} \), and \( \mathcal{M} \) the 1-dimensional subspace spanned by it. Let \( \mathcal{N} \) be the orthogonal complement of \( \mathcal{M} \) . Let \( {P}_{\mathcal{N}} \) denote the orthogonal projection on \( \mathcal{N} \) . For \( y \in \mathcal{N} \), let \( {A}_{\mathcal{N}}y = {P}_{\mathcal{N}}{Ay} \) . Then, \( {A}_{\mathcal{N}} \) is a linear operator on the \( \left( {n - 1}\right) \) -dimensional space \( \mathcal{N} \) . So, by the induction hypothesis, there exists an orthonormal basis \( {y}_{2},\ldots ,{y}_{n} \) of \( \mathcal{N} \) such that for \( k = 2,\ldots, n \) the vector \( {A}_{\mathcal{N}}{y}_{k} \) is a linear combination of \( {y}_{2},\ldots ,{y}_{k} \) . Now \( {y}_{1},\ldots ,{y}_{n} \) is an orthonormal basis for \( \mathcal{H} \), and each \( A{y}_{k} \) is a linear combination of \( {y}_{1},\ldots ,{y}_{k} \) for \( k = 1,2,\ldots, n \) . Thus, the matrix of \( A \) with respect to this basis is upper triangular. In other words, every matrix \( A \) is unitarily equivalent (or unitarily similar) to an upper triangular matrix \( T \) ; i.e., \( A = {QT}{Q}^{ * } \), where \( Q \) is unitary and \( T \) is upper triangular. This triangular matrix is called a Schur Triangular Form for \( A \) . An orthonormal basis with respect to which \( A \) is upper triangular is called a Schur basis for \( A \) . If \( A \) is normal, then \( T \) is diagonal and we have \( {Q}^{ * }{AQ} = D \), where \( D \) is a diagonal matrix whose diagonal entries are the eigenvalues of \( A \) . This is the Spectral Theorem for normal matrices. The Spectral Theorem makes it easy to define functions of normal matrices.
If \( f \) is any complex function, and if \( D \) is a diagonal matrix with \( {\lambda }_{1},\ldots \) , \( {\lambda }_{n} \) on its diagonal, then \( f\left( D\right) \) is the diagonal matrix with \( f\left( {\lambda }_{1}\right) ,\ldots, f\left( {\lambda }_{n}\right) \) on its diagonal. If \( A = {QD}{Q}^{ * } \), then \( f\left( A\right) = {Qf}\left( D\right) {Q}^{ * } \) . A special consequence, used very often, is the fact that every positive operator \( A \) has a unique positive square root. This square root will be written as \( {A}^{1/2} \) . Exercise I.2.2 Show that the following statements are equivalent: (i) \( A \) is positive. (ii) \( A = {B}^{ * }B \) for some \( B \) . (iii) \( A = {T}^{ * }T \) for some upper triangular \( T \) . (iv) \( A = {T}^{ * }T \) for some upper triangular \( T \) with nonnegative diagonal entries. If \( A \) is positive definite, then the factorisation in (iv) is unique. This is called the Cholesky Decomposition of \( A \) . Exercise I.2.3 (i) Let \( \left\{ {A}_{\alpha }\right\} \) be a family of mutually commuting operators. Then, there is a common Schur basis for \( \left\{ {A}_{\alpha }\right\} \) . In other words, there exists a unitary \( Q \) such that \( {Q}^{ * }{A}_{\alpha }Q \) is upper triangular for all \( \alpha \) . (ii) Let \( \left\{ {A}_{\alpha }\right\} \) be a family of mutually commuting normal operators. Then, there exists a unitary \( Q \) such that \( {Q}^{ * }{A}_{\alpha }Q \) is diagonal for all \( \alpha \) . For any operator \( A \) the operator \( {A}^{ * }A \) is always positive, and its unique positive square root is denoted by \( \left| A\right| \) . The eigenvalues of \( \left| A\right| \) counted with multiplicities are called the singular values of \( A \) . 
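A sketch of \( f\left( A\right) = {Qf}\left( D\right) {Q}^{ * } \) for the positive square root, assuming NumPy; the small clip guards against rounding making eigenvalues of a positive matrix slightly negative:

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B.conj().T @ B                      # a positive matrix (A = B*B)

# Spectral decomposition of the Hermitian matrix A: A = Q D Q*.
d, Q = np.linalg.eigh(A)
d = np.clip(d, 0.0, None)               # guard against tiny negative rounding

def f_of(eigs, Q, f):
    """f(A) = Q f(D) Q* for a normal A with spectral decomposition (eigs, Q)."""
    return (Q * f(eigs)) @ Q.conj().T

# The unique positive square root A^{1/2}:
root = f_of(d, Q, np.sqrt)
assert np.allclose(root @ root, A)
assert np.allclose(root, root.conj().T)            # Hermitian...
assert np.all(np.linalg.eigvalsh(root) >= -1e-10)  # ...and positive
```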
We will always enumerate these in decreasing order, and use for them the notation \( {s}_{1}\left( A\right) \geq {s}_{2}\left( A\right) \geq \cdots \geq {s}_{n}\left( A\right) \) . If rank \( A = k \), then \( {s}_{k}\left( A\right) > 0 \), but \( {s}_{k + 1}\left( A\right) = \cdots = {s}_{n}\left( A\right) = 0 \) . Let \( S \) be the diagonal matrix with diagonal entries \( {s}_{1}\left( A\right) ,\ldots ,{s}_{n}\left( A\right) \) and \( {S}_{ + } \) the \( k \times k \) diagonal matrix with diagonal entries \( {s}_{1}\left( A\right) ,\ldots ,{s}_{k}\left( A\right) \) . Let \( Q = \left( {{Q}_{1},{Q}_{2}}\right) \) be the unitary matrix in which \( {Q}_{1} \) is the \( n \times k \) matrix whose columns are the eigenvectors of \( {A}^{ * }A \) corresponding to the eigenvalues \( {s}_{1}^{2}\left( A\right) ,\ldots ,{s}_{k}^{2}\left( A\right) \) and \( {Q}_{2} \) the \( n \times \left( {n - k}\right) \) matrix whose columns are the eigenvectors of \( {A}^{ * }A \) corresponding to the remaining eigenvalues. Then, by the Spectral Theorem, \[ {Q}^{ * }\left( {{A}^{ * }A}\right) Q = \left( \begin{matrix} {S}_{ + }^{2} & 0 \\ 0 & 0 \end{matrix}\right) \] Note that \[ {Q}_{1}^{ * }\left( {{A}^{ * }A}\right) {Q}_{1} = {S}_{ + }^{2},\;{Q}_{2}^{ * }\left( {{A}^{ * }A}\right) {Q}_{2} = 0. \] The second of these relations implies that \( A{Q}_{2} = 0 \) . From the first one we can conclude that if \( {W}_{1} = A{Q}_{1}{S}_{ + }^{-1} \), then \( {W}_{1}^{ * }{W}_{1} = {I}_{k} \) . Choose \( {W}_{2} \) so that \( W = \left( {{W}_{1},{W}_{2}}\right) \) is unitary. Then, we have \[ {W}^{ * }{AQ} = \left( \begin{matrix} {W}_{1}^{ * }A{Q}_{1} & {W}_{1}^{ * }A{Q}_{2} \\ {W}_{2}^{ * }A{Q}_{1} & {W}_{2}^{ * }A{Q}_{2} \end{matrix}\right) = \left( \begin{matrix} {S}_{ + } & 0 \\ 0 & 0 \end{matrix}\right) . \] This is the Singular Value Decomposition: for every matrix \( A \) there exist unitaries \( W \) and \( Q \) such that \[ {W}^{ * }{AQ} = S, \] where \( S \) is the diagonal matrix whose diagonal entries are the singular values of \( A \) . Note that in the above representation the columns of \( Q \) are eigenvectors of \( {A}^{ * }A \) and the columns of \( W \) are eigenvectors of \( A{A}^{ * } \) corresponding to the eigenvalues \( {s}_{j}^{2}\left( A\right) ,1 \leq j \leq n \) . These eigenvectors are called the right and left singular vectors of \( A \), respectively. Exercise I.2.4 (i) The Singular Value Decomposition leads to the Polar Decomposition: Every operator \( A \) can be written as \( A = {UP} \), where \( U \) is unitary and \( P \) is positive. In this decomposition the positive part \( P \) is unique, \( P = \left| A\right| \) . The unitary part \( U \) is unique if \( A \) is invertible. (ii) An operator \( A \) is normal if and only if the factors \( U \) and \( P \) in the polar decomposition of \( A \) commute. (iii) We have derived the Polar Decomposition from the Singular Value Decomposition. Show that it is possible to derive the latter from the former. Every operator \( A \) can be decomposed as a sum \[ A = \operatorname{Re}A + i\operatorname{Im}A, \] where \( \operatorname{Re}A = \frac{A + {A}^{ * }}{2} \) and \( \operatorname{Im}A = \frac{A - {A}^{ * }}{2i} \) . This is called the Cartesian Decomposition of \( A \) into its "real" and "imaginary" parts. The operators \( \operatorname{Re}A \) and \( \operatorname{Im}A \) are both Hermitian.
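One direction of Exercise I.2.4(i) can be checked numerically: from an SVD \( A = {WS}{Q}^{ * } \) one may take \( U = W{Q}^{ * } \) and \( P = {QS}{Q}^{ * } \). A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# Singular Value Decomposition: np.linalg.svd returns W, s, Q* with A = W S Q*.
W, s, Qh = np.linalg.svd(A)

# Polar Decomposition from the SVD: A = (W Q*)(Q S Q*) = U P.
U = W @ Qh
P = Qh.conj().T @ np.diag(s) @ Qh

assert np.allclose(U @ P, A)
assert np.allclose(U.conj().T @ U, np.eye(4))      # U unitary
# P equals |A| = (A*A)^{1/2}: P is Hermitian positive and P^2 = A*A.
assert np.allclose(P @ P, A.conj().T @ A)
assert np.allclose(P, P.conj().T) and np.all(np.linalg.eigvalsh(P) >= -1e-10)
```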
The norm of an operator \( A \) is defined as \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel \] We also have \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = \parallel y\parallel = 1}}\left| {\langle y,{Ax}\rangle }\right| . \] When \( A \) is Hermitian we have \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\left| {\langle x,{Ax}\rangle }\right| \] For every operator \( A \) we have \[ \parallel A\parallel = {s}_{1}\left( A\right) = {\begin{Vmatrix}{A}^{ * }A\end{Vmatrix}}^{1/2}. \] When \( A \) is normal we have \[ \parallel A\parallel = \max \left\{ {\left| {\lambda }_{j}\right| : {\lambda }_{j}\text{ is an eigenvalue of }A}\right\} . \] An operator \( A \) is said to be a contraction if \( \parallel A\parallel \leq 1 \) . We also use the adjective contractive for such an operator. A positive operator \( A \) is contractive if and only if \( A \leq I \) . To distinguish it from other norms that we consider later, the norm \( \parallel A\parallel \) will be called the operator norm or the bound norm of \( A \) . Another useful norm is the norm \[ \parallel A{\parallel }_{2} = {\left( \mathop{\sum }\limits_{{j = 1}}^{n}{s}_{j}^{2}\left( A\right) \right) }^{1/2} = {\left( \operatorname{tr}{A}^{ * }A\right) }^{1/2}, \] where tr stands for the trace of an operator. If \( {a}_{ij} \) are the entries of a matrix representation of \( A \) relative to an orthonormal basis of \( \mathcal{H} \), then \[ \parallel A{\parallel }_{2} = {\left( \mathop{\sum }\limits_{{i, j}}{\left| {a}_{ij}\right| }^{2}\right) }^{1/2} \] This makes this norm useful in calculations with matrices. This is called the Frobenius norm or the Schatten 2-norm or the Hilbert-Schmidt norm. 
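The identities relating \( \parallel A\parallel \), \( \parallel A{\parallel }_{2} \) and the singular values can be verified directly; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
s = np.linalg.svd(A, compute_uv=False)          # singular values, decreasing

# Operator norm: ||A|| = s_1(A) = ||A*A||^{1/2}.
op = np.linalg.norm(A, 2)
assert np.isclose(op, s[0])
assert np.isclose(op, np.sqrt(np.linalg.norm(A.conj().T @ A, 2)))

# Frobenius norm: ||A||_2 = (sum s_j^2)^{1/2} = (tr A*A)^{1/2}
#                        = (sum |a_ij|^2)^{1/2}.
fro = np.linalg.norm(A, 'fro')
assert np.isclose(fro, np.sqrt(np.sum(s ** 2)))
assert np.isclose(fro, np.sqrt(np.trace(A.conj().T @ A).real))
assert np.isclose(fro, np.sqrt((np.abs(A) ** 2).sum()))
```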
Both \( \parallel A\parallel \) and \( \parallel A{\parallel }_{2} \) have an important invariance property called unitary invariance: we have \( \parallel A\parallel = \parallel {UAV}\parallel \) and \( \parallel A{\parallel }_{2} = \parallel {UAV}{\parallel }_{2} \) for all unitary \( U, V \) . Any two norms on a finite-dimensional space are equivalent. For the norms \( \parallel A\parallel \) and \( \parallel A{\parallel }_{2} \) it follows from the properties listed above that \[ \parallel A\parallel \leq \parallel A{\parallel }_{2} \leq {n}^{1/2}\parallel A\parallel \] for every \( A \) . Exercise I.2.5 Show that matrices with distinct eigenvalues are dense in the space of all \( n \times n \) matrices. (Use the Schur Triangularisation.) Exercise I.2.6 If \( \parallel A\parallel < 1 \), then \( I - A \) is invertible and \[ {\left( I - A\right) }^{-1} = I + A + {A}^{2} + \cdots , \] a convergent power series. This is called the Neumann Series. Exercise I.2.7 The set of all invertible matrices is a dense open subset of the set of all \( n \times n \) matrices. The set of all unitary matrices is a compact subset of the set of all \( n \times n \) matrices. These two sets are also groups under multiplication. They are called the general linear group \( \mathrm{{GL}}\left( \mathrm{n}\right) \) and the unitary group \( \mathbf{U}\left( \mathbf{n}\right) \), respectively. Exercise I.2.8 For any matrix \( A \) the series \[ \exp A = I + A + \frac{{A}^{2}}{2!} + \cdots + \frac{{A}^{n}}{n!} + \cdots \] converges. This is called the exponential of \( A \) . The matrix \( \exp A \) is always invertible and \[ {\left( \exp A\right) }^{-1} = \exp \left( {-A}\right) \] Conversely, every invertible matrix can be expressed as the exponential of some matrix. Every unitary matrix can be expressed as the exponential of a skew-Hermitian matrix. 
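Exercises I.2.6 and I.2.8 can be illustrated with a short computation. The sketch below, assuming NumPy, sums the Neumann series for a matrix with \( \parallel A\parallel < 1 \) and checks that \( \exp A \) (computed from its defining series) has inverse \( \exp \left( {-A}\right) \) :

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
A /= 2 * np.linalg.norm(A, 2)        # scale so that ||A|| = 1/2 < 1
I = np.eye(4)

# Neumann series: (I - A)^{-1} = I + A + A^2 + ...
partial, term = np.zeros((4, 4)), I.copy()
for _ in range(200):
    partial += term
    term = term @ A
assert np.allclose(partial, np.linalg.inv(I - A))

def expm(M, terms=40):
    # exp(M) = I + M + M^2/2! + ..., truncated; ample accuracy for ||M|| = 1/2
    out, t = np.zeros_like(M), np.eye(M.shape[0])
    for k in range(terms):
        out += t
        t = t @ M / (k + 1)
    return out

assert np.allclose(expm(A) @ expm(-A), I)    # (exp A)^{-1} = exp(-A)
```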
The numerical range or the field of values of an operator \( A \) is the subset \( W\left( A\right) \) of the complex plane defined as \[ W\left( A\right) = \{ \langle x,{Ax}\rangle : \parallel x\parallel = 1\} . \] Note that \[ W\left( {{UA}{U}^{ * }}\right) \; = \;W\left( A\right) \;\text{ for all }U \in U\left( n\right) , \] \[ W\left( {{aA} + {bI}}\right) = {aW}\left( A\right) + {bW}\left( I\right) \text{ for all }a, b \in \mathbb{C}. \] It is clear that if \( \lambda \) is an eigenvalue of \( A \), then \( \lambda \) is in \( W\left( A\right) \) . It is also clear that \( W\left( A\right) \) is a closed set. An important property of \( W\left( A\right) \) is that it is a convex set. This is called the Toeplitz-Hausdorff Theorem; an outline of its proof is given in Problem I.6.2. Exercise I.2.9 (i) When \( A \) is normal, the set \( W\left( A\right) \) is the convex hull of the eigenvalues of \( A \) . For nonnormal matrices, \( W\left( A\right) \) may be bigger than the convex hull of its eigenvalues. For Hermitian operators, the first statement says that \( W\left( A\right) \) is the closed interval whose endpoints are the smallest and the largest eigenvalues of \( A \) . (ii) If a unit vector \( x \) belongs to the linear span of the eigenspaces corresponding to eigenvalues \( {\lambda }_{1},\ldots ,{\lambda }_{k} \) of a normal operator \( A \), then \( \langle x,{Ax}\rangle \) lies in the convex hull of \( {\lambda }_{1},\ldots ,{\lambda }_{k} \) . (This fact will be used frequently in Chapter III.) The number \( w\left( A\right) \) defined as \[ w\left( A\right) = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\left| {\langle x,{Ax}\rangle }\right| \] is called the numerical radius of \( A \) . Exercise I.2.10 (i) The numerical radius defines a norm on \( \mathcal{L}\left( \mathcal{H}\right) \) . (ii) \( w\left( {{UA}{U}^{ * }}\right) = w\left( A\right) \) for all \( U \in U\left( n\right) \) . 
(iii) \( w\left( A\right) \leq \parallel A\parallel \leq {2w}\left( A\right) \) for all \( A \) . (iv) \( w\left( A\right) = \parallel A\parallel \) if (but not only if) \( A \) is normal. The spectral radius of an operator \( A \) is defined as \[ \operatorname{spr}\left( A\right) = \max \{ \left| \lambda \right| : \lambda \text{ is an eigenvalue of }A\} . \] We have noted that \( \operatorname{spr}\left( A\right) \leq w\left( A\right) \leq \parallel A\parallel \), and that the three are equal if (but not only if) the operator \( A \) is normal. ## I. 3 Direct Sums If \( U, V \) are vector spaces, their direct sum is the space of columns \( \left( \begin{array}{l} u \\ v \end{array}\right) \) with \( u \in U \) and \( v \in V \) . This is a vector space with vector operations naturally defined coordinatewise. If \( \mathcal{H},\mathcal{K} \) are Hilbert spaces, their direct sum is a Hilbert space with inner product defined as \[ \left\langle {\left( \begin{array}{l} h \\ k \end{array}\right) ,\left( \begin{array}{l} {h}^{\prime } \\ {k}^{\prime } \end{array}\right) }\right\rangle = {\left\langle h,{h}^{\prime }\right\rangle }_{\mathcal{H}} + {\left\langle k,{k}^{\prime }\right\rangle }_{\mathcal{K}} \] We will always denote this direct sum as \( \mathcal{H} \oplus \mathcal{K} \) . If \( \mathcal{M} \) and \( \mathcal{N} \) are orthogonally complementary subspaces of \( \mathcal{H} \), then the fact that every vector \( x \) in \( \mathcal{H} \) has a unique representation \( x = u + v \) with \( u \in \) \( \mathcal{M} \) and \( v \in \mathcal{N} \) implies that \( \mathcal{H} \) is isomorphic to \( \mathcal{M} \oplus \mathcal{N} \) . This isomorphism is given by a natural, fixed map. So, we say that \( \mathcal{H} = \mathcal{M} \oplus \mathcal{N} \) . When a distinction is necessary we call this an internal direct sum. 
If \( \mathcal{M},\mathcal{N} \) are subspaces of \( \mathcal{H} \) complementary in the algebraic but not in the orthogonal sense; i.e., if \( \mathcal{M} \) and \( \mathcal{N} \) are disjoint and their linear span is \( \mathcal{H} \), then every vector \( x \) in \( \mathcal{H} \) has a unique decomposition \( x = u + v \) as before, but not with orthogonal summands. In this case we write \( \mathcal{H} = \mathcal{M} + \mathcal{N} \) and say \( \mathcal{H} \) is the algebraic direct sum of \( \mathcal{M} \) and \( \mathcal{N} \) . If \( \mathcal{H} = \mathcal{M} \oplus \mathcal{N} \) is an internal direct sum, we may define the injection of \( \mathcal{M} \) into \( \mathcal{H} \) as the operator \( {I}_{\mathcal{M}} \in \mathcal{L}\left( {\mathcal{M},\mathcal{H}}\right) \) such that \( {I}_{\mathcal{M}}\left( u\right) = u \) for all \( u \in \mathcal{M} \) . Then, \( {I}_{\mathcal{M}}^{ * } \) is an element of \( \mathcal{L}\left( {\mathcal{H},\mathcal{M}}\right) \) defined as \( {I}_{\mathcal{M}}^{ * }x = {Px} \) for all \( x \in \mathcal{H} \), where \( P \) is the orthoprojector onto \( \mathcal{M} \) . Here one should note that \( {I}_{\mathcal{M}}^{ * } \) is not the same as \( P \) because they map into different spaces. That is why their adjoints can be different. Similarly define \( {I}_{\mathcal{N}} \) . Then, \( \left( {{I}_{\mathcal{M}},{I}_{\mathcal{N}}}\right) \) is an isometry from the ordinary ("external") direct sum \( \mathcal{M} \oplus \mathcal{N} \) onto \( \mathcal{H} \) .
If \( \mathcal{H} = \mathcal{M} \oplus \mathcal{N} \) and \( A \in \mathcal{L}\left( \mathcal{H}\right) \), then using this isomorphism, we can write \( A \) as a block-matrix \[ A = \left( \begin{array}{ll} B & C \\ D & E \end{array}\right) \] where \( B \in \mathcal{L}\left( \mathcal{M}\right), C \in \mathcal{L}\left( {\mathcal{N},\mathcal{M}}\right) \), etc. Here, for example, \( C = {I}_{\mathcal{M}}^{ * }A{I}_{\mathcal{N}} \) . The usual rules of matrix operations hold for block matrices. Adjoints are obtained by taking "conjugate transposes" formally. If the subspace \( \mathcal{M} \) is invariant under \( A \) ; i.e., \( {Ax} \in \mathcal{M} \) whenever \( x \in \mathcal{M} \) , then in the above block-matrix representation of \( A \) we must have \( D = 0 \) . Indeed, this condition is equivalent to \( \mathcal{M} \) being invariant. If both \( \mathcal{M} \) and its orthogonal complement \( \mathcal{N} \) are invariant under \( A \), we say that \( \mathcal{M} \) reduces \( A \) . In this case, both \( C \) and \( D \) are 0 . We then say that the operator \( A \) is the direct sum of \( B \) and \( E \) and write \( A = B \oplus E \) . Exercise I.3.1 Let \( A = {A}_{1} \oplus {A}_{2} \) . Show that (i) \( W\left( A\right) \) is the convex hull of \( W\left( {A}_{1}\right) \) and \( W\left( {A}_{2}\right) \) ; i.e., the smallest convex set containing \( W\left( {A}_{1}\right) \cup W\left( {A}_{2}\right) \) . (ii) \( \parallel A\parallel = \max \left( {\begin{Vmatrix}{A}_{1}\end{Vmatrix},\begin{Vmatrix}{A}_{2}\end{Vmatrix}}\right) \) , \[ \operatorname{spr}\left( \mathrm{A}\right) = \max \left( {\operatorname{spr}\left( {\mathrm{A}}_{1}\right) ,\operatorname{spr}\left( {\mathrm{A}}_{2}\right) }\right) , \] \[ w\left( A\right) = \max \left( {w\left( {A}_{1}\right), w\left( {A}_{2}\right) }\right) . \] Direct sums in which each summand \( {\mathcal{H}}_{j} \) is the same space \( \mathcal{H} \) arise often in practice. 
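Parts of Exercise I.3.1 can be checked numerically by forming the direct sum as a block-diagonal matrix. A sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
A1 = rng.standard_normal((3, 3))
A2 = rng.standard_normal((2, 2))

# The direct sum A = A1 ⊕ A2 as a block-diagonal matrix.
A = np.block([[A1, np.zeros((3, 2))],
              [np.zeros((2, 3)), A2]])

spr = lambda M: np.abs(np.linalg.eigvals(M)).max()   # spectral radius

# ||A|| = max(||A1||, ||A2||) and spr(A) = max(spr(A1), spr(A2))
assert np.isclose(np.linalg.norm(A, 2),
                  max(np.linalg.norm(A1, 2), np.linalg.norm(A2, 2)))
assert np.isclose(spr(A), max(spr(A1), spr(A2)))
```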
Very often, some properties of an operator \( A \) on \( \mathcal{H} \) are reflected in those of some other operators on \( \mathcal{H} \oplus \mathcal{H} \) . This is illustrated in the following propositions. Lemma I.3.2 Let \( A \in \mathcal{L}\left( \mathcal{H}\right) \) . Then, the operators \( \left( \begin{matrix} A & A \\ A & A \end{matrix}\right) \) and \( \left( \begin{matrix} {2A} & 0 \\ 0 & 0 \end{matrix}\right) \) are unitarily equivalent in \( \mathcal{L}\left( {\mathcal{H} \oplus \mathcal{H}}\right) \) . Proof. The equivalence is implemented by the unitary operator \( \frac{1}{\sqrt{2}}\left( \begin{matrix} I & I \\ - I & I \end{matrix}\right) \) Corollary I.3.3 An operator \( A \) on \( \mathcal{H} \) is positive if and only if the operator \( \left( \begin{matrix} A & A \\ A & A \end{matrix}\right) \) on \( \mathcal{H} \oplus \mathcal{H} \) is positive. This can also be seen by writing \( \left( \begin{array}{ll} A & A \\ A & A \end{array}\right) = \left( \begin{array}{ll} {A}^{1/2} & 0 \\ {A}^{1/2} & 0 \end{array}\right) \left( \begin{matrix} {A}^{1/2} & {A}^{1/2} \\ 0 & 0 \end{matrix}\right) \), and using Exercise I.2.2. Corollary I.3.4 For every \( A \in \mathcal{L}\left( \mathcal{H}\right) \) the operator \( \left( \begin{matrix} \left| A\right| & {A}^{ * } \\ A & \left| {A}^{ * }\right| \end{matrix}\right) \) is positive. Proof. Let \( A = {UP} \) be the polar decomposition of \( A \) . Then, \[ \left( \begin{matrix} \left| A\right| & {A}^{ * } \\ A & \left| {A}^{ * }\right| \end{matrix}\right) = \left( \begin{matrix} P & P{U}^{ * } \\ {UP} & {UP}{U}^{ * } \end{matrix}\right) \] \[ = \left( \begin{matrix} I & O \\ O & U \end{matrix}\right) \left( \begin{matrix} P & P \\ P & P \end{matrix}\right) \left( \begin{matrix} I & O \\ O & {U}^{ * } \end{matrix}\right) . \] Note that \( \left( \begin{array}{ll} I & O \\ O & U \end{array}\right) \) is a unitary operator on \( \mathcal{H} \oplus \mathcal{H} \) . 
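Lemma I.3.2 can be verified in coordinates. In the sketch below (NumPy assumed), the equivalence is implemented by the real orthogonal matrix \( \frac{1}{\sqrt{2}}\left( \begin{matrix} I & I \\ I & - I \end{matrix}\right) \), a variant of the unitary used in the proof:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))
I = np.eye(n)
Z = np.zeros((n, n))

T = np.block([[A, A], [A, A]])          # the operator [[A, A], [A, A]]
D = np.block([[2 * A, Z], [Z, Z]])      # the operator [[2A, 0], [0, 0]]

# V is real symmetric and orthogonal, hence unitary with V* = V.
V = np.block([[I, I], [I, -I]]) / np.sqrt(2)
assert np.allclose(V @ V, np.eye(2 * n))   # V is its own inverse
assert np.allclose(V @ D @ V, T)           # V* D V = T: unitary equivalence
```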
Proposition I.3.5 An operator \( A \) on \( \mathcal{H} \) is contractive if and only if the operator \( \left( \begin{matrix} I & {A}^{ * } \\ A & I \end{matrix}\right) \) on \( \mathcal{H} \oplus \mathcal{H} \) is positive. Proof. If \( A \) has the singular value decomposition \( A = {US}{V}^{ * } \), then \[ \left( \begin{matrix} I & {A}^{ * } \\ A & I \end{matrix}\right) = \left( \begin{matrix} V & O \\ O & U \end{matrix}\right) \left( \begin{matrix} I & S \\ S & I \end{matrix}\right) \left( \begin{matrix} {V}^{ * } & O \\ O & {U}^{ * } \end{matrix}\right) . \] Hence \( \left( \begin{matrix} I & {A}^{ * } \\ A & I \end{matrix}\right) \) is positive if and only if \( \left( \begin{matrix} I & S \\ S & I \end{matrix}\right) \) is positive. Also, \( \parallel A\parallel = \parallel S\parallel \) . So we may assume, without loss of generality, that \( A = S \) . Now let \( W \) be the unitary operator on \( \mathcal{H} \oplus \mathcal{H} \) that sends the orthonormal basis \( \left\{ {{e}_{1},{e}_{2},\ldots ,{e}_{2n}}\right\} \) to the basis \( \left\{ {{e}_{1},{e}_{n + 1},{e}_{2},{e}_{n + 2},\ldots ,{e}_{n},{e}_{2n}}\right\} \) . Then, the unitary conjugation by \( W \) transforms the matrix \( \left( \begin{array}{ll} I & S \\ S & I \end{array}\right) \) to a direct sum of \( n \) two-by-two matrices \[ \left( \begin{matrix} 1 & {s}_{1} \\ {s}_{1} & 1 \end{matrix}\right) \oplus \left( \begin{matrix} 1 & {s}_{2} \\ {s}_{2} & 1 \end{matrix}\right) \oplus \cdots \oplus \left( \begin{matrix} 1 & {s}_{n} \\ {s}_{n} & 1 \end{matrix}\right) . \] This is positive if and only if each of the summands is positive, which happens if and only if \( {s}_{j} \leq 1 \) for all \( j \) ; i.e., \( S \) is a contraction. Exercise I.3.6 If \( A \) is a contraction, show that \[ {A}^{ * }{\left( I - A{A}^{ * }\right) }^{1/2} = {\left( I - {A}^{ * }A\right) }^{1/2}{A}^{ * }. 
\] Use this to show that if \( A \) is a contraction on \( \mathcal{H} \), then the operators \[ U = \left( \begin{matrix} A & {\left( I - A{A}^{ * }\right) }^{1/2} \\ {\left( I - {A}^{ * }A\right) }^{1/2} & - {A}^{ * } \end{matrix}\right) , \] \[ V = \left( \begin{matrix} A & - {\left( I - A{A}^{ * }\right) }^{1/2} \\ {\left( I - {A}^{ * }A\right) }^{1/2} & {A}^{ * } \end{matrix}\right) \] are unitary operators on \( \mathcal{H} \oplus \mathcal{H} \) . Exercise I.3.7 For every matrix \( A \), the matrix \( \left( \begin{array}{ll} I & A \\ 0 & I \end{array}\right) \) is invertible and its inverse is \( \left( \begin{matrix} I & - A \\ 0 & I \end{matrix}\right) \) . Use this to show that if \( A, B \) are any two \( n \times n \) matrices, then \[ {\left( \begin{matrix} I & A \\ 0 & I \end{matrix}\right) }^{-1}\left( \begin{matrix} {AB} & 0 \\ B & 0 \end{matrix}\right) \left( \begin{matrix} I & A \\ 0 & I \end{matrix}\right) = \left( \begin{matrix} 0 & 0 \\ B & {BA} \end{matrix}\right) . \] This implies that \( {AB} \) and \( {BA} \) have the same eigenvalues. (This last fact can be proved in another way as follows. If \( B \) is invertible, then \( {AB} = \) \( {B}^{-1}\left( {BA}\right) B \) . So, \( {AB} \) and \( {BA} \) have the same eigenvalues. Since invertible matrices are dense in the space of all matrices, and a general known fact in complex analysis is that the roots of a polynomial vary continuously with the coefficients, the above conclusion also holds in general.) Direct sums with more than two summands are defined in the same way. We will denote the direct sum of spaces \( {\mathcal{H}}_{1},\ldots ,{\mathcal{H}}_{k} \) as \( { \oplus }_{j = 1}^{k}{\mathcal{H}}_{j} \), or simply as \( { \oplus }_{j}{\mathcal{H}}_{j} \) . ## I. 4 Tensor Products Let \( {V}_{j},1 \leq j \leq k \), be vector spaces. 
A map \( F \) from the Cartesian product \( {V}_{1} \times \cdots \times {V}_{k} \) to another vector space \( W \) is called multilinear if it depends linearly on each of the arguments. When \( W = \mathbb{C} \), such maps are called multilinear functionals. When \( k = 2 \), the word multilinear is replaced by bilinear. Bilinear maps, thus, are maps \( F : {V}_{1} \times {V}_{2} \rightarrow W \) that satisfy the conditions \[ F\left( {u, a{v}_{1} + b{v}_{2}}\right) = {aF}\left( {u,{v}_{1}}\right) + {bF}\left( {u,{v}_{2}}\right) \] \[ F\left( {a{u}_{1} + b{u}_{2}, v}\right) = {aF}\left( {{u}_{1}, v}\right) + {bF}\left( {{u}_{2}, v}\right) \] for all \( a, b \in \mathbb{C};\;u,{u}_{1},{u}_{2} \in {V}_{1} \) and \( v,{v}_{1},{v}_{2} \in {V}_{2} \) . We will be looking most often at the special situation when each \( {V}_{j} \) is the same vector space. As a special example consider a Hilbert space \( \mathcal{H} \) and fix two vectors \( x, y \) in it. Then, \[ F\left( {u, v}\right) = \langle x, u\rangle \langle y, v\rangle \] is a bilinear functional on \( \mathcal{H} \) . We see from this example that it is equally natural to consider conjugate-multilinear functionals as well. Even more generally we could study functions that are linear in some variables and conjugate-linear in others.
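The defining conditions for bilinearity are easy to test numerically for the example \( F\left( {u, v}\right) = \langle x, u\rangle \langle y, v\rangle \) . Here \( \langle \cdot , \cdot \rangle \) is linear in its second argument, as in the text; np.vdot follows the same convention. A sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n)

# <a, b> = sum conj(a_i) b_i: conjugate-linear in a, linear in b.
ip = lambda a, b: np.vdot(a, b)
F = lambda u, v: ip(x, u) * ip(y, v)   # the elementary bilinear functional

u1, u2, v = (rng.standard_normal(n) for _ in range(3))
a, b = 2.0 - 1.0j, 0.5 + 3.0j

# F is linear in each argument separately.
assert np.isclose(F(a * u1 + b * u2, v), a * F(u1, v) + b * F(u2, v))
assert np.isclose(F(v, a * u1 + b * u2), a * F(v, u1) + b * F(v, u2))
```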
As an example, let \( A \in \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \) and for \( u \in \mathcal{K} \) and \( v \in \mathcal{H} \), let \( F\left( {u, v}\right) = \langle u,{Av}{\rangle }_{\mathcal{K}} \) . Then, \( F \) depends linearly on \( v \) and conjugate-linearly on \( u \) . Such functionals are called sesquilinear; an inner product is a functional of this sort. The example given above is the "most general" example of a sesquilinear functional: if \( F\left( {u, v}\right) \) is any sesquilinear functional on \( \mathcal{K} \times \mathcal{H} \), then there exists a unique operator \( A \in \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \) such that \( F\left( {u, v}\right) = \langle u,{Av}\rangle \) . In this sense our first example is not the most general example of a bilinear functional. Bilinear functionals \( F\left( {u, v}\right) \) on \( \mathcal{H} \) that can be expressed as \( F\left( {u, v}\right) = \langle x, u\rangle \langle y, v\rangle \) for some fixed \( x, y \in \mathcal{H} \) are called elementary. They are special as the following exercise will show. Exercise I.4.1 Let \( x, y, z \) be linearly independent vectors in \( \mathcal{H} \) . Find a necessary and sufficient condition that a vector \( w \) must satisfy in order that the bilinear functional \[ F\left( {u, v}\right) = \langle x, u\rangle \langle y, v\rangle + \langle z, u\rangle \langle w, v\rangle \] is elementary. The set of all bilinear functionals is a vector space. The result of this exercise shows that the subset consisting of elementary functionals is not closed under addition. We will soon see that a convenient basis for this vector space can be constructed with elementary functionals as its members. 
The procedure, called the tensor product construction, starts by taking formal linear combinations of symbols \( x \otimes y \) with \( x \in \mathcal{H}, y \in \mathcal{K} \) ; then reducing this space modulo suitable equivalence relations; then identifying the resulting space with the space of bilinear functionals. More precisely, consider all finite sums of the type \( \mathop{\sum }\limits_{i}{c}_{i}\left( {{x}_{i} \otimes {y}_{i}}\right) \) , \( {c}_{i} \in \mathbb{C},{x}_{i} \in \mathcal{H},{y}_{i} \in \mathcal{K} \) and manipulate them formally as linear combinations. In this space the expressions \[ a\left( {x \otimes y}\right) \; - \;\left( {{ax} \otimes y}\right) \] \[ a\left( {x \otimes y}\right) \; - \;\left( {x \otimes {ay}}\right) \] \[ {x}_{1} \otimes y + {x}_{2} \otimes y\; - \;\left( {{x}_{1} + {x}_{2}}\right) \otimes y \] \[ x \otimes {y}_{1} + x \otimes {y}_{2}\; - \;x \otimes \left( {{y}_{1} + {y}_{2}}\right) \] are next defined to be equivalent to 0, for all \( a \in \mathbb{C};x,{x}_{1},{x}_{2} \in \mathcal{H} \) and \( y,{y}_{1},{y}_{2} \in \mathcal{K} \) . The set of all linear combinations of expressions \( x \otimes y \) for \( x \in \mathcal{H}, y \in \mathcal{K} \), after reduction modulo these equivalences, is called the tensor product of \( \mathcal{H} \) and \( \mathcal{K} \) and is denoted as \( \mathcal{H} \otimes \mathcal{K} \) . Each term \( c\left( {x \otimes y}\right) \) determines a conjugate-bilinear functional \( {F}^{ * }\left( {u, v}\right) \) on \( \mathcal{H} \times \mathcal{K} \) by the natural rule \[ {F}^{ * }\left( {u, v}\right) = c\langle u, x\rangle \langle v, y\rangle . \] This can be extended to sums of such terms, and the equivalences were chosen in such a way that equivalent expressions (i.e., expressions giving the same element of \( \mathcal{H} \otimes \mathcal{K} \) ) give the same functional. The complex conjugate of each such functional gives a bilinear functional. 
These ideas can be extended directly to \( k \) -linear functionals, including those that are linear in some of the arguments and conjugate-linear in others. Theorem I.4.2 The space of all bilinear functionals on \( \mathcal{H} \) is linearly spanned by the elementary ones. If \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) is a fixed orthonormal basis of \( \mathcal{H} \) , then to every bilinear functional \( F \) there correspond unique vectors \( {x}_{1},\ldots ,{x}_{n} \) such that \[ {F}^{ * } = \mathop{\sum }\limits_{j}{e}_{j} \otimes {x}_{j} \] Every sequence \( {x}_{j},1 \leq j \leq n \), leads to a bilinear functional in this way. Proof. Let \( F \) be a bilinear functional on \( \mathcal{H} \) . For each \( j,{F}^{ * }\left( {{e}_{j}, v}\right) \) is a conjugate-linear function of \( v \) . Hence there exists a unique vector \( {x}_{j} \) such that \( {F}^{ * }\left( {{e}_{j}, v}\right) = \left\langle {v,{x}_{j}}\right\rangle \) for all \( v \) . Now, if \( u = \sum {a}_{j}{e}_{j} \) is any vector in \( \mathcal{H} \), then \( F\left( {u, v}\right) = \sum {a}_{j}F\left( {{e}_{j}, v}\right) = \) \( \sum \left\langle {{e}_{j}, u}\right\rangle \left\langle {{x}_{j}, v}\right\rangle \) . In other words, \( {F}^{ * } = \sum {e}_{j} \otimes {x}_{j} \) as asserted. A more symmetric form of the above statement is the following: Corollary I.4.3 If \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) and \( \left( {{f}_{1},\ldots ,{f}_{n}}\right) \) are two fixed orthonormal bases of \( \mathcal{H} \), then every bilinear functional \( F \) on \( \mathcal{H} \) has a unique representation \( F = \sum {a}_{ij}{\left( {e}_{i} \otimes {f}_{j}\right) }^{ * } \) . (Most often, the choice \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) = \left( {{f}_{1},\ldots ,{f}_{n}}\right) \) is the convenient one for using the above representations.) Thus, it is natural to denote the space of conjugate-bilinear functionals on \( \mathcal{H} \) by \( \mathcal{H} \otimes \mathcal{H} \) . 
This is an \( {n}^{2} \) -dimensional vector space. The inner product on this space is defined by putting \[ \left\langle {{u}_{1} \otimes {u}_{2},{v}_{1} \otimes {v}_{2}}\right\rangle = \left\langle {{u}_{1},{v}_{1}}\right\rangle \left\langle {{u}_{2},{v}_{2}}\right\rangle \] and then extending this definition to all of \( \mathcal{H} \otimes \mathcal{H} \) in a natural way. It is easy to verify that this definition is consistent with the equivalences used in defining the tensor product. If \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) and \( \left( {{f}_{1},\ldots ,{f}_{n}}\right) \) are orthonormal bases in \( \mathcal{H} \), then \( {e}_{i} \otimes {f}_{j},\;1 \leq i, j \leq n \), form an orthonormal basis in \( \mathcal{H} \otimes \mathcal{H} \) . For the purposes of computation it is useful to order this basis lexicographically: we say that \( {e}_{i} \otimes {f}_{j} \) precedes \( {e}_{k} \otimes {f}_{\ell } \) if and only if either \( i < k \) or \( i = k \) and \( j < \ell \) . Tensor products such as \( \mathcal{H} \otimes \mathcal{K} \) or \( {\mathcal{K}}^{ * } \otimes \mathcal{H} \) can be defined by imitating the above procedure. Here the space \( {\mathcal{K}}^{ * } \) is the space of all conjugate-linear functionals on \( \mathcal{K} \) . This space is called the dual space of \( \mathcal{K} \) . There is a natural identification between \( \mathcal{K} \) and \( {\mathcal{K}}^{ * } \) via a conjugate-linear, norm-preserving bijection. Exercise I.4.4 (i) There is a natural isomorphism between the spaces \( \mathcal{K} \otimes \) \( {\mathcal{H}}^{ * } \) and \( \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \) in which the elementary tensor \( k \otimes {h}^{ * } \) corresponds to the linear map that takes a vector \( u \) of \( \mathcal{H} \) to \( \langle h, u\rangle k \) . This linear transformation has rank one, and all rank-one transformations can be obtained in this way.
(ii) An explicit construction of this isomorphism \( \varphi \) is outlined below. Let \( {e}_{1},\ldots ,{e}_{n} \) be an orthonormal basis for \( \mathcal{H} \) and for \( {\mathcal{H}}^{ * } \) . Let \( {f}_{1},\ldots ,{f}_{m} \) be an orthonormal basis for \( \mathcal{K} \) . Identify each element of \( \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \) with its matrix with respect to these bases. Let \( {E}_{ij} \) be the matrix all of whose entries are zero except the \( \left( {i, j}\right) \) -entry, which is 1 . Show that \( \varphi \left( {{f}_{i} \otimes {e}_{j}}\right) = {E}_{ij} \) for all \( 1 \leq i \leq m,1 \leq j \leq n \) . Thus, if \( A \) is any \( m \times n \) matrix with entries \( {a}_{ij} \) , then \[ {\varphi }^{-1}\left( A\right) = \mathop{\sum }\limits_{{i, j}}{a}_{ij}\left( {{f}_{i} \otimes {e}_{j}}\right) = \mathop{\sum }\limits_{j}\left( {A{e}_{j}}\right) \otimes {e}_{j}. \] (iii) The space \( \mathcal{L}\left( {\mathcal{H},\mathcal{K}}\right) \) is a Hilbert space with inner product \( \langle A, B\rangle = \) tr \( {A}^{ * }B \) . The set \( {E}_{ij},1 \leq i \leq m,1 \leq j \leq n \), is an orthonormal basis for this space. Show that the map \( \varphi \) is a Hilbert space isomorphism; i.e., \( \left\langle {{\varphi }^{-1}\left( A\right) ,{\varphi }^{-1}\left( B\right) }\right\rangle = \langle A, B\rangle \) for all \( A, B \) .
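Exercise I.4.4 can be made concrete in coordinates. If the \( {e}_{j} \) and \( {f}_{i} \) are standard basis vectors and \( \otimes \) is realized as the Kronecker product, then \( {\varphi }^{-1}\left( A\right) = \mathop{\sum }\limits_{j}\left( {A{e}_{j}}\right) \otimes {e}_{j} \) is just the row-major flattening of \( A \), and the inner product tr \( {A}^{ * }B \) matches the vector inner product. A sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 3, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))

def phi_inv(M):
    # phi^{-1}(M) = sum_j (M e_j) ⊗ e_j, with ⊗ realized as np.kron
    cols = M.shape[1]
    e = np.eye(cols)
    return sum(np.kron(M @ e[j], e[j]) for j in range(cols))

# With this ordering, phi^{-1}(A) is the row-major flattening of A.
assert np.allclose(phi_inv(A), A.flatten())

# phi is a Hilbert space isomorphism: <phi^{-1}(A), phi^{-1}(B)> = tr A*B
assert np.isclose(phi_inv(A) @ phi_inv(B), np.trace(A.T @ B))
```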
Corresponding facts about multilinear functionals and tensor products of several spaces are proved in the same way. We will use the notation \( { \otimes }^{k}\mathcal{H} \) for the \( k \) -fold tensor product \( \mathcal{H} \otimes \mathcal{H} \otimes \cdots \otimes \mathcal{H} \) . Tensor products of linear operators are defined as follows. We first define \( A \otimes B \) on elementary tensors by putting \( \left( {A \otimes B}\right) \left( {x \otimes y}\right) = {Ax} \otimes {By} \) . We then extend this definition linearly to all linear combinations of elementary tensors, i.e., to all of \( \mathcal{H} \otimes \mathcal{H} \) . This extension involves no inconsistency. It is obvious that \( \left( {A \otimes B}\right) \left( {C \otimes D}\right) = {AC} \otimes {BD} \), that the identity on \( \mathcal{H} \otimes \mathcal{H} \) is given by \( I \otimes I \), and that if \( A \) and \( B \) are invertible, then so is \( A \otimes B \) and \( {\left( A \otimes B\right) }^{-1} = {A}^{-1} \otimes {B}^{-1} \) . A one-line verification shows that \( {\left( A \otimes B\right) }^{ * } = {A}^{ * } \otimes {B}^{ * } \) . It follows that \( A \otimes B \) is Hermitian if (but not only if) \( A \) and \( B \) are Hermitian; \( A \otimes B \) is unitary if (but not only if) \( A \) and \( B \) are unitary; \( A \otimes B \) is normal if (and only if) \( A \) and \( B \) are normal. (The trivial cases \( A = 0 \), or \( B = 0 \), must be excluded for the last assertion to be valid.) Exercise I.4.5 Suppose it is known that \( \mathcal{M} \) is an invariant subspace for \( A \) . What invariant subspaces for \( A \otimes A \) can be obtained from this information alone? For operators \( A, B \) on different spaces \( \mathcal{H} \) and \( \mathcal{K} \), the tensor product can be defined in the same way as above. This gives an operator \( A \otimes B \) on \( \mathcal{H} \otimes \mathcal{K} \) . 
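In coordinates, relative to the lexicographically ordered product basis (cf. Exercise I.4.6 below), the operator \( A \otimes B \) is the Kronecker product np.kron(A, B). The following sketch, assuming NumPy, checks the multiplication rule, the adjoint rule, and the norm identity \( \parallel A \otimes B\parallel = \parallel A\parallel \parallel B\parallel \) :

```python
import numpy as np

rng = np.random.default_rng(7)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

# (A ⊗ B)(C ⊗ D) = AC ⊗ BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# (A ⊗ B)* = A* ⊗ B*  (real matrices, so * is transpose)
assert np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))

# Singular values multiply, so the operator norm is multiplicative.
assert np.isclose(np.linalg.norm(np.kron(A, B), 2),
                  np.linalg.norm(A, 2) * np.linalg.norm(B, 2))
```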
Many of the assertions made earlier for the case \( \mathcal{H} = \mathcal{K} \) remain true in this situation. Exercise I.4.6 Let \( A \) and \( B \) be two matrices (not necessarily of the same size). Relative to the lexicographically ordered basis on the space of tensors, the matrix for \( A \otimes B \) can be written in block form as follows: if \( A = \left( {a}_{ij}\right) \) , then \[ A \otimes B = \left( \begin{matrix} {a}_{11}B & \cdots & {a}_{1n}B \\ \cdots & \cdots & \cdots \\ {a}_{n1}B & \cdots & {a}_{nn}B \end{matrix}\right) \] Especially important are the operators \( A \otimes A \otimes \cdots \otimes A \), which are \( k \) -fold tensor products of an operator \( A \in \mathcal{L}\left( \mathcal{H}\right) \) . Such a product will be written more briefly as \( {A}^{\otimes k} \) or \( { \otimes }^{k}A \) . This is an operator on the \( {n}^{k} \) -dimensional space \( { \otimes }^{k}\mathcal{H} \) . Some of the easily proved and frequently used properties of these products are summarised below: 1. \( \left( {{ \otimes }^{k}A}\right) \left( {{ \otimes }^{k}B}\right) = { \otimes }^{k}\left( {AB}\right) \) . 2. \( {\left( { \otimes }^{k}A\right) }^{-1} = { \otimes }^{k}{A}^{-1} \) when either inverse exists. 3. \( {\left( { \otimes }^{k}A\right) }^{ * } = { \otimes }^{k}{A}^{ * } \) . 4. If \( A \) is Hermitian, unitary, normal or positive, then so is \( { \otimes }^{k}A \) . 5. If \( {\alpha }_{1},\ldots ,{\alpha }_{k} \) (not necessarily distinct) are eigenvalues of \( A \) with eigenvectors \( {u}_{1},\ldots ,{u}_{k} \), respectively, then \( {\alpha }_{1}\cdots {\alpha }_{k} \) is an eigenvalue of \( { \otimes }^{k}A \) and \( {u}_{1} \otimes \cdots \otimes {u}_{k} \) is an eigenvector for it. 6. If \( {s}_{{i}_{1}},\ldots ,{s}_{{i}_{k}} \) (not necessarily distinct) are singular values of \( A \), then \( {s}_{{i}_{1}}\cdots {s}_{{i}_{k}} \) is a singular value of \( { \otimes }^{k}A \) . 7. 
\( \begin{Vmatrix}{{ \otimes }^{k}A}\end{Vmatrix} = \parallel A{\parallel }^{k} \) . The reader should formulate and prove analogous statements for tensor products \( {A}_{1} \otimes {A}_{2} \otimes \cdots \otimes {A}_{k} \) of different operators. ## I. 5 Symmetry Classes In the space \( { \otimes }^{k}\mathcal{H} \) there are two especially important subspaces (for nontrivial cases, \( k > 1 \) and \( n > 1 \) ). The antisymmetric tensor product of vectors \( {x}_{1},\ldots ,{x}_{k} \) in \( \mathcal{H} \) is defined as \[ {x}_{1} \land \cdots \land {x}_{k} = {\left( k!\right) }^{-1/2}\mathop{\sum }\limits_{\sigma }{\varepsilon }_{\sigma }{x}_{\sigma \left( 1\right) } \otimes \cdots \otimes {x}_{\sigma \left( k\right) }, \] where \( \sigma \) runs over all permutations of the \( k \) indices and \( {\varepsilon }_{\sigma } \) is \( \pm 1 \), depending on whether \( \sigma \) is an even or an odd permutation. ( \( {\varepsilon }_{\sigma } \) is called the signature of \( \sigma \) .) The factor \( {\left( k!\right) }^{-1/2} \) is chosen so that if \( {x}_{j} \) are orthonormal, then \( {x}_{1} \land \cdots \land {x}_{k} \) is a unit vector. The antisymmetry of this product means that \[ {x}_{1} \land \cdots \land {x}_{i} \land \cdots \land {x}_{j} \land \cdots \land {x}_{k} = - {x}_{1} \land \cdots \land {x}_{j} \land \cdots \land {x}_{i} \land \cdots \land {x}_{k}, \] i.e., interchanging the position of any two of the factors in the product amounts to a change of sign. In particular, \( {x}_{1} \land \cdots \land {x}_{k} = 0 \) if any two of the factors are equal. The span of all antisymmetric tensors \( {x}_{1} \land \cdots \land {x}_{k} \) in \( { \otimes }^{k}\mathcal{H} \) is denoted by \( { \land }^{k}\mathcal{H} \) . This is called the \( k \) th antisymmetric tensor product (or tensor power) of \( \mathcal{H} \) . 
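For \( k = 2 \) the definition reduces to \( x \land y = {2}^{-1/2}\left( {x \otimes y - y \otimes x}\right) \). A small NumPy sketch (my own illustration, not from the text) verifies the antisymmetry, the vanishing for equal factors, and the normalisation for orthonormal factors:

```python
import numpy as np

def wedge2(x, y):
    # x ∧ y = (2!)^{-1/2} (x⊗y − y⊗x), as a vector of length n²
    return (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2.0)

e1, e2 = np.eye(4)[0], np.eye(4)[1]

antisym_ok = np.allclose(wedge2(e1, e2), -wedge2(e2, e1))  # sign change on swap
zero_ok = np.allclose(wedge2(e1, e1), 0)                   # equal factors give 0
unit_ok = np.isclose(np.linalg.norm(wedge2(e1, e2)), 1.0)  # unit norm for orthonormal factors

print(antisym_ok, zero_ok, unit_ok)
```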
Given an orthonormal basis \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) in \( \mathcal{H} \), there is a standard way of constructing an orthonormal basis in \( { \land }^{k}\mathcal{H} \) . Let \( {Q}_{k, n} \) denote the set of all strictly increasing \( k \) -tuples chosen from \( \{ 1,2,\ldots, n\} \) ; i.e., \( \mathcal{I} \in {Q}_{k, n} \) if and only if \( \mathcal{I} = \left( {{i}_{1},{i}_{2},\ldots ,{i}_{k}}\right) \), where \( 1 \leq {i}_{1} < {i}_{2} < \cdots < {i}_{k} \leq n \) . For such an \( \mathcal{I} \) let \( {e}_{\mathcal{I}} = {e}_{{i}_{1}} \land \cdots \land {e}_{{i}_{k}} \) . Then \( \left\{ {{e}_{\mathcal{I}} : \mathcal{I} \in {Q}_{k, n}}\right\} \) gives an orthonormal basis of \( { \land }^{k}\mathcal{H} \) . Such \( \mathcal{I} \) are sometimes called multi-indices. It is conventional to order them lexicographically. Note that the cardinality of \( {Q}_{k, n} \), and hence the dimensionality of \( { \land }^{k}\mathcal{H} \), is \( \left( \begin{array}{l} n \\ k \end{array}\right) \) . If in particular \( k = n \), the space \( { \land }^{k}\mathcal{H} \) is 1-dimensional. This plays a special role later on. When \( k > n \) the space \( { \land }^{k}\mathcal{H} \) is \( \{ 0\} \) . Exercise I.5.1 Show that the inner product \( \left\langle {{x}_{1} \land \cdots \land {x}_{k},{y}_{1} \land \cdots \land {y}_{k}}\right\rangle \) is equal to the determinant of the \( k \times k \) matrix \( \left( \left\langle {{x}_{i},{y}_{j}}\right\rangle \right) \) . The symmetric tensor product of \( {x}_{1},\ldots ,{x}_{k} \) is defined as \[ {x}_{1} \vee \cdots \vee {x}_{k} = {\left( k!\right) }^{-1/2}\mathop{\sum }\limits_{\sigma }{x}_{\sigma \left( 1\right) } \otimes \cdots \otimes {x}_{\sigma \left( k\right) }, \] where \( \sigma \), as before, runs over all permutations of the \( k \) indices. The linear span of all these vectors comprises the subspace \( { \vee }^{k}\mathcal{H} \) of \( { \otimes }^{k}\mathcal{H} \) . 
This is called the \( k \) th symmetric tensor power of \( \mathcal{H} \) . Let \( {G}_{k, n} \) denote the set of all non-decreasing \( k \) -tuples chosen from \( \{ 1,2,\ldots, n\} \) ; i.e., \( \mathcal{I} \in {G}_{k, n} \) if and only if \( \mathcal{I} = \left( {{i}_{1},\ldots ,{i}_{k}}\right) \), where \( 1 \leq {i}_{1} \leq {i}_{2} \leq \cdots \leq {i}_{k} \leq n \) . If such an \( \mathcal{I} \) consists of \( \ell \) distinct indices \( {i}_{1},\ldots ,{i}_{\ell } \) with multiplicities \( {m}_{1},\ldots ,{m}_{\ell } \), respectively, put \( m\left( \mathcal{I}\right) = {m}_{1}!{m}_{2}!\cdots {m}_{\ell }! \) . Given an orthonormal basis \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) of \( \mathcal{H} \) define, for every \( \mathcal{I} \in {G}_{k, n} \), \( {e}_{\mathcal{I}} = {e}_{{i}_{1}} \vee {e}_{{i}_{2}} \vee \cdots \vee {e}_{{i}_{k}} \) . Then the set \( \left\{ {m{\left( \mathcal{I}\right) }^{-1/2}{e}_{\mathcal{I}} : \mathcal{I} \in {G}_{k, n}}\right\} \) is an orthonormal basis in \( { \vee }^{k}\mathcal{H} \) . Again, it is conventional to order these multi-indices lexicographically. The cardinality of the set \( {G}_{k, n} \), and hence the dimensionality of the space \( { \vee }^{k}\mathcal{H} \), is \( \left( \begin{matrix} n + k - 1 \\ k \end{matrix}\right) \) . 
Notice that the expressions for the basis in \( { \land }^{k}\mathcal{H} \) are simpler because \( m\left( \mathcal{I}\right) = 1 \) for \( \mathcal{I} \in {Q}_{k, n} \) . Exercise I.5.2 The elementary tensors \( x \otimes \cdots \otimes x \), with all factors equal, are all in the subspace \( { \vee }^{k}\mathcal{H} \) . Do they span it? Exercise I.5.3 Let \( \mathcal{M} \) be a p-dimensional subspace of \( \mathcal{H} \) and \( \mathcal{N} \) its orthogonal complement. Choosing \( j \) vectors from \( \mathcal{M} \) and \( k - j \) vectors from \( \mathcal{N} \) and forming the linear span of the antisymmetric tensor products of all such vectors, we get different subspaces of \( { \land }^{k}\mathcal{H} \) ; for example, one of those is \( { \land }^{k}\mathcal{M} \) . Determine all the subspaces thus obtained and their dimensionalities. Do the same for \( { \vee }^{k}\mathcal{H} \) . Exercise I.5.4 If \( \dim \mathcal{H} = 3 \), then \( \dim { \otimes }^{3}\mathcal{H} = {27},\dim { \land }^{3}\mathcal{H} = 1 \) and \( \dim { \vee }^{3}\mathcal{H} = {10} \) . In terms of an orthonormal basis of \( \mathcal{H} \), write an element of \( {\left( { \land }^{3}\mathcal{H} \oplus { \vee }^{3}\mathcal{H}\right) }^{ \bot }. \) The permanent of a matrix \( A = \left( {a}_{ij}\right) \) is defined as \[ \operatorname{per}A = \mathop{\sum }\limits_{\sigma }{a}_{{1\sigma }\left( 1\right) }\cdots {a}_{{n\sigma }\left( n\right) } \] where \( \sigma \) varies over all permutations on \( n \) symbols. Note that, in contrast to the determinant, the permanent is not invariant under similarities. Thus, matrices of the same operator relative to different bases may have different permanents. Exercise I.5.5 Show that the inner product \( \left\langle {{x}_{1} \vee \cdots \vee {x}_{k},{y}_{1} \vee \cdots \vee {y}_{k}}\right\rangle \) is equal to the permanent of the \( k \times k \) matrix \( \left( \left\langle {{x}_{i},{y}_{j}}\right\rangle \right) \) . 
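Exercises I.5.1 and I.5.5 can both be checked numerically straight from the definitions. The following NumPy sketch is my own illustration (the helper names `wedge`, `vee`, `per` are mine): the inner product of wedge products matches the determinant of \( \left( \left\langle {{x}_{i},{y}_{j}}\right\rangle \right) \), and that of symmetric products matches its permanent.

```python
import numpy as np
from math import factorial
from itertools import permutations

def kron_list(vs):
    # elementary tensor v1 ⊗ v2 ⊗ ... as a flat vector
    out = np.array([1.0 + 0j])
    for v in vs:
        out = np.kron(out, v)
    return out

def signature(p):
    # signature of a permutation given as a tuple
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def wedge(vs):
    k = len(vs)
    return sum(signature(p) * kron_list([vs[i] for i in p])
               for p in permutations(range(k))) / np.sqrt(factorial(k))

def vee(vs):
    k = len(vs)
    return sum(kron_list([vs[i] for i in p])
               for p in permutations(range(k))) / np.sqrt(factorial(k))

def per(M):
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

rng = np.random.default_rng(1)
xs = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3)]
ys = [rng.standard_normal(4) + 1j * rng.standard_normal(4) for _ in range(3)]
G = np.array([[np.vdot(x, y) for y in ys] for x in xs])  # G[i,j] = <x_i, y_j>

det_ok = np.isclose(np.vdot(wedge(xs), wedge(ys)), np.linalg.det(G))  # Exercise I.5.1
per_ok = np.isclose(np.vdot(vee(xs), vee(ys)), per(G))                # Exercise I.5.5
print(det_ok, per_ok)
```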
The spaces \( { \land }^{k}\mathcal{H} \) and \( { \vee }^{k}\mathcal{H} \) are also referred to as "symmetry classes" of tensors - there are other such classes in \( { \otimes }^{k}\mathcal{H} \) . Another way to look at them is as the ranges of the respective symmetry operators. Define \( {P}_{ \land } \) and \( {P}_{ \vee } \) as linear operators on \( { \otimes }^{k}\mathcal{H} \) by first defining them on the elementary tensors as \[ {P}_{ \land }\left( {{x}_{1} \otimes \cdots \otimes {x}_{k}}\right) = {\left( k!\right) }^{-1/2}{x}_{1} \land \cdots \land {x}_{k} \] \[ {P}_{ \vee }\left( {{x}_{1} \otimes \cdots \otimes {x}_{k}}\right) = {\left( k!\right) }^{-1/2}{x}_{1} \vee \cdots \vee {x}_{k} \] and extending them by linearity to the whole space. (Again it should be verified that this can be done consistently.) The constant factor in the above definitions has been chosen so that both these operators are idempotent. They are also Hermitian. The ranges of these orthoprojectors are \( { \land }^{k}\mathcal{H} \) and \( { \vee }^{k}\mathcal{H} \), respectively. If \( A \in \mathcal{L}\left( \mathcal{H}\right) \), then \( A{x}_{1} \land \cdots \land A{x}_{k} \) lies in \( { \land }^{k}\mathcal{H} \) for all \( {x}_{1},\ldots ,{x}_{k} \) in \( \mathcal{H} \) . Using this, one sees that the space \( { \land }^{k}\mathcal{H} \) is invariant under the operator \( { \otimes }^{k}A \) . The restriction of \( { \otimes }^{k}A \) to this invariant subspace is denoted by \( { \land }^{k}A \) or \( {A}^{\land k} \) . This is called the \( k \) th antisymmetric tensor power or the \( k \) th Grassmann power of \( A \) . We could have also defined it by first defining it on the elementary antisymmetric tensors \( {x}_{1} \land \cdots \land {x}_{k} \) as \[ { \land }^{k}A\left( {{x}_{1} \land \cdots \land {x}_{k}}\right) = A{x}_{1} \land \cdots \land A{x}_{k} \] and then extending it linearly to the span \( { \land }^{k}\mathcal{H} \) of these tensors. 
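For \( k = 2 \) the two symmetry operators are \( {P}_{ \land } = \frac{1}{2}\left( {I - S}\right) \) and \( {P}_{ \vee } = \frac{1}{2}\left( {I + S}\right) \), where \( S \) is the swap \( S\left( {x \otimes y}\right) = y \otimes x \). A NumPy sketch (my own illustration) confirms that they are Hermitian idempotents whose ranks are \( \left( \begin{array}{l} n \\ 2 \end{array}\right) \) and \( \left( \begin{matrix} n + 1 \\ 2 \end{matrix}\right) \), here \( 3 \) and \( 6 \) for \( n = 3 \):

```python
import numpy as np

n = 3
S = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        S[j * n + i, i * n + j] = 1.0  # S(e_i ⊗ e_j) = e_j ⊗ e_i

P_wedge = (np.eye(n * n) - S) / 2   # orthoprojector onto the antisymmetric subspace
P_vee   = (np.eye(n * n) + S) / 2   # orthoprojector onto the symmetric subspace

idempotent_ok = (np.allclose(P_wedge @ P_wedge, P_wedge)
                 and np.allclose(P_vee @ P_vee, P_vee))
hermitian_ok = np.allclose(P_wedge, P_wedge.T) and np.allclose(P_vee, P_vee.T)
ranks = (np.linalg.matrix_rank(P_wedge), np.linalg.matrix_rank(P_vee))
print(idempotent_ok, hermitian_ok, ranks)
```

The ranks \( 3 + 6 = 9 = {n}^{2} \) reflect the fact that for \( k = 2 \) the two symmetry classes together fill \( { \otimes }^{2}\mathcal{H} \) (this fails for \( k \geq 3 \), as Exercise I.5.4 shows).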
Exercise I.5.6 Let \( A \) be a nilpotent operator. Show how to obtain, from a Jordan basis for \( A \), a Jordan basis for \( { \land }^{2}A \) . The space \( { \vee }^{k}\mathcal{H} \) is also invariant under the operator \( { \otimes }^{k}A \) . The restriction of \( { \otimes }^{k}A \) to this invariant subspace is written as \( { \vee }^{k}A \) or \( {A}^{\vee k} \) and called the \( k \) th symmetric tensor power of \( A \) . Some essential and simple properties of these operators are summarised below: 1. \( \left( {{ \land }^{k}A}\right) \left( {{ \land }^{k}B}\right) = { \land }^{k}\left( {AB}\right) ,\;\left( {{ \vee }^{k}A}\right) \left( {{ \vee }^{k}B}\right) = { \vee }^{k}\left( {AB}\right) \) . 2. \( {\left( { \land }^{k}A\right) }^{ * } = { \land }^{k}{A}^{ * },\;{\left( { \vee }^{k}A\right) }^{ * } = { \vee }^{k}{A}^{ * } \) . 3. \( {\left( { \land }^{k}A\right) }^{-1} = { \land }^{k}{A}^{-1},\;{\left( { \vee }^{k}A\right) }^{-1} = { \vee }^{k}{A}^{-1} \) . 4. If \( A \) is Hermitian, unitary, normal or positive, then so are \( { \land }^{k}A \) and \( { \vee }^{k}A \) . 5. If \( {\alpha }_{1},\ldots ,{\alpha }_{k} \) are eigenvalues of \( A \) (not necessarily distinct) belonging to eigenvectors \( {u}_{1},\ldots ,{u}_{k} \), respectively, then \( {\alpha }_{1}\cdots {\alpha }_{k} \) is an eigenvalue of \( { \vee }^{k}A \) belonging to eigenvector \( {u}_{1} \vee \cdots \vee {u}_{k} \) ; if in addition the vectors \( {u}_{j} \) are linearly independent, then \( {\alpha }_{1}\cdots {\alpha }_{k} \) is an eigenvalue of \( { \land }^{k}A \) belonging to eigenvector \( {u}_{1} \land \cdots \land {u}_{k} \) . 6. 
If \( {s}_{1},\ldots ,{s}_{n} \) are the singular values of \( A \), then the singular values of \( { \land }^{k}A \) are \( {s}_{{i}_{1}}\cdots {s}_{{i}_{k}} \), where \( \left( {{i}_{1},\ldots ,{i}_{k}}\right) \) vary over \( {Q}_{k, n} \) ; the singular values of \( { \vee }^{k}A \) are \( {s}_{{i}_{1}}\cdots {s}_{{i}_{k}} \), where \( \left( {{i}_{1},\ldots ,{i}_{k}}\right) \) vary over \( {G}_{k, n} \) . 7. \( \operatorname{tr}{ \land }^{k}A \) is the \( k \) th elementary symmetric polynomial in the eigenvalues of \( A \) ; \( \operatorname{tr}{ \vee }^{k}A \) is the \( k \) th complete symmetric polynomial in the eigenvalues of \( A \) . (These polynomials are defined as follows. Given any \( n \) -tuple \( \left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) of numbers or other commuting objects, the \( k \) th elementary symmetric polynomial in them is the sum of all terms \( {\alpha }_{{i}_{1}}{\alpha }_{{i}_{2}}\cdots {\alpha }_{{i}_{k}} \) for \( \left( {{i}_{1},{i}_{2},\ldots ,{i}_{k}}\right) \) in \( {Q}_{k, n} \) ; the \( k \) th complete symmetric polynomial is the sum of all terms \( {\alpha }_{{i}_{1}}{\alpha }_{{i}_{2}}\cdots {\alpha }_{{i}_{k}} \) for \( \left( {{i}_{1},{i}_{2},\ldots ,{i}_{k}}\right) \) in \( {G}_{k, n} \) .) For \( A \in \mathcal{L}\left( \mathcal{H}\right) \), consider the operator \( A \otimes I \otimes \cdots \otimes I + I \otimes A \otimes I \otimes \cdots \otimes I + \cdots + I \otimes I \otimes \cdots \otimes A \) . (There are \( k \) summands, each of which is a product of \( k \) factors.) The eigenvalues of this operator on \( { \otimes }^{k}\mathcal{H} \) are sums of \( k \) eigenvalues of \( A \) . Both the spaces \( { \land }^{k}\mathcal{H} \) and \( { \vee }^{k}\mathcal{H} \) are invariant under this operator. One pleasant way to see this is to regard this operator as the \( t \) -derivative at \( t = 0 \) of \( { \otimes }^{k}\left( {I + {tA}}\right) \) . 
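Both claims — that the eigenvalues of this operator are sums of \( k \) eigenvalues of \( A \), and that it is the \( t \) -derivative of \( { \otimes }^{k}\left( {I + {tA}}\right) \) at \( t = 0 \) — can be checked numerically for \( k = 2 \). This is my own sketch (a Hermitian \( A \) is chosen so that the spectra are real and easy to sort):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M = rng.standard_normal((n, n))
A = (M + M.T) / 2          # Hermitian, so eigenvalues are real
I = np.eye(n)

K = np.kron(A, I) + np.kron(I, A)   # A⊗I + I⊗A acting on ⊗²H

alpha = np.linalg.eigvalsh(A)
pair_sums = np.sort([a + b for a in alpha for b in alpha])
eig_ok = np.allclose(np.linalg.eigvalsh(K), pair_sums)  # spectrum = all pairwise sums

# ⊗²(I + tA) = I + tK + t²(A⊗A), so K is its t-derivative at t = 0
t = 1e-6
deriv_ok = np.allclose((np.kron(I + t * A, I + t * A) - np.eye(n * n)) / t,
                       K, atol=1e-4)
print(eig_ok, deriv_ok)
```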
The restriction of this operator to the space \( { \land }^{k}\mathcal{H} \) will be of particular interest to us; we will write this restriction as \( {A}^{\left\lbrack k\right\rbrack } \) . If \( {u}_{1},\ldots ,{u}_{k} \) are linearly independent eigenvectors of \( A \) belonging to eigenvalues \( {\alpha }_{1},\ldots ,{\alpha }_{k} \), then \( {u}_{1} \land \cdots \land {u}_{k} \) is an eigenvector of \( {A}^{\left\lbrack k\right\rbrack } \) belonging to eigenvalue \( {\alpha }_{1} + \cdots + {\alpha }_{k} \) . Now, fixing an orthonormal basis \( \left( {{e}_{1},\ldots ,{e}_{n}}\right) \) of \( \mathcal{H} \), identify \( A \) with its matrix \( \left( {a}_{ij}\right) \) . We want to find the matrix representations of \( { \land }^{k}A \) and \( { \vee }^{k}A \) relative to the standard bases constructed earlier. The basis of \( { \land }^{k}\mathcal{H} \) we are using is \( {e}_{\mathcal{I}},\mathcal{I} \in {Q}_{k, n} \) . The \( \left( {\mathcal{I},\mathcal{J}}\right) \) -entry of \( { \land }^{k}A \) is \( \left\langle {{e}_{\mathcal{I}},\left( {{ \land }^{k}A}\right) {e}_{\mathcal{J}}}\right\rangle \) . One may verify that this is equal to a subdeterminant of \( A \) . 
Namely, let \( A\left\lbrack {\mathcal{I} \mid \mathcal{J}}\right\rbrack \) denote the \( k \times k \) matrix obtained from \( A \) by expunging all its entries \( {a}_{ij} \) except those for which \( i \in \mathcal{I} \) and \( j \in \mathcal{J} \) . Then, the \( \left( {\mathcal{I},\mathcal{J}}\right) \) -entry of \( { \land }^{k}A \) is equal to det \( A\left\lbrack {\mathcal{I} \mid \mathcal{J}}\right\rbrack \) . The special case \( k = n \) leads to the 1-dimensional space \( { \land }^{n}\mathcal{H} \) . The operator \( { \land }^{n}A \) on this space is just the operator of multiplication by the number det \( A \) . We can thus think of det \( A \) as being equal to \( { \land }^{n}A \) . The basis of \( { \vee }^{k}\mathcal{H} \) we are using is \( m{\left( \mathcal{I}\right) }^{-1/2}{e}_{\mathcal{I}},\mathcal{I} \in {G}_{k, n} \) . The \( \left( {\mathcal{I},\mathcal{J}}\right) \) - entry of the matrix \( { \vee }^{k}A \) can be computed as before, and the result is somewhat similar to that for \( { \land }^{k}A \) . For \( \mathcal{I} = \left( {{i}_{1},\ldots ,{i}_{k}}\right) \) and \( \mathcal{J} = \left( {{j}_{1},\ldots ,{j}_{k}}\right) \) in \( {G}_{k, n} \), let \( A\left\lbrack {\mathcal{I} \mid \mathcal{J}}\right\rbrack \) now denote the \( k \times k \) matrix whose \( \left( {r, s}\right) \) -entry is the \( \left( {{i}_{r},{j}_{s}}\right) \) - entry of \( A \) . Since repetitions of indices are allowed in \( \mathcal{I} \) and \( \mathcal{J} \) , this is not a submatrix of \( A \) this time. One verifies that the \( \left( {\mathcal{I},\mathcal{J}}\right) \) -entry of \( { \vee }^{k}A \) is \( {\left( m\left( \mathcal{I}\right) m\left( \mathcal{J}\right) \right) }^{-1/2} \) per \( A\left\lbrack {\mathcal{I} \mid \mathcal{J}}\right\rbrack \) . In particular, per \( A \) is one of the diagonal entries of \( { \vee }^{n}A \) : the \( \left( {\mathcal{I},\mathcal{I}}\right) \) -entry for \( \mathcal{I} = \left( {1,2,\ldots, n}\right) \) . 
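The minor formula gives a concrete recipe for \( { \land }^{k}A \) : its matrix has entries \( \det A\left\lbrack {\mathcal{I} \mid \mathcal{J}}\right\rbrack \) indexed by \( {Q}_{k, n} \) in lexicographic order. The following NumPy sketch (my own illustration) builds it this way and checks the multiplicativity property, the identification of \( { \land }^{n}A \) with \( \det A \), and the trace identity from property 7 above:

```python
import numpy as np
from itertools import combinations

def compound(A, k):
    """k-th antisymmetric power of A via its minor entries det A[I|J]."""
    n = A.shape[0]
    idx = list(combinations(range(n), k))  # Q_{k,n} in lexicographic order
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in idx] for I in idx])

rng = np.random.default_rng(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# ∧²(AB) = (∧²A)(∧²B)
mult_ok = np.allclose(compound(A @ B, 2), compound(A, 2) @ compound(B, 2))

# ∧ⁿA acts on the 1-dimensional space ∧ⁿH as multiplication by det A
det_ok = np.isclose(compound(A, 3)[0, 0], np.linalg.det(A))

# tr ∧²A is the 2nd elementary symmetric polynomial in the eigenvalues
e2 = (np.trace(A) ** 2 - np.trace(A @ A)) / 2
trace_ok = np.isclose(np.trace(compound(A, 2)), e2)

print(mult_ok, det_ok, trace_ok)
```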
Exercise I.5.7 Prove that for any vectors \( {u}_{1},\ldots ,{u}_{k},{v}_{1},\ldots ,{v}_{k} \) we have \[ {\left| \det \left( \left\langle {u}_{i},{v}_{j}\right\rangle \right) \right| }^{2} \leq \det \left( \left\langle {{u}_{i},{u}_{j}}\right\rangle \right) \det \left( \left\langle {{v}_{i},{v}_{j}}\right\rangle \right) , \] \[ {\left| \mathrm{{per}}\left( \langle {u}_{i},{v}_{j}\rangle \right) \right| }^{2}\; \leq \;\mathrm{{per}}\left( {\langle {u}_{i},{u}_{j}\rangle }\right) \mathrm{{per}}\left( {\langle {v}_{i},{v}_{j}\rangle }\right) . \] Exercise I.5.8 Prove that for any two matrices \( A, B \) we have \[ {\left| \operatorname{per}\left( AB\right) \right| }^{2} \leq \operatorname{per}\left( {A{A}^{ * }}\right) \operatorname{per}\left( {{B}^{ * }B}\right) . \] (The corresponding relation for determinants is an easy equality.) Exercise I.5.9 (Schur’s Theorem) If \( A \) is positive, then \[ \operatorname{per}A \geq \det A\text{.} \] [Hint: Using Exercise I.2.2 write \( A = {T}^{ * }T \) for an upper triangular \( T \) . Then use the preceding exercise cleverly.] We have observed earlier that for any vectors \( {x}_{1},\ldots ,{x}_{k} \) in \( \mathcal{H} \) we have \[ \det \left( \left\langle {{x}_{i},{x}_{j}}\right\rangle \right) = {\begin{Vmatrix}{x}_{1} \land \cdots \land {x}_{k}\end{Vmatrix}}^{2}. \] When \( \mathcal{H} = {\mathbb{R}}^{n} \), this determinant is also the square of the \( k \) -dimensional volume of the parallelepiped having \( {x}_{1},\ldots ,{x}_{k} \) as its sides. To see this, note that neither the determinant nor the volume in question is altered if we add to any of these vectors a linear combination of the others. Performing such operations successively, we can reach an orthogonal set of vectors, some of which might be zero. In this case it is obvious that the determinant is equal to the square of the volume; hence that was true initially too. 
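The identity \( \det \left( \left\langle {{x}_{i},{x}_{j}}\right\rangle \right) = {\begin{Vmatrix}{x}_{1} \land \cdots \land {x}_{k}\end{Vmatrix}}^{2} \) and its interpretation as a squared volume can also be verified numerically. In \( {\mathbb{R}}^{3} \) with \( k = 2 \) the relevant volume is the area of the parallelogram, which equals \( \begin{Vmatrix}{{x}_{1} \times {x}_{2}}\end{Vmatrix} \); a NumPy sketch of mine:

```python
import numpy as np

def wedge2(x, y):
    # x ∧ y = (2!)^{-1/2} (x⊗y − y⊗x)
    return (np.kron(x, y) - np.kron(y, x)) / np.sqrt(2.0)

rng = np.random.default_rng(4)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
X = np.column_stack([x1, x2])
gram = X.T @ X  # the matrix (<x_i, x_j>)

# det of the matrix of inner products = squared norm of the wedge
gram_ok = np.isclose(np.linalg.det(gram), np.linalg.norm(wedge2(x1, x2)) ** 2)

# in R³, the parallelogram area is also the norm of the cross product
area_ok = np.isclose(np.sqrt(np.linalg.det(gram)),
                     np.linalg.norm(np.cross(x1, x2)))
print(gram_ok, area_ok)
```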
Given any \( k \) -tuple \( X = \left( {{x}_{1},\ldots ,{x}_{k}}\right) \), the matrix \( \left( \left\langle {{x}_{i},{x}_{j}}\right\rangle \right) = {X}^{ * }X \) is called the Gram matrix of the vectors \( {x}_{j} \) ; its determinant is called their Gram determinant. Exercise I.5.10 Every \( k \times k \) positive matrix \( A = \left( {a}_{ij}\right) \) can be realised as a Gram matrix, i.e., vectors \( {x}_{j},1 \leq j \leq k \), can be found so that \( {a}_{ij} = \left\langle {{x}_{i},{x}_{j}}\right\rangle \) for all \( i, j \) . ## I. 6 Problems Problem I.6.1. Given a basis \( U = \left( {{u}_{1},\ldots ,{u}_{n}}\right) \), not necessarily orthonormal, in \( \mathcal{H} \), how would you compute the biorthogonal basis \( \left( {{v}_{1},\ldots ,{v}_{n}}\right) \) ? Find a formula that expresses \( \left\langle {{v}_{j}, x}\right\rangle \) for each \( x \in \mathcal{H} \) and \( j = 1,2,\ldots, n \) in terms of Gram matrices. Problem I.6.2. A proof of the Toeplitz-Hausdorff Theorem is outlined below. Fill in the details. Note that \( W\left( A\right) = \{ \langle x,{Ax}\rangle : \parallel x\parallel = 1\} = \left\{ {\operatorname{tr}{Ax}{x}^{ * } : {x}^{ * }x = 1}\right\} \) . It is enough to consider the special case \( \dim \mathcal{H} = 2 \) . In higher dimensions, this special case can be used to show that if \( x, y \) are any two vectors, then any point on the line segment joining \( \langle x,{Ax}\rangle \) and \( \langle y,{Ay}\rangle \) can be represented as \( \langle z,{Az}\rangle \), where \( z \) is a vector in the linear span of \( x \) and \( y \) . Now, on the space of \( 2 \times 2 \) Hermitian matrices consider the linear map \( \Phi \left( T\right) = \operatorname{tr}{AT} \) . This is a real linear map from a space of 4 real dimensions (the \( 2 \times 2 \) Hermitian matrices) to a space of 2 real dimensions (the complex plane). 
We want to prove that \( \Phi \) maps the set of 1-dimensional orthoprojectors \( x{x}^{ * } \) onto a convex set. The set of these projectors in matrix form is \[ \left( \begin{matrix} \cos t \\ {e}^{-{iw}}\sin t \end{matrix}\right) \left( \begin{matrix} \cos t & {e}^{iw}\sin t \end{matrix}\right) = \frac{1}{2}I + \frac{1}{2}\left( \begin{matrix} \cos {2t} & {e}^{iw}\sin {2t} \\ {e}^{-{iw}}\sin {2t} & - \cos {2t} \end{matrix}\right) . \] This is a 2-sphere centred at \( \left( \begin{matrix} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{matrix}\right) \) and having radius \( 1/\sqrt{2} \) in the Frobenius norm. The image of a 2-sphere under a linear map with range in \( {\mathbb{R}}^{2} \) must be either an ellipse with interior, or a line segment, or a point; in any case, a convex set. Problem I.6.3. By the remarks in Section 5, vectors \( {x}_{1},\ldots ,{x}_{k} \) are linearly dependent if and only if \( {x}_{1} \land \cdots \land {x}_{k} = 0 \) . This relationship between linear dependence and the antisymmetric tensor product goes further. Two sets \( \left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) and \( \left\{ {{y}_{1},\ldots ,{y}_{k}}\right\} \) of linearly independent vectors have the same linear span if and only if \( {x}_{1} \land \cdots \land {x}_{k} = c{y}_{1} \land \cdots \land {y}_{k} \) for some constant \( c \) . Thus, there is a one-to-one correspondence between \( k \) -dimensional subspaces of a vector space \( W \) and 1-dimensional subspaces of \( { \land }^{k}W \) generated by elementary tensors \( {x}_{1} \land \cdots \land {x}_{k} \) . Problem I.6.4. How large must \( \dim W \) be in order that there exist some element of \( { \land }^{2}W \) which is not elementary? Problem I.6.5. Every vector \( w \) of \( W \) induces a linear operator \( {T}_{w} \) from \( { \land }^{k}W \) to \( { \land }^{k + 1}W \) as follows. 
\( {T}_{w} \) is defined on elementary tensors as \( {T}_{w}\left( {{v}_{1} \land \cdots \land {v}_{k}}\right) = {v}_{1} \land \cdots \land {v}_{k} \land w \), and then extended linearly to all of \( { \land }^{k}W \) . It is, then, natural to write \( {T}_{w}\left( x\right) = x \land w \) for any \( x \in { \land }^{k}W \) . Show that a nonzero vector \( x \) in \( { \land }^{k}W \) is elementary if and only if the space \( \{ w \in W : x \land w = 0\} \) is \( k \) -dimensional. (When \( W \) is a Hilbert space, the operators \( {T}_{w} \) are called creation operators and their adjoints are called annihilation operators in the physics literature.) Problem I.6.6. (The \( n \) -dimensional Pythagorean Theorem) Let \( {x}_{1},\ldots ,{x}_{n} \) be orthogonal vectors in \( {\mathbb{R}}^{n} \) . Consider the \( n \) -dimensional simplex \( S \) with vertices \( 0,{x}_{1},\ldots ,{x}_{n} \) . Think of the \( \left( {n - 1}\right) \) -dimensional simplex with vertices \( {x}_{1},\ldots ,{x}_{n} \) as the "hypotenuse" of \( S \) and the remaining \( \left( {n - 1}\right) \) -dimensional faces of \( S \) as its "legs". By the remarks in Section 5, the \( k \) -dimensional volume of the simplex formed by any \( k \) points \( {y}_{1},\ldots ,{y}_{k} \) together with the origin is \( {\left( k!\right) }^{-1}\begin{Vmatrix}{{y}_{1} \land \cdots \land {y}_{k}}\end{Vmatrix} \) . The volume of a simplex not having 0 as a vertex can be found by translating it. 
Use this to prove that the square of the volume of the hypotenuse of \( S \) is the sum of the squares of the volumes of the \( n \) legs. Problem I.6.7. (i) Let \( {Q}_{ \land } \) be the inclusion map from \( { \land }^{k}\mathcal{H} \) into \( { \otimes }^{k}\mathcal{H} \) (so that \( {Q}_{ \land }^{ * } \) equals the projection \( {P}_{ \land } \) defined earlier) and let \( {Q}_{ \vee } \) be the inclusion map from \( { \vee }^{k}\mathcal{H} \) into \( { \otimes }^{k}\mathcal{H} \) . Then, for any \( A \in \mathcal{L}\left( \mathcal{H}\right) \) \[ { \land }^{k}A = {P}_{ \land }\left( {{ \otimes }^{k}A}\right) {Q}_{ \land } \] \[ { \vee }^{k}A = {P}_{ \vee }\left( {{ \otimes }^{k}A}\right) {Q}_{ \vee } \] (ii) \( \begin{Vmatrix}{{ \land }^{k}A}\end{Vmatrix} \leq \parallel A{\parallel }^{k},\;\begin{Vmatrix}{{ \vee }^{k}A}\end{Vmatrix} \leq \parallel A{\parallel }^{k} \) . (iii) \( \left| {\det A}\right| \leq \parallel A{\parallel }^{n},\;\left| {\operatorname{per}A}\right| \leq \parallel A{\parallel }^{n} \) . Problem I.6.8. For an invertible operator \( A \) obtain a relationship between \( {A}^{-1},{ \land }^{n}A \), and \( { \land }^{n - 1}A \) . Problem I.6.9. (i) Let \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) and \( \left\{ {{f}_{1},\ldots ,{f}_{n}}\right\} \) be two orthonormal bases in \( \mathcal{H} \) . Show that \[ {\left| \left\langle {e}_{2} \land \cdots \land {e}_{n},{f}_{2} \land \cdots \land {f}_{n}\right\rangle \right| }^{2} = {\left| \left\langle {e}_{1},{f}_{1}\right\rangle \right| }^{2}. \] (ii) Let \( P \) and \( Q \) be orthogonal projections in \( \mathcal{H} \), each of rank \( n - 1 \) . Let \( x, y \) be unit vectors such that \( {Px} = {Qy} = 0 \) . Show that \[ { \land }^{n - 1}\left( {PQP}\right) = {\left| \langle x, y\rangle \right| }^{2}{ \land }^{n - 1}P. \] Problem I.6.10. 
If the characteristic polynomial of \( A \) is written as \[ {t}^{n} + {a}_{1}{t}^{n - 1} + \cdots + {a}_{n}, \] then the coefficient \( {a}_{k} \) is \( {\left( -1\right) }^{k} \) times the sum of all \( k \times k \) principal minors of \( A \) ; that sum is equal to \( \operatorname{tr}{ \land }^{k}A \) . Problem I.6.11. (i) For any \( A, B \in \mathcal{L}\left( \mathcal{H}\right) \) we have \[ { \otimes }^{k}A - { \otimes }^{k}B = \mathop{\sum }\limits_{{j = 1}}^{k}{C}_{j} \] where \[ {C}_{j} = \left( {{ \otimes }^{k - j}A}\right) \otimes \left( {A - B}\right) \otimes \left( {{ \otimes }^{j - 1}B}\right) . \] Hence, \[ \begin{Vmatrix}{{ \otimes }^{k}A - { \otimes }^{k}B}\end{Vmatrix} \leq k{M}^{k - 1}\parallel A - B\parallel \] where \( M = \max \left( {\parallel A\parallel ,\parallel B\parallel }\right) \) . (ii) The norms of \( { \land }^{k}A - { \land }^{k}B \) and \( { \vee }^{k}A - { \vee }^{k}B \) are therefore also bounded by \( k{M}^{k - 1}\parallel A - B\parallel \) . (iii) For \( n \times n \) matrices \( A, B \) , \[ \left| {\det A - \det B}\right| \leq n{M}^{n - 1}\parallel A - B\parallel \] \[ \left| {\operatorname{per}A - \operatorname{per}B}\right| \leq n{M}^{n - 1}\parallel A - B\parallel . \] (iv) The example \( A = {\alpha I}, B = \left( {\alpha + \varepsilon }\right) I \) for small \( \varepsilon \) shows that these inequalities are sometimes sharp. When \( \parallel A\parallel \) and \( \parallel B\parallel \) are far apart, find a simple improvement on them. (v) If \( A, B \) are \( n \times n \) matrices with characteristic polynomials \[ {t}^{n} + {a}_{1}{t}^{n - 1} + \cdots + {a}_{n}, \] \[ {t}^{n} + {b}_{1}{t}^{n - 1} + \cdots + {b}_{n}, \] respectively, then \[ \left| {{a}_{k} - {b}_{k}}\right| \leq k\left( \begin{array}{l} n \\ k \end{array}\right) {M}^{k - 1}\parallel A - B\parallel \] where \( M = \max \left( {\parallel A\parallel ,\parallel B\parallel }\right) \) . Problem I.6.12. Let \( A, B \) be positive operators with \( A \geq B \) (i.e., \( A - B \) is positive). 
Show that \[ { \otimes }^{k}A\; \geq \;{ \otimes }^{k}B, \] \[ { \land }^{k}A \geq { \land }^{k}B, \] \[ { \vee }^{k}A\; \geq \;{ \vee }^{k}B, \] \[ \det A \geq \det B \] \[ \operatorname{per}A \geq \operatorname{per}B\text{.} \] Problem I.6.13. The Schur product or the Hadamard product of two matrices \( A \) and \( B \) is defined to be the matrix \( A \circ B \) whose \( \left( {i, j}\right) \) -entry is \( {a}_{ij}{b}_{ij} \) . Show that this is a principal submatrix of \( A \otimes B \), and derive from this fact two significant properties: (i) \( \parallel A \circ B\parallel \leq \parallel A\parallel \parallel B\parallel \) for all \( A, B \) . (ii) If \( A, B \) are positive, then so is \( A \circ B \) . (This is called Schur’s Theorem.) Problem I.6.14. (i) Let \( A = \left( {a}_{ij}\right) \) be an \( n \times n \) positive matrix. Let \[ {r}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij},\;1 \leq i \leq n \] \[ s = \mathop{\sum }\limits_{{i, j}}{a}_{ij} \] Show that \[ {s}^{n}\operatorname{per}A \geq n!\mathop{\prod }\limits_{{i = 1}}^{n}{\left| {r}_{i}\right| }^{2} \] [Hint: Represent \( A \) as the Gram matrix of some vectors \( {x}_{1},\ldots ,{x}_{n} \) as in Exercise I.5.10. Let \( u = {s}^{-1/2}\left( {{x}_{1} + \cdots + {x}_{n}}\right) \) . Consider the vectors \( u \vee u \vee \cdots \vee u \) and \( {x}_{1} \vee \cdots \vee {x}_{n} \), and use the Cauchy-Schwarz inequality.] (ii) Show that equality holds in the above inequality if and only if either \( A \) has rank 1 or \( A \) has a row of zeroes. (iii) If in addition all \( {a}_{ij} \) are nonnegative and all \( {r}_{i} = 1 \) (so that the matrix \( A \) is doubly stochastic as well as positive semidefinite), then \[ \operatorname{per}A \geq \frac{n!}{{n}^{n}} \] Here equality holds if and only if \( {a}_{ij} = \frac{1}{n} \) for all \( i, j \) . Problem I.6.15. Let \( A \) be Hermitian with eigenvalues \( {\alpha }_{1} \geq {\alpha }_{2} \geq \cdots \geq \) \( {\alpha }_{n} \) . 
In Exercise I.2.7 we noted that \[ {\alpha }_{1} = \max \{ \langle x,{Ax}\rangle : \parallel x\parallel = 1\} , \] \[ {\alpha }_{n} = \min \{ \langle x,{Ax}\rangle : \parallel x\parallel = 1\} . \] Using these relations and tensor products, we can deduce some other extremal representations: (i) For every \( k = 1,2,\ldots, n \) , \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\alpha }_{j} = \max \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle , \] \[ \mathop{\sum }\limits_{{j = n - k + 1}}^{n}{\alpha }_{j} = \min \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle , \] where the maximum and the minimum are taken over all choices of orthonormal \( k \) -tuples \( \left( {{x}_{1},\ldots ,{x}_{k}}\right) \) in \( \mathcal{H} \) . The first statement is referred to as Ky Fan’s Maximum Principle. It will reappear in Chapter II (with a different proof) and subsequently. (ii) If \( A \) is positive, then for every \( k = 1,2,\ldots, n \) , \[ \mathop{\prod }\limits_{{j = n - k + 1}}^{n}{\alpha }_{j} = \min \mathop{\prod }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle , \] where the minimum is taken over all choices of orthonormal \( k \) -tuples \( \left( {{x}_{1},\ldots ,{x}_{k}}\right) \) in \( \mathcal{H} \) . [Hint: You may need to use the Hadamard Determinant Theorem, which says that the determinant of a positive matrix is bounded above by the product of its diagonal entries. This is also proved in Chapter II.] (iii) If \( A \) is positive, then for every \( \mathcal{I} \in {Q}_{k, n} \) , \[ \mathop{\prod }\limits_{{j = n - k + 1}}^{n}{\alpha }_{j} \leq \det A\left\lbrack {\mathcal{I} \mid \mathcal{I}}\right\rbrack \leq \mathop{\prod }\limits_{{j = 1}}^{k}{\alpha }_{j}. \] Problem I.6.16. Let \( A \) be any \( n \times n \) matrix with eigenvalues \( {\alpha }_{1},\ldots ,{\alpha }_{n} \) . 
Show that \[ \left| {{\alpha }_{j} - \frac{\operatorname{tr}A}{n}}\right| \leq {\left\lbrack \frac{n - 1}{n}\left( \parallel A{\parallel }_{2}^{2} - \frac{{\left| \operatorname{tr}A\right| }^{2}}{n}\right) \right\rbrack }^{1/2} \] for all \( j = 1,2,\ldots, n \) . (Results such as this are interesting because they give some information about the location of the eigenvalues of a matrix in terms of more easily computable functions like the Frobenius norm \( \parallel A{\parallel }_{2} \) and the trace. We will see several such statements later.) [Hint: First prove that if \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) is a vector with \( {x}_{1} + \cdots + {x}_{n} = 0 \), then \[ \max \left| {x}_{j}\right| \leq {\left( \frac{n - 1}{n}\right) }^{1/2}\parallel x\parallel . \] ] Problem I.6.17. (i) Let \( {z}_{1},{z}_{2},{z}_{3} \) be three points on the unit circle. Then, the numerical range of an operator \( A \) is contained in the triangle with vertices \( {z}_{1},{z}_{2},{z}_{3} \) if and only if \( A \) can be expressed as \( A = {z}_{1}{A}_{1} + {z}_{2}{A}_{2} + {z}_{3}{A}_{3} \), where \( {A}_{1},{A}_{2},{A}_{3} \) are positive operators with \( {A}_{1} + {A}_{2} + {A}_{3} = I \) .
[Hint: It is easy to see that if \( A \) is a sum of this form, then \( W\left( A\right) \) is contained in the given triangle. The converse needs some work to prove. Let \( z \) be any point in the given triangle. Then, one can find \( {\alpha }_{1},{\alpha }_{2},{\alpha }_{3} \) such that \( {\alpha }_{j} \geq 0,{\alpha }_{1} + {\alpha }_{2} + {\alpha }_{3} = 1 \) and \( z = {\alpha }_{1}{z}_{1} + {\alpha }_{2}{z}_{2} + {\alpha }_{3}{z}_{3} \) . These are the "barycentric coordinates" of \( z \) and can be obtained as follows. Let \( \gamma = \operatorname{Im}\left( {{\bar{z}}_{1}{z}_{2} + {\bar{z}}_{2}{z}_{3} + {\bar{z}}_{3}{z}_{1}}\right) \) . Then, for \( j = 1,2,3 \) , \[ {\alpha }_{j} = \operatorname{Im}\frac{\left( {z - {z}_{j + 1}}\right) \left( {{\bar{z}}_{j + 2} - {\bar{z}}_{j + 1}}\right) }{\gamma }, \] where the subscript indices are counted modulo 3 . Put \[ {A}_{j} = \operatorname{Im}\frac{\left( {A - {z}_{j + 1}I}\right) \left( {{\bar{z}}_{j + 2} - {\bar{z}}_{j + 1}}\right) }{\gamma }. \] Then, \( {A}_{j} \) have the required properties.] (ii) Let \( W\left( A\right) \) be contained in a triangle with vertices \( {z}_{1},{z}_{2},{z}_{3} \) lying on the unit circle. Then, choosing \( {A}_{1},{A}_{2},{A}_{3} \) as above, write \[ \left( \begin{matrix} I & {A}^{ * } \\ A & I \end{matrix}\right) = \mathop{\sum }\limits_{{j = 1}}^{3}\left( \begin{matrix} {A}_{j} & {\bar{z}}_{j}{A}_{j} \\ {z}_{j}{A}_{j} & {A}_{j} \end{matrix}\right) = \mathop{\sum }\limits_{{j = 1}}^{3}{A}_{j} \otimes \left( \begin{matrix} 1 & {\bar{z}}_{j} \\ {z}_{j} & 1 \end{matrix}\right) . \] This, being a sum of three positive matrices, is positive. Hence, by Proposition I.3.5 \( A \) is a contraction. (iii) If \( W\left( A\right) \) is contained in a triangle with vertices \( {z}_{1},{z}_{2},{z}_{3} \), then \( \parallel A\parallel \leq \) \( \max \left| {z}_{j}\right| \) . This is Mirman’s Theorem. Problem I.6.18. 
If an operator \( T \) has the Cartesian decomposition \( T = A + {iB} \) with \( A \) and \( B \) positive, then \[ \parallel T{\parallel }^{2} \leq \parallel A{\parallel }^{2} + \parallel B{\parallel }^{2}. \] Show that if \( A \) or \( B \) is not positive, then this need not be true. [Hint: To prove the above inequality note that \( W\left( T\right) \) is contained in a rectangle in the first quadrant. Find a suitable triangle that contains it and use Mirman's Theorem.] ## I. 7 Notes and References Standard references on linear algebra and matrix theory include P.R. Halmos, Finite-Dimensional Vector Spaces, Van Nostrand, 1958; F.R. Gantmacher, Matrix Theory, 2 volumes, Chelsea, 1959; and K. Hoffman and R. Kunze, Linear Algebra, 2nd ed., Prentice Hall, 1971. A more recent work is R.A. Horn and C.R. Johnson's two volumes, Matrix Analysis and Topics in Matrix Analysis, Cambridge University Press, 1985 and 1990. For more on multilinear algebra, see W. Greub, Multilinear Algebra, 2nd ed., Springer-Verlag, 1978, and M. Marcus, Finite-Dimensional Multilinear Algebra, 2 volumes, Marcel Dekker, 1973 and 1975. A brief treatment that covers all the basic results may be found in M. Marcus and H. Minc, A Survey of Matrix Theory and Matrix Inequalities, Prindle, Weber and Schmidt, 1964, reprinted by Dover in 1992. Though not as important as the determinant, the permanent of a matrix is an interesting object with many uses in combinatorics, geometry, and physics. A book devoted entirely to it is H. Minc, Permanents, Addison-Wesley, 1978. Apart from the symmetric and the antisymmetric tensors, there are other symmetry classes of tensors. Their study is related to the glorious subject of representations of finite groups. See J.P. Serre, Linear Representations of Finite Groups, Springer-Verlag, 1977. The result in Exercise I.3.6 is due to P.R. Halmos, and is the beginning of a subject called Dilation Theory. See Chapter 23 of P.R.
Halmos, A Hilbert Space Problem Book, 2nd ed., Springer-Verlag, 1982. The proof of the Toeplitz-Hausdorff Theorem in Problem I.6.2 is taken from C. Davis, The Toeplitz-Hausdorff theorem explained, Canad. Math. Bull., 14(1971) 245-246. For a different proof, see P.R. Halmos, A Hilbert Space Problem Book. For relations between Grassmann spaces and geometry, as indicated in Problem I.6.3, see, for example, I.R. Porteous, Topological Geometry, Cambridge University Press, 1981. The simple proof of the Pythagorean Theorem in Problem I.6.6 is due to S. Ramanan. Among the several papers in quantum physics where ideas very close to those in Problems I.6.3 and I.6.5 are used effectively is one by N.M. Hugenholtz and R.V. Kadison, Automorphisms and quasi-free states of the CAR algebra, Commun. Math. Phys., 43(1975) 181-197. Inequalities like the ones in Problem I.6.11 were first discovered in connection with perturbation theory of eigenvalues. This is summarised in R. Bhatia, Perturbation Bounds for Matrix Eigenvalues, Longman, 1987. The simple identity at the beginning of Problem I.6.11 was first used in this context in R. Bhatia and L. Elsner, On the variation of permanents, Linear and Multilinear Algebra, 27(1990) 105-110. The results and the ideas of Problem I.6.14 are from M. Marcus and M. Newman, Inequalities for the permanent function, Ann. of Math., 75(1962) 47-62. In 1926, B.L. van der Waerden conjectured that the inequality in part (iii) of Problem I.6.14 holds for all doubly stochastic matrices. This conjecture was proved, in two separate papers in 1981, by G.P. Egorychev and D. Falikman. An expository account is given in J.H. van Lint, The van der Waerden conjecture: two proofs in one year, Math. Intelligencer, 4(1982) 72-77. The results of Problem I.6.15 are all due to Ky Fan, On a theorem of Weyl concerning eigenvalues of linear transformations I, II, Proc. Nat. Acad.
Sci., U.S.A., 35(1949) 652-655, 36(1950) 31-35, and A minimum property of the eigenvalues of a Hermitian transformation, Amer. Math. Monthly, 60(1953) 48-50. A special case of the inequality of Problem I.6.16 occurs in P. Tarazaga, Eigenvalue estimates for symmetric matrices, Linear Algebra and Appl., 135(1990) 171-179. Mirman’s Theorem is proved in B.A. Mirman, Numerical range and norm of a linear operator, Trudy Seminara po Funkcional'nomu Analizu, No. 10 (1968), pp. 51-55. The inequality of Problem I.6.18 is also noted there as a corollary. Our proof of Mirman’s Theorem is taken from Y. Nakamura, Numerical range and norm, Math. Japonica, 27(1982) 149-150. II Majorisation and Doubly Stochastic Matrices Comparison of two vector quantities often leads to interesting inequalities that can be expressed succinctly as "majorisation" relations. There is an intimate relation between majorisation and doubly stochastic matrices. These topics are studied in detail here. We place special emphasis on majorisation relations between the eigenvalue \( n \) -tuples of two matrices. This will be a recurrent theme in the book. ## II. 1 Basic Notions Let \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) be an element of \( {\mathbb{R}}^{n} \) . Let \( {x}^{ \downarrow } \) and \( {x}^{ \uparrow } \) be the vectors obtained by rearranging the coordinates of \( x \) in the decreasing and the increasing orders, respectively. Thus, if \( {x}^{ \downarrow } = \left( {{x}_{1}^{ \downarrow },\ldots ,{x}_{n}^{ \downarrow }}\right) \), then \( {x}_{1}^{ \downarrow } \geq \cdots \geq {x}_{n}^{ \downarrow } \) . Similarly, if \( {x}^{ \uparrow } = \left( {{x}_{1}^{ \uparrow },\ldots ,{x}_{n}^{ \uparrow }}\right) \), then \( {x}_{1}^{ \uparrow } \leq \cdots \leq {x}_{n}^{ \uparrow } \) . Note that \[ {x}_{j}^{ \uparrow } = {x}_{n - j + 1}^{ \downarrow },\;1 \leq j \leq n. \] (II.1) Let \( x, y \in {\mathbb{R}}^{n} \) .
We say that \( x \) is majorised by \( y \), in symbols \( x \prec y \), if \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } \leq \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow },\;1 \leq k \leq n \] (II.2) and \[ \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{ \downarrow } = \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j}^{ \downarrow }. \] (II.3) Example: If \( {x}_{i} \geq 0 \) and \( \sum {x}_{i} = 1 \), then \[ \left( {\frac{1}{n},\ldots ,\frac{1}{n}}\right) \prec \left( {{x}_{1},\ldots ,{x}_{n}}\right) \prec \left( {1,0,\ldots ,0}\right) . \] The notion of majorisation occurs naturally in various contexts. For example, in physics, the relation \( x \prec y \) is interpreted to mean that the vector \( x \) describes a "more chaotic" state than \( y \) . (Think of \( {x}_{i} \) as the probability of a system being found in state \( i \) .) Another example occurs in economics. If \( {x}_{1},\ldots ,{x}_{n} \) and \( {y}_{1},\ldots ,{y}_{n} \) denote incomes of individuals \( 1,2,\ldots, n \), then \( x \prec y \) would mean that there is a more equal distribution of incomes in the state \( x \) than in \( y \) . The above example illustrates this.
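The definition (II.2)-(II.3) is easy to test mechanically. The following is a minimal sketch (not from the book; the function name `majorised` is ours) that sorts both vectors in decreasing order and compares partial sums, illustrated on the example above.

```python
# A minimal sketch of the majorisation test x ≺ y of (II.2)-(II.3):
# compare partial sums of decreasingly sorted coordinates, and require
# the total sums to agree.

def majorised(x, y, tol=1e-12):
    """Return True if x is majorised by y (x ≺ y)."""
    if len(x) != len(y):
        return False
    xs = sorted(x, reverse=True)
    ys = sorted(y, reverse=True)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px += a
        py += b
        if px > py + tol:          # a partial-sum inequality (II.2) fails
            return False
    return abs(px - py) <= tol     # total sums must agree, (II.3)

# The example in the text: the uniform vector is majorised by any
# probability vector, which in turn is majorised by (1, 0, ..., 0).
x = [0.5, 0.3, 0.2]
n = len(x)
print(majorised([1.0 / n] * n, x))    # True
print(majorised(x, [1.0, 0.0, 0.0]))  # True
```

The tolerance parameter is only there to absorb floating-point rounding; over exact rationals one would compare exactly.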
From (II.1) we have \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \uparrow } = \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} - \mathop{\sum }\limits_{{j = 1}}^{{n - k}}{x}_{j}^{ \downarrow } \] Hence \( x \prec y \) if and only if \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \uparrow } \geq \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j}^{ \uparrow },\;1 \leq k \leq n \] (II.4) and \[ \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{ \uparrow } = \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j}^{ \uparrow } \] (II.5) Let \( e \) denote the vector \( \left( {1,1,\ldots ,1}\right) \), and for any subset \( I \) of \( \{ 1,2,\ldots, n\} \) let \( {e}_{I} \) denote the vector whose \( j \) th component is 1 if \( j \in I \) and 0 if \( j \notin I \) . Given a vector \( x \in {\mathbb{R}}^{n} \), let \[ \operatorname{tr}x = \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j} = \langle x, e\rangle \] where \( \langle \cdot , \cdot \rangle \) denotes the inner product in \( {\mathbb{R}}^{n} \) . Note that \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } = \mathop{\max }\limits_{{\left| I\right| = k}}\left\langle {x,{e}_{I}}\right\rangle \] where \( \left| I\right| \) stands for the number of elements in the set \( I \) . So, \( x \prec y \) if and only if for each subset \( I \) of \( \{ 1,2,\ldots, n\} \) there exists a subset \( J \) with \( \left| I\right| = \left| J\right| \) such that \[ \left\langle {x,{e}_{I}}\right\rangle \leq \left\langle {y,{e}_{J}}\right\rangle \] (II.6) and \[ \operatorname{tr}x = \operatorname{tr}y. \] (II.7) We say that \( x \) is (weakly) submajorised by \( y \), in symbols \( x{ \prec }_{w}y \), if condition (II.2) is fulfilled. Note that in the absence of (II.3), the conditions (II.2) and (II.4) are not equivalent. We say that \( x \) is (weakly) supermajorised by \( y \), in symbols \( x{ \prec }^{w}y \), if condition (II.4) is fulfilled. Exercise II.1.1 (i) \( x \prec y \Leftrightarrow x{ \prec }_{w}y \) and \( x{ \prec }^{w}y \) . 
(ii) If \( \alpha \) is a positive real number, then \[ x{ \prec }_{w}y \Rightarrow {\alpha x}{ \prec }_{w}{\alpha y} \] \[ x{ \prec }^{w}y \Rightarrow {\alpha x}{ \prec }^{w}{\alpha y}. \] (iii) \( x{ \prec }_{w}y \Leftrightarrow - x{ \prec }^{w} - y \) . (iv) For any real number \( \alpha \) , \[ x \prec y \Rightarrow {\alpha x} \prec {\alpha y}. \] Remark II.1.2 The relations \( \prec ,{ \prec }_{w} \), and \( { \prec }^{w} \) are all reflexive and transitive. None of them, however, is a partial order. For example, if \( x \prec y \) and \( y \prec x \), we can only conclude that \( x = {Py} \), where \( P \) is a permutation matrix. If we say that \( x \sim y \) whenever \( x = {Py} \) for some permutation matrix \( P \), then \( \sim \) defines an equivalence relation on \( {\mathbb{R}}^{n} \) . If we denote by \( {\mathbb{R}}_{\text{sum }}^{n} \) the resulting quotient space, then \( \prec \) defines a partial order on this space. This relation is also a partial order on the set \( \left\{ {x \in {\mathbb{R}}^{n} : {x}_{1} \geq \cdots \geq {x}_{n}}\right\} \) . These statements are true for the relations \( { \prec }_{w} \) and \( { \prec }^{w} \) as well. For \( a, b \in \mathbb{R} \), let \( a \vee b = \max \left( {a, b}\right) \) and \( a \land b = \min \left( {a, b}\right) \) . For \( x, y \in {\mathbb{R}}^{n} \) , define \[ x \vee y = \left( {{x}_{1} \vee {y}_{1},\ldots ,{x}_{n} \vee {y}_{n}}\right) \] \[ x \land y = \left( {{x}_{1} \land {y}_{1},\ldots ,{x}_{n} \land {y}_{n}}\right) \] Let \[ {x}^{ + } = x \vee 0 \] \[ \left| x\right| = x \vee \left( {-x}\right) \text{.} \] In other words, \( {x}^{ + } \) is the vector obtained from \( x \) by replacing the negative coordinates by zeroes, and \( \left| x\right| \) is the vector obtained by taking the absolute values of all coordinates. 
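The weak relations can be sketched the same way; the code below (illustrative only, with our own function names) implements \( { \prec }_{w} \) via (II.2) and \( { \prec }^{w} \) via (II.4), and checks Exercise II.1.1(i) on a small example.

```python
# A hedged sketch of the weak majorisation relations: x ≺_w y keeps only
# the partial-sum inequalities (II.2); x ≺^w y keeps only (II.4).

def submajorised(x, y, tol=1e-12):
    """x ≺_w y: decreasing partial sums of x bounded by those of y."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px, py = px + a, py + b
        if px > py + tol:
            return False
    return True

def supermajorised(x, y, tol=1e-12):
    """x ≺^w y: increasing partial sums of x dominate those of y."""
    xs, ys = sorted(x), sorted(y)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px, py = px + a, py + b
        if px < py - tol:
            return False
    return True

# Exercise II.1.1(i): x ≺ y if and only if x ≺_w y and x ≺^w y.
x, y = [2.0, 2.0], [3.0, 1.0]
print(submajorised(x, y) and supermajorised(x, y))  # True: x ≺ y
# (1, 1) is weakly submajorised by y but not majorised by it,
# since the total sums differ:
print(submajorised([1.0, 1.0], y))  # True
```

This also illustrates the remark in the text that, absent (II.3), conditions (II.2) and (II.4) are not equivalent.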
With these notations we can prove the following characterisation of majorisation that does not involve rearrangements: Theorem II.1.3 Let \( x, y \in {\mathbb{R}}^{n} \) . Then, (i) \( x{ \prec }_{w}y \) if and only if for all \( t \in \mathbb{R} \) \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {x}_{j} - t\right) }^{ + } \leq \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {y}_{j} - t\right) }^{ + } \] (II.8) (ii) \( x{ \prec }^{w}y \) if and only if for all \( t \in \mathbb{R} \) \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left( t - {x}_{j}\right) }^{ + } \leq \mathop{\sum }\limits_{{j = 1}}^{n}{\left( t - {y}_{j}\right) }^{ + } \] (II.9) (iii) \( x \prec y \) if and only if for all \( t \in \mathbb{R} \) \[ \mathop{\sum }\limits_{{j = 1}}^{n}\left| {{x}_{j} - t}\right| \leq \mathop{\sum }\limits_{{j = 1}}^{n}\left| {{y}_{j} - t}\right| \] (II.10) Proof. Let \( x{ \prec }_{w}y \) . If \( t > {x}_{1}^{ \downarrow } \), then \( {\left( {x}_{j} - t\right) }^{ + } = 0 \) for all \( j \), and hence (II.8) holds. Let \( {x}_{k + 1}^{ \downarrow } \leq t \leq {x}_{k}^{ \downarrow } \) for some \( 1 \leq k \leq n \), where, for convenience, \( {x}_{n + 1}^{ \downarrow } = - \infty \) . Then, \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {x}_{j} - t\right) }^{ + } = \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{x}_{j}^{ \downarrow } - t}\right) = \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } - {kt} \] \[ \leq \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow } - {kt} \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\left( {y}_{j}^{ \downarrow } - t\right) }^{ + } \] and, hence, (II.8) holds. To prove the converse, note that if \( t = {y}_{k}^{ \downarrow } \), then \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {y}_{j} - t\right) }^{ + } = \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{y}_{j}^{ \downarrow } - t}\right) = \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow } - {kt}.
\] But \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } - {kt} = \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{x}_{j}^{ \downarrow } - t}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\left( {x}_{j}^{ \downarrow } - t\right) }^{ + } \] \[ \leq \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {x}_{j}^{ \downarrow } - t\right) }^{ + } = \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {x}_{j} - t\right) }^{ + }. \] So, if (II.8) holds, then we must have \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } \leq \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow } \] i.e., \( x{ \prec }_{w}y \) . This proves (i). The statements (ii) and (iii) have similar proofs. Corollary II.1.4 If \( x \prec y \) in \( {\mathbb{R}}^{n} \) and \( u \prec w \) in \( {\mathbb{R}}^{m} \), then \( \left( {x, u}\right) \prec \left( {y, w}\right) \) in \( {\mathbb{R}}^{n + m} \) . In particular, \( x \prec y \) if and only if \( \left( {x, u}\right) \prec \left( {y, u}\right) \) for all \( u \) . An \( n \times n \) matrix \( A = \left( {a}_{ij}\right) \) is called doubly stochastic if \[ {a}_{ij} \geq 0\;\text{ for all }\;i, j \] (II.11) \[ \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ij} = 1\;\text{ for all }\;j \] (II.12) \[ \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij} = 1\;\text{ for all }\;i \] (II.13) Exercise II.1.5 A linear map A on \( {\mathbb{C}}^{n} \) is called positivity-preserving if it carries vectors with nonnegative coordinates to vectors with nonnegative coordinates. It is called trace-preserving if \( \operatorname{tr}{Ax} = \operatorname{tr}x \) for all \( x \) . It is called unital if \( {Ae} = e \) . Show that a matrix \( A \) is doubly stochastic if and only if the linear operator \( A \) is positivity-preserving, trace-preserving and unital. Show that \( A \) is trace-preserving if and only if its adjoint \( {A}^{ * } \) is unital. 
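The three conditions (II.11)-(II.13) of Exercise II.1.5 translate directly into code. Here is a small illustration (our own helper name, not the book's notation): nonnegative entries correspond to positivity preservation, unit column sums to trace preservation, and unit row sums to unitality.

```python
# An illustrative check of the doubly stochastic conditions (II.11)-(II.13):
# entries nonnegative (positivity-preserving), every column sums to 1
# (trace-preserving, tr Ax = tr x), every row sums to 1 (unital, Ae = e).

def is_doubly_stochastic(A, tol=1e-12):
    n = len(A)
    if any(a < -tol for row in A for a in row):
        return False                                   # (II.11) fails
    cols_ok = all(abs(sum(A[i][j] for i in range(n)) - 1.0) <= tol
                  for j in range(n))                   # (II.12)
    rows_ok = all(abs(sum(row) - 1.0) <= tol for row in A)  # (II.13)
    return cols_ok and rows_ok

print(is_doubly_stochastic([[0.5, 0.5], [0.5, 0.5]]))       # True
print(is_doubly_stochastic([[1.0, 0.0], [0.5, 0.5]]))       # False: columns
```

The second matrix is row-stochastic but not column-stochastic, so it preserves \( e \) without preserving the trace.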
Exercise II.1.6 (i) The class of \( n \times n \) doubly stochastic matrices is a convex set and is closed under multiplication and the adjoint operation. It is, however, not a group. (ii) Every permutation matrix is doubly stochastic and is an extreme point of the convex set of all doubly stochastic matrices. (Later we will prove Birkhoff's Theorem, which says that all extreme points of this convex set are permutation matrices.)
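Part (i) of Exercise II.1.6 can be spot-checked numerically. The sketch below (helper names are ours; this is not a proof, only an instance) forms a convex combination of two permutation matrices and verifies that row and column sums stay equal to 1, and that a product of doubly stochastic matrices is again doubly stochastic.

```python
# A quick numerical instance of Exercise II.1.6(i): convex combinations
# and products of doubly stochastic matrices are doubly stochastic.

def permutation_matrix(perm):
    """Matrix with a 1 in position (i, perm[i]) for each row i."""
    n = len(perm)
    return [[1.0 if perm[i] == j else 0.0 for j in range(n)]
            for i in range(n)]

def convex_combination(weights, mats):
    n = len(mats[0])
    return [[sum(w * M[i][j] for w, M in zip(weights, mats))
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def stochastic_sums_ok(A, tol=1e-12):
    n = len(A)
    return (all(abs(sum(row) - 1.0) <= tol for row in A) and
            all(abs(sum(A[i][j] for i in range(n)) - 1.0) <= tol
                for j in range(n)))

P1 = permutation_matrix([0, 1, 2])   # identity
P2 = permutation_matrix([2, 0, 1])   # a 3-cycle
A = convex_combination([0.3, 0.7], [P1, P2])
print(stochastic_sums_ok(A))            # True
print(stochastic_sums_ok(matmul(A, A))) # True: closed under products
```

Birkhoff's Theorem, quoted above, says the permutation matrices are exactly the extreme points, so every doubly stochastic matrix arises this way.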
Exercise II.1.7 Let \( A \) be a doubly stochastic matrix. Show that all eigenvalues of \( A \) have modulus less than or equal to 1, that 1 is an eigenvalue of \( A \), and that \( \parallel A\parallel = 1 \) . Exercise II.1.8 If \( A \) is doubly stochastic, then \[ \left| {Ax}\right| \leq A\left( \left| x\right| \right), \] where, as usual, \( \left| x\right| = \left( {\left| {x}_{1}\right| ,\ldots ,\left| {x}_{n}\right| }\right) \) and we say that \( x \leq y \) if \( {x}_{j} \leq {y}_{j} \) for all \( j \) . There is a close relationship between majorisation and doubly stochastic matrices. This is brought out in the next few theorems. Theorem II.1.9 A matrix \( A \) is doubly stochastic if and only if \( {Ax} \prec x \) for all vectors \( x \) . Proof. Let \( {Ax} \prec x \) for all \( x \) . First choosing \( x \) to be \( e \) and then \( {e}_{i} = \left( {0,0,\ldots ,1,0,\ldots ,0}\right) ,1 \leq i \leq n \), one can easily see that \( A \) is doubly stochastic.
Conversely, let \( A \) be doubly stochastic. Let \( y = {Ax} \) . To prove \( y \prec x \) we may assume, without loss of generality, that the coordinates of both \( x \) and \( y \) are in decreasing order. (See Remark II.1.2 and Exercise II.1.6.) Now note that for any \( k,1 \leq k \leq n \), we have \[ \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j} = \mathop{\sum }\limits_{{j = 1}}^{k}\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{ij}{x}_{i} \] If we put \( {t}_{i} = \mathop{\sum }\limits_{{j = 1}}^{k}{a}_{ij} \), then \( 0 \leq {t}_{i} \leq 1 \) and \( \mathop{\sum }\limits_{{i = 1}}^{n}{t}_{i} = k \) . We have \[ \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j} - \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j} = \mathop{\sum }\limits_{{i = 1}}^{n}{t}_{i}{x}_{i} - \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i} \] \[ = \mathop{\sum }\limits_{{i = 1}}^{n}{t}_{i}{x}_{i} - \mathop{\sum }\limits_{{i = 1}}^{k}{x}_{i} + \left( {k - \mathop{\sum }\limits_{{i = 1}}^{n}{t}_{i}}\right) {x}_{k} \] \[ = \mathop{\sum }\limits_{{i = 1}}^{k}\left( {{t}_{i} - 1}\right) \left( {{x}_{i} - {x}_{k}}\right) + \mathop{\sum }\limits_{{i = k + 1}}^{n}{t}_{i}\left( {{x}_{i} - {x}_{k}}\right) \] \[ \leq 0\text{.} \] Further, when \( k = n \) we must have equality here simply because \( A \) is doubly stochastic. Thus, \( y \prec x \) . Note that if \( x, y \in {\mathbb{R}}^{2} \) and \( x \prec y \) then \[ \left( {{x}_{1},{x}_{2}}\right) = \left( {t{y}_{1} + \left( {1 - t}\right) {y}_{2},\left( {1 - t}\right) {y}_{1} + t{y}_{2}}\right) \text{for some}0 \leq t \leq 1\text{.} \] Note also that if \( x, y \in {\mathbb{R}}^{n} \) and \( x \) is obtained by averaging any two coordinates of \( y \) in the above sense while keeping the rest of the coordinates fixed, then \( x \prec y \) . 
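Theorem II.1.9 and the averaging remark above can be checked on a concrete instance. In the sketch below (our own helper names; a numerical sanity check, not a proof), the doubly stochastic matrix averages the first two coordinates, i.e. it is the simplest kind of averaging map, and the result is indeed majorised by the input.

```python
# A numerical instance of Theorem II.1.9: if A is doubly stochastic,
# then Ax ≺ x. Here A averages the first two coordinates.

def apply_matrix(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def majorised(x, y, tol=1e-12):
    """Partial-sum test (II.2) plus equality of total sums (II.3)."""
    xs, ys = sorted(x, reverse=True), sorted(y, reverse=True)
    px = py = 0.0
    for a, b in zip(xs, ys):
        px, py = px + a, py + b
        if px > py + tol:
            return False
    return abs(px - py) <= tol

A = [[0.5, 0.5, 0.0],
     [0.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
x = [4.0, 0.0, 1.0]
y = apply_matrix(A, x)
print(y)                   # [2.0, 2.0, 1.0]
print(majorised(y, x))     # True: averaging flattens the vector
print(majorised(x, y))     # False: x is strictly "more spread out"
```

This \( A \) is a T-transform with \( t = 1/2 \) acting on the first and second coordinates, so the example also previews Theorem II.1.10.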
More precisely, call a linear map \( T \) on \( {\mathbb{R}}^{n} \) a T-transform if there exists \( 0 \leq t \leq 1 \) and indices \( j, k \) such that \[ {Ty} = \left( {{y}_{1},\ldots ,{y}_{j - 1}, t{y}_{j} + \left( {1 - t}\right) {y}_{k},{y}_{j + 1},\ldots ,\left( {1 - t}\right) {y}_{j} + t{y}_{k},{y}_{k + 1},\ldots ,{y}_{n}}\right) . \] Then, \( {Ty} \prec y \) for all \( y \) . Theorem II.1.10 For \( x, y \in {\mathbb{R}}^{n} \), the following statements are equivalent: (i) \( x \prec y \) . (ii) \( x \) is obtained from \( y \) by a finite number of \( T \) -transforms. (iii) \( x \) is in the convex hull of all vectors obtained by permuting the coordinates of \( y \) . (iv) \( x = {Ay} \) for some doubly stochastic matrix \( A \) . Proof. When \( n = 2 \), then (i) \( \Rightarrow \) (ii). We will prove this for a general \( n \) by induction. Assume that we have this implication for dimensions up to \( n - 1 \) . Let \( x, y \in {\mathbb{R}}^{n} \) . Since \( {x}^{ \downarrow } \) and \( {y}^{ \downarrow } \) can be obtained from \( x \) and \( y \) by permutations and each permutation is a product of transpositions - which are surely T-transforms, we can assume without loss of generality that \( {x}_{1} \geq {x}_{2} \geq \cdots \geq {x}_{n} \) and \( {y}_{1} \geq {y}_{2} \geq \cdots \geq {y}_{n} \) . Now, if \( x \prec y \), then \( {y}_{n} \leq {x}_{1} \leq {y}_{1} \) . Choose \( k \) such that \( {y}_{k} \leq {x}_{1} \leq {y}_{k - 1} \) . Then \( {x}_{1} = t{y}_{1} + \left( {1 - t}\right) {y}_{k} \) for some \( 0 \leq t \leq 1 \) . Let \[ {T}_{1}z = \left( {t{z}_{1} + \left( {1 - t}\right) {z}_{k},{z}_{2},\ldots ,{z}_{k - 1},\left( {1 - t}\right) {z}_{1} + t{z}_{k},{z}_{k + 1},\ldots ,{z}_{n}}\right) \] for all \( z \in {\mathbb{R}}^{n} \) . Then note that the first coordinate of \( {T}_{1}y \) is \( {x}_{1} \) . 
Let \[ {x}^{\prime } = \left( {{x}_{2},\ldots ,{x}_{n}}\right) \] \[ {y}^{\prime } = \left( {{y}_{2},\ldots ,{y}_{k - 1},\left( {1 - t}\right) {y}_{1} + t{y}_{k},{y}_{k + 1},\ldots ,{y}_{n}}\right) . \] We will show that \( {x}^{\prime } \prec {y}^{\prime } \) . Since \( {y}_{1} \geq \cdots \geq {y}_{k - 1} \geq {x}_{1} \geq {x}_{2} \geq \cdots \geq {x}_{n} \) , we have for \( 2 \leq m \leq k - 1 \) \[ \mathop{\sum }\limits_{{j = 2}}^{m}{x}_{j} \leq \mathop{\sum }\limits_{{j = 2}}^{m}{y}_{j} \] For \( k \leq m \leq n \) \[ \mathop{\sum }\limits_{{j = 2}}^{m}{y}_{j}^{\prime } = \mathop{\sum }\limits_{{j = 2}}^{{k - 1}}{y}_{j} + \left\lbrack {\left( {1 - t}\right) {y}_{1} + t{y}_{k}}\right\rbrack + \mathop{\sum }\limits_{{j = k + 1}}^{m}{y}_{j} \] \[ = \mathop{\sum }\limits_{{j = 1}}^{m}{y}_{j} - t{y}_{1} + \left( {t - 1}\right) {y}_{k} \] \[ = \mathop{\sum }\limits_{{j = 1}}^{m}{y}_{j} - {x}_{1} \geq \mathop{\sum }\limits_{{j = 1}}^{m}{x}_{j} - {x}_{1} = \mathop{\sum }\limits_{{j = 2}}^{m}{x}_{j}. \] The last inequality is an equality when \( m = n \) since \( x \prec y \) . Thus \( {x}^{\prime } \prec {y}^{\prime } \) . So by the induction hypothesis there exist a finite number of T-transforms \( {T}_{2},\ldots ,{T}_{r} \) on \( {\mathbb{R}}^{n - 1} \) such that \( {x}^{\prime } = \left( {{T}_{r}\cdots {T}_{2}}\right) {y}^{\prime } \) . We can regard each of them as a T-transform on \( {\mathbb{R}}^{n} \) if we prohibit them from touching the first coordinate of any vector. We then have \[ \left( {{T}_{r}\cdots {T}_{1}}\right) y = \left( {{T}_{r}\cdots {T}_{2}}\right) \left( {{x}_{1},{y}^{\prime }}\right) = \left( {{x}_{1},{x}^{\prime }}\right) = x, \] and that is what we wanted to prove. Now note that a T-transform is a convex combination of the identity map and some permutation. So a product of such maps is a convex combination of permutations. Hence (ii) \( \Rightarrow \) (iii). The implication (iii) \( \Rightarrow \) (iv) is obvious. 
and (iv) \( \Rightarrow \) (i) is a consequence of Theorem II.1.9. A consequence of the above theorem is that the set \( \{ x : x \prec y\} \) is the convex hull of all points obtained from \( y \) by permuting its coordinates. Exercise II.1.11 If \( U = \left( {u}_{ij}\right) \) is a unitary matrix, then the matrix \( \left( {\left| {u}_{ij}\right| }^{2}\right) \) is doubly stochastic. Such a doubly stochastic matrix is called unitary-stochastic; it is called orthostochastic if \( U \) is real orthogonal. Show that if \( x = {Ay} \) for some doubly stochastic matrix \( A \), then there exists an orthostochastic matrix \( B \) such that \( x = {By} \) . (Use induction.) Exercise II.1.12 Let \( A \) be an \( n \times n \) Hermitian matrix. Let \( \operatorname{diag}\left( A\right) \) denote the vector whose coordinates are the diagonal entries of \( A \) and \( \lambda \left( A\right) \) the vector whose coordinates are the eigenvalues of \( A \) specified in any order. Show that \[ \operatorname{diag}\left( A\right) \prec \lambda \left( A\right) . \] (II.14) This is sometimes referred to as Schur's Theorem. Exercise II.1.13 Use the majorisation (II.14) to prove that if \( {\lambda }_{j}^{ \downarrow }\left( A\right) \) denote the eigenvalues of an \( n \times n \) Hermitian matrix arranged in decreasing order, then for all \( k = 1,2,\ldots, n \) \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( A\right) = \max \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle \] (II.15) where the maximum is taken over all orthonormal \( k \) -tuples of vectors \( \left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) in \( {\mathbb{C}}^{n} \) . This is Ky Fan’s maximum principle. (See Problem I.6.15 also.) Show that the majorisation (II.14) can be derived from (II.15). The two statements are, thus, equivalent. Exercise II.1.14 Let \( A, B \) be Hermitian matrices.
Then for all \( k = 1,2,\ldots, n \) \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( {A + B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( A\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( B\right) . \] (II.16) Exercise II.1.15 For any matrix \( A \), let \( \widetilde{A} \) be the Hermitian matrix \[ \widetilde{A} = \left\lbrack \begin{matrix} 0 & A \\ {A}^{ * } & 0 \end{matrix}\right\rbrack . \] (II.17) Then the eigenvalues of \( \widetilde{A} \) are the singular values of \( A \) together with their negatives. Denote the singular values of \( A \) arranged in decreasing order by \( {s}_{1}\left( A\right) ,\ldots ,{s}_{n}\left( A\right) \) . Show that for any two \( n \times n \) matrices \( A, B \) and for any \( k = 1,2,\ldots, n \) \[ \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( {A + B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( B\right) . \] (II.18) When \( k = 1 \), this is just the triangle inequality for the operator norm \( \parallel A\parallel \) . For each \( 1 \leq k \leq n \), define \( \parallel A{\parallel }_{\left( k\right) } = \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) \) . From (II.18) it follows that \( \parallel A{\parallel }_{\left( k\right) } \) defines a norm.
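The triangle inequality (II.18) is easy to observe numerically. The sketch below (illustrative only; limited to real \( 2 \times 2 \) matrices, and the helper names are ours) computes singular values in closed form as the square roots of the eigenvalues of \( {A}^{T}A \), then checks (II.18) for both values of \( k \).

```python
import math

# An illustrative 2x2 check of the Ky Fan norm triangle inequality
# (II.18): ||A + B||_(k) <= ||A||_(k) + ||B||_(k).

def singular_values_2x2(A):
    """Decreasing singular values of a real 2x2 matrix via eig(A^T A)."""
    (p, q), (r, s) = A
    a = p * p + r * r              # (A^T A)_{11}
    c = q * q + s * s              # (A^T A)_{22}
    b = p * q + r * s              # off-diagonal entry of A^T A
    disc = math.sqrt((a - c) ** 2 + 4 * b * b)
    lam1 = (a + c + disc) / 2
    lam2 = max((a + c - disc) / 2, 0.0)   # guard against rounding below 0
    return [math.sqrt(lam1), math.sqrt(lam2)]

def ky_fan(A, k):
    """Ky Fan k-norm: sum of the k largest singular values."""
    return sum(singular_values_2x2(A)[:k])

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

A = [[3.0, 0.0], [0.0, 1.0]]   # singular values 3, 1
B = [[0.0, 2.0], [2.0, 0.0]]   # singular values 2, 2
for k in (1, 2):
    lhs = ky_fan(add(A, B), k)
    rhs = ky_fan(A, k) + ky_fan(B, k)
    print(lhs <= rhs + 1e-9)   # True for k = 1 and k = 2
```

For general matrices one would use an SVD routine instead of the closed 2x2 formula; the point here is only to see (II.18) in action.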
These norms are called the Ky Fan \( k \) -norms. ## II. 2 Birkhoff's Theorem We start with a combinatorial problem known as the Matching Problem. Let \( B = \left\{ {{b}_{1},\ldots ,{b}_{n}}\right\} \) and \( G = \left\{ {{g}_{1},\ldots ,{g}_{n}}\right\} \) be two sets of \( n \) elements each, and let \( R \) be a subset of \( B \times G \) . When does there exist a bijection \( f \) from \( B \) to \( G \) whose graph is contained in \( R \) ? This is called the Matching Problem or the Marriage Problem for the following reason. Think of \( B \) as a set of boys, \( G \) as a set of girls, and \( \left( {{b}_{i},{g}_{j}}\right) \in R \) as saying that the boy \( {b}_{i} \) knows the girl \( {g}_{j} \) . Then the above question can be phrased as: when can one arrange a monogamous marriage in which each boy gets married to a girl he knows? We will call such a matching a compatible matching. For each \( i \) let \( {G}_{i} = \left\{ {{g}_{j} : \left( {{b}_{i},{g}_{j}}\right) \in R}\right\} \) . This represents the set of girls whom the boy \( {b}_{i} \) knows. For each \( k \) -tuple of indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) , let \( {G}_{{i}_{1}\cdots {i}_{k}} = \mathop{\bigcup }\limits_{{r = 1}}^{k}{G}_{{i}_{r}} \) . This represents the set of girls each of whom is known to at least one of the boys \( {b}_{{i}_{1}},\ldots ,{b}_{{i}_{k}} \) . Clearly a necessary condition for a compatible matching to be possible is that \( \left| {G}_{{i}_{1}\ldots {i}_{k}}\right| \geq k \) for all \( k = 1,2,\ldots, n \) . Hall’s Marriage Theorem says that this condition is sufficient as well. Theorem II.2.1 (Hall) A compatible matching between \( B \) and \( G \) can be found if and only if \[ \left| {G}_{{i}_{1}\cdots {i}_{k}}\right| \geq k \] (II.19) for all \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n, k = 1,2,\ldots, n \) . Proof. Only the sufficiency of the condition needs to be proved. This is done by induction on \( n \) . 
Obviously, the Theorem is true when \( n = 1 \) . First assume that we have \[ \left| {G}_{{i}_{1}\cdots {i}_{k}}\right| \geq k + 1 \] for all \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n,1 \leq k < n \) . In other words, if \( 1 \leq k < n \), then every set of \( k \) boys together knows at least \( k + 1 \) girls. Pick up any boy and marry him to one of the girls he knows. This leaves \( n - 1 \) boys and \( n - 1 \) girls; condition (II.19) still holds, and hence the remaining boys and girls can be compatibly matched. If the above assumption is not met, then there exist \( k \) indices \( {i}_{1},\ldots ,{i}_{k} \) , \( k < n \), for which \[ \left| {G}_{{i}_{1}\cdots {i}_{k}}\right| = k \] In other words, there exist \( k \) boys who together know exactly \( k \) girls. By the induction hypothesis these \( k \) boys and girls can be compatibly matched. Now we are left with \( n - k \) unmarried boys and as many unmarried girls. If some set of \( h \) of these boys knew less than \( h \) of these remaining girls, then together with the earlier \( k \) these \( h + k \) boys would have known less than \( h + k \) girls. (The earlier \( k \) boys did not know any of the present \( n - k \) maidens.) So, condition (II.19) is satisfied for the remaining \( n - k \) boys and girls who can now be compatibly married by the induction hypothesis. Exercise II.2.2 (The König-Frobenius Theorem) Let \( A = \left( {a}_{ij}\right) \) be an \( n \times \) \( n \) matrix. If \( \sigma \) is a permutation on \( n \) symbols, the set \( \left\{ {{a}_{{1\sigma }\left( 1\right) },{a}_{{2\sigma }\left( 2\right) },\ldots }\right. \) , \( \left. {a}_{{n\sigma }\left( n\right) }\right\} \) is called a diagonal of \( A \) . Each diagonal contains exactly one element from each row and from each column of \( A \) . Show that the following two statements are equivalent: (i) every diagonal of \( A \) contains a zero element. 
(ii) A has a \( k \times \ell \) submatrix with all entries zero for some \( k,\ell \) such that \( k + \ell > n \) . One can see that the statement of the König-Frobenius Theorem is equivalent to that of Hall's Theorem. Theorem II.2.3 (Birkhoff’s Theorem) The set of \( n \times n \) doubly stochastic matrices is a convex set whose extreme points are the permutation matrices. Proof. We have already made a note of the easy part of this theorem in Exercise II.1.6. The harder part is showing that every extreme point is a permutation matrix. For this we need to show that each doubly stochastic matrix is a convex combination of permutation matrices. This is proved by induction on the number of positive entries of the matrix. Note that if \( A \) is doubly stochastic, then it has at least \( n \) positive entries. If the number of positive entries is exactly \( n \), then \( A \) is a permutation matrix. We first show that if \( A \) is doubly stochastic, then \( A \) has at least one diagonal with no zero entry. Choose any \( k \times \ell \) submatrix of zeroes that \( A \) might have. We can find permutation matrices \( {P}_{1},{P}_{2} \) such that \( {P}_{1}A{P}_{2} \) has the form \[ {P}_{1}A{P}_{2} = \left\lbrack \begin{array}{ll} O & B \\ C & D \end{array}\right\rbrack \] where \( O \) is a \( k \times \ell \) matrix with all entries zero. Since \( {P}_{1}A{P}_{2} \) is again doubly stochastic, the rows of \( B \) and the columns of \( C \) each add up to 1 . Hence \( k + \ell \leq n \) . So at least one diagonal of \( A \) must have all its entries positive, by the König-Frobenius Theorem. Choose any such positive diagonal and let \( a \) be the smallest of the elements of this diagonal. If \( A \) is not a permutation matrix, then \( a < 1 \) . Let \( P \) be the permutation matrix obtained by putting ones on this diagonal and let \[ B = \frac{A - {aP}}{1 - a} \] Then \( B \) is doubly stochastic and has at least one more zero entry than \( A \) has. 
So by the induction hypothesis \( B \) is a convex combination of permutation matrices. Hence so is \( A \), since \( A = \left( {1 - a}\right) B + {aP} \) . Remark. There are \( n \) ! permutation matrices of size \( n \) . Birkhoff’s Theorem tells us that every \( n \times n \) doubly stochastic matrix is a convex combination of these \( n \) ! matrices. This number can be reduced as a consequence of a general theorem of Carathéodory. This says that if \( X \) is a subset of an \( m \) -dimensional linear variety in \( {\mathbb{R}}^{N} \), then any point in the convex hull of \( X \) can be expressed as a convex combination of at most \( m + 1 \) points of \( X \) . Using this theorem one sees that every \( n \times n \) doubly stochastic matrix can be expressed as a convex combination of at most \( {n}^{2} - {2n} + 2 \) permutation matrices. Doubly substochastic matrices defined below are related to weak majorisation in the same way as doubly stochastic matrices are related to majorisation. A matrix \( B = \left( {b}_{ij}\right) \) is called doubly substochastic if \[ {b}_{ij} \geq 0\;\text{ for all }\;i, j \] \[ \mathop{\sum }\limits_{{i = 1}}^{n}{b}_{ij} \leq 1\;\text{ for all }\;j \] \[ \mathop{\sum }\limits_{{j = 1}}^{n}{b}_{ij} \leq 1\;\text{ for all }\;i \] Exercise II.2.4 \( B \) is doubly substochastic if and only if it is positivity-preserving, \( {Be} \leq e \), and \( {B}^{ * }e \leq e \), where \( e = \left( {1,1,\ldots ,1}\right) \) . Exercise II.2.5 Every square submatrix of a doubly stochastic matrix is doubly substochastic. Conversely, every doubly substochastic matrix B can be dilated to a doubly stochastic matrix \( A \) . Moreover, if \( B \) is an \( n \times n \) matrix, then this dilation \( A \) can be chosen to have size at most \( {2n} \times {2n} \) . 
Indeed, if \( R \) and \( C \) are the diagonal matrices whose \( j \) th diagonal entries are the sums of the \( j \) th rows and the \( j \) th columns of \( B \), respectively, then \[ A = \left( \begin{matrix} B & I - R \\ I - C & {B}^{ * } \end{matrix}\right) \] is a doubly stochastic matrix. 
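This construction is easy to verify directly; the sketch below (in NumPy, not part of the text; the particular matrix \( B \) is made-up example data, and \( {B}^{ * } \) is just the transpose since \( B \) is real) checks that the dilation \( A \) is doubly stochastic:

```python
import numpy as np

# A doubly substochastic matrix (hypothetical example data).
B = np.array([[0.5, 0.2],
              [0.1, 0.3]])
n = B.shape[0]

R = np.diag(B.sum(axis=1))  # jth diagonal entry = jth row sum of B
C = np.diag(B.sum(axis=0))  # jth diagonal entry = jth column sum of B

# The 2n x 2n dilation of Exercise II.2.5; B* is B.T for real B.
A = np.block([[B,             np.eye(n) - R],
              [np.eye(n) - C, B.T          ]])

assert (A >= 0).all()                   # nonnegative entries
assert np.allclose(A.sum(axis=1), 1.0)  # every row sums to 1
assert np.allclose(A.sum(axis=0), 1.0)  # every column sums to 1
```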
Exercise II.2.6 The set of all \( n \times n \) doubly substochastic matrices is convex; its extreme points are matrices having at most one entry 1 in each row and each column and all other entries zero. Exercise II.2.7 A matrix B with nonnegative entries is doubly substochastic if and only if there exists a doubly stochastic matrix \( A \) such that \( {b}_{ij} \leq {a}_{ij} \) for all \( i, j = 1,2,\ldots, n \) . Our next theorem connects doubly substochastic matrices to weak majorisation. Theorem II.2.8 (i) Let \( x, y \) be two vectors with nonnegative coordinates. Then \( x{ \prec }_{w}y \) if and only if \( x = {By} \) for some doubly substochastic matrix B. (ii) Let \( x, y \in {\mathbb{R}}^{n} \) . Then \( x{ \prec }_{w}y \) if and only if there exists a vector \( u \) such that \( x \leq u \) and \( u \prec y \) . Proof. If \( x, u \in {\mathbb{R}}^{n} \) and \( x \leq u \), then clearly \( x{ \prec }_{w}u \) . So, if in addition \( u \prec y \), then \( x{ \prec }_{w}y \) . 
Now suppose that \( x, y \) are nonnegative vectors and \( x = {By} \) for some doubly substochastic matrix \( B \) . By Exercise II.2.7 we can find a doubly stochastic matrix \( A \) such that \( {b}_{ij} \leq {a}_{ij} \) for all \( i, j \) . Then \( x = {By} \leq {Ay} \) . Hence, \( x{ \prec }_{w}y \) . Conversely, let \( x, y \) be nonnegative vectors such that \( x{ \prec }_{w}y \) . We want to prove that there exists a doubly substochastic matrix \( B \) for which \( x = {By} \) . If \( x = 0 \), we can choose \( B = 0 \), and if \( x \prec y \), we can even choose \( B \) to be doubly stochastic by Theorem II.1.10. So, assume that neither of these is the case. Let \( r \) be the smallest of the positive coordinates of \( x \), and let \( s = \sum {y}_{j} - \sum {x}_{j} \) . By assumption \( s > 0 \) . Choose a positive integer \( m \) such that \( r \geq s/m \) . Dilate both vectors \( x \) and \( y \) to \( \left( {n + m}\right) \) -dimensional vectors \( {x}^{\prime },{y}^{\prime } \) defined as \[ {x}^{\prime } = \left( {{x}_{1},\ldots ,{x}_{n}, s/m,\ldots, s/m}\right) \] \[ {y}^{\prime } = \left( {{y}_{1},\ldots ,{y}_{n},0,\ldots ,0}\right) . \] Then \( {x}^{\prime } \prec {y}^{\prime } \) . Hence \( {x}^{\prime } = A{y}^{\prime } \) for some doubly stochastic matrix of size \( n + m \) . Let \( B \) be the \( n \times n \) submatrix of \( A \) sitting in the top left corner. Then \( B \) is doubly substochastic and \( x = {By} \) . This proves (i). Finally, let \( x, y \in {\mathbb{R}}^{n} \) and \( x{ \prec }_{w}y \) . Choose a positive number \( t \) so that \( x + {te} \) and \( y + {te} \) are both nonnegative, where \( e = \left( {1,1,\ldots ,1}\right) \) . We still have \( x + {te}{ \prec }_{w}y + {te} \) . So, by (i) there exists a doubly substochastic matrix \( B \) such that \( x + {te} = B\left( {y + {te}}\right) \) . By Exercise II.2.7 we can find a doubly stochastic matrix \( A \) such that \( {b}_{ij} \leq {a}_{ij} \) for all \( i, j \) . 
But then \( x + {te} \leq A\left( {y + {te}}\right) = {Ay} + {te} \) . Hence, if \( u = {Ay} \), then \( x \leq u \) and \( u \prec y \) . Exercise II.2.9 A matrix A is doubly substochastic if and only if for every \( x \geq 0 \) we have \( {Ax} \geq 0 \) and \( {Ax}{ \prec }_{w}x \) . (Compare with Theorem II.1.9.) Exercise II.2.10 Let \( x, y \in {\mathbb{R}}^{n} \) and let \( x \geq 0, y \geq 0 \) . Then \( x{ \prec }_{w}y \) if and only if \( x \) is in the convex hull of the \( {2}^{n}n! \) points obtained from \( y \) by permutations and sign changes of its coordinates (i.e., vectors of the form \( \left( {\pm {y}_{\sigma \left( 1\right) }, \pm {y}_{\sigma \left( 2\right) },\ldots , \pm {y}_{\sigma \left( n\right) }}\right) \), where \( \sigma \) is a permutation). ## II. 3 Convex and Monotone Functions In this section we will study maps from \( {\mathbb{R}}^{n} \) to \( {\mathbb{R}}^{m} \) that preserve various orders. Let \( f : \mathbb{R} \rightarrow \mathbb{R} \) be any function. We will denote the map induced by \( f \) on \( {\mathbb{R}}^{n} \) also by \( f \) ; i.e., \( f\left( x\right) = \left( {f\left( {x}_{1}\right) ,\ldots, f\left( {x}_{n}\right) }\right) \) for \( x \in {\mathbb{R}}^{n} \) . An elementary and useful characterisation of majorisation is the following. Theorem II.3.1 Let \( x, y \in {\mathbb{R}}^{n} \) . Then the following two conditions are equivalent: (i) \( x \prec y \) . (ii) \( \operatorname{tr}\varphi \left( x\right) \leq \operatorname{tr}\varphi \left( y\right) \) for all convex functions \( \varphi \) from \( \mathbb{R} \) to \( \mathbb{R} \) . Proof. Let \( x \prec y \) . Then \( x = {Ay} \) for some doubly stochastic matrix \( A \) . So \( {x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}{y}_{j} \), where \( {a}_{ij} \geq 0 \) and \( \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij} = 1 \) . 
Hence for every convex function \( \varphi ,\varphi \left( {x}_{i}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{ij}\varphi \left( {y}_{j}\right) \) . Hence \( \mathop{\sum }\limits_{{i = 1}}^{n}\varphi \left( {x}_{i}\right) \leq \mathop{\sum }\limits_{{i, j}}{a}_{ij}\varphi \left( {y}_{j}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\varphi \left( {y}_{j}\right) \) . To prove the converse note that for each \( t \) the function \( {\varphi }_{t}\left( x\right) = \left| {x - t}\right| \) is convex. Now apply Theorem II.1.3 (iii). Exercise II.3.2 For \( x, y \in {\mathbb{R}}^{n} \) the following two conditions are equivalent: (i) \( x{ \prec }_{w}y \) . (ii) \( \operatorname{tr}\varphi \left( x\right) \leq \operatorname{tr}\varphi \left( y\right) \) for all monotonically increasing convex functions \( \varphi \) from \( \mathbb{R} \) to \( \mathbb{R} \) . Note that in the two statements above it suffices to consider only continuous functions. A real valued function \( \varphi \) on \( {\mathbb{R}}^{n} \) is called Schur-convex or S-convex if \[ x \prec y\; \Rightarrow \;\varphi \left( x\right) \leq \varphi \left( y\right) \] (II.20) (This terminology might seem somewhat inappropriate because the condition (II.20) expresses preservation of order rather than convexity. However, the above two propositions do show that ordinary convex functions are related to this notion. Also, if \( x \prec y \), then \( x \) is obtained from \( y \) by an averaging procedure. The condition (II.20) says that the value of \( \varphi \) is diminished when such a procedure is applied to its argument. Later on, we will come across other notions of averaging, and corresponding notions of convexity.) We will study more general maps that include Schur-convex maps. Consider maps \( \Phi : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m} \) . The domain of \( \Phi \) will be either all of \( {\mathbb{R}}^{n} \) or some convex set invariant under coordinate permutations of its elements. 
Such a map will be called monotone increasing if \[ x \leq y\; \Rightarrow \;\Phi \left( x\right) \leq \Phi \left( y\right) \] monotone decreasing if \[ - \Phi \text{is monotone increasing,} \] convex if \[ \Phi \left( {{tx} + \left( {1 - t}\right) y}\right) \leq {t\Phi }\left( x\right) + \left( {1 - t}\right) \Phi \left( y\right) ,\;0 \leq t \leq 1, \] concave if \[ - \Phi \text{is convex,} \] isotone if \[ x \prec y\; \Rightarrow \;\Phi \left( x\right) { \prec }_{w}\Phi \left( y\right) \] strongly isotone if \[ x{ \prec }_{w}y\; \Rightarrow \;\Phi \left( x\right) { \prec }_{w}\Phi \left( y\right) \] and strictly isotone if \[ x \prec y\; \Rightarrow \;\Phi \left( x\right) \prec \Phi \left( y\right) \] Note that when \( m = 1 \) isotone maps are precisely the Schur-convex maps. The next few propositions provide examples of such maps. We will denote by \( {S}_{n} \) the group of \( n \times n \) permutation matrices. Theorem II.3.3 Let \( \Phi : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m} \) be a convex map. Suppose that for any \( P \in {S}_{n} \) there exists \( {P}^{\prime } \in {S}_{m} \) such that \[ \Phi \left( {Px}\right) = {P}^{\prime }\Phi \left( x\right) \;\text{ for all }\;x \in {\mathbb{R}}^{n}. \] (II.21) Then \( \Phi \) is isotone. In addition, if \( \Phi \) is monotone increasing, then \( \Phi \) is strongly isotone. Proof. Let \( x \prec y \) in \( {\mathbb{R}}^{n} \) . By Theorem II.1.10 there exist \( {P}_{1},\ldots ,{P}_{N} \) in \( {S}_{n} \) and positive real numbers \( {t}_{1},\ldots ,{t}_{N} \) with \( \sum {t}_{j} = 1 \) such that \[ x = \sum {t}_{j}{P}_{j}y \] So, by the convexity of \( \Phi \) and the property (II.21) \[ \Phi \left( x\right) \leq \sum {t}_{j}\Phi \left( {{P}_{j}y}\right) = \sum {t}_{j}{P}_{j}^{\prime }\Phi \left( y\right) = z,\text{ say. } \] Then \( z \prec \Phi \left( y\right) \) and \( \Phi \left( x\right) \leq z \) . So \( \Phi \left( x\right) { \prec }_{w}\Phi \left( y\right) \) . 
This proves that \( \Phi \) is isotone. 
Suppose \( \Phi \) is also monotone increasing. Let \( u{ \prec }_{w}y \) . Then by Theorem II.2.8 there exists \( x \) such that \( u \leq x \prec y \) . Hence \( \Phi \left( u\right) \leq \Phi \left( x\right) \) and \( \Phi \left( x\right) { \prec }_{w}\Phi \left( y\right) \) . So, \( \Phi \left( u\right) { \prec }_{w}\Phi \left( y\right) \) . This proves \( \Phi \) is strongly isotone. Corollary II.3.4 If \( \varphi : \mathbb{R} \rightarrow \mathbb{R} \) is a convex function, then the induced map \( \varphi : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{n} \) is isotone. If \( \varphi \) is convex and monotone on \( \mathbb{R} \), then the induced map is strongly isotone on \( {\mathbb{R}}^{n} \) . Note that one part of Theorem II.3.1 and Exercise II.3.2 is subsumed by the above corollary. Example II.3.5 From the above results we can conclude that (i) \( x \prec y \) in \( {\mathbb{R}}^{n} \Rightarrow \left| x\right| { \prec }_{w}\left| y\right| \) . (ii) \( x \prec y \) in \( {\mathbb{R}}^{n} \Rightarrow {x}^{2}{ \prec }_{w}{y}^{2} \) . 
(iii) \( x{ \prec }_{w}y \) in \( {\mathbb{R}}_{ + }^{n} \Rightarrow {x}^{p}{ \prec }_{w}{y}^{p} \) for \( p > 1 \) . (iv) \( x{ \prec }_{w}y \) in \( {\mathbb{R}}^{n} \Rightarrow {x}^{ + }{ \prec }_{w}{y}^{ + } \) . (v) If \( \varphi \) is any function such that \( \varphi \left( {e}^{t}\right) \) is convex and monotone increasing in \( t \), then \( \log x{ \prec }_{w}\log y \) in \( {\mathbb{R}}_{ + }^{n} \Rightarrow \varphi \left( x\right) { \prec }_{w}\varphi \left( y\right) \) . (vi) \( \log x{ \prec }_{w}\log y \) in \( {\mathbb{R}}_{ + }^{n} \Rightarrow x{ \prec }_{w}y \) . (vii) For \( x, y \in {\mathbb{R}}_{ + }^{n} \) \[ \mathop{\prod }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } \leq \mathop{\prod }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow },1 \leq k \leq n, \Rightarrow \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } \leq \mathop{\sum }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow },1 \leq k \leq n. \] Here \( {\mathbb{R}}_{ + }^{n} \) stands for the collection of vectors \( x \geq 0 \) (or, at places, \( x > 0 \) ). All functions are understood in the coordinatewise sense. Thus, e.g., \( \left| x\right| = \) \( \left( {\left| {x}_{1}\right| ,\ldots ,\left| {x}_{n}\right| }\right) \) . As an application we have the following very useful theorem. Theorem II.3.6 (Weyl’s Majorant Theorem) Let \( A \) be an \( n \times n \) matrix with singular values \( {s}_{1} \geq \cdots \geq {s}_{n} \) and eigenvalues \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) arranged in such a way that \( \left| {\lambda }_{1}\right| \geq \cdots \geq \left| {\lambda }_{n}\right| \) . 
Then for every function \( \varphi : {\mathbb{R}}_{ + } \rightarrow {\mathbb{R}}_{ + } \) , such that \( \varphi \left( {e}^{t}\right) \) is convex and monotone increasing in \( t \), we have \[ \left( {\varphi \left( \left| {\lambda }_{1}\right| \right) ,\ldots ,\varphi \left( \left| {\lambda }_{n}\right| \right) }\right) { \prec }_{w}\left( {\varphi \left( {s}_{1}\right) ,\ldots ,\varphi \left( {s}_{n}\right) }\right) . \] (II.22) In particular, we have \[ \left( {{\left| {\lambda }_{1}\right| }^{p},\ldots ,{\left| {\lambda }_{n}\right| }^{p}}\right) { \prec }_{w}\left( {{s}_{1}^{p},\ldots ,{s}_{n}^{p}}\right) \] (II.23) for all \( p \geq 0 \) . Proof. The spectral radius of a matrix is bounded by its operator norm. Hence, \[ \left| {\lambda }_{1}\right| \leq \parallel A\parallel = {s}_{1} \] Apply this argument to the antisymmetric tensor powers \( { \land }^{k}A \) . This gives \[ \mathop{\prod }\limits_{{j = 1}}^{k}\left| {\lambda }_{j}\right| \leq \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j},\;1 \leq k \leq n \] (II.24) Now use the assertion of II.3.5 (vii). Note that we have \[ \mathop{\prod }\limits_{{j = 1}}^{n}\left| {\lambda }_{j}\right| = \mathop{\prod }\limits_{{j = 1}}^{n}{s}_{j} \] (II.25) both the expressions being equal to \( {\left( \det {A}^{ * }A\right) }^{1/2} \) . Remark II.3.7 Returning to Theorem II.3.3, we note that when \( m = 1 \) the condition (II.21) just says that \( \Phi \) is permutation invariant; i.e., \[ \Phi \left( {Px}\right) = \Phi \left( x\right) \] (II.26) for all \( x \in {\mathbb{R}}^{n} \) and \( P \in {S}_{n} \) . So, in this case Theorem II.3.3 says that if a function \( \Phi : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) is convex and permutation invariant, then it is isotone (i.e., Schur-convex). 
Also note that every isotone function \( \Phi \) from \( {\mathbb{R}}^{n} \) to \( \mathbb{R} \) has to be permutation invariant because \( {Px} \) and \( x \) majorise each other and hence isotony of \( \Phi \) implies equality of \( \Phi \left( {Px}\right) \) and \( \Phi \left( x\right) \) in this case. However, we will see that not every isotone function from \( {\mathbb{R}}^{n} \) to \( \mathbb{R} \) (i.e. not every Schur-convex function) is convex. Exercise II.3.8 Let \( \Psi : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) be any convex function and let \( \Phi \left( x\right) = \) \( \mathop{\max }\limits_{{P \in {S}_{n}}}\Psi \left( {Px}\right) \) . Prove that \( \Phi \) is isotone. If, in addition, \( \Psi \) is monotone increasing, then \( \Phi \) is strongly isotone. Exercise II.3.9 Let \( \varphi : \mathbb{R} \rightarrow \mathbb{R} \) be convex. For each \( k = 1,2,\ldots, n \), define functions \( {\varphi }^{\left( k\right) } : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) by \[ {\varphi }^{\left( k\right) }\left( x\right) = \mathop{\max }\limits_{\sigma }\mathop{\sum }\limits_{{j = 1}}^{k}\varphi \left( {x}_{\sigma \left( j\right) }\right) \] where \( \sigma \) runs over all permutations on \( n \) symbols. Then \( {\varphi }^{\left( k\right) } \) is isotone. If, in addition, \( \varphi \) is monotone increasing, then \( {\varphi }^{\left( k\right) } \) is strongly isotone. Note that this applies, in particular, to \[ {\varphi }^{\left( n\right) }\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\varphi \left( {x}_{j}\right) = \operatorname{tr}\varphi \left( \mathrm{x}\right) . \] Compare this with Theorem II.3.1. The special choice \( \varphi \left( t\right) = t \) gives \( {\varphi }^{\left( k\right) }\left( x\right) = \) \( \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } \) Example II.3.10 For \( x \in {\mathbb{R}}^{n} \) let \( \bar{x} = \frac{1}{n}\sum {x}_{j} \) . 
Let \[ V\left( x\right) = \frac{1}{n}\mathop{\sum }\limits_{j}{\left( {x}_{j} - \bar{x}\right) }^{2}. \] This is called the variance function. Since the maps \( {x}_{j} \rightarrow {\left( {x}_{j} - \bar{x}\right) }^{2} \) are convex, \( V\left( x\right) \) is isotone (i.e., Schur-convex). Example II.3.11 For \( x \in {\mathbb{R}}_{ + }^{n} \) let \[ H\left( x\right) = - \mathop{\sum }\limits_{j}{x}_{j}\log {x}_{j} \] where by convention we put \( t\log t = 0 \), if \( t = 0 \) . Then \( H \) is called the entropy function. Since the function \( f\left( t\right) = t\log t \) is convex for \( t \geq 0 \), we see that \( - H\left( x\right) \) is isotone. (This is sometimes expressed by saying that the entropy function is anti-isotone or Schur-concave on \( {\mathbb{R}}_{ + }^{n} \) .) In particular, if \( {x}_{j} \geq 0 \) and \( \sum {x}_{j} = 1 \) we have \[ H\left( {1,0,\ldots ,0}\right) \leq H\left( {{x}_{1},\ldots ,{x}_{n}}\right) \leq H\left( {\frac{1}{n},\ldots ,\frac{1}{n}}\right) , \] which is a basic fact about entropy. Example II.3.12 For \( p \geq 1 \) the function \[ \Phi \left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {x}_{j} + \frac{1}{{x}_{j}}\right) }^{p} \] is isotone on \( {\mathbb{R}}_{ + }^{n} \) . 
In particular, if \( {x}_{j} > 0 \) and \( \sum {x}_{j} = 1 \), we have \[ \frac{{\left( {n}^{2} + 1\right) }^{p}}{{n}^{p - 1}} \leq \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {x}_{j} + \frac{1}{{x}_{j}}\right) }^{p} \] Example II.3.13 A function \( \Phi : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}_{ + } \) is called a symmetric gauge function if (i) \( \Phi \) is a norm on the real vector space \( {\mathbb{R}}^{n} \) , (ii) \( \Phi \left( {Px}\right) = \Phi \left( x\right) \) for all \( x \in {\mathbb{R}}^{n}, P \in {S}_{n} \) , (iii) \( \Phi \left( {{\varepsilon }_{1}{x}_{1},\ldots ,{\varepsilon }_{n}{x}_{n}}\right) = \Phi \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) if \( {\varepsilon }_{j} = \pm 1 \) , (iv) \( \Phi \left( {1,0,\ldots ,0}\right) = 1 \) . (The last condition is an inessential normalisation.) Examples of symmetric gauge functions are \[ {\Phi }_{p}\left( x\right) = {\left( \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {x}_{j}\right| }^{p}\right) }^{1/p},\;1 \leq p < \infty , \] \[ {\Phi }_{\infty }\left( x\right) = \mathop{\max }\limits_{{1 \leq j \leq n}}\left| {x}_{j}\right| \] These norms are commonly used in functional analysis. 
If the coordinates of \( x \) are arranged so as to have \( \left| {x}_{1}\right| \geq \cdots \geq \left| {x}_{n}\right| \), then \[ {\Phi }_{\left( k\right) }\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\left| {x}_{j}\right| ,\;1 \leq k \leq n \] is also a symmetric gauge function. This is a consequence of the majorisations (II.29) and (i) in Examples II.3.5. Every symmetric gauge function is convex on \( {\mathbb{R}}^{n} \) and is monotone on \( {\mathbb{R}}_{ + }^{n} \) (Problem II.5.11). Hence by Theorem II.3.3 it is strongly isotone; i.e., \[ x{ \prec }_{w}y\text{ in }{\mathbb{R}}_{ + }^{n} \Rightarrow \Phi \left( x\right) \leq \Phi \left( y\right) . \] For differentiable functions there are necessary and sufficient conditions characterising Schur-convexity: Theorem II.3.14 A differentiable function \( \Phi : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) is isotone if and only if (i) \( \Phi \) is permutation invariant, and (ii) for each \( x \in {\mathbb{R}}^{n} \) and for all \( i, j \) \[ \left( {{x}_{i} - {x}_{j}}\right) \left( {\frac{\partial \Phi }{\partial {x}_{i}}\left( x\right) - \frac{\partial \Phi }{\partial {x}_{j}}\left( x\right) }\right) \geq 0. \] Proof. 
We have already observed that every isotone function is permutation invariant. To see that it also satisfies (ii), let \( i = 1, j = 2 \), without any loss of generality. For \( 0 \leq t \leq 1 \) let \[ x\left( t\right) = \left( {\left( {1 - t}\right) {x}_{1} + t{x}_{2}, t{x}_{1} + \left( {1 - t}\right) {x}_{2},{x}_{3},\ldots ,{x}_{n}}\right) . \] (II.27) Then \( x\left( t\right) \prec x = x\left( 0\right) \) . Hence \( \Phi \left( {x\left( t\right) }\right) \leq \Phi \left( {x\left( 0\right) }\right) \), and therefore \[ 0 \geq {\left\lbrack \frac{d}{dt}\Phi \left( x\left( t\right) \right) \right\rbrack }_{t = 0} = - \left( {{x}_{1} - {x}_{2}}\right) \left( {\frac{\partial \Phi }{\partial {x}_{1}}\left( x\right) - \frac{\partial \Phi }{\partial {x}_{2}}\left( x\right) }\right) . \] This proves (ii). Conversely, suppose \( \Phi \) satisfies (i) and (ii). We want to prove that \( \Phi \left( u\right) \leq \) \( \Phi \left( x\right) \) if \( u \prec x \) . By Theorem II.1.10 and the permutation invariance of \( \Phi \) we may assume that \[ u = \left( {\left( {1 - s}\right) {x}_{1} + s{x}_{2}, s{x}_{1} + \left( {1 - s}\right) {x}_{2},{x}_{3},\ldots ,{x}_{n}}\right) \] for some \( 0 \leq s \leq \frac{1}{2} \) . Let \( x\left( t\right) \) be as in (II.27). Then \[ \Phi \left( u\right) - \Phi \left( x\right) = {\int }_{0}^{s}\frac{d}{dt}\Phi \left( {x\left( t\right) }\right) {dt} \] \[ = - {\int }_{0}^{s}\left( {{x}_{1} - {x}_{2}}\right) \left\lbrack {\frac{\partial \Phi }{\partial {x}_{1}}\left( {x\left( t\right) }\right) - \frac{\partial \Phi }{\partial {x}_{2}}\left( {x\left( t\right) }\right) }\right\rbrack {dt} \] \[ = - {\int }_{0}^{s}\frac{x{\left( t\right) }_{1} - x{\left( t\right) }_{2}}{1 - {2t}}\left\lbrack {\frac{\partial \Phi }{\partial {x}_{1}}\left( {x\left( t\right) }\right) - \frac{\partial \Phi }{\partial {x}_{2}}\left( {x\left( t\right) }\right) }\right\rbrack {dt} \] \[ \leq \;0, \] because of (ii) and the condition \( 0 \leq s \leq \frac{1}{2} \) . 
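Theorem II.3.14 can be illustrated numerically. The sketch below (in NumPy, not part of the text) takes the convex, permutation-invariant function \( \Phi \left( x\right) = \sum {x}_{j}^{2} \) as an example, checks condition (ii) at random points, and verifies the fact used in the proof: \( \Phi \) is non-increasing along the path \( x\left( t\right) \) of (II.27) for \( 0 \leq t \leq \frac{1}{2} \).

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x):
    return np.sum(x ** 2)   # convex and permutation invariant

def grad_phi(x):
    return 2.0 * x          # vector of partial derivatives of phi

# Condition (ii) of Theorem II.3.14, checked at random points.
for _ in range(100):
    x = rng.standard_normal(4)
    g = grad_phi(x)
    for i in range(4):
        for j in range(4):
            assert (x[i] - x[j]) * (g[i] - g[j]) >= 0.0

# phi is non-increasing along the path x(t) of (II.27) on [0, 1/2].
x = rng.standard_normal(4)
def x_of_t(t):
    y = x.copy()
    y[0] = (1 - t) * x[0] + t * x[1]
    y[1] = t * x[0] + (1 - t) * x[1]
    return y

values = [phi(x_of_t(t)) for t in np.linspace(0.0, 0.5, 51)]
assert all(b <= a + 1e-12 for a, b in zip(values, values[1:]))
```

Here condition (ii) holds exactly, since \( \left( {{x}_{i} - {x}_{j}}\right) \left( {2{x}_{i} - 2{x}_{j}}\right) = 2{\left( {x}_{i} - {x}_{j}\right) }^{2} \geq 0 \).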
Example II.3.15 (A Schur-convex function that is not convex) Let \( \Phi : {I}^{2} \rightarrow \mathbb{R} \), where \( I = \left( {0,1}\right) \), be the function \[ \Phi \left( {{x}_{1},{x}_{2}}\right) = \log \left( {\frac{1}{{x}_{1}} - 1}\right) + \log \left( {\frac{1}{{x}_{2}} - 1}\right) . \] Using Theorem II.3.14 one can check that \( \Phi \) is Schur-convex on the set \[ \left\{ {x : x \in {I}^{2},{x}_{1} + {x}_{2} \leq 1}\right\} \] However, the function \( \log \left( {\frac{1}{t} - 1}\right) \) is convex on \( \left( {0,\frac{1}{2}}\right\rbrack \) but not on \( \left\lbrack {\frac{1}{2},1}\right) \) . Example II.3.16 (The elementary symmetric polynomials) For each \( k = \) \( 1,2,\cdots, n \), let \( {S}_{k} : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) be the functions \[ {S}_{k}\left( x\right) = \mathop{\sum }\limits_{{1 \leq {i}_{1} < {i}_{2} < \cdots < {i}_{k} \leq n}}{x}_{{i}_{1}}{x}_{{i}_{2}}\cdots {x}_{{i}_{k}}. \] These are called the elementary symmetric polynomials of the \( n \) variables \( {x}_{1},\ldots ,{x}_{n} \) . These are invariant under permutations. We have the identities \[ \frac{\partial }{\partial {x}_{j}}{S}_{k}\left( {{x}_{1},\ldots ,{x}_{n}}\right) = {S}_{k - 1}\left( {{x}_{1},\ldots ,{\widehat{x}}_{j},\ldots ,{x}_{n}}\right) \] and \[ {S}_{k}\left( {{x}_{1},\ldots ,{\widehat{x}}_{i},\ldots ,{x}_{n}}\right) - {S}_{k}\left( {{x}_{1},\ldots ,{\widehat{x}}_{j},\ldots ,{x}_{n}}\right) \] \[ = \left( {{x}_{j} - {x}_{i}}\right) {S}_{k - 1}\left( {{x}_{1},\ldots ,{\widehat{x}}_{i},\ldots ,{\widehat{x}}_{j},\ldots ,{x}_{n}}\right) , \] where the circumflex indicates that the term below it has been omitted. Using these one finds via Theorem II.3.14 that each \( {S}_{k} \) is Schur-concave: i.e., \( - {S}_{k} \) is isotone, on \( {\mathbb{R}}_{ + }^{n} \) . 
The special case \( k = n \) says that if \( x, y \in {\mathbb{R}}_{ + }^{n} \) and \( x \prec y \), then \( \mathop{\prod }\limits_{{j = 1}}^{n}{x}_{j} \geq \mathop{\prod }\limits_{{j = 1}}^{n}{y}_{j} \) . Theorem II.3.17 (The Hadamard Determinant Theorem) If A is an \( n \times n \) positive matrix, then \[ \text{det}A \leq \mathop{\prod }\limits_{{j = 1}}^{n}{a}_{jj}\text{.} \] Proof. Use Schur's Theorem (Exercise II.1.12) and the above statement about the Schur-concavity of the function \( f\left( x\right) = \mathop{\prod }\limits_{j}{x}_{j} \) on \( {\mathbb{R}}_{ + }^{n} \) . More generally, if \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) are the eigenvalues of a positive matrix \( A \) , we have for \( k = 1,2,\ldots, n \) \[ {S}_{k}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \leq {S}_{k}\left( {{a}_{11},\ldots ,{a}_{nn}}\right) \] (II.28) Exercise II.3.18 If \( A \) is an \( m \times n \) complex matrix, then \[ \det \left( {A{A}^{ * }}\right) \leq \mathop{\prod }\limits_{{i = 1}}^{m}\mathop{\sum }\limits_{{j = 1}}^{n}{\left| {a}_{ij}\right| }^{2} \] (See Exercise I.1.3.) Exercise II.3.19 Show that the ratio \( {S}_{k}\left( x\right) /{S}_{k - 1}\left( x\right) \) is Schur-concave on the set of positive vectors for \( k = 2,\ldots, n \) . Hence, if \( A \) is a positive matrix, then \[ \frac{{S}_{n}\left( {{a}_{11},\ldots ,{a}_{nn}}\right) }{{S}_{n}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) } \geq \frac{{S}_{n - 1}\left( {{a}_{11},\ldots ,{a}_{nn}}\right) }{{S}_{n - 1}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) } \geq \cdots \geq \frac{{S}_{1}\left( {{a}_{11},\ldots ,{a}_{nn}}\right) }{{S}_{1}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) } \] \[ = \frac{\operatorname{tr}A}{\operatorname{tr}A} = 1\text{.} \] Proposition II.3.20 If \( A \) is an \( n \times n \) positive definite matrix, then \[ {\left( \det A\right) }^{1/n} = \min \left\{ {\frac{\operatorname{tr}{AB}}{n} : B\text{ is positive and }\det B = 1}\right\} . 
\] If \( A \) is positive semidefinite, then the same relation holds with \( \min \) replaced \( {by} \) inf. Proof. It suffices to prove the statement about positive definite matrices; the semidefinite case follows by a continuity argument. Using the spectral theorem and the cyclicity of the trace, the general case of the proposition can be reduced to the special case when \( A \) is diagonal. So, let \( A \) be diagonal with diagonal entries \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) . Then, using the arithmetic-geometric mean inequality and Theorem II.3.17 we have \[ \frac{\operatorname{tr}{AB}}{n} = \frac{1}{n}\mathop{\sum }\limits_{j}{\lambda }_{j}{b}_{jj} \geq {\left( \mathop{\prod }\limits_{j}{\lambda }_{j}\right) }^{1/n}{\left( \mathop{\prod }\limits_{j}{b}_{jj}\right) }^{1/n} \geq {\left( \det A\right) }^{1/n}{\left( \det B\right) }^{1/n}, \] for every positive matrix \( B \) . Hence, \( \frac{\operatorname{tr}{AB}}{n} \geq {\left( \det A\right) }^{1/n} \) if \( \det B = 1 \) . When \( B = {\left( \det A\right) }^{1/n}{A}^{-1} \) this becomes an equality. Corollary II.3.21 (The Minkowski Determinant Theorem) If \( A, B \) are \( n \times n \) positive matrices then \[ {\left( \det \left( A + B\right) \right) }^{1/n} \geq {\left( \det A\right) }^{1/n} + {\left( \det B\right) }^{1/n}. \] ## II. 4 Binary Algebraic Operations and Majorisation For \( x \in {\mathbb{R}}^{n} \) we have seen in Section II. 1 that \[ \mathop{\sum }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } = \mathop{\max }\limits_{{\left| I\right| = k}}\left\langle {x,{e}_{I}}\right\rangle \] It follows that if \( x, y \in {\mathbb{R}}^{n} \), then \[ x + y \prec {x}^{ \downarrow } + {y}^{ \downarrow } \] (II.29) In this section we will study majorisation relations of this form for sums, products, and other functions of two vectors. 
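The relation (II.29) is easy to test numerically; a small sketch (the helper `majorised` is our own) with random vectors:

```python
import numpy as np

def majorised(u, v):
    # u ≺ v : dominated partial sums of the decreasing rearrangements,
    # together with equality of the total sums.
    su = np.sort(u)[::-1].cumsum()
    sv = np.sort(v)[::-1].cumsum()
    return bool(np.all(su <= sv + 1e-12) and abs(su[-1] - sv[-1]) < 1e-12)

rng = np.random.default_rng(3)
x, y = rng.standard_normal(6), rng.standard_normal(6)
xd, yd = np.sort(x)[::-1], np.sort(y)[::-1]   # x↓ and y↓

holds = majorised(x + y, xd + yd)             # (II.29): x + y ≺ x↓ + y↓
```

Both sides have the same total sum, so only the partial-sum dominance is at stake.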
A map \( \varphi : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) is called lattice superadditive if \[ \varphi \left( {{s}_{1},{t}_{1}}\right) + \varphi \left( {{s}_{2},{t}_{2}}\right) \leq \varphi \left( {{s}_{1} \vee {s}_{2},{t}_{1} \vee {t}_{2}}\right) + \varphi \left( {{s}_{1} \land {s}_{2},{t}_{1} \land {t}_{2}}\right) \] (II.30) We will call a map \( \varphi \) monotone if it is either monotonically increasing or monotonically decreasing in each of its arguments. In this section we will adopt the following notation. Given \( \varphi : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) , we will denote by \( \Phi \) the map from \( {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \) to \( {\mathbb{R}}^{n} \) defined as \[ \Phi \left( {x, y}\right) = \left( {\varphi \left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\varphi \left( {{x}_{n},{y}_{n}}\right) }\right) . \] (II.31) Example II.4.1 (i) \( \varphi \left( {s, t}\right) = s + t \) is a monotone and lattice superadditive function on \( {\mathbb{R}}^{2}. \) (ii) \( \varphi \left( {s, t}\right) = {st} \) is a monotone and lattice superadditive function on \( {\mathbb{R}}_{ + }^{2} \) .
For (i) above we have \[ \Phi \left( {x, y}\right) = \left( {{x}_{1} + {y}_{1},\ldots ,{x}_{n} + {y}_{n}}\right) \;\text{ for }\;x, y \in {\mathbb{R}}^{n}, \] and for (ii) we have \[ \Phi \left( {x, y}\right) = \left( {{x}_{1}{y}_{1},\ldots ,{x}_{n}{y}_{n}}\right) \;\text{ for }\;x, y \in {\mathbb{R}}^{n}. \] Theorem II.4.2 If \( \varphi \) is monotone and lattice superadditive, then \[ \Phi \left( {{x}^{ \downarrow },{y}^{ \uparrow }}\right) { \prec }_{w}\Phi \left( {x, y}\right) { \prec }_{w}\Phi \left( {{x}^{ \downarrow },{y}^{ \downarrow }}\right) \] (II.32) for all \( x, y \in {\mathbb{R}}^{n} \) . Proof. Note that if we apply a coordinate permutation simultaneously to \( x \) and \( y \), then \( \Phi \left( {x, y}\right) \) undergoes the same coordinate permutation. The two outer terms in (II.32) remain unaffected and so do the majorisations. Hence, to prove (II.32) we may assume that \( x = {x}^{ \downarrow } \) ; i.e., \( {x}_{1} \geq {x}_{2} \geq \cdots \geq {x}_{n} \) . Next note that we can find a finite sequence of vectors \( {u}^{\left( 0\right) },{u}^{\left( 1\right) },\ldots ,{u}^{\left( N\right) } \) such that \[ {y}^{ \downarrow } = {u}^{\left( 0\right) },{y}^{ \uparrow } = {u}^{\left( N\right) }, y = {u}^{\left( j\right) }\text{ for some }1 \leq j \leq N, \] and each \( {u}^{\left( k + 1\right) } \) is obtained from \( {u}^{\left( k\right) } \) by interchanging two components in such a way as to move from the arrangement \( {y}^{ \downarrow } \) to \( {y}^{ \uparrow } \) ; i.e., we pick up two indices \( i, j \) such that \[ i < j\;\text{and}\;{u}_{i}^{\left( k\right) } > {u}_{j}^{\left( k\right) } \] and interchange these two components to obtain the vector \( {u}^{\left( k + 1\right) } \) . So, to prove (II.32) it suffices to prove \[ \Phi \left( {x,{u}^{\left( k + 1\right) }}\right) { \prec }_{w}\Phi \left( {x,{u}^{\left( k\right) }}\right) \] (II.33) for \( k = 0,1,\ldots, N - 1 \) . 
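The interchange sequence \( {u}^{\left( 0\right) },\ldots ,{u}^{\left( N\right) } \) in this proof can be realised by a bubble-sort pass of adjacent transpositions. A numerical sketch (the helper `weakly_majorised` is ours), checking (II.33) at each swap for the product map of Example II.4.1 (ii):

```python
import numpy as np

def weakly_majorised(u, v):
    # u ≺_w v : partial sums of the decreasing rearrangement of u
    # are dominated by those of v.
    su = np.sort(u)[::-1].cumsum()
    sv = np.sort(v)[::-1].cumsum()
    return np.all(su <= sv + 1e-12)

x = np.array([5.0, 4.0, 2.0])                  # already decreasing
u = np.sort(np.array([3.0, 1.0, 2.0]))[::-1]   # start at y↓ = (3, 2, 1)

# Bubble-sort u towards increasing order, checking (II.33) at each swap.
steps_ok = True
for _ in range(len(u)):
    for i in range(len(u) - 1):
        if u[i] > u[i + 1]:
            prev = x * u                       # Φ(x, u^(k)) for φ(s,t) = st
            u[i], u[i + 1] = u[i + 1], u[i]    # interchange two components
            steps_ok &= weakly_majorised(x * u, prev)
```

At the end `u` equals \( {y}^{ \uparrow } \), and every intermediate step satisfied the weak majorisation.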
Since we have already assumed \( {x}_{1} \geq {x}_{2} \geq \cdots \geq {x}_{n} \), to prove (II.33) we need to prove the two-dimensional majorisation \[ \left( {\varphi \left( {{s}_{1},{t}_{2}}\right) ,\varphi \left( {{s}_{2},{t}_{1}}\right) }\right) { \prec }_{w}\left( {\varphi \left( {{s}_{1},{t}_{1}}\right) ,\varphi \left( {{s}_{2},{t}_{2}}\right) }\right) \] (II.34) if \( {s}_{1} \geq {s}_{2} \) and \( {t}_{1} \geq {t}_{2} \) . Now, by the definition of weak majorisation, this is equivalent to the two inequalities \[ \varphi \left( {{s}_{1},{t}_{2}}\right) \vee \varphi \left( {{s}_{2},{t}_{1}}\right) \leq \varphi \left( {{s}_{1},{t}_{1}}\right) \vee \varphi \left( {{s}_{2},{t}_{2}}\right) \] \[ \varphi \left( {{s}_{1},{t}_{2}}\right) + \varphi \left( {{s}_{2},{t}_{1}}\right) \leq \varphi \left( {{s}_{1},{t}_{1}}\right) + \varphi \left( {{s}_{2},{t}_{2}}\right) \] for \( {s}_{1} \geq {s}_{2} \) and \( {t}_{1} \geq {t}_{2} \) . The first of these follows from the monotonicity of \( \varphi \) and the second from the lattice superadditivity. Corollary II.4.3 For \( x, y \in {\mathbb{R}}^{n} \) \[ {x}^{ \downarrow } + {y}^{ \uparrow } \prec x + y \prec {x}^{ \downarrow } + {y}^{ \downarrow } \] (II.35) For \( x, y \in {\mathbb{R}}_{ + }^{n} \) \[ {x}^{ \downarrow } \cdot {y}^{ \uparrow }{ \prec }_{w}x \cdot y{ \prec }_{w}{x}^{ \downarrow } \cdot {y}^{ \downarrow } \] (II.36) where \( x \cdot y = \left( {{x}_{1}{y}_{1},\ldots ,{x}_{n}{y}_{n}}\right) \) . Corollary II.4.4 For \( x, y \in {\mathbb{R}}^{n} \) \[ \left\langle {{x}^{ \downarrow },{y}^{ \uparrow }}\right\rangle \leq \langle x, y\rangle \leq \left\langle {{x}^{ \downarrow },{y}^{ \downarrow }}\right\rangle \] (II.37) Proof. If \( x \geq 0 \) and \( y \geq 0 \), this follows from (II.36). In the general case, choose \( t \) large enough so that \( x + {te} \geq 0 \) and \( y + {te} \geq 0 \) and apply the special result.
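Corollary II.4.4 is the classical rearrangement inequality, and for small \( n \) it can be checked over all permutations; a sketch (variable names ours):

```python
import itertools
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.5])
y = np.array([2.0, 0.5, 5.0, 1.0])

xd = np.sort(x)[::-1]                          # x↓
inner = [np.dot(xd, np.array(p)) for p in itertools.permutations(y)]

# (II.37): the extremes over all rearrangements of y are attained at
# the oppositely ordered and similarly ordered pairings.
lo = np.dot(xd, np.sort(y))                    # <x↓, y↑>
hi = np.dot(xd, np.sort(y)[::-1])              # <x↓, y↓>
```

Every inner product \( \langle x, y\rangle \) obtained by rearranging \( y \) lies between `lo` and `hi`.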
The inequality (II.37) has a "mechanical" interpretation when \( x \geq 0 \) and \( y \geq 0 \) . On a rod fixed at the origin, hang weights \( {y}_{i} \) at the points at distances \( {x}_{i} \) from the origin. The inequality (II.37) then says that the maximum moment is obtained if the heaviest weights are the farthest from the origin. Exercise II.4.5 The function \( \varphi : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) defined as \( \varphi \left( {s, t}\right) = s \land t \) is monotone and lattice superadditive on \( {\mathbb{R}}^{2} \) . Hence, for \( x, y \in {\mathbb{R}}^{n} \) \[ {x}^{ \downarrow } \land {y}^{ \uparrow }{ \prec }_{w}x \land y{ \prec }_{w}{x}^{ \downarrow } \land {y}^{ \downarrow } \] ## II. 5 Problems Problem II.5.1. If a doubly stochastic matrix \( A \) is invertible and \( {A}^{-1} \) is also doubly stochastic, then \( A \) is a permutation matrix. Problem II.5.2. Let \( y \in {\mathbb{R}}_{ + }^{n} \) . The set \( \left\{ {x : x \in {\mathbb{R}}_{ + }^{n}, x{ \prec }_{w}y}\right\} \) is the convex hull of the points \( \left( {{r}_{1}{y}_{\sigma \left( 1\right) },\ldots ,{r}_{n}{y}_{\sigma \left( n\right) }}\right) \), where \( \sigma \) varies over permutations and each \( {r}_{j} \) is either 0 or 1 . Problem II.5.3. Let \( y \in {\mathbb{R}}^{n} \) . The set \( \left\{ {x \in {\mathbb{R}}^{n} : \left| x\right| { \prec }_{w}\left| y\right| }\right\} \) is the convex hull of points of the form \( \left( {{\varepsilon }_{1}{y}_{\sigma \left( 1\right) },\ldots ,{\varepsilon }_{n}{y}_{\sigma \left( n\right) }}\right) \), where \( \sigma \) varies over permutations and each \( {\varepsilon }_{j} = \pm 1 \) . Problem II.5.4.
Let \( A = \left( \begin{array}{ll} {A}_{11} & {A}_{12} \\ {A}_{21} & {A}_{22} \end{array}\right) \) be a \( 2 \times 2 \) block matrix and let \( \mathcal{C}\left( A\right) = \left( \begin{matrix} {A}_{11} & 0 \\ 0 & {A}_{22} \end{matrix}\right) \) If \( U = \left( \begin{matrix} I & 0 \\ 0 & - I \end{matrix}\right) \), then we can write \[ \mathcal{C}\left( A\right) = \frac{1}{2}\left( {A + {UA}{U}^{ * }}\right) \] Let \( \lambda \left( A\right) \) and \( s\left( A\right) \) denote the \( n \) -vectors whose coordinates are the eigenvalues and the singular values of \( A \), respectively. Use (II.18) to show that \[ s\left( {\mathcal{C}\left( A\right) }\right) { \prec }_{w}s\left( A\right) \] If \( A \) is Hermitian, use (II.16) to show that \[ \lambda \left( {\mathcal{C}\left( A\right) }\right) \prec \lambda \left( A\right) \] Problem II.5.5. More generally, let \( {P}_{1},\ldots ,{P}_{r} \) be a family of mutually orthogonal projections in \( {\mathbb{C}}^{n} \) such that \( \oplus {P}_{j} = I \) . Then the operation of taking \( A \) to \( \mathcal{C}\left( A\right) = \sum {P}_{j}A{P}_{j} \) is called a pinching of \( A \) . In an appropriate choice of basis this means that \[ A = \left\lbrack \begin{matrix} {A}_{11} & {A}_{12} & \cdots & {A}_{1r} \\ \cdots & \cdots & \cdots & \cdots \\ \cdots & \cdots & \cdots & \cdots \\ {A}_{r1} & {A}_{r2} & \cdots & {A}_{rr} \end{matrix}\right\rbrack ,\mathcal{C}\left( A\right) = \left\lbrack \begin{array}{llll} {A}_{11} & & & \\ & {A}_{22} & & \\ & & \ddots & \\ & & & {A}_{rr} \end{array}\right\rbrack . \] Each such pinching is a product of \( r - 1 \) pinchings of the \( 2 \times 2 \) type introduced in Problem II.5.4. Show that for every pinching \( \mathcal{C} \) \[ s\left( {\mathcal{C}\left( A\right) }\right) { \prec }_{w}s\left( A\right) \] (II.38) for all matrices \( A \), and \[ \lambda \left( {\mathcal{C}\left( A\right) }\right) \prec \lambda \left( A\right) \] (II.39) for all Hermitian matrices \( A \) . 
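The \( 2 \times 2 \) block pinching \( \mathcal{C}\left( A\right) = \frac{1}{2}\left( {A + {UA}{U}^{ * }}\right) \) and the weak majorisation \( s\left( {\mathcal{C}\left( A\right) }\right) { \prec }_{w}s\left( A\right) \) can be checked numerically; a sketch (names and block sizes ours) on a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2                                # block sizes k and n - k

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U = np.diag([1.0] * k + [-1.0] * (n - k))  # U = diag(I, -I)
C = 0.5 * (A + U @ A @ U.conj().T)         # pinching: off-diagonal blocks vanish

sC = np.linalg.svd(C, compute_uv=False)    # singular values, decreasing order
sA = np.linalg.svd(A, compute_uv=False)
weak_maj = np.all(np.cumsum(sC) <= np.cumsum(sA) + 1e-9)
```

Conjugation by \( U \) flips the sign of the off-diagonal blocks, so averaging kills them; the partial sums of \( s\left( {\mathcal{C}\left( A\right) }\right) \) are then dominated by those of \( s\left( A\right) \), as (II.18) predicts.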
When \( {P}_{1},\ldots ,{P}_{n} \) are the projections onto the coordinate axes, we get as a special case of (II.38) above \[ \left| {\operatorname{tr}A}\right| \leq \mathop{\sum }\limits_{{j = 1}}^{n}{s}_{j}\left( A\right) = \parallel A{\parallel }_{1} \] (II.40) From (II.39) we get as a special case Schur's Theorem \[ \operatorname{diag}\left( A\right) \prec \lambda \left( A\right) \] which we saw before in Exercise II.1.12. Problem II.5.6. Let \( A \) be positive. Then \[ \det A \leq \det \mathcal{C}\left( A\right) \] (II.41) for every pinching \( \mathcal{C} \) . This is called Fischer's inequality and includes the Hadamard Determinant Theorem as a special case. Problem II.5.7. For each \( k = 1,2,\ldots, n \) and for each pinching \( \mathcal{C} \) show that for positive definite \( A \) \[ {S}_{k}\left( {\lambda \left( A\right) }\right) \leq {S}_{k}\left( {\lambda \left( {\mathcal{C}\left( A\right) }\right) }\right) \] (II.42) where \( {S}_{k}\left( {\lambda \left( A\right) }\right) \) denotes the \( k \) th elementary symmetric polynomial of the eigenvalues of \( A \) . This inequality, due to Ostrowski, includes (II.28) as a special case. It also includes (II.41) as a special case. Problem II.5.8. If \( { \land }^{k}A \) denotes the \( k \) th antisymmetric tensor power of \( A \), then the above inequality can be written as \[ \operatorname{tr}{ \land }^{k}A \leq \operatorname{tr}{ \land }^{k}\left( {\mathcal{C}\left( A\right) }\right) \] (II.43) The operator inequality \[ { \land }^{k}A \leq { \land }^{k}\left( {\mathcal{C}\left( A\right) }\right) \] is not always true. This is shown by the following example.
Let \[ A = \left\lbrack \begin{array}{llll} 2 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 2 & 0 \\ 1 & 0 & 0 & 1 \end{array}\right\rbrack ,\;P = \left\lbrack \begin{array}{llll} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right\rbrack \] and let \( \mathcal{C} \) be the pinching induced by the pair of projections \( P \) and \( I - P \) . (The space \( { \land }^{2}{\mathbb{C}}^{4} \) is 6-dimensional.) Problem II.5.9. Let \( \left\{ {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right\} ,\left\{ {{\mu }_{1},\ldots ,{\mu }_{n}}\right\} \) be two \( n \) -tuples of complex numbers. Let \[ d\left( {\lambda ,\mu }\right) = \mathop{\min }\limits_{\sigma }\mathop{\max }\limits_{{1 \leq j \leq n}}\left| {{\lambda }_{j} - {\mu }_{\sigma \left( j\right) }}\right| \] where the minimum is taken over all permutations on \( n \) symbols. This is called the optimal matching distance between the unordered \( n \) -tuples \( \lambda \) and \( \mu \) . It defines a metric on the space \( {\mathbb{C}}_{sym}^{n} \) of such \( n \) -tuples. Show that we also have \[ d\left( {\lambda ,\mu }\right) = \mathop{\max }\limits_{\substack{{I, J \subset \{ 1,2,\ldots, n\} } \\ {\left| I\right| + \left| J\right| = n + 1} }}\mathop{\min }\limits_{\substack{{i \in I} \\ {j \in J} }}\left| {{\lambda }_{i} - {\mu }_{j}}\right| . \] Problem II.5.10. This problem gives a refinement of Hall's Theorem under an additional assumption that is often fulfilled in matching problems. In the notations introduced at the beginning of Section II.2, define \[ {B}_{i} = \left\{ {{b}_{j} : \left( {{b}_{j},{g}_{i}}\right) \in R}\right\} ,\;1 \leq i \leq n. \] This is the set of boys known to the girl \( {g}_{i} \) . Let \[ {B}_{{i}_{1}\cdots {i}_{k}} = \mathop{\bigcup }\limits_{{r = 1}}^{k}{B}_{{i}_{r}},\;1 \leq {i}_{1} < \cdots < {i}_{k} \leq n. 
\] Suppose that for each \( k = 1,2,\ldots ,\left\lbrack \frac{n}{2}\right\rbrack \) and for every choice of indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) \[ \left| {G}_{{i}_{1}\cdots {i}_{k}}\right| \geq k\text{ and }\left| {B}_{{i}_{1}\cdots {i}_{k}}\right| \geq k. \] Show that then \[ \left| {G}_{{i}_{1}\cdots {i}_{k}}\right| \geq k\text{ for all }k = 1,2,\ldots, n,1 \leq {i}_{1} < \cdots < {i}_{k} \leq n. \] Hence a compatible matching between \( B \) and \( G \) exists. Problem II.5.11. (i) Show that every symmetric gauge function is continuous. (ii) Show that if \( \Phi \) is a symmetric gauge function, then \( {\Phi }_{\infty }\left( x\right) \leq \Phi \left( x\right) \leq \) \( {\Phi }_{1}\left( x\right) \) for all \( x \in {\mathbb{R}}^{n} \) . (iii) If \( \Phi \) is a symmetric gauge function and \( 0 \leq {t}_{j} \leq 1 \), then \[ \Phi \left( {{t}_{1}{x}_{1},\ldots ,{t}_{n}{x}_{n}}\right) \leq \Phi \left( {{x}_{1},\ldots ,{x}_{n}}\right) \] (iv) Every symmetric gauge function is monotone on \( {\mathbb{R}}_{ + }^{n} \) . (v) If \( x, y \in {\mathbb{R}}^{n} \) and \( \left| x\right| \leq \left| y\right| \), then \( \Phi \left( x\right) \leq \Phi \left( y\right) \) for every symmetric gauge function \( \Phi \) . (vi) If \( x, y \in {\mathbb{R}}_{ + }^{n} \), then \( x{ \prec }_{w}y \) if and only if \( \Phi \left( x\right) \leq \Phi \left( y\right) \) for every symmetric gauge function \( \Phi \) . Problem II.5.12. Let \( f : {\mathbb{R}}_{ + } \rightarrow {\mathbb{R}}_{ + } \) be a concave function such that \( f\left( 0\right) = 0 \) . (i) Show that \( f \) is subadditive: \( f\left( {a + b}\right) \leq f\left( a\right) + f\left( b\right) \) for all \( a, b \in {\mathbb{R}}_{ + } \) . 
(ii) Let \( \Phi : {\mathbb{R}}_{ + }^{2n} \rightarrow {\mathbb{R}}_{ + } \) be defined as \[ \Phi \left( {x, y}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}f\left( {x}_{j}\right) + \mathop{\sum }\limits_{{j = 1}}^{n}f\left( {y}_{j}\right) ,\;x, y \in {\mathbb{R}}_{ + }^{n}. \] Then \( \Phi \) is Schur-concave. (iii) Note that for \( x, y \in {\mathbb{R}}_{ + }^{n} \) \[ \left( {x, y}\right) \prec \left( {x + y,0}\right) \text{ in }{\mathbb{R}}_{ + }^{2n}. \] (iv) From (ii) and (iii) conclude that the function \[ F\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}f\left( \left| {x}_{j}\right| \right) \] is subadditive on \( {\mathbb{R}}^{n} \) . (v) Special examples lead to the following inequalities for vectors \( x, y \in {\mathbb{R}}^{n} \) : \[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {x}_{j} + {y}_{j}\right| }^{p} \leq \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {x}_{j}\right| }^{p} + \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {y}_{j}\right| }^{p},\;0 < p \leq 1. \] \[ \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\left| {x}_{j} + {y}_{j}\right| }{1 + \left| {{x}_{j} + {y}_{j}}\right| } \leq \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\left| {x}_{j}\right| }{1 + \left| {x}_{j}\right| } + \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\left| {y}_{j}\right| }{1 + \left| {y}_{j}\right| }. \] \[ \mathop{\sum }\limits_{{j = 1}}^{n}\log \left( {1 + \left| {{x}_{j} + {y}_{j}}\right| }\right) \leq \mathop{\sum }\limits_{{j = 1}}^{n}\log \left( {1 + \left| {x}_{j}\right| }\right) + \mathop{\sum }\limits_{{j = 1}}^{n}\log \left( {1 + \left| {y}_{j}\right| }\right) . \] Problem II.5.13. 
Show that a map \( \varphi : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) is lattice superadditive if and only if \[ \varphi \left( {{x}_{1} + {\delta }_{1},{x}_{2} - {\delta }_{2}}\right) + \varphi \left( {{x}_{1} - {\delta }_{1},{x}_{2} + {\delta }_{2}}\right) \] \[ \leq \varphi \left( {{x}_{1} + {\delta }_{1},{x}_{2} + {\delta }_{2}}\right) + \varphi \left( {{x}_{1} - {\delta }_{1},{x}_{2} - {\delta }_{2}}\right) \] for all \( \left( {{x}_{1},{x}_{2}}\right) \) and for all \( {\delta }_{1},{\delta }_{2} \geq 0 \) . If \( \varphi \) is twice differentiable, this is equivalent to \[ 0 \leq \frac{{\partial }^{2}\varphi \left( {{x}_{1},{x}_{2}}\right) }{\partial {x}_{1}\partial {x}_{2}} \] Problem II.5.14. Let \( \varphi : {\mathbb{R}}^{2} \rightarrow \mathbb{R} \) be a monotone increasing lattice super-additive function, and let \( f \) be a monotone increasing and convex function from \( \mathbb{R} \) to \( \mathbb{R} \) . Show that if \( \varphi \) and \( f \) are twice differentiable, then the composition \( f \circ \varphi \) is monotone and lattice superadditive. When \( \varphi \left( {s, t}\right) = s + t \) show that this is also true if \( f \) is monotone decreasing. These statements are also true without any differentiability assumptions. Problem II.5.15. For \( x, y \in {\mathbb{R}}_{ + }^{n} \) \[ - \log \left( {{x}^{ \downarrow } + {y}^{ \uparrow }}\right) { \prec }_{w} - \log \left( {x + y}\right) { \prec }_{w} - \log \left( {{x}^{ \downarrow } + {y}^{ \downarrow }}\right) \] \[ \log \left( {{x}^{ \downarrow } \cdot {y}^{ \uparrow }}\right) { \prec }_{w}\log \left( {x \cdot y}\right) { \prec }_{w}\log \left( {{x}^{ \downarrow } \cdot {y}^{ \downarrow }}\right) . 
\] From the first of these relations it follows that \[ \mathop{\prod }\limits_{{j = 1}}^{n}\left( {{x}_{j}^{ \downarrow } + {y}_{j}^{ \downarrow }}\right) \leq \mathop{\prod }\limits_{{j = 1}}^{n}\left( {{x}_{j} + {y}_{j}}\right) \leq \mathop{\prod }\limits_{{j = 1}}^{n}\left( {{x}_{j}^{ \downarrow } + {y}_{j}^{ \uparrow }}\right) . \] Problem II.5.16. Let \( x, y, u \) be vectors in \( {\mathbb{R}}^{n} \) all having their coordinates in decreasing order. Show that (i) \( \langle x, u\rangle \leq \langle y, u\rangle \) if \( x \prec y \) , (ii) \( \langle x, u\rangle \leq \langle y, u\rangle \) if \( x{ \prec }_{w}y \) and \( u \in {\mathbb{R}}_{ + }^{n} \) . In particular, this means that if \( x, y \in {\mathbb{R}}^{n}, x{ \prec }_{w}y \), and \( u \in {\mathbb{R}}_{ + }^{n} \), then \[ \left( {{x}_{1}^{ \downarrow }{u}_{1}^{ \downarrow },\ldots ,{x}_{n}^{ \downarrow }{u}_{n}^{ \downarrow }}\right) { \prec }_{w}\left( {{y}_{1}^{ \downarrow }{u}_{1}^{ \downarrow },\ldots ,{y}_{n}^{ \downarrow }{u}_{n}^{ \downarrow }}\right) . \] [Use Theorem II.3.14 or the telescopic summation identity \[ \mathop{\sum }\limits_{{j = 1}}^{k}{a}_{j}{b}_{j} = \mathop{\sum }\limits_{{j = 1}}^{k}\left( {{a}_{j} - {a}_{j + 1}}\right) \left( {{b}_{1} + \cdots + {b}_{j}}\right) \] where \( {a}_{j},{b}_{j},1 \leq j \leq k \), are any numbers and \( {a}_{k + 1} = 0 \) .] ## II. 6 Notes and References Many of the results of this chapter can be found in the classic Inequalities by G.H. Hardy, J.E. Littlewood, and G. Polya, Cambridge University Press, 1934, which gave the first systematic treatment of this theme. The more recent treatise Inequalities: Theory of Majorization and Its Applications by A.W. Marshall and I. Olkin, Academic Press, 1979, is a much more detailed and exhaustive text devoted entirely to the study of majorisation. It is an invaluable resource on this topic.
For the reader who wants a quicker introduction to the essentials of majorisation and its applications in linear algebra, the survey article Majorization, doubly stochastic matrices and comparison of eigenvalues by T. Ando, Linear Algebra and Its Applications, 118(1989) 163-248, is undoubtedly the ideal course. Our presentation is strongly influenced by this article from which we have freely borrowed. The distance \( d\left( {\lambda ,\mu }\right) \) introduced in Problem II.5.9 is commonly employed in the study of variation of roots of polynomials and eigenvalues of matrices since these are known with no preferred ordering. See Chapter 6. The result of Problem II.5.10 is due to L. Elsner, C. Johnson, J. Ross, and J. Schönheim, On a generalised matching problem arising in estimating the eigenvalue variation of two matrices, European J. Combinatorics, 4(1983) 133-136. Several of the theorems in this chapter have converses. For illustration we mention two of these. Schur’s Theorem (II.14) has a converse; it says that if \( d \) and \( \lambda \) are real vectors with \( d \prec \lambda \), then there exists a Hermitian matrix \( A \) whose diagonal entries are the components of \( d \) and whose eigenvalues are the components of \( \lambda \) . 
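Schur's Theorem itself, whose converse is mentioned above, is easy to test numerically: the diagonal of a Hermitian matrix is majorised by its spectrum. A sketch (the construction of the random Hermitian matrix is our own choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (G + G.conj().T) / 2                   # a random Hermitian matrix

d = np.sort(np.real(np.diag(A)))[::-1]     # diagonal, decreasing
lam = np.sort(np.linalg.eigvalsh(A))[::-1] # eigenvalues, decreasing

# diag(A) ≺ λ(A): dominated partial sums and equal totals (both equal tr A).
partial = np.all(np.cumsum(d) <= np.cumsum(lam) + 1e-9)
totals = abs(d.sum() - lam.sum()) < 1e-9
```

The converse stated above asserts that any pair \( d \prec \lambda \) arises this way from some Hermitian matrix.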
Weyl’s Majorant Theorem (II.3.6) has a converse; it says that if \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) are complex numbers and \( {s}_{1},\ldots ,{s}_{n} \) are positive real numbers ordered as \( \left| {\lambda }_{1}\right| \geq \cdots \geq \left| {\lambda }_{n}\right| \) and \( {s}_{1} \geq \cdots \geq {s}_{n} \), and if \[ \mathop{\prod }\limits_{{j = 1}}^{k}\left| {\lambda }_{j}\right| \leq \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\;\text{ for }\;1 \leq k \leq n \] \[ \mathop{\prod }\limits_{{j = 1}}^{n}\left| {\lambda }_{j}\right| = \mathop{\prod }\limits_{{j = 1}}^{n}{s}_{j} \] then there exists an \( n \times n \) matrix \( A \) whose eigenvalues are \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) and singular values \( {s}_{1},\ldots ,{s}_{n} \) . For more such theorems, see the book by Marshall and Olkin cited above. Two results very close to those in II.3.16-II.3.21 and II.5.6-II.5.8 are given below. M. Marcus and L. Lopes, Inequalities for symmetric functions and Hermitian matrices, Canad. J. Math., 9(1957) 305-312, showed that the map \( \Phi : {\mathbb{R}}_{ + }^{n} \rightarrow \mathbb{R} \) given by \( \Phi \left( x\right) = {\left( {S}_{k}\left( x\right) \right) }^{1/k} \) is Schur-concave for \( 1 \leq k \leq n \) . Using this they showed that for positive matrices \( A, B \) \[ {\left\lbrack \operatorname{tr}{ \land }^{k}\left( A + B\right) \right\rbrack }^{1/k} \geq {\left\lbrack \operatorname{tr}{ \land }^{k}A\right\rbrack }^{1/k} + {\left\lbrack \operatorname{tr}{ \land }^{k}B\right\rbrack }^{1/k}. \] (II.44) This can also be expressed by saying that the map \( A \rightarrow {\left( \operatorname{tr}{ \land }^{k}A\right) }^{1/k} \) is concave on the set of positive matrices. For \( k = n \), this reduces to the statement \[ {\left\lbrack \det \left( A + B\right) \right\rbrack }^{1/n} \geq {\left\lbrack \det A\right\rbrack }^{1/n} + {\left\lbrack \det B\right\rbrack }^{1/n} \] which is the Minkowski determinant inequality. E.H. 
Lieb, Convex trace functions and the Wigner-Yanase-Dyson conjecture, Advances in Math., 11(1973) 267-288, proved some striking operator inequalities in connection with the W.-Y.-D. conjecture on the concavity of entropy in quantum mechanics. These were proved by different techniques and extended in other directions by T. Ando, Concavity of certain maps on positive definite matrices and applications to Hadamard products, Linear Algebra Appl., 26(1979) 203-241. One consequence of these results is the inequality \[ { \land }^{k}{\left( A + B\right) }^{1/k} \geq { \land }^{k}{A}^{1/k} + { \land }^{k}{B}^{1/k} \] (II.45) for all positive matrices \( A, B \) and for all \( k = 1,2,\ldots, n \) . In particular, this implies that \[ \operatorname{tr}{ \land }^{k}{\left( A + B\right) }^{1/k} \geq \operatorname{tr}{ \land }^{k}{A}^{1/k} + \operatorname{tr}{ \land }^{k}{B}^{1/k}. \] When \( k = n \), this reduces to the Minkowski determinant inequality. Some of these inequalities are proved in Chapter 9. III Variational Principles for Eigenvalues In this chapter we will study inequalities that are used for localising the spectrum of a Hermitian operator. Such results are motivated by several interrelated considerations. It is not always easy to calculate the eigenvalues of an operator. However, in many scientific problems it is enough to know that the eigenvalues lie in some specified intervals. Such information is provided by the inequalities derived here. While the functional dependence of the eigenvalues on an operator is quite complicated, several interesting relationships between the eigenvalues of two operators \( A, B \) and those of their sum \( A + B \) are known. These relations are consequences of variational principles. When the operator \( B \) is small in comparison to \( A \) , then \( A + B \) is considered as a perturbation of \( A \) or an approximation to \( A \) . The inequalities of this chapter then lead to perturbation bounds or error bounds. 
Many of the results of this chapter lead to generalisations, or analogues, or open problems in other settings discussed in later chapters. ## III. 1 The Minimax Principle for Eigenvalues The following notation will be used throughout this chapter. If \( A, B \) are Hermitian operators, we will write their spectral resolutions as \( A{u}_{j} = \) \( {\alpha }_{j}{u}_{j}, B{v}_{j} = {\beta }_{j}{v}_{j},1 \leq j \leq n \), always assuming that the eigenvectors \( {u}_{j} \) and the eigenvectors \( {v}_{j} \) are orthonormal and that \( {\alpha }_{1} \geq {\alpha }_{2} \geq \cdots \geq {\alpha }_{n} \) and \( {\beta }_{1} \geq {\beta }_{2} \geq \cdots \geq {\beta }_{n} \) . When the dependence of the eigenvalues on the operator is to be emphasized, we will write \( {\lambda }^{ \downarrow }\left( A\right) \) for the vector with components \( {\lambda }_{1}^{ \downarrow }\left( A\right) ,\ldots ,{\lambda }_{n}^{ \downarrow }\left( A\right) \), where \( {\lambda }_{j}^{ \downarrow }\left( A\right) \) are arranged in decreasing order; i.e., \( {\lambda }_{j}^{ \downarrow }\left( A\right) = {\alpha }_{j} \) . Similarly, \( {\lambda }^{ \uparrow }\left( A\right) \) will denote the vector with components \( {\lambda }_{j}^{ \uparrow }\left( A\right) \) where \( {\lambda }_{j}^{ \uparrow }\left( A\right) = {\alpha }_{n - j + 1},1 \leq j \leq n \) . Theorem III.1.1 (Poincaré’s Inequality) Let \( A \) be a Hermitian operator on \( \mathcal{H} \) and let \( \mathcal{M} \) be any \( k \) -dimensional subspace of \( \mathcal{H} \) . Then there exist unit vectors \( x, y \) in \( \mathcal{M} \) such that \( \langle x,{Ax}\rangle \leq {\lambda }_{k}^{ \downarrow }\left( A\right) \) and \( \langle y,{Ay}\rangle \geq {\lambda }_{k}^{ \uparrow }\left( A\right) \) . Proof. 
Let \( \mathcal{N} \) be the subspace spanned by the eigenvectors \( {u}_{k},\ldots ,{u}_{n} \) of \( A \) corresponding to the eigenvalues \( {\lambda }_{k}^{ \downarrow }\left( A\right) ,\ldots ,{\lambda }_{n}^{ \downarrow }\left( A\right) \) . Then \[ \dim \mathcal{M} + \dim \mathcal{N} = n + 1 \] and hence the intersection of \( \mathcal{M} \) and \( \mathcal{N} \) is nontrivial. Pick a unit vector \( x \) in \( \mathcal{M} \cap \mathcal{N} \) . Then we can write \( x = \mathop{\sum }\limits_{{j = k}}^{n}{\xi }_{j}{u}_{j} \), where \( \mathop{\sum }\limits_{{j = k}}^{n}{\left| {\xi }_{j}\right| }^{2} = 1 \) . Hence, \[ \langle x,{Ax}\rangle = \mathop{\sum }\limits_{{j = k}}^{n}{\left| {\xi }_{j}\right| }^{2}{\lambda }_{j}^{ \downarrow }\left( A\right) \leq \mathop{\sum }\limits_{{j = k}}^{n}{\left| {\xi }_{j}\right| }^{2}{\lambda }_{k}^{ \downarrow }\left( A\right) = {\lambda }_{k}^{ \downarrow }\left( A\right) . \] This proves the first statement. The second can be obtained by applying this to the operator \( - A \) instead of \( A \) . Equally well, one can repeat the argument, applying it to the given \( k \) -dimensional space \( \mathcal{M} \) and the \( \left( {n - k + 1}\right) \) -dimensional space spanned by \( {u}_{1},{u}_{2},\ldots ,{u}_{n - k + 1} \) . Corollary III.1.2 (The Minimax Principle) Let \( A \) be a Hermitian operator on \( \mathcal{H} \) . Then \[ {\lambda }_{k}^{ \downarrow }\left( A\right) = \mathop{\max }\limits_{\substack{{\mathcal{M} \subset \mathcal{H}} \\ {\dim \mathcal{M} = k} }}\mathop{\min }\limits_{\substack{{x \in \mathcal{M}} \\ {\parallel x\parallel = 1} }}\langle x,{Ax}\rangle \] \[ = \mathop{\min }\limits_{\substack{{\mathcal{M} \subset \mathcal{H}} \\ {\dim \mathcal{M} = n - k + 1} }}\mathop{\max }\limits_{\substack{{x \in \mathcal{M}} \\ {\parallel x\parallel = 1} }}\langle x,{Ax}\rangle . \] Proof.
By Poincaré’s inequality, if \( \mathcal{M} \) is any \( k \) -dimensional subspace of \( \mathcal{H} \), then \( \mathop{\min }\limits_{x}\langle x,{Ax}\rangle \leq {\lambda }_{k}^{ \downarrow }\left( A\right) \), where \( x \) varies over unit vectors in \( \mathcal{M} \) . But if \( \mathcal{M} \) is the span of \( \left\{ {{u}_{1},\ldots ,{u}_{k}}\right\} \), then this last inequality becomes an equality. That proves the first statement. The second can be obtained from the first by applying it to \( - A \) instead of \( A \) . This minimax principle is sometimes called the Courant-Fischer-Weyl minimax principle. Exercise III.1.3 In the proof of the minimax principle we made a particular choice of \( \mathcal{M} \) . This choice is not always unique. For example, if \( {\lambda }_{k}^{ \downarrow }\left( A\right) = {\lambda }_{k + 1}^{ \downarrow }\left( A\right) \), there would be a whole 1-parameter family of such subspaces obtained by choosing different eigenvectors of \( A \) belonging to \( {\lambda }_{k}^{ \downarrow }\left( A\right) \) . This is not surprising. More surprising, perhaps even shocking, is the fact that we could have \( {\lambda }_{k}^{ \downarrow }\left( A\right) = \min \{ \langle x,{Ax}\rangle : x \in \mathcal{M},\parallel x\parallel = 1\} \), even for a \( k \) -dimensional subspace that is not spanned by eigenvectors of \( A \) . Find an example where this happens. (There is a simple example.) Exercise III.1.4 In the proof of Theorem III.1.1 we used a basic principle of linear algebra: \[ \dim \left( {{\mathcal{M}}_{1} \cap {\mathcal{M}}_{2}}\right) = \dim {\mathcal{M}}_{1} + \dim {\mathcal{M}}_{2} - \dim \left( {{\mathcal{M}}_{1} + {\mathcal{M}}_{2}}\right) \] \[ \geq \dim {\mathcal{M}}_{1} + \dim {\mathcal{M}}_{2} - n \] for any two subspaces \( {\mathcal{M}}_{1} \) and \( {\mathcal{M}}_{2} \) of an \( n \) -dimensional vector space. Derive the corresponding inequality for an intersection of three subspaces. 
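The minimax principle lends itself to a quick numerical sanity check. In the sketch below (a Python/numpy illustration; the random test matrix and the helper `min_rayleigh` are assumptions for demonstration, not from the text), the minimum of \( \langle x, Ax\rangle \) over unit vectors in a subspace is the smallest eigenvalue of the compression of \( A \) to that subspace:

```python
import numpy as np

# Check Corollary III.1.2 numerically: the span of the top k eigenvectors
# attains lambda_k^down(A), and any other k-dimensional subspace gives a
# min-Rayleigh quotient at most lambda_k^down(A) (Poincare's inequality).
rng = np.random.default_rng(1)
n, k = 8, 3
X = rng.standard_normal((n, n))
A = (X + X.T) / 2                       # real symmetric (Hermitian)
evals = np.linalg.eigvalsh(A)[::-1]     # lambda_1 >= ... >= lambda_n
_, U = np.linalg.eigh(A)
U = U[:, ::-1]                          # eigenvector columns, decreasing order

def min_rayleigh(V):
    """min <x, Ax> over unit x in range(V): smallest eigenvalue of the compression."""
    Q, _ = np.linalg.qr(V)              # orthonormal basis of the column space
    return np.linalg.eigvalsh(Q.T @ A @ Q)[0]

# The maximiser span{u_1, ..., u_k} attains lambda_k^down(A) ...
assert abs(min_rayleigh(U[:, :k]) - evals[k - 1]) < 1e-10
# ... and every other k-dimensional subspace gives at most lambda_k^down(A).
for _ in range(50):
    M = rng.standard_normal((n, k))
    assert min_rayleigh(M) <= evals[k - 1] + 1e-10
```

Optimising over all \( k \)-dimensional subspaces is what turns Poincaré's inequality into the max-min equality of Corollary III.1.2.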
An equivalent formulation of the Poincaré inequality is in terms of compressions. Recall that if \( V \) is an isometry of a Hilbert space \( \mathcal{M} \) into \( \mathcal{H} \), then the compression of \( A \) by \( V \) is defined to be the operator \( B = {V}^{ * }{AV} \) . Usually we suppose that \( \mathcal{M} \) is a subspace of \( \mathcal{H} \) and \( V \) is the injection map. Then \( A \) has a block-matrix representation in which \( B \) is the northwest corner entry: \[ A = \left( \begin{array}{ll} B & \star \\ \star & \star \end{array}\right) . \] We say that \( B \) is the compression of \( A \) to the subspace \( \mathcal{M} \) . Corollary III.1.5 (Cauchy's Interlacing Theorem) Let \( A \) be a Hermitian operator on \( \mathcal{H} \), and let \( B \) be its compression to an \( \left( {n - k}\right) \) -dimensional subspace \( \mathcal{N} \) . Then for \( j = 1,2,\ldots, n - k \) \[ {\lambda }_{j}^{ \downarrow }\left( A\right) \geq {\lambda }_{j}^{ \downarrow }\left( B\right) \geq {\lambda }_{j + k}^{ \downarrow }\left( A\right) \] (III.1) Proof. For any \( j \), let \( \mathcal{M} \) be the span of the eigenvectors \( {v}_{1},\ldots ,{v}_{j} \) of \( B \) corresponding to its eigenvalues \( {\lambda }_{1}^{ \downarrow }\left( B\right) ,\ldots ,{\lambda }_{j}^{ \downarrow }\left( B\right) \) . Then \( \langle x,{Bx}\rangle = \langle x,{Ax}\rangle \) for all \( x \in \mathcal{M} \) . Hence, \[ {\lambda }_{j}^{ \downarrow }\left( B\right) = \mathop{\min }\limits_{\substack{{x \in \mathcal{M}} \\ {\parallel x\parallel = 1} }}\langle x,{Bx}\rangle = \mathop{\min }\limits_{\substack{{x \in \mathcal{M}} \\ {\parallel x\parallel = 1} }}\langle x,{Ax}\rangle \leq {\lambda }_{j}^{ \downarrow }\left( A\right) . \] This proves the first assertion in (III.1). Now apply this to \( - A \) and its compression \( - B \) to the given subspace \( \mathcal{N} \) . 
Note that \[ - {\lambda }_{i}^{ \downarrow }\left( A\right) = {\lambda }_{i}^{ \uparrow }\left( {-A}\right) = {\lambda }_{n - i + 1}^{ \downarrow }\left( {-A}\right) \;\text{ for all }1 \leq i \leq n, \] and \[ - {\lambda }_{j}^{ \downarrow }\left( B\right) = {\lambda }_{j}^{ \uparrow }\left( {-B}\right) = {\lambda }_{\left( {n - k}\right) - j + 1}^{ \downarrow }\left( {-B}\right) \;\text{ for all }1 \leq j \leq n - k. \] Choose \( i = j + k \) . Then the first inequality yields \( - {\lambda }_{j}^{ \downarrow }\left( B\right) \leq - {\lambda }_{j + k}^{ \downarrow }\left( A\right) \), which is the second inequality in (III.1). The above inequalities look especially nice when \( B \) is the compression of \( A \) to an \( \left( {n - 1}\right) \) -dimensional subspace: then they say that \[ {\alpha }_{1} \geq {\beta }_{1} \geq {\alpha }_{2} \geq \cdots \geq {\beta }_{n - 1} \geq {\alpha }_{n} \] (III.2) This explains why this is called an interlacing theorem. Exercise III.1.6 The Poincaré inequality, the minimax principle, and the interlacing theorem can be derived from each other. Find an independent proof for each of them using Exercise III.1.4. (This "dimension-counting" for intersections of subspaces will be used in later sections too.) Exercise III.1.7 Let \( B \) be the compression of a Hermitian operator \( A \) to an \( \left( {n - 1}\right) \) -dimensional space \( \mathcal{M} \) . If, for some \( k \), the space \( \mathcal{M} \) contains the vectors \( {u}_{1},\ldots ,{u}_{k} \), then \( {\beta }_{j} = {\alpha }_{j} \) for \( 1 \leq j \leq k \) . If \( \mathcal{M} \) contains \( {u}_{k},\ldots ,{u}_{n} \) , then \( {\alpha }_{j} = {\beta }_{j - 1} \) for \( k \leq j \leq n \) . Exercise III.1.8 (i) Let \( {A}_{n} \) be the \( n \times n \) tridiagonal matrix with entries \( {a}_{ii} = 2\cos \theta \) for all \( i,{a}_{ij} = 1 \) if \( \left| {i - j}\right| = 1 \), and \( {a}_{ij} = 0 \) otherwise.
The determinant of \( {A}_{n} \) is \( \sin \left( {n + 1}\right) \theta /\sin \theta \) . (ii) Show that the eigenvalues of \( {A}_{n} \) are given by \( 2\left( {\cos \theta + \cos \frac{j\pi }{n + 1}}\right) \) , \( 1 \leq j \leq n. \) (iii) The special case when \( {a}_{ii} = - 2 \) for all \( i \) arises in Rayleigh’s finite-dimensional approximation to the differential equation of a vibrating string. In this case the eigenvalues of \( {A}_{n} \) are \[ {\lambda }_{j}^{ \downarrow }\left( {A}_{n}\right) = - 4{\sin }^{2}\frac{j\pi }{2\left( {n + 1}\right) },\;1 \leq j \leq n. \] (iv) Note that, for each \( k < n \), the matrix \( {A}_{n - k} \) is a compression of \( {A}_{n} \) . This example provides a striking illustration of Cauchy’s interlacing theorem. It is illuminating to think of the variational characterisation of eigenvalues as a solution of a variational problem in analysis. If \( A \) is a Hermitian operator on \( {\mathbb{R}}^{n} \), the search for the top eigenvalue of \( A \) is just the problem of maximising the function \( F\left( x\right) = {x}^{ * }{Ax} \) subject to the constraint that the function \( G\left( x\right) = {x}^{ * }x \) has the fixed value 1 . The extremum must occur at a critical point, and using Lagrange multipliers the condition for a point \( x \) to be critical is \( \nabla F\left( x\right) = \lambda \nabla G\left( x\right) \), which becomes \( {Ax} = {\lambda x} \) . Our earlier arguments got to the extremum problem from the algebraic eigenvalue problem, and this argument has gone the other way. If additional constraints are imposed, the maximum can only decrease. Confining \( x \) to an \( \left( {n - k}\right) \) -dimensional subspace is equivalent to imposing \( k \) linearly independent linear constraints on it. These can be expressed as \( {H}_{j}\left( x\right) = 0 \), where \( {H}_{j}\left( x\right) = {w}_{j}^{ * }x \) and the vectors \( {w}_{j},1 \leq j \leq k \) are linearly independent. 
Introducing additional Lagrange multipliers \( {\mu }_{j} \), the condition for a critical point is now \( \nabla F\left( x\right) = \lambda \nabla G\left( x\right) + \mathop{\sum }\limits_{j}{\mu }_{j}\nabla {H}_{j}\left( x\right) \) ; i.e., \( {Ax} - {\lambda x} \) is no longer required to be 0 but merely to be a linear combination of the \( {w}_{j} \) . Look at this in block-matrix terms. Our space has been decomposed into a direct sum of a space \( \mathcal{N} \) and its orthogonal complement which is spanned by \( \left\{ {{w}_{1},\ldots ,{w}_{k}}\right\} \) . Relative to this direct sum decomposition we can write \[ A = \left( \begin{matrix} B & C \\ {C}^{ * } & D \end{matrix}\right) \] Our vector \( x \) is now constrained to be in \( \mathcal{N} \), and the requirement for it to be a critical point is that \( \left( {A - {\lambda I}}\right) \left( \begin{array}{l} x \\ 0 \end{array}\right) \) lies in \( {\mathcal{N}}^{ \bot } \) . This is exactly requiring \( x \) to be an eigenvector of the compression \( B \) . If two interlacing sets of real numbers are given, they can be realised as the eigenvalues of a Hermitian matrix and one of its compressions.
This is a converse to one of the theorems proved above: Theorem III.1.9 Let \( {\alpha }_{j},1 \leq j \leq n \), and \( {\beta }_{i},1 \leq i \leq n - 1 \), be real numbers such that \[ {\alpha }_{1} \geq {\beta }_{1} \geq {\alpha }_{2} \geq \cdots \geq {\beta }_{n - 1} \geq {\alpha }_{n} \] Then there exists a compression of the diagonal matrix \( A = \operatorname{diag}\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) having \( {\beta }_{i},1 \leq i \leq n - 1 \), as its eigenvalues. Proof. Let \( A{u}_{j} = {\alpha }_{j}{u}_{j} \) ; then \( \left\{ {u}_{j}\right\} \) constitute the standard orthonormal basis in \( {\mathbb{C}}^{n} \) . There is a one-to-one correspondence between \( \left( {n - 1}\right) \) -dimensional orthogonal projection operators and unit vectors given by \( P = I - z{z}^{ * } \) . Each unit vector, in turn, is completely characterised by its coordinates \( {\zeta }_{j} \) with respect to the basis \( {u}_{j} \) . We have \( z = \sum {\zeta }_{j}{u}_{j} = \sum \left( {{u}_{j}^{ * }z}\right) {u}_{j},\sum {\left| {\zeta }_{j}\right| }^{2} = 1 \) . We will find conditions on the numbers \( {\zeta }_{j} \) so that, for the corresponding orthoprojector \( P = I - z{z}^{ * } \), the compression of \( A \) to the range of \( P \) has eigenvalues \( {\beta }_{i} \) . Since \( {PAP} \) is a Hermitian operator of rank \( n - 1 \), we must have \[ \mathop{\prod }\limits_{{i = 1}}^{{n - 1}}\left( {\lambda - {\beta }_{i}}\right) = \operatorname{tr}{ \land }^{n - 1}\left\lbrack {P\left( {{\lambda I} - A}\right) P}\right\rbrack \] If \( {E}_{j} \) are the projectors defined as \( {E}_{j} = I - {u}_{j}{u}_{j}^{ * } \), then \[ { \land }^{n - 1}\left( {{\lambda I} - A}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\prod }\limits_{{k \neq j}}\left( {\lambda - {\alpha }_{k}}\right) { \land }^{n - 1}{E}_{j}.
\] Using the result of Problem I.6.9 one sees that \[ { \land }^{n - 1}P \cdot { \land }^{n - 1}{E}_{j} \cdot { \land }^{n - 1}P = {\left| {\zeta }_{j}\right| }^{2}{ \land }^{n - 1}P. \] Since rank \( { \land }^{n - 1}P = 1 \), the above three relations give \[ \mathop{\prod }\limits_{{i = 1}}^{{n - 1}}\left( {\lambda - {\beta }_{i}}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\zeta }_{j}\right| }^{2}\left\lbrack {\mathop{\prod }\limits_{{k \neq j}}\left( {\lambda - {\alpha }_{k}}\right) }\right\rbrack \] (III.3) an identity between polynomials of degree \( n - 1 \), which the \( {\zeta }_{j} \) must satisfy if \( B \) has spectrum \( \left\{ {\beta }_{i}\right\} \) . We will show that the interlacing inequalities between \( {\alpha }_{j} \) and \( {\beta }_{i} \) ensure that we can find \( {\zeta }_{j} \) satisfying (III.3) and \( \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\zeta }_{j}\right| }^{2} = 1 \) . We may assume, without loss of generality, that the \( {\alpha }_{j} \) are distinct. Put \[ {\gamma }_{j} = \frac{\mathop{\prod }\limits_{{i = 1}}^{{n - 1}}\left( {{\alpha }_{j} - {\beta }_{i}}\right) }{\mathop{\prod }\limits_{{k \neq j}}\left( {{\alpha }_{j} - {\alpha }_{k}}\right) },\;1 \leq j \leq n. \] (III.4) The interlacing property ensures that all \( {\gamma }_{j} \) are nonnegative. Now choose \( {\zeta }_{j} \) to be any complex numbers with \( {\left| {\zeta }_{j}\right| }^{2} = {\gamma }_{j} \) . Then the equation (III.3) is satisfied for the values \( \lambda = {\alpha }_{j},1 \leq j \leq n \), and hence it is satisfied for all \( \lambda \) . Comparing the leading coefficients of the two sides of (III.3), we see that \( \mathop{\sum }\limits_{j}{\left| {\zeta }_{j}\right| }^{2} = 1 \) . This completes the proof. ## III. 2 Weyl's Inequalities Several relations between eigenvalues of Hermitian matrices \( A, B \), and \( A + B \) can be obtained using the ideas of the previous section. 
Most of these results were first proved by \( \mathrm{H} \) . Weyl. Theorem III.2.1 Let \( A, B \) be \( n \times n \) Hermitian matrices. Then, \[ {\lambda }_{j}^{ \downarrow }\left( {A + B}\right) \leq {\lambda }_{i}^{ \downarrow }\left( A\right) + {\lambda }_{j - i + 1}^{ \downarrow }\left( B\right) \;\text{ for }i \leq j, \] (III.5) \[ {\lambda }_{j}^{ \downarrow }\left( {A + B}\right) \geq {\lambda }_{i}^{ \downarrow }\left( A\right) + {\lambda }_{j - i + n}^{ \downarrow }\left( B\right) \;\text{ for }i \geq j. \] (III.6) Proof. Let \( {u}_{j},{v}_{j} \), and \( {w}_{j} \) denote the eigenvectors of \( A, B \), and \( A + B \) respectively, corresponding to their eigenvalues in decreasing order. Let \( i \leq j \) . Consider the three subspaces spanned by \( \left\{ {{w}_{1},\ldots ,{w}_{j}}\right\} ,\left\{ {{u}_{i},\ldots ,{u}_{n}}\right\} \) , and \( \left\{ {{v}_{j - i + 1},\ldots ,{v}_{n}}\right\} \) respectively. These have dimensions \( j, n - i + 1 \), and \( n - j + i \), and hence by Exercise III.1.4 they have a nontrivial intersection. Let \( x \) be a unit vector in their intersection. Then \[ {\lambda }_{j}^{ \downarrow }\left( {A + B}\right) \leq \langle x,\left( {A + B}\right) x\rangle = \langle x,{Ax}\rangle + \langle x,{Bx}\rangle \leq {\lambda }_{i}^{ \downarrow }\left( A\right) + {\lambda }_{j - i + 1}^{ \downarrow }\left( B\right) . \] This proves (III.5). If \( A \) and \( B \) in this inequality are replaced by \( - A \) and \( - B \), we get (III.6). Corollary III.2.2 For each \( j = 1,2,\ldots, n \) , \[ {\lambda }_{j}^{ \downarrow }\left( A\right) + {\lambda }_{n}^{ \downarrow }\left( B\right) \leq {\lambda }_{j}^{ \downarrow }\left( {A + B}\right) \leq {\lambda }_{j}^{ \downarrow }\left( A\right) + {\lambda }_{1}^{ \downarrow }\left( B\right) . \] (III.7) Proof. Put \( i = j \) in the above inequalities. 
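Weyl's inequalities (III.5)-(III.7) are easy to test numerically. A hedged sketch with numpy (the random Hermitian test matrices are an assumption chosen purely for illustration); note the shift to 0-based indices in code:

```python
import numpy as np

# Check Weyl's inequalities (III.5) and (III.6) for eigenvalues in
# decreasing order.  With 0-based indices i, j the statements read:
#   lc[j] <= la[i] + lb[j - i]          for i <= j        (III.5)
#   lc[j] >= la[i] + lb[j - i + n - 1]  for i >= j        (III.6)
rng = np.random.default_rng(2)
n = 6
X, Y = rng.standard_normal((2, n, n))
A, B = (X + X.T) / 2, (Y + Y.T) / 2
la = np.linalg.eigvalsh(A)[::-1]       # lambda^down(A)
lb = np.linalg.eigvalsh(B)[::-1]       # lambda^down(B)
lc = np.linalg.eigvalsh(A + B)[::-1]   # lambda^down(A + B)
eps = 1e-10
for j in range(n):
    for i in range(j + 1):                      # i <= j: (III.5)
        assert lc[j] <= la[i] + lb[j - i] + eps
    for i in range(j, n):                       # i >= j: (III.6)
        assert lc[j] >= la[i] + lb[j - i + n - 1] - eps
```

Putting `i = j` in both loops recovers the two-sided bound of (III.7).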
It is customary to state these and related results as perturbation theorems, whereby \( B \) is a perturbation of \( A \) ; that is, \( B = A + H \) . In many of the applications \( H \) is small and the object is to give bounds for the distance of \( \lambda \left( B\right) \) from \( \lambda \left( A\right) \) in terms of \( H = B - A \) . Corollary III.2.3 (Weyl’s Monotonicity Theorem) If \( H \) is positive, then \[ {\lambda }_{j}^{ \downarrow }\left( {A + H}\right) \geq {\lambda }_{j}^{ \downarrow }\left( A\right) \;\text{ for all }j. \] Proof. By the preceding corollary, \( {\lambda }_{j}^{ \downarrow }\left( {A + H}\right) \geq {\lambda }_{j}^{ \downarrow }\left( A\right) + {\lambda }_{n}^{ \downarrow }\left( H\right) \), and all the eigenvalues of \( H \) are nonnegative. Alternatively, note that \( \langle x,\left( {A + H}\right) x\rangle \geq \langle x,{Ax}\rangle \) for all \( x \) and use the minimax principle. Exercise III.2.4 If \( H \) is positive and has rank \( k \), then \[ {\lambda }_{j}^{ \downarrow }\left( {A + H}\right) \geq {\lambda }_{j}^{ \downarrow }\left( A\right) \geq {\lambda }_{j + k}^{ \downarrow }\left( {A + H}\right) \;\text{ for }j = 1,2,\ldots, n - k. \] This is analogous to Cauchy's interlacing theorem. Exercise III.2.5 Let \( H \) be any Hermitian matrix. Then \[ {\lambda }_{j}^{ \downarrow }\left( A\right) - \parallel H\parallel \leq {\lambda }_{j}^{ \downarrow }\left( {A + H}\right) \leq {\lambda }_{j}^{ \downarrow }\left( A\right) + \parallel H\parallel \] This can be restated as: Corollary III.2.6 (Weyl’s Perturbation Theorem) Let \( A \) and \( B \) be Hermitian matrices.
Then \[ \mathop{\max }\limits_{j}\left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) }\right| \leq \parallel A - B\parallel \] Exercise III.2.7 For Hermitian matrices \( A, B \), we have \[ \parallel A - B\parallel \leq \mathop{\max }\limits_{j}\left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \uparrow }\left( B\right) }\right| . \] It is useful to have another formulation of the above two inequalities, which will be in conformity with more general results proved later. We will denote by Eig \( A \) a diagonal matrix whose diagonal entries are the eigenvalues of \( A \) . If these are arranged in decreasing order, we write this matrix as \( {\operatorname{Eig}}^{ \downarrow }\left( A\right) \) ; if in increasing order as \( {\operatorname{Eig}}^{ \uparrow }\left( A\right) \) . The results of Corollary III.2.6 and Exercise III.2.7 can then be stated as Theorem III.2.8 For any two Hermitian matrices \( A, B \) , \[ \begin{Vmatrix}{{\operatorname{Eig}}^{ \downarrow }\left( A\right) - {\operatorname{Eig}}^{ \downarrow }\left( B\right) }\end{Vmatrix} \leq \parallel A - B\parallel \leq \begin{Vmatrix}{{\operatorname{Eig}}^{ \downarrow }\left( A\right) - {\operatorname{Eig}}^{ \uparrow }\left( B\right) }\end{Vmatrix}. \] Weyl’s inequality (III.5) is equivalent to an inequality due to Aronszajn connecting the eigenvalues of a Hermitian matrix to those of any two complementary principal submatrices. For this let us rewrite (III.5) as \[ {\lambda }_{i + j - 1}^{ \downarrow }\left( {A + B}\right) \leq {\lambda }_{i}^{ \downarrow }\left( A\right) + {\lambda }_{j}^{ \downarrow }\left( B\right) \] (III.8) for all indices \( i, j \) such that \( i + j - 1 \leq n \) .
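Before turning to Aronszajn's inequality, the two-sided bound of Theorem III.2.8 can also be spot-checked numerically. A sketch with numpy (test matrices chosen arbitrarily for illustration); for the diagonal matrices \( \operatorname{Eig}^{\downarrow} \) the operator norm is just the largest absolute entry:

```python
import numpy as np

# Check ||Eig^down(A) - Eig^down(B)|| <= ||A - B|| <= ||Eig^down(A) - Eig^up(B)||
# in the operator (spectral) norm.
rng = np.random.default_rng(3)
n = 7
X, Y = rng.standard_normal((2, n, n))
A, B = (X + X.T) / 2, (Y + Y.T) / 2
a_down = np.linalg.eigvalsh(A)[::-1]   # eigenvalues of A, decreasing
b_down = np.linalg.eigvalsh(B)[::-1]   # eigenvalues of B, decreasing
b_up = b_down[::-1]                    # eigenvalues of B, increasing
gap = np.linalg.norm(A - B, 2)         # spectral norm of the difference
assert np.max(np.abs(a_down - b_down)) <= gap + 1e-10
assert gap <= np.max(np.abs(a_down - b_up)) + 1e-10
```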
Theorem III.2.9 (Aronszajn’s Inequality) Let \( C \) be an \( n \times n \) Hermitian matrix partitioned as \[ C = \left( \begin{matrix} A & X \\ {X}^{ * } & B \end{matrix}\right) \] where \( A \) is a \( k \times k \) matrix. Let the eigenvalues of \( A, B \), and \( C \) be \( {\alpha }_{1} \geq \cdots \) \( \geq {\alpha }_{k},{\beta }_{1} \geq \cdots \geq {\beta }_{n - k} \), and \( {\gamma }_{1} \geq \cdots \geq {\gamma }_{n} \), respectively. Then \[ {\gamma }_{i + j - 1} + {\gamma }_{n} \leq {\alpha }_{i} + {\beta }_{j}\;\text{ for all }i, j\text{ with }i + j - 1 \leq n. \] (III.9) Proof. First assume that \( {\gamma }_{n} = 0 \) . Then \( C \) is a positive matrix. Hence \( C = {D}^{ * }D \) for some matrix \( D \) . Partition \( D \) as \( D = \left( {{D}_{1}{D}_{2}}\right) \), where \( {D}_{1} \) has \( k \) columns. Then \[ C = \left( \begin{matrix} A & X \\ {X}^{ * } & B \end{matrix}\right) = \left( \begin{array}{ll} {D}_{1}^{ * }{D}_{1} & {D}_{1}^{ * }{D}_{2} \\ {D}_{2}^{ * }{D}_{1} & {D}_{2}^{ * }{D}_{2} \end{array}\right) . \] Note that \( D{D}^{ * } = {D}_{1}{D}_{1}^{ * } + {D}_{2}{D}_{2}^{ * } \) . Now the nonzero eigenvalues of the matrix \( C = {D}^{ * }D \) are the same as those of \( D{D}^{ * } \) . The same is true for the matrices \( A = {D}_{1}^{ * }{D}_{1} \) and \( {D}_{1}{D}_{1}^{ * } \), and also for the matrices \( B = {D}_{2}^{ * }{D}_{2} \) and \( {D}_{2}{D}_{2}^{ * } \) . Hence, using Weyl’s inequality (III.8) we get (III.9) in this special case. If \( {\gamma }_{n} \neq 0 \), subtract \( {\gamma }_{n}I \) from \( C \) . Then all eigenvalues of \( A, B \), and \( C \) are translated by \( - {\gamma }_{n} \) . By the special case considered above we have \[ {\gamma }_{i + j - 1} - {\gamma }_{n} \leq \left( {{\alpha }_{i} - {\gamma }_{n}}\right) + \left( {{\beta }_{j} - {\gamma }_{n}}\right) \] which is the same as (III.9). We have derived Aronszajn's inequality from Weyl's inequality. But the argument above can be reversed. 
Let \( A, B \) be \( n \times n \) Hermitian matrices and let \( C = A + B \) . Let the eigenvalues of these matrices be \( {\alpha }_{1} \geq \cdots \geq {\alpha }_{n},{\beta }_{1} \geq \) \( \cdots \geq {\beta }_{n} \), and \( {\gamma }_{1} \geq \cdots \geq {\gamma }_{n} \), respectively. We want to prove that \( {\gamma }_{i + j - 1} \leq \) \( {\alpha }_{i} + {\beta }_{j} \) . This is the same as \( {\gamma }_{i + j - 1} - \left( {{\alpha }_{n} + {\beta }_{n}}\right) \leq \left( {{\alpha }_{i} - {\alpha }_{n}}\right) + \left( {{\beta }_{j} - {\beta }_{n}}\right) \) . Hence, we can assume, without loss of generality, that both \( A \) and \( B \) are positive. Then \( A = {D}_{1}^{ * }{D}_{1} \) and \( B = {D}_{2}^{ * }{D}_{2} \) for some matrices \( {D}_{1},{D}_{2} \) . Hence, \[ C = {D}_{1}^{ * }{D}_{1} + {D}_{2}^{ * }{D}_{2} = \left( \begin{array}{ll} {D}_{1}^{ * } & {D}_{2}^{ * } \end{array}\right) \left( \begin{array}{l} {D}_{1} \\ {D}_{2} \end{array}\right) . \] Consider the \( {2n} \times {2n} \) matrix \[ E = \left( \begin{array}{l} {D}_{1} \\ {D}_{2} \end{array}\right) \left( {{D}_{1}^{ * }{D}_{2}^{ * }}\right) = \left( \begin{array}{ll} {D}_{1}{D}_{1}^{ * } & {D}_{1}{D}_{2}^{ * } \\ {D}_{2}{D}_{1}^{ * } & {D}_{2}{D}_{2}^{ * } \end{array}\right) . \] Then the eigenvalues of \( E \) are the eigenvalues of \( C \) together with \( n \) zeroes. Aronszajn’s inequality for the partitioned matrix \( E \) then gives Weyl’s inequality (III.8). By this procedure, several linear inequalities for the eigenvalues of a sum of Hermitian matrices can be transformed to those for the eigenvalues of block Hermitian matrices, and vice versa. ## III. 3 Wielandt's Minimax Principle The minimax principle (Corollary III.1.2) gives an extremal characterisation for each eigenvalue \( {\alpha }_{j} \) of a Hermitian matrix \( A \) . 
Ky Fan’s maximum principle (Problem I.6.15 and Exercise II.1.13) provides an extremal characterisation for the sum \( {\alpha }_{1} + \cdots + {\alpha }_{k} \) of the top \( k \) eigenvalues of \( A \) . In this section we will prove a deeper result due to Wielandt that subsumes both these principles by providing an extremal representation of any sum \( {\alpha }_{{i}_{1}} + \cdots + {\alpha }_{{i}_{k}} \) . The proof involves a more elaborate dimension-counting for intersections of subspaces than was needed earlier. We will denote by \( V + W \) the vector sum of two vector spaces \( V \) and \( W \), by \( V - W \) any linear complement of a space \( W \) in \( V \), and by span \( \left\{ {{v}_{1},\ldots ,{v}_{k}}\right\} \) the linear span of vectors \( {v}_{1},\ldots ,{v}_{k} \) . Lemma III.3.1 Let \( {W}_{1} \supset {W}_{2} \supset \cdots \supset {W}_{k} \) be a decreasing chain of vector spaces with \( \dim {W}_{j} \geq k - j + 1 \) . Let \( {w}_{j},1 \leq j \leq k - 1 \), be linearly independent vectors such that \( {w}_{j} \in {W}_{j} \), and let \( U \) be their linear span. Then there exists a nonzero vector \( u \) in \( {W}_{1} - U \) such that the space \( U + \operatorname{span}\{ u\} \) has a basis \( {v}_{1},\ldots ,{v}_{k} \) with \( {v}_{j} \in {W}_{j},1 \leq j \leq k \) . Proof. This will be proved by induction on \( k \) . The statement is easily verified when \( k = 2 \) . Assume that it is true for a chain consisting of \( k - 1 \) spaces. Let \( {w}_{1},\ldots ,{w}_{k - 1} \) be the given vectors and \( U \) their linear span. Let \( S \) be the linear span of \( {w}_{2},\ldots ,{w}_{k - 1} \) . 
Apply the induction hypothesis to the chain \( {W}_{2} \supset \cdots \supset {W}_{k} \) to pick up a vector \( v \) in \( {W}_{2} - S \) such that the space \( S + \operatorname{span}\{ v\} \) is equal to \( \operatorname{span}\left\{ {{v}_{2},\ldots ,{v}_{k}}\right\} \) for some linearly independent vectors \( {v}_{j} \in {W}_{j}, j = 2,\ldots, k \) . This vector \( v \) may or may not be in the space \( U \) . We will consider the two possibilities. Suppose \( v \in U \) . Then \( U = S + \) span \( \{ v\} \) because \( U \) is \( \left( {k - 1}\right) \) -dimensional and \( S \) is \( \left( {k - 2}\right) \) -dimensional. Since \( \dim {W}_{1} \geq k \), there exists a nonzero vector \( u \) in \( {W}_{1} - U \) . Then \( u,{v}_{2},\ldots ,{v}_{k} \) form a basis for \( U + \operatorname{span}\{ u\} \) . Put \( u = {v}_{1} \) . All requirements are now met. Suppose \( v \notin U \) . Then \( {w}_{1} \notin S + \operatorname{span}\{ v\} \), for if \( {w}_{1} \) were a linear combination of \( {w}_{2},\ldots ,{w}_{k - 1} \) and \( v \), then \( v \) would be a linear combination of \( {w}_{1},{w}_{2},\ldots ,{w}_{k - 1} \) and hence be an element of \( U \) . So, span \( \left\{ {{w}_{1},{v}_{2},\ldots ,{v}_{k}}\right\} \) is a \( k \) -dimensional space that must, therefore, be \( U + \operatorname{span}\{ v\} \) . Now \( {w}_{1} \in {W}_{1} \) and \( {v}_{j} \in {W}_{j}, j = 2,\ldots, k \) . Again all requirements are met. Theorem III.3.2 Let \( {V}_{1} \subset {V}_{2} \subset \cdots \subset {V}_{k} \) be linear subspaces of an \( n \) - dimensional vector space \( V \), with \( \dim {V}_{j} = {i}_{j},1 \leq {i}_{1} < {i}_{2} < \cdots < {i}_{k} \leq n \) . Let \( {W}_{1} \supset {W}_{2} \supset \cdots \supset {W}_{k} \) be subspaces of \( V \), with \( \dim {W}_{j} = n - {i}_{j} + 1 = \) codim \( {V}_{j} + 1 \) . 
Then there exist linearly independent vectors \( {v}_{j} \in {V}_{j},1 \leq j \leq k \), and linearly independent vectors \( {w}_{j} \in {W}_{j},1 \leq j \leq k \), such that \[ \operatorname{span}\left\{ {{v}_{1},\ldots ,{v}_{k}}\right\} = \operatorname{span}\left\{ {{w}_{1},\ldots ,{w}_{k}}\right\} \] Proof. When \( k = 1 \) the statement is obviously true. (We have used this repeatedly in the earlier sections.) The general case will be proved by induction on \( k \) . So, let us assume that the theorem has been proved for \( k - 1 \) pairs of subspaces. By the induction hypothesis choose \( {v}_{j} \in {V}_{j} \) and \( {w}_{j} \in {W}_{j},1 \leq j \leq k - 1 \), two sets of linearly independent vectors having the same linear span \( U \) . Note that \( U \) is a subspace of \( {V}_{k} \) . For \( j = 1,\ldots, k \), let \( {S}_{j} = {W}_{j} \cap {V}_{k} \) . Then note that \[ n \geq \dim {W}_{j} + \dim {V}_{k} - \dim {S}_{j} \] \[ = \left( {n - {i}_{j} + 1}\right) + {i}_{k} - \dim {S}_{j}. \] Hence, \[ \dim {S}_{j} \geq {i}_{k} - {i}_{j} + 1 \geq k - j + 1 \] Note that \( {S}_{1} \supset {S}_{2} \supset \cdots \supset {S}_{k} \) are subspaces of \( {V}_{k} \) and \( {w}_{j} \in {S}_{j} \) for \( j = 1,2,\ldots, k - 1 \) . Hence, by Lemma III.3.1 there exists a vector \( u \) in \( {S}_{1} - U \) such that the space \( U + \operatorname{span}\{ u\} \) has a basis \( {u}_{1},\ldots ,{u}_{k} \), where \( {u}_{j} \in {S}_{j} \subset {W}_{j}, j = 1,2,\ldots, k \) . But \( U + \operatorname{span}\{ u\} \) is also the linear span of \( {v}_{1},\ldots ,{v}_{k - 1} \) and \( u \) . Put \( {v}_{k} = u \) .
Then \( {v}_{j} \in {V}_{j}, j = 1,2,\ldots, k \), and they span the same space as the \( {u}_{j} \) . Exercise III.3.3 If \( V \) is a Hilbert space, the vectors \( {v}_{j} \) and \( {w}_{j} \) in the statement of the above theorem can be chosen to be orthonormal. Proposition III.3.4 Let \( A \) be a Hermitian operator on \( \mathcal{H} \) with eigenvectors \( {u}_{j} \) belonging to eigenvalues \( {\lambda }_{j}^{ \downarrow }\left( A\right), j = 1,2,\ldots, n \) . (i) Let \( {\mathcal{V}}_{j} = \operatorname{span}\left\{ {{u}_{1},\ldots ,{u}_{j}}\right\} ,1 \leq j \leq n \) . Given indices \( 1 \leq {i}_{1} < \cdots < \) \( {i}_{k} \leq n \), choose orthonormal vectors \( {x}_{{i}_{j}} \) from the spaces \( {\mathcal{V}}_{{i}_{j}}, j = 1,\ldots, k \) . Let \( \mathcal{V} \) be the span of these vectors, and let \( {A}_{\mathcal{V}} \) be the compression of \( A \) to the space \( \mathcal{V} \) . Then \[ {\lambda }_{j}^{ \downarrow }\left( {A}_{\mathcal{V}}\right) \geq {\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) \;\text{ for }\;j = 1,\ldots, k. \] (ii) Let \( {\mathcal{W}}_{j} = \operatorname{span}\left\{ {{u}_{j},\ldots ,{u}_{n}}\right\} ,1 \leq j \leq n \) . Choose orthonormal vectors \( {x}_{{i}_{j}} \) from the spaces \( {\mathcal{W}}_{{i}_{j}}, j = 1,\ldots, k \) . Let \( \mathcal{W} \) be the span of these vectors and \( {A}_{\mathcal{W}} \) the compression of \( A \) to \( \mathcal{W} \) . Then \[ {\lambda }_{j}^{ \downarrow }\left( {A}_{\mathcal{W}}\right) \leq {\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) \;\text{ for }\;j = 1,\ldots, k. \] Proof. Let \( {y}_{1},\ldots ,{y}_{k} \) be the eigenvectors of \( {A}_{\mathcal{V}} \) belonging to its eigenvalues \( {\lambda }_{1}^{ \downarrow }\left( {A}_{\mathcal{V}}\right) ,\ldots ,{\lambda }_{k}^{ \downarrow }\left( {A}_{\mathcal{V}}\right) \) . 
Fix \( j,1 \leq j \leq k \), and in the space \( \mathcal{V} \) consider the spaces spanned by \( \left\{ {{x}_{{i}_{1}},\ldots ,{x}_{{i}_{j}}}\right\} \) and \( \left\{ {{y}_{j},\ldots ,{y}_{k}}\right\} \), respectively. The dimensions of these two spaces add up to \( k + 1 \), while the space \( \mathcal{V} \) is \( k \) -dimensional. Hence there exists a unit vector \( u \) in the intersection of these two spaces. For this vector we have \[ {\lambda }_{j}^{ \downarrow }\left( {A}_{\mathcal{V}}\right) \geq \left\langle {u,{A}_{\mathcal{V}}u}\right\rangle = \langle u,{Au}\rangle \geq {\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) . \] This proves (i). The statement (ii) has exactly the same proof. Theorem III.3.5 (Wielandt’s Minimax Principle) Let \( A \) be a Hermitian operator on an \( n \) -dimensional space \( \mathcal{H} \) . Then for any indices \( 1 \leq {i}_{1} < \cdots < \) \( {i}_{k} \leq n \) we have \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) = \mathop{\max }\limits_{\substack{{{\mathcal{M}}_{1} \subset \cdots \subset {\mathcal{M}}_{k}} \\ {\text{ dim }{\mathcal{M}}_{j} = {i}_{j}} }}\mathop{\min }\limits_{\substack{{{x}_{j} \in {\mathcal{M}}_{j}} \\ {{x}_{j}\text{ orthonormal }} }}\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle \] \[ = \mathop{\min }\limits_{\substack{{{\mathcal{N}}_{1} \supset \cdots \supset {\mathcal{N}}_{k}} \\ {\dim {\mathcal{N}}_{j} = n - {i}_{j} + 1} }}\mathop{\max }\limits_{\substack{{{x}_{j} \in {\mathcal{N}}_{j}} \\ {{x}_{j}\text{ orthonormal }} }}\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle . \] Proof. We will prove the first statement; the second has a similar proof. Let \( {\mathcal{V}}_{{i}_{j}} = \operatorname{span}\left\{ {{u}_{1},\ldots ,{u}_{{i}_{j}}}\right\} \), where, as before, the \( {u}_{j} \) are eigenvectors of \( A \) corresponding to \( {\lambda }_{j}^{ \downarrow }\left( A\right) \) . 
For any unit vector \( x \) in \( {\mathcal{V}}_{{i}_{j}},\langle x,{Ax}\rangle \geq {\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) \) . So, if \( {x}_{j} \in {\mathcal{V}}_{{i}_{j}} \) are orthonormal vectors, then
\[ \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle \geq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) . \]
Since the \( {x}_{j} \) were quite arbitrary, we have
\[ \mathop{\min }\limits_{\substack{{{x}_{j} \in {\mathcal{V}}_{{i}_{j}}} \\ {{x}_{j}\text{ orthonormal }} }}\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle \geq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) . \]
Hence, the desired result will be achieved if we prove that given any subspaces \( {\mathcal{M}}_{1} \subset \cdots \subset {\mathcal{M}}_{k} \) with \( \dim {\mathcal{M}}_{j} = {i}_{j} \) we can find orthonormal vectors \( {x}_{j} \in {\mathcal{M}}_{j} \) such that
\[ \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) . \]
Let \( {\mathcal{N}}_{j} = {\mathcal{W}}_{{i}_{j}} = \operatorname{span}\left\{ {{u}_{{i}_{j}},\ldots ,{u}_{n}}\right\}, j = 1,2,\ldots, k \) . These spaces were considered in Proposition III.3.4(ii). We have \( {\mathcal{N}}_{1} \supset {\mathcal{N}}_{2} \supset \cdots \supset {\mathcal{N}}_{k} \) and \( \dim {\mathcal{N}}_{j} = n - {i}_{j} + 1 \) . Hence, by Theorem III.3.2 and Exercise III.3.3 there exist orthonormal vectors \( {x}_{j} \in {\mathcal{M}}_{j} \) and orthonormal vectors \( {y}_{j} \in {\mathcal{N}}_{j} \) such that \[ \operatorname{span}\left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} = \operatorname{span}\left\{ {{y}_{1},\ldots ,{y}_{k}}\right\} = \mathcal{W},\;\text{ say.
} \] By Proposition III.3.4 (ii), \( {\lambda }_{j}^{ \downarrow }\left( {A}_{\mathcal{W}}\right) \leq {\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) \) for \( j = 1,2,\ldots, k \) . Hence, \[ \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j},{A}_{\mathcal{W}}{x}_{j}}\right\rangle = \operatorname{tr}{A}_{\mathcal{W}} \] \[ = \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( {A}_{\mathcal{W}}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) \] This is what we wanted to prove. Exercise III.3.6 Note that \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{u}_{{i}_{j}}, A{u}_{{i}_{j}}}\right\rangle \] We have seen that the maximum in the first assertion of Theorem III.3.5 is attained when \( {\mathcal{M}}_{j} = {\mathcal{V}}_{{i}_{j}} = \operatorname{span}\left\{ {{u}_{1},\ldots ,{u}_{{i}_{j}}}\right\}, j = 1,\ldots, k \), and with this choice the minimum is attained for \( {x}_{j} = {u}_{{i}_{j}}, j = 1,\ldots, k \) . Are there other choices of subspaces and vectors for which these extrema are attained? (See Exercise III.1.3.) Exercise III.3.7 Let \( \left\lbrack {a, b}\right\rbrack \) be an interval containing all eigenvalues of \( A \) and let \( \Phi \left( {{t}_{1},\ldots ,{t}_{k}}\right) \) be any real valued function on \( \left\lbrack {a, b}\right\rbrack \times \cdots \times \left\lbrack {a, b}\right\rbrack \) that is monotone in each variable and permutation-invariant. 
Show that for each choice of indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) ,
\[ \Phi \left( {{\lambda }_{{i}_{1}}^{ \downarrow }\left( A\right) ,\ldots ,{\lambda }_{{i}_{k}}^{ \downarrow }\left( A\right) }\right) = \mathop{\max }\limits_{\substack{{{\mathcal{M}}_{1} \subset \cdots \subset {\mathcal{M}}_{k}} \\ {\dim {\mathcal{M}}_{j} = {i}_{j}} }}\mathop{\min }\limits_{\substack{{\mathcal{W} = \operatorname{span}\left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} } \\ {{x}_{j} \in {\mathcal{M}}_{j},{x}_{j}\text{ orthonormal }} }}\Phi \left( {{\lambda }_{1}^{ \downarrow }\left( {A}_{\mathcal{W}}\right) ,\ldots ,{\lambda }_{k}^{ \downarrow }\left( {A}_{\mathcal{W}}\right) }\right) , \]
where \( {A}_{\mathcal{W}} \) is the compression of \( A \) to the space \( \mathcal{W} \) . In Theorem III.3.5 we have proved the special case of this with \( \Phi \left( {{t}_{1},\ldots ,{t}_{k}}\right) = {t}_{1} + \cdots + {t}_{k} \) .

## III. 4 Lidskii's Theorems

One important application of Wielandt's minimax principle is in proving a theorem of Lidskii giving a relationship between the eigenvalues of Hermitian matrices \( A, B \), and \( A + B \) . This is quite like our derivation of some of the results in Section III.2 from those in Section III.1.

Theorem III.4.1 Let \( A, B \) be Hermitian matrices. Then for any choice of indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) ,
\[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {A + B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( B\right) .
\] (III.10)

Proof. By Theorem III.3.5 there exist subspaces \( {\mathcal{M}}_{1} \subset \cdots \subset {\mathcal{M}}_{k} \), with \( \dim {\mathcal{M}}_{j} = {i}_{j} \), such that
\[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {A + B}\right) = \mathop{\min }\limits_{\substack{{{x}_{j} \in {\mathcal{M}}_{j}} \\ {{x}_{j}\text{ orthonormal }} }}\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j},\left( {A + B}\right) {x}_{j}}\right\rangle . \]
By Ky Fan's maximum principle,
\[ \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, B{x}_{j}}\right\rangle \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( B\right) \]
for any choice of orthonormal vectors \( {x}_{1},\ldots ,{x}_{k} \) . The above two relations imply that
\[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {A + B}\right) \leq \mathop{\min }\limits_{\substack{{{x}_{j} \in {\mathcal{M}}_{j}} \\ {{x}_{j}\text{ orthonormal }} }}\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j}, A{x}_{j}}\right\rangle + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( B\right) . \]
Now, using Theorem III.3.5 once again, it can be concluded that the first term on the right-hand side of the above inequality is dominated by \( \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) \) .

Corollary III.4.2 If \( A, B \) are Hermitian matrices, then the eigenvalues of \( A, B \), and \( A + B \) satisfy the following majorisation relation:
\[ {\lambda }^{ \downarrow }\left( {A + B}\right) - {\lambda }^{ \downarrow }\left( A\right) \prec \lambda \left( B\right) . \] (III.11)

Exercise III.4.3 (Lidskii's Theorem) The vector \( {\lambda }^{ \downarrow }\left( {A + B}\right) \) is in the convex hull of the vectors \( {\lambda }^{ \downarrow }\left( A\right) + P{\lambda }^{ \downarrow }\left( B\right) \), where \( P \) varies over all permutation matrices.
[This statement and those of Theorem III.4.1 and Corollary III.4.2 are, in fact, equivalent to each other.]

Lidskii's Theorem can be proved without calling upon the more intricate minimax principle of Wielandt. We will see several other proofs in this book, each highlighting a different viewpoint. The second proof given below is in the spirit of the other results of this chapter.

Lidskii's Theorem (second proof). We will prove Theorem III.4.1 by induction on the dimension \( n \) . Its statement is trivial when \( n = 1 \) . Assume it is true up to dimension \( n - 1 \) . When \( k = n \), the inequality (III.10) needs no proof. So we may assume that \( k < n \) . Let \( {u}_{j},{v}_{j} \), and \( {w}_{j} \) be the eigenvectors of \( A, B \), and \( A + B \) corresponding to their eigenvalues \( {\lambda }_{j}^{ \downarrow }\left( A\right) ,{\lambda }_{j}^{ \downarrow }\left( B\right) \), and \( {\lambda }_{j}^{ \downarrow }\left( {A + B}\right) \) . We will consider three cases separately.

Case 1. \( {i}_{k} < n \) . Let \( \mathcal{M} = \operatorname{span}\left\{ {{w}_{1},\ldots ,{w}_{n - 1}}\right\} \) and let \( {A}_{\mathcal{M}} \) be the compression of \( A \) to the space \( \mathcal{M} \) . Then, by the induction hypothesis,
\[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {{A}_{\mathcal{M}} + {B}_{\mathcal{M}}}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {A}_{\mathcal{M}}\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( {B}_{\mathcal{M}}\right) . \]
The inequality (III.10) follows from this by using the interlacing principle (III.2) and Exercise III.1.7.

Case 2. \( 1 < {i}_{1} \) . Let \( \mathcal{M} = \operatorname{span}\left\{ {{u}_{2},\ldots ,{u}_{n}}\right\} \) .
By the induction hypothesis \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j} - 1}^{ \downarrow }\left( {{A}_{\mathcal{M}} + {B}_{\mathcal{M}}}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j} - 1}^{ \downarrow }\left( {A}_{\mathcal{M}}\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( {B}_{\mathcal{M}}\right) . \] Once again, the inequality (III.10) follows from this by using the interlacing principle and Exercise III.1.7. Case 3. \( {i}_{1} = 1 \) . Given the indices \( 1 = {i}_{1} < {i}_{2} < \cdots < {i}_{k} \leq n \), pick up the indices \( 1 \leq {\ell }_{1} < {\ell }_{2} < \ldots < {\ell }_{n - k} < n \) such that the set \( \left\{ {{i}_{j} : 1 \leq j \leq k}\right\} \) is the complement of the set \( \left\{ {n - {\ell }_{j} + 1 : 1 \leq j \leq n - k}\right\} \) in the set \( \{ 1,2,\ldots, n\} \) . These new indices now come under Case 1. Use (III.10) for this set of indices, but for matrices \( - A \) and \( - B \) in place of \( A, B \) . Then note that \( {\lambda }_{j}^{ \downarrow }\left( {-A}\right) = - {\lambda }_{n - j + 1}^{ \downarrow }\left( A\right) \) for all \( 1 \leq j \leq n \) . This gives \[ \mathop{\sum }\limits_{{j = 1}}^{{n - k}} - {\lambda }_{n - {\ell }_{j} + 1}^{ \downarrow }\left( {A + B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{{n - k}} - {\lambda }_{n - {\ell }_{j} + 1}^{ \downarrow }\left( A\right) + \mathop{\sum }\limits_{{j = 1}}^{{n - k}} - {\lambda }_{n - j + 1}^{ \downarrow }\left( B\right) . \] Now add \( \operatorname{tr}\left( {A + B}\right) \) to both sides of the above inequality to get \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {A + B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( B\right) . \] This proves the theorem. 
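The inequality (III.10) lends itself to a quick numerical spot-check. The following Python sketch (an illustration only, not part of the text; the helper `eig2` is ad hoc) verifies it for random \( 2 \times 2 \) real symmetric matrices, whose eigenvalues are given in closed form.

```python
import math
import random

def eig2(a, b, c):
    """Eigenvalues of the symmetric matrix [[a, b], [b, c]], in decreasing order."""
    mean = (a + c) / 2.0
    radius = math.hypot((a - c) / 2.0, b)
    return [mean + radius, mean - radius]

random.seed(0)
for _ in range(1000):
    a1, b1, c1 = (random.uniform(-5, 5) for _ in range(3))
    a2, b2, c2 = (random.uniform(-5, 5) for _ in range(3))
    lam_A = eig2(a1, b1, c1)
    lam_B = eig2(a2, b2, c2)
    lam_S = eig2(a1 + a2, b1 + b2, c1 + c2)  # eigenvalues of A + B
    # (III.10): for every index set {i_1 < ... < i_k} (here {1}, {2}, {1, 2}),
    # the B-term on the right is summed over the k largest eigenvalues of B.
    for idx in ([0], [1], [0, 1]):
        k = len(idx)
        lhs = sum(lam_S[i] for i in idx)
        rhs = sum(lam_A[i] for i in idx) + sum(lam_B[:k])
        assert lhs <= rhs + 1e-9
```

For the full index set the two sides agree up to rounding, since both equal \( \operatorname{tr}A + \operatorname{tr}B \); the strict content of (III.10) lies in the proper subsets.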
As in Section III.2, it is useful to interpret the above results as perturbation theorems. The following statement for Hermitian matrices \( A, B \) can be derived from (III.11) by changing variables: \[ {\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \downarrow }\left( B\right) \prec \lambda \left( {A - B}\right) \prec {\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \uparrow }\left( B\right) . \] (III.12) This can also be written as \[ {\lambda }^{ \downarrow }\left( A\right) + {\lambda }^{ \uparrow }\left( B\right) \prec \lambda \left( {A + B}\right) \prec {\lambda }^{ \downarrow }\left( A\right) + {\lambda }^{ \downarrow }\left( B\right) . \] (III.13) In fact, the two right-hand majorisations are consequences of the weaker maximum principle of Ky Fan. As a consequence of (III.12) we have: Theorem III.4.4 Let \( A, B \) be Hermitian matrices and let \( \Phi \) be any symmetric gauge function on \( {\mathbb{R}}^{n} \) . Then \[ \Phi \left( {{\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \downarrow }\left( B\right) }\right) \leq \Phi \left( {\lambda \left( {A - B}\right) }\right) \leq \Phi \left( {{\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \uparrow }\left( B\right) }\right) . \] Note that Weyl's perturbation theorem (Corollary III.2.6) and the inequality in Exercise III.2.7 are very special cases of this theorem. The majorisations in (III.13) are significant generalisations of those in (II.35), which follow from these by restricting \( A, B \) to be diagonal matrices. Such "noncommutative" extensions exist for some other results; they are harder to prove. Some are given in this section; many more will occur later. It is convenient to adopt the following notational shorthand. 
If \( x, y, z \) are \( n \) -vectors with nonnegative coordinates, we will write
\[ \log x{ \prec }_{w}\log y\;\text{ if }\;\mathop{\prod }\limits_{{j = 1}}^{k}{x}_{j}^{ \downarrow } \leq \mathop{\prod }\limits_{{j = 1}}^{k}{y}_{j}^{ \downarrow },\;\text{ for }k = 1,\ldots, n; \] (III.14)
\[ \log x \prec \log y\;\text{ if }\;\log x{ \prec }_{w}\log y\;\text{ and }\;\mathop{\prod }\limits_{{j = 1}}^{n}{x}_{j}^{ \downarrow } = \mathop{\prod }\limits_{{j = 1}}^{n}{y}_{j}^{ \downarrow }; \] (III.15)
\[ \log x - \log z{ \prec }_{w}\log y\;\text{ if }\;\mathop{\prod }\limits_{{j = 1}}^{k}{x}_{{i}_{j}} \leq \mathop{\prod }\limits_{{j = 1}}^{k}{y}_{j}\mathop{\prod }\limits_{{j = 1}}^{k}{z}_{{i}_{j}} \] (III.16)
for all indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) . Note that we are allowing the possibility of zero coordinates in this notation.

Theorem III.4.5 (Gel'fand-Naimark) Let \( A, B \) be any two operators on \( \mathcal{H} \) . Then the singular values of \( A, B \), and \( {AB} \) satisfy the majorisation
\[ \log s\left( {AB}\right) - \log s\left( B\right) \prec \log s\left( A\right) . \] (III.17)

Proof. We will use the result of Exercise III.3.7. Fix any index \( k,1 \leq k \leq n \) . Choose any \( k \) orthonormal vectors \( {x}_{1},\ldots ,{x}_{k} \), and let \( \mathcal{W} \) be their linear span. Let \( \Phi \left( {{t}_{1},\ldots ,{t}_{k}}\right) = {t}_{1}{t}_{2}\cdots {t}_{k} \) . Express \( {AB} \) in its polar form \( {AB} = {UP} \) .
Then, denoting by \( {T}_{\mathcal{W}} \) the compression of an operator \( T \) to the subspace \( \mathcal{W} \), we have \[ \Phi \left( {{\lambda }_{1}^{2}\left( {P}_{\mathcal{W}}\right) ,\ldots ,{\lambda }_{k}^{2}\left( {P}_{\mathcal{W}}\right) }\right) = {\left| \det {P}_{\mathcal{W}}\right| }^{2} \] \[ = {\left| \det \left( \left\langle {x}_{i},{P}_{\mathcal{W}}{x}_{j}\right\rangle \right) \right| }^{2} \] \[ = {\left| \det \left( \left\langle {x}_{i}, P{x}_{j}\right\rangle \right) \right| }^{2} \] \[ = {\left| \det \left( \left\langle {A}^{ * }U{x}_{i}, B{x}_{j}\right\rangle \right) \right| }^{2}\text{.} \] Using Exercise I.5.7 we see that this is dominated by \[ \det \left( \left\langle {{A}^{ * }U{x}_{i},{A}^{ * }U{x}_{j}}\right\rangle \right) \det \left( \left\langle {B{x}_{i}, B{x}_{j}}\right\rangle \right) . \] The second of these determinants is equal to \( \det {\left( {B}^{ * }B\right) }_{\mathcal{W}} \) ; the first is equal to \( \det {\left( A{A}^{ * }\right) }_{U\mathcal{W}} \) and by Corollary III.1.5 is dominated by \( \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}^{2}\left( A\right) \) . Hence, we have \[ \Phi \left( {{\lambda }_{1}^{2}\left( {P}_{\mathcal{W}}\right) ,\ldots ,{\lambda }_{k}^{2}\left( {P}_{\mathcal{W}}\right) }\right) \leq \det {\left( {B}^{ * }B\right) }_{\mathcal{W}}\mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}^{2}\left( A\right) \] \[ = \Phi \left( {{\lambda }_{1}\left( {\left| B\right| }_{\mathcal{W}}^{2}\right) ,\ldots ,{\lambda }_{k}\left( {\left| B\right| }_{\mathcal{W}}^{2}\right) }\right) \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}^{2}\left( A\right) . 
\]
Now, using Exercise III.3.7, we can conclude that
\[ {\left( \mathop{\prod }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( P\right) \right) }^{2} \leq \mathop{\prod }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( {\left| B\right| }^{2}\right) \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}^{2}\left( A\right), \]
i.e.,
\[ \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{{i}_{j}}\left( {AB}\right) \leq \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{{i}_{j}}\left( B\right) \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) \] (III.18)
for all \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) . This, by definition, is what (III.17) says.

Remark. The statement
\[ \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( {AB}\right) \leq \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( B\right), \] (III.19)
which is a special case of (III.18), is easier to prove. It is just the statement \( \parallel { \wedge }^{k}\left( {AB}\right) \parallel \leq \parallel { \wedge }^{k}A\parallel \parallel { \wedge }^{k}B\parallel \) . If we temporarily introduce the notation \( {s}^{ \downarrow }\left( A\right) \) and \( {s}^{ \uparrow }\left( A\right) \) for the vectors whose coordinates are the singular values of \( A \) arranged in decreasing order and in increasing order, respectively, then the inequalities (III.18) and (III.19) can be combined to yield
\[ \log {s}^{ \downarrow }\left( A\right) + \log {s}^{ \uparrow }\left( B\right) \prec \log s\left( {AB}\right) \prec \log {s}^{ \downarrow }\left( A\right) + \log {s}^{ \downarrow }\left( B\right) \] (III.20)
for any two matrices \( A, B \) .
In conformity with our notation, this is a symbolic representation of the inequalities
\[ \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{{i}_{j}}\left( A\right) \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{n - {i}_{j} + 1}\left( B\right) \leq \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( {AB}\right) \leq \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( B\right) \]
for all \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) . It is illuminating to compare this with the statement (III.13) for eigenvalues of Hermitian matrices.

Corollary III.4.6 (Lidskii) Let \( A, B \) be two positive matrices. Then all eigenvalues of \( {AB} \) are nonnegative and
\[ \log {\lambda }^{ \downarrow }\left( A\right) + \log {\lambda }^{ \uparrow }\left( B\right) \prec \log \lambda \left( {AB}\right) \prec \log {\lambda }^{ \downarrow }\left( A\right) + \log {\lambda }^{ \downarrow }\left( B\right) . \] (III.21)

Proof. It is enough to prove this when \( B \) is invertible, since every positive matrix is a limit of such matrices. For invertible \( B \) we can write
\[ {AB} = {B}^{-1/2}\left( {{B}^{1/2}A{B}^{1/2}}\right) {B}^{1/2}. \]
Now \( {B}^{1/2}A{B}^{1/2} \) is positive; hence the matrix \( {AB} \), which is similar to it, has nonnegative eigenvalues. Now, from (III.20) we obtain
\[ \log {\lambda }^{ \downarrow }\left( {A}^{1/2}\right) + \log {\lambda }^{ \uparrow }\left( {B}^{1/2}\right) \prec \log s\left( {{A}^{1/2}{B}^{1/2}}\right) \prec \log {\lambda }^{ \downarrow }\left( {A}^{1/2}\right) + \log {\lambda }^{ \downarrow }\left( {B}^{1/2}\right) . \] (III.22)
But \( {s}^{2}\left( {{A}^{1/2}{B}^{1/2}}\right) = {\lambda }^{ \downarrow }\left( {{B}^{1/2}A{B}^{1/2}}\right) = {\lambda }^{ \downarrow }\left( {AB}\right) \) . So, the majorisations (III.21) follow from (III.22).

## III. 5 Eigenvalues of Real Parts and Singular Values

The Cartesian decomposition \( A = \operatorname{Re}A + i\operatorname{Im}A \) of a matrix \( A \) associates with it two Hermitian matrices \( \operatorname{Re}A = \frac{A + {A}^{ * }}{2} \) and \( \operatorname{Im}A = \frac{A - {A}^{ * }}{2i} \) . It is of interest to know relationships between the eigenvalues of these matrices, those of \( A \), and the singular values of \( A \) . Weyl's majorant theorem (Theorem II.3.6) provides one such relationship:
\[ \log \left| {\lambda \left( A\right) }\right| \prec \log s\left( A\right) . \]
Some others, whose proofs are in the same spirit as others in this chapter, are given below.

Proposition III.5.1 (Fan-Hoffman) For every matrix \( A \) ,
\[ {\lambda }_{j}^{ \downarrow }\left( {\operatorname{Re}A}\right) \leq {s}_{j}\left( A\right) \;\text{ for all }\;j = 1,\ldots, n. \]
Proof. Let \( {x}_{j} \) be eigenvectors of \( \operatorname{Re}A \) belonging to its eigenvalues \( {\lambda }_{j}^{ \downarrow }\left( {\operatorname{Re}A}\right) \) and \( {y}_{j} \) eigenvectors of \( \left| A\right| \) belonging to its eigenvalues \( {s}_{j}\left( A\right) ,1 \leq j \leq n \) . For each \( j \) consider the spaces \( \operatorname{span}\left\{ {{x}_{1},\ldots ,{x}_{j}}\right\} \) and \( \operatorname{span}\left\{ {{y}_{j},\ldots ,{y}_{n}}\right\} \) . Their dimensions add up to \( n + 1 \), so they have a nonzero intersection. If \( x \) is a unit vector in their intersection, then
\[ {\lambda }_{j}^{ \downarrow }\left( {\operatorname{Re}A}\right) \leq \langle x,\left( {\operatorname{Re}A}\right) x\rangle = \operatorname{Re}\langle x,{Ax}\rangle \leq \left| {\langle x,{Ax}\rangle }\right| \leq \parallel {Ax}\parallel = {\left\langle x,{A}^{ * }Ax\right\rangle }^{1/2} \leq {s}_{j}\left( A\right) . \]
Exercise III.5.2 (i) Let \( A \) be the \( 2 \times 2 \) matrix \( \left( \begin{array}{ll} 1 & 1 \\ 0 & 0 \end{array}\right) \) .
Then \( {s}_{2}\left( A\right) = 0 \) , but \( \operatorname{Re}A \) has two nonzero eigenvalues. Hence the vector \( {\left| \lambda \left( \operatorname{Re}A\right) \right| }^{ \downarrow } \) is not dominated by the vector \( s\left( A\right) \) . (ii) However, note that \( \left| {\lambda \left( {\operatorname{Re}A}\right) }\right| { \prec }_{w}s\left( A\right) \) for every matrix \( A \) . (Use the triangle inequality for \( {Ky} \) Fan norms.) Proposition III.5.3 (Ky Fan) For every matrix \( A \) we have \[ \operatorname{Re}\lambda \left( A\right) \prec \lambda \left( {\operatorname{Re}A}\right) . \] Proof. Arrange the eigenvalues \( {\lambda }_{j}\left( A\right) \) in such a way that \[ \operatorname{Re}{\lambda }_{1}\left( A\right) \geq \operatorname{Re}{\lambda }_{2}\left( A\right) \geq \cdots \geq \operatorname{Re}{\lambda }_{n}\left( A\right) . \]
Let \( {x}_{1},\ldots ,{x}_{n} \) be an orthonormal Schur-basis for \( A \) such that \( {\lambda }_{j}\left( A\right) = \left\langle {{x}_{j}, A{x}_{j}}\right\rangle \) . Then \( \overline{{\lambda }_{j}\left( A\right) } = \left\langle {{x}_{j},{A}^{ * }{x}_{j}}\right\rangle \) . Let \( \mathcal{W} = \operatorname{span}\left\{ {{x}_{1},\ldots ,{x}_{k}}\right\} \) . Then
\[ \mathop{\sum }\limits_{{j = 1}}^{k}\operatorname{Re}{\lambda }_{j}\left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j},\left( {\operatorname{Re}A}\right) {x}_{j}}\right\rangle = \operatorname{tr}{\left( \operatorname{Re}A\right) }_{\mathcal{W}} = \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}\left( {\left( \operatorname{Re}A\right) }_{\mathcal{W}}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( {\operatorname{Re}A}\right) . \]
Exercise III.5.4 Give another proof of Proposition III.5.3 using Schur's theorem (given in Exercise II.1.12).
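Proposition III.5.3 can be spot-checked numerically as well. The Python sketch below (an illustration only; the helpers `eig_general` and `eig_hermitian` are ad hoc) verifies, for random \( 2 \times 2 \) complex matrices, the two conditions defining the majorisation \( \operatorname{Re}\lambda \left( A\right) \prec \lambda \left( {\operatorname{Re}A}\right) \) : the partial sums are dominated, and the totals agree (both equal \( \operatorname{Re}\operatorname{tr}A \)).

```python
import cmath
import math
import random

def eig_general(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the quadratic formula."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return [(tr + disc) / 2, (tr - disc) / 2]

def eig_hermitian(p, q, w):
    """Eigenvalues of the Hermitian matrix [[p, w], [conj(w), q]] (p, q real), decreasing."""
    mean = (p + q) / 2.0
    radius = math.hypot((p - q) / 2.0, abs(w))
    return [mean + radius, mean - radius]

random.seed(1)
for _ in range(1000):
    a, b, c, d = (complex(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(4))
    re_lam = sorted((z.real for z in eig_general(a, b, c, d)), reverse=True)
    # Re A = (A + A*)/2 has diagonal (Re a, Re d) and off-diagonal entry w = (b + conj(c))/2
    mu = eig_hermitian(a.real, d.real, (b + c.conjugate()) / 2)
    assert re_lam[0] <= mu[0] + 1e-9          # partial sum (k = 1) is dominated
    assert abs(sum(re_lam) - sum(mu)) < 1e-9  # totals agree: both equal Re tr A
```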
Exercise III.5.5 (i) Let \( X, Y \) be Hermitian matrices. Suppose that their eigenvalues can be indexed as \( {\lambda }_{j}\left( X\right) \) and \( {\lambda }_{j}\left( Y\right) ,1 \leq j \leq n \), in such a way that \( {\lambda }_{j}\left( X\right) \leq {\lambda }_{j}\left( Y\right) \) for all \( j \) . Then there exists a unitary \( U \) such that \( X \leq {U}^{ * }{YU} \) .
(ii) For every matrix \( A \) there exists a unitary matrix \( U \) such that \( \operatorname{Re}A \leq {U}^{ * }\left| A\right| U \) .

An interesting consequence of Proposition III.5.1 is the following version of the triangle inequality for the matrix absolute value:

Theorem III.5.6 (R.C. Thompson) Let \( A, B \) be any two matrices. Then there exist unitary matrices \( U, V \) such that
\[ \left| {A + B}\right| \leq U\left| A\right| {U}^{ * } + V\left| B\right| {V}^{ * }. \]
Proof. Let \( A + B = W\left| {A + B}\right| \) be a polar decomposition of \( A + B \) . Then we can write
\[ \left| {A + B}\right| = {W}^{ * }\left( {A + B}\right) = \operatorname{Re}{W}^{ * }\left( {A + B}\right) = \operatorname{Re}{W}^{ * }A + \operatorname{Re}{W}^{ * }B. \]
Now use Exercise III.5.5(ii).

Exercise III.5.7 (i) Find \( 2 \times 2 \) matrices \( A, B \) such that the inequality \( \left| {A + B}\right| \leq \left| A\right| + \left| B\right| \) is false for them. (ii) Find \( 2 \times 2 \) matrices \( A, B \) for which there does not exist any unitary matrix \( U \) such that \( \left| {A + B}\right| \leq U\left( {\left| A\right| + \left| B\right| }\right) {U}^{ * } \) .

## III. 6 Problems

Problem III.6.1.
(The minimax principle for singular values) For any operator \( A \) on \( \mathcal{H} \) we have
\[ {s}_{j}\left( A\right) = \mathop{\max }\limits_{{\mathcal{M} : \dim \mathcal{M} = j}}\mathop{\min }\limits_{{x \in \mathcal{M},\parallel x\parallel = 1}}\parallel {Ax}\parallel = \mathop{\min }\limits_{{\mathcal{N} : \dim \mathcal{N} = n - j + 1}}\mathop{\max }\limits_{{x \in \mathcal{N},\parallel x\parallel = 1}}\parallel {Ax}\parallel \]
for \( 1 \leq j \leq n \) .

Problem III.6.2. Let \( A, B \) be any two operators. Then
\[ {s}_{j}\left( {AB}\right) \leq \parallel B\parallel {s}_{j}\left( A\right) \]
\[ {s}_{j}\left( {AB}\right) \leq \parallel A\parallel {s}_{j}\left( B\right) \]
for \( 1 \leq j \leq n \) .

Problem III.6.3. For \( j = 0,1,\ldots, n \), let
\[ {\mathcal{R}}_{j} = \{ T \in \mathcal{L}\left( \mathcal{H}\right) : \operatorname{rank}T \leq j\} . \]
Show that for \( j = 1,2,\ldots, n \) ,
\[ {s}_{j}\left( A\right) = \mathop{\min }\limits_{{T \in {\mathcal{R}}_{j - 1}}}\parallel A - T\parallel . \]

Problem III.6.4. Show that if \( A \) is any operator and \( H \) is any operator of rank \( k \), then
\[ {s}_{j}\left( A\right) \geq {s}_{j + k}\left( {A + H}\right) ,\;j = 1,2,\ldots, n - k. \]

Problem III.6.5. For any two operators \( A, B \) and any two indices \( i, j \) such that \( i + j \leq n + 1 \), we have
\[ {s}_{i + j - 1}\left( {A + B}\right) \leq {s}_{i}\left( A\right) + {s}_{j}\left( B\right) \]
\[ {s}_{i + j - 1}\left( {AB}\right) \leq {s}_{i}\left( A\right) {s}_{j}\left( B\right) \]

Problem III.6.6. Show that for every operator \( A \) and for each \( k = 1,2,\ldots, n \), we have
\[ \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) = \max \left| {\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{y}_{j}, A{x}_{j}}\right\rangle }\right| \]
where the maximum is over all choices of orthonormal \( k \) -tuples \( {x}_{1},\ldots ,{x}_{k} \) and \( {y}_{1},\ldots ,{y}_{k} \) .
This can also be written as \[ \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) = \max \left| {\mathop{\sum }\limits_{{j = 1}}^{k}\left\langle {{x}_{j},{UA}{x}_{j}}\right\rangle }\right| \] where the maximum is taken over all choices of unitary operators \( U \) and orthonormal \( k \) -tuples \( {x}_{1},\ldots ,{x}_{k} \) . Note that for \( k = 1 \) this reduces to the statement \[ \parallel A\parallel = \mathop{\max }\limits_{{\parallel x\parallel = \parallel y\parallel = 1}}\left| {\langle y,{Ax}\rangle }\right| . \] For \( k = 1,2,\ldots, n \), the above extremal representations can be used to give another proof of the fact that the expressions \( \parallel A{\parallel }_{\left( k\right) } = \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) \) are norms. (See Exercise II.1.15.) Problem III.6.7. Let \( A = \left( {a}_{ij}\right) \) be a Hermitian matrix. For each \( i = 1,\ldots, n \), let \[ {r}_{i} = {\left( \mathop{\sum }\limits_{{j \neq i}}{\left| {a}_{ij}\right| }^{2}\right) }^{1/2}. \] Show that each interval \( \left\lbrack {{a}_{ii} - {r}_{i},{a}_{ii} + {r}_{i}}\right\rbrack \) contains at least one eigenvalue of \( A \) . Problem III.6.8. Let \( {\alpha }_{1} \geq {\alpha }_{2} \geq \cdots \geq {\alpha }_{n} \) be the eigenvalues of a Hermitian matrix \( A \) . We have seen that the \( n - 1 \) eigenvalues of any principal submatrix of \( A \) interlace with these numbers.
If \( {\delta }_{1} \geq {\delta }_{2} \geq \cdots \geq {\delta }_{n - 1} \) are the roots of the polynomial that is the derivative of the characteristic polynomial of \( A \), then by Rolle’s Theorem we have \[ {\alpha }_{1} \geq {\delta }_{1} \geq {\alpha }_{2} \geq \cdots \geq {\delta }_{n - 1} \geq {\alpha }_{n}. \] Show that for each \( j \) there exists a principal submatrix \( B \) of \( A \) for which \( {\alpha }_{j} \geq {\lambda }_{j}^{ \downarrow }\left( B\right) \geq {\delta }_{j} \), and another principal submatrix \( C \) for which \( {\delta }_{j} \geq {\lambda }_{j}^{ \downarrow }\left( C\right) \geq {\alpha }_{j + 1} \) . Problem III.6.9. Most of the results in this chapter gave descriptions of eigenvalues of a Hermitian operator in terms of the numbers \( \langle x,{Ax}\rangle \) as \( x \) varies over unit vectors. Sometimes in computational problems an "approximate" eigenvalue \( \lambda \) and an "approximate" eigenvector \( x \) are already known. The number \( \langle x,{Ax}\rangle \) can then be used to further refine this information. For a given unit vector \( x \), let \( \rho = \langle x,{Ax}\rangle \) and \( \varepsilon = \parallel \left( {A - \rho }\right) x\parallel \) . (i) Let \( \left( {a, b}\right) \) be an open interval that contains \( \rho \) but does not contain any eigenvalue of \( A \) . Show that \[ \left( {b - \rho }\right) \left( {\rho - a}\right) \leq {\varepsilon }^{2}. \] (ii) Show that there exists an eigenvalue \( \alpha \) of \( A \) such that \( \left| {\alpha - \rho }\right| \leq \varepsilon \) . Problem III.6.10. Let \( \rho \) and \( \varepsilon \) be defined as in the above problem. Let \( \left( {a, b}\right) \) be an open interval that contains \( \rho \) and only one eigenvalue \( \alpha \) of \( A \) . Then \[ \rho - \frac{{\varepsilon }^{2}}{\rho - a} \leq \alpha \leq \rho + \frac{{\varepsilon }^{2}}{b - \rho }. \] This is called the Kato-Temple inequality.
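The Kato-Temple bound is easy to test numerically. A minimal sketch, assuming numpy is available; the matrix, the trial vector, and the interval \( \left( {a, b}\right) \) below are illustrative choices, not from the text:

```python
import numpy as np

# A small Hermitian (here real symmetric) matrix with well-separated eigenvalues.
A = np.array([[2.0, 0.1, 0.0],
              [0.1, 5.0, 0.1],
              [0.0, 0.1, 9.0]])

x = np.array([0.0, 1.0, 0.0])          # "approximate" eigenvector, ||x|| = 1
rho = x @ A @ x                        # Rayleigh quotient <x, Ax>
eps = np.linalg.norm(A @ x - rho * x)  # residual norm ||(A - rho)x||

eigs = np.linalg.eigvalsh(A)

# Problem III.6.9(ii): some eigenvalue of A lies within eps of rho.
assert min(abs(eigs - rho)) <= eps

# Kato-Temple: (a, b) contains rho and exactly one eigenvalue alpha of A.
a, b = 3.0, 7.0
alpha = eigs[np.argmin(abs(eigs - rho))]
lower = rho - eps**2 / (rho - a)
upper = rho + eps**2 / (b - rho)
assert lower <= alpha <= upper
```

Here \( \varepsilon \approx {0.14} \) while the Kato-Temple window has half-width \( {\varepsilon }^{2}/2 = {0.01} \), illustrating the improvement obtained when \( \rho - a \) and \( b - \rho \) are much larger than \( \varepsilon \) .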
Note that if \( \rho - a \) and \( b - \rho \) are much larger than \( \varepsilon \), then this improves the inequality in part (ii) of Problem III.6.9. Problem III.6.11. Show that for every Hermitian matrix \( A \) \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( A\right) = \mathop{\max }\limits_{{U{U}^{ * } = {I}_{k}}}\operatorname{tr}{UA}{U}^{ * } \] \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \uparrow }\left( A\right) = \mathop{\min }\limits_{{U{U}^{ * } = {I}_{k}}}\operatorname{tr}{UA}{U}^{ * } \] for \( 1 \leq k \leq n \), where the extrema are taken over \( k \times n \) matrices \( U \) that satisfy \( U{U}^{ * } = {I}_{k} \), \( {I}_{k} \) being the \( k \times k \) identity matrix. Show that if \( A \) is positive, then \[ \mathop{\prod }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \downarrow }\left( A\right) = \mathop{\max }\limits_{{U{U}^{ * } = {I}_{k}}}\det {UA}{U}^{ * } \] \[ \mathop{\prod }\limits_{{j = 1}}^{k}{\lambda }_{j}^{ \uparrow }\left( A\right) = \mathop{\min }\limits_{{U{U}^{ * } = {I}_{k}}}\det {UA}{U}^{ * } \] (See Problem I.6.15.) Problem III.6.12. Let \( A, B \) be any matrices. Then \[ \mathop{\sum }\limits_{{j = 1}}^{n}{s}_{j}\left( A\right) {s}_{j}\left( B\right) = \mathop{\sup }\limits_{{U, V}}\left| {\operatorname{tr}{UAVB}}\right| = \mathop{\sup }\limits_{{U, V}}\operatorname{Re}\operatorname{tr}{UAVB} \] where \( U, V \) vary over all unitary matrices. Problem III.6.13. (Perturbation theorem for singular values) Let \( A, B \) be any \( n \times n \) matrices and let \( \Phi \) be any symmetric gauge function on \( {\mathbb{R}}^{n} \) . Then \[ \Phi \left( {s\left( A\right) - s\left( B\right) }\right) { \prec }_{w}\Phi \left( {s\left( {A - B}\right) }\right) . \] In particular, \[ \max \left| {{s}_{j}\left( A\right) - {s}_{j}\left( B\right) }\right| \leq \parallel A - B\parallel . \] [Hint: See Theorem III.4.4 and Exercise II.1.15.] Problem III.6.14.
For positive matrices \( A, B \) show that \[ {\lambda }^{ \downarrow }\left( A\right) \cdot {\lambda }^{ \uparrow }\left( B\right) \prec \lambda \left( {AB}\right) \prec {\lambda }^{ \downarrow }\left( A\right) \cdot {\lambda }^{ \downarrow }\left( B\right) . \] For Hermitian matrices \( A, B \) show that \[ \left\langle {{\lambda }^{ \downarrow }\left( A\right) ,{\lambda }^{ \uparrow }\left( B\right) }\right\rangle \leq \operatorname{tr}{AB} \leq \left\langle {{\lambda }^{ \downarrow }\left( A\right) ,{\lambda }^{ \downarrow }\left( B\right) }\right\rangle \] (Compare these with (II.36) and (II.37).) Problem III.6.15. Let \( A, B \) be Hermitian matrices. Use the second part of Problem III.6.14 to show that \[ {\begin{Vmatrix}{\operatorname{Eig}}^{ \downarrow }A - {\operatorname{Eig}}^{ \downarrow }B\end{Vmatrix}}_{2} \leq \parallel A - B{\parallel }_{2} \leq {\begin{Vmatrix}{\operatorname{Eig}}^{ \downarrow }A - {\operatorname{Eig}}^{ \uparrow }B\end{Vmatrix}}_{2}. \] Note the analogy between this and Theorem III.2.8. (In Chapter IV we will see that both these results are true for a whole family of norms called unitarily invariant norms. This more general result is a consequence of Theorem III.4.4.)

## III. 7 Notes and References

As pointed out in Exercise III.1.6, many of the results in Sections III.1 and III.2 could be derived from each other. Hence, it seems fair to say that the variational principles for eigenvalues originated with A.L. Cauchy's interlacing theorem. A pertinent reference is Sur l'équation à l'aide de laquelle on détermine les inégalités séculaires des mouvements des planètes, 1829, in A.L. Cauchy, Oeuvres Complètes (IIe Série), Volume 9, Gauthier-Villars. The minimax principle was first stated by E. Fischer, Über quadratische Formen mit reellen Koeffizienten, Monatsh. Math. Phys., 16 (1905) 234-249. The monotonicity principle and many of the results of Section III.2 were proved by H.
Weyl in Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen, Math. Ann., 71 (1911) 441-469. In a series of papers beginning with Über die Eigenwerte bei den Differentialgleichungen der mathematischen Physik, Math. Z., 7 (1920) 1-57, R. Courant exploited the full power of the minimax principle. Thus the principle is often described as the Courant-Fischer-Weyl principle. As the titles of these papers suggest, the variational principles for eigenvalues were discovered in connection with problems of physics. One famous work where many of these were used is The Theory of Sound by Lord Rayleigh, reprinted by Dover in 1945. The modern applied mathematics classic Methods of Mathematical Physics by R. Courant and D. Hilbert, Wiley, 1953, is replete with applications of variational principles. For a still more recent source, see M. Reed and B. Simon, Methods of Modern Mathematical Physics, Volume 4, Academic Press, 1978. Of course, here most of the interest is in infinite-dimensional problems, and consequently the results are much more complicated. The numerical analyst could turn to B.N. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, 1980, and to G.W. Stewart and J.-G. Sun, Matrix Perturbation Theory, Academic Press, 1990. The converse to the interlacing theorem given in Theorem III.1.9 was first proved in L. Mirsky, Matrices with prescribed characteristic roots and diagonal elements, J. London Math. Soc., 33 (1958) 14-21. We do not know whether the similar question for higher dimensional compressions has been answered. More precisely, let \( {\alpha }_{1} \geq \cdots \geq {\alpha }_{n} \) and \( {\beta }_{1} \geq \cdots \geq {\beta }_{n} \) be real numbers such that \( \sum {\alpha }_{j} = \sum {\beta }_{j} \) .
What conditions must these numbers satisfy so that there exists an orthogonal projection \( P \) of rank \( k \) such that the matrix \( A = \operatorname{diag}\left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) when compressed to range \( P \) has eigenvalues \( {\beta }_{1},\ldots ,{\beta }_{k} \) and when compressed to \( {\left( \operatorname{range}P\right) }^{ \bot } \) has eigenvalues \( {\beta }_{k + 1},\ldots ,{\beta }_{n} \) ? (Theorem III.1.9 is the case \( k = n - 1 \) .) Aronszajn's inequality appeared in N. Aronszajn, Rayleigh-Ritz and A. Weinstein methods for approximation of eigenvalues. I. Operators in a Hilbert space, Proc. Nat. Acad. Sci. U.S.A., 34 (1948) 474-480. The elegant proof of its equivalence to Weyl’s inequality is due to H.W. Wielandt, Topics in the Analytic Theory of Matrices, mimeographed lecture notes, University of Wisconsin, 1967. Theorem III.3.5 was proved in H.W. Wielandt, An extremum property of sums of eigenvalues, Proc. Amer. Math. Soc., 6 (1955) 106-110. The motivation for Wielandt was that he "did not succeed in completing the interesting sketch of a proof given by Lidskii" of the statement given in Exercise III.4.3. He noted that this is equivalent to what we have stated as Theorem III.4.1, and derived it from his new minimax principle. Interestingly, now several different proofs of Lidskii's Theorem are known. The second proof given in Section III.4 is due to M.F. Smiley, Inequalities related to Lidskii's, Proc. Amer. Math. Soc., 19 (1968) 1029-1034. We will see some other proofs later. However, Theorem III.3.5 is more general, has several other applications, and has led to a lot of research. An account of the earlier work on these questions may be found in A.R. Amir-Moez, Extreme Properties of Linear Transformations and Geometry in Unitary Spaces, Texas Tech. University, 1968, from which our treatment of Section III.3 has been adapted. An attempt to extend these ideas to infinite dimensions was made in R.C.
Riddell, Minimax problems on Grassmann manifolds, Advances in Math., 54 (1984) 107-199, where connections with differential geometry and some problems in quantum physics are also developed. The tower of subspaces occurring in Theorem III.3.5 suggests a connection with Schubert calculus in algebraic geometry. This connection is yet to be fully understood. Lidskii's Theorem has an interesting history. It appeared first in V.B. Lidskii, On the proper values of a sum and product of symmetric matrices, Dokl. Akad. Nauk SSSR, 75 (1950) 769-772. It seems that Lidskii provided an elementary (matrix analytic) proof of the result which F. Berezin and I.M. Gel’fand had proved by more advanced (Lie theoretic) techniques in connection with their work that appeared later in Some remarks on the theory of spherical functions on symmetric Riemannian manifolds, Trudi Moscow Math. Ob., 5 (1956) 311-351. As mentioned above, difficulties with this "elementary" proof led Wielandt to the discovery of his minimax principle. Among the several directions this work opened up, one led to the following question. What relations must three \( n \) -tuples of real numbers satisfy in order to be the eigenvalues of some Hermitian matrices \( A, B \) and \( A + B \) ? Necessary conditions are given by Theorem III.4.1. Many more were discovered by others. A. Horn, Eigenvalues of sums of Hermitian matrices, Pacific J. Math., 12(1962) 225-242, derived necessary and sufficient conditions in the above problem for the case \( n = 4 \), and wrote down a set of conditions which he conjectured would be necessary and sufficient for \( n > 4 \) . In a short paper Spectral polyhedron of a sum of two Hermitian matrices, Functional Analysis and Appl., 10 (1982) 76-77, B.V. Lidskii has sketched a "proof" establishing Horn's conjecture. This proof, however, needs a lot of details to be filled in; these have not yet been published by B.V. Lidskii (or anyone else). When should a theorem be considered to be proved? 
For an interesting discussion of this question, see S. Smale, The fundamental theorem of algebra and complexity theory, Bull. Amer. Math. Soc. (New Series), 4 (1981) 1-36. Theorem III.4.5 was proved in I.M. Gel'fand and M. Naimark, The relation between the unitary representations of the complex unimodular group and its unitary subgroup, Izv. Akad. Nauk SSSR Ser. Mat., 14 (1950) 239-260. Many of the questions concerning eigenvalues and singular values of sums and products were first framed in this paper. An excellent summary of these results can be found in A.S. Markus, The eigen- and singular values of the sum and product of linear operators, Russian Math. Surveys, 19 (1964) 92-120. The structure of inequalities like (III.10) and (III.18) was carefully analysed in several papers by R.C. Thompson and his students.
The asymmetric way in which \( A \) and \( B \) enter (III.10) is remedied by one of their inequalities, which says \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j} + {p}_{j} - j}^{ \downarrow }\left( {A + B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{i}_{j}}^{ \downarrow }\left( A\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{\lambda }_{{p}_{j}}^{ \downarrow }\left( B\right) \] for any indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \), \( 1 \leq {p}_{1} < \cdots < {p}_{k} \leq n \), such that \( {i}_{k} + {p}_{k} - k \leq n \) . A similar generalisation of (III.18) has also been proved. References to this work may be found in the book by Marshall and Olkin cited in Chapter II. Proposition III.5.1 is proved in K. Fan and A.J. Hoffman, Some metric inequalities in the space of matrices, Proc. Amer. Math. Soc., 6 (1955) 111-116. Results of Proposition III.5.3, Problems III.6.5, III.6.6, III.6.11, and III.6.12 were first proved by Ky Fan in several papers. References to these may be found in I.C. Gohberg and M.G. Krein, Introduction to the Theory of Linear Nonselfadjoint Operators, American Math. Society, 1969, and in the Marshall-Olkin book cited earlier. The matrix triangle inequality (Theorem III.5.6) was proved in R.C. Thompson, Convex and concave functions of singular values of matrix sums, Pacific J. Math., 66 (1976) 285-290. An extension to infinite dimensions was attempted in C. Akemann, J. Anderson, and G. Pedersen, Triangle inequalities in operator algebras, Linear and Multilinear Algebra, 11 (1982) 167-178. For operators \( A, B \) on an infinite-dimensional Hilbert space there exist isometries \( U, V \) such that \[ \left| {A + B}\right| \leq U\left| A\right| {U}^{ * } + V\left| B\right| {V}^{ * }. \] Also, for each \( \varepsilon > 0 \) there exist unitaries \( U, V \) such that \[ \left| {A + B}\right| \leq U\left| A\right| {U}^{ * } + V\left| B\right| {V}^{ * } + {\varepsilon I}.
\] It is not known whether the \( \varepsilon \) part in the last statement is necessary. Refinements of the interlacing principle such as the one in Problem III.6.8 have been obtained by several authors, including R.C. Thompson. See, for example, his paper Principal submatrices II, Linear Algebra Appl., 1 (1968) 211-243. One may wonder whether there are interlacing theorems for singular values. There are, although they are a little different from the ones for eigenvalues. This is best understood if we extend the definition of singular values to rectangular matrices. Let \( A \) be an \( m \times n \) matrix. Let \( r = \min \left( {m, n}\right) \) . The \( r \) numbers that are the common eigenvalues of \( {\left( {A}^{ * }A\right) }^{1/2} \) and \( {\left( A{A}^{ * }\right) }^{1/2} \) are called the singular values of \( A \) . (Sometimes a sequence of zeroes is added to make \( \max \left( {m, n}\right) \) singular values in all.) Many of the results for singular values that we have proved can be carried over to this setting. See, e.g., the books by Horn and Johnson cited in Chapter I. Let \( A \) be a rectangular matrix and let \( B \) be a matrix obtained by deleting any row or any column of \( A \) . Then the minimax principle can be used to prove that the singular values of \( A \) and \( B \) interlace. The reader should work this out, and see that when \( A \) is an \( n \times n \) matrix and \( B \) a principal submatrix of order \( n - 1 \), this gives \[ \begin{matrix} {s}_{1}\left( A\right) & \geq & {s}_{1}\left( B\right) & \geq & {s}_{3}\left( A\right) , \\ {s}_{2}\left( A\right) & \geq & {s}_{2}\left( B\right) & \geq & {s}_{4}\left( A\right) , \\ \cdots & & \cdots & & \cdots \\ {s}_{n - 2}\left( A\right) & \geq & {s}_{n - 2}\left( B\right) & \geq & {s}_{n}\left( A\right) , \\ {s}_{n - 1}\left( A\right) & \geq & {s}_{n - 1}\left( B\right) & \geq & 0. \end{matrix} \] For more such results, see R.C. Thompson, Principal submatrices IX, Linear Algebra and Appl., 5 (1972) 1-12. Inequalities like the ones in Problems III.6.9 and III.6.10 are called "residual bounds" in the numerical analysis literature. For more such results, see the book by Parlett cited above, and F. Chatelin, Spectral Approximation of Linear Operators, Academic Press, 1983. Several refinements, extensions, and applications of these results in atomic physics are described in the book by Reed and Simon cited above. The results of Theorem III.4.4 and Problem III.6.13 were noted by L. Mirsky, Symmetric gauge functions and unitarily invariant norms, Quart. J. Math. Oxford Ser. (2), 11 (1960) 50-59. This paper contains a lucid survey of several related problems and has stimulated a lot of research. The inequalities in Problem III.6.15 were first stated in K. Löwner, Über monotone Matrixfunktionen, Math. Z., 38 (1934) 177-216. Let \( A = {UP} \) be a polar decomposition of \( A \) . Weyl’s majorant theorem gives a relationship between the eigenvalues of \( A \) and those of \( P \) (the singular values of \( A \) ). A relation between the eigenvalues of \( A \) and those of \( U \) was proved by A. Horn and R. Steinberg, Eigenvalues of the unitary part of a matrix, Pacific J. Math., 9 (1959) 541-550. This is in the form of a majorisation between the arguments of the eigenvalues: \[ \arg \lambda \left( A\right) \prec \arg \lambda \left( U\right) . \] A theorem very much like Theorems III.4.1 and III.4.5 was proved by A. Nudel’man and P. Svarcman, The spectrum of a product of unitary matrices, Uspehi Mat. Nauk, 13 (1958) 111-117. Let \( A, B \) be unitary matrices.
Label the eigenvalues of \( A, B \), and \( {AB} \) as \( {e}^{i{\alpha }_{1}},\ldots ,{e}^{i{\alpha }_{n}};{e}^{i{\beta }_{1}},\ldots ,{e}^{i{\beta }_{n}} \), and \( {e}^{i{\gamma }_{1}},\ldots ,{e}^{i{\gamma }_{n}} \), respectively, in such a way that \[ {2\pi } > {\alpha }_{1} \geq \cdots \geq {\alpha }_{n} \geq 0 \] \[ {2\pi } > {\beta }_{1} \geq \cdots \geq {\beta }_{n} \geq 0 \] \[ {2\pi } > {\gamma }_{1} \geq \cdots \geq {\gamma }_{n} \geq 0 \] If \( {\alpha }_{1} + {\beta }_{1} < {2\pi } \), then for any choice of indices \( 1 \leq {i}_{1} < \cdots < {i}_{k} \leq n \) we have \[ \mathop{\sum }\limits_{{j = 1}}^{k}{\gamma }_{{i}_{j}} \leq \mathop{\sum }\limits_{{j = 1}}^{k}{\alpha }_{{i}_{j}} + \mathop{\sum }\limits_{{j = 1}}^{k}{\beta }_{j} \] These inequalities can also be written in the form of a majorisation between \( n \) -vectors: \[ \gamma - \alpha \prec \beta \] For a generalisation in the same spirit as the one of inequalities (III.10) and (III.18) mentioned earlier, see R.C. Thompson, On the eigenvalues of a product of unitary matrices, Linear and Multilinear Algebra, 2 (1974) 13-24.

## IV Symmetric Norms

In this chapter we study norms on the space of matrices that are invariant under multiplication by unitaries. Their properties are closely linked to those of symmetric gauge functions on \( {\mathbb{R}}^{n} \) . We also study norms that are invariant under unitary conjugations. Some of the inequalities proved in earlier chapters lead to inequalities involving these norms.

## IV. 1 Norms on \( {\mathbb{C}}^{n} \)

Let us begin by considering the familiar \( p \) -norms frequently used in analysis.
For a vector \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) we define \[ \parallel x{\parallel }_{p} = {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{p}\right) }^{1/p},\;1 \leq p < \infty , \] (IV.1) \[ \parallel x{\parallel }_{\infty } = \mathop{\max }\limits_{{1 \leq i \leq n}}\left| {x}_{i}\right| \] (IV.2) For each \( 1 \leq p \leq \infty \), \( \parallel x{\parallel }_{p} \) defines a norm on \( {\mathbb{C}}^{n} \) . These are called the \( p \) -norms or the \( {l}_{p} \) -norms. The notation (IV.2) is justified because of the fact that \[ \parallel x{\parallel }_{\infty } = \mathop{\lim }\limits_{{p \rightarrow \infty }}\parallel x{\parallel }_{p} \] (IV.3) Some of the pleasant properties of this family of norms are \[ \parallel x{\parallel }_{p} = \parallel \left| x\right| {\parallel }_{p}\;\text{ for all }x \in {\mathbb{C}}^{n}, \] (IV.4) \[ \parallel x{\parallel }_{p} \leq \parallel y{\parallel }_{p}\;\text{ if }\left| x\right| \leq \left| y\right| , \] (IV.5) \[ \parallel x{\parallel }_{p} = \parallel {Px}{\parallel }_{p}\;\text{ for all }x \in {\mathbb{C}}^{n}, P \in {S}_{n}. \] (IV.6) (Recall the notations: \( \left| x\right| = \left( {\left| {x}_{1}\right| ,\ldots ,\left| {x}_{n}\right| }\right) \), and \( \left| x\right| \leq \left| y\right| \) if \( \left| {x}_{j}\right| \leq \left| {y}_{j}\right| \) for \( 1 \leq j \leq n \) . \( {S}_{n} \) is the set of permutation matrices.) A norm on \( {\mathbb{C}}^{n} \) is called gauge invariant or absolute if it satisfies the condition (IV.4), monotone if it satisfies (IV.5), and permutation invariant or symmetric if it satisfies (IV.6). The first two of these conditions turn out to be equivalent: Proposition IV.1.1 A norm on \( {\mathbb{C}}^{n} \) is gauge invariant if and only if it is monotone. Proof. Monotonicity clearly implies gauge invariance. Conversely, if a norm \( \parallel \cdot \parallel \) is gauge invariant, then to show that it is monotone it is enough to show that \( \parallel x\parallel \leq \parallel y\parallel \) whenever \( {x}_{j} = {t}_{j}{y}_{j} \) for some real numbers \( 0 \leq {t}_{j} \leq 1, j = 1,2,\ldots, n \) .
Further, it suffices to consider the special case when all \( {t}_{j} \) except one are equal to 1 . But then \[ \begin{Vmatrix}\left( {{y}_{1},\ldots, t{y}_{k},\ldots ,{y}_{n}}\right) \end{Vmatrix} \] \[ = \begin{Vmatrix}\left( {\frac{1 + t}{2}{y}_{1} + \frac{1 - t}{2}{y}_{1},\ldots ,\frac{1 + t}{2}{y}_{k} - \frac{1 - t}{2}{y}_{k},\ldots ,\frac{1 + t}{2}{y}_{n} + \frac{1 - t}{2}{y}_{n}}\right) \end{Vmatrix} \] \[ \leq \frac{1 + t}{2}\begin{Vmatrix}\left( {{y}_{1},\ldots ,{y}_{n}}\right) \end{Vmatrix} + \frac{1 - t}{2}\begin{Vmatrix}\left( {{y}_{1},\ldots , - {y}_{k},\ldots ,{y}_{n}}\right) \end{Vmatrix} \] \[ = \begin{Vmatrix}\left( {{y}_{1},\ldots ,{y}_{n}}\right) \end{Vmatrix}\text{.} \] Example IV.1.2 Consider the following norms on \( {\mathbb{R}}^{2} \) : (i) \( \parallel x\parallel = \left| {x}_{1}\right| + \left| {x}_{2}\right| + \left| {{x}_{1} - {x}_{2}}\right| \) . (ii) \( \parallel x\parallel = \left| {x}_{1}\right| + \left| {{x}_{1} - {x}_{2}}\right| \) . (iii) \( \parallel x\parallel = 2\left| {x}_{1}\right| + \left| {x}_{2}\right| \) . The first of these is symmetric but not gauge invariant, the second is neither symmetric nor gauge invariant, while the third is not symmetric but is gauge invariant. Norms that are both symmetric and gauge invariant are especially interesting. Before studying more examples and properties of such norms, let us make a few remarks. Let \( \mathbf{T} \) be the circle group; i.e., the multiplicative group of all complex numbers of modulus 1. Let \( {S}_{n}o\mathbf{T} \) be the semidirect product of \( {S}_{n} \) and \( \mathbf{T} \) . In other words, this is the group of all \( n \times n \) matrices that have exactly one nonzero entry on each row and each column, and this nonzero entry has modulus 1. We will call such matrices complex permutation matrices. 
Then a norm \( \parallel \cdot \parallel \) on \( {\mathbb{C}}^{n} \) is symmetric and gauge invariant if \[ \parallel x\parallel = \parallel {Tx}\parallel \text{ for all complex permutations }T. \] (IV.7) In other words, the group of (linear) isometries for \( \parallel \cdot \parallel \) contains \( {S}_{n}o\mathbf{T} \) as a subgroup. (Linear isometries for a norm \( \parallel \cdot \parallel \) are those linear transformations on \( {\mathbb{C}}^{n} \) that preserve \( \parallel \cdot \parallel \) .) Exercise IV.1.3 For the Euclidean norm \( \parallel x{\parallel }_{2} = {\left( \sum {\left| {x}_{i}\right| }^{2}\right) }^{1/2} \) the group of isometries is the group of all unitary matrices, which is much larger than the complex permutation group. Show that for each of the norms \( \parallel x{\parallel }_{1} \) and \( \parallel x{\parallel }_{\infty } \) the group of isometries is the complex permutation group. Note that gauge invariant norms on \( {\mathbb{C}}^{n} \) are determined by those on \( {\mathbb{R}}^{n} \) . Symmetric gauge invariant norms on \( {\mathbb{R}}^{n} \) are called symmetric gauge functions. We have come across them earlier (Example II.3.13). To repeat, a map \( \Phi : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}_{ + } \) is called a symmetric gauge function if (i) \( \Phi \) is a norm, (ii) \( \Phi \left( {Px}\right) = \Phi \left( x\right) \) for all \( x \in {\mathbb{R}}^{n} \) and \( P \in {S}_{n} \) , (iii) \( \Phi \left( {{\varepsilon }_{1}{x}_{1},\ldots ,{\varepsilon }_{n}{x}_{n}}\right) = \Phi \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) if \( {\varepsilon }_{j} = \pm 1 \) . In addition, we will always assume that \( \Phi \) is normalised, so that (iv) \( \Phi \left( {1,0,\ldots ,0}\right) = 1 \) . The conditions (ii) and (iii) can be expressed together by saying that \( \Phi \) is invariant under the group \( {S}_{n}o{\mathbf{Z}}_{\mathbf{2}} \) consisting of permutations and sign changes of the coordinates. 
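These invariance properties are easy to check numerically. A sketch, assuming numpy is available, testing the norms of Example IV.1.2 and the complex-permutation invariance (IV.7) of the \( p \) -norms; the sample vectors are illustrative:

```python
import numpy as np

def norm_i(x):    # Example IV.1.2(i): |x1| + |x2| + |x1 - x2|
    return abs(x[0]) + abs(x[1]) + abs(x[0] - x[1])

def norm_iii(x):  # Example IV.1.2(iii): 2|x1| + |x2|
    return 2 * abs(x[0]) + abs(x[1])

x = np.array([3.0, -1.0])

# (i) is symmetric: swapping the coordinates leaves it unchanged ...
assert norm_i(x) == norm_i(x[::-1])
# ... but not gauge invariant: replacing x by |x| changes its value.
assert norm_i(x) != norm_i(abs(x))

# (iii) is gauge invariant but not symmetric.
assert norm_iii(x) == norm_iii(abs(x))
assert norm_iii(x) != norm_iii(x[::-1])

# The p-norms satisfy (IV.7): invariance under complex permutation matrices.
rng = np.random.default_rng(0)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
P = np.eye(4)[rng.permutation(4)]                      # permutation matrix
D = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, 4))) # modulus-1 diagonal
T = P @ D                                              # a complex permutation matrix
for p in (1, 2, np.inf):
    assert np.isclose(np.linalg.norm(T @ z, p), np.linalg.norm(z, p))
```

For the vector above, norm (i) gives \( 3 + 1 + 4 = 8 \) but takes the value 6 at \( \left| x\right| = \left( {3,1}\right) \), which is why it fails (IV.4).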
Notice also that a symmetric gauge function is completely determined by its values on \( {\mathbb{R}}_{ + }^{n} \) . Example IV.1.4 If the coordinates of \( x \) are arranged so that \( \left| {x}_{1}\right| \geq \left| {x}_{2}\right| \geq \) \( \ldots \geq \left| {x}_{n}\right| \), then for each \( k = 1,2,\ldots, n \), the function \[ {\Phi }_{\left( k\right) }\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\left| {x}_{j}\right| \] (IV.8) is a symmetric gauge function. We will also use the notation \( \parallel x{\parallel }_{\left( k\right) } \) for these. The parentheses are used to distinguish these norms from the p-norms defined earlier. Indeed, note that \( \parallel x{\parallel }_{\left( 1\right) } = \parallel x{\parallel }_{\infty } \) and \( \parallel x{\parallel }_{\left( n\right) } = \parallel x{\parallel }_{1} \) . We have observed in Problem II.5.11 that these norms play a very distinguished role: if \( {\Phi }_{\left( k\right) }\left( x\right) \leq {\Phi }_{\left( k\right) }\left( y\right) \) for all \( k = 1,2,\ldots, n \), then \( \Phi \left( x\right) \leq \Phi \left( y\right) \) for every symmetric gauge function \( \Phi \) . Thus an infinite family of norm inequalities follows from a finite one. Proposition IV.1.5 For each \( k = 1,2,\ldots, n \) , \[ {\Phi }_{\left( k\right) }\left( x\right) = \min \left\{ {{\Phi }_{\left( n\right) }\left( u\right) + k{\Phi }_{\left( 1\right) }\left( v\right) : x = u + v}\right\} . \] (IV.9) Proof. We may assume, without loss of generality, that \( x \in {\mathbb{R}}_{ + }^{n} \) . If \( x = \) \( u + v \), then \( {\Phi }_{\left( k\right) }\left( x\right) \leq {\Phi }_{\left( k\right) }\left( u\right) + {\Phi }_{\left( k\right) }\left( v\right) \leq {\Phi }_{\left( n\right) }\left( u\right) + k{\Phi }_{\left( 1\right) }\left( v\right) \) . 
If we choose \[ u = \left( {{x}_{1}^{ \downarrow } - {x}_{k}^{ \downarrow },{x}_{2}^{ \downarrow } - {x}_{k}^{ \downarrow },\ldots ,{x}_{k}^{ \downarrow } - {x}_{k}^{ \downarrow },0,\ldots ,0}\right) , \] \[ v = \left( {{x}_{k}^{ \downarrow },\ldots ,{x}_{k}^{ \downarrow },{x}_{k + 1}^{ \downarrow },\ldots ,{x}_{n}^{ \downarrow }}\right) , \] then \[ u + v = {x}^{ \downarrow }, \] \[ {\Phi }_{\left( n\right) }\left( u\right) = {\Phi }_{\left( k\right) }\left( x\right) - k{x}_{k}^{ \downarrow }, \] \[ {\Phi }_{\left( 1\right) }\left( v\right) = {x}_{k}^{ \downarrow }, \] and the proposition follows. We now derive some basic inequalities. If \( f \) is a convex function on an interval \( I \) and if \( {a}_{i}, i = 1,2,\ldots, n \), are nonnegative real numbers such that \( \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i} = 1 \), then \[ f\left( {\mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{t}_{i}}\right) \leq \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}f\left( {t}_{i}\right) \;\text{ for all }{t}_{i} \in I. \] Applying this to the function \( f\left( t\right) = - \log t \) on the interval \( \left( {0,\infty }\right) \), one obtains the fundamental inequality \[ \mathop{\prod }\limits_{{i = 1}}^{n}{t}_{i}^{{a}_{i}} \leq \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{t}_{i}\;\text{ if }\;{t}_{i} \geq 0,{a}_{i} \geq 0,\sum {a}_{i} = 1. \] (IV.10) This is called the (weighted) arithmetic-geometric mean inequality. The special choice \( {a}_{1} = {a}_{2} = \cdots = {a}_{n} = \frac{1}{n} \) gives the usual arithmetic-geometric mean inequality \[ {\left( \mathop{\prod }\limits_{{i = 1}}^{n}{t}_{i}\right) }^{1/n} \leq \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{t}_{i}\;\text{ if }\;{t}_{i} \geq 0. \] (IV.11) Theorem IV.1.6 Let \( p, q \) be real numbers with \( p > 1 \) and \( \frac{1}{p} + \frac{1}{q} = 1 \) . Let \( x, y \in {\mathbb{R}}^{n} \) .
Then for every symmetric gauge function \( \Phi \) \[ \Phi \left( \left| {x \cdot y}\right| \right) \leq {\left\lbrack \Phi \left( {\left| x\right| }^{p}\right) \right\rbrack }^{1/p}{\left\lbrack \Phi \left( {\left| y\right| }^{q}\right) \right\rbrack }^{1/q}. \] (IV.12) Proof. From the inequality (IV.10) one obtains \[ \left| {x \cdot y}\right| \leq \frac{{\left| x\right| }^{p}}{p} + \frac{{\left| y\right| }^{q}}{q} \] and hence \[ \Phi \left( \left| {x \cdot y}\right| \right) \leq \frac{1}{p}\Phi \left( {\left| x\right| }^{p}\right) + \frac{1}{q}\Phi \left( {\left| y\right| }^{q}\right) \] (IV.13) For \( t > 0 \), if we replace \( x, y \) by \( {tx} \) and \( {t}^{-1}y \), then the left-hand side of (IV.13) does not change. Hence, \[ \Phi \left( \left| {x \cdot y}\right| \right) \leq \mathop{\min }\limits_{{t > 0}}\left\lbrack {\frac{{t}^{p}}{p}\Phi \left( {\left| x\right| }^{p}\right) + \frac{1}{q{t}^{q}}\Phi \left( {\left| y\right| }^{q}\right) }\right\rbrack . \] (IV.14) But, if \[ \varphi \left( t\right) = \frac{{t}^{p}}{p}a + \frac{1}{q{t}^{q}}b,\;\text{ where }t, a, b > 0, \] then plain differentiation shows that \[ \min \varphi \left( t\right) = {a}^{1/p}{b}^{1/q}. \] So, (IV.12) follows from (IV.14).
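The Hölder inequality (IV.12) lends itself to a quick numerical sanity check. The sketch below (an illustration of ours, assuming NumPy is available; the helper name `phi_k` is not from the text) verifies (IV.12) for the symmetric gauge functions \( {\Phi }_{\left( k\right) } \) of Example IV.1.4 on random vectors.

```python
import numpy as np

def phi_k(x, k):
    """Phi_(k)(x): sum of the k largest absolute entries of x (Example IV.1.4)."""
    return np.sort(np.abs(x))[::-1][:k].sum()

rng = np.random.default_rng(0)
n, p = 6, 3.0
q = p / (p - 1)                      # conjugate exponent: 1/p + 1/q = 1
x, y = rng.normal(size=n), rng.normal(size=n)

for k in range(1, n + 1):
    lhs = phi_k(x * y, k)            # Phi(|x.y|) for the coordinatewise product
    rhs = phi_k(np.abs(x) ** p, k) ** (1 / p) * phi_k(np.abs(y) ** q, k) ** (1 / q)
    assert lhs <= rhs + 1e-12        # inequality (IV.12)
```

Since inequalities for the norms \( {\Phi }_{\left( k\right) } \) already control all symmetric gauge functions (Problem II.5.11), these are a natural family to test first.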
When \( \Phi = {\Phi }_{\left( n\right) } \) ,(IV.12) reduces to the familiar Hölder inequality \[ \mathop{\sum }\limits_{{i = 1}}^{n}\left| {{x}_{i}{y}_{i}}\right| \leq {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {x}_{i}\right| }^{p}\right) }^{1/p}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left| {y}_{i}\right| }^{q}\right) }^{1/q}. \] We will refer to (IV.12) as the Hölder inequality for symmetric gauge functions. The special case \( p = 2 \) will be called the Cauchy-Schwarz inequality for symmetric gauge functions. Exercise IV.1.7 Let \( p, q, r \) be positive real numbers with \( \frac{1}{p} + \frac{1}{q} = \frac{1}{r} \) . Show that for every symmetric gauge function \( \Phi \) we have \[ {\left\lbrack \Phi \left( {\left| x \cdot y\right| }^{r}\right) \right\rbrack }^{1/r} \leq {\left\lbrack \Phi \left( {\left| x\right| }^{p}\right) \right\rbrack }^{1/p}{\left\lbrack \Phi \left( {\left| y\right| }^{q}\right) \right\rbrack }^{1/q}. \] (IV.15) Theorem IV.1.8 Let \( \Phi \) be any symmetric gauge function and let \( p \geq 1 \) . Then for all \( x, y \in {\mathbb{R}}^{n} \) \[ {\left\lbrack \Phi \left( {\left| x + y\right| }^{p}\right) \right\rbrack }^{1/p} \leq {\left\lbrack \Phi \left( {\left| x\right| }^{p}\right) \right\rbrack }^{1/p} + {\left\lbrack \Phi \left( {\left| y\right| }^{p}\right) \right\rbrack }^{1/p}. \] (IV.16) Proof. When \( p = 1 \), the inequality (IV.16) is a consequence of the triangle inequalities for the absolute value on \( {\mathbb{R}}^{n} \) and for the norm \( \Phi \) . Let \( p > 1 \) . It is enough to consider the case \( x \geq 0, y \geq 0 \) . Make this assumption and write \[ {\left( x + y\right) }^{p} = x \cdot {\left( x + y\right) }^{p - 1} + y \cdot {\left( x + y\right) }^{p - 1}. 
\] Now, using the triangle inequality for \( \Phi \) and Theorem IV.1.6, one obtains \[ \Phi \left( {\left( x + y\right) }^{p}\right) \leq \Phi \left( {x \cdot {\left( x + y\right) }^{p - 1}}\right) + \Phi \left( {y \cdot {\left( x + y\right) }^{p - 1}}\right) \] \[ \leq {\left\lbrack \Phi \left( {x}^{p}\right) \right\rbrack }^{1/p}{\left\lbrack \Phi \left( {\left( x + y\right) }^{q\left( {p - 1}\right) }\right) \right\rbrack }^{1/q} \] \[ + {\left\lbrack \Phi \left( {y}^{p}\right) \right\rbrack }^{1/p}{\left\lbrack \Phi \left( {\left( x + y\right) }^{q\left( {p - 1}\right) }\right) \right\rbrack }^{1/q} \] \[ = \left\{ {{\left\lbrack \Phi \left( {x}^{p}\right) \right\rbrack }^{1/p} + {\left\lbrack \Phi \left( {y}^{p}\right) \right\rbrack }^{1/p}}\right\} {\left\lbrack \Phi \left( {\left( x + y\right) }^{p}\right) \right\rbrack }^{1/q}, \] since \( q\left( {p - 1}\right) = p \) . If we divide both sides of the above inequality by \( {\left\lbrack \Phi \left( {\left( x + y\right) }^{p}\right) \right\rbrack }^{1/q} \), we get (IV.16). Once again, when \( \Phi = {\Phi }_{\left( n\right) } \) the inequality (IV.16) reduces to the familiar Minkowski inequality. So, we will call (IV.16) the Minkowski inequality for symmetric gauge functions. Exercise IV.1.9 Let \( \Phi \) be a symmetric gauge function and let \( p \geq 1 \) . Let \[ {\Phi }^{\left( p\right) }\left( x\right) = {\left\lbrack \Phi \left( {\left| x\right| }^{p}\right) \right\rbrack }^{1/p}. \] (IV.17) Show that \( {\Phi }^{\left( p\right) } \) is also a symmetric gauge function. 
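The norms \( {\Phi }_{\left( k\right) }^{\left( p\right) } \) of Exercise IV.1.9 and the Minkowski inequality (IV.16) can also be tested numerically. The following sketch (ours, assuming NumPy; the helper names are not from the text) builds \( {\Phi }_{\left( k\right) }^{\left( p\right) } \) as in (IV.17) and checks (IV.16) for \( \Phi = {\Phi }_{\left( k\right) } \).

```python
import numpy as np

def phi_k(x, k):
    """Phi_(k): sum of the k largest absolute entries."""
    return np.sort(np.abs(x))[::-1][:k].sum()

def phi_k_p(x, k, p):
    """Phi_(k)^(p)(x) = [Phi_(k)(|x|^p)]^(1/p), as in (IV.17)."""
    return phi_k(np.abs(x) ** p, k) ** (1.0 / p)

rng = np.random.default_rng(1)
x, y = rng.normal(size=5), rng.normal(size=5)
for k in range(1, 6):
    for p in (1.0, 1.5, 2.0, 4.0):
        # Minkowski inequality (IV.16) for Phi = Phi_(k)
        assert phi_k_p(x + y, k, p) <= phi_k_p(x, k, p) + phi_k_p(y, k, p) + 1e-12
```
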
Note that, if \( {\Phi }_{p} \) is the family of \( {\ell }_{p} \) -norms, then \[ {\Phi }_{{p}_{1}}^{\left( {p}_{2}\right) } = {\Phi }_{{p}_{1}{p}_{2}}\;\text{ for all }{p}_{1},{p}_{2} \geq 1, \] (IV.18) and, if \( {\Phi }_{\left( k\right) } \) is the norm defined by (IV.8), then \[ {\Phi }_{\left( k\right) }^{\left( p\right) }\left( x\right) = {\left( \mathop{\sum }\limits_{{j = 1}}^{k}{\left| {x}_{j}\right| }^{p}\right) }^{1/p} \] (IV.19) where the coordinates of \( x \) are arranged as \( \left| {x}_{1}\right| \geq \left| {x}_{2}\right| \geq \cdots \geq \left| {x}_{n}\right| \) . Just as the Euclidean norm occupies a special place among the \( {l}_{p} \) -norms, the norms \( {\Phi }^{\left( 2\right) } \), where \( \Phi \) is any symmetric gauge function, are of special interest. We will give these norms a name: Definition IV.1.10 \( \Psi \) is called a quadratic symmetric gauge function, or a Q-norm, if \( \Psi = {\Phi }^{\left( 2\right) } \) for some symmetric gauge function \( \Phi \) . In other words, \[ \Psi \left( x\right) = {\left\lbrack \Phi \left( {\left| x\right| }^{2}\right) \right\rbrack }^{1/2}. \] (IV.20) Exercise IV.1.11 (i) Show that an \( {l}_{p} \) -norm is a Q-norm if and only if \( p \geq 2 \) . (ii) More generally, show that for each \( k = 1,2,\ldots, n \), \( {\Phi }_{\left( k\right) }^{\left( p\right) } \) is a Q-norm if and only if \( p \geq 2 \) . Exercise IV.1.12 We saw earlier that if \( {\Phi }_{\left( k\right) }\left( x\right) \leq {\Phi }_{\left( k\right) }\left( y\right) \) for all \( k = 1,2,\ldots, n \), then \( \Phi \left( x\right) \leq \Phi \left( y\right) \) for all symmetric gauge functions.
Show that if \( {\Phi }_{\left( k\right) }^{\left( 2\right) }\left( x\right) \leq {\Phi }_{\left( k\right) }^{\left( 2\right) }\left( y\right) \) for all \( k = 1,2,\ldots, n \), then \( {\Phi }^{\left( 2\right) }\left( x\right) \leq {\Phi }^{\left( 2\right) }\left( y\right) \) for all symmetric gauge functions \( \Phi \) ; i.e., \( \Psi \left( x\right) \leq \Psi \left( y\right) \) for all quadratic symmetric gauge functions. If \( \Phi \) is a norm on \( {\mathbb{C}}^{n} \), the dual of \( \Phi \) is defined as \[ {\Phi }^{\prime }\left( x\right) = \mathop{\sup }\limits_{{\Phi \left( y\right) = 1}}\left| {\langle x, y\rangle }\right| \] (IV.21) It is easy to see that \( {\Phi }^{\prime } \) is a norm. (In fact, \( {\Phi }^{\prime } \) is a norm even when \( \Phi \) is a function on \( {\mathbb{C}}^{n} \) that does not necessarily satisfy the triangle inequality but meets the other requirements of a norm.) Exercise IV.1.13 If \( \Phi \) is a symmetric gauge function, then so is \( {\Phi }^{\prime } \) . Exercise IV.1.14 Show that for any norm \( \Phi \) \[ \left| {\langle x, y\rangle }\right| \leq \Phi \left( x\right) {\Phi }^{\prime }\left( y\right) \;\text{ for all }x, y. \] (IV.22) Exercise IV.1.15 Let \( {\Phi }_{p} \) be the \( {l}_{p} \) -norm, \( 1 \leq p \leq \infty \) . Show that \[ {\Phi }_{p}^{\prime } = {\Phi }_{q},\;\text{ where }\;\frac{1}{p} + \frac{1}{q} = 1. \] (IV.23) Exercise IV.1.16 Let \( \Phi \) and \( \Psi \) be two norms such that \[ \Phi \left( x\right) \leq {c\Psi }\left( x\right) \;\text{ for all }x\text{ and for some }c > 0. \] Show that \[ {\Phi }^{\prime }\left( x\right) \geq {c}^{-1}{\Psi }^{\prime }\left( x\right) \;\text{ for all }x. \] We shall call a symmetric gauge function a \( {\mathbf{Q}}^{\prime } \) -norm if it is the dual of a \( Q \) -norm. The \( {l}_{p} \) -norms for \( 1 \leq p \leq 2 \) are examples of \( {Q}^{\prime } \) -norms.
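The duality (IV.23) between the \( {l}_{p} \)- and \( {l}_{q} \)-norms can be seen concretely: the supremum in (IV.21) is attained at a Hölder-extremal vector. A small sketch of ours, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 5, 3.0
q = p / (p - 1)                       # conjugate exponent
x = rng.normal(size=n)

# The vector y_j = sign(x_j)|x_j|^(q-1), normalised in the l_p-norm,
# attains sup { |<x, y>| : ||y||_p = 1 }.
y = np.sign(x) * np.abs(x) ** (q - 1)
y /= np.linalg.norm(y, p)
attained = abs(x @ y)
lq = np.linalg.norm(x, q)
assert abs(attained - lq) < 1e-10     # (IV.23): the dual of l_p is l_q

# No other direction does better:
for _ in range(200):
    z = rng.normal(size=n)
    z /= np.linalg.norm(z, p)
    assert abs(x @ z) <= lq + 1e-10   # Hölder's inequality (IV.22)
```
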
Exercise IV.1.17 (i) Let \( \Phi \) be a norm such that \( \Phi = {\Phi }^{\prime } \) . Then \( \Phi \) must be the Euclidean norm. (ii) Let \( \Phi \) be both a \( Q \) -norm and a \( {Q}^{\prime } \) -norm. Then \( \Phi \) must be the Euclidean norm. (Use Exercise IV.1.16 and the fact that every symmetric gauge function is bounded by the \( {l}_{1} \) -norm.) Exercise IV.1.18 For each \( k = 1,2,\ldots, n \), the dual of the norm \( {\Phi }_{\left( k\right) } \) is given by \[ {\Phi }_{\left( k\right) }^{\prime }\left( x\right) = \max \left\{ {{\Phi }_{\left( 1\right) }\left( x\right) ,\frac{1}{k}{\Phi }_{\left( n\right) }\left( x\right) }\right\} . \] (IV.24) Prove this using Proposition IV.1.5 and Exercise IV.1.16. Some ways of generating symmetric gauge functions are described in the following exercises. Exercise IV.1.19 Let \( 1 = {\alpha }_{1} \geq {\alpha }_{2} \geq \cdots \geq {\alpha }_{n} \geq 0 \) . Given a symmetric gauge function \( \Phi \) on \( {\mathbb{R}}^{n} \), define \[ \Psi \left( x\right) = \Phi \left( {{\alpha }_{1}{\left| x\right| }_{1}^{ \downarrow },\ldots ,{\alpha }_{n}{\left| x\right| }_{n}^{ \downarrow }}\right) \] Then \( \Psi \) is a symmetric gauge function. Exercise IV.1.20 (i) Let \( \Phi \) be a symmetric gauge function on \( {\mathbb{R}}^{n} \) . Let \( m < n \) . If \( x \in {\mathbb{R}}^{m} \), let \( \widetilde{x} = \left( {{x}_{1},\ldots ,{x}_{m},0,0,\ldots ,0}\right) \) and define \( \Psi \left( x\right) = \Phi \left( \widetilde{x}\right) \) . Then \( \Psi \) is a symmetric gauge function on \( {\mathbb{R}}^{m} \) . (ii) Conversely, given any symmetric gauge function \( \Psi \) on \( {\mathbb{R}}^{m} \), if for \( n > m \) we define \( \Phi \left( {{x}_{1},\ldots ,{x}_{n}}\right) = \Psi \left( {{\left| x\right| }_{1}^{ \downarrow },\ldots ,{\left| x\right| }_{m}^{ \downarrow }}\right) \), then \( \Phi \) is a symmetric gauge function on \( {\mathbb{R}}^{n} \) .
## IV. 2 Unitarily Invariant Norms on Operators on \( {\mathbb{C}}^{n} \)
In this section, \( {\mathbb{C}}^{n} \) will always stand for the Hilbert space \( {\mathbb{C}}^{n} \) with inner product \( \langle \cdot , \cdot \rangle \) and the associated norm \( \parallel \cdot \parallel \) .
(No subscript will be attached to this "standard" norm as was done in the previous section.) If \( A \) is a linear operator on \( {\mathbb{C}}^{n} \), we will denote by \( \parallel A\parallel \) the operator (bound) norm of \( A \) defined as \[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel \] (IV.25) As before, we denote by \( \left| A\right| \) the positive operator \( {\left( {A}^{ * }A\right) }^{1/2} \) and by \( s\left( A\right) \) the vector whose coordinates are the singular values of \( A \), arranged as \( {s}_{1}\left( A\right) \geq {s}_{2}\left( A\right) \geq \cdots \geq {s}_{n}\left( A\right) \) . We have \[ \parallel A\parallel = \parallel \left| A\right| \parallel = {s}_{1}\left( A\right) \] (IV.26) Now, if \( U, V \) are unitary operators on \( {\mathbb{C}}^{n} \), then \( \left| {UAV}\right| = {V}^{ * }\left| A\right| V \) and hence \[ \parallel A\parallel = \parallel {UAV}\parallel \] (IV.27) for all unitary operators \( U, V \) . This last property is called unitary invariance. Several other norms have this property. These are frequently useful in analysis, and we will study them in some detail. We will use the symbol \( \left| \left| \left| \cdot \right| \right| \right| \) to mean a norm on \( n \times n \) matrices that satisfies \[ \left| \left| \left| {UAV}\right| \right| \right| = \left| \left| \left| A\right| \right| \right| \] (IV.28) for all \( A \) and for all unitary \( U, V \) . We will call such a norm a unitarily invariant norm on the space \( \mathbf{M}\left( \mathbf{n}\right) \) of \( n \times n \) matrices. We will normalise such norms so that they all take the value 1 on the matrix \( \operatorname{diag}\left( {1,0,\ldots ,0}\right) \) . There is an intimate connection between these norms and symmetric gauge functions on \( {\mathbb{R}}^{n} \) ; the link is provided by singular values.
Theorem IV.2.1 Given a symmetric gauge function \( \Phi \) on \( {\mathbb{R}}^{n} \), define a function on \( \mathbf{M}\left( \mathbf{n}\right) \) as \[ {\left| \left| \left| A\right| \right| \right| }_{\Phi } = \Phi \left( {s\left( A\right) }\right) \] (IV.29) Then this defines a unitarily invariant norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Conversely, given any unitarily invariant norm \( \left| \left| \left| \cdot \right| \right| \right| \) on \( \mathbf{M}\left( \mathbf{n}\right) \), define a function on \( {\mathbb{R}}^{n} \) by \[ {\Phi }_{\left| \left| \left| \cdot \right| \right| \right| }\left( x\right) = \left| \left| \left| {\operatorname{diag}\left( x\right) }\right| \right| \right| \] (IV.30) where \( \operatorname{diag}\left( x\right) \) is the diagonal matrix with entries \( {x}_{1},\ldots ,{x}_{n} \) on its diagonal. Then this defines a symmetric gauge function on \( {\mathbb{R}}^{n} \) . Proof. Since \( s\left( {UAV}\right) = s\left( A\right) \) for all unitary \( U, V \), the norm \( {\left| \left| \left| \cdot \right| \right| \right| }_{\Phi } \) is unitarily invariant. We will prove that it obeys the triangle inequality - the other conditions for it to be a norm are easy to verify. For this, recall the majorisation (II.18) \[ s\left( {A + B}\right) { \prec }_{w}s\left( A\right) + s\left( B\right) \;\text{ for all }A, B \in \mathbf{M}\left( \mathbf{n}\right) , \] and then use the fact that \( \Phi \) is strongly isotone and monotone. (See Example II.3.13 and Problem II.5.11.) To prove the converse, note that (IV.30) clearly gives a norm on \( {\mathbb{R}}^{n} \) . Since diagonal matrices of the form \( \operatorname{diag}\left( {{e}^{i{\theta }_{1}},\ldots ,{e}^{i{\theta }_{n}}}\right) \) and permutation matrices are all unitary, this norm is absolute and permutation invariant, and hence it is a symmetric gauge function. Symmetric gauge functions on \( {\mathbb{R}}^{n} \) constructed in the preceding section thus lead to several examples of unitarily invariant norms on \( \mathbf{M}\left( \mathbf{n}\right) \) .
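Theorem IV.2.1 is directly computable: given \( \Phi \), the norm \( {\left| \left| \left| A\right| \right| \right| }_{\Phi } \) is just \( \Phi \) evaluated on the singular values. The sketch below (ours, assuming NumPy; the helper name `ky_fan_norm` is not from the text) implements the Ky Fan case and checks unitary invariance and the triangle inequality.

```python
import numpy as np

def ky_fan_norm(A, k):
    """||A||_(k) = Phi_(k)(s(A)): sum of the k largest singular values."""
    return np.linalg.svd(A, compute_uv=False)[:k].sum()

rng = np.random.default_rng(3)
n = 4
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
# Random orthogonal (real unitary) matrices from QR factorisations.
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))

for k in range(1, n + 1):
    # unitary invariance: s(UAV) = s(A)
    assert abs(ky_fan_norm(U @ A @ V, k) - ky_fan_norm(A, k)) < 1e-10
    # triangle inequality, reflecting the majorisation s(A+B) <_w s(A)+s(B)
    assert ky_fan_norm(A + B, k) <= ky_fan_norm(A, k) + ky_fan_norm(B, k) + 1e-10
```
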
Two classes of such norms are specially important. The first is the class of Schatten \( p \) -norms defined as \[ \parallel A{\parallel }_{p} = {\Phi }_{p}\left( {s\left( A\right) }\right) = {\left\lbrack \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {s}_{j}\left( A\right) \right) }^{p}\right\rbrack }^{1/p},\;1 \leq p < \infty , \] (IV.31) \[ \parallel A{\parallel }_{\infty } = {\Phi }_{\infty }\left( {s\left( A\right) }\right) = {s}_{1}\left( A\right) = \parallel A\parallel . \] (IV.32) The second is the class of \( \mathbf{{Ky}} \) Fan \( k \) -norms defined as \[ \parallel A{\parallel }_{\left( k\right) } = \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) ,\;1 \leq k \leq n. \] (IV.33) Among the \( p \) -norms, the ones for the values \( p = 1,2,\infty \), are used most often. As we have noted, \( \parallel A{\parallel }_{\infty } \) is the same as the operator norm \( \parallel A\parallel \) and the Ky Fan norm \( \parallel A{\parallel }_{\left( 1\right) } \) . The norm \( \parallel A{\parallel }_{1} \) is the same as \( \parallel A{\parallel }_{\left( n\right) } \) . This is equal to \( \operatorname{tr}\left( \left| A\right| \right) \) and hence is called the trace norm, and is sometimes written as \( \parallel A{\parallel }_{tr} \) . The norm \[ \parallel A{\parallel }_{2} = {\left\lbrack \mathop{\sum }\limits_{{j = 1}}^{n}{\left( {s}_{j}\left( A\right) \right) }^{2}\right\rbrack }^{1/2} \] (IV.34) is also called the Hilbert-Schmidt norm or the Frobenius norm (and is sometimes written as \( \parallel A{\parallel }_{F} \) for that reason). It will play a basic role in our analysis. For \( A, B \in \mathbf{M}\left( \mathbf{n}\right) \) let \[ \langle A, B\rangle = \operatorname{tr}{A}^{ * }B \] (IV.35) This defines an inner product on \( \mathbf{M}\left( \mathbf{n}\right) \) and the norm associated with this inner product is \( \parallel A{\parallel }_{2} \), i.e., \[ \parallel A{\parallel }_{2} = {\left( \operatorname{tr}{A}^{ * }A\right) }^{1/2}. 
\] (IV.36) If the matrix \( A \) has entries \( {a}_{ij} \), then \[ \parallel A{\parallel }_{2} = {\left( \mathop{\sum }\limits_{{i, j}}{\left| {a}_{ij}\right| }^{2}\right) }^{1/2}. \] (IV.37) Thus the norm \( \parallel A{\parallel }_{2} \) is the Euclidean norm of the matrix \( A \) when it is thought of as an element of \( {\mathbb{C}}^{{n}^{2}} \) . This fact makes this norm easily computable and geometrically tractable. The main importance of the Ky Fan norms lies in the following: Theorem IV.2.2 (Fan Dominance Theorem) Let \( A, B \) be two \( n \times n \) matrices. If \[ \parallel A{\parallel }_{\left( k\right) } \leq \parallel B{\parallel }_{\left( k\right) }\;\text{ for }k = 1,2,\ldots, n, \] then \[ \left| \left| \left| A\right| \right| \right| \leq \left| \left| \left| B\right| \right| \right| \;\text{ for all unitarily invariant norms.} \] Proof. This is a consequence of the corresponding assertion about symmetric gauge functions. (See Example IV.1.4.) Since \( {\Phi }_{\left( 1\right) }\left( x\right) \leq \Phi \left( x\right) \leq {\Phi }_{\left( n\right) }\left( x\right) \) for all \( x \in {\mathbb{R}}^{n} \) and for all symmetric gauge functions \( \Phi \), we have \[ \parallel A\parallel \leq \left| \left| \left| A\right| \right| \right| \leq \parallel A{\parallel }_{\left( n\right) } = \parallel A{\parallel }_{1} \] (IV.38) for all \( A \in \mathbf{M}\left( \mathbf{n}\right) \) and for all unitarily invariant norms \( \left| \left| \left| \cdot \right| \right| \right| \) . Analogous to Proposition IV.1.5 we have Proposition IV.2.3 For each \( k = 1,2,\ldots, n \) , \[ \parallel A{\parallel }_{\left( k\right) } = \min \left\{ {\parallel B{\parallel }_{\left( n\right) } + k\parallel C\parallel : A = B + C}\right\} . \] (IV.39) Proof.
If \( A = B + C \), then \( \parallel A{\parallel }_{\left( k\right) } \leq \parallel B{\parallel }_{\left( k\right) } + \parallel C{\parallel }_{\left( k\right) } \leq \parallel B{\parallel }_{\left( n\right) } + k\parallel C\parallel \) . Now let \( s\left( A\right) = \left( {{s}_{1},\ldots ,{s}_{n}}\right) \) and choose unitary \( U, V \) so that \[ A = U\left\lbrack {\operatorname{diag}\left( {{s}_{1},\ldots ,{s}_{n}}\right) }\right\rbrack V. \] Let \[ B = U\left\lbrack {\operatorname{diag}\left( {{s}_{1} - {s}_{k},{s}_{2} - {s}_{k},\ldots ,{s}_{k} - {s}_{k},0,\ldots ,0}\right) }\right\rbrack V \] \[ C = U\left\lbrack {\operatorname{diag}\left( {{s}_{k},{s}_{k},\ldots ,{s}_{k},{s}_{k + 1},\ldots ,{s}_{n}}\right) }\right\rbrack V. \] Then \[ A = B + C \] \[ \parallel B{\parallel }_{\left( n\right) } = \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j} - k{s}_{k} = \parallel A{\parallel }_{\left( k\right) } - k{s}_{k} \] \[ \parallel C\parallel = {s}_{k} \] and \[ \parallel A{\parallel }_{\left( k\right) } = \parallel B{\parallel }_{\left( n\right) } + k\parallel C\parallel . \] A norm \( \nu \) on \( \mathbf{M}\left( \mathbf{n}\right) \) is called symmetric if for \( A, B, C \) in \( \mathbf{M}\left( \mathbf{n}\right) \) \[ \nu \left( {BAC}\right) \leq \parallel B\parallel \nu \left( A\right) \parallel C\parallel .
\] (IV.40) Proposition IV.2.4 A norm on \( \mathbf{M}\left( \mathbf{n}\right) \) is symmetric if and only if it is unitarily invariant. Proof. If \( \nu \) is a symmetric norm, then for unitary \( U, V \) we have \( \nu \left( {UAV}\right) \leq \) \( \nu \left( A\right) \) and \( \nu \left( A\right) = \nu \left( {{U}^{-1}{UAV}{V}^{-1}}\right) \leq \nu \left( {UAV}\right) \) . So, \( \nu \) is unitarily invariant. Conversely, by Problem III.6.2, \( {s}_{j}\left( {BAC}\right) \leq \parallel B\parallel \parallel C\parallel {s}_{j}\left( A\right) \) for all \( j = \) \( 1,2,\ldots, n \) . So, if \( \Phi \) is any symmetric gauge function, then \( \Phi \left( {s\left( {BAC}\right) }\right) \leq \) \( \parallel B\parallel \parallel C\parallel \Phi \left( {s\left( A\right) }\right) \) and hence the norm associated with \( \Phi \) is symmetric. In particular, this implies that every unitarily invariant norm is sub-multiplicative: \[ \left| \left| \left| {AB}\right| \right| \right| \leq \left| \left| \left| A\right| \right| \right| \left| \left| \left| B\right| \right| \right| \;\text{ for all }A, B. \] Inequalities for sums and products of singular values of matrices, when combined with inequalities for symmetric gauge functions proved in Section IV.1, lead to interesting statements about unitarily invariant norms. This is illustrated below. Theorem IV.2.5 If \( A, B \) are \( n \times n \) matrices, then \[ {s}^{r}\left( {AB}\right) { \prec }_{w}{s}^{r}\left( A\right) {s}^{r}\left( B\right) \;\text{ for all }r > 0. \] (IV.41) Proof. If \( { \land }^{k}A \) is the \( k \) th antisymmetric tensor product of \( A \), then \[ \begin{Vmatrix}{{ \land }^{k}A}\end{Vmatrix} = {s}_{1}\left( {{ \land }^{k}A}\right) = \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}\left( A\right) ,\;1 \leq k \leq n.
\] Hence, \[ \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}^{r}\left( {AB}\right) = {\begin{Vmatrix}{ \land }^{k}\left( AB\right) \end{Vmatrix}}^{r} \leq {\left( \begin{Vmatrix}{ \land }^{k}A\end{Vmatrix}\begin{Vmatrix}{ \land }^{k}B\end{Vmatrix}\right) }^{r} \] \[ = \mathop{\prod }\limits_{{j = 1}}^{k}{s}_{j}^{r}\left( A\right) {s}_{j}^{r}\left( B\right) ,\;1 \leq k \leq n. \] Now use the statement II.3.5(vii). Corollary IV.2.6 (Hölder’s Inequality for Unitarily Invariant Norms) For every unitarily invariant norm and for all \( A, B \in \mathbf{M}\left( \mathbf{n}\right) \) \[ \left| \left| \left| {AB}\right| \right| \right| \leq {\left| \left| \left| {\left| A\right| }^{p}\right| \right| \right| }^{1/p}{\left| \left| \left| {\left| B\right| }^{q}\right| \right| \right| }^{1/q} \] (IV.42) for all \( p > 1 \) and \( \frac{1}{p} + \frac{1}{q} = 1 \) . Proof. Use the special case of (IV.41) for \( r = 1 \) to get \[ \Phi \left( {s\left( {AB}\right) }\right) \leq \Phi \left( {s\left( A\right) s\left( B\right) }\right) \] for every symmetric gauge function. Now use Theorem IV.1.6 and the fact that \( {\left( s\left( A\right) \right) }^{p} = s\left( {\left| A\right| }^{p}\right) \) . Exercise IV.2.7 Let \( p, q, r \) be positive real numbers with \( \frac{1}{p} + \frac{1}{q} = \frac{1}{r} \) . Then for every unitarily invariant norm \[ {\left| \left| \left| {\left| AB\right| }^{r}\right| \right| \right| }^{1/r} \leq {\left| \left| \left| {\left| A\right| }^{p}\right| \right| \right| }^{1/p}{\left| \left| \left| {\left| B\right| }^{q}\right| \right| \right| }^{1/q}. \] (IV.43) Choosing \( p = q = 1 \), one gets from this \[ \left| \left| \left| {\left| AB\right| }^{1/2}\right| \right| \right| \leq {\left( \left| \left| \left| A\right| \right| \right| \left| \left| \left| B\right| \right| \right| \right) }^{1/2}. \] (IV.44) This is the Cauchy-Schwarz inequality for unitarily invariant norms.
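For the trace norm, the Cauchy-Schwarz inequality (IV.44) reads \( \mathop{\sum }\limits_{j}{s}_{j}{\left( {AB}\right) }^{1/2} \leq {\left( \parallel A{\parallel }_{1}\parallel B{\parallel }_{1}\right) }^{1/2} \), which is easy to check numerically. A sketch of ours, assuming NumPy (`schatten` is our helper name):

```python
import numpy as np

def schatten(A, p):
    """Schatten p-norm ||A||_p = (sum_j s_j(A)^p)^(1/p), as in (IV.31)."""
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(4)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

# s(|AB|^(1/2)) = s(AB)^(1/2), so the trace norm of |AB|^(1/2) is:
lhs = np.sqrt(np.linalg.svd(A @ B, compute_uv=False)).sum()
rhs = np.sqrt(schatten(A, 1) * schatten(B, 1))
assert lhs <= rhs + 1e-10            # Cauchy-Schwarz (IV.44) for the trace norm
```
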
Exercise IV.2.8 Given a unitarily invariant norm \( \left| \left| \left| \cdot \right| \right| \right| \) on \( \mathbf{M}\left( \mathbf{n}\right) \), define \[ {\left| \left| \left| A\right| \right| \right| }^{\left( p\right) } = {\left| \left| \left| {\left| A\right| }^{p}\right| \right| \right| }^{1/p},\;1 \leq p < \infty . \] (IV.45) Show that this is a unitarily invariant norm. Note that \[ \parallel A{\parallel }_{{p}_{1}}^{\left( {p}_{2}\right) } = \parallel A{\parallel }_{{p}_{1}{p}_{2}}\;\text{ for all }\;{p}_{1},{p}_{2} \geq 1 \] (IV.46) and \[ \parallel A{\parallel }_{\left( k\right) }^{\left( p\right) } = {\left( \mathop{\sum }\limits_{{j = 1}}^{k}{s}_{j}^{p}\left( A\right) \right) }^{1/p}\;\text{ for }\;p \geq 1,1 \leq k \leq n. \] (IV.47) Definition IV.2.9 A unitarily invariant norm on \( \mathbf{M}\left( \mathbf{n}\right) \) is called a Q-norm if it corresponds to a quadratic symmetric gauge function; i.e., \( \left| \left| \left| \cdot \right| \right| \right| \) is a \( Q \) -norm if and only if there exists a unitarily invariant norm \( {\left| \left| \left| \cdot \right| \right| \right| }^{\wedge } \) such that \[ {\left| \left| \left| A\right| \right| \right| }^{2} = {\left| \left| \left| {A}^{ * }A\right| \right| \right| }^{\wedge }. \] (IV.48) Note that the norm \( \parallel \cdot {\parallel }_{p} \) is a \( Q \) -norm if and only if \( p \geq 2 \) because \[ \parallel A{\parallel }_{p}^{2} = \parallel {A}^{ * }A{\parallel }_{p/2} \] (IV.49) The norms defined in (IV.47) are \( Q \) -norms if and only if \( p \geq 2 \) . Exercise IV.2.10 Let \( \parallel \cdot {\parallel }_{Q} \) denote a \( Q \) -norm. Observe that the following conditions are equivalent: (i) \( \parallel A{\parallel }_{Q} \leq \parallel B{\parallel }_{Q} \) for all \( Q \) -norms. (ii) \( \left| \left| \left| {{A}^{ * }A}\right| \right| \right| \leq \left| \left| \left| {{B}^{ * }B}\right| \right| \right| \) for all unitarily invariant norms.
(iii) \( \parallel A{\parallel }_{\left( k\right) }^{\left( 2\right) } \leq \parallel B{\parallel }_{\left( k\right) }^{\left( 2\right) } \) for \( k = 1,2,\ldots, n \) . (iv) \( {\left( s\left( A\right) \right) }^{2}{ \prec }_{w}{\left( s\left( B\right) \right) }^{2} \) . Duality in the space of unitarily invariant norms is defined via the inner product (IV.35). If \( \left| \left| \left| \cdot \right| \right| \right| \) is a unitarily invariant norm, define \( {\left| \left| \left| \cdot \right| \right| \right| }^{\prime } \) as \[ {\left| \left| \left| A\right| \right| \right| }^{\prime } = \mathop{\sup }\limits_{{\left| \left| \left| B\right| \right| \right| = 1}}\left| {\langle A, B\rangle }\right| = \mathop{\sup }\limits_{{\left| \left| \left| B\right| \right| \right| = 1}}\left| {\operatorname{tr}{A}^{ * }B}\right| . \] (IV.50) It is easy to see that this defines a norm that is unitarily invariant. Proposition IV.2.11 Let \( \Phi \) be a symmetric gauge function on \( {\mathbb{R}}^{n} \) and let \( \parallel \cdot {\parallel }_{\Phi } \) be the corresponding unitarily invariant norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Then \( \parallel \cdot {\parallel }_{\Phi }^{\prime } = \parallel \cdot {\parallel }_{{\Phi }^{\prime }}. \) Proof. We have from (II.40) and (IV.41) \[ \left| {\operatorname{tr}{A}^{ * }B}\right| \leq \operatorname{tr}\left| {{A}^{ * }B}\right| = \mathop{\sum }\limits_{{j = 1}}^{n}{s}_{j}\left( {{A}^{ * }B}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{n}{s}_{j}\left( A\right) {s}_{j}\left( B\right) .
\] It follows that \[ \parallel A{\parallel }_{\Phi }^{\prime } \leq {\Phi }^{\prime }\left( {s\left( A\right) }\right) = \parallel A{\parallel }_{{\Phi }^{\prime }} \] Conversely, \[ \parallel A{\parallel }_{{\Phi }^{\prime }} = {\Phi }^{\prime }\left( {s\left( A\right) }\right) \] \[ = \sup \left\{ {\mathop{\sum }\limits_{{j = 1}}^{n}{s}_{j}\left( A\right) {y}_{j} : y \in {\mathbb{R}}^{n},\Phi \left( y\right) = 1}\right\} \] \[ = \sup \left\{ {\operatorname{tr}\left\lbrack {\operatorname{diag}\left( {s\left( A\right) }\right) \operatorname{diag}\left( y\right) }\right\rbrack : \parallel \operatorname{diag}\left( y\right) {\parallel }_{\Phi } = 1}\right\} \] \[ \leq \parallel \operatorname{diag}\left( {s\left( A\right) }\right) {\parallel }_{\Phi }^{\prime } = \parallel A{\parallel }_{\Phi }^{\prime }. \] Exercise IV.2.12 From statements about duals proved in Section IV.1, we can now conclude that (i) \( \left| {\operatorname{tr}{A}^{ * }B}\right| \leq \left| \left| \left| A\right| \right| \right| \cdot {\left| \left| \left| B\right| \right| \right| }^{\prime } \) for every unitarily invariant norm. (ii) \( \parallel A{\parallel }_{p}^{\prime } = \parallel A{\parallel }_{q} \) for \( 1 \leq p \leq \infty ,\frac{1}{p} + \frac{1}{q} = 1 \) . (iii) \( \parallel A{\parallel }_{\left( k\right) }^{\prime } = \max \left\{ {\parallel A{\parallel }_{\left( 1\right) },\frac{1}{k}\parallel A{\parallel }_{\left( n\right) }}\right\} ,1 \leq k \leq n \) . (iv) The only unitarily invariant norm that is its own dual is the Hilbert-Schmidt norm \( \parallel \cdot {\parallel }_{2} \) . (v) The only norm that is a Q-norm and is also the dual of a Q-norm is the norm \( \parallel \cdot {\parallel }_{2} \) . Duals of \( Q \) -norms will be called \( {Q}^{\prime } \) -norms. These include the norms \( \parallel \cdot {\parallel }_{p},1 \leq p \leq 2 \) . An important property of all unitarily invariant norms is that they are all reduced by pinchings.
If \( {P}_{1},\ldots ,{P}_{k} \) are mutually orthogonal projections such that \( {P}_{1} \oplus {P}_{2} \oplus \ldots \oplus {P}_{k} = I \), then the operator on \( \mathbf{M}\left( \mathbf{n}\right) \) defined as \[ \mathcal{C}\left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{k}{P}_{j}A{P}_{j} \] (IV.51) is called a pinching operator. It is easy to see that \[ \parallel \left| {\mathcal{C}\left( A\right) }\right| \parallel \leq \parallel \left| A\right| \parallel \] (IV.52) for every unitarily invariant norm. (See Problem II.5.5.) We will call this the pinching inequality. Let us illustrate one use of this inequality. Theorem IV.2.13 Let \( A, B \in \mathbf{M}\left( \mathbf{n}\right) \) . Then for every unitarily invariant norm on \( \mathbf{M}\left( {\mathbf{2}\mathbf{n}}\right) \) \[ \frac{1}{2}\left| \left| \left| \left\lbrack \begin{matrix} A + B & 0 \\ 0 & A + B \end{matrix}\right\rbrack \right| \right| \right| \leq \left| \left| \left| \left\lbrack \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right\rbrack \right| \right| \right| \leq \left| \left| \left| \left\lbrack \begin{matrix} \left| A\right| + \left| B\right| & 0 \\ 0 & 0 \end{matrix}\right\rbrack \right| \right| \right| . \] (IV.53) Proof. The first inequality follows easily from the observation that \( \left\lbrack \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right\rbrack \) and \( \left\lbrack \begin{matrix} B & 0 \\ 0 & A \end{matrix}\right\rbrack \) are unitarily equivalent. If we prove the second inequality in the special case when \( A, B \) are positive, the general case follows easily. So, assume \( A, B \) are positive. Then \[ \left\lbrack \begin{matrix} A + B & 0 \\ 0 & 0 \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} {A}^{1/2} & {B}^{1/2} \\ 0 & 0 \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {A}^{1/2} & 0 \\ {B}^{1/2} & 0 \end{matrix}\right\rbrack , \] where \( {A}^{1/2},{B}^{1/2} \) are the positive square roots of \( A, B \) . 
Since \( {T}^{ * }T \) and \( T{T}^{ * } \) are unitarily equivalent for every \( T \), the matrix \( \left\lbrack \begin{matrix} A + B & 0 \\ 0 & 0 \end{matrix}\right\rbrack \) is unitarily equivalent to \[ \left\lbrack \begin{matrix} {A}^{1/2} & 0 \\ {B}^{1/2} & 0 \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {A}^{1/2} & {B}^{1/2} \\ 0 & 0 \end{matrix}\right\rbrack = \left\lbrack \begin{matrix} A & {A}^{1/2}{B}^{1/2} \\ {B}^{1/2}{A}^{1/2} & B \end{matrix}\right\rbrack . \] But \( \left\lbrack \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right\rbrack \) is a pinching of this last matrix. As a corollary we have: Theorem IV.2.14 (Rotfel’d) Let \( f : {\mathbb{R}}_{ + } \rightarrow {\mathbb{R}}_{ + } \) be a concave function with \( f\left( 0\right) = 0 \) . Then the function \( F \) on \( \mathbf{M}\left( \mathbf{n}\right) \) defined by \[ F\left( A\right) = \mathop{\sum }\limits_{{j = 1}}^{n}f\left( {{s}_{j}\left( A\right) }\right) \] (IV.54) is subadditive. Proof. The second inequality in (IV.53) can be written as a majorisation in \( {\mathbb{R}}^{2n} \) : \[ \left( {s\left( A\right), s\left( B\right) }\right) { \prec }_{w}\left( {s\left( {\left| A\right| + \left| B\right| }\right) ,0}\right) \] for all \( A, B \in \mathbf{M}\left( \mathbf{n}\right) \) . We also know that \( s\left( {\left| A\right| + \left| B\right| }\right) \prec s\left( A\right) + s\left( B\right) \) . Hence \[ \left( {s\left( A\right), s\left( B\right) }\right) \prec \left( {s\left( A\right) + s\left( B\right) ,0}\right) . \] Now proceed as in Problem II.5.12. Exercise IV.2.15 Let \( \parallel \left| \cdot \right| \parallel \) be a unitarily invariant norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . For \( m < n \) and \( A \in \mathbf{M}\left( \mathbf{m}\right) \) , define \[ \parallel \left| A\right| {\parallel }^{ \dagger } = \begin{Vmatrix}\left| \left\lbrack \begin{matrix} A & 0 \\ 0 & 0 \end{matrix}\right\rbrack \right| \end{Vmatrix}. 
\] Show that \( \parallel \mid \cdot {\left| \right| }^{ \dagger } \) defines a unitarily invariant norm on \( \mathbf{M}\left( \mathbf{m}\right) \) . We will use this idea of "dilating" \( A \) and of going from \( \mathbf{M}\left( \mathbf{n}\right) \) to \( \mathbf{M}\left( \mathbf{{2n}}\right) \) in later chapters. Procedures given in Exercises IV.1.19 and IV.1.20 can be adapted to matrices to generate unitarily invariant norms. ## IV. 3 Lidskii's Theorem (Third Proof) Let \( {\lambda }^{ \downarrow }\left( A\right) \) denote the \( n \) -vector whose coordinates are the eigenvalues of a Hermitian matrix \( A \) arranged in decreasing order. Lidskii’s Theorem, for which we gave two proofs in Section III.4, says that if \( A, B \) are Hermitian matrices, then we have the majorisation \[ {\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \downarrow }\left( B\right) \prec \lambda \left( {A - B}\right) \] (IV.55) We will give another proof of this theorem now, using the easier ideas of Sections III. 1 and III.2. Exercise IV.3.1 One corollary of Lidskii’s Theorem is that, if \( A \) and \( B \) are any two matrices, then \[ \left| {s\left( A\right) - s\left( B\right) }\right| { \prec }_{w}s\left( {A - B}\right) . \] (IV.56) See Problem III.6.13. Conversely, show that if (IV.56) is known to be true for all matrices \( A, B \), then we can derive from it the statement (IV.55). [Hint: Choose real numbers \( \alpha ,\beta \) such that \( A + {\alpha I} \geq B + {\beta I} \geq 0 \) .] We will prove (IV.56) by a different argument. To prove this we need to prove that for each of the Ky Fan symmetric gauge functions \( {\Phi }_{\left( k\right) },1 \leq k \leq \) \( n \), we have the inequality \[ {\Phi }_{\left( k\right) }\left( {s\left( A\right) - s\left( B\right) }\right) \leq {\Phi }_{\left( k\right) }\left( {s\left( {A - B}\right) }\right) . 
\] (IV.57) We will prove this for \( {\Phi }_{\left( 1\right) } \) and \( {\Phi }_{\left( n\right) } \), and then use the interpolation formulas (IV.9) and (IV.39). For \( {\Phi }_{\left( 1\right) } \) this is easy. By Weyl’s perturbation theorem (Corollary III.2.6) we have \[ \mathop{\max }\limits_{j}\left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) }\right| \leq \parallel A - B\parallel \] This can be proved easily by another argument also. For any \( j \) consider the subspaces spanned by \( \left\{ {{u}_{1},\ldots ,{u}_{j}}\right\} \) and \( \left\{ {{v}_{j},\ldots ,{v}_{n}}\right\} \), where \( {u}_{i},{v}_{i},1 \leq \) \( i \leq n \) are eigenvectors of \( A \) and \( B \) corresponding to their eigenvalues \( {\lambda }_{i}^{ \downarrow }\left( A\right) \) and \( {\lambda }_{i}^{ \downarrow }\left( B\right) \), respectively. Since the dimensions of these two spaces add up to \( n + 1 \), they have a nonzero intersection. For a unit vector \( x \) in this intersection we have \( \langle x,{Ax}\rangle \geq {\lambda }_{j}^{ \downarrow }\left( A\right) \) and \( \langle x,{Bx}\rangle \leq {\lambda }_{j}^{ \downarrow }\left( B\right) \) . Hence, we have \[ \parallel A - B\parallel \geq \left| {\langle x,\left( {A - B}\right) x\rangle }\right| \geq {\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) . \] So, by symmetry \[ \left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) }\right| \leq \parallel A - B\parallel ,\;1 \leq j \leq n. \] From this, as before, we can get \[ \mathop{\max }\limits_{j}\left| {{s}_{j}\left( A\right) - {s}_{j}\left( B\right) }\right| \leq \parallel A - B\parallel \] for any two matrices \( A \) and \( B \) . 
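The bound just derived, \( \mathop{\max }\limits_{j}\left| {{s}_{j}\left( A\right) - {s}_{j}\left( B\right) }\right| \leq \parallel A - B\parallel \), is easy to test numerically. The snippet below is our own NumPy sketch, not part of the text; it checks the inequality on a pair of random matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
B = rng.standard_normal((6, 6))

sA = np.linalg.svd(A, compute_uv=False)  # singular values, decreasing order
sB = np.linalg.svd(B, compute_uv=False)
op_norm = np.linalg.svd(A - B, compute_uv=False)[0]  # spectral norm ||A - B||

# max_j |s_j(A) - s_j(B)| <= ||A - B||
assert np.max(np.abs(sA - sB)) <= op_norm + 1e-12
```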
This is the same as saying \[ {\Phi }_{\left( 1\right) }\left( {s\left( A\right) - s\left( B\right) }\right) \leq {\Phi }_{\left( 1\right) }\left( {s\left( {A - B}\right) }\right) \] (IV.58) Let \( T \) be a Hermitian matrix with eigenvalues \( {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{p} > {\lambda }_{p + 1} \geq \cdots \geq {\lambda }_{n} \), where \( {\lambda }_{p} \geq 0 > {\lambda }_{p + 1} \) . Choose a unitary matrix \( U \) such that \( T = {UD}{U}^{ * } \), where \( D \) is the diagonal matrix \( D = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \) . Let \( {D}^{ + } = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{p},0,\ldots ,0}\right) \) and \( {D}^{ - } = \operatorname{diag}\left( {0,\ldots ,0, - {\lambda }_{p + 1},\ldots , - {\lambda }_{n}}\right) \) . Let \( {T}^{ + } = U{D}^{ + }{U}^{ * },{T}^{ - } = U{D}^{ - }{U}^{ * } \) . Then both \( {T}^{ + } \) and \( {T}^{ - } \) are positive matrices and \[ T = {T}^{ + } - {T}^{ - } \] (IV.59) This is called the Jordan decomposition of \( T \) . Lemma IV.3.2 If \( A, B \) are \( n \times n \) Hermitian matrices, then \[ \mathop{\sum }\limits_{{j = 1}}^{n}\left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) }\right| \leq \parallel A - B{\parallel }_{\left( n\right) } \] (IV.60) Proof.
Using the Jordan decomposition of \( A - B \) we can write \[ \parallel A - B{\parallel }_{\left( n\right) } = \operatorname{tr}{\left( A - B\right) }^{ + } + \operatorname{tr}{\left( A - B\right) }^{ - }. \] If we put \[ C = A + {\left( A - B\right) }^{ - } = B + {\left( A - B\right) }^{ + }, \] then \( C \geq A \) and \( C \geq B \) . Hence, by Weyl’s monotonicity principle, \( {\lambda }_{j}^{ \downarrow }\left( C\right) \geq \) \( {\lambda }_{j}^{ \downarrow }\left( A\right) \) and \( {\lambda }_{j}^{ \downarrow }\left( C\right) \geq {\lambda }_{j}^{ \downarrow }\left( B\right) \) for all \( j \) . From these inequalities it follows that \[ \left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) }\right| \leq {\lambda }_{j}^{ \downarrow }\left( {2C}\right) - {\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) \] Hence, \[ \mathop{\sum }\limits_{{j = 1}}^{n}\left| {{\lambda }_{j}^{ \downarrow }\left( A\right) - {\lambda }_{j}^{ \downarrow }\left( B\right) }\right| \leq \operatorname{tr}\left( {{2C} - A - B}\right) = \parallel A - B{\parallel }_{\left( n\right) }. \] Corollary IV.3.3 For any two \( n \times n \) matrices \( A, B \) we have \[ {\Phi }_{\left( n\right) }\left( {s\left( A\right) - s\left( B\right) }\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\left| {{s}_{j}\left( A\right) - {s}_{j}\left( B\right) }\right| \leq \parallel A - B{\parallel }_{\left( n\right) }. \] (IV.61) Theorem IV.3.4 For \( n \times n \) matrices \( A, B \) we have the majorisation \[ \left| {s\left( A\right) - s\left( B\right) }\right| { \prec }_{w}s\left( {A - B}\right) . \] Proof. Choose any index \( k = 1,2,\ldots, n \) and fix it. 
By Proposition IV.2.3, there exist \( X, Y \in \mathbf{M}\left( \mathbf{n}\right) \) such that \[ A - B = X + Y \] and \[ \parallel A - B{\parallel }_{\left( k\right) } = \parallel X{\parallel }_{\left( n\right) } + k\parallel Y\parallel \] Define vectors \( \alpha ,\beta \) as \[ \alpha = s\left( {X + B}\right) - s\left( B\right) \] \[ \beta = s\left( A\right) - s\left( {X + B}\right) . \] Then \[ s\left( A\right) - s\left( B\right) = \alpha + \beta . \] Hence, by Proposition IV.1.5 (or Proposition IV.2.3 restricted to diagonal matrices) and by (IV.58) and (IV.61), we have \[ {\Phi }_{\left( k\right) }\left( {s\left( A\right) - s\left( B\right) }\right) \leq {\Phi }_{\left( n\right) }\left( \alpha \right) + k{\Phi }_{\left( 1\right) }\left( \beta \right) \] \[ = {\Phi }_{\left( n\right) }\left( {s\left( {X + B}\right) - s\left( B\right) }\right) + k{\Phi }_{\left( 1\right) }\left( {s\left( A\right) - s\left( {X + B}\right) }\right) \] \[ \leq \parallel X{\parallel }_{\left( n\right) } + k\parallel A - \left( {X + B}\right) \parallel \] \[ = \parallel X{\parallel }_{\left( n\right) } + k\parallel Y\parallel \] \[ = \parallel A - B{\parallel }_{\left( k\right) }\text{.} \] This proves the theorem. As we observed in Exercise IV.3.1, this theorem is equivalent to Lidskii's Theorem. In Section III. 2 we introduced the notation Eig \( A \) for a diagonal matrix whose diagonal entries are the eigenvalues of a matrix \( A \) . 
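Theorem IV.3.4 can be illustrated numerically. The weak majorisation \( \left| {s\left( A\right) - s\left( B\right) }\right| { \prec }_{w}s\left( {A - B}\right) \) says that every partial sum of the decreasingly rearranged vector \( \left| {s\left( A\right) - s\left( B\right) }\right| \) is dominated by the corresponding partial sum of \( s\left( {A - B}\right) \), i.e., every Ky Fan norm inequality (IV.57) holds. A small NumPy check (ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)

d = np.sort(np.abs(sA - sB))[::-1]          # |s(A) - s(B)| rearranged decreasingly
e = np.linalg.svd(A - B, compute_uv=False)  # s(A - B), already decreasing

# weak majorisation: all Ky Fan partial sums of d are dominated by those of e
assert np.all(np.cumsum(d) <= np.cumsum(e) + 1e-10)
```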
The majorisations \[ {\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \downarrow }\left( B\right) \prec \lambda \left( {A - B}\right) \prec {\lambda }^{ \downarrow }\left( A\right) - {\lambda }^{ \uparrow }\left( B\right) \] for the eigenvalues of Hermitian matrices lead to norm inequalities \[ \left| \left| \left| {{\operatorname{Eig}}^{ \downarrow }\left( A\right) - {\operatorname{Eig}}^{ \downarrow }\left( B\right) }\right| \right| \right| \leq \left| \left| \left| {A - B}\right| \right| \right| \leq \left| \left| \left| {{\operatorname{Eig}}^{ \downarrow }\left( A\right) - {\operatorname{Eig}}^{ \uparrow }\left( B\right) }\right| \right| \right| , \] (IV.62) for all unitarily invariant norms. This is just another way of expressing Theorem III.4.4. The inequalities of Theorem III.2.8 and Problem III.6.15 are special cases of this. We will see several generalisations of this inequality and still other proofs of it. Exercise IV.3.5 If \( {\operatorname{Sing}}^{ \downarrow }\left( A\right) \) denotes the diagonal matrix whose diagonal entries are \( {s}_{1}\left( A\right) ,\ldots ,{s}_{n}\left( A\right) \), then it follows from Theorem IV.3.4 that for any two matrices \( A, B \) \[ \left| \left| \left| {{\operatorname{Sing}}^{ \downarrow }\left( A\right) - {\operatorname{Sing}}^{ \downarrow }\left( B\right) }\right| \right| \right| \leq \left| \left| \left| {A - B}\right| \right| \right| \] for every unitarily invariant norm. Show that in this case the "opposite inequality" \[ \left| \left| \left| {A - B}\right| \right| \right| \leq \left| \left| \left| {{\operatorname{Sing}}^{ \downarrow }\left( A\right) - {\operatorname{Sing}}^{ \uparrow }\left( B\right) }\right| \right| \right| \] is not always true.

## IV. 4 Weakly Unitarily Invariant Norms

Consider the following numbers associated with an \( n \times n \) matrix: (i) \( \left| {\operatorname{tr}A}\right| = \left| {\sum {\lambda }_{j}\left( A\right) }\right| \) ; (ii) \( \operatorname{spr}A = \mathop{\max }\limits_{{1 \leq j \leq n}}\left| {{\lambda }_{j}\left( A\right) }\right| \), the spectral radius of \( A \) ; (iii) \( w\left( A\right) = \mathop{\max }\limits_{{\parallel x\parallel = 1}}\left| {\langle x,{Ax}\rangle }\right| \), the numerical radius of \( A \) . Of these, the first one is a seminorm but not a norm on \( \mathbf{M}\left( \mathbf{n}\right) \), the second one is not a seminorm, and the third one is a norm. (See Exercise I.2.10.) All three functions of a matrix described above have an important invariance property: they do not change under unitary conjugations; i.e., the transformations \( A \rightarrow {UA}{U}^{ * }, U \) unitary, do not change these functions. Indeed, the first two are invariant under the larger class of similarity transformations \( A \rightarrow {SA}{S}^{-1}, S \) invertible. The third one is not invariant under all such transformations. Exercise IV.4.1 Show that no norm on \( \mathbf{M}\left( \mathbf{n}\right) \) can be invariant under all similarity transformations. Unlike the norms that were studied in Section 2, none of the three functions mentioned above is invariant under all transformations \( A \rightarrow {UAV} \), where \( U, V \) vary over the unitary group \( \mathbf{U}\left( \mathbf{n}\right) \) . We will call a norm \( \tau \) on \( \mathbf{M}\left( \mathbf{n}\right) \) weakly unitarily invariant (wui, for short) if \[ \tau \left( A\right) = \tau \left( {{UA}{U}^{ * }}\right) \;\text{ for all }\;A \in \mathbf{M}\left( \mathbf{n}\right), U \in \mathbf{U}\left( \mathbf{n}\right) . \] (IV.63) Examples of such norms include the unitarily invariant norms and the numerical radius. Some more will be constructed now.
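As a concrete check of weak unitary invariance, the numerical radius can be computed from the identity \( w\left( A\right) = \mathop{\max }\limits_{\theta }{\lambda }_{\max }\left( {\operatorname{Re}\left( {{e}^{i\theta }A}\right) }\right) \), where \( \operatorname{Re}T = \left( {T + {T}^{ * }}\right) /2 \) ; this follows from \( \left| z\right| = \mathop{\max }\limits_{\theta }\operatorname{Re}{e}^{i\theta }z \) . The sketch below is our own (assuming NumPy; the uniform grid over \( \theta \) is our approximation of the maximum) and verifies (IV.63) for \( \tau = w \) under a random unitary conjugation.

```python
import numpy as np

rng = np.random.default_rng(3)

def numerical_radius(A, grid=720):
    # w(A) = max_theta lambda_max((e^{i theta} A + e^{-i theta} A*) / 2),
    # approximated on a uniform grid of theta values
    return max(
        np.linalg.eigvalsh((np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2)[-1]
        for t in np.linspace(0.0, 2 * np.pi, grid, endpoint=False)
    )

A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

# weak unitary invariance (IV.63): Re(e^{i theta} QAQ*) = Q Re(e^{i theta} A) Q*,
# so w(QAQ*) agrees with w(A) grid point by grid point
assert abs(numerical_radius(A) - numerical_radius(Q @ A @ Q.conj().T)) < 1e-8
# w(A) <= ||A||, since |<x, Ax>| <= ||A|| for unit vectors x
assert numerical_radius(A) <= np.linalg.svd(A, compute_uv=False)[0] + 1e-8
```

Note that \( w \) is not invariant under the two-sided transformations \( A \rightarrow {UAV} \) ; that is exactly what separates wui norms from the unitarily invariant norms of Section 2.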
Exercise IV.4.2 Let \( {E}_{11} \) be the diagonal matrix with its top left entry 1 and all other entries zero. Then \[ w\left( A\right) = \mathop{\max }\limits_{{U \in \mathbf{U}\left( \mathbf{n}\right) }}\left| {\operatorname{tr}{E}_{11}{UA}{U}^{ * }}\right| \] (IV.64) Equivalently, \( w\left( A\right) = \max \{ \left| {\operatorname{tr}{AP}}\right| : P \) is an orthogonal projection of rank 1 \( \} \) . Given a matrix \( C \), let \[ {w}_{C}\left( A\right) = \mathop{\max }\limits_{{U \in \mathbf{U}\left( \mathbf{n}\right) }}\left| {\operatorname{tr}{CUA}{U}^{ * }}\right| ,\;A \in \mathbf{M}\left( \mathbf{n}\right) . \] (IV.65) This is called the C-numerical radius of \( A \) . Exercise IV.4.3 For every \( C \in \mathbf{M}\left( \mathbf{n}\right) \), the C-numerical radius \( {w}_{C} \) is a wui seminorm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Proposition IV.4.4 The C-numerical radius \( {w}_{C} \) is a norm on \( \mathbf{M}\left( \mathbf{n}\right) \) if and only if (i) \( C \) is not a scalar multiple of \( I \), and (ii) \( \operatorname{tr}C \neq 0 \) . Proof. If \( C = {\lambda I} \) for any \( \lambda \in \mathbb{C} \),
then \( {w}_{C}\left( A\right) = \left| \lambda \right| \left| {\operatorname{tr}A}\right| \), and this is zero if \( \operatorname{tr}A = 0 \) . So \( {w}_{C} \) cannot be a norm. If \( \operatorname{tr}C = 0 \), then \( {w}_{C}\left( I\right) = \left| {\operatorname{tr}C}\right| = 0 \) . Again \( {w}_{C} \) is not a norm. Thus (i) and (ii) are necessary conditions for \( {w}_{C} \) to be a norm. Conversely, suppose \( {w}_{C}\left( A\right) = 0 \) . If \( A \) were a scalar multiple of \( I \), this would mean that \( \operatorname{tr}C = 0 \) . So, if \( \operatorname{tr}C \neq 0 \), then \( A \) is not a scalar multiple of \( I \) . Hence \( A \) has an eigenspace \( \mathcal{M} \) of dimension \( m \), for some \( 0 < m < n \) . Since \( {e}^{tK} \) is a unitary matrix for all real \( t \) and skew-Hermitian \( K \), the condition \( {w}_{C}\left( A\right) = 0 \) implies in particular that \[ \operatorname{tr}C{e}^{tK}A{e}^{-{tK}} = 0\;\text{ if }\;t \in \mathbb{R}, K = - {K}^{ * }.
\] Differentiating this relation at \( t = 0 \), one gets \[ \operatorname{tr}\left( {{AC} - {CA}}\right) K = 0\;\text{ if }\;K = - {K}^{ * }. \] Hence, we also have \[ \operatorname{tr}\left( {{AC} - {CA}}\right) X = 0\text{ for all }X \in \mathbf{M}\left( \mathbf{n}\right) . \] Hence \( {AC} = {CA} \) . (Recall that \( \langle S, T\rangle = \operatorname{tr}{S}^{ * }T \) is an inner product on \( \mathbf{M}\left( \mathbf{n}\right) \) .) Since \( C \) commutes with \( A \), it leaves invariant the \( m \) -dimensional eigenspace \( \mathcal{M} \) of \( A \) we mentioned earlier. Now, note that since \( {w}_{C}\left( A\right) = \) \( {w}_{C}\left( {{UA}{U}^{ * }}\right), C \) also commutes with \( {UA}{U}^{ * } \) for every \( U \in \mathbf{U}\left( \mathbf{n}\right) \) . But \( {UA}{U}^{ * } \) has the space \( U\mathcal{M} \) as an eigenspace. So, \( C \) also leaves \( U\mathcal{M} \) invariant for all \( U \in \mathbf{U}\left( \mathbf{n}\right) \) . But this would mean that \( C \) leaves all \( m \) -dimensional subspaces invariant, which in turn would mean \( C \) leaves all one-dimensional subspaces invariant, which is possible only if \( C \) is a scalar multiple of \( I \) . More examples of wui norms are given in the following exercise. Exercise IV.4.5 (i) \( \tau \left( A\right) = \parallel A\parallel + \left| {\operatorname{tr}A}\right| \) is a wui norm. More generally, the sum of any wui norm and a wui seminorm is a wui norm. (ii) \( \tau \left( A\right) = \max \left( {\parallel A\parallel ,\left| {\operatorname{tr}A}\right| }\right) \) is a wui norm. More generally, the maximum of any wui norm and a wui seminorm is a wui norm. (iii) Let \( W\left( A\right) \) be the numerical range of \( A \) . Then its diameter \( \operatorname{diam}W\left( A\right) \) is a wui seminorm on \( \mathbf{M}\left( \mathbf{n}\right) \) . It can be used to generate wui norms as in (i) and (ii). 
Of particular interest would be the norm \( \tau \left( A\right) = \) \( w\left( A\right) + \operatorname{diam}W\left( A\right) \) . (iv) Let \( m\left( A\right) \) be any norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Then \[ \tau \left( A\right) = \mathop{\max }\limits_{{U \in \mathbf{U}\left( \mathbf{n}\right) }}m\left( {{UA}{U}^{ * }}\right) \] is a wui norm. (v) Let \( m\left( A\right) \) be any norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Then \[ \tau \left( A\right) = {\int }_{\mathbf{U}\left( \mathbf{n}\right) }m\left( {{UA}{U}^{ * }}\right) {dU} \] where the integral is with respect to the (normalised) Haar measure on \( \mathbf{U}\left( \mathbf{n}\right) \) is a wui norm. (vi) Let \[ \tau \left( A\right) = \mathop{\max }\limits_{{{e}_{1},\ldots ,{e}_{n}}}\mathop{\max }\limits_{{i, j}}\left| \left\langle {{e}_{i}, A{e}_{j}}\right\rangle \right| \] where \( {e}_{1},\ldots ,{e}_{n} \) varies over all orthonormal bases. Then \( \tau \) is a wui norm. How is this related to (ii) and (iv) above? Let \( S \) be the unit sphere in \( {\mathbb{C}}^{n} \) , \[ S = \left\{ {x \in {\mathbb{C}}^{n} : \parallel x\parallel = 1}\right\} \] and let \( C\left( S\right) \) be the space of all complex valued continuous functions on \( S \) . Let \( {dx} \) denote the normalised Lebesgue measure on \( S \) . Consider the familiar \( {L}_{p} \) -norms on \( C\left( S\right) \) defined as \[ {N}_{p}\left( f\right) = \parallel f{\parallel }_{p} = {\left( {\int }_{S}{\left| f\left( x\right) \right| }^{p}dx\right) }^{1/p},\;1 \leq p < \infty , \] \[ {N}_{\infty }\left( f\right) = \parallel f{\parallel }_{\infty } = \mathop{\max }\limits_{{x \in S}}\left| {f\left( x\right) }\right| . \] (IV.66) Since the measure \( {dx} \) is invariant under rotations, the above norms satisfy the invariance property \[ {N}_{p}\left( {f \circ U}\right) = {N}_{p}\left( f\right) \text{ for all }f \in C\left( S\right), U \in \mathbf{U}\left( \mathbf{n}\right) . 
\] We will call a norm \( N \) on \( C\left( S\right) \) a unitarily invariant function norm if \[ N\left( {f \circ U}\right) = N\left( f\right) \text{ for all }f \in C\left( S\right), U \in \mathbf{U}\left( \mathbf{n}\right) . \] (IV.67) The \( {L}_{p} \) -norms are important examples of such norms. Now, every \( A \in \mathbf{M}\left( \mathbf{n}\right) \) induces, naturally, a function \( {f}_{A} \) on \( S \) by its quadratic form: \[ {f}_{A}\left( x\right) = \langle x,{Ax}\rangle \] (IV.68) The correspondence \( A \rightarrow {f}_{A} \) is a linear map from \( \mathbf{M}\left( \mathbf{n}\right) \) into \( C\left( S\right) \), which is one-to-one. So, given a norm \( N \) on \( C\left( S\right) \), if we define a function \( {N}^{\prime } \) on \( \mathbf{M}\left( \mathbf{n}\right) \) as \[ {N}^{\prime }\left( A\right) = N\left( {f}_{A}\right) \] (IV.69) then \( {N}^{\prime } \) is a norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Further, \[ {N}^{\prime }\left( {{UA}{U}^{ * }}\right) = N\left( {f}_{{UA}{U}^{ * }}\right) = N\left( {{f}_{A} \circ {U}^{ * }}\right) . \] So, if \( N \) is a unitarily invariant function norm on \( C\left( S\right) \) then \( {N}^{\prime } \) is a wui norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . The next theorem says that all wui norms arise in this way: Theorem IV.4.6 A norm \( \tau \) on \( \mathbf{M}\left( \mathbf{n}\right) \) is weakly unitarily invariant if and only if there exists a unitarily invariant function norm \( N \) on \( C\left( S\right) \) such that \( \tau = {N}^{\prime } \), where the map \( N \rightarrow {N}^{\prime } \) is defined by relations (IV.68) and (IV.69). Proof. We need to prove that every wui norm \( \tau \) on \( \mathbf{M}\left( \mathbf{n}\right) \) is of the form \( {N}^{\prime } \) for some unitarily invariant function norm \( N \) . Let \( F = \left\{ {{f}_{A} : A \in \mathbf{M}\left( \mathbf{n}\right) }\right\} \) . This is a finite-dimensional linear subspace of \( C\left( S\right) \) . 
Given a wui norm \( \tau \), define \( {N}_{0} \) on \( F \) by \[ {N}_{0}\left( {f}_{A}\right) = \tau \left( A\right) \] (IV.70) Then \( {N}_{0} \) defines a norm on \( F \), and further, \( {N}_{0}\left( {f \circ U}\right) = {N}_{0}\left( f\right) \) for all \( f \in F \) . We will extend \( {N}_{0} \) from \( F \) to all of \( C\left( S\right) \) to obtain a norm \( N \) that is unitarily invariant. Clearly, then \( \tau = {N}^{\prime } \) . This extension is obtained by an application of the Hahn-Banach Theorem. The space \( C\left( S\right) \) is a Banach space with the supremum norm \( \parallel f{\parallel }_{\infty } \) . The finite-dimensional subspace \( F \) has two norms \( {N}_{0} \) and \( \parallel \cdot {\parallel }_{\infty } \) . These must be equivalent: there exist constants \( 0 < \alpha \leq \beta < \infty \) such that \( \alpha \parallel f{\parallel }_{\infty } \leq {N}_{0}\left( f\right) \leq \beta \parallel f{\parallel }_{\infty } \) for all \( f \in F \) . Let \( G \) be the set of all linear functionals on \( F \) that have norm less than or equal to 1 with respect to the norm \( {N}_{0} \) ; i.e., the linear functional \( g \) is in \( G \) if and only if \( \left| {g\left( f\right) }\right| \leq {N}_{0}\left( f\right) \) for all \( f \in F \) . By duality then \( {N}_{0}\left( f\right) = \mathop{\sup }\limits_{{g \in G}}\left| {g\left( f\right) }\right| \), for every \( f \in F \) . Now \( \left| {g\left( f\right) }\right| \leq \beta \parallel f{\parallel }_{\infty } \) for \( g \in G \) and \( f \in F \) . Hence, by the Hahn-Banach Theorem, each \( g \) can be extended to a linear functional \( \widetilde{g} \) on \( C\left( S\right) \) such that \( \left| {\widetilde{g}\left( f\right) }\right| \leq \beta \parallel f{\parallel }_{\infty } \) for all \( f \in C\left( S\right) \) . 
Now define \[ \theta \left( f\right) = \mathop{\sup }\limits_{{g \in G}}\left| {\widetilde{g}\left( f\right) }\right| ,\;\text{ for all }\;f \in C\left( S\right) . \] Then \( \theta \) is a seminorm on \( C\left( S\right)
\) that coincides with \( {N}_{0} \) on \( F \) . Let \[ \mu \left( f\right) = \max \left\{ {\theta \left( f\right) ,\alpha \parallel f{\parallel }_{\infty }}\right\} ,\;f \in C\left( S\right) . \] Then \( \mu \) is a norm on \( C\left( S\right) \), and \( \mu \) coincides with \( {N}_{0} \) on \( F \) . Now define \[ N\left( f\right) = \mathop{\sup }\limits_{{U \in \mathbf{U}\left( \mathbf{n}\right) }}\mu \left( {f \circ U}\right) ,\;f \in C\left( S\right) . \] Then \( N \) is a unitarily invariant function norm on \( C\left( S\right) \) that coincides with \( {N}_{0} \) on \( F \) . The proof is complete. When \( N = \parallel \cdot {\parallel }_{\infty } \) the norm \( {N}^{\prime } \) induced by the above procedure is the numerical radius \( w \) . Another example is discussed in the Notes.
The \( C \) -numerical radii play a useful role in proving inequalities for wui norms: Theorem IV.4.7 For \( A, B \in \mathbf{M}\left( \mathbf{n}\right) \) the following statements are equivalent: (i) \( \tau \left( A\right) \leq \tau \left( B\right) \) for all wui norms \( \tau \) . (ii) \( {w}_{C}\left( A\right) \leq {w}_{C}\left( B\right) \) for all upper triangular matrices \( C \) that are not scalars and have nonzero trace. (iii) \( {w}_{C}\left( A\right) \leq {w}_{C}\left( B\right) \) for all \( C \in \mathbf{M}\left( \mathbf{n}\right) \) . (iv) \( A \) can be expressed as a finite sum \( A = \sum {z}_{k}{U}_{k}B{U}_{k}^{ * } \) where \( {U}_{k} \in \mathbf{U}\left( \mathbf{n}\right) \) and \( {z}_{k} \) are complex numbers with \( \sum \left| {z}_{k}\right| \leq 1 \) . Proof. By Proposition IV.4.4, when \( C \) is not a scalar and \( \operatorname{tr}C \neq 0 \), each \( {w}_{C} \) is a wui norm. So (i) \( \Rightarrow \) (ii). Note that \( {w}_{C}\left( A\right) = {w}_{A}\left( C\right) \) for all pairs of matrices \( A, C \) . So, if (ii) is true, then \( {w}_{A}\left( C\right) \leq {w}_{B}\left( C\right) \) for all upper triangular nonscalar matrices \( C \) with nonzero trace. Since \( {w}_{A} \) and \( {w}_{B} \) are wui, and since every matrix is unitarily equivalent to an upper triangular matrix, this implies that \( {w}_{A}\left( C\right) \leq {w}_{B}\left( C\right) \) for all nonscalar matrices \( C \) with nonzero trace. But such \( C \) are dense in the space \( \mathbf{M}\left( \mathbf{n}\right) \) . So \( {w}_{A}\left( C\right) \leq {w}_{B}\left( C\right) \) for all \( C \in \mathbf{M}\left( \mathbf{n}\right) \) . Hence (iii) is true. Let \( \mathcal{K} \) be the convex hull of all matrices \( {e}^{i\theta }{UB}{U}^{ * },\theta \in \mathbb{R}, U \in \mathbf{U}\left( \mathbf{n}\right) \) . Then \( \mathcal{K} \) is a compact convex set in \( \mathbf{M}\left( \mathbf{n}\right) \) . 
The statement (iv) is equivalent to saying that \( A \in \mathcal{K} \) . If \( A \notin \mathcal{K} \), then by the Separating Hyperplane Theorem there exists a linear functional \( f \) on \( \mathbf{M}\left( \mathbf{n}\right) \) such that \( \operatorname{Re}f\left( A\right) > \operatorname{Re}f\left( X\right) \) for all \( X \in \mathcal{K} \) . For this linear functional \( f \) there exists a matrix \( C \) such that \( f\left( Y\right) = \operatorname{tr}{CY} \) for all \( Y \in \mathbf{M}\left( \mathbf{n}\right) \) . (Problem IV.5.8) For these \( f \) and \( C \) we have \[ {w}_{C}\left( A\right) = \mathop{\max }\limits_{{U \in \mathbf{U}\left( \mathbf{n}\right) }}\left| {\operatorname{tr}{CUA}{U}^{ * }}\right| \geq \left| {\operatorname{tr}{CA}}\right| = \left| {f\left( A\right) }\right| \geq \operatorname{Re}f\left( A\right) \] \[ > \mathop{\max }\limits_{{X \in \mathcal{K}}}\operatorname{Re}f\left( X\right) \] \[ = \mathop{\max }\limits_{{\theta, U}}\operatorname{Re}\operatorname{tr}C{e}^{i\theta }{UB}{U}^{ * } \] \[ = \mathop{\max }\limits_{U}\left| {\operatorname{tr}{CUB}{U}^{ * }}\right| \] \[ = {w}_{C}\left( B\right) \text{.} \] So, if (iii) were true, then (iv) cannot be false. Clearly (iv) \( \Rightarrow \) (i). The family \( {w}_{C} \) of C-numerical radii, where \( C \) is not a scalar and has nonzero trace, thus plays a role analogous to that of the Ky Fan norms in the family of unitarily invariant norms. However, unlike the Ky Fan family on \( \mathbf{M}\left( \mathbf{n}\right) \), this family is infinite. It turns out that no finite subfamily of wui norms can play this role. More precisely, there does not exist any finite family \( {\tau }_{1},\ldots ,{\tau }_{m} \) of wui norms on \( \mathbf{M}\left( \mathbf{n}\right) \) that would lead to the inequalities \( \tau \left( A\right) \leq \tau \left( B\right) \) for all wui norms whenever \( {\tau }_{j}\left( A\right) \leq {\tau }_{j}\left( B\right) ,1 \leq j \leq m \) . 
For if such a family existed, then we would have \[ \{ X : \tau \left( X\right) \leq \tau \left( I\right) \text{ for all wui norms }\tau \} = \mathop{\bigcap }\limits_{{j = 1}}^{m}\left\{ {X : {\tau }_{j}\left( X\right) \leq {\tau }_{j}\left( I\right) }\right\} . \] (IV.71) Now each of the sets in this intersection contains 0 as an interior point (with respect to some fixed topology on \( \mathbf{M}\left( \mathbf{n}\right) \) ). Hence the intersection also contains 0 as an interior point. However, by Theorem IV.4.7, the set on the left-hand side of (IV.71) reduces to the set \( \{ {zI} : z \in \mathbb{C},\left| z\right| \leq 1\} \), and this set has an empty interior in \( \mathbf{M}\left( \mathbf{n}\right) \) . Finally, note an important property of all wui norms: \[ \tau \left( {\mathcal{C}\left( A\right) }\right) \leq \tau \left( A\right) \] (IV.72) for all \( A \in \mathbf{M}\left( \mathbf{n}\right) \) and all pinchings \( \mathcal{C} \) on \( \mathbf{M}\left( \mathbf{n}\right) \) . In Chapter 6 we will prove a generalisation of Lidskii's inequality (IV.62) extending it to all wui norms. ## IV. 5 Problems Problem IV.5.1. When \( 0 < p < 1 \), the function \( {\Phi }_{p}\left( x\right) = {\left( \sum {\left| {x}_{i}\right| }^{p}\right) }^{1/p} \) does not define a norm. Show that in lieu of the triangle inequality we have \[ {\Phi }_{p}\left( {x + y}\right) \leq {2}^{\frac{1}{p} - 1}\left\lbrack {{\Phi }_{p}\left( x\right) + {\Phi }_{p}\left( y\right) }\right\rbrack ,\;0 < p < 1. \] (Use the fact that \( f\left( t\right) = {t}^{p} \) on \( {\mathbb{R}}_{ + } \) is subadditive when \( 0 < p \leq 1 \) and convex when \( p \geq 1 \) .) Positive homogeneous functions that do not satisfy the triangle inequality but a weaker inequality \( \varphi \left( {x + y}\right) \leq c\left\lbrack {\varphi \left( x\right) + \varphi \left( y\right) }\right\rbrack \) for some constant \( c > 1 \) are sometimes called quasi-norms. Problem IV.5.2. 
More generally, show that for any symmetric gauge function \( \Phi \) and \( 0 < p < 1 \), if we define \( {\Phi }^{\left( p\right) } \) as in (IV.17), then \[ {\Phi }^{\left( p\right) }\left( {x + y}\right) \leq {2}^{\frac{1}{p} - 1}\left\lbrack {{\Phi }^{\left( p\right) }\left( x\right) + {\Phi }^{\left( p\right) }\left( y\right) }\right\rbrack ,\;0 < p < 1. \] Problem IV.5.3. All norms on \( {\mathbb{C}}^{n} \) are equivalent in the sense that if \( \Phi \) and \( \Psi \) are two norms, then there exists a constant \( K \) such that \( \Phi \left( x\right) \leq {K\Psi }\left( x\right) \) for all \( x \in {\mathbb{C}}^{n} \) . Let \[ {K}_{\Phi ,\Psi } = \inf \{ K : \Phi \left( x\right) \leq {K\Psi }\left( x\right) \text{ for all }x\} . \] Find the constants \( {K}_{\Phi ,\Psi } \) when \( \Phi ,\Psi \) are both members of the family \( {\Phi }_{p} \) . Problem IV.5.4. Show that for every norm \( \Phi \) on \( {\mathbb{C}}^{n} \) we have \( {\Phi }^{\prime \prime } = \Phi \) ; i.e., the dual of the dual of a norm is the norm itself. Problem IV.5.5. Find the duals of the norms \( {\Phi }_{\left( k\right) }^{\left( p\right) } \) defined by (IV.19). (These are somewhat complicated.) Problem IV.5.6. For \( 0 < p < 1 \) and a unitarily invariant norm \( \left| \left| \left| \cdot \right| \right| \right| \) on \( \mathbf{M}\left( \mathbf{n}\right) \), let \[ {\left| \left| \left| A\right| \right| \right| }^{\left( p\right) } = {\left| \left| \left| {\left| A\right| }^{p}\right| \right| \right| }^{1/p}. \] Show that \[ {\left| \left| \left| {A + B}\right| \right| \right| }^{\left( p\right) } \leq {2}^{\frac{1}{p} - 1}\left\lbrack {{\left| \left| \left| A\right| \right| \right| }^{\left( p\right) } + {\left| \left| \left| B\right| \right| \right| }^{\left( p\right) }}\right\rbrack . \] Problem IV.5.7. Choosing \( p = q = 2 \) in (IV.43) or (IV.42), one obtains \[ \left| \left| \left| {AB}\right| \right| \right| \leq {\left| \left| \left| {A}^{ * }A\right| \right| \right| }^{1/2}{\left| \left| \left| {B}^{ * }B\right| \right| \right| }^{1/2}. \] This, like the inequality (IV.44), is also a form of the Cauchy-Schwarz inequality for unitarily invariant norms. Show that this is just the inequality (IV.44) restricted to Q-norms. Problem IV.5.8. Let \( f \) be any linear
functional on \( \mathbf{M}\left( \mathbf{n}\right) \) . Show that there exists a unique matrix \( X \) such that \( f\left( A\right) = \operatorname{tr}{XA} \) for all \( A \in \mathbf{M}\left( \mathbf{n}\right) \) . Problem IV.5.9. Use Theorem IV.2.14 to show that for all \( A, B \in \mathbf{M}\left( \mathbf{n}\right) \) \[ \det \left( {1 + \left| {A + B}\right| }\right) \leq \det \left( {1 + \left| A\right| }\right) \det \left( {1 + \left| B\right| }\right) . \] Problem IV.5.10. More generally, show that for \( 0 < p \leq 1 \) and \( \mu \geq 0 \) \[ \det \left( {1 + \mu {\left| A + B\right| }^{p}}\right) \leq \det \left( {1 + \mu {\left| A\right| }^{p}}\right) \det \left( {1 + \mu {\left| B\right| }^{p}}\right) . \] Problem IV.5.11. Let \( {\ell }_{p} \) denote the space \( {\mathbb{C}}^{n} \) with the \( p \) -norm defined in (IV.1) and (IV.2), \( 1 \leq p \leq \infty \) .
For a matrix \( A \) let \( \parallel A{\parallel }_{p \rightarrow {p}^{\prime }} \) denote the norm of \( A \) as a linear operator from \( {\ell }_{p} \) to \( {\ell }_{{p}^{\prime }} \) ; i.e., \[ \parallel A{\parallel }_{p \rightarrow {p}^{\prime }} = \mathop{\max }\limits_{{\parallel x{\parallel }_{p} = 1}}\parallel {Ax}{\parallel }_{{p}^{\prime }} \] Show that \[ \parallel A{\parallel }_{1 \rightarrow 1} = \mathop{\max }\limits_{j}\mathop{\sum }\limits_{i}\left| {a}_{ij}\right| \] \[ \parallel A{\parallel }_{\infty \rightarrow \infty } = \mathop{\max }\limits_{i}\mathop{\sum }\limits_{j}\left| {a}_{ij}\right| \] \[ \parallel A{\parallel }_{1 \rightarrow \infty } = \mathop{\max }\limits_{{i, j}}\left| {a}_{ij}\right| \] None of these norms is weakly unitarily invariant. Problem IV.5.12. Show that there exists a weakly unitarily invariant norm \( \tau \) such that \( \tau \left( A\right) \neq \tau \left( {A}^{ * }\right) \) for some \( A \in \mathbf{M}\left( \mathbf{n}\right) \) . Problem IV.5.13. Show that there exists a weakly unitarily invariant norm \( \tau \) such that \( \tau \left( A\right) > \tau \left( B\right) \) for some positive matrices \( A, B \) with \( A \leq B \) . Problem IV.5.14. Let \( \tau \) be a wui norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . Define \( \nu \) on \( \mathbf{M}\left( \mathbf{n}\right) \) as \( \nu \left( A\right) = \tau \left( \left| A\right| \right) \) . Then \( \nu \) is a unitarily invariant norm if and only if \( \tau \left( A\right) < \) \( \tau \left( B\right) \) whenever \( 0 \leq A \leq B \) . Problem IV.5.15. Show that for every wui norm \( \tau \) \[ \tau \left( {\operatorname{Eig}A}\right) = \inf \left\{ {\tau \left( {{SA}{S}^{-1}}\right) : S \in \mathbf{{GL}}\left( \mathbf{n}\right) }\right\} . \] When is the infimum attained? Problem IV.5.16. Let \( \tau \) be a wui norm on \( \mathbf{M}\left( \mathbf{n}\right) \) . 
Show that for every \( A \) \[ \tau \left( A\right) \geq \frac{\left| \operatorname{tr}A\right| }{n}\tau \left( I\right) \] Use this to show that \[ \min \{ \tau \left( {A - B}\right) : \operatorname{tr}B = 0\} = \frac{\left| \operatorname{tr}A\right| }{n}\tau \left( I\right) . \] ## IV. 6 Notes and References The first major paper on the theory of unitarily invariant norms and symmetric gauge functions was by J. von Neumann, Some matrix inequalities and metrization of matric space, Tomsk. Univ. Rev., 1(1937) 286-300, reprinted in his Collected Works, Pergamon Press, 1962. A famous book devoted to the study of such norms (for compact operators in a Hilbert space) is R. Schatten, Norm Ideals of Completely Continuous Operators, Springer-Verlag, 1960. Other excellent sources of information are the books by I.C. Gohberg and M.G. Krein cited in Chapter III, by A. Marshall and I. Olkin cited in Chapter II, and by R. Horn and C.R. Johnson cited in Chapter I. A succinct but complete summary can be found in L. Mirsky's paper cited in Chapter III. Much more on matrix norms (not necessarily invariant ones) can be found in the book Matrix Norms and Their Applications, by G.R. Belitskii and Y.I. Lyubich, Birkhäuser, 1988. The notion of Q-norms is mentioned explicitly in R. Bhatia, Some inequalities for norm ideals, Commun. Math. Phys., 111(1987) 33-39. (The possible usefulness of the idea was suggested by C. Davis.) The Cauchy-Schwarz inequality (IV.44) is proved in R. Bhatia, Perturbation inequalities for the absolute value map in norm ideals of operators, J. Operator Theory, 19 (1988) 129-136. This, and a whole family of inequalities including the one in Problem IV.5.7, are studied in detail by R.A. Horn and R. Mathias in two papers, An analog of the Cauchy-Schwarz inequality for Hadamard products and unitarily invariant norms, SIAM J. Matrix Anal. 
Appl., 11 (1990) 481-498, and Cauchy-Schwarz inequalities associated with positive semidefinite matrices, Linear Algebra Appl., 142(1990) 63-82. Many of the other inequalities in this section occur in K. Okubo, Hölder-type norm inequalities for Schur products of matrices, Linear Algebra Appl., 91(1987) 13-28. A general study of these and related inequalities is made in R. Bhatia and C. Davis, Relations of linking and duality between symmetric gauge functions, Operator Theory: Advances and Applications, 73(1994) 127-137. Theorems IV.2.13 and IV.2.14 were proved by S. Ju. Rotfel'd, The singular values of a sum of completely continuous operators, in Topics in Mathematical Physics, Consultants Bureau, 1969, Vol. 3, pp. 73-78. See also R.C. Thompson, Convex and concave functions of singular values of matrix sums, Pacific J. Math., 66(1976) 285-290. The results of Problems IV.5.9 and IV.5.10 are also due to Rotfel'd. The proof of Lidskii's Theorem given in Section IV. 3 is adapted from F. Hiai and Y. Nakamura, Majorisation for generalised s-numbers in semi-finite von Neumann algebras, Math. Z., 195(1987) 17-27. The theory of weakly unitarily invariant norms was developed in R. Bhatia and J.A.R. Holbrook, Unitary invariance and spectral variation, Linear Algebra Appl., 95(1987) 43-68. Theorem IV.4.6 is proved in this paper. More on C-numerical radii can be found in C.-K. Li and N.-K. Tsing, Norms that are invariant under unitary similarities and the \( C \) -numerical radii, Linear and Multilinear Algebra, 24(1989) 209-222. Theorem IV.4.7 is taken from this paper. A part of this theorem (the equivalence of conditions (i) and (iv)) was proved in R. Bhatia and J.A.R. Holbrook, A softer, stronger Lidskii theorem, Proc. Indian Acad. Sciences (Math. Sciences), 99 (1989) 75-83. Two papers dealing with wui norms for infinite-dimensional operators are C.-K. Fong and J.A.R. Holbrook, Unitarily invariant operator norms, Canad. J. Math., 35 (1983) 274-299, and C.-K. Fong, H.
Radjavi, and P. Rosenthal, Norms for matrices and operators, J. Operator Theory, 18 (1987) 99-113. The theory of wui norms is not developed as fully as that of unitarily invariant norms. Theorem IV.4.6 would be useful if one could make the correspondence between \( \tau \) and \( N \) more explicit. As things stand, this has not been done even for some well-known and much-used norms like the \( {L}_{p} \) -norms. When \( N \) is the \( {L}_{\infty } \) function norm, we have noted that \( {N}^{\prime }\left( A\right) = w\left( A\right) \) . When \( N \) is the \( {L}_{2} \) function norm, then it is shown in the Bhatia-Holbrook (1987) paper cited above that \[ {N}^{\prime }\left( A\right) = {\left( \frac{\parallel A{\parallel }_{2}^{2} + {\left| \operatorname{tr}A\right| }^{2}}{n + {n}^{2}}\right) }^{1/2}. \] For other values of \( p \), the correspondence has not been worked out. For a recent survey of several results on invariant norms see C.-K. Li, Some aspects of the theory of norms, Linear Algebra Appl., 212/213 (1994) 71-100. # V Operator Monotone and Operator Convex Functions In this chapter we study an important and useful class of functions called operator monotone functions. These are real functions whose extensions to Hermitian matrices preserve order. Such functions have several special properties, some of which are studied in this chapter. They are closely related to properties of operator convex functions. We shall study both of these together. ## V. 1 Definitions and Simple Examples Let \( f \) be a real function defined on an interval \( I \) . If \( D = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \) is a diagonal matrix whose diagonal entries \( {\lambda }_{j} \) are in \( I \), we define \( f\left( D\right) = \operatorname{diag}\left( {f\left( {\lambda }_{1}\right) ,\ldots, f\left( {\lambda }_{n}\right) }\right) \) .
If \( A \) is a Hermitian matrix whose eigenvalues \( {\lambda }_{j} \) are in \( I \), we choose a unitary \( U \) such that \( A = {UD}{U}^{ * } \), where \( D \) is diagonal, and then define \( f\left( A\right) = {Uf}\left( D\right) {U}^{ * } \) . In this way we can define \( f\left( A\right) \) for all Hermitian matrices (of any order) whose eigenva
lues are in \( I \) . In the rest of this chapter, it will always be assumed that our functions are real functions defined on an interval (finite or infinite, closed or open) and are extended to Hermitian matrices in this way. We will use the notation \( A \leq B \) to mean \( A \) and \( B \) are Hermitian and \( B - A \) is positive. The relation \( \leq \) is a partial order on Hermitian matrices. A function \( f \) is said to be matrix monotone of order \( \mathbf{n} \) if it is monotone with respect to this order on \( n \times n \) Hermitian matrices, i.e., if \( A \leq B \) implies \( f\left( A\right) \leq f\left( B\right) \) . If \( f \) is matrix monotone of order \( n \) for all \( n \) we say \( f \) is matrix monotone or operator monotone.
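The construction \( f\left( A\right) = {Uf}\left( D\right) {U}^{ * } \) translates directly into code. The following sketch (an illustrative helper using NumPy, not part of the book) computes \( f\left( A\right) \) for a Hermitian matrix \( A \):

```python
import numpy as np

def matrix_function(f, A):
    """Apply a real function f to a Hermitian matrix A via A = U D U*."""
    # eigh is the Hermitian eigendecomposition: real eigenvalues, unitary U
    eigvals, U = np.linalg.eigh(A)
    return U @ np.diag(f(eigvals)) @ U.conj().T

# Sanity check: the matrix square root of a positive matrix
A = np.array([[2.0, 1.0], [1.0, 2.0]])
S = matrix_function(np.sqrt, A)
assert np.allclose(S @ S, A)          # S really is a square root of A
assert np.allclose(S, S.conj().T)     # and it is Hermitian
```

The same helper computes \( \left| A\right| \) for Hermitian \( A \) as `matrix_function(np.abs, A)`.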
A function \( f \) is said to be matrix convex of order \( \mathbf{n} \) if for all \( n \times n \) Hermitian matrices \( A \) and \( B \) and for all real numbers \( 0 \leq \lambda \leq 1 \) , \[ f\left( {\left( {1 - \lambda }\right) A + {\lambda B}}\right) \leq \left( {1 - \lambda }\right) f\left( A\right) + {\lambda f}\left( B\right) . \] (V.1) If \( f \) is matrix convex of all orders, we say that \( f \) is matrix convex or operator convex. (Note that if the eigenvalues of \( A \) and \( B \) are all in an interval \( I \), then the eigenvalues of any convex combination of \( A, B \) are also in \( I \) . This is an easy consequence of results in Chapter III.) We will consider continuous functions only. In this case, the condition (V.1) can be replaced by the more special condition \[ f\left( \frac{A + B}{2}\right) \leq \frac{f\left( A\right) + f\left( B\right) }{2}. \] (V.2) (Functions satisfying (V.2) are called mid-point operator convex, and if they are continuous, then they are convex.) A function \( f \) is called operator concave if the function \( - f \) is operator convex. It is clear that the set of operator monotone functions and the set of operator convex functions are both closed under positive linear combinations and also under (pointwise) limits. In other words, if \( f, g \) are operator monotone, and if \( \alpha ,\beta \) are positive real numbers, then \( {\alpha f} + {\beta g} \) is also operator monotone. If \( {f}_{n} \) are operator monotone, and if \( {f}_{n}\left( x\right) \rightarrow f\left( x\right) \), then \( f \) is also operator monotone. The same is true for operator convex functions. Example V.1.1 The function \( f\left( t\right) = \alpha + {\beta t} \) is operator monotone (on every interval) for every \( \alpha \in \mathbb{R} \) and \( \beta \geq 0 \) . It is operator convex for all \( \alpha ,\beta \in \mathbb{R} \) . The first surprise is in the following example. 
Example V.1.2 The function \( f\left( t\right) = {t}^{2} \) on \( \lbrack 0,\infty ) \) is not operator monotone. In other words, there exist positive matrices \( A, B \) such that \( B - A \) is positive but \( {B}^{2} - {A}^{2} \) is not. To see this, take \[ A = \left( \begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right) ,\;B = \left( \begin{array}{ll} 2 & 1 \\ 1 & 1 \end{array}\right) . \] Example V.1.3 The function \( f\left( t\right) = {t}^{2} \) is operator convex on every interval. To see this, note that for any Hermitian matrices \( A, B \) , \[ \frac{{A}^{2} + {B}^{2}}{2} - {\left( \frac{A + B}{2}\right) }^{2} = \frac{1}{4}\left( {{A}^{2} + {B}^{2} - {AB} - {BA}}\right) = \frac{1}{4}{\left( A - B\right) }^{2} \geq 0. \] This shows that the function \( f\left( t\right) = \alpha + {\beta t} + \gamma {t}^{2} \) is operator convex for all \( \alpha ,\beta \in \mathbb{R},\gamma \geq 0. \) Example V.1.4 The function \( f\left( t\right) = {t}^{3} \) on \( \lbrack 0,\infty ) \) is not operator convex. To see this, let \[ A = \left( \begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right) ,\;B = \left( \begin{array}{ll} 3 & 1 \\ 1 & 1 \end{array}\right) . \] Then, \[ \frac{{A}^{3} + {B}^{3}}{2} - {\left( \frac{A + B}{2}\right) }^{3} = \left( \begin{array}{ll} 6 & 1 \\ 1 & 0 \end{array}\right) \] and this is not positive. Examples V.1.2 and V.1.4 show that very simple functions which are monotone (convex) as real functions need not be operator monotone (operator convex). A complete description of operator monotone and operator convex functions will be given in later sections. It is instructive to study a few more examples first. The operator monotonicity or convexity of some functions can be proved by special arguments that are useful in other contexts as well. We will repeatedly use two simple facts. If \( A \) is positive, then \( A \leq I \) if and only if \( \operatorname{spr}\left( A\right) \leq 1 \) . 
An operator \( A \) is a contraction \( \left( {\parallel A\parallel \leq 1}\right) \) if and only if \( {A}^{ * }A \leq I \) . This is also equivalent to the condition \( A{A}^{ * } \leq I \) . The following elementary lemma is also used often. Lemma V.1.5 If \( B \geq A \), then for every operator \( X \) we have \( {X}^{ * }{BX} \geq \) \( {X}^{ * }{AX} \) . Proof. For every vector \( u \) we have, \[ \left\langle {u,{X}^{ * }{BXu}}\right\rangle = \langle {Xu},{BXu}\rangle \geq \langle {Xu},{AXu}\rangle = \left\langle {u,{X}^{ * }{AXu}}\right\rangle . \] This proves the lemma. An equally brief proof goes as follows. Let \( C \) be the positive square root of the positive operator \( B - A \) . Then \[ {X}^{ * }\left( {B - A}\right) X = {X}^{ * }{CCX} = {\left( CX\right) }^{ * }{CX} \geq 0. \] Proposition V.1.6 The function \( f\left( t\right) = - \frac{1}{t} \) is operator monotone on \( \left( {0,\infty }\right) \) . Proof. Let \( B \geq A > 0 \) . Then, by Lemma V.1.5, \( I \geq {B}^{-1/2}A{B}^{-1/2} \) . Since the map \( T \rightarrow {T}^{-1} \) is order-reversing on commuting positive operators, we have \( I \leq {B}^{1/2}{A}^{-1}{B}^{1/2} \) . Again, using Lemma V.1.5 we get from this \( {B}^{-1} \leq {A}^{-1} \) . Lemma V.1.7 If \( B \geq A \geq 0 \) and \( B \) is invertible, then \( \begin{Vmatrix}{{A}^{1/2}{B}^{-1/2}}\end{Vmatrix} \leq 1 \) . Proof. If \( B \geq A \geq 0 \), then \( I \geq {B}^{-1/2}A{B}^{-1/2} = {\left( {A}^{1/2}{B}^{-1/2}\right) }^{ * }{A}^{1/2}{B}^{-1/2} \) , and hence \( \begin{Vmatrix}{{A}^{1/2}{B}^{-1/2}}\end{Vmatrix} \leq 1 \) . Proposition V.1.8 The function \( f\left( t\right) = {t}^{1/2} \) is operator monotone on \( \lbrack 0,\infty ) \) . Proof. Let \( B \geq A \geq 0 \) . Suppose \( B \) is invertible. Then, by Lemma V.1.7, \[ 1 \geq \begin{Vmatrix}{{A}^{1/2}{B}^{-1/2}}\end{Vmatrix} \geq \operatorname{spr}\left( {{A}^{1/2}{B}^{-1/2}}\right) = \operatorname{spr}\left( {{B}^{-1/4}{A}^{1/2}{B}^{-1/4}}\right) . 
\] Since \( {B}^{-1/4}{A}^{1/2}{B}^{-1/4} \) is positive, this implies that \( I \geq {B}^{-1/4}{A}^{1/2}{B}^{-1/4} \) . Hence, by Lemma V.1.5, \( {B}^{1/2} \geq {A}^{1/2} \) . This proves the proposition under the assumption that \( B \) is invertible. If \( B \) is not strictly positive, then for every \( \varepsilon > 0, B + {\varepsilon I} \) is strictly positive. So, \( {\left( B + \varepsilon I\right) }^{1/2} \geq {A}^{1/2} \) . Let \( \varepsilon \rightarrow 0 \) . This shows that \( {B}^{1/2} \geq {A}^{1/2} \) . Theorem V.1.9 The function \( f\left( t\right) = {t}^{r} \) is operator monotone on \( \lbrack 0,\infty ) \) for \( 0 \leq r \leq 1 \) . Proof. Let \( r \) be a dyadic rational, i.e., a number of the form \( r = \frac{m}{{2}^{n}} \), where \( n \) is any positive integer and \( 1 \leq m \leq {2}^{n} \) . We will first prove the assertion for such \( r \) . This is done by induction on \( n \) . Proposition V.1.8 shows that the assertion of the theorem is true when \( n = 1 \) . Suppose it is also true for all dyadic rationals \( \frac{m}{{2}^{j}} \), in which \( 1 \leq j \leq n - 1 \) . Let \( B \geq A \) and let \( r = \frac{m}{{2}^{n}} \) . Suppose \( m \leq {2}^{n - 1} \) . Then, by the induction hypothesis, \( {B}^{m/{2}^{n - 1}} \geq {A}^{m/{2}^{n - 1}} \) . Hence, by Proposition V.1.8, \( {B}^{m/{2}^{n}} \geq {A}^{m/{2}^{n}} \) . Suppose \( m > {2}^{n - 1} \) . If \( B \geq A > 0 \), then \( {A}^{-1} \geq {B}^{-1} \) . Using Lemma V.1.5, we have \( {B}^{m/{2}^{n}}{A}^{-1}{B}^{m/{2}^{n}} \geq {B}^{m/{2}^{n}}{B}^{-1}{B}^{m/{2}^{n}} = {B}^{\left( m/{2}^{n - 1} - 1\right) } \) . By the same argument, \[ {A}^{-1/2}{B}^{m/{2}^{n}}{A}^{-1}{B}^{m/{2}^{n}}{A}^{-1/2} \geq {A}^{-1/2}{B}^{\left( m/{2}^{n - 1} - 1\right) }{A}^{-1/2} \] \[ \geq {A}^{-1/2}{A}^{\left( m/{2}^{n - 1} - 1\right) }{A}^{-1/2} \] (by the induction hypothesis).
This can be written also as \[ {\left( {A}^{-1/2}{B}^{m/{2}^{n}}{A}^{-1/2}\right) }^{2} \geq {A}^{\left( m/{2}^{n - 1} - 2\right) }. \] So, by the operator monotonicity of the square root, \[ {A}^{-1/2}{B}^{m/{2}^{n}}{A}^{-1/2} \geq {A}^{\left( m/{2}^{n} - 1\right) }. \] Hence, \( {B}^{m/{2}^{n}} \geq {A}^{m/{2}^{n}} \) . We have shown that \( B \geq A > 0 \) implies \( {B}^{r} \geq {A}^{r} \) for all dyadic rationals \( r \) in \( \left\lbra
ck {0,1}\right\rbrack \) . Such \( r \) are dense in \( \left\lbrack {0,1}\right\rbrack \) . So we have \( {B}^{r} \geq {A}^{r} \) for all \( r \) in \( \left\lbrack {0,1}\right\rbrack \) . By continuity this is true even when \( A \) is positive semidefinite. Exercise V.1.10 Another proof of Theorem V.1.9 is outlined below. Fill in the details. (i) The composition of two operator monotone functions is operator monotone. Use this and Proposition V.1.6 to prove that the function \( f\left( t\right) = \frac{t}{1 + t} \) is operator monotone on \( \left( {0,\infty }\right) \) . (ii) For each \( \lambda > 0 \), the function \( f\left( t\right) = \frac{t}{\lambda + t} \) is operator monotone on \( \left( {0,\infty }\right) \) . (iii) One of the integrals calculated by contour integration in Complex Analysis is \[ {\int }_{0}^{\infty }\frac{{\lambda }^{r - 1}}{1 + \lambda }{d\lambda } = \pi \operatorname{cosec}{r\pi },\;0 < r < 1. 
\] (V.3) By a change of variables, obtain from this the formula \[ {t}^{r} = \frac{\sin {r\pi }}{\pi }{\int }_{0}^{\infty }\frac{t}{\lambda + t}{\lambda }^{r - 1}{d\lambda } \] (V.4) valid for all \( t > 0 \) and \( 0 < r < 1 \) . (iv) Thus, we can write \[ {t}^{r} = {\int }_{0}^{\infty }\frac{t}{\lambda + t}{d\mu }\left( \lambda \right) ,\;0 < r < 1 \] (V.5) where \( \mu \) is a positive measure on \( \left( {0,\infty }\right) \) . Now use (ii) to conclude that the function \( f\left( t\right) = {t}^{r} \) is operator monotone on \( \left( {0,\infty }\right) \) for \( 0 \leq r \leq 1 \) . Example V.1.11 The function \( f\left( t\right) = \left| t\right| \) is not operator convex on any interval that contains 0 . To see this, take \[ A = \left( \begin{array}{rr} - 1 & 1 \\ 1 & - 1 \end{array}\right) ,\;B = \left( \begin{array}{ll} 2 & 0 \\ 0 & 0 \end{array}\right) . \] Then \[ \left| A\right| = \left( \begin{array}{rr} 1 & - 1 \\ - 1 & 1 \end{array}\right) ,\;\left| A\right| + \left| B\right| = \left( \begin{array}{rr} 3 & - 1 \\ - 1 & 1 \end{array}\right) . \] But \( \left| {A + B}\right| = \sqrt{2}I \) . So \( \left| A\right| + \left| B\right| - \left| {A + B}\right| \) is not positive. (See also Exercise III.5.7.) Example V.1.12 The function \( f\left( t\right) = t \vee 0 \) is not operator convex on any interval that contains 0 . To see this, take \( A, B \) as in Example V.1.11. Since the eigenvalues of \( A \) are -2 and 0, \( f\left( A\right) = 0 \) . So \( \frac{1}{2}\left( {f\left( A\right) + f\left( B\right) }\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 0 \end{array}\right) \) . Any positive matrix dominated by this must have \( \left( \begin{array}{l} 0 \\ 1 \end{array}\right) \) as an eigenvector with 0 as the corresponding eigenvalue. Since \( \frac{1}{2}\left( {A + B}\right) \) does not have \( \left( \begin{array}{l} 0 \\ 1 \end{array}\right) \) as an eigenvector, neither does \( f\left( \frac{A + B}{2}\right) \) .
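These counterexamples are easy to confirm numerically. The sketch below (illustrative NumPy code; `is_psd` and `absval` are helper names introduced here, not from the book) checks Example V.1.2 with the matrices given there, and Example V.1.11 with the matrices above:

```python
import numpy as np

def is_psd(X, tol=1e-12):
    # A Hermitian matrix is positive iff its smallest eigenvalue is >= 0
    return np.linalg.eigvalsh(X).min() >= -tol

def absval(X):
    # For Hermitian X, |X| is obtained by applying t -> |t| to the eigenvalues
    w, U = np.linalg.eigh(X)
    return U @ np.diag(np.abs(w)) @ U.conj().T

# Example V.1.2: B - A is positive, but B^2 - A^2 is not
A = np.array([[1.0, 1.0], [1.0, 1.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])
assert is_psd(B - A) and not is_psd(B @ B - A @ A)

# Example V.1.11: |A| + |B| - |A + B| is not positive
A = np.array([[-1.0, 1.0], [1.0, -1.0]])
B = np.array([[2.0, 0.0], [0.0, 0.0]])
assert np.allclose(absval(A + B), np.sqrt(2) * np.eye(2))
assert not is_psd(absval(A) + absval(B) - absval(A + B))
```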
Exercise V.1.13 Let \( I \) be any interval. For \( a \in I \), let \( f\left( t\right) = \left( {t - a}\right) \vee 0 \) . Then \( f \) is called an "angle function" angled at a. If \( I \) is a finite interval, then every convex function on \( I \) is a limit of positive linear combinations of linear functions and angle functions. Use this to show that angle functions are not operator convex. Exercise V.1.14 Show that the function \( f\left( t\right) = t \vee 0 \) is not operator monotone on any interval that contains 0 . Exercise V.1.15 Let \( A, B \) be positive. Show that \[ \frac{{A}^{-1} + {B}^{-1}}{2} - {\left( \frac{A + B}{2}\right) }^{-1} = \frac{\left( {{A}^{-1} - {B}^{-1}}\right) {\left( {A}^{-1} + {B}^{-1}\right) }^{-1}\left( {{A}^{-1} - {B}^{-1}}\right) }{2}. \] Therefore, the function \( f\left( t\right) = \frac{1}{t} \) is operator convex on \( \left( {0,\infty }\right) \) . ## V. 2 Some Characterisations There are several different notions of averaging in the space of operators. In this section we study the relationship between some of these operations and operator convex functions. This leads to some characterisations of operator convex and operator monotone functions and to the interrelations between them. In the proofs that are to follow, we will frequently use properties of operators on the direct sum \( \mathcal{H} \oplus \mathcal{H} \) to draw conclusions about operators on \( \mathcal{H} \) . This technique was outlined briefly in Section I.3. Let \( K \) be a contraction on \( \mathcal{H} \) . Let \( L = {\left( I - K{K}^{ * }\right) }^{1/2}, M = {\left( I - {K}^{ * }K\right) }^{1/2} \) . Then the operators \( U, V \) defined as \[ U = \left( \begin{array}{rr} K & L \\ M & - {K}^{ * } \end{array}\right) ,\;V = \left( \begin{matrix} K & - L \\ M & {K}^{ * } \end{matrix}\right) \] (V.6) are unitary operators on \( \mathcal{H} \oplus \mathcal{H} \) . (See Exercise I.3.6.) 
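The unitarity of \( U \) in (V.6) comes down to the intertwining identity \( {KM} = {LK} \), which holds because \( {Kg}\left( {{K}^{ * }K}\right) = g\left( {K{K}^{ * }}\right) K \) for every continuous function \( g \) (check it on polynomials first). A short numerical sketch (illustrative helper names, assuming NumPy) builds this dilation for a sample contraction and confirms it is unitary:

```python
import numpy as np

def herm_sqrt(X):
    # Square root of a positive semidefinite matrix via its eigendecomposition
    w, U = np.linalg.eigh(X)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

def dilate(K):
    """The unitary dilation U = [[K, L], [M, -K*]] of a contraction K, as in (V.6)."""
    n = K.shape[0]
    I = np.eye(n)
    L = herm_sqrt(I - K @ K.conj().T)
    M = herm_sqrt(I - K.conj().T @ K)
    return np.block([[K, L], [M, -K.conj().T]])

K = np.array([[0.3, 0.4], [0.0, 0.5]])   # a sample contraction, ||K|| < 1
U = dilate(K)
assert np.allclose(U @ U.conj().T, np.eye(4))
assert np.allclose(U.conj().T @ U, np.eye(4))
```

The second unitary \( V \) of (V.6) is obtained the same way by flipping the signs of \( L \) and \( {K}^{ * } \) in the block matrix.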
More specially, for each \( 0 \leq \lambda \leq 1 \), the operator \[ W = \left( \begin{array}{ll} {\lambda }^{1/2}I & - {\left( 1 - \lambda \right) }^{1/2}I \\ {\left( 1 - \lambda \right) }^{1/2}I & {\lambda }^{1/2}I \end{array}\right) \] (V.7) is a unitary operator on \( \mathcal{H} \oplus \mathcal{H} \) . Theorem V.2.1 Let \( f \) be a real function on an interval I. Then the following two statements are equivalent: (i) \( f \) is operator convex on \( I \) . (ii) \( f\left( {\mathcal{C}\left( A\right) }\right) \leq \mathcal{C}\left( {f\left( A\right) }\right) \) for every Hermitian operator \( A \) (on a Hilbert space \( \mathcal{H} \) ) whose spectrum is contained in \( I \) and for every pinching \( \mathcal{C} \) (in the space \( \mathcal{H} \) ). Proof. (i) \( \Rightarrow \) (ii): Every pinching is a product of pinchings by two complementary projections. (See Problems II.5.4 and II.5.5.) So we need to prove this implication only for pinchings \( \mathcal{C} \) of the form \[ \mathcal{C}\left( X\right) = \frac{X + {U}^{ * }{XU}}{2},\;\text{ where }U = \left( \begin{matrix} I & 0 \\ 0 & - I \end{matrix}\right) . \] For such a \( \mathcal{C} \) \[ f\left( {\mathcal{C}\left( A\right) }\right) = f\left( \frac{A + {U}^{ * }{AU}}{2}\right) \leq \frac{f\left( A\right) + f\left( {{U}^{ * }{AU}}\right) }{2} \] \[ = \frac{f\left( A\right) + {U}^{ * }f\left( A\right) U}{2} = \mathcal{C}\left( {f\left( A\right) }\right) . \] (ii) \( \Rightarrow \) (i): Let \( A, B \) be Hermitian operators on \( \mathcal{H} \), both having their spectrum in \( I \) . Consider the operator \( T = \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right) \) on \( \mathcal{H} \oplus \mathcal{H} \) . If \( W \) is the unitary operator defined in (V.7), then the diagonal entries of \( {W}^{ * }{TW} \) are \( {\lambda A} + \left( {1 - \lambda }\right) B \) and \( \left( {1 - \lambda }\right) A + {\lambda B} \) . 
So if \( \mathcal{C} \) is the pinching on \( \mathcal{H} \oplus \mathcal{H} \) induced by the projections onto the two summands, then \[ \mathcal{C}\left( {{W}^{ * }{TW}}\right) = \left( \begin{matrix} {\lambda A} + \left( {1 - \lambda }\right) B & 0 \\ 0 & \left( {1 - \lambda }\right) A + {\lambda B} \end{matrix}\right) . \] By the same argument, \[ \mathcal{C}\left( {f\left( {{W}^{ * }{TW}}\right) }\right) = \mathcal{C}\left( {{W}^{ * }f\left( T\right) W}\right) \] \[ = \left( \begin{matrix} {\lambda f}\left( A\right) + \left( {1 - \lambda }\right) f\left( B\right) & 0 \\ 0 & \left( {1 - \lambda }\right) f\left( A\right) + {\lambda f}\left( B\right) \end{matrix}\right) . \] So the condition \( f\left( {\mathcal{C}\left( {{W}^{ * }{TW}}\right) }\right) \leq \mathcal{C}\left( {f\left( {{W}^{ * }{TW}}\right) }\right) \) implies that \[ f\left( {{\lambda A} + \left( {1 - \lambda }\right) B}\right) \leq {\lambda f}\left( A\right) + \left( {1 - \lambda }\right) f\left( B\right) . \] Exercise V.2.2 The following conditions are equivalent: (i) \( f \) is operator convex on \( I \) . (ii) \( f\left( {A}_{\mathcal{M}}\right) \leq {\left( f\left( A\right) \right) }_{\mathcal{M}} \) for every Hermitian operator \( A \) with its spectrum in \( I \), and for every compression \( T \rightarrow {T}_{\mathcal{M}} \) . (iii) \( f\left( {{V}^{ * }{AV}}\right) \leq {V}^{ * }f\left( A\right) V \) for every Hermitian operator \( A \) (on \( \mathcal{H} \) ) with its spectrum in \( I \), and for every isometry from any Hilbert space into \( \mathcal{H} \) . (See Section III. 1 for the definition of a compression.) Theorem V.2.3 Let \( I \) be an interval containing 0 and let \( f \) be a real function on \( I \) . Then the following conditions are equivalent: (i) \( f \) is operator convex on \( I \) and \( f\left( 0\right) \leq 0 \) . 
(ii) \( f\left( {{K}^{ * }{AK}}\right) \leq {K}^{ * }f\left( A\right) K \) for every contraction \( K \) and every Hermitian operator \( A \) with spectrum in \( I \) . (iii) \( f\left( {{K}_{1}^{ * }A{K}_{1} + {K}_{2}^{ * }B{K}_{2}}\right) \leq {K}_{1}^{ * }f\left( A\right) {K}_{1} + {K}_{2}^{ * }f\left( B\right) {K}_{2} \) for
all operators \( {K}_{1},{K}_{2} \) such that \( {K}_{1}^{ * }{K}_{1} + {K}_{2}^{ * }{K}_{2} \leq I \) and for all Hermitian \( A, B \) with spectrum in \( I \) . (iv) \( f\left( {PAP}\right) \leq {Pf}\left( A\right) P \) for all projections \( P \) and Hermitian operators \( A \) with spectrum in \( I \) . Proof. (i) \( \Rightarrow \) (ii): Let \( T = \left( \begin{array}{ll} A & 0 \\ 0 & 0 \end{array}\right) \) and let \( U, V \) be the unitary operators defined in (V.6). Then \[ {U}^{ * }{TU} = \left( \begin{matrix} {K}^{ * }{AK} & {K}^{ * }{AL} \\ {LAK} & {LAL} \end{matrix}\right) ,\;{V}^{ * }{TV} = \left( \begin{matrix} {K}^{ * }{AK} & - {K}^{ * }{AL} \\ - {LAK} & {LAL} \end{matrix}\right) . \] So, \[ \left( \begin{matrix} {K}^{ * }{AK} & 0 \\ 0 & {LAL} \end{matrix}\right) = \frac{{U}^{ * }{TU} + {V}^{ * }{TV}}{2}. 
\] Hence, \[ \left( \begin{matrix} f\left( {{K}^{ * }{AK}}\right) & 0 \\ 0 & f\left( {LAL}\right) \end{matrix}\right) \] \[ = f\left( \frac{{U}^{ * }{TU} + {V}^{ * }{TV}}{2}\right) \] \[ \leq \frac{f\left( {{U}^{ * }{TU}}\right) + f\left( {{V}^{ * }{TV}}\right) }{2} \] \[ = \frac{{U}^{ * }f\left( T\right) U + {V}^{ * }f\left( T\right) V}{2} \] \[ = \;\frac{1}{2}\left\{ {{U}^{ * }\left( \begin{matrix} f\left( A\right) & 0 \\ 0 & f\left( 0\right) \end{matrix}\right) U + {V}^{ * }\left( \begin{matrix} f\left( A\right) & 0 \\ 0 & f\left( 0\right) \end{matrix}\right) V}\right\} \] \[ \leq \;\frac{1}{2}\left\{ {{U}^{ * }\left( \begin{matrix} f\left( A\right) & 0 \\ 0 & 0 \end{matrix}\right) U + {V}^{ * }\left( \begin{matrix} f\left( A\right) & 0 \\ 0 & 0 \end{matrix}\right) V}\right\} \] \[ = \left( \begin{matrix} {K}^{ * }f\left( A\right) K & 0 \\ 0 & {Lf}\left( A\right) L \end{matrix}\right) . \] Hence, \( f\left( {{K}^{ * }{AK}}\right) \leq {K}^{ * }f\left( A\right) K \) . \[ \text{(ii)} \Rightarrow \text{(iii): Let}T = \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right), K = \left( \begin{matrix} {K}_{1} & 0 \\ {K}_{2} & 0 \end{matrix}\right) \text{. Then}K\text{is a con-} \] traction. Note that \[ {K}^{ * }{TK} = \left( \begin{matrix} {K}_{1}^{ * }A{K}_{1} + {K}_{2}^{ * }B{K}_{2} & 0 \\ 0 & 0 \end{matrix}\right) . \] Hence, \[ \left( \begin{matrix} f\left( {{K}_{1}^{ * }A{K}_{1} + {K}_{2}^{ * }B{K}_{2}}\right) & 0 \\ 0 & f\left( 0\right) \end{matrix}\right) = f\left( {{K}^{ * }{TK}}\right) \leq {K}^{ * }f\left( T\right) K \] \[ = \left( \begin{matrix} {K}_{1}^{ * }f\left( A\right) {K}_{1} + {K}_{2}^{ * }f\left( B\right) {K}_{2} & 0 \\ 0 & 0 \end{matrix}\right) . \] (iii) \( \Rightarrow \) (iv) obviously. (iv) \( \Rightarrow \) (i): Let \( A, B \) be Hermitian operators with spectrum in \( I \) and let \( 0 \leq \lambda \leq 1 \) . 
Let \( T = \left( \begin{matrix} A & 0 \\ 0 & B \end{matrix}\right), P = \left( \begin{array}{ll} I & 0 \\ 0 & 0 \end{array}\right) \) and let \( W \) be the unitary operator defined by (V.7). Then \[ P{W}^{ * }{TWP} = \left( \begin{matrix} {\lambda A} + \left( {1 - \lambda }\right) B & 0 \\ 0 & 0 \end{matrix}\right) . \] So, \[ \left( \begin{matrix} f\left( {{\lambda A} + \left( {1 - \lambda }\right) B}\right) & 0 \\ 0 & f\left( 0\right) \end{matrix}\right) \; = \;f\left( {P{W}^{ * }{TWP}}\right) \] \[ \leq \;{Pf}\left( {{W}^{ * }{TW}}\right) P = P{W}^{ * }f\left( T\right) {WP} \] \[ = \;\left( \begin{matrix} {\lambda f}\left( A\right) + \left( {1 - \lambda }\right) f\left( B\right) & 0 \\ 0 & 0 \end{matrix}\right) . \] Hence, \( f \) is operator convex and \( f\left( 0\right) \leq 0 \) . Exercise V.2.4 (i) Let \( {\lambda }_{1},{\lambda }_{2} \) be positive real numbers such that \( {\lambda }_{1}{\lambda }_{2} \geq \) \( {C}^{ * }C \) . Then \( \left( \begin{matrix} {\lambda }_{1}I & {C}^{ * } \\ C & {\lambda }_{2}I \end{matrix}\right) \) is positive. (Use Proposition I.3.5.) (ii) Let \( \left( \begin{matrix} A & {C}^{ * } \\ C & B \end{matrix}\right) \) be a Hermitian operator. Then for every \( \varepsilon > 0 \) , there exists \( \lambda > 0 \) such that \[ \left( \begin{matrix} A & {C}^{ * } \\ C & B \end{matrix}\right) \leq \left( \begin{matrix} A + {\varepsilon I} & 0 \\ 0 & {\lambda I} \end{matrix}\right) . \] The next two theorems are among the several results that describe the connections between operator convexity and operator monotonicity. Theorem V.2.5 Let \( f \) be a (continuous) function mapping the positive half-line \( \lbrack 0,\infty ) \) into itself. Then \( f \) is operator monotone if and only if it is operator concave. Proof. Suppose \( f \) is operator monotone. 
If we show that \( f\left( {{K}^{ * }{AK}}\right) \geq \) \( {K}^{ * }f\left( A\right) K \) for every positive operator \( A \) and contraction \( K \), then it would follow from Theorem V.2.3 that \( f \) is operator concave. Let \( T = \left( \begin{array}{ll} A & 0 \\ 0 & 0 \end{array}\right) \) and let \( U \) be the unitary operator defined in (V.6). Then \( {U}^{ * }{TU} = \left( \begin{matrix} {K}^{ * }{AK} & {K}^{ * }{AL} \\ {LAK} & {LAL} \end{matrix}\right) \) . By the assertion in Exercise V.2.4(ii), given any \( \varepsilon > 0 \), there exists \( \lambda > 0 \) such that \[ {U}^{ * }{TU} \leq \left( \begin{matrix} {K}^{ * }{AK} + \varepsilon & 0 \\ 0 & {\lambda I} \end{matrix}\right) . \] Replacing \( T \) by \( f\left( T\right) \), we get \[ \left( \begin{matrix} {K}^{ * }f\left( A\right) K & {K}^{ * }f\left( A\right) L \\ {Lf}\left( A\right) K & {Lf}\left( A\right) L \end{matrix}\right) = {U}^{ * }f\left( T\right) U = f\left( {{U}^{ * }{TU}}\right) \] \[ \leq \;\left( \begin{matrix} f\left( {{K}^{ * }{AK} + \varepsilon }\right) & 0 \\ 0 & f\left( \lambda \right) I \end{matrix}\right) \] by the operator monotonicity of \( f \) . In particular, this shows \( {K}^{ * }f\left( A\right) K \leq \) \( f\left( {{K}^{ * }{AK} + \varepsilon }\right) \) for every \( \varepsilon > 0 \) . Hence \( {K}^{ * }f\left( A\right) K \leq f\left( {{K}^{ * }{AK}}\right) \) . Conversely, suppose \( f \) is operator concave. Let \( 0 \leq A \leq B \) . Then for any \( 0 < \lambda < 1 \) we can write \[ {\lambda B} = {\lambda A} + \left( {1 - \lambda }\right) \frac{\lambda }{1 - \lambda }\left( {B - A}\right) . \] Since \( f \) is operator concave, this gives \[ f\left( {\lambda B}\right) \geq {\lambda f}\left( A\right) + \left( {1 - \lambda }\right) f\left( {\frac{\lambda }{1 - \lambda }\left( {B - A}\right) }\right) . \] Since \( f\left( X\right) \) is positive for every positive \( X \), it follows that \( f\left( {\lambda B}\right) \geq {\lambda f}\left( A\right) \) . 
Now let \( \lambda \rightarrow 1 \) . This shows \( f\left( B\right) \geq f\left( A\right) \) . So \( f \) is operator monotone. Corollary V.2.6 Let \( f \) be a continuous function from \( \left( {0,\infty }\right) \) into itself. If \( f \) is operator monotone then the function \( g\left( t\right) = \frac{1}{f\left( t\right) } \) is operator convex. Proof. Let \( A, B \) be positive operators. Since \( f \) is operator concave, \( f\left( \frac{A + B}{2}\right) \geq \frac{f\left( A\right) + f\left( B\right) }{2} \) . Since the map \( X \rightarrow {X}^{-1} \) is order-reversing and convex on positive operators (see Proposition V.1.6 and Exercise V.1.15), this gives \[ {\left\lbrack f\left( \frac{A + B}{2}\right) \right\rbrack }^{-1} \leq {\left\lbrack \frac{f\left( A\right) + f\left( B\right) }{2}\right\rbrack }^{-1} \leq \frac{f{\left( A\right) }^{-1} + f{\left( B\right) }^{-1}}{2}. \] This is the same as saying \( g \) is operator convex. Exercise V.2.7 Let \( I \) be an interval containing 0, and let \( f \) be a real function on \( I \) with \( f\left( 0\right) \leq 0 \) . Show that for every Hermitian operator \( A \) with spectrum in \( I \) and for every projection \( P \) \[ f\left( {PAP}\right) \leq {Pf}\left( {PAP}\right) = {Pf}\left( {PAP}\right) P. \] Exercise V.2.8 Let \( f \) be a continuous real function on \( \lbrack 0,\infty ) \) . Then for all positive operators \( A \) and projections \( P \) \[ f\left( {{A}^{1/2}P{A}^{1/2}}\right) {A}^{1/2}P = {A}^{1/2}{Pf}\left( {PAP}\right) . \] (Prove this first, by induction, for \( f\left( t\right) = {t}^{n} \) . Then use the Weierstrass approximation theorem to show that this is true for all \( f \) .) Theorem V.2.9 Let \( f \) be a (continuous) real function on the interval \( \lbrack 0,\alpha ) \) . Then the following two conditions are equivalent: (i) \( f \) is operator convex and \( f\left( 0\right) \leq 0 \) . 
(ii) The function \( g\left( t\right) = f\left( t\right) /t \) is operator monotone on \( \left( {0,\alpha }\right) \) . Proof. (i) \( \Rightarrow \) (ii): Let \( 0 < A \leq B \) . Then \( 0 < {A}^{1/2} \leq {B}^{1/2} \) . Hence, \( {B}^{-1/2}{A}^{1/2} \) is a contraction by Lemma V.1.7. Therefore, using Theorem V.2.3 we see that \[ f\left( A\right) = f\left( {{A}^{1/2}{B}^{-1/2}B{B}^{-1/2}{A}^{1/2}}\right) \leq {A}^{1/2}{B}^{-1/2}f\left( B\right) {B}^{-1/2}{A}^{1/2}. \] From this, one obtains, using Lemma V.
1.5, \[ {A}^{-1/2}f\left( A\right) {A}^{-1/2} \leq {B}^{-1/2}f\left( B\right) {B}^{-1/2}. \] Since all functions of an operator commute with each other, this shows that \( {A}^{-1}f\left( A\right) \leq {B}^{-1}f\left( B\right) \) . Thus, \( g \) is operator monotone. (ii) \( \Rightarrow \) (i): If \( f\left( t\right) /t \) is monotone on \( \left( {0,\alpha }\right) \) we must have \( f\left( 0\right) \leq 0 \) . We will show that \( f \) satisfies the condition (iv) of Theorem V.2.3. Let \( P \) be any projection and let \( A \) be any positive operator with spectrum in \( \left( {0,\alpha }\right) \) . Then there exists an \( \varepsilon > 0 \) such that \( \left( {1 + \varepsilon }\right) A \) has its spectrum in \( \left( {0,\alpha }\right) \) . Since \( P + {\varepsilon I} \leq \left( {1 + \varepsilon }\right) I \), we have \( {A}^{1/2}\left( {P + {\varepsilon I}}\right) {A}^{1/2} \leq \left( {1 + \varepsilon }\right) A \) . 
So, by the operator monotonicity of \( g \), we have \[ {A}^{-1/2}{\left( P + \varepsilon I\right) }^{-1}{A}^{-1/2}f\left( {{A}^{1/2}\left( {P + {\varepsilon I}}\right) {A}^{1/2}}\right) \leq {\left( 1 + \varepsilon \right) }^{-1}{A}^{-1}f\left( {\left( {1 + \varepsilon }\right) A}\right) . \] Multiply both sides on the right by \( {A}^{1/2}\left( {P + {\varepsilon I}}\right) \) and on the left by its conjugate \( \left( {P + {\varepsilon I}}\right) {A}^{1/2} \) . This gives \[ {A}^{-1/2}f\left( {{A}^{1/2}\left( {P + {\varepsilon I}}\right) {A}^{1/2}}\right) {A}^{1/2}\left( {P + {\varepsilon I}}\right) \leq {\left( 1 + \varepsilon \right) }^{-1}\left( {P + {\varepsilon I}}\right) f\left( {\left( {1 + \varepsilon }\right) A}\right) \left( {P + {\varepsilon I}}\right) . \] Let \( \varepsilon \rightarrow 0 \) . This gives \[ {A}^{-1/2}f\left( {{A}^{1/2}P{A}^{1/2}}\right) {A}^{1/2}P \leq {Pf}\left( A\right) P. \] Use the identity in Exercise V.2.8 to reduce this to \( {Pf}\left( {PAP}\right) \leq {Pf}\left( A\right) P \) , and then use the inequality in Exercise V.2.7 to conclude that \( f\left( {PAP}\right) \leq \) \( {Pf}\left( A\right) P \), as desired. As corollaries to the above results, we deduce the following statements about the power functions . Theorem V.2.10 On the positive half-line \( \left( {0,\infty }\right) \) the functions \( f\left( t\right) = {t}^{r} \) , where \( r \) is a real number, are operator monotone if and only if \( 0 \leq r \leq 1 \) . Proof. If \( 0 \leq r \leq 1 \), we know that \( f\left( t\right) = {t}^{r} \) is operator monotone by Theorem V.1.9. If \( r \) is not in \( \left\lbrack {0,1}\right\rbrack \), then the function \( f\left( t\right) = {t}^{r} \) is not concave on \( \left( {0,\infty }\right) \) . Therefore, it cannot be operator monotone by Theorem V.2.5. Exercise V.2.11 Consider the functions \( f\left( t\right) = {t}^{r} \) on \( \left( {0,\infty }\right) \) . 
Use Theorems V.2.9 and V.2.10 to show that if \( r \geq 0 \), then \( f\left( t\right) \) is operator convex if and only if \( 1 \leq r \leq 2 \) . Use Corollary V.2.6 to show that \( f\left( t\right) \) is operator convex if \( - 1 \leq r \leq 0 \) . (We will see later that \( f\left( t\right) \) is not operator convex for any other value of \( r \) .) Exercise V.2.12 A function \( f \) from \( \left( {0,\infty }\right) \) into itself is both operator monotone and operator convex if and only if it is of the form \( f\left( t\right) = \alpha + {\beta t},\;\alpha ,\beta \geq 0 \) . Exercise V.2.13 Show that the function \( f\left( t\right) = - t\log t \) is operator concave on \( \left( {0,\infty }\right) \) .

## V.3 Smoothness Properties

Let \( I \) be the open interval \( \left( {-1,1}\right) \) . Let \( f \) be a continuously differentiable function on \( I \) . Then we denote by \( {f}^{\left\lbrack 1\right\rbrack } \) the function on \( I \times I \) defined as \[ {f}^{\left\lbrack 1\right\rbrack }\left( {\lambda ,\mu }\right) = \frac{f\left( \lambda \right) - f\left( \mu \right) }{\lambda - \mu },\;\text{ if }\lambda \neq \mu \] \[ {f}^{\left\lbrack 1\right\rbrack }\left( {\lambda ,\lambda }\right) = {f}^{\prime }\left( \lambda \right) \] The expression \( {f}^{\left\lbrack 1\right\rbrack }\left( {\lambda ,\mu }\right) \) is called the first divided difference of \( f \) at \( \left( {\lambda ,\mu }\right) \) . If \( \Lambda \) is a diagonal matrix with diagonal entries \( {\lambda }_{1},\ldots ,{\lambda }_{n} \), all of which are in \( I \), we denote by \( {f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \) the \( n \times n \) symmetric matrix whose \( \left( {i, j}\right) \) -entry is \( {f}^{\left\lbrack 1\right\rbrack }\left( {{\lambda }_{i},{\lambda }_{j}}\right) \) . 
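As a concrete illustration of this definition, the matrix \( {f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \) can be assembled entrywise. The sketch below assumes NumPy; the choice \( f(t) = t^3 \) and the eigenvalues \( 0.5, -0.2, 0.5 \) are purely illustrative.

```python
import numpy as np

def divided_difference_matrix(f, fprime, eigvals):
    """Entry (i, j) is (f(λi) - f(λj)) / (λi - λj), with f'(λi) when λi = λj."""
    lam = np.asarray(eigvals, dtype=float)
    L, M = np.meshgrid(lam, lam, indexing="ij")  # L[i,j] = λi, M[i,j] = λj
    with np.errstate(divide="ignore", invalid="ignore"):
        D = (f(L) - f(M)) / (L - M)
    # Replace the λi = λj entries by the derivative f'(λi).
    same = np.isclose(L, M)
    D[same] = fprime(L[same])
    return D

# Example with f(t) = t^3 on eigenvalues in (-1, 1); note the repeated eigenvalue.
D = divided_difference_matrix(lambda t: t**3, lambda t: 3 * t**2, [0.5, -0.2, 0.5])
assert np.isclose(D[0, 1], (0.5**3 - (-0.2)**3) / (0.5 - (-0.2)))
assert np.isclose(D[0, 0], 3 * 0.5**2)   # diagonal entry is f'(λ)
assert np.isclose(D[0, 2], 3 * 0.5**2)   # coincident eigenvalues also give f'(λ)
```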
If \( A \) is Hermitian and \( A = {U\Lambda }{U}^{ * } \), let \( {f}^{\left\lbrack 1\right\rbrack }\left( A\right) = U{f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) {U}^{ * } \) . Now consider the induced map \( f \) on the set of Hermitian matrices with eigenvalues in \( I \) . Such matrices form an open set in the real vector space of all Hermitian matrices. The map \( f \) is called (Fréchet) differentiable at \( A \) if there exists a linear transformation \( {Df}\left( A\right) \) on the space of Hermitian matrices such that for all \( H \) \[ \parallel f\left( {A + H}\right) - f\left( A\right) - {Df}\left( A\right) \left( H\right) \parallel = o\left( {\parallel H\parallel }\right) . \] (V.8) The linear operator \( {Df}\left( A\right) \) is then called the derivative of \( f \) at \( A \) . Basic rules of the Fréchet differential calculus are summarised in Chapter 10. If \( f \) is differentiable at \( A \), then \[ {Df}\left( A\right) \left( H\right) = {\left. \frac{d}{dt}\right| }_{t = 0}f\left( {A + {tH}}\right) . \] (V.9) There is an interesting relationship between the derivative \( {Df}\left( A\right) \) and the matrix \( {f}^{\left\lbrack 1\right\rbrack }\left( A\right) \) . This is explored in the next few paragraphs. Lemma V.3.1 Let \( f \) be a polynomial function. Then for every diagonal matrix \( \Lambda \) and for every Hermitian matrix \( H \) , \[ {Df}\left( \Lambda \right) \left( H\right) = {f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \circ H \] (V.10) where \( \circ \) stands for the Schur-product of two matrices. Proof. Both sides of (V.10) are linear in \( f \) . Therefore, it suffices to prove this for the powers \( f\left( t\right) = {t}^{p}, p = 1,2,3,\ldots \) For such \( f \), using (V.9) one gets \[ {Df}\left( \Lambda \right) \left( H\right) = \mathop{\sum }\limits_{{k = 1}}^{p}{\Lambda }^{k - 1}H{\Lambda }^{p - k}. 
\] This is a matrix whose \( \left( {i, j}\right) \) -entry is \( \mathop{\sum }\limits_{{k = 1}}^{p}{\lambda }_{i}^{k - 1}{\lambda }_{j}^{p - k}{h}_{ij} \) . On the other hand, the \( \left( {i, j}\right) \) -entry of \( {f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \) is \( \mathop{\sum }\limits_{{k = 1}}^{p}{\lambda }_{i}^{k - 1}{\lambda }_{j}^{p - k} \) . Corollary V.3.2 If \( A = {U\Lambda }{U}^{ * } \) and \( f \) is a polynomial function, then \[ {Df}\left( A\right) \left( H\right) = U\left\lbrack {{f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \circ \left( {{U}^{ * }{HU}}\right) }\right\rbrack {U}^{ * }. \] (V.11) Proof. Note that \[ {\left. \frac{d}{dt}\right| }_{t = 0}f\left( {{U\Lambda }{U}^{ * } + {tH}}\right) = U\left\lbrack {{\left. \frac{d}{dt}\right| }_{t = 0}f\left( {\Lambda + t{U}^{ * }{HU}}\right) }\right\rbrack {U}^{ * }, \] and use (V.10). Theorem V.3.3 Let \( f \in {C}^{1}\left( I\right) \) and let \( A \) be a Hermitian matrix with all its eigenvalues in I. Then \[ {Df}\left( A\right) \left( H\right) = {f}^{\left\lbrack 1\right\rbrack }\left( A\right) \circ H \] (V.12) where \( \circ \) denotes the Schur-product in a basis in which \( A \) is diagonal. Proof. Let \( A = {U\Lambda }{U}^{ * } \), where \( \Lambda \) is diagonal. We want to prove that \[ {Df}\left( A\right) \left( H\right) = U\left\lbrack {{f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \circ \left( {{U}^{ * }{HU}}\right) }\right\rbrack {U}^{ * }. \] (V.13) This has been proved for all polynomials \( f \) . We will extend its validity to all \( f \in {C}^{1} \) by a continuity argument. Denote the right-hand side of (V.13) by \( \mathcal{D}f\left( A\right) \left( H\right) \) . For each \( f \) in \( {C}^{1} \) , \( \mathcal{D}f\left( A\right) \) is a linear map on Hermitian matrices. 
We have \[ \parallel \mathcal{D}f\left( A\right) \left( H\right) {\parallel }_{2} = {\begin{Vmatrix}{f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \circ \left( {U}^{ * }HU\right) \end{Vmatrix}}_{2}. \] All entries of the matrix \( {f}^{\left\lbrack 1\right\rbrack }\left( \Lambda \right) \) are bounded by \( \mathop{\max }\limits_{{\left| t\right| \leq \parallel A\parallel }}\left| {{f}^{\prime }\left( t\right) }\right| \) . (Use the mean value theorem.) Hence \[ \parallel \mathcal{D}f\left( A\
right) \left( H\right) {\parallel }_{2} \leq \mathop{\max }\limits_{{\left| t\right| \leq \parallel A\parallel }}\left| {{f}^{\prime }\left( t\right) }\right| \parallel H{\parallel }_{2}. \] (V.14) Let \( H \) be a Hermitian matrix with norm so small that the eigenvalues of \( A + H \) are in \( I \) . Let \( \left\lbrack {a, b}\right\rbrack \) be a closed interval in \( I \) containing the eigenvalues of both \( A \) and \( A + H \) . Choose a sequence of polynomials \( {f}_{n} \) such that \( {f}_{n} \rightarrow f \) and \( {f}_{n}^{\prime } \rightarrow {f}^{\prime } \) uniformly on \( \left\lbrack {a, b}\right\rbrack \) . Let \( \mathcal{L} \) be the line segment joining \( A \) and \( A + H \) in the space of Hermitian matrices. 
Then, by the mean value theorem (for Fréchet derivatives), we have \[ \begin{Vmatrix}{{f}_{m}\left( {A + H}\right) - {f}_{n}\left( {A + H}\right) - \left( {{f}_{m}\left( A\right) - {f}_{n}\left( A\right) }\right) }\end{Vmatrix} \] \[ \leq \parallel H\parallel \mathop{\sup }\limits_{{X \in \mathcal{L}}}\begin{Vmatrix}{D{f}_{m}\left( X\right) - D{f}_{n}\left( X\right) }\end{Vmatrix} \] \[ = \parallel H\parallel \mathop{\sup }\limits_{{X \in \mathcal{L}}}\begin{Vmatrix}{\mathcal{D}{f}_{m}\left( X\right) - \mathcal{D}{f}_{n}\left( X\right) }\end{Vmatrix}. \] (V.15) This is so because we have already shown that \( D{f}_{n} = \mathcal{D}{f}_{n} \) for the polynomial functions \( {f}_{n} \) . Let \( \varepsilon \) be any positive real number. The inequality (V.14) ensures that there exists a positive integer \( {n}_{0} \) such that for \( m, n \geq {n}_{0} \) we have \[ \mathop{\sup }\limits_{{X \in \mathcal{L}}}\begin{Vmatrix}{\mathcal{D}{f}_{m}\left( X\right) - \mathcal{D}{f}_{n}\left( X\right) }\end{Vmatrix} \leq \frac{\varepsilon }{3} \] (V.16) and \[ \begin{Vmatrix}{\mathcal{D}{f}_{n}\left( A\right) - \mathcal{D}f\left( A\right) }\end{Vmatrix} \leq \frac{\varepsilon }{3} \] (V.17) Let \( m \rightarrow \infty \) and use (V.15) and (V.16) to conclude that \[ \begin{Vmatrix}{f\left( {A + H}\right) - f\left( A\right) - \left( {{f}_{n}\left( {A + H}\right) - {f}_{n}\left( A\right) }\right) }\end{Vmatrix} \leq \frac{\varepsilon }{3}\parallel H\parallel . \] (V.18) If \( \parallel H\parallel \) is sufficiently small, then by the definition of the Fréchet derivative, we have \[ \begin{Vmatrix}{{f}_{n}\left( {A + H}\right) - {f}_{n}\left( A\right) - \mathcal{D}{f}_{n}\left( A\right) \left( H\right) }\end{Vmatrix} \leq \frac{\varepsilon }{3}\parallel H\parallel . 
\] (V.19) Now we can write, using the triangle inequality, \[ \parallel f\left( {A + H}\right) - f\left( A\right) - \mathcal{D}f\left( A\right) \left( H\right) \parallel \] \[ \leq \begin{Vmatrix}{f\left( {A + H}\right) - f\left( A\right) - \left( {{f}_{n}\left( {A + H}\right) - {f}_{n}\left( A\right) }\right) }\end{Vmatrix} \] \[ + \begin{Vmatrix}{{f}_{n}\left( {A + H}\right) - {f}_{n}\left( A\right) - \mathcal{D}{f}_{n}\left( A\right) \left( H\right) }\end{Vmatrix} \] \[ + \begin{Vmatrix}{\left( {\mathcal{D}f\left( A\right) - \mathcal{D}{f}_{n}\left( A\right) }\right) \left( H\right) }\end{Vmatrix}, \] and then use (V.17),(V.18), and (V.19) to conclude that, for \( \parallel H\parallel \) sufficiently small, we have \[ \parallel f\left( {A + H}\right) - f\left( A\right) - \mathcal{D}f\left( A\right) \left( H\right) \parallel \leq \varepsilon \parallel H\parallel . \] But this says that \( {Df}\left( A\right) = \mathcal{D}f\left( A\right) \) . Let \( t \rightarrow A\left( t\right) \) be a \( {C}^{1} \) map from the interval \( \left\lbrack {0,1}\right\rbrack \) into the space of Hermitian matrices that have all their eigenvalues in \( I \) . Let \( f \in {C}^{1}\left( I\right) \), and let \( F\left( t\right) = f\left( {A\left( t\right) }\right) \) . Then, by the chain rule, \( {Df}\left( t\right) = {DF}\left( {A\left( t\right) }\right) \left( {{A}^{\prime }\left( t\right) }\right) \) . Therefore, by the theorem above, we have \[ F\left( 1\right) - F\left( 0\right) = {\int }_{0}^{1}{f}^{\left\lbrack 1\right\rbrack }\left( {A\left( t\right) }\right) \circ {A}^{\prime }\left( t\right) {dt} \] (V.20) where for each \( t \) the Schur-product is taken in a basis that diagonalises \( A\left( t\right) \) . Theorem V.3.4 Let \( f \in {C}^{1}\left( I\right) \) . Then \( f \) is operator monotone on \( I \) if and only if, for every Hermitian matrix \( A \) whose eigenvalues are in \( I \), the matrix \( {f}^{\left\lbrack 1\right\rbrack }\left( A\right) \) is positive. 
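Before turning to the proof, this positivity criterion (the Loewner-matrix test) can be tried out numerically. The sketch below assumes NumPy; the test functions \( \sqrt{t+1} \) (operator monotone on \( (-1,1) \), since \( t^{1/2} \) is operator monotone on \( [0,\infty) \) by Theorem V.1.9) and \( t^3 \) (not operator monotone), and the sample eigenvalues, are illustrative choices only.

```python
import numpy as np

def loewner_matrix(f, fprime, eigvals):
    """The matrix with (i, j)-entry f^[1](λi, λj); f'(λi) on coincident pairs."""
    lam = np.asarray(eigvals, dtype=float)
    L, M = np.meshgrid(lam, lam, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        D = (f(L) - f(M)) / (L - M)
    same = np.isclose(L, M)
    D[same] = fprime(L[same])
    return D

eigvals = [-0.9, -0.3, 0.0, 0.7]  # all in (-1, 1)

# f(t) = sqrt(t+1) is operator monotone on (-1, 1): its Loewner matrix is
# positive semidefinite (its entries form a Cauchy matrix 1/(a_i + a_j)).
D1 = loewner_matrix(lambda t: np.sqrt(t + 1), lambda t: 0.5 / np.sqrt(t + 1), eigvals)
assert np.linalg.eigvalsh(D1).min() >= -1e-12

# f(t) = t^3 is not operator monotone: its Loewner matrix has a negative eigenvalue.
D2 = loewner_matrix(lambda t: t**3, lambda t: 3 * t**2, eigvals)
assert np.linalg.eigvalsh(D2).min() < 0
```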
Proof. Let \( f \) be operator monotone, and let \( A \) be a Hermitian matrix whose eigenvalues are in \( I \) . Let \( H \) be the matrix all whose entries are 1 . Then \( H \) is positive. So, \( A + {tH} \geq A \) if \( t \geq 0 \) . Hence, \( f\left( {A + {tH}}\right) - f\left( A\right) \) is positive for small positive \( t \) . This implies that \( {Df}\left( A\right) \left( H\right) \geq 0 \) . So, by Theorem V.3.3, \( {f}^{\left\lbrack 1\right\rbrack }\left( A\right) \circ H \geq 0 \) . But, for this special choice of \( H \), this just says that \( {f}^{\left\lbrack 1\right\rbrack }\left( A\right) \geq 0 \) . To prove the converse, let \( A, B \) be Hermitian matrices whose eigenvalues are in \( I \), and let \( B \geq A \) . Let \( A\left( t\right) = \left( {1 - t}\right) A + {tB},0 \leq t \leq 1 \) . Then \( A\left( t\right) \) also has all its eigenvalues in \( I \) . So, by the hypothesis, \( {f}^{\left\lbrack 1\right\rbrack }\left( {A\left( t\right) }\right) \geq 0 \) for all \( t \) . Note that \( {A}^{\prime }\left( t\right) = B - A \geq 0 \), for all \( t \) . Since the Schur-product of two positive matrices is positive, \( {f}^{\left\lbrack 1\right\rbrack }\left( {A\left( t\right) }\right) \circ {A}^{\prime }\left( t\right) \) is positive for all \( t \) . So, by \( \left( {\mathrm{V}{.20}}\right), f\left( B\right) - f\left( A\right) \geq 0 \) . Lemma V.3.5 If \( f \) is continuous and operator monotone on \( \left( {-1,1}\right) \), then for each \( - 1 \leq \lambda \leq 1 \) the function \( {g}_{\lambda }\left( t\right) = \left( {t + \lambda }\right) f\left( t\right) \) is operator convex. Proof. We will prove this using Theorem V.2.9. First assume that \( f \) is continuous and operator monotone on \( \left\lbrack {-1,1}\right\rbrack \) . Then the function \( f\left( {t - 1}\right) \) is operator monotone on \( \lbrack 0,2) \) . Let \( g\left( t\right) = {tf}\left( {t - 1}\right) \) . 
Then \( g\left( 0\right) = 0 \) and the function \( g\left( t\right) /t \) is operator monotone on \( \left( {0,2}\right) \) . Hence, by Theorem V.2.9, \( g\left( t\right) \) is operator convex on \( \lbrack 0,2) \) . This implies that the function \( {h}_{1}\left( t\right) = \) \( g\left( {t + 1}\right) = \left( {t + 1}\right) f\left( t\right) \) is operator convex on \( \lbrack - 1,1) \) . Instead of \( f\left( t\right) \), if the same argument is applied to the function \( - f\left( {-t}\right) \), which is also operator monotone on \( \left\lbrack {-1,1}\right\rbrack \), we see that the function \( {h}_{2}\left( t\right) = - \left( {t + 1}\right) f\left( {-t}\right) \) is operator convex on \( \lbrack - 1,1) \) . Changing \( t \) to \( - t \) preserves convexity. So the function \( {h}_{3}\left( t\right) = {h}_{2}\left( {-t}\right) = \left( {t - 1}\right) f\left( t\right) \) is also operator convex. But for \( \left| \lambda \right| \leq 1,\;{g}_{\lambda }\left( t\right) = \frac{1 + \lambda }{2}{h}_{1}\left( t\right) + \frac{1 - \lambda }{2}{h}_{3}\left( t\right) \; \) is a convex combination of \( \;{h}_{1}\; \) and \( {h}_{3} \) . So \( {g}_{\lambda } \) is also operator convex. Now, given \( f \) continuous and operator monotone on \( \left( {-1,1}\right) \), the function \( f\left( {\left( {1 - \varepsilon }\right) t}\right) \) is continuous and operator monotone on \( \left\lbrack {-1,1}\right\rbrack \) for each \( \varepsilon > 0 \) . Hence, by the special case considered above, the function \( \left( {t + \lambda }\right) f\left( {\left( {1 - \varepsilon }\right) t}\right) \) is operator convex. Let \( \varepsilon \rightarrow 0 \), and conclude that the function \( \left( {t + \lambda }\right) f\left( t\right) \) is operator convex. The next theorem says that every operator monotone function on \( I \) is in the class \( {C}^{1} \) . Later on, we will see that it is actually in the class \( {C}^{\infty } \) . 
(This is so even if we do not assume that it is continuous to begin with.) In the proof we make use of some differentiability properties of convex functions and smoothing techniques. For the reader's convenience, these are summarised in Appendices 1 and 2 at the end of the chapter. \( \textbf{Theorem V.3.6 }\textit{Every operator monotone function }f\textit{ on }I\textit{ is continuously} \) differentiable. Proof. Let \( 0 < \varepsilon < 1 \), and let \( {f}_{\varepsilon } \) be a regularisation of \( f \) of order \( \varepsilon \) . (See
Appendix 2.) Then \( {f}_{\varepsilon } \) is a \( {C}^{\infty } \) function on \( \left( {-1 + \varepsilon ,1 - \varepsilon }\right) \) . It is also operator monotone. Let \( \widetilde{f}\left( t\right) = \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}{f}_{\varepsilon }\left( t\right) \) . Then \( \widetilde{f}\left( t\right) = \frac{1}{2}\left\lbrack {f\left( {t + }\right) + f\left( {t - }\right) }\right\rbrack \) . Let \( {g}_{\varepsilon }\left( t\right) = \left( {t + 1}\right) {f}_{\varepsilon }\left( t\right) \) . Then, by Lemma V.3.5, \( {g}_{\varepsilon } \) is operator convex. Let \( \widetilde{g}\left( t\right) = \mathop{\lim }\limits_{{\varepsilon \rightarrow 0}}{g}_{\varepsilon }\left( t\right) \) . Then \( \widetilde{g}\left( t\right) \) is operator convex. But every convex function (on an open interval) is continuous. So \( \widetilde{g}\left( t\right) \) is continuous. 
Since \( \widetilde{g}\left( t\right) = \) \( \left( {t + 1}\right) \widetilde{f}\left( t\right) \) and \( t + 1 > 0 \) on \( I \), this means that \( \widetilde{f}\left( t\right) \) is continuous. Hence \( \widetilde{f}\left( t\right) = f\left( t\right) \) . We thus have shown that \( f \) is continuous. Let \( g\left( t\right) = \left( {t + 1}\right) f\left( t\right) \) . Then \( g \) is a convex function on \( I \) . So \( g \) is left and right differentiable and the one-sided derivatives satisfy the properties \[ {g}_{ - }^{\prime }\left( t\right) \leq {g}_{ + }^{\prime }\left( t\right) ,\;\mathop{\lim }\limits_{{s \downarrow t}}{g}_{ \pm }^{\prime }\left( s\right) = {g}_{ + }^{\prime }\left( t\right) ,\;\mathop{\lim }\limits_{{s \uparrow t}}{g}_{ \pm }^{\prime }\left( s\right) = {g}_{ - }^{\prime }\left( t\right) . \] (V.21) But \( {g}_{ \pm }^{\prime }\left( t\right) = f\left( t\right) + \left( {t + 1}\right) {f}_{ \pm }^{\prime }\left( t\right) \) . Since \( t + 1 > 0 \), the derivatives \( {f}_{ \pm }^{\prime }\left( t\right) \) also satisfy relations like (V.21). Now let \( A = \left( \begin{array}{ll} s & 0 \\ 0 & t \end{array}\right), s, t \in \left( {-1,1}\right) \) . If \( \varepsilon \) is sufficiently small, \( s, t \) are in \( \left( {-1 + \varepsilon ,1 - \varepsilon }\right) \) . Since \( {f}_{\varepsilon } \) is operator monotone on this interval, by Theorem V.3.4, the matrix \( {f}_{\varepsilon }^{\left\lbrack 1\right\rbrack }\left( A\right) \) is positive. This implies that \[ {\left( \frac{{f}_{\varepsilon }\left( s\right) - {f}_{\varepsilon }\left( t\right) }{s - t}\right) }^{2} \leq {f}_{\varepsilon }^{\prime }\left( s\right) {f}_{\varepsilon }^{\prime }\left( t\right) \] Let \( \varepsilon \rightarrow 0 \) . Since \( {f}_{\varepsilon } \rightarrow f \) uniformly on compact sets, \( {f}_{\varepsilon }\left( s\right) - {f}_{\varepsilon }\left( t\right) \) converges to \( f\left( s\right) - f\left( t\right) \) . 
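(As a numerical aside, not part of the proof: the inequality above can be tested directly on a concrete family of smooth operator monotone functions, namely \( h_a(t) = t/(1 - at) \) with \( |a| \leq 1 \), which reappear later as the extreme points of the class \( K \). For this family the two sides actually coincide.)

```python
# Numerical aside (not from the text): check
#   ((f(s) - f(t))/(s - t))^2 <= f'(s) * f'(t)
# for f(t) = t/(1 - a*t) on (-1, 1), |a| <= 1.  For this family the
# two sides are equal, so the inequality holds with equality.

def f(t, a):
    return t / (1 - a * t)

def fprime(t, a):
    return 1.0 / (1 - a * t) ** 2

points = [-0.9, -0.4, 0.1, 0.3, 0.8]
for a in (-1.0, -0.5, 0.0, 0.5, 1.0):
    for s in points:
        for t in points:
            if s == t:
                continue
            lhs = ((f(s, a) - f(t, a)) / (s - t)) ** 2
            rhs = fprime(s, a) * fprime(t, a)
            # allow a little floating-point slack
            assert lhs <= rhs * (1 + 1e-9), (a, s, t)
print("divided-difference inequality verified on all sample points")
```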
Also, \( f_\varepsilon'(t) \) converges to \( \frac{1}{2}\left[ f_+'(t) + f_-'(t) \right] \). Therefore, the above inequality gives, in the limit, the inequality
\[ \left( \frac{f(s) - f(t)}{s - t} \right)^2 \leq \frac{1}{4}\left[ f_+'(s) + f_-'(s) \right] \left[ f_+'(t) + f_-'(t) \right]. \]
Now let \( s \downarrow t \), and use the fact that the derivatives of \( f \) satisfy relations like (V.21). This gives
\[ \left[ f_+'(t) \right]^2 \leq \frac{1}{4}\left[ f_+'(t) + f_+'(t) \right] \left[ f_+'(t) + f_-'(t) \right], \]
which implies that \( f_+'(t) = f_-'(t) \). Hence \( f \) is differentiable. The relations (V.21), which are satisfied by \( f \) too, show that \( f' \) is continuous.

Just as monotonicity of functions can be studied via first divided differences, convexity requires second divided differences. These are defined as follows. Let \( f \) be twice continuously differentiable on the interval \( I \). Then \( f^{[2]} \) is a function defined on \( I \times I \times I \) as follows. If \( \lambda_1, \lambda_2, \lambda_3 \) are distinct,
\[ f^{[2]}(\lambda_1, \lambda_2, \lambda_3) = \frac{f^{[1]}(\lambda_1, \lambda_2) - f^{[1]}(\lambda_1, \lambda_3)}{\lambda_2 - \lambda_3}.
\] For other values of \( {\lambda }_{1},{\lambda }_{2},{\lambda }_{3},{f}^{\left\lbrack 2\right\rbrack } \) is defined by continuity; e.g., \[ {f}^{\left\lbrack 2\right\rbrack }\left( {\lambda ,\lambda ,\lambda }\right) = \frac{1}{2}{f}^{\prime \prime }\left( \lambda \right) \] Exercise V.3.7 Show that if \( {\lambda }_{1},{\lambda }_{2},{\lambda }_{3} \) are distinct, then \( {f}^{\left\lbrack 2\right\rbrack }\left( {{\lambda }_{1},{\lambda }_{2},{\lambda }_{3}}\right) \) is the quotient of the two determinants \[ \left| \begin{matrix} f\left( {\lambda }_{1}\right) & f\left( {\lambda }_{2}\right) & f\left( {\lambda }_{3}\right) \\ {\lambda }_{1} & {\lambda }_{2} & {\lambda }_{3} \\ 1 & 1 & 1 \end{matrix}\right| \;\text{ and }\;\left| \begin{matrix} {\lambda }_{1}^{2} & {\lambda }_{2}^{2} & {\lambda }_{3}^{2} \\ {\lambda }_{1} & {\lambda }_{2} & {\lambda }_{3} \\ 1 & 1 & 1 \end{matrix}\right| . \] Hence the function \( {f}^{\left\lbrack 2\right\rbrack } \) is symmetric in its three arguments. Exercise V.3.8 If \( f\left( t\right) = {t}^{m}, m = 2,3,\ldots \), show that \[ {f}^{\left\lbrack 2\right\rbrack }\left( {{\lambda }_{1},{\lambda }_{2},{\lambda }_{3}}\right) = \mathop{\sum }\limits_{\substack{{0 \leq p, q, r} \\ {p + q + r = m - 2} }}{\lambda }_{1}^{p}{\lambda }_{2}^{q}{\lambda }_{3}^{r}. \] Exercise V.3.9 (i) Let \( f\left( t\right) = {t}^{m}, m \geq 2 \) . Let \( A \) be an \( n \times n \) diagonal matrix; \( A = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{P}_{i} \), where \( {P}_{i} \) are the projections onto the coordinate axes. Show that for every \( H \) \[ {\left. \frac{{d}^{2}}{d{t}^{2}}\right| }_{t = 0}f\left( {A + {tH}}\right) = 2\mathop{\sum }\limits_{{p + q + r = m - 2}}{A}^{p}H{A}^{q}H{A}^{r} \] \[ = 2\mathop{\sum }\limits_{{p + q + r = m - 2}}\mathop{\sum }\limits_{{1 \leq i, j, k \leq n}}{\lambda }_{i}^{p}{\lambda }_{j}^{q}{\lambda }_{k}^{r}{P}_{i}H{P}_{j}H{P}_{k} \] and \[ {\left. 
\frac{{d}^{2}}{d{t}^{2}}\right| }_{t = 0}f\left( {A + {tH}}\right) = 2\mathop{\sum }\limits_{{i, j, k}}{f}^{\left\lbrack 2\right\rbrack }\left( {{\lambda }_{i},{\lambda }_{j},{\lambda }_{k}}\right) {P}_{i}H{P}_{j}H{P}_{k}. \] (V.22) (ii) Use a continuity argument, like the one used in the proof of Theorem V.3.3, to show that this last formula is valid for all \( {C}^{2} \) functions \( f \) . Theorem V.3.10 If \( f \in {C}^{2}\left( I\right) \) and \( f \) is operator convex, then for each \( \mu \in I \) the function \( g\left( \lambda \right) = {f}^{\left\lbrack 1\right\rbrack }\left( {\mu ,\lambda }\right) \) is operator monotone. Proof. Since \( f \) is in the class \( {C}^{2}, g \) is in the class \( {C}^{1} \) . So, by Theorem V.3.4, it suffices to prove that, for each \( n \), the \( n \times n \) matrix with entries \( {g}^{\left\lbrack 1\right\rbrack }\left( {{\lambda }_{i},{\lambda }_{j}}\right) \) is positive for all \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) in \( I \) . Fix \( n \) and choose any \( {\lambda }_{1},\ldots ,{\lambda }_{n + 1} \) in \( I \) . Let \( A \) be the diagonal matrix with entries \( {\lambda }_{1},\ldots {\lambda }_{n + 1} \) . Since \( f \) is operator convex and is twice differentiable, for every Hermitian matrix \( H \), the matrix \( {\left. \frac{{d}^{2}}{d{t}^{2}}\right| }_{t = 0}f\left( {A + {tH}}\right) \) must be positive. If we write \( {P}_{1},\ldots ,{P}_{n + 1} \) for the projections onto the coordinate axes, we have an explicit expression for this second derivative in (V.22). Choose \( H \) to be of the form \[ H = \left( \begin{matrix} 0 & 0 & \cdots & & {\bar{\xi }}_{1} \\ 0 & 0 & \cdots & & {\bar{\xi }}_{2} \\ \cdot & \cdot & \cdots & \cdot & \cdot \\ {\xi }_{1} & {\xi }_{2} & \cdots & {\xi }_{n} & 0 \end{matrix}\right) \] where \( {\xi }_{1},\ldots ,{\xi }_{n} \) are any complex numbers. Let \( x \) be the \( \left( {n + 1}\right) \) -vector \( \left( {1,1,\ldots ,1,0}\right) \) . 
Then
\[ \left\langle x, P_i H P_j H P_k x \right\rangle = \xi_k \bar{\xi}_i \delta_{j, n+1} \] (V.23)
for \( 1 \leq i, j, k \leq n + 1 \), where \( \delta_{j, n+1} \) is equal to 1 if \( j = n + 1 \), and is equal to 0 otherwise. So, using the positivity of the matrix (V.22) and then (V.23), we have
\[ 0 \leq \sum_{1 \leq i, j, k \leq n+1} f^{[2]}(\lambda_i, \lambda_j, \lambda_k) \left\langle x, P_i H P_j H P_k x \right\rangle = \sum_{1 \leq i, k \leq n} f^{[2]}(\lambda_i, \lambda_{n+1}, \lambda_k)\, \xi_k \bar{\xi}_i. \]
But
\[ f^{[2]}(\lambda_i, \lambda_{n+1}, \lambda_k) = \frac{f^{[1]}(\lambda_{n+1}, \lambda_i) - f^{[1]}(\lambda_{n+1}, \lambda_k)}{\lambda_i - \lambda_k} = g^{[1]}(\lambda_i, \lambda_k) \]
(putting \( \lambda_{n+1} = \mu \) in the definition of \( g \)).
So we have
\[ 0 \leq \sum_{1 \leq i, k \leq n} g^{[1]}(\lambda_i, \lambda_k)\, \xi_k \bar{\xi}_i. \]
Since \( \xi_i \) are arbitrary complex numbers, this is equivalent to saying that the \( n \times n \) matrix \( \left[ g^{[1]}(\lambda_i, \lambda_k) \right] \) is positive.

Corollary V.3.11 If \( f \in C^2(I) \), \( f(0) = 0 \), and \( f \) is operator convex, then the function \( g(t) = \frac{f(t)}{t} \) is operator monotone.

Proof. By the theorem above, the function \( f^{[1]}(0, t) \) is operator monotone. But this is just the function \( f(t)/t \) in this case.

Corollary V.3.12 If \( f \) is operator monotone on \( I \) and \( f(0) = 0 \), then the function \( g(t) = \frac{t + \lambda}{t} f(t) \) is operator monotone for \( |\lambda| \leq 1 \).

Proof. First assume that \( f \in C^2(I) \). By Lemma V.3.5, the function \( g_\lambda(t) = (t + \lambda) f(t) \) is operator convex. By Corollary V.3.11, therefore, \( g(t) \) is operator monotone. If \( f \) is not in the class \( C^2 \), consider its regularisations \( f_\varepsilon \). These are in \( C^2 \). Apply the special case of the above paragraph to the functions \( f_\varepsilon(t) - f_\varepsilon(0) \), and then let \( \varepsilon \rightarrow 0 \).

Corollary V.3.13 If \( f \) is operator monotone on \( I \) and \( f(0) = 0 \), then \( f \) is twice differentiable at 0.

Proof. By Corollary V.3.12, the function \( g(t) = \left(1 + \frac{1}{t}\right) f(t) \) is operator monotone, and by Theorem V.3.6, it is continuously differentiable.
So the function \( h \) defined as \( h(t) = \frac{1}{t} f(t) \), \( h(0) = f'(0) \), is continuously differentiable. This implies that \( f \) is twice differentiable at 0.

Exercise V.3.14 Let \( f \) be a continuous operator monotone function on \( I \). Then the function \( F(t) = \int_0^t f(s)\, ds \) is operator convex.

Exercise V.3.15 Let \( f \in C^1(I) \). Then \( f \) is operator convex if and only if for all Hermitian matrices \( A, B \) with eigenvalues in \( I \) we have
\[ f(A) - f(B) \geq f^{[1]}(B) \circ (A - B), \]
where \( \circ \) denotes the Schur-product in a basis in which \( B \) is diagonal.

## V.4 Loewner's Theorems

Consider all functions \( f \) on the interval \( I = (-1, 1) \) that are operator monotone and satisfy the conditions
\[ f(0) = 0, \quad f'(0) = 1. \] (V.24)
Let \( K \) be the collection of all such functions. Clearly, \( K \) is a convex set. We will show that this set is compact in the topology of pointwise convergence and will find its extreme points. This will enable us to write an integral representation for functions in \( K \).

Lemma V.4.1 If \( f \in K \), then
\[ f(t) \leq \frac{t}{1 - t} \quad \text{for} \quad 0 \leq t < 1, \]
\[ f(t) \geq \frac{t}{1 + t} \quad \text{for} \quad -1 < t < 0, \]
\[ |f''(0)| \leq 2. \]

Proof. Let \( A = \left( \begin{array}{ll} t & 0 \\ 0 & 0 \end{array} \right) \). By Theorem V.3.4, the matrix
\[ f^{[1]}(A) = \left( \begin{matrix} f'(t) & f(t)/t \\ f(t)/t & 1 \end{matrix} \right) \]
is positive.
Hence, \[ \frac{f{\left( t\right) }^{2}}{{t}^{2}} \leq {f}^{\prime }\left( t\right) \] (V.25) Let \( {g}_{ \pm }\left( t\right) = \left( {t \pm 1}\right) f\left( t\right) \) . By Lemma V.3.5, both functions \( {g}_{ \pm } \) are convex. Hence their derivatives are monotonically increasing functions. Since \( {g}_{ \pm }^{\prime }\left( t\right) = f\left( t\right) + \left( {t \pm 1}\right) {f}^{\prime }\left( t\right) \) and \( {g}_{ \pm }^{\prime }\left( 0\right) = \pm 1 \), this implies that \[ f\left( t\right) + \left( {t - 1}\right) {f}^{\prime }\left( t\right) \geq - 1\;\text{ for }\;t > 0 \] (V.26) and \[ f\left( t\right) + \left( {t + 1}\right) {f}^{\prime }\left( t\right) \leq 1\;\text{ for }\;t < 0. \] (V.27) From (V.25) and (V.26) we obtain \[ f\left( t\right) + 1 \geq \frac{\left( {1 - t}\right) f{\left( t\right) }^{2}}{{t}^{2}}\;\text{ for }\;t > 0. \] (V.28) Now suppose that for some \( 0 < t < 1 \) we have \( f\left( t\right) > \frac{t}{1 - t} \) . Then \( f{\left( t\right) }^{2} > \) \( \frac{t}{1 - t}f\left( t\right) \) . So, from (V.28), we get \( f\left( t\right) \) + \( 1 > \frac{f\left( t\right) }{t} \) . But this gives the inequality \( f\left( t\right) < \frac{t}{1 - t} \), which contradicts our assumption. This shows that \( f\left( t\right) \leq \frac{t}{1 - t} \) for \( 0 \leq t < 1 \) . The second inequality of the lemma is obtained by the same argument using (V.27) instead of (V.26). We have seen in the proof of Corollary V.3.13 that \[ {f}^{\prime }\left( 0\right) + \frac{1}{2}{f}^{\prime \prime }\left( 0\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{\left( {1 + {t}^{-1}}\right) f\left( t\right) - {f}^{\prime }\left( 0\right) }{t}. \] Let \( t \downarrow 0 \) and use the first inequality of the lemma to conclude that this limit is smaller than 2 . Let \( t \uparrow 0 \), and use the second inequality to conclude that it is bigger than 0. 
Together, these two imply that \( |f''(0)| \leq 2 \).

Proposition V.4.2 The set \( K \) is compact in the topology of pointwise convergence.

Proof. Let \( \{ f_i \} \) be any net in \( K \). By the lemma above, the set \( \{ f_i(t) \} \) is bounded for each \( t \). So, by Tychonoff's Theorem, there exists a subnet \( \{ f_i \} \) that converges pointwise to a bounded function \( f \). The limit function \( f \) is operator monotone, and \( f(0) = 0 \). If we show that \( f'(0) = 1 \), we would have shown that \( f \in K \), and hence that \( K \) is compact. By Corollary V.3.12, each of the functions \( \left(1 + \frac{1}{t}\right) f_i(t) \) is monotone on \( (-1, 1) \). Since for all \( i \), \( \lim_{t \rightarrow 0} \left(1 + \frac{1}{t}\right) f_i(t) = f_i'(0) = 1 \), we see that \( \left(1 + \frac{1}{t}\right) f_i(t) \geq 1 \) if \( t \geq 0 \) and is \( \leq 1 \) if \( t \leq 0 \). Hence, if \( t > 0 \), we have \( \left(1 + \frac{1}{t}\right) f(t) \geq 1 \); and if \( t < 0 \), we have the opposite inequality. Since \( f \) is continuously differentiable, this shows that \( f'(0) = 1 \).

Proposition V.4.3 All extreme points of the set \( K \) have the form
\[ f(t) = \frac{t}{1 - \alpha t}, \quad \text{where} \quad \alpha = \frac{1}{2} f''(0). \]

Proof. Let \( f \in K \). For each \( \lambda \), \( -1 < \lambda < 1 \), let
\[ g_\lambda(t) = \left(1 + \frac{\lambda}{t}\right) f(t) - \lambda. \]
By Corollary V.3.12, \( g_\lambda \) is operator monotone. Note that \( g_\lambda(0) = 0 \), since \( f(0) = 0 \) and \( f'(0) = 1 \).
Also, \( g_\lambda'(0) = 1 + \frac{1}{2}\lambda f''(0) \). So the function \( h_\lambda \) defined as
\[ h_\lambda(t) = \frac{1}{1 + \frac{1}{2}\lambda f''(0)} \left[ \left(1 + \frac{\lambda}{t}\right) f(t) - \lambda \right] \]
is in \( K \). Since \( |f''(0)| \leq 2 \), we see that \( \left| \frac{1}{2}\lambda f''(0) \right| < 1 \). We can write
\[ f = \frac{1}{2}\left(1 + \frac{1}{2}\lambda f''(0)\right) h_\lambda + \frac{1}{2}\left(1 - \frac{1}{2}\lambda f''(0)\right) h_{-\lambda}. \]
So, if \( f \) is an extreme point of \( K \), we must have \( f = h_\lambda \). This says that
\[ \left(1 + \frac{1}{2}\lambda f''(0)\right) f(t) = \left(1 + \frac{\lambda}{t}\right) f(t) - \lambda, \]
from which we can conclude that
\[ f(t) = \frac{t}{1 - \frac{1}{2} f''(0)\, t}.
\]

Theorem V.4.4 For each \( f \) in \( K \) there exists a unique probability measure \( \mu \) on \( [-1, 1] \) such that
\[ f(t) = \int_{-1}^{1} \frac{t}{1 - \lambda t}\, d\mu(\lambda). \] (V.29)

Proof. For \( -1 \leq \lambda \leq 1 \), consider the functions \( h_\lambda(t) = \frac{t}{1 - \lambda t} \). By Proposition V.4.3, the extreme points of \( K \) are included in the family \( \{ h_\lambda \} \). Since \( K \) is compact and convex, it must be the closed convex hull of its extreme points. (This is the Krein-Milman Theorem.) Finite convex combinations of elements of the family \( \{ h_\lambda : -1 \leq \lambda \leq 1 \} \) can also be written as \( \int h_\lambda\, d\nu(\lambda) \), where \( \nu \) is a probability measure on \( [-1, 1] \) with finite support. Since \( f \) is in the closure of these combinations, there exists a net \( \{ \nu_i \} \) of finitely supported probability measures on \( [-1, 1] \) such that the net \( f_i(t) = \int h_\lambda(t)\, d\nu_i(\lambda) \) converges to \( f(t) \). Since the space of probability measures is weak* compact, the net \( \nu_i \) has an accumulation point \( \mu \). In other words, a subnet of \( \int h_\lambda\, d\nu_i(\lambda) \) converges to \( \int h_\lambda\, d\mu(\lambda) \). So
\[ f(t) = \int h_\lambda(t)\, d\mu(\lambda) = \int \frac{t}{1 - \lambda t}\, d\mu(\lambda). \]
Now suppose that there are two measures \( \mu_1 \) and \( \mu_2 \) for which the representation (V.29) is valid.
Expand the integrand as a power series \( \frac{t}{1 - \lambda t} = \sum_{n=0}^{\infty} t^{n+1} \lambda^n \), convergent uniformly in \( |\lambda| \leq 1 \) for every fixed \( t \) with \( |t| < 1 \). This shows that
\[ \sum_{n=0}^{\infty} t^{n+1} \int_{-1}^{1} \lambda^n\, d\mu_1(\lambda) = \sum_{n=0}^{\infty} t^{n+1} \int_{-1}^{1} \lambda^n\, d\mu_2(\lambda) \]
for all \( |t| < 1 \). The identity theorem for power series now shows that
\[ \int_{-1}^{1} \lambda^n\, d\mu_1(\lambda) = \int_{-1}^{1} \lambda^n\, d\mu_2(\lambda), \quad n = 0, 1, 2, \ldots \]
But this is possible if and only if \( \mu_1 = \mu_2 \).

One consequence of the uniqueness of the measure \( \mu \) in the representation (V.29) is that every function \( h_{\lambda_0} \) is an extreme point of \( K \) (because it can be represented as an integral like this with \( \mu \) concentrated at \( \lambda_0 \)). The normalisations (V.24) were required to make the set \( K \) compact. They can now be removed. We have the following result.

Corollary V.4.5 Let \( f \) be a nonconstant operator monotone function on \( (-1, 1) \). Then there exists a unique probability measure \( \mu \) on \( [-1, 1] \) such that
\[ f(t) = f(0) + f'(0) \int_{-1}^{1} \frac{t}{1 - \lambda t}\, d\mu(\lambda). \] (V.30)

Proof. Since \( f \) is monotone and is not a constant, \( f'(0) \neq 0 \). Now note that the function \( \frac{f(t) - f(0)}{f'(0)} \) is in \( K \).

It is clear from the representation (V.30) that every operator monotone function on \( (-1, 1) \) is infinitely differentiable.
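As a concrete illustration of (V.29) (my example, not the book's): taking \( \mu \) to be the uniform probability measure on \( [-1, 1] \) gives \( f(t) = \frac{1}{2}\int_{-1}^{1} \frac{t}{1 - \lambda t}\, d\lambda = \frac{1}{2}\log\frac{1+t}{1-t} = \operatorname{artanh} t \), so \( \operatorname{artanh} \) belongs to \( K \). A quick numerical check:

```python
# Illustration (my example, not the book's): with mu the uniform probability
# measure on [-1, 1], the representation (V.29) gives
#   f(t) = (1/2) * int_{-1}^{1} t/(1 - lam*t) dlam
#        = (1/2) * log((1+t)/(1-t)) = artanh(t).
import math

def f_from_measure(t, n=100_000):
    # midpoint rule for (1/2) * int_{-1}^{1} t/(1 - lam*t) dlam
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        lam = -1.0 + (k + 0.5) * h
        total += t / (1.0 - lam * t)
    return 0.5 * h * total

for t in (-0.9, -0.3, 0.2, 0.7):
    assert abs(f_from_measure(t) - math.atanh(t)) < 1e-6
# the normalisations (V.24): f(0) = 0 and f'(0) = 1
assert f_from_measure(0.0) == 0.0
slope = (f_from_measure(1e-4) - f_from_measure(-1e-4)) / 2e-4
assert abs(slope - 1.0) < 1e-6
print("uniform measure in (V.29) yields artanh, a member of K")
```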
Hence, by the results of earlier sections, every operator convex function is also infinitely differentiable.

Theorem V.4.6 Let \( f \) be a nonlinear operator convex function on \( (-1, 1) \). Then there exists a unique probability measure \( \mu \) on \( [-1, 1] \) such that
\[ f(t) = f(0) + f'(0)\, t + \frac{1}{2} f''(0) \int_{-1}^{1} \frac{t^2}{1 - \lambda t}\, d\mu(\lambda). \] (V.31)

Proof. Assume, without loss of generality, that \( f(0) = 0 \) and \( f'(0) = 0 \). Let \( g(t) = f(t)/t \). Then \( g \) is operator monotone by Corollary V.3.11, \( g(0) = 0 \), and \( g'(0) = \frac{1}{2} f''(0) \). So \( g \) has a representation like (V.30), from which the representation (V.31) for \( f \) follows.

We have noted that the integral representation (V.30) implies that every operator monotone function on \( (-1, 1) \) is infinitely differentiable. In fact, we can conclude more. This representation shows that \( f \) has an analytic continuation
\[ f(z) = f(0) + f'(0) \int_{-1}^{1} \frac{z}{1 - \lambda z}\, d\mu(\lambda), \] (V.32)
defined everywhere on the complex plane except on \( (-\infty, -1] \cup [1, \infty) \). Note that
\[ \operatorname{Im} \frac{z}{1 - \lambda z} = \frac{\operatorname{Im} z}{|1 - \lambda z|^2}. \]
So \( f \) defined above maps the upper half-plane \( H_+ = \{ z : \operatorname{Im} z > 0 \} \) into itself. It also maps the lower half-plane \( H_- \) into itself. Further, \( f(z) = \overline{f(\bar{z})} \).
In other words, the function \( f \) on \( {H}_{ - } \) is an analytic continuation of \( f \) on \( {H}_{ + } \) across the interval \( \left( {-1,1}\right) \) obtained by reflection. This is a very important observation, because there is a very rich theory of analytic functions in a half-plane that we can exploit now. Before doing so, let us now do away with the special interval \( \left( {-1,1}\right) \) . Note that a function \( f \) is operator monotone on an interval \( \left( {a, b}\right) \) if and only if the function \( f\left( {\frac{\left( {b - a}\right) t}{2} + \frac{b + a}{2}}\right) \) is operator monotone on \( \left( {-1,1}\right) \) . So, all results obtained for operator monotone functions on \( \left( {-1,1}\right) \) can be extended to functions on \( \left( {a, b}\right) \) . We have proved the following. Theorem V.4.7 If \( f \) is an operator monotone function on \( \left( {a, b}\right) \), then \( f \) has an analytic continuation to the upper half-plane \( {H}_{ + } \) that maps \( {H}_{ + } \) into itself. It also has an analytic continuation to the lower-half plane \( {H}_{ - } \) , obtained by reflection across \( \left( {a, b}\right) \) . The converse of this is also true: if a real function \( f \) on \( \left( {a, b}\right) \) has an analytic continuation to \( {H}_{ + } \) mapping \( {H}_{ + } \) into itself, then \( f \) is operator monotone on \( \left( {a, b}\right) \) . This is proved below. Let \( P \) be the class of all complex analytic functions defined on \( {H}_{ + } \) with their ranges in the closed upper half-plane \( \{ z : \operatorname{Im}z \geq 0\} \) . This is called the class of Pick functions. Since every nonconstant analytic function is an open map, if \( f \) is a nonconstant Pick function, then the range of \( f \) is contained in \( {H}_{ + } \) . It is obvious that \( P \) is a convex cone, and the composition of two nonconstant functions in \( P \) is again in \( P \) . 
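The closure properties just stated, and the examples in the exercise below, are easy to probe numerically (an illustration, not from the text; Python's `cmath` uses the principal branches of `sqrt` and `log`):

```python
# Numerical probe (not from the text) of some Pick functions, using the
# principal branches: sqrt, log and -1/z map the open upper half-plane
# into itself, and so does the composition log(sqrt(z)).
import cmath

samples = [complex(x, y) for x in (-3.0, -0.5, 0.1, 2.0) for y in (0.01, 1.0, 10.0)]
for z in samples:
    assert z.imag > 0
    assert cmath.sqrt(z).imag > 0              # z^{1/2} is in P
    assert cmath.log(z).imag > 0               # log z is in P
    assert (-1 / z).imag > 0                   # -1/z is in P
    assert cmath.log(cmath.sqrt(z)).imag > 0   # composition stays in P
print("all sampled Pick-function values lie in the upper half-plane")
```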
Exercise V.4.8 (i) For \( 0 \leq r \leq 1 \), the function \( f(z) = z^r \) is in \( P \). (ii) The function \( f(z) = \log z \) is in \( P \). (iii) The function \( f(z) = \tan z \) is in \( P \). (iv) The function \( f(z) = -\frac{1}{z} \) is in \( P \). (v) If \( f \) is in \( P \), then so is the function \( \frac{-1}{f} \).

Given any open interval \( (a, b) \), let \( P(a, b) \) be the class of Pick functions that admit an analytic continuation across \( (a, b) \) into the lower half-plane and the continuation is by reflection. In particular, such functions take only real values on \( (a, b) \), and if they are nonconstant, they assume real values only on \( (a, b) \). The set \( P(a, b) \) is a convex cone.

Let \( f \in P(a, b) \) and write \( f(z) = u(z) + iv(z) \), where as usual \( u(z) \) and \( v(z) \) denote the real and imaginary parts of \( f \). Since \( v(x) = 0 \) for \( a < x < b \), we have \( v(x + iy) - v(x) \geq 0 \) if \( y > 0 \). This implies that the partial derivative \( v_y(x) \geq 0 \) and hence, by the Cauchy-Riemann equations, \( u_x(x) \geq 0 \).
Thus, on the interval \( (a, b) \), \( f(x) = u(x) \) is monotone. In fact, we will soon see that \( f \) is operator monotone on \( (a, b) \). This is a consequence of a theorem of Nevanlinna that gives an integral representation of Pick functions. We will give a proof of this now using some elementary results from Fourier analysis. The idea is to use the conformal equivalence between \( H_+ \) and the unit disk \( D \) to transfer the problem to \( D \), and then study the real part \( u \) of \( f \). This is a harmonic function on \( D \), so we can use standard facts from Fourier analysis.

Theorem V.4.9 Let \( u \) be a nonnegative harmonic function on the unit disk \( D = \{ z : |z| < 1 \} \). Then there exists a finite measure \( m \) on \( [0, 2\pi] \) such that
\[ u(r e^{i\theta}) = \int_0^{2\pi} \frac{1 - r^2}{1 + r^2 - 2r \cos(\theta - t)}\, dm(t). \] (V.33)
Conversely, any function of this form is positive and harmonic on the unit disk \( D \).

Proof. Let \( u \) be any continuous real function defined on the closed unit disk that is harmonic in \( D \). Then, by a well-known and elementary theorem in analysis,
\[ u(r e^{i\theta}) = \frac{1}{2\pi} \int_0^{2\pi} \frac{1 - r^2}{1 + r^2 - 2r \cos(\theta - t)}\, u(e^{it})\, dt = \frac{1}{2\pi} \int_0^{2\pi} P_r(\theta - t)\, u(e^{it})\, dt, \] (V.34)
where \( P_r(\theta) \) is the Poisson kernel (defined by the above equation) for \( 0 \leq r < 1 \), \( 0 \leq \theta \leq 2\pi \). If \( u \) is nonnegative, put \( dm(t) = \frac{1}{2\pi} u(e^{it})\, dt \). Then \( m \) is a positive measure on \( [0, 2\pi] \).
By the mean value property of harmonic functions, the total mass of this measure is
\[ \frac{1}{2\pi} \int_0^{2\pi} u(e^{it})\, dt = u(0). \] (V.35)
So we do have a representation of the form (V.33) under the additional hypothesis that \( u \) is continuous on the closed unit disk. The general case is a consequence of this. Let \( u \) be positive and harmonic in \( D \). Then, for \( \varepsilon > 0 \), the function \( u_\varepsilon(z) = u\left( \frac{z}{1 + \varepsilon} \right) \) is positive and harmonic in the disk \( |z| < 1 + \varepsilon \). Therefore, it can be represented in the form (V.33) with a measure \( m_\varepsilon(t) \) of finite total mass \( u_\varepsilon(0) = u(0) \). As \( \varepsilon \rightarrow 0 \), \( u_\varepsilon \) converges to \( u \) uniformly on compact subsets of \( D \). Since the measures \( m_\varepsilon \) all have the same mass, using the weak* compactness of the space of probability measures, we conclude that there exists a positive measure \( m \) such that
\[ u(r e^{i\theta}) = \lim_{\varepsilon \rightarrow 0} u_\varepsilon(r e^{i\theta}) = \int_0^{2\pi} \frac{1 - r^2}{1 + r^2 - 2r \cos(\theta - t)}\, dm(t). \]
Conversely, since the Poisson kernel \( P_r \) is nonnegative, any function represented by (V.33) is nonnegative.

Theorem V.4.9 is often called the Herglotz Theorem. It says that every nonnegative harmonic function on the unit disk is the Poisson integral of a positive measure. Recall that two harmonic functions \( u, v \) are called harmonic conjugates if the function \( f(z) = u(z) + iv(z) \) is analytic. Every harmonic function \( u \) has a harmonic conjugate that is uniquely determined up to an additive constant.
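Two elementary properties of the Poisson kernel that the proof above relies on can be verified numerically (an illustrative sketch, not from the text): \( P_r \geq 0 \), and its normalised integral over the circle equals 1, which is (V.35) for the constant function \( u \equiv 1 \).

```python
# Illustrative check (not from the text) of two Poisson-kernel facts:
# P_r(theta) >= 0, and (1/2pi) * int_0^{2pi} P_r(theta) dtheta = 1,
# i.e. the mean value property (V.35) for u = 1.
import math

def poisson(r, theta):
    return (1 - r * r) / (1 + r * r - 2 * r * math.cos(theta))

n = 4096
for r in (0.0, 0.3, 0.9):
    h = 2 * math.pi / n
    mean = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        p = poisson(r, theta)
        assert p >= 0
        mean += p
    mean *= h / (2 * math.pi)
    assert abs(mean - 1.0) < 1e-10   # midpoint rule is very accurate here
print("Poisson kernel is nonnegative with mean value 1")
```

(The midpoint rule converges extremely fast for smooth periodic integrands, which is why so few nodes suffice.)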
Theorem V.4.10 Let \( f\left( z\right) = u\left( z\right) + {iv}\left( z\right) \) be analytic on the unit disk \( D \) . If \( u\left( z\right) \geq 0 \), then there exists a finite positive measure \( m \) on \( \left\lbrack {0,{2\pi }}\right\rbrack \) such that \[ f\left( z\right) = {\int }_{0}^{2\pi }\frac{{e}^{it} + z}{{e}^{it} - z}{dm}\left( t\right) + {iv}\left( 0\right) \] (V.36) Conversely, every function of this form is analytic on \( D \) and has a positive real part. Proof. By Theorem V.4.9, the function \( u \) can be written as in (V.33). The Poisson kernel \( {P}_{r},0 \leq r < 1 \), can be written as \[ {P}_{r}\left( \theta \right) = \frac{1 - {r}^{2}}{1 + {r}^{2} - {2r}\cos \theta } = \mathop{\sum }\limits_{{-\infty }}^{\infty }{r}^{\left| n\right| }{e}^{in\theta } = \operatorname{Re}\frac{1 + r{e}^{i\theta }}{1 - r{e}^{i\theta }}. \] Hence, \[ {P}_{r}\left( {\theta - t}\right) = \operatorname{Re}\frac{1 + r{e}^{i\left( {\theta - t}\right) }}{1 - r{e}^{i\left( {\theta - t}\right) }} = \operatorname{Re}\frac{{e}^{it} + r{e}^{i\theta }}{{e}^{it} - r{e}^{i\theta }}, \] and \[ u\left( z\right) = \operatorname{Re}{\int }_{0}^{2\pi }\frac{{e}^{it} + z}{{e}^{it} - z}{dm}\left( t\right) \] So, \( f\left( z\right) \) differs from this last integral only by an imaginary constant. Putting \( z = 0 \), one sees that this constant is \( {iv}\left( 0\right) \) . The converse statement is easy to prove. Next, note that the disk \( D \) and the half-plane \( {H}_{ + } \) are conformally equivalent, i.e., there exists an analytic isomorphism between these two spaces. For \( z \in D \), let \[ \zeta \left( z\right) = \frac{1}{i}\frac{z + 1}{z - 1} \] (V.37) Then \( \zeta \in {H}_{ + } \) . The inverse of this map is given by \[ z\left( \zeta \right) = \frac{\zeta - i}{\zeta + i} \] (V.38) Using these transformations, we can establish an equivalence between the class \( P \) and the class of analytic functions on \( D \) with positive real part. 
If \( f \) is a function in the latter class, let \[ \varphi \left( \zeta \right) = {if}\left( {z\left( \zeta \right) }\right) \] (V.39) Then \( \varphi \in P \) . The inverse of this transformation is \[ f\left( z\right) = - {i\varphi }\left( {\zeta \left( z\right) }\right) \] (V.40) Using these ideas we can prove the following theorem, called Nevan-linna's Theorem. Theorem V.4.11 A function \( \varphi \) is in the Pick class if and only if it has a representation \[ \varphi \left( \zeta \right) = \alpha + {\beta \zeta } + {\int }_{-\infty }^{\infty }\frac{1 + {\lambda \zeta }}{\lambda - \zeta }{d\nu }\left( \lambda \right) \] (V.41) where \( \alpha \) is a real number, \( \beta \geq 0 \), and \( \nu \) is a positive finite measure on the real line. Proof. Let \( f \) be the function on \( D \) associated with \( \varphi \) via the transformation (V.40). By Theorem V.4.10, there exists a finite positive measure \( m \) on \( \left\lbrack {0,{2\pi }}\right\rbrack \) such that \[ f\left( z\right) = {\int }_{0}^{2\pi }\frac{{e}^{it} + z}{{e}^{it} - z}{dm}\left( t\right) - {i\alpha } \] If \( f\left( z\right) = u\left( z\right) + {iv}\left( z\right) \), then \( \alpha = - v\left( 0\right) \), and the total mass of \( m \) is \( u\left( 0\right) \) . If the measure \( m \) has a positive mass at the singleton \( \{ 0\} \), let this mass be \( \beta \) . Then the expression above reduces to \[ f\left( z\right) = {\int }_{\left( 0,2\pi \right) }\frac{{e}^{it} + z}{{e}^{it} - z}{dm}\left( t\right) + \beta \frac{1 + z}{1 - z} - {i\alpha }. \] Using the transformations (V.38) and (V.39), we get from this \[ \varphi \left( \zeta \right) = \alpha + {\beta \zeta } + i{\int }_{\left( 0,2\pi \right) }\frac{{e}^{it} + \frac{\zeta - i}{\zeta + i}}{{e}^{it} - \frac{\zeta - i}{\zeta + i}}{dm}\left( t\right) . 
\] The last term above is equal to \[ {\int }_{\left( 0,2\pi \right) }\frac{\zeta \cos \frac{t}{2} - \sin \frac{t}{2}}{\zeta \sin \frac{t}{2} + \cos \frac{t}{2}}{dm}\left( t\right) \] Now, introduce a change of variables \( \lambda = - \cot \frac{t}{2} \) . This maps \( \left( {0,{2\pi }}\right) \) onto \( \left( {-\infty ,\infty }\right) \) . The measure \( m \) is transformed by the above map to a finite measure \( \nu \) on \( \left( {-\infty ,\infty }\right) \) and the above integral is transformed to \[ {\int }_{-\infty }^{\infty }\frac{1 + {\lambda \zeta }}{\lambda - \zeta }{d\nu }\left( \lambda \right) \] This shows that \( \varphi \) can be represented in the form (V.41). It is easy to see that every function of this form is a Pick function. There is another form in which it is convenient to represent Pick functions. 
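The key computation in the proof above, namely that the disk kernel becomes the Nevanlinna kernel under the substitutions (V.38) and \( \lambda = - \cot \frac{t}{2} \), can be verified numerically. An illustrative sketch (function names are ours):

```python
import cmath, math

def kernel_disk(t, zeta):
    # i (e^{it} + z)/(e^{it} - z), with z = (ζ - i)/(ζ + i) as in (V.38)
    z = (zeta - 1j) / (zeta + 1j)
    w = cmath.exp(1j * t)
    return 1j * (w + z) / (w - z)

def kernel_line(t, zeta):
    # (1 + λζ)/(λ - ζ), with λ = -cot(t/2), t in (0, 2π)
    lam = -math.cos(t / 2) / math.sin(t / 2)
    return (1 + lam * zeta) / (lam - zeta)
```

The two functions agree for every \( t \in \left( {0,{2\pi }}\right) \) and \( \zeta \in {H}_{ + } \), and the common value has positive imaginary part, as it must for a Pick function.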
Note that \[ \frac{1 + {\lambda \zeta }}{\lambda - \zeta } = \left( {\frac{1}{\lambda - \zeta } - \frac{\lambda }{{\lambda }^{2} + 1}}\right) \left( {{\lambda }^{2} + 1}\right) \] So, if we write \( {d\mu }\left( \lambda \right) = \left( {{\lambda }^{2} + 1}\right) {d\nu }\left( \lambda \right) \), then we obtain from (V.41) the representation \[ \varphi \left( \zeta \right) = \alpha + {\beta \zeta } + {\int }_{-\infty }^{\infty }\left\lbrack {\frac{1}{\lambda - \zeta } - \frac{\lambda }{{\lambda }^{2} + 1}}\right\rbrack {d\mu }\left( \lambda \right) \] (V.42) where \( \mu \) is a positive Borel measure on \( \mathbb{R} \), for which \( \int \frac{1}{{\lambda }^{2} + 1}{d\mu }\left( \lambda \right) \) is finite. (A Borel measure on \( \mathbb{R} \) is a measure defined on Borel sets that puts finite mass on bounded sets.) Now we turn to the question of uniqueness of the above representations. It is easy to see from (V.41) that \[ \alpha = \operatorname{Re}\varphi \left( i\right) \] (V.43) Therefore, \( \alpha \) is uniquely determined by \( \varphi \) . Now let \( \eta \) be any positive real number. From (V.41) we see that \[ \frac{\varphi \left( {i\eta }\right) }{i\eta } = \frac{\alpha }{i\eta } + \beta + {\int }_{-\infty }^{\infty }\frac{1 + {\lambda }^{2} + {i\lambda }\left( {\eta - {\eta }^{-1}}\right) }{{\lambda }^{2} + {\eta }^{2}}{d\nu }\left( \lambda \right) . \] As \( \eta \rightarrow \infty \), the integrand converges to 0 for each \( \lambda \) . The real and imaginary parts of the integrand are uniformly bounded by 1 when \( \eta > 1 \) . So by the Lebesgue Dominated Convergence Theorem, the integral converges to 0 as \( \eta \rightarrow \infty \) . Thus, \[ \beta = \mathop{\lim }\limits_{{\eta \rightarrow \infty }}\varphi \left( {i\eta }\right) /{i\eta } \] (V.44) and thus \( \beta \) is uniquely determined by \( \varphi \) . Now we will prove that the measure \( {d\mu } \) in (V.42), is uniquely determined by \( \varphi \) . 
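The limit formulas (V.43) and (V.44) are easy to test numerically. Here is a small sketch for the principal branch of the square root, which is a Pick function (this example is worked out in detail further on):

```python
import cmath, math

# (V.43): α = Re φ(i); for φ(ζ) = ζ^{1/2} this is cos(π/4) = 1/√2
alpha = cmath.sqrt(1j).real

# (V.44): β = lim_{η→∞} φ(iη)/(iη); here |φ(iη)/(iη)| = η^{-1/2} → 0
betas = [abs(cmath.sqrt(1j * eta) / (1j * eta)) for eta in (1e2, 1e4, 1e6)]
```

So for the square root \( \alpha = 1/\sqrt{2} \) and \( \beta = 0 \), as the decreasing sequence of quotients confirms.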
Denote by \( \mu \) the unique right continuous monotonically increasing function on \( \mathbb{R} \) satisfying \( \mu \left( 0\right) = 0 \) and \( \mu (\left( {a, b\rbrack }\right) = \mu \left( b\right) - \mu \left( a\right) \) for every interval \( (a, b\rbrack \) . (This is called the distribution function associated with \( {d\mu } \) .) We will prove the following result, called the Stieltjes inversion formula, from which it follows that \( \mu \) is unique. Theorem V.4.12 If the Pick function \( \varphi \) is represented by (V.42), then for any \( a, b \) that are points of continuity of the distribution function \( \mu \) we have \[ \mu \left( b\right) - \mu \left( a\right) = \mathop{\lim }\limits_{{\eta \rightarrow 0}}\frac{1}{\pi }{\int }_{a}^{b}\operatorname{Im}\varphi \left( {x + {i\eta }}\right) {dx}. \] (V.45) Proof. From (V.42) we see that \[ \frac{1}{\pi }{\int }_{a}^{b}\operatorname{Im}\varphi \left( {x + {i\eta }}\right) {dx} = \frac{1}{\pi }{\int }_{a}^{b}\left\lbrack {{\beta \eta } + {\int }_{-\infty }^{\infty }\frac{\eta }{{\left( \lambda - x\right) }^{2} + {\eta }^{2}}{d\mu }\left( \lambda \right) }\right\rbrack {dx} \] \[ = \frac{1}{\pi }\left\lbrack {{\beta \eta }\left( {b - a}\right) + {\int }_{-\infty }^{\infty }{\int }_{a}^{b}\frac{\eta dx}{{\left( x - \lambda \right) }^{2} + {\eta }^{2}}{d\mu }\left( \lambda \right) }\right\rbrack \] the interchange of integrals being permissible by Fubini's Theorem. As \( \eta \rightarrow 0 \), the first term in the square brackets above goes to 0 . The inner integral can be calculated by the change of variables \( u = \frac{x - \lambda }{\eta } \) . This gives \[ \mathop{\int }\limits_{a}^{b}\frac{\eta dx}{{\left( x - \lambda \right) }^{2} + {\eta }^{2}} = \mathop{\int }\limits_{\frac{a - \lambda }{\eta }}^{\frac{b - \lambda }{\eta }}\frac{du}{{u}^{2} + 1} \] \[ = \arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) . 
\] So to prove (V.45), we have to show that \[ \mu \left( b\right) - \mu \left( a\right) = \mathop{\lim }\limits_{{\eta \rightarrow 0}}\frac{1}{\pi }{\int }_{-\infty }^{\infty }\left\lbrack {\arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) }\right\rbrack {d\mu }\left( \lambda \right) . \] We will use the following properties of the function arctan. This is a monotonically increasing odd function on \( \left( {-\infty ,\infty }\right) \) whose range is \( \left( {-\frac{\pi }{2},\frac{\pi }{2}}\right) \) . So, \[ 0 \leq \arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) \leq \pi . \] If \( \left( {b - \lambda }\right) \) and \( \left( {a - \lambda }\right) \) have the same sign, then by the addition law for arctan we have, \[ \arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) = \arctan \frac{\eta \left( {b - a}\right) }{{\eta }^{2} + \left( {b - \lambda }\right) \left( {a - \lambda }\right) }. \] If \( x \) is positive, then \[ \arctan x = {\int }_{0}^{x}\frac{dt}{1 + {t}^{2}} \leq {\int }_{0}^{x}{dt} = x \] Now, let \( \varepsilon \) be any given positive number. 
Since \( a \) and \( b \) are points of continuity of \( \mu \), we can choose \( \delta \) such that \[ \mu \left( {a + \delta }\right) - \mu \left( {a - \delta }\right) \leq \varepsilon /5 \] \[ \mu \left( {b + \delta }\right) - \mu \left( {b - \delta }\right) \leq \varepsilon /5 \] We then have, \[ \left| {\mu \left( b\right) - \mu \left( a\right) - \frac{1}{\pi }{\int }_{-\infty }^{\infty }\left\lbrack {\arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) }\right\rbrack {d\mu }\left( \lambda \right) }\right| \] \[ \leq \frac{1}{\pi }{\int }_{b}^{\infty }\left\lbrack {\arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) }\right\rbrack {d\mu }\left( \lambda \right) \] \[ + \frac{1}{\pi }{\int }_{a}^{b}\left\lbrack {\pi - \arctan \left( \frac{b - \lambda }{\eta }\right) + \arctan \left( \frac{a - \lambda }{\eta }\right) }\right\rbrack {d\mu }\left( \lambda \right) \] \[ + \frac{1}{\pi }{\int }_{-\infty }^{a}\left\lbrack {\arctan \left( \frac{b - \lambda }{\eta }\right) - \arctan \left( \frac{a - \lambda }{\eta }\right) }\right\rbrack {d\mu }\left( \lambda \right) \] \[ \leq \frac{2\varepsilon }{5} + \frac{1}{\pi }{\int }_{b + \delta }^{\infty }\arctan \left( \frac{\eta \left( {b - a}\right) }{{\eta }^{2} + \left( {b - \lambda }\right) \left( {a - \lambda }\right) }\right) {d\mu }\left( \lambda \right) \] \[ + \frac{1}{\pi }{\int }_{a + \delta }^{b - \delta }\left\lbrack {\pi - \arctan \left( \frac{b - \lambda }{\eta }\right) + \arctan \left( \frac{a - \lambda }{\eta }\right) }\right\rbrack {d\mu }\left( \lambda \right) \] \[ + \frac{1}{\pi }{\int }_{-\infty }^{a - \delta }\arctan \left( \frac{\eta \left( {b - a}\right) }{{\eta }^{2} + \left( {b - \lambda }\right) \left( {a - \lambda }\right) }\right) {d\mu }\left( \lambda \right) . \] Note that in the two integrals with infinite limits, the arguments of arctan are positive. 
In the middle integral the variable \( \lambda \) runs between \( a + \delta \) and \( b - \delta \) . For such \( \lambda ,\frac{b - \lambda }{\eta } \geq \frac{\delta }{\eta } \) and \( \frac{a - \lambda }{\eta } \leq - \frac{\delta }{\eta } \) . So the right-hand side of the above inequality is dominated by \[ \frac{2\varepsilon }{5} + \frac{\eta }{\pi }{\int }_{b + \delta }^{\infty }\frac{b - a}{{\eta }^{2} + \left( {b - \lambda }\right) \left( {a - \lambda }\right) }{d\mu }\left( \lambda \right) \] \[ + \frac{\eta }{\pi }{\int }_{-\infty }^{a - \delta }\frac{b - a}{{\eta }^{2} + \left( {b - \lambda }\right) \left( {a - \lambda }\right) }{d\mu }\left( \lambda \right) \] \[ + \frac{1}{\pi }{\int }_{a + \delta }^{b - \delta }\left\lbrack {\pi - 2\arctan \frac{\delta }{\eta }}\right\rbrack {d\mu }\left( \lambda \right) \] The first two integrals are finite (because of the properties of \( {d\mu } \) ). The third one is dominated by \( 2\left( {\frac{\pi }{2} - \arctan \frac{\delta }{\eta }}\right) \left\lbrack {\mu \left( b\right) - \mu \left( a\right) }\right\rbrack \) . So we can choose \( \eta \) small enough to make each of the last three terms smaller than \( \varepsilon /5 \) . This proves the theorem. We have shown above that all the terms occurring in the representation (V.42) are uniquely determined by the relations (V.43), (V.44), and (V.45). Exercise V.4.13 We have proved the relations (V.33), (V.36), (V.41) and (V.42) in that order. Show that all these are, in fact, equivalent. Hence, each of these representations is unique. Proposition V.4.14 A Pick function \( \varphi \) is in the class \( P\left( {a, b}\right) \) if and only if the measure \( \mu \) associated with it in the representation (V.42) has zero mass on \( \left( {a, b}\right) \) . Proof. Let \( \varphi \left( {x + {i\eta }}\right) = u\left( {x + {i\eta }}\right) + {iv}\left( {x + {i\eta }}\right) \), where \( u, v \) are the real and imaginary parts of \( \varphi \) . 
If \( \varphi \) can be continued across \( \left( {a, b}\right) \), then as \( \eta \downarrow 0 \) , on any closed subinterval \( \left\lbrack {c, d}\right\rbrack \) of \( \left( {a, b}\right), v\left( {x + {i\eta }}\right) \) converges uniformly to a bounded continuous function \( v\left( x\right) \) on \( \left\lbrack {c, d}\right\rbrack \) . Hence, \[ \mu \left( d\right) - \mu \left( c\right) = \frac{1}{\pi }{\int }_{c}^{d}v\left( x\right) {dx} \] i.e., \( {d\mu }\left( x\right) = \frac{1}{\pi }v\left( x\right) {dx} \) . If the analytic continuation to the lower half-plane is by reflection across \( \left( {a, b}\right) \), then \( v \) is identically zero on \( \left\lbrack {c, d}\right\rbrack \) and hence so is \( \mu \) . Conversely, if \( \mu \) has no mass on \( \left( {a, b}\right) \), then for \( \zeta \) in \( \left( {a, b}\right) \) the integral in (V.42) is convergent, and is real valued. This shows that the function \( \varphi \) can be continued from \( {H}_{ + } \) to \( {H}_{ - } \) across \( \left( {a, b}\right) \) by reflection. The reader should note that the above proposition shows that the converse of Theorem V.4.7 is also true. It should be pointed out that the formula (V.42) defines two analytic functions, one on \( {H}_{ + } \) and the other on \( {H}_{ - } \) . If these are denoted by \( \varphi \) and \( \psi \), then \( \varphi \left( \zeta \right) = \overline{\psi \left( \bar{\zeta }\right) } \) . So \( \varphi \) and \( \psi \) are reflections of each other. But they need not be analytic continuations of each other. For this to be the case, the measure \( \mu \) should be zero on an interval \( \left( {a, b}\right) \) across which the function can be continued analytically. Exercise V.4.15 If a function \( f \) is operator monotone on the whole real line, then \( f \) must be of the form \( f\left( t\right) = \alpha + {\beta t},\alpha \in \mathbb{R},\beta \geq 0 \) . Let us now look at a few simple examples. 
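As a numerical illustration of the Stieltjes inversion formula (V.45), take the Pick function \( \varphi \left( \zeta \right) = - 1/\zeta \), whose representing measure is the unit point mass at 0 (this is the first example below). Here \( \operatorname{Im}\varphi \left( {x + {i\eta }}\right) = \eta /\left( {{x}^{2} + {\eta }^{2}}\right) \), and the inversion integral should approach 1 whenever \( a < 0 < b \) and 0 whenever the interval misses the origin. A sketch (names are ours):

```python
import math

def stieltjes_mass(a, b, eta, n=200_000):
    # (1/π) ∫_a^b Im φ(x + iη) dx for φ(ζ) = -1/ζ, by the midpoint rule;
    # Im φ(x + iη) = η/(x² + η²)
    h = (b - a) / n
    return sum(eta / ((a + (k + 0.5) * h) ** 2 + eta ** 2)
               for k in range(n)) * h / math.pi
```

For small \( \eta \) the value over \( \left( {-1,1}\right) \) is close to 1, and over \( \left( {0.5,1}\right) \) close to 0, matching the point mass at the origin.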
Example V.4.16 The function \( \varphi \left( \zeta \right) = - \frac{1}{\zeta } \) is a Pick function. For this function, we see from (V.43) and (V.44) that \( \alpha = \beta = 0 \) . Since \( \varphi \) is analytic everywhere in the plane except at 0, Proposition V.4.14 tells us that the measure \( \mu \) is concentrated at the single point 0 . Example V.4.17 Let \( \varphi \left( \zeta \right) = {\zeta }^{1/2} \) be the principal branch of the square root function. This is a Pick function. From (V.43) we see that \[ \alpha = \operatorname{Re}\varphi \left( i\right) = \operatorname{Re}{e}^{{i\pi }/4} = \frac{1}{\sqrt{2}}. \] From (V.44) we see that \( \beta = 0 \) . If \( \zeta = \lambda + {i\eta } \) is any complex number, then \[ {\zeta }^{1/2} = {\left( \frac{\left| \zeta \right| + \lambda }{2}\right) }^{1/2} + i\operatorname{sgn}\eta {\left( \frac{\left| \zeta \right| - \lambda }{2}\right) }^{1/2}, \] where \( \operatorname{sgn}\eta \) is the sign of \( \eta \), defined to be 1 if \( \eta \geq 0 \) and -1 if \( \eta < 0 \) . So if \( \eta \geq 0 \), we have \( \operatorname{Im}\varphi \left( \zeta \right) = {\left( \frac{\left| \zeta \right| - \lambda }{2}\right) }^{1/2} \) . As \( \eta \downarrow 0,\left| \zeta \right| \) comes closer to \( \left| \lambda \right| \) . So, \( \operatorname{Im}\varphi \left( {\lambda + {i\eta }}\right) \) converges to 0 if \( \lambda > 0 \) and to \( {\left| \lambda \right| }^{1/2} \) if \( \lambda < 0 \) . Since \( \varphi \) is positive on the right half-axis, the measure \( \mu \) has no mass at 0 . The measure can now be determined from (V.45). We have, then \[ {\zeta }^{1/2} = \frac{1}{\sqrt{2}} + {\int }_{-\infty }^{0}\left( {\frac{1}{\lambda - \zeta } - \frac{\lambda }{{\lambda }^{2} + 1}}\right) \frac{{\left| \lambda \right| }^{1/2}}{\pi }{d\lambda }. 
\] (V.46) Example V.4.18 Let \( \varphi \left( \zeta \right) = \log \zeta \), where Log is the principal branch of the logarithm, defined everywhere except on \( ( - \infty ,0\rbrack \) by the formula \( \log \zeta = \ln \left| \zeta \right| + i\operatorname{Arg}\zeta \) . The function \( \operatorname{Arg}\zeta \) is the principal branch of the argument, taking values in \( ( - \pi ,\pi \rbrack \) . We then have \[ \alpha = \operatorname{Re}\left( {\log i}\right) = 0 \] \[ \beta = \mathop{\lim }\limits_{{\eta \rightarrow \infty }}\frac{\log \left( {i\eta }\right) }{i\eta } = 0 \] As \( \eta \downarrow 0,\operatorname{Im}\left( {\operatorname{Log}\left( {\lambda + {i\eta }}\right) }\right) \) converges to \( \pi \) if \( \lambda < 0 \) and to 0 if \( \lambda > 0 \) . So from (V.45) we see that, the measure \( \mu \) is just the restriction of the Lebesgue measure to \( ( - \infty ,0\rbrack \) . Thus, \[ \log \zeta = {\int }_{-\infty }^{0}\left( {\frac{1}{\lambda - \zeta } - \frac{\lambda }{{\lambda }^{2} + 1}}\right) {d\lambda } \] (V.47) Exercise V.4.19 For \( 0 < r < 1 \), let \( {\zeta }^{r} \) denote the principal branch of the function \( \varphi \left( \zeta \right) = {\zeta }^{r} \) . Show that \[ {\zeta }^{r} = \cos \frac{r\pi }{2} + \frac{\sin {r\pi }}{\pi }{\int }_{-\infty }^{0}\left( {\frac{1}{\lambda - \zeta } - \frac{\lambda }{{\lambda }^{2} + 1}}\right) {\left| \lambda \right| }^{r}{d\lambda }. \] (V.48) This includes (V.46) as a special case. Let now \( f \) be any operator monotone function on \( \left( {0,\infty }\right) \) . We have seen above that \( f \) must have the form \[ f\left( t\right) = \alpha + {\beta t} + {\int }_{-\infty }^{0}\left( {\frac{1}{\lambda - t} - \frac{\lambda }{{\lambda }^{2} + 1}}\right) {d\mu }\left( \lambda \right) . 
\] By a change of variables we can write this as \[ f\left( t\right) = \alpha + {\beta t} + {\int }_{0}^{\infty }\left( {\frac{\lambda }{{\lambda }^{2} + 1} - \frac{1}{\lambda + t}}\right) {d\mu }\left( \lambda \right) \] (V.49) where \( \alpha \in \mathbb{R},\beta \geq 0 \) and \( \mu \) is a positive measure on \( \left( {0,\infty }\right) \) such that \[ {\int }_{0}^{\infty }\frac{1}{{\lambda }^{2} + 1}{d\mu }\left( \lambda \right) < \infty \] (V.50) Suppose \( f \) is such that \[ f\left( 0\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{t \rightarrow 0}}f\left( t\right) > - \infty . \] (V.51) Then, it follows from (V.49) that \( \mu \) must also satisfy the condition \[ {\int }_{0}^{1}\frac{1}{\lambda }{d\mu }\left( \lambda \right) < \infty \] (V.52) We have from (V.49) \[ f\left( t\right) - f\left( 0\right) = {\beta t} + {\int }_{0}^{\infty }\left( {\frac{1}{\lambda } - \frac{1}{\lambda + t}}\right) {d\mu }\left( \lambda \right) \] \[ = {\beta t} + {\int }_{0}^{\infty }\frac{t}{\left( {\lambda + t}\right) \lambda }{d\mu }\left( \lambda \right) \] Hence, we can write \( f \) in the form \[ f\left( t\right) = \gamma + {\beta t} + {\int }_{0}^{\infty }\frac{\lambda t}{\lambda + t}{dw}\left( \lambda \right) \] (V.53) where \( \gamma = f\left( 0\right) \) and \( {dw}\left( \lambda \right) = \frac{1}{{\lambda }^{2}}{d\mu }\left( \lambda \right) \) . From (V.50) and (V.52), we see that the measure \( w \) satisfies the conditions \[ {\int }_{0}^{\infty }\frac{{\lambda }^{2}}{{\lambda }^{2} + 1}{dw}\left( \lambda \right) < \infty \text{ and }{\int }_{0}^{1}{\lambda dw}\left( \lambda \right) < \infty . 
\] (V.54) These two conditions can, equivalently, be expressed as a single condition \[ {\int }_{0}^{\infty }\frac{\lambda }{1 + \lambda }{dw}\left( \lambda \right) < \infty \] (V.55) We have thus shown that an operator monotone function on \( \left( {0,\infty }\right) \) satisfying the condition (V.51) has a canonical representation (V.53), where \( \gamma \in \mathbb{R},\beta \geq 0 \) and \( w \) is a positive measure satisfying (V.55). The representation (V.53) is often useful for studying operator monotone functions on the positive half-line \( \lbrack 0,\infty ) \) . Suppose that we are given a function \( f \) as in (V.53). If \( w \) satisfies the conditions (V.54), then \[ {\int }_{0}^{\infty }\left( {\frac{\lambda }{{\lambda }^{2} + 1} - \frac{1}{\lambda }}\right) {\lambda }^{2}{dw}\left( \lambda \right) > - \infty \] and we can write \[ f\left( t\right) = \left\{ {\gamma - {\int }_{0}^{\infty }\left( {\frac{\lambda }{{\lambda }^{2} + 1} - \frac{1}{\lambda }}\right) {\lambda }^{2}{dw}\left( \lambda \right) }\right\} + {\beta t} + {\int }_{0}^{\infty }\left( {\frac{\lambda }{{\lambda }^{2} + 1} - \frac{1}{\lambda + t}}\right) {\lambda }^{2}{dw}\left( \lambda \right) . \] So, if we put the number in braces above equal to \( \alpha \) and \( {d\mu }\left( \lambda \right) = {\lambda }^{2}{dw}\left( \lambda \right) \), then we have a representation of \( f \) in the form (V.49). Exercise V.4.20 Use the considerations in the preceding paragraphs to show that, for \( 0 < r \leq 1 \) and \( t > 0 \), we have \[ {t}^{r} = \frac{\sin {r\pi }}{\pi }{\int }_{0}^{\infty }\frac{\lambda t}{\lambda + t}{\lambda }^{r - 2}{d\lambda } \] (V.56) (See Exercise V.1.10 also.) Exercise V.4.21 For \( t > 0 \), show that \[ \log \left( {1 + t}\right) = {\int }_{1}^{\infty }\frac{\lambda t}{\lambda + t}{\lambda }^{-2}{d\lambda } \] (V.57) ## Appendix 1. Differentiability of Convex Functions Let \( f \) be a real valued convex function defined on an interval \( I \) . 
Then \( f \) has some smoothness properties, which are listed below. The function \( f \) is Lipschitz on any closed interval \( \left\lbrack {a, b}\right\rbrack \) contained in \( {I}^{0} \) , the interior of \( I \) . So \( f \) is continuous on \( {I}^{0} \) . At every point \( x \) in \( {I}^{0} \), the right and left derivatives of \( f \) exist. These are defined, respectively, as \[ {f}_{ + }^{\prime }\left( x\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{y \downarrow x}}\frac{f\left( y\right) - f\left( x\right) }{y - x} \] \[ {f}_{ - }^{\prime }\left( x\right) \mathrel{\text{:=}} \mathop{\lim }\limits_{{y \uparrow x}}\frac{f\left( y\right) - f\left( x\right) }{y - x}. \] Both these functions are monotonically increasing on \( {I}^{0} \) . Further, \[ \mathop{\lim }\limits_{{x \downarrow w}}{f}_{ \pm }^{\prime }\left( x\right) = {f}_{ + }^{\prime }\left( w\right) \] \[ \mathop{\lim }\limits_{{x \uparrow w}}{f}_{ \pm }^{\prime }\left( x\right) = {f}_{ - }^{\prime }\left( w\right) \] The function \( f \) is differentiable except on a countable set \( E \) in \( {I}^{0} \), i.e., at every point \( x \) in \( {I}^{0} \smallsetminus E \) the left and right derivatives of \( f \) are equal. Further, the derivative \( {f}^{\prime } \) is continuous on \( {I}^{0} \smallsetminus E \) . If a sequence of convex functions converges at every point of \( I \), then the limit function is convex. The convergence is uniform on any closed interval \( \left\lbrack {a, b}\right\rbrack \) contained in \( {I}^{0} \) . ## Appendix 2. Regularisation of Functions The convolution of two functions leads to a new function that inherits the stronger of the smoothness properties of the two original functions. This is the idea behind "regularisation" of functions. 
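This idea, convolving with a smooth compactly supported kernel, can be sketched numerically. The following illustration (all names are ours) uses the standard bump \( {e}^{-1/\left( {1 - {x}^{2}}\right) } \) on \( \left\lbrack {-1,1}\right\rbrack \), normalised numerically so that it integrates to 1, and smooths a step function:

```python
import math

def bump(x):
    # a standard C-infinity function supported on [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

# normalising constant Z = ∫ bump, computed by the midpoint rule
N = 4000
H = 2.0 / N
Z = sum(bump(-1 + (k + 0.5) * H) for k in range(N)) * H

def mollify(f, x, eps, n=4000):
    # (f * φ_ε)(x) with φ_ε(y) = φ(y/ε)/ε and φ = bump/Z;
    # the integral runs over the support [-ε, ε] of φ_ε
    h = 2 * eps / n
    return sum(f(x - y) * bump(y / eps) / (Z * eps)
               for y in (-eps + (k + 0.5) * h for k in range(n))) * h

step = lambda x: 1.0 if x > 0 else 0.0
```

At the jump, `mollify(step, 0.0, 0.1)` returns the midpoint value \( 1/2 \); away from the jump, for instance at \( x = {0.5} \) with \( \varepsilon = {0.1} \), it returns the value of the step itself.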
Let \( \varphi \) be a real function of class \( {C}^{\infty } \) with the following properties: \( \varphi \geq \) \( 0,\varphi \) is even, the support supp \( \varphi = \left\lbrack {-1,1}\right\rbrack \), and \( \int \varphi = 1 \) . For each \( \varepsilon > \) 0, let \( {\varphi }_{\varepsilon }\left( x\right) = \frac{1}{\varepsilon }\varphi \left( \frac{x}{\varepsilon }\right) \) . Then supp \( {\varphi }_{\varepsilon } = \left\lbrack {-\varepsilon ,\varepsilon }\right\rbrack \) and \( {\varphi }_{\varepsilon } \) has all the other properties of \( \varphi \) listed above. The functions \( {\varphi }_{\varepsilon } \) are called mollifiers or smooth approximate identities. If \( f \) is a locally integrable function, we define its regularisation of order \( \varepsilon \) as the function \[ {f}_{\varepsilon }\left( x\right) = \left( {f * {\varphi }_{\varepsilon }}\right) \left( x\right) \; \mathrel{\text{:=}} \int f\left( {x - y}\right) {\varphi }_{\varepsilon }\left( y\right) {dy} \] \[ = \int f\left( {x - {\varepsilon t}}\right) \varphi \left( t\right) {dt} \] The family \( {f}_{\varepsilon } \) has the following properties. 1. Each \( {f}_{\varepsilon } \) is a \( {C}^{\infty } \) function. 2. If the support of \( f \) is contained in a compact set \( K \), then the support of \( {f}_{\varepsilon } \) is contained in an \( \varepsilon \) -neighbourhood of \( K \) . 3. If \( f \) is continuous at \( {x}_{0} \), then \( \mathop{\lim }\limits_{{\varepsilon \downarrow 0}}{f}_{\varepsilon }\left( {x}_{0}\right) = f\left( {x}_{0}\right) \) . 4. If \( f \) has a discontinuity of the first kind at \( {x}_{0} \), then \( \mathop{\lim }\limits_{{\varepsilon \downarrow 0}}{f}_{\varepsilon }\left( {x}_{0}\right) = \) \( 1/2\left\lbrack {f\left( {{x}_{0} + }\right) + f\left( {{x}_{0} - }\right) }\right\rbrack \) . 
(A point \( {x}_{0} \) is a point of discontinuity of the first kind if the left and right limits of \( f \) at \( {x}_{0} \) exist; these limits are denoted as \( f\left( {{x}_{0} - }\right) \) and \( f\left( {{x}_{0} + }\right) \), respectively.) 5. If \( f \) is continuous, then \( {f}_{\varepsilon }\left( x\right) \) converges to \( f\left( x\right) \) as \( \varepsilon \rightarrow 0 \) . The convergence is uniform on every compact set. 6. If \( f \) is differentiable, then, for every \( \varepsilon > 0,{\left( {f}_{\varepsilon }\right) }^{\prime } = {\left( {f}^{\prime }\right) }_{\varepsilon } \) . 7. If \( f \) is monotone, then, as \( \varepsilon \rightarrow 0,{f}_{\varepsilon }^{\prime }\left( x\right) \) converges to \( {f}^{\prime }\left( x\right) \) at all points \( x \) where \( {f}^{\prime }\left( x\right) \) exists. (Recall that a monotone function can have discontinuities of the first kind only and is differentiable almost everywhere.) ## V. 5 Problems Problem V.5.1. Show that the function \( f\left( t\right) = \exp t \) is neither operator monotone nor operator convex on any interval. Problem V.5.2. Let \( f\left( t\right) = \frac{{at} + b}{{ct} + d} \), where \( a, b, c, d \) are real numbers such that \( {ad} - {bc} > 0 \) . Show that \( f \) is operator monotone on every interval that does not contain the point \( \frac{-d}{c} \) . Problem V.5.3. Show that the derivative of an operator convex function need not be operator monotone. Problem V.5.4. Show that for \( r < - 1 \), the function \( f\left( t\right) = {t}^{r} \) on \( \left( {0,\infty }\right) \) is not operator convex. (Hint: The function \( {f}^{\left\lbrack 1\right\rbrack }\left( {1, t}\right) \) cannot be continued analytically to a Pick function.) 
Together with the assertion in Exercise V.2.11, this shows that on the half-line \( \left( {0,\infty }\right) \) the function \( f\left( t\right) = {t}^{r} \) is operator convex if \( - 1 \leq r \leq 0 \) or if \( 1 \leq r \leq 2 \) ; and it is not operator convex for any other real \( r \) . Problem V.5.5. A function \( g \) on \( \lbrack 0,\infty ) \) is operator convex if and only if it is of the form \[ g\left( t\right) = \alpha + {\beta t} + \gamma {t}^{2} + {\int }_{0}^{\infty }\frac{\lambda {t}^{2}}{\lambda + t}{d\mu }\left( \lambda \right) \] where \( \alpha ,\beta \) are real numbers, \( \gamma \geq 0 \), and \( \mu \) is a positive finite measure. Problem V.5.6. Let \( f \) be an operator monotone function on \( \left( {0,\infty }\right) \) . Then \( {\left( -1\right) }^{n - 1}{f}^{\left( n\right) }\left( t\right) \geq 0 \) for \( n = 1,2,\ldots \) [A function \( g \) on \( \left( {0,\infty }\right) \) is said to be completely monotone if for all \( n \geq 0,{\left( -1\right) }^{n}{g}^{\left( n\right) }\left( t\right) \geq 0 \) . There is a theorem of S.N. Bernstein that says that a function \( g \) is completely monotone if and only if there exists a positive measure \( \mu \) such that \( g\left( t\right) = \) \( {\int }_{0}^{\infty }{e}^{-{\lambda t}}{d\mu }\left( \lambda \right) \) .] The result of this problem says that the derivative of an operator monotone function on \( \left( {0,\infty }\right) \) is completely monotone. Thus, \( f \) has a Taylor expansion \( f\left( t\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{a}_{n}{\left( t - 1\right) }^{n} \), in which the coefficients \( {a}_{n} \) are positive for all odd \( n \) and negative for all even \( n \) . Problem V.5.7. Let \( f \) be a function mapping \( \left( {0,\infty }\right) \) into itself. Let \( g\left( t\right) = \) \( {\left\lbrack f\left( {t}^{-1}\right) \right\rbrack }^{-1} \) . Show that if \( f \) is operator monotone, then \( g \) is also operator monotone. 
If \( f \) is operator convex and \( f\left( 0\right) = 0 \), then \( g \) is operator convex.

Problem V.5.8. Show that the function \( f\left( \zeta \right) = - \cot \zeta \) is a Pick function. Show that in its canonical representation (V.42), \( \alpha = \beta = 0 \) and the measure \( \mu \) is atomic with mass 1 at the points \( {n\pi } \) for every integer \( n \). Thus, we have the familiar series expansion
\[ - \cot \zeta = \mathop{\sum }\limits_{{n = - \infty }}^{\infty }\left\lbrack {\frac{1}{{n\pi } - \zeta } - \frac{n\pi }{{n}^{2}{\pi }^{2} + 1}}\right\rbrack . \]

Problem V.5.9. The aim of this problem is to show
that if a Pick function \( \varphi \) satisfies the growth restriction
\[ \mathop{\sup }\limits_{{\eta > 0}}\left| {{\eta \varphi }\left( {i\eta }\right) }\right| < \infty, \]
(V.58)
then its representation (V.42) takes the simple form
\[ \varphi \left( \zeta \right) = {\int }_{-\infty }^{\infty }\frac{1}{\lambda - \zeta }{d\mu }\left( \lambda \right), \]
(V.59)
where \( \mu \) is a finite measure. To see this, start with the representation (V.41). The condition (V.58) implies the existence of a constant \( M \) that bounds, for all \( \eta > 0 \), the quantity \( \left| {{\eta \varphi }\left( {i\eta }\right) }\right| \), and hence also its real and imaginary parts.
This gives two inequalities: \[ \left| {{\alpha \eta } + {\int }_{-\infty }^{\infty }\frac{\eta \left( {1 - {\eta }^{2}}\right) \lambda }{{\lambda }^{2} + {\eta }^{2}}{d\nu }\left( \lambda \right) }\right| \leq M \] \[ \left| {\beta {\eta }^{2} + {\eta }^{2}{\int }_{-\infty }^{\infty }\frac{1 + {\lambda }^{2}}{{\lambda }^{2} + {\eta }^{2}}{d\nu }\left( \lambda \right) }\right| \leq M \]
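One way to finish the argument, sketched under our reading of (V.41) as the Nevanlinna form \( \varphi \left( \zeta \right) = \alpha + {\beta \zeta } + \int \frac{1 + {\lambda \zeta }}{\lambda - \zeta }{d\nu }\left( \lambda \right) \) with \( \nu \) finite (a reconstruction, not the book's worked solution):

```latex
% In the second inequality both terms are nonnegative ($\beta \geq 0$ for a
% Pick function) and $\eta^{2}/(\lambda^{2}+\eta^{2})$ increases to $1$ as
% $\eta \to \infty$, so by monotone convergence
\beta = 0, \qquad
\int_{-\infty}^{\infty} (1+\lambda^{2})\, d\nu(\lambda) \leq M,
% and $d\mu(\lambda) := (1+\lambda^{2})\, d\nu(\lambda)$ is a finite measure.
% Since $(1+\lambda\zeta)/(\lambda-\zeta)
%        = (1+\lambda^{2})/(\lambda-\zeta) - \lambda$, we get
\varphi(\zeta) = \alpha' + \int_{-\infty}^{\infty}
  \frac{d\mu(\lambda)}{\lambda-\zeta},
\qquad
\alpha' := \alpha - \int_{-\infty}^{\infty} \lambda\, d\nu(\lambda).
% In the first inequality the integral term of
% $\eta\,\operatorname{Re}\varphi(i\eta)$ is bounded
% (as $|\eta\lambda|/(\lambda^{2}+\eta^{2}) \leq 1/2$),
% so boundedness forces $\alpha' = 0$, which gives (V.59).
```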