rcise. The preceding proposition enables us to prove a result that is used later (in Theorem (6.1.18)) to establish the uniqueness of the representation of certain continuous linear functions. (1.5.19) Proposition. Let \( \alpha \) be a function of bounded variation on \( I = \) \( \left\lbrack {a, b}\right\rbrack \), and let \( D \) be the set consisting of \( a, b \), and all points of \( \left( {a, b}\right) \) at which \( \alpha \) is discontinuous. Then \( {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) = 0 \) for each continuous function \( f : I \rightarrow \mathbf{R} \) if and only if \( \alpha \left( x\right) = \alpha \left( a\right) \) for all \( x \in I \smallsetminus D \) . Proof. Note that \( D \) is countable, by Exercise (1.5.15:5). Suppose first that \( {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) = 0 \) for each continuous \( f \) on \( I \), and consider any point \( \xi \in I \smallsetminus D \) . There exist arbitrarily small positive numbers \( t \in I \smallsetminus D \) such that \( \xi < b - t \) . For such \( t \) let \( f \) be the continuous function that equals 1 on the interval \( \left\lbrack {a,\xi }\right\rbrack \), equals 0 on \( \left\lbrack {\xi + t, b}\right\rbrack \), and is linear on \( \left\lbrack {\xi ,\xi + t}\right\rbrack \) . Referring to Exercises (1.5.16:5 and 3), we obtain the estimate \[ 0 = {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) \] \[ = {\int }_{a}^{\xi }f\left( x\right) \mathrm{d}\alpha \left( x\right) + {\int }_{\xi }^{\xi + t}f\left( x\right) \mathrm{d}\alpha \left( x\right) + {\int }_{\xi + t}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) \] \[ \leq \alpha \left( \xi \right) - \alpha \left( a\right) + {T}_{\alpha }\left( {\xi ,\xi + t}\right) + 0. \] Letting \( t \) tend to 0 and using Proposition (1.5.18), we see that \( \alpha \left( \xi \right) = \alpha \left( a\right) \) . Now suppose, conversely, that \( \alpha \left( x\right) = \alpha \left( a\right) \) for all \( x \in I \smallsetminus D \), and let \( f \) be any continuous real-valued function on \( I \) . Given \( \varepsilon > 0 \), choose \( \delta > 0 \) as in the definition of \( {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) \) . In view of Exercise (1.5.15:5), we can choose a partition \( P \), with mesh less than \( \delta \), consisting of \( a, b \), and points of \( I \smallsetminus D \) . For any Riemann-Stieltjes sum \( \sum \) for \( f \) corresponding to \( P \) and \( \alpha \) , we then have \( \sum = 0 \) and therefore \[ \left| {{\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) }\right| = \left| {\sum - {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) }\right| < \varepsilon . \] Since \( \varepsilon > 0 \) is arbitrary, it follows that \( {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) = 0 \) . ## (1.5.20) Exercises .1 Complete the proof of Proposition (1.5.18). .2 Let \( \alpha \) be of bounded variation on \( I = \left\lbrack {a, b}\right\rbrack \) . Prove that \( {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) = \) 0 for each continuous \( f : I \rightarrow \mathbf{R} \) if and only if \( \alpha \left( x\right) = \alpha \left( a\right) \) for all \( x \) in a dense subset of \( I \) that includes \( b \) . We bring our treatment of the Riemann and Riemann-Stieltjes integrals to an end here. 
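Before moving on, here is a quick illustration of Proposition (1.5.19) (an example added in this edit, not taken from the original text). Let \( a < c < b \) and define \( \alpha \left( x\right) = 0 \) for \( a \leq x < c \) and \( \alpha \left( x\right) = 1 \) for \( c \leq x \leq b \) . Then \( \alpha \) has bounded variation, \( D = \{ a, b, c\} \), and \( \alpha \left( x\right) \neq \alpha \left( a\right) \) for every \( x \in \left( {c, b}\right) \subset I \smallsetminus D \) ; correspondingly, for continuous \( f : I \rightarrow \mathbf{R} \) one computes \[ {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) = f\left( c\right) , \] which is nonzero whenever \( f\left( c\right) \neq 0 \) . If instead \( \alpha \) is taken to be 0 everywhere except at the single point \( c \), then \( \alpha \left( x\right) = \alpha \left( a\right) \) for all \( x \in I \smallsetminus D \), and the proposition guarantees that \( {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) = 0 \) for every continuous \( f \) .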
In the next chapter we develop a type of integral, based on a generalisation of the integral as an antiderivative, that is much more powerful than the Riemann integral, and for which it is possible to construct (although we do not do so) a related generalisation analogous to that of Riemann-Stieltjes. For more on Riemann and Riemann-Stieltjes integration see [17], [42], or \( \left\lbrack {50}\right\rbrack \) . 2 Differentiation and the Lebesgue Integral More matter with less art. HAMLET, Act 2, Scene 2 In the first section of this chapter we show how the ideas of Chapter 1 can be applied in a theory of the length of a subset of \( \mathbf{R} \) ; this leads to the Vitali Covering Theorem, a result with many interesting applications in the theory of differentiation and integration. Building on that material, in the next two sections we describe F. Riesz's development of Lebesgue integration as the inverse process to differentiation "almost everywhere". ## 2.1 Outer Measure and Vitali's Covering Theorem Can we assign to a subset \( A \) of \( \mathbf{R} \) a measure of its length? We have already done this when \( A \) is a bounded interval; but what about a more general set \( A \) ? The answer lies in measure theory, a subject that was pioneered by Lebesgue, Borel, and others at the beginning of this century and which has proved of immense importance in analysis, probability theory, and many other areas of mathematics. The outer measure of \( A \) is the quantity \[ {\mu }^{ * }\left( A\right) = \inf \left\{ {\mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| : {\left( {I}_{n}\right) }_{n = 1}^{\infty }}\right. \text{is a cover of}A \] \[ \text{by bounded open intervals}\} \text{,} \] which we take as \( \infty \) if the set on the right-hand side is unbounded. \( {}^{1} \) If \( {\mu }^{ * }\left( A\right) \in \mathbf{R} \), we say that \( A \) has finite outer measure. Note that since, for any sequence \( \left( {I}_{n}\right) \) of bounded open intervals that covers \( A \), the terms of the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| \) are all positive, the (possibly infinite) sum of the series does not depend on the order of those terms; this is an immediate consequence of Exercise (1.2.17:1). If \( A \) has outer measure zero, then we say that \( A \) is a set of measure zero, or that \( A \) has measure zero. Thus \( A \) has measure zero if and only if for each \( \varepsilon > 0 \) there exists a sequence \( {\left( {I}_{n}\right) }_{n = 1}^{\infty } \) of bounded open intervals such that \( A \subset \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{I}_{n} \) and \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| < \varepsilon \) . ## (2.1.1) Exercises . 1 Show that for each \( A \subset \mathbf{R},{\mu }^{ * }\left( A\right) \) is the infimum of \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| \) taken over all covers of \( A \) by sequences \( {\left( {I}_{n}\right) }_{n = 1}^{\infty } \) of bounded, but not necessarily open, intervals. .2 Prove that if a subset \( A \) of \( \mathbf{R} \) has finite outer measure, then for each \( \varepsilon > 0 \) there exists a sequence \( \left( {I}_{n}\right) \) of disjoint bounded open intervals such that \( A \subset \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{I}_{n} \) and \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| < {\mu }^{ * }\left( A\right) + \varepsilon \) . (Use Proposition (1.3.6).) 
.3 Show that \( {\mu }^{ * }\left( \varnothing \right) = 0 \), and that if \( A \subset B \), then \( {\mu }^{ * }\left( A\right) \leq {\mu }^{ * }\left( B\right) \) . .4 Prove that for each \( a \in \mathbf{R},{\mu }^{ * }\left( {\{ a\} }\right) = 0 \) . .5 Let \( A \) be a subset of \( \mathbf{R} \), and \( E \subset A \) a set of measure zero. Show that \( {\mu }^{ * }\left( {A \smallsetminus E}\right) = {\mu }^{ * }\left( A\right) \) .6 Let \( A \) be a subset of a compact interval \( I \) . Prove that \( {\mu }^{ * }\left( A\right) + \) \( {\mu }^{ * }\left( {I \smallsetminus A}\right) \geq \left| I\right| \) . (It follows from results towards the end of Section 3 of this chapter that, perhaps surprisingly, we cannot replace inequality by equality in this result.) .7 Let \( \left( {A}_{n}\right) \) be a sequence of subsets of \( \mathbf{R} \) . Show that \[ {\mu }^{ * }\left( {\mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n}}\right) \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }{\mu }^{ * }\left( {A}_{n}\right) \] where the right-hand side is taken as \( \infty \) if either any of its terms is \( \infty \) or the series diverges. (If one of the sets \( {A}_{n} \) has infinite outer measure, then the inequality is trivial. If each \( {A}_{n} \) has finite outer measure, then for each positive integer \( n \) and each \( \varepsilon > 0 \) there exists a sequence --- \( {}^{1} \) In Section 1 of Chapter 3 we give a precise meaning to this use of \( \infty \) as an "extended real number". --- \( {\left( {I}_{n, k}\right) }_{k = 1}^{\infty } \) of bounded open intervals such that \( {A}_{n} \subset \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{I}_{n, k} \) and \( \left. {\mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {I}_{n, k}\right| < {\mu }^{ * }\left( {A}_{n}\right) + {2}^{-n}\varepsilon \text{. }}\right) \) Prove that if also the sets \( {A}_{n} \) are pairwise-disjoint, then \[ {\mu }^{ * }\left( {\mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n}}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\mu }^{ * }\left( {A}_{n}\right) \] .8 Give two proofs that a countable subset of \( \mathbf{R} \) has measure zero. Hence prove that \( \mathbf{R} \) is uncountable. .9 Give two proofs that the union of a sequence of sets of measure zero has measure zero. .10 Prove that a subset \( A \) of \( \mathbf{R} \) has finite outer measure if and only if \( l = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mu }^{ * }\left( {A \cap \left\lbrack {-n, n}\right\rbrack }\right) \) exists, in which case \( {\mu }^{ * }\left( A\right) = l \) . .11 Prove that \( {\mu }^{ * } \) is translation invariant—that is, \( {\mu }^{ * }\left( {A + t}\right) = {\mu }^{ * }\left( A\right) \) for each \( A \subset \mathbf{R} \) and each \( t \in \mathbf{R} \), where \( A + t = \{ x + t : x \in A\} \) . (2.1.2) Proposition. The outer measure of any interval in \( \mathbf{R} \) equals the length of the interval. Proof. Consider, to begin with, a bounded closed interval \( \left\lb
rack {a, b}\right\rbrack \) . For each \( \varepsilon > 0 \) we have \( \left\lbrack {a, b}\right\rbrack \subset \left( {a - \varepsilon, b + \varepsilon }\right) \) and therefore \[ {\mu }^{ * }\left( \left\lbrack {a, b}\right\rbrack \right) \leq \left| \left( {a - \varepsilon, b + \varepsilon }\right) \right| = b - a + {2\varepsilon }. \] Since \( \varepsilon > 0 \) is arbitrary, we conclude that \( {\mu }^{ * }\left( \left\lbrack {a, b}\right\rbrack \right) \leq b - a \) . To prove the reverse inequality, let \( \left( {I}_{n}\right) \) be any sequence of bounded open intervals that covers \( \left\lbrack {a, b}\right\rbrack \) . Applying the Heine-Borel-Lebesgue Theorem (1.4.6), and reindexing the terms \( {I}_{n} \) (which we can do without loss of generality), we may assume that for some \( N \) , \[ \left\lbrack {a, b}\right\rbrack \subset {I}_{1} \cup {I}_{2} \cup \cdots \cup {I}_{N} \] There exists an interval \( {I}_{{k}_{1}} \), where \( 1 \leq {k}_{1} \leq N \), that contains \( a \) ; let this interval be \( \left( {{a}_{1},{b}_{1}}\right) \) . Either \( b < {b}_{1} \), in which case we stop the procedure, or else \( {b}_{1} \leq b \) . In the latter case, \( {b}_{1} \in \left\lbrack {a, b}\right\rbrack \smallsetminus \left( {{a}_{1},{b}_{1}}\right) \) ; so there exists an interval \( {I}_{{k}_{2}} \), where \( 1 \leq {k}_{2} \leq N \) and \( {k}_{2} \neq {k}_{1} \), that contains \( {b}_{1} \) ; call this interval \( \left( {{a}_{2},{b}_{2}}\right) \) . Repeating this argument, we obtain intervals \( \left( {{a}_{1},{b}_{1}}\right) ,\left( {{a}_{2},{b}_{2}}\right) ,\ldots \) in the collection \( \left\{ {{I}_{1},\ldots ,{I}_{N}}\right\} \) such that for each \( i,{a}_{i} < {b}_{i - 1} < {b}_{i} \) . This procedure must terminate with the construction of \( \left( {{a}_{j},{b}_{j}}\right) \) for some \( j \leq N \) .
Then \( b \in \left( {{a}_{j},{b}_{j}}\right) \), so \[ \mathop{\sum }\limits_{{n = 1}}^{N}\left| {I}_{n}\right| \geq \mathop{\sum }\limits_{{i = 1}}^{j}\left( {{b}_{i} - {a}_{i}}\right) \] \[ = {b}_{j} - \left( {{a}_{j} - {b}_{j - 1}}\right) - \left( {{a}_{j - 1} - {b}_{j - 2}}\right) \] \[ - \cdots - \left( {{a}_{2} - {b}_{1}}\right) - {a}_{1} \] \[ > {b}_{j} - {a}_{1}\text{.} \] It follows that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| > b - a \) and therefore, since \( \left( {I}_{n}\right) \) was any sequence of bounded open intervals covering \( \left\lbrack {a, b}\right\rbrack \), that \( {\mu }^{ * }\left( \left\lbrack {a, b}\right\rbrack \right) \geq b - a \) . Coupled with the reverse inequality already established, this proves that \( {\mu }^{ * }\left( \left\lbrack {a, b}\right\rbrack \right) = \) \( b - a \) . The proof for other types of interval is left as the next exercise. ## (2.1.3) Exercises .1 Complete the proof of Proposition (2.1.2) in the remaining cases. .2 Let \( \left\{ {{I}_{1},\ldots ,{I}_{N}}\right\} \) be a finite set of bounded open intervals covering \( \mathbf{Q} \cap \left\lbrack {0,1}\right\rbrack \) . Prove that \( \mathop{\sum }\limits_{{n = 1}}^{N}\left| {I}_{n}\right| \geq 1 \) . (Given \( \varepsilon > 0 \), extend each \( {I}_{n} \), if necessary, to ensure that it has rational endpoints and that the total length of the intervals is increased by at most \( \varepsilon \) . Then argue as in the proof of Proposition (2.1.2).) .3 Let \( X \) be a subset of \( \mathbf{R} \) with finite outer measure. Prove that for each \( \varepsilon > 0 \) there exists an open set \( A \supset X \) with finite outer measure, such that \( {\mu }^{ * }\left( A\right) < {\mu }^{ * }\left( X\right) + \varepsilon \) . (Use Exercise (2.1.1:2).) Show that if \( X \) is also bounded, then we can choose \( A \) to be bounded. Let \( X \) be a subset of \( \mathbf{R} \), and \( \mathcal{V} \) a family of nondegenerate intervals - that is, intervals each having positive length. We say that \( \mathcal{V} \) is a Vitali covering of \( X \) if for each \( \varepsilon > 0 \) and each \( x \in X \) there exists \( I \in \mathcal{V} \) such that \( x \in I \) and \( \left| I\right| < \varepsilon \) . (2.1.4) The Vitali Covering Theorem. Let \( \mathcal{V} \) be a Vitali covering of a set \( X \subset \mathbf{R} \) with finite outer measure. Then for each \( \varepsilon > 0 \) there exists a finite set \( \left\{ {{I}_{1},\ldots ,{I}_{N}}\right\} \) of pairwise-disjoint intervals in \( \mathcal{V} \) such that \[ {\mu }^{ * }\left( {X \smallsetminus \mathop{\bigcup }\limits_{{n = 1}}^{N}{I}_{n}}\right) < \varepsilon \] We postpone the proof of this very useful theorem until we have dealt with some auxiliary exercises. ## (2.1.5) Exercises .1 Let \( \mathcal{V} \) be a Vitali covering of a subset \( X \) of \( \mathbf{R}, x \) a point of \( X \), and \( A \) an open subset of \( \mathbf{R} \) containing \( X \) . Show that for each \( \varepsilon > 0 \) there exists \( I \in \mathcal{V} \) such that \( x \in I, I \subset A \), and \( \left| I\right| < \varepsilon \) . .2 Let \( {I}_{1},\ldots ,{I}_{N} \) be finitely many closed intervals belonging to a Vitali covering \( \mathcal{V} \), of a subset \( X \) of \( \mathbf{R} \) with finite outer measure, and let \( x \in \) \( X \smallsetminus \mathop{\bigcup }\limits_{{n = 1}}^{N}{I}_{n} \) . 
Show that for each \( \varepsilon > 0 \) there exists \( I \in \mathcal{V} \) such that \( x \in I,\left| I\right| < \varepsilon \), and \( I \) is disjoint from \( \mathop{\bigcup }\limits_{{n = 1}}^{N}{I}_{n} \) . Proof of the Vitali Covering Theorem. If necessary replacing the intervals in \( I \) by their closures, we may assume that \( \mathcal{V} \) consists of closed intervals. Referring to Exercise (2.1.3: 3), choose an open set \( A \supset X \) with finite outer measure. In view of Exercise (2.1.5: 1), we may assume without loss of generality that \[ I \subset A\text{for each}I \in \mathcal{V}\text{.} \] (1) Choosing any interval \( {I}_{1} \) in the covering \( \mathcal{V} \), we construct pairwise-disjoint intervals \( {I}_{1},{I}_{2},\ldots \) in \( \mathcal{V} \) inductively as follows. Assume that we have constructed \( {I}_{1},\ldots ,{I}_{n} \) in \( \mathcal{V} \) . If \( X \subset \mathop{\bigcup }\limits_{{k = 1}}^{n}{I}_{k} \), then \( {\mu }^{ * }\left( {X \smallsetminus \mathop{\bigcup }\limits_{{k = 1}}^{n}{I}_{k}}\right) = 0 \) and we stop the construction. If \( X \) is not contained in \( \mathop{\bigcup }\limits_{{k = 1}}^{n}{I}_{k} \), then Exercise (2.1.5: 2) shows that the set \[ {S}_{n} = \left\{ {\left| I\right| : I \in \mathcal{V}, I \cap \mathop{\bigcup }\limits_{{k = 1}}^{n}{I}_{k} = \varnothing }\right\} \] is nonempty. Since, by \( \left( 1\right) ,{S}_{n} \) is bounded above by \( {\mu }^{ * }\left( A\right) \), it follows that \[ {s}_{n} = \sup {S}_{n} \] exists; moreover, as each \( I \in \mathcal{V} \) is nondegenerate, \( {s}_{n} > 0 \) . To complete our inductive construction, we now choose \( {I}_{n + 1} \in \mathcal{V} \) such that \( {I}_{n + 1} \cap \mathop{\bigcup }\limits_{{k = 1}}^{n}{I}_{k} = \) \( \varnothing \) and \( \left| {I}_{n + 1}\right| > \frac{1}{2}{s}_{n} \) . We may assume that this construction leads to an infinite sequence \( {\left( {I}_{n}\right) }_{n = 1}^{\infty } \) of pairwise-disjoint elements of \( \mathcal{V} \) . Since the partial sums of the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| \) are bounded by \( {\mu }^{ * }\left( A\right) \), the monotone sequence principle (Proposition (1.2.4)) ensures that the series converges. Given \( \varepsilon > 0 \), we can therefore find \( N \) such that \[ \mathop{\sum }\limits_{{n = N + 1}}^{\infty }\left| {I}_{n}\right| < \frac{\varepsilon }{5} \] For each \( n > N \) let \( {x}_{n} \) be the midpoint of \( {I}_{n} \), and let \( {J}_{n} \) be the closed interval with midpoint \( {x}_{n} \) and length \( 5\left| {I}_{n}\right| \) . It suffices to prove that \[ X \smallsetminus \mathop{\bigcup }\limits_{{n = 1}}^{N}{I}_{n} \subset \mathop{\bigcup }\limits_{{n = N + 1}}^{\infty }{J}_{n} \] (2) For then \[ {\mu }^{ * }\left( {X \smallsetminus \mathop{\bigcup }\limits_{{n = 1}}^{N}{I}_{n}}\right) \leq \mathop{\sum }\limits_{{n = N + 1}}^{\infty }\left| {J}_{n}\right| = 5\mathop{\sum }\limits_{{n = N + 1}}^{\infty }\left| {I}_{n}\right| < \varepsilon . \] To prove (2), consider any \( x \in X \smallsetminus \mathop{\bigcup }\limits_{{n = 1}}^{N}{I}_{n} \) . By Exercise (2.1.5:2), there exists \( I \in \mathcal{V} \) such that \( x \in I \) and \( I \cap \mathop{\bigcup }\limits_{{n = 1}}^{\bar{N}}{I}_{n} = \varnothing \) . We claim that \( I \cap {I}_{m} \) is nonempty for some \( m > N \) . 
If this were not the case, then for each \( m \) we would have \( I \cap \mathop{\bigcup }\limits_{{n = 1}}^{m}{I}_{n} = \varnothing \) and therefore \( \left| I\right| \leq {s}_{m} < 2\left| {I}_{m + 1}\right| \) ; since \( \mathop{\lim }\limits_{{m \rightarrow \infty }}\left| {I}_{m}\right| = 0 \) (by Exercise (1.2.14:1)), it would follow that \( \left| I\right| = 0 \) , which is absurd as \( \mathcal{V} \) contains only nondegenerate intervals. Thus \[ \nu = \min \left\{ {m > N : I \cap {I}_{m} \neq \varnothing }\right\} \] is well defined, \( I \cap \mathop{\bigcup }\limits_{{n = 1}}^{{\nu - 1}}{I}_{n} = \varnothing \), and therefore \( \left| I\right| \leq {s}_{\nu - 1} < 2\left| {I}_{\nu }\right| \) . Since \( x \in I \) and \( I \cap {I}_{\nu } \neq \varnothing \), we see that \[ \left| {x - {x}_{\nu }}\right| \leq \left| I\right| + \frac{1}{2}\left| {I}_{\nu }\right| < 2\left| {I}_{\nu }\right| + \frac{1}{2}\left| {I}_{\nu }\right| = \frac{5}{2}\left| {I}_{\nu }\right| . \] Hence \( x \in {J}_{\nu } \) . This establishes (2) and completes the proof. In the remainder of this section we apply the Vitali Covering Theorem in the proofs of some fundamental results in the theory of differentiation and integration. Let \( I \) be an interval in \( \mathbf{R} \) . We say that a mapping \( f : I \rightarrow \mathbf{R} \) is absolutely continuous if for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that if \( {\left( \left\lbrack {a}_{k},{b}_{k}\right\rbrack \right) }_{k = 1}^{n} \) is a finite family of nonoverlapping \( {}^{2} \) compact subintervals of \( I \) such that \( \mathop{\sum }\limits_{{k = 1}}^{n}\left( {{b}_{k} - {a}_{k}}\right) < \delta \), then \( \mathop{\sum }\limits_{{k = 1}}^{n}\left| {f\left( {b}_{k}\right) - f\left( {a}_{k}\right) }\right| < \varepsilon \) . ## (2.1.6) Exercises . 1 Prove that an absolutely continuous function on \( I \) is both uniformly continuous and bounded. .2 Let \( f, g \) be absolutely continuous functions on \( I \) . Prove that the functions \( f + g, f - g,{\lambda f} \) (where \( \lambda \in \mathbf{R} \) ), and \( {fg} \) are absolutely continuous, and that if \( \inf \{ \left| {f\left( x\right) }\right| : x \in I\} > 0 \), then \( 1/f \) is absolutely continuous. .3 Prove that if \( f \) is differentiable, with bounded derivative, on an interval \( I \), then \( f \) is absolutely continuous. .4 Let \( f \) be absolutely continuous on a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) . Prove that \( f \) has bounded variation in \( I \), that the variation function \( {T}_{f}\left( {a, \cdot }\right) \) is absolutely continuous on \( I \), and that \( f \) is the difference of two absolutely continuous, increasing functions on \( I \) . (See Exercises (1.5.15: 1 and 2).)
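As a concrete instance of the definition of absolute continuity (an illustration added in this edit, not part of the original text): if \( f \) satisfies a Lipschitz condition \( \left| {f\left( x\right) - f\left( y\right) }\right| \leq M\left| {x - y}\right| \) on \( I \) for some \( M > 0 \), then for a given \( \varepsilon > 0 \) the choice \( \delta = \varepsilon /M \) works, since for any finite family of nonoverlapping compact subintervals \( \left\lbrack {{a}_{k},{b}_{k}}\right\rbrack \) of \( I \) with \( \mathop{\sum }\limits_{{k = 1}}^{n}\left( {{b}_{k} - {a}_{k}}\right) < \delta \) we have \[ \mathop{\sum }\limits_{{k = 1}}^{n}\left| {f\left( {b}_{k}\right) - f\left( {a}_{k}\right) }\right| \leq M\mathop{\sum }\limits_{{k = 1}}^{n}\left( {{b}_{k} - {a}_{k}}\right) < {M\delta } = \varepsilon . \] Thus every Lipschitz function on \( I \) is absolutely continuous; compare Exercise (2.1.6:3).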
--- \( {}^{2} \) Two intervals in \( \mathbf{R} \) are nonoverlapping if their intersection is either empty or contains only endpoints of the intervals. --- Let \( S \) be a subset of \( \mathbf{R} \), and \( P\left( x\right) \) a statement about real numbers \( x \) . If there exists a set \( E \) of measure zero such that \( P\left( x\right) \) holds for all \( x \) in \( S \smallsetminus E \) , then we say that \( P\left( x\right) \) holds almost everywhere on \( S \), or, more loosely, that \( P \) holds almost everywhere on \( S \) ; in the case \( S = \mathbf{R} \) we say simply that \( P\left( x\right) \), or \( P \), holds almost everywhere. A simple corollary of the Mean Value Theorem (Exercise (1.5.4: 6)), one that suffices for many applications, states that if \( f \) is continuous on \( \left\lbrack {a, b}\right\rbrack \) and \( \left| {{f}^{\prime }\left( x\right) }\right| \leq M \) for all \( x \in \left( {a, b}\right) \), then \( \left| {f\left( b\right) - f\left( a\right) }\right| \leq M\left( {b - a}\right) \) . Our first application of the Vitali Covering Theorem generalises this corollary, and can be regarded an extension of the Mean Value Theorem itself. (2.1.7) Proposition. Let \( f \) be an absolutely continuous mapping of a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) into \( \mathbf{R} \), and \( F \) a differentiable increasing mapping of \( I \) into \( \mathbf{R} \) such that \( \left| {{f}^{\prime }\left( x\right) }\right| \leq {F}^{\prime }\left( x\right) \) almost everywhere on \( I \) . Then \[ \left| {f\left( b\right) - f\left( a\right) }\right| \leq F\left( b\right) - F\left( a\right) . \] (3) Proof. Let \( E \subset I \) be a set of measure zero such that \( \left| {{f}^{\prime }\left( x\right) }\right| \leq {F}^{\prime }\left( x\right) \) for each \( x \in X = I \smallsetminus E \) . We may assume without loss of generality that \( a, b \in E \) . Given \( \varepsilon > 0 \), choose \( \delta > 0 \) as in the definition of absolute continuity. For each \( x \in X \) there exist arbitrarily small \( r > 0 \) such that \( \left\lbrack {x, x + r}\right\rbrack \subset \left( {a, b}\right) \) , \[ \left| {f\left( {x + r}\right) - f\left( x\right) - {f}^{\prime }\left( x\right) r}\right| < {\varepsilon r} \] \[ \left| {F\left( {x + r}\right) - F\left( x\right) - {F}^{\prime }\left( x\right) r}\right| < {\varepsilon r} \] and therefore \[ \left| {f\left( {x + r}\right) - f\left( x\right) }\right| \leq \left| {{f}^{\prime }\left( x\right) }\right| r + {\varepsilon r} \] \[ \leq {F}^{\prime }\left( x\right) r + {\varepsilon r} \] \[ \leq F\left( {x + r}\right) - F\left( x\right) + {2\varepsilon r}. \] The sets of the form \( \left\lbrack {x, x + r}\right\rbrack \), for such \( r > 0 \), form a Vitali covering of \( X \) . By the Vitali Covering Theorem, there exists a finite, pairwise-disjoint collection \( {\left( \left\lbrack {x}_{k},{x}_{k} + {r}_{k}\right\rbrack \right) }_{k = 1}^{N} \) of sets of this type such that \[ {\mu }^{ * }\left( {X \smallsetminus \mathop{\bigcup }\limits_{{k = 1}}^{N}\left\lbrack {{x}_{k},{x}_{k} + {r}_{k}}\right\rbrack }\right) < \delta . \] We may assume that \( {x}_{k} + {r}_{k} < {x}_{k + 1} \) for \( 1 \leq k \leq N - 1 \) . 
Thus \[ {x}_{1} - a + \mathop{\sum }\limits_{{k = 1}}^{{N - 1}}\left( {{x}_{k + 1} - {x}_{k} - {r}_{k}}\right) + b - {x}_{N} - {r}_{N} < \delta , \] and therefore \[ \left| {f\left( {x}_{1}\right) - f\left( a\right) }\right| + \mathop{\sum }\limits_{{k = 1}}^{{N - 1}}\left| {f\left( {x}_{k + 1}\right) - f\left( {{x}_{k} + {r}_{k}}\right) }\right| + \left| {f\left( b\right) - f\left( {{x}_{N} + {r}_{N}}\right) }\right| < \varepsilon . \] It follows that \[ \left| {f\left( b\right) - f\left( a\right) }\right| \leq \left| {f\left( {x}_{1}\right) - f\left( a\right) }\right| + \mathop{\sum }\limits_{{k = 1}}^{{N - 1}}\left| {f\left( {x}_{k + 1}\right) - f\left( {{x}_{k} + {r}_{k}}\right) }\right| \] \[ + \left| {f\left( b\right) - f\left( {{x}_{N} + {r}_{N}}\right) }\right| + \mathop{\sum }\limits_{{k = 1}}^{N}\left| {f\left( {{x}_{k} + {r}_{k}}\right) - f\left( {x}_{k}\right) }\right| \] \[ < \varepsilon + \mathop{\sum }\limits_{{k = 1}}^{N}\left( {F\left( {{x}_{k} + {r}_{k}}\right) - F\left( {x}_{k}\right) + {2\varepsilon }{r}_{k}}\right) \] \[ < \varepsilon + F\left( {x}_{1}\right) - F\left( a\right) + \mathop{\sum }\limits_{{k = 1}}^{{N - 1}}\left( {F\left( {x}_{k + 1}\right) - F\left( {{x}_{k} + {r}_{k}}\right) }\right) \] \[ + \mathop{\sum }\limits_{{k = 1}}^{N}\left( {F\left( {{x}_{k} + {r}_{k}}\right) - F\left( {x}_{k}\right) + {2\varepsilon }{r}_{k}}\right) \] \[ + \left( {F\left( b\right) - F\left( {{x}_{N} + {r}_{N}}\right) }\right) \] \[ = \varepsilon + F\left( b\right) - F\left( a\right) + {2\varepsilon }\mathop{\sum }\limits_{{k = 1}}^{N}{r}_{k} \] \[ < F\left( b\right) - F\left( a\right) + \varepsilon \left( {1 + {2b} - {2a}}\right) \text{.} \] Since \( \varepsilon > 0 \) is arbitrary, we conclude that (3) holds. ## (2.1.8) Exercises .1 Let \( f \) be absolutely continuous on \( I = \left\lbrack {a, b}\right\rbrack \), and suppose that for some constant \( M,\left| {f}^{\prime }\right| \leq M \) almost everywhere on \( I \) . Prove that \( \left| {f\left( b\right) - f\left( a\right) }\right| \leq M\left( {b - a}\right) . \) .2 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbf{R} \) be an absolutely continuous function such that \( {f}^{\prime }\left( x\right) = 0 \) almost everywhere on \( I = \left\lbrack {a, b}\right\rbrack \) . Give two proofs that \( f \) is a constant function. (For one proof use the Vitali Covering Theorem.) .3 Let \( f, F \) be continuous on \( I = \left\lbrack {a, b}\right\rbrack \), and suppose there exists a countable subset \( D \) of \( I \) such that \( \left| {{f}^{\prime }\left( x\right) }\right| \leq {F}^{\prime }\left( x\right) \) for all \( x \in I \smallsetminus D \) . Show that \( \left| {f\left( b\right) - f\left( a\right) }\right| \leq F\left( b\right) - F\left( a\right) \) . (We may assume that \( D \) is countably infinite. Let \( {d}_{1},{d}_{2},\ldots \) be a one-one mapping of \( {\mathbf{N}}^{ + } \) onto \( D \) . Given \( \varepsilon > 0 \), let \( X \) be the set of all points \( x \in I \) such that \[ \left| {f\left( \xi \right) - f\left( a\right) }\right| \leq F\left( \xi \right) - F\left( a\right) + \varepsilon \left( {\xi
- a + \mathop{\sum }\limits_{\left\{ n : {d}_{n} < \xi \right\} }{2}^{-n}}\right) \] for all \( \xi \in \lbrack a, x) \), and let \( s = \sup X \) . Assume that \( s < b \), and derive a contradiction.) .4 Let \( f \) be continuous on \( I = \left\lbrack {a, b}\right\rbrack \), and suppose there exists a countable subset \( D \) of \( I \) such that \( {f}^{\prime }\left( x\right) = 0 \) for all \( x \in I \smallsetminus D \) . Prove that \( f \) is constant on \( I \) . .5 Let \( C \) be the Cantor set (see Exercise (1.3.8: 11)). Show that \( \left\lbrack {0,1}\right\rbrack \smallsetminus C \) is a countable union of nonoverlapping open intervals \( {\left( {J}_{n}\right) }_{n = 1}^{\infty } \) whose lengths sum to 1, and that \( C \) has measure zero. For each \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{3}^{-n} \in C \) define \( F\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{2}^{-n - 1} \) . Show that (i) if \( x \) has two ternary expansions, then they produce the same value for \( F\left( x\right) \), so that \( F \) is a function on \( C \) ; (ii) \( F \) is a strictly increasing, continuous mapping of \( C \) onto \( \left\lbrack {0,1}\right\rbrack \) ; (iii) \( C \) is uncountable; and (iv) \( F \) extends to an increasing continuous mapping that is constant on each \( {J}_{n} \), equals 0 throughout \( ( - \infty ,0\rbrack \), and equals 1 throughout \( \lbrack 1,\infty ) \) . Prove that for each \( \delta > 0 \) there exist finitely many points \[ {a}_{1} < 0 < {b}_{1} < {a}_{2} < \cdots < {b}_{N - 1} < {a}_{N} < 1 < {b}_{N} \] of \( \left\lbrack {-1,2}\right\rbrack \) such that \( C \subset \mathop{\bigcup }\limits_{{n = 1}}^{N}\left\lbrack {{a}_{n},{b}_{n}}\right\rbrack \) , \[ \mathop{\sum }\limits_{{n = 1}}^{N}\left( {F\left( {b}_{n}\right) - F\left( {a}_{n}\right) }\right) = 1 \] and \( \mathop{\sum }\limits_{{n = 1}}^{N}\left( {{b}_{n} - {a}_{n}}\right) < \delta \) . (Thus \( F \) is increasing and continuous, but not absolutely continuous, on \( \left\lbrack {-1,2}\right\rbrack \) .) Finally, show that \( {F}^{\prime }\left( x\right) = 0 \) for all \( x \in \left\lbrack {0,1}\right\rbrack \smallsetminus C \), but \( F\left( 1\right) > F\left( 0\right) \) . The last two exercises deserve further comment. Consider a continuous function \( F \) on \( \left\lbrack {0,1}\right\rbrack \) whose derivative exists and vanishes throughout \( \left\lbrack {0,1}\right\rbrack \smallsetminus E \) .
If \( E \) is countable, then Exercise (2.1.8:4) shows that \( F \) is constant. On the other hand, Exercise (2.1.8:5) shows that if \( E \) is uncountable and of measure zero, then \( F \) need not be constant; but if, in that case, \( F \) is absolutely continuous, then it follows from Exercise (2.1.8:2) that it is constant. Although the derivative of a function \( f \) may not exist at a point \( x \in \mathbf{R} \) , one or more of the following quantities - the Dini derivates of \( f \) at \( x \) -may: \[ {D}^{ + }f\left( x\right) = \mathop{\lim }\limits_{{h \rightarrow {0}^{ + }}}\sup \frac{f\left( {x + h}\right) - f\left( x\right) }{h}, \] \[ {D}_{ + }f\left( x\right) = \mathop{\lim }\limits_{{h \rightarrow {0}^{ + }}}\inf \frac{f\left( {x + h}\right) - f\left( x\right) }{h} \] \[ {D}^{ - }f\left( x\right) = \mathop{\lim }\limits_{{h \rightarrow {0}^{ - }}}\sup \frac{f\left( {x + h}\right) - f\left( x\right) }{h}, \] \[ {D}_{ - }f\left( x\right) = \mathop{\lim }\limits_{{h \rightarrow {0}^{ - }}}\inf \frac{f\left( {x + h}\right) - f\left( x\right) }{h}. \] We consider \( {D}^{ + }f\left( x\right) \) to be undefined if - either there is no \( h > 0 \) such that \( f \) is defined throughout the interval \( \left\lbrack {x, x + h}\right\rbrack \) - or else \( \left( {f\left( {x + h}\right) - f\left( x\right) }\right) /h \) remains unbounded as \( h \rightarrow {0}^{ + } \) . Similar comments apply to the other derivates of \( f \) . ## (2.1.9) Exercises .1 Prove that \( {D}^{ + }f\left( x\right) \geq {D}_{ + }f\left( x\right) \) and \( {D}^{ - }f\left( x\right) \geq {D}_{ - }f\left( x\right) \) whenever the quantities concerned make sense. .2 Prove that \( f \) is differentiable on the right (respectively, left) at \( x \) if and only if \( {D}^{ + }f\left( x\right) = {D}_{ + }f\left( x\right) \) (respectively, \( {D}^{ - }f\left( x\right) = {D}_{ - }f\left( x\right) \) ). .3 Let \( f \) be a mapping of \( \mathbf{R} \) into \( \mathbf{R} \), and define \( g\left( x\right) = - f\left( {-x}\right) \) . Prove that for each \( x \in \mathbf{R},{D}^{ + }g\left( x\right) = {D}^{ - }f\left( {-x}\right) \) and \( {D}_{ - }g\left( x\right) = {D}_{ + }f\left( {-x}\right) \) . .4 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbf{R} \) be continuous, and suppose that one of the four derivates of \( f \) is nonnegative throughout \( \left( {a, b}\right) \) . Prove that \( f \) is an increasing function on \( \left\lbrack {a, b}\right\rbrack \) . (Show that \( x \mapsto f\left( x\right) + {\varepsilon x} \) is increasing for each \( \varepsilon > 0 \) .) .5 Consider a function \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbf{R} \), and real numbers \( r, s \) with \( r > s \) . Define \[ E = \left\{ {x \in \left( {a, b}\right) : {D}^{ + }f\left( x\right) > r > s > {D}_{ - }f\left( x\right) }\right\} . \] Let \( X \) be an open set such that \( E \subset X \) and \( {\mu }^{ * }\left( X\right) < {\mu }^{ * }\left( E\right) + \varepsilon \) (see Exercise (2.1.3:3)). Prove that the intervals of the form \( \left( {x - h, x}\right) \) such that \( x \in E, h > 0,\left\lbrack {x - h, x}\right\rbrack \subset X \), and \( f\left( x\right) - f\left( {x - h}\right) < {sh} \) form a Vitali covering of \( E \) . 
Hence prove that for each \( \varepsilon > 0 \) there exist finitely many points \( {x}_{1},\ldots ,{x}_{m} \) of \( E \), and finitely many positive numbers \( {h}_{1},\ldots ,{h}_{m} \), such that the intervals \( {J}_{i} = \left( {{x}_{i} - {h}_{i},{x}_{i}}\right) (1 \leq \) \( i \leq m \) ) form a pairwise-disjoint collection, \[ {\mu }^{ * }\left( {\mathop{\bigcup }\limits_{{i = 1}}^{m}{J}_{i}}\right) > {\mu }^{ * }\left( E\right) - \varepsilon \] and \[ \mathop{\sum }\limits_{{i = 1}}^{m}\left( {f\left( {x}_{i}\right) - f\left( {{x}_{i} - {h}_{i}}\right) }\right) < s\left( {{\mu }^{ * }\left( E\right) + \varepsilon }\right) . \] Again applying the Vitali Covering Theorem, prove that there exist finitely many points \( {y}_{1},\ldots ,{y}_{n} \) of \( E \cap \mathop{\bigcup }\limits_{{i = 1}}^{m}{J}_{i} \), and finitely many positive numbers \( {h}_{1}^{\prime },\ldots ,{h}_{n}^{\prime } \), such that \[ {y}_{k} + {h}_{k}^{\prime } < {y}_{k + 1}\;\left( {1 \leq k \leq n - 1}\right) \] for each \( k \) there exists \( i \) such that \( \left( {{y}_{k},{y}_{k} + {h}_{k}^{\prime }}\right) \subset {J}_{i} \), and \[ \mathop{\sum }\limits_{{k = 1}}^{n}\left( {f\left( {{y}_{k} + {h}_{k}^{\prime }}\right) - f\left( {y}_{k}\right) }\right) > r\left( {{\mu }^{ * }\left( E\right) - {2\varepsilon }}\right) . \] Our next theorem shows, in particular, that the differentiability of the function \( F \) can be dropped from the hypotheses of Proposition (2.1.7). (2.1.10) Theorem. An increasing function \( f : \mathbf{R} \rightarrow \mathbf{R} \) is differentiable almost everywhere. Proof. It suffices to show that the sets \[ S = \left\{ {x \in \mathbf{R} : {D}^{ + }f\left( x\right) \text{ is undefined }}\right\} \] \[ T = \left\{ {x \in \mathbf{R} : {D}^{ + }f\left( x\right) > {D}_{ - }f\left( x\right) }\right\} \] have measure zero. For, applying this and Exercise (2.1.9: 3) to the increasing function \( x \mapsto - f\left( {-x}\right) \), we then see that \( {D}^{ - }f\left( x\right) \leq {D}_{ + }f\left( x\right) \) almost everywhere; whence, by Exercises (2.1.9: 1) and (2.1.1:9), \[ {D}^{ + }f\left( x\right) \leq {D}_{ - }f\left( x\right) \leq {D}^{ - }f\left( x\right) \leq {D}_{ + }f\left( x\right) \leq {D}^{ + }f\left( x\right) \in \mathbf{R} \] almost everywhere. (Note that as \( f \) is increasing, \( {D}_{ + }f\left( x\right) \) and \( {D}_{ - }f\left( x\right) \) are everywhere defined and nonnegative.) Thus the four Dini derivates of \( f \) are equal almost everywhere. Reference to Exercise (2.1.9: 2) then completes the proof. Leaving \( S \) to the next set of exercises, we now show that \( T \) has measure zero. Since \( T \) is the union of a countable family of sets of the form \[ E = \left\{ {x \in \left( {a, b}\right) : {D}^{ + }f\left( x\right) > r > s > {D}_{ - }f\left( x\right) }\right\} , \] where \( a < b \) and \( r, s \) are rational numbers with \( r > s \), it is enough to prove that such a set \( E \) has measure zero. We first use Exercise (2.1.9:5) to obtain (i) finitely many points \( {x}_{1},\ldots ,{x}_{m} \) of \( \left( {a, b}\right) \), and finitely many positive numbers \( {h}_{1},\ldots ,{h}_{m} \), such
that the intervals \( {J}_{i} = \left( {{x}_{i} - {h}_{i},{x}_{i}}\right) (1 \leq \) \( i \leq m \) ) form a pairwise-disjoint collection, \[ {\mu }^{ * }\left( {\mathop{\bigcup }\limits_{{i = 1}}^{m}{J}_{i}}\right) > {\mu }^{ * }\left( E\right) - \varepsilon \] and \[ \mathop{\sum }\limits_{{i = 1}}^{m}\left( {f\left( {x}_{i}\right) - f\left( {{x}_{i} - {h}_{i}}\right) }\right) < s\left( {{\mu }^{ * }\left( E\right) + \varepsilon }\right) ; \] (ii) finitely many points \( {y}_{1},\ldots ,{y}_{n} \) of \( E \cap \mathop{\bigcup }\limits_{{i = 1}}^{m}{J}_{i} \), and finitely many positive numbers \( {h}_{1}^{\prime },\ldots ,{h}_{n}^{\prime } \), such that \[ {y}_{k} + {h}_{k}^{\prime } < {y}_{k + 1}\;\left( {1 \leq k \leq n - 1}\right) , \] (4) for each \( k \) there exists \( i \) with \( \left( {{y}_{k},{y}_{k} + {h}_{k}^{\prime }}\right) \subset {J}_{i} \), and \[ \mathop{\sum }\limits_{{k = 1}}^{n}\left( {f\left( {{y}_{k} + {h}_{k}^{\prime }}\right) - f\left( {y}_{k}\right) }\right) > r\left( {{\mu }^{ * }\left( E\right) - {2\varepsilon }}\right) . \] For each \( i \) with \( 1 \leq i \leq m \) let \[ {S}_{i} = \left\{ {k : \left( {{y}_{k},{y}_{k} + {h}_{k}^{\prime }}\right) \subset {J}_{i}}\right\} . \] Since \( f \) is increasing, it follows from (4) that \[ \mathop{\sum }\limits_{{k \in {S}_{i}}}\left( {f\left( {{y}_{k} + {h}_{k}^{\prime }}\right) - f\left( {y}_{k}\right) }\right) \leq f\left( {x}_{i}\right) - f\left( {{x}_{i} - {h}_{i}}\right) . \] Thus, as the intervals \( {J}_{i} \) are disjoint, \[ \mathop{\sum }\limits_{{i = 1}}^{m}\left( {f\left( {x}_{i}\right) - f\left( {{x}_{i} - {h}_{i}}\right) }\right) \geq \mathop{\sum }\limits_{{k = 1}}^{n}\left( {f\left( {{y}_{k} + {h}_{k}^{\prime }}\right) - f\left( {y}_{k}\right) }\right) , \] so that \[ s\left( {{\mu }^{ * }\left( E\right) + \varepsilon }\right) > r\left( {{\mu }^{ * }\left( E\right) - {2\varepsilon }}\right) . \] Since \( \varepsilon > 0 \) is arbitrary, it follows that \( s{\mu }^{ * }\left( E\right) \geq r{\mu }^{ * }\left( E\right) \) . But \( r > s \), so we must have \( {\mu }^{ * }\left( E\right) = 0 \) . We make good use of the following consequence of Theorem (2.1.10). (2.1.11) Fubini’s Series Theorem. Let \( \left( {F}_{n}\right) \) be a sequence of increasing continuous functions on \( \mathbf{R} \) such that \( F\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{F}_{n}\left( x\right) \) converges for all \( x \in \mathbf{R} \) .
Then almost everywhere, \( F \) is differentiable, \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{F}_{n}^{\prime }\left( x\right) \) converges, and \( {F}^{\prime }\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{F}_{n}^{\prime }\left( x\right) \) . Proof. Fix real numbers \( a, b \) with \( a < b \) . It suffices to prove that \( {F}^{\prime }\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{F}_{n}^{\prime }\left( x\right) \) almost everywhere on \( I = \left\lbrack {a, b}\right\rbrack \) : for then we can apply the result to the intervals \( \left\lbrack {-n, n}\right\rbrack \) as \( n \) increases through \( {\mathbf{N}}^{ + } \) . If necessary replacing \( {F}_{n} \) by \( {F}_{n} - {F}_{n}\left( a\right) \), we may assume that \( {F}_{n}\left( a\right) = 0 \) . Write \[ {s}_{n}\left( x\right) = {F}_{1}\left( x\right) + \cdots + {F}_{n}\left( x\right) \;\left( {x \in I}\right) \] and note that \( F - {s}_{n} = \mathop{\sum }\limits_{{k = n + 1}}^{\infty }{F}_{k} \) is increasing and nonnegative. By Theorem (2.1.10), \( {s}_{n} \) is differentiable on \( I \smallsetminus {A}_{n} \) for some set \( {A}_{n} \) of measure zero; likewise, \( F \) (which is clearly increasing) is differentiable on \( I \smallsetminus {A}_{0} \) for some set \( {A}_{0} \) of measure zero. Then \[ A = \mathop{\bigcup }\limits_{{n = 0}}^{\infty }{A}_{n} \] has measure zero, by Exercise (2.1.1: 9). Since both \( F - {s}_{n + 1} \) and \( {s}_{n + 1} - {s}_{n} \) are increasing functions, for each \( x \in I \smallsetminus A \) we have \[ {s}_{n}^{\prime }\left( x\right) \leq {s}_{n + 1}^{\prime }\left( x\right) \leq {F}^{\prime }\left( x\right) \] (5) It follows from the monotone sequence principle that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{F}_{n}^{\prime }\left( x\right) \) converges to a sum \( \leq {F}^{\prime }\left( x\right) \) . Now choose an increasing sequence \( {\left( {n}_{k}\right) }_{k = 1}^{\infty } \) of positive integers such that for each \( k \) , \[ 0 \leq F\left( b\right) - {s}_{{n}_{k}}\left( b\right) \leq {2}^{-k}. \] Since \( F - {s}_{{n}_{k}} \) is an increasing function, for each \( x \in I \) we obtain the inequalities \[ 0 \leq F\left( x\right) - {s}_{{n}_{k}}\left( x\right) \leq {2}^{-k}. \] Hence \( \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {F\left( x\right) - {s}_{{n}_{k}}\left( x\right) }\right) \) converges, by comparison with \( \mathop{\sum }\limits_{{k = 1}}^{\infty }{2}^{-k} \) . Applying the first part of the proof with \( {F}_{k} \) replaced by \( F - {s}_{{n}_{k}} \), we now see that, almost everywhere on \( I,\mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {{F}^{\prime }\left( x\right) - {s}_{{n}_{k}}^{\prime }\left( x\right) }\right) \) converges and therefore \[ \mathop{\lim }\limits_{{k \rightarrow \infty }}\left( {{F}^{\prime }\left( x\right) - {s}_{{n}_{k}}^{\prime }\left( x\right) }\right) = 0. \] It follows from (5) that \[ {F}^{\prime }\left( x\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{s}_{n}\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{F}_{n}^{\prime }\left( x\right) \] almost everywhere on \( I \) . ## (2.1.12) Exercises . 1 Let \( f \) be an increasing function on \( \left\lbrack {a, b}\right\rbrack \), and for each positive integer \( n \) define \[ {S}_{n} = \left\{ {x \in \left( {a, b}\right) : {D}^{ + }f\left( x\right) > n}\right\} . 
\] Prove that \[ {\mu }^{ * }\left( {I \smallsetminus {S}_{n}}\right) < {n}^{-1}\left( {f\left( b\right) - f\left( a\right) }\right) \] and hence that the set of those \( x \in \left( {a, b}\right) \) at which \( {D}^{ + }f\left( x\right) \) is undefined has measure zero. (Use the Vitali Covering Theorem to show that there exist finitely many points \( {x}_{1},{x}_{2},\ldots ,{x}_{m} \) of \( \left( {a, b}\right) \) , and positive numbers \( {h}_{1},{h}_{2},\ldots ,{h}_{m} \), such that \( {x}_{k} + {h}_{k} < {x}_{k + 1} \) and \( \left. {f\left( {{x}_{k} + {h}_{k}}\right) - f\left( {x}_{k}\right) > n{h}_{k}.}\right) \) .2 Let \( E \) be a bounded subset of \( \mathbf{R} \) that has measure zero, and let \( a \) be a lower bound for \( E \) . For each positive integer \( n \) choose a bounded open set \( {A}_{n} \supset E \) such that \( {\mu }^{ * }\left( {A}_{n}\right) < {2}^{-n} \) (this is possible by Exercise (2.1.3: 3)), and define \[ {f}_{n}\left( x\right) = \left\{ \begin{array}{ll} 0 & \text{ if }x < a \\ {\mu }^{ * }\left( {{A}_{n} \cap \left\lbrack {a, x}\right\rbrack }\right) & \text{ if }x \geq a. \end{array}\right. \] Show that (i) \( f = \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} \) is an increasing continuous function on \( \mathbf{R} \) ; (ii) \( {D}^{ + }f\left( x\right) \) is undefined for each \( x \in E \) . .3 Prove that if \( f \) has bounded variation on \( \left\lbrack {a, b}\right\rbrack \), then it is differentiable almost everywhere on \( \left\lbrack {a, b}\right\rbrack \) . (The converse is not true: see Exercise (1.5.15:4).) .4 Let \( f \) have bounded variation on \( \left\lbrack {a, b}\right\rbrack \) . Prove that \( {T}_{f}^{\prime }\left( {a, x}\right) = \left| {{f}^{\prime }\left( x\right) }\right| \) almost everywhere on \( \left\lbrack {a, b}\right\rbrack \) . (Using Lemma (1.5.17), construct a sequence \( \left( {g}_{n}\right) \) of functions on \( I \) such that for each \( n,{T}_{f}\left( {a, \cdot }\right) - {g}_{n} \) is increasing, \( 0 \leq {T}_{f}\left( {a, \cdot }\right) - {g}_{n} \leq {2}^{-n} \), and \( {g}_{n}^{\prime } = \pm {f}^{\prime } \) almost everywhere. Then use Fubini's Series Theorem.) .5 Prove that if a bounded function \( f \) is continuous almost everywhere on a compact interval \( I \), then it is Riemann integrable. (Let \( M \) be a bound for \( \left| f\right| \) on \( I \), let \( E \subset I = \left\lbrack {a, b}\right\rbrack \) be a set of measure zero such that \( f \) is continuous on \( X = I \smallsetminus E \), and let \( \varepsilon > 0 \) . We may assume that \( a, b \in E \) . For each \( x \in X \) there exist arbitrarily small \( r > 0 \) such that \( \left\lbrack {x, x + r}\right\rbrack \subset I \) and \[ \left| {f\left( {x}^{\prime }\right) - f\left( {x}^{\prime \prime }\right) }\right| < \frac{\varepsilon }{2\left( {b - a}\right) }\;\left( {x \leq {x}^{\prime } \leq {x}^{\prime \prime } \leq x + r}\right) . \] The sets \( \left\lbrack {x, x + r}\right\rbrack \) of this type form a Vitali cover of \( X \) . With the aid of the Vitali Covering Theorem,
construct a partition \( P \) of \( I \) such that \( U\left( {P, f}\right) - L\left( {P, f}\right) < \varepsilon \) .) .6 Prove the converse of the last exercise - namely, if a bounded function \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbf{R} \) is Riemann integrable, then it is continuous almost everywhere on \( \left\lbrack {a, b}\right\rbrack \) . (For each positive integer \( n \) define \[ {A}_{n} = \left\{ {x \in \left\lbrack {a, b}\right\rbrack : \omega \left( {f, x}\right) > \frac{1}{n}}\right\} \] where \( \omega \left( {f, x}\right) \) is the oscillation of \( f \) at \( x \) ; see Exercise (1.4.5:7). Given \( \varepsilon > 0 \), choose a partition \( P \) of \( \left\lbrack {a, b}\right\rbrack \) such that \( U\left( {f, P}\right) - L\left( {f, P}\right) < \) \( \varepsilon /{2n} \) . Use this to construct a finite set of intervals that cover \( {A}_{n} \) and have total length less than \( \varepsilon \) .) Re-examine Exercise (1.5.10: 6) in the light of this result. ## 2.2 The Lebesgue Integral as an Antiderivative In this section we show how Theorem (2.1.10) and Fubini's Series Theorem (2.1.11) can be used to introduce the Lebesgue integral, a very powerful extension of the Riemann integral, as an antiderivative. Our approach \( {}^{3} \) is based on a little-known development by F. Riesz [39]. Let \( f \) be a nonnegative real-valued function defined almost everywhere on \( \mathbf{R} \) . A function \( F : \mathbf{R} \rightarrow \mathbf{R} \) is called a Lebesgue primitive of \( f \) if it is increasing, bounded below, and satisfies \( {F}^{\prime } = f \) almost everywhere. In order to discuss Lebesgue primitives, we first consider the set \( {\mathcal{P}}_{f} \) of functions \( F : \mathbf{R} \rightarrow \mathbf{R} \) that are increasing, bounded below, and satisfy \( {F}^{\prime } \geq f \) almost everywhere. Note that for such a function, \[ F\left( {-\infty }\right) = \mathop{\lim }\limits_{{x \rightarrow - \infty }}F\left( x\right) \] exists: indeed, the sequence \( {\left( F\left( -n\right) \right) }_{n = 1}^{\infty } \), which is decreasing and bounded below, converges to a limit which is easily shown to be \( F\left( {-\infty }\right) \) . (2.2.1) Proposition.
If \( {\mathcal{P}}_{f} \) is nonempty, then there exists an element \( {F}_{ * } \in {\mathcal{P}}_{f} \), called an extremal element of \( {\mathcal{P}}_{f} \), such that \[ {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) \leq F\left( \eta \right) - F\left( \xi \right) \] (1) whenever \( \xi < \eta \) and \( F \in {\mathcal{P}}_{f} \) . Proof. First note that the set \[ {\mathcal{P}}_{f}^{0} = \left\{ {F \in {\mathcal{P}}_{f} : F\left( {-\infty }\right) = 0}\right\} \] --- \( {}^{3} \) It is worth comparing this with the development of the Cauchy integral in \( \left\lbrack {13}\right\rbrack \) . --- is nonempty: for if \( F \in {\mathcal{P}}_{f} \), then \( F - F\left( {-\infty }\right) \in {\mathcal{P}}_{f}^{0} \) . It is now a straightforward exercise to show that \[ {F}_{ * }\left( x\right) = \inf \left\{ {F\left( x\right) : F \in {\mathcal{P}}_{f}^{0}}\right\} \] defines an increasing function \( {F}_{ * } : \mathbf{R} \rightarrow {\mathbf{R}}^{0 + } \) with \( {F}_{ * }\left( {-\infty }\right) = 0 \) . Given \( \varepsilon > 0 \) and real numbers \( \xi ,\eta \) with \( \xi < \eta \), choose \( {F}_{1} \in {\mathcal{P}}_{f}^{0} \) such that \( {F}_{1}\left( \xi \right) < \) \( {F}_{ * }\left( \xi \right) + \varepsilon \), and consider any element \( F \) of \( {\mathcal{P}}_{f} \) . The function \( {F}_{2} \) defined by \[ {F}_{2}\left( x\right) = \left\{ \begin{array}{ll} {F}_{1}\left( x\right) & \text{ if }x \leq \xi \\ F\left( x\right) + {F}_{1}\left( \xi \right) - F\left( \xi \right) & \text{ if }\xi \leq x \end{array}\right. \] belongs to \( {\mathcal{P}}_{f}^{0} \), so \[ {F}_{ * }\left( \eta \right) \leq {F}_{2}\left( \eta \right) \] \[ = F\left( \eta \right) + {F}_{1}\left( \xi \right) - F\left( \xi \right) \] \[ < F\left( \eta \right) + {F}_{ * }\left( \xi \right) + \varepsilon - F\left( \xi \right) \] and therefore \[ {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) < F\left( \eta \right) - F\left( \xi \right) + \varepsilon . \] Since \( \varepsilon > 0 \) is arbitrary, inequality (1) follows. Now let \( N \) be a positive integer, and choose a sequence \( \left( {F}_{n}\right) \) in \( {\mathcal{P}}_{f}^{0} \) such that for each \( n \) , \[ 0 \leq {F}_{n}\left( N\right) - {F}_{ * }\left( N\right) \leq {2}^{-n}. \] For each \( x \in \left\lbrack {-N, N}\right\rbrack \) we have \[ {F}_{ * }\left( N\right) - {F}_{ * }\left( x\right) \leq {F}_{n}\left( N\right) - {F}_{n}\left( x\right) \] and therefore \[ 0 \leq {F}_{n}\left( x\right) - {F}_{ * }\left( x\right) \leq {F}_{n}\left( N\right) - {F}_{ * }\left( N\right) \leq {2}^{-n}. \] Hence the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {{F}_{n} - {F}_{ * }}\right) \) of increasing functions converges at each point of \( \left\lbrack {-N, N}\right\rbrack \), by comparison with \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n} \) . Fubini’s Series Theorem (2.1.11) now shows that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {{F}_{n}^{\prime } - {F}_{ * }^{\prime }}\right) \) converges almost everywhere on \( \left\lbrack {-N, N}\right\rbrack \) . Hence \( {F}_{n}^{\prime } - {F}_{ * }^{\prime } \rightarrow 0 \), and therefore \( {F}_{ * }^{\prime } \geq f \), almost everywhere on \( \left\lbrack {-N, N}\right\rbrack \) . Since the union of a sequence of sets of measure zero has measure zero, it follows that \( {F}_{ * }^{\prime } \geq f \) almost everywhere on \( \mathbf{R} \) . (2.2.2) Corollary. 
Under the conditions of Proposition (2.2.1), \( {F}_{1} \) is an extremal element of \( {\mathcal{P}}_{f} \) if and only if \( {F}_{ * } - {F}_{1} \) is constant on \( \mathbf{R} \) . Proof. If \( {F}_{1} \) is an extremal element of \( {\mathcal{P}}_{f} \), and \( \xi \leq \eta \), then \[ {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) \leq {F}_{1}\left( \eta \right) - {F}_{1}\left( \xi \right) \leq {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) \] and therefore \[ {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) = {F}_{1}\left( \eta \right) - {F}_{1}\left( \xi \right) \] It follows that \[ {F}_{1}\left( x\right) - {F}_{ * }\left( x\right) = {F}_{1}\left( {-\infty }\right) - {F}_{ * }\left( {-\infty }\right) \] for all \( x \in \mathbf{R} \) . The converse is trivial. (2.2.3) Corollary. Under the conditions of Proposition (2.2.1), iff has a Lebesgue primitive, then \( {F}_{ * } \) is also a Lebesgue primitive. Proof. Let \( F \) be a Lebesgue primitive of \( f \) . Then \( F \in {\mathcal{P}}_{f} \), so by Proposition (2.2.1), \[ {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) \leq F\left( \eta \right) - F\left( \xi \right) \;\left( {\xi \leq \eta }\right) . \] Since a finite union of sets of measure zero has measure zero, it follows from this inequality and Theorem (2.1.10) that \[ f \leq {F}_{ * }^{\prime } \leq {F}^{\prime } = f \] almost everywhere. Hence \( {F}_{ * }^{\prime } = f \) almost everywhere, and \( {F}_{ * } \) is a Lebesgue primitive of \( f \) . We say that a nonnegative function \( f \) defined almost everywhere on \( \mathbf{R} \) is Lebesgue integrable (or simply integrable) if there is a bounded Lebesgue primitive of \( f \) . In that case we define the Lebesgue integral (or simply the integral) of \( f \) to be \[ \int f = {F}_{ * }\left( \infty \right) - {F}_{ * }\left( {-\infty }\right) \] where \( {F}_{ * } \) is an extremal element of \( {\mathcal{P}}_{f} \) and \[ {F}_{ * }\left( \infty \right) = \mathop{\lim }\limits_{{x \rightarrow \infty }}{F}_{ * }\left( x\right) \] (The existence of \( {F}_{ * }\left( \infty \right) \) is left as an exercise.) Corollary (2.2.2) shows that the value of the integral of \( f \) does not depend on the choice of extremal element \( {F}_{ * } \) in \( {\mathcal{P}}_{f} \) . Note that \[ \int f = \mathop{\sup }\limits_{{x < y}}\left( {{F}_{ * }\left( y\right) - {F}_{ * }\left( x\right) }\right) . \] We often write \[ \int f = \int f\left( x\right) \mathrm{d}x = \int f\left( t\right) \mathrm{d}t = \cdots , \] as in elementary calculus courses. ## (2.2.4) Exercises .1 In the notation of the proof of Proposition (2.2.1), prove that \( {F}_{ * } \) is an increasing function and that \( {F}_{ * }\left( {-\infty }\right) = 0 \) . .2 Prove that if some element of \( {\mathcal{P}}_{f} \) is bounded above and \( {F}_{ * } \) is an extremal element of \( {\mathcal{P}}_{f} \), then \( {F}_{ * } \) is bounded above and \( {F}_{ * }\left( \infty \right) \) exists. .3 Let \( f \) be an integrable nonnegative function, and \( F \) a Lebesgue primitive of \( f \) . Prove that if \( F \) is absolutely continuous on each compact interval, then it is an extremal element of \( {\mathcal{P}}_{f} \)
oice of extremal element \( {F}_{ * } \) in \( {\mathcal{P}}_{f} \) . Note that \[ \int f = \mathop{\sup }\limits_{{x < y}}\left( {{F}_{ * }\left( y\right) - {F}_{ * }\left( x\right) }\right) . \] We often write \[ \int f = \int f\left( x\right) \mathrm{d}x = \int f\left( t\right) \mathrm{d}t = \cdots , \] as in elementary calculus courses. ## (2.2.4) Exercises .1 In the notation of the proof of Proposition (2.2.1), prove that \( {F}_{ * } \) is an increasing function and that \( {F}_{ * }\left( {-\infty }\right) = 0 \) . .2 Prove that if some element of \( {\mathcal{P}}_{f} \) is bounded above and \( {F}_{ * } \) is an extremal element of \( {\mathcal{P}}_{f} \), then \( {F}_{ * } \) is bounded above and \( {F}_{ * }\left( \infty \right) \) exists. .3 Let \( f \) be an integrable nonnegative function, and \( F \) a Lebesgue primitive of \( f \) . Prove that if \( F \) is absolutely continuous on each compact interval, then it is an extremal element of \( {\mathcal{P}}_{f} \) . (Use Proposition (2.1.7).) Is every bounded Lebesgue primitive of \( f \) an extremal element of \( {\mathcal{P}}_{f} \) ? .4 Show that if \( f \geq 0 \) is Lebesgue integrable, then \[ \int f = \inf \left\{ {\mathop{\sup }\limits_{{x < y}}\left( {F\left( y\right) - F\left( x\right) }\right) : F \in {\mathcal{P}}_{f}}\right\} . \] .5 Let \( f\left( x\right) \) equal a nonnegative constant \( c \) in a bounded interval \( I \), and 0 outside \( I \) . Show that \( f \) is Lebesgue integrable, with \( \int f = c\left| I\right| \) . .6 Let \( f \) be an integrable nonnegative function. Prove that \( \int f = 0 \) if and only if \( f = 0 \) almost everywhere. .7 Let \( f \) be an integrable nonnegative function such that \( \int f > 0 \) . Prove that \( f\left( x\right) > 0 \) on some set with positive outer measure. (Suppose that for all positive integers \( m \) and \( n \) , \[ {E}_{m, n} = \left\{ {x \in \left\lbrack {-m, m}\right\rbrack : f\left( x\right) > \frac{1}{n}}\right\} \] has measure zero, and use the preceding exercise to obtain a contradiction.) .8 Let \( f \) and \( g \) be integrable nonnegative functions such that \( f \geq g \) almost everywhere, and let \( F, G \) be extremal elements of \( {\mathcal{P}}_{f},{\mathcal{P}}_{g} \) , respectively. Prove that (i) \( F - G \in {\mathcal{P}}_{f - g} \), and (ii) \( f - g \) is integrable. (For (ii) note that \( {F}^{\prime } \geq g \) almost everywhere.) .9 Let \( f \) be an integrable nonnegative function, and \( F \) a Lebesgue primitive of \( f \) . Show that if \( {F}^{\prime }\left( \xi \right) = f\left( \xi \right) \) and \( {s}_{n} \leq \xi \leq {s}_{n} + {2}^{-n} \) for each \( n \), then \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{2}^{-n}{\int }_{{s}_{n}}^{{s}_{n} + {2}^{-n}}f = f\left( \xi \right) \] (Note Exercise (1.5.1: 3).) (2.2.5) Lemma. Let \( \Phi ,\Psi \), and \( \Phi - \Psi \) be increasing functions on \( \mathbf{R} \) such that \( \Phi \) is an extremal element of \( {\mathcal{P}}_{{\Phi }^{\prime }} \) and \( \Psi \) is bounded. Then \( \Psi \) is an extremal element of \( {\mathcal{P}}_{{\Psi }^{\prime }} \) . Proof. Note that \( {\Phi }^{\prime } \) and \( {\Psi }^{\prime } \) are defined almost everywhere, by Proposition (2.1.10). Hence \( \Psi \in {\mathcal{P}}_{{\Psi }^{\prime }} \) . By Proposition (2.2.1), \( {\mathcal{P}}_{{\Psi }^{\prime }} \) has an extremal element \( {\Psi }_{ * } \) . 
The function \[ \Theta = \Phi - \Psi + {\Psi }_{ * } \] is increasing, bounded below, and has derivative equal to \( {\Phi }^{\prime } \) almost everywhere; so it belongs to \( {\mathcal{P}}_{{\Phi }^{\prime }} \) . Since \( \Phi \) is an extremal element of \( {\mathcal{P}}_{{\Phi }^{\prime }} \), it follows that \( {\Psi }_{ * } - \Psi = \Theta - \Phi \) is an increasing function. But by our choice of \( {\Psi }_{ * },\Psi - {\Psi }_{ * } \) is an increasing function. It follows that \( \Psi - {\Psi }_{ * } \) is constant; whence, by Corollary (2.2.2), \( \Psi \) is an extremal element of \( {\mathcal{P}}_{{\Psi }^{\prime }} \) . (2.2.6) Proposition. If \( f, g \) are integrable nonnegative functions defined almost everywhere, and \( \lambda \geq 0 \), then \( f + g \) and \( {\lambda f} \) are integrable, \[ \int \left( {f + g}\right) = \int f + \int g \] (2) and \[ \int {\lambda f} = \lambda \int f \] Proof. Let \( {F}_{ * },{G}_{ * } \) be extremal elements of \( {\mathcal{P}}_{f},{\mathcal{P}}_{g} \), respectively. Then \( {F}_{ * } + {G}_{ * } \) is a bounded Lebesgue primitive of \( f + g \), which is therefore integrable; but there is no guarantee that \( {F}_{ * } + {G}_{ * } \) is an extremal element of \( {\mathcal{P}}_{f + g} \), so we have to work harder to establish the identity (2). To this end, let \( {H}_{ * } \) be an extremal element of \( {\mathcal{P}}_{f + g} \) . Then \( {F}_{ * } + {G}_{ * } - {H}_{ * } \) is an increasing function. On the other hand, \( {H}_{ * } \) is increasing, and \( {H}_{ * }^{\prime } = f + g \geq g \) almost everywhere; so by our choice of \( {G}_{ * },{H}_{ * } - {G}_{ * } \) is increasing. Applying Lemma (2.2.5) with \( \Phi = {H}_{ * } \) and \( \Psi = {H}_{ * } - {G}_{ * } \), we see that \( {H}_{ * } - {G}_{ * } \) is an extremal element of \( {\mathcal{P}}_{f} \) ; whence, by Corollary (2.2.2), \( {H}_{ * } - {G}_{ * } - {F}_{ * } \) has a constant value \( c \) . It follows that \[ \int \left( {f + g}\right) = {H}_{ * }\left( \infty \right) - {H}_{ * }\left( {-\infty }\right) \] \[ = \left( {{G}_{ * }\left( \infty \right) + {F}_{ * }\left( \infty \right) + c}\right) - \left( {{G}_{ * }\left( {-\infty }\right) + {F}_{ * }\left( {-\infty }\right) + c}\right) \] \[ = \left( {{F}_{ * }\left( \infty \right) - {F}_{ * }\left( {-\infty }\right) }\right) + \left( {{G}_{ * }\left( \infty \right) - {G}_{ * }\left( {-\infty }\right) }\right) \] \[ = \int f + \int g\text{. } \] It is left as an exercise to deal with \( {\lambda f} \) . (2.2.7) Proposition. If \( \left( {f}_{n}\right) \) is a sequence of integrable nonnegative functions defined almost everywhere, then \( f = \inf {f}_{n} \) is integrable. Proof. For each \( n \) choose an extremal element \( {F}_{*n} \) of \( {\mathcal{P}}_{{f}_{n}} \), and note that, by Corollary (2.2.3), \( {F}_{*n} \) is a Lebesgue primitive of \( {f}_{n} \) . Then \( {F}_{*n} \in {\mathcal{P}}_{f} \), so \( {\mathcal{P}}_{f} \) is nonempty. By Proposition (2.2.1), there exists an extremal element \( {F}_{ * } \) of \( {\mathcal{P}}_{f} \), and \( {F}_{*n} - {F}_{ * } \) is increasing; so \( {\left( {F}_{*n} - {F}_{ * }\right) }^{\prime } \geq 0 \) almost everywhere. Hence, almost everywhere, \[ {f}_{n} = {F}_{*n}^{\prime } \geq {F}_{ * }^{\prime } \geq f \] so \[ f = \inf {f}_{n} \geq {F}_{ * }^{\prime } \geq f \] and therefore \( {F}_{ * }^{\prime } = f \) . Moreover, by Exercise (2.2.4: 2), \( {F}_{*n} \), and therefore \( {F}_{ * } \), is bounded; so \( f \) is integrable. (2.2.8) Corollary. 
If \( f, g \) are integrable nonnegative functions, then so are \( f \vee g \) and \( f \land g \) . Proof. The integrability of \( f \land g \) is a special case of Proposition (2.2.7); that of \( f \vee g \) then follows from the identity \[ f \vee g = f + g - f \land g, \] Proposition (2.2.6), and Exercise (2.2.4: 8). We now extend the Lebesgue integral to functions of variable sign. We say that a real-valued function \( f \) defined almost everywhere on \( \mathbf{R} \) is (Lebesgue) integrable if there exist integrable nonnegative functions \( {f}_{1},{f}_{2} \) such that \( f = {f}_{1} - {f}_{2} \) ; we then define the (Lebesgue) integral of \( f \) to be \[ \int f = \int {f}_{1} - \int {f}_{2} \] ## (2.2.9) Exercises .1 Prove that the foregoing is a good definition -in other words, that if \( {f}_{1},{f}_{2},{f}_{3},{f}_{4} \) are integrable nonnegative functions such that \( {f}_{1} - {f}_{2} = \) \( {f}_{3} - {f}_{4} \), then \( \int {f}_{1} - \int {f}_{2} = \int {f}_{3} - \int {f}_{4} \) . Prove also that if a nonnegative function \( f \) has a bounded Lebesgue primitive, then it is integrable in the new sense, and its integrals in the old and new senses coincide. .2 Show that \( f \) is integrable if and only if \( {f}^{ + } = f \vee 0 \) and \( {f}^{ - } = \left( {-f}\right) \vee 0 \) are integrable, in which case \( \int f = \int {f}^{ + } - \int {f}^{ - } \) . (Choose integrable nonnegative functions \( {f}_{1},{f}_{2} \) such that \( f = {f}_{1} - {f}_{2} \), and note that \( \left. {{f}^{ + } = {f}_{1} - {f}_{1} \land {f}_{2}.}\right) \) .3 Prove that if \( f, g \) are integrable and \( \lambda \in \mathbf{R} \), then \( f + g \) and \( {\lambda f} \) are integrable, \( \int \left( {f + g}\right) = \int f + \int g \), and \( \int {\lambda f} = \lambda \int f \) . (For the last part you will first need to complete the proof of Proposition (2.2.6).) .4 Prove that if \( f, g \) are integrable functions such that \( f \geq g \) almost e
verywhere, then \( \int f \geq \int g \) . .5 Prove that if \( f \) is integrable, then so is \( \left| f\right| \), and \( \left| {\int f}\right| \leq \int \left| f\right| \) . .6 Show that if \( f \) and \( g \) are integrable, then so are \( f \vee g \) and \( f \land g \) . (Reduce to the case where \( f \) and \( g \) are nonnegative.) .7 Let \( {\left( {f}_{n}\right) }_{n = 0}^{\infty } \) be a sequence of integrable functions. Prove that (i) if \( {f}_{n} \geq {f}_{0} \) almost everywhere, then \( \mathop{\inf }\limits_{{n \geq 1}}{f}_{n} \) is integrable; (ii) if \( {f}_{n} \leq {f}_{0} \) almost everywhere, then \( \mathop{\sup }\limits_{{n \geq 1}}{f}_{n} \) is integrable. .8 Let \( f \) be a step function-that is, a function, defined almost everywhere on \( \mathbf{R} \), for which there exist points \[ a = {x}_{1} < {x}_{2} < \cdots < {x}_{n} = b \] and real numbers \( {c}_{1},\ldots ,{c}_{n - 1} \) such that \[ f\left( x\right) = \left\{ \begin{array}{ll} {c}_{i} & \text{ if }{x}_{i} < x < {x}_{i + 1} \\ 0 & \text{ if }x < a\text{ or }x > b. \end{array}\right. \] Give two proofs that \( f \) is integrable and that \[ \int f = \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}{c}_{i}\left( {{x}_{i + 1} - {x}_{i}}\right) \] .9 Let \( f \) be integrable, \( t \) a real number, and \( g\left( x\right) = f\left( {x + t}\right) \) . Prove that \( g \) is integrable, with \( \int g = \int f \) . (Translation invariance of the Lebesgue integral. First consider the case where \( f \) is nonnegative. Let \( {F}_{ * } \) be an extremal element of \( {\mathcal{P}}_{f} \), and define \( {G}_{ * }\left( x\right) = {F}_{ * }\left( {x + t}\right) \) ; prove that \( {G}_{ * } \) is a bounded Lebesgue primitive of \( g \), and that \( \int g \leq \int f \) .) Let \( A \) be a subset of \( \mathbf{R} \) . The characteristic function of \( A \) is the mapping \( {\chi }_{A} : \mathbf{R} \rightarrow \mathbf{R} \) defined by \[ {\chi }_{A}\left( x\right) = \left\{ \begin{array}{ll} 1 & \text{ if }x \in A \\ 0 & \text{ if }x \notin A \end{array}\right. \] A function \( f \) defined almost everywhere is said to be integrable over \( A \) if \( f{\chi }_{A} \) is integrable, in which case we define \[ {\int }_{A}f = \int f{\chi }_{A} \] If \( A \) is a compact interval \( \left\lbrack {a, b}\right\rbrack \), we write \( {\int }_{a}^{b}f \) for \( {\int }_{A}f \) . If \( A \) is a closed infinite interval, we use the natural analogous notations; for example, if \( A = \lbrack a,\infty ) \), we write \( {\int }_{a}^{\infty }f \) for \( {\int }_{A}f \) . (2.2.10) Proposition. If \( f \) is an integrable function, then \( f \) is integrable over any interval. Moreover, if \( f \) is nonnegative and \( {F}_{ * } \) is an extremal element of \( {\mathcal{P}}_{f} \), then \[ {\int }_{a}^{b}f = {F}_{ * }\left( b\right) - {F}_{ * }\left( a\right) \] whenever \( a \leq b \) . Proof. We only discuss the case where \( f \) is nonnegative and the interval is of the form \( I = \left\lbrack {a, b}\right\rbrack \) with \( a \leq b \) . Accordingly, we define \[ F\left( x\right) = \left\{ \begin{array}{ll} {F}_{ * }\left( a\right) & \text{ if }x < a \\ {F}_{ * }\left( x\right) & \text{ if }a \leq x \leq b \\ {F}_{ * }\left( b\right) & \text{ if }x > b. \end{array}\right. \] Then \( F \) is a Lebesgue primitive of \( f{\chi }_{I} \) and so belongs to \( {\mathcal{P}}_{f{\chi }_{I}} \) . We show that \( F \) is an extremal element of \( {\mathcal{P}}_{f{\chi }_{I}} \) . 
Let \( G \in {\mathcal{P}}_{f{\chi }_{I}} \), and for each pair of real numbers \( \alpha ,\beta \) with \( \alpha < \beta \) define \[ {H}_{\alpha ,\beta }\left( x\right) = \left\{ \begin{array}{ll} {F}_{ * }\left( x\right) + G\left( \alpha \right) - {F}_{ * }\left( \alpha \right) & \text{ if }x < \alpha , \\ G\left( x\right) & \text{ if }\alpha \leq x \leq \beta , \\ {F}_{ * }\left( x\right) + G\left( \beta \right) - {F}_{ * }\left( \beta \right) & \text{ if }x > \beta . \end{array}\right. \] Note that if \( a \leq \alpha < \beta \leq b \), then \( {H}_{\alpha ,\beta } \in {\mathcal{P}}_{f} \) . Consider real numbers \( \xi ,\eta \) with \( \xi < \eta \) . If \( \eta < a \) or \( \xi > b \), then \( F\left( \eta \right) = F\left( \xi \right) \) and so \[ F\left( \eta \right) - F\left( \xi \right) \leq G\left( \eta \right) - G\left( \xi \right) \] (3) holds trivially. If \( a \leq \xi < \eta \leq b \), then \[ F\left( \eta \right) - F\left( \xi \right) = {F}_{ * }\left( \eta \right) - {F}_{ * }\left( \xi \right) \] \[ \leq {H}_{\xi ,\eta }\left( \eta \right) - {H}_{\xi ,\eta }\left( \xi \right) \] \[ = G\left( \eta \right) - G\left( \xi \right) \] If \( \xi < a \) and \( \eta > b \), then, as \( {H}_{a, b} \in {\mathcal{P}}_{f} \) and \( G \) is increasing, \[ F\left( \eta \right) - F\left( \xi \right) = {F}_{ * }\left( b\right) - {F}_{ * }\left( a\right) \] \[ \leq {H}_{a, b}\left( b\right) - {H}_{a, b}\left( a\right) \] \[ = G\left( b\right) - G\left( a\right) \] \[ \leq G\left( \eta \right) - G\left( \xi \right) \] Hence (3) holds in all possible cases, so \( F \) is an extremal element of \( {\mathcal{P}}_{f{\chi }_{I}} \) . Since \( F \) is bounded by \( {F}_{ * }, f \) is integrable over \( I \) and \[ {\int }_{a}^{b}f = F\left( \infty \right) - F\left( {-\infty }\right) = {F}_{ * }\left( b\right) - {F}_{ * }\left( a\right) . \] ## (2.2.11) Exercises .1 Let \( f \) be an integrable nonnegative function, and \( {F}_{ * } \) an extremal element of \( {\mathcal{P}}_{f} \) . Prove that for each \( x \in \mathbf{R}, f \) is integrable over \( ( - \infty, x\rbrack \) and \[ {\int }_{-\infty }^{x}f = {F}_{ * }\left( x\right) - {F}_{ * }\left( {-\infty }\right) . \] .2 Complete the proof of Proposition (2.2.10) in the remaining cases. .3 Let \( f \) be a nonnegative integrable function such that \( {\int }_{-\infty }^{x}f = 0 \) for each \( x \in \mathbf{R} \) . Prove that \( f = 0 \) almost everywhere. .4 Find expressions for \( {\chi }_{A \cap B},{\chi }_{A \cup B} \), and \( {\chi }_{A \smallsetminus B} \) in terms of \( {\chi }_{A} \) and \( {\chi }_{B} \) . Prove that if \( f \) is integrable over both \( A \) and \( B \), then it is integrable over \( A \cap B, A \cup B \), and \( A \smallsetminus B \) . Prove also that (i) if \( A \) and \( B \) are disjoint, then \( {\int }_{A \cup B}f = {\int }_{A}f + {\int }_{B}f \) ; (ii) if \( B \subset A \), then \( {\int }_{A \smallsetminus B}f = {\int }_{A}f - {\int }_{B}f \) . .5 Let \( f \) be a nonnegative integrable function, \( \left\lbrack {a, b}\right\rbrack \) a compact interval, and \( m \) a real number such that \( f\left( x\right) \geq m \) for each \( x \in \left( {a, b}\right) \) . Give two proofs that \( {\int }_{a}^{b}f \geq m\left( {b - a}\right) \) . (For one proof use Proposition (2.1.7).) .6 Let \( f \) be a nonnegative integrable function, \( F \) a bounded Lebesgue primitive of \( f \), and \( \left\lbrack {a, b}\right\rbrack \) a compact interval. 
Must we have \( {\int }_{a}^{b}f = \) \( F\left( b\right) - F\left( a\right) \) ? The power of the Lebesgue integral only appears when we consider the interplay between the operations of integration and of taking limits. There now follows a string of results and exercises that deal with this topic. A sequence \( {\left( {f}_{n}\right) }_{n = 1}^{\infty } \) of real-valued functions defined almost everywhere is said to be increasing (respectively, decreasing) if \( {f}_{1} \leq {f}_{2} \leq \cdots \) (respectively, \( {f}_{1} \geq {f}_{2} \geq \cdots \) ) almost everywhere. (2.2.12) Beppo Levi’s Theorem. Let \( \left( {f}_{n}\right) \) be an increasing sequence of integrable functions such that the corresponding sequence of integrals is bounded above. Then \( \left( {f}_{n}\right) \) converges almost everywhere to an integrable function \( f \), and \( \int f = \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} \) . Proof. Replacing \( {f}_{n} \) by \( {f}_{n} - {f}_{1} \) if necessary, we may assume that \( {f}_{n} \geq 0 \) . Choose \( M > 0 \) such that \( \int {f}_{n} \leq M \) for each \( n \) . By Exercise (2.2.11:1) and Corollary (2.2.2), \[ {F}_{n}\left( x\right) = {\int }_{-\infty }^{x}{f}_{n} \] defines an extremal element \( {F}_{n} \) of \( {\mathcal{P}}_{{f}_{n}} \) . Since \[ {f}_{n}{\chi }_{( - \infty, x\rbrack } \leq {f}_{n + 1}{\chi }_{( - \infty, x\rbrack } \leq {f}_{n + 1} \] it follows from Exercise (2.2.9: 4) that \( {\left( {F}_{n}\left( x\right) \right) }_{n = 1}^{\infty } \) is an increasing sequence that is bounded above by \( M \) and therefore converges to a limit \( F\left( x\right) \leq \) \( M \) . Since each \( {F}_{n} \) is an increasing function, so is \( F \) ; whence, by Theorem (2.1.10), \( F \) is differentiable almost everywhere. If \( m > n \), then \( {F}_{m}^{\prime } = {f}_{m} \geq {f}_{n} \) almost everywhere, so \( {F}_{m} \in {\mathcal{P}}_{{f}_{n}} \) . Thus if \( x < y \), then \[ {F}_{n}\left( y\right) - {F}_{n}\left( x\right) \leq {F}_{m}\left( y\right) - {F}_{m}\left( x\right) \] letting \( m \rightarrow \infty \), we obtain \[ {F}_{n}\left( y\right) - {F}_{n}\left( x\right) \leq F\left( y\right) - F\left( x\right) . \] It follows that \( {F}^{\prime } \geq {F}_{n}^{\prime } = {f}_{n} \) almost everywhere, which ensures that, almost everywhere, the increasing sequence \( \left( {f}_{n}\right) \) converges to a limit \( f \) satisfying \[ f = \sup {f}_{n} \leq {F}^{\prime } \] Since \( {F}^{\prime } \) is integrable ( \( F \) is a bounded Lebesgue primitive of \( {F}^{\prime } \) ), it follows from Exercise (2.2.9: 7) that \( f \) is integrable. Finally, by Exercises (2.2.4: 4) and \( \left( {{2.2.9} : 4}\right) \) , \[ F\left( \infty \right) - F\left( {-\infty }\right) \geq \int {F}^{\prime } \geq \int f \geq \int {f}_{n} \] \[ = {F}_{n}\left( \infty \right) - {F}_{n}\left( {-\infty }\right) \] \[ \rightarrow F\left( \infty \right) - F\left( {-\
infty }\right) \text{ as } n \rightarrow \infty , \] so \[ \int f = F\left( \infty \right) - F\left( {-\infty }\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n}. \] \( ▱ \) ## (2.2.13) Exercises .1 Let \( \alpha \in \mathbf{R} \), and define \[ f\left( x\right) = \left\{ \begin{array}{ll} {x}^{\alpha } & \text{ if }x > 0 \\ 0 & \text{ if }x \leq 0 \end{array}\right. \] Prove that \( f \) is integrable over \( \lbrack 1,\infty ) \) if and only if \( \alpha < - 1 \), and that \( f \) is integrable over \( \lbrack 0,1) \) if and only if \( \alpha > - 1 \) . Calculate \( \int f \) in each case. .2 Define \( f\left( x\right) = {\mathrm{e}}^{-{\alpha x}} \) for \( x \geq 0 \) and \( f\left( x\right) = 0 \) for \( x < 0 \), where \( \alpha \) is a positive constant. Prove that \( f \) is integrable, and calculate \( \int f \) . .3 Let \( f \) be an integrable function, and \( I \) a bounded interval. Use Beppo Levi's Theorem to prove that \( f \) is integrable over \( I \) . (Consider the sequence \( {\left( f \land {g}_{n}\right) }_{n = 1}^{\infty } \), where \( {g}_{n}\left( x\right) = n \) if \( x \in I \), and \( {g}_{n}\left( x\right) = 0 \) otherwise.) Extend this result to an unbounded interval \( I \) . (First take \( f \geq 0 \) . Consider the sequence \( \left( {f}_{n}\right) \), where \( {f}_{n}\left( x\right) = f\left( x\right) \) if \( x \in I \cap \) \( \left\lbrack {-n, n}\right\rbrack \), and \( {f}_{n}\left( x\right) = 0 \) otherwise.) .4 Prove Lebesgue's Series Theorem: if \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} \) is a series of integrable functions such that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\int \left| {f}_{n}\right| \) converges, then \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} \) converges almost everywhere to an integrable function, and \[ \int \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} = \mathop{\sum }\limits_{{n = 1}}^{\infty }\int {f}_{n} \] (Consider the partial sums of the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n}^{ + } \) and \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n}^{ - } \) .) .5 Use the preceding exercise to give another proof that if \( f \) is a nonnegative integrable function satisfying \( \int f = 0 \), then \( f = 0 \) almost everywhere. (See also Exercises (2.2.11: 3) and (2.2.4: 6).) .6 Let \( \left( {A}_{n}\right) \) be a sequence of subsets of \( \mathbf{R} \), and \( f \) a function that is integrable over each \( {A}_{n} \), such that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\int }_{{A}_{n}}\left| f\right| \) converges.
Prove that (i) \( f \) is integrable over \( A = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n} \), and \( {\int }_{A}\left| f\right| \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }{\int }_{{A}_{n}}\left| f\right| \) ; (ii) if also the sets \( {A}_{n} \) are pairwise-disjoint, then \( {\int }_{A}f = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\int }_{{A}_{n}}f \) . .7 Let \( f \) be an integrable function, and \( \varepsilon > 0 \) . Show that there exists a bounded interval \( I \) such that \( {\int }_{\mathbf{R} \smallsetminus I}\left| f\right| < \varepsilon \) . (Consider \( \left| f\right| {\chi }_{n} \), where \( {\chi }_{n} \) is the characteristic function of \( \left\lbrack {-n, n}\right\rbrack \) .) .8 Prove that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\mathrm{e}}^{-{n}^{2}x} \) converges for each \( x > 0 \) . Define \[ f\left( x\right) = \left\{ \begin{array}{ll} \mathop{\sum }\limits_{{n = 1}}^{\infty }{\mathrm{e}}^{-{n}^{2}x} & \text{ if }x > 0 \\ 0 & \text{ if }x \leq 0. \end{array}\right. \] Prove that \( f \) is integrable, and that \( \int f = \mathop{\sum }\limits_{{n = 1}}^{\infty }1/{n}^{2} \) . .9 Let \( \left( {f}_{n}\right) \) be a sequence of integrable functions such that \( 0 \leq {f}_{1} \leq \) \( {f}_{2} \leq \cdots \) almost everywhere. Show that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} = 0 \) if and only if \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) = 0 \) almost everywhere. (For "only if" choose a subsequence \( {\left( {f}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) such that \( \int {f}_{{n}_{k}} \leq {2}^{-k} \) for each \( k \), and use Lebesgue’s Series Theorem.) .10 Let \( \left( {f}_{n}\right) \) be a sequence of step functions such that \( 0 \leq {f}_{n + 1} \leq {f}_{n} \) almost everywhere and \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} = 0 \) . Without using any of the foregoing theorems or exercises about the convergence of integrals, prove that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) = 0 \) almost everywhere. \( {}^{4} \) (Use the Vitali Covering Theorem.) .11 Prove Fatou’s Lemma: if \( \left( {f}_{n}\right) \) is a sequence of nonnegative integrable functions that converges almost everywhere to a function \( f \), and if the sequence \( {\left( \int {f}_{n}\right) }_{n = 1}^{\infty } \) is bounded above, then \( f \) is integrable and \[ \int f \leq \lim \inf \int {f}_{n} \] (Apply Beppo Levi’s Theorem to the functions \( {g}_{n} = \mathop{\inf }\limits_{{k \geq n}}{f}_{k} \) .) .12 Let \( f \) be defined almost everywhere, and suppose that for each \( \varepsilon > 0 \) there exist integrable functions \( g, h \) such that \( g \leq f \leq h \) almost everywhere and \( \int \left( {h - g}\right) < \varepsilon \) . Prove that \( f \) is integrable. (For each \( n \) choose integrable functions \( {g}_{n},{h}_{n} \) such that \( {g}_{n} \leq f \leq {h}_{n} \) almost everywhere and \( \left. {\int \left( {{h}_{n} - {g}_{n}}\right) < {2}^{-n}\text{.}}\right) \) .13 Prove that if \( E \) is a set of measure zero, then there exists a nonnegative integrable function \( f \) such that \( \int f = 0 \) and \( f\left( x\right) = 1 \) for all \( x \in E \) . 
(For each positive integer \( n \) choose a sequence \( {\left( {I}_{n, k}\right) }_{k = 1}^{\infty } \) of pairwise-disjoint bounded open intervals such that \( E \subset {A}_{n} = \) \( \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{I}_{n, k} \) and \( \mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {I}_{n, k}\right| < 1/n \) . Let \( f \) be the characteristic function of \( \mathop{\bigcap }\limits_{{n = 1}}^{\infty }{A}_{n} \) .) Let \( f, g \) be functions defined almost everywhere. We say that \( g \) dominates \( f \) if \( \left| f\right| \leq g \) almost everywhere. (2.2.14) Lebesgue's Dominated Convergence Theorem. Let \( \left( {f}_{n}\right) \) be a sequence of integrable functions that converges almost everywhere to a function \( f \), and suppose that there exists an integrable function \( g \) that dominates each \( {f}_{n} \) . Then \( f \) is integrable, and \( \int f = \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} \) . Proof. The functions \[ {g}_{n} = \mathop{\sup }\limits_{{k \geq n}}{f}_{k} \] are integrable, by Exercise (2.2.9:7), and form a decreasing sequence converging to \( f \) almost everywhere. Noting that \( \int \left( {-{g}_{n}}\right) \leq \int g \), we now apply Beppo Levi's theorem to the sequence \( \left( {-{g}_{n}}\right) \) to show that \( f \) is integrable and that \( \int {g}_{n} \rightarrow \int f \) . Replacing \( {f}_{n} \) by \( - {f}_{n} \) in this argument, we see that \( \int {h}_{n} \rightarrow \int f \), where \[ {h}_{n} = \mathop{\inf }\limits_{{k \geq n}}{f}_{k} \] Finally, \( {h}_{n} \leq {f}_{n} \leq {g}_{n} \), so \[ \int {h}_{n} \leq \int {f}_{n} \leq \int {g}_{n} \] and therefore \( \int {f}_{n} \rightarrow \int f \) . --- \( {}^{4} \) This is the basic result in another approach to Lebesgue integration on \( \mathbf{R} \), which starts by defining the integral of a step function and then considers the convergence of a sequence \( \left( {f}_{n}\right) \) of step functions when the corresponding sequence of integrals is bounded above; see [40]. --- ## (2.2.15) Exercises .1 Prove that if \( f \) is an integrable function, then \( \int
\left( {f \land n}\right) \rightarrow \int f \) as \( n \rightarrow \infty \) . .2 Let \( f \) be an integrable function, and for each \( n \) define \( {f}_{n} = \left( {f \land n}\right) \vee \) \( - n \) . Prove that \( \int \left| {f - {f}_{n}}\right| \rightarrow 0 \) as \( n \rightarrow \infty \) . .3 Give two proofs that if \( f \) is an integrable function, then \( \int \left( {\left| f\right| \land {n}^{-1}}\right) \) \( \rightarrow 0 \) as \( n \rightarrow \infty \) . .4 Give an example of a sequence \( \left( {f}_{n}\right) \) of integrable functions such that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n} = 0 \) almost everywhere, \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} = 0 \), and there is no integrable function that dominates each \( {f}_{n} \) . .5 Let \( \left( {f}_{n}\right) \) be a sequence of integrable functions converging almost everywhere to a function \( f \), and let \( g \) be an integrable function that dominates \( f \) . Show that \( f \) is integrable, and that \( \int f = \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} \) . (Consider the functions \( \left( {{f}_{n} \land g}\right) \vee - g \) .) With the help of Lebesgue's Dominated Convergence Theorem we can prove the converse of Exercise (2.2.4: 3), and thereby, for a nonnegative integrable function \( f \), complete the characterisation of the extremal elements of \( {\mathcal{P}}_{f} \) among the Lebesgue primitives of \( f \) . (2.2.16) Proposition. If \( f \) is a nonnegative integrable function, then each extremal element of \( {\mathcal{P}}_{f} \) is absolutely continuous on each compact interval. Proof. Let \( I = \left\lbrack {a, b}\right\rbrack \) be a compact interval. Given an extremal element \( F \) of \( {\mathcal{P}}_{f} \), consider first the case where \( f \) is bounded above almost everywhere by some constant \( M > 0 \) . The function \( x \mapsto {Mx} \) is increasing and has derivative \( M \geq f \) almost everywhere. It follows from Proposition (2.2.1) that \( F\left( \eta \right) - F\left( \xi \right) \leq M\left( {\eta - \xi }\right) \) whenever \( \xi \leq \eta \) . So if \( {\left( \left\lbrack {a}_{k},{b}_{k}\right\rbrack \right) }_{k = 1}^{n} \) is a finite sequence of nonoverlapping subintervals of \( I \), then \[ \mathop{\sum }\limits_{{k = 1}}^{n}\left| {F\left( {b}_{k}\right) - F\left( {a}_{k}\right) }\right| \leq M\mathop{\sum }\limits_{{k = 1}}^{n}\left( {{b}_{k} - {a}_{k}}\right) \] from which the absolute continuity of \( F \) readily follows. In the general case we define \[ {f}_{n} = \left( {f \land n}\right) \vee - n \] for each positive integer \( n \) . Given \( \varepsilon > 0 \), we see from Exercise (2.2.15: 2) that there exists \( N \) such that \( \int \left| {f - {f}_{N}}\right| < \varepsilon \) . Choose an extremal element \( {F}_{N} \) of \( {\mathcal{P}}_{{f}_{N}} \) . 
If \( {\left( \left\lbrack {a}_{k},{b}_{k}\right\rbrack \right) }_{k = 1}^{n} \) is a finite sequence of nonoverlapping subintervals of \( I \), then by Proposition (2.2.10), \[ \mathop{\sum }\limits_{{k = 1}}^{n}\left| {F\left( {b}_{k}\right) - F\left( {a}_{k}\right) }\right| = \mathop{\sum }\limits_{{k = 1}}^{n}\left| {{\int }_{{a}_{k}}^{{b}_{k}}f}\right| \] \[ \leq \mathop{\sum }\limits_{{k = 1}}^{n}\left| {{\int }_{{a}_{k}}^{{b}_{k}}{f}_{N}}\right| + \mathop{\sum }\limits_{{k = 1}}^{n}{\int }_{{a}_{k}}^{{b}_{k}}\left| {f - {f}_{N}}\right| \] \[ \leq \mathop{\sum }\limits_{{k = 1}}^{n}\left| {{F}_{N}\left( {b}_{k}\right) - {F}_{N}\left( {a}_{k}\right) }\right| + {\int }_{a}^{b}\left| {f - {f}_{N}}\right| \] \[ < \mathop{\sum }\limits_{{k = 1}}^{n}\left| {{F}_{N}\left( {b}_{k}\right) - {F}_{N}\left( {a}_{k}\right) }\right| + \varepsilon . \] Since, by the first part of the proof, \( {F}_{N} \) is absolutely continuous, it follows that \( F \) is absolutely continuous. ## (2.2.17) Exercises .1 Let \( f \) be an integrable function, and \( F \) the function defined by \[ F\left( x\right) = {\int }_{-\infty }^{x}f \] Prove that \( F \) is absolutely continuous on each compact interval. .2 Prove that if \( G : \mathbf{R} \rightarrow \mathbf{R} \) is absolutely continuous on each compact interval, then there exists an integrable function \( g \) such that \( {G}^{\prime } = g \) almost everywhere. (Note that for \( a \leq x, G\left( x\right) = {T}_{G}\left( {a, x}\right) - \) \( \left( {{T}_{G}\left( {a, x}\right) - G\left( x\right) }\right) \) .) .3 Let \( f \) be a nonnegative continuous function on a compact interval \( I = \left\lbrack {a, b}\right\rbrack \), and extend \( f \) to \( \mathbf{R} \) by setting \( f\left( x\right) = 0 \) for all \( x \) outside \( I \) . Prove that \( f \) is integrable over \( \left\lbrack {a, b}\right\rbrack \), and that \( {\int }_{a}^{b}f = \widehat{{\int }_{a}^{b}}f \), where \( \widehat{\int } \) denotes the Riemann integral. (Let \[ F\left( x\right) = \left\{ \begin{array}{ll} 0 & \text{ if }x \leq a \\ \widehat{{\int }_{a}^{x}}f & \text{ if }a \leq x \leq b \\ \widehat{{\int }_{a}^{b}}f & \text{ if }x > b \end{array}\right. \] and show that \( F \) is absolutely continuous.) Two fundamental techniques of calculus are changing the variable in an integral, and integration by parts. We now deal with the former, the latter being left to the next set of exercises. (2.2.18) Proposition. Let \( g \) be an absolutely continuous, increasing function on \( I = \left\lbrack {\alpha ,\beta }\right\rbrack, a = g\left( \alpha \right), b = g\left( \beta \right) \), and \( f \) an integrable function on \( \left\lbrack {a, b}\right\rbrack \) . Then \( \left( {f \circ g}\right) {g}^{\prime } \) is integrable, and \[ {\int }_{a}^{b}f = {\int }_{\alpha }^{\beta }\left( {f \circ g}\right) {g}^{\prime } \] Proof. We may take \( f = g = 0 \) outside \( \left\lbrack {a, b}\right\rbrack \) . By considering \( {f}^{ + } \) and \( {f}^{ - } \) separately, we reduce to the case where \( f \) is nonnegative. Moreover, we may assume that \( f \) is bounded: for if we have proved the proposition in the bounded case, we obtain the desired result in the general case for nonnegative \( f \) by considering \( f \land n \), letting \( n \rightarrow \infty \), and using Beppo Levi’s Theorem. Note that, by Corollary (1.4.12), \( g \) maps \( \left\lbrack {\alpha ,\beta }\right\rbrack \) onto \( \left\lbrack {a, b}\right\rbrack \) . 
Choose \( M \) such that \( 0 \leq f \leq M \), and let \( F \) be an extremal element of \( {\mathcal{P}}_{f} \) . Then the function \[ G = F \circ g \] is increasing. Since \( t \mapsto {Mt} \) belongs to \( {\mathcal{P}}_{f} \), if \( \alpha \leq \xi < \eta \leq \beta \), then \[ G\left( \eta \right) - G\left( \xi \right) = F\left( {g\left( \eta \right) }\right) - F\left( {g\left( \xi \right) }\right) \leq {Mg}\left( \eta \right) - {Mg}\left( \xi \right) \] (4) so the function \( {Mg} - G \) is increasing. Since, by Exercise (2.2.4:3), \( {Mg} \) is an extremal element of \( {\mathcal{P}}_{M{g}^{\prime }} \), we can apply Lemma (2.2.5) with \( \Phi = {Mg} \) and \( \Psi = G \), to show that \( G \) is an extremal element of \( {\mathcal{P}}_{{G}^{\prime }} \) ; whence \[ {\int }_{a}^{b}f = F\left( b\right) - F\left( a\right) = G\left( \beta \right) - G\left( \alpha \right) = {\int }_{\alpha }^{\beta }{G}^{\prime }. \] It therefore remains to prove that \[ {G}^{\prime }\left( t\right) = f\left( {g\left( t\right) }\right) {g}^{\prime }\left( t\right) \] (5) almost everywhere in \( I \) . Consider the set of those \( t \in \left( {\alpha ,\beta }\right) \) for which (5) fails to hold. This may be split into five subsets, as follows: - the set \( {A}_{1} \) of measure zero on which \( {G}^{\prime }\left( t\right) \) does not exist; - the set \( {A}_{2} \) of measure zero on which \( {g}^{\prime }\left( t\right) \) does not exist (see Exercises (2.1.6: 4) and (2.1.12: 3)); - the set \( {A}_{3} \) of measure zero on which \( {F}^{\prime }\left( {g\left( t\right) }\right) \) does not exist; - the set \( {A}_{4} \) of measure zero on which \( f\left( {g\left( t\right) }\right) \) does not exist; - the set \( B \) of those \( t \in I \smallsetminus \mathop{\bigcup }\limits_{{k = 1}}^{4}{A}_{k} \) such that \( {F}^{\prime }\left( {g\left( t\right) }\right) \neq f\left( {g\left( t\right) }\right) \) . To complete the proof for bounded nonnegative \( f \), we show that \( B \) has measure zero. If \( t \in B \) and \( {g}^{\prime }\left( t\right) = 0 \), then it follows from (4) that \[ \left| {{G}^{\prime }\left( t\right) }\right| \leq \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\left| Mg\left( t + h\right) - Mg\left( t\right) \right| }{\left| h\right| } = M{g}^{\prime }\left( t\right) = 0, \] so (5) holds. Let \[ C = \left\{ {t \in B : {g}^{\prime }\left( t\right) \text{ exists and is nonzero, and }{F}^{\prime }\left( {g\left( t\right) }\right) \neq f\left( {g\left( t\right) }\right) }\right\} . \] Since \( F \) is a Lebesgue primitive of \( f, g\left( C\right) \) has measure zero; we must show that \( C \) itself has measure zero. To this end, for all positive integers \( m, n \) let \( {C}_{m, n} \) be the set of those \( t \in C \) such that if \( \alpha < {t}_{1} \leq t \leq {t}_{2} < \beta \) and \( g\left( {t}_{2}\right) - g\left( {t}_{1}\right) \leq 1/m \), then \[ g\left( {t}_{2}\right) - g\left( {t}_{1}\right) \geq \frac{{t}_{2} - {t}_{1}}{n} \] Then (Exercise (2.2.19:1)) \( C = \mathop{\bigcup }\limits_{{m, n = 1}}^{\infty }{C}_{m, n} \), so we need only prove that for fixed \( m \) and \( n,{C}_{m, n} \) has measure zero. Since \( {C}_{m, n} \subset C, g\left( {C}_{m, n}\right) \) has measure zero; so for each \( \varepsilon > 0 \) there exists a sequence \( {\left( \left\lbrack {a}_{k},{b}_{k}\right\rbrack \right) }_{k = 1}^{\infty } \) of compact subintervals of \( \left( {a, b}\right) \) such that (i) \( {b}_{k} - {a}_{k} < 1/m \) for each \( k \) , (ii) \( g\left( {C}
_{m, n}\right) \subset \mathop{\bigcup }\limits_{{k = 1}}^{\infty }\left\lbrack {{a}_{k},{b}_{k}}\right\rbrack \), and (iii) \( \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {{b}_{k} - {a}_{k}}\right) < \varepsilon /n \) . Clearly, we may assume that \( g\left( {C}_{m, n}\right) \cap \left( {{a}_{k},{b}_{k}}\right) \) is nonempty for each \( k \) . Since \( g \) is continuous and increasing, it follows from the Intermediate Value Theorem that each \( \left\lbrack {{a}_{k},{b}_{k}}\right\rbrack \) is the image under \( g \) of a compact subinterval \( \left\lbrack {{\alpha }_{k},{\beta }_{k}}\right\rbrack \) of \( \left\lbrack {\alpha ,\beta }\right\rbrack \) . For each \( k \) choose \( t \in {C}_{m, n} \) with \( {\alpha }_{k} \leq t \leq {\beta }_{k} \) . Since \[ g\left( {\beta }_{k}\right) - g\left( {\alpha }_{k}\right) = {b}_{k} - {a}_{k} < \frac{1}{m}, \] the definition of \( {C}_{m, n} \) ensures that \[ {b}_{k} - {a}_{k} \geq \frac{{\beta }_{k} - {\alpha }_{k}}{n} \] Thus the intervals \( \left\lbrack {{\alpha }_{k},{\beta }_{k}}\right\rbrack \) cover \( {C}_{m, n} \) and have total length \[ \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {{\beta }_{k} - {\alpha }_{k}}\right) \leq \mathop{\sum }\limits_{{k = 1}}^{\infty }n\left( {{b}_{k} - {a}_{k}}\right) < \varepsilon . \] Since \( \varepsilon \) is arbitrary, it follows that \( {C}_{m, n} \) has measure zero. ## (2.2.19) Exercises .1 In the notation of the proof of Proposition (2.2.18), show that \( C = \) \( \mathop{\bigcup }\limits_{{m, n = 1}}^{\infty }{C}_{m, n} \) . .2 This exercise deals with integration by parts. Let \( f, g \) be integrable functions, and \( I = \left\lbrack {a, b}\right\rbrack \) a compact interval. For \( a \leq x \leq b \) define \[ F\left( x\right) = {\int }_{a}^{x}f,\;G\left( x\right) = {\int }_{a}^{x}g. \] Prove that the functions \( {Fg} \) and \( {fG} \), extended to equal 0 outside \( I \) , are integrable over \( I \) and that \[ {\int }_{a}^{b}{Fg} + {\int }_{a}^{b}{fG} = F\left( b\right) G\left( b\right) - F\left( a\right) G\left( a\right) . \] The final set of exercises in this section explores further the relation between Riemann and Lebesgue integration. For this purpose, we again denote the Riemann integral by \( \widehat{\int } \) . ## (2.2.20) Exercises .1 Let the bounded function \( f \) be Riemann integrable over the compact interval \( I = \left\lbrack {a, b}\right\rbrack \) .
Show that for each \( \varepsilon > 0 \) there exist step functions \( g, h \) that vanish outside \( I \), such that \( g \leq f \leq h \) , \[ \int g \leq {\widehat{\int }}_{a}^{b}f \leq \int h \] and \( \int \left( {h - g}\right) \leq \varepsilon \) . Then use Exercise (2.2.13:12) to deduce that \( f \) is Lebesgue integrable over \( I \) and that the Lebesgue and Riemann integrals of \( f \) over \( I \) are equal. .2 Define \( f : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbf{R} \) by \[ f\left( x\right) = \left\{ \begin{array}{ll} 1 & \text{ if }x\text{ is irrational } \\ 0 & \text{ if }x\text{ is rational. } \end{array}\right. \] Show that \( f \), which we have already shown is not Riemann integrable (Exercise (1.5.10: 6)), is Lebesgue integrable over \( \left\lbrack {0,1}\right\rbrack \), with \( {\int }_{0}^{1}f = 1 \) . .3 Let \( f \) be a bounded nonnegative function on \( \mathbf{R} \) that is Riemann integrable over each compact interval, such that the infinite Riemann integral \( J = \mathop{\lim }\limits_{{n \rightarrow \infty }}\widehat{{\int }_{-n}^{n}}f \) exists. Prove that \( f \) is Lebesgue integrable and that its Lebesgue integral equals \( J \) . Need this conclusion hold if \( f \) is allowed to take negative values? .4 Let \( \left( {f}_{n}\right) \) be an increasing sequence of Riemann integrable functions over a compact interval \( \left\lbrack {a, b}\right\rbrack \), such that \( f\left( x\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) \) defines a Riemann integrable function over \( \left\lbrack {a, b}\right\rbrack \) . Prove that \[ \widehat{{\int }_{a}^{b}}f = \mathop{\lim }\limits_{{n \rightarrow \infty }}\widehat{{\int }_{a}^{b}}{f}_{n} \] ## 2.3 Measurable Sets and Functions A function \( f \) defined almost everywhere on \( \mathbf{R} \) is said to be measurable if it is the limit almost everywhere of a sequence of integrable functions. Clearly, an integrable function is measurable. (2.3.1) Proposition. If a measurable function is dominated by an integrable function, then it is integrable. Proof. Let \( g \) be an integrable function dominating a measurable function \( f \), and choose a sequence \( \left( {f}_{n}\right) \) of integrable functions converging to \( f \) almost everywhere. For each \( n \) define \[ {g}_{n} = \left( {{f}_{n} \land g}\right) \vee - g. \] Then \( {g}_{n} \) is integrable, by Exercise (2.2.9: 6), and is dominated by \( g \) ; also, \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{g}_{n} = \left( {f \land g}\right) \vee - g = f \] almost everywhere. It follows from Lebesgue's Dominated Convergence Theorem (2.2.14) that \( f \) is integrable. (2.3.2) Corollary. A measurable function \( f \) is integrable if and only if \( \left| f\right| \) is integrable. Proof. If \( \left| f\right| \) is integrable, then, as it dominates \( f \), we see from Proposition (2.3.1) that \( f \) is integrable. For the converse we refer to Exercise \( \left( {{2.2.9} : 5}\right) \) . ## (2.3.3) Exercises . 1 Prove that if \( f \) is a measurable function and \( I \) is an interval, then \( f{\chi }_{I} \) is measurable. .2 Prove that a continuous function \( f : \mathbf{R} \rightarrow \mathbf{R} \) is measurable. .3 Let \( f, g \) be measurable functions. Prove that \( f + g, f - g, f \vee g \), and \( f \land g \) are measurable. .4 Let \( \left( {f}_{n}\right) \) be a sequence of measurable functions that converges almost everywhere to a function \( f \) . Prove that \( f \) is measurable. 
(For each \( k \) define the step function \( {g}_{k} \) by \[ {g}_{k}\left( x\right) = \left\{ \begin{array}{ll} k & \text{ if } - k \leq x \leq k \\ 0 & \text{ otherwise. } \end{array}\right. \] First prove that \( \left( {f \land {g}_{k}}\right) \vee - {g}_{k} \) is integrable.) .5 Let \( f \) be a measurable function, and \( p \) a positive number. Prove that \( {\left| f\right| }^{p} \) is measurable. .6 Give an example of a measurable function \( f \) which is not integrable even though \( {f}^{2} \) is. .7 Give two proofs that the product of two measurable functions is measurable. (For one proof use Exercises (2.3.3: 3 and 5).) .8 Let the measurable function \( f \) be nonzero almost everywhere. Prove that \( 1/f \) is measurable. (First consider the case where \( f \geq c \) almost everywhere for some positive constant \( c \) . For general \( f \geq 0 \) consider \( \left. {{f}_{n} = 1/\left( {f + {n}^{-1}}\right) \text{.}}\right) \) .9 Let \( f \) be a measurable function, and \( \varphi : \mathbf{R} \rightarrow \mathbf{R} \) a continuous function. Prove that \( \varphi \circ f \) is measurable. (Reduce to the case where \( f \) vanishes outside a compact interval \( \left\lbrack {a, b}\right\rbrack \) . Then use Exercise (2.2.4:9) to construct a sequence \( \left( {f}_{n}\right) \) of step functions that vanish outside \( \left\lbrack {a, b}\right\rbrack \) and converge almost everywhere to \( f \) .) .10 Let \( - 2 < \alpha < - 1 \), and define \[ f\left( x\right) = \left\{ \begin{array}{ll} {x}^{\alpha }\sin x & \text{ if }x > 0 \\ 0 & \text{ if }x \leq 0 \end{array}\right. \] Prove that \( f \) is integrable over \( \left( {0,\infty }\right) \) . .11 Define \[ f\left( x\right) = \left\{ \begin{array}{ll} \frac{\sin x}{x} & \text{ if }x > 0 \\ 0 & \text{ if }x \leq 0. \end{array}\right. \] Prove that \( f \) is measurable but not integrable. (For the second part suppose that \( f \) is integrable, so \( \left| f\right| \) is int
egrable. Use the inequality \[ \int \left| f\right| \geq \mathop{\sum }\limits_{{n = 1}}^{N}{\int }_{2n\pi }^{{2n\pi } + \pi /3}f\;\left( {N \in {\mathbf{N}}^{ + }}\right) \] to derive a contradiction.) .12 Let \( \alpha > 0 \), and define \[ f\left( x\right) = \left\{ \begin{array}{ll} {\mathrm{e}}^{-x}{x}^{\alpha - 1} & \text{ if }x > 0 \\ 0 & \text{ if }x \leq 0. \end{array}\right. \] Prove that \( f \) is integrable. (Consider the functions \( f{\chi }_{( - \infty ,1\rbrack } \) and \( f{\chi }_{\left( 1,\infty \right) } \) separately.) .13 Give two proofs of the Riemann-Lebesgue Lemma: if \( f \) is an integrable function, then the functions \( x \mapsto f\left( x\right) \sin {nx} \) and \( x \mapsto f\left( x\right) \cos {nx} \) are integrable, and \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\int f\left( x\right) \sin {nx}\mathrm{\;d}x = 0 \] \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\int f\left( x\right) \cos {nx}\mathrm{\;d}x = 0. \] (One proof proceeds like this. First reduce to the case where \( f \geq 0 \) and \( f \) vanishes outside a compact interval \( I = \left\lbrack {-{N\pi },{N\pi }}\right\rbrack \) for some positive integer \( N \) . Let \( F \) be an extremal element of \( {\mathcal{P}}_{f} \), and carry out integration by parts on \( {\int }_{-{N\pi }}^{N\pi }{F}^{\prime }\left( x\right) \sin {nx}\mathrm{\;d}x \) .) .14 Let \( \varphi ,\psi ,\theta \) be nonnegative bounded integrable functions on \( I = \left\lbrack {0, c}\right\rbrack \) such that \[ \theta \left( x\right) \leq \varphi \left( x\right) + {\int }_{0}^{x}\psi \left( t\right) \theta \left( t\right) \mathrm{d}t\;\left( {x \in I}\right) . \] Prove that \[ \theta \left( x\right) \leq \varphi \left( x\right) + {\int }_{0}^{x}\varphi \left( t\right) \psi \left( t\right) \exp \left( {{\int }_{t}^{x}\psi \left( s\right) \mathrm{d}s}\right) \mathrm{d}t\;\left( {x \in I}\right) . \] (Define \[ \gamma \left( x\right) = {\int }_{0}^{x}\psi \left( t\right) \theta \left( t\right) \mathrm{d}t \] \[ \lambda \left( x\right) = \gamma \left( x\right) \exp \left( {-{\int }_{0}^{x}\psi \left( t\right) \mathrm{d}t}\right) . \] Show that \[ {\lambda }^{\prime }\left( x\right) \leq \varphi \left( x\right) \psi \left( x\right) \exp \left( {-{\int }_{0}^{x}\psi \left( t\right) \mathrm{d}t}\right) \] almost everywhere on \( I \), and then use Proposition (2.1.7).) A subset \( A \) of \( \mathbf{R} \) is called a measurable set (respectively, integrable set) if \( {\chi }_{A} \) is a measurable (respectively, integrable) function. A measurable subset of an integrable set is integrable, by Proposition (2.3.1). If \( A \subset \mathbf{R} \) is integrable, we define its (Lebesgue) measure to be \( \mu \left( A\right) = \) \( \int {\chi }_{A} \) . ## (2.3.4) Exercises .1 Let \( A, B \) be measurable sets. Prove that \( A \cup B, A \cap B \), and \( A \smallsetminus B \) are measurable. .2 Let \( \left( {A}_{n}\right) \) be a sequence of pairwise-disjoint measurable sets. Prove that (i) \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n} \) is measurable; (ii) if \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\mu \left( {A}_{n}\right) \) is convergent, then \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n} \) is integrable, and \( \mu \left( {\mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n}}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\mu \left( {A}_{n}\right) . \) .3 Prove that any interval in \( \mathbf{R} \) is measurable. 
.4 Let \( \mathcal{B} \) be the smallest collection of subsets of \( \mathbf{R} \) that satisfies the following properties. - Any open subset of \( \mathbf{R} \) is in \( \mathcal{B} \) . - If \( A \in \mathcal{B} \), then \( \mathbf{R} \smallsetminus A \in \mathcal{B} \) . - The union of a sequence of elements of \( \mathcal{B} \) belongs to \( \mathcal{B} \) . The elements of \( \mathcal{B} \) are called Borel sets. Prove that any Borel set is measurable. If \( \diamond \) is a binary relation on \( \mathbf{R} \) and \( f, g \) are functions defined almost everywhere on \( \mathbf{R} \), we define \[ \llbracket f \diamond g\rrbracket = \{ x \in \mathbf{R} : f\left( x\right) \diamond g\left( x\right) \} . \] So, for example, \[ \llbracket f > g\rrbracket = \{ x \in \mathbf{R} : f\left( x\right) > g\left( x\right) \} . \] We also use analogous notations such as \[ \llbracket a \leq f < b\rrbracket = \{ x \in \mathbf{R} : a \leq f\left( x\right) < b\} . \] Just as the measurability of a set is related to that of a corresponding (characteristic) function, so the measurability of a function is related to that of certain associated sets. (2.3.5) Proposition. Let \( f \) be a real-valued function defined almost everywhere. Then \( f \) is measurable if and only if \( \llbracket f > r\rrbracket \) is measurable for each \( r \in \mathbf{R} \) . Proof. Suppose that \( f \) is measurable, let \( r \in \mathbf{R} \), and for each positive integer \( n \) define \[ {f}_{n} = \frac{{\left( f - r\right) }^{ + }}{\frac{1}{n} + {\left( f - r\right) }^{ + }} \] Since the functions \( t \mapsto {t}^{ + } \) and \[ t \mapsto \frac{t}{\frac{1}{n} + t} \] are continuous on \( \mathbf{R} \) and \( {\mathbf{R}}^{0 + } \), respectively, we see from Exercises (2.3.3: 2 and 9) that \( {f}_{n} \) is measurable. But \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n} = {\chi }_{\llbracket f > r\rrbracket } \) almost everywhere, so \( \llbracket f > r\rrbracket \) is measurable, by Exercise (2.3.3: 4). Now assume, conversely, that \( \llbracket f > r\rrbracket \) is measurable for each \( r \in \mathbf{R} \) . Given a positive integer \( n \), choose real numbers \[ \ldots ,{r}_{-2},{r}_{-1},{r}_{0},{r}_{1},{r}_{2},\ldots \] such that \( 0 < {r}_{k + 1} - {r}_{k} < {2}^{-n} \) for each \( k \) . Then \[ \llbracket {r}_{k - 1} < f \leq {r}_{k}\rrbracket = \llbracket f > {r}_{k - 1}\rrbracket \smallsetminus \llbracket f > {r}_{k}\rrbracket \] is measurable, by Exercise (2.3.4:1); let \( {\chi }_{k} \) denote its characteristic function. The function \[ {f}_{n} = \mathop{\sum }\limits_{{k = - \infty }}^{\infty }{r}_{k - 1}{\chi }_{k} \] is measurable: for it is the limit almost everywhere of the sequence of partial sums of the series on the right-hand side, and Exercises (2.3.3:3 and 4) apply. To each \( x \) in the domain of \( f \) there corresponds a unique \( k \) such that \( {r}_{k - 1} < f\left( x\right) \leq {r}_{k} \) ; then \[ 0 \leq f\left( x\right) - {f}_{n}\left( x\right) \leq {r}_{k} - {r}_{k - 1} < {2}^{-n}. \] Hence the sequence \( \left( {f}_{n}\right) \) converges almost everywhere to \( f \), which is therefore measurable, again by Exercise (2.3.3: 4). The exercises in the next set extend the ideas used in the proof of Proposition (2.3.5).
In particular, when taken together with the subsequent discussion of measurability in the sense of Carathéodory (a concept defined shortly), the second and third exercises link our approach to integration with the one originally used by Lebesgue; see [40], pages 94-96. ## (2.3.6) Exercises .1 Let \( f \) be a function defined almost everywhere on \( \mathbf{R} \) . Prove that the following conditions are equivalent. (i) \( f \) is measurable. (ii) \( \llbracket f \geq r\rrbracket \) is measurable for each \( r \) . (iii) \( \llbracket f \leq r\rrbracket \) is measurable for each \( r \) . (iv) \( \llbracket f < r\rrbracket \) is measurable for each \( r \) . (v) \( \llbracket r \leq f < R\rrbracket \) is measurable whenever \( r < R \) . .2 In the notation of the second part of the proof of Proposition (2.3.5), prove that if \( f \) is nonnegative and integrable, then each \( {f}_{n} \) is integrable and \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {f}_{n} = \int f \) . .3 Let \( f \) be a nonnegative measurable function vanishing outside the interval \( \left\lbrack {a, b}\right\rbrack \) . For the purpose of this exercise, we call a sequence \( {\left( {r}_{n}\right) }_{n = 0}^{\infty } \) of real numbers admissible if \( {r}_{0} = 0 \) and there exists \( \delta > \) 0 such that \( {r}_{n + 1} - {r}_{n} < \delta \) for all \( n \) ; and we say that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}\mu \left( {E}_{n}\right) \) corresponds to the admissible sequence, where \( {E}_{n} \) , whose characteristic function we denote by \( {\chi }_{n} \), is the measurable set \( \llbracket {r}_{n - 1} \leq f < {r}_{n}\rrbracket \) . Suppose that this series converges. Let \( {\left( {r}_{n}^{\prime }\right) }_{n = 0}^{\infty } \) be any admissible sequence for \( f \), and let \( {\chi }_{n}^{\prime } \) be the characteristic function of \( {E}_{n}^{\prime } = \llbracket {r}_{n - 1}^{\prime } \leq f < {r}_{n}^{\prime }\rrbracket \) . Prove that (i) the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n - 1}^{\prime }{\chi }_{n}^{\prime } \) and \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}{\chi }_{n}
\) converge almost everywhere to integrable functions, (ii) \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n - 1}^{\prime }{\chi }_{n}^{\prime } \leq f \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}{\chi }_{n} \) almost everywhere, (iii) the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n - 1}^{\prime }\mu \left( {E}_{n}^{\prime }\right) \) and \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}\mu \left( {E}_{n}\right) \) converge, and (iv) \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n - 1}^{\prime }\mu \left( {E}_{n}^{\prime }\right) \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}\mu \left( {E}_{n}\right) \) . Hence prove that if \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}\mu \left( {E}_{n}\right) \) converges for at least one admissible sequence \( \left( {r}_{n}\right) \), then \( f \) is integrable, and \( \int f \) is both the infimum of the set \[ \left\{ {\mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n}\mu \left( {E}_{n}\right) : \left( {r}_{n}\right) \text{ is admissible,}\forall n\left( {{E}_{n} = \left\lbrack {{r}_{n - 1} \leq f < {r}_{n}}\right\rbrack }\right) }\right\} \] and the supremum of the set \[ \left\{ {\mathop{\sum }\limits_{{n = 1}}^{\infty }{r}_{n - 1}\mu \left( {E}_{n}\right) : \left( {r}_{n}\right) \text{ is admissible,}\forall n\left( {{E}_{n} = \left\lbrack {{r}_{n - 1} \leq f < {r}_{n}}\right\rbrack }\right) }\right\} . \] .4 By a simple function we mean a finite sum of functions of the form \( {c\chi } \) , where \( c \in \mathbf{R} \) and \( \chi \) is the characteristic function of an integrable set. Let \( f \) be a nonnegative integrable function. Show that there exists a sequence \( \left( {f}_{n}\right) \) of simple functions such that (i) \( 0 \leq {f}_{n} \leq f \) for each \( n \) , (ii) \( f = \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} \) almost everywhere, and (iii) \( \int f = \mathop{\sum }\limits_{{n = 1}}^{\infty }\int {f}_{n} \) . (First reduce to the case where \( f \) is nonnegative and vanishes outside a compact interval. Then use the preceding exercise to construct \( {f}_{k} \) inductively such that \( \left. {\int \left( {f - \mathop{\sum }\limits_{{n = 1}}^{k}{f}_{n}}\right) < {2}^{-k}\text{.}}\right) \) This exercise relates our development to axiomatic measure theory, which is based on primitive notions of a "measurable subset" of a set \( X \) and the "measure" of such a set, and in which the integral is often built up in the following way. First, define a function \( f : X \rightarrow \mathbf{R} \) to be measurable if \( \llbracket f < \alpha \rrbracket \) is a measurable set for each \( \alpha \in \mathbf{R} \) . Next, define the integral of a simple function \( \mathop{\sum }\limits_{{n = 1}}^{N}{c}_{n}{\chi }_{{A}_{n}} \), where the measurable sets \( {A}_{n} \) are pairwise disjoint, to be \( \mathop{\sum }\limits_{{n = 1}}^{N}{c}_{n}\mu \left( {A}_{n}\right) \) . If \( f \) is a nonnegative measurable function, then define its integral to be the supremum of the integrals of simple functions \( s \) which satisfy \( 0 \leq s \leq f \) on the complement of a set whose measure is 0 . For this approach to integration see, for example, [43] or [44]. There is another definition of measurability for sets, due to Carathéodory: we call a set \( A \subset \mathbf{R}C \) -measurable if \[ {\mu }^{ * }\left( {A \cap I}\right) + {\mu }^{ * }\left( {I \smallsetminus A}\right) = \left| I\right| \] for each compact interval \( I \) . 
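Before relating \( C \)-measurability to our original notion, here is a small numerical illustration, not part of the text, of the slicing sums in Exercise (2.3.6:3). It uses the illustrative choices \( f\left( x\right) = {x}^{2} \) on \( \left\lbrack {0,1}\right\rbrack \) and the admissible sequence \( {r}_{n} = {n\delta } \), measures each slice \( {E}_{n} \) inside \( \left\lbrack {0,1}\right\rbrack \) (where it is an interval whose length is known exactly), and compares the lower sum \( \sum {r}_{n - 1}\mu \left( {E}_{n}\right) \) and the upper sum \( \sum {r}_{n}\mu \left( {E}_{n}\right) \) with \( \int f = 1/3 \); the helper name slice_sums is ours.

```python
# Sketch only: Lebesgue-style slicing sums for f(x) = x^2 on [0, 1] with the
# admissible sequence r_n = n * delta (both are illustrative choices).
# The slice E_n = {x in [0, 1] : r_{n-1} <= x^2 < r_n} is the interval
# [sqrt(r_{n-1}), sqrt(min(r_n, 1))), so its measure is known exactly.
from math import sqrt

def slice_sums(delta):
    lower, upper, n = 0.0, 0.0, 1
    while (n - 1) * delta < 1.0:          # slices are empty once r_{n-1} >= max f = 1
        r_prev, r_next = (n - 1) * delta, n * delta
        measure = sqrt(min(r_next, 1.0)) - sqrt(r_prev)
        lower += r_prev * measure          # contributes to sum r_{n-1} mu(E_n)
        upper += r_next * measure          # contributes to sum r_n mu(E_n)
        n += 1
    return lower, upper

for delta in (0.1, 0.01, 0.001):
    lo, hi = slice_sums(delta)
    print(delta, lo, hi)   # lo <= 1/3 <= hi, and both tend to 1/3 as delta -> 0
```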
We prove two lemmas that enable us to show that this notion of measurability is equivalent to our original one. (2.3.7) Lemma. Let \( f \) be an integrable function. Then there exists a sequence \( \left( {f}_{n}\right) \) of step functions converging almost everywhere to \( f \) . Moreover, if \( f \) vanishes outside a compact interval \( I \), then (i) \( {f}_{n} \) can be chosen to vanish outside \( I \) ; and (ii) if, in addition, \( f \) is the characteristic function of an integrable set, then \( {f}_{n} \) can be taken as the characteristic function of a finite union of subintervals of \( I \) . Proof. Since \( f = \mathop{\lim }\limits_{{n \rightarrow \infty }}f{\chi }_{\left\lbrack -n, n\right\rbrack } \), it suffices to consider the case where \( f \) vanishes outside a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) . For each \( n \) let \[ a = {x}_{n,0} < {x}_{n,1} < \cdots < {x}_{n,{2}^{n}} = b \] be a partition of \( \left\lbrack {a, b}\right\rbrack \) into \( {2}^{-n} \) subintervals of equal length. Define a step function \( {f}_{n} \) by setting \[ {f}_{n}\left( x\right) = \left\{ \begin{array}{ll} {2}^{-n}{\int }_{{x}_{n, j}}^{{x}_{n, j + 1}}f & \text{ if }{x}_{n, j} < x < {x}_{n, j + 1} \\ 0 & \text{ otherwise,} \end{array}\right. \] and let \[ E = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }\left\{ {{x}_{n, j} : 0 \leq j \leq {2}^{n}}\right\} \cup \{ x \in \mathbf{R} : f\left( x\right) \text{ is undefined }\} . \] Then \( E \) has measure zero, and, by Exercise (2.2.4: 9), \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) = f\left( x\right) \) for all \( x \) in \( E \) . This completes the proof of (i). Now suppose that \( f \) is the characteristic function of an integrable set \( A \subset I \), and, using the first part of the proof, construct a sequence \( \left( {\phi }_{n}\right) \) of step functions that vanish outside \( I \) and converge almost everywhere to \( f \) . Define \[ {f}_{n} = \left\{ \begin{array}{ll} 1 & \text{ if }{\phi }_{n}\left( x\right) > \frac{1}{2} \\ 0 & \text{ otherwise. } \end{array}\right. \] Then \( {f}_{n} \) is the characteristic function of a finite union of subintervals of \( I \) , and \( f = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n} \) almost everywhere. (2.3.8) Lemma. Let \( A \) be an integrable subset of a compact interval \( I \) . Then there exists a decreasing sequence \( \left( {\chi }_{n}\right) \) of integrable functions converging almost everywhere to \( {\chi }_{A} \), such that each \( {\chi }_{n} \) is the characteristic function of a countable union of pairwise-disjoint bounded open intervals. Proof. Using Lemma (2.3.7), choose a sequence \( \left( {f}_{n}\right) \) of step functions that vanish outside \( I \) and converge almost everywhere to \( {\chi }_{A} \), such that each \( {f}_{n} \) is the characteristic function of a finite union \( {S}_{n} \) of pairwise-disjoint bounded open intervals. Then \( {\chi }_{A} \) is also the limit, almost everywhere, of the decreasing sequence \( \left( {g}_{n}\right) \), where \[ {g}_{n} = \mathop{\sup }\limits_{{k \geq n}}{f}_{k} \] Also, \( {g}_{n} \) is the characteristic function of \( \mathop{\bigcup }\limits_{{k = n}}^{\infty }{S}_{k} \), which is a countable union of bounded open subintervals of \( I \) . 
We now build up a sequence \( {\left( {T}_{k}\right) }_{k = n}^{\infty } \) of finite collections of pairwise-disjoint bounded open intervals, as follows: taking \( {T}_{n} = {S}_{n} \), suppose we have constructed \( {T}_{N} \) for some \( N \geq n \), and form \( {T}_{N + 1} \) by adjoining to \( {T}_{N} \) all the intervals of the form \( J \smallsetminus \mathop{\bigcup }\limits_{{k = n}}^{N}{T}_{k} \) with \( J \in {S}_{N + 1} \) . Let \( {\chi }_{n} \) be the characteristic function of \( \mathop{\bigcup }\limits_{{k = n}}^{\infty }{T}_{k} \), which is a countable union of pairwise-disjoint bounded open intervals. Then \( {\chi }_{n} = {g}_{n} \) almost everywhere, so \( {\chi }_{n} \) converges to \( {\chi }_{A} \) almost everywhere. (2.3.9) Proposition. Let \( A \) be a subset of \( \mathbf{R} \) . Then (i) \( A \) is measurable if and only if it is \( C \) -measurable; (ii) \( A \) is integrable if and only if it is measurable and has finite outer measure, in which case \( {\mu }^{ * }\left( A\right) = \mu \left( A\right) \) . Proof. Assume, to begin with, that \( A \) is \( C \) -measurable. Given a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) and \( \varepsilon > 0 \), choose sequences \( \left( {I}_{n}\right) \) and \( \left( {J}_{n}\right) \) of bounded open intervals such that \[ A \cap I \subset \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{I}_{n} \] \[ I \smallsetminus A \subset \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{J}_{n} \] \[ \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| < {\mu }^{ * }\left( {A \cap I}\right) + \varepsilon /2 \] \[ \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {J}_{n}\right| < {\mu }^{ * }\left( {I \smallsetminus A}\right) + \varepsilon /2 \] By Lebesgue's Series Theorem (Exercise (2.2.13: 4)), the functions \[ g = {\chi }_{I} - \mathop{\sum }\limits_{{n = 1}}^{\infty }{\chi }_{{J}_{n}} \] \[ h = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\chi }_{{I}_{n}} \] are defined almost everywhere and integrable, \[ \int g = b - a - \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {
J}_{n}\right| \] and \[ \int h = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {I}_{n}\right| \] So \[ \int \left( {h - g}\right) < {\mu }^{ * }\left( {A \cap I}\right) + {\mu }^{ * }\left( {I \smallsetminus A}\right) - \left( {b - a}\right) + \varepsilon = \varepsilon . \] Since \[ g \leq {\chi }_{A \cap I} \leq h \] almost everywhere and \( \varepsilon > 0 \) is arbitrary, we see from Exercise (2.2.13:12) that \( {\chi }_{A \cap I} \) is integrable. Moreover, \[ \int {\chi }_{A \cap I} \geq \int g \] \[ \geq b - a - {\mu }^{ * }\left( {I \smallsetminus A}\right) - \frac{\varepsilon }{2} \] \[ = {\mu }^{ * }\left( {A \cap I}\right) - \frac{\varepsilon }{2} \] and \[ \int {\chi }_{A \cap I} \leq \int h \leq {\mu }^{ * }\left( {A \cap I}\right) + \frac{\varepsilon }{2}. \] Again as \( \varepsilon > 0 \) is arbitrary, we see that \( \int {\chi }_{A \cap I} = {\mu }^{ * }\left( {A \cap I}\right) \) . Since \( {\chi }_{A} \) is the limit of the sequence \( {\left( {\chi }_{A \cap \left\lbrack {-n, n}\right\rbrack }\right) }_{n = 1}^{\infty } \), it follows that \( A \) is measurable in our original sense. If also \( {\mu }^{ * }\left( A\right) \) is finite, then \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mu }^{ * }\left( {A \cap \left\lbrack {-n, n}\right\rbrack }\right) = {\mu }^{ * }\left( A\right) \] by Exercise (2.1.1:10); so, applying Beppo Levi's Theorem (2.2.12), we conclude that \( A \) is integrable, with \( \mu \left( A\right) = {\mu }^{ * }\left( A\right) \) . On the other hand, if A is integrable, then Lebesgue’s Dominated Convergence Theorem (2.2.14) shows that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mu }^{ * }\left( {A \cap \left\lbrack {-n, n}\right\rbrack }\right) = \int {\chi }_{A} \] It then follows from Exercise (2.1.1: 10) that \( {\mu }^{ * }\left( A\right) = \mu \left( A\right) \) . It remains to prove that measurability implies \( C \) -measurability. Accordingly, let \( A \) be measurable in our original sense, and again let \( I = \left\lbrack {a, b}\right\rbrack \) be any compact interval. Using Lemma (2.3.8), construct a decreasing sequence \( \left( {\chi }_{n}\right) \) of integrable functions converging almost everywhere to \( {\chi }_{A \cap I} \) , such that each \( {\chi }_{n} \) is the characteristic function of the union of a sequence \( {\left( {I}_{n, k}\right) }_{k = 1}^{\infty } \) of pairwise-disjoint bounded open intervals. Then \( \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{I}_{n, k} \) includes \( \left( {A \cap I}\right) \smallsetminus E \), where \( E \) is a (possibly empty) set of measure zero; so \[ {\mu }^{ * }\left( {A \cap I}\right) \leq \mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {I}_{n, k}\right| = \int {\chi }_{n} \] the last equality being a consequence of Beppo Levi's Theorem (2.2.12). By Lebesgue's Dominated Convergence Theorem, we now have \[ \mu \left( {A \cap I}\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\int {\chi }_{n} \geq {\mu }^{ * }\left( {A \cap I}\right) \] Similarly, \[ b - a - \mu \left( {A \cap I}\right) = \mu \left( {I \smallsetminus A}\right) \geq {\mu }^{ * }\left( {I \smallsetminus A}\right) , \] so \[ {\mu }^{ * }\left( {A \cap I}\right) + {\mu }^{ * }\left( {I \smallsetminus A}\right) \leq b - a. \] But Exercise (2.1.1: 6) shows that \( {\mu }^{ * }\left( {A \cap I}\right) + {\mu }^{ * }\left( {I \smallsetminus A}\right) \geq b - a \) ; so \[ {\mu }^{ * }\left( {A \cap I}\right) + {\mu }^{ * }\left( {I \smallsetminus A}\right) = b - a, \] and therefore \( A \) is measurable. 
## (2.3.10) Exercise Let \( I \) be a compact interval, and \( f \) an integrable function that vanishes outside \( I \) . Prove that there exists a sequence \( \left( {f}_{n}\right) \) of continuous functions, each vanishing outside \( I \), such that \( \int \left| {f - {f}_{n}}\right| \rightarrow 0 \) . (Reduce to the case where \( f \) is bounded. Then use Lemma (2.3.7) to reduce to the case where \( f \) is a step function.) Are all subsets of \( \mathbf{R} \) measurable? No: the Axiom of Choice (Appendix B) ensures that nonmeasurable sets exist. \( {}^{5} \) To show this, following Zermelo, we define an equivalence relation \( \sim \) on \( \lbrack 0,1) \) by \[ x \sim y\text{if and only if}x - y \in \mathbf{Q}\text{.} \] Let \( \dot{x} \) denote the equivalence class of \( x \) under this relation. By the Axiom of Choice, there exists a function \( \phi \) on the set of these equivalence classes such that \[ \phi \left( \dot{x}\right) \in \dot{x}\;\left( {x \in \lbrack 0,1}\right) ). \] Let \[ E = \{ \phi \left( \dot{x}\right) : x \in \lbrack 0,1)\} . \] Now let \( {r}_{1},{r}_{2},\ldots \) be a one-one enumeration of \( \mathbf{Q} \cap \lbrack 0,1) \), and for each \( n \) define \[ {A}_{n} = E \cap \left\lbrack {0,{r}_{n}}\right) \] \[ {B}_{n} = E \cap \left\lbrack {{r}_{n},1}\right) \] \[ {E}_{n}^{0} = \left\{ {x \in \lbrack 0,1) : x + {r}_{n} - 1 \in {A}_{n}}\right\} \] \[ {E}_{n}^{1} = \left\{ {x \in \lbrack 0,1) : x + {r}_{n} \in {B}_{n}}\right\} \] \[ {E}_{n} = {E}_{n}^{0} \cup {E}_{n}^{1} \] We show that if \( {r}_{n} < {r}_{m} \), then the sets \( {E}_{m},{E}_{n} \) are disjoint. To this end, first note that \[ {E}_{k} = \left\{ {x \in \lbrack 0,1) : x + {r}_{k} - \left\lfloor {x + {r}_{k}}\right\rfloor \in E}\right\} , \] --- \( {}^{5} \) Solovay [48] has shown that there is a model of Zermelo-Fraenkel set theory, without the Axiom of Choice, in which every subset of \( \mathbf{R} \) is Lebesgue measurable. --- where \( \lfloor x\rfloor \) denotes the integer part of the real number \( x \) . Suppose that \( x \in {E}_{m} \cap {E}_{n} \), so that \[ {y}_{m} = x + {r}_{m} - \left\lfloor {x + {r}_{m}}\right\rfloor \in E \] and \[ {y}_{n} = x + {r}_{n} - \left\lfloor {x + {r}_{n}}\right\rfloor \in E. \] Then \[ {y}_{m} - {y}_{n} = {r}_{m} - {r}_{n} + \text{ integer } \] is a rational number. Since \( E \) contains exactly one element from each equivalence class under \( \sim \), we must have \( {y}_{m} = {y}_{n} \) ; so \( {r}_{m} - {r}_{n} \) is an integer, which is impossible as \( 0 \leq {r}_{n} < {r}_{m} < 1 \) . Hence, in fact, \( {E}_{m} \cap {E}_{n} \) is empty. Now suppose that \( E \) is measurable; then \( {A}_{n},{B}_{n} \) are measurable and have finite measure. Since \( {E}_{n}^{0} \) and \( {E}_{n}^{1} \) are translates of \( {A}_{n} \) and \( {B}_{n} \), respectively, it follows from Exercise (2.2.9:9) that \( {E}_{n} \) is measurable, with \( \mu \left( {E}_{n}\right) = \mu \left( E\right) \) . But \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{E}_{n} = \lbrack 0,1) \), so by Exercise (2.3.4:2), \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\mu \left( {E}_{n}\right) = 1 \) . This is absurd, since an infinite series with all terms equal cannot converge unless all its terms are 0 . Hence \( E \) is not measurable. For more on nonmeasurable sets, see Chapter 5 of [33]. ## (2.3.11) Exercises .1 Let \( E \) be a nonmeasurable subset of \( \mathbf{R} \), and \( A \) a subset of \( E \) that is measurable. Prove that \( {\mu }^{ * }\left( A\right) = 0 \) . 
.2 Give an example of a nonmeasurable function \( f \) such that \( \left| f\right| \) is integrable. At first sight it might appear that our approach to the Lebesgue integral cannot be generalised to multiple integrals. However, in the context of \( {\mathbf{R}}^{n} \) it is relatively straightforward to develop notions of outer measure, set of measure zero, and Dini derivates (of a special, set-based kind), and it is not too hard to prove a version of the Vitali Covering Theorem and hence of Fubini's Series Theorem ([46], Chapter 4). With these at hand, as Riesz has pointed out, \( {}^{6} \) it is indeed possible to develop the Lebesgue integral in \( {\mathbf{R}}^{n} \) by --- \( {}^{6} \) Il ne s’agira, dans le présent Mémoire, que les fonctions d’une seule variable et il pourrait paraître, à première vue, comme si notre méthode était façonnée entièrement sur ce cas particulier. Dans cet ordre d’idées, il convient d’observer que l’on aurait pu baser les considérations, au lieu de la dérivée au sens ordinaire, sur l’idée moins exigeante de dérivée par rapport à un réseau, comme s’en sert M. de la Vallée Poussin pour l’étude de la dérivation des fonctions d'ensemble [53]. Non seulement que la démonstration de l'existence presque partout de cette sorte de dérivée d'une fonction monotone est presque immédiate, mais en outre on ne rencontre aucune nouvelle difficulté quand on veut passer au cas de plusieurs variables et les considérations concernant l'intégrale s'étendent à ce cas général avec des modifications évidentes. ([39], pages 192-193) --- methods akin to those we have used for one-dimensional integration. But as there are more illuminating approaches to integration on \( {\mathbf{R}}^{n} \), especially once a general theory of measures has been developed (see [44] or [43]), we do not discuss the theory of multivariate integrals in this book. Part II ## Abstract Analysis 3 Analysis in Metric Spaces ...an excellent play; well digested in the scenes, set down with as much modesty as cunning. HAMLET, Act 2, Scene 2 In Section 1 we abstract many of the ideas from Chapter 1 to the context of a metric space, a set in which we can measure the distance between two points. In Section 2 we discuss limits and continuity in that context. Section 3 deals with compactness, which, as a substitute for finiteness, is perhaps the single most useful concept in analysis. The next section covers connectedness and lifts the Intermediate Value Theorem into its proper context. Finally, in Section 5, we study the product of a family of metric spaces, thereby enabling us to deal with analysis in \( {\mathbf{R}}^{n} \) and \( {\mathbf{C}}^{n} \) . ## 3.1 Metric and Topological Spaces The notion of a metric space generalises the properties of \( \mathbf{R} \) that are associated with the dista
nce given by the function \( \left( {x, y}\right) \mapsto \left| {x - y}\right| \) . A further generalisation, which we touch on at the end of this section, is a topological space, in which, since there may be no analogue of distance, the concept of open set plays a primary role. A metric, or distance function, on a set \( X \) is a mapping \( \rho \) of \( X \times X \) into \( \mathbf{R} \) such that the following properties hold for all \( x, y, z \) in \( X \) . M1 \( \rho \left( {x, y}\right) \geq 0 \) . M2 \( \rho \left( {x, y}\right) = 0 \) if and only if \( x = y \) . M3 \( \rho \left( {x, y}\right) = \rho \left( {y, x}\right) \) . M4 \( \rho \left( {x, y}\right) \leq \rho \left( {x, z}\right) + \rho \left( {z, y}\right) \) (triangle inequality). A metric space is a pair \( \left( {X,\rho }\right) \) consisting of a set \( X \) and a metric \( \rho \) on \( X \) ; when the identity of the metric is clear from the context, we simply refer to \( X \) itself as a metric space. We use the letter \( \rho \) to denote the metric on any metric space, except where it might be confusing to do so. The standard example of a metric space is, of course, the real line \( \mathbf{R} \) taken with the metric \( \left( {x, y}\right) \mapsto \left| {x - y}\right| \) . More generally, if \( S \) is a subset of \( \mathbf{R} \), then the restriction of this metric to a function on \( S \times S \) is a metric on \( S \) . Unless we say otherwise, whenever we consider \( S \subset \mathbf{R} \) as a metric space, we assume that it carries this canonical metric. ## (3.1.1) Exercises .1 Let \( {x}_{1},\ldots ,{x}_{n} \) be elements of a metric space \( X \) . Prove the generalised triangle inequality: \[ \rho \left( {{x}_{1},{x}_{n}}\right) \leq \rho \left( {{x}_{1},{x}_{2}}\right) + \rho \left( {{x}_{2},{x}_{3}}\right) + \cdots + \rho \left( {{x}_{n - 1},{x}_{n}}\right) . \] .2 Let \( X \) be a set. Prove that the mapping \( \rho : X \times X \rightarrow \mathbf{R} \), defined by \[ \rho \left( {x, y}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }x = y \\ 1 & \text{ if }x \neq y \end{array}\right. \] is a metric on \( X \) . This metric is called the discrete metric, and \( X \), taken with the discrete metric, is called a discrete space. .3 Prove that each of the following mappings from \( {\mathbf{R}}^{n} \times {\mathbf{R}}^{n} \) to \( \mathbf{R} \) is a metric on \( {\mathbf{R}}^{n} \) .
(i) \( \left( {x, y}\right) \mapsto \mathop{\sum }\limits_{{i = 1}}^{n}\left| {{x}_{i} - {y}_{i}}\right| \; \) (taxicab metric). (ii) \( \left( {x, y}\right) \mapsto \max \left\{ {\left| {{x}_{i} - {y}_{i}}\right| : 1 \leq i \leq n}\right\} \) . Here, and in the next two exercises, \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) and \( y = \) \( \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) . .4 Prove the Cauchy-Schwarz inequality, \[ \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}{y}_{i} \leq {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}^{2}\right) }^{1/2}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{y}_{i}^{2}\right) }^{1/2}. \] Hence prove Minkowski's inequality, \[ {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{\left( {x}_{i} - {y}_{i}\right) }^{2}\right) }^{1/2} \leq {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}^{2}\right) }^{1/2} + {\left( \mathop{\sum }\limits_{{i = 1}}^{n}{y}_{i}^{2}\right) }^{1/2}. \] .5 Show that the mapping \[ \left( {x, y}\right) \mapsto \sqrt{\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {x}_{i} - {y}_{i}\right) }^{2}} \] is a metric on \( {\mathbf{R}}^{n} \) . (Note the preceding exercise.) This metric is known as the Euclidean metric, and \( {\mathbf{R}}^{n} \), taken with the Euclidean metric, is known as Euclidean \( n \) -space or \( n \) -dimensional Euclidean space. .6 Let \( p \) be a prime number. For each positive integer \( n \) define \( {v}_{p}\left( n\right) \) to be the exponent of \( p \) in the prime factorisation of \( n \) . For each rational number \( r = \pm m/n \), where \( m, n \) are positive integers, define \[ {v}_{p}\left( r\right) = {v}_{p}\left( m\right) - {v}_{p}\left( n\right) . \] Show that this definition does not depend on the particular representation of \( r \) as a quotient of integers, and that if \( {r}^{\prime } \) is also rational, then \[ {v}_{p}\left( {r{r}^{\prime }}\right) = {v}_{p}\left( r\right) + {v}_{p}\left( {r}^{\prime }\right) \] Finally, show that \[ \rho \left( {x, y}\right) = \left\{ \begin{array}{ll} {p}^{-{v}_{p}\left( {x - y}\right) } & \text{ if }x \neq y \\ 0 & \text{ if }x = y \end{array}\right. \] defines a metric \( \rho \) -which we call the \( p \) -adic metric - on \( \mathbf{Q} \), such that \[ \rho \left( {x, z}\right) \leq \max \{ \rho \left( {x, y}\right) ,\rho \left( {y, z}\right) \} . \] On any set \( X \) a metric \( \rho \) that satisfies this last property is called an ultrametric, and \( \left( {X,\rho }\right) \) is called an ultrametric space; clearly, \( X \) is then a metric space. .7 Let \( X \) be a nonempty set, and denote by \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) the set of all bounded mappings of \( X \) into \( \mathbf{R} \) . Show that \[ \rho \left( {f, g}\right) = \sup \{ \left| {f\left( x\right) - g\left( x\right) }\right| : x \in X\} \] defines a metric on \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) . From now on, when we refer to \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) as a metric space, it is understood that the metric is the one defined in this exercise. .8 A pseudometric on a set \( X \) is a mapping \( \rho : X \times X \rightarrow \mathbf{R} \) that satisfies \( \mathbf{{M1},{M3},{M4}} \) and the following weakening of \( \mathbf{{M2}} \) : if \( x = y \) , then \( \rho \left( {x, y}\right) = 0 \) . The pair \( \left( {X,\rho }\right) \), or, loosely, \( X \) itself, is then called a pseudometric space. 
Prove that in that case, \[ x \sim y\text{ if and only if }\rho \left( {x, y}\right) = 0 \] defines an equivalence relation on \( X \), and that \[ \rho \left( {\bar{x},\bar{y}}\right) = \rho \left( {x, y}\right) \] defines a metric \( \rho \) on the quotient set \( X/ \sim \), where \( \bar{x} \) is the corresponding equivalence class of \( x \) . In practice, we often identify \( X/ \sim \) with \( X \), and thereby turn \( X \) into a metric space, by calling two elements \( x, y \) of \( X \) equal if \( \rho \left( {x, y}\right) = 0 \) or, equivalently, if \( \bar{x} = \bar{y} \) . .9 Prove that \[ \rho \left( {f, g}\right) = \widehat{{\int }_{a}^{b}}\left| {f - g}\right| \] defines a metric on the set of continuous real-valued mappings on the compact interval \( \left\lbrack {a, b}\right\rbrack \), where \( \widehat{{\int }_{a}^{b}} \) denotes the Riemann integral. .10 Prove that \[ \rho \left( {f, g}\right) = \int \left| {f - g}\right| \] defines a pseudometric on the set of Lebesgue integrable functions on \( \mathbf{R} \) . The corresponding metric space (see Exercise (3.1.1: 8)) is denoted by \( {L}_{1}\left( \mathbf{R}\right) \) . We see from Exercise (2.2.4: 6) that two elements of \( {L}_{1}\left( \mathbf{R}\right) \) are equal if and only if, as functions, they are equal almost everywhere. Let \( X \) and \( Y \) be metric spaces. A bijection \( f \) of \( X \) onto \( Y \) is called an isometry if \[ \rho \left( {f\left( x\right), f\left( y\right) }\right) = \rho \left( {x, y}\right) \] for all \( x, y \) in \( X \), in which case the inverse mapping \( {f}^{-1} \) is an isometry of \( Y \) onto \( X \), and the spaces \( X \) and \( Y \) are said to be isometric (under \( f \) ). Two isometric spaces can be regarded as indistinguishable for all practical purposes that involve only distance. Now let \( X \) be a metric space, and \( Y \) a set in one-one correspondence with \( X \) . With any bijection \( f \) of \( X \) onto \( Y \) there is associated a natural metric \( {\rho }_{Y} \) on \( Y \), defined by setting \[ {\rho }_{Y}\left( {f\left(
( x\right), f\left( y\right) }\right) = \rho \left( {x, y}\right) . \] We say that the metric \( \rho \) has been transported from \( X \) to \( Y \) by \( f \) . The mapping \( f \) is then an isometry from \( \left( {X,\rho }\right) \) onto \( \left( {Y,{\rho }_{Y}}\right) \) . An important example of the transport of a metric occurs in connection with the real line \( \mathbf{R} \), and enables us, in Section 3.2, to discuss the convergence of sequences in a metric space as a special case of the convergence of functions. The mapping \( f \) defined on \( \mathbf{R} \) by \[ f\left( x\right) = \frac{x}{1 + \left| x\right| }\;\left( {x \in \mathbf{R}}\right) \] is an order-preserving bijection of \( \mathbf{R} \) onto the open interval \( \left( {-1,1}\right) \), with inverse mapping \( g \) defined by \[ g\left( y\right) = \frac{y}{1 - \left| y\right| }\;\left( {\left| y\right| < 1}\right) . \] Let \( \overline{\mathbf{R}} \) be obtained from \( \mathbf{R} \) by adjoining two new elements \( - \infty \) and \( \infty \) , called the points at infinity. (Note that \( - \infty \) and \( \infty \) are not real numbers, and that the real numbers are often referred to as the finite elements of \( \overline{\mathbf{R}} \) .) Extend \( f \) to a bijection of \( \overline{\mathbf{R}} \) onto \( \left\lbrack {-1,1}\right\rbrack \) by setting \[ f\left( {-\infty }\right) = - 1\text{and}f\left( \infty \right) = 1\text{.} \] Then \( g \) extends to a bijection of \( \left\lbrack {-1,1}\right\rbrack \) onto \( \overline{\mathbf{R}} \), such that the extended mapping \( g \) is the inverse of the extended mapping \( f \) . Now transport (by \( g \) ) the standard metric \( \left( {s, t}\right) \mapsto \left| {s - t}\right| \) from \( \left\lbrack {-1,1}\right\rbrack \) to \( \overline{\mathbf{R}} \) ; that is, define \[ {\rho }_{\overline{\mathbf{R}}}\left( {x, y}\right) = \left| {f\left( x\right) - f\left( y\right) }\right| \;\left( {x, y \in \overline{\mathbf{R}}}\right) . \] Taken with the metric \( {\rho }_{\overline{\mathbf{R}}} \), the set \( \overline{\mathbf{R}} \) becomes a metric space, called the extended real line. Note that \( {\rho }_{\overline{\mathbf{R}}} \), restricted to \( \mathbf{R} \), is different from the standard metric \( \left( {x, y}\right) \mapsto \left| {x - y}\right| \) on \( \mathbf{R} \) . We introduce the order relations \( > , \geq \) on \( \overline{\mathbf{R}} \) (and hence the opposite relations \( < \) , \( \leq \) ) by setting \[ x > y\text{if and only if}f\left( x\right) > f\left( y\right) \text{,} \] \[ x \geq y\text{if and only if}f\left( x\right) \geq f\left( y\right) \text{.} \] On \( \mathbf{R} \) these relations coincide with the respective standard inequality relations. ## (3.1.2) Exercises . 1 Prove that the function \( {\rho }_{\overline{\mathbf{R}}} \) is a metric on \( \overline{\mathbf{R}} \) . .2 Show that the relations \( > \) and \( \geq \) on \( \bar{\mathbf{R}} \) have the properties that you would expect. In particular, prove that (i) \( - \infty < x < \infty \) for all \( x \in \mathbf{R} \) ; (ii) a nonempty subset \( S \) of \( \mathbf{R} \) is bounded, and has a supremum and infimum, relative to the order \( \geq \) on \( \overline{\mathbf{R}} \) (where \( \sup S \) and \( \inf S \) may equal \( \infty \) or \( - \infty ) \) ; (iii) when restricted to \( \mathbf{R} \), the order relations \( > \) and \( \geq \) on \( \overline{\mathbf{R}} \) coincide with the standard order relations \( > \) and \( \geq \) . 
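The transported metric on \( \overline{\mathbf{R}} \) is easy to experiment with numerically. The following sketch is ours, not the book's: it codes \( f \) and \( {\rho }_{\overline{\mathbf{R}}} \) as defined above (using floating-point infinities for the points at infinity), checks that \( {\rho }_{\overline{\mathbf{R}}}\left( {\infty, n}\right) = {\left( n + 1\right) }^{-1} \), the computation that reappears in Proposition (3.2.5), and shows that points far out on \( \mathbf{R} \) are close to each other in \( {\rho }_{\overline{\mathbf{R}}} \) even though \( \left| {x - y}\right| \) is large.

```python
# Sketch only: the metric transported from [-1, 1] to the extended real line.
import math

def f(x):
    # the order-preserving bijection of the text, extended by f(-inf) = -1, f(inf) = 1
    if math.isinf(x):
        return 1.0 if x > 0 else -1.0
    return x / (1.0 + abs(x))

def rho_bar(x, y):
    return abs(f(x) - f(y))

# rho_bar(inf, n) = 1 - n/(n + 1) = 1/(n + 1); compare Proposition (3.2.5).
for n in (1, 2, 10, 100):
    assert abs(rho_bar(math.inf, n) - 1.0 / (n + 1)) < 1e-12

# Restricted to R, rho_bar is not the standard metric: 10**6 and 10**7 are far
# apart in |x - y| but very close in rho_bar.
print(rho_bar(10**6, 10**7))
```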
Let \( \left( {X,\rho }\right) \) be a metric space, \( a \in X \), and \( r > 0 \) . We define the open ball with centre a and radius \( r \) to be \[ B\left( {a, r}\right) = \{ x \in X : \rho \left( {a, x}\right) < r\} \] and the closed ball with center \( a \) and radius \( r \) to be \[ \bar{B}\left( {a, r}\right) = \{ x \in X : \rho \left( {a, x}\right) \leq r\} . \] For example, the open and closed balls with centre \( a \) and radius \( r \) in \( \mathbf{R} \) are the intervals \( \left( {a - r, a + r}\right) \) and \( \left\lbrack {a - r, a + r}\right\rbrack \), respectively; and the open ball with centre \( \infty \) and radius \( r \in \left( {0,1}\right) \) in \( \overline{\mathbf{R}} \) is \( \left( {{r}^{-1} - 1,\infty }\right) \cup \{ \infty \} \) . In order to define the notions of open set, interior point, interior of a set, neighbourhood, cluster point, closure, and closed set for a metric space \( X \), in the corresponding definition for subsets of \( \mathbf{R} \) we replace - the open interval \( \left( {x - r, x + r}\right) \) by its analogue, the open ball \( B\left( {x, r}\right) \) in \( X \), and - the inequality \( \left| {x - y}\right| < r \) by the inequality \( \rho \left( {x, y}\right) < r \) . For example, a subset \( A \) of \( X \) is said to be open (in \( X \) ) if for each \( x \in A \) there exists \( r > 0 \) such that \( B\left( {x, r}\right) \subset A \) . Propositions (1.3.2), (1.3.9), and (1.3.10), and the applicable parts of Exercises (1.3.7) and (1.3.8), carry over unchanged into the context of a metric space. When we mention those results in future, it is assumed that we are referring to their metric space versions. ## (3.1.3) Exercises . 1 Prove that \( X \) itself, the empty set \( \varnothing \subset X \), and the open balls in \( X \) are open sets; and that \( X,\varnothing \), and the closed balls in \( X \) are closed sets. .2 Give proofs of the metric space analogues of Proposition (1.3.2), Exercises (1.3.7: 3-8), and Exercises (1.3.8: 3-8). .3 Prove that a subset of \( X \) is closed if and only if \( X \smallsetminus S \) is open (cf. Proposition (1.3.9)). .4 Prove that the intersection of a family of closed sets is closed, and that the union of a finite family of closed sets is closed (cf. Proposition (1.3.10)). .5 Suppose that \( \rho \) is an ultrametric on \( X \) (see Exercise (3.1.1: 6)). Prove the following statements. (i) If \( \rho \left( {x, y}\right) \neq \rho \left( {y, z}\right) \), then \( \rho \left( {x, z}\right) = \max \{ \rho \left( {x, y}\right) ,\rho \left( {y, z}\right) \} \) . (ii) If \( y \in B\left( {x, r}\right) \), then \( B\left( {y, r}\right) = B\left( {x, r}\right) \) . (iii) Every open ball in \( X \) is a closed set. (iv) If two open balls in \( X \) have a nonempty intersection, then one of them is a subset of the other. Is every closed ball in \( X \) an open set? Does (iv) hold with "open ball" replaced by “ball”? .6 Two metrics on a set are said to be equivalent if they give rise to the same class of open sets. Prove that the Euclidean metric is equivalent to each of the metrics in Exercise (3.1.1: 3). .7 Prove that \( \infty \) is a cluster point of \( \mathbf{R} \), considered as a subset of the metric space \( \overline{\mathbf{R}} \) . If \( S \) is a subset of a metric space \( X \), then the restriction to \( S \times S \) of the metric \( \rho \) on \( X \) is a metric-also denoted by \( \rho \) -on \( S \), and is said to be induced on \( S \) by \( \rho \) . 
The set \( S \), taken with that induced metric, is called a (metric) subspace of \( X \) . ## (3.1.4) Exercise Prove that if \( x \in S \) and \( r > 0 \), then \( S \cap B\left( {x, r}\right) \) is the open ball, and \( S \cap \bar{B}\left( {x, r}\right) \) is the closed ball, with centre \( x \) and radius \( r \) in the subspace \( S \) . (3.1.5) Proposition. Let \( S \) be a subspace of the metric space \( \left( {X,\rho }\right) \), and \( A \) a subset of \( S \) . Then \( A \) is open in \( S \) if and only if \( A = S \cap E \) for some open set \( E \) in \( X \) ; and \( A \) is closed in \( S \) if and only if \( A = S \cap E \) for some closed set \( E \) in \( X \) Proof. We prove only the part dealing with open sets, since the other part then follows by considering complements. Accordingly, suppose that \( A = S \cap E \) for some open set \( E \) in \( X \), and let \( x \in A \) . Choosing \( r > 0 \) such that \( B\left( {x, r}\right) \subset E \), we see that \[ x \in S \cap B\left( {x, r}\right) \subset S \cap E. \] Since, by Exercise (3.1.4), \( S \cap B\left( {x, r}\right) \) is the open ball with centre \( x \) and radius \( r \) in \( S \), it follows that \( x \) is an interior point of \( S \cap E \) in the subspace \( S \) . Hence \( A = S \cap E \) is open in \( S \) . Conversely, suppose that \( A \) is open in \( S \) . Then, by Exercise (3.1.4), for each \( x \in A \) there exists \( {r}_{x} > 0 \) such that \( S \cap B\left( {x,{r}_{x}}\right) \subset A \) . So \[ A = \mathop{\bigcup }\limits_{{x \in A}}\left( {S \cap B\left( {x,{r}_{x}}\right) }\right) = S \cap \mathop{\bigcup }\limits_{{x \in A}}B\left( {x,{r}_{x}}\right) , \] where the set \( \mathop{\bigcup }\limits_{{x \in A}}B\left( {x,{r}_{x}}\right) \) is open in \( X \), by (the metric space analogue of) Proposition (1.3.2). ## (3.1.6) Exercises In each of these exercises \( S \) is a subspace of \( \left( {X,\rho }\right) \) . .1 Complete the proof of Proposition (3.1.5). .2 Prove that the following conditions are equivalent. (i) Every subset of \( S \) that is open in \( S \) is open in \( X \) . (ii) \( S \) is open in \( X \) . .3 Prove that the following conditions are equivalent. (i) Every subset of \( S \) that is closed in \( S \) is closed in \( X \) . (ii) \( S \) is closed in \( X \) . .4 Let \( x \in S \) and \( U \subset S \) . Show that \( U \) is a neighbourhood of \( x \) in \( S \) if and only if \( U = S \cap V \) for some neighbourhood \( V \) of \( x \) in \( X \) . .5 Let \( x \in S \) . Show that the following conditions are equivalent. (i) Every neighbourhood of \( x \) in \( S \) is a neighbourhood of \( x \) in \( X \) . (ii) \( S \) is a neighbourhood of \( x \) in \( X \) . Let \( A \) and \( B \) be subsets of \( X \) . We say that \( A \) is - dense with respect to \( B \) if \( B \subset \bar{A} \), and - dense in \
( X \), or everywhere dense, if \( \bar{A} = X \) . The space \( X \) is called separable if it contains a countable dense subset. For example, \( \mathbf{Q} \) and \( \mathbf{R} \smallsetminus \mathbf{Q} \) are dense in \( \mathbf{R} \), by Exercises (1.1.1:19) and (1.2.11:5). Thus \( \mathbf{R} \) is separable, as \( \mathbf{Q} \) is countable. (3.1.7) Proposition. If \( A \) is dense with respect to \( B \), and \( B \) is dense with respect to \( C \), then \( A \) is dense with respect to \( C \) . Proof. We have \( B \subset \bar{A} \) and \( C \subset \bar{B} \) . By Exercises (1.3.8: 7 and 3), \( \bar{B} \subset \overline{\left( \bar{A}\right) } = \bar{A} \) ; whence \( C \subset \bar{A} \) . ## (3.1.8) Exercises .1 Show that \( A \) is dense in \( X \) if and only if each nonempty open set in \( X \) contains a point of \( A \) . .2 Prove that \( \left( {-\infty , - 1}\right) \cup \left( {-1,1}\right) \cup \left( {1,\infty }\right) \) is dense in \( \mathbf{R} \) . .3 Prove that a nonempty subspace \( S \) of a separable metric space \( X \) is separable. (Let \( \left( {x}_{n}\right) \) be a dense sequence in \( X \) . For each positive integer \( m \) consider the set \( \left\{ {n : \rho \left( {{x}_{n}, S}\right) < 1/m}\right\} \) .) .4 Prove that the union of a countable family of separable subspaces of \( X \) is separable. What about the union of an uncountable family of separable subspaces? .5 A point \( x \) of a metric space is said to be isolated if there exists \( r > 0 \) such that \( B\left( {x, r}\right) = \{ x\} \) . Prove that the set of isolated points of a separable metric space is either empty or countable. .6 Prove that a nonempty family of pairwise-disjoint, nonempty open subsets of a separable metric space is countable. (Use the preceding exercise.) If \( S \) is a nonempty subset of \( X \) and \( x \in X \), then we define the distance from \( x \) to \( S \) to be the real number \[ \rho \left( {x, S}\right) = \inf \{ \rho \left( {x, s}\right) : s \in S\} . \] More generally, if \( T \) is also a nonempty subset of \( X \), then we define \[ \rho \left( {S, T}\right) = \inf \{ \rho \left( {s, t}\right) : s \in S, t \in T\} . \] (3.1.9) Proposition. If \( S \) is a nonempty subset of \( X \), and \( x, y \) are two points of \( X \), then \[ \left| {\rho \left( {x, S}\right) - \rho \left( {y, S}\right) }\right| \leq \rho \left( {x, y}\right) . \] Proof. 
For each \( s \in S \) we have \[ \rho \left( {x, S}\right) \leq \rho \left( {x, s}\right) \leq \rho \left( {x, y}\right) + \rho \left( {y, s}\right) . \] It follows that \[ \rho \left( {x, S}\right) \leq \rho \left( {x, y}\right) + \inf \{ \rho \left( {y, s}\right) : s \in S\} = \rho \left( {x, y}\right) + \rho \left( {y, S}\right) \] and therefore that \[ \rho \left( {x, S}\right) - \rho \left( {y, S}\right) \leq \rho \left( {x, y}\right) \] Similarly, \[ \rho \left( {y, S}\right) - \rho \left( {x, S}\right) \leq \rho \left( {x, y}\right) \] The result follows immediately. The diameter of a nonempty subset \( S \) of a metric space \( X \) is defined as \[ \operatorname{diam}\left( S\right) = \sup \{ \rho \left( {x, y}\right) : x \in S, y \in S\} \] and is either a nonnegative real number or \( \infty \) . Clearly, if \( S \subset T \), then \( \operatorname{diam}\left( S\right) \leq \operatorname{diam}\left( T\right) \) ; and \( \operatorname{diam}\left( S\right) = 0 \) if and only if \( S \) contains exactly one point. A subset \( S \) of \( X \) is said to be bounded if its diameter is finite - that is, if \( \operatorname{diam}\left( S\right) \in \mathbf{R} \) . ## (3.1.10) Exercises .1 For nonempty subsets \( S, T \) of \( X \), prove that \( \rho \left( {S, S}\right) = 0 \) and \( \rho \left( {S, T}\right) = \) \( \rho \left( {T, S}\right) \) . .2 Is it true that if \( S, T \) are closed subsets of \( \mathbf{R} \) such that \( \rho \left( {S, T}\right) = 0 \) , then \( S \cap T \) is nonempty? .3 Prove that a nonempty subset \( S \) of \( X \) is closed if and only if \( \rho \left( {x, S}\right) > \) 0 for each \( x \in X \smallsetminus S \) . .4 Let \( X \) be an ultrametric space, and \( B,{B}^{\prime } \) distinct open balls of radius \( r \) in \( X \) both of which are contained in a closed ball of radius \( r \) . Compute \( \rho \left( {B,{B}^{\prime }}\right) \) . (Note Exercise (3.1.3: 5).) .5 Is it true that \[ \operatorname{diam}\left( {B\left( {x, r}\right) }\right) = \operatorname{diam}\left( {\bar{B}\left( {x, r}\right) }\right) = {2r} \] for any metric space \( X, x \in X \), and \( r > 0 \) ? .6 Prove that (i) the union of two bounded subsets of a metric space is bounded; (ii) the union of finitely many bounded subsets of a metric space is bounded. Is the union of an infinite family of bounded subsets necessarily bounded? Although the notion of a metric space is sufficiently strong to underpin a large amount of analysis, the following more general notion is needed in more advanced work. \( {}^{1} \) A topological space \( \left( {X,\tau }\right) \) consists of a set \( X \) and a family \( \tau \) of subsets of \( X \) satisfying the following conditions. TO1 \( X \in \tau \) and \( \varnothing \in \tau \) . TO2 If \( {A}_{i} \in \tau \) for each \( i \in I \), then \( \mathop{\bigcup }\limits_{{i \in I}}{A}_{i} \in \tau \) . --- \( {}^{1} \) As we do not use the notion of a topology, other than a metric one, in the remainder of this book, this part of the section can be skipped without penalty. --- TO3 If \( {A}_{1} \in \tau \) and \( {A}_{2} \in \tau \), then \( {A}_{1} \cap {A}_{2} \in \tau \) . \( \tau \) is called the topology of the space, and the elements of \( \tau \) the open sets of that topology. When the topology \( \tau \) is clear from the context, we speak loosely of \( X \) as a topological space and of the elements of \( \tau \) as open sets in \( X \) . 
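As a quick sanity check of TO1-TO3, here is a small sketch (ours, not the book's) that verifies the axioms for one illustrative family of subsets of a three-point set; since the set is finite, every union of members of \( \tau \) arises from a finite subfamily, so the check is exhaustive. The two-element family \( \{ \varnothing, X\} \) that appears shortly as an example of a non-metrisable topology passes the same test.

```python
# Sketch only: checking TO1-TO3 for an illustrative family tau on X = {0, 1, 2}.
from itertools import chain, combinations

X = frozenset({0, 1, 2})
tau = {frozenset(), frozenset({0}), frozenset({0, 1}), X}

def subfamilies(family):
    family = list(family)
    return chain.from_iterable(combinations(family, k) for k in range(len(family) + 1))

assert X in tau and frozenset() in tau                    # TO1
for sub in subfamilies(tau):
    assert frozenset().union(*sub) in tau                 # TO2 (empty union is the empty set)
for A1 in tau:
    for A2 in tau:
        assert (A1 & A2) in tau                           # TO3
print("tau is a topology on X")
```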
A metric space \( \left( {X,\rho }\right) \) is associated with a topological space \( \left( {X,\tau }\right) \) in the obvious way: the open sets of \( \tau \) are precisely those subsets of \( X \) that are open relative to the metric \( \rho \) . In such a case we say that the metric \( \rho \) defines the topology \( \tau \), and we identify the metric space \( \left( {X,\rho }\right) \) with the associated topological space \( \left( {X,\tau }\right) \) . A topological space \( \left( {X,\tau }\right) \) is said to be metrisable if there is a metric \( \rho \) on \( X \) that defines the topology \( \tau \) . Not every topological space is metrisable. For example, if \( X = \{ 0,1\} \) is given the topology \( \tau \) consisting of \( \varnothing \) and \( X \) itself, then every neighbourhood of 0 intersects every neighbourhood of 1; if \( \tau \) were metrisable, then the distinct points 0,1 of \( X \) would have disjoint neighbourhoods - namely, \( B\left( {0,\frac{1}{2}}\right) \) and \( B\left( {1,\frac{1}{2}}\right) \) . For characterisations of metrisable topological spaces see [25]. Let \( S \) be a subset of a topological space \( X \), and \( x \in X \) . We say that \( x \) is an interior point of \( S \) if there is an open set \( A \) such that \( x \in A \subset S \) ; and we define the interior of \( S \) to be the set of all interior points of \( S \) . By a neighbourhood of \( x \) we mean a set \( U \subset X \) containing \( x \) in its interior. On the other hand, \( x \) is called a cluster point of \( S \) if each neighbourhood of \( x \) has a nonempty intersection with \( S \) ; and we define the closure of \( S \) (in \( X \) ) to be the set \( \bar{S} \) of all cluster points of \( S \) . A subset \( C \) of \( X \) is said to be closed (in \( X \) ) if it equals its closure. Propositions (1.3.2), (1.3.9), and (1.3.10), and the applicable parts of Exercises (1.3.7) and (1.3.8), all hold in the context of a topological space. ## (3.1.11) Exercises .1 Prove that the standard metric on \(
\mathbf{R} \), and the metric induced on \( \mathbf{R} \) as a subset of the extended real line \( \overline{\mathbf{R}} \), give rise to the same topology on \( \mathbf{R} \) . .2 Prove the statement immediately preceding this set of exercises. ## 3.2 Continuity, Convergence, and Completeness In contrast to our approach to limits in Chapter 1, in the context of a metric space we first introduce the notion of continuity. The following definition is intended to capture formally the idea that \( f\left( x\right) \) is close to \( f\left( a\right) \) whenever \( x \) is close to \( a \) . Let \( X, Y \) be metric spaces, and \( f \) a mapping of \( X \) into \( Y \) . We say that \( f \) is - continuous at the point \( a \in X \) if for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that \( \rho \left( {f\left( a\right), f\left( x\right) }\right) < \varepsilon \) whenever \( x \in X \) and \( \rho \left( {a, x}\right) < \delta \) ; - continuous on \( X \), or simply continuous, if it is continuous at each point of \( X \) . If \( f \) is not continuous at \( a \in X \), we say that \( f \) has a discontinuity at \( a \), or that \( f \) is discontinuous at \( a \) . ## (3.2.1) Exercises .1 Prove that the identity mapping \( {i}_{X} : X \rightarrow X \), defined on the metric space \( X \) by \( {i}_{X}\left( x\right) = x \), is continuous. .2 Prove that any constant mapping between metric spaces is continuous. .3 A mapping \( f : X \rightarrow Y \) between metric spaces is said to be contractive if \( \rho \left( {f\left( x\right), f\left( y\right) }\right) < \rho \left( {x, y}\right) \) whenever \( x, y \) are distinct points of \( X \) . Prove that a contractive mapping is continuous. .4 Let \( X \) be a metric space, \( a \in X \), and \( f, g \) two functions from \( X \) into \( \mathbf{R} \) that are continuous at \( a \) . Prove that the functions \( f + g, f - g \) , \( \max \{ f, g\} ,\min \{ f, g\} ,\left| f\right| \), and \( {fg} \) are continuous at \( a \) . Prove also that if \( g\left( a\right) \neq 0 \), then \( f/g \) is defined in a neighbourhood of \( a \) and is continuous at \( a \) . .5 Let \( Y \) be a closed subset of a metric space \( X \), and \( f : Y \rightarrow \mathbf{R} \) a bounded continuous mapping. Prove that \[ x \mapsto \inf \{ f\left( y\right) \rho \left( {x, y}\right) : y \in Y\} \] is continuous on \( X \smallsetminus Y \) . (Note Exercise (3.1.10:3).) .6 Let \( h \) be a mapping of \( {\mathbf{R}}^{0 + } \) into itself such that (i) \( h\left( t\right) = 0 \) if and only if \( t = 0 \) , (ii) \( h\left( {s + t}\right) \leq h\left( s\right) + h\left( t\right) \) for all \( s, t \) . Let \( \rho \) be a metric on a set \( X \) . Prove that \( d = h \circ \rho \) is a metric on \( X \) , and that if \( h \) is continuous at 0, then \( d \) is equivalent to \( \rho \) (see Exercise (3.1.3:6)). Prove, conversely, that if \( X \) contains a point that is not isolated relative to \( \rho \) (see Exercise (3.1.8: 5), and if \( \rho \) and \( h \circ \rho \) are equivalent metrics, then \( h \) is continuous at 0 . Taking \( h\left( t\right) = \min \{ t,1\} \) in the first part of this exercise, we obtain a bounded metric equivalent to the given metric on \( X \) . (3.2.2) Proposition. The following are equivalent conditions on a mapping \( f : X \rightarrow Y \), where \( X, Y \) are metric spaces. (i) \( f \) is continuous. (ii) For each open set \( A \subset Y,{f}^{-1}\left( A\right) \) is open in \( X \) . 
(iii) For each closed set \( A \subset Y,{f}^{-1}\left( A\right) \) is closed in \( X \) . Proof. Suppose that \( f \) is continuous, let \( A \subset Y \) be an open set, and consider any \( a \) in \( {f}^{-1}\left( A\right) \) . Since \( f\left( a\right) \in A \) and \( A \) is open, there exists \( \varepsilon > 0 \) such that \( B\left( {f\left( a\right) ,\varepsilon }\right) \subset A \) . Choose \( \delta > 0 \) such that if \( \rho \left( {a, x}\right) < \delta \), then \( \rho \left( {f\left( a\right), f\left( x\right) }\right) < \varepsilon \) and therefore \( f\left( x\right) \in A \) . Then \( B\left( {a,\delta }\right) \subset {f}^{-1}\left( A\right) \) . Hence \( {f}^{-1}\left( A\right) \) is open in \( X \), and therefore (i) implies (ii). Since a set is open if and only if its complement is closed, it readily follows that (ii) is equivalent to (iii). Finally, assume (ii), let \( a \in X \) and \( \varepsilon > 0 \), and set \( A = B\left( {f\left( a\right) ,\varepsilon }\right) \subset Y \) . Then \( A \) is open in \( Y \), so \( {f}^{-1}\left( A\right) \) is open in \( X \) . Since \( a \in {f}^{-1}\left( A\right) \), there exists \( \delta > 0 \) such that \( B\left( {a,\delta }\right) \subset {f}^{-1}\left( A\right) \) ; so if \( \rho \left( {a, x}\right) < \delta \), then \( f\left( x\right) \in A \) and therefore \( \rho \left( {f\left( a\right), f\left( x\right) }\right) < \varepsilon \) . Hence \( f \) is continuous at \( a \) . Since \( a \in X \) is arbitrary, \( f \) is continuous on \( X \) . Thus (ii) implies (i). The preceding result says that a mapping between metric spaces is continuous if and only if the inverse image of each open set is open. But the image of an open set under a continuous mapping need not be open: the continuous function \( x \mapsto 0 \) maps each nonempty open subset of \( \mathbf{R} \) onto the closed set \( \{ 0\} \) . Likewise, although the inverse image of a closed set under a continuous mapping is closed, the image of a closed set need not be: the mapping \( \left( {x, y}\right) \mapsto x \) on the Euclidean space \( {\mathbf{R}}^{2} \) takes the hyperbola \( \{ \left( {x, y}\right) : {xy} = 1\} \), a closed set, onto the open set \( \mathbf{R} \smallsetminus \{ 0\} \) . (3.2.3) Proposition. Let \( X, Y, Z \) be metric spaces. If \( f : X \rightarrow Y \) is continuous at \( a \in X \), and \( g : Y \rightarrow Z \) is continuous at \( f\left( a\right) \), then the composite mapping \( g \circ f : X \rightarrow Z \) is continuous at a. If \( f \) is continuous on \( X \) and \( g \) is continuous on \( Y \), then \( g \circ f \) is continuous on \( X \) . Proof. Suppose that \( f \) is continuous at \( a \) and that \( g \) is continuous at \( b = f\left( a\right) \) . Let \( \varepsilon > 0 \) . The continuity of \( g \) at \( b \) ensures that there exists \( {\delta }^{\prime } > 0 \) such that if \( \rho \left( {b, y}\right) < {\delta }^{\prime } \), then \( \rho \left( {g\left( b\right), g\left( y\right) }\right) < \varepsilon \) . In turn, as \( f \) is continuous at \( a \), there exists \( \delta > 0 \) such that if \( \rho \left( {a, x}\right) < \delta \), then \( \rho \left( {f\left( a\right), f\left( x\right) }\right) < {\delta }^{\prime } \) . 
So if \( \rho \left( {a, x}\right) < \delta \), then \( \rho \left( {b, f\left( x\right) }\right) < {\delta }^{\prime } \) and therefore \( \rho \left( {g\left( b\right), g\left( {f\left( x\right) }\right) }\right) < \varepsilon \) ; that is, \[ \rho \left( {g \circ f\left( a\right), g \circ f\left( x\right) }\right) < \varepsilon . \] Hence \( g \circ f \) is continuous at \( a \) . The second conclusion of the proposition follows immediately from the first. Let \( S \) be a subset of a metric space \( X \), and \( a \) a limit point of \( S \) -that is, a point of the closure of \( S \smallsetminus \{ a\} \) . Let \( f \) be a mapping of \( S \smallsetminus \{ a\} \) into a metric space \( Y \), and \( l \) a point of \( Y \) . We say that \( f\left( x\right) \) has a limit \( l \) as \( x \) tends to \( a \) in \( S \) if the mapping \( F : S \cup \{ a\} \rightarrow Y \) defined by \[ F\left( x\right) = \left\{ \begin{array}{ll} f\left( x\right) & \text{ if }x \in S \smallsetminus \{ a\} \\ l & \text{ if }x = a \end{array}\right. \] is continuous at \( a \) relative to the subspace \( S \cup \{ a\} \) of \( X \) . We then also use such expressions as \( l \) is a limit of the mapping \( f \) at a with respect to \( S \), or \( f\left( x\right) \) converges to \( l \) as \( x \) tends to a in \( S \), or \( f\left( x\right) \) tends to \( l \) as \( x \in S \) tends to \( a \) . In that case we write \[ l = \mathop{\lim }\limits_{{x \rightarrow a, x \in S}}f\left( x\right) \] or \[ f\left( x\right) \rightarrow l\text{as}x \rightarrow a, x \in S\text{,} \] Note that in this definition it is not required either that \( a \in S \) or that \( f\left( x\right) \) be defined at \( x = a \) . In the special case where \( S = X \) we often write \( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) \), rather than \( \mathop{\lim }\limits_{{x \rightarrow a, x \in X}}f\left( x\right) \) . ## (3.2.4) Exercises .1 Prove that the following condition is both necessary and sufficient for \( l \in Y \) to be a limit of \( f\left( x\right) \) as \( x \in S \) tends to \( a \) : for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that if \( x \in S \) and \( 0 < \rho \left( {a, x}\right) < \delta \), then \( \rho \left( {l, f\left( x\right) }\right) < \varepsilon \) . .2 Prove that a mapping \( f \) has at most one limit at \( a \in \overline{\left( S\smallsetminus \{ a\} \right) } \) with respect to the subset \( S \) of \( X \) . (Thus we are safe in referring to "the" limit of \( f \) at \( a \) .) .3 Let \( a \in X \) be a limit point of \( X \) . Prove that \( f : X \rightarrow Y \) is continuous at \( a \) if and only if \( f\left( a\right) = \mathop{\lim }\limits_{{x \rightarrow a, x \in X}}f\left( x\right) \) . .4 Show that if \( l = \mathop{\lim }\limits_{{x \rightarrow a, x \in S}}f\left( x\right) \), then for each subset \( A \) of \( S \) such that \( a \in \overline{A\smallsetminus \{ a\} }, l \) is the limit of \( f \) at \( a \) with respect to \( A \) . .5 Show that if \( l = \mathop{\lim }\limits_{{x \rightarrow a, x \in X}}f\left( x\right) \) and the mapping \( g : Y \rightarrow Z
\) is continuous at \( l \), then \( g\left( l\right) = \mathop{\lim }\limits_{{x \rightarrow a, x \in X}}g\left( {f\left( x\right) }\right) \) . .6 Prove that if \( l = \mathop{\lim }\limits_{{x \rightarrow a, x \in S}}f\left( x\right) \), then \( l \in \overline{f\left( S\right) } \) . Using the metric on the extended real line \( \overline{\mathbf{R}} \) introduced in Section 1 of this chapter, we can handle the convergence of sequences in a metric space \( X \) as a special case of the convergence of functions. To this end, recall that a sequence \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) in \( X \) is really a mapping \( n \mapsto {x}_{n} \) of \( {\mathbf{N}}^{ + } \) into \( X \), and note that \( \infty \) is a limit point of \( {\mathbf{N}}^{ + } \) in \( \overline{\mathbf{R}} \) (see Exercise (3.1.3:7)). If the mapping \( n \mapsto {x}_{n} \) has a limit \( l \) at the point \( \infty \in \overline{\mathbf{R}} \) with respect to \( {\mathbf{N}}^{ + } \), we call \( l \) the limit of the sequence \( \left( {x}_{n}\right) \), we say that the sequence \( \left( {x}_{n}\right) \) converges to \( l \) as \( n \) tends to \( \infty \), and we write \[ l = \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} \] or \[ {x}_{n} \rightarrow l\text{ as }n \rightarrow \infty \text{.} \] The next proposition shows, in particular, that on \( \mathbf{R} \) our current notion of convergence of sequences coincides with the one introduced in Section 1.2. (3.2.5) Proposition. In order that \( l = \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} \), it is necessary and sufficient that for each \( \varepsilon > 0 \) there exist a positive integer \( N \) such that \( \rho \left( {l,{x}_{n}}\right) < \varepsilon \) whenever \( n \geq N \) . Proof. By Exercise (3.2.4: 1), in order that \( a = \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} \), it is necessary and sufficient that for each \( \varepsilon > 0 \) there exist \( \delta > 0 \) such that if \( n \in {\mathbf{N}}^{ + } \) and \( 0 < {\rho }_{\overline{\mathbf{R}}}\left( {\infty, n}\right) < \delta \), then \( \rho \left( {a,{x}_{n}}\right) < \varepsilon \) . But \[ {\rho }_{\overline{\mathbf{R}}}\left( {\infty, n}\right) = {\left( n + 1\right) }^{-1} > 0, \] so \( {\rho }_{\bar{\mathbf{R}}}\left( {\infty, n}\right) < \delta \) if and only if \( n \geq N \), where \( N \) is the smallest positive integer \( > {\delta }^{-1} - 1 \) . The desired conclusion now follows.
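As a quick illustration of Proposition (3.2.5), take \( X = \mathbf{R} \) with its usual metric and \( {x}_{n} = 1/n \) . Given \( \varepsilon > 0 \), any positive integer \( N > 1/\varepsilon \) serves in the criterion, since \[ \rho \left( {0,{x}_{n}}\right) = \frac{1}{n} \leq \frac{1}{N} < \varepsilon \;\left( {n \geq N}\right) ; \] thus \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} = 0 \) in the sense just defined, in agreement with the notion of convergence used in Section 1.2.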
In view of Proposition (3.2.5), we can easily adapt to the context of a metric space many of the elementary results about limits of sequences that were proved in the context of \( \mathbf{R} \) in Chapter 1. We frequently do this without further comment. (3.2.6) Proposition. Let \( S \) be a subset of the metric space \( X \), and \( a \in \) \( X \) . In order that \( a \in \bar{S} \), it is necessary and sufficient that \( a \) be the limit of a sequence of points of \( S \) . Proof. To prove the necessity of the stated condition, assume that \( a \in \bar{S} \) . Then for each positive integer \( n \) there exists a point \( {x}_{n} \) in \( S \cap B\left( {a,{n}^{-1}}\right) \) . Since \( \rho \left( {{x}_{n}, a}\right) < 1/N \) whenever \( n \geq N \), the sequence \( \left( {x}_{n}\right) \) converges to \( a \) . The sufficiency part of the proposition is left as an exercise. (3.2.7) Proposition. Let \( \left( {x}_{n}\right) \) be a sequence in \( X \), and \( a \in X \) . In order that there exist a subsequence of \( \left( {x}_{n}\right) \) converging to a, it is necessary and sufficient that for each neighbourhood \( U \) of \( a,{x}_{n} \in U \) for infinitely many values of \( n \) . Proof. The condition is clearly necessary. Conversely, if it is satisfied, then we can construct, inductively, a strictly increasing sequence \( \left( {n}_{k}\right) \) of positive integers such that \( {x}_{{n}_{k}} \in B\left( {a,{k}^{-1}}\right) \) for each \( k \) . Since \( {x}_{{n}_{j}} \in \) \( B\left( {a,{k}^{-1}}\right) \) whenever \( j \geq k \), the subsequence \( \left( {x}_{{n}_{k}}\right) \) of \( \left( {x}_{n}\right) \) converges to the limit \( a \) . ## (3.2.8) Exercises .1 Prove the sufficiency of the condition in Proposition (3.2.6). .2 Prove that the subset \( A \) is dense in the metric space \( X \) if and only if for each \( x \in X \) there exists a sequence \( \left( {x}_{n}\right) \) of points of \( A \) that converges to \( x \) . .3 Let \( A \) be a dense subset of \( X \), and let \( f, g \) be continuous functions from \( X \) into a metric space \( Y \) such that \( f\left( x\right) = g\left( x\right) \) for all \( x \) in \( A \) . Prove that \( f\left( x\right) = g\left( x\right) \) for all \( x \) in \( X \) . .4 Let \( f \) be a mapping between metric spaces \( X \) and \( Y \), and let \( a \in \) \( X \) . Prove that \( f \) is continuous at \( a \) if and only if it is sequentially continuous at \( a \), in the sense that \( f\left( {x}_{n}\right) \rightarrow f\left( a\right) \) whenever \( \left( {x}_{n}\right) \) is a sequence in \( X \) that converges to \( a \) . .5 Let \( X \) be a separable metric space, and \( f \) a mapping of \( X \) into \( \mathbf{R} \) . For each pair of rational numbers \( q,{q}^{\prime } \) let \( {X}_{q,{q}^{\prime }} \) be the set of \( t \in X \) such that \( \mathop{\lim }\limits_{{x \rightarrow t, x \in X}}f\left( x\right) \) exists and \[ f\left( t\right) \leq q < {q}^{\prime } \leq \mathop{\lim }\limits_{{x \rightarrow t, x \in X}}f\left( x\right) . \] Show that \( {X}_{q,{q}^{\prime }} \) is either empty or countable. (Use Exercise (3.1.8: 5).) Hence prove that the set of points \( t \in X \) such that \( \mathop{\lim }\limits_{{x \rightarrow t, x \in X}}f\left( x\right) \) exists and does not equal \( f\left( t\right) \) is empty or countable.
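Concrete sequences in \( \mathbf{R} \) illustrate the two preceding propositions. The sequence \( {x}_{n} = {\left( -1\right) }^{n} \) does not converge, but every neighbourhood of 1 contains \( {x}_{n} \) for infinitely many (namely, all even) \( n \) ; so Proposition (3.2.7) yields the convergent subsequence \( \left( {x}_{2n}\right) \), with \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{2n} = 1. \] Likewise, \( 0 \in \overline{\left( 0,1\right) } \) because the points \( {\left( n + 1\right) }^{-1} \) of \( \left( {0,1}\right) \) converge to 0, as Proposition (3.2.6) requires.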
A sequence \( \left( {x}_{n}\right) \) in a metric space \( X \) is called a Cauchy sequence if for each \( \varepsilon > 0 \) there exists a positive integer \( N \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) < \varepsilon \) whenever \( m, n \geq N \) . Any convergent sequence is a Cauchy sequence: for if \( \left( {x}_{n}\right) \) converges to a limit \( l \), then, given \( \varepsilon > 0 \) and choosing \( N \) such that \( \rho \left( {{x}_{n}, l}\right) < \varepsilon /2 \) for all \( n \geq N \), we use the triangle inequality to show that \( \rho \left( {{x}_{m},{x}_{n}}\right) < \varepsilon \) whenever \( m, n \geq N \) . We say that \( X \) is complete if each Cauchy sequence in \( X \) has a limit in \( X \) . We have already seen that \( \mathbf{R} \) is complete (Theorem (1.2.10)). (3.2.9) Proposition. A complete subspace of a metric space is closed. A closed subspace of a complete metric space is complete. Proof. Let \( S \) be a subspace of the metric space \( X \) . If \( x \in \bar{S} \), then by Proposition (3.2.6), there exists a sequence \( \left( {x}_{n}\right) \) in \( S \) that converges to \( x \) . Being convergent, \( \left( {x}_{n}\right) \) is a Cauchy sequence in \( S \) . So if \( S \) is complete, then \( \left( {x}_{n}\right) \) converges to a limit \( s \) in \( S \) . By Exercise (3.2.4: 2), we then have \( x = s \) , so \( x \in S \) . Hence \( \bar{S} = S \) -that is, \( S \) is closed in \( X \) . Conversely, suppose that \( X \) is complete and \( S \) is closed in \( X \) . If \( \left( {x}_{n}\right) \) is a Cauchy sequence in \( S \), then it converges to a limit \( x \) in \( X \) . By Proposition (3.2.6), \( x \in \bar{S} = S \) . Hence \( S \) is complete. ## (3.2.10) Exercises . 1 Prove that a sequence \( \left( {x}_{n}\right) \) in an ultrametric space is a Cauchy sequence if and only if \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\rho \left( {{x}_{n},{x}_{n + 1}}\right) = 0 \) . Give an example to show that this is not the case in a general metric space. .2 Show that a Cauchy sequence \( \left( {x}_{n}\right) \) in \( X
\) is bounded, in the sense that \( \left\{ {{x}_{n} : n \geq 1}\right\} \) is a bounded subset of \( X \) . .3 Prove that if a Cauchy sequence \( \left( {x}_{n}\right) \) has a subsequence that converges to a limit \( a \), then \( {x}_{n} \rightarrow a \) as \( n \rightarrow \infty \) . .4 Prove that the interval \( I = (0,1\rbrack \) is not complete with respect to the metric \( \rho \) induced by the usual metric on \( \mathbf{R} \) . Define a mapping \( {\rho }^{\prime } : I \times I \rightarrow \mathbf{R} \) by \[ {\rho }^{\prime }\left( {x, y}\right) = \left| {\frac{1}{x} - \frac{1}{y}}\right| . \] Show that \( {\rho }^{\prime } \) is a metric on \( I \), that \( \rho \) and \( {\rho }^{\prime } \) are equivalent metrics on \( I \), and that \( \left( {I,{\rho }^{\prime }}\right) \) is complete. .5 Let \( A \) and \( B \) be complete subsets of a metric space. Give at least two proofs that \( A \cup B \) and \( A \cap B \) are complete. .6 Suppose that \( \rho \left( {S, T}\right) > 0 \) for any two disjoint closed subsets \( S, T \) of \( X \) . Prove that \( X \) is complete. (Suppose there exists a Cauchy sequence \( \left( {x}_{n}\right) \) that does not converge to a limit in \( X \) . First reduce to the case where \( {x}_{m} \neq {x}_{n} \) whenever \( m \neq n \) . Then consider the sets \( \left\{ {{x}_{2n} : n \geq 1}\right\} \) and \( \left. {\left\{ {{x}_{{2n} - 1} : n \geq 1}\right\} \text{.}}\right) \) .7 Prove that if \( X \) is a nonempty set, then the metric space \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) is complete. (See Exercise (3.1.1:7). Given a Cauchy sequence \( \left( {f}_{n}\right) \) in \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) and a positive number \( \varepsilon \), first show that for each \( x \in X \) , \( {\left( {f}_{n}\left( x\right) \right) }_{n = 1}^{\infty } \) is a Cauchy sequence in \( \mathbf{R} \) and therefore converges to a limit \( f\left( x\right) \in \mathbf{R} \) . Then prove that the function \( f \) so defined is bounded, and that \( \left( {f}_{n}\right) \) converges to \( f \) in the metric on \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) .) .8 Let \( X \) be a metric space, \( a \in X \), and for all \( x, y \in X \) define \[ {\phi }_{x}\left( y\right) = \rho \left( {x, y}\right) \] and \[ Y = \left\{ {{\phi }_{a} + f : f \in \mathcal{B}\left( {X,\mathbf{R}}\right) }\right\} \] Prove that (i) \( {\phi }_{x} \in Y \) , (ii) the equation \[ d\left( {F, G}\right) = \sup \{ \left| {F\left( x\right) - G\left( x\right) }\right| : x \in X\} \] defines a metric on \( Y \) , (iii) \( x \mapsto {\phi }_{x} \) is an isometric mapping of \( X \) into \( Y \), and (iv) the closure \( \widehat{X} \) of \( \left\{ {{\phi }_{x} : x \in X}\right\} \) in \( Y \) is a complete metric space. We call the metric space \( \left( {\widehat{X}, d}\right) \) the completion of \( X \) . More generally, we say that a complete metric space \( {X}^{\prime } \) is a completion of \( X \) if there is an isometry of \( X \) onto a dense subspace of \( {X}^{\prime } \) ; but as two completions of the same metric space \( X \) are isometric (why?), we commonly refer to any completion of \( X \) as "the" completion of \( X \) . We now arrive at the notion of uniform continuity, a natural strengthening of continuity that, as we show in Theorem (3.3.12), turns out to be equivalent to continuity for certain very important spaces. 
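Before moving on, it is worth recording two familiar instances of the completion just constructed. The completion of \( \mathbf{Q} \), taken with the metric inherited from \( \mathbf{R} \), can be identified with \( \mathbf{R} \) itself, since the inclusion of \( \mathbf{Q} \) in \( \mathbf{R} \) is an isometry onto a dense subspace of the complete space \( \mathbf{R} \) . Similarly, although Exercise (3.2.10:4) shows that \( (0,1\rbrack \) is not complete under the usual metric, it is dense in the complete space \( \left\lbrack {0,1}\right\rbrack \), so its completion with respect to that metric is (isometric to) \( \left\lbrack {0,1}\right\rbrack \) .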
We say that a mapping \( f : X \rightarrow Y \) between metric spaces is uniformly continuous if for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that \( \rho \left( {f\left( x\right), f\left( y\right) }\right) < \varepsilon \) whenever \( x, y \in X \) and \( \rho \left( {x, y}\right) < \delta \) . ## (3.2.11) Exercises .1 Prove that a uniformly continuous mapping is continuous. Give an example of a continuous mapping on \( (0,1\rbrack \) that is not uniformly continuous. .2 Let \( f, g \) be uniformly continuous mappings of \( X \) into \( \mathbf{R} \) . Show that \( f + g, f - g \), and \( {fg} \) are uniformly continuous on \( X \) . Show that if also \( \mathop{\inf }\limits_{{x \in X}}\left| {f\left( x\right) }\right| > 0 \), then \( 1/f \) is uniformly continuous on \( X \) . .3 Let \( f : X \rightarrow Y \) and \( g : Y \rightarrow Z \) be uniformly continuous mappings between metric spaces. Show that \( g \circ f \) is uniformly continuous on \( X \) . .4 Let \( S \) be a nonempty subset of \( X \) . Show that the mapping \( x \mapsto \rho \left( {x, S}\right) \) is uniformly continuous on \( X \) . .5 Let \( \left( {a}_{n}\right) \) be a sequence in \( X \) . Prove that the function \[ x \mapsto \mathop{\inf }\limits_{{n \geq 1}}\rho \left( {x,{a}_{n}}\right) \] is uniformly continuous on \( X \) . .6 Let \( \alpha \) be a positive number. A mapping \( f \) between metric spaces \( X \) and \( Y \) is said to satisfy a Lipschitz condition of order \( \alpha \), or to be Lipschitz of order \( \alpha \), if \[ \rho \left( {f\left( x\right), f\left( y\right) }\right) \leq {\left( \rho \left( x, y\right) \right) }^{\alpha }\;\left( {x, y \in X}\right) . \] Prove that such a mapping is uniformly continuous. .7 Prove that a mapping \( f \) between metric spaces \( X, Y \) is uniformly continuous if and only if \( \rho \left( {f\left( S\right), f\left( T\right) }\right) = 0 \) whenever \( S, T \subset X \) and \( \rho \left( {S, T}\right) = 0. \) .8 Prove that if \( X \) is not complete, then there exists a uniformly continuous mapping of \( X \) into \( {\mathbf{R}}^{ + } \) with infimum 0 . (See Exercise (3.2.10:8).) .9 Prove that if \( X \) is not complete, then there exists an unbounded continuous mapping of \( X \) into \( \mathbf{R} \) . .10 Suppose that every continuous mapping of \( X \) into \( \mathbf{R} \) is uniformly continuous. Prove that \( X \) is complete. (Assume that \( X \) is a dense subset of its completion \( \widehat{X} \), as defined in Exercise (3.2.10:8), and that there exists a Cauchy sequence of elements of \( X \) converging to \( \left. {{x}_{\infty } \in \widehat{X} \smallsetminus X\text{. Consider the function}x \mapsto 1/\rho \left( {x,{x}_{\infty }}\right) \text{on}X\text{.}}\right) \) (3.2.12) Proposition. Let \( D \) be a dense subset of a metric space \( X \), and \( f \) a uniformly continuous mapping of \( D \) into a complete metric space \( Y \) . Then there exists a unique continuous mapping \( F \) of \( X \) into \( Y \) such that \( F\left( x\right) = f\left( x\right) \) for all \( x \) in \( D \) ; moreover, \( F \) is uniformly continuous on \( X \) . Proof. For each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that \( \rho \left( {f\left( x\right), f\left( {x}^{\prime }\right) }\right) < \varepsilon \) whenever \( \rho \left( {x,{x}^{\prime }}\right) < \delta \) . Given \( x \) in \( X \), let \( \left( {x}_{n}\right) \) be a sequence in \( D \) converging to \( x \) . 
Since for each \( \varepsilon > 0 \) there exists \( N \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) < \delta \), and therefore \( \rho \left( {f\left( {x}_{m}\right), f\left( {x}_{n}\right) }\right) < \varepsilon \), whenever \( m, n \geq N \), we see that \( \left( {f\left( {x}_{n}\right) }\right) \) is a Cauchy sequence in \( Y \) . As the latter space is complete, \( \left( {f\left( {x}_{n}\right) }\right) \) converges to a limit \( \xi \) in \( Y \) . Moreover, if \( \left( {x}_{n}^{\prime }\right) \) is another sequence in \( D \) converging to \( x \), then \( \mathop{\lim }\limits_{{n \rightarrow \infty }}f\left( {x}_{n}^{\prime }\right) = \xi \) : for, replacing \( \left( {x}_{n}\right) \) by the sequence \( \left( {{x}_{1},{x}_{1}^{\prime },{x}_{2},{x}_{2}^{\prime },\ldots }\right) \) in the foregoing argument, we can show that \( \left( {f\left( {x}_{1}\right), f\left( {x}_{1}^{\prime }\right), f\left( {x}_{2}\right), f\left( {x}_{2}^{\prime }\right) ,\ldots }\right) \) is a Cauchy sequence; since the subsequence \( \left( {f\left( {x}_{n}\right) }\right) \) converges to \( \xi \), we conclude from Exercise (3.2.10:3) that the sequence \( \left( {f\left( {x}_{1}\right), f\left( {x}_{1}^{\prime }\right), f\left( {x}_{2}\right), f\left( {x}_{2}^{\prime }\right) ,\ldots }\right) \), and hence the subsequence \( \left( {f\left( {x}_{n}^{\prime }\right) }\right) \) , converges to \( \xi \) . Thus \[ F\left( x\right) = \xi = \mathop{\lim }\limits_{{n \rightarrow \infty }}f\left( {x}_{n}\right) \] is an unambiguous definition of a function \( F \) from \( X \) into \( Y \) . If \( x \in D \), then \( \left( {x, x,\ldots }\right) \) is a sequence in \( D \) converging to \( x \), so \( F\left( x\right) = f\left( x\right) \) . To prove that \( F \) is uniformly continuous, consider \( x,{x}^{\prime } \) in \( X \) such that \( \rho \left( {x,{x}^{\prime }}\right) < \delta \), and let \( \left( {x}_{n}\right) \) and \( \left( {x}_{n}^{\prime }\right) \) be sequences in \( D \) converging to \( x \) and \( {x}^{\prime } \), respectively. Then \( \left( {f\left( {x}_{n}\right) }\right) \) and \( \left( {f\left( {x}_{n}^{\prime }\right) }\right) \) converge to \( F\left( x\right) \) and \( F\left( {x}^{\prime }\right) \) , respectively. So for all sufficiently large \( n \) we have \( \rho \left( {F\left( x\right), f\left( {x}_{n}\right) }\right) < \varepsilon \) , \( \rho \left( {F\left( {x}^{\prime }\right), f\left( {x}_{n}^{\prime }\right) }\right) < \varepsilon \), and \( \rho \left( {{x}_{n},{x}_{n}^{\prime }}\right) < \delta \) ; whence \( \rho \left( {f\left( {x}_{n}\right), f\left( {x}_{n}^{\prime }\right) }\right) < \varepsilon \), and therefore, by the triangle inequality, \( \rho \left( {F\left( x\right), F\left( {x}^{\prime }\right) }\right) < {3\varepsilon } \) . Thus \( F \) is uniformly continuous on \( X \) . Finally, the uniqueness of \( F \) is an immediate consequence of Exercise (3.2.8: 3). The foregoing result enables us to extend uniformly
continuous functions from dense subsets to the whole space. We close this section with a famous theorem that enables us to extend continuous real-valued functions from closed subspaces to the whole space. (3.2.13) The Tietze Extension Theorem. Let \( X \) be a metric space, \( Y \) a closed subspace of \( X \), and \( f \) a bounded continuous mapping of \( Y \) into \( \mathbf{R} \) . Then there exists a bounded continuous mapping \( F : X \rightarrow \mathbf{R} \) such that (i) \( F\left( y\right) = f\left( y\right) \) for all \( y \in Y \) , (ii) \( \mathop{\inf }\limits_{{x \in X}}F\left( x\right) = \mathop{\inf }\limits_{{y \in Y}}f\left( y\right) \), and (iii) \( \mathop{\sup }\limits_{{x \in X}}F\left( x\right) = \mathop{\sup }\limits_{{y \in Y}}f\left( y\right) \) . Proof. We may assume that \( f \) is not constant. Let \( h \) be an increasing function of the form \( x \mapsto {ax} + b \) mapping the interval \( \left\lbrack {\inf f,\sup f}\right\rbrack \) onto \( \left\lbrack {1,2}\right\rbrack \) ; replacing \( f \) by \( h \circ f \), if necessary, we reduce to the case where \( \inf f = 1 \) and \( \sup f = 2 \) . Since \( Y \) is closed, \( \rho \left( {x, Y}\right) > 0 \) for all \( x \in X \smallsetminus Y \) (Exercise (3.1.10: 3)), and so \[ F\left( x\right) = \left\{ \begin{array}{ll} f\left( x\right) & \text{ if }x \in Y \\ \frac{\mathop{\inf }\limits_{{y \in Y}}f\left( y\right) \rho \left( {x, y}\right) }{\rho \left( {x, Y}\right) } & \text{ if }x \in X \smallsetminus Y \end{array}\right. \] defines a function \( F : X \rightarrow \mathbf{R} \) that coincides with \( f \) on \( Y \) . To prove that \( F \) satisfies (ii) and (iii), we need only show that \( 1 \leq F\left( x\right) \leq 2 \) for all \( x \in X \smallsetminus Y \) . For such \( x \) and all \( y \in Y \) we have \[ F\left( x\right) \leq \frac{{2\rho }\left( {x, y}\right) }{\rho \left( {x, Y}\right) } \] So, given \( \varepsilon > 0 \) and choosing \( y \in Y \) such that \[ \rho \left( {x, y}\right) \leq \left( {1 + \frac{\varepsilon }{2}}\right) \rho \left( {x, Y}\right) \] we obtain \( F\left( x\right) \leq 2 + \varepsilon \) . On the other hand, choosing \( {y}^{\prime } \in Y \) such that \[ 1 \leq \frac{\rho \left( {x,{y}^{\prime }}\right) }{\rho \left( {x, Y}\right) } \leq \frac{f\left( {y}^{\prime }\right) \rho \left( {x,{y}^{\prime }}\right) }{\rho \left( {x, Y}\right) } < F\left( x\right) + \varepsilon , \] we see that \( F\left( x\right) > 1 - \varepsilon \) .
As \( \varepsilon > 0 \) is arbitrary, it follows that \( 1 \leq F\left( x\right) \leq 2 \) . Since \( f \) is continuous on \( {Y}^{ \circ } \), so is \( F \) . Also, the function \( x \mapsto \rho \left( {x, Y}\right) \) is uniformly continuous on \( X \smallsetminus Y \), by Exercise (3.2.11: 4); so, by Exercises (3.2.1: 5 and 4), \( F \) is continuous on \( X \smallsetminus Y \) . It therefore remains to prove the continuity of \( F \) at any \( \xi \in Y \cap \overline{X \smallsetminus Y} \) . Given \( \varepsilon > 0 \), choose \( r > 0 \) such that if \( y \in Y \) and \( \rho \left( {\xi, y}\right) < r \), then \( \left| {f\left( \xi \right) - f\left( y\right) }\right| < \varepsilon \) . It suffices to prove that if \( x \in X \smallsetminus Y \) and \( \rho \left( {x,\xi }\right) < r/4 \), then \[ F\left( \xi \right) - \varepsilon \leq F\left( x\right) \leq F\left( \xi \right) + \varepsilon . \] (1) To this end, observe that for each \( y \in Y \smallsetminus B\left( {\xi, r}\right) \) , \[ \rho \left( {x, y}\right) \geq \rho \left( {\xi, y}\right) - \rho \left( {x,\xi }\right) > \frac{3r}{4} > {2\rho }\left( {x,\xi }\right) \geq \rho \left( {x, Y \cap B\left( {\xi, r}\right) }\right) , \] so \[ f\left( y\right) \rho \left( {x, y}\right) > \frac{3r}{4} > f\left( \xi \right) \rho \left( {x,\xi }\right) \geq \mathop{\inf }\limits_{{\eta \in Y \cap B\left( {\xi, r}\right) }}f\left( \eta \right) \rho \left( {x,\eta }\right) . \] It follows that \[ \rho \left( {x, Y}\right) = \rho \left( {x, Y \cap B\left( {\xi, r}\right) }\right) \] (2) and that \[ \mathop{\inf }\limits_{{y \in Y}}f\left( y\right) \rho \left( {x, y}\right) = \mathop{\inf }\limits_{{y \in Y \cap B\left( {\xi, r}\right) }}f\left( y\right) \rho \left( {x, y}\right) . \] (3) For each \( y \in Y \cap B\left( {\xi, r}\right) \) we have \[ f\left( \xi \right) - \varepsilon < f\left( y\right) < f\left( \xi \right) + \varepsilon \] and therefore \[ \left( {f\left( \xi \right) - \varepsilon }\right) \rho \left( {x, Y}\right) \leq f\left( y\right) \rho \left( {x, y}\right) \leq \left( {f\left( \xi \right) + \varepsilon }\right) \rho \left( {x, y}\right) . \] Hence \[ \left( {f\left( \xi \right) - \varepsilon }\right) \rho \left( {x, Y}\right) \leq \mathop{\inf }\limits_{{y \in Y \cap B\left( {\xi, r}\right) }}f\left( y\right) \rho \left( {x, y}\right) \leq \left( {f\left( \xi \right) + \varepsilon }\right) \rho \left( {x, Y \cap B\left( {\xi, r}\right) }\right) , \] and so, by (2) and (3), \[ \left( {f\left( \xi \right) - \varepsilon }\right) \rho \left( {x, Y}\right) \leq \mathop{\inf }\limits_{{y \in Y}}f\left( y\right) \rho \left( {x, y}\right) \leq \left( {f\left( \xi \right) + \varepsilon }\right) \rho \left( {x, Y}\right) . \] Dividing through by \( \rho \left( {x, Y}\right) \), we obtain the desired inequalities (1). The mapping \( F \) in Theorem (3.2.13) is called a continuous extension of \( f \) to \( X \) . ## (3.2.14) Exercises . 1 Give two proofs of Urysohn’s Lemma: if \( S, T \) are nonempty disjoint closed subspaces of a metric space \( X \), then there exists a continuous mapping \( f : X \rightarrow \left\lbrack {0,1}\right\rbrack \) such that \( f\left( S\right) = \{ 0\} \) and \( f\left( T\right) = \{ 1\} \) . (For one proof, note that \( \rho \left( {x, S}\right) + \rho \left( {x, T}\right) > 0 \) for all \( x \in X \) .) .2 Let \( Y \) be a closed subspace of a metric space \( X \), and \( f \) a continuous mapping of \( Y \) into \( \mathbf{R} \) . 
Prove that there exists a continuous extension \( F : X \rightarrow \mathbf{R} \) of \( f \) . (First apply Theorem (3.2.13) to \( g \circ f \) for some suitable function \( g \) .) .3 Suppose that for each pair \( S, T \) of nonempty disjoint closed subsets of \( X \) there exists a uniformly continuous mapping \( f : X \rightarrow \left\lbrack {0,1}\right\rbrack \) such that \( f\left( S\right) = \{ 0\} \) and \( f\left( T\right) = \{ 1\} \) . Prove that \( X \) is complete. .4 Show that the following are equivalent conditions on \( X \) . (i) Every continuous function \( f : X \rightarrow \mathbf{R} \) is uniformly continuous. (ii) \( \rho \left( {S, T}\right) > 0 \) for all nonempty disjoint closed subsets \( S, T \) of \( X \) . (To prove that (ii) implies (i), suppose that \( f : X \rightarrow \mathbf{R} \) is continuous but not uniformly continuous. Then there exist sequences \( \left( {x}_{n}\right) ,\left( {y}_{n}\right) \) in \( X \) and a positive number \( \alpha \) such that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\rho \left( {{x}_{n},{y}_{n}}\right) = 0 \) and \( \left| {f\left( {x}_{n}\right) - f\left( {y}_{n}\right) }\right| \geq \alpha \) for all \( n \) . Consider the sets \( S = \left\{ {{x}_{n} : n \geq 1}\right\} \) and \( \left. {T = \left\{ {{y}_{n} : n \geq 1}\right\} .}\right) \) ## 3.3 Compactness In the context of a metric space, the various notions associated with the word compactness represent different generalisations of, and approximations to, finiteness. Let \( S \) be a subset of a metric space \( \left( {X,\rho }\right) \) . By a cover of \( S \) we mean a family \( \mathcal{U} \) of subsets of \( X \) such that \( S \subset \bigcup \mathcal{U} \) ; we then say that \( S \) is covered by \( \mathcal{U} \), and that \( \mathcal{U} \) covers \( S \) . If also each \( U \in \mathcal{U} \) is an open subset of \( X \), we refer to \( \mathcal{U} \) as an open cover of \( S \) . On the other hand, if \( \mathcal{U} \) is a finite set, we call it a finite cover of \( S \) . By a subcover of \( \mathcal{U} \) we mean a subfamily \( \mathcal{F} \) of \( \mathcal{U} \) that covers \( S \) . A metric space \( X \) is called compact, or a compact space, if every open cover of \( X \) contains a finite subcover. By a compact set in a metric space \( X \) we mean a subset of \( X \) that is compact when considered as a metric subspace of \( X \) . Note that we can apply our definition of compactness to a topological space \( X \), even if the topology of \( X \) is not metrisable. The Heine-Borel-Lebesgue Theorem (1.4.6) shows that a bounded closed interval in \( \mathbf{R} \) is compact. (3.3.1) Proposition. A compact subset of a metric space is separable and bounded. Proof. Let \( S \) be a compact subset of a metric space \( X \) . We may assume that \( S \) is nonempty. For each positive integer \( n \) the family \( {\left( B\left( s,{n}^{-1}\right) \right) }_{s
\in S} \) of open balls is an open cover of \( S \), so there exists a finite subset \( {F}_{n} \) of \( S \) such that \( S \) is covered by the balls \( B\left( {s,{n}^{-1}}\right) \) with \( s \in {F}_{n} \) . It follows that the countable set \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{F}_{n} \) is dense in \( S \), which is therefore separable. Now fix \( {s}_{1} \in {F}_{1} \), and define the nonnegative number \[ R = \max \left\{ {\rho \left( {s,{s}_{1}}\right) : s \in {F}_{1}}\right\} \] For each \( x \in S \) choose \( s \in {F}_{1} \) such that \( \rho \left( {x, s}\right) < 1 \) ; then \[ \rho \left( {x,{s}_{1}}\right) \leq \rho \left( {x, s}\right) + \rho \left( {s,{s}_{1}}\right) < 1 + R. \] Hence \( S \) is bounded. (3.3.2) Proposition. A compact set in a metric space is closed. Proof. Let \( S \) be a compact subset of a metric space \( X \) . We may assume that \( X \smallsetminus S \) is nonempty. If \( a \in X \smallsetminus S \), then for each \( s \in S \) , \[ 0 < {r}_{s} = \rho \left( {a, s}\right) \] The open balls \( B\left( {s,\frac{1}{2}{r}_{s}}\right) \), with \( s \in S \), form an open cover of \( S \), so there exists a finite subset \( F \) of \( S \) such that \( S \) is covered by the balls \( B\left( {s,\frac{1}{2}{r}_{s}}\right) \) with \( s \in F \) . Define the positive number \[ r = \min \left\{ {{r}_{s} : s \in F}\right\} \] For each \( x \in S \) choose \( s \in F \) such that \( x \in B\left( {s,\frac{1}{2}{r}_{s}}\right) \) ; then \[ \rho \left( {a, x}\right) \geq \rho \left( {a, s}\right) - \rho \left( {x, s}\right) \] \[ \geq {r}_{s} - \frac{1}{2}{r}_{s} \] \[ \geq \frac{1}{2}r\text{.} \] It follows that \( B\left( {a,\frac{1}{2}r}\right) \subset X \smallsetminus S \) and therefore that \( a \) is an interior point of \( X \smallsetminus S \) . Since \( a \) is any point of \( X \smallsetminus S \), we conclude that \( X \smallsetminus S \) is open and therefore that \( S \) is closed. (3.3.3) Proposition. A compact metric space is complete. Proof. Let \( X \) be a compact metric space, and \( \widehat{X} \) its completion (Exercise (3.2.10:8)). By Proposition (3.3.2), \( X \) is a closed subspace of \( \widehat{X} \) . Since \( \widehat{X} \) is complete, it follows from Proposition (3.2.9) that \( X \) is complete. (3.3.4) Proposition. A closed subset of a compact metric space is compact. Proof. Let \( S \) be a closed subset of a compact metric space \( X \), and let \( \mathcal{U} \) be an open cover of \( S \) .
By Proposition (3.1.5), for each \( U \in \mathcal{U} \) there exists an open set \( {V}_{U} \) in \( X \) such that \( U = S \cap {V}_{U} \) . Then \( X \smallsetminus S \) and the sets \( {V}_{U} \) , with \( U \in \mathcal{U} \), form an open cover of \( X \) . Since \( X \) is compact, there exist finitely many sets \( {U}_{1},\ldots ,{U}_{n} \) in \( \mathcal{U} \) such that \[ \{ X \smallsetminus S\} \cup \left\{ {{V}_{{U}_{1}},\ldots ,{V}_{{U}_{n}}}\right\} \] is an open cover of \( X \) . Clearly, \( \left\{ {{U}_{1},\ldots ,{U}_{n}}\right\} \) covers \( S \) and so is a finite subcover of \( \mathcal{U} \) ; whence \( S \) is compact. ## (3.3.5) Exercises .1 Find an alternative proof of Proposition (3.3.2). .2 Find an alternative proof of Proposition (3.3.3). (Suppose that \( X \) is compact but not complete, and let \( \left( {x}_{n}\right) \) be a Cauchy sequence in \( X \) that does not converge and therefore has no convergent subsequence. Then for each \( x \in X \) there exist \( {r}_{x} > 0 \) and \( {N}_{x} \in {\mathbf{N}}^{ + } \) such that \( \rho \left( {{x}_{n}, x}\right) > {r}_{x} \) for all \( n \geq {N}_{x} \) . Cover \( X \) by finitely many of the balls \( B\left( {x,\frac{1}{2}{r}_{x}}\right) \) .) .3 Prove that a subset of the Euclidean space \( {\mathbf{R}}^{n} \) is compact if and only if it is bounded and closed. .4 A family \( \mathcal{F} \) of subsets of a set \( X \) is said to have the finite intersection property if every finite subfamily of \( \mathcal{F} \) has a nonempty intersection. Prove that a metric space \( X \) is compact if and only if every family of closed subsets of \( X \) with the finite intersection property has a nonempty intersection. .5 Let \( K \) be a compact subset of an open set \( U \subset X \) . Prove that there exists \( r > 0 \) such that if \( \rho \left( {x, K}\right) \leq r \), then \( x \in U \) . .6 Prove that any open cover of a separable metric space has a countable subcover. (This is a special case of Lindelöf's Theorem; see page 72 of \( \left\lbrack {47}\right\rbrack \) .) (3.3.6) Proposition. If \( f \) is a continuous mapping of a compact metric space \( X \) into a metric space \( Y \), then \( f\left( X\right) \) is a compact set. Proof. Let \( \mathcal{U} \) be an open cover of \( f\left( X\right) \) . By Proposition (3.2.2), the family \( {\left( {f}^{-1}\left( U\right) \right) }_{U \in \mathcal{U}} \) is an open cover of \( X \) . Since \( X \) is compact, there is a finite set \( \mathcal{F} \subset \mathcal{U} \) such that \( {\left( {f}^{-1}\left( U\right) \right) }_{U \in \mathcal{F}} \) is an open cover of \( X \) . Then \( \mathcal{F} \) is an open cover of \( f\left( X\right) \), which is therefore compact. ## (3.3.7) Exercises .1 Prove that a continuous mapping \( f \) of a compact metric space \( X \) into \( \mathbf{R} \) is bounded. Prove also that \( f \) attains its bounds, in the sense that there exist points \( a, b \) in \( X \) such that \( f\left( a\right) = \inf f \) and \( f\left( b\right) = \sup f \) . .2 Prove that a continuous mapping of a compact space \( X \) into \( {\mathbf{R}}^{ + } \) has a positive infimum. .3 Prove that if \( f \) is a continuous one-one mapping of a compact metric space \( X \) onto a metric space \( Y \), then the inverse mapping \( {f}^{-1} : Y \rightarrow \) \( X \) is continuous. (Use Proposition (3.2.2).) .4 A mapping \( f \) of a set \( X \) into itself is called a self-map of \( X \) . 
By a fixed point of such a mapping we mean a point \( x \in X \) such that \( f\left( x\right) = x \) . Let \( f \) be a contractive self-map of a compact metric space \( X \) (see Exercise (3.2.1: 3)). Prove that the mapping \( x \mapsto \rho \left( {x, f\left( x\right) }\right) \) of \( X \) into \( \mathbf{R} \) is continuous. Applying Exercise (3.3.7: 1) to this mapping, deduce that \( f \) has a fixed point (Edelstein’s Theorem). Prove that there is no other fixed point of \( f \) . There are other properties of a metric space \( X \) that capture the idea of approximate finiteness and are intimately related to compactness. We say that \( X \) is - sequentially compact if every sequence in \( X \) has a convergent subsequence; - totally bounded, or precompact, if for each \( \varepsilon > 0 \) there exists a finite cover of \( X \) by subsets of diameter \( < \varepsilon \) . Sequential compactness, like compactness, is a topological concept, whereas total boundedness is a metric notion. An analogue of sequential compactness can be defined for a general topological space; see under "filters" in [7] or [47]. For a nonmetric analogue of total boundedness we need the context of a uniform space, which is also discussed in [7] and [47]. The total boundedness of a metric space \( X \) can be expressed differently. By an \( \varepsilon \) -approximation to \( X \) we mean a subset \( S \) of \( X \) such that \( \rho \left( {x, S}\right) < \varepsilon \) for each \( x \in X \) . It is easy to show that \( X \) is totally bounded if and only if for each \( \varepsilon > 0 \) it contains a finite \( \varepsilon \) -approximation. Note that since the empty set is regarded as finite, it is also totally bounded. Corollary (1.2.8) shows that a bounded closed subset of \( \mathbf{R} \) is sequentially compact. ## (3.3.8) Exercises .1 Prove that a bounded interval in \( \mathbf{R} \) is totally bounded. .2 Prove that a subset of a totally
bounded metric space is totally bounded. .3 Prove that if a metric space is either sequentially compact or totally bounded, then it is bounded. .4 Show that a totally bounded metric space is separable. .5 Let \( f \) be a uniformly continuous mapping of a totally bounded metric space into a metric space. Prove that the range of \( f \) is totally bounded. .6 Let \( X \) be a metric space that is not totally bounded. Prove that there exist a sequence \( \left( {x}_{n}\right) \) in \( X \) and a positive number \( \alpha \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) \geq \alpha \) whenever \( m \neq n \) . .7 Let \( f \) be a function of bounded variation on a compact interval \( I \subset \mathbf{R} \) . Prove that \( f\left( I\right) \) is totally bounded. (Use the preceding exercise.) .8 Let \( X \) be a metric space that is not totally bounded, and choose \( \left( {x}_{n}\right) \) and \( \alpha \) as in Exercise (3.3.8: 6). For each \( n \) construct a uniformly continuous function \( {\phi }_{n} : X \rightarrow \left\lbrack {0,1}\right\rbrack \) such that (i) \( {\phi }_{n}\left( {x}_{n}\right) = 1 \) and (ii) \( {\phi }_{n}\left( x\right) = 0 \) if \( \rho \left( {x,{x}_{n}}\right) \geq \alpha /3 \) . Given any sequence \( \left( {c}_{n}\right) \) of real numbers, show that \( f = \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}{\phi }_{n} \) is a well-defined continuous function on \( X \), and that if \( \left( {c}_{n}\right) \) is bounded, then \( f \) is uniformly continuous on \( X \) . .9 Let \( \left( {X,\rho }\right) \) be a separable metric space. Show that there exists on \( X \) a metric \( d \) equivalent to \( \rho \), such that \( \left( {X, d}\right) \) is totally bounded. (Let \( \left( {x}_{n}\right) \) be a dense sequence in \( X \), and use Exercise (3.2.1: 6) to reduce to the case where \( \rho < 1 \) . Define \[ d\left( {x, y}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}\left| {\rho \left( {x,{x}_{n}}\right) - \rho \left( {y,{x}_{n}}\right) }\right| \] for all \( x, y \in X \) .) We now arrive at a fundamental theorem linking compactness, sequential compactness, and total boundedness. (3.3.9) Theorem. The following are equivalent conditions on a metric space \( \left( {X,\rho }\right) \) . (i) \( X \) is compact. (ii) \( X \) is sequentially compact. (iii) \( X \) is totally bounded and complete. Proof. First, let \( X \) be a compact metric space, and \( \left( {x}_{n}\right) \) a sequence in \( X \) . For each \( n \) let \( {F}_{n} \) be the closure of \( \left\{ {{x}_{n},{x}_{n + 1},{x}_{n + 2},\ldots }\right\} \) in \( X \) . It is easy to show that \( {\left( {F}_{n}\right) }_{n = 1}^{\infty } \) has the finite intersection property. By Exercise (3.3.5: 4), \( \mathop{\bigcap }\limits_{{n = 1}}^{\infty }{F}_{n} \) contains a point \( a \) . Consider any neighbourhood \( U \) of \( a \) . For each \( n \), since \( a \in {F}_{n} \), there exists \( m \geq n \) such that \( {x}_{m} \in U \) . It follows that \( U \) contains \( {x}_{k} \) for infinitely many values of \( k \) ; whence, by Proposition (3.2.7), there exists a subsequence of \( \left( {x}_{n}\right) \) converging to \( a \) . Thus (i) implies (ii). Next, let \( X \) satisfy (ii). Then any Cauchy sequence in \( X \) has a convergent subsequence and so converges to a limit in \( X \), by Exercise (3.2.10: 3); whence \( X \) is complete. Suppose that \( X \) is not totally bounded. 
Then, by Exercise (3.3.8: 6), there exist a sequence \( \left( {x}_{n}\right) \) in \( X \) and a positive number \( \alpha \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) \geq \alpha \) whenever \( m \neq n \) . Clearly, \( \left( {x}_{n}\right) \) has no Cauchy subsequences and therefore no convergent subsequences. This contradicts our assumption (ii); so, in fact, \( X \) is totally bounded. Thus (ii) implies (iii). It remains to prove that (iii) implies (i). Accordingly, let \( X \) be totally bounded and complete, and suppose that there exists an open cover \( \mathcal{U} \) of \( X \) that contains no finite subcover. With \( {B}_{0} = X \), we construct a sequence \( {\left( {B}_{n}\right) }_{n = 1}^{\infty } \) of closed balls in \( X \) such that for each \( n \geq 1 \) , (a) \( {B}_{n} \) has radius \( {2}^{-n} \) , (b) \( {B}_{n} \) has a nonempty intersection with \( {B}_{n - 1} \), and (c) no finite subfamily of \( \mathcal{U} \) is a cover of \( {B}_{n - 1} \) . Having constructed \( {B}_{0},\ldots ,{B}_{n - 1} \) with the applicable properties, let \( {\left( {V}_{j}\right) }_{j = 1}^{m} \) be a finite cover of \( {B}_{n - 1} \) by balls in \( {B}_{n - 1} \) of radius \( {2}^{-n} \) . (Note that \( {B}_{n - 1} \) is totally bounded, by Exercise (3.3.8: 2).) Amongst the sets \( {V}_{j} \) there exists at least one - call it \( {B}_{n} \) -that is not covered by finitely many of the sets in \( \mathcal{U} \) : otherwise each of the finitely many sets \( {V}_{j} \), and therefore \( {B}_{n - 1} \), would be covered by finitely many elements of \( \mathcal{U} \), thereby contradicting (c). This completes the inductive construction of \( {B}_{n} \) . For each \( n \geq 1 \) let \( {x}_{n} \) be the centre of \( {B}_{n} \) . Since \( {B}_{n} \cap {B}_{n - 1} \) is nonempty, it follows from the triangle inequality that for \( n \geq 2 \) , \[ \rho \left( {{x}_{n},{x}_{n - 1}}\right) \leq {2}^{-n} + {2}^{-n + 1} < {2}^{-n + 2}. \] So if \( j > i \geq N \geq 1 \), then \[ \rho \left( {{x}_{i},{x}_{j}}\right) \leq \mathop{\sum }\limits_{{k = i + 1}}^{j}\rho \left( {{x}_{k},{x}_{k - 1}}\right) \] \[ < \mathop{\sum }\limits_{{k = i + 1}}^{j}{2}^{-k + 2} \] \[ < {2}^{-i + 1}\mathop{\sum }\limits_{{k = 0}}^{\infty }{2}^{-k} = {2}^{-i + 2} \leq {2}^{-N + 2}. \] Hence \( \left( {x}_{n}\right) \) is a Cauchy sequence in \( X \) and so, as \( X \) is complete, converges to a limit \( {x}_{\infty } \) in \( X \) . Now pick \( U \in \mathcal{U} \) such that \( {x}_{\infty } \in U \) . Since \( U \) is open, there exists \( r > 0 \) such that \( B\left( {{x}_{\infty }, r}\right) \subset U \) . Choosing \( N > 1 \) such that \( \rho \left( {{x}_{\infty },{x}_{N}}\right) < r/2 \) and \( {2}^{-N} < r/2 \), we see that for each \( x \in {B}_{N} \) , \[ \rho \left( {x,{x}_{\infty }}\right) \leq \rho \left( {x,{x}_{N}}\right) + \rho \left( {{x}_{\infty },{x}_{N}}\right) < {2}^{-N} + r/2 < r, \] so \( x \in B\left( {{x}_{\infty }, r}\right) \) . Hence \( {B}_{N} \subset B\left( {{x}_{\infty }, r}\right) \subset U \), which contradicts (c). It follows that our initial assumption about the open cover \( \mathcal{U} \) is false; whence \( X \) is compact, and therefore (iii) implies (i). The proof that (iii) implies (i) in Theorem (3.3.9) is a generalisation of the argument we used to prove the Heine-Borel-Lebesgue Theorem (1.4.6). ## (3.3.10) Exercises .1 Use sequential compactness arguments to show that a compact subset of a metric space is both bounded and closed. 
.2 Show that if \( X \) is compact, then there exist points \( a, b \) of \( X \) such that \( \rho \left( {a, b}\right) = \operatorname{diam}\left( X\right) . \) .3 Let \( A, B \) be nonempty disjoint subsets of a metric space \( X \) with \( A \) closed and \( B \) compact. Give two proofs that \( \rho \left( {A, B}\right) > 0 \) . .4 Let \( \left( {S}_{n}\right) \) be a descending sequence of compact sets in a metric space \( X \) (so \( {S}_{1} \supset {S}_{2} \supset \cdots \) ). Prove, in at least two different ways, that if \( {S}_{n} \neq \varnothing \) for all \( n \), then \( \mathop{\bigcap }\limits_{{n = 1}}^{\infty }{S}_{n} \neq \varnothing \) . .5 Let \( X \) be a compact space in which each point \( x \) is isolated (see Exercise (3.1.8: 5)). Give at least two proofs that \( X \) is finite. .6 Prove that if every continuous mapping of \( X \) into \( \mathbf{R} \) is bounded, then \( X \) is compact. (First suppose that \( X \) is not totally bounded, and use Exercise (3.3.8: 8) to construct an unbounded continuous mapping of \( X \) into \( \mathbf{R} \) . Then use Exercise (3.2.11:9).) Is this true if "continuous" is replaced by "uniformly continuous" in the hypothesis? .7 Prove that if every uniformly continuous mapping of \( X \) into \( {\mathbf{R}}^{ + } \) has a positive infimum, then \( X \) is compact. (cf. Exercise (3.3.7: 2). Use Exercises (3.3.8: 8), (3.2.11: 5), and (3.2.11: 8).) .8 Let \( \left( {X,\rho }\right) \) be a metric space, and suppose that \( X \) is complete with respect to every metric equivalent to \( \rho \) (see Exercise (3.1.3: 6)). Prove that \( X \) is compact. (Suppose that \( X \) is not totally bounded. By Exercise (3.3.8: 6), there exist a sequence \( \left( {x}_{n}\right) \) in \( X \) and a positive number \( \alpha \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) \geq \alpha \) whenever \( m \neq n \) . Show that \[ d\left( {x, y}\right) = \min \left\{ {\rho \left( {x, y}\right) ,\mathop{\inf }\limits_{{m, n \geq 1}}\left\{ {\rho \left( {x,{x}_{m}}\right) + \left| {\frac{1}{m} - \frac{1}{n}}\right| \alpha + \rho \left( {y,{x}_{n}}\right) }\right\} }\right\} \] defines a metric equivalent to \( \rho \) with respect to which \( X \) is not complete.) .9 Prove that the following are equivalent conditions on a metric space \( \left( {X,\rho }\right) \) . (i) If \( d \) is a metric equivalent to \( \rho \), and \( S, T \) are disjoint closed subsets of \( \left( {X, d}\right) \), then \( d\left( {S, T}\right) > 0 \) . (ii) \( X \) is compact. (Use Exercises (3.3.10:3), (3.2.10:6), and (3.3.10:8); also, note the guide to the solution of Exercise (3.3.5: 2).) The following property of a metric space \( X \) is known as the Lebesgue covering property. For each open cover \( \mathcal{U} \) of \( X \) there exists \( r > 0 \) such that any open ball of radius \( r \) in \( X \) is contained in some \( U \in \mathcal{U} \) . The positive number \( r \) associated with t
he open cover \( \mathcal{U} \) in this way is called a Lebesgue number for \( \mathcal{U} \) . (3.3.11) Proposition. A compact metric space has the Lebesgue covering property. Proof. Let \( X \) be a compact metric space, and \( \mathcal{U} \) an open cover of \( X \) . For each \( x \in X \) choose \( {r}_{x} > 0 \) such that \( B\left( {x,2{r}_{x}}\right) \subset U \) for some \( U \in \mathcal{U} \) . The balls \( B\left( {x,{r}_{x}}\right) \), with \( x \in X \), form an open cover of \( X \), from which we can extract a finite subcover, say \[ \left\{ {B\left( {{x}_{i},{r}_{{x}_{i}}}\right) : 1 \leq i \leq n}\right\} . \] Then \[ 0 < r = \min \left\{ {{r}_{{x}_{1}},\ldots ,{r}_{{x}_{n}}}\right\} \] Given \( x \in X \), choose \( i \) such that \( x \in B\left( {{x}_{i},{r}_{{x}_{i}}}\right) \) . Then for each \( y \in B\left( {x, r}\right) \) we have \[ \rho \left( {y,{x}_{i}}\right) \leq \rho \left( {x, y}\right) + \rho \left( {x,{x}_{i}}\right) < r + {r}_{{x}_{i}} \leq 2{r}_{{x}_{i}}. \] So \[ B\left( {x, r}\right) \subset B\left( {{x}_{i},2{r}_{{x}_{i}}}\right) \subset U \] for some \( U \in \mathcal{U} \) . The implications (i) \( \Rightarrow \) (ii) \( \Rightarrow \) (iii) of the first part of the next result - a general version of the Uniform Continuity Theorem for metric spaces - are well known, in contrast to the implication (iii) \( \Rightarrow \) (i), which is due to Wong \( \left\lbrack {55}\right\rbrack \) . (3.3.12) Theorem. The following are equivalent conditions on a metric space \( X \) . (i) \( X \) has the Lebesgue covering property. (ii) Every continuous mapping of \( X \) into a metric space is uniformly continuous. (iii) Every continuous mapping of \( X \) into \( \mathbf{R} \) is uniformly continuous. Proof. Assuming (i), let \( f \) be a continuous mapping of \( X \) into a metric space, and let \( \varepsilon > 0 \) . For each \( t \in X \) there exists \( {\delta }_{t} > 0 \) such that if \( \rho \left( {x, t}\right) < {\delta }_{t} \), then \( \rho \left( {f\left( x\right), f\left( t\right) }\right) < \varepsilon /2 \) . It follows from the triangle inequality that if \( x \) and \( y \) belong to \( B\left( {t,{\delta }_{t}}\right) \), then \( \rho \left( {f\left( x\right), f\left( y\right) }\right) < \varepsilon \) . Let \( \delta > 0 \) be a Lebesgue number for the open cover \( {\left( B\left( t,{\delta }_{t}\right) \right) }_{t \in X} \) of \( X \) .
If \( x \) and \( y \) are points of \( X \) such that \( \rho \left( {x, y}\right) < \delta \), then both \( x \) and \( y \) belong to \( B\left( {x,\delta }\right) \) , which is a subset of \( B\left( {t,{\delta }_{t}}\right) \) for some \( t \) ; so \( \rho \left( {f\left( x\right), f\left( y\right) }\right) < \varepsilon \) . Thus \( f \) is uniformly continuous, and therefore (i) implies (ii). It is trivial that (ii) implies (iii). To complete the proof, suppose that \( X \) does not have the Lebesgue covering property; so there exists an open cover \( \mathcal{U} \) of \( X \) for which there is no Lebesgue number. For each positive integer \( n \) we can therefore construct \( {x}_{n} \in X \) such that \( B\left( {{x}_{n},{n}^{-1}}\right) \smallsetminus U \) is nonempty for each \( U \in \mathcal{U} \) . Then there exists \( {y}_{n} \in B\left( {{x}_{n},{n}^{-1}}\right) \smallsetminus \left\{ {x}_{n}\right\} \) : for otherwise we would have \[ B\left( {{x}_{n},{n}^{-1}}\right) = \left\{ {x}_{n}\right\} \subset U \] for some \( U \in \mathcal{U} \) . We show that neither \( \left( {x}_{n}\right) \) nor \( \left( {y}_{n}\right) \) has a convergent subsequence. (1) Indeed, if \( \left( {x}_{n}\right) \) had a subsequence that converged to a limit \( \xi \in X \), then, choosing \( U \in \mathcal{U} \) such that \( \xi \in U \), we would have \( B\left( {{x}_{n},{n}^{-1}}\right) \subset U \) for some \( n \), a contradiction. On the other hand, if \( {\left( {y}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) were a convergent subsequence of \( \left( {y}_{n}\right) \), then the subsequence \( \left( {x}_{{n}_{k}}\right) \) of \( \left( {x}_{n}\right) \) would converge to the same limit, which contradicts what we have just proved. Setting \( {n}_{1} = 1 \), suppose we have constructed \( {n}_{1} < {n}_{2} < \cdots < {n}_{k} \) such that the sets \[ {S}_{k} = \left\{ {{x}_{{n}_{1}},\ldots ,{x}_{{n}_{k}}}\right\} \] \[ {T}_{k} = \left\{ {{y}_{{n}_{1}},\ldots ,{y}_{{n}_{k}}}\right\} \] are disjoint. There exists \( {n}_{k + 1} > {n}_{k} \) such that \( {x}_{{n}_{k + 1}} \notin {S}_{k} \) and \( {y}_{{n}_{k + 1}} \notin {T}_{k} \) : otherwise we would have either \( {x}_{j} \in {S}_{k} \) for infinitely many \( j \) or else \( {y}_{j} \in {T}_{k} \) for infinitely many \( j \) ; since \( {S}_{k} \) and \( {T}_{k} \) are finite, this would imply that either \( \left( {x}_{n}\right) \) or \( \left( {y}_{n}\right) \) had a convergent subsequence, thereby contradicting (1). Thus we have inductively constructed a strictly increasing sequence \( {\left( {n}_{k}\right) }_{k = 1}^{\infty } \) of positive integers such that the sets \[ S = \left\{ {{x}_{{n}_{k}} : k \geq 1}\right\} \] \[ T = \left\{ {{y}_{{n}_{k}} : k \geq 1}\right\} \] are disjoint. These sets are both closed in \( X \) : for example, any point of \( \bar{S} \smallsetminus S \) would be the limit of some subsequence of \( \left( {x}_{n}\right) \), which contradicts (1). Applying Urysohn's Lemma (Exercise (3.2.14:1)), we now construct a continuous function \( f : X \rightarrow \left\lbrack {0,1}\right\rbrack \) such that \( f\left( S\right) = \{ 0\} \) and \( f\left( T\right) = \{ 1\} \) . Since \( \rho \left( {{x}_{{n}_{k}},{y}_{{n}_{k}}}\right) < 1/{n}_{k} \) but \( \left| {f\left( {x}_{{n}_{k}}\right) - f\left( {y}_{{n}_{k}}\right) }\right| = 1 \), the function \( f \) is not uniformly continuous. Hence (iii) implies (i). (3.3.13) Corollary—The Uniform Continuity Theorem. 
Every continuous mapping of a compact metric space into a metric space is uniformly continuous. Proof. This follows from the preceding two results. The converse of Corollary (3.3.13) is not true, since every function from the discrete metric space \( \mathbf{N} \) to \( \mathbf{R} \) is uniformly continuous but \( \mathbf{N} \), being unbounded, is not compact. However, there is an interesting partial converse to Corollary (3.3.13), which we discuss in Section 4. ## (3.3.14) Exercises .1 Use a sequential compactness argument to prove that a compact metric space has the Lebesgue covering property. .2 Give an example of a totally bounded metric space for which the Lebesgue covering property does not hold. .3 Prove that \( X \) has the Lebesgue covering property if and only if for each nonempty closed set \( S \subset X \) and each open set \( U \) containing \( S \) , there exists \( r > 0 \) such that the \( r \) -enlargement of \( S \) , \[ B\left( {S, r}\right) = \{ x \in X : \rho \left( {x, S}\right) < r\} \] is contained in \( U \) . (For "only if", consider the open cover \( \{ X \smallsetminus S, U\} \) of \( X \) . For "if", suppose that \( X \) does not have the Lebesgue covering property and, as in the second part of the proof of Theorem (3.3.12), construct disjoint nonempty closed subsets \( S, T \) of \( X \) such that \( \rho \left( {S, T}\right) = 0 \) ; then show that there exists \( r > 0 \) such that \( B\left( {S, r}\right) \subset X \smallsetminus T \) .) .4 Prove that a metric space with the Lebesgue covering property is complete. Need it be totally bounded? .5 Let \( X \) have the Lebesgue covering property, and let \( Y \) be a closed subset of \( X \) . Give two proofs that \( Y \) has the Lebesgue covering property. (For one proof, use the Tietze Extension Theorem; for another, work directly with an open cover of \( Y \) .) .6 Prove the Uniform Continuity Theorem using sequential compactness without the Lebesgue covering property. .7 Let \( X \) be a metric space, and \( h \)
a mapping of \( X \) into a compact metric space \( Y \) . Suppose that \( f \circ h \) is uniformly continuous for each continuous (and therefore uniformly continuous) mapping \( f : Y \rightarrow \mathbf{R} \) . Give at least two proofs that \( h \) is uniformly continuous. The notion of compactness can be generalised in a number of ways. The one we deal with is typical of topology, in that it replaces a global property (one that holds for the whole space) by a local one (one that holds in some neighbourhood of any given point). A metric space \( X \) is said to be locally compact, or a locally compact space, if each point in \( X \) has a compact neighbourhood in \( X \) . For example, although (in view of Proposition (3.3.1)) \( \mathbf{R} \) is not compact, it is locally compact: if \( x \in \mathbf{R} \), then \( \left\lbrack {x - 1, x + 1}\right\rbrack \) is a compact neighbourhood of \( x \) in R. Of course, a compact metric space is locally compact. (3.3.15) Proposition. Let \( X \) be a locally compact space, and \( S \) a subset of \( X \) . If either \( S \) is open or \( S \) is closed, then \( S \) is locally compact. Proof. Let \( a \in S \), and choose a compact neighbourhood \( K \) of \( a \) in \( X \) . If \( S \) is open, then \[ a \in {\left( K \cap S\right) }^{ \circ } = {K}^{ \circ } \cap {S}^{ \circ } \] so there exists \( r > 0 \) such that \( \bar{B}\left( {a, r}\right) \subset K \) and \( \bar{B}\left( {a, r}\right) \subset S \) . As \( \bar{B}\left( {a, r}\right) \) is closed in \( X \), it is closed in \( K \) (by Proposition (3.1.5)) and therefore compact (by Proposition (3.3.4)). Hence \( a \) has a compact neighbourhood in \( S \), and so \( S \) is locally compact. Now suppose that \( S \) is closed in \( X \) . Since \( K \) is a neighbourhood of \( a \) in \( X, K \cap S \) is a neighbourhood of \( a \) in \( S \) (Exercise (3.1.6:4)). Also, \( K \cap S \) is closed in \( K \), by Proposition (3.1.5), and therefore compact, by Proposition (3.3.4). Hence \( S \) is locally compact. ## (3.3.16) Exercises . 1 Let \( S \) and \( T \) be locally compact subspaces of a locally compact metric space \( X \) . Prove that \( S \cap T \) is locally compact. Need \( S \cup T \) be locally compact? .2 Is every locally compact space complete? .3 Let \( X \) be a metric space in which every bounded set is contained in a compact set. Prove that \( X \) is locally compact and separable. .4 Let \( X \) be locally compact, and \( K \) a compact subset of \( X \) . Prove that for some \( r > 0 \) the closure of the \( r \) -enlargement of \( K \) is compact. (See Exercise (3.3.14: 3).) .5 Let \( X \) be a separable locally compact metric space. Show that there exists a sequence \( \left( {V}_{n}\right) \) of open subsets of \( X \), each of which has compact closure, with the property that for each \( x \in X \) and each neighbourhood \( U \) of \( x \) there exists \( n \) such that \( x \in {V}_{n} \subset U \) . Hence prove that there exists a sequence \( \left( {U}_{n}\right) \) of open subsets of \( X \) with the following properties. (i) \( \overline{{U}_{n}} \) is compact; (ii) \( \overline{{U}_{n}} \subset {U}_{n + 1} \) ; (iii) \( X = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{U}_{n} \) . (Set \( {U}_{1} = {V}_{1} \) and \( {U}_{n + 1} = {V}_{n + 1} \cup B\left( {\overline{{U}_{n}}, r}\right) \), where, using Exercise (3.3.16:4), \( r > 0 \) is chosen to make the closure of \( B\left( {\overline{{U}_{n}}, r}\right) \) compact.) 
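For instance, when \( X = \mathbf{R} \) with its usual metric, the sets \( {U}_{n} = \left( {-n, n}\right) \) exhibit the three properties just listed: each \( \overline{{U}_{n}} = \left\lbrack {-n, n}\right\rbrack \) is compact, \( \overline{{U}_{n}} \subset {U}_{n + 1} \), and \( \mathbf{R} = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{U}_{n} \) ; a sequence \( \left( {V}_{n}\right) \) with the property required in the first part of the exercise can be obtained by enumerating the balls \( B\left( {q,1/k}\right) \) with \( q \) rational and \( k \) a positive integer.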
.6 Let \( X \) be a separable locally compact metric space that is not compact, and let \( \left( {U}_{n}\right) \) be as in the preceding exercise. Use Urysohn’s Lemma (Exercise (3.2.14: 1)) to show that there exists a continuous function \( f : X \rightarrow \mathbf{R} \) such that \( f\left( x\right) \leq n \) for all \( x \in \overline{{U}_{n}} \), and \( f\left( x\right) \geq n \) for all \( x \in X \smallsetminus \overline{{U}_{n}} \) . Then show that \[ d\left( {x, y}\right) = \rho \left( {x, y}\right) + \left| {f\left( x\right) - f\left( y\right) }\right| \] defines a metric \( d \) equivalent to \( \rho \), and that in the space \( \left( {X, d}\right) \) any bounded set is contained in a compact set. ## 3.4 Connectedness In analysis there are many situations where progress is made by restricting attention to parts of a metric space that cannot be split into smaller, separated parts. Our next definition captures this imprecise idea formally. A metric space is said to be connected, or a connected space, if it cannot be expressed as a union of two disjoint nonempty open subsets. So if \( X \) is connected, and if \( S, T \) are nonempty open subsets of \( X \) such that \( S \cup T = X \), then \( S \cap T \neq \varnothing \) . A subspace that is connected is called a connected set in the metric space. Clearly, the empty subset of any metric space is connected. (3.4.1) Proposition. The following are equivalent conditions on a metric space \( X \) . (i) \( X \) is connected. (ii) \( X \) is not a union of two disjoint nonempty closed subsets. (iii) The only subsets of \( X \) that are both open and closed in \( X \) are \( X \) and the empty subset. Proof. The straightforward proof is left as the next exercise. ## (3.4.2) Exercises .1 Prove Proposition (3.4.1). .2 Prove that a metric space \( X \) is connected if and only if there is no continuous mapping of \( X \) onto \( \{ 0,1\} \) . We showed in Proposition (1.3.13) that the only subsets of \( \mathbf{R} \) that are both open and closed are \( \mathbf{R} \) and \( \varnothing \) . It follows from Proposition (3.4.1) that \( \mathbf{R} \) is connected. In fact, we can say more. (3.4.3) Proposition. A nonempty subset of \( \mathbf{R} \) is connected if and only if it is an interval. Proof. Let \( S \) be a nonempty subset of \( \mathbf{R} \), and suppose first that \( S \) is connected. Let \( a, b \) be points of \( S \) with \( a \leq b \), and consider any \( x \) such that \( a \leq x \leq b \) . If \( x \notin S \), then \( S \) is the union of the disjoint subsets \( S \cap \left( {-\infty, x}\right) \) and \( S \cap \left( {x,\infty }\right) \), each of which is nonempty (the first contains \( a \), the second \( b \)) and open in \( S \), by Proposition (3.1.5). This contradicts the assumption that \( S \) is connected. So \( x \in S \), and therefore \( S \) has the intermediate value property. Hence, by Proposition (1.3.3), \( S \) is an interval. Now let \( S \) be an interval in \( \mathbf{R} \), and suppose that \( S \) is not connected. Then there exist nonempty open subsets \( A, B \) of the subspace \( S \) such that \( S = A \cup B \) and \( A \cap B = \varnothing \) . We may assume that there exist \( a \in A \) and \( b \in B \) such that \( a < b \) . Let \( x \) be the supremum of the nonempty bounded set \( A \cap \lbrack a, b) \), and suppose that \( x \in A \) . Then \( a \leq x < b \), as \( b \notin A \) . 
Since \( A \) is open in \( S \), there exists \( r > 0 \) such that \( S \cap \left\lbrack {x, x + r}\right\rbrack \subset A \cap \lbrack a, b) \) . Being an interval, \( S \) has the intermediate value property (Proposition (1.3.3)), so \( \left\lbrack {a, b}\right\rbrack \subset S \), and therefore \( \left\lbrack {x, x + r}\right\rbrack \subset S \) . Hence \( x + r \in A \cap \lbrack a, b) \), which contradicts the definition of \( x \) . Thus, in fact, \( x \notin A \) . A similar argument shows that \( x \notin B \), which is absurd since, as we have already observed, \( \left\lbrack {a, b}\right\rbrack \subset S \) . This contradiction shows that \( S \) is connected. ## (3.4.4) Exercise Let \( S, T \) be nonempty closed subsets of a metric space \( X \) such that \( S \cup T \) and \( S \cap T \) are connected. Prove that \( S \) and \( T \) are connected. Give an example to show that the conclusion no longer holds if we remove the hypothesis that \( S \) and \( T \) are closed. We now prove some general results about connected spaces. (3.4.5) Proposition. If \( S, T \) are subsets of a metric space \( X \) such that \( S \) is connected and \( S \subset T \subset \bar{S} \), then \( T \) is connected. In particular, \( \bar{S} \) is connected. Proof. Suppose that \( A, B \) are nonempty open sets in the subspace \( T \) such that \( T = A \cup B \) and \( A \cap B = \varnothing \) . As \( S \) is dense in \( T \), both \( S \cap A \) and \( S \cap B \) are nonempty. They are clearly disjoint, and, by Proposition (3.1.5), they are open in \( S \) . Since \( S = \left( {S \cap A}\right) \cup \left( {S \cap B}\right) \), we have contradicted the fact that \( S \) is connected. (3.4.6) Proposition. If \( \mathcal{F} \) is a family of connected sets in a metric space \( X \) such that \( \bigcap \mathcal{F} \) is nonempty, then \( \bigcup \mathcal{F} \) is connected. Proof. Let \( S = \bigcup \mathcal{F} \) and \( a \in \bigcap \mathcal{F} \) . Suppose that \( S = A \cup B \), where \( A, B \) are nonempty disjoint open sets in \( S \) . Consider, for example, the case where \( a \in A \) . Choose \( F \in \mathcal{F} \) such that \( B \cap F \) is nonempty, and note that \( a \in A \cap F \) . Then \( A \cap F \) and \( B \cap F \) are open in \( F \) (by Proposition (3.1.5)), have union \( F \), are disjoint, and are nonempty. This contradicts the fact that \( F \) is connected. ## (3.4.7) Exercises .1 Let \( S, T \) be connected subsets of a metric space \( X \) such that \( \bar{S} \cap T \) is nonempty. Prove that \( S \cup T \) is connected. .2 Let \( \left( {S}_{n}\right) \) be a sequence of connected subsets of a metric space \( X \) such that \( {S}_{n} \cap {S}_{n + 1} \) is nonempty for each \( n \) . Prove that \( \bigcup {S}_{n} \) is connected. .3 A metric space \( X \) is said to be chain connected if for each pair \( a, b \) of points of \( X \), and each \( \varepsilon > 0 \), there exist finitely many points \( a = {x}_{0},{x}_{1},
\ldots ,{x}_{n} = b \) such that \( \rho \left( {{x}_{i},{x}_{i + 1}}\right) < \varepsilon \) for \( i = 0,\ldots, n - 1 \) . Prove that a compact, chain connected metric space is connected. .4 If \( X \) is a metric space, then it follows from Proposition (3.4.6) that for each \( x \in X \) , \[ {C}_{x} = \bigcup \{ S \subset X : S\text{ is connected and }x \in S\} \] is connected. \( {C}_{x} \) is called the connected component of \( x \) in \( X \) . Prove the following statements. (i) \( {C}_{x} \) is closed in \( X \) . (ii) \( {C}_{x} \) is the largest connected subset of \( X \) that contains \( x \) . (iii) If \( y \in {C}_{x} \), then \( {C}_{y} = {C}_{x} \) . (iv) If \( y \notin {C}_{x} \), then \( {C}_{y} \cap {C}_{x} = \varnothing \) . .5 A subset \( S \) of a metric space \( X \) is said to be totally disconnected if for each \( x \in S \) the connected component of \( x \) in \( S \) is \( \{ x\} \) . Prove that (i) every countable subset of \( \mathbf{R} \) is totally disconnected; (ii) the irrational numbers form a totally disconnected set in \( \mathbf{R} \) . .6 A metric space \( X \) is said to be locally connected if for each \( x \in X \) and each neighbourhood \( U \) of \( x \) there exists a connected neighbourhood \( V \) of \( x \) with \( V \subset U \) . Prove that \( X \) is locally connected if and only if the following property holds: for each open subset \( S \) of \( X \), and each \( x \in S \), the connected component of \( x \) in the subspace \( S \) is an open subset of \( X \) . .7 Use Proposition (3.4.3) and the previous exercise to give another proof of Proposition (1.3.6). .8 Let \( X \) be a connected space, and \( S \) a nonempty subset of \( X \) such that \( X \smallsetminus S \) is also nonempty. Show that the boundary of \( S \) is nonempty. (Suppose the contrary.) (3.4.8) Proposition. The range of a continuous mapping from a connected metric space into a metric space is connected. Proof. Let \( X \) be a connected space, and \( f \) a continuous mapping of \( X \) into a metric space \( Y \) . Suppose that \( f\left( X\right) = S \cup T \), where \( S, T \) are nonempty disjoint open sets in the subspace \( f\left( X\right) \) of \( Y \) . By Proposition (3.2.2), the nonempty disjoint sets \( {f}^{-1}\left( S\right) \) and \( {f}^{-1}\left( T\right) \) are open in \( X \) . 
Since \[ X = {f}^{-1}\left( {f\left( X\right) }\right) = {f}^{-1}\left( {S \cup T}\right) = {f}^{-1}\left( S\right) \cup {f}^{-1}\left( T\right) , \] it follows that \( X \) is not connected, a contradiction. A very important consequence of Proposition (3.4.8) is the following generalised Intermediate Value Theorem. (3.4.9) Theorem. Let \( f \) be a continuous mapping of a connected metric space \( X \) into \( \mathbf{R} \), and \( a, b \) points of \( f\left( X\right) \) such that \( a < b \) . Then for each \( y \in \left( {a, b}\right) \) there exists \( x \in X \) such that \( f\left( x\right) = y \) . Proof. By Propositions (3.4.8) and (3.4.3), \( f\left( X\right) \) is an interval. The result follows immediately. ## (3.4.10) Exercises .1 Let \( X \) be an unbounded connected metric space. Prove that for each \( x \in X \) and each \( r > 0 \) there exists \( y \in X \) such that \( \rho \left( {x, y}\right) = r \) . .2 Let \( S \) be a connected subset of the Euclidean space \( {\mathbf{R}}^{n} \) . Prove that for each \( r > 0 \) the set \[ \left\{ {x \in {\mathbf{R}}^{n} : \rho \left( {x, S}\right) \leq r}\right\} \] is also connected. .3 Let \( X \) be a compact metric space, and suppose that the closure of any open ball \( B\left( {a, r}\right) \) in \( X \) is the closed ball \( \bar{B}\left( {a, r}\right) \) . Show that any open or closed ball in \( X \) is connected. (Suppose that \( B\left( {a, r}\right) = S \cup T \) , where \( S, T \) are nonempty, disjoint, and closed in the subspace \( B\left( {a, r}\right) \) . Without loss of generality take \( a \) in \( S \) . Show that \[ C = \{ x \in X \smallsetminus S : \rho \left( {a, x}\right) \geq \rho \left( {a, T}\right) \} \] is compact, and hence that there exists \( {t}_{0} \in T \) such that \( \rho \left( {a,{t}_{0}}\right) = \) \( \rho \left( {a, T}\right) > 0 \) . Then consider \( \bar{B}\left( {a,\rho \left( {a, T}\right) }\right) \) .) Show by an example that we cannot remove the compactness of \( X \) from the hypotheses of this result. We now prove the partial converse to Corollary (3.3.13) that was postponed from Section 3. (3.4.11) Proposition. Let \( X \) be a connected metric space such that every continuous function from \( X \) to \( \mathbf{R} \) is uniformly continuous. Then \( X \) is compact. Proof. Suppose that \( X \) is not totally bounded. By Exercise (3.3.8:6), there exist a sequence \( \left( {x}_{n}\right) \) in \( X \) and a positive number \( \alpha \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) \geq \alpha \) whenever \( m \neq n \) . Using Exercise (3.3.8:8), we can construct, for each \( k \), a uniformly continuous function \( {\phi }_{k} : X \rightarrow \left\lbrack {0,1}\right\rbrack \) such that \( {\phi }_{k}\left( {x}_{k}\right) = 1,{\phi }_{k}\left( x\right) = 0 \) if \( \rho \left( {x,{x}_{k}}\right) \geq \alpha /3 \), and \( f = \mathop{\sum }\limits_{{n = 1}}^{\infty }n{\phi }_{n} \) is a well-defined continuous function on \( X \) ; to be precise, we set \[ {\phi }_{k}\left( x\right) = \max \left\{ {0,1 - 3{\alpha }^{-1}\rho \left( {x,{x}_{k}}\right) }\right\} . \] Our hypotheses ensure that \( f \) is uniformly continuous. Now, \( X \) is connected, the mapping \( x \mapsto \rho \left( {x,{x}_{n}}\right) \) is continuous on \( X,\rho \left( {{x}_{n},{x}_{n}}\right) = 0 \), and \( \rho \left( {{x}_{n + 1},{x}_{n}}\right) \geq \alpha \) . It follows from Theorem (3.4.9) that there exists \( x \in X \) such that \( \rho \left( {x,{x}_{n}}\right) = \alpha /\left( {3n}\right) \) . 
Then \[ f\left( {x}_{n}\right) - f\left( x\right) = n - \left( {n - 1}\right) = 1. \] Since \( n > 1 \) is arbitrary, \( f \) is not uniformly continuous. This contradiction shows that \( X \) is totally bounded. Now suppose that \( X \) is not complete; so there exists a Cauchy sequence \( \left( {x}_{n}\right) \) in \( X \) that does not converge to a limit in \( X \) . Without loss of generality we may assume that \( X \) is a dense subset of its completion \( \left( {\widehat{X},\rho }\right) \) . So \( \left( {x}_{n}\right) \) converges to a limit \( {x}_{\infty } \in \widehat{X} \smallsetminus X \) . The function \( x \mapsto \rho \left( {x,{x}_{\infty }}\right) \) is (uniformly) continuous and positive-valued on \( X \), so \[ f\left( x\right) = \frac{1}{\rho \left( {x,{x}_{\infty }}\right) } \] defines a continuous mapping \( f : X \rightarrow {\mathbf{R}}^{ + } \) . By our hypotheses, \( f \) is uniformly continuous on \( X \), so there exists \( \delta > 0 \) such that if \( x, y \in X \) and \( \rho \left( {x, y}\right) < \delta \), then \( \left| {f\left( x\right) - f\left( y\right) }\right| < 1 \) . Choose \( N \) such that \( \rho \left( {{x}_{m},{x}_{n}}\right) < \delta \) for all \( m, n \geq N \) . Since \( \rho \left( {{x}_{N},{x}_{\infty }}\right) > 0 \), there exist positive integers \( k, m \) such that \( m > N \) and \[ \rho \left( {{x}_{m},{x}_{\infty }}\right) < \frac{1}{k + 1} < \frac{1}{k} < \rho \left( {{x}_{N},{x}_{\infty }}\right) . \] Then \( \rho \left( {{x}_{m},{x}_{N}}\right) < \delta \) but \[ f\left( {x}_{m}\right) - f\left( {x}_{N}\right) > \left( {k + 1}\right) - k = 1, \] contrary to our choice of \( \delta \) . Hence, in fact, \( X \) is complete and therefore, by Theorem (3.3.9), compact. There is another type of connectedness of importance in analysis and topology, one that generalises the informal idea that a subset \( X \) of the Euclidean plane is in one piece if any two points of \( X \) can be joined by a path that lies wholly in \( X \) . (In spite of this correct claim 
about the importance of this type of connectedness, we do not actually use it later in the book; so you can ignore the rest of this section with impunity.) Let \( X \) be a metric space. A continuous mapping \( f : \left\lbrack {0,1}\right\rbrack \rightarrow X \) such that \( f\left( 0\right) = a \) and \( f\left( 1\right) = b \) is called a path in \( X \) with endpoints \( a \) and \( b \), or a path in \( X \) from a to \( b \) ; the path \( f \) is also said to join a to \( b \) . We say that \( X \) is path connected, or a path connected space, if for each pair \( a, b \) of points of \( X \) there is a path in \( X \) from \( a \) to \( b \) . By a path connected subset of \( X \) we mean a subset of \( X \) that is path connected as a subspace of \( X \) . A subset \( S \) of \( {\mathbf{R}}^{n} \) is said to be convex if \( {tx} + \left( {1 - t}\right) y \in S \) whenever \( x, y \in S \) and \( 0 \leq t \leq 1 \) . A convex subset \( S \) of \( {\mathbf{R}}^{n} \) is path connected: for if \( a, b \in S \), then \[ f\left( t\right) = \left( {1 - t}\right) a + {tb}\;\left( {0 \leq t \leq 1}\right) \] defines a path in \( S \) from \( a \) to \( b \) . In particular, an interval in \( \mathbf{R} \) is path connected. ## (3.4.12) Proposition. A path connected space is connected. Proof. Let \( X \) be a path connected space; we may assume that \( X \) is nonempty. Let \( a \in X \), and for each \( x \in X \) let \( {f}_{x} \) be a path in \( X \) joining \( a \) to \( x \) ; for convenience, let \( I = \left\lbrack {0,1}\right\rbrack \) . Then \( {f}_{x}\left( I\right) \) is connected, by Propositions (3.4.3) and (3.4.8), and \( a \in {f}_{x}\left( I\right) \) . Hence, by Proposition (3.4.6), \( X = \) \( \mathop{\bigcup }\limits_{{x \in X}}{f}_{x}\left( I\right) \) is connected. Propositions (3.4.3) and (3.4.12) show that path connectedness and connectedness are equivalent properties of a nonempty subset \( S \) of \( \mathbf{R} \), and hold precisely when \( S \) is an interval. In \( {\mathbf{R}}^{2} \), however, there are subsets that are connected but not path connected; see Exercise (3.4.16:1). Our next result is therefore substantial. (3.4.13) Proposition. A connected open subset of \( {\mathbf{R}}^{n} \) is path connected. In order to prove Proposition (3.4.13) we need some simple consequences of the following Glueing Lemma. (3.4.14) Lemma. Let \( X, Y \) be metric spaces, and let \( A, B \) be closed subsets of \( X \) whose union is \( X \) . Let \( f : A \rightarrow Y \) and \( g : B \rightarrow Y \) be continuous functions such that \( f\left( x\right) = g\left( x\right) \) for all \( x \in A \cap B \) . Then the function \( h : X \rightarrow Y \) defined by \[ h\left( x\right) = \left\{ \begin{array}{ll} f\left( x\right) & \text{ if }x \in A \\ g\left( x\right) & \text{ if }x \in B \end{array}\right. \] is continuous. Proof. Let \( C \) be a closed subset of \( Y \) . Then, by Proposition (3.2.2), \( {f}^{-1}\left( C\right) \) is closed in the subspace \( A \) of \( X \), and hence, by Exercise (3.1.6: 3), in \( X \) . Similarly, \( {g}^{-1}\left( C\right) \) is closed in \( X \) . Hence \[ {h}^{-1}\left( C\right) = {f}^{-1}\left( C\right) \cup {g}^{-1}\left( C\right) \] is closed in \( X \) . It follows from Proposition (3.2.2) that \( h \) is continuous. Now consider two paths \( f, g \) in a metric space \( X \) such that \( f\left( 1\right) = g\left( 0\right) \) . 
We define the product of the paths \( f \) and \( g \) to be the path \( {gf} \), where \[ {gf}\left( t\right) = \left\{ \begin{array}{ll} f\left( {2t}\right) & \text{ if }0 \leq t \leq \frac{1}{2} \\ g\left( {{2t} - 1}\right) & \text{ if }\frac{1}{2} \leq t \leq 1. \end{array}\right. \] It follows from Proposition (3.4.14) that \( {gf} \) is a path in \( X \) joining \( f\left( 0\right) \) to \( g\left( 1\right) \) . The product \( {gf} \) of two paths must not be confused with the composite \( g \circ f \) of two mappings. Indeed, unless \( f \) is a path in \( \left\lbrack {0,1}\right\rbrack \), the composite of the paths \( f \) and \( g \) is undefined. ## (3.4.15) Exercise We define the path component of a point \( x \) in a metric space \( X \) to be \[ {P}_{x} = \{ y \in X\text{ : there exists a path in }X\text{ from }x\text{ to }y\} . \] Prove that \( {P}_{x} \) is the union of the path connected subsets of \( X \) that contain \( x \), and that it is the largest path connected subset of \( X \) containing \( x \) . Prove also that if \( x, y \in X \), then either \( {P}_{x} = {P}_{y} \) or \( {P}_{x} \cap {P}_{y} = \varnothing \) . Proof of Proposition (3.4.13). Let \( U \) be a connected open subset of \( {\mathbf{R}}^{n} \) . For each \( x \) in \( U \) let \( {U}_{x} \) be the path component of \( x \) in \( U \) ; we first show that \( {U}_{x} \) is open in \( U \) . Given \( y \) in \( {U}_{x} \), choose a path \( f \) in \( U \) joining \( x \) to \( y \) ; choose also \( r > 0 \) such that \( B\left( {y, r}\right) \subset U \) . Since \( B\left( {y, r}\right) \) is convex, for each \( z \in B\left( {y, r}\right) \) there exists a path \( g \) in \( B\left( {y, r}\right) \) joining \( y \) to \( z \) ; then \( {gf} \) is a path in \( U \) joining \( x \) to \( z \) . Hence \( B\left( {y, r}\right) \subset {U}_{x} \), and therefore \( {U}_{x} \) is open in \( U \) . Now suppose that \( U \) is not path connected. Then there exist distinct points of \( U \) that cannot be joined by a path in \( U \) . Let \( a \) be one of these points. By the foregoing, \( {U}_{a} \) is nonempty and open in \( U \), as is \[ V = \bigcup \left\{ {{U}_{x} : x \in U \smallsetminus {U}_{a}}\right\} \] Moreover, \( U = {U}_{a} \cup V \) . Since \( U \) is connected, \( {U}_{a} \cap V \) is nonempty, so there exists \( b \in U \smallsetminus {U}_{a} \) such that \( {U}_{a} \cap {U}_{b} \neq \varnothing \) . Exercise (3.4.15) shows that \( {U}_{a} = {U}_{b} \) ; whence \( b \in {U}_{a} \), a contradiction. Thus \( U \) is path connected. ## (3.4.16) Exercises .1 Let \[ A = \left\{ {\left( {0, y}\right) \in {\mathbf{R}}^{2} : - 1 \leq y \leq 1}\right\} \] \[ B = \left\{ {\left( {x, y}\right) \in {\mathbf{R}}^{2} : 0 < x \leq 1, y = \sin \frac{\pi }{x}}\right\} , \] and \( X = A \cup B \) . Prove that any connected subset of \( X \) that intersects both \( A \) and \( B \) has diameter greater than 2 . Then prove that \( X \) is not path connected. (Suppose there exists a path \( f : \left\lbrack {0,1}\right\rbrack \rightarrow X \) with \( f\left( 0\right) \in A \) and \( f\left( 1\right) \in B \) . Let \[ \tau = \sup \{ t \in \left\lbrack {0,1}\right\rbrack : f\left( \left\lbrack {0, t}\right\rbrack \right) \subset A\} \] and show that there exists \( {\tau }^{\prime } > \tau \) such that \( f\left( {\tau }^{\prime }\right) \in B \) and the diameter of \( f\left( \left\lbrack {\tau ,{\tau }^{\prime }}\right\rbrack \right) \) is less than 1.) 
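The case-by-case formula for the product path \( {gf} \) is easily mirrored in code. Here is a minimal sketch in Python (the function names are ad hoc, chosen only for this illustration): it glues two paths exactly as in the definition above, and the value at \( t = 1/2 \) is unambiguous because \( f\left( 1\right) = g\left( 0\right) \).

```python
def path_product(f, g):
    """Product gf of paths f, g : [0,1] -> X with f(1) = g(0).

    The returned path traverses f at double speed on [0, 1/2] and then
    g at double speed on [1/2, 1]; continuity at t = 1/2 is exactly the
    Glueing Lemma applied to the closed sets [0, 1/2] and [1/2, 1].
    """
    def gf(t):
        if t <= 0.5:
            return f(2 * t)
        return g(2 * t - 1)
    return gf

# A path in R^2 from (0, 0) to (1, 0), glued to a path from (1, 0) to (1, 1).
f = lambda t: (t, 0.0)
g = lambda t: (1.0, t)
gf = path_product(f, g)
print(gf(0.0), gf(0.5), gf(1.0))  # (0.0, 0.0) (1.0, 0.0) (1.0, 1.0)
```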
.2 Let \( \mathcal{F} \) be a family of path connected subsets of a metric space \( X \) such that \( \bigcap \mathcal{F} \neq \varnothing \) . Prove that \( \bigcup \mathcal{F} \) is path connected. .3 Let \( {\left( {S}_{n}\right) }_{n = 1}^{\infty } \) be a sequence of path connected subsets of a metric space \( X \) such that for each \( n \geq 1 \) , \[ {S}_{n} \cap \mathop{\bigcup }\limits_{{i = 1}}^{{n - 1}}{S}_{i} \neq \varnothing \] Prove that \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{S}_{n} \) is path connected. ## 3.5 Product Metric Spaces Let \( \left( {{X}_{1},{\rho }_{1}}\right) \) and \( \left( {{X}_{2},{\rho }_{2}}\right) \) be nonempty \( {}^{2} \) metric spaces, and \( X \) their Cartesian product \( {X}_{1} \times {X}_{2} \) . Throughout this section we use such notations as \( x = \left( {{x}_{1},{x}_{2}}\right) ,{x}^{\prime } = \left( {{x}_{1}^{\prime },{x}_{2}^{\prime }}\right) \), and \( a = \left( {{a}_{1},{a}_{2}}\right) \) for points of \( X \) ; we write \( {B}_{k}\left( {{a}_{k}, r}\right) \) (respectively, \( {\bar{B}}_{k}\left( {{a}_{k}, r}\right) \) ) for the open (respectively, closed) ball in \( {X}_{k} \) with centre \( {a}_{k} \) and radius \( r \) . It is a simple exercise to show that the mapping \( \rho : X \times X \rightarrow \mathbf{R} \) defined by \[ \rho \left( {x, y}\right) = \max \left\{ {{\rho }_{1}\left( {{x}_{1},{y}_{1}}\right) ,{\rho }_{2}\left( {{x}_{2},{y}_{2}}\right) }\right\} \] is a metric-called the product metric-on \( X \) ; taken with this metric, \( X \) is called the product of the metric spaces \( {X}_{1} \) and \( {X}_{2} \) . We assume that \( X \) carries this metric in the remainder of this section. There are at least two other natural metrics on the set \( X \) : namely, the metrics \( {\rho }^{\prime } \) and \( {\rho }^{\prime \prime } \) defined by \[ {\rho }^{\prime }\left( {x, y}\right) = \sqrt{{\rho }_{1}{\left( {x}_{1},{y}_{1}\right) }^{2} + {\rho }_{2}{\left( {x}_{2},{y}_{2}\right) }^{2}} \] and \[ {\rho }^{\prime \prime }\left( {x, y}\right) = {\rho }_{1}\left( {{x}_{1},{y}_{1}}\right) + {\rho }_{2}\left( {{x}_{2},{y}_{2}}\right) . \] --- \( {}^{2} \) The requirement that \( {X}_{1} \) and \( {X}_{2} \) be nonempty enables us to avoid some minor complications. --- Since \[ \rho \left( {x, y}\right) \leq {\rho }^{\prime }\left( {x, y}\right) \leq {\rho }^{\prime \prime }\left( {x, y}\right) \leq {2\rho }\left( {x, y}\right) , \] the identity mapping \( {i}_{X} \) (see Exercise (3.2.1:1)) is uniformly continuous when its domain and range are given any of the metrics \( \rho ,{\rho }^{\prime },{\rho }^{\prime \prime } \) . Hence, in particular, each of these three metrics gives rise to the same topology (family of open sets) on \( X \) ; that is, the metrics are equivalent (see Exercise (3.1.3: 6)). (3.5.1) Lemma. The open ball with centre a and radius \( r \) in the product space \( X \) is \( {B}_{1}\left( {{a}_{1}, r}\right) \times {B}_{2}\left( {{a}_{2}, r}\right)
\), and the closed ball with centre a and radius \( r \) in \( X \) is \( {\bar{B}}_{1}\left( {{a}_{1}, r}\right) \times {\bar{B}}_{2}\left( {{a}_{2}, r}\right) \) . Proof. For example, we have \[ \rho \left( {a, x}\right) < r\; \Leftrightarrow \;\max \left\{ {{\rho }_{1}\left( {{a}_{1},{x}_{1}}\right) ,{\rho }_{2}\left( {{a}_{2},{x}_{2}}\right) }\right\} < r \] \[ \Leftrightarrow \;{\rho }_{1}\left( {{a}_{1},{x}_{1}}\right) < r\text{ and }{\rho }_{2}\left( {{a}_{2},{x}_{2}}\right) < r, \] so \( B\left( {a, r}\right) = {B}_{1}\left( {{a}_{1}, r}\right) \times {B}_{2}\left( {{a}_{2}, r}\right) \) . (3.5.2) Proposition. If \( {A}_{1} \) is open in \( {X}_{1} \), and \( {A}_{2} \) is open in \( {X}_{2} \), then \( {A}_{1} \times {A}_{2} \) is open in \( X \) . Proof. Let \[ a \in A = {A}_{1} \times {A}_{2} \] Then \( {a}_{1} \in {A}_{1} \) and \( {a}_{2} \in {A}_{2} \) ; so there exist \( {r}_{1},{r}_{2} > 0 \) such that \( {B}_{1}\left( {{a}_{1},{r}_{1}}\right) \subset \) \( {A}_{1} \) and \( {B}_{2}\left( {{a}_{2},{r}_{2}}\right) \subset {A}_{2} \) . Let \( r = \min \left\{ {{r}_{1},{r}_{2}}\right\} \) ; then by Lemma (3.5.1), \[ B\left( {a, r}\right) \subset {B}_{1}\left( {{a}_{1},{r}_{1}}\right) \times {B}_{2}\left( {{a}_{2},{r}_{2}}\right) \subset A. \] Hence \( a \in {A}^{ \circ } \), and so \( A \) is open in \( X \) . (3.5.3) Corollary. If \( {U}_{k} \) is a neighbourhood of \( {x}_{k} \) in \( {X}_{k} \), then \( {U}_{1} \times {U}_{2} \) is a neighbourhood of \( x \) in \( X \) . Proof. Choose an open set \( {A}_{k} \) in \( {X}_{k} \) such that \( {x}_{k} \in {A}_{k} \subset {U}_{k} \) . Then \[ \left( {{x}_{1},{x}_{2}}\right) \in {A}_{1} \times {A}_{2} \subset {U}_{1} \times {U}_{2} \] where, by the previous proposition, \( {A}_{1} \times {A}_{2} \) is an open subset of \( X \) . The mapping \( {\operatorname{pr}}_{k} : X \rightarrow {X}_{k} \) defined by \[ {\operatorname{pr}}_{k}\left( {{x}_{1},{x}_{2}}\right) = {x}_{k} \] is called the projection of \( X \) onto \( {X}_{k} \) . (3.5.4) Proposition. If \( A \) is an open set in \( X \), then \( {\operatorname{pr}}_{k}\left( A\right) \) is open in the space \( {X}_{k} \) . Proof. Consider any \( {x}_{1} \in {X}_{1} \) . Either \[ A\left( {x}_{1}\right) = \left\{ {{x}_{2} \in {X}_{2} : \left( {{x}_{1},{x}_{2}}\right) \in A}\right\} \] is empty and therefore open, or else there exists \( {x}_{2} \in A\left( {x}_{1}\right) \) . 
In the latter case, since \( A \) is open, we can choose \( r > 0 \) such that \( B\left( {x, r}\right) \subset A \), where \( x = \left( {{x}_{1},{x}_{2}}\right) \) . If \( {x}_{2}^{\prime } \in {X}_{2} \) and \( {\rho }_{2}\left( {{x}_{2},{x}_{2}^{\prime }}\right) < r \), then \[ \rho \left( {x,\left( {{x}_{1},{x}_{2}^{\prime }}\right) }\right) = {\rho }_{2}\left( {{x}_{2},{x}_{2}^{\prime }}\right) < r \] so \( \left( {{x}_{1},{x}_{2}^{\prime }}\right) \in A \) . Hence \( A\left( {x}_{1}\right) \) is open in this case also. Since \[ {\operatorname{pr}}_{2}\left( A\right) = \mathop{\bigcup }\limits_{{{x}_{1} \in {X}_{1}}}A\left( {x}_{1}\right) \] a union of open sets, it follows that \( {\operatorname{pr}}_{2}\left( A\right) \) is open in \( {X}_{2} \) . A similar argument shows that \( {\operatorname{pr}}_{1}\left( A\right) \) is open in \( {X}_{1} \) . Note that the projections of a closed subset of \( X \) need not be closed; see the remarks following the proof of Proposition (3.2.2) on page 137. (3.5.5) Proposition. If \( {A}_{1} \subset {X}_{1} \) and \( {A}_{2} \subset {X}_{2} \), then \[ \overline{{A}_{1} \times {A}_{2}} = \overline{{A}_{1}} \times \overline{{A}_{2}} \] Proof. Let \( a \in \overline{{A}_{1}} \times \overline{{A}_{2}} \) . Then for each \( \varepsilon > 0 \) there exist \( {x}_{1} \in {A}_{1} \) and \( {x}_{2} \in {A}_{2} \) such that \( {\rho }_{1}\left( {{a}_{1},{x}_{1}}\right) < \varepsilon \) and \( {\rho }_{2}\left( {{a}_{2},{x}_{2}}\right) < \varepsilon \) ; whence \( \rho \left( {a, x}\right) < \varepsilon \) , where \[ x = \left( {{x}_{1},{x}_{2}}\right) \in {A}_{1} \times {A}_{2} \] Thus \( \overline{{A}_{1}} \times \overline{{A}_{2}} \subset \overline{{A}_{1} \times {A}_{2}} \) . On the other hand, if \( a \notin \overline{{A}_{1}} \times \overline{{A}_{2}} \), then either \( {a}_{1} \notin \overline{{A}_{1}} \) or \( {a}_{2} \notin \overline{{A}_{2}} \) . Taking, for example, the first alternative, we see from Exercise (3.1.3: 3) and Corollary (3.5.3) that the set \( \left( {{X}_{1} \smallsetminus \overline{{A}_{1}}}\right) \times {X}_{2} \), which is clearly disjoint from \( {A}_{1} \times {A}_{2} \), is a neighbourhood of \( a \) ; thus \( a \notin \overline{{A}_{1} \times {A}_{2}} \) . Hence \[ X \smallsetminus \left( {\overline{{A}_{1}} \times \overline{{A}_{2}}}\right) \subset X \smallsetminus \overline{{A}_{1} \times {A}_{2}} \] so \( \overline{{A}_{1} \times {A}_{2}} \subset \overline{{A}_{1}} \times \overline{{A}_{2}} \), and therefore \( \overline{{A}_{1} \times {A}_{2}} = \overline{{A}_{1}} \times \overline{{A}_{2}} \) . (3.5.6) Corollary. \( \;{A}_{1} \times {A}_{2} \) is closed in \( X \) if and only if \( {A}_{k} \) is closed in \( {X}_{k} \) for each \( k \) . Proof. This follows immediately from the last proposition. A mapping \( f \) from a set \( E \) into \( X = {X}_{1} \times {X}_{2} \) can be identified with the ordered pair \( \left( {{\operatorname{pr}}_{1} \circ f,{\operatorname{pr}}_{2} \circ f}\right) \) ; where there is no risk of confusion, we write \( {f}_{k} \) for the mapping \( {\operatorname{pr}}_{k} \circ f \) of \( E \) into \( {X}_{k} \), so that \( f = \left( {{f}_{1},{f}_{2}}\right) \) . (3.5.7) Proposition. Let \( f \) be a mapping of a metric space \( \left( {E, d}\right) \) into \( X \) . Then \( f \) is continuous at \( a \in E \) if and only if both \( {f}_{1} \) and \( {f}_{2} \) are continuous at a. Proof. Suppose that for each \( k,{f}_{k} \) is continuous at \( {a}_{k} \) . 
Given \( \varepsilon > 0 \) , choose \( {\delta }_{k} > 0 \) such that if \( d\left( {a, x}\right) < {\delta }_{k} \), then \( {\rho }_{k}\left( {{f}_{k}\left( a\right) ,{f}_{k}\left( x\right) }\right) < \varepsilon \) . If \( d\left( {a, x}\right) < \min \left\{ {{\delta }_{1},{\delta }_{2}}\right\} \), then \[ \rho \left( {f\left( a\right), f\left( x\right) }\right) = \max \left\{ {{\rho }_{1}\left( {{f}_{1}\left( a\right) ,{f}_{1}\left( x\right) }\right) ,{\rho }_{2}\left( {{f}_{2}\left( a\right) ,{f}_{2}\left( x\right) }\right) }\right\} < \varepsilon . \] Thus \( f \) is continuous at \( a \) . To prove the converse, first note that, trivially, \( {\operatorname{pr}}_{k} \) is continuous on \( X \) ; so if \( f \) is continuous at \( a \), then so is \( {\operatorname{pr}}_{k} \circ f \), by Proposition (3.2.3). (3.5.8) Proposition. Let \( f \) be a mapping of a metric space \( E \) into \( X \) . Then \( f \) is uniformly continuous if and only if both \( {f}_{1} \) and \( {f}_{2} \) are uniformly continuous. Proof. This is left as an exercise. ## (3.5.9) Exercises . 1 Prove that if a mapping \( f \) of \( X \) into a metric space \( Y \) is continuous at \( \left( {a, b}\right) \), then the mappings \( {x}_{1} \mapsto f\left( {{x}_{1}, b}\right) \) and \( {x}_{2} \mapsto f\left( {a,{x}_{2}}\right) \) are continuous at \( a \) and \( b \), respectively. .2 Prove Proposition (3.5.8). .3 Let \( E \) be a metric space, \( A \subset E \), and \( a \in \overline{A\smallsetminus \{ a\} } \) . Prove that a mapping \( f : E \rightarrow X \) has a limit at \( a \) with respect to \( A \) if and only if both \( {b}_{1} = \mathop{\lim }\limits_{{t \rightarrow a,\;t \in A}}{f}_{1}\left( t\right) \) and \( {b}_{2} = \mathop{\lim }\limits_{{t \rightarrow a,\;t \in A}}{f}_{2}\left( t\right) \) exist, in which case \( \mathop{\lim }\limits_{{t \rightarrow a, t \in A}}f\left( t\right) = \left( {{b}_{1},{b}_{2}}\right) \) . .4 Prove that a sequence \( \left( {x}_{n}\right) \) in \( X \) converges to a limit in \( X \) if and only if both \( {\xi }_{1} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\operatorname{pr}}_{1}\left( {x}_{n}\right) \) and \( {\xi }_{2} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\operatorname{pr}}_{2}\left( {x}_{n}\right) \) exist, in which case \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} = \left( {{\xi }_{1},{\xi }_{2}}\right) \) . .5 Prove that a sequence \( \left( {x}_{n}\right) \) in \( X \) is a Cauchy sequence if and only if \( \left( {{\operatorname{pr}}_{1}\left( {x}_{n}\right) }\right) \) is a Cauchy sequence in \( {X}_{1} \) and \( \left( {{\operatorname{pr}}_{2}\left( {x}_{n}\right) }\right) \) is a Cauchy sequence in \( {X}_{2} \) . .6 For \( i = 1,2 \) let \( {X}_{i},{Y}_{i} \) be metric spaces, and \( {f}_{i} \) a mapping of \( {X}_{i} \) into \( {Y}_{i} \) . Prove that the mapping \[ \left( {{x}_{1},{x}_{2}}\right) \mapsto \left( {{f}_{1}\left( {x}_{1}\right) ,{f}_{2}\left( {x}_{2}\right) }\right)
\] of \( {X}_{1} \times {X}_{2} \) into \( {Y}_{1} \times {Y}_{2} \) is continuous if and only if both \( {f}_{1} \) and \( {f}_{2} \) are continuous. We have now reached the main result of this section. (3.5.10) Proposition. Let \( T \) be any one of the following types of metric space: complete, totally bounded, compact. Then the product \( X = {X}_{1} \times {X}_{2} \) of two nonempty metric spaces \( {X}_{1} \) and \( {X}_{2} \) is of type \( T \) if and only if both \( {X}_{1} \) and \( {X}_{2} \) are of type \( T \) . Proof. Leaving the necessity of the stated conditions as an exercise, we prove their sufficiency. To this end, assume that \( {X}_{1} \) and \( {X}_{2} \) are complete, and consider a Cauchy sequence \( \left( {x}_{n}\right) \) in \( X \) . By Exercise (3.5.9:5), \( {\left( {\operatorname{pr}}_{k}\left( {x}_{n}\right) \right) }_{n = 1}^{\infty } \) is a Cauchy sequence in \( {X}_{k} \) ; since \( {X}_{k} \) is complete, \[ {\xi }_{k} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\operatorname{pr}}_{k}\left( {x}_{n}\right) \] exists. Reference to Exercise (3.5.9: 4) shows that \( \left( {x}_{n}\right) \) converges to the point \( \left( {{\xi }_{1},{\xi }_{2}}\right) \) of \( X \) . Hence \( X \) is complete. It is easy to see that if \( \varepsilon > 0 \) and \( {F}_{k} \) is a finite \( \varepsilon \) -approximation to \( {X}_{k} \) , then \( {F}_{1} \times {F}_{2} \) is a finite \( \varepsilon \) -approximation to \( X \) . It follows that if \( {X}_{1} \) and \( {X}_{2} \) are totally bounded, so is \( X \) . The first two parts of the proof, and Theorem (3.3.9), show that if \( {X}_{1} \) and \( {X}_{2} \) are compact, then so is \( X \) . ## (3.5.11) Exercises .1 Prove that the product of two discrete metric spaces is discrete. .2 In the notation of Proposition (3.5.10), prove that if \( X \) is of type \( T \) , then so are \( {X}_{1} \) and \( {X}_{2} \) . .3 Prove that \( X \) is separable if and only if both \( {X}_{1} \) and \( {X}_{2} \) are separable. .4 Prove that \( X \) is locally compact if and only if both \( {X}_{1} \) and \( {X}_{2} \) are locally compact. .5 Prove that \( X \) is connected (respectively, path connected) if and only if both \( {X}_{1} \) and \( {X}_{2} \) are connected (respectively, path connected). .6 Prove that a subset of the product space \( {\mathbf{R}}^{2} \) or \( {\mathbf{C}}^{2} \) is compact if and only if it is closed and bounded. .7 Prove that the Euclidean spaces \( {\mathbf{R}}^{2} \) and \( {\mathbf{C}}^{2} \) are complete. 
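The componentwise reasoning in the proof of Proposition (3.5.10), and the completeness of \( {\mathbf{R}}^{2} \) asked for in the last exercise, can be seen numerically. The following Python sketch (an ad hoc illustration; the names are ours) computes the product metric \( \max \left\{ {{\rho }_{1},{\rho }_{2}}\right\} \) on \( \mathbf{R} \times \mathbf{R} \) and checks on one sequence that the distance to the limit in the product metric shrinks to 0 precisely because both coordinate errors do.

```python
def product_metric(x, y):
    # max{rho_1(x_1, y_1), rho_2(x_2, y_2)} with rho_1 = rho_2 = usual metric on R
    return max(abs(x[0] - y[0]), abs(x[1] - y[1]))

# x_n = (1/n, 1 - 1/n) should converge to (0, 1) in the product space R x R.
limit = (0.0, 1.0)
xs = [(1.0 / n, 1.0 - 1.0 / n) for n in range(1, 1001)]

# The distance to the limit is the larger of the two coordinate errors.
for n in (1, 10, 1000):
    x = xs[n - 1]
    coord_errors = (abs(x[0] - limit[0]), abs(x[1] - limit[1]))
    print(n, product_metric(x, limit), coord_errors)
```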
.8 Show that in the product space \( {\mathbf{R}}^{2} \) the set \[ X = \left( {\{ 0\} \times \left\lbrack {0,1}\right\rbrack }\right) \cup \left( {\left\lbrack {0,1}\right\rbrack \times \{ 0\} }\right) \] is compact, that every ball in \( X \) is connected, but that the closure of an open ball in \( X \) need not be the corresponding closed ball (cf. Exercise (3.4.10: 3)). We define the product of a finite family \( \left( {{X}_{1},{\rho }_{1}}\right) ,\ldots ,\left( {{X}_{n},{\rho }_{n}}\right) \) of metric spaces to be the metric space \( \left( {X,\rho }\right) \), where \[ X = {X}_{1} \times \cdots \times {X}_{n} \] and \[ \rho \left( {\left( {{x}_{1},\ldots ,{x}_{n}}\right) ,\left( {{y}_{1},\ldots ,{y}_{n}}\right) }\right) = \max \left\{ {{\rho }_{i}\left( {{x}_{i},{y}_{i}}\right) : i = 1,\ldots, n}\right\} . \] The results proved so far in this section extend in the obvious ways to a product of more than two, but finitely many, metric spaces. The final set of exercises in this chapter shows how we can handle the product of a sequence of metric spaces. ## (3.5.12) Exercises . 1 Let \( {\left( \left( {X}_{n},{\rho }_{n}\right) \right) }_{n = 1}^{\infty } \) be a sequence of nonempty metric spaces such that \( \operatorname{diam}\left( {X}_{n}\right) \leq 1 \) for each \( n \) . Let \( X \) be the set of all sequences \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) such that \( {x}_{n} \in {X}_{n} \) for each \( n \), and define a mapping \( \rho : X \times X \rightarrow \mathbf{R} \) by \[ \rho \left( {\left( {x}_{n}\right) ,\left( {y}_{n}\right) }\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}{\rho }_{n}\left( {{x}_{n},{y}_{n}}\right) . \] Prove that \( \rho \) is a metric on \( X \) . The metric space \( \left( {X,\rho }\right) \) is called the product of the sequence \( \left( {X}_{n}\right) \) of metric spaces and is usually denoted by \( \mathop{\prod }\limits_{{n = 1}}^{\infty }{X}_{n} \) . The next four exercises use the notation of Exercise (3.5.12:1). .2 Let \( x = {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) be a point of \( X \) . Prove that \( U \subset X \) is a neighbourhood of \( x \) in \( X \) if and only if for some positive integer \( m \) and some \( r > 0, U \) contains a set of the form \[ {U}_{m}\left( {x, r}\right) = \left\{ {{\left( {y}_{n}\right) }_{n = 1}^{\infty } \in X : {\rho }_{i}\left( {{x}_{i},{y}_{i}}\right) \leq r\text{ for }1 \leq i \leq m}\right\} . \] .3 For each \( n \) let \( {A}_{n} \) be a subset of \( {X}_{n} \) . Prove that the closure of \( \mathop{\prod }\limits_{{n = 1}}^{\infty }{A}_{n} \) in \( X \) is \( \mathop{\prod }\limits_{{n = 1}}^{\infty }\overline{{A}_{n}}. \) .4 For each \( k \) let \( {x}_{k} = {\left( {x}_{k, n}\right) }_{n = 1}^{\infty } \) be a point of \( X \) . Prove that the sequence \( {\left( {x}_{k}\right) }_{k = 1}^{\infty } \) converges in \( X \) to a limit \( a = {\left( {a}_{n}\right) }_{n = 1}^{\infty } \) if and only if for each \( n \) the sequence \( {\left( {x}_{k, n}\right) }_{k = 1}^{\infty } \) converges to \( {a}_{n} \) in \( {X}_{n} \) . Prove also that \( {\left( {x}_{k}\right) }_{k = 1}^{\infty } \) is a Cauchy sequence in \( X \) if and only if for each \( n \) the sequence \( {\left( {x}_{k, n}\right) }_{k = 1}^{\infty } \) is a Cauchy sequence in \( {X}_{n} \) . .5 With \( T \) as in Proposition (3.5.10), prove that \( X \) is of type \( T \) if and only if \( {X}_{n} \) is of type \( T \) for each \( n \) . 
.6 Let \( {\left( {X}_{n}\right) }_{n = 1}^{\infty } \) be a sequence of discrete metric spaces, each having positive diameter \( \leq 1 \) . Prove that the product space \( \mathop{\prod }\limits_{{n = 1}}^{\infty }{X}_{n} \) is not discrete. 4 Analysis in Normed Linear Spaces ...I could be bounded in a nutshell, and count myself a king of infinite space... HAMLET, Act 2, Scene 2 Many significant applications of analysis are the fruit of cross-fertilisation between metric structure and algebraic structure. In this chapter we discuss such a cross-breed: a normed (linear) space. Section 1 introduces these objects and deals with their elementary analytic and geometric properties. In Section 2 we discuss linear mappings between normed spaces, paying particular attention to bounded linear functionals - continuous linear mappings into \( \mathbf{R} \) and \( \mathbf{C} \) . Although many of the most important normed spaces of analysis are infinite-dimensional, finite-dimensional ones remain significant in many ways; they are dealt with in Section 3. The next two sections deal with two fundamental classes of infinite-dimensional complete normed spaces: the \( {L}_{p} \) integration spaces and the space \( \mathcal{C}\left( X\right) \) of continuous functions from a compact metric space \( X \) into \( \mathbf{R} \) . They also characterise the associated bounded linear functionals. Two of the most important results about \( \mathcal{C}\left( X\right) \) -Ascoli’s Theorem and the Stone-Weierstrass Theorem (a far-reaching generalisation of the classical Weierstrass Approximation Theorem)-are proved in Sections 5 and 6. Both of these theorems reappear in the final section of the chapter, where they are applied to the concrete classical problem of solving ordinary differential equations. ## 4.1 Normed Linear Spaces Metric spaces offer one context within which the analyt
ic and topological properties of \( \mathbf{R} \) can be generalised, but they do not provide a natural framework for a generalisation of the algebraic properties of \( \mathbf{R} \) . A framework of the latter sort is made available by the notion of a normed linear space. Let \( \mathbf{F} \) stand for either \( \mathbf{R} \) or \( \mathbf{C} \), and let \( X \) be a linear space (vector space) over F. A norm on \( X \) is a mapping \( x \mapsto \parallel x\parallel \) of \( X \) into \( \mathbf{R} \) such that the following properties hold for all \( x, y \in X \) and \( \lambda \in \mathbf{F} \) . N1 \( \parallel x\parallel \geq 0 \) . \( \mathbf{{N2}}\;\parallel x\parallel = 0 \) if and only if \( x = 0. \) N3 \( \parallel {\lambda x}\parallel = \left| \lambda \right| \parallel x\parallel \) . N4 \( \;\parallel x + y\parallel \leq \parallel x\parallel + \parallel y\parallel \; \) (triangle inequality). A normed linear space, or normed space, over \( \mathbf{F} \) is a pair \( \left( {X,\parallel \cdot \parallel }\right) \) consisting of a linear space \( X \) over \( \mathbf{F} \) and a norm \( \parallel \cdot \parallel \) on \( X \) ; by abuse of language, we refer to the linear space \( X \) itself as a normed space if it is clear from the context which norm is under consideration. We say that the normed space \( X \) is real or complex, depending on whether \( \mathbf{F} \) is \( \mathbf{R} \) or \( \mathbf{C} \) . A vector with norm 1 is called a unit vector. The simplest example of a norm is, of course, the mapping \( x \mapsto \left| x\right| \) on \( \mathbf{F} \) . If \( X \) is a normed space, then the mapping \( \left( {x, y}\right) \mapsto \parallel x - y\parallel \) of \( X \times X \) into \( \mathbf{R} \) is a metric on \( X \) (Exercise (4.1.1:1)), and is said to be associated with the norm on \( X \) . When we consider \( X \) as a metric space, it is understood that we are referring to the metric associated with the given norm on \( X \) . By the unit ball of \( X \) we mean the closed ball with centre 0 and radius 1, \[ \bar{B}\left( {0,1}\right) = \{ x \in X : \parallel x\parallel \leq 1\} \] relative to the metric associated with the norm on \( X \) . ## (4.1.1) Exercises .1 Prove that \( \rho \left( {x, y}\right) = \parallel x - y\parallel \) defines a metric on a normed space \( X \) , such that \[ \rho \left( {x + z, y + z}\right) = \rho \left( {x, y}\right) \] \[ \rho \left( {{\lambda x},{\lambda y}}\right) = \left| \lambda \right| \rho \left( {x, y}\right) \] for all \( x, y, z \in X \) and \( \lambda \in \mathbf{F} \) . .2 Show that \[ \left| {\parallel x\parallel - \parallel y\parallel }\right| \leq \parallel x - y\parallel \] for all vectors \( x, y \) in a normed space \( X \) . Hence prove that if a sequence \( \left( {x}_{n}\right) \) converges to a limit \( x \) in \( X \), then \( \parallel x\parallel = \mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{x}_{n}\end{Vmatrix} \) . .3 Prove that \[ \parallel x\parallel = \inf \left\{ {{\left| t\right| }^{-1} : t \in \mathbf{F}, t \neq 0,\parallel {tx}\parallel \leq 1}\right\} \] for each element \( x \) of a normed space \( X \) . 
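Both the inequality in the second exercise and the formula in the third are easy to test numerically before proving them. The sketch below (Python, with the Euclidean norm on \( {\mathbf{R}}^{3} \) standing in for a general norm; the helper names are ours) checks \( \left| {\parallel x\parallel - \parallel y\parallel }\right| \leq \parallel x - y\parallel \) on random vectors and approximates \( \inf \left\{ {{\left| t\right| }^{-1} : t \neq 0,\parallel {tx}\parallel \leq 1}\right\} \) by sampling, which should come out close to \( \parallel x\parallel \).

```python
import math
import random

def norm(x):
    # Euclidean norm on R^n, one concrete instance of N1-N4
    return math.sqrt(sum(t * t for t in x))

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(3)]
y = [random.uniform(-1.0, 1.0) for _ in range(3)]

# Exercise .2: | ||x|| - ||y|| | <= ||x - y||.
assert abs(norm(x) - norm(y)) <= norm([a - b for a, b in zip(x, y)]) + 1e-12

# Exercise .3: ||x|| = inf{ 1/|t| : t != 0, ||t x|| <= 1 }, approximated by sampling t.
samples = (random.uniform(-5.0, 5.0) for _ in range(100000))
admissible = [1.0 / abs(t) for t in samples if t != 0.0 and abs(t) * norm(x) <= 1.0]
print(norm(x), min(admissible))  # the two values should nearly agree
```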
.4 Prove that for each positive integer \( n \) the mappings \[ \left( {{x}_{1},\ldots ,{x}_{n}}\right) \mapsto \max \left\{ {\left| {x}_{1}\right| ,\ldots ,\left| {x}_{n}\right| }\right\} \] \[ \left( {{x}_{1},\ldots ,{x}_{n}}\right) \mapsto \sqrt{{x}_{1}^{2} + \cdots + {x}_{n}^{2}} \] \[ \left( {{x}_{1},\ldots ,{x}_{n}}\right) \mapsto \left| {x}_{1}\right| + \cdots + \left| {x}_{n}\right| \] are norms on \( {\mathbf{F}}^{n} \) . In the case \( \mathbf{F} = \mathbf{R} \) the second of these norms is called the Euclidean norm on \( {\mathbf{R}}^{n} \), and the associated metric is the Euclidean metric (see Exercise (3.1.1: 5)). .5 Let \( X \) be a nonempty set, and denote by \( \mathcal{B}\left( {X,\mathbf{F}}\right) \) the set of all bounded mappings of \( X \) into \( \mathbf{F} \), taken with the pointwise operations of addition and multiplication-by-scalars: \[ \left( {f + g}\right) \left( x\right) = f\left( x\right) + g\left( x\right) \] \[ \left( {\lambda f}\right) \left( x\right) = {\lambda f}\left( x\right) \text{.} \] The supremum norm, or sup norm, on \( \mathcal{B}\left( {X,\mathbf{F}}\right) \) is defined by \[ \parallel f\parallel = \sup \{ \left| {f\left( x\right) }\right| : x \in X\} . \] Verify that the sup norm is a norm on \( \mathcal{B}\left( {X,\mathbf{F}}\right) \) . .6 Prove that \( \parallel f{\parallel }_{1} = \int \left| f\right| \) defines a norm on the set \( {L}_{1}\left( \mathbf{R}\right) \) of all Lebesgue integrable functions (defined almost everywhere) on \( \mathbf{R} \) , where two such functions are taken as equal if and only if they are equal almost everywhere. .7 Let \( {X}_{1},{X}_{2} \) be normed spaces over \( \mathbf{F} \), and recall that the standard operations of addition and multiplication-by-scalars on the product vector space \( X = {X}_{1} \times {X}_{2} \) are given by \[ \left( {{x}_{1},{x}_{2}}\right) + \left( {{x}_{1}^{\prime },{x}_{2}^{\prime }}\right) = \left( {{x}_{1} + {x}_{1}^{\prime },{x}_{2} + {x}_{2}^{\prime }}\right) , \] \[ \lambda \left( {{x}_{1},{x}_{2}}\right) = \left( {\lambda {x}_{1},\lambda {x}_{2}}\right) . \] Verify that the mapping \( \left( {{x}_{1},{x}_{2}}\right) \mapsto \max \left\{ {\begin{Vmatrix}{x}_{1}\end{Vmatrix},\begin{Vmatrix}{x}_{2}\end{Vmatrix}}\right\} \) is a norm on \( X \), and that the metric associated with this norm is the product metric on \( X \) (considered as the product of the metric spaces \( {X}_{1} \) and \( \left. {X}_{2}\right) \) . Taken with this norm, which we call the product norm, \( X \) is known as the product of the normed spaces \( {X}_{1} \) and \( {X}_{2} \) . The product norm and the product space for a finite number of normed spaces are defined analogously. (4.1.2) Proposition. Let \( X \) be a normed space over \( \mathbf{F} \) . Then (i) the mapping \( \left( {x, y}\right) \mapsto x + y \) is uniformly continuous on \( X \times X \) ; (ii) for each \( \lambda \in \mathbf{F} \) the mapping \( x \mapsto {\lambda x} \) is uniformly continuous on \( X \) ; (iii) for each \( x \in X \) the mapping \( \lambda \mapsto {\lambda x} \) is uniformly continuous on \( \mathbf{F} \) ; (iv) the mapping \( \left( {\lambda, x}\right) \mapsto {\lambda x} \) is continuous on \( \mathbf{F} \times X \) . Proof. 
The uniform continuity of the first three mappings follows from the relations \[ \begin{Vmatrix}{\left( {x + y}\right) - \left( {{x}^{\prime } + {y}^{\prime }}\right) }\end{Vmatrix} \leq \begin{Vmatrix}{x - {x}^{\prime }}\end{Vmatrix} + \begin{Vmatrix}{y - {y}^{\prime }}\end{Vmatrix}, \] \[ \parallel {\lambda x} - {\lambda y}\parallel = \left| \lambda \right| \parallel x - y\parallel \] \[ \begin{Vmatrix}{{\lambda x} - {\lambda }^{\prime }x}\end{Vmatrix} = \left| {\lambda - {\lambda }^{\prime }}\right| \parallel x\parallel . \] On the other hand, the relations \[ \begin{Vmatrix}{{\lambda x} - {\lambda }_{0}{x}_{0}}\end{Vmatrix} = \begin{Vmatrix}{{\lambda }_{0}\left( {x - {x}_{0}}\right) + \left( {\lambda - {\lambda }_{0}}\right) {x}_{0} + \left( {\lambda - {\lambda }_{0}}\right) \left( {x - {x}_{0}}\right) }\end{Vmatrix} \] \[ \leq \left| {\lambda }_{0}\right| \begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} + \left| {\lambda - {\lambda }_{0}}\right| \begin{Vmatrix}{x}_{0}\end{Vmatrix} + \left| {\lambda - {\lambda }_{0}}\right| \begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} \] easily lead to the continuity of \( \left( {\lambda, x}\right) \mapsto {\lambda x} \) at \( \left( {{\lambda }_{0},{x}_{0}}\right) \) . If \( X \) is a normed space and \( S \) is a linear subset of \( X \), then the restriction to \( S \) of the norm on \( X \) is a norm on \( S \) ; taken with this norm, \( S \) is called a normed linear subspace, or simply a subspace, of the normed space \( X \) . (4.1.3) Proposition. If \( S \) is a subspace of a normed space \( X \), then the closure of \( S \) in \( X \) is a subspace of \( X \) . Proof. Let \( f \) be the mapping \( \left( {x, y}\right) \mapsto x + y \) of \( X \times X \) into \( X \) . As \( S \) is a subspace, \( f \) maps \( S \times S \) into \( S \), so \[ S \times S \subset {f}^{-1}\left( S\right) \subset {f}^{-1}\left( \bar{S}\right) \] and therefore \( \overline{S \times S} \) is a subset of the closure of \( {f}^{-1}\left( \bar{S}\right) \) . Since, by Proposition (4.1.2), \( f \) is continuous on \( X \times X \), it follows from Proposition (3.2.2) that \( {f}^{-1}\left( \bar{S}\right) \) is a closed subset of \( X \) ; whence \( \overline{S \times S} \subset {f}^{-1}\left( \bar{S}\right) \) . But \( \overline{S \times S} = \overline{S} \times \overline{S} \), by Proposition (3.5.5); so if \( x \in \overline{S} \) and \( y \in \overline{S} \), then \( x + y \in \overline{S} \) . A similar argument, using the continuity of the mapping \( \left( {\lambda, x}\right) \mapsto {\lambda x} \), shows that if \( \lambda \in \mathbf{F} \) and \( x \in \bar{S} \), then \( {\lambda x} \in \bar{S} \) . (4.1.4) Lemma. If \( S \) is a closed subspace of a normed space \( X \), and \( a \in X \), then \[ a + S = \{ a + x : x \in S\} \] is closed in \( X \) . Proof. Let \( f \) be the mapping \( z \mapsto z - a \) of \( X \) into itself. Since \( f \) is the composition of the mappings \( z \mapsto \left( {z, - a}\right) \) and \( \left( {x, y}\right) \mapsto x + y \), it follows from Exercise (3.5.9:1), Proposition (4.1.2), and Proposition (3.2.3) that \( f \) is continuous on \( X \) . But \( a + S = {f}^{-1}\left( S\right) \) ; so, by Proposition (3.2.2), \( a + S \) is closed in \( X \) . ## (4.1.5) Exercises .1 Explain why, in Proposition (4.1.2), the mapping \( \left( {\lambda, x}\right) \mapsto {\lambda x} \) is not uniformly continuous on \( \mathbf{F} \times X \) . .2 Complete the proof of Proposition (4.1
.3). .3 Let \( \left( {\widehat{X},\rho }\right) \) be a metric space, and \( X \) a normed space such that (i) \( \rho \left( {x, y}\right) = \parallel x - y\parallel \) for all \( x, y \in X \) ; (ii) \( X \) is dense in \( \widehat{X} \) . Show that the operations of addition and multiplication-by-scalars can be extended uniquely to make \( \widehat{X} \) a normed space with associated metric the given metric \( \rho \) . (Use Propositions (4.1.2) and (3.2.12).) .4 Let \( A \) and \( B \) be nonempty subsets of a normed space \( X \), and define \[ A + B = \{ x + y : x \in A, y \in B\} . \] Prove that (i) if \( A \) is open, then \( A + B \) is open; (ii) if \( A \) is compact and \( B \) is closed, then \( A + B \) is closed. Need \( A + B \) be closed when \( A \) and \( B \) are closed? .5 Recall that a subset \( C \) of a linear space is said to be convex if \( {tx} + \) \( \left( {1 - t}\right) y \in C \) whenever \( x, y \in C \) and \( 0 \leq t \leq 1 \) . Prove that the closure of a convex subset of a normed space is convex. .6 Let \( C \) be a nonempty closed convex subset of a normed space \( X,{x}_{0} \) a point of \( X \smallsetminus C \), and \( r \) a positive number such that \( C \cap B\left( {{x}_{0}, r}\right) \) is empty. Prove that \( C + B\left( {0, r}\right) \) is open and convex, and that \( {x}_{0} \notin C + B\left( {0, r}\right) \) . .7 Let \( C \) be a nonempty convex subset of \( {\mathbf{R}}^{n} \), and \( {x}_{0} \) a point of \( \bar{C} \smallsetminus C \) . Prove that each open ball with centre \( {x}_{0} \) intersects the complement of \( \bar{C} \) . (First consider the case where \( C \) has a nonempty interior.) Does the conclusion hold if we drop the hypothesis that \( C \) is convex? In a tribute to one of the founders of functional analysis, the Polish mathematician Stefan Banach (1892-1945), a complete normed linear space is called a Banach space. Among examples of Banach spaces are - Euclidean \( n \) -space \( {\mathbf{R}}^{n} \) (Exercise (4.1.1:4)); - \( \mathcal{B}\left( {X,\mathbf{F}}\right) \) where \( X \) is a nonempty set and the norm is the sup norm (Exercise (4.1.1: 5)); - certain spaces of continuous or integrable functions that we consider in later sections of this chapter. ## (4.1.6) Exercises . 1 Let \( {c}_{0} \) be the real vector space (with termwise algebraic operations) consisting of all infinite sequences \( x = {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) in \( \mathbf{R} \) that converge to 0 .
For each \( x \in {c}_{0} \) write \[ \parallel x\parallel = \mathop{\sup }\limits_{{n \geq 1}}\left| {x}_{n}\right| \] Prove that this defines a norm on \( {c}_{0} \) with respect to which \( {c}_{0} \) is a separable Banach space. (For the second part, consider a Cauchy sequence \( {\left( {x}_{k}\right) }_{k = 1}^{\infty } \) in \( {c}_{0} \), where for each \( k,{x}_{k} = {\left( {x}_{k, n}\right) }_{n = 1}^{\infty } \) . Show that for each \( n,{\left( {x}_{k, n}\right) }_{k = 1}^{\infty } \) is a Cauchy sequence in \( \mathbf{R} \) . Denoting its limit by \( {\xi }_{n} \), show that \( {\left( {\xi }_{n}\right) }_{n = 1}^{\infty } \) belongs to \( {c}_{0} \) and is the limit of the sequence \( \left. {{\left( {x}_{k}\right) }_{k = 1}^{\infty }\text{in the space}{c}_{0}\text{.}}\right) \) .2 Let \( {l}_{1} \) denote the space of all sequences of real numbers such that the corresponding series is absolutely convergent, and for each \( x = \) \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \in {l}_{1} \) write \[ \parallel x{\parallel }_{1} = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {x}_{n}\right| \] Prove that this defines a norm on \( {l}_{1} \) with respect to which \( {l}_{1} \) is a separable Banach space. .3 Let \( {l}_{\infty } \) denote the space of all bounded sequences of real numbers, and for each \( x = {\left( {x}_{n}\right) }_{n = 1}^{\infty } \in {l}_{\infty } \) write \[ \parallel x{\parallel }_{\infty } = \mathop{\sup }\limits_{{n \geq 1}}\left| {x}_{n}\right| \] Prove that this defines a norm on \( {l}_{\infty } \) with respect to which \( {l}_{\infty } \) is a Banach space. .4 Prove that if \( X \) is a nonempty set, then \( \mathcal{B}\left( {X,\mathbf{F}}\right) \), taken with the supremum norm, is a Banach space. We now sketch how any normed space \( X \) can be embedded as a dense subspace of a Banach space. Defining \[ {\phi }_{x}\left( y\right) = \parallel x - y\parallel \;\left( {x, y \in X}\right) , \] \[ Y = \left\{ {{\phi }_{0} + f : f \in \mathcal{B}\left( {X,\mathbf{R}}\right) }\right\} \] \[ d\left( {F, G}\right) = \sup \{ \left| {F\left( x\right) - G\left( x\right) }\right| : x \in X\} \;\left( {F, G \in Y}\right) , \] we recall from Exercise (3.2.10:8) that \( \left( {Y, d}\right) \) is a complete metric space, that \( x \mapsto {\phi }_{x} \) is an isometric mapping of \( X \) onto a subset \( Z \) of \( Y \), and that the closure \( \widehat{X} \) of \( Z \) is a complete subspace of \( Y \) . We transport the algebraic structure from \( X \) to \( Z \) by defining \[ {\phi }_{x} + {\phi }_{y} = {\phi }_{x + y} \] \[ \lambda {\phi }_{x} = {\phi }_{\lambda x} \] for all \( x, y \in X \) and \( \lambda \in \mathbf{F} \) . Then \[ \begin{Vmatrix}{\phi }_{x}\end{Vmatrix} = d\left( {{\phi }_{x},{\phi }_{0}}\right) = \parallel x\parallel \] defines a norm on \( Z \) whose associated metric is the one induced by \( d \) . Using Exercise (4.1.5: 3), we can extend the operations of addition and multiplication-by-scalars uniquely from \( X \) (identified with its image under the mapping \( x \mapsto {\phi }_{x} \) ) to \( \widehat{X} \), thereby making \( \widehat{X} \) a Banach space in which there is a dense linear subspace isometric and algebraically isomorphic to \( X \) . In practice, we normally forget about the mapping \( x \mapsto {\phi }_{x} \) and regard \( X \) simply as a dense subspace of \( \widehat{X} \), which we call the completion of the normed space \( X \) . 
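By way of illustration of the completion, consider the linear subspace of \( {c}_{0} \) consisting of those sequences with only finitely many nonzero terms; for the purposes of this remark only, denote it by \( {c}_{00} \) and give it the sup norm. Given \( x = {\left( {x}_{n}\right) }_{n = 1}^{\infty } \in {c}_{0} \) and \( \varepsilon > 0 \), choose \( N \) such that \( \left| {x}_{n}\right| < \varepsilon \) for all \( n > N \); then
\[ \begin{Vmatrix}{x - \left( {{x}_{1},\ldots ,{x}_{N},0,0,\ldots }\right) }\end{Vmatrix} = \mathop{\sup }\limits_{{n > N}}\left| {x}_{n}\right| \leq \varepsilon . \]
Thus \( {c}_{00} \) is dense in the Banach space \( {c}_{0} \) of Exercise (4.1.6:1), and the completion of \( {c}_{00} \) may be identified with \( {c}_{0} \) itself.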
## (4.1.7) Exercise Fill in the details of the proof that the foregoing constructions provide \( \widehat{X} \) with the structure of a Banach space and that \( x \mapsto {\phi }_{x} \) is a norm-preserving algebraic isomorphism of \( X \) with a dense subspace of \( \widehat{X} \) . Banach spaces form the natural abstract context for the notion of convergence of series. Given a sequence \( \left( {x}_{n}\right) \) of elements of a normed space \( X \) , we define the corresponding series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} \) to be the sequence \( \left( {s}_{n}\right) \), where \( {s}_{n} = \mathop{\sum }\limits_{{k = 1}}^{n}{x}_{k} \) is the \( n \) th partial sum of the series. The series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} \) is said to be - convergent if the sequence \( \left( {s}_{n}\right) \) converges to a limit \( s \) in \( X \), called the sum of the series, - absolutely convergent if the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\begin{Vmatrix}{x}_{n}\end{Vmatrix} \) is convergent in \( \mathbf{R} \) , - unconditionally convergent if \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{f\left( n\right) } \) converges for each permutation \( f \) of \( {\mathbf{N}}^{ + } \) . In the first case we write \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} = s \) . ## (4.1.8) Exercises .1 Prove that a series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} \) in a Banach space \( X \) converges if and only if for each \( \varepsilon > 0 \) there exists a positive integer \( N \) such that \( \begin{Vmatrix}{\mathop{\sum }\limits_{{n = i + 1}}^{j}{x}_{n}}\end{Vmatrix} < \varepsilon \) whenever \( j > i \geq N \) . .2 Prove that an absolutely convergent series in a Banach space is unconditionally convergent. (See Exercise (1.2.17:1).) .3 Let \( X \) be a normed linear space, and suppose that each absolutely convergent series in \( X \) is convergent. Prove that \( X \) is a Banach space. (Given a Cauchy sequence \( \left( {x}_{n}\right) \) in \( X \), choose \( {n}_{1} < {n}_{2} < \cdots \) such that \( \begin{Vmatrix}{{x}_{i} - {x}_{j}}\end{Vmatrix} < {2}^{-k} \) for all \( i, j \geq {n}_{k} \) . Then consider the series \( \left. {\mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {{x}_{{n}_{k + 1}} - {x}_{{n}_{k}}}\right) \text{.}}\right) \) .4 In the Banach space \( {c}_{0} \) of Exercise (4.1.6:1), for each positive integer \( n \) let \( {x}_{n} \) be the element with \( n \) th term \( 1/n \) and all other terms 0 . Prove that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} \) is unconditionally convergent but not absolutely convergent. Exercises (1.2.17:1 and 2) show that a series in \( \mathbf{R} \) is unconditionally convergent if and only if it is absolutely convergent. Exercise (4.1.8: 4) shows that this need not be true if \( \mathbf{R} \) is replaced by an infinite-dimensional Banach space. In fact, if every unconditionally convergent series
in a Banach space \( X \) is absolutely convergent, then \( X \) is finite-dimensional; this is the Dvoretsky-Rogers Theorem (see [12], Chapter VI). Let \( X \) be a normed space over \( \mathbf{F} \), and \( S \) a linear subspace of \( X \) . Then \[ x \sim y\text{if and only if}x - y \in S \] defines an equivalence relation on \( X \) . The set of equivalence classes under this relation is written \( X/S \) and is called the quotient space of \( X \) by \( S \) . The canonical map \( \varphi \) of \( X \) onto \( X/S \) is defined by \[ \varphi \left( x\right) = \{ x + s : s \in S\} \] and maps each element of \( X \) to its equivalence class under \( \sim \) . We define operations of addition and multiplication-by-scalars on \( X/S \) by \[ \varphi \left( x\right) + \varphi \left( y\right) = \varphi \left( {x + y}\right) \] \[ \varphi \left( {\lambda x}\right) = {\lambda \varphi }\left( x\right) \] These definitions are sound: for if \( x \sim {x}^{\prime } \) and \( y \sim {y}^{\prime } \), then \( x + y \sim {x}^{\prime } + {y}^{\prime } \) and \( {\lambda x} \sim \lambda {x}^{\prime } \) . If \( S \) is a closed linear subspace of \( X \), then \[ \parallel \varphi \left( x\right) \parallel = \rho \left( {x, S}\right) = \inf \{ \parallel x - s\parallel : s \in S\} \] defines a norm, called the quotient norm, on \( X/S \) . In that case we assume that \( X/S \) is equipped with the foregoing algebraic operations and with the quotient norm. ## (4.1.9) Exercises .1 Verify the claims made without proof in the preceding paragraph. .2 Prove that if \( S \) is closed in \( X \), then the canonical map \( \varphi : X \rightarrow X/S \) is uniformly continuous on \( S \) . (4.1.10) Proposition. If \( S \) is a closed linear subspace of a Banach space \( X \), then the quotient space \( X/S \) is a Banach space. Proof. Let \( \varphi \) be the canonical map of \( X \) onto \( X/S \), and consider a sequence \( \left( {x}_{n}\right) \) in \( X \) such that \( \left( {\varphi \left( {x}_{n}\right) }\right) \) is a Cauchy sequence in \( X/S \) . Choose a strictly increasing sequence \( {\left( {n}_{k}\right) }_{k = 1}^{\infty } \) of positive integers such that \[ \begin{Vmatrix}{\varphi \left( {x}_{{n}_{k + 1}}\right) - \varphi \left( {x}_{{n}_{k}}\right) }\end{Vmatrix} < {2}^{-k}\;\left( {k \geq 1}\right) . 
\] Setting \( {s}_{1} = 0 \), we construct inductively a sequence \( \left( {s}_{k}\right) \) in \( S \) such that for each \( k \) , \[ \begin{Vmatrix}{\left( {{x}_{{n}_{k + 1}} - {s}_{k + 1}}\right) - \left( {{x}_{{n}_{k}} - {s}_{k}}\right) }\end{Vmatrix} < {2}^{-k}. \] (1) Indeed, having constructed elements \( {s}_{1},\ldots ,{s}_{k} \) of \( S \) with the applicable properties, we have \[ \inf \left\{ {\begin{Vmatrix}{{x}_{{n}_{k + 1}} - \left( {{x}_{{n}_{k}} - {s}_{k}}\right) - s}\end{Vmatrix} : s \in S}\right\} \] \[ = \inf \left\{ {\begin{Vmatrix}{{x}_{{n}_{k + 1}} - {x}_{{n}_{k}} - s}\end{Vmatrix} : s \in S}\right\} \] \[ = \begin{Vmatrix}{\varphi \left( {{x}_{{n}_{k + 1}} - {x}_{{n}_{k}}}\right) }\end{Vmatrix} \] \[ = \begin{Vmatrix}{\varphi \left( {x}_{{n}_{k + 1}}\right) - \varphi \left( {x}_{{n}_{k}}\right) }\end{Vmatrix} < {2}^{-k}, \] so there exists \( {s}_{k + 1} \in S \) such that (1) holds. We now see from (1) that \( {\left( {x}_{{n}_{k}} - {s}_{k}\right) }_{k = 1}^{\infty } \) is a Cauchy sequence in the Banach space \( X \) ; whence it converges to a limit \( z \) in \( X \) . By Exercise \( \left( {{4.1.9} : 2}\right) \) , \[ \varphi \left( {x}_{{n}_{k}}\right) = \varphi \left( {x}_{{n}_{k}}\right) - \varphi \left( {s}_{k}\right) = \varphi \left( {{x}_{{n}_{k}} - {s}_{k}}\right) \rightarrow \varphi \left( z\right) \text{ as }k \rightarrow \infty . \] Thus the Cauchy sequence \( \left( {\varphi \left( {x}_{n}\right) }\right) \) has a convergent subsequence. It follows from Exercise (3.2.10: 3) that \( \left( {\varphi \left( {x}_{n}\right) }\right) \) itself converges in \( X/S \) . ## 4.2 Linear Mappings and Hyperplanes In the context of normed spaces, the important mappings are not just continuous but also preserve the algebraic structure. Recall that a mapping \( u \) between vector spaces \( X, Y \) is linear if \[ u\left( {x + y}\right) = u\left( x\right) + u\left( y\right) \] and \[ u\left( {\lambda x}\right) = {\lambda u}\left( x\right) \] whenever \( x, y \in X \) and \( \lambda \in \mathbf{F} \) . If \( Y = \mathbf{F} \), then \( u \) is called a linear functional on \( X \) . Examples of linear mappings are - the mapping \( x \mapsto {Ax} \) on \( {\mathbf{F}}^{n} \), where \( A \) is an \( n \) -by- \( n \) matrix over \( \mathbf{F} \) ; - the Lebesgue integral, regarded as a mapping of \( {L}_{1}\left( \mathbf{R}\right) \) into \( \mathbf{R} \) (see Exercise (4.1.1: 6)); - the mapping \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \mapsto {x}_{1} \) of \( {c}_{0} \) into \( \mathbf{R} \) (see Exercise (4.1.6:1)); - the canonical mapping of a normed space \( X \) onto the quotient space \( X/S \), where \( S \) is a closed subspace of \( X \) ; - the mapping \( x \mapsto {\phi }_{x} \) of a normed space onto a dense subspace of its completion (see page 179). Here is the fundamental result about the continuity of linear mappings between normed spaces. (4.2.1) Theorem. The following are equivalent conditions on a linear mapping \( u \) of a normed space \( X \) into a normed space \( Y \) . (i) \( u \) is continuous at 0 . (ii) \( u \) is continuous on \( X \) . (iii) \( u \) is uniformly continuous on \( X \) . (iv) \( u \) is bounded on the unit ball of \( X \) . (v) \( u \) is bounded on each bounded subset of \( X \) . (vi) There exists a positive number \( c \), called a bound for \( u \), such that \( \parallel u\left( x\right) \parallel \leq c\parallel x\parallel \) for all \( x \in X \) . Proof. Suppose that \( u \) is continuous at 0 . 
Then there exists \( r > 0 \) such that \[ \parallel u\left( x\right) \parallel = \parallel u\left( x\right) - u\left( 0\right) \parallel \leq 1 \] whenever \( \parallel x\parallel \leq r \) . For each nonzero \( t \in \mathbf{F} \) with \( \parallel {tx}\parallel \leq 1 \) we have \( \parallel {rtx}\parallel \leq r \) and therefore \[ \parallel u\left( x\right) \parallel = {r}^{-1}{\left| t\right| }^{-1}\parallel u\left( {rtx}\right) \parallel \leq {r}^{-1}{\left| t\right| }^{-1}. \] It follows from Exercise (4.1.1:3) that \( \parallel u\left( x\right) \parallel \leq {r}^{-1}\parallel x\parallel \) for all \( x \in X \) . Hence (i) implies (vi). It is clear that (vi) \( \Rightarrow \) (v) \( \Rightarrow \) (iv). Next, suppose that there exists \( c > 0 \) such that \( \parallel u\left( x\right) \parallel \leq c \) whenever \( \parallel x\parallel \leq 1 \) . Since \[ \parallel u\left( x\right) \parallel = \parallel x\parallel \begin{Vmatrix}{u\left( {\parallel x{\parallel }^{-1}x}\right) }\end{Vmatrix} \leq c\parallel x\parallel \;\left( {x \neq 0}\right) \] and \( u\left( 0\right) = 0 \), we see that (vi) holds, with \( c \) a bound for \( u \) . We now have \[ \parallel u\left( {x - y}\right) \parallel \leq c\parallel x - y\parallel \;\left( {x, y \in X}\right) , \] from which it follows that \( u \) is uniformly continuous on \( X \) . Thus (iv) \( \Rightarrow \) (vi) \( \Rightarrow \) (iii). Finally, it is obvious that (iii) \( \Rightarrow \) (ii) \( \Rightarrow \) (i). In view of property (v) of Proposition (4.2.1), we commonly refer to a continuous linear mapping between normed spaces \( X, Y \) as a bounded linear mapping on \( X \) . We define the norm of such a mapping by \[ \parallel u\parallel = \sup \{ \parallel u\left( x\right) \parallel : x \in X,\parallel x\parallel \leq 1\} . \] (1) The argument used to prove that (iv) \( \Rightarrow \) (vi) in the last proof shows that \[ \parallel u\left( x\right) \parallel \leq \parallel u\parallel \parallel x\parallel \;\left( {x \in X}\right) . \] In Exercise (4.2.2:11) you will prove that equation (1) defines a norm on the linear space \( L\left( {X, Y}\right) \) of all bounded linear mappings \( u : X \rightarrow Y \), and that if \( Y \) is a Banach space, then so is \( L\left( {X, Y}\right) \) . The Banach space \( L\left( {X,\mathbf{F}}\right) \) , consisting of all bounded linear functionals from \( X \) into its ground field \( \mathbf{F} \) , is called the dual space, or simply the dual, of \( X \), and is denoted by \( {X}^{ * } \) . The interplay between a Banach space and its dual is one of the most significant themes of functional analysis, so we spend some time later in this chapter and in Chapter 6 identifying the duals of certain important Banach spaces. Two norms \( \parallel \cdot \parallel ,\parallel \cdot {\parallel }^{\prime } \) on a vector space \( X \) are said to be equivalent if both the identity mapping from \( \left( {X,\parallel \cdot \parallel }\right) \) onto \( \left( {X,\parallel \cdot {\parallel }^{\pr
ime }}\right) \) and its inverse are continuous; since those mappings are linear, it follows from Proposition (4.2.1) that \( \parallel \cdot \parallel \) and \( \parallel \cdot {\parallel }^{\prime } \) are equivalent norms on \( X \) if and only if there exist positive constants \( a, b \) such that \( a\parallel x\parallel \leq \parallel x{\parallel }^{\prime } \leq b\parallel x\parallel \) for all \( x \in X \) . ## (4.2.2) Exercises . 1 Prove that a linear mapping \( u : X \rightarrow Y \) between normed spaces is bounded if and only if there exists \( c > 0 \) such that \( \parallel u\left( x\right) \parallel \leq c \) for all \( x \in X \) with \( \parallel x\parallel = 1 \), and that we then have \[ \parallel u\parallel = \sup \{ \parallel u\left( x\right) \parallel : x \in X,\parallel x\parallel = 1\} . \] .2 Let \( u \) be a bounded linear mapping on a normed space \( X \) . Prove that \[ \parallel u\parallel = \inf \{ c \geq 0 : \parallel u\left( x\right) \parallel \leq c\parallel x\parallel \text{ for all }x \in X\} . \] .3 Show that any two of the three norms on \( {\mathbf{R}}^{n} \) introduced in Exercise (4.1.1:4) are equivalent. .4 Let \( \parallel \cdot \parallel ,\parallel \cdot {\parallel }^{\prime } \) be equivalent norms on a linear space \( X \) . Prove that if \( X \) is complete with respect to \( \parallel \cdot \parallel \), then it is complete with respect to \( \parallel \cdot {\parallel }^{\prime } \) . .5 Let \( {X}_{1},\ldots ,{X}_{n} \) be normed spaces, \( X = {X}_{1} \times \cdots \times {X}_{n} \), and \( Y \) a normed space. Let \( u \) be a multilinear mapping of \( X \) into \( Y \) -that is, a mapping linear in each of its \( n \) variables. Prove that \( u \) is continuous if and only if there exists a constant \( c > 0 \) (which we call a bound for \( u) \) such that \[ \begin{Vmatrix}{u\left( {{x}_{1},\ldots ,{x}_{n}}\right) }\end{Vmatrix} \leq c\begin{Vmatrix}{x}_{1}\end{Vmatrix}\begin{Vmatrix}{x}_{2}\end{Vmatrix}\cdots \begin{Vmatrix}{x}_{n}\end{Vmatrix} \] for all \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \in X \) . .6 Let \( X, Y \) be normed spaces, and \( u : X \rightarrow Y \) a linear mapping such that for each sequence \( \left( {x}_{n}\right) \) in \( X \) converging to 0, the sequence \( \left( {u\left( {x}_{n}\right) }\right) \) is bounded in \( Y \) . Prove that \( u \) is continuous. 
(Let \( \left( {x}_{n}\right) \) be a sequence converging to 0 in \( X \), reduce to the case where \( \begin{Vmatrix}{x}_{n}\end{Vmatrix} < 1/{n}^{2} \) for each \( n \), and then consider the sequence \( \left. {\left( {n{x}_{n}}\right) \text{.}}\right) \) .7 Prove that a linear mapping \( u : X \rightarrow Y \) between normed spaces is bounded if and only if for each Cauchy sequence \( \left( {x}_{n}\right) \) in \( X,\left( {u\left( {x}_{n}\right) }\right) \) is a Cauchy sequence in \( Y \) . .8 Let \( u \) be a continuous linear mapping of a normed space \( X \) into a Banach space \( Y \), and let \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} \) be an absolutely convergent series in \( X \) . Prove that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }u\left( {x}_{n}\right) \) converges absolutely in \( Y \) . .9 Recalling the Banach space \( {l}_{1} \) of Exercise (4.1.6: 2), for each \( n \) let \( {e}_{n} \) be the element of \( {l}_{1} \) with \( n \) th term 1 and all other terms 0 . Show that to each bounded sequence \( \left( {x}_{n}\right) \) in a Banach space \( X \) there corresponds a unique bounded linear mapping \( u : {l}_{1} \rightarrow X \) such that \( u\left( {e}_{n}\right) = {x}_{n} \) for each \( n \) . Now let \( X \) be a separable Banach space, and \( \left( {x}_{n}\right) \) a dense sequence in the unit ball \( B \) of \( X \) . Define the bounded linear mapping \( u : {l}_{1} \rightarrow X \) as previously. Prove that \( u \) maps \( {l}_{1} \) onto \( X \) . (Given \( x \in B \), construct inductively \( {n}_{1} < {n}_{2} < \cdots \) such that \[ \begin{Vmatrix}{{2}^{k - 1}\left( {x - {x}_{{n}_{1}}}\right) - \mathop{\sum }\limits_{{j = 2}}^{k}{2}^{k - j}{x}_{{n}_{j}}}\end{Vmatrix} < {2}^{-k} \] for each \( k \) .) Thus every separable Banach space is the range of a bounded linear mapping on \( {l}_{1} \) . For further results of this type see [12]. .10 Let \( D \) be a dense linear subspace of a normed space \( X \), and \( u \) a bounded linear mapping from \( D \) into a Banach space \( Y \) . Prove that \( u \) extends to a bounded linear mapping, with the same norm, from \( X \) into \( Y \) . (First use Proposition (3.2.12).) .11 Prove that if \( X, Y \) are normed spaces, then \[ \parallel u\parallel = \sup \{ \parallel u\left( x\right) \parallel : x \in X,\parallel x\parallel \leq 1\} \] defines a norm on \( L\left( {X, Y}\right) \), and that if \( Y \) is complete, then \( L\left( {X, Y}\right) \) is a Banach space with respect to this norm. (To establish the completeness, let \( \left( {u}_{n}\right) \) be a Cauchy sequence in \( L\left( {X, Y}\right) \), and show that \[ u\left( x\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{u}_{n}\left( x\right) \] defines an element \( u \) of \( L\left( {X, Y}\right) \) such that \( \left. {\begin{Vmatrix}{u - {u}_{n}}\end{Vmatrix} \rightarrow 0\text{as}n \rightarrow \infty \text{.}}\right) \) .12 Let \( {c}_{0} \) be the Banach space of Exercise (4.1.6:1). For each positive integer \( n \) let \( {e}_{n} \) be the sequence whose \( n \) th term is 1 and which has all other terms equal to 0 . Let \( u \) be a bounded linear functional on \( {c}_{0} \) . Prove that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }u\left( {e}_{n}\right) \) is absolutely convergent, and that the norm of \( u \) is \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left| {u\left( {e}_{n}\right) }\right| \) . 
Conversely, prove that if \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{t}_{n} \) is an absolutely convergent series of real numbers, then there is a unique bounded linear functional \( u \) on \( {c}_{0} \) such that \( u\left( {e}_{n}\right) = {t}_{n} \) for each \( n \) . Describe \( u\left( x\right) \), where \( x = {\left( {x}_{n}\right) }_{n = 1}^{\infty } \in {c}_{0} \) . This example shows that the dual space \( {c}_{0}^{ * } \) can be identified with the Banach space \( {l}_{1} \) of Exercise (4.1.6: 2). .13 Prove that \( {l}_{1}^{ * } \) can be identified with the Banach space \( {l}_{\infty } \) of Exercise \( \left( {{4.1.6} : 3}\right) \) . .14 Prove the Uniform Boundedness Theorem: let \( {\left( {T}_{i}\right) }_{i \in I} \) be a family of bounded linear mappings from a Banach space \( X \) into a normed space \( Y \), such that \( \left\{ {\begin{Vmatrix}{{T}_{i}x}\end{Vmatrix} : i \in I}\right\} \) is bounded for each \( x \in X \) ; then \( \left\{ {\begin{Vmatrix}{T}_{i}\end{Vmatrix} : i \in I}\right\} \) is bounded. (Suppose the contrary. Then construct sequences \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) in \( X \) and \( {\left( {i}_{n}\right) }_{n = 1}^{\infty } \) in \( I \) such that for each \( n \) , \[ \begin{Vmatrix}{x}_{n}\end{Vmatrix} = {4}^{-n} \] \[ \begin{Vmatrix}{{T}_{{i}_{n}}{x}_{n}}\end{Vmatrix} > \frac{2}{3}\begin{Vmatrix}{T}_{{i}_{n}}\end{Vmatrix}\begin{Vmatrix}{x}_{n}\end{Vmatrix} \] and \[ \begin{Vmatrix}{T}_{{i}_{n}}\end{Vmatrix} > 3 \times {4}^{n}\left( {n + \mathop{\sup }\limits_{{i \in I}}\left\{ \begin{Vmatrix}{{T}_{i}\left( {{x}_{1} + \cdots + {x}_{n - 1}}\right) }\end{Vmatrix}\right\} }\right) . \] Taking \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n} \), deduce the contradiction that \( \begin{Vmatrix}{{T}_{{i}_{n}}x}\end{Vmatrix} > n \) for each \( n \) . This proof was published in [22]. A less elementary, but more standard, approach to the Uniform Boundedness Theorem is based on Baire’s Theorem (6.3.1) and is discussed in Chapter 6.) .15 A normed space \( X \) is said to be uniformly convex if it has the following property: for each \( \varepsilon > 0 \) there exists \( \delta \in \left( {0,1}\right) \) such that \( \parallel x - y\parallel < \varepsilon \) whenever \( \parallel x\parallel \leq 1,\parallel y\parallel \leq 1 \), and \( \begin{Vmatrix}{\frac{1}{2}\left( {x + y}\right) }\end{Vmatrix} > 1 - \delta \) . Prove that if \( u \) is a bounded linear functional on a uniformly convex Banach space \( X \), then there exists a unit vector \( x \in X \) such that \( \left| {u\left( x\right) }\right| = \parallel u\parallel \) . Recall that the kernel, or null space, of a linear mapping \( u : X \rightarrow Y \) between vector spaces is the subspace \[ \ker \left( u\right) = {u}^{-1}\left( 0\right) = \{ x \in X : u\left( x\right) = 0\} \] of \( X \) . We say that \( u \) is nonzero if \( \ker \left( u\right) \neq X \) -that is, if there exists \( x \in X \) such that \( u\left( x\right) \neq 0 \) ; otherwise, \( u \) is said to be zero. (4.2.3) Proposition. A linear functional on a normed space \( X \) is
continuous if and only if its kernel is closed in \( X \) . Proof. Let \( u \) be a linear functional on \( X \), and \( S = \ker \left( u\right) \) . As \( \{ 0\} \) is a closed subset of \( X \), Proposition (3.2.2) shows that if \( u \) is continuous, then \( S \) is closed in \( X \) . Suppose, conversely, that \( S \) is closed in \( X \) . Since the zero linear functional is certainly continuous, we may assume that there exists \( a \in X \) such that \( u\left( a\right) = 1 \) . Then \( 0 \notin a + S \) . On the other hand, by Lemma (4.1.4), \( a + S \) is closed in \( X \), so its complement is open. Hence there exists \( r > 0 \) such that \( x \notin a + S \) whenever \( \parallel x\parallel \leq r \) . Suppose that \( \parallel x\parallel \leq r \) and \( \left| {u\left( x\right) }\right| > 1 \), and let \( y = u{\left( x\right) }^{-1}x \) . Then \( \parallel y\parallel \leq r \), so \( y \notin a + S \) . On the other hand, \[ u\left( {y - a}\right) = u{\left( x\right) }^{-1}u\left( x\right) - 1 = 0, \] so \( y - a \in S \), and therefore \[ y = a + \left( {y - a}\right) \in a + S. \] This contradiction shows that \( \left| {u\left( x\right) }\right| \leq 1 \) whenever \( \parallel x\parallel \leq r \) . It follows from Proposition (4.2.1) that \( u \) is continuous. As we show in a moment, nonzero linear functionals on a normed space \( X \) are associated with certain subspaces of \( X \) which we now define. A subspace \( H \) of a vector space \( X \) is called a hyperplane if \( - X \smallsetminus H \) is nonempty, and - for each \( a \in X \smallsetminus H \) and each \( x \in X \) there exists a unique pair \( \left( {t, y}\right) \in \) \( \mathbf{F} \times H \) such that \( x = {ta} + y \) . This expression of the element \( x \) is called its representation relative to the pair \( \left( {H, a}\right) \) consisting of the hyperplane \( H \) and the element \( a \) of \( X \smallsetminus H \) . (4.2.4) Proposition. The kernel of a nonzero linear functional on a normed space \( X \) is a hyperplane in \( X \) . Conversely, if \( H \) is a hyperplane in \( X \) and \( a \notin H \), then there exists a unique linear functional \( u \) on \( X \) such that \( \ker \left( u\right) = H \) and \( u\left( a\right) = 1 \) . Proof. First let \( u \) be a nonzero linear functional on \( X \) . 
If \( a \notin \ker \left( u\right) \) and \( x \in X \), then, using the linearity of \( u \), we easily verify that \( x = {ta} + y \) , with \( t \in \mathbf{F} \) and \( y \in \ker \left( u\right) \), if and only if \( t = u\left( x\right) /u\left( a\right) \) . Hence \( \ker \left( u\right) \) is a hyperplane. Conversely, let \( H \) be a hyperplane in \( X \), and let \( a \notin H \) . For each \( x \in X \) there exists a unique pair \( \left( {t, y}\right) \) in \( \mathbf{F} \times H \) such that \( x = {ta} + y \) . Setting \( u\left( x\right) = t \) and \( f\left( x\right) = y \), we define functions \( u : X \rightarrow \mathbf{F} \) and \( f : X \rightarrow H \) . If also \( {x}^{\prime } \in X \), then \[ x + {x}^{\prime } = \left( {u\left( x\right) + u\left( {x}^{\prime }\right) }\right) a + f\left( x\right) + f\left( {x}^{\prime }\right) , \] where \( u\left( x\right) + u\left( {x}^{\prime }\right) \in \mathbf{F} \) and (as \( H \) is a linear subset of \( X \) ) \( f\left( x\right) + f\left( {x}^{\prime }\right) \in H \) ; the uniqueness of the representation of a given element of \( X \) relative to \( \left( {H, a}\right) \) ensures that \( u\left( {x + {x}^{\prime }}\right) = u\left( x\right) + u\left( {x}^{\prime }\right) \) . Similar uniqueness arguments show that \( u\left( {\lambda x}\right) = {\lambda u}\left( x\right) \) whenever \( \lambda \in \mathbf{F} \) and \( x \in X \), and that \( u\left( a\right) = 1 \) . In particular, it follows that \( u \) is a linear functional on \( X \) . Moreover, \( u\left( x\right) = 0 \) if and only if \( x = f\left( x\right) \in H \) ; so \( \ker \left( u\right) = H \) . It remains to prove that \( u \) is the unique linear functional on \( X \) which takes the value 1 at \( a \) and has kernel \( H \) . But if \( v \) is another such linear functional on \( X \), then for each \( x \in X \) we have \[ v\left( x\right) = v\left( {u\left( x\right) a + f\left( x\right) }\right) \] \[ = u\left( x\right) v\left( a\right) + v\left( {f\left( x\right) }\right) \] \[ = u\left( x\right) 1 + 0 \] \[ = u\left( x\right) \text{.} \] ## (4.2.5) Exercises .1 Let \( H \) be a hyperplane in a normed space \( X, a \in X \smallsetminus H \), and \( \alpha \in \mathbf{R} \) . Prove that there exists a unique linear functional \( u \) on \( X \) such that \( a + H = \{ x \in X : u\left( x\right) = \alpha \} . \) .2 Let \( u \) be a nonzero bounded linear functional on a normed space \( X \) , and \( H = \ker \left( u\right) \) . Show that \( \rho \left( {x, H}\right) = \parallel u{\parallel }^{-1}\left| {u\left( x\right) }\right| \) for each \( x \in X \) . .3 A translated hyperplane \( {}^{1} \) in a normed space \( X \) is a subset of the form \( v + H \) where \( H \) is a hyperplane in \( X \) and \( v \in X \) . Prove that a translated hyperplane is closed if and only if its complement has a nonempty interior. .4 Let \( K \) be a subset of a normed space \( X \), and \( u \) a linear functional on \( X \) . For each \( \alpha \in \mathbf{R} \) the translated hyperplane \[ {H}_{\alpha } = \{ x \in X : u\left( x\right) = \alpha \} \] is called a hyperplane of support for \( K \) if - there exists \( {x}_{0} \in K \) such that \( u\left( {x}_{0}\right) = \alpha \), and - either \( u\left( x\right) \geq \alpha \) for all \( x \in K \) or \( u\left( x\right) \leq \alpha \) for all \( x \in K \) . 
Prove that if \( K \) is compact and is not contained in any \( {H}_{\alpha } \), then for exactly two real numbers \( \alpha ,{H}_{\alpha } \) is a hyperplane of support for \( K \) . (Consider the set \( \left\{ {t \in \mathbf{R} : {u}^{-1}\left( t\right) \cap K \neq \varnothing }\right\} \) .) --- \( {}^{1} \) Some authors use the term "hyperplane" for a translated hyperplane. --- ## 4.3 Finite-Dimensional Normed Spaces Before studying some of the more important infinite-dimensional spaces in analysis, we devote a section to the major analytic properties of finite-dimensional spaces. We begin by showing that for any positive integer \( n \) , any normed space of dimension \( n \) over \( \mathbf{F} \) can be identified with the product space \( {\mathbf{F}}^{n} \) . (4.3.1) Proposition. If \( X \) is an \( n \) -dimensional normed space with basis \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \), then \[ \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \mapsto \mathop{\sum }\limits_{{i = 1}}^{n}{\xi }_{i}{e}_{i} \] is a one-one bounded linear mapping of the product space \( {\mathbf{F}}^{n} \) onto \( X \) with a bounded linear inverse. Proof. Let \( f \) denote the mapping in question. It is easy to verify that \( f \) is one-one and maps \( {\mathbf{F}}^{n} \) onto \( X \), and that both \( f \) and \( {f}^{-1} \) are linear. Let \[ c = \mathop{\max }\limits_{{1 \leq i \leq n}}\begin{Vmatrix}{e}_{i}\end{Vmatrix} \] The inequalities \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\xi }_{i}{e}_{i}}\end{Vmatrix} \leq \mathop{\sum }\limits_{{i = 1}}^{n}\left| {\xi }_{i}\right| \begin{Vmatrix}{e}_{i}\end{Vmatrix} \leq c\mathop{\sum }\limits_{{i = 1}}^{n}\left| {\xi }_{i}\right| \leq {nc}\mathop{\max }\limits_{{1 \leq i \leq n}}\left| {\xi }_{i}\right| \] show that \( f \) is bounded and therefore continuous. Let \[ S = \left\{ {\left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \in {\mathbf{F}}^{n} : \mathop{\max }\limits_{{1 \leq i \leq n}}\left| {\xi }_{i}\right| = 1}\right\} . \] Then \( S \) is closed (Exercise (4.3.2: 1)) and bounded, and is therefore compact (see Exercise (3.5.11: 6)). Now, the mapping \( \xi \mapsto \parallel f\left( \xi \right) \parallel \) is continuous and (as \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) is a basis) maps \( S \) into \( {\mathbf{R}}^{ + } \) ; so, by Exercise (3.3.7: 2), \[ 0 < r = \inf \{ \parallel f\left( \xi \right) \parallel : \xi \in S\} . \] If \( \xi \) is any nonzero element of \( {\mathbf{F}}^{n} \), then, setting \( \eta = \parallel \xi {\parallel }^{-1}\xi \), we have \( \eta \in S \) and therefore \[ r \leq \parallel f\left( \eta \right) \parallel = \parallel \xi {\parallel }^{-1}\parallel f\left( \xi \right) \parallel . \] Hence \( \parallel \xi \parallel \leq {r}^{-1}\parallel f\left( \xi \right) \parallel \) . Since this holds trivially when \( \xi = 0 \), we see that \( {r}^{-1} \) is a bound for the linear mapping \( {f}^{-1} \) . ## (4.3.2) Exercises .1 Prove that the set \( S \) in the preceding proof is closed. .2 Show that if \( X \) is \( n \) -dimensional with basis \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \), then the mapping \[ \mathop{\sum }\limits_{{i = 1}}^{n}{
\xi }_{i}{e}_{i} \mapsto \mathop{\max }\limits_{{1 \leq i \leq n}}\left| {\xi }_{i}\right| \] is a norm on \( X \), and that \( X \) is complete with respect to this norm. .3 Find an alternative proof that the mapping \( {f}^{-1} \) in the proof of Proposition (4.3.1) is continuous. .4 Prove that any linear mapping from a finite-dimensional normed space into a normed space is bounded. Hence prove that any two norms on a given finite-dimensional linear space are equivalent. (4.3.3) Proposition. A finite-dimensional normed space is complete. Proof. Let \( X \) be a finite-dimensional normed space. We may assume that \( X \neq \{ 0\} \), so that \( X \) has a basis \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) . Let \( f \) be the mapping in Proposition (4.3.1), and let \( \left( {x}_{n}\right) \) be a Cauchy sequence in \( X \) . Then \( \left( {{f}^{-1}\left( {x}_{n}\right) }\right) \) is a Cauchy sequence in \( {\mathbf{F}}^{n} \) and therefore (see Exercise (3.5.11:7)), converges to a limit \( y \in {\mathbf{F}}^{n} \) . Since \( f \) is continuous, \( \left( {x}_{n}\right) \) converges to \( f\left( y\right) \in X \) . (4.3.4) Corollary. A finite-dimensional subspace of a normed space \( X \) is closed in \( X \) . Proof. This is an immediate consequence of Propositions (4.3.3) and (3.2.9). \( ▱ \) Our next result is surprisingly useful. We use it to simplify the proof of Theorem (4.3.6). (4.3.5) Riesz’s Lemma. Let \( S \) be a closed subspace with a nonempty complement in a normed space \( X \), and let \( 0 < \theta < 1 \) . Then there exists a unit vector \( x \in X \) such that \( \parallel x - y\parallel > \theta \) for each \( y \in S \) . Proof. Fix \( {x}_{0} \in X \smallsetminus S \) . By Exercise (3.1.10: 3), \[ 0 < r = \rho \left( {{x}_{0}, S}\right) < {\theta }^{-1}r. \] Choosing \( {s}_{0} \in S \) such that \[ r \leq \begin{Vmatrix}{{x}_{0} - {s}_{0}}\end{Vmatrix} < {\theta }^{-1}r \] let \[ x = {\begin{Vmatrix}{x}_{0} - {s}_{0}\end{Vmatrix}}^{-1}\left( {{x}_{0} - {s}_{0}}\right) . \] Then \( \parallel x\parallel = 1 \) . 
Also, for each \( s \in S \) , \[ {s}_{0} + \begin{Vmatrix}{{x}_{0} - {s}_{0}}\end{Vmatrix}s \in S \] so \[ \begin{Vmatrix}{{x}_{0} - {s}_{0}}\end{Vmatrix}\parallel x - s\parallel = \begin{Vmatrix}{{x}_{0} - \left( {{s}_{0} + \begin{Vmatrix}{{x}_{0} - {s}_{0}}\end{Vmatrix}s}\right) }\end{Vmatrix} \geq \rho \left( {{x}_{0}, S}\right) = r, \] and therefore \[ \parallel x - s\parallel \geq \frac{r}{\begin{Vmatrix}{x}_{0} - {s}_{0}\end{Vmatrix}} > \theta \] It follows from Riesz's Lemma that in an infinite-dimensional normed space \( X \), if \( 0 < \theta < 1 \), then there exists a sequence \( \left( {x}_{n}\right) \) of unit vectors such that \( \begin{Vmatrix}{{x}_{m} - {x}_{n}}\end{Vmatrix} > \theta \) whenever \( m \neq n \) (see Exercise (4.3.7:4)). This result can be improved in various ways. For example, in Chapter 6 we prove that in any infinite-dimensional normed space there exists a sequence \( \left( {x}_{n}\right) \) of unit vectors such that \( \begin{Vmatrix}{{x}_{m} - {x}_{n}}\end{Vmatrix} > 1 \) whenever \( m \neq n \) . A much deeper result, due to Elton and Odell, says that if \( X \) is an infinite-dimensional normed space, then there exist \( \varepsilon > 0 \) and a sequence \( \left( {x}_{n}\right) \) of unit vectors in \( X \) such that \( \begin{Vmatrix}{{x}_{m} - {x}_{n}}\end{Vmatrix} \geq 1 + \varepsilon \) whenever \( m \neq n \) ; see Chapter XIV of \( \left\lbrack {12}\right\rbrack \) . We now use Riesz's Lemma to provide a topological characterisation of finite-dimensional normed spaces. (4.3.6) Theorem. A normed space is finite-dimensional if and only if its unit ball is totally bounded, in which case that ball is compact. Proof. For simplicity, we take the case \( \mathbf{F} = \mathbf{R} \) . Let \( X \) be a normed space, and \( B \) its (closed) unit ball; we may assume that \( X \neq \{ 0\} \) . Suppose that \( X \) is finite-dimensional with basis \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \), and let \( u \) be the one-one linear mapping \( \mathop{\sum }\limits_{{i = 1}}^{n}{\xi }_{i}{e}_{i} \mapsto \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \) of \( X \) onto the product metric space \( {\mathbf{R}}^{n} \) . By Propositions (4.3.1) and (4.2.1), there exists \( R > 0 \) such that if \( \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\xi }_{i}{e}_{i}}\end{Vmatrix} \leq 1 \), then \[ \begin{Vmatrix}\left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \end{Vmatrix} = \mathop{\max }\limits_{{1 \leq i \leq n}}\left| {\xi }_{i}\right| \leq R. \] (1) By the Heine-Borel-Lebesgue Theorem (1.4.6) and Proposition (3.5.10), \( {\left\lbrack -R, R\right\rbrack }^{n} \) is a compact subset of \( {\mathbf{R}}^{n} \) ; it follows from Propositions (4.3.1) and (3.3.6) that \( {u}^{-1}\left( {\left\lbrack -R, R\right\rbrack }^{n}\right) \) is a compact subset of \( X \) . Since \( B \) is closed and, by (1), a subset of \( {u}^{-1}\left( {\left\lbrack -R, R\right\rbrack }^{n}\right) \), we see from Proposition (3.3.4) that \( B \) is compact. Assume, conversely, that \( B \) is totally bounded. Construct a finite \( \frac{1}{2} - \) approximation \( F \) to \( B \), let \( S \) be the finite-dimensional subspace of \( X \) generated by \( F \), and suppose that \( X \neq S \) . Then, by Riesz’s Lemma (4.3.5), there exists a unit vector \( x \in X \) such that \( \parallel x - s\parallel > \frac{1}{2} \) for all \( s \in S \) ; but this is absurd, as \( \parallel x - s\parallel < \frac{1}{2} \) for some \( s \in F \) . Hence, in fact, \( X = S \) . 
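To see Theorem (4.3.6) at work in a concrete infinite-dimensional space, consider the unit ball \( B \) of \( {c}_{0} \) (Exercise (4.1.6:1)) and the unit vectors \( {e}_{1},{e}_{2},\ldots \), where \( {e}_{n} \) has \( n \) th term 1 and all other terms 0 . For \( m \neq n \) we have
\[ \begin{Vmatrix}{{e}_{m} - {e}_{n}}\end{Vmatrix} = 1 . \]
If \( F \) were a finite \( \frac{1}{3} \) -approximation to \( B \), then, since \( F \) is finite and the \( {e}_{n} \) are infinite in number, some element of \( F \) would lie within \( \frac{1}{3} \) of two distinct vectors \( {e}_{m},{e}_{n} \), whence \( \begin{Vmatrix}{{e}_{m} - {e}_{n}}\end{Vmatrix} \leq \frac{2}{3} \), a contradiction. So \( B \) is not totally bounded, in accordance with the theorem, \( {c}_{0} \) being infinite-dimensional.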
## (4.3.7) Exercises .1 Show that if a normed space \( X \) contains a totally bounded ball, then every closed ball in \( X \) is compact. .2 Prove that a normed space \( X \) is finite-dimensional if and only if \( \{ x \in X : \parallel x\parallel = 1\} \) is compact. .3 Prove that a normed space is locally compact if and only if it is finite-dimensional. .4 Let \( X \) be an infinite-dimensional normed space. Use Riesz’s Lemma to construct, inductively, a sequence \( \left( {x}_{n}\right) \) of unit vectors in \( X \) such that for each \( n \) , (i) \( {x}_{1},\ldots ,{x}_{n} \) are linearly independent, and (ii) \( \rho \left( {{x}_{n + 1},{X}_{n}}\right) \geq \frac{1}{2} \), where \( {X}_{n} = \operatorname{span}\left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) . Hence prove that the unit ball of \( X \) is not compact. This provides us with another proof that if the unit ball of a normed space is compact, then the space is finite-dimensional. .5 Let \( X \) be a metric space, \( x \in X \), and \( S \) a nonempty subset of \( X \) . A point \( b \in S \) is called a best approximation, or a closest point, to \( x \) in \( S \) if \( \rho \left( {x, b}\right) = \rho \left( {x, S}\right) \) . Prove the Fundamental Theorem of Approximation Theory: if \( S \) is a finite-dimensional subspace of a normed space \( X \), then each point of \( X \) has a best approximation in \( S \) . (A simple illustration is given after this set of exercises. See [10],[38], or [52] for further information about approximation theory, a major branch of analysis with many important practical applications.) .6 Prove that any hyperplane in a finite-dimensional normed space is closed. Now let \( X \) be the subspace of \( {c}_{0} \) consisting of all sequences \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) of real numbers such that \( {x}_{n} = 0 \) for all sufficiently large \( n \) . Show that \[ f\left( {\left( {x}_{n}\right) }_{n = 1}^{\infty }\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }n{x}_{n} \] defines a linear functional \( f : X \rightarrow \mathbf{R} \) whose kernel is not closed in \( X \) . .7 Let \( S \) be a nonempty closed subset of \( {\mathbf{R}}^{N} \), and \( K, B \) closed balls in \( {\mathbf{R}}^{N} \) such that (i) \( B \subset K \) and (ii) \( K \) intersects \( S \) in a single point \( \zeta \) on the boundary of \( K \) . If \( B \) does not intersect the boundary of \( K \), let \( \xi \) be the centre of \( B \) ; otherwise, \( B \) must intersect the boundary of \( K \) in a single point, which we denote by \( \xi \) . For each positive integer \( n \) let \[ {K}_{n} = \frac{1}{n}\left( {\xi - \zeta }\right) + K \] Prove that for all sufficiently large \( n \) we have \( B \subset {K}_{n} \) and \( {K}_{n} \cap S = \) \( \varnothing \) . Hence prove that there exists a ball \( {K}^{\prime } \) that is concentric with \( K \), has radius greater than that of \( K \), and is disjoint from \( S \) . (For the first part, begin by showing that there exists a positive integer \( \nu \) such that \( B \subset {K}_{n} \) for all \( n \geq \nu \) . Then suppose that for each \( n \geq \nu \) there exists \( {s}_{n} \in {K}_{n} \cap S \) . Show that there exists a subsequence \( {\left( {s}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) converging to \( \zeta \), and hence find \( k \) such that \( {s}_{{n}_{k}} \in K \cap S \), a contradiction.)
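Here is the simple illustration of Exercise (4.3.7:5) promised above; the norm is chosen here purely for convenience. Take \( X = {\mathbf{R}}^{2} \) with the norm \( \parallel \left( {{\xi }_{1},{\xi }_{2}}\right) \parallel = \max \left\{ {\left| {\xi }_{1}\right| ,\left| {\xi }_{2}\right| }\right\} \), let \( S = \{ \left( {t,0}\right) : t \in \mathbf{R}\} \), and let \( x = \left( {1,1}\right) \) . Then
\[ \parallel x - \left( {t,0}\right) \parallel = \max \{ \left| {1 - t}\right| ,1\} \geq 1, \]
with equality precisely when \( 0 \leq t \leq 2 \); so \( \rho \left( {x, S}\right) = 1 \), and every point \( \left( {t,0}\right) \) with \( t \in \left\lbrack {0,2}\right\rbrack \) is a best approximation to \( x \) in \( S \) . Thus best approximations in a finite-dimensional subspace always exist, but they need not be unique.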
A sequence in a normed linear space \( X \) is said to be total if it generates a dense linear subspace of \( X \) -that is, if the linear space consisting of all finite linear combinations of terms of the sequence is dense in \( X \) . In that case \( X \) is separable. To see this, let \( \left( {a}_{n}\right) \) be a total sequence in \( X \), and let \( S \) be the set of all finite linear combinations \( {r}_{1}{a}_{1} + \cdots + {r}_{n}{a}_{n} \) with each coefficient \( {r}_{k} \) rational. (By a rational complex number we mean a complex number whose real and imaginary parts are rational.) If \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) are in \( \mathbf{F} \), then there exist rational elements \( {r}_{1},\ldots ,{r}_{n} \) of \( \mathbf{F} \) such that \( \mathop{\sum }\limits_{{k = 1}}^{n}\left| {{\lambda }_{k} - {r}_{k}}\right| \begin{Vmatrix}{a}_{k}\end{Vmatrix} \) is arbitrarily small; since \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{k = 1}}^{n}{\lambda }_{k}{a}_{k} - \mathop{\sum }\limits_{{k = 1}}^{n}{r}_{k}{a}_{k}}\end{Vmatrix} \leq \mathop{\sum }\limits_{{k = 1}}^{n}\left| {{\lambda }_{k} - {r}_{k}}\right| \begin{Vmatrix}{a}_{k}\end{Vmatrix} \] and \( \left( {a}_{n}\right) \) is total, it follows that \( S \) is dense in \( X \) ; but \( S \) is countable. We have the following converse. (4.3.8) Proposition. If \( X \) is an infinite-dimensional separable normed space, then it has a total sequence of linearly independent vectors. Proof. Let \( \left( {a}_{n}\right) \) be a dense sequence in \( X \), and assume without loss of generality that \( {a}_{1} \neq 0 \) . We construct inductively a strictly increasing sequence \( 1 = {n}_{1} < {n}_{2} < \cdots \) of positive integers such that for each \( k \) , (i) the vectors \( {a}_{{n}_{1}},\ldots ,{a}_{{n}_{k}} \) are linearly independent, and (ii) for \( 1 \leq m \leq {n}_{k},{a}_{m} \) is a linear combination of \( {a}_{{n}_{1}},\ldots ,{a}_{{n}_{k}} \) . Indeed, if \( {a}_{{n}_{1}},\ldots ,{a}_{{n}_{k}} \) have been constructed with properties (i) and (ii), we take \( {n}_{k + 1} \) to be the smallest integer \( m > {n}_{k} \) such that \( {a}_{m} \) does not belong to the subspace \( {X}_{k} \) of \( X \) generated by \( \left\{ {{a}_{{n}_{1}},\ldots ,{a}_{{n}_{k}}}\right\} \) . (If no such integer exists, then, being closed by Proposition (4.3.4), \( {X}_{k} \) contains the closure of the subspace of \( X \) generated by the dense sequence \( \left( {a}_{n}\right) \), so \( X = {X}_{k} \) is finite-dimensional - a contradiction.) 
It now follows from (ii) that the sequence \( {\left( {a}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) is total in \( X \) . ## (4.3.9) Exercises .1 Prove that the Banach spaces \( {c}_{0} \) and \( {l}_{1} \) are separable. .2 Show that the Banach space \( {l}_{\infty } \) is not separable. (Consider the set of elements of \( {l}_{\infty } \) whose terms belong to \( \{ 0,1\} \) .) ## 4.4 The \( {L}_{p} \) Spaces In this section we introduce certain infinite dimensional Banach spaces of integrable functions that appear very frequently in many areas of pure and applied mathematics. For convenience, we call real numbers \( p, q \) conjugate exponents if \( p > 1, q > 1 \), and \( 1/p + 1/q = 1 \) . We begin our discussion with an elementary lemma. (4.4.1) Lemma. If \( x, y \) are positive numbers and \( 0 < \alpha < 1 \), then \[ {x}^{\alpha }{y}^{1 - \alpha } \leq {\alpha x} + \left( {1 - \alpha }\right) y. \] Proof. Taking \( u = x/y \), consider \[ f\left( u\right) = {u}^{\alpha } - {\alpha u} - 1 + \alpha . \] We have \( {f}^{\prime }\left( u\right) = \alpha \left( {{u}^{\alpha - 1} - 1}\right) \), which is positive if \( 0 < u < 1 \) and negative if \( u > 1 \) . Since \( f\left( 1\right) = 0 \), it follows from Exercise (1.5.4: 7) that \( f\left( u\right) \leq 0 \) for all \( u > 0 \) . This immediately leads to the desired inequality. (4.4.2) Proposition. Let \( p, q \) be conjugate exponents, and \( f, g \) measurable functions on \( \mathbf{R} \) such that \( {\left| f\right| }^{p} \) and \( {\left| g\right| }^{q} \) are integrable. Then \( {fg} \) is integrable, and Hölder's inequality \[ \left| {\int {fg}}\right| \leq {\left( \int {\left| f\right| }^{p}\right) }^{1/p}{\left( \int {\left| g\right| }^{q}\right) }^{1/q} \] (1) holds. Proof. We first note that if \( \int {\left| f\right| }^{p} = 0 \), then \( {\left| f\right| }^{p} = 0 \) almost everywhere; so \( f = 0 \), and therefore \( {fg} = 0 \), almost everywhere. Then \( {fg} \) is integrable, \( \int {fg} = 0 \), and (1) holds trivially, as it does also in the case where \( \int {\left| g\right| }^{q} = 0 \) . Thus we may assume that \( \int {\left| f\right| }^{p} > 0 \) and \( \int {\left| g\right| }^{q} > 0 \) . We then have, almost everywhere, \[ \frac{\left| fg\right| }{{\left( \int {\left| f\right| }^{p}\right) }^{1/p}{\left( \int {\left| g\right| }^{q}\right) }^{1/q}} = {\left( \frac{{\left| f\right| }^{p}}{\int {\left| f\right| }^{p}}\right) }^{1/p}{\left( \frac{{\left| g\right| }^{q}}{\int {\left| g\right| }^{q}}\right) }^{1/q} \] \[ \leq \frac{{\left| f\right| }^{p}}{p\int {\left| f\right| }^{p}} + \frac{{\left| g\right| }^{q}}{q\int {\left| g\right| }^{q}} \] (where the last step uses Lemma (4.4.1)), so \[ \left| {fg}\right| \leq {\left( \int {\left| f\right| }^{p}\right) }^{1/p}{\left( \int {\left| g\right| }^{q}\right) }^{1/q}\left( {\frac{{\left| f\right| }^{p}}{p\int {\left| f\right| }^{p}} + \frac{{\left| g\right| }^{q}}{q\int {\left| g\right| }^{q}}}\right) . \] (2) Now, \( {fg} \) is measurable and the right-hand side of (2) is integrable. Hence, by Proposition (2.3.1), \( {fg} \) is integrable and \[ \left| {\int {fg}}\right| \leq \int \left| {fg}\right| \leq {\left( \int {\left| f\right| }^{p}\right) }^{1/p}{\left( \int {\left| g\right| }^{q}\right) }^{1/q}\left( {\frac{1}{p} + \frac{1}{q}}\right) , \] from which (1) follows. (4.4.3) Proposition. 
Let \( p \geq 1 \), and let \( f, g \) be measurable functions on \( \mathbf{R} \) such that \( {\left| f\right| }^{p} \) and \( {\left| g\right| }^{p} \) are integrable. Then \( {\left| f + g\right| }^{p} \) is integrable, and Minkowski's inequality \[ {\left( \int {\left| f + g\right| }^{p}\right) }^{1/p} \leq {\left( \int {\left| f\right| }^{p}\right) }^{1/p} + {\left( \int {\left| g\right| }^{p}\right) }^{1/p} \] holds. Proof. Clearly, we may assume that \( p > 1 \) . Now, \( {\left| f + g\right| }^{p} \) is measurable, by Exercise (2.3.3: 5). Since \[ {\left| f + g\right| }^{p} \leq {\left( 2\max \{ \left| f\right| ,\left| g\right| \} \right) }^{p} \leq {2}^{p}\left( {{\left| f\right| }^{p} + {\left| g\right| }^{p}}\right) \] and the last function is integrable, it follows from Proposition (2.3.1) that \( {\left| f + g\right| }^{p} \) is integrable. The functions \( \left| f\right| \) and \( {\left| f + g\right| }^{p - 1} \) are measurable, by Exercise (2.3.3: 5), and \[ {\left( {\left| f + g\right| }^{p - 1}\right) }^{q} = {\left| f + g\right| }^{p} \in {L}_{1}\left( \mathbf{R}\right) . \] Thus, by Proposition (4.4.2), \( {\left| f + g\right| }^{p - 1}\left| f\right| \) is integrable and \[ \int {\left| f + g\right| }^{p - 1}\left| f\right| \leq {\left( \int {\left| f + g\right| }^{p}\right) }^{1 - {p}^{-1}}{\left( \int {\left| f\right| }^{p}\right) }^{1/p}. \] Similarly, \( {\left| f + g\right| }^{p - 1}\left| g\right| \) is integrable and \[ \int {\left| f + g\right| }^{p - 1}\left| g\right| \leq {\left( \int {\left| f + g\right| }^{p}\right) }^{1 - {p}^{-1}}{\left( \int {\left| g\right| }^{p}\right) }^{1/p}. \] It follows that \[ \int {\left| f + g\right| }^{p} = \int {\left| f + g\right| }^{p - 1}\left| {f + g}\right| \] \[ \leq \int {\left| f + g\right| }^{p - 1}\left| f\right| + \int {\left| f + g\right| }^{p - 1}\left| g\right| \] \[ \leq {\left( \int {\left| f + g\right| }^{p}\right) }^{1 - {p}^{-1}}\left( {{\left( \int {\left| f\right| }^{p}\right) }^{1/p} + {\left( \int {\left| g\right| }^{p}\right) }^{1/p}}\right) \] from which we easily obtain Minkowski's inequality. ## (4.4.4) Exercises .1 Prove Hölder's inequality \[ \left| {\mathop{\sum }\limits_{{n = 1}}^{N}{x}_{n}{y}_{n}}\right| \leq {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {x}_{n}\right| }^{p}\right) }^{1/p}{\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {y}_{n}\right| }^{q}\right) }^{1/q} \] and Minkowski's inequality \[ {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {x}_{n} + {y}_{n}\right| }^{p}\right) }^{1/p} \leq {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {x}_{n}\right| }^{p}\right) }^{1/p} + {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {y}_{n}\right| }^{p}\right) }^{1/p} \] for finite sequences \( {x}_{1},\ldots ,{x}_{N} \) and \( {y}_{1},\ldots ,{y}_{N} \) of real numbers. .2 A sequence \( \left( {x}_{n}\right) \) of real numbers is called \( p \) -power summable if the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {x}_{n}\right| }^
nt {\left| g\right| }^{p}\right) }^{1/p}}\right) \] from which we easily obtain Minkowski's inequality. ## (4.4.4) Exercises .1 Prove Hölder's inequality \[ \left| {\mathop{\sum }\limits_{{n = 1}}^{N}{x}_{n}{y}_{n}}\right| \leq {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {x}_{n}\right| }^{p}\right) }^{1/p}{\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {y}_{n}\right| }^{q}\right) }^{1/q} \] and Minkowski's inequality \[ {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {x}_{n} + {y}_{n}\right| }^{p}\right) }^{1/p} \leq {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {x}_{n}\right| }^{p}\right) }^{1/p} + {\left( \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {y}_{n}\right| }^{p}\right) }^{1/p} \] for finite sequences \( {x}_{1},\ldots ,{x}_{N} \) and \( {y}_{1},\ldots ,{y}_{N} \) of real numbers. .2 A sequence \( \left( {x}_{n}\right) \) of real numbers is called \( p \) -power summable if the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {x}_{n}\right| }^{p} \) converges. Prove that if \( \left( {x}_{n}\right) \) is \( p \) -power summable and \( \left( {y}_{n}\right) \) is \( q \) -power summable, where \( p, q \) are conjugate exponents, then (i) \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n}{y}_{n} \) is absolutely convergent, and (ii) Hölder's inequality holds in the form \[ \left| {\mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n}{y}_{n}}\right| \leq {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {x}_{n}\right| }^{p}\right) }^{1/p}{\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {y}_{n}\right| }^{q}\right) }^{1/q}. \] Prove also that if \( \left( {x}_{n}\right) \) and \( \left( {y}_{n}\right) \) are both \( p \) -power summable, then so is \( \left( {{x}_{n} + {y}_{n}}\right) \), and Minkowski’s inequality \[ {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {x}_{n} + {y}_{n}\right| }^{p}\right) }^{1/p} \leq {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {x}_{n}\right| }^{p}\right) }^{1/p} + {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {y}_{n}\right| }^{p}\right) }^{1/p} \] holds. .3 Let \( p \geq 1 \), and let \( {l}_{p} \) denote the set of all \( p \) -power summable sequences, taken with termwise addition and multiplication-by-scalars. Prove that \[ {\begin{Vmatrix}{\left( {x}_{n}\right) }_{n = 1}^{\infty }\end{Vmatrix}}_{p} = {\left( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {x}_{n}\right| }^{p}\right) }^{1/p} \] defines a norm on \( {l}_{p} \) . (We define the normed space \( {l}_{p}\left( \mathbf{C}\right) \) of \( p \) -power summable sequences of complex numbers in the obvious analogous way.) Let \( X \) be a measurable subset of \( \mathbf{R} \), and \( p \geq 1 \) . We define \( {L}_{p}\left( X\right) \) to be the set of all functions \( f \), defined almost everywhere on \( \mathbf{R} \), such that \( f \) is measurable, \( f \) vanishes almost everywhere on \( \mathbf{R} \smallsetminus X \), and \( {\left| f\right| }^{p} \) is integrable. Taken with the pointwise operations of addition and multiplication-by-scalars, \( {L}_{p}\left( X\right) \) becomes a linear space. If we follow the usual practice of identifying two measurable functions that are equal almost everywhere, then \[ \parallel f{\parallel }_{p} = {\left( \int {\left| f\right| }^{p}\right) }^{1/p} \] is a norm, called the \( {L}_{p} \) -norm, on \( {L}_{p}\left( X\right) \) . (We met the normed space \( {L}_{1}\left( \mathbf{R}\right) \) in Exercise (4.1.1:6).) 
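The discrete inequalities of Exercise (4.4.4: 1) and the \( {l}_{p} \) norm of Exercise (4.4.4: 3) are easy to sanity-check numerically. The following Python sketch is purely illustrative and not part of the original text: the helper name `lp_norm` and the sample vectors are ad hoc choices. It evaluates both sides of Hölder's and Minkowski's inequalities for one pair of conjugate exponents.

```python
# Illustrative sketch only: numerical check of the discrete Hölder and
# Minkowski inequalities (Exercise (4.4.4: 1)) and of the l_p norm.
# The helper name and sample data are ad hoc, not from the text.

def lp_norm(x, p):
    """Return (sum |x_n|^p)^(1/p) for a finite real sequence x."""
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

p = 3.0
q = p / (p - 1.0)                     # conjugate exponent: 1/p + 1/q = 1

x = [0.5, -1.2, 2.0, 0.3]
y = [1.1, 0.4, -0.7, 2.5]

holder_lhs = abs(sum(a * b for a, b in zip(x, y)))
holder_rhs = lp_norm(x, p) * lp_norm(y, q)

minkowski_lhs = lp_norm([a + b for a, b in zip(x, y)], p)
minkowski_rhs = lp_norm(x, p) + lp_norm(y, p)

print(f"Hölder:    {holder_lhs:.6f} <= {holder_rhs:.6f}")
print(f"Minkowski: {minkowski_lhs:.6f} <= {minkowski_rhs:.6f}")
assert holder_lhs <= holder_rhs + 1e-12
assert minkowski_lhs <= minkowski_rhs + 1e-12
```

Any other finite real sequences, and any exponent \( p > 1 \) with its conjugate \( q \), should leave both assertions true, in line with Propositions (4.4.2) and (4.4.3).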
When \( X = \left\lbrack {a, b}\right\rbrack \) is a compact interval, we write \( {L}_{p}\left\lbrack {a, b}\right\rbrack \) rather than \( {L}_{p}\left( \left\lbrack {a, b}\right\rbrack \right) \) . ## (4.4.5) Exercises In these exercises, \( X \) is a measurable subset of \( \mathbf{R} \) . .1 Let \( X \) be integrable and \( 1 \leq r < s \) . Prove the following. (i) \( {L}_{s}\left( X\right) \subset {L}_{r}\left( X\right) \) . (Note that if \( f \in {L}_{s}\left( X\right) \), then \( {\left| f\right| }^{r} \in {L}_{s/r}\left( X\right) \) .) (ii) The linear mapping \( f \mapsto f \) of \( {L}_{s}\left( X\right) \) into \( {L}_{r}\left( X\right) \) is bounded and has norm \( \leq \mu {\left( X\right) }^{{r}^{-1} - {s}^{-1}} \) . .2 Let \( 1 \leq r \leq t \leq s < \infty, r \neq s \) , \[ \alpha = \frac{{t}^{-1} - {s}^{-1}}{{r}^{-1} - {s}^{-1}},\;\beta = \frac{{r}^{-1} - {t}^{-1}}{{r}^{-1} - {s}^{-1}}. \] and \( f \in {L}_{r}\left( X\right) \cap {L}_{s}\left( X\right) \) . Prove that \( f \in {L}_{t}\left( X\right) \) and \[ \parallel f{\parallel }_{t} \leq \parallel f{\parallel }_{r}^{\alpha }\parallel f{\parallel }_{s}^{\beta } \] (Consider \( {\left| f\right| }^{\alpha t}{\left| f\right| }^{\beta t} \) .) .3 Prove that the step functions that vanish outside \( X \) form a dense subspace of \( {L}_{p}\left( X\right) \) for \( p \geq 1 \) . (First consider the case where \( X \) is a compact interval.) .4 Let \( p, q \) be conjugate exponents, and let \( f, g \in {L}_{p}\left( X\right) \) . Prove that if \( 1 < p < 2 \), then \[ 2{\left( \parallel f{\parallel }_{p}^{p} + \parallel g{\parallel }_{p}^{p}\right) }^{q - 1} \geq \parallel f + g{\parallel }_{p}^{q} + \parallel f - g{\parallel }_{p}^{q} \] and \[ \parallel f + g{\parallel }_{p}^{p} + \parallel f - g{\parallel }_{p}^{p} \geq 2{\left( \parallel f{\parallel }_{p}^{q} + \parallel g{\parallel }_{p}^{q}\right) }^{p - 1}, \] and that the reverse inequalities hold if \( p \geq 2 \) . (Clarkson’s inequalities. Use Exercise (1.5.8: 10).) .5 Use the preceding exercise to prove that if \( p > 1 \), then \( {L}_{p}\left( X\right) \) is uniformly convex. (See Exercise (4.2.2:15).) (4.4.6) The Riesz-Fischer Theorem. \( {L}_{p}\left( X\right) \) is a Banach space for all \( p \geq 1 \) . More precisely, if \( \left( {f}_{n}\right) \) is a Cauchy sequence in \( {L}_{p}\left( X\right) \), then there exist \( f \in {L}_{p}\left( X\right) \) and a subsequence \( {\left( {f}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) of \( \left( {f}_{n}\right) \) such that (i) \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{p} = 0 \), and (ii) \( {f}_{{n}_{k}} \rightarrow f \) almost everywhere on \( X \) as \( k \rightarrow \infty \) . Proof. We illustrate the proof with the case \( X = \mathbf{R} \) and \( p > 1 \) . Given a Cauchy sequence \( \left( {f}_{n}\right) \) in \( {L}_{p}\left( \mathbf{R}\right) \), choose a subsequence \( {\left( {f}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) such that \[ {\begin{Vmatrix}{f}_{m} - {f}_{n}\end{Vmatrix}}_{p} \leq {2}^{-k}\;\left( {m, n \geq {n}_{k}}\right) \] Then \[ {\begin{Vmatrix}{f}_{{n}_{k + 1}} - {f}_{{n}_{k}}\end{Vmatrix}}_{p} \leq {2}^{-k}. 
\] Writing \( q = p/\left( {p - 1}\right) \), we see from Proposition (4.4.2) that for each positive integer \( N,\left| {{f}_{{n}_{k + 1}} - {f}_{{n}_{k}}}\right| \) is integrable over \( \left\lbrack {-N, N}\right\rbrack \), and \[ \int \left| {{f}_{{n}_{k + 1}} - {f}_{{n}_{k}}}\right| {\chi }_{\left\lbrack -N, N\right\rbrack } \leq {\begin{Vmatrix}{f}_{{n}_{k + 1}} - {f}_{{n}_{k}}\end{Vmatrix}}_{p}{\left( \int {\chi }_{\left\lbrack -N, N\right\rbrack }\right) }^{1/q} \] \[ \leq {2}^{-k}{\left( 2N\right) }^{1/q} \] so the series \[ \mathop{\sum }\limits_{{k = 1}}^{\infty }\int \left| {{f}_{{n}_{k + 1}} - {f}_{{n}_{k}}}\right| {\chi }_{\left\lbrack -N, N\right\rbrack } \] converges. It follows from Lebesgue’s Series Theorem (Exercise (2.2.13: 4)) that there exists a set \( {E}_{N} \) of measure zero such that the series \[ \mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {{f}_{{n}_{k + 1}}\left( x\right) - {f}_{{n}_{k}}\left( x\right) }\right| {\chi }_{\left\lbrack -N, N\right\rbrack }\left( x\right) \] converges for all \( x \in \mathbf{R} \smallsetminus {E}_{N} \), and the function \( \mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {{f}_{{n}_{k + 1}} - {f}_{{n}_{k}}}\right| {\chi }_{\left\lbrack -N, N\right\rbrack } \) is integrable. Then \[ E = \mathop{\bigcup }\limits_{{N = 1}}^{\infty }{E}_{N} \] is a set of measure zero, and \[ f\left( x\right) = \mathop{\lim }\limits_{{k \rightarrow \infty }}{f}_{{n}_{k}}\left( x\right) = {f}_{{n}_{1}}\left( x\right) + \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {{f}_{{n}_{k + 1}}\left( x\right) - {f}_{{n}_{k}}\left( x\right) }\right) \] exists for all \( x \in \mathbf{R} \smallsetminus E \) . The function \( f \) so defined is measurable, by Exercise (2.3.3: 4). Since \[ {\begin{Vmatrix}{f}_{{n}_{k}}\end{Vmatrix}}_{p} \leq {\begin{Vmatrix}{f}_{{n}_{1}}\end{Vmatrix}}_{p} + {\begin{Vmatrix}{f}_{{n}_{k}} - {f}_{{n}_{1}}\end{Vmatrix}}_{p} \leq {\begin{Vmatrix}{f}_{{n}_{1}}\end{Vmatrix}}_{p} + \frac{1}{2} \] for all \( k \), we see from Fatou’s Lemma (Exercise (2.2.13:11)) that \( {\left| f\right| }^{p} \) is integrable and hence that \( f \in {L}_{p}\left( \mathbf{R}\right) \) . Moreover, if \( n \geq {n}_{i} \), then by applying Fatou’s Lemma to the sequence \( {\left( \left| {f}_{{n}_{k}} - {f}_{n}\right| \right) }_{k = i}^{\infty } \) we see that \[ {\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{p} = \mathop{\lim }\limits_{{k \rightarrow \infty }}{\begin{Vmatrix}{f}_{{n}_{k}} - {f}_{n}\end{Vmatrix}}_{p} \leq {2}^{-i}. \] Hence \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{p} = 0 \) . ## (4.4.7) Exercises .1 Prove the Riesz-Fischer Theorem for a general measurable set \( X \subset \) R. Prove it also in the case \( p = 1 \) . .2 Prove that the space \( {l}_{p} \) is complete for \( p \geq 1 \) . In order to establish an elegant characterisation of bounded linear functionals on \( {L}_{p}\left( X\right) \), we first discuss those functions whose derivatives almost everywhere belong to \( {L}_{q}\left( \mathbf{R}\right) \) . (4.4.8) Lemma. Let \( I = \left\lbrack {a, b}\right\rbrack \) be a compact interval, \( q > 1 \), and \( G \) a real-valued function defined almost everywhere on \( \mat
\( {\left( \left| {f}_{{n}_{k}} - {f}_{n}\right| \right) }_{k = i}^{\infty } \) we see that \[ {\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{p} = \mathop{\lim }\limits_{{k \rightarrow \infty }}{\begin{Vmatrix}{f}_{{n}_{k}} - {f}_{n}\end{Vmatrix}}_{p} \leq {2}^{-i}. \] Hence \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{p} = 0 \) . ## (4.4.7) Exercises .1 Prove the Riesz-Fischer Theorem for a general measurable set \( X \subset \) R. Prove it also in the case \( p = 1 \) . .2 Prove that the space \( {l}_{p} \) is complete for \( p \geq 1 \) . In order to establish an elegant characterisation of bounded linear functionals on \( {L}_{p}\left( X\right) \), we first discuss those functions whose derivatives almost everywhere belong to \( {L}_{q}\left( \mathbf{R}\right) \) . (4.4.8) Lemma. Let \( I = \left\lbrack {a, b}\right\rbrack \) be a compact interval, \( q > 1 \), and \( G \) a real-valued function defined almost everywhere on \( \mathbf{R} \) and vanishing outside I. Then the following conditions are equivalent. (i) There exists \( g \in {L}_{q}\left( \mathbf{R}\right) \) such that \( {G}^{\prime }\left( x\right) = g\left( x\right) \) almost everywhere. (ii) There exists \( M > 0 \) such that \[ \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q}}{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}} \leq M \] whenever the points \( {x}_{k} \in I \) and \( {x}_{1} < {x}_{2} < \cdots < {x}_{n} \) . In that case, the smallest such \( M \) is \( \int {\left| g\right| }^{q} \) . Proof. Writing \( p = q/\left( {q - 1}\right) \), suppose that (i) holds, and let \( a \leq {x}_{1} < \) \( {x}_{2} < \cdots < {x}_{n} \leq b \) . Applying Proposition (4.4.2) to the functions \( {\chi }_{I} \) and \( {\chi }_{I}g \), we have \[ \left| {G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) }\right| = \left| {{\int }_{{x}_{k}}^{{x}_{k + 1}}g}\right| \] \[ \leq {\left( {\int }_{{x}_{k}}^{{x}_{k + 1}}{\chi }_{I}\right) }^{1/p}{\left( {\int }_{{x}_{k}}^{{x}_{k + 1}}{\left| g\right| }^{q}\right) }^{1/q} \] \[ = {\left( {x}_{k + 1} - {x}_{k}\right) }^{1/p}{\left( {\int }_{{x}_{k}}^{{x}_{k + 1}}{\left| g\right| }^{q}\right) }^{1/q}. \] Hence \[ {\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q} \leq {\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}{\int }_{{x}_{k}}^{{x}_{k + 1}}{\left| g\right| }^{q}, \] and therefore \[ \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q}}{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}} \leq \mathop{\sum }\limits_{{k = 1}}^{n}{\int }_{{x}_{k}}^{{x}_{k + 1}}{\left| g\right| }^{q} \leq \int {\left| g\right| }^{q}. \] Thus (ii) holds, and the smallest \( M \) for which (ii) holds is at most \( \int {\left| g\right| }^{q} \) . Now suppose that (ii) holds, and let \( {\left( \left( {a}_{k},{b}_{k}\right) \right) }_{k = 1}^{N} \) be a finite sequence of nonoverlapping open subintervals of \( I \) . 
Applying Exercise (4.4.4:1), we obtain \[ \mathop{\sum }\limits_{{k = 1}}^{N}\left| {G\left( {b}_{k}\right) - G\left( {a}_{k}\right) }\right| = \mathop{\sum }\limits_{{k = 1}}^{N}\left( \frac{\left| G\left( {b}_{k}\right) - G\left( {a}_{k}\right) \right| }{{\left( {b}_{k} - {a}_{k}\right) }^{1/p}}\right) {\left( {b}_{k} - {a}_{k}\right) }^{1/p} \] \[ \leq {\left( \mathop{\sum }\limits_{{k = 1}}^{N}\frac{{\left| G\left( {b}_{k}\right) - G\left( {a}_{k}\right) \right| }^{q}}{{\left( {b}_{k} - {a}_{k}\right) }^{q - 1}}\right) }^{1/q}{\left( \mathop{\sum }\limits_{{k = 1}}^{N}\left( {b}_{k} - {a}_{k}\right) \right) }^{1/p} \] \[ \leq M{\left( \mathop{\sum }\limits_{{k = 1}}^{N}\left( {b}_{k} - {a}_{k}\right) \right) }^{1/p}. \] Hence \( G \) is absolutely continuous. It follows from Exercise (2.2.17:2) that there exists an integrable function \( g \) such that \( {G}^{\prime }\left( x\right) = g\left( x\right) \) almost everywhere. For each positive integer \( n \) let \[ {x}_{n, k} = a + \frac{k}{{2}^{n}}\left( {b - a}\right) \;\left( {0 \leq k \leq {2}^{n}}\right) \] and define a step function \( {g}_{n} \) by setting \[ {g}_{n}\left( x\right) = \left\{ \begin{array}{ll} \frac{G\left( {x}_{n, k + 1}\right) - G\left( {x}_{n, k}\right) }{{x}_{n, k + 1} - {x}_{n, k}} & \text{ if }{x}_{n, k} < x < {x}_{n, k + 1} \\ 0 & \text{ otherwise. } \end{array}\right. \] Then \( g\left( x\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{g}_{n}\left( x\right) \) almost everywhere to be precise, on the complement of the union of \[ \left\{ {{x}_{n, k} : n \geq 1,0 \leq k \leq {2}^{n}}\right\} \] and the set of measure zero on which \( {G}^{\prime } = g \) . Also, \[ \int {\left| {g}_{n}\right| }^{q} = \mathop{\sum }\limits_{{k = 0}}^{{2}^{n}}\frac{{\left| G\left( {x}_{n, k + 1}\right) - G\left( {x}_{n, k}\right) \right| }^{q}}{{\left( {x}_{n, k + 1} - {x}_{n, k}\right) }^{q - 1}} \leq M. \] Applying Fatou’s Lemma, we now see that \( {\left| g\right| }^{q} \) is integrable and \( \int {\left| g\right| }^{q} \leq M \) . Hence (ii) implies (i). Referring to the last sentence of the first part of the proof, we also see that \( \int {\left| g\right| }^{q} \) is the smallest \( M \) for which (ii) holds. (4.4.9) Lemma. Let \( u \) be a bounded linear functional on \( {L}_{p}\left\lbrack {a, b}\right\rbrack \), and define \[ G\left( x\right) = \left\{ \begin{array}{ll} u\left( {\chi }_{\left\lbrack a, x\right\rbrack }\right) & \text{ if }a \leq x \leq b \\ 0 & \text{ otherwise. } \end{array}\right. \] Let \( a \leq {x}_{1} < {x}_{2} < \cdots < {x}_{n} \leq b \), and let \( f = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{c}_{k}{\chi }_{\left\lbrack {x}_{k},{x}_{k + 1}\right\rbrack } \) . Then \[ u\left( f\right) = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{c}_{k}\left( {G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) }\right) . \] Moreover, if there exists \( g \in {L}_{q}\left\lbrack {a, b}\right\rbrack \) such that \( {G}^{\prime } = g \) almost everywhere, then \( u\left( f\right) = \int {fg} \) . Proof. We have \[ u\left( f\right) = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{c}_{k}u\left( {\chi }_{\left\lbrack {x}_{k},{x}_{k + 1}\right\rbrack }\right) \] \[ = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{c}_{k}\left( {u\left( {{\chi }_{\left\lbrack a,{x}_{k + 1}\right\rbrack } - {\chi }_{\left\lbrack a,{x}_{k}\right\rbrack }}\right) }\right) \] \[ = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{c}_{k}\left( {G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) }\right) . 
\] Now suppose that \( {G}^{\prime } = g \) almost everywhere for some \( g \in {L}_{q}\left\lbrack {a, b}\right\rbrack \) . Then \[ u\left( f\right) = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{c}_{k}{\int }_{{x}_{k}}^{{x}_{k + 1}}g = \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\int }_{{x}_{k}}^{{x}_{k + 1}}{c}_{k}g = \int {fg}. \] \( ▱ \) We now show that if \( p, q \) are conjugate exponents, then the dual space \( {L}_{p}^{ * } \) can be identified with \( {L}_{q} \) . (4.4.10) Theorem. Let \( p, q \) be conjugate exponents. Then for each \( g \in \) \( {L}_{q}\left( X\right) \) \[ {u}_{g}\left( f\right) = \int {fg} \] defines a bounded linear functional on \( {L}_{p}\left( X\right) \) with norm equal to \( \parallel g{\parallel }_{q} \) . Conversely, to each bounded linear functional \( u \) on \( {L}_{p}\left( X\right) \) there corresponds a unique \( g \in {L}_{q}\left( X\right) \) such that \( u = {u}_{g} \) . Proof. If \( g \in {L}_{q}\left( X\right) \), then by Lemma (4.4.2), \( {u}_{g} \) is well defined on \( {L}_{p}\left( X\right) \) . It is trivial that \( {u}_{g} \) is linear, and Hölder’s inequality shows that \( \parallel g{\parallel }_{q} \) is a bound for \( {u}_{g} \) . On the other hand, taking \[ f = {\left( {g}^{ + }\right) }^{q/p} - {\left( {g}^{ - }\right) }^{q/p}, \] we see that \( f \in {L}_{p}\left( X\right) \) and \[ {u}_{g}\left( f\right) = \int {fg} = \int {\left| g\right| }^{1 + q{p}^{-1}} = \int {\left| g\right| }^{q} \] \[ = \parallel g{\parallel }_{q}{\left( \int {\left| g\right| }^{q}\right) }^{1 - {q}^{-1}} \] \[ = \parallel g{\parallel }_{q}{\left( \int {\left| f\right| }^{p}\right) }^{1/p} \] \[ = \parallel g{\parallel }_{q}\parallel f{\parallel }_{p}. \] Hence \( \begin{Vmatrix}{u}_{g}\end{Vmatrix} = \parallel g{\parallel }_{q} \) . Now consider any bounded linear functional \( u \) on \( {L}_{p}\left( X\right) \) . To begin with, take the case where \( X \) is a compact interval \( \left\lbrack {a, b}\right\rbrack \) . If \( u = {u}_{g} \) for some \( g \in {L}_{q}\left( X\right) \), then \( u\left( {\chi }_{\left\lbrack a, x\right\rbrack }\right) = {\int }_{a}^{x}g \) for each \( x \in X \) . This suggests that we define \[ G\left( x\right) = \left\{ \begin{array}{ll} u\left( {\chi }_{\left\lbrack a, x\right\rbrack }\right) & \text{ if }x \in X \\ 0 & \text{ if }x \in \mathbf{R} \smallsetminus X \end{array}\right. \] and try to show that \( {G}^{\prime } \in {L}_{q}\left( X\right) \) and that \( u\left( f\right) = \int f{G}^{\prime } \) for all \( f \in {L}_{p}\left( X\right) \) . To this end, let \( a \leq {x}_{1} < {x}_{2} < \cdots < {x}_{n} \leq b \) . Let \( \phi \) be the step function that vanishes outside \( \left\lbrack {{x}_{1},{x}_{n}}\right\rbrack \) and at each \( {x}_{i} \), and that takes the constant value \[ {c}_{k} = \frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q - 1}\operatorname{sgn}\left( {G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) }\right) }{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}} \] on \( \left( {{x}_{k},{x}_{k + 1}}\right) \), where \[ \operatorname{sgn}\left( x\right) = \begin{cases} 1 & \text{ if }x > 0 \\ 0 & \text{ if }x = 0 \\ - 1 & \text{ if }x < 0. \end{cases} \] Then, by Lemma (4.4.9), \[ \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}
s X \end{array}\right. \] and try to show that \( {G}^{\prime } \in {L}_{q}\left( X\right) \) and that \( u\left( f\right) = \int f{G}^{\prime } \) for all \( f \in {L}_{p}\left( X\right) \) . To this end, let \( a \leq {x}_{1} < {x}_{2} < \cdots < {x}_{n} \leq b \) . Let \( \phi \) be the step function that vanishes outside \( \left\lbrack {{x}_{1},{x}_{n}}\right\rbrack \) and at each \( {x}_{i} \), and that takes the constant value \[ {c}_{k} = \frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q - 1}\operatorname{sgn}\left( {G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) }\right) }{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}} \] on \( \left( {{x}_{k},{x}_{k + 1}}\right) \), where \[ \operatorname{sgn}\left( x\right) = \begin{cases} 1 & \text{ if }x > 0 \\ 0 & \text{ if }x = 0 \\ - 1 & \text{ if }x < 0. \end{cases} \] Then, by Lemma (4.4.9), \[ \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q}}{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}} = u\left( \phi \right) \] \[ \leq \parallel u\parallel \parallel \phi {\parallel }_{p} \] \[ = \parallel u\parallel {\left( \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\left| {c}_{k}\right| }^{p}\left( {x}_{k + 1} - {x}_{k}\right) \right) }^{1/p} \] \[ = \parallel u\parallel {\left( \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q}}{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}}\right) }^{1/p}, \] and therefore \[ \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\frac{{\left| G\left( {x}_{k + 1}\right) - G\left( {x}_{k}\right) \right| }^{q}}{{\left( {x}_{k + 1} - {x}_{k}\right) }^{q - 1}} \leq \parallel u{\parallel }^{q}. \] Thus, by Lemma (4.4.8), there exists \( g \in {L}_{q}\left( X\right) \) such that \( {G}^{\prime } = g \) almost everywhere and \( \parallel g{\parallel }_{q} \leq \parallel u\parallel \) . It follows from Lemma (4.4.9) that \( u\left( f\right) = \) \( \int {fg} \) for each step function \( f \) that vanishes outside \( X \) . The set of such step functions is dense in the space \( {L}_{p}\left( X\right) \), by Exercise (4.4.5: 3); moreover, the linear functionals \( u \) and \( f \mapsto \int {fg} \) are bounded, and therefore uniformly continuous, on \( {L}_{p}\left( X\right) \) . Referring to Proposition (3.2.12), we conclude that \( u = {u}_{g}. \) It remains to remove the restriction that \( X \) be a compact interval and to prove the uniqueness of \( g \) for a given \( u \) . This is left as an exercise. ## (4.4.11) Exercises .1 Complete the proof of Theorem (4.4.10) by removing the restriction that \( X \) be a compact interval, and by proving the uniqueness of the function \( g \) for a given \( u \) . .2 A measurable function \( f \) on \( \mathbf{R} \) is said to be essentially bounded if there exists \( M > 0 \) such that \( \left| {f\left( x\right) }\right| \leq M \) almost everywhere. Prove that \[ {\left\| f\right\| }_{\infty } = \inf \left\{ {M > 0 : \left| {f\left( x\right) }\right| \leq M\text{ almost everywhere}}\right\} \] defines a norm on the vector space \( {L}_{\infty } \) of all essentially bounded functions under pointwise operations, and that \( {L}_{\infty } \) is a Banach space with respect to this norm (where, as usual, we identify measurable functions that are equal almost everywhere). The real number \( \parallel f{\parallel }_{\infty } \) is called the essential supremum of the element \( f \) of \( {L}_{\infty } \) . 
.3 Let \( K \) be a compact subset of \( \mathbf{R} \), and \( f : K \rightarrow \mathbf{R} \) a continuous function. Extend \( f \) to \( \mathbf{R} \) by setting \( f\left( x\right) = 0 \) if \( x \in \mathbf{R} \smallsetminus K \) . Prove that \( f \in {L}_{\infty } \) and that \( \parallel f{\parallel }_{\infty } = \mathop{\sup }\limits_{{x \in K}}\left| {f\left( x\right) }\right| \) . .4 Prove that if \( f \in {L}_{1} \) and \( g \in {L}_{\infty } \), then \( {fg} \in {L}_{1} \) and Hölder’s inequality holds in the form \[ \parallel {fg}{\parallel }_{1} \leq \parallel f{\parallel }_{1}\parallel g{\parallel }_{\infty }. \] .5 Prove that for each \( g \in {L}_{\infty } \) , \[ {u}_{g}\left( f\right) = \int {fg} \] defines a bounded linear functional on \( {L}_{1} \) with norm equal to \( \parallel g{\parallel }_{\infty } \) , and that every bounded linear functional on \( {L}_{1} \) has the form \( {u}_{g} \) for a unique corresponding \( g \in {L}_{\infty } \) . .6 Let \( 0 < p < 1 \), and let \( {L}_{p} \) consist of all measurable functions \( f \) on \( \mathbf{R} \) such that \( {\left| f\right| }^{p} \) is integrable. Show that when we identify functions that are equal almost everywhere, \[ {\rho }_{p}\left( {f, g}\right) = \int {\left| f - g\right| }^{p} \] defines a metric on \( {L}_{p} \), and that \( \left( {{L}_{p},{\rho }_{p}}\right) \) is a complete metric space. Show also that the only continuous linear mapping from \( {L}_{p} \) (with pointwise operations) to \( \mathbf{R} \) is the zero mapping. ## 4.5 Function Spaces Among the most important examples of Banach spaces are certain subsets of the space \( \mathcal{B}\left( {X, Y}\right) \) of bounded functions from a nonempty set \( X \) into a Banach space \( Y \), where the norm on \( \mathcal{B}\left( {X, Y}\right) \) is the sup norm \[ \parallel f\parallel = \sup \{ \parallel f\left( x\right) \parallel : x \in X\} . \] Note that when \( X \) is a compact interval \( \left\lbrack {a, b}\right\rbrack \), we usually write \( \mathcal{B}\left\lbrack {a, b}\right\rbrack \) rather than \( \mathcal{B}\left( \left\lbrack {a, b}\right\rbrack \right) \) ; we use similar notations without further comment in related situations. A special case of the next result has appeared already (Exercise \( \left( {{4.1.6} : 4}\right) ) \) . (4.5.1) Proposition. If \( Y \) is a Banach space, then \( \mathcal{B}\left( {X, Y}\right) \) is a Banach space. Proof. Let \( \left( {f}_{n}\right) \) be a Cauchy sequence in \( \mathcal{B}\left( {X, Y}\right) \), and \( \varepsilon > 0 \) . There exists \( N \) such that \( \begin{Vmatrix}{{f}_{m} - {f}_{n}}\end{Vmatrix} < \varepsilon \) for all \( m, n \geq N \) . For each \( x \in X \) we have \[ \begin{Vmatrix}{{f}_{m}\left( x\right) - {f}_{n}\left( x\right) }\end{Vmatrix} \leq \begin{Vmatrix}{{f}_{m} - {f}_{n}}\end{Vmatrix} < \varepsilon \] whenever \( m, n \geq N \) ; so \( {\left( {f}_{n}\left( x\right) \right) }_{n = 1}^{\infty } \) is a Cauchy sequence in \( Y \) . Since \( Y \) is complete, \[ f\left( x\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n}\left( x\right) \] exists; also, for all \( m \geq N \) , \[ \begin{Vmatrix}{{f}_{m}\left( x\right) - f\left( x\right) }\end{Vmatrix} = \mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{{f}_{m}\left( x\right) - {f}_{n}\left( x\right) }\end{Vmatrix} \leq \varepsilon . 
\] (1) Hence \[ \parallel f\left( x\right) \parallel \leq \begin{Vmatrix}{{f}_{N}\left( x\right) }\end{Vmatrix} + \begin{Vmatrix}{{f}_{N}\left( x\right) - f\left( x\right) }\end{Vmatrix} \leq \begin{Vmatrix}{f}_{N}\end{Vmatrix} + \varepsilon . \] Since \( x \in X \) is arbitrary, we see that \( f \in \mathcal{B}\left( {X, Y}\right) \) . Also, it follows from (1) that \( \begin{Vmatrix}{{f}_{m} - f}\end{Vmatrix} \leq \varepsilon \) for all \( m \geq N \) . Since \( \varepsilon > 0 \) is arbitrary, \( \left( {f}_{n}\right) \) converges to \( f \) in \( \mathcal{B}\left( {X, Y}\right) \) . Hence \( \mathcal{B}\left( {X, Y}\right) \) is complete. ## (4.5.2) Exercises .1 Let \( Y \) be a finite-dimensional Banach space, and \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) a basis of \( Y \) . Prove that each \( f \in \mathcal{B}\left( {X, Y}\right) \) can be written uniquely in the form \( x \mapsto \mathop{\sum }\limits_{{k = 1}}^{n}{f}_{k}\left( x\right) {e}_{k} \) with each \( {f}_{k} \in \mathcal{B}\left( {X,\mathbf{F}}\right) \) . Prove also that for each \( k, f \mapsto {f}_{k} \) is a bounded linear mapping of \( \mathcal{B}\left( {X, Y}\right) \) into \( \mathcal{B}\left( {X,\mathbf{F}}\right) \) . .2 Prove that the mapping \( f \mapsto \mathop{\sup }\limits_{{t \in X}}f\left( x\right) \) of \( \mathcal{B}\left( {X,\mathbf{R}}\right) \) into \( \mathbf{R} \) is continuous. .3 Let \( Y \) be a Banach space, and \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} \) a series in \( \mathcal{B}\left( {X, Y}\right) \) . Let \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n} \) be a convergent series of nonnegative real numbers such that \( \begin{Vmatrix}{f}_{n}\end{Vmatrix} \leq {c}_{n} \) for each \( n \) . Show that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n} \) converges in the Banach space \( \mathcal{B}\left( {X, Y}\right) \) . .4 Let \( I = \left\lbrack {a, b}\right\rbrack \) be a compact interval, and \( \mathcal{B}\mathcal{V}\left( I\right) \) the linear space of all real-valued functions of bounded variation on \( I \), with pointwise operations. Show that \[ \parallel f{\parallel }_{\mathrm{{bv}}} = \left| {f\left( a\right) }\right| + {T}_{f}\left( {a, b}\right) \] defines a norm on \( \mathcal{B}\mathcal{V}\left( I\right) \), and that \( \mathcal{B}\mathcal{V}\left( I\right) \) is complete with respect to this norm. (For the second part recall Exercise (1.5.15: 6).) Let \( f,{f}_{1},{f}_{2},\ldots \) be mappings of a nonempty set \( X \) into a metric space \( \left( {Y,\rho }\right) \) . We say that the sequence \( \left( {f}_{n}\right) \) - converges simply to \( f \) on \( X \) if for each \( x \in X \) the sequence \( \left( {{f}_{n}\left( x\right) }\right) \) converges to \( f\left( x\right) \) in \( Y \) ; - converges uniformly to \( f \) on \( X \) if \[ \mathop{\sup }\limits_{{x \in X}}\rho \left( {{f}_{n}\left( x\right), f\left( x\right) }\right) \rightarrow 0\text{ as }n \rightarrow \infty . \] Clearly, uniform convergence i
all real-valued functions of bounded variation on \( I \), with pointwise operations. Show that \[ \parallel f{\parallel }_{\mathrm{{bv}}} = \left| {f\left( a\right) }\right| + {T}_{f}\left( {a, b}\right) \] defines a norm on \( \mathcal{B}\mathcal{V}\left( I\right) \), and that \( \mathcal{B}\mathcal{V}\left( I\right) \) is complete with respect to this norm. (For the second part recall Exercise (1.5.15: 6).) Let \( f,{f}_{1},{f}_{2},\ldots \) be mappings of a nonempty set \( X \) into a metric space \( \left( {Y,\rho }\right) \) . We say that the sequence \( \left( {f}_{n}\right) \) - converges simply to \( f \) on \( X \) if for each \( x \in X \) the sequence \( \left( {{f}_{n}\left( x\right) }\right) \) converges to \( f\left( x\right) \) in \( Y \) ; - converges uniformly to \( f \) on \( X \) if \[ \mathop{\sup }\limits_{{x \in X}}\rho \left( {{f}_{n}\left( x\right), f\left( x\right) }\right) \rightarrow 0\text{ as }n \rightarrow \infty . \] Clearly, uniform convergence implies simple convergence; but, as the next exercise shows, the converse is false. ## (4.5.3) Exercises . 1 Give an example of a sequence \( \left( {f}_{n}\right) \) of continuous mappings from \( \left\lbrack {0,1}\right\rbrack \) into \( \left\lbrack {0,1}\right\rbrack \) that converges to 0 simply, but not uniformly, on \( \left\lbrack {0,1}\right\rbrack \) . (Consider a spike of height 1 travelling along the \( x \) -axis towards 0 .) .2 Let \( Y \) be a normed space. Prove that a sequence \( \left( {f}_{n}\right) \) in \( \mathcal{B}\left( {X, Y}\right) \) converges to a limit \( f \) in the normed space \( \mathcal{B}\left( {X, Y}\right) \) if and only if \( \left( {f}_{n}\right) \) converges uniformly to \( f \) on \( X \) . .3 Let \( X \) be a compact metric space, and \( \left( {f}_{n}\right) ,\left( {g}_{n}\right) \) strictly increasing sequences of real-valued functions on \( X \) that converge simply to the same bounded function \( f : X \rightarrow \mathbf{R} \) . Show that for each \( m \) there exists \( n \) such that \( {f}_{m} < {g}_{n} \) (that is, \( {f}_{m}\left( x\right) < {g}_{n}\left( x\right) \) for all \( x \in X \) ). Show also that we cannot omit "compact" from the hypotheses. Now let \( \left( {X,\rho }\right) \) be a metric space, and \( Y \) a normed space. The set of all continuous mappings of \( X \) into \( Y \) is denoted by \( \mathcal{C}\left( {X, Y}\right) \) or \( {\mathcal{C}}_{Y}\left( X\right) \), and the set of all bounded continuous mappings of \( X \) into \( Y \) by \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) or \( {\mathcal{C}}_{Y}^{\infty }\left( X\right) \) ; so \[ {\mathcal{C}}^{\infty }\left( {X, Y}\right) = \mathcal{B}\left( {X, Y}\right) \cap \mathcal{C}\left( {X, Y}\right) \] If \( X \) is compact, then \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) = \mathcal{C}\left( {X, Y}\right) \), by Exercise (3.3.7:1). In general, \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) is a linear subspace of \( \mathcal{B}\left( {X, Y}\right) \) ; we consider it as a normed space, taken with the sup norm. We usually write \( {\mathcal{C}}^{\infty }\left( X\right) \) and \( \mathcal{C}\left( X\right) \), respectively, instead of \( {\mathcal{C}}^{\infty }\left( {X,\mathbf{R}}\right) \) and \( \mathcal{C}\left( {X,\mathbf{R}}\right) \) . (4.5.4) Proposition. If \( X \) is a metric space and \( Y \) a Banach space, then \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) is a closed, and therefore complete, subspace of \( \mathcal{B}\left( {X, Y}\right) \) . Proof. 
Let \( \left( {f}_{n}\right) \) be a sequence of elements of \( \mathcal{C}\left( {X, Y}\right) \) converging to a limit \( f \) in \( \mathcal{B}\left( {X, Y}\right) \) . For each \( \varepsilon > 0 \) there exists \( N \) such that \( \begin{Vmatrix}{f - {f}_{n}}\end{Vmatrix} \leq \varepsilon /3 \) whenever \( n \geq N \) . Given \( {x}_{0} \) in \( X \), construct a neighbourhood \( U \) of \( {x}_{0} \) such that if \( x \in U \), then \( \begin{Vmatrix}{{f}_{N}\left( x\right) - {f}_{N}\left( {x}_{0}\right) }\end{Vmatrix} \leq \varepsilon /3 \) . For each \( x \in U \) we then have \[ \begin{Vmatrix}{f\left( x\right) - f\left( {x}_{0}\right) }\end{Vmatrix} \leq \begin{Vmatrix}{f\left( x\right) - {f}_{N}\left( x\right) }\end{Vmatrix} + \begin{Vmatrix}{{f}_{N}\left( x\right) - {f}_{N}\left( {x}_{0}\right) }\end{Vmatrix} \] \[ + \begin{Vmatrix}{{f}_{N}\left( {x}_{0}\right) - f\left( {x}_{0}\right) }\end{Vmatrix} \] \[ \leq \begin{Vmatrix}{f - {f}_{N}}\end{Vmatrix} + \frac{\varepsilon }{3} + \begin{Vmatrix}{{f}_{N} - f}\end{Vmatrix} \] \[ = \varepsilon \text{.} \] Since \( \varepsilon > 0 \) and \( {x}_{0} \in X \) are arbitrary, it follows that \( f \) is continuous on \( X \) . Thus \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) is closed in \( \mathcal{B}\left( {X, Y}\right) \) ; whence, by Propositions (4.5.1) and (3.2.9), \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) is complete. Proposition (4.5.4) shows that a uniform limit of bounded continuous functions is continuous. Taken with Exercise (4.5.3:1), this observation highlights the significance of the next theorem. (4.5.5) Dini’s Theorem. Let \( X \) be a compact metric space, and \( \left( {f}_{n}\right) \) an increasing sequence in \( \mathcal{C}\left( X\right) \) that converges simply to a continuous function \( f \) . Then \( \left( {f}_{n}\right) \) converges to \( f \) uniformly. Proof. Let \( \varepsilon > 0 \) . For each \( x \in X \) there exists \( {N}_{x} \) such that if \( n \geq {N}_{x} \) , then \( 0 \leq f\left( x\right) - {f}_{n}\left( x\right) \leq \varepsilon /3 \) . Since \( f \) and \( {f}_{{N}_{x}} \) are continuous, there exists an open neighbourhood \( {U}_{x} \) of \( x \) such that if \( {x}^{\prime } \in {U}_{x} \), then \( \left| {f\left( x\right) - f\left( {x}^{\prime }\right) }\right| \leq \varepsilon /3 \) and \( \left| {{f}_{{N}_{x}}\left( x\right) - {f}_{{N}_{x}}\left( {x}^{\prime }\right) }\right| \leq \varepsilon /3 \) ; whence \[ 0 \leq f\left( {x}^{\prime }\right) - {f}_{{N}_{x}}\left( {x}^{\prime }\right) \] \[ \leq \left| {f\left( x\right) - f\left( {x}^{\prime }\right) }\right| + f\left( x\right) - {f}_{{N}_{x}}\left( x\right) + \left| {{f}_{{N}_{x}}\left( x\right) - {f}_{{N}_{x}}\left( {x}^{\prime }\right) }\right| \] \[ \leq \frac{\varepsilon }{3} + \frac{\varepsilon }{3} + \frac{\varepsilon }{3} \] \[ = \varepsilon \text{.} \] Since \( X \) is compact, there are finitely many points \( {x}_{1},\ldots ,{x}_{\nu } \) of \( X \) such that the neighbourhoods \( {U}_{{x}_{i}} \) cover \( X \) . Setting \[ {n}_{\varepsilon } = \max \left\{ {{N}_{{x}_{i}} : 1 \leq i \leq \nu }\right\} \] consider \( n \geq {n}_{\varepsilon } \) . Given \( x \in X \), choose \( i \) such that \( x \in {U}_{{x}_{i}} \) ; then \[ 0 \leq f\left( x\right) - {f}_{n}\left( x\right) \leq f\left( x\right) - {f}_{{n}_{\varepsilon }}\left( x\right) \leq f\left( x\right) - {f}_{{N}_{{x}_{i}}}\left( x\right) \leq \varepsilon . 
\] Since \( \varepsilon > 0 \) and \( x \in X \) are arbitrary, we conclude that \( \left( {f}_{n}\right) \) converges to \( f \) uniformly. ## (4.5.6) Exercises .1 Show that "increasing" can be replaced by "decreasing" in Dini's Theorem. .2 Give an alternative proof of Dini's Theorem using the sequential compactness of \( X \) . .3 Let \( X \) be a metric space, \( Y \) a Banach space, and \( D \) a dense subset of \( X \) . Let \( \left( {f}_{n}\right) \) be a sequence of bounded continuous mappings of \( X \) into \( Y \) such that the restrictions of the functions \( {f}_{n} \) to \( D \) form a uniformly convergent sequence. Prove that \( \left( {f}_{n}\right) \) is uniformly convergent on \( X \) . .4 Let \( X \) be a metric space, and \( Y \) a normed space. Prove that the mapping \( \left( {x, f}\right) \mapsto f\left( x\right) \) is continuous on \( X \times {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) . .5 Let \( I \) be a compact interval in \( \mathbf{R} \), and \( \left( {f}_{n}\right) \) a sequence of increasing real functions on \( I \) that converges simply in \( I \) to a continuous function \( f \) . Prove that \( f \) is increasing and that \( \left( {f}_{n}\right) \) converges to \( f \) uniformly on \( I \) . .6 Let \( a, b \) be real numbers with \( b > 0 \), and let \( X \) be the set of all continuous mappings \( f : \left\lbrack {0, b}\right\rbrack \rightarrow \mathbf{R} \) such that \( f\left( 0\right) = a \) . Prove that \( X \) is complete with respect to the sup norm. .7 Let \( I \) be a compact interval in \( \mathbf{R},{x}_{0} \in I \), and \( \alpha > 0 \) . Show that \[ \parallel f{\parallel }^{\prime } = \sup \left\{ {{\mathrm{e}}^{-\alpha \left| {x - {x}_{0}}\right| }\left| {f\left( x\right) }\right| : x \in I}\right\} \] defines a norm on \( \mathcal{C}\left( I\right) \), and that \( \mathcal{C}\left( I\right) \) is complete with respect to this norm. .8 In the notation of Exercise (4.5.2:4) prove that if a sequence \( \left( {f}_{n}\right) \) converges to a limit \( f \) with respect to the norm \( \parallel \cdot {\parallel }_{\mathrm{{bv}}} \) on \( \mathcal{B}\mathcal{V}\left( I\right) \), then it converges to \( f \) with respect to the sup norm on \( \mathcal{B}\left( I\right) \) . With \( I = \left\lbrack {0,1}\right\rbrack \) find a sequence in \( \mathcal{B}\mathcal{V}\left( I\right) \cap \mathcal{C}\left( I\right) \) that (i) converges to a limit \( f \in \mathcal{C}\left( I\right) \) with respect to the sup norm, and (ii) is not a Cauchy sequence with respect to \( \parallel \cdot {\parallel }_{\mathrm{{bv}}} \) . (Note Exercise (1.5.15:4).) Let \( X \) be a metric space, \( Y \) a normed space, and \( \mathcal{F} \) a subset of \( \mathcal{B}\left( {X, Y}\right) \) . We say that \( \mathcal{F} \) is - equicontinuous at \( a \in X \) if for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that \( \parallel f\left( x\right) - f\left
e (4.5.2:4) prove that if a sequence \( \left( {f}_{n}\right) \) converges to a limit \( f \) with respect to the norm \( \parallel \cdot {\parallel }_{\mathrm{{bv}}} \) on \( \mathcal{B}\mathcal{V}\left( I\right) \), then it converges to \( f \) with respect to the sup norm on \( \mathcal{B}\left( I\right) \) . With \( I = \left\lbrack {0,1}\right\rbrack \) find a sequence in \( \mathcal{B}\mathcal{V}\left( I\right) \cap \mathcal{C}\left( I\right) \) that (i) converges to a limit \( f \in \mathcal{C}\left( I\right) \) with respect to the sup norm, and (ii) is not a Cauchy sequence with respect to \( \parallel \cdot {\parallel }_{\mathrm{{bv}}} \) . (Note Exercise (1.5.15:4).) Let \( X \) be a metric space, \( Y \) a normed space, and \( \mathcal{F} \) a subset of \( \mathcal{B}\left( {X, Y}\right) \) . We say that \( \mathcal{F} \) is - equicontinuous at \( a \in X \) if for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that \( \parallel f\left( x\right) - f\left( a\right) \parallel < \varepsilon \) whenever \( f \in \mathcal{F} \) and \( \rho \left( {x, a}\right) < \delta , \) - equicontinuous (on \( X \) ) if it is equicontinuous at each point of \( X \) ; - uniformly equicontinuous if for each \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that \( \parallel f\left( x\right) - f\left( y\right) \parallel < \varepsilon \) whenever \( f \in \mathcal{F}, x \in X, y \in X \), and \( \rho \left( {x, y}\right) < \delta . \) Clearly, uniform equicontinuity implies equicontinuity, and if \( \mathcal{F} \) is equicontinuous at \( a \), then each \( f \in \mathcal{F} \) is continuous at \( a \) . ## (4.5.7) Exercises In these exercises, \( X, Y \), and \( \mathcal{F} \) are as in the first sentence of the last paragraph. .1 Suppose that there exist constants \( c > 0 \) and \( \lambda \geq 1 \) such that \[ \parallel f\left( x\right) - f\left( y\right) \parallel \leq {c\rho }{\left( x, y\right) }^{\lambda } \] for all \( f \in \mathcal{F} \) and all \( x, y \in X \) . Show that \( \mathcal{F} \) is uniformly equicontinuous. .2 Let \( \alpha > 0 \), and let \( \mathcal{F} \) be the set of all mappings \( f : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbf{R} \) such that \( {f}^{\prime } \) exists, is continuous, and has sup norm at most \( \alpha \) . Show that \( \mathcal{F} \) is uniformly equicontinuous. .3 Show that \( \left\{ {{x}^{n} : n \in \mathbf{N}}\right\} \) is not equicontinuous at 1 . .4 Let \( \left( {f}_{n}\right) \) be an equicontinuous sequence of real-valued functions on \( X \) . Prove that the sequence \[ {\left( {f}_{1} \vee {f}_{2} \vee \cdots \vee {f}_{n}\right) }_{n = 1}^{\infty } \] is also equicontinuous. .5 For each \( \lambda \in L \) let \( {\mathcal{F}}_{\lambda } \subset \mathcal{B}\left( {X, Y}\right) \) be equicontinuous at \( a \) . Prove that if \( L \) is a finite set, then \( \mathop{\bigcup }\limits_{{\lambda \in L}}{\mathcal{F}}_{\lambda } \) is equicontinuous at \( a \) . Give an example where \( L \) is an infinite set and \( \mathop{\bigcup }\limits_{{\lambda \in L}}{\mathcal{F}}_{\lambda } \) is not equicontinuous at \( a \) . .6 Let \( \left( {f}_{n}\right) \) be a sequence of functions in \( \mathcal{B}\left( {X, Y}\right) \) that converges simply to a function \( f \) and is equicontinuous at \( a \in X \) . Prove that \( f \) is continuous at \( a \) . Hence prove that the closure of an equicontinuous set in \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) \) is equicontinuous. 
.7 Prove that if \( X \) is compact and \( \mathcal{F} \subset \mathcal{C}\left( {X, Y}\right) \) is equicontinuous, then \( \mathcal{F} \) is uniformly equicontinuous. .8 Suppose that \( X \) is compact, and let \( \left( {f}_{n}\right) \) be a convergent sequence in \( \mathcal{C}\left( {X, Y}\right) \) . Prove that \( \left( {f}_{n}\right) \) is uniformly equicontinuous. (Let \( f = \) \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{f}_{n} \) . Given \( \varepsilon > 0 \), choose \( N \) such that \( \begin{Vmatrix}{f - {f}_{n}}\end{Vmatrix} < \varepsilon \) for all \( n \geq N \) . First find \( {\delta }_{1} > 0 \) such that \( \begin{Vmatrix}{{f}_{n}\left( x\right) - {f}_{n}\left( y\right) }\end{Vmatrix} < {3\varepsilon } \) whenever \( \rho \left( {x, y}\right) < {\delta }_{1} \) and \( n \geq N \) .) .9 Suppose that \( X \) is compact, and that \( \left( {f}_{n}\right) \) is an equicontinuous sequence in \( \mathcal{C}\left( {X, Y}\right) \) that converges simply to a function \( f : X \rightarrow Y \) . Then \( f \) is continuous on \( X \), by Exercise (4.5.7: 6). Show that \( \left( {f}_{n}\right) \) converges uniformly to \( f \) . (Given \( \varepsilon > 0 \), use Exercise (4.5.7: 7) to obtain \( \delta \) as in the definition of "uniformly equicontinuous". Then cover \( X \) by finitely many balls of the form \( B\left( {x,\delta }\right) \) .) .10 Let \( \left( {f}_{n}\right) \) be a sequence of continuous real-valued mappings on a compact interval \( I \) . (i) Prove that if \( \left( {f}_{n}\right) \) is a Cauchy sequence in \( \mathcal{C}\left( I\right) \), then it is a Cauchy sequence in \( {L}_{2}\left( I\right) \) . (ii) Prove that if \( \left( {f}_{n}\right) \) is both equicontinuous and a Cauchy sequence relative to the \( {L}_{2} \) -norm, then \( \left( {f}_{n}\right) \) converges in \( \mathcal{C}\left( I\right) \) . (For (ii) fix \( {t}_{0} \in I \) and \( \varepsilon > 0 \) . Choose \( \delta > 0 \) such that if \( t \in I \) and \( \left| {t - {t}_{0}}\right| < \delta \), then \( \left| {{f}_{n}\left( t\right) - {f}_{n}\left( {t}_{0}\right) }\right| < \varepsilon \) for all \( n \) . Let \( \chi \) be the characteristic function of \( I \cap \left\lbrack {{t}_{0} - \delta ,{t}_{0} + \delta }\right\rbrack \), and show that \[ \int \chi \left( t\right) {\left| {f}_{m}\left( {t}_{0}\right) - {f}_{n}\left( {t}_{0}\right) \right| }^{2}\mathrm{\;d}t < {c\delta }{\varepsilon }^{2} \] for some constant \( c > 0 \) and all sufficiently large \( m \) and \( n \) . Deduce that \( \left( {{f}_{n}\left( {t}_{0}\right) }\right) \) is a Cauchy sequence in \( I \) .) Show that the equicontinuity hypothesis cannot be dropped in (ii). If \( X \) is compact and \( Y \) is a Banach space, then we have a powerful characterisation of totally bounded subsets of \( \mathcal{C}\left( {X, Y}\right) \) . (4.5.8) Ascoli’s Theorem. \( {}^{2} \) Let \( X \) be a compact metric space, \( Y \) a normed space, and \( \mathcal{F} \) a subset of \( \mathcal{C}\left( {X, Y}\right) \) . Then \( \mathcal{F} \) is totally bounded if and only if (i) \( \mathcal{F} \) is equicontinuous and (ii) for each \( x \in X \) , \[ {\mathcal{F}}_{x} = \{ f\left( x\right) : f \in \mathcal{F}\} \] is a totally bounded subset of \( Y \) . Proof. Assume first that \( \mathcal{F} \) is totally bounded, and let \( \varepsilon > 0 \) . Construct a finite \( \varepsilon \) -approximation \( \left\{ {{f}_{1},\ldots ,{f}_{N}}\right\} \) to \( \mathcal{F} \) . 
Then for each \( f \) in \( \mathcal{F} \) there exists \( i \) such that \( \begin{Vmatrix}{f - {f}_{i}}\end{Vmatrix} \leq \varepsilon \) . So for each \( x \in X \) we have \( \begin{Vmatrix}{f\left( x\right) - {f}_{i}\left( x\right) }\end{Vmatrix} \leq \varepsilon \) , from which it follows that \( \left\{ {{f}_{1}\left( x\right) ,\ldots ,{f}_{N}\left( x\right) }\right\} \) is an \( \varepsilon \) -approximation to \( {\mathcal{F}}_{x} \) . (\( {}^{2} \) This is also known as the Ascoli-Arzelà Theorem.) Hence \( {\mathcal{F}}_{x} \) is totally bounded. On the other hand, choose \( \delta > 0 \) such that if \( \rho \left( {x, y}\right) < \delta \), then \( \begin{Vmatrix}{{f}_{k}\left( x\right) - {f}_{k}\left( y\right) }\end{Vmatrix} \leq \varepsilon \) for each \( k \) . With \( f \) and \( {f}_{i} \) as in the foregoing, we have \[ \parallel f\left( x\right) - f\left( y\right) \parallel \leq \begin{Vmatrix}{f\left( x\right) - {f}_{i}\left( x\right) }\end{Vmatrix} + \begin{Vmatrix}{{f}_{i}\left( x\right) - {f}_{i}\left( y\right) }\end{Vmatrix} \] \[ + \begin{Vmatrix}{{f}_{i}\left( y\right) - f\left( y\right) }\end{Vmatrix} \] \[ \leq \varepsilon + \varepsilon + \varepsilon \] \[ = 3\varepsilon \text{.} \] Since \( \varepsilon > 0 \) is arbitrary, it follows that \( \mathcal{F} \) is equicontinuous. Now assume, conversely, that conditions (i) and (ii) hold, and let \( \varepsilon \) be any positive number. For each \( x \in X \) choose an open neighbourhood \( {U}_{x} \) of \( x \) such that \( \parallel f\left( y\right) - f\left( x\right) \parallel < \varepsilon \) for each \( f \in \mathcal{F} \) and each \( y \in {U}_{x} \) . Since \( X \) is compact, it can be covered by a finite family \( \left\{ {{U}_{{x}_{1}},\ldots ,{U}_{{x}_{m}}}\right\} \) of such neighbourhoods. Now, the sets \( {\mathcal{F}}_{{x}_{i}}\left( {1 \leq i \leq m}\right) \) are totally bounded, as is therefore their union \( K \) . Let \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{n}}\right\} \) be a finite \( \varepsilon \) -approximation to \( K \) . On the other hand, let \( \Phi \) be the finite set of all mappings of \( \{ 1,\ldots, m\} \) into \( \{ 1,\ldots, n\} \), and for each \( \varphi \in \Phi \) let \[ {S}_{\varphi } = \left\{ {f \in \mathcal{F} : \begin{Vmatrix}{f\left( {x}_{i}\right) - {\xi }_{\varphi \left( i\right) }}\end{Vmatrix} \leq \varepsilon \;\left( {1 \leq i \leq m}\right) }\right\} . \] Then for each \( f \in \mathcal{F} \) there exists \( \varphi \in \Phi \) such that \( f \in {S}_{\varphi } \) . Since there are only finitely many of the sets \( {S}_{\varphi } \) (some of which may be empty), to complete the proof that \( \mathcal{F} \) is totally bounded it suffices to prove that the diameter of each \( {S}_{\varphi } \) is at most \( {4\varepsilon } \) . To this end, consider any \( \varphi \in \Phi \) and any two elements \( f, g \) of \( {S}_{\varphi } \) . Given \( x \in X \), choose \( i \) such that \( x \in {U}_{{x}_{i}} \) . Then \( \begin{Vmatrix}
inite \( \varepsilon \) -approximation to \( K \) . On the other hand, let \( \Phi \) be the finite set of all mappings of \( \{ 1,\ldots, m\} \) into \( \{ 1,\ldots, n\} \), and for each \( \varphi \in \Phi \) let \[ {S}_{\varphi } = \left\{ {f \in \mathcal{F} : \begin{Vmatrix}{f\left( {x}_{i}\right) - {\xi }_{\varphi \left( i\right) }}\end{Vmatrix} \leq \varepsilon \;\left( {1 \leq i \leq m}\right) }\right\} . \] Then for each \( f \in \mathcal{F} \) there exists \( \varphi \in \Phi \) such that \( f \in {S}_{\varphi } \) . Since there are only finitely many of the sets \( {S\varphi } \) (some of which may be empty), to complete the proof that \( \mathcal{F} \) is totally bounded it suffices to prove that the diameter of each \( {S}_{\varphi } \) is at most \( {4\varepsilon } \) . To this end, consider any \( \varphi \in \Phi \) and any two elements \( f, g \) of \( {S}_{\varphi } \) . Given \( x \in X \), choose \( i \) such that \( x \in {U}_{{x}_{i}} \) . Then \( \begin{Vmatrix}{f\left( x\right) - f\left( {x}_{i}\right) }\end{Vmatrix} \leq \varepsilon \) and \( \begin{Vmatrix}{g\left( x\right) - g\left( {x}_{i}\right) }\end{Vmatrix} \leq \varepsilon \) . But \( \begin{Vmatrix}{f\left( {x}_{i}\right) - {\xi }_{\varphi \left( i\right) }}\end{Vmatrix} \leq \varepsilon \) and \( \begin{Vmatrix}{g\left( {x}_{i}\right) - {\xi }_{\varphi \left( i\right) }}\end{Vmatrix} \leq \varepsilon \) ; two applications of the triangle inequality show, in turn, that \( \begin{Vmatrix}{f\left( {x}_{i}\right) - g\left( {x}_{i}\right) }\end{Vmatrix} \leq {2\varepsilon } \) and \( \parallel f\left( x\right) - g\left( x\right) \parallel \leq {4\varepsilon } \) . Since \( x \in X \) is arbitrary, it follows that \( \parallel f - g\parallel \leq {4\varepsilon } \) ; whence \( \operatorname{diam}\left( {S}_{\varphi }\right) \leq {4\varepsilon } \) . ## (4.5.9) Exercises .1 Let \( X \) be compact, and let \( \left( {f}_{n}\right) \) be a bounded equicontinuous sequence of mappings of \( X \) into \( Y \) . Prove that there exists a subsequence \( {\left( {f}_{{n}_{k}}\right) }_{k = 1}^{\infty } \) such that \( {\left( {f}_{{n}_{k}}\left( x\right) \right) }_{k = 1}^{\infty } \) converges for each \( x \in X \) . (Let \( \left( {x}_{n}\right) \) be a dense sequence in \( X \) . Setting \( {f}_{0, n} = {f}_{n} \), construct sequences \( {\left( {f}_{i, n}\right) }_{n = 1}^{\infty }\left( {i = 0,1,\ldots }\right) \) such that for all \( i \) and \( n \) , (i) \( \left( {f}_{i + 1, n}\right) \) is a subsequence of \( \left( {f}_{i, n}\right) \) and (ii) \( {\left( {f}_{i, n}\left( {x}_{i}\right) \right) }_{n = 1}^{\infty } \) converges in \( Y \) . Then show that \( \left( {{f}_{n, n}\left( x\right) }\right) \) converges in \( Y \) for each \( x \in X \) .) Use this result to give another proof of the "if" part of Ascoli's Theorem. .2 Let \( {c}_{0},{c}_{1} > 0 \), and let \( S \) consist of all differentiable functions \( f \) : \( \left\lbrack {0,1}\right\rbrack \rightarrow \mathbf{R} \) such that \( \parallel f\parallel \leq {c}_{0} \) and \( \begin{Vmatrix}{f}^{\prime }\end{Vmatrix} \leq {c}_{1} \) . Prove that \( S \) is a compact subset of \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) . .3 For each positive integer \( n \) and each \( x \geq 0 \) let \[ {f}_{n}\left( x\right) = \sin \sqrt{x + 4{n}^{2}{\pi }^{2}}. 
\] Prove that (i) \( \left( {f}_{n}\right) \) is equicontinuous on \( {\mathbf{R}}^{0 + } \) ; (ii) \( \left( {f}_{n}\right) \) converges simply to 0 on \( {\mathbf{R}}^{0 + } \) ; (iii) \( \left( {f}_{n}\right) \) is not totally bounded in \( {\mathcal{C}}^{\infty }\left( {\mathbf{R}}^{0 + }\right) \) . (For the last part, show that if \( \left( {f}_{n}\right) \) were totally bounded, then it would converge to 0 uniformly on \( {\mathbf{R}}^{0 + } \) .) ## 4.6 The Theorems of Weierstrass and Stone In this section we follow a path from the famous, and widely applicable, approximation theorem of Weierstrass to its remarkable generalisation by Stone. (4.6.1) The Weierstrass Approximation Theorem. If \( I \) is a compact interval in \( \mathbf{R} \), then the set of polynomial functions on \( I \) is dense in \( \mathcal{C}\left( I\right) \) . Thus for each \( f \in \mathcal{C}\left( I\right) \) and each \( \varepsilon > 0 \) there exists a polynomial function \( p \) on \( I \) such that \[ \parallel f - p\parallel = \sup \{ \left| {f\left( x\right) - p\left( x\right) }\right| : x \in I\} < \varepsilon . \] In other words, each element of \( \mathcal{C}\left( I\right) \) can be uniformly approximated, to any degree of accuracy, by polynomial functions. We derive the Weierstrass Approximation Theorem as a simple consequence of a more general theorem about linear operators on \( \mathcal{C}\left( I\right) \) . By a positive linear operator on \( \mathcal{C}\left( X\right) \), where \( X \) is any metric space, we mean a linear mapping \( L : \mathcal{C}\left( X\right) \rightarrow \mathcal{C}\left( X\right) \) such that \( {Lf} \geq 0 \) whenever \( f \geq 0 \) . In the remainder of this section we let \( {p}_{k} \) denote the monomial function \( x \mapsto {x}^{k} \) on \( \mathbf{R} \) . (4.6.2) Korovkin’s Theorem. Let \( I \) be a compact interval in \( \mathbf{R} \), and \( \left( {L}_{n}\right) \) a sequence of positive linear operators on \( \mathcal{C}\left( I\right) \) such that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{L}_{n}{p}_{k} = {p}_{k}\;\left( {k = 0,1,2}\right) . \] Then \( {L}_{n}f \rightarrow f \) for all \( f \in \mathcal{C}\left( I\right) \) . Proof. For each \( t \) in \( I \) let \( {g}_{t} \) be the element of \( \mathcal{C}\left( I\right) \) defined by \[ {g}_{t}\left( x\right) = {\left( t - x\right) }^{2} = {t}^{2}{p}_{0}\left( x\right) - {2t}{p}_{1}\left( x\right) + {p}_{2}\left( x\right) . \] The linearity of \( {L}_{n} \) implies that \[ {L}_{n}{g}_{t} = {t}^{2}{L}_{n}{p}_{0} - {2t}{L}_{n}{p}_{1} + {L}_{n}{p}_{2} \] whence \[ 0 \leq \left( {{L}_{n}{g}_{t}}\right) \left( t\right) \] \[ = {t}^{2}\left( {\left( {{L}_{n}{p}_{0}}\right) \left( t\right) - 1}\right) - {2t}\left( {\left( {{L}_{n}{p}_{1}}\right) \left( t\right) - t}\right) + \left( {\left( {{L}_{n}{p}_{2}}\right) \left( t\right) - {t}^{2}}\right) \] \[ \leq {t}^{2}\begin{Vmatrix}{{L}_{n}{p}_{0} - {p}_{0}}\end{Vmatrix} + \left| {2t}\right| \begin{Vmatrix}{{L}_{n}{p}_{1} - {p}_{1}}\end{Vmatrix} + \begin{Vmatrix}{{L}_{n}{p}_{2} - {p}_{2}}\end{Vmatrix}. \] Since \( {t}^{2} \) and \( \left| {2t}\right| \) are bounded on \( I \), our hypotheses ensure that \( \left( {{L}_{n}{g}_{t}}\right) \left( t\right) \rightarrow \) 0 uniformly on \( I \) as \( n \rightarrow \infty \) . We use this observation shortly. 
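(A concrete instance, added here for orientation rather than taken from the proof: for the Bernstein operators \( {B}_{n} \) used below in the proof of Theorem (4.6.1), the identities of Exercise (4.6.3:2) give \( \left( {{B}_{n}{g}_{t}}\right) \left( t\right) = t\left( {1 - t}\right) /n \leq 1/\left( {4n}\right) \) on \( \left\lbrack {0,1}\right\rbrack \), so in that special case the uniform convergence just noted is explicit.)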
Given \( f \in \mathcal{C}\left( I\right) \) and \( \varepsilon > 0 \), and noting the Uniform Continuity Theorem (Corollary (3.3.13)), choose \( \delta > 0 \) such that if \( x, y \in I \) and \( \left| {x - y}\right| < \delta \) , then \( \left| {f\left( x\right) - f\left( y\right) }\right| < \varepsilon \) . Fix \( t \) in \( I \), and consider any \( x \in I \) . If \( \left| {t - x}\right| \geq \delta \) , then \[ \left| {f\left( t\right) - f\left( x\right) }\right| \leq 2\parallel f\parallel \leq 2\parallel f\parallel \frac{{\left( t - x\right) }^{2}}{{\delta }^{2}} = \frac{2}{{\delta }^{2}}\parallel f\parallel {g}_{t}\left( x\right) . \] It follows from this and our choice of \( \delta \) that \[ \left| {f\left( t\right) - f\left( x\right) }\right| \leq \frac{2}{{\delta }^{2}}\parallel f\parallel {g}_{t}\left( x\right) + \varepsilon \] for all \( x \) in \( I \) . Hence \[ - \varepsilon {p}_{0} - \frac{2}{{\delta }^{2}}\parallel f\parallel {g}_{t} \leq f\left( t\right) {p}_{0} - f \leq \varepsilon {p}_{0} + \frac{2}{{\delta }^{2}}\parallel f\parallel {g}_{t}. \] Since \( {L}_{n} \) is linear and positive, we have \[ - \varepsilon {L}_{n}{p}_{0} - \frac{2}{{\delta }^{2}}\parallel f\parallel {L}_{n}{g}_{t} \leq f\left( t\right) {L}_{n}{p}_{0} - {L}_{n}f \leq \varepsilon {L}_{n}{p}_{0} + \frac{2}{{\delta }^{2}}\parallel f\parallel {L}_{n}{g}_{t}. \] Hence \[ \left| {f\left( t\right) \left( {{L}_{n}{p}_{0}}\right) \left( t\right) - \left( {{L}_{n}f}\right) \left( t\right) }\right| \leq \varepsilon \begin{Vmatrix}{{L}_{n}{p}_{0}}\end{Vmatrix} + \frac{2}{{\delta }^{2}}\parallel f\parallel \left( {{L}_{n}{g}_{t}}\right) \left( t\right) . \] Thus \[ \left| {f\left( t\right) - \left( {{L}_{n}f}\right) \left( t\right) }\right| \] \[ \leq \left| {f\left( t\right) - f\left( t\right) \left( {{L}_{n}{p}_{0}}\right) \left( t\right) }\right| + \left| {f\left( t\right) \left( {{L}_{n}{p}_{0}}\right) \left( t\right) - \left( {{L}_{n}f}\right) \left( t\right) }\right| \] \[ \leq \left| {f\left( t\right) }\right| \left| {1 - \left( {{L}_{n}{p}_{0}}\right) \left( t\right) }\right| + \varepsilon \begin{Vmatrix}{{L}_{n}{p}_{0}}\end{Vmatrix} + \frac{2}{{\delta }^{2}}\parallel f\parallel \left( {{L}_{n}{g}_{t}}\right) \left( t\right) \] \[ \leq \parallel f\parallel \begin{Vmatrix}{{p}_{0} - {L}_{n}{p}_{0}}\end{Vmatrix} + \varepsilon \left( {\begin{Vmatrix}{p}_{0}\end{Vmatrix} + \begin{Vmatrix}{{p}_{0} - {L}_{n}{p}_{0}}\end{Vmatrix}}\right) + \frac{2}{{\delta }^{2}}\parallel f\parallel \left( {{L}_{n}{g}_{t}}\right) \left( t\right) . \] It now follows from our hypotheses, and the observation in the first paragraph of the proof, that \( \begin{Vmatrix}{f - {L}_{n}f}\end{Vmatrix} \rightarrow 0 \) as \( n \rightarrow \infty \) . Proof of the Weierstrass Approximation Theorem. Without loss of generality, take \( I = \left\lbrack {0,1}\right\rbrack \) . For each \( f \in \mathcal{C}\left( I\right) \) and each positive integer \( n \) define the corresponding Bernstein polynomial \( {B}_{n}f \) by \[ \left( {{B}_{n}f}\right) \left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{n}\left( \begin{array}{l} n \\ k \end{array}\right) {x}^{k}{\left( 1 - x\right) }^{n - k}f\left( {k/n}\right) . \] Then \( {B}_{
n} \) is a positive linear operator on \( \mathcal{C}\left( I\right) \) . Routine calculations (with reference to the binomial theorem) show that \( {B}_{n}{p}_{0} = {p}_{0} \), that \( {B}_{n}{p}_{1} = {p}_{1} \) , and that \[ \left( {{B}_{n}{p}_{2}}\right) \left( x\right) = \frac{n - 1}{n}{x}^{2} + \frac{1}{n}x \rightarrow {x}^{2}\text{ as }n \rightarrow \infty . \] It follows from Korovkin’s theorem that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{f - {B}_{n}f}\end{Vmatrix} = 0 \) for each \( f \in \mathcal{C}\left( I\right) \) . ## (4.6.3) Exercises .1 Show that there is no loss of generality in our taking \( I = \left\lbrack {0,1}\right\rbrack \) in the proof of the Weierstrass Approximation Theorem. .2 Prove that \( {B}_{n}{p}_{0} = {p}_{0} \), that \( {B}_{n}{p}_{1} = {p}_{1} \), and that \[ \left( {{B}_{n}{p}_{2}}\right) \left( x\right) = \frac{n - 1}{n}{x}^{2} + \frac{1}{n}x \rightarrow {x}^{2}\text{ as }n \rightarrow \infty . \] .3 Let \( f\left( x\right) = {x}^{3} \) . Calculate \( {B}_{n}\left( f\right) \), and hence prove that \( {B}_{n}\left( f\right) \rightarrow f \) as \( n \rightarrow \infty \) . .4 Prove that if \( p \) is a polynomial function of degree at most \( k \) on \( \left\lbrack {0,1}\right\rbrack \) , then so is \( {B}_{n}\left( p\right) \) for each \( n \) . (Use induction on \( k \) .) .5 Prove that there is only one positive linear operator \( L \) on \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) such that \( L\left( f\right) = f \) for all quadratic polynomial functions. (Use Korovkin’s Theorem.) .6 Suppose that \( f \) and \( {f}^{\prime } \) belong to \( \mathcal{C}\left( I\right) \), where \( I \) is a compact interval. Prove that for each \( \varepsilon > 0 \) there exists a polynomial function \( p \) such that \( \parallel f - p\parallel < \varepsilon \) and \( \begin{Vmatrix}{{f}^{\prime } - {p}^{\prime }}\end{Vmatrix} < \varepsilon \) . (Reduce to the case \( I = \left\lbrack {0,1}\right\rbrack \) . First find a polynomial \( q \) such that \( \left. {\begin{Vmatrix}{{f}^{\prime } - q}\end{Vmatrix} < \varepsilon \text{.}}\right) \) .7 Let \( I \) be a compact interval contained in \( \left( {0,1}\right) \) .
For each \( f \in \mathcal{C}\left( I\right) \) and each \( n \in \mathbf{N} \) define \( {Q}_{n}f \) on \( I \) by \[ \left( {{Q}_{n}f}\right) \left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{n}\left\lfloor {\left( \begin{array}{l} n \\ k \end{array}\right) f\left( {k/n}\right) }\right\rfloor {x}^{k}{\left( 1 - x\right) }^{n - k}, \] where \( \lfloor t\rfloor \) denotes the integer part of \( t \) . Prove that \( \begin{Vmatrix}{{B}_{n}f - {Q}_{n}f}\end{Vmatrix} \rightarrow 0 \) and hence that, on \( I, f \) is the uniform limit of a sequence of polynomials with integer coefficients. .8 A function \( f : \mathbf{R} \rightarrow \mathbf{C} \) is said to be periodic if \[ \alpha = \min \{ \tau > 0 : \forall t \in \mathbf{R}\left( {f\left( {t + \tau }\right) = f\left( t\right) }\right) \} \] exists and is positive, in which case \( \alpha \) is called the period of \( f \) and \( f \) is also said to be \( \alpha \) -periodic. Prove Korovkin’s Theorem for \( {2\pi } \) -periodic functions: let \( I = \left\lbrack {-\pi ,\pi }\right\rbrack \) , let \[ \mathcal{P}\left( I\right) = \{ f \in \mathcal{C}\left( I\right) : f\left( {-\pi }\right) = f\left( \pi \right) \} \] and let \( \left( {L}_{n}\right) \) be a sequence of positive linear operators on \( \mathcal{P}\left( I\right) \) such that \( {L}_{n}f \rightarrow f \) uniformly as \( n \rightarrow \infty \) for \( f = 1 \), cos, and sin; then \( {L}_{n}f \rightarrow f \) uniformly for all \( f \in \mathcal{P}\left( I\right) \) . (Write \( z = \cos x \) and apply Theorem (4.6.2).) .9 Although this exercise mentions Fourier series, it does not require any knowledge of Fourier analysis. Let \( I = \left\lbrack {-\pi ,\pi }\right\rbrack \) . For each \( f \in \mathcal{P}\left( I\right) \) the \( k \) th partial sum of the Fourier series of \( f \) is \[ \left( {{S}_{k}f}\right) \left( x\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{n = 1}}^{k}\left( {{a}_{n}\cos {nx} + {b}_{n}\sin {nx}}\right) , \] where for \( n \geq 1 \) , \[ {a}_{n} = \frac{1}{\pi }{\int }_{-\pi }^{\pi }f\left( t\right) \cos {nt}\mathrm{\;d}t \] \[ {b}_{n} = \frac{1}{\pi }{\int }_{-\pi }^{\pi }f\left( t\right) \sin {nt}\mathrm{\;d}t \] The \( n \) th Cesàro mean of the Fourier series of \( f \) is \[ {G}_{n}f = \frac{1}{n}\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{S}_{k}f \] Prove that \[ \left( {{G}_{n}f}\right) \left( x\right) = \frac{1}{2n\pi }{\int }_{-\pi }^{\pi }f\left( {t + x}\right) {\left( \frac{\sin \frac{1}{2}{nt}}{\sin \frac{1}{2}t}\right) }^{2}\mathrm{\;d}t \] and hence that \( {G}_{n} \) is a positive linear operator on \( \mathcal{P}\left( I\right) \) . Then prove that for each \( f \in \mathcal{P}\left( I\right) ,\left( {{G}_{n}f}\right) \) converges to \( f \) uniformly on \( I \) . (Use the preceding exercise.) The following result, which was proved by Müntz in 1914, is an interesting generalisation of the Weierstrass Approximation Theorem. Let \( {\left( {\lambda }_{n}\right) }_{n = 1}^{\infty } \) be a sequence in \( \lbrack 1,\infty ) \) that diverges to \( \infty \) . Then span \( \left\{ {1,{x}^{{\lambda }_{1}},{x}^{{\lambda }_{2}},\ldots }\right\} \) is dense in \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) if and only if the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }1/{\lambda }_{n} \) diverges to \( \infty \) . An elementary proof of Müntz's Theorem can be found on pages 193-198 of [10]. Here is a very recent generalisation of Müntz’s Theorem, due to P. Borwein and T. Erdélyi [6].
Let \( {\left( {\lambda }_{n}\right) }_{n = 1}^{\infty } \) be a sequence of distinct positive real numbers. Then \( \operatorname{span}\left\{ {1,{x}^{{\lambda }_{1}},{x}^{{\lambda }_{2}},\ldots }\right\} \) is dense in \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) if and only if \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}/\left( {{\lambda }_{n}^{2} + 1}\right) \) diverges to \( \infty \) . A different, more abstract, generalisation of Theorem (4.6.1) was given by Stone in 1937. In this generalisation we let \( X \) be a compact metric space; we consider \( \mathcal{C}\left( X\right) \) as an algebra, with the pointwise operations of addition, multiplication, and multiplication-by-scalars; and we are interested in dense subalgebras of \( \mathcal{C}\left( X\right) \) . The property introduced in the next definition plays a key role in the proof of the Stone-Weierstrass theorem. We say that a set \( \mathcal{A} \) of real-valued functions on a metric space \( X \) separates the points of \( X \) if for each pair \( x, y \) of distinct points of \( X \) there exists \( f \in \mathcal{A} \) such that \( f\left( x\right) \neq f\left( y\right) \) . (4.6.4) The Stone-Weierstrass Theorem. Let \( X \) be a compact metric space, and \( \mathcal{A} \) a subalgebra of \( \mathcal{C}\left( X\right) \) that contains the constant functions and separates the points of \( X \) . Then \( \mathcal{A} \) is dense in the Banach space \( \mathcal{C}\left( X\right) \) . The next two lemmas lead us to the proof of this theorem. (4.6.5) Lemma. Under the hypotheses of Theorem (4.6.4), if \( \varphi ,\psi \in \mathcal{A} \) , then \( \varphi \land \psi \) and \( \varphi \vee \psi \) belong to the closure of \( \mathcal{A} \) in \( \mathcal{C}\left( X\right) \) . Proof. Given \( f \in \mathcal{A} \) and \( \varepsilon > 0 \), first apply the Weierstrass Approximation Theorem (4.6.1) to construct a polynomial function \( p \) such that \[ \left| \right| t\left| {-p\left( t\right) }\right| < \varepsilon \;\left( {0 \leq t \leq \parallel f\parallel }\right) . \] Then \[ \left| \right| f\left( x\right) \left| {-p \circ f\left( x\right) }\right| < \varepsilon \;\left( {x \in X}\right) . \] Since \( \varepsilon \) is arbitrary, we see that \( \left| f\right| \in \overline{\mathcal{A}} \) . The desired conclusion now follows by taking \( f = \left| {\varphi - \psi }\right| \) and noting the identities \[ \varphi \land \psi = \frac{1}{2}\left( {\varphi + \psi - \left| {\varphi - \psi }\right| }\right) , \] \[ \varphi \vee \psi = \frac{1}{2}\left( {\varphi + \psi + \left| {\varphi - \psi }\right| }\right) . \] 口 (4.6.6) Lemma. Under the hypotheses of Theorem (4.6.4), for each pair \( x, y \) of distinct points of \( X \) and each pair \( a, b \) of real numbers, there exists \( g \in \mathcal{A} \) such that \( g\left( x\right) = a \) and \( g\left( y\right) = b \) . Proof. Since \( \mathcal{A} \) separates the points of \( X \), there exists \( h \in \mathcal{A} \) such that \( h\left( x\right) \neq h\left( y\right) \) . Define \[ g\left( t\right) = a + \left( {b - a}\right) \frac{h\left( t\right) - h\left( x\right) }{h\left( y\right) - h\left( x\right) }. \] Since \( \mathcal{A} \) contains the
constant functions and is an algebra, \( g \in \mathcal{A} \) . Clearly, \( g\left( x\right) = a \) and \( g\left( y\right) = b \) . Proof of the Stone-Weierstrass Theorem. Given \( f \in \mathcal{C}\left( X\right) \) and \( \varepsilon > 0 \), we need only show that there exists \( h \in \overline{\mathcal{A}} \) such that \( \parallel f - h\parallel < \varepsilon \) . To this end, for each \( g \) in \( \overline{\mathcal{A}} \) define \[ U\left( g\right) = \{ x \in X : g\left( x\right) < f\left( x\right) + \varepsilon \} , \] \[ L\left( g\right) = \{ x \in X : g\left( x\right) > f\left( x\right) - \varepsilon \} \] and note that, as \( g \) is continuous, \( U\left( g\right) \) and \( L\left( g\right) \) are open sets. It follows from Lemma (4.6.6) that for each \( t \in X \) the sets \( U\left( g\right) \), with \( g \in \mathcal{A} \) and \( g\left( t\right) = f\left( t\right) \), form an open cover of \( X \) . Since \( X \) is compact, we can extract a finite subcover \( \left\{ {U\left( {g}_{1}\right) ,\ldots, U\left( {g}_{n}\right) }\right\} \) of \( X \) . Define \[ {h}_{t} = {g}_{1} \land {g}_{2} \land \cdots \land {g}_{n} \] Then \( {h}_{t} \in \overline{\mathcal{A}} \), by Lemma (4.6.5); \( {h}_{t}\left( x\right) < f\left( x\right) + \varepsilon \) for each \( x \in X \) ; and \( {h}_{t}\left( t\right) = f\left( t\right) \), so \( t \in L\left( {h}_{t}\right) \) . Thus \( {\left( L\left( {h}_{t}\right) \right) }_{t \in X} \) is an open cover of \( X \), from which we can extract a finite subcover, say \( \left\{ {L\left( {h}_{{t}_{1}}\right) ,\ldots, L\left( {h}_{{t}_{m}}\right) }\right\} \) . Then the function \[ h = {h}_{{t}_{1}} \vee {h}_{{t}_{2}} \vee \cdots \vee {h}_{{t}_{m}} \] belongs to \( \overline{\mathcal{A}} \), by Lemma (4.6.5); also, \[ f\left( x\right) - \varepsilon < h\left( x\right) < f\left( x\right) + \varepsilon \] for each \( x \in X \), so \( \parallel f - h\parallel < \varepsilon \) . It is simple to verify that the Weierstrass Approximation Theorem is the special case of the Stone-Weierstrass Theorem in which the algebra \( \mathcal{A} \) consists of all polynomial functions on the compact interval \( I \) . Since the polynomial functions on \( I \) with rational coefficients form a countable dense set in this algebra \( \mathcal{A} \), we see that \( \mathcal{C}\left( I\right) \) is a separable metric space; this is a special case of the following more general corollary of the Stone-Weierstrass Theorem. ## (4.6.7) Corollary.
If \( X \) is a compact metric space, then the Banach space \( \mathcal{C}\left( X\right) \) is separable. Proof. Let \( \left( {x}_{n}\right) \) be a dense sequence in \( X \), and for each positive integer \( k \) write \[ {f}_{n, k}\left( t\right) = \rho \left( {t, X \smallsetminus B\left( {{x}_{n},{k}^{-1}}\right) }\right) . \] The set \( \mathcal{S} \) of all functions of the form \[ {f}_{{n}_{1},{k}_{1}}^{{\alpha }_{1}}{f}_{{n}_{2},{k}_{2}}^{{\alpha }_{2}}\cdots {f}_{{n}_{i},{k}_{i}}^{{\alpha }_{i}} \] with each \( {\alpha }_{k} \) a nonnegative integer, is countable. Hence the subspace \( \mathcal{A} \) of \( \mathcal{C}\left( X\right) \) generated by \( \mathcal{S} \) is separable (see the paragraph immediately preceding Proposition (4.3.8)). So to complete the proof we need only show that \( \mathcal{A} \) is dense in \( \mathcal{C}\left( X\right) \) . Since \( \mathcal{A} \) is a subalgebra of \( \mathcal{C}\left( X\right) \), if \( \mathcal{S} \) separates the points of \( X \) we can invoke the Stone-Weierstrass Theorem. But for each pair \( x, y \) of distinct points of \( X \) we can choose \( n, k \) such that \( x \in B\left( {{x}_{n},{k}^{-1}}\right) \) and \( y \in X \smallsetminus B\left( {{x}_{n},{k}^{-1}}\right) \) . We then have \( {f}_{n, k}\left( x\right) \neq 0 \) (as \( X \smallsetminus B\left( {{x}_{n},{k}^{-1}}\right) \) is closed) and \( {f}_{n, k}\left( y\right) = 0 \) . ## (4.6.8) Exercises .1 Let \( f \) be a strictly increasing continuous function on \( I = \left\lbrack {0,1}\right\rbrack \) . Prove that the subalgebra of \( \mathcal{C}\left( I\right) \) generated by \( \{ 1, f\} \) is dense in \( \mathcal{C}\left( I\right) \) . .2 Let \( X \) be a compact metric space containing at least two points, and let \( \mathcal{A} \) be the subalgebra of \( \mathcal{C}\left( X\right) \) generated by the family \[ {\left( t \mapsto \rho \left( t, x\right) \right) }_{x \in X}. \] Prove that \( \mathcal{A} \) is dense in \( \mathcal{C}\left( X\right) \) . .3 Define a sequence \( \left( {u}_{n}\right) \) of polynomial functions on \( \mathbf{R} \) inductively, as follows. \[ {u}_{1}\left( t\right) = 0 \] \[ \begin{matrix} {u}_{n + 1}\left( t\right) & = & {u}_{n}\left( t\right) + \frac{1}{2}\left( {t - {u}_{n}{\left( t\right) }^{2}}\right) . \end{matrix} \] Prove that \( {u}_{n} \) maps \( \left\lbrack {0,1}\right\rbrack \) into \( \left\lbrack {0,1}\right\rbrack \), and that the sequence \( {\left( {u}_{n}\left( t\right) \right) }_{n = 1}^{\infty } \) converges uniformly to \( \sqrt{t} \) on \( \left\lbrack {0,1}\right\rbrack \) . Hence prove that if \( \mathcal{A} \) is a subalgebra of \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) and \( f \in \mathcal{A} \), then \( \left| f\right| \in \overline{\mathcal{A}} \) . This proof can be used to eliminate the reference to the Weierstrass Approximation Theorem from the proof of the Stone-Weierstrass Theorem, thereby making the former a genuine corollary of the latter. .4 Let \( I \) be a compact interval in \( \mathbf{R} \), and \( f \) a continuous mapping of the rectangle \( I \times I \) into \( \mathbf{R} \) . Prove that for each \( \varepsilon > 0 \) there exists a polynomial \[ p\left( {x, y}\right) = \mathop{\sum }\limits_{{j, k = 0}}^{n}{a}_{j, k}{x}^{j}{y}^{k} \] such that \[ \mathop{\sup }\limits_{{x, y \in I}}\left| {f\left( {x, y}\right) - p\left( {x, y}\right) }\right| < \varepsilon . 
\] .5 Prove the Complex Stone-Weierstrass Theorem: let \( X \) be a compact metric space, and \( \mathcal{A} \) a subalgebra of \( \mathcal{C}\left( {X,\mathbf{C}}\right) \) that contains the constant functions, separates the points of \( X \), and is closed under complex conjugation (so that \( {f}^{ * } \in \mathcal{A} \) whenever \( f \in \mathcal{A} \), where \( {f}^{ * }\left( x\right) = \) \( \left. {f{\left( x\right) }^{ * }}\right) \) ; then \( \mathcal{A} \) is dense in \( \mathcal{C}\left( {X,\mathbf{C}}\right) \) . Can we remove the hypothesis that \( \mathcal{A} \) is closed under complex conjugation? .6 Use the Stone-Weierstrass Theorem to prove that each \( {2\pi } \) -periodic continuous function \( f : \mathbf{R} \rightarrow \mathbf{C} \) is a uniform limit of a sequence of trigonometric polynomials of the form \[ t \mapsto \mathop{\sum }\limits_{{n = - N}}^{N}\left( {{a}_{n}\sin {nt} + {b}_{n}\cos {nt}}\right) \] where the coefficients \( {a}_{n},{b}_{n} \) belong to \( \mathbf{C} \) (cf. Exercise (4.6.3:9). Let \( \mathcal{S} \) be the set of \( {2\pi } \) -periodic elements of \( {\mathcal{C}}^{\infty }\left( {\mathbf{R},\mathbf{C}}\right) \) . First note that \[ F\left( {e}^{it}\right) = f\left( t\right) \] defines an isometric isomorphism of \( \mathcal{S} \) with \( \mathcal{C}\left( {\mathbf{T},\mathbf{C}}\right) \), where \[ \mathbf{T} = \{ z \in \mathbf{C} : \left| z\right| = 1\} \] is the unit circle in the complex plane.) .7 Let \( I \) be a compact interval, and \( p \geq 1 \) . Prove that the Banach space \( {L}_{p}\left( I\right) \) is separable. Prove also that \( {L}_{p}\left( \mathbf{R}\right) \) is separable. (First use Exercise (2.3.10) to prove that \( \mathcal{C}\left( I\right) \) is dense in \( {L}_{p}\left( I\right) \) .) ## 4.7 Fixed Points and Differential Equations In this final section of the chapter we show how various ideas that have appeared in the earlier sections are used to establish the existence of a solution \( \varphi \) of the first-order ordinary differential equation \( {\varphi }^{\prime }\left( x\right) = f\left( {x,\varphi \left( x\right) }\right) \) on a compact interval. In order to do this, we first introduce a fundamental fixed-point theorem. Let \( X \) and \( Y \) be metric spaces, and \( f \) a mapping of \( X \) into \( Y \) . We say that \( f \) satisfies a Lipschitz condition, or is a Lipschitz mapping, if there exists a constant \( c > 0 \) such that \( \rho \left( {f\left( x\right), f\left( y\right) }\right) \leq {c\rho }\left( {x, y}\right) \) for all \( x, y \) in \( X \) ; \( c \) is then called a Lipschitz constant for \( f \), and \( f \) is said to be Lipschitz of order \( c \) . In the special case where \( 0 < c < 1, f \) is called a contraction mapping of \( X \) into \( Y \) . A Lipschitz map is uniformly continuous (Exercise (3.2.11: 6)). A mapping of a metric space \( X \) into itself is called a self-map. By a fixed point of a self-map \( f : X \rightarrow X \) we mean a point \( \xi
\in X \) such that \( f\left( \xi \right) = \xi \) . ## (4.7.1) Exercises .1 Let \[ p\left( {x, y}\right) = \mathop{\sum }\limits_{{j, k = 0}}^{n}{a}_{j, k}{x}^{j}{y}^{k} \] be a polynomial function of two variables \( x, y \) . Prove that \( p \) satisfies a Lipschitz condition on any bounded subset of \( {\mathbf{R}}^{2} \) . .2 Let \( f \) be a mapping of a metric space \( X \) into itself, and define the iterates of \( f \) inductively: for each \( x \in X \) , \[ {f}^{n}\left( x\right) = \left\{ \begin{array}{ll} x & \text{ if }n = 0 \\ f\left( {{f}^{n - 1}\left( x\right) }\right) & \text{ if }n \in {\mathbf{N}}^{ + }. \end{array}\right. \] Prove that if, for some positive integer \( N,{f}^{N} \) has a unique fixed point \( \xi \), then \( \xi \) is a fixed point of \( f \), and \( f \) has no other fixed point. Fixed points play an important role in many applications of mathematics, including the solution of differential equations and the existence of economic equilibria [51]. Many of these applications depend on our next result, Banach's Contraction Mapping Theorem. (4.7.2) Theorem. A contraction mapping of a nonempty complete metric space into itself has a unique fixed point. Proof. Let \( X \) be a nonempty complete metric space, \( f \) a contraction mapping of \( X \) into itself, and \( c \in \left( {0,1}\right) \) a Lipschitz constant for \( f \) . Choose \( {x}_{0} \) in \( X \), and define a sequence \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) inductively by setting \( {x}_{n} = f\left( {x}_{n - 1}\right) \) . For each \( k \geq 1 \) we have \[ \rho \left( {{x}_{k},{x}_{k + 1}}\right) = \rho \left( {f\left( {x}_{k - 1}\right), f\left( {x}_{k}\right) }\right) \] \[ \leq {c\rho }\left( {{x}_{k - 1},{x}_{k}}\right) \] \[ \leq \cdots \] \[ \leq {c}^{k}\rho \left( {{x}_{0},{x}_{1}}\right) \] So if \( m > n \geq 1 \), then \[ \rho \left( {{x}_{n},{x}_{m}}\right) \leq \mathop{\sum }\limits_{{k = n}}^{{m - 1}}\rho \left( {{x}_{k},{x}_{k + 1}}\right) \] \[ \leq \mathop{\sum }\limits_{{k = n}}^{{m - 1}}{c}^{k}\rho \left( {{x}_{0},{x}_{1}}\right) \] \[ \leq \rho \left( {{x}_{0},{x}_{1}}\right) \mathop{\sum }\limits_{{k = n}}^{\infty }{c}^{k} \] \[ = \rho \left( {{x}_{0},{x}_{1}}\right) \frac{{c}^{n}}{1 - c} \rightarrow 0\text{ as }n \rightarrow \infty . \] Hence \( \left( {x}_{n}\right) \) is a Cauchy sequence in the complete space \( X \) .
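As a brief aside (ours, not the text's), the geometric rate just derived is easy to observe numerically. The sketch below iterates the particular contraction \( f = \cos \) on the complete space \( \left\lbrack {0,1}\right\rbrack \), which by the Mean Value Theorem has Lipschitz constant \( \sin 1 < 1 \), and compares the error \( \rho \left( {{x}_{n},\xi }\right) \) with the a priori bound \( \rho \left( {{x}_{0},{x}_{1}}\right) {c}^{n}/\left( {1 - c}\right) \) obtained by letting \( m \rightarrow \infty \) in the estimate above.

```python
import math

# Illustration only: cos maps [0,1] into [cos 1, 1], a subset of [0,1],
# and |cos x - cos y| <= sin(1)|x - y| there, so it is a contraction.
f = math.cos
c = math.sin(1.0)

x0 = 0.0
x1 = f(x0)
d01 = abs(x1 - x0)            # rho(x_0, x_1)

xi = x0
for _ in range(200):          # a good stand-in for the fixed point xi
    xi = f(xi)

x = x0
for n in range(1, 11):
    x = f(x)                  # x_n = f(x_{n-1})
    bound = d01 * c**n / (1 - c)
    print(n, abs(x - xi), bound)   # observed error never exceeds the bound
```

The printed errors decrease geometrically, as the estimate in the proof predicts.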
Let \( \xi \) be its limit in \( X \) ; then \[ \xi = \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n} = \mathop{\lim }\limits_{{n \rightarrow \infty }}f\left( {x}_{n - 1}\right) = f\left( \xi \right) . \] Thus \( \xi \) is a fixed point of \( f \) . Now suppose that \( \eta \) is a fixed point of \( f \) distinct from \( \xi \) . Then \[ \rho \left( {\xi ,\eta }\right) = \rho \left( {f\left( \xi \right), f\left( \eta \right) }\right) \leq {c\rho }\left( {\xi ,\eta }\right) < \rho \left( {\xi ,\eta }\right) , \] which is absurd. Hence \( \xi \) is the unique fixed point of \( f \) in \( X \) . Recall that a self-map \( f \) of a metric space \( X \) is said to be contractive if \( \rho \left( {f\left( x\right), f\left( y\right) }\right) < \rho \left( {x, y}\right) \) for all distinct \( x, y \) in \( X \) ; and that, according to Edelstein's Theorem (Exercise (3.3.7: 4)), a contractive self-map of a compact metric space has a unique fixed point. The next exercise shows that we can neither remove the compactness hypothesis from Edelstein's Theorem nor replace the word "contraction" by "contractive" in the hypotheses of Banach's Contraction Mapping Theorem. ## (4.7.3) Exercises .1 Let \( B \) be the unit ball in the Banach space \( {c}_{0} \), and for each positive integer \( n \) let \( {e}_{n} \) be the element of \( {c}_{0} \) whose \( n \) th term is 1 and all of whose other terms are 0 . Show that there is a unique linear mapping \( u : {c}_{0} \rightarrow {c}_{0} \) such that \[ u\left( {e}_{n}\right) = \left( {1 - \frac{1}{{2}^{n}}}\right) {e}_{n + 1} \] for each \( n \) . Then show that \[ v\left( x\right) = \frac{1}{2}\left( {1 + \parallel x\parallel }\right) {e}_{1} + u\left( x\right) \] defines a contractive map of \( B \) into itself such that \( v\left( x\right) \neq x \) for each \( x \in B \) . (For the last part note that \( \mathop{\prod }\limits_{{k = 1}}^{n}\left( {1 - {2}^{-k}}\right) \geq 1 - \mathop{\sum }\limits_{{k = 1}}^{n}{2}^{-k} \) .) .2 Let \( X, Y \) be Banach spaces over \( \mathbf{F}, U \) the open ball in \( X \) with centre 0 and radius \( a \), and \( V \) the open ball in \( Y \) with centre 0 and radius \( b \) . Let \( 0 \leq c < 1 \), and let \( \varphi : U \times V \rightarrow Y \) be a continuous mapping such that for all \( x \in U \) , (i) \( \begin{Vmatrix}{\varphi \left( {x,{y}_{1}}\right) - \varphi \left( {x,{y}_{2}}\right) }\end{Vmatrix} \leq c\begin{Vmatrix}{{y}_{1} - {y}_{2}}\end{Vmatrix} \) for all \( {y}_{1},{y}_{2} \in V \), and (ii) \( \parallel \varphi \left( {x,0}\right) \parallel < b\left( {1 - c}\right) \) . Show that there exists a unique mapping \( f : U \rightarrow V \) such that \( f\left( x\right) = \varphi \left( {x, f\left( x\right) }\right) \) for all \( x \in U \), and that \( f \) is continuous on \( U \) . (For each \( x \in U \) define \[ {f}_{0}\left( x\right) = 0 \] \[ {f}_{n + 1}\left( x\right) = \varphi \left( {x,{f}_{n}\left( x\right) }\right) . \] Show that \( {f}_{n} \) is a continuous mapping of \( U \) into \( V \), that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {{f}_{n} - {f}_{n - 1}}\right) \) converges absolutely in the Banach space \( \mathcal{B}\left( {U,\mathbf{F}}\right) \) , and that its sum is the required function \( f \) .) .3 Let \( Y \) be a Banach space, \( {y}_{0} \in Y, V = B\left( {{y}_{0}, b}\right) \subset Y \), and \( 0 \leq c < 1 \) . 
Let \( v \) be a mapping of \( V \) into \( Y \) such that (i) \( \begin{Vmatrix}{v\left( {y}_{1}\right) - v\left( {y}_{2}\right) }\end{Vmatrix} \leq c\begin{Vmatrix}{{y}_{1} - {y}_{2}}\end{Vmatrix} \) for all \( {y}_{1},{y}_{2} \in V \), and (ii) \( \begin{Vmatrix}{v\left( {y}_{0}\right) - {y}_{0}}\end{Vmatrix} < b\left( {1 - c}\right) \) . Prove that \( v \) has a unique fixed point in \( V \) . .4 Let \( X \) be a metric space such that each continuous self-map of a closed subset of \( S \) has a fixed point. Prove that \( X \) is complete. (Suppose the contrary, and choose a Cauchy sequence \( \left( {x}_{n}\right) \) in \( X \) that does not converge in \( X \) . Assuming, without loss of generality, that \( {x}_{i} \neq {x}_{j} \) whenever \( i \neq j \), for each \( x \in X \) let \[ {\alpha }_{x} = \inf \left\{ {\rho \left( {x,{x}_{n}}\right) : x \neq {x}_{n}}\right\} . \] Show that \( {\alpha }_{x} > 0 \) . Next, let \( 0 < r < 1 \), set \( \sigma \left( 0\right) = 0 \), and define \( \sigma \left( n\right) \) inductively such that \( \sigma \left( n\right) > \sigma \left( {n - 1}\right) \) and \[ \rho \left( {{x}_{i},{x}_{j}}\right) \leq r{\alpha }_{{x}_{{\sigma }_{\left( n - 1\right) }}}\;\left( {i, j \geq \sigma \left( n\right) }\right) . \] Let \( S = \left\{ {{x}_{\sigma \left( n\right) } : n \geq 1}\right\} \) and \( f\left( {x}_{\sigma \left( n\right) }\right) = {x}_{\sigma \left( {n + 1}\right) } \) .) .5 Let \( a, b \) be real numbers with \( 0 < b < 1 \), and let \( X \) be the set of all continuous mappings \( f : \left\lbrack {0, b}\right\rbrack \rightarrow \mathbf{R} \) such that \( f\left( 0\right) = a \) (so, according to Exercise (4.5.6: 6), \( X \) is a Banach space relative to the sup norm). Define a mapping \( T \) on \( X \) by \[ \left( {Tf}\right) \left( t\right) = a + {\int }_{0}^{t}\left| {f\left( x\right) }\right| \mathrm{d}x\;\left( {0 \leq t \leq b}\right) . \] Prove that \( T \) is a contraction mapping of \( X \) into itself, and hence that there exists a unique \( f \in X \) that is differentiable and satisfies \( {f}^{\prime } = \left| f\right| \) on the interval \( \left( {0, b}\right) \) . A function \( \varphi \) is said to be continuously differentiable on an interval \( I \) of \( \mathbf{R} \) if \( {\varphi }^{\prime } \) exists and is continuous on \( I \) . We now use Theorem (4.7.2) to prove the first of two theorems about the existence of solutions of ordinary differential equations, thereby generalising the work of Exercise (4.7.3: 5). (4.7.4) Picard’s Theorem. Let \( K \) be the rectangle \[ \left\{ {\left( {x, y}\right) \in {\mathbf{R}}^{2} : \left| {x - {x}_{0}}\right| \leq a,\left| {y - {y}_{0}}\right| \leq b}\right\} \] where \( a, b > 0 \) . Let \( f : K \rightarrow \mathbf{R} \) be a continuous mapping such that there exists \( c > 0 \) with \[ \left| {f\left( {x,{y}_{1}}\right) - f\left( {x,{y}_{2}}\right) }\right| \leq c\left| {{y}_{1} - {y}_{2}}\right| \] for all applicable \( x,{y}_{1},{y}_{2} \) (in other words, \( f \) satisfies a Lipschitz condition in its second variable). Let \[ M = \mathop{\sup }\limits_
{{\left( {x, y}\right) \in K}}\left| {f\left( {x, y}\right) }\right| \] and \[ h = \left\{ \begin{array}{ll} \min \left\{ {a,\frac{b}{M}}\right\} & \text{ if }M > 0 \\ a & \text{ if }M = 0. \end{array}\right. \] Then there exists a unique continuously differentiable mapping \( \varphi \) on the interval \( I = \left\lbrack {{x}_{0} - h,{x}_{0} + h}\right\rbrack \), such that \[ \varphi \left( {x}_{0}\right) = {y}_{0} \] and \[ {\varphi }^{\prime }\left( x\right) = f\left( {x,\varphi \left( x\right) }\right) \text{ for all }x \in I. \] Proof. In view of the version of the Fundamental Theorem of Calculus in Exercise (1.5.14:1), it suffices to find a continuous mapping \( \varphi : I \rightarrow \mathbf{R} \) satisfying \[ \varphi \left( x\right) = {y}_{0} + {\int }_{{x}_{0}}^{x}f\left( {t,\varphi \left( t\right) }\right) \mathrm{d}t \] (1) for all \( x \in I \) . Let \( V \) denote the closed ball with centre \( y \mapsto {y}_{0} \) and radius \( b \) in the Banach space \( \left( {\mathcal{C}\left( I\right) ,\parallel \cdot \parallel }\right) \), where \( \parallel \cdot \parallel \) denotes the sup norm. If \( y \in V \) , then for all \( t \in I \) we have \( \left| {y\left( t\right) - {y}_{0}}\right| \leq b \) and therefore \( \left( {t, y\left( t\right) }\right) \in K \) ; so \[ {F}_{y}\left( x\right) = {y}_{0} + {\int }_{{x}_{0}}^{x}f\left( {t, y\left( t\right) }\right) \mathrm{d}t \] defines a mapping \( {F}_{y} : I \rightarrow \mathbf{R} \) . We see from Exercise (1.5.12:4) that \( {F}_{y} \) satisfies the Lipschitz condition \[ \left| {{F}_{y}\left( x\right) - {F}_{y}\left( {x}^{\prime }\right) }\right| = \left| {{\int }_{{x}^{\prime }}^{x}f\left( {t, y\left( t\right) }\right) \mathrm{d}t}\right| \leq M\left| {x - {x}^{\prime }}\right| \] and is therefore uniformly continuous on \( I \) . Moreover, \[ \left| {{F}_{y}\left( x\right) - {y}_{0}}\right| \leq M\left| {x - {x}_{0}}\right| \leq {Mh} \leq b \] for all \( x \in I \), so \( y \mapsto {F}_{y} \) maps \( V \) into \( V \) . We now endow \( \mathcal{C}\left( I\right) \) not with its usual norm, but with the norm defined by \[ \parallel f{\parallel }^{\prime } = \sup \left\{ {{\mathrm{e}}^{-{2c}\left| {x - {x}_{0}}\right| }\left| {f\left( x\right) }\right| : x \in I}\right\} . \] Recall from Exercise (4.5.6:7) that \( \mathcal{C}\left( I\right) \), and hence \( V \), is complete with respect to the metric \( {\rho }^{\prime } \) associated with this norm.
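A small check, not spelled out here but worth recording: since \( 0 \leq \left| {x - {x}_{0}}\right| \leq h \) on \( I \), the weighting factor lies between \( {\mathrm{e}}^{-{2ch}} \) and 1, so that \[ {\mathrm{e}}^{-{2ch}}\parallel f\parallel \leq \parallel f{\parallel }^{\prime } \leq \parallel f\parallel \;\left( {f \in \mathcal{C}\left( I\right) }\right) . \] Thus \( \parallel \cdot {\parallel }^{\prime } \) is equivalent to the sup norm, and in particular the sup-norm ball \( V \) is closed, and hence complete, relative to \( {\rho }^{\prime } \) as well.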
We prove that \( y \mapsto {F}_{y} \) is a contraction mapping on \( \left( {V,{\rho }^{\prime }}\right) \) . To this end, consider \( {y}_{1},{y}_{2} \in V \) and \( x \in I \) . Taking, for example, the case where \( x \geq {x}_{0} \), we have \[ \left| {{F}_{{y}_{1}}\left( x\right) - {F}_{{y}_{2}}\left( x\right) }\right| \leq {\int }_{{x}_{0}}^{x}\left| {f\left( {t,{y}_{1}\left( t\right) }\right) - f\left( {t,{y}_{2}\left( t\right) }\right) }\right| \mathrm{d}t \] \[ \leq c{\int }_{{x}_{0}}^{x}\left| {{y}_{1}\left( t\right) - {y}_{2}\left( t\right) }\right| \mathrm{d}t \] \[ \leq c{\begin{Vmatrix}{y}_{1} - {y}_{2}\end{Vmatrix}}^{\prime }{\int }_{{x}_{0}}^{x}{\mathrm{e}}^{{2c}\left| {t - {x}_{0}}\right| }\mathrm{d}t \] \[ < \frac{1}{2}{\mathrm{e}}^{{2c}\left| {x - {x}_{0}}\right| }{\begin{Vmatrix}{y}_{1} - {y}_{2}\end{Vmatrix}}^{\prime } \] since \[ {\int }_{{x}_{0}}^{x}{\mathrm{e}}^{{2c}\left( {t - {x}_{0}}\right) }\mathrm{d}t = \frac{1}{2c}\left( {{\mathrm{e}}^{{2c}\left( {x - {x}_{0}}\right) } - 1}\right) . \] It follows that \[ {\begin{Vmatrix}{F}_{{y}_{1}} - {F}_{{y}_{2}}\end{Vmatrix}}^{\prime } < \frac{1}{2}{\begin{Vmatrix}{y}_{1} - {y}_{2}\end{Vmatrix}}^{\prime }\;\left( {{y}_{1},{y}_{2} \in \mathcal{C}\left( I\right) }\right) . \] Applying Banach's Contraction Mapping Theorem (4.7.2), we now obtain a unique element \( \varphi \) of \( V \) satisfying equation (1). A restricted version of Picard's Theorem can be proved by applying the Contraction Mapping Theorem to a certain complete subset of \( \mathcal{C}\left( I\right) \), taken with the usual sup norm; this produces a positive number \( \delta \), which may be smaller than \( h \), and a solution of the differential equation on the interval \( \left\lbrack {{x}_{0} - \delta ,{x}_{0} + \delta }\right\rbrack \) . (See Chapter X of [13].) With a bit more work, it can then be shown that that solution extends to \( I \) (Exercise (4.7.5:4)). The introduction of the norm \( \parallel \cdot {\parallel }^{\prime } \) —a device due to Bielicki [4]—both simplifies the proof and provides, at a stroke, the solution over the whole interval \( I \) . (Note that when \( f \) is only known to be defined on \( K, I \) is the largest interval on which it makes sense to talk about a solution of the differential equation \( \left. {{y}^{\prime } = f\left( {x, y}\right) \text{.}}\right) \) By examining closely the proofs of Theorems (4.7.2) and (4.7.4), we obtain the following iteration scheme for a sequence \( \left( {y}_{n}\right) \) of functions converging to a solution of the differential equation in the preceding theorem. \[ {y}_{0}\left( x\right) = {y}_{0} \] \[ {y}_{n}\left( x\right) = {y}_{0} + {\int }_{{x}_{0}}^{x}f\left( {t,{y}_{n - 1}\left( t\right) }\right) \mathrm{d}t\;\left( {n \geq 1}\right) . \] This scheme can be used in practice, although there are better methods of finding solutions of first-order differential equations of special types. ## (4.7.5) Exercises .1 Apply the foregoing iteration scheme to solve the differential equation \( {y}^{\prime } = y \) on \( \mathbf{R} \) with initial condition \( y\left( 0\right) = 3 \) . .2 Let \[ K = \left\{ {\left( {x, y}\right) \in {\mathbf{R}}^{2} : \left| x\right| \leq a,\left| y\right| \leq b}\right\} , \] where \( a, b \) are positive constants. Let \( f \) be a continuous mapping of \( K \) into \( \mathbf{R} \) such that \( f\left( {x, y}\right) < 0 \) if \( {xy} > 0 \), and \( f\left( {x, y}\right) > 0 \) if \( {xy} < 0 \) . 
Prove that \( x \mapsto 0 \) is the unique solution of the differential equation \( {y}^{\prime } = f\left( {x, y}\right) \) defined in a neighbourhood of 0 and such that \( y\left( 0\right) = 0 \) . (Assume the contrary, and consider, in a compact interval containing 0 , the points where a solution attains its maximum or minimum.) .3 Define \( f : {\mathbf{R}}^{2} \rightarrow \mathbf{R} \) by \[ f\left( {x, y}\right) = \left\{ \begin{matrix} - {2x} & \text{if }y \geq {x}^{2} \\ - \frac{2y}{x} & \text{if }\left| y\right| < {x}^{2} \\ {2x} & \text{if }y \leq - {x}^{2} \end{matrix}\right. \] Define a sequence of functions by setting \( {y}_{0}\left( x\right) = {x}^{2} \) and \[ {y}_{n + 1}\left( x\right) = {\int }_{0}^{x}f\left( {t,{y}_{n}\left( t\right) }\right) \mathrm{d}t. \] Show that for each \( x \neq 0 \) the sequence \( {\left( {y}_{n}\left( x\right) \right) }_{n = 0}^{\infty } \) is not convergent. Comment on this, in the light of Exercise (4.7.5: 2) and the paragraph immediately preceding this set of exercises. .4 Let \( {x}_{0},{y}_{0}, I \), and \( K \) be as in Theorem (4.7.4), and let \( f : K \rightarrow \mathbf{R} \) be a continuous function with the following property: for each \( \left( {\xi ,\eta }\right) \in {K}^{ \circ } \) there exist \( \delta > 0 \) and a unique continuously differentiable function \( y : \left\lbrack {\xi - \delta ,\xi + \delta }\right\rbrack \rightarrow \mathbf{R} \) such that \( y\left( \xi \right) = \eta \) and \( {y}^{\prime }\left( x\right) = f\left( {x, y\left( x\right) }\right) \) whenever \( \left| {x - \xi }\right| \leq \delta \) . Show that there exists a unique continuously differentiable function \( \varphi : I \rightarrow \mathbf{R} \) such that \( \varphi \left( {x}_{0}\right) = {y}_{0} \) and \( {\varphi }^{\prime }\left( x\right) = \) \( f\left( {x,\varphi \left( x\right) }\right) \) for all \( x \in I \) . (Let \( S \) be the set of all positive numbers \( \delta \leq h \) with the property that there exists a continuously differentiable function \( y : \left\lbrack {{x}_{0} - \delta ,{x}_{0} + \delta }\right\rbrack \rightarrow \mathbf{R} \) such that \( y\left( {x}_{0}\right) = {y}_{0} \) and \( {y}^{\prime }\left( x\right) = \) \( f\left( {x, y\left( x\right) }\right) \) whenever \( \left| {x - {x}_{0}}\right| \leq \delta \) . Let \( \sigma = \sup S \), suppose that \( \sigma < h \) , and derive a contradiction.) .5 Let \( I \) be the closed interval \( \left\lbrack {a, b}\right\rbrack \) in \( \mathbf{R} \), and \[ A = \left\{ {\left( {x, y}\right) \in {\mathbf{R}}^{2} : a \leq x \leq y \leq b}\right\} . \] Let the function \( k : I \times I \rightarrow \mathbf{R} \) be continuous on \( A \) and vanish everywhere on \( \left( {I \times I}\right) \smallsetminus A \), and for each \( f \in \mathcal{C}\left( I\right) \) define \( {Tf} : I \rightarrow \mathbf{R} \) by \[ {Tf}\left( t\right) = {\int }_{a}^{t}k\left( {s, t}\right) f\left( s\right) \mathrm{d}s\;\left( {t \in I}\right) . \
] Show that for all sufficiently large \( n,{T}^{n} \) is a contraction mapping of \( \mathcal{C}\left( I\right) \) into itself, and hence that the integral equation \[ f\left( t\right) = g\left( t\right) + {\int }_{a}^{t}k\left( {s, t}\right) f\left( s\right) \mathrm{d}s \] has a unique solution \( f \) in \( \mathcal{C}\left( I\right) \) for each given \( g \in \mathcal{C}\left( I\right) \) . (For the contraction mapping part, show that \[ \left| {{T}^{n}f\left( x\right) - {T}^{n}g\left( x\right) }\right| \leq \frac{{M}^{n}}{n!}{\left( x - a\right) }^{n}\parallel f - g\parallel \] for all \( x \in I \) and \( f, g \in \mathcal{C}\left( I\right) \) .) .6 Taking \( I = \left\lbrack {0,1}\right\rbrack \), use the preceding exercise to find the solution of the integral equation \[ f\left( t\right) = g\left( t\right) + c{\int }_{0}^{t}{\left( t - s\right) }^{3}f\left( s\right) \mathrm{d}s\;\left( {t \in I}\right) , \] where \( c \) is a positive constant and \( g \in \mathcal{C}\left( I\right) \) . .7 Let \( c > 0 \), let \( f \) be a continuous real-valued mapping that satisfies the condition \[ \left| {f\left( {x,{y}_{1}}\right) - f\left( {x,{y}_{2}}\right) }\right| \leq c\left| {{y}_{1} - {y}_{2}}\right| \] on the strip \( \left\lbrack {a, b}\right\rbrack \times \mathbf{R} \) in \( {\mathbf{R}}^{2} \), and let \( \left( {{x}_{0},{y}_{0}}\right) \) be any point of that strip. Prove that the differential equation \( {y}^{\prime } = f\left( {x, y}\right) \) has a unique solution \( y : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbf{R} \) such that \( y\left( {x}_{0}\right) = {y}_{0} \) . (For each \( x \in \left\lbrack {a, b}\right\rbrack \) define \( {y}_{0}\left( x\right) = {y}_{0} \) and \[ {y}_{n + 1}\left( x\right) = {y}_{0} + {\int }_{{x}_{0}}^{x}f\left( {t,{y}_{n}\left( t\right) }\right) \mathrm{d}t. \] Let \[ M = \left| {y}_{0}\right| + \max \left\{ {\left| {{y}_{1}\left( x\right) }\right| : a \leq x \leq b}\right\} . \] Show that the series \[ {y}_{0} + \mathop{\sum }\limits_{{n = 1}}^{\infty }\left( {{y}_{n}\left( x\right) - {y}_{n - 1}\left( x\right) }\right) \] converges uniformly on \( \left\lbrack {a, b}\right\rbrack \) to a sum \( y\left( x\right) \), by comparison with the series \[ M + \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}^{n - 1}M\frac{{\left( b - a\right) }^{n - 1}}{\left( {n - 1}\right) !}. \] Then show that \( y \) is the desired unique solution.)
.8 Let \( K \) be the compact set \[ \left\{ {\left( {x, y}\right) \in \mathbf{R} \times {\mathbf{R}}^{n} : \left| {x - {x}_{0}}\right| \leq a,\begin{Vmatrix}{y - {y}_{0}}\end{Vmatrix} \leq b}\right\} , \] where \( a, b > 0 \) . Let \( f = \left( {{f}_{1},\ldots ,{f}_{n}}\right) \) be a continuous mapping of \( K \) into \( {\mathbf{R}}^{n} \) such that there exists \( c > 0 \) with \[ \begin{Vmatrix}{f\left( {x,{y}_{1}}\right) - f\left( {x,{y}_{2}}\right) }\end{Vmatrix} \leq c\begin{Vmatrix}{{y}_{1} - {y}_{2}}\end{Vmatrix} \] for all applicable \( x,{y}_{1},{y}_{2} \) . Let \[ M = \mathop{\sup }\limits_{{\left( {x, y}\right) \in K}}\parallel f\left( {x, y}\right) \parallel \] and \[ h = \left\{ \begin{array}{ll} \min \left\{ {a,\frac{b}{M}}\right\} & \text{ if }M > 0 \\ a & \text{ if }M = 0. \end{array}\right. \] Prove that there exists a unique mapping \( \varphi = \left( {{\varphi }_{1},\ldots ,{\varphi }_{n}}\right) \) of the interval \( I = \left\lbrack {{x}_{0} - h,{x}_{0} + h}\right\rbrack \) into \( {\mathbf{R}}^{n} \), such that (i) \( \varphi \left( {x}_{0}\right) = {y}_{0} \), and (ii) for each \( k \) the component mapping \( {\varphi }_{k} \) is continuously differentiable and satisfies \( {\varphi }_{k}^{\prime }\left( x\right) = {f}_{k}\left( {x,\varphi \left( x\right) }\right) \) on \( I \) . .9 Let \( p, q \), and \( r \) be continuous real-valued functions on the interval \( \left\lbrack {a, b}\right\rbrack \), let \( {x}_{0} \in \left\lbrack {a, b}\right\rbrack \), and let \( {y}_{0},{y}_{0}^{\prime } \) be real numbers. Use the preceding exercise to prove that there exists a unique function \( y : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbf{R} \) satisfying the differential equation \[ {y}^{\prime \prime } + p\left( x\right) {y}^{\prime } + q\left( x\right) y = r\left( x\right) \] on \( \left\lbrack {a, b}\right\rbrack \), with initial conditions \( y\left( {x}_{0}\right) = {y}_{0} \) and \( {y}^{\prime }\left( {x}_{0}\right) = {y}_{0}^{\prime } \) . Although Picard's Theorem enables us to solve, both in principle and in practice, a large class of differential equations, there are simple examples of differential equations to which it does not apply and yet for which solutions can easily be found. One such example is the equation \( {y}^{\prime } = {y}^{1/3} \) with initial condition \( y\left( 0\right) = 0 \) : this equation has two solutions, namely \( y = 0 \) and \( y = {\left( 2x/3\right) }^{3/2} \), but the function \( \left( {x, y}\right) \mapsto {y}^{1/3} \) does not satisfy a Lipschitz condition at \( \left( {0,0}\right) \) . The final theorem of this chapter covers cases such as this, and provides us with a good application of Ascoli's Theorem and the Stone-Weierstrass Theorem. (4.7.6) Peano's Theorem. Let \( K \) be the rectangle \[ \left\{ {\left( {x, y}\right) \in {\mathbf{R}}^{2} : \left| {x - {x}_{0}}\right| \leq a,\left| {y - {y}_{0}}\right| \leq b}\right\} , \] where \( a, b > 0 \) . Let \( f : K \rightarrow \mathbf{R} \) be a continuous mapping, \[ M = \mathop{\sup }\limits_{{\left( {x, y}\right) \in K}}\left| {f\left( {x, y}\right) }\right| \] and \[ h = \left\{ \begin{array}{ll} \min \left\{ {a,\frac{b}{M}}\right\} & \text{ if }M > 0 \\ a & \text{ if }M = 0. \end{array}\right.
\] Then there exists a continuously differentiable mapping \( \varphi \) on the interval \( I = \left\lbrack {{x}_{0} - h,{x}_{0} + h}\right\rbrack \), such that \[ \varphi \left( {x}_{0}\right) = {y}_{0} \] and \[ {\varphi }^{\prime }\left( x\right) = f\left( {x,\varphi \left( x\right) }\right) \text{ for all }x \in I. \] Proof. Using Exercise (4.6.8: 4), construct a sequence \( \left( {p}_{n}\right) \) of polynomial functions of two variables such that \( \begin{Vmatrix}{f - {p}_{n}}\end{Vmatrix} \leq {2}^{-n} \) for each \( n \), where \( \parallel \cdot \parallel \) denotes the sup norm on \( \mathcal{C}\left( K\right) \) . We may assume that \( \left| {p}_{n}\right| \leq {2M} \) for each \( n \) . By Exercise (4.7.1:1), Picard's Theorem, and the Fundamental Theorem of Calculus, the integral equation \[ y\left( x\right) = {y}_{0} + {\int }_{{x}_{0}}^{x}{p}_{n}\left( {t, y\left( t\right) }\right) \mathrm{d}t \] has a unique solution \( {\varphi }_{n} \) on the interval \( I \) . Exercise (1.5.12:4) shows that for all \( {x}_{1},{x}_{2} \in I \) , \[ \left| {{\varphi }_{n}\left( {x}_{2}\right) - {\varphi }_{n}\left( {x}_{1}\right) }\right| \leq \left| {{\int }_{{x}_{1}}^{{x}_{2}}{p}_{n}\left( {t,{\varphi }_{n}\left( t\right) }\right) \mathrm{d}t}\right| \leq {2M}\left| {{x}_{2} - {x}_{1}}\right| . \] It follows that \( \left( {\varphi }_{n}\right) \) is an equicontinuous sequence in \( \mathcal{C}\left( I\right) \) . Also, for each \( x \in I \) , \[ \left| {{\varphi }_{n}\left( x\right) }\right| \leq \left| {y}_{0}\right| + \left| {{\int }_{{x}_{0}}^{x}{p}_{n}\left( {t,{\varphi }_{n}\left( t\right) }\right) \mathrm{d}t}\right| \leq \left| {y}_{0}\right| + {2M}\left| I\right| \] so \( \left( {\varphi }_{n}\right) \) is a bounded sequence in \( \mathcal{C}\left( I\right) \) . Applying Ascoli’s Theorem (4.5.8), and, if necessary, passing to a subsequence of \( \left( {\varphi }_{n}\right) \), we may now assume that \( \left( {\varphi }_{n}\right) \) converges uniformly on \( I \) to an element \( \varphi \) of \( \mathcal{C}\left( I\right) \) . Since \( f \) is uniformly continuous on \( K \), for each \( \varepsilon > 0 \) there exists \( t > 0 \) such that if \( \left( {{x}_{i},{y}_{i}}\right) \in K \) and \[ \max \left\{ {\left| {{x}_{1} - {x}_{2}}\right| ,\left| {{y}_{1} - {y}_{2}}\right| }\right\} < t \] then \[ \left| {f\left( {{x}_{1},{y}_{1}}\right) - f\left( {{x}_{2},{y}_{2}}\right) }\right| < \varepsilon . \] Choose \( N \) such that for all \( n \geq N \) , \[ \begin{Vmatrix}{\varphi - {\varphi }_{n}}\end{Vmatrix} < \min \{ t,\varepsilon \} \] and \( {2}^{-n} < \varepsilon \) . Consider any \( x \in I \) and any \( n \geq N \) . Note that for each \( t \in I \) , \( \left( {t,\varphi \left( t\right) }\right) \) belongs to the closed set \( K \) and \[ \left| {f\left( {t,\varphi \left( t\right) }\right) - f\left( {t,{\varphi }_{n}\left( t\right) }\right) }\right| < \varepsilon . \] We now have \[ \left| {{\int }_{{x}_{0}}^{x}f\left( {t,\varphi \left( t\right) }\right) \mathrm{d}t - {\int }_{{x}_{0}}^{x}f\left( {t,{\varphi }_{n}\left( t\right) }\right) \mathrm{d}t}\right| \leq \varepsilon \left| {x - {x}_{0}}\right| < \left| I\right| \varepsilon \] and therefore \[ \left| {\varphi
\left( x\right) - {y}_{0} - {\int }_{{x}_{0}}^{x}f\left( {t,\varphi \left( t\right) }\right) \mathrm{d}t}\right| \leq \left| {\varphi \left( x\right) - {\varphi }_{n}\left( x\right) }\right| \] \[ + \left| {{\varphi }_{n}\left( x\right) - {y}_{0} - {\int }_{{x}_{0}}^{x}{p}_{n}\left( {t,{\varphi }_{n}\left( t\right) }\right) \mathrm{d}t}\right| \] \[ + \left| {{\int }_{{x}_{0}}^{x}\left( {{p}_{n}\left( {t,{\varphi }_{n}\left( t\right) }\right) - f\left( {t,{\varphi }_{n}\left( t\right) }\right) }\right) \mathrm{d}t}\right| \] \[ + \left| {{\int }_{{x}_{0}}^{x}\left( {f\left( {t,{\varphi }_{n}\left( t\right) }\right) - f\left( {t,\varphi \left( t\right) }\right) }\right) \mathrm{d}t}\right| \] \[ < \varepsilon + 0 + \left| I\right| \begin{Vmatrix}{f - {p}_{n}}\end{Vmatrix} + \left| I\right| \varepsilon \] \[ = \left( {1 + 2\left| I\right| }\right) \varepsilon \text{.} \] Since \( \varepsilon > 0 \) is arbitrary, we conclude that \[ \varphi \left( x\right) = {y}_{0} + {\int }_{{x}_{0}}^{x}f\left( {t,\varphi \left( t\right) }\right) \mathrm{d}t\;\left( {x \in I}\right) . \] A final application of the Fundamental Theorem of Calculus (see Exercise (1.5.14: 1)) shows that \( \varphi \) is continuously differentiable and satisfies the desired conditions. There are two fundamental differences between Picard's Theorem and Peano's: - in the former the solution is unique, whereas in the latter it need not be; - the proof of Picard's Theorem embodies an algorithm for computing the solution, but Peano's Theorem uses the highly nonconstructive property of sequential compactness and is an intrinsically nonalgorithmic theorem. By an \( \varepsilon \) -approximate solution to the differential equation \[ {y}^{\prime } = f\left( {x, y}\right) ,\;y\left( {x}_{0}\right) = {y}_{0} \] (2) in an interval \( J \) containing \( {x}_{0} \) we mean a mapping \( y : J \rightarrow \mathbf{R} \) with the following properties.
- There exists a partition \( \left( {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right) \) of \( J \) such that \( y \) is continuously differentiable on each of the intervals \( \left\lbrack {{x}_{i},{x}_{i + 1}}\right\rbrack \) ; \( \left| {{y}^{\prime }\left( x\right) - f\left( {x, y\left( x\right) }\right) }\right| \leq \varepsilon \) for all \( x \in \mathop{\bigcup }\limits_{{i = 1}}^{{n - 1}}\left( {{x}_{i},{x}_{i + 1}}\right) \) \( - y\left( {x}_{0}\right) = {y}_{0} \) ## (4.7.7) Exercises .1 Under the hypotheses of Theorem (4.7.6), but without invoking that theorem, show that for each \( \varepsilon > 0 \) there exists an \( \varepsilon \) -approximate solution of (2). (Choose \( \delta > 0 \) such that \( \left| {f\left( {{x}_{1},{y}_{1}}\right) - f\left( {{x}_{2},{y}_{2}}\right) }\right| \leq \varepsilon \) whenever \( \left( {{x}_{i},{y}_{i}}\right) \in K \) and \( \begin{Vmatrix}{\left( {{x}_{1},{y}_{1}}\right) - \left( {{x}_{2},{y}_{2}}\right) }\end{Vmatrix} \leq \delta \) . Take points \( {x}_{0} < \) \( {x}_{1} < \cdots < {x}_{n} = {x}_{0} + h \) such that \( {x}_{i + 1} - {x}_{i} \leq \min \{ \delta ,\delta /M\} \), and construct an \( \varepsilon \) -approximate solution of (2) on \( \left\lbrack {{x}_{0},{x}_{0} + h}\right\rbrack \) that is linear on each of the intervals \( \left\lbrack {{x}_{i},{x}_{i + 1}}\right\rbrack \) ; then deal with the interval \( \left\lbrack {{x}_{0} - h,{x}_{0}}\right\rbrack \) . This technique is known as the Cauchy-Euler method.) .2 Under the hypotheses of Theorem (4.7.6), let \( \left( {\varepsilon }_{n}\right) \) be a sequence of positive numbers converging to 0, and for each \( n \) let \( {\varphi }_{n} \) be an \( {\varepsilon }_{n} - \) approximate solution of the differential equation (2) on \( I \) . Suppose that \( \left( {\varphi }_{n}\right) \) converges uniformly to a continuous function \( \varphi \) on \( I \) . Prove that (i) \( \left( {t,\varphi \left( t\right) }\right) \in K \) for each \( t \in I \) ; (ii) \( {\int }_{{x}_{0}}^{x}f\left( {t,{\varphi }_{n}\left( t\right) }\right) \mathrm{d}t \rightarrow {\int }_{{x}_{0}}^{x}f\left( {t,\varphi \left( t\right) }\right) \mathrm{d}t \) uniformly on \( I \) as \( n \rightarrow \infty \) ; (iii) \( \varphi \) is a solution of the differential equation (2) on \( I \) . .3 Use the preceding two exercises to give an alternative proof of Peano's Theorem. .4 Let \( I, K, f, M \), and \( c \) be as in the hypotheses of Picard’s Theorem. Let \( {\varepsilon }_{1},{\varepsilon }_{2} > 0 \), and let \( {\varphi }_{i} \) be an \( {\varepsilon }_{i} \) -approximate solution to the differential equation on \( I \) . Show that \[ \left| {{\varphi }_{1}\left( x\right) - {\varphi }_{2}\left( x\right) }\right| \leq \left| {{\varphi }_{1}\left( {x}_{0}\right) - {\varphi }_{2}\left( {x}_{0}\right) }\right| {\mathrm{e}}^{c\left| {x - {x}_{0}}\right| } + \left( {{\varepsilon }_{1} + {\varepsilon }_{2}}\right) \frac{{\mathrm{e}}^{c\left| {x - {x}_{0}}\right| } - 1}{c} \] for each \( x \in I \) . (Use Exercise (2.3.3:14).) Hence find an alternative proof of Picard's Theorem. 5 Hilbert Spaces ## When shall we three meet again...? This chapter explores the elementary theory of Hilbert spaces. In Section 1 we introduce the notion of an inner product, with its associated norm, on a linear space, and prove some fundamental inequalities. The next section deals with orthogonality, projections, and orthonormal bases in a Hilbert space, and with their use in approximation theory. 
In Section 3 we derive Riesz's characterisation of the bounded linear functionals on a Hilbert space, and show how this can be applied both in the theory of operators and to prove the existence of weak solutions of the Dirichlet Problem. ## 5.1 Inner Products So far we have shown how to abstract the notions of distance and length from Euclidean space to the abstract contexts of a metric space and a normed space, respectively. In this chapter we show how to abstract the notion of the inner product in \( {\mathbf{R}}^{n} \) to the context of a linear space. The resulting combination of distance, length, and inner product provides the space with an extremely rich structure that turns out to have many significant applications in pure and applied mathematics. In particular - although we are not able to explore that subject in this book -certain linear self-maps of such a space are the mathematical analogues of quantum-mechanical operations. By an inner product on a linear space \( X \) over \( \mathbf{F} \) we mean a mapping \( \left( {x, y}\right) \mapsto \langle x, y\rangle \) of \( X \times X \) into \( \mathbf{F} \) such that the following hold for all \( x, y, z \) in \( X \) and all \( \lambda ,\mu \) in \( \mathbf{F} \) . IP1 \( \langle x, y\rangle = \langle y, x{\rangle }^{ * } \) . IP2 \( \;\langle {\lambda x} + {\mu y}, z\rangle = \lambda \langle x, z\rangle + \mu \langle y, z\rangle \) . IP3 \( \langle x, x\rangle \geq 0 \), and \( \langle x, x\rangle = 0 \) if and only if \( x = 0 \) . The element \( \langle x, y\rangle \) of \( \mathbf{F} \) is then called the inner product of the vectors \( x \) and \( y \) . Note that by IP2, the inner product is linear in the first variable; and that by IP2 and IP1, it is conjugate linear in the second-that is, \[ \langle x,{\lambda y} + {\mu z}\rangle = {\lambda }^{ * }\langle x, y\rangle + {\mu }^{ * }\langle x, z\rangle \] We define an inner product space, or a prehilbert space, to be a pair \( \left( {X,\langle \cdot , \cdot \rangle }\right) \) consisting of a linear space \( X \) over \( \mathbf{F} \) and an inner product \( \langle \cdot , \cdot \rangle \) on \( X \) . When there is no confusion over the inner product, we refer to \( X \) itself as an inner product space. By a subspace of an inner product space \( X \) we mean a linear subset \( S \) of \( X \), taken with the inner product induced on \( S \) by that on \( X \) ; thus the inner product on \( S \) is the restriction to \( S \times S \) of the inner product on \( X \) . The simplest example of an inner product space is the Euclidean space \( {\mathbf{F}}^{n} \), with the inner product of vectors \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) and \( y = \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) defined by \[ \langle x, y\rangle = \mathop{\sum }\limits_{{k = 1}}^{n}{x}_{k}{y}_{k}^{ * } \] For another example consider the linear space \( {l}_{2}\left( \mathbf{C}\right) \) of square-summable sequences in \( \mathbf{C} \), introduced in Exercise (4.4.4: 3), where the inner product of two elements \( x = \left( {x}_{k}\right) \) and \( y = \left( {y}_{k}\right) \) is defined as \[ \langle x, y\rangle = \mathop{\sum }\limits_{{k = 1}}^{\infty }{x}_{k}{y}_{k}^{ * } \] This can be regarded as a generalisation of the first example, since the one-one mapping \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \mapsto \left( {{x}_{1},\ldots ,{x}_{n},0,0,\ldots }\right) \) of \( {\mathbf{C}}^{n} \) into \( {l}_{2}\left( \mathbf{C}\right) \) preser
ves the value of the inner product. Before discussing a third example, in Exercise (5.1.1: 2), let us agree to call a complex-valued function \( f \) on a subset \( X \) of \( \mathbf{R} \) integrable if its real and imaginary parts are integrable over \( X \), in which case we define \[ {\int }_{X}f = {\int }_{X}\operatorname{Re}\left( f\right) + \mathrm{i}{\int }_{X}\operatorname{Im}\left( f\right) \] The complex integration spaces \( {L}_{p}\left( {X,\mathbf{C}}\right) \) are then defined in the obvious way, and we use \( {L}_{p}\left( {X,\mathbf{F}}\right) \) to denote either \( {L}_{p}\left( I\right) \) or \( {L}_{p}\left( {X,\mathbf{C}}\right) \), depending on whether \( \mathbf{F} = \mathbf{R} \) or \( \mathbf{F} = \mathbf{C} \) . ## (5.1.1) Exercises .1 Prove that the equation \( \langle x, y\rangle = \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n}{y}_{n}^{ * } \) does define an inner product on \( {l}_{2}\left( \mathbf{C}\right) \) . (You must first prove that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{x}_{n}{y}_{n}^{ * } \) is convergent when \( \left( {x}_{n}\right) \) and \( \left( {y}_{n}\right) \) are elements of \( {l}_{2}\left( \mathbf{C}\right) \) .) .2 By a weight function on a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) we mean a nonnegative continuous function \( w \) on \( I \) such that if \( f \in \mathcal{C}\left( I\right) \) and \( {\int }_{I}w\left( t\right) f\left( t\right) \mathrm{d}t = 0 \), then \( f = 0 \) . Prove that \[ \langle f, g\rangle = {\int }_{a}^{b}w\left( t\right) f\left( t\right) g{\left( t\right) }^{ * }\mathrm{\;d}t \] defines an inner product on \( {L}_{2}\left( {I,\mathbf{F}}\right) \) (where, as always, we identify two elements of \( {L}_{2}\left( {I,\mathbf{F}}\right) \) that are equal almost everywhere). We denote the corresponding inner product space by \( {L}_{2, w}\left( {I,\mathbf{F}}\right) \) . (5.1.2) Proposition. Let \( X \) be an inner product space. Then \[ \parallel x\parallel = \langle x, x{\rangle }^{1/2} \] defines a norm on \( X \) . Moreover, the inner product and this norm satisfy the Cauchy-Schwarz inequality \[ \left| {\langle x, y\rangle }\right| \leq \parallel x\parallel \parallel y\parallel \] and Minkowski's inequality \[ \langle x + y, x + y{\rangle }^{1/2} \leq \langle x, x{\rangle }^{1/2} + \langle y, y{\rangle }^{1/2}. \] Proof. We first prove the two inequalities.
For any \( x, y \in X \) and any \( \lambda \in \mathbf{F} \) we have, by IP1 through IP3, \[ 0 \leq \langle x + {\lambda y}, x + {\lambda y}\rangle = \langle x, x\rangle + \langle x,{\lambda y}\rangle + \langle {\lambda y}, x\rangle + \langle {\lambda y},{\lambda y}\rangle \] so \[ {\begin{Vmatrix}x\end{Vmatrix}}^{2} + {\lambda }^{ * }\left\langle {x, y}\right\rangle + \lambda {\left\langle x, y\right\rangle }^{ * } + \lambda {\lambda }^{ * }{\begin{Vmatrix}y\end{Vmatrix}}^{2} \geq 0, \] (1) with equality if and only if \( x + {\lambda y} = 0 \) . If \( \parallel y\parallel \neq 0 \), the Cauchy-Schwarz inequality is obtained by taking \( \lambda = - \langle x, y\rangle /\parallel y{\parallel }^{2} \) ; if \( \parallel x\parallel \neq 0 \), the inequality is obtained in the same way, with the roles of \( x \) and \( y \) interchanged and \( \lambda = - \langle y, x\rangle /\parallel x{\parallel }^{2} \) ; if \( \parallel x\parallel = \parallel y\parallel = 0 \), then IP3 shows that \( x = 0 = y \) and hence that \( \langle x, y\rangle = 0 \), so the Cauchy-Schwarz inequality holds trivially. Taking \( \lambda = 1 \) in (1) and using the Cauchy-Schwarz inequality, we obtain \[ \langle x + y, x + y\rangle = \langle x, x\rangle + 2\operatorname{Re}\langle x, y\rangle + \langle y, y\rangle \] \[ \leq \langle x, x\rangle + 2\left| {\langle x, y\rangle }\right| + \langle y, y\rangle \] \[ \leq \langle x, x\rangle + 2\parallel x\parallel \parallel y\parallel + \langle y, y\rangle \] \[ = {\left( \langle x, x{\rangle }^{1/2}+\langle y, y{\rangle }^{1/2}\right) }^{2} \] which immediately yields Minkowski's inequality. It is now a simple exercise, involving this inequality and the defining properties of an inner product, to show that \( x \mapsto \langle x, x{\rangle }^{1/2} \) is a norm on \( X \) . When we refer to the norm or the metric structure on an inner product space \( X \), we always have in mind the norm, and the corresponding metric structure, associated with the inner product as in Proposition (5.1.2). ## (5.1.3) Exercises .1 Complete the details of the proof that if \( \langle \cdot , \cdot \rangle \) is an inner product on a linear space \( X \), then \( \parallel x\parallel = \langle x, x{\rangle }^{1/2} \) defines a norm on \( X \) . .2 Prove that an inner product on a linear space \( X \) is continuous, and that it is uniformly continuous on bounded sets, with respect to the corresponding product norm on \( X \times X \) . .3 Prove the parallelogram law for vectors \( x, y \) in an inner product space: \[ \parallel x + y{\parallel }^{2} + \parallel x - y{\parallel }^{2} = 2\parallel x{\parallel }^{2} + 2\parallel y{\parallel }^{2}. \] Interpreting a norm as a length, we see that this law generalises the plane geometry theorem that the sum of the squares of the diagonals of a parallelogram equals the sum of the squares of its sides. .4 Use the parallelogram law to show that a Hilbert space is uniformly convex (see Exercise (4.2.2:15)). .5 Let \( X \) be a normed space whose norm satisfies the parallelogram law (see the exercise before last). Show that if \( \mathbf{F} = \mathbf{R} \), then \[ \langle x, y\rangle = \frac{1}{4}\left( {\parallel x + y{\parallel }^{2} - \parallel x - y{\parallel }^{2}}\right) \] defines an inner product on \( X \) such that \( \parallel x\parallel = \langle x, x{\rangle }^{1/2} \) for each \( x \in X \) . Then show that if \( \mathbf{F} = \mathbf{C} \), there is a unique inner product on \( X \) related to the norm in this way.
.6 Prove that there is no inner product on \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) such that \( \langle f, f{\rangle }^{1/2} = \) \( \parallel f\parallel \) (the supremum norm). (Show that the supremum norm does not obey the parallelogram law.) .7 Prove that the inner product space \( {L}_{2, w}\left( {I,\mathbf{F}}\right) \), introduced in Exercise (5.1.1:2), is complete. Prove also that \( \mathcal{C}\left( {I,\mathbf{F}}\right) \) is not a complete subspace of this inner product space. An inner product space that is complete with respect to its norm is called a Hilbert space. For example, the Euclidean space \( {\mathbf{F}}^{n} \) is a Hilbert space, as is \( {l}_{2}\left( \mathbf{C}\right) \) . On the other hand, if \( w \) is a nonnegative weight function on a compact interval \( I \), then Exercise (5.1.3:7) shows that \( {L}_{2, w}\left( {I,\mathbf{F}}\right) \) is complete, but \( \mathcal{C}\left( {I,\mathbf{F}}\right) \) is not complete, with respect to the inner product \[ \langle f, g\rangle = {\int }_{I}w\left( t\right) f\left( t\right) g{\left( t\right) }^{ * }\mathrm{\;d}t. \] ## (5.1.4) Exercises .1 Let \( X \) be an inner product space, \( {X}_{0} \) a closed linear subspace of \( X \) , and \( \varphi \) the canonical mapping of \( X \) onto the quotient space \( X/{X}_{0} \) . Prove that \[ \langle \varphi \left( x\right) ,\varphi \left( y\right) \rangle = \langle x, y\rangle \] unambiguously defines an inner product on \( X/{X}_{0} \), and that the corresponding norm is the quotient norm on \( X/{X}_{0} \) . .2 Show that each inner product space \( X \) can be embedded as a dense subset of a Hilbert space \( H \) . (Extend the inner product by continuity to the completion of \( X \), as defined on page 179.) \( H \) is then known as the (Hilbert space) completion of \( X \) . .3 Prove that any two completions \( H \) and \( {H}^{\prime } \) of an inner product space \( X \) are isomorphic, in the sense that there exists a one-one linear mapping \( u \) of \( H \) onto \( {H}^{\prime } \) such that \( \langle u\left( x\right), u\left( y\right) \rangle = \langle x, y\rangle \) for all \( x, y \in \) \( H \) . .4 Let \( I \) be a compact interval. Show that \( {L}_{2}\left( {I,\mathbf{F}}\right) \) is the completion of the Hilbert space \( \mathcal{C}\left( {I,\mathbf{F}}\right) \) with respect to the inner product \( \langle f, g\rangle = \) \( {\int }_{I}f\left( t\right) g{\left( t\right) }^{ * }\mathrm{\;d}t \) ## 5.2 Orthogonality and Projections Two elements \( x, y \) of an inner product space \( X \) are said to be orthogonal if \( \langle x, y\rangle = 0 \), in which case we write \( x \bot y \) . In view of IP1, the relation \( \bot \) is symmetric: \( x \bot y \) if and only if \( y \bot x \) . A vector \( x \) is said to be orthogonal to the subset \( S \) of \( X \) if \( x \bot s \) for each \( s \in S \) ; we then write \( x \bot S \) . The set of all vectors orthogonal to \( S \) is called the orthogonal complement of \( S \) , and is written \( {S}^{ \bot } \) (pronounced " \
( S \) perp"). It follows from IP2 that \( {S}^{ \bot } \) is a (linear) subspace of \( X \) ; and from IP3 that \( S \cap {S}^{ \bot } \) is nonempty if and only if \( 0 \in S \), in which case \( S \cap {S}^{ \bot } = \{ 0\} \) . Moreover, \( {S}^{ \bot } \) is orthogonal to \( \bar{S} \) , in the sense that every element of \( {S}^{ \bot } \) is orthogonal to \( \bar{S} \) : for, by Exercise (5.1.3: 2), if \( \left( {s}_{n}\right) \) is a sequence of elements of \( S \) converging to \( s \in \bar{S} \), then for each \( x \in {S}^{ \bot } \) , \[ \langle x, s\rangle = \mathop{\lim }\limits_{{n \rightarrow \infty }}\left\langle {x,{s}_{n}}\right\rangle = 0. \] For each \( x \in X,\{ x{\} }^{ \bot } \) is the kernel of the continuous linear functional \( z \mapsto \langle z, x\rangle \) on \( X \), and so, by Proposition (4.2.3), is a closed subspace of \( X \) . Hence for each subset \( S \) of \( X \) , \[ {S}^{ \bot } = \mathop{\bigcap }\limits_{{s \in S}}\{ s{\} }^{ \bot } \] is closed in \( X \) . If \( x \) and \( y \) are orthogonal vectors, then, expanding \( \langle x + y, x + y\rangle \), we obtain Pythagoras's Theorem: \[ \parallel x + y{\parallel }^{2} = \parallel x{\parallel }^{2} + \parallel y{\parallel }^{2}. \] (5.2.1) Proposition. Let \( S \) be a nonempty complete convex subset of an inner product space \( X \), and let \( a \in X \) . Then there exists a unique vector \( s \) in \( S \) such that \( \parallel a - s\parallel = \rho \left( {a, S}\right) \) . Proof. Let \( d = \rho \left( {a, S}\right) \), and choose a sequence \( \left( {s}_{n}\right) \) in \( S \) such that \( \rho \left( {a,{s}_{n}}\right) \rightarrow d \) .
Using the parallelogram law (Exercise (5.1.3:3)) and the convexity of \( S \), for all \( m \) and \( n \) we compute \[ {\begin{Vmatrix}{s}_{m} - {s}_{n}\end{Vmatrix}}^{2} = {\begin{Vmatrix}{s}_{m} - a - \left( {s}_{n} - a\right) \end{Vmatrix}}^{2} \] \[ = 2{\begin{Vmatrix}{s}_{m} - a\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{s}_{n} - a\end{Vmatrix}}^{2} - {\begin{Vmatrix}{s}_{m} - a + \left( {s}_{n} - a\right) \end{Vmatrix}}^{2} \] \[ = 2{\begin{Vmatrix}{s}_{m} - a\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{s}_{n} - a\end{Vmatrix}}^{2} - 4{\begin{Vmatrix}\frac{1}{2}\left( {s}_{m} + {s}_{n}\right) - a\end{Vmatrix}}^{2} \] \[ \leq 2{\begin{Vmatrix}{s}_{m} - a\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{s}_{n} - a\end{Vmatrix}}^{2} - 4{d}^{2} \] \[ = 2\left( {{\begin{Vmatrix}{s}_{m} - a\end{Vmatrix}}^{2} - {d}^{2}}\right) + 2\left( {{\begin{Vmatrix}{s}_{n} - a\end{Vmatrix}}^{2} - {d}^{2}}\right) \] \[ \rightarrow 0\text{as}m, n \rightarrow \infty \text{.} \] Hence \( \left( {s}_{n}\right) \) is a Cauchy sequence. Since \( S \) is complete, \( \left( {s}_{n}\right) \) converges to a limit \( s \) in \( S \) ; then \[ \parallel a - s\parallel = \mathop{\lim }\limits_{{n \rightarrow \infty }}\begin{Vmatrix}{a - {s}_{n}}\end{Vmatrix} = \rho \left( {a, S}\right) . \] On the other hand, if \( {s}^{\prime } \in S \) and \( \begin{Vmatrix}{a - {s}^{\prime }}\end{Vmatrix} = \rho \left( {a, S}\right) \), then a computation similar to the one used at the start of the proof shows that \[ {\begin{Vmatrix}s - {s}^{\prime }\end{Vmatrix}}^{2} = 2{\begin{Vmatrix}s - a\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{s}^{\prime } - a\end{Vmatrix}}^{2} - 4{\begin{Vmatrix}\frac{1}{2}\left( s + {s}^{\prime }\right) - a\end{Vmatrix}}^{2} \] \[ = 4\left( {{d}^{2} - {\begin{Vmatrix}\frac{1}{2}\left( s + {s}^{\prime }\right) - a\end{Vmatrix}}^{2}}\right) \] \[ \leq 0 \] so that \( s = {s}^{\prime } \) . It is worth digressing here to prove a converse of the foregoing result. (5.2.2) Proposition. Let \( S \) be a nonempty closed subset of the Euclidean space \( {\mathbf{R}}^{N} \) such that each point of \( {\mathbf{R}}^{N} \) has a unique closest point in \( S \) . Then \( S \) is convex. Proof. Supposing that \( S \) is not convex, we can find \( a, b \in S \) and \( \lambda \in \left( {0,1}\right) \) such that \[ z = {\lambda a} + \left( {1 - \lambda }\right) b \notin S. \] Since \( X \smallsetminus S \) is open, there exists \( r > 0 \) such that \( \bar{B}\left( {z, r}\right) \cap S = \varnothing \) . Let \( \mathcal{F} \) be the set of all closed balls \( B \) such that \( \bar{B}\left( {z, r}\right) \subset B \) and \( S \cap {B}^{ \circ } = \varnothing \) ; then \( \bar{B}\left( {z, r}\right) \in \mathcal{F} \) . The radii of the balls belonging to \( \mathcal{F} \) are bounded above, since any ball containing \( B \) and having sufficiently large radius will meet \( S \) . Let \( {r}_{\infty } \) be the supremum of the radii of the members of \( \mathcal{F} \), and let \( {\left( \bar{B}\left( {x}_{n},{r}_{n}\right) \right) }_{n = 1}^{\infty } \) be a sequence of elements of \( \mathcal{F} \) such that \( {r}_{n} \rightarrow {r}_{\infty } \) . Then \( {x}_{n} \in \bar{B}\left( {z,{r}_{\infty }}\right) \) for each \( n \) . Since \( \bar{B}\left( {z,{r}_{\infty }}\right) \) is compact (Theorem (4.3.6)) and therefore sequentially compact (Theorem (3.3.9)), we may assume without loss of generality that \( \left( {x}_{n}\right) \) converges to a limit \( {x}_{\infty } \) . 
Let \( K = \bar{B}\left( {{x}_{\infty },{r}_{\infty }}\right) \) ; we prove that \( K \in \mathcal{F} \) . First we consider any \( x \in \bar{B}\left( {z, r}\right) \) and any \( \varepsilon > 0 \) . Choosing \( m \) such that \( \begin{Vmatrix}{{x}_{m} - {x}_{\infty }}\end{Vmatrix} < \varepsilon \), and noting that \( \bar{B}\left( {z, r}\right) \subset \bar{B}\left( {{x}_{m},{r}_{m}}\right) \), we have \[ \begin{Vmatrix}{x - {x}_{\infty }}\end{Vmatrix} \leq \begin{Vmatrix}{x - {x}_{m}}\end{Vmatrix} + \begin{Vmatrix}{{x}_{m} - {x}_{\infty }}\end{Vmatrix} \] \[ < {r}_{m} + \varepsilon \] \[ \leq {r}_{\infty } + \varepsilon \text{.} \] Since \( \varepsilon \) is arbitrary, we conclude that \( \begin{Vmatrix}{x - {x}_{\infty }}\end{Vmatrix} \leq {r}_{\infty } \) ; whence \( \bar{B}\left( {z, r}\right) \subset K \) . On the other hand, supposing that there exists \( s \in S \cap B\left( {{x}_{\infty },{r}_{\infty }}\right) \), choose \( \delta > 0 \) such that \( \begin{Vmatrix}{s - {x}_{\infty }}\end{Vmatrix} < {r}_{\infty } - \delta \), and then \( n \) such that \( 0 \leq {r}_{\infty } - {r}_{n} < \delta /2 \) and \( \begin{Vmatrix}{{x}_{n} - {x}_{\infty }}\end{Vmatrix} < \delta /2 \) . We have \[ \begin{Vmatrix}{s - {x}_{n}}\end{Vmatrix} \leq \begin{Vmatrix}{s - {x}_{\infty }}\end{Vmatrix} + \begin{Vmatrix}{{x}_{\infty } - {x}_{n}}\end{Vmatrix} \] \[ < {r}_{\infty } - \delta + \frac{\delta }{2} \] \[ = {r}_{\infty } - \frac{\delta }{2} \] \[ < {r}_{n}\text{,} \] so \( s \in S \cap B\left( {{x}_{n},{r}_{n}}\right) \) . This is absurd, as \( B\left( {{x}_{n},{r}_{n}}\right) \in \mathcal{F} \) ; hence \( S \cap B\left( {{x}_{\infty },{r}_{\infty }}\right) \) is empty, and therefore \( K \in \mathcal{F} \) . Now, the centre \( {x}_{\infty } \) of \( K \) has a unique closest point \( p \) in \( S \) . This point cannot belong to \( {K}^{ \circ } \), as \( K \in \mathcal{F} \) ; nor can it lie outside \( K \), as \( {r}_{\infty } \) is the supremum of the radii of the balls in \( \mathcal{F} \) . Therefore \( p \) must lie on the boundary of \( K \) . The unique closest point property of \( S \) ensures that the boundary of \( K \) intersects \( S \) in the single point \( p \) . It now follows from Exercise (4.3.7: 7) that there exists a ball \( {K}^{\prime } \) that is concentric with \( K \), has radius greater than \( {r}_{\infty } \), and is disjoint from \( S \) . This ball must contain \( \bar{B}\left( {z, r}\right) \) and so belongs to \( \mathcal{F} \) . Since this contradicts our choice of \( {r}_{\infty } \), we conclude that \( S \) is, in fact, convex. ## (5.2.3) Exercises .1 Give an example of a norm \( \parallel \cdot {\parallel }^{\prime } \) on \( {\mathbf{R}}^{2} \), a closed convex subset \( S \) of \( {\mathbf{R}}^{2} \), and a point \( x \in {\mathbf{R}}^{2} \) such that \( x \) has infinitely many closest points in \( S \) relative to \( \parallel \cdot {\parallel }^{\prime } \) . .2 Let \( S \) be the subset of \( {c}_{0} \) consisting of all elements \( \left( {x}_{n}\right) \) such that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}{x}_{n} = 0 \) . Show that \( S \) is a closed subspace of \( {c}_{0} \) and that no point of \( {c}_{0} \smallsetminus S \) has a closest point in \( S \) . (Given \( a = \left( {a}_{n}\right) \in {c}_{0} \smallsetminus S \) , set \( \alpha = \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}{a}_{n} \) and show that \( \rho \left( {a, S}\right) \leq \left| \alpha \right| \) . 
Let \( x = \left( {x}_{n}\right) \in S \) , suppose that \( \parallel a - x\parallel \leq \left| \alpha \right| \), and obtain a contradiction.) .3 Let \( S \) be a closed convex set in a uniformly convex Banach space \( X \) . (See Exercise (4.2.2: 15).) Show that to each point \( a \) of \( X \)
there corresponds a unique closest point in \( S \) . (To prove existence, reduce to the case where \( a = 0 \) and \( \rho \left( {0, S}\right) = 1 \) . Then choose a sequence \( \left( {s}_{n}\right) \) in \( S \) such that \( \begin{Vmatrix}{s}_{n}\end{Vmatrix} \rightarrow 1 \), and show that \( \left( {{\begin{Vmatrix}{s}_{n}\end{Vmatrix}}^{-1}{s}_{n}}\right) \) is a Cauchy sequence in \( X \) .) .4 Give two proofs that \( {c}_{0} \) is not uniformly convex. .5 Let \( S \) be a bounded closed subset of \( {\mathbf{R}}^{N} \) with the property that to each \( x \in {\mathbf{R}}^{N} \) there corresponds a unique farthest point of \( S \) -that is, a point \( {s}_{0} \) of \( S \) such that \[ \begin{Vmatrix}{x - {s}_{0}}\end{Vmatrix} = \sup \{ \parallel x - s\parallel : s \in S\} . \] Show that \( S \) consists of a single point. (First show that \( S \) is bounded. Then choose \( r > 0 \) such that \( S \subset \bar{B}\left( {0, r/2}\right) \), and consider the family \( \mathcal{F} \) of all closed balls \( B \) such that \( S \subset B \subset \bar{B}\left( {0, r}\right) \) . Show that \( \mathcal{F} \) contains a ball with minimum radius, and then show that that radius is 0 .) Proposition (5.2.2) was first proved by Motzkin in 1935, and the result in Exercise (5.2.3: 5) by Motzkin, Straus, and Valentine in 1953. We now turn from our digression to the subject of projections. In the special case of Proposition (5.2.1) where \( S \) is a complete subspace of \( X \), the unique point of \( S \) closest to a given vector \( x \in X \) is called the projection of the vector \( x \) on \( S \), and the mapping that carries each vector in \( X \) to its projection on \( S \) is called the projection of \( X \) on \( S \) . For example, - the projection of \( X \) on \( X \) is the identity operator \( I : X \rightarrow X \) defined by \( {Ix} = x \) ; - projections on finite-dimensional subspaces of \( X \) are always defined, since finite-dimensional normed spaces are complete, by Proposition (4.3.3); - the projection of a Hilbert space on any closed subspace is defined, in view of Proposition (3.2.9). The next result enables us to show that projections are bounded linear mappings. (5.2.4) Proposition. Let \( S \) be a complete subspace of an inner product space \( X \), and \( P \) the projection of \( X \) on \( S \) . Then for each \( x \in X,{Px} \) is the unique vector \( s \in S \) such that \( x - s \) is orthogonal to \( S \) . Proof. Given \( x \) in \( X \), let \( d = \rho \left( {x, S}\right) \) .
For all \( y \) in \( S \) and \( \lambda \) in \( \mathbf{F} \), we have \( {Px} - {\lambda y} \in S \), so that \[ \langle x - {Px} + {\lambda y}, x - {Px} + {\lambda y}\rangle \geq {d}^{2} = \langle x - {Px}, x - {Px}\rangle , \] and therefore \[ {\left| \lambda \right| }^{2}\parallel y{\parallel }^{2} + 2\operatorname{Re}\left( {{\lambda }^{ * }\langle x - {Px}, y\rangle }\right) \geq 0. \] Suppose that \( \operatorname{Re}\langle x - {Px}, y\rangle \neq 0 \) ; then by the Cauchy-Schwarz inequality, \( y \neq 0 \) . Taking \[ \lambda = - \frac{\langle x - {Px}, y\rangle }{\parallel y{\parallel }^{2}} \] we obtain the contradiction \[ {\left| \lambda \right| }^{2}\parallel y{\parallel }^{2} + 2\operatorname{Re}\left( {{\lambda }^{ * }\langle x - {Px}, y\rangle }\right) < 0. \] Thus \( \operatorname{Re}\langle x - {Px}, y\rangle = 0 \) . Likewise, \( \operatorname{Im}\langle x - {Px}, y\rangle = 0 \), so \( \langle x - {Px}, y\rangle = 0 \) . If, conversely, \( s \) is any vector in \( S \) such that \( x - s \) is orthogonal to \( S \) , then \( s - {Px} \) is in \( S \), and so \[ \langle s - {Px}, s - {Px}\rangle = \langle x - {Px}, s - {Px}\rangle - \langle x - s, s - {Px}\rangle = 0; \] whence \( s = {Px} \), by IP3. ## (5.2.5) Exercises .1 Prove that if \( S \) is a complete subspace of an inner product space \( X \) , then \( {\left( {S}^{ \bot }\right) }^{ \bot } = S \) . .2 Let \( P \) be the projection of a Hilbert space \( H \) onto a complete subspace \( S \) . Use Proposition (5.2.4) to show that \( P \) is a linear mapping, and that \[ \langle {Px},{Py}\rangle = \langle {Px}, y\rangle = \langle x,{Py}\rangle \] for all \( x, y \in H \) . Show also that \( \parallel {Px}\parallel \leq \parallel x\parallel \) for all \( x \in H \), and that if \( S \neq \{ 0\} \), then \( \parallel P\parallel = 1 \) . .3 In the notation of the preceding exercise prove that each vector \( x \in H \) has a unique representation in the form \( x = y + z \) with \( y \in S \) and \( z \bot S \), and that \( I - P \) is the projection of \( H \) on \( {S}^{ \bot } \) . .4 To each vector \( a \) in an inner product space \( X \) there corresponds a linear functional \( {u}_{a} \) defined on \( X \) by \[ {u}_{a}\left( x\right) = \langle x, a\rangle . \] Prove that \( {u}_{a} \) is bounded and has norm \( \parallel a\parallel \) ; and that if \( a \neq 0 \), then \( z = \parallel a{\parallel }^{-2}a \) is in \( \ker {\left( {u}_{a}\right) }^{ \bot },{u}_{a}\left( z\right) = 1 \), and \( a = \parallel z{\parallel }^{-2}z. \) .5 Let \( f \) be a nonzero linear functional on the Euclidean space \( {\mathbf{R}}^{n} \) . Prove that there exists a nonzero vector \( p \) orthogonal to the hyperplane \( \ker \left( f\right) \), such that \( f\left( x\right) = \langle x, p\rangle \) for each \( x \in {\mathbf{R}}^{n} \) . (Choose \( a \in {\mathbf{R}}^{n} \smallsetminus \ker \left( f\right) \) such that \( f\left( a\right) = 1 \) . Let \( b \) be the foot of the perpendicular from \( a \) to \( \ker \left( f\right) \), and let \( p = \lambda \left( {a - b}\right) \) for an appropriate value of \( \lambda \) . Note that each \( x \in {\mathbf{R}}^{n} \) can be written uniquely in the form \( x = f\left( x\right) a + y \) with \( y \in \ker \left( f\right) \) .) A family \( {\left( {e}_{i}\right) }_{i \in I} \) of elements of an inner product space \( X \) is said to be orthogonal if \( \left\langle {{e}_{i},{e}_{j}}\right\rangle = 0 \) whenever \( i, j \) are distinct indices in \( I \) . 
If, in addition, \( \begin{Vmatrix}{e}_{i}\end{Vmatrix} = 1 \) for each \( i \), then \( \left( {e}_{i}\right) \) is called an orthonormal family; in that case we call \( \left\langle {x,{e}_{i}}\right\rangle \) the corresponding \( i \) th coordinate of the element \( x \) of \( X \) . For example, in the space \( {L}_{2}\left( {\left\lbrack {-\pi ,\pi }\right\rbrack ,\mathbf{C}}\right) \), taken with the inner product \[ \langle f, g\rangle = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left( t\right) g{\left( t\right) }^{ * }\mathrm{\;d}t \] the functions \( {e}_{n}\left( {n = 0, \pm 1, \pm 2,\ldots }\right) \) form an orthonormal sequence, where \[ {e}_{n}\left( t\right) = {\mathrm{e}}^{\mathrm{i}{nt}}. \] The corresponding \( n \) th coordinate of \( f \in {L}_{2}\left( {\left\lbrack {-\pi ,\pi }\right\rbrack ,\mathbf{C}}\right) \) is \[ \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left( t\right) {\mathrm{e}}^{-\mathrm{i}{nt}}\mathrm{\;d}t \] which is better known as the \( n \) th Fourier coefficient of \( f \) . ## (5.2.6) Exercise Verify the mathematical claims made in the last paragraph. If \( {\left( {e}_{i}\right) }_{i \in I} \) is an orthonormal family in an inner product space \( X \), then for any finite index set \( J \subset I \) the vectors \( {e}_{j}\left( {j \in J}\right) \) are linearly independent: for if \( \mathop{\sum }\limits_{{j \in J}}{\lambda }_{j}{e}_{j} = 0 \), where each \( {\lambda }_{j} \in \mathbf{F} \), then for each \( i \in J \) , \[ 0 = \left\langle {\mathop{\sum }\limits_{{j \in J}}{\lambda }_{j}{e}_{j},{e}_{i}}\right\rangle = \mathop{\sum }\limits_{{j \in J}}{\lambda }_{j}\left\langle {{e}_{j},{e}_{i}}\right\rangle = {\lambda }_{i} \] Thus the vectors \( {e}_{j}\left( {j \in J}\right) \) form a basis for a finite-dimensional subspace of \( X \) . (5.2.7) Lemma. Let \( {\left( {e}_{n}\right) }_{n = 1}^{N} \) be a finite orthonormal family in an inner product space \( X \) . Then for each \( x \in X \) , \[ {\begin{Vmatrix}x - \mathop{\sum }\limits_{{n = 1}}^{N}\left\langle x,{e}_{n}\right\rangle {e}_{n}\end{Vmatrix}}^{2} = \parallel x{\parallel }^{2} - \mathop{\sum }\limits_{{n = 1}}^{N}{\left| \left\langle x,{e}_{n}\right\rangle \right| }^{2}, \] \[ {\begin{Vmatrix}\mathop{\sum }\limits_{{n = 1}}^{N}\left\langle x,{e}_{n}\right\rangle {e}_{n}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{n = 1}}^{N}{\left| \left\langle x,{e}_{n}\right\rangle \right| }^{2} \leq \parallel x{\parallel }^{2} \] and \( x - \mathop{\sum }\limits_{{n = 1}}^{N}\left\langle {x,{e}_{n}}\right\rangle {e}_{n} \) is orthogonal to each \( {e}_{k} \) . Proof. For each \( n \) write \( {\lambda }_{n} = \left\langle {x,{e}_{n}}\right\rangle \) . Then \[ {\begin{Vmatrix}\mathop{\sum }\limits_{{n = 1}}^{N}\left\langle x,{e}_{n}\right\rangle {e}_{n}\end{Vmatrix}}^{2} = \left\langle {\mathop{\sum }\limits_{{m = 1}}^{N}
{\lambda }_{m}{e}_{m},\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{e}_{n}}\right\rangle \] \[ = \mathop{\sum }\limits_{{m, n = 1}}^{N}{\lambda }_{m}{\lambda }_{n}^{ * }\left\langle {{e}_{m},{e}_{n}}\right\rangle = \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {\lambda }_{n}\right| }^{2}. \] So \[ 0 \leq {\begin{Vmatrix}x - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{e}_{n}\end{Vmatrix}}^{2} \] \[ = \parallel x{\parallel }^{2} - \left\langle {x,\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{e}_{n}}\right\rangle - \left\langle {\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{e}_{n}, x}\right\rangle + {\begin{Vmatrix}\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{e}_{n}\end{Vmatrix}}^{2} \] \[ = \parallel x{\parallel }^{2} - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}^{ * }\left\langle {x,{e}_{n}}\right\rangle - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{\left\langle x,{e}_{n}\right\rangle }^{ * } + \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {\lambda }_{n}\right| }^{2} \] \[ = \parallel x{\parallel }^{2} - 2\mathop{\sum }\limits_{{n = 1}}^{N}{\left| {\lambda }_{n}\right| }^{2} + \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {\lambda }_{n}\right| }^{2} \] \[ = \parallel x{\parallel }^{2} - \mathop{\sum }\limits_{{n = 1}}^{N}{\left| {\lambda }_{n}\right| }^{2} \] The first two of the desired conclusions now follow. On the other hand, \[ \left\langle {x - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{e}_{n},{e}_{k}}\right\rangle = \left\langle {x,{e}_{k}}\right\rangle - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}\left\langle {{e}_{n},{e}_{k}}\right\rangle = \left\langle {x,{e}_{k}}\right\rangle - {\lambda }_{k} = 0. \] (5.2.8) Proposition. If \( {\left( {e}_{i}\right) }_{i \in I} \) is an orthonormal family in an inner product space \( X \), then for each \( x \in X \) , \[ {I}_{x} = \left\{ {i \in I : \left\langle {x,{e}_{i}}\right\rangle \neq 0}\right\} \] is either empty or countable. Proof. Lemma (5.2.7) shows that for each finite subset \( J \) of \( I \) we have \[ \mathop{\sum }\limits_{{i \in J}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} \leq \parallel x{\parallel }^{2} \] Hence the set \[ {I}_{x, n} = \left\{ {i \in I : {\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} > {n}^{-1}\parallel x{\parallel }^{2}}\right\} \] has at most \( n - 1 \) elements. Since \( {I}_{x} = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{I}_{x, n} \), we conclude that if \( {I}_{x} \) is nonempty, then it is countable.
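To spell out the counting step in the last proof: if \( {I}_{x, n} \) contained \( n \) distinct indices \( {i}_{1},\ldots ,{i}_{n} \), then applying Lemma (5.2.7) to the finite orthonormal family \( {e}_{{i}_{1}},\ldots ,{e}_{{i}_{n}} \) would give \[ \parallel x{\parallel }^{2} \geq \mathop{\sum }\limits_{{k = 1}}^{n}{\left| \left\langle x,{e}_{{i}_{k}}\right\rangle \right| }^{2} > n \cdot {n}^{-1}\parallel x{\parallel }^{2} = \parallel x{\parallel }^{2}, \] which is absurd; so \( {I}_{x, n} \) has at most \( n - 1 \) elements.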
When \( {\left( {e}_{i}\right) }_{i \in I} \) is an orthonormal family in an inner product space \( X \) , Proposition (5.2.8) enables us to make sense of certain summations, such as \( \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} \), over possibly uncountable index sets. If \( {I}_{x} \) is empty, we define \( \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} = 0 \) . If \( {I}_{x} \) is nonempty, it is either finite or countably infinite; taking, for example, the latter case (the former is even easier to handle), we define \[ \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2} \] (1) where \( {f}_{1},{f}_{2},\ldots \) is a one-one enumeration of \( {I}_{x} \) . Note that the series on the right-hand side converges, since its terms are nonnegative and (by Lemma (5.2.7)) its partial sums are bounded by \( \parallel x{\parallel }^{2} \) ; it follows from Exercise (1.2.17:1) that the value of the expression on the left-hand side of (1) is independent of our choice of the one-one enumeration \( {f}_{1},{f}_{2},\ldots \) of \( {I}_{x} \) . Moreover, we have Bessel's inequality \[ \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} \leq \parallel x{\parallel }^{2} \] In turn, when \( X \) is a Hilbert space, we can give meaning to another important type of series. Writing \[ {s}_{k} = \mathop{\sum }\limits_{{n = 1}}^{k}\left\langle {x,{f}_{n}}\right\rangle {f}_{n} \] and using Lemma (5.2.7), we see that if \( k > j \), then \[ {\begin{Vmatrix}{s}_{j} - {s}_{k}\end{Vmatrix}}^{2} = {\begin{Vmatrix}\mathop{\sum }\limits_{{n = j + 1}}^{k}\left\langle x,{f}_{n}\right\rangle {f}_{n}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{n = j + 1}}^{k}{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2}. \] Since \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2} \) converges, \( \left( {s}_{n}\right) \) is a Cauchy sequence in \( X \) ; so, as \( X \) is complete, \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle {f}_{n} \) converges to a sum \( s \in X \) . Likewise, if \( {f}_{1}^{\prime },{f}_{2}^{\prime },\ldots \) is another one-one enumeration of \( {I}_{x} \), then \( \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}^{\prime }}\right\rangle {f}_{n}^{\prime } \) converges to a sum \( {s}^{\prime } \in X \) . We show that \( s = {s}^{\prime } \) . Given \( \varepsilon > 0 \), we choose \( N \) such that if \( k \geq N \), then \[ \begin{Vmatrix}{s - {s}_{k}}\end{Vmatrix} < \varepsilon ,\begin{Vmatrix}{{s}^{\prime } - {s}_{k}^{\prime }}\end{Vmatrix} < \varepsilon ,\text{ and }\mathop{\sum }\limits_{{n = k + 1}}^{\infty }{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2} < {\varepsilon }^{2}, \] where \[ {s}_{k}^{\prime } = \mathop{\sum }\limits_{{n = 1}}^{k}\left\langle {x,{f}_{n}^{\prime }}\right\rangle {f}_{n}^{\prime } \] Taking \[ m = \max \left\{ {k : {f}_{k}^{\prime } = {f}_{n}\text{ for some }n \leq N}\right\} , \] we see that \( m \geq N \) and \[ {\begin{Vmatrix}{s}_{m}^{\prime } - {s}_{N}\end{Vmatrix}}^{2} \leq \mathop{\sum }\limits_{{n = N + 1}}^{\infty }{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2} < {\varepsilon }^{2}. 
\] Hence \[ \begin{Vmatrix}{s - {s}^{\prime }}\end{Vmatrix} \leq \begin{Vmatrix}{s - {s}_{N}}\end{Vmatrix} + \begin{Vmatrix}{{s}_{N} - {s}_{m}^{\prime }}\end{Vmatrix} + \begin{Vmatrix}{{s}_{m}^{\prime } - {s}^{\prime }}\end{Vmatrix} < {3\varepsilon }. \] Since \( \varepsilon \) is arbitrary, it follows that \( s = {s}^{\prime } \) . Hence the value of \[ \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle {e}_{i} = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle {f}_{n} \] is independent of the choice of the one-one enumeration \( {f}_{1},{f}_{2},\ldots \) of \( {I}_{x} \) . ## (5.2.9) Exercise Let \( {\left( {e}_{i}\right) }_{i \in I} \) be an orthonormal family in a Hilbert space \( H \), and \( x, y \) elements of \( H \) such that \( {I}_{x} \) is countably infinite. Show that the value of the expression \[ \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle \left\langle {{e}_{i}, y}\right\rangle = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle \left\langle {{f}_{n}, y}\right\rangle \] is independent of the one-one enumeration \( {f}_{1},{f}_{2},\ldots \) of \( {I}_{x} \) . Show also that if \( {f}_{1}^{\prime },{f}_{2}^{\prime },\ldots \) is a (possibly finite) one-one enumeration of \( {I}_{y} \), then \[ \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle \left\langle {{e}_{i}, y}\right\rangle = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}^{\prime }}\right\rangle \left\langle {{f}_{n}^{\prime }, y}\right\rangle . \] (5.2.10) Proposition. Let \( {\left( {e}_{i}\right) }_{i \in I} \) be an orthonormal family in a Hilbert space \( H \), let \( S \) be the closure in \( H \) of the subspace of \( H \) generated by \( \left( {e}_{i}\right) \) , and let \( P \) be the projection of \( H \) on \( S \) . Then for all \( x, y \) in \( H \) , \[ {Px} = \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle {e}_{i} \] \[ \parallel {Px}{\parallel }^{2} = \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} \] \[ \parallel x - {Px}{\parallel }^{2} = \parallel x{\parallel }^{2} - \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2}, \] \[ \langle {Px},{Py}\rangle = \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle \left\langle {{e}_{i}, y}\right\rangle . \] Proof. Consider, for example, the case where \( {I}_{x} \) is countably infinite. Let \( {f}_{1},{f}_{2},\ldots \) be a one-one enumeration of \( {I}_{x} \) . Lemma (5.2.7) shows that \[ {\begin{Vmatrix}x - \mathop{\sum }\limits_{{n = 1}}^{N}\left\langle x,{f}_{n}\right\rangle {f}_{n}\end{Vmatrix}}^{2} = \parallel x{\parallel }^{2} - \mathop{\sum }\limits_{{n = 1}}^{N}{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2}, \] \[ {\begin{Vmatrix}\mathop{\sum }\limits_{{n = 1}}^{N}\left\langle x,{f}_{n}\right\rangle {f}_{n}\end{Vmatrix}}
^{2} = \mathop{\sum }\limits_{{n = 1}}^{N}{\left| \left\langle x,{f}_{n}\right\rangle \right| }^{2} \] and \( x - \mathop{\sum }\limits_{{n = 1}}^{N}\left\langle {x,{f}_{n}}\right\rangle {f}_{n} \) is orthogonal to \( {f}_{1},\ldots ,{f}_{N} \) . Letting \( N \rightarrow \infty \), we see that \[ {\begin{Vmatrix}x - \mathop{\sum }\limits_{{i \in I}}\left\langle x,{e}_{i}\right\rangle {e}_{i}\end{Vmatrix}}^{2} = \parallel x{\parallel }^{2} - \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} \] \[ {\begin{Vmatrix}\mathop{\sum }\limits_{{i \in I}}\left\langle x,{e}_{i}\right\rangle {e}_{i}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} \] and \( z = x - \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle {e}_{i} \) is orthogonal to each \( {f}_{n} \) . For each \( i \in I \) either \( {e}_{i} = {f}_{n} \) for some \( n \), and therefore \( z \bot {e}_{i} \), or else \( i \notin {I}_{x} \) ; in the latter case, using the continuity of the inner product, we have \[ \left\langle {z,{e}_{i}}\right\rangle = \left\langle {x,{e}_{i}}\right\rangle - \left\langle {\mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle {f}_{n},{e}_{i}}\right\rangle \] \[ = 0 - \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {\left\langle {x,{f}_{n}}\right\rangle {f}_{n},{e}_{i}}\right\rangle \] \[ = - \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle \left\langle {{f}_{n},{e}_{i}}\right\rangle \] \[ = 0\text{,} \] as \( {e}_{i} \) is orthogonal to each \( {f}_{n} \) . It now follows that \( z \) is orthogonal to each vector in \( S \), and hence, by Proposition (5.2.4), that \( {Px} = \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle {e}_{i} \) . Using Exercise (5.2.5:2), the continuity of the inner product, and Exercise (5.2.9), we now obtain \[ \langle {Px},{Py}\rangle = \langle {Px}, y\rangle \] \[ = \left\langle {\mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle {f}_{n}, y}\right\rangle \] \[ = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{f}_{n}}\right\rangle \left\langle {{f}_{n}, y}\right\rangle \] \[ = \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle \left\langle {{e}_{i}, y}\right\rangle \] By an orthonormal basis of a Hilbert space \( H \) we mean an orthonormal family that generates a dense linear subspace of \( H \) .
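For example, in \( {l}_{2}\left( \mathbf{C}\right) \) the vectors \( {e}_{n} = \left( {0,\ldots ,0,1,0,0,\ldots }\right) \), with the 1 in the \( n \) th place, form an orthonormal basis: they are orthonormal, \( \left\langle {x,{e}_{n}}\right\rangle = {x}_{n} \) for each \( x = \left( {x}_{k}\right) \in {l}_{2}\left( \mathbf{C}\right) \), and \[ {\begin{Vmatrix}x - \mathop{\sum }\limits_{{k = 1}}^{N}{x}_{k}{e}_{k}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{k = N + 1}}^{\infty }{\left| {x}_{k}\right| }^{2} \rightarrow 0\text{ as }N \rightarrow \infty , \] so the linear subspace generated by \( \left( {e}_{n}\right) \) is dense in \( {l}_{2}\left( \mathbf{C}\right) \) .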
The following is a more or less immediate consequence of Proposition (5.2.10). (5.2.11) Proposition. The following are equivalent conditions on an orthonormal family \( {\left( {e}_{i}\right) }_{i \in I} \) in a Hilbert space \( H \) . (i) \( \left( {e}_{i}\right) \) is an orthonormal basis of \( H \) . (ii) \( x = \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle {e}_{i} \) for each \( x \in H \) . (iii) \( \mathop{\sum }\limits_{{i \in I}}{\left| \left\langle x,{e}_{i}\right\rangle \right| }^{2} = \parallel x{\parallel }^{2} \) for each \( x \in H \) . (iv) \( \langle x, y\rangle = \mathop{\sum }\limits_{{i \in I}}\left\langle {x,{e}_{i}}\right\rangle \left\langle {{e}_{i}, y}\right\rangle \) for all \( x, y \in H \) . The identity in condition (iv) of this proposition is known as Parseval's identity. ## (5.2.12) Exercises .1 Prove Proposition (5.2.11). .2 Use Zorn's Lemma (Appendix B) to prove that every nonzero Hilbert space has an orthonormal basis. .3 Let \( {\left( {e}_{i}\right) }_{i \in I} \) be an orthonormal basis in a separable Hilbert space \( H \) . By considering the balls \( B\left( {{e}_{i},1/\sqrt{2}}\right) \), or otherwise, show that \( I \) is a countable set. .4 Let \( H \) be an infinite-dimensional inner product space, and \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) an infinite orthonormal sequence of vectors in \( H \) . By considering \( \left( {e}_{n}\right) \) , and without invoking either Theorem (4.3.6) or Exercise (4.3.7:4), prove that the unit ball of \( H \) is not sequentially compact. .5 Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be an orthonormal basis of a separable Hilbert space \( H \) , and \( {\left( {a}_{n}\right) }_{n = 1}^{\infty } \) an element of \( {l}_{2}\left( \mathbf{C}\right) \) . Show that there exists a unique element \( a \) of \( H \) such that \( \left\langle {a,{e}_{n}}\right\rangle = {a}_{n} \) for each \( n \) . (Show that the partial sums of the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{e}_{n} \) form a Cauchy sequence.) .6 Prove that the functions \[ t \mapsto {e}_{n}\left( t\right) = \frac{1}{\sqrt{2\pi }}{\mathrm{e}}^{\mathrm{i}{nt}}\;\left( {n \in \mathbf{Z}}\right) \] form an orthonormal basis of \( {L}_{2}\left( {\left\lbrack {-\pi ,\pi }\right\rbrack ,\mathbf{C}}\right) \) . (Noting Exercise (5.2.6), show that the linear space \( S \) generated by \( \left\{ {{e}_{n} : n \in \mathbf{Z}}\right\} \) is dense in \( {L}_{2}\left( {\left\lbrack {-\pi ,\pi }\right\rbrack ,\mathbf{C}}\right) \) . To do this, first consider \( f \in \mathcal{C}\left( {\left\lbrack {-\pi ,\pi }\right\rbrack ,\mathbf{C}}\right) \) . Construct a continuous function \( g \) on \( \mathbf{R} \), with period \( {2\pi } \), such that \( \parallel f - g{\parallel }_{2} \) is arbitrarily small. Then use Exercise (4.6.8: 6) to approximate \( g \), and therefore \( f \), by an element of \( S \) .) It follows from this exercise and Proposition (5.2.11) that for each \( f \in {L}_{2}\left( {\left\lbrack {-\pi ,\pi }\right\rbrack ,\mathbf{C}}\right) \) the corresponding Fourier expansion \[ x \mapsto \mathop{\sum }\limits_{{n = - \infty }}^{\infty }\widehat{f}\left( n\right) {\mathrm{e}}^{\mathrm{i}{nx}} \] converges to \( f \) in the \( {L}_{2} \) norm, where \[ \widehat{f}\left( n\right) = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left( t\right) {\mathrm{e}}^{-\mathrm{i}{nt}}\mathrm{\;d}t. 
\] In this case Parseval's identity takes the form \[ \frac{1}{2\pi }{\int }_{-\pi }^{\pi }{\left| f\left( x\right) \right| }^{2}\mathrm{\;d}x = \mathop{\sum }\limits_{{n = - \infty }}^{\infty }{\left| \widehat{f}\left( n\right) \right| }^{2}. \] .7 Take \( f\left( x\right) = x \) in the preceding exercise and apply Parseval’s identity to show that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{n}^{-2} = {\pi }^{2}/6 \) . .8 Show that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{n}^{-4} = {\pi }^{4}/{90} \) . (Consider \( f\left( x\right) = \frac{1}{2}\left( {{x}^{2} - {\pi }^{2}}\right) \) .) Although Zorn's Lemma guarantees the existence of an orthonormal basis in any Hilbert space (Exercise (5.2.12: 2)), it does not enable us to construct orthonormal bases. The following Gram-Schmidt orthonormalisation process enables us to construct orthonormal bases when the Hilbert space \( H \) is separable. Using Proposition (4.3.8), first construct a (possibly finite) total sequence \( \left( {{a}_{1},{a}_{2},\ldots }\right) \) of linearly independent vectors in \( H \) . For each \( n \) let \( {H}_{n} \) be the \( n \) -dimensional subspace of \( H \) spanned by \( \left\{ {{a}_{1},\ldots ,{a}_{n}}\right\} \) ; since this subspace is complete (by Proposition (4.3.3)), the projection \( {P}_{n} \) of \( H \) onto it is defined. Suppose we have found orthogonal vectors \( {b}_{1},\ldots ,{b}_{n} \) generating \( {H}_{n} \) . If \( H = {H}_{n} \), stop the construction. Otherwise, \( {a}_{n + 1} \notin {H}_{n} \) ; so by Proposition (5.2.4), \[ {b}_{n + 1} = {a}_{n + 1} - {P}_{n}{a}_{n + 1} \] is orthogonal to \( {H}_{n} \), and therefore \[ \left\langle {{b}_{n + 1},{b}_{k}}\right\rangle = 0\;\left( {1 \leq k \leq n}\right) . \] Elementary linear algebra shows that \( \left\{ {{b}_{1},\ldots ,{b}_{n + 1}}\right\} \) is a basis of \( {H}_{n + 1} \) . This completes the inductive construction of a (possibly finite) orthogonal sequence \( \left( {{b}_{1},{b}_{2},\ldots }\right) \) in \( H \) such that for each \( n,\left\{ {{b}_{1},\ldots ,{b}_{n}}\right\} \) is a basis of \( {H}_{n} \) . Setting \( {e}_{n} = {\begin{Vmatrix}{b}_{n}\end{Vmatrix}}^{-1}{b}_{n} \) and noting that \( \mathop{\bigcup }\limits_{n}{H}_{n} \) is dense in \( H \), we see that \( \left( {e}_{n}\right) \) is an orthonormal basis of \( H \) . The Gram-Schmidt orthonormalisation process has a very important application in approximation theory, which we now describe. Let \( w \) be a nonnegative continuous weight function on a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) . Define the inner product \[ \langle f, g{\rangle }_{w} = {\int }_{a}^{b}w\left( t\right) f\left( t\right) g\left( t\right) \mathrm{d}t \] on \( {L}_{2}\left( I\right) \), and the corresponding weighted least squares norm by \[ \parallel f{\parallel }_{2, w} = {\left( {\int }_{a}^{b}w\left( t\right) f{\left( t\right) }^{2}\mathrm{\;d}t\right) }^{1/2}. \] Given an element \( f \) of \( {L}_{2}\left( I\right) \) an
The Gram-Schmidt orthonormalisation process has a very important application in approximation theory, which we now describe. Let \( w \) be a nonnegative continuous weight function on a compact interval \( I = \left\lbrack {a, b}\right\rbrack \) . Define the inner product \[ \langle f, g{\rangle }_{w} = {\int }_{a}^{b}w\left( t\right) f\left( t\right) g\left( t\right) \mathrm{d}t \] on \( {L}_{2}\left( I\right) \), and the corresponding weighted least squares norm by \[ \parallel f{\parallel }_{2, w} = {\left( {\int }_{a}^{b}w\left( t\right) f{\left( t\right) }^{2}\mathrm{\;d}t\right) }^{1/2}. \] Given an element \( f \) of \( {L}_{2}\left( I\right) \) and a natural number \( N \), we have the approximation problem: Find the polynomial function \( p \) of degree at most \( N \) that minimises the value of \[ \parallel f - p{\parallel }_{2, w}^{2} = {\int }_{a}^{b}w\left( t\right) {\left( f\left( t\right) - p\left( t\right) \right) }^{2}\mathrm{\;d}t. \] This polynomial is called the least squares approximation to \( f \) of degree at most \( N \) . Now, the set \( {\mathcal{P}}_{N} \) of polynomials of degree \( \leq N \) is a finite-dimensional subspace of \( \mathcal{C}\left( I\right) \) ; so the projection \( {P}_{N} \) of \( \mathcal{C}\left( I\right) \) on \( {\mathcal{P}}_{N} \) exists, and the unique least squares approximation to \( f \) of degree at most \( N \) is given by \( {p}_{N} = {P}_{N}f \) . To compute the coefficients of \( {p}_{N} \), we can use elementary multivariate calculus to calculate the values of \( {\lambda }_{0},\ldots ,{\lambda }_{N} \) that minimise \[ {\int }_{a}^{b}w\left( t\right) {\left( f\left( t\right) - \mathop{\sum }\limits_{{n = 0}}^{N}{\lambda }_{n}{t}^{n}\right) }^{2}\mathrm{\;d}t \] see [29]. However, this procedure is computationally inefficient if we are looking for least squares approximations to several functions in \( \mathcal{C}\left( I\right) \) . In that case a better procedure is to apply the Gram-Schmidt process to the total sequence consisting of the monomials \( 1, t,{t}^{2},\ldots \), to compute orthonormal polynomials \( {q}_{0},{q}_{1},\ldots \), where \( {q}_{n}\left( t\right) \) has degree \( n \) and \( \left\{ {{q}_{0},\ldots ,{q}_{n}}\right\} \) is a basis for \( {\mathcal{P}}_{n} \) ; then \[ {P}_{N}f = \mathop{\sum }\limits_{{n = 0}}^{N}{\left\langle f,{q}_{n}\right\rangle }_{w}{q}_{n} \] by Proposition (5.2.10). One advantage of this method is that, having found the least squares approximation \( {p}_{n} \) to \( f \) of degree at most \( n \), in order to find the least squares approximation of degree at most \( n + 1 \) we simply add to \( {p}_{n} \) the single term \( {\left\langle f,{q}_{n + 1}\right\rangle }_{w}{q}_{n + 1} \) . ## (5.2.13) Exercises .1 In the notation of the preceding paragraphs, take \( I = \left\lbrack {-1,1}\right\rbrack \) and \( w\left( t\right) = 1 \), and compute \( {q}_{0},{q}_{1} \), and \( {q}_{2} \) . Hence find the quadratic least squares approximation to \( {\mathrm{e}}^{x} \) in \( \mathcal{C}\left\lbrack {-1,1}\right\rbrack \) .
.2 Let \( w \) be a nonnegative continuous weight function on \( I = \left\lbrack {a, b}\right\rbrack \), let \( f \in \mathcal{C}\left( I\right) \), and for each \( n \) let \( {p}_{n} \) denote the least squares approximation to \( f \) of degree at most \( n \) . Prove that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\begin{Vmatrix}f - {p}_{n}\end{Vmatrix}}_{2, w} = 0 \] .3 In the notation of the last exercise, let \( \left( {q}_{n}\right) \) be a sequence of polynomial functions that is orthogonal relative to \( \langle \cdot , \cdot {\rangle }_{w} \), such that \( {q}_{n} \) has degree \( n \) . Prove that each polynomial \( p \) of degree \( n \) can be written uniquely as a linear combination of \( {q}_{0},\ldots ,{q}_{n} \), and find the coefficient of \( {q}_{k} \) in this linear combination. .4 Continuing Exercise (5.2.13:3), prove that \( {q}_{n}\left( t\right) \) has \( n \) distinct real zeroes, and that those zeroes lie in the open interval \( \left( {a, b}\right) \) . (Let \[ p\left( t\right) = \left( {t - {t}_{1}}\right) \cdots \left( {t - {t}_{m}}\right) \] where \( {t}_{1},\ldots ,{t}_{m} \) are the zeroes of \( {q}_{n}\left( t\right) \) in \( \left( {a, b}\right) \) at which \( {q}_{n}\left( t\right) \) changes sign. Assume that \( m < n \), show that \( {\int }_{a}^{b}w\left( t\right) p\left( t\right) {q}_{n}\left( t\right) \mathrm{d}t \neq 0 \), and deduce a contradiction.) .5 Continuing Exercise (5.2.13:4), write \[ {q}_{n}\left( t\right) = {A}_{n}{t}^{n} + {B}_{n}{t}^{n - 1} + \ldots , \] \[ {c}_{n} = {\left\langle {q}_{n},{q}_{n}\right\rangle }_{w} \] \[ {\alpha }_{n} = \frac{{A}_{n + 1}}{{A}_{n}} \] \[ {\beta }_{n} = {\alpha }_{n}\left( {\frac{{B}_{n + 1}}{{A}_{n + 1}} - \frac{{B}_{n}}{{A}_{n}}}\right) \] and, for \( n \geq 1 \) , \[ {\gamma }_{n} = \frac{{A}_{n + 1}{A}_{n - 1}}{{A}_{n}^{2}} \cdot \frac{{\left\langle {q}_{n},{q}_{n}\right\rangle }_{w}}{{\left\langle {q}_{n - 1},{q}_{n - 1}\right\rangle }_{w}}. \] Prove the triple recursion formula: \[ {q}_{n + 1}\left( t\right) = \left( {{\alpha }_{n}t + {\beta }_{n}}\right) {q}_{n}\left( t\right) - {\gamma }_{n}{q}_{n - 1}\left( t\right) . \] (Consider \( p\left( t\right) = {q}_{n + 1}\left( t\right) - {\alpha }_{n}t{q}_{n}\left( t\right) \) .) .6 Let \( I = \left\lbrack {a, b}\right\rbrack \), let \( w \in \mathcal{C}\left( I\right) \), and let \( p \) be a polynomial function. Prove the equivalence of the following conditions. (i) \( {\int }_{a}^{b}w\left( t\right) p\left( t\right) q\left( t\right) \mathrm{d}t = 0 \) for all polynomial functions \( q \) of degree at most \( n \) . (ii) There exists an \( \left( {n + 1}\right) \) -times differentiable function \( u \) on \( I \) such that \[ w\left( x\right) p\left( x\right) = {u}^{\left( n + 1\right) }\left( x\right) \;\left( {x \in I}\right) \] and \[ {u}^{\left( k\right) }\left( {a}^{ + }\right) = {u}^{\left( k\right) }\left( {b}^{ - }\right) = 0\;\left( {k = 0,1,\ldots, n}\right) . \] .7 Let \( I = \left\lbrack {-1,1}\right\rbrack \), let \( \alpha ,\beta \in \left( {-1,\infty }\right) \), and let \[ w\left( x\right) = {\left( 1 - x\right) }^{\alpha }{\left( 1 + x\right) }^{\beta }\;\left( {x \in I}\right) . 
\] For each \( n \in \mathbf{N} \) define the Jacobi polynomial of degree \( n \) by Rodrigues’s formula: \[ {\phi }_{n}\left( x\right) = {\left( 1 - x\right) }^{-\alpha }{\left( 1 + x\right) }^{-\beta }\frac{{\mathrm{d}}^{n}}{\mathrm{\;d}{x}^{n}}\left( {{\left( 1 - x\right) }^{\alpha + n}{\left( 1 + x\right) }^{\beta + n}}\right) \] (where, of course, \( {\mathrm{d}}^{n}/\mathrm{d}{x}^{n} \) denotes the \( n \) th derivative). Use the preceding exercise to prove that \( {\left( {\phi }_{n}\right) }_{n = 0}^{\infty } \) is an orthogonal sequence in \( {L}_{2, w}\left( {I,\mathbf{C}}\right) \) . .8 In the special case \( \alpha = \beta = 0 \) of the last exercise, the Jacobi polynomial \( {\phi }_{n} \) is known as a Legendre polynomial and is usually denoted by \( {P}_{n} \) . Prove that the Legendre polynomials satisfy the recurrence relation \[ {P}_{n + 1}\left( x\right) = \left( {{4n} + 2}\right) x{P}_{n}\left( x\right) - 4{n}^{2}{P}_{n - 1}\left( x\right) \] on \( \left\lbrack {-1,1}\right\rbrack \) . Use this and Exercise (5.2.13: 1) to find \( {P}_{3}\left( x\right) \) and \( {P}_{4}\left( x\right) \) . (To establish the recurrence relation, write each term in the form \[ \frac{{\mathrm{d}}^{n - 1}}{\mathrm{\;d}{x}^{n - 1}}\left( {{\left( {x}^{2} - 1\right) }^{n - 1}q\left( x\right) }\right) \] where \( q\left( x\right) \) is a quadratic polynomial.) ## 5.3 The Dual of a Hilbert Space We saw in Exercise (5.2.5:4) that for each vector \( a \) in an inner product space \( X \) the mapping \( x \mapsto \langle x, a\rangle \) is a bounded linear functional on \( X \) . We now show that the dual of a Hilbert space consists precisely of bounded linear functionals of this form (cf. Exercise (5.2.5:5)). (5.3.1) The Riesz Representation Theorem. If \( u \) is a bounded linear functional on a Hilbert space \( H \), then there exists a unique vector \( a \in H \) such that \( u\left( x\right) = \langle x, a\rangle \) for each \( x \in H \) . In that case \( \parallel u\parallel = \parallel a\parallel \) . Proof. We first dispose of the uniqueness \( {}^{1} \) of \( a \) : indeed, if \[ \langle x, a\rangle = u\left( x\right) = \left\langle {x,{a}^{\prime }}\right\rangle \;\left( {x \in H}\right) , \] --- \( {}^{1} \) This uniqueness argument applies to a linear functional of the form \( x \mapsto \langle x, a\rangle \) on an inner product space. --- then, taking \( x = a - {a}^{\prime } \), we obtain \[ {\begin{Vmatrix}a - {a}^{\prime }\end{Vmatrix}}^{2} = \left\langle {a - {a}^{\prime }, a - {a}^{\prime }}\right\rangle = 0, \] so \( a = {a}^{\prime } \) . To establish the existence of \( a \), we may assume that \( u \neq 0 \) . As \( u \) is linear and continuous, \( \ker \left( u\right) \) is a closed subspace of \( H \) (Proposition (4.2.3)). Let \( P \) be the projection of \( H \) on \( \ker \left( u\right) \), and choose \( y \in H \) such that \( u\left( y\right) \neq 0 \) . Setting \[ z = u{\left( y\right) }^{-1}\left( {y - {Py}}\right) \] we see that \( z \in \ker {\left( u\right) }^{ \bot } \), by Proposition (5.2.4), and that \[ u\left( z\right) = u{\left( y\right) }^{-1}\left( {u\left( y\right) - u\left( {Py}\right) }\right) = 1. \] So for each \( x \) in \( H \) we have \[ x - u\left( x\right) z \in \ker \left( u\right) \] and therefore \
[ 0 = \langle x - u\left( x\right) z, z\rangle = \langle x, z\rangle - u\left( x\right) \langle z, z\rangle = \langle x, z\rangle - u\left( x\right) \parallel z{\parallel }^{2}. \] Thus \( u\left( x\right) = \langle x, a\rangle \), where \( a = \parallel z{\parallel }^{-2}z \) . The Cauchy-Schwarz inequality shows that \( \left| {u\left( x\right) }\right| \leq \parallel a\parallel \parallel x\parallel \) . Since also \( \left| {u\left( {\parallel a{\parallel }^{-1}a}\right) }\right| = \parallel a\parallel \), we see that \( \parallel u\parallel = \parallel a\parallel \) . ## (5.3.2) Exercises .1 Find an alternative proof of the existence part of the Riesz Representation Theorem (5.3.1) for a separable Hilbert space \( H \) . (Let \( {\left( {e}_{n}\right) }_{n = 1}^{\infty } \) be an orthonormal basis of \( H \), and \( u \) a bounded linear functional on \( H \) . Show that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }u{\left( {e}_{n}\right) }^{ * }{e}_{n} \) converges to the desired element \( a \in H \) .) .2 Use the Riesz Representation Theorem to give another solution to Exercise (5.2.12: 5). (In the notation of that exercise, show that \( u\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}^{ * }\left\langle {x,{e}_{n}}\right\rangle \) defines a bounded linear functional on \( H \) .) .3 By the second dual of a normed space \( X \) we mean the dual space \( {X}^{* * } = {\left( {X}^{ * }\right) }^{ * } \) of \( {X}^{ * } \) . We say that \( X \) is reflexive if for each \( u \in {X}^{* * } \) there exists \( {x}_{u} \in X \) such that \( u\left( f\right) = f\left( {x}_{u}\right) \) for each \( f \in {X}^{ * } \) . Prove that any Hilbert space is reflexive. By an operator on a normed space \( X \) we mean a bounded linear mapping from \( X \) into itself; the set of operators on \( X \) is written \( L\left( X\right) \) . (Strictly speaking, we have here defined a bounded operator; since we do not consider unbounded operators in this book, it is convenient for us to use the term "operator" to mean "bounded operator".) It is common practice to denote the composition of operators by juxtaposition; thus if \( S, T \) are operators on \( X \), then \( T \circ S \) is usually written \( {TS} \) ; moreover, we write \( {T}^{2} \) for \( {TT},{T}^{3} \) for \( T\left( {TT}\right) \), and so on.
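For example, on the Hilbert space \( {l}_{2} \) the right shift \( S \), defined by \[ S\left( {{x}_{1},{x}_{2},{x}_{3},\ldots }\right) = \left( {0,{x}_{1},{x}_{2},\ldots }\right) , \] is an operator: it is clearly linear, and \( \parallel {Sx}\parallel = \parallel x\parallel \) for every \( x \), so \( \parallel S\parallel = 1 \) ; its square \( {S}^{2} \) shifts each sequence two places to the right.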
For a first application of the Riesz Representation Theorem, let \( T \) be an operator on a Hilbert space \( H \), and for each \( a \in X \) consider the linear functional \( x \mapsto \langle {Tx}, a\rangle \) on \( H \) . The inequality \[ \left| {\langle {Tx}, a\rangle }\right| \leq \parallel {Tx}\parallel \parallel a\parallel \leq \parallel T\parallel \parallel a\parallel \parallel x\parallel \] shows that this functional is bounded and has norm at most \( \parallel T\parallel \parallel a\parallel \) . By the Riesz Representation Theorem, there exists a unique vector \( {T}^{ * }a \) such that \[ \langle {Tx}, a\rangle = \left\langle {x,{T}^{ * }a}\right\rangle \;\left( {x \in H}\right) ; \] moreover, \[ \begin{Vmatrix}{{T}^{ * }a}\end{Vmatrix} \leq \parallel T\parallel \parallel a\parallel \] (1) The mapping \( {T}^{ * } : H \rightarrow H \) so defined is called the adjoint of \( T \), and is an operator on \( H \) . To justify this last claim, consider \( a, b \) in \( H \) and \( \lambda ,\mu \) in \( \mathbf{F} \) . Since \[ \langle {Tx},{\lambda a} + {\mu b}\rangle = {\lambda }^{ * }\langle {Tx}, a\rangle + {\mu }^{ * }\langle {Tx}, b\rangle \] \[ = {\lambda }^{ * }\left\langle {x,{T}^{ * }a}\right\rangle + {\mu }^{ * }\left\langle {x,{T}^{ * }b}\right\rangle \] \[ = \left\langle {x,\lambda {T}^{ * }a + \mu {T}^{ * }b}\right\rangle \] for all \( x \in H \), we see that \[ {T}^{ * }\left( {{\lambda a} + {\mu b}}\right) = \lambda {T}^{ * }a + \mu {T}^{ * }b. \] So \( {T}^{ * } \) is linear. Inequality (1) shows that \( {T}^{ * } \) is bounded and has norm at most \( \parallel T\parallel \) . Since \[ \left\langle {{T}^{ * }x, y}\right\rangle = {\left\langle y,{T}^{ * }x\right\rangle }^{ * } = \langle {Ty}, x{\rangle }^{ * } = \langle x,{Ty}\rangle \] the uniqueness of the adjoint of \( {T}^{ * } \) shows that \( {\left( {T}^{ * }\right) }^{ * } = T \) . So \( \parallel T\parallel = \) \( \begin{Vmatrix}{\left( {T}^{ * }\right) }^{ * }\end{Vmatrix} \leq \begin{Vmatrix}{T}^{ * }\end{Vmatrix} \) and therefore \( \begin{Vmatrix}{T}^{ * }\end{Vmatrix} = \parallel T\parallel \) . An operator \( T \) on \( H \) is said to be - selfadjoint, or Hermitian, if \( {T}^{ * } = T \) ; - normal if \( {T}^{ * }T = T{T}^{ * } \) . Selfadjoint and normal operators have particularly amenable properties and are among the most important objects in Hilbert space theory. (See [24], [44], and other books that deal with such topics as spectral theory.) ## (5.3.3) Exercises In all the exercises of this set except the first, \( H \) is a complex Hilbert space, \( S \) and \( T \) are operators on \( H \), and \( \operatorname{ran}\left( T\right) \) denotes the range of \( T \) . .1 Let \( \left( {{e}_{1},{e}_{2},\ldots ,{e}_{n}}\right) \) be an orthonormal basis of the Euclidean Hilbert space \( {\mathbf{F}}^{n} \), and \( T \) an operator on \( {\mathbf{F}}^{n} \) . Show that \[ {Tx} = \mathop{\sum }\limits_{{j, k = 1}}^{n}\left\langle {x,{e}_{j}}\right\rangle \left\langle {T{e}_{j},{e}_{k}}\right\rangle {e}_{k} \] and hence that \( T \) can be associated with the \( n \) -by- \( n \) matrix whose \( \left( {j, k}\right) \) th entry is \( \left\langle {T{e}_{j},{e}_{k}}\right\rangle \) . With what matrix is \( {T}^{ * } \) associated in this way? 
.2 By a bounded conjugate-bilinear functional on \( H \) we mean a mapping \( u : H \times H \rightarrow \mathbf{C} \) that is linear in the first variable, conjugate linear in the second, and bounded, in the sense that there exists \( c > 0 \) such that \( \left| {u\left( {x, y}\right) }\right| \leq c\parallel x\parallel \parallel y\parallel \) for all \( x, y \in H \) . The least such \( c \) is the number written \[ \parallel u\parallel = \sup \left\{ {\left| {u\left( {x, y}\right) }\right| : x, y \in H,\parallel x\parallel \leq 1,\parallel y\parallel \leq 1}\right\} . \] Show that the mapping \( u : H \times H \rightarrow \mathbf{C} \) defined by \[ u\left( {x, y}\right) = \langle {Tx}, y\rangle \] (2) is a bounded conjugate-linear functional on \( H \) such that \( \parallel u\parallel = \parallel T\parallel \) . Show also that each bounded conjugate-linear functional \( u \) on \( H \) is related to a unique corresponding operator \( T \) as in equation (2). (For the second part, show that for each \( x \in H \) the mapping \( y \mapsto u{\left( x, y\right) }^{ * } \) is a bounded linear functional on \( H \) .) .3 Verify the polarisation identity: \[ 4\langle {Tx}, y\rangle = \langle T\left( {x + y}\right), x + y\rangle - \langle T\left( {x - y}\right), x - y\rangle \] \[ + \mathrm{i}\langle T\left( {x + \mathrm{i}y}\right), x + \mathrm{i}y\rangle - \mathrm{i}\langle T\left( {x - \mathrm{i}y}\right), x - \mathrm{i}y\rangle . \] Show that if \( \langle {Sx}, x\rangle = \langle {Tx}, x\rangle \) for all \( x \in H \), then \( S = T \) . Give an example of a nonzero operator \( T \) on the real Hilbert space \( {\mathbf{R}}^{2} \) such that \( \langle {Tx}, x\rangle = 0 \) for all \( x \in {\mathbf{R}}^{2} \) . .4 Let \( \lambda ,\mu \) be complex numbers. Show that \( {\left( \lambda S + \mu T\right) }^{ * } = {\lambda }^{ * }{S}^{ * } + {\mu }^{ * }{T}^{ * } \) and \( {\left( ST\right) }^{ * } = {T}^{ * }{S}^{ * } \) . .5 Prove that \( {T}^{ * }T \) and \( T{T}^{ * } \) are selfadjoint. .6 Prove each of the following statements. (i) \( \ker \left( {T}^{ * }\right) = \operatorname{ran}{\left( T\right) }^{ \bot } \) (ii) \( \overline{\operatorname{ran}\left( {T}^{ * }\right) } = \ker {\left( T\right) }^{ \bot } \) . (iii) \( \ker \left( T\right) = \ker \left( {{T}^{ * }T}\right) \) . (iv) \( \operatorname{ran}\left( {T{T}^{ * }}\right) \) is dense in \( \operatorname{ran}\left( T\right) \) . .7 Show that (i) \( T \) is selfadjoint if and only if \( \langle {Tx}, x\rangle \in \mathbf{R} \) for all \( x \in H \) . (ii) \( T \) is normal if and only if \( \parallel {Tx}\parallel = \begin{Vmatrix}{{T}^{ * }x}\end{Vmatrix} \) for each \( x \in H \) . (For part (i), consider \( \langle {Tx}, x\rangle - \left\langle {{T}^{ * }x, x}\right\rangle \), and note Exercise (5.3.3: 3).) .8 Prove that \( T \) is a projection if and only if \( {T}^{ * }T = T \), in which case \( T \) is idempotent—that is, \( {T}^{2} = T \) . (For "if", show first that \( T \) is selfadjoint, and then that \( \left( {x - {Tx}}\right) \bot {Ty} \) for all \( x, y \in H \) .) We close this chapter by sketching how the techniques of Hilbert space theory can be applied to prove the existence of a type of solution for one of the fundamental problems of potential th
eory. (For more information on this topic, see, for example, pages 117-122 of [23].) For the rest of this chapter only, we follow the usual notational conventions of applied mathematicians. Thus we denote three-dimensional vectors by boldface letters, the element of volume in \( {\mathbf{R}}^{3} \) by \( \mathrm{d}V \), the element of surface area by \( \mathrm{d}S \), the unit outward normal to a surface by \( \mathbf{n} \), and the inner product of two vectors \( \mathbf{u},\mathbf{v} \) in \( {\mathbf{R}}^{3} \) by \( \mathbf{u} \cdot \mathbf{v} \) . We assume familiarity with calculus in \( {\mathbf{R}}^{3} \), including the elementary vector analysis of the gradient operator \( \nabla \) and the divergence operator div, defined, respectively, by \[ \nabla f = \left( {\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z}}\right) \] for a real-valued function \( f \), and \[ \operatorname{div}\mathbf{u} = \frac{\partial {u}_{x}}{\partial x} + \frac{\partial {u}_{y}}{\partial y} + \frac{\partial {u}_{z}}{\partial z} \] for a vector \( \mathbf{u} = \left( {{u}_{x},{u}_{y},{u}_{z}}\right) \) . We also assume the fundamentals of the theory of \( {L}_{2}\left( \Omega \right) \) when \( \Omega \) is a Lebesgue measurable subset of \( {\mathbf{R}}^{3} \) . Let \( \Omega \) be a bounded open set in \( {\mathbf{R}}^{3} \) for which Gauss's Divergence Theorem holds: \[ {\int }_{\Omega }\operatorname{div}\mathbf{u}\mathrm{d}V = {\int }_{\partial \Omega }\mathbf{u} \cdot \mathbf{n}\mathrm{d}S \] where \( \partial \Omega \) is the boundary surface of \( \Omega \) and \( \mathbf{u} : \bar{\Omega } \rightarrow {\mathbf{R}}^{3} \) is continuously differentiable on \( \Omega \) . It follows that Green's Theorem holds in the form \[ {\int }_{\Omega }\left( {u{\nabla }^{2}v - v{\nabla }^{2}u}\right) \mathrm{d}V = {\int }_{\partial \Omega }\left( {u\frac{\partial v}{\partial n} - v\frac{\partial u}{\partial n}}\right) \mathrm{d}S \] where \( u, v \) are twice continuously differentiable mappings of \( \Omega \) into \( \mathbf{R},\partial /\partial n \) denotes differentiation along the outward normal to \( \partial \Omega \), and \( {\nabla }^{2} \) is the Laplacian operator, \[ {\nabla }^{2} = \frac{{\partial }^{2}}{\partial {x}^{2}} + \frac{{\partial }^{2}}{\partial {y}^{2}} + \frac{{\partial }^{2}}{\partial {z}^{2}} \] We assume the following result, embodying Poincaré's inequality.
There exists a constant \( c > 0 \) such that if \( v : \bar{\Omega } \rightarrow \mathbf{R} \) is differentiable on \( \Omega \) and vanishes on the boundary of \( \Omega \), then \[ {\left( {\int }_{\Omega }{v}^{2}\mathrm{\;d}V\right) }^{1/2} \leq c{\left( {\int }_{\Omega }\parallel \nabla v{\parallel }^{2}\mathrm{\;d}V\right) }^{1/2}. \] For a proof of this inequality under reasonable conditions on \( \Omega \) we refer to [37], Chapter 5, Theorem 1. Given a bounded continuous function \( f : \Omega \rightarrow \mathbf{R} \), we consider the corresponding Dirichlet Problem: Find a function \( u : \bar{\Omega } \rightarrow \mathbf{R} \) that is twice differentiable on \( \Omega \) , satisfies \( {\nabla }^{2}u = f \) on \( \Omega \), and vanishes on the boundary of \( \Omega \) . Suppose we have found a solution \( u \) of this Dirichlet Problem. Let \( v : \Omega \rightarrow \) \( \mathbf{R} \) be twice differentiable and have compact support in \( \Omega \) -that is, \( v = 0 \) outside some compact subset of \( \Omega \) . Then it follows from Green’s Theorem that \[ {\int }_{\Omega }u{\nabla }^{2}v\mathrm{\;d}V = {\int }_{\Omega }{vf}\mathrm{\;d}V \] (3) since both \( u \) and \( v \) vanish on \( \partial \Omega \) . Now, it may not be possible to solve the Dirichlet Problem on \( \Omega \) ; but, as we now show, we can find a function \( u \) on \( \bar{\Omega } \) that behaves appropriately on \( \partial \Omega \) and that satisfies (3) for all \( v : \Omega \rightarrow \mathbf{R} \) that are twice differentiable and have compact support in \( \Omega \) . More advanced theory of partial differential equations then provides conditions on \( \Omega \) under which this so-called weak solution \( u \) of the Dirichlet Problem can be identified with a solution of the standard type. Let \( {\mathcal{C}}_{0}^{1}\left( \bar{\Omega }\right) \) be the space of functions \( u : \bar{\Omega } \rightarrow \mathbf{R} \) that have compact support in \( \Omega \) and are differentiable on \( \Omega \) ; and let \( {\mathcal{C}}^{1}\left( \bar{\Omega }\right) \) be the space of functions \( u : \bar{\Omega } \rightarrow \mathbf{R} \) such that \( u \) is differentiable on \( \Omega \) and \( \nabla u \) extends to a continuous function on \( \bar{\Omega } \) . Let \( {\widetilde{\mathcal{C}}}^{1}\left( \bar{\Omega }\right) \) be the space consisting of all elements of \( {\mathcal{C}}^{1}\left( \bar{\Omega }\right) \) that vanish on \( \partial \Omega ,{H}_{0}^{1}\left( \bar{\Omega }\right) \) the completion of \( {\widetilde{\mathcal{C}}}^{1}\left( \bar{\Omega }\right) \) with respect to the inner product defined by \[ \langle u, v\rangle = {\int }_{\Omega }\nabla u \cdot \nabla v\mathrm{\;d}V \] and \( \parallel \cdot {\parallel }_{H} \) the corresponding norm on \( {H}_{0}^{1}\left( \bar{\Omega }\right) \) . It is not hard to show that \( {\mathcal{C}}_{0}^{1}\left( \bar{\Omega }\right) \) is dense in \( {H}_{0}^{1}\left( \bar{\Omega }\right) \) with respect to this norm, and that \( {H}_{0}^{1}\left( \bar{\Omega }\right) \) can be identified with a certain set of Lebesgue integrable real-valued functions \( u \) on \( \bar{\Omega } \) . 
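Poincaré's inequality is also what makes \( \langle \cdot , \cdot \rangle \) a genuine inner product, rather than merely a semi-inner product, on \( {\widetilde{\mathcal{C}}}^{1}\left( \bar{\Omega }\right) \) : if \( v \in {\widetilde{\mathcal{C}}}^{1}\left( \bar{\Omega }\right) \) and \( \langle v, v\rangle = 0 \), then \[ {\int }_{\Omega }{v}^{2}\mathrm{\;d}V \leq {c}^{2}{\int }_{\Omega }\parallel \nabla v{\parallel }^{2}\mathrm{\;d}V = {c}^{2}\langle v, v\rangle = 0, \] so, \( v \) being continuous, \( v = 0 \) .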
Now define a linear functional \( {\varphi }_{f} \) on \( {\widetilde{\mathcal{C}}}^{1}\left( \bar{\Omega }\right) \) by \[ {\varphi }_{f}\left( v\right) = {\int }_{\Omega }{vf}\mathrm{\;d}V \] Applying the Cauchy-Schwarz inequality in the Hilbert space \( {L}_{2}\left( \Omega \right) \), we obtain \[ \left| {{\varphi }_{f}\left( v\right) }\right| \leq {\left( {\int }_{\Omega }{v}^{2}\mathrm{\;d}V\right) }^{1/2}{\left( {\int }_{\Omega }{f}^{2}\mathrm{\;d}V\right) }^{1/2}. \] Hence, by Poincaré's inequality, \[ \left| {{\varphi }_{f}\left( v\right) }\right| \leq c{\left( {\int }_{\Omega }{f}^{2}\mathrm{\;d}V\right) }^{1/2}{\left( {\int }_{\Omega }\parallel \nabla v{\parallel }^{2}\mathrm{\;d}V\right) }^{1/2} \] \[ = c{\left( {\int }_{\Omega }{f}^{2}\mathrm{\;d}V\right) }^{1/2}\parallel v{\parallel }_{H} \] where the constant \( c \) is independent of \( v \) . Thus the linear functional \( {\varphi }_{f} \) is bounded. It therefore extends by continuity to a bounded linear functional \( {\varphi }_{f} \) on \( {H}_{0}^{1}\left( \bar{\Omega }\right) \) ; see Exercise (4.2.2:10). Thus, by the Riesz Representation Theorem (5.3.1), there exists a unique element \( u \) of \( {H}_{0}^{1}\left( \bar{\Omega }\right) \) such that \[ {\varphi }_{f}\left( v\right) = - \langle v, u\rangle \;\left( {v \in {H}_{0}^{1}\left( \bar{\Omega }\right) }\right) . \] For each \( v \) that has compact support in \( \Omega \) and is twice differentiable on \( \Omega \) , we now use the elementary vector identity \[ \operatorname{div}\left( {u\nabla v}\right) = \nabla u \cdot \nabla v + u{\nabla }^{2}v \] and Gauss's Divergence Theorem to show that \[ {\int }_{\Omega }u{\nabla }^{2}v\mathrm{\;d}V = - \langle v, u\rangle + {\int }_{\Omega }\operatorname{div}\left( {u\nabla v}\right) \mathrm{d}V \] \[ = {\varphi }_{f}\left( v\right) + {\int }_{\partial \Omega }u\nabla v \cdot \mathbf{n}\mathrm{d}S \] \[ = {\int }_{\Omega }{vf}\mathrm{\;d}V \] (Recall that \( v = 0 \) on the boundary of \( \Omega \) ). This completes the proof that \( u \) is the weak solution that we wanted. 6 An Introduction to Functional Analysis ...a wonderful piece of work; which not to have been blessed withal would have discredited your travel. Antony and Cleopatra, Act 1, Scene 2 In this chapter we first discuss the Hahn-Banach Theorem, the most famous case of which provides conditions under which a bounded linear functional on a subspace of a normed space \( X \) can be extended, with preservation of its norm, to a bounded linear functional on the whole of \( X \) . We then present several applications of this theorem, some of which illustrate the interplay between a normed space and its dual. In Section 2 we use the Hahn-Banach Theorem to obtain results about the separation of convex sets by hyperplanes. The last section of the chapter introduces the Baire Category Theorem, and includes some of its many applications in classical and functional analysis. ## 6.1 The Hahn-Banach Theorem Let \( X \) be a linear space over \( \mathbf{F} \) . If \( \mathbf{F} = \mathbf{C} \), then by a complex-linear functional on \( X \) we mean a mapping \( f : X \rightarrow \mathbf{C} \) such that \[ f\left( {x + y}\right) = f\left( x\right) + f\left( y\right) \] and \[ f\left( {\lambda x}\right) = {\lam
bda f}\left( x\right) \] for all \( x, y \in X \) and all \( \lambda \in \mathbf{C} \) . If \( f \) maps \( X \) into \( \mathbf{R} \) and satisfies these equations for all real numbers \( \lambda \), then \( f \) is called a real-linear functional on \( X \) . According to our first lemma, real-linear functionals can be characterised as the real parts of associated complex-linear functionals. (6.1.1) Lemma. Let \( X \) be a complex normed linear space. If \( f \) is a complex-linear functional on \( X \) and \( u \) is the real part of \( f \), then \( u \) is a real-linear functional on \( X \) and \[ f\left( x\right) = u\left( x\right) - \mathrm{i}u\left( {\mathrm{i}x}\right) \;\left( {x \in X}\right) . \] (1) If \( u \) is a real-linear functional on \( X \) and \( f \) is defined by equation (1), then \( f \) is a complex-linear functional on \( X \) . Moreover, if \( f \) and \( u \) are related as in equation (1) and either \( f \) or \( u \) is bounded, then both functionals are bounded and \( \parallel f\parallel = \parallel u\parallel \) . Proof. If \( f \) is a complex-linear functional on \( X \) and \( u = \operatorname{Re}\left( f\right) \), then it is easy to show that \( u \) is real-linear; moreover, equation (1) follows from the fact that \( z = \operatorname{Re}\left( z\right) - \mathrm{i}\operatorname{Re}\left( {\mathrm{i}z}\right) \) for any complex number \( z \) . On the other hand, if \( u \) is a real-linear functional on \( X \), and \( f \) is defined as in (1), then it is clear that \( f\left( {x + y}\right) = f\left( x\right) + f\left( y\right) \), and that \( f\left( {\lambda x}\right) = {\lambda f}\left( x\right) \) for all real \( \lambda \) . Also, \[ f\left( {\mathrm{i}x}\right) = u\left( {\mathrm{i}x}\right) - \mathrm{i}u\left( {{\mathrm{i}}^{2}x}\right) \] \[ = u\left( {\mathrm{i}x}\right) - \mathrm{i}u\left( {-x}\right) \] \[ = u\left( {\mathrm{i}x}\right) + \mathrm{i}u\left( x\right) \] \[ = \mathrm{i}f\left( x\right) \] from which it follows that \( f \) is complex-linear. If \( f \) is bounded, then as \( \left| {u\left( x\right) }\right| \leq \left| {f\left( x\right) }\right| \) for all \( x \in X, u \) is bounded and \( \parallel u\parallel \leq \parallel f\parallel \) .
For each \( x \in X \) there exists \( \lambda \in \mathbf{C} \) such that \( \left| \lambda \right| = 1 \) and \( f\left( {\lambda x}\right) = {\lambda f}\left( x\right) = \left| {f\left( x\right) }\right| \) ; then \( f\left( {\lambda x}\right) \in \mathbf{R} \), so \[ \left| {f\left( x\right) }\right| = f\left( {\lambda x}\right) \] \[ = \operatorname{Re}\left( {f\left( {\lambda x}\right) }\right) \] \[ = u\left( {\lambda x}\right) \] \[ \leq \parallel u\parallel \parallel {\lambda x}\parallel = \parallel u\parallel \parallel x\parallel \] Hence \( \parallel f\parallel \leq \parallel u\parallel \), and therefore \( \parallel f\parallel = \parallel u\parallel \) . Finally, if \( u \) is bounded, then for all \( x \in X \) with \( \parallel x\parallel \leq 1 \) we have \[ \left| {f\left( x\right) }\right| \leq \left| {u\left( x\right) }\right| + \left| {u\left( {\mathrm{i}x}\right) }\right| \] \[ \leq \parallel u\parallel \left( {\parallel x\parallel + \parallel \mathrm{i}x\parallel }\right) \] \[ \leq 2\parallel u\parallel \] so \( f \) is bounded. By the foregoing, \( \parallel f\parallel = \parallel u\parallel \) . Let \( X \) be a vector space over \( \mathbf{F} \), and \( p \) a mapping of \( X \) into \( \mathbf{R} \) . We say that \( p \) is - subadditive if \( p\left( {x + y}\right) \leq p\left( x\right) + p\left( y\right) \) for all \( x, y \in X \) ; - positively homogeneous if \( p\left( {\lambda x}\right) = {\lambda p}\left( x\right) \) for all \( x \in X \) and \( \lambda \geq 0 \) ; - a sublinear functional if it is subadditive and positively homogeneous; - a seminorm if it is nonnegative and subadditive, and if \( p\left( {\lambda x}\right) = \) \( \left| \lambda \right| p\left( x\right) \) for all \( x \in X \) and \( \lambda \in \mathbf{F} \) . For example, if \( c \geq 0 \), then \( p\left( x\right) = c\parallel x\parallel \) defines a sublinear functional on \( X \) . Now let \( {X}_{0} \) be a subspace of \( X,{f}_{0} \) a linear functional on \( {X}_{0} \), and \( f \) a linear functional on \( X \) . We say that \( f \) extends \( {f}_{0} \) to \( X \), or that \( f \) is an extension of \( {f}_{0} \) to \( X \), if \( f\left( x\right) = {f}_{0}\left( x\right) \) for all \( x \in {X}_{0} \) . If also \( f \) is bounded and \( \parallel f\parallel = \begin{Vmatrix}{f}_{0}\end{Vmatrix} \), we say that \( f \) is a norm-preserving extension of \( {f}_{0} \) to \( X \) . We now prove a preliminary version of the extension theorem for linear functionals. (6.1.2) Proposition. Let \( X \) be a real normed space, \( {X}_{0} \) a subspace of \( X,{x}_{1} \) a point of \( X \smallsetminus {X}_{0} \), and \( {X}_{1} \) the subspace of \( X \) spanned by \( {X}_{0} \cup \left\{ {x}_{1}\right\} \) . Let \( p \) be a sublinear functional on \( X \), and \( {f}_{0} \) a linear functional on \( {X}_{0} \) such that \( {f}_{0}\left( x\right) \leq p\left( x\right) \) for all \( x \in {X}_{0} \) . Then there exists a linear functional \( f \) that extends \( {f}_{0} \) to \( {X}_{1} \) and satisfies \( f\left( x\right) \leq p\left( x\right) \) for all \( x \in {X}_{1} \) . Proof. Since \( {x}_{1} \notin {X}_{0} \), each element of \( {X}_{1} \) can be written uniquely in the form \( x + \lambda {x}_{1} \) with \( x \in {X}_{0} \) and \( \lambda \in \mathbf{R} \) . 
Let \( \tau \) be any real number, and provisionally define \[ f\left( {x + \lambda {x}_{1}}\right) = {f}_{0}\left( x\right) + {\lambda \tau } \] It is easily shown that \( f \) is a linear extension of \( {f}_{0} \) to \( {X}_{1} \) ; hence it remains to choose \( \tau \) so that \[ {f}_{0}\left( x\right) + {\lambda \tau } \leq p\left( {x + \lambda {x}_{1}}\right) \;\left( {x \in {X}_{0},\lambda \in \mathbf{R}\smallsetminus \{ 0\} }\right) . \] (2) To this end, replacing \( x \) by \( {\lambda x} \), using the positive homogeneity of \( p \), and then dividing both sides of (2) by \( \left| \lambda \right| \), we observe that (2) is equivalent to the two conditions \[ {f}_{0}\left( x\right) + \tau \leq p\left( {x + {x}_{1}}\right) \;\text{ if }x \in {X}_{0}\text{ and }\lambda > 0, \] \[ - {f}_{0}\left( x\right) - \tau \leq p\left( {-x - {x}_{1}}\right) \;\text{if}\;x \in {X}_{0}\;\text{and}\;\lambda < 0. \] In turn, these two conditions can be gathered together in one: \[ - p\left( {-{x}^{\prime } - {x}_{1}}\right) - {f}_{0}\left( {x}^{\prime }\right) \leq \tau \leq p\left( {x + {x}_{1}}\right) - {f}_{0}\left( x\right) \;\left( {x,{x}^{\prime } \in {X}_{0}}\right) . \] But for all \( x,{x}^{\prime } \in {X}_{0} \) we have \[ {f}_{0}\left( x\right) - {f}_{0}\left( {x}^{\prime }\right) = {f}_{0}\left( {x - {x}^{\prime }}\right) \] \[ \leq p\left( {x - {x}^{\prime }}\right) \] \[ = p\left( {x + {x}_{1} - {x}^{\prime } - {x}_{1}}\right) \] \[ \leq p\left( {x + {x}_{1}}\right) + p\left( {-{x}^{\prime } - {x}_{1}}\right) \] and therefore \[ - p\left( {-{x}^{\prime } - {x}_{1}}\right) - {f}_{0}\left( {x}^{\prime }\right) \leq p\left( {x + {x}_{1}}\right) - {f}_{0}\left( x\right) . \] Thus in order to satisfy (2), and thereby complete the proof, we need only invoke Exercise (1.1.1: 21). This brings us to the Hahn-Banach Theorem. (6.1.3) Theorem. Let \( {X}_{0} \) be a subspace of a real normed space \( X, p \) a sublinear functional on \( X \), and \( {f}_{0} \) a linear functional on \( {X}_{0} \) such that \( {f}_{0}\left( x\right) \leq p\left( x\right) \) for all \( x \) in \( {X}_{0} \) . Then there exists a linear functional \( f \) that extends \( {f}_{0} \) to \( X \) and satisfies \( f\left( x\right) \leq p\left( x\right) \) for all \( x \in X \) . Proof. Let \( \mathcal{F} \) denote the set of all linear functionals \( f \) that are defined on subspaces of \( X \) containing \( {X}_{0} \) and that have the following properties. (i) \( f = {f}_{0} \) on \( {X}_{0} \) and (ii) \( f\left( x\right) \leq p\left( x\right) \) for all \( x \) in the domain of \( f \) . Define the binary relation \( \preccurlyeq \) on \( \mathcal{F} \) by inclusion: \[ f \preccurlyeq g\text{if and only if}f \subset g\text{.} \] Then \( \preccurlyeq \) is a partial order on \( \mathcal{F} \) . Let \( \mathcal{C} \) be a chain in \( \mathcal{F} \) (that is, a nonempty totally ordered subset of \( \mathcal{F} \) ), and define \[ G = \mathop{\bigcup }\limits_{{g \in \mathcal{C}}}g = \{ \left( {x, y}\right) : \exists g \in \mathcal{C}\left( {y = g\left( x\right) }\right) \} . \] If \( \left( {x,{y}_{1}}\right) \in G \) and \( \left( {x,{y}_{2}}\right) \in G \), then there exist \( {g}_{1},{g}_{2} \in \mathcal{C} \) such that \( \left( {x,{y}_{1}}\right) \in \) \( {g}_{1} \) and \( \left( {x,{y}_{2}}\right) \in {g}_{2} \) . Si
nce \( \mathcal{C} \) is a chain, either \( {g}_{1} \subset {g}_{2} \) or else, as we may assume, \( {g}_{2} \subset {g}_{1} \) ; then \( \left( {x,{y}_{2}}\right) \in {g}_{1} \) and therefore, as \( {g}_{1} \) is a function, \( {y}_{2} = {y}_{1} \) . It follows that \( G \) is a function on \( X \) ; and that if \( x \) is in the domain of some \( g \in \mathcal{C} \), then \( x \) is in the domain of \( G, G\left( x\right) = g\left( x\right) \), and therefore \( G\left( x\right) \leq p\left( x\right) \) . It is easy to show that the domain of \( G \) contains \( {X}_{0} \), and that \( G = {f}_{0} \) on \( {X}_{0} \) . To complete the proof that \( G \in \mathcal{F} \), we must show that \( G \) is linear on \( X \) . To this end, given \( x,{x}^{\prime } \) in the domain of \( G \), choose \( g,{g}^{\prime } \in \mathcal{C} \) such that \( \left( {x, G\left( x\right) }\right) \in g \) and \( \left( {{x}^{\prime }, G\left( {x}^{\prime }\right) }\right) \in {g}^{\prime } \) . As \( \mathcal{C} \) is a chain, we may assume that \( {g}^{\prime } \subset g \), so that \( \left( {{x}^{\prime }, G\left( {x}^{\prime }\right) }\right) \in g \) ; as \( g \) is linear, it follows that \( x + {x}^{\prime } \) is in the domain of \( g \) and therefore in the domain of \( G \), and that \[ G\left( {x + {x}^{\prime }}\right) = g\left( {x + {x}^{\prime }}\right) \] \[ = g\left( x\right) + g\left( {x}^{\prime }\right) \] \[ = G\left( x\right) + G\left( {x}^{\prime }\right) \text{.} \] Similarly, for each \( \lambda \in \mathbf{R}, G\left( {\lambda x}\right) = {\lambda G}\left( x\right) \) . Hence \( G \in \mathcal{F} \) . It is trivial to verify that \( G \) is an upper bound of \( \mathcal{C} \) in \( \mathcal{F} \) . We can now apply Zorn's Lemma (see Appendix B) to produce a maximal element \( f \) of \( \mathcal{F} \) . It only remains to show that \( f \) is defined throughout \( X \) . But if \( f \) is not defined at some point \( {x}_{0} \) of \( X \), then, using Proposition (6.1.2), we can find an element \( g \) of \( \mathcal{F} \) such that \( f \preccurlyeq g \) and \( g \) is defined at \( {x}_{0} \) . Since \( f \) is maximal in \( \mathcal{F} \), it follows that \( f = g \), a contradiction. The name "Hahn-Banach Theorem" is often applied to the following corollary. (6.1.4) Corollary. Let \( {X}_{0} \) be a subspace of a normed space \( X \), and \( {f}_{0} \) a bounded linear functional on \( {X}_{0} \) . Then there exists a norm-preserving extension of \( {f}_{0} \) to \( X \) . Proof.
First consider the case where \( {f}_{0} \) is a real-linear functional on \( {X}_{0} \) . Applying Theorem (6.1.3) with \( p\left( x\right) = \begin{Vmatrix}{f}_{0}\end{Vmatrix}\parallel x\parallel \), we obtain a real-linear functional \( f \) that extends \( {f}_{0} \) to \( X \) and satisfies \( f\left( x\right) \leq \begin{Vmatrix}{f}_{0}\end{Vmatrix}\parallel x\parallel \) for all \( x \in X \) . Replacing \( x \) by \( - x \) in this last inequality, we see that \[ \left| {f\left( x\right) }\right| = \max \{ f\left( x\right) , - f\left( x\right) \} \leq \begin{Vmatrix}{f}_{0}\end{Vmatrix}\parallel x\parallel \] for all \( x \in X \) ; whence \( f \) is bounded, and \( \parallel f\parallel \leq \begin{Vmatrix}{f}_{0}\end{Vmatrix} \) . But \( f \) extends \( {f}_{0} \), so \( \parallel f\parallel \geq \begin{Vmatrix}{f}_{0}\end{Vmatrix} \) and therefore \( \parallel f\parallel = \begin{Vmatrix}{f}_{0}\end{Vmatrix} \) . When \( f \) is a complex-linear functional, we apply the foregoing argument to construct a norm-preserving extension \( u \) of the real-linear functional \( \operatorname{Re}\left( {f}_{0}\right) \) to \( X \) . Lemma (6.1.1) then shows us that \[ f\left( x\right) = u\left( x\right) - \mathrm{i}u\left( {\mathrm{i}x}\right) \] defines a norm-preserving extension of \( {f}_{0} \) to \( X \) . ## (6.1.5) Exercises .1 Prove the complex Hahn-Banach Theorem: let \( {X}_{0} \) be a subspace of a complex normed space \( X, p \) a seminorm on \( X \), and \( {f}_{0} \) a linear functional on \( {X}_{0} \) such that \( \left| {{f}_{0}\left( x\right) }\right| \leq p\left( x\right) \) for all \( x \in {X}_{0} \) ; then there exists a linear functional \( f \) that extends \( {f}_{0} \) to \( X \) and satisfies \( \left| {f\left( x\right) }\right| \leq \) \( p\left( x\right) \) for all \( x \in X \) . (First apply the Hahn-Banach Theorem to the real-linear functional \( \operatorname{Re}\left( {f}_{0}\right) \) .) .2 Let \( X \) be a separable normed space. Prove Theorem (6.1.3) without using Zorn’s Lemma. (Let \( \left( {x}_{n}\right) \) be a dense sequence in \( X \), and \( p \) a sublinear functional on \( X \) . Starting with a given linear functional \( {f}_{0} \) on a subspace \( {X}_{0} \) of \( X \), extend \( {f}_{0} \) inductively to the subspace \( {X}_{n} \) of \( X \) spanned by \( {X}_{n - 1} \cup \left\{ {x}_{n}\right\} \), such that the linear extension \( {f}_{n} \) to \( {X}_{n} \) satisfies \( {f}_{n}\left( x\right) \leq p\left( x\right) \) for all \( x \in {X}_{n} \) . Then consider \( f = \mathop{\bigcup }\limits_{{n = 0}}^{\infty }{f}_{n} \) .) The Hahn-Banach Theorem—especially in the form of Corollary (6.1.4)— has many interesting applications. We begin with some of the simpler ones. (6.1.6) Proposition. Let \( S \) be a closed subspace of a normed space \( X \) , and let \( {x}_{0} \in X \smallsetminus S \) . Then there exists a bounded linear functional \( f \) on \( X \) such that (i) \( f\left( {x}_{0}\right) = 1 \) and (ii) \( f\left( x\right) = 0 \) for all \( x \in S \) . Proof. Let \( {X}_{0} \) be the subspace of \( X \) spanned by \( S \cup \left\{ {x}_{0}\right\} \), and define \[ {f}_{0}\left( {x + \lambda {x}_{0}}\right) = \lambda \;\left( {x \in S,\lambda \in \mathbf{F}}\right) . \] (This is a good definition: for, as \( {x}_{0} \notin S \), the representation of a given element of \( {X}_{0} \) in the form \( x + \lambda {x}_{0} \), with \( x \in S \) and \( \lambda \in \mathbf{F} \), is unique.) 
Then \( {f}_{0} \) is a linear functional on \( {X}_{0},{f}_{0}\left( x\right) = 0 \) if \( x \in S \), and \( {f}_{0}\left( {x}_{0}\right) = 1 \) . Now, as \( S \) is closed, we see from Exercise (3.1.10:3) that \( \rho \left( {{x}_{0}, S}\right) > 0 \) . So for all \( x \) in \( S \) and all nonzero \( \lambda \in \mathbf{F} \) , \[ \begin{Vmatrix}{x + \lambda {x}_{0}}\end{Vmatrix} = \left| \lambda \right| \begin{Vmatrix}{{\lambda }^{-1}x + {x}_{0}}\end{Vmatrix} \geq \left| \lambda \right| \rho \left( {{x}_{0}, S}\right) . \] Hence \[ \left| {f\left( {x + \lambda {x}_{0}}\right) }\right| = \left| \lambda \right| \leq \rho {\left( {x}_{0}, S\right) }^{-1}\begin{Vmatrix}{x + \lambda {x}_{0}}\end{Vmatrix}, \] and therefore \( {f}_{0} \) has the bound \( \rho {\left( {x}_{0}, S\right) }^{-1} \) . Applying Corollary (6.1.4) to \( {f}_{0} \), we obtain the desired linear functional \( f \) on \( X \) . (6.1.7) Proposition. If \( {x}_{0} \) is a nonzero element of a normed space \( X \) , then there exists a bounded linear functional \( f \) on \( X \) such that \( f\left( {x}_{0}\right) = \begin{Vmatrix}{x}_{0}\end{Vmatrix} \) and \( \parallel f\parallel = 1 \) . Proof. Let \( {X}_{0} \) be the subspace of \( X \) generated by \( \left\{ {x}_{0}\right\} \), define a linear functional \( {f}_{0} \) on \( {X}_{0} \) by \( {f}_{0}\left( {\lambda {x}_{0}}\right) = \lambda \begin{Vmatrix}{x}_{0}\end{Vmatrix} \), and apply Corollary (6.1.4) to \( {f}_{0} \) . (6.1.8) Corollary. For each \( x \) in a normed space \( X \) , \[ \parallel x\parallel = \sup \left\{ {\left| {f\left( x\right) }\right| : f \in {X}^{ * },\parallel f\parallel = 1}\right\} . \] Proof. If \( x = 0 \), the conclusion is trivial. If \( x \neq 0 \), then for all \( f \in {X}^{ * } \) with \( \parallel f\parallel = 1 \) we have \[ \left| {f\left( x\right) }\right| \leq \parallel f\parallel \parallel x\parallel = \parallel x\parallel . \] Since, by Proposition (6.1.7), there exists \( f \in {X}^{ * } \) such that \( \parallel f\parallel = 1 \) and \( f\left( x\right) = \parallel x\parallel \), the result follows. The remaining results and exercises in this section illustrate the interaction between a normed space \( X \) and its dual \( {X}^{ * } \), one of the most fascinating and beautiful aspects of modern analysis, in which the Hahn-Banach Theorem plays a fundamental part. ## (6.1.9) Exercises .1 Show that if \( X \) is a finite-dimensional Banach space, then \( {X}^{ * } \) is finite-dimensional and \( \dim \left( {X}^{ * }\right) = \dim \left( X\right) \) . (Reduce to the case where \( X \) is \( n \) -dimensional Euclidean space.) .2 Let \( S \) be a closed subspace of a Banach space \( X \), and define \[ {S}^{ \bot } = \left\{ {f \in {X}^{ * } : f\left( x\right) = 0\text{ for all }x \in S}\right\} . \] Prove that \( {S}^{ \bot } \) is a closed linear subspace of \( {X}^{ * } \) . Show that the f
ollowing procedure yields a well-defined mapping \( T \) of \( {S}^{ * } \) into \( {X}^{ * }/{S}^{ \bot } \) : given \( f \) in \( {S}^{ * } \), choose a norm-preserving extension \( F \) of \( f \) to \( X \), and set \( {Tf} \) equal to the element of \( {X}^{ * }/{S}^{ \bot } \) that contains \( F \) . Prove that \( T \) is a norm-preserving linear isomorphism of \( {S}^{ * } \) onto \( {X}^{ * }/{S}^{ \bot } \) . Hence prove that for each \( F \in {X}^{ * } \) , \[ \sup \{ \left| {F\left( x\right) }\right| : x \in S,\parallel x\parallel \leq 1\} = \inf \left\{ {\parallel F - f\parallel : f \in {S}^{ \bot }}\right\} . \] .3 Let \( S \) be a closed linear subspace of a Banach space \( X \), let \( \varphi \) be the canonical map of \( X \) onto \( X/S \), and for each \( f \) in \( {\left( X/S\right) }^{ * } \) define \( {Tf} = f \circ \varphi \) . Prove that \( T \) is an isometric linear isomorphism of \( {\left( X/S\right) }^{ * } \) onto \( {S}^{ \bot } \) . Hence prove that for each \( x \in X \) , \[ \inf \{ \parallel x - s\parallel : s \in S\} = \sup \left\{ {\left| {f\left( x\right) }\right| : f \in {S}^{ \bot },\parallel f\parallel \leq 1}\right\} . \] .4 Let \( {x}_{1},\ldots ,{x}_{n} \) be elements of a real normed space \( X \), and \( {c}_{1},\ldots ,{c}_{n} \) real numbers. Prove the equivalence of the following conditions. (i) There exists \( f \in {X}^{ * } \) with \( \parallel f\parallel = 1 \) and \( f\left( {x}_{i}\right) = {c}_{i} \) for each \( i \) . (ii) \( \left| {{\lambda }_{1}{c}_{1} + \cdots + {\lambda }_{n}{c}_{n}}\right| \leq \begin{Vmatrix}{{\lambda }_{1}{x}_{1} + \cdots + {\lambda }_{n}{x}_{n}}\end{Vmatrix} \) for all real numbers \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) . .5 Let \( X \) be a normed space, and define \[ \widehat{x}\left( f\right) = f\left( x\right) \;\left( {x \in X, f \in {X}^{ * }}\right) . \] Prove that (i) the mapping \( x \mapsto \widehat{x} \) is a linear isometry of \( X \) into its second dual \( {X}^{* * } \) ; (ii) \( X \) is reflexive (see Exercise (5.3.2: 3)) if and only if this mapping has range \( {X}^{* * } \) ; (iii) if \( X \) is reflexive, then it is a Banach space. .6 Prove that if \( X \) is an infinite-dimensional normed space, then \( {X}^{ * } \) is infinite-dimensional. (cf. Exercise (6.1.9:1). Suppose that \( {X}^{ * } \) is finite-dimensional, and consider the mapping \( x \mapsto \widehat{x} \) defined in the preceding exercise.) .7 Let \( X \) be a Banach space.
Prove that (i) if \( {X}^{ * } \) is separable, then so is \( X \) ; (ii) if \( X \) is separable and reflexive, then \( {X}^{ * } \) is separable. (For (i), let \( \left\{ {{f}_{1},{f}_{2},\ldots }\right\} \) be dense in \( {X}^{ * } \), and for each \( n \) choose a unit vector \( {x}_{n} \) such that \( \left| {{f}_{n}\left( {x}_{n}\right) }\right| \geq \frac{1}{2}\begin{Vmatrix}{x}_{n}\end{Vmatrix} \) . Let \( Y \) be the closure of the subspace generated by \( \left\{ {{x}_{1},{x}_{2},\ldots }\right\} \), suppose that \( Y \neq X \), and use Proposition (6.1.6) to deduce a contradiction.) .8 Prove that a closed subspace \( Y \) of a reflexive Banach space \( X \) is reflexive. (For each \( f \in {X}^{ * } \) let \( {f}_{Y} \) denote the restriction of \( f \) to \( Y \) . Given \( u \in {Y}^{* * } \), choose \( \xi \in X \) such that \( u\left( {f}_{Y}\right) = f\left( \xi \right) \) for all \( f \in {X}^{ * } \) . Then use Propositions (6.1.6) and (6.1.4).) .9 We saw in Exercise (4.4.11:5) that the space \( {L}_{\infty } \), introduced in Exercise (4.4.11: 2), can be identified with the dual space of \( {L}_{1} = \) \( {L}_{1}\left( \mathbf{R}\right) \) . In this exercise we show that \( {L}_{1} \) can be identified with a subset of the dual of \( {L}_{\infty } \) but is not the whole of that dual. Prove that for each \( g \in {L}_{1} \) , \[ {u}_{g}\left( f\right) = \int {fg} \] defines an element of the dual of \( {L}_{\infty } \) . Let \( {X}_{0} \) be the set of all continuous functions \( f : \mathbf{R} \rightarrow \mathbf{R} \) that vanish outside some compact set, and define a bounded linear functional \( {u}_{0} \) on \( {X}_{0} \) by \( {u}_{0}\left( f\right) = f\left( 0\right) \) . Using Corollary (6.1.4), construct a norm-preserving extension \( u \) of \( {u}_{0} \) to \( {L}_{\infty } \) . By considering \( u\left( {f}_{n}\right) \), where \[ {f}_{n}\left( x\right) = \left\{ \begin{array}{ll} {\left( 1 - \left| x\right| \right) }^{n} & \text{ if }\left| x\right| \leq 1 \\ 0 & \text{ if }\left| x\right| > 1 \end{array}\right. \] show that there is no element \( g \) of \( {L}_{1} \) such that \( u = {u}_{g} \) . It follows from this exercise that, in contrast to \( {L}_{p} \) for \( 1 < p < \infty \) (see Theorem (4.4.10)), \( {L}_{1} \) is not reflexive. The next three lemmas, together with our work on the Hahn-Banach Theorem, enable us to produce a substantial strengthening of the following consequence of Riesz's Lemma (4.3.5): in an infinite-dimensional normed space, if \( 0 < \theta < 1 \), then there exists a sequence \( \left( {x}_{n}\right) \) of unit vectors such that \( \begin{Vmatrix}{{x}_{m} - {x}_{n}}\end{Vmatrix} > \theta \) whenever \( m \neq n \) . (6.1.10) Lemma. If \( f,{f}_{1},\ldots ,{f}_{n} \) are linear functionals on a linear space \( X \) over \( \mathbf{F} \) such that \( \ker \left( f\right) \supset \mathop{\bigcap }\limits_{{i = 1}}^{n}\ker \left( {f}_{i}\right) \), then \( f,{f}_{1},\ldots ,{f}_{n} \) are linearly dependent. Proof. We may assume that none of the functions under consideration is identically zero. We proceed by induction on \( n \) . In the case \( n = 1 \), choose \( a \in X \) such that \( {f}_{1}\left( a\right) = 1 \) . Then for each \( x \in X \) , \[ \left( {x - {f}_{1}\left( x\right) a}\right) \in \ker \left( {f}_{1}\right) \] so \[ 0 = f\left( {x - {f}_{1}\left( x\right) a}\right) = f\left( x\right) - f\left( a\right) {f}_{1}\left( x\right) . 
\] Hence \( f = f\left( a\right) {f}_{1} \), and therefore \( f \) and \( {f}_{1} \) are linearly dependent. Now suppose that the lemma holds for \( n = k \), and consider the case \( n = k + 1 \) . Let \( g \) be the restriction of \( f \) to \( \ker \left( {f}_{k + 1}\right) \), and for \( i = 1,\ldots, k \) let \( {g}_{i} \) be the restriction of \( {f}_{i} \) to \( \ker \left( {f}_{k + 1}\right) \) . Then \( \ker \left( g\right) \supset \mathop{\bigcap }\limits_{{i = 1}}^{k}\ker \left( {g}_{i}\right) \), so \( g = \mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{g}_{i} \) for some elements \( {\lambda }_{i} \) of \( \mathbf{F} \), by our induction hypothesis. Thus \( f - \mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{f}_{i} \) vanishes on \( \ker \left( {f}_{k + 1}\right) \) . By the case \( n = 1 \) that we have already proved, \( f - \mathop{\sum }\limits_{{i = 1}}^{k}{\lambda }_{i}{f}_{i} \) and \( {f}_{k + 1} \) are linearly dependent; so \( f,{f}_{1},\ldots ,{f}_{k + 1} \) are linearly dependent and the induction is complete. (6.1.11) Lemma. Let \( X \) be an infinite-dimensional normed space, and \( {f}_{1},\ldots ,{f}_{n} \) elements of \( {X}^{ * } \) . Then \( \mathop{\bigcap }\limits_{{i = 1}}^{n}\ker \left( {f}_{i}\right) \neq \{ 0\} \) . Proof. First assume that the \( {f}_{i} \) are linearly independent. By Exercise (6.1.9: 6), there exists an element \( f \) of \( {X}^{ * } \) such that \( f,{f}_{1},\ldots ,{f}_{n} \) are linearly independent. Lemma (6.1.10) now shows that \( \mathop{\bigcap }\limits_{{i = 1}}^{n}\ker \left( {f}_{i}\right) \) is not contained in \( \ker \left( f\right) \), from which the desired conclusion follows immediately. Now consider the case where the \( {f}_{i} \) are linearly dependent. Without loss of generality, we may assume that for some \( m \leq n,\left\{ {{f}_{1},\ldots ,{f}_{m}}\right\} \) is a basis for the linear space generated by all the \( {f}_{i} \) . By the first part of the proof, there exists a nonzero element \( \xi \) in \( \mathop{\bigcap }\limits_{{i = 1}}^{m}\ker \left( {f}_{i}\right) \) ; clearly, \( \xi \in \) \( \mathop{\bigcap }\limits_{{i = 1}}^{n}\ker \left( {f}_{i}\right) . \) (6.1.12) Lemma. Let \( X \) be an infinite-dimensional normed space, and \( {f}_{1},\ldots ,{f}_{n} \) linearly independent elements of \( {X}^{ * } \) . Then there exist nonzero elements \( \xi ,\eta \) of \( X \) such that \( {f}_{i}\left( \eta \right) < 0 = {f}_{i}\left( \xi \right) \) for each \( i \) . Proof. The existence of \( \xi \) follows from Lemma (6.1.11). On the other hand, Lemma (6.1.10) shows that for each \( i \) there exists \( {x}_{i} \in X \) such that \( {f}_{i}\left( {x}_{i}\right) = 1 \) and \( {f}_{j}\left( {x}_{i}\right) = 0 \) when \( j \neq i \) . Setting \( \eta = - \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i} \), we see that \( {f}_{i}\left( \eta \right) = - 1 \) for each \( i \) . (6.1.13) Proposition. If \( X \) is an infinite-dimensional normed space, then there exists a sequence \( \left( {x}_{n}\right) \) of unit vectors in \( X \)
such that \( \begin{Vmatrix}{{x}_{m} - {x}_{n}}\end{Vmatrix} \) \( > 1 \) whenever \( m \neq n \) . Proof. We construct the required vectors inductively as follows. Choosing a unit vector \( {x}_{1} \in X \), apply Proposition (6.1.7) to obtain \( {f}_{1} \in {X}^{ * } \) such that \( \begin{Vmatrix}{f}_{1}\end{Vmatrix} = 1 = {f}_{1}\left( {x}_{1}\right) \) . Now suppose that we have constructed unit vectors \( {x}_{1},\ldots ,{x}_{n} \) in \( X \), and linearly independent unit vectors \( {f}_{1},\ldots ,{f}_{n} \) in \( {X}^{ * } \), such that \( {f}_{i}\left( {x}_{i}\right) = 1 = \begin{Vmatrix}{f}_{i}\end{Vmatrix} \) for each \( i \) . By Lemma (6.1.12), there exist nonzero elements \( \xi ,\eta \) of \( X \) such that \( {f}_{i}\left( \eta \right) < 0 = {f}_{i}\left( \xi \right) \) for each \( i \) . Choose \( c > 0 \) such that \( \parallel \eta \parallel < \parallel \eta + {c\xi }\parallel \) . Setting \[ {x}_{n + 1} = \parallel \eta + {c\xi }{\parallel }^{-1}\left( {\eta + {c\xi }}\right) \] note that \( {f}_{i}\left( {x}_{n + 1}\right) < 0 \) for \( 1 \leq i \leq n \) . Now use Proposition (6.1.7) to obtain an element \( {f}_{n + 1} \) of \( {X}^{ * } \) such that \( \begin{Vmatrix}{f}_{n + 1}\end{Vmatrix} = 1 = {f}_{n + 1}\left( {x}_{n + 1}\right) \) . Suppose that \( {f}_{n + 1} = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{f}_{i} \) for some elements \( {\lambda }_{i} \) of \( \mathbf{F} \) . Then \[ \parallel \eta + {c\xi }\parallel = {f}_{n + 1}\left( {\eta + {c\xi }}\right) \] \[ = \left| {\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{f}_{i}\left( {\eta + {c\xi }}\right) }\right| \] \[ = \left| {\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{f}_{i}\left( \eta \right) }\right| \] \[ = \left| {{f}_{n + 1}\left( \eta \right) }\right| \] \[ \leq \parallel \eta \parallel \] \[ < \parallel \eta + {c\xi }\parallel \] a contradiction. Hence the linear functionals \( {f}_{1},\ldots ,{f}_{n},{f}_{n + 1} \) are linearly independent. Moreover, if \( 1 \leq i \leq n \), then \[ \begin{Vmatrix}{{x}_{n + 1} - {x}_{i}}\end{Vmatrix} \geq \left| {{f}_{i}\left( {{x}_{n + 1} - {x}_{i}}\right) }\right| = \left| {{f}_{i}\left( {x}_{n + 1}\right) - {f}_{i}\left( {x}_{i}\right) }\right| > 1, \] since \( {f}_{i}\left( {x}_{i}\right) = 1 \) and \( {f}_{i}\left( {x}_{n + 1}\right) < 0 \) . This completes our inductive construction. 
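As a concrete illustration (an editorial aside, not part of the original text), the conclusion of Proposition (6.1.13) is easy to see in a Hilbert space: if \( \left( {e}_{n}\right) \) is an orthonormal sequence in \( {l}_{2} \), then for \( m \neq n \) , \[ \begin{Vmatrix}{{e}_{m} - {e}_{n}}\end{Vmatrix} = \sqrt{{\begin{Vmatrix}{e}_{m}\end{Vmatrix}}^{2} + {\begin{Vmatrix}{e}_{n}\end{Vmatrix}}^{2}} = \sqrt{2} > 1. \] The force of the proposition is that a sequence with this separation property exists in every infinite-dimensional normed space, where no inner product, and hence no notion of orthogonality, is available.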
A sequence \( \left( {x}_{n}\right) \) in a Banach space \( X \) is called a Schauder basis if for each \( x \in X \) there exists a unique sequence \( \left( {\lambda }_{n}\right) \) in \( \mathbf{F} \) such that \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}{x}_{n} \) . In that case, \( X \) is separable, and the mapping \( x \mapsto {\left( {\lambda }_{n}\right) }_{n = 1}^{\infty } \) can be used to identify \( X \) with a sequence space. The notion of a Schauder basis generalises that of a basis in a finite-dimensional space. In the spaces \( {c}_{0} \) and \( {l}_{p}\left( {1 \leq p < \infty }\right) \) let \( {e}_{n} \) be the vector with \( n \) th term equal to 1 and all other terms 0 ; then \( \left\{ {{e}_{1},{e}_{2},\ldots }\right\} \) is a Schauder basis. Schauder bases for other separable Banach spaces, such as \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \), are not so easy to construct, and Enflo [15] has shown that there exist separable Banach subspaces of \( {c}_{0} \) that do not have a Schauder basis. We can, however, prove the following theorem. (6.1.14) Theorem. Every infinite-dimensional Banach space contains an infinite-dimensional closed subspace with a Schauder basis. The next two lemmas make this possible. (6.1.15) Lemma. Let \( \left( {x}_{n}\right) \) be a total sequence of nonzero elements of a Banach space \( X \), and \( c \) a positive number such that if \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) belong to \( \mathbf{F} \), and \( m < n \), then \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{m}{\lambda }_{i}{x}_{i}}\end{Vmatrix} \leq c\begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{x}_{i}}\end{Vmatrix}. \] (3) Then \( \left( {x}_{i}\right) \) is a Schauder basis for \( X \) . Proof. Consider any sequence \( \left( {\lambda }_{n}\right) \) in \( \mathbf{F} \) such that \( \mathop{\sum }\limits_{{i = 1}}^{\infty }{\lambda }_{i}{x}_{i} \) converges in \( X \) . If \( n > k \), then \[ \left| {\lambda }_{k}\right| = {\begin{Vmatrix}{x}_{k}\end{Vmatrix}}^{-1}\begin{Vmatrix}{{\lambda }_{k}{x}_{k}}\end{Vmatrix} \] \[ \leq c{\begin{Vmatrix}{x}_{k}\end{Vmatrix}}^{-1}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = k}}^{n}{\lambda }_{i}{x}_{i}}\end{Vmatrix} \] Letting \( n \rightarrow \infty \), we see that \[ \left| {\lambda }_{k}\right| \leq c{\begin{Vmatrix}{x}_{k}\end{Vmatrix}}^{-1}\begin{Vmatrix}{\mathop{\sum }\limits_{{i = k}}^{\infty }{\lambda }_{i}{x}_{i}}\end{Vmatrix}. \] A simple induction argument now enables us to prove that if \( \mathop{\sum }\limits_{{i = 1}}^{\infty }{\lambda }_{i}{x}_{i} = \) 0, then \( {\lambda }_{i} = 0 \) for each \( i \) . Thus a given element of \( X \) has at most one representation in the form \( \mathop{\sum }\limits_{{i = 1}}^{\infty }{\lambda }_{i}{x}_{i} \) with each \( {\lambda }_{i} \) in \( \mathbf{F} \) . It remains to show that such a representation exists. Let \( {X}_{\infty } \) be the subspace of \( X \) generated by \( \left\{ {{x}_{1},{x}_{2},\ldots }\right\} \), and for each \( n \) let \( {X}_{n} \) be the subspace generated by \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) . Define a (clearly linear) mapping \( {P}_{n} \) of \( {X}_{\infty } \) onto \( {X}_{n} \) by \[ {P}_{n}\left( {\mathop{\sum }\limits_{{i = 1}}^{\infty }{\lambda }_{i}{x}_{i}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{x}_{i} \] It follows from (3) that \( c \) is a bound for \( {P}_{n} \) on \( {X}_{\infty } \) . 
But \( {X}_{\infty } \) is dense in \( X \), so, by Exercise (4.2.2: 10), \( {P}_{n} \) extends to a bounded linear mapping \( {P}_{n} \) on \( X \) with bound \( c \) . By Corollary (6.1.4), the mapping \( \mathop{\sum }\limits_{{i = 1}}^{\infty }{\lambda }_{i}{x}_{i} \mapsto {\lambda }_{n} \) extends to a bounded linear functional \( {f}_{n} \) on \( X \) such that \[ {f}_{n}\left( x\right) {x}_{n} = {P}_{n}\left( x\right) - {P}_{n - 1}\left( x\right) \] where, for convenience, we set \( {P}_{0}\left( x\right) = 0 \) . We prove that \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n}\left( x\right) {x}_{n} \) for each \( x \in X \) . To this end, let \( \varepsilon > 0 \) and, using the fact that the sequence \( \left( {x}_{n}\right) \) is total, choose \( {\lambda }_{1},\ldots ,{\lambda }_{N} \) in \( \mathbf{F} \) such that \[ \begin{Vmatrix}{x - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{x}_{n}}\end{Vmatrix} < \varepsilon \] For each \( k \geq N \) we have \[ \begin{Vmatrix}{x - {P}_{k}\left( x\right) }\end{Vmatrix} \leq \begin{Vmatrix}{x - \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{x}_{n}}\end{Vmatrix} + \begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{x}_{n} - {P}_{k}\left( x\right) }\end{Vmatrix} \] \[ < \varepsilon + \begin{Vmatrix}{{P}_{k}\left( {\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{x}_{n} - x}\right) }\end{Vmatrix} \] \[ \leq \varepsilon + \begin{Vmatrix}{P}_{k}\end{Vmatrix}\begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{x}_{n} - x}\end{Vmatrix} \] \[ \leq \varepsilon + \begin{Vmatrix}{P}_{k}\end{Vmatrix}\varepsilon \] \[ \leq \left( {1 + c}\right) \varepsilon \text{.} \] Hence \[ x = \mathop{\lim }\limits_{{k \rightarrow \infty }}{P}_{k}x = \mathop{\lim }\limits_{{k \rightarrow \infty }}\mathop{\sum }\limits_{{n = 1}}^{k}{f}_{n}\left( x\right) {x}_{n} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{f}_{n}\left( x\right) {x}_{n}. \] (6.1.16) S. Mazur’s Lemma. Let \( Y \) be a finite-dimensional subspace of an infinite-dimensional Banach space \( X \) . Then for each \( \varepsilon > 0 \) there exists a unit vector \( \xi \in X \) such that \[ \parallel y\parallel \leq \left( {1 + \varepsilon }\right) \parallel y + {\lambda \xi }\parallel \] (4) for all \( y \in Y \) and \( \lambda \in \mathbf{F} \) . Proof. Without loss of generality we may take \( \varepsilon < 1 \) . Let \( \left\{ {{y}_{1},\ldots ,{y}_{n}}\right\} \) be an \( \varepsilon /2 \) -approximation to the set \[ S = \{ y \in Y : \parallel y\parallel = 1\} \] (which is compact, by Exercise (4.3.7:2)). Using Proposition (6.1.7), for \( i = 1,\ldots, n \) construct \( {f}_{i} \in {X}^{ * } \) with norm 1 such that \( {f}_{i}\left( {y}_{i}\right) = 1 \) . By Lemma (6.1.11), there exists a unit vector \( \xi \in \mathop{\bigcap }\limits_{{i = 1}}^{n}\ker \left( {f}_{i}\right) \) . Consider any vector \( y \in Y \) and any \( \lambda \in \mathbf{F} \) . If \( y = 0 \), then (4) is trivial. If \( y \neq 0 \), then we may assume that \( \parallel y\parallel = 1 \) : otherwise, we just consider \( \parallel y{\para
llel }^{-1}y \) . Choosing \( i \) such that \( \begin{Vmatrix}{y - {y}_{i}}\end{Vmatrix} < \varepsilon /2 \), we have \[ \parallel y + {\lambda \xi }\parallel \geq \begin{Vmatrix}{{y}_{i} + {\lambda \xi }}\end{Vmatrix} - \begin{Vmatrix}{y - {y}_{i}}\end{Vmatrix} \] \[ \geq {f}_{i}\left( {{y}_{i} + {\lambda \xi }}\right) - \frac{\varepsilon }{2} \] \[ = 1 - \frac{\varepsilon }{2} \] \[ > \frac{1}{1 + \varepsilon } \] since \( \varepsilon < 1 \) . Hence (4) obtains. Proof of Theorem (6.1.14). Let \( X \) be an infinite-dimensional Banach space, and \( \varepsilon > 0 \) . Choose positive numbers \( {\varepsilon }_{n} \) such that \[ \ln \left( {1 + {\varepsilon }_{n}}\right) < {2}^{-n - 2}\ln \left( {1 + \varepsilon }\right) \] for each \( n \) . Then \[ \mathop{\prod }\limits_{{n = 1}}^{\infty }\left( {1 + {\varepsilon }_{n}}\right) = \mathop{\lim }\limits_{{N \rightarrow \infty }}\mathop{\prod }\limits_{{n = 1}}^{N}\left( {1 + {\varepsilon }_{n}}\right) \leq \sqrt{1 + \varepsilon } < 1 + \varepsilon . \] Let \( {x}_{1} \) be a unit vector in \( X \) . By Mazur’s Lemma, there exists a unit vector \( {x}_{2} \in X \) such that \[ \parallel y\parallel \leq \left( {1 + {\varepsilon }_{1}}\right) \begin{Vmatrix}{y + \lambda {x}_{2}}\end{Vmatrix} \] for all \( y \) in the subspace generated by \( {x}_{1} \) and for all \( \lambda \in \mathbf{F} \) . By the same lemma, there exists a unit vector \( {x}_{3} \in X \) such that \[ \parallel y\parallel \leq \left( {1 + {\varepsilon }_{2}}\right) \begin{Vmatrix}{y + \lambda {x}_{3}}\end{Vmatrix} \] for all \( y \) in the subspace generated by \( \left\{ {{x}_{1},{x}_{2}}\right\} \) and for all \( \lambda \in \mathbf{F} \) . Carrying on in this way, we construct an infinite sequence \( \left( {x}_{n}\right) \) of unit vectors in \( X \) such that \[ \parallel y\parallel \leq \left( {1 + {\varepsilon }_{n}}\right) \begin{Vmatrix}{y + \lambda {x}_{n + 1}}\end{Vmatrix} \] for all \( y \) in the subspace generated by \( \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} \) and for all \( \lambda \in \mathbf{F} \) . 
It follows that if \( {\lambda }_{1},\ldots ,{\lambda }_{n} \in \mathbf{F} \) and \( m < n \), then \[ \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{m}{\lambda }_{i}{x}_{i}}\end{Vmatrix} \leq \left( {1 + {\varepsilon }_{m}}\right) \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{m}{\lambda }_{i}{x}_{i} + {\lambda }_{m + 1}{x}_{m + 1}}\end{Vmatrix} \] \[ \leq \left( {1 + {\varepsilon }_{m}}\right) \left( {1 + {\varepsilon }_{m + 1}}\right) \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{{m + 1}}{\lambda }_{i}{x}_{i} + {\lambda }_{m + 2}{x}_{m + 2}}\end{Vmatrix} \] \[ \leq \cdots \] \[ \leq \left( {1 + {\varepsilon }_{m}}\right) \left( {1 + {\varepsilon }_{m + 1}}\right) \cdots \left( {1 + {\varepsilon }_{n - 1}}\right) \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{x}_{i}}\end{Vmatrix} \] \[ \leq \left( {1 + \varepsilon }\right) \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i}{x}_{i}}\end{Vmatrix} \] Hence, by Lemma (6.1.15), \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) is a Schauder basis of the closure of the subspace of \( X \) that it generates. \( ▱ \) For our last application of the Hahn-Banach Theorem in this section, we show that if \( I \) is a compact interval, then the dual space \( \mathcal{C}{\left( I\right) }^{ * } \) can be isometrically embedded in the Banach space \( \left( {\mathcal{B}\mathcal{V}\left( I\right) ,\parallel \cdot {\parallel }_{\mathrm{{bv}}}}\right) \) of functions of bounded variation on \( I \) (introduced in Exercise (4.5.2: 4)). To this end, for convenience we say that a bounded function \( f : I \rightarrow \mathbf{R} \) is representable if there exists an increasing sequence \( \left( {f}_{n}\right) \) of elements of \( \mathcal{C}\left( I\right) \) that converges simply to \( f \) . We denote by \( \mathcal{R}\left( I\right) \) the subspace of \( \mathcal{B}\left( I\right) \) consisting of all bounded real-valued functions on \( I \) that can be written as the difference of two representable functions. Note that \( \mathcal{C}\left( I\right) \subset \mathcal{R}\left( I\right) \) . ## (6.1.17) Exercises 1. Prove that if \( J \) is a compact subinterval of \( I \), then \( - {\chi }_{J} \) is representable. .2 Let \( f \in \mathcal{C}\left( I\right) \), where \( I = \left\lbrack {a, b}\right\rbrack \), let \( P = \left( {{x}_{0},\ldots ,{x}_{n}}\right) \) be a partition of \( I \), and for each \( k\left( {0 \leq k \leq n - 1}\right) \) let \( {\xi }_{k} \) be any point of \( \left\lbrack {{x}_{k},{x}_{k + 1}}\right\rbrack \) . Define \( \psi \in \mathcal{B}\left( I\right) \) by \[ \psi = \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\xi }_{k}\right) \left( {{\chi }_{\left\lbrack a,{x}_{k + 1}\right\rbrack } - {\chi }_{\left\lbrack a,{x}_{k}\right\rbrack }}\right) . \] Show that \( \parallel f - \psi \parallel \rightarrow 0 \) as the mesh of \( P \) tends to 0 . (6.1.18) Theorem. Let \( I = \left\lbrack {a, b}\right\rbrack \) be a compact interval. Then for each real-valued function \( \alpha \) of bounded variation on \( I \) , \[ {u}_{\alpha } = {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) \] defines a bounded linear functional, with norm \( {T}_{\alpha }\left( {a, b}\right) \), on the Banach space \( \mathcal{C}\left( I\right) \) . Moreover, each bounded linear functional on \( \mathcal{C}\left( I\right) \) is of the form \( {u}_{\alpha } \), where \( \alpha \) is a function of bounded variation on \( I \) that is unique up to an additive constant. Proof. 
Throughout this proof, \( P = \left( {{x}_{0},{x}_{1},\ldots ,{x}_{n}}\right) \) is a partition of \( I \) , and for each \( i,{\xi }_{i} \) is any point of the interval \( \left\lbrack {{x}_{i},{x}_{i + 1}}\right\rbrack \) . Consider first a real-valued function \( \alpha \) of bounded variation on \( I \) . The linearity of \( {u}_{\alpha } \) follows from Exercise (1.5.16: 4). For each \( f \in \mathcal{C}\left( I\right) \) we have the following inequality for Riemann-Stieltjes sums: \[ \left| {\mathop{\sum }\limits_{{i = 0}}^{{n - 1}}f\left( {\xi }_{i}\right) \left( {\alpha \left( {x}_{i + 1}\right) - \alpha \left( {x}_{i}\right) }\right) }\right| \leq \parallel f\parallel \mathop{\sum }\limits_{{i = 0}}^{{n - 1}}\left| {\alpha \left( {x}_{i + 1}\right) - \alpha \left( {x}_{i}\right) }\right| \] \[ \leq \parallel f\parallel {T}_{\alpha }\left( {a, b}\right) \] In the limit as the mesh of the partition tends to 0 we obtain the inequality \[ \left| {{u}_{\alpha }\left( f\right) }\right| \leq \parallel f\parallel {T}_{\alpha }\left( {a, b}\right) \] which shows that the linear functional \( {u}_{\alpha } \) has bound \( {T}_{\alpha }\left( {a, b}\right) \) . Now consider any bounded linear functional \( u \) on \( \mathcal{C}\left( I\right) \) . By Corollary (6.1.4), there exists a norm-preserving extension \( {u}^{\sharp } \) of \( u \) to \( \mathcal{R}\left( I\right) \) . Referring to Exercise (6.1.17: 1), define a function \( \alpha : I \rightarrow \mathbf{R} \) by \[ \alpha \left( x\right) = {u}^{\sharp }\left( {\chi }_{\left\lbrack a, x\right\rbrack }\right) \;\left( {x \in I}\right) . \] To show that \( \alpha \) is of bounded variation on \( I \), let \( P \) be as in the foregoing, and for each \( k\left( {0 \leq k \leq n - 1}\right) \) let \[ {\sigma }_{k} = \operatorname{sgn}\left( {\alpha \left( {x}_{k + 1}\right) - \alpha \left( {x}_{k}\right) }\right) . \] Then \[ \phi = \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\sigma }_{k}\left( {{\chi }_{\left\lbrack a,{x}_{k + 1}\right\rbrack } - {\chi }_{\left\lbrack a,{x}_{k}\right\rbrack }}\right) \in \mathcal{R}\left( I\right) , \] \( \parallel \phi \parallel \leq 1 \), and \[ \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}\left| {\alpha \left( {x}_{k + 1}\right) - \alpha \left( {x}_{k}\right) }\right| = {u}^{\sharp }\left( \phi \right) \leq \begin{Vmatrix}{u}^{\sharp }\end{Vmatrix} = \parallel u\parallel . \] Hence \( \alpha \) is of bounded variation on \( I \), and \[ {T}_{\alpha }\left( {a, b}\right) \leq \parallel u\parallel \] (5) If, now, \( f \) is any element of \( \mathcal{C}\left( I\right) \), consider the function \[ \psi = \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\xi }_{k}\right) \left( {{\chi }_{\left\lbrack a,{x}_{k + 1}\right\rbrack } - {\chi }_{\left\lbrack a,{x}_{k}\right\rbrack }}\right) , \] which, again by Exercise (6.1.17:1), belongs to \( \mathcal{R}\left( I\right) \) . We have \[ \left| {u\left( f\right) - \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\xi }_{k}\right) \left( {\alpha \left( {x}_{k + 1}\right) - \alpha \left( {x}_{k}\right) }\right) }\right| = \left| {{u}^{\sharp }\left( f\right) - {u}^{\sharp }\left( \psi \right) }\right| \] \[ \leq \parallel u\parallel \parallel f - \psi \parallel \] Letting the mesh of the partition \( P \) tend to 0, we see from Exercise (6.1.17:2) that \( \
parallel f - \psi \parallel \rightarrow 0 \) ; also, \[ \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\xi }_{k}\right) \left( {\alpha \left( {x}_{k + 1}\right) - \alpha \left( {x}_{k}\right) }\right) \rightarrow {\int }_{a}^{b}f\left( x\right) \mathrm{d}\alpha \left( x\right) . \] Hence \( u\left( f\right) = {u}_{\alpha }\left( f\right) \) . Moreover, from (5) and the first part of the proof, \( \parallel u\parallel = {T}_{\alpha }\left( {a, b}\right) . \) Finally, the uniqueness, up to an additive constant, of the function \( \alpha \) corresponding to the given bounded linear functional \( u \) on \( \mathcal{C}\left( I\right) \) follows from Proposition (1.5.19). The full power of the Hahn-Banach Theorem is not needed to prove Theorem (6.1.18): for, as is shown on pages 106-110 of [40], it is possible to construct an extension of \( u \) to \( \mathcal{R}\left( I\right) \) by elementary means. We say that a function \( f : I \rightarrow \mathbf{R} \) of bounded variation on \( I = \left\lbrack {a, b}\right\rbrack \) is normalised if \( f\left( a\right) = 0 \) . It is easy to show that the normalised elements form a closed, and therefore complete, linear subspace of the Banach space \( \left( {\mathcal{B}\mathcal{V}\left( I\right) ,\parallel \cdot {\parallel }_{\mathrm{{bv}}}}\right) \) (6.1.19) Corollary. Under the hypotheses of Theorem (6.1.18), \( \mathcal{C}{\left( I\right) }^{ * } \) is isometrically isomorphic to the Banach space of normalised functions of bounded variation on \( I \) . ## (6.1.20) Exercises .1 Let \( I = \left\lbrack {a, b}\right\rbrack \) be a compact interval. Prove that the normalised elements of \( \mathcal{B}\mathcal{V}\left( I\right) \) form a Banach space relative to the norm \( \parallel \cdot {\parallel }_{\mathrm{{bv}}} \) . Then prove Corollary (6.1.19). .2 Compute the norm of the bounded linear functional \( u \) defined on \( \mathcal{C}\left\lbrack {-1,1}\right\rbrack \) by \[ u\left( f\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{{\left( -1\right) }^{n}}{{n}^{2}}f\left( {1/n}\right) . \] .3 Let \( X \) be a compact metric space, and \( u \) a linear functional on \( \mathcal{C}\left( X\right) \) that is positive, in the sense that \( u\left( f\right) \geq 0 \) for all nonnegative \( f \in \) \( \mathcal{C}\left( X\right) \) . Prove that \( u \) is bounded and has norm equal to \( u\left( \mathbf{1}\right) \), where \( \mathbf{1} \) is the constant function \( x \mapsto 1 \) on \( X \) . 
.4 Let \( u \) be a bounded linear functional on \( \mathcal{C}\left( X\right) \), where \( X \) is a compact metric space. Prove that there exist positive linear functionals \( v, w \) on \( \mathcal{C}\left( X\right) \) such that \( u = v - w \) . (For \( f \geq 0 \) in \( \mathcal{C}\left( X\right) \) let \( v\left( f\right) = \sup \left\{ {u\left( g\right) : g \in \mathcal{C}\left( X\right) ,0 \leq g \leq f}\right\} .) \) ## 6.2 Separation Theorems In this section we use the Hahn-Banach Theorem to establish a number of geometric results about the separation of convex sets by a hyperplane. These results have many applications, including some significant ones in mathematical economics (see Appendix C). If \( A \) is a subset of a vector space, and \( t \in \mathbf{F} \), we define \[ {tA} = \{ {tx} : x \in A\} . \] (6.2.1) Lemma. Let \( X \) be a normed space, and \( A \) a convex subset of \( X \) containing 0 in its interior. Then the Minkowski functional \( p : X \rightarrow \mathbf{R} \) , defined by \[ p\left( x\right) = \inf \{ t > 0 : x \in {tA}\} \] is a sublinear functional on \( X \) . If \( p\left( x\right) < 1 \), then \( x \in A \) ; and if \( A \) is open, then \[ A = \{ x \in X : p\left( x\right) < 1\} . \] Proof. Choose \( r > 0 \) such that \( B\left( {0, r}\right) \subset A \) . If \( x \neq 0 \), then \[ x \in \frac{2\parallel x\parallel }{r}B\left( {0, r}\right) \subset \frac{2\parallel x\parallel }{r}A. \] It follows that \( p \) is defined throughout \( X \) . Let \( \alpha ,\beta \) be positive numbers such that \( x \in {\alpha A} \) and \( y \in {\beta A} \) ; then \[ x + y = \left( {\alpha + \beta }\right) \left( {\frac{\alpha }{\alpha + \beta }{\alpha }^{-1}x + \frac{\beta }{\alpha + \beta }{\beta }^{-1}y}\right) , \] where, by convexity, \[ \frac{\alpha }{\alpha + \beta }{\alpha }^{-1}x + \frac{\beta }{\alpha + \beta }{\beta }^{-1}y \in A. \] So \( x + y \in \left( {\alpha + \beta }\right) A \) . It now follows that \( p\left( {x + y}\right) \leq p\left( x\right) + p\left( y\right) \) . On the other hand, if \( \lambda > 0 \), then for all positive \( t \) we have \[ {\lambda x} \in {tA} \Leftrightarrow x \in \left( {{\lambda }^{-1}t}\right) A \] and therefore \[ t \geq p\left( {\lambda x}\right) \Leftrightarrow {\lambda }^{-1}t \geq p\left( x\right) \] so \( p\left( {\lambda x}\right) = {\lambda p}\left( x\right) \) . This last equation also holds when \( \lambda = 0 \), since \( p\left( 0\right) = 0 \) . Thus \( p \) is a sublinear functional on \( X \) . If \( p\left( x\right) < 1 \), then there exists \( t \in \left( {0,1}\right) \) such that \( {t}^{-1}x \in A \) ; by the convexity of \( A, x = \left( {1 - t}\right) 0 + t\left( {{t}^{-1}x}\right) \) belongs to \( A \) . Finally, suppose that \( A \) is open, and consider any \( x \in A \) . Since \( p\left( 0\right) = 0 \) , to prove that \( p\left( x\right) < 1 \) we may assume that \( x \neq 0 \) . Choose \( s > 0 \) such that \( \bar{B}\left( {x, s\parallel x\parallel }\right) \subset A \) ; then \( \left( {1 + s}\right) x \in A \), so \( p\left( x\right) \leq {\left( 1 + s\right) }^{-1} < 1 \) . (6.2.2) Lemma. Let \( A \) be a nonempty open convex subset of a normed space \( X \), and \( {x}_{0} \) a point of \( X \smallsetminus A \) . Then there exists a bounded real-linear functional \( f \) on \( X \) such that \( f\left( x\right) < f\left( {x}_{0}\right) \) for all \( x \) in \( A \) . Proof. 
By translation, we may assume that \( 0 \in A \) ; so, by Lemma (6.2.1), \[ p\left( x\right) = \inf \left\{ {t > 0 : {t}^{-1}x \in A}\right\} \] defines a sublinear functional on \( X \), and \( p\left( x\right) < 1 \) if and only if \( x \in A \) . Hence \( p\left( {x}_{0}\right) \geq 1 \) . Let \( {X}_{0} \) be the real linear subspace of \( X \) generated by \( \left\{ {x}_{0}\right\} \), and define a bounded real-linear functional \( {f}_{0} \) on \( {X}_{0} \) by \[ {f}_{0}\left( {\lambda {x}_{0}}\right) = \lambda \;\left( {\lambda \in \mathbf{R}}\right) . \] If \( \lambda \geq 0 \), then \[ {f}_{0}\left( {\lambda {x}_{0}}\right) = \lambda \leq {\lambda p}\left( {x}_{0}\right) = p\left( {\lambda {x}_{0}}\right) \] if \( \lambda < 0 \), then \[ {f}_{0}\left( {\lambda {x}_{0}}\right) = \lambda < 0 \leq p\left( {\lambda {x}_{0}}\right) . \] Thus \( {f}_{0}\left( x\right) \leq p\left( x\right) \) for all \( x \in {X}_{0} \) . By the Hahn-Banach Theorem (6.1.3), there exists a real-linear functional \( f \) on \( X \) such that - \( f\left( x\right) = {f}_{0}\left( x\right) \) for all \( x \in {X}_{0} \), and \[ \text{-}f\left( x\right) \leq p\left( x\right) \text{for all}x \in X\text{.} \] For all \( x \in A \) , \[ f\left( x\right) \leq p\left( x\right) < 1 = f\left( {x}_{0}\right) . \] It follows that the nonempty open set \( A \) is contained in the complement of the translated hyperplane \( {x}_{0} + \ker \left( f\right) \) ; whence, by Exercise (4.2.5:3) and Lemma (4.1.4), the hyperplane \( \ker \left( f\right) \) is closed in \( X \) . It follows from Proposition (4.2.3) that \( f \) is bounded. (6.2.3) Proposition. Let \( C \) be a nonempty closed convex subset of a normed space \( X \), and \( {x}_{0} \) a point of \( X \smallsetminus C \) . Then there exist a bounded real-linear functional \( f \) on \( X \), and a real number \( \alpha \), such that \( f\left( x\right) < \alpha < f\left( {x}_{0}\right) \) for all \( x \in C \) . Proof. Choose \( r > 0 \) such that \( B\left( {{x}_{0}, r}\right) \cap C = \varnothing \) . By Exercise (4.1.5: 6), \[ A = \{ x + y : x \in C, y \in B\left( {0, r}\right) \} \] is open and convex; also, \( {x}_{0} \notin A \) . By Lemma (6.2.2), there exists a bounded real-linear functional \( f \) on \( X \) such that \( f\left( x\right) < f\left( {x}_{0}\right) \) for all \( x \) in \( A \) . Since \( f \) is not identically \( 0, f\left( b\right) > 0 \) for some \( b \in B\left( {0, r}\right) \) . Taking \( \alpha = f\left( {x}_{0}\right) - f\left( b\right) \) , we see that for all \( x \in C \) , \[ f\left( x\right) = f\left( {x + b}\right) - f\left( b\right) < \alpha < f\left( {x}_{0}\right) . \] ## (6.2.4) Exercises .1 Let \( A \) be a compact convex subset of a real normed space \( X \), and \( B \) a closed convex subset of \( X \) . Prove that there exist \( f \in {X}^{ * } \) and \( \alpha ,\beta \in \mathbf{R} \) such that \( f\left( x\right) \leq \alpha < \beta \leq f\left( y\right) \) for all \( x \in A \)
and \( y \in B \) . .2 Prove Helly’s Theorem: let \( \mathcal{F} \) be a finite family of convex subsets of \( {\mathbf{R}}^{n} \) with the property that the intersection of any \( n + 1 \) sets in \( \mathcal{F} \) is nonempty; then \( \bigcap \mathcal{F} \) is nonempty. (First use induction on the number of sets in \( \mathcal{F} \) ; then use induction on the dimension \( n \) .) .3 Let \( K \) be a convex subset of a normed space \( X \), and \( S \subset K \) . We say that \( S \) is an extreme subset of \( K \) if, for any distinct points \( x, y \) of \( K \) such that \( \frac{1}{2}\left( {x + y}\right) \in S \), we have \( x \in S \) and \( y \in S \) . If also \( S \) contains only one element, then that element is called an extreme point of \( K \) . Prove that the intersection of any family of extreme subsets of \( K \) is either empty or an extreme subset. Now suppose that \( K \) is also compact, and let \( \mathcal{E} \) be the family of all extreme subsets of \( K \), partially ordered by inclusion. Prove that \( \mathcal{E} \) has a minimal element \( {S}_{0} \) . (Use the finite intersection property and Zorn’s Lemma.) Then prove that \( {S}_{0} \) consists of a single point. (Suppose that \( {S}_{0} \) contains two distinct points \( \xi ,\eta \) . Choose \( f \in {X}^{ * } \) such that \( f\left( \xi \right) < \) \( f\left( \eta \right) \), and let \( \alpha = \mathop{\sup }\limits_{{x \in K}}f\left( x\right) \) . Show that \( {S}_{1} = \left\{ {x \in {S}_{0} : f\left( x\right) = \alpha }\right\} \) is an extreme subset of \( K \) such that \( {S}_{0} \smallsetminus {S}_{1} \neq \varnothing \) .) Finally, prove that \( K \) has at least one extreme point. .4 By the convex hull of a subset \( K \) of a normed space we mean the set of all elements of the form \( \mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{i}{x}_{i} \), where the \( {x}_{i} \) are elements of \( K \) and the \( {\lambda }_{i} \) are nonnegative real numbers such that \( \mathop{\sum }\limits_{{i = 1}}^{N}{\lambda }_{i} = 1 \) . Prove the Krein-Milman Theorem: a compact convex subset of a normed space \( X \) is the closure of the convex hull of the set of its extreme points. (Let \( C \) be the convex hull of the set of extreme points of the compact convex set, let \( {x}_{0} \in X \smallsetminus \bar{C} \), and apply Proposition (6.2.3).) When \( X = {\mathbf{R}}^{n} \), there is a weak extension of Proposition (6.2.3) to the case where \( C \) need not be closed. (6.2.5) Proposition. 
Let \( C \) be a nonempty convex subset of the Euclidean space \( {\mathbf{R}}^{n} \), and \( {x}_{0} \) a point of \( {\mathbf{R}}^{n} \smallsetminus C \) . Then there exists a bounded real-linear functional \( f \) on \( {\mathbf{R}}^{n} \) such that \( f\left( x\right) \leq f\left( {x}_{0}\right) \) for all \( x \in C \) . Proof. Since \( \bar{C} \) is nonempty, closed, and convex, Proposition (6.2.3) allows us to assume that \( {x}_{0} \in \bar{C} \smallsetminus C \) . Then, by Exercise (4.1.5:7), each open ball with centre \( {x}_{0} \) contains some point of the complement of \( \bar{C} \) . Choose a sequence \( \left( {x}_{k}\right) \) in \( {\mathbf{R}}^{n} \) that converges to \( {x}_{0} \), such that \( {x}_{k} \notin \bar{C} \) for each \( k \) . By Proposition (6.2.3) and Theorem (5.3.1), for each \( k \) there exist \( {p}_{k} \in {\mathbf{R}}^{n} \) and \( {\alpha }_{k} \in \mathbf{R} \) such that \[ \text{-}\left\langle {x,{p}_{k}}\right\rangle < {\alpha }_{k}\text{for all}x \in \bar{C}\text{, and} \] \[ \text{-}\left\langle {{x}_{k},{p}_{k}}\right\rangle = {\alpha }_{k}\text{.} \] Replacing \( {p}_{k} \) by \( {\begin{Vmatrix}{p}_{k}\end{Vmatrix}}^{-1}{p}_{k} \), we may assume that \( \begin{Vmatrix}{p}_{k}\end{Vmatrix} = 1 \) for each \( k \) . Since the unit ball of \( {\mathbf{R}}^{n} \) is compact (Theorem (4.3.6)), we may pass to a subsequence and assume that \( \left( {p}_{k}\right) \) converges to a limit \( p \) in \( {\mathbf{R}}^{n} \) ; then \( \parallel p\parallel = 1 \) . Also, as \[ \left| {\alpha }_{k}\right| \leq \begin{Vmatrix}{p}_{k}\end{Vmatrix}\begin{Vmatrix}{x}_{k}\end{Vmatrix} = \begin{Vmatrix}{x}_{k}\end{Vmatrix} \] and the sequence \( \left( \begin{Vmatrix}{x}_{k}\end{Vmatrix}\right) \), being convergent, is bounded, \( \left( {\alpha }_{k}\right) \) is a bounded sequence in \( \mathbf{R} \) . Passing to another subsequence, we may further assume that \( \left( {\alpha }_{k}\right) \) converges to a limit \( \alpha \) in \( \mathbf{R} \) . By continuity, for all \( x \in \bar{C} \) we have \[ \langle x, p\rangle \leq \alpha = \left\langle {{x}_{0}, p}\right\rangle \] It remains to take \( f\left( x\right) = \langle x, p\rangle \) . Let \( H \) be a hyperplane in the normed space \( X \), and \( a \) an element of \( X \smallsetminus H \) . By Propositions (4.2.4) and (6.1.1), for each \( \alpha \in \mathbf{R} \) there exists a unique real-linear functional \( f \) on \( X \) such that \[ a + H = \{ x \in X : f\left( x\right) = \alpha \} . \] We say that the translated hyperplane \( a + H \) separates the nonempty subsets \( A \) and \( B \) of \( X \) if \( f\left( x\right) \leq \alpha \) for all \( x \in A \), and \( f\left( x\right) \geq \alpha \) for all \( x \in B \) . (6.2.6) Minkowski’s Separation Theorem. Let \( A \) and \( B \) be disjoint nonempty convex subsets of \( {\mathbf{R}}^{n} \) . Then there exists a closed translated hyperplane that separates \( A \) and \( B \) . Proof. The nonempty set \[ C = B - A = \{ x - y : x \in B, y \in A\} \] is convex, and \( 0 \notin C \) . By Proposition (6.2.5), there exists a bounded real-linear functional \( f \) on \( {\mathbf{R}}^{n} \) such that \( f\left( z\right) \geq f\left( 0\right) = 0 \) for all \( z \in C \) . Hence \( f\left( x\right) \geq f\left( y\right) \) for all \( x \in A \) and \( y \in B \), and we need only apply Exercise (1.1.1:21) to obtain the required real number \( \alpha \) . 
The corresponding hyperplane \( {f}^{-1}\left( {\{ \alpha \} }\right) \) then separates \( A \) and \( B \) . ## (6.2.7) Exercise Let \( A, B \) be disjoint nonempty convex subsets of a normed space \( X \) such that \( A \) is compact and \( B \) is closed. Prove that there exist a bounded real-linear functional \( f \) on \( X \), and a real number \( \alpha \), such that \( f\left( x\right) > \alpha \) for all \( x \in A \), and \( f\left( x\right) < \alpha \) for all \( x \in B \) . (Reduce to the case where \( A = \{ 0\} \), note Exercise (4.1.5: 6), and apply Proposition (6.2.3).) ## 6.3 Baire's Theorem and Beyond In this section we prove one of the most useful theorems about complete metric spaces, Baire's Theorem, and then study several of its many interesting consequences. Among these are the existence of uncountably many continuous, nowhere differentiable functions on \( \left\lbrack {0,1}\right\rbrack \), and the Open Mapping Theorem for bounded linear mappings between Banach spaces. (6.3.1) Baire’s Theorem. The intersection of a sequence of dense open sets in a complete metric space is dense. Proof. Let \( X \) be a complete metric space, \( \left( {U}_{n}\right) \) a sequence of dense open subsets of \( X \), and \[ U = \mathop{\bigcap }\limits_{{n = 1}}^{\infty }{U}_{n} \] We need only prove that for \( {x}_{0} \in X \) and \( {r}_{0} > 0 \), the set \( U \cap \bar{B}\left( {{x}_{0},{r}_{0}}\right) \) is nonempty. To this end, since \( {U}_{1} \) is dense in \( X \), we can find \( {x}_{1} \) in \( {U}_{1} \cap B\left( {{x}_{0},{r}_{0}}\right) \) . Moreover, since both \( {U}_{1} \) and \( B\left( {{x}_{0},{r}_{0}}\right) \) are open, so is their intersection; whence there exists \( {r}_{1} \) such that \( 0 < {r}_{1} < 1 \) and \[ \bar{B}\left( {{x}_{1},{r}_{1}}\right) \subset {U}_{1} \cap B\left( {{x}_{0},{r}_{0}}\right) . \] Since \( {U}_{2} \) is dense in \( X \), we can now find \( {x}_{2} \) in \( {U}_{2} \cap B\left( {{x}_{1},{r}_{1}}\right) \) ; but \( {U}_{2} \cap \) \( B\left( {{x}_{1},{r}_{1}}\right) \) is open, so there exists \( {r}_{2} \) such that \( 0 < {r}_{2} < 1/2 \) and \[ \bar{B}\left( {{x}_{2},{r}_{2}}\right) \subset {U}_{2} \cap B\left( {{x}_{1},{r}_{1}}\right) \] Carrying on in this way, we construct a sequence \( \left( {x}_{n}\right) \) of points of \( X \), and a sequence \( \left( {r}_{n}\right) \) of positive numbers, such that for each \( n \geq 1,0 < {r}_{n} < 1/n \) and \[ \bar{B}\left( {{x}_{n},{r}_{n}}\right) \subset {U}_{n} \cap B\left( {{x}_{n - 1},{r}_{n - 1}}\right) . \] By induction, if \( m \geq n \), then \( {x}_{m} \in B\left( {{x}_{n},{r}_{n}}\right) \) ; whence \[ \rho \left( {{x}_{m},{x}_{n}}\right) < {r}_{n} < \frac{1}{n}\;\left( {m \geq n}\right) . \] (1) Thus \( \left( {x}_{n}\right) \) is a Cauchy sequence in \( X \) . Since \( X \) is complete, \( \left( {x}_{n}\right) \) converges to a limit \( {x}_{\infty } \) in \( X \) . Letting \( m \) tend to \( \infty \) in inequali
ty (1), we have \( \rho \left( {{x}_{\infty },{x}_{n}}\right) \leq {r}_{n} \), and therefore \( {x}_{\infty } \in \bar{B}\left( {{x}_{n},{r}_{n}}\right) \), for each \( n \) . Taking \( n = 0 \) , we see that \( {x}_{\infty } \in \bar{B}\left( {{x}_{0},{r}_{0}}\right) \) ; taking \( n \geq 1 \), we see that \( {x}_{\infty } \in {U}_{n} \) . The alternative name Baire Category Theorem for Theorem (6.3.1) originates from the following definitions (due to Baire). A subset \( S \) of a metric space \( X \) is said to be - nowhere dense in \( X \) if the interior of \( \bar{S} \) is empty; - of the first category if it is a countable union of nowhere dense subsets; - of the second category if it is not of the first category. Baire's Theorem is equivalent to the statement a nonempty complete metric space is of the second category. ## (6.3.2) Exercises . 1 Prove the last statement; more precisely, prove that if a nonempty complete metric space is the union of a sequence of closed sets, then at least one of those closed sets has a nonempty interior. .2 Prove the extended version of Cantor's theorem on the uncountability of \( \mathbf{R} \) (Exercise (1.2.11: 4)): if \( \left( {x}_{n}\right) \) is a sequence of real numbers, then \( \left\{ {x \in \mathbf{R} : \forall n\left( {x \neq {x}_{n}}\right) }\right\} \) is dense in \( \mathbf{R} \) . .3 Prove that a nonempty complete metric space without isolated points is uncountable. We now show how Baire's Theorem can be used to prove the existence of continuous functions on \( I = \left\lbrack {0,1}\right\rbrack \) that are nowhere differentiable on \( I \) . For each positive integer \( n \) let \( {E}_{n} \) be the set of all \( f \in \mathcal{C}\left( I\right) \) with the property: there exists \( t \in \left\lbrack {0,1 - {n}^{-1}}\right\rbrack \) such that \( \left| {f\left( {t + h}\right) - f\left( t\right) }\right| \leq {nh} \) whenever \( 0 < h < 1 - t \) . Note that \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{E}_{n} \) contains any \( f \in \mathcal{C}\left( I\right) \) such that for some \( t \in \lbrack 0,1) \) the right-hand derivative of \( f \) at \( t \) , \[ {f}^{\prime }\left( {t}^{ + }\right) = \mathop{\lim }\limits_{{h \rightarrow 0, h > 0}}\frac{f\left( {t + h}\right) - f\left( t\right) }{h}, \] exists. To see this, consider such \( f \) and \( t \) . Choose a positive integer \( {n}_{1} \) such that \( t \in \left\lbrack {0,1 - {n}_{1}^{-1}}\right\rbrack \) and \( \left| {{f}^{\prime }\left( {t}^{ + }\right) }\right| < {n}_{1} \) . 
Next choose \( {h}_{0} > 0 \) such that if \( 0 < h < {h}_{0} \), then \( \left| {f\left( {t + h}\right) - f\left( t\right) }\right| \leq {n}_{1}h \) . If \( {h}_{0} = 1 - t \), set \( n = {n}_{1} \) . If \( {h}_{0} < 1 - t \), then for \( {h}_{0} \leq h < 1 - t \) we have \[ \left| {f\left( {t + h}\right) - f\left( t\right) }\right| \leq \frac{2\parallel f\parallel }{{h}_{0}}h \] where \( \parallel \cdot \parallel \) denotes the sup norm on \( \mathcal{C}\left( I\right) \) ; so, taking \( n = \max \left\{ {{n}_{1},{n}_{2}}\right\} \) , where the positive integer \( {n}_{2} > 2\parallel f\parallel /{h}_{0} \), we have \( f \in {E}_{n} \) . We prove that \( \mathcal{C}\left( I\right) \smallsetminus {E}_{n} \) is dense and open in \( I \) . To this end, first let \( {\left( {f}_{k}\right) }_{k = 1}^{\infty } \) be a sequence in \( {E}_{n} \) that converges to a limit \( f \) in \( \mathcal{C}\left( I\right) \) . Then there exists a sequence \( \left( {t}_{k}\right) \) in \( \left\lbrack {0,1 - {n}^{-1}}\right\rbrack \) such that \[ \left| {{f}_{k}\left( {{t}_{k} + h}\right) - {f}_{k}\left( {t}_{k}\right) }\right| \leq {nh} \] whenever \( k \geq 1 \) and \( 0 < h < 1 - {t}_{k} \) . Since \( \left\lbrack {0,1 - {n}^{-1}}\right\rbrack \) is sequentially compact, we may assume without loss of generality that \( \left( {t}_{k}\right) \) converges to a limit \( t \in \left\lbrack {0,1 - {n}^{-1}}\right\rbrack \) . If \( 0 < h < 1 - t \), then for all sufficiently large \( k \) we have \( 0 < h < 1 - {t}_{k} \) and therefore \[ \left| {f\left( {t + h}\right) - f\left( t\right) }\right| \leq \left| {f\left( {t + h}\right) - f\left( {{t}_{k} + h}\right) }\right| + \left| {f\left( {{t}_{k} + h}\right) - {f}_{k}\left( {{t}_{k} + h}\right) }\right| \] \[ + \left| {{f}_{k}\left( {{t}_{k} + h}\right) - {f}_{k}\left( {t}_{k}\right) }\right| + \left| {{f}_{k}\left( {t}_{k}\right) - f\left( {t}_{k}\right) }\right| \] \[ + \left| {f\left( {t}_{k}\right) - f\left( t\right) }\right| \] \[ \leq \left| {f\left( {t + h}\right) - f\left( {{t}_{k} + h}\right) }\right| + \begin{Vmatrix}{f - {f}_{k}}\end{Vmatrix} + {nh} \] \[ + \begin{Vmatrix}{f - {f}_{k}}\end{Vmatrix} + \left| {f\left( {t}_{k}\right) - f\left( t\right) }\right| \text{.} \] Letting \( k \rightarrow \infty \) and using the continuity of \( f \), we obtain \[ \left| {f\left( {t + h}\right) - f\left( t\right) }\right| \leq {nh} \] Hence \( f \in {E}_{n} \), and therefore \( {E}_{n} \) is closed in \( \mathcal{C}\left( I\right) \) . Thus \( \mathcal{C}\left( I\right) \smallsetminus {E}_{n} \) is open in \( \mathcal{C}\left( I\right) \) . Given \( f \in \mathcal{C}\left( I\right) \) and \( \varepsilon > 0 \), we now use the Weierstrass Approximation Theorem (4.6.1) to construct a polynomial function \( p \) such that \( \parallel f - p\parallel < \) \( \varepsilon /2 \) . Choosing a positive integer \[ N > {\varepsilon }^{-1}\left( {n + \begin{Vmatrix}{p}^{\prime }\end{Vmatrix}}\right) \] define a continuous function \( q : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbf{R} \) such that for \( 0 \leq k \leq N - 1 \) , \[ q\left( \frac{k}{N}\right) = 0 \] \[ q\left( \frac{k + \frac{1}{2}}{N}\right) = \varepsilon /2 \] and \( q \) is linear on each of the intervals \[ \left\lbrack {\frac{k}{N},\frac{k + \frac{1}{2}}{N}}\right\rbrack ,\left\lbrack {\frac{k + \frac{1}{2}}{N},\frac{k + 1}{N}}\right\rbrack . \] Let \( g = p + q \in \mathcal{C}\left( I\right) \) . 
For each \( t \in \lbrack 0,1) \) we have \[ \left| {{g}^{\prime }\left( {t}^{ + }\right) }\right| \geq \left| {{q}^{\prime }\left( {t}^{ + }\right) }\right| - \left| {{p}^{\prime }\left( t\right) }\right| \geq {N\varepsilon } - \begin{Vmatrix}{p}^{\prime }\end{Vmatrix} > n \] so \( g \notin {E}_{n} \) . Since \[ \parallel f - g\parallel \leq \parallel f - p\parallel + \parallel q\parallel < \varepsilon \] we conclude that \( \mathcal{C}\left( I\right) \smallsetminus {E}_{n} \) is dense in \( \mathcal{C}\left( I\right) \) . Now let \( {F}_{n} \) be the set of all \( f \in \mathcal{C}\left( I\right) \) with the property: there exists \( t \in \left\lbrack {{n}^{-1},1}\right\rbrack \) such that \( \left| {f\left( {t + h}\right) - f\left( t\right) }\right| \leq {nh} \) whenever \( 0 < h < t \) . Arguments similar to those just used show that \( \mathcal{C}\left( I\right) \smallsetminus {F}_{n} \) is dense and open in \( \mathcal{C}\left( I\right) \), and that it contains any \( f \in \mathcal{C}\left( I\right) \) such that for some \( t \in (0,1\rbrack \) the left-hand derivative of \( f \) at \( t \) , \[ {f}^{\prime }\left( {t}^{ - }\right) = \mathop{\lim }\limits_{{h \rightarrow 0, h < 0}}\frac{f\left( {t + h}\right) - f\left( t\right) }{h}, \] exists. Let \[ S = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{E}_{n} \cup \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{F}_{n} \] Since \( \mathcal{C}\left( I\right) \) is complete (Proposition (4.5.4)), we see from Baire’s Theorem that \[ \mathcal{C}\left( I\right) \smallsetminus S = \mathop{\bigcap }\limits_{{n = 1}}^{\infty }\left( {\mathcal{C}\left( I\right) \smallsetminus {E}_{n}}\right) \cap \mathop{\bigcap }\limits_{{n = 1}}^{\infty }\left( {\mathcal{C}\left( I\right) \smallsetminus {F}_{n}}\right) \] is dense in \( \mathcal{C}\left( I\right) \) . Clearly, \( \mathcal{C}\left( I\right) \smallsetminus S \) consists of continuous, nowhere differentiable functions on \( I \) . ## (6.3.3) Exercises .1 Prove that \( \left\lbrack {0,1}\right\rbrack \) cannot be written as the union of a sequence of pairwise-disjoint closed sets. (Suppose that there exists a sequence \( \left( {F}_{n}\right) \) of pairwise-disjoint closed sets whose union is \( \left\lbrack {0,1}\right\rbrack \) . Show that the union of the boundaries of the sets \( {F}_{n} \) is closed and has an empty interior.) .2 Let \( X \) be a Banach space, and \( C \) a closed convex subset of \( X \) that is absorbing—that is, for each \( x \in X \) there exists \( t > 0 \) such that \( {tx} \in C \) . Prove that 0 does not belong to the closure of \( X \smallsetminus C \) . (Suppose the contrary, and show that for each positive integer \( n \) the complement of \( {nC} \) is dense and open in \( X \) .) .3 Prove that if a Banach space is generated by a compact set, then it is finite-dimensional. .4 Let \( X \) be a complete metric space, and \( {\left( {f}_{i}\right) }_{i \in I} \) a family o
f continuous mappings of \( X \) into \( \mathbf{R} \) . Suppose that for each \( x \in X \) there exists \( {M}_{x} > 0 \) such that \( \left| {{f}_{i}\left( x\right) }\right| \leq {M}_{x} \) for all \( i \in I \) . Prove that there exist a nonempty open set \( E \subset X \) and a positive integer \( N \) such that \( \left| {{f}_{i}\left( x\right) }\right| \leq N \) for all \( i \in I \) and all \( x \in E \) . (Let \[ {C}_{n, i} = \left\{ {x \in X : \left| {{f}_{i}\left( x\right) }\right| \leq n}\right\} \] \[ {C}_{n} = \mathop{\bigcap }\limits_{{i \in I}}{C}_{n, i} \] and use Baire's Theorem.) A mapping \( f \) between metric spaces \( X \) and \( Y \) is called an open mapping if \( f\left( S\right) \) is an open subset of \( Y \) whenever \( S \) is an open subset of \( X \) . ## (6.3.4) Exercise Prove that a linear mapping \( T \) between normed spaces \( X, Y \) is open if and only if there exists \( r > 0 \) such that \( B\left( {0, r}\right) \subset T\left( {\bar{B}\left( {0,1}\right) }\right) \) . We now aim to apply Baire's Theorem to prove the following fundamental result on linear mappings between Banach spaces. (6.3.5) The Open Mapping Theorem. A bounded linear mapping of a Banach space onto a Banach space is open. The next lemma prepares us for the proof of this theorem. (6.3.6) Lemma. Let \( T \) be a linear mapping of a Banach space \( X \) into a normed space \( Y \) . Then \( T \) is open if and only if there exists \( r > 0 \) such that \( B\left( {0, r}\right) \subset \overline{T\left( {\bar{B}\left( {0,1}\right) }\right) }. \) Proof. Suppose that such a real number \( r \) exists. In view of the preceding exercise, it suffices to prove that \( B\left( {0, r/2}\right) \subset T\left( {\bar{B}\left( {0,1}\right) }\right) \) . Given \( y \in B\left( {0, r/2}\right) \), since \( {2y} \in B\left( {0, r}\right) \), we can find an element \( {x}_{1} \) of the unit ball of \( X \) such that \[ \begin{Vmatrix}{{2y} - T{x}_{1}}\end{Vmatrix} < \frac{r}{2}. \] So \( {2}^{2}y - {2T}{x}_{1} \in B\left( {0, r}\right) \), and therefore there exists \( {x}_{2} \) in the unit ball of \( X \) such that \[ \begin{Vmatrix}{{2}^{2}y - {2T}{x}_{1} - T{x}_{2}}\end{Vmatrix} < \frac{r}{2}. \] Carrying on in this way, we construct a sequence \( \left( {x}_{n}\right) \) of elements of the unit ball of \( X \) such that \[ \begin{Vmatrix}{{2}^{N}y - {2}^{N - 1}T{x}_{1} - {2}^{N - 2}T{x}_{2} - \cdots - T{x}_{N}}\end{Vmatrix} < \frac{r}{2} \] for each \( N \) . 
Thus \[ \begin{Vmatrix}{y - \mathop{\sum }\limits_{{n = 1}}^{N}{2}^{-n}T{x}_{n}}\end{Vmatrix} < {2}^{-N - 1}r \] and therefore the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}T{x}_{n} \) converges to \( y \) . Since \[ \mathop{\sum }\limits_{{n = j}}^{k}{2}^{-n}\begin{Vmatrix}{x}_{n}\end{Vmatrix} \leq \mathop{\sum }\limits_{{n = j}}^{k}{2}^{-n} \] whenever \( k > j \), we see from Exercise (4.1.8: 2) that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}{x}_{n} \) converges to an element \( x \) in the unit ball of \( X \) . The boundedness of \( T \) now ensures that \[ {Tx} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{2}^{-n}T{x}_{n} = y. \] Hence \( B\left( {0, r/2}\right) \subset T\left( {\bar{B}\left( {0,1}\right) }\right) \), and therefore \( T \) is open. The converse is trivial. Proof of the Open Mapping Theorem. Let \( T \) be a bounded linear mapping of a Banach space \( X \) onto a Banach space \( Y \) . Then \[ Y = T\left( X\right) = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }\overline{T\left( {\bar{B}\left( {0, n}\right) }\right) }, \] where each of the sets \( \overline{T\left( {\bar{B}\left( {0, n}\right) }\right) } \) is closed in \( Y \) . By Exercise (6.3.2:1), there exists a positive integer \( N \) such that \( \overline{T\left( {\bar{B}\left( {0, N}\right) }\right) } \) has a nonempty interior; so there exist \( {y}_{1} \in Y \) and \( R > 0 \) such that \[ B\left( {{y}_{1}, R}\right) \subset \overline{T\left( {\bar{B}\left( {0, N}\right) }\right) }. \] Setting \( z = {N}^{-1}{y}_{1} \) and \( r = {N}^{-1}R \), we easily see that \[ B\left( {z, r}\right) \subset \overline{T\left( {\bar{B}\left( {0,1}\right) }\right) }. \] So if \( y \in Y \) and \( \parallel y\parallel < r \), then \[ z \pm y \in \overline{T\left( {\bar{B}\left( {0,1}\right) }\right) } \] and therefore \[ y = \frac{1}{2}\left( {\left( {z + y}\right) - \left( {z - y}\right) }\right) \in \overline{T\left( {\bar{B}\left( {0,1}\right) }\right) }. \] Hence \[ B\left( {0, r}\right) \subset \overline{T\left( {\bar{B}\left( {0,1}\right) }\right) }, \] and therefore, by Lemma (6.3.6), \( T \) is open. The Open Mapping Theorem is one of a number of closely interrelated results. (6.3.7) Banach’s Inverse Mapping Theorem. A one-one bounded linear mapping of a Banach space onto a Banach space has a bounded linear inverse. Proof. Let \( T \) be a one-one bounded linear mapping of a Banach space \( X \) onto a Banach space \( Y \) . It is routine to prove that \( {T}^{-1} \) is a linear mapping from \( Y \) onto \( X \) . By the Open Mapping Theorem (6.3.5), if \( U \) is an open subset of \( X \), then \[ {\left( {T}^{-1}\right) }^{-1}\left( U\right) = T\left( U\right) \] is open in \( Y \) ; so \( {T}^{-1} \) is continuous, by Proposition (3.2.2), and is therefore a bounded linear mapping, by Proposition (4.2.1). By the graph of a mapping \( f : X \rightarrow Y \) we mean the subset \[ \mathcal{G}\left( f\right) = \{ \left( {x, f\left( x\right) }\right) : x \in X\} \] of \( X \times Y \) . (The graph of \( f \) is really the same as the function \( f \) itself, regarded as a set of ordered pairs.) (6.3.8) The Closed Graph Theorem. A linear mapping of a Banach space \( X \) into a Banach space \( Y \) is bounded if and only if its graph is a closed subset of \( X \times Y \) . Proof. Let \( T \) be a linear mapping of \( X \) into \( Y \) . It is a simple exercise to show that if \( T \) is bounded, then its graph is a closed subset of \( X \times Y \) . 
Suppose, conversely, that \( \mathcal{G}\left( T\right) \) is closed in \( X \times Y \) . Since \( X \times Y \), a product of complete metric spaces, is complete (by Proposition (3.5.10)), we see from Proposition (3.2.9) that \( \mathcal{G}\left( T\right) \), which is clearly a subspace of \( X \times Y \), is a Banach space. Define a mapping \( H \) of \( \mathcal{G}\left( T\right) \) onto \( X \) by \[ H\left( {x,{Tx}}\right) = x\;\left( {x \in X}\right) . \] It is straightforward to show that \( H \) is one-one and linear. Also, \[ \parallel H\left( {x,{Tx}}\right) \parallel \leq \parallel x\parallel + \parallel {Tx}\parallel \] \[ \leq 2\max \{ \parallel x\parallel ,\parallel {Tx}\parallel \} \] \[ = 2\parallel \left( {x,{Tx}}\right) \parallel \] so \( H \) is bounded. It follows from Banach’s Inverse Mapping Theorem (6.3.7) that \( {H}^{-1} \) is a bounded linear mapping of \( X \) onto \( \mathcal{G}\left( T\right) \) ; but \[ \parallel {Tx}\parallel \leq \parallel \left( {x,{Tx}}\right) \parallel = \begin{Vmatrix}{{H}^{-1}x}\end{Vmatrix} \leq \begin{Vmatrix}{H}^{-1}\end{Vmatrix}\parallel x\parallel \] for all \( x \in X \), and so \( T \) is bounded. We met the following result - the Uniform Boundedness Theorem - in Exercise (4.2.2:14), where you were asked to fill in the details of a relatively little known elementary proof. We now place the Uniform Boundedness Theorem in its normal context, with its standard proof. (6.3.9) Theorem. Let \( {\left( {T}_{i}\right) }_{i \in I} \) be a family of bounded linear mappings from a Banach space \( X \) into a Banach space \( Y \), such that \( \left\{ {\begin{Vmatrix}{{T}_{i}x}\end{Vmatrix} : i \in I}\right\} \) is bounded for each \( x \in X \) . Then \( \left\{ {\begin{Vmatrix}{T}_{i}\end{Vmatrix} : i \in I}\right\} \) is bounded. Proof. Our hypotheses ensure that for each \( x \in X \) , \[ {u}_{x}\left( i\right) = {T}_{i}x\;\left( {i \in I}\rig
ht) \] defines an element \( {u}_{x} \) of \( \mathcal{B}\left( {I, Y}\right) \) . Clearly, the mapping \( x \mapsto {u}_{x} \) of \( X \) into \( \mathcal{B}\left( {I, Y}\right) \) is linear. We prove that its graph is closed in \( X \times \mathcal{B}\left( {I, Y}\right) \) . Indeed, if \( \left( {x}_{n}\right) \) is a sequence converging to a limit \( {x}_{\infty } \) in \( X \), such that the sequence \( \left( {u}_{{x}_{n}}\right) \) converges to a limit \( f \) in \( \mathcal{B}\left( {I, Y}\right) \), then for each \( i \in I \) we have \[ \begin{Vmatrix}{f\left( i\right) - {u}_{{x}_{\infty }}\left( i\right) }\end{Vmatrix} \leq \begin{Vmatrix}{f\left( i\right) - {u}_{{x}_{n}}\left( i\right) }\end{Vmatrix} + \begin{Vmatrix}{{u}_{{x}_{n}}\left( i\right) - {u}_{{x}_{\infty }}\left( i\right) }\end{Vmatrix} \] \[ \leq \begin{Vmatrix}{f - {u}_{{x}_{n}}}\end{Vmatrix} + \begin{Vmatrix}{{T}_{i}\left( {{x}_{n} - {x}_{\infty }}\right) }\end{Vmatrix} \] \[ \leq \begin{Vmatrix}{f - {u}_{{x}_{n}}}\end{Vmatrix} + \begin{Vmatrix}{T}_{i}\end{Vmatrix}\begin{Vmatrix}{{x}_{n} - {x}_{\infty }}\end{Vmatrix} \] \[ \rightarrow 0\text{as}n \rightarrow \infty \text{.} \] Hence \( f = {u}_{{x}_{\infty }} \), and so the linear mapping \( x \mapsto {u}_{x} \) has a closed graph. By Proposition (4.5.1) and the Closed Graph Theorem (6.3.8), this mapping is bounded. Let \[ c = \sup \left\{ {\begin{Vmatrix}{u}_{x}\end{Vmatrix} : x \in X,\parallel x\parallel \leq 1}\right\} . \] Then for all \( i \in I \) and all \( x \) in the unit ball of \( X \) , \[ \begin{Vmatrix}{{T}_{i}x}\end{Vmatrix} = \begin{Vmatrix}{{u}_{x}\left( i\right) }\end{Vmatrix} \leq \begin{Vmatrix}{u}_{x}\end{Vmatrix} \leq c. \] ## (6.3.10) Exercises .1 Prove that if \( T \) is an open bounded linear mapping of a Banach space \( X \) into a normed space \( Y \), then the range of \( T \) is complete. .2 Let \( X \) be a separable real Banach space with a Schauder basis \( \left( {x}_{n}\right) \) , and let \( S \) be the linear space consisting of all sequences \( {\left( {\lambda }_{n}\right) }_{n = 1}^{\infty } \) of real numbers such that the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}{x}_{n} \) converges in \( X \) . Show that \[ \begin{Vmatrix}{\left( {\lambda }_{n}\right) }_{n = 1}^{\infty }\end{Vmatrix} = \mathop{\sup }\limits_{{N \geq 1}}\begin{Vmatrix}{\mathop{\sum }\limits_{{n = 1}}^{N}{\lambda }_{n}{x}_{n}}\end{Vmatrix} \] defines a norm on \( S \) . Prove that \( S \) is a Banach space with respect to this norm. Then show that the mapping \[ {\left( {\lambda }_{n}\right) }_{n = 1}^{\infty } \mapsto \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}{x}_{n} \] is a bounded linear isomorphism of \( S \) onto \( X \) with a continuous inverse. Deduce that for each positive integer \( N \) the coordinate functional \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}{x}_{n} \mapsto {\lambda }_{N} \) belongs to the dual space \( {X}^{ * } \) . .3 Use an argument like that of Exercise (6.3.3:4) to give another proof of the Uniform Boundedness Theorem. .4 Let \( X, Y \) be Banach spaces, and suppose that for all distinct \( y,{y}^{\prime } \) in \( Y \) there exists a bounded linear functional \( f \) on \( Y \) such that \( f\left( y\right) \neq \) \( f\left( {y}^{\prime }\right) \) . 
Let \( T : X \rightarrow Y \) be a linear mapping such that if \( \left( {x}_{n}\right) \) is a sequence in \( X \) converging to 0, then \( \left( {f \circ T}\right) \left( {x}_{n}\right) \) converges to 0 for each bounded linear functional \( f \) on \( Y \) . Prove that \( T \) is bounded. (Use the Closed Graph Theorem.) .5 Let \( \left( {T}_{n}\right) \) be a sequence of bounded linear mappings of a Banach space \( X \) into a Banach space \( Y \), such that the sequence \( \left( {{T}_{n}x}\right) \) converges in \( Y \) for each \( x \in X \) . Prove that \[ {Tx} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{T}_{n}x \] defines a bounded linear mapping \( T : X \rightarrow Y \) . (Use the Uniform Boundedness Theorem.) .6 Let \( S, T \) be mappings of a Hilbert space \( H \) into itself such that \( \langle {Sx}, y\rangle = \langle x,{Ty}\rangle \) for all \( x, y \in H \) . Prove that \( S \) and \( T \) are linear mappings. Then give two proofs that both \( S \) and \( T \) are bounded. (For one proof use the Closed Graph Theorem.) .7 Let \( A, B \) be disjoint subspaces of a Banach space \( X \) such that each element \( x \) of \( X \) can be written uniquely in the form \[ x = {P}_{A}x + {P}_{B}x \] with \( {P}_{A}x \in A \) and \( {P}_{B}x \in B \) . Prove that the oblique projection mappings \( {P}_{A} : X \rightarrow A \) and \( {P}_{B} : X \rightarrow B \) so defined are linear, and that they are bounded if and only if \( A \) and \( B \) are closed in \( X \) . .8 Prove Landau’s Theorem: if \( \left( {a}_{n}\right) \) is a sequence of complex numbers such that \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}{x}_{n} \) converges for each \( \left( {x}_{n}\right) \in {l}_{2}\left( \mathbf{C}\right) \), then \( \left( {a}_{n}\right) \in {l}_{2}\left( \mathbf{C}\right) \) . (For each \( x = \left( {x}_{n}\right) \in {l}_{2}\left( \mathbf{C}\right) \) and each \( k \), define \( {s}_{k}\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{k}{a}_{n}{x}_{n} \) . Apply the Uniform Boundedness Theorem to the sequence \( {\left( {s}_{k}\right) }_{k = 1}^{\infty } \) of linear functionals on \( {l}_{2}\left( \mathbf{C}\right) \), to show that the partial sums of \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {a}_{n}\right| }^{2} \) are bounded.) .9 Let \( w \) be a weight function on the compact interval \( I = \left\lbrack {a, b}\right\rbrack \) . For each positive integer \( n \) let \( \left( {{x}_{n,0},{x}_{n,1},\ldots ,{x}_{n, n}}\right) \) be a partition of \( I \) , and define a linear functional \( {L}_{n} \) on \( \mathcal{C}\left( I\right) \) by \[ {L}_{n}f = \mathop{\sum }\limits_{{k = 0}}^{n}{c}_{n, k}f\left( {x}_{n, k}\right) \] where each \( {c}_{n, k} \in \mathbf{R} \) . Prove Polya’s Theorem on approximate quadrature: in order that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{L}_{n}f = {\int }_{a}^{b}f\left( x\right) w\left( x\right) \mathrm{d}x \) for all \( f \in \mathcal{C}\left( I\right) \), it is necessary and sufficient that \( \mathop{\sup }\limits_{{n \geq 1}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {c}_{n, k}\right| < \infty \) . .10 Let \( T \) be an operator on a Hilbert space. Prove that \( \operatorname{ran}\left( T\right) \) is closed if and only if \( \operatorname{ran}\left( {T}^{ * }\right) \) is closed. (Suppose that \( \operatorname{ran}\left( T\right) \) is closed, note Exercise (5.3.3: 6), and show that \( \operatorname{ran}\left( {{T}^{ * }T}\right) \) is closed. 
To do so, let \( {T}^{ * }T{x}_{n} \rightarrow \xi \), and use the Uniform Boundedness Theorem to show that the linear functional \( {Tx} \mapsto \langle x,\xi \rangle \) is bounded on the Hilbert space \( \operatorname{ran}\left( T\right) \) .) Perhaps the standard illustration of the Uniform Boundedness Theorem in action is the proof that there exists a \( {2\pi } \) -periodic continuous function \( f : \mathbf{R} \rightarrow \mathbf{C} \) whose Fourier series does not converge at 0 . Let \( \mathcal{S} \) denote the subspace of \( {\mathcal{C}}^{\infty }\left( {\mathbf{R},\mathbf{C}}\right) \) consisting of all \( {2\pi } \) -periodic continuous mappings of \( \mathbf{R} \) into \( \mathbf{C} \) . Recall that the Fourier series, or Fourier expansion, of \( f \in \mathcal{S} \) at \( x \) is defined to be \[ s\left( {f, x}\right) = \mathop{\sum }\limits_{{n = - \infty }}^{\infty }\widehat{f}\left( n\right) {\mathrm{e}}^{\mathrm{i}{nx}} \] where the Fourier coefficients are given by \[ \widehat{f}\left( n\right) = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left( t\right) {\mathrm{e}}^{-\mathrm{i}{nt}}\mathrm{\;d}t\;\left( {n \in \mathbf{Z}}\right) . \] For each positive integer \( N \) let \[ {s}_{N}\left( {f, x}\right) = \mathop{\sum }\limits_{{n = - N}}^{N}\widehat{f}\left( n\right) {\mathrm{e}}^{\mathrm{i}{nx}}. \] Then \[ {s}_{N}\left( {f, x}\right) = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left( t\right) {D}_{N}\left( {x - t}\right) \mathrm{d}t = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }f\left( {-t}\right) {D}_{N}\left( t\right) \mathrm{d}t \] where the Dirichlet kernel \( {D}_{N} \) is defined by \[ {D}_{N}\left( t\right) = \mathop{\sum }\limits_{{n = - N}}^{N}{\mathrm{e}}^{\mathrm{i}{nt}} \] Define a linear mapping \( {u}_{N} : \mathcal{S} \rightarrow \mathbf{C} \) by \[ {u}_{N}\left( f\right) = {s}_{N}\left( {f,0}\right) \] Then \[ \left| {{u}_{N}\left( f\right) }\right| \leq \frac{1}{2\pi }\parallel f\parallel {\int }_{-\pi }^{\pi }\left| {{D}_{N}\left( t\right) }\right| \mathrm{d}t \] where \( \parallel f\parallel \) is the sup norm of \( f \) . Thus \( {u}_{N} \) is bounded, and \[ \begin{Vmatrix}{u}_{N
}\end{Vmatrix} \leq \frac{1}{2\pi }{\int }_{-\pi }^{\pi }\left| {{D}_{N}\left( t\right) }\right| \mathrm{d}t \] (2) On the other hand, there exists a sequence \( \left( {f}_{n}\right) \) of elements of \( \mathcal{S} \) such that - \( - 1 \leq {f}_{n} \leq 1 \) for each \( n \), and \[ \text{-}{f}_{n}\left( t\right) \rightarrow \operatorname{sgn}\left( {{D}_{N}\left( t\right) }\right) \text{for each}t \in \mathbf{R}\text{;} \] see Exercise (6.3.11: 2). Using Lebesgue’s Dominated Convergence Theorem (2.2.14), we now obtain \[ {u}_{N}\left( {f}_{n}\right) = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }{f}_{n}\left( {-t}\right) {D}_{N}\left( t\right) \mathrm{d}t \rightarrow \frac{1}{2\pi }{\int }_{-\pi }^{\pi }\operatorname{sgn}\left( {{D}_{N}\left( {-t}\right) }\right) {D}_{N}\left( t\right) \mathrm{d}t \] as \( n \rightarrow \infty \) . But \[ {D}_{N}\left( t\right) = \frac{\sin \left( {N + \frac{1}{2}}\right) t}{\sin \left( \frac{t}{2}\right) } = {D}_{N}\left( {-t}\right) , \] so \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{u}_{N}\left( {f}_{n}\right) = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }\left| {{D}_{N}\left( t\right) }\right| \mathrm{d}t \] and therefore, in view of (2), \[ \begin{Vmatrix}{u}_{N}\end{Vmatrix} = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }\left| {{D}_{N}\left( t\right) }\right| \mathrm{d}t \] Next, noting that \( \left| {\sin \left( {t/2}\right) }\right| \leq t/2 \) for all \( t > 0 \), we have \[ \begin{Vmatrix}{u}_{N}\end{Vmatrix} = \frac{1}{2\pi }{\int }_{-\pi }^{\pi }\left| {{D}_{N}\left( t\right) }\right| \mathrm{d}t \] \[ = \frac{1}{\pi }{\int }_{0}^{\pi }\left| \frac{\sin \left( {N + \frac{1}{2}}\right) t}{\sin \left( \frac{t}{2}\right) }\right| \mathrm{d}t \] \[ \geq \frac{2}{\pi }{\int }_{0}^{\pi }\frac{1}{t}\left| {\sin \left( {N + \frac{1}{2}}\right) t}\right| \mathrm{d}t \] \[ = \frac{2}{\pi }{\int }_{0}^{\left( {N + \frac{1}{2}}\right) \pi }\frac{1}{t}\left| {\sin t}\right| \mathrm{d}t \] \[ > \frac{2}{\pi }\mathop{\sum }\limits_{{n = 1}}^{N}\frac{1}{n\pi }{\int }_{\left( {n - 1}\right) \pi }^{n\pi }\left| {\sin t}\right| \mathrm{d}t \] \[ = \frac{4}{{\pi }^{2}}\mathop{\sum }\limits_{{n = 1}}^{N}\frac{1}{n} \] and therefore \( \begin{Vmatrix}{u}_{N}\end{Vmatrix} \rightarrow \infty \) as \( N \rightarrow \infty \) . By the Uniform Boundedness Theorem (6.3.9), there exists \( f \in \mathcal{S} \) such that the set \( \left\{ {\left| {{u}_{N}\left( f\right) }\right| : N \geq 1}\right\} \) is unbounded. Hence the Fourier series of \( f \) cannot converge at 0 . ## (6.3.11) Exercises .1 Prove that \[ \mathop{\sum }\limits_{{n = - N}}^{N}{\mathrm{e}}^{\mathrm{i}{nt}} = \frac{\sin \left( {N + \frac{1}{2}}\right) t}{\sin \left( \frac{t}{2}\right) } \] for each natural number \( N \) . .2 Prove that, in the notation of the preceding paragraphs, there exists a sequence \( \left( {f}_{n}\right) \) of elements of \( \mathcal{S} \) such that \( - 1 \leq {f}_{n} \leq 1 \) for each \( n \) , and such that \( {f}_{n}\left( t\right) \rightarrow \operatorname{sgn}\left( {{D}_{N}\left( t\right) }\right) \) for each \( t \in \mathbf{R} \) . .3 In view of the Riemann-Lebesgue Lemma (Exercise (2.3.3:13)), \[ {Tf} = {\left( \widehat{f}\left( n\right) \right) }_{n = 1}^{\infty } \] defines a mapping \( T : {L}_{1}\left\lbrack {-\pi ,\pi }\right\rbrack \rightarrow {c}_{0} \) . Prove that \( T \) is one-one but not onto \( {c}_{0} \) . 
(Show that there exists \( \alpha > 0 \) such that \( \begin{Vmatrix}{T\left( {D}_{n}\right) }\end{Vmatrix} \geq \) \( \alpha {\begin{Vmatrix}{D}_{n}\end{Vmatrix}}_{1} \) for each \( n \) .) Appendix A What Is a Real Number? In this appendix we sketch Bishop's adaptation of Cauchy's construction of the set \( \mathbf{R} \), based on the idea that a real number is an object that can be approximated arbitrarily closely by rational numbers. Passing over the standard construction of the set \( \mathbf{Z} \) of integers, we define a rational number to be an ordered pair \( \left( {m, n}\right) \) of integers, usually written \( m/n \) or \( \frac{m}{n} \), such that \( n \neq 0 \) . Two rational numbers \( m/n \) and \( {m}^{\prime }/{n}^{\prime } \) are said to be equal, and we write \( m/n = {m}^{\prime }/{n}^{\prime } \), if \( m{n}^{\prime } \) and \( {m}^{\prime }n \) are equal integers; this relation of equality is an equivalence relation. We should really define a rational number to be an equivalence class of ordered pairs of integers relative to the equivalence relation of equality that we have just introduced. In that case two rational numbers (equivalence classes) would be equal if and only if they were one and the same. However, it more closely reflects common practice if we follow the approach in which rational numbers are given by the integer pairs themselves, and equality of rational numbers is a defined notion (given by a certain condition on the integer pairs) rather than the logical one of identity. \( {}^{1} \) We follow a similar approach to the equality of real numbers in due course. In every case we use without further mention the standard symbol \( = \) to denote equality. \( {}^{1} \) For example, from childhood we are led to consider the rational numbers \( \frac{1}{2},\frac{2}{4} \), and \( \frac{3}{6} \) as equal, not as representatives of some equivalence class. For another example, we consider the numbers \( 1,0 \cdot {999}\cdots \), and \( \frac{5}{5} \) to be equal although they are not logically identical (they are presented to us in different ways). We omit the details of the familiar algebraic operations and the order relations \( > , \geq \) on the set \( \mathbf{Q} \) of rational numbers. We identify the integer \( n \) with the rational number \( n/1 \) . By a real number we mean a sequence \( x = {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) of rational numbers that is regular in the sense that \[ \left| {{x}_{m} - {x}_{n}}\right| \leq \frac{1}{m} + \frac{1}{n}\;\left( {m, n \in {\mathbf{N}}^{ + }}\right) . \] The term \( {x}_{n} \) is called the \( n \) th rational approximation to the real number \( x \) . The set of real numbers is, of course, denoted by \( \mathbf{R} \) . We identify a rational number \( r \) with the real number \( \left( {r, r, r,\ldots }\right) \) ; with that identification, \( \mathbf{Q},\mathbf{N} \), and \( \mathbf{Z} \) become subsets of \( \mathbf{R} \) . To specify completely the set \( \mathbf{R} \) of real numbers, we must equip it with an appropriate notion of equality. Two real numbers \( x = \left( {x}_{n}\right) \) and \( y = \left( {y}_{n}\right) \) are said to be equal if \[ \left| {{x}_{n} - {y}_{n}}\right| \leq \frac{2}{n}\;\left( {n \in {\mathbf{N}}^{ + }}\right) \] Note that this notion of equality is an equivalence relation: it is clearly reflexive and symmetric; its transitivity is a simple consequence of the following result. (A.1) Lemma. 
Two real numbers \( x = \left( {x}_{n}\right) \) and \( y = \left( {y}_{n}\right) \) are equal if and only if for each positive integer \( k \) there exists a positive integer \( {N}_{k} \) such that \( \left| {{x}_{n} - {y}_{n}}\right| \leq 1/k \) whenever \( n \geq {N}_{k} \) . Proof. If \( x = y \), then for each \( k \) we need only take \( {N}_{k} = {2k} \) . Conversely, suppose that for each \( k \) there exists \( {N}_{k} \) with the stated property, and consider any positive integers \( n \) and \( k \) . Setting \( m = k + {N}_{k} \), we have \[ \left| {{x}_{n} - {y}_{n}}\right| \leq \left| {{x}_{n} - {x}_{m}}\right| + \left| {{x}_{m} - {y}_{m}}\right| + \left| {{y}_{m} - {y}_{n}}\right| \] \[ \leq \left( {\frac{1}{n} + \frac{1}{m}}\right) + \frac{1}{k} + \left( {\frac{1}{n} + \frac{1}{m}}\right) \] \[ < \frac{2}{n} + \frac{3}{k} \] Since this holds for all positive integers \( k \), we see that \( \left| {{x}_{n} - {y}_{n}}\right| \leq 2/n \) . But \( n \) is arbitrary, so \( x = y \) . ## (A.2) Exercises .1 Complete the proof that equality of real numbers is an equivalence relation. .2 Let \( k \) be any positive integer. Show that the operation which assigns to each real number \( {\left( {x}_{n}\right) }_{n = 1}^{\infty } \) its \( k \) th rational approximation \( {x}_{k} \) does not preserve equality. .3 Prove that two real numbers \( x = \left( {x}_{n}\right) \) and \( y = \left( {y}_{n}\right) \) are equal if and only if for each \( c > 0 \) and each positive integer \( k \) there exists \( {N}_{k} \) such that \( \left| {{x}_{n} - {y}_{n}}\right| \leq c/k \) for all \( n \geq {N}_{k} \) . To introduce the algebraic operations on \( \mathbf{R} \) we need a special bound for the terms of a regular sequence \( x = \left( {x}_{n}\right) \) of rational numbers. We define the canonical bound \( {K}_{x} \) of \( x \) to be the least positive integer greater than \( \left| {x}_{1}\right| + 2 \) . It is easy to show that \( \left| {x}_{n}\right| < {K}_{x} \) for all \( n \) . The arithmetic operations on real numbers \( x = \left
( {x}_{n}\right) \) and \( y = \left( {y}_{n}\right) \) are defined in terms of the rational approximations to those numbers as follows. \[ {\left( x + y\right) }_{n} = {x}_{2n} + {y}_{2n} \] \[ {\left( x - y\right) }_{n} = {x}_{2n} - {y}_{2n} \] \[ {\left( xy\right) }_{n} = {x}_{2\kappa n}{y}_{2\kappa n},\text{ where }\kappa = \max \left\{ {{K}_{x},{K}_{y}}\right\} \] \[ \max \{ x, y{\} }_{n} = \max \left\{ {{x}_{n},{y}_{n}}\right\} \] \[ \min \{ x, y{\} }_{n} = \min \left\{ {{x}_{n},{y}_{n}}\right\} \] \[ {\left| x\right| }_{n} = \left| {x}_{n}\right| \] Here, for example, \( {\left( x + y\right) }_{n} \) denotes the \( n \) th rational approximation to the real number \( x + y \), and \( \max \left\{ {{x}_{n},{y}_{n}}\right\} \) is the maximum, computed in the usual way, of the rational numbers \( {x}_{n} \) and \( {y}_{n} \) . Of course, we must verify that the foregoing definitions yield real numbers; we illustrate this verification with the case of the product \( {xy} \) . Writing \( {z}_{n} = {x}_{2\kappa n}{y}_{2\kappa n} \), so that \( {xy} = \left( {z}_{n}\right) \), for all positive integers \( m \) and \( n \) we have \[ \left| {{z}_{m} - {z}_{n}}\right| = \left| {{x}_{2\kappa m}\left( {{y}_{2\kappa m} - {y}_{2\kappa n}}\right) + {y}_{2\kappa n}\left( {{x}_{2\kappa m} - {x}_{2\kappa n}}\right) }\right| \] \[ \leq \left| {x}_{2\kappa m}\right| \left| {{y}_{2\kappa m} - {y}_{2\kappa n}}\right| + \left| {y}_{2\kappa n}\right| \left| {{x}_{2\kappa m} - {x}_{2\kappa n}}\right| \] \[ \leq \kappa \left( {\frac{1}{2\kappa m} + \frac{1}{2\kappa n}}\right) + \kappa \left( {\frac{1}{2\kappa m} + \frac{1}{2\kappa n}}\right) \] \[ = \frac{1}{m} + \frac{1}{n}\text{.} \] Thus \( {xy} \) is a regular sequence of rational numbers - that is, a real number. In the rest of this appendix, \( x = \left( {x}_{n}\right), y = \left( {y}_{n}\right) \), and \( z = \left( {z}_{n}\right) \) are real numbers. ## (A.3) Exercises .1 Prove that \( \left| {x}_{n}\right| < {K}_{x} \) for each \( n \) . .2 Prove that \( x + y, x - y,\max \{ x, y\} ,\min \{ x, y\} \), and \( \left| x\right| \) are real numbers. .3 Let \( {x}^{\prime } \) and \( {y}^{\prime } \) be real numbers such that \( x = {x}^{\prime } \) and \( y = {y}^{\prime } \) . Prove that \( x + y = {x}^{\prime } + {y}^{\prime } \) and \( {xy} = {x}^{\prime }{y}^{\prime } \) . Thus the operations of addition and multiplication arise from functions on the Cartesian product \( \mathbf{R} \times \mathbf{R} \) when the relation of equality on that set is defined in the natural way: \( \left( {x, y}\right) = \left( {{x}^{\prime },{y}^{\prime }}\right) \) if and only if \( x = {x}^{\prime } \) and \( y = {y}^{\prime } \) . .4 Sums, differences, products, maxima, and minima of finitely many real numbers are defined inductively: for example, we define \[ \max \left\{ {{x}_{1},\ldots ,{x}_{n + 1}}\right\} = \max \left\{ {\max \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} ,{x}_{n + 1}}\right\} . \] Show that if \( \sigma \) is a permutation of \( \{ 1,\ldots, n\} \), then \[ \max \left\{ {{x}_{\sigma \left( 1\right) },\ldots ,{x}_{\sigma \left( n\right) }}\right\} = \max \left\{ {{x}_{1},\ldots ,{x}_{n}}\right\} . \] .5 Prove each of the following identities. (i) \( x + y = y + x \) (ii) \( x + \left( {y + z}\right) = \left( {x + y}\right) + z \) (iii) \( {xy} = {yx} \) (iv) \( 0 + x = x + 0 = x \) (v) \( {1x} = {x1} = x \) . 
(These should serve to convince you that addition and multiplication, as defined previously, have the properties that we expect from elementary school.) .6 Prove that for each \( m \) the \( m \) th rational approximation to \( 1/n - \) \( \left| {x - {x}_{n}}\right| \) is \( 1/n - \left| {{x}_{4m} - {x}_{n}}\right| \) . The real number \( x = \left( {x}_{n}\right) \) is said to be positive if there exists \( n \) such that \( {x}_{n} > 1/n \) . We define \( x > y \) to mean that \( x - y \) is positive; thus \( x > 0 \) if and only if \( x \) is positive. On the other hand, we say that \( x \) is - negative if \( - x \) is positive, and - nonnegative if \( {x}_{n} \geq - 1/n \) for all \( n \) . We write \( x \geq y \) to denote that \( x - y \) is nonnegative, and we define \( x < y \) and \( x \leq y \) to have the usual meanings relative to the relations \( > , \geq \) . ## (A.4) Exercise Prove that if \( x > 0 \), then \( x \geq 0 \) . (A.5) Proposition. A real number \( x = \left( {x}_{n}\right) \) is positive if and only if there exists a positive integer \( N \) such that \( {x}_{m} \geq 1/N \) for all \( m \geq N \) . On the other hand, \( x \) is nonnegative if and only if for each positive integer \( k \) there exists a positive integer \( {N}_{k} \) such that \( {x}_{m} \geq - 1/k \) for all \( m \geq {N}_{k} \) . Proof. If \( x \) is positive, then \( {x}_{n} > 1/n \) for some \( n \) . Choosing the positive integer \( N \) so that \( 2/N \leq {x}_{n} - 1/n \), for each \( m \geq N \) we have \[ {x}_{m} \geq {x}_{n} - \left| {{x}_{m} - {x}_{n}}\right| \] \[ \geq {x}_{n} - \frac{1}{m} - \frac{1}{n} \] \[ \geq {x}_{n} - \frac{1}{N} - \frac{1}{n} \] \[ > \frac{1}{N}\text{.} \] So the required property holds. If, conversely, that property holds, then \( {x}_{N + 1} > 1/\left( {N + 1}\right) \), so \( x > 0 \) . The proof of the second part of the proposition is left as an exercise. ## (A.6) Exercises .1 Prove the second part of the preceding proposition. .2 Prove that if \( x = {x}^{\prime }, y = {y}^{\prime } \), and \( x > y \) (respectively, \( x \geq y \) ), then \( {x}^{\prime } > {y}^{\prime } \) (respectively, \( {x}^{\prime } \geq {y}^{\prime } \) ). .3 Prove the Axiom of Archimedes: if \( x > 0 \) and \( y \geq 0 \), then there exists \( n \in {\mathbf{N}}^{ + } \) such that \( {nx} > y \) . .4 Prove that on \( \mathbf{Q} \) the relations \( > \) and \( \geq \), defined as for real numbers, coincide with the standard elementary order relations between rational numbers. .5 Prove the triangle inequality for real numbers: \( \left| {x + y}\right| \leq \left| x\right| + \left| y\right| \) . It is left as a relatively straightforward exercise to prove most of the elementary properties of the partial orders \( > , \geq \) on \( \mathbf{R} \) . However, we need to tie up a few loose ends, the first of which concerns the order density of \( \mathbf{Q} \) in \( \mathbf{R} \) and requires a simple lemma. (A.7) Lemma. \( \;\left| {x - {x}_{n}}\right| \leq 1/n \) for each \( n \) . Proof. Fix the positive integer \( n \) . By Exercise (A.3:6), for each \( m \) the \( m \) th rational approximation to \( 1/n - \left| {x - {x}_{n}}\right| \) is \[ \frac{1}{n} - \left| {{x}_{4m} - {x}_{n}}\right| \geq \frac{1}{n} - \left( {\frac{1}{4m} + \frac{1}{n}}\right) = - \frac{1}{4m} > - \frac{1}{m}. \] Hence \( 1/n - \left| {x - {x}_{n}}\right| \geq 0 \), and therefore \( \left| {x - {x}_{n}}\right| \leq 1/n \) . (A.8) Proposition. 
Q is order dense in \( \mathbf{R} \) -that is, for all \( x \) and \( y \) in \( \mathbf{R} \) with \( x < y \), there exists \( r \in \mathbf{Q} \) such that \( x < r < y \) . Proof. Since \[ 0 < y - x = {\left( {y}_{2n} - {x}_{2n}\right) }_{n = 1}^{\infty }, \] there exists \( N \) such that \( {y}_{2N} - {x}_{2N} > 1/N \) . Writing \[ r = \frac{1}{2}\left( {{x}_{2N} + {y}_{2N}}\right) \] and using Lemma (A.7), we have \[ r - x \geq r - {x}_{2N} - \left| {{x}_{2N} - x}\right| \] \[ \geq \frac{1}{2}\left( {{y}_{2N} - {x}_{2N}}\right) - \frac{1}{2N} > 0, \] and similarly \( y - r > 0 \) . Hence \( x < r < y \) . Here is a good application of Proposition (A.8). (A.9) Proposition. If \( x + y > 0 \), then either \( x > 0 \) or \( y > 0 \) . Proof. Let \( x + y > 0 \) . By Proposition (A.8), there exists a rational number \( \alpha \) such that \( 0 < \alpha < x + y \) . Using Exercise (A.6: 3), choose a positive integer \( n > 4/\alpha \) . Let \( r = {x}_{n} \) and \( s = {y}_{n} \) . Then \( r \) and \( s \) are rational; also, by Lemma (A.7), \( \left| {x - r}\right| < \alpha /4 \) and \( \left| {y - s}\right| < \alpha /4 \) . Using the triangle inequality, we now see that \[ r + s \geq \left( {x + y}\right) - \left( {\left| {x - r}\right| + \left| {y - s}\right| }\right) \] \[ > \alpha - \left( {\frac{\alpha }{4} + \frac{\alpha }{4}}\right) \] \[ = \frac{\alpha }{2}\text{.} \] Since \( r \) and \( s \) are rational numbers, either \( r > \alpha /4 \) or \( s > \alpha /4 \) . In the first case, \( x \geq r - \left| {x - r}\right| > 0 \) ; in the second, \( y > 0 \) . For each nonzero real number \( x \) the reciprocal, or inverse, of \( x \) is the real number \( \frac{1}{x} \) (also written \( 1/x \) or \( {x}^{-1} \) ) defined as follows. Choose a positive integer \( N \) such that \( \left| {x}_{n}\right| \geq 1/N \) for all \( n \geq N \), and set \[ {\left( \frac{1}{x}\right) }_{n} = \left\{ \begin{array}{ll} 1/{x}_{{N}^{3}} & \text{ if }n < N \\ 1/{x}_{n{N}^{2}} & \text{ if }n \geq N \end{array}\right. \] The last set of exercises in this appendix shows that this is a good definition of \( 1/x \) . ## (A.10) Exercises .1 Let \( x \) be a nonzero real number, and \( 1/x \) the reciprocal of \( x \) as just defined. Prove that \( 1/x \) is a real number, and that it is the unique real number \( t \) such that \( {xt} = 1 \) . .2 Let \( x \) be a nonzero real number, and let \( N \) be as in the definition of \( 1/x \) . Let \( M \) be a positive integer such that \( \left| {x}_{n}\right| \geq 1/M \) for all \( n \geq M \) , and define a real number \( y = \left( {y}_{n}\right) \) by \[ {y}_{n} = \left\{ \begin{array}{ll} 1/{x}_{{M}^{3}} & \text{ if }n < M \\ 1/{x}_{n{M}^{2}} & \text{ if }n \geq M. \end{array}\right. \] Give two proofs that \( y = 1/x \) . .3 Prove that the operation that assigns \( 1/x \) to the nonzero real number \( x \) is a function (respects equality) and maps the set of nonzero real numbers onto itself. Appendix B Axioms of Choice an
d Zorn's Lemma In the early years of this century it was recognised that the following principle, the Axiom of Choice, was necessary for the proofs of several important theorems in mathematics. AC If \( \mathcal{F} \) is a nonempty family of pairwise-disjoint nonempty sets, then there exists a set that intersects each member of \( \mathcal{F} \) in exactly one element. In particular, Zermelo used this axiom explicitly in his proof that every set \( S \) can be well-ordered—that is, there is a total partial order \( \geq \) on \( S \) with respect to which every nonempty subset of \( S \) has a least element [57]. It was shown by Gödel [18] in 1939 that the Axiom of Choice is consistent with the axioms of Zermelo-Fraenkel set theory (ZF), in the sense that the axiom can be added to ZF without leading to a contradiction, and by Cohen [11] in 1963 that the negation of the Axiom of Choice is also consistent with ZF. Thus the Axiom of Choice is independent of ZF: it can be neither proved nor disproved without adding some extra principles to ZF. The Axiom of Choice is commonly used in an equivalent form (the one we used in the proof of Lemma (1.3.5)): \( \mathbf{A}{\mathbf{C}}^{\prime } \) If \( A \) and \( B \) are nonempty sets, \( S \subset A \times B \), and for each \( x \in A \) there exists \( y \in B \) such that \( \left( {x, y}\right) \in S \), then there exists a function \( f : A \rightarrow B \) - called a choice function for \( S \) - such that \( \left( {x, f\left( x\right) }\right) \in S \) for each \( x \in A \) . To prove the equivalence of these two forms of the Axiom of Choice, first assume that the original version \( \mathrm{{AC}} \) of the axiom holds, and consider nonempty sets \( A, B \) and a subset \( S \) of \( A \times B \) such that for each \( x \in A \) there exists \( y \in B \) with \( \left( {x, y}\right) \in S \) . For each \( x \in A \) let \[ {F}_{x} = \{ x\} \times \{ y \in B : \left( {x, y}\right) \in S\} . \] Then \( \mathcal{F} = {\left( {F}_{x}\right) }_{x \in A} \) is a nonempty family of pairwise-disjoint sets, so, by \( \mathrm{{AC}} \), there exists a set \( C \) that has exactly one element in common with each \( {F}_{x} \) . We now define the required choice function \( f : A \rightarrow B \) by setting \[ \left( {x, f\left( x\right) }\right) = \text{the unique element of }C \cap {F}_{x} \] for each \( x \in A \) . 
Now assume that the alternative form \( {\mathrm{{AC}}}^{\prime } \) of the Axiom of Choice holds, and consider a nonempty family \( \mathcal{F} \) of pairwise-disjoint nonempty sets. Taking \[ A = \mathcal{F} \] \[ B = \mathop{\bigcup }\limits_{{X \in \mathcal{F}}}X \] \[ S = \{ \left( {X, x}\right) : X \in \mathcal{F}, x \in X\} \] in \( {\mathrm{{AC}}}^{\prime } \), we obtain a function \[ f : \mathcal{F} \rightarrow \mathop{\bigcup }\limits_{{X \in \mathcal{F}}}X \] such that \( f\left( X\right) \in X \) for each \( X \in \mathcal{F} \) . The range of \( f \) is then a set that has exactly one element in common with each member of \( \mathcal{F} \) . There are two other choice principles that are widely used in analysis. The first of these, the Principle of Countable Choice, is the case \( A = \mathbf{N} \) of \( {\mathrm{{AC}}}^{\prime } \) . The second is the Principle of Dependent Choice: If \( a \in A, S \subset A \times A \), and for each \( x \in A \) there exists \( y \in A \) such that \( \left( {x, y}\right) \in S \), then there exists a sequence \( {\left( {a}_{n}\right) }_{n = 1}^{\infty } \) in \( A \) such that \( {a}_{1} = a \) and \( \left( {{a}_{n},{a}_{n + 1}}\right) \in S \) for each \( n \) . It is a good exercise to show that the Axiom of Choice entails the Principle of Dependent Choice, and that the Principle of Dependent Choice entails the Principle of Countable Choice. Since the last two principles can be derived as consequences of the axioms of \( \mathrm{{ZF}} \), they are definitely weaker than the Axiom of Choice. There are many principles that are equivalent to the Axiom of Choice. One of those, Zorn's Lemma, is needed for our proof of the Hahn-Banach Theorem in Chapter 6. A nonempty subset \( C \) of a partially ordered set \( \left( {A, \succcurlyeq }\right) \) is called a chain if for all \( x, y \in C \) either \( x \succcurlyeq y \) or \( y \succcurlyeq x \) . Zorn’s Lemma states that If every chain in a partially ordered set \( A \) has an upper bound in \( A \), then \( A \) has a maximal element. For a fuller discussion of axioms of choice, Zorn's Lemma, and related matters, see the article by Jech on pages 345-370 of [2]. Appendix C Pareto Optimality In this appendix we show how some of the results and ideas in our main chapters can be applied within theoretical economics. We assume that there are a finite number \( m \) of consumers and a finite number \( n \) of producers. Consumer \( i \) has a consumption set \( {X}_{i} \subset {\mathbf{R}}^{N} \), where a consumption bundle \( {x}_{i} = \left( {{x}_{{i}_{1}},\ldots ,{x}_{{i}_{N}}}\right) \in {X}_{i} \) is interpreted as follows: \( {x}_{{i}_{k}} \) is the quantity of the \( k \) th commodity (a good or a service) taken by consumer \( i \) when he chooses the consumption bundle \( {x}_{i} \) . Producer \( j \) has a production set \( {Y}_{j} \subset {\mathbf{R}}^{N} \), where the \( k \) th entry in the production vector \( {y}_{j} = \left( {{y}_{{j}_{1}},\ldots ,{y}_{{j}_{N}}}\right) \in {Y}_{j} \) is interpreted as the amount of the \( k \) th commodity produced by producer \( j \) under her adopted production schedule. Other important sets in this context are the aggregate consumption set \[ X = {X}_{1} + \cdots + {X}_{m} \] and the aggregate production set \[ Y = {Y}_{1} + \cdots + {Y}_{n} \] A price vector is simply an element \( p \) of \( {\mathbf{R}}^{N} \) ; the \( k \) th component \( {p}_{k} \) of \( p \) is the price of one unit of the \( k \) th commodity. 
Thus the total cost to consumer \( i \) of the consumption bundle \( {x}_{i} \) is \( \left\langle {p,{x}_{i}}\right\rangle \), where \( \langle \cdot , \cdot \rangle \) denotes the usual inner product on \( {\mathbf{R}}^{N} \) ; and the profit to producer \( j \) of the production vector \( {y}_{j} \) is \( \left\langle {p,{y}_{j}}\right\rangle \) . We assume that the preferences of consumer \( i \) are represented by a reflexive, transitive total partial order \( { \succcurlyeq }_{i} \) on \( {X}_{i} \), called the preference relation of consumer \( i \) . The corresponding relations \( { \succ }_{i} \) of strict preference, and \( { \sim }_{i} \) of preference-indifference, are defined on \( {X}_{i} \) as follows. \( x{ \succ }_{i}y \) if and only if \( x{ \succcurlyeq }_{i}y \) and not \( \left( {y{ \succcurlyeq }_{i}x}\right) ; \) \( x{ \sim }_{i}y \) if and only if \( x{ \succcurlyeq }_{i}y \) and \( y{ \succcurlyeq }_{i}x. \) Routine arguments show that \( { \succ }_{i} \) and \( { \sim }_{i} \) are transitive; that \( x{ \succ }_{i}x \) is contradictory; that if either \( x{ \succ }_{i}y \) or \( x{ \sim }_{i}y \), then \( x{ \succcurlyeq }_{i}y \) ; and that if either \( x{ \succ }_{i}y{ \succcurlyeq }_{i}z \) or \( x{ \succcurlyeq }_{i}y{ \succ }_{i}z \), then \( x{ \succ }_{i}z \) . The informal meaning of \( x{ \succcurlyeq }_{i}y \) is that consumer \( i \) finds \( x \) at least as attractive as \( y;x{ \succ }_{i}y \) means that he strictly prefers \( x \) to \( y \) ; and \( x{ \sim }_{i}y \) signifies that he does not mind which of \( x \) or \( y \) he obtains. It is convenient to introduce consumer \( i \) ’s upper contour set at \( x \) , \[ \lbrack x, \rightarrow ) = \left\{ {\xi \in {X}_{i} : \xi { \succcurlyeq }_{i}x}\right\} \] and his strict upper contour set at \( x \) , \[ \left( {x, \rightarrow }\right) = \left\{ {\xi \in {X}_{i} : \xi { \succ }_{i}x}\right\} . \] The preference relation \( { \succ }_{i} \) is said to be locally nonsatiated at \( {x}_{i} \in {X}_{i} \) if for each \( \varepsilon > 0, B\left( {{x}_{i},\varepsilon }\right) \cap \left( {{x}_{i}, \rightarrow }\right) \) is nonempty-that is, there exists \( {x}_{i}^{\prime } \in {X}_{i} \) such that \( \begin{Vmatrix}{{x}_{i} - {x}_{i}^{\prime }}\end{Vmatrix} < \varepsilon \) and \( {x}_{i}^{\prime }{ \succ }_{i}{x}_{i} \) . By a chosen point for consumer \( i \) under the price vector \( p \) we mean a point \( {\xi }_{i} \in {X}_{i} \) such that for all \( {x}_{i} \in {X}_{i} \) , \[ \left\langle {p,{\xi }_{i}}\right\rangle \geq \left\langle {p,{x}_{i}}\right\rangle \Rightarrow {\xi }_{i}{ \succcurlyeq }_{i}{x}_{i} \] or, equivalently, \[ {x}_{i}{ \succ }_{i}{\xi }_{i} \Rightarrow \left\langle {p,{x}_{i}}\right\rangle > \left\langle {p,{\xi }_{i}}\right\rangle \] (C.1) Lemma. If \( {\xi }_{i} \in {X}_{i} \) is a chosen point for consumer \( i \) under the price vector \( p \), and \( {x}_{i} \sim {\xi }_{i} \) is a point of \( {X}_{i} \) at which \( { \succ }_{i} \) is locally nonsat
iated, then \( \left\langle {p,{x}_{i}}\right\rangle \geq \left\langle {p,{\xi }_{i}}\right\rangle \) . Proof. Suppose that \( \left\langle {p,{x}_{i}}\right\rangle < \left\langle {p,{\xi }_{i}}\right\rangle \) . By the continuity of the mapping \( x \mapsto \langle p, x\rangle \) on \( {\mathbf{R}}^{N} \), there exists \( r > 0 \) such that if \( {x}_{i}^{\prime } \in {X}_{i} \) and \( \begin{Vmatrix}{{x}_{i}^{\prime } - {x}_{i}}\end{Vmatrix} < r \), then \( \left\langle {p,{x}_{i}^{\prime }}\right\rangle < \left\langle {p,{\xi }_{i}}\right\rangle \) . As \( { \succcurlyeq }_{i} \) is locally nonsatiated at \( {x}_{i} \), there exists \( {x}_{i}^{\prime } \in {X}_{i} \) such that \( {x}_{i}^{\prime }{ \succ }_{i}{x}_{i} \) and \( \begin{Vmatrix}{{x}_{i}^{\prime } - {x}_{i}}\end{Vmatrix} < r \) . Then \( \left\langle {p,{\xi }_{i}}\right\rangle > \left\langle {p,{x}_{i}^{\prime }}\right\rangle \) ; so \( {\xi }_{i}{ \succcurlyeq }_{i}{x}_{i}^{\prime } \), as \( {\xi }_{i} \) is a chosen point. But we also have \( {x}_{i}^{\prime }{ \succ }_{i}{x}_{i}{ \sim }_{i}{\xi }_{i} \) and therefore \( {x}_{i}^{\prime }{ \succ }_{i}{\xi }_{i} \), a contradiction. We now assume that consumer \( i \) has an initial endowment of commodities, represented by the vector \( {\bar{x}}_{i} = \left( {{\bar{x}}_{{i}_{1}},\ldots ,{\bar{x}}_{{i}_{N}}}\right) \) . The total initial endowment of all consumers is then \[ \bar{x} = {\bar{x}}_{1} + \cdots + {\bar{x}}_{m} \in X. \] We say that an element \( \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) of \( {Y}_{1} \times \cdots \times {Y}_{n} \) is an admissible array of production vectors; and that an element \( \left( {{x}_{1},\ldots ,{x}_{m}}\right) \) of \( {X}_{1} \times \cdots \times {X}_{m} \) is a feasible array of consumption bundles if there exists an admissible array \( \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) of production vectors such that \[ \mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j} + \bar{x} \] Intuitively, a feasible array is one that can be obtained by a distribution of the total initial endowment and the total of the production vectors under some production schedule. An array \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \in {X}_{1} \times \cdots \times {X}_{m} \) of consumption bundles is said to be Pareto optimal, or a Pareto optimum, if it is feasible and if the following condition holds. 
PO If \( \left( {{x}_{1},\ldots ,{x}_{m}}\right) \) is a feasible array such that \( {x}_{i}{ \succ }_{i}{\xi }_{i} \) for some \( i \), then there exists \( k \) such that \( {\xi }_{k}{ \succ }_{k}{x}_{k} \) . Equivalently, the array is Pareto optimal if there is no feasible array \( \left( {{x}_{1},\ldots }\right. \) , \( \left. {x}_{m}\right) \) such that \( {x}_{i}{ \succcurlyeq }_{i}{\xi }_{i} \) for all \( i \), and such that \( {x}_{i}{ \succ }_{i}{\xi }_{i} \) for at least one \( i \) . By a competitive equilibrium we mean a triple consisting of a price vector \( p \), an array \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) of consumption bundles, and an admissible array \( \left( {{\eta }_{1},\ldots ,{\eta }_{n}}\right) \) of production vectors, satisfying the following conditions. CE1 For \( 1 \leq i \leq m,{\xi }_{i} \) is a chosen point for consumer \( i \) under the price vector \( p \) . CE2 For \( 1 \leq j \leq n \), if \( {y}_{j} \in {Y}_{j} \), then \( \left\langle {p,{\eta }_{j}}\right\rangle \geq \left\langle {p,{y}_{j}}\right\rangle \) . CE3 \( \mathop{\sum }\limits_{{i = 1}}^{m}{\xi }_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{\eta }_{j} + \bar{x} \) . Condition CE1 expresses consumer satisfaction; CE2, profit maximisation; and CE3, feasibility. (C.2) Proposition. Assume that each \( { \succcurlyeq }_{i} \) is locally nonsatiated, and let \[ \left( {p,\left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) ,\left( {{\eta }_{1},\ldots ,{\eta }_{n}}\right) }\right) \] be a competitive equilibrium. Then \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) is a Pareto optimum. Proof. Condition CE3 ensures that \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) is a feasible array of consumption bundles. Suppose that \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) is not a Pareto optimum. Then there exist an array \( \left( {{x}_{1},\ldots ,{x}_{m}}\right) \) of consumption bundles and an admissible array \( \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) of production vectors such that \[ \mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j} + \bar{x} \] (1) \( {x}_{i}{ \succcurlyeq }_{i}{\xi }_{i} \) for all \( i \), and \( {x}_{k}{ \succ }_{k}{\xi }_{k} \) for some \( k \) . By CE1, if \( {x}_{i}{ \succ }_{i}{\xi }_{i} \), then \( \left\langle {p,{x}_{i}}\right\rangle > \left\langle {p,{\xi }_{i}}\right\rangle \) ; in particular, \( \left\langle {p,{x}_{k}}\right\rangle > \left\langle {p,{\xi }_{k}}\right\rangle \) . If \( {\xi }_{i}{ \succcurlyeq }_{i}{x}_{i} \), then \( {x}_{i}{ \sim }_{i}{\xi }_{i} \) and so, by Lemma (C.1), \( \left\langle {p,{x}_{i}}\right\rangle \geq \left\langle {p,{\xi }_{i}}\right\rangle \) . Thus \[ \mathop{\sum }\limits_{{i = 1}}^{m}\left\langle {p,{x}_{i}}\right\rangle > \mathop{\sum }\limits_{{i = 1}}^{m}\left\langle {p,{\xi }_{i}}\right\rangle \] \[ = \mathop{\sum }\limits_{{j = 1}}^{n}\left\langle {p,{\eta }_{j}}\right\rangle + \langle p,\bar{x}\rangle \;\text{ (by CE3) } \] \[ \geq \mathop{\sum }\limits_{{j = 1}}^{n}\left\langle {p,{y}_{j}}\right\rangle + \langle p,\bar{x}\rangle \;\text{ (by CE2). } \] Hence \[ \left\langle {p,\left( {\mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i} - \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j} - \bar{x}}\right) }\right\rangle > 0 \] and therefore, by the Cauchy-Schwarz inequality in \( {\mathbf{R}}^{N} \) , \[ \mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i} \neq \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j} + \bar{x} \] This contradicts (1). 
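To see Proposition (C.2) at work in a small concrete case, the following Python sketch may help. It is not part of the text: the two-consumer, two-commodity exchange economy, the absence of producers, the common utility function \( u\left( x\right) = {x}_{1}{x}_{2} \) representing each preference relation, and the price vector \( p = \left( {1,1}\right) \) are all illustrative assumptions introduced here. The sketch checks numerically that the candidate equilibrium bundles are chosen points and that no feasible allocation on a grid dominates them.

```python
import numpy as np

# A minimal sketch of Proposition (C.2) in a toy exchange economy (no producers).
# Two consumers share the total endowment (1, 1); each preference relation is
# represented by u(x) = x[0] * x[1], which is locally nonsatiated on [0, inf)^2.
# With the price vector p = (1, 1), the bundles xi_1 = xi_2 = (0.5, 0.5) form a
# competitive equilibrium, and the grid search below finds no feasible allocation
# that leaves both consumers at least as well off and one strictly better off.

def u(x):                                  # common utility function
    return x[0] * x[1]

p = np.array([1.0, 1.0])                   # price vector
endow = np.array([1.0, 1.0])               # total initial endowment x-bar
xi = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]   # candidate equilibrium bundles

# CE3: feasibility (no production, so the bundles must exhaust the endowment).
assert np.allclose(xi[0] + xi[1], endow)

grid = np.linspace(0.0, 1.0, 101)

# CE1: each xi_i maximises u over its budget set (checked on the grid).
for i in range(2):
    budget = p @ xi[i]
    best = max(u(np.array([a, b]))
               for a in grid for b in grid
               if p @ np.array([a, b]) <= budget + 1e-12)
    assert u(xi[i]) >= best - 1e-9, "xi_i is not a chosen point"

# Pareto optimality: look for a feasible allocation dominating (xi_1, xi_2).
dominating = []
for a in grid:
    for b in grid:
        x1 = np.array([a, b])
        x2 = endow - x1
        weakly_better = u(x1) >= u(xi[0]) and u(x2) >= u(xi[1])
        strictly_better = u(x1) > u(xi[0]) or u(x2) > u(xi[1])
        if (x2 >= 0).all() and weakly_better and strictly_better:
            dominating.append((x1, x2))

print("dominating feasible allocations found:", len(dominating))   # expect 0
```

The empty search result is what Proposition (C.2) predicts: the preferences in this toy economy are locally nonsatiated, and the stated price vector and bundles satisfy CE1 and CE3, while CE2 is vacuous because there are no producers.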
Our next aim is to establish a partial converse of Proposition (C.2), providing conditions under which a Pareto optimum gives rise to a competitive equilibrium. We first introduce some more definitions. The preference relation \( { \succcurlyeq }_{i} \) on \( {X}_{i} \) is said to be convex if - \( {X}_{i} \) is convex, - \( x{ \succ }_{i}{x}^{\prime } \Rightarrow {tx} + \left( {1 - t}\right) {x}^{\prime }{ \succ }_{i}{x}^{\prime } \) whenever \( 0 < t < 1 \), and - \( x{ \sim }_{i}{x}^{\prime } \Rightarrow {tx} + \left( {1 - t}\right) {x}^{\prime }{ \succcurlyeq }_{i}{x}^{\prime } \) whenever \( 0 < t < 1 \) . In that case the sets \( \lbrack x, \rightarrow ) \) and \( \left( {x, \rightarrow }\right) \) are convex. We say that consumer \( i \) is nonsatiated at \( {\xi }_{i} \in {X}_{i} \) if there exists \( x \in {X}_{i} \) such that \( x{ \succ }_{i}{\xi }_{i} \) ; otherwise, we say that he is satiated at \( {\xi }_{i} \) . (C.3) Proposition. Let \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) be a Pareto optimum such that for at least one value of \( i \), consumer \( i \) is nonsatiated at \( {\xi }_{i} \), and let \( \left( {{\eta }_{1},\ldots ,{\eta }_{n}}\right) \) be an admissible array of production vectors. Suppose that \( { \succcurlyeq }_{i} \) is convex for each \( i \), and that the aggregate production set \( Y \) is convex. Then there exists a nonzero price vector \( p \) such that (i) for each \( i \), if \( {x}_{i} \in {X}_{i} \) and \( {x}_{i}{ \succcurlyeq }_{i}{\xi }_{i} \), then \( \left\langle {p,{x}_{i}}\right\rangle \geq \left\langle {p,{\xi }_{i}}\right\rangle \) ; (ii) for each \( j \), if \( {y}_{j} \in {Y}_{j} \), then \( \left\langle {p,{\eta }_{j}}\right\rangle \geq \left\langle {p,{y}_{j}}\right\rangle \) . Proof. We may assume that consumer 1 is nonsatiated at \( {\xi }_{1} \) . Choose an admissible array \( \left( {{\eta }_{1},\ldots ,{\eta }_{n}}\right) \) of production vectors such that \[ \xi = \mathop{\sum }\limits_{{i = 1}}^{m}{\xi }_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{\eta }_{j} + \bar{x} \] Let \( A \) be the algebraic sum of the sets \( \left( {{\xi }_{1}, \rightarrow }\right) \) and \( \mathop{\sum }\limits_{{i = 2}}^{m}\left\lbrack {{\xi }_{i}, \rightarrow }\right) \) , \[ A = \left\{ {\mathop{\sum }\limits_{{i = 1}}^{N}{x}_{i} \in {\mathbf{R}}^{N} : {x}_{1}{ \succ }_{1}{\xi }_{1}\text{ and }\forall i \geq 2\left( {{x}_{i}{ \succcurlyeq }_{i}{\xi }_{i}}\right) }\right\} , \] and let \[ B = \left\{ {x \in {\mathbf{R}}^{N} : \exists y \in Y\left( {x = y + \bar{x}}\right) }\right\} . \] Clearly, \( B \) is convex; by our convexity hypotheses, \( A \) is convex. If \( A \cap B \) is nonempty, then there exist \( {x}_{1}{ \succ }_{1}{\xi }_{1},{x}_{i}{ \succcurlyeq }_{i}{\xi }_{i}\left( {2 \leq i \leq m}\right) \), and \( {y}_{j} \in \) \( {Y}_{j}\left( {1 \leq j \leq n}\right) \), such that \[ \mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j} + \bar{x} \] This contradicts the hyp
othesis that \( \left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) \) is a Pareto optimum. Hence \( A \) and \( B \) are disjoint subsets of \( {\mathbf{R}}^{N} \) . Since these sets are clearly nonempty, it follows from Minkowski's Separation Theorem (6.2.6) and the Riesz Representation Theorem (5.3.1) that there exist a nonzero vector \( p \in {\mathbf{R}}^{N} \) and a real number \( \alpha \) such that \( \langle p, x\rangle \geq \alpha \) for all \( x \in A \), and \( \langle p, x\rangle \leq \alpha \) for all \( x \in B \) . Since \( \xi \in B \), we have \( \langle p,\xi \rangle \leq \alpha \) . We now show that \( \langle p,\xi \rangle = \alpha \) . To this end, consider \( \mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i} \), with \( {x}_{1}{ \succ }_{1}{\xi }_{1} \) and \( {x}_{i}{ \succcurlyeq }_{i}{\xi }_{i}\left( {2 \leq i \leq m}\right) \) . For \( 0 < t < 1 \) define \[ {z}_{i}\left( t\right) = t{x}_{i} + \left( {1 - t}\right) {\xi }_{i}\;\left( {1 \leq i \leq m}\right) \] and \[ z\left( t\right) = \mathop{\sum }\limits_{{i = 1}}^{m}{z}_{i}\left( t\right) \] Since \( { \succcurlyeq }_{i} \) is convex for each \( i \) , \[ {z}_{1}\left( t\right) \in \left( {{\xi }_{1}, \rightarrow }\right) \] \[ {z}_{i}\left( t\right) \in \left\lbrack {{\xi }_{i}, \rightarrow }\right) \;\left( {2 \leq i \leq m}\right) \] whence \( z\left( t\right) \in A \) and therefore \( \langle p, z\left( t\right) \rangle \geq \alpha \) . Letting \( t \rightarrow 0 \) and using the continuity of the mapping \( x \mapsto \langle p, x\rangle \) on \( {\mathbf{R}}^{N} \), we see that \( \langle p,\xi \rangle \geq \alpha \) and therefore that \( \langle p,\xi \rangle = \alpha \), as we wanted to show. It now follows that \( \langle p, x\rangle \geq \langle p,\xi \rangle \) for all \( x \in A \), and that \( \langle p, x\rangle \leq \langle p,\xi \rangle \) for all \( x \in B \) . 
Thus if \( \left( {{y}_{1},\ldots ,{y}_{n}}\right) \) is an admissible array of production vectors, then \[ \left\langle {p,\mathop{\sum }\limits_{{j = 1}}^{n}{y}_{j} + \bar{x}}\right\rangle \leq \langle p,\xi \rangle = \left\langle {p,\mathop{\sum }\limits_{{j = 1}}^{n}{\eta }_{j} + \bar{x}}\right\rangle \] and therefore \[ \mathop{\sum }\limits_{{j = 1}}^{n}\left\langle {p,{y}_{j}}\right\rangle \leq \mathop{\sum }\limits_{{j = 1}}^{n}\left\langle {p,{\eta }_{j}}\right\rangle \] Given \( j \in \{ 1,\ldots, n\} \), and taking \( {y}_{j} \in {Y}_{j} \) and \( {y}_{k} = {\eta }_{k} \) for all \( k \neq j(1 \leq \) \( k \leq n \) ), we now obtain \( \left\langle {p,{\eta }_{j}}\right\rangle \geq \left\langle {p,{y}_{j}}\right\rangle \) . This completes the proof of (ii). A similar argument, using the fact that \( \langle p, x\rangle \geq \langle p,\xi \rangle \) for all \( x \in A \) , shows that \[ \left\langle {p,{x}_{1}}\right\rangle \geq \left\langle {p,{\xi }_{1}}\right\rangle \text{ for all }{x}_{1} \in \left( {{\xi }_{1}, \rightarrow }\right) \] (2) and that \( \left\langle {p,{x}_{i}}\right\rangle \geq \left\langle {p,{\xi }_{i}}\right\rangle \) for all \( {x}_{i} \in \left\lbrack {{\xi }_{i}, \rightarrow }\right) \left( {2 \leq i \leq m}\right) \) . To complete the proof of (i), we show that if \( {x}_{1}{ \sim }_{1}{\xi }_{1} \), then \( \left\langle {p,{x}_{1}}\right\rangle \geq \left\langle {p,{\xi }_{1}}\right\rangle \) . To this end, we recall that consumer 1 is nonsatiated at \( {\xi }_{1} \), so there exists \( {x}_{1}^{\prime } \in {X}_{1} \) with \[ {x}_{1}^{\prime }{ \succ }_{1}{\xi }_{1}{ \sim }_{1}{x}_{1} \] It follows from this and the convexity of \( { \succcurlyeq }_{1} \) that for each \( t \in \left( {0,1}\right) \) , \[ {x}_{1}^{\prime }\left( t\right) = t{x}_{1}^{\prime } + \left( {1 - t}\right) {x}_{1}{ \succ }_{1}{\xi }_{1} \] whence \( \left\langle {p,{x}_{1}^{\prime }\left( t\right) }\right\rangle \geq \left\langle {p,{\xi }_{1}}\right\rangle \), by (2). The continuity of the function \( x \mapsto \langle p, x\rangle \) on \( {\mathbf{R}}^{N} \) now ensures that \( \left\langle {p,{x}_{1}}\right\rangle \geq \left\langle {p,{\xi }_{1}}\right\rangle \), as we required. This completes the proof of (i). (C.4) Corollary. Under the hypotheses of Proposition (C.3), suppose also that the following conditions hold. (i) For each price vector \( p \) and each \( i\left( {1 \leq i \leq m}\right) \), there exists \( {\xi }_{i}^{\prime } \in {X}_{i} \) such that \( \left\langle {p,{\xi }_{i}^{\prime }}\right\rangle < \left\langle {p,{\xi }_{i}}\right\rangle \) (cheaper point condition). (ii) For each \( i\left( {1 \leq i \leq m}\right) ,\left( {{\xi }_{i}, \rightarrow }\right) \) is open in \( {X}_{i} \) . Then \( \left( {p,\left( {{\xi }_{1},\ldots ,{\xi }_{m}}\right) ,\left( {{\eta }_{1},\ldots ,{\eta }_{n}}\right) }\right) \) is a competitive equilibrium. Proof. In view of Proposition (C.3), we need only prove that CE1 holds. To this end, let \( {x}_{i}{ \succ }_{i}{\xi }_{i} \), and choose \( {\xi }_{i}^{\prime } \in {X}_{i} \) as in hypothesis (i). Then, by Proposition (C.3), \( {\xi }_{i}{ \succ }_{i}{\xi }_{i}^{\prime } \) . For each \( t \in \left( {0,1}\right) \) define \[ {x}_{i}\left( t\right) = t{\xi }_{i}^{\prime } + \left( {1 - t}\right) {x}_{i} \] As \( \left( {{\xi }_{i}, \rightarrow }\right) \) is open in \( {X}_{i} \), we can choose \( t \in \left( {0,1}\right) \) so small that \( {x}_{i}\left( t\right) { \succ }_{i}{\xi }_{i} \) . 
Then, by Proposition (C.3), \[ t\left\langle {p,{\xi }_{i}^{\prime }}\right\rangle + \left( {1 - t}\right) \left\langle {p,{x}_{i}}\right\rangle = \left\langle {p,{x}_{i}\left( t\right) }\right\rangle \] \[ \geq \left\langle {p,{\xi }_{i}}\right\rangle \] \[ = t\left\langle {p,{\xi }_{i}}\right\rangle + \left( {1 - t}\right) \left\langle {p,{\xi }_{i}}\right\rangle \] \[ > t\left\langle {p,{\xi }_{i}^{\prime }}\right\rangle + \left( {1 - t}\right) \left\langle {p,{\xi }_{i}}\right\rangle \] Hence \[ \left( {1 - t}\right) \left\langle {p,{x}_{i}}\right\rangle > \left( {1 - t}\right) \left\langle {p,{\xi }_{i}}\right\rangle \] and therefore \( \left\langle {p,{x}_{i}}\right\rangle > \left\langle {p,{\xi }_{i}}\right\rangle \) . Thus \( {\xi }_{i} \) is a chosen point. The cheaper point assumption cannot be omitted from the hypotheses of Corollary (C.4); see pages 198-201 of [51]. ## References The following list contains both works that were consulted during the writing of this book and suggestions for further reading. University libraries usually have lots of older books, such as [36], dealing with classical real analysis at the level of Chapter 1; a good modern reference for this material is [16]. Excellent references for the abstract theory of measure and integration, following on from the material in Chapter 2, are [21], [44], and [43]. (Note, incidentally, the advocacy of a Riemann-like integral by some authors [1].) Dieudonné's book [13], the first of a series in which he covers a large part of modern analysis, is outstanding and was a source of much inspiration in my writing of Chapters 3 through 5. An excellent text for a general course on functional analysis is [45]. This could be followed by, or taken in conjunction with, material from the two volumes by Kadison and Ringrose [24] on operator algebra theory, currently one of the most active and important branches of analysis. Two other excellent books, each of which overlaps our book in some areas but goes beyond it in others, are [34], which includes such topics as spectral theory and abstract integration, and [14], which extends measure theory into a rigorous development of probability. More specialised books expanding material covered in Chapter 6 are the one by Oxtoby [33] on the interplay between Baire category and measure, and Diestel's absorbing text [12] on sequences and series in Banach spaces. A wonderful book, written in a more discursive style than most others at this level, is the classic by Riesz and Nagý [40]; although more old-fashioned in its approach (it was first published in 1955), it is a source of much valuable material on Lebesgue integration and the theory of operators on Hilbert space. A relatively unusual approach to analysis, in which all concepts and proofs must be fully constructive, is followed in [5]; see also Chapter 4 of [8]. For general applications of functional analysis see Zeidler's two volumes [56]. Applications of analysis in mathematical economics can be found in [9], [30], and [51]. [1] R. G. Bartle: Return to the Riemann integral, Amer. Math. Monthly 103 (1996), 625–632. [2] J. Barwise: Handbook of Mathematical Logic, North-Holland, Amsterdam, 1977. [3] G. H. Behforooz: Thinning out the harmonic series, Math. Mag. 68(4), 289- 293, 1985. [4] A. Bielicki: Une remarque sur la méthode de Banach-Cacciopoli-Tikhonov, Bull. Acad. Polon. Sci. IV (1956), 261-268. [5] E.A. Bishop and D.S. Bridges: Constructive Analysis, Grundlehren der math. Wissenschaften 279, Springer-Verlag, Berlin-Heidel
berg-New York, 1985. [6] P. Borwein and T. Erdélyi: The full Müntz theorem in \( \mathcal{C}\left\lbrack {0,1}\right\rbrack \) and \( {L}_{1}\left\lbrack {0,1}\right\rbrack \), J. London Math. Soc. (2), 54 (1996), 102-110. [7] N. Bourbaki: Eléments de Mathématique, Livre III: Topologie Générale, Hermann, Paris, 1958. [8] D.S. Bridges: Computability: A Mathematical Sketchbook, Graduate Texts in Mathematics 146, Springer-Verlag, Berlin-Heidelberg-New York, 1994. [9] D.S. and G.B. Mehta: Representations of Preference Orderings, Lecture Notes in Economics and Mathematical Systems 422, Springer-Verlag, Berlin-Heidelberg-New York, 1995. [10] E.W. Cheney: Introduction to Approximation Theory, McGraw-Hill, New York, 1966. [11] P.J. Cohen: Set Theory and the Continuum Hypothesis, W.A. Benjamin, Inc., New York, 1966. [12] J. Diestel: Sequences and Series in Banach Spaces, Graduate Texts in Mathematics 92, Springer-Verlag, Berlin-Heidelberg-New York, 1984. [13] J. Dieudonné: Foundations of Modern Analysis, Academic Press, New York, 1960. [14] R.M. Dudley, Real Analysis and Probability, Chapman & Hall, New York, 1989. [15] P. Enflo: A counterexample to the approximation property in Banach spaces, Acta Math. 130 (1973), 309-317. [16] E. Gaughan: Introduction to Analysis (4th Edn), Brooks/Cole, Pacific Grove, CA, 1993. [17] R.P. Gillespie: Integration, Oliver & Boyd, Edinburgh, 1959. [18] K. Gödel: The Consistency of the Axiom of Choice and the Generalized Continuum Hypothesis with the Axioms of Set Theory, Annals of Mathematics Studies, Vol. 3, Princeton University Press, Princeton, NJ, 1940. [19] R. Gray: Georg Cantor and transcendental numbers, Amer. Math. Monthly 101 (1994), 819-832. [20] P.R. Halmos: Naive Set Theory, van Nostrand, Princeton, NJ, 1960; reprinted as Undergraduate Texts in Mathematics, Springer-Verlag, Berlin-Heidelberg-New York, 1974. [21] P.R. Halmos: Measure Theory, van Nostrand, Princeton, NJ, 1950; reprinted as Graduate Texts in Mathematics 18, Springer-Verlag, Berlin-Heidelberg-New York, 1975. [22] J. Hennefeld: A nontopological proof of the uniform boundedness theorem, Amer. Math. Monthly 87 (1980), 217. [23] F. John: Partial Differential Equations (4th Edn), Applied Mathematical Sciences 1, Springer-Verlag, Berlin-Heidelberg-New York, 1982. [24] R.V. Kadison and J.R. Ringrose: Fundamentals of the Theory of Operator Algebras, Academic Press, New York, 1983 (Vol. 1) and 1986 (Vol. 2). [25] J.L.
Kelley: General Topology, van Nostrand, Princeton, NJ, 1955; reprinted as Graduate Texts in Mathematics 27, Springer-Verlag, Berlin-Heidelberg-New York, 1975. [26] D. Kincaid and E.W. Cheney: Numerical Analysis (2nd Edn), Brooks/Cole Publishing Co., Pacific Grove, CA, 1996. [27] M. Kline: Mathematical Thought from Ancient to Modern Times, Oxford University Press, Oxford, 1972. [28] T.W. Körner: Fourier Analysis, Cambridge University Press, Cambridge, 1988. [29] J. Marsden and A. Tromba: Vector Calculus (3rd Edn), W.H. Freeman & Co., New York, 1988. [30] A. Mas-Colell, M.D. Whinston, J.R. Green: Microeconomic Theory, Oxford University Press, Oxford, 1995. [31] Y. Matsuoka: An elementary proof of the formula \( \mathop{\sum }\limits_{{k = 1}}^{\infty }1/{k}^{2} = {\pi }^{2}/6 \), Amer. Math. Monthly 68 (1961), 485-487. [32] N.S. Mendelsohn: An application of a famous inequality, Amer. Math. Monthly 58 (1951), 563. [33] J.C. Oxtoby: Measure and Category, Graduate Texts in Mathematics 2, Springer-Verlag, Berlin-Heidelberg-New York, 1971. [34] G.K. Pedersen: Analysis Now, Graduate Texts in Mathematics 118, Springer-Verlag, Berlin-Heidelberg-New York, 1991. [35] W.E. Pfaffenberger: A converse to a completeness theorem, Amer. Math. Monthly 87 (1980), 216. [36] E.G. Phillips: A Course of Analysis (2nd Edn), Cambridge Univ. Press, Cambridge, 1939. [37] J. Rauch: Partial Differential Equations, Graduate Texts in Mathematics 128, Springer-Verlag, Berlin-Heidelberg-New York, 1991. [38] J.R. Rice: The Approximation of Functions (Vol. 1), Addison-Wesley, Reading, MA, 1964. [39] F. Riesz: Sur l'intégrale de Lebesgue comme l'opération inverse de la dérivation, Ann. Scuola Norm. Sup. Pisa (2) 5, 191-212 (1936). [40] F. Riesz and B. Sz-Nagy: Functional Analysis, Frederic Ungar Publishing Co., New York, 1955. Republished by Dover Publications Inc., New York, 1990. [41] J. Ritt: Integration in Finite Terms, Columbia University Press, New York, 1948. [42] W.W. Rogosinski: Volume and Integral, Oliver & Boyd, Edinburgh, 1962. [43] H. Royden: Real Analysis (3rd Edn), Macmillan, New York, 1988. [44] W. Rudin: Real and Complex Analysis, McGraw-Hill, New York, 1970. [45] W. Rudin: Functional Analysis (2nd Edn), McGraw-Hill, New York, 1991. [46] S. Saks: Theory of the Integral (2nd Edn), Dover Publishing, Inc., New York, 1964. [47] H. Schubert: Topology (S. Moran, transl.), Macdonald Technical & Scientific, London, 1968. [48] R.M. Solovay: A model of set theory in which every set of reals is Lebesgue measurable, Ann. Math. (Ser. 2) 92, 1-56 (1970). [49] M. Spivak: Calculus, W.A. Benjamin, London, 1967. [50] B. Sz-Nagy: Introduction to Real Functions and Orthogonal Expansions, Oxford University Press, New York, 1965. [51] A. Takayama: Mathematical Economics, The Dryden Press, Hinsdale IL., 1974. [52] J.A. Todd: Introduction to the Constructive Theory of Functions, Birkhäuser Verlag, Basel, 1963. [53] C. de la Vallée Poussin: Intégrales de Lebesgue, fonctions d'ensemble, classes de Baire, Gauthier-Villars, Paris, 1916. [54] B.L. van der Waerden: Ein einfaches Beispiel einer nichtdifferenzierbaren stetigen Funktion, Math. Zeitschr. 32, 474-475, 1930. [55] Y.M. Wong: The Lebesgue covering property and uniform continuity, Bull. London Math. Soc. 4, 184-186, 1972. [56] E. Zeidler: Applied Functional Analysis (2 Vols), Applied Mathematical Sciences 108-109, Springer-Verlag, Berlin-Heidelberg-New York,1995. [57] E. Zermelo: Beweis, dass jede Menge wohlgeordnet werden kann, Math. Annalen 59 (1904) 514–516. 
## Index Absolute convergence, 31 absolute value, 15 absolutely continuous, 84 absolutely convergent, 180 absorbing, 282 adjoint, 254 admissible array, 305 aggregate consumption set, 303 aggregate production set, 303 almost everywhere, 85 \( \alpha \) -periodic, \( {215} \) alternating series test, 28 antiderivative, 69 antisymmetric, 6 approximate solution, 230 approximation theory, 192 Ascoli's Theorem, 210 associated metric, 174 asymmetric, 6 attains bounds, 149 Axiom of Archimedes, 14, 295 Axiom of Choice, 299 Baire's Theorem, 279 Banach space, 178 Beppo Levi's Theorem, 101 Bernstein polynomial, 214 Bessel's inequality, 245 best approximation, 192 binary expansion, 29 binomial series, 61 Bolzano-Weierstrass property, 48 Bolzano-Weierstrass Theorem, 48 Borel set, 113 bound, 184 boundary, 39 bounded above, 7 bounded below, 7 bounded function, 8 bounded linear map, 183 bounded operator, 254 bounded sequence, 21, 141 bounded set, 134 bounded variation, 71 \( \mathcal{B}\mathcal{V}\left( I\right) ,{205} \) \( \mathcal{B}\left( {X, Y}\right) ,{204} \) \( C \) -measurable,116 canonical bound, 293 canonical map, 181 Cantor set, 39 Cantor's Theorem, 26 Cauchy sequence, 25, 140 Cauchy-Euler method, 230 318 Index Cauchy-Schwarz inequality, 235 Cauchy-Schwarz, 126 centre, 130 Cesàro mean, 215 chain, 300 chain connected, 160 Chain Rule, 55 change of variable, 107 characteristic function, 99 chosen point, 304 \( {\mathcal{C}}^{\infty }\left( {X, Y}\right) ,{206} \) Clarkson's inequalities, 198 closed ball, 130 Closed Graph Theorem, 285 closed set, 38, 130, 135 closest point, 192, 239 closure, 38, 130 cluster point, 38, 130, 135 compact, 146 comparison test, 27 competitive equilibrium, 305 complete, 26, 140 completion, 142, 179 complex numbers, 19 conjugate, 19 conjugate bilinear, 255 conjugate exponents, 194 conjugate linear, 234 connected, 158 connected component, 160 consumer, 303 consumption bundle, 303 consumption set, 303 continuous, 44, 136 continuous on an interval, 45 continuous on the left, 44 continuous on the right, 44 continuously differentiable, 223 contraction mapping, 220 Contraction Mapping Theorem, 220 contractive, 136 converge simply, 206 converge uniformly, 206 convergent mapping, 138 convergent sequence, 20, 139 convergent series, 27, 180 convex, 163, 178 convex hull, 277 coordinate, 242 coordinate functional, 287 countable, 4 countable choice, 300 countably infinite, 4 cover, 47, 146 \( \mathcal{C}\left( {X, Y}\right) ,{206} \) Decreasing, 8, 101 dense, 132 dependent choice, 300 derivative, 53 derivative, higher, 54 derivative, left, 53 derivative, right, 53 diameter, 133 differentiable, 53 differentiable on an interval, 53 differentiable, infinitely, 54 differentiable \( n \) -times,54 Dini derivates, 88 Dini's Theorem, 207 Dirichlet kernel, 288 Dirichlet Problem, 25
nected component, 160 consumer, 303 consumption bundle, 303 consumption set, 303 continuous, 44, 136 continuous on an interval, 45 continuous on the left, 44 continuous on the right, 44 continuously differentiable, 223 contraction mapping, 220 Contraction Mapping Theorem, 220 contractive, 136 converge simply, 206 converge uniformly, 206 convergent mapping, 138 convergent sequence, 20, 139 convergent series, 27, 180 convex, 163, 178 convex hull, 277 coordinate, 242 coordinate functional, 287 countable, 4 countable choice, 300 countably infinite, 4 cover, 47, 146 \( \mathcal{C}\left( {X, Y}\right) ,{206} \) Decreasing, 8, 101 dense, 132 dependent choice, 300 derivative, 53 derivative, higher, 54 derivative, left, 53 derivative, right, 53 diameter, 133 differentiable, 53 differentiable on an interval, 53 differentiable, infinitely, 54 differentiable \( n \) -times,54 Dini derivates, 88 Dini's Theorem, 207 Dirichlet kernel, 288 Dirichlet Problem, 257 discontinuity, 45, 136 discrete metric, 126 distance to a set, 133 divergence, 256 divergent series, 28 diverges, 20 Dominated Convergence Theorem, 104 dominates, 104 dual, 183 Edelstein's Theorem, 149 endpoint, 19, 163 enlargement, 155 \( \varepsilon \) -approximation,149 equal, 291, 292 equicontinuous, 208 equivalence class, 6 equivalence relation, 6 equivalent metrics, 131 equivalent norms, 184 essential supremum, 204 essentially bounded, 204 Euclidean metric, 127 Euclidean norm, 175 Euclidean space, 127 Euler's constant, 33 exp, 32 exponential series, 32 extended real line, 129 extension, continuous, 145 extremal element, 93 extreme point, 277 extreme subset, 277 Family, 5 farthest point, 240 Fatou's Lemma, 104 feasible array, 305 finite intersection property, 148 finite real number, 129 first category, 280 fixed point, 149, 220 Fourier coefficient, 242, 288 Fourier expansion, 248 Fourier series, 215, 288 frontier, 39 Fubini's Series Theorem, 90 function, 3 Fundamental Theorem of Calculus, 68,69 Gauss's Divergence Theorem, 256 geometric series, 28 Glueing Lemma, 163 gradient, 256 Gram-Schmidt, 249 graph, 285 greatest element, 7 greatest lower bound, 7 Green's Theorem, 256 Hahn-Banach Theorem, 262 Hahn-Banach Theorem, complex, 263 Heine-Borel-Lebesgue Theorem, 47 Helly's Theorem, 277 Hermitian, 254 Hilbert space, 237 Hölder’s inequality, 194, 196, 204 hyperplane, 187 hyperplane of support, 188 hyperplane, translated, 188 Idempotent, 256 identity mapping, 136 identity operator, 240 imaginary part, 19 increasing, 8, 101 index set, 5 induced metric, 131 infimum, 7 infimum of a function, 8 infinitely many, 20 inner product, 234 inner product space, 234 integers, 3 integrable, 95, 98, 234 integrable over a set, 99 integrable set, 113 integral, 95, 98 integration by parts, 109 integration space, 197 interior, 37, 130, 135 intermediate value property, 36 Intermediate Value Theorem, 51, 161 interval of convergence, 31 interval, bounded, 19 interval, closed, 19 interval, compact, 19 interval, finite, 19 interval, half open, 19 interval, infinite, 19 interval, length of, 19 interval, open, 18 Inverse Mapping Theorem, 285 irreflexive, 5 isolated, 133 isometric, 128 isometry, 128 iterates, 220 Jacobi polynomial, 252 Kernel, 186 Korovkin's Theorem, 212, 215 Krein-Milman Theorem, 277 L’Hôpital’s Rule, 57 Landau's Theorem, 287 Laplacian operator, 257 largest element, 7 laws of indices, 16 320 Index laws of logarithms, 18 least element, 8 least squares approximation, 250 least upper bound, 7 least-upper-bound principle, 12 Lebesgue covering property, 153 
Lebesgue integrable, 95 Lebesgue integral, 95, 98 Lebesgue measure, 113 Lebesgue number, 153 Lebesgue primitive, 93 Lebesgue's Series Theorem, 103 left hand derivative, 282 Legendre polynomial, 252 lim inf, 24 lim sup, 24 limit as \( x \) tends to infinity,43 limit comparison test, 28 limit inferior, 24 limit of a function, 41 limit of a mapping, 138 limit of a sequence, 20, 139 limit point, 48, 138 limit superior, 24 limit, left-hand, 41 limit, right-hand, 41 Lindelöf's Theorem, 148 linear functional, 182 linear functional, complex-, 259 linear functional, extension of, 261 linear functional, real-, 260 linear map, 182 \( {L}_{\infty },{204} \) Lipschitz condition, 143, 219 Lipschitz constant, 219 locally compact, 156 locally connected, 160 locally nonsatiated, 304 logarithmic function, 18 lower bound, 7 lower integral, 63, 73 lower limit, 24 lower sum, 63, 73 \( {L}_{p}\left( X\right) ,{197} \) \( {L}_{p} \) -norm,197 Majorant, 7 majorised, 7 maximum element, 7 Mazur's Lemma, 270 Mean Value Theorem, 57 Mean Value Theorem, Cauchy's, 57 measurable, 110 measurable set, 113 measure, 113 measure zero, 80 mesh, 62 metric, 125 metric space, 126 metrisable, 135 minimum element, 7 Minkowski functional, 275 Minkowski's inequality, 126, 195, 196, 235 Minkowski's Separation Theorem, 278 minorant, 7 minorised, 7 modulus, 19 monotone sequence principle, 22 Müntz, 216 multilinear map, 184 multiplication of series, 32 Natural logarithmic function, 18 natural numbers, 3 negative, 12, 294 neighbourhood, 37, 130, 135 nested intervals, 24 nonnegative, 13, 294 nonoverlapping, 84 nonsatiated, 306 nonzero linear map, 186 norm, 174 norm of a linear map, 183 norm, weighted least squares, 249 norm-preserving, 261 normal operator, 254 normed space, 174 nowhere dense, 280 nowhere differentiable, 2, 282 null space, 186 Oblique projection, 287 open ball, 130 open mapping, 283 Open Mapping Theorem, 283 open set, 35, 130, 135 operator, 253 order dense, 14, 295 orthogonal, 237 orthogonal complement, 237 orthogonal family, 242 orthonormal, 242 orthonormal basis, 247 oscillation, 45 outer measure, 79 outer measure, finite, 80 \( P \) -adic metric,127 \( p \) -power summable,196 parallelogram law, 236 Pareto optimum, 305 Parseval's identity, 248 partial order, 6 partial sum, 27, 180 partially ordered set, 6 partition, 62 path, 163 path component, 164 path connected, 163 Peano's Theorem, 228 period, 215 periodic, 215 Picard's Theorem, 223 points at infinity, 129 pointwise, 4 polarisation identity, 255 Polya's Theorem, 288 positive, 12, 294 positive integers, 3 positive linear operator, 212 positively homogeneous, 261 power series, 31 precompact, 149 preference relation, 303 preference relation, convex, 306 preference-indifference, 304 prehilbert space, 234 preorder, 6 price vector, 303 primitive, 69 producer, 303 product metric, 165 product norm, 176 product normed space, 176 product of paths, 164 product, of metric spaces, 165, 170 production set, 303 production vector, 303 projection, 166, 240 pseudometric, 127 Pythagoras’s Theorem, 238 Quotient norm, 181 quotient space, 181 Radius, 130 radius of convergence, 31 ratio test, 29 rational approximation, 292 rational complex number, 193 rational number, 291 rational numbers, 3 real line, extended, 129 real number, 12, 292 real number line, 11 real part, 19 rearrangement, 34 reciprocal, 296 refinement, 62 reflexive, 253, 266 reflexive, 5 regular, 292 remainder term, Cauchy form, 59 remainder term, Lagrange form, 59 representable, 272 representation, 187 Riemann integrable, 64 
Riemann integral, 64 Riemann sum, 67 Riemann-Lebesgue Lemma, 112 Riemann-Stieljtes integrable, 72 Riemann-Stieltjes integral, 72 Riemann–Stieltjes sum, 72 Riesz Representation Theorem, 252 Riesz's Lemma, 190 Riesz-Fischer Theorem, 198 right hand derivative, 280 Rodrigues’ formula, 252 Rolle's Theorem, 56 root test, 30 Satiated, 306 Schauder basis, 269 second category, 280 322 Index second dual, 253 self-map, 149, 220 selfadjoint, 254 seminorm, 261 separable, 132 separates, 278 sequence, 4 sequentially compact, 149 sequentially continuous, 45, 140 series, 27, 180 simple function, 116 smallest element, 8 step function, 99 Stone-Weierstrass Theorem, 216, 219 strict partial order, 6 strict preference, 303 strictly decreasing, 8 strictly increasing, 8 subadditive, 261 subcover, 47, 146 subfamily, 5 sublinear, 261 subsequence, 4 subspace, 131, 176 subspace of a prehilbert space, 234 sufficiently large, 20 sum, 180 sup norm, 175, 204 supremum, 7 supremum norm, 175 supremum of a function, 8 symmetric, 5 Taxicab metric, 126 Taylor expansion, 61 Taylor polynomial, 58 Taylor series, 61 Taylor's Theorem, 58 term, 4, 27 termwise, 5 Tietze Extension Theorem, 144 topological space, 134 topology, 135 total, 193 total order, 6 totally bounded, 149 totally disconnected, 160 transitive, 6 translation invariant, 81, 99 transported, 128 triangle inequality, 15, 126, 174 triple recursion formula, 251 Ultrametric, 127 unconditionally convergent, 180 uncountable, 4 Uniform Boundedness Theorem, 186, 286 Uniform Continuity Theorem, 49, 154 uniformly approximated, 212 uniformly continuous, 49, 142 uniformly convex, 186 uniformly equicontinuous, 209 unit ball, 174 unit vector, 174 upper bound, 7 upper contour set, 304 upper integral, 63, 73 upper limit, 24 upper sum, 63, 73 Urysohn's Lemma, 146 Variation, 71 Vitali covering, 82 Vitali Covering Theorem, 82 Weak solution, 257 Weierstrass Approximation Theorem, 212 Weierstrass’s \( M \) -test,46 weight function, 235 Zermelo, 299 Zorn’s Lemma, 300 ## Graduate Texts in Mathematics 61 Whittehead. Elements of Homotopy Theory. 62 KARGAPOLOV/MERLZJAKOV. Fundamentals of the Theory of Groups. 63 Bollobas. Graph Theory. 64 Edwards. Fourier Series. Vol. I 2nd ed. 65 WELLS. Differential Analysis on Complex Manifolds. 2nd ed. 66 WATERHOUSE. Introduction to Affine Group Schemes. 67 Serre. Local Fields. 68 WEIDMANN. Linear Operators in Hilbert Spaces. 69 LANG. Cyclotomic Fields II. 70 Massey. Singular Homology
186, 286 Uniform Continuity Theorem, 49, 154 uniformly approximated, 212 uniformly continuous, 49, 142 uniformly convex, 186 uniformly equicontinuous, 209 unit ball, 174 unit vector, 174 upper bound, 7 upper contour set, 304 upper integral, 63, 73 upper limit, 24 upper sum, 63, 73 Urysohn's Lemma, 146 Variation, 71 Vitali covering, 82 Vitali Covering Theorem, 82 Weak solution, 257 Weierstrass Approximation Theorem, 212 Weierstrass’s \( M \) -test,46 weight function, 235 Zermelo, 299 Zorn’s Lemma, 300 ## Graduate Texts in Mathematics 61 Whittehead. Elements of Homotopy Theory. 62 KARGAPOLOV/MERLZJAKOV. Fundamentals of the Theory of Groups. 63 Bollobas. Graph Theory. 64 Edwards. Fourier Series. Vol. I 2nd ed. 65 WELLS. Differential Analysis on Complex Manifolds. 2nd ed. 66 WATERHOUSE. Introduction to Affine Group Schemes. 67 Serre. Local Fields. 68 WEIDMANN. Linear Operators in Hilbert Spaces. 69 LANG. Cyclotomic Fields II. 70 Massey. Singular Homology Theory. 71 FARKAS/KRA. Riemann Surfaces. 2nd ed. 72 Stillwell. Classical Topology and Combinatorial Group Theory. 2nd ed. 73 Hungerford. Algebra. 74 Davenport. Multiplicative Number Theory. 2nd ed. 75 HOCHSCHILD. Basic Theory of Algebraic Groups and Lie Algebras. 76 Irraxa. Algebraic Geometry. 77 HECKE. Lectures on the Theory of Algebraic Numbers. 78 Burris/Sankappanavar. A Course in Universal Algebra. 79 WALTERS. An Introduction to Ergodic Theory. 80 Robinson. A Course in the Theory of Groups. 2nd ed. 81 FORSTER. Lectures on Riemann Surfaces. 82 Borr/Tu. Differential Forms in Algebraic Topology. 83 WASHINGTON. Introduction to Cyclotomic Fields. 2nd ed. 84 IRELAND/ROSEN. A Classical Introduction to Modern Number Theory. 2nd ed. 85 Edwards. Fourier Series. Vol. II. 2nd ed. 86 VAN LINT. Introduction to Coding Theory. 2nd ed. 87 Brown. Cohomology of Groups. 88 Pierce. Associative Algebras. 89 LANG. Introduction to Algebraic and Abelian Functions. 2nd ed. 90 BRøndsted. An Introduction to Convex Polytopes. 91 BEARDON. On the Geometry of Discrete Groups. 92 Diestel. Sequences and Series in Banach Spaces. 93 Dubrovin/Fomenko/Novikov. Modern Geometry-Methods and Applications. Part I. 2nd ed. 94 WARNER. Foundations of Differentiable Manifolds and Lie Groups. 95 SHIRYAEV. Probability. 2nd ed. 96 Conway. A Course in Functional Analysis. 2nd ed. 97 KOBLITZ. Introduction to Elliptic Curves and Modular Forms. 2nd ed. 98 BRÖCKER/TOM DIECK. Representations of Compact Lie Groups. 99 Grove/Benson. Finite Reflection Groups. 2nd ed. 100 Berg/Christensen/Ressel. Harmonic Analysis on Semigroups: Theory of Positive Definite and Related Functions. 101 Edwards. Galois Theory. 102 VARADARAJAN. Lie Groups, Lie Algebras and Their Representations. 103 LANG. Complex Analysis. 3rd ed. 104 Dubrovin/Fomenko/Novikov. Modern Geometry-Methods and Applications. Part II. 105 LANG. \( S{L}_{2}\left( \mathbf{R}\right) \) . 106 SILVERMAN. The Arithmetic of Elliptic Curves. 107 OLVER. Applications of Lie Groups to Differential Equations. 2nd ed. 108 Range. Holomorphic Functions and Integral Representations in Several Complex Variables. 109 LEHTO. Univalent Functions and Teichmüller Spaces. 110 LANG. Algebraic Number Theory. 111 Husemöller. Elliptic Curves. 112 LANG. Elliptic Functions. 113 KARATZAS/SHREVE. Brownian Motion and Stochastic Calculus. 2nd ed. 114 KOBLITZ. A Course in Number Theory and Cryptography. 2nd ed. 115 Berger/Gostiaux. Differential Geometry: Manifolds, Curves, and Surfaces. 116 Kelley/Srinivasan. Measure and Integral. Vol. I. 117 SERRE. 
Algebraic Groups and Class Fields. 118 Pedersen. Analysis Now. 119 ROTMAN. An Introduction to Algebraic Topology. 120 ZIEMER. Weakly Differentiable Functions: Sobolev Spaces and Functions of Bounded Variation. 121 LANG. Cyclotomic Fields I and II. Combined 2nd ed. 122 REMMERT. Theory of Complex Functions. Readings in Mathematics 123 EBBINGHAUS/HERMES et al. Numbers. Readings in Mathematics 124 DUBROVIN/FOMENKO/NOVIKOV. Modern Geometry-Methods and Applications. Part III. 125 Berensten/Gay. Complex Variables: An Introduction. 126 Borel. Linear Algebraic Groups. 2nd ed. 127 Massey. A Basic Course in Algebraic Topology. 128 Rauch. Partial Differential Equations. 129 FULTON/HARRIS. Representation Theory: A First Course. Readings in Mathematics 130 Dodson/Poston. Tensor Geometry. 131 LAM. A First Course in Noncommutative Rings. 132 BEARDON. Iteration of Rational Functions. 133 Harris. Algebraic Geometry: A First Course. 134 ROMAN. Coding and Information Theory. 135 Roman. Advanced Linear Algebra. 136 ADKINS/WEINTRAUB. Algebra: An Approach via Module Theory. 137 AXLER/BOURDON/RAMEY. Harmonic Function Theory. 138 COHEN. A Course in Computational Algebraic Number Theory. 139 BREDON. Topology and Geometry. 140 AUBIN. Optima and Equilibria. An Introduction to Nonlinear Analysis. 141 BECKER/WEISPFENNING/KREDEL. Gröbner Bases. A Computational Approach to Commutative Algebra. 142 LANG. Real and Functional Analysis. 3rd ed. 143 DOOB. Measure Theory. 144 DENNIS/FARB. Noncommutative Algebra. 145 VICK. Homology Theory. An Introduction to Algebraic Topology. 2nd ed. 146 BRIDGES. Computability: A Mathematical Sketchbook. 147 ROSENBERG. Algebraic \( K \) -Theory and Its Applications. 148 ROTMAN. An Introduction to the Theory of Groups. 4th ed. 149 RATCLIFFE. Foundations of Hyperbolic Manifolds. 150 EISENBUD. Commutative Algebra with a View Toward Algebraic Geometry. 151 SILVERMAN. Advanced Topics in the Arithmetic of Elliptic Curves. 152 ZIEGLER. Lectures on Polytopes. 153 FULTON. Algebraic Topology: A First Course. 154 BROWN/PEARCY. An Introduction to Analysis. 155 KASSEL. Quantum Groups. 156 KECHRIS. Classical Descriptive Set Theory. 157 MALLIAVIN. Integration and Probability. 158 ROMAN. Field Theory. 159 Conway. Functions of One Complex Variable II. 160 LANG. Differential and Riemannian Manifolds. 161 BORWEIN/ERDÉLYI. Polynomials and Polynomial Inequalities. 162 ALPERIN/BELL. Groups and Representations. 163 DIXON/MORTIMER. Permutation Groups. 164 NATHANSON. Additive Number Theory: The Classical Bases. 165 NATHANSON. Additive Number Theory: Inverse Problems and the Geometry of Sumsets. 166 SHARPE. Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. 167 MORANDI. Field and Galois Theory. 168 EWALD. Combinatorial Convexity and Algebraic Geometry. 169 BHATIA. Matrix Analysis. 170 BREDON. Sheaf Theory. 2nd ed. 171 PETERSEN. Riemannian Geometry. 172 REMMERT. Classical Topics in Complex Function Theory. 173 DIESTEL. Graph Theory. 174 BRIDGES. Foundations of Real and Abstract Analysis. 175 LICKORISH. An Introduction to Knot Theory. 176 LEE. Riemannian Manifolds. 177 Newman. Analytic Number Theory. 178 CLARKE/LEDYAEV/STERN/WOLENSKI. Nonsmooth Analysis and Control Theory.
# GraduateTexts inMathematics W.B. Raymond Lickorish # An Introduction to Knot Theory Springer ## Graduate Texts in Mathematics 175 Editorial Board S. Axler F.W. Gehring K.A. Ribet ## Graduate Texts in Mathematics 1 TAKEUTI/ZARING. Introduction to Axiomatic Set Theory. 2nd ed. 2 Oxtoby. Measure and Category. 2nd ed. 3 Schaefer. Topological Vector Spaces. 4 Hilton/Stammbach. A Course in Homological Algebra. 2nd ed. 5 MAC LANE. Categories for the Working Mathematician. 6 Hughes/Piper. Projective Planes. 7 Serre. A Course in Arithmetic. 8 TAKEUTI/ZARING. Axiomatic Set Theory. 9 Humphreys. Introduction to Lie Algebras and Representation Theory. 10 COHEN. A Course in Simple Homotopy Theory. 11 Conway. Functions of One Complex Variable I. 2nd ed. 12 Beals. Advanced Mathematical Analysis. 13 Anderson/Fuller. Rings and Categories of Modules. 2nd ed. 14 Golubitsky/Guillemin. Stable Mappings and Their Singularities. 15 Berberian. Lectures in Functional Analysis and Operator Theory. 16 Winter. The Structure of Fields. 17 Rosenblatt. Random Processes. 2nd ed. 18 Halmos. Measure Theory. 19 Halmos. A Hilbert Space Problem Book. 2nd ed. 20 Husemoller. Fibre Bundles. 3rd ed. 21 Humphreys. Linear Algebraic Groups. 22 BARNES/MACK. An Algebraic Introduction to Mathematical Logic. 23 Greub. Linear Algebra. 4th ed. 24 Holmes. Geometric Functional Analysis and Its Applications. 25 Hewitt/Stromberg. Real and Abstract Analysis. 26 Manes. Algebraic Theories. 27 Kelley. General Topology. 28 Zariski/Samuel. Commutative Algebra. Vol.I. 29 Zariski/Samuel. Commutative Algebra. Vol.II. 30 Jacobson. Lectures in Abstract Algebra I. Basic Concepts. 31 JACOBSON. Lectures in Abstract Algebra II. Linear Algebra. 32 JACOBSON. Lectures in Abstract Algebra III. Theory of Fields and Galois Theory. 33 Hirsch. Differential Topology. 34 SPITZER. Principles of Random Walk. 2nd ed. 35 Wermer. Banach Algebras and Several Complex Variables. 2nd ed. 36 Kelley/Namioka et al. Linear Topological Spaces. 37 Monk. Mathematical Logic. 38 Grauert/Fritzsche. Several Complex Variables. 39 Arveson. An Invitation to \( {C}^{ * } \) -Algebras. 40 KEMENY/SNELL/KNAPP. Denumerable Markov Chains. 2nd ed. 41 Apostol. Modular Functions and Dirichlet Series in Number Theory. 2nd ed. 42 Serre. Linear Representations of Finite Groups. 43 Gillman/Jerison. Rings of Continuous Functions. 44 KENDIG. Elementary Algebraic Geometry. 45 Loève. Probability Theory I. 4th ed. 46 Loève. Probability Theory II. 4th ed. 47 Moise. Geometric Topology in Dimensions 2 and 3. 48 SACHS/Wu. General Relativity for Mathematicians. 49 Gruenberg/Weir. Linear Geometry. 2nd ed. 50 Edwards. Fermat's Last Theorem. 51 KLINGENBERG. A Course in Differential Geometry. 52 Hartshorne. Algebraic Geometry. 53 Manin. A Course in Mathematical Logic. 54 Graver/Watkins. Combinatorics with Emphasis on the Theory of Graphs. 55 Brown/Pearcy. Introduction to Operator Theory I: Elements of Functional Analysis. 56 Massey. Algebraic Topology: An Introduction. 57 Crowell/Fox. Introduction to Knot Theory. 58 KOBLITZ. \( p \) -adic Numbers, \( p \) -adic Analysis, and Zeta-Functions. 2nd ed. 59 LANG. Cyclotomic Fields. 60 Arnold. Mathematical Methods in Classical Mechanics. 2nd ed. W.B. Raymond Lickorish # An Introduction to Knot Theory With 114 Illustrations W.B. Raymond Lickorish Professor of Geometric Topology, University of Cambridge, and Fellow of Pembroke College, Cambridge Department of Pure Mathematics and Mathematical Statistics Cambridge CB2 1SB England ## Editorial Board S. Axler F.W. Gehring K.A. 
Ribet Mathematics Department Mathematics Department Department of Mathematics San Francisco State East Hall University of California University University of Michigan at Berkeley San Francisco, CA 94132 Ann Arbor, MI 48109 Berkeley, CA 94720 USA USA USA ## Mathematics Subject Classification (1991): 57-01, 57M25, 16S34, 57M05 Library of Congress Cataloging-in-Publication Data Lickorish, W.B. Raymond. An introduction to knot theory / W.B. Raymond Lickorish. p. \( \mathrm{{cm}} - \) (Graduate texts in mathematics; 175) Including bibliographical references (p. - ) and index. ISBN 978-1-4612-6869-7 ISBN 978-1-4612-0691-0 (eBook) DOI 10.1007/978-1-4612-0691-0 1. Knot theory. I. Title. II. Series QA612.2.L53 1997 \( {514}^{\prime }{.224} - \mathrm{{dc}}{21} \) 97-16660 Printed on acid-free paper. ## (C) 1997 Springer Science+Business Media New York Originally published by Springer-Verlag New York Berlin Heidelberg in 1997 Softcover reprint of the hardcover 1st edition 1997 All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone. Production managed by Steven Pisano; manufacturing supervised by Johanna Tschebull. Photocomposed pages prepared from the author's TeX files. 987654321 ISBN 978-1-4612-6869-7 SPIN 10628672 ## Preface This account is an introduction to mathematical knot theory, the theory of knots and links of simple closed curves in three-dimensional space. Knots can be studied at many levels and from many points of view. They can be admired as artifacts of the decorative arts and crafts, or viewed as accessible intimations of a geometrical sophistication that may never be attained. The study of knots can be given some motivation in terms of applications in molecular biology or by reference to parallels in equilibrium statistical mechanics or quantum field theory. Here, however, knot theory is considered as part of geometric topology. Motivation for such a topological study of knots is meant to come from a curiosity to know how the geometry of three-dimensional space can be explored by knotting phenomena using precise mathematics. The aim will be to find invariants that distinguish knots, to investigate geometric properties of knots and to see something of the way they interact with more adventurous three-dimensional topology. The book is based on an expanded version of notes for a course for recent graduates in mathematics given at the University of Cambridge; it is intended for others with a similar level of mathematical understanding. In particular, a knowledge of the very basic ideas of the fundamental group and of a simple homology theory is assumed; it is, after all, more important to know about those topics than about the intricacies of knot theory. There are other works on knot theory written at this level; indeed most of them are listed in the bibliography. 
However, the quantity of what may reasonably be termed mathematical knot theory has expanded enormously in recent years. Much of the newly discovered material is not particularly difficult and has a right to be included in an introduction. This makes some of the excellent established treatises seem a little dated. However, concentrating entirely on developments of the past decade gives a most misleading view of the subject. An attempt is made here to outline some of the highlights from throughout the twentieth century, with a little bias towards recent discoveries. The present size of the subject means that a choice of topics must be made for inclusion in any first course or book of reasonable length. Such selection must be subjective. An attempt has been made here to give the flavour and the results from three or four main techniques and not to become unduly enmeshed in any of them. Firstly, there is the three-manifold method of manipulating surfaces, using the pattern of simple closed curves in which two surfaces intersect. This leads to the theorem concerning the unique factorisation of knots into primes and to the theory concerning the primeness of alternating diagrams. Combinatorics applied to knot and link diagrams lead (by way of the Kauffman bracket) to the Jones polynomial, an invariant that is good, but not infallible, at distinguishing different knots and links. This invariant also has applications to the way diagrams of certain knots might be drawn. Next, techniques of elementary homology theory are used on the infinite cyclic cover of the complement of a link to lead to the "abelian" invariants, in particular to the well-known Alexander polynomial. That is reinforced by the association of that polynomial invariant with the Conway polynomial, as well as by a study of the fundamental group of a link's complement. The use of (framed) links to describe, by means of "surgery", any closed orientable three-manifold is explored. Together with the skein theory of the Kauffman bracket, this idea leads to some "quantum" invariants for three-manifolds. A technique, belonging to a more general theory of three-manifolds, that will not be described is that of the W. Haken's classification of knots. That technique gives a theoretical algorithm which always decides if two knots are or are not the same. It is almost impossible to use it, but it is good to know it exists [42]. One can take the view that the object of mathematics is to prove that certain things are true. That object will here be pursued. A declaration that something is true, followed by copious calculations that produce no contradiction, should not completely satisfy the intellect. However, even neglecting all logical or philosophical objections to this quest, there are genuine practical difficulties in attempting to give a totally
self-contained introduction to knot theory. To avoid pathological possibilities, in which diagrams of links might have infinitely many crossings, it is necessary to impose a piecewise linear or differential restriction on links. Then all manoeuvres must preserve such structures, and the technicalities of a piecewise linear or differential theory are needed. One needs, for example, to know that any two-dimensional sphere, smoothly or piecewise linearly embedded in Euclidean three-space, bounds a smooth or piecewise linear ball. This is the Schönflies theorem; the existence of wild horned spheres shows it is not true without the technical restrictions. What is needed, then, is a full development of the theory of piecewise linear or differential manifolds at least up to dimension three. Laudable though such an account might be, experience suggests that it is initially counter-productive in the study of knot theory. Conversely, experience of knot theory can produce the incentive to understand these geometric foundations at a later time. Thus some basic (intuitively likely) results of piecewise linear theory will sometimes be quoted, sometimes with a sketch of how they are proved. Perhaps here piecewise linear theory has an advantage over differential theory, because up to dimension three, simplexes are readily visualisable; but differential theory, if known, will answer just as well. That apologia underpins the start of the theory. Significant direct quotations of results have however also been made in the discussion of the fundamental group of a link complement. That topic has been treated extensively elsewhere, so the remarks here are intended to be but something of a little survey. Also quoted is R. C. Kirby's theorem concerning moves between surgery links for a three-manifold. Furthermore, at the end of a section extensions of a theory just considered are sometimes outlined without detailed proof. Otherwise it is intended that everything should be proved! W. B. Raymond Lickorish ## Contents Preface \( \mathrm{V} \) Chapter 1. A Beginning for Knot Theory 1 Exercises 13 Chapter 2. Seifert Surfaces and Knot Factorisation 15 Exercises 21 Chapter 3. The Jones Polynomial 23 Exercises 30 Chapter 4. Geometry of Alternating Links 32 Exercises 40 Chapter 5. The Jones Polynomial of an Alternating Link 41 Exercises 48 Chapter 6. The Alexander Polynomial 49 Exercises 64 Chapter 7. Covering Spaces 66 Exercises 78 Chapter 8. The Conway Polynomial, Signatures and Slice Knots 79 Exercises 91 Chapter 9.
Cyclic Branched Covers and the Goeritz Matrix 93 Exercises 102 Chapter 10. The Arf Invariant and the Jones Polynomial 103 Exercises 108 x Contents Chapter 11. The Fundamental Group 110 Exercises 121 Chapter 12. Obtaining 3-Manifolds by Surgery on \( {S}^{3} \) 123 Exercises 132 Chapter 13. 3-Manifold Invariants From The Jones Polynomial 133 Exercises 144 Chapter 14. Methods for Calculating Quantum Invariants 146 Exercises 164 Chapter 15. Generalisations of the Jones Polynomial 166 Exercises 177 Chapter 16. Exploring the HOMFLY and Kauffman Polynomials 179 Exercises 191 References 193 Index 199 1 ## A Beginning for Knot Theory The mathematical theory of knots is intended to be a precise investigation into the way that 1-dimensional "string" can lie in ordinary 3-dimensional space. A glance at the diagrams on the pages that follow indicates the sort of complication that is envisaged. Because the theory is intended to correspond to reality, it is important that initial definitions, whilst being precise, exclude unwanted pathology both in the things being studied and in the properties they might have. On the other hand, obsessive concentration on basic geometric technology can deter progress. It can initially be but tasted if it seem onerous. At its foundations, knot theory will here be considered as a branch of topology. It is, at least initially, not a very sophisticated application of topology, but it benefits from topological language and provides some very accessible illustrations of the use of the fundamental group and of homology groups. As is customary, \( {\mathbb{R}}^{n} \) will denote \( n \) -dimensional Euclidean space and \( {S}^{n} \) will be the \( n \) -dimensional sphere. Thus \( {S}^{n} \) is the unit sphere in \( {\mathbb{R}}^{n + 1} \), but it can be regarded as being \( {\mathbb{R}}^{n} \) together with an extra point at infinity. There is a linear or affine structure on \( {\mathbb{R}}^{n} \) ; it contains lines and planes and \( r \) -simplexes ( \( r \) -dimensional analogues of intervals, triangles and tetrahedra). \( {S}^{n} \) can also be regarded as the boundary of a standard \( \left( {n + 1}\right) \) -simplex, so that \( {S}^{n} \) is then triangulated with the structure of a simplicial complex bounding a triangulated \( \left( {n + 1}\right) \) -ball \( {B}^{n + 1} \) . Sometimes it seems more natural to describe \( {B}^{n + 1} \) as a disc; it is then denoted \( {D}^{n + 1} \) . Definition 1.1. A link \( L \) of \( m \) components is a subset of \( {S}^{3} \), or of \( {\mathbb{R}}^{3} \), that consists of \( m \) disjoint, piecewise linear, simple closed curves. A link of one component is a knot. The piecewise linear condition means that the curves composing \( L \) are each made up of a finite number of straight line segments placed end to end, "straight" being in the linear structure of \( {\mathbb{R}}^{3} \subset {\mathbb{R}}^{3} \cup \infty = {S}^{3} \) or, alternatively, in the structure of one of the 3-simplexes that make up \( {S}^{3} \) in a triangulation. In practice, when drawing diagrams of knots or links it is assumed that there are so very many straight line segments that the curves appear pretty well rounded. This insistence on having a finite number of straight line segments prevents a link from having an infinite number of kinks, getting ever smaller as they converge to a point (those links are called "wild"). 
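Definition 1.1 is easily modelled on a computer: a piecewise linear knot is just a finite closed cycle of straight line segments. The sketch below is purely illustrative (the particular parametrisation and the number of segments are assumptions made here, not part of the text); it produces such a polygonal knot by sampling a smooth trefoil curve at finitely many points, in the spirit of the remark that in practice very many segments are used so that the curve appears rounded.

```python
import numpy as np

def pl_trefoil(n_segments=200):
    """Vertices of a piecewise linear trefoil knot in R^3.

    The closed curve is obtained by sampling a standard smooth trefoil
    parametrisation at n_segments points; consecutive vertices (and the last
    vertex with the first) are understood to be joined by straight segments.
    """
    t = np.linspace(0.0, 2.0 * np.pi, n_segments, endpoint=False)
    r = 2.0 + np.cos(3.0 * t)                 # a (2, 3) curve on a torus
    return np.column_stack([r * np.cos(2.0 * t), r * np.sin(2.0 * t), np.sin(3.0 * t)])

vertices = pl_trefoil()
print(vertices.shape)   # (200, 3): 200 vertices, hence 200 straight line segments
```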
An alternative way of avoiding wildness is to require that \( L \) be a smooth 1-dimensional submanifold of the smooth 3-manifold \( {S}^{3} \) . That leads to an equivalent theory, but in these low dimensions simplexes are often easier to manipulate than are sophisticated theorems of differential manifolds. Thus a piecewise linear condition applies to practically everything discussed here, but it will be given as little emphasis as possible. Definition 1.2. Links \( {L}_{1} \) and \( {L}_{2} \) in \( {S}^{3} \) are equivalent if there is an orientation-preserving piecewise linear homeomorphism \( h : {S}^{3} \rightarrow {S}^{3} \) such that \( h\left( {L}_{1}\right) = \) \( \left( {L}_{2}\right) \) . Here the piecewise linear condition means that after subdividing the simplexes in each copy of \( {S}^{3} \) into possibly very many smaller simplexes, \( h \) maps simplexes to simplexes in a linear way. Soon, equivalent links will be regarded as being the same link; in practice this causes no confusion. If the links are oriented or their components are ordered, \( h \) may be required to preserve such attributes. It is a basic theorem of piecewise linear topology that such an \( h \) is isotopic to the identity. This means there exist \( {h}_{t} : {S}^{3} \rightarrow {S}^{3} \) for \( t \in \left\lbrack {0,1}\right\rbrack \) so that \( {h}_{0} = 1 \) and \( {h}_{1} = h \) and \( \left( {x, t}\right) \mapsto \left( {{h}_{t}x, t}\right) \) is a piecewise linear homeomorphism of \( {S}^{3} \times \left\lbrack {0,1}\right\rbrack \) to itself. Thus certainly the whole of \( {S}^{3} \) can be continuously distorted, using the homeomorphism \( {h}_{t} \) at time \( t \), to move \( {L}_{1} \) to \( {L}_{2} \) . An inept attempt to define equivalence in terms of moving one subset until it becomes the other could misguidedly permit knots to be pulled tighter and tighter until any complication disappears at a single point. If \( {L}_{1} \) and \( {L}_{2} \) are equivalent, their complements in \( {S}^{3} \) are, of course, homeomorphic 3-dimensional manifolds. Thus it is reasonable to try to distinguish links by applying any topological invariant (for example, the fundamental group) to such complements. Similarly, any facet of the extensive theory of 3-dimensional manifolds can be applied to link complements; the theory of knots and links forms a fundamental source of examples in 3-manifold theory. It has recently been proved, at some length [37], that two knots with homeomorphic oriented complements are equivalent; that is not true, in general, for links of more than one component (a fairly easy exercise). An elementary method of changing a link \( L \) in \( {\mathbb{R}}^{3} \) to an equivalent link is to find a planar triangle in \( {\mathbb{R}}^{3} \) that intersects \( L \) in exactly one edge of the triangle, delete that edge from \( L \), and replace it by the other two edges of the triangle. See Figure 1.1. It can be shown that if two links are equi
valent, they differ by a finite sequence of such moves or the inverses of such moves (replace two edges of a triangle by the other one). This result will be assumed; any proof would have to penetrate the technicalities of piecewise linear theory (a proof can be found in [17]). Using such (possibly very small) moves, \( L \) can easily be changed so that it is in general position with respect to the standard projection \( p : {\mathbb{R}}^{3} \rightarrow {\mathbb{R}}^{2} \) . Here this means that each line segment of \( L \) projects to a line segment in \( {\mathbb{R}}^{2} \), that the ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_0.jpg) Figure 1.1 projections of two such segments intersect in at most one point which for disjoint segments is not an end point, and that no point belongs to the projections of three segments. Given such a situation, the image of \( L \) in \( {\mathbb{R}}^{2} \) together with "over and under" information at the crossings is called a link diagram of \( L \) . Of course, a crossing is a point of intersection of the projections of two line segments of \( L \) ; the "over and under" information refers to the relative heights above \( {\mathbb{R}}^{2} \) of the two inverse images of a crossing. This information is always indicated in pictures by breaks in the under-passing segments. If \( {L}_{1} \) and \( {L}_{2} \) are equivalent, they are related by a sequence of triangle moves as described above. After moving all the vertices of all the triangles by a very small amount, it can be assumed that the projections of no three of the vertices lie on a line in \( {\mathbb{R}}^{2} \) and the projections of no three edges pass through a single point. Then each triangle projects to a triangle, and one can analyse the effect on link diagrams of each triangle move. One of the more interesting possibilities is shown in Figure 1.2. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_1.jpg) Figure 1.2 With a little careful thought, it follows that any two diagrams of equivalent links \( {L}_{1} \) and \( {L}_{2} \) are related by a sequence of Reidemeister moves and an orientation-preserving homeomorphism of the plane. The Reidemeister moves are of three types, shown below in Figure 1.3; each replaces a simple configuration of arcs and crossings in a disc by another configuration. A move of Type I inserts or deletes a "kink" in the diagram; moves of Type III preserve the number of crossings.
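The construction of a link diagram described above can also be imitated numerically: forget the \( z \)-coordinate to project each segment to the plane, locate the points where two projected segments cross, and compare the heights of the two preimages to decide which strand passes over. The sketch below does this for the polygonal trefoil of the earlier sketch; it is only an illustration, and it assumes that this particular projection is already in general position (no shared end points, no triple points), as required in the text. None of the function names come from the book.

```python
import numpy as np

def pl_trefoil(n=200):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = 2.0 + np.cos(3.0 * t)
    return np.column_stack([r * np.cos(2.0 * t), r * np.sin(2.0 * t), np.sin(3.0 * t)])

def diagram_crossings(vertices):
    """Crossings of the diagram obtained from the projection (x, y, z) -> (x, y).

    Returns a list of triples (i, j, over): segments i and j cross in the plane,
    and `over` is whichever of the two is higher at the crossing point.
    """
    n = len(vertices)
    segments = [(vertices[k], vertices[(k + 1) % n]) for k in range(n)]
    crossings = []
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue                              # these two segments share a vertex
            p0, p1 = segments[i]
            q0, q1 = segments[j]
            d1, d2 = p1[:2] - p0[:2], q1[:2] - q0[:2]
            denom = d1[0] * d2[1] - d1[1] * d2[0]     # 2D cross product
            if abs(denom) < 1e-12:
                continue                              # parallel shadows cannot cross
            w = q0[:2] - p0[:2]
            t1 = (w[0] * d2[1] - w[1] * d2[0]) / denom
            t2 = (w[0] * d1[1] - w[1] * d1[0]) / denom
            if 0.0 < t1 < 1.0 and 0.0 < t2 < 1.0:     # the shadows genuinely cross
                z1 = p0[2] + t1 * (p1[2] - p0[2])
                z2 = q0[2] + t2 * (q1[2] - q0[2])
                crossings.append((i, j, i if z1 > z2 else j))
    return crossings

print(len(diagram_crossings(pl_trefoil())))           # expected: 3 crossings for this projection
```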
Any homeomorphism of the plane must, of course, preserve all crossing information. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_2.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_2.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_3.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_3.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_4.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_13_4.jpg) Type I Type II Type III Figure 1.3 ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_14_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_14_0.jpg) Figure 1.5 The "moves" shown in Figure 1.4 can be seen (exercise) to be consequences of the three types of Reidemeister move. If the point at infinity is added to \( {\mathbb{R}}^{2} \), so that all moves and diagrams are now regarded as being in \( {S}^{2} \), then the "moves" of Figure 1.5 are combinations of Reidemeister moves of types two and three only (an easy exercise). Diagrams related by moves of Type II and Type III only are sometimes said to be regularly isotopic. It will always be assumed that \( {S}^{3} \) and \( {\mathbb{R}}^{3} \) are oriented. The components of an \( n \) -component link can be oriented in \( {2}^{n} \) ways, and a choice of orientation, indicated by arrows on a diagram, is extra information that may or may not be given. If \( K \) is an oriented knot, the reverse of \( K \) -denoted \( \mathrm{r}K \) -is the same knot as a set but with the other orientation. Often \( K \) and \( \mathrm{r}K \) are equivalent. If \( L \) is a link in \( {S}^{3} \) and \( \rho : {S}^{3} \rightarrow {S}^{3} \) is an orientation-reversing piecewise linear homeomorphism, then \( \rho \left( L\right) \) is a link called the obverse or reflection of \( L \) . Up to equivalence of \( \rho \left( L\right) \) , the choice of \( \rho \) is immaterial; \( \rho \left( L\right) \) is denoted \( \bar{L} \) . Regarding \( {S}^{3} \) as \( {\mathbb{R}}^{3} \cup \infty \), one can take \( \rho \) to be the map \( \left( {x, y, z}\right) \mapsto \left( {x, y, - z}\right) \), and then it is clear that a diagram for \( \bar{L} \) is the same as one for \( L \) but with all the over-passes changed to under-passes. As will later become clear, sometimes \( L \) and \( \bar{L} \) are equivalent, sometimes they are not. There do exist oriented knots (the knot named \( {9}_{32} \) is an example) for which \( K,\mathrm{r}K,\bar{K} \) and \( \overline{\mathrm{r}K} \) are four distinct oriented knots. A knot \( K \) is said to be the unknot if it bounds an embedded piecewise linear disc in \( {S}^{3} \) . Triangle moves across the 2-simplexes of a triangulation of such a disc show that the unknot is equivalent to the boundary of a single 2-simplex linearly embedded in \( {S}^{3} \), and hence it has (as expected) a diagram with no crossing at all. Two oriented knots \( {K}_{1} \) and \( {K}_{2} \) can be added together to form their sum \( {K}_{1} + {K}_{2} \) by a method that corresponds to the intuitive idea of tying one and then the other in the same piece of string; see Figure 1.6. More precisely, regard \( {K}_{1} \) and \( {K}_{2} \) as being in distinct copies of \( {S}^{3} \), remove from each \( {S}^{3} \) a (small) ball that meets the given knot in an unknotted spanning arc (one where the ball-arc pair is piecewise linearly homeomorphic to the product of an interval with a disc-point pair), and then identify together the resulting boundary spheres, and their intersections with the knots, so that all orientations match up. Some basic piecewise linear theory TABLE 1.1. 
The Knot Table to Eight Crossings ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_15_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_15_0.jpg) shows that balls meeting the knots in unknotted spanning arcs are essentially unique, so that the addition of oriented knots is (up to equivalence, of course) well defined. It is immediate that this addition is commutative, and it is easily seen to be associative. The unknot is a zero for this addition, but it will be seen a little later that no knot other than the unknot has an additive inverse. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_16_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_16_0.jpg)

Definition 1.3. A knot \( K \) is a prime knot if it is not the unknot, and \( K = {K}_{1} + {K}_{2} \) implies that \( {K}_{1} \) or \( {K}_{2} \) is the unknot. (Whereas "irreducible" might be a better term than "prime", this is traditional terminology, and it transpires that prime knots do have the usual algebraic property of primeness.)

Fairly simple knots can be defined by drawing diagrams, and to refuse to do this would be pedantic in the extreme. The crossing number of a knot is the minimal number of crossings needed for a diagram of the knot. Table 1.1 is a table of diagrams of all knots with crossing number at most 8. There are 35 such knots. Following traditional expediency, the unknot is omitted, only prime knots are included and all orientations are neglected (so that each diagram represents one, two or four oriented knots in oriented \( {S}^{3} \) by means of the above operations \( r \) and \( \rho \) ). A notation such as " \( {8}_{5} \) " beside a diagram simply means that it shows the fifth knot with crossing number 8 in a traditional ordering (begun in the nineteenth century by P. G. Tait [118] and C. N. Little [92]). Such terminology and tables of diagrams exist for knots up to eleven crossings.
It is easy to tabulate knot diagrams and, for low numbers of crossings, to be confident that a list is complete; the difficulty comes in proving that the entries are prime and that the tabulation contains no duplicates. This is accomplished by associating to a knot some "invariant"-a well-defined mathematical entity such as a number, a polynomial, or a group-and proving the invariants are distinct. Many such invariants are discussed later. Recent calculations by M. B. Thistlethwaite have produced the data in Table 1.2 for the number of prime knots (with the above conventions that neglect orientation) for crossing number up to 15. The table has been checked by J. Hoste and J. Weeks using totally independent methods from those of Thistlethwaite.

TABLE 1.2.

| Crossing number | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Number of knots | 1 | 1 | 2 | 3 | 7 | 21 | 49 | 165 | 552 | 2176 | 9988 | 46972 | 253293 |

The naming of knots by means of traditional ordering is overwhelmed by the quantity of twelve-crossing knots. C. H. Dowker and Thistlethwaite [26] have adapted Tait's knot notation to produce a coding for knots that is suitable for a computer. The method is as follows: Follow along a knot diagram from some base point, allocating in order the integers \( 1,2,3,\ldots \) to the crossings as they are reached. Each crossing receives two numbers, one from the over-pass strand, one from the under-pass. At each crossing one of the numbers will be even and the other odd. Thus an \( n \) -crossing diagram with a base point produces a pairing between the first \( n \) odd numbers and the first \( n \) even numbers. An even number is then decorated with a minus sign if the corresponding strand is an under-pass; if it is an over-pass, it is undecorated. If the knot is prime, its diagram can easily be reconstructed uniquely (neglecting orientations) from that pairing with signs. Thus, specifying the signed even numbers in the order in which they correspond to the odd numbers \( 1,3,5,\ldots ,{2n} - 1 \) specifies the knot up to reflection. Of course, there is no unique such specification, but for a given \( n \), there can be only finitely many such ways of describing a knot. Selecting the lowest possible \( n \) and the first description in a lexicographical ordering of the strings of even numbers does give a canonical name for the (unoriented, prime) knot from which the knot can be constructed. For example, the first four knots in the tables are given by the notations \[ 4\;6\;2,\qquad 4\;6\;8\;2,\qquad 4\;8\;10\;2\;6,\qquad 6\;8\;10\;2\;4. \]

The crossing number is an easily defined example of the idea of a knot invariant. Knots with different crossing numbers cannot be equivalent. However, because it is defined in terms of a minimum taken over the infinity of possible diagrams of a knot, the crossing number is in general very difficult to calculate and use. The unknotting number \( u\left( K\right) \) of a knot \( K \) is likewise a popular but intractable invariant; it will be mentioned in Chapter 7. By definition, \( u\left( K\right) \) is the minimum number of crossing changes (from "over" to "under" or vice versa) needed to change \( K \) to the unknot, where the minimum is taken over all possible sets of crossing changes in all possible diagrams of \( K \) .
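Before moving on, it is worth noting that the Dowker-Thistlethwaite coding just described is trivial for a machine to manipulate, which is the point of the notation. The following minimal Python sketch (the function names and the way a code is typed in are my own illustrative choices, not a standard) treats a code simply as the list of signed even numbers matched, in order, with the odd labels \( 1,3,5,\ldots ,{2n} - 1 \), and recovers the pairing.

```python
# A minimal sketch of the Dowker-Thistlethwaite pairing (illustrative only).
# A code is given as the list of signed even numbers matched, in order, with
# the odd labels 1, 3, 5, ..., 2n-1.

def dt_pairing(code):
    """Return the pairing {odd label: signed even label} that the code encodes."""
    return {2 * i + 1: even for i, even in enumerate(code)}

def is_plausible_dt(code):
    """Each of 2, 4, ..., 2n must occur exactly once, up to sign."""
    n = len(code)
    return sorted(abs(e) for e in code) == list(range(2, 2 * n + 1, 2))

knot_3_1 = [4, 6, 2]        # "4 6 2", the first code listed above
knot_4_1 = [4, 6, 8, 2]     # "4 6 8 2", the second

assert is_plausible_dt(knot_3_1) and is_plausible_dt(knot_4_1)
print(dt_pairing(knot_3_1))   # {1: 4, 3: 6, 5: 2}
```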
However, if intuitively \( K \) is thought of as a curve moving around in \( {S}^{3} \), then \( u\left( K\right) \) is the minimum number of times that \( K \) must pass through itself to achieve the unknot. This obvious measure of a knot's complexity is often hard to determine and use. In fact, knowledge of the unknotting number of a knot might better be thought of as an end product of knot theory. If it has been shown that \( K \) is not the unknot, but that one crossing change on some diagram of \( K \) does give the unknot, then of course \( u\left( K\right) = 1 \) . Thus, for example, it will soon be clear that \( u\left( {3}_{1}\right) = u\left( {4}_{1}\right) = 1 \) . However, at the time of writing, \( u\left( {8}_{10}\right) \) is unknown (it is either 1 or 2). A discussion of the problem of finding unknotting numbers and of many, many other problems in knot theory can be found in [67]. A glance at Table 1.1 shows that all the knots up to \( {8}_{18} \) have the property that in the displayed diagrams, the "over" or "under" nature of the crossings alternates as one travels along the knot. A knot is called alternating if it has such a diagram; alternating knots do seem to have particularly pleasant properties. It will later be seen that knots \( {8}_{19},{8}_{20} \) and \( {8}_{21} \) are not alternating. The apparent preponderance of alternating knots is simply a phenomenon of low crossing numbers. Looking at the given table, it is easy to imagine how various of its knots can be generalised to form infinite sets of knots by inserting extra crossings in a variety of ways. Further, note that for either orientation, \( r\left( {4}_{1}\right) = {4}_{1} = \overline{{4}_{1}} \) and \( r\left( {3}_{1}\right) = {3}_{1} \) ; later it will be seen that \( {3}_{1} \neq \overline{{3}_{1}} \) . Also \( {8}_{17} = r\overline{{8}_{17}} \), but it is known that \( {8}_{17} \neq \mathrm{r}\left( {8}_{17}\right) \) . A proof of this last result is not easy; it follows from F. Bonahon's "equivariant characteristic variety theorem" [14], and it was also proved by A. Kawauchi [63]; another proof is in [40]. The first examples of knots that differ from their reverses were those of H. F. Trotter [125], which will be discussed in Chapter 11. It is usually much more relevant to consider various classes of knots and links that have been found to be interesting, rather than to seek some list of all possible knots. An example, which later will be featured often, is that of pretzel knots and links. The pretzel link \( P\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) is shown in Figure 1.7. Here the \( {a}_{i} \) are integers indicating the number of crossings in the various "tassels" of the diagram. If \( {a}_{i} \) is positive, the crossings are in the sense shown (the complete "tassel" has a right-hand twist); if \( {a}_{i} \) is negative, the crossings are in the opposite sense. As \( n \) varies and different values are chosen for the \( {a}_{i} \), this gives an infinite collection of links. Indeed, counting link components shows that it gives infinitely many links, but various invariants will later be used to distinguish pretzel knots. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_18_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_18_0.jpg) Figure 1.7 The upper two diagrams of Figure 1.8 show rational (or 2-bridge) knots or links, denoted \( C\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) . Such a link has no more than two components. 
The diagrams differ slightly in the way the various strands are joined at the right-hand edge of the diagram; the first method is for odd \( n \), the second for even \( n \) . Again the \( {a}_{i} \) are integers, the sense of the crossings being as in the first diagram when all \( {a}_{i} \) are positive (so that then the upper "tassels" twist to the left and the lower ones to the right). For example, the second diagram shows \( C\left( {4,2,3, - 3}\right) \) . This notation, devised by J. H. Conway [20], is chosen so that the link can be termed the " \( \left( {p, q}\right) \) rational link" where the rational number \( q/p \) has the continued fraction expansion \[ \frac{q}{p} = \cfrac{1}{{a}_{1} + \cfrac{1}{{a}_{2} + \cfrac{1}{\ddots + \cfrac{1}{{a}_{n - 1} + \cfrac{1}{{a}_{n}}}}}}. \] It turns out that different ways of expressing \( q/p \) as such a continued fraction always give the same link (though a link can correspond to distinct rationals).

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_19_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_19_0.jpg) Figure 1.8

For a \( \left( {p, q}\right) \) rational knot, \( \left| p\right| \) is an invariant of the knot, namely its determinant (see Chapter 9). An important property of a rational link is that it can be formed by gluing together two trivial 2-string tangles. Such a tangle is a 3-ball containing two standard (unknotted, unlinked) disjoint spanning arcs. Each arc meets the boundary of its ball at just its end points. The gluing process identifies together the boundaries of the balls to obtain \( {S}^{3} \), and to produce the link, it identifies the four ends of the arcs in one ball with the ends of those in the other. This can be seen by considering a vertical line through one of the diagrams in Figure 1.8. The line meets the link in four points. The diagram to one side of the line represents two arcs in a ball and, forgetting the configuration on the other side of the line, the arcs untwist.
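The continued fraction attached to \( C\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) is easy to evaluate exactly. Here is a small Python sketch (the names are mine) that computes \( q/p \) from the expansion displayed above, using the example \( C\left( {4,2,3, - 3}\right) \) drawn in Figure 1.8.

```python
# A minimal sketch (names are mine): evaluate the continued fraction
# q/p = 1/(a_1 + 1/(a_2 + ... + 1/a_n)) with exact rational arithmetic.
from fractions import Fraction

def rational_fraction(a):
    value = Fraction(a[-1])            # innermost term a_n
    for ai in reversed(a[:-1]):
        value = ai + 1 / value
    return 1 / value                   # this is q/p

qp = rational_fraction([4, 2, 3, -3])  # the link C(4, 2, 3, -3) of Figure 1.8
print(qp)                              # 19/84, so here q = 19 and |p| = 84
```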
The remainder of Figure 1.8 shows how \( C\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) can be regarded as the boundary of \( n \) twisted bands "plumbed" together. If the \( {a}_{i} \) in the expression for \( q/p \) as a continued fraction are all even, then the union of these bands is an orientable surface. The recipe for this plumbing can be encoded in a simple linear graph, as shown, in which each vertex represents a twisted band and each edge a plumbing. The boundary of a collection of bands plumbed according to the recipe of a tree (a connected graph with no closed loop) is called an arborescent link. (Conway called such a link "algebraic".) If the tree has only one vertex incident to more than two edges, the resulting link is a "Montesinos link"; the pretzel links are simple examples. Arborescent links have been classified by Bonahon and L. C. Siebenmann [15].

The ideas of braids and the braid group give a useful way of describing knots and links. A braid of \( n \) strings is \( n \) oriented arcs traversing a box steadily from the left to the right. The box will be depicted as a square or rectangle, and the arcs will join \( n \) standard fixed points on the left edge to \( n \) such points on the right edge. Over-passes are indicated in the usual way. The arcs are required to meet each vertical line that meets the rectangle in precisely \( n \) points (the arcs can never turn back in their progress from left to right). Two braids are the same if they are ambient isotopic (that is, the strings can be "moved" from one position to the other) while keeping their end points fixed. The standard generating element \( {\sigma }_{i} \) is shown in Figure 1.9, as is the way of defining a product of braids by placing one after another. Given any braid \( b \), its ends on the right edge may be joined to those on the left edge, in the standard way shown, to produce the closed braid \( \widehat{b} \) that represents a link in \( {S}^{3} \) . Any braid can be written as a product of the \( {\sigma }_{i} \) and their inverses ( \( {\sigma }_{i}^{-1} \) is \( {\sigma }_{i} \) with the crossing switched), and it is a result discovered by J. W. Alexander that any oriented link is the closure of some braid for some \( n \) . There are moves (the Markov moves; see Chapter 16) that explain when two braids have the same closure. More details can be found in [9] or [7]. The \( n \) -string braids form a group \( {B}_{n} \) with respect to the above product; it has a presentation \[ \left\langle {{\sigma }_{1},{\sigma }_{2},\ldots ,{\sigma }_{n - 1};\;{\sigma }_{i}{\sigma }_{j} = {\sigma }_{j}{\sigma }_{i}\text{ if }\left| {i - j}\right| \geq 2,\;{\sigma }_{i}{\sigma }_{i + 1}{\sigma }_{i} = {\sigma }_{i + 1}{\sigma }_{i}{\sigma }_{i + 1}}\right\rangle . \] Figure 1.9 shows the braid \( {\sigma }_{1}{\sigma }_{2}\ldots {\sigma }_{n - 1} \) . If \( b = {\left( {\sigma }_{1}{\sigma }_{2}\ldots {\sigma }_{n - 1}\right) }^{m} \), then \( \widehat{b} \) is called the \( \left( {n, m}\right) \) torus link. It is a knot if \( n \) and \( m \) are coprime. This link can be drawn on the standard (unknotted) torus in \( {\mathbb{R}}^{3} \) (just consider the \( n - 1 \) parallel strings of \( {\sigma }_{1}{\sigma }_{2}\ldots {\sigma }_{n - 1} \) as being on the bottom of the torus, and the other string as looping over the top of the torus).

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_20_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_20_0.jpg) Figure 1.9
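A braid word can be handled concretely as a list of signed integers, with \( \pm i \) standing for \( {\sigma }_{i}^{\pm 1} \) . The number of components of the closed braid \( \widehat{b} \) is the number of cycles of the permutation of end points induced by \( b \); for instance, the closure of \( {\left( {\sigma }_{1}{\sigma }_{2}\ldots {\sigma }_{n - 1}\right) }^{m} \) has \( \gcd \left( {n, m}\right) \) components, in agreement with the torus link being a knot exactly when \( n \) and \( m \) are coprime. A minimal Python sketch (all names are my own, illustrative choices):

```python
# A minimal sketch (not from the book): the permutation underlying a braid word
# and the number of components of its closure.
from math import gcd

def braid_permutation(word, n):
    """Return perm with perm[i] = end position of the string starting at position i."""
    positions = list(range(n))          # positions[s] = current position of string s
    for letter in word:                 # letter +i or -i stands for sigma_i^{+-1}
        i = abs(letter) - 1             # the crossing swaps positions i and i+1
        for s in range(n):
            if positions[s] == i:
                positions[s] = i + 1
            elif positions[s] == i + 1:
                positions[s] = i
    return positions

def closure_components(word, n):
    """Number of components of the closed braid: cycles of the permutation."""
    perm = braid_permutation(word, n)
    seen, cycles = set(), 0
    for start in range(n):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

# The (n, m) torus link is the closure of (sigma_1 sigma_2 ... sigma_{n-1})^m.
n, m = 3, 2
word = [i for _ in range(m) for i in range(1, n)]
assert closure_components(word, n) == gcd(n, m)   # 1 component: a knot (the trefoil)
```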
There are many methods of constructing complicated knots in easy stages. A common process is that of the construction of a satellite knot. Start with a knot \( K \) in a solid torus \( T \) . This is called a pattern. Let \( e : T \rightarrow {S}^{3} \) be an embedding so that \( {eT} \) is a regular neighbourhood of a knot \( C \) in \( {S}^{3} \) . Then \( {eK} \) is called a satellite of \( C \), and \( C \) is sometimes called a companion of \( {eK} \) . The process is illustrated in Figure 1.10, where a satellite of the trefoil knot \( {3}_{1} \) is constructed. Note that if \( K \subset T \) and \( C \) are given, there are still different possibilities for the satellite, for \( T \) can be twisted as it embeds around \( C \) . A simple example of the construction is provided by the sum \( {K}_{1} + {K}_{2} \) of two knots; the sum is a satellite of \( {K}_{1} \) and of \( {K}_{2} \) . If \( K \) is a \( \left( {p, q}\right) \) torus knot on the boundary of \( T \), then \( {eK} \) is called the \( \left( {p, q}\right) \) cable knot about \( C \) provided \( e \) maps a longitude of \( T \) to a longitude of \( C \) (see Definition 1.6).

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_20_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_20_1.jpg) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_21_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_21_0.jpg) Figure 1.11

A crossing in a diagram of an oriented link can be allocated a sign; the crossing is said to be positive or negative, or to have sign +1 or -1 . The standard convention is shown in Figure 1.11. The convention uses orientations of both strands appearing at the crossing and also the orientation of space. A positive crossing shows one strand (either one) passing the other in the manner of a "right-hand screw". Note that, for a knot, the sign of a crossing does not depend on the knot orientation chosen, for reversing orientations of both strands at a crossing leaves the sign unchanged.

Definition 1.4. Suppose that \( L \) is a two-component oriented link with components \( {L}_{1} \) and \( {L}_{2} \) . The linking number \( \operatorname{lk}\left( {{L}_{1},{L}_{2}}\right) \) of \( {L}_{1} \) and \( {L}_{2} \) is half the sum of the signs, in a diagram for \( L \), of the crossings at which one strand is from \( {L}_{1} \) and the other is from \( {L}_{2} \) .

Note at once that this is well defined, for any two diagrams for \( L \) are related by a sequence of Reidemeister moves, and it is easy to see that the above definition is not changed by such a move (a move of Type I causes no trouble, as it features strands from only one component). The linking number is thus an invariant of oriented two-component links. To be equivalent, two such links must certainly have the same linking number. The definition given of linking number is symmetric: \[ \operatorname{lk}\left( {{L}_{1},{L}_{2}}\right) = \operatorname{lk}\left( {{L}_{2},{L}_{1}}\right) . \] This definition of linking number is convenient for many purposes, but it should not obscure the fact that linking numbers embody some elementary homology theory.
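Definition 1.4 is immediate to compute once the crossings of a diagram are listed with their signs and with the components to which the two strands belong. A minimal Python sketch follows (the data layout is an illustrative invention of mine, not the book's).

```python
# A minimal sketch of Definition 1.4 (the data layout is illustrative).
# Each crossing of a diagram is recorded as (sign, component_a, component_b),
# naming the components of the two strands that meet there.

def linking_number(crossings, c1, c2):
    """Half the sum of the signs of the crossings involving both c1 and c2."""
    return sum(s for s, a, b in crossings if {a, b} == {c1, c2}) / 2

# A diagram of the Hopf link in which both crossings are positive:
diagram = [(+1, "L1", "L2"), (+1, "L1", "L2")]
print(linking_number(diagram, "L1", "L2"))   # 1.0

# Crossings of a component with itself never contribute:
diagram.append((-1, "L1", "L1"))
print(linking_number(diagram, "L1", "L2"))   # still 1.0
```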
Suppose that \( K \) is a knot in \( {S}^{3} \) . Then \( K \) has a regular neighbourhood \( N \) that is a solid torus. (This is easy to believe, but, technically, the regular neighbourhood is the simplicial neighbourhood of \( K \) in the second derived subdivision of a triangulation of \( {S}^{3} \) in which \( K \) is a subcomplex.) The exterior \( X \) of \( K \) is the closure of \( {S}^{3} - N \) . Thus \( X \) is a connected 3-manifold, with boundary \( \partial X \) that is a torus. This \( X \) has the same homotopy type as \( {S}^{3} - K \), \( X \cap N = \partial X = \partial N \) and \( X \cup N = {S}^{3} \) . (Note the custom of using " \( \partial \) " to denote the boundary of an object.)

Theorem 1.5. Let \( K \) be an oriented knot in (oriented) \( {S}^{3} \), and let \( X \) be its exterior. Then \( {H}_{1}\left( X\right) \) is canonically isomorphic to the integers \( \mathbb{Z} \) generated by the class of a simple closed curve \( \mu \) in \( \partial N \) that bounds a disc in \( N \) meeting \( K \) at one point. If \( C \) is an oriented simple closed curve in \( X \), then the homology class \( \left\lbrack C\right\rbrack \in {H}_{1}\left( X\right) \) is \( \operatorname{lk}\left( {C, K}\right) \) . Further, \( {H}_{3}\left( X\right) = {H}_{2}\left( X\right) = 0 \) .

Proof. This result is true in any reasonable homology theory with integer coefficients; indeed, it follows at once from the relatively sophisticated theorem of Alexander duality. The following proof uses the Mayer-Vietoris theorem, which relates the homology of two spaces to that of their union and intersection. As it has been assumed that all links are piecewise linearly embedded, it is convenient to think of simplicial homology and to suppose that \( X \) and \( N \) are sub-complexes of some triangulation of \( {S}^{3} \) . Consider then the following Mayer-Vietoris exact sequence for \( X \) and the solid torus \( N \) that intersect in their common torus boundary: \[ {H}_{3}\left( X\right) \oplus {H}_{3}\left( N\right) \rightarrow {H}_{3}\left( {S}^{3}\right) \rightarrow \cdots \] \[ \cdots \rightarrow {H}_{2}\left( {X \cap N}\right) \rightarrow {H}_{2}\left( X\right) \oplus {H}_{2}\left( N\right) \rightarrow {H}_{2}\left( {S}^{3}\right) \rightarrow \cdots \] \[ \cdots \rightarrow {H}_{1}\left( {X \cap N}\right) \rightarrow {H}_{1}\left( X\right) \oplus {H}_{1}\left( N\right) \rightarrow {H}_{1}\left( {S}^{3}\right) \rightarrow \cdots . \]
Now, \( {H}_{3}\left( X\right) \oplus {H}_{3}\left( N\right) = 0 \) . This is because any connected triangulated 3-manifold with non-empty boundary deformation retracts to some 2-dimensional subcomplex (just "remove" 3-simplexes one by one, starting at the boundary), and hence it has zero 3-dimensional homology. The homology groups of the torus, the solid torus and the 3-sphere are all known as part of any elementary homology theory, so in the above it is only \( {H}_{2}\left( X\right) \) and \( {H}_{1}\left( X\right) \) that are not known. The groups \( {H}_{3}\left( {S}^{3}\right) \) and \( {H}_{2}\left( {X \cap N}\right) \) are both copies of \( \mathbb{Z} \) . Recall that the Mayer-Vietoris sequence comes from the corresponding short exact sequence of chain complexes. A generator of \( {H}_{3}\left( {S}^{3}\right) \) is represented by the chain consisting of the sum of all the 3-simplexes of \( {S}^{3} \) coherently oriented. This pulls back to the sum of the 3-simplexes in \( X \) plus those in \( N \) . That maps by the boundary (chain) map to the sum of the 2-simplexes in \( \partial X \) plus those in \( \partial N \), and this in turn pulls back to the sum of the (coherently oriented) 2-simplexes in \( X \cap N \) ; this represents a generator of \( {H}_{2}\left( {X \cap N}\right) \) . Thus inspection of the map in the sequence between \( {H}_{3}\left( {S}^{3}\right) \) and \( {H}_{2}\left( {X \cap N}\right) \) shows that a generator is sent to a generator, and hence the map is an isomorphism. As \( {H}_{2}\left( {S}^{3}\right) = 0 \), the exactness implies that \( {H}_{2}\left( X\right) \oplus {H}_{2}\left( N\right) = 0 \) . As \( {H}_{2}\left( {S}^{3}\right) = 0 \) and \( {H}_{1}\left( {S}^{3}\right) = 0 \), the map from \( {H}_{1}\left( {X \cap N}\right) = \mathbb{Z} \oplus \mathbb{Z} \) to \( {H}_{1}\left( X\right) \oplus {H}_{1}\left( N\right) \) is an isomorphism. As \( {H}_{1}\left( N\right) = \mathbb{Z} \), this implies that \( {H}_{1}\left( X\right) = \mathbb{Z} \) . This isomorphism \( {H}_{1}\left( {X \cap N}\right) \rightarrow {H}_{1}\left( X\right) \oplus {H}_{1}\left( N\right) \) is induced by the inclusion maps of \( X \cap N \) into each of \( X \) and \( N \) . Suppose that \( \mu \) is a non-separating simple closed curve in \( X \cap N \) that bounds a disc in the solid torus \( N \), oriented so that \( \mu \) encircles \( K \) with a right-hand screw. Then \( \mu \) represents an element that is indivisible (that is, it is not the multiple of another element by a non-unit integer) in \( {H}_{1}\left( {X \cap N}\right) \) ; of course, \( \mu \) represents zero in \( {H}_{1}\left( N\right) \) . Thus under the above isomorphism, \( \left\lbrack \mu \right\rbrack \mapsto \left( {1,0}\right) \in \mathbb{Z} \oplus \mathbb{Z} = {H}_{1}\left( X\right) \oplus {H}_{1}\left( N\right) \), for the image must still be indivisible, and this can be taken to define the choice of identification of \( {H}_{1}\left( X\right) \) with \( \mathbb{Z} \) . Examination of the definition of linking numbers in terms of signs of crossings shows that \( C \) is homologous in \( X \) to \( \operatorname{lk}\left( {C, K}\right) \left\lbrack \mu \right\rbrack \) .
Note that, with the notation of the above proof, a unique element of \( {H}_{1}\left( {X \cap N}\right) \) must map to \( \left( {0,1}\right) \), where the \( 1 \in {H}_{1}\left( N\right) \) is represented by the oriented curve \( K \) . As \( \left( {0,1}\right) \) is indivisible, this class is represented by a simple closed curve \( \lambda \) in \( X \cap N \) . This gives substance to the following definition: Definition 1.6. Let \( K \) be an oriented knot in (oriented) \( {S}^{3} \) with solid torus neighbourhood \( N \) . A meridian \( \mu \) of \( K \) is a non-separating simple closed curve in \( \partial N \) that bounds a disc in \( N \) . A longitude \( \lambda \) of \( K \) is a simple closed curve in \( \partial N \) that is homologous to \( K \) in \( N \) and null-homologous in the exterior of \( K \) . Note that \( \lambda \) and \( \mu \), the longitude and meridian, both have standard orientations coming from orientations of \( K \) and \( {S}^{3} \), they are well defined up to homotopy in \( \partial N \) and their homology classes form a base for \( {H}_{1}\left( {\partial N}\right) \) . The above ideas can easily be extended to the following result for links of several components. Theorem 1.7. Let \( L \) be an oriented link of \( n \) components in (oriented) \( {S}^{3} \) and let \( X \) be its exterior. Then \( {H}_{2}\left( X\right) = {\bigoplus }_{n - 1}\mathbb{Z} \) . Further, \( {H}_{1}\left( X\right) \) is canonically isomorphic to \( {\bigoplus }_{n}\mathbb{Z} \) generated by the homology classes of the meridians \( \left\{ {\mu }_{i}\right\} \) of the individual components of \( L \) . Proof. The proof of this is just an adaptation of that of the previous theorem. Here \( N \) is now a disjoint union of \( n \) solid tori. The map \( {H}_{3}\left( {S}^{3}\right) \rightarrow {H}_{2}\left( {X \cap N}\right) \) is the map \( \mathbb{Z} \rightarrow {\bigoplus }_{n}\mathbb{Z} \) that sends 1 to \( \left( {1,1,\ldots ,1}\right) \), implying that \( {H}_{2}\left( X\right) = {\bigoplus }_{n - 1}\mathbb{Z} \) . Now \( {H}_{1}\left( {N \cap X}\right) = {\bigoplus }_{2n}\mathbb{Z} \) and \( {H}_{1}\left( N\right) = {\bigoplus }_{n}\mathbb{Z} \), and the map \( {H}_{1}\left( {N \cap X}\right) \rightarrow \) \( {H}_{1}\left( N\right) \oplus {H}_{1}\left( X\right) \) is still an isomorphism, so \( {H}_{1}\left( X\right) = {\bigoplus }_{n}\mathbb{Z} \) . The argument about the generators is as before. If \( C \) is an oriented simple closed curve in the exterior of the oriented link \( L \) , the linking number of \( C \) and \( L \) is defined by \( \operatorname{lk}\left( {C, L}\right) = \mathop{\sum }\limits_{i}\operatorname{lk}\left( {C,{L}_{i}}\right) \) where the \( {L}_{i} \) are the components of \( L \) . By Theorem \( {1.7},\operatorname{lk}\left( {C, L}\right) \) is the image of \( \left\lbrack C\right\rbrack \in \) \( {H}_{1}\left( X\right) \equiv {\bigoplus }_{n}\mathbb{Z} \) under the projection onto \( \mathbb{Z} \) that maps each generator to 1 . ## Exercises 1. Show that the knot \( {4}_{1} \) is equivalent to its reverse and to its reflection. 2. A diagram of an oriented knot is shown on a screen by means of an overhead projector. What knot appears on the screen if the transparency is turned over? 3. 
From the theory of the Reidemeister moves, prove that two diagrams in \( {S}^{2} \) of the same oriented knot in \( {S}^{3} \) are equivalent, by Reidemeister moves of only Types II and III, if and only if the sum of the signs of the crossings is the same for the two diagrams. 4. Attempt a classification of links of two components up to six crossings, noting any pairs of links in your table that you have not yet proved to be distinct. 5. Show that any diagram of a knot \( K \) can be changed to a diagram of the unknot by changing some of the crossings from "over" to "under". How many changes are necessary? 6. Prove that the \( \left( {p, q}\right) \) torus knot, where \( p \) and \( q \) are coprime, is equivalent to the \( \left( {q, p}\right) \) torus knot. How does it relate to the \( \left( {p, - q}\right) \) and \( \left( {-p, - q}\right) \) torus knots? 7. Find descriptions of the knot \( {8}_{9} \) in the Dowker-Thistlethwaite notation, in the Conway notation as a 2-bridge knot \( C\left( {{a}_{1},{a}_{2},{a}_{3},{a}_{4}}\right) \) and also as a closed braid \( \widehat{b} \) . 8. Prove that any 2-bridge knot is an alternating knot. 9. A knot diagram is said to be three-colourable if each segment of the diagram (from one under-pass to the next) can be coloured red, blue or green so that all three colours are used and at each crossing either one colour or all three colours appear. Show that three-colourability is unchanged by Reidemeister moves. Deduce that the knot \( {3}_{1} \) is indeed distinct from the unknot and that \( {3}_{1} \) and \( {4}_{1} \) are distinct. Generalise this idea to \( n \) -colourability by labelling segments with integers so that at every crossing, the over-pass is labelled with the average, modulo \( n \), of the labels of the two segments on either side. 10. Can \( n \) -colourability distinguish the Kinoshita-Terasaka knot (Figure 3.3) from the unknot? 11. Let \( {X}_{1} \) and \( {X}_{2} \) be the exteriors of two non-trivial knots \( {K}_{1} \) and \( {K}_{2} \) . Determine how a homeomorphism \( h : \partial {X}_{1} \rightarrow \partial {X}_{2} \) can be chosen so that the 3-manifold \( {X}_{1}{ \cup }_{h}{X}_{2} \) has the same homology groups as \( {S}^{3} \) . 12. Let \( M \) be a homology 3-sphere, that is, a 3-manifold with the same homology groups as \( {S}^{3} \) .
Show that the linking number of a link of two disjoint oriented simple closed curves in \( M \) can be defined in a way that gives the standard linking number when \( M = {S}^{3} \) .

2 ## Seifert Surfaces and Knot Factorisation

It will now be shown that any link in \( {S}^{3} \) can be regarded as the boundary of some surface embedded in \( {S}^{3} \) . Such surfaces can be used to study the link in different ways. Here they are used to show that knots can be factorised into a sum of prime knots. Later they will feature in the theory and calculation of the Alexander polynomial.

Definition 2.1. A Seifert surface for an oriented link \( L \) in \( {S}^{3} \) is a connected compact oriented surface contained in \( {S}^{3} \) that has \( L \) as its oriented boundary.

Examples of such surfaces are shown in Figure 2.1 and have been mentioned in Chapter 1 for two-bridge knots. Of course, any embedding into \( {S}^{3} \) of a compact connected oriented surface with non-empty boundary provides an example of a link equipped with a Seifert surface. A surface is non-orientable if and only if it contains a Möbius band. Some surface can be constructed with a given link as its boundary in the following way: Colour black or white, in chessboard fashion, the regions of \( {S}^{2} \) that form the complement of a diagram of the link. Consider all the regions of one colour joined by "half-twisted" strips at the crossings. This is a surface with the link as boundary, and it may well be orientable. However, it may quite well be non-orientable for either one or both of the two colours. The usual diagram of the knot \( {4}_{1} \) has both such surfaces non-orientable. Thus, although this method may provide an excellent Seifert surface, a general method, such as that of Seifert which follows, is needed.

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_25_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_25_0.jpg) Figure 2.1

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_0.jpg) Figure 2.2

Theorem 2.2. Any oriented link in \( {S}^{3} \) has a Seifert surface.

Proof. Let \( D \) be an oriented diagram for the oriented link \( L \) and let \( \widehat{D} \) be \( D \) modified as shown in Figure 2.2. \( \widehat{D} \) is the same as \( D \) except in a small neighbourhood of each crossing where the crossing has been removed in the only way compatible with the orientation. This \( \widehat{D} \) is just a disjoint union of oriented simple closed curves in \( {S}^{2} \) . Thus \( \widehat{D} \) is the boundary of the union of some disjoint discs all on one side of (above) \( {S}^{2} \) . Join these discs together with half-twisted strips at the crossings. This forms an oriented surface with \( L \) as boundary; each disc gets an orientation from the orientation of \( \widehat{D} \), and the strips faithfully relay this orientation. If this surface is not connected, connect components together by removing small discs and inserting long, thin tubes.

In the above proof, \( \widehat{D} \) was a collection of disjoint simple closed curves constructed from \( D \) . These curves are called the Seifert circuits of \( D \) . The Seifert circuits of the knot \( {8}_{20} \) are shown in Figure 2.3. A Seifert surface for this knot is then constructed by adding three discs above the page and eight half-twisted strips near the crossings to join the discs together.

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_26_1.jpg) Figure 2.3
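The surface just described for \( {8}_{20} \) is built from 3 discs and 8 bands, so its Euler characteristic is \( 3 - 8 = - 5 \) ; as is noted shortly, a connected surface with one boundary component and Euler characteristic \( \chi \) has genus \( \frac{1}{2}\left( {1 - \chi }\right) \), giving genus 3 here. A tiny Python sketch of this bookkeeping follows (the trefoil figures quoted are my own illustrative example, not from the text).

```python
# A minimal sketch: the surface from Seifert's algorithm uses one disc per
# Seifert circuit and one half-twisted band per crossing, so its Euler
# characteristic is (number of circuits) - (number of crossings); for a knot
# the genus of the resulting surface is (1 - chi) / 2.

def seifert_surface_genus(n_crossings, n_circuits):
    chi = n_circuits - n_crossings
    return (1 - chi) // 2

print(seifert_surface_genus(8, 3))   # 3, for the 8_20 diagram described above
print(seifert_surface_genus(3, 2))   # 1, for the usual 3-crossing trefoil diagram
```

This arithmetic is the source of the bound \( g\left( K\right) \leq \frac{1}{2}\left( {n - s + 1}\right) \) quoted below.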
The proof of Theorem 2.2 gives a way of constructing a Seifert surface from a diagram of the link. The surface that results may however not be the easiest for any specific use. A surface coming from the chessboard colouring technique, or from some partial use of it, may well seem more agreeable. The diagram of Figure 2.4 shows how, at least intuitively, a knot can have two very different Seifert surfaces; the two thin circles can be joined by a tube after following along the narrow ("knotted") strip or after swallowing that part of the picture.

Definition 2.3. The genus \( g\left( K\right) \) of a knot \( K \) is defined by \[ g\left( K\right) = \min \left\{ {\operatorname{genus}\left( F\right) : F\text{ is a Seifert surface for }K}\right\} . \]

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_27_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_27_0.jpg) Figure 2.4

Here \( F \) has one boundary component, so as an abstract surface it is a disc with a number of "hollow handles" added. That number is its genus. More precisely, the genus of \( F \) is \( \frac{1}{2}\left( {1 - \chi \left( F\right) }\right) \), where \( \chi \left( F\right) \) is the Euler characteristic of \( F \) . The Euler characteristic in turn can be defined as the number of vertices minus the number of edges plus the number of triangles in any triangulation of \( F \) . It does not seem to be common to discuss the genus of a link, but there is no difficulty in extending the definition. Note that it follows at once that \( K \) is the unknot if and only if it has genus 0. Also, if \( K \) has a Seifert surface of genus 1 and \( K \) is known not to be the unknot, then \( g\left( K\right) = 1 \) . The proof of Theorem 2.2 constructs a Seifert surface \( F \) for \( K \) from a diagram \( D \) of \( K \) . If \( D \) has \( n \) crossings and \( s \) Seifert circuits, then \( \chi \left( F\right) = s - n \), so that \( g\left( K\right) \leq \frac{1}{2}\left( {n - s + 1}\right) \) .

It has already been noted that though it is easy to define numerical knot and link invariants by minimising some geometric phenomenon associated with it, often such invariants are very hard to calculate and difficult to use. The genus of a knot, however, has a utility that arises from the following result of [115], which states that knot genus is additive.

Theorem 2.4. For any two knots \( {K}_{1} \) and \( {K}_{2} \) , \[ g\left( {{K}_{1} + {K}_{2}}\right) = g\left( {K}_{1}\right) + g\left( {K}_{2}\right) . \]

Proof. Firstly, suppose that \( {K}_{1} \) and \( {K}_{2} \), together with minimal genus Seifert surfaces \( {F}_{1} \) and \( {F}_{2} \), are situated far apart in \( {S}^{3} \) . Each \( {F}_{i} \) is a connected surface with non-empty boundary, so elementary homology theory shows that \( {F}_{1} \cup {F}_{2} \) does not separate \( {S}^{3} \) . Thus one can choose an arc \( \alpha \) from a point in \( {K}_{1} \) to a point in \( {K}_{2} \) that meets \( {F}_{1} \cup {F}_{2} \) at no other point and that intersects a 2-sphere separating \( {K}_{1} \) from \( {K}_{2} \) exactly once. The union of \( {F}_{1} \cup {F}_{2} \) with a "thin" strip around \( \alpha \) (twisted to match orientations) gives a Seifert surface for \( {K}_{1} + {K}_{2} \) that has genus the sum of the genera of \( {F}_{1} \) and \( {F}_{2} \) . Thus \[ g\left( {{K}_{1} + {K}_{2}}\right) \leq g\left( {K}_{1}\right) + g\left( {K}_{2}\right) . \]
Now suppose that \( F \) is a minimal genus Seifert surface for \( {K}_{1} + {K}_{2} \) . Let \( \sum \) be a 2-sphere, intersecting \( {K}_{1} + {K}_{2} \) transversely at two points, of the sort that occurs in the definition of \( {K}_{1} + {K}_{2} \) . Thus \( \sum \) separates \( {K}_{1} + {K}_{2} \) into two arcs \( {\alpha }_{1} \) and \( {\alpha }_{2} \), and if \( \beta \) is any arc in \( \sum \) joining the two points of \( \sum \cap \left( {{K}_{1} + {K}_{2}}\right) \), then \( {\alpha }_{1} \cup \beta \) and \( {\alpha }_{2} \cup \beta \) are copies of \( {K}_{1} \) and \( {K}_{2} \) . Now \( F \) and \( \sum \) are surfaces in \( {S}^{3} \) . Here it is being assumed throughout that all such inclusions are piecewise linear (as usual, "smooth" is just as good). Thus each can be regarded as a sub-complex of some triangulation of \( {S}^{3} \), and \( \sum \) can be moved (by a general position argument, moving "one vertex at a time") to a position in which it is transverse to the whole of \( F \) . (The local situation is then modelled on the intersection of two planes, or half-planes, placed in general position in 3-dimensional Euclidean space.) Thus, without loss of generality, it may be assumed that \( F \cap \sum \) is a 1-dimensional manifold which must be a finite collection of simple closed curves and one arc \( \beta \) joining the points of \( \sum \cap \left( {{K}_{1} + {K}_{2}}\right) \) . Each of these simple closed curves separates \( \sum \) into two discs (using the 2-dimensional Schönflies theorem), only one of which contains \( \beta \) . Let \( C \) be a simple closed curve of \( F \cap \sum \) that is innermost on \( \sum - \beta \) . This means that \( C \) bounds in \( \sum \) a disc \( D \), the interior of which misses \( F \) . Now use \( D \) to do surgery on \( F \) in the following way: Create a new surface \( \widehat{F} \) from \( F \) by deleting from \( F \) a small annular neighbourhood of \( C \) and replacing it by two discs, each a "parallel" copy of \( D \), one on either side of \( D \) .
"parallel" copy of \( D \), one on either side of \( D \) . If \( C \) did not separate \( F \), this \( \widehat{F} \) would be a Seifert surface for \( {K}_{1} + {K}_{2} \) of genus lower than that of \( F \) (since the surgery has the effect of removing a hollow handle). As that is not possible, \( C \) separates \( F \) , and so \( \widehat{F} \) is disconnected. Consider the component of \( \widehat{F} \) that contains \( {K}_{1} + {K}_{2} \) . This is a surface of the same genus as \( F \) but which meets \( \sum \) in fewer simple closed curves ( \( C \), at least, has been eliminated). Repetition of this process yields a Seifert surface \( {F}^{\prime } \) for \( {K}_{1} + {K}_{2} \), of the same genus as \( F \), that intersects \( \sum \) only in \( \beta \) . Thus \( \sum \) separates \( {F}^{\prime } \) into two pieces which are Seifert surfaces for \( {K}_{1} \) and \( {K}_{2} \) . Hence \[ g\left( {K}_{1}\right) + g\left( {K}_{2}\right) \leq g\left( {{K}_{1} + {K}_{2}}\right) \] which, together with the preceding inequality, proves the result. Corollary 2.5. No (non-trivial) knot has an additive inverse. That is, if \( {K}_{1} + {K}_{2} \) is the unknot, then each of \( {K}_{1} \) and \( {K}_{2} \) is unknotted. Corollary 2.6. If \( K \) is a non-trivial knot and \( \mathop{\sum }\limits_{1}^{n}K \) denotes the sum of \( n \) copies of \( K \), then if \( n \neq m \) it follows that \( \mathop{\sum }\limits_{1}^{n}K \neq \mathop{\sum }\limits_{1}^{m}K \) . There are, then, certainly infinitely many distinct knots. Corollary 2.7. A knot of genus 1 is prime. Corollary 2.8. A knot can be expressed as a finite sum of prime knots. Proof. If a knot is not prime, it can be expressed as the sum of two knots of smaller genus. Now use induction on the genus. It will be worthwhile recalling now the following basic Schönflies theorem, already mentioned in the introduction. Essentially, it states that \( {S}^{2} \) cannot knot in \( {S}^{3} \) . Theorem 2.9. Schönflies Theorem. Let \( e : {S}^{2} \rightarrow {S}^{3} \) be any piecewise linear embedding. Then \( {S}^{3} - e{S}^{2} \) has two components, the closure of each of which is a piecewise linear ball. No proof will be given here for this fundamental, non-trivial result (for a proof see [81]). The piecewise linear condition has to be inserted, as there exist the famous "wild horned spheres" that are are examples of topological embeddings \( e : {S}^{2} \rightarrow {S}^{3} \) for which the complementary components are not even simply connected. The next result considers the different ways in which a knot might be expressed as the sum of other knots. It is the basic result needed to show that the expression of a knot as a sum of prime knots is essentially unique. The technique of its proof again consists of minimising the intersection of surfaces in \( {S}^{3} \) that meet transversely in simple closed curves, but the procedure here is more sophisticated than in the proof of Theorem 2.4. In the proof, use will be made of the idea of a ball-arc pair. Such a pair is just a 3-ball containing an arc which meets the ball's boundary at just its two end points. The pair is unknotted if it is pairwise homeomorphic to \( \left( {D \times I, \star \times I}\right) \) , where \( \star \) is a point in the interior of the disc \( D \) and \( I \) is a closed interval. Theorem 2.10. Suppose that a knot \( K \) can be expressed as \( K = P + Q \), where \( P \) is a prime knot, and that \( K \) can also be expressed as \( K = {K}_{1} + {K}_{2} \) . 
Then either (a) \( {K}_{1} = P + {K}_{1}^{\prime } \) for some \( {K}_{1}^{\prime } \), and \( Q = {K}_{1}^{\prime } + {K}_{2} \), or (b) \( {K}_{2} = P + {K}_{2}^{\prime } \) for some \( {K}_{2}^{\prime } \), and \( Q = {K}_{1} + {K}_{2}^{\prime } \) . Proof. Let \( \sum \) be a 2-sphere in \( {S}^{3} \), meeting \( K \) transversely at two points, that demonstrates \( K \) as the sum \( {K}_{1} + {K}_{2} \) . The factorisation \( K = P + Q \) implies that there is a 3-ball \( B \) contained in \( {S}^{3} \) such that \( B \cap K \) is an arc \( \alpha \) (with \( K \) intersecting \( \partial B \) transversely at the two points \( \partial \alpha \) ) so that the ball-arc pair \( \left( {B,\alpha }\right) \) becomes, on gluing a trivial ball-arc pair to its boundary, the pair \( \left( {{S}^{3}, P}\right) \) . As in the proof of Theorem 2.4, it may be assumed, after small movements of \( \sum \), that \( \sum \) intersects \( \partial B \) transversely in a union of simple closed curves disjoint from \( K \) . The immediate aim will be to reduce \( \sum \cap \partial B \) . Note that if this intersection is empty, then \( B \) is contained in one of the two components of \( {S}^{3} - \sum \), and the result follows at once. As \( \sum \cap K \) is two points, any oriented simple closed curve in \( \sum - K \) has linking number zero or \( \pm 1 \) with \( K \) . Amongst the components of \( \sum \cap \partial B \) that have zero linking number with \( K \) select a component that is innermost on \( \sum \) (with \( \sum \cap K \) considered "outside"). This component bounds a disc \( D \subset \sum \), with \( D \cap \partial B = \partial D \) . Now \( \partial D \) bounds a disc \( {D}^{\prime } \subset \partial B \) with \( {D}^{\prime } \cap K = \varnothing \) (by linking numbers), though \( {D}^{\prime } \cap \sum \) may have many components (see Figure 2.5). By the Schönflies theorem, the sphere \( D \cup {D}^{\prime } \) bounds a ball. "Moving" \( {D}^{\prime } \) across this ball to just the other side of \( D \) changes \( B \) to a new position, with \( \sum \cap \partial B \) now having fewer components than before. As the new position of \( B \) differs from the old by the addition or subtraction of a ball disjoint from \( K \), the new \( \left( {B,\alpha }\right) \) pair corresponds to \( P \) exactly as before. After repetition of this procedure, it may be assumed that each component of \( \sum \cap \partial B \) has linking number \( \pm 1 \) with \( K \) . (Thus, on each of the spheres \( \sum \) and \( \partial B \) , the components of \( \sum \cap \partial B \) look like lines of latitude encircling, as the two poles, the two intersection points with \( \mathrm{K} \) .) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_30_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_30_0.jpg) Figure 2.5 If now \( \sum \cap B \) has a component that is a disc \( D \), then \( D \cap K \) is one point, and as \( P \) is prime, one side of \( D \) in \( B \) is a trivial ball-arc pair (see Figure 2.5). Removing from \( B \) (a regular neighbourhood of) this trivial pair produces a new \( B \) with the same properties as before but having fewer components of \( \sum \cap B \) . Thus it may be assumed that every component of \( \sum \cap B \) is an annulus. Let \( A \) be an annulus component of \( \sum \cap B \) . 
Then \( \partial A \) bounds an annulus \( {A}^{\prime } \) in \( \partial B \) and \( A \) may be chosen (furthest from \( \alpha \) ) so that \( {A}^{\prime } \cap \sum = \partial {A}^{\prime } \) . Let \( M \) be the part of \( B \) bounded by the torus \( A \cup {A}^{\prime } \) and otherwise disjoint from \( \sum \cup \partial B \) . Let \( \Delta \) be the closure of one of the components of \( \partial B - {A}^{\prime } \) . Then \( \Delta \) is a disc, with \( \partial \Delta \) one of the components of \( {A}^{\prime } \), and \( \Delta \cap K \) equal to a single point (though \( \Delta \cap \sum \) may have many components). This is illustrated schematically in Figure 2.6. Let \( N\left( \Delta \right) \) be a small regular neighbourhood of \( \Delta \) in the closure of \( B - M \) . This should be thought of as a thickening of \( \Delta \) into \( B - M \) . The pair \( \left( {N\left( \Delta \right), N\left( \Delta \right) \cap \alpha }\right) \) is a trivial ball-arc pair. However, \( M \cup N\left( \Delta \right) \) is a ball, because its boundary is a sphere, and the fact that \( P \) is prime implies that the ball-arc pair \( \left( {M \cup N\left( \Delta \right), N\left( \Delta \right) \cap \alpha }\right) \) is either trivial or a copy of the pair \( \left( {B,\alpha }\right) \) . If it is trivial (that is, when \( M \) is a solid torus), \( B \) may be changed, as before, by removing (a neighbourhood of) this pair to give a new \( B \) with fewer components of \( \sum \cap B \) . Otherwise, \( M \) is a copy of \( B \) less a neighbourhood of \( \alpha \), and that is just the exterior of the knot \( P \) ; \( \partial \Delta \) corresponds to a meridian of \( P \) .

![5aaec141-7895-41cf-bdc1-c8a33b18f96f_30_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_30_1.jpg) Figure 2.6

The closure of one of the complementary domains of \( \sum \) in \( {S}^{3} \), say that corresponding to \( {K}_{1} \), contains \( M \), and \( M \cap \sum = A \) . The meridian \( \partial \Delta \) bounds a disc in \( \sum - A \) that meets \( K \) at one point.
This means that \( P \) is a summand of \( {K}_{1} \) as required, so \( {K}_{1} = P + {K}_{1}^{\prime } \) for some \( {K}_{1}^{\prime } \) . In this last circumstance, remove the interior of \( M \) and replace it with a solid torus \( {S}^{1} \times {D}^{2} \) . Glue the boundary of the solid torus to \( \partial M \), and ensure that the boundary of any meridional disc of \( {S}^{1} \times {D}^{2} \) is identified with a curve on \( \partial M \) that cuts \( \partial \Delta \) at one point. Then \( \left( {{S}^{1} \times {D}^{2}}\right) \cup N\left( \Delta \right) \) is a ball, so \( B \) has been changed to become a new ball \( {B}^{\prime } \), and \( \left( {{B}^{\prime },\alpha }\right) \) is a trivial ball-arc pair. The closure of \( {S}^{3} - B \) is unchanged; it is still a ball, so \( {S}^{3} \) is changed to a new copy of \( {S}^{3} \) . In that new copy, the knot has become \( Q \) and, viewed as being decomposed by \( \sum \), it has become \( {K}_{1}^{\prime } + {K}_{2} \) . Thus \( Q = {K}_{1}^{\prime } + {K}_{2} \) .

Corollary 2.11. Suppose that \( P \) is a prime knot and that \( P + Q = {K}_{1} + {K}_{2} \) . Suppose also that \( P = {K}_{1} \) . Then \( Q = {K}_{2} \) .

Proof. By Theorem 2.10, there are two possibilities. The first is that for some \( {K}_{1}^{\prime } \), \( P + {K}_{1}^{\prime } = {K}_{1} = P \) and \( Q = {K}_{1}^{\prime } + {K}_{2} \) . But then the genus of \( {K}_{1}^{\prime } \) must be zero, so \( {K}_{1}^{\prime } \) is the unknot and so \( Q = {K}_{2} \) . The second possibility is that for some \( {K}_{2}^{\prime } \), \( P + {K}_{2}^{\prime } = {K}_{2} \) and \( Q = {K}_{2}^{\prime } + {K}_{1} \) . But then \( Q = {K}_{2}^{\prime } + P = {K}_{2} \) .

Theorem 2.12. Up to ordering of summands, there is a unique expression for a knot \( K \) as a finite sum of prime knots.

Proof. Suppose \( K = {P}_{1} + {P}_{2} + \cdots + {P}_{m} = {Q}_{1} + {Q}_{2} + \cdots + {Q}_{n} \), where the \( {P}_{i} \) and \( {Q}_{i} \) are all prime. By the theorem, \( {P}_{1} \) is a summand of \( {Q}_{1} \) or of \( {Q}_{2} + {Q}_{3} + \cdots + {Q}_{n} \), and if the latter, then it is a summand of one of the \( {Q}_{j} \) for \( j \geq 2 \), by induction on \( n \) . Of course if \( {P}_{1} \) is a summand of \( {Q}_{j} \), then \( {P}_{1} = {Q}_{j} \) . By the corollary, \( {P}_{1} \) and \( {Q}_{j} \) may then be cancelled from both sides of the equation, and the result follows by induction on \( m \) . Note that this induction starts when \( m = 0 \) . Then \( n = 0 \) because the unknot cannot be expressed as a sum of non-trivial knots (again by consideration of genus).

The theorems of this chapter are intended to make it reasonable to restrict attention to prime knots in most circumstances. Certainly that is the tradition when considering knot tabulation.

## Exercises

1. Prove that a non-trivial torus knot is prime by considering the way in which a 2-sphere, meeting the knot at two points, would cut the torus that contains the knot. 2. For a 2-bridge knot \( K \) there is a 2-sphere separating \( {S}^{3} \) into two balls, each of which intersects \( K \) in two standard arcs. By considering how this sphere might intersect a 2-sphere meeting the knot at two points, prove that a non-trivial 2-bridge knot is prime. 3.
The bridge number of a knot \( K \) in \( {S}^{3} \) is the least integer \( n \) for which there is an \( {S}^{2} \) separating \( {S}^{3} \) into two balls, each meeting \( K \) in \( n \) standard (unknotted and unlinked) spanning arcs. Show that the sum of two 2-bridge knots is a 3-bridge knot. 4. Suppose that \( F \) is a Seifert surface for an oriented knot \( K \), and let \( C \) be an oriented simple closed curve contained in \( F - K \) . Prove that \( \operatorname{lk}\left( {C, K}\right) = 0 \) . 5. Prove that any knot may be changed to the unknot by a sequence of moves, each of which changes four arcs contained in a ball from one of the following configurations to the other. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_32_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_32_0.jpg) [Think of the knot as the boundary of a non-orientable surface.] 6. Let \( F \) be the Seifert surface for a knot constructed by means of the Seifert method (Theorem 2.2). Let \( N \) be a regular neighbourhood of \( F \) . Show that the closure of \( {S}^{3} - N \) is a handlebody (that is, it is homeomorphic to a regular neighbourhood of a connected graph in \( {S}^{3} \) ) homeomorphic to \( N \) . 7. Show, as outlined below, that a knot \( K \) with exterior \( X \) has a Seifert surface. Construct \( f : X \rightarrow {S}^{1} \) as follows: First define \( f \mid \partial X \) so that \( f \) maps a longitude to a single point and, when restricted to a meridian, \( f \) is a homeomorphism. Such an \( f \) can be extended over the 1-skeleton \( {T}^{\left( 1\right) } \) of some triangulation \( T \) of \( X \) so that if \( C \) is an oriented simple closed curve in \( {T}^{\left( 1\right) } \), then \( \operatorname{lk}\left( {C, K}\right) = \left\lbrack {fC}\right\rbrack \in {H}_{1}\left( {S}^{1}\right) \) . Finally extend \( f \) over the 2-skeleton, then over the 3-skeleton (using the fact that any map \( {S}^{2} \rightarrow {S}^{1} \) extends over the 3-ball). Assuming \( f \) is simplicial with respect to some triangulations of \( X \) and \( {S}^{1} \) (subdivisions of \( T \) and of a standard triangulation of \( {S}^{1} \) ), consider \( {f}^{-1}\left( x\right) \) where \( x \) is a point that is not a vertex in \( {S}^{1} \) . 8. Suppose that a knot \( A \) were to have an additive inverse \( B \) so that \( A + B \) is the unknot. Let \( K \) be the simple closed curve in \( {S}^{3} \) described as an infinite sum \( A + B + A + \) \( B + \cdots \) where each summand is in a ball, the balls becoming successively smaller and converging to a single point. This \( K \) will not be piecewise linear. By considering the infinite sum as both \( \left( {A + B}\right) + \left( {A + B}\right) + \cdots \) and \( A + \left( {B + A}\right) + \left( {B + A}\right) + \cdots \), show that there is a homeomorphism (probably not piecewise linear) of \( {S}^{3} \) to itself sending \( A \) to the unknot. 9. Suppose that addition of links is defined by just removing an unknotted ball-arc pair from each and identifying the resultant boundaries. Show that this is not a well-defined operation and that \( {L}_{1} + {L}_{2} = {L}_{1} + {L}_{3} \) does not necessarily imply that \( {L}_{2} = {L}_{3} \) . 3 ## The Jones Polynomial The theory of the polynomial invented by V. F. R. Jones gives a way of associating to every knot and link a Laurent polynomial with integer coefficients (that is, a finite polynomial expression that can include negative as well as positive powers of the indeterminate). 
The association of polynomial to link will be made by using a link diagram. The whole theory rests upon the fact that if the diagram is changed by a Reidemeister move, the polynomial stays the same. The polynomial for the link is then defined independently of the choice of diagram. Thus, if two links can be shown, by means of specific calculation from diagrams, to have distinct polynomials, then they are indeed distinct links. This is a relatively easy way of distinguishing knots with diagrams of few crossings. Table 3.1 displays the Jones polynomials for the knots of at most eight crossings shown in Chapter 1. Those polynomials are, by easy inspection, all distinct, so the corresponding knots are all distinct. As will be observed, the Jones polynomial is good, but not infallible, at distinguishing knots. However, that is not its most exciting achievement. Other invariants have, particularly with the aid of computers, always managed to distinguish any interesting pair of knots. Some of those invariants will be encountered in later chapters. The Jones polynomial, however, has been used to prove pleasing new results concerning the possible diagrams that certain knots can possess (see Chapter 5). In addition, the Jones polynomial has been much generalised; it has been developed into a theory, allied in some sense to quantum theory, giving invariants for 3-dimensional manifolds (see Chapter 13) and has been the genesis of a remarkable resurgence of interest in knot theory in all its forms. It is amazing that so simple, powerful and provocative a theory remained unknown until 1984, [53]. Because of the ease with which it can be developed, understood and used, the Jones polynomial has a place very near to the beginning of any exposition of knot theory. The simplest way to define it is by using a slightly different polynomial: the bracket polynomial discovered by L. H. Kauffman [59]. Definition 3.1. The Kauffman bracket is a function from unoriented link diagrams in the oriented plane (or, better, in \( {S}^{2} \) ) to Laurent polynomials with integer coefficients in an indeterminate \( A \) . It maps a diagram \( D \) to \( \langle D\rangle \in \mathbb{Z}\left\lbrack {{A}^{-1}, A}\right\rbrack \) and is characterised by (i) \( \langle ○ \rangle = 1 \) , (ii) \( \langle D \sqcup ○ \rangle = \left( {-{A}^{-2} - {A}^{2}}\right) \langle D\rangle \) , \( \langle X\rangle = A\langle X\rangle + {A}^{-1}\langle X\rangle . \) In this definition, \( ○ \) is the diagram of the unknot with no crossing, and \( D \sqcup ○ \) is a diagram consisting of the diagram \( D \) together with an extra closed curve \( ○ \) that contains no crossing at all, not with itself nor with \( D \) . In (iii) the formula refers to three link diagrams that are exactly the same except near a point where they differ in the way indicated. The bracket polynomial of a diagram with \( n \) crossings can be calculated by expressing it as a linear sum of \( {2}^{n} \) diagrams with no crossing, using (iii), and noting that any diagram with \( c \) components and no crossing has, by (i) and
(ii), \( {\left( -{A}^{-2} - {A}^{2}\right) }^{c - 1} \) for its polynomial. In doing this, (iii) must be used on the crossings in some order, but it is easy to see (by transposing adjacent crossings in the order) that another choice of order does not affect the outcome. This means that the bracket polynomial is defined for link diagrams in the plane, and that it satisfies (i), (ii) and (iii). (If ever the empty diagram is required, it must be given the "polynomial" \( {\left( -{A}^{-2} - {A}^{2}\right) }^{-1} \) .) If a diagram is changed in some way, then perhaps the polynomial changes, though the method of calculation makes it clear that changing a diagram by means of an orientation-preserving homeomorphism of the whole plane has no effect on the polynomial. The effect on \( \langle D\rangle \) of a Reidemeister move on \( D \) will now be investigated. Lemma 3.2. If a diagram is changed by a Type I Reidemeister move, its bracket polynomial changes in the following way: \[ \langle \text{kink} \rangle = - {A}^{3}\langle \frown \rangle ,\; \langle \text{kink}^{\prime} \rangle = - {A}^{-3}\langle \frown \rangle , \] where kink and kink\( {}^{\prime} \) denote the arc \( \frown \) with a small kink of one or the other sense inserted. Proof. Applying (iii) at the crossing of the kink, one smoothing leaves the arc together with a disjoint simple closed curve and the other leaves just the arc, so that \[ \langle \text{kink} \rangle = A\langle \frown \sqcup ○ \rangle + {A}^{-1}\langle \frown \rangle = \left( {A\left( {-{A}^{-2} - {A}^{2}}\right) + {A}^{-1}}\right) \langle \frown \rangle . \] That produces the first equation; the second follows in the same way. Note that if in (iii) the crossing on the left-hand side were changed, then the right-hand side would be the same except for the interchange of \( A \) and \( {A}^{-1} \) . This follows from an application of (iii) rotated through \( \pi /2 \) . This means that if \( \bar{D} \) is the reflection of \( D \) (that is, \( D \) with the overs and unders of all of its crossings changed), then \( \langle \bar{D}\rangle = \overline{\langle D\rangle } \), where the over-bar on the right denotes the effect of the involution on \( \mathbb{Z}\left\lbrack {{A}^{-1}, A}\right\rbrack \) induced by exchanging \( A \) and \( {A}^{-1} \) . The two equations of Lemma 3.2 are related by this observation. This lemma is used several times in the following examples, which calculate the bracket polynomial of a diagram of a simple two-component link and then of a diagram of a trefoil knot.
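Before those hand calculations, it is worth noting that the \( 2^{n} \)-state expansion just described is mechanical enough to be carried out by machine for diagrams with few crossings. The sketch below (not part of the original text) assumes that a diagram is presented as a list of crossings, each recorded as the four arc labels met counterclockwise around the crossing starting from the incoming under-arc, and it assumes the convention that at a crossing \( (a, b, c, d) \) one smoothing joins \( a \) to \( b \) and \( c \) to \( d \) while the other joins \( b \) to \( c \) and \( d \) to \( a \); with the opposite choice of which smoothing carries the weight \( A \), the program computes the bracket of the reflected diagram, interchanging \( A \) and \( A^{-1} \). For the hypothetical three-crossing trefoil code used below it returns \( A^{7} - A^{3} - A^{-5} \), the image under \( A \leftrightarrow A^{-1} \) of the trefoil bracket calculated by hand in the example that follows.

```python
from itertools import product

def kauffman_bracket(crossings):
    """Bracket polynomial, as a dict {power of A: coefficient}, computed by
    summing over all 2^n smoothing states.  Each crossing is a tuple
    (a, b, c, d) of arc labels; one smoothing joins a-b and c-d (weight A),
    the other joins b-c and d-a (weight A^{-1})."""
    arcs = sorted({arc for crossing in crossings for arc in crossing})
    result = {}
    for state in product((1, -1), repeat=len(crossings)):
        # Union-find over arc labels: arcs joined by the chosen smoothings
        # end up in one class, so classes correspond to the loops of sD.
        parent = {arc: arc for arc in arcs}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for s, (a, b, c, d) in zip(state, crossings):
            pairs = ((a, b), (c, d)) if s == 1 else ((b, c), (d, a))
            for x, y in pairs:
                parent[find(x)] = find(y)

        loops = len({find(arc) for arc in arcs})

        # This state contributes A^(sum s(i)) * (-A^2 - A^-2)^(loops - 1).
        term = {sum(state): 1}
        for _ in range(loops - 1):
            nxt = {}
            for exponent, coefficient in term.items():
                for shift in (2, -2):
                    nxt[exponent + shift] = nxt.get(exponent + shift, 0) - coefficient
            term = nxt
        for exponent, coefficient in term.items():
            result[exponent] = result.get(exponent, 0) + coefficient

    return {e: c for e, c in sorted(result.items()) if c != 0}

# A hypothetical arc labelling of a standard three-crossing trefoil diagram.
trefoil = [(1, 4, 2, 5), (3, 6, 4, 1), (5, 2, 6, 3)]
print(kauffman_bracket(trefoil))  # {-5: -1, 3: -1, 7: 1}, i.e. A^7 - A^3 - A^(-5)
```

Such a direct expansion is practical only for small diagrams, in keeping with the remark later in this chapter that a calculation made straight from the definition grows exponentially with the number of crossings.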
\[ \langle \text{ 小 }\rangle = A\langle \text{ 小 }\circlearrowleft \rangle + {A}^{-1}\langle \text{ 小 }\circlearrowleft \rangle \] \[ = \left( {-{A}^{4} - {A}^{-4}}\right) \text{.} \] \[ \langle \Delta \rangle = A\langle \Delta \rangle + {A}^{-1}\langle \Delta \rangle \] \[ = A\left( {-{A}^{4} - {A}^{-4}}\right) + {A}^{-7} \] \[ = \left( {{A}^{-7} - {A}^{-3} - {A}^{5}}\right) \text{.} \] Lemma 3.3. If a diagram \( D \) is changed by a Type II or Type III Reidemeister move, then \( \langle D\rangle \) does not change. That is, (i) \( \langle \) , \( > < \rangle = \langle > < \rangle \) , (ii) \( \langle z < < \rangle = \langle z < < \rangle \) . Hence \( \langle D\rangle \) is invariant under regular isotopy of \( D \) . Proof. (i) \[ \langle > < > \rangle = A\langle > > < \rangle + {A}^{-1}\langle > < \rangle \] \[ = - {A}^{-2}\langle \rangle \langle \rangle + \langle > \langle \rangle + {A}^{-2}\langle \rangle \langle \rangle . \] (ii) \[ \langle x < y\rangle = A\langle x < y\rangle + {A}^{-1}\langle x > y\rangle \] \[ = A\langle > < \rangle + {A}^{-1}\langle > \subset \rangle \] \[ = \langle > \leq < \rangle \text{. } \] Here the second line follows from the first by using (i) twice. Definition 3.4. The writhe \( w\left( D\right) \) of a diagram \( D \) of an oriented link is the sum of the signs of the crossings of \( D \), where each crossing has sign +1 or -1 as defined (by convention) in Figure 1.11. Note that this definition of \( w\left( D\right) \) uses the orientation of the plane and that of the link. Note, too, that \( w\left( D\right) \) does not change if \( D \) is changed under a Type II or Type III Reidemeister move. However, \( w\left( D\right) \) does change by +1 or -1 if \( D \) is changed by a Type I Reidemeister move. It is thought that nineteenth-century knot tabulators believed that the writhe of a diagram was a knot invariant, at least when no reduction in the number of crossings by a Type I move was possible in a diagram. That lead to the famous error of the inclusion, in the early knot tables, of both a knot and its reflection, listed as \( {10}_{161} \) and \( {10}_{162} \) (an error detected by \( \mathrm{K} \) . Perko in the 1970's). See Figure 3.1. The writhes of the diagrams are -8 and 10, respectively; yet, modulo reflection, these diagrams represent the same knot. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_35_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_35_0.jpg) Figure 3.1 The writhe of an oriented link diagram and the bracket polynomial of the diagram with orientation neglected are, then, both invariant under Reidemeister moves of Types II and III, and both behave in a predictable way under Type I moves. This leads to the following result, which is essentially a statement of the existence of the Jones invariant. Theorem 3.5. Let \( D \) be a diagram of an oriented link \( L \) . Then the expression \[ {\left( -A\right) }^{-{3w}\left( D\right) }\langle D\rangle \] is an invariant of the oriented link \( L \) . Proof. It follows from Lemma 3.3 that the given expression is unchanged by Reidemeister moves of Types II and III; Lemma 3.2 and the above remarks on \( w\left( D\right) \) show it is unchanged by a Type I move. As any two diagrams of two equivalent links are related by a sequence of such moves, the result follows at once. Definition 3.6. 
The Jones polynomial \( V\left( L\right) \) of an oriented link \( L \) is the Laurent polynomial in \( {t}^{1/2} \), with integer coefficients, defined by \[ V\left( L\right) = {\left( {\left( -A\right) }^{-{3w}\left( D\right) }\langle D\rangle \right) }_{{t}^{1/2} = {A}^{-2}} \in \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack , \] where \( D \) is any oriented diagram for \( L \) . Here \( {t}^{1/2} \) is just an indeterminate the square of which is \( t \) . In fact, links with an odd number of components, including knots, have polynomials consisting of only integer powers of \( t \) . It is easy to show, by induction on the number of crossings in a diagram, that the given expression does indeed belong to \( \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \) . Note that by Theorem 3.5, the Jones polynomial invariant is well defined and that \( V \) (unknot) \( = 1 \) . At the time of writing, it is unknown whether there is a nontrivial knot \( K \) with \( V\left( K\right) = 1 \) ; finding such a \( K \), or proving none exists, is thought to be an important problem. The following table gives the Jones polynomial of knots with diagrams of at most eight crossings. It does not take very long to calculate such a table directly from the definition. It is clear that if the orientation of every component of a link is changed, then the sign of each crossing does not change. Thus the Jones polynomial of a knot does not depend upon the orientation chosen for the knot. It is easy to check that if the oriented link \( {L}^{ * } \) is obtained from the oriented link \( L \) by reversing the orientation of one component \( K \), then \( V\left( {L}^{ * }\right) = {t}^{-3\operatorname{lk}\left( {K, L - K}\right) }V\left( L\right) \) . Thus the Jones polynomial depends on orientations in a very elementary way. Displayed in Table 3.1 are the coefficients of the Jones polynomials of the knots shown in Chapter 1. A bold entry in the table is a coefficient of \( {t}^{0} \) . For example, \[ V\left( {6}_{1}\right) = {t}^{-4} - {t}^{-3} + {t}^{-2} - 2{t}^{-1} + 2 - t + {t}^{2}. \] TABLE 3.1. Jones Polynomial Table [table of coefficients not reproduced] The bracket polynomial of a diagram can be regarded as an invariant of framed unoriented links. For the moment, regard a framed link as a link \( L \) with an integer
assigned to each component. Let \( D \) be a diagram for \( L \) with the property that for each component \( K \) of \( L \), the part of \( D \) corresponding to \( K \) has as its writhe the integer assigned to \( K \) . Then \( \langle D\rangle \) is an invariant of the framed link. Note that any diagram for \( L \) can be adjusted by moves of Type I (or its reflection) to achieve any given framing. The Jones polynomial is characterised by the following proposition, which follows easily from the above definition (though historically it preceded that definition). Proposition 3.7. The Jones polynomial invariant is a function \[ V : \left\{ {\text{ Oriented links in }{S}^{3}}\right\} \rightarrow \mathbb{Z}\left\lbrack {{t}^{-1/2},{t}^{1/2}}\right\rbrack \] such that (i) \( V \) (unknot) \( = 1 \) , (ii) whenever three oriented links \( {L}_{ + },{L}_{ - } \) and \( {L}_{0} \) are the same, except in the neighbourhood of a point where they are as shown in Figure 3.2, then \[ {t}^{-1}V\left( {L}_{ + }\right) - {tV}\left( {L}_{ - }\right) + \left( {{t}^{-1/2} - {t}^{1/2}}\right) V\left( {L}_{0}\right) = 0. \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_38_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_38_0.jpg) Figure 3.2 Proof. Applying (iii) to the crossing at which \( {L}_{ + } \) and \( {L}_{ - } \) differ (orientations being neglected when brackets are taken) gives \[ \langle {L}_{ + }\rangle = A\langle {L}_{0}\rangle + {A}^{-1}\langle {L}_{\infty }\rangle \] \[ \langle {L}_{ - }\rangle = {A}^{-1}\langle {L}_{0}\rangle + A\langle {L}_{\infty }\rangle , \] where \( {L}_{0} \) is the smoothing shown in Figure 3.2 and \( {L}_{\infty } \) is the other smoothing of that crossing. Multiplying the first equation by \( A \), the second by \( {A}^{-1} \), and subtracting gives \[ A\langle {L}_{ + }\rangle - {A}^{-1}\langle {L}_{ - }\rangle = \left( {{A}^{2} - {A}^{-2}}\right) \langle {L}_{0}\rangle . \] Thus, for the oriented links with diagrams as shown, using the fact that in those diagrams \( w\left( {L}_{ + }\right) - 1 = w\left( {L}_{0}\right) = w\left( {L}_{ - }\right) + 1 \), it follows that \[ - {A}^{4}V\left( {L}_{ + }\right) + {A}^{-4}V\left( {L}_{ - }\right) = \left( {{A}^{2} - {A}^{-2}}\right) V\left( {L}_{0}\right) . \] The substitution \( {t}^{1/2} = {A}^{-2} \) gives the required answer: explicitly, \( {A}^{4} = {t}^{-1} \), \( {A}^{-4} = t \) and \( {A}^{2} - {A}^{-2} = {t}^{-1/2} - {t}^{1/2} \), so the displayed equation becomes \( -{t}^{-1}V\left( {L}_{ + }\right) + {tV}\left( {L}_{ - }\right) = \left( {{t}^{-1/2} - {t}^{1/2}}\right) V\left( {L}_{0}\right) \), which is (ii) after multiplying through by \( -1 \) .
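As a small check of Definition 3.6 and of the substitution used in this proof, the sketch below (again not part of the original text, and relying on the sympy library) takes the bracket polynomial \( {A}^{-7} - {A}^{-3} - {A}^{5} \) and the writhe 3 of the trefoil diagram considered earlier, forms \( {\left( -A\right) }^{-{3w}\left( D\right) }\langle D\rangle \), and substitutes \( A = {t}^{-1/4} \), so that \( {t}^{1/2} = {A}^{-2} \).

```python
import sympy as sp

A, t = sp.symbols('A t', positive=True)

# Bracket polynomial and writhe of the trefoil diagram from the earlier example.
bracket = A**-7 - A**-3 - A**5
w = 3

# (-A)^{-3w} <D>, with the sign written separately since (-1)^{-3w} = (-1)^{3w}.
normalised = sp.expand(sp.Integer(-1)**(3 * w) * A**(-3 * w) * bracket)

# Substitute t^{1/2} = A^{-2}, i.e. A = t^{-1/4}.
jones = sp.expand(normalised.subs(A, t**sp.Rational(-1, 4)))
print(jones)  # -t**4 + t**3 + t
```

The value \( -{t}^{4} + {t}^{3} + t \) agrees with the Jones polynomial quoted below for the right-handed trefoil; replacing \( A \) by \( {A}^{-1} \) and the writhe by \( -3 \) gives \( -{t}^{-4} + {t}^{-3} + {t}^{-1} \), the polynomial of its reflection.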
Working from Proposition 3.7, a straightforward exercise shows that if \( {L}^{\prime } \) is \( L \) together with an additional trivial (unknotted, unlinking) component, then its Jones polynomial is given by \( V\left( {L}^{\prime }\right) = \left( {-{t}^{-1/2} - {t}^{1/2}}\right) V\left( L\right) \) . Proposition 3.7 characterises the invariant in that using it allows the Jones polynomial of any oriented link to be calculated. This follows from the fact that any link can be changed to an unlink of \( c \) unknots (for which the Jones polynomial is \( {\left( -{t}^{-1/2} - {t}^{1/2}\right) }^{c - 1} \) ) by changing crossings in some diagram; formula (ii) of Proposition 3.7 relates the polynomials before and after such a change with that of a link diagram with fewer crossings (which has a known polynomial by induction). The Jones polynomial of the sum of two knots is just the product of their Jones polynomials, that is, \[ V\left( {{K}_{1} + {K}_{2}}\right) = V\left( {K}_{1}\right) V\left( {K}_{2}\right) . \] This follows at once by considering a calculation of the polynomial of \( {K}_{1} + {K}_{2} \) and operating firstly on the crossings of just one summand. The same formula is true for links, but the sum of two links is not well defined; the result depends on which two components are fused together in the summing operation. That fact can easily be used, in a straightforward exercise, to produce two distinct links with the same Jones polynomial. If an oriented link has a diagram \( D \), its reflection has \( \bar{D} \) as a diagram; of course, \( w\left( D\right) = - w\left( \bar{D}\right) \) . As \( \langle \bar{D}\rangle = \overline{\langle D\rangle } \), this means that if \( \bar{L} \) is the reflection of the oriented link \( L \), then \( V\left( \bar{L}\right) \) is obtained from \( V\left( L\right) \) by interchanging \( {t}^{-1/2} \) and \( {t}^{1/2} \) . The bracket polynomial of a diagram, of writhe equal to 3, for the right-handed trefoil knot \( {3}_{1} \) has already been calculated, and that at once determines that \( - {t}^{4} + {t}^{3} + t \) is the Jones polynomial of the right-hand trefoil knot. Thus its reflection, the left-hand trefoil knot, has Jones polynomial \( - {t}^{-4} + {t}^{-3} + {t}^{-1} \), and as this is a different polynomial, the two trefoil knots are distinct knots (that is, the trefoil knot is not amphicheiral). The figure-eight knot \( {4}_{1} \) is seen, by simple experiment, to be the same knot as its reflection; a glance at Table 3.1 verifies that its Jones polynomial is indeed symmetric between \( t \) and \( {t}^{-1} \) . Figure 3.3 shows two distinct knots with the same Jones polynomial. The knot on the left is the Kinoshita-Terasaka knot, and that on the right is the Conway knot. That the knots are distinct can be shown by analysing their knot groups [110] or by determining their genera [32]. These two knots are related by the process called mutation. (Conway was the first to use this term.) That means that there is a ball in \( {S}^{3} \) whose boundary meets one of the knots at four points. If this ball, with its intersection with the knot, is removed from \( {S}^{3} \), rotated through angle \( \pi \) about an axis (in such a way as to preserve the four points), and then replaced, then the result is the other knot.
In the diagrams, the boundary of the ball is indicated by ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_39_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_39_0.jpg) Figure 3.3 ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_40_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_40_0.jpg) Figure 3.4 a dotted circle; the three possible axes of rotation are an axis perpendicular to the plane of the diagram, a north-south axis and an east-west axis (though the latter produces no change in the example depicted). In the case of oriented knots, it may be necessary to change all the orientations within the ball in addition to rotating it, so that the result should be consistently oriented. Now the Jones polynomial can be calculated using Proposition 3.7. Use this first on the crossings within the ball, changing and destroying crossings and removing unlinking unknots, until the Jones polynomial of the knot (or link) is a linear sum of Jones polynomials of links that, within the ball, are all of one of the three forms of Figure 3.4. As each of these three configurations within the ball is unchanged by any of the three rotations, the same calculation ensues whether or not the ball is rotated. In fact, as oriented links are here being considered, only two of these three diagrams can occur; which two depends on the way the arrows are deployed. Pretzel links offer another easy example of mutation. There is a mutation on the pretzel link \( P\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) of Figure 1.7 that interchanges \( {a}_{i} \) and \( {a}_{i + 1} \) . Thus the Jones polynomial of \( P\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) is not changed when the \( \left\{ {a}_{i}\right\} \) are permuted in any way. It should be noted that the length of a calculation of the Jones polynomial of a link made directly from the definition depends exponentially on the number of crossings in a diagram. Thus it is impractical when the number of crossings is not small. There is however a calculation for the \( \left( {p, q}\right) \) torus knot given in Theorem 14.13. ## Exercises 1. Find the Jones polynomial of the \( \left( {2, q}\right) \) -torus knot. 2. Calculate the Jones polynomial of the 2-bridge knot given in Conway notation by \( C\left( {a, b}\right) \), where \( a \) and \( b \) are positive integers. 3. Show that the Jones polynomial of an oriented link \( L \) takes the value \( {\left( -2\right) }^{\# L - 1} \) when \( t = 1 \), where \( \# L \) is the number of components of \( L \) . 4. What is the value of the Jones polynomial of an oriented link \( L \) (i) when \( {t}^{1/2} = {e}^{{2\pi i}/3} \) and (ii) when \( {t}^{1/2} = {e}^{{\pi i}/3} \) ? 5. Calculate \( V\left( {5}_{2}\right) \) using only the characterisation of the Jones polynomial given in Proposition 3.7. 6. Prove that the knots \( {8}_{8} \) and \( {10}_{129} \), as shown in Figure 16.1, have the same Jones polynomial. 7. By considering the closure of braids of the form \( {\sigma }_{1}^{n}{\sigma }_{2}{\sigma }_{1}^{2}{\sigma }_{2} \), find two links with distinct Jones polynomials but with homeomorphic exteriors. 8. Suppose that \( {K}_{1} \) and \( {K}_{2} \) are knots and that \( {K}_{1} \sqcup {K}_{2} \) is the "distant union" of \( {K}_{1} \) and \( {K}_{2} \) , namely the two component link consisting of a copy of \( {K}_{1} \) and a copy
of \( {K}_{2} \) separated by a 2-sphere. Show that \( V\left( {{K}_{1} \sqcup {K}_{2}}\right) = \left( {-{t}^{-1/2} - {t}^{1/2}}\right) V\left( {K}_{1}\right) V\left( {K}_{2}\right) \) . 9. Determine which knots with crossing number at most 8, other than \( {8}_{17} \), are amphicheiral (equivalent to their reflections). [In fact, \( {8}_{17} \neq \overline{{8}_{17}} \) .] 10. Verify the discovery of Perko that the knots illustrated in Figure 3.1 differ simply by reflection. 11. By considering the intersection between a disc spanning the unknot \( U \) and a 2-sphere meeting \( U \) at four points, show that \( U \) is the only knot related to \( U \) by mutation. 4 ## Geometry of Alternating Links An alternating diagram for a link is, as explained in Chapter 1, one in which the over or under nature of the crossings alternates along every link-component in the diagram; the crossings always go "... over, under, over, under, ..." when considered from any starting point. A link is said to be alternating if it possesses such a diagram. It has long been realised that alternating diagrams for a knot or link are particularly agreeable. However, the question posed by R. H. Fox, "What is an alternating knot?", by which he was asking for some topological characterisation of alternating knots without mention of diagrams, is still unanswered. In later chapters the way in which the alternating property interacts with polynomial invariants will be discussed. In what follows here, some of the geometric properties of alternating links, discovered by W. Menasco [94], will be considered. The results are paraphrased by saying that an alternating link is split if and only if it is obviously split and prime if and only if it is obviously prime. Here "obviously" means that the property can at once be observed in the alternating diagram. This then establishes a ready supply of prime knots. Much of the ensuing discussion will concern 2-spheres embedded in \( {S}^{3} \) . It is to be assumed, as usual, that all such embeddings are piecewise linear (that is, simplicial with respect to some subdivisions of the basic triangulations). Definition 4.1. A link \( L \subset {S}^{3} \), having at least two components, is a split link if there is a 2-sphere in \( {S}^{3} - L \) separating \( {S}^{3} \) into two balls, each of which contains a component of \( L \) .
A link diagram \( D \) in \( {S}^{2} \) is a split diagram if there is a simple closed curve in \( {S}^{2} - D \) separating \( {S}^{2} \) into two discs each containing part of \( D \) . Theorem 4.2. Suppose a link \( L \) has an alternating diagram \( D \) . Then \( L \) is a split link if and only if \( D \) is a split diagram. The proof of this will be one of the two main aims of this chapter. The next definition generalises Definition 1.3 to links (rather than knots) and expresses primeness in a slightly different way. It also extends the idea of primeness to diagrams. Definition 4.3. A link \( L \subset {S}^{3} \), other than the unknot, is prime if every 2 -sphere in \( {S}^{3} \) that intersects \( L \) transversely at two points bounds, on one side of it, a ball that intersects \( L \) in precisely one unknotted arc. A diagram \( D \subset {S}^{2} \), of a link other than the unknot, is a prime diagram if any simple closed curve in \( {S}^{2} \) that meets \( D \) transversely at two points bounds, on one side of it, a disc that intersects \( D \) in a diagram \( U \) of the unknotted ball-arc pair. \( D \) is strongly prime if such a \( U \) is always the trivial zero-crossing diagram. Note that the only prime split link is the trivial link of two components. In Chapter 5 it will be seen that it is straightforward to determine whether an alternating diagram represents the unknot, and so, given the alternating condition, references to the unknot in the above definition cause no problem. The second main result of the chapter is as follows: Theorem 4.4. Suppose \( L \) is a link that has an alternating diagram \( D \) . Then \( L \) is a prime link if and only if \( D \) is a prime diagram. This result shows at once that the alternating diagrams in the knot tables do indeed represent prime knots, for it is easy to check that those diagrams are prime. The proofs of these results depend upon a procedure for moving surfaces contained in the complement of a link, or transverse to it, to a standard position with reference to a diagram. This procedure, now to be described, is very general and does not use the alternating condition. Proofs of the stated theorems follow from the application of that condition to standard position surfaces. The description does require some notation and terminology as follows. As usual, if \( D \subset {S}^{2} \subset {S}^{3} \) is a diagram for a link \( L, D \) is a collection of curves with self-intersections in the sphere \( {S}^{2} \), together with over or under information at these intersections. The link \( L \) will be taken to be equal to \( D \) except near the crossings and, near any crossing, to be on a small sphere centred on the crossing. These small spheres are the boundaries of small balls called bubbles. The overpassing arcs are on the "upper" (or "Northern") halves of the small spheres, the under-passing arcs on the "lower" halves, \( {S}^{2} \) being regarded as separating the small spheres into "upper" hemispheres on one side of \( {S}^{2} \) and "lower" hemispheres on the other side. This is shown in Figure 4.1 on the left. Let \( {S}_{ + } \) and \( {S}_{ - } \) be the two 2-spheres created from \( {S}^{2} \) by removing the intersection of \( {S}^{2} \) with all the bubbles and replacing those discs by the upper hemispheres or the lower hemispheres, ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_43_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_43_0.jpg) Figure 4.1 respectively, of the bubbles’ boundaries. 
Let \( {B}_{ + } \) and \( {B}_{ - } \) be the balls bounded by \( {S}_{ + } \) and \( {S}_{ - } \), so that \( {B}_{ + },{B}_{ - } \) and the bubbles have disjoint interiors. Let \( F \) be a surface in \( {S}^{3} \) that is transverse to \( L \) . By a general position isotopy in \( {S}^{3}, F \) can be moved to a new position in which it is still transverse to \( L \) (it will meet \( L \) at points of \( L \cap {S}^{2} \) ), and is transverse to \( {S}_{ + },{S}_{ - } \) and to the north-south axes of all the bubbles. This means that \( F \) can be taken to meet each of \( {S}_{ + } \) and \( {S}_{ - } \) in the union of disjoint simple closed curves and to meet each bubble in disjoint saddles. (Maybe \( F \) is not transverse to \( {S}^{2} \) .) Each saddle is just a disc spanning the bubble; its boundary intersects \( {S}^{2} \) in four points that divide it into four arcs, two arcs in \( {S}_{ + } \) and two in \( {S}_{ - } \) (see Figure 4.1). The surface \( F \), in such general position, will be said to be in standard position with respect to the above data if, in addition, three conditions hold: (A) Each of \( F \cap {B}_{ + } \) and \( F \cap {B}_{ - } \) is a disjoint union of discs. (B) No component of \( F \cap {S}_{ + } \) or \( F \cap {S}_{ - } \) meets any bubble in more than one arc. (C) Each component of both \( F \cap {S}_{ + } \) and \( F \cap {S}_{ - } \) meets some saddle or meets \( L \) . Lemma 4.5. Let \( D \) be a non-split diagram for \( L \) . Suppose that \( F \) is a 2-sphere with the property that it separates the components of \( L \) ; then \( F \) can be replaced by another 2-sphere with the same property that is in standard position. Proof. (a) Suppose that \( C \) is amongst the \( n \) components of \( F \cap {S}_{ + } \) that do not bound disc components of \( F \cap {B}_{ + } \) . Choose \( C \) to be innermost on \( {S}_{ + } \) amongst such components. Then \( C \) is the boundary of a disc \( \Delta \) in \( {S}_{ + } \), and any component of \( F \cap {S}_{ + } \) contained in the interior of \( \Delta \) does bound a disc of \( F \cap {B}_{ + } \) . Thus if \( {\Delta }^{\prime } \) denotes a copy of \( \Delta \) displaced into \( {B}_{ + },{\Delta }^{\prime } \) can be chosen so that \( {\Delta }^{\prime } \cap F = \partial {\Delta }^{\prime } \) , \( \partial {\Delta }^{\prime } \) being a copy of \( C \) displaced along \( F \) into \( {B}_{ + } \) . Now \( \partial {\Delta }^{\prime } \) separates the sphere \( F \) into two discs \( {E}_{1} \) and \( {E}_{2} \) . Then \( {\Delta }^{\prime } \cup {E}_{1} \) or \( {\Delta }^{\prime } \cup {E}_{2} \) separates the components of \( L \) (because \( F \) did so). Let this new sphere be \( {F}^{\prime } \) . Then \( {F}^{\prime } \cap {S}_{ + } \) has fewer than \( n \) components not bounding discs in \( {F}^{\prime } \cap {B}_{ + } \), for either \( C \) is no longer part of that intersection or, if \( C \) is still present, \( C \) now bounds a disc. Furthermore, \( \left( {{F}^{\prime } \cap {B}_{ - }}\right) \subset \left( {
F \cap {B}_{ - }}\right) \) . Thus, by repeating this, it may be assumed that \( F \) satisfies condition (A). (b) Let \( H \) be the upper hemisphere of the boundary of a bubble. \( H \) is a disc in \( {S}_{ + } \) that meets \( L \) in one over-pass arc and meets \( F \) in disjoint arcs all parallel to the over-pass. Let \( \delta \) be a diameter of \( H \) that intersects each of these arcs transversely at one point. The components of \( F \cap {S}_{ + } \) are disjoint simple closed curves on the sphere \( {S}_{ + } \) . If \( \delta \) meets one of these components at more than one point, then \( \delta \) must meet some such component at two points of \( \delta \cap F \cap {S}_{ + } \) that are consecutive along \( \delta \) . (This follows by considering the "innermost" component that \( \delta \) meets.) Thus, if some component \( C \) of \( F \cap {S}_{ + } \) meets the bubble in more than one arc, \( C \) can be chosen so that \( C \) meets \( \delta \) at adjacent points \( {p}_{1} \) and \( {p}_{2} \) of \( \delta \cap F \) ; see Figure 4.2. If \( {p}_{1} \) and \( {p}_{2} \) on \( \delta \) are on opposite sides of the over-pass, then they are both on opposite sides of the same saddle. The simple closed curve \( \gamma \) that consists of an arc from \( {p}_{1} \) to \( {p}_{2} \) across the saddle, followed by an arc from \( {p}_{2} \) to \( {p}_{1} \) in the disc in \( {B}_{ + } \) bounded by \( C \), is homotopic in \( {S}^{3} - L \) to the meridian loop around the over-pass arc. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_45_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_45_0.jpg) Figure 4.2 On the other hand \( \gamma \), being contained in the sphere \( F \), is null-homotopic in \( {S}^{3} - L \) . This would imply that the meridian is null-homotopic, which is false by Theorem 1.7. Thus \( {p}_{1} \) and \( {p}_{2} \) are on the same side of the over-pass and hence are points on adjacent saddles. Let \( {q}_{1} \) and \( {q}_{2} \) be the points where these two saddles intersect the north-south axis of the bubble. Consider the simple closed curve that consists of an arc in the first saddle from \( {q}_{1} \) to \( {p}_{1} \), an arc from \( {p}_{1} \) to \( {p}_{2} \) in the disc in \( {B}_{ + } \) bounded by \( C \), an arc in the second saddle from \( {p}_{2} \) to \( {q}_{2} \) and then back to \( {q}_{1} \) along the axis (see Figure 4.2).
This curve bounds a disc \( \Delta \) that can be chosen, using condition (A), to meet \( F \) only in the above composition of arcs from \( {q}_{1} \) to \( {q}_{2} \) and to be disjoint from \( L \) . Now move \( F \) by an isotopy that pushes \( F \) across \( \Delta \) to a new position in which the intersection points \( {q}_{1} \) and \( {q}_{2} \) with the axis have been removed. Hence \( F \) can be changed to a new position with two fewer saddles. The previous process for ensuring that \( F \) satisfies condition (A) can then be repeated (it certainly does not increase the number of saddles). Repetition ensures that conditions (A) and (B) are satisfied. (c) Finally, suppose that a component \( C \) of \( F \cap {S}_{ + } \) meets no saddle at all. Thus \( C \subset {S}^{2} - D \), and \( C \) bounds a disc in \( F \cap {B}_{ + } \) and a disc in \( F \cap {B}_{ - } \), the union of these discs being \( F \) . As \( C \) does not separate \( D \), this union of two discs cannot separate \( L \) . Thus condition (C) is satisfied. (An alternative method for (c) is more useful in more general circumstances. As \( D \) is not a split diagram, \( C \) bounds a disc \( {\Delta }^{\prime } \) in \( {S}^{2} - D \) which is contained in \( {S}_{ - } \cap {S}_{ + } \) . Replace the disc of \( F \cap {B}_{ + } \) bounded by \( C \) with \( {\Delta }^{\prime } \), and then displace \( {\Delta }^{\prime } \) a little into \( {B}_{ - } \) . Repetition of this process changes \( F \), reducing the number of components of \( F \cap {S}_{ + } \) and \( F \cap {S}_{ - } \), until condition (C) is satisfied.) Lemma 4.6. Suppose that \( L \), with diagram \( D \), is not a split link. Suppose that \( F \) is a 2-sphere meeting \( L \) transversely at two points, with the property that \( F \) separates \( {S}^{3} \) into two 3-balls, neither of which intersects \( L \) in a trivial ball-arc pair. Then \( F \) can be replaced by another 2 -sphere, with the same property, that is in standard position. Proof. The proof of this lemma follows closely that of the preceding one. In (a), the boundary of the disc \( {\Delta }^{\prime } \) cannot separate, on \( F \), the two points of \( L \cap F \), or else a meridian of \( L \) would be null-homotopic in \( {S}^{3} - L \) . So, \( \partial {\Delta }^{\prime } \) bounds a disc \( E \) in \( F - L \), and \( {\Delta }^{\prime } \cup E \) bounds (by the Schönflies theorem) a 3-ball that is disjoint from \( \mathrm{L} \) (as \( L \) is not split). This ball can be used to change \( F \) by an isotopy that has the effect of replacing \( E \) with \( {\Delta }^{\prime } \) . In (b), for the case when \( {p}_{1} \) and \( {p}_{2} \) are on the same side of the over-pass, the reasoning is the same as before. When they are on opposite sides, consider the simple closed curve \( \gamma \) constructed as before. This \( \gamma \) bounds a disc \( \Gamma \) that meets \( L \) at one point, with \( \Gamma \cap F = \gamma \) . Now, \( \gamma \) must separate on \( F \) the points of \( F \cap L \) , or a meridian is null-homotopic. \( F \) can now be replaced by the union of \( \Gamma \) and one of the components of \( F - \gamma \) . It is straightforward to check (using the fact that additive inverses to knots do not exist) that a correct choice of component preserves the property that (the new) \( F \) does not bound a trivial ball-arc pair. 
This replacement reduces the number of saddles required, and so repeating the process finitely many times achieves conditions (A) and (B). The final part of the proof, to achieve condition (C), is exactly as before. Now, using all the preceding notation, suppose that \( F \) is a surface in standard position, and that the diagram \( D \), used to specify the concept of standard position, is alternating. Consider a component \( C \), temporarily oriented, of \( F \cap {S}_{ + } \) . Suppose, when \( C \) enters a certain region of \( \left( {{S}^{2} \cap {S}_{ + }}\right) - D \), it has a saddle to its left; then it can only leave that region with a saddle on its right, or at a point of \( F \cap L \) . This follows from the alternating property; see Figure 4.3. Thus, proceeding along \( C \) , saddles occur on the \( \ldots \) left, right, left, right \( \ldots \), except that points of \( F \cap L \) can substitute for some of these saddles. Proof of Theorem 4.2. Clearly, if \( D \) is a split diagram, then \( L \) is a split link. So, suppose \( L \) is split and \( D \) is non-split. By Lemma 4.5, there is a 2-sphere \( F \) in standard position that separates the components of \( L \) . Suppose \( C \) is an innermost component of \( F \cap {S}_{ + } \), so that \( C \) bounds a disc \( \Delta \) in \( {S}_{ + } \) with \( \Delta \cap F = C \) . By condition \( \left( \mathrm{C}\right), C \) meets a saddle and there changes from one region of \( \left( {{S}^{2} \cap {S}_{ + }}\right) - D \) to another. (Consideration of the chessboard colouring of these regions shows at once that \( C \) must meet, in total, an even number of saddles in order to return to ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_46_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_46_0.jpg) Figure 4.3 the original region.) Thus the alternating condition implies that \( C \) has at least one saddle to its left and one to its right. The arc of such a saddle, on the side of the saddle opposite to \( C \), is part of some other component of \( F \cap {S}_{ + } \) (by condition (B)). As there is a saddle on either side of \( C \), some component of \( F \cap {S}_{ + } \) is in the interior of \( \Delta \), and this contradicts the choice of \( C \) . Hence there is no component at all of \( F \cap {S}_{ + } \), and similarly \( F \cap {S}_{ - } \) is also empty. Thus \( F \subset {B}_{ + } \) or \( F \subset {B}_{ - } \) , and in either case \( F \) does not separate \( L \) . Proof of Theorem 4.4. Suppose that the link \( L \), with alternating diagram \( D \), is not prime. If \( L \) is a split link, then by Theorem 4.2 \( D \) is a split diagram, and it is easy to see that \( D \) is not prime. Thus it may be assumed that \( L \) is not split. There is a 2-sphere \( F \) in \( {S}^{3} \) that intersects \( L \) transversely at two points, separates \( {S}^{3} \) into two balls that do not meet \( L \) in just an unknotted arc, and is (by Lemma 4.6) in standard position. Let \( C \) be an innermost component of \( F \cap {S}_{ + } \) on \( {S}_{ + } \) . As in the preceding proof, \( C \) must have an even number (at least two) of pla
ces where it either meets \( L \) or is incident on a saddle. If there are two or more consecutive saddles, the alternating property implies (as in the proof of Theorem 4.2) that \( C \) cannot be innermost. There are only two intersections with \( L \) available. Thus either (i) \( C \) contains both such intersections and two saddle-arcs separating them, or (ii) \( C \) contains one intersection and one saddle-arc, or (iii) \( C \) contains just the two intersection points with \( L \) and no saddle-arc. If \( F \cap {S}_{ + } \) has more than one component, it has at least two innermost components; each meets \( L \), as has just been observed. Case (i) cannot occur because there must be components of \( F \cap {S}_{ + } \) other than \( C \) to account for the arcs on the other sides of the saddles, but no more points of \( F \cap L \) are available for another innermost arc. The situation of case (ii) is shown on the left of Figure 4.4; the thicker arcs are parts of \( L \), and the ellipse represents \( C \) . The corresponding part of the configuration in \( F \cap {S}_{ - } \) is shown on the right, where it is seen that a contradiction to condition (B) arises. Thus case (iii) is the only possibility, and \( F \cap {S}_{ + } = F \cap {S}_{ - } = F \cap {S}^{2} \), this being one simple closed curve intersecting \( D \) at two points only. Because \( F \) separates \( L \) into non-trivial summands, this means that \( D \) is not a prime diagram. Observe that in toto, the method of the above proofs is first to use the hypotheses about \( F \) to put \( F \) into standard position and then to use the observation, implied by the alternating nature of \( D \), that left saddles and right saddles alternate along a component of \( F \cap {S}_{ + } \) (though a point of \( F \cap L \) may replace such a saddle) to complete the argument. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_47_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_47_0.jpg) This method has been extended [94] with only a little extra ingenuity to produce the following results. Detailed proofs will not be given here; to produce them by extending the proofs of Theorems 4.2 and 4.4 is little more than an exercise. First a general definition from the theory of 3-manifolds is required. Definition 4.7. Suppose \( F \) is a surface, other than a 2-sphere, contained in a 3-manifold \( M \) .
Then \( F \) is incompressible in \( M \) if any disc \( \Delta \subset M \) that spans \( F \) in \( M \) (that is, \( \Delta \cap F = \partial \Delta \) ) has the property that \( \partial \Delta \) bounds a disc in \( F \) . A 2-sphere is incompressible in \( M \) if it does not bound a 3-ball contained in \( M \) . This means that \( F \) has no "significant" spanning disc at all. Proposition 4.8. Suppose \( L \) is a non-split, prime, alternating link and \( F \) is a closed incompressible surface in \( {S}^{3} - L \) . Then there exists a disc \( \Delta \) spanning \( F \) in \( {S}^{3} \) that meets \( L \) transversely at precisely one point. Corollary 4.9. Suppose \( L \) is a non-split, prime, alternating link. Any incompressible torus \( T \) contained in \( {S}^{3} - L \) is parallel to the boundary of a solid torus neighbourhood of one of the components of \( L \) . A torus with that final property is called a peripheral torus of \( L \) . Note that in using this result, the non-split and prime conditions can easily be verified from the preceding theorems. The theory developed by W. P. Thurston [121], on the existence of hyperbolic structures on 3-manifolds, requires that no non-peripheral incompressible tori should be present. That there should be no "essential" annuli is also required. This theory, applied to the result of the corollary above, then shows that the complement of any non-split, prime, alternating link, other than a twist link (see Figure 4.5), has a complete hyperbolic structure of finite volume. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_48_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_48_0.jpg) Figure 4.5 Definition 4.10. A Conway sphere for a link \( L \) in \( {S}^{3} \) is a 2-sphere \( \sum \) in \( {S}^{3} \) that meets \( L \) transversely at four points such that (i) \( \sum - L \) is incompressible in \( {S}^{3} - L \) and (ii) any 2-sphere in \( {S}^{3} - \sum \) meeting \( L \) transversely at two points bounds a ball in \( {S}^{3} - \sum \) meeting \( L \) in just an unknotted arc. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_49_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_49_0.jpg) Figure 4.6 Note that the first condition implies that a disc spanning \( \sum \) in \( {S}^{3} - L \) cannot separate the part of \( L \) that lies on the same side of \( \sum \) as the disc. Discussion of Conway spheres is the essence of the characteristic variety theory for links due to Bonahon and Siebenmann ([14], [15]). They show that for any knot that is not a satellite, there is a well-defined maximal collection of Conway spheres that divides the knot into an arborescent part and a part in which any Conway sphere is pairwise parallel to a boundary component. The arborescent part consists of some copies of the 3-ball with two holes containing six arcs as in Figure 4.6, and some trivial 2-string tangles, glued together along some of their boundary \( \left( {{S}^{2},4}\right. \) point) pairs. The following result means that it is easy to spot Conway spheres from alternating link diagrams; it can be used to show that alternating knots near the beginning of the knot table certainly have no such spheres. Proposition 4.11. Suppose \( L \) is a non-split, prime link with alternating diagram D. 
If \( L \) has a Conway sphere, then it has a Conway sphere \( \sum \) such that \( \sum \cap {S}_{ + } \) is either (i) one curve containing all four points of \( \sum \cap L \) and meeting no saddle, as on the left of Figure 4.7, or (ii) two curves, each containing two of the points of \( \sum \cap L \) separated by two saddle-arcs, as on the right of Figure 4.7. Note that in either case \( \sum \cap {S}_{ - } \) is of the same form as \( \sum \cap {S}_{ + } \) . In case (ii), the Conway sphere has two minima, two saddles and two maxima. Some recent extensions of Menasco's method can be found in [41] and [2]. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_49_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_49_1.jpg) Figure 4.7 ## Exercises 1. Prove that a two-component link \( L \) that consists of a non-trivial knot \( K \) and a longitude of \( K \) is never a split link. 2. Prove, using the Jones polynomial, that the Whitehead link shown below is not a split link. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_50_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_50_0.jpg) 3. Find a prime diagram of a non-prime knot. Find a non-split diagram of a split link. 4. Show that a non-prime minimal crossing diagram of an alternating knot need not be an alternating diagram. 5. Let \( {K}_{1} \) and \( {K}_{2} \) be (possibly non-prime) knots. If \( {K}_{1} + {K}_{2} \) is alternating, show that \( {K}_{1} \) and \( {K}_{2} \) are both alternating. 6. Prove Proposition 4.8 and Corollary 4.9. 5 ## The Jones Polynomial of an Alternating Link This chapter contains some of the most impressive applications of the Jones polynomial. They give solutions to two problems encountered by P. G. Tait in the nineteenth century. It is shown that an alternating knot diagram, when "reduced" in a rather elementary way, has the minimal number of crossings and that its writhe is an invariant of the knot. The crossing number of some other types of knot is also determined. Let \( D \) be an \( n \) -crossing link diagram with its crossings labelled \( 1,2,3,\ldots, n \) . A state for \( D \) is a function \( s : \{ 1,2,3,\ldots, n\} \rightarrow \{ - 1,1\} \) . Of course, there are \( {2}^{n} \) such states. Given \( D \) and a state \( s \) for \( D \), let \( {sD} \) be a diagram constructed from \( D \) by replacing each crossing by two segments that do not cross. There are two possible ways of doing this. At the \( {i}^{th} \) crossing one way (the positive way) is used if \( s\left( i\right) = 1 \), and the other way (the negative way) it used if \( s\left( i\right) = - 1 \) . This is illustrated in Figure 5.1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_51_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_51_0.jpg) Figure 5.1 The diagram \( {sD} \), having no crossing at all, is just a set of disjoint simple closed curves. Let there be \( \left| {sD}\right| \) such curves. With this notation it is easy to write down a one-line formula for \( \langle D\rangle \), the Kauffman bracket of \( D \), as a summation over all possible \( {2}^{n} \) states. The proof of this formula, which follows in Proposition 5.1, is simply that it immediately satisfies the criteria of Definition 3.1. Proposition 5.1. If \( D \) is a link diagram with \( n \) crossings, the Kauffman bracket of \( D \) is
given by \[ \langle D\rangle = \mathop{\sum }\limits_{s}\left( {{A}^{\mathop{\sum }\limits_{{i = 1}}^{n}s\left( i\right) }{\left( -{A}^{-2} - {A}^{2}\right) }^{\left| {sD}\right| - 1}}\right) , \] where the summation is over all functions \( s : \{ 1,2,3,\ldots, n\} \rightarrow \{ - 1,1\} \) . Now, let \( {s}_{ + } \) and \( {s}_{ - } \) be the two constant states, so that for every \( i,{s}_{ + }\left( i\right) = 1 \) and \( {s}_{ - }\left( i\right) = - 1 \) . Of course, \( {s}_{ + } \) is the only state \( s \) for which \( \mathop{\sum }\limits_{{i = 1}}^{n}s\left( i\right) = n \), and \( {s}_{ - } \) is the only one for which \( \mathop{\sum }\limits_{{i = 1}}^{n}s\left( i\right) = - n \) . Definition 5.2. The diagram \( D \) is plus-adequate if \( \left| {{s}_{ + }D}\right| > \left| {sD}\right| \) for all \( s \) with \( \mathop{\sum }\limits_{{i = 1}}^{n}s\left( i\right) = n - 2 \) and is minus-adequate if \( \left| {{s}_{ - }D}\right| > \left| {sD}\right| \) for all \( s \) with \( \mathop{\sum }\limits_{{i = 1}}^{n}s\left( i\right) = 2 - n \) . If both conditions hold, \( D \) is called adequate. Although this looks complicated, it is in fact easy to test whether a diagram be adequate: Change \( D \) to \( {s}_{ + }D \) by replacing all the crossings in the positive manner described above, and inspect the diagram \( {s}_{ + }D \) . If the two segments of \( {s}_{ + }D \) that replace a crossing of \( D \) never belong to the same component of \( {s}_{ + }D \), then \( D \) is plus-adequate. So, just examine each component of \( {s}_{ + }D \) to see if it ever abuts itself at a former crossing. The same procedure applied to \( {s}_{ - }D \) detects minus-adequacy. The prime example of this is the following result. Proposition 5.3. A reduced alternating link diagram is adequate. Here, "reduced" means that there is no crossing of the form featured in Figure 5.2 or its reflection (in which the squares labelled \( X \) and \( Y \) contain the whole diagram away from the crossing). Such a crossing is called a nugatory or removable crossing. It is a crossing at which one region of the complement of the diagram in the plane features twice, appearing near the crossing in a pair of diagonally opposite quadrants. (In practice such a crossing could be removed by rotating half of the link.) ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_52_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_52_0.jpg) Figure 5.2 Proof of Proposition 5.3.
Let the complementary planar regions of the diagram be coloured black and white in a chessboard fashion. The alternating condition implies that the components of \( {s}_{ + }D \) are the boundaries of the regions of one of the colours (the black ones, say) with the corners rounded off. Similarly, the components of \( {s}_{ - }D \) bound the white regions. The lack of removable crossings implies at once that \( D \) is adequate, for no region abuts itself. A specific non-alternating example is provided by the standard diagram of many pretzel knots. Figure 1.7 shows a diagram of the pretzel knot \( P\left( {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right) \) . Recall that the crossings are all of the sense indicated when \( {a}_{i} \) is positive and in the other sense when \( {a}_{i} \) is negative. If \( {p}_{1},{p}_{2},\ldots ,{p}_{r} \) are all positive integers and \( {q}_{1},{q}_{2},\ldots ,{q}_{s} \) are all negative, then \( P\left( {{p}_{1},{p}_{2},\ldots ,{p}_{r},{q}_{1},{q}_{2},\ldots ,{q}_{s}}\right) \) is adequate provided that \( {p}_{i} \geq 2 \) and \( {q}_{i} \leq - 2 \) for each \( i \), and \( r \geq 2 \) and \( s \geq 2 \) . Adequacy follows by simple inspection. If \( P \) is any Laurent polynomial in some indeterminate, the maximum and minimum powers of the indeterminate that occur in \( P \) will be denoted \( M\left( P\right) \) and \( m\left( P\right) \) . In what comes next the aim is to determine \( M\langle D\rangle \) and \( m\langle D\rangle \), the maximum and minimum powers of \( A \) that occur in the bracket (Laurent) polynomial of a diagram \( D \) . Lemma 5.4. Let \( D \) be a link diagram with \( n \) crossings. Then (i) \( M\langle D\rangle \leq n + 2\left| {{s}_{ + }D}\right| - 2 \), with equality if \( D \) is plus-adequate, and (ii) \( m\langle D\rangle \geq - n - 2\left| {{s}_{ - }D}\right| + 2 \), with equality if \( D \) is minus-adequate. Proof. (This is due, essentially, to Kauffman.) For any state \( s \) for \( D \) let \[ \langle D \mid s\rangle = {A}^{\mathop{\sum }\limits_{{i = 1}}^{n}s\left( i\right) }{\left( -{A}^{-2} - {A}^{2}\right) }^{\left| {sD}\right| - 1}, \] so that \( \langle D\rangle = \mathop{\sum }\limits_{s}\langle D \mid s\rangle \) . As \( \mathop{\sum }\limits_{{i = 1}}^{n}{s}_{ + }\left( i\right) = n \), it follows that \( M\left\langle {D \mid {s}_{ + }}\right\rangle = \) \( n + 2\left| {{s}_{ + }D}\right| - 2 \) . Now any state \( s \) can be achieved by starting with \( {s}_{ + } \) and changing, one at a time, the value of \( {s}_{ + } \) on selected integers that label the crossings. In other words, there exist states \( {s}_{0},{s}_{1},{s}_{2},\ldots ,{s}_{k} \) with \( {s}_{0} = {s}_{ + },{s}_{k} = s \) and \( {s}_{r - 1}\left( i\right) = {s}_{r}\left( i\right) \) for all \( i \in \{ 1,2,\ldots n\} \) except for a single integer \( {i}_{r} \) for which \( {s}_{r - 1}\left( {i}_{r}\right) = 1 \) and \( {s}_{r}\left( {i}_{r}\right) = - 1 \) . Then \( \mathop{\sum }\limits_{{i = 1}}^{n}{s}_{r}\left( i\right) = n - {2r} \) and, because \( {s}_{r - 1}D \) and \( {s}_{r}D \) are the same diagram except near one crossing of \( D,\left| {{s}_{r}D}\right| = \left| {{s}_{r - 1}D}\right| \pm 1 \) . Hence \( M\left\langle {D \mid {s}_{r - 1}}\right\rangle - M\left\langle {D \mid {s}_{r}}\right\rangle \) is 0 or 4 . Thus \( M\left\langle {D \mid {s}_{r}}\right\rangle \leq M\left\langle {D \mid {s}_{r - 1}}\right\rangle \), and so, for all \( s \) , it follows that \[ M\langle D \mid s\rangle \leq n + 2\left| {{s}_{ + }D}\right| - 2. 
\] If \( D \) is plus-adequate, it is immediate that \( \left| {{s}_{1}D}\right| = \left| {{s}_{ + }D}\right| - 1 \), so that \( M\left\langle {D \mid {s}_{r}}\right\rangle \) decreases at the first step, when \( r \) changes from 0 to 1, and never rises thereafter. Thus \( M\langle D \mid s\rangle < n + 2\left| {{s}_{ + }D}\right| - 2 \) when \( s \neq {s}_{ + } \) . Hence, in summing to achieve \( \langle D\rangle \), the maximal degree term of \( \left\langle {D \mid {s}_{ + }}\right\rangle \) is never cancelled by a term from \( \langle D \mid s\rangle \) for any \( s \) . The second statement of the lemma is really just the reflection of the first; its proof can be achieved by applying the above to \( \bar{D} \) . Corollary 5.5. If \( D \) is an adequate diagram, then \[ M\langle D\rangle - m\langle D\rangle = {2n} + 2\left| {{s}_{ + }D}\right| + 2\left| {{s}_{ - }D}\right| - 4. \] In order to interpret the last result, information is needed on \( \left| {{s}_{ + }D}\right| \) and \( \left| {{s}_{ - }D}\right| \) . This is provided in the next two lemmas. Note that a diagram of a link is said to be a connected diagram if it is a connected subset of the plane (when drawn with no gaps for the under-passes); that is, it is not a split diagram in the sense of Definition 4.1. Lemma 5.6. Let \( D \) be a connected link diagram with \( n \) crossings. Then \[ \left| {{s}_{ + }D}\right| + \left| {{s}_{ - }D}\right| \leq n + 2 \] Proof. Use induction on \( n \) . The result is clearly true when \( n = 0 \) ; suppose it to be true for diagrams with \( n - 1 \) crossings. Select a crossing of \( D \) . For at least one of the two ways of replacing the crossing with two segments that do not cross, the resulting diagram \( {D}^{\prime } \) is connected. Suppose, with no loss of generality, that this is achieved by the positive way. Then \( {s}_{ + }D = {s}_{ + }{D}^{\prime } \) and \( \left| {{s}_{ - }D}\right| = \left| {{s}_{ - }{D}^{\prime }}\right| \pm 1 \) . Thus, using the induction hypothesis, \[ \left| {{s}_{ + }D}\right| + \left| {{s}_{ - }D}\right| = \left| {{s}_{ + }{D}^{\prime }}\right| + \left| {{s}_{ - }{D}^{\prime }}\right| \pm 1 \leq \left( {n - 1}\right) + 2 \pm 1 \leq n + 2. \] Lemma 5.7. Let \( D \) be a connected \( n \) -crossing diagram. (i) If \( D \) is alternating, then \( \left| {{s}_{ + }D}\right| + \left| {{s}_{ - }D}\right| = n + 2 \) . (ii) If \( D \) is non-alternating and strongly prime (see Definition 4.3), then \[ \left| {{s}_{ + }D}\right| + \left| {{s}_{ - }D}\right| < n + 2. \] Proof. When \( D \) is alternating, \( \left| {{s}_{ + }D}\right| + \left| {{s}_{ - }D}\right| \) is the number of planar regions in the complement of \( D \) (as \( \left| {{s}_{ + }D}\right| \) is the number of black regions, \( \left| {{s}_{ - }D}\right| \) the number of white regions in a chessboard colouring). However, \( D \) is a four-valent planar graph, so consideration of
the Euler number of the sphere shows that the number of regions is \( n + 2 \) (for the number of edges is \( {2n} \) ). Hence \( \left| {{s}_{ + }D}\right| + \left| {{s}_{ - }D}\right| = n + 2 \) . Now suppose that \( D \) is non-alternating and strongly prime. Use induction on \( n \) . The induction starts easily when \( n = 2 \) with the two-crossing non-alternating diagram of two unlinked components. Thus, suppose \( n \geq 3 \) . As \( D \) is non-alternating, it has two consecutive crossings that are both over-crossings or both under-crossings. Let \( c \) be a third crossing. As before, \( c \) can be removed in a positive or negative way. As \( D \) is strongly prime, the diagram resulting from either way will be connected. Consider the chessboard shading of the complementary regions of \( D \) and the graph \( \Gamma \) formed by taking a vertex for each black region and, for every crossing, an edge joining the vertices of the black regions that abut at that crossing. Strong primeness means that removal of any vertex does not separate \( \Gamma \) . The two ways of removing \( c \) correspond in \( \Gamma \) to removing, or shrinking to a point, the edge corresponding to \( c \) to produce a graph \( {\Gamma }^{\prime } \) . If deleting the interior of an edge \( e \) of \( \Gamma \) produces a separating vertex \( v \), then shrinking it does not produce a separating vertex (because \( v \) must be in any component of the complement of a neighbourhood of \( e \) in \( \Gamma \) ). Thus one way of removing \( c \) gives a diagram \( {D}^{\prime } \) that is strongly prime. Now \( {D}^{\prime } \) is non-alternating because it has the same two consecutive similar crossings as had \( D \) . Thus the induction hypothesis can be applied to \( {D}^{\prime } \) to give \( \left| {{s}_{ + }{D}^{\prime }}\right| + \left| {{s}_{ - }{D}^{\prime }}\right| < n + 1 \), and, as in the previous proof, this at once gives the required result. The next result, the work of Kauffman, K. Murasugi and Thistlethwaite, is one of the main triumphs of the Jones polynomial. Its consequences have already been advertised here. As explained below in the corollary, it implies that a reduced alternating diagram of a knot is a diagram with the minimal number of crossings for that knot. This was inherently a conjecture of Tait's when he was compiling the first knot tables [118]. Firstly a simple definition is needed. Definition 5.8. Suppose \( V \) is a Laurent polynomial in the indeterminate \( t \) .
The breadth \( B\left( V\right) \) of \( V \) is the difference between the maximal degree of \( t \) and the minimal degree of \( t \) that occur in \( V \) . (Thus \( B\left( V\right) = M\left( V\right) - m\left( V\right) \) .) Theorem 5.9. Let \( D \) be a connected, \( n \) -crossing diagram of an oriented link \( L \) with Jones polynomial \( V\left( L\right) \) . Then (i) \( B\left( {V\left( L\right) }\right) \leq n \) ; (ii) if \( D \) is alternating and reduced, then \( B\left( {V\left( L\right) }\right) = n \) ; (iii) if \( D \) is non-alternating and a prime diagram, then \( B\left( {V\left( L\right) }\right) < n \) . Proof. Recall that under the substitution \( t = {A}^{-4} \) the Jones polynomial is given by \( V\left( L\right) = {\left( -A\right) }^{-{3w}\left( D\right) }\langle D\rangle \), so that \( {4B}\left( {V\left( L\right) }\right) = B\langle D\rangle = M\langle D\rangle - m\langle D\rangle \) (where \( M\langle D\rangle \) and \( m\langle D\rangle \) refer to powers of \( A \) ). Hence, by Lemmas 5.4 and 5.6, \[ {4B}\left( {V\left( L\right) }\right) \leq {2n} + 2\left| {{s}_{ + }D}\right| + 2\left| {{s}_{ - }D}\right| - 4 \leq {4n}. \] But if \( D \) is alternating and reduced, then it is adequate, and the inequalities of Lemma 5.4 are then equalities. Then the first part of Lemma 5.7 implies that \( {4B}\left( {V\left( L\right) }\right) = {4n} \) . When \( D \) is prime and non-alternating, any diagram summand that is a non-trivial diagram of the unknot makes no contribution to the Jones polynomial but does contribute to the number of crossings. Thus, without loss of generality, it may be assumed that \( D \) is strongly prime. Then the strict inequality of Lemma 5.7 produces the required result. Corollary 5.10. If a link \( L \) has a connected, reduced, alternating diagram of \( n \) crossings, then it has no diagram of less than n crossings; any non-alternating prime diagram for \( L \) has more than \( n \) crossings. Proof. The existence of the reduced alternating diagram for \( L \) implies, using Theorem 5.9 (ii), that \( B\left( {V\left( L\right) }\right) = n \) . If \( L \) has another diagram of \( m \) crossings, then Theorem 5.9 (i) implies that \( n = B\left( {V\left( L\right) }\right) \leq m \) . If this second diagram is non-alternating, then, by Theorem 5.9 (iii), \( n = B\left( {V\left( L\right) }\right) < m \) . Note that, from Table 3.1 the eight-crossing knots \( {8}_{19},{8}_{20} \) and \( {8}_{21} \) have Jones polynomials of breadth less than eight. Thus, by the above, if they were to have alternating diagrams, those diagrams would have less than eight crossings. However, knots with crossing number 7 or less have been classified earlier in the table, and no knot appears with the same polynomial as \( {8}_{19},{8}_{20} \) or \( {8}_{21} \) . Thus those three knots have no alternating diagrams at all. They are non-alternating knots. The idea of taking parallels of diagrams provides another source of adequate diagrams, as will now be explained. The idea was used in [116], as detailed in the next theorem, to give a quick proof of a result of Thistlethwaite [119] establishing the invariance of the writhe of reduced alternating diagrams of a knot. Thus, if early compilers of knot tables believed writhe to be an invariant, they were correct within the domain of alternating diagrams. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_56_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_56_0.jpg) Figure 5.3 Definition 5.11. 
If \( D \) is a link diagram, let its \( r \) -parallel \( {D}^{r} \) be the diagram in which each link-component of \( D \) is replaced by \( r \) copies, all parallel in the plane, each copy repeating the "over" and "under" behaviour of the original link-component. Figure 5.3 shows a diagram and its 2-parallel. Lemma 5.12. If \( D \) is plus-adequate, then \( {D}^{r} \) is plus-adequate; if \( D \) is minus-adequate, then \( {D}^{r} \) is minus-adequate. Proof. The result is immediate, because \( {s}_{ + }\left( {D}^{r}\right) = {\left( {s}_{ + }D\right) }^{r} \) ; see Figure 5.4. If \( D \) is plus-adequate, no component of \( {s}_{ + }\left( {D}^{r}\right) \) abuts itself at a former crossing, as it runs parallel to a component of \( {s}_{ + }D \) which, itself, has that property. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_56_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_56_1.jpg) Figure 5.4 Theorem 5.13. Let \( D \) and \( E \) be diagrams, with \( {n}_{D} \) and \( {n}_{E} \) crossings respectively, for the same oriented link \( L \) . Suppose that \( D \) is plus-adequate; then \[ {n}_{D} - w\left( D\right) \leq {n}_{E} - w\left( E\right) \] Proof. Let \( \left\{ {L}_{i}\right\} \) be the components of \( L \), and let \( {D}_{i} \) and \( {E}_{i} \) be the subdiagrams of \( D \) and \( E \) corresponding to \( {L}_{i} \) . Choose non-negative integers \( {\mu }_{i} \) and \( {v}_{i} \) such that for each \( i, w\left( {D}_{i}\right) + {\mu }_{i} = w\left( {E}_{i}\right) + {v}_{i} \) . Change \( D \) to \( {D}_{ * } \) by changing each \( {D}_{i} \) to \( {D}_{*i} \) by adding to \( {D}_{i} \) a total of \( {\mu }_{i} \) positive kinks. Similarly, change \( E \) to \( {E}_{ * } \) by adding \( {v}_{i} \) positive kinks to \( {E}_{i} \) for each \( i \) . Note that \( {D}_{ * } \) is still plus-adequate, \( w\left( {D}_{*i}\right) = \) \( w\left( {E}_{*i}\right) \), and \( w\left( {D}_{ * }\right) = w\left( {E}_{ * }\right) \), because the sum of the signs of crossings of distinct components is determined by the linking numbers of components of \( L \) . Now \( {D}_{ * }^{r} \) and \( {E}_{ * }^{r} \) are diagrams of the same link, namely \( L \) with each \( {L}_{i} \) replaced by \( r \) copies with mutual linking number \( w\left( {D}_{*i}\right) \) . Thus they have the same Jones polynomial. But they have the same writhe (namely, \( {r}^{2}w\left( {D}_{ * }\right) \) ), and so \( \left\langle {D}_{ * }^{r}\right\rangle = \left\langle {E}_{ * }^{r}\right\rangle \) . Now by Lemma 5.4, \[ M\left\langle {E}_{ * }^{r}\right\rangle \leq \left( {{n}_{E} + \mathop{\sum }\limits_{i}{v}_{i}}\right) {r}^{2} + 2\left( {\left| {{s}_{ + }E}\right| + \mathop{\sum }\limits_{i}{v}_{i}}\right) r - 2, \] \[ M\left\langle {D}_{ * }^{r}\right\rangle = \left( {{n}_{D} + \mathop{\sum }\limits_{i}{\mu }_{i}}\right) {r}^{2} + 2\left( {\left| {{s}_{ + }D}\right| + \mathop{\sum }\limits_{i}{\mu }
_{i}}\right) r - 2, \] the equality occurring since \( {D}_{ * }^{r} \) is plus-adequate. This is true for all \( r \), so, comparing coefficients of \( {r}^{2} \) , \[ {n}_{D} + \mathop{\sum }\limits_{i}{\mu }_{i} \leq {n}_{E} + \mathop{\sum }\limits_{i}{v}_{i} \] so that \( {n}_{D} - \mathop{\sum }\limits_{i}w\left( {D}_{i}\right) \leq {n}_{E} - \mathop{\sum }\limits_{i}w\left( {E}_{i}\right) \) . Hence, once again using the fact that the sum of the signs of crossings of distinct components is determined by linking numbers of \( L,{n}_{D} - w\left( D\right) \leq {n}_{E} - w\left( E\right) \) . Corollary 5.14. Let \( D \) and \( E \) be as above. (i) The number of negative crossings of \( D \) is less than or equal to the number of negative crossings of \( E \) . (ii) The number of positive crossings in a minus-adequate diagram is minimal. (iii) An adequate diagram has the minimal number of crossings. (iv) Two adequate diagrams of the same link (e.g. reduced alternating diagrams) have the same writhe. The corollary is just restating the theorem in different ways. An example of the use of the corollary is the two famous diagrams (the Perko pair), originally labelled \( {10}_{161} \) and \( {10}_{162} \), shown in Figure 3.1. The diagrams \( {10}_{161} \) and \( \overline{{10}_{162}} \) represent the same knot. Observe that \( w\left( {10}_{161}\right) = - 8 \) and \( w\left( \overline{{10}_{162}}\right) = - {10} \) . Inspection of the diagrams shows that \( \overline{{10}_{162}} \) is minus-adequate, the minimal number possible of positive crossings being zero. However, \( {10}_{161} \) is plus-adequate, and so any diagram must have at least nine negative crossings. As \( {10}_{161} \) has no diagram of less than ten crossings (from the classification tables), it is impossible to display the minimal number of positive crossings and the minimal number of negative crossings on the same diagram, and the two minima are achieved by the two given diagrams. The above theory gives, then, the recent solutions to two of the three "conjectures" formulated by Tait a century ago-namely, that reduced alternating diagrams minimise crossing number, and that two such diagrams of the same link have the same writhe. The third of these "conjectures"-that two reduced alternating diagrams of the same link are related by a sequence of "flyping" operations-has also recently been proved [95]. Such a "flyping" operation is shown in Figure 5.5.
![5aaec141-7895-41cf-bdc1-c8a33b18f96f_57_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_57_0.jpg) Figure 5.5 ## Exercises 1. Give an example of an \( n \) -crossing diagram \( D \) for which \( M\langle D\rangle - m\langle D\rangle = 0 \) . 2. Let \( K \) be a prime alternating knot. Show that any adequate diagram of \( K \) must be alternating. 3. Let \( c\left( K\right) \) denote the crossing number of a knot \( K \) . If \( {K}_{1} \) and \( {K}_{2} \) are alternating knots, prove that \( c\left( {K}_{1}\right) + c\left( {K}_{2}\right) = c\left( {{K}_{1} + {K}_{2}}\right) \) . [Such an equality is not known to be true for arbitrary knots.] 4. A knot \( K \) has a reduced alternating diagram with \( n \) crossings where \( n \) is odd. Show that \( K \) is not equivalent to its reflection \( \bar{K} \) . Can \( K + K \) be equivalent to its reflection? 5. Let \( D \) be a reduced \( n \) -crossing diagram of a knot \( K \) and suppose \( B\left( {V\left( K\right) }\right) = n \) . If \( D \) is not alternating, in what sense can it be said to be nearly alternating? 6. Show that a Whitehead double (a satellite using the curve shown in Figure 6.5) of a non-trivial alternating knot never has trivial Jones polynomial. 7. Show that the diagram of the Kinoshita-Terasaka knot shown in Figure 3.3 is adequate. What is the breadth of the Jones polynomial of this knot? Consider the same questions about the Conway knot. Each knot diagram in Figure 3.3 can be regarded as obtained by "summing together" a pair of "tangle" diagrams of two linked arcs in a disc, each tangle meeting the boundary of the disc at four points. What properties of the tangle diagrams will ensure adequacy of the knot diagram? 8. Find two prime knots that are distinct, even when orientations are neglected, that have (minimal) crossing number 15. ## The Alexander Polynomial The Alexander polynomial of an oriented link is, like the Jones polynomial, a Laurent polynomial associated with the link in an invariant way. The two polynomials give different information about the geometric properties of knots and links. The Alexander polynomial will, for example, give a lower bound for the genus of a knot, but it is not as useful as the Jones polynomial for investigating the required number of crossings in a diagram. The Alexander polynomial will later, in Theorem 8.6, be described combinatorially in terms of diagrams in a way that parallels Proposition 3.7, but the real interest of this invariant is that, in contrast to the Jones polynomial, it has a long history [3] and is well understood in terms of elementary homology theory. The homology approach to the Alexander polynomial, which will now be explained, describes it as a certain invariant of a homology module. To appreciate this, a little information about presentation matrices of modules is needed. There follows, then, a basic discussion of this topic, aimed at obtaining results rapidly. It may be neglected by the cognoscenti. Suppose that \( M \) is a module over a commutative ring \( R \) . It will be assumed that \( R \) has a 1 and that \( {1x} = x \) for all \( x \in M \) . A module can be regarded, by the insecure, as a vector space over a ring rather than over a field. A module is free if any element in it can be uniquely expressed as a linear sum of elements in a base; the module of \( n \) -tuples of elements of \( R \) is the canonical example of a free \( R \) -module.
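For instance (an illustration not taken from the text): \( {\mathbb{Z}}^{2} \) is a free \( \mathbb{Z} \) -module with base \( \left\{ {\left( {1,0}\right) ,\left( {0,1}\right) }\right\} \), whereas the \( \mathbb{Z} \) -module \( \mathbb{Z}/6 \) is not free, for no element can belong to a base: if \( x \) were a base element, then \( {6x} = {0x} = 0 \) would violate uniqueness of expression.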
A finite presentation for \( M \) is an exact sequence \[ F\overset{\alpha }{ \rightarrow }E\overset{\phi }{ \rightarrow }M \rightarrow 0 \] where \( E \) and \( F \) are free \( R \) -modules with finite bases. If \( \alpha \) is represented by the matrix \( A \) with respect to bases \( {e}_{1},{e}_{2},\ldots ,{e}_{m} \) and \( {f}_{1},{f}_{2},\ldots ,{f}_{n} \) of \( E \) and \( F \) (the notation being so that \( \alpha {f}_{i} = \mathop{\sum }\limits_{j}{A}_{ji}{e}_{j} \) ), then the matrix \( A \), of \( m \) rows and \( n \) columns, is a presentation matrix for \( M \) . As \( \phi \) is a surjection, the images of \( {e}_{1},{e}_{2},\ldots ,{e}_{m} \) can be thought of as generators for \( M \), and the images of \( {f}_{1},{f}_{2},\ldots ,{f}_{n} \) as relations amongst those generators. Theorem 6.1. Any two presentation matrices \( A \) and \( {A}_{1} \) for \( M \) differ by a sequence of matrix moves of the following forms and their inverses: (i) Permutation of rows or columns; (ii) Replacement of the matrix \( A \) by \( \left( \begin{array}{ll} A & 0 \\ 0 & 1 \end{array}\right) \) ; (iii) Addition of an extra column of zeros to the matrix \( A \) ; (iv) Addition of a scalar multiple of a row (or column) to another row (or column). Proof. Suppose that the matrices \( A \) and \( {A}_{1} \) correspond, with respect to some bases, to the maps \( \alpha \) and \( {\alpha }_{1} \) in the following presentations: \[ F\overset{\alpha }{ \rightarrow }E\overset{\phi }{ \rightarrow }M \rightarrow 0 \] \[ \downarrow \gamma \; \downarrow \beta \; \updownarrow 1 \] \[ {F}_{1}\overset{{\alpha }_{1}}{ \rightarrow }{E}_{1}\overset{{\phi }_{1}}{ \rightarrow }M \rightarrow 0 \] The free base of \( E \) and the surjectivity of \( {\phi }_{1} \) can be used to construct a linear map \( \beta : E \rightarrow {E}_{1} \) so that \( {\phi }_{1}\beta = \phi \) . Similarly, the freeness of \( F \) and exactness at \( E \) and \( {E}_{1} \) produce a map \( \gamma : F \rightarrow {F}_{1} \) such that \( {\beta \alpha } = {\alpha }_{1}\gamma \) . If then \( \beta \) and \( \gamma \) are represented by matrices \( B \) and \( C \) with respect to the given bases, then \( {BA} = {A}_{1}C \) . A completely symmetrical argument produces maps \( {\beta }_{1} \) and \( {\gamma }_{1} \) with matrices \( {B}_{1} \) and \( {C}_{1} \) such that \( {B}_{1}{A}_{1} = A{C}_{1} \) . Letting " \( \sim \) " denote "equivalence by the above moves", the following is apparent. \[ A \sim \left( \begin{matrix} A & {B}_{1} \\ 0 & I \end{matrix}\right) \;\text{ (by (ii) and (iv)) } \] \[ \sim \left( \begin{matrix} A & {B}_{1} & {B}_{1}{A}_{1} \\ 0 & I & {A}_{1} \end{matrix}\right) \;\text{ (by (iii) and (iv)) } \] \[ \sim \left( \begin{matrix} A & {B}_{1} & 0 \\ 0 & I & {A}_{1} \end{matrix}\right) \;\text{ (by (iv), as }A{C}_{1} = {B}_{1}{A}_{1}\text{ ) } \] \[ \sim \left( \begin{matrix} A & {B}_{1} & 0 & {B}_{1}B \\ 0 & I & {A}_{1} & B \end{matrix}\right) \;\text{ (by (iii) and (iv)). } \] Now, for any \( e \in E,\phi {\beta
}_{1}{\beta e} = {\phi e} \), so, by the exactness at \( E \), the image of \( \left( {{\beta }_{1}\beta - {1}_{E}}\right) \) is contained in the image of \( \alpha \) . Because \( E \) is free, there is a map \( \delta : E \rightarrow F \) so that \( {\alpha \delta } = {\beta }_{1}\beta - {1}_{E} \) . Thus, if \( D \) is the matrix representing \( \delta \) , \( {AD} = {B}_{1}B - I \) . Hence, use of (iv) shows that \[ \left( \begin{matrix} A & {B}_{1} & 0 & {B}_{1}B \\ 0 & I & {A}_{1} & B \end{matrix}\right) \sim \left( \begin{matrix} A & {B}_{1} & 0 & I \\ 0 & I & {A}_{1} & B \end{matrix}\right) . \] Hence \[ A \sim \left( \begin{matrix} A & {B}_{1} & 0 & I \\ 0 & I & {A}_{1} & B \end{matrix}\right) \sim \left( \begin{matrix} {A}_{1} & B & 0 & I \\ 0 & I & A & {B}_{1} \end{matrix}\right) \sim {A}_{1}, \] where the second equivalence is by (i) and the third is by a repeat of the whole argument with the rôles of the two presentations interchanged. Definition 6.2. Suppose that \( M \) is a module over a commutative ring \( R \), having an \( m \times n \) presentation matrix \( A \) . The \( {r}^{th} \) elementary ideal \( {\mathcal{E}}_{r} \) of \( M \) is the ideal of \( R \) generated by all the \( \left( {m - r + 1}\right) \times \left( {m - r + 1}\right) \) minors of \( A \) . Of course, an \( \left( {m - r + 1}\right) \times \left( {m - r + 1}\right) \) minor is the determinant of the matrix that remains after the removal from \( A \) of \( \left( {r - 1}\right) \) rows and \( \left( {n - m + r - 1}\right) \) columns. The standard elementary properties of determinants, together with the above theorem, show that the elementary ideals are independent of the presentation matrix chosen to evaluate them. Note that \( {\mathcal{E}}_{r - 1} \subseteq {\mathcal{E}}_{r} \) . By convention, \( {\mathcal{E}}_{r} = R \) when \( r > m \) and \( {\mathcal{E}}_{r} = 0 \) if \( r \leq 0 \) . Note that if \( n = m \), the matrix \( A \) is square. Then there is only one \( m \times m \) minor, and \( {\mathcal{E}}_{1} \) is the principal ideal of \( R \) generated by det \( A \) . A standard example is gained by observing that a finite abelian group \( G \) is a \( \mathbb{Z} \) -module, it does have a square presentation matrix, and \( {\mathcal{E}}_{1} \) is the ideal of \( \mathbb{Z} \) generated by \( \left| G\right| \), the order of the group \( G \) .
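To make Definition 6.2 concrete, here is a minimal computational sketch (not from the book; the function name and the choice of example are ad hoc). It lists the minor generators of each elementary ideal of a presentation matrix, and it is run on the diagonal matrix presenting the abelian group \( G = \mathbb{Z}/2 \oplus \mathbb{Z}/6 \) as a \( \mathbb{Z} \) -module, recovering \( {\mathcal{E}}_{1} = \left( {12}\right) = \left( {\left| G\right| }\right) \) and \( {\mathcal{E}}_{2} = \left( 2\right) \) . The sympy library is used only for exact determinants.

```python
# A hedged sketch (not from the book): generators of the elementary ideals E_r
# of Definition 6.2, read off from an m x n presentation matrix A.  E_r is
# generated by all (m - r + 1) x (m - r + 1) minors of A.
from itertools import combinations
import sympy as sp

def elementary_ideal_generators(A, r):
    m, n = A.shape
    k = m - r + 1                       # size of the minors to take
    if k <= 0:                          # r > m: by convention E_r is the whole ring
        return [sp.Integer(1)]
    if k > m or k > n:                  # no k x k minors exist (covers r <= 0): E_r = 0
        return [sp.Integer(0)]
    return [A.extract(list(rows), list(cols)).det()
            for rows in combinations(range(m), k)
            for cols in combinations(range(n), k)]

# Z/2 (+) Z/6 as a Z-module, presented by the square matrix diag(2, 6).
A = sp.Matrix([[2, 0], [0, 6]])
print(elementary_ideal_generators(A, 1))   # [12]          so E_1 = (12) = (|G|)
print(elementary_ideal_generators(A, 2))   # [2, 0, 0, 6]  so E_2 = (2)
```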
Returning to geometric things, consider the first homology group, with integer coefficients, of an orientable, compact, connected surface \( F \) with \( n \) boundary components. Any elementary homology theory - simplicial homology or singular homology, for example (or just basic intuition) asserts that \( {H}_{1}\left( {F;\mathbb{Z}}\right) = { \oplus }_{{2g} + n - 1}\mathbb{Z} \) generated by \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \), where the \( {f}_{i} \) are the oriented simple closed curves shown in Figure 6.1. There follows now a consideration of what happens when \( F \) is embedded in \( {S}^{3} \), probably with the "bands" of Figure 6.1 twisted, linked and knotted. The next result can be regarded as an instance of Alexander duality. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_61_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_61_0.jpg) Figure 6.1 Proposition 6.3. Suppose that \( F \) is a connected, compact, orientable surface with non-empty boundary, piecewise linearly contained in \( {S}^{3} \) . Then the homology groups \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \) and \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) are isomorphic, and there is a unique nonsingular bilinear form \[ \beta : {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \times {H}_{1}\left( {F;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \] with the property that \( \beta \left( {\left\lbrack c\right\rbrack ,\left\lbrack d\right\rbrack }\right) = \operatorname{lk}\left( {c, d}\right) \) for any oriented simple closed curves \( c \) and \( d \) in \( {S}^{3} - F \) and \( F \) respectively. Proof. The surface \( F \) is now embedded in \( {S}^{3} \) . As before, \( {H}_{1}\left( {F;\mathbb{Z}}\right) = \) \( {\bigoplus }_{{2g} + n - 1}\mathbb{Z} \) generated by \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) . Let \( V \) be a regular neighbourhood of \( F \) in \( {S}^{3} \), so that \( V \) is just a 3-ball with \( \left( {{2g} + n - 1}\right) 1 \) -handles attached. The inclusion of \( F \) in \( V \) is a homotopy equivalence, and \( {H}_{1}\left( {\partial V;\mathbb{Z}}\right) = \left( {{\bigoplus }_{{2g} + n - 1}\mathbb{Z}}\right) \oplus \left( {{\bigoplus }_{{2g} + n - 1}\mathbb{Z}}\right) \) . For this, generators \( \left\{ {\left\lbrack {f}_{i}^{\prime }\right\rbrack : 1 \leq i \leq {2g} + n - 1}\right\} \) and \( \left\{ {\left\lbrack {e}_{i}\right\rbrack : 1 \leq i \leq {2g} + n - 1}\right\} \) can be chosen so that each \( {e}_{i} \) is the boundary of a small disc in \( V \) that meets \( {f}_{i} \) at one point, and the inclusion \( \partial V \subset V \) induces on homology a map sending \( \left\lbrack {f}_{i}^{\prime }\right\rbrack \) to \( \left\lbrack {f}_{i}\right\rbrack \) and \( \left\lbrack {e}_{i}\right\rbrack \) to zero. Furthermore, the orientations of the \( \left\{ {e}_{i}\right\} \) can be chosen so that \( \operatorname{lk}\left( {{e}_{i},{f}_{j}}\right) = {\delta }_{ij} \) (the Krönecker delta). This all relates to the homology of the standard inclusion of \( F \) in a standard handlebody \( V \) ; it is \( {S}^{3} - F \) that is of interest. Now, if \( {V}^{\prime } \) is the closure of \( {S}^{3} - V \), then the inclusion of \( {V}^{\prime } \) in \( {S}^{3} - F \) is a homotopy equivalence. 
The Mayer-Vietoris theorem for \( {S}^{3} \) expressed as the union of \( V \) and \( {V}^{\prime } \) asserts that the following sequence is exact: \[ {H}_{2}\left( {{S}^{3};\mathbb{Z}}\right) \rightarrow {H}_{1}\left( {\partial V;\mathbb{Z}}\right) \rightarrow {H}_{1}\left( {V;\mathbb{Z}}\right) \oplus {H}_{1}\left( {{V}^{\prime };\mathbb{Z}}\right) \rightarrow {H}_{1}\left( {{S}^{3};\mathbb{Z}}\right) . \] As the first and last groups in this sequence are zero, the map in the middle, induced by inclusion maps, is an isomorphism. Thus \( {H}_{1}\left( {{V}^{\prime };\mathbb{Z}}\right) \) (which is isomorphic to \( \left. {{H}_{1}\left( {{S}^{3} - F}\right) }\right) \) is isomorphic to \( {\bigoplus }_{{2g} + n - 1}\mathbb{Z} \) and is generated by \( \left\{ {\left\lbrack {e}_{i}\right\rbrack : 1 \leq i \leq }\right. \) \( {2g} + n - 1\} \) . Now define \[ \beta : {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \times {H}_{1}\left( {F;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \] by \( \beta \left( {\left\lbrack {e}_{i}\right\rbrack ,\left\lbrack {f}_{j}\right\rbrack }\right) = {\delta }_{ij} \), and extend linearly. Suppose now that \( c \) and \( d \) are any oriented simple closed curves in \( {S}^{3} - F \) and \( F \) respectively, where \( \left\lbrack c\right\rbrack = \mathop{\sum }\limits_{i}{\lambda }_{i}\left\lbrack {e}_{i}\right\rbrack \) and \( \left\lbrack d\right\rbrack = \mathop{\sum }\limits_{i}{\mu }_{i}\left\lbrack {f}_{i}\right\rbrack \) . Then \( \beta \left( {\left\lbrack c\right\rbrack ,\left\lbrack d\right\rbrack }\right) = \mathop{\sum }\limits_{i}{\lambda }_{i}{\mu }_{i} \) . However, \( \operatorname{lk}\left( {c,{f}_{j}}\right) = \) \( \left\lbrack c\right\rbrack = \mathop{\sum }\limits_{i}{\lambda }_{i}\left\lbrack \overline{{e}_{i}}\right\rbrack \in {H}_{1}\left( {{S}^{3} - {f}_{j};\mathbb{Z}}\right) \) . Thus \( \operatorname{lk}\left( {c,{f}_{j}}\right) = {\lambda }_{j} \) . Similarly, \( \operatorname{lk}\left( {d, c}\right) = \) \( \mathop{\sum }\limits_{i}{\mu }_{i}\left\lbrack {f}_{i}\right\rbrack \in {H}_{1}\left( {{S}^{3} - c;\mathbb{Z}}\right) \), but this is \( \mathop{\sum }\limits_{i}{\mu }_{i}\operatorname{lk}\left( {{f}_{i}, c}\right) \), which by the above is \( \mathop{\sum }\limits_{i}{\lambda }_{i}{\mu }_{i} \) . Hence, as required, \( \beta \left( {\left\lbrack c\right\rbrack ,\left\lbrack d\right\rbrack }\right) = \operatorname{lk}\left( {c, d}\right) \) . Note that, whereas the above proof uses bases, \( \beta \) is characterised by linking numbers and is independent of bases. Note, too, that the bases used are mutually dual with respect to \( \beta \) in the sense that \( \beta \left( {\left\lbrack {e}_{i}\right\rbrack ,\left\lbrack {f}_{j}\right\rbrack }\right) = {\delta }_{ij} \), and so, using standard base changing arguments, corresponding to any base for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) there is a \( \beta \) -dual base for \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \) and vice versa. Now suppose that \( F \) is a Seifert surface for an oriented link \( L \) in \( {S}^{3} \), so that \( \partial F = L \) . Let \( N \) be a regular neighbourhood of \( L \), a disjoint union of solid tori that "fatten" the components of \( L \) . Let \( X \) be the closure of \( {S}^{3} - N \) .
Then \( F \cap X \) is \( F \) with a (collar) neighbourhood of \( \partial F \) removed. Thus \( F \cap X \) is just a copy of \( F \) and, just to simplify notation, it will be regarded as actually being \( F \) . This \( F \) has a regular neighbourhood \( F \times \left\lbrack {-1,1}\right\rbrack \) in \( X \), with \( F \) identified with \( F \times 0 \) and the notation chosen so that the meridian of every component of \( L \) enters the neighbourhood at \( F \times - 1 \) and leaves it at \( F \times 1 \) . Let \( {i}^{ \pm } \) be the two embeddings \( F \rightarrow {S}^{3} - F \) defined by \( {i}^{ \pm }\left( x\right) = x \times \pm 1 \) and, if \( c \) is an oriented simple closed curve in \( F \), let \( {c}^{ \pm } = {i}^{ \pm }c \) . Definition 6.4. Associated to the Seifert surface \( F \) for an oriented link \( L \) is the Seifert form \[ \alpha : {H}_{1}\left( {F;\mathbb{Z}}\right) \times {H}_{1}\left( {F;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \] defined by \( \alpha \left( {x, y}\right) = \beta \left( {{\left( {i}^{ - }\right) }_{ \star }x, y}\right) \) . Note that, from Proposition \( {6.3},\alpha \) is defined and bilinear, and if \( a \) and \( b \) are simple closed oriented curves in \( F \), then \( \alpha \left( {\left\lbrack a\right\rbrack ,\left\lbrack b\right\rbrack }\right) = \operatorname{lk}\left( {{a}^{ - }, b}\right) \) . Further, by sliding with respect to the second coordinate of \( F \times \left\lbrack {-1,1}\right\rbrack \), this is equal to \( \operatorname{lk}\left( {a,{b}^{ + }}\right) \) . Taking a basis \( \left\{ \left\lbrack {f}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) with a \( \beta \) -dual basis \( \left\{ \left\lbrack {e}_{i}\right\rbrack \right\} \) for \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) \) as before, \( \alpha \) is represented by the Seifert matrix \( A \), where \[ {A}_{ij} = \alpha \left( {\left\lbrack {f}_{i}\right\rbrack ,\left\lbrack {f}_{j}\right\rbrack }\right) = \operatorname{lk}\left( {{f}_{i}^{ - },{f}_{j}}\right) = \operatorname{lk}\left( {{f}_{i},{f}_{j}^{ + }}\right) . \] An immediate consequence is that in \( {H}_{1}\left( {{S}^{3} - F;\mathbb{Z}}\right) ,\left\lbrack {f}_{i}^{ - }\right\rbrack = \mathop{\sum }\limits_{j}{A}_{ij}\left\lbrack {e}_{j}\right\rbrack \) and \( \left\lbrack {f}_{j}^{ + }\right\rbrack = \mathop{\sum }\limits_{i}{A}_{ij}\left\lbrack {e}_{i}\right\rbrack \) . Now let \( Y \) be the space \( X \) -cut-along- \( F \) .
This means that \( Y \) is \( X - F \) compact-ified, with two copies, \( {F}_{ - } \) and \( {F}_{ + } \), of \( F \) replacing the removed copy of \( F(Y \) is homeomorphic to \( X \) less the open neighbourhood \( F \times \left( {-1,1}\right) \) of \( F \) ). Of course, \( X \) can be recovered from \( Y \) by gluing \( {F}_{ + } \) and \( {F}_{ - } \) together; thus \( X = Y/\phi \), where \( \phi \) is the natural homeomorphism \( \phi : {F}_{ - } \rightarrow F \rightarrow {F}_{ + } \) . Now take countably many copies of \( Y \) and glue them together to form a new space \( {X}_{\infty } \) . More precisely, let \( \left\{ {{Y}_{i} : i \in \mathbb{Z}}\right\} \) be spaces homeomorphic to \( Y \), and let \( {h}_{i} : Y \rightarrow {Y}_{i} \) be a homeomorphism. Let \( {X}_{\infty } \) be the space formed from the disjoint union of all the \( {Y}_{i} \) by identifying \( {h}_{i}{F}_{ - } \) with \( {h}_{i + 1}{F}_{ + } \) by means of the homeomorphism \( {h}_{i + 1}\phi {h}_{i}^{-1} \) . The whole construction is illustrated in Figure 6.2, which shows \( X \) cut to form \( Y \), then \( Y \) "uncurled", and then the copies \( {Y}_{i} \) of \( Y \) that are glued together to form \( {X}_{\infty } \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_63_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_63_0.jpg) Figure 6.2 On \( {X}_{\infty } \) there is a natural self-homeomorphism \( t : {X}_{\infty } \rightarrow {X}_{\infty } \) defined by \( t \mid {Y}_{i} = {h}_{i + 1}{h}_{i}^{-1} \) . Clearly this is well defined; \( t \) is thought of as a translation of \( {X}_{\infty } \) by "one unit to the right". Hence the infinite cyclic group \( \langle t\rangle \) generated by \( t \) acts on \( {X}_{\infty } \) as a group of homeomorphisms. Thus \( \langle t\rangle \) also acts on \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) (this action is really by means of the homology automorphism \( {t}_{ \star } \) induced by \( t \) , but the asterisk here is always suppressed). The ring \( \mathbb{Z} \) acts on any abelian group, so the group-ring \( \mathbb{Z}\langle t\rangle \) acts on \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) . Recall that for a group \( G \) written multiplicatively, the group-ring \( \mathbb{Z}G \) consists of formal \( \mathbb{Z} \) -linear sums of elements of \( G \) . Addition in \( \mathbb{Z}G \) comes from formal addition, and multiplication is induced by the multiplication in \( G \) and the distributive law. The ring \( \mathbb{Z}\langle t\rangle \) is, then, just the ring \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) of Laurent polynomials in \( t \) (that is, simply polynomials in \( {t}^{-1} \) and \( t \) with \( \mathbb{Z} \) coefficients). The presence of this action means that \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) is a module over the ring \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) . This terminology is used in the next fundamental theorem, which finds a presentation matrix for this module. Theorem 6.5. Let \( F \) be a Seifert surface for an oriented link \( L \) in \( {S}^{3} \) and let \( A \) be a matrix, with respect to any basis of \( {H}_{1}\left( {F;\mathbb{Z}}\right) \), for the corresponding Seifert form. Then \( {tA} - {A}^{\tau } \) is a matrix that presents the \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) -module \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) . Proof. 
Express \( {X}_{\infty } \) as the union of subspaces \( {Y}^{\prime } \) and \( {Y}^{\prime \prime } \), where \( {Y}^{\prime } = \mathop{\bigcup }\limits_{i}{Y}_{{2i} + 1} \) and \( {Y}^{\prime \prime } = \mathop{\bigcup }\limits_{i}{Y}_{2i} \) . Each of these subspaces is the disjoint union of countably many copies of \( Y \), and their intersection is the union of countably many copies of \( F \) . The homology of \( {X}_{\infty } \) will now be investigated, using the Mayer-Vietoris theorem, in terms of the homology of \( {Y}^{\prime } \) and \( {Y}^{\prime \prime } \) . The Mayer-Vietoris long exact sequence of homology groups comes from a short exact sequence of chain complexes in a standard way. In this case the exact sequence of chain complexes is the following (where \( {C}_{n} \) is the \( {n}^{th} \) chain group): \[ 0 \rightarrow {C}_{n}\left( {{Y}^{\prime } \cap {Y}^{\prime \prime }}\right) \overset{{\alpha }_{n}}{ \rightarrow }{C}_{n}\left( {Y}^{\prime }\right) \oplus {C}_{n}\left( {Y}^{\prime \prime }\right) \overset{{\beta }_{n}}{ \rightarrow }{C}_{n}\left( {X}_{\infty }\right) \rightarrow 0. \] Note that \( t \) interchanges \( {Y}^{\prime } \) and \( {Y}^{\prime \prime } \) so that the chain groups of these individual spaces are not modules over \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) ; however, each term in the above sequence is such a module. To achieve an exact sequence of homology modules, \( {\alpha }_{n} \) and \( {\beta }_{n} \) must be module maps with \( {\beta }_{n}{\alpha }_{n} = 0 \) . This is achieved if \( {\beta }_{n} \) is defined by \( {\beta }_{n}\left( {a, b}\right) = a + b \) and, for \( x \in {C}_{n}\left( {{Y}_{i - 1} \cap {Y}_{i}}\right) ,{\alpha }_{n} \) is defined by \( {\alpha }_{n}\left( x\right) = \left( {-x, x}\right) \in \) \( {C}_{n}\left( {Y}_{i - 1}\right) \oplus {C}_{n}\left( {Y}_{i}\right) \) . This short exact sequence of chain complexes of modules over \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) gives rise, in the usual way, to the following long exact sequence of homology modules: \[ \rightarrow {H}_{1}\left( {{Y}^{\prime } \cap {Y}^{\prime \prime };\mathbb{Z}}\right) \overset{{\alpha }_{ \star }}{ \rightarrow }{H}_{1}\left( {{Y}^{\prime };\mathbb{Z}}\right) \oplus {H}_{1}\left( {{Y}^{\prime \prime };\mathbb{Z}}\right) \overset{{\beta }_{ \star }}{ \rightarrow }{H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \rightarrow \] \[ \rightarrow {H}_{0}\left( {{Y}^{\prime } \cap {Y}^{\prime \prime };\mathbb{Z}}\right) \overset{{\alpha }_{ \star }}{ \rightarrow }{H}_{0}\left( {{Y}^{\prime };\mathbb{Z}}\right) \oplus {H}_{0}\left( {{Y}^{\prime \prime };\mathbb{Z}}\right) . \] Now \( F \) is, by definition of the term "Seifert surface", connected, so \( {H}_{0}\left( {F;\mathbb{Z}}\right) = \) \( \mathbb{Z} \) . But \( {Y}^{\prime } \cap {Y}^{\prime \prime } \) is countably many copies of \( F \), each moved to the next by the homeomorphism \( t \) . Thus \( {H}_{0}\left( {{Y}^{\prime } \cap {Y}^{\prime \prim
e \prime };\mathbb{Z}}\right) \) consists of one copy of \( \mathbb{Z} \) for every power of \( t \) and so can be identified, as a module, with \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack { \otimes }_{\mathbb{Z}}{H}_{0}\left( {F;\mathbb{Z}}\right) \) (which is just a copy of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) ) with the generator of \( {H}_{0}\left( {{Y}_{0} \cap {Y}_{1};\mathbb{Z}}\right) \) corresponding to \( 1 \otimes 1 \) . However, \( {H}_{0}\left( {{Y}^{\prime };\mathbb{Z}}\right) \oplus {H}_{0}\left( {{Y}^{\prime \prime };\mathbb{Z}}\right) \) is just the direct sum of countably many copies of \( {H}_{0}\left( {Y;\mathbb{Z}}\right) \), so this may be identified with \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack { \otimes }_{\mathbb{Z}}{H}_{0}\left( {Y;\mathbb{Z}}\right) \), with the generator of \( {H}_{0}\left( {{Y}_{0};\mathbb{Z}}\right) \) corresponding to \( 1 \otimes 1 \) . Then \( {\alpha }_{ \star }\left( {1 \otimes 1}\right) = - \left( {1 \otimes 1}\right) + \left( {t \otimes 1}\right) \) . This implies that on \( {H}_{0}\left( {{Y}^{\prime } \cap {Y}^{\prime \prime };\mathbb{Z}}\right) ,{\alpha }_{ \star } \) is injective, and hence \( {\beta }_{ \star } \) is a surjection. Now apply to \( {H}_{1} \) the same line of reasoning. \( {H}_{1}\left( {{Y}^{\prime } \cap {Y}^{\prime \prime };\mathbb{Z}}\right) \) can be identified with \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack { \otimes }_{\mathbb{Z}}{H}_{1}\left( {F;\mathbb{Z}}\right) \) so that \( x \in {H}_{1}\left( {{Y}_{0} \cap {Y}_{1};\mathbb{Z}}\right) \) corresponds to \( 1 \otimes x \) . \( {H}_{1}\left( {{Y}^{\prime };\mathbb{Z}}\right) \oplus {H}_{1}\left( {{Y}^{\prime \prime };\mathbb{Z}}\right) \) can be identified with \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack { \otimes }_{\mathbb{Z}}{H}_{1}\left( {Y;\mathbb{Z}}\right) \) so that \( y \in {H}_{1}\left( {Y}_{0}\right) \) corresponds to \( 1 \otimes y \) . Then, as a module, \( {H}_{1}\left( {{Y}^{\prime } \cap {Y}^{\prime \prime };\mathbb{Z}}\right) \) has a base \( \left\{ {1 \otimes \left\lbrack {f}_{i}\right\rbrack }\right\} \) and \( {H}_{1}\left( {{Y}^{\prime };\mathbb{Z}}\right) \oplus {H}_{1}\left( {{Y}^{\prime \prime };\mathbb{Z}}\right) \) has a base \( \left\{ {1 \otimes \left\lbrack {e}_{i}\right\rbrack }\right\} \), where the \( {e}_{i} \) and \( {f}_{i} \) are the simple closed curves used in Proposition 6.3.
Now the definition of \( {\alpha }_{ \star } \) shows that \[ {\alpha }_{ \star }\left( {1 \otimes \left\lbrack {f}_{i}\right\rbrack }\right) = \mathop{\sum }\limits_{j}\left( {-{A}_{ij}\left( {1 \otimes \left\lbrack {e}_{j}\right\rbrack }\right) + {A}_{ji}\left( {t \otimes \left\lbrack {e}_{j}\right\rbrack }\right) }\right) , \] where \( A \) is the Seifert matrix with respect to the given bases. Hence, with respect to the module bases \( \left\{ {1 \otimes \left\lbrack {f}_{i}\right\rbrack }\right\} \) and \( \left\{ {1 \otimes \left\lbrack {e}_{i}\right\rbrack }\right\} ,{\alpha }_{ \star } \) is represented by the matrix \( {tA} - {A}^{\tau } \), and so, as \( {\beta }_{ \star } \) is surjective, this is a presentation matrix for the module \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) . It will be shown (fairly easily) in the following chapter on covering spaces that \( {X}_{\infty } \) and the action on it by \( \langle t\rangle \) are well defined, given the oriented link \( L \) . Accept that fact for the time being. It implies at once that the \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) -module \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) is an invariant of \( L \) . It is sometimes called the Alexander module of the oriented link. The actual module is cumbersome, but it has already been noted, as an immediate consequence of Theorem 6.1, that the elementary ideals of a module are invariants of that module. Definition 6.6. The \( {r}^{th} \) Alexander ideal of an oriented link \( L \) is the \( {r}^{th} \) elementary ideal of the \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) -module \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) . The \( {r}^{th} \) Alexander polynomial of \( L \) is a generator of the smallest principal ideal of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) that contains the \( {r}^{th} \) Alexander ideal. The first Alexander polynomial is called the Alexander polynomial and is written \( {\Delta }_{L}\left( t\right) \) . Note at once that a generator of a principal ideal is unique only up to multiplication by a unit (an invertible element) in the ring. Thus the Alexander polynomials, as defined above, are well defined only up to multiplication by \( \pm {t}^{\pm n} \) . Note, too, that the module \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) does have a square presentation matrix, namely \( {tA} - {A}^{\tau } \) , where \( A \) is a Seifert matrix (by Theorem 6.5 ). Hence, the first elementary ideal is principal, and the Alexander polynomial of \( L \) is given by \[ {\Delta }_{L}\left( t\right) \doteq \det \left( {{tA} - {A}^{\tau }}\right) \] where " \( \doteq \) " means "is equal to, up to multiplication by a unit". EXAMPLE 6.7. The unknot has a disc \( {D}^{2} \) for a Seifert surface. Cutting the exterior of the unknot along the disc gives \( {D}^{2} \times \left\lbrack {-1,1}\right\rbrack \), and gluing countably many copies of this together produces \( {X}_{\infty } = {D}^{2} \times \mathbb{R} \) . In this case, then, \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) = 0 \), and this zero module is presented by the \( 1 \times 1 \) unit matrix. Taking the determinant of this matrix (Theorem 6.5 is irrelevant here) shows that \( {\Delta }_{\text{unknot }}\left( t\right) \doteq 1 \) . EXAMPLE 6.8. Let \( {K}_{n} \) be the "twisted double" of the unknot, with orientation as shown in Figure 6.3.
The lower part of the diagram has \( {2n} - 1 \) crossings in the sense shown when \( {2n} - 1 \) is positive; if \( {2n} - 1 \) is negative, the crossings there are in the opposite sense. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_66_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_66_0.jpg) Figure 6.3 For the Seifert surface \( F \) take the surface shown, with generators for \( {H}_{1}\left( F\right) \) represented by the oriented simple closed curves \( {f}_{1} \) and \( {f}_{2} \) as indicated. Recall that the Seifert matrix \( A \) is given by \( {A}_{ij} = \operatorname{lk}\left( {{f}_{i},{f}_{j}^{ + }}\right) \), where \( {f}_{j}^{ + } \) is a copy of \( {f}_{j} \) pushed off \( F \) into \( {S}^{3} - F \) in the direction defined by the oriented meridian of \( {K}_{n} \) . (The meridian encircles \( {K}_{n} \) in a "right-hand screw" direction.) Thus \( A = \left( \begin{matrix} 1 & 0 \\ - 1 & n \end{matrix}\right) \) . Note that a diagonal entry \( \operatorname{lk}\left( {{f}_{i},{f}_{i}^{ + }}\right) \) is always the number of right-handed twists of an annular neighbourhood of \( {f}_{i} \) in \( F \) . It follows that \[ \left( {{tA} - {A}^{\tau }}\right) = \left( \begin{matrix} t - 1 & 1 \\ - t & n\left( {t - 1}\right) \end{matrix}\right) \] so that \( {\Delta }_{{K}_{n}} \doteq n\left( {{t}^{2} - {2t} + 1}\right) + t \) . Note that \( {K}_{0} \) is the unknot and that this formula gives \( {\Delta }_{{K}_{0}} \doteq t \) . That is in accord with the result of the previous example, as \( t \) is a unit in \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) . Of course, \( {K}_{1} \) is the trefoil knot \( {3}_{1} \), and so that has Alexander polynomial \( {t}^{2} - t + 1 \) . Similarly, \( {K}_{2} \) is the knot \( {5}_{2} \), and this has polynomial \( 2{t}^{2} - {3t} + 2 \) . EXAMPLE 6.9. Let \( p, q \) and \( r \) be odd integers and let \( P\left( {p, q, r}\right) \) be the pretzel knot shown in Figure 6.4. Once again the crossings are in the sense shown for positive integers and in the opposite sense for negative integers. A Seifert surface is shown, together with generators \( {f}_{1} \) and \( {f}_{2} \) . Then the Seifert matrix is given by \[ A = \frac{1}{2}\left( \begin{array}{ll} p + q & q + 1 \\ q - 1 & q + r \end{array}\right) \] ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_67_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_67_0.jpg) Figure 6.4 and so \[ {\Delta }_{P\left( {p, q, r}\right) }\left( t\right) \doteq \det \left( {{tA} - {A}^{\tau }}\right) = \frac{1}{4}\left( {\left( {{pq} + {qr} + {rp}}\right) \left( {{t}^{2} - {2t} + 1}\right) + {t}^{2} + {2t} + 1}\right) . \] Note that if \( p, q \) and \( r \) are such that \( \left( {{pq} + {qr} + {rp}}\right) = - 1 \) (for example, \( \left( {p, q, r}\right) = \left( {-3,5,7}\right) ) \), then \( {\Delta }_{P\left( {p, q, r}\right) }\left( t\right) \doteq t \), which is the Alexander polynomial for the unknot. The knot \( P\left( {-3,5,7}\right) \) is known as Seifert’s knot with unit Alexander polynomial; it can be shown to be a non-trivial knot by, for
example, calculating its Jones polynomial. As a special example, consider \( P\left( {3,3, - 3}\right) \) (which is also listed as \( {9}_{46} \) ). The Seifert matrix \( A \) is \( \left( \begin{array}{ll} 3 & 2 \\ 1 & 0 \end{array}\right) \) and \( \left( {{tA} - {A}^{\tau }}\right) = \left( \begin{matrix} {3t} - 3 & {2t} - 1 \\ t - 2 & 0 \end{matrix}\right) \) . The first elementary ideal of the Alexander module is then the ideal generated by the determinant \( - 2{t}^{2} + {5t} - 2 \) (that is, the Alexander polynomial). The second elementary ideal is that generated by the \( 1 \times 1 \) minors, so that is the ideal of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) generated by \( \left( {t - 2}\right) \) and \( \left( {{2t} - 1}\right) \) . It is not the whole ring, as the evaluation at \( t = - 1 \) gives a surjection \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \rightarrow \mathbb{Z} \) that maps the ideal in question to \( 3\mathbb{Z} \) . This should be contrasted with the situation for the knot \( {6}_{1} \) . This has a diagram the same as that of Figure 6.3 with \( n = 3 \) and the top two crossings of the diagram changed. For this, \( A = \left( \begin{matrix} - 1 & 1 \\ 0 & 2 \end{matrix}\right) \) and \( \left( {{tA} - {A}^{\tau }}\right) = \left( \begin{matrix} 1 - t & t \\ - 1 & {2t} - 2 \end{matrix}\right) \) . Here the Alexander polynomial is again \( - 2{t}^{2} + {5t} - 2 \), but now the second elementary ideal is the whole of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) . Thus these two knots are distinguished by the second, but not by the first, Alexander ideal. Thus, the Alexander polynomial does not distinguish some pairs of knots. Nevertheless it is quite good at distinguishing knots; there follows soon a list of the Alexander polynomials of the prime knots up to eight crossings which this invariant certainly distinguishes from one another. First, though, there follow some easy properties of the Alexander polynomial. Theorem 6.10. (i) For any oriented link \( L,{\Delta }_{L}\left( t\right) \doteq {\Delta }_{L}\left( {t}^{-1}\right) \) . (ii) For any (oriented) knot \( K,{\Delta }_{K}\left( 1\right) = \pm 1 \) . Analogues of these results hold for the \( {r}^{th} \) Alexander polynomials. Proof. (i) Suppose that \( A \) is an \( n \times n \) Seifert matrix for \( L \) .
Then \[ {\Delta }_{L}\left( t\right) \doteq \det \left( {{tA} - {A}^{\tau }}\right) = \det \left( {t{A}^{\tau } - A}\right) = {\left( -t\right) }^{n}\det \left( {{t}^{-1}A - {A}^{\tau }}\right) \doteq {\Delta }_{L}\left( {t}^{-1}\right) . \] (ii) Let \( A \) be the Seifert matrix for \( K \) coming from a standard base of \( {2g} \) oriented curves \( \left\{ {f}_{i}\right\} \) on a genus \( g \) Seifert surface \( F \) as shown in Figure 6.1. Now, \( {\Delta }_{K}\left( 1\right) = \pm \det \left( {A - {A}^{\tau }}\right) \), but \[ {\left( A - {A}^{\tau }\right) }_{ij} = \operatorname{lk}\left( {{f}_{i}^{ - },{f}_{j}}\right) - \operatorname{lk}\left( {{f}_{i}^{ + },{f}_{j}}\right) , \] and this is the algebraic number of intersections of \( {f}_{i} \) and \( {f}_{j} \) on the surface \( F \) . Hence \( \left( {A - {A}^{\tau }}\right) \) consists of \( g \) blocks of the form \( \left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) \) down the diagonal and zeros elsewhere. The determinant of that is 1 . Note that for a link \( L \) of more than one component, \( {\Delta }_{L}\left( 1\right) = 0 \) by essentially the same proof (the blocks on the diagonal of \( \left( {A - {A}^{\tau }}\right) \) are now followed by some zeros). Corollary 6.11. For any knot \( K \) , \[ {\Delta }_{K}\left( t\right) \doteq {a}_{0} + {a}_{1}\left( {{t}^{-1} + t}\right) + {a}_{2}\left( {{t}^{-2} + {t}^{2}}\right) + \cdots , \] where the \( {a}_{i} \) are integers and \( {a}_{0} \) is odd. Proof. By Theorem 6.10(i), \( {\Delta }_{K}\left( t\right) \) can be written in the form \( {\Delta }_{K}\left( t\right) = {b}_{0} + \) \( {b}_{1}t + {b}_{2}{t}^{2} + \cdots + {b}_{N}{t}^{N} \), where \( {b}_{N - r} = \pm {b}_{r} \) with the same choice of sign for all \( r \) . If \( N \) were odd, \( {\Delta }_{K}\left( 1\right) \) would be even, which contradicts (ii) of the theorem. Hence \( N \) is even. If \( {b}_{N - r} = - {b}_{r} \) for all \( r \), then \( {b}_{N/2} = 0 \) and so \( {\Delta }_{K}\left( 1\right) = 0 \) , again a contradiction. Thus \( {b}_{N - r} = {b}_{r} \) for all \( r \) and \( {b}_{N/2} \) is odd, and so, within the indeterminacy of multiplication by units, \( {\Delta }_{K}\left( t\right) \) is of the required form. In the following table, the coefficients \( {a}_{0},{a}_{1},{a}_{2},\ldots \), that occur in the expression \( {\Delta }_{K}\left( t\right) \doteq {a}_{0} + {a}_{1}\left( {{t}^{-1} + t}\right) + {a}_{2}\left( {{t}^{-2} + {t}^{2}}\right) + \cdots \) are recorded. The signs are chosen so that \( {\Delta }_{K}\left( 1\right) = + 1 \), this being Conway’s normalisation. For example, \[ {\Delta }_{{8}_{7}}\left( t\right) = - 5 + 5\left( {{t}^{-1} + t}\right) - 3\left( {{t}^{-2} + {t}^{2}}\right) + \left( {{t}^{-3} + {t}^{3}}\right) . \] Proposition 6.12. Let \( L \) be an oriented link. Then \( \bar{L} \) and \( \mathrm{r}L \), the reflection and the reverse of \( L \), have the same Alexander polynomial as \( L \) up to multiplication by units. If \( {K}_{1} \) and \( {K}_{2} \) are oriented knots, \( {\Delta }_{\left( {K}_{1} + {K}_{2}\right) }\left( t\right) \doteq {\Delta }_{{K}_{1}}\left( t\right) {\Delta }_{{K}_{2}}\left( t\right) \) . Proof. If \( A \) is a Seifert matrix for \( L, - A \) is a Seifert matrix for \( \bar{L} \) and \( {A}^{\tau } \) is a Seifert matrix for \( \mathrm{r}L \) . 
If \( {A}_{1} \) and \( {A}_{2} \) are Seifert matrices for \( {K}_{1} \) and \( {K}_{2} \), then \( \left( \begin{matrix} {A}_{1} & 0 \\ 0 & {A}_{2} \end{matrix}\right) \) is a Seifert matrix for \( {K}_{1} + {K}_{2} \) .

TABLE 6.1. Alexander Polynomial Table

| Knot | \( {a}_{0} \) | \( {a}_{1} \) | \( {a}_{2} \) | \( {a}_{3} \) |
| --- | --- | --- | --- | --- |
| \( {3}_{1} \) | \( -1 \) | \( 1 \) | | |
| \( {4}_{1} \) | \( 3 \) | \( -1 \) | | |
| \( {5}_{1} \) | \( 1 \) | \( -1 \) | \( 1 \) | |
| \( {5}_{2} \) | \( -3 \) | \( 2 \) | | |
| \( {6}_{1} \) | \( 5 \) | \( -2 \) | | |
| \( {6}_{2} \) | \( -3 \) | \( 3 \) | \( -1 \) | |
| \( {6}_{3} \) | \( 5 \) | \( -3 \) | \( 1 \) | |
| \( {7}_{1} \) | \( -1 \) | \( 1 \) | \( -1 \) | \( 1 \) |
| \( {7}_{2} \) | \( -5 \) | \( 3 \) | | |
| \( {7}_{3} \) | \( 3 \) | \( -3 \) | \( 2 \) | |
| \( {7}_{4} \) | \( -7 \) | \( 4 \) | | |
| \( {7}_{5} \) | \( 5 \) | \( -4 \) | \( 2 \) | |
| \( {7}_{6} \) | \( -7 \) | \( 5 \) | \( -1 \) | |
| \( {7}_{7} \) | \( 9 \) | \( -5 \) | \( 1 \) | |
| \( {8}_{1} \) | \( 7 \) | \( -3 \) | | |
| \( {8}_{2} \) | \( 3 \) | \( -3 \) | \( 3 \) | \( -1 \) |
| \( {8}_{3} \) | \( 9 \) | \( -4 \) | | |
| \( {8}_{4} \) | \( -5 \) | \( 5 \) | \( -2 \) | |
| \( {8}_{5} \) | \( 5 \) | \( -4 \) | \( 3 \) | \( -1 \) |
| \( {8}_{6} \) | \( -7 \) | \( 6 \) | \( -2 \) | |
| \( {8}_{7} \) | \( -5 \) | \( 5 \) | \( -3 \) | \( 1 \) |
| \( {8}_{8} \) | \( 9 \) | \( -6 \) | \( 2 \) | |
| \( {8}_{9} \) | \( 7 \) | \( -5 \) | \( 3 \) | \( -1 \) |
| \( {8}_{10} \) | \( -7 \) | \( 6 \) | \( -3 \) | \( 1 \) |
| \( {8}_{11} \) | \( -9 \) | \( 7 \) | \( -2 \) | |
| \( {8}_{12} \) | \( 13 \) | \( -7 \) | \( 1 \) | |
| \( {8}_{13} \) | \( 11 \) | \( -7 \) | \( 2 \) | |
| \( {8}_{14} \) | \( -11 \) | \( 8 \) | \( -2 \) | |
| \( {8}_{15} \) | \( 11 \) | \( -8 \) | \( 3 \) | |
| \( {8}_{16} \) | \( -9 \) | \( 8 \) | \( -4 \) | \( 1 \) |
| \( {8}_{17} \) | \( 11 \) | \( -8 \) | \( 4 \) | \( -1 \) |
| \( {8}_{18} \) | \( 13 \) | \( -10 \) | \( 5 \) | \( -1 \) |
| \( {8}_{19} \) | \( 1 \) | \( 0 \) | \( -1 \) | \( 1 \) |
| \( {8}_{20} \) | \( 3 \) | \( -2 \) | \( 1 \) | |
| \( {8}_{21} \) | \( -5 \) | \( 4 \) | \( -1 \) | |
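The entries in Table 6.1 can be checked mechanically: take any Seifert matrix for the knot, evaluate \( \det \left( {{tA} - {A}^{\tau }}\right) \), symmetrise about \( {t}^{0} \), and adjust the sign so that the value at \( t = 1 \) is \( +1 \). The short sketch below is an illustration only, not part of the book; the two genus-1 Seifert matrices for \( {3}_{1} \) and \( {4}_{1} \) are standard ones assumed here rather than derived in this chapter.

```python
# A small sketch, not from the book, of how a table entry can be checked with a
# computer algebra system.  The two Seifert matrices below are standard genus-1
# matrices for the trefoil (3_1) and the figure-eight knot (4_1); they are
# assumptions here rather than matrices computed in this chapter.
import sympy as sp

t = sp.symbols('t')

def conway_normalised(A):
    """det(tA - A^T), symmetrised about t^0 and multiplied by +-1 so that its
    value at t = 1 is +1 (Conway's normalisation for a knot)."""
    delta = sp.Poly(sp.expand((t * A - A.T).det()), t)
    exps = [m[0] for m in delta.monoms()]          # exponents of the nonzero terms
    mid = sp.Rational(max(exps) + min(exps), 2)    # centre of the exponent range
    symmetric = sp.expand(delta.as_expr() * t**(-mid))
    return sp.expand(sp.sign(symmetric.subs(t, 1)) * symmetric)

A_trefoil      = sp.Matrix([[-1, 1], [0, -1]])
A_figure_eight = sp.Matrix([[ 1, 1], [0, -1]])

print(conway_normalised(A_trefoil))        # t - 1 + 1/t : a_0 = -1, a_1 = 1
print(conway_normalised(A_figure_eight))   # -t + 3 - 1/t : a_0 = 3, a_1 = -1
```

Any Seifert matrix for the same knot gives the same normalised answer, so this is a quick sanity check rather than a computation that depends on a particular choice of surface.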
Proposition 6.13. If a knot \( K \) has genus \( g \), then \( {2g} \geq \) breadth \( {\Delta }_{K}\left( t\right) \) .

Proof. Let \( F \) be a genus \( g \) Seifert surface for \( K \) . Then \( {tA} - {A}^{\tau } \) is a \( {2g} \times {2g} \) matrix, and so the degree in \( t \) of the polynomial \( \det \left( {{tA} - {A}^{\tau }}\right) \) is at most \( {2g} \) .

The last result can be considered as an application of the Alexander polynomial. Although it is only in the form of an inequality, it gives geometric information about individual knots. The surface constructed by removing the interiors of disjoint discs from a genus \( g \) surface is said to still have genus \( g \) . Proposition 6.13 generalises at once to show that if a link \( L \) with \( c \) components bounds a connected orientable surface of genus \( g \), then
\[ {2g} + c - 1 \geq \text{ breadth }{\Delta }_{L}\left( t\right) . \]
Now, it is a theorem discovered by R. H. Crowell [22] (see also [17]) that if \( L \) has an alternating diagram that gives, by means of Seifert's method of Theorem 2.2, a connected Seifert surface of genus \( \widehat{g} \), then breadth \( {\Delta }_{L}\left( t\right) = 2\widehat{g} + c - 1 \) . Thus the genus is always minimal for a Seifert surface coming in this way from any alternating diagram.

There are oriented links of two or more components that have their Alexander polynomials equal to zero. The next proposition describes some of them, but there are even more.

Proposition 6.14. Suppose an oriented link \( L \) bounds a disconnected oriented surface in \( {S}^{3} \) ; then \( {\Delta }_{L}\left( t\right) \) is the zero polynomial.

Proof. Suppose \( \Sigma \) is a disconnected oriented surface with boundary \( L \) . Form a connected surface \( F \) by connecting the components of \( \Sigma \) together with thin "pipes". Take a set of oriented curves \( \left\{ {f}_{i}\right\} \) that give a base for \( {H}_{1}\left( F\right) \), choosing \( {f}_{1} \) to be a curve encircling once around one of the pipes and ensuring that \( {f}_{1} \) is disjoint from the other \( {f}_{i} \) . This \( {f}_{1} \) bounds a disc \( D \) in \( {S}^{3} \) with \( D \cap F = \partial D \) . Then \( \operatorname{lk}\left( {{f}_{1},{f}_{i}^{ \pm }}\right) = 0 \) for all \( i \) . Hence the corresponding Seifert matrix \( A \) has its first row and first column consisting entirely of zeros. Of course then \( \det \left( {{tA} - {A}^{\tau }}\right) = 0 \) .

The idea of a satellite knot was mentioned in Chapter 1.
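Before turning to satellites, Propositions 6.13 and 6.14 can be illustrated with a few lines of computer algebra. The sketch below is again only an illustration, not part of the text: it uses the Seifert matrix of the pretzel knot \( P\left( {3,3, - 3}\right) \) given earlier, which comes from a genus-1 surface, together with an artificial matrix whose first row and column are zero, of the kind produced by the curve \( {f}_{1} \) in the proof of Proposition 6.14; the remaining entries of that second matrix are made up.

```python
# Sketch (not from the book): checking Proposition 6.13 for P(3,3,-3) and the
# vanishing phenomenon of Proposition 6.14 for a made-up Seifert matrix whose
# first row and column are zero.
import sympy as sp

t = sp.symbols('t')

def alexander(A):
    """det(tA - A^T), a representative of the Alexander polynomial."""
    return sp.expand((t * A - A.T).det())

def breadth(p):
    """Difference between the highest and lowest exponents of t in p (p != 0)."""
    exps = [m[0] for m in sp.Poly(p, t).monoms()]
    return max(exps) - min(exps)

# Seifert matrix of P(3,3,-3) from a genus-1 Seifert surface, so 2g = 2.
A = sp.Matrix([[3, 2], [1, 0]])
delta = alexander(A)
print(delta)                      # -2*t**2 + 5*t - 2
print(breadth(delta))             # 2, and indeed breadth <= 2g = 2

# Zero first row and column (other entries arbitrary): det(tB - B^T) vanishes.
B = sp.Matrix([[0, 0, 0, 0],
               [0, 1, 2, 0],
               [0, 0, 1, 1],
               [0, 3, 0, 2]])
print(alexander(B))               # 0
```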
There is a simple formula that gives the Alexander polynomial of a satellite knot in terms of those of its companion and its pattern. This will now be explained. Theorem 6.15. In \( {S}^{3} \), let \( T \) be a standard, unknotted, solid torus that contains a knot \( K \) . Let \( e : T \rightarrow {S}^{3} \) be an embedding of \( T \) onto a neighbourhood of a knot \( C \), so that e maps a longitude of \( T \) (coming from the inclusion of \( T \) in \( {S}^{3} \) ) onto a longitude of \( C \) . Then \[ {\Delta }_{eK}\left( t\right) \doteq {\Delta }_{K}\left( t\right) {\Delta }_{C}\left( {t}^{n}\right) \] where \( K \) represents \( n \) times a generator of \( {H}_{1}\left( T\right) \) . Proof. Construct Seifert surfaces for the pattern knot \( K \) and the satellite \( {eK} \) in the following way: The unknotted solid torus \( T \) projects onto an annulus in the plane. Apply the Seifert method (Theorem 2.2) to the projection of \( K \), with some orientation, into this annulus. Seifert circuits in the annulus, connected by twisted strips at the crossings, are obtained. Cap off, with discs just above the annulus, any circuits that bound in the annulus; then use annuli to cap off adjacent pairs of curves that encircle the annulus in opposite directions. Add a vertical annulus to each remaining curve so that the result is an oriented surface \( F \) contained in \( T \), with \( \partial F \) being the union of \( K \) and \( n \) longitudes of \( T \) oriented in the same direction. A Seifert surface \( F \cup {nD} \) for \( K \) then consists of the union of \( F \) and \( n \) parallel copies of a spanning disc of \( T \) . Similarly, a Seifert surface \( {eF} \cup {nG} \) for \( {eK} \) consists of the union of \( {eF} \) and \( n \) parallel copies of a genus \( g \) Seifert surface \( G \) of the companion knot \( C \) (this \( G \) being regarded as in the closure of \( {S}^{3} - {eT} \) ). Note that if \( f \) is an oriented simple closed curve in \( T - K \), then \( \operatorname{lk}\left( {f, K}\right) = \) \( f \sqcap F \), where " \( \sqcap \) " denotes the algebraic sum of the transverse intersection points. Of course, \( f \sqcap F = {ef} \sqcap {eF} = {ef} \sqcap \left( {{eF} \cup {nG}}\right) = \operatorname{lk}\left( {{ef},{eK}}\right) \) . Thus linking numbers of curves in \( T \) are preserved by the embedding \( e \) . Note, as well, that if \( {f}^{\prime } \) is a simple closed curve in the interior of \( G \) (or is near to such a curve), then \( \operatorname{lk}\left( {{ef},{f}^{\prime }}\right) = 0 \) . This is because \( {ef} \) is homologous in \( {eT} \) to a sum of longitudes of \( C \), and they bound copies of \( G \) that can be taken to be disjoint from \( {f}^{\prime } \) . A Seifert matrix \( B \) for the satellite knot \( {eK} \) can be obtained as follows: Use as base for \( {H}_{1}\left( {{eF} \cup {nG}}\right) \) the image under \( e \) of curves in \( F \) that give a base for \( {H}_{1}\left( {F \cup {nD}}\right) \), together with \( n \) parallel copies, each in one of the \( n \) copies of \( G \) , of curves that provide a base for \( {H}_{1}\left( G\right) \) . 
Using the above remarks, the resulting Seifert matrix has the form \( \left( \begin{matrix} M & 0 \\ 0 & X \end{matrix}\right) \), where \( M \) is a Seifert matrix for \( K \) and \( X \) is the following \( n \times n \) block matrix, in which \( A \) is a Seifert matrix for \( C \) : \[ X = \left( \begin{matrix} A & A & A & \ldots & A \\ {A}^{\tau } & A & A & \ldots & A \\ {A}^{\tau } & {A}^{\tau } & A & \ldots & A \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {A}^{\tau } & {A}^{\tau } & {A}^{\tau } & \ldots & A \end{matrix}\right) \] It is consideration of linking numbers of curves in the various parallel copies of \( G \) that gives rise to these off-diagonal copies of \( A \) and \( {A}^{\tau } \) . Consider now the linear combination \( \mathop{\sum }\limits_{{i = 1}}^{n}{t}^{n - i} \) (row \( i \) ) of the rows of blocks of the block matrix \[ {tX} - {X}^{\tau } = \left( \begin{matrix} {tA} - {A}^{\tau } & {tA} - A & {tA} - A & \ldots & {tA} - A \\ t{A}^{\tau } - {A}^{\tau } & {tA} - {A}^{\tau } & {tA} - A & \ldots & {tA} - A \\ t{A}^{\tau } - {A}^{\tau } & t{A}^{\tau } - {A}^{\tau } & {tA} - {A}^{\tau } & \ldots & {tA} - A \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ t{A}^{\tau } - {A}^{\tau } & t{A}^{\tau } - {A}^{\tau } & t{A}^{\tau } - {A}^{\tau } & \ldots & {tA} - {A}^{\tau } \end{matrix}\right) . \] That linear combination produces a row of blocks in which every entry is \( {t}^{n}A - {A}^{\tau } \) . Thus, replacing the first row of \( {tX} - {X}^{\tau } \) by this row and subtracting the first column from all the other columns, it is seen that \( \det \left( {{tX} - {X}^{\tau }}\right) \) \[ = {t}^{-{2g}\left( {n - 1}\right) }\det \left( \begin{matrix} {t}^{n}A - {A}^{\tau } & 0 & 0 & \ldots & 0 \\ \star & t\left( {A - {A}^{\tau }}\right) & \star & \ldots & \star \\ \star & 0 & t\left( {A - {A}^{\tau }}\right) & \ldots & \star \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \star & 0 & 0 & \ldots & t\left( {A - {A}^{\tau }}\right) \end{matrix}\right) . \] Now, by Theorem 6.10, \( \det \left( {A - {A}^{\tau }}\right) = 1 \), so that \( \det t\left( {A - {A}^{\tau }}\right) = {t}^{2g} \) . Thus \[ \det \left( {{tB} - {B}^{\tau }}\right) = \det \left( {{tM} - {M}^{\tau }}\right) \det \left( {{t}^{n}A - {A}^{\tau }}\right) , \] and this is the required formula. In Chapter 8, a Conway normalisation of the Alexander polynomial will be defined, and then the above result will become \( {\Delta }_{eK}\left( t\right) = {\Delta }_{K}\left( t\right) {\Delta }_{C}\left( {t}^{n}\right) \) . ## Corollary 6.16. (i) If \( {\Delta }_{{C}_{1}}\left( t\right) = {\Delta }_{{C}_{2}}\left( t\right) \), then satellites of \( {C}_{1} \) and \( {C}_{2} \) with the same pattern have the same Alexander polynomial. (ii) Reversing the direction of \( C \) has no effect on \( {\Delta }_{eK}\left( t\right) \) (though it can change the knot \( {eK} \) ). (iii) A Whitehead double of any knot has Alexander polynomial equal to 1. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_72_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_72_0.jpg) Figure 6.5 The final statement needs a little clarification. A Whitehead double of \( C \) is a satellite formed by using for \( K \subset T \) the curve shown in Figure 6.5 or its reflection. Note that \( K \) is unknotted in \( {S}^{3} \) and represents zero in \( {H}_{
1}\left( T\right) \), so that \( n = 0 \) in the formula of Theorem 6.15.

There is no formula for the Jones polynomial of a satellite knot analogous to that just proved for the Alexander polynomial. Indeed, the fact that interesting phenomena are encountered when searching for such an analogue underlies the discussion of Chapter 13.

One further satisfying view of the Alexander polynomial of a knot gives an interpretation of it as a characteristic polynomial in the following way: Suppose that throughout the preceding theory the field of rational numbers, \( \mathbb{Q} \), is used instead of the ring of integers, \( \mathbb{Z} \) . Not very much is changed. In particular, if \( A \) is a Seifert matrix, the matrix \( \left( {{tA} - {A}^{\tau }}\right) \) presents \( {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) as a \( \mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack \) -module. Information about the elementary ideals of this (marginally) new module can be extracted from \( \left( {{tA} - {A}^{\tau }}\right) \) as before, though in general the information obtained is slightly weaker than when using integer coefficients. However, a generator of the first elementary ideal is still \( \det \left( {{tA} - {A}^{\tau }}\right) \) . Thus the Alexander polynomial of the knot is, up to multiplication by a unit (now an element of the form \( q{t}^{\pm n} \) for any non-zero \( q \in \mathbb{Q} \) ), equal to the determinant of any other square matrix representing this new module.

Theorem 6.17. Let \( K \) be a knot in \( {S}^{3} \) and let \( t : {X}_{\infty } \rightarrow {X}_{\infty } \) be the (covering) translation of \( {X}_{\infty } \) (the infinite cyclic cover of the exterior of \( K \) ). Then \( {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) is a finite-dimensional vector space over the field \( \mathbb{Q} \) . The characteristic polynomial of the linear map \( {t}_{ \star } : {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \rightarrow {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) is, up to multiplication by a unit, equal to the Alexander polynomial of \( K \) .

Proof. The ring \( \mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack \) is a principal ideal domain. A proof of this, using the Euclidean algorithm, is much the same as the proof that shows the ring of ordinary polynomials over a field to be a principal ideal domain.
Over \( \mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack \) the module \( {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) is finitely presented by the matrix \( \left( {{tA} - {A}^{\tau }}\right) \) . However, over a principal ideal domain, any finitely presented module is just a direct sum of cyclic modules (see, for example, [38]). This is the same as saying that the module is presented by a square diagonal matrix. Thus \( {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) is presented by a matrix \( \operatorname{diag}\left( {{p}_{1},{p}_{2},\ldots ,{p}_{N}}\right) \), where \( {p}_{i} \in \mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack \), and \( {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) is isomorphic as a module to \( {\bigoplus }_{i = 1}^{N}\left( {\mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack /{p}_{i}}\right) \) . None of the \( {p}_{i} \) is zero, for then the Alexander polynomial, the determinant of the matrix, would be zero. However, for a knot \( K \) , \( {\Delta }_{K}\left( 1\right) = \pm 1 \) . Consider, then, a typical summand of the form \( \mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack /p \) where, multiplying by a unit, it may be assumed that \( p = {a}_{0} + {a}_{1}t + {a}_{2}{t}^{2} + \cdots + {a}_{r}{t}^{r} \) with \( {a}_{r} = 1 \) . Over the field \( \mathbb{Q} \), the vector space \( \mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack /p \) has a finite base \( \left\{ {1, t,{t}^{2},\ldots ,{t}^{r - 1}}\right\} \) , for the relation " \( p = 0 \) " expresses other powers of \( t \) linearly in terms of these. Of course, the action of \( {t}_{ \star } \) is just multiplication by \( t \) . With respect to this base, then, \( {t}_{ \star } \) is represented by the matrix \[ M = \left( \begin{matrix} 0 & 0 & 0 & & & - {a}_{0} \\ 1 & 0 & 0 & & & - {a}_{1} \\ 0 & 1 & 0 & & & - {a}_{2} \\ \vdots & & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & \ldots & 1 & 0 & - {a}_{r - 2} \\ 0 & 0 & 0 & \ldots & 1 & - {a}_{r - 1} \end{matrix}\right) \] As a polynomial in \( x \), the characteristic polynomial of this is the determinant of \( \left( {M - {xI}}\right) \) . Multiplying the \( {i}^{\text{th }} \) row of this matrix by \( {x}^{i - 1} \) and, for \( i \geq 2 \), adding it to the top row, this determinant is seen to be the determinant of \[ \left( \begin{matrix} 0 & 0 & 0 & & & - \mathop{\sum }\limits_{{i = 0}}^{r}{a}_{i}{x}^{i} \\ 1 & - x & 0 & & & - {a}_{1} \\ 0 & 1 & - x & & & - {a}_{2} \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \ldots & 1 & - x & - {a}_{r - 2} \\ 0 & 0 & 0 & \ldots & 1 & - x - {a}_{r - 1} \end{matrix}\right) , \] which is \( {\left( -1\right) }^{r}\mathop{\sum }\limits_{{i = 0}}^{r}{a}_{i}{x}^{i} \) . This is \( \pm p \) . Now, up to a unit, the Alexander polynomial is the determinant of the presentation matrix \( \operatorname{diag}\left( {{p}_{1},{p}_{2},\ldots ,{p}_{N}}\right) \) for \( {H}_{1}\left( {{X}_{\infty };\mathbb{Q}}\right) \) . This is just \( \mathop{\prod }\limits_{{i = 1}}^{N}{p}_{i} \), and the above consideration applied to the summands of \( {\bigoplus }_{i = 1}^{N}\left( {\mathbb{Q}\left\lbrack {{t}^{-1}, t}\right\rbrack /{p}_{i}}\right) \) shows that (with \( x \) in place of \( t \) ) this is the characteristic polynomial of \( {t}_{ \star } \) . For more on the Alexander polynomial viewed as part of algebraic topology, see the survey by C. McA. Gordon [35]. ## Exercises 1. 
Find a Seifert surface \( F \) for the knot \( {7}_{3} \), select a convenient base for \( {H}_{1}\left( {F;\mathbb{Z}}\right) \) and find the Seifert matrix with respect to this base. Calculate the Alexander polynomial of \( {7}_{3} \) and check that your answer agrees with that given in the table of Alexander polynomials. 2. Calculate the Alexander polynomial of the two oriented links shown below. ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_74_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_74_0.jpg) 3. Determine the way that the Alexander polynomial of each of the oriented links shown below is related to the Alexander polynomials of knots \( {K}_{1} \) and \( {K}_{2} \) . ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_74_1.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_74_1.jpg) 4. Show that for a knot \( K \), the Alexander polynomial satisfies \( {\Delta }_{K}\left( t\right) \doteq 1 \) if and only if \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) = 0. \) 5. What polynomials can arise as Alexander polynomials of genus 1 knots? 6. Figure 12.7 (b) shows (neglecting the zeros) a very symmetric diagram of a three-component link called the Borromean rings. Different choices of directions for the components produce eight possible orientations for the link. Calculate the Alexander polynomial for each of the oriented links so formed. 7. Suppose that \( B \) is any \( {2n} \times {2n} \) matrix of integers with the property that \( B - {B}^{\tau } \) consists of \( n \) blocks of the form \( \left( \begin{matrix} 0 & 1 \\ - 1 & 0 \end{matrix}\right) \) running down the diagonal and zeros elsewhere. Prove that there exists a knot for which \( B \) is a Seifert matrix. 8. Calculate the Alexander polynomial of the knot \( K \) shown below. What is the genus of \( K \) ? Is \( K \) a prime knot? ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_75_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_75_0.jpg) 9. Show that for any knot \( K \) with Alexander polynomial \( {\Delta }_{K}\left( t\right) \), there is, for any positive integer \( n \), another knot with Alexander polynomial \( {\Delta }_{K}\left( {t}^{n}\right) \) . 10. Show that any knot \( C \) has a (non-trivial) satellite knot of genus 1 with the same Alexander polynomial as the trefoil knot \( {3}_{1} \) . 11. A fibred knot \( K \) is a knot with the property that its exterior \( X \) is a bundle over \( {S}^{1} \) with fibre an orientable surface \( F \) . This means that \( X \) is homeomorphic to \( F \times \left\lbrack {0,1}\right\rbrack \) quotiented by the identification \( \left( {x,0}\right) \equiv \left( {{hx},1}\right) \) for some homeomorphism \( h : F \rightarrow \) \( F \) . What is the Alexander polynomial of such a knot \( K \) ? Prove that \( g\left( K\right) \) is the genus of the surface \( F \) . If a genus 1 knot is fibred, what can be said about its Alexander polynomial? 7 ## Covering Spaces In order to bring to a satisfactory conclusion the theory of the last chapter, it is necessary to show that the space \( {X}_{\infty } \), together with the given action on it by the infinite cyclic group, is uniquely defined by the oriented link \( L \) under conside
ration. Here it will be seen that \( {X}_{\infty } \) is a certain covering space of the exterior of \( L \), and the theory of coverings will show it to be well defined. That is the present motivation, but it should be understood that the theory of covering spaces is an important part of many areas of mathematics (particularly Riemann surfaces and geometric structures on manifolds). It is intimately related to the study of the (appropriately named) fundamental group of a fairly general type of topological space. Thus the following discussion will be in the language of general topological spaces.

In the whole of this chapter, \( B \) will be a path-connected, locally path-connected topological space. By definition, the locally path-connected condition means that each point has a base of path-connected neighbourhoods (that is, there are "arbitrarily small" such neighbourhoods for each point).

Definition 7.1. A continuous map \( p : E \rightarrow B \) is a covering map if (i) \( E \) is path-connected and non-empty and (ii) for each \( b \in B \), there exists an open neighbourhood \( V \) of \( b \) such that \( {p}^{-1}V \) is a disjoint union of open sets in \( E \), each of which is mapped homeomorphically by \( p \) onto \( V \) . The map \( p \) is called the projection of the covering space \( E \) to the base space \( B \) .

A covering map \( p : E \rightarrow B \) is, in other terminology, a locally trivial fibre map with discrete fibre. As an exercise, observe that the restriction of the covering map \( p \) to any proper subset of \( E \) fails to give a covering of \( B \) .

Examples 7.2. (i) \( p : \mathbb{R} \rightarrow {S}^{1} \equiv \{ z \in \mathbb{C} : \left| z\right| = 1\} \) given by \( p\left( t\right) = \exp \left( {2\pi it}\right) \) . (ii) \( p : {S}^{1} \rightarrow {S}^{1} \) given by \( p\left( z\right) = {z}^{n} \) . (iii) \( p : {S}^{n} \rightarrow \mathbb{R}{P}^{n} \equiv {S}^{n}/\left( {x \sim \pm x}\right) \), where \( p \) is the quotient map. (iv) \( \bar{p} : {S}^{3} \rightarrow {L}_{p, q} \), where for \( p \) and \( q \) coprime integers, \( {L}_{p, q} \) is the lens space defined as the quotient of \( {S}^{3} \) by a certain action of the cyclic group of order \( p \) . Regard \( {S}^{3} \) as \( \left\{ {\left( {{z}_{1},{z}_{2}}\right) \in {\mathbb{C}}^{2} : {\left| {z}_{1}\right| }^{2} + {\left| {z}_{2}\right| }^{2} = 1}\right\} \) .
If \( g \) generates the group, the action is defined by \( g\left( {{z}_{1},{z}_{2}}\right) = \left( {{z}_{1}\exp \left( {{2\pi i}/p}\right) ,{z}_{2}\exp \left( {-{2\pi iq}/p}\right) }\right) \) . The projection \( \bar{p} \) is the quotient map. ## Easy Properties 7.3. (i) The covering map \( p : E \rightarrow B \) maps open sets to open sets. It is locally a homeomorphism. In particular, \( E \) is locally path-connected. (ii) The covering map \( p : E \rightarrow B \) is surjective. (iii) The open set \( V \) of the definition can be taken to be path-connected. (iv) \( B \) has the quotient topology induced by \( p : E \rightarrow B \) . (v) If \( {b}_{1} \) and \( {b}_{2} \) belong to \( B \), then there is a bijection between \( {p}^{-1}{b}_{1} \) and \( {p}^{-1}{b}_{2} \) (this follows from the next lemma). Lemma 7.4. A covering map \( p : E \rightarrow B \) has the path lifting property. That is, given a point \( {e}_{0} \in E \) and a continuous map \( f : \left\lbrack {0,1}\right\rbrack \rightarrow B \) such that \( f\left( 0\right) = p\left( {e}_{0}\right) \), there exists a unique continuous map \( \widehat{f} : \left\lbrack {0,1}\right\rbrack \rightarrow E \) such that \( \widehat{f}\left( 0\right) = {e}_{0} \) and \( p\widehat{f} = f \) . Proof. The space \( B \) is the union of open sets \( \{ V\} \), as in the definition of a covering. Thus, by the compactness of \( \left\lbrack {0,1}\right\rbrack \) there is a dissection \( 0 = {t}_{0} < {t}_{1} < \) \( {t}_{2} < \cdots < {t}_{n} = 1 \) so that \( f\left\lbrack {{t}_{i - 1},{t}_{i}}\right\rbrack \subset {V}_{i} \) for some such open set \( {V}_{i} \) . Assume that \( \widehat{f} \mid \left\lbrack {0,{t}_{i - 1}}\right\rbrack \) has been defined with \( \widehat{f}\left( {t}_{i - 1}\right) \in {W}_{i, j} \) where \( {W}_{i, j} \) is one of the open subsets of \( {p}^{-1}{V}_{i} \) for which \( p : {W}_{i, j} \rightarrow {V}_{i} \) is a homeomorphism. Define \( \widehat{f} \mid \left\lbrack {{t}_{i - 1},{t}_{i}}\right\rbrack \) to be equal to \( {\left( p \mid {W}_{i, j}\right) }^{-1}f \) . For the uniqueness, suppose \( \widehat{\phi } \) is a second lift of \( f \), with \( \widehat{\phi }\left( 0\right) = {e}_{0} \) . Let \( \tau = \sup \{ t : \widehat{\phi } \mid \left\lbrack {0, t}\right\rbrack = \widehat{f} \mid \left\lbrack {0, t}\right\rbrack \} \) ; by continuity, \( \widehat{\phi }\left( \tau \right) = \widehat{f}\left( \tau \right) \) . Then, if \( \tau < 1 \), the above argument shows that \( \widehat{\phi }\left( {\tau + \epsilon }\right) = \widehat{f}\left( {\tau + \epsilon }\right) \) for all sufficiently small \( \epsilon \), contradicting the definition of \( \tau \) . Lemma 7.5. A covering map \( p : E \rightarrow B \) has homotopy-lifting property for paths. That is, given a continuous map \( \widehat{f} : \left\lbrack {0,1}\right\rbrack \times \{ 0\} \rightarrow E \) and a continuous map \( f : \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \rightarrow B \) such that \( f\left( {t,0}\right) = p\widehat{f}\left( {t,0}\right) \), there exists a unique continuous extension of \( \widehat{f} \) to \( \widehat{f} : \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \rightarrow E \) such that \( p\widehat{f} = f \) . Proof. 
The proof of this is entirely analogous to the proof of the previous lemma; here a dissection of the square \( \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \) into a mesh of small squares, each mapping into some \( {V}_{i} \), is used. Elementary homotopy theory assigns to every topological space \( X \), equipped with a selected base point \( {x}_{0} \), a group \( {\Pi }_{1}\left( {X,{x}_{0}}\right) \) called its fundamental group. Recall that an element of the fundamental group is represented by a loop in \( X \) based at \( {x}_{0} \) (that is, a continuous function \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow X \) with \( \alpha \left( 0\right) = \alpha \left( 1\right) = {x}_{0} \) ), an actual element being a homotopy class, keeping ends fixed at \( {x}_{0} \), of such loops. The product of loops \( \alpha \) and \( \beta \), written \( \alpha \cdot \beta \), is formed by following around the loop \( \alpha \) and then \( \beta \) ; the inverse of \( \alpha \) is the loop \( \bar{\alpha } \), where \( \bar{\alpha }\left( t\right) = \alpha \left( {1 - t}\right) \) . These operations induce the group structure on the homotopy classes. A continuous function \( f \) from one based space to another induces a homomorphism \( {f}_{ \star } \) between their fundamental groups with the usual functorial properties. In particular, homeomorphic based spaces have isomorphic fundamental groups. A path in \( X \) from \( {x}_{0} \) to \( {x}_{1} \) induces, by means of path-composition, an isomorphism from \( {\Pi }_{1}\left( {X,{x}_{0}}\right) \) to \( {\Pi }_{1}\left( {X,{x}_{1}}\right) \), the isomorphisms induced by different paths being related by inner au-tomorphisms. Thus usually one restricts consideration to path-connected spaces, and then choice of base point is irrelevant up to group isomorphism; the base point is then often omitted from the notation. However, base points can never be neglected completely; any attempt to do so usually produces the first homology group. In general the fundamental group of a space is not abelian. If one "makes it abelian" by inserting relations that declare that all elements commute, then the result is indeed the first homology group. This is a one-dimensional version of the Hurewicz isomorphism theorem: for a connected cell complex \( X \), the quotient of \( {\Pi }_{1}\left( {X,{x}_{0}}\right) \) by its commutator subgroup (the subgroup generated by all elements of the form \( {ab}{a}^{-1}{b}^{-1} \) ) is isomorphic to \( {H}_{1}\left( {X;\mathbb{Z}}\right) \) . The Homotopy Exact Sequence 7.6. An immediate consequence of Lemma 7.5 is the following homotopy exact sequence for a covering map: \[ \{ 1\} \rightarrow {\Pi }_{1}\left( {E,{e}_{0}}\right) \overset{{p}_{ \star }}{ \rightarrow }{\Pi }_{1}\left( {B,{b}_{0}}\right) \rightarrow {\Pi }_{0}\left( F\right) \rightarrow \{ 1\} . \] Here \( p\left( {e}_{0}\right) = {b}_{0} \) and \( F = {p}^{-1}{b}_{0} \) . Note that \( {\Pi }_{0}\left( F\right) \) is just the set of path components of \( F \) (which are just the individual points of \( F \) ) with a "zero", the component \( \left\{ {e}_{0}\right\} \) . The map \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \rightarrow {\Pi }_{0}\left(
F\right) \) is defined as follows. A loop \( \alpha \) in \( B \) based at \( {b}_{0} \) lifts to a path \( \widehat{\alpha } \) starting at \( {e}_{0} \) . The required map sends the element \( \left\lbrack \alpha \right\rbrack \) represented by \( \alpha \) to \( \widehat{\alpha }\left( 1\right) \) .

In this theory of lifting a path (or homotopy of paths) in the base space to a path in a covering space, one thinks of \( E \) as "above" \( B \) so that "lifting" has some intuitive feel about it. The next result answers speculation about whether a map from any space into \( B \) might be lifted. The answer, for a reasonable type of space, is that it can be lifted unless fundamental group considerations forbid the enterprise.

Proposition 7.7. Let \( p : E \rightarrow B \) be a covering map with base points \( {e}_{0} \in E \) and \( {b}_{0} \in B \), chosen so that \( p{e}_{0} = {b}_{0} \) . Suppose \( X \) is a path-connected, locally path-connected, space with base point \( {x}_{0} \), and let \( f : \left( {X,{x}_{0}}\right) \rightarrow \left( {B,{b}_{0}}\right) \) be continuous. Then there exists a continuous map \( g : \left( {X,{x}_{0}}\right) \rightarrow \left( {E,{e}_{0}}\right) \) such that \( {pg} = f \) if and only if
\[ {f}_{ \star }{\Pi }_{1}\left( {X,{x}_{0}}\right) \subset {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) . \]
When such a \( g \) exists, it is unique.

Proof. If \( g \) exists, then \( {p}_{ \star }{g}_{ \star } = {f}_{ \star } \), and the result is clear. Conversely, suppose \( {f}_{ \star }{\Pi }_{1}\left( {X,{x}_{0}}\right) \subset {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \) . If \( x \in X \), choose a path \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow X \) so that \( \alpha \left( 0\right) = {x}_{0} \) and \( \alpha \left( 1\right) = x \) . By Lemma 7.4, the path \( {f\alpha } \) lifts to a path \( \widehat{f\alpha } : \left\lbrack {0,1}\right\rbrack \rightarrow E \) with \( \widehat{f\alpha }\left( 0\right) = {e}_{0} \) . Note that if \( g \) exists as advertised, then \( g\left( x\right) = \widehat{f\alpha }\left( 1\right) \) by the uniqueness in Lemma 7.4, because \( {g\alpha } \) is a lift of \( {f\alpha } \) . Thus if \( g \) exists, it is unique. Now define \( g \) by \( g\left( x\right) = \widehat{f\alpha }\left( 1\right) \) . To check that this is well defined, let \( \beta \) be another path in \( X \) from \( {x}_{0} \) to \( x \) .
Then \( {f}_{ \star }\left\lbrack {\alpha \cdot \bar{\beta }}\right\rbrack \in {f}_{ \star }{\Pi }_{1}\left( {X,{x}_{0}}\right) \subset {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \), so there exists a loop \( \gamma : \left\lbrack {0,1}\right\rbrack \rightarrow E \) with \( \gamma \left( 0\right) = {e}_{0} = \gamma \left( 1\right) \) so that \( {p\gamma } \) is homotopic, relative to \( \{ 0,1\} \), to \( f\left( {\alpha \cdot \bar{\beta }}\right) \) . By Lemma 7.5 that homotopy can be lifted, relative to \( \{ 0,1\} \), so that (at the end of the homotopy) there is a loop \( \widetilde{\gamma } : \left\lbrack {0,1}\right\rbrack \rightarrow E \) with \( \widetilde{\gamma }\left( 0\right) = {e}_{0} = \widetilde{\gamma }\left( 1\right) \) such that \( p\widetilde{\gamma } = f\left( {\alpha \cdot \bar{\beta }}\right) \) . Thus the lift of \( f\left( {\alpha \cdot \bar{\beta }}\right) \) starting at \( {e}_{0} \) is \( \widetilde{\gamma } \), a loop at \( {e}_{0} \) . Hence \( p\widetilde{\gamma }\left( t\right) = {f\alpha }\left( {2t}\right) \) and \( p\widetilde{\gamma }\left( {1 - t}\right) = {f\beta }\left( {2t}\right) \) for all \( 0 \leq t \leq 1/2 \) . Thus \( \widehat{f\alpha }\left( 1\right) = \widetilde{\gamma }\left( {1/2}\right) = \widehat{f\beta }\left( 1\right) \), and so \( g \) is well defined. The continuity of \( g \) follows from the fact that \( X \) is locally path-connected, and so on sufficiently small open sets \( g \) is \( {p}^{-1}f \) .

Suppose \( p : \left( {E,{e}_{0}}\right) \rightarrow \left( {B,{b}_{0}}\right) \) is a covering map with base points as above. The subgroup \( {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \) of \( {\Pi }_{1}\left( {B,{b}_{0}}\right) \) is called the group of the covering. Note that \( {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \) is, as explained in the above proof, the set of homotopy classes, relative to \( \{ 0,1\} \), of loops \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow B \) based at \( {b}_{0} \) such that \( \widehat{\alpha }\left( 1\right) = {e}_{0} \), that is, such that \( \alpha \) lifts to a loop. Note too that \( {p}_{ \star } \) is injective (from the homotopy exact sequence) so that \( {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \) is isomorphic to \( {\Pi }_{1}\left( {E,{e}_{0}}\right) \) .

Proposition 7.8. Suppose \( p : \left( {E,{e}_{0}}\right) \rightarrow \left( {B,{b}_{0}}\right) \) and \( {p}^{\prime } : \left( {{E}^{\prime },{e}_{0}^{\prime }}\right) \rightarrow \left( {B,{b}_{0}}\right) \) are two based coverings of \( B \) with the same group. Then these are equivalent in the sense that there exists a homeomorphism \( h : \left( {{E}^{\prime },{e}_{0}^{\prime }}\right) \rightarrow \left( {E,{e}_{0}}\right) \) such that \( {ph} = {p}^{\prime } \) .

Proof. By Proposition 7.7, the map \( {p}^{\prime } \) lifts to a map \( h : \left( {{E}^{\prime },{e}_{0}^{\prime }}\right) \rightarrow \left( {E,{e}_{0}}\right) \) such that \( {ph} = {p}^{\prime } \) . Similarly, by Proposition 7.7 applied to the map \( p \) and covering \( {p}^{\prime } \), there is a map \( {h}^{\prime } : \left( {E,{e}_{0}}\right) \rightarrow \left( {{E}^{\prime },{e}_{0}^{\prime }}\right) \) such that \( {p}^{\prime }{h}^{\prime } = p \) . But then \( h{h}^{\prime } : \left( {E,{e}_{0}}\right) \rightarrow \left( {E,{e}_{0}}\right) \) is a lift of the map \( p \) with respect to the covering \( p \) . The identity map is another such lift.
Hence, by the uniqueness of Proposition 7.7, \( h{h}^{\prime } \) is the identity. Similarly, \( {h}^{\prime }h \) is the identity, and so \( h \) and \( {h}^{\prime } \) are mutually inverse homeomorphisms.

Now recall from Chapter 6 the map \( p : {X}_{\infty } \rightarrow X \), where \( X \) is the exterior of an oriented link \( L, F \) is a Seifert surface and \( {X}_{\infty } \) is the space constructed by gluing together countably many copies of \( Y \), where \( Y \) is \( X \) -cut-along- \( F \) .

Theorem 7.9. The covering space \( p : {X}_{\infty } \rightarrow X \) of the exterior \( X \) of an oriented link \( L \) does not depend on the choice of Seifert surface used in its construction. Further, the action of the infinite cyclic group on \( {X}_{\infty } \) is likewise independent of \( F \) .

Proof. It is clear from the construction of \( {X}_{\infty } \) that a loop \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow X \) lifts to a loop \( \widehat{\alpha } \) (that is, \( \widehat{\alpha }\left( 0\right) = \widehat{\alpha }\left( 1\right) \) ) in \( {X}_{\infty } \) provided \( \widehat{\alpha }\left( 0\right) \) and \( \widehat{\alpha }\left( 1\right) \) are in the same copy of \( Y \) . This is so if and only if \( \alpha \) intersects \( F \) zero times algebraically, for every time \( \alpha \) crosses \( F \), its lift moves from one copy of \( Y \) to an adjacent copy. Thus \( \alpha \) lifts to a loop if and only if the linking number of \( \alpha \) with \( L \) (that is, the sum of the linking numbers with the components of \( L \) ) is zero. Now, that statement is independent of the choice of Seifert surface for \( L \), so the group of the cover does not depend on \( F \) . Using the preceding proposition, the first result follows at once.

Consider the action by the infinite cyclic group \( \langle t\rangle \) on \( {X}_{\infty } \) . If \( \gamma : \left\lbrack {0,1}\right\rbrack \rightarrow {X}_{\infty } \) is any path from some point \( a \) to \( {ta} \), then, by the above reasoning, \( {p\gamma } \) is a loop in \( X \) having linking number 1 with \( L \) . Conversely, the lift of any such loop in \( X \) is a path from some \( a \) to \( {ta} \) . Suppose \( {p}^{\prime } : {X}_{\infty }^{\prime } \rightarrow X \) is a second version of \( {X}_{\infty } \) constructed from Seifert surface \( {F}^{\prime } \) and \( {h}^{\prime } : {X}_{\infty } \rightarrow {X}_{\infty }^{\prime } \) is the homeomorphism such that \( {p}^{\prime }{h}^{\prime } = p \) . Trivially \( {p}^{\prime }{h}^{\prime }\gamma = {p\gamma } \), so that \( {h}^{\prime }\gamma \), being a lift of the loop \( {p\gamma } \) with respect to the covering \( {p}^{\prime } \), is a path in \( {X}_{\infty }^{\prime } \) from a point to its \( t \) -translate. Hence \( t{h}^{\prime }\left( a\right) = {h}^{\prime }\left( {ta}\right) \), and the homeomorphism \( {h}^{\prime } \) preserves the \( t \) -action. This then concludes the proof of the fact that the Alexander polynomial of an oriented link \( L \)
is well defined (up to multiplication by a unit).

The covering space \( {X}_{\infty } \) of \( X \) is called the infinite cyclic covering of the link exterior. A loop in \( X \) lifts to a loop in \( {X}_{\infty } \) if and only if it has zero linking number with \( L \) . In the case when \( L \) is a knot, this means, by Theorem 1.5, that the loop represents the zero element in \( {H}_{1}\left( X\right) \) . Then \( {p}_{ \star }{\Pi }_{1}\left( {X}_{\infty }\right) \) is the kernel of the natural map \( {\Pi }_{1}\left( X\right) \rightarrow {H}_{1}\left( X\right) \), which, for any \( X \), is the commutator subgroup of \( {\Pi }_{1}\left( X\right) \) .

In determining the Alexander polynomial of a knot, any convenient method of constructing \( {X}_{\infty } \) may be used. It is just necessary to construct a covering of the knot exterior with the property that a loop lifts to a loop if and only if it has zero linking number with the knot. The diagrams of Figure 7.1 show such a method for the exterior of the 4-crossing knot. The exterior of the knot \( {4}_{1} \) in the first diagram can be obtained by the following "surgery" procedure on the exterior of the knot of the second diagram (which is unknotted). Remove a (shaded) solid torus as shown and replace it with a solid torus in such a way that on the boundary of the (shaded) toral hole, the curve shown bounds a disc in the replacing torus. To contemplate that replacement, imagine cutting across a disc spanning the outside of the toral hole. This creates discs on either side of the cut. Twist one of these discs through \( {2\pi } \) about an axis through its centre, thereby reinserting two crossings into the knot; then glue the discs together again. The curve on the boundary of the hole has been changed to become a meridian of the toral hole. Then the solid torus fits neatly into the hole, and the first diagram is recreated. The third diagram is the same as the second, up to isotopy; care has been taken to keep track of the curve on the boundary of the toral hole. Now the infinite cyclic cover can be created by cutting across a disc spanning the unknot in this diagram and taking infinitely many copies glued end to end. The result is a copy of \( {D}^{2} \times \mathbb{R} \) from which infinitely many solid tori have been removed, as shown, and which are to be replaced so that the indicated curves become the boundaries of discs. (For this to happen it is important that the shaded solid torus was chosen to have zero linking number with the knot.)
The \( t \) action on the cover is "translation to the right by one unit". Then \( {H}_{1}\left( {{X}_{\infty };\mathbb{Z}}\right) \) is generated as a module by the class of the ![5aaec141-7895-41cf-bdc1-c8a33b18f96f_81_0.jpg](images/5aaec141-7895-41cf-bdc1-c8a33b18f96f_81_0.jpg) Figure 7.1 curve \( x \) shown in the diagram, and there is one relator represented by the curve shown on the boundary of one toral hole. (The relators corresponding to curves on the other toral holes are translates of the first by powers of \( t \) .) This relator is \( - {t}^{-1}x + {3x} - {tx} \) . Thus the module is represented by the \( 1 \times 1 \) matrix \( - {t}^{-1} + 3 - t \) , and so (taking its determinant) \( - {t}^{-1} + 3 - t \) is the Alexander polynomial of the knot \( {4}_{1} \) . Note that the essence of the preceding discussion is that the diagram of the knot \( {4}_{1} \) can be changed to a diagram of the unknot by changing one crossing. That crossing is then encircled by the shaded solid torus. If \( K \) is a knot with a diagram that can be unknotted with \( m \) crossing changes, then the procedure can be repeated using \( m \) solid tori, each encircling one of these crossings. The result is a presentation of the Alexander module with \( m \) generators and \( m \) relators. Thus this module has an \( m \times m \) presentation matrix, and so its \( r \) th elementary ideal is \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \) for every \( r > m \) . This proves the following result about unknotting numbers (see Chapter 1). Theorem 7.10. If the rth elementary ideal of the Alexander module of a knot \( K \) is not the whole of \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \), then \( K \) has unknotting number \( u\left( K\right) \geq r \) . As an example, consider the pretzel knot \( P\left( {3,3, - 3}\right) \) discussed in Example 6.9. There it was shown that the second elementary ideal of the Alexander module is not \( \mathbb{Z}\left\lbrack {{t}^{-1}, t}\right\rbrack \), and so \( u\left( {P\left( {3,3, - 3}\right) }\right) \geq 2 \) . It is easy to see that two crossing changes do undo the knot, and so \( u\left( {P\left( {3,3, - 3}\right) }\right) = 2 \) . More information on the results of this technique can be found in [103]. Another example of a covering may be useful. Let \( {X}_{\infty } \rightarrow X \) be, as before, the infinite cyclic covering of the exterior of an oriented \( n \) -component link \( L \) . The cyclic group \( \langle t\rangle \) acts on \( {X}_{\infty } \) . Then \( {X}_{\infty }/\left\langle {t}^{2}\right\rangle \rightarrow X \) is a 2 -fold covering of \( X \) called the cyclic double cover of \( X \) . Denote \( {X}_{\infty }/\left\langle {t}^{2}\right\rangle \) by \( \widehat{{X}_{2}} \) . (This \( \widehat{{X}_{2}} \) can, if desired, be constructed from two copies of \( Y \), where \( Y \) is \( X \) cut along a Seifert surface, gluing together parts of the boundary in the obvious way.) A loop in \( X \) lifts to a loop in \( \widetilde{{X}_{2}} \) if and only if it has linking number zero modulo 2 with \( L \) . The covering is that corresponding to the kernel of the map \( {\Pi }_{1}\left( {X,{x}_{0}}\right) \rightarrow {H}_{1}\left( {X;\mathbb{Z}}\right) \rightarrow \mathbb{Z} \rightarrow \) \( \mathbb{Z}/2\mathbb{Z} \), where the second map sends each meridian to \( 1 \in \mathbb{Z} \) . Consider loops on the boundary of the solid torus neighbourhood \( {N}_{i} \) of any component \( {L}_{i} \) of \( L \) . 
A longitude lifts to a loop in \( \widehat{{X}_{2}} \) . A meridian does not lift to a loop, but the square of a meridian does lift to a loop. Thus, identifying \( \partial {N}_{i} \) with \( {S}^{1} \times {S}^{1} \), with longitude and meridian corresponding to the two factors, the covering restricted to the part of it over \( \partial {N}_{i} \) is a covering of a torus by a torus. It is equivalent to \( \left( {{z}_{1},{z}_{2}}\right) \mapsto \left( {{z}_{1},{z}_{2}^{2}}\right) \), where \( {S}^{1} \) is the unit complex numbers. (This is also clear from the construction of \( \widehat{{X}_{2}} \) by gluing together two copies of \( Y \) .) That map extends to a map \( {S}^{1} \times {D}^{2} \rightarrow {S}^{1} \times {D}^{2} \) defined by \( \left( {{z}_{1},{z}_{2}}\right) \mapsto \left( {{z}_{1},{z}_{2}^{2}}\right) \) . This is a covering map except on \( {S}^{1} \times \{ 0\} \) . It is called a covering branched over \( {S}^{1} \times \{ 0\} \) . Thus \( n \) solid tori can be glued to the boundary components of \( \widehat{{X}_{2}} \) to create \( {X}_{2} \), another \( n \) solid tori can be glued to the boundary of \( X \) to recreate \( {S}^{3} \) and the double covering map \( \widehat{{X}_{2}} \rightarrow X \) can be extended, as described above over each solid torus, to achieve a map \( {X}_{2} \rightarrow {S}^{3} \) called the double cover of \( {S}^{3} \) branched over \( L \) . This is a two-fold cover when restricted to (a map to) the complement of \( L \) . Note that this construction is independent of the orientation of \( L \), since \( 1 = - 1 \) in \( \mathbb{Z}/2\mathbb{Z} \) . The construction can be generalised at once to construct an \( r \) -fold cyclic cover of \( {S}^{3} \) branched over an oriented link. Two-bridge links provide a simple example. As explained in Chapter 1, a 2- bridge (or rational) link is obtained by gluing together the boundaries of two trivial 2-string tangles. The double cover of a ball branched over a trivial 2-string tangle is a solid torus. Thus the double cover of \( {S}^{3} \) branched over the link is two solid tori with their boundaries glued together. That is a lens space or, exceptionally, \( {S}^{3} \) or \( {S}^{1} \times {S}^{2} \) . In fact, \( {L}_{p, q} \) is the double cover of \( {S}^{3} \) branched over the \( \left( {p, q}\right) \) 2-bridge link. It has been shown that only this link has \( {L}_{p, q} \) as its double branched cover [46]. Further facts about covering spaces will be useful in Chapter 11. As has been noted, a covering map \( p : \left( {E,{e}_{0}}\right) \rightarrow \left( {B,{b}_{0}}\right) \) induces an injection on fundamental groups, and \( {p}_{ \star }{\Pi }_{1}\left( {E,{e}_{0}}\right) \) is called the group of the covering. The chief further result is that provided \( B \) is "semi-locally simply connected" (locally contractible will do fine), then for any given sub