TITLE: The 9 step problem QUESTION [6 upvotes]: A man can take a step forward, backward, left and right with equal probability. Find the probability that after 9 steps he will be just 1 step away from his initial point. I have done similar questions with movement restricted to forward and backward only, but this one just blows my mind. REPLY [3 votes]: Here's a new combinatorial approach. Assume that there are $2n+1$ steps. Denote directions $H$ (Horizontal) and $V$ (Vertical) and polarities $+$ and $-$ (i.e. $H+$ is right, $H-$ left, $V+$ forward, $V-$ backward). Of the $2n+1$ steps, choose any $\color{red}{n}$ steps. Number of ways: $\binom {2n+1}n$. For the chosen steps, assign polarity "$+$". For the remaining $n+1$ steps, assign polarity "$-$". $$\begin{array}{c|cc|c} \hline & + & - & \text{Total} \\ \hline H & ? & ? & \\ V & ? & ? & \\ \hline & \color{red}{n} & n+1 & 2n+1 \\ \hline\end{array}$$ Again, of the $2n+1$ steps, choose any $\color{orange}{n}$ steps. Number of ways: $\binom {2n+1}n$. For these $n$ steps, mark $H$ for those with polarity "$+$" and mark $V$ for those with polarity "$-$". For the remaining $n+1$ steps, do the opposite, i.e. mark $V$ for those with polarity "$+$" and mark $H$ for those with polarity "$-$". $$\begin{array}{c|cc|c} \hline & + & - & \text{Total} \\ \hline H & \color{orange}{r} & ? & \\ V & ? & \color{orange}{n-r} & \\ \hline & \color{red}{n} & n+1 & 2n+1 \\ \hline\end{array} \hspace{3cm} \begin{array}{c|cc|c} \hline & + & - & \text{Total} \\ \hline H & \color{orange}{r} & r+1 & 2r+1 \\ V & n-r & \color{orange}{n-r} & 2n-2r \\ \hline & \color{red}{n} & n+1 & 2n+1 \\ \hline\end{array}$$ This ensures that there are $n$ matched-polarity step-pairs, with one unmatched step, in this case $H-$. Multiply by $4$ to account for all four possible directions of the unmatched step $(H+, H-, V+, V-)$. This gives the total number of combinations as $4\large\binom {2n+1}n^2$. Hence the probability of ending up one step away from the original position is $$\frac {4\large{\binom {2n+1}n} ^2}{4^{2n+1}}=\frac {\large\binom {2n+1}n ^2}{4^{2n}}$$ NOTE also that the total number of ways to end up one step away from the original position after $2n+1(=2N-1)$ steps is the same as the total number of ways to end up back at the original position after $2n+2(=2N)$ steps, where $N=n+1$. Assume that there are $2N$ steps. Denote directions $H$ (Horizontal) and $V$ (Vertical) and polarities $+$ and $-$ (i.e. $H+$ is right, $H-$ left, $V+$ forward, $V-$ backward). Of the $2N$ steps, choose any $\color{red}{N}$ steps. Number of ways: $\binom {2N}N$. For the chosen steps, assign polarity "$+$". For the remaining $N$ steps, assign polarity "$-$". $$\begin{array}{c|cc|c} \hline & + & - & \text{Total} \\ \hline H & ? & ? & \\ V & ? & ? & \\ \hline & \color{red}{N} & N & 2N \\ \hline\end{array}$$ Again, of the $2N$ steps, choose any $\color{orange}{N}$ steps. Number of ways: $\binom {2N}N$. For these $N$ steps, mark $H$ for those with polarity "$+$" and mark $V$ for those with polarity "$-$". For the remaining $N$ steps, do the opposite, i.e. mark $V$ for those with polarity "$+$" and mark $H$ for those with polarity "$-$". $$\begin{array}{c|cc|c} \hline & + & - & \text{Total} \\ \hline H & \color{orange}{r} & ? & \\ V & ? & \color{orange}{N-r} & \\ \hline & \color{red}{N} & N & 2N \\ \hline\end{array} \hspace{3cm} \begin{array}{c|cc|c} \hline & + & - & \text{Total} \\ \hline H & \color{orange}{r} & r & 2r \\ V & N-r & \color{orange}{N-r} & 2N-2r \\ \hline & \color{red}{N} & N & 2N \\ \hline\end{array}$$ The total number of combinations is $$\binom {2N}N\cdot \binom {2N}N=\binom {2N}N^2$$ Note that this is equal to $$\binom {2n+2}{n+1}^2=4\binom {2n+1}n^2$$<|endoftext|> TITLE: Why is the ideal sheaf $\mathcal{I}_Y$ equal to $\mathcal{O}(-Y)$ and not $\mathcal{O}(Y)$? QUESTION [6 upvotes]: (Note: I have edited the question to make my confusion more specific.) In Huybrechts' Complex Geometry, on page 63 it is said that, $\mathcal{I}_Y$ is the ideal sheaf of holomorphic functions vanishing on $Y$ while on page 84 the following is said, Let $Y\subset X$ be an irreducible hypersurface $\ldots$ the ideal sheaf $\mathcal{I}$ of $Y$ and the subsheaf $\mathcal{O}(-Y)\subset\mathcal{O}_X$ $\ldots$ are equal. Additionally on page 83 we have the following proposition. Let $0 \neq s \in H^0(X,L)$. Then the line bundle $\mathcal{O}(Z(s))$ is isomorphic to $L$. So in particular, at least for an effective divisor $D$, the sections of the line bundle $\mathcal{O}(D)$ vanish on $D$. But above we saw that $\mathcal{O}(-D)$ is the sheaf of holomorphic functions vanishing on $D$, so I would naively expect that $\mathcal{O}(-D)$ would be the bundle whose sections vanish on $D$. How is it that $\mathcal{O}(-D)$ corresponds to the sheaf of functions vanishing on $D$, while it is $\mathcal{O}(D)$ that is the bundle whose sections vanish on $D$? In particular it would be nice if someone could explicitly describe, in terms of functions on patches (as far as this can be done), $\mathcal{O}(p)$ and $\mathcal{O}(-p)$, for a point $p$ in $\mathbb{P}^1$, showing explicitly how they behave as bundles with or without global sections, and making clear why it is $\mathcal{O}(-p)$ and not $\mathcal{O}(p)$ that can be seen as a subsheaf of the sheaf of regular functions (which in particular is the ideal sheaf of the point). REPLY [4 votes]: While hwong557 gave a very helpful answer, it wasn't quite what I was looking for, as I really needed to have an explicit example worked through to see all the (probably obvious to everyone else) connections between different objects. I now think I've understood what's going on in an explicit example, and I'm happy. So I'm going to post what I wrote up for myself here, in case anyone else is like me and needs this spelt out step-by-step in this way. Here is an explicit example on $\mathbb{P}^1$. In Huybrechts' Complex Geometry on page 79 we are given the following prescription for constructing a line bundle $\mathcal{O}(D)$ given a Cartier divisor $D$, If $D=\sum a_i[Y_i]\in\text{Div}(X)$ corresponds to $f \in H^0(X,\mathcal{K}_X^*/\mathcal{O}_X^*)$, which in turn is given by functions $f_i \in \mathcal{K}_X(U_i)$ for an open covering $X= \bigcup U_i$, then we define $\mathcal{O}(D) \in \text{Pic}(X)$ as the line bundle with transition function $\psi_{ij} \mathrel{\mathop:}= f_i \cdot f_j^{-1} \in H^0(U_i \cap U_j,\mathcal{O}_X^*)$. (Note that $\psi_{ij}\mathrel{\mathop:}=\psi_i\cdot\psi_j^{-1}$ where $\psi_i$ are the local trivialisations $\pi^{-1}(U_i) \cong U_i \times \mathbb{C}^r$. That is, $\psi_{ij}$ is the transition function for moving from $U_j$ to $U_i$.) Let's work through an explicit example to understand this.
Work on the space $X\equiv\mathbb{P}^1$, with homogeneous coordinates $x_0$ and $x_1$, and write 'NR' for $x_1\neq0$ ('north region') and 'SR' for $x_0\neq0$ ('south region'). Let the divisor $D=-p$, where $p$ is the 'north pole' at $x_0=0$. This divisor is defined by the meromorphic functions $\tfrac{x_1}{x_0}$ on $U_{NR}$ and $1$ on $U_{SR}$. By the above prescription for defining the line bundle $\mathcal{O}(-p)$ corresponding to this divisor $-p$, we set the transition function, $$\psi_{SR \to NR}=\frac{x_1}{x_0} \,.$$ We see that this line bundle has no nonzero global holomorphic sections, as expected (any nonzero holomorphic function on $U_{SR}$ would transition into a function with poles on $U_{NR}$). We also see that this is the line bundle which in a different notation is written $\mathcal{O}(-1)$, again as expected. Now consider a very different object: the ideal sheaf $\mathscr{I}_p$ of holomorphic functions vanishing on $p$. This is the sheaf that assigns to the open sets in our open covering holomorphic functions of the form, $$ U_{NR}: a_1\left(\frac{x_0}{x_1}\right)+a_2\left(\frac{x_0}{x_1}\right)^2+\ldots \, , $$ $$ U_{SR}: b_0+b_1\left(\frac{x_1}{x_0}\right)+\ldots \, . $$ (More specifically, we can assign a holomorphic function of the first form to any open set containing $p$, and a holomorphic function of the latter form to any open set not containing $p$.) We can see that the sheaf we have defined is a locally free sheaf. This is because on $U_{SR}$ the space of allowed functions is exactly $\mathcal{O}_{SR}$ (the isomorphism $\phi_{SR} : \mathscr{I}_p(U_{SR}) \to \mathcal{O}_{SR}$ is trivial), while on $U_{NR}$ there is an isomorphism $\phi_{NR} : \mathscr{I}_p(U_{NR}) \to \mathcal{O}_{NR}$ given by multiplying by $\tfrac{x_1}{x_0}$. Hence we have an open covering $\{U_{SR},U_{NR}\}$ for which $\mathscr{I}_p(U)$ is isomorphic to $\mathcal{O}_U$ for each of the open sets $U$. Using the general prescription (outlined below) for associating a line bundle to a locally free sheaf, we assign the transition function, $$ \psi_{SR \to NR} = \phi_{NR} \cdot \phi_{SR}^{-1} = \frac{x_1}{x_0} \, . $$ We see that this is exactly the transition function for $\mathcal{O}(-p)$ that we found above. This is a counter-intuitive result for the following reason. The sections of $\mathcal{O}(p)$ vanish at $p$, but it is $\mathcal{O}(-p)$ that is given by the line bundle associated to the sheaf $\mathscr{I}_p$ of holomorphic functions vanishing at $p$, and this sounds like a mismatch - given that the sections of $\mathcal{O}(p)$ vanish at $p$, it sounds like it should be this line bundle that corresponds to $\mathscr{I}_p$. We can see that this 'mismatch' is a result of the definition of $\mathcal{O}(D)$. In the case of $\mathcal{O}(-p)$, the contribution of $NR$ to the transition function $\psi_{SR \to NR}$ is given by the meromorphic function that defines the divisor in that region, i.e. $\tfrac{x_1}{x_0}$, which defines a pole at $x_0=0$. On the other hand, in the case of $\mathscr{I}_p$, the contribution to the transition function (of the line bundle that we construct using the standard prescription for getting from a locally free sheaf to a line bundle) is given by the isomorphism with $\mathcal{O}_{NR}$, namely $\phi_{NR}=\tfrac{x_1}{x_0}$, which by definition undoes the restriction of holomorphic functions in this region to ones that vanish at $p$ - that is, the isomorphism must act by multiplying by a factor that on its own would give a pole at $p$.
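To make the contrast concrete, here is the matching computation for $\mathcal{O}(p)$ in the same charts (a small check, using exactly the conventions above). The divisor $p$ is defined by the meromorphic functions $\tfrac{x_0}{x_1}$ on $U_{NR}$ and $1$ on $U_{SR}$, so the line bundle $\mathcal{O}(p)$ has transition function $\psi_{SR \to NR}=\tfrac{x_0}{x_1}$. A global holomorphic section is a pair of holomorphic functions $(s_{NR},s_{SR})$ with $s_{NR}=\tfrac{x_0}{x_1}\,s_{SR}$, and the choice $s_{SR}=1$ gives $$ s_{SR}=1\,, \qquad s_{NR}=\frac{x_0}{x_1}\,, \qquad s_{NR}\big|_{p}=\left.\frac{x_0}{x_1}\right|_{x_0=0}=0\,, $$ a global section vanishing exactly at $p$. So the sections of $\mathcal{O}(p)$ really do vanish at $p$, while it is $\mathcal{O}(-p)$, with the inverse transition function $\tfrac{x_1}{x_0}$, that embeds into $\mathcal{O}_X$ as the ideal sheaf $\mathscr{I}_p$.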
Generalities on associating a line bundle to a locally free sheaf: From page 33 of Aspinwall (and elsewhere) we have the following definition of a locally free sheaf. We call a sheaf $\mathscr{E}$ locally free of rank $n$ if there is an open covering $\{U_{\alpha}\}$ of $X$ such that $\mathscr{E}(U_{\alpha})\cong\mathcal{O}_X(U_{\alpha})^{\oplus n}$ for all $\alpha$. Suppose we have a locally free sheaf $\mathscr{E}$, with an open covering satisfying the above condition. Let $\phi_{\alpha} : \mathscr{E}(U_{\alpha}) \to \mathcal{O}_X(U_{\alpha})^{\oplus n}$ be the explicit isomorphism. To get the holomorphic vector bundle corresponding to this locally free sheaf, we are told by Aspinwall to define on each intersection $U_{\alpha} \cap U_{\beta}$ the $n \times n$ matrix of holomorphic functions, $$ \phi_{\beta}\phi_{\alpha}^{-1}: \mathcal{O}_X(U_{\alpha} \cap U_{\beta})^{\oplus n} \to \mathcal{O}_X(U_{\alpha} \cap U_{\beta})^{\oplus n} \, . $$ How does this really work? What are these 'explicit isomorphisms'? Here is an example, at least for the case of a rank-one locally free sheaf (corresponding to a line bundle). Take the space $X \equiv \mathbb{P}^1$, with homogeneous coordinates $x_0$ and $x_1$, and consider the following sheaf. Assign to any open set $U \subseteq X$ the set of polynomials of homogeneous degree $1$ and without poles. So on the total space, we can assign a polynomial $ax_0+bx_1$, with $a,b \in \mathbb{C}$. ($x_0^2/x_1$ and so on would have poles.) Now consider the open covering consisting of the two open sets $x_0\neq0$ and $x_1\neq0$. On $x_0\neq0$, we can assign a polynomial $ax_0+bx_1+c\,\tfrac{x_1^2}{x_0}+\ldots$. Similarly on $x_1\neq0$, we can assign a polynomial $a'x_1+b'x_0+c'\,\tfrac{x_0^2}{x_1}+\ldots$. On each of these two open sets, the space of allowed polynomials is isomorphic to the space of holomorphic functions - in $x_0\neq0$ the isomorphism (call it $\phi_0$) is given by dividing by $x_0$, while in $x_1\neq0$ the isomorphism (call it $\phi_1$) is given by dividing by $x_1$. Hence we have found an open covering such that the sheaf assigns to each open set an object isomorphic to the space of holomorphic functions on that open set. That is, we have shown that this sheaf is locally free (of rank one). Further, now that we have the explicit isomorphisms for a locally free sheaf, we can construct the corresponding line bundle by the above general prescription. The transition function from $x_0\neq0$ to $x_1\neq0$ is given by acting with the composition $\phi_1\circ\phi_0^{-1} = \tfrac{x_0}{x_1}$. This is exactly the line bundle $\mathcal{O}(1)$. It has global holomorphic sections corresponding to assigning the holomorphic function $c_0\,\tfrac{x_0}{x_1}+c_1$ on $x_1\neq0$ and the holomorphic function $c_0+c_1\,\tfrac{x_1}{x_0}$ on $x_0\neq0$. From the viewpoint of a locally free sheaf, the global sections are given by homogeneous polynomials $c_0\,x_0+c_1\,x_1$. (Note that the sheaf we have discussed, or rather its associated line bundle, is precisely $\mathcal{O}_{\mathbb{P}^1}(1)$.) Let us say something general about what we have just done, and about what the case would be if we were to take a different degree for the polynomials. If we take arbitrary polynomials of homogeneous degree one on the total space, then this space is not isomorphic to $\mathcal{O}$, since there is no function we can multiply by to kill off all the poles. So what this sheaf assigns to the total space is not isomorphic to $\mathcal{O}$.
But on the 'north' and 'south' open sets, we do have isomorphisms between what we assign to these open sets and $\mathcal{O}$ (and since these two open sets provide an open covering, this tells us that we have defined a locally free sheaf). The isomorphisms on the different open sets patch together to tell us about the transition functions. The $n$ in the sheaf $\mathcal{O}(n)$ is encoded in the homogeneous degree of the polynomials we allow (on all open sets, including the total space). Making a different choice of this degree will mean that we have to choose different maps on the north and south regions to give explicit isomorphisms with $\mathcal{O}$ (we will have to divide by different powers of $x_0$ and $x_1$), so the set of isomorphisms that we try to patch together will be different for each $n$; hence we will get different transition functions, and this will define these different line bundles.<|endoftext|> TITLE: Prove that there is a subsequence satisfying Three Properties (Lebesgue Integration) QUESTION [8 upvotes]: Suppose $\{f_n\}$ are Lebesgue measurable functions on $[0,1]$, such that $\int_0^1 |f_n|\,d\mu=1$ for all $n$, and $f_n\to 0$ almost everywhere. I have proved: given $\epsilon>0$, there exists a Lebesgue measurable $E\subseteq [0,1]$ such that $\mu(E)<\epsilon$ and $$\lim_{n\to\infty}\int_E |f_n|\,d\mu=1$$ (using Egorov's Theorem, where $E$ turns out to be $[0,1]\setminus F$ for some closed $F$ on which the convergence is uniform.) Hence, or otherwise, how do we prove that there exists a subsequence $f_{n_k}$ of $f_n$ and sequences of measurable functions $g_k$ and $h_k$ such that (i) $f_{n_k}=g_k+h_k$ for all $k$, (ii) $g_kg_j=0$ a.e. for $k\neq j$, (iii) $\lim_{k\to\infty}\int_0^1|h_k|\,d\mu=0$? The hardest condition in my opinion is (ii). I managed to find a candidate that satisfies both (i) and (iii), but not (ii). Let $f_{n_k}$ be a subsequence such that $\int_E |f_{n_k}|\,d\mu>1-\frac 1k$. Let $g_k=f_{n_k}\chi_E$ and $h_k=f_{n_k}\chi_F$, where $E=[0,1]\setminus F$. Then (i) $f_{n_k}=g_k+h_k$ is satisfied. $\lim_{k\to\infty}\int_0^1 |h_k|\,d\mu=\lim_{k\to\infty}\int_F |f_{n_k}|\,d\mu=1-\lim_{k\to\infty}\int_E |f_{n_k}|\,d\mu=1-1=0$. So (iii) is satisfied. However, condition (ii) remains unsatisfied. Thanks for any help. REPLY [2 votes]: Using Egorov, we can choose $E_1\subset [0,1]$ such that $\int_{E_1}|f_1| > 0$ and $f_n\to 0$ uniformly on $E_1.$ Now $$1= \int_0^1|f_n| = \int_{E_1}|f_n| + \int_{[0,1]\setminus E_1}|f_n|,$$ and since $f_n\to 0$ uniformly on $E_1,$ we have $\int_{[0,1]\setminus E_1}|f_n|\to 1.$ Thus there exists $n_2>1$ such that $\int_{[0,1]\setminus E_1}|f_{n_2}| > 1/2.$ By Egorov we can choose a sequence $F_k \subset [0,1]\setminus E_1$ such that $$\mu (F_k) \to \mu([0,1]\setminus E_1),$$ with $f_n \to 0$ uniformly on each $F_k.$ So if $k$ is large enough, we'll have both $f_n \to 0$ uniformly on $F_k$ and $$\int_{F_k}|f_{n_2}| > 1/2.$$ Let $E_{2}$ be any one of these $F_k.$ So we now have pairwise disjoint $E_1,E_2$ and $1= n_1 < n_2$ such that $f_n \to 0$ uniformly on $E_1\cup E_2$ and $\int_{E_k}|f_{n_k}| > 1-1/k$ for $k=1,2.$ We can continue this process by induction to obtain pairwise disjoint subsets $E_1, E_2, \dots$ and $1=n_1 < n_2 < \cdots $ such that for each $k,$ $f_n \to 0$ uniformly on $E_k$ and $\int_{E_k}|f_{n_k}| > 1-1/k.$ Now it's easy street.
Define $g_k = f_{n_k}\chi_{E_k}, h_k = f_{n_k}\chi_{[0,1]\setminus E_k}.$ Then (i)-(iii) are satisfied.<|endoftext|> TITLE: Definition of a universal cover and the universal cover of a point QUESTION [5 upvotes]: I read on the English Wikipedia page on covering spaces that "a covering space is a universal covering space if it is simply connected", which looks like an actual definition of a covering space being a universal covering space. However, I read other definitions of a universal covering space in other sources. For example, the French Wikipedia page on covering spaces (Revêtement) uses definition (4), infra. So, which one of these definitions is the "correct" one? (1) The mapping $q : D \to X$ is a universal cover of the space $X$ if $D$ is simply connected; (2) The mapping $q : D \to X$ is a universal cover of the space $X$ if for any cover $p : C \to X$ of the space $X$ where the covering space $C$ is connected, there exists a covering map $f : D \to C$ such that $p \circ f = q$; (3) The mapping $q : D \to X$ is a universal cover of the space $X$ if it is a Galois cover and for any cover $p : C \to X$ of the space $X$ where the covering space $C$ is connected, there exists a covering map $f : D \to C$ such that $p \circ f = q$; (4) The mapping $q : D \to X$ is a universal cover of the space $X$ if it is a Galois cover and for any cover $p : C \to X$ of the space $X$, there exists a covering map $f : D \to C$ such that $p \circ f = q$. It is true that $(1) \implies (3) \implies (2)$ and $(4) \implies (3), (2)$. Is it also true that $(1) \implies (4)$? $(2) \implies (1)$? $(3) \implies (1)$? For example, using definitions (1), (2) or (3), it is clear that "the" universal cover of a point is itself, and the cover mapping $q : \{ \bullet \} \to \{ \bullet \}$ is the identity. Indeed: Using definition (1), the point is simply connected, so its universal cover is itself. Using definition (2) or (3), note that the identity map $q : \{ \bullet \} \to \{ \bullet \}$ is a Galois cover and that the covers of $\{ \bullet \}$ are all the non-empty discrete spaces, so the only connected ones are the 1-point spaces $C$ (the cover mapping being the natural bijection $p : C \to \{ \bullet \}$). Therefore there exists a covering map $f : \{ \bullet \} \to C$ (which is just $p^{-1}$) which satisfies $p \circ f = q$. However, we can't prove using definition (4) that "the" universal cover of a point is itself, since if $p : F \to \{ \bullet \}$ is a non-connected cover where $F$ is a discrete space of cardinality at least 2, there is no covering map $f : \{ \bullet \} \to F$ such that $p \circ f = q$ (as a covering map should be surjective). So my final question is: Is the universal cover of a 1-point space really a 1-point space? Thanks. REPLY [10 votes]: It is a little hard to say "what definition is correct", since some definitions are more appropriate depending on the context (see (*)), but let me make a point. A universal cover is essentially a solution to a universal property problem. That being its name (universal), it seems to me that the natural definition is the one that makes it really be... universal. What universal property does it satisfy? Consider the category $\mathcal{C}$ which has as objects covering maps from pointed topological spaces to a fixed pointed topological space $(X,x_0)$. That is, the objects are $p:(Y,y_0) \to (X,x_0)$ (more on this later).
And the morphisms are covering maps $c: (Y_1,y_1) \to (Y_2,y_2)$ making the following diagram commute $$\begin{array}{ccc} & & (Y_1,y_1) \\ & \swarrow{c} & \downarrow{p_1} \\ (Y_2, y_2)& \xrightarrow{p_2} & (X,x_0). \end{array}$$ We want a universal cover to be an initial object in this category. Why? A universal cover should cover any cover, and that is precisely what the above property is capturing. If we take the above as a definition, how do we reconcile it with the usual definition as a simply connected cover? Well, consider the following theorem: Theorem [Lifting Theorem]: Let $p:(Y,y_0) \to (X,x_0)$ be a covering map. Assume that $W$ is path-connected and locally path-connected and that $f: (W,w_0) \to (X,x_0)$ is a given map. Then, there exists a lifting of $f$ to $(Y,y_0)$ if and only if $f_{\#} \pi_1(W,w_0) \subset p_{\#}\pi_1(Y,y_0)$. Moreover, such a lifting is unique. A moment of thought then tells us that if $\pi_1(W,w_0)$ is trivial in this theorem, then there always exists a lifting. Another moment of thought then shows us that if there is an object in our category $\mathcal{C}$ which has as the covering space a simply connected space, then it is the universal cover (with a small detail: we would need to prove that the lifting is indeed a covering map, but this is true). Now, some remarks. To guarantee uniqueness of things, we need to suppose that the spaces are pointed. Due to the hypothesis of the lifting theorem, we also need path-connectedness and locally path-connectedness. If we want a "clean" theory (that is, trying to minimize recurrent restatement of hypotheses), then a good solution is to clump up all of those things in a definition. For instance, that is what Bredon does in his topology and geometry book: Definition: A map $p: X \to Y$ is called a covering map (and $X$ is called a covering space of $Y$) if $X$ and $Y$ are Hausdorff, arcwise connected, and locally arcwise connected etc etc. OBS: Note that "arcwise connected" means "path-connected" for Bredon. Note now the phenomenon that I alluded to at the beginning: Bredon himself "changes" definition later for a specific purpose: On page $342$, he states: "For convenience, we use the word 'cover' in Proposition $7.6$ to mean everything in the definition of a covering space except for the connectivity requirements." (*) So, this answer ends up not answering our question. But that is because I think it is unanswerable: some definitions are better than others in different contexts, needing less or more from them. The situation is similar to how some texts define compact as "compact and Hausdorff", or some texts define "ring" with unity or not, or "regular space" entailing Hausdorff or not etc. However, to answer your final question: Yes, the universal cover of a point is a point.<|endoftext|> TITLE: Prove that $ \int_0^\infty \frac{\cos(2\pi x^2)}{\cosh^2(\pi x)}dx=\frac 14$? QUESTION [18 upvotes]: Recently I was reading Bruce Berndt and George Andrews' book "Ramanujan's lost notebook". Ramanujan showed how to calculate integrals of the form $ \int_0^\infty \frac{\cos(\pi wx^2)}{\cosh(\pi x)}dx $ when $w\in\mathbb Q$. Inspired by Ramanujan's work I decided to try to compute numerically various similar-looking integrals, and by coincidence it turned out that $ \int_0^\infty \frac{\cos(2\pi x^2)}{\cosh^2(\pi x)}dx=0.250000000000000000... $ It looks like this integral is $\frac 14$, but I have no clue how to prove it.
Question: How to prove that $$ \int_0^\infty \frac{\cos(2\pi x^2)}{\cosh^2(\pi x)}dx=\frac 14 ?$$ REPLY [17 votes]: I'm busy today, so some of the arguments here would need to be made more rigorous. Denote the integral in question by $I$ (by symmetry, the integration can be extended to the whole real line): $$ I=\int_{0}^{\infty}\frac{\cos(2\pi x^2)}{\cosh^2(\pi x)}\,dx $$ Now consider the complex-valued function $$ f(z)=\frac{e^{i 2 \pi z^2}}{\sinh(4 \pi z)\cosh(\pi z)^2} $$ then $$f(x\pm i)=\frac{ e^{i 2 \pi x^2\mp 4 \pi x}}{\sinh(4 \pi x)\cosh(\pi x)^2}$$ Furthermore, integrating $f(z)$ around the rectangle with vertices $(-\infty-i,\ \infty-i,\ \infty+i,\ -\infty+i)$ in the counterclockwise direction yields $$ \oint_C f(z)\,dz=\int_{\mathbb{R}}f(x-i)\,dx-\int_{\mathbb{R}}f(x+i)\,dx=2\pi i \sum\text{Res}(f(z),z\in C ) $$ but $\int_{\mathbb{R}}\left[f(x-i)-f(x+i)\right]dx=2 \int_{\mathbb{R}}\frac{e^{i 2 \pi x^2}}{\cosh(\pi x)^2}\,dx$. The real part of this integral is just four times the integral we are looking for, $I$. $$ 4 I=2\Re \int_{\mathbb{R}}\frac{e^{i 2 \pi x^2}}{\cosh(\pi x)^2}\,dx=\Re\left[2 i\pi \sum_{\sigma=\pm}\text{Res}\left(f(z),z=\sigma\frac{i}{2}\right)\right] $$ since it turns out that the contributions of all the other residues inside the contour are purely imaginary after multiplication by $2\pi i$, and so do not contribute to the real part of the above expression; the residues that matter are given by $\text{Res}\left(f(z),z=\pm \frac{i}{2}\right)=-\frac{2+i \pi}{4\pi^2}$. Putting everything together we obtain $$ I=\frac{1}{4}\Re\left[-2 \pi i \left(\frac{2+i\pi}{4 \pi^2}+\frac{2+i\pi}{4 \pi^2}\right)\right]=\frac{1}{4} $$ QED Note that with some more (straightforward) work, the integral with $\sin(2\pi x^2)$ in the numerator can also be extracted from the above calculation by taking imaginary parts. $$ \int_{0}^{\infty}\frac{\sin(2\pi x^2)}{\cosh^2(\pi x)}\,dx=\frac{1}{4}-\frac{1}{2\pi} $$<|endoftext|> TITLE: Prove $\sum\limits_{i=k}^{n-1}\{\frac{\binom{i}{k}}{n}\}=\frac{n-k^{w(n)}}{2}$ QUESTION [9 upvotes]: $k$ is an odd number, $(n,k!)=1$, prove that $$\sum_{i=k}^{n-1}\left\{\frac{\binom{i}{k}}{n}\right\}=\frac{n-k^{w(n)}}{2},$$ where $\{x\}=x-[x]$, $w(n)$ is the number of distinct prime factors of $n$. For example, if $p>k$ is prime then $$\sum_{i=k}^{p-1}\left\{\frac{\binom{i}{k}}{p}\right\}=\frac{p-k}{2}.$$ REPLY [2 votes]: When $k=1,$ the proof is trivial. Suppose $k\geq 3$ and $n$ has one distinct prime factor (meaning $n$ is a prime greater than $k$). Since both $n$ and $k$ are odd, the number of terms $n-k$ in the sum is even. Define $C_i = \binom{i}{k}.$ Consider the sum $C_{k+j} + C_{n-1-j}$ for $j=0\dots (n-k)/2-1:$ $$ \begin{align} \binom{k+j}{k} + \binom{n-1-j}{k} &= \frac{(k+j)!}{k! j!} + \frac{(n-1-j)!}{k! (n-1-j-k)!} \nonumber \\ &= \frac{(k+j)!(n-1-j-k)! + (n-1-j)!j!}{k! j! (n-1-j-k)!}. \tag{1}\label{eqn1} \end{align} $$ If this sum is divisible by $n,$ the sum of the remainders when dividing $C_{k+j}$ and $C_{n-1-j}$ by $n$ must be $n$ (since neither $C_{k+j}$ nor $C_{n-1-j}$ is divisible by $n$). The numerator of \eqref{eqn1} can be written $$ (n-(j+k+1))! \big((k+j)! + j!(n-(j+k))(n-(j+k-1))\cdots (n- (j+1))\big). $$ The term $$ j!(n-(j+k))(n-(j+k-1))\cdots (n-(j+1)) $$ is a polynomial in $n$ with constant term $-(j+k)!,$ so the numerator of \eqref{eqn1} is divisible by $n.$ So the sum of all the remainders of the $C_i$ when divided by $n$ is $n(n-k)/2;$ dividing the sum of remainders by $n$ gives the sum in the problem. If $n$ has multiple distinct prime factors, each factor must be greater than $k$ and $n$ must still be odd.
I have not completely worked out the proof in this case, but I don't think it's too hard. It boils down to computing the number of pairs of remainders that sum to zero rather than $n$, because $C_{k+j}$ and $C_{n-1-j}$ are divisible by all the prime factors of $n.$<|endoftext|> TITLE: Complete monotonicity of a sequence related to tetration QUESTION [18 upvotes]: Let $\Delta$ denote the forward difference operator on a sequence: $$\Delta s_n = s_{n+1} - s_n,$$ and $\Delta^m$ denote the forward difference of the order $m$: $$\Delta^0 s_n = s_n, \quad \Delta^{m+1} s_n = \Delta\left(\Delta^m s_n\right).$$ We say that a sequence $s_n$ is completely monotone iff $$(-1)^m \Delta^{m} s_n > 0,$$ i.e. the sequence $s_n$ itself is positive and decreasing, its first differences $\Delta s_n$ are negative and increasing (that is, decreasing in absolute values), its second differences $\Delta^2 s_n$ are positive and decreasing (like the sequence $s_n$ itself), and so on. So, the sequence of second differences of a completely monotone sequence is also completely monotone (and so is the sequence of its differences of any even order). Let $a$ be a real number in the interval $1<a<e^{1/e}$. [...] $a_n=g_n+O\left(\lambda^{m+n}\right)$ after $m$ iterations. Formalizing this error estimate would be a way of showing that the $h_m(z)$ sequence converges to $h(z)$, and that lemma5 is valid. So if lemma2, lemma4, and lemma5 hold, the conclusion is that $S(z)$ is fully monotonic at the real axis in its range of analyticity, where $\Re(z+1)>S^{-1}(0)$. Finally, as in the previous answer, we generate the desired sexp function. $$\text{sexp}_a(z)=S(z+k)\;\;\;k=S^{-1}(0)+1\;\;\;\text{sexp}_a(z)=a^{\text{sexp}_a(z-1)}$$ $\text{sexp}_a(z)$ is completely monotonic at the real axis if $z>-2$. Then the sequence $^n a=\text{sexp}_a(n)$ is also a completely monotonic sequence by lemma3, which says that if an analytic function is completely monotonic over a given range, then a sequence of equally spaced samples of that function in that range is also completely monotonic.<|endoftext|> TITLE: Is the following integration "trick" valid to reduce my integrand to a constant? QUESTION [29 upvotes]: $$\int_2^4 \frac{1}{\sqrt{\frac{\ln(3+x)}{\ln(9-x)}} +1}dx = 1$$ I noticed that when $x$ went from $2$ to $4$, $3+x$ went from $5$ to $7$, and $9-x$ went from $7$ to $5$. I noticed that if we reverse the interval of $9-x$ we obtain $-(9-x)$, whose interval goes from $5$ to $7$. In short, I concluded that reversing the interval of $9-x$ yielded the expression $3+x$. Therefore I allowed the above integral to be $$\int_2^4 \frac{1}{\sqrt{\frac{\ln(3+x)}{\ln(9-x)}} +1} dx = \int_2^4 \frac{1}{\sqrt{\frac{\ln(3+x)}{\ln(3+x)}} +1} dx.$$ Effectively, the logs cancelled and I was left with $\int_2^4 {1\over 1+1}dx =1$.
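The claimed value is easy to confirm numerically before judging whether the manipulation is legitimate. Here is a minimal sanity-check sketch (assuming SciPy's `quad` is available; the pairing of $x$ with $6-x$ in the second check is the symmetry that actually drives the result):

```python
import numpy as np
from scipy.integrate import quad

# The original integrand on [2, 4].
def integrand(x):
    return 1.0 / (np.sqrt(np.log(3 + x) / np.log(9 - x)) + 1.0)

value, _ = quad(integrand, 2, 4)
print(value)  # ~1.0, matching the claimed value

# Key identity: substituting x -> 6 - x swaps ln(3+x) and ln(9-x),
# so integrand(x) + integrand(6 - x) == 1 for every x in (2, 4).
print(integrand(3.3) + integrand(6 - 3.3))  # ~1.0
```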
REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\int_{2}^{4}{1 \over \root{\ln\pars{3 + x}/\ln\pars{9 - x}} + 1 }\,\dd x = \int_{2}^{4}{\root{\ln\pars{9 - x}} \over \root{\ln\pars{3 + x}} + \root{\ln\pars{9 - x}}}\,\dd x \\[1cm] = &\ {1 \over 2}\left\{\int_{2}^{4}{\root{\ln\pars{9 - x}} \over \root{\ln\pars{3 + x}} + \root{\ln\pars{9 - x}}}\,\dd x\right. \\[5mm] &\ \left. + \int_{2}^{4}{\root{\ln\pars{9 - \bracks{4 + 2 - x}}} \over \root{\ln\pars{3 + \bracks{4 + 2 - x}}} + \root{\ln\pars{9 - \bracks{4 + 2 - x}}}}\,\dd x\right\} = {1 \over 2}\int_{2}^{4}\dd x = \bbox[#ffe,10px,border:1px dotted navy]{\ds{1}} \end{align}<|endoftext|> TITLE: What is the difference between a function and a distribution? QUESTION [24 upvotes]: I remember there was a tongue-in-cheek rule in mathematical analysis saying that to obtain the Fourier transform of a function $f(t)$, it is enough to get its Laplace transform $F(s)$ and replace $s$ by $j\omega$, because their formulas are pretty much the same except for the variable of integration. And I know that this is not necessarily true (that's why I used the term tongue-in-cheek), e.g. take $f(t)=1$ to see the obvious difference. I read somewhere that although their formulas are somewhat similar, their results are not necessarily similar, because the Laplace transform is a function and the Fourier transform is a distribution. For example, the Dirac delta $\delta(\omega)$ is a distribution and not a function. So this led me to wonder: What is the difference between a function and a distribution? (preferably in layman's terms) Why is the Laplace transform a function while the Fourier transform is a distribution? I mean, they are both infinite integrals. So what am I missing? REPLY [6 votes]: Short answer by analogy: distributions are to functions as real numbers are to rational ones. The real number set can be defined as all rational numbers with the limits of all convergent infinite sequences of rational numbers added in, so distributions would be the set of all functions plus all infinite sequences of functions that "converge" in some sense. The Dirac delta function, which is the classic example of a distribution, has many equivalent definitions just like there are many different ways to calculate $\pi$ or $\operatorname{e}$. One frequently used definition is: $$\delta(x) = \lim_{\sigma\rightarrow 0} \frac{1}{\sigma\sqrt{2\pi}} \operatorname{e}^{-\frac{1}{2} \left(\frac{x}{\sigma}\right)^2}.$$ Formally, just as every calculation with real numbers is actually performed on rational numbers, the limit is only taken after its argument has been used in an integral. As a practical matter, though, just like with taking integrals and derivatives, we can short-circuit that formal process in most cases.<|endoftext|> TITLE: What does the following quote on John von Neumann mean?
QUESTION [5 upvotes]: Wikipedia has the following quote on John von Neumann: Stan Ulam, who knew von Neumann well, described his mastery of mathematics this way: "Most mathematicians know one method. For example, Norbert Wiener had mastered Fourier transforms. Some mathematicians have mastered two methods and might really impress someone who knows only one of them. John von Neumann had mastered three methods." He went on to explain that the three methods were: A facility with the symbolic manipulation of linear operators; An intuitive feeling for the logical structure of any new mathematical theory; An intuitive feeling for the combinatorial superstructure of new theories. And I am wondering what 'logical structure' and 'combinatorial superstructure' mean in this context. Please explain these methods. https://en.wikipedia.org/wiki/John_von_Neumann#Mastery_of_mathematics REPLY [3 votes]: The quote is from Adventures of a Mathematician (by Stanislaw Ulam): "Von Neumann was different. He also had several quite independent techniques at his fingertips. (It is rare to have more than two or three.) These included a facility for symbolic manipulation of linear operators. He also had an undefinable "common sense" feeling for logical structure and for both the skeleton and the combinatorial superstructure in new mathematical theories. This stood him in good stead much later, when he became interested in the notion of a possible theory of automata, and when he undertook both the conception and the construction of electronic computing machines. He attempted to define and to pursue some of the formal analogies between the workings of the nervous system in general and of the human brain itself." (page 96) In The World as a Mathematical Game (by Giorgio Israel and Ana Millán Gasca), we read that: "In actual fact, during the Königsberg Congress, none of the eminent participants realized the full import and implications of the result announced by Gödel – with one exception: von Neumann. After the discussion the latter rushed up to Gödel and took him aside in order to get a better understanding of his demonstration. He then left the Congress in a state of extraordinary excitement and spent the next month working on the issue. Less than two months later he wrote to Gödel to announce he had demonstrated, as a consequence of the theorem of incompleteness, that the consistency of arithmetic cannot be proved. Gödel replied that he had in the meantime succeeded in obtaining this demonstration and sent him a copy of the article that had already been presented for publication." (Chapter 2, page 30, ISBN 978-3-7643-9895-8 Birkhäuser Verlag AG, Basel - Boston - Berlin) And in this interview, Eugene Wigner tells a story where he asked Neumann if he could explain a theorem. Neumann asked Wigner whether he knew certain other theorems and a few more things. Subsequently, he provided an explanation (proof) on the spot using only theorems Wigner knew. Wigner concludes: "He understood things, not only in one way, but also together/in combination [with other theorems] ['összefoghatóan']." The Hilbertian view of mathematics was that of a gigantic "combinatorial game", and the above quotes demonstrate Neumann's ability to see how the elements combine together in a theory (skeleton) or how the theory (or theorem) connects with (fits into) other theories (combinatorial superstructure).
Some further quotes from The World as a Mathematical Game: "Moreover, his work on axiomatization and on proof theory led to a view of mathematics as «a combinatorial game played using primitive symbols in which it had to be determined in a finitely combinatorial way which combinations of primitive symbols the methods of construction or 'proof' led to», as he claimed at the Königsberg congress (Neumann (von) 1931). He never abandoned this view, and indeed built it up over the years, and this view helps to explain his interest in the scientific topics he concerned himself with in the 1940s and 1950s." (Chapter 2, page 30) and "Commenting on von Neumann's scientific personality, Jean Dieudonné claimed that his genius lay in analysis and combinatorics, the latter being understood in a very wide sense, including the uncommon ability to organize and axiomatize complex situations that a priori do not seem amenable to mathematical treatment, as in quantum mechanics and the theory of games (Dieudonné 1976, 89)." (Chapter 2, page 48) and "For von Neumann, the world must be conceived of as a mathematical game, in the sense that in all cases it is useful and effective to seek axiomatic structures suitable for thinking of the phenomena in mathematical terms. The concept of strategic game is a kind of universal key for considering in terms of combinatorial structures all the interactions occurring in reality and to determine the conditions in which they allow an "acceptable" solution. But this in no way means that the world is actually a mathematical game. The conception of social interactions as a game, the combinatorial view on which the theory of automata will be founded, the analogy between brain and computer are tools of epistemological analysis, never ontological views." (Chapter 3.5, page 73)<|endoftext|> TITLE: Open superset of $\mathbb{Q}$ QUESTION [5 upvotes]: Let $S$ be an open set such that $\mathbb{Q}\subset S$. We can also define a set $T=\mathbb{R}\setminus S$. I have been trying to prove or disprove whether $T$ could be uncountable. I suspect $T$ has to be at most countable; is my intuition correct? REPLY [3 votes]: Expressed differently, the condition on $T$ is that it can have no rational elements and no rational limit points. But even without measure theory, we can easily specify such a $T$ that is uncountable -- for example, the set of all numbers in $(0,1)$ in which the $n$th digit of the decimal representation is $3$ when $n$ is a perfect square, and either $4$ or $5$ otherwise.<|endoftext|> TITLE: number coefficients of an infinite root of 2 QUESTION [6 upvotes]: Today I stumbled upon one of Ramanujan's infinite roots with all the integers, and that got me curious, so I started to try to create one of my own. I started with $2=2$. Then $$2=\sqrt4$$ Then $$2=\sqrt{1+3}$$ Then $$2=\sqrt{1+\sqrt{9}}$$ Then $$2=\sqrt{1+\sqrt{2+7}}$$ Then $$2=\sqrt{1+\sqrt{2+\sqrt{3+46}}}$$ Then $$2=\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+2112}}}}$$ Then the final step I did was $$2=\sqrt{1+\sqrt{2+\sqrt{3+\sqrt{4+\sqrt{5+4460539}}}}}$$ Each step did equal 2, and while doing this I got very curious as to the numbers that were following each of the counting numbers in each step. By this I mean the $$3$$ in $$1+3$$ and the $$7$$ in $$2+7$$, as well as the $$46, 2112,$$ and $$4460539$$. After trying for a long time, I failed to create a function that would give out these values when putting in $$1,2,3,4,\ldots$$ Which is why I have come here.
This is purely for curiosity, but I need help to create a function that will spit out $$3, 7, 46, 2112, 4460539, \ldots$$ Thanks! REPLY [2 votes]: Based on what you were doing, there's a clear recursion: $$f(n) = f(n - 1)^2 - n$$ This is a nonhomogeneous quadratic map and likely has no closed form solution. I would say you should be ready for the possibility that no "plug and play" function exists. I can work on any other type of solution for this; we'll see if anything comes up.<|endoftext|> TITLE: Please help me to start learning math QUESTION [6 upvotes]: My name is Sonny and I am a mental health support worker. I am interested in learning math, but having no background in it, I don't know where to start. I took math in grade 12 in Australia, so I know a little bit of integral and differential calculus and high school geometry. I got interested in math due to the fantastic results demonstrated in YouTube videos about number theory. It really amazed me to learn that numbers have properties of their own, independent of us. Hence the interest in math. Can anyone please suggest a book for a beginner like me? Thanks REPLY [3 votes]: The best way to learn math is to sit with good math books and do almost all of the problems in them. Get recommendations for your level, and when the mood moves you, spend a few free hours with the book. I will recommend one book that may be at the right level for you from the comments. The book is Calculus by Michael Spivak, and I recommend finding a much cheaper copy than in the link. Theoretically it requires no prior knowledge, and despite the name this is not the calculus you learn in high school; it is a gateway to advanced mathematics.<|endoftext|> TITLE: Cayley-Bacharach for higher degree curves QUESTION [7 upvotes]: The Cayley-Bacharach theorem (also known as the 9 point theorem or the $8 \rightarrow 9$ theorem) states that if two cubic curves intersect in 9 points, and $C$ is any cubic curve through 8 of those points, then $C$ also passes through the 9th point. Suppose we replaced the word cubic with quartic or quintic? Suppose I have two curves of order $d$ that meet in $d^2$ points (by Bezout's theorem): how many of these points does another degree $d$ curve $C$ have to pass through before we can say it passes through all points? My intuition tells me that the answer would be $\binom{d+2}{2}-2$ - but is this accurate? I derive this from the dimension of the vector space = number of linear conditions imposed by points + 2 (our two curves which we want to span the vector space of vanishing polynomials). I am new to this area, so forgive any faux pas made with notation, convention, etc. REPLY [5 votes]: The Cayley-Bacharach theorem indeed can be generalized to any plane curve. If $C_1$ and $C_2$ are plane curves, $\deg C_1 = n_1$ and $\deg C_2 = n_2$, meeting at $n_1 n_2$ distinct points, then any curve of degree $n_1+n_2-3$ that passes through all but one point of $C_1 \cap C_2$ also passes through the remaining point. In this form it is proven in Griffiths-Harris in the chapter on residues, so you need to know multi-dimensional residues to understand the proof. There is also a very nice paper https://www.msri.org/~de/papers/pdfs/1996-001.pdf on the Cayley-Bacharach theorem, its applications and variations.<|endoftext|> TITLE: Preimage of a compact set QUESTION [7 upvotes]: If $f : \mathbb{R}^n \to \mathbb{R}$ is continuous, is the preimage $f^{-1}([0,1])$ also compact? I'm trying to check the two conditions that it's closed and bounded.
I know how to show it's closed, but I don't know how to show that it's bounded. Can I just say the domain is unbounded because it's $\mathbb{R}^n$? REPLY [12 votes]: It's not true, not even for maps from $\mathbb R$ to $\mathbb R$. Consider $\sin(x)$: the preimage of $[0,1]$ is $\bigcup_{k\in\mathbb Z}[2k\pi,(2k+1)\pi]$, which is closed but unbounded, hence not compact.<|endoftext|> TITLE: Where to start learning Differential Geometry/Differential Topology? QUESTION [30 upvotes]: I realize that this may be a very general question, perhaps even an unclear one (if it is, I apologize), but as someone looking for the best way to start learning about these topics, I find that there is no clear path to learning Differential Geometry / Differential Topology, as there is with Analysis or General Topology, or even Abstract Algebra. For example, in Analysis most agree that Principles of Mathematical Analysis by Walter Rudin is the place to begin; for Topology, Munkres' book is the standard reference; and for Algebra, most tend to use Dummit and Foote, Artin, Fraleigh, or Lang. For Differential Geometry/Differential Topology, I find that there are no standard texts; the only one I know of is Lee's Introduction to Smooth Manifolds, but I feel I currently lack the prerequisites to tackle that book properly. Now I understand that to recommend a book to someone, you would need some gauge of their mathematical ability/maturity, but it is next to impossible to demonstrate that, so instead I can give a list of books that I'm currently reading through, and plan to read through in the next 3-6 months. What I'm currently reading: Principles of Mathematical Analysis (Baby Rudin); Linear Algebra Done Right (by Sheldon Axler); Vector Calculus, Linear Algebra, and Differential Forms: A Unified Approach (by Hubbard and Hubbard). What I plan on reading soon: Calculus on Manifolds by Spivak; Topology by Munkres; Complex Analysis by Ahlfors; Abstract Algebra by Dummit and Foote. But after that I'm lost as to where to go further. I'm lost between Analysis on Manifolds by Munkres, A Comprehensive Introduction to Differential Geometry by Spivak, and do Carmo's Differential Geometry of Curves and Surfaces. Or should I just skip all those intermediate books and go straight to Lee's Introduction to Smooth Manifolds? A side note: I find that the more challenging a book is, and the more I struggle through it, the deeper an understanding I develop of the topics in the book, and the greater my appreciation of the subject I'm studying as a whole. Based on the books I've read/plan to read, please recommend books that are not easy, but difficult and challenging. REPLY [21 votes]: It's been around 11 months since I first asked this question. I thought I would share my path to learning Differential Topology and Differential Geometry. Hopefully this will be of some help to others who are also hoping to learn Differential Topology and Differential Geometry. Firstly I read most of the contents of the General Topology part of Munkres. I tried afterwards to go through Calculus on Manifolds by Spivak, but I got bored really quickly, and as a book I didn't particularly enjoy reading or working out of it that much. So I jumped straight ahead to reading Topology from the Differentiable Viewpoint by Milnor; this quickly became one of my favourite books I've ever read. There was a saying I read somewhere on MathOverflow which said, "Run, don't walk, your way to Milnor's Topology from the Differentiable Viewpoint." That couldn't have been more true.
(If you are reading this answer, this is the most important thing to take away.) You just need a bit of General Topology and the basics of multivariable calculus and linear algebra to tackle it. In its short 50 pages, it takes you deep into Differential Topology. I'm planning on rereading it. I'm currently reading Differential Topology by Guillemin and Pollack, which is a superb supplement to Milnor's book. The only drawback (although not a bad one) is that in Milnor's and Guillemin and Pollack's books all smooth manifolds are embedded in some Euclidean space $\mathbb{R}^n$ and aren't abstract, though due to Whitney's Embedding Theorem this isn't too much of an issue. I am also currently reading Introduction to Smooth Manifolds by John Lee, which is an incredibly well-written book: it's clear and filled with tons of examples and exercises. I've also browsed through Introduction to Manifolds by Tu, but compared to Lee's book I don't use it as much. Finally, I think a book that is worth mentioning is Introduction to Topological Manifolds, also by John Lee, which acts as a great first encounter with topological manifolds.<|endoftext|> TITLE: Equivalent definitions for normal maps between von-Neumann algebras QUESTION [5 upvotes]: There are many different definitions for "normal" in the literature, and I could not see the equivalence between the two following definitions: Let $M, N$ be von Neumann algebras, and let $\varphi: M\to N$ be a map. We say that $\varphi$ is normal if: (1) $\varphi(\sup x_{\alpha})=\sup \varphi (x_{\alpha})$ for all norm-bounded monotone increasing nets of self-adjoint elements $\{x_{\alpha}\} \subseteq M_{sa}$; (2) $\varphi$ is $\sigma$-weakly continuous (identifying $M$ with the dual of its predual, and recalling that the resulting weak$^*$-topology on $M$ coincides with the relative ultra-weak ($\sigma$-weak) topology on $M\subseteq B(H)$).
The direction $(2)\Rightarrow (1)$: If we let $(x_{\alpha})\subseteq M_{sa}$ be a norm-bounded increasing net, then by Vigier's lemma $x_{\alpha}$ converges in SOT to some $x\in M_{sa}$, and actually $x=\sup_{\alpha} x_{\alpha}$. We know that on bounded subsets the ultra-weak topology coincides with the weak operator topology, so by the $\sigma$-weak continuity of $\varphi$ we get $\varphi(x_\alpha)\to \varphi(x)$ ($\sigma$-weakly). However, I'm not sure why $\lim \varphi(x_\alpha)=\sup \varphi (x_{\alpha})$. Maybe if we add an assumption that $\varphi$ is positive we could get it, again by applying Vigier's lemma. I also don't know how to show the converse direction. Maybe I should also mention that I'm not sure the above definitions I gave are equivalent; it is possible that I mixed something up, or that this is only true for states. Thank you for your time. REPLY [6 votes]: The equivalence of course requires the map to be positive. The equality $\lim\varphi(x_j)=\sup\varphi(x_j)$ is just the fact, in the real line, that the supremum of a monotone increasing sequence is its limit. For the reverse implication, the proof one needs is the proof of the equivalence for states, because if you have that $\varphi$ respects suprema, then so does $f\circ\varphi$ for every normal state $f$; then $f\circ\varphi$ is $\sigma$-weakly continuous (by the proof for states) and then $\varphi$ is $\sigma$-weakly continuous. Similarly for the converse. So the key step is to prove the assertion when $N=\mathbb C$ (and, as mentioned, assuming that $\varphi$ is positive). The way I know to prove the equivalence is in Theorem 7.1.12 in Kadison-Ringrose: one proves the equivalence of the statements (1) $\varphi=\sum_{n=1}^\infty\langle \cdot\, y_n,y_n\rangle$, where $\sum\|y_n\|^2=1$ and $\{y_n\}$ is orthogonal; (2) $\varphi=\sum_{n=1}^\infty\langle \cdot\, y_n,y_n\rangle$, where $\sum\|y_n\|^2=1$; (3) $\varphi$ is wot-continuous on the unit ball of $M$; (4) $\varphi$ is sot-continuous on the unit ball of $M$; (5) $\varphi(\sup x_n)=\sup\varphi(x_n)$ for any monotone bounded net of selfadjoints; (6) $\varphi$ is completely additive, i.e. $\varphi(\sum p_j)=\sum\varphi(p_j)$ for every family of pairwise orthogonal projections. I think I have seen proofs that the first three are equivalent on their own, but I'm not sure if there is a shortcut to avoid the whole path for a complete proof of your implication.<|endoftext|> TITLE: Advanced book on partial differential equations QUESTION [10 upvotes]: I am looking for an advanced book on partial differential equations that makes use of functional analysis as much as possible. All the books I have looked into so far either shy away from functional analysis and try to avoid even basic concepts, or present results from functional analysis I know anyway just to discuss some very basic applications to partial differential equations (say, semigroup theory applied to the heat equation). The book I am looking for should use functional analysis instead of hard analysis whenever possible (I am well aware of the fact that the theory of partial differential equations is not merely an application of functional analysis), go into some advanced topics that are relevant for research, and not spend too much space on covering the results of functional analysis itself - I have my references for that. The background is that I am interested in operator equations that are not partial differential equations, yet methods from PDE are often helpful. If it is relevant, I am mostly interested in elliptic and parabolic equations, although I don't want to limit the focus.
REPLY [9 votes]: Here are some suggestions. Functional Analysis, Sobolev Spaces, and Partial Differential Equations by Haim Brezis. This violates your rule of not developing the functional analysis material, but is a very good book. You can skip the stuff you know and jump right to the PDE / operator bits. An Introduction to Partial Differential Equations by Michael Renardy and Robert Rogers. Here you want the last part of the book, say after chapter 8. There's a lot of nice stuff in Chapters 10-12 that uses lots of functional analysis to solve nonlinear elliptic problems, etc. Monotone Operators in Banach Space and Nonlinear PDE by Ralph Showalter. This is heavy functional / operator theoretic material used to solve some serious nonlinear problems. Nonlinear Differential Equations of Monotone Type in Banach Spaces by Viorel Barbu. This covers the same sort of material as the Showalter book. Applications of Functional Analysis and Operator Theory by Hutson and Pym. There's a lot more in here than applications in PDE, but you might find it interesting.<|endoftext|> TITLE: What's the sum of all the positive integral divisors of $540$? QUESTION [11 upvotes]: What's the sum of all the positive integral divisors of $540$? My approach: I converted the number into the exponential form and found the number of integral divisors, which came out to be $24$. But I couldn't find the sum... Any trick? REPLY [6 votes]: The prime factorization of $540$ is $2^2\cdot 3^3\cdot 5$. The sum of the divisors of $540$ thus equals: $$\begin{align} \sigma\left(2^2\cdot 3^3\cdot 5\right) &=\frac {2^3-1}{2-1}\cdot\frac{3^4 - 1}{3-1}\cdot \frac{5^2 -1}{ 5-1}\\ &=7\cdot 40\cdot 6\\ &= 1680 \end{align}$$<|endoftext|> TITLE: Difference between large and small categories QUESTION [7 upvotes]: In my book the following definition is given: A category $C$ is called small if both the collection of objects and the collection of arrows are sets. Otherwise the category is called large. A set is defined to be a collection of distinct objects. Now I am a little confused, since the category of all finite sets is said to be small, which is OK for me since the objects and functions can be considered as sets. But why isn't the category of groups a small category? All groups may be considered as distinct objects and hence a set; the same argument goes for group homomorphisms. Can someone explain why I am so confused here? REPLY [4 votes]: The main source of confusion is the definition of a set as "a collection of distinct objects". Such a definition leads to Russell's paradox and Cantor's paradox, which are usually overcome through adopting an axiomatic set theory. In order for one to talk about small and large categories, one first needs to specify the adopted foundations of category theory. There are several foundations in which categories can be defined; the most frequently used is Zermelo-Fraenkel set theory with the axiom of choice (ZFC) and the axiom of the universe. In these settings, you can distinguish two different types of set-theoretical objects, namely (small) sets and proper classes, with respect to a given universe. There are some interesting and subtle differences between sets and proper classes, one of which is the size difference. All the set-theoretical objects that are bigger than the chosen universe are proper classes.
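As a standard illustration of why such care is needed (a textbook example, not specific to the foundations chosen above): the collection of all sets is a proper class, since if it were a set, then $R=\{x : x\notin x\}$ would also be a set, and $R\in R\iff R\notin R$, which is exactly Russell's paradox.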
<|endoftext|> TITLE: Difference between large and small categories QUESTION [7 upvotes]: In my book the following definition is given: A category $C$ is called small if both the collection of objects and the collection of arrows are sets. Otherwise the category is called large. A set is defined to be a collection of distinct objects. Now I am a little confused, since the category of all finite sets is said to be small, which is ok for me since the objects and functions can be considered as sets. But why isn't the category of groups a small category? All groups may be considered as distinct objects and hence a set; the same argument goes for group homomorphisms. Can someone explain why I am so confused here? REPLY [4 votes]: The main source of confusion is the definition of a set as "a collection of distinct objects". Such a definition leads to Russell's paradox and Cantor's paradox, which are usually overcome through adopting an axiomatic set theory. In order for one to talk about small and large categories, one first needs to specify the adopted foundations of category theory. There are several foundations in which categories can be defined; the most frequently used is Zermelo-Fraenkel set theory with the axiom of choice (ZFC) and the axiom of the universe. In these settings, you can distinguish two different types of set-theoretical objects, namely (small) sets and proper classes, with respect to a given universe. There are some interesting and subtle differences between sets and proper classes, one of which is the size difference. All the set-theoretical objects that are bigger than the chosen universe are proper classes. The category $\mathbf{FSet}$ of finite sets and their maps is small (with respect to an uncountable universe), because the axioms of ZFC and the uncountable universe imply that both $\mathop{\mathrm{Ob}}(\mathbf{FSet})$ and $\mathop{\mathrm{Mor}}(\mathbf{FSet})$ are small sets. However, $\mathop{\mathrm{Ob}}(\mathbf{Grp})$ is bigger than the universe, i.e. it is a proper class, and hence $\mathbf{Grp}$ is not small. One reason to care about having small or large categories is that not all constructions that work for small categories work for large categories. For instance, in some foundations, the functor category $\mathbf{Fun}(\mathcal{C},\mathcal{D})$ exists when $\mathcal{C}$ is small, but not necessarily when it is large. To read more about this subject, you may start with the foundations, and refer to the other links. P.S. Some sources refer to what is called here a set-theoretical object simply by set, and emphasise the word small in the name of small sets, to make the distinction.<|endoftext|> TITLE: harmonic series - generating function QUESTION [6 upvotes]: I am currently learning about generating functions and I found an interesting one for the harmonic series, $\dfrac{\log(1-x)}{x-1}$. Is there any hope I could get a formula for the $n$th coefficient out of this? The $n$th derivative looks messy... In the absence of a formula, can I at least get some asymptotic information, like that the harmonic series diverges? (Can be shown more simply, I know.) REPLY [4 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ With the identity $\ds{\pars{1 - x}^{m} = \sum_{k = 0}^{\infty} {m \choose k}\pars{-x}^{k} = \sum_{k = 0}^{\infty}{k - m - 1 \choose k}x^{k}}$: \begin{equation} \begin{array}{l} \mbox{Derivative with respect to}\ \ds{m}: \\ \ds{\pars{1 - x}^{m}\ln\pars{1 - x} = \sum_{k = 0}^{\infty}\bracks{\partiald{}{m}{k - m - 1 \choose k}}x^{k}} \\[5mm] \mbox{The limit}\ds{\ m \to - 1}: \\ \ds{-\,{\ln\pars{1 - x} \over 1 - x} = \sum_{k = 0}^{\infty}\color{#f00}{\bracks{-\,\partiald{}{m}{k - m - 1 \choose k}} _{\ m\ =\ - 1}}\ x^{k}} \end{array} \label{1}\tag{1} \end{equation} \begin{align} &\color{#f00}{\bracks{-\,\partiald{}{m}{k - m - 1 \choose k}}_{\ m\ =\ - 1}} = \left.\vphantom{\Huge A}-\,\partiald{}{m}\bracks{\Gamma\pars{k - m} \over k!\,\Gamma\pars{-m}}\right\vert_{\ m\ =\ - 1} \\[5mm] = & -\,{1 \over k!}\, {-\Gamma\, '\pars{k + 1}\Gamma\pars{1} + \Gamma\, '\pars{1}\Gamma\pars{k + 1}\over \Gamma^{2}\pars{1}} \\[5mm] = &\ -\,{1 \over k!}\bracks{% {-\Gamma\pars{k + 1}\Psi\pars{k + 1} + \Gamma\pars{1}\Psi\pars{1}\Gamma\pars{k + 1}}} \\[5mm] = &\ \Psi\pars{k + 1} - \Psi\pars{1} = \color{#f00}{H_{k}} \\[1cm] \stackrel{\mbox{see}\ \eqref{1}}{\implies} &\ \,\,\, \bbox[10px,#ffe,border:1px dotted navy]{\ds{% -\,{\ln\pars{1 - x} \over 1 - x} = \sum_{k = 1}^{\infty}H_{k}\, x^{k}}}\qquad\qquad\verts{x} < 1 \end{align}
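A quick numeric check of the boxed expansion (not from the original answer; a minimal Python sketch): the Taylor coefficients of $-\ln(1-x)/(1-x)$ are obtained as the Cauchy product of the series of $-\ln(1-x)$ with the all-ones series of $1/(1-x)$, then compared with the harmonic numbers $H_k$.

from fractions import Fraction

N = 8
log_coeffs = [Fraction(0)] + [Fraction(1, k) for k in range(1, N + 1)]  # -log(1-x) = sum_{k>=1} x^k / k
# multiplying by 1/(1-x) = 1 + x + x^2 + ... turns coefficients into partial sums
product = [sum(log_coeffs[: k + 1]) for k in range(N + 1)]
harmonic = [sum(Fraction(1, j) for j in range(1, k + 1)) for k in range(N + 1)]
assert product == harmonic  # H_0 = 0, H_1 = 1, H_2 = 3/2, ...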
<|endoftext|> TITLE: On the evaluation of the integral $\int_{-\frac{b}{a}}^{\frac{1-b}{a}}\log\left(ax+b\right)\exp\left(-\frac{1}{2}x^2\right)\mathrm{d}x$. QUESTION [11 upvotes]: Let $x\in(0,1)$ be a random variable that follows a truncated normal distribution with density $$ f(x)= \begin{cases} \frac{1}{\sqrt{2\pi\sigma^2}D}\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right) & \text{if}\:\:0<x<1 \\ 0 & \text{otherwise,} \end{cases} $$ where $\sigma>0$ and $D$ is the normalizing constant of the truncation. I am interested in the expected value of $\log(x)$, where $\log$ is the neperian logarithm, i.e. the integral $$ I = \frac{1}{\sqrt{2\pi\sigma^2}D}\int_{0}^{1}\log(x)\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)\mathrm{d}x, $$ which, using the substitution $u=\frac{x-\mu}{\sigma}$, is rewritten as follows $$ I = \frac{1}{\sqrt{2\pi}D}\int_{-\frac{\mu}{\sigma}}^{\frac{1-\mu}{\sigma}}\log\left(\sigma u+\mu\right)\exp\left(-\frac{1}{2}u^2\right)\mathrm{d}u. $$ I know that this can't be given in a closed form. If this makes any difference, I want to use this for a practical application, so if there could be a numerical approximation with some given accuracy, that would suit me, but I would like to avoid it. REPLY [4 votes]: First Approach: EDIT: I'm not too fond of this one, the second one seems better to me. I'll write $a = -\frac{\mu}{\sigma}$ and $b = \frac{1-\mu}{\sigma}$ and assume $\mu \neq 0$. $$I = \frac{1}{\sqrt{2\pi}D}\int_{a}^{b}\log\left(\sigma u+\mu\right)\exp\left(-\frac{1}{2}u^2\right)\mathrm{d}u.$$ If $\mu \neq 0$, we can rewrite $\log(\sigma u + \mu) = \log(\mu) + \log(1+\frac{\sigma}{\mu}u)$. I'll denote $c := \frac{\sigma}{\mu}$. We can split this up into two integrals $I = I_1 + I_2$, where $$ I_1 = \frac{1}{\sqrt{2\pi}D} \int_{a}^{b} \log(\mu) \exp\left(-\frac{1}{2}u^2\right)\mathrm{d}u = \frac{\log(\mu)}{2D}\,\mathrm{erf}\!\left(\frac{u}{\sqrt{2}}\right)\Big|_{a}^{b},$$ and $$I_2 = \frac{1}{\sqrt{2\pi}D}\int_{a}^{b}\log\left(1+ cu\right)\exp\left(-\frac{1}{2}u^2\right)\mathrm{d}u.$$ Here I see two possibilities to continue: Plugging in the Taylor expansion for $\log(1+cu)$ or the Taylor expansion for the exponential. I'll begin with the former. Exchanging summation and integral (which I'm not yet sure is legal), and integrating by parts we obtain \begin{align} I_2 &= \frac{1}{\sqrt{2\pi}D}\int_{a}^{b}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} (cu)^n\exp\left(-\frac{1}{2}u^2\right)\mathrm{d}u \\ &=-\frac{1}{\sqrt{2\pi}D}\sum_{n=1}^{\infty}\int_{a}^{b} \frac{(-c)^n}{n} u^n\exp\left(-\frac{1}{2}u^2\right)\mathrm{d}u\\ &=-\frac{1}{D}\sum_{n=1}^{\infty} (-c)^n \left[ \frac{u^n}{n}\text{erf}(u)\Big|_{a}^{b} - \int_{a}^{b}u^{n-1}\text{erf}(u) \mathrm{d}u\right]\\ \end{align} Looking up the last integral in this integral table (page 5, number 7), we obtain \begin{align} I_2 = -\frac{1}{D}\sum_{n=1}^{\infty} \frac{(-c)^n}{\sqrt{\pi}(n+1)} \left[ e^{-u^2}\sum_{k=0}^{l-1}\frac{\Gamma(\frac{n}{2}+1)}{\Gamma(\frac{n}{2}-k+1)} u^{n-2k} - (1-j)\Gamma\left(l+\frac{1}{2}\right)\text{erf}(u)\right]_{a}^{b} \end{align} where $j = 0$ or $j=1$ such that $2l-j = n+1$.
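Since the OP notes that a numerical approximation would be acceptable in practice, here is a minimal quadrature sketch (not from the original answer; assumes SciPy, and the values of $\mu$ and $\sigma$ are illustrative):

import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma = 0.4, 0.3                                       # illustrative parameters
D = norm.cdf((1 - mu) / sigma) - norm.cdf(-mu / sigma)     # truncation mass on (0, 1)

integrand = lambda x: np.log(x) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
I, err = quad(integrand, 0, 1)                             # log singularity at 0 is integrable
I /= np.sqrt(2 * np.pi * sigma**2) * D
print(I)   # a numerical estimate of E[log X] for the truncated normal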
Now for the second approach, which yields a much nicer-looking sum. Second Approach: Starting with $$I = \frac{1}{\sqrt{2\pi\sigma^2}D}\int_{0}^{1}\log(x)\exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)\mathrm{d}x$$ I'll expand the exponential function: \begin{align} \sqrt{2\pi\sigma^2}D\cdot I &= \int_{0}^{1} \log(x) \sum_{n=0}^{\infty} \frac{(-1)^n}{2^n \cdot n!}\left(\frac{x-\mu}{\sigma}\right)^{2n} \mathrm{d}x \\ &= \sum_{n=0}^{\infty} \frac{(-1)^n}{\sigma^{2n}\cdot 2^n \cdot n!} \int_{0}^{1} \left(x-\mu\right)^{2n} \log(x) \mathrm{d}x \\ &= \sum_{n=0}^{\infty} \frac{(-1)^n}{\sigma^{2n}\cdot 2^n \cdot n!} \sum_{k=0}^{2n} \binom{2n}{k}(-\mu)^{2n-k} \int_{0}^{1}x^{k} \log(x) \mathrm{d}x \\ \end{align} Applying the formula $$\int x^{k}\ln x\,dx=\frac{x^{k+1}((k+1)\ln x-1)}{(k+1)^2}$$ we obtain $$ I = \frac{1}{\sqrt{2\pi\sigma^2}D} \sum_{n=0}^{\infty} \frac{(-1)^{n+1}}{\sigma^{2n}\cdot 2^n \cdot n!} \sum_{k=0}^{2n} \binom{2n}{k}(-\mu)^{2n-k} \frac{1}{(k+1)^2} $$ It's not too hard to see that interchanging integral and sum is allowed using Fubini. We can find yet another representation for $I$: Using the binomial theorem, we see that $$ \int_{0}^{1}\int_{0}^{1} (xy - \mu)^{2n} \mathrm{d}x\mathrm{d}y = \sum_{k=0}^{2n} \binom{2n}{k}(-\mu)^{2n-k} \frac{1}{(k+1)^2}$$ Thus, we can write \begin{align} I &= \frac{1}{\sqrt{2\pi\sigma^2}D} \sum_{n=0}^{\infty} \frac{(-1)^{n+1}}{\sigma^{2n}\cdot 2^n \cdot n!} \int_{0}^{1}\int_{0}^{1} (xy - \mu)^{2n} \mathrm{d}x\mathrm{d}y \\ &= -\int_{0}^{1} \int_{0}^{1} \frac{1}{\sqrt{2\pi\sigma^2}D} \exp\left(-\frac{1}{2}\left(\frac{xy - \mu}{\sigma}\right)^2\right) \mathrm{d}x\mathrm{d}y \end{align} Now we have a smooth integrand on our domain. Therefore, it should be possible to obtain good numerical results by applying high-degree 2-dimensional tensor product formulas of univariate Gauss-Legendre quadrature formulas.<|endoftext|> TITLE: Motivation for Mobius Transformation QUESTION [6 upvotes]: Let $S$ denote the Riemann Sphere. Recall that a Mobius transformation is a function $f:S \to S$ defined as $z \to \frac {az+b}{cz+d}$ where $a,b,c,d \in \mathbb C$ with $ad-bc=1$. What is the motivation to study Mobius transformations? Why should one look at the map defined in the above way? REPLY [7 votes]: Let's work over $\mathbb{R}$ first. The space of all lines in $\mathbb{R}^2$ (meaning one-dimensional subspaces) is called the real projective line $\mathbb{RP}^1$, and it can be constructed by putting the equivalence relation $v\sim \lambda v$ (for all nonzero scalars $\lambda\in\mathbb{R}^\times$) on $\mathbb{R}^2\setminus0$, then collecting together the equivalence classes. For most points, we have $(x,y)\sim (x/y,1)$. Except for those of the form $(x,0)\sim(1,0)$. We can define a bijection $\mathbb{RP}^1\leftrightarrow\mathbb{R}\cup\{\infty\}$ by associating $(x,y)$ with $x/y$ if it exists and $(x,0)$ with $\infty$ otherwise. Note that topologically, this looks like a circle $S^1$. One can treat $S^1$ minus a point as $\mathbb{R}$ and then the missing point is called $\infty$. Any linear transformation $(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})\in\mathrm{GL}(2,\mathbb{R})$ sends lines to lines, so it "acts" on $\mathbb{RP}^1$. How does the corresponding action look on $\mathbb{R}\cup\{\infty\}$?
Well, we have $$\begin{pmatrix}a&b\\c&d\end{pmatrix}\begin{pmatrix} x\\y\end{pmatrix} =\begin{pmatrix} ax+by \\ cx+dy \end{pmatrix} $$ so the action on the corresponding elements of $\mathbb{R}\cup\{\infty\}$ will be $$ \begin{pmatrix}a&b\\c&d\end{pmatrix}\cdot\frac{x}{y} =\frac{ax+by}{cx+dy} $$ or in other words $$ \begin{pmatrix}a & b \\ c & d \end{pmatrix} z=\frac{az+b}{cz+d} $$ using $z=x/y$. Note that division by $0$ is interpreted as giving $\infty$, and $0/0$ will never happen since we used an invertible matrix. The same thing can be done with $\mathbb{C}$ (or even the quaternions $\mathbb{H}$, although then one must be careful since multiplication and division are not commutative operations), in which case $\mathbb{C}\cup\{\infty\}$ is called the Riemann sphere. Also notice this is a group action. That means given any two $A,B\in\mathrm{GL}(2,\mathbb{C})$ and $z\in\mathbb{C}\cup\{\infty\}$, we have $A\cdot (B\cdot z)=(AB)\cdot z$, where $AB$ is just usual matrix multiplication.<|endoftext|> TITLE: Why can't linear maps map to higher dimensions? QUESTION [28 upvotes]: I've been trying to wrap my head around this for a while now. Apparently, a map is a linear map if it preserves scalar multiplication and addition. So let's say I have the mapping: $$f(x) = (x,x)$$ This is not a mapping to a lower or equal dimension, but to a higher one. Yet it seems to preserve scalar multiplication and addition: $$f(ax) = (ax,ax) = a(x,x) = af(x)$$ $$f(x+y) = (x+y,x+y) = (x,x) + (y,y) = f(x) + f(y)$$ I must have made an error in my logic somewhere, but I can't seem to find it. Or are linear maps simply defined this way? I would really appreciate to know this. REPLY [8 votes]: Although the essence has already been stated, let me try to give you a more graphic approach to linear maps. Often, when you get the right mental picture of a construct, the properties fall right into place. PS: Whoops, That turned out to be a lot. I hope it's not a bad thing I kind of answer your question in the last paragraph, only. I hope this is helpful for somebody though. Plus, I hope I didn't make any false statements here considering more than finite dimensions. The definition Let $V$, $W$ be vector spaces over a field $F$. A map $f: V → W$ is called linear, if: $∀x, y\in V: f(x+y) = f(x)+f(y)$ $∀x \in V, λ \in F: f(λx) = λf(x)$. What do linear maps map? The first important thing here is that they map vector spaces into vector spaces. They can be anything, so this alone doesn't help a lot. Could they be something different than vector spaces? Well, if they weren't, our statements wouldn't make much sense – they use scalar multiplication and addition, which are operations only defined in vector spaces. So far nothing interesting here. You can, however, immediately ask: “What does the image of a linear map look like?”, or, “in what way changes/transforms $f$ the space $V$ to $W$?”. What can this subset of $W$ look like? For instance, if $V=ℝ^3, W=ℝ^3$, can the image be the sphere? It obviously cannot, since for every vector $w = f(v)$ in the image, we can scale the parameter $w$ and get a scaled version $f(λv) = λf(v) = λw$. This greatly restricts what the image qualitatively looks like! In fact, if you follow a similar argument for the preserving of addition, you might conjecture: The image itself is a vector space! Proof (For the sake of completeness) Let $x, y\in f[V], λ\in F.$ Thus we find $v\in V: x=f(v)$ and $w\in V: y = f(w)$. Now, $x+y=f(v)+f(w)=f(v+w)$, thus $x+y$ is in the image. 
Similarly we get $λx = λf(v) = f(λv)$, thus $λx$ is in the image. QED. And now? The fact that the image is a vector space being a subset of the vector space $W$, i.e. a (vector) subspace of $W$, helps for the intuition: e.g. in $ℝ^3$, vector subspaces are $\{0\}$, lines and planes through the origin, and $ℝ^3$ itself. So somehow, $f$ transforms a vector space $V$ into a subspace of $W$. At the moment, we don't know an important thing however: How “big” is this subspace? Can we say something about the dimension? If not, can we find some restriction like an upper/lower bound? The trick: Don't look at the whole space Let's just assume $V$ and $W$ have a basis, and, to make writing sums easier, are finite dimensional. We then can express elements of these spaces as the sum of the basis vectors scaled by a certain amount, i.e. as the “coordinate tuple” of said amounts. The (unique, bijective) map from the coordinate tuples to the vectors is called the “basis isomorphism”. Let's look at a vector $x=f(v)$ in the image of $f$. Choosing any ordered basis $(b_n)_n$ of $V$, we can write it as: $x = f(v) = f(\sum_{i=1}^n b_i v_i)$. We „expanded“ the vector $v$ in the preimage by looking at the bases separately (the $v_i$ are the coefficients with regard to our basis $b_i$). Now, the preserving of addition and scalar multiplication comes in handy: We can move the summation one level out! $$x = f(v) = \cdots = \sum_{i=1}^n v_i f(b_i)$$ This is actually a big deal! We now know that any element of the image can be described as a linear combination of the images of the basis elements of $V$ (or: it lies in the span of the image of the basis) – or, to put it differently: If you know the images of the basis elements, you know the image of the whole space. Once I got this, I pictured every (finite-dimensional, well, to be honest, 3-dimensional) linear map by picturing a basis on the left side and the image of that basis on the right side. This gives you immediately one constraint: The dimension of the image can at least not be larger than $\dim V$, since it is spanned by $\dim V$ (not necessarily linearly independent) vectors. Can it be less? Yes, if the images of the basis vectors are linearly dependent: Consider e.g. the map $$f: ℝ^3→ℝ^3, (x, y, z)↦ (x+z, y+z, 0)$$ It maps $e_x, e_y$ to themselves, but $f(0, 0, 1)=(1, 1, 0)$. So the basis of the preimage maps to three vectors each lying in the $x$-$y$-plane – in other words, they are linearly dependent, and span a subspace not of dimension 3, but of dimension 2. Your Question To answer your question: Yes, maps can indeed map to higher dimensional spaces. For instance, take $f: ℝ^n→ℝ^{n+k}, (x_1, …, x_n)↦(0, 0, …, x_1, …, x_n)$. The dimension of the image (also called the “rank”), however, cannot exceed the dimension of the domain. Thus, if you map to a higher dimension, your map cannot be surjective anymore. Matrix and determinant You might notice that whether or not the images of the basis vectors are linearly independent is a major factor in qualitatively determining the nature of this function (let the word sink in for a moment: determin-e… rings a bell?). Consider injectivity: If an $n$-dimensional space is transformed into an $m<n$-dimensional image, the images of the basis vectors must be linearly dependent, so some nonzero vector is mapped to zero and the map cannot be injective.
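A small numerical illustration of the rank bound discussed above (not part of the original answer; a minimal NumPy sketch): a linear map into a higher-dimensional space is easy to write down, but its rank is still capped by the dimension of the domain.

import numpy as np

# f: R^2 -> R^3, x |-> A x  (maps into a higher dimension)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(np.linalg.matrix_rank(A))   # 2: the image is a plane in R^3, so f is not surjective

# g: R^3 -> R^3, (x, y, z) |-> (x+z, y+z, 0), the example from the answer
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
print(np.linalg.matrix_rank(B))   # 2: the images of the basis vectors are linearly dependent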
<|endoftext|> TITLE: Examples of Maass forms of $SL(2, \mathbb{Z}) \backslash \mathbb{H}$? QUESTION [6 upvotes]: The paper says "Let $f(z)$ be a Maass form of $SL(2, \mathbb{Z}) \backslash \mathbb{H}$" -- what are some examples of this? Are there any explicit examples? Here's an answer from What is the relationship between modular forms and Maass forms? In the more common terminology modular forms on the upper half-plane fall into two categories: holomorphic forms and Maass forms. In fact there is a notion of Maass forms with weight and nebentypus, which includes holomorphic forms as follows: if $f(x+iy)$ is a weight $k$ holomorphic form, then $y^{k/2}f(x+iy)$ is a weight $k$ Maass form. What about $\theta(z)^4$? That is a weight 2 form. Then is $y^{-1}\theta^4(z)$ a weight 2 Maass form? There seems to be a Siegel-Maass correspondence that relates theta functions and Maass forms. Is that what's here? Here is another point of confusion: https://en.wikipedia.org/wiki/Maass_wave_form https://en.wikipedia.org/wiki/Harmonic_Maass_form Are these two concepts related? Are Eisenstein series examples of Maass forms? Other resources (e.g. [1]) are written so succinctly I have no idea what is going on. Sources seem to indicate that explicit Maass forms are scarce. I am hoping to understand a bit why such objects might be important even if we can never actually write them down. REPLY [6 votes]: The canonical reference for learning about Maass forms is probably Spectral Methods of Automorphic Forms by Henryk Iwaniec. Good references for the more general theory of Maass forms are the first half of the paper The Subconvexity Problem for Artin $L$-Functions by Duke, Friedlander, and Iwaniec, and Chapter 3 of the book Automorphic Representations and $L$-Functions for the General Linear Group by Goldfeld and Hundley. I don't know a great reference for Maass forms of half-integral weight or for harmonic (weak) Maass forms. Classical Maass forms are a type of modular form. They are functions $f$ on the upper half-plane $\mathbb{H}$ that are automorphic, of moderate growth, and are eigenfunctions of the weight $0$ Laplacian with eigenvalue $\lambda_f$. Automorphic means that $f(\gamma z) = f(z)$ for all $z \in \mathbb{H}$ and $\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z})$, where $\gamma z = \frac{az + b}{cz + d}$. Moderate growth means that $f(z)$ grows at most polynomially at the cusp at infinity, so that $f(z) = O(y^N)$ as $y \to \infty$ for some positive integer $N$. The weight $0$ Laplacian is \[\Delta = -y^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right).\] An explicit example of a classical (weight zero, level one) Maass form is the (real analytic) Eisenstein series \[E(z,s) = \sum_{\gamma \in \Gamma_{\infty} \backslash \Gamma} \Im(\gamma z)^s = \frac{1}{2} \sum_{\substack{m,n \in \mathbb{Z} \\ \gcd(m,n) = 1}} \frac{\Im(z)^s}{|mz + n|^{2s}}, \] where $s \in \mathbb{C}$. This has Laplacian eigenvalue $\lambda = s(1 - s)$. If we demand that not only is $f$ of moderate growth, but that its constant term \[\rho_f(0) = \int_{0}^{1} f(x + iy) \, dx\] vanishes for all $y > 0$, then $f$ is said to be a Maass cusp form. Eisenstein series are not cusp forms. Maass cusp forms are very similar to holomorphic cusp forms of weight $k \geq 2$: they have a basis of Hecke eigenforms to which one can associate $L$-functions that are entire and satisfy a functional equation. With all this being said, it is very hard to explicitly write out a Maass cusp form. The only cases we can really write out explicitly are due to Maass; they arise from Hecke characters of a real quadratic extension of $\mathbb{Q}$. Here by explicit, I mean that we can write down the Fourier coefficients (or equivalently the Hecke eigenvalues) of the Maass cusp form.
In general, when dealing with Maass cusp forms, you really shouldn't be thinking about a specific example, and specific examples of modular forms are rarely why number theorists find them interesting. Even for holomorphic cusp forms that are Hecke eigenforms, we can rarely say anything particularly detailed about the Hecke eigenvalues. If you want to see numerical examples of (the Hecke eigenvalues of) Maass cusp forms, then you should browse the $L$-Functions and Modular Forms Database. More generally, one can talk of Maass forms $f$ of weight $k$, level $q$, and nebentypus $\chi$, where $k$ is an integer and $\chi$ is a Dirichlet character modulo $q$. This means we replace the automorphy condition with $f(\gamma z) = \chi(\gamma) j_{\gamma}(z)^k f(z)$ for all $\gamma \in \Gamma_0(q)$, the moderate growth condition is now required at every singular cusp of $\Gamma_0(q) \backslash \mathbb{H}$, and $f$ must be an eigenfunction of the weight $k$ Laplacian \[\Delta_k = -y^2\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right) + iky \frac{\partial}{\partial x}.\] In this case, a holomorphic modular form $F(z)$ of weight $k \geq 2$ is a Maass form $f(z) = y^{k/2} F(z)$ of weight $k$, and a classical Maass form is a Maass form of weight $0$. One can also define half-integral weight Maass forms, though the automorphy condition is a little more complicated. The function $y^{-1} \theta(z)$ is not a Maass form of weight $2$, because it does not satisfy the right automorphy condition (try to work out exactly the automorphy condition it does satisfy). The Siegel-Maass correspondence is a relation between classical Maass forms (of weight $0$) and half-integral weight Maass forms (of weight $-1/2$ or $1/2$). The theta function is an example of a Maass form of weight $1/2$, but there are plenty of other such Maass forms (and they are very hard to explicitly describe). Harmonic (weak) Maass forms are a generalisation of classical Maass forms, where we weaken the moderate growth condition.<|endoftext|> TITLE: If $M$ is a closed subspace of an Hilbert space $H$, then $M^{\perp\perp}=M$ QUESTION [5 upvotes]: Let $H$ be an Hilbert space and define $M^{\perp}$ to be: $$M^{\perp}=\{x\in H\vert(x,m)=0\,\forall\, m\in M\}$$ where $M\subset H$ is any subset of $H$. It is easily seen that $M,\overline{M}\subset (M^{\perp})^{\perp}=:M^{\perp\perp}$. I don't manage to prove that $M^{\perp\perp}\subset M$ if $M$ closed. I know I have to use completeness since there are counterexamples in non-complete spaces, but I am kind of stuck. I tried to use the orthogonal decomposition theorem twice or to use the projection onto a closed convex in order to get a contradiction by supposing there exists $y\in M^{\perp\perp}\setminus M$, but it leads me nowhere. Any hint is appreciated. Thank you. REPLY [8 votes]: If you're allowed to use the orthogonal decomposition theorem, you can argue as follows: Let $v \in M^{\perp \perp}$. Then we can write $v = v_1 + v_2$, where $v_1 \in M$ and $v_2 \in M^{\perp}$. Hence we know that $\langle v_1, v_2 \rangle = 0$, and also $\langle v, v_2 \rangle = 0$. This implies $\langle v_2, v_2 \rangle = \langle v - v_1, v_2 \rangle = 0$, so $v_2 = 0$, and we get $v \in M$.<|endoftext|> TITLE: Is there a 'nice' axiomatization in the language of arithmetic of the statements ZF proves about the natural numbers? QUESTION [20 upvotes]: It's well known that ZF (equivalently ZFC by this question) proves more about the natural numbers than PA. 
The set of such statements is recursively enumerable so it is recursively axiomatizable. Is it difficult to explicitly axiomatize these statements in the language of arithmetic? Obviously 'nice' is somewhat subjective but I think one thing that would qualify is if the set of such statements had an axiomatization that is some finite set of axioms together with some finite set of axiom schemata that are 'quantified over formulas' like the axiom schema of induction, or (I'm not sure if this is equivalent or not) having a conservative extension in the language of $ACA_0$ that is finitely axiomatizable. REPLY [11 votes]: I'm hesitant to answer negatively - as soon as I do, someone's going to post some neat fact I didn't know about! - but I strongly suspect that the answer to your question is no, there is no such nice axiomatization known currently. Why? You mention $ACA_0$. Well, the arithmetic consequences of ZF are much, much more than those of $ACA_0$: for example, ZF proves the consistency of $ACA_0$, not to mention $ATR_0$, $\Pi^1_1-CA_0$, $Z_2$, $Z_3$, . . . , $Z_\omega$, . . . , $KP$, . . . , $Z$ (= Zermelo set theory), . . . More generally, suppose $\alpha$ is a "reasonably definable" ordinal (e.g. $\omega_1, \omega_2, \omega_\omega, ...$). Then $ZF$ proves that $V_\alpha$ exists, and hence that $Th(V_\alpha)$ is consistent. Now, $ZF$ can't decide what $Th(V_\alpha)$ is, in general (is the continuum hypothesis true? that's a question about $Th(V_{\omega+3})$!), but $ZF$ will be able to prove that certain things are in $Th(V_\alpha)$ (e.g. $ZF$ proves that $V_{\omega+\omega}$ is a model of $Z$). So we can take "large" definable $\alpha$s, and isolate "known subtheories" of $Th(V_\alpha)$, and $ZF$ proves the consistency of these theories - and this gives us a truly daunting class of arithmetic consequences of $ZF$. Before we even begin to do anything interesting, we've already left the realm of known theories of arithmetic far behind. There's another twist on this question that occurs to me. Suppose I want to describe the arithmetic consequences of some stronger theory ZFC+X (knowing me, X is probably a large cardinal property :P). Well, that's going to be even harder than describing the arithmetic consequences of ZFC. But, maybe I can "reduce" to the ZFC problem: what if I just want a "nicely" axiomatized set $\Gamma$ of arithmetic sentences such that maybe $\Gamma$ doesn't exhaust the arithmetic consequences of ZFC+X, but the arithmetic consequences of ZFC+$\Gamma$ are exactly the arithmetic consequences of ZFC+X! Let's say that such a $\Gamma$ "captures ZFC+X arithmetically over ZFC." Then: for "natural" such X (large cardinals, forcing notions, statements about cardinal arithmetic, etc.), can we find "natural" $\Gamma$ which capture ZFC+X arithmetically over ZFC? To the best of my knowledge no such example exists. Note that even ZFC+"Con(ZFC+X)" doesn't in general capture the arithmetic consequences of ZFC+X: ZFC+X will have some high-complexity arithmetic consequences (in the sense of the arithmetic hierarchy), while consistency statements are merely $\Pi^0_1$, and we can find e.g. true $\Pi^0_2$ sentences which are not provable from ZFC + all true $\Pi^0_1$ sentences! (If I recall correctly, "ZFC is $\Sigma^0_1$ correct" is such a statement.) And the same situation appears to hold with regard to describing the arithmetic consequences of ZFC "modulo" smaller theories, like KP or Z.
Again, however, this is not my area of expertise, so I will be happy to be corrected if something is known along these lines!<|endoftext|> TITLE: Problem in proof of open mapping theorem? QUESTION [7 upvotes]: I was working through the proof of the open mapping theorem in Walter Rudin's Real and Complex Analysis and got stuck at one point. Suppose $X$ and $Y$ are Banach spaces and $T$ is a bounded linear operator between them which is $\textbf{onto}$. We want to prove $$T(U) \supset \delta V$$ where $U$ is the open unit ball in $X$ and $\delta V = \{ y \in Y : \|y\| < \delta\}$. Proof- For any $y \in Y$, since the map is onto, there exists an $x \in X$ such that $Tx = y$. It is also clear that if $\|x\| < k$, then $y \in T(kU)$ for any $k$. Clearly $$Y = \underset{k \in \mathbb{N}}{\cup} T(kU) $$ But as $Y$ is complete, by the Baire category theorem it can't be written as a countable union of nowhere dense sets. So there exists at least one $k$ such that $ T(kU)$ is not nowhere dense. Thus this means $$(\overline{T(kU)})^0 \ne \emptyset$$ i.e. the closure of $T(kU)$ has non-empty interior. Let $W$ be an open set contained in the closure of $T(kU)$. Now for any $w \in W$ we have $w \in \overline{T(kU)}$, so every point of $W$ is the limit of a sequence $\{Tx_i\}$, where $x_i \in kU$. Let us now fix $W$ and $k$. Now choose $y_0 \in W$ and choose $\eta > 0$ so that $y_0+y \in W$ if $\|y\| < \eta$. This can be done since $W$ is open, so every point of it has some neighborhood contained in $W$. Now as $y_0 , y_0+y \in W$, from the above paragraph there exist sequences $\{x_i'\}$ and $\{x_i''\}$ in $kU$ such that $$T(x_i') \to y_0 \qquad T(x_i'') \to y_0+y \quad as \ i \to \infty$$ Set $x_i = x_i'-x_i''$. Then clearly $$\|x_i\| \leq \|x_i'\| + \|x_i''\| < 2k$$ and $T(x_i) \to y$; this holds for every $y$ with $\|y\|< \eta$. Now it is written that the linearity of $T$ shows that the following is true for $\delta = \dfrac{\eta}{2k}$: To each $y \in Y$ and to each $\epsilon > 0$ there corresponds an $x \in X$ such that $$\|x\| \leq \delta^{-1}\|y\| \quad \text{and} \quad \|Tx-y\| < \epsilon \quad (1)$$ How does this follow? This proof is given in Walter Rudin, 3rd edition, on page 112. REPLY [2 votes]: For any $y\in Y$, set $y^\prime = \dfrac {\eta y} {\|y\|}$. Then $y_0 +y^\prime \in \overline W$, so we can find $x^\prime\in 2kU$ such that $\|Tx^\prime - y^\prime\| < \dfrac {\eta\epsilon}{\|y\|}$. Then (1) holds for $y$, with $x = \dfrac {\|y\| x^\prime}\eta$. Or, just choose $\eta$ such that $y_0+y\in W$ if $\|y\| \leq \eta$ instead.<|endoftext|> TITLE: Summation of Central Binomial Coefficients divided by even powers of $2$ QUESTION [10 upvotes]: Whilst working out this problem the following summation emerged: $$\sum_{m=0}^n\frac 1{2^{2m}}\binom {2m}m$$ This is equivalent to $$\begin{align} \sum_{m=0}^n \frac {(2m-1)!!}{(2m)!!}&=1+\frac 12+\frac {1\cdot3}{2\cdot 4}+\frac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}+\cdots +\frac{1\cdot 3\cdot 5\cdot \cdots \cdot(2n-1)}{2\cdot 4\cdot 6\cdot \cdots \cdot 2n}\\ &=1+\frac 12\left(1+\frac 34\left(1+\frac 56\left(1+\cdots \left(1+\frac {2n-1}{2n}\right)\right)\right)\right) \end{align}$$ (the $m=0$ term contributes the leading $1$) and the terms are the same as the coefficients in the expansion of $(1-x)^{-1/2}$. Once the solution $$ \frac {n+1}{2^{2n+1}}\binom {2n+2}{n+1}$$ is known, the telescoping sum can be easily derived, i.e. $$\frac 1{2^{2m}}\binom {2m}m=\frac {m+1}{2^{2(m+1)-1}}\binom {2(m+1)}{m+1}-\frac m{2^{2m-1}}\binom {2m}m$$ However, without knowing this a priori, how would we have approached this problem?
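A quick numerical check of the conjectured closed form before deriving it (not part of the original thread; a minimal Python sketch using exact integer arithmetic):

from fractions import Fraction
from math import comb

for n in range(8):
    s = sum(Fraction(comb(2 * m, m), 4**m) for m in range(n + 1))
    closed = Fraction((n + 1) * comb(2 * n + 2, n + 1), 2**(2 * n + 1))
    assert s == closed
print("closed form verified for n = 0..7")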
REPLY [2 votes]: Using an Extension of Pascal's Rule $$ \begin{align} \sum_{m=0}^n\frac1{2^{2m}}\binom{2m}{m} &=\sum_{m=0}^n\frac{(2m-1)!!}{(2m)!!}\tag{1a}\\ &=\sum_{m=0}^n\binom{m-\frac12}{m}\tag{1b}\\ &=\sum_{m=0}^n\left[\binom{m+\frac12}{m}-\binom{m-1+\frac12}{m-1}\right]\tag{1c}\\ &=\binom{n+\frac12}{n}\tag{1d}\\[6pt] &=\frac{(2n+1)!!}{(2n)!!}\tag{1e}\\[6pt] &=\frac{2n+1}{2^{2n}}\binom{2n}{n}\tag{1f} \end{align} $$ Explanation: $\text{(1a)}$: $\frac{\color{#C00}{(2m)!}}{\color{#C00}{2^mm!}\,\color{#090}{2^mm!}}=\frac{\color{#C00}{(2m-1)!!}}{\color{#090}{(2m)!!}}$ $\text{(1b)}$: divide numerator and denominator by $2^m$ $\text{(1c)}$: apply $(4)$ with $\alpha=\frac12$ $\text{(1d)}$: telescoping sum $\text{(1e)}$: multiply numerator and denominator by $2^n$ $\text{(1f)}$: $\frac{(2n+1)!!}{(2n)!!}=\frac{(2n+1)\color{#C00}{(2n-1)!!}}{\color{#090}{(2n)!!}}=\frac{(2n+1)\color{#C00}{(2n)!}}{\color{#C00}{2^nn!}\,\color{#090}{2^nn!}}$ Extension of Pascal's Rule Newton's Generalized Binomial Theorem says $$ \begin{align} \sum_{m=0}^\infty\binom{m+\alpha}{m}x^m &=\sum_{m=0}^\infty(-1)^m\binom{-1-\alpha}{m}x^m\tag{2a}\\ &=(1-x)^{-1-\alpha}\tag{2b} \end{align} $$ Explanation: $\text{(2a)}$: convert to negative binomial coefficient $\text{(2b)}$: Binomial Theorem Thus, $$ \begin{align} \sum_{m=0}^\infty\binom{m-1+\alpha}{m}x^m &=(1-x)^{-\alpha}\tag{3a}\\[6pt] &=(1-x)(1-x)^{-1-\alpha}\tag{3b}\\[9pt] &=(1-x)\sum_{m=0}^\infty\binom{m+\alpha}{m}x^m\tag{3c}\\ &=\sum_{m=0}^\infty\binom{m+\alpha}{m}\left(x^m-x^{m+1}\right)\tag{3d}\\[3pt] &=\sum_{m=0}^\infty\left[\binom{m+\alpha}{m}-\binom{m-1+\alpha}{m-1}\right]x^m\tag{3e} \end{align} $$ Explanation: $\text{(3a)}$: apply $\text{(2b)}$ $\text{(3b)}$: factor out $(1-x)$ $\text{(3c)}$: apply $\text{(2b)}$ $\text{(3d)}$: distribute $(1-x)$ $\text{(3e)}$: substitute $m\mapsto m-1$ in the subtrahend Thus, for arbitrary $\alpha\in\mathbb{R}$, we can extend Pascal's Rule to $$ \binom{m-1+\alpha}{m}+\binom{m-1+\alpha}{m-1}=\binom{m+\alpha}{m}\tag4 $$<|endoftext|> TITLE: If $\int_1^ \infty \frac {x^3+3}{x^6(x^2+1)} \, \mathrm d x=\frac{a+b\pi}{c}$, then find $a,b,c$. QUESTION [10 upvotes]: If $$\int_1^ \infty \frac {x^3+3}{x^6(x^2+1)} \, \mathrm d x=\frac{a+b\pi}{c} $$ then find $a, b, c$. Now, using partial fractions I calculated $$a = 62-10\ln (2) \qquad\qquad b = -15 \qquad\qquad c = 20$$ but it took me more than 45 minutes to do all the work. The question was asked in an MCQ exam where only 4-5 minutes are available. I am probably missing something which could help solve it. Thanks! Note it was asked in an exam for students of grade 12, so I basically don't know very complex integration techniques; thus I am searching for integration via elementary functions only.
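A quick numerical check of the OP's values (not part of the original thread; a minimal sketch assuming SciPy):

import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda x: (x**3 + 3) / (x**6 * (x**2 + 1)), 1, np.inf)
closed = (62 - 10 * np.log(2) - 15 * np.pi) / 20
print(I, closed)          # both approximately 0.3972
assert abs(I - closed) < 1e-10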
REPLY [4 votes]: First, break the integral in two: $$ \int_1^{\infty} \frac{ x^3 + 3 }{x^6(x^2+1)}\,dx = \int_1^{\infty} \frac{dx}{x^3(x^2+1)} + 3 \int_1^{\infty} \frac{dx}{x^6(x^2+1)} $$ We integrate the second integral first. Notice $$ \int\limits_1^{\infty} \frac{(1 +x^2 - x^2)\, dx }{x^6(x^2+1)} = \int\limits_1^{\infty} \frac{dx}{x^6} - \int\limits_1^{\infty} \frac{ dx}{x^4(x^2+1)} = \frac{1}{5} - \int\limits_1^{\infty} \frac{dx}{x^4} + \int\limits_1^{\infty} \frac{ dx }{x^2 ( x^2+1)} $$ $$ = \frac{1}{5} - \frac{1}{3} + \int\limits_1^{\infty} \left( \frac{1}{x^2} - \frac{1}{1+x^2} \right) dx = -\frac{2}{15} + 1 - \frac{\pi}{4} = \frac{13}{15} - \frac{\pi}{4} $$ Now, for the first integral, use the same trick: $$ \int\limits_1^{\infty} \frac{ (1 + x^2 - x^2)\,dx}{x^3(x^2+1)} = \int_1^{\infty} \frac{dx}{x^3} - \int_1^{\infty} \frac{ dx }{x(x^2+1)} = \frac{1}{2} - \int_1^{\infty} \frac{ dx }{x(x^2+1)}$$ Using $x = \tan t$, we solve the last integral easily: $$ \int \frac{ \sec^2 t \,dt }{\tan t \sec^2 t } = \int \frac{ \cos t \,dt }{\sin t} = \ln ( \sin t ) = \ln \left( \frac{ x }{\sqrt{x^2+1}}\right) = \frac{1}{2} \ln \left( \frac{x^2 }{x^2+1} \right) \bigg|_1^{\infty} = 0 - \frac{1}{2}\ln\frac{1}{2} = \frac{1}{2} \ln 2$$ Thus, $$ \int_1^{\infty} \frac{ x^3 + 3 }{x^6(x^2+1)}\,dx = \left(\frac{1}{2} - \frac{1}{2}\ln 2\right) + 3\left(\frac{13}{15} - \frac{\pi}{4}\right) = \boxed{ \frac{62 - 10\ln 2 - 15 \pi }{20}}$$ in agreement with the values of $a$, $b$ and $c$ found by the OP.<|endoftext|> TITLE: Classify groups of order $pq^2$ using semidirect product QUESTION [8 upvotes]: I am struggling with semidirect products and how they can be used to classify groups of a certain order. In particular, I need help with the nonabelian case. This is the problem I am working with: Classify all groups of order $pq^2$ with $p$, $q$ primes, $p<q$.<|endoftext|> TITLE: Integrate $I=\int_0^1\frac{\arcsin{(x)}\arcsin{(x\sqrt\frac{1}{2})}}{\sqrt{2-x^2}}dx$ QUESTION [5 upvotes]: How to prove \begin{align} I &= \int_0^1\frac{\arcsin{(x)}\arcsin{(x\sqrt\frac{1}{2})}}{\sqrt{2-x^2}}dx \\ &= \frac{\pi}{256}\left[ \frac{11\pi^4}{120}+2{\pi^2}\ln^2{2}-2\ln^4{2}-12\zeta{(3)}\ln{2} \right] \end{align} By setting $$x=\sqrt{2}y$$ and then using integration by parts, we have $$I=\frac{\pi^5}{2048}-\frac{1}{4}\int_0^1{\arcsin^4\left( \frac{z}{\sqrt{2}}\right) }\frac{dz}{\sqrt{1-z^2}}$$ But how to calculate this integral? I would appreciate your help REPLY [7 votes]: This integral can be done by recognizing that $$\frac{\arcsin{\frac{x}{\sqrt{2}}}}{\sqrt{2-x^2}} = \sum_{n=0}^{\infty} \frac{2^n x^{2 n+1}}{(2 n+1) \binom{2 n}{n}}$$ and that $$\int_0^1 dx \, x^{2 n+1} \arcsin{x} = \frac{\pi}{4 (n+1)} \left [1- \frac1{2^{2 n+2}} \binom{2 n+2}{n+1}\right ] $$ To see this, integrate by parts and see this answer. With a bit of algebra, we find that the integral is equal to $$\frac{\pi}{2} \sum_{n=0}^{\infty} \frac{2^n}{(2 n+1)(2 n+2) \binom{2 n}{n}} - \frac{\pi}{16} \sum_{n=0}^{\infty} \frac1{2^n (n+1)^2}$$ The first sum may be evaluated by recognizing that it is $$\int_0^1 dx \frac{\arcsin{\frac{x}{\sqrt{2}}}}{\sqrt{2-x^2}} = \frac{\pi^2}{32}$$ The second sum is recognized as $2\operatorname{Li}_2{\left ( \frac12 \right )} = \frac{\pi^2}{6}-\log^2{2} $ Putting all of this together, we find that the integral is equal to $$\int_0^1 dx \, \frac{\arcsin{x} \arcsin{\frac{x}{\sqrt{2}}}}{\sqrt{2-x^2}} = \frac{\pi^3}{192} + \frac{\pi}{16} \log^2{2} $$ Numerical evaluation in Mathematica confirms the result, which differs from that asserted by the OP.
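The discrepancy is easy to check numerically (not part of the original thread; a minimal sketch assuming SciPy):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.arcsin(x) * np.arcsin(x / np.sqrt(2)) / np.sqrt(2 - x**2)
I, _ = quad(f, 0, 1)
answer = np.pi**3 / 192 + (np.pi / 16) * np.log(2)**2
print(I, answer)   # both approximately 0.2558, supporting the answer above
assert abs(I - answer) < 1e-10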
<|endoftext|> TITLE: How to find the coordinates of the vertices of a pentagon centered at the origin QUESTION [6 upvotes]: I am attempting to follow this tutorial here: http://www.mathopenref.com/polygonradius.html My goal is to find the coordinates of the vertices of a pentagon, given some radius. For example, if I know that the center is at $(0,0)$, and my radius is $8.1$, what formula can I use to get the coordinates of points A, E, B, D, C, if I know the center point between D, C (i.e. $(0,5)$)? REPLY [4 votes]: Assuming we start with B on the $y$ axis, $$(x,y)= ( 8.1 \cos ( t + k \, 2 \pi/5) , 8.1 \sin( t + k\, 2 \pi/5)) $$ where $t$ is the polar coordinate angle at $(0,0)$ and $k = 0, 1, 2, 3, 4$ gives all the vertex coordinates of the regular pentagon.
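A direct translation of this formula into code (not part of the original answer; a minimal Python sketch, with the starting angle $t = \pi/2$ chosen so the first vertex sits on the positive $y$ axis):

import math

r, t = 8.1, math.pi / 2          # radius and polar angle of the first vertex
vertices = [(r * math.cos(t + k * 2 * math.pi / 5),
             r * math.sin(t + k * 2 * math.pi / 5)) for k in range(5)]
for v in vertices:
    print(f"({v[0]:+.4f}, {v[1]:+.4f})")   # the five vertices, counterclockwise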
<|endoftext|> TITLE: Finding a minimal polynomial over $\mathbb{Q}(\sqrt5)$ QUESTION [5 upvotes]: The question is to find the minimal polynomial of $\sqrt{2}+\sqrt{7}$ over $\mathbb{Q}(\sqrt{5})$. First, I found its minimal polynomial over $\mathbb{Q}$ which is equal to $X^4 - 18X^2 + 25$. I suppose this could already be a candidate for a minimal polynomial over $\mathbb{Q}(\sqrt{5})$ so I tried proving that using the tower property, but I don't think that's the right approach. REPLY [4 votes]: Hint: First note that the minimal polynomial must divide $x^4-18x^2+25$ and using the tower law we can say that if this is not the minimal polynomial then it has to be quadratic. If you know what the other roots of $x^4-18x^2+25$ are then you can check all 6 pairs (in fact you only need 3 of them) to see if they have coefficients in $\mathbb{Q}(\sqrt{5})$ or not. There is a quicker method using the Galois group, but I'm guessing you haven't met this yet?<|endoftext|> TITLE: Completeness of Measure spaces QUESTION [6 upvotes]: A metric space X is called complete if every Cauchy sequence of points in X has a limit that is also in X. It's perfectly clear to me. A measure space $(X, \chi, \mu)$ is complete if the $\sigma$-algebra contains all subsets of sets of measure zero. That is, $(X, \chi, \mu)$ is complete if $N \in \chi$, $\mu (N) = 0$ and $A \subseteq N$ imply $A \in \chi$. Technically, I could understand the definition, but can't get the logic behind it. Questions: 1) Why do we care only about subsets of sets of measure zero to determine completeness? 2) How does the completeness of measure spaces relate to the completeness of metric spaces? 3) Could you suggest a concrete elementary example of a measure space (preferably, with simple sets) that isn't initially complete and then is completed? REPLY [3 votes]: 3) Take the sample space to be $\Omega=\{1,2,3\}$, the $\sigma$-algebra to be $\mathcal F=\{\emptyset,\Omega,\{1,2\},\{3\}\}$, and let $P$ be the probability measure on $(\Omega,\mathcal F)$ such that $P(\{3\})=1$. Then $P(\{1,2\})=0$ and $\{1\}$ is a non-$\mathcal F$-measurable subset of $\{1,2\}$. The probability space $(\Omega,\mathcal F,P)$ is not complete.<|endoftext|> TITLE: Approximation on partitions in $L^2([0,1]\times \Omega)$ QUESTION [5 upvotes]: I’m working on Nualart’s book “The Malliavin calculus and related topics” and in the proof of lemma 1.1.3 he mentions that the operators $P_n$ have their operator norm bounded by 1. I fail to see why, can you help me? Using Jensen’s inequality I get a norm more akin to $2^n$, so I guess Jensen is too weak to prove that? Quoting the proof: Let $u$ be a process in $L^2_a([0,1]\times\Omega)$ ($L^2_a$ are the adapted processes w.r.t Brownian motion) and consider the sequence of processes defined by $\tilde u^n(t)=\sum_{i=1}^{2^n-1}2^n\left(\int_{(i-1)2^{-n}}^{i2^{-n}}u(s)ds\right)1_{]i2^{-n},(i+1)2^{-n}]}(t)$. We claim that the sequence converges to $u$ in $L^2([0,1]\times\Omega)$. In fact define $P_n(u)=\tilde u^n$. Then $P_n$ is a linear operator in $L^2([0,1]\times\Omega)$ with norm bounded by one. REPLY [2 votes]: Jensen's inequality works fine. Indeed (correcting the mistake spotted by Gordon): $$\int_0^1 (\tilde{u}^n)^2 (t) dt = \int_0^1 \sum_{i=0}^{2^n-1}\left(\int_{(i-1)2^{-n}}^{i2^{-n}}u(s)2^nds\right)^2 1_{]i2^{-n},(i+1)2^{-n}]}(t) dt,$$ since all the characteristic functions have disjoint support. Hence, using Jensen's inequality (with the probability measures $2^n ds$ on each interval $]i2^{-n},(i+1)2^{-n}]$): $$\begin{align}\|P_n u\|_{\mathbb{L}^2}^2 & \leq \int_0^1 \sum_{i=0}^{2^n-1}\left(\int_{(i-1)2^{-n}}^{i2^{-n}}u(s)^2 2^nds\right) 1_{]i2^{-n},(i+1)2^{-n}]}(t) dt \\ & = \sum_{i=0}^{2^n-1} \left(2^n\int_{(i-1)2^{-n}}^{i2^{-n}}u(s)^2 ds\right) \int_0^1 1_{]i2^{-n},(i+1)2^{-n}]}(t) dt \\ & = \sum_{i=0}^{2^n-1} \int_{(i-1)2^{-n}}^{i2^{-n}}u(s)^2 ds \\ & = \|u\|_{\mathbb{L}^2}^2. \end{align}$$ So, in this case, Jensen's inequality is not weaker; you just need to be careful about where you apply it. Note, by the way, that this can be proved much faster. Fix $n\geq 0$. Let $\pi_n := \{]i2^{-n},(i+1)2^{-n}]: \ 0 \leq i < 2^n\}$, and $\mathcal{C}_n := \sigma (\pi_n)$ be the $\sigma$-algebra generated by $\pi_n$. Then: $$P_n (u) = \mathbb{E} (u | \mathcal{C}_n),$$ and the conditional expectation is always a weak $\mathbb{L}^2$ contraction, which can be proved for instance with the conditional version of Jensen's inequality: $$\mathbb{E} (P_n (u)^2) = \mathbb{E} (\mathbb{E} (u | \mathcal{C}_n)^2) \leq \mathbb{E} (\mathbb{E} (u^2 | \mathcal{C}_n)) = \mathbb{E} (u^2).$$ This point of view makes more sense from a probabilist's standpoint, I think.<|endoftext|> TITLE: Prove that a homomorphism $\phi$ must be trivial. QUESTION [7 upvotes]: Let $G,H$ be finite groups where $|G|$ and $|H|$ are coprime. Prove that any homomorphism $\phi :G\rightarrow H$ must be trivial $($i.e. $\phi (x)=e_H, $ the identity element of $H, \forall x\in G)$. We know that $Ker(\phi )$ and $Im(\phi )$ are subgroups of $G$ and $H$, respectively. Then, the Lagrange Theorem asserts that $|Ker(\phi )|$ divides $|G|$ while $|Im(\phi )|$ divides $|H|$. I am trying to show that $|Im(\phi )|=1 \implies |Ker(\phi )|=|G| \implies \phi$ is trivial. I can show this last series of implications and the first part separately. How do I make the leap from where I left off to $|Im(\phi )|=1$? Side note: I am using the Range-Kernel Theorem as well: $|Im(\phi )|\times |Ker(\phi )|=|G|$. REPLY [5 votes]: You have the right idea: $|\mathrm{Im}(\phi)|$ divides $|H|$, but on the other hand it is equal to $\frac{|G|}{|\ker(\phi)|}$, which is a divisor of $|G|$. Since $|H|$ and $|G|$ are coprime, this means that $|\mathrm{Im}(\phi)|=1$.<|endoftext|> TITLE: Find the power series of $f(x)=\frac{1}{x^2+x+1}$ QUESTION [5 upvotes]: I want to find the power series of $$f(x)=\frac{1}{x^2+x+1}$$ How can I prove the following? $$f(x)=\frac{2}{\sqrt{3}} \sum_{n=0}^{\infty} \sin\frac{2\pi(n+1)}{3} x^n \,\,\,\, |x|<1$$ In particular I would like to know how to proceed in this case. The polynomial $x^2+x+1$ has no real roots, so here I cannot use partial fraction decomposition: what method should I use?
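The claimed expansion is easy to test numerically before deriving it (not part of the original thread; a minimal Python sketch comparing the formula against Taylor coefficients obtained from the recurrence $(1+x+x^2)\sum c_n x^n = 1$):

import math

N = 12
c = [1.0, -1.0]                       # c_0 = 1, c_1 = -1
for n in range(2, N):
    c.append(-c[n - 1] - c[n - 2])    # c_n = -c_{n-1} - c_{n-2}
formula = [2 / math.sqrt(3) * math.sin(2 * math.pi * (n + 1) / 3) for n in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(c, formula))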
REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Lets $\ds{r \equiv -\,{1 \over 2} + {\root{3} \over 2}\,\ic = \expo{2\pi\ic/3}.\quad r\ \mbox{and}\ \bar{r}\quad \mbox{are the roots of}\quad x^{2} + x + 1 = 0}$. \begin{align} {1 \over x^{2} + x + 1} & = {1 \over \pars{x - r}\pars{x - \bar{r}}} = \pars{{1 \over x - r} - {1 \over x - \bar{r}}}{1 \over r - \bar{r}} = \bracks{2\ic\Im\pars{1 \over x - r}}{1 \over 2\ic\Im\pars{r}} \\[5mm] & = -\,{2\root{3} \over 3}\,\Im\pars{\bar{r}\bracks{1 \over 1 - \bar{r}x}} = -\,{2\root{3} \over 3}\,\Im\pars{\bar{r}\sum_{n = 0}^{\infty} \bracks{\bar{r}x}^{n}} \\[5mm] & = -\,{2\root{3} \over 3}\,\sum_{n = 0}^{\infty} x^{n}\,\Im\pars{\bar{r}^{\, n + 1}} = -\,{2\root{3} \over 3}\,\sum_{n = 0}^{\infty} x^{n}\,\Im\pars{\exp\pars{-\,{2\bracks{n + 1}\pi \over 3}\,\ic}} \\[5mm] & =\ \bbox[15px,#ffe,border:2px dashed navy]{\ds{% {2\root{3} \over 3}\,\sum_{n = 0}^{\infty} \sin\pars{2\bracks{n + 1}\pi \over 3}x^{n}}}\qquad\qquad\verts{x} < 1 \end{align}<|endoftext|> TITLE: Differential equation $f''(x)+2 x f(x)f'(x) = 0$ QUESTION [5 upvotes]: I am trying to solve, $ f''(x)+2 x f(x)f'(x) = 0$ with boundary conditions $f(-\infty)=1$ and $f(\infty)=0$. I have found that for instance $f(x) = 3/2 x^{-2}$ but obviously it does not satisfy the proper boundary conditions. Any ideas for a solution? REPLY [2 votes]: Hint: This belongs to a generalized Emden–Fowler equation. $f''(x)+2xf(x)f'(x)=0$ $\dfrac{d^2f}{dx^2}=-2xf\dfrac{df}{dx}$ $\therefore\dfrac{d^2x}{df^2}=2fx\left(\dfrac{dx}{df}\right)^2$ Follow the method in http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=377: Let $\begin{cases}y=\dfrac{df}{dx}\\t=f^2\end{cases}$ , Then $\dfrac{d^2t}{dy^2}=2y^{-1}t^{-\frac{1}{2}}\left(\dfrac{dt}{dy}\right)^3$ $\therefore\dfrac{d^2y}{dt^2}=-2t^{-\frac{1}{2}}y^{-1}$ Which reduces to an Emden–Fowler equation.<|endoftext|> TITLE: Connection between linearly independent vectors and projective points in general position QUESTION [6 upvotes]: I'm trying to understand the connection between the notions of linear independence and general position. I have no background in geometry, so first I'll start with what I know and then I'll pose specific questions, please bear with me and correct me at any point. Let $q$ be a prime power, $d$ be a nonnegative integer, and $V$ be a $(d+1)$-dimensional vector space over the finite field $F_q$ with $q$ elements. For $v \in V$ denote $$[v] = \left\{ cv \mid c \in F_q, c\neq 0 \right\}.$$ Then the collection of symbols $[v]$ can be seen as the points of the $d$-dimensional projective space PG(d,q). Furthermore, for a $(k+1)$-dimensional subspace $S$ of $V$, the set $$\left\{ [s] \mid s \in S \right\}$$ is a $k$-flat of $PG(d,q)$. From what I read in pg. 
19 of these notes I assume that this definition of the notion of "general position" is correct: We say that $m$ points in $PG(d,q)$ are in general position if they are not contained in any $(m-2)$-flat. So, to my understanding, the following statement is correct: The points $[v_1], \ldots, [v_m]$ of $PG(d,q)$ are in general position if and only if the vectors $v_1, \ldots, v_m$ are linearly independent. My proof. The points $[v_1], \ldots, [v_m]$ are in general position iff they are not contained in any $(m-2)$-flat, which is true iff $v_1, \ldots, v_m$ are not contained in any $(m-1)$-dimensional subspace in $V$. This is the same as linear independence of $v_1, \ldots , v_m$. My questions are: Is the above statement correct? Are the preceding definitions accurate? PS. The reason for my confusion is that I've read different definitions for "general position" that I don't understand well, as well as discussions where people explain that general position is not equivalent to linear independence (which I thought my statement above implies). While I'm trying to understand and digest things, it would be very helpful to know if I got the above correctly. REPLY [5 votes]: The definition from the notes is not exactly correct. In a projective space of dimension $d$ (projective dimension), a set of points is usually considered to be in general position if no $d+1$ are contained in a hyperplane. So for example if we consider the Fano plane $\mathrm{PG}(2,2)$, the four points associated with the vectors $$(1,0,0),\ (0,1,0),\ (0,0,1),\ (1,1,1)$$ are in general position, because no $3$ are contained in any line. The definition Massimo gives is rather for affine independence; I'm guessing he referred to Wikipedia for the definition where the writing is slightly confusing. Note that according to Wikipedia, a set of at most $d+1$ points in general position is affinely independent (i.e. the vectors are linearly independent); however we can have sets of more than $d+1$ points that are in general position, in which case these two concepts are different. Note that a collection of linearly independent vectors will define points that are in general position; but the concepts are not equivalent, as shown by the above example (though note that of those four vectors that I gave, any 3 of them form a linearly independent set).<|endoftext|> TITLE: Derivative of multivariate Gaussian PDF with respect to covariance QUESTION [6 upvotes]: While trying to derive the M-step of the EM-algorithm for a mixture of Gaussians, I came across this derivative, which I have no idea how to deal with: $$ \frac{\partial}{\partial \mathbf{\Sigma_k}} \left ( (2\pi)^{-d/2}|\mathbf{\Sigma_k}|^{-1/2}e^{-\frac{1}{2}(x-\mathbf{\mu_k})^T\mathbf{\Sigma_k}^{-1}(x-\mathbf{\mu_k})}\right ) $$ Basically, this is the derivative of the multivariate Gaussian PDF with respect to the covariance matrix. My matrix calculus is not very good - how do I approach this? I've computed the derivative of the logarithm of this PDF before and that was a bit easier because the $|\mathbf{\Sigma_k}|$ and $\mathbf{\Sigma_k}^{-1}$ were in two separate terms that were added/subtracted. But here they are in two terms that are multiplied together.
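A finite-difference check of the identity at play (not part of the original thread; a minimal NumPy sketch, treating all entries of $\Sigma$ as independent, in which case the standard unconstrained matrix derivative is $\partial f/\partial\Sigma = f\cdot\frac12\left(\Sigma^{-1}dd^T\Sigma^{-1} - \Sigma^{-1}\right)$ with $d = x - \mu$):

import numpy as np

def gauss_pdf(x, mu, S):
    d = x - mu
    k = len(x)
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi)**k * np.linalg.det(S))

rng = np.random.default_rng(0)
x, mu = rng.normal(size=3), rng.normal(size=3)
A = rng.normal(size=(3, 3))
S = A @ A.T + 3 * np.eye(3)                    # a symmetric positive-definite Sigma

Si = np.linalg.inv(S)
d = x - mu
analytic = gauss_pdf(x, mu, S) * 0.5 * (Si @ np.outer(d, d) @ Si - Si)

eps, numeric = 1e-6, np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        Sp = S.copy(); Sp[i, j] += eps         # perturb one entry at a time
        numeric[i, j] = (gauss_pdf(x, mu, Sp) - gauss_pdf(x, mu, S)) / eps
print(np.max(np.abs(analytic - numeric)))      # tiny, roughly the forward-difference error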
REPLY [3 votes]: I've found the answer and I'm posting it for posterity. I mentioned in the question that computing the derivative of the logarithm of the PDF was easier. It turns out that this can be used to compute the derivative of the PDF itself: $$ \frac{\partial \ln (f)}{\partial \mathbf{\Sigma}_k} = \frac{1}{f} \frac{\partial f}{\partial \mathbf{\Sigma}_k}\\ \Rightarrow \frac{\partial f}{\partial \mathbf{\Sigma}_k} = f \cdot\frac{\partial \ln (f)}{\partial \mathbf{\Sigma}_k} $$ Also, it turns out that taking the derivative of the PDF with respect to $\mathbf{\Sigma}^{-1}$ is easier and leads to the same answer.<|endoftext|> TITLE: Singapore math olympiad Trigonometry question: If $\sqrt{9-8\sin 50^\circ} = a+b\csc 50^\circ$, then $ab=$? QUESTION [5 upvotes]: $$\text{If}\; \sqrt{9-8\sin 50^\circ} = a+b\csc 50^\circ\text{, then}\; ab=\text{?}$$ $\bf{My\; Try::}$ We can write the above as $$\sin 50^\circ\sqrt{9-8\sin 50^\circ} = a\sin 50^\circ+b$$ Now for the left side, $$\sin 50^\circ\sqrt{9-8\sin 50^\circ} = \sqrt{9\sin^250^\circ-8\sin^350^{\circ}}$$ Now how can I solve it after that? Help required, thanks! REPLY [5 votes]: If you use the formula for the triple angle: $-8\sin^3(50)=2\sin(150)-6\sin(50)=1-6\sin(50)$, so your last square root becomes $\sqrt{9\sin^2(50)-6\sin(50)+1}=\sqrt{(3\sin(50)-1)^2}=3\sin(50)-1$ as the last expression is positive. So: $3\sin(50)-1=a\sin(50)+b$ Now if $a$ and $b$ are rational/integer, we have $a=3$, $b=-1$ so $ab=-3$, but in the case of real numbers there is no unique solution. For example if $a=0,b=3\sin(50)-1$ we have $ab=0$. In any case, the last equation is much simpler to work out than the first one :) REPLY [2 votes]: $$\sqrt{9-8\sin 50^\circ}$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-8\sin^350^\circ}$$ $$(\text{using }\sin^2x=1-\cos^2x)$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-8\sin50^\circ(1-\cos^250^\circ)}$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-8\sin50^\circ+8\sin50^\circ\cos^250^\circ}$$ $$(\text{using }2\sin x\cos x=\sin2x)$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-8\sin50^\circ+4\sin100^\circ\cos50^\circ}$$ $$(\text{using }2\sin x\cos y=\sin(x+y)+\sin(x-y))$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-8\sin50^\circ+2(\sin150^\circ+\sin50^\circ)}$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-6\sin50^\circ+2\sin150^\circ}$$ $$(\text{using }\sin150^\circ=\sin30^\circ=\frac{1}{2})$$ $$=\csc50^\circ\sqrt{9\sin^250^\circ-6\sin50^\circ+1}$$ $$=\csc50^\circ\sqrt{(3\sin50^\circ-1)^2}$$ $$\text{(Taking the positive root.)}$$ $$=\csc50^\circ(3\sin50^\circ-1)$$ $$=3-\csc50^\circ$$ $$\text{So }a=3\text{ and }b=-1$$<|endoftext|> TITLE: Suppose that $p$ is a prime with $p \equiv 7 \pmod 8$. If $t = \frac{p - 1}{2}$, prove that $2^t \equiv 1 \pmod p$ QUESTION [5 upvotes]: Suppose that $p$ is a prime with $p \equiv 7 \pmod 8$. If $t = \frac{p - 1}{2}$, prove that $$2^t \equiv 1 \pmod p.$$ Any hints will be appreciated. Thanks so much. REPLY [2 votes]: Obviously $p$ is odd. It is well known that $$\left(\dfrac 2p\right)=(-1)^{\frac{p^2-1}{8}}$$ where $\left(\dfrac ap\right)$ is the Legendre symbol. On the other hand $$p=8m+7\Rightarrow p^2-1=64m^2+112m+48\Rightarrow\frac{p^2-1}{8}=8m^2+14m+6\in 2\Bbb Z$$ It follows (because $\frac{p^2-1}{8}$ is even) $$\left(\dfrac 2p\right)= 1$$ which means that $2$ is a square modulo $p$. Thus $$\left(2\right)^{\frac{p-1}{2}}=\left(x^2\right)^{\frac{p-1}{2}}=x^{p-1}\equiv 1\pmod p$$<|endoftext|> TITLE: Is our interest in $\mathbb{R}$ "historical"? QUESTION [13 upvotes]: In the process of a topology course, it occurred to me that a number of concepts are defined with reference to $\mathbb{R}$ and standard subsets thereof.
For instance, we consider metrics, which are of course maps into $\mathbb{R}$ (specifically the non-negatives, but still). We devote a chapter of a book to the concept of path-connectedness, i.e. a way of expressing (part of) a space as a continuous image of a closed real interval. We define separation axioms according to the ability to separate particular sets with continuous maps into closed intervals. A lot of concepts in topology are defined with reference to particular subspaces of $\mathbb{R}$. But move to other fields, say measure theory, and the entire branch is defined by functions into $\mathbb{R}$. I suppose my question is: To what extent is our interest in objects "founded on" $\mathbb{R}$ simply a matter of our massive familiarity with $\mathbb{R}$, and to what extent do these concepts draw on characteristics really constitutive of $\mathbb{R}$? Put another way, do we tend to consider real-based objects because the reals are "intuitive", or because $\mathbb{R}$ (or its relevant subsets) has particularly special properties that we don't really find in other (truly distinct*) objects? *By "truly distinct", I mean to preempt answers along the lines of, "Well you can construct this thing which isn't the set $\mathbb{R}$ per se, but is still for all intents and purposes the exact same thing as $\mathbb{R}$." REPLY [4 votes]: I think you can definitely try and make a distinction between fields of mathematics that are "founded on $\mathbb{R}$" in a strong sense and those that aren't (but in which our familiarity with $\mathbb{R}$ and its cousins might play and important role in terms of providing examples, intuition, driving questions, etc). Consider the following fields: Set theory. I would argue that the real numbers play no foundational role in set theory. You can define and investigate the notions of cardinality, partial orders, etc without even bothering to construct the real numbers. Of course one can say that our familiarity and interest in the real numbers is the reason set theory was created in the first place and problems like the continuum hypothesis (which involve the real numbers) served as important foundational problems in the field, but from a mathematical point of view, I would say that the interesting set-theoretic questions that involve the real numbers can be viewed as "applications" of set theory to specific situations and that the core of set theory has nothing to do with the real numbers. Group theory or any abstract algebra field. Again, one can investigate the abstract properties of groups without knowing anything about the real numbers. The familiarity with the real numbers can give us important examples of groups (such as Lie groups) but again, results about such groups can be viewed as applications of group theory to $\mathbb{R}$-founded objects. Smooth manifolds. Here, we are working with objects that are modelled locally on $\mathbb{R}^n$ and we are generalizing our ability to do calculus over the reals to this more abstract setting. This is a field that in my opinion is "strongly founded on $\mathbb{R}$". You can of course try to abstract away the properties that make the theory work (what do you need in order to do calculus, etc) and people have done that resulting in fields that are much less $\mathbb{R}$-founded but investigating smooth manifolds remains very $\mathbb{R}$-founded. 
Now, regarding general topology, I would like to argue that while it has many applications to objects that are "founded on $\mathbb{R}$" and fields that are founded on $\mathbb{R}$ in a strong sense (such as manifold theory), it is not, per se, a field that is founded on $\mathbb{R}$, and the real numbers don't play a particularly important role. The basic players in topology are defined using an abstract family of axioms (very much like in group theory). By imposing additional restrictions (separation axioms, again, defined in terms of the basic operations), we can single out specific classes of topological spaces. For example, one might define a family of spaces that are regular, Hausdorff and have a countably locally finite basis and investigate their properties. It turns out by the Nagata-Smirnov metrization theorem that such spaces are precisely the topological spaces that admit a metric, but a priori we can investigate the topological properties of such spaces without introducing a metric at all. Choosing a metric on such a space can be considered as introducing "auxiliary" data that helps us (as we are familiar with the real numbers and properties of distance in Euclidean spaces) to analyze the family and describe its topological properties. Regarding path-connectedness, the amazing answer of Eric to this question shows that the notion of path-connectedness can also be defined without introducing the interval $[0,1]$ and using the interval to define a path can be considered as introducing "auxiliary" data that helps us to visualize, give intuition and analyze path-connected spaces. Of course, the history went the other way and we care about paths because we visualize them as generalizations of paths in a Euclidean space and we care about separation axioms because we care about metric spaces and want to understand which properties that hold in metric spaces can be "abstracted away" but once the abstraction has been done, the real numbers stop playing a foundational role.<|endoftext|> TITLE: Partial sum of the harmonic series between two consecutive Fibonacci numbers QUESTION [12 upvotes]: I was playing around with some calculations and I noticed that the partial sum of the harmonic series: $$s_n=\sum_{k=F_n}^{F_{n+1}}\frac 1 k$$ where $F_n$ and $F_{n+1}$ are two consecutive Fibonacci numbers, has some interesting properties. It is close to $\frac 1 2$ for small values of $n$ and it seems to converge to a value less than $0.5$ for large $n$. This is what I've got so far: $$\lim_{n\to\infty} s_n\approx 0.481212$$ I googled a bit to see if there are some theorems or resources for this, and found nothing. I suspect that the series might converge to a smaller number and I may have reached some computational limitations which led to the conclusion that the limit is close to $\frac 1 2$. So my questions are: Can we show that the series converges to a non-zero value? In case the first answer is yes, can the limit be expressed in a closed form? REPLY [3 votes]: Way late to the party, but here's a general result, and an elementary derivation: Claim: Let $(a_n)$ and $(b_n)$ be sequences of positive integers with $a_n\to\infty$ and $\lim_{n\to\infty}\frac{b_n}{a_n}=c$.
Then $$ \lim_{n\to\infty}\sum_{k=a_n}^{b_n}\frac1k=\log c.$$ Proof: Start with the inequalities $$\frac{x-1}x\le\log x\le x-1.$$ Substitute $x=(k+1)/k$ into the right inequality and $x=k/(k-1)$ into the left, obtaining $$\log(k+1)-\log k\le\frac1k\le\log k-\log(k-1).$$ Sum from $k=a_n$ to $k=b_n$, and use telescoping to find $$ \log\frac{b_n+1}{a_n}\le\sum_{a_n}^{b_n}\frac1k\le\log\frac{b_n}{a_n-1}.$$ Finally, take the limit as $n\to\infty$. Now apply this result with $a_n=F_n$ and $b_n=F_{n+1}$ and use the fact that $F_{n+1}/F_n$ tends to the golden ratio $\phi$.<|endoftext|> TITLE: Every polynomial with real coefficients is the sum of cubes of three polynomials QUESTION [44 upvotes]: How to prove that every polynomial with real coefficients is the sum of three polynomials raised to the 3rd degree? Formally the statement is: $\forall f\in\mathbb{R}[x]\quad \exists g,h,p\in\mathbb{R}[x]\quad f=g^3+h^3+p^3$ REPLY [80 votes]: We have that the following identity holds $$(x+1)^3+2(-x)^3+(x-1)^3=6x.$$ Hence $$\left(\frac{f(x)+1}{6^{1/3}}\right)^{3}+\left(\frac{-f(x)}{3^{1/3}}\right)^{3}+ \left(\frac{f(x)-1}{6^{1/3}}\right)^{3}=f(x).$$<|endoftext|> TITLE: What is the general formula for the Area of a 2-D surface in a 3-D manifold? QUESTION [5 upvotes]: Suppose we have a flat 3-D manifold $ ds^2=\delta_{ab}dx^{a}dx^{b},$ which contains a 2-D surface given by a parametric relationship $r^{a}\left(u,v\right)=x^{a}\left(u,v\right).$ Where $ u $ and $ v $ are two independent parameters. I know that the area of this surface is the magnitude of a 3-D vector $$ dA_{a}=\varepsilon_{abc}\dfrac{\partial x^{b}}{\partial u}\dfrac{\partial x^{c}}{\partial v} du dv $$ $$dA^2=\delta^{ef}dA_{e}dA_{f}$$ How can I generalize this formula to an arbitrary 3-D manifold given by $ ds^2=g_{ab}dx^{a}dx^{b}$, where: $$ dA^2=g^{ab}dA_{b}dA_{a}? $$ That is, is it possible to write $$ dA_{a}=\kappa_{abc}\dfrac{\partial x^{b}}{\partial u}\dfrac{\partial x^{c}}{\partial v} du dv $$ and, if so, what is $\kappa_{abc}$ ? REPLY [3 votes]: @Solenodon Paradoxus may have already answered your question. But let me do it in more detail. Suppose we have a 3-dim manifold $\mathfrak{M}_3$ with metric function $g_{\alpha\beta}$. Note that a manifold does not contain any vectors itself. Vectors emerge in tangent spaces. But we will not be so specific. So we will say that $\mathfrak{M}_3$ contains vectors, a metric, etc., bearing in mind that these elements belong to its ($\mathfrak{M}_3$'s) tangent space. OK. Let us suppose also that we have a 2-dim manifold, say $\mathfrak{M}_2\subset\mathfrak{M}_3$. But $\mathfrak{M}_2$ can be considered individually with its own metric, say $G_{\mu\nu}$. So you are asking the following question: How do I know $G_{\mu\nu}$ if I know $g_{\alpha\beta}$? Suppose $\mathfrak{M}_2$ represents some surface $M_2$ in your space. Or, like you say, $\mathfrak{M}_2$ can be parametrized by two variables $u, v$ (in the rest we will call them $u^1$ and $u^2$) in the following way: $$x_1 = x_1(u^1, u^2),\, x_2 = x_2(u^1, u^2),\,x_3 = x_3(u^1, u^2);$$ If we planned to calculate the arc length $ds^2$ on our surface $M_2$, we could do it in exactly the same way as we would if this arc belonged to $\mathfrak{M}_3$, but with one special feature: $\vec{x}$ is a function of $u$ and $v$. Let's do this.
$$ds^2 = g_{\alpha\beta}(u^1,u^2)\,dx^\alpha(u^1, u^2)\, dx^\beta(u^1, u^2) = g_{\alpha\beta}\frac{\partial x^\alpha}{\partial u^\mu}\frac{\partial x^\beta}{\partial u^\nu}du^\mu du^\nu$$ But on the other hand $$ds^2 = G_{\mu\nu}du^\mu du^\nu$$ So eventually we have $$G_{\mu\nu} = g_{\alpha\beta}\frac{\partial x^\alpha}{\partial u^\mu}\frac{\partial x^\beta}{\partial u^\nu}$$<|endoftext|> TITLE: Terminal objects as "nullary" products QUESTION [7 upvotes]: I read something weird in my category theory book (Awodey p 47). " Observe also that a terminal object is a nullary product, that is, a product of no objects: Given no objects, there is an object $1$ with no maps, and given any other object $X$ and no maps, there is a unique arrow: $$!:X\to 1$$ making nothing further commute." Could anyone give a hint about what this means? I mean "given no objects, there is an object..?" Thank you REPLY [16 votes]: To form a product, you give me $n$ objects, $A_1,\dots,A_n$, and I give you back an object $A_1\times\dots\times A_n$, together with $n$ maps $\pi_i\colon A_1\times\dots\times A_n\to A_i$ (one to each of the $A_i$) satisfying the universal property of the product. So what happens if $n=0$? Then you give me $0$ objects, and I give you back an object which we call $1$, together with $0$ maps $\pi_i$ (one to each of the $A_i$, of which there aren't any), satisfying the universal property of the product. What does the universal property say in this case? For any $X$ given together with $0$ maps $f_i$ (one to each of the $A_i$, of which there aren't any), there is a unique map $!\colon X\to 1$ making all of the triangles commute ($\pi_i\circ ! = f_i$ for all $i$, of which there aren't any). Removing the vacuous conditions from the definition, we see that the empty product is an object $1$ such that for every object $X$ there is a unique map $!\colon X\to 1$, i.e. $1$ is a terminal object. REPLY [2 votes]: Well, what is a normal product (say of two objects, $A$ and $B$)? It is an object $A\times B$ with maps $\pi_1:A\times B\to A$, $\pi_2:A\times B\to B$ such that given $C$ with maps $f:C\to A$ and $g:C\to B$, there is a unique arrow $h:C\to A\times B$ making whatever diagram commute. Well, look at our situation with the terminal object $1$. If you (try to) think of it as a product, you will see there is no projection map included (i.e., it is a "product" of nothing). Now, look at the situation in the paragraph above. We need to have an object $C$ with arrows to the objects we took a "product" over. Well, we didn't take a product over any objects, so there do not need to be any arrows, just an object $C$. Then, because $1$ is terminal there is a unique map $C\to1$, which makes the "diagram" commute (there really isn't a diagram though, just the map $C\to 1$). Hence, $1$ satisfies the requirements of being a "product" only with no actual objects, so it is an "empty product".<|endoftext|> TITLE: If $F=m\dfrac{dv}{dt}$ why is it incorrect to write $F\,dt=m\,dv$? QUESTION [6 upvotes]: My university lecturer told me that: If $$F=m\dfrac{dv}{dt}$$ it's incorrect to write $$F\,dt=m\,dv\tag{1}$$ but it is okay to write $$\int F\,dt=\int m\,dv$$ for Newton's second law. But he never explained why $(1)$ is mathematically incorrect. My high school teacher told me that: Derivatives with respect to one independent variable can be treated as fractions. So this implies that $(1)$ is valid. This is clearly a contradiction as my high school teacher and university lecturer cannot both be correct. Or can they?
Another example of this misuse of derivatives uses the specific heat capacity $c$ which is defined to be $$c=\frac{1}{m}\frac{\delta Q}{dT}\tag{2}$$ Now in the same vein another lecturer wrote that $$\delta Q=mc\,dT$$ by rearranging $(2)$. Another contradiction of the first lecturer's rule. Is this really allowed? And if it's invalid, which mathematical 'rule' has been violated here? EDIT: In my question here I have used formulae that belong to Physics but these were just simple examples to illustrate the point. My question is much more general and applies to any differential equation in mathematics involving the treatment of derivatives with respect to one independent variable as fractions. Specifically: why is it 'strictly' incorrect to rearrange them without taking the integral of both sides? REPLY [3 votes]: The difference is not so much between university lecturers and high school teachers as between mathematicians and physicists. Some mathematicians tend to frown on certain procedures that are perfectly acceptable to physicists. I was careful to write "some" because mathematicians familiar with Robinson's framework with infinitesimals do assign a perfectly rigorous meaning to formulas like $F\, dt = m\, dv$; see Keisler's beautiful textbook Elementary Calculus for details.<|endoftext|> TITLE: Why can't the second fundamental theorem of calculus be proved in just two lines? QUESTION [71 upvotes]: The second fundamental theorem of calculus states that if $f$ is continuous on $[a,b]$ and $F$ is an antiderivative of $f$ on the same interval, then: $$\int_a^b f(x) dx= F(b)-F(a).$$ The proof of this theorem in both my textbook and Wikipedia is pretty complex and long. It uses the mean value theorem of integration and the limit of an infinite Riemann summation. But I tried coming up with a proof and it was barely two lines. Here it goes: Since $F$ is an antiderivative of $f$, we have $\frac{dF}{dx} = f(x)$. Multiplying both sides by $dx$, we obtain $dF = f(x)dx$. Now, $dF$ is just the small change in $F$ and $f(x)dx$ represents the infinitesimal area bounded by the curve and the $x$ axis. So integrating both sides, we arrive at the required result. First, what is wrong with my proof? And if it is so simple, what is so fundamental about it? Multiplying the equation by $dx$ should be an obvious step to find infinitesimal area, right? Why is the Wikipedia (and textbook) proof so long? I have also read that the connection between differential and integral calculus is not obvious, making the fundamental theorem a surprising result. But to me, it seems trivial. So, what were the wrong assumptions I made in the proof and what am I taking for granted? It should be noted that I have already learnt differential and integral calculus and I am being taught the "fundamental theorem" at the end and not as the first link between the two realms of calculus. In response to the answers below: If expressing infinitesimals on their own is not "rigorous" enough to be used in a proof, then what more sense do they make when written along with an integral sign, or even in the notation for the derivative? The integral is just the continuous sum of infinitesimals, correct? And the derivative is just the quotient of two. How else should these be defined or intuitively explained? It seems to me that one needs to learn an entirely new part of mathematics before diving into differential or integral calculus. Plus we do this sort of thing in physics all the time.
REPLY [3 votes]: When you see infinitesimals ($dx, dy$) in an expression, it is helpful to think of them as small positive numbers ($\Delta x, \Delta y$), together with the understanding that you are not finished until you take the limit (i.e. where $\Delta x$ goes to zero). This is basically what we do in calculus proofs--we work with deltas and then take the limit of the resulting expression. Before taking the limit, we are just working with numeric quantities. So, in some cases, there may be common delta factors in the numerator and denominator that are both going to zero at the same rate and can be cancelled out. If you can get the expression reduced to one where setting the delta values to zero will not lead to a singularity or an indeterminate expression, then you can safely replace them with zero to take the limit. Example: $$\frac{d}{dx}x^2 = \frac{d(x^2)}{dx}$$ $$= \lim_{\Delta x\to 0}\frac{(x+\Delta x)^2 - x^2}{\Delta x}$$ $$= \lim_{\Delta x\to 0}\frac{(x^2 + 2x\Delta x + \Delta x^2) - x^2}{\Delta x}$$ $$= \lim_{\Delta x\to 0}\frac{x^2 + (2x\Delta x + \Delta x^2) - x^2}{\Delta x}$$ $$= \lim_{\Delta x\to 0}\frac{2x\Delta x + \Delta x^2}{\Delta x}$$ $$= \lim_{\Delta x\to 0} ( 2x + \Delta x )$$ $$= 2x$$ As long as $\Delta x$ is not zero, you can divide by $\Delta x$, which allows you to factor the common $\Delta x$ from numerator and denominator. In the remaining expression, $\Delta x$ is just one term of the sum, and now, if it goes to zero, it can simply be dropped. This may help explain why "multiplying by $dx$" seems to work, since, before you actually take the limit, it is valid to multiply by $\Delta x$. But at some point, you need to take the limit, and the pivotal question is whether you can do that without having to perform an invalid operation such as dividing by zero. Note that you can always turn a false equation, such as $3=5$, into a true one by multiplying both sides by zero, but it doesn't prove anything about the original expression to do that. So "multiplying both sides by $dx$" does not necessarily accomplish anything meaningful.<|endoftext|> TITLE: There exists no zero-order or first-order theory for connected graphs QUESTION [6 upvotes]: Prove that no zero-order theory (i.e. propositional calculus, without quantification) or first-order theory can describe the "connected graph" (i.e. from any point one can reach every other point in finitely many steps). The only weapon I know in these situations is the compactness theorem: so I would like to prove that 1) such a theory is satisfiable (obvious); 2) enlarged with some (possibly infinite) additional formulas, it is finitely satisfiable, hence satisfiable; 3) these adjoined formulas, when satisfied, give a contradiction. But I do not think that this way works, since, on the contrary, good connectedness properties increase (and don't decrease) if the graph is bigger, i.e. if one adds more points and arcs. So this doesn't seem, heuristically, a good way. Can someone tell me about some other solutions or possible attempts? Thank you in advance. REPLY [6 votes]: Assume you have a first-order theory $T$ in the language of graphs such that the models of $T$ are precisely the connected graphs. (The language of graphs has one two-place relation symbol $R,$ where $R(x,y)$ is intended to mean that there is an edge between node $x$ and node $y.)$ Add two new constant symbols $c$ and $d$ to the language.
For each natural number $n\ge 2,$ let $\psi_n(x,y)$ be the following formula with two free variables: $\lnot(\exists z_1)\dots(\exists z_n)\big( (z_1=x) \wedge (z_n=y) \wedge \bigwedge_{1\le k \lt n} (z_k\,R\,z_{k+1})\big).$ Let $T'$ be the theory $T\cup\{c\ne d\}\cup\{\psi_n(c,d) \mid n\ge 2\}.$ We can see that $T'$ is finitely satisfiable, as follows. If $\Sigma$ is any finite subset of $T',$ let $m$ be the least natural number greater than $0$ and greater than or equal to every $n$ for which $\psi_n\in\Sigma.$ Define a graph $H$ by specifying that $H$ has $m+1$ nodes, which we'll number from $0$ to $m,$ with an edge connecting node $k$ to node $k+1$ (for $0\le k \le m-1),$ and with no other edges. The constant symbol $c$ is interpreted as node $0,$ and the constant symbol $d$ is interpreted as node $m.$ $H$ is a connected graph, so it is a model of $T.$ Since $m+1\ge 2,$ $H\models c\ne d.$ Finally, every connected path from node $0$ to node $m$ has at least $m+1$ nodes in it (including the endpoints), so $H\models\psi_n$ for all $n\le m,$ and hence $H\models\psi_n$ for all $\psi_n\in\Sigma.$ It follows that $H$ is a model of $\Sigma.$ Since $T'$ is finitely satisfiable, compactness tells us that $T'$ is satisfiable. Let $G$ be a model of $T'.$ Since $T'$ contains $T,$ $G$ must be a connected graph. But the interpretations of $c$ and $d$ in $G$ cannot be connected by a path of any finite length $n,$ because $G$ satisfies $\psi_n.$<|endoftext|> TITLE: Invariant subspace of cyclic space is cyclic QUESTION [5 upvotes]: Let $V$ be a finite dimensional vector space and let $T:V\rightarrow V$ be a cyclic linear operator, that is, there exists $v \in V$ such that $\{v, Tv, T^2v, \dots\}$ generates $V$. Let $W\subset V$ be a $T$-invariant subspace, that is, $T[W]\subset W$. I'm trying to see that $T|W$ is also $T|W$-cyclic, that is, there exists a $w \in W$ such that $W=\langle w, Tw, T^2w, \dots\rangle$. REPLY [5 votes]: Let's write $W=\langle w_{1},\dots,w_{r}\rangle$. Since $v$ is a cyclic vector of $V$, there exist $p_{1},\dots,p_{r}\in K[X]$ such that $w_{i}=p_{i}(T)v$. We have: $W=\langle w_{1},\dots,w_{r}\rangle=\{(q_{1}p_{1}+\dots +q_{r}p_{r})(T)v:q_{1},\dots,q_{r}\in K[X]\}$ (one containment is obvious; the other holds because $W$ is $T$-invariant). Now, $\{q_{1}p_{1}+\dots+q_{r}p_{r}:q_{1},\dots,q_{r}\in K[X]\}=\langle d\rangle_{K[X]}$, where $d=\gcd(p_{1},\dots,p_{r})$. Then, $W=\{q(T)(d(T)v):q\in K[X]\}$, and $d(T)v$ is a cyclic generator of $W$<|endoftext|> TITLE: What's the difference between geometric, exterior and multilinear algebra? QUESTION [19 upvotes]: I've studied what I think is geometric algebra, but can't seem to understand the difference between it and exterior and multilinear algebra. And is it linked to Clifford and Grassmann algebras in any way? REPLY [22 votes]: Exterior algebra Exterior algebra defines an antisymmetric wedge product. An example of the wedge product of two unit vectors, called a two-form, is $$\mathbf{e}_1 \wedge \mathbf{e}_2 = -\mathbf{e}_2 \wedge \mathbf{e}_1.$$ An example of a wedge product of three (unit) vectors, a three-form, is $$\begin{aligned}\mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3 &= -\mathbf{e}_2 \wedge \mathbf{e}_1 \wedge \mathbf{e}_3 \\ &= \mathbf{e}_2 \wedge \mathbf{e}_3 \wedge \mathbf{e}_1 \\ &= -\mathbf{e}_3 \wedge \mathbf{e}_2 \wedge \mathbf{e}_1.\end{aligned}$$ A consequence of this antisymmetry is that any wedge product where one of the wedged vectors is collinear with another is zero.
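A small numeric illustration of this antisymmetry (an editorial sketch, not part of the original answer; it models the wedge $u \wedge v$ of two vectors in $\mathbb{R}^3$ by the antisymmetrized outer product, one standard coordinate representation of a two-form):

import numpy as np

def wedge(u, v):
    # Coordinate matrix of u ^ v: the antisymmetrized outer product.
    return np.outer(u, v) - np.outer(v, u)

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.0, 1.0, 5.0])

print(np.allclose(wedge(u, v), -wedge(v, u)))  # True: u ^ v = -(v ^ u)
print(np.allclose(wedge(u, 2 * u), 0.0))       # True: collinear vectors wedge to zero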
Exterior algebra also has the concept of duality, which provides a mapping between $k$-forms and $(N-k)$-forms, where $N$ is the dimension of the underlying vector space. For example, in a three dimensional Euclidean space the dual of the two-form $ \mathbf{e}_1 \wedge \mathbf{e}_2 $, denoted $ *\left( { \mathbf{e}_1 \wedge \mathbf{e}_2} \right) $, is the quantity satisfying $$*\left( {\mathbf{e}_1 \wedge \mathbf{e}_2} \right) \wedge \left( { \mathbf{e}_1 \wedge \mathbf{e}_2} \right) = \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_3,$$ so $$*\left( {\mathbf{e}_1 \wedge \mathbf{e}_2} \right) = \mathbf{e}_3.$$ I believe that Grassmann algebras have the same structure as exterior algebras, but also define a regressive product related to the exterior algebra dual. Geometric algebra In an exterior algebra, one can add $k$-forms to other $k$-forms, but would not add forms of different rank. This restriction is relaxed in geometric algebra (GA), where a quantity such as $$1 + 2 \mathbf{e}_1 + 3 \mathbf{e}_2 \wedge \mathbf{e}_4 + 5 \mathbf{e}_1 \wedge \mathbf{e}_2 \wedge \mathbf{e}_4,$$ is perfectly well formed. The geometric algebra is built up of products of vectors, where the vector product is defined as an associative product $$\mathbf{a} (\mathbf{b} \mathbf{c}) = (\mathbf{a} \mathbf{b}) \mathbf{c} = \mathbf{a} \mathbf{b} \mathbf{c},$$ and where the product of a vector with itself is defined as the squared length of that vector $$\mathbf{a} \mathbf{a} = \mathbf{a} \cdot \mathbf{a} = \left\lvert {\mathbf{a}} \right\rvert^2.$$ In a Euclidean space such a length is always positive, but mixed-sign metrics (such as that of the Minkowski space used in special relativity) are also allowed. The product of two non-collinear vectors can be factored as $$\mathbf{a} \mathbf{b} = \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} + \mathbf{b} \mathbf{a} } \right) + \frac{1}{{2}} \left( { \mathbf{a} \mathbf{b} - \mathbf{b} \mathbf{a} } \right).$$ The first (symmetric) term can be identified with the dot-product, whereas the second completely antisymmetric term can be identified with the wedge product, so this complete vector product is denoted $$\mathbf{a} \mathbf{b} = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \wedge \mathbf{b}.$$ This is one of the simplest examples of what is called a multivector in GA, containing the sum of a scalar (grade zero) and a bivector (grade two). There are a number of other consequences of the product axioms of GA. One such consequence is that the product of two perpendicular vectors is antisymmetric, and that any unit vector has a unit square. A number of specific algebraic structures can be represented with geometric algebras. For example, one can identify the algebra spanned by a scalar and unit bivector, such as $$\text{span} \left\{ { 1, \mathbf{e}_1 \mathbf{e}_2 } \right\}$$ with complex numbers. This is because any unit bivector of this form (in a Euclidean space) squares to negative unity $$\begin{aligned}(\mathbf{e}_1 \mathbf{e}_2)^2 &= (\mathbf{e}_1 \mathbf{e}_2)(\mathbf{e}_1 \mathbf{e}_2) \\ &= \mathbf{e}_1 (\mathbf{e}_2 \mathbf{e}_1) \mathbf{e}_2 \\ &= -\mathbf{e}_1 (\mathbf{e}_1 \mathbf{e}_2) \mathbf{e}_2 \\ &= -(\mathbf{e}_1 \mathbf{e}_1) (\mathbf{e}_2 \mathbf{e}_2) \\ &= - (1)(1) \\ &= -1.\end{aligned}$$ Other examples of algebraic structures that can have GA representations include quaternions, the Pauli (spin) algebra of quantum mechanics, and the Dirac algebra from QED.
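These algebraic rules can be checked concretely in a matrix model. The following sketch (an editorial addition, not part of the original answer) uses the standard representation of the Euclidean $\mathbb{R}^3$ basis vectors by the Pauli matrices, which obey exactly the GA product axioms, to verify the unit square, the anticommutation of perpendicular vectors, and that the unit bivector $\mathbf{e}_1\mathbf{e}_2$ squares to $-1$:

import numpy as np

# Pauli matrices as a matrix representation of the orthonormal vectors e1, e2, e3.
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)
one = np.eye(2)

print(np.allclose(e1 @ e1, one))                 # unit square: e1 e1 = 1
print(np.allclose(e1 @ e2, -(e2 @ e1)))          # perpendicular vectors anticommute
print(np.allclose((e1 @ e2) @ (e1 @ e2), -one))  # the unit bivector squares to -1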
The GA representation of dual vectors is through multiplication by a (unit) pseudoscalar (an ordered product of all the unit vectors of the space), often denoted $ I $, for the vector space. For example, multiplication by the three dimensional pseudoscalar (with a sign that depends on the grade) has the duality property illustrated in the exterior algebra duality example $$\begin{aligned}I \mathbf{e}_1&=\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_1 \\ &=-\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_1 \mathbf{e}_3 \\ &=\mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_2 \mathbf{e}_3,\end{aligned}$$ $$\begin{aligned}-I\mathbf{e}_2 \mathbf{e}_3&=- \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_2 \mathbf{e}_3 \\ &=\mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_2 \mathbf{e}_3 \mathbf{e}_3 \\ &=\mathbf{e}_1.\end{aligned}$$ A number of fundamental geometric operations, such as projection, rotation, and reflection, can all be represented using GA multivector product operations. Clifford algebra In GA the basis vectors for the space are typically real valued vectors. Complex valued vectors have uses in GA (e.g. frequency domain representations of vectors in electrodynamics), but the underlying basis for the vector space is still real valued (i.e. $\text{span} \left\{ { \mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3 } \right\}$ ). Clifford algebras provide a further generalization, allowing those basis vectors to reside in a complex vector space, with suitable modifications of the vector product rules. Multilinear All of these algebras are linear algebras. For example, in an exterior algebra $$\mathbf{a} \wedge (\alpha \mathbf{b} + \beta \mathbf{c}) = \alpha \mathbf{a} \wedge \mathbf{b} + \beta \mathbf{a} \wedge \mathbf{c},$$ $$(\alpha \mathbf{b} + \beta \mathbf{c})\wedge \mathbf{a} = \alpha \mathbf{b} \wedge \mathbf{a} + \beta \mathbf{c} \wedge \mathbf{a},$$ or in GA $$\mathbf{a} \left( { \alpha \mathbf{b} + \beta \mathbf{c} \mathbf{d} } \right)= \alpha \mathbf{a} \mathbf{b} + \beta \mathbf{a} \mathbf{c} \mathbf{d}.$$ $$\left( { \alpha \mathbf{b} + \beta \mathbf{c} \mathbf{d} } \right) \mathbf{a} = \alpha \mathbf{b} \mathbf{a} + \beta \mathbf{c} \mathbf{d}\mathbf{a}.$$<|endoftext|> TITLE: Models of ZF with a Russell Socks Set QUESTION [5 upvotes]: Define a Russell socks set as a countable set of (pairwise disjoint) pairs such that no infinite subset has a choice function. Of course, ZFC proves that no such set exists (as the axiom of choice is precisely the statement that every set has a choice function). On the other hand, it is known to be consistent with ZF that such a set exists. Many wonderful and entertaining consequences of such a set existing in a model of ZF can be found in papers such as 'On the number of Russell's socks [...]' by Herrlich and Tachtsis or in Ethan Thomas' undergraduate thesis on the subject. Neither of these papers explicitly constructs a model of ZF containing a Russell socks set. Are any of the more common models such as Cohen's known to contain such a set? Is it easy to construct a model containing one? Any reference would be much appreciated! REPLY [5 votes]: Yes. Cohen's second model of $\lnot\sf AC$ is a model in which there is a Russell set. The proof can be found in Jech, "The Axiom of Choice" in Chapter 5, section 4. While Jech does not include the statement that the resulting set is a Russell set, it is implicit in the proof of Lemma 5.19.
Additionally, Fraenkel's second model of $\sf ZFA$ has a Russell set, and in the same book by Jech, he provides "transfer theorems" for transferring some results from models with atoms to models of $\sf ZF$ (without atoms). These include the existence of a Russell set as well. Other transfer theorems (Pincus, Hall) are equally suitable for the job also.<|endoftext|> TITLE: Preorders vs partial orders - Clarification QUESTION [5 upvotes]: A binary relation is a preorder if it is reflexive and transitive. A binary relation is a partial order if it is reflexive, transitive and antisymmetric. Does that mean that all binary relations that are a preorder are also automatically a partial order as well? In other words is a binary relation a preorder if it's only reflexive and transitive and nothing else? Thanks for your help. REPLY [2 votes]: A pre-order $a\lesssim b$ is a binary relation, on a set $S,$ that is reflexive and transitive. That is $\lesssim $ satisfies (i) $\lesssim $ is reflexive, i.e., $a\lesssim a$ for all $a\in S,$ and (ii) $\lesssim $ is transitive, i.e., $a\lesssim b$ and $b\lesssim c$ implies $a\lesssim c,$ for all $% a,b,c\in S.$ (A pre-ordered set may have some other properties, but these are the main requirements.) On the other hand a partial order $a\leq b$ is a binary relation on a set $S$ that demands $S$ to have three properties: (i) $\leq $ is reflexive, i.e., $% a\leq a$ for all $a\in S$, (ii) $\leq $ is transitive, i.e., $a\leq b$ and $% b\leq c$ implies $a\leq c,$ for all $a,b,c\in S$ and (iii) $\leq $ is antisymmetric, i.e., $a\leq b$ and $b\leq a$ implies $a=b$ for all $a,b\in S$. So, as the definitions go, a partial order is a pre-order with an extra condition. This extra condition is not cosmetic, it is a distinguishing property. To see this let's take a simple example. Let's note that $a$ divides $b$ (or $a|b)$ is a binary relation on the set $Z\backslash \{0\}$ of nonzero integers. Here, of course, $a|b$ $\Leftrightarrow $ there is a $% c\in Z$ such that $b=ac.$ Now let's check: (i) $a|a$ for all $a\in Z\backslash \{0\}$ and (ii) if $a|b$ and $b|c$ then $a|c.$ So $a|b$ is a pre-order on $Z\backslash \{0\},$ but it's not a partial order. For, in $Z\backslash \{0\},$ $a|b$ and $b|a$ can only give you the conclusion that $a=\pm b,$ which is obviously not the same as $a=b.$ The above example shows the problem with the pre-ordered set $\langle S,\lesssim\rangle.$ It can allow $a\lesssim b$ and $b\lesssim a$ with a straight face, without giving you the equality. Now a pre-order cannot be made into a partial order on a set $S$ unless it is a partial order, but it can induce a partial order on a modified form of $S.$ Here's how. Take the bull by the horns and define a relation $\sim ,$ on $S,$ by saying that $a\sim b$ $\Leftrightarrow a\lesssim b$ and $b\lesssim a$. It is easy to see that $\sim $ is an equivalence relation. Now split $S$ into the set of classes $\{[a]\mid a\in S\}$ where $[a]=\{x\in S\mid x\sim a\}.$ This modified form of $S$ is often represented by $S/\sim .$ Now of course $[a]<[b]$ if $a\lesssim b$ but it is not the case that $b\lesssim a,$ and we set $[a]=[b]$ if $a\sim b$ (i.e. if $a\lesssim b$ and $b\lesssim a).$ In the example of $Z\backslash \{0\}$ we have $Z\backslash \{0\}/\sim $ $=\{|a|$ $|$ $a\in Z\backslash \{0\}\}.$ (Oh and as a parting note an equivalence relation is a pre-order too, with the extra requirement that $a\lesssim b$ implies $b\lesssim a.)$<|endoftext|> TITLE: Does compactness depend on the metric?
QUESTION [8 upvotes]: If so, what are some examples of sets that are compact with respect to one metric but not compact with respect to another metric? REPLY [3 votes]: Compactness is a topological property, so if you have two metrics that induce the same topology, then either both metric spaces are compact, or else neither is compact. However, if you have two metrics that are allowed to be topologically inequivalent, then surely one can be compact and the other one non-compact. For example the spaces $\left] 0,1 \right[$ and $[0,1]$ with their usual metrics are non-compact and compact, respectively, but the underlying sets of the spaces have cardinality $2^{\aleph_0}$ in both cases, so you could identify the underlying sets by some bijection of your choice, and say this is an example of the kind you ask for in your question. More generally, let $A$ be any set. If $|A|$ is finite, then any topology on $A$ makes $A$ compact. In particular any metric on $A$ makes a compact metric space. (Actually, any metric on a finite set induces the discrete topology, but even this topology is still "small" enough to be compact in this case.) If $|A|$ is transfinite, you can choose the discrete metric $d(x,y)=1$ for all $x\ne y$. That gives one metric which is not compact. There will be other topologies on this set $A$ which give compact spaces (for example the trivial topology $\{ \varnothing, A \}$, or the cofinite topology) but they may not come from a metric (these topologies may be non-metrizable). I am not sure for which transfinite cardinalities $|A|$ we can choose a metric which makes the space compact, but as we saw, $|A| = 2^{\aleph_0}$ is one such cardinality. As mentioned by user N. S. in a comment to an earlier answer, the case $|A|=\aleph_0$ is easy enough. One can pick a countable, closed and bounded subset of $\mathbb{R}$, such as $\{\frac1n \mid n\in\mathbb{N}\} \cup \{ 0 \}$. With the metric inherited from $\mathbb{R}$, this is a compact metric space of cardinality $\aleph_0$.<|endoftext|> TITLE: Prove $\int_{0}^{1} \frac{\sin^{-1}(x)}{x} dx = \frac{\pi}{2}\ln2$ QUESTION [10 upvotes]: I stumbled upon the interesting definite integral \begin{equation} \int\limits_0^1 \frac{\sin^{-1}(x)}{x} dx = \frac{\pi}{2}\ln2 \end{equation} Here is my proof of this result.
Let $u=\sin^{-1}(x)$ then integrate by parts, \begin{align} \int \frac{\sin^{-1}(x)}{x} dx &= \int u \cot(u) du \\ &= u \ln\sin(u) - \int \ln\sin(u) du \tag{1} \label{eq:20161030-1} \end{align} \begin{align} \int \ln\sin(u) du &= \int \ln\left(\frac{\mathrm{e}^{iu} - \mathrm{e}^{-iu}}{i2} \right) du \\ &= \int \ln\left(\mathrm{e}^{iu} - \mathrm{e}^{-iu} \right) du \,- \int \ln(i2) du \\ &= \int \ln\left(1 - \mathrm{e}^{-i2u} \right) du + \int \ln\mathrm{e}^{iu} du \,-\, u\ln(i2) \\ &= \int \ln\left(1 - \mathrm{e}^{-i2u} \right) du + \frac{i}{2}u^{2} -u\ln2 \,-\, ui\frac{\pi}{2} \tag{2} \label{eq:20161030-2} \end{align} To evaluate the integral above, let $y=\mathrm{e}^{-i2u}$ \begin{equation} \int \ln\left(1 - \mathrm{e}^{-i2u} \right) du = \frac{i}{2} \int \frac{\ln(1-y)}{y} dy = -\frac{i}{2} \operatorname{Li}_{2}(y) = -\frac{i}{2} \operatorname{Li}_{2}\mathrm{e}^{-i2u} \tag{3} \label{eq:20161030-3} \end{equation} Now we substitute equation \eqref{eq:20161030-3} into equation \eqref{eq:20161030-2}, then substitute that result into equation \eqref{eq:20161030-1}, switch variables back to $x$, and apply limits, \begin{align} \int\limits_{0}^{1} \frac{\sin^{-1}(x)}{x} dx &= \sin^{-1}(x)\ln(x) + \sin^{-1}(x)\left(\ln2 + i\frac{\pi}{2}\right) \\ &- \frac{i}{2}[\sin^{-1}(x)]^{2} + \frac{i}{2} \operatorname{Li}_{2}\mathrm{e}^{-i2\sin^{-1}(x)} \Big|_0^1 \\ &= \frac{\pi}{2}\ln2 \end{align} I would be interested in seeing other solutions. REPLY [3 votes]: Integration by parts reduces the integral to, $$-\int_{0}^{1} \frac{\ln x}{\sqrt{1-x^2}} dx$$ And the substitution $x=\sin u$ reduces the integral to $-I$, where $$I=\int_{0}^{\frac{\pi}{2}} \ln (\sin u) du$$ And the substitution $v=\frac{\pi}{2}-u$ shows that, $$I=\int_{0}^{\frac{\pi}{2}} \ln (\cos v) dv$$ $$I=\int_{0}^{\frac{\pi}{2}} \ln (\cos u) du$$ Now adding the integrals and noting properties of logarithms we have, $$2I=\int_{0}^{\frac{\pi}{2}} \left( \ln (2 \sin x \cos x)-\ln 2\right) dx$$ Double angle, $$2I=\int_{0}^{\frac{\pi}{2}} \ln (\sin 2x) dx -\frac{\pi}{2} \ln 2$$ The substitution $s=2x$ gives $$2I=\frac{1}{2}\int_{0}^{\pi} \ln (\sin s) ds -\frac{\pi}{2} \ln 2$$ But $$\int_{0}^{\pi} \ln (\sin s) ds=2I$$ This follows from the substitution $w=\frac{\pi}{2}-s$ and the evenness of the function $f(w)=\ln (\cos w)$: $$\int_{0}^{\pi} \ln (\sin s) ds$$ $$=-\int_{\frac{\pi}{2}}^{-\frac{\pi}{2}} \ln (\cos w) dw$$ $$=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \ln (\cos w)dw $$ $$=2 \int_{0}^{\frac{\pi}{2}} \ln (\cos w) dw=2I$$ So, $$2I=I-\frac{\pi}{2}\ln 2$$ $$I=-\frac{\pi}{2}\ln 2$$ and the original integral equals $-I=\frac{\pi}{2}\ln 2$.<|endoftext|> TITLE: How is irrational exponent defined? QUESTION [6 upvotes]: I am trying to understand the most significant jewel in mathematics - the Euler's formula. But first I try to re-catch my understanding of exponent function. At the very beginning, exponent is used as a shorthand notion of multiplying several identical number together. For example, $5*5*5$ is noted as $5^3$. In this context, the exponent can only be $N$. Then the exponent extends naturally to $0$, negative number, and fractions. These are easy to understand with just a little bit of reasoning. Thus the exponent extends to $Q$ Then it came to irrational number. I don't quite understand what an irrational exponent means? For example, how do we calculate the $5^{\sqrt{2}}$? Do we first get an approximate value of $\sqrt{2}$, say $1.414$. Then convert it to $\frac{1414}{1000}$. And then multiply 5 for 1414 times and then get the $1000^{th}$ root of the result? ADD 1 Thanks to the replies so far.
In the thread recommended by several comments, a function definition is mentioned as below: $$ \ln(x) = \int_1^x \frac{1}{t}\,\mathrm{d}t $$ And its inverse function is intentionally written like this: $$ \exp(x) $$ And it implies this is the logarithm function because it abides by the laws of logarithms. I guess by the laws of logarithms that thread means something like this: $$ f(x_1\cdot x_2)=f(x_1)+f(x_2) $$ But that doesn't necessarily mean the function $f$ is the logarithm function. I can think of several function definitions satisfying the above law. So what if we don't explicitly name the function as $\ln(x)$ but write it like this: $$ g(x) = \int_1^x \frac{1}{t}\,\mathrm{d}t $$ And its inverse as this: $$ g^{-1}(x) $$ How can we tell they are still the logarithm/exponent function as we know them? REPLY [3 votes]: One way of defining the real numbers is as equivalence classes of the collection of all Cauchy sequences (or, equivalently, the collection of all increasing sequences with upper bound) of rational numbers, with $\{a_n\}$ equivalent to $\{b_n\}$ if and only if $\{a_n- b_n\}$ converges to 0. This essentially says that, for example, $\pi$ is "represented" by the infinite decimal 3.1415926.... From that definition, if $a$ is an irrational number then there exists a sequence of rational numbers $r_1, r_2, r_3, ...$ that converges to $a$. We then define $x^a$ to be the limit of the sequence $x^{r_1}, x^{r_2}, x^{r_3}, ...$. Using the same example as before, $2^\pi$ is defined as the limit of the sequence $2^3, 2^{3.1}, 2^{3.14}, 2^{3.141}, 2^{3.1415}, 2^{3.14159}, 2^{3.141592}, 2^{3.1415926}, ...$.<|endoftext|> TITLE: Prove that $\gcd{\left(\binom M1,\binom M2,\binom M3,\ldots,\binom Mn\right)}=1$ where $M=\mathrm{lcm}(1,2,3,\ldots,n)$ QUESTION [12 upvotes]: Let $n$ be a positive integer and let $$M=\mathrm{lcm}(1,2,3,\ldots,n).$$ Show that $$\gcd{\left(\binom{M}{1},\binom{M}{2},\binom{M}{3},\ldots,\binom{M}{n}\right)}=1$$ REPLY [2 votes]: Hint: Apply Lucas' Theorem which states that for non-negative integers $m$ and $n$, and a prime $p$, $$ {m \choose n} \equiv \prod_{i=0}^{k} {m_{i} \choose n_{i}} \pmod p,$$ where $m=m_{k}p^{k}+m_{k-1}p^{k-1}+\cdots+m_{1}p+m_{0}$ and $n=n_{k}p^{k}+n_{k-1}p^{k-1}+\cdots+n_{1}p+n_{0}$ are the base $p$ expansions of $m$ and $n,$ respectively. This uses the convention that ${m \choose n} =0$ when $m<n$.<|endoftext|> TITLE: Proving a basic cyclotomic identity QUESTION [5 upvotes]: Assume $T\in\mathbb{C}$ and $m>1$, and let $\mu_i$ be the $m$-th roots of unity. I want to prove that \begin{align} \prod_{i=1}^m \left(1-\mu_i T\right)=1-T^m. \end{align} By the brute force ansatz things get a little bit complicated. If I just expand the product I get \begin{align} \prod_{i=1}^m \left(1-\mu_i T\right)&=1-\left(\sum_{i=1}^m \mu_i\right)T+\left(\sum_{i_1<i_2} \mu_{i_1}\mu_{i_2}\right)T^2-\cdots+(-1)^m\left(\mu_1\mu_2\cdots\mu_m\right)T^m, \end{align} so the brute force route amounts to evaluating all the elementary symmetric sums of the roots of unity. Is there a cleaner way to see the identity?<|endoftext|> TITLE: How do you "simplify" the sigma sign when it is raised to a power? QUESTION [9 upvotes]: How do you simplify the following expression: $$\left(\sum^{n}_{k=1}k \right)^2$$ I am supposed to show that $$\left(\sum^{n}_{k=1}k \right)^2 = \sum^{n}_{k=1}k^{3} $$ The problem is I do not really know how to manipulate the sigma sign. I know that I (probably) need to use induction somehow, but the main question is how do you "simplify" the sigma sign when it is raised to a power. Due to the problem itself I know that (most likely): $$\left(\sum^{n}_{k=1}k \right)^2 = \sum^{n}_{k=1}k^{3} $$ so is it possible to simply manipulate the LHS so that it looks like the RHS?
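For reference, a worked verification of the identity (an editorial addition, not one of the original answers): both sides have the same closed form. Using $\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$, the claim reduces to $\sum_{k=1}^{n}k^{3}=\left(\frac{n(n+1)}{2}\right)^{2}$, which follows by induction on $n$; the base case $n=1$ is clear, and the inductive step is $$\left(\frac{n(n+1)}{2}\right)^{2}+(n+1)^{3}=\frac{(n+1)^{2}\left(n^{2}+4n+4\right)}{4}=\left(\frac{(n+1)(n+2)}{2}\right)^{2}.$$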
REPLY [6 votes]: This identity is a coincidence — it is not proven by doing general series manipulations, but instead by simply computing the left and right hand sides and confirming they're equal.<|endoftext|> TITLE: On powers of symmetric matrices. QUESTION [5 upvotes]: What is the best way to show if $A$ is symmetric then $A^2$ is as well using eigenvalues? REPLY [6 votes]: Just recall that for two matrices $B,C$ it holds that: $(BC)^T = C^TB^T$. Applying this with $B=A$, $C=A$, we get: $$ (A^2)^T = (AA)^T = A^TA^T = AA = A^2$$ You can also do it with eigenvalues but it seems a lot more tedious to me (and requires much stronger results): Let $A=VDV^T$ be the eigenvalue decomposition of $A$ (with $D$ diagonal, $V$ orthonormal, the decomposition exists by the spectral theorem), then: $$ A^2 = VD^2V^T $$ And it is easy to see that this is symmetric as well. In particular, apply the identity I cited at the beginning and note that $D^2$ is symmetric since it is diagonal.<|endoftext|> TITLE: Possible Riemann's Hypothesis proof? QUESTION [6 upvotes]: First of all, I imagine it will not be correct, just because of its simplicity, but I would also want to know why, as I can't find any mistake in it. The "proof" would be based on combining two main theorems/formulae. The first one would be this one, due to Nicolas, where it is stated that RH would hold iff: $$\frac{N_k}{\phi{(N_k)}} > e^{\gamma} \ln{(\ln{(N_k)})}$$ held for every $k$, where $N_k$ is the primorial of order $k$ and $\phi{(N_k)}$ is its Euler totient function. Then, my main aim here will be to prove that formula for every $k$. To do that, I will use this other theorem: $$ \prod_{p \le n} \frac{p}{p-1} > e^{\gamma}(\ln{n})(1-\frac{1}{2(\ln{n})^{2}})$$ Taken from "Approximate Formulas for Some Functions of Prime Numbers" (link), Theorem 8 (3.28). As, in this case, $\frac{N_k}{\phi{(N_k)}} = \prod_{p \le p_k} \frac{p}{p-1}$, we can try to see if this holds: $$e^{\gamma}(\ln{p_k})(1-\frac{1}{2(\ln{p_k})^{2}})>e^{\gamma} \ln{\ln{N_k}}$$ Hence $$\ln{p_k}-\frac{1}{2\ln{p_k}}>\ln{\ln{N_k}}$$ For it to be more clear, we can replace $\ln{N_k}$ by $\theta{(p_k)}$ (Chebyshev's first function) so that $$\ln{p_k}-\frac{1}{2\ln{p_k}}>\ln{\theta{(p_k)}}$$ From there, we could easily get to $$\frac{1}{2\ln{p_k}}<\ln{\frac{p_k}{\theta{(p_k)}}}$$ And, with the bounds of Theorems 3 (3.12) and 4 (3.15), we get $$\frac{1}{2\ln{p_k}}<\ln{\frac{\ln{k}}{1+ \frac{1}{2\ln{k}}}}$$ This would be true for every big enough $k$, meaning that $$\frac{N_k}{\phi{(N_k)}} > e^{\gamma} \ln{(\ln{(N_k)})}$$ holds, and, with it, RH. Is this correct? Why would or wouldn't it prove RH? Thank you! Edit thanks to Jyrki Lahtonen REPLY [7 votes]: With $p_1, p_2, \ldots$ being a list of primes in increasing order we have $N_k = p_1 p_2 \cdots p_k$ for the primorial. Therefore $$\frac{N_k}{\phi(N_k)}=\prod_{p\le p_k}\frac{p}{p-1}.$$ Hence the lower bound is only $$ \frac{N_k}{\phi(N_k)} > e^\gamma \log p_k \left(1 - \frac1{2 \log^2 p_k}\right), $$ which does not work for the remaining argument.<|endoftext|> TITLE: Vieta Jumping: Related to IMO problem 6, 1988: If $ab + 1$ divides $a^2 + b^2$ then $ab + 1$ cannot be a perfect square. QUESTION [23 upvotes]: The famous IMO problem 6 states that if $a,b$ are positive integers, such that $ab + 1$ divides $a^2 + b^2$, then $\frac{ a^2 + b^2}{ab + 1 }$ is a perfect square, namely, $gcd(a,b)^2$.
How about a modification of this problem: If $a,b$ are (strictly) positive integers, such that $ab + 1$ divides $a^2 + b^2$, then $ab + 1$ cannot be a perfect square. I am looking for a proof of the claim above, or a counter-example. One possible approach towards a proof is the following: Suppose there is such a pair $(a,b)$ as above, then by the famous IMO problem 6 from 1988, $\frac{ a^2 + b^2}{ab + 1 } = g^2$ where $g = \gcd(a,b)$. Since $ab + 1$ is a perfect square, $a^2 + b^2 = c^2$ for some integer $c$. So that $(a,b,c)$ is a Pythagorean triple, therefore there exist positive integers $n,m,l$ with $n$ coprime to $m$, such that $a = l(n^2 - m^2)$ and $b = 2lnm$, by Euclid's formula. Then by plugging in $a$ and $b$ in terms of $n,m,l$ into the original equation, and solving for $l$, it is possible to obtain the following: There exist positive coprime integers $n,m$ such that $\frac{(n^2 + m^2 + 1)(n^2 + m^2 - 1)}{2mn(n+m)(n-m)}$ is a perfect square. If we put the quotient above into a program, as in this python code snippet (with the roles of $n$ and $m$ swapped so that $m>n$ and the denominator stays positive):

N = 1000
for n in range(1, N):
    for m in range(n + 1, N):
        # numerator and denominator of the quotient above
        A = (n*n + m*m + 1) * (n*n + m*m - 1)
        B = 2*m*n * (n + m) * (m - n)
        if A % B == 0:      # keep only the pairs where B divides A
            print(A // B)   # integer quotient

The quotient is always 2, regardless of whether or not $n$ and $m$ are coprime! So if the very strong implication $2mn(n+m)(n-m) \mid (n^2 + m^2 + 1)(n^2 + m^2 - 1) \implies \frac{(n^2 + m^2 + 1)(n^2 + m^2 - 1)}{2mn(n+m)(n-m)} = 2$ holds, then the original problem will be solved. REPLY [4 votes]: Here is a bit of a hacky answer to your question in the affirmative. You observe that $k = \mathrm{gcd}(a,b)^2$. After plodding through various resources, it is because one can Viete Jump: $$ (a,b) \mapsto \big(a_1, b_1\big) \mapsto \dots \mapsto (\sqrt{k},0) $$ and the gcd is conserved. Granting that, let $a = \text{gcd}(a,b) \, c$ and $b = \text{gcd}(a,b) \, d$ so that \begin{eqnarray} ab+1 &=& \frac{a^2+b^2}{\mathrm{gcd}(a,b)^2}= c^2 + d^2 \\ &=& \,\text{gcd}(a,b)^2 \, cd + 1 \end{eqnarray} In this way there are two conditions $c, d$ might solve (where the two $\square$ are different) and $\text{gcd}(c,d)=1$: \begin{eqnarray*} c^2 - \square \, cd + d^2 &=& 1 \\ c^2 + d^2 &=& \square \end{eqnarray*} Hopefully these two equations lead to a contradiction. Added 11/15 The answer is definitely no. Let $k = \mathrm{gcd}(a,b)^2$. We are trying to solve in integers: \begin{eqnarray} c^2 - k\; cd + d^2 &=& 1 \\ c^2 + d^2 &=& \square \end{eqnarray} As I learned, the first can be solved with $(c,d) = (1,0)$ or $(k,1)$ and there is an infinite family of solutions using consecutive terms of a recursive sequence [2, 3] $$ x_{n+1} = k \, x_n + x_{n-1} $$ There are sometimes ways to link Pythagorean triples to Pell equations [1] (Modular Tree of Pythagoras) $$ x_{n+1}^2 + \frac{1}{2} x_n^2 < x_{n+1}\sqrt{x_{n+1}^2 + x_n^2 } = x_{n+1}^2 \sqrt{1 + (x_n/x_{n+1})^2} < x_{n+1}^2 + \frac{1}{2} x_n^2 + 1$$ This cannot be an integer. So any time we solve the Pell equation, we cannot also solve Pythagoras. $\quad\quad\square$ Old Answer This is discussed on Wikipedia's article on Vieta Jumping: Nobody of the six members of the Australian problem committee could solve it. Two of the members were George Szekeres and his wife, both famous problem solvers and problem creators. [...] The problem committee submitted it to the jury of the XXIX IMO marked with a double asterisk, which meant a superhard problem, possibly too hard to pose. After a long discussion, the jury finally had the courage to choose it as the last problem of the competition.
Eleven students gave perfect solutions. Among the eleven was Ngô Bảo Châu (Fields Medal 2010). His work on the Fundamental Lemma also has a jumpy flavor [1, 2, 3] but is quite advanced. The discussion on YouTube is helpful as well. These videos give a thorough discussion of different ways to solve The Legend of Question Six (Numberphile) The Return of the Legend of Question Six (Numberphile2) These may not directly solve your problem but provide historical context and indicate possible strategies. In the Wikipedia article, the example of Viete Jumping is IMO 1988/6 -- the same as asked in the question: Let $a,b$ be positive integers such that $ab+1$ divides $a^2 + b^2$ show that $ \frac{a^2 + b^2}{ab+1}$ is a perfect square. And the solution goes in three steps: #1 Let $a, b \geq 0$ be solutions to $\frac{a^2 + b^2}{ab+1} = k$ such that $k$ is not a perfect square: $k \neq \square$ #2 Starting from $(a,b)$ we can try to generate another solution $(x,b)$ which solves the quadratic equation: $$ x^2 - kb\, x +(b^2 - k) = 0 $$ The map $(a,b) \mapsto (a_1,b)$ is our Vieta jumping. Since both $a, a_1$ are acceptable solutions we have: $$ (x-a)(x-a_1) = x^2 - (a + a_1) x + aa_1 = 0$$ By the Viete equations (comparing the coefficients), we find out two things: $ a + a_1 = kb $ so that $a_1 = kb - a \in \mathbb{Z}$ (it is an integer) $ aa_1 = b^2 - k $ so that $a_1 = \frac{b^2 - k}{a} \neq 0$ #3 If $a \geq b$ we can deduce that $a_1 \geq 0$ (it is non-negative) and additionally $ a > a_1 \ge 0 $ From #2 $ a_1 = \frac{b^2 - k }{a} < \frac{b^2}{a} \le \frac{a^2}{a}=a $ $\frac{a_1^2 + b^2}{a_1 b+1}= k > 0 $ implies that $ a_1b+1 > 0$, i.e. $a_1 > - \frac{1}{b}$, but $a_1 \in \mathbb{Z}$ so $a_1 \geq 0$. Summary We've shown that given two positive numbers $a,b$ solving $\frac{a^2+b^2}{ab+1}=k$ with $k \neq \square$ we can always find another solution $(a_1,b)$ solving the same equation with $a > a_1$. Then the Viete jump consists of a map: $$ (a,b) \mapsto \left\{\begin{array}{rc} (b,a) & \text{ if } a \leq b \\ (\frac{b^2-k}{a},b) & \text{ if } a \geq b \end{array}\right.$$ While this does not solve your problem -- to show that $ab+1 \neq \square$ -- it does indicate possibly where to start and some possible resources. A quick use of Bezout's identity shows that $ab+1$ should also divide $$ \big[(a^2 + b^2) + 2(ab+1)\big] + \big[(a^2 + b^2) - 2(ab+1)\big] = (a+b)^2 +(a-b)^2 $$ and this could lead to your contradiction.<|endoftext|> TITLE: How to choose a standard smooth structure for a manifold? QUESTION [7 upvotes]: Given any smooth manifold $(M,\mathscr{A})$ with a specified smooth structure $\mathscr{A}$, we can identify uncountably many distinct smooth structures $(\mathscr{B}_s)_{s > 0}$ such that $(M,\mathscr{B}_s)$ is also a smooth manifold. So how do we go about choosing a standard smooth structure to work with and do calculations with? How can we justify any of the uncountably many choices as the "best" one? Context: My confusion is a result of solving the following problem (i.e. my solution raises more questions than it answers) from Lee's Introduction to Smooth Manifolds, 1-6 on p.30: Let $M$ be a nonempty topological manifold of dimension $n \ge 1$. If $M$ has a smooth structure, show that it has uncountably many distinct ones. [Hint: first show that for any $s>0$, $F_s(x)=|x|^{s-1}x$ defines a homeomorphism from $\mathbb{B}^n$ to itself, which is a diffeomorphism if and only if $s=1$.]
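As a quick numeric companion to the hint (an editorial sketch, not part of the original question; the helper name F is my own), the difference quotients of $F_s(x)=|x|^{s-1}x$ at $0$ in dimension $n=1$ diverge when $s<1$, which is exactly the failure of smoothness the hint points at (for $s>1$ it is the inverse map $F_{1/s}$ that fails):

def F(s, x):
    # F_s(x) = |x|^(s-1) * x, extended continuously by F_s(0) = 0.
    return abs(x) ** (s - 1) * x if x != 0 else 0.0

for s in (0.5, 1.0, 2.0):
    # difference quotients (F(h) - F(0)) / h for shrinking h
    print(s, [(F(s, h) - F(s, 0.0)) / h for h in (1e-2, 1e-4, 1e-6)])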
This answer and this question both seem to imply that all of these smooth structures should be diffeomorphic, but if they were diffeomorphic, then wouldn't the smoothness of transition maps between coordinate charts imply that they are equal (since any smooth structure is a maximal atlas)? Anyway, even if they were all diffeomorphic, that still doesn't resolve the issue of which one should be the "standard" one and which one to choose for calculations, etc. For example, on p. 40 of this same book, it says that: ...$\mathbb{R}^4$ has uncountably many distinct smooth structures, no two of which are diffeomorphic to each other! The existence of nonstandard smooth structures on $\mathbb{R}^4$ (called fake $\mathbb{R}^4$'s) was first proved by Simon Donaldson and Michael Freedman in 1984 as a consequence of their work on the geometry and topology of compact 4-manifolds... So when working with $\mathbb{R}^4$, how does one decide which smooth structure is the "standard" smooth structure, and how can someone verify that they are working with the correct smooth structure? My attempt: Does the answer have to do with the fact that any topological manifold already comes pre-equipped with a collection of charts which are appropriate homeomorphisms but not necessarily smoothly compatible with each other, and thus any smooth structure is a strict subset of the family of charts inherited from the topological manifold structure? Such that any two distinct smooth structures are both strict subsets of the family of charts from the underlying topological manifold? So they are necessarily equivalent up to homeomorphism? I still don't see how to show that they are equivalent up to diffeomorphism. Also the fact that any smooth structure on a set induces a topological manifold structure on the same set would seem to suggest that any smooth atlas is not strictly contained in the family of charts resulting only from the topology of the underlying space, although I am not sure either way. REPLY [14 votes]: The point of this problem is to help the reader understand the difference between two distinct concepts. Suppose $M$ is a topological manifold, and $\mathscr A_1$ and $\mathscr A_2$ are two different smooth structures on $M$ (i.e., maximal smoothly compatible atlases). Then we can ask two questions about $\scr A_1$ and $\scr A_2$: What does it mean for $\scr A_1$ and $\scr A_2$ to be the same smooth structure on $M$? What does it mean for the smooth manifolds $(M,\scr A_1)$ and $(M,\scr A_2)$ to be diffeomorphic to each other? In question 1, we are given two different atlases on $M$, and the question is whether each chart of $\scr A_1$ is smoothly compatible with each chart in $\scr A_2$ and vice versa. The problem you quoted (Problem 1-6) asks you to construct uncountably many smooth structures on a given manifold that are distinct, in the sense that the charts of one are not smoothly compatible with the charts of another. This is possible even on $\mathbb R$; indeed, as the problem states, it is possible on any positive-dimensional topological manifold as long as it admits at least one smooth structure. The second question (whether two given smooth structures on $M$ result in smooth manifolds that are diffeomorphic to each other) is a completely different question.
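A concrete one-dimensional illustration of the two notions (the classical $x^{3}$ example; an editorial addition, not part of the original answer): let $\mathscr A_1$ be the smooth structure on $\mathbb R$ determined by the identity chart, and $\mathscr A_2$ the one determined by the global chart $\psi(x)=x^{3}$. The transition map $\mathrm{id}\circ\psi^{-1}(y)=y^{1/3}$ is not smooth at $0$, so $\mathscr A_1$ and $\mathscr A_2$ are different smooth structures in the sense of question 1. Nevertheless $F(x)=x^{1/3}$ is a diffeomorphism from $(\mathbb R,\mathscr A_1)$ to $(\mathbb R,\mathscr A_2)$ in the sense of question 2, since its coordinate representative $\psi\circ F\circ\mathrm{id}^{-1}(x)=\left(x^{1/3}\right)^{3}=x$ is the identity.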
The result that @levap refers to about $\mathbb R$ (Problem 15-13 in my book) says that if $\scr A_1$ and $\scr A_2$ are two smooth structures on $\mathbb R$, then there is a map $F\colon \mathbb R\to\mathbb R$ that is a diffeomorphism from $(\mathbb R,\scr A_1)$ to $(\mathbb R,\scr A_2)$. Another way of saying this is that given a smooth chart $(U,\phi)\in\scr A_2$, the chart $(F^{-1}(U),\phi\circ F)$ will be a chart for $\mathbb R$ that is compatible with all the charts in $\mathscr A_1$ (and thus, by maximality, is already in $\mathscr A_1$). It doesn't say that every chart in $\mathscr A_2$ is already smoothly compatible with those in $\mathscr A_1$. And it does not contradict the fact that there are many distinct smooth structures on $\mathbb R$; it just says that any two of them are related to each other by such a map.<|endoftext|> TITLE: Is the sum of singular and nonsingular matrix always a nonsingular matrix? QUESTION [7 upvotes]: If $A$ and $B$ are singular and nonsingular respectively, where both are square, is $A+B$ always nonsingular? Suppose that $A$ is a singular matrix and that $B$ is nonsingular, where both are square of the same dimension. It is not hard to see that $AB$ and $BA$ are both singular. It seems natural to ask whether the same is true for addition of matrices instead of product. For $1\times1$ matrices (i.e., numbers), the only singular matrix is $0$; so if we add it to any nonsingular (invertible) matrix, it remains nonsingular. So to find a counterexample, we have to look at bigger matrices. REPLY [2 votes]: Let me describe one particular way of generating lots of examples. We will find $A$ such that $A +I$ will be singular. You can easily adapt this method to use with any non-singular matrix instead of the identity. We will work backwards to get solutions. Take a matrix with two identical rows as $A+I$. This gives the condition that $$A+\pmatrix{1 &0&0\cr0&1&0\cr 0&0&1\cr}=\pmatrix{a & b &c \cr a &b & c\cr * & * & *\cr}$$ First two rows of $A$ are forced. To ensure singularity of $A$ make the last row identical to 2nd row: $$\pmatrix{a-1 & b& c\cr a &b-1 & c\cr a & b-1 & c} +\pmatrix{1 &0&0\cr0&1&0\cr 0&0&1\cr}=\pmatrix{a & b &c \cr a &b & c\cr * & * & *\cr} $$ Now we can work backwards to get the values to be used in place of stars: $$\pmatrix{a-1 & b& c\cr a &b-1 & c\cr a & b-1 & c} +\pmatrix{1 &0&0\cr0&1&0\cr 0&0&1\cr}=\pmatrix{a & b &c \cr a &b & c\cr a & b-1 & c+1\cr} $$ Now replace $a,b,c$ with your ATM pin number, your friend's age, and your annual salary in Euros respectively, you will get a solution.<|endoftext|> TITLE: Is it possible to interchange countable unions and intersections? QUESTION [11 upvotes]: Suppose there is a nonempty set $A_n^i$ that is indexed over $\omega$, the natural numbers. Can I say the following is true? $$\bigcup_{i \in \omega} \left\{\bigcap_{n\in \omega} A_n^i\right\} = \bigcap_{n\in \omega}\left\{ \bigcup_{i \in \omega} A_n^i\right\}$$ Can anyone give me some idea as to whether or not I would be able to interchange the union and intersection?
REPLY [18 votes]: Note that $$x\in\bigcup_{i\in\omega}\bigcap_{n\in\omega}A_n^i$$ iff $\exists i\in\omega\,\forall n\in\omega\,(x\in A_n^i)$, while $$x\in\bigcap_{n\in\omega}\bigcup_{i\in\omega}A_n^i$$ iff $\forall n\in\omega\,\exists i\in\omega\,(x\in A_n^i)$; the latter condition is on the face of it easier to satisfy, so you should look for an example in which $$\bigcup_{i\in\omega}\bigcap_{n\in\omega}A_n^i\subsetneqq\bigcap_{n\in\omega}\bigcup_{i\in\omega}A_n^i\;.$$ Specifically, we might try to construct the sets $A_n^i$ so that there is some element $a\in A_n^n$ for all $n\in\omega$, which will ensure that $$a\in\bigcap_{n\in\omega}\bigcup_{i\in\omega}A_n^i\;,$$ but so that there is no $i\in\omega$ such that $a\in A_n^i$ for all $n\in\omega$. This is easy: for each $i\in\omega$ make sure that $a\in A_n^i$ iff $n=i$. Thus, we can let $$A_n^i=\begin{cases} \{a\},&\text{if }n=i\\ \varnothing,&\text{otherwise}\;. \end{cases}$$ Then $$\bigcup_{i\in\omega}\bigcap_{n\in\omega}A_n^i=\bigcup_{n\in\omega}\varnothing=\varnothing\;,$$ but $$\bigcap_{n\in\omega}\bigcup_{i\in\omega}A_n^i=\bigcap_{n\in\omega}\{a\}=\{a\}\;.$$ REPLY [11 votes]: This is not possible in general. Even for finite unions and intersections you can have $$ (A_1\cap A_2)\cup(B_1\cap B_2)\subsetneqq(A_1\cup B_1)\cap(A_2\cup B_2). $$ Take for example $A_2=A_1^c$ (the complement in $X\neq\emptyset$) and $B_1=A_2,\;B_2=A_1$; then $$ A_1\cap A_2=B_1\cap B_2=\emptyset,\qquad A_1\cup B_1=A_2\cup B_2=X. $$<|endoftext|> TITLE: Solutions for $\cos(\alpha)+\cos(\beta)-2\cos(\alpha+\beta)=0$ with a certain value range. QUESTION [5 upvotes]: To prove the claim in "How prove this equation has only one solution $\cos{(2x)}+\cos{x}\cdot\cos{(\sqrt{(\pi-3x)(\pi+x)}})=0$", I first need an analytic (not numerical) proof of the following problem: Let $\enspace\displaystyle 0<\beta<\frac{2\pi}{3}<\alpha<2\pi$ . Then does there exist exactly $\,$ one $\,$ solution $\,(\alpha;\beta)\,$ of $\enspace \cos(\alpha)+\cos(\beta)-2\cos(\alpha+\beta)=0 $ ? (The answers below show: No.) Known: $\enspace\displaystyle (\alpha_0;\beta_0):=\left(\pi;\arccos\left(\frac{1}{3}\right)\right)\enspace$ is a solution. REPLY [2 votes]: Rewrite the equation as $$ \cos (\alpha )+\cos (\beta )-2 \cos (\alpha ) \cos (\beta )+2 \sin (\alpha ) \sin (\beta )=0 $$ use the Weierstrass substitution for $\beta$ (or for $\alpha$), with $t=\tan(\beta/2)$ $$ \cos (\alpha )+\frac{1-t^2}{1+t^2}-\frac{2 \left(1-t^2\right) \cos (\alpha )}{1+t^2}+\frac{4 t \sin (\alpha )}{1+t^2}=\frac{t^2 (3 \cos (\alpha )-1)+4 t \sin (\alpha )+1-\cos (\alpha )}{1+t^2}=0 $$ Then solve the quadratic $$ t^2 (3 \cos (\alpha )-1)+4 t \sin (\alpha )+1-\cos (\alpha )=0 $$ (The original answer includes a plot of the solution set in the $(\alpha,\beta)$-plane here, showing a whole curve of solutions.) So there are infinitely many solutions; some particularly nice ones are $$ (\pi,\arccos(1/3)),\quad(4\pi/3,\arccos(11/14)),\quad(3\pi/2,\arccos(2/\sqrt{5})),\quad(5\pi/3,\arccos(\sqrt{11/12})) $$<|endoftext|> TITLE: Surface area of sphere within a cylinder QUESTION [5 upvotes]: I have to compute the surface area of that portion of the sphere $x^2+y^2+z^2=a^2$ lying within the cylinder $\Bbb{T}:=\ \ x^2+y^2=by.$ My work: I start with only the $\Bbb{S}:=\ \ z=\sqrt{a^2-x^2-y^2}$ part and will later multiply it by $2$.
<|endoftext|> TITLE: Surface area of sphere within a cylinder QUESTION [5 upvotes]: I have to compute the surface area of that portion of the sphere $x^2+y^2+z^2=a^2$ lying within the cylinder $\Bbb{T}:=\ \ x^2+y^2=by.$ My work: I start with only the $\Bbb{S}:=\ \ z=\sqrt{a^2-x^2-y^2}$ part and will later multiply it by $2$. $${\partial z\over \partial x}={-x\over \sqrt{a^2-x^2-y^2}}\ ;\ {\partial z\over \partial y}={-y\over \sqrt{a^2-x^2-y^2}}$$ Using the formula $$a(\Bbb{S})=\iint\limits_\Bbb{T}\sqrt{1+\left({\partial z\over \partial x}\right)^2+\left({\partial z\over \partial y}\right)^2}dx \ dy$$ I get $$a(\Bbb{S})=\iint\limits_{x^2+y^2\le by}\sqrt{1+{by\over a^2-by}}\ dy\ dx\\=\int_0^{b/2}\int\limits_{{b\over 2}-\sqrt{{b^2\over 4}-x^2}}^{{b\over 2}+\sqrt{{b^2\over 4}-x^2}}\sqrt{1+{by\over a^2-by}}\ dy\ dx\\=a\int_0^{b/2}\int\limits_{{b\over 2}-\sqrt{{b^2\over 4}-x^2}}^{{b\over 2}+\sqrt{{b^2\over 4}-x^2}}{1\over\sqrt{a^2-by}}dy \ dx\\=a\int_0^{b/2}\left\{\left[{-2\sqrt{a^2-by}\over b}\right]_{{b\over 2}-\sqrt{{b^2\over 4}-x^2}}^{{b\over 2}+\sqrt{{b^2\over 4}-x^2}}\right\}\ dx$$ How to proceed now? The integral seems intractable. Or is there a simpler parametrization? REPLY [5 votes]: Another approach in spherical coordinates: parametrize the surface with \begin{cases} x=a \sin\phi \cos\theta \\ y=a \sin\phi \sin\theta \\ z=a \cos\phi \end{cases} with $(\theta,\phi)\in [0,\pi]\times[0,\Phi]$, where $\Phi \in ]0,\pi]$ is the solution to \begin{cases} x^2+y^2+z^2=a^2 \\ x^2+y^2=by \end{cases} Substituting $x,y,z$ by their expressions in spherical coordinates yields $$ \Phi = \sin^{-1}\left(\frac{b}{a}\sin\theta\right) $$ It follows that $$ S=\int_{0}^{\pi}\int_0^{\Phi} a^2\sin\phi\; d\phi d\theta = a^2 \int_0^{\pi}\left(1-\cos\Phi\right) d\theta $$ With a little trigonometric work ($\cos x =\pm \sqrt{1-\sin^2x}$), we can show that $$ \cos\Phi = \sqrt{1-\left(\frac{b}{a}\sin\theta\right)^2} $$ Therefore $$ S = \pi a^2 - a \int_0^{\pi} \sqrt{a^2-b^2\sin^2\theta}\; d\theta $$
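As a numerical sanity check of this closed form, one can compare it against direct quadrature of the area element over the disk; a minimal SciPy sketch (the values of $a,b$ are arbitrary, with $b<a$):

    import numpy as np
    from scipy import integrate

    a, b = 2.0, 1.0

    # area element of z = sqrt(a^2 - r^2) in polar form is a*r/sqrt(a^2 - r^2);
    # the disk x^2 + y^2 <= b*y is 0 <= r <= b*sin(t) for 0 <= t <= pi
    direct, _ = integrate.dblquad(lambda r, t: a*r/np.sqrt(a**2 - r**2),
                                  0, np.pi, 0, lambda t: b*np.sin(t))

    closed = np.pi*a**2 - a*integrate.quad(
        lambda t: np.sqrt(a**2 - b**2*np.sin(t)**2), 0, np.pi)[0]

    print(direct, closed)  # both equal the upper-half area; they agree to quadrature accuracy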
<|endoftext|> TITLE: What is "Russian-style" mathematics? QUESTION [24 upvotes]: I've just stumbled upon Gorodentsev's upcoming textbook 'Algebra I'. The description of it claims that it's very 'Russian-style'. This book is the first volume of an intensive “Russian-style” two-year graduate course in abstract algebra, and introduces readers to the basic algebraic structures – fields, rings, modules, algebras, groups, and categories – and explains the main principles of and methods for working with them. What does this mean? What distinguishes 'Russian-style' from 'American-style' mathematics? REPLY [19 votes]: Russian-style should be understood not in opposition to American-style (that's cold war stuff) but rather in opposition to French-style or more precisely Bourbaki-style. The latter emphasizes formalism even sometimes at the expense of readability. The Russian style tends to focus on the essence rather than the formalism, and to emphasize what is novel. A good example of accessible, popular, and rigorous writing in the Russian style is a typical book by Vladimir Arnold; for example, his Mathematical methods of classical mechanics, an all-time favorite. The flip side of excessive formalism is often committing errors; this was richly illustrated in the case of Bourbaki by Adrian Mathias; see e.g., his http://link.springer.com/article/10.1007%2FBF03025863 .<|endoftext|> TITLE: Should I explore every mathematical theory myself first, or is it fine to read the proofs given? QUESTION [7 upvotes]: I have read a similar question posted on MSE, but honestly, I did not get the answer to what I was thinking. Suppose I am new to conic sections. Now, my book provides me an insight into what conic sections are, and their analytical focus-directrix definition, but doesn't really give me a proof of this focus-directrix property. I searched the internet to find a proof, and saw the ideas of Dandelin's spheres. I didn't read the whole proof, and I still haven't (I might after getting the answer), so that I can try to explore conics my own way, and maybe create my own proof of the property. But, provided that the property is already proved by someone, should I really do it, or is reading the already existing proof good enough for me? Because, if I try to do this for every mathematical theory, it would take me years to do so. So, what is good for a mathematician or a physicist? Re-invent the wheel (because you never know, maybe you discover something new by doing so) or read the works of others and use them as a black box? REPLY [4 votes]: I used to worry about this as well. I eventually got my Ph.D. So, maybe I'm not wholly unqualified to answer. My advice would be to study the things that you like to study (or need to study for class). As you work through books, articles, and problems, try to think of examples that you think help explain the subject and maybe conjecture a few results and try to prove them to yourself. If you get stuck or bored, move on. You can always revisit these things later and take another crack at them. Or, you might later discover that your particular example or conjecture is actually much more difficult than you first expected. And, it is often a good idea to hear another person's perspective on a subject, especially someone that has spent a lot of time thinking about it. The important thing is to not push yourself to the point of boredom or mathematical exhaustion. Have fun playing with the math. Try to guess which things would be true in the subject you are studying. Try to test your grasp on the subject with examples that you come up with. But, keep your mind engaged. How much of each subject you want/need to develop on your own versus how much you will learn from someone else will become evident to you with enough experience. That last sentence is a good thing to recognize. Getting good at math requires a LOT of time and experience. Just make sure you don't kill your interest by pressing yourself unnaturally. To avoid (more) rambling, I will draw my answer to an end. Ask any questions, and I will edit my responses in.<|endoftext|> TITLE: The Minkowski inequality for fractional order? QUESTION [6 upvotes]: Let $u\in C^\infty(\bar I)$ be given where $I=(0,1)$. Define $$ t(\alpha):=\left(\int_I\int_I \frac{|u(x)-u(y)|^\alpha}{|x-y|^{1+s\alpha}}\right)^{\frac1\alpha} $$ where $1<\alpha<2$, $0<s<1$. It's an exercise in the book.<|endoftext|> TITLE: What is mathematical logic? QUESTION [16 upvotes]: What does mathematical logic mean? In the book Analysis 1 by Terence Tao, it says: The purpose of this appendix is to give a quick introduction to mathematical logic, which is the language one uses to conduct rigourous mathematical proofs. Checking Wikipedia: Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. This seems like a completely different definition. Why, for example, is set theory considered part of logic? REPLY [2 votes]: Logic is generally understood to be the study of sound reasoning. Mathematical logic in the sense Tao uses this word is the kind of logic one uses when doing mathematics. This includes dealing with logical connectives (such as "and", "or", "if", and "if and only if"), quantifiers ("for all" and "exists"), variables, and proofs.
But, as sometimes happens in natural languages, one and the same word can have two (or more) different (though sometimes related) meanings. This might be the cause of your confusion. In fact, mathematical logic can also mean the branch of mathematics that deals with formulae, theories, proofs, models, … as mathematical objects. Of course, as all other branches of mathematics do, this branch of mathematics also uses mathematical logic in the former sense. The reason why some people regard set theory as a subfield of mathematical logic$^*$ in the latter sense is that these fields are historically quite related. You may be interested to learn about the foundational crisis. I found a talk given by mathematician Chaitin that gives a good overview of this topic: see Part 1, Part 2, Part 3, Part 4. By the way, the appendix on logic is included in the sample chapters of Tao's book. $^*$ But at the end of the day this is just a terminological convention. EDIT: This answer is just a restatement of Henry's comment: Terence Tao's 31 page appendix is really a description of the basic language and tools of mathematical proof to help understand the rest of the Analysis I book, rather than the deeper subject of mathematical logic. The sections are called: Mathematical statements; Implication; The structure of proofs; Variables and quantifiers; Nested quantifiers; Some examples of proofs and quantifiers; Equality.<|endoftext|> TITLE: Enumeration of finite automata QUESTION [7 upvotes]: There is a nice paper Enumeration of Finite Automata by Frank Harary and Ed Palmer which presents a formula $a(n,k,m)$ for the number of finite automata with $n$ states, $k$ input symbols and $m$ output symbols. It is stated in Corollary $3$ as \begin{align*} a(n,k,m)=\frac{1}{n!k!m!}\sum_{H_1} I(\alpha,\beta,\alpha)I(\alpha,\beta,\gamma) \end{align*} where the sum is over all permutations in $H_1=S_n^{S_n\times S_k}\times S_m^{S_n\times S_k}$ of the form $\{[(\alpha,\beta);\alpha^{-1}],[(\alpha,\beta);\gamma]\}$ with \begin{align*} I(\alpha,\beta,\gamma)=\prod_{p=1}^n\prod_{q=1}^k\left[\sum_{s|[p,q]}sj_s(\gamma)\right]^{j_p(\alpha)j_q(\beta)\langle p,q\rangle} \end{align*} Here the authors denote $[p,q]:=\operatorname{lcm}(p,q), \langle p,q\rangle:=\operatorname{gcd}(p,q)$, and $j_p(\alpha)$ denotes the number of cycles of the permutation $\alpha$ with length $p$. The special case $n=k,m=1$ is already analysed and calculated for small values of $n$ in this MSE post, especially with focus on $n=4$. Question: Maybe someone could provide some computations for the general case for small values of $n,k,m$? REPLY [2 votes]: In answering this question we refer to the algorithm at the MSE link, which works for the generalized problem as well. The only difference is that the values that go into the slots of the array/table are pairs of states and output symbols, meaning that when we transition from a certain column on an input symbol corresponding to a row, we transition to the state (first element of the pair) and output the symbol (second element of the pair). The action on the slots is the simultaneous action of $\pi$ and $\tau$ on the rows and columns; we now have a permutation $\sigma$ which acts on the set of output symbols, and the action on the values is the combined action of $\tau$ and $\sigma$ on the state / symbol pairs. We get the following table for one output symbol.
| 1| 1| 1| 1| 1| 1| 1| 1| | 3| 7| 13| 22| 34| 50| 70| 95| | 7| 74| 638| 4663| 28529| 151600| 713176| 3028727| | 19| 1474| 118949| 7643021| 396979499| 17265522590| 646203233957| 21243806443115| | 47| 41876| 42483668| 33179970333| 20762461502595| 10831034126757463| 4844565331763027596| 1896647286212566394157| | 130| 1540696| 23524514635| 274252613077267| 2559276179593762172| 19903050866658120066632| 132673733865643566661223817| 773869304738817313660236854435| | 343| 68343112| 18477841853059| 3802866637652928476| 626361440405926396941497| 85973094952794304259466151418| 10114722264843500593900485682759058| 1041247439945746392774732251877428013424| | 951| 3540691525| 19526400231564564| 81874932562648494674439| 274724907231470170012527305235| 768186632385442429091738459545921683| 1841148232300929744056375072663778725072045| 3861169308385212945415179151162048048461447621051| For two output symbols we have | 1| 2| 2| 3| 3| 4| 4| 5| | 6| 44| 226| 1036| 4006| 13876| 43186| 123706| | 22| 2038| 142336| 7775708| 341906882| 12592855970| 399366367444| 11132314379998| | 114| 176936| 238882846| 244698934716| 200649261017386| 137143648460408272| 80366174079209158078| 41217801421317353953038| | 538| 20943790| 694540531869| 17362195783419565| 347256965617453111707| 5787905149678353796143590| 82689320232608432438262174088| 1033688856029644143398545746261666| | 2800| 3108818680| 3081614657394158| 2300263170022800838590| 1373710145403734491538076692| 683647218221456315461840833799588| 291623393789554111334921119339297251576| 108848103655093534827120896470552784018126133| | 14435| 553255960308| 19368605578168164179| 510403370619400317035233276| 10760675018954199971112474584547034| 189053417206572805331242303827478007687534| 2846969183281612697167894035560332610102537605107| 37513627164757945129191686915360296965220882487348368322| | 76312| 114776687721990| 163754994767359896315206| 175823884588034784365611422263567| 151031502945525188132621372232074129315388| 108112560585492844973667875651850996929528575835574| 66334273232261168899346826889209523621370385072001650536116| 35612941825082950044316879351953518880328546726186269125209259942000| Three output symbols yield | 1| 2| 3| 4| 5| 7| 8| 10| | 6| 74| 775| 7124| 55668| 377269| 2255068| 12102178| | 29| 7623| 1804128| 329641077| 48317584819| 5910777204447| 620630699132987| 57098016161377374| | 190| 1501516| 10322146155| 53512221536494| 221968136483832014| 767306804276224740828| 2273639672252875423729778| 5895263464882668948056075498| | 1289| 401371270| 101367856946674| 19243544529701850104| 2922627429145967591227933| 369897467120287921148106491100| 40127586921742103692252419866530400| 3809020901470314640315364328599642887506| | 9673| 134138227473| 1518024410618449355| 12907594258334064169919121| 87803188849193004851359368791756| 497730359833453928180319002991414602093| 2418417068028280199534213597754694851805840225| 10281969996512134071147543063509604282591387558257520| | 73604| 53725010241266| 32201676604966459555889| 14499308203534486200843433873288| 5222906915046943511008193569385565417541| 1567819635143439097415728431946215896270059293161| 403397426941463986598664115278880491308873007636372427413| 90819310744609116970288225981171645606992548661728301980002516662| | 573442| 25081227120200634| 918865057207831149035535828| 25285803348327743049043999665003927370| 556668502782671968664754976635618690023788914186| 10212576716712592462402577334011641314012112279662417473469| 160593227242102911238351158110065456181421151497935704882980552606514| 
2209668743041973325985756217800328983151637526070225333484395817216844313778044| Four symbols yield | 1| 2| 3| 5| 6| 9| 11| 15| | 6| 81| 1183| 17320| 223743| 2527953| 25100642| 222144431| | 29| 11676| 6064606| 2593640209| 897009602752| 259029607981273| 64163314527895517| 13915354324987224434| | 209| 3831148| 81573276196| 1334647986999812| 17493019379544106141| 191083931326433751661244| 1789145512052234025354299479| 14658245204843197745963032946030| | 1605| 1790644262| 1896670209705424| 1517048789183286280242| 970906913413864886205472630| 517817738821504564293534451239523| 236717123156531446119639354041331039161| 94687056373953999303903668799913187496263156| | 14581| 1059379897194| 67316410303471722434| 3215992447007150335848738654| 122917096383192644964591012637376201| 3914970565374711299589044295533654728633307| 106880364202506644619748019682746095700393900152769| 2553144552899651934745530164746582340956543973128820263056| | 139393| 753537775187942| 3384772964731425916075399| 11417522742490309099171117430032545| 30811161705715253062014503052903675566658545| 69288800372821565423720577304077855202305583626701885| 133558404360787903168869516536280931557107488047811301767090944| 225261750393971075099732774525570356293879964632402213718679044305894097| | 1396571| 625251791124395555| 228938436067723951049495991006| 62929794221715160999635636523327894882612| 13838407708142508413727196626725814975774777251143465| 2535915030456565177161444959970701001632828430350354446662490458| 398324009007397248996962807526047717969141597988514641606768498289421689877| 54745234941096457415294245370001308972451724232455240696557887565208148810995582605398| Five yield | 1| 2| 3| 5| 7| 10| 13| 18| | 6| 81| 1283| 23718| 427097| 7038183| 103821898| 1372476565| | 29| 12621| 9875766| 7694431189| 5108729338005| 2866744631627614| 1383444387175373624| 584738631310521555854| | 209| 5269634| 242293771832| 9508729532667775| 303537782294910006324| 8092008307288214998320242| 184959457244832433282602143175| 3699331066099122391214267900044654| | 1652| 3522483774| 10830193709142911| 26326043763404282897041| 51400728418762283743166947873| 83657888529920202329649049898106090| 116710057646947398301738658574346204631684| 142468411615177769332030145694979476640229799189| | 15851| 3145805694347| 748102205731495912974| 136208975222504119847429651282| 19858370962230255015514418124978318079| 2412787002750586428934439397030434799264061139| 251274509502830170033287481345380174207693056359521218| 22897389793260955643229220128574252798224672181928261140465132| | 166704| 3451400880452119| 73411241836287162439679965| 1180551376563438246848941889675139885| 15191078168438817387019547141066538853359987716| 162897187467367310343607416594982652886027395559664704867| 1497241493787657622590696899117249253525915361372369634716838093562| 12041433120029892610323311791551075975557111745695889862014376128073109032711| | 1903565| 4453493876743114141| 9696353154834682640039652745383| 15885725788645815939897203091966549890622620| 20821721985157272922019024288021084022442430164588852951| 22742872566990523157952024381064067346859577618809430167348089260678| 21292527088890116346521008056915214793265230093299707297080802600716512185635775| 17442838191172723332310678848004599133452884005515399679140805741625547114446168989072880538| Finally we get for six symbols | 1| 2| 3| 5| 7| 11| 14| 20| | 6| 81| 1296| 25462| 538398| 11293138| 222523395| 4028465835| | 29| 12695| 11328242| 12588216476| 13507531099557| 12816676023294742| 10610435880654869474| 
7727294095780485593467| | 209| 5635034| 396518228841| 29902254119865429| 1947536351902062154396| 107300432454566001311927042| 5082116300041019725568491696927| 210740137620013511032529954013222997| | 1652| 4452248665| 28661573376513712| 168916250895768873373125| 817701868164546859278494745163| 3309982213851389919369842502624515185| 11489588802579132510260340618793545674029229| 34899323818332948931809633587657800749959429140381| | 15981| 5147747713851| 3350282292788028229116| 1806224092274722460193800299488| 785710893213334594665452752490935409600| 285033249600409431428643990739291312182972084132| 88635922473731155883430561365483225722614035385062387241| 24117625609927898779221726509298149056270088412428821435055351917| | 171494| 7721337186134447| 564461055370558962491069562| 32440112974696247296439224174402635608| 1495496356773389913366753876131348301821086183629| 57461231472727120738649283370058285613319924784137652332510| 1892438067444572851650149500498661434054764424790064535313952779756847| 54535174475104423211660022224834911399436311980199332329136348183225443895408945| | 2041940| 14003166710753529537| 128580139323392617149472430498611| 905045555050578843422814928359489284108944076| 5100536012710000997786910449314715054126988193344281363091| 23954874543703392448557828429937283387539096055257084732285733625757877| 96433013296267950226465899688337115485213562485757881838921103460239405763926814829| 339676614862729029614552301296020122485910436927008569295805654935518977116532247635480871741432| The Maple code for this was as follows. with(combinat); pet_cycleind_symm := proc(n) local p, s; option remember; if n=0 then return 1; fi; expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n)); end; pet_flatten_term := proc(varp) local terml, d, cf, v; terml := []; cf := varp; for v in indets(varp) do d := degree(varp, v); terml := [op(terml), seq(v, k=1..d)]; cf := cf/v^d; od; [cf, terml]; end; cycles_prod := proc(cyca, cycb) local ca, cb, lena, lenb, res, vlcm; res := 1; for ca in cyca do lena := op(1, ca); for cb in cycb do lenb := op(1, cb); vlcm := lcm(lena, lenb); res := res*a[vlcm]^(lena*lenb/vlcm); od; od; res; end; automaton := proc(N, M, K) option remember; local idx_slots, idx_cols, idx_syms, res, a, b, c, sim, flat_sim, sym, flat_sym, flat_a, flat_b, flat_c, cyc_a, cyc_b, len_a, len_b, p, q; if N > 1 then idx_slots := pet_cycleind_symm(N); else idx_slots := [a[1]]; fi; if M > 1 then idx_cols := pet_cycleind_symm(M); else idx_cols := [a[1]]; fi; if K > 1 then idx_syms := pet_cycleind_symm(K); else idx_syms := [a[1]]; fi; res := 0; for a in idx_slots do flat_a := pet_flatten_term(a); for b in idx_cols do flat_b := pet_flatten_term(b); sim := cycles_prod(flat_a[2], flat_b[2]); flat_sim := pet_flatten_term(sim); for c in idx_syms do flat_c := pet_flatten_term(c); sym := cycles_prod(flat_b[2], flat_c[2]); flat_sym := pet_flatten_term(sym); p := 1; for cyc_a in flat_sim[2] do len_a := op(1, cyc_a); q := 0; for cyc_b in flat_sym[2] do len_b := op(1, cyc_b); if len_a mod len_b = 0 then q := q + len_b; fi; od; p := p*q; od; res := res + p*flat_a[1]*flat_b[1]*flat_c[1]; od; od; od; res; end; output := proc(MXN, MXM, K) local data, N, M, fd, fname, width; data := table(); for N to MXN do data[N] := table(); for M to MXM do data[N][M] := automaton(M, N, K); od; od; fname := sprintf("automata-%d-%d-%d.txt", MXN, MXM, K); fd := fopen(fname, WRITE); for N to MXN do fprintf(fd, "|"); for M to MXM do width := nops(convert(data[MXN][M], base, 10)); fprintf(fd, "% *d|", width+1, data[N][M]); od; 
fprintf(fd, "\n"); od; fclose(fd); end;<|endoftext|> TITLE: What do we need to define a category? QUESTION [6 upvotes]: I used some results from category theory without thinking about its foundations. However, after reading a few topics on MSE, this subject haunts me. My question is: What do we need to define a category? According to some books, a category consists of a class $\text{Obj}$ of objects and a set $\text{Hom}$ of morphisms which satisfy some axioms. To me this means that to define a category we need some set theory. But there are many different set theories. Do they give rise to different category theories? Also, as I understand it, when we are talking about specific categories, like $\text{Set}$, $\text{Grp}$,... we mean models (interpretations) of the axioms of a category. Is that correct? REPLY [2 votes]: First, one needs to adopt a foundation of mathematics to define a set, category, and other mathematical objects! Different foundations give different category theories. Some 'categories', which are called big in one foundation, do not exist in another, e.g. the functor category between two large categories and the localisation of a category with respect to a proper class (large set) of its morphisms. The meanings of the terms (small) set and (proper) class, and the operations you can perform on them, depend on the adopted foundation. Shulman's Set theory for category theory and Mac Lane's One universe as a foundation for category theory discuss the effect of the foundation on the resulting category theory, although the latter is more focused on the advantages of a specific foundation.<|endoftext|> TITLE: Natural example of a strictly recursively enumerable set below the halting problem QUESTION [5 upvotes]: My question is about sets that are recursively enumerable, not recursive, and strictly weaker than the halting problem. I know that such sets do exist (in fact, infinitely many) - this is an answer to the famous Post's problem. But is there any natural set of this type? E.g., some set that was defined not only to be an example of something between $0$ and $0'$? All the proofs of undecidability I have seen so far (except for examples that make little sense outside of computability theory) consist of encoding some (at least) Turing complete model. REPLY [3 votes]: There is no known example of such a set, and I believe it is widely believed that there are no such examples at all (although of course it's difficult to formalize this). We can prove that there is no "canonical" example of an intermediate c.e. set. Specifically, it is known (proved by Lachlan) that there is no $e$ such that (1) for every $X$, $X<_T W_e^X<_T X'$ (that is, $W_e^-$ is a recipe for cooking up an intermediate c.e. set), and (2) for every $X\equiv_TY$, we have $W_e^X\equiv_TW_e^Y$ uniformly (that is, $W_e^-$ is actually finding an intermediate Turing degree - in some sense, $W_e^-$ is "invariant" in the appropriate sense: the amount of information in $W_e^X$ depends only on the amount of information in $X$; the "uniformly" at the end says that there is an algorithm providing a computation of $W_e^X$ from $W_e^Y$ when given an algorithm for computing $X$ from $Y$). Along similar lines, it is conjectured (Martin's conjecture) that every "reasonable" function from Turing degrees to Turing degrees is "almost always" an iterate of the jump operator $X\mapsto X'$.
(The precise statement(s) of Martin's conjecture are a bit technical, so I'll omit them unless you're specifically interested - let me know in a comment if you are.) And Martin's conjecture is known to hold for certain classes of functions. All of this points towards a negative answer to your question. However, there is still hope: a natural c.e. set whose definition fundamentally doesn't relativize could still exist. For example, consider Hilbert's 10th problem for $\mathbb{Q}$. Based on current knowledge, it is possible that the set of polynomials with rational coefficients which have rational solutions could be of intermediate degree. Of course, it's also possible that it is computable or complete, too. There are other examples, also coming from algebra, which could conceivably generate such a natural intermediate set in a "non-relativizing" way, so as to avoid the Martin/Sacks obstacles. However, that said, I think the general view is still one of pessimism. Things get better if we shift to classes of degrees ("mass problems"), in which case there are many natural examples. In some sense, mass problems are more natural than Turing degrees anyways: they represent the complexity of problems with more than one solution, and most problems in mathematics ("find an ideal in this ring," "find a subspace of this vector space," "find a descending path through this illfounded linear order", ...) are of this type. This is a point of view heavily exploited in reverse mathematics. However, Turing degrees are still in some sense "more fundamental" (at least, I would argue that) and so the shift to mass problems is not entirely satisfying (at least, to me).<|endoftext|> TITLE: Proving that a variety of dimension zero is discrete. QUESTION [5 upvotes]: Say we have a variety of dimension zero; how do we prove this is discrete? I have some ghost of an idea of what is going on, thanks to threads like this Why is every Noetherian zero-dimensional scheme finite discrete? but I can't formulate a concrete proof, even after reducing it to the affine case. REPLY [6 votes]: A noetherian ring has only finitely many minimal prime ideals. In a zero-dimensional ring, any prime ideal is maximal. Combining these two statements, we get that the spectrum of a noetherian ring of dimension zero is a finite set of closed points. In particular, the spectrum is discrete since any subset is closed (being a finite union of closed subsets).<|endoftext|> TITLE: Closest points between two lines QUESTION [10 upvotes]: I have two arbitrary lines in 3D space, and I want to find the distance between them, as well as the two points on these lines that are closest to each other. Naturally, this only concerns the skew case, since the parallel and intersecting cases are trivial. I know how to find the distance, as the question was asked before and answered here. I haven't found a good explanation on how to find the two points that determine that distance, though. So specifically, given two lines $$L_1=P_1+t_1V_1$$ $$L_2=P_2+t_2V_2$$ I would like to find two points $X_1$ on $L_1$ and $X_2$ on $L_2$ such that the distance between $X_1$ and $X_2$ is minimal. REPLY [4 votes]: I work in a language that lacks a solver for systems of equations as we see in Brian's answer, so I thought I'd try a different solution. It too relies on perpendicularity, but it works by scaling the unit vector $\hat{b}$ by the length of $\hat{b}$ projected onto the $ab$ plane, or the rejection of $\hat{a}$ and $\hat{c}$ from $\hat{b}$.
So given lines $$\vec{a_0}+\hat{a}t$$ and $$\vec{b_0}+\hat{b}t$$ and their cross product $$\hat{c}=\langle\hat{a} \times \hat{b}\rangle$$ then $$t=-\dfrac{|\vec{r}|} {\hat{b}\cdot\hat{r}} $$ where $\vec{r}$ is the rejection $$\vec{r}=\vec{d} - \hat{a}(\vec{d}\cdot \hat{a}) - \hat{c}(\vec{d}\cdot \hat{c}) $$ and $\vec{d}$ is the offset $$\vec{d}=\vec{b_0}-\vec{a_0}$$ I've put together a visualization using OpenSCAD, which converts pretty easily to glm/GLSL:

    module line(a, b=[0,0,0], width=0.05){
        hull(){
            translate(a) sphere(width);
            translate(b) sphere(width);
        }
    }
    function dot(a,b) = a*b;
    function normalize(a) = a/norm(a);

    a0 = [1,1,1];
    a1 = [5,3,2];
    a  = normalize(a1-a0);
    b0 = [3,2,0];
    b1 = [4,4,5];
    b  = normalize(b1-b0);
    %line(a0, a1);
    %line(b0, b1);

    cn = normalize(cross(b,a));
    projection_ = dot(b0-a0, a) * a;
    rejection = b0-a0 - dot(b0-a0, a) * a - dot(b0-a0, cn) * cn;
    closest_approach = b0 - b*norm(rejection)/dot(b, normalize(rejection));

    color("red")    line(a0, a0+projection_);
    color("green")  line(b0, b0-rejection);
    color("blue")   line(a0+projection_, a0+projection_+cn);
    color("yellow") line(b0, closest_approach);
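For readers without OpenSCAD, here is a minimal NumPy port of the same computation (same sample lines; the brute-force grid at the end is only a sanity check):

    import numpy as np

    a0, a1 = np.array([1., 1., 1.]), np.array([5., 3., 2.])
    b0, b1 = np.array([3., 2., 0.]), np.array([4., 4., 5.])
    a = (a1 - a0) / np.linalg.norm(a1 - a0)    # unit direction of line A
    b = (b1 - b0) / np.linalg.norm(b1 - b0)    # unit direction of line B
    c = np.cross(b, a)
    c /= np.linalg.norm(c)

    d = b0 - a0                                # offset between base points
    r = d - np.dot(d, a)*a - np.dot(d, c)*c    # the rejection described above
    t = -np.linalg.norm(r) / np.dot(b, r / np.linalg.norm(r))
    print(b0 + b*t)                            # closest point on line B

    # sanity check: brute-force minimisation over a parameter grid
    s_, t_ = np.meshgrid(np.linspace(-10, 10, 801), np.linspace(-10, 10, 801))
    dist = np.linalg.norm((a0 + s_[..., None]*a) - (b0 + t_[..., None]*b), axis=-1)
    i = np.unravel_index(dist.argmin(), dist.shape)
    print(b0 + t_[i]*b)                        # approximately the same point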
<|endoftext|> TITLE: Arithmetic or Geometric sequence? QUESTION [6 upvotes]: Given a sequence: $$1, \frac12, \frac13, \frac14, \frac15,...$$ Its explicit formula can be given as: $a(n) = \frac1n$ where $n \ge 1$. I actually want to know: is it a geometric sequence or an arithmetic one? I tried finding the common ratio and the common difference in this sequence to see if it's either one of them but was not successful. My common ratio ($r$) and common difference ($d$) were some horrible values. REPLY [8 votes]: The sequence you gave is called the harmonic sequence. It is neither geometric nor arithmetic. Not all sequences are geometric or arithmetic. For example, the Fibonacci sequence $1,1,2,3,5,8,...$ is neither. A geometric sequence is one that has a common ratio between its elements. For example, the ratio between the first and the second term in the harmonic sequence is $\frac{\frac{1}{2}}{1}=\frac{1}{2}$. However, the ratio between the second and the third elements is $\frac{\frac{1}{3}}{\frac{1}{2}}=\frac{2}{3}$ so the common ratio is not the same and hence this is NOT a geometric sequence. Similarly, an arithmetic sequence is one where its elements have a common difference. In the case of the harmonic sequence, the difference between its first and second elements is $\frac{1}{2}-1=-\frac{1}{2}$. However, the difference between the second and the third elements is $\frac{1}{3}-\frac{1}{2}=-\frac{1}{6}$ so the difference is again not the same and hence the harmonic sequence is NOT an arithmetic sequence. REPLY [5 votes]: This is not a geometric sequence. $a_1=1, a_2=\frac12, a_3=\frac13$ If this is a geometric sequence, then it is necessary that $\frac{a_2}{a_1}=\frac{a_3}{a_2}$ $$\frac{a_2}{a_1}=\frac{1}{2}$$ $$\frac{a_3}{a_2}=\frac{2}{3}$$ The two numbers are different, hence it is not a geometric sequence. Similarly, you can verify that $$a_2-a_1 \neq a_3-a_2$$ To prove that something is a geometric sequence, you have to show that $\frac{a_{n+1}}{a_n}$ is a constant. To prove that something is an arithmetic sequence, you have to show that $a_{n+1}-a_n$ is a constant. For this problem, $$\frac{a_{n+1}}{a_n}=\frac{1/(n+1)}{1/n}=\frac{n}{n+1}=\frac{1}{1+1/n}$$ which is dependent on $n$, while $$a_{n+1}-a_n = \frac{1}{n+1}-\frac{1}{n}=-\frac{1}{n(n+1)}$$ which is again dependent on $n$.<|endoftext|> TITLE: How to split 59 in $\mathbb{Q}(\sqrt{13})$ QUESTION [5 upvotes]: How to split 59 in $\mathbb{Q}(\sqrt{13})$? I know that 59 is prime and we can write $(x+y\sqrt{13})(x-y\sqrt{13})=59$, i.e. $x^2-13y^2=59$, but I can't find the $x,y$. REPLY [2 votes]: You can save yourself a lot of time by availing yourself of the Legendre symbol. Does the prime $p$ split in $\mathcal{O}_{\mathbb{Q}(\sqrt{d})}$ (where $d$ is positive and squarefree)? If $$\left(\frac{d}{p}\right) = d^{\frac{p - 1}{2}} \equiv -1 \bmod p$$ then the answer is absolutely not. But if it's $1$ and $\mathcal{O}_{\mathbb{Q}(\sqrt{d})}$ is a unique factorization domain, then the answer is yes, and if it doesn't have unique factorization, then the answer is maybe. The Legendre symbol is JacobiSymbol[d, p] in Wolfram Mathematica and Wolfram Alpha. So with $59$ we see that $13^{29} \equiv 58 \equiv -1 \bmod 59$, which means that $59$ definitely does not split in $\mathcal{O}_{\mathbb{Q}(\sqrt{13})}$. This works even when the domain contains "half" integers. For example, $$\left(\frac{13}{17}\right) = 1$$ and $$\left(\frac{9 - \sqrt{13}}{2}\right) \left(\frac{9 + \sqrt{13}}{2}\right) = 17.$$
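The Euler-criterion computation is a one-liner to confirm in plain Python:

    print(pow(13, 29, 59))  # -> 58, i.e. 13^29 ≡ -1 (mod 59): 13 is a non-residue mod 59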
<|endoftext|> TITLE: Behavior of a logarithmic derivative QUESTION [6 upvotes]: Suppose the function $f:(0,1] \to \mathbb{R}$ is differentiable and satisfies $f(x) \geqslant 0$ and $\lim_{x \to 0+}f(x) = 0$. One way to demonstrate that the derivative of a function tends to behave worse than the function is as follows. Show that the logarithmic derivative $f'/f$ diverges to $\pm \infty$ as $ x \to 0+$ or is, at least, unbounded in a neighborhood of $x =0$. This is quite simple to show under more restrictive conditions. If we assume that $f'/f$ is positive, bounded and integrable on $[\delta,1]$ for all $\delta > 0$, then, with a change of variable $y = f(x),$ we have $$\lim_{ \delta \to 0+}\int_{\delta}^{1}\frac{f'(x)}{f(x)}\,dx = \lim_{ \delta \to 0+}\int_{f(\delta)}^{f(1)} \frac{dy}{y} = \lim_{ \delta \to 0+}[\log f(1) - \log f(\delta)] = + \infty,$$ and this implies that $f'(x)/f(x) \to +\infty$ as $x \to 0+.$ The problem then is to show that, under the weaker conditions where $f'/f$ need not be integrable, etc., we have either divergence or unboundedness of $f'/f$ near $x =0$. I suspect this is true, but I am not sure that a divergent limit is always necessary. REPLY [2 votes]: I think the question can be resolved by applying the mean value theorem. For any points $y > x >0$ there exists $\xi_{x,y}$ between $x$ and $y$ such that $$\log f(y) - \log f(x) = \frac{f'(\xi_{x,y})}{f(\xi_{x,y})}(y-x).$$ Thus, $$\lim_{x \to 0+} \frac{f'(\xi_{x,y})}{f(\xi_{x,y})} = [\log f(y) - \lim_{x \to 0+} \log f(x)]/y = + \infty.$$ Hence, on any interval $(0,y]$ no matter how small we can find a sequence of points $(x_n)$ such that $f'(x_n)/f(x_n) \to + \infty.$ The mean value theorem is non-constructive with respect to the intermediate point, so we cannot determine that $x_n \to 0$ or more generally that $\lim_{x \to 0+} \xi_{x,y}= 0.$ This shows, at least, that $f'/f$ must be unbounded in any neighborhood of $x=0.$ An example would be $f(x) = x^2\sin(1/x),$ where $$\frac{f'(x)}{f(x)} = \frac{2}{x}- \frac{\cot(1/x)}{x^2}.$$ Here the limit does not exist, but the logarithmic derivative is clearly unbounded.<|endoftext|> TITLE: Learning Algebraic Geometry by EGA QUESTION [5 upvotes]: Does it make sense to study algebraic geometry by Grothendieck's EGA? I know French and I want to know whether I can read Grothendieck's treatise to explore this area. I am familiar with abstract algebra, commutative algebra, algebraic topology (in the amount covered by Bourbaki's books), and differential geometry. REPLY [2 votes]: If you are thinking of reading all pages from the first one, I don't know. But if you skip some parts, I think yes. Even to study commutative algebra, EGA chapters $0_{III}, 0_{IV}$ are a good sequel to Atiyah's (even today, with Bourbaki's Commutative Algebra chapter X available, I much prefer the exposition of EGA $0_{IV}$). Does it consume more time than other texts? Probably yes, but it depends on the reader. I think it is not so unusual to move from Hartshorne to EGA I and II when learning scheme theory in order to avoid so many noetherian hypotheses. Shall I understand that you have studied these topics from Bourbaki's books? In some sense EGA is close to Bourbaki in style, and since I think it is mainly a matter of taste to start with EGA, Hartshorne, Liu, Mumford-Oda, etc., if you like Bourbaki, probably EGA is a good choice. In this case I would recommend you to start with chapter I (after learning basic sheaf theory from any short source), and only read chapter $0_I$ as needed. Having said this, I have not yet read but a few pages of Vakil's book, but with this caution, Vakil's book seems to be a good text, in some sense not far from EGA in style, and it was written as a textbook, so it is probably a better choice to start. I hope it will soon be in print form, but at least we have an e-reader version. Sorry for writing all these comments as an answer, but it was too long for a comment.<|endoftext|> TITLE: Which of these is the correct statement of Wilson's theorem? QUESTION [6 upvotes]: My textbook states: If $p$ is a prime, then $(p-1) \equiv -1\pmod p$. But the online version is $(p-1)! \equiv -1\pmod p$. Which one is correct? REPLY [19 votes]: The second one is Wilson's theorem. Though the first one is not absurd, since $$p\equiv 0\pmod p$$ you always have $$p-1\equiv -1\pmod p$$ whether $p$ is prime or not.<|endoftext|> TITLE: If $ a,b,c\in \left(0,\frac{\pi}{2}\right)\;,$ Then prove that $\frac{\sin (a+b+c)}{\sin a+\sin b+\sin c}<1$ QUESTION [5 upvotes]: If $\displaystyle a,b,c\in \left(0,\frac{\pi}{2}\right)\;,$ then prove that $\displaystyle \frac{\sin (a+b+c)}{\sin a+\sin b+\sin c}<1$ $\bf{My\; Try::}$ Using $$\sin(a+\underbrace{b+c}) = \sin a\cdot \cos (b+c)+\cos a\cdot \sin (b+c)$$ $$ = \sin a\cdot (\cos b\cos c-\sin b\sin c)+\cos a(\sin b\cos c+\cos b\sin c)$$ $$ = \sin a\cos b\cos c-\sin a\sin b\sin c+\cos a \sin b\cos c+\cos a\cos b\sin c$$ Now how can I solve it after that? Help required, thanks. REPLY [4 votes]: Using $$\sin (a+b+c)-\sin a-\sin b-\sin c $$ $$= 2\cos\left(\frac{2a+b+c}{2}\right)\sin \left(\frac{b+c}{2}\right)-2\sin \left(\frac{b+c}{2}\right)\cos \left(\frac{b-c}{2}\right)$$ So $$ = 2\sin \left(\frac{b+c}{2}\right)\left[\cos \left( \frac{2a+b+c}{2}\right)-\cos \left(\frac{b-c}{2}\right)\right]$$ $$ = -4\sin \left(\frac{a+b}{2}\right)\sin \left(\frac{b+c}{2}\right)\sin \left(\frac{a+c}{2}\right)<0,$$ because we are given $\displaystyle a,b,c \in \left(0,\frac{\pi}{2}\right)$, so that $\displaystyle \frac{a+b}{2},\frac{b+c}{2}\;,\frac{c+a}{2}\in \left(0,\frac{\pi}{2}\right)$. So we get $$\sin (a+b+c)<\sin a+\sin b+\sin c\Rightarrow \frac{\sin (a+b+c)}{\sin a+\sin b+\sin c}<1$$
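A quick random spot-check of the inequality over the stated range (a NumPy sketch with an arbitrary seed):

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = rng.uniform(0, np.pi/2, size=(3, 100_000))
    ratio = np.sin(a + b + c) / (np.sin(a) + np.sin(b) + np.sin(c))
    print(ratio.max())  # stays below 1, consistent with the inequality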
<|endoftext|> TITLE: Quasiperiodic tiling of the hyperbolic plane? QUESTION [5 upvotes]: Has anyone produced a quasiperiodic tiling of the hyperbolic plane? Or is there a reason it cannot be done? By quasiperiodic I mean that the structure is not strictly periodic (i.e. equal to itself after translation) but that any arbitrarily large neighbourhood of any point can be found identically at infinitely many other locations. REPLY [2 votes]: Yes, this question has been somewhat studied, for instance by Chaim Goodman-Strauss. See this paper of his. See also this paper and references in both. Below is an image from the second paper which gives the first step in building a strongly aperiodic set of tiles in the hyperbolic plane, which I think Chaim would be ok with me copying here (the image is not reproduced in this transcription). Perhaps one of the most important points brought up in this work is that the notion of aperiodicity or quasiperiodicity in the hyperbolic setting is more subtle than in the Euclidean case, and one should be careful with the definition being used (thus the use of 'weakly aperiodic' and 'strongly aperiodic').<|endoftext|> TITLE: Prob. 5, Sec. 20, in Munkres' TOPOLOGY, 2nd ed: What is the closure of $\mathbb{R}^\infty$ in $\mathbb{R}^\omega$ in the uniform topology? QUESTION [12 upvotes]: Here's Prob. 5, Sec. 20 in the book Topology by James R. Munkres, 2nd edition: Let $\mathbb{R}^\infty$ be the subset of $\mathbb{R}^\omega$ consisting of all sequences that are eventually zero. What is the closure of $\mathbb{R}^\infty$ in $\mathbb{R}^\omega$ in the uniform topology? Justify your answer. My effort: Let $x = \left( x_1, x_2, x_3, \ldots \right)$ be an element of the closure of $\mathbb{R}^\infty$ in the uniform metric topology on $\mathbb{R}^\omega$. Then, for any real number $\varepsilon \in (0, 1)$, we can find a point $y = \left( y_1, y_2, y_3, \ldots \right)$ in $\mathbb{R}^\infty$ such that $\tilde{\rho}(x,y) < \varepsilon$, where $$\tilde{\rho}(x,y) = \sup \left\{ \ \min \left\{ \ \left\vert x_n - y_n \right\vert, \ 1 \ \right\} \ \colon \ n \in \mathbb{N} \ \right\}.$$ So, for each $n \in \mathbb{N}$, we have $\left\vert x_n - y_n \right\vert < \varepsilon$. Now as $ y \in \mathbb{R}^\infty$, there exists a natural number $N$ such that $y_n = 0$ for all $n > N$. So we can conclude that $\left\vert x_n \right\vert < \varepsilon$ for all $n > N$, from which it follows that the sequence $x$ converges to the real number $0$. Conversely, if $x = \left( x_1, x_2, x_3, \ldots \right)$ is a sequence of real numbers converging to $0$, then, for any given real number $\varepsilon \in (0, 1)$, we can find a natural number $N$ such that $\left\vert x_n \right\vert < \frac{\varepsilon}{2}$ for all $n > N$. Now let $y = \left( x_1, \ldots, x_N, 0, 0, \ldots \right)$. Then clearly $y \in \mathbb{R}^\infty$ and $\tilde{\rho}(x,y) \leq \frac{\varepsilon}{2}$, thus showing that $x$ is in the closure of $\mathbb{R}^\infty$. Thus, the closure of $\mathbb{R}^\infty$ in the uniform topology on $\mathbb{R}^\omega$ equals the set $c_0$ of all the sequences of real numbers which converge to $0$, in the standard metric on $\mathbb{R}$. Am I right? REPLY [2 votes]: Looks great!
The only improvement I can see is that in the converse direction, you might as well just use $\varepsilon$ instead of $\frac{\varepsilon}{2}$.<|endoftext|> TITLE: $\sum_{k\geq 1}\left(1-2k\,\text{arctanh}\frac{1}{2k}\right)=\frac{\log 2-1}{2}$ - looking for an elementary solution QUESTION [6 upvotes]: As stated in the title, I am looking for the most elementary proof of the following identity: $$ \sum_{k\geq 1}\left(1-2k\,\text{arctanh}\frac{1}{2k}\right) = \frac{\log 2-1}{2}\tag{1}$$ I have a proof that exploits $2\,\text{arctanh}\frac{1}{2k}= \log(2k+1)-\log(2k-1)$, summation by parts and Stirling's inequality, but I have the strong feeling I am missing something quite trivial, maybe related with some Riemann sum or with $$ \sum_{k\geq 1}\left(1-2k\,\text{arctanh}\frac{1}{2k}\right) = -\sum_{m\geq 1}\frac{\zeta(2m)}{4^m(2m+1)}. \tag{2}$$ Any help is appreciated, thanks in advance. I forgot to mention that I would like to avoid proving $$\forall t\in(0,1),\qquad \sum_{k\geq 1}\frac{4t^2}{4k^2-t^2}=2-\pi t \cot\frac{\pi t}{2} \tag{3}$$ for first. That clearly allows us to compute the LHS of $(1)$ as an integral, but requires Herglotz trick or something similar (Weierstrass products, digamma function, whatever). REPLY [6 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\sum_{k\ \geq\ 1}\bracks{1 - 2k\,\mrm{arctanh}\pars{1 \over 2k}} = {\ln\pars{2} - 1 \over 2}:\ ?}$. $$\bbx{\ds{% \begin{array}{c} \mbox{Indeed, the partial sum can be evaluated explicitly by}\ elementary\ means\mbox{:} \\[3mm] \ds{\sum_{k = 1}^{N}\bracks{1 - 2k\,\mrm{arctanh}\pars{1 \over 2k}} = N - N\ln\pars{2N + 1} - N\ln\pars{2} + \ln\pars{\bracks{2N}! \over N!}} \\[3mm] = \ds{N\bracks{1 - \ln\pars{1 + {1 \over 2N}}} - 2N\ln\pars{2} -N\ln\pars{N} + \ln\pars{\bracks{2N}! \over N!}} \\[3mm] \mbox{The proof is at the}\ \ds{\color{#f00}{very\ end}}. \end{array}}} $$ It yields the correct limit: $$ \lim_{N \to \infty}\bracks{N - N\ln\pars{2N + 1} - N\ln\pars{2} + \ln\pars{\bracks{2N}! \over N!}} = \bbox[#ffe,10px,border:1px dotted navy]{\ds{% {\ln\pars{2} - 1 \over 2}}} $$ because $$ \ln\pars{\bracks{2N}! \over N!} \,\,\,\stackrel{\mrm{as}\ N\ \to\ \infty}{\sim}\,\,\, \pars{2N + \color{#f00}{1 \over 2}}\ln\pars{2} + N\ln\pars{N} - N $$ $\ds{\color{#f00}{Without\ Stirling}}$, it's difficult to recover the crucial above $\ds{\color{#f00}{red}}$ mentioned $\ds{\color{#f00}{1 \over 2}}$ factor. Namely, \begin{align} \ln\pars{\bracks{2N}! 
\over N!} & = \sum_{k = 1}^{N}\ln\pars{k + N} = N\ln\pars{N} + N\ \overbrace{\bracks{{1 \over N}\sum_{k = 1}^{N}\ln\pars{1 + {k \over N}}}} ^{\ds{\stackrel{\mrm{as}\ N\ \to\ \infty}{\sim}\,\,\,2\ln\pars{2} - 1}} \\[5mm] & \stackrel{\mrm{as}\ N\ \to\ \infty}{\sim}\,\,\, \pars{2N + \color{#f00}{0}}\ln\pars{2} + N\ln\pars{N} - N \end{align} Finite Sum: \begin{align} &\sum_{k = 1}^{N}\bracks{1 - 2k\,\mrm{arctanh}\pars{1 \over 2k}} = \sum_{k = 1}^{N}\bracks{1 - k\ln\pars{2k + 1 \over 2k - 1}} \\[5mm] = &\ N - \sum_{k = 1}^{N}k\ln\pars{2k + 1} + \sum_{k = 1}^{N}k\ln\pars{2k - 1} \\[5mm] = &\ N - \sum_{k = 1}^{N}k\ln\pars{2k + 1} + \sum_{k = 1}^{N - 1}\pars{k + 1}\ln\pars{2k + 1} \\[5mm] = &\ N - \sum_{k = 1}^{N}k\ln\pars{2k + 1} + \sum_{k = 1}^{N}\pars{k + 1}\ln\pars{2k + 1} - \pars{N + 1}\ln\pars{2N + 1} \\[5mm] = &\ N - \pars{N + 1}\ln\pars{2N + 1} + \sum_{k = 1}^{N}\ln\pars{2k + 1} \\[5mm] = &\ N - \pars{N + 1}\ln\pars{2N + 1} + \sum_{k = 3}^{2N + 1}\ln\pars{k} - \sum_{k = 2}^{N}\ln\pars{2k} \\[1cm] = &\ N - \pars{N + 1}\ln\pars{2N + 1} + \bracks{-\ln\pars{2} + \sum_{k = 2}^{2N}\ln\pars{k} + \ln\pars{2N + 1}} \\[5mm] - &\ \bracks{\pars{N - 1}\ln\pars{2} + \sum_{k = 2}^{N}\ln\pars{k}} \\[1cm] = &\ \bbx{N - N\ln\pars{2N + 1} - N\ln\pars{2} + \ln\pars{\bracks{2N}! \over N!}} \end{align}
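The identity itself is also easy to check numerically (partial sums converge like $O(1/N)$); a plain-Python sketch:

    import math

    N = 200_000
    s = sum(1 - 2*k*math.atanh(1/(2*k)) for k in range(1, N + 1))
    print(s, (math.log(2) - 1)/2)  # the two values agree to about six decimals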
<|endoftext|> TITLE: A ring in which the two operations are equal is {0} QUESTION [5 upvotes]: Let R be a ring in which the two operations are equal, i.e., $ a + b = ab \mbox{ }\forall a,b \in R $. Prove that $R = \{0 \}$. I tried to prove that $R \subset \{0 \} $ and $ \{0 \} \subset R $. For the second inclusion, we have $ 0 + 0 = 0 = 0 \cdot 0 $. So $\{0 \} \subset R $. However, I can't figure out a way of showing that $R \subset \{0 \} $. Any tips? REPLY [10 votes]: Although the question has already been answered pretty accurately, I would like to detail the typical reasoning used in this case. What you want to prove is that $R \subset \{0 \} $. What you should do is try to prove that every element of $R$ is also an element of $\{0\}$. As User1006 wrote, the way to achieve this is: Let $x\in R.$ $$\begin{align}x+0 &= x\cdot0 \\ x\cdot0 &= 0~~~~~\textrm{ by definition of ring}\end{align}$$ (This line is not that trivial) $$\begin{align} ~~x+0 &= 0\\ x&= 0\,.\end{align}$$ $x$ is any element of $R.$ Hence, $\forall x \in R, x \in \{0\}.$ $$~~ R \subset \{0\}.$$ This is basically what User1006 wrote, but every time you come across such a question, this is the formality you should keep in mind.<|endoftext|> TITLE: Natural Deduction First Order Logic $∃y∀x(P(x) ∨ Q(y))↔∀x∃y(P(x) ∨ Q(y))$ QUESTION [5 upvotes]: I'm working on some of my logic exercises for my end term exam in Predicate Logic. One of these exercises is "Show with natural deduction that $\vdash ∃y∀x(P(x) ∨ Q(y))↔∀x∃y(P(x) ∨ Q(y))$" I'm getting the part that shows you can do $∃y∀x(P(x) ∨ Q(y)) \vdash ∀x∃y(P(x) ∨ Q(y))$. It's the other way around I'm not able to do. I'm getting stuck introducing the Universal Quantifier whilst having them as a free variable. Where I'm getting stuck I know this isn't correct/possible, but I cannot figure out how it should be done... Is there anyone who is able to solve it? Or is it simply not possible? Thanks in advance! REPLY [3 votes]: Hint: It must be derivable, due to the fact that it is valid (in classical logic). We may check it with two equivalences: $\exists x \ (\alpha \lor \beta(x)) \equiv (\alpha \lor \exists x \beta(x))$, if $x$ is not free in $\alpha$, and (this one holds only in classical logic): $\forall x \ (\alpha \lor \beta(x)) \equiv (\alpha \lor \forall x \beta(x))$, if $x$ is not free in $\alpha$. Thus, starting from: $∃y∀x(P(x) ∨ Q(y))$ we can get, by the second equivalence above: $∃y(∀xP(x) ∨ Q(y))$ and then, using the first one, we get the equivalent: $(∀xP(x) ∨ ∃yQ(y))$. Now we re-apply the second equivalence to get: $∀x(P(x) ∨ ∃yQ(y))$ and finally: $∀x∃y (P(x) ∨ Q(y))$.<|endoftext|> TITLE: How to prove easily that a generalized ellipse is $C^1$ QUESTION [5 upvotes]: Consider the graphical representation below (figure not reproduced in this transcription): It describes a generalized ellipse $(E)$ in the following sense: $$(E)=\{M \ \ | \ \ d(M,(S_1))+d(M,(S_2))=18 \}$$ where $(S_1)$ and $(S_2)$, playing the role of "generalized foci", are the squares centered at $F_1(-7,0)$ and $F_2(7,0)$ with side length $2$, and $d(M,(C))$ is the distance from a point $M$ to a convex set $(C)$ defined by $\inf_{C \in (C)} d(M,C)$ where $d(.,.)$ is the usual (Euclidean) distance. $d(M,(C))$ being a continuous function of $M$ (see for example (Distance to a closed set is continuous.)), the generalized ellipse $(E)$ is piecewise continuous. I would like to show that it is more than that: that it is smooth, in a geometrical sense (what is sometimes called $\Delta^1$): at each connection point of two arcs (see below) there is a common tangent (as a consequence, it will be possible to define a $C^1$ parameterization). Question: is there a simple means to establish this "smoothness"? My work: I have established this smoothness explicitly by first computing the equations of the different constituent arcs of $(E)$, then, for each connecting point (such as $L, N, I$...), by computing the left and right derivatives (when they exist) and checking that their values are equal, or checking the presence of a vertical tangent. Due to $Ox$ and $Oy$ symmetries, it suffices to describe the first-quadrant part of (E), or, in an equivalent form, the different parts of arc MLNIS. Here are the equations of the constituent arcs: $$\begin{cases} \text{Arc ML:}\ &\text{line segment:} \ \ & x=10 & -1 \leq y \leq 1\\ \text{Arc LN:}\ &\text{ellipse with foci B and C:} \ \ & \frac{(x-1)^2}{81}+\frac{(y-1)^2}{32}=1& 8 \leq x \leq 10, 1 \leq y \leq \frac{41}{9}\\ \text{Arc NI:}\ &\text{parabola:} \ \ & y=- \frac{1}{36}(x+6)^2+10& 6 \leq x \leq 8, \frac{41}{9} \leq y \leq 6\\ \text{Arc IS:}\ &\text{ellipse with foci B and A:} \ \ & \frac{x^2}{81}+\frac{(y-1)^2}{45}=1& -6 \leq x \leq 6, 6 \leq y \leq 8 \end{cases}$$ But of course, it is very tedious... Moreover I would like to generalize this study to "generalized foci" that could be any polygonal shape, and even any convex curve. REPLY [4 votes]: Here's a conceptual sketch: If $P$ is a convex polygon, the distance function $$ f_{P}(x) = d(x, P) $$ is of class $C^{1}$ outside of $P$: Each level set of $f_{P}$ is a union of finitely many line segments (each parallel to and of the same length as some side of $P$) and circle arcs (each centered at a vertex of $P$). Since an arc of a level set $\{f_{P} = c\}$ is tangent to any segment it meets (the sublevel set $\{f_{P} \leq r\}$ is the Minkowski sum of $P$ with a closed disk of radius $r$), each level curve is of class $C^{1}$. The gradient of $f_{P}$ is therefore continuous (as well as non-vanishing) outside $P$, proving that $f_{P}$ is continuously-differentiable.
Now suppose $(P_{i})_{i=1}^{n}$ is a finite collection of convex polygons. Let $f_{i} := f_{P_{i}}$ be the distance to $P_{i}$, and let $$ f = \sum_{i=1}^{n} f_{i} $$ be the sum of the corresponding distance functions. Since each $f_{i}$ is of class $C^{1}$ outside $P_{i}$, the function $f$ is of class $C^{1}$ outside the union of the $P_{i}$. If $c$ is sufficiently large (e.g., if the level curve $\{f = c\}$ encloses all the $P_{i}$), then the level set $\{f = c\}$ is of class $C^{1}$ by the implicit function theorem: The gradient of $f$ is a sum of non-vanishing gradient fields, and for a point sufficiently far from every $P_{i}$, the gradients of the $f_{i}$ lie in some closed half-plane, so their sum is non-zero. Particularly, if $P_{1}$ and $P_{2}$ are your squares, the gradient of $f = f_{1} + f_{2}$ is non-vanishing on your "generalized ellipse" $E = \{f = 18\}$, so $E$ is of class $C^{1}$.<|endoftext|> TITLE: Let $C,D$ be categories and $F:C\to D$ and $G:D\to C$ be adjoint functors. Then $F$ is fully faithful iff the unit is an isomorphism? QUESTION [6 upvotes]: Let $C,D$ be categories and $F:C\to D,G:D\to C$ be such that $F$ is a left adjoint of $G$. Prove that $F$ is fully faithful iff the unit is an isomorphism. (This is an exercise from the book by T. Leinster) I think I can do one direction: $\impliedby$: If we let $\eta$,$e$ be unit/counit then as $\eta:\text{id}_C \implies GF$ is an isomorphism, it follows that the composite $$ \text{Hom}(x,y)\to\text{Hom}(F(x),F(y))\to\text{Hom}(GF(x),GF(y)) $$ for all $x,y\in C$ is an isomorphism, which implies that $F$ is faithful. But $F$ must be full since the composite $$ G=G\circ \text{id}_D \implies GFG \implies G $$ is an identity transformation (I'm rather skeptical about this). But I don't have any idea regarding the reverse direction, in which I have to show that the arrow $x\to GF(x)$ has an inverse for all $x\in C$. REPLY [8 votes]: It's actually possible to prove something a bit more general: $F$ is faithful if and only if every component of $\eta $ is a monomorphism. $F$ is full if and only if every component of $\eta$ is a split epimorphism. The first statement holds because for all objects $X,Y$ and all arrows $u,v:X\to Y$ of $C$, $$\eta_Y\circ u=\eta_Y\circ v\Longleftrightarrow F(u)=F(v)$$ (because the two equalities correspond to one another through the natural bijection $\operatorname{Hom}_C(X,GFY)\simeq \operatorname{Hom}_D(FX,FY)$). For the second statement, first assume that every $\eta_X$ is a split epimorphism, with some section $s_X$. Take an arrow $g:FX\to FY$, then $f=s_Y\circ G(g)\circ \eta_X$ is an arrow $X\to Y$, and $$GF(f)\circ \eta_X=\eta_Y\circ f = G(g)\circ \eta_X,$$ which shows that $F(f)=g$. Assume now that $F$ is full; then for every $X$ there must be some arrow $s_X:GFX\to X$ such that $F(s_X)=\epsilon_{FX}:FGFX\to FX$. Now $$\epsilon_{FX} \circ F(\eta_X\circ s_X)=\epsilon_{FX} \circ F(\eta_X)\circ \epsilon_{FX}=\epsilon_{FX}=\epsilon_{FX}\circ F(id_{GFX}),$$ hence $\eta_X\circ s_X=id_{GFX}$.<|endoftext|> TITLE: Is there a relationship between germs and Taylor coefficients? QUESTION [5 upvotes]: For the definition of germ, please see below. I am having some difficulty internalizing the concept of germ due to an inability to think of concrete examples, which led to me having the following questions: 1.
Are the germs of holomorphic functions (at a point $p$) simply single-element equivalence classes, because the Identity theorem implies that any two holomorphic functions which agree identically on a neighborhood are identical on their entire domains of definition? 2. Can one describe the members of the equivalence class of a smooth germ explicitly for simple enough examples? E.g. Let $M=\mathbb{R}$, and let $f(x)=x$; then is the germ of $x$ at $0$ just $$[x]_p=\{g\in C^{\infty}: g(0)=0, g'(0)=1,g^{(r)}(0)=0\ \forall\ r \ge 2 \}? $$ 3. Do similar results hold for the members of the equivalence class of a $C^k$ germ, e.g. $$[x]_p = \{ g \in C^k: g(0)=0, g'(0)=1, g^{(r)}(0)=0\ \forall\ 2 \le r \le k \}? $$ 4. Two distinct (real or complex) analytic functions cannot coincide on a neighborhood of a point -- is this the smallest class of functions for which this holds? (I.e. are germs non-trivial precisely for classes of functions which do not always coincide with their Taylor series, and are the equivalence classes simply the functions with the same Taylor series up to a certain order?) My conjectures are motivated by the idea of derivatives being "infinitesimal" or local approximations of functions, as well as the fact that the standard example of a non-analytic smooth function is the only function I can think of which belongs to the germ of another function, but other than this intuition I have no reason to think that the germs of analytic, smooth, or differentiable functions can be described in the manner above. That, and this comment in Lee (on p.72): The germ definition has a number of advantages. One of the most significant is that it makes the local nature of the tangent space clearer, without requiring the use of bump functions. Because there do not exist analytic bump functions, the germ definition of tangent vectors is the only one available on real-analytic or complex-analytic manifolds. The only reason I might suspect that these are false is that they would lead to a much simpler definition (in my opinion) than the one given in the book. On the other hand, a definition of germs based on equality of Taylor coefficients would not generalize very well to classes of continuous functions and the like, which is perhaps the intent. Definition: (taken from p.71 of Introduction to Smooth Manifolds by John Lee): A smooth function element on [a smooth manifold] $M$ is an ordered pair $(f,U)$, where $U$ is an open subset of $M$ and $f: U \to \mathbb{R}$ is a smooth function. Given a point $p \in M$, let us define an equivalence relation on the set of all smooth function elements whose domains contain $p$ by setting $(f,U) \sim (g,V)$ if $f \equiv g$ on some neighborhood of $p$. The equivalence class of a function element $(f,U)$ is called the germ of $f$ at $p$. The set of all germs of smooth functions at $p$ is denoted by $C_p^{\infty}(M)$... Let us denote the germ at $p$ of the function element $(f,U)$ simply by $[f]_p$; there is no need to include the domain of $U$ in the notation, because the same germ is represented by the restriction of $f$ to any neighborhood of $p$. To say that two germs $[f]_p$ and $[g]_p$ are equal is simply to say that $f \equiv g$ on some neighborhood of $p$, however small.
REPLY [4 votes]: The equivalence class of a function element $(f,U)$ around $p \in M$ will never consist of a single element unless the topology of $M$ is such that you can find an open neighborhood $V$ of $p$ which is "minimal" in the sense that any other open neighborhood $W$ of $p$ must contain it. Otherwise, you can take some open neighborhood $p \in V \subsetneq U$ and consider the different (but equivalent) function element $(f|_{V}, V)$. The identity theorem implies that if $(f, U)$ and $(g,V)$ are equivalent holomorphic function elements around $p$ and if $U \cap V$ is connected, then $f|_{U \cap V} = g|_{U \cap V}$, which won't necessarily be the case if you work with smooth or continuous functions. However, a different version of the identity theorem states that if $f \colon U \rightarrow \mathbb{C}$ and $g \colon V \rightarrow \mathbb{C}$ are holomorphic functions with $p \in U \cap V$ and $f^{(i)}(p) = g^{(i)}(p)$ for all $i \geq 0$ then $f$ and $g$ will agree on an open neighborhood of $p$ and so they will define the same germ. Hence, you can describe a germ $[f]$ of holomorphic functions at $p$ by providing a list of all derivatives of $f$ (or any other $g$ with $[g] = [f]$) at $p$. In fact, it is more convenient to provide the list $(a_0, \dots, a_n, \dots)$ of the coefficients of the local power series expansion $f(z) = \sum_{n=0}^{\infty} a_n (z - p)^n$ which are given by $a_n = \frac{f^{(n)}(p)}{n!}$. This way, you can see that not all possible sequences arise as a germ of some function elements - only those sequences for which the power series defined by $\sum_{n=0}^{\infty} a_n (z - p)^n$ converges in a neighborhood of $p$. Thus, the set of germs of a holomorphic function at $p$ is in bijection with the set $$ \left \{ (a_0, \dots, a_n, \dots) \, \big| \, a_i \in \mathbb{C}, \limsup_{n \to \infty} |a_n|^{\frac{1}{n}} < \infty \right \}. $$ You can't nicely describe the germ of a smooth/$C^k$/continuous function around $p \in M$ unless $M$ is zero dimensional. If $f \colon \mathbb{R} \rightarrow \mathbb{R}$ and $g \colon \mathbb{R} \rightarrow \mathbb{R}$ are smooth and define the same germ around $x = 0$ then $f^{(i)}(0) = g^{(i)}(0)$ for all $i \geq 0$, but this list is not enough to characterize the germ explicitly. For example, the zero function and the function $e^{-\frac{1}{x^2}}$ (extended by $0$ at $x = 0$; a standard example of a function whose Taylor series converges, but not to the function) share the derivatives of all orders at $x = 0$ but do not define the same germ. Characterizing the germ of a function in some category is the same as characterizing all possible local behaviors of functions and this is hopeless unless the category is rigid in some sense (for example, if you work with holomorphic/analytic/polynomial functions).<|endoftext|> TITLE: A direct proof that there is a prime between $n$ and $n^2+1$? QUESTION [8 upvotes]: I am trying to prove there is a prime between $n$ and $n^2+1$ without using Bertrand's postulate or the Prime Number Theorem. Do you have any idea? Yuval Filmus's answer for this problem provides a quite useful idea. But since $n^2+1\lt n!$ for $n\ge4$, I do not know how to use it for this question. REPLY [3 votes]: We need to find (or at least prove the existence of) a positive integer $K = K(n)$ having a prime factor between $n$ and $n^2+1$. For a prime $p$ and a positive integer $k$, let $v_p(k)$ denote the exponent of $p$ in the prime factorisation of $k$, i.e.
$v_p(k)$ is characterised by $p^{v_p(k)} \mid k$ and $p^{v_p(k)+1} \nmid k$. Next recall Legendre's formula $$v_p(n!) = \sum_{\ell = 1}^{\infty} \biggl\lfloor \frac{n}{p^{\ell}}\biggr\rfloor = \sum_{\ell = 1}^{\bigl\lfloor \frac{\log n}{\log p}\bigr\rfloor} \biggl\lfloor \frac{n}{p^{\ell}}\biggr\rfloor$$ for the prime factorisation of the factorials. Much of Legendre's formula generalises to arbitrary (increasing) arithmetic progressions. If we have an arithmetic progression $$A_m = a + m\cdot d$$ with $a \geqslant 0$ and $d > 0$, then for every $r$ coprime to $d$ exactly one of every $r$ consecutive terms of the progression is a multiple of $r$. Let $\mu_r(k)$ denote the number of multiples of $r$ among $A_1, A_2, \dotsc, A_k$. Without any special knowledge about $d$ and $r$ we cannot pin down $\mu_r(k)$ exactly, but we know that for $r$ coprime to $d$ we have $$\biggl\lfloor \frac{k}{r}\biggr\rfloor \leqslant \mu_r(k) \leqslant \biggl\lceil \frac{k}{r}\biggr\rceil.$$ By the same argument that shows Legendre's formula we can, for the "A-orial" $$P_k = \prod_{m = 1}^k A_m,$$ see that $$v_p(P_k) = \sum_{\ell = 1}^{\infty} \mu_{p^{\ell}}(k)$$ for all primes $p$. For primes not dividing $d$, although we cannot give an explicit exact form for $v_p(P_k)$, we have pretty reasonable bounds, namely $$\sum_{\ell = 1}^{\bigl\lfloor \frac{\log k}{\log p}\bigr\rfloor} \biggl\lfloor \frac{k}{p^{\ell}}\biggr\rfloor \leqslant v_p(P_k) \leqslant \sum_{\ell = 1}^{\bigl\lfloor \frac{\log A_k}{\log p}\bigr\rfloor} \biggl\lceil \frac{k}{p^{\ell}}\biggr\rceil,$$ since evidently $\mu_r(k) = 0$ for $r > A_k$. Now let's come to the proof that for $n > 1$ there is always a prime strictly between $n$ and $n^2$ (for $n = 1$ we need the extra $+1$). Fix $n \geqslant 2$, and let $p_s$ be the largest prime not exceeding $n$, so $p_s \leqslant n < p_{s+1}$. Consider the arithmetic progression $$A_m = 1 + m \cdot p_s$$ and let $k = p_s - 1$. On the one hand, we have $$P_k = \prod_{m = 1}^k A_m > \prod_{m = 1}^k (A_m - 1) = p_s^{k}\cdot k!\,.\tag{1}$$ On the other hand, we have $A_k = p_s(p_s - 1) + 1 = p_s^2 - p_s + 1 < p_s^2$ and thus by the above $$v_p(P_k) \leqslant \sum_{\ell = 1}^{\bigl\lfloor 2 \frac{\log p_s}{\log p}\bigr\rfloor}\biggl\lceil \frac{k}{p^{\ell}}\biggr\rceil \leqslant \biggl\lfloor 2\frac{\log p_s}{\log p}\biggr\rfloor + \sum_{\ell = 1}^{\infty} \biggl\lfloor \frac{k}{p^{\ell}}\biggr\rfloor = \biggl\lfloor 2\frac{\log p_s}{\log p}\biggr\rfloor + v_p(k!)$$ for all primes $p \neq p_s$, hence $$\prod_{p < p_s} p^{v_p(P_k)} \leqslant \prod_{p < p_s} \Bigl( p_s^2\cdot p^{v_p(k!)}\Bigr) = p_s^{2(s-1)}\cdot k!\,.\tag{2}$$ Also, none of the $A_m$ is divisible by $p_s$, whence $v_{p_s}(P_k) = 0$. Therefore it follows from $(1)$ and $(2)$ that $$\prod_{p \leqslant n} p^{v_p(P_k)} < P_k$$ and consequently $P_k$ has a prime factor larger than $n$ as soon as $2(s-1) \leqslant k = p_s-1$, or equivalently $p_s \geqslant 2s - 1$, which in fact holds for all $s$. And since a prime factor of $P_k$ divides at least one $A_m, \, 1 \leqslant m \leqslant k$, it follows that all prime factors of $P_k$ are $\leqslant A_k < p_s^2 \leqslant n^2$. So for $n \geqslant 2$ there always is at least one prime $q$ satisfying $n < q < n^2$ (or, slightly more tightly, $n < q \leqslant n^2 - n + 1$).<|endoftext|> TITLE: When irreducible elements of a UFD remain irreducible in a ring extension QUESTION [6 upvotes]: Let $U$ be a Noetherian UFD and let $D$ be a Noetherian integral domain (not known to be a UFD) such that $U \subseteq D$.
Further assume that $U$ and $D$ have the same finite Krull dimension. Of course, generally, an irreducible (=prime) element of $U$ may become reducible in $D$. What can be said about such pairs of domains with the additional property that every irreducible element of $U$ remains irreducible in $D$? An example: $U=\mathbb{C}[x^2]$, $D=\mathbb{C}[x^2][x^3]$; if I am not wrong, every irreducible element of $\mathbb{C}[x^2]$ remains irreducible in $D=\mathbb{C}[x^2][x^3]$ (though not prime). Edit: If my above question is too general, then I wish to ask the following question: Given an irreducible element $u \in U$, can one find a "nice" criterion which guarantees that $u$ remains irreducible in $D$? New edit: Another question: If we further assume that $U \subseteq D$ is étale, then is it true that every irreducible element of $U$ remains irreducible in $D$? or is it true that every prime element of $U$ remains prime in $D$? Please see this recent question. Thank you very much! REPLY [3 votes]: This does not force $D$ to be a UFD as you originally asked; here is a counterexample. Take $U = \mathbb{Z}_{(2)}$ and $D = \mathbb{Z}_{(2)}[X]/(X^2 - 8)$. Then $U$ is a DVR and its only non-zero prime is $(2)$. An easy computation shows that all units in $D$ are $a + bX$, where $a$ is a unit in $U$. Using this it is not hard to show that $2$ remains irreducible. But clearly we have $$ 2^3 = 8 = X \cdot X $$ in $D$, showing that $D$ is not a UFD.<|endoftext|> TITLE: If two Riemannian manifolds can be isometrically immersed in each other, are they isometric? QUESTION [19 upvotes]: Let $M,N$ be smooth compact oriented Riemannian manifolds with boundary. Suppose that both $M,N$ can be isometrically immersed in each other. Must $M,N$ be isometric? Does anything change if we also assume $\operatorname{Vol}(M)=\operatorname{Vol}(N)$? Note: I assume $M,N$ are connected (Otherwise, as mentioned by Del, we can take $N$ to be two disjoint copies of $M$). Of course, if both manifolds can be isometrically embedded in each other, then they are isometric. This follows from volume considerations: Suppose $i:M \to N,j:N \to M$ are isometric embeddings. Then, $i(M),M$ are isometric, hence $\operatorname{Vol}(M)=\operatorname{Vol}(i(M))\le \operatorname{Vol}(N)$. Similarly, $\operatorname{Vol}(N)\le \operatorname{Vol}(M)$. Thus, $\operatorname{Vol}(i(M))=\operatorname{Vol}(N)$. Since $i(M)$ is compact, it is a closed subset of $N$. Thus, if $i(M) \neq N$, then $N\setminus i(M)$ is open, and so has a positive volume, contradicting $\operatorname{Vol}(i(M))=\operatorname{Vol}(N)$. This shows $i,j$ are surjective, thus isometries. Updates and Remarks: $(1) \,$ If $M$, $N$ have no boundaries, the answer is positive. This follows easily from a metric argument. Let $i:M \to N, j:N \to M$ be the given isometric immersions. Then $i(M)$ is clopen in $N$, hence $i$ is surjective. Similarly, $j$ is surjective. A possible generalization to the case with boundaries: Assuming that every smooth orientation preserving isometric immersion maps boundary into boundary (see this question), we know that $j \circ i(\partial M) \subseteq \partial M$, so we can imitate the above argument in this case: First, we note $i(\partial M) \subseteq \partial N$ (since $j(N^o) \subseteq M^o$). It follows $i(M^o)$ is clopen in $N^o$, hence $i(M^o)=N^o$. Since $i(M)$ is closed in $N$, and contains the dense subset $N^o$, $i$ is surjective, and moreover $i(\partial M) = \partial N$, $i(M^o) = N^o$.
By symmetry, $j$ is surjective, and the same argument as in the previous case implies $j \circ i:M \to M $ is a surjective nonexpanding map, hence a metric isometry. Then, the $1$-Lipschitzity of $i,j$ implies $i$ is a metric isometry. So, by the positive answer to this question $i$ is a smooth Riemannian isometry. $(2)$ It is enough to prove that an orientation-preserving isometric immersion $M \to M$ is a Riemannian isometry (and in particular maps $\partial M$ onto $\partial M$). Indeed, let $i:M \to N, j:N \to M$ be the given immersions and assume the above statement holds. Then $j \circ i:M \to M$ is an isometry, and so $j \circ i(\partial M) = \partial M$. This implies that $i(\partial M) \subseteq \partial N$ (since $j(N^o) \subseteq M^o$). Also, $j \circ i:M \to M$ is an isometry $\Rightarrow$ $i$ is injective and $j$ is surjective. By symmetry, $i,j$ are bijections. Since we know that $i(\partial M) \subseteq \partial N$, $i(M^o)\subseteq N^o$, and $i$ is surjective, it follows that $i(\partial M) = \partial N$, $i(M^o)= N^o$. Since $i$ is in particular a metric isometry, the positive answer to this question shows $i^{-1}$ is smooth, hence $i$ is a Riemannian isometry. REPLY [4 votes]: $\DeclareMathOperator{\vol}{vol}\newcommand{\Bar}[1]{\overline{#1}}$tl;dr: Yes, $M$ and $N$ are isometric, assuming only that each is connected, complete, and of finite volume. Lemma 1: If $(M, g)$ is a complete Riemannian manifold, $(N, h)$ is a connected Riemannian manifold, and $\dim M = \dim N$, then an isometric immersion $i:(M, g) \to (N, h)$ is a surjective covering map. Proof: The image $i(M)$ is open (because $i$ is an isometric immersion, hence a local diffeomorphism) and closed (because $(M, g)$ is complete) and non-empty, hence equal to $N$ (since $N$ is connected). Let $(\Bar{M}, \Bar{g}) = (M, g)/i$ denote the Riemannian quotient. That is, define an equivalence relation on $M$ by $p \sim p'$ if and only if $i(p) = i(p')$. Since $i$ is an isometric immersion and $\dim M = \dim N$, the quotient acquires the structure of a smooth Riemannian manifold isometric to $(N, h)$. Let $\pi:M \to \Bar{M}$ denote the quotient map. Let $q$ be an arbitrary point of $\Bar{M}$, and $V_{r} = V_{r}(q) \subset (\Bar{M}, \Bar{g})$ the geodesic ball of radius $r$ about $q$. Fix a point $p \in \pi^{-1}(q)$ arbitrarily, and choose $r > 0$ small enough that $U_{r}(p) \subset (M, g)$, the geodesic ball of radius $r$ about $p$, is mapped isometrically to $V_{r}$ by $\pi$. To complete the proof, it suffices to show that $\pi^{-1}(V_{r})$ is a disjoint union of geodesic balls, each mapped isometrically to $V_{r}$ by $\pi$. With the notation of the preceding paragraph, $U_{r}(p) \subset \pi^{-1}(V_{r})$. Conversely, if $x$ is a point of $\pi^{-1}(V_{r})$, so that $\Bar{x} = \pi(x) \in V_{r}$, there is a minimal geodesic $\Bar{\gamma}$ joining $\Bar{x}$ to $q$. Since $\pi$ is a local isometry, the geodesic $\gamma$ that starts at $x$ and satisfies $\pi_{*}\gamma'(0) = \Bar{\gamma}'(0)$ is a lift: $\Bar{\gamma} = \pi \circ \gamma$. Consequently, $\gamma$ joins $x$ to some point $p$ in $\pi^{-1}(q)$. Since $d(x, p) = d(\Bar{x}, q) < r$, we have $x \in U_{r}(p)$. Lemma 2: If $(M, g)$ and $(N, h)$ are Riemannian manifolds with $(M, g)$ connected, complete, and of finite volume, and if there exist isometric immersions $i:M \to N$ and $j:N \to M$, then $j \circ i:M \to M$ is an isometry. Proof: Suppose $i:M \to N$ and $j:N \to M$ are isometric immersions. (In particular, $\dim M = \dim N$.)
The composition $j \circ i:M \to M$ is an isometric immersion, hence by Lemma 1 a covering map, say with $d$ sheets, so that $\vol(M) = d\vol(M)$. Since $\vol(M)$ is finite, $d = 1$. That is, $j \circ i$ is a diffeomorphism as well as a local isometry, hence an isometry. Corollary: If $(M, g)$ and $(N, h)$ are complete, connected, finite-volume Riemannian manifolds, and if there exist isometric immersions $i:(M, g) \to (N, h)$ and $j:(N, h) \to (M, g)$, then $i$ and $j$ are isometries. Proof: By Lemma 2, $j \circ i$ is bijective, so $j$ is surjective and $i$ is injective. Reversing roles, $i \circ j$ is bijective, so $i$ is surjective and $j$ is injective. That is, each of $i$ and $j$ is a bijective isometric immersion, hence an isometry.<|endoftext|> TITLE: How to determine if conditional expectations with respect to different measures are equal a.s.? QUESTION [7 upvotes]: Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\mathcal{A}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. Let $Q_{\mathcal{A}}$ be a probability measure on $(\Omega, \mathcal{A})$ and define, for all $P$-integrable $f$, $$Q(f) := \int E_P(f \mid \mathcal{A})dQ_{\mathcal{A}}.$$ Note that $E_P(f \mid \mathcal{A})$ is the conditional expectation with respect to $P$. Also note that $Q$ defines a measure on $(\Omega, \mathcal{F})$ by taking $f$ to be an indicator function (we abuse notation by writing $Q(A)$ for $A \in \mathcal{F}$). Motivation. The idea is that we start with a "prior" probability space $(\Omega, \mathcal{F}, P)$. This extends to a linear functional (expectation) on the space $L^1$ of $P$-integrable functions $f$. Then we "learn" something about the sub-algebra $\mathcal{A}$ and adopt the new probability $Q_{\mathcal{A}}$ defined on $\mathcal{A}$. The question arises: How to extend this new probability $Q_{\mathcal{A}}$ to all of $\mathcal{F}$ (and thereby $L^1$)? We consider the extension $Q$ defined above. Question. Does $E_P(f \mid \mathcal{A}) = E_Q(f \mid \mathcal{A})$ a.s. ($P$)? Added Question. Is $Q \ll P$? The a.s. equality would follow if I could show that $$\int_A E_Q(f \mid \mathcal{A}) dP = \int_A f dP$$ for all $A \in \mathcal{A}.$ But I'm not sure what can be said when integrating a $Q$-conditional expectation against the measure $P$. I believe I can show the result for the case where $\mathcal{A}$ is generated by a countable partition $\{A_i \}_{i \in I}$ with $P(A_i)>0$ and $Q_{\mathcal{A}}(A_i) > 0$ for all $i \in I$. In that case, we have $$E_Q(f \mid \mathcal{A}) = \sum_i E_Q(f \mid A_i) \mathbf{1}_{A_i},$$ so it suffices to show that $E_Q(f \mid A_i) = E_P(f \mid A_i)$ for all $i \in I$. To that end we calculate (I abuse notation, writing $A_i = \mathbf{1}_{A_i}$) $$\begin{align} E_Q(f \mid A_i) &= \frac{1}{Q(A_i)} \int_{A_i}fdQ \\ &= \frac{1}{Q(A_i)} \int E_P(fA_i \mid \mathcal{A}) dQ_{\mathcal{A}} \\ &= \frac{1}{Q(A_i)} \int \left(\sum_{j \in I} \mathbf{1}_{A_j} \frac{1}{P(A_j)} \int_{A_i}fA_j dP \right)dQ_{\mathcal{A}} \\ &= \frac{Q_{\mathcal{A}}(A_i)}{Q(A_i)} \frac{1}{P(A_i)} \int_{A_i} f dP \\ &= E_P(f \mid A_i).\end{align}$$ The problem with extending this to general $\mathcal{A}$ is that I can't say explicitly what the conditional expectations (almost surely) are. REPLY [6 votes]: It's a very interesting question, because it leads to a useful discussion of matters which are usually overlooked when one speaks of conditional expectations.
The first thing to note is that one should be extremely careful with writing things like $$ E_Q(f \mid \mathcal{A}) = E_P(f \mid \mathcal{A}) \tag{1} $$ for some measures $P$ and $Q$ (with $Q$ not necessarily defined as in the OP). The sad truth is that the conditional expectation $E_Q(f \mid \mathcal{A})$ is defined modulo $Q$-null sets. This makes equality (1) totally meaningless unless the measures $Q$ and $P$ have the same null sets, i.e. they are equivalent. Now turning to the particular situation, $E_{Q_{\mathcal A}}(E_P(f\mid\mathcal A))$ cannot be defined if $Q_{\mathcal A}\not\ll P|_{\mathcal A}$. Indeed, in this case there is some $A\in\mathcal A$ with $Q_{\mathcal A}(A)>0 = P(A)$. Changing the values of $E_P(f\mid\mathcal A)$ on $A$ does not change this conditional expectation. However, it does change the value of $E_{Q_{\mathcal A}}(E_P(f\mid\mathcal A))$. Consequently, we must assume that $Q_{\mathcal A}\ll P|_{\mathcal A}$. Moreover, if $P|_{\mathcal A}\not \ll Q_{\mathcal A}$, then (1) cannot hold $\pmod P$. Indeed, in this case there is some set $B\in\mathcal A$ such that $P(B)>0 = Q(B)$. Then we can change the left-hand side of (1) on $B$ arbitrarily. On the bright side, (1) does hold $\pmod Q$ even in this case. To show this, note that by definition of $E_Q(\cdot\mid\mathcal A)$, (1) holds iff for any $A\in \mathcal A$ $$ E_Q(A\,f ) = E_Q(A\,E_P(f \mid \mathcal{A})). $$ But $$ E_Q(A\,f) = Q(A\, f) = E_{Q_{\mathcal{A}}}(E_P(A\,f \mid \mathcal{A})) \\ = E_{Q}(E_P(A\,f \mid \mathcal{A})) = E_{Q}(A\,E_P(f \mid \mathcal{A})), $$ as required. The answer to the second question is positive (but, I repeat, you must assume $Q_{\mathcal A}\ll P|_{\mathcal A}$). Indeed, if $P(A) = 0$, then $E_P(A\mid\mathcal A) = 0 \pmod P$ thanks to the tower property, therefore, $E_P(A\mid\mathcal A) = 0 \pmod Q_\mathcal A$ in view of the absolute continuity, so $$ Q(A) = E_{Q_{\mathcal{A}}}(E_P(A\mid \mathcal{A})) = 0, $$ as claimed. (Alternatively, you can just show, by definition, that $dQ/dP = dQ_{\mathcal A}/dP|_{\mathcal A}$.)<|endoftext|> TITLE: Probability against winning a raffle QUESTION [5 upvotes]: "If there are 10000 raffle tickets, all of which are sold, and you purchased 20 of these tickets, what are the odds against you winning?" This was a question I got wrong on a recent test which I plan to retake (as an altered version of the original), but I need help understanding how to go about solving this. I originally thought that all I needed to do was take $1$ minus the probability of winning, but that ended up being incorrect. Any help? REPLY [8 votes]: The subtlety here is one of language. If you were asked to find the probability that you lose the raffle, you'd be correct, just take $1-P(win)$. However, you are being asked for the odds against winning. Odds represent the ratio of possible outcomes; they are different from probability. In simple examples, this roughly means that probability is $\frac{\text{Number of ways to win}}{\text{Total number of outcomes}}$, whereas odds against are $\frac{\text{Number of ways to lose}}{\text{Number of ways to win}}$. For example, suppose we have a fair 6-sided die. The odds against you rolling a 2 are 5:1, since there are 5 outcomes in which you lose (fail to roll a two) and only 1 outcome where you win (roll a 2). In your example, this translates to the following: 9980 of the tickets are other people's tickets; if theirs gets called, you lose. The other 20 are yours.
So assuming the lottery is played fairly, the odds against you winning are 9980:20, since in 9980 outcomes, you lose, and in 20 outcomes, you win. This ratio can be reduced to 499:1.<|endoftext|> TITLE: Computing a tricky limit QUESTION [7 upvotes]: $$ \lim_{n \to \infty} \frac{1^m + 2^m + 3^m + ... + (2n-1)^m }{n^{m+1}} $$ I am kind of stuck since I cannot put it into a form that would involve the integral of a certain function. I know somehow it would be easy if we can compare this limit to a Riemann sum. Any ideas? REPLY [14 votes]: Observe \begin{align} \sum^{2n-1}_{k=1} \frac{k^m}{n^m n} = \sum^{2n-1}_{k=1} \left(\frac{k}{n} \right)^m\frac{1}{n} \approx \int^2_0 x^m\ dx. \end{align}<|endoftext|> TITLE: Why isn't $D_\infty$ the set of symmetries of a circle? QUESTION [13 upvotes]: According to Wikipedia, "the infinite dihedral group Dih∞ is an infinite group with properties analogous to those of the finite dihedral groups." However, it doesn't appear that this has anything to do with the symmetries of the circle, which surprised me, because that seems like the most natural generalization of the finite-order dihedral groups, which are sets of symmetries of regular polygons. Put another way, since regular polygons "approach" becoming a circle as the number of vertices $n\to\infty$, it seems like $D_\infty$ should be the symmetries of the circle. So what kinds of symmetries does $D_\infty$ represent, and why was this group chosen for extension of the finite dihedral groups over the symmetries of the circle? (Or are they related in some way I'm just not grasping?) REPLY [16 votes]: What you think of when taking $n$ to $\infty$ is this: [figure: regular polygons inscribed in a common circle] That is, you've got polygons inscribed in a common circle. Note however that even in that model, the limit to infinity would not get the full circle, because only rational multiples of $2\pi$ will appear. But the limit that leads to $D_\infty$ is more like this: [figure: regular polygons with a fixed side length, growing ever larger] And if you continue that to infinity, it's easy to see that the sides will approach a straight line with equidistant points, and a rotation will end up as a translation on that line.<|endoftext|> TITLE: Four color theorem disproof? QUESTION [42 upvotes]: My brother-in-law and I were discussing the four color theorem; neither of us are huge math geeks, but we both like a challenge, and tonight we were discussing the four color theorem and whether there is a way to disprove it. After some time scribbling on the back of an envelope and about an hour of trial-and-error attempts in Sumopaint, I can't seem to come up with a pattern that only uses four colors for this "map". Can anyone find a way (algorithmically or via trial and error) to color it so it fits the four color theorem? REPLY [6 votes]: To answer the "algorithmically" question, this map has some regions that only border four others. There is a relatively short, algorithmic proof that if you can 4-colour all but one of the regions of a map, and the last region, R, only borders four others (call them R_1, R_2, R_3, R_4 in clockwise ordering about R), then you can colour the whole map. To do this, work as follows. It is easy unless R_1 to R_4 already use all the colours. Say they are blue, green, red, yellow in that order around region R. Now we try to recolour R_1 to red, in the hope that this will allow us to colour R blue. If we do that, we have to recolour any red region which borders R_1 blue. Then we have to recolour any blue region which borders one of these red, and so on. We keep doing this until we run out of things to recolour.
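This recolouring loop is easy to mechanise. Here is a minimal sketch in Python of a single red/blue chain flip on a toy map; the adjacency-list encoding and all names are our own illustration, not from the original discussion:

```python
def flip_chain(adj, colour, start, a, b):
    """Swap colours a and b on the connected component of regions
    coloured a or b that contains `start` (one Kempe-chain flip).
    Assumes `start` is currently coloured a or b."""
    swap = {a: b, b: a}
    stack, seen = [start], {start}
    while stack:
        r = stack.pop()
        colour[r] = swap[colour[r]]
        for nb in adj[r]:
            if nb not in seen and colour.get(nb) in (a, b):
                seen.add(nb)
                stack.append(nb)

# Toy map: uncoloured region "R" touches four coloured regions R1..R4.
adj = {
    "R":  ["R1", "R2", "R3", "R4"],
    "R1": ["R", "R2", "R4"],
    "R2": ["R", "R1", "R3"],
    "R3": ["R", "R2", "R4"],
    "R4": ["R", "R3", "R1"],
}
colour = {"R1": "blue", "R2": "green", "R3": "red", "R4": "yellow"}

flip_chain(adj, colour, "R1", "blue", "red")   # try to free up blue for R
if all(colour.get(nb) != "blue" for nb in adj["R"]):
    colour["R"] = "blue"
print(colour)   # R1 is now red, so R can legally be coloured blue
```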
Now one of two things happens: either we can colour R blue or we had to recolour R_3 blue. In the latter case, there was a chain of red and blue regions stretching from R_1 to R_3, but there can't also be a chain of green and yellow regions stretching from R_2 to R_4 (they have to cross somewhere, but the fact that they use different colours means that they can't). So now if we try the same trick recolouring R_2 yellow, any yellow regions next to R_2 green, and so on, this time we won't have to recolour R_4, and we will be able to colour R green. We've now shown that if we can colour everything apart from a region that meets four (or fewer) others, we can do this recolouring trick and then colour the last region. Similarly, if we have a colouring of some of the regions, and one of the uncoloured regions only borders four coloured ones, we can recolour in this way and then colour that region as well. So if we can find an ordering of regions such that each one borders at most four previous ones, we can progressively recolour in this way. In this map we can -- basically progressively remove regions that only have four neighbours in what's left, then reverse that ordering -- so this algorithm will work. This method was the basis of Kempe's incorrect proof of the 4-colour theorem, and was used by Heawood to prove the 5-colour theorem (using five colours we are ok so long as there is always a region we can remove which borders at most five others, but that is true for any plane map). It can be used to easily find a 4-colouring of Martin Gardner's "April Fools" map, which would be very difficult to find by trial and error.<|endoftext|> TITLE: Understanding higher derivatives as multilinear mappings QUESTION [6 upvotes]: I'm trying to understand how to relate the higher derivatives to multilinear mappings. Let $f$ be a differentiable function. Then, since we have $Df:V\subset \mathbb{R}^n\rightarrow \text{Lin}(\mathbb{R}^n,\mathbb{R}^p) $, can I say that $Df\in \text{Lin}(\mathbb{R}^n,\text{Lin}(\mathbb{R}^n,\mathbb{R}^p))$? Also I'm trying to relate this new way - for me at least - of thinking of higher order derivatives with what I already know, for example calculating the Hessian matrix by taking the usual partial derivatives. The book I'm using has the following theorem to allow me to compute the derivatives of multilinear mappings. [Theorem given in the book as an image; it states that for a multilinear map $f$, $Df(a_1,\dots,a_k)(h_1,\dots,h_k)=\sum_i f(a_1,\dots,h_i,\dots,a_k)$.] So, if I can think of $Df$ as in $\text{Lin}(\mathbb{R}^n\times\mathbb{R}^n,\mathbb{R}^p)$, then by the above theorem, we have $D(Df)(a_1,a_2)(h_1,h_2)=Df(h_1)(a_2)+Df(a_2)(h_2)$. However, I'm not seeing how this relates to the usual simpler calculation of the partial derivatives. Any help would be appreciated. REPLY [5 votes]: If $f \colon V \rightarrow \mathbb{R}^p$ then $Df \colon V \rightarrow \operatorname{Lin}(\mathbb{R}^n,\mathbb{R}^p)$ and so $D^2f \colon V \rightarrow \operatorname{Lin}(\mathbb{R}^n, \operatorname{Lin}(\mathbb{R}^n, \mathbb{R}^p))$. Let us try to unravel what this means. First, note that a linear map $T \colon \mathbb{R}^n \rightarrow \operatorname{Lin}(\mathbb{R}^n, \mathbb{R}^p)$ is the same thing as a bilinear map $S \colon \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^p$. More precisely, we can define a map $\varphi \colon \operatorname{Lin}(\mathbb{R}^n, \operatorname{Lin}(\mathbb{R}^n, \mathbb{R}^p)) \rightarrow \operatorname{Lin}^2(\mathbb{R}^n, \mathbb{R}^p)$ by setting $\varphi(T)(v,w) := T(v)(w)$ and this map is an isomorphism.
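Numerically, the identification $\varphi$ is nothing but a reindexing of one and the same coefficient tensor. Here is a quick NumPy sketch of this; the $(n, p, n)$ tensor encoding is our own illustrative choice, not notation from the text:

```python
import numpy as np

n, p = 3, 2
rng = np.random.default_rng(0)

# T in Lin(R^n, Lin(R^n, R^p)): T(v) is the p-by-n matrix sum_i v_i * A[i].
A = rng.standard_normal((n, p, n))
T = lambda v: np.einsum('ipj,i->pj', A, v)

# phi(T) in Lin^2(R^n, R^p): the bilinear map (v, w) -> T(v)(w).
phi_T = lambda v, w: T(v) @ w

v, w = rng.standard_normal(n), rng.standard_normal(n)
# Reading the same tensor A directly as a bilinear map gives the same values:
print(np.allclose(phi_T(v, w), np.einsum('ipj,i,j->p', A, v, w)))  # True
```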
More generally, one can construct a similar identification $$ \underbrace{\operatorname{Lin}(\mathbb{R}^n, \operatorname{Lin}(\mathbb{R}^n, \dots \, (\operatorname{Lin}(\mathbb{R}^n, \mathbb{R}^p) \, \dots )}_{k \text{ times}} \approx \operatorname{Lin}^k(\mathbb{R}^n, \mathbb{R}^p) $$ which allows you to identify the $k$-th derivative $D^kf|_{q}$ at a point $q$ (the notation $|_{q}$ is useful to differentiate the point from the vector parameters and to reduce the cluttering of parentheses) with a $k$-multilinear map. Now, consider the case where $p = 1$ and so $f$ is a scalar function. The first derivative $Df \colon V \rightarrow \operatorname{Lin}(\mathbb{R}^n,\mathbb{R}) = \left( \mathbb{R}^{n} \right)^{*}$ sends each point $q \in V$ to a linear functional $(Df)(q) = Df|_{q}$ (the notation $|_{q}$ keeps track of the point at which we are working and reduces the clutter of parentheses) which acts as a directional derivative: $$ (Df|_{q})(v) = \lim_{t \to 0} \frac{f(q + tv) - f(q)}{t}. $$ In particular, if we take $v = e_i$ (where $(e_1,\dots,e_n)$ is the standard basis of $\mathbb{R}^n$) we get $(Df|_{q})(e_i) = \frac{\partial f}{\partial x^i}(q) = \frac{\partial f}{\partial x^i}|_{q}$. Thus, if we represent each $Df|_{q}$ by a (row) vector $(Df|_{q}(e_1), \dots, Df|_{q}(e_n))$, we have $Df "=" \nabla f$ and we recover the usual notion of a gradient of a function. Let us move to the second derivative. By the identification above, we can think of $D(Df)(q) = D(Df)|_{q} = D^2f|_{q}$ (the second derivative at a point $q \in V$) as a bilinear map $\varphi(D^2f|_{q}) \colon \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$ which is usually also denoted by $D^2f$ (making the identification above invisible) and is simply a bilinear form on $\mathbb{R}^n$. Any bilinear form is completely determined by the matrix representing it with respect to some basis, so let us consider the matrix $A_{ij} = \varphi(D^2f|_{q})(e_i, e_j)$ where $(e_i)$ is the standard basis of $\mathbb{R}^n$. I claim that $A = \operatorname{Hess}(f)|_{q}$. To verify it, we unravel all the relevant definitions and properties of the derivative: $$ A_{ij} = \varphi(D^2f|_{q})(e_i,e_j) = ((D(Df)|_{q})(e_i))(e_j) = \left( \lim_{t \to 0} \frac{Df|_{q + te_i} - Df|_{q}}{t} \right)(e_j) = \lim_{t \to 0} \frac{Df|_{q + te_i}(e_j) - Df|_{q}(e_j)}{t} = \lim_{t \to 0} \frac{\frac{\partial f}{\partial x^j}(q + te_i) - \frac{\partial f}{\partial x^j}(q)}{t} = \frac{\partial^2 f}{\partial x^i \partial x^j}(q).$$ More generally, you should think of $(D^k f)|_{q}(v_1, \dots, v_k)$ as taking first the directional derivative of $f$ with respect to the direction $v_1$, then taking the directional derivative of the result with respect to $v_2$, etc., and finally evaluating the result at the point $q$. Regarding the theorem you quote, let me demonstrate it in the case $k = 2$. Thus, we consider a function $f \colon \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}^p$ which is bilinear and want to understand the derivative. For example, if $n = 2$ and $p = 1$ we can consider $$ f((x,y),(u,v)) = 2xu + 4 xv + 5yu + 6yv. $$ The derivative should be a map $Df \colon \mathbb{R}^n \times \mathbb{R}^n \rightarrow \operatorname{Lin}(\mathbb{R}^n \times \mathbb{R}^n, \mathbb{R}^p)$ and we have $$ Df|_{(q_1,q_2)}(v_1,v_2) = f(q_1,v_2) + f(v_1,q_2). $$ How does this work for our function $f$?
For example, $$ Df|_{(x_0,y_0),(u_0,v_0)}((1,0),(0,0)) = \frac{\partial f}{\partial x}\big|_{(x_0,y_0),(u_0,v_0)} = (2u+4v)|_{(x_0,y_0),(u_0,v_0)} = 2u_0+4v_0 \\ = f((x_0,y_0),(0,0)) + f((1,0),(u_0,v_0)).$$<|endoftext|> TITLE: Higher order corrections to saddle point approximation QUESTION [7 upvotes]: I'd like to ask for hints on how to obtain higher order corrections to approximations obtained by the saddle point method. References will also be welcome. Unfortunately what comes up when googling is mostly just the usual leading-order approximation. Let me show my idea of how to do it. Consider an integral $I(t)= \int_{\mathbb R} e^{tx^2 - x^4} dx$. I am interested in the limit $t \to \infty$ through complex values. My idea is to expand the quartic in the exponent around its stationary point $x_0$ as $a+b(x-x_0)^2-x^4$ and replace $e^{-x^4}$ by $1-x^4$. The result is that I get correct leading order behaviour, but wrong next to leading order correction for some values of the argument of $t$. I know that the next to leading order term is wrong because I know the exact form of this integral in terms of Bessel functions and asymptotics of these are known. REPLY [7 votes]: A very instructive reference for this is section 4.7 of Miller's Applied Asymptotic Analysis which performs the analogous analysis to determine the asymptotic behavior of the Airy function. Carrying out the full analysis of your integral would be too much for an answer here, so I'll give an outline and the appropriate section references from the book. First write $t = re^{i\theta}$ with $r \geq 0$ and substitute $x = \sqrt{r} y$ to get $$ I(t) = \sqrt{r}\int_{\mathbb R} \exp\!\left\{ r^2 \left(e^{i\theta} y^2-y^4\right)\right\}dy. $$ The exponent function $\varphi_\theta(y) = e^{i\theta}y^2 - y^4$ has three saddle points $y = y^*$, one at $y^*=0$ and one at either solution of $(y^*)^2 = e^{i\theta}/2$. Depending on the value of $\theta$, one (or more) of these saddle points $y^*$ of $\varphi_\theta$ will dominate the others and determine the asymptotics. This is known as the Stokes phenomenon. The process of determining which saddle point dominates is described in section 4.7 of the book. Once the appropriate saddle points are determined it remains to apply the method of steepest descent as described in sections 4.2 through 4.4. The basic idea is that after the appropriate contour has been chosen you apply the Laplace method as described in section 3.4. This process yields a complete asymptotic expansion which is valid as $r \to \infty$ for a given fixed $\theta$.<|endoftext|> TITLE: Recovering connection from parallel transport QUESTION [15 upvotes]: (doCarmo, Riemannian Geometry, p.56, Q2) I want to prove that the Levi-Civita connection $\nabla$ is given by $$ (\nabla_X Y)(p) = \frac{d}{dt} \Big(P_{c,t_0,t}^{-1}(Y(c(t)) \Big) \Big|_{t=t_0}, $$ where $p \in M$, $c \colon I \to M$ is an integral curve of $X$ through $p$, and $P_{c,t_0,t} \colon T_{c(t_0)}M \to T_{c(t)}M$ is the parallel transport along $c$, from $t_0$ to $t$. My approach is to use the uniqueness of the Levi-Civita connection (a theorem proved elsewhere in the textbook) and show that the RHS satisfies all of its properties, i.e. It is an affine connection, It is symmetric, It is compatible with the metric. However, for the first part, I am stuck on proving that $$ \nabla_{fX + gY}Z = f \nabla_X Z + g \nabla_Y Z.
$$ So far, I have the following $$ f \nabla_X Z = f \Big( \frac{d}{dt} \Big( P_{c_X,t_0,t}^{-1}(Z(c_X(t)) \Big) \Big|_{t=t_0} \Big), $$ $$ g \nabla_Y Z = g \Big( \frac{d}{dt} \Big( P_{c_Y,t_0,t}^{-1}(Z(c_Y(t)) \Big) \Big|_{t=t_0} \Big), $$ $$ \nabla_{fX + gY}Z = \frac{d}{dt} \Big( P_{c,t_0,t}^{-1}(Z(c(t)) \Big) \Big|_{t=t_0}, $$ where $$ c_X (t_0) = c_Y (t_0) = c(t_0) = p, $$ $$ \frac{d c_X}{dt} = X(c_X(t)), $$ $$ \frac{d c_Y}{dt} = Y(c_Y(t)), $$ $$ \frac{d c}{dt} = fX(c(t)) + gY(c(t)). $$ I'm sure the solution is something simple like working in local coordinates but I'm having trouble so any direction would be appreciated. REPLY [14 votes]: You can probably make your idea work but it won't be easy. The reason is that your formula that recovers the connection from the parallel transport is true not only for the Levi-Civita connection but also for arbitrary connections. This means that in order to identify the right hand side as the Levi-Civita connection, you will need to understand what makes the parallel transport of the Levi-Civita connection special compared with the parallel transport of a general connection. The compatibility with the metric is easy - this implies that parallel transport is an isometry. However, to understand how the symmetry affects the parallel transport is much more delicate (see here for example). A much less painful way to solve the exercise is to use the notion of a parallel frame along $c$ (which if I remember correctly is introduced in one of the other exercises). Namely, pick some basis $\xi_1(p), \dots, \xi_n(p)$ of $T_pM$ and extend it by parallel transport to a frame $(\xi_1, \dots, \xi_n)$ of vector fields along $c$. Now, write the restriction of $Y$ to $c(t)$ as $Y = Y^i(t) \xi_i(c(t))$ (summation convention is in use) and note that $$ (\nabla_X Y)(c(t)) = \frac{DY(c(t))}{dt} = \dot{Y}^i(t) \xi_i(c(t)) + Y^i(t) \frac{D\xi_i(t)}{dt} = \dot{Y}^i(t) \xi_i (c(t)) $$ which means that the covariant derivative relative to the frame $\xi_i$ is given simply by the regular derivative. Then, $$ \frac{d}{dt} \left( P_{c,t_0, t}^{-1}(Y(c(t)) \right)|_{t = t_0} = \frac{d}{dt} \left( P_{c,t_0, t}^{-1}(Y^i(t) \xi_i(c(t))) \right)|_{t = t_0} \\ = \frac{d}{dt} \left( Y^i(t) \xi_i(p) \right)|_{t = t_0} = \dot{Y}^i(t_0) \xi_i(p) = (\nabla_X Y)(p).$$<|endoftext|> TITLE: intersection of hypercube and hypersphere QUESTION [12 upvotes]: There are a number of similar questions already (e.g. this one), but as far as I can see, none quite cuts it for me. In $n$-dimensional Euclidean space, a hypercube $H$ with side lengths $2A$ is centered around the origin. So is a hypersphere $S$ with radius $x$. What is the fraction of volume of the hypercube $H$ that is also inside the hypersphere $S$, that is, what is the volume of $H\cap S$? As calculating the fraction with respect to the hypercube is trivial by just dividing by its volume in the end, it boils down to calculating the volume of the intersection. My first idea was to separate three different cases: (1) If $x \le A$, the hypersphere is fully contained in the hypercube, and the intersection volume is simply the volume of the hypersphere. (2) If $x^2 > n \cdot A^2$, the hypercube is fully contained in the hypersphere. In this case, the volume is simply that of the hypercube, that is, $(2A)^n$. (3) For intermediate values of $x$, the intersection is given as the volume of the hypersphere minus $2n$ hyperspherical caps, for which there is also a closed form solution (e.g.
here) After my calculation consistently gave wrong results, I was forced to admit that the case (3) is more difficult than I thought, because as soon as the opening angle of the hypercaps is larger than $\pi/4$, they start to intersect along the edges of the hypercube, whereas the corners are still outside the intersection volume. For $n=3$, this can be seen in this graphic, which was generated by wolframalpha. Thus, the solution proposed in (3) double-counts these volumes. I can't seem to come up with a general solution to calculate this, because counting (and calculating) the intersection areas is very tedious. Is there any closed-form, analytic solution available for this problem? REPLY [8 votes]: The complexity here comes from the fact that in $n$ dimensions there are $n-1$ types of extended boundaries of the hypercube (in which $1,2,\ldots,n-1$ coordinates are maxed-out at $\pm A$). So, while in $3$ dimensions there are only edges and faces, the nomenclature of "caps" and "corners" does not capture the behavior in higher dimensions. The hypersphere starts intersecting the boundaries of type $j$ when its radius reaches $A\sqrt{j}$, and only fully contains them when its radius exceeds $A\sqrt{n}$, so we expect the final formula to be non-smooth at $n$ different radii. However, we can find a reasonably simple recursive form. Let $V_n(R)$ be the volume of the intersection in $n$ dimensions when the hypersphere has radius $R$ and the hypercube has side length $2$. Then $$ V_n(R)=\int_{x_1=-1}^{+1}\int_{x_2=-1}^{+1}\cdots\int_{x_n=-1}^{+1}I\left[\sum_{i=1}^{n}x_i^2 < R^2\right]dx_1 dx_2 \cdots dx_n, $$ where $I(\Phi)$ is $1$ when $\Phi$ is true and $0$ otherwise. The integrand is nonzero only when $|x_1| < R$, and for fixed $x_1$ the remaining integral is the same problem in one dimension less with radius $\sqrt{R^2-x_1^2}$, giving the recursion $$V_n(R)=\int_{-1}^{+1}V_{n-1}\left(\sqrt{\max\left(R^2-x_1^2,\,0\right)}\right)dx_1,$$ with $V_0(R)=1$ for $R>0$ and $V_0(R)=0$ otherwise, which is readily evaluated numerically.<|endoftext|> TITLE: Do there exist functions that grow faster than $ax+b$, slower than $a^x$ and still possess these nice congruence properties? QUESTION [7 upvotes]: Functions like $f(x)=2x+3$ and $f(x)=3^x-8$ have some very nice properties if it comes to congruences. In particular, if you pick any $n\in\Bbb{N}$ and write down $f(x)\mod n$, you'll see that it's a repeating pattern, with no numbers occurring more than once in each cycle. A clearer, more formal definition due to Greg Martin: A function $h$ defined on the positive integers is called faithfully periodic with period $q$ if it has the property $h(m)=h(n)$ if and only if $n\equiv m\pmod q$. A function $f:\Bbb{N}\to\Bbb{Z}$ is now normal if for every modulus $k\geq 2$, the function $\pi_k\circ f$ is faithfully periodic, where $\pi_k:\Bbb{N}\to\Bbb{Z}/k\Bbb{Z}$ is the natural quotient map. Also, for every modulus $k\geq 2$, let $f_q(k)$ be the period of $\pi_k\circ f$. I have not been able to find any normal functions which grow faster than a linear function, but slower than an exponential function; in particular, normal functions $f(n)=O(n^\alpha)$ with $\alpha>1$. Question: Do there exist any such functions? What I've proven so far 1) This is quite obvious, but if $f$ is a normal function with $f(0)=0$, then $\forall n,m\in\Bbb{N}: f_q(m)\mid n\implies m\mid f(n)$ Proof: set $\pi_k\circ f=h_k$. Clearly for all $k$, we have $h_k(0)=0$. Now $h_k(n)=0$ if and only if $f_q(k)\mid n$. Some Intuition about why linear and exponential functions are normal Short and simple: they can be defined as sequences $\{a_n\}_{n=1}^{\infty}$ in such a way that, for all $k\in\Bbb{N}$, we don't need to know the value of $n$ or $a_{n-1}$ to compute $a_n\pmod k$, we only need $a_{n-1}\pmod k$. To be clear, this is just my intuition.
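The claimed patterns are easy to observe numerically; here is a minimal sketch in Python, where the particular functions and the modulus are just example choices:

```python
def residues(f, n, count=20):
    """First `count` values of f(x) mod n, for x = 1, 2, ..."""
    return [f(x) % n for x in range(1, count + 1)]

f1 = lambda x: 2 * x + 3      # linear example
f2 = lambda x: 3 ** x - 8     # exponential example

print(residues(f1, 5))  # [0, 2, 4, 1, 3, 0, 2, 4, 1, 3, ...]: period 5
print(residues(f2, 5))  # [0, 1, 4, 3, 0, 1, 4, 3, ...]: period 4
```

In both outputs each cycle repeats with no residue occurring twice inside a cycle, matching the faithfully periodic behaviour described above.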
I think it will be easy to prove that such functions are always normal, but not easy to prove that all normal functions 'look' like this. REPLY [2 votes]: I think a stronger result is given in the Perelli-Zannier paper I mention in the comments. A sequence of integers is "arithmetically periodic" if it is periodic modulo $p$ for all sufficiently large primes $p$. Theorem. Let $f:{\bf N}\to{\bf Z}$ be arithmetically periodic, with period $r_p$ for prime $p>p_0$. Suppose there exists a set $J_p\subseteq{\bf Z}/p{\bf Z}$ such that $|J_p|=r_p$ and $$f({\bf N})\cap(a+p{\bf Z})\ne\emptyset{\qquad\rm whenever\qquad}a\in J_p$$ Then three cases can occur: (i) $r_p$ is constant for large $p$, and $f$ is periodic. (ii) $r_p=p$ for large $p$, and $f$ is a polynomial of degree 1. (iii) There exists an integer $a$ and rational numbers $A$ and $B$ such that $f(n)=Aa^n+B$. The paper is available from http://www.sciencedirect.com/science/article/pii/0022314X8290083X<|endoftext|> TITLE: Are $14$ and $21$ the only "interesting" numbers? QUESTION [196 upvotes]: The numbers $14$ and $21$ are quite interesting. The prime factorisation of $14$ is $2\cdot 7$ and the prime factorisation of $14+1$ is $3\cdot 5$. Note that $3$ is the prime after $2$ and $5$ is the prime before $7$. Similarly, the prime factorisation of $21$ is $7\cdot 3$ and the prime factorisation of $21+1$ is $11\cdot 2$. Again, $11$ is the prime after $7$ and $2$ is the prime before $3$. In other words, they both satisfy the following definition: Definition: A positive integer $n$ is called interesting if it has a prime factorisation $n=pq$ with $p\ne q$ such that the prime factorisation of $n+1$ is $p'q'$ where $p'$ is the prime after $p$ and $q'$ the prime before $q$. Are there other interesting numbers? REPLY [272 votes]: Note that exactly one of $n$ and $n+1$ is even. It follows that for $n$ to be interesting, either $n=3p$ and $n+1=2N(p)$ or $n=2p$ and $n+1=3P(p)$, where $P(p)$ and $N(p)$ are the previous and next primes to $p$ respectively. Rearranging we get that $p$ must satisfy one of the following two equations: $$\frac{3p+1}2=N(p)\tag1$$ $$\frac{2p+1}3=P(p)\tag2$$ However, by a 1952 result of Jitsuro Nagura, for $p\ge25$ there is always a prime between $p$ and $\frac65p$. In particular, if $p\ge31$ is a prime: $$\frac56p<P(p)<p<N(p)<\frac65p$$ Since $\frac{3p+1}2\ge\frac32p>\frac65p$ and $\frac{2p+1}3<\frac56p$ for $p>2$, neither $(1)$ nor $(2)$ can hold when $p\ge31$. Checking the primes $p<31$ directly (and remembering that $p\ne3$ in $(1)$ and $p\ne2$ in $(2)$, since $n=pq$ requires $p\ne q$), the only solution is $p=7$: equation $(1)$ gives $N(7)=11$, i.e. the interesting number $21$, and equation $(2)$ gives $P(7)=5$, i.e. the interesting number $14$. Hence $14$ and $21$ are the only interesting numbers.<|endoftext|> TITLE: If $f(f(x))=x^2-x+1$, what is $f(0)$? QUESTION [7 upvotes]: Suppose that $f\colon\mathbb{R}\to\mathbb{R}$ without any further restriction. If $f(f(x))=x^2-x+1$, how can one find $f(0)$? Thanks in advance. REPLY [3 votes]: If such a function $f$ exists, then $f(0) = 1$, but such a function $f$ does not exist. See the paper: When is $f(f(z)) = az^2 +bz+c$? R. E. Rice, B. Schweizer and A. Sklar The American Mathematical Monthly Vol. 87, No. 4 (Apr., 1980), pp. 252-263 (Link to PDF not behind the JSTOR paywall) Edit: Such a function does not exist in $\mathbb{C}$! Or in any algebraically closed field of characteristic zero. But you can have such a function in the reals. See the epilogue of the paper (page 262).<|endoftext|> TITLE: Intuition about subbasis for a topology QUESTION [8 upvotes]: In general topology the idea of a basis is quite simple. The definition is: Let $X$ be a set, a set $B\subset \mathcal{P}(X)$ is said to be a basis for a topology on $X$ if: For each $x\in X$ there's $U\in B$ such that $x\in U$. If $x\in U_1\cap U_2$, with $U_1,U_2\in B$, then there is $U_3\in B$ such that $x\in U_3$ and $U_3\subset U_1\cap U_2$.
With that, the topology $\tau$ generated by $B$ is defined so that $U\in \tau$ if for each $x\in U$ there's $U_x\in B$ with $x\in U_x\subset U$. In other words, $\tau$ is the set of all unions of elements of $B$. Then one proves that $\tau$ is indeed a topology. Obviously this is the natural extension of the open balls we use in metric spaces. It is quite simple to understand and to get some intuition about it. The other definition, which I simply can't get any intuition about, is that of a subbasis. The definition is: Let $X$ be a set, a set $S\subset \mathcal{P}(X)$ is said to be a subbasis for a topology on $X$ if the union of all sets in $S$ equals $X$. In that case, the set $$B = \left\{S_1\cap\dots\cap S_n : S_i\in S, n\in \mathbb{N}\right\},$$ is a basis for a topology $\tau$ on $X$. In other words $\tau$ is the set of all unions of all finite intersections of elements in $S$. While, on the one hand, the idea of a basis is quite intuitive and simple to understand based on the simple example of open balls, the idea of a subbasis seems quite different. I mean, I know it works. The proof that $\tau$ is a topology is quite simple. What is not simple is to understand the intuition. In that case: what is the intuition about a subbasis? Why would anyone consider the object defined that way? Why is it relevant and how can we understand it properly to have some intuition on when we need to use it? REPLY [8 votes]: Some authors don't even require a subbasis to have union equal to all of $X$, i.e. a subbasis is just any subset $S \subseteq \mathcal{P}(X)$ whatsoever. Whichever approach is adopted, the idea is just to use $S$ to generate a topology $\tau$ which includes $S$, and as few additional open sets as possible. Since an arbitrary intersection of topologies is a topology, one way to get $\tau$ is to take $$\tau = \bigcap \{ \tau' : \tau' \text{ is a topology with } S \subseteq \tau' \}.$$ But, it turns out we can also obtain $\tau$ by writing down a basis for it. Namely, $$B = \left\{S_1\cap\dots\cap S_n : S_i\in S, n\in \mathbb{N}\right\}$$ can be checked to be a basis for the topology $\tau$ above. Anyway, I agree it's natural to be a bit suspicious at first of the definition of a subbasis. It seems too loose of a concept to be good for anything, right? But the point is to think of this as being something more akin to... say a generating set for a group. Given any subset $S$ of a group $G$, we can define the smallest subgroup $\langle S \rangle$ containing $S$. This can also be written down explicitly as $\{ g_1 \cdots g_n : g_i \in S \text{ or } g_i^{-1} \in S\}$. But here we don't find it strange that no assumption was made about $S$, right? We have just used any old subset of $G$ to generate a smallest group. The situation in topology is the same. It is just that, because we use the special terminology "subbasis", we initially suspect that subbases should be somehow "special". But, in fact, a subbasis is just any old collection of subsets which we use to generate a smallest topology.<|endoftext|> TITLE: Invariant measures of the doubling map on the closed interval QUESTION [6 upvotes]: Consider the doubling map $g\colon [0,1]\to [0,1]$ given by $g(x)=2x \, {\rm mod}\; 1$. It is clearly discontinuous at 1/2. However, its counterpart $G$ on the circle $G(e^{2\pi i \theta}) = e^{4\pi i \theta}$ ($\theta\in [0,1)$) is continuous, and therefore $G$ has many invariant measures.
Very often it is claimed in ergodic theory books that one may remove the discontinuity of $g$ by passing to the circle (by identifying the end-points of $[0,1]$). It is however not clear to me whether this operation affects the $g$-invariant measures or not. Is there a one-to-one correspondence between $g$-invariant measures on $[0,1]$ and $G$-invariant ones on the circle? It seems to me that this could be done if we took into account only those $g$-invariant measures whose probability distribution functions vanish at 0. Do I get this right? A layman's explanation concerning passing from the interval to the circle in the case of the doubling map would be highly appreciated. REPLY [2 votes]: The difference is quite small and can at most affect measures that assign masses to $0$ and/or $1$. If you define $g(x) = 2x - \lfloor 2x \rfloor$, then $g$ is a map from $[0,1]$ to $[0,1)$. Any invariant measure must assign zero measure to $\{1\}$ and the Borel $\sigma$-algebra of $[0,1)$ is isomorphic (for any Borel measure) to $S^1={\Bbb R}/{\Bbb Z}$. So in this case you may uniquely identify measures in the two cases. If, however, you define e.g. $g(x)=2x$ for $x\in [0,0.5]$ and $g(x)=2x-1$ for $x\in (0.5,1]$ then you do actually have two distinct invariant Dirac masses at $0$ and $1$. You have a similar scenario for other variations of defining the map at the end-points. But the difference will be minor.<|endoftext|> TITLE: Closed form for $\int_0^e\mathrm{Li}_2(\ln{x})\,dx$? QUESTION [18 upvotes]: Inspired by this question and this answer, I decided to investigate the family of integrals $$I(k)=\int_0^e\mathrm{Li}_k(\ln{x})\,dx,\tag{1}$$ where $\mathrm{Li}_k(z)$ represents the polylogarithm of order $k$ and argument $z$. $I(1)$ evaluates to $e\gamma$, but $I(2)$ has resisted my efforts (which can be seen here). Neither ISC nor WolframAlpha could provide a closed form for its numerical value--however, I've conjectured a possible analytic form. $$\eqalign{\int_0^e\mathrm{Li}_2(\ln{x})\,dx&\stackrel?=\,_3F_3(1,1,1;2,2,2;1)+\frac{\pi^2(2e-5)}{12}+\frac{\gamma^2}{2}-\gamma\,\mathrm{Ei}(1)\\&=0.578255559804073275225659054377625577...\tag{2}}$$ Brevan Ellefsen has computed that my conjecture is accurate to at least 150 digits. Brevan also gave the alternate form $$\frac{\pi^2e}{6}+\gamma G_{1,2}^{2,0}\left(-1\left|\begin{array}{c}1\\0,0\\\end{array}\right.\right)+G_{2,3}^{3,0}\left(-1\left|\begin{array}{c}1,1\\0,0,0\\\end{array}\right.\right).\tag{2.1}$$ Is there a closed form for $I(2)$ that doesn't involve Meijer G or hypergeometrics? The simplicity of the following two equations seems to suggest that there might be. $(3.1)$ follows directly from $(3)$, which I've proven here. $$\begin{align} \sum_{k=1}^\infty I(k)&=e\tag{3}\\\sum_{k=2}^\infty I(k)&=e(1-\gamma)\tag{3.1}\end{align}$$ PROGRESS UPDATE: Using this equation, I've turned $_3F_3(1,1,1;2,2,2;1)$ into $$\lim_{c\to 1}\left(\frac{\mathrm{Ei}(1)-\gamma}{c-1}+\frac{1-e}{(c-1)^2}+\frac{(-1)^{-c}\,\Gamma(c-1)}{c-1}+\frac{(-1)^{1-c}\,\Gamma(c,-1)}{(c-1)^2}\right),\tag{4}\label{4}$$ but I don't know how to proceed from there. EDIT: This limit leads nowhere. See below. SECOND PROGRESS UPDATE: After some studying of the properties of the Meijer G function, I've finally cracked the limit; however, the result is an underwhelming $_3F_3(1,1,1;2,2,2;1)$.
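For what it's worth, the agreement in $(2)$ is easy to re-check numerically; here is a sketch using Python's mpmath (only a sanity check, not part of the derivation):

```python
from mpmath import mp, quad, polylog, log, hyper, pi, euler, ei, e

mp.dps = 30
# Left side: the integral I(2); split at 1 to help with the ln-singularity.
lhs = quad(lambda x: polylog(2, log(x)), [0, 1, e])
# Right side: the conjectured closed form (2).
rhs = (hyper([1, 1, 1], [2, 2, 2], 1)
       + pi**2 * (2*e - 5) / 12
       + euler**2 / 2
       - euler * ei(1))
print(lhs)   # 0.578255559804073275225659054...
print(rhs)   # agrees with lhs to working precision
```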
Before I evaluate the limit, I'd first like to state the following intermediate result: Lemma $(4.1)$: For $z\in\mathbb{C}$, $$G_{2,3}^{3,0}\left(z\left|\begin{array}{c}1,1\\0,0,0\\\end{array}\right.\right)=\gamma\ln{z}+\frac12\ln^2(z)-z\,_3F_3(1,1,1;2,2,2;-z)+\frac{\gamma^2}2+\frac{\pi^2}{12}.\tag{4.1}\label{4.1}$$ My proof for this can be found here. Now I return to the limit $\eqref{4}$. Consider the following: $\frac{1}{c-1}=\frac{c-1}{(c-1)^2}$, $(c-1)\Gamma(c-1)=\Gamma(c)$, and $(-1)^{1-c}=-(-1)^{-c}$. Based on these algebraic identities, the limit can be written as $$\lim_{c\to 1}\frac{(c-1)(\mathrm{Ei}(1)-\gamma)+1-e+(-1)^{-c}(\Gamma(c)-\Gamma(c,-1))}{(c-1)^2}.\tag{4.2}$$ In this form, the limit is $\frac{0}0$. Using l'Hospital twice, we obtain $$\lim_{c\to 1}{(-1)^{-c}\left(-G_{3,4}^{4,0}\left(-1\left|\begin{array}{c}1,1,1\\0,0,0,c\\\end{array}\right.\right)+\Gamma{(c)}\left(\frac{\psi_0{(c)}^2}2-i\pi\psi_0(c)+\frac{\psi_1(c)}2-\frac{\pi^2}2\right)\right)}$$ $$\begin{align}&=G_{3,4}^{4,0}\left(-1\left|\begin{array}{c}1,1,1\\0,0,0,1\\\end{array}\right.\right)-\frac{\psi_0(1)^2}2+i\pi\psi_0(1)-\frac{\psi_1(1)}2+\frac{\pi^2}2\\&=G_{2,3}^{3,0}\left(-1\left|\begin{array}{c}1,1\\0,0,0\\\end{array} \right.\right)-\frac{\gamma^2}{2}-i\gamma\pi+\frac{5\pi^2}{12}.\tag{4.2a}\end{align}$$ Using Lemma $\eqref{4.1}$, we know that $$G_{2,3}^{3,0}\left(-1\left|\begin{array}{c}1,1\\0,0,0\\\end{array}\right.\right)={}_3F_3(1,1,1;2,2,2;1)+\frac{\gamma^2}2+i\gamma\pi-\frac{5\pi^2}{12},\tag{4.3}$$ which can be rewritten as $$G_{2,3}^{3,0}\left(-1\left|\begin{array}{c}1,1\\0,0,0\\\end{array}\right.\right)-\frac{\gamma^2}{2}-i\gamma\pi+\frac{5\pi^2}{12}={}_3F_3(1,1,1;2,2,2;1).\tag{4.3a}$$ Thus, $$\eqalign{&\lim_{c\to 1}\left(\frac{\mathrm{Ei}(1)-\gamma}{c-1}+\frac{1-e}{(c-1)^2}+\frac{(-1)^{-c}\,\Gamma(c-1)}{c-1}+\frac{(-1)^{1-c}\,\Gamma(c,-1)}{(c-1)^2}\right)\\=&{}_3F_3(1,1,1;2,2,2;1).\tag{4.4}}$$ REPLY [3 votes]: Lemma 1: For $\lambda \in \mathbb{R}^{+}$ $$\sum\limits_{n=1}^{\infty} \dfrac{\lambda^n}{n!}\mathcal{H}_{n} = e^{\lambda}\left(\ln \lambda + \gamma - \operatorname{Ei}(-\lambda)\right) \tag{1} \label{lemma1}$$ where $\mathcal{H}_{n}$ is the $n$-th harmonic number and $\operatorname{Ei}(\cdot)$ is the exponential integral. Start from two series representations of the lower incomplete gamma function $\gamma(\beta, \lambda)$: \begin{align} \gamma(\beta, \lambda) &= e^{-\lambda}\sum\limits_{n=0}^{\infty} \dfrac{\lambda^{n+\beta}}{\beta\left(\beta+1\right)\ldots\left(\beta+n\right)} \\ \gamma(\beta, \lambda) &= \sum\limits_{n=0}^{\infty} (-1)^{n}\dfrac{\lambda^{n+\beta}}{n!\left(\beta+n\right)} \end{align} Now take the derivative with respect to $\beta$ at $1$.
Since \begin{align*} \dfrac{\mathrm{d}}{\mathrm{d}\beta}\left(\dfrac{1}{\beta\left(\beta+1\right)\ldots\left(\beta+n\right)}\right)_{\beta=1} &= -\left.\dfrac{1}{\beta\left(\beta+1\right)\ldots\left(\beta+n\right)}\left(\dfrac{1}{\beta}+\dfrac{1}{\beta+1}+\ldots+\dfrac{1}{\beta+n}\right)\right\vert_{\ \beta=1} \\ &= -\dfrac{\mathcal{H}_{n+1}}{(n+1)!} \end{align*} we have that \begin{align*} -e^{-\lambda}\sum\limits_{n=1}^{\infty} \dfrac{\lambda^{n}}{n!}\mathcal{H}_{n}+\ln \lambda\ e^{-\lambda}\sum\limits_{n=1}^{\infty} \dfrac{\lambda^{n}}{n!} &= -\ln \lambda\sum\limits_{n=1}^{\infty} (-1)^{n}\dfrac{\lambda^{n}}{n!} + \sum\limits_{n=1}^{\infty} (-1)^{n}\dfrac{\lambda^{n}}{n!\ n} \\ -e^{-\lambda}\sum\limits_{n=1}^{\infty} \dfrac{\lambda^{n}}{n!}\mathcal{H}_{n}+\ln \lambda \left(1-e^{-\lambda}\right) &= \ln \lambda \left(1-e^{-\lambda}\right) + \sum\limits_{n=1}^{\infty} (-1)^{n}\dfrac{\lambda^{n}}{n!\ n} \\ \sum\limits_{n=1}^{\infty} \dfrac{\lambda^{n}}{n!}\mathcal{H}_{n} &= -e^{\lambda}\sum\limits_{n=1}^{\infty} (-1)^{n}\dfrac{\lambda^{n}}{n!\ n} \\ &= e^{\lambda}\left(\ln \lambda + \gamma - \operatorname{Ei}(-\lambda)\right) \end{align*} Let $$ \mathfrak{I}(\lambda, \alpha, \beta) = \int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}\,x^{\alpha - 1}}{\left(1+x\right)^{\alpha+\beta}}\,\mathrm{d}x $$ Lemma 2: Let $\alpha, \lambda \in \mathbb{R}^{+}$ and $\alpha + \beta > 0$. Then $$ \mathfrak{I}(\lambda, \alpha, \beta) = \dfrac{\Gamma(\alpha)}{\Gamma(\alpha+\beta)}\lambda^{\beta}\mathfrak{I}(\lambda, \alpha + \beta, -\beta) \tag{2} \label{lemma2} $$ Using Laplace transform properties we have \begin{align*} \mathfrak{I}(\lambda, \alpha, \beta) &= \int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}x^{\alpha - 1}}{\left(1+x\right)^{\alpha+\beta}}\,\mathrm{d}x \\ &= \int\limits_{0}^{\infty} e^{-\lambda t}t^{\alpha-1}\,\mathcal{L}\left\{\dfrac{e^{-x}x^{\alpha+\beta-1}}{\Gamma(\alpha+\beta)}\right\}(t)\,\mathrm{d}t \\ &= \int\limits_{0}^{\infty} \dfrac{\Gamma(\alpha)}{\left(\lambda+x\right)^{\alpha}}\dfrac{e^{-x}x^{\alpha+\beta-1}}{\Gamma(\alpha+\beta)} \,\mathrm{d}x \\ &= \dfrac{\Gamma(\alpha)}{\Gamma(\alpha+\beta)}\lambda^{\beta}\int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}x^{\alpha+\beta-1}}{\left(1+x\right)^{\alpha}}\,\mathrm{d}x \\ &= \dfrac{\Gamma(\alpha)}{\Gamma(\alpha+\beta)}\lambda^{\beta}\mathfrak{I}(\lambda, \alpha + \beta, -\beta) \end{align*} Claim 1: For $\lambda > 0$ and $\beta \in \mathbb{R}$ $$ \mathfrak{I}(\lambda, 1, \beta) = e^{\lambda}\lambda^{\beta}\Gamma(-\beta, \lambda) \tag{3} \label{claim1} $$ where $\Gamma(\cdot, \cdot)$ is the upper incomplete gamma function.
\begin{align*} \mathfrak{I}(\lambda, 1, \beta) &= \int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}}{\left(1+x\right)^{1+\beta}}\,\mathrm{d}x \\ &= e^{\lambda}\int\limits_{1}^{\infty} \dfrac{e^{-\lambda x}}{x^{1+\beta}}\,\mathrm{d}x \\ &= e^{\lambda}\lambda^{\beta}\int\limits_{\lambda}^{\infty} \dfrac{e^{-x}}{x^{1+\beta}}\,\mathrm{d}x \\ &= e^{\lambda}\lambda^{\beta}\Gamma(-\beta, \lambda) \end{align*} After applying \eqref{lemma2} to \eqref{claim1} we have that $$ \int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}x^{\beta}}{1+x}\,\mathrm{d}x = e^{\lambda}\Gamma(1+\beta)\Gamma(-\beta, \lambda) \tag{4} \label{consec1} $$ Taking the derivative with respect to the variable $\beta$ at point $(\lambda, \beta) = (1, 0)$ leads us to \begin{align*} \int\limits_{0}^{\infty} \dfrac{e^{-x}\ln x}{1+x}\,\mathrm{d}x &= e\dfrac{\partial}{\partial \beta}\Big(\Gamma(1+\beta)\Gamma(-\beta, 1)\Big)\Bigg\vert_{\beta=0} \\ &= -e\gamma \int\limits_{1}^{\infty} \dfrac{e^{-x}}{x}\,\mathrm{d}x -e \int\limits_{1}^{\infty} \dfrac{e^{-x}\ln x}{x}\,\mathrm{d}x \\ &= e\gamma\operatorname{Ei}(-1)-\dfrac{1}{2}e\int\limits_{1}^{\infty} e^{-x}\ln^2 x\,\mathrm{d}x \\ &= e\gamma\operatorname{Ei}(-1)-\dfrac{1}{2}e\dfrac{\mathrm{d}^2}{\mathrm{d}x^2}\Gamma(1)+\dfrac{1}{2}e\int\limits_{0}^{1} e^{-x}\ln^2 x\,\mathrm{d}x \\ \end{align*} $$ \int\limits_{0}^{\infty} \dfrac{e^{-x}\ln x}{1+x}\,\mathrm{d}x = e\gamma\operatorname{Ei}(-1)-\dfrac{1}{2}e\left(\gamma^2+\dfrac{1}{6}\pi^2\right)-e\sum\limits_{n=1}^{\infty} \dfrac{(-1)^n}{n!\,n^2} \tag{5} \label{first} $$ Let $$ f(\lambda) = \int\limits_{0}^{\infty} e^{-x}\ln^2 \left(\lambda x + x^2\right)\,\mathrm{d}x $$ Then \begin{align*} f'(\lambda) &= 2\int\limits_{0}^{\infty} e^{-x}\dfrac{\ln \left(\lambda x + x^2\right)}{\lambda + x}\,\mathrm{d}x \\ &= 2\int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}}{1+x}\Big(\ln \left(1+x\right)+\ln x + 2 \ln \lambda\Big)\,\mathrm{d}x \\ &= 4\ln \lambda \int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}}{1+x}\,\mathrm{d}x + 2\int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}}{1+x}\ln \left(1+x\right)\,\mathrm{d}x + 2\int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}}{1+x}\ln x\,\mathrm{d}x \\ &= -4\ln \lambda e^{\lambda}\operatorname{Ei}(-\lambda) - 2\dfrac{\partial}{\partial \beta}\left.\left(\int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}}{\left(1+x\right)^{1+\beta}}\,\mathrm{d}x\right)\right\vert_{\beta=0} + 2\dfrac{\partial}{\partial \beta}\left.\left(\int\limits_{0}^{\infty} \dfrac{e^{-\lambda x}x^{\beta}}{1+x}\,\mathrm{d}x\right)\right\vert_{\beta=0} \\ &= -4\ln \lambda e^{\lambda}\operatorname{Ei}(-\lambda) -2e^{\lambda}\dfrac{\partial}{\partial \beta}\Big(\lambda^{\beta}\Gamma(-\beta,\lambda)\Big)\Bigg\vert_{\beta=0} + 2e^{\lambda}\dfrac{\partial}{\partial \beta}\Big(\Gamma(1+\beta)\Gamma(-\beta,\lambda)\Big)\Bigg\vert_{\beta=0} \\ &= -2\ln \lambda e^{\lambda}\operatorname{Ei}(-\lambda)+2\gamma e^{\lambda}\operatorname{Ei}(-\lambda) \end{align*} Integrating over $[0,1]$ leads us to \begin{align*} f(1) &= f(0) -2\int\limits_{0}^{1} \ln \lambda e^{\lambda}\operatorname{Ei}(-\lambda)\,\mathrm{d}\lambda + 2\gamma\Big(e^{\lambda}\operatorname{Ei}(-\lambda)-\ln \lambda\Big)\Bigg\vert_{0}^{1} \\ &= 2\gamma e\operatorname{Ei}(-1)+2\gamma^2+\dfrac{2}{3}\pi^2-2\int\limits_{0}^{1} \ln \lambda e^{\lambda}\operatorname{Ei}(-\lambda)\,\mathrm{d}\lambda \end{align*} For integral in last formula we use \eqref{lemma1}: \begin{align*} \int\limits_{0}^{1} \ln \lambda e^{\lambda}\operatorname{Ei}(-\lambda)\,\mathrm{d}\lambda &= \int\limits_{0}^{1} \ln^2 \lambda 
e^{\lambda}\,\mathrm{d}\lambda +\gamma \int\limits_{0}^{1} \ln \lambda e^{\lambda}\,\mathrm{d}\lambda + \sum\limits_{n=2}^{\infty} \dfrac{\mathcal{H}_{n-1}}{n!\ n} \\ &= \sum\limits_{n=1}^{\infty} \dfrac{1}{n!\ n^2} + \gamma^2-\gamma\operatorname{Ei}(1) + \sum\limits_{n=1}^{\infty} \dfrac{\mathcal{H}_{n}}{n!\ n} \end{align*} Combining all together we have $$ \int\limits_{0}^{\infty} e^{-x}\ln^2 \left(x+x^2\right)\,\mathrm{d}x = 2\gamma\Big(e\operatorname{Ei}(-1)+\operatorname{Ei}(1)\Big)+\dfrac{2}{3}\pi^2-2\sum\limits_{n=1}^{\infty} \dfrac{1}{n!\ n^2}-2\sum\limits_{n=1}^{\infty} \dfrac{\mathcal{H}_{n}}{n!\ n} \tag{6} \label{second} $$ From another side \begin{align*} \int\limits_{0}^{\infty} e^{-x}\ln^2 \left(x+x^2\right)\,\mathrm{d}x &= \int\limits_{0}^{\infty} e^{-x}\Big(\ln^2 \left(1+x\right) + \ln^2 x + 2\ln \left(1+x\right)\ln x\Big)\,\mathrm{d}x \\ &= \gamma^2+\dfrac{1}{6}\pi^2+e\int\limits_{1}^{\infty}e^{-x}\ln^2 x\,\mathrm{d}x + 2\int\limits_{0}^{\infty} e^{-x}\ln \left(1+x\right)\ln x\,\mathrm{d}x \\ &= \left(1+e\right)\left(\gamma^2+\dfrac{1}{6}\pi^2\right)+2e\sum\limits_{n=1}^{\infty} \dfrac{(-1)^n}{n!\ n^2} + 2\int\limits_{0}^{\infty} e^{-x}\ln \left(1+x\right)\ln x\,\mathrm{d}x \end{align*} Express last integral \begin{align*} \int\limits_{0}^{\infty} e^{-x}\ln \left(1+x\right)\ln x\,\mathrm{d}x &= \gamma\Big(e\operatorname{Ei}(-1)+\operatorname{Ei}(1)\Big) -\dfrac{1}{2}\gamma^2+\dfrac{1}{4}\pi^2-\dfrac{1}{2}e\left(\gamma^2+\dfrac{1}{6}\pi^2\right) \\ &-e\sum\limits_{n=1}^{\infty} \dfrac{(-1)^n}{n!\ n^2}-\sum\limits_{n=1}^{\infty} \dfrac{1}{n!\ n^2}-\sum\limits_{n=1}^{\infty} \dfrac{\mathcal{H}_{n}}{n!\ n} \end{align*} Now use integration by parts and already obtained result \eqref{first}: \begin{align*} \int\limits_{0}^{\infty} e^{-x}\ln \left(1+x\right)\ln x\,\mathrm{d}x &= \int\limits_{0}^{\infty} e^{-x}\dfrac{\ln x}{1+x}\,\mathrm{d}x + \int\limits_{0}^{\infty} e^{-x}\dfrac{\ln \left(1+x\right)}{x}\,\mathrm{d}x \\ &= e\gamma\operatorname{Ei}(-1)-\dfrac{1}{2}e\left(\gamma^2+\dfrac{1}{12}\pi^2\right)-e\sum\limits_{n=1}^{\infty} \dfrac{(-1)^n}{n!\ n^2}+ \int\limits_{0}^{\infty} e^{-x}\dfrac{\ln \left(1+x\right)}{x}\,\mathrm{d}x \end{align*} So $$ \int\limits_{0}^{\infty} e^{-x}\dfrac{\ln \left(1+x\right)}{x}\,\mathrm{d}x = -\dfrac{1}{2}\gamma^2+\dfrac{1}{4}\pi^2+\gamma\operatorname{Ei}(1)-\sum\limits_{n=1}^{\infty} \dfrac{1}{n!\ n^2}-\sum\limits_{n=1}^{\infty} \dfrac{\mathcal{H}_{n}}{n!\ n} \tag{7} \label{final} $$ Back to original problem With integration by parts and substitution original integral can be converted to \begin{align*} \int\limits_{0}^{e} \operatorname{Li}_2\left(\ln x\right)\,\mathrm{d}x &= \dfrac{1}{6}e\pi^2-\int\limits_{0}^{\infty} e^{-x}\dfrac{\ln \left(1+x\right)}{x}\,\mathrm{d}x+\int\limits_{0}^{1} e^{x}\dfrac{\ln \left(1-x\right)}{x}\,\mathrm{d}x \\ &= \dfrac{1}{6}e\pi^2 + \dfrac{1}{2}\gamma^2-\dfrac{1}{4}\pi^2-\gamma\operatorname{Ei}(1)+\sum\limits_{n=1}^{\infty} \dfrac{1}{n!\ n^2}+\sum\limits_{n=1}^{\infty} \dfrac{\mathcal{H}_{n}}{n!\ n} \\ &+\int\limits_{0}^{1} \dfrac{\ln\left(1-x\right)}{x}\,\mathrm{d}x+\sum\limits_{n=1}^{\infty}\dfrac{1}{n!}\int\limits_{0}^{1} x^{n-1}\ln\left(1-x\right)\,\mathrm{d}x \\ &= \dfrac{1}{6}e\pi^2 + \dfrac{1}{2}\gamma^2-\dfrac{5}{12}\pi^2-\gamma\operatorname{Ei}(1)+\sum\limits_{n=1}^{\infty} \dfrac{1}{n!\ n^2} \end{align*}<|endoftext|> TITLE: Analytic Geometry proof of orthogonality in triangle geometry QUESTION [5 upvotes]: In triangle $ABC$, $AB = AC$, $D$ is the midpoint of $\overline{BC}$, $E$ is the foot of the 
perpendicular from $D$ to $\overline{AC}$, and $F$ is the midpoint of $\overline{DE}$. Prove that $\overline{AF}$ is perpendicular to $\overline{BE}$. My first approach was to align the triangle in the first quadrant, on the $x$-axis, and I started calculating slopes and positions of points. But then things got messy real fast; I'm afraid I'm approaching this problem the wrong way. Is there a better way? No trigonometry just yet! Solutions are greatly appreciated. Thanks in advance! REPLY [4 votes]: Here is a vectorial proof with a little trigonometry at the end. We can transform our objective thus: $$\tag{2}AF \perp BE \ (a) \ \ \ \ \iff \ \ \ 2\vec{AF} \cdot \vec{BE}=0 \ (b)$$ (symbol $\cdot$ meaning "dot product"). The LHS of (2b) can be transformed thus: $$(\vec{AD}+\vec{AE}) \cdot (\vec{BD}+\vec{DE})=\vec{AD}\cdot\vec{DE} + \vec{AE} \cdot \vec{BD} $$ (taking into account the orthogonality of $AD$ and $BD$, and of $AE$ and $DE$). Let $\alpha=\widehat{BAC}=\widehat{DAC}$ and $h$ be the length of altitude $AD$. It is easy to establish that: $$\begin{cases} \vec{AD} \cdot \vec{DE}&=&- h \times h \sin(\alpha) & \ \ \text{(the minus sign is essential)}\\ \vec{AE} \cdot \vec{BD} &=& h \cos(\alpha) \times h \tan(\alpha) & \end{cases}$$ (where $\times$ is the ordinary multiplication of real numbers). Adding these two results gives $0$; we have thus obtained the RHS of (2).<|endoftext|> TITLE: Asymptotic form of Bessel $Y_0(x)$ for small $x$ QUESTION [6 upvotes]: The relevant part of the integral definition of $Y_0$ is $$-\frac{2}{\pi}\int_0^{\infty } e^{-x \sinh t} \, dt$$ which should be asymptotic to $$\frac{2}{\pi}\big( \ln\frac x 2 + \gamma \big)$$ where $\gamma$ is the Euler-Mascheroni constant. How can I evaluate that integral for small $x$? It would be sufficient to find the logarithmic term, but I would also be interested in where the Euler-Mascheroni constant comes from. REPLY [3 votes]: The constant might be fixed in the following way: denote the integral in question by $D=\frac{\pi C}{2}=\color{blue}{J_1}+\color{red}{J_2}$ with $$ \color{blue}{J_1}=\color{blue}{\int_0^1\frac{1-e^{-z}}{z}\,\mathrm{d}z},\quad\color{red}{J_2}=\color{red}{-\int_1^{\infty}\frac{e^{-z}}{z}\,\mathrm{d}z} $$ Let us start with $\color{blue}{J_1}$ and integrate by parts: $$ \color{blue}{J_1}=\color{blue}{\lim_{\epsilon\rightarrow0}\left(\log{\epsilon}-\int_{\epsilon}^1\frac{e^{-z}}{z}\,\mathrm{d}z\right)}=\color{blue}{\lim_{\epsilon\rightarrow0}\left(\log{\epsilon}-\log{\epsilon}-\int_{0}^1\log(z)e^{-z}\,\mathrm{d}z\right)}=\color{blue}{-\int_{0}^1\log(z)e^{-z}\,\mathrm{d}z} $$ Integrating now $\color{red}{J_2}$ by parts yields $$ \color{red}{J_2}=\color{red}{-\int_{1}^{\infty}\log(z)e^{-z}\,\mathrm{d}z} $$ therefore $$ -D=-(\color{blue}{J_1}+\color{red}{J_2})=\color{blue}{\int_{0}^1\log(z)e^{-z}\,\mathrm{d}z}+\color{red}{\int_{1}^{\infty}\log(z)e^{-z}\,\mathrm{d}z}={\int_{0}^{\infty}\log(z)e^{-z}\,\mathrm{d}z} $$ or $$ D=\gamma $$ which implies $$ C=\frac{2 \gamma}{\pi} $$<|endoftext|> TITLE: Show series $1 - 1/2^2 + 1/3 - 1/4^2 + 1/5 - 1/6^2$ ... does not converge QUESTION [5 upvotes]: I was wondering if my proof is correct, and if there are any better alternative proofs, or maybe proofs that use nice tricks I might need in the future. $$1 - \frac{1}{2^2} + \frac{1}{3} - \frac{1}{4^2} + \frac{1}{5} - \frac{1}{6^2} \ldots = \sum_{n =1}^\infty \left(\frac{1}{2n + 1} - \frac{1}{(2n + 2)^2}\right) + 1 - \frac{1}{2^2}$$ Now we know that $$\sum_{n = 1}^\infty \frac{1}{2n + 1}$$ diverges and $$\sum_{n = 1}^\infty -\frac{1}{(2n + 2)^2}$$ converges. Hence their sum diverges (I proved this fact). Hence, the series diverges.
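(For what it's worth, a quick numerical experiment is consistent with divergence: the partial sums creep upward like $\frac12\ln N$. A throwaway sketch in Python, added here as an illustration rather than a proof:)

```python
# Partial sums of 1 - 1/2^2 + 1/3 - 1/4^2 + ... :
# the odd terms contribute a divergent half-harmonic series, the even
# terms a convergent one, so s_N grows like (1/2) ln N plus a constant.
import math

s = 0.0
for n in range(1, 10**6 + 1):
    s += 1.0 / n if n % 2 else -1.0 / n**2
    if n in (10**2, 10**4, 10**6):
        print(n, s, 0.5 * math.log(n))
```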
Any obvious mistake or better way of tackling it? Maybe using partial sums, since I am clueless how to use them. REPLY [3 votes]: First, there seems to be quite a bit of confusion in the comments concerning "grouping". Let's take a look at it, essentially: \begin{align} 1 - \frac{1}{2^2} + \frac{1}{3} - \frac{1}{4^2} + \frac{1}{5} - \frac{1}{6^2} +\cdots &\stackrel{?}{=} \left(1 - \frac{1}{2^2}\right) + \left(\frac{1}{3} - \frac{1}{4^2}\right) + \left(\frac{1}{5} - \frac{1}{6^2}\right) + \cdots \\ &= \sum_{n=0}^\infty \left(\frac{1}{2n + 1} - \frac{1}{(2n + 2)^2}\right). \end{align} Now, when you say that you're "not allowed to group", I guess you're right in that you've generated a somewhat different series but the two series are concretely related. In particular, if the left side converges, then the right side must also converge. This is a special case of the fact that, if a sequence converges, then any subsequence of that sequence also converges. Specifically, if we let $s_n$ denote the $n^{\text{th}}$ partial sum of the series on the left and we let $$S_n = \sum_{k=0}^n \left(\frac{1}{2k + 1} - \frac{1}{(2k + 2)^2}\right)$$ denote the $n^{\text{th}}$ partial sum of the series on the right, then $S_n = s_{2n+2}$. Thus, $S_n$ is a subsequence of $s_n$ and, if $s_n$ converges then $S_n$ must converge to the same limit. Taking the contrapositive, if $S_n$ diverges, then $s_n$ must also diverge. I think you make a mistake, though, at the next step by breaking the series up into two series. You are essentially rearranging the series, which is only valid when the series is absolutely convergent. The approach at this point is to simply combine the fractions to get $$\frac{1}{2n + 1} - \frac{1}{(2n + 2)^2} = \frac{4 n^2+6 n+3}{4 (n+1)^2 (2 n+1)},$$ to which the limit comparison test is easily applicable. This is exactly the approach I took in my answer to this question.<|endoftext|> TITLE: If $f$ is differentiable on $[1,2]$, then $\exists \alpha\in(1,2): f(2)-f(1) = \frac{\alpha^2}{2}f'(\alpha)$ QUESTION [6 upvotes]: If $f$ is differentiable on $[1,2]$, then $\exists \alpha\in(1,2) : f(2)-f(1) = \frac{\alpha^2}{2}f'(\alpha)$ I really would like some hint. I noticed that the equation can be written $$\int_1^2 f'(x)\,\mathrm{d}x = f'(\alpha)\int_0^{\alpha} x\,\mathrm{d}x$$ EDIT: I confused the theorems. I guess I have to apply the Mean Value Theorem, but I don't know how. REPLY [5 votes]: Let $ g(x) = f(1/x) $ for $ x \in [1/2, 1] $. Applying the mean value theorem, there is a $ c \in (1/2, 1)$ with $$ g'(c) = \frac{g(1) - g(1/2)}{1/2} = 2 \; ( g(1) - g(1/2) ) $$ Rewrite this in terms of $ f $, using $ g'(c) = -\frac{1}{c^2} f'\left(\frac{1}{c} \right) $, and $ g(1) = f(1) $, and $ g(1/2) = f(2) $, to find: $$ \frac{1}{c^2} f'\left(\frac{1}{c} \right) = 2 (f(2) - f(1) ) $$ Set $\alpha = 1/c $ so that $ \alpha \in (1,2) $ and divide by $ 2 $ to find: $$ \frac{1}{2} \alpha^2 f'(\alpha) = f(2) - f(1) $$ REPLY [2 votes]: Hint: Consider the function $g(x)= f'(x)- \frac{2(f(2)-f(1))}{x^2}$ in the interval $[1,2]$. Now if you integrate, you'll find that $G(x)=\int_1^x g(t)dt= f(x)+\frac{2(f(2)-f(1))}{x}+f(1)-2f(2)$ Now it is easy to see that $ G(1)=G(2)=0$, so according to Rolle's theorem there is a root of $G'(=g)$ in $(1,2)$.<|endoftext|> TITLE: Should these symbols be italicised? QUESTION [5 upvotes]: Differentials should not be italicised according to ISO standards: $$ \int f\,dx \Rightarrow \int f\,\mathrm{d}x $$ But should these symbols be italicised?
Continuous functions: $C(X,Y)$ vs $\mathrm C(X,Y)$ Imaginary unit: $i$ vs $\mathrm i$ Lp spaces: $L^p(\mu)$ vs $\mathrm L^p(\mu)$ Subscripts: $C_c(X)$ vs $C_{\mathrm c}(X)$, $\lVert\cdot\rVert_u$ vs $\lVert\cdot\rVert_{\mathrm u}$ REPLY [2 votes]: Language is determined by actual usage, not by "authorities" (who are generally viewed by linguists as describers, not prescribers). See, for example, What Is 'Correct' Language, from the Linguistic Society of America. Technical language is no different in this regard from ordinary language. From what I've seen, mathematicians, at least in the U.S., tend to use an italicized $d$ in a differential (and this can be seen in the house style in many mathematics journals), whereas engineers and physicists generally use a roman $\mathrm{d}$ (the latter in accordance with what ISO says is the standard). As for the specific symbols you mentioned, I would look at common usage in the journals that you would like to publish in; also, take geography into account because there are regional differences in typography as well. One final remark: Use your own personal sense of esthetics too.<|endoftext|> TITLE: Evaluating contour integral along the boundary of the fundamental domain of $SL_2(\mathbb{Z})$ near poles QUESTION [5 upvotes]: Background: I'm working through Serre's introduction to modular forms in A Course in Arithmetic, Ch. 6, $\S 3$, where we prove the weighted sum of the count of poles and zeros on the fundamental domain is $k/12$ ($k/6$ by Serre's notation). The contour integral is broken into pieces, some of which are arcs of circles. I don't understand why the contour integrals along these arcs are proportional to the portion of the circumference the path takes. Question: Let $f$ be a modular function of weight $2k$. Let $C$ be a negatively-oriented circular path centered at $\rho=e^{2\pi i/3}$ with radius $r>0$. For sufficiently small $r$ (we in fact take the limit as $r\to 0$), the closed contour integral $\frac{1}{2i\pi}\int_C \frac{df}{f}=-v_\rho(f)$, but I don't see why evaluating only along 1/6 of the circumference produces 1/6 the result - for instance, if $a=\rho+ir$ and we integrate clockwise along the arc of a circle of radius $r$ centered at $\rho$ to $b$ on the unit circle, $$\frac{1}{2i\pi}\lim_{r\to 0}\int_{a}^{b}\frac{df}{f}=-\frac{1}{6}v_\rho(f).$$ I'm not sure if this is a general property of complex analysis - because the radius is going to zero, the value of the function becomes constant along the contour, so the integral becomes linear in arc length? - or a property of modular functions in particular. REPLY [2 votes]: Here's (most of) the demonstration, with thanks to user1952009 for the clue. Let $f$ be a modular function of weight $k$. Note first that, for $g=\begin{bmatrix} a & b \\ c & d\end{bmatrix}\in G$, the full modular group, $$\frac{df(gz)}{f(gz)}=\frac{d[(cz+d)^kf(z)]}{(cz+d)^kf(z)}=\frac{kc\,dz}{cz+d}+\frac{df}{f}.$$ In particular, for $T=\begin{bmatrix} 1 & 1 \\ 0 & 1\end{bmatrix}$, $\frac{df(Tz)}{f(Tz)}=\frac{df}{f}$, for $S=\begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}$, $\frac{df(Sz)}{f(Sz)}=\frac{k\,dz}{z} + \frac{df}{f}$, for $ST=\begin{bmatrix} 0 & -1 \\ 1 & 1\end{bmatrix}$, $\frac{df(STz)}{f(STz)}=\frac{k\,dz}{z+1} + \frac{df}{f}$, and for $(ST)^2=\begin{bmatrix} -1 & -1 \\ 1 & 0 \end{bmatrix}$, $\frac{df((ST)^2z)}{f((ST)^2z)}=\frac{k\,dz}{z} + \frac{df}{f}$. We demonstrate the case of $z=\rho$, but the case is easier for $z=i$.
We wish to determine the value of the sum of the integrals along the two arcs of radius $r$ near $\rho$ and $-\bar{\rho}$ in the fundamental domain $D$ of the full modular group $G$. Let the arc near $\rho$ be from $a_0$ to $a_1$ on the unit circle and the arc near $-\bar{\rho}$ be from $b_0$ on the unit circle to $b_1$. \begin{align*} \int_{a_0}^{a_1}\frac{df}{f}+\int_{b_0}^{b_1}\frac{df}{f} &= \int_{a_0}^{a_1}\frac{df}{f}+\int_{T(b_0)}^{T(b_1)}\frac{df(Tz)}{f(Tz)} \\ &= \int_{a_0}^{a_1}\frac{df}{f}+\int_{b_0-1}^{b_1-1}\frac{df}{f} \\ &= \int_{b_0-1}^{a_1}\frac{df}{f} \end{align*} since the sum of integrals is precisely over the arc from $b_0-1$ to $a_1$ on the circle $C_r(\rho)$ of radius $r$ centered at $\rho$ (note that $b_1-1=a_0$). It is the value of this last integral as $r\to 0$ that we are seeking. We show that taking the image of this arc under $ST$ and $(ST)^2=(ST)^{-1}$ forms a closed loop. Note $ST(b_0-1)=\frac{-1}{b_0-1+1}=-1/b_0=a_0$ and $(ST)^2(a_1)=T^{-1}S^{-1}(a_1)=T^{-1}(b_0)=b_0-1$, so the endpoint of the arc is the starting point of the image of the arc under $ST$, and thus the starting and ending points of the image under $ST$ and image under $(ST)^2$ align. We use the fact that the action of $G$ is one to one, and that our original arc lies in $D\cup T^{-1}D$, and the intersection of that region with its image under $ST$ and $(ST)^2$ occurs only on the boundary, to demonstrate these three paths together form a closed loop with no self-intersections. Note also that $ST$ is a bounded transformation near $\rho$, that is, given $r>0$, there exists a $\delta>0$ such that $\left|\rho-z\right|<\delta$ implies $\left|\rho-STz\right|<r$.<|endoftext|> TITLE: Why does a unitary in the Calkin algebra always lift to a (co-)isometry? QUESTION [5 upvotes]: Cf. the title, consider a separable infinite-dimensional Hilbert space H, and the short exact sequence $$0 \to \mathcal{K}(H) \to \mathcal{B}(H) \to \mathcal{Q}(H) \to 0,$$where $\mathcal{K}(H)$ is the compact operators and $\mathcal{Q}(H)$ is the quotient, also known as the Calkin algebra. Let $u$ be a unitary in $\mathcal{Q}(H)$. This means that $u = T + \mathcal{K}$, for a $T \in \mathcal{B}(H)$ where the differences $TT^* - I$ and $T^*T - I$ are compact. How can one show that $u$ lifts to an isometry, or a co-isometry? That is, why is there an isometry (or co-isometry) $S \in \mathcal{B}(H)$ such that $\pi (S) = u$, where $\pi$ denotes the quotient mapping? (Equivalently, $S - T$ is compact.) This is an exercise in Rørdam's book on K-theory. In the previous part of the exercise one shows that whenever $E$ and $F$ are projections in $\mathcal{B}(H)$ with $\operatorname{Rank}(E) \leq \operatorname{Rank}(F)$ there exists a partial isometry $V$ with $V^*V = E$ and $VV^* \leq F$. I am assuming that one should use this somehow, possibly together with some results about the index map $\delta_1$, but I am really stuck on this one. Any help is appreciated! REPLY [7 votes]: Here is an argument that does not appeal to index theory and uses the suggestion in the book. Let $T$ be such that $\pi(T)=u$. Write $T=V|T|$ for the polar decomposition of $T$. Then, $V$ is a partial isometry, i.e. $V^*V$ and $VV^*$ are projections. Since $\pi(T^*T)=\pi(I)$ and $\pi$ is a $*$-homomorphism, $\pi(|T|)=\pi ((T^*T)^{1/2})=\pi(I)$. It follows that $\pi(V)=\pi(T)$. We have that $\pi (V) $ is a unitary and $V^*V$ is a projection, so $I-V^*V$ is a compact projection; thus, finite-rank. Similarly with $I-VV^*$. Now, if $V^*V=I$ or $VV^*=I$, then $V$ is, respectively, an isometry or a co-isometry.
If $I-V^*V$ and $I-VV^*$ are both nonzero, we have two cases: $\dim\text{Rank}(I-V^*V)\leq\dim\text{Rank}(I-VV^*)$. By the previous exercise in the book, let $W$ be a partial isometry with $W^*W=I-V^*V$ and $WW^*\leq I-VV^*$. By conjugating the inequality with $V^*$, we get $$ 0=V^*(I-VV^*)V=V^*WW^*V. $$ It follows that $W^*V=0$. Then $V+W$ is an isometry, since $$ (V+W)^*(V+W)=V^*V+W^*W+W^*V+V^*W=V^*V+W^*W=I. $$ $\dim\text{Rank}(I-V^*V)\geq\dim\text{Rank}(I-VV^*)$. Similar to the previous case, now the isometry will be $V^*+W^*$. Finally, note that since $W^*W$ is finite rank, $$0=\pi(W^*W)=\pi(W)^*\pi(W),$$ so $\pi(W)=0$. Then $$ \pi(V+W)=\pi(V)=\pi(T)=u. $$<|endoftext|> TITLE: Prove that $\sum\limits_{cyc}\sqrt[3]{a^2+4bc}\geq\sqrt[3]{45(ab+ac+bc)}$ QUESTION [12 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers. Prove that: $$\sqrt[3]{a^2+4bc}+\sqrt[3]{b^2+4ac}+\sqrt[3]{c^2+4ab}\geq\sqrt[3]{45(ab+ac+bc)}$$ A big problem in this inequality occurs around $(1,1,0)$. I tried Holder: $$\left(\sum\limits_{cyc}\sqrt[3]{a^2+4bc}\right)^3\sum_{cyc}(a^2+4bc)^3(ka+b+c)^4\geq\left(\sum\limits_{cyc}(a^2+4bc)(ka+b+c)\right)^4$$ Thus, it remains to prove that $$\left(\sum\limits_{cyc}(a^2+4bc)(ka+b+c)\right)^4\geq45(ab+ac+bc)\sum_{cyc}(a^2+4bc)^3(ka+b+c)^4,$$ which is false for all $k\geq0$. Of course, we can use Holder with $(ka^2+b^2+c^2+mab+mac+nbc)^4$, but I think in this way even uvw will not help. I have a proof of the following inequality. Let $a$, $b$ and $c$ be non-negative numbers and $k=8\cos^340^{\circ}.$ Prove that: $$\sqrt[3]{a^2+kbc}+\sqrt[3]{b^2+kac}+\sqrt[3]{c^2+kab}\geq\sqrt[3]{9(1+k)(ab+ac+bc)},$$ but it is not so comforting. Thank you! REPLY [6 votes]: Assume that $ab+bc+ca > 0$. Rewrite the inequality as $\sqrt[3]{u} + \sqrt[3]{v} + \sqrt[3]{w} \ge 3$ where $$u = \frac{27(a^2+4bc)}{45(ab+bc+ca)}, \ v = \frac{27(b^2+4ca)}{45(ab+bc+ca)}, \ w = \frac{27(c^2+4ab)}{45(ab+bc+ca)}.$$ We will use the fact that $$\sqrt[3]{x} \ge \frac{3x(5+4x)}{5x^2+20x+2}, \ x \ge 0$$ which follows from $$x - \Big( \frac{3x(5+4x)}{5x^2+20x+2}\Big)^3 = \frac{x(125x^2+272x+8)(x-1)^4}{(5x^2+20x+2)^3} \ge 0.$$ Using the fact above, it suffices to prove that $$\frac{3u(5+4u)}{5u^2+20u+2} + \frac{3v(5+4v)}{5v^2+20v+2} + \frac{3w(5+4w)}{5w^2+20w+2} \ge 3$$ or $f(a,b,c) \ge 0$ where $f(a,b,c)$ is a homogeneous polynomial. We use the Buffalo Way. WLOG, assume that $c = \min(a,b,c)$. There are two possible cases: 1) $c \le b\le a$: Let $b = c + s, \ a = c+s+t; \ s, t\ge 0$. Note that $f(c+s+t, c+s, c)$ is a polynomial with non-negative coefficients. True. 2) $c \le a \le b$: Let $a = c + s, \ b = c + s+t; \ s, t \ge 0$. Note that $f(c+s, c+s+t, c)$ is a polynomial with non-negative coefficients. True. We are done.<|endoftext|> TITLE: Binomial Theorem for Fractional Powers QUESTION [5 upvotes]: We know that the binomial theorem and expansion extend to powers which are non-integers. For integer powers the expansion can be proven easily as the expansion is finite. However what is the proof that the expansion also holds for fractional powers? A simple and intuitive approach would be appreciated. REPLY [3 votes]: You could calculate, for example, $(1+x)^{1/2}=a_0+a_1x+a_2x^2+\cdots$ by squaring both sides and comparing coefficients. For example we can get the first three coefficients by ignoring all degree $3$ terms and higher: $$1+x=a_0^2+2a_0a_1x+2a_0a_2x^2+a_1^2x^2+\cdots$$ From here we can conclude that $a_0=\pm1$ (we'll take $+1$ to match what happens when $x=0$).
Then comparing coefficients of $x$ we have $2a_1=1$, so $a_1=1/2$. Finally, comparing coefficients of $x^2$, we have $2a_0a_2+a_1^2=0$, so $2a_2+1/4=0$ and $a_2=-1/8$. You can definitely get as many coefficients as you want this way, and I trust that you can even derive the binomial coefficient formula. However, this is not any easier than the Taylor series, where you take $(1+x)^{1/2}=a_0+a_1x+a_2x^{2}+\cdots$ and find the coefficients by saying the $n$th derivatives on both sides have to be equal at $0$. For example, plugging in $0$ on both sides we conclude $a_0=1$. Calculating the first derivative of both sides, we have $$\frac{1}{2}(x+1)^{-1/2}=a_1+2a_2x+\cdots$$ Plugging in $0$, we get $a_1=1/2$. Taking the derivative one more time, we see $$(-1/2)(1/2)(1+x)^{-3/2}=2a_2+\cdots$$ Plugging in $x=0$, we have $(-1/2)(1/2)=2a_2$, or $a_2=-1/8$. The advantage to this way is that it is much easier to see the pattern of coefficients! Unfortunately, there is a big hole in both arguments. They will give you what the coefficients have to be, but they won't prove that the series expansion converges in the first place. We started off by assuming you could write $(1+x)^{1/2}$ as an infinite power series, but there is no guarantee that this exists, and actually it doesn't converge unless $|x|<1$, which we never used. So you need to estimate the error in Taylor's formula to complete the proof rigorously.<|endoftext|> TITLE: Why does this particular ratio of prime numbers seem to converge to 3? QUESTION [8 upvotes]: I noticed this interesting property of prime numbers, and I'd like to know if it has an explanation/proof/disproof. Define $p(n)$ to be the $n$'th prime number. Define the following sequence: $$\Sigma(n) = \begin{cases} p(1), & \text{if $n=1$} \\ \Sigma(n-1)+p(n), & \text{if $n>1$ and $\Sigma(n-1)-p(n)<0$} \\ \Sigma(n-1)-p(n), & \text{otherwise} \end{cases}$$ The first few elements of the sequence are: $2,5,0,7,18,5,22,3,26,55$. Now, in $\Sigma(n)$ let's look at all indices $n$ such that $\Sigma(n-1)<\Sigma(n)<\Sigma(n+1)$. These indices also form a sequence, which I'll denote by $a(k)$. Here are its first elements: $4,9,22,57,146,367,946,2507$. So, what I noticed is that these two limits seem to hold: $$\lim_{k\to \infty}\frac{p(a(k+1))}{p(a(k))} = 3$$ $$\lim_{k\to \infty}\frac{\Sigma(a(k+1))}{\Sigma(a(k))} = 3$$ Here is a graph of the former of these ratios [figure omitted]. Of course, these are only empirical findings. Do you have other reasons to believe that they are true? REPLY [2 votes]: Say for some $n$: $$\Sigma(n-1)<\Sigma(n)<\Sigma(n+1)$$ Then: $$\Sigma(n+1)=\Sigma(n-1)+p(n)+p(n+1)\geq p(n)+p(n+1)$$ Suppose $n>25$. Now $p(n)>\dfrac{5}{6}p(n+1)$ and $p(n+2)<\frac65p(n+1)<\frac{36}{25}p(n)$. Hence: $$p(n)+p(n+1)>\Big(\frac56+1\Big)p(n)>\frac{36}{25}p(n)>p(n+2)$$ So $\Sigma(n+1)>p(n+2).$ Therefore $\Sigma(n)$ cannot increase more than $2$ times in a row for $n>25$. Say for some $n$: $$\Sigma(n-1)>\Sigma(n)>\Sigma(n+1)$$ Then: $$\Sigma(n-1)\geq p(n)+p(n+1)>p(n-1)+p(n)$$ Therefore $\Sigma(n-2)>p(n-1)$ and $\Sigma(n-1)=\Sigma(n-2)-p(n-1)$. So: $$\Sigma(n-1)>\Sigma(n)>\Sigma(n+1)\implies \Sigma(n-2)>\Sigma(n-1)>\Sigma(n)$$ But since $\Sigma(n)$ decreases somewhere (for instance, $\Sigma(3)<\Sigma(2)$), there is no $n$ such that $\Sigma(n-1)>\Sigma(n)>\Sigma(n+1)$, and hence $\Sigma$ never decreases twice in a row. This confirms what Ivan Neretin says, namely that the pattern will be +,-,+,-,...
and occasionally +,+,-, when $\Sigma(n-1)+p(n)<p(n+1)$.<|endoftext|> TITLE: Prove that $\frac{p^p-1}{p-1}$ is not prime if $p \equiv 1 \pmod 4$ QUESTION [16 upvotes]: Let $p$ be a prime number such that $p \equiv 1 \pmod 4$. Prove that $\frac{p^p-1}{p-1}$ is not prime. We can rewrite $\frac{p^p-1}{p-1}$ as $$\dfrac{p^p-1}{p-1} = 1+p+p^2+\cdots+p^{p-1},$$ but how do we show this is not prime? REPLY [2 votes]: We may notice that $\frac{p^p-1}{p-1}=\Phi_p(p)$, where $\Phi_p$ is the $p$-th cyclotomic polynomial. If we assume that for some prime $q$ we have $\Phi_p(x)\equiv 0\pmod{q}$, then $x$ has order $p$ in $\mathbb{Z}/(q\mathbb{Z})^*$, hence $p\mid(q-1)$, or $q\equiv 1\pmod{p}$, by Lagrange's theorem. Additionally, the constraint $p\equiv 1\pmod{4}$ ensures that $\Phi_p(p)$ has an Aurifeuillean factorization.<|endoftext|> TITLE: Why do row replacement operations change the eigenvalues/eigenvectors but not the determinant? Specifically adding/subtracting rows. QUESTION [6 upvotes]: Sorry for asking what may be a stupid question, but I'm really struggling conceptually to understand why adding and subtracting rows in a matrix changes the eigenvalues and eigenvectors but not the determinant. I know that scaling and swapping rows changes both, but I can't find anything on adding and subtracting. The question I was studying was true/false: A row replacement operation on $A$ does not change the eigenvalues. I looked on numerous sites and they all said false, but none of them had any justification. REPLY [3 votes]: Well, one thing that may happen when you sum rows is that, on a triangular matrix with a zero diagonal entry, you may possibly change the number of zero diagonal entries. This makes new eigenvalues appear: consider the matrices $$\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\quad \begin{pmatrix}0&1\\ 0&1\end{pmatrix}$$ It does not change the determinant, though, because the determinant is multilinear and alternating on the rows, so for instance $$\det \begin{pmatrix} \color{blue}{R_1}+\color{red}{\lambda R_2}\\ \color{red}{R_2}\\\vdots\\ R_m\end{pmatrix}=\det\begin{pmatrix} \color{blue}{R_1}\\ R_2\\\vdots\\ R_m\end{pmatrix}+\color{red}{\lambda}\det\begin{pmatrix} \color{red}{R_2}\\ \color{red}{R_2}\\\vdots\\ R_m\end{pmatrix}=\det\begin{pmatrix} R_1\\ R_2\\\vdots\\ R_m\end{pmatrix}+\color{red}0$$<|endoftext|> TITLE: Product Rule for Ito Processes QUESTION [14 upvotes]: Is it true that if $X(t)$ is an Ito process and $p(t)$ is non-stochastic, then the ordinary product rule applies, that is, $$d(X(t)p(t)) = dX(t)p(t) + X(t)p'(t)dt?$$ REPLY [9 votes]: This can be obtained directly from Ito's product rule: $$ d(X(t)Y(t)) = X(t)dY(t) + Y(t)dX(t) + dX(t)dY(t) $$ For illustration, write $Y(t) = p(t)$ and assume $dX(t)$ and $dY(t)$ have the form: $$ d(X_t) = \mu_1dt + \sigma_1dW_t \\ d(Y_t) = \mu_2dt $$ Since your $p(t)$ is non-stochastic, it only has a derivative w.r.t. time, and thus the final term is: $$ dX(t)dY(t) = \mu_1dt(\mu_2dt) + \sigma_1dW_t(\mu_2dt) = 0 $$ as a result of the cross term and the quadratic variation of time. One other way to understand the product rule is to use the two-dimensional Ito formula and let $f(t,x,y) = xy$.<|endoftext|> TITLE: Axiom of choice implies law of excluded middle QUESTION [20 upvotes]: Diaconescu's Theorem states that AC implies the Law of Excluded Middle. Essentially the proof goes by defining $A = \{x \in \{0,1\} : x = 0 \lor p\}$, and $B = \{x \in \{0,1\} : x = 1 \lor p\}$, for a given proposition $p$, and defining a choice function $f : \{A,B\} \rightarrow \{0,1\}$.
We then show that no matter what values $f$ takes, either $p$ or $\lnot p$ holds. The full proof can be found here. What confuses me is why the Axiom of Choice is needed, as we are only choosing from two sets, and finite choice is provable in standard ZF. In addition, I've heard that countable choice does not imply excluded middle, but clearly we are not choosing from more than countably many sets in this proof. REPLY [8 votes]: The background theory is an important issue. It is not quite right to say that the law of the excluded middle is provable from the axiom of choice. There are two important details: We are talking about provability in constructive set theories that include separation axioms for undecidable formulas. We are talking about the axiom of choice as expressed in set theory. There are other constructive systems, such as constructive type theories, where the relevant form of the axiom of choice does not imply the law of the excluded middle. The implication in Diaconescu's theorem is particular to constructive set theory. Separately, as described in the Stanford Encyclopedia article at http://plato.stanford.edu/entries/set-theory-constructive/index.html#ConChoPri , the axiom of countable choice does not imply the law of excluded middle even in constructive set theory. If we try to apply the axiom of countable choice to the set in Diaconescu's theorem, nothing odd happens, because it is easy to write a choice function if we view $\{A, B\}$ as a sequence of two sets. The trick in Diaconescu's theorem is that if we apply choice to the family $\{A, B\}$, the choice function has to be extensional, and the theorem leverages the extensionality. Replacing the family with a sequence of two sets $A$ and $B$ makes it easier to write a formula for an extensional choice function.<|endoftext|> TITLE: Brezis' exercise 4.19 QUESTION [5 upvotes]: I'm having trouble with problem 4.19 of Brezis' book in functional analysis (Functional Analysis, Sobolev Spaces and Partial Differential Equations). It asks for a sequence $f_n\geq 0$ in $L^1(0,1)$ and a function $f\in L^1(0,1)$ such that $f_n \to f$ weakly $\sigma(L^1,L^\infty)$; $||f_n||_1 \to ||f||_1$; $||f_n-f||_1 \not \to 0$. Which sequences $f_n$ work? I know that I cannot choose $f=0$, because then statement 2 would contradict statement 3. REPLY [8 votes]: Consider $r_n \colon \mathbb{R} \to \mathbb{R}$ given by $r_0(x) = (-1)^{\lfloor x\rfloor}$ and $r_n(x) = r_0(2^n\cdot x)$. Let $g_n = r_n\lvert_{(0,1)}$. Then show that $g_n \to 0$ weakly, and use these functions to construct the desired sequence $(f_n)$. Accepting that $g_n \to 0$ weakly for the moment, one sees that for every $f\in L^1(0,1)$ with $f(x) \geqslant 1$ for all $x$, by setting $f_n = f + g_n$ we have a sequence $f_n \geqslant 0$ in $L^1(0,1)$ with $f_n \to f$ weakly, and $\lVert f_n - f\rVert_1 = \lVert g_n\rVert_1 = 1$ for all $n$. For $n \geqslant 1$, we also have $$\lVert f_n\rVert_1 = \int_0^1 f_n(x)\,dx = \int_0^1 f(x)\,dx + \int_0^1 g_n(x)\,dx = \int_0^1 f(x)\,dx = \lVert f\rVert_1.$$ So it remains to see that $g_n \to 0$ weakly. First we show that $$\int_0^1 g_n(x)h(x)\,dx \to 0\tag{1}$$ for all $h \in C([0,1])$. Since $[0,1]$ is compact, every $h\in C([0,1])$ is uniformly continuous. Given $\varepsilon > 0$, we can therefore find a $\delta > 0$ such that $\lvert x-y\rvert \leqslant \delta \implies \lvert h(x) - h(y)\rvert \leqslant \varepsilon$. Then let $n \geqslant 1$ be such that $2^{-n} \leqslant \delta$.
For $0 < k < 2^n$ we have \begin{align} \biggl\lvert\int_{2^{-n}(k-1)}^{2^{-n}(k+1)} g_n(x)h(x)\,dx\bigr\rvert &= \biggl\lvert\int_{2^{-n}(k-1)}^{2^{-n}(k+1)}g_n(x)\bigl(h(x) - h(2^{-n}k)\bigr)\,dx\bigr\rvert\\ &\leqslant \int_{2^{-n}(k-1)}^{2^{-n}(k+1)} \lvert g_n(x)\rvert\cdot\lvert h(x) - h(2^{-n}k)\rvert\,dx\\ &\leqslant 2^{1-n}\varepsilon. \end{align} Summing over all odd $k$ between $0$ and $2^n$, we obtain $$\biggl\lvert \int_0^1 g_n(x)h(x)\,dx\biggr\rvert \leqslant \varepsilon.$$ Thus $(1)$ is proved. Next we take an arbitrary $h \in L^\infty(0,1)$. Given $\varepsilon > 0$, by Luzin's theorem there is a $h_\varepsilon \in C([0,1])$ with $\lVert h_\varepsilon\rVert_\infty \leqslant \lVert h\rVert_\infty$ such that $\lambda(\{ x : h_\varepsilon(x) \neq h(x)\}) < \varepsilon$. Then we have \begin{align} \biggl\lvert \int_0^1 g_n(x)h(x)\,dx\biggr\rvert &\leqslant \biggl\lvert \int_0^1 g_n(x)h_\varepsilon(x)\,dx\biggr\rvert + \biggl\lvert \int_0^1 g_n(x)\bigl(h(x) - h_\varepsilon(x)\bigr)\,dx\biggr\rvert\\ &\leqslant \biggl\lvert \int_0^1 g_n(x)h_\varepsilon(x)\,dx\biggr\rvert + \biggl\lvert \int_{\{ x : h_\varepsilon(x) \neq h(x)\}} g_n(x)\bigl(h(x) - h_\varepsilon(x)\bigr)\,dx\biggr\rvert\\ &\leqslant \biggl\lvert \int_0^1 g_n(x)h_\varepsilon(x)\,dx\biggr\rvert + \int_{\{ x : h_\varepsilon(x) \neq h(x)\}}\lvert h(x) - h_\varepsilon(x)\rvert\,dx\\ &\leqslant \biggl\lvert \int_0^1 g_n(x)h_\varepsilon(x)\,dx\biggr\rvert + 2\lVert h\rVert_\infty \varepsilon, \end{align} and hence $$\limsup_{n\to\infty}\: \biggl\lvert \int_0^1 g_n(x)h(x)\,dx\biggr\rvert \leqslant 2\lVert h\rVert_\infty \varepsilon.$$ Since that holds for all $\varepsilon > 0$, we conclude $$\lim_{n\to\infty} \int_0^1 g_n(x) h(x)\,dx = 0,$$ as desired.<|endoftext|> TITLE: "Algebrization" of Analytic Concepts QUESTION [5 upvotes]: Please excuse my inventing new words in the title. I've been studying algebraic geometry, and one of my favourite parts of the subject (at least, before schemes) is the way that one can describe, or even define, the tangent space at a point $P$ of a variety $V$ purely algebraically, as the (dual space of) $\mathfrak{m}/\mathfrak{m}^2$, where $\mathfrak{m}$ is the unique maximal ideal of the local ring $\mathcal{O}_{V,P}$. Are there other instances of concepts which are traditionally analytic in nature, but have been given alternative algebraic descriptions? My search yielded little beyond differential algebra, which seems to be an underdeveloped subject of study, but I would think that if one wanted to "re-imagine" analysis using abstract algebra, this would be a good place to start - taking the interesting algebraic properties of derivative (such as the product rule) and building them into the definition of some new algebraic object. Algebraic analysis is apparently also a subject of study, but information is scarce. Conversely, are there instances of algebraic concepts being re-cast only in terms of analysis? REPLY [2 votes]: Plenty of things can be done in algebra. Completion is typically an analytical thing but it can be done through algebra. Let $A$ be some algebraic structure and for all $i\in\Bbb N$ we have that $S_i$ is a substructure such that we can make a quotient $A/S_i$, or if we go more exotic we would deal with congruence relations, such that $S_i\supseteq S_{i+1}$. 
Anyhow, using this we can deal with Cauchy sequences by defining, in additive notation, that a sequence $(x_i)$ is Cauchy if for a given $k$ there always exists an $N$ such that for $i,j>N$ we have $x_i-x_j\in S_k$. Through this we can complete rings, groups, algebras, modules and much else. This is a simple way of getting the $p$-adic numbers. This is also reversible, as the sequence of substructures induces a natural metric. The derivative, mentioned before, is another example.<|endoftext|> TITLE: Determinant of the derivative of a map between manifolds QUESTION [8 upvotes]: Ok, suppose $M,N$ are Riemannian manifolds and $F:M\to N$ is a smooth map between them. In a book I have here they consider that $\dim M=m \geq \dim N=n$ and that $x\in M$ is a regular point such that the derivative $DF(x):T_x M\to T_{F(x)}N$ is surjective. With these conditions, note that $\ker DF(X)^\perp$ is isomorphic to $T_{F(x)}N$. So far so good; the problem comes when they start to talk about the determinant of $DF(x)$. As far as I know, we talk about determinants in the context of matrices. So here started my trouble: since $DF(x)$ is a linear map between abstract spaces, there is no obvious way to make $DF(x)$ into a matrix. After a search on Google, I got these two definitions, which I wanted to share here to be sure they are valid. For the first one, fortunately, I had some knowledge of differential forms, so I didn't get so confused when seeing it. Definition 1: given any basis $(v_1,\ldots,v_n)$ of $\ker DF(x)^\perp$, we define $$\det (DF(x)) = \frac{\omega(DF(x)v_1,\ldots,DF(x)v_n)}{\mu(v_1,\ldots,v_n)},$$ where $\omega$ is the volume form on $N$ and $\mu$ is the volume form on $M$. The second definition was told to me by a friend and I couldn't find anything useful on the Internet about it. Definition 2: let $(\varphi, U)$ be an orthonormal coordinate system on $x$ and $(\psi, V)$ an orthonormal coordinate system on $F(x)$. Then $\det (DF(x)) = \det \left(D\Phi(\varphi^{-1}(x))\right)$, where $\Phi=\psi^{-1}\circ F\circ\varphi$. This second definition also makes sense, assuming that the determinant doesn't depend on the charts chosen. In fact, this second definition is better, because it reduces the problem to the computation of a determinant of some matrix, which is something familiar. Just to be clear, $\varphi$ is a map from $U\subset \mathbb{R}^m$ to $M$ and $\psi$ is a map from $V\subset \mathbb{R}^n$ to $N$. I have more than one question, some of them probably you will find easy to answer (assuming you are used to Riemannian Geometry). 1) This is a terminology question. Instead of saying $(\varphi,U)$ is an orthonormal coordinate system, could I say it is an orthonormal chart? I'm asking this because it looks like it's common to use the term charts, except when you are gonna say it is orthogonal or orthonormal. In this case I found most people prefer to use it with the term coordinate system. 2) From my reading, $(\varphi,U)$ is an orthogonal (orthonormal) coordinate system (or chart?) when $\left(\frac{\partial}{\partial x_1}|_x,\ldots,\frac{\partial}{\partial x_m}|_x\right)$ is an orthogonal (orthonormal) basis for $T_xM$. Is my understanding correct? 3) Are definitions 1 and 2 really equivalent? I got a little suspicious about using "orthonormal" in definition 2. Maybe it could be "orthogonal", I don't know. PS: questions 1 and 2 are important but are technical details necessary to ask question 3. The last question is the most important of all.
REPLY [6 votes]: Both definitions you suggest do not compile: The volume form $\mu$ on $M$ eats $m$ tangent vectors at each point. How do you feed it with only $n$ vectors? The matrix $D\Phi(\varphi^{-1}(x))$ is an $n \times m$ matrix. How do you define the determinant of a non-square matrix? In order to clarify things let me discuss first the linear algebra relevant for your question. The main point is that there is no notion of determinant for a linear map between two vector spaces of different dimensions, and there is no notion of determinant for a linear map between two different vector spaces of the same dimension without a choice of extra data. If $T \colon V \rightarrow V$ is a linear operator on a finite dimensional vector space (so the domain and codomain are the same), you can define $\det(T)$ by choosing a basis $\mathcal{B}$ for $V$ and defining $\det(T) := \det([T]_{\mathcal{B}})$ where $[T]_{\mathcal{B}} \in M_n(\mathbb{F})$ is the square matrix that represents the operator $T$ with respect to the basis $\mathcal{B}$. This definition uses a basis but is, in fact, independent of the basis we work with as $$\det([T]_{\mathcal{B}'}) = \det(P^{-1} [T]_{\mathcal{B}} P) = \det([T]_{\mathcal{B}}) $$ where $P$ is the change of basis matrix $P =[\operatorname{id}]_{\mathcal{B}}^{\mathcal{B'}}$. If $T \colon V \rightarrow W$ is a linear map between two vector spaces of the same (finite) dimension, you can try and define $\det(T)$ by representing $T$ as a matrix. However, to represent $T$ as a matrix you need to pick two different bases $\mathcal{B}$ for $V$ and $\mathcal{C}$ for $W$ and if $\mathcal{B}', \mathcal{C}'$ are other bases, we have $$ [T]_{\mathcal{C}}^{\mathcal{B}} = [\operatorname{id}]^{\mathcal{C}'}_{\mathcal{C}}[T]^{\mathcal{B}'}_{\mathcal{C}'} [\operatorname{id}]^{\mathcal{B}}_{\mathcal{B}'}$$ and so there's no reason that $\det([T]_{\mathcal{C}}^{\mathcal{B}}) = \det([T]^{\mathcal{B}'}_{\mathcal{C}'})$. To save the situation, we should go back to the case $T \colon V \rightarrow V$ and reinterpret $\det(T)$ differently. If $V = \mathbb{R}^n$, the scalar $\det(T)$ is the signed factor by which $T$ scales the volume of an $n$-dimensional parallelotope. In an abstract vector space $V$, we have no natural notion of (signed) volume of an $n$-dimensional parallelotope. Such a notion is provided by a choice of a volume form $0 \neq \omega_V \in \Lambda^{\text{top}}(V^{*})$. Given such a volume form, we can define $\det(T)$ as the unique scalar such that $T^{*}(\omega_V) = \det(T) \omega_V$ and the nice thing is that this definition is actually independent of the volume form! Finally, to define a notion of a determinant for a map $T \colon V \rightarrow W$, we equip $V$ and $W$ with volume forms $\omega_V \in \Lambda^{\text{top}}(V^{*})$ and $\omega_W \in \Lambda^{\text{top}}(W^{*})$ and define $\det(T)$ by the equation $T^{*}(\omega_W) = \det(T) \omega_V$. This definition depends on both $\omega_V$ and $\omega_W$. Now assume $(V, g_V)$ is an inner product space. Even though you provided extra data, there is still no natural volume form that is defined on $V$. In order to define a natural form, you need to choose an orientation for $V$ (an equivalence class $\mathfrak{o}_V$ of elements of $\Lambda^{\operatorname{top}}(V)$ or $\Lambda^{\operatorname{top}}(V^{*})$, depending on your definitions). Once chosen, there is a unique volume form $\omega_{g_V,\mathfrak{o}_V}$ that behaves nicely with the metric and the orientation (this is precisely the Riemannian volume form).
It is determined by the fact that if $(e_1, \dots, e_n)$ is a positive orthonormal basis of $V$ then $\omega_{g_V,\mathfrak{o}_V}(e_1 \wedge \dots \wedge e_n) = 1$. In your situation you have a surjective linear map $T \colon (V, g_V, \mathfrak{o}_V,\omega_V) \rightarrow (W, g_W, \mathfrak{o}_W,\omega_W)$ with $\dim V = m, \dim W = n$. The map $T|_{(\ker T)^{\perp}} \colon (\ker T)^{\perp} \rightarrow W$ is a linear map between two vector spaces of the same dimension and so we can try and make sense of $\det \left( T|_{(\ker T)^{\perp}} \right)$. The right hand side has by assumption a volume form but the left hand side is only a subspace of a space that has a volume form. In general, a subspace of a space with a volume form doesn't get a volume form, but $(\ker T)^{\perp}$ has an inner product (the restriction of $g_V$) and we can give it an orientation using the map $T$, and these two structures endow $(\ker T)^{\perp}$ with a volume form and allow us to talk of the determinant. Finally, we can provide corrected versions of your definitions for $\det T$: If $v_1, \dots, v_n$ is a basis of $( \ker T)^{\perp}$ such that $Tv_1, \dots, Tv_n$ is a positive basis of $W$, complete it to a positive basis $v_1, \dots, v_n, u_1, \dots, u_{m-n}$ of $V$ and then $$ \det \left( T|_{(\ker T)^{\perp}} \right) = \frac{\omega_W(Tv_1, \dots, Tv_n)}{\omega_V(v_1, \dots, v_n, u_1, \dots, u_{m-n})}. $$ If $\mathcal{B} = (v_1, \dots, v_n)$ is an orthonormal basis of $( \ker T)^{\perp}$ such that $Tv_1, \dots, Tv_n$ is a positive basis of $W$ and $\mathcal{C} = (w_1, \dots, w_n)$ is a positive orthonormal basis of $W$ then $\det \left( T|_{(\ker T)^{\perp}} \right) = \det\left(\left[ T|_{(\ker T)^{\perp}} \right]_{\mathcal{C}}^{\mathcal{B}}\right)$. I'll leave it to you to verify that the definitions are equivalent and consistent with what I described before.<|endoftext|> TITLE: Can any infinite set be written as the union of finite sets? QUESTION [13 upvotes]: While working on a problem, I was wondering about the following: Is it possible to write any infinite set as a union of finite sets or not? REPLY [31 votes]: Sure, it's possible. Let $X$ be any set. Then $$ X = \bigcup_{x \in X} \{x \}. $$ So any set is a union of singletons (except maybe for the empty set - dependent on how you'd interpret the above formula in this case).<|endoftext|> TITLE: Any way to solve this right angle triangle problem without trig? QUESTION [18 upvotes]: I know that the answer is 180 from using the arctangent (arctan(1)+arctan(2)+arctan(3)), but that's boring. Is there any way to solve this problem without the help of trig at all? REPLY [22 votes]: I think the picture is self-explanatory, but let me justify some details. The segment in the middle, the one that's across two vertical squares on the top picture, is reflected with respect to the vertical axis of symmetry of the two vertical squares to obtain segment $AB$ on the bottom picture. Clearly $x = 45^{\circ}$ because it is the angle between an edge and a diagonal of a square (see the first segment on the top picture). Triangle $ABC$ is isosceles with right angle $\angle\, BAC = 90^{\circ}$, so each of its base angles is $45^{\circ}$. Simply perform a $90^{\circ}$ counterclockwise rotation around point $A$ and you see that $AB$ is mapped to $AC$.
Alternatively, look at this picture, where clearly $BACD$ is a square: From all these pictures it follows that $x + y + z = 45^{\circ} + y + z = 180^{\circ}$<|endoftext|> TITLE: Geodesic convexity and the 2nd fundamental form QUESTION [7 upvotes]: Let $(M,g)$ be a Riemannian manifold, $\Omega\subset M$ be a closed set with smooth boundary $\partial\Omega$ and $\nu$ be the unit normal of $\partial\Omega$ pointing into $\Omega$. $\Omega$ is said to be geodesically convex iff $\forall x_0, x_1\in\Omega$ $\exists c:[0,1]\stackrel{\text{geodesic}}\to(M,g)$ s.t. $c(0)=x_0, c(1)=x_1$, $c([0,1])\subset\Omega$, $\mathrm{Length}[c]=d_g(x_0,x_1)$. Suppose $\Omega$ is geodesically convex. Then... [Q.1] Does it hold that the 2nd fundamental form of $\partial\Omega$ toward $\nu$ is nonnegative definite at each point on $\partial\Omega$? [Q.2] Let $\psi_r(x):=\mathrm{exp}^g_x [r\nu(x)]\in N$ $(x\in\partial\Omega)$. Then for small $|r|$, $\psi_r$ is an embedding. Here, does it hold that the inner 2nd fundamental form of $\psi_r$ is nonnegative definite at each point on $\partial\Omega$ when $r>0$ is sufficiently small? Thank you. REPLY [2 votes]: I will address your first question. First define, for a fixed point $p\in\partial \Omega$, three conditions on $\partial \Omega$. a) There is an open subset $U\subset M$ with $p\in U$, such that any two points in $U\cap \Omega$ can be joined by a length minimising geodesic $c:[0,1]\rightarrow M$ with $c[0,1]\subset U\cap \Omega$. b) Any geodesic $c:(-\epsilon,0]\rightarrow M$ with $c(-\epsilon,0)\subset \mathrm{int}(\Omega)$ and $c(0)= p$ hits the boundary transversally, i.e. $g(\dot c(0),\nu(p))\neq 0$. c) The second fundamental form $l_\nu(\cdot,\cdot)=g(\nabla_\cdot \cdot, \nu)$ is non-negative at $p$. We will prove that a) $\Rightarrow$ b) $\Rightarrow$ c), which answers your first question. Step 1 Take geodesic normal coordinates $x^1,\dots,x^n$, centred at $p$, with $\partial_n\vert_p=\nu(p)$. Using the implicit function theorem, one can show that there is a smooth function $f:\mathbb{R}^{n-1}\rightarrow \mathbb{R}$ such that $$ \Omega \cap V =\{x^n\ge f(x^1,\dots,x^{n-1})\}, $$ where $V$ denotes the coordinate patch. Next define smooth vector fields $X_1,\dots,X_{n-1}$ on $V$ by $$ X_j(q)=\partial_j\vert_q + \partial_jf(x^1(q),\dots,x^{n-1}(q))\partial_n\vert_q. $$ If $q\in V \cap \partial \Omega$, then $X_j(q) \in T_q\partial \Omega$ and hence for $1\le j\le n-1$ we have $$ \partial_jf(0) = X_j^k(p) g_{kl}(p) \nu^l(p)=g(X_j(p),\nu(p)) = 0 \tag{1}. $$ Step 2 We claim that $$ \text{b)} \quad \Leftrightarrow \quad f \text{ has a local minimum at 0.} \tag{2} $$ Note that in the coordinates fixed above, b) means that $\{x^n=0\}$ (the hypersurface spanned by geodesics through $p$ which are not transversal) does not intersect $int(\Omega)\cap V =\{x^n> f(x^1,\dots,x^{n-1})\}$ near $p$, or in other words that $f(x^1,\dots,x^{n-1})\ge 0$ in a neighbourhood of $p$. This proves (2). Step 3 We're now in a position to prove a) $\Rightarrow$ b). Suppose b) is wrong; then $f$ does not have a local minimum at $0$. I.e. for any neighbourhood $U$ of $p$ there is a point $q\in \partial \Omega \cap U$ such that $$x^n(q)=f(x^1(q),\dots,x^{n-1}(q))<0. \tag{3}$$ Since we are in geodesic coordinates, the unique minimising geodesic joining $p$ and $q$ is given by $L =\{x^j=tx^j(q): 0\le t \le 1\}$. Assuming a), we must have $L\subset \Omega \cap U$, which implies $$ tx^n(q) \ge f(tx^1(q),\dots,tx^{n-1}(q)).
$$ Divide by $t$ and take the limit $t\rightarrow 0$; then the right hand side will converge to $0$, since $Df(0)=0$ (see $(1)$). We obtain $x^n(q)\ge0$, which is a contradiction to $(3)$. Step 4 We want to relate the second fundamental form to the Hessian of $f$. To this end note that, for $1\le j \le n-1$ and $1\le k \le n$ we have $$ \nabla_kX_j (p)= (\Gamma_{kj}^l \partial_l + \partial_k \partial_j f\, \partial_n + \partial_j f\, \Gamma_{kn}^r \partial_r)(p) = \partial_j\partial_kf(p) \cdot \nu(p) $$ and since $f$ is independent of $x^n$, we further obtain $$ \nabla_{X_k}X_j(p) = \partial_j\partial_kf(p) \cdot \nu(p), $$ which implies $l_{\nu}(X_k,X_j)\vert _p = \partial_j\partial_kf(p)$. Since the $X_j$ form a basis of $T_p\partial \Omega$ we have $$ \text{c)}\quad \Leftrightarrow \quad f \text{ has non-negative Hessian at $0$}. \tag{4} $$ From $(2)$ and $(4)$ it is evident that b) implies c). Remark: With the same kind of argument one obtains an interesting result in the other direction. Assuming that the second fundamental form of $\partial \Omega$ is strictly positive, all geodesics coming from $\Omega$ hit $\partial \Omega$ transversally.<|endoftext|> TITLE: Is it possible to cut the unit disk in $5$ "small" parts? QUESTION [5 upvotes]: Let $D = \{(x,y) \in \Bbb R^2 \mid x^2+y^2 \leq 1\}$ be the unit disk. Is it possible to find five subsets $A_1, \dots, A_5 \subset D$ such that they cover $D$ and they all have diameter at most $1$? My conditions just mean $$D = \bigcup\limits_{i=1}^5 A_i \qquad\text{and}\qquad \mathrm{diam}(A_i) := \sup\limits_{x,y \in A_i} \|x-y\|_2 \leq 1, \;\;\forall i \in \{1,\dots,5\}.$$ Of course, this is possible with $6$ pieces, namely $$A_i = \{re^{ia} \;\mid\; 0≤r≤1,\; 2\pi (i-1) /6 ≤ a ≤ 2\pi i/6\}$$ But I don't think that this is possible with only $5$ pieces (even with non-measurable subsets), and I don't see any simple argument. Thank you for your help! REPLY [2 votes]: I will follow the idea of Daniel Fischer. The set $B_1:=A_1\cap \partial D$ can be covered by a sector of central angle $t_0=\frac{\pi}{3}$. Proof. If not, then we have $x,\ y\in B_1$ s.t. $\angle xOy >\frac{\pi}{3}$, where $O$ is the origin. Then $|x-y|>1$. It is a contradiction since ${\rm diam}\ A_1\leq 1$. That is, the five $A_i$ cover an arc of length at most $5\pi/3$.<|endoftext|> TITLE: Group presentation of $A_5$ with two generators QUESTION [12 upvotes]: In [Huppert, Endliche Gruppen, p140] the author shows that the alternating group $A_5$ is isomorphic to $G := \langle x,y \mid x^5=y^2=(xy)^3=1 \rangle$. The proof is elementary but long and complicated. Is there a simple way to prove the assertion by using some theory? Of course essentially we have to show that $|G| \leq 60$. Here is a possible attempt: $A_5$ is generated by $(1,2,3,4,5)$ and $(12)(34)$, and these elements satisfy the above relations. We can try to give a proof of $|A_5| \leq 60$ by using these generators (and the well known subgroup structure of $A_5$), and then adapt the same proof for $G$. This could be done as follows: Set $a := xy$ and $b := (xy)^{x^2} = x^{-1}yx^2$. Both elements are of order three. The corresponding permutations are $(2,4,5)$ and $(1,2,4)$, so in principle we should be able to show that $U := \langle a,b \rangle$ (which is in fact isomorphic to $A_4$) has at most $12$ elements. For doing so we define $V := \langle ab, (ab)^b \rangle$.
$V$ has to be isomorphic to the Klein four group, so we have to show that $(ab)$ and $(ab)^b$ are commuting involutions (should be possible somehow...), and that $b$ normalizes $V$ (easy). Then it is clear that $U = V \langle b \rangle$ has at most $12$ elements. Finally, we have to show that the index $|G:U|$ is at most $5$. This is the only part, where I have no idea how to proceed. Any ideas? REPLY [9 votes]: Finally, I am able to complete my sketch of the proof. We begin by proving the following: $G := \langle x,y \mid x^3=y^3=(xy)^2 = 1 \rangle$ is isomorphic to $A_4$ Proof: $A_4$ is generated by $(123)$ and $(234)$, and these permutations satisfy the above relations. Hence, $A_4$ is a homomorphic image of $G$. We will show henceforth $|G| \leq 12$. Let $a = xy$ and $b = a^x = yx$. We have $a^2 = b^2 = 1$, and also $(ab)^2 = xy^{-1}x^{-1}y^{-1}x = x (xy)^{-2}x^2 = 1$. So $V := \langle a,b \rangle$ is a homomorphic image of $C_2 \times C_2$. Since $a^x = b \in V$ and $b^x = x^{-1}yx^2 = (yx)^{-1}(xy)^{-1} = ba \in V$, $\langle x \rangle$ normalizes $V$, and $G = V \langle x \rangle$ has at most 12 elements. $\square$ Now we are able to prove the original statement: $G := \langle x,y \mid x^5=y^2=(xy)^3=1 \rangle$ is isomorphic to $A_5$ Proof: $A_5$ is generated by $(12345)$ and $(12)(34)$, and these permutations satisfy the above relations. Hence, $A_5$ is a homomorphic image of $G$. We will show $|G| \leq 60$. Let $a = xy$ and $b = a^{x^2} = x^{-1}yx^2$. We have $a^3=b^3=1$. In the following we will frequently need the identity $$yx^{-1}y= xyx,\tag{$\ast$}$$ which follows directly from $(xy)^3=1$. Using $(*)$ we compute $(ab)^2 = x(yx^{-1}y)x^3(yx^{-1}y)x^2 = x(xyx)x^3(xyx)x^2 = 1$. Hence, $U := \langle a,b \rangle$ is a homomorphic image of $A_4$, and has therefore at most $12$ elements. We finish the proof by showing that the complete set of right cosets of $U$ in $G$ is given by $\Omega = \{ U, Ux, Ux^2, Ux^3, Ux^4 \}$. Since $G$ acts transitively on its right cosets, this can be done by showing that $\Omega$ is invariant under the action of the generators $x$ and $y$. It is clear that $\Omega x = \Omega$. Furthermore, we have $Uy = Uay = Ux$ $(Ux)y = Ua = U$ $(Ux^2)y = Ux(xyx)x^{-1} = Ux(yx^{-1}y)x^{-1} = Uabx^2 = Ux^2$ $(Ux^3)y = Ub^{-1}x^4 = Ux^4$ $(Ux^4)y = Ubx^3 = Ux^3$ This also shows $\Omega y = \Omega$, and hence $\Omega G = \Omega$, which completes the proof. $\square$ I am quite satisfied with this proof, since it is very conceptual. But still it is quite long and depends on many calculations which seem a bit random. I would be happy to see shorter proofs which are using more sophisticated concepts.<|endoftext|> TITLE: How to find $f_1\circ f_2\circ\cdots f_{13}(2),$ where $f_n(x) = \frac{nx+9}{x+3}$ QUESTION [5 upvotes]: How to find $f_1\circ f_2\circ\cdots f_{13}(2),$ where $f_n(x) = \frac{nx+9}{x+3}$ in a reasonable amount of time? I solved this problem through brute forcing, but that took about an hour, and got the final answer $23/11.$ Is there a much quicker way to do this? (The recommended time is 5-10 minutes) REPLY [10 votes]: Express $f_n(x) = n + \frac{9-3n}{x+3}$. Then $f_3(x) = 3$, for all $x$. Thus your expression reduces to $f_1(f_2(3))$. Edit: As cardboard_box notes in the comments, one needs to check that $f_n(x)\neq -3$ for $n\leq 13$ in the above composition (and the appropriate $x$). You can do it as described by cardboard_box.<|endoftext|> TITLE: There must exist a random variable with certain given law? 
QUESTION [6 upvotes]: Let $(\Omega,\mathcal F,\mathbb P)$ and $(E,\mathcal G,\mu)$ be two probability spaces. My question is the following: Measure-theoretically, does there exist a measurable mapping $X:(\Omega,\mathcal F)\to(E,\mathcal G)$ such that $\mu$ is just the push-forward measure of $\mathbb P$ w.r.t. $X$, i.e., $\mu(A)=\mathbb P(X^{-1}(A))$ for all $A\in\mathcal G$? Or, equivalently in probabilistic terms, does there exist a random variable $X:(\Omega,\mathcal F)\to(E,\mathcal G)$ such that $\mu$ is just the law of $X$? For stochastic processes, there is the well-known Kolmogorov extension theorem to guarantee the existence of stochastic processes with given finite-dimensional distributions. But for random variables, is there some theorem to guarantee the existence for a given law? Any comments or references will be appreciated. REPLY [4 votes]: In general no, but if they are both Polish spaces, each endowed with their Borel $\sigma$-algebra and a probability measure, and the cardinality (either finite, countable or the continuum) of $\Omega$ is greater than that of $E$, then yes (Kuratowski's theorem).<|endoftext|> TITLE: Lebesgue integral over unions QUESTION [5 upvotes]: I want to prove the following: Let $f$ be a nonnegative $\mathcal{M}$-measurable function, and let $\{E_n\}_{n=1}^{\infty}$ be a sequence of Lebesgue measurable sets, where $E_1\subset E_2\subset \cdots$ Then $$\int_{\bigcup_{n=1}^{\infty}E_n}f{\rm d}\lambda = \lim_{n\to\infty}\int_{E_n}f{\rm d}\lambda$$ Can anyone help? I am very new to measure theory, and don't manage to prove this on my own. REPLY [2 votes]: Here are some additional hints: $\int_{\bigcup \limits_{n=1}^{\infty} E_{n}} f \,d\lambda = \int \limits_{X} f \chi_{\bigcup \limits_{n=1}^{\infty} E_{n}} \,d\lambda$ by definition of integrating over a subset of the domain $X$. Also, the characteristic function $\chi_{\bigcup \limits_{n=1}^{\infty} E_{n}}$ can be written as $\lim \limits_{m \to \infty}\chi_{\bigcup \limits_{n=1}^{m} E_{n}}$, right? Also, $\bigcup \limits_{n=1}^{m} E_{n} = E_{m}$ (why?). So, you want to show: $\int \limits_{X} f ( \lim \limits_{m \to \infty} \chi_{E_{m}}) \,d\lambda = \lim \limits_{m \to \infty} \int \limits_{X} f \chi_{E_{m}} \,d\lambda$. Now look at user zhw.'s hint.<|endoftext|> TITLE: Why are the domains for $\ln x^2$ and $2\ln x$ different? QUESTION [10 upvotes]: If I have a function like this $f(x)=2 \ln(x)$ and I want to find the domain, I put $x>0$. But if I use the properties of logarithmic functions, I can write that function like $f(x)=\ln(x^2)$, and then the domain is all of $\mathbb{R}$ and the graph of the function is different. Where is the mistake? REPLY [4 votes]: First, you could use $\ln x$ to define functions with different domains as long as $\ln x$ is defined on that domain. Second, the rule $\ln x^n=n\cdot \ln x$ is a bit sloppy. It should always be pointed out that $x>0$. Likewise, $\ln ab=\ln a+\ln b$ holds only if $a,b>0$.<|endoftext|> TITLE: If $f$ is a strictly-increasing differentiable function, how can we characterise the set of points where $f'$ vanishes? QUESTION [5 upvotes]: I am trying to determine necessary and sufficient constraints on the set of roots of a positive derivative $f'(x) \geqslant 0$ that determine whether $f$ is strictly increasing or just increasing. Now, in high school I learned that If $f'(x) \geqslant 0$ for all $x$, and $f'(x)=0$ has finitely-many solutions, then $f$ is strictly increasing.
While this is a useful statement, I realise that the condition on $f'$ is sufficient but certainly not necessary. Indeed, it is easy to construct a strictly-increasing function whose derivative vanishes on a countably-infinite set of points (take for instance $f(x)=\int_0^x \sin^2{t}\ \mathrm{d}t$). In fact, it is even possible to construct a strictly-increasing function whose derivative vanishes over an uncountable set. Here's a construction I came up with: Let $\mathcal{C} \subset [0,1]$ be the Cantor set. Define $I:[0,1]\to \left\{0,1\right\}$ such that $I(x)=0$ if $x \in \mathcal{C}$ and $I(x)=1$ if $x \notin \mathcal{C}.$ Then define $$f(x)=\int_0^x I(t)\ \mathrm{d}t, \qquad \ \ \ 0 \leqslant x \leqslant 1.$$ Since $\mathcal{C}$ has measure zero, it is not hard to show that the set of discontinuities of $I(x)$ has measure zero and hence that $f(x)$ is a well-defined integral in the Riemann sense. Further, one can show that $f(x)$ is strictly increasing, and $f'(x)=0$ for all $x\in \mathcal{C}$, which is an uncountable set. I suspect that the necessary and sufficient condition for $f$ to be strictly-increasing is that the set where $f'$ vanishes must be nowhere dense: Proposition. Let $f$ be a differentiable function with $f'(x) \geqslant 0$ for all $x$, and let $S$ be the set of points where $f'$ vanishes. Then $f$ is strictly-increasing iff $S$ is nowhere-dense. Here is my proposed proof: Proof. Equivalently, we prove the negation of the statement, namely that $f$ is locally-constant on some interval iff $S$ is dense in some nonempty open interval: $(\Rightarrow)$ If $f$ is locally-constant on some interval, call it $J$, then $f'(x)=0$ for all $x \in J$, hence $J \subseteq S$ and $S$ is dense in the interval $J$. However I am not able to prove the statement in the other direction. Assuming that my proposition is true, can someone help me out on the proof? If the statement isn't true, then what would be a possible counter-example? And what would be the characterisation of $S$? REPLY [3 votes]: Here is an alternative necessary and sufficient condition. Claim: Suppose $f:\mathbb{R}\rightarrow\mathbb{R}$ is a differentiable function with $f'(x)\geq 0$ for all $x \in \mathbb{R}$. Then $f$ is strictly increasing if and only if on every interval $[a,b]$ with $a<b$ there is a point $c \in (a,b)$ with $f'(c)>0$. Proof: Suppose $f$ is strictly increasing. Let $a,b$ be real numbers such that $a<b$. Then $f(b)-f(a)>0$, so by the mean value theorem there is a point $c \in (a,b)$ with $f'(c)=\frac{f(b)-f(a)}{b-a}>0$. Now suppose $f$ is such that on every interval $[a,b]$ with $a<b$ there is a point $c \in (a,b)$ with $f'(c)>0$. Fix any real numbers $a,b$ such that $a<b$, and choose such a point $c \in (a,b)$ with $f'(c)>0$. By definition of derivative, we have for all sufficiently small $h>0$: $$ \frac{f(c+h)-f(c)}{h} >0$$ Hence, there is an $h>0$ such that $a < c < c+h < b$ and $f(c+h)-f(c) > 0$ and so: $$ f(a) \overset{(a)}{\leq} f(c) < f(c+h) \overset{(b)}{\leq} f(b) $$ where (a) and (b) hold because $f$ is nondecreasing. $\Box$<|endoftext|> TITLE: Average minimum distance between $n$ points generated i.i.d. with uniform dist. QUESTION [8 upvotes]: Let $U$ be distributed uniformly on $[0,a]$. Now suppose we generate $n$ independent points according to $U$. What is the average minimum distance between these $n$ points? That is \begin{align} E\left[ \min_{i,j\in [1,n]: i\neq j} |U_i-U_j|\right] \end{align} Is this a correct formulation of the average minimum distance? REPLY [6 votes]: Assume $a=1$ for simplicity. Let $M$ be the minimum distance among these $n$ points. Note $M$ is always at most $\frac1{n-1}$, which occurs when the points are evenly spaced. To calculate $EM$, we first calculate $P(M>m)$, for $0\le m \le 1/(n-1)$.
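(Before the computation, a quick plausibility check: the closed form derived below is $EM = 1/(n^2-1)$ when $a=1$, which is easy to confirm by simulation. A minimal Python sketch, with an arbitrarily chosen trial count:)

```python
import random

def avg_min_gap(n, trials=100_000):
    """Monte Carlo estimate of E[min_{i != j} |U_i - U_j|] for U_i ~ Uniform(0,1)."""
    total = 0.0
    for _ in range(trials):
        u = sorted(random.random() for _ in range(n))
        # the minimum over all pairs equals the minimum adjacent gap after sorting
        total += min(u[i + 1] - u[i] for i in range(n - 1))
    return total / trials

for n in (2, 3, 5, 10):
    print(n, round(avg_min_gap(n), 5), 1 / (n * n - 1))   # estimate vs. 1/(n^2-1)
```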
$P(M>m)$ is, by symmetry, equal to $n!$ times $P(M>m\text{ and } U_1<U_2<\cdots<U_n)$. The latter probability is the volume of the set $$S=\{(u_1,\dots,u_n)\in[0,1]^n \mid u_1<u_2<\cdots<u_n \text{ and } u_{i+1}-u_i>m \text{ for all } i\},$$ and the substitution $v_i=u_i-(i-1)m$ maps $S$ onto the set of increasing $n$-tuples in $[0,\,1-(n-1)m]^n$, so $\text{Vol}(S)=\frac1{n!}(1-(n-1)m)^n$. Hence $$P(M>m)=n!\cdot \text{Vol}(S)=n!\cdot \frac1{n!}(1-(n-1)m)^n=(1-(n-1)m)^n $$ and therefore $$ \begin{align} EM =\int_0^{1/(n-1)}P(M>m)\,dm &=\int_0^{1/(n-1)}(1-(n-1)m)^n\,dm\\ &=-\frac{1}{n-1}\cdot\frac{(1-(n-1)m)^{n+1}}{n+1}\Big|_{0}^{1/(n-1)}\\ &=\boxed{\frac{1}{n^2-1}} \end{align} $$ To get the answer for the interval $[0,a]$, simply multiply this result by $a$.<|endoftext|> TITLE: Construction of Hyperbolic Circles With a Given Radius QUESTION [5 upvotes]: In the Poincare Disk Model, Hyperbolic circles (i.e. the locus of all points with a given distance from a center point) are also circles in the euclidean sense, but with the euclidean center different from the hyperbolic center. My question is, given a known hyperbolic center point and hyperbolic radius, how can one find the euclidean center and radius of the hyperbolic circle? Either a geometric construction or a simple analytical formula will do. To clarify: You are only given the radius length, not a point on the circle. There is no constraint on what you are allowed to use in your solution: Analytic solutions are perfectly acceptable. In the diagram: Given the point $A$ and a hyperbolic distance, find the point $B$ and the euclidean radius of the circle. REPLY [3 votes]: Making my comment more explicit ... Writing $|\cdot|$ for Euclidean distance, and $|\cdot|^\star$ for hyperbolic distance, we have a relatively simple relation for distances from the origin: $$|OX|^\star = \log\frac{1 + |OX|}{1 - |OX|} = 2 \operatorname{atanh}|OX| \qquad\qquad |OX| = \tanh\frac{\;|OX|^\star}{2} \tag{$\star$}$$ Let the diameter of the target circle meet $\overleftrightarrow{OA}$ at $P$ and $Q$, and define $a := |OA|$, $p := |OP|$, $q := |OQ|$, with $a^\star$, $p^\star$, $q^\star$ their hyperbolic counterparts. Let $r^\star$ be the target circle's hyperbolic radius. We may assume $p \geq a$ (one of the diameter's endpoints must be on the "far side" of center $A$), so that $$p^\star = a^\star + r^\star \quad\to\quad p = \tanh\frac{a^\star + r^\star}{2} = \frac{(1+a)\exp r^\star - (1 - a)}{(1+a)\exp r^\star + ( 1 - a)} \tag{1}$$ For the other endpoint, $Q$, an ambiguity arises based on whether the origin lies outside or inside the circle, but we have $$q^\star = \pm ( a^\star - r^\star ) \quad\to\quad q = \pm \tanh\frac{a^\star - r^\star}{2} = \pm \frac{(1+a) - ( 1 - a )\exp r^\star}{(1+a) + (1-a)\exp r^\star} \tag{2}$$ where "$\pm$" is "$-$" for $O$ inside the circle, and "$+$" otherwise. (If you like, you can absorb the sign into the distances $q$ and $q^\star$, so that they are negative when $\overrightarrow{OA}$ and $\overrightarrow{OQ}$ point in opposite directions, and positive otherwise.) With the endpoints of the target circle's diameter known, determining the Euclidean center and Euclidean radius is straightforward. $\square$ Note. If $R$ is such that $|OR|^\star = r^\star$, and if we define $r := |OR|$, then $(1)$ and $(2)$ become: $$p = \frac{a + r}{1 + a r} \qquad\qquad q = \pm \frac{a - r}{1 - a r} \tag{3}$$<|endoftext|> TITLE: Sufficient but not Necessary conditions QUESTION [8 upvotes]: We were having a discussion at the office when we should have been working, and I suggested an example of a Sufficient but not Necessary condition. Given a natural number of fewer than, say, 25 digits, we wish to establish if it is divisible by six.
An example of a Necessary but not Sufficient condition was: Is the number divisible by two? The other guy was happy with that and agreed. My example of a Sufficient but not Necessary condition (however simple) was: Is the number equal to six? The other guy (who has a degree in math) insisted that because this would not apply to numbers such as 12, this could not be Sufficient. I maintained that that is the whole point: the condition is not necessary, but because it is Sufficient, if it is true, you are done, QED. This promptly devolved into "It is not", "It is, too", which did not seem very mathematical, somehow. Could we get somebody to comment? The other guy decided he did not want to discuss this further, but I would like to feel a little vindicated. (If I am wrong, I will send him your answer.) Thank you in advance. REPLY [2 votes]: Per this answer, 'A is sufficient for B' means that 'A is a subset of B'. A picture and a real-life example may aid in understanding the following. Your main question is whether (2) follows from (1): (1) P → Q [ P is a sufficient condition for Q ], (2) Q → P [ P is a necessary condition for Q ]. The reason the entailment from (1) to (2) doesn't hold is that it's possible that Q follows from some proposition R that is not equivalent to P. The only instance where the entailment is realized is one where all necessary conditions for Q are logically equivalent to P. The above is exemplified in the picture below, if P = Northern Ireland, Q = UK, R = Great Britain. Then being in Northern Ireland is sufficient for being in the UK, but is NOT necessary for being in the UK, because one can also be in the U.K. by being in Great Britain.<|endoftext|> TITLE: Assigning integer to finite CW complex such that following hold. QUESTION [5 upvotes]: For each $n \in \mathbb{Z}$, is there a unique function $\varphi$ assigning an integer to each finite CW complex, such that the following hold? $\varphi(X) = \varphi(Y)$ if $X$ and $Y$ are homeomorphic. $\varphi(X) = \varphi(A) + \varphi(X/A)$ if $A$ is a subcomplex of $X$. $\varphi(S^0) = n$. REPLY [5 votes]: Yes. Of course, such a function exists; let $\varphi_n(X) = n\tilde{\chi}(X)$, where $\tilde{\chi}$ is the reduced Euler characteristic. Your property 2 follows from the reduced homology long exact sequence. Call a function of your kind $\varphi_n$. 1) $\varphi_n(A \vee B) = \varphi_n(A) + \varphi_n(B)$. You can always subdivide the CW structures on $A \vee B$ so that their wedge is a CW complex with $A$ and $B$ given as subcomplexes. Then $B = (A\vee B)/A$. 2) $\varphi_n(S^k) = (-1)^kn$. This follows because if $S^{k-1} \subset S^k$ is the equator, then $S^k/S^{k-1} = S^k \vee S^k$. So $\varphi_n(S^k) = \varphi_n(S^{k-1}) + 2\varphi_n(S^k)$. So $\varphi_n(S^k) = -\varphi_n(S^{k-1})$. 3) Let $X^k$ be the $k$-skeleton of $X$. Suppose we know that for CW complexes of dimension at most $k$, $\varphi_n(X) = n\tilde{\chi}(X)$. Then if $X$ is of dimension $(k+1)$, $\varphi_n(X) = \varphi_n(X^k) + \varphi_n(\vee_\ell S^{k+1}) = \varphi_n(X^k) + (-1)^{k+1}\ell n$, where $\ell$ is the number of $(k+1)$-cells in $X$. But the reduced Euler characteristic is precisely the alternating sum of the number of cells in each dimension, minus one.
So the result follows.<|endoftext|> TITLE: Limit of the given sum: $f(x) = \lim_{n\to \infty} \sum_{r=1}^n 3^{r-1}\sin^3(x/(3^r))$ QUESTION [7 upvotes]: $$f(x) = \lim_{n\to \infty} \sum_{r=1}^n 3^{r-1}\sin^3(x/(3^r)) $$ I tried using the formula relating $\sin(3x)$ to $\sin^3(x)$ but got later stuck with a similar series who's sum I didn't know how to calculate REPLY [7 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\mrm{f}\pars{x} = \lim_{n\to \infty}\sum_{r = 1}^{n}3^{r - 1}\sin^{3}\pars{x \over 3^{r}}:\ ?}$. \begin{align} \mrm{f}\pars{x} & = \lim_{n\to \infty}\sum_{r = 1}^{n}3^{r - 1}\sin^{3}\pars{x \over 3^{r}} = {1 \over 4}\lim_{n\to \infty}\sum_{r = 1}^{n}\bracks{% 3^{r}\sin\pars{x \over 3^{r}} - 3^{r - 1}\sin\pars{x \over 3^{r - 1}}} \\[1cm] & =\require{cancel} {1 \over 4}\lim_{n\to \infty}\left\lbrace% \bracks{\cancel{3\sin\pars{x \over 3}} - \color{#f00}{\sin\pars{x}}} + \bracks{\cancel{3^{2}\sin\pars{x \over 3^{2}}} - \cancel{3\sin\pars{x \over 3}}}\right. + \\[5mm] &\phantom{= {1 \over 4}\lim_{n \to \infty}\braces{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!}} \left.\bracks{\cancel{3^{3}\sin\pars{x \over 3^{3}}} - \cancel{3^{2}\sin\pars{x \over 3^{2}}}} + \cdots + \bracks{\color{#f00}{3^{n}\sin\pars{x \over 3^{n}}} - \cancel{3^{n - 1}\sin\pars{x \over 3^{n - 1}}}}\!\!\right\rbrace \\[1cm] & = {1 \over 4}\,\lim_{n \to \infty}\bracks{-\sin\pars{x} + 3^{n}\sin\pars{x \over 3^{n}}} = \bbx{{1 \over 4}\bracks{\vphantom{\Large a}x - \sin\pars{x}}} \end{align}<|endoftext|> TITLE: Dominated Convergence Theorem Exercise QUESTION [6 upvotes]: I am asked to find $$\lim_{n \to \infty} \int_0^\infty n^2e^{-nx} \tan^{-1} x \, dx.$$ Here is my attempt. Write $$\int_0^\infty n^2e^{-nx}\tan^{-1}x \, dx=\int_0^1 n^2e^{-nx} \tan^{-1} x \,dx + \int_1^\infty n^2e^{-nx}\tan^{-1} x \, dx$$ $$=\int_0^{n^2} e^{-\frac x n} \tan^{-1}\left(\frac x {n^2}\right) \, dx+\int_1^\infty n^2 e^{-nx} \tan^{-1}x \, dx.$$ Then note that $$\left| 1_{(0,n^2)}(x)e^{-x/n}\tan^{-1} \left(\frac x {n^2}\right) \right| \le \frac \pi 2$$ for all $x>0$ and all $n\ge 1$ and $$|n^2e^{-nx}\tan^{-1}x| \le \frac{\pi}{2}\frac 2 {x^2}$$ for all $x\in [1,\infty)$ and all $n\ge 1$. Thus the dominated convergence gives $${\lim_{n\to\infty} \int_0^\infty 1_{(0,n^2)}(x)e^{-x/n}\tan^{-1} \left(\frac x {n^2}\right) \, dx = 0}$$ and $$\lim_{n\to\infty} \int_1^\infty n^2e^{-nx} \tan^{-1}x\,dx=0,$$ and hence $$\lim_{n \to \infty}\int_0^\infty n^2e^{-nx}\tan^{-1}x\,dx=0.$$ Is this correct? EDIT: Unfortunately the above is not correct (see Dr. MV's comment). The correct justification is shown below (given by Sangchul Lee). 
$$\int_0^\infty n^2e^{-nx} \tan^{-1} xdx=\int_0^\infty ne^{-x} \tan^{-1} (\frac{x}{n}) \, dx.$$ Since $$|ne^{-x}\tan^{-1} (\frac{x}{n})|\le xe^{-x}$$ for all $x>0$ and all $n\ge1$ we deduce that $$\lim_{n \to \infty} \int_0^\infty n^2e^{-nx} \tan^{-1} x \, dx=\lim_{n\to\infty}\int_0^\infty ne^{-x} \tan^{-1} (\frac{x}{n}) \, dx=\int_0^\infty \lim_{n \to\infty}ne^{-x} \tan^{-1} (\frac{x}{n}) \, dx=\int_0^\infty xe^{-x} \, dx=1.$$ The point is that $\tan^{-1}x\le x$ for all $x\ge0$, an inequality I had forgotten! REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\lim_{n \to \infty}\int_{0}^{\infty}n^{2}\expo{-nx}\arctan\pars{x} \,\dd x:\ ?}$. $$\bbox[#ffe,10px,border:1px dotted navy]{% \mbox{Besides the 'original motivation', it can be evaluated as follows:}} $$ \begin{align} &\lim_{n \to \infty}\int_{0}^{\infty}n^{2}\expo{-nx}\arctan\pars{x} \,\dd x \\[5mm] = &\ \lim_{n \to \infty}\braces{% n^{2}\int_{0}^{\infty}\expo{-nx}x\,\dd x + n^{2}\int_{0}^{\infty}\expo{-nx}\bracks{\arctan\pars{x} - x}\dd x} \end{align} Moreover, $\ds{\arctan\pars{x} - x = -\,{\xi^{2} \over \xi^{2} + 1}\,x\quad}$ for some $\ds{\quad\xi\ {\large\mid}\ 0 < \xi < x > 0}$ such that: \begin{align} & 0 < \verts{n^{2}\int_{0}^{\infty}\expo{-nx}\bracks{\arctan\pars{x} - x}\dd x} < n^{2}\int_{0}^{\infty}\expo{-nx}x^{3}\,\dd x = {6 \over n^{2}} \,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\to}\,\,\, \color{#f00}{\large 0} \\[5mm] &\mbox{and}\quad \lim_{n \to \infty}\pars{n^{2}\int_{0}^{\infty}\expo{-nx}x\,\dd x} = \color{#f00}{\large 1} \end{align} $$ \implies\bbx{\ds{% \lim_{n \to \infty}\int_{0}^{\infty}n^{2}\expo{-nx}\arctan\pars{x}\,\dd x = 1}} $$<|endoftext|> TITLE: Picard groups and fundamental groups of connected algebraic groups QUESTION [5 upvotes]: Recently, I'm reading V. L. Popov's paper "Picard groups of homogeneous spaces of linear algebraic groups and one-dimensional homogeneous vector bundles" V. L. Popov, 1974, and I got confused about "Theorem 6" in that paper which says that "Let $G$ be a connected linear algebraic group with radical $R$. Then $\mathrm{Pic}(G)$ is isomorphic to the fundamental group of the semisimple group $G/R$." This theorem follows from "Theorem 3" and "Theorem 4" in that paper. However, this seems to contradict the following example. Consider $G = \mathrm{GL}(n,\mathbb{C})$, whose radical $R$ is isomorphic to $\mathbb{G}_{m}$. Then the homogeneous space is $G/R = \mathrm{PGL}(n,\mathbb{C})$. We know that $\mathrm{Pic}(\mathrm{GL}(n,\mathbb{C}))$ is $0$, but $\pi_{1}(\mathrm{PGL}(n,\mathbb{C}))=\mathbb{Z}/n\mathbb{Z}$. Can anyone explain what I've understood incorrectly? REPLY [3 votes]: This is really just a long comment since I can't access the paper. There definitely seems to be something wrong with the formula, but formulas that I do know are correct basically perfectly account for the missing factor. 
Namely, here are two facts that I definitely know. Let me denote by $X^\ast(G)$ the character group $\text{Hom}(G,\mathbf{G}_m)$. Fact 1: Let $\varphi:G'\to G$ be a map of connected algebraic groups with $\ker\varphi$ of multiplicative type (i.e. $\ker\varphi$ is geometrically a product of tori and roots of unity). Then, there is the following exact sequence: $$0\to X^\ast(G)\to X^\ast(G')\to X^\ast(\ker\varphi)\to \text{Pic}(G)\to\text{Pic}(G')\to 0$$ and Fact 2: If $X^\ast(G)=0$ (and $G$ is a connected algebraic group) then $G$ has a universal cover, and $\text{Pic}(G)=X^\ast(\pi_1(G))$. So, now, suppose, for example, that $G$ is a reductive group. Then, of course, we know that $R(G)$ is a torus and so $\varphi:G\to G/R(G)$ has multiplicative kernel. So, applying Fact 1 we get $$0\to X^\ast(G/R(G))\to X^\ast(G)\to X^\ast(R(G))\to \text{Pic}(G/R(G))\to \text{Pic}(G)\to 0$$ Now, since $G/R(G)$ is semi-simple, we have that $X^\ast(G/R(G))=0$ (indeed, the image of $G/R(G)$ in $\mathbf{G}_m$ would be connected, semisimple, and abelian--so trivial). Thus, the above really reduces to $$0\to X^\ast(G)\to X^\ast(R(G))\to \text{Pic}(G/R(G))\to\text{Pic}(G)\to 0$$ Then, using Fact 2, and the just mentioned fact that $X^\ast(G/R(G))=0$, we have that $\text{Pic}(G/R(G))=X^\ast(\pi_1(G/R(G)))$. So, finally, our sequence looks like $$0\to X^\ast(G)\to X^\ast(R(G))\to X^\ast(\pi_1(G))\to \text{Pic}(G)\to 0$$ Now, let's run this for $G=\text{GL}_n$, so that $R(G)=\mathbf{G}_m$ and $G/R(G)=\text{PGL}_n$. Then, evidently: $$X^\ast(\text{GL}_n)=X^\ast(\text{GL}_n/D(\text{GL}_n))=X^\ast(\text{GL}_n/\text{SL}_n)=X^\ast(\mathbf{G}_m)=\mathbb{Z}$$ and $$X^\ast(R(G))=X^\ast(\mathbf{G}_m)=\mathbb{Z}$$ Thus, it remains to find what $X^\ast(\pi_1(\text{PGL}_n))$ is. But $\text{PGL}_n=\text{PSL}_n$ and $\text{SL}_n\to\text{PGL}_n$ is a central isogeny with kernel $\mu_n$. Since $\text{SL}_n$ is simply connected, this implies that $\pi_1(\text{PGL}_n)=\mu_n$. Thus, $$X^\ast(\pi_1(\text{PGL}_n))=X^\ast(\mu_n)=\mathbb{Z}/n\mathbb{Z}$$ Thus, we have, finally, the sequence $$0\to \mathbb{Z}\to\mathbb{Z}\to\mathbb{Z}/n\mathbb{Z}\to \text{Pic}(\text{GL}_n)\to 0$$ which implies the desired result (that $\text{Pic}(\text{GL}_n)=0$), since the map $X^\ast(\text{GL}_n)\to X^\ast(\mathbf{G}_m)$ is multiplication by $n$ (any $\varphi\in X^\ast(\text{GL}_n)$ factors through the determinant, and the composition $\mathbf{G}_m\xrightarrow{\approx}R(\text{GL}_n)\hookrightarrow\text{GL}_n\xrightarrow{\det}\mathbf{G}_m$ is multiplication by $n$). In general, since $\text{Pic}(G/R(G))=X^\ast(\pi_1(G/R(G)))$ and $\pi_1(G/R(G))$ is some finite abelian group, we have, non-canonically, that $X^\ast(\pi_1(G/R(G)))\cong\pi_1(G/R(G))$. Thus, the above analysis shows that we have a short exact sequence $$0\to X^\ast(G)\to X^\ast(R(G))\to \pi_1(G/R(G))\to \text{Pic}(G)\to 0$$ Thus, since $\pi_1(G/R(G))$ and $\text{Pic}(G)$ are finite groups, if they are isomorphic then the map $\pi_1(G/R(G))\to \text{Pic}(G)$ is an isomorphism (since they're of the same order and finite) and thus $X^\ast(G)\to X^\ast(R(G))$ is an isomorphism. Conversely, if $X^\ast(G)\to X^\ast(R(G))$ is an isomorphism, then evidently $\pi_1(G/R(G))\to \text{Pic}(G)$ is an isomorphism. So, now note that the morphism $X^\ast(G/D(G))\to X^\ast(G)$ is evidently an isomorphism. Thus, we see that $X^\ast(G)\to X^\ast(R(G))$ is an isomorphism if and only if $X^\ast(G/D(G))\to X^\ast(R(G))$ is an isomorphism.
But, note that since $G/D(G)$ and $R(G)$ are tori, this is equivalent to the statement that $R(G)\to G/D(G)$ is an isomorphism. But $R(G)=Z(G)^\circ$ (we can ignore reduced subschemes because we're in characteristic $0$) and it's well known that $G=D(G)R(G)$, and thus $R(G)\to G/D(G)$ is surjective (which implies the claim above that $X^\ast(G)\to X^\ast(R(G))$ is injective) and it's an isomorphism if and only if $R(G)\cap D(G)$ is trivial. So, the upshot of all of this is the following: Conclusion: There is a canonical surjection $\pi_1(G/R(G))\to \text{Pic}(G)$ which is an isomorphism if and only if $R(G)\cap D(G)$ is trivial. In fact, the kernel of this map has size $|R(G)\cap D(G)|$. In fact, since $\pi_1(G/R(G))$ and $\text{Pic}(G)$ are finite abelian groups, they are isomorphic if and only if $\pi_1(G/R(G))\to \text{Pic}(G)$ is an isomorphism. Thus, $\pi_1(G/R(G))$ is isomorphic to $\text{Pic}(G)$ if and only if $D(G)\cap R(G)$ is trivial. I haven't thought too deeply about when $R(G)\cap D(G)$ is non-trivial--do you know an example in which $G$ is not already semisimple? Anyways, it seems that the stated theorem is wrong precisely because $R(\text{GL}_n)\cap D(\text{GL}_n)$ is non-trivial, and of order exactly $n$, accounting for your $\mathbb{Z}/n\mathbb{Z}$ discrepancy.<|endoftext|> TITLE: Why does Tao use the word *metatheory* in this context? QUESTION [7 upvotes]: In a comment under his page about the book Analysis 1, Terence Tao writes: If one were to formalise the metatheory implicitly used [in the text Analysis 1], though, it would be a set theory as a language with equality I am confused about his usage of the word metatheory here. Checking Wikipedia, "a metatheory or meta-theory is a theory whose subject matter is some theory." But this seems to me not to be what Terence Tao means, since the set theory described by Terence Tao in his book is not intended to be a metatheory of some other theory; it is just used as a basis from which one can rigorously define the objects one needs in analysis. Do you know why Tao used the word metatheory then? Does this word maybe have another meaning than that described by the wikipedia article https://en.wikipedia.org/wiki/Metatheory? REPLY [2 votes]: I think that you can compare with: Terence Tao, Analysis I (3rd ed, 2016): Ch.2 [page 15]: the natural numbers, defined in terms of the Peano axioms; Ch.3 [page 33]: set theory: "almost every other branch of mathematics relies on set theory as part of its foundation"; Ch.4 [page 74]: the construction, using set theory, of other number systems: integers and rationals; Ch.5 [page 94]: the construction of the real numbers. Finally, Appendix A: Mathematical Logic, "which is the language one uses to conduct rigorous mathematical proofs." Now, you can read them in reverse order: within mathematical logic we define the language (and the tools) of first-order logic with equality. This is used to build up (first-order) set theory. With the concepts (and axioms) of set theory we may develop the number systems, up to analysis. The word "metatheory" is not used in the book; thus, I think that in the statement you are quoting, Tao means the "foundational framework" of real analysis: set theory formalized in a first-order language with equality.<|endoftext|> TITLE: Hatcher Question 1.2.10 - Show that the loop $\gamma$ is nullhomotopic QUESTION [6 upvotes]: See page 53 of Hatcher's Algebraic topology for reference to image. Consider two arcs $\alpha$ and $\beta$ embedded in $D^2 \times I$ as shown in the figure.
The loop $\gamma$ is obviously nullhomotopic in $D^2 \times I$, but show that there is no nullhomotopy of $\gamma$ in the complement of $\alpha \cup \beta$. My reasoning is to consider the fundamental group of the space $(D^2 \times I) - (\alpha \cup \beta)$. To calculate the fundamental group of this space, we let $X = D^2 \times I$, $A = X - \alpha$ and $B = X - \beta$, in such a way that $A \cap B = X - (\alpha \cup \beta)$. So by van Kampen's theorem, we have an isomorphism $$\frac{\pi_1(A) \ast \pi_1(B)}{N} \cong \pi_1(X).$$ This is given by $$\frac{\mathbb{Z} \ast \mathbb{Z}}{N} \cong 0$$ This obviously does not work and I'm not sure of how to proceed. REPLY [2 votes]: I'll add the figure from Hatcher for clarity. The key here is to note that the knot can be undone by moving one edge of each arc from one side of the cylinder across to the other side. I actually figured this out playing with two strings. Now, you have a full cylinder minus two straight lines. This deformation retracts to a disk with two holes, which is in turn homotopy equivalent to a wedge of $2$ circles - so it has the fundamental group $F_2=\langle a,b\rangle$. Now, throughout this process you need to keep track of where $\gamma$ is. At this point, it encompasses both holes. So after the retract to a wedge of $2$ circles, $\gamma$ is represented by $a\circ b$. In $F_2$, one has $a\circ b\neq e$. So $\gamma$ is not null-homotopic. [I would like to thank @ChesterX and @Juggler for pointing out an error in my answer, hence the edit.]<|endoftext|> TITLE: Is "generalized" singular homology/cohomology a thing? If not, why not? QUESTION [7 upvotes]: From what I understand, the singular homology groups of a topological space are defined like so: Topological Particulars. There's a covariant functor $F : \mathbb{\Delta} \rightarrow \mathbf{Top}$ that assigns to each natural number $n$ the corresponding $n$-simplex. This yields a functor $$\mathbf{Top}(F-,-) : \Delta^{op} \times \mathbf{Top} \rightarrow \mathbf{Set}.$$ Hence to each topological space $X$, we can assign a simplicial set $\mathbf{Top}(F-,X) : \Delta^{op} \rightarrow \mathbf{Set}.$ General nonsense. We observe that every simplicial set induces a simplicial abelian group; that every simplicial abelian group induces a chain complex; and that chain complexes have homology and cohomology groups. Ergo, simplicial sets have homology/cohomology groups. Putting these together, we may speak of the homology and cohomology groups of a topological space $X$. However, the topological particulars don't seem too important. In fact, for any category $\mathbf{C}$ and any functor $F : \Delta \rightarrow \mathbf{C}$, there's a simplicial set $\mathbf{C}(F-,X)$ attached to each $X \in \mathbf{C}$, and therefore $X$ has homology and cohomology. For example, the underlying set functor $U : \mathbf{CMon} \rightarrow \mathbf{Set}$ has a left-adjoint $F : \mathbf{Set} \rightarrow \mathbf{CMon}$. But since $\Delta \subseteq \mathbf{Set}$ and $\mathbf{CMon} \subseteq \mathbf{Mon}$, this yields a functor $F : \Delta \rightarrow \mathbf{Mon}$. This should in turn allow us to attach homology and cohomology groups to each monoid $M$, by studying the simplicial set $\mathbf{Mon}(F-,M)$. Question. Is this a thing? If not, why not?
REPLY [4 votes]: If I understood correctly, you have a cosimplicial object $F^\bullet \in \mathsf{cC}$ (AKA a functor $F : \Delta \to \mathsf{C}$), and an object $X \in \mathsf{C}$; and you're considering the simplicial set $\operatorname{Hom}_{\mathsf{C}}(F^\bullet, X) \in \mathsf{sSet}$. Sure, people use constructions like this from time to time, it's a very general construction... But since it's so general it's hard to get more specific than that. It occurs in tons of different settings. I don't think it's really fair to call that "the homology of $X$", either; it heavily depends on what $F^\bullet$ is. For example when you have a category tensored over $\mathsf{sSet}$, given two objects $X$ and $Y$, you can build the mapping space $$\operatorname{Map}_{\mathsf{C}}(X,Y) = \operatorname{Hom}_{\mathsf{C}}(X \otimes \Delta^\bullet, Y) \in \mathsf{sSet}$$ which is used very, very often, satisfying among other things that $\pi_0 \operatorname{Map}_{\mathsf{C}}(X,Y) = [X,Y]$ is the set of homotopy classes of maps $X \to Y$. Even more specifically the singular simplicial set $S_\bullet(X)$ is given by $\operatorname{Map}_{\mathsf{Top}}(*, X)$ (where $\mathsf{Top}$ is tensored over simplicial sets in the standard fashion). So homology is really a special case of a special case. What you're considering is very general. Homology is interesting because it satisfies things like the Eilenberg–Steenrod axioms; we have theorems like the UCT, Künneth's theorem... You can prove a great deal about homology using the setting you're considering (for example $\operatorname{Hom}_{\mathsf{C}}(F^\bullet, X \times Y) = \operatorname{Hom}_{\mathsf{C}}(F^\bullet, X) \times \operatorname{Hom}_{\mathsf{C}}(F^\bullet, Y)$ is obvious, and then you have the Eilenberg–Zilber theorem and finally Künneth's formula), but many other properties heavily depend on the specific $F^\bullet = |\Delta^\bullet|$ used.<|endoftext|> TITLE: Steps for proving that a sequence converges, using the epsilon definition of convergence QUESTION [7 upvotes]: I haven't been able to find any sources that clearly and methodically state the approach for proving the convergence of a sequence, using the epsilon definition of convergence. At best, I have been able to find vague, unjustified demonstrations. However, this does nothing to help me learn. I want to be able to generalise this method across all convergence problems that I encounter. I would like someone to state the steps and associated reasoning involved in proving that a sequence converges, using the epsilon definition of convergence. Please specify the reasoning behind each step of the methodology, to assist in justifying your calculations. I would like the 'why' and 'how' behind each step of such a proof. I have the sequence $ \{a_n\}_{n=1}^{\infty}$, where $a_n = \dfrac{(-1)^{n+1}}{n}$, $L = 0$. From what I have read, we want to prove that for any $\epsilon > 0$, there exists some $N > 0$, such that if $n > N$, $|a_n - L| < \epsilon$. However, as alluded to above, I do not fully appreciate or understand what this is saying. Thank you. REPLY [8 votes]: To be able to generalize procedures across various epsilon-delta proofs, it is important to notice the standout features of such proofs (tricks, conversions, etc.). In this case, suppose we want to show that $L=0$. Let us take $\epsilon>0$. We want to find an $N >0 $ such that if $n > N$ then $|a_n - L| < \epsilon$. In our case, $L=0$, so it changes to $|a_n| < \epsilon$. Step 1: Look at $a_n$ carefully. It is $\dfrac{(-1)^{n+1}}{n}$.
Now, we are required to find out about $|a_n|$, so let us compute $|a_n|$. This is a step by itself, because it gives a clear direction of attack: To attack this problem, we will calculate $|a_n|$ explicitly, and try to find $N$ satisfying the limit conditions explicitly. We normally do this because $a_n$ is not a very complicated quantity, so it is easy to work with. Step 2: So what is $|a_n|$? It is $\left|\dfrac{(-1)^{n+1}}{n} \right|$. Since the modulus splits across the fraction, $$ \left|\dfrac{(-1)^{n+1}}{n} \right| = \dfrac{|(-1)^{n+1}|}{|n|} = \dfrac{1}{n} $$ There is no trick here. We just simply calculated $|a_n-L|$ directly, because in this problem it was easy to do so. The reason for this step is motivated by the previous step. Step 3: Now, suppose we were given $\epsilon>0$ and were asked to find a large enough $N$ such that if $n > N$ then $|a_n| < \epsilon$. However, we have now calculated $|a_n|$, and it is $\frac 1n$. Hence, we are trying to find $n$ such that $\frac{1}{n} < \epsilon$. However: $$ \epsilon > \frac 1n \iff n > \frac{1}{\epsilon} $$ The above piece of insight is vital to us: we can find our $N$ explicitly. Step 4: Let $N$ be the smallest integer greater than $\frac{1}{\epsilon}$. Then, note that: $$ n > N \implies n > \frac 1 \epsilon \implies \epsilon > \frac 1n $$ Step 5: Hence, this $N$ works for the given problem, so we can conclude by definition of limit that $a_n \to 0$. What have we learnt from here? 1) Wherever possible and easy, calculate $|a_n -L |$ explicitly. It is the safest option for simple-looking $a_n$. 2) In Step $3$, we actually worked backwards. We assumed that the $N$ which we wanted existed, and then we tracked back to actually find that $N$, right? Working backwards is a very big trick, because the $N$ that you want can often be explicitly found by working backwards, as has happened in this case. Unfortunately, the problem that you have is not very illustrative, because it doesn't go through all the tricks and twists that one goes through while evaluating a normal tricky limit. You need to find a good example, and then you will get a better grip on limits, because (certainly unknown to you, so please don't berate yourself) this problem was a damp squib compared to some of the harder limits you will come across. Hopefully, though, I have done justice to this problem. Please get back with any doubts.<|endoftext|> TITLE: 4 equal figures which will fit together to form square QUESTION [5 upvotes]: A figure consists of 5 equal squares in the form of a cross. Show how to divide it by two straight cuts into 4 equal figures which will fit together to form a square. I cut the figure with two perpendicular straight lines through the center, as shown. Is it correct? Is there any other way to solve this question? REPLY [2 votes]: This is another way to cut the cross.<|endoftext|> TITLE: How did early mathematicians make it without Set theory? QUESTION [51 upvotes]: It is said that Cauchy was a pioneer of rigour in calculus and a founder of complex analysis. Yet it baffles me, as set theory was an invention of the 1870s, 20 years after the death of Cauchy. Currently, most concepts in mathematics begin with the concept of a set. Furthermore, the foundations of the concept of a group were laid by Galois and Abel long before set theory. I hope there is a general way to answer these questions. 1) We define functions with a domain and range both being sets. But when Cauchy used the symbol 'f(x)', what did it really mean to him?
As Cauchy was notorious for his rigorous approach, it is hard to believe that he may have just used the word function ambiguously, with intuitive satisfaction. (If the following question makes the topic too broad, I'd be more than happy to list it as a separate question.) 2) To a certain extent I can even brush away the idea of functions before sets. But I simply cannot grasp how the concept of a group was formulated without a set, and I'm puzzled as to how Galois and Abel were independently able to frame methods to prove the unsolvability of the quintic (these days the proof makes generous use of set theory) without sets. In these days where N, Z, Q and R are all sets, how did the early masters do what they did? How on earth was calculus made rigorous without the sets of different numbers? REPLY [4 votes]: Set theory is one of the most common ways in modern mathematics to justify the ontology of the entities mathematicians are dealing with, by providing a general framework where all (or most, to be safe) of these entities can be constructed from the empty set. On the other hand, the procedures of mathematics are able to stand, and rigorously at that, without the set-theoretic foundations. Cauchy was rigorous not in the sense that he had set theory (he didn't) or that he gave epsilon-delta definitions of continuity (he didn't), but because he adhered to the rigor of the geometry of Euclid as it was practiced for several centuries before him, and because he rejected the generality of algebra (roughly, cavalier summation of divergent series) of Euler and Lagrange. The distinction procedures versus ontology and how it sheds light on the history of mathematics is explored in the article "Toward a history of mathematics focused on procedures".<|endoftext|> TITLE: can a Car Registration Number, a combination of primes, be prime? QUESTION [11 upvotes]: While waiting in my car, I noticed that the registration number of a car parked in front of my car was 6737. So it was a concatenation of two prime numbers, 67 and 37. Now I know the following ways to check whether any number is prime or not. Let $p$ be the number to be checked for being prime or not: 1) if $p$ is divisible by any prime between $2$ and $\sqrt{p}$, then it is not a prime; 2) if $p \bmod i$ equals $0$ for some $i$ with $1< i < p$, then the number is not prime. Do we have a method to check whether a concatenation of prime numbers (here 67 and 37) constitutes a prime number (6737) or not? REPLY [3 votes]: As pointed out in the comments by @naveen dankal, primes and patterns are not "best friends". Or at least we do not know of many patterns concerning primes. In general, finding out if a given number is prime or not can be a very difficult question to answer. Whether a given number is the concatenation of other primes or not is just as irrelevant - as far as I know. Nonetheless, I asked Mathematica to find some pairs of primes that, when concatenated, yield another prime number. I started with just the two-digit primes, of which there are 21: $$11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97$$ (the computation below used only the 20 of them up to 89). Among the $20 \times 20 = 400$ ordered concatenations of these, there are 73 that are prime, which is $73/400 = 18.25\%$ of the concatenations.
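A short Python equivalent of that check (the original computation was done in Mathematica; sympy's isprime and primerange are assumed to be available):

```python
from sympy import isprime, primerange

small = list(primerange(10, 90))   # the 20 two-digit primes up to 89
hits = [(int(f"{p}{q}"), p, q)
        for p in small for q in small
        if isprime(int(f"{p}{q}"))]
print(len(small), len(hits))       # 20 primes, 73 prime concatenations
```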
The ones found are listed here: $$[1117, 11, 17], [1123, 11, 23], [1129, 11, 29], [1153, 11, 53], [1171, 11, 71], [1319, 13, 19], [1361, 13, 61], [1367, 13, 67], [1373, 13, 73], [1723, 17, 23], [1741, 17, 41], [1747, 17, 47], [1753, 17, 53], [1759, 17, 59], [1783, 17, 83], [1789, 17, 89], [1913, 19, 13], [1931, 19, 31], [1973, 19, 73], [1979, 19, 79], [2311, 23, 11], [2341, 23, 41], [2347, 23, 47], [2371, 23, 71], [2383, 23, 83], [2389, 23, 89], [2917, 29, 17], [2953, 29, 53], [2971, 29, 71], [3119, 31, 19], [3137, 31, 37], [3167, 31, 67], [3719, 37, 19], [3761, 37, 61], [3767, 37, 67], [3779, 37, 79], [4111, 41, 11], [4129, 41, 29], [4153, 41, 53], [4159, 41, 59], [4337, 43, 37], [4373, 43, 73], [4723, 47, 23], [4729, 47, 29], [4759, 47, 59], [4783, 47, 83], [4789, 47, 89], [5323, 53, 23], [5347, 53, 47], [5923, 59, 23], [5953, 59, 53], [6113, 61, 13], [6131, 61, 31], [6143, 61, 43], [6173, 61, 73], [6719, 67, 19], [6737, 67, 37], [6761, 67, 61], [6779, 67, 79], [7129, 71, 29], [7159, 71, 59], [7331, 73, 31], [7919, 79, 19], [7937, 79, 37], [8311, 83, 11], [8317, 83, 17], [8329, 83, 29], [8353, 83, 53], [8389, 83, 89], [8923, 89, 23], [8929, 89, 29], [8941, 89, 41], [8971, 89, 71]$$ On the other hand, there are $1061$ primes with 4 digits, so the percentage of 4-digit prime numbers which are a concatenation of two 2-digit primes is $73/1061$, which is less than $7\%$. Funnily enough, there are $237$ unique 4-digit primes that are the concatenation of a 1-digit prime (i.e. $2,3,5,7$) and a 3-digit prime. This accounts for more than $22\%$ of 4-digit prime numbers being a concatenation of one prime digit with a 3-digit prime. In total, we conclude that there are $310$ concatenations that yield 4-digit prime numbers, for a total of around $29.2\%$. This means that about one out of every three cars with a 4-digit prime number on the license plate has a number made by concatenating two primes! (if the license plate is like the one in the picture) After that, I concatenated, in order, the first $1000$ prime numbers with themselves. The way I did it was simple: let us say $h_1$ and $h_2$ are the two prime numbers that are going to be concatenated. We will want to check if $h_1h_2$ is prime. I fixed $h_1$ and made $h_2$ iterate over the $1000$ said prime numbers, to check if $h_1h_2$ was prime. Then I changed $h_1$ to the next prime and repeated. And so on and so forth until I had checked all $1000^2$ concatenations of the first $1000$ prime numbers. (By the way, the $1000^{th}$ prime number is $7919$.) Along the way I found $107850$ concatenations that turned out to also be prime numbers, which is roughly $10.8\%$ of all concatenations tested. (Please notice this does not mean I found exactly $107850$ unique primes. I did not test for repetitions nor remove them, so the actual prime count should be a little lower than that.) (If requested, I can extend the bounds of the test to more than $1000$ prime numbers.) Some of the concatenations I found are listed here (I use $a : b$ to mean "$a$ concatenated with $b$"): $2 : 3 = 23$ (obvious one, I guess) $2 : 29 = 229$ ... $7919 : 7109 = 79197109$ $7919 : 7907 = 79197907$ Naturally, I found no concatenation that started with a prime number after $7919$, which was my bound (the $1000^{th}$ prime). Maybe this means that there are fewer concatenated primes that are the result of concatenating two big prime numbers?
Who knows...<|endoftext|> TITLE: What is the total sum of the cardinalities of all subsets of a set? QUESTION [49 upvotes]: I'm having a hard time finding the pattern. Let's say we have a set $$S = \{1, 2, 3\}$$ The subsets are: $$P = \{ \{\}, \{1\}, \{2\}, \{3\}, \{1, 2\}, \{1, 3\}, \{2, 3\}, \{1, 2, 3\} \}$$ And the value I'm looking for is the sum of the cardinalities of all of these subsets. That is, for this example, $$0+1+1+1+2+2+2+3=12$$ What's the formula for this value? I can sort of see a pattern, but I can't generalize it. REPLY [2 votes]: You can encode every subset of a set as a binary number. If the set has $n$ elements, then you need the binary numbers of length $n$. Position $i$ in the number corresponds to element $i$ of the original set: an element $i$ is in the subset if there is a 1 at position $i$. There are $2^n$ numbers of length $n$, and there are $n$ digits per number. So in all there are $n\cdot 2^n$ digits across all these numbers. Now the sum of the cardinalities of all subsets is just the number of 1's in all these numbers. For each binary number, you can complement each bit to get another binary number of the same length. Thus the number of 0's and 1's must be the same, or, to put it another way, the number of 1's must be half of the digits. Therefore, the number of 1's is $n\cdot 2^n/2$, i.e. $n\cdot 2^{n-1}$, and that is the sum of the cardinalities.<|endoftext|> TITLE: Does there exist a basis for the set of $2\times 2$ matrices such that all basis elements are invertible? QUESTION [11 upvotes]: As the title says, I'm wondering whether there exists a basis for the set of $2\times 2$ matrices (with entries from the real numbers) such that all basis elements are invertible. I have a gut feeling that it is false, but don't know how to prove it. I know that for a matrix to be invertible, it must be row equivalent to the identity matrix and I think I may be able to use this in the proof, but I don't know how. Thanks in advance for any help, Jack REPLY [6 votes]: I just want to point out that, in order to prove that the answer to a question of the form Does there exist a basis for $M_n(\mathbb{R})$ consisting of [matrices of some special form]? is "yes", you only actually need to show that the special matrices span all of $M_n(\mathbb{R})$, since any spanning set can be reduced to a basis. It's usually a lot easier to check that some special set of matrices is spanning than it is to explicitly describe a basis. Let's look at the case you asked about---invertible matrices. Let $A$ be any matrix. Take $\lambda$ to be a nonzero number which isn't an eigenvalue of $A$ (which can be done because $A$ can have at most $n$ eigenvalues). Then we have $$A = \lambda I + (A-\lambda I)$$ where $\lambda I$ and $A-\lambda I$ are both invertible. Since every matrix can be written as a sum of two invertibles, the invertibles are spanning, and can, in principle, be reduced to a basis.<|endoftext|> TITLE: Find the function given its Fourier series QUESTION [7 upvotes]: I am solving an exercise in which I'm asked to show that $$1=\frac{4}{\pi}\sum_{n=1}^\infty{\frac{\sin((2n-1)x)}{2n-1}}, \qquad 0<x<\pi.$$<|endoftext|> TITLE: Matrix given by $a_{ij} = 1/(i+j)$ is non-singular. QUESTION [10 upvotes]: What is a smart way to see that $$A\stackrel{\cdot}{=} \begin{bmatrix} 1/2 & 1/3 & \cdots & 1/(n+1) \\ 1/3 & 1/4 & \cdots & 1/(n+2) \\ \vdots & \vdots & \ddots & \vdots \\ 1/(n+1) & 1/(n+2) & \cdots & 1/(2n) \end{bmatrix} $$is non-singular? I computed $\det A$ for $n=1,2$ and $3$ but I failed to see the pattern.
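(For gathering more data points, exact determinants of this family are easy to compute with rational arithmetic. A minimal Python sketch; the values shrink very fast but, as the answer below shows, never vanish:)

```python
from fractions import Fraction

def det_A(n):
    """Exact determinant of the n x n matrix with entries 1/(i+j), 1 <= i, j <= n."""
    M = [[Fraction(1, i + j) for j in range(1, n + 1)] for i in range(1, n + 1)]
    det = Fraction(1)
    for k in range(n):                  # Gaussian elimination over Q; no pivoting
        det *= M[k][k]                  # is needed, since all leading principal
        for i in range(k + 1, n):       # minors of this matrix are nonzero
            factor = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    return det

for n in range(1, 7):
    print(n, det_A(n))   # 1/2, 1/72, ...
```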
The context is as follows: in $\Bbb R[x] $ with inner product given by $\langle p(x),q(x)\rangle \stackrel{\cdot}{=}\int_0^1p(x)q(x)\,{\rm d}x$, I want to see that if $U\stackrel{\cdot}{=} \{p(x) \in \Bbb R[x]\mid p(0)=0\}$, then $U^{\perp} =\{0\}$ (and hence $U^{\perp\perp}=\Bbb R[x]\neq U$). I took $f(x) \in U^\perp$ and computed $\langle f(x),x^k\rangle$ for $k\geq 1$. I obtained a homogeneous system for the coefficients of $f(x)$ which has $A$ as its associated matrix. So if $A$ is non-singular I'm done. REPLY [2 votes]: As for the determinant of $A$, it is a special case of something called Cauchy's double alternant. In general, given any $2n$ numbers $x_1, \ldots, x_n, y_1, \ldots, y_n$, if one constructs an $n \times n$ matrix with entry $\frac{1}{x_i + y_j}$ in the $i^{\text{th}}$ row and $j^{\text{th}}$ column, one has $$\det\left[ \frac{1}{x_i + y_j}\right] = \frac{\prod_{1\le i < j\le n}(x_i - x_j)(y_i - y_j)}{\prod_{1\le i, j \le n}(x_i + y_j)}\tag{*1}$$ In particular, this means that if the $x_i$ and $-y_j$ are $2n$ distinct numbers, the determinant is non-zero and hence the matrix is invertible. It is not that hard to prove $(*1)$ ourselves. Consider the expression $$\prod_{1\le i, j \le n}(x_i + y_j)\times\det\left[ \frac{1}{x_i + y_j}\right]$$ If one expands the determinant, it is easy to see this expression is a polynomial in $x_i, y_j$. Since the determinant vanishes whenever some $x_i$ equals another $x_j$ or some $y_i$ equals another $y_j$, this polynomial contains $\prod_{1\le i < j\le n}(x_i - x_j)(y_i - y_j)$ as a factor. By matching the degrees of the polynomials on both sides, we find $$\det\left[ \frac{1}{x_i + y_j}\right] = \lambda_n \frac{\prod_{1\le i < j\le n}(x_i - x_j)(y_i - y_j)}{\prod_{1\le i, j \le n}(x_i + y_j)}$$ for some constant $\lambda_n$ depending only on $n$. If one replaces $x_1$ by $\epsilon x_1$ and $y_1$ by $\epsilon y_1$, sends $\epsilon$ to $0$ and looks at the limiting behaviors of both sides, one can conclude $\lambda_n = \lambda_{n-1}$. One can verify $\lambda_1 = \lambda_2 = 1$ by hand, and hence $\lambda_n = 1$ for all $n$. Applying this to the Hilbert-like matrix $A$ at hand, we get $$\det\left[\frac{1}{i+j}\right] = \frac{(c_n c_{n+1})^2}{c_{2n+1}} \ne 0 \quad\text{ where }\quad c_n = \prod_{k=1}^{n-1} k! $$ and hence $A$ is invertible.<|endoftext|> TITLE: An example of a Cameron–Martin space QUESTION [6 upvotes]: I am trying to compute an example for a Cameron–Martin space, following an exercise in M. Hairer's notes on SPDEs. The problem is the following: consider $\mathcal{C}[0,1]$ endowed with Wiener measure. Prove that the Cameron–Martin space (or "reproducing kernel" space) associated to this measure is $W^{1,2}$. To see that an element in the range of the covariance operator has one weak derivative is not difficult. But my computations actually show that the weak derivative is bounded, not only square integrable. In fact, let $\mu \in \mathcal{M}[0,1] = C[0,1]^*$; then $$Q(\mu) (s)= \int_0^1s \wedge u \text{ } \mu(du) = \int_0^s\int_u^1 \mu(dv) du.$$ Now if we look at our weak derivative $\frac{d}{ds} Q(\mu) (s)= \int_s^1 \mu(dv)$ we see that this function is actually bounded, since $\mu$ is of bounded total variation. So the question is: what did I do wrong? Where have I missed the $L^2$ norm of the derivative? REPLY [3 votes]: I actually solved the issue, and as always these "easy" exercises show that you need to properly understand the definitions.
The Cameron–Martin space $\mathcal{H}$ is the completion of the range of the covariance operator, which we call $\overset{\circ}{\mathcal{H}}$, endowed with the scalar product induced by $Q$ (hence it is in particular a Hilbert space). What I did not do above is see what the scalar product induced by $Q$ is. So let $a, b \in \overset{\circ}{\mathcal{H}}$, $a = Q \mu$, $b = Q \nu$. We get, by abusing the notation of scalar product (note that we have Banach spaces, so in general the written product is a duality pairing): $$\langle a,b\rangle_{\overset{\circ}{\mathcal{H}}} = \langle Q\mu,\nu\rangle = \int_0^1 Q \mu (s) \: \nu (ds) = \int_0^1 \left( \int_0^s \left( \int_u^1 \mu(dv) \right) \: du \right) \: \nu(ds) = $$ $$ = \int_0^1 \left(\int_u^1 \mu(dv) \right) \left( \int_u^1 \nu(ds)\right) \: du = \int_0^1 a'(u)\,b'(u)\: du = \langle a,b\rangle_{W^{1,2}_0} $$ So we get the $W^{1,2}_0$ norm, where by $W^{1,2}_0$ we refer to the Sobolev space with Dirichlet condition at zero (otherwise we would not really have a norm). This means that if we take the completion of the space w.r.t. this norm, we actually find $W^{1,2}_0 = \mathcal{H}.$<|endoftext|> TITLE: Expected days to finish a box of cookies QUESTION [5 upvotes]: Adam has a box containing 10 cookies. Each day he eats each remaining cookie with probability $\frac12$, independently. Calculate the expected number of days it takes Adam to finish the cookies. As a start we can set $X$ as the expected number of days it takes for Adam to finish eating the cookies. However I'm unable to progress further. REPLY [6 votes]: Let $E_n$ denote the expected number of days for Adam to finish a box containing $n$ cookies. For a given $n$ and a given $k$ with $0\le k\le n$, with probability $\frac1{2^n}\binom nk$ Adam will eat $k$ cookies on that day and take an expected $E_{n-k}$ more days to finish the box. From here we can derive an equation for $E_n$: $$\begin{align} E_n&=\sum_{k=0}^n\frac1{2^n}\binom nk(1+E_{n-k})\\ &=\sum_{k=0}^n\frac1{2^n}\binom nk(1+E_k)\\ &=1+\sum_{k=0}^n\frac1{2^n}\binom nkE_k\\ E_n-\frac1{2^n}E_n&=1+\sum_{k=0}^{n-1}\frac1{2^n}\binom nkE_k\\ E_n&=\frac{2^n}{2^n-1}\left(1+\sum_{k=0}^{n-1}\frac1{2^n}\binom nkE_k\right) \end{align}$$ The last line is a formula for $E_n$ in terms of all $E_k$ with $0\le k<n$, so starting from $E_0=0$ one can compute $E_1, E_2, \dots$ recursively, up to the desired $E_{10}$.<|endoftext|> TITLE: Logarithm of the determinant of a positive definite matrix QUESTION [8 upvotes]: For positive definite $C=LL^T$, where $L$ is the lower triangular Cholesky factor of $C$, why is $\log(\det(C))=2\operatorname{trace}(\log(L))$? I know that if $\{\lambda_i\}$ are the eigenvalues of $C$, $\det(C)=\prod_i\lambda_i$, so that $\log(\det(C))=\sum\log(\lambda_i)$, but I'm not sure where to go from there. REPLY [7 votes]: Hint 1: $\det(C)=\det(LL^T)=\det(L)\det(L^T)=\det(L)^2$, so $\log\det(C)=2\log\det(L)$. Denote by $\lambda_i$ the eigenvalues of $L$ and continue in the same way as you tried. Hint 2: For the Jordan normal form $L=SJS^{-1}$ it holds that $\log(L)=S\log(J)S^{-1}$, so $$ \operatorname{trace}\log(L)=\operatorname{trace}(S\log(J)S^{-1})=\operatorname{trace}\log(J). $$<|endoftext|> TITLE: Coefficients of a formal power series satisfying $\exp(f(z)) = 1 + f(q\,z)/q$ QUESTION [9 upvotes]: Let $(q;\,q)_n$ denote the $q$-Pochhammer symbol: $$(q;\,q)_n = \prod_{k=1}^n (1 - q^k), \quad(q;\,q)_0 = 1.\tag1$$ Consider a formal power series in $z$: $$f(z) = \sum_{n=1}^\infty \frac{(-1)^{n+1}P_n(q)}{n!\,(q;\,q)_{n-1}}z^n,\tag2$$ where $P_n(q)$ are some (yet unknown) polynomials in $q$: $$P_n(q) = \sum_{k=0}^{m} c_{n,k} \, q^k,\tag3$$ where $m=\binom{n-1}2 = \frac{(n-1)(n-2)}2$ and $c_{n,k}$ are some integer coefficients.
Suppose the formal power series $f(z)$ satisfies the functional equation $$\exp(f(z)) = 1 + f(q\,z)/q.\tag4$$ Expanding the left-hand side of $(4)$ in powers of $z$ using the exponential partial Bell polynomials, and comparing coefficients at corresponding powers of $z$ on both sides, we can obtain a system of equations, by solving which we can find the coefficients of the polynomials $P_n(q)$: $$ \begin{align} P_1(q) &= 1\\ P_2(q) &= 1\\ P_3(q) &= 2 + q\\ P_4(q) &= 6 + 6 q + 5 q^2 + q^3\\ P_5(q) &= 24 + 36 q + 46 q^2 + 40 q^3 + 24 q^4 + 9 q^5 + q^6\\ \dots \end{align}\tag5 $$ This is quite a slow process, even when done on a computer. I computed the polynomials up to $n=27$ (they can be found here) using a Mathematica program that can be found here. There are some patterns in the coefficients I computed (so far they are just conjectures): $$ \begin{align} c_{n,0} &= (n-1)!&\vphantom{\Huge|}\\ c_{n,1} &= \frac{(n-2)(n-1)!}2, &n\ge2\\ c_{n,2} &= \frac{(3n+8)(n-3)(n-1)!}{24}, &n\ge3\\ c_{n,3} &= \frac{(n^2 + 5 n - 34)\,n!}{48}, & n\ge4 \end{align} \tag6 $$ and $$ \begin{align} c_{n,m} &= 1&\vphantom{\Huge|}\\ c_{n,m-1} &= \frac{(n+1)(n-2)}2, &n\ge2\\ c_{n,m-2} &= \frac{(3 n^3 - 5 n^2 + 6 n + 8)(n-3)}{24}, &n\ge3\\ c_{n,m-3} &= \frac{(n^4 - 10 n^3 + 43 n^2 - 74 n + 16) (n - 1) \, n}{48}, &n\ge4 \end{align} \tag7 $$ where $m=\binom{n-1}2$. Other coefficients seem to follow more complicated patterns. We can also observe that $$ \begin{align} P_n(1) &= \frac{(n-1)!\,n!}{2^{n-1}}\\ P_{2n}(-1) &= \frac{(2n-1)!\,n!}{3^{n-1}}\\ P_{2n-1}(-1) &= \frac{(2n-1)!!\,(2n-2)!}{6^{n-1}}, \end{align}\tag8 $$ where $n!!$ denotes the double factorial. I am trying to find a more direct formula for the polynomials $P_n(q)$ or their coefficients $c_{n,k}$ (possibly containing finite products and sums, but not requiring one to solve equations). REPLY [5 votes]: Using the first recurrence relation here, we can find a recurrence for the polynomials $P_n(q)$: $$P_1(q) = 1, \quad P_n(q) = \sum_{k=1}^{n-1} {{n-1} \choose {k-1}} {{n-2} \brack {k-1}}_q P_k(q) \, P_{n-k}(q) \, q^{n-k-1},$$ where $n \choose k$ is the binomial coefficient, and ${n \brack k}_q$ is the $q$-binomial coefficient (also known as the Gaussian binomial coefficient). A Mathematica program that computes them using this recurrence can be found here. If we introduce a notation for the coefficients of the formal power series $f(z)$, which are rational functions of $q$: $$f(z) = \sum_{n=1}^\infty Q_n(q)\,z^n, \quad Q_n(q) = \frac{(-1)^{n+1}P_n(q)}{n!\,(q;\,q)_{n-1}},$$ then we can have a simpler recurrence for them: $$Q_1(q) = 1, \quad Q_n(q) = \frac1{n \, (1-q^{1-n})}\sum_{k=1}^{n-1} k \, q^{-k} \, Q_k(q) \, Q_{n-k}(q).$$ It would be nice to find a more direct, non-recurrent formula for them.<|endoftext|> TITLE: Why don't we use the étale definition of sheaves in Algebraic Geometry QUESTION [6 upvotes]: I am studying scheme theory, thus I need to learn about sheaves of groups, modules and the basic constructions you do with sheaves: kernels, images, sums, tensor products, base change. Often we only obtain a presheaf, so we take the associated sheaf. Then, to prove basic results such as the adjointness $$\text{Hom}_{\mathcal{O}_X}(f^{\ast}\mathcal{G},\mathcal{F})\simeq \text{Hom}_{\mathcal{O}_Y}(\mathcal{G},f_{\ast}\mathcal{F})$$ when $f:\, (X,\mathcal{O}_X)\rightarrow(Y,\mathcal{O}_Y)$ is a morphism of ringed spaces, $\mathcal{F}$ an $\mathcal{O}_X$-module and $\mathcal G$ an $\mathcal O_Y$-module, you need to go through some very awful verification.
Nobody actually does the full proof, so students such as myself need to go through it by themselves. I find it very frustrating. However, when reading Rotman's Introduction to Homological Algebra, I discovered the étale definition of a sheaf. Then I came across Serre's Faisceaux Algébriques Cohérents, and the basic results seemed much easier to prove. My question is: why don't we adopt the étale definition of sheaves more often? Almost every textbook uses the classic construction with the functor $$\mathbf{Opn}(X)^{op}\rightarrow\mathcal{C}$$ from the category of open sets of a topological space. My guess is that we are interested in the functorial properties of our constructions. However, we can prove that the two definitions are canonically the same and we can do everything with one or the other. I would gladly welcome any insight. REPLY [2 votes]: The functorial definition of a sheaf on a site is more general, and hence deserves to be the default definition. It is needed for étale cohomology, for instance. However, when working with sheaves on a space, both perspectives work well, depending on what you want to do. The étale space definition is useful when you want to work with the inverse image of a sheaf, since in that setting the inverse image is just the pullback of the étale space. It is also useful when proving that the inverse image along an étale map has a left adjoint. But on the other hand, when it comes to the pushforward of a sheaf, the functorial perspective is much easier to use, because here the pushforward is just precomposition.<|endoftext|> TITLE: Where in the analytic hierarchy is the theory of true set theory? QUESTION [6 upvotes]: Where in the analytic hierarchy is the theory of all true sentences in ZFC? In higher-order ZFC? In ZFC plus large cardinal axioms? Edit: It seems that this is ill-defined. Why is this ill-defined for ZFC, but well-defined for weaker theories like Peano arithmetic and higher-order arithmetic? REPLY [9 votes]: It depends what you mean by "true sentences of ZFC." If you mean the set of true sentences in the language of set theory - that is, the theory of the ambient model of set theory $V$ - then this isn't in the analytic hierarchy at all. This is because in ZFC we can define the true theory of $V_{\omega+1}$, which consists essentially of the true analytic sentences. So the theory of $V$ is strictly more complicated than anything in the analytic hierarchy. Indeed, any "reasonable" hierarchy will fail to reach the complexity of the theory of $V$, for a much more fundamental reason: the theory of $V$ can't be definable in $V$, by Tarski's theorem! So any complexity hierarchy, all of whose levels are definable, can't capture $Th(V)$. For example, $Th(V)$ is not arithmetic, analytic, $\Pi^m_n$ for any $m, n$ (note that already $\Pi^2_1$ exhausts the arithmetic and analytic hierarchies, and much more) or computable from $Th(D, \in)$ for any definable set $D$. Note that we can take $D$ to be something like "$V_\kappa$ for the first inaccessible $\kappa$," or similarly with "inaccessible" replaced with any other definable large cardinal notion. So even $Th(V_\kappa, \in)$ for "big" $\kappa$ is much, much less complicated than $Th(V)$. Meanwhile, exactly how complicated $Th(V)$ is depends on $V$ - see Mitchell's answer.
If, on the other hand, you mean the set of consequences of ZFC, then this is just at the level of $\Sigma^0_1$ (or $0'$) - not even into the analytic hierarchy, just the first nontrivial level of the arithmetic hierarchy.<|endoftext|> TITLE: $\mathbb{Z}$ is Noetherian but not Artinian QUESTION [8 upvotes]: I do believe that my question is simple. I do not understand what is wrong. So help me sort it out please. I know the following facts: $\mathbb{Z}$ is a Noetherian ring and it is not Artinian because the infinite chain of ideals $2\mathbb{Z} \supsetneq 4\mathbb{Z} \supsetneq 8\mathbb{Z} \supsetneq \cdots$ violates the Descending Chain Condition. Also, a ring $R$ is Artinian iff $R$ is Noetherian and every prime ideal is maximal. We see that all prime ideals have the form $p\mathbb{Z}$ and are maximal. This also gives an example of a module which is Noetherian but not Artinian. REPLY [14 votes]: $0$ is a prime ideal of $\mathbb{Z}$ that is not maximal.<|endoftext|> TITLE: Quick way to check if a matrix is diagonalizable. QUESTION [20 upvotes]: Is there any quick way to check whether a matrix is diagonalizable or not? In an exam, if a question like "Which of the following matrices is diagonalizable?" is asked and four options are given, how can one check quickly? I hope my question makes sense. REPLY [3 votes]: One nice characterization is this: A matrix or linear map is diagonalizable over the field $F$ if and only if its minimal polynomial is a product of distinct linear factors over $F$. So first, you can find the characteristic polynomial (https://en.wikipedia.org/wiki/Characteristic_polynomial). If the characteristic polynomial itself is a product of distinct linear factors over $F$, then you are lucky, no extra work needed, the matrix is diagonalizable. If not, then use the fact that the minimal polynomial divides the characteristic polynomial to find the minimal polynomial. (This may not be easy, depending on the degree of the characteristic polynomial.)<|endoftext|> TITLE: Let $f$ be a bijection $\mathbb{N} \rightarrow \mathbb{N}$, prove there exist positive integers such that... QUESTION [7 upvotes]: Let $f:\mathbb{N} \rightarrow \mathbb{N} $ be a bijection. Prove that there exist positive integers $a < a + d < a + 2d$ such that $f(a) < f(a + d) < f(a + 2d).$ REPLY [3 votes]: Let $a = f^{-1}(1)$ and $b_1 = f^{-1}(2)$. 1) If $b_1> a$, set $d = b_1-a$. Since $f$ is a bijection and the values $1$ and $2$ are already taken, $f(b_1 + d) > 2$. Therefore $$a < b_1 = a + d < b_1+ d = a + 2d$$ and $f(a) < f(a + d) < f(a +2d)$. 2) If $b_1< a$, let $b_2= f^{-1}(3)$ and proceed as in 1) above. Repeat as needed until $b_m= f^{-1}(m+1)> a$. There are only finitely many natural numbers less than $a$, so for some $k$, $b_k$ must be greater than $a$.
Setting $d = b_k-a$ gives us our solution.<|endoftext|> TITLE: functional equation of type $f(x+f(y)+xf(y)) = y+f(x)+yf(x)$ QUESTION [5 upvotes]: If $f:\mathbb{R}-\{-1\}\rightarrow \mathbb{R}$ and $f$ is a differentiable function that satisfies $$f(x+f(y)+xf(y)) = y+f(x)+yf(x)\quad\forall x,y \in \mathbb{R}-\{-1\}\;,$$ then find the value of $\displaystyle 2016(1+f(2015)).$ $\bf{My\; Try::}$ Using partial differentiation, differentiate w.r.t. $x$, treating $y$ as a constant: $$f'(x+f(y)+xf(y)) \cdot (1+f(y)) = f'(x)+yf'(x)$$ Similarly, differentiate w.r.t. $y$, treating $x$ as a constant: $$f'(x+f(y)+xf(y))\cdot (f'(y)+xf'(y)) = 1+f(x)$$ Now divide these two equations; we get $$\frac{1+f(y)}{(1+x)f'(y)} = \frac{(1+y)f'(x)}{1+f(x)}$$ How can I solve it after that? Help required, thanks. REPLY [2 votes]: Let $x=0$, then $$f(f(y))=y\,\big(\,f(0)+1\big)+f(0)\tag{1}$$ and from this we get $f(f(0))=f(0)$ and $f(f(1))=2f(0)+1$. Let $y=0$ and $x=1$ in the original equation, then $$f\big(1+2f(0)\big)=f(1)\tag{2}$$ applying $\,f$ we get $$f(f(1+2f(0)))=f(f(1))=2f(0)+1$$ Substituting $y=1+2f(0)$ in $(1)$ we obtain $$f(f(1+2f(0)))=(1+2f(0))(f(0)+1)+f(0)=1+3f(0)+2(f(0))^2,$$ and from the last two equations we get $$f(0)\big(1+f(0)\big)=0.$$ If $f(0)=-1$, then $f(f(0))=f(-1)$, which is not defined, but we know that $f(f(0))=f(0)$. Thus $f(0)=0$ and from $(1)$ $\:f(f(x))=x$. For the next step see this post<|endoftext|> TITLE: Proving $\int_0^1\frac{\lvert f(x)\rvert^2}{x^2}\,\mathrm dx\le4\int_0^1{\lvert f'(x)\rvert^2}\,\mathrm dx$ when $f\in\mathcal C^1([0,1])$ and $f(0)=0.$ QUESTION [14 upvotes]: Let us assume $f \in \mathcal{C}^1([0,1])$ and $f(0)=0.$ Prove that $$\int_{0}^{1} \frac{\lvert f(x) \rvert^2}{x^2}\,dx \le 4 \int_{0}^{1} {\lvert f'(x) \rvert^2}\,dx.$$ By integrating by parts I obtained the following $$\int_{0}^{1} \frac{\lvert f(x) \rvert^2}{x^2}\,dx = -\frac{1}{x} \lvert f(x) \rvert^2\Big|_0^1+2\int_{0}^{1} \frac{f(x)|f'(x)|}{x |f(x)|}\,dx \le 2\int_{0}^{1} \frac{|f'(x)|}{x}\,dx $$ but I'm not sure of the result and its usefulness; it's easy Calculus, but I can't go on. Any suggestions? REPLY [6 votes]: You have almost done it with your integration by parts attempt; you are just missing one simple last step. Starting as you did (and applying the Cauchy–Schwarz inequality in the second step), \begin{align} I&=\int_0^1\frac{f(x)^2}{x^2}\,dx=\underbrace{-\frac{f(x)^2}{x}\Bigg|_0^1}_{\le 0}+2\int_0^1\frac{f(x)}{x}f'(x)\,dx\le\\ &\le 2\left(\int_0^1\frac{f(x)^2}{x^2}\,dx\right)^{1/2}\left(\int_0^1 f'(x)^2\,dx\right)^{1/2}=2\sqrt{I}\left(\int_0^1 f'(x)^2\,dx\right)^{1/2}. \end{align} Now divide by $\sqrt{I}$ and square both sides.<|endoftext|> TITLE: Why are metrics defined as functions in $\mathbb{R}^{+}$? QUESTION [12 upvotes]: A metric on a set $S$ is a function $d: S^2 \to \mathbb{R}^{+}$ that is symmetric, sub-additive, non-negative, and takes $(x,y)$ to $0$ iff $x=y$. My question is: what makes $\mathbb{R}^{+}$ so special that metrics are universally defined in terms of it? Why don't we use some other totally-ordered set with a least element $m$ and an abelian operator $+$ which preserves order and under which $m$ is neutral? To take it one step further, why aren't metrics defined, more generally, to map elements of $S^2$ to any totally-ordered set that satisfies these conditions? REPLY [2 votes]: Why are metrics defined as functions in $\mathbb{R}^+$? A: Well, actually metrics are not always defined as functions in $\mathbb{R}^+$. As is pointed out in Glitch's answer, when the metric takes values in the real numbers, we have Real Analysis at our disposal.
And sometimes that is all that is needed. For example, in classical Functional Analysis, as long as we consider Archimedean valued fields (equivalently, subfields of $\mathbb{C}$), the norms and metrics that are usually considered don't need to take values outside the real numbers to obtain great results. But certainly, this is not the case if we allow the use of non-Archimedean valued fields with valuations of rank larger than $1$ (taking values in ordered groups that cannot be embedded in $\mathbb{R}^+$). Although in this case we cannot make use of real analysis, we use $p$-adic analysis (if we are using $p$-adic fields) or Levi-Civita analysis (if we are using the Levi-Civita field) or any other ultrametric analysis. In this context, the norms and seminorms need to take values outside the real numbers (see this definition), and here the need for metrics with values in arbitrary linearly ordered sets is evident. This generalization of metrics is called a "scale", and its study has far-reaching consequences for non-Archimedean Functional Analysis. For an introduction to this area I recommend the paper: Banach spaces over fields with an infinite rank valuation - [H. Ochsenius A., W.H. Schikhof] - 1999 After that see: Norm Hilbert spaces over Krull valued fields - [H. Ochsenius, W.H. Schikhof] - Indagationes Mathematicae, Elsevier - 2006<|endoftext|> TITLE: How to justify Einstein notation manipulations without explicitly writing sums? QUESTION [5 upvotes]: In calculating the expression for the coordinates of the Lie bracket of two vector fields, one has to "interchange the roles of the dummy indices $i$ and $j$ in the second term" (p. 187, Lee, Introduction to Smooth Manifolds), i.e. justify the following equality: $$X^j \frac{\partial Y^i}{\partial x^j} \frac{\partial f}{\partial x^i} - Y^i \frac{\partial X^j}{\partial x^i}\frac{\partial f}{\partial x^j} \overset{?}{=} X^j \frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i} - Y^j\frac{\partial X^i}{\partial x^j}\frac{\partial f}{\partial x^i}. $$ Now, writing out the sums explicitly, this is fairly easy to do: $$\sum_i\sum_j \left[X^j \frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i} - Y^i \frac{\partial X^j}{\partial x^i}\frac{\partial f}{\partial x^j} \right] = \sum_i\sum_j\left[X^j \frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i} \right] - \sum_i\sum_j\left[ Y^i \frac{\partial X^j}{\partial x^i}\frac{\partial f}{\partial x^j} \right] \\ = \sum_i\sum_j\left[X^j \frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i} \right] - \sum_j\sum_i \left[ Y^j \frac{\partial X^i}{\partial x^j}\frac{\partial f}{\partial x^i} \right] = \sum_i\sum_j\left[X^j \frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i} \right] - \sum_i\sum_j \left[ Y^j \frac{\partial X^i}{\partial x^j}\frac{\partial f}{\partial x^i} \right] \\ = \sum_i\sum_j \left[X^j \frac{\partial Y^i}{\partial x^j}\frac{\partial f}{\partial x^i} - Y^j \frac{\partial X^i}{\partial x^j}\frac{\partial f}{\partial x^i} \right] .$$ However, writing out all of these sums is fairly laborious and defeats the purpose of using Einstein notation in the first place. Question: Is there a list somewhere of allowed manipulations using Einstein notation? I would like to use such a list to rigorously justify manipulations like the above using Einstein notation in the future with a clean conscience.
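For what it's worth, the dummy-index swap in question is easy to sanity-check numerically: both sides denote the same contraction, so evaluating them with arbitrary arrays must give identical results. Here is a minimal sketch using NumPy's `einsum` (all array names below are made up for illustration, not taken from any textbook):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
X = rng.standard_normal(n)         # components X^j
Y = rng.standard_normal(n)         # components Y^i
dY = rng.standard_normal((n, n))   # dY[i, j] plays the role of dY^i/dx^j
dX = rng.standard_normal((n, n))   # dX[i, j] plays the role of dX^i/dx^j
df = rng.standard_normal(n)        # df[i] plays the role of df/dx^i

# Left-hand side: X^j (dY^i/dx^j)(df/dx^i) - Y^i (dX^j/dx^i)(df/dx^j)
lhs = np.einsum('j,ij,i->', X, dY, df) - np.einsum('i,ji,j->', Y, dX, df)
# Right-hand side: same expression with the dummy indices of the second
# term renamed, i.e. Y^j (dX^i/dx^j)(df/dx^i)
rhs = np.einsum('j,ij,i->', X, dY, df) - np.einsum('j,ij,i->', Y, dX, df)

print(np.isclose(lhs, rhs))  # True: renaming bound indices changes nothing
```

Of course this is only a check on random data, not a proof; the question below about a rigorous rulebook still stands.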
I could probably supply the proofs by writing out the sums explicitly myself, so the list of allowed manipulations doesn't need to come with proofs for all of the rules. Note: This is related to a previous question of mine, where I asked (essentially) whether, and if so which and how many, manipulations using Einstein notation require finiteness of the index sets in order to be justified. Note that the above calculation is another example where the finiteness of the indexing sets is appealed to implicitly in order to justify switching the order of summation in the second-to-last step (the third-to-last step consists simply of renaming variables). REPLY [6 votes]: It is not entirely clear what you are asking, but let me point out that the calculation you wrote down is correct even if you erase all the $\Sigma$s, so you could have written a much shorter calculation to justify this. What is in the background is essentially the distributivity law for the real numbers as well as associativity of multiplication. Of course you can exploit commutativity of multiplication also, but this does not seem to be used in the particular calculation you presented. To convince yourself that $u^i_j v_i^j + w^j_i x^i_j = u^i_j v_i^j + w^i_j x_i^j$ is a legitimate procedure, first change the names of the variables in the second summand: $$ u^i_j v_i^j + w^j_i x^i_j = u^i_j v_i^j + w^p_q x_p^q. $$ Now since both $p$ and $q$ are dummy variables, they can be changed respectively to $i$ and $j$ (in that order), so that we get $$ u^i_j v_i^j + w^j_i x^i_j = u^i_j v_i^j + w^p_q x_p^q=u^i_j v_i^j + w^i_j x_i^j. $$ When one uses the Einstein summation convention, the $\Sigma$ signs carry no additional information at all; in particular it cannot be said that something can be shown with the $\Sigma$s but not without them.<|endoftext|> TITLE: $f''+f \ge 0$ implies $f(x)+f(x+\pi) \ge 0$ QUESTION [6 upvotes]: Let $f: \mathbb{R} \to \mathbb{R}$ be a function of class $C^2$ satisfying $f''(t)+f(t) \ge 0$ for all $t \in \mathbb{R}$. Show that $f(t)+f(t+\pi) \ge 0$. What I did: Set $f''(t)+f(t)=g(t)$. This is an LDE of order $2$; we denote this equation by (E) and the corresponding homogeneous equation by (H). Then $f$ is of the form $A\cos(t)+B\sin(t)+y_0(t)$, where $y_0(t)$ is a particular solution of (E) and $A,B$ are constants. The trigonometric part cancels in the evaluation of $f(t)+f(t+\pi)$, so the problem boils down to finding a $y_0(t)$ which is always nonnegative. Well, let's search for such a $y_0(t)$ using the technique of reduction of order, i.e. let's set $y_0(t)=\lambda y_h(t)$ where $y_h(t)$ is a particular solution of $(H)$. All solutions of $(H)$ are sinusoidal, so if this method is going to work we might as well set $y_0(t)=\lambda \sin(t)$. Substituting, we find $\sin(t)\lambda''+2\cos(t)\lambda'=g$. So if $\sin(t)=0$, then $2\cos(t)\lambda'=g(t)$, i.e. $\lambda'=\pm g(t)/2$. Let $I_{2k}=(2k\pi,(2k+1)\pi)$, $I_{2k+1}=((2k+1)\pi,(2k+2)\pi)$. Define $L_I=2k\pi$ if $I=I_{2k}$, $L_I=(2k+1)\pi$ if $I=I_{2k+1}$. If $I \in \{I_{2k},I_{2k+1}\}$ we have $\frac{d}{dt}\left[\lambda'\sin^2(t)\right]=g(t)\sin(t)$ for all $t \in I$, hence $\lambda'\sin^2(t)=\int^t_{L_I} g(u)\sin(u)\,du+C_I$ for all $t \in I$, where $C_I$ is a constant of integration. Note that the integral is well-defined since $g(u)\sin(u)$ is continuous, and the integral is therefore itself continuous. Note that if $I=I_{2k}$, then the integrand is nonnegative, and $\sin^2(t)$ is positive on $I$, so if we choose $C_I$ correctly then $\lambda'$ is nonnegative. The opposite holds if $I=I_{2k+1}$.
This is good because we want $\lambda$ positive on $I_{2k}$ and negative on $I_{2k+1}$. Now note that $\lambda'$ is continuous on all the $I$'s. However, if we impose that $\lambda'$ be continuous on $\mathbb{R}$ then we run into a problem, because $\lim_{t \to L_I,\, t>L_I}\text{RHS}=C_I$, which must be equal to $\lambda'(L_I)\sin^2(L_I)=0$, for all $I$. But then $\lim_{t \to L_I,\, t<L_I}$ … REPLY: The appearance of the function $$G(t,v) = \begin{cases} \sin(t-v), &t > v\\ 0, &t < v\end{cases}$$ in the integral of $(*1)$ is not accidental. It is the Green's function for the linear differential operator $\frac{d^2}{dt^2} + 1$. In a certain sense, one can think of $G$ as the right inverse of this differential operator.<|endoftext|> TITLE: Finite abelian group contains an element with order equal to the lcm of the orders of its elements QUESTION [7 upvotes]: I will quote a question from my textbook, to prevent misinterpretation: Let $G$ be a finite abelian group and let $m$ be the least common multiple of the orders of its elements. Prove that $G$ contains an element of order $m$. I figured that, if $|G|=n$, then I should interpret the part with the least common multiple as $\operatorname{lcm}(|x_1|,\dots,|x_n|)=m$, where $x_i\in G$ for $1\leq i\leq n$; thus, for all such $x_i$, $\exists a_i\in\mathbb{N}$ such that $m=|x_i|a_i$. I guess I should use the fact that $|x_i|$ divides $|G|$, so $\exists k\in \mathbb{N}$ such that $|G|=k|x_i|$ for all $x_i\in G$. I'm not really sure how to go from here, in particular how I should use the fact that $G$ is abelian. REPLY [5 votes]: A finite abelian group can be written as a (finite) direct product of cyclic groups: $$ G=C_{m_1}\times C_{m_2}\times\dots\times C_{m_r} $$ where $C_n$ denotes a cyclic group of order $n$. Thus the order of any element in $G$ divides $\operatorname{lcm}(m_1,m_2,\dots,m_r)$. On the other hand, if $g_i$ is a generator of $C_{m_i}$, the element $$ g=(g_1,g_2,\dots,g_r) $$ has order precisely $\operatorname{lcm}(m_1,m_2,\dots,m_r)$. Fill in the details.<|endoftext|> TITLE: Units in ring of integers are exactly those with norm $\{-1,1\}$ QUESTION [5 upvotes]: Let $K$ be an algebraic number field and $R$ be the ring of integers of $K$. Show that an element $u\in R$ is a unit of $R$ if and only if $N_{K/\mathbb{Q}}(u)\in \{-1,1\}$. It is easy to show that units have norm in $\{-1,1\}$. But for the converse, I have no idea. REPLY [7 votes]: Without using ideals: if $u\in R$ is a unit, then $N(u)N(u^{-1}) = 1$, an equation in $\mathbf Z$, hence $N(u) =\pm 1$; conversely, if $u\in R$ has norm $\pm1$, then as an algebraic integer $u$ is a root of a polynomial of the form $X^n + \dots + a_1 X\pm1\in \mathbf Z[X]$, hence $\pm(u^{n-1} + \dots + a_1)\in R$ is the inverse of $u$.<|endoftext|> TITLE: Did Euclid really prove the existence of irrational numbers? QUESTION [9 upvotes]: In Proposition 10.10 of Euclid's Elements, Euclid tries to construct a line segment which is incommensurable with a given line segment. (Two line segments are incommensurable if there exists no common line segment that both are integer multiples of, or equivalently, if the ratio of their lengths does not equal a ratio of natural numbers.) Here's what he says: Let A be the assigned straight line. It is required to find two straight lines incommensurable, the one in length only, and the other in square also, with A. Set out two numbers B and C which do not have to one another the ratio which a square number has to a square number, that is, which are not similar plane numbers, and let it be contrived that B is to C as the square on A is to the square on D, for we have learned how to do this.
Therefore the square on A is commensurable with the square on D. And, since B does not have to C the ratio which a square number has to a square number, therefore neither has the square on A to the square on D the ratio which a square number has to a square number, therefore A is incommensurable in length with D. This proof relies on Euclid's Proposition 10.9, which states in part that "squares which do not have to one another the ratio which a square number has to a square number also do not have their sides commensurable in length either". So what Euclid does is he chooses two natural numbers B and C such that the ratio of B to C is not equal to a ratio of square numbers. And then he constructs two squares whose areas are in the ratio of B to C. And finally he uses Proposition 10.9 to show that the sides of the two squares are incommensurable. But my question is, where does Euclid get the fact that there exist two natural numbers B and C such that the ratio of B to C is not equal to a ratio of square numbers? Euclid just says "Set out two numbers B and C which do not have to one another the ratio which a square number has to a square number, that is, which are not similar plane numbers". For those who don't know, two natural numbers $m$ and $n$ are called similar plane numbers if there exist natural numbers $p$, $q$, $r$, and $s$ such that $m=pq$, $n=rs$, and the ratio of $p$ to $q$ is equal to the ratio of $r$ to $s$, or equivalently the ratio of $p$ to $r$ is equal to the ratio of $q$ to $s$. Now using Euclid's Proposition 8.18 and Proposition 8.11, it's easy to prove that the ratio of similar plane numbers is equal to a ratio of square numbers. But did Euclid ever prove the converse of that statement, i.e. that if two numbers are not similar plane numbers then their ratio is not equal to a ratio of square numbers? Because if Euclid didn't prove that, then he didn't really do the hard number theory work needed to prove that irrational numbers exist. EDIT: I just found out that Euclid stated the result I want him to prove in a Lemma after Proposition 10.9: It has been proved in the arithmetical books that similar plane numbers have to one another the ratio which a square number has to a square number, and that, if two numbers have to one another the ratio which a square number has to a square number, then they are similar plane numbers. The translator says that the justification of this is "VIII.26 and converse". Here is what Proposition 8.26 says: Similar plane numbers have to one another the ratio which a square number has to a square number. But where does Euclid prove the converse, namely that "if two numbers have to one another the ratio which a square number has to a square number, then they are similar plane numbers"? That's the hard thing to prove. Euclid claims "it has been proved in the arithmetical books", i.e. in Books 7-9, but I can't seem to find such a proof. REPLY [4 votes]: Regarding similar plane numbers : Definition VII.21 : Similar plane numbers are those which have their sides proportional. Example : The numbers $18$ and $8$ are similar plane numbers. When $18$ is interpreted as a plane number with sides $6$ and $3$, and $8$ has sides $4$ and $2$, then the sides are proportional. I.e. : $\dfrac 2 3 = \dfrac 4 6$. Prop.X.9 amounts to proving that : Line segments which produce a square whose area is an integer, but not a square number, are incommensurable with the unit length.
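As a modern aside before following the proof: the equivalence at stake in the Lemma, namely that two numbers have to one another the ratio which a square number has to a square number iff they are similar plane numbers, is easy to confirm by brute force for small numbers. A quick sketch in Python (the two helper function names are mine, not from any source above):

```python
from math import gcd, isqrt

def is_square_ratio(m: int, n: int) -> bool:
    # m : n equals a ratio of square numbers iff, in lowest terms,
    # both parts are perfect squares
    g = gcd(m, n)
    a, b = m // g, n // g
    return isqrt(a) ** 2 == a and isqrt(b) ** 2 == b

def is_similar_plane(m: int, n: int) -> bool:
    # m = p*q and n = r*s with proportional sides, i.e. p : q = r : s
    for p in range(1, m + 1):
        if m % p:
            continue
        q = m // p
        for r in range(1, n + 1):
            if n % r:
                continue
            s = n // r
            if p * s == q * r:  # p : q = r : s, checked as a cross product
                return True
    return False

# The two notions agree on all small pairs; e.g. 18 and 8 satisfy both.
assert all(is_square_ratio(m, n) == is_similar_plane(m, n)
           for m in range(1, 60) for n in range(1, 60))
print(is_similar_plane(18, 8), is_square_ratio(18, 8))  # True True
```

This, of course, only illustrates the statement; it says nothing about whether Euclid proved it, which is the historical question at hand.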
Euclid states : "Set out two numbers B and C which do not have to one another the ratio which a square number has to a square number, that is, which are not similar plane numbers". We can consider a rectangle with sides $n$ and $1$, where $n$ is a number whatever that is not a square (like $2,3,5,\ldots$); by Prop.II.14 we can build the square with the same area. Now we have to compare the square B with area $n$ and the unit square C and proceed by contradiction (in the way that Aristotle's proof is managed) assuming that they are "produced" by lines A and D commensurable in length, i.e. whose values are $p$ and $q$, such that : $n : 1 = p^2 : q^2$ and assume as usual that $p$ and $q$ are relatively prime. Thus, from Prop.VII.27 also $p^2$ and $q^2$ are relatively prime. Thus, by Prop.VII.21, they are the least of the numbers in that ratio. But also $n$ and $1$ are so : relatively prime and the least of the numbers in that ratio. Thus : $n=p^2$ and $1=q^2$, from which we have to conclude that $n$ is a square, contrary to assumption. For incommensurable magnitudes, see : Def.X.1. Those magnitudes are said to be commensurable which are measured by the same measure, and those incommensurable which cannot have any common measure. In Proposition X.2 the so-called euclidean algorithm : ἀνθυφαίρεσις (anthyphairesis, reciprocal subtraction) is used to prove that : If, when the less of two unequal magnitudes is continually subtracted in turn from the greater, that which is left never measures the one before it, then the two magnitudes are incommensurable. See the comment for a geometrical example, and see Heath's edition, Vol.III, page 19 for the application to the side-diagonal case. In Prop.X.5 he proves that : Commensurable magnitudes have to one another the ratio which a number has to a number, followed by its converse, Prop.X.6. In Prop.X.7 Euclid proves the contrapositive of X.6 : Incommensurable magnitudes do not have to one another the ratio which a number has to a number, followed by X.8, the contrapositive of X.5. Thus, applying X.5 to the result regarding the so-called Aristotelian proof, it gives us another example of incommensurable magnitudes. The Aristotelian proof has been introduced into Elements as X.117, but it is considered an interpolation : maybe the "interpolator" decided to supplement Elements with this "obvious" result... For details, see Salomon Ofman's works. For a full-length book on the history of incommensurability in Greek mathematics, you can see : Wilbur Richard Knorr, The Evolution of the Euclidean Elements : A Study of the Theory of Incommensurable Magnitudes and Its Significance for Early Greek Geometry (1973).<|endoftext|> TITLE: $1729$, Fermat's Last Theorem, and Ramanujan's sums of cubes formula QUESTION [7 upvotes]: On the following page in one of Ramanujan's Lost Notebooks, Ramanujan found a formula for sums of cubes such as the famed $1729$, which can be found in the bottom right-hand corner. Also on that page is a formula: If$$\sum_{n\geq0}a_nx^n=\frac {1+53x+9x^2}{1-82x-82x^2+x^3}\\\sum_{n\geq0}b_nx^n=\frac {2-26x-12x^2}{1-82x-82x^2+x^3}\\\sum_{n\geq0}c_nx^n=\frac {2+8x-10x^2}{1-82x-82x^2+x^3}$$ then$$a_n^3+b_n^3=c_n^3+(-1)^n$$ My Question: How would you go about proving this? I did notice one thing: it seemed like Ramanujan was using generating functions, where, given a sequence, you pretend that the numbers are coefficients of a polynomial, and you can then collapse that down into a single expression.
For example: The counting numbers $1,2,3,4,5,\ldots$ can be represented as$$1+2x+3x^2+4x^3+\ldots=\frac {1}{(1-x)^2}\tag{1}$$ But if you use generating functions, then you can't substitute that $x$ with anything, which makes them both powerful but dangerous. So what would be the point of using generating functions? REPLY [5 votes]: Michael Hirschhorn has given two proofs of Ramanujan's cube identity: Michael D. Hirschhorn, An amazing identity of Ramanujan, Mathematics Magazine 68 (1995) 199–201. Michael D. Hirschhorn, A proof in the spirit of Zeilberger of an amazing identity of Ramanujan, Mathematics Magazine 69 (1996) 267–269. For a short summary see also here. In fact, the existence of this result by Ramanujan depends very much on special circumstances of the solution $$ (A^2 +7AB-9B^2)^3 +(2A^2 -4AB+12B^2)^3 = (2A^2 +10B^2)^3 +(A^2 -9AB-B^2)^3 $$ for $4$ cubes. Tito Piezas has found several other Ramanujan-like families of such identities with four cubes; see this MSE-question.
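Since the common denominator $1-82x-82x^2+x^3$ has constant term $1$, all three sequences consist of integers, and Ramanujan's identity can be checked to any order with exact arithmetic. Here is a short sketch in Python (the helper `series` is my own, not taken from any of the sources above): each coefficient of $N(x)/D(x)$ is obtained from $D\cdot A = N$, i.e. $a_n = N_n - \sum_{k\ge1} D_k\, a_{n-k}$.

```python
def series(num, den, terms):
    """Power-series coefficients of num(x)/den(x); requires den[0] == 1."""
    coeffs = []
    for n in range(terms):
        c = num[n] if n < len(num) else 0
        for k in range(1, min(n, len(den) - 1) + 1):
            c -= den[k] * coeffs[n - k]
        coeffs.append(c)
    return coeffs

den = [1, -82, -82, 1]               # 1 - 82x - 82x^2 + x^3
a = series([1, 53, 9], den, 10)      # 1, 135, 11161, ...
b = series([2, -26, -12], den, 10)   # 2, 138, 11468, ...
c = series([2, 8, -10], den, 10)     # 2, 172, 14258, ...

for n in range(10):
    assert a[n]**3 + b[n]**3 == c[n]**3 + (-1)**n
print("verified:", a[:3], b[:3], c[:3])
```

For instance, $n=1$ gives $135^3 + 138^3 = 172^3 - 1$, the entry next to $1729 = 1^3 + 12^3 = 9^3 + 10^3$ on Ramanujan's page. This is only a finite verification, of course; the proofs cited above establish the identity for all $n$.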