TITLE: Finding the limit $\lim_{n\to \infty} \left({\frac{n+1}{n-2}}\right)^{\sqrt n}$ QUESTION [6 upvotes]: I have to find: $$\lim_{n\to \infty} \left({\frac{n+1}{n-2}}\right)^{\sqrt n}$$ But, to be honest, I haven't got the faintest idea how to even begin. Is there a way to evaluate this radical exponent? REPLY [3 votes]: $$\lim_{n\to\infty}\left({\frac{n+1}{n-2}}\right)^{\sqrt n}=\lim_{n\to\infty}\left(1+{\frac{3}{n-2}}\right)^{\frac{n-2}{3}\frac{3\sqrt n}{n-2}}=e^{0}=1$$ because $$\lim_{n\to\infty}\left(1+{\frac{3}{n-2}}\right)^{\frac{n-2}{3}}=e$$ $$\lim_{n\to\infty}\frac{3\sqrt n}{n-2}=0$$ REPLY [2 votes]: Note that we have $$\left(\frac{n+1}{n-2}\right)^{\sqrt n}=\left(\frac{\left(1+\frac1n\right)^n}{\left(1-\frac2n\right)^n}\right)^{n^{-1/2}}$$ In THIS ANSWER and THIS ONE, I showed using only the limit definition of the exponential function that $\left(1+\frac xn\right)^n$ is monotonically increasing for $x>-n$. Therefore, for $n\ge 4$ we have $$2\le \left(1+\frac1n\right)^n\le e \tag 1$$ $$\frac1{16}\le \left(1-\frac2n\right)^n\le e^{-2} \tag 2$$ Putting $(1)$ and $(2)$ together, we find $$(2e^2)^{n^{-1/2}}\le \left(\frac{n+1}{n-2}\right)^{\sqrt n}\le (16e)^{n^{-1/2}}$$ whereupon applying the squeeze theorem yields $$\lim_{n\to \infty}\left(\frac{n+1}{n-2}\right)^{\sqrt n}=1$$<|endoftext|> TITLE: Holomorphic Frobenius Theorem QUESTION [5 upvotes]: I'm trying to understand a proof of the Holomorphic Frobenius Theorem using the smooth version as seen in Voisin's Complex Geometry book: (pg 51) http://www.amazon.com/Hodge-Theory-Complex-Algebraic-Geometry/dp/0521718015 She starts with a holomorphic distribution $E$ of dimension $k$ on a complex manifold which is closed under bracket. So, $[E,E]\subseteq E$. Then, to reduce to the real case, we take the real part of the distribution to get another distribution $\Re(E)$ of dimension $2k$ in the real tangent bundle. What I can't understand is why this real distribution, $\Re(E)$, also satisfies the bracket condition. The book says this follows since $E$ is holomorphic and satisfies the bracket condition. My guess is that a local frame for it is given by the real and imaginary parts of a local frame of $E$, but I'm not sure how to proceed from there. Any help is much appreciated. REPLY [3 votes]: I didn't check your link, but Voisin is probably restricting the standard complex-linear isomorphism $$ X' \leftrightarrow X := \tfrac{1}{2}(X' \otimes 1 - JX' \otimes i) = \tfrac{1}{2}(X' - iJX') \tag{1} $$ between the real tangent bundle $(TM, J)$ and the holomorphic tangent bundle $(T^{1,0}M, i)$, a.k.a., the $i$-eigenspace of $J$ acting on the complexified bundle $TM \otimes \mathbf{C}$. Because $J$ is integrable, \begin{align*} [X, Y] &= \tfrac{1}{4}[X' - iJX', Y' - iJY'] \\ &= \tfrac{1}{4}\bigl([X', Y'] - i[JX', Y'] - i[X', JY'] - [JX', JY']\bigr) \\ &= \tfrac{1}{2}\bigl([X', Y'] - iJ[X', Y']\bigr) \\ &=: Z, \end{align*} the holomorphic vector field corresponding via (1) to $Z' = [X', Y']$. In words, the isomorphism (1) respects the complex bracket. If $E \subseteq T^{1,0}M$ is a holomorphic distribution closed under the bracket, the corresponding real distribution $E' \subset TM$ is also closed under the bracket. It may also be helpful to read about the Nijenhuis tensor.<|endoftext|> TITLE: Is every compact space locally compact? QUESTION [7 upvotes]: Suppose that $(X,\tau)$ is a topological space. If $(X,\tau)$ is compact, then $(X,\tau)$ is locally compact. Does this statement hold for any $(X,\tau)$, or does it only hold when $(X,\tau)$ is Hausdorff?
REPLY [19 votes]: Because of your question I assume that the definition of compact that you use does not require the space to be Hausdorff. The answer then depends on your definition of locally compact: If you require every point $x \in X$ to have some compact neighbourhood (weaker condition) then this is always true, because $X$ itself is a compact neighbourhood of $x$. If you require each point $x \in X$ to have a neighbourhood basis consisting of compact neighbourhoods (stronger condition) then this is not necessarily true. If $X$ is Hausdorff and compact then $X$ is normal and therefore in particular regular, which (for a Hausdorff space) is equivalent to each point having a neighbourhood basis consisting of closed neighbourhoods. These closed neighbourhoods are then also compact, so in this case each point $x \in X$ has a neighbourhood basis consisting of compact sets, which is why $X$ is locally compact even in the sense of the stronger definition. But it is not necessary for a compact space $X$ to be Hausdorff to also be locally compact in the sense of the stronger definition: Take any set $X$ together with the indiscrete topology, i.e. $\{\emptyset,X\}$. The resulting space is compact. Then for every $x \in X$ the only possible neighbourhood basis is $\{X\}$, which consists of compact sets. REPLY [4 votes]: This is true trivially. A space is locally compact if every point has a compact neighborhood. If the space itself is compact, then it is a compact neighborhood of every point.<|endoftext|> TITLE: In common tongue, what are the differences between sparse and dense matrices? QUESTION [6 upvotes]: What are the differences between sparse and dense matrices in practice, so as to offer some insight to new learners on a more intuitive level? Obviously everyone knows the dictionary definition of sparse and dense matrices (a definition based on the portion of zero/non-zero elements). But why are they so important from a mathematical application/optimization/problem-solving point of view? Is it that a lot of neat algorithms are defined such that they can only be operated on a problem if it satisfies such and such criteria, and some guy just proved that sparse | dense matrices tend to satisfy the aforementioned criteria really well? Or is it to do with the limited amount of computer memory available in real life, and that we must somehow "compress" matrices for faster computation - as such, sparse matrices would be more desired? Or is it just a fuzzy guideline word that mathematicians use, as opposed to strict criterion-fulfilling definitions that imply X properties about the matrices (e.g. make sure the matrix is sparse and not dense because too many elements/variables take too long to compute - or something of that nature)? In summary: is the only major difference a result of computational limitations and resource savings, or are there fundamental mathematical differences between the two that make one uniquely operable and the other not? Answer: so essentially it revolves around our ability to compute something. So there really isn't some "fundamental" difference (like the difference between the first derivative and the second derivative of a function); it's just a thing that arose out of technical limitations in real life during computation. REPLY [9 votes]: It's not really about the matrices. It's about how the cost of certain algorithms, data structures, procedures, etc. relates to the size of the matrix and the number of non-zero elements. For example, if a data structure is designed for storing sparse matrices, it means that it can store a matrix in space proportional to the number of non-zero elements. Simpler data structures for storing matrices (like 2-dimensional arrays) take space proportional to the size of the matrix. If you're working with a sparse matrix data structure, it's probably because your matrix is large, but sparse, and you rely on the sparseness of the matrix to meet your requirements for the size of the data structure in memory. Similarly, if an algorithm is designed to work with sparse matrices, then its cost is defined in terms of the number of non-zero elements in the matrix rather than the matrix's size. If you're using an algorithm like that, then again it's probably because your matrix is large, but sparse, and you rely on the sparseness of the matrix to meet your requirements for the speed of the computation. So, in summary, there is no specific density at which the matrix changes from sparse to dense. When you start finding it useful to use data structures and algorithms that are optimized for sparse matrices, then you start thinking about the costs in terms of the number of non-zero elements, and at that point you are using sparse matrices. ALSO, a class of matrices (e.g., diagonal, tridiagonal, etc.) is generally called sparse if the number of non-zero elements is proportional to the number of rows or columns instead of rows*columns. This gives sparse matrix algorithms an advantage in computational complexity (big O), meaning that sparse matrix algorithms will always perform better on sufficiently large matrices in that class.
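To make the storage point concrete, here is a minimal sketch (my illustration, not part of the original answer) of a dictionary-of-keys sparse format in Python; the class and method names are just for illustration. Memory grows with the number of non-zero entries, and a matrix-vector product costs time proportional to that count rather than to rows*columns:

```python
class SparseMatrix:
    """Dictionary-of-keys storage: only non-zero entries are kept."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.data = {}                      # (i, j) -> value; zeros are simply absent

    def set(self, i, j, value):
        if value:
            self.data[(i, j)] = value
        else:
            self.data.pop((i, j), None)     # writing a zero frees the entry

    def matvec(self, x):
        """Matrix-vector product in O(number of non-zero entries)."""
        y = [0.0] * self.rows
        for (i, j), v in self.data.items():
            y[i] += v * x[j]
        return y
```

A dense 2-dimensional array would instead cost rows*columns in both storage and multiplication time, which is exactly the trade-off described above.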
<|endoftext|> TITLE: How to simplify $\lim_{n\to \infty}\sum_{r=1}^n \tan^{-1} \dfrac{2r+1}{r^4+2r^3+r^2+1}$ QUESTION [5 upvotes]: $$\lim_{n\to \infty}\sum_{r=1}^n \tan^{-1} \dfrac{2r+1}{r^4+2r^3+r^2+1}$$ How am I supposed to do it? One thing I see here is $$\lim_{n\to \infty}\sum_{r=1}^n \tan^{-1} \dfrac{2r+1}{(r^2+r)^2+1}$$ Here the derivative of $r^2+r$ is $2r+1$ (if it helps). Its final answer is $\pi/4$. REPLY [2 votes]: You aren't too far from the answer; rewriting the denominator as $(r^2+r)^2+1$ was the key step. Write $r^2+r$ as $r(r+1)$; squaring it gives $r^2(r+1)^2$, the product of $r^2$ and $(r+1)^2$, and the difference of those two terms is the numerator: $2r+1=(r+1)^2-r^2$. So each summand has the form of $\tan(A-B)$ with $\tan A=(r+1)^2$ and $\tan B=r^2$: $$\tan^{-1}\frac{(r+1)^2-r^2}{1+r^2(r+1)^2}=\tan^{-1}\left((r+1)^2\right)-\tan^{-1}\left(r^2\right).$$ Now telescope this sum: it collapses to $\tan^{-1}\left((n+1)^2\right)-\tan^{-1}(1)$, which tends to $\pi/2 - \pi/4 = \pi/4$.<|endoftext|> TITLE: Find the sum $\sum _{ k=1 }^{ 100 }{ \frac { k\cdot k! }{ { 100 }^{ k } } } \binom{100}{k}$ QUESTION [9 upvotes]: Find the sum $$\sum _{ k=1 }^{ 100 }{ \frac { k\cdot k! }{ { 100 }^{ k } } } \binom{100}{k}$$ When I asked my teacher how I could solve this question, he responded that it is very hard: you can't solve it. I hope you can help me in solving and understanding the question.
REPLY [5 votes]: Using the notation $n^\underline{r}=\overbrace{n\ (n-1)\ (n-2)\cdots(n-r+1)}^{r\text{ terms}}$ for the falling factorial, we have $$\begin{align} \sum_{k=1}^n\frac {k\cdot k!}{n^k}\binom nk&= \sum_{k=1}^n\frac {\color{blue}k\cdot k!}{n^k}\cdot \frac {n^\underline{k}}{k!}\\ &=\sum_{k=1}^n\frac {n^\underline{k}}{n^k}\color{blue}{[n-(n-k)]}\\ &=n\underbrace{\sum_{k=1}^n\left(\frac {n^\underline{k}}{n^k}-\frac{n^\underline{k+1}}{n^{k+1}}\right)}_{\text{telescoping sum}} &&\text{as }n^\underline{k}(n-k)=n^\underline{k+1}\\ &=n &&\text{as }n^{\underline{n+1}}=0 \end{align}$$ Putting $n=100$ gives $$\sum_{k=1}^{100}\frac {k\cdot k!}{100^k}\binom {100}k=100\qquad\blacksquare$$<|endoftext|> TITLE: Prove that $f = q_1 + Gq_2$ for some $q_1, q_2 \in \mathbb{k}_{sym}(x_1,\dots,x_n)$ QUESTION [5 upvotes]: Suppose the orbit of the function $f \in \mathbb{k}(x_1,\dots,x_n)$ under the action of $\{\phi_\sigma\mid\sigma \in \mathfrak{S}_n\}$ has length $2$. Prove that $f = q_1 + Gq_2$ for some $q_1, q_2 \in \mathbb{k}_{\text{sym}}(x_1,\dots,x_n)$, where $G = W_n$ in case of $\mathrm{char}(\mathbb{k}) \neq 2$ and $G=F$ in case of $\mathrm{char}(\mathbb{k}) = 2$. $$F(x_1, \dots, x_n) = \sum_{\sigma \in \mathfrak{A}_n} \prod_{i=1}^nx^{i-1}_{\sigma(i)}$$ $$W_n(x_1, \dots, x_n)= \prod_{1 \leq i < j \leq n}(x_i - x_j)$$ Honestly, I'm not really good at Galois theory, so I would be glad to hear any suggestions. REPLY [3 votes]: The key to this exercise is the bit of knowledge that the alternating group $A_n$ is the only subgroup of index two in the full symmetric group $S_n$. For $n\ge5$ this follows from the simplicity of $A_n$, and for $n<5$ it can be checked case-by-case. I leave it to you to fill in the details of those claims. Anyway, if $f$ has an orbit of size two, we can conclude that the stabilizer of $f$ must be equal to $A_n$. Let $K$ be the fixed field of $S_n$, and let $L$ be the fixed field of $A_n$. Assume first the characteristic is $\neq2$. (1) Show that $[L:K]=2$. (2) If $R$ is any polynomial such that $R\in L\setminus K$, then $\{1,R\}$ is a $K$-basis for $L$. (3) Show that $W\in L\setminus K$. (4) Figure out why steps (2) and (3) do it. When the characteristic is equal to two, the above argument does not work, because $W\in K$ in that case. But you can use $F$ in its place in step (3).<|endoftext|> TITLE: Absolute value and max/min function: why $a + b + |a - b|=2\max(a,b)$? QUESTION [13 upvotes]: I am being told that $a + b + |a - b|$ is equal to $2\max(a,b)$. What is the reasoning behind this? REPLY [14 votes]: Intuitively, notice that $\frac{a + b}{2}$ is the midway point between $a$ and $b$, and $\frac{|a - b|}{2}$ is half the distance between the two numbers, so $$\frac{a + b}{2} + \frac{|a - b|}{2}$$ is the midway point between the two points plus half of the distance between them, which brings you to the larger of the two numbers, thus $$\frac{a+b + |a-b|}{2} = \max(a,b).$$
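A purely algebraic check (my addition) complements this midpoint picture by splitting into cases: $$a\ge b:\quad a+b+|a-b| = a+b+(a-b) = 2a = 2\max(a,b),$$ $$a< b:\quad a+b+|a-b| = a+b+(b-a) = 2b = 2\max(a,b).$$ The same trick with a minus sign gives the companion identity $a+b-|a-b|=2\min(a,b)$.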
<|endoftext|> TITLE: How are simple groups the building blocks? QUESTION [16 upvotes]: I know a bit about simple groups. A finite Abelian group is a (direct) product of finite cyclic groups. The simple finite Abelian groups are exactly $\mathbb{Z}_p$ for $p$ a prime. And so, I understand how all finite Abelian groups are made up of finite simple groups. But, from what I understand, all finite groups are in some way made up of (finite) simple groups. My question is: how does that work? What does it (more precisely) mean that a finite group is made up of simple groups? Edit: Thanks to Stefan for directing me to questions that basically already have the answer. I have done a bit more research on this and I think I can narrow my question a bit. I would like to understand how simple groups are the building blocks of all finite groups. That is, I would like to understand how, given a finite group $G$, one can find (or show there exist) simple groups $G_1, \dots, G_n$ such that $G$ is [insert something] of $G_1,\dots G_n$. From here, I understand now that it somehow has to do with composition series and the Jordan-Hölder Theorem. I think I understand the definition of a short exact sequence. From that same question: Then $G$ is built from some uniquely determined (Jordan-Hölder) simple groups $H_i$ by taking extensions of these groups. I still don't get how this group $G$ is determined by the simple groups. I guess I am looking for more details basically putting together how one starts with a finite group $G$, "finds" simple groups $G_1, \dots G_n$ and then says that $G$ is isomorphic to something in terms of the simple groups. REPLY [9 votes]: Let $G=G_0$ be a finite group. Consider the set of proper nontrivial normal subgroups of $G_0$. If this set is empty, then $G_0$ is simple. Otherwise, the set is ordered by inclusion and we may choose a maximal element $G_1$. Note that by the correspondence theorem $G_0/G_1$ must be simple and we obtain a short exact sequence $$1\to G_1\to G_0\to G_0/G_1\to 1.$$ This says that $G_0$ is built out of $G_1$ and $G_0/G_1$ (or, $G_0$ is an extension of $G_0/G_1$ by $G_1$). Now, suppose that $i\geq1$ and we have constructed $$G_{i}< G_{i-1}<\cdots< G_1< G_0$$ with $G_j\lhd G_{j-1}$ and $G_{j-1}/G_j$ simple for $1\leq j\leq i$. Consider the set of proper nontrivial normal subgroups of $G_{i}$. If this set is empty, then $G_i$ is simple. Otherwise, the set is ordered by inclusion and we may choose a maximal element $G_{i+1}$. Again, by the correspondence theorem we have that $G_i/G_{i+1}$ is simple and we have an exact sequence $$1\to G_{i+1}\to G_i\to G_i/G_{i+1}\to 1.$$ As before, this means that $G_i$ is an extension of $G_i/G_{i+1}$ (a simple group) by $G_{i+1}$. Now, since $G$ is finite, this process must terminate and we obtain a sequence $$1=G_{n+1}\lhd G_n\lhd\cdots\lhd G_1\lhd G_0=G$$ in which every quotient $G_j/G_{j+1}$ is simple. Such a chain is a composition series for $G$: the group $G$ is assembled from the simple factors $G_j/G_{j+1}$ by the successive extensions above, and the Jordan-Hölder theorem says these factors are uniquely determined by $G$ up to reordering.
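A concrete instance (my example, not part of the original reply): for $G=S_3$ the procedure picks the maximal normal subgroup $A_3$, giving $$1 \lhd A_3 \lhd S_3, \qquad A_3/1\cong \mathbb{Z}_3, \qquad S_3/A_3\cong \mathbb{Z}_2,$$ so $S_3$ is built from the simple groups $\mathbb{Z}_3$ and $\mathbb{Z}_2$ by two successive extensions. Note that $S_3\not\cong \mathbb{Z}_3\times\mathbb{Z}_2$: the extension data, and not just the list of simple factors, determine the group, which is why "built from" means extensions rather than direct products.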
<|endoftext|> TITLE: Why is the Jacobian matrix equal to the matrix associated to a linear transformation? QUESTION [5 upvotes]: Given the linear transformation $f$, we can construct the matrix $A$ as follows: on the $i$-th column we put the vector $f(\mathbf e_i)$ where $E = (\mathbf e_1, \ldots, \mathbf e_n)$ is a basis of $\mathbb R^n$. Now, I read that for linear transformations, if we use the canonical basis of $\mathbb R^n$, the matrix $A$ is equal to $J f$ (the Jacobian matrix associated to $f$). It's indeed so for the few examples that I tried, but I cannot find a proof of this fact. REPLY [4 votes]: If $f:\mathbb{R}^n\to\mathbb{R}^p$ is linear, then you know that for all $a\in\mathbb{R}^n,$ you have that $Df(a)=f.$ As $Jf(a)$ is the matrix which represents $f$ in the canonical bases $\mathcal{C}_n$ and $\mathcal{C}_p$ of $\mathbb{R}^n$ and $\mathbb{R}^p,$ you will have that $$Jf(a)=[f]_{\mathcal{C}_p\mathcal{C}_n}=A,$$ with your definition of $A.$<|endoftext|> TITLE: A Grothendieck topology on $\Delta$ QUESTION [5 upvotes]: Is there a choice for a Grothendieck topology on $\Delta$ for which most interesting simplicial sets are sheaves (like representables, horns and boundaries, and more generally all categories)? I suspect I can look at the Segal condition as a sheaf condition, but I'm not able to go further, nor do I find information googling something similar to my question (which is, I admit it, somewhat vague). REPLY [4 votes]: No, there is no such topology. In fact, the simplex category $\mathbf{\Delta}$ admits only two Grothendieck topologies, namely the indiscrete topology (which is the canonical topology in this case), and the discrete topology; and sheaves, in either case, are not interesting. The indiscrete topology on $\mathbf{\Delta}$ is given by the maximal sieves, i.e. it is the topology with covering sieves $\{\Delta^n\mid n\in \mathbb{N}\}$. All simplicial sets are indiscrete-sheaves. More generally, all presheaves on any category are indiscrete-sheaves. Let $\tau$ be any topology on $\mathbf{\Delta}$ that is finer than the indiscrete topology; then there exist $[n]\in \mathbf{\Delta}$ and a $\tau$-covering sieve $S\subsetneq \Delta^n$, and hence $\mathrm{id}_{[n]}\notin S_n$. For every $i\in [n]$, one has $$ {(\sigma^i_{n})}^\ast(S)\subset \Lambda^{n+1}_i \bigcap \Lambda^{n+1}_{i+1} $$ since $\sigma^i_{n} \partial_{n+1}^i=\sigma^i_{n} \partial_{n+1}^{i+1}=\mathrm{id}_{[n]}\notin S_n$. Thus, in particular, $\Lambda^{n+1}_i$ is a $\tau$-covering sieve, for every $i\in [n+1]$. Let $\underline{j}:[0]\to [n+1]$ denote the unique morphism in $\mathbf{\Delta}$ whose image is $\{j\}$, for some $j\in[n+1]$. Then, $$ \underline{j}^\ast (\bigcap_{\substack{i\in [n+1]\\i\neq j}} \Lambda^{n+1}_i)=\underline{j}^\ast (<\partial_{n+1}^{j}>)=\emptyset_{[0]}. $$ Thus, the empty sieve $\emptyset_{[0]}$ is a $\tau$-covering sieve. For every $[m]\in \mathbf{\Delta}$, one has $0_m^\ast \emptyset_{[0]}=\emptyset_{[m]}$, for the terminal morphism $0_m:[m]\to [0]$. Therefore, $\tau$ is the discrete topology on $\mathbf{\Delta}$, i.e. all sieves in $\mathbf{\Delta}$ (including the empty sieves) are $\tau$-covering sieves. The discrete-sheaves are precisely the terminal simplicial sets. Hence, the indiscrete topology is the canonical topology on $\mathbf{\Delta}$.<|endoftext|> TITLE: How to solve $\int_0^{\pi/2} \ln{(x^2 + \ln^2{(\cos{x})})} \,\mathrm{d}x$ QUESTION [16 upvotes]: $$\int_0^{\pi/2} \ln{(x^2 + \ln^2{(\cos{x})})} \,\mathrm{d}x$$ I was given this integral yesterday by someone on a forum and after a few hours of having a go at it I didn't really get anywhere significant. My first idea was to use a series of substitutions which would simplify the integral into a form where Taylor expansions could be used to solve it. Another idea of mine was to use complex numbers to get rid of some of the log terms; changing the $\ln^2{(\cos{x})}$ term could possibly make this integral a lot more manageable. I'd prefer if someone could help me solve this using basic methods, although if there is a more complicated but elegant solution (using contour integration for example) it could still be beneficial to someone else. We don't do the equivalent of Calc III in my school so I am not very familiar with methods which go beyond the Calc II syllabus (DUDIS, Laplace etc.).
REPLY [4 votes]: Following the comment left by @tired, we note that we can factor the quadratic argument as $$x^2+\log^2(\cos(x))=\left(\log(\cos(x))+ix\right)\left(\log(\cos(x))-ix\right)$$ Writing $ix=\log(e^{ix})$ we have $$x^2+\log^2(\cos(x))=\left|\log(\cos(x)e^{ix})\right|^2=\left|\log\left(\frac{1+e^{i2x}}{2}\right)\right|^2$$ Next, we write the integral of interest as $$\int_0^{\pi/2}\log(x^2+\log^2(\cos(x)))\,dx=\frac14\int_{-\pi}^{\pi}\log\left(\left|\log\left(\frac{1+e^{ix}}{2}\right)\right|^2\right)\,dx$$ We move to the complex plane by letting $z=e^{ix}$. The integral of interest becomes $$\begin{align} \int_0^{\pi/2}\log(x^2+\log^2(\cos(x)))\,dx&=\frac12\oint_{|z|=1}\log\left(\left|\log\left(\frac{1+z}{2}\right)\right|\right)\frac{1}{iz}\,dz\\\\ &=\frac12\text{Re}\left(\oint_{|z|=1}\log\left(\log\left(\frac{1+z}{2}\right)\right)\frac{1}{iz}\,dz\right)\\\\ &=\frac12 \text{Re}\left(2\pi i \left(\frac{\log(\log(2))+i\pi}{i}\right)\right)\\\\ &=\pi \log(\log(2)) \end{align}$$ where we tacitly deformed the contour around the branch point singularities at $z=1$ and $z=-1$ and noted that the contributions from integrations around the deformations are zero.<|endoftext|> TITLE: Prove that the number 14641 is the fourth power of an integer in any base greater than 6? QUESTION [21 upvotes]: Prove that the number $14641$ is the fourth power of an integer in any base greater than $6$? I understand how to work it out, because I think you do $$14641\ (\text{base }a > 6) = a^4+4a^3+6a^2+4a+1= (a+1)^4$$ But I can't understand why they had to specify that the base is greater than 6? Is that because if it's 4, then 4 will be cancelled out in the equation and if it's 6, it will be too? Please advise. Sorry for asking such a trivial question. REPLY [4 votes]: As a matter of personal preference, I like $b$ to represent the base rather than $a$. So then $(b + 1)^2 = b^2 + 2b + 1$. And $(b^2 + 2b + 1)^2 = b^4 + 4b^3 + 6b^2 + 4b + 1$. These facts are true whether $b$ is an ordinary positive integer greater than 1 or a more "exotic" number, like $\sqrt{-2}$. But whether $(b + 1)^4$ gets represented as 14641 in base $b$, that's a slightly different story. Consider for example $b = 2$. Indeed $3^4 = 2^4 + 4 \times 2^3 + 6 \times 2^2 + 4 \times 2 + 1$. But the problem is that here it turns out that $6 > b^2$ and $4 > b$. Then $4b^3$ requires more than four binary digits to represent, $6b^2$ requires more than three bits and $4b$ requires more than two bits. So the binary representation of 81 is 1010001 rather than 14641. Some of these problems persist through $b = 6$, because although $4b^3 < b^4$, we still have $6b^2 \geq b^3$ (the digit $6$ is not available in base $6$). Then 2401 is 15041 in base 6. The smallest integer $b$ satisfying $4b^3 < b^4$, $6b^2 < b^3$ and $4b < b^2$ is $b = 7$. Indeed 4096 in base 7 is 14641.
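For readers who want to see the cutoff numerically, here is a small illustrative script (my addition, not from the answers) that prints the base-$b$ digits of $(b+1)^4$:

```python
def digits(n, b):
    """Base-b digits of n, most significant digit first."""
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out[::-1] or [0]

for b in range(2, 11):
    print(b, digits((b + 1) ** 4, b))
```

The pattern $[1, 4, 6, 4, 1]$ appears exactly from $b = 7$ onward; for $b = 2$ it prints $[1, 0, 1, 0, 0, 0, 1]$ and for $b = 6$ it prints $[1, 5, 0, 4, 1]$, matching the representations worked out above.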
<|endoftext|> TITLE: What's so special about hyperbolic curves? QUESTION [5 upvotes]: This is really a two-part question, but I would be happy to get an answer for either bit. By a hyperbolic curve as defined by e.g. Szamuely in Galois Groups and Fundamental Groups (p.137) I mean an open subcurve $U$ of an integral proper normal curve $X$ over a field $K$ such that $2g-2+n>0$, where $g$ is the genus of the base change $X_{\bar{K}}$ and $n$ is the number of closed points in $X_{\bar{K}}\backslash U_{\bar{K}}$. Many theorems/conjectures in anabelian geometry involve hyperbolic curves - for example, the section conjecture states that the rational points $U(K)$ on a hyperbolic curve $U$ correspond bijectively to (conjugacy classes of) sections of $\rho$ in the exact sequence of étale fundamental groups $1\rightarrow \pi_1 (U_{\bar{K}}) \rightarrow \pi_1 (U)\xrightarrow{\rho}G_K \rightarrow 1$ where $G_K$ is the absolute Galois group of $K$. Now, amongst all hyperbolic curves I have seen many particular references to the curve $U = \mathbb{P}_{\mathbb{Q}}^1 \backslash \left\{0,1,\infty\right\}$. Indeed, in his Notes on Etale Cohomology (p.30), J.S. Milne says that $\pi_1 (U)$ is arguably the most interesting object in mathematics. I know that in part this is because it ought to give us insight into the absolute Galois group of the rationals, but the particular choice of removed points seems quite mysterious to me. So my questions are: Where does the interest in hyperbolic curves come from? Why is $U = \mathbb{P}_{\mathbb{Q}}^1 \backslash \left\{0,1,\infty\right\}$ of particular interest? REPLY [7 votes]: The main theorem to mention is: Theorem. (Belyi) Let $X$ be a smooth projective curve over $\mathbb C$. Then $X$ is defined over $\bar {\mathbb Q}$ if and only if there exists a map $X \to \mathbb P^1_{\mathbb C}$ ramified over at most three points. Remark. The 'only if' part is due to Belyi in 1979, and is basically an algorithm. The 'if' part was known before that (I believe it was proven by Weil in 1956). See this paper for a modern account of the 'if' part, and some background. Note that three points on $\mathbb P^1$ are in general position, i.e. for any ordered set $(x,y,z)$ of distinct [closed] points, there exists an automorphism of $\mathbb P^1$ mapping $(x,y,z)$ to $(0,1,\infty)$. This is just linear algebra: we can choose coordinates so that $x$, $y$, and $z$ correspond to $[0:1]$, $[1:1]$, and $[1:0]$ respectively. Thus, if $X$ is a curve defined over a number field $K$, then we get a finite étale map $$U \to \mathbb P^1_K\setminus\{0,1,\infty\}$$ of some open $U \subseteq X$. This suggests that the study of $\pi_1^{\operatorname{\acute et}}(\mathbb P^1_{\mathbb Q} \setminus\{0,1,\infty\})$ could be interesting for number theoretic purposes. Indeed, the 'fibration' $$\begin{array}{ccc}\mathbb P^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\} & \to & \mathbb P^1_{\mathbb Q} \setminus\{0,1,\infty\} \\ & & \downarrow \\ & & \operatorname{Spec} \mathbb Q \end{array}$$ induces a short exact sequence $$1 \to \pi_1^{\operatorname{\acute et}}(\mathbb P^1_{\bar{\mathbb Q}}\setminus\{0,1,\infty\}) \to \pi_1^{\operatorname{\acute et}}(\mathbb P^1_{\mathbb Q}\setminus\{0,1,\infty\}) \to \pi_1^{\operatorname{\acute et}}(\operatorname{Spec} \mathbb Q) \to 1.$$ The first group is the free profinite group $\hat F_2$ on two generators, since a change of algebraically closed field does not alter the fundamental group (and over the complexes we get the profinite completion of the topological fundamental group). The last group, of course, is just $\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q)$. Thus, this in turn defines a map $$\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q) \to \operatorname{Aut}(\hat F_2) \twoheadrightarrow \operatorname{Out}(\hat F_2).$$ Corollary (of Belyi's theorem). The map $\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q) \to \operatorname{Out}(\hat F_2)$ is injective. See Szamuely's Galois groups and fundamental groups, Theorem 4.7.7.
This means that we can view $\operatorname{Gal}(\bar{\mathbb Q}/\mathbb Q)$ as a subgroup of an object coming from topology. If we could understand what the image of the above injection is, we would solve all of number theory. This is the motivation for Grothendieck's study of dessins d'enfants. There are also applications to the inverse Galois problem; see for example [loc. cit.], section 4.8. See this post for some further applications and alternative perspectives on Belyi's theorem.<|endoftext|> TITLE: Is the matrix filled with the areas of pairwise intersections of disks in a plane always positive semidefinite? QUESTION [5 upvotes]: Consider disks $s_1, \cdots, s_n$ in the plane and let $a_{ij}$ be the area of $s_i\cap s_j$. Is it true that for any real numbers $x_1,\cdots, x_n$ we have $$ \sum_{i,j=1}^n x_ix_j a_{ij} \geq 0$$ Equivalent formulation: one can put $a_{ij}$ into a matrix $A$ and ask whether it is positive semidefinite. For $n=2$ this is true since $$a_{12}^2\le \min(a_{11},a_{22})^2 \le a_{11}a_{22} $$ REPLY [11 votes]: The question above has an affirmative answer. More generally, we have the following result: Theorem: Suppose $s_1, \dots, s_n$ are measurable sets in the plane $\mathbf{R}^2$ with finite area. Let $a_{i,j}$ be the area of $s_i \cap s_j$. Then $$ \sum_{i,j=1}^n x_i x_j a_{i,j} \ge 0 $$ for all real numbers $x_1, \dots, x_n$. Proof: In the inner product space $L^2(\mathbf{R}^2)$ (with the usual area measure), let $f_i$ be the characteristic function of $s_i$. Thus $$ a_{i,j} = \int_{s_i \cap s_j} 1 = \int_{\mathbf{R}^2} f_i f_j = \langle f_i, f_j \rangle. $$ Now suppose $x_1, \dots, x_n$ are real numbers. Let $f = x_1 f_1+ \dots + x_n f_n$. Then \begin{align*} \sum_{i,j=1}^n x_i x_j a_{i,j} &= \sum_{i,j=1}^n x_i x_j \langle f_i, f_j \rangle\\[6pt] &= \Bigl\langle \sum_{i=1}^n x_i f_i, \sum_{j=1}^n x_j f_j \Bigr\rangle\\[6pt] &=\langle f, f \rangle \\[6pt] &\ge 0, \end{align*} as desired.<|endoftext|> TITLE: What's the arc length of an implicit function? QUESTION [12 upvotes]: While an explicit function $y(x)$'s arc length $s$ is easily obtained as $$s = \int \sqrt{1+|y'(x)|^2}\,dx,$$ is there any formula for implicit functions given by $f(x,y) = 0$? One can use the implicit differentiation $y'(x) = -\frac{\partial_x f}{\partial_y f}$ to obtain $$s = \int\sqrt{1 + |\partial_x f / \partial_y f|^2}\,dx,$$ but that still requires (locally) solving for $y(x)$. Is there any formulation that does not require this, e.g. another implicit equation involving $s$? Thoughts so far: One could rewrite $s$ as $$s = \int \frac{|\nabla f|}{|\partial_y f|}\,dx,$$ or symmetrize to $$s = \int |\nabla f|\, \underbrace{\left(\frac{dx}{|\partial_y f|} + \frac{dy}{|\partial_x f|}\right)}_{(*)}/2,$$ where $(*)$ might be strongly related to $|df|$ I guess (though it's not identical due to the $|\cdot|$), but then? REPLY [7 votes]: Consider the divergence theorem on the two-dimensional region $\mathcal R = \{(x,y):f(x,y)\le 0\}$ bounded by the curve $\mathcal C = \partial\mathcal R = \{(x,y):f(x,y)=0\}$, $$\iint_{\mathcal R} \nabla\cdot\mathbf v\,\mathrm dA = \oint_{\mathcal C}\mathbf v\cdot\hat{\mathbf n}\,\mathrm d\ell.$$ If we take $\mathbf v=\hat{\mathbf n}=(\nabla f)/\|\nabla f\|$, we have $\mathbf v\cdot\hat{\mathbf n} = 1$, so $$\iint_{\mathcal R} \nabla\cdot\left(\frac{\nabla f}{\|\nabla f\|}\right)\,\mathrm dA = \oint_{\mathcal C}\mathrm d\ell,$$ which is the arc length of the curve. I don't know if this formula is useful at all, but it does satisfy your requirements.
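As a quick sanity check of this identity (my worked example, not part of the reply), take the circle $f(x,y)=x^2+y^2-r_0^2$, so that $\mathcal C$ is the circle of radius $r_0$: $$\frac{\nabla f}{\|\nabla f\|}=\frac{(x,y)}{\sqrt{x^2+y^2}},\qquad \nabla\cdot\left(\frac{(x,y)}{\sqrt{x^2+y^2}}\right)=\frac{1}{\sqrt{x^2+y^2}},$$ and the area integral in polar coordinates gives $$\iint_{x^2+y^2\le r_0^2}\frac{\mathrm dA}{\sqrt{x^2+y^2}}=\int_0^{2\pi}\!\!\int_0^{r_0}\frac{1}{\rho}\,\rho\,\mathrm d\rho\,\mathrm d\theta=2\pi r_0,$$ the circumference, as expected.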
<|endoftext|> TITLE: Integral with trigonometric function QUESTION [5 upvotes]: I have a problem with this integral $$\int \frac{\sin 2x }{ \sqrt{4-\cos^2 x}} \, dx$$ We can transform it to $$\int \frac{2\sin x \cos x }{ \sqrt{4-\cos^2 x}} \, dx$$ Using the substitution $u^2 = 4 - \cos^2 x $ we get $$\int \frac{2u }{ u } \, du$$ And it gives a bad result. Can you point out where I made a mistake? REPLY [2 votes]: The result shown by WolframAlpha is $$ \int\frac{\sin 2x}{\sqrt{4-\cos^2 x}} \, dx = \sqrt{2}(\sqrt{7-\cos2x}-\sqrt{7})+\text{constant} $$ that can be rewritten, discarding additive constants, $$ \sqrt{14-2\cos2x}=\sqrt{14-4\cos^2x+2}= \sqrt{4(4-\cos^2x)}=2\sqrt{4-\cos^2x} $$ This means you made no mistake at all.<|endoftext|> TITLE: Calculating the length of the paper on a toilet paper roll QUESTION [347 upvotes]: Fun with Math time. My mom gave me a roll of toilet paper to put in the bathroom, and looking at it I immediately wondered about this: is it possible, through very simple math, to calculate (with small error) the total paper length of a toilet roll? Writing down some math, I came to this study, which I share with you because there are some questions I have in mind, and because as someone rightly said: for every problem there always are at least 3 solutions. I started by outlining the problem in a geometrical way, namely looking only at the essential: the roll from above, identifying the salient parameters: Parameters $r = $ radius of the internal circle, namely the paper tube circle; $R = $ radius of the whole paper roll; $b = R - r = $ "partial" radius, namely the difference of the two radii as stated. First Point I treated the whole problem in the discrete way. [See the end of this question for more details about what this means.] Calculation In the discrete way, the problem asks for the total length of the rolled paper, so the easiest way is to treat the problem by thinking about the length as the sum of the whole circumferences, starting with radius $r$ and ending with radius $R$. But how many circumferences are there? Here is one of the main points, and there I thought about introducing a new essential parameter, namely the thickness of a single sheet. Notice that it's important to deal with measurable quantities. Calling $h$ the thickness of a single sheet, and knowing $b$, we can give an estimate of how many sheets $N$ are rolled: $$N = \frac{R - r}{h} = \frac{b}{h}$$ Having to compute a sum, the total length $L$ is then: $$L = 2\pi r + 2\pi (r + h) + 2\pi (r + 2h) + \cdots + 2\pi R$$ or better: $$L = 2\pi (r + 0h) + 2\pi (r + h) + 2\pi (r + 2h) + \cdots + 2\pi (r + Nh)$$ In which obviously $2\pi (r + 0h) = 2\pi r$ and $2\pi(r + Nh) = 2\pi R$. Writing it as a sum (and calculating it) we get: $$ \begin{align} L = \sum_{k = 0}^N\ 2\pi(r + kh) & = 2\pi r + 2\pi R + \sum_{k = 1}^{N-1}\ 2\pi(r + kh) \\\\ & = 2\pi r + 2\pi R + 2\pi \sum_{k = 1}^{N-1} r + 2\pi h \sum_{k = 1}^{N-1} k \\\\ & = 2\pi r + 2\pi R + 2\pi r(N-1) + 2\pi h\left(\frac{1}{2}N(N-1)\right) \\\\ & = 2\pi r N + 2\pi R + \pi hN^2 - \pi h N \end{align} $$ Using now $N = \frac{b}{h}$ and $r = R - b$ (because $R$ and $b$ are easily measurable), we arrive after a little algebra at $$\boxed{L = \pi\left(2R - b\right)\left(1 + \frac{b}{h}\right)}$$ Small Example: $h = 0.1$ mm; $R = 75$ mm; $b = 50$ mm, whence $L \approx 157$ meters, which might fit. Final Questions: 1) Could it be a good approximation?
2) What about the $\gamma$ factor? Namely the paper compression factor? 3) Could there exist a similar calculation via integration over a spiral path? Because actually that's what it is: a spiral. Thank you so much for the time spent on this maybe tedious maybe boring maybe funny question! REPLY [4 votes]: If there are $N$ layers, the thickness is $h=(R-r)/N$. For cylindrical layers, the lengths increase linearly with layer number, so we can take the average circumference $2\pi\bar r$, where $\bar r= (r+R)/2$, times $N$: $$ L = 2\pi \bar r N $$ If we have an Archimedean spiral, the radius increases linearly with azimuthal angle. Thus, we can take the average radius $\bar r$ multiplied by the total angle $2\pi N$, and again: $$ L = 2\pi N \bar r . $$
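To see how well these estimates hang together, here is a short illustrative script (my addition) comparing the layer-by-layer sum from the question with its closed form, using the question's numbers:

```python
import math

h, R, b = 0.1, 75.0, 50.0            # sheet thickness, outer radius, b = R - r (mm)
r, N = R - b, round(b / h)           # inner radius, number of layers

layer_sum = sum(2 * math.pi * (r + k * h) for k in range(N + 1))
closed_form = math.pi * (2 * R - b) * (1 + b / h)
print(layer_sum / 1000, closed_form / 1000)   # both print ~157.39 (meters)
```

The two values agree exactly, since the closed form is just the arithmetic series summed out; the reply's approximation $2\pi \bar r N$ differs only by the one extra circumference counted when summing from $k=0$ to $N$.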
<|endoftext|> TITLE: What is the order when doing $x^{y^z}$ and why? QUESTION [48 upvotes]: Does $x^{y^z}$ equal $x^{(y^z)}$? If so, why? Why not simply apply the order of the operation from left to right? Meaning $x^{y^z}$ equals $(x^y)^z$? I always get confused with this and I don't understand the underlying rule. Any help would be appreciated! REPLY [2 votes]: The exponent is evaluated first if it is an expression. Examples are $3^{x+1}=3^{\left(x+1\right)}$ and $e^{5x^3+8x^2+5x+10}$ (the exponent is a cubic polynomial) and $10^{0+0+0+10^{15}+0+0+0}=10^{10^{15}}$. In other words, the tower is read right-associatively; evaluating left to right simply fails to match the convention as soon as the exponent itself contains multiple terms.<|endoftext|> TITLE: Formula for the simple sequence 1, 2, 2, 3, 3, 4, 4, 5, 5, ... QUESTION [9 upvotes]: Given $n\in\mathbb{N}$, I need to get just enough more than half of it. For example (you can think of this as: number of games $\rightarrow$ minimum turns to win) $$ 1 \rightarrow 1 $$ $$ 2 \rightarrow 2 $$ $$ 3 \rightarrow 2 $$ $$ 4 \rightarrow 3 $$ $$ 5 \rightarrow 3 $$ $$ 6 \rightarrow 4 $$ $$ 7 \rightarrow 4 $$ $$ \vdots $$ $$ 2i \rightarrow i+1 $$ $$ 2i+1 \rightarrow i+1 $$ $$ \vdots $$ Is it possible to create a simple formula without splitting it piecewise into odd and even? Sorry for my bad English. REPLY [64 votes]: How about: $$ \frac{3+2n+(-1)^n}{4} $$ or (continuous function of $n \in \mathbb R$ or even $\mathbb C$): $$ \frac{3+2n+\cos(\pi n)}{4} $$<|endoftext|> TITLE: Why is the Axiom of Infinity necessary? QUESTION [8 upvotes]: I am having trouble seeing why the Axiom of Infinity is necessary to construct an infinite set. According to a professor of mine teaching a class on "infinity," the Peano axioms are only adequate to establish the existence of all of the natural numbers, but not also that there is an infinite set consisting of them. To do so, we must stipulate not only the Axiom of Induction, but that there also exists an inductive set (via the Axiom of Infinity). So, why does the existence of an infinite set of the natural numbers not just follow from the existence of all of the natural numbers? REPLY [15 votes]: BrianO's answer is spot-on, but it seems to me you may not be too familiar with models and consistency proofs, so I'll try to provide a more complete explanation. If anything it may better steer you towards what you need to study, as admittedly I'm about to gloss over a lot of material. Why do we need the axiom of infinity? Because we know (and can prove) that the other axioms of ZFC cannot prove that any infinite set exists. The way this is done is roughly by the following steps: (1) Remember that a set of axioms $\Sigma$ is inconsistent if for some sentence $A$ the axioms lead to a proof of $A \land \neg A$. (2) This can be written as: if $\Sigma \vdash A \land \neg A$, then $\neg \operatorname{Con}(\Sigma)$. (3) If $Inf$ is the statement "an infinite set exists", then $\neg Inf$ is the statement "no infinite sets exist". The axiom of infinity is essentially the assumption that $Inf$ is true and hence $\neg Inf$ is false. (4) If we don't need the axiom of infinity, then with the other axioms $ZFC^* = ZFC - Inf$ we should be able to prove $Inf$ as a theorem; in other words, we posit that $ZFC^* \vdash Inf$. (5) We assume that $ZFC$, and hence the subset $ZFC^*$, are consistent. (6) We then add $\neg Inf$ as an axiom to $ZFC^*$, which we'll call $ZFC^+$. (7) By showing that $(ZFC - Inf) + \neg Inf$ has a model (a set in which all the axioms are true when quantifiers range only over the elements of the set), we can prove the relative consistency $Con(ZFC) \to Con(ZFC^+)$. In other words, we're basically just proving $ZFC^+$ is consistent, but we need to be explicit that this proof assumes $ZFC$ is consistent. (8) The model we want is $HF$, the set of all hereditarily finite sets. I'll leave it to you to verify that all the axioms of $ZFC^+$ hold in this set. But the important point is $HF \models ZFC^+$, and our relative consistency is proven. (This follows from Gödel's completeness theorem.) (9) We are assuming that $ZFC^* \vdash Inf$, but because $ZFC^+$ is an extension of $ZFC^*$ it must also be the case that $ZFC^+ \vdash Inf$. But then we have $ZFC^+ \vdash Inf \land \neg Inf$, so $ZFC^+$ is inconsistent, a contradiction. (10) Thus we must conclude that our hypothesis $ZFC^* \vdash Inf$ is false and there is no proof of $Inf$ from the other axioms of ZFC. $Inf$ must be taken as an axiom to be able to prove that any infinite set exists.<|endoftext|> TITLE: Prove that $\lim_n \int_{\Bbb R} \frac{\sin(n^2 x^5)}{n^2 x^4} \chi_{(0,n]} d\lambda(x) = 0$ QUESTION [6 upvotes]: Prove that: $$\lim_n \int_{\Bbb R} \frac{\sin(n^2 x^5)}{n^2 x^4} \chi_{(0,n]} d\lambda(x) = 0$$ I am self-learning this stuff, and I would like to check whether I did things right. Here's my work: Call $f_n(x)$ the integrand. We have: $$\left| f_n(x) \right| = \left| \frac{\sin(n^2 x^5)}{n^2 x^4}\chi_{(0,1]} + \frac{\sin(n^2 x^5)}{n^2 x^4}\chi_{(1,n]}\right| \le \frac{n^2 x^5}{n^2x^4} \chi_{(0,1]} + \frac{1}{n^2 x^4} \chi_{(1,n]} \\ \le x \chi_{(0,1]} + \frac1{x^4} \chi_{(1,n]} \le x \chi_{[0,1]} + \frac{1}{x^4}\chi_{[1,\infty)} := g(x)$$ for all $n \ge 1$ and $x \in \Bbb R$. Note that $(f_n)$ is a sequence of measurable functions, and it converges pointwise to $0$. The function $x \mapsto x$ is Riemann-integrable on $[0,1]$, hence it is Lebesgue-integrable and the integrals coincide. Also, $x \mapsto 1/x^4$ is Riemann-integrable on every compact $[1,a]$, with $a > 1$, and its $\int_1^{\infty}$ is absolutely convergent, hence it is Lebesgue integrable and the integrals coincide. Then, $$\int_{\Bbb R} g d\lambda = \int_0^1 x dx + \int_1^{\infty} \frac{dx}{x^4} < \infty$$ Hence $g \in L^1$. Therefore, by LDCT, $$\lim_n \int_{\Bbb R} f_n d\lambda = \int_{\Bbb R} \lim_n f_n d\lambda = 0$$ REPLY [2 votes]: Your solution with the dominated convergence theorem is good. Alternatively, using the fact that $|\sin(t)|\leqslant \min\{1,t\}$ for each non-negative $t$, we have for each $\delta>0$: $$\left|\int_{\Bbb R} \frac{\sin\left(n^2 x^5\right)}{n^2 x^4} \chi_{(0,n]} \mathrm d\lambda(x)\right|\leqslant \int_{[0,\delta]}x \mathrm d\lambda(x)+\frac 1{n^2}\int_{[\delta,+\infty)}\frac 1{x^4}\mathrm d\lambda(x)=\frac{\delta^2}2+\frac{1}{3n^2\delta^3}.$$ This is enough to conclude the wanted result.
By optimizing in $\delta$ (taking $\delta$ of order $n^{-2/5}$), we can see that the convergence is (at least) of order $n^{-4/5}$.<|endoftext|> TITLE: open equivalence relation and closed graph of it QUESTION [5 upvotes]: I need to prove that if $\sim$ is an open equivalence relation on a topological space $S$ and $R = \{(x,y)\in S\times S : x\sim y\}$ is a closed subset of $S\times S$, then $\Delta = \{(x,x)\in S\times S\}$ is a closed subset of $S\times S$. I tried to apply ideas from the theory and from exercises with similar requests, like the fact that the quotient map is an open map, but failed to solve it. REPLY [3 votes]: The result is false. Let $S=\{0,1,2\}$ with the topology $\tau=\big\{\varnothing,\{0,1\},\{2\},S\big\}$, and let $\sim$ be the equivalence relation on $S$ whose equivalence classes are $\{0,1\}$ and $\{2\}$; clearly $$R=(\{0,1\}\times\{0,1\})\cup\{\langle 2,2\rangle\}\;.$$ $R$ is a closed subset of $S\times S$, and the quotient map $q:S\to S/\!\!\sim$ is open, but $S$ is not Hausdorff, so $\Delta$ is not closed in $S\times S$. (Specifically, neither $\langle 0,1\rangle$ nor $\langle 1,0\rangle$ has an open nbhd in $S\times S$ that is disjoint from $\Delta$.) Added: Apparently you wanted to assume the result that a space is Hausdorff if and only if its diagonal is closed and prove the following result: Suppose $\sim$ is an open equivalence relation on a topological space $S$. Then the quotient space $S/\!\!\sim$ is Hausdorff if and only if the graph $R$ of $\sim$ is closed in $S\times S$. Let $q:S\to S/\!\!\sim$ be the quotient map. Suppose that $q(x)$ and $q(y)$ are distinct points of $S/\!\!\sim$. Then $x\not\sim y$, so $\langle x,y\rangle\in(S\times S)\setminus R$. $R$ is closed in $S\times S$, so there are open $U,V\subseteq S$ such that $\langle x,y\rangle\in U\times V\subseteq(S\times S)\setminus R$. Clearly $U$ and $V$ are disjoint open nbhds of $x$ and $y$, respectively, in $S$. The map $q$ is open, so $q[U]$ and $q[V]$ are disjoint open nbhds of $q(x)$ and $q(y)$, respectively, in $S/\!\!\sim$. Thus, $S/\!\!\sim$ is Hausdorff if $R$ is closed. For the other direction assume that $S/\!\!\sim$ is Hausdorff; you need to show that $R$ is closed. Let $\Delta=\{\langle q(x),q(x)\rangle:x\in S\}$, the diagonal in $(S/\!\!\sim)\times(S/\!\!\sim)$. By hypothesis this is closed, and $q$ is continuous, so $q^{-1}[\Delta]$ is closed in $S\times S$. But $q^{-1}[\Delta]$ is ... ?<|endoftext|> TITLE: Davenport's Q-method (Finding an orientation matching a set of point samples) QUESTION [7 upvotes]: I have an initial set of 3D positions that form a shape. After letting them move independently, my goal is to find the best rotation of the original configuration to try to match the current state. This is for a soft body physics simulation, the idea being that if I can construct an optimal 'rigid' frame for the deformed shape then I can apply a shape matching constraint that removes deformation without introducing energy. Existing solutions tend to find the optimal linear transformation representing the deformation, and then use various methods to decompose the matrix into rotation and scale/shear components. However, I found the orientations provided by such methods tended to not be very stable. After significant searching I discovered that my problem was identical to a problem solved by NASA to determine satellite orientations. When I implemented their solution my simulation was remarkably stable. I want to gain a better understanding of why it works. Details of Davenport's Q-method are here.
Somehow, after taking a bunch of outer, cross and dot products of the original and deformed samples, jamming them into a symmetric 4x4 matrix, and then computing the eigenbasis for that matrix, the eigenvector corresponding to the largest eigenvalue can be reinterpreted as a quaternion that is the best orientation to use. The author of the linked paper claims this result is easy to prove, but I guess easy is relative. Can anyone walk me through why this works? REPLY [9 votes]: Since nobody has answered this yet and it's been more than a year I'll take a stab. I'll apologize for my engineery answer from the start. Problem Description The Davenport Q-Method Solution is a solution to what is referred to as Wahba's Problem, which was proposed by Grace Wahba in 1965 (Wahba Paper). Wahba's problem is to find the rotation matrix that minimizes the cost function $$\min_{\mathbf{T}} J(\mathbf{T})=\frac{1}{2}\sum_{i}{w_i}\left\|\mathbf{b}_i-\mathbf{T}\mathbf{a}_i\right\|^2$$ where $\mathbf{a}_i$ are a set of unit vectors expressed in frame $A$, $\mathbf{b}_i$ are the same set of unit vectors expressed in frame $B$, $\mathbf{T}$ is the rotation matrix to transform from frame $A$ to $B$, and $w_i$ is some weight corresponding to each vector pair (usually set to be the inverse of the variance of the measurement that your vectors are generated from). Note that the $1/2$ comes from the maximum likelihood estimate (MLE) formulation of Wahba's problem. Since it is just a constant multiplier, it will have no effect on the minimization problem so we can ignore it in future steps. This is a constrained minimization problem, with the constraint that $$\mathbf{T}^{-1}=\mathbf{T}^T,\qquad\det\mathbf{T}=1$$ The Q-Method Solution: Linear Algebra Transformations In 1968, Davenport came up with a solution to Wahba's problem using attitude quaternions (Davenport Paper). To get to Davenport's solution we need to manipulate the cost function. First, express the vector norm as an inner product $$ \min_T J(\mathbf{T}) = \sum_i{w_i(\mathbf{b}_i-\mathbf{T}\mathbf{a}_i)^T(\mathbf{b}_i-\mathbf{T}\mathbf{a}_i)}$$ Distributing the multiplication and recalling (a) that $\mathbf{a}_i$ and $\mathbf{b}_i$ are unit vectors (and thus their inner product with themselves is 1), (b) the constraint that $\mathbf{T}^T\mathbf{T}=\mathbf{I}$, and that an inner product is a scalar and thus symmetric (that is $\mathbf{a}_i^T\mathbf{b}_i=\mathbf{b}_i^T\mathbf{a}_i$) the minimization problem can be written as $$\min_{\mathbf{T}}J(\mathbf{T})=\sum_i{2w_i(1-\mathbf{b}_i^T\mathbf{T}\mathbf{a}_i})$$ Dropping the constant multiplier 2, and recognizing that $\sum{w_i}$ will have no effect on the minimization problem we can further write this as* $$\min_{\mathbf{T}}J(\mathbf{T})=-\sum_i{w_i\mathbf{b}_i^T\mathbf{T}\mathbf{a}_i}$$ Now, making use of the fact that the trace operator is a linear operator, and that the trace of a scalar is the scalar, we can write $$\min_{\mathbf{T}}J(\mathbf{T})=-\text{Tr}\left[\sum_i{w_i\mathbf{b}_i^T\mathbf{T}\mathbf{a}_i}\right]$$ which, using the cyclic property of the trace, can be written as $$\min_{\mathbf{T}}J(\mathbf{T})=-\text{Tr}\left[\mathbf{T}\sum_i{w_i\mathbf{a}_i\mathbf{b}_i^T}\right]=-\text{Tr}\left[\mathbf{T}\mathbf{B}^T\right]$$ where $\mathbf{B}=\sum_i{w_i\mathbf{b}_i\mathbf{a}_i^T}$ is known as the attitude profile matrix.
We can now use the equation to convert an attitude quaternion to a rotation matrix $$\mathbf{T}=(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\mathbf{I}+2\mathbf{q}_v\mathbf{q}_v^T-2q_s\left[\mathbf{q}_v\times\right],$$ where $$\left[\mathbf{a}\times\right]=\left[\begin{array}{rrr}0 & -\mathbf{a}(3) & \mathbf{a}(2) \\ \mathbf{a}(3) & 0 & -\mathbf{a}(1) \\ -\mathbf{a}(2) & \mathbf{a}(1) & 0\end{array}\right]$$ is the skew-symmetric cross product matrix, $\mathbf{q}_v$ is the vector portion of the attitude quaternion, and $q_s$ is the scalar portion of the attitude quaternion, to substitute in for $\mathbf{T}$ $$\min_{\mathbf{q}}J(\mathbf{q})=-\text{Tr}\left[\left((q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\mathbf{I}+2\mathbf{q}_v\mathbf{q}_v^T-2q_s\left[\mathbf{q}_v\times\right]\right)\mathbf{B}^T\right]$$ Distributing $\mathbf{B}^T$ and the trace operator leaves us with $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}^T\right]-2\text{Tr}\left[\mathbf{q}_v\mathbf{q}_v^T\mathbf{B}^T\right]+2q_s\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right].$$ At this point it again becomes necessary to make use of trace properties (the cyclic property, the scalar property $\text{Tr}\left[a\right]=a$, and the transpose property $\text{Tr}\left[\mathbf{A}^T\right]=\text{Tr}\left[\mathbf{A}\right]$). Applying these properties we can simplify to $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}\right]-2\mathbf{q}_v^T\mathbf{B}\mathbf{q}_v+2q_s\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right].$$ Further, recognizing that $2\mathbf{a}^T\mathbf{A}\mathbf{a}=\mathbf{a}^T(\mathbf{A}+\mathbf{A}^T)\mathbf{a}$ we can reduce this to $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}\right]-\mathbf{q}_v^T(\mathbf{B}+\mathbf{B}^T)\mathbf{q}_v+2q_s\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right].$$ Examine the term $\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right]$. 
Applying the operators it can be seen that $$\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right]=\mathbf{q}_v(1)(\mathbf{B}(3, 2)-\mathbf{B}(2, 3))+\mathbf{q}_v(2)(\mathbf{B}(1, 3)-\mathbf{B}(3, 1))+\mathbf{q}_v(3)(\mathbf{B}(2, 1)-\mathbf{B}(1, 2))$$ Defining $$\mathbf{z}=\left[\begin{array}{ccc} \mathbf{B}(2, 3)-\mathbf{B}(3, 2) \\ \mathbf{B}(3, 1) - \mathbf{B}(1, 3) \\ \mathbf{B}(1, 2)-\mathbf{B}(2, 1)\end{array}\right]$$ (which implies that $\left[\mathbf{z}\times\right]=\mathbf{B}^T-\mathbf{B}$), then we can write $$\text{Tr}\left[\left[\mathbf{q}_v\times\right]\mathbf{B}^T\right]=-\mathbf{z}^T\mathbf{q}_v$$ and our minimization problem becomes $$\min_{\mathbf{q}}J(\mathbf{q})=-(q_s^2-\mathbf{q}_v^T\mathbf{q}_v)\text{Tr}\left[\mathbf{B}\right]-\mathbf{q}_v^T(\mathbf{B}+\mathbf{B}^T)\mathbf{q}_v-2q_s\mathbf{z}^T\mathbf{q}_v.$$ Defining $\mathbf{S}=\mathbf{B}+\mathbf{B}^T$ and $\mu=\text{Tr}\left[\mathbf{B}\right]$ and simplifying gives $$\min_{\mathbf{q}}J(\mathbf{q})=-\left(\mathbf{q}_v^T(\mathbf{S}-\mu\mathbf{I})\mathbf{q}_v+q_s\mathbf{z}^T\mathbf{q}_v+q_s\mathbf{q}_v^T\mathbf{z}+q_s^2\mu\right).$$ This can equivalently be written as an inner product $$\min_{\mathbf{q}}J(\mathbf{q})=-\left[\begin{array}{cc} \mathbf{q}_v^T(\mathbf{S}-\mu\mathbf{I})+q_s\mathbf{z}^T & \mathbf{q}_v^T\mathbf{z}+q_s\mu\end{array}\right]\left[\begin{array}{c} \mathbf{q}_v \\ q_s \end{array}\right].$$ Finally, we can write this as $$\min_{\mathbf{q}}J(\mathbf{q})=-\left[\begin{array}{cc} \mathbf{q}_v^T & q_s \end{array}\right]\left[\begin{array}{cc} \mathbf{S}-\mu\mathbf{I} & \mathbf{z} \\ \mathbf{z}^T & \mu\end{array}\right]\left[\begin{array}{c} \mathbf{q}_v \\ q_s \end{array}\right]=-\mathbf{q}^T\mathbf{K}\mathbf{q}$$ where $\mathbf{q}$ is the attitude quaternion (vector first), and $\mathbf{K}$ is the Davenport matrix. Optimization Problem Having finally sufficiently simplified the cost function, we can now perform a constrained minimization using a Lagrange multiplier to enforce the constraint that $\mathbf{q}^T\mathbf{q}=1$ $$\min_{\mathbf{q}\text{, }\lambda} J(\mathbf{q}\text{, }\lambda) =-\mathbf{q}^T\mathbf{K}\mathbf{q}+\lambda(\mathbf{q}^T\mathbf{q}-1)$$ Applying the first differential condition to this results in $$\mathbf{K}\mathbf{q}=\lambda\mathbf{q}$$ which is a 4x4 eigenvalue/eigenvector problem. The attitude quaternion that minimizes the cost function is the unit eigenvector corresponding to the largest (most positive) eigenvalue. To understand this remember that the function we are minimizing is $$\min_{\mathbf{q}\text{, }\lambda} J(\mathbf{q}\text{, }\lambda) =-\mathbf{q}^T\mathbf{K}\mathbf{q}+\lambda(\mathbf{q}^T\mathbf{q}-1)$$ which, when $\mathbf{q}$ is a unit eigenvector of $\mathbf{K}$, simplifies to $$-\mathbf{q}^T\mathbf{K}\mathbf{q}+\lambda(\mathbf{q}^T\mathbf{q}-1)=-\lambda$$ since $\mathbf{q}^T\mathbf{K}\mathbf{q}=\lambda$ when $\mathbf{q}$ is an eigenvector of $\mathbf{K}$. So overall, yes the derivation is simple in that it only requires relatively basic linear algebra/optimization but it is complex in that it requires a good deal of creativity in order to get everything into the proper form. *Note here that when we attempt to minimize the sum of the square of the 2-norm of the difference between $\mathbf{b}_i$ and $\mathbf{T}\mathbf{a}_i$, we end up maximizing the sum of the inner products between the unit vectors, and each inner product equals the cosine of the angle between the corresponding vectors. Because cosine is monotonically decreasing on $[0,\pi]$, maximizing the cosine of the angle between two vectors is equivalent to minimizing the angle between them, so we are actually minimizing the angles between the paired unit vectors. Intuitively it makes sense that we want to find the rotation matrix that minimizes the angle between the aligned unit vectors.
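Since the end result is an ordinary symmetric eigenvalue problem, the whole method fits in a few lines of code. Here is a minimal NumPy sketch (my illustration, following this answer's conventions: $\mathbf{B}=\sum_i w_i\mathbf{b}_i\mathbf{a}_i^T$, $\mathbf{z}$ as defined above, and vector-first quaternions):

```python
import numpy as np

def davenport_q(a, b, w):
    """Davenport's Q-method for Wahba's problem.

    a, b : (N, 3) arrays of unit vectors in frames A and B
    w    : (N,) positive weights
    Returns the optimal attitude quaternion [q_v, q_s] (vector part first).
    """
    B = np.einsum('i,ij,ik->jk', w, b, a)   # attitude profile matrix: sum_i w_i b_i a_i^T
    S = B + B.T
    mu = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.empty((4, 4))                    # Davenport matrix
    K[:3, :3] = S - mu * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = mu
    eigvals, eigvecs = np.linalg.eigh(K)    # K is symmetric, so eigh is appropriate
    return eigvecs[:, np.argmax(eigvals)]   # eigenvector of the largest eigenvalue
```

The returned unit eigenvector maximizes $\mathbf{q}^T\mathbf{K}\mathbf{q}$, i.e. minimizes the cost above, and the rotation matrix can be recovered from it with the quaternion-to-matrix formula quoted earlier.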
<|endoftext|> TITLE: How to prove this series about Fibonacci number: $\sum_{n=1}^{\infty }\frac{F_{n}}{2^{n}}=2$? QUESTION [13 upvotes]: How to prove this series result: $$\sum_{n=1}^{\infty }\frac{F_{n}}{2^{n}}=2$$ where $F_{1}=1,~F_{2}=1,~F_n=F_{n-1}+F_{n-2},~~n\geq 3$. I have no idea where to start. REPLY [17 votes]: Hint. You may use the fact that, if $$f(x)=\sum_{n=0}^\infty F_nx^n \tag1$$ with $F_n$ the $n$th Fibonacci number, $F_{n+2}=F_{n}+F_{n+1}$, then $$f(x)={x\over 1-x-x^2}.\tag2$$ A proof of $(2)$ may be found here. Apply it to $x:=\dfrac12$: this gives $$\sum_{n=1}^{\infty}\frac{F_n}{2^n}=f\left(\tfrac12\right)=\frac{1/2}{1-\frac12-\frac14}=2.$$<|endoftext|> TITLE: cardinality of a basis for a topology QUESTION [8 upvotes]: Suppose $X$ is a space of cardinality $\le \kappa$. I would like to claim that any topology on $X$ has a basis of cardinality $\le \kappa$. Intuitively it's true since even the discrete topology has such a basis, but I can't prove it or find a counterexample... Thanks! REPLY [6 votes]: It does not appear to be true. Let me recommend a wonderful website $\pi$-Base where you can search for topologies having certain properties. In particular I searched for a countable space which is not second countable (no countable base) here. It gives various examples.<|endoftext|> TITLE: Some questions about S.Roman, "Advanced Linear Algebra" QUESTION [6 upvotes]: Question for those who have studied Roman's book "Advanced Linear Algebra". How self-contained is this book? Can I study determinants directly from it in the context of exterior algebra and tensor products? How much can one understand without a previous course in linear algebra? I want to study linear algebra but I want to do it properly, with a focus on abstract algebra. That is, I want the book to talk about modules, tensor products, exterior algebras. I tried Blyth's "Module theory - an approach to linear algebra" and Winitzki's "Linear Algebra via Exterior Products", but it didn't work out very well. Not because the material was too hard, but because I simply don't like the style. It's not fully rigorous. Now I hope I can learn something from Roman's book. REPLY [3 votes]: Well, I think the book is quite self-contained. Probably you won't need to take a pre-course when studying the book, since it covers the topics quite rigorously and explicitly. For example, chapter $0$ covers some basic cardinal arithmetic, and the author immediately applies it in the proof of some dimension equations in chapter $1$. The comments in the book also provide insight into the definitions, which I appreciate; see, for example, the treatment of tensors.<|endoftext|> TITLE: summation of a binomial expression that doesn't start from 0 QUESTION [6 upvotes]: I have the following expression: $$ \sum_{k=9}^{17}\binom{17}{k} $$ and I need to show that it's equal to: $$ 2^{16} $$ Now I know that if $k$ started from zero and not from 9, like this: $$ \sum_{k=0}^{17}\binom{17}{k} $$ then there is this identity that says it's equal to: $$ 2^{17} $$ But because the summation starts from 9, I don't know what to do. Can you help, please?
Thank you. REPLY [10 votes]: For integers $n, k$ with $0 \le k \le n$, the binomial coefficients satisfy the "symmetry" $$ \binom{n}{k} = \binom{n}{n-k} $$ It follows that $$ \sum_{k=0}^{8}\binom{17}{k} = \sum_{k=9}^{17}\binom{17}{17-k} = \sum_{k=9}^{17}\binom{17}{k} $$ and therefore $$ \sum_{k=9}^{17}\binom{17}{k} = \frac 12 \sum_{k=0}^{17}\binom{17}{k} = \frac 12 \cdot (1+1)^{17} = 2^{16} \, . $$ But note that this approach works only in this symmetric case where we sum the first or second half of the binomial coefficients in a row of Pascal's triangle for odd $n$ (or with a small modification for even $n$). According to Wikipedia, there is no closed formula for the general case $\sum_{k=j}^n \binom nk$ unless one resorts to the Hypergeometric function.
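A two-line numerical confirmation (illustrative):

```python
from math import comb

print(sum(comb(17, k) for k in range(9, 18)))   # 65536
print(2 ** 16)                                  # 65536
```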
Also then $v \not\in A_\alpha$ for any $\alpha \neq \alpha_1, \ldots, \alpha_n$. Is my reasoning so far correct? How to show from here that $v \not\in A$? P.S.: My Attempt at Part (c) Contd.: Now if this $v$ were to lie in $A$, then $v$ would have to lie in some set $\left( f | A_\alpha \right)^{-1}(B)$ for some $\alpha$. But as each set $\left( f | A_\alpha \right)^{-1}(B)$ is contained in $A_\alpha$ and as $v$ cannot be in any set $A_\alpha$ for any $\alpha$ different from one of the $\alpha_i$, so we can conclude that if $v$ were to lie in $A$, then $v$ would have to be in one of the sets $\left( f | A_{\alpha_i} \right)^{-1}(B) = S_i$ for some $i = 1, \ldots, n$. But as $v$ is in $U_i$ and $U_i \cap S_i = \emptyset$ for each $i$, so $v$ cannot be in $S_i$ for any $i$. Thus $v$ cannot be in $A$, which implies that $v \in X-A$, showing that the open set $U \cap U_1 \cap \cdots \cap U_n \subset X-A$. And $x \in U \cap U_1 \cap \cdots \cap U_n$ also. Thus every point $x \in X-A$ has a neighborhood disjoint from $A$, which implies that the set $X-A$ is open and hence $A$ is closed, as required. Is my proof correct now? Or, is there any problem in it? REPLY [3 votes]: I think an easier approach would be to look at continuity at a point. Let $x \in X$ and take a neighbourhood $U$ of $x$ such that $U$ intersects only finitely many of the sets $A_\alpha$. Now consider $U$ as a topological space and show that the function $g: U \to Y$, $g(x) = f(x)$, is continuous (use the result from (a)). Now for each $x \in X$ there exists a neighbourhood $U_x$ such that the restriction of $f$ to $U_x$ is continuous. This means that for each neighbourhood $V$ of $f(x) \in Y$ there exists a neighbourhood $W \subset U_x$ of $x$ in the topological space $U_x$ such that $f(W) \subset V$. But since $U_x$ is open in $X$ and $W$ is open in $U_x$, $W$ is open in $X$ too. Thus $f$ is continuous at $x$ in the topological space $X$.<|endoftext|> TITLE: Well-foundedness of cardinals and the axiom of choice QUESTION [8 upvotes]: Without the axiom of choice, it is not generally true that the class of all cardinals (in this question we consider Scott cardinals rather than cardinals as ordinals) is well-founded under the ordinary cardinality comparison. However, we also know that it is consistent to assume the well-foundedness of the class of all cardinals. Under ZF with this assumption, we can prove that every infinite set is Dedekind-infinite as follows: for an infinite set $X$, consider the collection of cardinals $$\mathcal{A} = \{|A| : A\subseteq X \text{ and $A$ is infinite}\}.$$ By assumption, $\mathcal{A}$ is well-founded. If $|B|$ is a minimal element, then $B$ must be Dedekind-infinite, since $|B|-1 = |B|$ (where $|B|-1$ is the cardinality of $B$ with one element removed). I wonder whether we can prove a stronger result; for example, does the axiom of choice follow from the well-foundedness of the class of cardinals? I would appreciate your answer. REPLY [8 votes]: This is an open problem. It was shown that for every $\kappa$, $\sf DC_\kappa$ cannot prove that the cardinals are well-founded. While not enough to conclude the principle is equivalent to the axiom of choice ($\sf BPI$ does not follow from $\sf DC_\kappa$ either), it is worth remarking that we really don't know much about this principle. A very recent paper gave a nice survey of this problem and related results: Paul Howard, Eleftherios Tachtsis, "No decreasing sequence of cardinals", Archive for Mathematical Logic, First online: 28 December 2015.
Let me finish by stating that generally speaking the structure of the cardinals is a bit of a wild beast when it comes to the axiom of choice. We don't have good techniques to control it very well in order to produce separating models for much-awaited results (e.g. the Partition Principle is a statement about the structure of the cardinals). So we mainly know how to violate things wildly (e.g. embed partial orders into the cardinals of a model), but not how to fine tune this in order to produce nice results.<|endoftext|> TITLE: How could we define the factorial of a matrix? QUESTION [201 upvotes]: Suppose I have a square matrix $\mathsf{A}$ with $\det \mathsf{A}\neq 0$. How could we define the following operation? $$\mathsf{A}!$$ Maybe we could make some simple example, assuming it makes any sense, with $$\mathsf{A} = \left(\begin{matrix} 1 & 3 \\ 2 & 1 \end{matrix} \right) $$ REPLY [2 votes]: The factorial of a group element $n$ can be interpreted in an abstract algebraic sense, which can then be used with matrices. I will try to motivate the intuition here rather than ONLY giving a definition. Consider the expression $n!$: this is $ 1 \times 2 \times 3 \times \cdots \times n $. We now consider an arbitrary group $G$ with a (not necessarily commutative) operation $+$; then we can consider a particular element of $G$ (where $G$ has generators $g_1, g_2, \ldots, g_r$) which we express minimally as the word $$W = g_{\mu_1} + g_{\mu_2}+\cdots+ g_{\mu_s} $$ To be concrete, if we let $G$ be the integers then there are two generators $1, -1$, and a positive integer $N$ can be expressed in the way above as $$N = N(1) = \underbrace{1+1+1...}_{N \ \text{times}} $$ Here $g_{\mu_k}$ is just positive $1$ the whole time. If there were a multiplication operation $\times$ defined on this group as well (we haven't defined one yet!) then we could define $W!$ as $$ g_{\mu_1} \times \left( g_{\mu_1} + g_{\mu_2} \right) \times \ (g_{\mu_1} + g_{\mu_2} + g_{\mu_3} ) \times \ ...\times \ (g_{\mu_1} + g_{\mu_2} + ... g_{\mu_s}) $$ You can verify that if the group $G$ is the integers this results in $$ 1 \times 2 \times 3 \times \cdots \times n $$ which is what we would expect. So the problem of defining a factorial on a group reduces very naturally to: "How do we add a multiplication to a group?" If $G$ is abelian then you can make $G$ into a ring (with possibly non-commutative multiplication). If $G$ is not abelian then either a left or right near-ring is the way to go. Once you add your multiplication then you get a natural factorial.<|endoftext|> TITLE: Evaluating $\int^{\pi}_0\arctan\left(\frac{p\sin x}{1-p\cos x}\right)\sin(nx) dx$ by differentiation under integral? QUESTION [7 upvotes]: I saw that $$ \int^{\pi}_{0}\arctan \left(\frac{p \sin x}{1-p \cos x}\right) \sin(nx) dx=\frac{\pi}{2n} p^n $$ for $$p^2 <1$$ I tried to prove it using differentiation under the integral sign but got stuck at this step: $$ I^{\prime} (p)=\int^{\pi}_{0} \frac{\sin x \sin (nx)}{1+p^2-2p \cos x}dx $$ What to do next? REPLY [7 votes]: For $p^{2} <1$, $$\frac{\sin x}{1+p^{2}-2p \cos (x)} = \sum_{k=1}^{\infty} p^{k-1} \sin(kx). $$ This can be derived by evaluating the geometric series $\sum_{k=0}^{\infty}(p e^{ix})^{k} $.
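To spell out that hinted derivation (a short routine computation, added here for completeness): $$\sum_{k=0}^{\infty}(pe^{ix})^{k}=\frac{1}{1-pe^{ix}}=\frac{1-pe^{-ix}}{(1-pe^{ix})(1-pe^{-ix})}=\frac{1-p\cos x+ip\sin x}{1+p^{2}-2p\cos x},$$ and comparing imaginary parts gives $\sum_{k=1}^{\infty}p^{k}\sin(kx)=\frac{p\sin x}{1+p^{2}-2p\cos x}$; dividing by $p$ yields the stated series.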
So assuming $n$ is a positive integer, we have $$ \begin{align} I'(p) &= \int_{0}^{\pi} \sin (nx) \sum_{k=1}^{\infty} p^{k-1} \sin(kx) \, dx \\ &= \sum_{k=1}^{\infty}p^{k-1} \int_{0}^{\pi} \sin(nx) \sin(kx) \, dx \\ &= p^{n-1} \int_{0}^{\pi} \sin^{2}(nx) \, dx \tag{1} \\ &= \frac{p^{n-1}}{2} \left(\int_{0}^{\pi} \, dx - \int_{0}^{\pi} \cos(2nx) \, dx \right) \\&= \frac{p^{n-1}}{2} (\pi-0) \\ &= \frac{\pi}{2}p^{n-1}. \end{align}$$ $(1)$ The functions $\sin(nx)$ and $\sin(kx)$ are orthogonal on $[0, \pi]$ unless $k=n$.<|endoftext|> TITLE: If $ A=\frac{1}{2\sqrt{1}}+\frac{1}{3\sqrt{2}}+\frac{1}{4\sqrt{3}}+.........+\frac{1}{100\sqrt{99}}\;,$ Then $\lfloor A \rfloor =$ QUESTION [9 upvotes]: If $\displaystyle A=\frac{1}{2\sqrt{1}}+\frac{1}{3\sqrt{2}}+\frac{1}{4\sqrt{3}}+.........+\frac{1}{100\sqrt{99}}\;,$ then $\lfloor A \rfloor =$ ? Where $\lfloor x \rfloor$ represents the floor function of $x$. $\bf{My\; Try:}$ For a lower bound, $$\sum^{99}_{k=1}\frac{1}{(k+1)\sqrt{k}}>\sum^{99}_{k=1}\frac{1}{(k+1)k}=\sum^{99}_{k=1}\left[\frac{1}{k}-\frac{1}{k+1}\right]=1-\frac{1}{100}$$ Now I didn't understand how I can finish from here; any help? Thanks REPLY [3 votes]: Let $$A = \sum_{k=1}^{99} \frac1{(k+1) \sqrt{k}} $$ You can show that the sum is greater than $1$. First we can lower bound the sum by $$\sum_{k=1}^{99} \frac1{k (k+1)} = \frac{99}{100} $$ which, while less than one, may be corrected by its second term: $$\frac{99}{100} + \frac13 \left (\frac1{\sqrt{2}} - \frac12 \right ) \gt 1$$ i.e., $A \gt 1$. Next, consider that $$A = \sum_{k=1}^{99} \frac1{(k+1) \sqrt{k}} = \sum_{k=1}^{99} \left (\frac{\sqrt{k}}{k} - \frac{\sqrt{k+1}}{k+1} \right ) + \sum_{k=1}^{99} \frac{\sqrt{k+1}-\sqrt{k}}{k+1}$$ The first sum on the RHS is just $9/10$. The second sum, however, is $$ \sum_{k=1}^{99} \frac1{(k+1) (\sqrt{k+1}+\sqrt{k})} \le \frac12 \sum_{k=1}^{99} \frac1{(k+1) \sqrt{k}} = \frac12 A$$ Thus $A \le \frac9{10} + \frac12 A$, or $$A = \sum_{k=1}^{99} \frac1{(k+1) \sqrt{k}} \le \frac95$$ and is also greater than one. Thus, the floor of the sum is $1$.<|endoftext|> TITLE: Example of operator with spectrum equal to $\mathbb{C}$? QUESTION [5 upvotes]: In my Functional Analysis course, we proved that for a (possibly unbounded) operator $T$ that is densely defined, closed, and symmetric, exactly one of the following four occurs: $\sigma(T) = \mathbb C$; $\sigma(T) = \{\lambda \in \mathbb C \mid \Im \lambda \geq 0\}$; $\sigma(T) = \{\lambda \in \mathbb C \mid \Im \lambda \leq 0\}$; $\sigma(T) \subset \mathbb R$. Now, 4 is easy; this is true for selfadjoint operators. I'm having a hard time coming up with an example for option 1; can you guys help me? REPLY [4 votes]: Let $T=\frac{1}{i}\frac{d}{dt}$ be defined on the domain $\mathcal{D}(T)$ consisting of all absolutely continuous functions $f \in L^2[0,1]$ for which $f(0)=0=f(1)$. More precisely, $f \in \mathcal{D}(T)\subset L^2[0,1]$ is an equivalence class of functions equal a.e. with one element $\tilde{f}$ of the equivalence class that is absolutely continuous on $[0,1]$ with $\tilde{f}'\in L^2[0,1]$. Then $T$ is closed and densely-defined. It's not hard to check that $T$ is symmetric: $$ (Tf,g)-(f,Tg) = \frac{1}{i}\int_{0}^{1}f'\overline{g}+f\overline{g}'dt=\left.\frac{1}{i}f\overline{g}\right|_{0}^{1} = 0. $$ The resolvent equation is $(T-\lambda I)f=g$, which means $$ f'-i\lambda f=ig,\;\;\; f(0)=0=f(1). $$
Using an integrating factor $e^{-i\lambda t}$ and the fact that $f(0)=0$ must hold, you can see that the following is necessary: $$ \frac{d}{dt}(fe^{-i\lambda t})=e^{-i\lambda t}ig \\ f(x)e^{-i\lambda x} = \int_{0}^{x}e^{-i\lambda t}ig(t)dt $$ However, this is an actual solution iff $$ \int_{0}^{1}e^{-i\lambda t}g(t)dt = 0. $$ So there is no $\lambda\in\mathbb{C}$ for which a solution of the resolvent equations can be found for all $g$. Therefore $\sigma(T)=\mathbb{C}$.<|endoftext|> TITLE: What is $\bigcap_{n \in \mathbb{N}} \left(0, {1\over n}\right)$? QUESTION [5 upvotes]: What is $$\bigcap_{n \in \mathbb{N}} \left(0, {1\over n}\right)?$$ I suspect it is the empty set, and we would see this by using the Archimedean property of $\mathbb{R}$ or something like that, but I have no idea on how to prove it. Can anybody help me? Thanks in advance! REPLY [4 votes]: You got it right: it is indeed an empty set. To prove this, suppose it isn't empty, that is $\exists x\in\mathbb{R},\,x\in\bigcap\limits_{n\in\mathbb{N}}\left( 0,\frac{1}{n}\right)$. Since there's an intersection, this means that $\forall n\in\mathbb{N},\,x\in\left( 0,\frac{1}{n}\right)$, that is $x>0$ and $\forall n\in\mathbb{N},n<\frac{1}{x}$. Guess you found the contradiction :D<|endoftext|> TITLE: Trace of the $k$-th Exterior Power of a Linear Operator QUESTION [6 upvotes]: Let $V$ be an $n$ dimensional vector space over a field $F$ and $T$ be a linear operator over $V$. Assume that the characteristic of $F$ is not $2$. Definition. Consider the map $f_1:V^n\to \Lambda^n V$ as $$f_1(v_1, \ldots, v_n)= \sum_{i=1}^n v_1\wedge \cdots \wedge v_{i-1}\wedge Tv_i\wedge v_{i+1} \wedge \cdots \wedge v_n$$This is an alternating multilinear map and thus it induces a unique linear map $\Lambda^n V\to \Lambda^n V$. Since $\dim(\Lambda^n V)=1$, this linear map is multiplication by a constant which we call the trace of $T$. The above is standard and it naturally calls for the following generalization, before which we discuss a notation. Given an $n$ tuple $(v_1, \ldots, v_n)$ of vectors in $V$ and an increasing $k$-tuple $I=(i_1, \ldots , i_k)$ of integers between $1$ and $n$, write $v_{I, j}$ to denote $Tv_j$ if $j$ appears in $I$ and simply $v_j$ if $j$ does not appear in $I$. Further write $v_I$ to denote $v_{I, 1}\wedge \cdots \wedge v_{I, n}$. Definition. Let $f_k:V^n\to \Lambda^n V$ be defined as $$f_k(v_1, \ldots, v_n)= \sum_{I \text{ an increasing }k\text{-tuple}}v_I$$ Then $f_k$ is an alternating multilinear map and this induces a unique linear map $\Lambda^n V\to \Lambda^n V$. Again, this linear map is multiplication by a constant which we call the $k$-th trace of $T$ and denote it as $\text{trace}_k(T)$. From this post I am convinced that the following is true. Statement. $\text{trace}_k(T)= \text{trace}(\Lambda^k T)$. I am unable to prove this. REPLY [4 votes]: It is convenient to use the Hodge star to simplify the calculations. Choose a non-degenerate symmetric bilinear form $\left< \cdot, \cdot \right>$ on $V$ that has an orthonormal basis (for example, the one corresponding to the identity matrix) and let $(e_1, \ldots, e_n)$ be an orthonormal basis with respect to the chosen bilinear form. We will use $\sum_{I}$ to denote summation over increasing multi-indices $I$ of size $k$.
Thus, $$ \mathrm{trace}(\Lambda^k T)(e_1 \wedge \cdots \wedge e_n) = \sum_{I} \left< (\Lambda^kT)(e_I), e_I \right> \left( e_1 \wedge \cdots \wedge e_n \right) = \sum_{I} \Lambda^k T(e_I) \wedge (*e_I) = \sum_{I \coprod J = [n]} \pm \left( \Lambda^k T(e_I) \wedge e_J \right) = \sum_{I \coprod J = [n]} \pm \left(Te_{i_1} \wedge \cdots \wedge Te_{i_k} \wedge e_{j_1} \wedge \cdots \wedge e_{j_{n-k}} \right) $$ where $J$ is an increasing multi-index such that $I \coprod J = [n]$ and we used the fact that $*e_I = \pm e_J$. A sign calculation that uses the definition of the Hodge star shows that in fact the sign is plus, which shows that $$ \mathrm{trace}(\Lambda^k T)(e_1 \wedge \cdots \wedge e_n) = f_k(e_1, \cdots, e_n) $$ and thus $\mathrm{trace}(\Lambda^k T) = \mathrm{trace}_k(T)$. One can also show this without using the Hodge star. Choose some basis $(e_1,\dots,e_n)$ for $V$. The expression for $f_k(e_1,\dots,e_n)$ is the sum of $n \choose k$ terms where each term is obtained from $e_1 \wedge \dots \wedge e_n$ by choosing an increasing tuple $I = (i_1, \dots, i_k)$ and applying $T$ to each $e_{i_j}$ while leaving the rest of the vectors intact and in the same order. Let $J$ be the unique increasing tuple $J$ such that $I \coprod J = [n]$ and then by reordering the vectors in the wedge product, we can write each term as $$ (-1)^{\sigma(I)} Te_{i_1} \wedge \dots \wedge Te_{i_k} \wedge e_{j_1} \wedge \dots \wedge e_{j_{n-k}} = (-1)^{\sigma(I)} \Lambda^k(T)(e_I) \wedge e_J $$ where $(-1)^{\sigma(I)}$ is the sign that comes from the reordering. Now, $$ \operatorname{trace}(f_k) = (e^1 \wedge \dots \wedge e^n)(f_k(e_1, \dots, e_n)) = (e^1 \wedge \dots \wedge e^n) \sum_{I} (-1)^{\sigma(I)} \Lambda^k(T)(e_I) \wedge e_J = \sum_{I} (-1)^{\sigma(I)} (e^I \wedge e^J)\left((-1)^{\sigma(I)} \Lambda^k(T)(e_I) \wedge e_J\right) = \sum_{I} (e^I \wedge e^J)(\Lambda^k(T)(e_I) \wedge e_J). $$ Each $(e^I \wedge e^J)(\Lambda^k(T)(e_I) \wedge e_J)$ is the determinant of an upper triangular block matrix whose lower $(n-k) \times (n-k)$ block is the identity. The vanishing of the rightmost $k \times (n-k)$ block comes from "$e^I(e_J)$" while the fact that the lower $(n -k) \times (n-k)$ block is the identity comes from "$e^J(e_J)$". Hence, $$ \operatorname{trace}(f_k) = \sum_{I} e^I(\Lambda^k(T)(e_I)) = \operatorname{trace}(\Lambda^k(T)). $$<|endoftext|> TITLE: Why it is more accurate to evaluate $x^2-y^2$ as $(x+y)(x-y)$ in floating point system? QUESTION [5 upvotes]: The expression $x^2-y^2$ exhibits catastrophic cancellation if $|x|\approx|y|$. Why is it more accurate to evaluate it as $(x+y)(x-y)$ in a floating point system (like IEEE 754)? I see this is intuitively true. Can anyone help demonstrate an example where $|x|\approx|y|$? And is there (or how would one write) a formal proof of the claim? A detailed explanation would be very much appreciated! Thank you! REPLY [6 votes]: Each single elementary operation has a truncation error of at most $1/2$ ulp (unit in the last place), which is about a relative error of less than $\mu=2^{-53}$. Let's express the floating point realizations of the two expressions using these relative errors $|δ_i|\le \mu$. The errors are \begin{align} fl(fl(x^2)-fl(y^2))&=(x^2(1+δ_1)-y^2(1+δ_2))(1+δ_3) \\ &=(x^2-y^2)+x^2δ_1-y^2δ_2+(x^2-y^2)δ_3+\text{higher order terms} \end{align} which has as first order upper bound $[x^2+y^2+|x^2-y^2|]\mu=2\max(x^2,y^2)\mu$, and $$ fl(fl(x+y)fl(x-y))=((x+y)(1+δ_1)\,(x-y)(1+δ_2))(1+δ_3) $$ where the upper bound for the first order terms is $3 |x^2-y^2| μ$.
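As a quick numerical illustration of these two error behaviours, here is a minimal Python sketch (my own addition, not part of the original answer; the test values are arbitrary, chosen so that $x\approx y$, and exact reference values are computed with rational arithmetic):

from fractions import Fraction

x = 1.0 + 2**-26
y = x + 2**-30          # x and y agree in many leading bits

exact = Fraction(x)**2 - Fraction(y)**2              # exact value of x^2 - y^2

err_naive    = abs(Fraction(x*x - y*y) - exact)      # x*x - y*y evaluated in floats
err_factored = abs(Fraction((x+y)*(x-y)) - exact)    # (x+y)(x-y) evaluated in floats

print(float(err_naive), float(err_factored))
# the naive form loses many orders of magnitude more accuracy
# relative to |x^2 - y^2| than the factored form
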
To demonstrate that this calculation is indeed reflected in actual floating point arithmetic, consider $x\in [1,2]$ and $y=x+ε$ with $ε$ fixed to some small value, here $\varepsilon=7\cdot 2^{-m}$ for some $m$ in the upper range of the mantissa length. Then the exact difference is $2εx+ε^2$. The error bound for the first formula now appears as $2x^2\mu$ and for the second formula as $6xε\mu$. The plots show that the derived error bounds are valid and rather tight for both formulas; the error of the second formula is only really visible in the logarithmic plot and is zero for all practical purposes.<|endoftext|> TITLE: Why doesn't this infinite exponential growth go beyond 2.5? QUESTION [7 upvotes]: My calculus book says that with: $$a=x^{x^{x^{.^{.^{.}}}}}$$ (exponent tower goes on forever), then: $$x=a^\frac{1}{a}$$ I tried it out with $a=3$ so $x=3^\frac{1}{3}$ and then ran a python program to test it. I did: $$(3^\frac{1}{3})^{(3^\frac{1}{3})^{(3^\frac{1}{3})^{.^{.^{.}}}}}$$ Between 500 runs and 10,000 runs the answer stayed: 2.4780526802882967 Maybe I don't appreciate enough what infinity means, but why isn't it closer to $3$ after 10,000 runs? Is the answer $x=a^\frac{1}{a}$ really correct? Will this really finally be 3 if it would run infinitely? For reference, this was the little python program:
x = 3**(1/3)
y = x
for i in range(0, 100000):
    y = x**y
print(y)
REPLY [7 votes]: $$y_{n+1}=x^{y_n}$$ $$\ln y_{n+1}=y_n \ln x=y_n (\frac{1}{3}\ln 3)$$ What you are getting is $y_{n+1}=y_n=2.4780526802882967$, i.e. $$\frac{\ln y_{n}}{y_n}= \frac{1}{3}\ln 3$$ This would have forced $y_n=3$ if $\frac{\ln x}{x}$ were a one-to-one function, but it is not. The above plot shows the function $\frac{\ln x}{x}$. Observe that the line $y=\frac{\ln 3}{3}$ intersects the graph at two points: $x=3$ and $x=2.4780526802882967$.<|endoftext|> TITLE: Elementary Set Theory: Ordinals, Well Orderings and Isomorphisms QUESTION [5 upvotes]: I need to show that for any countable ordinal $\alpha$ there is a set A $\subseteq \mathbb{Q}$ such that (A, <) is isomorphic to ($\alpha, \in$). To do it I am supposed to show the following stronger statement by induction on ordinals $\alpha$ < $\omega_{1}$: Let P($\alpha$) be the statement "For every interval (a, b) with rational endpoints, there is an A $\subseteq$ (a,b) such that (A, <) is isomorphic to ($\alpha, \in$)." I think that the purpose of breaking $\mathbb{Q}$ into intervals (a,b) is to make $\mathbb{Q}$ a countable set, but I don't see the significance of doing this or how it would make the proof easier. I feel like I need to define a well-ordered set in (a,b) but I don't know how to do this since if a and b are both negative then there is no least element, so no well-ordering. I believe that I am supposed to be using transfinite induction? But I don't know how P($\alpha$) implies P(S($\alpha$)) for S($\alpha$) successor of $\alpha$. Any help would be greatly appreciated... thanks in advance. REPLY [5 votes]: As with any transfinite induction over ordinals, we need to do this in three cases: $P(0)$. This is obvious. $P(\alpha) \implies P(S(\alpha))$. This might seem difficult, but it actually isn't. Take the interval $(a, b)$. $P(\alpha)$ implies that there is a subset $A_\alpha \subseteq (a, \frac{a+b}2)$ that is order isomorphic to $\alpha$. Now set $A_{\alpha + 1} = A_\alpha \cup \left\{\frac{a + 2b}{3}\right\}$, and you're done. $(\forall \alpha < \gamma\; P(\alpha))\implies P(\gamma)$ for $\gamma$ a (countable) limit ordinal. This is the tricky part.
The reason that it's tricky is that the different $A_\alpha$ might not converge in any way to a suitable candidate for $A_\gamma$. So we need some way to stabilize this. First, fix an infinite sequence $\{a_i\}_{i \in \omega_0}$ of rational numbers such that $a < a_0 < a_1 < a_2 <\cdots < b$. Also, fix some strictly increasing sequence $\{\alpha_i\}_{i \in \omega_0}$ of ordinals that converges to $\gamma$. The idea is to make sure that both $A_{\alpha_i} \subseteq A_{\alpha_{i + 1}}$, and $A_{\alpha_i}\subseteq (a, a_i)$, so that one is able to take the "limit" (i.e. the union) and have it behave nicely so that we may use that as $A_\gamma$. We begin by using $P(\alpha_0)$ to establish an $A_{\alpha_0}\subseteq (a,a_0)$. Then, for any $i$, there is an ordinal $\beta_i$ so that $\alpha_i + \beta_i = \alpha_{i + 1}$ (which means $\beta_i \leq \alpha_{i+1} < \gamma$). Therefore we may use $P(\beta_i)$ to find an $A_{\beta_i}\subseteq (a_i, a_{i+1})$ which is order isomorphic to $\beta_i$, and thus $A_{\alpha_{i+1}} = A_{\alpha_i} \cup A_{\beta_i}$ is order isomorphic to $\alpha_{i+1}$. Taking $A_\gamma = \bigcup_i A_{\alpha_i}$ then finishes the proof. It is worth noting where this fails for uncountable ordinals. The first uncountable ordinal, $\omega_1$ is a so-called regular cardinal, which means that if you try to apply step $3$ to it (it is, of course, a limit ordinal), then you cannot find an increasing sequence $\{\alpha_i\}_{i \in \omega_0}$ that converges to $\omega_1$. You need an uncountable sequence to reach it. And you cannot find an uncountable sequence $\{a_i\}_{i \in \omega_1}$ to match it, since there aren't enough rational numbers.<|endoftext|> TITLE: Associativity of tensor product over various rings QUESTION [6 upvotes]: From Atiyah-MacDonald: Exercise 2.15. Let $A$, $B$ be rings, let $M$ be an $A$-module, $P$ a $B$-module and $N$ an $(A,B)$-bimodule (that is, $N$ is simultaneously an $A$-module and a $B$-module and the two structures are compatible in the sense that $a(xb) = (ax)b$ for all $a \in A$, $b \in B$, $x \in N$). Then $M \otimes_A N$ is naturally a $B$-module, $N \otimes_B P$ an $A$-module, and we have $$ (M \otimes_A N) \otimes_B P \cong M \otimes_A (N \otimes_B P). $$ Source Do they mean the isomorphism to be one of abelian groups? Or of modules over one of the rings somehow? REPLY [6 votes]: This is a natural isomorphism of abelian groups, so it will respect any extra module structures which may be present; naturality in $M$ implies that if $M$ is a $(C, A)$-bimodule for some other ring $C$ then this is an isomorphism of $C$-modules, and naturality in $P$ implies that if $P$ is a $(B, D)$-bimodule for some other ring $D$ then this is an isomorphism of $D$-modules. In particular, if all rings involved are commutative, then this is always the case with $C = A, D = B$, so we get an isomorphism of $(A, B)$-bimodules.<|endoftext|> TITLE: Which of the $43,380$ possible nets for a dodecahedron is the narrowest? QUESTION [6 upvotes]: I want to fit multiple regular dodecahedron nets on to an infinitely long roll of paper. I want this to result in the largest possible dodecahedrons, for a roll of a given width. My hunch is that the longer and narrower the net, the larger the dodecahedron I can produce; proving this one way or the other might be an interesting side-exercise. My main question for now is simply: for a given size of pentagon, which of the $43,380$ possible nets for a regular dodecahedron fits into the narrowest rectangle? REPLY [10 votes]: Just as a starter, I propose the most obvious one.
The area of the rectangle is 32.89 if every edge of the dodecahedron is of unit length. EDIT. If one is interested in the narrowest possible net, I think the above disposition is still the best one. Because the central "belt" of six pentagons (yellow in the picture below) cannot be altered without widening the net, and the other surrounding pentagons can be moved to other positions, but this doesn't narrow (at best) the width of the net (see possible new positions, in blue, of three pentagons). The width of this net is $\sqrt{5+2\sqrt5}(3+\sqrt5)/4\approx4.02874$ times the length of a single edge. EDIT 2. Inspired by net n° 9383 in Horiyama's list I could find a strip slightly narrower than the above, at the expense of having its border not parallel to any pentagon side (see picture). Its width is $\approx 3.93448$. EDIT 3. Oolong discovered the best candidate, up to now: it is n° 43362 in the catalogue, corresponding to a width $\approx 3.66547$. EDIT 4. Oolong discovered an even narrower net: it is n° 36753 in the catalogue, corresponding to a width $\approx 3.3166$. EDIT 5. I performed an exhaustive search, using Mathematica and the complete collection of dodecahedron net centers in Mathematica format, which can be found at Horiyama's site. For every net I checked all the lines passing through two vertices: in case all the other vertices lay on the same side of the line, I then computed the distance from the line to the farthest vertex. The shortest of those distances is the "width" of the net. Here are a few of the best results.

WIDTH     NET NUMBERS
3.07768   41382, 32924, 32920, 32511, 32494, 32492
3.26889   26440, 23967, 23620, 20027, 19706, 19668
3.3166    42665, 42591, 42549, 42546, 39271, 39268, 36753, 36743, 36717, 36716, 36607, 36598, 36581, 36445, 36439, 36408, 36390, 36304, 36298, 36267, 36264, 36263, 29579, 28755, 28742, 28741, 28740, 28734, 28496, 28489, 28488, 28456, 28434, 28433, 28432, 28416, 27807, 27806, 27805, 27729, 27728, 27727, 27674, 27673, 27672

Notice that the narrowest width can be computed exactly: $3.07768=\sqrt{5+2\sqrt5}$. Here's a picture of n° 41382, which is one of the "winners": EDIT 6. Here's the Mathematica code I used.
(* some definitions *)
lato=2Sin[Pi/5]//Simplify;
sqdist[a_,b_]:=(a-b).(a-b);
rot[a_,b_,t_]:=b+{{Cos[t],-Sin[t]},{Sin[t],Cos[t]}}.(a-b);
cross2[{ax_,ay_},{bx_,by_},{cx_,cy_}]:=(ax*by-ay*bx+bx*cy-cx*by+cx*ay-cy*ax)/Sqrt[sqdist[{ax,ay},{bx,by}]];

(* main loop; "r04_n.math" are Horiyama's files *)
all={};
Do[
  file="/path/r04_"<>ToString[n]<>".math";
  Get[file];
  net={};
  Do[
    If[sqdist[p[[i]],p[[k]]]==a^2//Simplify,
      cmid=p[[i]]+(p[[k]]-p[[i]])/a;
      start=rot[cmid,p[[i]],Pi/5]//Simplify;
      pent=Table[rot[start,p[[k]],2j*Pi/5]//FullSimplify,{j,0,5}];
      net=Append[net,pent];
      pent=Table[rot[start,p[[i]],2j*Pi/5]//FullSimplify,{j,0,5}];
      net=Append[net,pent]
    ],
    {i,1,Length[p]-1},
    {k,i+1,Length[p]}];
  pts=Flatten[net,1]//N;
  pts=Union[pts,SameTest -> (sqdist[#1,#2]<0.00001&)];
  best=1000;
  Do[
    wid=-1; flag=True;
    Do[
      t=cross2[pts[[i]],pts[[j]],pts[[k]]];
      If[Abs[t]<0.0000001,Continue[]];
      If[wid<0,wid0=Sign[t];wid=0];
      If[t*wid0<-0.0000001,flag=False;Break[]];
      If[Abs[t]>Abs[wid],wid=Abs[t]],
      {k,1,Length[pts]}];
    If[flag && wid/lato<best, best=wid/lato]
<|endoftext|> TITLE: Rudin's definition on measurable function QUESTION [5 upvotes]: In the definition of a measurable function in Rudin's book, he defines a measurable function from a measurable space $X$ to a topological space $Y$ by requiring that the inverse image of every open set in the range space be measurable in the domain space. This definition is equivalent to the common definition of a measurable function when $Y$ is equipped with the Borel $\sigma$-algebra (the smallest $\sigma$-algebra containing the open sets). However, if the $\sigma$-algebra containing the open sets on $Y$ is bigger than the Borel $\sigma$-algebra, then this definition gives a broader range of measurable functions. Can anyone tell me why Rudin defines it like that? REPLY [3 votes]: While no one here can speak on Rudin's behalf, it is probably safe to assume that he did so to keep the material as simple as possible, since his books are mainly concerned with functions mapping to $\mathbb{R}$ or $\mathbb{C}$. The usual definition of measurable function is: Definition: Let $(X,\mathcal{M})$ and $(Y,\mathcal{N})$ be measurable spaces. A function $f\colon X\rightarrow Y$ is measurable if for all $E$ in $\mathcal{N}$, $f^{-1}(E)$ is in $\mathcal{M}$. ...and here is a statement (and proof) that gets you from Rudin's definition to the one above (assuming $\mathcal{N}$ is the Borel $\sigma$-algebra): Theorem: Let $(X,\mathcal{M})$ and $(Y,\mathcal{N})$ be given measurable spaces. Assume $\mathcal{N}$ is generated by $\mathcal{E}$ (that is to say, $\mathcal{N}=\sigma(\mathcal{E})$ is the smallest $\sigma$-algebra containing $\mathcal{E}$). Let $f\colon X\rightarrow Y$ be a function. If for every $E$ in $\mathcal{E}$, $f^{-1}(E)$ is in $\mathcal{M}$, then $f$ is measurable. Proof: Consider the set $$ \mathcal{A}=\left\{ E\subset Y\colon f^{-1}(E)\text{ is in }\mathcal{M}\right\} . $$ Check that $\mathcal{A}$ is stable under complements and countable unions to conclude that it is a $\sigma$-algebra. By the assumption, $\mathcal{E}\subset\mathcal{A}$.
Since $\mathcal{E}$ generates $\mathcal{N}$ and $\mathcal{A}$ is a $\sigma$-algebra containing $\mathcal{E}$, we get $\mathcal{N}=\sigma(\mathcal{E})\subset\mathcal{A}$, as desired.<|endoftext|> TITLE: Proving that maximizing a sum of functions of different independent variables is equivalent to maximizing each function QUESTION [5 upvotes]: Let $$ \pi = f_1(x_1) + f_2(x_2) + f_3(x_3) + \dots + f_n(x_n) = \sum_{i=1}^n f_i(x_i) $$ where $f_i$ denote different functions and $x_i$ denote different independent variables. Would proving that $\pi$ is maximized by maximizing $f_i(x_i)\;\forall i$ be as simple as assuming, BWOC, that $\pi$ is not maximized by doing so, and then noting that this is a contradiction to $f_i(x_i)$ being maximized for each term, since some term must be higher? Note that if this was written as an optimization problem, it would be unconstrained (as I believe constraints would mess with this result). It is pretty much a trivial question/property, but I see it used here and there and have not seen why it is true discussed. Edit: Additionally, what if each term was instead the composition of functions? That is, what if we had $$ \pi = g(f_1(x_1)) + g(f_2(x_2)) + g(f_3(x_3)) + \dots + g(f_n(x_n)) = \sum_{i=1}^n g(f_i(x_i)) $$ I believe the same argument applies, correct? Because, once again, if maximizing each term does not maximize the sum, then some term must be larger than the maximum, which is a contradiction. Edit 2: and it would also then apply to multivariable functions, so long as each function is of different (multiple) independent variables. Thanks. REPLY [2 votes]: Hint: $$\frac{\partial \pi}{\partial f_i} = 1 > 0$$<|endoftext|> TITLE: Eigenvector and eigenvalue for exponential matrix QUESTION [19 upvotes]: $X$ is a matrix. Let $v$ be an eigenvector of $X$ with corresponding eigenvalue $a$. Show that $v$ is also an eigenvector of $e^{X}$ with eigenvalue $e^{a}$. If $X$ is diagonalizable, then we can start writing out terms using the Taylor expansion of $e^{X}$ but I can't seem to get anywhere. Thanks for the help Edit: Corrected question to read 'Let $v$ be an eigenvector of $X$' instead of 'Let $v$ be an eigenvector of $e^X$'. REPLY [6 votes]: If we let $ \Phi(t) = e^{t X}$, we see that $\Phi$ satisfies the (matrix) equation $\dot{Y} = Y X$ subject to the initial condition $Y(0) = I$. Let $\xi(t) = \Phi(t) v$, where $Xv = a v$, then we see that $\dot{\xi}(t) = \Phi(t) X v = a \Phi(t)v= a \xi(t)$, and so $\xi(t) = e^{a t} v$. Taking $t=1$ we get $e^X v = e^a v$.<|endoftext|> TITLE: Stability of limit cycle associated with a homogenous linear equation QUESTION [5 upvotes]: Study the stability of the limit cycle $r=1$ for the system given in polar coordinates by the equations $\dot{r}=(r^2−1)(2x−1), \dot{\phi}=1$, where $x=r\cos \phi$. I've been trying to solve this problem by estimating the return function, but haven't made any progress. Can anyone give me some hints? REPLY [2 votes]: Just found out that this is a problem in Arnold's ODE book. The limit cycle is asymptotically stable. This can be seen as follows: Whenever the trajectory starts at some point with $x>\frac12$ it will be pushed away from the periodic orbit $r=1$ until it reaches the line $x=\frac12$. After that it is pushed toward the periodic orbit until it reaches $x=\frac12$ again.
But since the line $x=\frac12$ divides the circle $r=1$ asymmetrically, the time that the orbit spends in the region $x<\frac12$ is more than the time that it spent in the region $x>\frac12$ in the last "iteration".<|endoftext|> TITLE: Probability of every ball occurring in multiple independent random samples QUESTION [5 upvotes]: An urn contains 5 distinct numbered balls. You choose 2 without replacement. You then reset the urn and choose another 2 without replacement. Do this one more time. Now you have three random samples of size 2. What is the probability that all of the numbered balls appear at least once in your 3 random samples of size 2? My thinking of a way to approach this is through complements: finding the probability one of the balls is missing, two, and three, then subtracting all of these scenarios from one. Is this a solid approach? Here is the work: P(all numbered balls appear at least once) = 1 - P(at least one ball is missing) P(at least one ball is missing) = P(one ball missing) + P(two balls are missing) + P(three balls are missing) I found the probabilities of one ball, two balls, and three balls missing to be the following: P(one ball missing) = $$\left(1*{4 \choose 2}/{5 \choose2}\right)^2$$ P(two balls missing) = $$\left(1*{3 \choose 2}/{5 \choose2}\right)^2$$ P(three balls missing) = $$\left(1*{2 \choose 2}/{5 \choose2}\right)^2$$ This is because the first time you can choose any three balls, in the one ball missing case, you can only choose two balls from four when there are five possible balls. You can choose either the two that are original or any combination with a different arbitrary two but one must be left out. This is true for both random samples following the first. Any thoughts? REPLY [3 votes]: I encourage you to work through the details of your own approach. Here is my alternative. After the first drawing (a pair of balls), we have exactly two balls (out of five possible) that have been drawn (and we can never get fewer than this in the end). The second drawing can affect which balls have been seen. There are three possible outcomes as far as the number of different balls drawn in either the first or second drawing. 0. We might get the same two balls as we did the first time. 1. We might get one of the original two balls and a third ball not seen in the first drawing. 2. We might get neither of the original two balls, so two new balls from this second drawing (for a total of four seen after both drawings). Given that there are $\binom{5}{2}=10$ ways the second drawing can occur, counting the number of $2$-subsets of the five (distinct) balls, it should be pretty easy to work out the probability for each of these three outcomes. Only one of the ten ways corresponds to case 0. Case 2. corresponds to $\binom{3}{2}=3$ of them. Thus case 1. has probability $6$ out of ten, or $0.6$. If case 0. occurred in the second drawing, nothing that happens in the third and final drawing will result in getting all five balls. We will fail to get all five balls at least once. If case 1. occurred in the second drawing, then we have a "fighting chance". We have seen three of the balls, and what is needed is to draw exactly the two balls in the third drawing that had not been seen before. As just pointed out, the chance of doing that in the third drawing is the same as case 0. in the second drawing: one chance in ten. Finally if case 2. occurred in the second drawing, it will be a lot more likely to succeed in getting the one missing ball (we have two "chances" to get it).
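These case probabilities are easy to sanity-check by brute force; here is a minimal Python sketch of my own (not part of the original answer), enumerating all $10^3$ equally likely ordered triples of draws:

from itertools import combinations, product

pairs = list(combinations(range(5), 2))            # the 10 possible 2-subsets per draw
hits = sum(set().union(*draw) == set(range(5))     # did all five balls appear?
           for draw in product(pairs, repeat=3))   # 10^3 ordered triples
print(hits / len(pairs)**3)                        # prints 0.18

This agrees with the $0.06 + 0.12 = 0.18$ combined in the next step.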
A little calculation shows that the probability of going from case 2. in the second drawing to getting all five balls in the end is: four chances in ten. (It might be easier to count how many ways there are to fail to get the last ball, namely six ways out of ten.) Now put the possible paths to success together in a non-overlapping way. We could get all five balls in three drawings either by: A. Getting case 1. in the second drawing and filling out the sample in the third drawing. Probability is $0.6 \times 0.1 = 0.06$. B. Getting case 2. in the second drawing and sampling the missing fifth ball in the third drawing. Probability is $0.3 \times 0.4 = 0.12$. Add the probabilities of these disjoint outcomes and you have the combined probability that all five balls will appear at least once in the three samples. Let me outline how we can make this computation scale up to similar but more complicated situations. In the Markov chain approach, which we've sketched in words above, one identifies "states" that occur and the probabilities (when a drawing is made) of transitioning from one state to another. We then format all these probabilities into a state-transition matrix. Here we have considered states based on how many balls have been seen. Originally no balls were yet seen, and it is possible that up to five balls will be seen (after three draws). So we'll label the six states $B_0,B_1,\ldots,B_5$ according to how many different balls have been seen. The reader will note that when a draw begins in state $B_0$, there is a probability of $1$ that a transition to state $B_2$ will occur. Similarly the transition probabilities from state $B_2$ are calculated above as follows: $$ \begin{align*} B_2 &\to B_0 &: \; 0.0 \\ &\to B_1 &: \; 0.0 \\ &\to B_2 &: \; 0.1 \\ &\to B_3 &: \; 0.6 \\ &\to B_4 &: \; 0.3 \\ &\to B_5 &: \; 0.0 \end{align*} $$ If we compile all the transition probabilities into a matrix, where $M_{ij}$ is the chance of going from $B_i$ to $B_j$, then: $$ M = \begin{bmatrix} 0 & 0 & 1.0 & 0 & 0 & 0 \\ 0 & 0 & 0.4 & 0.6 & 0 & 0 \\ 0 & 0 & 0.1 & 0.6 & 0.3 & 0 \\ 0 & 0 & 0 & 0.3 & 0.6 & 0.1 \\ 0 & 0 & 0 & 0 & 0.6 & 0.4 \\ 0 & 0 & 0 & 0 & 0 & 1.0 \end{bmatrix} $$ Let $p_0,p_1,\ldots,p_5$ be the probabilities of states before a draw and $q_0,q_1,\ldots,q_5$ the probabilities after a draw. Then: $$ \begin{bmatrix} p_0 & p_1 & p_2 & p_3 & p_4 & p_5 \end{bmatrix} M = \begin{bmatrix} q_0 & q_1 & q_2 & q_3 & q_4 & q_5 \end{bmatrix} $$ Given that the initial "probability" distribution of states (before any drawing) is $p_0 = 1$ and the rest zeros, it is not terribly hard to show that the distribution after three drawings is: $$ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} M^3 = \begin{bmatrix} 0.00 & 0.00 & 0.01 & 0.24 & 0.57 & 0.18 \end{bmatrix} $$ The final entry of this last probability distribution of states is our answer.<|endoftext|> TITLE: An exercise in Fine Structure of constructible universe concerning projectum patterns QUESTION [10 upvotes]: This question assumes some familiarity with Jensen's fine structure analysis of the constructible universe L (https://en.wikipedia.org/wiki/Jensen_hierarchy, http://www.math.cmu.edu/~laiken/papers/FineStructure.pdf). Everything to follow is in L. Given some $J_\alpha$, the $n$-th projectum of $J_\alpha$, $\rho_n(J_\alpha)$, is defined as follows: the least $\rho\leq \alpha$ such that there exists a subset of $\omega\cdot \rho$ which is $\Sigma_n(J_\alpha)$ but not in $J_\alpha$.
Another equivalent characterization is that $\rho_n$ is the least $\delta\leq \alpha$ such that there exists a $\Sigma_n(J_\alpha)$ function that maps $\omega\cdot \delta$ onto $J_\alpha$. Of course if $1<\rho_n<\alpha$, then $J_\alpha\models \rho_n \text{ is a cardinal}$ so $\omega\cdot \rho_n = \rho_n$. The exercise is asking to produce any arbitrary pattern of the projecta. More concretely, something like the following: exhibit a $J_\alpha$ such that $\rho_k(J_\alpha)=\alpha$ for $k=0,1,2,3$, $\rho_4(J_\alpha)<\alpha$, $\rho_5<\rho_4$, and $\rho_j=\rho_5$ for all $j\geq 6$. What I can do now is to produce one drop (I feel that if I somehow knew how to produce two drops then I would be done). More precisely, consider $J_{\omega_2}$. Let $\xi$ be the least ordinal in $J_{\omega_2}$ which is not $\Sigma_4$-definable from $\omega_1$. Take the $\Sigma_4$ Skolem hull in $J_{\omega_2}$ with parameters from $\omega_1 \cup \{\xi\}$, denoted by $Hull_{\Sigma_4}^{J_{\omega_2}}(\omega_1\cup \{\xi\}) \simeq_\pi J_{\beta}=Hull_{\Sigma_4}^{J_{\beta}}(\omega_1\cup \{\pi(\xi)\})$ by condensation via $\pi$. Then it's not hard to verify that $\rho_4(J_\beta)=\omega_1$ (with standard parameter $\{\pi(\xi)\}$) and $\rho_k(J_\beta)=\beta$ for $k<4$ by elementarity. But it is obvious that the above construction also yields that $\rho_k(J_\beta)=\omega_1$ for all $k\geq 4$ by cardinality considerations. My feeling is that I should probably produce those projecta starting from $\rho_5$ (i.e. backtrack). But I don't see how, so far, to get another projectum drop. Thanks in advance! REPLY [2 votes]: Let me include an argument here. Goal: given $s\in 2^{<\omega}$ we will produce a $J$-structure such that the projecta behave exactly as indicated by the string $s$ (1 means drop and 0 means stay). Assume $V=L$. For any $s\in 2^{<\omega}$, for all infinite cardinals $\kappa$, there exists $\kappa \leq \alpha <\kappa^+$ such that in $J_\alpha$ the projecta pattern is $s$. Fix $\kappa$. We need to demonstrate how to produce one more drop. Proceed by induction on the length of $s$, say $n$. Let $\beta\geq \kappa^+$ be such that $J_\beta$ has projecta pattern $s$. Our goal is to produce $\kappa\leq \alpha <\kappa^+$ such that the projecta pattern is $s^\frown \langle 1\rangle$, a.k.a. one more drop of the projectum. Consider $A=Hull_{n+1}^{J_\beta}(\kappa \cup S)$ where $$S=\{\kappa\}\cup \{\rho_i(J_\beta): i\leq n\} \cup \{p_i: i\leq n\}\cup \{\gamma_i: i\leq n+1\}$$ where $\gamma_i$ is the parameter in $J_\beta$ for the $\Sigma_{i}$-Skolem function for $J_\beta$. We know $A\prec_{n+1} J_\beta$. What is left is to check that $\rho_{n+1}(A)=\kappa$ and $\rho_i(A)=\rho_i(J_\beta)$ for all $i\leq n$. Then the transitive collapse of $A$ will work. $\rho_{n+1}(A)\leq\kappa$ because there exists a $\Sigma_{n+1}$ definable function that maps a subset of $\kappa$ onto $A$. But $\kappa$ is a cardinal. It follows that $\rho_{n+1}(A)=\kappa$. Fix any $i\leq n$; we need to show $\rho_i(A)=\rho_i(J_\beta)$ (this also applies to the degenerate case where $\rho_i(J_\beta)=\beta$). $\rho_i(A)\leq \rho_i(J_\beta)$: since $h_i$ maps a subset of $\rho_i(J_\beta)$ onto $A$ with parameter in $A$, we know by soundness that $\rho_i(A)\leq \rho_i(J_\beta)$. Conversely, some $\Sigma_i(A)$ function $h'_i$ maps $\rho_i(A)$ onto $A$. We have $A\models \forall x \exists y<\rho_i(A)\; h'_i(x)=y$. Note the sentence is $\Pi_{i+1}$ with parameters in $A$. By $\Sigma_{n+1}$ elementarity we know that $J_\beta \models \forall x \exists y<\rho_i(A)\; h'_i(x)=y$.
Hence $\rho_i(J_\beta)\leq \rho_i(A)$.<|endoftext|> TITLE: Prove $|P(0)|\leq 2n+1$ QUESTION [28 upvotes]: Let $P(x)$ be a polynomial with degree $\leq n$ and $|P(x)|\leq\frac{1}{\sqrt{x}}$ for $x\in(0,1]$. Prove that $|P(0)|\leq 2n+1$. The idea should be that if $|P(0)|$ is too large, then the polynomial cannot change values fast enough to avoid intersecting the curve $ \pm 1/\sqrt{x}$, but I don't see how to formalize this. REPLY [14 votes]: Since $-\frac{1}{\sqrt{x}}\le P(x)\le\frac{1}{\sqrt{x}}$ for $0<x\le 1$ …<|endoftext|> TITLE: Characterize units in formal power series $R[[x]]$ QUESTION [5 upvotes]: Suppose $R$ is a commutative ring with unity. Define $R[[x]]$ as "formal power series in the variable $x$ with coefficients from $R$". These are the infinite sums of the form $ \sum_{i=0}^\infty a_ix^i, a_i\in R. $ Is there any way to characterize all of the units in this ring $R[[x]]$? REPLY [4 votes]: The units of $R[[x]]$ are exactly the formal power series whose constant term is a unit of $R$. One way to prove this is to consider truncations of the power series and show that as the truncation degree $K$ increases, you can eventually find a $q(x)$ of the same degree such that all the terms of $p(x)q(x)$ of degree less than $K$, other than the constant term, are $0$. Thus the nonzero terms get "pushed off" the power series, and leave us with the product of the constant terms. Another way to see this is to note that you can find a $q$ such that $p(x)q(x)-p_0q_0$ has a root at every element of the ring. Once you've done this, it's simple to see that this product can be $1$ iff $p_0$ is a unit.<|endoftext|> TITLE: Geometric intuition for the Stein factorization theorem? QUESTION [13 upvotes]: What is the intuition behind the Stein Factorization Theorem? I understand that it was originally a theorem in several complex variables, so I was wondering if there's some geometric explanation that isn't as opaque as the statement in EGA. In particular, why would one expect this theorem to be true? Are there any suggestive examples or heuristics? For example, when I think about the upper semi-continuity of dimension, I always have in the back of my head the picture of a blow up. REPLY [17 votes]: I've reorganized this answer to highlight the intuition in the complex analytic case, and how the difficulties are in different places compared to the algebraic case. The thing that makes Stein factorization "unintuitive," I think, is the following corollary (III.11.3 in Hartshorne) to the Theorem on Formal Functions: Zariski's Connectedness Theorem. If $f\colon X \to Y$ is a proper map of complex spaces, and $\mathcal{O}_Y \to f_*\mathcal{O}_X$ is an isomorphism, then the fibres $f^{-1}(y)$ are connected for all $y \in Y$. The reason we even need formal functions in the algebraic proof of this theorem is that the Zariski topology is too coarse, and so we can't use actual open sets to construct a section of $\mathcal{O}_X$ that doesn't push forward to a section on $\mathcal{O}_Y$. On the other hand, the topology for complex analytic spaces (basically, ringed spaces that are locally isomorphic to zero sets of sets of holomorphic functions in a domain in $\mathbf{C}^n$) is fine enough that we can prove this quite easily: Proof. First, $f$ is surjective, and so $f^{-1}(y)$ is non-empty for all $y \in Y$. Now suppose $f^{-1}(y)$ is disconnected; then, there exists an open neighborhood $U$ of $f^{-1}(y)$ which is disconnected.
By shrinking the neighborhood if necessary, we can assume $U = U_1 \cup U_2$ has the form $f^{-1}(V)$ for $V$ a neighborhood of $y$ in $Y$ (by the closedness of $f$; see, e.g., [Grauert–Remmert Lem. 2.3.1]), and that $U_1 \cap U_2 = \emptyset$. The section of $\mathcal{O}_X$ which is $1$ on $U_1$ and $0$ on $U_2$ gives a section $\varphi$ of $\mathcal{O}_Y$ on $V$ by the isomorphism $\mathcal{O}_Y \overset{\sim}{\to} f_*\mathcal{O}_X$; this is a contradiction since $\varphi(y) = 0$ and $\varphi(y) = 1$. $\blacksquare$ Granted this fact, the subtleties in the complex analytic case are actually things that, in algebraic geometry, we often take for granted. The two central results are: Grauert's Direct Image theorem [Grauert–Remmert, Thm. 10.4.6]. If $f\colon X \to Y$ is proper, and $\mathscr{F}$ is a coherent $\mathcal{O}_X$-module, then $f_*\mathscr{F}$ is a coherent $\mathcal{O}_Y$-module. Note this holds for higher direct images as well, but the point is that the proof is surprisingly difficult. On the other hand, for Hartshorne (in the projective case) this follows (in Cor. II.5.20) by the fact that (in the noetherian case) the push-forward of a quasi-coherent sheaf is quasi-coherent, and the module of global sections on a projective scheme is finitely-generated over the base ring. The other central result is: Remmert's Proper Mapping Theorem [Grauert–Remmert, Thm. 10.6.1]. If $f\colon X \to Y$ is proper, then the image set $f(X)$ is an analytic subset of $Y$. This is a direct corollary of the Direct Image theorem (since $f(X) = \operatorname{Supp} f_*(\mathcal{O}_X)$ is analytic), but can be proved independently using the extension theorem of Remmert–Stein. In any case, the proofs seem to be much more subtle than in Hartshorne, Exc. II.4.4, where you prove the same result in the algebraic context. With the facts above you can prove the Stein factorization theorem in exactly the same way as in, say, Hartshorne, Cor. III.11.5. What is nice, though, is that the intuition that the Stein factorization first contracts connected components of fibers to points, and then gives a finite cover of your target, is actually true. So suppose you have a proper morphism $f\colon X \to Y$ of complex spaces. We define a level set of $f$ to be any connected component of a fibre $f^{-1}(y)$ of $y \in Y$. Denote the set of level sets of $f$ by $Y'$, and let $p \colon Y' \to Y$ be the natural map taking a level set in $f^{-1}(y)$ to $y$. Now assign to each $x \in X$ the connected component of $f^{-1}(f(x))$ which contains $x$, and call this map $f'\colon X \to Y'$. We then have the factorization $$X \overset{f'}{\longrightarrow} Y' \overset{p}{\longrightarrow} Y$$ The map $f'$ is surjective, and so we can endow $Y'$ with the quotient topology. Then, $f'$ is proper, and $p$ is a finite continuous map (it has finite fibers and is proper). Now give $Y'$ the structure of a ringed space by letting $f_*'(\mathcal{O}_X)$ be its structure sheaf. In this setting, we have the following Theorem [Bănică–Stănăşilă, Thm. III.2.12]. The ringed space $(Y',f_*'(\mathcal{O}_X))$ is a complex space, and is isomorphic to $Z := \operatorname{\mathbf{Spec}} f_*\mathcal{O}_X$. In the reduced case, unraveling the definition of $\operatorname{\mathbf{Spec}} f_*\mathcal{O}_X$ gives an even more explicit description of the factorization as follows, following [Bell–Narasimhan, Thm. 2.10].
First, since $f_*(\mathcal{O}_X)$ is coherent as an $\mathcal{O}_Y$-module by Grauert's Direct Image theorem, for any $y \in Y$ there is a neighborhood $U$ such that $f_*(\mathcal{O}_X)$ is generated by holomorphic functions $h_1,\ldots,h_k$ over $\mathcal{O}_Y$. Then, the map $g_U\colon f^{-1}(U) \to U \times \mathbf{C}^k$ where $x \mapsto (f(x),h_1(x),\ldots,h_k(x))$ is proper and its image is an analytic subset $Z_U$ by Remmert's Proper Mapping theorem. The $h_j$ are constant on connected components of fibres of $f$ by the maximum modulus principle, and so the restriction of the projection $U \times \mathbf{C}^k \to U$ to $Z_U$ is finite. You can then glue these $Z_U$ together to get the space $Z$.<|endoftext|> TITLE: Find the remainder when ${{5^5}^5}^5$ is divided by $24$ QUESTION [9 upvotes]: Find the remainder when ${{5^5}^5}^5$ is divided by $24$ I tried using congruence modulo. $$5^2\equiv1\mod{24}$$ $$5^5=125\mod{24}$$ But this does not give the correct answer. REPLY [3 votes]: Proceeding via your method gives: $$5^2\equiv1\mod{24}$$ $$(5^2)^2\equiv(1)^2\mod{24}$$ $$5\cdot(5^4)\equiv5\cdot(1)\mod{24}$$ $$(5^5)\equiv5\mod{24}$$ So, we have: $${5^5}^5\equiv{{5}^5}\mod{24}\equiv5 \mod{24}$$ You can continue this as many times as you want. In the end, you'll have a remainder of 5.<|endoftext|> TITLE: Why was $\aleph$ (aleph) chosen for infinities? QUESTION [10 upvotes]: Why did Cantor choose a letter from the Hebrew alphabet to represent infinities, rather than using some Greek letter? REPLY [11 votes]: Cantor had Jewish roots, which is probably why he was familiar with the Hebrew alphabet. But it's unlikely to be the reason de jure or de facto for the choice. From Georg Cantor: His Mathematics and Philosophy of the Infinite By Joseph Warren Dauben: Not wishing to invent a new symbol himself, he chose the aleph, the first letter of the Hebrew alphabet. The choice was especially clever, as he was happy to admit, since the Hebrew aleph served simultaneously to represent the number one, and the transfinite numbers, as cardinal numbers, were themselves infinite unities. (p. 179) The author then continues with something that looks like an anecdotal ex post facto explanation about how this represented a new beginning.<|endoftext|> TITLE: Prove inequality $1 < \frac{1}{n} + \frac{1}{n+1} + \ldots + \frac{1}{3n-1} < 2$ QUESTION [6 upvotes]: Prove the inequality $1 < \frac{1}{n} + \frac{1}{n+1} + \ldots + \frac{1}{3n-1} < 2$ for all $n \in \mathbb{N}$. I've done the right hand side, but can't do the left side of the inequality. For the right: $\frac{1}{n} + \ldots + \frac{1}{3n-1} < \underbrace{\frac{1}{n} + \ldots + \frac{1}{n}}_{2n} = 2$ Now I haven't been able to make progress with the left. REPLY [7 votes]: Hint: Show $\dfrac{1}{n} \lt \dfrac{1}{n+k}+\dfrac{1}{3n-1-k} \lt \dfrac{2}{n} $ for $0 \le k \lt n$. Then sum over $k$.<|endoftext|> TITLE: An Analogue of Chinese Remainder Theorem for Groups QUESTION [9 upvotes]: I am trying to prove the following analogue of the Chinese remainder theorem for groups: Let $G$ be a group and let $H_1, \dots, H_n$ be its normal subgroups such that their indices $[G : H_1], \dots, [G : H_n]$ are pairwise coprime. Then we have $$G/(H_1 \cap \cdots \cap H_n) \cong G/H_1 \times \cdots \times G/H_n.$$ I think that a good strategy would be to try to prove that the mapping $\phi$ defined by $$\phi(g(H_1 \cap \cdots \cap H_n)) = (gH_1, \dots, gH_n)$$ is an isomorphism, but I am not sure how to do this.
REPLY [5 votes]: The strategy you have is not the most rigorous one (why is $\phi$ well defined? why is $\phi$ a group morphism?). I would suggest beginning with: $$\psi:G\rightarrow G/H_1\times \dots\times G/H_n $$ $$g\mapsto (gH_1,\dots, gH_n) $$ This is a group morphism that will factor through $H_1\cap\dots \cap H_n$, and this will prove at the same time (and rigorously) that $\phi$ is well defined, a group morphism, and one-to-one. Clearly $Ker(\psi)=H_1\cap\dots\cap H_n$. Indeed, $\psi(g)$ is trivial iff for all $1\leq i\leq n$ we have $gH_i=H_i$ iff for all $1\leq i\leq n$ $g\in H_i$. Hence $\psi$ factors through $H_1\cap\dots\cap H_n$ by: $$\phi:G/H_1\cap\dots \cap H_n\rightarrow G/H_1\times \dots\times G/H_n $$ $$g H_1\cap \dots \cap H_n\mapsto (gH_1,\dots, gH_n) $$ Remark here that I did not use the fact that the indices are coprime to each other. Of course, we will use it to show that $\phi$ is onto. Denote $d_i:=[G:H_i]$. Denote $H:=H_1\cap\dots \cap H_n$. Remark that: $$[G:H]=[G:H_i][H_i:H]=d_i[H_i:H]$$ Hence $d_i$ divides $[G:H]$ for all $i$; since they are pairwise coprime, their product also divides $[G:H]$. Denoting $d:=d_1\dots d_n$ we get that: $$d\text{ divides } |G/H| $$ But $G/H$ is isomorphic to $Im(\phi)$, contained in $G/H_1\times\dots\times G/H_n$, which has cardinality $d_1\times\dots\times d_n=d$, so $|Im(\phi)|=d$ and $\phi$ is onto.<|endoftext|> TITLE: Find a thousand natural numbers such that their sum equals their product QUESTION [23 upvotes]: The question is to find a thousand natural numbers such that their sum equals their product. Here's my approach: I worked on this question for lesser cases: \begin{align} &2 \times 2 = 2 + 2\\ &2 \times 3 \times 1 = 2 + 3 + 1\\ &3 \times 3 \times 1 \times 1 \times 1 = 3 + 3 + 1 + 1 + 1\\ &7 \times 7 \times 1 \times 1 \times \dots\times 1 \text{ (35 times) } = 7 + 7 + 1 + 1 .... \text{ (35 times) } \end{align} Using this logic, I seem to have reduced the problem in the following way. $a \times b \times 1 \times 1 \times 1 \times\dots\times 1 = a + b + 1 + 1 +...$ This equality is satisfied whenever $ ab = a + b + (1000-n)$ Or $ abc\cdots n = a + b + \dots + n + ... + (1000 - n)$ In other words, I need to search for $n$ numbers such that their product is greater by $1000-n$ than their sum. This allows the remaining spots to be filled by $1$'s. I feel like I'm close to the answer. Note: I have got the answer thanks to Henning's help. It's $112 \times 10 \times 1 \times 1 \times 1 \times ...$ ($998$ times)$ = 10 + 112 + 1 + 1 + 1 + ...$ ($998$ times) This is for the two variable case. Have any of you found answers for more than two variables? $abc...n = a + b + c + ... + n + (1000 - n) $ REPLY [5 votes]: A solution with four numbers different from 1 is: $$16 \times 4 \times 4 \times 4 \times 1^{996} = 16 + 4 + 4 + 4 + (996 \times 1) = 1024$$ How was this found? $1024 = 2^{10}$ appeared to be a promising candidate for the sum and product because it's slightly larger than 1000 and has many factors. The problem then was to find $a, b, c, d$ such that: $$a+b+c+d=10$$ $$2^a+2^b+2^c+2^d=1024 -(1000-4)=28$$ None of $a$-$d$ can be more than 4 since $2^5=32 > 28$.
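(As an aside, this last search is also easy to hand to a computer; a minimal Python sketch of my own, not part of the original answer:

from itertools import combinations_with_replacement

for a, b, c, d in combinations_with_replacement(range(5), 4):
    if a + b + c + d == 10 and 2**a + 2**b + 2**c + 2**d == 28:
        print(a, b, c, d)   # prints 2 2 2 4

confirming the exponents found by hand below.)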
But on trying $a=4$, which reduces the problem to finding $b,c,d$ such that $b+c+d=6$ and $2^b+2^c+2^d = 12$, the solution $b=c=d=2$ was apparent.<|endoftext|> TITLE: Positivity of the alternating sum associated to at most five subspaces QUESTION [7 upvotes]: Let $V_1 , V_2 , \dots , V_n $ be vector subspaces of $ \mathbb{C}^m$ and let $$\alpha = \sum_{r=1}^n (-1)^{r+1} \sum_{ \ i_1 < i_2 < \cdots < i_r } \dim(V_{i_1} \cap \cdots \cap V_{i_r})$$ For $n = 2$ we have the equality $ \alpha = \dim(\sum_{i = 1}^{n} V_i) $; it's false for $n>2$, see this answer. For $n=3$, we have only the inequality $ \alpha \ge \dim(\sum_{i = 1}^{n} V_i) $; it's false for $n>3$, see this post. For $n>5$, the inequality $\alpha \ge 0$ is false in general, see the comment of Darij Grinberg below. Question: Is it true that $\alpha \ge 0$, in the case $n \le 5$? Remark: I think this question is interesting in itself; it also admits applications in the interaction between representation theory and the lattice of subgroups. REPLY [5 votes]: Here is a purely combinatorial proof for $n=5$. We first generalize the problem as follows: Let $B_5$ be the boolean lattice of rank $5$, i.e. the subset lattice of $\{1,2,3,4,5\}$. Lemma: Let $\phi: B_5 \to \mathbb{R}_{\ge 0}$ be a map satisfying, for all $a, b \in B_5$: $(1)$ $ \ $ $a \le b \Rightarrow \phi(a) \le \phi(b)$ [poset morphism] $(2)$ $ \ $ $\phi(a \vee b) + \phi(a \wedge b) \ge \phi(a ) + \phi(b)$ and let $a_i= \{i \}^\complement$ be the complement of $\{i \}$ in $\{1,2,3,4,5\}$; then $$\sum_{r=1}^5 (-1)^{r+1}\sum_{i_1 < i_2 < \cdots < i_r} \phi( a_{i_1} \wedge \cdots \wedge a_{i_r}) \ge 0$$ proof: we reorganize the alternating sum into the sum of the following components: $\phi(\{1,2,3,4\}) - \phi(\{1,2,3\}) - \phi(\{1,2,4\}) + \phi(\{1,2\})$ $\phi(\{1,3,4,5 \}) - \phi(\{1,3,4\}) - \phi(\{1,3,5\}) + \phi(\{1,3\})$ $\phi(\{2,3,4,5\}) - \phi(\{2,3,4\}) - \phi(\{3,4,5\}) + \phi(\{3,4\})$ $\phi(\{1,2,4,5 \}) - \phi(\{2,4,5\}) - \phi(\{1,4,5\}) + \phi(\{4,5\})$ $\phi(\{1,2,3,5\}) - \phi(\{1,2,5\}) - \phi(\{2,3,5\}) + \phi(\{2,5\})$ $ \phi(\{1,5\}) - \phi(\{1\})$ $ \phi(\{2,4\}) - \phi(\{2\})$ $ \phi(\{2,3\}) - \phi(\{3\})$ $ \phi(\{1,4\}) - \phi(\{4\})$ $ \phi(\{3,5\}) - \phi(\{5\})$ $ \phi(\emptyset) $ but the first five components are nonnegative by $(2)$, the next five components are nonnegative by $(1)$, and the last is nonnegative by definition $\square$. Now the answer to the question is yes, by observing that the map $\phi$ defined by $$\phi( a_{i_1} \wedge \cdots \wedge a_{i_r}) = \dim (V_{i_1} \cap \cdots \cap V_{i_r})$$ satisfies $(1)$ and $(2)$. For $(1)$ it is immediate. For $(2)$ we use the following equality and inclusion: $\dim(U+V) = \dim(U) + \dim(V) - \dim(U \cap V)$ and $(A\cap B ) + (A\cap C) \subseteq A$.<|endoftext|> TITLE: Continuous but not compact operator on $L^2(0,\infty)$ QUESTION [6 upvotes]: Define the following operator on $L^2(0,\infty)$: $$Tf(x)=\frac{1}{x} \int_0^xf(y)dy,\quad f\in L^2(0,\infty).$$ I would like to see that it is continuous but not compact. So, this is an integral operator with kernel $k(x,y)=\frac{1}{x}\mathbf1_{(0,x)}(y)$. The problem is that $k$ is not even in $L^2((0,\infty)^2)$. Thus, the usual bound $\|Tf\|_2\leq \|k\|_2\cdot \|f\|_2$ does not work. Hence I am not even sure why the operator is well-defined. I.e. why is $Tf$ even in $L^2(0,\infty)$? And how might we show continuity/non-compactness? REPLY [4 votes]: Let $\def\norm#1{\left\|#1\right\|_{L^2}}f \in L^2(0,\infty)$.
We have \begin{align*} \norm{Tf}^2 &= \def\abs#1{\left|#1\right|}\int_0^\infty \frac 1{x^2} \abs{\int_0^x f(y)\, dy}^2 \, dx\\ &\le \int_0^\infty \frac 1{x^2}\left(\int_0^x y^{1/4}y^{-1/4}\abs{f(y)}\, dy\right)^2 \, dx \\ &\le \int_0^\infty \frac 1{x^2} \left[\left( \int_0^x y^{-1/2}\, dy\right)^{1/2}\left(\int_0^x y^{1/2}\abs{f(y)}^2 \, dy\right)^{1/2}\right]^2 \, dx\\ &= \int_0^\infty \frac {2x^{1/2}}{x^2} \int_0^x y^{1/2}\abs{f(y)}^2 \, dy\, dx\\ &= \int_0^\infty \int_y^\infty \frac 2{x^{3/2}} \, dx\cdot y^{1/2}\abs{f(y)}^2 \, dy\\ &= \int_0^\infty \frac 4{y^{1/2}} \cdot y^{1/2}\abs{f(y)}^2\, dy\\ &= 4 \norm f^2 \end{align*} Hence, $\norm{Tf}\le 2\norm f$, proving the continuity of $T$. To see that $T$ is not compact, let $f_n := n^{1/2} \chi_{[0,1/n]}$, then $\norm{f_n} = 1$ and, for $n < m$, $$ \norm{Tf_n - Tf_m}^2 \ge \int_0^{1/m} (m^{1/2} - n^{1/2})^2\, dx = \left(1 - \left(\frac nm\right)^{1/2}\right)^2 $$ So $(Tf_n)$ does not have a convergent subsequence.<|endoftext|> TITLE: Every variety is isomorphic to an intersection of a linear space and a Veronese surface QUESTION [5 upvotes]: "Deduce that any projective variety is isomorphic to an intersection of a Veronese variety with a linear space" I've been trying to solve this exercise from Joe Harris' book. I can see that if a variety $X\subset \mathbb{P}^n$ has only polynomials of degree $d$, then in the coordinates of $\nu_d(\mathbb{P}^n)$ it is, indeed, a linear space. The problem is when I have different degrees. If I take a family of polynomials of maximum degree $d$, and I have a polynomial $f_0$ in this family, with $m=\deg f_0 < d$ …<|endoftext|> TITLE: Notation: $\mathbb{Z}[\sqrt{-5}]$ QUESTION [5 upvotes]: Show that the elements 2,3, and $1 \pm \sqrt{-5}$ are irreducible elements of $\mathbb{Z}[\sqrt{-5}]$. I have never seen this notation before. From another post I am interpreting this to mean the following: $\mathbb{Z}[\sqrt{-5}] = \{a_{0}+a_{1}\sqrt{-5}+ \dots + a_{n}(\sqrt{-5})^{n} \colon a_{i} \in \mathbb{Z} \}$. Am I correct or is it something else? I do not need the proof, just verification of notation. REPLY [3 votes]: You are correct, but it suffices to see that $\mathbb Z [\sqrt{-5}] = \{a_0 + a_1\sqrt{-5}: a_i \in \mathbb Z \}$<|endoftext|> TITLE: BMO2 2016 Number Theory Problem QUESTION [11 upvotes]: Suppose that $p$ is a prime number and that there are different positive integers $u$ and $v$ such that $p^2$ is the mean of $u^2$ and $v^2$. Prove that $2p−u−v$ is a square or twice a square. Can anyone find a proof? I can't really see any way to approach the problem. REPLY [2 votes]: Note that we have $2p^2=u^2+v^2$, or $(p-u)(p+u)=(v-p)(v+p)$. WLOG, suppose $u < p < v$. …<|endoftext|> TITLE: Double Integration over finite plane. QUESTION [5 upvotes]: $$\phi(z)=\frac{\sigma}{4\pi\varepsilon_0}\int_{\frac{-a}{2}}^{\frac{a}{2}}\int_{\frac{-a}{2}}^{\frac{a}{2}}\frac{1}{\sqrt{x^2+y^2+z^2}}~dx~dy$$ I'm not sure how to do this integral. For the first integral w.r.t. $x$ I tried to substitute $x=\sqrt{y^2+z^2}\sinh{\theta}\implies dx=\sqrt{y^2+z^2}\cosh\theta~d\theta$. The integral then becomes: $$\phi(z)=\frac{\sigma}{4\pi\varepsilon_0}\int_{\frac{-a}{2}}^{\frac{a}{2}}\int_{-\text{?}}^\text{?}1~d\theta~dy$$ But the bounds are $\text{?}=\operatorname{arcsinh}{\frac{a}{2\sqrt{y^2+z^2}}}$ since arcsinh is an odd function. However this just makes it even harder to solve. So what is the best way to do this integral? If it helps I will give the context of the question.
I am asked to find the strength of the electric field at a height z above the centre of a square sheet with constant charge density $\sigma$ and side lengths $a$. REPLY [4 votes]: By symmetry, the electric field along the $z$ axis will have only a $z$ component with $$E_z(0,0,z)=\left.-\frac{\partial \phi(x,y,z)}{\partial z}\right|_{(0,0,z)}$$ Therefore, we have $$E_z(0,0,z)=\frac{\sigma z}{\pi \epsilon_0}\int_0^{a/2}\int_0^{a/2}\frac{1}{(x^2+y^2+z^2)^{3/2}}\,dx\,dy$$ Rather than attempt a transformation to cylindrical coordinates, we proceed here with integrating directly in Cartesian coordinates. We evaluate the inner integral by making the substitution $x=\sqrt{y^2+z^2}\tan \theta$. This yields $$\begin{align} \int_0^{a/2}\frac{1}{(x^2+y^2+z^2)^{3/2}}\,dx&=\left.\left(\frac{x}{(y^2+z^2)\sqrt{x^2+y^2+z^2}}\right)\right|_{0}^{a/2}\\\\ &=\frac{a/2}{(y^2+z^2)\sqrt{(a/2)^2+y^2+z^2}} \end{align}$$ Therefore, we have reduced the expression for the electric field along the $z$ axis to $$E_z(0,0,z)=\frac{\sigma z}{\pi \epsilon_0}\int_0^{a/2}\frac{a/2}{(y^2+z^2)\sqrt{(a/2)^2+y^2+z^2}}\,dy$$ To evaluate the remaining integral, we make the standard trigonometric substitution $y=\sqrt{(a/2)^2+z^2}\tan(u)$. Then, we have $$\begin{align} E_z(0,0,z)&=\frac{\sigma z(a/2)}{\pi \epsilon_0}\int_0^{\arctan\left(\frac{a/2}{\sqrt{(a/2)^2+z^2}}\right)}\,\,\frac{\cos(u)}{z^2+(a/2)^2\sin^2(u)}\,du\\\\ &=\frac{\sigma z(a/2)}{\pi \epsilon_0}\int_0^{(a/2)/\sqrt{(a/2)^2+(a/2)^2+z^2}} \frac{1}{z^2+(a/2)^2v^2}\,dv\\\\ &=\frac{\sigma }{\pi \epsilon_0}\arctan\left(\frac{(a/2)^2}{z\sqrt{2(a/2)^2+z^2}}\right) \end{align}$$ NOTE 1: We can recover, of course, the potential along the $z$ axis by integrating the electric field. In this problem, integration by parts facilitates. This is left as an exercise for the reader. NOTE 2: As $a\to \infty$, the arctangent goes to $\pi/2\,\text{sgn}(z)$, and we recover the familiar result of the electric field from a uniform surface charge on an infinite surface, namely $\vec E(0,0,0^{\pm})=\pm \hat z\,\frac{\sigma}{2\epsilon_0}$. NOTE 3: As $z\to 0^{\pm}$, the arctangent goes to $\pi/2 \,\text{sgn}(z)$ and the electric field is $\vec E(0,0,0^{\pm})=\pm \hat z\,\frac{\sigma}{2\epsilon_0}$ NOTE 4: As $z\to \pm \infty$, the arctangent goes to $\frac{(a/2)^2}{z^2}\,\text{sgn}(z)$ and the electric field is $\vec E(0,0,z\to \pm \infty)=\pm \hat z\,\frac{\sigma a^2}{4\pi \epsilon_0\,z^2}$, which appears as the field from a point charge $q=\sigma a^2$.<|endoftext|> TITLE: Proving number of partitions of $n$ to $3$ parts at most. QUESTION [5 upvotes]: I have an exercise, to prove that the number of partitions of $n$ to at most $3$ integers is $\frac{(n+3)^2}{12}$ rounded. I tried to prove by induction but I don't know how. REPLY [4 votes]: The number of partitions of $n$ into at most $k$ integers is equal to the number of partitions of $n$ into integers no bigger than $k$. Let's call this number $p(n,k)$. In general $$p(n,k)=p(n,k-1)+p(n-k,k)$$ starting with $p(0,0)=1$ and $p(n,0)=0$ for $n\not=0$ and zero if $n$ or $k$ are negative. 
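A quick way to gain confidence in this recurrence and in the claimed closed form is to run them against each other; here is a hypothetical Python check (not part of the original answer):

from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    # number of partitions of n into at most k parts, via p(n,k) = p(n,k-1) + p(n-k,k)
    if n == 0 and k == 0:
        return 1
    if n < 0 or k <= 0:
        return 0
    return p(n, k - 1) + p(n - k, k)

# the closed form: p(n,3) equals (n+3)^2/12 rounded to the nearest integer
assert all(p(n, 3) == round((n + 3) ** 2 / 12) for n in range(200))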
For a given $k$, the patterns repeat every $k!$ terms, so we can find and show by induction that $p(m,1)=1$ and $p(2m,2)=p(2m+1,2)=m+1$ for $m \ge 0$ and thus $p(6m+0,3)= 3m^2+3m+1 = \frac{((6m+0)+3)^2}{12} + \frac1{4}$ $p(6m+1,3)= 3m^2+4m+1 = \frac{((6m+1)+3)^2}{12} - \frac1{3}$ $p(6m+2,3)= 3m^2+5m+2 = \frac{((6m+2)+3)^2}{12} - \frac1{12}$ $p(6m+3,3)= 3m^2+6m+3 = \frac{((6m+3)+3)^2}{12} + 0$ $p(6m+4,3)= 3m^2+7m+4 = \frac{((6m+4)+3)^2}{12} - \frac1{12}$ $p(6m+5,3)= 3m^2+8m+5 = \frac{((6m+5)+3)^2}{12} - \frac1{3}$ Since all the fractions on the right hand side are smaller than $\frac12$, this leads to the conclusion that you can round and say $$p(n,3)= \left[\frac{(n+3)^2}{12}\right].$$<|endoftext|> TITLE: Powers of a prime as one more than the square of an integer... QUESTION [6 upvotes]: Given a fixed prime $p$, are there finitely many positive integers $k$ such that $p^k = n^2 +1$ for some $n$? REPLY [4 votes]: The more general equation $n^2+1 = y^k$ for any positive integers $n$ and $y$ was proven to have no solutions for $k\geq 3$ by Lebesgue in 1850. Obviously, there are no solutions for $k=2$ since the smallest difference between positive perfect squares is 3. See Bugeaud for complete details (including generalization to $n^2+D = y^k$).<|endoftext|> TITLE: Categorification of algebra structures QUESTION [9 upvotes]: This might be a bit of a soft question. Take a $\mathbb{C}$-linear category. Form the complex vector space spanned by its objects modulo exact sequences. This construction is, as far as I know, the reason why $\mathbb{C}$-linear categories are thought of as categorified vector spaces. Now put a monoidal structure on the category. This gives a multiplication on the vector space from before, making it into the Grothendieck algebra of the category. So monoidal structures on linear categories are categorifications of associative algebras. The property of the algebra being associative corresponds to the structure of the associator on the category, as we would expect in a categorification. As a further warmup, a commutative algebra categorifies to a braided monoidal structure. Again the property of commutativity is categorified to the braided structure in the category. Now, imagine an involutive algebra. This is the strange bit. An involution is structure on the algebra. But the categorification is a property on the category, the existence of duals! (The dual of an object gives its involution in the algebra, and two duals to the same object are always isomorphic, so it's well-defined.) What's going on? Why is the usual order of stuff, structure and property reversed? Are there other examples of this phenomenon? Remark: I'm lying a bit, the property that the involution squares to the identity categorifies to a pivotal structure, which is a monoidal natural transformation from the identity to the double dual. REPLY [9 votes]: Here's a simpler example: in ring theory, picking a basis of your ring (as a module over the ground commutative ring $k$) is extra structure. But in category theory there is sometimes a "distinguished basis" (e.g. the simple objects in an abelian category), which will pass to just a basis after decategorifying. For example, Hecke algebras have a famous basis called the Kazhdan-Lusztig basis which is used to define Kazhdan-Lusztig polynomials, which are important in representation theory. It turns out that this basis can be thought of as coming from a categorification of Hecke algebras using what are called Soergel bimodules.
So category theory sometimes furnishes "property-like structure" (stuff that's not preserved by functors but is uniquely determined by categorical considerations) in a way that's invisible and just looks like some random extra structure when you decategorify. This is arguably a major reason to care about categorification: to see how the category theory suggests more structure than is visible at the decategorified level.<|endoftext|> TITLE: How do we conclude that $f(x)=0, \forall x\in \mathbb{R}$ ? QUESTION [6 upvotes]: Suppose that $f:\mathbb{R}\rightarrow \mathbb{R}$ is a periodic function such that $\displaystyle{\lim_{x\rightarrow +\infty}f(x)=0}$. I want to show that $f(x)=0$ for all $x\in \mathbb{R}$. $$$$ Let $T$ be the period of $f$, then $f(x)=f(x+T)$. Therefore, we have that $$0=\lim_{x\rightarrow +\infty}f(x)=\lim_{x\rightarrow +\infty}f(x+T)=\lim_{x\rightarrow +\infty}f(x+2T)=\dots =\lim_{x\rightarrow +\infty}f(x+nT), \ \forall n\in \mathbb{Z}$$ So, $$|f(x)|=|f(x+T)|=|f(x+2T)|=\dots =|f(x+nT)|\leq \epsilon$$ $$$$ But how exactly do we conclude that $f(x)=0, \forall x\in \mathbb{R}$ ? REPLY [4 votes]: Let $x\in \mathbb{R}$. Then $x_n:=x+nT$ tends to $+\infty$ ($T$ is the period of $f$) and $f(x_n)=f(x)$ for all $n$, since $f$ is $T-$periodic. Since $\displaystyle \lim_{x\rightarrow +\infty}f(x)=0$, we get that $\displaystyle \lim_{n\rightarrow +\infty}f(x_n)=0$ and by the uniqueness of the limit, we get $f(x)=0$, as desired.<|endoftext|> TITLE: Is there an easy criterion to determine whether given polynomials form a complete intersection? QUESTION [7 upvotes]: Suppose we have homogeneous polynomials in $s$ variables $F_1, ..., F_n$ with integer coefficients. Let $X$ be a variety (or algebraic set) defined by the simultaneous equations $$ F_1(\mathbf{x}) = ... = F_n(\mathbf{x}) = 0. $$ I was wondering under what conditions $F_1, \ldots, F_n$ form a complete intersection. Is there a relatively easy to check condition that the polynomials have to satisfy to guarantee $X$ to be a complete intersection? I would greatly appreciate any comments/references! Thank you very much! REPLY [4 votes]: It seems to me that you're basically asking for an algorithm. The one I previously proposed fails (as noted by Mariano), so let me try something else. Disclaimer. I am not an expert at all. I hope I haven't made any more mistakes. Reference. A great reference is Eisenbud's Commutative Algebra with a View Toward Algebraic Geometry. Answer to your question. The first thing you need to do, of course, is compute a minimal set of generators for your ideal $I$. We can actually do more: compute a minimal free resolution (see Eisenbud, Lemma 19.4 for definition and existence) $$0 \to P_r \to \cdots \to P_0 \to I \to 0$$ of the finitely generated module $I$ over the (graded!) ring $k[x_0,\ldots,x_n]$. It is a theorem that any other resolution will contain this minimal resolution as a summand (see Eisenbud, Theorem 20.2 for the local case, and note that it also works in the graded case, cf. Exercise 20.1). This justifies the name 'minimal resolution'. In particular, the rank of $P_0$ is the number of generators required, and this also equals the dimension of $I/(x_0,\ldots,x_n)I$ as a $k$-vector space (Lemma 19.4). Once you have a minimal set of generators $g_1,\ldots,g_r$ for $I$, it is relatively straightforward: now we just have to check that $V(I) \subseteq \mathbb P^n$ has dimension $n-r$. For simplicity, let's assume $r \leq n$, so that $V(I) \neq \varnothing$.
You would have to think about how you can check something is a complete intersection when there are more than $n$ generators. (For example, $(x_0,\ldots,x_n)^2$ cannot be generated by fewer than $\frac{(n+1)(n+2)}{2}$ elements, yet its vanishing set is empty, which is (?) a complete intersection.) There are many ways of proceeding. Let me list a few: One can try to compute the transcendence degree of the field $K = \operatorname{Frac} k[x_0,\ldots,x_n]/I$. In characteristic $0$, it is necessary and sufficient that $\Omega_{K/k}^1$ has dimension $n+1-r$, i.e. that the Jacobian of $g_1,\ldots,g_r$ has maximal rank (i.e. $r$) at the generic point of $k[x_0,\ldots,x_n]/I$. One can probably compute the transcendence degree through elimination theory as well (again, not an expert). Eisenbud might tell you a bit about what I mean by elimination theory. The first two approaches only work when $I$ is a prime ideal. In general, one can compute the Hilbert polynomial of the graded ring $k[x_0,\ldots,x_n]/I$. It should be a polynomial of degree $n-r$ for $V(I)$ to be a complete intersection. See Theorem 15.26 in Eisenbud for a way to compute the Hilbert polynomial. Etc. So the best case is if $f_1,\ldots,f_s$ are a minimal set of generators of a prime ideal $I$ over a field of characteristic $0$. In that case, you only have to check that the Jacobian is generically (on $V(I)$) of rank $s$. Again, there are multiple ways you can do this: You can check that not all of the $s\times s$-minors are in $I$. See Eisenbud, section 15.10.1 for computability of ideal membership. You can check that not all of the vanishing loci of the $s\times s$-minors contain $V(I)$, for instance by some geometric argument (this is not an algorithm, but maybe how you would do it in practice). Example: The easiest example I know of something that is not a complete intersection is the twisted cubic in $\mathbb P^3$. If the embedding is \begin{align*} \mathbb P^1 &\to \mathbb P^3\\ [u,v] &\mapsto [u^3, u^2v, uv^2, v^3], \end{align*} then it is cut out by $xz-y^2$, $yw-z^2$, and $xw-yz$. This is a minimal set of generators, for one can check that $I/(x,y,z,w)I$ is $3$-dimensional. The Jacobian is $$\left(\begin{array}{ccc}z & -2y & x & 0 \\ 0 & w & -2z & y \\ w & -z & -y & x\end{array}\right).$$ The $3\times 3$-minors are \begin{align} 3zwy-w^2x-2z^3,\\ zwx-2y^2w+z^2y,\\ -2z^2x+xyw+y^2z,\\ 3xyz-2y^3-wx^2, \end{align} all of which are easily seen to be in $I$. We conclude that $V(I)$ is not a complete intersection. Of course, if we already knew that these three relations are minimal relations cutting out a $\mathbb P^1$, then it is clear that it is not a complete intersection, by an easy dimension count.<|endoftext|> TITLE: Alternating group on infinite sets QUESTION [5 upvotes]: It is well known that the only nontrivial proper normal subgroup of $S_n$ is $A_n$ when $n\geqslant 5$, and that $A_n$ is also simple. Furthermore, $A_{\infty}$, the group of even permutations of $\mathbb{N}$ with finite support, is also simple. This led me to wonder about the following: Take a general set $X$ with cardinality $\kappa>\aleph_0$ from which we can generate the group $\text{Sym}\,X$. Questions: can we define an alternating group on $X$? If so, does it remain the only nontrivial proper normal subgroup of $\text{Sym}\, X$? REPLY [8 votes]: If $\operatorname{Sym} X$ denotes the group of finite-support permutations, you can define $\operatorname{Alt} X$ as the group of even finite-support permutations.
If $N\subseteq \operatorname{Sym} X$ is a normal subgroup, then $N\cap \operatorname{Sym} F$ is normal in $\operatorname{Sym} F$ for any finite $F\subset X$. It follows that if $N$ contains any nontrivial even permutation $\sigma$, it must contain all of $\operatorname{Alt} X$ (since for any finite set $F$ with at least $5$ elements containing the support of $\sigma$, it must contain all of $\operatorname{Alt} F$, and every element of $\operatorname{Alt} X$ is in some such $\operatorname{Alt} F$). Similarly, if $N$ contains any odd permutation, it must be all of $\operatorname{Sym} X$. So the only nontrivial proper normal subgroup of $\operatorname{Sym} X$ is $\operatorname{Alt} X$. If you want to consider the group of all permutations of $X$ (with arbitrary support), then there are more normal subgroups. For instance, for any infinite cardinal $\lambda\leq|X|$, the subgroup of permutations with support of cardinality $<\lambda$ is a normal subgroup. In fact, these together with the finite-support alternating group are all the nontrivial proper normal subgroups of the full permutation group (I don't know the proof of this off the top of my head; this is known as the "Baer-Schreier-Ulam theorem"). In particular, this indicates that there is no reasonable notion of "sign" for permutations with infinite support (there is no "$<\lambda$-support alternating subgroup" among the normal subgroups unless $\lambda=\aleph_0$).<|endoftext|> TITLE: Circle is similar to a polygon with infinite number of sides QUESTION [5 upvotes]: It is known from the time of Euclid that a circle is similar to a polygon with an infinite number of sides. But this ^^ is informal. Do you know any formalization where it appears that a circle is a polygon with an infinite number of sides? REPLY [6 votes]: The idea of viewing a circle as a (regular?) infinite-sided polygon goes back at least as far as Nicholas of Cusa (sometimes referred to as Cusanus). The idea was picked up by Kepler who used it in area calculations, many years before the idea was developed formally in a mathematically adequate form. The torch was picked up by Leibniz who most likely introduced the term infinitesimal so that the polygon now becomes infinitesimal-sided. In the 18th century, Leibniz's ideas were developed by his followers like Johann Bernoulli, and his followers' followers like Leonhard Euler, who used both infinitesimals and infinite numbers in a routine way. In the 19th century, infinitesimals were still in common use chez Cauchy, who used them to define continuity of a function $f$ as follows: $f$ is continuous if each infinitesimal increment $\alpha$ always produces an infinitesimal change $f(x+\alpha)-f(x)$ in the function. The next generation of mathematicians developed set-theoretic foundations that ultimately formalized the real numbers, but failed to formalize the infinitesimal procedures of the founders of the calculus. The work on infinitesimals continued throughout the second half of the 19th century and the beginning of the 20th century, by people like Paul du Bois-Reymond, Veronese, Hahn, Dehn, Hilbert, and others. Skolem in 1933 introduced a model of Peano arithmetic containing infinite numbers. It was not until the 1960s that Abraham Robinson pulled all of these strings together by creating a modern framework for working with infinitesimal and infinite numbers that meets current standards of mathematical rigor.
In Robinson's framework, one can approximate the circle by a regular polygon with $H$ sides, where $H$ is an infinite hypernatural number. More precisely, the circle is the standard part of the infinite-sided polygon.<|endoftext|> TITLE: Looking for a non-combinatorial proof that $a! \cdot b! \mid (a+b)!$ QUESTION [6 upvotes]: (I use $a$ and $b$ to denote natural numbers.) Question. Without appealing to the combinatorial interpretation of $$\frac{(a+b)!}{a! b!}$$ as a multinomial coefficient, is there a proof that for all $a$ and $b$, we have $$a! \cdot b! \mid (a+b)! \qquad?$$ Basically, I want a proof that just uses some clever algebra. I was thinking that maybe we can use modular arithmetic, and try to understand the value of $(a+b)!$ modulo $a! \cdot b!$, and eventually show that this is $0$. Ideas, anyone? REPLY [2 votes]: There's a truly beautiful proof by Tim Gowers. And a similar question has been asked before.<|endoftext|> TITLE: Find the value of $\sum_{n =1}^\infty \frac 1 {5^{n+1}-5^n+1}$ QUESTION [10 upvotes]: $$\sum_{n = 1}^\infty \dfrac 1 {5^{n+1}-5^n+1}$$ I can rewrite the denominator as $4\times5^n+1$ to confirm that the series converges, but how do I calculate its actual sum? The series is not telescoping, nor can I use partial fractions. I get confused by the $+1$ in the denominator. Thanks a lot REPLY [3 votes]: The series doesn't have a closed form (except for a very complicated one involving the q-polygamma function, as was said in a comment), however, we can transform it to get much better convergence. $$\frac{1}{4 \cdot 5^n+1}=\frac{1}{4 \cdot 5^n} \left(1-\frac{1}{4 \cdot 5^n}+\frac{1}{4^2 \cdot 5^{2n}}-\frac{1}{4^3 \cdot 5^{3n}}+\cdots \right)$$ $$\sum_{n=1}^{\infty} \frac{1}{4^k \cdot 5^{k n}}=\frac{1}{4^k} \left( \dfrac{1}{1-\dfrac{1}{5^{k}}}-1 \right)=\frac{1}{4^k (5^k-1)}$$ $$\sum_{n=1}^{\infty} \frac{1}{4 \cdot 5^n+1}=\sum_{k=1}^{\infty} (-1)^{k+1} \frac{1}{4^k (5^k-1)}=0.06001587909991328$$ Why is this series better? Since it is alternating, it provides upper and lower bounds, unlike the first series, which converges monotonically from below. It also just gives much better approximations, both in terms of their numerical value and the size of their denominators. Let's denote: $$A_N=\sum_{n=1}^{N} \frac{1}{4 \cdot 5^n+1}$$ $$B_N=\sum_{k=1}^{N} (-1)^{k+1} \frac{1}{4^k (5^k-1)}$$ Now compare: $$A_2=\frac{122}{2121}=0.0575200$$ $$A_3=\frac{7027}{118069}=0.0595161$$ $$B_2=\frac{23}{384}=0.0598958$$ $$B_3=\frac{1429}{23808}=0.0600218$$ $$0.0598958<\sum_{n=1}^{\infty} \frac{1}{4 \cdot 5^n+1}<0.0600218$$ Here is the plot of both $A_N$ and $B_N$ up to $N=10$.<|endoftext|> TITLE: Prove that $a(x+y+z) = x(a+b+c)$ QUESTION [5 upvotes]: If $(a^2+b^2 +c^2)(x^2+y^2 +z^2) = (ax+by+cz)^2$ then prove that $a(x+y+z) = x(a+b+c)$. I did expansion on both sides and got: $a^2y^2+a^2z^2+b^2x^2+b^2z^2+c^2x^2+c^2y^2=2(abxy+bcyz+cazx) $ but can't see any way to prove $a(x+y+z) = x(a+b+c)$. How should I proceed? REPLY [5 votes]: By the C-S inequality, $(ax+by+cz)^2\le (a^2+b^2+c^2)(x^2+y^2+z^2)$ with equality iff $(x,y,z)=\lambda(a,b,c)$ for some $\lambda$ or $(a,b,c)=(0,0,0)$. But, if $(a,b,c)=(0,0,0)$, the problem is trivially true. If that is not the case, then $x=\lambda a$, $y=\lambda b$ and $z=\lambda c$. Then $x+y+z=\lambda(a+b+c)$. Multiplying both sides by $a$ and remembering $x=\lambda a$ yields the proof.
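As a symbolic cross-check (a hypothetical sympy snippet, not part of the original answer), Lagrange's identity exhibits the Cauchy-Schwarz defect as a sum of three squares, which is exactly why equality forces proportionality:

from sympy import symbols, expand

a, b, c, x, y, z = symbols('a b c x y z')

# Cauchy-Schwarz defect vs. Lagrange's three-square identity
defect = (a**2 + b**2 + c**2) * (x**2 + y**2 + z**2) - (a*x + b*y + c*z)**2
squares = (a*y - b*x)**2 + (a*z - c*x)**2 + (b*z - c*y)**2
assert expand(defect - squares) == 0  # the identity holds, so defect = 0 forces all three squares to vanish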
REPLY [5 votes]: HINT: To do it without linear algebra, expand both sides and subtract like terms to leave $$a^2y^2+a^2z^2+b^2x^2+b^2z^2+c^2x^2+c^2y^2=2abxy+2acxz+2bcyz\;.$$ Notice that you can rearrange this as $$(a^2y^2-2abxy+b^2x^2)+(a^2z^2-2acxz+c^2x^2)+(b^2z^2-2bcyz+c^2y^2)=0\;,$$ or $$(ay-bx)^2+(az-cx)^2+(bz-cy)^2=0\;.$$ What can you conclude about $ay-bx$, $az-cx$, and $bz-cy$? What can you conclude about $a(x+y+z)$ and $x(a+b+c)$?<|endoftext|> TITLE: how to prove an element is non-zero in a tensor-product QUESTION [11 upvotes]: I was studying the following example from Atiyah & MacDonald's Introduction to Commutative Algebra: Let $x$ be the non-zero element in $N := \mathbf{Z}/ 2\mathbf{Z}$, $M := \mathbf{Z}$, and $M' := 2 \mathbf{Z}$. The element $2 \otimes x$ is zero in $M \otimes N$, but non-zero in $M' \otimes N$. I suppose this can be seen by the fact that $2 \otimes x$ generates $M' \otimes N$, and the tensor-product $M' \otimes N$ is non-zero. However, I was wondering what one would do if that element weren't a generator of $M' \otimes N$. So my question is: What are the methods to prove that an element is non-zero in a tensor-product of modules? Thanks a lot! REPLY [26 votes]: If you don't have any further knowledge of $M \otimes N$, this is the very first step and an instant consequence of the universal property of the tensor-product: $m \otimes n \in M \otimes N$ is non-zero iff there exists some $R$-module $T$ and a bilinear map $M \times N \to T$ which maps $(m,n)$ to something non-zero. Proof: If $m \otimes n \neq 0$, let $T = M \otimes N$ and consider the map $M \times N \to M \otimes N$ from the universal property. On the other hand, if such a $T$ and such a map exist, then by the universal property we get a map $M \otimes N \to T$ which maps $m \otimes n$ to something non-zero. In particular $m \otimes n$ is non-zero. In our case, choose $T=\mathbb Z/2\mathbb Z$. The map $2\mathbb Z \times \mathbb Z/2\mathbb Z \to \mathbb Z/2\mathbb Z$, $(a,b) \mapsto \frac{ab}{2}$, is bilinear and maps $(2,1)$ to $1 \neq 0$, hence $0 \neq 2 \otimes 1 \in 2\mathbb Z \otimes \mathbb Z/2\mathbb Z$.<|endoftext|> TITLE: Is there a formula for the area under $\tanh(x)$? QUESTION [7 upvotes]: I understand trigonometry but I've never used hyperbolic functions before. Is there a formula for the area under $\tanh(x)$? I've looked on Wikipedia and Wolfram but they don't say if there's a formula or not. I tried to work it out myself and I got this far: $\tanh(x) = {\sinh(x)\over\cosh(x)} = {1-e^{-2x}\over 1+e^{-2x}} = {e^{2x}-1\over e^{2x}+1} = {e^{2x}+1-2\over e^{2x}+1} = 1-{2\over e^{2x}+1}$ Now I'm stuck. I don't know if I'm on the right track or not. REPLY [3 votes]: Why all those difficult substitutions and manipulations? Just observe that we can use the logarithmic derivative formula: $$\int\frac{f'(x)}{f(x)}\ \text{d} x = \ln|f(x)| + C$$ hence, since $\tanh(x) = \frac{\sinh(x)}{\cosh(x)} = \frac{f'(x)}{f(x)}$ with $f(x)=\cosh(x)$, the result follows at once: $$\int\frac{\sinh(x)}{\cosh(x)}\ \text{d}x = \ln(\cosh(x)) + C$$ (no absolute value is needed here, since $\cosh(x)>0$).<|endoftext|> TITLE: Numerical evidence of law of iterated logarithm (random walk) QUESTION [5 upvotes]: The law of the iterated logarithm states that for a random walk $$S_n = X_1 + X_2 + \dots + X_n$$
with $X_i$ independent random variables such that $P(X_i = 1) = P(X_i = -1) = 1/2$, we have $$\limsup_{n \rightarrow \infty} S_n / \sqrt{2 n \log \log n} = 1, \qquad \rm{a.s.}$$ Here is Python code to test it:

import numpy as np
import matplotlib.pyplot as plt

N = 10 * 1000 * 1000
B = 2 * np.random.binomial(1, 0.5, N) - 1  # N independent +1/-1, each of them with probability 1/2
B = np.cumsum(B)  # random walk
plt.plot(B)
plt.show()

n = np.arange(1, N + 1)  # B[k] is S_{k+1}, so the index runs from 1, not 0
C = (B / np.sqrt(2 * n * np.log(np.log(n))))[3:]  # discard the first terms, where log(log(n)) is undefined or negative
M = np.maximum.accumulate(C[::-1])[::-1]  # limsup, see http://stackoverflow.com/questions/35149843/running-max-limsup-in-numpy-what-optimization
plt.plot(M)
plt.show()

Question: I have done it lots of times, but the ratio is nearly always decreasing to 0, instead of having a limit 1. Where is the problem? Here's the kind of plot I get most often for the ratio (which should approach $1$): REPLY [3 votes]: I think the problem is that the number of steps $n$ available in a numerical simulation is finite. Notice this: if $Y_n=\frac{S_n}{\sqrt{2n\log\log n}}$, by properties of the random walk we know $\mathbb{E}[Y_n]=\frac{\mathbb{E}[S_n]}{\sqrt{2n\log\log n}}=0$ and $$ Var[Y_n]=\frac{Var[S_n]}{2n\log\log n}=\frac{n}{2n\log\log n}=\frac{1}{2\log\log n}\to 0 $$ which implies $Y_n$ converges to 0 in distribution (we can prove it using Chebyshev's inequality). In particular, if we define $Y_{k,n}=\max_{k\leq \ell \leq n}Y_\ell$ (which is the variable used in your code, instead of the variable $Z_k=\sup_{\ell \geq k}Y_{\ell}$ that one should use), then "$Y_{k,n}\searrow_{k\to n} Y_n$", which in turn converges to 0. So, in the large majority of cases, in your simulations $Y_{k,n}$ should converge to 0.<|endoftext|> TITLE: Quaternions: Why is the angle $\frac{\theta}{2}$? QUESTION [7 upvotes]: The equation for creating a quaternion from an axis-angle representation is $$x'= x \sin\left(\frac \theta 2\right)$$ $$y' = y \sin\left(\frac \theta 2\right)$$ $$z' = z \sin\left(\frac \theta 2\right)$$ $$w' = \cos\left(\frac \theta 2\right)$$ But why $\frac \theta 2$? Why not just $\theta$? REPLY [5 votes]: Hint: Consider the unit quaternion $$ u=\cos \theta + \bf{i}\sin \theta $$ and the pure imaginary quaternion ${\bf j}$. Then the product: $ u{\bf j}u^* $ (where $u^*=\cos \theta - {\bf i}\sin \theta$ is the conjugate of $u$) is: $$ (\cos \theta + {\bf i}\sin \theta){\bf j}(\cos \theta - {\bf i}\sin \theta)= (\cos \theta + {\bf i}\sin \theta)({\bf j}\cos \theta + {\bf k}\sin \theta)= $$ $$ =(\cos^2 \theta -\sin^2 \theta){\bf j}+2{\bf k}\cos \theta \sin \theta = {\bf j}\cos 2 \theta+{\bf k} \sin 2 \theta $$ that is, the vector ${\bf j}$ rotated by an angle $2 \theta$ around the ${\bf i}$ axis. Now, do the same for the other axes ${\bf j}$ and ${\bf k}$ and extend this result to any rotation around an axis ${\bf u}= u_1{\bf i}+u_2{\bf j}+u_3{\bf k}$.<|endoftext|> TITLE: Classification of homomorphisms $\mathbb Q \to \mathbb C^\times$ QUESTION [6 upvotes]: Are there any textbooks which discuss/classify the injective group homomorphisms from $\mathbb Q$ (under addition) into $\mathbb C \setminus \{0\}$ (under multiplication)? REPLY [4 votes]: The multiplicative group $\mathbb C^*$ is the product of $\mathbb R_{>0}$ with the unit circle $C$. Any homomorphism decomposes into a modulus and an argument that way, and its kernel is the intersection of the kernels of the two components.
So the set you are looking for is the Cartesian product of the homomorphisms to the unit circle with the homomorphisms to the positive reals, minus the pairs where neither member is injective. Group homomorphisms $\mathbb Q\to\mathbb R_{>0}$ are uniquely determined by how they map $1,$ since $n$-th roots are unique in $\mathbb R_{>0}.$ Conversely, every positive real $r$ defines a homomorphism $$\varphi_r:\mathbb Q\to\mathbb R_{>0}:q\mapsto r^q.$$ Thus the second component of our Cartesian product is indexed by the positive real numbers $r.$ The homomorphisms are injective for all $r\neq1.$ The interesting part of the question is to describe the injective homomorphisms from $\mathbb Q$ to the unit circle. A homomorphism $\psi:\mathbb Q\to C$ is determined by its values on reciprocals of prime powers. There, of course, we do not have complete freedom of choice because we have to satisfy the constraints $$\psi(p^{-n})=\psi(p^{-(n+1)})^p,\ n=0,1,\ldots$$ For a complete description of the unitary characters of the additive group of rational numbers I would like to refer to older answers in this forum such as Representation theory of the additive group of the rationals? For a unitary character $\psi$ to be injective a necessary and sufficient condition is that $\pi^{-1}\arg\psi(1)\notin\mathbb Q.$<|endoftext|> TITLE: Survey of varieties of non-standard analysis? QUESTION [7 upvotes]: Is there a reliable, reasonably up-to-date, survey article doing a "compare and contrast" on varieties of non-standard analysis? REPLY [4 votes]: The best starting point is the detailed account by Hamkins here which will hopefully be published in a more formal venue eventually. Two approaches that are not discussed there in detail are Hrbacek's relative set theory, as well as the $\alpha$-theory of Di Nasso and collaborators. Also, the new edition of Nonstandard analysis for the working mathematician by Loeb and Wolff seems promising; see here. However I haven't been able to get my hands on a copy yet. Such an article is online here, soon to appear in Real Analysis Exchange.<|endoftext|> TITLE: Does there exist a surjective homomorphism from $(\mathbb R,+)$ to $(\mathbb Q,+)$ ? QUESTION [5 upvotes]: Does there exist a surjective homomorphism from $(\mathbb R,+)$ to $(\mathbb Q,+)$? (I know that there 'is' a 'surjection', but I don't know whether any surjective homomorphism from $\mathbb R$ to $\mathbb Q$ exists or not. Please help. Thanks in advance) REPLY [11 votes]: Sure. Fix a Hamel basis $B$ for $\Bbb R$ over $\Bbb Q$, fix some $r\in B$ and map $B\setminus\{r\}$ to $0$, and $r$ to $1$. Then you're done. If you want to avoid the axiom of choice, you can't. It is consistent that every homomorphism from $\Bbb R$ to $\Bbb Q$ is continuous, and therefore its image is connected. But this means that every homomorphism is $0$. (This is a consequence of "Every set of reals is Lebesgue measurable" as well as "Every set of reals has the Baire property", at least in the presence of Dependent Choice. Both of these have been shown consistent with Dependent Choice by Solovay starting with an inaccessible, and later Shelah proved that for the Baire property you do not need an inaccessible cardinal.) REPLY [7 votes]: Hint Write $\mathbb R$ as a vector space over $\mathbb Q$. Pick a basis $B$ and define a vector space homomorphism by sending $B$ to $\{ 1\}$. Show it has the desired property.
Second solution Use Zorn's Lemma to define a maximal homomorphism from a subgroup of $\mathbb R$ to $\mathbb Q$ which is the identity on $\mathbb Q$.<|endoftext|> TITLE: What does the integral of position with respect to time mean? QUESTION [22 upvotes]: The integral of acceleration with respect to time is velocity. The integral of velocity with respect to time is position. What is the integral of position with respect to time, and what does it mean? Please explain so that your answer is understandable by someone who took calculus I. REPLY [2 votes]: Suppose there's a lever that you can move from $0$ to $1$. Letting $f(t)$ be the position of that lever over time as you move it, you can think of the lever's velocity $f'$, acceleration $f''$, etc. Now imagine that lever controls a floodgate. The floodgate is closed at the $0$ position and opens as you move the lever towards $1$. The integral $\int_0^t f(x)\,\mathrm{d}x$ measures the accumulation of water that has spilled from the floodgate over time. If you leave the lever at some fixed position, the water will be flowing out at a constant rate. The absement in this situation measures the accumulated water, equal to $\int_0^t f(x)\,\mathrm{d}x$, and is a measure of the sustained position of the lever.<|endoftext|> TITLE: Would this solution of the limit of the sequence be correct? QUESTION [24 upvotes]: Let's suppose that I have the sequence $a_n = \frac{1}{n^2} + \frac{2}{n^2} + \frac{3}{n^2} + \ldots + \frac{n}{n^2}, n \in \mathbb{N}$. And I have to find the limit of the sequence as $n \rightarrow \infty$. Would the below solution be correct? The sequence $a_n$ can be rewritten as \begin{align} a_n &= \sum_{k=1}^{n} \frac{k}{n^2} \\ &= \frac{1}{n^2} \sum_{k=1}^{n} k \\ &= \frac{1}{n^2} \cdot \frac{n(n+1)}{2} \\ &= \frac{n+1}{2n}. \end{align} Thus we have \begin{align} \lim_{n \rightarrow \infty} a_n = \lim_{n \rightarrow \infty} \frac{n+1}{2n} = \lim_{n \rightarrow \infty} \frac{n}{2n} = \frac{1}{2}. \end{align} REPLY [19 votes]: Your solution is fine. Here is another way to get the result. I know it is not the easiest way to solve it, but the advantage is that you can generalize to $b_n = \sum_{k=1}^{n} \frac{k^2}{n^3}$ for instance. Notice that $$a_n = \frac{1}{n} \sum_{k=0}^{n} \frac{k}{n}$$ looks like a Riemann sum for $f : x \mapsto x$ since $$a_n = \sum_{k=0}^{n} f\left(\frac{k}{n}\right) \left(\frac{k+1}{n} - \frac{k}{n} \right)$$ Therefore the limit of the sequence $(a_n)_{n≥1}$ is $$\int_0^1 f = \left[\frac{x^2}{2}\right]_0^1 = \frac{1}{2}$$<|endoftext|> TITLE: A Increasing Multiplicative Functional Equation where $nm$ is a cube QUESTION [6 upvotes]: Let $f:\mathbb{N}\rightarrow\mathbb{N}$ be a strictly increasing function such that $$f(2)=2$$ and $$f(mn)=f(m)f(n)$$ for all positive integers $m,n$ such that $mn$ is a perfect cube. Prove that $f(n)=n$ for every positive integer $n$. This problem was inspired by the $24$th W.L. Putnam Mathematical Competition, whose proof can be seen here on page 8. Here is my attempted proof. Note that $f(9)f(192)=f(24)f(72),\quad f(3)f(72)=f(24)f(9)$. Multiplying these two equations gives that $f(24)^{2}=f(3)f(192)$. Note that $f(24) \le f(9)f(3)-3$, and $f(192) \ge f(3)f(9)+165$. This gives us that $f(3)^2(f(9)^2-f(9))-f(3)(6f(9)+165)+9 \ge 0$. However, this route proved unsuccessful. I had tried to show that $f(24)=24$, which would imply by induction that $f(n)=n$ for all $n$. Is there a way to easily solve the problem above? Any help would be appreciated.
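As a quick sanity check that the multiplicativity hypothesis really applies to each pair used above, all four products are perfect cubes; here is a hypothetical Python verification (not part of the original question):

def is_cube(m):
    # integer cube test, robust to floating-point rounding of m ** (1/3)
    r = round(m ** (1 / 3))
    return any((r + d) ** 3 == m for d in (-1, 0, 1))

# 9*192 = 24*72 = 1728 = 12^3 and 3*72 = 24*9 = 216 = 6^3
for u, v in [(9, 192), (24, 72), (3, 72), (24, 9)]:
    print((u, v), u * v, is_cube(u * v))  # all True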
REPLY [4 votes]: Because it is strictly increasing, all we have to do is prove there are an infinite number of solutions to $f(n) = n$, since all the numbers in between two solutions are also solutions themselves. Using induction on $f(8 \cdot x^3) = f(8) \cdot f(x^3)$, the problem reduces to showing there exists $n \ge 4$ with $f(n)=n$. It looks like you got at least this far with your $f(24)$ idea. Below is my attempt: We have $f(8^k) = f(8)^k$ and also $f(27^k) = f(27)^k$ for all $k$. So for any $x$ which is a power of $8$, $a = \frac{\log{f(8)}}{\log{8}}$, $f(x) = x^{a}$ and for any $x$ which is a power of $27$, $b = \frac{\log{f(27)}}{\log{27}}$, $f(x) = x^{b}$. For all $t$ there is an $s$ such that $8^s \lt 27^t \lt 8^{s+1}$, and since $f$ is increasing, $f(8^s) \lt f(27^t) \lt f(8^{s+1})$, which expands to $(8^{s})^{a} \lt (27^{t})^{b} \lt (8^{s+1})^{a}$. But this means $a = b$. For example, if $b \lt a$, then for sufficiently large $t$ it will be the case that $(27^t)^b \lt (8^s)^a$, which contradicts $f$ being increasing. This is because $27^t$ is no more than $8$ times larger than $8^s$. Likewise, assuming $a \lt b$ leads to a contradiction. All of the above works for any two cubes instead of $8$ and $27$, so by transitivity, there exists $c \ge 3$ such that for all $x$, $f(x^3) = x^c$ (note $c \ge 3$ because $f(2)=2$ and strict monotonicity force $f(8) \ge 8$, i.e. $2^c \ge 8$). Now, $f(125) = 5^c$ and $f(128) = \frac{f(8)}{f(4)} \cdot f(64) = 2 \cdot 4^c$; here $f(4)f(2)=f(8)$ gives $f(4)=2^{c-1}$, and $f(128)f(4)=f(512)=f(8)f(64)$ since $512 = 8^3$ and $8 \cdot 64 = 512$ are cubes. Using $f(128) \gt f(125)$, $3 \le c \lt \frac{\log{2}}{\log{5} - \log{4}} \approx 3.10628 \lt 4$. So $c = 3$, since it must be an integer for $f(x^3) = x^c$ to always be an integer, and that proves the proposition.<|endoftext|> TITLE: How can I show that $X$ and $Y$ are independent and find the distribution of $Y$? QUESTION [10 upvotes]: $X_1,X_2,\dots,X_n$ is an i.i.d. sequence of standard Gaussian random variables. \begin{align}X&=\frac{1}{n}(X_1+X_2+\dots+X_n) \\[0.2cm] Y&=(X_1-X)^2+(X_2-X)^2+\dots+(X_n-X)^2\end{align} How can I show that $X$ and $Y$ are independent? How can I find the distribution of $Y$? Can we use the following method to show that $X$ and $Y$ are independent? $$Cov(X,Y)=0$$ Or is there any other proper way? REPLY [2 votes]: Partial solution here. $Y$ is a quadratic form.
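Before the algebra, a quick numerical sanity check may be reassuring (a hypothetical numpy simulation, not part of the original answer; zero sample correlation alone does not prove independence, but the output is consistent with it and with $Y\sim\chi^2_{n-1}$):

import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 100_000
X = rng.standard_normal((trials, n))        # each row: X_1, ..., X_n
Xbar = X.mean(axis=1)                       # sample means
Y = ((X - Xbar[:, None]) ** 2).sum(axis=1)  # centered sums of squares

print(np.corrcoef(Xbar, Y)[0, 1])  # approximately 0
print(Y.mean())                    # approximately n - 1 = 4, the chi^2_{n-1} mean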
In particular, let $\mathbf{1} \in \mathbb{R}^n$ be a vector of ones, and define $$P_\mathbf{1} = \mathbf{1}\left(\mathbf{1}^{T}\mathbf{1}\right)^{-1}\mathbf{1}^{T} = \mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T}\text{.}$$ Let $$\mathbf{X}=\begin{bmatrix} X_1 \\ X_2 \\ \vdots\\ X_n \end{bmatrix}\text{.}$$ Then $$P_\mathbf{1}\mathbf{X} = \mathbf{1}\left(\dfrac{1}{n}\right)\begin{bmatrix} 1 & 1 & \cdots & 1 \end{bmatrix}\begin{bmatrix} X_1 \\ X_2 \\ \vdots\\ X_n \end{bmatrix} = \mathbf{1}\left(\dfrac{1}{n}\right)\sum_{i=1}^{n}X_i = \mathbf{1}X = X\mathbf{1}\text{.}$$ Furthermore, $$\begin{align} Y &= \sum_{i=1}^{n}(X_i-X)^2 \\ &= (\mathbf{X}-X\mathbf{1})^{T}(\mathbf{X}-X\mathbf{1}) \\ &= (\mathbf{X}-P_\mathbf{1}\mathbf{X})^{T}(\mathbf{X}-P_\mathbf{1}\mathbf{X}) \\ &= [(\mathbf{I}-P_{\mathbf{1}})\mathbf{X}]^{T}(\mathbf{I}-P_{\mathbf{1}})\mathbf{X} \\ &= \mathbf{X}^{T}(\mathbf{I}-P_{\mathbf{1}})^{T}(\mathbf{I}-P_{\mathbf{1}})\mathbf{X}\text{.} \end{align}$$ $\mathbf{I}$ is obviously symmetric, and notice $$P_{\mathbf{1}}^{T} = \left[\mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T}\right]^{T} = \mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T} = P_{\mathbf{1}}$$ hence $P_{\mathbf{1}}$ is symmetric, so that $\mathbf{I}-P_{\mathbf{1}}$ is symmetric as well, and $$(\mathbf{I}-P_{\mathbf{1}})^{T} = \mathbf{I}-P_{\mathbf{1}}\text{.}$$ This gives $Y = \mathbf{X}^{T}(\mathbf{I}-P_{\mathbf{1}})^{2}\mathbf{X}$. Observe also that $$P_\mathbf{1}^2 = \left(\mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T}\right)^2 = \mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T}\mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T} = \mathbf{1}\left(\dfrac{n}{n}\right)\left(\dfrac{1}{n}\right)\mathbf{1}^{T} = \mathbf{1}\left(\dfrac{1}{n}\right)\mathbf{1}^{T} = P_{\mathbf{1}}$$ hence $P_{\mathbf{1}}$ is idempotent. Notice $$(\mathbf{I}-P_{\mathbf{1}})^{2} = \mathbf{I}^2-2\mathbf{I}P_{\mathbf{1}}+P_{\mathbf{1}}^2 = \mathbf{I}-2P_{\mathbf{1}}+P_{\mathbf{1}} = \mathbf{I}-P_{\mathbf{1}}$$ since $P_{\mathbf{1}}^2 = P_{\mathbf{1}}$, as shown earlier. Hence $Y = \mathbf{X}^{T}(\mathbf{I}-P_{\mathbf{1}})\mathbf{X}$. It can be shown that the rank of $\mathbf{I}-P_{\mathbf{1}}$ is $n - 1$. Using Theorem 7 here (p. 29), you can show that $$Y \sim \sigma^2\chi^2_{n-1}(\boldsymbol{\mu}^{\prime}(\mathbf{I}-P_{\mathbf{1}})\boldsymbol{\mu})$$ If $$Q = \left(\mathbf{1}^{T}\mathbf{1}\right)^{-1}\mathbf{1}^{T} = \left(\dfrac{1}{n}\right)\mathbf{1}^{T}$$ then $X = Q\mathbf{X}$. For the remainder of the proof, see this page, under "Examples."<|endoftext|> TITLE: How can I get to Mars with a polynomial? QUESTION [11 upvotes]: In order to get to Mars you must win a video game. The video game chooses $10$ points $(a_i,b_i)$ where $a_i$ and $b_i$ are single-digit integers, and places a disk with radius $1/3$ on each of the points. You must find a polynomial $f$ such that the graph of $f$ hits all $10$ disks. However, you must choose your polynomial before seeing where the disks are. Find a polynomial that guarantees you a trip to Mars. The only clue I have for this problem is that for some point $x=h$ I must have a polynomial which is quite "steep" at that point and resembles the graph of a vertical line. In this way I am guaranteed that any other lattice point $(a_i,b_i)$ above it is included. The only way I can imagine a polynomial doing that is when I have a product of positive expressions, i.e. for $x=h$ I would have something like $(x-h)(x+r)(x+d)$ where $r,d$ are some large numbers.
The main conceptual difficulty I am facing here is to produce this kind of behavior for every single digit integer on the number line... REPLY [12 votes]: We know that $(a_i,b_i)\in[0,9]\times [0,9]$ (since $a_i,b_i$ are single-digit integers. If negative numbers are considered "single-digit integers", too, then this generalizes to those as well). So, what about $$p(x)=100(x-0)(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)(x-7)(x-8)(x-9)$$ A plot made with Wolfram Mathematica 10.0: So if you wanna go to Mars, this is probably a good polynomial to try.<|endoftext|> TITLE: Difference between a limit and accumulation point? QUESTION [14 upvotes]: What is the exact difference between a limit point and an accumulation point? An accumulation point of a set is a point, every neighborhood of which has infinitely many points of the set. Alternatively, it has a sequence of DISTINCT terms converging to it? Whereas a limit point simply has a sequence which converges to it? i.e. something like $(1)^n$ which is a constant sequence. Is this the right idea? As much detail and intuition as possible would be greatly appreciated. REPLY [8 votes]: The difference is very simple. 1) As you wrote: an accumulation point of a set is a point, every neighborhood of which contains infinitely many points of the set. 2) But a limit point is a special accumulation point. No matter how small a neighborhood you choose, all members $a_n$ (after a certain $n$) are in the neighborhood of the limit point. The requirement for all members (after a certain $n$) is obviously stronger than the requirement for infinitely many points/members. So every limit point is an accumulation point, but not every accumulation point is a limit point. Also note that: (1) if a sequence has a limit point, then that's the only accumulation point of the sequence; (2) if a sequence has more than one accumulation point, then the sequence has no limit point. Try to prove these two; it will clear up your confusion.<|endoftext|> TITLE: Finding the square root of $6-4\sqrt{2}$ QUESTION [6 upvotes]: I found this standupmaths video on YouTube about the A4 paper puzzle. I really liked the puzzle and managed to get the answer by using a calculator. However, the answer (which I won't spoil), led me to think that the equation to solve it might simplify - which it does. In the middle of the simplification, I got this expression: $$\sqrt{6-4\sqrt{2}}$$ which for other reasons I suspected to be equal to: $$\ 2-\sqrt{2}$$ I tried squaring the above and, sure enough, it does give: $$6-4\sqrt{2}$$ My question is, how would I have been able to find the square root of $$6-4\sqrt{2}$$ if I hadn't been able to guess at it? Is there a standard technique? I've tried looking on the web but don't really even know what to search for. REPLY [2 votes]: We have to assume that the nested radical can be rewritten as the difference of two other radicals (surds): $\sqrt{6-4\sqrt{2}}=\sqrt{d}-\sqrt{e}$ with $d>e>0$ (a sum cannot work here, since squaring a sum would produce $+2\sqrt{de}$, while we need a negative irrational part). Squaring both sides gives us $$6-4\sqrt{2}=d+e-2\sqrt{de}$$ This can be solved by finding $2$ numbers that sum to $6$ and multiply to $4^2\cdot 2/4=8$ ($2\sqrt{de}=4\sqrt{2}\rightarrow de=8$). The numbers $4$ and $2$ work, so $$\sqrt{6-4\sqrt{2}}=\sqrt{4}-\sqrt{2}=2-\sqrt{2}$$<|endoftext|> TITLE: What does $f|_A$ mean? QUESTION [8 upvotes]: If $f$ is a function and $A$ is a set, what could the notation $$f|_A$$ mean? Is it perhaps "restricted to set $A$"? REPLY [3 votes]: The notation $f|_A$ is probably best understood via a meaningful example.
Before giving one (I hope it will be useful, anyway), it would probably be good to consult two decent references: 1) The Wikipedia page on the restriction of a function. 2) Abstract Algebra by Dummit and Foote (p. 3, 3rd Ed.). The relevant portion from the Wiki blurb: Let $f\colon E\to F$ be a function from a set $E$ to a set $F$, so that the domain of $f$ is in $E$ (i.e., $\operatorname{dom}f\subseteq E$). If $A\subseteq E$, then the restriction of $f$ to $A$ is the function $f|_A\colon A\to F$. Informally, the restriction of $f$ to $A$ is the same function as $f$, but is only defined on $A\cap\operatorname{dom} f$. Wiki's "informal" remark is the key part in my opinion. The following excerpt from Dummit and Foote's Abstract Algebra may be slightly more abstract, but I think a meaningful example will clear everything up. If $A\subseteq B$, and $f\colon B\to C$, we denote the restriction of $f$ to $A$ by $f|_A$. When the domain we are considering is understood we shall occasionally denote $f|_A$ again simply as $f$ even though these are formally different functions (their domains are different). If $A\subseteq B$ and $g\colon A\to C$ and there is a function $f\colon B\to C$ such that $f|_A=g$, we shall say $f$ is an extension of $g$ to $B$ (such a map $f$ need not exist nor be unique). Example: Let $g\colon\mathbb{Z}^+\to\{-1,1\}$ be defined by $g(x)=1$ and let $f\colon\mathbb{Z}\setminus\{0\}\to\{-1,1\}$ be defined by $f(x)=\dfrac{|x|}{x}$ (note the codomain must contain $-1$, since $f$ takes the value $-1$ on negative integers). Using the notation from the second paragraph above, we have $g\colon A\to C$ and $f\colon B\to C$, where $A = \mathbb{Z^+}$ $B=\mathbb{Z}\setminus\{0\}$ $C=\{-1,1\}$ and, clearly, $A\subseteq B$. Thus, we have the following: \begin{align} f|_A &\equiv f\colon\mathbb{Z^+}\to\{-1,1\}\tag{by definition}\\[0.5em] &= \frac{|x|}{x}\tag{by definition}\\[0.5em] &= \frac{x}{x}\tag{if $x\in\mathbb{Z^+}$, then $|x|=x$ }\\[0.5em] &= 1\tag{simplify}\\[0.5em] &\equiv g\colon\mathbb{Z^+}\to\{-1,1\}\tag{by definition}\\[0.5em] &= g. \end{align} Apart from some slight notational abuse, perhaps, the above example shows that $f$ is an extension of $g$ to $B$ since $f|_A=g$.<|endoftext|> TITLE: Examples of non-Euclidean domains which have a universal side divisor QUESTION [10 upvotes]: Let $R$ be a ring. A nonzero nonunit element $u$ of $R$ is called a universal side divisor if for every element $x$ of $R$ there is some element $z$ of $R$ such that $u$ divides $x - z$ in $R$ where $z$ is either zero or a unit, i.e., there is a type of division algorithm. This concept is used to demonstrate examples which are P.I.D.s but not Euclidean. The existence of universal side divisors is a weakening of the Euclidean condition. I seek examples of non-Euclidean domains which have universal side divisors. REPLY [4 votes]: I claim that $\pi:=11-3 \sqrt{10}$ is a universal side divisor in $\mathbb{Z}[\sqrt{10}]$. Since $\mathbb{Z}[\sqrt{10}]$ is not a UFD, it can't be Euclidean. Verification The element $\pi$ has norm $11^2-3^2 \cdot 10 = 31$, so $\mathbb{Z}[\sqrt{10}]/\pi$ is the field $\mathbb{F}_{31}$. In this quotient, $\sqrt{10} \equiv 11/3 \equiv 14$. Let $u$ be the unit $3+\sqrt{10}$, so $u \equiv 3+14 \equiv 17 \bmod \pi$. We want to show that every element of $\mathbb{Z}[\sqrt{10}]$ is either $0 \bmod \pi$ or congruent to a power of $u$ modulo $\pi$. In other words, it is enough to show that $u$ is a generator of the unit group of $\mathbb{Z}[\sqrt{10}]/\pi$ or, concretely, that $17$ is a generator of $\mathbb{F}_{31}^{\times}$. As it happens this is true.
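(A two-line hypothetical Python check of this last claim:)

# 17 generates F_31^x iff its powers exhaust all 30 nonzero residues mod 31
print(len({pow(17, k, 31) for k in range(1, 31)}) == 30)  # True: 17 is a primitive root mod 31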
Let's also check that $\mathbb{Z}[\sqrt{10}]$ is not a UFD: We have $\sqrt{10}^2 = 2 \cdot 5$ so, if we had unique factorization, we'd have to have $2$ split as $\alpha \beta$ with $N(\alpha) = N(\beta) = \pm 2$. But $x^2-10 y^2 = \pm 2$ is impossible modulo $5$. Method I wanted an infinite unit group, so there would be many units to chose from. The easiest way to get that was a real quadratic field. I wanted it NOT to be a PID, so that it wouldn't be Euclidean. So I took the first real quadratic field with class number bigger than $1$. I wanted the element $\pi$ to be a prime so that the unit group modulo $\pi$ would be cyclic, so I took the first prime in $\mathbb{Z}[\sqrt{10}]$. Then I checked, and I won! Even if the first prime hadn't worked, we expect that any non-perfect power is a primitive root infinitely often, so I'd expect to win eventually.<|endoftext|> TITLE: $G$ be a group of order $pn$ , where $p$ is a prime and $p>n$ , then is it true that any subgroup of order $p$ is normal in $G$? QUESTION [5 upvotes]: Let $G$ be a group of order $pn$ , where $p$ is a prime and $p>n$ , then is it true that any subgroup of order $p$ is normal in $G$ ? ( I know that any subgroup of index smallest prime dividing order of the group would be normal , but this thing is far away from it . Please help . Thanks in advance ) REPLY [10 votes]: Here is a very elementary proof, which uses Lagrange's Theorem, but not Sylow's Theorem. It is enough to prove that any two subgroups $P,Q$ of order $p$ are equal, because then we must have $gPg^{-1}=P$ for all $g \in G$, so $P$ is normal. So suppose that $P \ne Q$. Then $P \cap Q = \{1 \}$ by Lagrange. They are both cyclic so $P = \{x^i : 0 \le i < p \}$ and $Q = \{y^i : 0 \le i < p \}$ for some $x,y$. Since $|G| = pn < p^2$, the elements $x^iy^j$ with $0 \le i,j < p$ cannot all be distinct, so there is an equality $x^iy^j=x^ky^l$ with $(i,j) \ne (k,l)$. But then $x^{i-k} = y^{l-j}$, contradicting $P \cap Q = \{1 \}$.<|endoftext|> TITLE: Proving that $e^\pi=e^{-\pi}$ QUESTION [16 upvotes]: I've been stuck with this for a while now. I have this chain of reasoning that would imply $e^{-\pi}=e^\pi$, obviously false, since $e^\pi$ and $e^{-\pi}$ are two real distinct numbers and so I must have made an assumption somewhere that I cannot actually do. I know I have to be very careful when working with complex numbers, especially when they're in the exponents, and so I tried to make the steps as small as I could so that it would be easier to point out where it went wrong. \begin{align} e^{-\pi}&= e^{\pi\cdot -1}\tag{1}\\ &=e^{\pi\cdot i^2}\tag{2}\\ &= e^{\pi\cdot i\cdot i}\tag{3}\\ &= \left(e^{\pi\cdot i}\right)^i\tag{4}\\ &= (-1)^i\tag{5}\\ &=\left(\tfrac{1}{-1}\right)^i\tag{6}\\ &=\left((-1)^{-1}\right)^i\tag{7}\\ &=(-1)^{-1\cdot i}\tag{8}\\ &=(-1)^{-i}\tag{9}\\ &=\left(e^{i\pi}\right)^{-i}\tag{10}\\ &=e^{i\pi\cdot -i}\tag{11}\\ &=e^{-i^2\pi}\tag{12}\\ &=e^\pi\tag{13} \end{align} I suspect it has something to do with changing the base from $e$ to $-1$, but what does that mean? Are complex powers only defined for positive bases? Any help is appreciated. REPLY [20 votes]: Here is your "proof" presented differently: We have $e^{i\pi}=-1=\frac{1}{-1}=\frac{1}{e^{i\pi}}=e^{-i\pi}$. So far everything is right. Now our idea is to take both sides to the power of $i$: $(e^{i\pi})^i=(e^{-i\pi})^i$. The erroneous conclusion would appear if you used the identity $(a^b)^c=a^{bc}$. And here lies the problem: this identity doesn't hold for all complex numbers. 
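To see the branch dependence concretely, here is a hypothetical Python illustration (not part of the original answer); Python's complex power uses the principal logarithm $\operatorname{Log}(-1)=i\pi$, while the other choice $-i\pi$ yields the other value:

import cmath

print((-1) ** 1j)                        # exp(1j * Log(-1)) = exp(-pi), about 0.0432
print(cmath.exp(1j * (1j * cmath.pi)))   # same value, e^(-pi)
print(cmath.exp(1j * (-1j * cmath.pi)))  # other branch: e^(pi), about 23.1407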
(EDIT: in fact, this identity isn't always true if we have real numbers $a,b,c$, as leonbloy mentions in the comment. Keep that in mind!) One might also touch on the topic: What is $(e^{i\pi})^i$? Here we need to go back to the definition of exponentiation of complex numbers: $a^b=e^{b\ln a}$. However, there is a serious problem here: the complex logarithm is multivalued. Taking one branch of the logarithm, we have $\ln e^{i\pi}=i\pi$, so $(e^{i\pi})^i=e^{i\cdot i\pi}$ (NOTE: here we use the definition of complex exponentiation, not exactly the property $(a^b)^c=a^{bc}$), which is $e^{-\pi}$. However, at the same time we have $e^{i\pi}=e^{-i\pi}$, so we could say $\ln e^{i\pi}=-i\pi$. That way we get $(e^{i\pi})^i=e^{i\cdot (-i\pi)}=e^\pi$. So if you think about this for a while, the core of the problem is that the complex logarithm, and hence also exponentiation, are multivalued.<|endoftext|> TITLE: Show that $ \lim\limits_{n\to\infty}\frac{1}{n}\sum\limits_{k=0}^{n-1}e^{ik^2}=0$ QUESTION [28 upvotes]: TL;DR: The question is how do I show that $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{ik^2}=0$? More generally the question would be: given an increasing sequence of integers $(u_k)$ and an irrational number $\alpha$, how do I tell if $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi \alpha u_k}=0$? I'm not asking for a criterion for completely general sequences; an answer for sequences like $u_k=k^2$, $v_k=k!$ or $w_k=p(k)$ with $p\in \mathbf Z [X]$ would already be awesome. A little explanation about this question: In Real and Complex Analysis by Rudin there is the following exercise: Let $f$ be a continuous, complex valued, $1$-periodic function and $\alpha$ an irrational number. Show that $\displaystyle \lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\alpha k)=\int_0^1f(x)\mathrm d x$. (We say that $(\alpha k)_k$ is uniformly distributed in $\mathbf R / \mathbf Z$.) With the hint given by Rudin the proof is pretty straightforward: First one shows that this is true for every $f_j=\exp(2i\pi j\cdot)$ with $j\in \mathbf{Z} $. Then using density of trigonometric polynomials in $(C^0_1(\mathbf{R}),\|\cdot\|_\infty)$ and the fact that the $0$-th Fourier coefficient of $f$ is its integral over a period, one can conclude using a $3\varepsilon$ argument. This proof is possible because one can compute explicitly the sums $$\displaystyle \frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi j \alpha k}=\frac{1}{n}\cdot\frac{1-e^{2i\pi j\alpha n}}{1-e^{2i\pi j\alpha}}\longrightarrow 0 \text{ when }n\to\infty \text{ and }j\in \mathbf{Z}^*.$$ Now using a different approach (with dynamical systems and ergodic theorems) Tao shows in his blog that $(\alpha k^2)_k $ is uniformly distributed in $\mathbf R / \mathbf Z$ (corollary 2 in this blog). I'd like to prove this result using the methods of the exercise of Rudin, but this reduces to showing that $$\displaystyle \frac{1}{n}\sum_{k=0}^{n-1}e^{2i\pi j \alpha k^2}\longrightarrow 0 \text{ when }n\to\infty \text{ and }j\in \mathbf{Z}^*.$$ Hence my question. P.S. When I ask Wolfram Alpha to compute $\sum_{k\geq0}e^{ik^2}$ it answers me with some particular value of the Jacobi theta function. Of course the series is not convergent but maybe it's some kind of resummation technique or analytic continuation. I'm not familiar with these things but it might be interesting to look in that direction. REPLY [15 votes]: Gauss sums Your sum is strongly related to the Gauss sum. The usual trick is to compute the modulus.
This works particularly smoothly over $\mathbf{Z}/p\mathbf{Z}$ as with usual Gauss sums, but essentially it works here too: If $S = \sum_{k=0}^{n-1} e^{ik^2},$ then \begin{align} |S|^2 &= \sum_{k=0}^{n-1} \sum_{k'=0}^{n-1} e^{i(k'^2 - k^2)}\\ &= \sum_{h=-n+1}^{n-1} \sum_{\substack{0\leq k < n\\ 0\leq k+h < n}} e^{i((k+h)^2 - k^2)}\\ &= \sum_{h=-n+1}^{n-1} e^{ih^2} \sum_{\substack{0\leq k < n\\ 0\leq k+h < n}} e^{i2kh}. \end{align} For each fixed $h$ the inner sum is a geometric series with ratio $e^{i2h}$, so its modulus is at most $\min\left(n, \frac{2}{|1-e^{i2h}|}\right)$. Hence \begin{equation} |S|^2 \leq \sum_{h=-n+1}^{n-1} \min\left(n, \frac{2}{|1-e^{i2h}|}\right) \leq 2\sum_{h=0}^{n-1} \min\left(n, \frac{2}{|1-e^{i2h}|}\right). \end{equation} Fix $\epsilon>0$. Since $(h/\pi)_{h=1}^\infty$ is equidistributed mod $1$, the number of $h=0,\dots,n-1$ for which $|1-e^{i2h}| \leq \epsilon$ is $O(\epsilon n)$, so \begin{equation} |S|^2 \leq 2\sum_{\substack{0\leq h < n\\ |1-e^{i2h}| \leq \epsilon}}n + 2\sum_{\substack{0\leq h < n\\ |1-e^{i2h}| > \epsilon}} \frac2{|1-e^{i2h}|} \leq O(\epsilon n^2) + O(\epsilon^{-1} n). \end{equation} Since $\epsilon$ was arbitrary this implies $|S|^2=o(n^2)$, and hence $|S|=o(n)$.
The van der Corput trick
The only thing we really used about $k^2$ here is that for fixed $h$ we understand the behaviour of the sequence $(k+h)^2 - k^2$, and indeed if you repeat the above calculation but with a judicious application of the Cauchy--Schwarz inequality then you prove a general fact called van der Corput's difference theorem (aka Weyl's differencing trick): if $(u_k)$ is a sequence such that for each $h\geq 1$ the sequence $(u_{k+h}-u_k)$ is equidistributed modulo $1$, then $(u_k)$ is equidistributed modulo $1$. See for example Corollary 2 on Tao's blog here. This implies for example that $\sum_{k=0}^{n-1} e^{i2\pi p(k)} = o(n)$ for every nonconstant polynomial $p$ with irrational leading coefficient.
Other sequences
In general there is no hard and fast rule about $\lim \frac1n \sum_{k=0}^{n-1} e^{i2\pi \alpha u_k}$, i.e., about equidistribution of $(u_k)$, and in fact the other sequence you mention, $k!$, is indeed very different. To take a slightly simpler example which is qualitatively similar, consider $u_k = 2^k$. Let $f_n(\alpha)$ be the exponential sum $\frac1n \sum_{k=1}^n e^{i2\pi \alpha 2^k}$. Then it is a well known consequence of the ergodic theorem that $f_n(\alpha)$ converges to $0$ for almost every $\alpha$. On the other hand clearly $f_n(\alpha)\to 1$ for every dyadic rational $\alpha$, as $\alpha 2^k$ is eventually constantly $0$ mod $1$. But then by the Baire category theorem we must have for a comeagre set of $\alpha$ that $f_n(\alpha)$ does not converge to $0$. Thus it's difficult to say anything too general about $f_n(\alpha)$, especially for particular $\alpha$. For instance, proving $\lim_{n\to\infty} f_n(\sqrt{2})=0$ is a famous open problem.
Test your understanding
Here are some related problems to think about, not all of which I know off-hand how to answer! Is $(\sqrt{n})$ equidistributed mod $1$? What about $(\log n)$? Show that there are some $\alpha$ for which $f_n(\alpha)$ does not converge. Determine $\{z: f_n(\alpha)\to z~\text{for some}~\alpha\}$. Let $g_n(\alpha) = \frac1n \sum_{k=1}^n e^{i2\pi \alpha k!}$. Prove statements for $g_n$ analogous to those we proved for $f_n$. Is there a power of $2$ with at least $7$ decimal digits equal to $7$? Think of other silly (but not open) problems like these ones.<|endoftext|> TITLE: How many numbers of $10$ digits that have at least $5$ different digits are there? QUESTION [5 upvotes]: In principle I solved it as if the first digit could be zero, and at the end I eliminate those that start with zero. The count of numbers that can use $4$ particular digits (for example, $1$, $2$, $3$ and $4$) is $4^{10}$.
The count of numbers that can use any $4$ digits is ${10\choose 4}\cdot 4^{10}$. I'm saying "they can use," which does not mean that they do use them all; however, this is very advantageous for this problem, since "can use four digits" includes the numbers that actually use $4$ digits, those that use $3$ or $2$, and those that use only one. So answering the question of the problem, the answer is: "All ten-digit numbers except those which can only use four digits" \begin{align} &= 10^{10} - {10\choose 4} \cdot 4^{10}\\ &= 10^{10} - 210 \cdot 4^{10} \end{align} There is no reason to believe that the digits have some asymmetric distribution, so it is obvious that of all these numbers, one tenth start with zero. Since numbers starting with zero are not genuine ten-digit numbers, we discard them. The solution is: \begin{align} \tfrac 9{10} (10^{10} - 210 \cdot 4^{10})&= 9(10^9 - 21 \cdot 4^{10})\\ &= 8,801,819,136 \end{align} But I'm not sure this reasoning is correct. REPLY [3 votes]: This answer is a slightly different variation of the theme which affirms the result of @MarkoRiedel. Here we use exponential generating functions to count the number of configurations of labelled objects and apply the coefficient of operator $[x^n]$ to denote the coefficient of $x^n$ in a generating series. If we are looking for the number of strings of length $10$ consisting of $4$ different objects, whereby each object may occur zero or more times, we calculate \begin{align*} 10![x^{10}]e^{4x} \end{align*} If each object occurs at least once, we remove $x^0$ from the generating function $e^x$ and calculate \begin{align*} 10![x^{10}](e^{x}-1)^4 \end{align*} If each object occurs at most three times, we take the initial four summands from the series representation of $e^x$ and calculate \begin{align*} 10![x^{10}]\left(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}\right)^4 \end{align*} We calculate the wanted number of $10$-digit numbers containing at least $5$ different digits by calculating the complement. We start with calculating the number of all $10$-digit numbers. From this number we subtract the $10$-digit numbers which contain exactly $j$ different digits, $j=1,\ldots,4$. The $10$-digit numbers are those starting with $1,\ldots,9$ followed by $9$ digits from $\{0,\ldots,9\}$. We obtain \begin{align*} 9\cdot 10^9 \end{align*} different numbers. Now we calculate the $10$-digit numbers which contain exactly four different digits. If we consider four specific digits, the number is \begin{align*} 10![x^{10}](e^x-1)^4 \end{align*} Since we have to choose four digits out of $\{0,\ldots,9\}$, there are $\binom{10}{4}$ different possibilities, giving a total of \begin{align*} \binom{10}{4}10![x^{10}](e^x-1)^4 \end{align*} From this number we have to subtract the number of strings which start with $0$. We can describe these strings as those which start with $0$ followed by zero or more occurrences of $0$ and one or more occurrences of the other three digits. Precisely nine digits have to follow the leading zero. We thus obtain \begin{align*} 9![x^9]e^x(e^x-1)^3 \end{align*} Note the factor $e^x$ reflects the fact that $0$ may occur zero or more times after the leading $0$ while the other three digits have to occur at least once.
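As a quick sanity check of these two coefficient extractions before assembling the final count (a sketch using sympy; egf_coeff is a hypothetical helper, not part of the original answer):

from sympy import exp, factorial, symbols

x = symbols('x')

def egf_coeff(f, n):
    # n! * [x^n] f, extracted from a truncated series expansion
    return f.series(x, 0, n + 1).removeO().coeff(x, n) * factorial(n)

print(egf_coeff((exp(x) - 1)**4, 10))          # 818520
print(egf_coeff(exp(x) * (exp(x) - 1)**3, 9))  # 204630
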
Since there are $\binom{9}{3}$ different possibilities for choosing the three digits different from $0$, we conclude that the number of $10$-digit numbers containing precisely four different digits is \begin{align*} \binom{10}{4}10![x^{10}](e^x-1)^4-\binom{9}{3}9![x^9]e^x(e^x-1)^3 \end{align*} Since we have to subtract from $9\cdot 10^{9}$ all numbers containing precisely four, three, two and one different digits, we finally obtain \begin{align*} &9\cdot10^9-\binom{10}{4}10![x^{10}](e^x-1)^4+\binom{9}{3}9![x^9]e^x(e^x-1)^3\\ &\quad\qquad-\binom{10}{3}10![x^{10}](e^x-1)^3+\binom{9}{2}9![x^9]e^x(e^x-1)^2\\ &\quad\qquad-\binom{10}{2}10![x^{10}](e^x-1)^2+\binom{9}{1}9![x^9]e^x(e^x-1)^1\\ &\quad\qquad-\binom{10}{1}10![x^{10}](e^x-1)^1+\binom{9}{0}9![x^9]e^x\\ &=9\cdot10^9-210\cdot818520+84\cdot204630\\ &\qquad\qquad-120\cdot55980+36\cdot18660\\ &\qquad\qquad-45\cdot1022+9\cdot511\\ &\qquad\qquad-10\cdot1+1\cdot1\\ &=8839212480 \end{align*}<|endoftext|> TITLE: True or false? "sum of an m-strongly convex and a convex function is m-strongly convex" QUESTION [5 upvotes]: I would like to know if the following conjecture is true or false: If $f(x) = g(x) + h(x)$ where $g$ is $m$-strongly convex and $h$ is convex, then $f$ is $m$-strongly convex. NOTE: For a non-differentiable function $F$, $m$-strong convexity means $F(y) \geq F(x) + g^T(y - x) + \frac{m}{2} ||y - x||^2, \forall x, y$ where $g \in \partial F(x)$ is a subgradient of $F$ at $x$. If $F$ is differentiable, $m$-strong convexity can also be defined as $\nabla^2 F(x) \succeq m I, \forall x$ where $I$ is the identity matrix. You can see the Wikipedia page, this blog post by Sébastien Bubeck, or these lecture notes from the Algorithms course at Cornell University for more details on strong convexity. REPLY [7 votes]: Sure, you can just add the inequalities. That is, if $g$ is $m$-strongly convex then, for some $a \in \partial g(x)$ $$ g(y) \ge g(x) + a^T(y-x) + \frac{m}{2}\|y-x\|^2, $$ And if $h$ is convex, then for some $b \in \partial h(x)$: $$ h(y) \ge h(x) + b^T(y-x), $$ Then by adding we have: $$ g(y) +h(y) \ge g(x) +h(x) + (a+b)^T(y-x) + \frac{m}{2}\|y-x\|^2 $$ $$ f(y) \ge f(x) + (a+b)^T(y-x) + \frac{m}{2}\|y-x\|^2 $$ So $f$ is $m$-strongly convex and $a+b \in \partial f(x)$.<|endoftext|> TITLE: Prove that $\dim(U+W) + \dim(U\cap W) = \dim U + \dim W$ QUESTION [7 upvotes]: Let $V$ be a vector space over a field $k$ and let $U$, $W$ be finite-dimensional subspaces of $V$. Prove that $\dim(U+W) + \dim(U\cap W) = \dim U + \dim W$ I'm given that to begin this problem I can find the bases: $\{v_1,\dots,v_p\}$ for $U\cap W$ $\{v_1,\dots,v_p, u_1,\dots,u_q\}$ for $U$ and $\{v_1,\dots,v_p, w_1,\dots,w_r\}$ for $W$ and then I just need to show that $\{v_1,\dots,v_p, u_1,\dots,u_q, w_1,\dots,w_r\}$ is a basis for $U+W$. My question is: how does one go about showing that it is a basis for $U+W$ and then use that to prove the above claim? Side note: This question has already been asked here: Given two subspaces $U,W$ of vector space $V$, how to show that $\dim(U)+\dim(W)=\dim(U+W)+\dim(U\cap W)$ However, the first answer given does not apply to solving it the way I want to with finding the bases. The second answer simply gives me what I already knew to start with. Thus, I am asking this question again since I'm asking how to solve it a particular way instead of just any general hints towards solving it.
REPLY [3 votes]: A shorter proof: consider $T:U \times W \to U + W$ given by $T(u, w) = u - w$; then $\ker T = \{(u,u) : u \in U \cap W\} \cong U \cap W$, and the dimension theorem $\dim \ker T + \dim \mathrm{image}\, T = \dim \mathrm{domain}\, T$ gives the result at once (since $T(U \times W) = U + W$ and $\dim U \times W = \dim U + \dim W$).<|endoftext|> TITLE: Limit in number theory QUESTION [10 upvotes]: I was given the following thing to prove: $$\lim_{n \to \infty} {d(n) \over n} = 0$$ where $d(n)$ is the number of divisors of $n$. I'm not sure how to approach this question. One way I thought of is to use the unique factorization theorem to turn the expression into: $$\lim_{n \to \infty} {\prod (x_i + 1) \over \prod p_i^{x_i}}$$ And then to use L'Hôpital's rule for each $x_i$, so I get something like this: $$\lim_{n \to \infty} {1 \over \ln (\sum p_i) \prod p_i^{x_i}}$$ That equals zero. Is this a good approach? Is there a different way to solve this? REPLY [13 votes]: It is actually true that $\lim_{n\to\infty} \frac{d(n)}{n^c}=0$ for any $c>0$, but if you only want it for $c=1$, the simplest proof would be the following. The divisors $\delta\mid n$ can be organized into pairs $(\delta,n/\delta)$, and in every pair the smallest divisor is $\min\{\delta,n/\delta\}\le\sqrt n$. It follows that the number of pairs is at most $\sqrt n$. Thus $d(n)\le 2\sqrt n$ and the assertion follows. REPLY [5 votes]: Hint: For every divisor $p$ of $n$ greater than $\sqrt{n}$, there is a divisor $q$ smaller than $\sqrt{n}$ such that $n=pq$. It follows that we can divide the divisors into pairs where one of the elements is smaller than $\sqrt{n}$, and hence $d(n) \leq 2\sqrt{n}$. Can you use this to evaluate the limit?<|endoftext|> TITLE: An intuitive explanation of how the mathematical definition of ergodicity implies the layman's interpretation 'all microstates are equally likely'. QUESTION [10 upvotes]: I'm self-studying Statistical Mechanics; in it I met the Fundamental Postulate of Statistical Mechanics, which took me to the ergodic hypothesis. In the plainest language, it says: In an isolated system in thermal equilibrium, all microstates are equi-probable and equally-likely. That was enough to carry my study of Statistical Mechanics forward till now. Lately, I came across the actual definition of ergodicity, especially that of Wikipedia: [...] the term ergodic is used to describe a dynamical system which, broadly speaking, has the same behavior averaged over time as averaged over the space of all the system's states (phase space). In statistics, the term describes a random process for which the time average of one sequence of events is the same as the ensemble average. Wikipedia writes about the ergodic hypothesis: [...] over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region ... Also, as Arnold Neumaier wrote about the ergodic hypothesis: [...] every phase space trajectory comes arbitrarily close to every phase space point with the same values of all conserved variables as the initial point of the trajectory .... I couldn't get those mathematical definitions as those are beyond my level; still I tried to connect these definitions with the layman's one but couldn't do so. I know a bit of phase space, ensembles and nothing more.
I would appreciate it if someone could explain in an intuitive manner how the definitions the time average of one sequence of events is the same as the ensemble average and time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region imply the layman's interpretation of the ergodic hypothesis. REPLY [5 votes]: It's useful to consider finite-state Markov chains with states $\{ 1, \ldots, N \}$. Such a Markov chain is defined by its transition matrix $P = (P_{ij})_{i,j=1}^N$. We require that $0 \leq P_{ij} \leq 1$ for each $i, j = 1, \ldots, N$ and that $\sum_{j=1}^N P_{ij} = 1$. Thus, we can think of $P_{ij}$ as the probability of jumping from state $i$ to state $j$. We initialize the Markov chain in a state $X_0$ and let $X_n$ be the state at time $n$ (so $X_n$ is a random variable in $\{ 1, \ldots, N \}$). A natural requirement is that the Markov chain be irreducible, which essentially means that we can get from any state to any other state with positive probability. A finite-state Markov chain is said to be ergodic if it is irreducible and has an additional property called aperiodicity. The ergodic theorem for Markov chains says (roughly) that an ergodic Markov chain approaches its "stationary distribution" (see the previous link) as time $n \to \infty$. Now in the case of physical systems, an additional assumption is usually that the system be reversible, which here amounts to symmetric transition probabilities, $P_{ij} = P_{ji}$. It turns out that the stationary distribution of a finite-state irreducible Markov chain that is reversible in this sense is the uniform distribution, which assigns equal probability $1/N$ to each of the possible states. Putting all this together, we see that a finite-state reversible ergodic Markov chain converges to the uniform distribution (i.e. reaches an equilibrium as time goes to infinity in which all states are equally likely). The notion of ergodic dynamical system you asked about is a vast generalization of this idea.<|endoftext|> TITLE: How does Hartshorne's definition of group schemes encode the law for the neutral element? QUESTION [7 upvotes]: Hartshorne's Algebraic Geometry says A scheme $X$ with a morphism to another scheme $S$ is a group scheme over $S$ if there is a section $e\colon\;S\to X$ (the identity) and a morphism $\rho\colon\;X\to X$ over $S$ (the inverse) and a morphism $\mu\colon\;X\times X\to X$ over $S$ (the group operation) such that (1) the composition $\mu\circ(\operatorname{id}\times\rho)\colon\;X\to X$ is equal to the projection $X\to S$ followed by $e$, and (2) the two morphisms $\mu\circ(\mu\times\operatorname{id})$ and $\mu\circ(\operatorname{id}\times\mu)$ from $X\times X\times X\to X$ are the same. Clearly those two demands formalize that $\rho$ is a right-inverse and $\mu$ is associative. However, I am missing a statement concerning the (right-)neutrality of $e$: I would expect something like The morphism $\mu\circ(\operatorname{id}\times e)\circ(X\overset\sim\to X\times_S S)$ is the identity. Does this somehow already follow from the cited definition? REPLY [3 votes]: As stated it doesn't appear to work. For simplicity set $S = \mathrm{Spec} (k)$. Here's a counterexample: let $X$ be any $k$-variety, $e : S \to X$ any point of $X$, and $\rho : X \to X$ any morphism. Set $\mu : X \times X \to X$ to be the constant morphism $\mu(x,y) = e$. This clearly satisfies properties (1) and (2) that you listed (since $\mu(\mu(x,y),z) = \mu(x,\mu(y,z))\ ( = e)$ and $\mu(x,\rho(x)) = e$), but does not make $X$ a group scheme over $\mathrm{Spec}(k)$.
In particular, it fails the last property you pointed out: in general $\mu(x,e) = e \ne x$.<|endoftext|> TITLE: Can we take a logarithm of an infinite product? QUESTION [7 upvotes]: Suppose we have an infinite product $S = \prod_{n=1}^{\infty} a_n$ of positive real numbers. Then is it always the case that $$ \log(S) = \sum_{n=1}^{\infty} \log a_n ? $$ I believe this is the case, but I wanted to make sure. Thank you! REPLY [8 votes]: \begin{align} \log\prod_{n=1}^\infty a_n&=\log\lim_{k\to\infty}\prod_{n=1}^ka_n\\ &=\lim_{k\to\infty}\log\prod_{n=1}^ka_n\\ &=\lim_{k\to\infty}\sum_{n=1}^k\log a_n\\ &=\sum_{n=1}^\infty\log a_n \end{align} I was able to switch the $\log$ and the $\lim$ because $\log$ is continuous. REPLY [4 votes]: Yes (with a small caveat on whether you want to deal with $-\infty$ as a sum). If the product converges to some $S > 0$, then $$\ln \prod_{n=1}^N a_n\xrightarrow[N\to\infty]{} \ln S$$ by continuity of the logarithm. But we do have $$ \ln \prod_{n=1}^N a_n = \sum_{n=1}^N \ln a_n $$ so we do have that the series $\sum_{n=1}^N \ln a_n$ is convergent, and its limit is indeed $\ln S$. Now, if $S=0$, you do have $\sum_{n=1}^N \ln a_n \xrightarrow[N\to\infty]{} -\infty$, but it's up to you whether you want to call this "$\ln S$"...<|endoftext|> TITLE: Law of Excluded Middle Controversy QUESTION [11 upvotes]: I was reading an introductory book on logic and it mentioned in passing that the Law of Excluded Middle is somewhat controversial. I looked into this and what I got was that the intuitionists did not accept it, in contrast to the formalists. I'm curious, but I'm only taking an introductory course in logic, though very informally and vaguely I do know the history of mathematics up to Gödel's incompleteness and Hilbert's program. Can anyone tell me what is the nature of the controversy in a way that suits my level of understanding about mathematics? Further, what is an example of constructive mathematics or a constructive mathematical proof, and why do the intuitionists say that LEM is no good for it (if that's right)? Thanks! REPLY [2 votes]: The law of excluded middle (LEM) is the crucial ingredient in any proof by contradiction (as opposed to proof of contrapositive). The objection to LEM is most easily understood in this context. Some proofs by contradiction establish the existence of this or that mathematical object. Constructively speaking, the problem with a proof by contradiction is that it does not provide any indication of how such a mathematical object is to be found or described specifically. Briefly put, the constructivist claim is that existence is construction, rather than proof of impossibility of nonexistence. The constructive mindset leads one quite far afield and leads to startling conclusions, such as the idea that the traditional extreme value theorem could be false constructively. To understand this concept intuitively, notice that there is no way to find an approximation to such a claimed extremum. This is because the extremum is not a continuous function of the input function, as one easily convinces oneself by looking at simple examples of a function with two "humps" of about the same height.<|endoftext|> TITLE: Salem Numbers, roots on the unit circle QUESTION [8 upvotes]: There are algebraic integers which are not roots of unity, for example consider the irreducible polynomial $ P(x)= x^4-2x^3-2x+1 $.
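For instance, one can inspect the roots numerically (a sketch using numpy, just to illustrate the claim that follows):

import numpy as np

# roots of x^4 - 2x^3 - 2x + 1
for r in np.roots([1, -2, 0, -2, 1]):
    print(r, abs(r))
# two real roots with |r| != 1 (about 2.081 and its reciprocal 0.481),
# and a complex-conjugate pair of modulus 1
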
Computer software shows that this polynomial has two real roots off the unit circle (one greater than one and the other less than one) and two roots on the unit circle. However, I don't know how to prove rigorously that there are two roots on the unit circle. Usually when I wanted to prove some root of a polynomial is on the unit circle, I'd multiply it by some other polynomial to get something of the form $ x^n - 1 $, and it's obvious that every root of such an expression has norm one; however, in this case this is not possible, since none of the roots of $ P(x) =0 $ are roots of unity. Actually there are a whole lot of examples, called Salem numbers. A Salem number is an algebraic integer $\lambda > 1$ such that all of its Galois conjugates are on the unit circle except $\frac{1}{\lambda}$. The polynomial given above was an example of a minimal polynomial of a Salem number. Does anyone have any idea how I can prove this, i.e. that all the roots are on the unit circle except two of them? (I'm looking for a method that can be applied to more than just one example, hopefully lots of Salem numbers.) REPLY [2 votes]: Here is the best way to count roots on the unit circle. Your quartic polynomial has even degree and symmetric coefficients: $c_k = c_{4-k}$ for $0 \leq k \leq 4$. From the symmetry in the coefficients, your polynomial can be written in the form $x^2g(x + 1/x)$ for a polynomial $g(x)$, where in fact $g(x) = x^2 - 2x - 2$. The mapping $z \mapsto z + 1/z$ sends the unit circle in a 2-to-1 way onto the interval $[-2,2]$ (except for being 1-to-1 at the endpoints). The roots of your polynomial on the unit circle (which don't include $\pm 1$) are in a 2-to-1 correspondence with roots of $g(x)$ in the interval $[-2,2]$. There is one such root for $g(x)$, so your polynomial has two roots on the unit circle. Further examples are discussed at https://kconrad.math.uconn.edu/blurbs/galoistheory/numbersoncircle.pdf, including polynomials that don't have symmetric coefficients. Your example is treated at the very start by a more elementary method that does not easily generalize to higher degree and also in later examples by methods that do generalize to higher degree.<|endoftext|> TITLE: Does the determinant give you the index over $\mathcal{O}_K$ as well as over $\mathbb{Z}$? QUESTION [7 upvotes]: It is a standard fact that if $M$ is a nonsingular $n\times n$ integer matrix, the index of the $\mathbb{Z}$-span of its columns as an abelian group in $\mathbb{Z}^n$ is $|\det M|$. What happens if we replace $\mathbb{Z}$ by the ring of integers in some number field? More precisely: Let $K$ be a finite extension of $\mathbb{Q}$ with ring of integers $\mathcal{O}_K$. Let $M$ be an $n\times n$ nonsingular matrix with entries in $\mathcal{O}_K$, and let $\Lambda$ be the sub-$\mathcal{O}_K$-module of $\mathcal{O}_K^n$ spanned by the columns of $M$. What can we say about the index of $\Lambda$ as an abelian group inside $\mathcal{O}_K^n$? Is it $|\det M|$? If so, what's the proof? (And if not, what's really going on?) EDIT: The question is naive in a way. A priori, $\det M$ is an element of $\mathcal{O}_K$, so $|\det M|$ isn't defined without specifying a particular archimedean place of $K$. If there are several, then $|\det M|$ will depend on an arbitrary choice whereas $[\mathcal{O}_K^n:\Lambda]$ will not. So this leads me to expect that in general, the answer is no. On the other hand, what if, say, $\det M \in \mathbb{Z}$ and/or $M$ is unitary?
2nd EDIT: Apropos of Mariano's comment, maybe the right answer is actually $|N_{K/\mathbb{Q}}(\det M)|$, at least for number fields of class number $1$ and maybe more generally? REPLY [2 votes]: I believe that Propositions 1.10 and 1.16 of this paper of mine give (a generalization of) what you are looking for. The notation may take some getting used to. Indeed all the key ingredients of the discussion above appear in this somewhat abstracted context...and they comprise the proof.<|endoftext|> TITLE: An obvious pattern to $i\uparrow\uparrow n$ that is eluding us all? QUESTION [37 upvotes]: Start with $i=\sqrt{-1}$. This will be $a_1$. $a_2$ will be $i^i$. $a_3$ will be $i^{i^{i}}$. $\vdots$ etc. In Knuth up-arrow notation: $$a_n=i\uparrow\uparrow n$$ And, amazingly, you can evaluate $\lim_{n\to\infty}a_n=\lim_{n\to\infty}i\uparrow\uparrow n=e^{-W(-\ln(i))}\approx0.4383+0.3606i$. You can check this, it does indeed converge to this value. In fact, I decided to make a graph of $a_n$ to show that it converges. (y axis is imaginary part, x axis is real part.) And, to little astonishment, I quickly noticed that there is an apparent pattern to the graph. Commonly, we define $x\uparrow\uparrow0=1$, which I have included in the graph. So the pattern seems very obvious. It follows a curved path that converges onto the point that was given above. And, if you connect the dots, starting with the first point (given on the left as the first point) and trace a nice line to the second, third, and so forth, you will find an interesting spiral. I thought that at first, this spiral was writable as an equation, but apparently, there are a few implications. You will notice that the blue dots are way closer to the converging point and that the red and black dots are a little closer. So whatever equation you can come up with should account for the fact that $a_{3n}$ is closest to the number you are trying to converge to. I want (so desperately) to see if anyone can come up with an equation that allows the computation of $a_{0.5}$ that satisfies $$i^{a_{0.5}}=a_{1.5}$$ a well-known identity you can find on Wikipedia. At first glance of the graph I went on to think that perhaps, just perhaps, I (or you) could find a formula that allows us to define $i\uparrow\uparrow 0.5$. If you are familiar with De Moivre's formula, it is a formula that allows us to compute $$\sqrt{i}$$ with relative ease. It was derived when De Moivre noticed an interesting pattern to $(a+bi)^n$. He proceeded to write his formula concerning the distance from zero and the angle from the positive real axis. So I must tell you that I wish for the same to occur with $i\uparrow\uparrow n$. Perhaps the answer lies in using a different coordinate system. Perhaps the answer lies in calculating the distance one of the points on one of the lines (black, red, or blue) is from the converging spot and then adding in the angle at which the next point changes. My progress on determining such a formula has gone nowhere. The most I can say is that $a_n$ is probably not chaotic and does indeed converge in a way that is most certainly not random. REPLY [5 votes]: As Gottfried hints, there is yet another solution to $^{0.5}i \approx 1.07571355731 + 0.873217399108i$. I will use this question to describe a unique Abel function for $f(z)=i^z$. I wrote a pari-gp complex base tetration program available for download at math.eretrandre.org. The results posted here were generated with that program.
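For orientation, $f(z)=i^z$ has two primary fixed points, the attracting one already quoted in the question and a repelling companion; both are easy to locate numerically (a sketch using mpmath, an assumed dependency, iterating the map and its inverse branch):

import mpmath as mp

i = mp.mpc(0, 1)

# attracting fixed point: iterate z -> i^z
z = mp.mpc(0.5, 0.5)
for _ in range(300):
    z = mp.power(i, z)
print(z)   # ~ 0.438283 + 0.360592i

# repelling fixed point: iterate the inverse branch z -> log(z)/log(i),
# for which that point is attracting
w = mp.mpc(-2, -0.5)
for _ in range(100):
    w = mp.log(w) / mp.log(i)
print(w)   # ~ -1.861743 - 0.410800i
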
I will use this question about base(i) to show that if there is a solution of this type, then it has to be unique. This tetration can be regarded as an extension of Kneser's solution for real bases $>\exp(\frac{1}{e})$ to tetration for complex bases. So what is "this type" of complex base slog/abel function solution? The answer is that this Abel function involves both primary fixed points. The op points out the attracting fixed point, $l_1 \approx 0.438282936727 + 0.360592471871i\;$. There is also a repelling fixed point $l_2 \approx -1.86174307501 - 0.410799968836i\;$. Henryk Trapmann's uniqueness criterion says it suffices to make a sickle between the two fixed points, bounded on one side by a chosen curve $f(z)$, and bounded on the other side by $i^{f(z)}$. For sexp base(i), we can choose $f(z)$ as a straight line between the primary fixed points. Henryk's proof says that if there is a one-to-one analytic mapping between the sickle and the Abel function, excluding the two fixed points, and if the derivative of the Abel function is never zero, then it is unique up to an additive constant. The additive constant is uniquely determined by the requirement that tetration satisfy slog(1)=abel(1)=0. Here is a picture of the sickle, and $\alpha(z)$, the abel/slog on the sickle. You can see the one-to-one mapping between the two fixed points, extending between $-\Im \infty$ and $+\Im \infty$. The straight line and its image $f(z)$ are always, by definition, exactly one cycle apart, since $\alpha(f(z))=\alpha(z)+1$. I also filled in vertical grid lines for sexp(z+0.25), sexp(z+0.5) and sexp(z+0.75). The two graphs are colored identically to allow visual verification of the one-to-one mapping. Because $\exp_i(z)$ is well defined, the sexp(z) function can be extended to the right over the entire complex plane, and extended to the left except for logarithmic branch singularities. So this slog on a sickle defines sexp(z) base(i) for the entire complex plane! Henryk Trapmann's uniqueness proof generates a mapping function between this solution and the other purported solution. Since both functions are analytic on the strip, it turns out both the mapping function and its inverse have to be entire, which can only be the case if the two slogs are the same except for an additive constant. Near the attracting fixed point, the function approaches arbitrarily closely the attracting fixed point Abel/Schroeder function, and near the repelling fixed point, the function approaches the repelling fixed point Abel/Schroeder function. sexp base i in the complex plane, grids are 1 unit apart. You can see the logarithmic singularity at z=-2. The Abel function Taylor series was computed using the following form: $$\alpha(z)=\frac{\ln(z-l_1)}{\ln(\lambda_1)} + \frac{\ln(z-l_2)}{\ln(\lambda_2)} + p(z)$$ $\lambda_1$ and $\lambda_2$ are the multipliers at the two fixed points, $l_1$ and $l_2$, $$i^{l_1+z} = l_1 + \lambda_1 \cdot z + a_2 \cdot z^2 + a_3 \cdot z^3...$$ It turns out $p(z)$ has a relatively mild singularity at each of the two fixed points when this form is used for the Abel/slog function. For example, $p(z)$ and its derivative are both continuous and differentiable at both of the two fixed points, although the 3rd and higher derivatives are not continuous, since the periodicity at the two fixed points is less than 3.
The pari-gp fatou.gp complex base sexp program would be used as follows:
\r fatou.gp
setmaxconvergence(); /* base i is poorly behaved */
sexpinit(i);
sexp(0.5)
1.07571355731392 + 0.873217399108003*I
Here are numerical values for $l_1$, $l_2$, $r_1=\frac{1}{\ln(\lambda_1)}$, $r_2=\frac{1}{\ln(\lambda_2)}$, and $p(z)$, and an equation for the slog estimation. The radius of convergence for $p(z)$ is $|\frac{l1-l2}{2}|$, centered between the fixed points.
l1 = 0.4382829367270321116269751636 + 0.3605924718713854859529405269*I;
l2 = -1.861743075013160391397055791 - 0.4107999688363923093542478071*I;
r1 = -0.02244005259030164710115539234 - 0.4414842544742195824980579384*I;
r2 = 0.3613567874856575121871741974 + 0.4459440823588587557573111438*I;
slogest(z) = {
  z = r1*(log(I*(z-l1))-Pi*I/2) + r2*(log(-I*(z-l2))+Pi*I/2) + subst(p,x,(z-0.5*(l1+l2)));
  return(z);
}
{p= -0.06582860911769610907611153624 - 0.6391834058813427803550150237*I
+x^ 1* ( 0.0004701290774740458290098771596 - 0.04537158729375693129580356342*I)
+x^ 2* (-0.003324372336079859782821095201 + 0.001495132937745569349230811243*I)
+x^ 3* ( 0.0007980787520098490845820065316 - 0.001533441799004958947560304185*I)
+x^ 4* (-0.001108786744422696031980666816 - 2.731877902187453470989686831 E-6*I)
+x^ 5* ( 0.0001798802115603965459766944797 + 0.0001776744851391085901363383617*I)
+x^ 6* (-0.0001598048157256642978352955851 - 3.381203527058705270044424867 E-5*I)
+x^ 7* ( 4.834500417029476499351747515 E-5 + 7.971199385246578717457250724 E-5*I)
+x^ 8* (-2.079867322054674351760533351 E-5 - 1.406842037326640069256998532 E-5*I)
+x^ 9* ( 1.690770367738385075341590185 E-5 + 2.135309134452918173269411762 E-5*I)
+x^10* (-3.353412252728033524441156034 E-6 - 5.845155821267264231283805042 E-6*I)
+x^11* ( 6.565965239846111713090140941 E-6 + 4.769875342842561685863158675 E-6*I)
+x^12* (-1.296330399893039321872277846 E-6 - 2.456868593278006094540299988 E-6*I)
+x^13* ( 2.533916981224509417637955994 E-6 + 7.218102304226136498196092124 E-7*I)
+x^14* (-8.409501999009543726430092781 E-7 - 9.279879295518162345796637972 E-7*I)
+x^15* ( 9.275250588492317644336514121 E-7 - 9.631817386499723279878826279 E-8*I)
+x^16* (-5.215083989292973029369039510 E-7 - 2.615530503953161606154084492 E-7*I)
+x^17* ( 3.116111111914753868488936298 E-7 - 1.665350480228628392912034933 E-7*I)
+x^18* (-2.781400382567721610094378621 E-7 - 1.112607415413251118507915520 E-8*I)
+x^19* ( 9.051635326999330520247332230 E-8 - 1.085008978701155103767460830 E-7*I)
+x^20* (-1.238964597578335282733301968 E-7 + 5.485260567253507938012071652 E-8*I)
+x^21* ( 1.846113879795222761048581419 E-8 - 5.632856406052825347059503708 E-8*I)
+x^22* (-4.197980145789475821721904427 E-8 + 5.240770851948157536559348001 E-8*I)
+x^23* (-1.428548543349343274791836858 E-9 - 2.602689861858463106421605234 E-8*I)
+x^24* (-6.065810598994532136326922961 E-9 + 3.328188440463381773510055778 E-8*I)
+x^25* (-4.925408783783587354755417128 E-9 - 1.098256950809547844459017995 E-8*I)
+x^26* ( 5.571041113925408468110396754 E-9 + 1.638316780708641282846896470 E-8*I)
+x^27* (-4.149193648847472629362625045 E-9 - 4.116268895249720851777701930 E-9*I)
+x^28* ( 6.721271351954440168744328856 E-9 + 5.947866395141685553477517779 E-9*I)
+x^29* (-2.739160795070694522203609350 E-9 - 1.172354292086004770247804721 E-9*I)
+x^30* ( 4.623071483414304725202549852 E-9 + 9.232228309095999063811309141 E-10*I)
+x^31* (-1.585766089923197553788716462 E-9 - 1.651307950491239271118345156 E-11*I)
+x^32* ( 2.363704675846105632188520360 E-9 - 8.296748095830550218237145087 E-10*I)
+x^33*
(-8.032209583204614846555211647 E-10 + 3.448796470634182196661522301 E-10*I)
+x^34* ( 8.586697889632180390697042972 E-10 - 1.035571885972986467540525699 E-9*I)
+x^35* (-3.283321823970989260857161769 E-10 + 3.700931769620798039214959724 E-10*I)
+x^36* ( 1.060749698244576767546142485 E-10 - 7.216733126248904197113800569 E-10*I)
+x^37* (-7.435780239422554167112328213 E-11 + 2.755791031783421936772787526 E-10*I)
+x^38* (-1.572500505206983217824447542 E-10 - 3.667364964073739957874509082 E-10*I)
+x^39* ( 3.623731852785295824889239864 E-11 + 1.629657324418246834695914694 E-10*I)
+x^40* (-1.802976354364813915048318195 E-10 - 1.261880617212203404824890625 E-10*I)
}<|endoftext|> TITLE: Showing that a function is uniformly continuous but not Lipschitz QUESTION [6 upvotes]: If $g(x):= \sqrt x $ for $x \in [0,1]$, show that there does not exist a constant $K$ such that $|g(x)| \leq K|x|$ $\forall x \in [0,1]$. Conclude that the uniformly continuous function $g$ is not a Lipschitz function on the interval $[0,1]$. Necessary definitions: Let $A \subseteq \Bbb R$. A function $f: A \to \Bbb R$ is uniformly continuous when: Given $\epsilon > 0$ and $u \in A$ there is a $\delta(\epsilon, u) > 0$ such that $\forall x \in A$, $|x - u| < \delta(\epsilon,u)$ $\implies$ $|f(x) - f(u)| < \epsilon$. A function $f$ is considered Lipschitz if $\exists$ a constant $K > 0$ such that $\forall x,u \in A$, $|f(x) - f(u)| \leq K|x-u|$. Here is the beginning of my proof; I am having some difficulty showing that such a constant does not exist. Intuitively it makes sense, however showing this geometrically evades me. Proof (attempt): Suppose $g(x) := \sqrt x$ for $x \in [0,1]$. Assume $g(x)$ is Lipschitz. $g(x)$ Lipschitz $\implies$ $\exists$ constant $K > 0$ such that $|f(x) - f(u)| \leq K|x-u|$ $\forall x,u \in [0,1]$. Evaluating geometrically: $\frac{|f(x) - f(u)|}{ |x-u|}$ = $\frac{ \sqrt x - 1}{|x-u|}$ $ \leq K$. I was hoping to assume the function is Lipschitz and encounter a contradiction, however this is where I'm stuck. Can anyone nudge me in the right direction? REPLY [2 votes]: Well, choose $u=0$. What can you say about $\frac{|f(x)-f(u)|}{|x-u|}=\frac{\sqrt{x}}{x}$ as $x$ gets closer to zero? Here is another proof using derivatives. Since $f'(x)=\frac{1}{2\sqrt{x}}$, it follows that $$\lim_{x\to 0} f'(x)=+\infty$$ This means that for every $M>0$ there exists $\delta$ such that for every $x$ in $I=[0, \delta]$, $f'(x)>M$. Suppose by contradiction that such $K$ exists. Consider $M=K+1$ and $x_0\in \text{int}(I)$: then $$f'(x_0)=\lim_{x\to x_0}\frac{f(x)-f(x_0)}{x-x_0}>K+1$$ But then there exists an interval $J$ such that for every $x\in I \cap J$ $$\frac{f(x)-f(x_0)}{x-x_0}>K+1$$ $$\implies |f(x)-f(x_0)|>(K+1)|x-x_0|$$ contradicting our hypothesis.<|endoftext|> TITLE: Hillary Clinton's Iowa Caucus Coin Toss Wins and Bayesian Inference QUESTION [9 upvotes]: In yesterday's Iowa Caucus, Hillary Clinton beat Bernie Sanders in six out of six tied counties by a coin-toss*. I believe we would have heard the uproar about it by now if this was somehow rigged in her favor, but I wanted to calculate the odds of this happening, assuming she really was that lucky, and assuming she rigged various numbers of the tosses. * As many people have pointed out already, this turned out to be a selective data set - Sanders won just about as many coin tosses as Mrs. Clinton did. Read on if you still care about the problem.
At first I calculated the odds using simple rules for the probabilities of independent events: $$ P(6\text{H})=P(\text{H})^{6}=\left( \frac{1}{2} \right)^{6}= \frac{1}{64} \approx 1.56\% $$ i.e. naively, there was a 1.56% chance it was fair. But I vaguely remembered from reading about Bayesian inference that we can make a more educated statement about whether or not this was fair using Bayes' Theorem, and assuming various numbers of the coin tosses were rigged. I tried it out myself, and here's what I came up with, but I'm fairly positive I worked this out incorrectly, so here's hoping you wonderful people can help. Here's my shot at it: Example assuming it was fair (0% chance it was rigged): $$ P(6\text{H}) = \underbrace{P(6\text{H}|\text{fair})}_{1/64}\underbrace{P(\text{fair})}_{1} + \underbrace{P(6\text{H}|\text{not fair})}_{1}\underbrace{P(\text{not fair})}_{0} = \frac{1}{64} $$ and by Bayes' Theorem: $$ P(\text{fair}|6\text{H}) = \frac{P(6\text{H}|\text{fair})P(\text{fair})}{P(6\text{H})} = \frac{(1/64)(1)}{(1/64)}=1 $$ (obviously). Assuming $n$ of the tosses were rigged: $$ P(6\text{H}) = \underbrace{P(6\text{H}|\text{fair})}_{\left(\frac{1}{2}\right)^{6}}\underbrace{P(\text{fair})}_{1-\frac{n}{6}} + \underbrace{P(6\text{H}|\text{not fair})}_{1}\underbrace{P(\text{not fair})}_{\frac{n}{6}} = \frac{6-n}{384} + \frac{n}{6} = \frac{63n+6}{384} $$ and by Bayes' Theorem: $$ P(\text{fair}|6\text{H}) = \frac{P(6\text{H}|\text{fair})P(\text{fair})}{P(6\text{H})} = \frac{\left(\frac{1}{64}\right)\left(\frac{6-n}{6}\right)}{\left(\frac{63n+6}{384}\right)}=\frac{6-n}{63n+6} $$ Here's a plot of the probabilities that the coin tosses were fair given an assumption of $n$ unfair coins: Questions: 1. I'm pretty sure some of my assumptions for probabilities were off in various parts of this - if so, where did I go wrong? 2. On the off chance I carried this out correctly, what can be made of these results? For example, is it most probable that there were 0, 1, or 2 coin tosses that were unfair, as making the assumption that there were $n<3$ unfair coins gives a probability $P(\text{fair}|6\text{H})$ greater than the $1/64$ chance it was fair? EDIT: @Eric Wofsey informed me that I was calculating the wrong probability. What I really wanted to calculate was $P(0|6H)$, the probability of 0 coins being rigged, considering an outcome of 6 heads. What I learned (I'm new to Bayesian inference) is that it all depends upon your prior guess as to the probability that $n$ of the coins were rigged. As he pointed out: $$ P(0|6H) = \frac{P(0)}{\sum_{i=0}^{6}2^iP(i)} $$ where $$ P(n) = {6 \choose n}p^n(1-p)^{6-n} $$ and $p$ is the prior probability that each coin toss was rigged. Here's what $P(0|6H)$ looks like fully expanded (assuming the prior $p$ is the same for each $P(n)$): As I learned, the prior probability is arbitrarily chosen, and represents your belief/guess as to the likelihood that the coins were rigged. I was interested in looking at what the distribution of $P(0|6H)$ looked like for values of $p$ from 0 to 1 (0 meaning you believe there's no possibility the coins were rigged, 1 meaning you're certain the coins were rigged). Here's the plot: I may be going way off the reservation here, but if this graph represents values of $P(0|6H)$ for prior probabilities of having rigged coins, wouldn't the integral of this from $p=0$ to $1$ represent the total probability of 0 rigged coins, considering an outcome of 6 heads, with each prior $p$ weighted equally?
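One can check this numerically; a sketch using scipy and the formulas above (assuming, as before, the same independent prior $p$ for each toss):

from math import comb
from scipy.integrate import quad

def p_fair_given_6h(p):
    # P(n) = C(6,n) p^n (1-p)^(6-n): prior that exactly n tosses were rigged
    P = [comb(6, n) * p**n * (1 - p)**(6 - n) for n in range(7)]
    return P[0] / sum(2**i * P[i] for i in range(7))

val, err = quad(p_fair_given_6h, 0, 1)
print(val)   # ~ 0.0822
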
Whether or not I'm abusing the maths, the integral evaluates to: $$ \int_{0}^{1} P(0|6H)(p) \ dp = 0.0822\dots $$ Note: I'm thinking in retrospect that the prior $p$ should probably be different for every $P(n)$ and assuming they're the same for each $P(n)$ is likely problematic, but I thought I'd share my process anyway. EDIT 2: On further thought, it seems like what I really want to compute is the integral: $$\int{\int{\int{\int{\int{\int{\int P(0|6H) \ d p_0 \ d p_1 \ d p_2 \ d p_3 \ d p_4 \ d p_5 \ d p_6}}}}}}$$ where $$ P(0|6H) = \dfrac{(1-p_0)^6}{(1-p_0)^6 + 12p_1(1-p_1)^5 + 60p_2^2(1-p_2)^4 + 160p_3^3(1-p_3)^3 + 240p_4^4(1-p_4)^2 + 192p_5^5(1-p_5) + 64p_6^6} $$ and $$ p_0 + p_1 + p_2 + p_3 + p_4 + p_5 + p_6 = 1 $$ and $p_n$ is the prior probability that $n$ coins are rigged. I have absolutely no idea how one would go about even thinking about evaluating this integral - it seems as though there are a range of values for the integral anyway, depending on the choices of $p_n$. It seems it is definitely possible given specific choices for $p_n$ and maybe even a distribution for the $p_n$s, dependent on n, such that the distribution is still normalized, like a weighted decaying distribution or something (gets less likely as n increases that that number of coins was rigged). Happy Tuesday REPLY [4 votes]: Your computation doesn't make any sense. Assuming that $n$ of the tosses were rigged doesn't mean that you're assigning a prior probability of $n/6$ to "not fair". If you're saying the only possibilities are "fair" and "not fair" and "not fair" means a 100% of Clinton winning all $6$ tosses (which is what your computation of "$P(6H)$" implies), then that means either all the coins are rigged or none of them are, with $P(\text{not fair})$ being the prior probability that they are all rigged. The computation that you seem to be trying to do when you're computing "$P(6H)$" is not $P(6H)$ but $P(6H|n\text{ rigged coins})$, i.e. the probability of getting $6$ heads assuming exactly $n$ of the coins were rigged. This is very easy to compute: it's just the probability of the $6-n$ non-rigged coins coming up heads, which is $1/2^{6-n}$. Note that if you allow this possibility, you are no longer saying the only options are "fair" and "not fair"; rather the options are "$n$ rigged coins" for each $n$ between $0$ and $6$ (with $n=0$ being "fair" and $n=6$ being what you called "not fair"). You then get that $$P(6H)=P(6H|0)P(0)+P(6H|1)P(1)+\dots+P(6H|6)P(6)=\frac{P(0)}{2^6}+\frac{P(1)}{2^5}+\dots+ P(6),$$ where I am abbreviating the event "$n$ rigged coins" as simply "$n$". You then get that $$P(0|6H)=\frac{P(6H|0)P(0)}{P(6H)}=\frac{P(0)}{P(0)+2P(1)+\dots+2^6P(6)}$$ is the probability that all the coins were fair, given that they all came up heads. Note here that $P(0),P(1),\dots,P(6)$ are prior probabilities: the probability (before you knew the outcome of the coin tosses) with which you believed that $n$ of the coin tosses were rigged. You don't get a value for $P(0|6H)$ until you plug in values for these priors. If, for instance, you believe that each coin toss independently had a prior probability of $p$ of being rigged, then $P(n)=\binom{6}{n}p^n(1-p)^{6-n}$. 
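For what it's worth, with this independent prior the denominator collapses by the binomial theorem, $\sum_{i=0}^{6} \binom{6}{i}(2p)^i(1-p)^{6-i} = (1+p)^6$, so $P(0|6H) = \left(\frac{1-p}{1+p}\right)^6$. A quick check in Python:

for p in (0.01, 0.1, 0.5):
    # closed form of P(0|6H) under the independent prior
    print(p, ((1 - p) / (1 + p)) ** 6)
# 0.01 -> ~0.887, 0.1 -> ~0.300, 0.5 -> ~0.00137
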
However, this is probably not a reasonable assumption (you wouldn't expect the riggedness of each toss to be independent--if one of them is being rigged, then that makes it more likely there is a conspiracy which means more of them will be rigged).<|endoftext|> TITLE: Quotient space of the reals by the rationals QUESTION [9 upvotes]: Let $\mathbb{R}/{\sim}$ be the quotient space given by the equivalence relation $a \sim b$ if $a$ and $b$ are rational. I am trying to understand general properties of the quotient topology and this example seems worth fleshing out in full. It's also a very strange example to me so I'd appreciate feedback on what I've figured out so far. In order to figure out what the topology on $\mathbb{R}/{\sim}$ looks like we need to examine where the surjection $\pi: \mathbb{R} \to \mathbb{R}/{\sim}$ sends open sets in $\mathbb{R}$. Now any open interval $U \subset \mathbb{R}$ contains both irrational and rational points; the rationals all get sent to the same point $q$ while the irrationals get sent to separate points. So an open set in $\mathbb{R}/{\sim}$ is similar to an open set in the irrationals as a subspace of the reals (with the caveat that all open sets in $\mathbb{R}/{\sim}$ share the rational point $q$). Is this space connected? I believe so as I can't think of a proper separation. As Alex notes below this is not correct: My professor also mentioned this space is an example where a compact subset, namely the irrationals, is not closed. As for compactness I think it is for this reason: the rationals are dense in $\mathbb{R}$, so if we put an open neighborhood around each rational then we will cover $\mathbb{R}$. Similarly, if we put an open neighborhood around the rational point $q \in \mathbb{R}/{\sim}$, then this single neighborhood will contain all irrational points and thus be a finite cover of $\mathbb{R}/{\sim}$. Are there any other significant properties of this space I should know about? In particular, is it homeomorphic to anything notable? Does it serve as a useful counterexample for any other important properties? And does this particular topology have a name? REPLY [4 votes]: I'll add my own answer, elaborating on my comments to the original post and to Rob Arthan's (now deleted) answer, just to clear things up. A subset of $\mathbb{R}/{\sim}$ is open if and only if its preimage under the map $\mathbb{R} \to \mathbb{R}/{\sim}$ is open. Since every nonempty open set in $\mathbb{R}$ contains a rational number, the open sets of $\mathbb{R}/{\sim}$ are in bijection with $\{U\subseteq \mathbb{R}\mid U\text{ is open, and }\mathbb{Q}\subset U\}\cup \{\emptyset\}$. How can we find open sets in $\mathbb{R}$ that contain $\mathbb{Q}$? Well, such a set is exactly the complement of a closed set which is disjoint from $\mathbb{Q}$, i.e. a closed set of irrationals. So we can take $\mathbb{R}$ and remove a single irrational, or finitely many irrationals, or a sequence of irrationals limiting to some irrational, etc. etc. The somewhat unintuitive thing is that there are some very large (in the sense of Lebesgue measure) closed sets of irrationals. For example, fix $\epsilon>0$, enumerate the rationals as $\{q_n\mid n\in\mathbb{N}\}$, and let $O_n$ be an open interval of length $\epsilon \cdot 2^{-n}$ containing $q_n$. Then let $U = \bigcup_{n\in\mathbb{N}} O_n$. Then $\lambda(U) \leq 2\epsilon$, where $\lambda$ is the Lebesgue measure. 
To see that the image of the set $I$ of irrationals in $\mathbb{R}/{\sim}$ is not compact, let $U$ be the set defined above for $\epsilon <\frac{1}{2}$ (so $U$ doesn't contain any interval $(n,n+1)$). Then for $n\in \mathbb{Z}$, let $U_n = U \cup (n,n+1)$. Now each $U_n$ is open, and $I\subseteq \bigcup_{n\in\mathbb{Z}} U_n$, but this open cover has no finite subcover. All the $U_n$ contain $\mathbb{Q}$ so the same is true of their images in $\mathbb{R}/{\sim}$.<|endoftext|> TITLE: Many point compactification QUESTION [5 upvotes]: If $X$ is a noncompact LCH space (locally compact, Hausdorff) then its one point compactification is $X^*=X\cup \{\infty\}$ with topology $\mathcal{T^*}$ given by $U \in \mathcal{T^*}$ iff either a) $U \subset X$ is open, or b) if $\infty \in U$ then $U^c \subset X$ is compact. What would be the definition and the topology for a 2-point compactification, or a 4-point compactification? Like in $\mathbb{R^2}$ where there are 4 infinities: $\pm \infty$ on the $x$ axis and $\pm \infty$ on the $y$ axis? REPLY [2 votes]: $R$ has a $1$-point and a $2$-point compactification but not an $n$-point compactification for finite $n>2.$ (This was a problem in AMM some decades ago.) The space $\omega_1$ with the $\in$-order topology has, up to equivalence, only one compactification, namely the identity embedding into $\omega_1+1.$ It may be useful in some algebraic contexts to have just 4 infinities for the real plane, on the axes, but topologically the line $y=x$ is going to be confused about which of these it should "go to".<|endoftext|> TITLE: Advantage of Lebesgue sigma-algebra over Borel? QUESTION [14 upvotes]: What it says on the tin. Using the Borel $\sigma$-algebra on the reals instead of the Lebesgue $\sigma$-algebra has the advantage that it allows a broader class of measures, many of which are quite natural: For example the "uniform" measure on the Cantor set is defined on the Borel $\sigma$-algebra, but cannot be defined on the Lebesgue algebra. So why don't we just use the Borel $\sigma$-algebra for everything? What advantage does the Lebesgue $\sigma$-algebra have? I mean, it has more measurable sets, but sets that are Lebesgue-measurable but not Borel-measurable (or for that matter, sets that are not Borel-measurable, period) are extremely pathological, not explicitly constructible, and (as far as I can tell) never show up naturally. And it's complete, but I have no idea what makes that a useful property. REPLY [18 votes]: You might be interested in what Borel thought about this. Every modern student learns that Lebesgue's measure is the completion of Borel's measure and that this is rather obvious. Take the Cantor set of measure zero (i.e., Borel measure zero). All subsets are Lebesgue measurable but not all subsets are Borel sets. It is clearly a "bad" thing to have sets (and functions) around that your theory has to avoid, so of course Lebesgue's measure is clearly more useful than Borel's. But Borel didn't buy that. He was a bit of a constructionist. Not like you sometimes find among people that don't believe in infinite sets or even infinite decimal expansions. He didn't accept that any of these Lebesgue measurable sets that are not also Borel sets can be constructed in any acceptable way. He had given a procedure (countable but transfinite) that constructed all the Borel sets, and Lebesgue had no demonstration that there were any other sets that you could actually encounter. Borel and Lebesgue were the best of friends---until they weren't. It was this issue that drove them apart.
Borel was a bit older and had supervised Lebesgue's dissertation. But he quite resented the acclaim that Lebesgue was getting for his measure and his integral when the original ideas were all due to Borel. If you fully believe that non-Borel sets don't truly exist then it appears Lebesgue has stolen the glory and with no justification. Priority disputes among mathematicians are fairly rare, but they can be as bitter as such disputes in other fields. I am not enough of an historian to tell much more of this story. (Of course that wouldn't stop me from telling such stories in lectures.) But I would say that, at least formally, this dispute couldn't have been settled until around 1914. That is when a young Russian mathematician (Suslin) showed that the projection of a two-dimensional Borel set onto one dimension need not be a Borel set, but did have to be Lebesgue measurable. I hope that this would have settled the issue in Borel's mind but, if so, it did not restore their friendship. But Borel might have enjoyed one aspect: Suslin made his discovery by finding a rather gross error in a paper of Lebesgue's, a paper that claimed the projection of a Borel set would be a Borel set. The mistake Lebesgue made was an embarrassingly simple one.<|endoftext|> TITLE: Prove that if $\sum_{n=1}^ \infty na_n$ converges, then $\sum_{n=1}^ \infty a_n$ converges. QUESTION [6 upvotes]: Prove that if $\displaystyle \sum_{n=1}^ \infty na_n$ converges, then $\displaystyle\sum_{n=1}^ \infty a_n$ converges. No, $a_n$ are not necessarily positive numbers. I've been trying summation by parts. REPLY [4 votes]: Dirichlet's Test with {$na_n$} and {$1/n$}? Since $\sum_{n=1}^\infty na_n$ converges, the partial sums $\sum_{n=1}^N na_n$ are bounded. $\lim \frac {1}{n} = 0$. The sequence $\frac {1}{n}$ is monotone, therefore the series $\sum_{n=1}^\infty \left\lvert \frac {1}{n+1} - \frac {1}{n}\right\rvert$ converges and the conditions hold for the test. So, $\sum_{n=1}^\infty (\frac {1}{n})(na_n)$=$\sum_{n=1}^\infty a_n$ converges.<|endoftext|> TITLE: Definition of Sigma Algebra QUESTION [6 upvotes]: I was wondering, why are we not allowed to take arbitrary unions (likewise intersections) in the definition of a sigma algebra? I am looking for a more or less intuitive reason. It seems to me that most of the motivation in defining sigma algebras lies in Measure Theory. So, does allowing arbitrary unions/intersections somehow screw up the theory? Also, is there a mathematical structure which is similar to a sigma algebra, for which arbitrary unions/intersections are specified? EDIT: By arbitrary, I mean to include families of subsets which are not necessarily indexable by a countable set. REPLY [3 votes]: If you allowed for unions of families, you can see that this would immediately ruin the theory on $\mathbb{R}$ (and most other interesting places). This is because any such algebra containing the singletons would also contain any subset of the real numbers. We would hence find ourselves once again at the problem of Carathéodory: having too many measurable sets under which we cannot establish additivity of the form $$\mu(\sqcup E_n) = \sum \mu(E_n).$$<|endoftext|> TITLE: Examples of categories without products QUESTION [25 upvotes]: A question was raised in our class about the non-existence of products in a category. The two examples that came up in the discussion were the category of smooth manifolds with boundary and the category of fields.
But I cannot formally prove why the categorical product does not exist in these categories. I can see that the category of finite fields cannot have infinite products due to cardinality reasons, but I would like to know why the category of fields cannot even have finite products. Is there any other simple example of a category without (finite) products? Any help regarding this is appreciated. REPLY [12 votes]: Posets (viewed as categories) provide many examples of categories without certain limits. A product of two elements in a poset is simply a greatest lower bound, so you just need to make a poset where some pair of elements has no greatest lower bound. For example, the poset $\{ a, b, c, d \}$ where $a \leq c, a \leq d, b \leq c, b \leq d$. Then both $a$ and $b$ are lower bounds for $c$ and $d$, but they are incomparable.<|endoftext|> TITLE: Let $a$ be a root of the cubic $x^3-21x+35=0$. Prove that $a^2+2a-14$ is a root of the cubic. QUESTION [5 upvotes]: Let $a$ be a root of the cubic $x^3-21x+35=0$. Prove that $a^2+2a-14$ is a root of the cubic. My effort: Working backwards, I let $P(x)$ be a polynomial with roots $a,a^2+2a-14$ and $r$. Thus, $$P(x)=(x-a)(x-r)(x-(a^2+2a-14))$$ Expanding, I get $$P(x) =(x^2-(a+r)x+ar)(x-(a^2+2a-14)) $$ $$P(x) =x^3+x^2[-(a^2+2a-14)-(a+r)]+x[(a+r)(a^2+2a-14)+ar]-ar(a^2+2a-14)$$ Equating coefficients of $P(x)$ with the given cubic $x^3-21x+35=0$ I have the following system of equations: \begin{align} (a^2+2a-14)+(a+r)&=0 \\ (a+r)(a^2+2a-14)+ar&=-21 \\ -ar(a^2+2a-14)&=35 \end{align} From the first equation I have $(a^2+2a-14) =-(a+r) $ which, substituted into the other two equations, yields \begin{align} -(a+r)^2+ar &=-21 \\ ar(a+r) &=35 \end{align} Rearranging the second equation for $ar$ I have $ar=\cfrac{35}{(a+r)}$ which I now substitute into the first equation to get: \begin{align} -(a+r)^2+\cfrac{35}{(a+r)}&=-21 \\ -(a+r)^3+35 +21(a+r) &=0 \end{align} My problem now is that the last equation looks pretty darn close to $x^3-21x+35=0$ but some signs are not in the right place, which makes me wonder if I have made some careless mistake (I have already checked but I don't see it) or if I have left some algebraic manipulations to do. REPLY [2 votes]: Since the term in $x^2$ is missing, the sum of the three roots is zero; so $a^2+2a-14$ is a root if and only if $-a-(a^2+2a-14)=-a^2-3a+14$ is also a root. Since $$ (x-a^2-2a+14)(x+a^2+3a-14)=x^2+ax-a^4 - 5a^3 + 22a^2 + 70a - 196 $$ and the remainder of $-t^4 - 5t^3 + 22t^2 + 70t - 196$ divided by $t^3-21t+35$ is $t^2-21$, we have $$ (x-a^2-2a+14)(x+a^2+3a-14)=x^2+ax+a^2-21 $$ so \begin{align} (x-a)(x-a^2-2a+14)(x+a^2+3a-14) &=(x-a)(x^2+ax+a^2-21)\\ &=x^3-21x-a^3+21a\\ &=x^3-21x+35 \end{align} is the required factorization.<|endoftext|> TITLE: Is $x$ irrational when $2^{x}+3^{x}=6$? QUESTION [11 upvotes]: Is $x$ rational or irrational when $2^{x}+3^{x}=6$? How does one show that? REPLY [2 votes]: If $\gcd(m,n)=1$ with $n\gt1$ and we let $u=2^{m/n}$, the minimal polynomial for $u$ is of degree $n$, namely $P(u)=u^n-2^m=0$. But if $2^{m/n}+3^{m/n}=6$, then $$3^m=(6-u)^n=6^n-{n\choose1}6^{n-1}u+\cdots+(-1)^nu^n$$ which we can rewrite as $$Q(u)=(-1)^nu^n+\cdots-{n\choose1}6^{n-1}u+6^n-3^m=0$$ Adding or subtracting $P$ and $Q$ to cancel the $u^n$ term gives a polynomial of lower degree with $u$ as a root, contradicting the assumed minimality of $P$. Thus the equation $2^x+3^x=6$ is not satisfied by any rational value of $x$ (since $2^m+3^m\not=6$ for any integers $m$).
Note that the proof is this simple only because we're considering the sum of two rational powers. An equation like $2^x+3^x+5^x=30$ would (I think) require the more general results referenced in Wojowu's answer.<|endoftext|> TITLE: Showing $\int_{1}^{0}\frac{\ln(1-x)}{x}dx=\frac{\pi ^{2}}{6}$ QUESTION [5 upvotes]: Is there a way to show $$\int_{1}^{0}\frac{\ln(1-x)}{x}dx=\frac{\pi ^{2}}{6}$$ without using the Riemann zeta function? REPLY [4 votes]: (I assume that what you mean is that you don't want to use the fact that $\sum 1/n^2=\pi^2/6$.) Yes, there is a way of seeing this, and this is the basis of Mikael Passare's paper How to compute $\sum 1/n^2$ by solving triangles (free preprint on arXiv, or the published version on JSTOR for subscribers). If you set $x=e^{-t}$, your integral becomes $$ \int_0^{\infty} -\ln(1-e^{-t}) \, dt , $$ or $$ \int_0^{\infty} -\ln(1-e^{-x}) \, dx $$ if we call the variable $x$ again. This is the area of a region in the first quadrant of the $xy$-plane, below the curve $y=-\ln(1-e^{-x})$ (or equivalently $e^{-x}+e^{-y}=1$). This is the region called $U_0$ in Passare's paper, and he shows that there are two other regions $U_1$ and $U_2$ with the same area, and then via a clever area-preserving change of variables that the combined area of $U_0$, $U_1$ and $U_2$ equals the area of a certain right triangle $T$, which is obviously $\pi^2/2$. Hence the area of $U_0$ (your integral) is $\pi^2/6$. (See also this answer and this.)<|endoftext|> TITLE: Fibonacci sequence in the factorization of certain polynomials having a root at the Golden Ratio QUESTION [5 upvotes]: I was playing around with the Golden Ratio $\Phi = \frac{1 + \sqrt 5}{2}$ on Wolfram Alpha and I noticed that if $F_n$ denotes the $n^{\text{th}}$ Fibonacci number, then the polynomial $P_n(x) = x^n - F_n x - F_{n-1}$ seems to always have a root at $\Phi$ (and hence is divisible by the minimal polynomial $f(x) = x^2 - x - 1$ of $\Phi$). After testing several $P_n$ I noticed that factoring out the $f(x)$ gave products of the form: $$ P_n(x) = (x^2 - x - 1)(F_1 x^{n-2} + F_2 x^{n-3} + ... + F_{n-2} x + F_{n-1}) $$ where the $F_k$ are the $k^{th}$ Fibonacci numbers. I've been stumped as to why the coefficients of the second factor should list the elements of the Fibonacci sequence like that, and so I'm hoping someone here can provide insight as to why it all seems to work out so nicely. REPLY [3 votes]: Distributing the product in the r.h.s. of the factorization formula $$ P_n(x) = (x^2 - x - 1)(F_1 x^{n-2} + F_2 x^{n-3} + ...
+ F_{n-2} x + F_{n-1}) $$ gives \begin{align*}(x^2 - x - 1)\left(\sum_{k = 1}^{n - 1} F_k x^{n - 1 - k}\right) &= x^2 \sum_{k = 1}^{n - 1} F_k x^{n - 1 - k} - x \sum_{k = 1}^{n - 1} F_k x^{n - 1 - k} - \sum_{k = 1}^{n - 1} F_k x^{n - 1 - k} \\ &= \sum_{k = 1}^{n - 1} F_k x^{n + 1 - k} - \sum_{k = 1}^{n - 1} F_k x^{n - k} - \sum_{k = 1}^{n - 1} F_k x^{n - 1 - k} \end{align*} Judiciously reindexing to match exponents in $x$ gives that this is $$\sum_{k = -1}^{n - 3} F_{k + 2} x^{n - 1 - k} - \sum_{k = 0}^{n - 2} F_{k + 1} x^{n - 1 - k} - \sum_{k = 1}^{n - 1} F_k x^{n - 1 - k} .$$ If we peel off the terms whose index does not appear in every sum, we can combine the resulting summations and collect like terms: \begin{multline*} \left(F_1 x^n + F_2 x^{n - 1} + \sum_{k = 1}^{n - 3} F_{k + 2} x^{n - 1 - k}\right) - \left(F_1 x^{n - 1} + \sum_{k = 1}^{n - 3} F_{k + 1} x^{n - 1 - k} + F_{n - 1} x \right) \\ \hspace{9cm}- \left(\sum_{k = 1}^{n - 3} F_k x^{n - 1 - k} + F_{n - 2} x + F_{n - 1} \right) \\ = F_1 x^n + (F_2 - F_1) x^{n - 1} + \sum_{k = 1}^{n - 3} (F_{k + 2} - F_{k + 1} - F_k) x^{n - 1 - k} - (F_{n - 1} + F_{n - 2}) x - F_{n - 1} .\end{multline*} We have $F_1 = F_2 = 1$, so the leading term is $x^n$ and the $x^{n - 1}$ term vanishes. The coefficient of the $k$th term of the summation is $F_{k + 2} - F_{k + 1} - F_k$, but this vanishes by the definition of the Fibonacci sequence. The coefficient $F_{n - 1} + F_{n - 2}$ of the $x$ term, again by definition, is equal to $F_n$. So, as desired, the product is $$\color{#bf0000}{\boxed{(x^2 - x - 1)(F_1 x^{n-2} + F_2 x^{n-3} + ... + F_{n-2} x + F_{n-1}) = x^n - F_n x - F_{n - 1}}} .$$ NB we can view this as a variation of the familiar factorization $$(x - 1)(x^{n - 1} + x^{n - 2} + \cdots + x + 1) = x^n - 1 .$$ More precisely, we can rederive this latter identity by characterizing the sequence $(1, 1, 1, \ldots)$ via the recurrence relation $G_1 = 1$, $G_{k + 1} = G_k$ ($k \geq 1$) and proceeding as above.
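As a quick sanity check of the boxed identity, take $n = 5$, so that $F_5 = 5$ and $F_4 = 3$: $$(x^2 - x - 1)(F_1 x^3 + F_2 x^2 + F_3 x + F_4) = (x^2 - x - 1)(x^3 + x^2 + 2x + 3) = x^5 - 5x - 3,$$ and indeed all the intermediate coefficients telescope away, leaving exactly $x^5 - F_5 x - F_4$.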
<|endoftext|> TITLE: Finding the expected value in the given problem. QUESTION [8 upvotes]: It is given that a monkey types on a 26-letter keyboard whose keys are the lowercase English letters. Each letter is chosen independently and uniformly at random. If the monkey types 1,000,000 letters, what is the expected number of times the sequence "proof" appears? Here is the suggested solution: Let the random variable $X_i = 1$ if the word "proof" appears at the index $i$, else $X_i = 0$. Let $n = 1{,}000{,}000$. Hence, the expected number of appearances of the word is: \begin{align} &E\left[ \sum\limits _{i=1}^{n-4}X_i \right] & & \text{Eqn 1} \\ &= \sum\limits _{i=1}^{n-4}E[X_i] & & \text{Eqn 2} \end{align} Now $E[X_i] = 26^{-5}$. Hence the expected number of appearances is $(n-4)\cdot26^{-5}$. (Note that the upper limit is $n-4$ because the word proof has length $5$; hence it can start at index $n-4$ at the latest, so as to finish at index $n$.) Now here is my doubt. We know that if $X_i = 1$, then the following few random variables like $X_{i+1}$, $X_{i+2}$ etc. can't be $1$, because you cannot have the word proof starting at index $i$ and then another word proof starting at index $i+1$. Where in this proof have we imposed that restriction? REPLY [5 votes]: To simplify the subscripts, I'm going to consider just the first few letters the monkey types. Add a variable $k$ to all my subscripts if you want to apply the reasoning at an arbitrary point in the string. It is true that $E(X_1) = 26^{-5}$ and also that $E(X_2) = 26^{-5}$. We just have to cycle through the $26^6$ equally-likely possibilities for the first six letters the monkey types. Of these, $26$ are strings of the form "proof_" and another $26$ are of the form "_proof", where the blank is filled in with some letter from a to z. What you've observed is that when we condition the expectation of $X_2$ on the value of $X_1$, we get a result that is different from the ordinary (not conditioned) expectation. Specifically, $E(X_2 \mid X_1 = 1) = 0$, which is less than $26^{-5}$. Since the method you used applies a theorem of probability (linearity of expectation, which requires no independence at all) whose required assumptions are satisfied by the assumptions of your question (in particular, the expectation of each $X_i$ exists), we should find that there is something else going on that will somehow "balance out" the fact that the observation $X_1 = 1$ lowers the expected value of $X_2$. And in fact there is something else going on. We only observe $X_1 = 1$ once, on average, for each $26^5$ times we let a monkey type a million letters. The other $26^5 - 1$ times (on average) that we do this, we observe $X_1 = 0$. In just $26$ of those $26^5 - 1$ times (on average), the first five letters typed by the monkey will be a string of the form "_proo", where the blank is filled by a letter from a to z. In those cases, there is a $1/26$ probability (conditioned on the observed data) that the sixth letter will be f and that $X_2$ will be $1$, that is, $$P(X_2 = 1 \mid \text{letters 2 through 5 are "proo"}) = 1/26.$$ In the other $26^5 - 27$ cases, there is zero probability (conditioned on the observed data) that $X_2 = 1$. Let $A$ be the event that the first five letters have the form "_proo", $B$ the event that the first five letters are neither "proof" nor anything of the form "_proo". Then the expectation of $X_2$ conditioned on the observation $X_1 = 0$ is \begin{align} E(X_2 \mid X_1 = 0) &= 0 \cdot P(X_2 = 0 \mid A) \, P(A \mid X_1 = 0) \\ & \qquad {} + 1 \cdot P(X_2 = 1 \mid A) \, P(A \mid X_1 = 0) \\ & \qquad {} + 0 \cdot P(X_2 = 0 \mid B) \, P(B \mid X_1 = 0) \\ & \qquad {} + 1 \cdot P(X_2 = 1 \mid B) \, P(B \mid X_1 = 0) \\ &= P(X_2 = 1 \mid A) \, P(A \mid X_1 = 0) \\ & \qquad {} + P(X_2 = 1 \mid B) \, P(B \mid X_1 = 0) \\ &= \frac{1}{26} \left( \frac{26}{26^5 - 1} \right) + 0 \cdot P(B \mid X_1 = 0) \\ &= \frac{1}{26^5 - 1} \end{align} This is ever so slightly greater than the unconditional expectation, $26^{-5}$. In fact it is just large enough so that \begin{align} E(X_2) &= E(X_2 | X_1 = 0)\, P(X_1 = 0) + E(X_2 | X_1 = 1)\, P(X_1 = 1) \\ &= \frac{1}{26^5 - 1} \left( \frac{26^5 - 1}{26^5} \right) + 0 \cdot P(X_1 = 1) \\ &= 26^{-5}. \end{align} In summary, the fact that $X_1 = 1$ forces $X_2 = 0$ is balanced by the fact that $X_1 = 0$ gives a tiny boost to the probability that $X_2 = 1$. You could do a similar analysis for the effect of $X_1$ on $X_3$, $X_4$, and $X_5$. The total effect is that while an occurrence of "proof" at one position rules out occurrences at several nearby positions, each place where "proof" does not occur causes "proof" to be a little more likely than usual to occur in nearby positions. For example, "proof" at position $1$ rules out "proof" at position $5$, but it raises the probability from $26^{-5}$ to $26^{-4}$ that the string at position $5$ will be "fproo", which in turn gives a relatively very high probability ($1/26$) that "proof" will occur at position $6$.
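As an aside, plugging the question's numbers into the (now justified) suggested solution gives $$(n-4)\cdot 26^{-5} = \frac{999{,}996}{11{,}881{,}376} \approx 0.084,$$ i.e. roughly one occurrence of "proof" per twelve million letters typed.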
<|endoftext|> TITLE: Linear independence of matrix-conjugate vectors QUESTION [5 upvotes]: Let's define the vectors $\mathbf{v}_1,\dots,\mathbf{v}_m \in \mathbb{R}^n$, with $m\leq n$, to be mutually conjugate with respect to a matrix $\mathbf{A} \in \mathbb{R}^{n\times n}$ if $$\mathbf{v}_{i}^T\mathbf{A}\mathbf{v}_j = 0, \qquad 1 \leq i,j \leq m, \qquad i\not=j.$$ Assume that $\mathbf{A}$ is nonsingular, and suppose that $$ \mathbf{v}_{i}^T\mathbf{A}\mathbf{v}_i \not= 0, \qquad 1 \leq i\leq m.$$ Prove then that $\mathbf{v}_1,\dots,\mathbf{v}_m$ are linearly independent in $\mathbb{R}^n$. Does this hold even if $\mathbf{A}$ is singular? Idea of the proof The sketch of the proof I have found suggests to consider the matrix $$\mathbf{P}=\begin{pmatrix} \mathbf{v}_1 \dots \mathbf{v}_m \end{pmatrix}^T \mathbf{A} \begin{pmatrix} \mathbf{v}_1 \dots \mathbf{v}_m \end{pmatrix}.$$ It is a diagonal matrix, with $$\mathbf{P}_{(i,i)}=\mathbf{v}_{i}^T\mathbf{A}\mathbf{v}_i$$ so its determinant is nonzero. How could I infer from this that the vectors are linearly independent? Is the assumption that $\mathbf{A}$ is nonsingular necessary? REPLY [2 votes]: Ok, I seem to have found a different proof that doesn't require $\mathbf{A}$ to be nonsingular. Indeed, suppose that $\mathbf{v}_1,\dots,\mathbf{v}_m$ are not linearly independent. Then there exists $k \in \{1,\dots,m\}$ such that $$ \mathbf{v}_k=\sum_{\substack{i=1\\i\not=k}}^m\alpha_i\mathbf{v}_i$$ and for at least one $h\in \{1,\dots,m\}$, $h\not=k$, we have $\alpha_h \not=0$ (if all the $\alpha_i$ were zero, we would get $\mathbf{v}_k=0$, contradicting $\mathbf{v}_k^T\mathbf{A}\mathbf{v}_k\not=0$). Then we have $$ \mathbf{v}_{h}^T\mathbf{A}\mathbf{v}_k=\mathbf{v}_{h}^T\mathbf{A}\left(\sum_{\substack{i=1\\i\not=k}}^m\alpha_i\mathbf{v}_i\right)=\sum_{\substack{i=1\\i\not=k}}^m \alpha_i\mathbf{v}_{h}^T\mathbf{A}\mathbf{v}_i=\alpha_h\mathbf{v}_{h}^T\mathbf{A}\mathbf{v}_h\not=0$$ a contradiction, since conjugacy requires $\mathbf{v}_{h}^T\mathbf{A}\mathbf{v}_k=0$.<|endoftext|> TITLE: Modified Laplace's method QUESTION [5 upvotes]: In the application of Laplace's method (or steepest descent) it is often assumed that the dependence on the factor $N$, in which we are expanding the integral, is only in the argument of the exponential. What if we have an expression to expand for large $N$ of the type: $\int_{-\infty}^{+\infty} f(N,t)\, e^{Ng(t)}\, dt$ where the function $f$ depends only mildly on $N$. Would the method change? Are there examples of functions of this form? REPLY [6 votes]: The Laplace method can indeed handle some integrals like this, and broad strokes of the procedure are essentially the same. First you would establish that the main contribution to the size of the integral comes from a neighborhood of some $t = t_0$ (often a critical point of $g$), where the size of the neighborhood potentially depends on $N$. Then you would show that, in this neighborhood, $f(N,t) \approx f(N,t_0)$ in some sense. The details of these two steps depend on the growth and decay properties of $f$ and $g$. In my answer to this question I outlined these steps for the integral $$ I(N) = \int_0^\infty f(N,t) e^{Ng(t)}\,dt $$ with $$ f(N,t) = N^{-t} \qquad \text{and} \qquad g(t) = \log\!\left(\frac{t}{1+t^2}\right). $$ In this case the function $g$ has a critical point at $t=1$, and in the answer I end up showing that $$ I(N) \approx f(N,1) \int_{1-\epsilon}^{1+\epsilon} e^{Ng(t)}\,dt.
$$ If $f$ and $g$ are nice enough this will be the usual outcome of the method. It should be noted, of course, that not all integrals of the form $$ \int_0^\infty f(N,t) e^{Ng(t)}\,dt $$ can be approximated in this way. Take, for example, $f(N,t) = (t+1)^N$ and $g(t) = -t^2$. In this case $g$ has a critical point at $t=0$ but $$ \int_0^\infty f(N,t) e^{Ng(t)}\,dt \not\approx f(N,0) \int_{0}^{\epsilon} e^{Ng(t)}\,dt. $$ Here the growth of $f(N,t)$ competes with the decay of $e^{Ng(t)}$ as $t \to \infty$, creating a critical point for the whole integrand which does not belong to $g$ alone.<|endoftext|> TITLE: MLE of a discrete random variable QUESTION [5 upvotes]: For some reason I am having difficulty understanding how to calculate the MLE of a discrete rv. The pmf is: $$p(k;\theta) = \left\{\begin{array}{cl} \dfrac{1-\theta}3&\text{if } k=0\\[5pt] \dfrac{1}{3}&\text{if } k=1\\[5pt] \dfrac{1+\theta}{3}&\text{if } k=2\\[5pt] 0&\text{otherwise}&\end{array}\right.$$ We're also told that we have $X_1 , X_2, \ldots , X_n$ iid rvs from the above distribution (we're not told how many, i.e. $n$ is unspecified). I need to figure out the likelihood and loglikelihood. I know that the likelihood is just the product of all the pmfs, but I don't get how to do this for this discrete rv. I also know that the loglikelihood will just end up being the sum of all the logs of the pmfs... but again, I am confused. Some help would be great! REPLY [15 votes]: Be aware that, when doing MLE (in general, when doing parametric estimation) you are computing (estimating) a parameter of a probability function (pmf). If the variable is discrete, it means (roughly) that its probability function takes discrete values (in this case, $k=0,1,2$), but the parameter itself can be continuous (it can take any real value, in some domain). So, the first thing you need to make clear is: what is the parameter of my pmf that I want to estimate? in this case, it's $\theta$ is it continuous? what's its domain? in this case, looking at the pmf, we see that $\theta$ must be in the range $[-1,1]$. In this range, and only in this range, the probability function is valid (takes non-negative values). Then the parameter is continuous and its domain is $-1 \le\theta \le 1$. Once you have that established, you try to write the likelihood. If you are not sure, start with some simple example. Assume you have only two samples, say, $x_1=2$, $x_2=0$. The likelihood of this realization is $$L(\theta)=p(x_1=2;\theta) \times p(x_2=0;\theta) = \frac{1+\theta}{3} \frac{1-\theta}{3} $$ To write this in general, suppose you have $n_0$ samples that take value $x=0$, $n_1$ that take value $x=1$ etc. Then $$L(\theta)=p(x=0;\theta)^{n_0}p(x=1;\theta)^{n_1}p(x=2;\theta)^{n_2}$$ Write that expression down, and take its logarithm if you think this simplifies things (it does). Then ask yourself: for given $n_0,n_1,n_2$, this is a (continuous) function of $\theta$; what is the value of $\theta$ that maximizes this function, in the given domain? Update: given that you've done your homework, here's my solution $$\log L(\theta)= n_0 \log(1-\theta) +n_2 \log(1+\theta) +\alpha $$ where $\alpha $ is a term that does not depend on $\theta$ (we can leave it out). This function is differentiable in $(-1,1)$, so we can look for critical points (candidate extrema) as: $$\frac{d\log L(\theta)}{d \theta}= \frac{n_2}{1+\theta}-\frac{n_0}{1-\theta} $$ Setting this equal to zero, we get $\theta_0=(n_2-n_0)/(n_0+n_2)$. Have we already found the MLE, then? Not really. We have only found a critical point of $L(\theta)$.
To assert that a critical point is a global maximum we need to 1) check that it's a local maximum (it could be a local minimum or neither) 2) check that the local maximum is really a global maximum (what about the non-differentiable or boundary points?). We can usually check that with the second derivative. But in this case it's simpler. We see that at the boundary ($\theta = \pm 1$) the log-likelihood tends to $-\infty$ (assuming $n_0, n_2 > 0$). Hence, given that the function is differentiable inside the interval, and it has a single critical point, it must be a (local and global) maximum.
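To see the estimator on some hypothetical counts: if a sample contains $n_0 = 2$ zeros, $n_1 = 3$ ones and $n_2 = 5$ twos, then $$\hat\theta = \frac{n_2 - n_0}{n_0 + n_2} = \frac{5 - 2}{5 + 2} = \frac{3}{7},$$ consistent with the intuition that a surplus of $2$'s over $0$'s indicates $\theta > 0$; the count $n_1$ plays no role, as expected, since $p(1;\theta) = \frac13$ does not depend on $\theta$.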
<|endoftext|> TITLE: Computation of an iterated integral QUESTION [6 upvotes]: I want to prove $$\int\limits_{-\infty}^\infty\int\limits_{-\infty}^\infty\frac{\sin(x^2+y^2)}{x^2+y^2}dxdy=\frac{\pi ^{2}}{2}.$$ Since the function $(x,y)\mapsto\sin(x^2+y^2)/(x^2+y^2)$ is not integrable, I can't use the Theorem of Change of Variable. So, I'm trying to use residue formulae for some suitable holomorphic function to compute the inner integral, but I can't continue. Can someone suggest a hint to solve this problem? Addendum: I may be wrong, but I suspect the Theorem of Change of Variable (TCV) is not the answer. The reason is the following: the number $\pi^2/2$ is obtained if we apply polar coordinates, but TCV guarantees that if we apply any other change of variable we get the same number, $\pi^2/2$. If this function were integrable, this invariance property would be guaranteed, but that is not the case. Thus we may have strange solutions to this integral. REPLY [6 votes]: Let $D$ be any Jordan domain in $\mathbb{R}^2$, containing the origin in its interior, whose boundary $\partial D$ has the form $r = f(\theta)$ in polar coordinates, where $f \in C[0,2\pi]$. Consider the following integral as a functional of $D$: $$\mathcal{I}_D \stackrel{def}{=} \int_D \phi(x,y) dx dy \quad\text{ where }\quad\phi(x,y) = \frac{\sin(x^2+y^2)}{x^2+y^2} $$ Since the origin is a removable singularity for $\phi(x,y)$, as long as $D$ is of finite extent, there isn't any issue about integrability or change of variable. We have $$\mathcal{I}_D = \int_0^{2\pi} \int_0^{f(\theta)}\frac{\sin(r^2)}{r^2} rdr d\theta = \frac12\int_0^{2\pi} \left[\int_0^{f(\theta)^2}\frac{\sin t}{t} dt \right] d\theta $$ For any non-increasing, non-negative function $g$ on $(0,\infty)$, integration by parts (in the Riemann-Stieltjes sense) shows that $$\left|\int_a^b g(x) \sin(x) dx \right| \le 2 g(a)\quad\text{ for }\quad 0 < a < b < \infty$$ For any $R > 0$ with $B(0,R) \subset D$, setting $g(x) = 1/x$ in the above inequality leads to the following estimate for $\mathcal{I}_D$: $$\left| \mathcal{I}_D - \mathcal{I}_{B(0,R)} \right| = \frac12 \left| \int_0^{2\pi} \left[\int_{R^2}^{f(\theta)^2}\frac{\sin t}{t} dt \right] d\theta \right| \le \frac12 \int_0^{2\pi} \left|\int_{R^2}^{f(\theta)^2}\frac{\sin t}{t} dt\right| d\theta \le \frac{2\pi}{R^2} $$ For any fixed $Y$, the integrand $\phi(x,y)$ is Lebesgue integrable over $(-\infty,\infty)\times [-Y,Y]$, so the double integral below is well defined, and with the help of the dominated convergence theorem one can evaluate it as a limit: $$\int_{-Y}^Y \int_{-\infty}^{\infty}\phi(x,y) dxdy = \lim_{X\to\infty}\int_{-Y}^Y \int_{-X}^X \phi(x,y) dxdy = \lim_{X\to\infty}\mathcal{I}_{[-X,X]\times[-Y,Y]}$$ We will combine this with the above estimate. By setting $R = Y$ and letting $[-X,X] \times [-Y,Y]$ take the role of $D$, one gets $$\left|\int_{-Y}^{Y} \int_{-\infty}^{\infty}\phi(x,y) dxdy - \mathcal{I}_{B(0,Y)}\right| \le \limsup_{X\to\infty}\left|\int_{-Y}^{Y} \int_{-X}^{X}\phi(x,y) dxdy - \mathcal{I}_{B(0,Y)}\right| \le \frac{2\pi}{Y^2}$$ Since the following two limits exist, $$\lim_{Y\to\infty} \mathcal{I}_{B(0,Y)} = \lim_{Y\to\infty} \pi\int_0^{Y^2}\frac{\sin t}{t}dt = \pi\int_0^\infty \frac{\sin t}{t} dt = \frac{\pi^2}{2} \quad\text{ and }\quad \lim_{Y\to\infty}\frac{2\pi}{Y^2} = 0$$ by squeezing, the double integral at hand exists as an improper integral! $$\int_{-\infty}^\infty \int_{-\infty}^\infty \phi(x,y) dxdy \stackrel{def}{=} \lim_{Y\to\infty} \int_{-Y}^Y \int_{-\infty}^\infty \phi(x,y) dxdy = \lim_{Y\to\infty} \mathcal{I}_{B(0,Y)} = \frac{\pi^2}{2}$$<|endoftext|> TITLE: Is $k+p$ prime infinitely many times? QUESTION [10 upvotes]: I have the following conjecture: Let $k\in\mathbb{N}$ be even. Now $k+p$ is prime for infinitely many primes $p$. I couldn't find anything on this topic, but I'm sure this has been thought of before. I tried to solve this using Dirichlet's theorem on arithmetic progressions and the Green–Tao theorem, but no luck with those. Is this question equivalent to an existing open problem? If not, how can I prove this (I prefer hints, but I appreciate full answers, too)? - Edit - As has been pointed out in the comments, this is not a duplicate. I'm asking for infinitely many primes $p$ such that $p+k$ is prime, not only one. REPLY [2 votes]: There are some results known. It has been proven that there is at least one $k \leq 246$ that appears infinitely often as a prime gap. Furthermore, assuming the Elliott–Halberstam conjecture and its generalisation respectively, one can prove that there is at least one $k \leq 12$, respectively $k \leq 6$, that appears infinitely often as a prime gap. Of course, a prime gap is stronger than your condition, since the primes don't have to be consecutive.<|endoftext|> TITLE: What do $\{ceps_q\}_{q=0}^Q$ and $\{a_q\}_{q=1}^p$ mean? QUESTION [14 upvotes]: As a programmer who hasn't had any higher mathematical training, I sometimes find mathematical equations described in books or online that I'd like to implement in my programs, but they have symbols in them that I'm unfamiliar with. Or they use symbols that I'm familiar with in an unfamiliar way. It's very frustrating, especially as I can't even tell what area of mathematics to start hunting for them in. Right now, I'm trying to figure out this: (source: oakcircle.com) I've mostly figured out the actual equations shown there (the summation symbol wasn't too hard to find via Wikipedia, and I finally figured out that the big open-brace was a way of showing if-then equations), but I'm stuck on the two things in the initial paragraph. The first one, between "The cepstrum coefficients" and "can be estimated"... what do the "Q" and "q=0" mean in that? And in the one between "from the LPC coefficients" and "using a recursion procedure:", the "p" and "q=1"? REPLY [10 votes]:
double[] a = new double[p + 1];    // LPC coefficients, used at indices 1..p
// ... some stuff to initialize a ...
double[] ceps = new double[Q + 1]; // cepstrum coefficients, indices 0..Q
ceps[0] = Math.log(G);             // G is the gain term from the LPC model
for (int q = 1; q <= p; q++) {
    double sum = a[q];
    for (int k = 1; k <= q - 1; k++) {
        sum += (double) (k - q) / q * a[k] * ceps[q - k];
    }
    ceps[q] = sum;
}
for (int q = p + 1; q <= Q; q++) {
    double sum = 0;
    for (int k = 1; k <= p; k++) {
        sum += (double) (k - q) / q * a[k] * ceps[q - k];
    }
    ceps[q] = sum;
}
Edit: the cast in (double) (k - q) / q matters: written as (k - q) / q with int operands, the division would be integer division and truncate toward zero.
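In summation notation, the two loops compute exactly $$ceps_0=\ln G,\qquad ceps_q = a_q + \sum_{k=1}^{q-1}\frac{k-q}{q}\,a_k\,ceps_{q-k}\quad (1\le q\le p),$$ $$ceps_q = \sum_{k=1}^{p}\frac{k-q}{q}\,a_k\,ceps_{q-k}\quad (p< q\le Q).$$ So the notation $\{ceps_q\}_{q=0}^Q$ just means "the family of values $ceps_0, ceps_1, \ldots, ceps_Q$", i.e. the array ceps indexed from $0$ to $Q$, and likewise $\{a_q\}_{q=1}^p$ is the array a indexed from $1$ to $p$.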
<|endoftext|> TITLE: What is the number of ordered triplets $(x, y, z)$ such that the LCM of $x, y$ and $z$ is ... QUESTION [5 upvotes]: What is the number of ordered triplets $(x, y, z)$ such that the LCM of $x, y$ and $z$ is $2^33^3$, where $x, y, z\in \Bbb N$? What I tried: At least one of $x, y$ and $z$ should have the factor $2^3$ and at least one should have the factor $3^3$. I then tried to figure out the possible combinations but couldn't get the correct answer. REPLY [2 votes]: Consider all candidate triples of the form: $$ (2^{a_1}3^{b_1}, 2^{a_2}3^{b_2}, 2^{a_3}3^{b_3}) $$ where for each $i \in \{1, 2, 3\}$, we have $a_i, b_i \in \{0, 1, 2, 3\}$. We define such a candidate triple to be valid if for some $j, k \in \{1, 2, 3\}$, we have $a_j = 3$ and $b_k = 3$. Otherwise, if ($a_j \in \{0, 1, 2\}$ for all $j \in \{1, 2, 3\}$) or ($b_k \in \{0, 1, 2\}$ for all $k \in \{1, 2, 3\}$), then such a candidate triple is considered invalid. Observe that (counting the invalid triples by inclusion-exclusion): \begin{align*} \text{# of valid triples} &= \text{# of candidate triples} - \text{# of invalid triples} \\ &= 4^6 - (3^3 \cdot 4^3 + 4^3 \cdot 3^3 - 3^6) \\ &= 4096 - 2727 = 1369 \end{align*}<|endoftext|> TITLE: Is there a first order formula $\varphi[x]$ in $(\mathbb Q, +, \cdot, 0)$ such that $x≥0$ iff $\varphi[x]$? QUESTION [6 upvotes]: In the first-order language $\mathscr L$ having $(+, \cdot, 0)$ as signature, it is easy to define a formula $\phi[x]$, namely $\exists y \; x = y^2$, satisfying: $$\text{for all } x \in \Bbb R, \quad x \in \Bbb R_+ \;\text{ if and only if} \;\; \phi[x] $$ My question is: what happens if I replace $\Bbb R$ by $\Bbb Q$? More precisely: Is there a first-order formula $\phi[x]$ of $\scr L$, such that $$\text{for all } x \in \Bbb Q, \quad x \in \Bbb Q_+ \;\text{ if and only if} \;\; \phi[x] $$ Said differently, I would like to know if the set of the positive rationals is definable in that language. Related questions are, for instance: (1), (2). I don't know if the $\scr L$-structure $(\Bbb Q, +, \cdot, 0)$ admits elimination of quantifiers. If this is the case, then this could be helpful; see this answer. Thank you for your comments! REPLY [2 votes]: It is a famous result of Julia Robinson that $(\Bbb{Q}, +, \cdot, 0)$ is undecidable. This implies that $(\Bbb{Q}, +, \cdot, 0)$ does not admit elimination of quantifiers. That the rational numbers are not definable in the first-order theory of the reals follows from this, but also follows from well-known facts about O-minimality of the first-order theory of the reals.<|endoftext|> TITLE: What is "white noise" and how is it related to the Brownian motion?
QUESTION [29 upvotes]: In Chapter 1.2 of Stochastic Partial Differential Equations: An Introduction by Wei Liu and Michael Röckner, the authors introduce stochastic partial differential equations by considering equations of the form $$\frac{{\rm d}X_t}{{\rm d}t}=F\left(t,X_t,\dot B_t\right)$$ where $\left(\dot B_t\right)_{t\ge 0}$ is a "white noise in time" (whatever that means) with values in a separable Hilbert space $U$. $\left(\dot B_t\right)_{t\ge 0}$ is said to be the "generalized time-derivative" of a $U$-valued Brownian motion $(B_t)_{t\ge 0}$. Question: What exactly do the authors mean? What is a "white noise in time" and why (and in which sense) is it the "generalized time-derivative" of a Brownian motion? You can skip the following, if you know the answer to these questions. I will present what I've found out so far: I've searched the terms "white noise" and "distributional derivative of Brownian motion" on the internet and found few and inconsistent definitions. Definition 1: In the book An Introduction to Computational Stochastic PDEs the authors do the following: Let $(\phi_n)_{n\in\mathbb N}$ be an orthonormal basis of $L^2([0,1])$, e.g. $\phi_n(t):=\sqrt 2\sin(n\pi t)$. Then $$W_t:=\lim_{n\to\infty}\sum_{i=1}^n\phi_i(t)\xi_i\;\;\;\text{for }t\in [0,1]\;,$$ where the $\xi_i$ are independent and standard normally distributed random variables on a probability space $(\Omega,\mathcal A,\operatorname P)$, is a stochastic process on $(\Omega,\mathcal A,\operatorname P)$ with $\operatorname E[W_t]=0$ and $$\operatorname E[W_sW_t]=\delta(s-t)\;\;\;\text{for all }s,t\in [0,1]$$ where $\delta$ denotes the Dirac delta function. They call $(W_t)_{t\in [0,1]}$ white noise. This definition seems to depend on the explicit choice of the orthonormal basis $(\phi_n)_{n\in\mathbb N}$ and I don't see the connection to a "derivative" of a Brownian motion (needless to say that I don't see how this would generalize to a cylindrical Brownian motion). However, maybe it has something to do with the following: Let $(B_t)_{t\ge 0}$ be a real-valued Brownian motion on $(\Omega,\mathcal A,\operatorname P)$. Then the Karhunen–Loève theorem yields $$B_t=\lim_{n\to\infty}\sum_{i=1}^n\sqrt{\zeta_i}\phi_i(t)\xi_i\;\;\;\text{for all }t\in [0,1]$$ in $L^2(\operatorname P)$ and uniformly in $t$, where $(\phi_n)_{n\in\mathbb N}$ is an orthonormal basis of $L^2([0,1])$ and $(\xi_n)_{n\in\mathbb N}$ is a sequence of independent standard normally distributed random variables on $(\Omega,\mathcal A,\operatorname P)$. In particular, $$\zeta_i=\frac 4{(2i-1)^2\pi^2}$$ and $$\phi_i(t)=\sqrt 2\sin\frac t{\sqrt{\zeta_i}}\;.$$ The authors state that we can formally consider the derivative of $B$ as being the process $$\dot B_t=\lim_{n\to\infty}\sum_{i=1}^n\phi_i(t)\xi_i\;.$$ I have no idea why. Nevertheless, we may notice the following: Let $${\rm D}^{(\Delta t)}_t:=\frac{B_{t+\Delta t}-B_t}{\Delta t}\;\;\;\text{for }t\ge 0$$ for some $\Delta t>0$.
Then $\left({\rm D}^{(\Delta t)}_t\right)_{t\ge 0}$ is a stochastic process on $(\Omega,\mathcal A,\operatorname P)$ with $$\operatorname E\left[{\rm D}^{(\Delta t)}_t\right]=0\;\;\;\text{for all }t\ge 0$$ and $$\operatorname{Cov}\left[{\rm D}^{(\Delta t)}_s,{\rm D}^{(\Delta t)}_t\right]=\left.\begin{cases}\displaystyle\frac{\Delta t-|s-t|}{\Delta t^2}&\text{, if }|s-t|\le \Delta t\\0&\text{, if }|s-t|\ge \Delta t\end{cases}\right\}=:\eta^{(\Delta t)}(s-t)\;\;\;\text{for all }s,t\ge 0\;.$$ Since $$\int\eta^{(\Delta t)}(x)\;{\rm d}x=\int_{-\Delta t}^{\Delta t}\eta^{(\Delta t)}(x)\;{\rm d}x=1$$ we obtain $$\eta^{(\Delta t)}(x)\stackrel{\Delta t\to 0}\to\delta(x)\;,$$ but I have no idea how this is related to white noise. Definition 2: In Stochastic Differential Equations with Applications to Physics and Engineering, Modeling, Simulation, and Optimization of Integrated Circuits and Generalized Functions - Vol 4: Applications of Harmonic Analysis they take a real-valued Brownian motion $(B_t)_{t\ge 0}$ on $(\Omega,\mathcal A,\operatorname P)$ and define $$\langle W,\phi\rangle:=\int\phi(t)B_t\;{\rm d}\lambda\;\;\;\text{for }\phi\in\mathcal D:=C_c^\infty([0,\infty))\;.$$ Let $\mathcal D'$ be the dual space of $\mathcal D$. We can show that $W$ is a $\mathcal D'$-valued Gaussian random variable on $(\Omega,\mathcal A,\operatorname P)$, i.e. $$\left(\langle W,\phi_1\rangle,\ldots,\langle W,\phi_n\rangle\right)\text{ is }n\text{-dimensionally normally distributed}$$ for all linearly independent $\phi_1,\ldots,\phi_n\in\mathcal D$, with expectation $$\operatorname E[W](\phi):=\operatorname E\left[\langle W,\phi\rangle\right]=0\;\;\;\text{for all }\phi\in\mathcal D$$ and covariance $$\rho[W](\phi,\psi):=\operatorname E\left[\langle W,\phi\rangle\langle W,\psi\rangle\right]=\int\int\min(s,t)\phi(s)\psi(t)\;{\rm d}\lambda(s)\;{\rm d}\lambda(t)\;\;\;\text{for all }\phi,\psi\in\mathcal D\;.$$ Moreover, the derivative $$\langle W',\phi\rangle:=-\langle W,\phi'\rangle\;\;\;\text{for }\phi\in\mathcal D\tag 1$$ is again a $\mathcal D'$-valued Gaussian random variable on $(\Omega,\mathcal A,\operatorname P)$ with expectation $$\operatorname E[W'](\phi)=0\;\;\;\text{for all }\phi\in\mathcal D\tag 2$$ and covariance \begin{equation} \begin{split} \varrho[W'](\phi,\psi)&=\int\int\min(s,t)\phi'(s)\psi'(t)\;{\rm d}\lambda(s)\;{\rm d}\lambda(t)\\ &=\int\int\delta(t-s)\phi(s)\psi(t)\;{\rm d}\lambda(t)\;{\rm d}\lambda(s) \end{split} \end{equation} for all $\phi,\psi\in\mathcal D$. Now they call a generalized Gaussian stochastic process with expectation and covariance given by $(1)$ and $(2)$ a Gaussian white noise. Thus, the generalized derivative $W'$ of the generalized Brownian motion $W$ is a Gaussian white noise. Again, I don't know how I need to generalize this to the case of a cylindrical Brownian motion. Moreover, this definition seems less natural to me and I don't think that this is the notion Liu and Röckner had in mind.
Definition 3: In some lecture notes, I've seen the following definition: Let $W$ be a centered Gaussian process, indexed by test functions $\phi\in C^\infty([0,\infty)\times\mathbb R^d)$, whose covariance is given by $$\operatorname E\left[W_\phi W_\psi\right]=\int_0^\infty{\rm d}t\int_{\mathbb R^d}{\rm d}x\int_{\mathbb R^d}{\rm d}y\,\phi(t,x)\psi(t,y)\delta(x-y)\tag 3$$ or $$\operatorname E\left[W_\phi W_\psi\right]=\int_0^\infty{\rm d}t\int_{\mathbb R^d}{\rm d}x\,\phi(t,x)\psi(t,x)\tag 4\;.$$ Then $W$ is called "white noise in time and colored noise in space" in the case $(3)$ and "white noise, both in time and space" in the case $(4)$. They simply state that $\delta$ is some "reasonable" kernel which might blow up to infinity at $0$. I suppose this is related to Definition 2. Again, I don't know how I need to generalize this to the case of a cylindrical Brownian motion. Definition 4: This definition is very sloppy in its notation: Let $(W_t)_t$ be a centered Gaussian process with covariance $\operatorname E[W_sW_t]=\delta(s-t)$ where $\delta$ denotes the Dirac delta function. Then, in a [lecture note] I've found (Example 3.56), they state that $$B_t:=\int_0^tW_s\;{\rm d}B_s\tag 5\;\;\;\text{for }t\ge 0$$ is a real-valued Brownian motion. I haven't verified that result. Is it correct? Whatever the case is, if this is the reason why white noise is considered to be the derivative of a Brownian motion, we should be able to show that every Brownian motion has a representation of the form $(5)$. Can this be shown? The same questions as above remain. Definition 5: Let $(B_t)_{t\ge 0}$ be a real-valued Brownian motion on $(\Omega,\mathcal A,\operatorname P)$ and define $$\langle W,\varphi\rangle:=\int_0^\infty\varphi(s)\;{\rm d}B_s\;\;\;\text{for }\varphi\in\mathcal D:=C_c^\infty((0,\infty))\;.$$ Then $$\langle W',\varphi\rangle:=\int_0^\infty\varphi'(s)\;{\rm d}B_s\;\;\;\text{for }\varphi\in\mathcal D$$ is considered to be the generalized derivative of the generalized Brownian motion $W$. The same questions as above remain. Conclusion: I've found different notions of "white noise" and "generalized derivative" of a Brownian motion, but I don't know in which sense they are consistent and which of them Liu and Röckner meant. So, I would be very happy if someone could give a rigorous definition of these terms in the case of a cylindrical Brownian motion or at least in the case of a Hilbert space valued Brownian motion. REPLY [11 votes]: I'm a physicist, so I'll give you the "dirty" answer. No definitions, theorems, or proofs. Let's start with Brownian motion. Brownian motion is the path taken by tiny particles in a viscous fluid due to being bombarded by the random thermal motion of the fluid molecules. There are two main modeling approaches. Einstein used a limited derivation of the Fokker-Planck equation to show that an ensemble of such particles obeys the diffusion equation. Langevin took a noise approach and showed that a particle with a small amount of momentum driven by small uncorrelated impacts follows a path with an exponentially decaying autocorrelation. Turning back to white noise (I'll tie it all together eventually), assuming spatial and temporal homogeneity (the conditions in the infinite beaker are the same everywhere and do not change in time), the tiny impacts to the particle constitute a sort of noise signal in time. Maybe they are the readings on an impact meter strapped to the particle.
Because these impacts are 1) uncorrelated, 2) independent, and 3) comprised of an enormous number of other collisions taking place in the fluid between hits to the Brownian particles, their magnitude has a Gaussian distribution. If you take the Fourier transform (in real life, the FFT) of the impact signals for a large ensemble of Brownian particles and average them, you find that the power spectrum is constant over all frequencies, and that the power for a given frequency is distributed across the ensemble as a Gaussian distribution around the mean. Thus the impact signals are (on average) a combination of equal portions of all frequencies, so we call it "white" noise, as in light comprised of all frequencies of visible light. Back to the particles, their motion is the sum of a large number of these white noise impacts. If you are willing to consider this an integral in time, then you know that the power spectrum of this time integral will be proportional to $1/f^2$. This is the power spectrum of Brownian motion. Taking this the opposite way, from the Langevin equation you can see that the motion of a Brownian particle has an exponentially decaying autocorrelation. This corresponds to a power spectrum that decays as $1/f^2$. Calculating the derivative in frequency space, the derivative of Brownian motion looks like white noise.
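To make that last step concrete, here is the standard Langevin computation in symbols (the notation $m$, $\gamma$, $k_B T$ for mass, drag coefficient and temperature is generic physics convention, not taken from anything above): the equation $$m\,\dot v(t) = -\gamma\, v(t) + \xi(t),\qquad \operatorname E\left[\xi(t)\,\xi(t')\right] = 2\gamma k_B T\,\delta(t-t'),$$ has the stationary velocity autocorrelation $$\operatorname E\left[v(t)\,v(t+\tau)\right] = \frac{k_B T}{m}\,e^{-(\gamma/m)|\tau|},$$ whose Fourier transform is the Lorentzian spectrum $S_v(\omega)\propto\frac{1}{(\gamma/m)^2+\omega^2}$. For frequencies well above $\gamma/m$ this falls off like $1/\omega^2$, which is the $1/f^2$ behavior described above.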
None of this is physically or mathematically rigorous. But this is the general modeling approach used in physics and digital signal processing. However, these dirty models are the basis, and I might even say raison d'être, for the rigorous mathematical models. However, being dirty, there are probably multiple ways the white noise and Brownian motion can be defined. So it may simply be that when you read book A or paper X you need to use their definition.<|endoftext|> TITLE: Which non-abelian finite groups have the property that every subgroup is normal? QUESTION [6 upvotes]: If $G$ is an abelian group, every subgroup $H$ of $G$ is normal. I searched for non-abelian finite groups $G$ such that every subgroup is normal, and GAP showed only the groups $G'\times Q_8$, where $Q_8$ is the quaternion group of order $8$ and $G'$ is an abelian group with the additional property that $C_4$ is not a subgroup of $G'$. The largest group I found with GAP was $C_{15}\times Q_8$. Is it true that every non-abelian finite group $G$ with the property that every subgroup of $G$ is normal, is isomorphic to $G'\times Q_8$ with an abelian group $G'$ without $C_4$ as a subgroup? REPLY [8 votes]: Hall's book "The Theory of Groups" proves (Theorem 12.5.4) that these groups, known as Hamiltonian groups, are all of the form $Q_8 \times A \times B$ where $A$ is an elementary $2$-group, $B$ is an abelian group where every element is of finite odd order, and $Q_8$ is the quaternion group. All groups of this form are Hamiltonian. Note there is no finiteness restriction on these $A$, $B$ groups, so there are also infinite Hamiltonian groups.<|endoftext|> TITLE: On groups with presentations $ \langle a,b,c\mid a^2=b^2=c^2=(ab)^p=(bc)^q=(ca)^r=(abc)^s=1\rangle $... QUESTION [7 upvotes]: $$ \langle a,b,c\mid a^2=b^2=c^2=(ab)^p=(bc)^q=(ca)^r=1\rangle =\Delta(p,q,r) $$ This is a presentation of a triangle group $\Delta(p,q,r)$, a special kind of Coxeter group. EDIT In fact, these are called extended triangle groups, by G. Jones and D. Singerman in Maps, hypermaps and triangle groups... What about the following presentation: $$ \langle a,b,c\mid a^2=b^2=c^2=(ab)^p=(bc)^q=(ca)^r=(abc)^s=1\rangle $$ Do these groups have a name and where are they treated? The presentation in question is motivated by this and that... ANOTHER EDIT if $p=q=r$ is prime and $s=1$ this is called a triangular Fuchsian group here... REPLY [8 votes]: I haven't come across a name for this family in full generality, but the special case in which $p=2$ was defined and studied by Coxeter in his paper H. S. M. Coxeter, The abstract groups $G^{ m, n, p}$, Trans. Amer. Math. Soc. 45 (1939), 73-150, where (in your notation) the group is called $G^{q,r,s}$. Also, when $s$ is even, your group has a subgroup of index $2$ with presentation $\langle x,y \mid x^p=y^q=(xy)^r=[x,y]^{s/2}=1 \rangle$. These groups were studied in the same paper by Coxeter, and denoted $(p,q,r;s/2)$. Both of these families have been extensively studied since then, in particular concerning their finiteness. They are generally infinite for sufficiently large values of the parameters, and there is just a handful of remaining cases for which their finiteness is still unknown. A few years ago Havas and I showed, using a big computer calculation, that $(2,3,13;4)$ is finite of order $358\,848\,921\,600$. So your group with $(p,q,r,s) = (2,3,13,8)$ has twice that order.<|endoftext|> TITLE: How can I check whether the group $[16,13]$ in GAP with $3$ generators can be generated by $2$ elements? QUESTION [6 upvotes]: The group $[16,13]$ in GAP has structure $(C_4\times C_2):C_2$ and is generated by the permutations $(1234)(5678)$, $(15)(26)(37)(48)$ and $(57)(68)$. The group $[16,3]$, in contrast, with the same structure, is a $2$-generated group: $(12)(34)$ and $(23)(5678)$ is a possible set of generators. How can I prove that the group $[16,13]$ cannot be generated by $2$ elements? REPLY [5 votes]: In GAP there is a MinimalGeneratingSet(G) command which returns a minimal generating set; in this case it says it is a three-element set. There is some more information on this page. The documentation says it is really only efficient for nilpotent, solvable groups, and some special cases (if you have all the correct packages). In this particular case you could also use GAP to compute the abelianization (or do it by hand, possibly), which is elementary abelian of order $8$, hence of rank three; since generators of a group map onto generators of its abelianization, the full group cannot be generated by fewer than three elements. A general strategy I can think of is that you can compute quotients, and figure out the minimal generating set of the quotients (which in principle would be easier) and get a bound on the minimal number of generators required.<|endoftext|> TITLE: Why was the zeta function introduced? QUESTION [9 upvotes]: I know the 'Zeta Function' is very useful in Mathematics, and that it has relations with many other functions (such as the 'Gamma Function'). I also know the 'Zeta Function' $\zeta(s)$ is defined as: $$\zeta (s) = \sum_{n=1}^{\infty} {1\over {n^s}}$$ But my question is why and how was this even derived? I've studied and understood many proofs regarding $\zeta(s)$, such as: $$\Gamma(s) \zeta(s) = \int_{0}^{\infty} {{u^{s-1}\over {e^u}-1}} \space du$$ $$\zeta(s) = {2^s}{\pi^{s-1}}{sin \bigg({\pi s\over 2}\bigg)}{\Gamma(1-s)}{\zeta(1-s)}$$ But anytime I try to search for information regarding the derivation of $\zeta(s)$, all I get is the fact that Leonhard Euler was amongst the first to study it. Nothing more. Is there any article I can read that talks about how $\zeta(s)$ came to be? REPLY [4 votes]: It is all about Prime Numbers.
We know only a few things about prime numbers (for instance, that there are infinitely many of them), and many of their characteristics remain unknown. Some of these unproved conjectures and open questions are: - Every even number greater than $4$ can be expressed as a sum of two primes. - There are infinitely many twin primes $\{p,\ p+2 \text{ both prime}\}$. - How does one determine whether a given number is prime? - How many prime numbers exist below a given number? In the 17th century, there was a famous problem in mathematics called the Basel Problem. The Basel problem asked for the precise summation (closed form) of the reciprocals of the squares of all natural numbers: $$\lim_{n\rightarrow\infty}\left(\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\text{...}+\frac{1}{n^2}\right) = \space\text{?}$$ This problem stood for about a hundred years until it was solved by Euler in 1734. He was the first to find that the limit equals $\pi^2/6$. After solving this famous problem, Euler continued to investigate these kinds of series, and found that: $$\small\frac{1}{1^x}+\frac{1}{2^x}+\frac{1}{3^x}+\frac{1}{4^x}+\text{... etc} = \frac{2^x}{2^x-1}\cdot\frac{3^x}{3^x-1}\cdot\frac{5^x}{5^x-1}\cdot\frac{7^x}{7^x-1}\cdot\frac{11^x}{11^x-1}\cdot\text{... etc}\quad\colon x\gt1$$ This beautiful identity, later called the Euler product, reflects a direct relationship between all integers and all primes.
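Where does the identity come from? Expanding each factor as a geometric series, $$\frac{p^x}{p^x-1} = \left(1 - \frac{1}{p^x}\right)^{-1} = 1 + \frac{1}{p^x} + \frac{1}{p^{2x}} + \cdots,$$ and multiplying the expansions over all primes $p$ produces each term $\frac{1}{n^x}$ exactly once, $$\prod_{p\ \text{prime}}\left(\sum_{k=0}^{\infty}\frac{1}{p^{kx}}\right) = \sum_{n=1}^{\infty}\frac{1}{n^x},$$ because every integer $n \ge 1$ factors into prime powers in exactly one way. The identity is, in effect, an analytic restatement of the fundamental theorem of arithmetic.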
By rephrasing and extending it to the complex plane, the Zeta function of a complex argument $s$ is defined by: $$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s} = \prod_{p\ \text{prime}}\frac{1}{1-1/p^s}\quad\colon \operatorname{Re}(s)\gt1$$ where the behavior of the Zeta function is governed by the behavior of the prime numbers. In other words, the Zeta function gives a way to study the discrete prime numbers using a continuous analytic function. In fact, Euler was fascinated by this conversion idea, and likewise he introduced the Gamma function: $$\Gamma(s)=\int_{0}^{\infty} \frac{x^{s-1}}{e^x}\,dx\quad\colon \operatorname{Re}(s)\gt0$$ as a way of extending the factorial function from discrete integer points to a continuous real curve: $$n!=n\,(n-1)! \quad\equiv\quad \Gamma(x)=(x-1)\,\Gamma(x-1)$$ - John Derbyshire: Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem. - Julian Havil: Gamma: Exploring Euler's Constant. - Leonhard Euler: Introduction to Analysis of The Infinite.<|endoftext|> TITLE: Proving that cyclotomic polynomials have integer coefficients QUESTION [5 upvotes]: I don't understand why Gauss's lemma is invoked in the proof in Dummit and Foote that $\Phi_n(x)$ (the $n$th cyclotomic polynomial) belongs to $\mathbb{Z}[x]$. I'm an analyst and I wanted to remind myself about cyclotomic polynomials. The following is the proof as I've written it, and because it's been a while since I've done algebra I want to know if there is something I'm taking for granted. It is a fact that if $R$ is a unital commutative ring, $f \in R[x]$ is a monic polynomial and $g \in R[x]$ is a polynomial, then there are $q,r \in R[x]$ with $g = qf+r$, $r=0$ or $\deg r < \deg f$. First, $\Phi_1(x)=x-1 \in \mathbb{Z}[x]$. For $n>1$, assume that $\Phi_d(x) \in \mathbb{Z}[x]$ for $1 \leq d < n$. Then let $f = \prod_{d \mid n,\, d < n} \Phi_d(x)$, which by the inductive hypothesis is a monic polynomial in $\mathbb{Z}[x]$.<|endoftext|> TITLE: Seeking non-inductive, combinatorial proof of the identity $1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{n(n + 1)(2n + 1)}{6}$ QUESTION [5 upvotes]: How do you prove $$1^2 + 2^2 + 3^2 + \cdots + n^2 = \dfrac{n(n + 1)(2n + 1)}{6}$$ without induction? I'm looking for a combinatorial proof of this. REPLY [2 votes]: We can count the sum $$2^2 + 4^2 + \dots + (2n)^2 = {2n + 2 \choose 3}$$ as follows: Choose three numbers from $\{1, 2, \dots, 2n+2\}$. If the largest number chosen is odd, write it as $2m+1$; the other two smaller numbers can be chosen in ${2m \choose 2}$ ways. If the largest number chosen is $2m+2$, the other two numbers can be chosen in ${2m+1 \choose 2}$ ways. From the identity $${2m \choose 2} + {2m+1 \choose 2} = m(2m-1) + m(2m+1) = 4m^2 = (2m)^2$$ and summing this for $m = 0, 1, 2, \dots, n$, the result for the even squares follows, and the general result follows on dividing by four, since $2^2 + 4^2 + \dots + (2n)^2 = 4\left(1^2 + 2^2 + \dots + n^2\right)$.<|endoftext|> TITLE: How Can $ {L}_{1} $ Norm Minimization with Linear Equality Constraints (Basis Pursuit / Sparse Representation) Be Formulated as Linear Programming? QUESTION [9 upvotes]: Problem Statement Show how the $L_1$-sparse reconstruction problem: $$\min_{x}{\left\lVert x\right\rVert}_1 \quad \text{subject to} \; y=Ax$$ can be reduced to a linear programming problem of form similar to: $$\min_{u}{b^Tu} \quad \text{subject to} \; Gu=h, Cu\le e.$$ We can assume that $y$ belongs to the range of $A$, typically because $A\in \mathbb{R}^{m\times n}$ is full-rank with $m\lt n$. What I've Got I have never worked with linear programming before, and though I think I understand the basics, I have no experience with this kind of reduction. So far, I have tried understanding the problem geometrically: any $x$ in the solution set to $y=Ax$ can be written as the sum of some arbitrary solution $x_0$ and a vector $v$ in the null space of $A$, so the solution set is a shifted copy of the null space. We are trying to expand the $L_1$-ball (or hyper-diamond? I don't know what to call it) until one of the corners hits that shifted subspace. My problem is, I don't know how to express that formally. The best I can think of is to use a method similar to Converting Sum of Infinity Norm and $ {L}_{1} $ Norm to Linear Programming and let $t_i=\left\lvert x_i\right\rvert, i=1\dots n$ and rewrite the objective as: $$\min_{t}{1^Tt} \quad \text{subject to} \; x\le t, -x\le t, y=Ax$$ But then $x$ is still floating around in the problem, which doesn't match the desired form (and isn't implementable with MATLAB's linprog function, which I will have to do later). And even if we find such a $t$, recovering the underlying $x$ doesn't seem straightforward to me either. Am I even moving in the right direction? Any help is appreciated.
REPLY [15 votes]: Conversion of Basis Pursuit to Linear Programming The Basis Pursuit problem is given by: $$ \begin{align*} \arg \min_{x} \: & \: {\left\| x \right\|}_{1} \\ \text{subject to} \: & \: A x = b \end{align*} $$ Method A The term $ {\left\| x \right\|}_{1} $ can be written in element-wise form: $$ {\left\| x \right\|}_{1} = \sum_{i = 1}^{n} \left| {x}_{i} \right| $$ Then, setting $ \left| {x}_{i} \right| \leq {t}_{i} $, one could write: $$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: \left| {x}_{i} \right| \leq {t}_{i} \; \forall i \end{align*} $$ Since $ \left| {x}_{i} \right| \leq {t}_{i} \iff {x}_{i} \leq {t}_{i}, \, {x}_{i} \geq -{t}_{i} $, this becomes: $$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: {x}_{i} \leq {t}_{i} \; \forall i \\ & \: {x}_{i} \geq -{t}_{i} \; \forall i \end{align*} $$ This can be further refined: $$ \begin{align*} \arg \min_{x, t} \: & \: \boldsymbol{1}^{T} t \\ \text{subject to} \: & \: A x = b \\ & \: I x - I t \preceq \boldsymbol{0} \\ & \: -I x - I t \preceq \boldsymbol{0} \end{align*} $$ Which is a Linear Programming problem. Method B Define: $$ x = u - v, \; {u}_{i} = \max \left\{ {x}_{i}, 0 \right\}, \; {v}_{i} = \max \left\{ -{x}_{i}, 0 \right\} $$ Then the problem becomes: $$ \begin{align*} \arg \min_{u, v} \: & \: \sum_{i = 1}^{n} {u}_{i} + {v}_{i} \\ \text{subject to} \: & \: A \left( u - v \right) = b \\ & \: u \succeq \boldsymbol{0} \\ & \: v \succeq \boldsymbol{0} \end{align*} $$ Which is also a Linear Programming problem. MATLAB Implementation The MATLAB implementation is straightforward using the linprog() function. The full code, including validation using CVX, can be found in my StackExchange Mathematics Q1639716 GitHub Repository. Code Snippet - Method A
function [ vX ] = SolveBasisPursuitLp001( mA, vB )
numRows = size(mA, 1);
numCols = size(mA, 2);
%% vX = [vX; vT]
mAeq = [mA, zeros(numRows, numCols)];
vBeq = vB;
vF = [zeros([numCols, 1]); ones([numCols, 1])];
mA = [eye(numCols), -eye(numCols); -eye(numCols), -eye(numCols)];
vB = zeros(2 * numCols, 1);
sSolverOptions = optimoptions('linprog', 'Display', 'off');
vX = linprog(vF, mA, vB, mAeq, vBeq, [], [], sSolverOptions);
vX = vX(1:numCols);
end
Code Snippet - Method B
function [ vX ] = SolveBasisPursuitLp002( mA, vB )
numRows = size(mA, 1);
numCols = size(mA, 2);
% vU = max(vX, 0);
% vV = max(-vX, 0);
% vX = vU - vX;
% vUV = [vU; vV];
vF = ones([2 * numCols, 1]);
mAeq = [mA, -mA];
vBeq = vB;
vLowerBound = zeros([2 * numCols, 1]);
vUpperBound = inf([2 * numCols, 1]);
sSolverOptions = optimoptions('linprog', 'Display', 'off');
vUV = linprog(vF, [], [], mAeq, vBeq, vLowerBound, vUpperBound, sSolverOptions);
vX = vUV(1:numCols) - vUV(numCols + 1:end);
end
I used the code above in Reconstruction of a Signal from Sub Sampled Spectrum by Compressed Sensing.
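As a minimal usage sketch, assuming one of the two functions above is on the MATLAB path (the sizes, seed and sparsity level here are arbitrary choices for illustration):
% Hypothetical test: recover a sparse vector from random measurements.
rng(0);                                     % reproducible seed
numRows = 32; numCols = 128;
mA = randn(numRows, numCols);               % underdetermined measurement matrix
vXTrue = zeros(numCols, 1);
vXTrue(randperm(numCols, 4)) = randn(4, 1); % 4-sparse ground truth
vB = mA * vXTrue;
vXEst = SolveBasisPursuitLp001(mA, vB);     % or SolveBasisPursuitLp002
disp(norm(vXEst - vXTrue, Inf));            % small when recovery succeeds
With $m=32$ random measurements of a $4$-sparse vector in $\mathbb{R}^{128}$, Basis Pursuit typically recovers the signal up to solver tolerance.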
<|endoftext|> TITLE: How to prove that $k^3+3k^2+2k$ is always divisible by $3$? QUESTION [13 upvotes]: How can I prove that the following polynomial expression is divisible by 3 for all integers $k$? $$k^3 + 3k^2 + 2k$$ REPLY [2 votes]: By Fermat's little theorem $k^3\equiv k\pmod{3}$, therefore: $$k^3+\underbrace{3k^2}_{\equiv 0}+\underbrace{2k}_{\equiv -k}\equiv k^3-k\equiv 0\pmod{3}$$<|endoftext|> TITLE: What is meant by a 'pure' wave? QUESTION [6 upvotes]: What is meant by a 'pure' wave? I know it might sound like a basic question, but I've never been taught this. I saw that a sine wave is a pure wave. I tried Googling what a pure wave is, but all I get is links regarding Pure Wave Inverters for sale...which is not what I'm looking for. REPLY [3 votes]: In electrical technology any periodic motion can be expressed as a sum of the fundamental wave and several harmonics whose frequencies are multiples of the fundamental; this decomposition is the Fourier series. If the wave is pure, it means that the amplitudes of all harmonics other than the fundamental are zero. The other harmonics are seen as "contaminating" the pure sine wave. A pure wave is also called simple harmonic: it satisfies the harmonic differential equation $$ \ddot x + \omega^2 x =0, $$ whose solutions are sinusoids with period $T$ determined by $ \omega T = 2 \pi $. A square wave, for example, contains several non-fundamental components in addition to the pure signal: it consists of third, fifth, ... order harmonics: Other Harmonics in Square Wave<|endoftext|> TITLE: Zauner's conjecture QUESTION [7 upvotes]: The conjecture is as follows: In $\mathbb{C}^{n}$, there exists $\{v_1,\cdots,v_{n^2}\}$ such that the following holds: $$ \left| \left \langle v_i, v_j \right \rangle \right| = \begin{cases} 1 & i = j\\ \frac{1}{n+1} & i \ne j\end{cases}$$ I have a proof for when $n = 2$; basically what I did was to assume without loss of generality that one of the vectors is $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$, and brute-force the rest of the vectors. I'm curious where this construction fails when $n \ge 3$, or has the conjecture already been proven? Though I can't seem to find literature on the Internet saying that it has been proven. REPLY [8 votes]: Here is a recent update, with a proof that Zauner's conjecture (the existence of $n^2$ equiangular lines in $n$ complex dimensions) holds for $n\leq 67$: SIC-POVMs: A new computer study (2010). It remains an open question whether the conjecture is true in all dimensions. Some insight into the difficulty of the conjecture is given in The Lie Algebraic Significance of Symmetric Informationally Complete Measurements (2011). See also this MO post and http://physicsoverflow.org/382 This blog from 2015 gives more background.<|endoftext|> TITLE: Which is the most powerful language, set theory or category theory? QUESTION [7 upvotes]: As far as I know, mathematics is written based on a language, which can be for example set theory or category theory. My concern is about the power of these languages. How can we tell which language is more powerful, i.e. with which we can construct more structures? Maybe there is an assumption which implies all languages are equal, and actually there is no strength preference, although there is maybe a simplicity preference. If that is the case, could you please clarify why you think this assumption is good and obvious? P.S.: I am not an expert, so if you think my question is absolutely wrong or does not make sense at all, please let me know. REPLY [33 votes]: Set theory and category theory are both foundational theories of mathematics (they explain basics), but they attack different aspects of foundations. Set theory is largely concerned with "how do we build mathematical objects (or what could we build)" while category theory is largely concerned with "what structure do mathematical objects have (or could have)"? Mathematicians work in informal set theory and informal category theory, which are immensely useful as lingua franca and as collections of universally useful concepts and techniques, but their formal versions are not actually needed by mathematicians for the most part.
This is witnessed by the fact that the average mathematician is unable to list the axioms of Zermelo-Fraenkel set theory, and even of first-order logic. Yet, they are perfectly able to do complicated math. The formal versions of set theory and category theory are of interest to people who study foundations of mathematics. The relationship between these two and computation has been known for a while. I highly recommend Bob Harper's blog post about the Holy Trinity for a quick read, and Steve Awodey's From Sets to Types to Categories to Sets if you would like to know more about the connections and their significance. The upshot is that we can mostly translate between set theory, type theory, and category theory, and that ordinary mathematicians could do their mathematics in either of the three systems, but the systems are not exclusive. In fact, a smart mathematician will be aware of their connections and will take advantage of them.<|endoftext|> TITLE: $\lim_{n \to \infty} \int_{0}^{n}(1-\frac{3x}{n})^ne^{\frac{x}{2}}dx$=? QUESTION [6 upvotes]: $$\lim_{n \to \infty} \int_{0}^{n}\left(1-\frac{3x}{n}\right)^ne^{\frac{x}{2}}dx$$ I thought about using the theorem of monotone convergence and had $$f_n(x)=\left(1-\frac{3x}{n}\right)^ne^{\frac{x}{2}} \lambda_{[0,n]}(x).$$ It is increasing, but I don't know whether it is valid to write: $$\lim_{n \to \infty} \int_{0}^{n}\left(1-\frac{3x}{n}\right)^ne^{\frac{x}{2}}dx= \int_{0}^{\infty}\lim_{n \to \infty}\left(1-\frac{3x}{n}\right)^ne^{\frac{x}{2}}dx=\int_{0}^{\infty}e^{\frac{x}{2}-3x}dx$$ REPLY [5 votes]: The integral diverges. To see this, we can write $$\int_0^n \left(1-\frac{3x}n\right)^ne^{x/2}\,dx=\int_0^{n/3} \left(1-\frac{3x}n\right)^ne^{x/2}\,dx+\int_{n/3}^n \left(1-\frac{3x}n\right)^ne^{x/2}\,dx \tag 1$$ We will present two parts. In Part $1$, we will show that the first integral on the right-hand side of $(1)$ converges. In Part $2$, we will show that the second integral diverges. PART $1$: The first integral on the right-hand side of $(1)$ can easily be shown to converge. In THIS ANSWER, I used only the limit definition of the exponential function and Bernoulli's Inequality to establish that the exponential function satisfies the inequality $$\left(1+\frac{x}{n}\right)^n\le e^x$$ for $x>-n$. Therefore, we assert that $\left(1-\frac{3x}n\right)^n\le e^{-3x}$ for $x<n/3$. Hence the first integral satisfies $$\int_0^{n/3}\left(1-\frac{3x}n\right)^ne^{x/2}\,dx\le\int_0^{\infty}e^{-3x}e^{x/2}\,dx=\int_0^{\infty}e^{-5x/2}\,dx=\frac25,$$ so it remains bounded as $n\to\infty$.<|endoftext|> TITLE: Construction of a continuous function which maps some point in the interior of an open set to the boundary of the Range
Let $f: U \rightarrow V $ such that $x \in V \implies f(x)=x$ and $x \in U-V \implies f(x)= x/||x|| $. REPLY [5 votes]: I am assuming you want $V$ to actually be the image of $U$. In this case, there is no such map satisfying your second condition. If $m = n$, this follows from invariance of domain, since the image of $U$ will necessarily be open. If $m < n$, there is no continuous injective map from $\mathbb{R}^n$ to $\mathbb{R}^m$ (let alone to a closed subset). You can find some more elementary arguments here, or you can again apply invariance of domain. In particular, if $f : U \to V$ is continuous and injective, and $\iota : \mathbb{R}^m \to \mathbb{R}^n$ is an inclusion map, then $\iota \circ f : U \to \mathbb{R}^n$ is an open map, but the image of $U$ is not open in $\mathbb{R}^n$. For $m > n$, similar logic: we cannot have a continuous injective map from $U$ onto a set with non-empty interior in $\mathbb{R}^m$ (since $\mathbb{R}^n$ and $\mathbb{R}^m$ are not homeomorphic). Since we didn't even use the fact that an interior point maps to the boundary of $V$, I suspect there is an easier argument. (Maybe take an interior point $u$ which maps to the boundary, restrict to a small compact neighborhood of $u$, and use the fact that the map on the compact neighborhood is a homeomorphism and must preserve the boundary). Also, I guess it's possible that you never intended for $V$ to actually be the image of $U$. In this case, we still cannot find such a function when $m \leq n$, based on the same arguments as above, but we can for $m > n$. For example, take $V$ to be the unit disk in $\mathbb{R}^2$ and consider the map $(-1/2,1/2) \to \mathbb{R}^2 : x \mapsto (x,1-4x^2)$, or something similar. This maps $0$ to the boundary of $V$.<|endoftext|> TITLE: Proof that the special linear group $\mathrm{SL}(n,\mathbb{R})$ is a smooth manifold QUESTION [5 upvotes]: This is my definition of a smooth manifold I am supposed to work with: Let $\mathcal{M} \subseteq \mathbb{R}^{n}$. The set $\mathcal{M}$ is a $k$-dimensional smooth submanifold of $\mathbb{R}^{n}$ if: $\mathcal{M}$ is given locally as the zero set of a function $F\colon W \to \mathbb{R}^{n-k}$, where $W \subseteq \mathbb{R}^{n}$ is open in $\mathbb{R}^{n}$, $F$ is of class $C^{\infty}$, and so that $\mathcal{M} \cap W = \{\, x \in W \mid F(x)=0\, \}$. My definition also requires that the Jacobian $\mathcal{J}_{F}(x)$ has rank $n-k$ for every point $x \in \mathcal{M} \cap W$. $\ $ The special linear group is the set $\mathrm{SL}(n,\mathbb{R}) = \{\, A \in M_{n \times n}(\mathbb{R}) \mid \det(A)=1\, \}$. I am trying to prove that this set is a smooth submanifold. From my reading online, I have understood that this submanifold has dimension $n^2-1$. I'm going to make the identification $\mathbb{R}^{n^2}\cong M_{n \times n}(\mathbb{R})$. My thoughts on proving this are to define a function $G\colon M_{n \times n}(\mathbb{R}) \to \mathbb{R}$ given by $G(A)=\det(A)-1$. We have obviously $M_{n \times n}(\mathbb{R}) \cap \mathrm{SL}(n,\mathbb{R}) = \mathrm{SL}(n,\mathbb{R})$ where $M_{n \times n}(\mathbb{R}) \cong \mathbb{R}^{n^2}$ is open, and we know that $G$ is smooth since $\det$ is smooth. Furthermore, $\mathrm{SL}(n,\mathbb{R}) = \{\, A \in M_{n \times n}(\mathbb{R}) \mid G(A)=0\, \}$. Finally, the Jacobian $\mathcal{J}_{G}$ of the function is going to be a column vector of length $n^2$, and I will need to show that it has rank $1$. But since it's a column vector, isn't this an immediate conclusion?
So to me it seems like this proof is finished. However, it seems a little too easy. Have I made any errors or omissions? REPLY [8 votes]: To show that the Jacobian really is of rank $1$ (i.e., not equal to the zero vector), I suggest you expand $\det A$ by minors along the first column, say: $$\det A = \sum_{i=1}^n (-1)^{i+1}a_{i,1}M_{i,1}.$$ Of course, $(-1)^{i+1}M_{i,1}$ is in fact an entry of the Jacobian. So the Jacobian can only be zero if also $\det A=0$. But we are looking at the Jacobian for points $A\in SL_n(\Bbb R)$!<|endoftext|> TITLE: Does there exist $a\in\mathbb N$, $b\in\mathbb Z$ such that $2^na+b$ is a square for all $1\le n\le5$? QUESTION [5 upvotes]: We consider $a\in\mathbb N$, $b\in\mathbb Z$ such that numbers of the form $2^na+b$ are squares for the largest possible number of values of $n=1,2,3,4,\ldots$. It is easy to see that $a = 60 $, $ b = -119 $ produce four squares: $2a+b=1=1^2$, $4a+b=121=11^2$, $8a+b=361=19^2$, $16a+b=841=29^2$. The next value, $ 32a + b = 1801 $, is no longer a square. And now the actual questions: a) Does there exist $a\in\mathbb N$, $b\in\mathbb Z$ such that $2^na+b$ is a square for all $1\le n\le5$? b) Does there exist $a\in\mathbb N$, $b\in\mathbb N$ such that $2^na+b$ is a square for all $1\le n\le4$? REPLY [2 votes]: $$a = 4110224238120,\qquad b = 74019764836081 $$ are the numbers for item b).<|endoftext|> TITLE: For which Lie groups $G$ can one write $g$ as the exponential $\exp X$ of some $X \in {\frak g}$ for every element $g \in G$? QUESTION [6 upvotes]: I am reading a book on matrix Lie algebras (Brian Hall's). Corollary 2.30 says that if $G$ is a connected matrix Lie group, then every element $A$ of $G$ can be written in the form $$A=e^{X_1}e^{X_2}\ldots{}e^{X_m}$$ for some $X_1, X_2, \ldots, X_m$ in the Lie algebra. Immediately after, it is stressed that even if $G$ is connected, it is not true that every element $A$ of $G$ can be written $$A=e^{X}$$ where $X$ is a Lie algebra element. My background is in physics, and I have many times seen Lie group elements written using only one exponential with absolute impunity (for example with $SU(2)$). Can anybody tell me when it is true that there is some Lie algebra element $X$ with $A=e^X$ for every $A$ in the Lie group? REPLY [9 votes]: If $G$ is compact and connected, one can prove (by constructing a bi-invariant metric on $G$ and relating the metric and Lie group exponential maps) that the exponential map $\exp : {\frak g} \to G$ is surjective. This justifies the claim for, e.g., $SU(2)$. Using the Baker-Campbell-Hausdorff formula, one can show that $\exp$ is also surjective for Lie groups that are connected, simply connected, and nilpotent. (See this blog post of Terry Tao for more.) There is a condition on Lie groups equivalent to surjectivity of $\exp$, namely, divisibility: A group is divisible iff for every $g \in G$ and every $k \in \Bbb Z_+$ there is some $h \in G$ such that $h^k = g$. (See the paper cited below.) Checking this condition is not necessarily easier than checking surjectivity directly, however. We can use this criterion to show readily that $\exp$ is not surjective for all connected Lie groups: For example, we can check directly that $B := \pmatrix{-1&0\\0&-2}$ has no square root in $GL(2, \Bbb R)$ (say, by writing out the components of $X^2 = B$ and deriving a contradiction). So, $GL_+(2, \Bbb R) := \{A \in GL(2, \Bbb R) : \det A > 0\}$ is not divisible, and hence $\exp: {\frak gl}(2, \Bbb R) \to GL_+(2, \Bbb R)$ is not surjective. Hoffman, Lawson. Divisible Semisubgroups of Lie Groups. J. London Math.
Soc. (1983) s2-27 (3): 427-434.<|endoftext|> TITLE: Subsets of sets containing empty set QUESTION [7 upvotes]: Why is $\{\emptyset\}$ not a subset of $\{\{\emptyset\}\}$? It contains this element, but why is it not a subset? REPLY [10 votes]: You can think of sets like plastic bags if you want; the empty set is just a plastic bag with nothing in it, $\{\emptyset\}$ is a plastic bag with another plastic bag in it, and $\{\{\emptyset\}\}$ is three layers of plastic bags. The element relation $A\in B$ means that you could open up bag $B$ and take out $A$. The subset relation $A\subset B$ means that every object that you could directly take out of $A$ can also be directly taken out of $B$. So, look at $\{\emptyset\}$. You can "open it up" and take out $\emptyset$, but you can't do that with $\{\{\emptyset\}\}$. Therefore, $\{\emptyset\}\not\subset \{\{\emptyset\}\}$.<|endoftext|> TITLE: Size of a point. QUESTION [5 upvotes]: I know this may sound too simple or maybe too absurd to discuss, but I am having a hard time visualizing a point in space! In Euclid's Elements a 'Point' is defined as Something which has no part. Now, any geometrical figure viz. Line Segment, Triangle, Square etc. can be said to be composed of points. No matter how small we try to make a point, it still has some size/dimension. So, how can this infinitude of points add up to give the length/perimeter of the above mentioned figures, when according to Euclid, these have NO PART? REPLY [2 votes]: No matter how small we try to make a point, it still has some size/dimension. It sounds a bit like you are talking about drawing a point, but that is not what we're doing when we imagine a point in geometry. A point is an idealized, primitive notion. It does not have any physical size to speak of. how can this infinitude of points add up to give the length/perimeter of the above mentioned figures, when according to Euclid, these have NO PART? As Andre Nicholas mentioned in the comments, the idea of having "no part" speaks to the indivisibility or atomicness of a point. Lines and planes have many parts: in particular, their points are parts of them. Why do you feel you can raise an objection? Is there some rule somewhere that says a collection of things without size can't have a size? It sounds a bit like you're thinking of these in terms of measure theory, where the measure of the whole can be the sum of measures of the parts (provided there are not too many parts.) But nobody has established any sort of measure in this discussion. Now, even if one established a conventional measure like length and area, the axioms only provide additivity for countable collections of sets, not uncountable collections like the set of points on a segment. There's just no concrete reason to be disturbed when an enormous collection of dimensionless things can be gathered into something with some dimension to it. PS: Why spend a lot of time discussing matters in Euclid's terms? In this day and age, it's probably better to take a modern approach first; then one can appreciate Euclid more fully and maybe not get so tripped up in archaic language.<|endoftext|> TITLE: How does one find the numerical average of $x^x$ on $(-4,-2)$ without $x$-values that give a complex output? QUESTION [5 upvotes]: I wanted to find the approximate average of all real values of $x^{x}$ on $[-4,-2]$. This means I am ignoring all real inputs that give a complex output, and I need the average to be a real number. To first solve this I found the following defined sets of $x^x$ when $x<0$.
$$x=-\frac{2m}{2k+1},\qquad m,k\in\mathbb{Z}$$ $$x=-\frac{2m+1}{2k+1},\qquad m,k\in\mathbb{Z}$$ Each of the sets coincides with another expression, so I got the following piecewise definition. $$x^x=\begin{cases} (-x)^x & x\in\left\{ -{2m\over 2k+1}\ \middle|\ m, k \in \Bbb Z\right\}\\ -\left(-x\right)^{x} & x\in\left\{ -{2m+1\over 2k+1}\ \middle|\ m, k \in \Bbb Z\right\}\\ \text{undefined} & \text{otherwise} \end{cases} $$ Then I took the limit definition of an integral along with $\frac{1}{b-a}$ to determine the average. $$\frac{1}{b-a}\lim_{n\to\infty}\sum_{i=1}^{n}{f\left(a+\left(\frac{b-a}{n}\right)i\right)}\left(\frac{b-a}{n}\right)=\frac{1}{b-a}\int_{a}^{b}f(x)\,dx$$ where $f(x)=x^x$. (Note that it is possible to skip $x$-values that give an undefined output in an interval. An example is $\{-12/3,-11/3,-10/3,-9/3,-8/3,-7/3,-6/3\}$.) When $n\in\left\{\left.{4m+2}\right|\ m \in \mathbb{Z} \right\}$ there are numbers whose outputs coincide with $(-x)^{x}$ and numbers whose outputs coincide with $-(-x)^{x}$. So I took the probability of having numbers whose outputs coincide with $(-x)^x$ versus numbers whose outputs coincide with $-(-x)^{x}$ in $[-4,-2]$. I got $\frac{1}{2}$ for both of them. Thus I found that when $n \in \left\{\left.{4m+2}\right|\ m \in \mathbb{Z} \right\}$ and $n\to\infty$ $$\frac{1}{(-2)-(-4)}\lim_{n\to\infty}\sum_{i=1}^{n}{f\left(-4+\left(\frac{2}{n}\right)i\right)}\left(\frac{2}{n}\right)=\frac{1}{2}\left(\frac{1}{2}\int_{-4}^{-2}{(-x)}^{x}\,dx+\frac{1}{2}\int_{-4}^{-2}-{\left(-x\right)}^{x}\,dx\right)=0$$ But when $n$ is odd, all numbers in the interval have outputs coinciding only with $(-x)^{x}$. So when $n$ is an odd integer and $n\to\infty$ $$\frac{1}{-2+4}\lim_{n\to\infty}\sum_{i=1}^{n}{f\left(-4+\left(\frac{2}{n}\right)i\right)}\left(\frac{2}{n}\right)=\frac{1}{2}\int_{-4}^{-2}{\left(-x\right)}^{x}\,dx\approx 0.062152$$ I have two different numbers depending on which $n$ value I choose. So does this mean the average does not exist? If not, how could one find the average? What other methods can be used to find a real-number average? EDIT: Analyzing this, I believe that one should choose specific $n$'s, despite a limit having to accept all $n$'s. I presume that the partition of such a Riemann sum should be as dense as possible and should represent all defined inputs. This leads me to believe the average is 0. Since this goes against the definition of a limit, I think the definition of a Riemann sum should be slightly altered for densely defined inputs with undefined outputs. For example, I believe intervals with a numerator increase of $1$, as in $\{-12/3,-11/3,-10/3,-9/3,-8/3,-7/3,-6/3\}$, would give an accurate average, compared to a numerator increase of $2$, as in $\{-12/3,-10/3,-8/3,-6/3\}$. When there is a numerator increase of $1$ I end up with zero. Since this represents inputs whose outputs coincide with both $(-x)^x$ and $-(-x)^{x}$, the average should be zero. Perhaps my reasoning is correct, but I would like to find out if this is the case. REPLY [2 votes]: There is at least one sense in which you might say the "average" value of $f(x) = x^x$ on $[-4,-2]$ is zero. Before getting to that, however, I'd like to document some of the ways not to approach this problem. For various reasons, I think trying to apply integration is a non-starter. The function as we understand it here is defined only on a countable number of points (rational exponents) and undefined on an uncountable number of points in the interval.
This makes it impossible to apply the definition of Riemann integration (in any form I've ever seen it), and the Lebesgue integral of any function over a countable set of points is zero by definition regardless of the function values, so it tells us nothing about the average value of the function. In addition to not having a good domain for integration, within that domain you have function values that are greater than $1/256$ interleaved densely with function values that are less than $-1/256$. So any time you use one of those values as the height of a rectangle "approximating" the function, it's completely unrepresentative of infinitely many function values in the interval. If you attempt to define an integral using only Riemann sums of $n$ rectangles with uniform widths (which is not a legitimate use of Riemann sums in the first place), as $n$ goes to infinity, each time $n$ increases you'll have some number of intervals on which that interval's part of its "Riemann" rectangle "flips" across the axis. This process never converges. Worse still, whenever $n$ is divisible by $4$, about half the values of $x^x$ for $x = -4 + j\frac2n$ are not even defined, because after reducing the fraction to lowest terms, the denominator is even. So basically I would immediately discard any ideas that would define the average in terms of an integral, or anything purporting to be some kind of Riemann sum. The next best thing would be if we could somehow average the function values as an infinite series. There are only a countable number of $x$ values in the domain of this function, so they can be organized in a series. Unfortunately for this approach, a series using these function values cannot be absolutely convergent. Even after we divide each partial sum by the number of terms, it's still possible to "converge" to any sum you want by choosing the order in which the terms are added. One thing we can do is (somewhat arbitrarily) decide that we're interested in some particularly "regular" subsequences of the function values, for example, for some integer $k$, take $x^x$ where $x$ ranges over all multiples of the fraction $\frac{1}{2k+1}$ within the interval $[-4,-2]$. Take the average of these values, and then see what happens to that average as $k$ goes to infinity; that is, for $f(x) = x^x$, define $$ \mu = \lim_{k\to\infty} \frac{1}{4k+3} \sum_{j=0}^{4k+2} f \left(-4 + \frac{j}{2k+1} \right) $$ and call $\mu$ defined in this way a kind of "average". (Note: Despite its resemblance to Riemann sums that are sometimes used when computing Riemann integrals, this is not the derivation of an integral. It is simply an arithmetic mean of a set of values of $f(x)$ for $x \in [-4,-2]$. To actually have a Riemann integral, you have to consider all partitions of the interval into "almost disjoint" subintervals, including many for which it is not possible to construct a Riemann sum over regularly spaced values of $x$ like these. And let's not forget that rigorous definitions of Riemann integration typically assume a function that is defined on a compact interval, not a function that is undefined on a dense subset of points in that interval.) The value of this "average" is zero.
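Before the formal argument, here is a quick numerical sanity check (a small Python sketch of my own, not part of the original answer; it implements the piecewise definition of $x^x$ from the question and averages it over all multiples of $\frac{1}{2k+1}$ in $[-4,-2]$):

from fractions import Fraction

def f(x):
    # x = -p/q < 0 in lowest terms with q odd; then x^x = +-(p/q)^(-p/q)
    p, q = -x.numerator, x.denominator
    magnitude = (p / q) ** (-p / q)             # this is (-x)^x
    return magnitude if p % 2 == 0 else -magnitude

for k in (1, 10, 100, 1000):
    points = [Fraction(-4) + Fraction(j, 2 * k + 1) for j in range(4 * k + 3)]
    print(k, sum(f(x) for x in points) / (4 * k + 3))

The printed averages shrink toward $0$ as $k$ grows, which is consistent with the claim.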
Here's how we can know that: For any integer $k$, we can separate the aforementioned sum into four parts: the first term, the last term, the "odd" terms in between, and the "even" terms in between: \begin{split} \sum_{j=0}^{4k+2} f \left(-4 + \frac{j}{2k+1} \right) = f(-4) &+ \sum_{j=0}^{2k} f \left(-4 + \frac{2j+1}{2k+1} \right) \\ &+ \sum_{j=1}^{2k} f \left(-4 + \frac{2j}{2k+1} \right) + f(-2). \end{split} Define a function $g$ by $g(x) = x^{-x}$. If $x = -\frac pq$, then for $p$ even and $q$ odd, $$f(x) = \left(-\frac pq\right)^{-p/q} = \left(\frac pq\right)^{-p/q} = g(-x),$$ but for $p$ and $q$ both odd, $$f(x) = \left(-\frac pq\right)^{-p/q} = -\left(\frac pq\right)^{-p/q} = -g(-x).$$ So we can rewrite the sum as \begin{split} \sum_{j=0}^{4k+2} f \left(-4 + \frac{j}{2k+1} \right) = g(4) &- \sum_{j=0}^{2k} g \left(4 - \frac{2j+1}{2k+1} \right) \\ &+ \sum_{j=1}^{2k} g \left(4 - \frac{2j}{2k+1} \right) + g(2) \\ = g(2) &- \sum_{j=0}^{2k} g \left(2 + \frac{2j+1}{2k+1} \right) \\ &+ \sum_{j=1}^{2k} g \left(2 + \frac{2j}{2k+1} \right) + g(4). \end{split} Since $g$ is a decreasing function over $[2,4]$, $$ g \left(2 + \frac{2j-1}{2k+1} \right) > g \left(2 + \frac{2j}{2k+1} \right) > g \left(2 + \frac{2j+1}{2k+1} \right), $$ so if we add the term $g(2)$ to the sum of the "even" terms and compare with the sum of the "odd" terms, we find that $$ g(2) + \sum_{j=1}^{2k} g \left(2 + \frac{2j}{2k+1} \right) > \sum_{j=0}^{2k} g \left(2 + \frac{2j+1}{2k+1} \right), $$ but if we instead add the sum of the "even" terms to $g(4)$, we find that $$ \sum_{j=0}^{2k} g \left(2 + \frac{2j+1}{2k+1} \right) > \sum_{j=1}^{2k} g \left(2 + \frac{2j}{2k+1} \right) + g(4). $$ We can conclude that $$ -g(2) < \sum_{j=1}^{2k} g \left(2 + \frac{2j}{2k+1} \right) - \sum_{j=0}^{2k} g \left(2 + \frac{2j+1}{2k+1} \right) < -g(4), $$ and therefore $$ \frac{1}{256} = (g(2) + g(4)) - g(2) < \sum_{j=0}^{4k+2} f \left(-4 + \frac{j}{2k+1} \right) < (g(2) + g(4)) - g(4) = \frac14. $$ Since the sum is bounded for any $k$, $$ \mu = \lim_{k\to\infty} \frac{1}{4k+3} \sum_{j=0}^{4k+2} f \left(-4 + \frac{j}{2k+1} \right) = 0 $$ Now, to make this just a little more interesting, instead of taking just the multiples of $\frac{1}{2k+1}$, suppose we consider all multiples of any fraction $\frac{1}{2t+1}$ for any integer $t \leq k$, but count each such rational number only once. That is, instead of throwing away all the multiples of $\frac13$ and replacing them with multiples of $\frac15$, then throwing them away and replacing them with multiples of $\frac17$, we let the smaller fractions "fill in" the gaps between the larger ones without throwing anything away. So at $k=3$ we have terms in the sum for all multiples of $\frac13$, $\frac15$, and $\frac17$ in the interval $[-4,-2]$, but the values $f(-4)$, $f(-3)$, and $f(-2)$ are all counted only once in the sum. For $k=4$ we add terms for all the multiples of $\frac19$ except the multiples of $\frac13$, since we already have those; and so forth. This would be a relatively satisfying limit to evaluate. I suspect that the limit of the sum as $k$ goes to infinity is still zero, although I do not have a proof at this time. (Each new batch of terms resembles one of the $\frac{1}{2k+1}$ sums from the proof above, but it has "holes" that are not accounted for in that proof; otherwise this new limit would be practically a corollary of the previous one.)<|endoftext|> TITLE: application of L'Hopital's rule?
QUESTION [5 upvotes]: I am trying to evaluate the following limit: $$ \lim_{x \to 0} \frac{e^x}{\sum_{n = 1}^\infty n^k e^{-nx}}, $$ where $k$ is a large (but fixed) positive integer. I am unsure how to proceed. Can this be done using L'Hopital's rule? Just started learning calculus, thanks guys!! REPLY [4 votes]: For each $0\lt x\lt\frac12$, there is an $n\in\mathbb{N}$ so that $1\lt\frac1x-1\le n\lt\frac1x$. Picking this one term out gives $$ \sum_{n=0}^\infty n^ke^{-nx}\ge \left(\frac1x-1\right)^ke^{-1} $$ As $x\to0^+$, $\left(\frac1x-1\right)^ke^{-1}\to\infty$, so the denominator grows without bound while the numerator $e^x$ tends to $1$; hence the limit is $0$.<|endoftext|> TITLE: Is the set of all topological spaces bigger than the set of all metric spaces? QUESTION [23 upvotes]: I was wondering: since the notion of a topology is much more general than that of a metric, and "neighborhoodness", if you will, and the concept of continuity are generalized by the notion of a topology, is the set of all topological spaces actually bigger than that of metric spaces? in other words if $$\mathscr{T}=\{x|x \text{ is a topological space}\}$$ and if $$\mathscr{M}=\{x|x\text{ is a metric space}\}$$ then is $$\text{card}\mathscr{T}>\text{card}\mathscr{M}$$ or are they equal? In both cases, how can we prove that? Or perhaps the sets $\mathscr{T}$ and $\mathscr{M}$ don't even exist at all, similar to how the set of all sets doesn't exist?<|endoftext|> TITLE: Proving that the Rubik’s Cube group is 2-generated QUESTION [14 upvotes]: Singmaster (1981) writes, on page 32 of his Notes on Rubik’s Magic Cube: Frank Barnes observes that the group of the cube is generated by two moves: \begin{align*} \alpha &= L^2 B R D^{-1} L^{-1} &=(RF,RU,RB,UB,LD,LB,LU,BD,DF,FL,RD)& \\ &&\cdot (FUR,UBR,LDB,LBU,DLF,BDR,DFR)\\ \beta &= UFRUR^{-1}U^{-1}F^{-1} &=(UF,UL)_+(UR)_+(UBR,UFL)_-(URF)_+ \end{align*} Observe that $\alpha^7$ is an $11$-cycle of edges and $\alpha^{11}$ is a $7$-cycle of corners, that $\beta$ affects the edge and corner left fixed by $\alpha$, and that $$\beta^2 = (UF)_+(UL)_+(UBR)_-(UFL)_-(UFR)_-$$ [...] The remaining details are left as an exercise. I hadn't seen this notation before, so I'll explain it here. Notation like $(LU, BD, DF)$ means an edge cycle, in which: The $L$-$U$ edge moves to the $B$-$D$ edge's place, with the $L$ half ending up on the $B$ face, and the $U$ half ending up on the $D$ face. Similarly $BD \to DF$ and $DF \to LU$. The notation for corners is similar. Notation like $(UF, UL)_+$ is a twisted cycle: again, $UF \to UL$, but now $UL \to FU$; the final edge gets flipped when cycling back to the first edge. For corners, the notation is similar, but corners rotate, they don't flip. A subscript $+$ means clockwise rotation, a subscript $-$ means counterclockwise rotation. $(UR)_+$ means a single edge is flipped.
$(UBR)_-$ means a single corner is rotated counterclockwise. I would like to show that this is indeed true, by writing each element in $\{F,B,L,R,U,D\}$ as a product of elements in $\{\alpha, \beta, \alpha^{-1}, \beta^{-1}\}$ – preferably, having those products be as short as possible. How would I go about finding them? (I'm okay with using software like GAP – if it is at all computationally possible.) REPLY [13 votes]: To start, we will use the example from "Analyzing Rubik's Cube with GAP". It creates the group generated by the six generators, corresponding to the six faces of the cube:

               +--------------+
               |              |
               |  1    2    3 |
               |              |
               |  4   up    5 |
               |              |
               |  6    7    8 |
               |              |
+--------------+--------------+--------------+--------------+
|              |              |              |              |
|  9   10   11 | 17   18   19 | 25   26   27 | 33   34   35 |
|              |              |              |              |
| 12  left  13 | 20 front  21 | 28 right  29 | 36  back  37 |
|              |              |              |              |
| 14   15   16 | 22   23   24 | 30   31   32 | 38   39   40 |
|              |              |              |              |
+--------------+--------------+--------------+--------------+
               |              |
               | 41   42   43 |
               |              |
               | 44  down  45 |
               |              |
               | 46   47   48 |
               |              |
               +--------------+

It is easy to identify which of the six permutations corresponds to the rotation of the upper (U), left (L), front (F), right (R), back (B) and down (D) sides, using the same letters as in the question above. We will now create them in GAP:

gap> U:=( 1, 3, 8, 6)( 2, 5, 7, 4)( 9,33,25,17)(10,34,26,18)(11,35,27,19);;
gap> L:=( 9,11,16,14)(10,13,15,12)( 1,17,41,40)( 4,20,44,37)( 6,22,46,35);;
gap> F:=(17,19,24,22)(18,21,23,20)( 6,25,43,16)( 7,28,42,13)( 8,30,41,11);;
gap> R:=(25,27,32,30)(26,29,31,28)( 3,38,43,19)( 5,36,45,21)( 8,33,48,24);;
gap> B:=(33,35,40,38)(34,37,39,36)( 3, 9,46,32)( 2,12,47,29)( 1,14,48,27);;
gap> D:= (41,43,48,46)(42,45,47,44)(14,22,30,38)(15,23,31,39)(16,24,32,40);;

Next, we will construct a group generated by these permutations:

gap> G:=Group(F,B,L,R,U,D);
gap> Size(G);
43252003274489856000

We may now try to use SmallGeneratingSet to find a generating set that has fewer elements. SmallGeneratingSet does not guarantee to return a non-redundant list of minimal possible length, but this time we're lucky:

gap> sgs:=SmallGeneratingSet(G);
[ (1,32,41,3,6,19,35,48,22,27,11,25,9,38,16,33,17,8)(2,15,37,7,5,28,45,23,10,
    20)(4,13,34,44,12,18,26,21,31,42)(14,46,40)(29,36)(39,47),
  (1,43,27,41,11,48)(2,29,23)(3,16,6,32,9,24)(4,20,5,18,21,37,45,15,39)(7,28,
    12,31,44,47,10,13,26)(8,40)(14,25)(17,38,35,30,33,22)(19,46)(34,36,42) ]
gap> Length(sgs);
2

Thus, indeed the group is 2-generated. Now let's create permutations a and b corresponding to $\alpha$ and $\beta$ from the question:

gap> a:=L^2*B*R*D^-1*L^-1;
(1,22,32,30,25,27,40)(2,15,12,10,39,42,20,31,28,26,29)(3,14,9,41,38,43,19)(4,
 47,23,13,45,21,5,36,34,44,37)(8,33,46,35,16,48,24)
gap> b:=U*F*R*U*R^-1*U^-1*F^-1;
(3,6,27,11,33,17)(4,18,10,7)(5,26)(8,25,19)

It is easy to check that they indeed generate the same group:

gap> H:=Group(a,b);
gap> Size(G)=Size(H);
true
gap> G=H;
true

It remains to show how to factorise, for example, $U$ in terms of $\alpha$ and $\beta$ and their inverses. We will use the same approach that is used here to solve the puzzle.
gap> K:=FreeGroup("a","b");
gap> hom := GroupHomomorphismByImages( K, H, GeneratorsOfGroup(K), GeneratorsOfGroup(H) );
[ a, b ] -> [ (1,22,32,30,25,27,40)(2,15,12,10,39,42,20,31,28,26,29)(3,14,9,
41,38,43,19)(4,47,23,13,45,21,5,36,34,44,37)(8,33,46,35,16,48,24),
(3,6,27,11,33,17)(4,18,10,7)(5,26)(8,25,19) ]
gap> w:=PreImagesRepresentative( hom, U );
b*a^2*b*a^-5*b*a^-1*b^-1*(a*b*a)^2*b*a^-2*b*a^-1*b^-1*a*b^-1*a^-1*b^-1*a^3*b^-\
1*a^-2*(b^-1*(a*b)^2*a^4*b*a^-6*(b^-1*a^-1*b^-1*a)^2)^2*b^-1*a^3*b*(b*a)^4*a^2\
*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a*(a*b)^\
4*a^3*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*(a*\
(a*b)^4*a^3*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)\
^2*a^2*b^-1*a^-2*b)^2*(a*b)^2*(b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a\
^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a^2*b^-1*a^-2*(b*a)^2*a^2*b^3*a^2*b*a^-1*b^-1\
*a^22*b*a^-1*b^-1*a^-6*b^-2*a^2*b^-1*(b^-1*a^-1)^2*b^-6*a^4*b*(b*a^4*b*a^-7*b^\
-1*a^3)^2*b*((b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b\
*a^-1*b^-1*a)^2*a^2)^2*b^-1*a^-2*b*a^2*b^-1*a^-2*(b*a)^2*b*((b*a)^4*a^2*b^2*a^\
-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a^2)^3*b^-1*a^-\
2*((b*a)^2*b)^2*a*b*a^3*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*\
a^-1*b^-1*a)^2*a^2*b^-5*a^-1*b^2*a^-1*b*(a*b*a^2)^2*a*b*a^-1*b*a^13*b*a^-1*b^-\
1*a^-11*b*a*b^-1*a^3*(b*a^-1)^2*b^-1*a*b^-1*a^-1*b^-1*a^3*b^-1*a^-2*b^-1*a*b^2\
*a*b*a^-1*((b*a)^2*a^3*b*a^-6*(b^-1*a^-1*b^-1*a)^2*b^-1*a)^2*(b*a)^2*b^2*a^4*b\
*a^-7*b^-1*(a*b)^3*a^4*b*a^-7*b^-1*a^3*b*(b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2\
*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*(a*(a*b)^4*a^3*b^2*a^-3*b*a^-1*b^-\
1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a^2*b^-1*a^-2*b)^2*(a*b)^2*(\
b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^\
2*a^2*b^-1*a^-2*b*a^4*b^3*a^2*b*a^-1*b^-1*a^22*b*a^-1*b^-1*a^-6*b^-2*a^2*b^-1*\
(b^-1*a^-1)^2*b^-6*a^4*b*(b*a^4*b*a^-7*b^-1*a^3)^2*b*((b*a)^4*a^2*b^2*a^-3*b*a\
^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a^2)^2*b^-1*a^-2*b*a^\
2*b^-1*a^-2*(b*a)^2*b*((b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-\
1*a*b*(a*b*a^-1*b^-1*a)^2*a^2)^3*b^-1*a^-2\
*((b*a)^2*b)^2*a*b*a^3*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a\
^-1*b^-1*a)^2*a^2*b^-1*a^-2*(b*a)^2*a^3*b*a^-6*(b^-1*a^-1*b^-1*a)^2*b^-1*a*b^2\
*a*b*a^-1*(b*a)^2*a^3*b*a^-6*(b^-1*a^-1*b^-1*a)^2*b^-1*(a*b)^2*a^4*b*a^-6*(b^-\
1*a^-1*b^-1*a)^2*b^-1*a^-1*b^-6*a*b^6*a^2*b^5*a^-1*b*a*b^-5*a*(b*a^2*b)^2*a^-3*b^4*a^-5*b^-2*a^2*b^-1*(b^-1*a^-1)^\
2*b^-6*a^4*b*(b*a^4*b*a^-7*b^-1*a^3)^2*b^2*a^-1*b*a*b^-1*a^-1*b^-1*a*b^3*a^-1*\
b*(b*a^3)^2*b*a^-1*b*a^13*b*a^-1*b^-1*a^-11*b*a*b^-1*a^3*(b*a^-1)^2*b^-1*a*b^-\
1*a^-1*b^-1*a^3*b^-1*a^-2*b^-1*a*b^2*a*b*a^-1*((b*a)^2*a^3*b*a^-6*(b^-1*a^-1*b\
^-1*a)^2*b^-1*a)^2*(b*a)^2*b^2*a^4*b*a^-7*b^-1*(a*b)^3*a^4*b*a^-7*b^-1*a^3*b*(\
b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^\
2*(a*(a*b)^4*a^3*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^\
-1*a)^2*a^2*b^-1*a^-2*b)^2*(a*b)^2*(b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-\
5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a^2*b^-1*a^-2*b*a^4*b^3*a^2*b*a^-1*b^-1*\
a^22*b*a^-1*b^-1*a^-6*b^-2*a^2*b^-1*(b^-1*a^-1)^2*b^-6*a^4*b*(b*a^4*b*a^-7*b^-\
1*a^3)^2*b*((b*a)^4*a^2*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*\
a^-1*b^-1*a)^2*a^2)^2*b^-1*a^-2*b*a^2*b^-1*a^-2*(b*a)^2*b*((b*a)^4*a^2*b^2*a^-\
3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a^-1*b^-1*a)^2*a^2)^3*b^-1*a^-2\
*((b*a)^2*b)^2*a*b*a^3*b^2*a^-3*b*a^-1*b^-1*a^-2*b*a^-5*b*a^-1*b^-1*a*b*(a*b*a\
^-1*b^-1*a)^2*a^2*b^-1*a^-2*(b*a)^2*a^3*b*a^-6*(b^-1*a^-1*b^-1*a)^2*b^-1*a*b^2\
*a*b*a^-1*(b*a)^2*a^3*b*a^-6*(b^-1*a^-1*b^-1*a)^2*b^-1*(a*b)^2*a^4*b*a^-6*(b^-\
1*a^-1*b^-1*a)^2*b^-1*a^-1*b^-6*a*b^6*a^2*b^5*a^-1*b*a*b^-5*a^-1*b^-1*(a^-1*b^\
-1*a*b^2)^2*b^4

It's not guaranteed that this factorisation is the shortest - finding that would be a much more difficult computational task. Now one could similarly calculate factorisations for the other five permutations. They will be computed faster than the first call to PreImagesRepresentative, since some data needed for the algorithm are already computed and stored in the group.<|endoftext|> TITLE: When is the metric completion of a Riemannian manifold a manifold with boundary? QUESTION [9 upvotes]: Let $(M,g)$ be a connected smooth Riemannian manifold and denote by $(M,d)$ the induced metric space, obtained by taking the topological metric to be the infimum over lengths of curves in the standard way. Suppose that $(M,d)$ is not complete and let $(\hat{M},d)$ denote the metric completion. What can be said about $(\hat{M},d)$ being a smooth manifold with smooth boundary? Edit: Take e.g. any open rectangle in Euclidean space. Then the completion will have a boundary that is not smooth (at the corners). REPLY [2 votes]: Here is an example answering a slightly stronger question, from Jason DeVito's comment: Construct a connected Riemannian manifold whose metric completion is not homeomorphic to a topological manifold (with or without boundary). The simplest example, I think, is: Take a 2-torus $T^2$ smoothly embedded in the unit sphere $S^3$. Now, consider the Euclidean cone $C$ over $T^2$ in $\mathbb R^4$. Then let $M= C- \{0\}$, equipped with the Riemannian metric $g$ induced from $\mathbb R^4$. I claim that the metric completion of $(M,g)$ is homeomorphic to $C$. Indeed, if $(x_i)$ is a Cauchy sequence in $(M, d)$ (where $d$ is the distance function associated with $(M,g)$), then, since $|x-y|\le d(x,y)$, it follows that $(x_i)$ is also a Cauchy sequence in $\mathbb R^4$. Hence, its (Euclidean) limit is in the closure of $M$ in $\mathbb R^4$. Thus, we obtain a map $f: \overline{(M,d)}\to C$ sending the equivalence class of $(x_i)$ to its Euclidean limit. It is easy to see that $f$ is continuous. We need to check that $f$ is bijective. Surjectivity of $f$ is clear by considering a sequence $x_i= \frac{1}{i} x$, where $x\in M$ is arbitrary. Let us check injectivity. Consider two Cauchy sequences $(x_i), (y_i)$ such that $\lim x_i= \lim y_i=0\in \mathbb R^4$. We need to show that $$ \lim_{i\to\infty} d(x_i, y_i)=0. $$ Let $s_i= |x_i|$, $t_i=|y_i|$. Define the point $z_i= \frac{t_i}{s_i} x_i$, hence, $|z_i|=|y_i|=t_i$. It is easy to check that $d(x_i, z_i)\to 0$. Therefore, $(z_i)$ is again a Cauchy sequence equivalent to $(x_i)$. Moreover, if $D$ is the diameter of $T^2$ with respect to its Riemannian metric, then $d(z_i, y_i)\le t_i D \to 0$. Hence, $(z_i)$ is also equivalent to $(y_i)$. Thus, $(x_i), (y_i)$ represent the same point of the metric completion of $(M,d)$. Injectivity of $f$ follows. A similar argument shows continuity of $f^{-1}$. Therefore, $f$ is a homeomorphism. Lastly, being a cone over a torus, $C$ is not a topological manifold (since removing the origin changes its fundamental group, which cannot happen for a topological manifold). There are other examples one can construct, for instance, removing from the Euclidean plane a suitable collection of disjoint closed round disks converging to a point.
But proving that the metric completion is homeomorphic to the closure in the Euclidean plane (which is also not a manifold, since it is not locally simply connected) is a bit more painful.<|endoftext|> TITLE: Concavity of the $n$th root of the volume of $r$-neighborhoods of a set QUESTION [6 upvotes]: Let $A$ be a closed subset of $\mathbb{R}^n$. For $r>0$, let $A_r$ be the $r$-neighborhood of $A$, namely the set $\{x:\operatorname{dist}(x,A)\le r\}$. Is the function $f(r) = \mu(A_r)^{1/n}$ concave? ($\mu$ is the Lebesgue measure.) Context This came up in the discussion of George Lowther's answer, where I asserted that the concavity of $f$ follows from the Brunn-Minkowski inequality. As George Lowther pointed out, this approach seems to require $A$ to be convex. Here's a proof for convex $A$. Since $A$ is convex, we have $A=\frac12(A+A)$, using Minkowski/vector addition. Let $B$ be the closed unit ball (also convex). For $r,s>0$ we have $$ A+\frac{r+s}{2}B = \frac12 (A+A)+\frac r2 B + \frac s2 B = \frac12(A+rB)+\frac12(A+sB) $$ Taking the $n$th root of volume on both sides and using the Brunn-Minkowski inequality, we obtain $$ f((r+s)/2) = \mu\left(\frac12(A+rB)+\frac12(A+sB)\right)^{1/n} \ge \frac12 \mu(A+rB)^{1/n}+\frac12 \mu(A+sB)^{1/n} $$ proving that $f$ is midpoint-concave. Since it's also monotone, it is concave. The proof falls apart when $A$ is not convex, since the inclusion $A\subset \frac12(A+A)$ goes the wrong way. But I don't see a counterexample. REPLY [2 votes]: No, $f$ does not have to be concave. For a counterexample in dimension $n=2$, let $A$ be the union of the closed unit disc centred at the origin and a single point $P$ with $\lVert P\rVert > 1$. We can compute the area of $A_r$ easily for small $r$ (specifically, $2r\le\lVert P\rVert-1$), $$ \mu(A_r)=\pi(1+r)^2+\pi r^2=\pi(1+2r+2r^2). $$ Differentiating $f(r)=\sqrt{\pi(1+2r+2r^2)}$ twice, $$ f^{\prime\prime}(r)=\frac{\sqrt\pi}{(1+2r+2r^2)^{3/2}} > 0. $$ So, $f$ is strictly convex for $r\le(\lVert P\rVert-1)/2$.<|endoftext|> TITLE: Uniqueness of Smith normal form in Z (ring of integers) QUESTION [6 upvotes]: It is a very well known fact that Smith Normal Form has proven useful when dealing with the development of the structure theorem of finitely generated abelian groups. In this context, there is an approach that takes advantage of the next result, which indeed is a very particular case of a much more general theorem related to a special kind of rings. If $A$ is a $m\times n$ matrix with integer coefficients, then there exist two matrices $P$ of size $m\times m$ and $Q$ of size $n\times n$, both having integer entries and $\det =\pm 1$, such that $PAQ$ is a diagonal matrix with diagonal entries $d_1,d_2,\ldots,d_k$ ($k\le\min(m,n)$) such that $d_1\mid d_2\mid \ldots\mid d_k$, and each $d_i$ is a positive integer. Furthermore, $d_1\mid \ldots\mid d_k$ are unique. I have no problem with the proof of the "existence" part of the last theorem. However, I can't manage to give a proof of the "uniqueness" part; at most, I can only show that if $d'_1\mid\ldots\mid d'_{\ell}$ have the same property, then $d'_1 = d_1$ (they are both the gcd of the entries of $A$). The idea is not to give a proof using structure theorems, but only some kind of "very elementary" proof (dealing, if possible, just with $\mathbb{Z}$ properties, and not talking about general rings/modules). REPLY [2 votes]: See here (page 76).
The main idea is this: For each $k$, the GCD of all the determinants of $k \times k$ submatrices of $A$ (with possibly different sets of indices for columns and rows) is preserved under multiplication by integer matrices with $\det =\pm 1$. These invariants equal $d_1 \cdot \ldots \cdot d_k$ in the Smith form for each $k$, determining it uniquely.<|endoftext|> TITLE: Is there a polynomial such that $F(p)$ is always divisible by a prime greater than $p$? QUESTION [24 upvotes]: Is there an integer-valued polynomial $F$ such that for all prime $p$, $F(p)$ is divisible by a prime greater than $p$? For example, $n^2+1$ doesn't work, since $7^2+1 = 2 \cdot 5^2$. I can see that without loss of generality it can be assumed that $F(0) \ne 0$. Also, it is enough to find a polynomial where the property is true for sufficiently large primes $p$, since we could multiply that polynomial by some prime in the sufficiently large range and fix all the smaller cases. I think it is possible that there are no such polynomials; is there any good hint for proving this? I can't find any solutions to $\text{gpf}(p^4+1) \le p$ for prime $p \le 10000$, where $\text{gpf}$ is the greatest prime factor, but there are plenty for $\text{gpf}(p^3+1) \le p$, for example $\text{gpf}(2971^3+1) = 743 \lt 2971$. So I guess $F(p) = p^4+1$ might be an example. I also checked higher powers for small $p$ and couldn't find solutions there either, so $k \ge 4 \rightarrow \text{gpf}(p^k+1) \gt p$ is plausible. REPLY [4 votes]: Probably not, but this might be very hard to prove. For each $\epsilon > 0$ the asymptotic fraction of integers $x$<|endoftext|> TITLE: Proving that the Calkin-Wilf tree enumerates the rationals. QUESTION [8 upvotes]: The Calkin-Wilf tree is an infinite undirected graph (tree) which is constructed as follows: starting from the root at $\frac{1}{1}$, each node $\frac{a}{b}$ has two children: a left child $\frac{a}{a+b}$ a right child $\frac{a+b}{b}$ This tree has the property that every rational appears in it exactly once, in lowest terms. I'm interested in ways to intuitively understand this fact. Most of what I know on this topic comes from this wonderful blog, which gives a proof [*] of the above at the link. He points out that every child uniquely defines a parent, and that every parent has either a smaller numerator or a smaller denominator than its child. Therefore, if you start from any fraction $\frac{p}{q}$ in lowest terms, you can always trace a path back to $\frac{1}{1}$, the root. This is a really nice proof, but it feels a bit "backwards" to me, in that we visualize walking the tree from the bottom up. Does anyone know of alternate proofs of this fact? I don't need rigor, just intuition. Thoughts: Clearly, all children must have either a greater numerator or a greater denominator than any of their ancestors, so they can't be repeats of an ancestor. (We also need "lowest terms" for this, but that follows by a separate argument -- see footnote). So I'm only worried about "cousins". Perhaps there is some property that all the left children of a node share, which the right children do not? That would solve the problem, I believe. *My summary only covers the part of his argument that proves "every rational appears in it exactly once." The "in lowest terms" part involves Euclid's Algorithm, and is covered in the next post. REPLY [2 votes]: I know this is an old thread, but I can't help mentioning that it may not need to be this complicated.
For any positive rational number $\frac{m}{n}$ in its simplest form, if $m>n$ we know it is a right-side child node and the parent should be $\frac{m-n}{n}$; otherwise it is a left-side child node and the parent is $\frac{m}{n-m}$. The path from $\frac{m}{n}$ to the root 1 is thus unique and can be found easily by repeatedly applying this strategy. This is exactly the algorithm to find $\gcd(m,n)$ by Euclidean division, and since $m$ and $n$ are coprime, it is guaranteed to end up with the root (i.e., $m=n=1$).<|endoftext|> TITLE: Number of ways to partition $40$ balls with $4$ colors into $4$ baskets QUESTION [11 upvotes]: Suppose there are $40$ balls with $10$ red, $10$ blue, $10$ green, and $10$ yellow. All balls with the same color are deemed identical. Now all balls are supposed to be put into $4$ identical baskets, such that each basket has $10$ balls. What is the number of ways to partition these balls? I tried this problem, but it seems very complicated to formulate correctly, because the number of balls of a particular color in a basket determines the partition of the other baskets. I wonder if someone can help figure out a quick and clean way to solve this problem? REPLY [4 votes]: Consider the problem of $4n$ balls with $n$ balls of each of the four colors being distributed into four indistinguishable baskets where each basket holds exactly $n$ balls. The naive approach here would be to use the Polya Enumeration Theorem (twice). Surprisingly enough this is sufficient to compute the initial segment of the sequence using the recurrence by Lovasz for the cycle index $Z(S_n)$ of the multiset operator $\def\textsc#1{\dosc#1\csod} \def\dosc#1#2\csod{{\rm #1{\small #2}}} \textsc{MSET}_{=n}$ on $n$ slots, which is $$Z(S_n) = \frac{1}{n} \sum_{l=1}^n a_l Z(S_{n-l}) \quad\text{where}\quad Z(S_0) = 1.$$ This recurrence lets us calculate the cycle index $Z(S_n)$ very easily. The answer is then given by $$[A^n B^n C^n D^n] Z(S_4)(Z(S_n)(A+B+C+D)).$$ Using Maple and a reasonable amount of computational resources this yields the sequence $$1, 17, 93, 465, 1746, 5741, 16238, 41650, 97407, 212412, 434767, \\ 845366, 1569344, 2801696, 4828140, 8069053, \\ 13114785, 20796651, 32242621, 48986553, 73052382, 107114645, \\ 154621230, 220021932, 308940815,\ldots$$ In particular the value for $n=10$ is given by $$212412.$$ This is OEIS A253259 where we discover a variation of the problem definition that confirms the validity of these results. Oddly enough no recurrence relation or other indication of how these numbers were computed is given in the OEIS entry. Perhaps we will see a recurrence now that there are enough test data to verify its correctness, if indeed it exists. The Maple code for the above is quite straightforward.

with(combinat);

pet_varinto_cind :=
proc(poly, ind)
local subs1, subs2, polyvars, indvars, v, pot, res;
    res := ind;
    polyvars := indets(poly);
    indvars := indets(ind);
    for v in indvars do
        pot := op(1, v);
        subs1 := [seq(polyvars[k]=polyvars[k]^pot,
                      k=1..nops(polyvars))];
        subs2 := [v=subs(subs1, poly)];
        res := subs(subs2, res);
    od;
    res;
end;

pet_cycleind_symm :=
proc(n)
local l;
option remember;
    if n=0 then return 1; fi;
    expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
end;

V :=
proc(n)
option remember;
local comb, gf, var;
    comb := pet_varinto_cind(A+B+C+D, pet_cycleind_symm(n));
    gf := expand(pet_varinto_cind(comb, pet_cycleind_symm(4)));
    for var in [A,B,C,D] do
        gf := coeff(gf, var, n);
    od;
    gf;
end;

Addendum.
I realized I had overlooked an additional OEIS link to A257463 when I wrote the above several hours ago. It proposes a simple algorithm which uses the isomorphism between the problem and factorizations of $p_1^n p_2^n p_3^n p_4^n$ into four factors $q$ all of which have $\Omega(q) = n.$ The algorithm generates all of these using the observation that uniqueness of factorizations can be guaranteed by generating the factors in non-increasing order. It uses memoization to speed this up. Nonetheless when I pasted it verbatim into Maple and tried it on several test cases it performed very poorly compared to what I have above. I will therefore keep the post for now. The quest continues. Addendum II. As per request by @WillOrrick I am posting the code for the general problem of $k$ colors.

V :=
proc(n, k)
option remember;
local base, comb, gf, var;
    base := add(Q[p], p=1..k);
    comb := pet_varinto_cind(base, pet_cycleind_symm(n));
    gf := expand(pet_varinto_cind(comb, pet_cycleind_symm(k)));
    for var in [seq(Q[p], p=1..k)] do
        gf := coeff(gf, var, n);
    od;
    gf;
end;

We thus obtain for five colors the sequence $$1, 73, 1417, 19834, 190131, 1398547, 8246011, 40837569, 174901563, \\ 664006236, 2274999093, 7139338769, 20758868781, 56466073587, \\ 144806582536, 352420554194, 818441723112, 1822255658908,\ldots$$ Addendum III. Granting Maple several hours of computation time and 5GB of memory we get for four colors: $$1, 17, 93, 465, 1746, 5741, 16238, 41650, 97407, 212412, 434767, \\ 845366, 1569344, 2801696, 4828140, 8069053, 13114785, 20796651, \\ 32242621, 48986553, 73052382, 107114645, 154621230, 220021932, \\ 308940815, 428492880, 587520315, 797019526, 1070458096, 1424339518, \\ 1878618620, 2457435561, 3189651885, 4109787687, 5258703597,\ldots$$ This confirms the generating function by @WillOrrick.<|endoftext|> TITLE: Is every diffeomorphism an element of a one parameter group of diffeomorphisms? QUESTION [6 upvotes]: I understand that a smooth vector field on a manifold $M$ generates a "flow"/one parameter group action, let's say $\sigma: \mathbb{R} \times M \rightarrow M$, and $\sigma_t: M \rightarrow M$ gives a one parameter group of diffeomorphisms. My question is: does every diffeomorphism have to be an element of such a group? My naive guess is no, but I am confused because I think the set of diffeomorphisms also forms "a" group. I would appreciate it if you could give an example of such a diffeomorphism. REPLY [5 votes]: Complementing the answer of @QiaochuYuan, there exist diffeomorphisms $f : D^2 \to D^2$ which are in the identity component of $\operatorname{Diff}(D^2)$ (i.e., represent the trivial class in $\pi_0(\operatorname{Diff}(D^2))$) but which are not contained in any 1-parameter group of diffeomorphisms. For an example of such an $f$, take three points $p,q,r \in \text{int}(D^2)$, and take any diffeomorphism $f$ preserving $\{p,q,r\}$ so that the restriction to $D^2 - (\{p,q,r\} \cup \partial D^2)$ has pseudo-Anosov isotopy class, preserving a geodesic lamination $\Lambda \subset D^2 - (\{p,q,r\} \cup \partial D^2)$ with respect to a complete finite area hyperbolic structure on $D^2 - (\{p,q,r\} \cup \partial D^2)$. One may construct $f$ so that $\Lambda$ is a minimal set on which $f$ acts with hyperbolic dynamics, including dense orbits, and so that the restriction of $f$ to $D^2 - \Lambda$ has wandering dynamics. It follows that anything which commutes with $f$ preserves $\Lambda$ as a set, and preserves the decomposition of $\Lambda$ into its 1-dimensional leaves. From this, it follows that $f$ is not part of a 1-parameter subgroup.
For a more conceptual understanding of this example, every surface diffeomorphism that is contained in a 1-parameter subgroup has zero topological entropy. And, every diffeomorphism which, on the complement of some finite set, is isotopic to a pseudo-Anosov diffeomorphism, has positive topological entropy.<|endoftext|> TITLE: Find all $f:\mathbb {R} \rightarrow \mathbb {R}$ where $f(f(x))=f'(x)f(x)+c$ QUESTION [9 upvotes]: Recently, while studying calculus, I have come across multiple problems which asked the following: If $f(x)$ is a polynomial, find all $f(x)$ such that $f(f(x))=f'(x)f(x)+c$, where $c$ is a constant. This problem can be solved as follows: If $\deg(f(x))=n$, the above equation implies that $n^2=2n-1$, or that $n=1$. This implies that $f(x)=ax+b$, and it is just a matter of calculation from here. But how does one find all $f:\mathbb {R} \rightarrow \mathbb {R}$ where $f(f(x))=f'(x)f(x)+c$? I am asking for solutions when $f(x)$ is not necessarily a polynomial. Because of my above method, I thought that $f'(x)$ would be a constant. However, I was not able to prove this. Differentiating both sides gave me that $(f'(f(x))-f'(x))(f'(x))=f''(x)f(x)$. This proved no help at all. Since I am young, please use methods that are comprehensible to a high-school student. Any help would be appreciated. REPLY [4 votes]: There is a solution with $f(x)=ax+bx^2+cx^3+dx^4+...$. Here, I am taking your $c=0$, but introducing my $c$ as one of the coefficients. $$f(f(x))=af(x)+bf(x)^2+cf(x)^3+...\\ =a^2x+abx^2+acx^3+...+ba^2x^2+2ab^2x^3+...+ca^3x^3+...\\ f'(x)f(x)=(a+2bx+3cx^2+...)(ax+bx^2+cx^3+...)\\ =a^2x+3abx^2+(4ac+2b^2)x^3+... $$ Equating coefficients: $$ a^2=a^2\\ab+ba^2=3ab\to a=2\\ac+2ab^2+ca^3=4ac+2b^2\to c=-b^2$$ so it seems the first few terms must be $f(x)=2x+bx^2-b^2x^3+...$ EDIT: I made Maple find this solution, when your $c=0$: $$\color{red}{f(x)=\frac2b(1+2bx-\sqrt{1+2bx})}$$ When $x=2y$, the coefficients of the series above were $1,1,-2,5,-14,42$. This sequence is fairly famous, or I could have looked it up in the OEIS (Online Encyclopedia of Integer Sequences). I then used the formula, which involves $\frac1{n+1}{2n\choose n}$, in the sequence, and Maple worked out the infinite sum. To check that, let $y=\sqrt{1+2bx}$, and suppose $y>1/2$. Then $$f(x)=\frac2b(y^2-y)\\ 1+2bf(x)=4y^2-4y+1=(2y-1)^2\\f(f(x))=\frac2b(4y^2-4y+1-(2y-1))$$ On the other hand, $f'(x)=4-\frac{2}{\sqrt{1+2bx}}=4-\frac2y$, so $$f(x)f'(x)=\frac2b(y^2-y)(4-\frac2y)=\frac2b(y-1)(4y-2)=\frac2b(4y^2-6y+2)=f(f(x))$$<|endoftext|> TITLE: How did the rule of addition come to be and why does it give the correct answer when compared empirically? QUESTION [30 upvotes]: I'm still a high school student and very interested in maths, but none of my school books describe these kinds of things; they only say how to do it, not the whys. My question is very simple. For example: 19 + 25 = 44, because the one from adding 9 and 5 goes on to add with 1 and 2. How did this rule of addition come to be? Here's a bit of explanation that can be useful (sorry if it is frustrating): Suppose we are a 3 year old child and no one teaches us how to add, and we recognize 1 as our index finger held up and 5 as all the fingers of a palm held up. Someone gives us the problem: add 1+5, so we hold 'em up, right? And again someone asks us to add 8564+2345, so we can't lift 'em up. So we try to devise a rule, but we don't recognize 6+4 = 10, in which the 0 stays and the one jumps, nor can we say that only the digits from the rightmost positions are to be added. This is what I meant.
REPLY [2 votes]: Other people have done all the math-y specifications, and have done so spectacularly. From reading the comments, it seems as if the "why we have to carry the excess over" hasn't gotten through to everyone yet. Let me try to explain that with an "analogy" (it's not really an analogy, but... let's just go with it): I assume you are familiar with the binary representation (or base-2). In all number systems, there is a highest number any single-digit representation can contain. In base-2, that number is 1. In base-10, that number is 9. This means that in base-2, it's not possible to represent the number 2 like 2 or 02 -- it has to be 10, because the value 2 is too high to be represented in the first single-digit position. Ergo, you must carry the excess over to the next position. Another way of looking at this is, again using base-2, that each single-digit representation -- each position -- is a count of how many of its corresponding value the number contains, and each position can only correspond to one value. Let's take the base-2 number 10 and start reading from the right:

1st position: 0 -- how many ONEs do we have
2nd position: 1 -- how many TWOs do we have

Which of course corresponds to the number 2 in base-10. Another number, 1011101, and its corresponding base-10 values:

1st position: 1 -- number of ONEs
2nd position: 0 -- number of TWOs
3rd position: 1 -- number of FOURs
4th position: 1 -- number of EIGHTs
5th position: 1 -- number of SIXTEENs
6th position: 0 -- number of THIRTYTWOs
7th position: 1 -- number of SIXTYFOURs

Counting it all up -- I don't speak latex, so I'm sorry for the formatting -- it becomes 64*1 + 32*0 + 16*1 + 8*1 + 4*1 + 2*0 + 1*1 = 93. Carrying this logic over to base-10, where each position can hold {0, ..., 9} and naturally cannot be double-digit:

1st position: number of ONEs
2nd position: number of TENs
3rd position: number of HUNDREDs
...
nth position: number of 10^(n-1)s

If we have the addition 8 + 4 = 12, and we weren't able to carry the excess, the first position on the right-hand side would contain the double-digit number 12 (and the second position would contain 0), which is illegal. The first position can only contain the number of ones, and it is impossible in this representation to have more than nine ones. The moment you get 10 ONEs, you don't actually have 10 ONEs -- you have one TEN. Thus, we either carry the 1 (meaning one TEN in this example) over to the next position where it belongs (with the rest of the TENs), leaving 2 in the position for ONEs, giving us the number 1*10 + 2*1 = 12 -- or the number system breaks down. In the everyday, this representation is still used explicitly to some extent: "One-thousand two-hundred" = 1200:

1st position: 0 -- number of ONEs
2nd position: 0 -- number of TENs
3rd position: 2 -- number of HUNDREDs
4th position: 1 -- number of THOUSANDs

Finally, imagine if you weren't able to carry the excess: you would have literally no way of knowing what any number you're presented with means. The number 1200 could mean "one-thousand two-hundred" or it could mean "one-thousand and twenty", or the number 10200 could mean "one-thousand two-hundred" or "ten-thousand twenty" or "one-thousand twenty", depending on how you align the addition.
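To make the carrying rule concrete, here is a small Python sketch (my own illustration, not part of the original answer) that adds two non-negative integers digit by digit in base 10, exactly like the hand algorithm:

def add_with_carry(a, b):
    # digits stored least-significant first, e.g. 19 -> [9, 1]
    xs = [int(d) for d in str(a)[::-1]]
    ys = [int(d) for d in str(b)[::-1]]
    digits, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        s = (xs[i] if i < len(xs) else 0) + (ys[i] if i < len(ys) else 0) + carry
        digits.append(s % 10)  # the digit that stays in this position
        carry = s // 10        # the excess moves one position to the left
    if carry:
        digits.append(carry)
    return int("".join(str(d) for d in reversed(digits)))

print(add_with_carry(19, 25))  # 44: 9+5 = 14, the 4 stays, the 1 is carried

Deleting the two carry lines would leave 14 sitting in the ones position -- exactly the illegal double-digit position described above.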
Re: myself earlier, the number system is 100% useless if you do not carry.<|endoftext|> TITLE: Finding $b-a$ for positive integer $a$, $b$ satisfying $\sum_{x=1}^\infty\frac{3x^2+12x+16}{(x(x+1)(x+2)(x+3)( x+4))^3}=\frac1{4(a!)^b}$ QUESTION [6 upvotes]: $$ \sum_{x = 1}^{\infty}\frac{3x^{2} + 12x + 16} {\left[\vphantom{A^{A}}x\left(x + 1\right)\left(x + 2\right) \left(x + 3\right)\left(x + 4\right)\right]^{\, 3}} = \frac{1}{4\left(a!\right)^{b}} $$ Compute $b-a$ if $b$ and $a$ are positive integers. I asked my teacher to help me in solving this sum. But, unfortunately, he said he can't; it is a very hard question. So, I hope you can help me in solving this problem. REPLY [5 votes]: The answer is $\color{green}{-1}$. Indeed, the sum converges very quickly (the terms are $\Theta(x^{-13})$) and the inverses of the partial sums are $$55741.9354839\\ 55312.3888560\\ 55297.2897327\\ 55296.1612133\\ 55296.0276428\\ 55296.0059708\\ 55296.0015387\\ 55296.0004559\\ 55296.0001513\\ 55296.0000551\\ \cdots$$ Obviously, this tends to $55296 = 4\cdot(4!)^3$, so that $a=4$, $b=3$, and $b-a=-1$.<|endoftext|> TITLE: Difficult Integral $\int_0^{1/\sqrt{2}}\frac{\arcsin({x^2})}{\sqrt{1+x^2}(1+2x^2)}dx=$ QUESTION [7 upvotes]: I have a difficult integral to compute. I know the result, but need to know the method of calculation. How does one prove this result? $$\int_0^{1/\sqrt{2}}\frac{\arcsin({x^2})}{\sqrt{1+x^2}(1+2x^2)}dx=\frac{\pi^2}{144}$$ REPLY [4 votes]: $\displaystyle J=\int_0^{\tfrac{1}{\sqrt{2}}}\dfrac{\arcsin (x^2)}{\sqrt{1+x^2} (1+2x^2)}dx$ Performing the change of variable $y=x^2$ (and renaming the variable back to $x$), one obtains: $\displaystyle J=\int_0^{\tfrac{1}{2}}\dfrac{\arcsin x}{2\sqrt{x(1+x)}(1+2x)}dx$ $\displaystyle J=\left[\arcsin(x)\arctan\left(\sqrt{\dfrac{x}{1+x}}\right)\right]_0^{\frac{1}{2}}-\int_0^{\tfrac{1}{2}}\dfrac{\arctan\left(\sqrt{\dfrac{x}{1+x}}\right)}{\sqrt{1-x^2}}dx$ $\displaystyle J=\dfrac{\pi^2}{36}-\int_0^{\tfrac{1}{2}}\dfrac{\arctan\left(\sqrt{\dfrac{x}{1+x}}\right)}{\sqrt{1-x^2}}dx$ Define on $[0,1]$, $\displaystyle F(a)=\int_0^{\tfrac{1}{2}}\dfrac{\arctan\left(a\sqrt{\dfrac{x}{1+x}}\right)}{\sqrt{1-x^2}}dx$ For all $a$ in $[0,1]$, $\displaystyle F'(a)=\int_0^{\tfrac{1}{2}}\dfrac{x}{\sqrt{x(1-x)}(a^2x+x+1)}dx$ For all $a$ in $[0,1]$, $F'(a)=2\left[\dfrac{\sqrt{2+a^2}\arctan\left(\dfrac{\sqrt{x}}{\sqrt{1-x}}\right)-\arctan\left(\dfrac{\sqrt{2+a^2}\sqrt{x}}{\sqrt{1-x}}\right)}{(1+a^2)\sqrt{2+a^2}}\right]_0^{\tfrac{1}{2}}$ For all $a$ in $[0,1]$, $F'(a)=\dfrac{\pi}{2}\dfrac{1}{1+a^2}-\dfrac{2\arctan\left(\sqrt{2+a^2}\right)}{(1+a^2)\sqrt{2+a^2}}$ Therefore, $\displaystyle \int_0^{\tfrac{1}{2}}\dfrac{\arctan\left(\sqrt{\dfrac{x}{1+x}}\right)}{\sqrt{1-x^2}}dx=\int_0^1 F'(a)da=\dfrac{\pi}{2}\int_0^1\dfrac{1}{1+a^2}da-2\int_0^1\dfrac{\arctan\left(\sqrt{2+a^2}\right)}{(1+a^2)\sqrt{2+a^2}}da$ Therefore, $\displaystyle J=\dfrac{\pi^2}{36}-\dfrac{\pi^2}{8}+2\int_0^1\dfrac{\arctan\left(\sqrt{2+a^2}\right)}{(1+a^2)\sqrt{2+a^2}}da$ The last integral is Ahmed's integral (see How to evaluate Ahmed's integral? ) Finally, $J=\dfrac{\pi^2}{36}-\dfrac{\pi^2}{8}+2\times \dfrac{5\pi^2}{96}=\dfrac{\pi^2}{144}$<|endoftext|> TITLE: Is I-adic completion a ring epimorphism? QUESTION [8 upvotes]: Let $R$ be a commutative ring and let $I \subset R$ be an ideal. For any $n \ge 1$, the ring homomorphism $R \rightarrow R/I^n$ is surjective, hence an epimorphism in the category of rings. What about the natural map $R \rightarrow \widehat{R}_I:=\lim_n R/I^n$ to the $I$-adic completion of $R$? This map is no longer surjective, but is it nevertheless an epimorphism?
If it is not an epimorphism in general, then I would also be interested in hearing about classes of rings for which it is an epimorphism. For example, does it hold for the p-adic integers? REPLY [6 votes]: (1) If $\mathbb{Z} \to \mathbb{Z}_p$ were an epimorphism, this would imply that $\mathbb{Q} \to \mathbb{Z}_p \otimes_{\mathbb{Z}} \mathbb{Q} = \mathbb{Q}_p$ is an epimorphism. But $\mathbb{Q}$ is a field and $\mathbb{Q} \to \mathbb{Q}_p$ is not surjective, so this is a contradiction. (2) I think that $R \to \widehat{R}_I$ is almost never an epimorphism. (3) See here for a classification of epimorphisms $\mathbb{Z} \to A$. (4) There was a seminar about epimorphisms of commutative rings. For example, Prop. 1.5 in D. Lazard's "Epimorphismes plats" tells us that $f : A \to B$ is an epimorphism of commutative rings if and only if the following conditions are satisfied: (i) $\mathrm{Spec}(B) \to \mathrm{Spec}(A), \mathfrak{p} \mapsto f^{-1}(\mathfrak{p})$ is injective; (ii) for every prime ideal $\mathfrak{p}$ of $B$, the natural map $Q(A/f^{-1}(\mathfrak{p})) \to Q(B/\mathfrak{p})$ is an isomorphism; (iii) the kernel $J = \ker(B \otimes_A B \xrightarrow{*} B)$ is a finitely generated $B \otimes_A B$-module with $J = J^2$. The second condition fails for $\mathbb{Z} \to \mathbb{Z}_p$ if we take $\mathfrak{p}=0$.<|endoftext|> TITLE: How to define "being inside of something" in the context of topology? QUESTION [13 upvotes]: I'm a psychologist and neuroscientist with an interest in math and I just started reading about topology. I have to say it's not easy to grasp the concepts without a practical example, so I'm trying to understand topology in a practical (psychologically applicable) way. I was thinking for example about the concept of something being inside of another thing, like someone being inside a house, tea being inside a cup or a smaller circle lying inside a bigger one, and so forth. Humans can identify those things as being the same (belonging to one equivalence class?), i.e. if I ask someone to identify the object inside the other one, every normally functioning person will be able to identify the object inside, no matter how different the properties (color, size, form, and so forth) of the objects are. So there must be some general properties the brain uses. But how can I define this concept of being inside another thing topologically/mathematically so that it is applicable for a wide range of objects? And what if it gets even more complex? What if a time factor is included, like putting something inside another thing. For example putting a key inside a keyhole, putting a steak in the frying pan, putting food into a shopping bag, and so forth. So here it's about processes over time which should belong to the same equivalence class. How can this be defined? I hope it became clear what I mean and I'm looking for some inspirational thoughts. Also if anyone can recommend literature with emphasis on practical applications, I'd be thankful :). REPLY [3 votes]: As noted in other answers, mathematics uses different notions of ''inside/outside'' or ''interior/exterior''. And probably none of them completely captures the meaning of the usual language. So, instead of starting from mathematical definitions, I try starting from the intuitive meaning of ''inside/outside''. It seems to me that the idea of being inside or outside something requires at least two conditions: 1) that the thing is embedded in some larger ''ambient'' so that there can be an ''outside''.
2) That it has a ''boundary''. I give some examples: A circle (the boundary) in a plane (the ambient) divides the plane into two non-connected components, and we can define the interior as the component that contains the center of the circle and the exterior as the other component. But what about if the ambient is a sphere (as the Earth)? A circle on a sphere can have two ''interiors'' that can be difficult to distinguish: think of the equator as a circle, what is its interior? So it seems that the common intuition of ''interior/exterior'' assumes (unconsciously?) that the ambient is isomorphic to the space $\mathbb{R}^3$. But the example of the cup of tea suggests that this intuitive ambient space is really a physical space that has a privileged up-down direction, so that the tea is in the cup if it is concave up, but it comes out if we reverse the cup. Now, how can we define such intuitions in a mathematical way? I think that we can find the mathematical concepts that work best in the theory of topological manifolds. Here the concepts of connected components, boundary, embedding in a greater space, ... can be well defined (even if not always in a simple way). If we want to describe the motion of something from outside to inside a set delimited by a boundary, we have to use some function of time, so we need some property of continuity and differentiability for such a function and, probably, we have to work in a differentiable manifold, so that we can find whether a line that represents the motion intersects the boundary and in which direction. Finally, I really don't know how to treat the existence of a privileged direction, but someone more expert in topology probably knows how to do it.<|endoftext|> TITLE: Degree of Rational Function QUESTION [7 upvotes]: This might sound like a very trivial question but I found different answers on the web. Assume one has a rational function $$\frac{f(x)}{g(x)} ,$$ where $f(x)$ and $g(x)$ are polynomials. What is the degree of the rational function? Is it the maximum degree of $f$ and $g$? Or is it $\deg(f) - \deg(g)$? Thanks REPLY [10 votes]: The convention that I have seen is that the degree of the rational function $$s(x) := \frac{f(x)}{g(x)},$$ where $f$ and $g$ are polynomials that have no common factors, is $$\deg s := \max\{\deg f, \deg g\} .$$ One motivation for this definition is that, in analogy with the notion of degree of a polynomial, over $\Bbb C$ the equation $$s(x) = w$$ has $\deg s$ solutions (in the Riemann sphere, and counting multiplicity) for generic $w \in \Bbb C$. Indeed, we can rearrange $s(x) = w$ as the polynomial equation $$f(x) - w g(x) = 0$$ and, when $\deg f \neq \deg g$ (and most of the time when $\deg f = \deg g$), the degree of the polynomial $f - w g$ is $\max\{\deg f, \deg g\} = \deg s$.<|endoftext|> TITLE: Is there an irrational number $a$ such that $a^a$ is rational? QUESTION [9 upvotes]: It can be proved that there are two irrational numbers $a$ and $b$ such that $a^b$ is rational (see Can an irrational number raised to an irrational power be rational?) and that for each irrational number $c$ there exists another irrational number $d$ such that $c^d$ is rational (see For each irrational number b, does there exist an irrational number a such that a^b is rational?). My question is: Is there an irrational number $a$ such that $a^a$ is rational (and how could you prove that)? REPLY [5 votes]: Consider the unique (positive) solution $a$ to $x^x = 2$.
If $a$ were rational, say $a = \frac{p}{q}$ with $p$ and $q$ positive integers such that $\gcd(p, q) = 1$, we would have $$\left(\frac{p}{q}\right)^{p / q} = 2 ,$$ and rearranging gives $$p^p = q^p 2^q .$$ Since there is no integer $n$ such that $n^n = 2$, we must have $q > 1$ and hence $2 \mid p^p$. Because $2$ is prime, we have $2 \mid p$. So, $2$ occurs an even number of times in the prime factorization of $p^p$, and likewise in that of $q^p$. Since $p^p = q^p 2^q$, we must have $2 \mid q$, but now $2 \mid p, q$, and this contradicts $\gcd(p, q) = 1$. Thus, $a$ is irrational but $a^a$ is rational (in fact, an integer). REPLY [5 votes]: If $a^a=2$, then $a$ is irrational: If $a=p/q$, then $(p/q)^p=2^q$ is an integer, so $p/q$ is an integer.<|endoftext|> TITLE: Can we say that $\det(A+B) = \det(A) + \det(B) +\operatorname{tr}(A) \operatorname{tr}(B) - \operatorname{tr}(AB)$. QUESTION [5 upvotes]: Let $A,B \in M_n$. Is this formula true? $$\det(A+B) = \det(A) + \det(B) + \operatorname{tr}(A) \operatorname{tr}(B) - \operatorname{tr}(AB).$$ REPLY [4 votes]: $n=2$, $\newcommand{\tr}{\operatorname{tr}}$ $$\det(A+B) = \det(A) + \det(B) + \tr(A) \tr(B) - \tr(AB).$$ $n=3$, letting $c(X) = (\tr(X)^2 - \tr(X^2)) / 2$, \begin{align*} \det(A + B) ={}& \det(A) + \det(B) - \tr(AB)\tr(A) - \tr(AB)\tr(B) +{} \\ &{}+ c(A)\tr(B) + \tr(A)c(B) + \tr(AAB) + \tr(ABB) \end{align*} $n>3$, a formula with $2^n$ terms should be obtainable from Reutenauer and Schützenberger's 1987 paper "A formula for the determinant of a sum of matrices"<|endoftext|> TITLE: ELI5: Riemann-integrable vs Lebesgue-integrable QUESTION [15 upvotes]: What is the difference between Riemann-integrable and Lebesgue-integrable? Does it have anything to do with the absolute value of the integrand; something like $\text{Lebesgue-integrable} \Leftrightarrow \int |f(x)| < \infty$? REPLY [23 votes]: The main difference between integrability in the sense of Lebesgue and Riemann is the way we measure 'the area under the curve'. The Riemann integral asks: what is the 'height' of $f$ above a given part of the domain of the function? The Lebesgue integral, on the other hand, asks: for a given part of the range of $f$, what is the measure of the $x$'s which contribute to this 'height'? The following is taken from the wikipedia page for Lebesgue integration, and is most instructive (Riemann in blue on the top, Lebesgue in red on the bottom): (Taken from https://en.wikipedia.org/wiki/Lebesgue_integration#/media/File:RandLintegrals.png, CC BY-SA 3.0) Or, another way to explain this: I have to pay a certain sum, which I have collected in my pocket. I take the bills and coins out of my pocket and give them to the creditor in the order I find them until I have reached the total sum. This is the Riemann integral. But I can proceed differently. After I have taken all the money out of my pocket I order the bills and coins according to identical values and then I pay the several heaps one after the other to the creditor. This is my integral. This is due to Reinhard Siegmund-Schultze (2008), "Henri Lebesgue", in Timothy Gowers, June Barrow-Green, Imre Leader (eds.), The Princeton Companion to Mathematics, Princeton University Press. As a result of the different definitions, different classes of functions are integrable: By definition, $f$ is Lebesgue-integrable iff $|f|$ is Lebesgue-integrable. And, indeed, as basket noted, we can integrate a lot of functions in the Lebesgue sense which can't be integrated in the Riemann sense (e.g.
the so-called Dirichlet function which is $1$ on the rational numbers and $0$ on the irrational ones). We even have the nice result that if $f$ is bounded and defined on a compact set and Riemann integrable, then it is Lebesgue integrable. On the other hand, if the domain isn't bounded, then $f$ might be integrable in the improper Riemann sense, but not integrable in the Lebesgue sense (e.g. $f(x)=\frac{\sin(x)}{x}$ on $[1,\infty)$; for this function $|f|$ is not integrable even in the improper Riemann sense, which is why $f$ fails to be Lebesgue integrable). REPLY [2 votes]: One can't really simplify things to the level that you apparently want. The theory is simply somewhat complicated, and there is no short answer to your question. Since presumably you already know the definitions, let me comment on two aspects, which are essentially the most important, of course in MY opinion: 1) The simplest nontrivial comment is that Riemann-integration looks at the domain of the function, while Lebesgue-integration looks at the codomain, certainly at the expense of needing more than just intervals. 2) On the other hand, it is reasonable to argue that the most important property of Lebesgue-integration is that one can construct a complete space using (equivalence classes of) Lebesgue-integrable functions.<|endoftext|> TITLE: Map from circle to real line QUESTION [5 upvotes]: I am asked to show that, for any continuous $\phi:\;S^1\to\mathbb{R}$ where $S^1=\{ \|\mathbf{x}\|=1,\;\mathbf{x}\in\mathbb{R}^2\}$, there exists $\mathbf{z}\neq 0$ such that: $$\phi(\mathbf{z})=\phi(-\mathbf{z})$$ It is suggested that I use connectedness. I know both sets are connected, and that a continuous map preserves connectedness, but I can't see how this helps. I thought of considering arcs from $\mathbf{z}$ to $-\mathbf{z}$, but again I cannot see how to argue that there must be an arc such that the images of its endpoints are collapsed to a single point in $\mathbb{R}$. Help? REPLY [3 votes]: Since it is suggested that connectedness be explicitly used, I might phrase it like this: The image of the connected domain of the function $\mathbf z \mapsto \phi(\mathbf z) - \phi(-\mathbf z)$ must be a connected subset of $\mathbb R$. If it is not everywhere $0$, then for any point $\mathbf z_0$ where it is not $0$, the function changes signs as $\mathbf z$ goes from $\mathbf z_0$ to $-\mathbf z_0$. Thus the image includes both positive and negative numbers. All connected subsets of $\mathbb R$ that contain both positive and negative numbers contain $0$.<|endoftext|> TITLE: Weak convergence of probability measures and uniform convergence of functions QUESTION [5 upvotes]: I am stuck on Problem 4.12 of Karatzas and Shreve's book Brownian Motion and Stochastic Calculus: Suppose that $\{ \mathbb{P}_n \}$ is a sequence of probability measures on $(C[0, \infty), \mathcal{B} (C[0, \infty)))$ which converges weakly to a probability measure $\mathbb{P}$. Suppose, in addition, that $\{f_n\}$ is a uniformly bounded sequence of real-valued, continuous functions on $C[0, \infty)$ converging to a continuous function $f$, the convergence being uniform on compact subsets of $C[0, \infty)$. Then $$ \lim_{n \to \infty} \int_{C[0,\infty)} f_n \, d \mathbb{P_n} = \int_{C[0,\infty)} f \, d \mathbb{P} .$$ The result is clear if the iterated limit is considered, i.e. $$ \lim_{m \to \infty} \lim_{n \to \infty} \int_{C[0,\infty)} f_n \, d \mathbb{P_m} = \int_{C[0,\infty)} f \, d \mathbb{P} .$$ But I am not sure how we could show this statement. Any ideas? REPLY [5 votes]: Fix $\varepsilon$.
First use tightness to find a compact subset $K=K(\varepsilon)$ of $C[0,+\infty)$ such that $\mathbb P_n(K)\gt 1-\varepsilon$ for every $n$. Use the uniform convergence of $(f_n)_{n\geqslant 1}$ to $f$ on $K$ in order to handle the integral of $f_n$ over $K$. Use the uniform bound to handle the integral of $f_n$ over the complement of $K$ (which has a measure $\mathbb P_n$ which does not exceed $\varepsilon$).<|endoftext|> TITLE: Is $[N]^\#([N])$ congruent to $w_n(\nu_N)([N])$ mod $2$, where $\nu_N$ is the normal bundle of the embedding of $N$ in $M$? QUESTION [5 upvotes]: Let $M$ be a closed, smooth, orientable $2n$-manifold, and let $N$ be a closed, smooth, orientable $n$-submanifold. Let $[N]^\#$ denote the cohomology class (Poincaré) dual to the homology class $[N]$. Geometrically, if $N'$ is another $n$-submanifold, $[N]^\#([N'])$ counts the number of intersections of $N'$ with $N$ (after perturbing them to be in general position), counted with sign. Is $[N]^\#([N])$ congruent to $w_n(\nu_N)([N])$ mod $2$, where $\nu_N$ is the normal bundle of the embedding of $N$ in $M$? REPLY [2 votes]: The answer is yes for the following reason: $w_n$ is the Euler class mod $2$. Now use e.g. Theorem 4.7 here which says $e(\nu_N)([N])$ counts the number of intersections, or you argue that the Thom class of $\nu_N$ in $M$ is the Poincaré dual of $N$ (follows from this exercise). Hence, by pulling back to the cohomology of $N$ the result follows. You should easily be able to write down the explicit formulas, where you will only need the above facts and naturality. Also note that both of the above arguments are closely related.<|endoftext|> TITLE: How does computing the determinant of a matrix with unit vectors give you the Cross Product? QUESTION [5 upvotes]: Say you had $(a_x,a_y,a_z)\times(b_x,b_y,b_z)$, you would set up a matrix with the unit vectors $\boldsymbol{i},\boldsymbol{j},\boldsymbol{k}$ in one row and the components of the two vectors in the other rows, and the resulting determinant would be your cross product, i.e. the coordinates of an orthogonal vector. My question is why? Why does forming it that way give you the magnitude of an orthogonal vector and how is it related to the $\sin(\theta)$ definition of the cross product? REPLY [3 votes]: The determinant of a $3\times3$ matrix can be viewed as the triple product of its columns (or rows): $$ \begin{align} \det\begin{bmatrix} x_1&y_1&z_1\\ x_2&y_2&z_2\\ x_3&y_3&z_3 \end{bmatrix} &= \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix} \times \begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix} \cdot \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}\\ &= \begin{bmatrix} (x\times y)_1\\ (x\times y)_2\\ (x\times y)_3 \end{bmatrix} \cdot \begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}\tag{1} \end{align} $$ If we replace $\begin{bmatrix} z_1\\ z_2\\ z_3 \end{bmatrix}$ in $(1)$ by $\begin{bmatrix} \boldsymbol{i}\\ \boldsymbol{j}\\ \boldsymbol{k} \end{bmatrix}$, we get $$ \begin{align} \det\begin{bmatrix} x_1&y_1&\boldsymbol{i}\\ x_2&y_2&\boldsymbol{j}\\ x_3&y_3&\boldsymbol{k} \end{bmatrix} &= \begin{bmatrix} (x\times y)_1\\ (x\times y)_2\\ (x\times y)_3 \end{bmatrix} \cdot \begin{bmatrix} \boldsymbol{i}\\ \boldsymbol{j}\\ \boldsymbol{k} \end{bmatrix}\\[6pt] &=(x\times y)_1\boldsymbol{i}+(x\times y)_2\boldsymbol{j}+(x\times y)_3\boldsymbol{k}\\[18pt] &=x\times y\tag{2} \end{align} $$<|endoftext|> TITLE: How to find $z$-score QUESTION [5 upvotes]: I have some probabilities, but I have to find the $z$-score. I am not sure how to do this when I am told I have to use slope-intercept. Where do I plug the numbers in exactly? Here is one of my problems: Find $d^{\prime}$ and locate $X_C$ approximately on a drawing of the distributions.
$$\begin{array}{|l|l|l|} \hline \text{Response} & \multicolumn{2}{l|}{\text{Stimuli}}\\ & N & S+N \\ \hline N & 39 & 21 \\ S+N & 30 & 57 \\ \hline \end{array}$$ First step is to convert raw numbers into probabilities $$ \begin{array}{lclcl} p(\text{HIT}) &=& p(Y|S+N) = 57/(57+21) = 57/78 &=& 0.7308 \\ p(\text{FA}) &=& p(Y|N) = 30/(39+30) = 30/69 &=& 0.4347 \end{array} $$ You then need to use the table to convert these values to $z$-scores. Remember, because the table does not have every value, you will need to use a slope-intercept approach to calculate this value. Then use the formula to calculate $d'$. EDIT: Here is the table it is referring to: \begin{array}{|c|c|} \hline \text{$\quad\quad$Tabled values of the normal curve}\\ \hline \text{area}\ 0-t & t\\ 0 & 0\\ 0.039 & 0.1 \\ 0.079 & 0.2 \\ 0.118 & 0.3\\ 0.155 & 0.4\\ 0.192 & 0.5\\ 0.226 & 0.6\\ 0.258 & 0.7\\ 0.288 & 0.8\\ 0.316 & 0.9\\ 0.341 & 1.0\\ 0.364 & 1.1\\ 0.385 & 1.2\\ 0.403 & 1.3\\ 0.419 & 1.4\\ 0.433 & 1.5\\ 0.445 & 1.6\\ 0.455 & 1.7\\ 0.464 & 1.8\\ 0.471 & 1.9\\ 0.477 & 2.0\\ 0.482 & 2.1\\ 0.486 & 2.2\\ 0.489 & 2.3\\ 0.492 & 2.4\\ 0.494 & 2.5\\ \hline \end{array} REPLY [3 votes]: The table you've linked is a pretty nonstandard format for a z-score table, but it seems to be referring to a situation like this: The area under the curve between $0$ and $t$ is the probability of a normally distributed variable falling between $0$ and $t$. Using the fact that the curve is symmetric about $0$, you can deduce the probability of a normally distributed variable falling in any interval you care to. A $z$-score would correspond to $t$ on the diagram and in your table. However, the probability associated with a $z$-score is not the probability of falling into $[0, t]$ (as appears on the table), but rather the cumulative probability, that is, the probability of falling into $(-\infty, t]$. Since you're finding cumulative probabilities in your first step (at least, I sure hope you are because if you're directly finding probabilities for $[0, t]$ then something like $.7308$ makes no sense) you need an extra step, to go from the full cumulative probability to the probability seen on the table. If your cumulative probability is greater than $.5$ (i.e., $t > 0$), all you need to do is subtract $.5$ from it to get the probability for $[0, t]$. If your cumulative probability is less than $.5$ (i.e., $t < 0$), subtract your cumulative probability from $.5$ to get the probability of falling into $[t, 0]$. Since the normal curve is symmetric, this is the same as the probability of falling into $[0, |t|]$. As for using "slope intercept" to find the exact value, it sounds like your instructor just wants you to interpolate values that aren't on the table. For example, if you get a probability of $.2$, that's $x$ portion of the way between the probabilities $.192$ and $.226$ corresponding to $t = .5$ and $t = .6$ on your table. Hence, you want a value of $t$ that is $x$ portion of the way between $.5$ and $.6$. All that's left is to find $x$: $$x = \frac{.2 - .192}{.226 - .192} = \frac{.008}{.034} = .2352 \ldots$$ Then you have your $z$-score: $$t = .5 + x \cdot (.6 - .5) = .5 + .0235 \ldots = .5235 \ldots $$
the celebrated probabilistic method and many things found in this thread: What are some things we can prove they must exist, but have no idea what they are? Now what about proofs themselves? Is there some remote (or not so remote) area of mathematics or logic which contains a concrete theorem that is actually capable of being proved by showing that a proof must exist without actually giving such a proof? Naively, I imagine that this would require formalizing a proof in such a way that it can be regarded as a mathematical object itself and then maybe showing that the set of such is non-empty. The only thing in this direction that I've seen so far is the "category of proofs", but that's not the answer. This may sound like mathematical science fiction, but initially so did e.g. the idea of proving that some statement is unprovable in a framework, which has become standard in axiomatic set theory. Feel free to change my speculative tags. REPLY [3 votes]: Various good answers here, but the following seems to have been left out and is surely the simplest. Consider the propositional calculus. (If you are not familiar with this term: essentially it is logic using basic terms like "and", "or", "not", but which does not have quantifiers "for all" or "there exists".) One can determine whether or not a statement of propositional calculus is "always true" (technically, a tautology) by constructing a truth table. One can also define proofs in such a system; a theorem is any statement which can be proved. However, it turns out that tautologies and theorems are in fact the same. So, by constructing a truth table and proving that a statement is a tautology, you have also proved that it has a proof, but you have not given the proof. I think this gives an answer to your question, though it should also be remarked that there is an algorithm which in effect "turns a truth table into a proof". So although in the situation I have described you would not have written down a proof, you would be able to do so by purely mechanical procedures.<|endoftext|> TITLE: If $f$ is not continuous then $\ker f$ is dense in $X$ QUESTION [5 upvotes]: Let $X$ be a normed space and $f:X\rightarrow \mathbb R$ a linear function. I saw an old post with this problem, but there is not a complete proof. To begin, I have to consider that $\|f\|=\infty$. REPLY [9 votes]: Note that $f$ is bounded if and only if it is continuous. So let $f$ be unbounded. This means for any $n \in \mathbb N$ we have an $x_n \in X$ so that $f(x_n) ≥ n \|x_n\| > 0$; by rescaling we may assume $\|x_n\|=1$. Now let $z$ be in $X$. From the construction of the $x_n$ it follows that $z_n:= z - \frac{f(z)}{f(x_n)}x_n$ is a sequence that converges to $z$, since $\|z_n - z\| = \frac{|f(z)|}{f(x_n)}\|x_n\| \leq \frac{|f(z)|}{n} \to 0$. But $f(z_n)=f(z)-\frac{f(z)}{f(x_n)}f(x_n)=0$, so $z_n$ lies in the kernel of $f$. But since $z_n \to z$ you have $z \in \overline{\ker(f)}$. Because $z$ was arbitrary the kernel is dense.<|endoftext|> TITLE: Find elevator height given rope length? QUESTION [5 upvotes]: This question is deceptively difficult. I feel like it's probably some classic example somewhere, but I'm not sure how to describe it in enough detail to get valid results in searching online. Problem Statement There is an elevator, which is pulled up by a rope. The rope wraps around the pulley (shown in the picture) before accumulating on the elevator wire drum. The pulley radius $r$ is known and the distance from the pulley to vertical $x$ is known.
The elevator rope makes some wrap around the pulley ($\theta$) before travelling to the center of the top of the elevator. Assume there is an identical setup with the same parameters on the other side. Given the length of paid-out rope $L$, equal to the blue + green lengths shown in the image, what is the height $y$ of the elevator? My Approach The section in blue is equal to the rope wrapped around the pulley and the section in green is equal to the total rope length minus that amount. Defining the green section to be $a$ and the blue section to be $b$: $$ L = a + b \\ b = r \theta \\ L = a + r\theta \\ a = L - r\theta \\ $$ Also, by similar triangles, the angle between green and vertical is also equal to $\theta$. Now, updating the drawing with this information: Treating the green section as a hypotenuse, there are effective legs $y'$ and $x'$, where: $$ y' = y - r\sin{\theta} \\ x' = x + r\cos{\theta} \\ $$ Then, using the Pythagorean Theorem and the definition of $a$ above: $$ a = \sqrt{(x')^2 + (y')^2} \\ L - r\theta = \sqrt{(x + r\cos{\theta})^2 + (y - r\sin{\theta})^2} $$ But this alone doesn't buy me much because I don't know $y$ or $\theta$. Two unknowns, one equation. I tried to find an expression for $\theta$ by exploiting the fact that the angle between $a$ and vertical is equal to the angle of wrap. I define a $b'$, which is equal to the path $a$ would have taken past the pulley had there been no wrap: $$ b' = r \tan{\theta} \\ $$ Now there's an $L'$, which is equal to $a + b'$, which again is a length laid from the pivot tangent to the pulley instead of wrapping around the pulley. There's also an $r'$, which is the distance from the middle of the pulley to this new $L'$-horizontal vertex: $$ r' = r \sec{\theta} \\ r' = \frac{r}{\cos{\theta}} \\ $$ And now, where previously the horizontal distance was $x + r$, now it's $x + r'$, and I can state: $$ L'\sin{\theta} = (x + r') \\ (a + b')\sin{\theta} = (x + \frac{r}{\cos{\theta}}) \\ (a + r\tan{\theta})\sin{\theta} = (x + \frac{r}{\cos{\theta}}) \\ (L - r\theta + r\tan{\theta})\sin{\theta} = (x + \frac{r}{\cos{\theta}}) \\ $$ $$ (L + r(\tan{\theta} - \theta))\sin{\theta} = (x + \frac{r}{\cos{\theta}}) \\ $$ This is the best I can get at an expression for $\theta$, but Matlab is unable to find an explicit solution for $\theta$. With no expression for $\theta$, I can't solve for $y$ in the earlier equation. Is there any closed-form solution to this problem, or do the functions of $\theta$ mean I'm stuck using a numeric solver? REPLY [3 votes]: It appears that it is not possible to obtain the answer in closed form, which is frequently the case in problems involving a rope wrapping around a pulley. Below are my thoughts on the solution. First of all, we can relate $x, r, L, \theta$ in one relatively "simple" equation: $\sin\theta = \frac{x+r\cos\theta}{L-r\theta}$, or $L\sin\theta = x+r(\cos\theta + \theta\sin\theta)$. And this is the troubling point, since $\theta$ enters this equation both by itself and as an argument of the trigonometric functions. This is a transcendental equation, which does not support a solution in closed form. One, however, can obtain an approximate solution assuming $\theta$ to be small if $x,r\ll L$ and expanding $\sin\theta$ and $\cos\theta$ in Taylor series. But, generally, a numerical approach is the only option.
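For instance, here is a minimal Python sketch (the sample values $L=10$, $x=1$, $r=0.25$ are made up purely for illustration) that solves $g(\theta)=(L-r\theta)\sin\theta-x-r\cos\theta=0$ by bisection:

    import math

    L, x, r = 10.0, 1.0, 0.25    # hypothetical rope length, offset, pulley radius

    def g(theta):
        # the root of g is the wrap angle consistent with the geometry above
        return (L - r*theta)*math.sin(theta) - x - r*math.cos(theta)

    lo, hi = 1e-9, math.pi/2     # g(lo) < 0 < g(hi) for these sample values
    for _ in range(60):          # 60 halvings is ample precision
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    theta = 0.5*(lo + hi)
    y = (L - r*theta)*math.cos(theta) + r*math.sin(theta)
    print(theta, y)

(The formula for $y$ used on the last line is the relation derived next.)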
Next, one can relate $y$ and $\theta$ (similarly to how the author did in the post): $y = (L-r\theta)\cos\theta + r\sin\theta = x\cot\theta+r\csc\theta$, which can be found easily after computing $\theta$. Hope this helps.<|endoftext|> TITLE: Roots of iterations of polynomials QUESTION [9 upvotes]: Let $f \in \Bbb Q[X]$ be a polynomial, and denote by $f^n$ the composition $\underbrace{f \circ \cdots \circ f}_{n \text{ times }}$. Let $R(f^n) \subset \Bbb C$ be the set of roots of $f^n$. I'm interested in knowing how $R(f^n)$ behaves as $n \to \infty$. Here is an example: $g(x) = 0.5 x^5-0.5x-1$, with $n=1,2,3$ Here are some questions: Is it true that $R(g^n)$ is "uniformly bounded", i.e. there is a ball of radius $r>0$ that contains all the $R(g^n)$? This is clearly false for the polynomials $X-a, (a≠0)$. But it seems that this could be true for $g(x) = 0.5 x^5-0.5x-1$ (and also for many other examples I've tried). I think that the average of the roots of $f^n$ is constant if $\deg(f) ≥ 2$, but I don't know if this helps. [soft question] Looking at the "fractal" pattern for $g^3$ (see above; again this happens for other polynomials), I have somehow the intuition that $R(g^n)$ "converges" to some set $R \subset \Bbb C$. How could I formalize this idea, and then possibly prove that my intuition can be turned into a "theorem"? In order to formalize this idea, my attempt was to consider a collection $(x_{m,k})_{k≥1}$ of Cauchy sequences such that $\bigcup_{n≥1} R(g^n) = \bigcup_{m≥1} \{x_{m,k} \mid k ≥ 1\}$ and $x_{m,k} \in R(g^k)$. There should be better (and more correct) ways to formalize my intuition… Thank you for your comments! I did the pictures with Mathematica. I tried to see what happens for $g^4$, but it was completely wrong (because $g^4$ has degree $625$, which is pretty large, I presume). I hope that there is no error with the fact that $g^3$ has degree $125$. REPLY [8 votes]: Edit: After reading the comment of lhf to the original question, I figured I might point out that what's going on here is simply inverse iteration. How might we compute the roots of $f^n$, given its very high degree? Well, computing the roots of $f$ isn't hard, say we get $$z_{1},z_{2},z_{3},z_{4},z_{5}.$$ Now, each of those points has five more preimages. The collection of all of those will be the roots of $f^2$. Each of those has five more preimages, giving 125 roots of $f^3$. The process we're describing here is exactly inverse iteration - a well-known method for generating an approximation to the Julia set of $f$. Thus, that's exactly the set your process is converging to. Since you mention Mathematica, I might mention that we can check this with just a few lines of Mathematica code: invImage[z_] := w /. NSolve[w^5/2 - w/2 - 1 == z, w]; invImage[zs_List] := invImage /@ zs; roots = Flatten[Nest[invImage, 0, 4]]; JuliaSetPlot[z^5/2 - z/2 - 1, z, ColorFunction -> None, Epilog -> {Red, Opacity[0.5], Point[{Re[#], Im[#]} & /@ roots]}] Here's another way to think about why this might work. The roots of a polynomial $f$ are exactly the fixed points of the polynomial $g(z)=f(z)+z$, since $$f(z_0)=0 \implies g(z_0) = f(z_0)+z_0=0+z_0=z_0.$$ Thus, the roots of your iterated function $F=f^n$ mostly lie on the Julia set for the function $F(z)+z$. There might be a few isolated attractive points in the Fatou set. Let's examine the implications for the fourth iterate - a polynomial of 620 terms whose coefficients have an absolute value of average size around $10^{23}$.
The Julia set of a polynomial does not depend continuously on its coefficients, but it's actually close to continuous. We might hope that changing the coefficient of $z$ from $-4$ to $-3$ won't change the Julia set too much. We might even hope that, as we increase the number of iterations, the effect of adding 1 to the $z$ coefficient becomes less and less. The point being that we could examine the Julia set of $F$ itself, which is the same as the Julia set of the original $f$. Thus, we might hope that your roots are clustering on the Julia set of $f$.<|endoftext|> TITLE: What is the intuitive difference between almost sure convergence and convergence in probability? QUESTION [11 upvotes]: It is a standard fact in probability that almost sure convergence is stronger than convergence in probability. I can only see the differences in the proofs. However, is there a way to view it intuitively? Is it true that almost sure convergence has a tighter hold on the tails of a sequence of random variables than convergence in probability does? The definition of convergence in probability I am using is that given $\epsilon >0$: $$ \lim_{n \to \infty} P(|X_n-X|> \epsilon) = 0 $$ and the definition of almost sure convergence I am using is: $$ P(\lim_{n \to \infty}X_n = X) = P\left(\omega \in \Omega: \lim_{n \to \infty}X_n(\omega) = X(\omega)\right) = 1 $$ The two above appear almost exactly the same to me, except that the limit in $n$ is outside the probability measure for convergence in probability and inside the probability measure for almost sure convergence. Is there an easy-to-understand intuitive difference here? Thanks! REPLY [7 votes]: For simplicity, consider the case where $X = 0$ and $X_n$ is the indicator function of an event $E_n$. "$X_n$ converges almost surely to $0$" says that with probability $1$, only finitely many of the events $E_n$ occur. "$X_n$ converges in probability to $0$" says that the probability of event $E_n$ goes to $0$ as $n \to \infty$. Consider a case where for each $m$ you partition the sample space into $m$ events, each of probability $1/m$, and take all these events for all $m$ to form your sequence $E_n$. Then $X_n \to 0$ in probability because the probabilities of the individual events go to $0$, but each sample point is in infinitely many $E_n$ (one for each $m$) so $X_n$ does not go to $0$ almost surely.<|endoftext|> TITLE: How many answers to $|3^x-2^y|=5$? QUESTION [11 upvotes]: How many answers are there to the equation $|3^x-2^y|=5$ such that $x$ and $y$ are positive integers? Are there infinitely many? I've found $(2,2)$, $(3,5)$, and $(1,3)$. It seems to explode with larger values, but it's not a steady increase and there seem to be many dips. Do we KNOW that there are no large values for $x$ and $y$ where a power of 3 comes close to a power of 2? REPLY [5 votes]: Here's an elementary self-contained argument that there is no solution with $y>5$. A power of $3$ is congruent to either $1$ or $3 \bmod 8$, so once $y \geq 3$ we must have $3^x - 2^y = -5$. Once $y \geq 6$, we then have $3^x \equiv -5 \bmod 2^6$, and thus $x \equiv 11 \bmod 16$. But then $3^x + 5 \equiv 12 \bmod 17$ (note that $3$ has order $16$ modulo $17$, so $3^x \bmod 17$ depends only on $x \bmod 16$), and no power of $2$ is congruent to $12 \bmod 17$ (the powers of $2 \bmod 17$ are $2,4,8,-1,-2,-4,-8,1,2,4,8,-1$ etc.), QED.<|endoftext|> TITLE: Affine variety over a field which is not algebraically closed can be written as the zero set of a single polynomial QUESTION [12 upvotes]: I am trying to prove the following statement.
If a field $K$ is not algebraically closed, then any $K$-variety $V\subset\mathbb{A}^n$ can be written as the zero set of a single polynomial in $K[X_1,X_2,...,X_n].$ Suppose $V=V(f_1,f_2,...,f_n)$, and I want to find a polynomial $g\in K[X_1,X_2,...,X_n]$ such that $V=V(g)$. I know that if I can find a polynomial $\phi\in K[X_1,X_2,...,X_n]$ whose only zero is $(0,0,...,0)$, then we are done, since we could take $g=\phi(f_1,f_2,...,f_n)$. For $n=1$ I would just choose $\phi=X$, but I do not even know how to proceed in the case $n=2$. Maybe we could choose an irreducible polynomial of degree $>1$. Any help will be appreciated. REPLY [15 votes]: It actually suffices to find such a $\phi$ in the case $n=2$. For instance, if you have such a $\phi$ for $n=2$, then $\phi(X_1,\phi(X_2,X_3))$ works for $n=3$, and $\phi(X_1,\phi(X_2,\phi(X_3,X_4)))$ works for $n=4$, and so on. In the case $n=2$, what you can do is take a nonconstant polynomial $f(t)\in K[t]$ with no roots in $K$ and homogenize it. That is, if $f$ has degree $d$, define $\phi(X,Y)=Y^df(X/Y)$. You can check that $\phi$ is a homogeneous polynomial of degree $d$ in $X$ and $Y$ in which the coefficient of $X^d$ is nonzero (since that coefficient comes from the leading coefficient of $f$). If $\phi(a,b)=0$ for $a,b\in K$ and $b\neq 0$, then $b^df(a/b)=0$ so $a/b$ is a root of $f$, which is impossible. On the other hand, if $b=0$, then because the coefficient of $X^d$ in $\phi$ is nonzero, we must also have $a=0$. Thus $(0,0)$ is the only zero of $\phi$.<|endoftext|> TITLE: Prove $\lim_{n \to \infty} \frac{\ln(n)}{n}=0$ without L'Hospital's Rule QUESTION [6 upvotes]: Prove the following without using L'Hospital's Rule, integration or Taylor Series: $$\lim_{n \to \infty} \frac{\ln(n)}{n}=0 $$ I began by rewriting the expression as: $$\lim_{n \to \infty}{\ln(n^{1/n})} $$ Since the text shows $$\lim_{n \to \infty}{n^{1/n} = 1} $$ I was wondering: is the proof just as simple as stating $$\lim_{n \to \infty}{\ln(n^{1/n})} = \ln(1) = 0$$ (by continuity of $\ln$), or do I need to apply the squeeze theorem, use an $\varepsilon$-$N$ proof, etc.? REPLY [31 votes]: Since $e^x> x$, we have $\ln x < x$ for all $x >0$. Hence, $$0 \leqslant \frac{\ln n}{n} = \frac{2 \ln \sqrt{n}}{n} < \frac{2 \sqrt{n}}{n} = \frac{2}{\sqrt{n}} \to 0$$<|endoftext|> TITLE: Toffoli gate cannot be decomposed into a sequence of one- or two-bit classical gates QUESTION [7 upvotes]: Materials in quantum information often emphasize that one- and two-bit classical reversible gates cannot achieve universality for classical reversible computation, whereas universal quantum computing can be achieved using only one- or two-qubit gates. I want to understand why classical reversible computing cannot be achieved with only one- and two-bit classical reversible gates. I consulted some materials, but they only 'illustrated' why it is difficult or seemingly impossible to simulate certain classical reversible gates using only one- and two-bit classical reversible gates, never giving a satisfactorily clear mathematical proof of that impossibility. Specifically, I wonder if there is any mathematically clear proof for the claim that the Toffoli gate cannot be achieved using only one- and two-bit classical reversible gates. Thanks in advance. REPLY [5 votes]: The proof uses the fact that the Toffoli gate is non-affine, but all one- and two-bit classical reversible gates are affine. Here we go: Definition: Let $B=\mathbb{F}_2=(\{0,1\},+,\cdot)$ be the smallest field.
The map $\theta_3: (x,y,z)\mapsto (x,y,xy+z)$, where the vector $(x,y,z)\in B^3$, is called the "Toffoli gate". Lemma: Let $f:B^2\to B^2$ be bijective. Then $f$ is affine, i.e. for all $\mathbf{x}\in B^2$, $f(\mathbf{x})$ satisfies $$ f(\mathbf{x}) = \mathbf{M}\mathbf{x} +\mathbf{a} \qquad (*), $$ where $\mathbf{a} \in B^2$ and $\mathbf{M} \in \mathrm{GL}_2(B)$, i.e. an invertible 2x2 matrix whose entries are from $B$. Proof: There are $24=4!$ distinct bijective maps $f$. Now, there are also $24=4\times 6$ maps generated from $(*)$ by the 4 distinct vectors $\mathbf{a}$ and 6 invertible 2x2 $B$-valued matrices $\mathbf{M}$. Now we only need to show that all 24 of these maps are indeed distinct, so we can conclude that all maps $f$ are of the form $(*)$. Assume $$ f=f' \Leftrightarrow f(\mathbf{x})=f'(\mathbf{x})\ \forall \mathbf{x}\in B^2 \Leftrightarrow (\mathbf{M}-\mathbf{M}')\mathbf{x} +(\mathbf{a}-\mathbf{a}') =0 \text{ } \forall \mathbf{x}\in B^2 $$ Letting $\mathbf{x} =\mathbf{0}$, this implies that $\mathbf{a} = \mathbf{a}'$. Thus, upon substituting $\mathbf{x} = (0,1)$ and $\mathbf{x}=(1,0)$, it follows that $\mathbf{M}=\mathbf{M}' \text{ }\square$. Remark: Any composition of affine functions is affine; moreover, a one-bit reversible gate is either the identity or NOT ($x\mapsto x+1$), both affine, and a one- or two-bit gate extended to act on three bits remains affine. Corollary: The Toffoli gate is not affine: it may be written as $\theta_3 = \mathrm{id} + \Delta$, where $\Delta(x,y,z)=(0,0,xy)$, and since $\mathrm{id}$ is affine, if $\theta_3$ were affine then $\Delta = \theta_3 - \mathrm{id}$ would be affine too; but $xy$ is not an affine function of $(x,y,z)$ (an affine $ax+by+c$ matching $xy$ at $(0,0),(1,0),(0,1)$ would force $a=b=c=0$, and then it could not equal $1$ at $(1,1)$). Therefore, one cannot construct the Toffoli gate from one- and two-bit reversible classical gates $\square$. This means that one- and two-bit reversible classical gates are not "universal", i.e. they're not a basis for arbitrary gates. Such a basis may however be constructed from generalised Toffoli gates (where "generalised" means that they may need to have auxiliary bits).<|endoftext|> TITLE: Is it possible to find a perfect cube like 111...11? QUESTION [5 upvotes]: Can we find a perfect cube like $111...111$ (all digits are $1$), apart from the number $1$ itself? It's easy to prove that there can't be anything like $111...11$ that is a perfect square besides $1$, but how to do this for a perfect cube? Are there some new techniques to do this? REPLY [4 votes]: (Edited version) If $$k^3=\frac{10^n-1}9$$ then $$k^3-1=\frac{10^n-10}{9}$$ $$9(k-1)(k^2+k+1)=10(10^{n-1}-1)$$ So if such $k$ exists and both sides are non-zero (i.e. $n\neq1$ and $k\neq1$), one of the following situations must occur. First situation: Either of the factors $k^2+k+1$ or $k-1$ is divisible by $10$, i.e. $$k^2+k+1\equiv 0 \pmod{10} \quad\text{or}\quad k=10m+1.$$ Notice that for all $k\ge 0$, $k^2+k+1$ must be odd, so the first condition is not satisfied. For the second condition, $k=10m+1$, if it is true, then $$9(10m+1)^3=10^n-1$$ Rearrange and simplify to get $$100m^3+30m^2+3m=\frac{10^{n-1}-1}{9}.$$ So for every $n$, this condition can hold only if the corresponding condition holds for $n-1$; descending, it is finally equivalent to the case $n=1$: does there exist a positive integer $m$ such that $m(100m^2+30m+3)=0$? There is no solution except $m=0$, which corresponds to $k=1$, $n=1$. Second situation: Only $k-1$ is divisible by $5$ (and not by $2$). But then $k^2+k+1$ must be divisible by $2$, which is impossible. Third situation: Only $k-1$ is divisible by $2$, so $k^2+k+1$ must be divisible by $5$. We write $k=2t+1$ and investigate whether $k^2+k+1$ can be divisible by $5$. Observe $$(2t+1)^2+(2t+1)+1=4t^2+6t+3$$ and $$4t^2+6t+3\equiv -t^2+t-2\pmod{5}$$ This is equivalent to stating that $$t^2-t+2\equiv 0 \pmod{5}$$ But by inspection, this has no solution.
(By testing the cases $t=5q, 5q+1, \ldots, 5q+4$.) So this situation is impossible.<|endoftext|> TITLE: geodesic computation: "energy" minimization versus arc length minimization QUESTION [12 upvotes]: Is it true that applying the Euler-Lagrange equation to the integral $E(\gamma)=\int_{t_1}^{t_2} g_{\alpha\beta}(\gamma^{\alpha})'(\gamma^{\beta})'\operatorname{d}\!t$ rather than the arc length integral $L(\gamma)=\int_{t_1}^{t_2} \sqrt{g_{\alpha\beta}(\gamma^{\alpha})'(\gamma^{\beta})'}\operatorname{d}\!t$ is just a mathematical trick for simplifying computation when parameterizing by arc length? Isn't it true that if you are parameterizing a curve on a surface by something other than a constant multiple of arc length (for example, a curve $\mathbf{C}(x)=(x,y(x))$ in the parameter space of the hemisphere surface mapping $z=\sqrt{1-x^2-y^2}$), the critical point of $E(\mathbf{C})$ will not be a geodesic in general? REPLY [20 votes]: The minimum for the length functional is badly non-unique: it is given by any parametrization of the geodesic, not necessarily the natural one. On the other hand, the energy functional is locally uniformly convex; therefore the minimizer is unique and the variational calculus is nice. It is instructive to do the computation for the Euclidean plane. Suppose we are looking for the condition that a curve $(x(t),y(t))$ is a geodesic. Then, for the energy functional $ \int \left(x'(t)^2+y'(t)^2\right)dt, $ the Euler-Lagrange equations read $x''(t)=0$ and $y''(t)=0$, which is the straight line traced at constant speed. On the other hand, for the length functional $\int \sqrt{x'(t)^2+y'(t)^2}dt, $ one gets, after some algebra, $$ \frac{x''}{x'}=\frac{x'x''+y'y''}{x'^2+y'^2}\;\mathrm{and}\;\frac{y''}{y'}=\frac{x'x''+y'y''}{x'^2+y'^2}, $$ and from the equality between the left-hand sides we get $x'(t)\equiv Cy'(t)$, which also gives a straight line, but with arbitrary parametrization. Added: It is worth mentioning that the "energy functional" is a Lagrangian action for a free particle confined to the surface. Therefore, the law of conservation of energy implies that any solution $\gamma(t)$ will move at constant speed (which is of course also possible to see directly). And if one restricts the class of curves to such solutions, the EL equations indeed take the same form for both functionals. REPLY [3 votes]: The correct answer to the second question is that the critical point of $E(\gamma)$ (sometimes called the "energy" functional) is not a geodesic in general when parameterizing by something other than arc length. It is not a geodesic for the example given in the question (the simplest possible non-trivial example). The coefficients of the first fundamental form of the upper hemisphere mapping $z=\sqrt{1-x^2-y^2}$ are $g_{11}=1+x^2/z^2,$ $g_{12}=xy/z^2,$ and $g_{22}=1+y^2/z^2.$ The coordinates of the curve are $\gamma^1(x)=x$ and $\gamma^2(x)=y(x)$ (the 2 means the second coordinate of the curve, not squared) so the derivatives of the coordinates are $(\gamma^1)'=1$ and $(\gamma^2)' = y'(x).$ So $$ g_{\alpha\beta} (\gamma^{\alpha})'(\gamma^{\beta})'= (1+x^2/z^2) + (2xy/z^2) y' + (1+y^2/z^2) (y')^2. $$ Replacing $z^2$ with $1-x^2-y^2$ yields $$ F(x,y,y')=(1 + x^2/(1-x^2-y^2)) + (2xy/(1-x^2-y^2))y' + (1 + y^2/(1-x^2-y^2))(y')^2. $$ This is the integrand of the "energy" functional and $\sqrt{F}$ is the integrand of the arc length functional.
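Both Euler-Lagrange equations below can be generated mechanically with a computer algebra system; here is a minimal sympy sketch (the variable names are mine, not the author's) encoding the integrands just defined:

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    x = sp.symbols('x')
    y = sp.Function('y')
    yp = y(x).diff(x)

    z2 = 1 - x**2 - y(x)**2                                # z^2 on the hemisphere
    F = (1 + x**2/z2) + (2*x*y(x)/z2)*yp + (1 + y(x)**2/z2)*yp**2

    # Euler-Lagrange equation for the "energy" integrand F ...
    print(euler_equations(F, y(x), x))
    # ... and for the arc-length integrand sqrt(F)
    print(euler_equations(sp.sqrt(F), y(x), x))

After clearing denominators, the two printed equations reduce to the ODEs stated next.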
Applying the Euler-Lagrange equation to the "energy" functional yields $$ (x^2-1)(y^2+x^2-1)y'' - (x^2-1)y(y')^2 + 2xy^2 y' - y(y^2-1) = 0. \tag{1} $$ Applying the E-L equation to the arc length functional yields $$ (y^2+x^2-1)y'' - x(x^2-1)(y')^3 + (3x^2-1)y(y')^2 - x(3y^2-1)y' + y(y^2-1)=0. \tag{2} $$ It is easy to show that $y=kx$ (a straight line through the origin, hence a great circle when mapped to the hemisphere) solves the second ODE (for the arc length functional) but not the first ODE (for the "energy" functional). The image below shows the incorrect solution for the "energy" functional ODE for the initial conditions ($y(0)=0$, $y'(0)=1$) in red and the correct solution in blue. The projection of a great circle whose plane is not perpendicular to the $xy$-plane into the $xy$-plane is an ellipse. Hence, there should also be solutions to the ODE of the form $(a^2+1)x^2 + 2abxy + (b^2+1)y^2=1$ (the plane of the great circle is $z=ax+by$). It is easy to check that these ellipses are indeed solutions of (2) but not (1) using a computer algebra system. The answer to the first question is that using $E(\gamma)$ instead of $L(\gamma)$ when parameterizing by arc length is indeed just a mathematical trick to simplify computation. In his book "Differential Geometry", Kreyszig derives the geodesic equation $$ \ddot{\gamma}^{\tau} + \Gamma_{\alpha\beta}^{\tau}\dot{\gamma}^{\alpha}\dot{\gamma}^{\beta} = 0 $$ (where dot denotes differentiation with respect to arc length) by applying the Euler-Lagrange equation to the arc length functional $L(\gamma)$ and then using the fact that the curve parameter is arc length to simplify the resulting equation. It is easy to show that applying the E-L equation to $E(\gamma)$ results in exactly the same equation (the geodesic equation) with a slightly less messy computation. Minimizing $E(\gamma)$ is valid because $ds/dt$ (the integrand of $L(\gamma)$) and $(ds/dt)^2$ (the integrand of $E(\gamma)$) both equal $1$ when parameterizing by arc length. There seems to be a silly, false, and unfortunately widespread belief that a geodesic must be parameterized by a constant multiple of arc length. The book "Elementary Differential Geometry" by Barrett O'Neill first published in 1966 may be a source of this belief. On page 245 of the revised 2nd edition, O'Neill defines a geodesic as a curve $\alpha:\mathbb{R}\rightarrow \mathbb{R}^3$ on a surface $M$ such that $\alpha''(t)$ is always orthogonal to the tangent plane of $M$ at $\alpha(t)$. He then goes on to "prove" that this means geodesics must have constant speed. The "proof" goes like this: $\frac{\operatorname{d}}{\operatorname{d}\!t} |\alpha'|^2 = \frac{\operatorname{d}}{\operatorname{d}\!t}(\alpha'\cdot\alpha') = 2\alpha'\cdot\alpha''=0$ ($\alpha'\cdot\alpha'' = 0$ since $\alpha'$ is orthogonal to the surface normal and $\alpha''$ is parallel to the surface normal). Hence $|\alpha'|^2$ (and the speed $|\alpha'|$) is constant. The problem with this "proof" is that the correct definition of a geodesic curve on a surface (found on page 154 of Kreyszig's book) is a curve $\alpha$ such that the geodesic curvature $k_g$ vanishes at every point of the curve. Since $k_g$ is defined as $k\sin\gamma$ where $k$ is the curvature of the curve and $\gamma$ is the angle between $\ddot{\alpha}$ and the surface normal, $\ddot{\alpha}(s)$ is always parallel to the surface normal of $M$ at $\alpha(s)$ whenever $\ddot{\alpha}(s)$ has non-zero length (dot denotes differentiation with respect to arc length and $s$ denotes arc length).
If $\ddot{\alpha}$ has zero length, then $k$ must be zero and $k_g$ is also zero by definition. The truth is that a geodesic is just a curve on a surface, and like all curves, it has infinitely many non-constant speed parameterizations. Let $\alpha:[0,L]\rightarrow\mathbb{R}^3$ (where $L$ is the length of the curve) be a geodesic curve on a surface $M$ parameterized by arc length. Let $\varphi:[0,1]\rightarrow [0,L]$ be any smooth, non-linear, increasing bijection. Then $\gamma = \alpha\circ \varphi$ is another way of representing exactly the same geodesic curve, but $|\gamma'| = |\alpha'|\varphi' = \varphi'$ is non-constant since $\varphi'$ is non-constant (recall $|\alpha'|=1$).<|endoftext|> TITLE: Show that every compact metrizable space has a countable basis QUESTION [16 upvotes]: Show that every compact metrizable space has a countable basis. My try: Let $X$ be a compact metrizable space. Now for each $n\in \Bbb N$ I can consider the open cover $\{B(x,\frac{1}{n}):x\in X\}$ of $X$. As $X$ is compact, for each $n$ we can find finitely many points $x_i$, $1\leq i\le k_n$, such that $B_n=\{B(x_i,\frac{1}{n}):1\leq i\le k_n\}$ covers $X$. Now $\Bbb B=\bigcup\{B_n:n\in \Bbb N\}$ is a countable collection. It remains to show that $\Bbb B$ is a basis of $X$. Let $U$ be an open set in $X$ and let $x\in U$; then $\exists r>0$ such that $B(x,r)\subset U$, and we have $\frac{1}{n}<r$ for some $n$. REPLY [2 votes]: Given $x\in U$ and $r>0$ such that $B(x,r)\subset U$, we need only to find a $B\in\Bbb B$ that's completely included in $B(x,r)$. For $n$ so large that $1/n<r/2$, the balls of $B_n$ cover $X$, so $x\in B(x_i,\frac1n)$ for some $i$, and by the triangle inequality $B(x_i,\frac1n)\subset B(x,r)\subset U$.<|endoftext|> TITLE: How do I show that $n=2$ is the only integer satisfying $\cos^n\theta+ \sin^n\theta=1$ for all $\theta$ real or complex? QUESTION [5 upvotes]: It is well known that $\cos^2\theta+ \sin^2\theta=1$ for all $\theta$ real or complex. I would like to ask about the general equality $\cos^n\theta+ \sin^n\theta=1$: are there other values of the positive integer $n$ than $n=2$ for which $$\cos^n\theta+ \sin^n\theta=1$$ for all $\theta$ real or complex? Probably the equivalent question is: How do I show that $n=2$ is the only integer satisfying $$\cos^n\theta+ \sin^n\theta=1$$ for all $\theta$ real or complex? Thank you for any help REPLY [2 votes]: Try differentiating both sides: $$\begin{align}\sin^nx+\cos^nx &= 1 \\ \implies \frac{d}{dx}\left(\sin^nx+\cos^nx\right) &= 0 \\ n(\sin x)^{n-1} \cos x-n(\cos x)^{n-1}\sin x&=0 \\ (\tan x)^{n-1}&=\tan x \\\end{align}$$ (dividing by $n\cos^n x$ where $\cos x\neq 0$), and since this must hold identically, $$ \implies n-1=1 \\ n = 2$$<|endoftext|> TITLE: Area between two curves, which curve is on top? QUESTION [5 upvotes]: Given a question like this: Find the area between ${y = x^2 + 2x - 3}$ and ${y = 2x^2 -5x -3}$. I know how to find the area ${\int y_1 - y_2}$ but how can I tell which one is the top curve? Are there any shortcuts to determining the top curve? REPLY [3 votes]: Assuming you work only with polynomials and they don't intersect, here is a little shortcut: find the highest exponent at which the two polynomials have different coefficients. The one with the higher coefficient there is on top.<|endoftext|> TITLE: Prove that $\frac{1}{n+1} + \frac{1}{n+3}+\cdots+\frac{1}{3n-1}>\frac{1}{2}$ QUESTION [5 upvotes]: Without using Mathematical Induction, prove that $$\frac{1}{n+1} + \frac{1}{n+3}+\cdots+\frac{1}{3n-1}>\frac{1}{2}$$ I am unable to solve this problem and don't know where to start. Please help me to solve this problem using the laws of inequality. It is a problem on inequalities. Edit: $n$ is a positive integer such that $n>1$.
REPLY [5 votes]: The sum can be written as \begin{align} \frac{1}{n+1} + \frac{1}{n+3} + \ldots + \frac{1}{3n - 1} & = \sum_{i=1}^n \frac{1}{n + 2i - 1}. \end{align} Now recall the AM-HM inequality: $$ \frac 1n\sum_{i=1}^n(n + 2i - 1) > \frac{n}{\sum_{i=1}^n \frac{1}{n + 2i - 1}}. $$ (The requirement that $n > 1$ guarantees that the numbers $n+2i-1$ are not all equal, so the inequality is strict.) Rearrange, using $\sum_{i=1}^n(n + 2i - 1) = n^2 + n(n+1) - n = 2n^2$, to get \begin{align} \sum_{i=1}^n \frac{1}{n + 2i - 1} & > \frac{n^2}{\sum_{i=1}^n(n + 2i - 1)} = \frac 12. \end{align} REPLY [2 votes]: Any statement that needs to be proved for all $n\in\mathbb{N}$ will need to make use of induction at some point. We have \begin{align} S_n & = \sum_{k=1}^n \dfrac1{n+2k-1} = \dfrac12 \left(\sum_{k=1}^n \dfrac1{n+2k-1} + \underbrace{\sum_{k=1}^n \dfrac1{3n-2k+1}}_{\text{Reverse the sum}}\right)\\ & = \dfrac12 \sum_{k=1}^n \dfrac{4n}{(n+2k-1)(3n-2k+1)} = \sum_{k=1}^n \dfrac{2n}{(n+2k-1)(3n-2k+1)} \end{align} From AM-GM, we have $$4n = (n+2k-1) + (3n-2k+1) \geq 2 \sqrt{(n+2k-1)(3n-2k+1)}$$ This gives us that $$\dfrac1{(n+2k-1)(3n-2k+1)} \geq \dfrac1{4n^2}$$ Hence, we obtain that $$S_n = \dfrac12 \sum_{k=1}^n \dfrac{4n}{(n+2k-1)(3n-2k+1)} \geq \sum_{k=1}^n \dfrac{2n}{4n^2} = \dfrac12$$ Also, just to note, every step in the above solution requires induction. Also, as @MartinR rightly points out, the inequality is strict in our case for all $k$ except $k=\dfrac{n+1}2$ (since equality holds only when $n+2k-1 = 3n-2k+1 \implies k = \dfrac{n+1}2$). REPLY [2 votes]: $f(x) = 1/x$ is strictly convex, therefore $$ \frac{1}{2n} < \frac 12 \left( \frac{1}{n+k} + \frac{1}{3n-k} \right) $$ for $k = 1, ..., n-1$, or $$ \frac{1}{n+k} + \frac{1}{3n-k} > \frac {1}{2n} + \frac {1}{2n} $$ Combining terms pairwise from both ends of the sum shows that $$ \frac{1}{n+1} + \frac{1}{n+3}+\dots+\frac{1}{3n-3} + \frac{1}{3n-1} > \underbrace{\frac {1}{2n} + \frac {1}{2n} + \dots +\frac {1}{2n} + \frac {1}{2n}}_{n \text{ terms}} = \frac 12. $$ (If $n$ is odd then the middle term $ \frac {1}{2n}$ is not combined with another one. But since $n> 1$ there is at least one "pair" to combine, which gives the strict inequality.)<|endoftext|> TITLE: Frobenius norm and submultiplicativity QUESTION [5 upvotes]: I read (page 8 here) that if $A$ and $B$ are rectangular matrices so that the product $AB$ is defined, then $$(1)\quad||AB||_F^2\leq ||A||_F^2||B||_F^2$$ Does that mean that the inequality above also holds when the number of rows of $A$ is larger than the number of columns of $B$? The justification (Cauchy-Schwarz): $$||AB||_F^2=\sum_{i=1}^n\sum_{j=1}^k(a_i^\top b_j)^2\leq \sum_{i=1}^n\sum_{j=1}^k||a_i||_2^2||b_j||^2_2=||A||_F^2||B||_F^2$$ does not require $k$ (the number of columns of $B$) to equal $n$ (the number of rows of $A$). Intuitively, you could also add imaginary columns of 0's to $B$, so I can believe the claim. On the other hand, in other places I only see $(1)$ claimed for matrices of the same size and have had a hard time finding it claimed for the more general case (where $A$, $B$ are merely such that $AB$ is defined) online. REPLY [2 votes]: Indeed, the only requirement to have the inequality that you wrote for the Frobenius norm and for arbitrary matrices is that the product $AB$ is defined.
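As a quick numerical sanity check, here is a minimal Python/NumPy sketch (the shapes are arbitrary examples of mine) comparing the two sides for rectangular factors where the number of rows of $A$ differs from the number of columns of $B$:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))   # 5 rows in A ...
    B = rng.standard_normal((3, 2))   # ... 2 columns in B: only the inner dimensions match

    lhs = np.linalg.norm(A @ B, 'fro')**2
    rhs = np.linalg.norm(A, 'fro')**2 * np.linalg.norm(B, 'fro')**2
    print(lhs <= rhs)   # True: the bound needs only that A @ B is defined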
If you are looking for a reference, see for example the book Numerical Linear Algebra, by Trefethen and Bau, page 23.<|endoftext|> TITLE: Improper Integral $\int_0^1\frac{\arcsin^2(x^2)}{\sqrt{1-x^2}}dx$ QUESTION [9 upvotes]: $$I=\int_0^1\frac{\arcsin^2(x^2)}{\sqrt{1-x^2}}dx\stackrel?=\frac{5}{24}\pi^3-\frac{\pi}2\log^2 2-2\pi\chi_2\left(\frac1{\sqrt 2}\right)$$ This result seems to be numerically correct. Can we prove that the equality is exact? REPLY [17 votes]: \begin{align}\int_0^{1} \frac{\arcsin^2 x^2}{\sqrt{1-x^2}}\,dx &= \frac{1}{2}\int_0^{1} \frac{\arcsin^2 x}{\sqrt{x}\sqrt{1-x}}\,dx \tag{1}\\&= \frac{1}{2}\int_0^{\pi/2} \frac{\theta^2\cos \theta}{\sqrt{\sin \theta - \sin^2 \theta}}\,d\theta \tag{2}\\&= \frac{1}{\sqrt{2}}\int_0^{\pi/2} \frac{\left(\frac{\pi}{2} - \theta\right)^2\cos \frac{\theta}{2}}{\sqrt{1-2\sin^2 \frac{\theta}{2}}}\,d\theta \tag{3}\\&= \int_0^{\pi/2} \left(\frac{\pi}{2} - 2\arcsin \left(\frac{\sin \alpha}{\sqrt{2}}\right)\right)^2\,d\alpha \tag{4}\\&= \frac{\pi^3}{8} - 2\pi \int_0^{\pi/2} \arcsin \left(\frac{\sin \alpha}{\sqrt{2}}\right)\,d\alpha + 4\int_0^{\pi/2} \left(\arcsin \left(\frac{\sin \alpha}{\sqrt{2}}\right)\right)^2\,d\alpha \tag{5}\end{align} where we made $x \mapsto \sqrt{x}$ in step $(1)$. In step $(2)$ we made $\theta = \arcsin x$ and finally in $(3)$ we made the change of variable $\sin \dfrac{\theta}{2} = \dfrac{\sin \alpha}{\sqrt{2}}$. Now we recall the famous series expansion: $\displaystyle \arcsin^2 x = \frac{1}{2}\sum\limits_{n=1}^{\infty} \dfrac{(2x)^{2n}}{n^2\binom{2n}{n}}$. Hence, \begin{align*}\int_0^{\pi/2} \left(\arcsin \left(\frac{\sin \alpha}{\sqrt{2}}\right)\right)^2\,d\alpha &= \frac{1}{2}\sum\limits_{n=1}^{\infty} \dfrac{2^n}{n^2\binom{2n}{n}}\int_0^{\pi/2} \sin^{2n} \alpha \,d\alpha\\&= \frac{\pi}{4}\sum\limits_{n=1}^{\infty} \dfrac{1}{n^22^{n}} = \frac{\pi}{4}\operatorname{Li}_2 \left(\frac{1}{2}\right) = \frac{\pi}{8}\left(\zeta(2) - \log^2 2\right)\end{align*} Also, the infinite series expansion $\displaystyle \arcsin x = \sum\limits_{n=0}^{\infty} \dfrac{\binom{2n}{n}x^{2n+1}}{(2n+1)4^n}$ gives us \begin{align*}\int_0^{\pi/2} \arcsin \left(\frac{\sin \alpha}{\sqrt{2}}\right)\,d\alpha &= \frac{1}{\sqrt{2}}\sum\limits_{n=0}^{\infty} \dfrac{\binom{2n}{n}}{(2n+1)8^n}\int_0^{\pi/2} \sin^{2n+1} \alpha \,d\alpha\\&= \frac{1}{\sqrt{2}}\sum\limits_{n=0}^{\infty} \dfrac{1}{(2n+1)^2 2^n} = \chi_2 \left(\frac{1}{\sqrt{2}}\right)\end{align*} Combining the results, $$\int_0^{1} \frac{\arcsin^2 x^2}{\sqrt{1-x^2}}\,dx = \frac{5\pi^3}{24} - \frac{\pi}{2}\log^2 2 - 2\pi \chi_2 \left(\frac{1}{\sqrt{2}}\right)$$<|endoftext|> TITLE: How to find an onto homomorphism between two groups? QUESTION [7 upvotes]: Consider the following subgroups of $\text{SL}(2,\mathbb{Z})$: $A$ the subgroup of matrices with determinant $1$: \begin{bmatrix}4\mathbb{Z}+1&8\mathbb{Z}\\4\mathbb{Z}&4\mathbb{Z}+1\end{bmatrix} $B$ the subgroup of matrices with determinant $1$: \begin{bmatrix}2\mathbb{Z}+1&8\mathbb{Z}\\4\mathbb{Z}&2\mathbb{Z}+1\end{bmatrix} I want some onto homomorphism from $B$ to $A$ whose kernel is $\left\{\begin{bmatrix}1&0\\0&1\end{bmatrix},\begin{bmatrix}-1&0\\0&-1\end{bmatrix}\right\}$ How to get this? I have no idea how to find the map. REPLY [5 votes]: Let $$\beta=\begin{pmatrix} 2a+1 & 8 b \\ 4c&2d+1 \end{pmatrix} \in B .$$ Then $4 ad+ 2a+2d+1 -32bc = 1$, that is $2ad+a+d = 16 bc$, which yields $a \equiv d \pmod2$. Thus, either the elements on the diagonal of $\beta$ are both congruent to $1$ or both congruent to $3$ modulo $4$.
In the first case I set $\varphi (\beta) = \beta$ and call $\beta$ even; in the second case $\varphi(\beta) = - \beta$ and I call $\beta$ odd. Parity behaves as usual and thus $\varphi$ is a homomorphism. Its image is $A$ since $A$ is the subgroup of the even matrices of $B$ and its kernel is $\{\pm I_2\}$. Hence $\varphi$ is the desired homomorphism.<|endoftext|> TITLE: Is an integral curve an embedded 1-dimensional submanifold of the given manifold? QUESTION [5 upvotes]: I can easily see a proof that shows it's going to be an immersed submanifold. (I am excluding the case where the vector field at the point is $0$.) I am not able to see whether it's an embedded submanifold or not. Thank you. REPLY [3 votes]: In general, no, absolutely not! Consider $M = \Bbb R^2/\Bbb Z^2$ and $X= \partial/\partial x + a\partial/\partial y$, where $a$ is irrational. Then an integral curve of this is a line with irrational slope, which is dense in $M$. About as far from embedded as you can get! You should expect that the "generic" vector field has non-embedded integral curves.<|endoftext|> TITLE: Squaring both sides when units are different? QUESTION [10 upvotes]: Given $(9\ \text{inches})^{1/2} = (0.25\ \text{yards})^{1/2}$, which of the following statements is true? $(3\ \text{inches}) = (0.5\ \text{yards})$; $(9\ \text{inches}) = (1.5\ \text{yards})$; $(9\ \text{inches}) = (0.25\ \text{yards})$; $(81\ \text{inches}) = (0.0625\ \text{yards})$. My question is: can I argue here that $x^{1/2}=y^{1/2}$, after squaring both sides, $\implies x=y$, even though the given units are different? Can you explain it, please? REPLY [2 votes]: Answer 3. is correct. If you say for linear measure $$ 1 ft = 12 \, in, $$ then for area measure $$ 1 ft^2 = 144 \, in^2, $$ and $$ 1 ft^3 = 1728 \, in^3. $$ For derived units, speed: $$ 1 ft/ \min = 12/60 = 0.2 \, in/\sec; $$ rate of discharge or volume rate: $$ 3600 \, m^3 / hour = 1 \, m^3/\sec, $$ and so on.<|endoftext|> TITLE: Function that is the sum of all of its derivatives QUESTION [51 upvotes]: I have just started learning about differential equations, and as a result I started to think about this question but couldn't get anywhere. So I googled and wasn't able to find any particularly helpful results. I am more interested in the reason or method rather than the actual answer. Also I do not know if there even is a solution to this, but if there isn't I am just as interested to hear why not.
Is there a solution to the differential equation: $$f(x)=\sum_{n=1}^\infty f^{(n)}(x)$$ REPLY [4 votes]: The question means: $$y-y'-y''-y'''-\cdots=0$$ The differential equation is homogeneous and the characteristic equation is $$1-r-r^2-r^3-\cdots=0$$ $$r(1+r+r^2+r^3+\cdots)=1$$ By using the geometric series ($|r|<1$), $$\frac{r}{1-r}=1$$ $$r=1-r$$ $$r=\frac{1}{2}$$ so the function is $$y=Ce^{\frac{x}{2}}$$<|endoftext|> TITLE: solution of differential equation $\left(\frac{dy}{dx}\right)^2-x\frac{dy}{dx}+y=0$ QUESTION [5 upvotes]: The solution of differential equation $\displaystyle \left(\frac{dy}{dx}\right)^2-x\frac{dy}{dx}+y=0$ $\bf{My\; Try::}$ Let $\displaystyle \frac{dy}{dx} = t\;,$ then the differential equation converts into $t^2-xt+y=0$. So its solution is given by $\displaystyle t=\frac{x\pm \sqrt{x^2-4y}}{2}$ So we get $$\frac{dy}{dx} = \frac{x\pm \sqrt{x^2-4y}}{2}$$ Now how can I solve it from here? Help me, thanks. REPLY [3 votes]: $$y'(x)^2-xy'(x)+y(x)=0\Longleftrightarrow$$ $$y(x)=-y'(x)^2+xy'(x)\Longleftrightarrow$$ $$y'(x)=xy''(x)+y'(x)-2y'(x)y''(x)\Longleftrightarrow$$ $$y'(x)=y'(x)+y''(x)\left(x-2y'(x)\right)\Longleftrightarrow$$ $$y''(x)\left(x-2y'(x)\right)=0$$ Now, solve them separately: For the first one: $$y''(x)=0\Longleftrightarrow$$ $$\int y''(x)\space\text{d}x=\int0\space\text{d}x\Longleftrightarrow$$ $$y'(x)=\text{C}_1\Longleftrightarrow$$ Substitute $y'(x)=\text{C}_1$ into $y(x)=xy'(x)-y'(x)^2$: $$y(x)=\text{C}_1x-\text{C}_1^2$$ For the second one: $$x-2y'(x)=0\Longleftrightarrow$$ $$y'(x)=\frac{x}{2}\Longleftrightarrow$$ Substitute into $y(x)=xy'(x)-y'(x)^2$: $$y(x)=\frac{x^2}{4}$$ So, finally we found that: $$y(x)=\frac{x^2}{4}\quad\text{or}\quad y(x)=\text{C}_1x-\text{C}_1^2$$<|endoftext|> TITLE: Is there any result on the "counting" of minimal atlas? QUESTION [7 upvotes]: Take a differentiable manifold $M$. Define $\eta(M)$ as $\min\{\#\mathfrak{A} \mid \mathfrak{A} \text{ is an atlas for $M$}\}$. For example, if $M=S^n$, we have that $\eta(M)=2$, since $S^n$ is compact and $2$ charts (the stereographic projections) are enough to cover $M$. Is $\eta(M)$ known for a wide range of manifolds? Is it somehow manageable to compute it? Does there exist any technique? REPLY [2 votes]: There is in fact a number concerned with the minimal number of open subsets needed to cover $M$ that are contractible in $M$: the Lusternik-Schnirelmann category $cat(M)$. This gives you (at least with my definition of a chart) $$ cat(M)\leq \eta(M). $$ There are a lot of interesting techniques presented in the literature for this, which might be interesting for you. Note that historically this category was originally defined for closed subsets, so the literature can be inconsistent. Note that $cat(M)$ is also related to other fields such as Morse theory.<|endoftext|> TITLE: Does $\operatorname{Spec}$ preserve pushouts? QUESTION [6 upvotes]: The spectrum-functor $$ \operatorname{Spec}: \mathbf{cRng}^{op}\to \mathbf{Set} $$ sends a (commutative unital) ring $R$ to the set $\operatorname{Spec}(R)=\{\mathfrak{p}\mid \mathfrak{p} \mbox{ is a prime ideal of } R\}$ and a morphism $f:S\to R$ to the map $\operatorname{Spec}(R)\to \operatorname{Spec}(S)$ with $\mathfrak{p}\mapsto f^{-1}(\mathfrak{p})$.
Does this functor send pullback squares \begin{eqnarray} S\times_R T&\to& T\\ \downarrow && \downarrow\\ S&\to& R \end{eqnarray} of (commutative unital) rings to pushout squares \begin{eqnarray} \operatorname{Spec}(R)&\to& \operatorname{Spec}(T)\\ \downarrow && \downarrow\\ \operatorname{Spec}(S)&\to& \operatorname{Spec}(S\times_R T) \end{eqnarray} of sets? In other words, does the functor $\operatorname{Spec}$ from above preserve pushouts? REPLY [3 votes]: Pullbacks in $\mathbf{CRing}$ do not necessarily go to pushouts in $\mathbf{Sch}$ or $\mathbf{Set}$. Consider the construction of $\mathbb{P}^1_k$: in $\mathbf{Sch}$ (resp. $\mathbf{Set}$), we have the following pushout square, $$\require{AMScd} \begin{CD} \mathbb{A}^1_k \setminus \{ 0 \} @>>> \mathbb{A}^1_k \\ @VVV @VVV \\ \mathbb{A}^1_k @>>> \mathbb{P}^1_k \end{CD}$$ but if pullbacks in $\mathbf{CRing}$ went to pushouts in $\mathbf{Sch}$ (resp. $\mathbf{Set}$), that would imply that $\mathbb{P}^1_k \cong \operatorname{Spec} k$, which is nonsense.<|endoftext|> TITLE: Equality in Conditional Jensen's Inequality QUESTION [6 upvotes]: Conditional Jensen's Inequality says that for a convex function $\varphi$, a random variable $X$, and a sub-sigma-field $\mathcal{F}$, $E[\varphi(X)\mid \mathcal{F}] \geq \varphi(E[X\mid \mathcal{F}])$. In ordinary Jensen's Inequality, $E[\varphi(X)]\geq \varphi(E[X])$, and we have equality if and only if $X$ is degenerate (i.e., almost surely a constant) or $\varphi$ is linear. I'm wondering if an analogous result holds for the conditional version. Is it the case that $E[\varphi(X)\mid\mathcal{F}]=\varphi(E[X\mid\mathcal{F}])$ if and only if $X \in \mathcal{F}$ or $\varphi$ is linear? (Certainly the "if" is true, but I'm wondering about the "only if.") REPLY [6 votes]: Method 1: Abbreviate $Y:=E[X|\mathcal F]$. Let $g(x)$ denote the right-hand derivative of $\varphi$ at $x$. Because $\varphi$ is strictly convex, we have $\varphi(x)>g(m)(x-m)+\varphi(m)$ for all $x\not=m$. Thus, $$ \varphi(X)\ge g(Y)(X-Y)+\varphi(Y) $$ with strict inequality off $\{X=Y\}$ (almost surely). Taking conditional expectations in the inequality above we obtain $E[\varphi(X)\mid\mathcal F]\ge \varphi(Y)$, and $$ \{E[\varphi(X)|\mathcal F]=\varphi(Y)\}\subset\{P[X\not=Y|\mathcal F]=0\} $$ almost surely. Method 2: Let $\mu(\omega,dx)$ be a regular conditional distribution of $X$ given $\mathcal F$. (Such a distribution exists because $X$ is real-valued.) That is, for each Borel set $B\subset\Bbb R$, $\omega\mapsto \mu(\omega,B)$ is $\mathcal F$-measurable, for each $\omega\in\Omega$, $B\mapsto \mu(\omega,B)$ is a probability measure on $\Bbb R$, and $\int_{\Bbb R} f(x)\,\mu(\omega,dx)$ is a version of $E[f(X)|\mathcal F](\omega)$ for suitably integrable $f$. Now apply Jensen's inequality (for the strictly convex function $\varphi$) to the probability measure $\mu(\omega,\cdot)$ for each fixed $\omega$. The conclusion is that for $P$-a.e. $\omega\in\Omega$, the equality of $E[\varphi(X)|\mathcal F](\omega)$ and $\varphi(E[X|\mathcal F](\omega))$ forces $\mu(\omega,\cdot)$ to be a unit point mass at $E[X|\mathcal F](\omega)$.<|endoftext|> TITLE: Variation of the Kempner series – convergence of series $\sum\frac{1}{n}$ where $9$ is not a digit of $1/n$. QUESTION [18 upvotes]: It is easy to argue that the Kempner series converges: $$ \sum\limits_{\substack{n \text{ s.t.
9 is}\\\text{ not a digit} \\\text{ of } n}} \frac{1}{n} < \infty$$ Let $E \subset \Bbb N_{>0}$ be the subset of the positive integers such that $9$ is not a digit of the decimal expansion of $1/n$ (the decimal expansion is not allowed to have a trailing infinite sequence of "$9$"s. For instance $0.24999...$ is not allowed). Here are the first numbers that don't belong to $E$: $11,13,17,19,21,23,29,31,34,38,41,…$ (not known by the OEIS, by the way). My question is: does the series $$ \sum\limits_{n \in E} \frac{1}{n} \tag 1$$ converge? My attempt: Let $1/n = 0,a_1 a_2 \dots a_k \overline{b_1 b_2 \dots b_m}$ with $n \in E$. Since $1/n$ has no digit "9", we have at most $9^{k+m}$ possibilities for the $a_i$'s and $b_j$'s. Moreover, $1/n ≥ 0,00...0\overline{00...01}≥1/10^{k+m}$. But then I can only bound my series $(1)$ from below, by some real number. So, this is not a clue for the divergence of the series. Apparently, the numbers of the form $n=10k+1$ don't belong to $E$. Maybe we can find sufficiently many numbers that have $9$ in the decimal representation of their reciprocals, so that $(1)$ could converge... Any comment will be appreciated! REPLY [8 votes]: Consider the numbers whose reciprocals have exactly $n$ initial zeros after the decimal point. These are the numbers from $10^n+1$ to $10^{n+1}$, of which there are $9\cdot10^n$. Consider the next $n$ decimal digits of the reciprocals. By a count as for the Kempner series, there are $8\cdot9^{n-1}$ patterns of these digits that don't contain a $9$. Since smaller reciprocals are more densely spaced, the pattern exhibited by most reciprocals is the least possible pattern, a one followed by $n-1$ zeros. It is exhibited by at most $100$ reciprocals, since $$ \frac1{10^{n+1}}=\frac{10^{n-1}}{10^{2n}} $$ and $$ \frac1{10^{n+1}-10^2}=\frac1{10^{n+1}}\cdot\frac1{1-10^{1-n}}\gt\frac1{10^{n+1}}\left(1+10^{1-n}\right)=\frac{10^{n-1}+1}{10^{2n}}\;. $$ Thus, each of the $8\cdot9^{n-1}$ admissible patterns is exhibited by at most $100$ reciprocals and their sum is at most $100\cdot8\cdot9^{n-1}\cdot10^{-n}$. Summing over $n$ yields the bound $$ \sum_{n=0}^\infty100\cdot8\cdot9^{n-1}\cdot10^{-n}=\frac{800}9\sum_{n=0}^\infty\left(\frac9{10}\right)^n=\frac{800}9\cdot\frac1{1-\frac9{10}}=\frac{8000}9\approx889 $$ for the series, which therefore converges.<|endoftext|> TITLE: How do you find the maximum value of $|z^2 - 2iz+1|$ given that $|z|=3$, using triangle inequality? QUESTION [8 upvotes]: Problem: How do you find the maximum value of $|z^2 - 2iz+1|$ given that $|z|=3$, using the triangle inequality? My attempt: $$|z^2 - 2iz+1|\le|z|^2+2|i||z|+1$$ $$\implies |z^2 - 2iz+1|\le16$$ However, this upper bound is not attained, so it does not give the maximum. I also tried writing it as: $$|(z-i)^2 + 2| \le |(z-i)|^2 + 2$$ This last inequality does suggest that the maximum value occurs at $-3i$; however, it provides an even larger upper bound of $18$. Wolfram Alpha gives the answer as $14$, and it occurs at $-3i$. I know that equality only holds when all the complex numbers are collinear, but that has not helped me with this question.
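(A brute-force scan of the circle, my own quick numpy sketch for corroboration rather than proof, also lands on $14$ at $z=-3i$:)

```python
import numpy as np

# Evaluate |z^2 - 2iz + 1| on a dense grid of the circle |z| = 3.
theta = np.linspace(0.0, 2.0 * np.pi, 1_000_000)
z = 3.0 * np.exp(1j * theta)
vals = np.abs(z**2 - 2j * z + 1)
i = int(np.argmax(vals))
print(vals[i], z[i])  # ~14.0 near z = -3i
```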
REPLY [4 votes]: After playing with the triangle inequality for a while, we may realize that we are not going to arrive at the maximum without absurd ingenuity, so we consider other methods: Calculus I: stationary points: Substitute $z = 3 \mathrm{e}^{\mathrm{i}\theta}$, find real and imaginary parts and construct the modulus as the square root of the sum of the squares of those parts, giving (simplified) $$\sqrt{2} \left( \sqrt{59 + 9 \cos(2 \theta) - 48 \sin(\theta)} \right) \text{.}$$ Differentiate this with respect to $\theta$, giving $$ -\frac{3\sqrt{2} \left( 8 \cos(\theta) + 3 \sin(2\theta) \right)}{\sqrt{59 + 9 \cos(2 \theta) - 48 \sin(\theta)}} \text{.}$$ Set this equal to zero and solve for $\theta$, giving $\pm \pi/2$ as locations of stationary points. Evaluating the substituted polynomial at these two angles gives $-2$ and $-14$, so the maximum modulus of the polynomial on the circle of radius $3$ is $14$. Lagrange Multipliers: Construct $|z^2 - 2\mathrm{i} z + 1| - \lambda(|z| - 3)$ then take derivatives with respect to $z$ and $\lambda$, set those simultaneously equal to zero and solve. You get that $z = \pm 3\mathrm{i}$. Plugging in again, we find the maximum modulus is $14$. Geometry: This polynomial is $(z-(\mathrm{i}+\mathrm{i}\sqrt{2}))(z-(\mathrm{i}-\mathrm{i}\sqrt{2}))$. Taking the modulus, we realize the level sets are collections of points whose product of distances from two given points (the roots just found) is fixed. These level sets are Cassini ovals. By symmetry, then, the maximum will be on the imaginary axis and it is no great challenge to realize it will be the one of $3\mathrm{i}$ and $-3\mathrm{i}$ that is farthest from the midpoint of the roots (which is $\mathrm{i}$). Plugging $-3i$ back into the polynomial, we get that the maximum modulus is $14$, again.<|endoftext|> TITLE: Expected value problem with cars on a highway QUESTION [13 upvotes]: There is a very long, straight highway with $N$ cars placed somewhere along it, randomly. The highway is only one lane, so the cars can’t pass each other. Each car is going in the same direction, and each driver has a distinct positive speed at which she prefers to travel. Each preferred speed is chosen at random. Each driver travels at her preferred speed unless she gets stuck behind a slower car, in which case she remains stuck behind the slower car. On average, how many groups of cars will eventually form? (A group is one or more cars traveling at the same speed.) A friend showed me this question and we didn't know how to go about it. I've taken a probability course so my mind immediately went to counting methods or expectation values, but I don't know if this is the wrong intuition. Anybody know how to solve this? REPLY [6 votes]: Suppose that of the $N$ cars, the $i$th is the slowest, so that the last group of cars consists of all but the first $i - 1$. In this case, the expected number $E_i(N)$ of groups among the $N$ cars is $1$ (this last group) plus the expected number of groups in the first $i - 1$ cars, that is, $$E_i(N) = 1 + E(i - 1) .$$ Each of the $N$ cars has equal probability $\frac{1}{N}$ of being the slowest, so the expected number $E(N)$ of groups among the $N$ cars is \begin{align*} E(N) &= \sum_{i = 1}^N P(\textrm{the $i$th car is the slowest}) \cdot E_i(N) \\ &= \sum_{i = 1}^N \frac{1}{N} [1 + E(i - 1)] \\ &= 1 + \frac{1}{N} \sum_{i = 1}^{N - 1} E(i) . \end{align*} (In the last equality we've reindexed and used the trivial observation that $E(0) = 0$.)
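This recurrence is easy to tabulate exactly (a small Python sketch of my own, using rational arithmetic, not part of the original solution):

```python
from fractions import Fraction

# E(N) = 1 + (1/N) * sum_{i=1}^{N-1} E(i), with E(0) = 0.
E = [Fraction(0)]
for N in range(1, 8):
    E.append(1 + sum(E[1:N], Fraction(0)) / N)
    print(N, E[N])  # 1, 3/2, 11/6, 25/12, 137/60, 49/20, 363/140
```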
Working out the first few values of $E(N)$ suggests that $$\color{#bf0000}{\boxed{E(N) = H_N := 1 + \frac{1}{2} + \cdots + \frac{1}{N}}} ,$$ and it's straightforward to prove this using induction and the formula for $E(N)$ derived above. Asymptotically, we have $$E(N) = H_N = \log N + \gamma + O\left(\tfrac{1}{N}\right),$$ where $\gamma \approx 0.57721$ is the Euler-Mascheroni constant. The numbers $H_N$ are, by the way, the harmonic numbers, and they show up in the solutions of some other famous puzzles, like the Book-Stacking Problem and the Coupon Collector's Problem.<|endoftext|> TITLE: Minimum elements present in {0, 1, 2, ..., 225} to guarantee triple which sums to 225 QUESTION [5 upvotes]: Suppose I have the set: $$A=\{0, 1, 2, ... 224, 225\}$$ I want to find a triple that sums to $225$ (where a triple is a set of 3 unique values from the set). No Repetition Version: There are many such triples including: $$(0, 1, 224), (0, 2, 223), ... , (74, 75, 76)$$ Note: $(0, 0, 225)$ is not a valid triple as $0$ is repeated. What is the minimum number of values which must be present in the set $A$ such that we can guarantee that a triple must exist? Or put another way, how many elements can I allow an opponent to strategically remove such that I can still guarantee that a triple must exist (without seeing which values were removed)? Background & Version with Repetition: This question arose from an assignment to write an $O(n)$ algorithm which, given a list of perhaps $100,000$ non-negative integers, will determine whether there are $3$ integers which sum to $225$ (note: I have long since finished the assignment, but am still wondering about a better solution). Here, repetition is allowed so $(0, 0, 225)$ or $(75, 75, 75)$ are valid triples. My approach was to create a 'counter' array $C$ of size $226$ and initialize it with $0$'s. While iterating through the list, if the value $i$ is encountered where $0\le i \le 225$, then increment $C[i]$. The counter array therefore keeps track of the number of times each relevant value occurs. We can stop counting after $1$ occurrence for any given value $j \gt 225/2$, and after $2$ occurrences for any other value, except $75$ which must be counted for up to $3$ occurrences since the triple $(75, 75, 75)$ must be checked for. After iterating through the list the counter array can be checked for the presence of valid triples using nested loops and so on. But an optimization can be made. If, for example, every value from $0...225$ has been found at least once, then we can say that a triple exists without even checking. The question is then similar to the simplified case above. After how many 'hits', where some $C[i]$ is incremented, can we simply return true and know that a triple must exist? I've been trying to find a solution to the simpler version of this problem for quite a while. I asked my algorithms professor and a TA. I also wrote a small program to try and brute force a ceiling on this but quickly ran into combinatorial problems. REPLY [3 votes]: Logophobic's answer gives the correct answer of $152$, but is missing a complete proof of optimality. Here is a complete proof. We may partition $A$ into $A = A_1 \cup A_2$, where $A_1 = \{0,1,\ldots,74\}$ and $A_2 = \{75,76,\ldots,225\}$. Suppose $B \subseteq A$ contains no triple summing to $225$. Let $B_1 = B \cap A_1$ and $B_2 = B \cap A_2$. Let $k$ be the size of $B_1$, so that the elements of $B_1$ are $0\leq x_1 < \cdots < x_k \leq 74$.
The $k=0,1$ cases were properly analyzed by logophobic, but I will repeat the analysis here for completeness. If $k=0$, then $B = B_2 \subseteq A_2$, implying that $|B| \leq |A_2| = 151$. Indeed, the bound is tight, since no 3 distinct elements of $A_2$ sum to $225$. If $k=1$, then we demand $B_2 \neq A_2$, since the distinct integers $y_1 = 75$ and $y_2 = 150-x_1$ have the property that $y_1, y_2 \in A_2$ and $x_1+y_1+y_2=225$. And so $|B| = |B_1|+|B_2| \leq 1 + 150 = 151$. If $k=2$, then consider the distinct elements $y_1=75$ and $y_2=225-x_1-x_2$. Clearly, $y_2\notin B_2$, since $x_1+x_2+y_2=225$. If $y_1\notin B_2$, we have that $|B_2| \leq |A_2|-2 = 149$. If $y_1\in B_2$, then the distinct elements $y_3=150-x_1$ and $y_4=150-x_2$ both cannot be in $B_2$, since $x_1+y_1+y_3=x_2+y_1+y_4=225$. So in this case we also have $|B_2| \leq |A_2|-2 = 149$. Thus, we have $|B| = |B_1|+|B_2| \leq 2 + 149 = 151$. If $k\geq3$, then note that the $(2k-3)$ distinct values given by $(225-x_i-x_j)$ where $1\leq i < j\leq k$ with $|\{i,j\} \cap \{1,k\}|\geq1$ all lie in the set $A_2$. This implies that $|B_2|$ is bounded by $|A_2| - (2k-3)$, and so we have $|B| = |B_1| + |B_2| \leq k + (151 - (2k-3)) = 154-k\leq 151$. This completes the proof.<|endoftext|> TITLE: Heine definition of limit of a function at infinity using sequences QUESTION [7 upvotes]: I couldn't find the answer either on Google or on this website, so I decided to ask. The Heine definition of limit, from Wikipedia: $\lim_{x\to a}f(x)=L$ if and only if for all sequences $x_n$ (with $x_n$ not equal to $a$ for all $n$) converging to $a$ the sequence $f(x_n)$ converges to $L$. Does it work with the limit of a function at infinity, using a sequence that tends to infinity (for example $x_n = n$)? Thank you. REPLY [7 votes]: Yes. Suppose $\lim_{x\to+\infty}f(x)=L$. That is, for every $\epsilon>0$ there exists $x_0$ such that $x>x_0$ implies $|f(x)-L|<\epsilon$. Let $x_n\to+\infty$, that is, for every $M>0$ there exists $n_0\in\Bbb N$ such that $n>n_0$ implies $x_n>M$. I claim that $f(x_n)\to L$. Indeed, let $\epsilon>0$ be given. Then pick $x_0$ such that $x>x_0$ implies $|f(x)-L|<\epsilon$. Then pick $n_0\in\Bbb N$ such that $n>n_0$ implies $x_n>x_0$. It follows that $|f(x_n)-L|<\epsilon$ for $n>n_0$. Next suppose $\lim_{n\to\infty}f(x_n)=L$ for all sequences with $x_n\to\infty$. I claim that $\lim_{x\to+\infty}f(x)=L$. Indeed, let $\epsilon>0$ be given. Assume there does not exist $M$ such that $x>M$ implies $|f(x)-L|<\epsilon$. Then we can define a sequence $x_n$ as follows: For given $n\in \Bbb N$ pick $x_n$ arbitrary with $x_n>n$ and $|f(x_n)-L|\ge\epsilon$ (such $x_n$ exists by our assumption applied to $M=n$). Then clearly $x_n\to +\infty$ and hence $|f(x_n)-L|<\epsilon$ for sufficiently large $n$. As this contradicts $|f(x_n)-L|\ge\epsilon$, the initial assumption must be wrong. That is: There does exist some $M$ such that for all $x>M$ we have $|f(x)-L|<\epsilon$. Alternatively, observe that $x\to+\infty$ is equivalent to $\frac1x\to0^+$ and use that to translate the desired statement about $+\infty$ to a statement about finite $a$.<|endoftext|> TITLE: Pointwise convergence to zero, with integrals converging to a nonzero value QUESTION [6 upvotes]: For $n\in{\mathbb{N}}$ let $$f_n(x)=nx(1-x^2)^n\qquad(0\le x\le 1).$$ Show that $\{f_n\}_{n=1}^\infty$ converges pointwise to $0$ on $[0,1]$. Show that $\{\int_0^1f_n\}_{n=1}^\infty$ converges to $\frac12$. I've already shown both of these statements to be true.
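(A quick numerical illustration of both facts, my own Python sketch rather than part of the assignment:)

```python
import numpy as np

# f_n(x) = n*x*(1-x^2)^n: pointwise values go to 0, integrals go to 1/2.
def f(n, x):
    return n * x * (1.0 - x**2) ** n

xs = np.linspace(0.0, 1.0, 2_000_001)
for n in [10, 100, 1000]:
    # f(n, 0.3) shrinks to 0; the grid mean estimates the integral on [0,1].
    print(n, f(n, 0.3), np.mean(f(n, xs)))
```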
What I don't understand is this: how can $f_n$ converge pointwise to $0$, yet the sequence $\{\int_0^1f_n\}_{n=1}^\infty$ converges to $\frac12$? Isn't that almost like saying $f(x)=0\qquad(a\le x\le b)$, but $\int_a^bf(x)\,dx=\frac12$? Clearly that would be false. I know this has something to do with the fact that this is a sequence of functions, but it still baffles my mind. Thanks in advance. REPLY [5 votes]: Perhaps easier to visualize: Let $f_n$ be a triangular spike over $[1/(2n),1/n]$ of height $2n.$ Fixing any $x\in (0,1],$ these triangles eventually all lie to the left of $x.$ Hence $f_n(x)$ is eventually $0.$ Also, $f_n(0) = 0$ for every $n.$ Thus $f_n \to 0$ pointwise on $[0,1].$ But $\int_0^1f_n = 1/2$ for every $n.$<|endoftext|> TITLE: Quadratic integer ring with universal side divisor? QUESTION [8 upvotes]: It seems that in every paper mentioning universal side divisors, they are defined very succinctly and with a bunch of symbols, so that I remain completely confused as to what they are and how to find a concrete specific example of such a divisor. An answer to another question talks about polynomial rings, which I don't really understand. Can someone give me an example of a quadratic integer ring (like $\mathbb{Z}[i]$ or $\mathbb{Z}[\sqrt{2}]$) that has a universal side divisor and how to find that specific divisor? REPLY [5 votes]: What has really helped me understand universal side divisors are these two examples in $\mathbb{Z}$: 2 and 3. The units of $\mathbb{Z}$ are $-1$ and 1. Obviously all even numbers are multiples of 2. As for odd numbers, you just add $-1$ or 1 (either one) and bam, you got an even number. To get a multiple of 3 from a number that's not already a multiple of 3 you add $-1$ or 1 (only one or the other will work). In $\mathbb{Z}[i]$ we have two more units to keep in mind, $i$ and $-i$. We also have to look at norms. Every number with an even norm is divisible by $1 + i$ (which itself has 2 for a norm). If a number has an odd norm, all you have to do is add $i$, $-1$, $-i$ or 1 (any of these) to get a number with an even norm, which is therefore divisible by $1 + i$.<|endoftext|> TITLE: $n^a$ integral for all integer $n$ implies $a$ integral QUESTION [9 upvotes]: Let $a>0$ be a real number, such that for all integers $n\geq 1$: $n^a \in \mathbb N$ Show that $a$ must be an integer. It's not difficult to show this when $a$ is a rational number: $2^\frac{p}{q}$ is irrational when the fraction is in lowest terms and $q \neq 1$. When $a$ is irrational, then for all $n\geq 1$, there exists $m_n\in \mathbb N^*$, such that: $$a = \frac{\log m_n}{\log n}; \quad \text{$m_n$ is not a power of $n$}$$ I think considering $n=2,3$ is enough to show a contradiction, but I can't seem to find it. This is what I get: $$a = \frac{\log p}{\log 2} = \frac{\log q}{\log 3}$$ $$p = 2^{\log q/\log 3} $$ I think the RHS is irrational when $q$ is not a power of $3$, but I can't prove it. The closest thing I have to a solution is this answer on a similar question. But it uses an unproven conjecture, and I was hoping for a more elementary proof. REPLY [7 votes]: The idea is to decrease the exponent $a$ below $0$ by taking differences. For every function $f$, let $(\Delta f)(x)=f(x+1)-f(x)$ be the usual forward difference. Lemma. If $f(x)$ is a $k$ times differentiable function on $[n,n+k]$, then there is some $\xi\in(n,n+k)$ such that $$ f^{(k)}(\xi) = (\Delta^kf)(n) = \sum_{\ell=0}^k (-1)^{k-\ell}\binom{k}{\ell}f(n+\ell). $$ Proof. Induction on $k$.
For $k=1$ this is exactly Lagrange's mean value theorem. If the Lemma holds true for $k-1$ then apply it to the function $g(x)=(\Delta f)(x)$. With some $\zeta\in(n,n+k-1)$, and then $\xi\in(\zeta,\zeta+1)\subset(n,n+k)$, we have $$ (\Delta^k f) (n) = (\Delta^{k-1} g)(n) = g^{(k-1)}(\zeta) = (\Delta f^{(k-1)})(\zeta) = f^{(k)}(\xi), $$ and the lemma has been proved. Now suppose that $a$ is not an integer; let $k$ be a positive integer with $k-1<a<k$, and apply the Lemma to $f(x)=x^a$. On one hand, $(\Delta^kf)(n)$ is an integer for every $n$, being an integer combination of the integers $(n+\ell)^a$. On the other hand, it equals $f^{(k)}(\xi)=a(a-1)\cdots(a-k+1)\,\xi^{a-k}$ for some $\xi\in(n,n+k)$; the coefficient $a(a-1)\cdots(a-k+1)$ is nonzero because $a$ is not an integer, so $(\Delta^kf)(n)\neq0$, while $a-k<0$ forces $\xi^{a-k}\to0$ as $n\to\infty$. Thus for large $n$ the quantity $(\Delta^kf)(n)$ is a nonzero integer of absolute value less than $1$, a contradiction. Hence $a$ must be an integer.<|endoftext|> TITLE: Indicator function and liminf and limsup QUESTION [6 upvotes]: Can anyone please explain why the following is true? And what is the intuition behind it? $$\chi_A(x) = \begin{cases}1 & x \in A\\ 0 & x \notin A.\end{cases}$$ Then we have $$\chi_{\liminf A_n}(x) = \liminf \chi_{A_n}(x); \quad \chi_{\limsup A_n}(x) = \limsup \chi_{A_n}(x)$$ for all $x\in X$. Thank you. REPLY [5 votes]: Fix a sequence $(A_n \mid n \in \mathbb N)$ of sets. By definition $\liminf A_n = \bigcup_{m \in \mathbb N} \bigcap_{l \ge m} A_l$. In other words: $x \in \liminf A_n$ if and only if there is an $m \in \mathbb N$ such that $x \in A_l$ for all $l \ge m$. Thus $\chi_{\liminf_{n \in \mathbb N} A_n}(x) = 1$ if and only if there is some $m \in \mathbb N$ such that for all $l \ge m$ we have that $x \in A_l$. Now consider a sequence $(x_n \mid n \in \mathbb N)$ of real numbers $x_n$. By definition $\liminf_{n \in \mathbb N} x_n = \sup_{m \in \mathbb N} \inf_{l \ge m} x_l$. We are interested in the case where $x_n = \chi_{A_n}(x)$ for some fixed $x$. Then $\liminf_{n \in \mathbb N} \chi_{A_n}(x) = 1$ if and only if there is some $m \in \mathbb N$ such that $\inf_{l \ge m} \chi_{A_l}(x) = 1$. This in turn holds if and only if there is some $m \in \mathbb N$ such that for all $l \ge m$ we have that $x \in A_l$. Comparing these two yields $\chi_{\liminf_{n \in \mathbb N} A_n} (x) = 1$ if and only if $\liminf_{n \in \mathbb N} \chi_{A_n}(x) = 1$. This also yields $\chi_{\liminf_{n \in \mathbb N} A_n} (x) = 0$ if and only if $\liminf_{n \in \mathbb N} \chi_{A_n}(x) = 0$ and consequently $\chi_{\liminf_{n \in \mathbb N} A_n} (x) = \liminf_{n \in \mathbb N} \chi_{A_n}(x)$. The argument for $\limsup$ is analogous. Regarding the intuition: $\liminf_{n \in \mathbb N} A_n$ contains precisely those elements of $\bigcup_{n \in \mathbb N} A_n$ which are members of all but finitely many $A_n$'s. For example $x$ may be in some but not all of the sets $A_1, \ldots, A_{100}$. But for $l > 100$ we always have that $x \in A_l$. Such an $x$ appears in all the $A_n$'s except for finitely many (namely some of the sets $A_1, \ldots, A_{100}$) and thus $x \in \liminf_{n \in \mathbb N} A_n$. In contrast, $\limsup_{n \in \mathbb N} A_n$ contains precisely those elements of $\bigcup_{n \in \mathbb N} A_n$ which are members of infinitely many of the $A_n$'s. It may be the case that such a member is also missing from infinitely many of the $A_n$'s, and therefore $\liminf_{n \in \mathbb N} A_n$ may be different from $\limsup_{n \in \mathbb N} A_n$. But we always have that $\liminf_{n \in \mathbb N} A_n \subseteq \limsup_{n \in \mathbb N} A_n$.<|endoftext|> TITLE: Action via automorphism QUESTION [10 upvotes]: I want to ask what it means to say a group $A$ acts on $N$ via automorphisms. It is a notion used in M. Isaacs' book that I am not familiar with. I tried to find how it is defined, but a scanned e-book is so hard to go through. Suppose $*: A\times N \to N$ is a group action of $A$ on $N$; thus we have a homomorphism $\phi: A \to S_N$. Now what does action via automorphisms mean?
Thanks! REPLY [5 votes]: Isaacs defines this notion on page 68 of his Finite Group Theory here. To keep this answer available, let me restate it: Given groups $\mathcal A = (A,\cdot_A, 1_A), \mathcal B = (B, \cdot_B, 1_B)$ we say that $\mathcal A$ acts on $\mathcal B$ via automorphisms iff there is a (left-)group action $\ast \colon \mathcal A \times \mathcal B \to \mathcal B$ such that for all $a \in A$ and all $b,c \in B$ we have $a \ast (b \cdot_B c) = (a \ast b) \cdot_B (a \ast c)$.<|endoftext|> TITLE: Number of solutions in a field of order $32$ QUESTION [5 upvotes]: Let $F$ be a field of order $32$. Then find the number of non-zero solutions $(a,b)\in F\times F$ of the equation $x^2+xy+y^2=0$. As $|F|=32$, $(F\setminus\{0\},\cdot)$ forms a group of order $31$, which is prime. So $F\setminus \{0\}\simeq\mathbb Z_{31}$. Then how do I proceed? REPLY [3 votes]: If $x^2 + xy + y^2=0$, then $x^3 - y^3 = (x-y)(x^2+xy+y^2)=0$. But $3$ does not divide $31$, so $a\mapsto a^3$ is injective, and therefore $x=y$. Substituting back, and using that $F$ has characteristic $2$, gives $0=x^2+x\cdot x+x^2=3x^2=x^2$, so $x=y=0$; hence there are no non-zero solutions.<|endoftext|> TITLE: Is every monomorphism an injection? QUESTION [18 upvotes]: We say a morphism is a monomorphism if $fg=fh$ implies $g=h$. So if $f$ is a monomorphism, is it necessarily an injection? i.e., $f(x)=f(y)$ implies $x=y$. My approach is to consider a specific morphism $g_x$ which maps every element to $x$; thus $f(x)=f(y) \Rightarrow fg_x=fg_y \Rightarrow g_x=g_y \Rightarrow x=y$. But not every category allows this $g_x$. For example, the category of linear spaces. Although in the category of linear spaces this question can be solved in another way, I want to know a general approach to the question. REPLY [4 votes]: Recall that a morphism $f : A \to B$ is a monomorphism if it's left-cancellative. This is equivalent to saying that the map $$(f \circ -) : \operatorname{Hom}(-,A) \to \operatorname{Hom}(-,B)$$ is injective. Of course, whether or not this map is injective has little to do with $f$'s concrete representation as a function of sets, and as D_S's answer shows, it's easy to engineer an example that demonstrates this. You could say that what prevents this pathology from arising in $\mathbf{Set}$ is that every point of a set $X$ corresponds to a map from the terminal object $1$ pointing at that point.<|endoftext|> TITLE: An irreducible polynomial in a subfield of $\mathbb{C}$ has no multiple roots in $\mathbb{C}$ QUESTION [5 upvotes]: Let $K\subset \mathbb{C}$ be a subfield and $f\in K[t]$ an irreducible polynomial. Show that $f$ has no multiple roots in $\mathbb{C}$. If I understand this question correctly, I must show that there is no $a \in \mathbb{C}$ such that $(t-a)^n|f$ in $F[t]$ with $n>1$. So suppose $(t-a)^2|f$ and $f=(t-a)^2h$. Then we have $f'=2(t-a)h+(t-a)^2h' \Rightarrow (t-a)|f'$ so $\gcd(f,f')$ is not constant. Therefore $f$ is divisible by the square of some non-constant polynomial in $F[t]$, which is a contradiction. Is my argument correct? Thank you. REPLY [2 votes]: We can assume $f$ has degree $\ge 2$. Suppose that $f$ and $f'$ are relatively prime as polynomials over $K$. Then (Bezout) there exist polynomials $u(t)$ and $v(t)$ with coefficients in $K$ such that $uf+vf'=1$. But then $f$ and $f'$ cannot have a common root in any field extension of $K$, for such a root would have to be a root of the polynomial $1$. In particular, they cannot have a common root in $\mathbb{C}$. This contradicts the assumption that $f$ has a double root in $\mathbb{C}$. So $f$ and $f'$ are not relatively prime as polynomials over $K$. Let $g=\gcd(f,f')$.
Then $g$ is a polynomial over $K$ of degree $\ge 1$ and less than the degree of $f$, and $g$ divides $f$. This contradicts the irreducibility of $f$ over $K$.<|endoftext|> TITLE: Probability theory with the hyperreals? QUESTION [5 upvotes]: Forgive the undoubted ignorance of this question. I am out of my element both in probability and in nonstandard analysis. A mathematically curious layperson friend recently had a conversation with me where he wanted a notion of "pick a random integer" (meaning uniformly random). I told him that there isn't room for this idea in the theory of probability as developed from the Kolmogorov axioms, because a countable probability space can't have atoms of equal probability, because $\sum 0 = 0$, but $\sum \alpha = \infty > 1$ for $\alpha >0$ by the archimedean principle. He asked, why couldn't the probability of picking an individual integer be $1/\infty$? "I know infinity isn't a number but in this context it feels like it could be," he said. If you want $\infty$ and $1/\infty$ to be numbers, well, this is what nonstandard analysis was made for, right? So, my question is this: Has anybody tried to formulate probability theory with probabilities in a field of hyperreal numbers rather than in $\mathbb{R}$? If so, what issues come up? Does the theory have significant differences with the standard theory or does this notion not really change anything substantive? Basically, what was the outcome? If this has been done, I would be most interested in a sort of executive summary, but I would also accept a reference. Thanks in advance. Addendum: Just found this related question at MO. REPLY [3 votes]: To answer your question about infinity specifically, one fixes an infinite hypernatural $H$, and works with the collection $$\{1,2,\ldots,n-1,n,n+1,\ldots,H-1, H\}.$$ As you suggested, one can assign probability $\frac{1}{H}$ to the occurrence of each individual number in this collection. This is the basic idea behind using infinitesimals in probability. Such sets are called hyperfinite. The basic idea of Brownian motion, namely moving an infinitesimal amount in a random direction infinitely many times, finds a literal interpretation in Robinson's framework. There are two approaches to this that are technically somewhat different: one developed by Peter Loeb and the other by Edward Nelson. Here are some recent sources: Nonstandard Analysis for the Working Mathematician, Loeb, Peter A., and Wolff, Manfred P. H. (eds.), Springer, 2015. See here; Herzberg, Frederik S., Stochastic calculus with infinitesimals, Lecture Notes in Mathematics, 2067, Springer, Heidelberg, 2013. See here.<|endoftext|> TITLE: SLLN when the expectation is infinite QUESTION [8 upvotes]: In a post I found, it says: Whenever ${\rm E}(X)$ exists (finite or infinite), the strong law of large numbers holds. That is, if $X_1,X_2,\ldots$ is a sequence of i.i.d. random variables with finite or infinite expectation, letting $S_n = X_1+\cdots + X_n$, it holds $n^{-1}S_n \to {\rm E}(X_1)$ almost surely. The infinite expectation case follows from the finite case by the monotone convergence theorem. Can someone give a reference/answer to this question? I want to prove that: If $E X^{+}_{k}=\infty$ and $E X^{-}_{k}<\infty$, then $n^{-1}S_{n}\to\infty$ a.s. REPLY [2 votes]: The answer is positive. Reference: first page of the paper "The Strong Law of Large Numbers When the Mean is Undefined", K. B. Erickson, Transactions of the American Mathematical Society, Volume 185,
November 1973. Also Theorem 2.4.5 of R. Durrett, Probability: Theory and Examples (there's a proof there, which coincides with Did's proof above).<|endoftext|> TITLE: Why is the Laplace Transform used for ODEs QUESTION [5 upvotes]: This part is taken from Differential Equations with Applications and Historical Notes by George Simmons. According to the given information, there are other integral transformations. I wonder why the Laplace transform ($a=0$, $b=\infty$, $K(p,x)=e^{-px}$) is being used for solving ODEs. Are there any advantages or other particular reasons for using this combination? REPLY [3 votes]: The Laplace transform is useful because it transforms an ODE into an algebraic equation in the transformed variable (or a PDE into an ODE), and it includes the initial conditions as part of the algebraic equation. The usefulness of this LT of course depends on the ability to find the inverse transform of the solution to the algebraic equation. In general, this requires the evaluation of a complex integral. However, for those not versed in complex integration methods, there are tables from which one may simply write down the inverse. Thus, the LT provides a simple means of solving an ODE using algebra. No better way to understand this than to provide a specific example. Let's solve $$f''(t) + 4 f'(t) + 3 f(t) = t \sin{t}$$ $$f(0) = 0 \quad f'(0) = 1$$ The LT of the LHS of the equation is $$s^2 F(s) - 1 + 4 s F(s) + 3F(s) $$ The LT of the RHS is $$\int_0^{\infty} dt \, t \sin{t}\; e^{-s t} = \operatorname{Im}{\int_0^{\infty} dt \, t \, e^{-(s-i) t}} = \operatorname{Im}{\left [\frac1{(s-i)^2}\right ]} = \frac{2 s}{(s^2+1)^2}$$ We need only solve a simple algebraic equation to find $F(s)$: $$F(s) = \frac{(s^2+1)^2+2 s}{(s^2+1)^2 (s+3)(s+1)} $$ We may then use tables or the residue theorem to find the inverse: $$f(t) = \frac14 e^{-t} - \frac{47}{100} e^{-3 t} + \frac{1}{50} [(5 t+2) \sin{t} +(11-10 t) \cos{t}]$$<|endoftext|> TITLE: Generalisation of euclidean domains QUESTION [9 upvotes]: Recently I wondered how dependent the definition of Euclidean domains is on the co-domain of the norm-function. To be precise: Let's define a semi-Euclidean domain as a domain $R$ together with a norm $\delta : R \rightarrow \alpha$ for an ordinal $\alpha$ (or even the class of all ordinals) such that for $f,g \in R\ \exists\ q,r\in R$ such that $f=qg+r$ and $\delta(r)<\delta(g)$. With the same argument as for Euclidean domains, semi-Euclidean domains are principal ideal domains. Are there any semi-Euclidean domains which are not Euclidean? Thanks for your ideas, Takirion Edit: As the image of $\delta$ is well ordered (and as such isomorphic to an ordinal) we can assume that $\delta$ is surjective. Here is an idea that didn't work: $\mathbb{R}^{\alpha}$, the "Polynomial Ring" indexed by a limit ordinal $\alpha$ with $\delta (f)=\deg(f)$, because if $\omega_1\in \alpha$ we can't divide $x^{\omega_1}$ by $x^2$. Edit 2: Using Zorn's Lemma we can find a minimal norm in terms of the partial ordering $f\leq g \iff f(a)\leq g(a)\ \forall a\in R$ for two norms $f$ and $g$. To show this let $(A, \leq)$ be the partially ordered set of all norms from $R$ into an ordinal $\alpha$ (note that to use Zorn's Lemma here $\alpha$ has to be an ordinal and can't be the class of all ordinals). Let $f_1\geq f_2\geq f_3\geq ...$ be a chain in $A$. Set $f(a)=\min\lbrace f_n (a)\mid n\in \mathbb{N}\rbrace$. It's easy to see that $f$ is a norm on $R$, and by definition it is a lower bound for all the $f_n$.
So by Zorn's Lemma we find a minimal element $\delta^*\in A$. $\square$ As $\delta^*$ is minimal, obviously $\delta^*$ is surjective onto an initial segment of $\alpha$ (so we just assume it is surjective onto $\alpha$). If $\alpha \leq \omega$, $R$ is Euclidean and we are done. If not, we find $x\in R$ with $\delta^*(x)=\omega$. But as $\delta^*$ is minimal, this means that for each $n\in \mathbb{N}$ we find $y\in R$ such that $y=q\cdot x+r \implies \delta^*(r)>n$. At the moment I hope that it will either be possible to show that such an $x$ can't exist, or that this property of some elements gives hints on where to look for semi-Euclidean but not Euclidean rings. I found another interesting thing: Given that there is any Euclidean function on the ring, we can construct the minimal norm explicitly by transfinite induction. We define $\delta^{-1}(-\infty)=0$ and $\delta^{-1}(\alpha)=\lbrace x\in R\mid\forall y\in R\ \exists q,r\in R:y=q\cdot x+r,\ \delta(r)<\alpha \rbrace$ for every ordinal $\alpha$. If we have any Euclidean function $\mu$ we see immediately that every $x\in R$ gets assigned a value $\delta(x)<\mu(x)$, which shows both that $\delta$ is defined on all elements of $R$ and that it is minimal. That it is indeed a Euclidean function follows directly from the definition. By all the stuff above, getting assigned a limit ordinal is a property which belongs to an element and is essentially independent of the Euclidean function. So I guess if there is a semi-Euclidean but not Euclidean ring, those elements will be interesting objects to study. REPLY [5 votes]: These 'semi-Euclidean domains' have been studied and are often called (transfinite) Euclidean Domains. Motzkin was the first to realize that the co-domain of the norm-function need only be a well ordered set (see Motzkin's paper: 'The Euclidean Algorithm') and since then many authors have looked at the similarities between the classical definition of Euclidean domains and the extended definition. (For an introduction into these types of objects see Lenstra's "Lectures on euclidean rings" here: https://www.math.leidenuniv.nl/~hwl/PUBLICATIONS/pub.html) As for your question: "Are there any semi-euclidean domains which are not euclidean?" the answer is yes, there are transfinite Euclidean domains whose co-domain is an ordinal larger than $\omega$. Masayoshi Nagata and Jean-Jacques Hiblot independently found rings with a co-domain of $\omega^2$ (see Nagata's paper "On Euclid algorithm" or Hiblot's paper "Des anneaux euclidiens dont le plus petit algorithme n'est pas a valeurs finies"). More recently, in work done by Conidis, Nielsen and Tombs, rings were constructed to have an arbitrary indecomposable ordinal as the co-domain (see Conidis, Nielsen and Tombs' paper "Transfinitely valued Euclidean domains have arbitrary indecomposable order type").<|endoftext|> TITLE: Analyticity of $\overline {f(\bar z)}$ given $f(z)$ is analytic QUESTION [10 upvotes]: Suppose $f$ is an analytic function on a domain $D$. Then I need to show that $\overline {f(\bar z)}$ is also analytic. Here is what I did - Suppose $f(z) = u(x,y) + iv(x,y)$ where $u$ and $v$ are real functions of $x$ and $y$ and $z = x + iy$. Now $f(\bar z) = u(x,-y) + iv(x,-y) $ and then $\overline {f(\bar z)} = u(x,-y) - iv(x,-y) $. To show that a function is analytic, I need to verify that it satisfies the Cauchy-Riemann equations. I differentiated $\overline {f(\bar z)}$ and checked that it actually satisfies these equations.
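(For what it's worth, here is a symbolic spot-check of that computation, my own sympy sketch with one concrete analytic $f$, not a proof:)

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
z = x + sp.I * y

# A concrete analytic test function with non-real coefficients, so that
# conj(f(conj(z))) genuinely differs from f itself.
f = sp.exp(sp.I * z) + (2 + 3 * sp.I) * z**2

# Form conj(f(conj(z))): substitute y -> -y to get f(conj(z)), then conjugate.
g = sp.conjugate(f.subs(y, -y))
U, V = sp.re(g), sp.im(g)

# Cauchy-Riemann equations for U + iV: U_x = V_y and U_y = -V_x.
print(sp.simplify(sp.diff(U, x) - sp.diff(V, y)))  # expect 0
print(sp.simplify(sp.diff(U, y) + sp.diff(V, x)))  # expect 0
```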
But here is my doubt - being analytic means that the function is complex differentiable. Now, to check for analyticity here, I am differentiating my function without first proving that it's actually differentiable. So is this the right way to do it? REPLY [4 votes]: Since $f$ is analytic, it can be represented as a power series, that is, for every $z_0\in D$, $$f(z)=\sum_{n=0}^\infty c_n(z-z_0)^n$$ Let's assume that $z_0=0$. Then $$\overline {f(\overline z)}=\sum_{n=0}^\infty c_n\overline {{z}}^n=\overline{ \lim_{k\to \infty}\sum_{n=0}^k c_n\overline{z}^n}$$ Since the function $z\mapsto \overline z $ is continuous, $$\overline{ \lim_{k\to \infty}\sum_{n=0}^k c_n\overline{z}^n}=\lim_{k\to \infty}\overline{\sum_{n=0}^kc_n\overline{z}^n}= \lim_{k\to \infty}\sum_{n=0}^k \overline{c_n\overline{z}^n}=\lim_{k\to \infty}\sum_{n=0}^k\overline{c_n}z^n=\sum_{n=0}^{\infty}\overline{c_n}z^n$$ But $$|c_nz^n|=|c_n||z^n|=|\overline{c_n}||z^n|=|\overline{c_n}z^n|$$ and then $\sum_{n=0}^{\infty}c_nz^n$ is absolutely convergent if and only if $\sum_{n=0}^\infty\overline{c_n}z^n$ is absolutely convergent. But $f(z)$ is analytic in $D$, so $\sum_{n=0}^{\infty}c_nz^n$ is absolutely convergent; then we can conclude that $$\overline {f(\overline z)}=\sum_{n=0}^\infty \overline{c_n}{z}^n$$ is also analytic in $D$.<|endoftext|> TITLE: $p = \sqrt{1+\sqrt{1+\sqrt{1 + \cdots}}}$; $\sum_{k=2}^{\infty}{\frac{\lfloor p^k \rceil}{2^k}} = ? $ QUESTION [9 upvotes]: Let $p = \sqrt{1+\sqrt{1+\sqrt{1 + \cdots}}}$. The sum $$\sum_{k=2}^{\infty}{\dfrac{\lfloor p^k \rceil}{2^k}}$$ can be expressed as $\frac{a}{b}$, where $\lfloor \cdot \rceil$ denotes the nearest integer function. Find $a+b$. My Work: $p^2 = 1+p \implies p = \dfrac{1+\sqrt{5}}{2} = \phi$, and also $\phi^n = \dfrac{L_n + F_n\sqrt{5}}{2}$, where $L_n$ is the $n$-th Lucas number. What to do with the next part? Note: Problem Collected from Brilliant.org REPLY [9 votes]: You are correct that $$p=\frac{1+\sqrt{5}}{2}$$ This is the golden ratio, usually labeled $\phi$. There is a well-known relationship between $\phi$ and the Lucas numbers $L_k$: for all natural numbers $k$, $$L_k = \phi^k + \left(\frac{-1}{\phi}\right)^k$$ The second term above will be less than $1/2$ for $k >1$, so rounding $\phi^k$ to the nearest integer must give $L_k$. This means your sum can be written in terms of Lucas numbers. Let your sum be $S$. Then $$S = \displaystyle\sum\limits_{k=2}^{\infty} \frac{L_k}{2^k}$$ Now we can write out a few terms of $S$, $2S$, and $4S$: \begin{align} S = \frac{L_2}{4}&+\frac{L_3}{8}+\frac{L_4}{16}+\frac{L_5}{32}+\cdots \\\\ 2S = \frac{L_2}{2}+\frac{L_3}{4}&+\frac{L_4}{8}+\frac{L_5}{16}+\cdots \\\\ 4S = L_2+\frac{L_3}{2}+\frac{L_4}{4}&+\frac{L_5}{8}+\frac{L_6}{16}\cdots \\ \end{align} I've intentionally lined up the sums above like that because if we add up the first two equations, term by term, we get $$ 3S = \frac{L_2}{2} + \frac{L_2+L_3}{4} + \frac{L_3+L_4}{8}+\frac{L_4 + L_5}{16}+\cdots $$ But the Lucas numbers, like the Fibonacci numbers, satisfy the recursion $L_k + L_{k+1} = L_{k+2}$. Therefore $$3S = \frac{L_2}{2} + \left(\frac{L_4}{4} + \frac{L_5}{8} + \frac{L_6}{16}+\cdots\right)$$ But everything after the first term looks a lot like what we had written for $4S$ above!
In fact, we can write $$3S = \frac{L_2}{2} + \left(4S - L_2 - \frac{L_3}{2}\right)$$ $$S = \frac{L_2}{2} + \frac{L_3}{2}$$ $$S = \frac{L_4}{2}$$ Finally, we can solve for your sum: $$S = \frac{7}{2}$$<|endoftext|> TITLE: Intuitive meaning of the concept “computable” QUESTION [7 upvotes]: My question is a follow-up question to this one: How to show that a function is computable? The original question was: Is the following function $$g(x) = \begin{cases} 1 & \mbox{if } \phi_x(x) \downarrow \mbox{or } x \geq 1 \\ 0 & \mbox{otherwise } \end{cases}$$ computable? The accepted answer was that this function is computable because it is either the constant 1 function or the function that is 1 everywhere except on input 0. According to my professor at university, this answer is in fact correct. But I don't understand the intuition behind it. As far as I know, “computable” means (amongst other things) that there is an algorithm that actually can compute the function. I don't really get how such an algorithm would look in this case. Obviously, if $x\geq 1$, the algorithm can immediately return 1. But if the input is 0, the algorithm would have to simulate $\phi_0(0)$. If $\phi_0$ terminates on input 0, then the algorithm can return 1. But if $\phi_0$ doesn't terminate on input 0, the algorithm will run forever and never return 0. So my understanding of the concept “computable” is in conflict with its actual meaning. Can someone explain where the error in my reasoning lies? REPLY [5 votes]: A function is not the same thing as the words used to describe it. The function is just the mapping of inputs to outputs. That mapping can be described in infinitely many different ways. For example, the identity function could be described both as "Given $x$, output $x$", or as "Given $x$, add $5$ to $x$. Now multiply $x$ by $3$, and subtract $15$. Now divide by $3$. Output the result." The question "is this function computable" means "does there exist an algorithm which happens to produce the same outputs as this function for all inputs", and not "does there exist an algorithm which resembles this description of the function", because you're asking about the function, not the particular description of it.<|endoftext|> TITLE: An odd property of Egyptian fractions QUESTION [8 upvotes]: This question arose through a response to this post. For which integers $N>1$ does the fraction $\frac 1N$ appear in the Egyptian Fraction expansion of $\frac {N-1}{N}$? To specify: As such expansions are not unique, I should say which one I refer to. Here we consider the expansion obtained through the greedy algorithm. Thus $$\frac 12=\frac 12\;\;\&\;\;\frac 34=\frac 12+\frac 14\;\;\&\;\;\frac {11}{12}=\frac 12+\frac 13+\frac 1{12}$$ are easy examples. A quick search for $N<100$ yields $N=\{2,4,12,84\}$ as examples. Taking that (short) list to OEIS leads to A053631, the sequence $a_i$ starting with $a_1=2$ and having the property that, for $i>1$, $\{a_{i-1}+1,a_i,a_i+1\}$ are a Pythagorean triple. That sequence continues from $84$ as $3612,\, 6526884,\, 21300113901612,\dots$ and it is easy to verify that those three, at least, are examples for the present question as well. Are these all examples? Are there others? Edit: as remarked in the comments, in each of the cases cited above, $\frac 1N$ appears as the final term in the expansion. REPLY [4 votes]: Note that $2 \cdot 2=4, 4 \cdot 3=12, 12 \cdot 7=84, 84 \cdot 43=3612, 3612 \cdot 1807=6526884$.
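(A direct greedy-expansion search, my own Python sketch independent of the argument below, confirms these are exactly the solutions up to $4000$:)

```python
from fractions import Fraction
from math import ceil

def greedy_hits_1_over_N(N):
    """Greedy Egyptian expansion of (N-1)/N; report whether 1/N occurs."""
    r = Fraction(N - 1, N)
    while r > 0:
        d = ceil(1 / r)  # the largest unit fraction <= r is 1/ceil(1/r)
        if d == N:
            return True
        r -= Fraction(1, d)
    return False

print([N for N in range(2, 4000) if greedy_hits_1_over_N(N)])
# [2, 4, 12, 84, 3612]
```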
If $k$ is a solution, then $\frac {k-1}k=\frac {k-2}k + \frac 1k$, where we don't care how $\frac {k-2}k$ is expressed. Then let $m=k(\frac k2+1)$, which is a multiple of $4$ because $k$ is. We have $\frac {m-1}m=\frac {k(\frac k2+1)-1}{k(\frac k2+1)}=\frac {k-2}k+\frac 1k+\frac 1{k+2}=\frac {k-2}k+\frac 1{k/2+1}+\frac 1{k(\frac k2+1)}=\frac {k-2}k+\frac 1{k/2+1}+\frac 1m$, so $m$ is a solution. This shows there are an infinite number of solutions, but does not show there are no others.<|endoftext|> TITLE: $\lim_{n \to \infty}(\frac{a_n}{\sqrt{a_n^2+1}})=\frac{1}{2}$ - show that $a_n$ is convergent sequence QUESTION [5 upvotes]: Problem: Show that $a_n$ is a convergent sequence and find its limit. $$\lim_{n \to \infty}(\frac{a_n}{\sqrt{a_n^2+1}})=\frac{1}{2}$$ I tried to look at this as a normal limit problem, so I wrote this: $$\lim_{n \to \infty}(\frac{a_n}{\sqrt{a_n^2+1}})=\lim_{n \to \infty}(\frac{1}{\sqrt{1+1}})=\frac{1}{2}$$ But I didn't get anything which can help me to solve the problem. REPLY [6 votes]: You can invert the function $y = {x \over \sqrt{x^2 + 1}}$ as follows. $$y^2 = {x^2 \over x^2 + 1} = 1 - {1 \over x^2 + 1}$$ $$1 - y^2 = {1 \over x^2 + 1}$$ $${1 \over 1 - y^2} = x^2 + 1$$ $${1 \over 1 - y^2} - 1 = x^2$$ So we have $$x^2 = {y^2 \over 1 - y^2}$$ Seeing that $x$ and $y$ must have the same sign, we have $$x = {y \over \sqrt{1 - y^2}}$$ Hence if for your sequence $x_n$ you write $y_n = {x_n \over \sqrt{1 + x_n^2}}$, then you have $$x_n = {y_n \over \sqrt{1 - y_n^2}}$$ Since $\lim_{n \rightarrow \infty} y_n = {1 \over 2}$, by the continuity of ${y \over \sqrt{1 - y^2}}$ you have $$\lim_{n \rightarrow \infty} x_n = {{1 \over 2} \over \sqrt{1 - {1 \over 4}}}$$ $$= {1 \over \sqrt{3}}$$<|endoftext|> TITLE: What to do if the modulus is not coprime in the Chinese remainder theorem? QUESTION [11 upvotes]: The Chinese remainder theorem dictates that there is a unique solution if the congruences have pairwise coprime moduli. However, what if they are not coprime, and you can't simplify further? E.g., if I have to solve the following 5 congruences $x\equiv1 \pmod 2$ $x\equiv1 \pmod 3$ $x\equiv1 \pmod 4$ $x\equiv1 \pmod 5$ $x\equiv1\pmod 6$ — as the moduli $2,3,4,5,6$ are not pairwise coprime, how would you do it? I have heard that you can't use the lcm of the numbers, but how does it work? Sorry for the relatively trivial question and thank you in advance. REPLY [2 votes]: \begin{align} x &\equiv 1 \pmod 2 \\ x &\equiv 1 \pmod 3 \\ x &\equiv 1 \pmod 4 \\ x &\equiv 1 \pmod 5 \\ x &\equiv 1 \pmod 6 \\ \end{align} Note that $x \equiv 1 \pmod 6 \implies \left\{ \begin{array}{l} x\equiv 1 \pmod 2 \\ x \equiv 1 \pmod 3 \end{array} \right.$ Replace $x \equiv 1 \pmod 6$ in your original list with those two equivalences and sort the list by prime number bases: \begin{align} x &\equiv 1 \pmod 2 \\ x &\equiv 1 \pmod 2 \\ x &\equiv 1 \pmod 4 \\\hline x &\equiv 1 \pmod 3 \\ x &\equiv 1 \pmod 3 \\ \hline x &\equiv 1 \pmod 5 \\ \end{align} Note first that $x \equiv 1 \pmod 4 \implies x \equiv 1 \pmod 2$. This means that the equivalence $x \equiv 1 \pmod 2$ is included in the equivalence $x \equiv 1 \pmod 4$ and is therefore superfluous - it can be removed.
So we can simplify the list to \begin{align} x &\equiv 1 \pmod 4 \\ x &\equiv 1 \pmod 3 \\ x &\equiv 1 \pmod 5 \\ \end{align} The solution is $x \equiv 1 \pmod{60}$.<|endoftext|> TITLE: Evaluating $\cos^{\pi}\pi$ via binomial expansion of $\left(\frac12(e^{xi}+e^{-xi})\right)^\pi$ QUESTION [6 upvotes]: I was challenged to take $\cos^{\pi}(\pi)$ and expand it using binomial expansion and $\cos(x)=\frac{e^{xi}+e^{-xi}}2$, which I tried: $$\cos^{\pi}(\pi)=\left(\frac{e^{\pi i}+e^{-\pi i}}2\right)^{\pi}$$ $$=\frac{(e^{\pi i}+e^{-\pi i})^{\pi}}{2^{\pi}}$$ $$(2\cos(\pi))^{\pi}=S$$ $$S_1=\sum_{n=0}^{\infty}\frac{\pi!\,e^{(\pi^2-2n)i}}{n!(\pi-n)!}$$ $$S_2=\sum_{n=0}^{\infty}\frac{\pi!\,e^{-(\pi^2-2n)i}}{n!(\pi-n)!}$$ The difference between $S_1$ and $S_2$ is that I did the binomial expansion starting with different terms, which we will see why in a moment. I note: $$S=\frac{S_1+S_2}{2}$$ $$S=\sum_{n=0}^{\infty}\frac{\pi!}{n!(\pi-n)!}\frac{e^{(\pi^2-2n)i}+e^{-(\pi^2-2n)i}}2$$ Now, reapplying the complex extension of the cosine function (which is why I had $S_1$ and $S_2$): $$S=\sum_{n=0}^{\infty}\frac{\pi!}{n!(\pi-n)!}\cos(\pi^2-2n)$$ So, we have: $$(2\cos(\pi))^{\pi}=\sum_{n=0}^{\infty}\frac{\pi!}{n!(\pi-n)!}\cos(\pi^2-2n)$$ And this simplifies to: $$(-2)^{\pi}=\sum_{n=0}^{\infty}\frac{\pi!}{n!(\pi-n)!}\cos(\pi^2-2n)$$ But we can clearly see the LHS is a complex number while the right side produces real numbers. So where did I go wrong? REPLY [2 votes]: Please note that we cannot freely use, for complex numbers, all the power identities that are true for real numbers. When you want to expand $$\cos ^{\pi}(\pi)=\left ( \frac{e^{i \pi }+e^{-i\pi }}{2} \right ) ^ {\pi }\tag{*}\label{*}$$ by the generalized binomial theorem, you should not use the identities $$(z^a)^b=z^{ab}$$ and $$(z_1z_2)^a=z_1^az_2^a,$$ which are not necessarily true for complex numbers, and using them freely may lead to contradictions.$\dagger$ According to the generalized binomial theorem, $$(x+y)^r=\sum_{n=0}^{\infty } \binom{r}{n}x^{r-n}y^n.$$ So, the correct expansion of \ref{*} is $$\left ( \frac{e^{i \pi }+e^{-i\pi }}{2} \right ) ^ {\pi }= \sum_{n=0}^{\infty }\binom{\pi }{n}\left (\frac{e^{i \pi }}{2}\right )^{\pi -n}\left (\frac{e^{-i \pi }}{2}\right )^n=S_1,$$ which is equal to $$\left ( \frac{e^{-i \pi} +e^{i\pi }}{2} \right ) ^ {\pi }= \sum_{n=0}^{\infty }\binom{\pi }{n}\left (\frac{e^{-i \pi }}{2}\right )^{\pi -n}\left (\frac{e^{i \pi }}{2}\right )^n=S_2.$$ Footnote $\dagger$ According to the generalized definition of exponentiation of complex numbers to real powers, for a complex number $z$ and a real number $a$ we have $$z^a=e^{a \ln z}.$$ As we know, the complex logarithm is a multivalued function, since for a complex number $z=re^{i \theta }=re^{i (\theta +2k \pi )}$, where $k$ is any integer, we have $$\ln z = \ln re^{i (\theta +2k \pi ) }=\ln r +i(\theta +2k\pi ).$$ Now, when we apply the mentioned exponentiation identities to a complex number, we may jump from the principal branch of the complex logarithm function (the branch corresponding to the analytic continuation of the real logarithm function) to another one, and so some contradictions may arise. For example, consider the following contradictions arising from using the non-identities: $$(-1)=(-1)^{\frac{1}{3}}=(e^{i\pi})^{\frac{1}{3}}=e^{i\frac{\pi}{3}}=\cos \frac{\pi}{3}+i\sin \frac{\pi}{3},$$ $$1=1^{\frac{1}{2}}=((-1)(-1))^{\frac{1}{2}}=(-1)^{\frac{1}{2}}(-1)^{\frac{1}{2}}=(i)(i)=i^2=-1.$$<|endoftext|> TITLE: How do dependent products in category theory relate to type theory?
QUESTION [5 upvotes]: I feel like I understand the construction of dependent product types relatively well; it makes sense to me how the introduction and elimination rules work together to create the concept of a function whose co-domain depends on the values of the function in the domain, but given the definitions and motivation on n-cat lab for the corresponding notion in category theory, I don't get the same intuition. Given an element $x : I \rightarrow A$ in a category, and a categorical dependent product $f = \Pi_{x : A} F(x) $, it isn't even clear to me how one would obtain the element $f(x) : F(x)$, like one can using the elimination rule of the dependent product in type theory. More generally, I'd like to see a more explicit connection with the dependent product in type theory. REPLY [7 votes]: Uff. It's hard to talk about this without introducing all of categorical type theory. The general notion is the notion of a comprehension category which I believe was introduced by Bart Jacobs. See his book or his thesis. However, when I did a search to get a reference for some definitions I found this recent set of notes, "Type Theory through Comprehension Categories" by Paolo Capriotti, which is pretty good. We start with a category $\mathcal{C}$ whose objects we're going to think of as contexts which we'll think of as roughly lists of types. Given this, the type $\tau$ in context $\Gamma$ will be represented by an arrow $p_\tau : \Gamma,\tau\longrightarrow\Gamma$ called a display map which will be used to represent weakening. (Usually, only a subset of arrows will be designated as display maps.) We might notate a type-in-context as $\Gamma \vdash \tau$. So a type over $\Gamma$ is an object of the slice category $\mathcal{C}/\Gamma$. A term, $M$, of type $\tau$ is an arrow $M : \Gamma\longrightarrow\Gamma,\tau$ such that $p_\tau \circ M = id$. Typically this would be notated as $\Gamma \vdash M : \tau$. In other words, $M$ is an arrow from $id_\Gamma \to p_\tau$ in $\mathcal{C}/\Gamma$. An arbitrary arrow in $\mathcal{C}$, $\sigma : \Delta\longrightarrow\Gamma$ represents a substitution. Intuitively, it's a substitution for variables in $\Gamma$ in terms of variables in $\Delta$. These arrows give rise to functors $\sigma^* : \mathcal{C}/\Gamma\longrightarrow \mathcal{C}/\Delta$. To interpret $\sigma^*$ we require $\mathcal{C}$ to have pullbacks. $$\require{AMScd} \begin{CD} \Delta,\tau[\sigma] @>>> \Gamma,\tau \\ @V\sigma^*(p_\tau)VV @VVp_\tau V \\ \Delta @>>\sigma> \Gamma \end{CD}$$ The action of $\sigma^*$ on arrows, i.e. terms, we might notate as $\Delta \vdash M[\sigma] : \tau[\sigma]$. The operation $(-)^*$ is itself pseudofunctorial meaning it preserves composition and identities only up to isomorphism. (This pseudofunctoriality, while a little too rigid to be completely natural categorically [it corresponds to a cloven fibration], is too loose type theoretically where, because we have syntax, we expect everything to hold "on the nose". Getting strict functoriality precipitates a lot of the complexity in this field.) With that in place, the dependent product for a type $\tau$ is the right adjoint of $p_\tau^*$. $p_\tau^* \dashv \Pi_\tau : \mathcal{C}/\Gamma,\tau \to \mathcal{C}/\Gamma$. (Obviously we are going to want the adjoint to exist for all types and for them to fit together appropriately. The coherence conditions are called the Beck-Chevalley conditions.)
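Before unpacking the adjunction, it may help to see the type-theoretic side of your question concretely: obtaining $f(x) : F(x)$ from $f : \Pi_{x:A}F(x)$ is literally function application. A small Lean 4 sketch (the toy definitions `F` and `f` are my own, not part of the categorical construction):

```lean
-- The dependent codomain: F x is a type that varies with x.
def F : Bool → Type
  | true  => Nat
  | false => String

-- A term f of the dependent product (x : Bool) → F x, i.e. Π x : Bool, F x.
def f : (x : Bool) → F x
  | true  => 0
  | false => "zero"

-- Elimination is just application: f x inhabits F x.
#check f true    -- f true : F true   (and F true reduces to Nat)
#eval f true     -- 0
#eval f false    -- "zero"
```

The categorical story below is the answer to how this application operation is encoded without any syntax available.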
We can write the action of $\Pi_\tau$ on a type as $$\frac{\Gamma,x:\tau\vdash B}{\Gamma\vdash\Pi x:\tau.B}$$ In particular we have the natural isomorphism $$\mathcal{C}/\Gamma,A(p_A^*(-),=) \cong \mathcal{C}/\Gamma(-, \Pi_A(=))$$ which if we instantiate to $id_\Gamma$ and $p_B$ gives a mapping on arrows (essentially terms) that in type theoretic notation would look like $$\frac{\Gamma,x:A\vdash M:B}{\Gamma\vdash\lambda x\!:\!A.M : \Pi x\!:\!A.B}$$ and post-composition by the counit gives a natural transformation of hom-functors $$\mathcal{C}/\Gamma,A(-,p_A^*(\Pi_A(=))) \to \mathcal{C}/\Gamma,A(-,=)$$ which, instantiating with $id_{\Gamma,A}$ and $p_B$, gives $$\frac{\Gamma,x:A\vdash M:\Pi y\!:\!A.B}{\Gamma,x:A\vdash Mx : B[y\mapsto x]}\quad x \text{ not free in } B$$ This rule represents a commutative triangle to which we can apply the substitution functor induced by the (underlying arrow of the) term $\Gamma \vdash N : A$ to get the more usual $$\frac{\Gamma\vdash N:A \quad \Gamma\vdash M:\Pi x\!:\!A.B}{\Gamma\vdash MN : B[x\mapsto N]}$$ (Actually this is slightly different from what I stated and more corresponds to doing $\sigma^*(\varepsilon_{p_B} \circ M[p_A^*])$ where $\sigma$ is the substitution induced by $N$.) I've been fairly sloppy in this presentation, assuming certain properties hold strictly that may not. See the links above for explications of those details and how to deal with them.<|endoftext|> TITLE: Exploding (a.k.a open-ended) dice pool QUESTION [7 upvotes]: Say we roll $n$ identical, fair dice, each with $d$ sides (every side comes up with the same probability $\frac{1}{d}$). On each die, the sides are numbered from $1$ to $d$ with no repeating number, as you would expect. So an ordinary $d$ sided die pool. Every die in the outcome that shows a number equal to or higher than the threshold number $t$ is said to show a hit. Every die that shows the maximum result of $d$ is rolled again, which we call "exploding". If the re-rolled dice show hits, the number of hits is added to the hit count. Dice that show the maximum after re-rolling are rolled again and their hits counted until none show a maximum result. Given the values of $$ d\ ...\ \text{Number of sides on each die}\ \ d>0 $$ $$ n\ ...\ \text{Number of dice rolled}\ \ n\ge 0$$ $$ h\ ...\ \text{Number of hits, we want the probability for}$$ $$ t\ ...\ \text{Threshold value for a die to roll a hit}\ \ 0 < t \le d$$ what is the probability to get exactly $h$ hits? Let's call it: $$p^\text{exploding}(d,n,t,h) = p_{d,n,t,h}$$ Can you derive a formula for this probability? Example roll: We roll 7 six-sided dice and count those as hits that show a 5 or a 6. In this example, $d=6$, $n=7$, $t=5$. The outcome of such a roll may be 6,5,1,2,3,6,1. That's three hits so far, but we have to roll the two sixes again (they explode). This time it's 6, 2. One more hit, and one more die to roll. We are at four hits at this point. The last die to be re-rolled shows 6 again, we re-roll it yet another time. On the last re-roll it shows a 4 - no more hits. That gives five hits in total and the roll is complete. So, for this roll $h=5$.
Simple case for just one die $n=1$: If we roll only one die with the same threshold as above, so ($d=6$, $n=1$, $t=5$), the probabilities can be easily calculated: $$ p_{6,1,5,0} = \frac{4}{6} \quad \text{(Probability for exactly 0 hits - roll 1-4 on the first roll, no explosion here)} $$ $$ p_{6,1,5,1} = \frac{1}{6} + \frac{1}{6} \cdot \frac{4}{6} \quad \text{(Probability for exactly 1 hit - roll either a 5 or a result of 1-4 after a 6)} $$ $$ p_{6,1,5,2} = \frac{1}{6} \cdot \frac{1}{6} + \frac{1}{6} \cdot \frac{1}{6} \cdot \frac{4}{6} \quad \text{(Probability for exactly 2 hits - either a 6 and 5 or two sixes and 1-4)} $$ $$ p_{d,1,t,h\ge 1} = \left(\frac{1}{d}\right)^{h-1}\frac{d-t}{d} + \left( \frac{1}{d} \right)^h \cdot \frac{t-1}{d} \quad \text{(Probability for exactly $h\ge 1$ hits - either $h-1$ maximum rolls and non-maximal success or $h$ maximum rolls and a non-success )} $$ Without Explosion: For non-exploding dice the probability would just be binomially distributed: $$ p^\text{non-exploding}_{d,n,t,h} = \binom{n}{h} \left( \frac{d-t+1}{d} \right)^h \left( 1 - \frac{d-t+1}{d} \right)^{n-h} $$ $$ E^\text{non-exploding}_{d,n,t} = n \frac{d-t+1}{d}; \qquad V^\text{non-exploding}_{d,n,t} = n \frac{(t-1)(d-t+1)}{d^2} $$ Where $E_{d,n,t}$ is the expected number of hits, and $V_{d,n,t}$ its variance. Edit1: In the meantime I found Probability of rolling $n$ successes on an open-ended/exploding dice roll. However, I'm afraid I don't fully get the answer there. E.g. the author says $s = n^k + r$, which does not hold for his examples. Also I'm not sure how to get $s$, $k$ and $r$ from my input values stated above (which are $d$, $n$, $h$ and $t$). Edit2: If one had the probability for $b$ successes via explosions, given that the initial roll had $l$ successes prior to the explosions, one could just subtract all those probabilities for all values of $b$ from the value for the pure binomial distributions with $l$ successes and add the respective value to the pure binomial probability of $b+l$ successes. Just an idea. I suppose this should be something like a combination of geometric and binomial distribution. Edit3: I accepted Brian Thug's excellent answer, giving the formula: $$ p^\text{exploding}_{d,n,t,h} = \frac{(t-1)^n}{d^{n+h}} \sum_{k=0}^{\min\{h, n\}} \binom{n}{k} \binom{n+h-k-1}{h-k} \left[ \frac{d(d-t)}{t-1} \right]^k $$ $$ E^\text{exploding}_{d,n,t} = n\frac{d+1-t}{d-1}; \qquad V^\text{exploding}_{d,n,t} = E_{d,n,t} - n\frac{(d-t)^2-1}{(d-1)^2} $$ Here is a graph from a simulation (html) that illustrates the whole thing. REPLY [6 votes]: ETA: OK, I think I've fixed the problem. Off-by-one error... I think this can be done with generating functions. The generating function for a single die is given by $$ F(z) = \frac{t-1}{d} + \frac{(d-t)z}{d} + \frac{zF(z)}{d} $$ We can interpret this as follows: The probability that there are no hits on the one die is $\frac{t-1}{d}$, so $F(z)$ has that as the coefficient for $z^0 = 1$. The probability that there is one hit and the die doesn't "explode" (repeat) is $\frac{d-t}{d}$, so $F(z)$ has that as the coefficient for $z^1 = z$. In the remaining $\frac{1}{d}$ of the cases, the die explodes and the situation is exactly as it was at the start, except that there is one hit already to our credit, which is why we have $zF(z)$: the $F(z)$ takes us back to the beginning, so to speak, and the multiplication by $z$ takes care of the existing hit.
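(A quick numerical cross-check before doing the algebra: the coefficients of $F(z)$ have to reproduce the single-die probabilities $p_{d,1,t,h}$ computed in the question. A minimal Monte Carlo sketch in Python; the function names are my own choices:)

```python
import random

def exploding_hits(d, n, t):
    """Roll n exploding d-sided dice; results >= t count as hits, maxima reroll."""
    hits, pending = 0, n
    while pending:
        rolls = [random.randint(1, d) for _ in range(pending)]
        hits += sum(r >= t for r in rolls)
        pending = sum(r == d for r in rolls)   # maxima explode and are rerolled
    return hits

d, t, trials = 6, 5, 200_000
counts = {}
for _ in range(trials):
    h = exploding_hits(d, 1, t)
    counts[h] = counts.get(h, 0) + 1

def p_one_die(h):
    # p_{d,1,t,h} from the question
    if h == 0:
        return (t - 1) / d
    return (1 / d)**(h - 1) * (d - t) / d + (1 / d)**h * (t - 1) / d

for h in range(4):
    print(h, round(counts.get(h, 0) / trials, 4), round(p_one_die(h), 4))
```

The empirical frequencies match the closed form to Monte Carlo accuracy, which is reassuring before we solve for $F(z)$ in general.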
This expression can be solved for $F(z)$ via simple algebra to yield $$ F(z) = \frac{t-1+(d-t)z}{d-z} $$ whose $z^h$ coefficient gives the probability for $h$ hits. For example, for the simple case $n = 1, d = 20, t = 11$: \begin{align} F(z) & = \frac{10+9z}{20-z} \\ & = \frac{10+9z}{20} \left(1+\frac{z}{20}+\frac{z^2}{20^2}+\cdots\right) \\ & = \left( \frac{1}{2} + \frac{9}{20}z \right) \left(1+\frac{z}{20}+\frac{z^2}{20^2}+\cdots\right) \end{align} and then we obtain the probability that there are $h$ hits from the $z^h$ coefficient of $F(z)$ as $$ P(H = h) = \frac{1}{2\cdot20^h}+\frac{9}{20^h} = \frac{19}{2\cdot20^h} \qquad h > 0 $$ with the special case $$ P(H = 0) = \frac{1}{2} $$ In general, we can obtain the expectation of the number of hits $\overline{H}$ as $$ \overline{H} = F'(1) = \frac{d(d-t)+t-1}{(d-1)^2} = \frac{d+1-t}{d-1} $$ Now, for $n$ dice, we have $$ [F(z)]^n = \left[ \frac{t-1+(d-t)z}{d-z} \right]^n $$ We can write this as $N(z)M(z)$, where \begin{align} N(z) & = [t-1+(d-t)z]^n \\ & = \sum_{k=0}^n \binom{n}{k} (t-1)^{n-k}(d-t)^kz^k \end{align} and \begin{align} M(z) & = \left(\frac{1}{d-z}\right)^n \\ & = \frac{1}{d^n} \left( 1+\frac{z}{d}+\frac{z^2}{d^2}+\cdots \right)^n \\ & = \sum_{j=0}^\infty \binom{n+j-1}{j} \frac{z^j}{d^{n+j}} \end{align} so we can obtain a closed form for $P(H = h)$ from the $z^h$ coefficient of $[F(z)]^n = N(z)M(z)$ as \begin{align} P(H = h) & = \sum_{k=0}^{\min\{h, n\}} \binom{n}{k} \binom{n+h-k-1}{h-k} \frac{(t-1)^{n-k}(d-t)^k}{d^{n+h-k}} \\ & = \frac{(t-1)^n}{d^{n+h}} \sum_{k=0}^{\min\{h, n\}} \binom{n}{k} \binom{n+h-k-1}{h-k} \left[ \frac{d(d-t)}{t-1} \right]^k \end{align} For example, for $n = 1, d = 6, t = 5$ (the example in the OP), the above expression yields $$ P(H = h) = \frac{5}{3 \cdot 6^h} \qquad h > 0 $$ with the special case $$ P(H = 0) = \frac{2}{3} $$ which coincides with the conclusions drawn in the comments to the OP. The expectation for the number of hits could be obtained by evaluating $\frac{d}{dz} [F(z)]^n$ at $z = 1$, but owing to the linearity of expectation, it is obtained more straightforwardly as $n$ times the expected number of hits for one die, namely $$ \overline{H} = \frac{n(d+1-t)}{d-1} $$ I think this all checks out, but some independent verification (or disproof, as appropriate) would be nice.<|endoftext|> TITLE: If there is a branch of $\sqrt{z}$ on an open set $U$ with $0 \notin U,$ then there is also a branch of $\arg z.$ QUESTION [5 upvotes]: Show that if there is a branch of $\sqrt{z}$ on an open set $U$ with $0 \notin U,$ then there is also a branch of $\arg z.$ I am unable to proceed any further in this and any help in this regard would be greatly appreciated. REPLY [5 votes]: A natural way is proving the ``obvious'' fact that if an open set $U\subset\mathbb{C}$ contains a closed curve $\gamma$ with non-zero winding number around $0$ then $U$ contains a closed curve $\gamma_1$ with winding number $1$ around $0$. To do this, first we replace $\gamma$ by a closed polygon. After some perturbation the polygon has no multiple segments, just finitely many self-intersections. At the self-intersections the polygon can be split into finitely many simple closed polygons. Since the winding number of the original polygon is the sum of the winding numbers of the small polygons, at least one of them has non-zero winding number. But, due to Jordan's curve theorem, the winding number of a simple closed polygon can be only $0$ or $\pm 1$. Another proof: Suppose that there is a continuous branch of $\sqrt{z}$ on $U$. Let $V=\{\sqrt{z}:z\in U\}$.
We will prove that $\log z$ has a holomorphic branch on $V$; this provides a branch of $\log z$ on $U$, namely $2\log\sqrt{z}$, whose imaginary part is then a branch of $\arg z$. Notice that the sets $V$ and $-V$ are disjoint. For the existence of $\log z$ on $V$ it suffices if every closed polygon in $V$ has zero winding number around $0$. Take an arbitrary closed polygon $\gamma\subset V$, and for every $z\notin\gamma$, let $n_\gamma(z) =\frac1{2\pi i}\int_{w\in\gamma}\frac{\mathrm{d}w}{w-z}$ be the winding number of $\gamma$ around the point $z$. We want to prove $n_\gamma(0)=0$. Consider the polygon $-\gamma$. As $\gamma\subset V$ and $-\gamma\subset (-V)$ and the sets $V$ and $-V$ are disjoint, the curves $\gamma$ and $-\gamma$ are disjoint, too. Let $z_1\in(-\gamma)$ and $z_2\in(-\gamma)$ be two points of $-\gamma$ with minimum and maximum distance from $0$. In the open set $\mathbb{C}\setminus \gamma$ the points $0$ and $z_1$ are connected by a line segment; the points $z_1$ and $z_2$ are connected by the curve $-\gamma$; finally $z_2$ and $\infty$ are connected by a ray. Therefore, the points $0,z_1,z_2,\infty$ are in the same component of $\overline{\mathbb{C}}\setminus\gamma$, so $$ n_\gamma(0) = n_\gamma(z_1) = n_\gamma(z_2) = n_\gamma(\infty) = 0. $$ Hence, every closed polygon $\gamma\subset V$ has winding number $0$ around $0$, so there is a holomorphic branch of $\log z$ on $V$.<|endoftext|> TITLE: What is the significance of Coleman maps arising in Iwasawa theory? QUESTION [6 upvotes]: I have come across two instances of "Coleman map". Let $E$ be an elliptic curve defined over $\mathbb{Q}_p$. Let $k_\infty$ be the unique $\mathbb{Z}_p$ extension of $\mathbb{Q}_p$ contained in $\mathbb{Q}_p(\mu_{p^\infty})$ with Galois group $\Gamma = 1+p\mathbb{Z}_p \cong \mathbb{Z}_p$. Let $k_n$ be the $n$-th layer in this tower. Let $T=T_p(E)$ be the $p$-adic Tate module of the elliptic curve $E$. Then the $\textit{Coleman map for E}$ is a map $$Col: \varprojlim_{n} H^1(k_n, T^*(1)) \rightarrow \Lambda=\mathbb{Z}_p[[\Gamma]] $$ I am referring to page 572 of this paper where I learn that the power series $Col_z(x)$ equals the $p$-adic L-function of the elliptic curve $E$ when $z$ is a special element discovered by Prof. Kato. There is another instance where I have come across a Coleman power series, as a power series that generates a norm-coherent sequence of units in the tower $\mathbb{Q}_p(\mu_{p^\infty})/\mathbb{Q}_p$. That is, if I have a sequence of units $\mathbf{u}=(u_n)_{n \geq 0} \in \varprojlim_{n \geq 0} \mathcal{O}^{\times}_{\mathbb{Q}_p(\mu_{p^n})}$ (the inverse limit on the right hand side is w.r.t. the norm map of fields $\mathbb{Q}_p(\mu_{p^m}) \rightarrow \mathbb{Q}_p(\mu_{p^n}), m \geq n$), then there exists a unique power series $Col_{\mathbf{u}}(x) \in \mathbb{Z}_p[[x]]$ such that $Col_{\mathbf{u}}(\zeta_{p^{1+n}}-1)=u_n, \forall n\geq 0$. How does the first Coleman map relate to the second? Or even more generally, are there other instances where Coleman maps arise, and what is the general philosophy behind Coleman maps? Any thoughts and/or link to any articles/notes are welcome. Thank you!! REPLY [6 votes]: A huge chunk of the literature of p-adic Iwasawa theory is devoted to this question. One of the milestones in the subject is Perrin-Riou's book "P-adic L-functions and p-adic representations", and reading that would be one excellent way to learn the topic. There's also Colmez's Bourbaki seminar "Fonctions L p-adiques", if you're confident reading French.
For a generalisation to non-commutative Iwasawa theory (and much more besides) there's a wonderful, but very hard, article by Fukaya and Kato "A formulation of conjectures on p-adic zeta functions in non-commutative Iwasawa theory". To answer your direct question about how the two "Coleman maps" you describe are related: they are both special cases of a single more general construction. You can make this construction (at least) for any crystalline p-adic representation of the absolute Galois group of $\mathbf{Q}_p$. One example of such a representation is the 1-dimensional trivial representation, and that gives you Coleman's original map for norm-compatible systems of units; another example of such a representation is the Tate module of an elliptic curve, and then we get the first example from your question. (PS: This is not quite true as stated; I should have said that applying Perrin-Riou's general construction to the trivial representation gives the logarithm of Coleman's power series for norm-compatible units. But that's a minor point.)<|endoftext|> TITLE: $-1$ as the only negative prime. QUESTION [46 upvotes]: I was recently thinking about prime numbers, and at the time I didn't know that they had to be greater than $1$. This got me thinking about negative prime numbers though, and I soon realized that, for example, $-3$ could not be prime because $3 \cdot (-1) = -3$. In some sense $-1$ could be though because its only factors that are integers are $-1$ and $1$, and this is allowed for primes. Is there some way, by this logic, that $-1$ can be considered a prime then? REPLY [5 votes]: I think the answers so far don't give the OP's intuitions enough credit. There is, indeed, something quite special about the set $$X = \{-1,2,3,5,7,\ldots\},$$ and the OP deserves an answer that makes this explicit. I'll do my best to give one. Firstly, observe the following. Every non-zero integer can be written as a product of the elements of $X.$ If $Y$ is a proper subset of $X$, then it's not the case that every non-zero integer can be written as a product of the elements of $Y$. In other words, $X$ is a minimal generating subset for the monoid $(\mathbb{Z}_{\neq 0}, \times,1).$ Of course, so too is the subset $$\{-1,-2,3,5,7,\ldots\}.$$ It follows that $X$, as defined above, isn't altogether that special. Nonetheless, the presence of $-1$ is essential: Proposition. Every minimal generating subset of $\mathbb{Z}_{\neq 0}$ contains $-1$. So $-1$ is actually pretty important. At this point, it's worth observing something cool about $\mathbb{N}_{\neq 0} = \{1,2,3,4,\ldots\}.$ In particular, notice that $\mathbb{N}_{\neq 0}$ has a unique minimal generating subset, namely the (usual) prime numbers. Finally, there's something else cool about $-1$ that makes it a bit primelike. If $p \in \mathbb{N}$ is a prime number in the usual sense and $n$ is an integer, we can ask: "what is the minimum number of copies of $p$ needed to build $n$?" This is usually denoted $\nu_p(n).$ Following this, it makes sense to make the following definition. Suppose $Z$ is a subset of $\mathbb{Z}_{\neq 0}$ that generates it multiplicatively. Suppose also that $q \in Z$. Then it makes sense to write $\nu_q^Z(n)$ for the least number of occurrences of $q$ in any factorization of $n$ as a product of elements of $Z$. Now recall that we defined $$X = \{-1,2,3,5,7,\cdots\}.$$ It follows that given a non-zero integer $n$, the expression $\nu_{-1}^X(n)$ equals $0$ if $n$ is positive, and it equals $1$ if $n$ is negative. 
We can also define $$Y = \{-1,-2,-3,-5,-7,\cdots\}.$$ It follows that given an integer $n>0$, the expression $\nu_{-1}^Y(n)$ equals $0$ if the number of primes in the factorization of $n$ is even, and it equals $1$ if the number of primes in the factorization of $n$ is odd.<|endoftext|> TITLE: Is it true that $\mathbb{Q}(\sqrt{2}) \cap \mathbb{Q}(i) = \mathbb{Q}$? QUESTION [5 upvotes]: Is it true that $\mathbb{Q}(\sqrt{2}) \cap \mathbb{Q}(i) = \mathbb{Q}$? I know that \begin{align*} \mathbb{Q}(\sqrt{2}) &= \{a+b\sqrt{2} \mid a,b \in \mathbb{Q}\}, \\ \mathbb{Q}(i) &= \{a+bi \mid a,b \in \mathbb{Q}\} \end{align*} I tend to believe it is, since every element in $\mathbb{Q}(\sqrt{2})$ belongs to $\mathbb{Q}(i)$ iff $b=0$. Also, an element in $\mathbb{Q}(i)$ belongs to $\mathbb{Q}(\sqrt{2})$ iff $b = 0$. Is that enough for a formal proof? REPLY [3 votes]: Well, if you've actually proved the biconditional statements you mentioned, then you're done. Alternatively, show that $$\Bbb Q\subseteq\Bbb Q(i)\cap\Bbb Q\bigl(\sqrt2\bigr),$$ which I leave to you. Then, suppose $z\in\Bbb Q(i)\cap\Bbb Q\bigl(\sqrt2\bigr).$ Since $z\in\Bbb Q\bigl(\sqrt2\bigr),$ we have $z\in\Bbb R.$ From there, we can use the fact that $z\in\Bbb Q(i)$ to readily show that $z\in\Bbb Q,$ completing the proof.<|endoftext|> TITLE: Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $f'(x)$ is continuous and $|f'(x)|\le|f(x)|$ for all $x\in\mathbb{R}$ QUESTION [8 upvotes]: Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $f'(x)$ is continuous and $|f'(x)|\le|f(x)|$ for all $x\in\mathbb{R}$. If $f(0)=0$, find the maximum value of $f(5)$. $f'(x)=f(x)$ is true when $f(x)=ke^x$. $f(x)=0$ satisfies the condition. So $f(5)=0$ which is also the correct answer. But is there any method other than substitution? REPLY [3 votes]: The inequality indicates $$ \left|\frac{\mathrm{d}}{\mathrm{d}x}\log\left|f(x)\right|\,\right|\le1 $$ wherever $f(x)\neq0$. Suppose $f(5)\neq0$, and let $c$ be the last zero of $f$ before $5$ (it exists since $f(0)=0$). Integrating the bound over $(c,5]$ gives $\log\left|f(x)\right|\ge\log\left|f(5)\right|-5$ there, which is impossible because $\log|f(x)|\to-\infty$ as $x\to c^+$. We must therefore have $f(5)=0$.<|endoftext|> TITLE: What is the main purpose of learning about different spaces, like Hilbert, Banach, etc? QUESTION [19 upvotes]: I just started to learn about functional analysis and have started to learn about various spaces, like $L^{p}$, Banach, and Hilbert spaces. However, right now my understanding is rather mechanical. That is, my understanding of say the Hilbert space is that it is a vector space with an inner product such that the norm defined by it turns into a complete metric space. Additionally, that generally vector spaces will fulfill certain criteria. Hence, my understanding is rather unmotivated by why they are defined a certain way. Is there a reason why certain vector spaces are defined the way they are? What is it about vector spaces having certain properties that makes it appealing to study? Does it allow us to do certain things on the spaces that makes it so that we must use it? Sorry if my understanding is rather weak, I just started to learn more advanced spaces from a purely mathematical point of view and have had a hard time getting an answer from professors. In summary, right now it seems that someone just gave a bunch of random conditions to define certain vector spaces and I really have no idea why they defined it that way, and why it couldn't be defined with other conditions.
REPLY [14 votes]: $L^2$ function spaces arose out of Parseval's identity for the Fourier series, an identity that was known by the late 1700's: $$ \frac{1}{\pi}\int_{-\pi}^{\pi}|f(t)|^2dt = \frac{1}{2}a_0^2+\sum_{n=1}^{\infty}\left(a_n^2+b_n^2\right), $$ where the Fourier series for $f$ is $$ f(x) \sim \frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos(nx)+b_n\sin(nx). $$ That established a connection between square integrable functions and an infinite-dimensional Euclidean space with sums of squares of coordinates. Not much was made of this connection at first. The Cauchy-Schwarz inequality for complex spaces would not be stated by Cauchy for another couple of decades (Schwarz's name was not attached to the original inequality, which bore only Cauchy's name.) In between, Fourier started his work on Heat Conduction, separation of variables and more general orthogonal expansions arising from these methods. Decades passed before, around 1850-1860, Schwarz published a paper on solutions of minimization problems where he derived the Cauchy-Schwarz inequality for integrals, and it was realized that the inequality gave the triangle inequality. A new concept of distance and convergence was emerging. Over the next few decades, these ideas led Mathematicians to consider functions as points in a space with distance and geometry imposed through norms and inner-product. That was a game-changing abstraction. During this period of abstraction, a real number was defined for the first time in a rigorous way, after roughly 24 centuries of trying to figure out how to make sense of irrationality. Compactness was discovered, and abstracted to sets of functions through equicontinuity. Fourier's ideas were being cast into the context of the new, rigorous Math. Riemann developed his integral, and by the early 1900's, Lebesgue had defined his integral, both with the stated goal of studying the convergence of Fourier series. Cantor, Hilbert, and many others were laying the rigorous, logical foundations of Mathematics, and Hilbert abstracted the Fourier series to consider $\ell^2$ as an infinite dimensional generalization of Euclidean space. Topology was being created through abstract metric and then through neighborhood axioms in the new set theory. Function spaces were now fashionable, with $\ell^2$, $L^2$ leading the way. Early in this 20th century evolution, one of the Riesz brothers looked at continuous linear functionals on $C[a,b]$, and represented them as integrals. The idea of continuity of functionals was just being explored. Functional Analysis was born, and there was a push to explore abstract function spaces. Representing functionals was the order of the day. $L^p$ was a natural abstraction that cemented the idea of the dual as having to be separate and distinct from the original space. Hahn and Banach both discovered how to extend continuous linear functionals. Before this period in the early part of the 20th century, there was no distinction between a space and its dual. $L^p$ spaces became an important part of decoupling the space and its dual, and providing convincing evidence that it was necessary to do so. Then there was a move toward abstract operators, with Hilbert and von Neumann leading the way. By the time Quantum Mechanics arrived, all the pieces were in place to be able to lay a foundation for Quantum Mechanics. Hilbert had already studied symmetric operators.
Spectrum of operators was defined well before it was realized that operators were a perfect fit for Quantum, where it was later found that the Mathematician's spectrum was actually the Physics spectrum! von Neumann had proved the Spectral Theorem for selfadjoint operators. Topological ideas abstracted from convergence, algebras of operators, functions, etc., set off a mushroom cloud of thought, helping to lead to other mushroom clouds.<|endoftext|> TITLE: Confusion with Courant: Which of his two calculus books is THE one? QUESTION [13 upvotes]: Since I've worked my way through Spivak's Calculus book, I thought I'd give Courant's allegedly fantastic exposition of the subject a go as well. However, I've run into a problem. People in stackexchange threads always praise and suggest Courant's Calculus without specifying whether that's supposed to be his "Differential and Integral Calculus" or his "Introduction to Calculus and Analysis" book. So my question is: which one of these two is the "famous" and "legendary" Calculus book that everybody always talks about when the "great three" - Spivak, Apostol and Courant - are mentioned or recommended to people asking for a first course in calculus? Note: I'm a math major. I'm mostly interested in what the oft-mentioned calculus book by Courant is, and not which of his two books would suit my prior exposure to calculus (Spivak) best in terms of the follow-up level. That is not to say I wouldn't appreciate an informed opinion on that matter, I certainly would (and I hope some experienced readers will be able to enlighten me), but I'm primarily asking this question to find out which is the more canonical one. REPLY [5 votes]: Differential and Integral Calculus is the classic. The first edition in English came out in 1934. Introduction to Calculus and Analysis is a somewhat modified version co-authored by Fritz John. Careful attention to either version will give you (just about the same) very good grounding in calculus, so you may want to read whichever one is easier to get a copy of. But if you want to read the classic, it's Differential and Integral Calculus. Another very lovely, old calculus text is A Course of Pure Mathematics by G. H. Hardy. It's impossible to learn the subject from the book by Landau (also called "Differential and Integral Calculus"), but it's worth a look once you already know the subject.<|endoftext|> TITLE: Prove that $u\cdot v = 1/4||u+v||^2 - 1/4||u-v||^2$ for all vectors $u$ and $v$ in $\mathbb{R}^n$ QUESTION [5 upvotes]: I need some help figuring out how to work through this problem. Prove that $ u \cdot v = 1/4 ||u + v||^2 - 1/4||u - v||^2$ for all vectors $u$ and $v$ in $\mathbb{R}^n$. Sorry, forgot to include my work so far: I decided to ignore the 1/4 and deal with it later once I had a better understanding of the question. $= ||u+v||^2 - ||u-v||^2$ $= (u+v)(u+v) - (u-v)(u-v)$ $= u(u+v) + v(u+v) - u(u-v) + v(u-v)$ $= uu + uv + uv + vv - uu + uv + uv - vv$ $u \cdot v= 3(uv)$ This is as far as I've gotten, not sure if I'm on the right track or where to go next. REPLY [2 votes]: Here is a start $$||u+v||^2= \langle u+v, u+v \rangle= \langle u, u \rangle+\dots \,. $$ Do the same with $||u-v||^2$, multiply both equations by $\frac{1}{4}$, and subtract. See my answer. Note: $$\langle u, v\rangle = u\cdot v \,.$$
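If you want to convince yourself numerically before writing out the algebra, here is a quick sanity check of the target identity (Python/NumPy, random vectors; of course no substitute for the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(6)
v = rng.standard_normal(6)

lhs = u @ v  # the dot product u . v
rhs = 0.25 * np.linalg.norm(u + v)**2 - 0.25 * np.linalg.norm(u - v)**2
print(np.isclose(lhs, rhs))  # True
```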
<|endoftext|> TITLE: Example of a Tensor Product of Modules with Non-Decomposable Elements QUESTION [5 upvotes]: Given a ring $R$ and $R$-modules $A_R$ and $_{R}B$, we define the tensor product $A \otimes_R B$ as the free abelian group on $A \times B$ modded out by the subgroup generated by the elements of the following form: $$ (a+a',b)\!-\!(a,b)\!-\!(a',b) \quad\quad (a,b+b')\!-\!(a,b)\!-\!(a,b') \quad\quad (ar,b)\!-\!(a,rb) $$ Alternatively we can think of $A \otimes_R B$ as being generated by elements of the form $a \otimes b$ (which are called the decomposable elements) where $a \in A$ and $b \in B$, but not every element of $A \otimes_R B$ has to be a decomposable element that looks like $a \otimes b$. Are there good illustrative (concrete?) examples of modules $A$ and $B$ and a ring $R$ where $A \otimes_R B$ has non-decomposable elements? It seems like when working with concrete modules, for all the examples I've seen so far the tensor product is isomorphic to some nice group, which I think means that there are no non-decomposable elements to consider. Alternatively when working with tensor products abstractly you can treat the decomposable elements as a basis for $A \otimes_R B$ and not bother thinking about the non-decomposable elements. Either way, I'm missing an intuitive sense of what a tensor product with non-decomposable elements "looks like." REPLY [7 votes]: Let $\mathbb{F}$ be a field and let $V$ be a module over $\mathbb{F}$ (i.e. a vector space over $\mathbb{F}$), then $V^*$ is also a module over $\mathbb{F}$. When $V$ is finite-dimensional, the tensor product $V^*\otimes_{\mathbb{F}}V$ is canonically isomorphic to $\operatorname{End}_{\mathbb{F}}V$ via the map induced by the bilinear map $V^*\times V \to \operatorname{End}_{\mathbb{F}}(V)$, $(\varphi, w) \mapsto \Phi_{(\varphi, w)}$ where $\Phi_{(\varphi, w)}(v) = \varphi(v)w$. If $V$ is a finite-dimensional vector space over $\mathbb{F}$ of dimension $n$, choosing a basis $\{e_1, \dots, e_n\}$ for $V$ induces an isomorphism $\operatorname{End}_{\mathbb{F}}V \cong M_{n\times n}(\mathbb{F})$ by the map $a_i^j\,e^i\otimes e_j \mapsto [a_i^j]$. Under the isomorphisms $V^*\otimes V \cong \operatorname{End}_{\mathbb{F}}V \cong M_{n\times n}(\mathbb{F})$, the decomposable elements of $V^*\otimes V$ correspond to the rank one matrices. To see that this last claim is true, let $\varphi \in V^*\setminus\{0\}$ and $w \in V\setminus\{0\}$, then $\varphi = x_ie^i$ and $w = y^je_j$, so $\varphi\otimes w = x_iy^j e^i\otimes e_j$ and is mapped under the isomorphism to the matrix $[x_iy^j]$. Now note that $$[x_iy^j] = \begin{bmatrix} x_1\\ \vdots\\ x_n\end{bmatrix}[y^1\ \dots\ y^n] = xy^T$$ so $[x_iy^j]$ has rank one. Conversely, any rank one $n\times n$ matrix can be put in the form $xy^T$ for some $x, y \in \mathbb{F}^n\setminus\{0\}$ (such a product is called an outer product), and hence we can trace back the isomorphism to find a decomposable element of $V^*\otimes_{\mathbb{F}}V$. More generally, elements of $V^*\otimes_{\mathbb{F}} V$ which can be written as a sum of $k$ decomposable elements (but no fewer) correspond to sums of $k$ outer products (but no fewer). These are precisely the matrices of rank $k$. In fact, one often defines the rank of an element in a tensor product as the smallest number of decomposable elements needed to write it as a sum, and the above simply states that the two notions of rank agree.
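Since the question asks for concrete examples, the rank correspondence can also be checked numerically. A small Python/NumPy sketch (the particular vectors are arbitrary choices of mine):

```python
import numpy as np

# A decomposable tensor phi (x) w corresponds to an outer product x y^T: rank <= 1.
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
print(np.linalg.matrix_rank(np.outer(x, y)))  # 1

# e^1 (x) e_1 + e^2 (x) e_2 corresponds to the identity matrix, which has rank 2,
# so no single outer product equals it: that element is indecomposable.
print(np.linalg.matrix_rank(np.eye(2)))       # 2
```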
Note: After choosing a basis $\{e_1, \dots, e_n\}$ for $V$, one obtains the basis $\{e^i\otimes e_j \mid 1 \leq i, j \leq n\}$ for $V^*\otimes_{\mathbb{F}}V$. While the elements $e^i\otimes e_j$ (and their non-zero scalar multiples) are decomposable, they are not the only decomposable elements of $V^*\otimes_{\mathbb{F}}V$. An element of $V^*\otimes_{\mathbb{F}}V$ is decomposable if it is of the form $\varphi\otimes w$ for some non-zero $\varphi \in V^*$ and non-zero $w \in V$. For example, the element $e^1\otimes e_1 + e^2\otimes e_1$ is not of the form $ae^i\otimes e_j$, but it is decomposable as $$e^1\otimes e_1 + e^2\otimes e_1 = (e^1 + e^2)\otimes e_1.$$ However, what is true is that an element $L$ of $V^*\otimes_{\mathbb{F}}V$ is decomposable if and only if there is some basis $\{f_1, \dots, f_n\}$ of $V$, such that $L = f^i\otimes f_j$ for some $i$ and $j$. REPLY [5 votes]: Take $R$ to be a field $k$ and take $A = B = k^2$, with basis $e_1, e_2$. The tensor product $A \otimes_k B$ is $k^4$ with basis $e_1 \otimes e_1, e_1 \otimes e_2, e_2 \otimes e_1, e_2 \otimes e_2$, and most elements of it are indecomposable. For example, $e_1 \otimes e_1 + e_2 \otimes e_2$ is indecomposable. There's no more reason to expect all tensors to be decomposable than there is to expect all polynomials to factor. In fact we can be fairly explicit about which tensors decompose in the above special case: the decomposable tensors are precisely those of the form $$(ae_1 + be_2) \otimes (ce_1 + de_2) = ac e_1 \otimes e_1 + ad e_1 \otimes e_2 + bc e_2 \otimes e_1 + bd e_2 \otimes e_2.$$ Working projectively, this defines a morphism $\mathbb{P}^1 \times \mathbb{P}^1 \to \mathbb{P}^3$ (a special case of the Segre embedding), so we expect its image to be a $2$-dimensional subvariety of the $3$-dimensional variety $\mathbb{P}^3$, and in particular not the whole thing. More explicitly, an example of a nontrivial polynomial equation satisfied by the image is $x_{11} x_{22} = x_{12} x_{21}$, where $x_{ij}$ is the coefficient of $e_i \otimes e_j$ (note that this is not satisfied in our example), and this in fact generates the ideal defining the image of the Segre embedding. As Mariano says in the comments, another perspective is the following. If $V$ is a finite-dimensional $k$-vector space, then $V \otimes_k W$ can be identified with the space of linear maps $V^{\ast} \to W$. Among those, the decomposable tensors are the maps of rank at most $1$. If $\dim V = 2$ and $W = V^{\ast}$ then $V \otimes_k V^{\ast}$ can be identified with $\text{End}(V)$, and then an element of $\text{End}(V)$ has rank at most $1$ iff its determinant is zero; that's the equation $x_{11} x_{22} = x_{12} x_{21}$ above. In general a matrix has rank at most $1$ iff all of its $2 \times 2$ minors vanish.<|endoftext|> TITLE: How was the 506-digit prime number 999...9998999...999 found? QUESTION [18 upvotes]: I was surprised to encounter a claim made on the internet that the following number is prime: 99999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999989999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999 It's all 9s except for one 8.
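(A probable-prime check of this number takes only a few lines. Here is a Python sketch, writing the number compactly as $10^{506}-10^{253}-1$, the form a reply below confirms; note that sympy's `isprime` is a strong pseudoprime/BPSW test for numbers this large, not a proof:)

```python
from sympy import isprime

n = 10**506 - 10**253 - 1   # 252 nines, one 8, then 253 nines
assert len(str(n)) == 506 and sorted(set(str(n))) == ['8', '9']
print(isprime(n))           # True -- a probable-prime test, not a certificate
```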
This 506-digit number didn't look especially prime to me. I couldn't find it in any publicly available lists (which clonk out around 8 or so digits), so I did trial division up to 626543489 and then did Miller-Rabin with 5000 rounds (way overkill). It seems, in fact, to be prime. My question is--is there anything significant about this number that would help us realize that it is prime? How was it found? It's not a Mersenne, Fermat, or Perfect prime, for instance. It's not particularly large (the largest known as of this writing is in the tens of millions of digits), but I suspect the previous and next prime numbers aren't known. REPLY [5 votes]: At first I thought it was a palindromic prime. There are lots of variations, and the largest currently known has 474,501 digits (Wikipedia seems to be out of date -- see The Prime Pages). For the top 3, they have some form M+1 where M is mostly factorable, hence a BLS75 n-1 proof can be done. We can find palindromic primes of this sort with lots of tools, for example: perl -Mntheory=:all -E '$s=8; for (1..3000) { $s="9${s}9"; say if is_prime($s); }' finds quite a few examples including the 757 digit prime formed by an eight with 378 nines on each side. There are lots of proof methods that work for numbers this size: WraithX's APR-CL, Alpertron's APR-CL, Pari/GP's APR-CL, my ECPP-DJ or Perl/ntheory, and Primo's ECPP, among others. Most of those proof methods work pretty well up to 2-3k digits. Primo is the only public tool that excels past that, and has been used up to 30k digits (a long undertaking on a hefty machine). But the example you gave isn't a palindrome since it has 252 nines on one side and 253 on the other. We can find it by replacing the $s=8 with $s=89 in the script above, along with both smaller and larger primes with the same form. If using something like Pari/GP it may be nicer to use a different way of writing the number, e.g. $10^{506}-10^{253}-1$, rather than using strings. Lastly, we can look at http://factordb.com and see that this number has been in the database for at least 5 years, with an N+1 proof. I believe factordb as well as the primes pages uses PFGW for the proof, which unfortunately doesn't output a certificate even though one should be easily constructed during the proof (admittedly it's not hard to run it again given the factorization, but it would be nice to be able to check the certificate like we can do with Primo).<|endoftext|> TITLE: Solving an exponential equation with different bases QUESTION [10 upvotes]: Solve the equation $2^x + 5^x = 3^x + 4^x$. I can figure out two special solutions $x=0$ and $x=1$, and I try to prove that they are the only two solutions. However, I find it hard to do so because I can't prove the monotonicity, given that there are also exponentials in the derivative. Any hints?
REPLY [3 votes]: Since $f(a)=a^x$ is concave for $x\in[0,1]$ and convex for $x\not\in(0,1)$, the definitions of concavity and convexity say $$ \begin{align} \color{#C00}{3^x}+\color{#090}{4^x} &=\color{#C00}{\left(\frac23\cdot2+\frac13\cdot5\right)^x}+\color{#090}{\left(\frac13\cdot2+\frac23\cdot5\right)^x}\tag1\\[3pt] &{\ge\atop\le}\color{#C00}{\left(\frac23\cdot2^x+\frac13\cdot5^x\right)}+\color{#090}{\left(\frac13\cdot2^x+\frac23\cdot5^x\right)}{{\quad\text{if }x\in[0,1]}\atop{\quad\text{if }x\not\in(0,1)}}\tag2\\[6pt] &=2^x+5^x\tag3 \end{align} $$ Explanation: $(1)$: write $3$ and $4$ as convex combinations of $2$ and $5$ $(2)$: definition of $\text{concavity}\atop\text{convexity}$ $(3)$: combine like terms Furthermore, equality holds only if $f$ is linear on $\{2,3,4,5\}$, and that happens iff $x\in\{0,1\}$.<|endoftext|> TITLE: Evaluate $\sum_{n=1}^{\infty} \frac{n}{n^4+n^2+1}$ QUESTION [5 upvotes]: I am trying to re-learn some basic math and I realize I have forgotten most of it. Evaluate $$\sum_{n=1}^{\infty} \frac{n}{n^4+n^2+1}$$ Call the terms $S_n$ and the total sum $S$. $$S_n < \frac{1}{n^3} \Rightarrow \sum_{n=1}^{\infty} \frac{n}{n^4+n^2+1} = S < \infty$$ $$S_n = \frac{n}{n^4+n^2+1} = \frac{n}{(n^2+1)^2-n^2}$$ It has been more than a few years since I did these things. I would like a hint about what method I should look for. Thanks. REPLY [3 votes]: Notice: use partial fractions as follows $$\sum_{n=1}^{\infty}\frac{n}{n^4+n^2+1}$$$$=\sum_{n=1}^{\infty}\frac{n}{(n^2-n+1)(n^2+n+1)}$$ $$=\frac 12\sum_{n=1}^{\infty}\left(\frac{1}{n^2-n+1}-\frac{1}{n^2+n+1}\right)$$ $$=\frac 12\lim_{n\to \infty}\left(\left(\frac{1}{1}-\frac{1}{3}\right)+\left(\frac{1}{3}-\frac{1}{7}\right)+\left(\frac{1}{7}-\frac{1}{13}\right)+\ldots +\left(\frac{1}{n^2-n+1}-\frac{1}{n^2+n+1}\right)\right)$$ $$=\frac 12\lim_{n\to \infty}\left(1-\frac{1}{n^2+n+1}\right)$$ $$=\frac 12\left(1-0\right)=\color{red}{\frac 12}$$<|endoftext|> TITLE: Find the Jordan normal form of a nilpotent matrix $N$ given the dimensions of the kernels of $N, N^2, N^3$ QUESTION [12 upvotes]: Let $N\in \text{Mat}(10 \times 10,\mathbb{C})$ be nilpotent. Furthermore let $\text{dim} \ker N =3 $, $\text{dim} \ker N^2=6$ and $\text{dim} \ker N^3=7$. What is the Jordan Normal Form? The only thing I know is that there have to be three blocks, since $\text{dim} \ker N = 3$. Thank you very much in advance for your help. REPLY [6 votes]: This can be seen in terms of partitions. An $n \times n$ nilpotent matrix $N$ can be described via a partition $$ p = (n_{1}, n_{2}, \dots, n_{k}) $$ of $n$, with $n_{1} \ge n_{2} \ge \dots \ge n_{k} > 0$, which records the size of the nilpotent Jordan block in a Jordan normal form. Now one can show (it is really straightforward) that for the dual partition $q$ of $p$ one has $$ q = (\dim(\ker(N)), \dim(\ker(N^{2})) - \dim(\ker(N)), \dim(\ker(N^{3})) - \dim(\ker(N^{2})), \dots). $$ In your case $$ q = (3, 6-3, 7-6, \dots) = (3, 3, 1, \dots), $$ and thus $$ q = (3, 3, 1, 1, 1, 1). $$ The dual partition is $$ p = (6, 2, 2). $$<|endoftext|> TITLE: Not all norms are equivalent in an infinite-dimensional space QUESTION [8 upvotes]: How to prove that not all norms are equivalent in an infinite-dimensional vector space? In particular, I would like to prove that for a space $X$ of continuous real-valued functions defined on interval $[0,1]$, every two norms $\|\ .\|_p$ ($p \in [1, \infty]$) are not equivalent.
REPLY [6 votes]: Consider the two spaces $L^{p_1}(-1, 1), L^{p_2}(-1, 1)$ with $1\le p_1 < p_2 \le \infty$.<|endoftext|> TITLE: Find the limit $\lim_{n\to\infty}\left(\sqrt{n^2+n+1}-\left\lfloor\sqrt{n^2+n+1}\right\rfloor\right)$ QUESTION [9 upvotes]: $$\lim_{n\to\infty}\left(\sqrt{n^2+n+1}-\left\lfloor\sqrt{n^2+n+1}\right\rfloor\right)\;=\;?\quad(n\in I) \\ \text{where $\lfloor\cdot\rfloor$ is the greatest integer function.}$$ This is what I did: Since $[x] = x - \{x\}$ we get our limit equal to $$\lim_{n\to\infty}\left\{\sqrt{n^2+n+1}\right\}$$ Moving the limit inside the fractional part function and replacing $n=\frac 1h \; \text {where } h\to0^+$ we get $$\left\{\lim_{h\to0^+} \frac{\sqrt{h^2+h+1}}h\right\}$$ Applying L'Hospital Rule, we get our limit equal to $\left\{\frac 12\right\}$ which is $0$. The problem: The answer in the answer key is $\frac12$. So here, the only problem I seem to find in my solution is that $n\in I$ and simply assuming $n = \frac 1h$ doesn't ensure our $n$ to be an integer. Can anyone provide a way to either correctly assume a new value for $n$ or any alternate way to solve this? REPLY [6 votes]: Alternative idea for proving the essential facts: writing $$\sqrt{n^2+n+1} = n\sqrt{1+1/n+1/n^2}$$ and using the Taylor series of $\sqrt{1+x}$: $$1+{\frac{x}2}-{\frac{x^2}8}+O(x^3)$$ we have $$\sqrt{n^2+n+1} = n+\frac12+\frac3{8n}+O(1/n^2)$$ and $$\lfloor\sqrt{n^2+n+1}\rfloor = n.$$<|endoftext|> TITLE: What do we actually prove using induction theorem? QUESTION [8 upvotes]: Here is the picture of the page of the book I am reading: $$P_k: \qquad 1+3+5+\dots+(2k-1)=k^2$$ Now we want to show that this assumption implies that $P_{k+1}$ is also a true statement: $$P_{k+1}: \qquad 1+3+5+\dots+(2k-1)+(2k+1)=(k+1)^2.$$ Since we have assumed that $P_k$ is true, we can perform operations on this equation. Note that the left side of $P_{k+1}$ is the left side of $P_k$ plus $(2k+1)$. So we start by adding $(2k+1)$ to both sides of $P_k$: \begin{align*} 1+3+\dots+(2k-1) &= k^2 &P_k\\ 1+3+\dots+(2k-1)+(2k+1) &= k^2+(2k+1) &\text{Add $(2k+1)$ to both sides.} \end{align*} Factoring the right side of this equation, we have $$1+3+\dots+(2k-1)+(2k+1) =(k+1)^2 \qquad P_{k+1}$$ But this last equation is $P_{k+1}$. Thus, we have started with $P_k$, the statement we assumed true, and performed valid operations to produce $P_{k+1}$, the statement we want to be true. In other words, we have shown that if $P_k$ is true, then $P_{k+1}$ is also true. Since both conditions in Theorem 1 are satisfied, $P_n$ is true for all positive integers $n$. It is written at the top that we have to show that 'the assumption that $P_k$ is true implies $P_{k+1}$ is true'. What I think is that, as long as we know that the proposition being true for some positive integer $k$ past the base case implies it is true for $k+1$, we still have to show that $P_k$ is true. However, I don't have an idea yet how to show the truth of $P_k$. So, my first question is: who is right, my book or me? And if I am right, then how can I show the truth of $P_k$? 2) This may be taken as a second question, but it is also a bit annoying. In the second paragraph it is written, "Since we have assumed that $P_k$ is true, we can perform operations on it". Why is it necessary for an equation to be true in order to perform operations on it? Well, this is not as annoying as the first question, because the whole theory of induction depends upon that. REPLY [3 votes]: There is another way of thinking of this.
Throw out the standard induction theorem, and replace it with this one: Any nonempty set of natural numbers has a smallest element. This seems obviously true, right? This is called the well-ordering principle, and it is equivalent to induction. Here's how you do induction with the well-ordering principle (WOP for short): 1. Prove that $P_0$ holds (or $P_1$, if you don't consider zero to be a "natural number"). 2. Take this set: $$ S = \{x | x \in \mathbb{N} \wedge \neg P_x \} $$ That's the set of natural numbers for which our proposition does not hold. 3. Assume $S$ is non-empty. Then, by WOP, it has a least element, which we can call $k+1$. In particular, we know that $\neg P_{k+1}$ (we know that the proposition does not hold for $k+1$, since it's an element of $S$). 4. By step (1), we know $k+1 > 0$ (or $k+1 > 1$, if you started with $P_1$). So $k$ is a natural number. If we had not proven step (1), $k$ could be equal to negative one (or zero), which would invalidate the rest of the proof. 5. Prove that $\neg P_{k}$; that is, prove that the proposition is not true for $k$, based on our knowledge that it is not true for $k+1$. 6. Since $k < k + 1$ and $k \in S$, we have a contradiction ($k + 1$ isn't the smallest element of the set). That means our assumption in step (3) must be false. The set of counterexamples is empty, meaning the proposition must hold for every number. Note that these statements are equivalent: $$ P_k \implies P_{k+1} \\ \neg P_{k+1} \implies \neg P_k $$ If you can prove one, you can prove the other quite easily. Standard induction asks you to prove the first, while WOP-based induction requires you to prove the second. In the end, it's just a matter of notation. But you might find one form of induction more intuitive than the other. If you are going to use this method, you should note that we usually talk about $k$ and $k - 1$ instead of $k+1$ and $k$; I used the latter because it corresponds more obviously to standard induction.<|endoftext|> TITLE: Intuition behind: Integral operator as generalization of matrix multiplication QUESTION [9 upvotes]: So I am teaching myself more in-depth about integral operators and every once in a while I see this little 'factoid', that integral operators are generalizations of matrix multiplications. In particular, if: $$Lf(x) = \int_{X} k(x,y) f(y) du(y)$$ So then, naturally, if $\mathbf{A}$ is an $m \times n$ matrix with entries $a_{ij}$ and $\mathbf{u} \in \mathbb{C}^{n}$, $\mathbf{A}\mathbf{u} \in \mathbb{C}^{m}$, we have: $(\mathbf{A}\mathbf{u})_{i} = \sum_{j=1}^{n} a_{ij} u_{j}, i=1 \ldots m$ So I always see something similar to the following statement: The entries $k(x,y)$ are analogous to the entries $a_{ij}$ of matrix $\mathbf{A}$ and the values $Lf(x)$ are analogous to the entries $(\mathbf{A}\mathbf{u})_{i}$ I have never seen any example or more detail regarding this statement. Can somebody make this a bit clearer? Here is my thought: the $x$ defines the 'row', while the $y$ defines the column. So then if we wanted to, we could define a sequence $\{x_{0}, x_{1}, \ldots\}$ to then 'sample' the output of the operator integral $L$. For instance, if $L$ mapped $f$ to a certain region $P$, we would want to define our sequence to exist only in this region $P$ in order to save computation.
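To make this 'sampling' idea concrete, here is a rough discretization sketch (Python/NumPy; the Gaussian kernel is just an arbitrary stand-in of mine): on a finite grid the kernel literally becomes a matrix and $L$ becomes matrix-vector multiplication.

```python
import numpy as np

# Discretize Lf(x) = ∫_0^1 k(x, y) f(y) dy with the midpoint rule on n points.
n = 200
y = (np.arange(n) + 0.5) / n                 # sample points, each with weight 1/n
k = lambda x, yy: np.exp(-(x - yy)**2)       # an arbitrary smooth kernel
f = np.sin(2 * np.pi * y)                    # sampled f

K = k(y[:, None], y[None, :])                # K[i, j] = k(x_i, y_j): the "matrix entries"
Lf = K @ f / n                               # (K f)_i / n ≈ ∫ k(x_i, y) f(y) dy
print(Lf[:3])
```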
If in this region, the functions $Lf$ had significant norm/power only in a subspace or section of this region $P_{k}$, we could 'sample' this subspace only -- i.e. $\{x_{0}, x_{1}, \ldots\} \in P_{k}$ and still get a 'reasonable' approximation to the resulting function/signal $Lf$. Similarly, we could 'sample' $f$ on its domain with $\{y_{0}, y_{1}, \ldots\}$ and reduce the computation even further, when the sequence captures 'most of the information' about $f$ that is. REPLY [5 votes]: It's always a bit hard to guess what another person might find intuitive, but here are my two cents on the topic. You can interpret the elements of $\mathbb{R}^n$ as functions from the set $\{1,...,n\}$ to $\mathbb{R}$, where for $f \in \mathbb{R}^{n}$, $f(i)$ would just be the $i$-th component of the vector. We know from linear algebra that any linear operator $L: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ can be written as $L f = A\cdot \vec x$, where $A$ is an $n \times n$-matrix and $\vec x$ is the vector associated with $f$. We could invent a "kernel" function to write this down differently, with $k: \{1,...,n\} \times \{1,...,n\} \to \mathbb{R}$, $k(i,j) := A_{ij}$. We then have the formula $$Lf(i) = (A\cdot\vec x)_i = \sum_{j=1}^{n} k(i,j) f(j).$$ Now let's replace $\{1,...,n\}$ with some infinite set $X$. Writing down matrices and using the multiplication rules in the same way as in $\mathbb{R}^n$ seems to be a complicated approach here, but it is easy to see what the generalisation of the formula above should be: The values $k(x,y)$ for $x,y \in X$ are the "matrix entries", so we get $$Lf(x) := \sum_{y \in X}k(x,y)f(y)$$ Now for countable $X$ this might still make sense, if we introduce some restrictions on $k$ and $f$ in order to ensure convergence, but for uncountable $X$ (which is the more interesting case) the sum doesn't make sense any more (at least if $k$ is nonzero almost everywhere). The integral is often viewed as a "continuous" analogue of summation (e.g. by physicists, or in measure theory), and as it is itself a limit of sums, it seems only natural to consider operators of the form $$Lf(x) = \int_{X}k(x,y)f(y) dy$$<|endoftext|> TITLE: Calculating (co)limits of ringed spaces in $\mathbf{Top}$ QUESTION [14 upvotes]: Let $\mathbf{Top}$ be the category of topological spaces, $\mathbf{RS}$ the category of ringed spaces and $\mathbf{LRS}$ the category of locally ringed spaces. There are forgetful functors $$ U_{\mathbf{LRS}}: \mathbf{LRS} \to \mathbf{RS} $$ (the inclusion of the (non-full) subcategory) and $$ U_{\mathbf{RS}}: \mathbf{RS} \to \mathbf{Top} $$ (forgetting the structure sheaf). All three categories have all (small) limits and colimits. Are $U_{\mathbf{LRS}}$ and $U_{\mathbf{RS}}$ (right?) adjoint functors? Do $U_{\mathbf{LRS}}$ and $U_{\mathbf{RS}}$ preserve pushouts? I am interested in these questions because I can think more easily about topological spaces than about ringed (or locally ringed) spaces. For example, when I intuitively want to see what the pushout of two (locally) ringed spaces is, I want to see first what happens on topological spaces and afterwards think of what is going on with the structure sheaves. Am I allowed to do this? REPLY [19 votes]: The forgetful functor $\mathsf{LRS} \to \mathsf{RS}$ has a right adjoint. The right adjoint "$\mathrm{Spec}$" is a rather direct generalization of the spectrum of a commutative ring. You can find the construction in W. D. Gillam's Localization of ringed spaces, for instance.
The underlying set of $\mathrm{Spec}(X,\mathcal{O}_X)$ consists of all pairs $(x,\mathfrak{p})$, where $x$ is a point in $X$ and $\mathfrak{p}$ is a prime ideal of $\mathcal{O}_{X,x}$. The structure sheaf is defined in such a way that the stalk at $(x,\mathfrak{p})$ is the local ring $(\mathcal{O}_{X,x})_{\mathfrak{p}}$. As a corollary, $\mathsf{LRS} \to \mathsf{RS}$ preserves all colimits. But this also comes out from the construction of colimits of locally ringed spaces, which you can find in Demazure-Gabriel's Groupes algébriques, I. §1. 1.6. The forgetful functor $\mathsf{LRS} \to \mathsf{RS}$ has no left adjoint, since it does not preserve limits. For example, $\mathrm{Spec}(\mathbb{Z})$ is the terminal object of $\mathsf{LRS}$, but $(\{\star\},\underline{\mathbb{Z}})$ is the terminal object of $\mathsf{RS}$. For a description of limits in $\mathsf{LRS}$, see Gillam's paper above. The forgetful functor $\mathsf{RS} \to \mathsf{Top}$ has a right adjoint which maps $X$ to $(X,\underline{\mathbb{Z}})$. It follows that $\mathsf{RS} \to \mathsf{Top}$ preserves colimits. Specifically, the colimit of a diagram $((X_i,\mathcal{O}_i))_{i \in I}$ of ringed spaces is $(\mathrm{colim}_i X_i,\lim_i (u_i)_* \mathcal{O}_i)$, where $(u_i : X_i \to \mathrm{colim}_i X_i)$ is the colimit cone of the topological spaces. The forgetful functor $\mathsf{RS} \to \mathsf{Top}$ has a left adjoint which maps $X$ to $(X,0)$. It follows that $\mathsf{RS} \to \mathsf{Top}$ preserves limits. Specifically, the limit of a diagram $((X_i,\mathcal{O}_i))_{i \in I}$ of ringed spaces is $(\lim_i X_i,\mathrm{colim}_i (u_i)^{-1} \mathcal{O}_i)$, where $(u_i : \lim_i X_i \to X_i)$ is the limit cone of the topological spaces.<|endoftext|> TITLE: A field extension of degree 2 is a Normal Extension. QUESTION [11 upvotes]: Let $L$ be a field and $K$ be an extension of $L$ such that $[K:L]=2$. Prove that $K$ is a normal extension. What I have tried: Let $ f(x)$ be any irreducible polynomial in $L[x] $ having a root $\alpha$ in $K$ and let $\beta$ be another root. Then I have to show $\beta \in K$. I already have $L(\alpha)=K$, but now how to show $\beta\in K$? Any hints? REPLY [16 votes]: Since $\alpha \in K$ is a root of the irreducible polynomial $f \in L[X]$, $f$ is the minimal polynomial of $\alpha$ over $L$. The degree $d$ of $\alpha$ over $L$ is $\le 2$, because $[K : L]=2$. If $d=1$, what can you conclude? If $d=2$, write $f(X)=X^2+aX+b=(X-\alpha)(X-\beta)$. What are the relations between $\alpha$ and $\beta$? If $d=1$, then $f(X)=X-\alpha$ so that $\beta=\alpha \in K$. If $d=2$, then $X^2+aX+b=X^2-(\alpha+\beta)X+\alpha\beta$ so that $-a=\alpha+\beta$, or $\beta = -a-\alpha \in K$, since $a \in L \subset K$.<|endoftext|> TITLE: $\iiint_V \ x^{2n} + y^{2n} + z^{2n} \,dx\,dy\,dz$ QUESTION [6 upvotes]: $$\iiint_V \ x^{2n} + y^{2n} + z^{2n} \,dx\,dy\,dz$$ where $V$ is the unit ball. No information is given about $n$ but I assume it is an integer. All I could think to do was to convert to spherical co-ordinates and use reduction formulae, but I ended up with really messy answers. Any help would be brilliant.
REPLY [2 votes]: Following runaround's excellent hint (by symmetry, each of the three terms contributes the same amount), the required integral is $$3\int_{-1}^1 z^{2n}\underbrace{\iint dx\,dy}_{\text{area of the disk of radius }\sqrt{1-z^2}}\,dz=3\pi \int_{-1}^1 z^{2n}(1-z^2)\,dz={12\pi\over(2n+3)(2n+1)}.$$<|endoftext|> TITLE: Convergence of Riemann sums of a periodic function QUESTION [11 upvotes]: Short version for people who don't like reading: Let $f\colon\mathbb{R}\to\mathbb{R}$ be $1$-periodic, measurable and bounded. Is it true that, for almost all $x$, the average of $f(x)$, $f(x+\frac{1}{n})$, $f(x+\frac{2}{n})$, …, $f(x-\frac{1}{n})$ tends to $\int_0^1 f$ when $n\to+\infty$? And now for a more detailed version of the question: Let $\mathbb{T} := \mathbb{R}/\mathbb{Z}$ so that functions $\mathbb{T}\to\mathbb{R}$ can be identified with $1$-periodic functions $\mathbb{R}\to\mathbb{R}$. If $f\colon\mathbb{T}\to\mathbb{R}$, we define $\mathscr{M}_n(f)\colon\mathbb{T}\to\mathbb{R}$ by $$(\mathscr{M}_n(f))(x) := \frac{1}{n}\sum_{k=0}^{n-1} f\Big(\!x+\frac{k}{n}\Big)$$ the $n$-th “Riemann sum” of $f$, i.e., the average of the $n$ translates of $f$ by $n$-th periods. If also $f \in L^1(\mathbb{T})$ we define $\mathscr{E}(f)\colon\mathbb{T}\to\mathbb{R}$ by $$(\mathscr{E}(f))(x) := \int_{\mathbb{T}} f(t)\,dt$$ (constant function!) the integral, i.e., overall average of $f$. The general problem is in what ways and under what assumptions we can say that $\mathscr{M}_n(f) \to \mathscr{E}(f)$. Precise questions are below (at end), but let me first state a few simple known facts relevant to this situation, that might help provide some background: If $f$ is a step function (where "step function" means a linear combination of characteristic functions of intervals) then $|\mathscr{M}_n(f) - \mathscr{E}(f)| \leq \frac{\|f\|_\infty}{n}$ everywhere. (Sketch of proof: if $f = \mathbf{1}_{[0,r/n)}$ with $r\in\mathbb{N}$ then in fact $\mathscr{M}_n(f) = \mathscr{E}(f)$, and if $f_c = \mathbf{1}_{[0,c)}$ with $\frac{r}{n}\leq c<\frac{r+1}{n}$ then $f_{r/n} \leq f_c \leq f_{(r+1)/n}$ everywhere so that the same inequality holds after applying $\mathscr{M}_n$, i.e., $\frac{r}{n} \leq \mathscr{M}_n(f_c) \leq \frac{r+1}{n}$, whence the conclusion for $f_c$, and then for a general step function by translating and taking linear combinations.) If $f \in L^p(\mathbb{T})$ with $1\leq p<\infty$ then $\mathscr{M}_n(f) \to \mathscr{E}(f)$ in $L^p(\mathbb{T})$. (Follows from the above by density of step functions in $L^p$ and the fact that $\mathscr{M}_n$ and $\mathscr{E}$ have norm $1$.) If $f$ is Riemann-integrable then $\mathscr{M}_n(f) \to \mathscr{E}(f)$ uniformly on $\mathbb{T}$. (Fairly obvious using the first point and the following definition of R-integrable functions: for every $\varepsilon>0$ there exist $h$ and $\varphi$ step functions such that $|f-h|\leq\varphi$ everywhere and $\int\varphi \leq \varepsilon$.) If $u_n = n\mathbf{1}_{[0,1/n)}$ then $\mathscr{M}_n(f) - \mathscr{E}(f) = \mathscr{M}_n(f - (f*u_n))$ (writing $*$ for convolution), and when $f$ is measurable we have $f*u_n \to f$ almost everywhere (by the existence of right Lebesgue points). The Fourier coefficients of $\mathscr{M}_n(f)$ are those of $f$ at indices that are multiples of $n$, the others being $0$; so they converge pointwise (i.e., for each given index) to those of $\mathscr{E}(f)$. Also, if the Fourier coefficients of $f$ are $\ell^q$ then the convergence of Fourier coefficients of $\mathscr{M}_n(f)$ to those of $\mathscr{E}(f)$ holds in $\ell^q$.
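To make the first fact concrete, here is a small numerical sketch (assuming NumPy; the cutoff $c$ and the random test points are arbitrary choices, not part of the statement). It computes $\mathscr{M}_n(f)$ for $f=\mathbf{1}_{[0,c)}$ and compares the worst observed error against the predicted bound $\|f\|_\infty/n=1/n$:

```python
import numpy as np

# Riemann-sum average M_n(f)(x) = (1/n) * sum_{k=0}^{n-1} f({x + k/n}),
# where {.} is the fractional part (f is treated as 1-periodic).
def riemann_average(f, n, x):
    k = np.arange(n)
    return np.mean(f((x + k / n) % 1.0))

c = np.sqrt(2) - 1                      # arbitrary cutoff in (0, 1)
f = lambda t: (t < c).astype(float)     # indicator of [0, c); E(f) = c

rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    xs = rng.random(50)                 # arbitrary sample points in [0, 1)
    worst = max(abs(riemann_average(f, n, x) - c) for x in xs)
    print(n, worst, 1.0 / n)            # observed error vs. the bound 1/n
```

The printed error stays below $1/n$, as the sketch of proof above predicts for characteristic functions of intervals.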
Update 2016-02-10: If $f$ is $L^1(\mathbb{T})$, it is not necessarily the case that $\mathscr{M}_n(f) \to \mathscr{E}(f)$ almost everywhere, or indeed, anywhere: this is a theorem of Marcinkiewicz and Zygmund ("Mean values of trigonometrical polynomials", Fund. Math. 28 (1937), chapter II, theorem 3 on p. 157). Their counterexample (which is $(-\log|x|)/\sqrt{|x|}$ on $[0,\frac{1}{2}]$) is certainly not bounded, however. Birkhoff's ergodic theorem is probably also worth mentioning here: if $\xi$ is irrational, then for all $f\in L^1(\mathbb{T})$, for almost all $x$ we have $\frac{1}{n}\sum_{k=0}^{n-1} f(x+k\xi) \to \mathscr{E}(f)$. Now at last here are my questions, motivated by the gaps left in the above facts: If $f \in L^\infty(\mathbb{T})$, do we have $\mathscr{M}_n(f) \to \mathscr{E}(f)$ in $L^\infty(\mathbb{T})$? If not, do we have $\mathscr{M}_n(f) \to \mathscr{E}(f)$ almost everywhere? [This is the "short version" at the start of this post.] REPLY [3 votes]: The answer is "no", even for $f$ measurable and bounded (or indeed, a characteristic function), and even just for convergence almost everywhere (or indeed, anywhere). This is the main result in: Walter Rudin, "An Arithmetic Property of Riemann Sums", Proc. Amer. Math. Soc. 15 (1964), 321–324.<|endoftext|> TITLE: Finding $\sqrt{17}$ and $\sqrt{257}$ in the regular $17$-gon and $257$-gon? QUESTION [5 upvotes]: (Edit: I need to revise this question with my original intent. Please do not answer it yet. Thanks.) Given the regular $n$-gon formed by the $n$-th roots of unity. For some $n$, how do we find $\sqrt{n}$ using the sum/difference of line segments? $n=5:$ It is enough to use one line segment: If $x^5=1$, then it can be the distance between the root $x_0$ on the real line, and $x_2$ in the second quadrant, $$1+\sqrt{\big(1+\cos\big(\tfrac{4\pi}{5}\big)\big)^2+\big(\sin\big(\tfrac{4\pi}{5}\big)\big)^2}=\frac{1+\sqrt{5}}{2}\tag1$$ $n=17:$ I observed that using the sum/difference of four line segments would do. Define, $$L(\alpha,\beta)=\sqrt{\left(\cos\big(\tfrac{2\pi\,\alpha}{17}\big)+\cos\big(\tfrac{2\pi \,\beta}{17}\big)\right)^2+\left(\sin\big(\tfrac{2\pi\,\alpha}{17}\big)-\sin\big(\tfrac{2\pi \,\beta}{17}\big)\right)^2}$$ then, $$L(0,3)-L(1,5)+L(3,7)+L(4,8)=\frac{1+\sqrt{17}}{2}\tag2$$ $n=257:$ $$???\tag3$$ Questions: Is there an alternative to $(2)$ that is purely a sum of positive values? How do we find $(3)$? (I assume it needs $64$ line segments.) REPLY [6 votes]: If $p\equiv1\bmod4$, then $\sum_{a=0}^{p-1}e^{2\pi ia^2/p}=\sqrt p$. And if $p\equiv3\bmod4$ the right side of the sum equality is just $i\sqrt{p}$. Such sums, for either case, are called Gauss sums. For example, in a regular hendecagon with vertices numbered 0 through 10 in rotational order, the Gauss sum shows that the distances from vertex $k$ to vertex $11-k$, with $k$ nonzero and a negative sign attached for $k=2$, add up to $\sqrt{11}$ times the circumradius.<|endoftext|> TITLE: Is there any way to solve integral of $\sqrt{8-x^{2}}$ without using $\sin$ or $\cos$ formulas? QUESTION [5 upvotes]: I was wondering whether I could solve the following integral without using trigonometric formulas. If there is no other way to solve it, could you please explain why we replace $x$ with $2\sqrt 2 \sin(t)$? I'm really confused about these types of integrals. $$\int \sqrt{8 - x^2}\, dx$$ REPLY [7 votes]: By rescaling the variable, let us replace the constant $8$ by $1$, for convenience.
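(To make the rescaling explicit, this is a routine substitution; the variable name $u$ is an arbitrary choice: putting $x=2\sqrt{2}\,u$, so that $dx=2\sqrt{2}\,du$ and $\sqrt{8-x^2}=2\sqrt{2}\sqrt{1-u^2}$, gives $$\int\sqrt{8-x^2}\,dx=8\int\sqrt{1-u^2}\,du,$$ so it indeed suffices to treat the constant $1$.)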
The equation $y=\sqrt{1-x^2}$ represents the upper half of the unit circle, and the integral $$\int_{t=0}^x\sqrt{1-t^2}\,dt$$ is the area of a vertical "slice" between the abscissas $0$ and $x$. You can compute it as the area of a sector of aperture $\theta$ such that $\sin(\theta)=x$, plus a triangle of base $x$ and height $\sqrt{1-x^2}$. Hence, $$A=\frac12\theta+\frac12x\sqrt{1-x^2}=\frac12\arcsin(x)+\frac12x\sqrt{1-x^2}.$$ This is how a trigonometric function appears, and you can't avoid it because it belongs to the final solution. You also see the connection by taking the derivative $$(\arcsin(x))'=\frac1{\sqrt{1-x^2}}.$$ The trigonometric function disappears and is replaced by a rational expression. A similar phenomenon occurs with the logarithm, $$(\ln(x))'=\frac1x,$$ and this is why you will see logarithms appear now and then in antiderivatives.<|endoftext|> TITLE: Local properties of morphisms of schemes QUESTION [5 upvotes]: In Hartshorne Proposition II.5.8, he shows, given a morphism $f \colon X \to Y$ where $X$ and $Y$ are schemes and $\mathcal{G}$ a quasi-coherent sheaf of $\mathcal{O}_{Y}$-modules, that $f^{*}\mathcal{G}$ is a quasi-coherent sheaf of $\mathcal{O}_{X}$-modules. Similarly, he shows that, for $f$ quasicompact and separated, the pushforward of a quasi-coherent sheaf is quasi-coherent. My question concerns his reduction to the affine case. I understand that the quasi-coherent property is local, since it can be checked on an affine cover by definition, so for proving that the pullback of a quasi-coherent sheaf is quasi-coherent, one can assume $X$ to be affine. However, why is one allowed to also assume $Y$ to be affine as it is done in the proof, while for proving the pushforward, one can only assume $Y$ affine and not $X$? REPLY [6 votes]: If $U\subseteq X$ is an open set and $V\subseteq Y$ is any open set containing $f(U)$, then the restriction of $f^*\mathcal{G}$ to $U$ depends only on the restriction of $\mathcal{G}$ to $V$ (more precisely, $(f^*\mathcal{G})|_U=g^*(\mathcal{G}|_V)$, where $g:U\to V$ is the restriction of the morphism $f$ to a morphism $U\to V$). To show $f^*\mathcal{G}$ is quasicoherent, it suffices to show that $X$ has an open cover by affine open sets $U\subseteq X$ on which $(f^*\mathcal{G})|_U$ is quasicoherent. For each affine open $V\subseteq Y$, $f^{-1}(V)\subseteq X$ can be covered by affine open sets, and these sets taken over all $V$ will cover all of $X$. So it suffices to show $(f^*\mathcal{G})|_U$ is quasicoherent if $U\subseteq X$ is affine open and there exists an affine open $V\subseteq Y$ such that $f(U)\subseteq V$. But in that case, $(f^*\mathcal{G})|_U=g^*(\mathcal{G}|_V)$ as above, so you may as well replace $f$ by $g$ and assume $X=U$ and $Y=V$ are affine. For the pushforward, if $V\subseteq Y$ is open, $(f_*\mathcal{F})|_V$ depends on the restriction of $\mathcal{F}$ to the set $f^{-1}(V)\subseteq X$, so you can replace $Y$ by $V$ and $X$ by $f^{-1}(V)$ to assume that $Y$ is affine. To assume that $X$ is affine as well as in the pullback case, you would need to have an open cover of $Y$ by affine open sets $V$ such that $f^{-1}(V)$ is affine as well. Such an open cover need not exist (for instance, if $Y$ has only one point and $X$ is not affine).
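A concrete instance of that parenthetical example, assuming the standard fact that projective space is not affine: take $X=\mathbb{P}^1_k$ over a field $k$, $Y=\operatorname{Spec} k$, and $f$ the structure morphism. The only nonempty open subset $V\subseteq Y$ is $Y$ itself, and $f^{-1}(Y)=\mathbb{P}^1_k$ is not affine, so no cover of the required kind can exist.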
So in short, the difference between the two cases is that there always exists a cover of $X$ by affine open subsets $U$ such that $f(U)$ is contained in an affine open set, but there need not exist a cover of $Y$ by affine open subsets $V$ such that $f^{-1}(V)$ is affine.<|endoftext|> TITLE: How to prove this curiosity that has to do with cubes of certain numbers? QUESTION [8 upvotes]: I saw on facebook some image on which these identities that I am going to write below are labeled as "amazing math fact" and on the image there are these identities: $1^3+5^3+3^3=153$ $16^3+50^3+33^3=165033$ $166^3+500^3+333^3=166500333$ $1666^3+5000^3+3333^3=166650003333$ and then it is written under these identities "and so on and on and on and on!" which suggests that for every $k \in \mathbb N$ we should have $(1 \cdot 10^k + \sum_{i=0}^{k-1} 6 \cdot 10^i)^3 + (5 \cdot 10^k)^3 + (\sum_{i=0}^{k} 3 \cdot 10^i)^3=16...650...03...3$ (on the right-hand side of the above stated identity the digit $6$ appears $k$ times, the digit $0$ appears $k$ times and the digit $3$ appears $k+1$ times). This problem seems attackable with mathematical induction but I would like to see how it could be proved without using mathematical induction in any step(s) of the proof. REPLY [10 votes]: This deserves a shorter proof. Call $x=10^{k+1}$. Then we have: $$\begin{array}{rcl}166\ldots 666&=&\frac{x}{6}-\frac{2}{3}\\500\ldots 000&=&\frac{x}{2}\\333\ldots 333&=&\frac{x}{3}-\frac{1}{3}\end{array}$$ Now: the sum of the cubes on the left-hand side is: $$\color{red}{166\ldots 666^3+500\ldots 000^3+333\ldots 333^3}=\left(\frac{x}{6}-\frac{2}{3}\right)^3+\left(\frac{x}{2}\right)^3+\left(\frac{x}{3}-\frac{1}{3}\right)^3=\frac{1}{6^3}((x-4)^3+(3x)^3+(2x-2)^3)=\frac{1}{6^3}(36x^3-36x^2+72x-72)=\color{red}{\frac{1}{6}(x^3-x^2+2x-2)}$$ On the other hand, the big number on the right-hand side is: $$\color{red}{166\ldots666\cdot x^2+500\ldots000\cdot x+333\ldots333}=\left(\frac{x}{6}-\frac{2}{3}\right)\cdot x^2+\frac{x}{2}\cdot x+\frac{x}{3}-\frac{1}{3}=\frac{1}{6}(x^3-4x^2+3x^2+2x-2)=\color{red}{\frac{1}{6}(x^3-x^2+2x-2)}$$ which is the same value. (Note how we used multiplication by $x^2$ and $x$ to "shift those big numbers to the left".)<|endoftext|> TITLE: How to Catch Up? QUESTION [7 upvotes]: I am finishing up my bachelor's degree in mathematics at the University of North Florida, and I plan on going to graduate school, but I feel very behind. One of my professors gave us this problem: $\frac 1a + \frac 1b + \frac 1c \ge \frac 1 { \sqrt{bc}} + \frac 1 { \sqrt{ac}} + \frac 1 { \sqrt{ab}}$ I had no idea how to solve this, and he said it was on his entrance exam for his university in Lebanon. He showed us how to solve it using the geometric mean inequality, and it was obvious after that, but why is it that we didn't immediately think of that? This is just one example of me missing a problem he would consider elementary, and I'm wondering why that is. Is the content and way of thinking so much different in other countries than in America? We really don't bother with proofs or logic until undergraduate school. I'm fine with understanding proofs, but I feel like I lack this mental database of knowledge and experience that other students have far before undergrad. Is there a good resource for me to fill in these holes?
I want to be able to identify these similarities and common patterns that will allow me to solve problems easily, but I'm not sure what knowledge that I'm missing. REPLY [3 votes]: I don't think the majority of high school students in any country would think that's an easy problem. However, there are differences between the U.S. and a number of other countries that do go some way towards explaining why something like that might be on a university entrance exam in other countries: Some countries have separate "elite" streams for the most able students. This is particularly the case where universities have very competitive entrance exams. Some countries are better able than others to attract competent and intelligent people to teach in elementary and secondary schools. This is partly related to the social status of teachers in society; in China, to take an extreme example, surveys have shown that the public views teachers as being on par with doctors in terms of status. Contrast this with most of Europe and the U.S., where many schools have serious discipline problems, in part because students (and their parents) don't respect teachers. Related to the last point, teachers in the U.S., starting at the elementary level and continuing into high school, tend not to have the requisite knowledge to teach math in a way that emphasizes justification and proof rather than just algorithms. This is very clearly demonstrated in Liping Ma's comparative study of math teachers in the U.S. and in China, Knowing and Teaching Elementary Mathematics. As a result, students are inculcated with the attitude that math is a collection of techniques to solve routine problems, and this continues all the way to college, perpetuating the cycle in the next generation of teachers. The way American textbooks are written tends to exacerbate the problem, if anything. With regard to your question about what to do now, the answer really depends on whether you're most interested in catching up on this sort of "elite" high-school-level material, or on doing the same for undergraduate-level stuff. Seeing as you're going to graduate school, I would think the latter would be more of a priority. If you're curious about high-school-level problems, what Dave Renfro recommended in his comments seems like a good idea. You could also have a look at Problems in Elementary Mathematics by Lidsky or A Problem Book in Algebra by Krechmar. (These were classics for high school students preparing for the hardest university entrance exams in the Soviet Union.) On undergraduate material, where your focus should be depends very much on your present level of skill in calculus/analysis/algebra. If you find many of the questions on the GRE math subject test difficult, you might consider reading a book like Apostol's Calculus (Vol. 1) or Spivak's Calculus, skipping parts you know well, followed by a multivariable book like Apostol's second volume. For harder problems in undergraduate analysis, besides the standard textbooks (e.g. Rudin, Apostol, Zorich), you could consider the problem book by Makarov/Goluzina or the ones by Kaczor and Nowak. In algebra, again apart from the standard textbooks (e.g. Artin, Jacobson, Dummit/Foote, Godement), there are the problem books by Proskuryakov (linear algebra) and Faddeev/Sominski. If you can read French, then there are a number of books with problems that are at a sort of "elite" level in undergraduate analysis/algebra (for entrance exams to the top engineering and science schools in France). 
The five-volume series by Ramis/Deschamps/Odoux and the four-volume one by Arnaudiès/Fraysse are the best known textbooks, but there are also problem books by Leichtnam/Schauer and another set by Francinou/Gianella (Oraux X ENS).<|endoftext|> TITLE: What are the two disjoint closed sets that cannot be separated by two disjoint open neighborhoods in the Ellentuck topology? QUESTION [5 upvotes]: Denote by $X := [\mathbb{N}]^\infty$ the set of infinite subsets of $\mathbb{N}$. Recall that the Ellentuck topology is a topology on $X$ generated by sets of the form $\{A\text{ infinite} \mid s\text{ is an initial segment of }A\text{ and }A\setminus s\subset B\}$ for some finite $s\subset \mathbb{N}$ and (infinite) $B\subset \mathbb{N}$. In the paper On completely Ramsey sets by Szymon Plewik, it was proved that the Ellentuck topology is not normal. The argument essentially reduced to a construction of a closed separable subspace, say $Y\subset X$, containing a discrete closed subset $Z$ of cardinality $2^{\aleph_0}$. Suppose that $X$ is normal. Because $Y$ is a closed subset of $X$, $Y$ is normal as well. On the one hand, since $Y$ is separable, the set of continuous functions on $Y$, denoted by $C(Y)$, is of cardinality $\le 2^{\aleph_0}$. On the other hand, by the Tietze extension theorem, every (continuous) function on $Z$ can be extended to a continuous function on $Y$, and so $C(Y)$ is of cardinality at least $2^{2^{\aleph_0}}$. A contradiction. I am wondering if one can demonstrate an explicit construction of two disjoint closed subsets of $X$ that cannot be separated by two disjoint open neighborhoods? REPLY [3 votes]: I’ll use the machinery of Proposition $\mathbf{4}$ and its proof in Szymon Plewik, On completely Ramsey sets. Let $H=\{V\cup A^*\in U:|A|=\omega\}$ and $K=\{V\cup A^*\in U:|A|<\omega\}$; $H$ and $K$ are disjoint closed discrete sets in $[\omega]^\omega$ in the Ellentuck topology. Suppose that $G$ is an Ellentuck-open set containing $H$; I’ll show that every Ellentuck-open nbhd of $K$ meets $G$. For each infinite $A\subseteq\omega$ there is a finite $s_A\subseteq V\cup A^*$ such that $$V\cup A^*\in\langle s_A,V\cup A^*\rangle\subseteq G\;.$$ The set $[\omega]^\omega\setminus\{\omega\}$ is a dense $G_\delta$ in the natural topology on $\wp(\omega)$ (i.e., the Cantor space topology), and there are only countably many finite subsets of $\omega$, so by the Baire category theorem there is a finite $s\subseteq\omega$ such that the closure $C$ of $\mathscr{A}=\left\{A\in[\omega]^\omega:s_A=s\right\}$ in the natural topology has non-empty natural interior. Thus, there are disjoint finite $t,x\subseteq\omega$ such that $\langle t,\omega\setminus x\rangle\subseteq C$. For $i=0,1$ let $\pi_i:\omega\times\omega\to\omega:\langle n_0,n_1\rangle\mapsto n_i$. Let $t_0=\pi_0\big[h^{-1}[s]\big]$ and $x_0=\pi_1\big[h^{-1}[s]\big]$; if $s_A=s$, then $A\in\langle t_0,\omega\setminus x_0\rangle$. Thus, we may assume that $t_0\subseteq t$ and $x_0\subseteq x$. Let $\langle r,V\cup t^*\rangle$ be a basic Ellentuck nbhd of $V\cup t^*$; we may assume that $r$ is large enough so that $\pi_0\big[h^{-1}[r]\big]=t$ and $\pi_1\big[h^{-1}[r]\big]\supseteq x$. Moreover, $s\setminus V\subseteq h[t\times x]\subseteq t^*$, so $s\subseteq V\cup t^*$, and we may assume that $s\subseteq r$. Now fix $A\in\mathscr{A}\cap\left\langle t,\omega\setminus\pi_1\big[h^{-1}[r]\big]\right\rangle$; clearly $r\subseteq V\cup A^*$.
Let $B=V\cup(A^*\cap t^*)$; then $$\begin{align*} B&\in\langle r,V\cup(A^*\cap t^*)\rangle\\ &=\langle r,V\cup A^*\rangle\cap\langle r,V\cup t^*\rangle\\ &\subseteq\langle s,V\cup A^*\rangle\cap\langle r,V\cup t^*\rangle\\ &\subseteq G\cap\langle r,V\cup t^*\rangle\;, \end{align*}$$ so $V\cup t^*\in K$ is in the Ellentuck closure of $G$. Thus, $H$ and $K$ cannot be separated by disjoint Ellentuck-open sets.<|endoftext|> TITLE: What is the result of a number greater than 2 raised to the power of {Aleph-0}? QUESTION [7 upvotes]: So, I know that $2^{\aleph_0} = \beth_1$, right? What about another number, say $10$, raised to the power of $\aleph_0$? Is $10^{\aleph_0} = \beth_1$ also true, or is $10^{\aleph_0} > \beth_1$ somehow? REPLY [4 votes]: We have $2^{\aleph_0} \leq 3^{\aleph_0} \leq 4^{\aleph_0} \leq \dots \leq \aleph_0^{\aleph_0}$, because of the inclusions $\{0, 1\}^\Bbb N \subset \{0,1,2\}^\Bbb N \subset \dots \subset \Bbb N^\Bbb N$. So if we prove that $\aleph_0^{\aleph_0} \leq 2^{\aleph_0}$, then we see that all of these cardinalities are in fact equal. To show this, we need to find some injection $f: \Bbb N^{\Bbb N} \to \{0, 1\}^\Bbb N$. There are many ways to do this; my favorite is as follows. Let $a = (a_n)$ be some sequence of natural numbers. Then we define $f(a)$ to be the sequence consisting of first $a_0$ ones, followed by a zero, then $a_1$ ones, followed by a zero, then $a_2$ ones, followed by a zero, and so on. This gives a sequence of zeroes and ones, and if $b = (b_n)$ is another sequence of natural numbers, then $f(a) = f(b)$ if and only if $a_n =b_n$ for all indices $n$ if and only if $a = b$. So $f$ is indeed injective, and therefore $\aleph_0^{\aleph_0} \leq 2^{\aleph_0}$. So indeed $2^{\aleph_0} = 3^{\aleph_0} = \dots = \aleph_0^{\aleph_0}$.<|endoftext|> TITLE: Master method and choosing $\epsilon$ QUESTION [5 upvotes]: I am reading CLRS3, currently Chapter 4 and Section 4.5, "The master method for solving recurrences." I understood what the $\epsilon$ is, but I can't understand why they choose $\epsilon \thickapprox 0.2$ here: $$T(n) = 3T\left(\frac{n}{4}\right) + n\lg n$$ we have $a=3$, $b=4$, $f(n) = n\lg n$, and $n^{\log_b a}=n^{\log_4 3}=O(n^{0.793})$. Since $f(n) = \Omega(n^{\log_4 3+\epsilon})$, where $\epsilon \approx 0.2$, case 3 applies if we can show that the regularity condition holds for $f(n)$. [...] Can you help me? REPLY [4 votes]: For Case 3 to apply, you need $f(n) = \Omega( n^{\log_b a+\epsilon} )$ for some constant $\epsilon>0$. In this problem $a=3$, $b=4$ and $f(n) = n\log n$, so that you need to exhibit a value of $\epsilon>0$ such that $n\log n = \Omega(n^{\log_4 3+\epsilon})$; which amounts to saying $\underbrace{\log_4 3}_{\simeq 0.792}+\epsilon\leq 1$. Here, they chose $\epsilon \simeq 0.2$ because this works. Absolutely any value of $\epsilon$ such that $$\log_4 3 < \log_4 3+\epsilon \leq 1$$ would have done the job as well.<|endoftext|> TITLE: If $\sum a_n$ and $\sum b_n$ diverge, can $\sum \min\{a_n,b_n\}$ converge? QUESTION [7 upvotes]: Do there exist sequences $\{a_n\}$ and $\{b_n\}$ satisfying all of the following properties? $a_n>0$ and $b_n>0$ $\{a_n\}$ and $\{b_n\}$ are both decreasing $\sum a_n$ and $\sum b_n$ both diverge $\sum\min\{a_n,b_n\}$ converges REPLY [5 votes]: Edit: Oops. Looking at the comments I see this is exactly what Michael has been suggesting. Sorry - these things happen. Yes. Say $1=N_1<N_2<\cdots$, where the $N_j$ increase very rapidly (say $N_{j+1}>N_j^2$). For $N_j\le n<N_{j+1}$ define $$a_n=\begin{cases}1/N_j^2,&j\text{ odd}\\1/N_{j+1}^2,&j\text{ even}\end{cases}\qquad b_n=\begin{cases}1/N_{j+1}^2,&j\text{ odd}\\1/N_j^2,&j\text{ even.}\end{cases}$$ Both sequences are non-increasing, and on each odd block $$\sum_{N_j\le n<N_{j+1}}a_n=\frac{N_{j+1}-N_j}{N_j^2}>1-\frac1{N_j},$$ so $\sum a_n$ diverges; the even blocks give the same for $\sum b_n$. On the other hand, $\min\{a_n,b_n\}=1/N_{j+1}^2$ on the $j$-th block, so $$\sum_n\min\{a_n,b_n\}<\sum_j\frac{N_{j+1}}{N_{j+1}^2}=\sum_j\frac1{N_{j+1}}<\infty.$$ (I'm assuming that decreasing means non-increasing.
You could easily jiggle the above a little to get strictly decreasing sequences.)<|endoftext|> TITLE: Fixed subfield of the field of rational functions QUESTION [5 upvotes]: Let $K(X)$ be the field of rational functions of $X$ over some field $K$. Let $\phi: K(X) \rightarrow K(X)$ be the $K$-morphism such that $\phi (X)=1-X$. We have $L:=\{ f\in K(X) : \phi (f)=f\}$. Find some element $Y \in K(X)$ such that $K(Y)=L$. I am fairly certain that $Y=(2X-1)^{2}$ and can easily show that $K(Y) \subset L$. I am having some trouble with showing that $L \subset K(Y)$. REPLY [5 votes]: Here’s another method. You know that your transformation $\phi$ is of order two, and that the “conjugate” of $X$ is $1-X$. The minimal polynomial for $X$ over the fixed field is accordingly $f(T)=T^2-T+(X(1-X))$. Here I’ve used the sum of the conjugates for the linear coefficient (with the necessary change of sign) and the product of the conjugates for the constant term. So it would seem that the fixed field is $K(X(1-X))$. Indeed, if we call $X(1-X)=\xi$ temporarily, we certainly know that $K(\xi)$ is contained in the fixed field $L$, and that $X$ is a root of $T^2-T+\xi$ and generates $K(X)$, so that $[K(X):K(\xi)]\le 2$; since $\phi$ has order two we also have $[K(X):L]=2$, which forces $L=K(\xi)$. Notice that this proof works just as well in characteristic two.<|endoftext|> TITLE: Are there sets of zero measure and full Hausdorff dimension? QUESTION [11 upvotes]: I would like to ask the following: Are there "many" sets, say in the interval $[0,1]$, with zero Lebesgue measure but with Hausdorff dimension $1$? The motivation for this question is the dichotomy between measure and category. There are certainly dense sets with zero Lebesgue measure. But a dense set need not have positive Hausdorff dimension (for example, the rationals are dense but have zero Hausdorff dimension). Honestly, I would already be satisfied with an answer to the following question: Is there any set in $[0,1]$ with zero Lebesgue measure but with Hausdorff dimension $1$? REPLY [4 votes]: For a "naturally occurring" example, let $b_1$ and $b_2$ be positive integers $\geq 2$ such that no positive integer power of $b_1$ equals a positive integer power of $b_2$ (i.e. $(b_1)^m = (b_2)^n$ has no solution where $m$ and $n$ are positive integers). Kenji Nagasaka proved in 1979 that the set of real numbers normal to base $b_1$ but not normal to base $b_2$ is a measure zero set with Hausdorff dimension $1.$ See my 5 July 2002 sci.math post Numbers normal to one base but not to another base. (Note: In that post I seem to have reversed the definitions of multiplicatively dependent and multiplicatively independent.) Actually, Nagasaka only proved the Hausdorff dimension $1$ part. The measure zero part follows from the long-known fact that all real numbers except for a set of measure zero are normal to every base.<|endoftext|> TITLE: Converse of Fermat's Little Theorem. QUESTION [5 upvotes]: If $a^n\equiv a \pmod n$ for all integers $a$, does this imply that $n$ is prime? I believe this is the converse of Fermat's little theorem. REPLY [2 votes]: Here is one more example: We see that $341 = 11 \cdot 31$ and $2^{340} \equiv 1 \pmod{341}$. To show this, we see that by routine calculations the following relations hold: $$2^{11} \equiv 2 \pmod{31}$$ $$2^{31} \equiv 2 \pmod{11}$$ Now by using Fermat's little theorem, $$(2^{11})^{31} \equiv 2^{11} \pmod{31},$$ but $2^{11} \equiv 2 \pmod{31}$, so I leave you to fill in the details of showing $2^{341} \equiv 2 \pmod{341}$.<|endoftext|> TITLE: Why is the Monotone Convergence Theorem restricted to a nonnegative function sequence?
QUESTION [15 upvotes]: Monotone Convergence Theorem for general measure: Let $(X,\Sigma,\mu)$ be a measure space. Let $f_1, f_2, ...$ be a pointwise non-decreasing sequence of $[0, \infty]$-valued $\Sigma$-measurable functions, i.e. for every $k\ge 1$ and every $x$ in $X$, $$0 \le f_k(x) \le f_{k+1}(x).$$ Next, set the pointwise limit of the sequence $\{f_n\}$ to be $f$. That is, for every $x$ in $X$, $$f(x) = \lim_{k\to \infty}f_k(x).$$ Then $f$ is $\Sigma$-measurable and $$\lim_{k\to \infty}\int f_k d\mu = \int f d\mu.$$ I've noticed that when it comes to the monotone convergence theorem (whether for Lebesgue measure or for a general measure), its statement usually restricts the monotone function sequence to be nonnegative. I'm not sure why the 'nonnegative' is necessary. REPLY [6 votes]: If the functions could be negative, then the statement would also have to be true for any sequence $f_n$ with $f_n(x)\geq f_{n+1}(x)$, by just reflecting across $0$. But then consider $f_n:[0,\infty)\rightarrow \mathbb{R}$, $f_n(x)=\frac{x}{n}$. In this case, for each $x$, $\lim_n f_n(x)=0$, and thus $f_n \rightarrow f=0$ pointwise, but the integrals are infinite for each finite $n$.
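A standard refinement that isolates what the nonnegativity is really doing (a sketch, under the usual conventions of measure theory): the theorem survives whenever the sequence admits an integrable lower bound. If $f_n\uparrow f$ pointwise and $f_n\ge g$ for all $n$, where $\int|g|\,d\mu<\infty$, then the nonnegative version applies to the sequence $f_n-g\ge0$ and yields $$\lim_{n\to\infty}\int(f_n-g)\,d\mu=\int(f-g)\,d\mu;$$ adding the finite quantity $\int g\,d\mu$ to both sides recovers $\lim_n\int f_n\,d\mu=\int f\,d\mu$. The example above shows why some such bound is needed: after reflecting, no integrable function controls $f_n(x)=x/n$ from the relevant side.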