TITLE: Prove the following series $\sum\limits_{s=0}^\infty \frac{1}{(sn)!}$ QUESTION [9 upvotes]: Prove that, $$\sum\limits_{s=0}^\infty \frac{1}{(sn)!}=\frac{1}{n}\sum\limits_{r=0}^{n-1}\exp\left(\cos\frac{2r\pi}{n}\right)\cos\left(\sin\frac{2r\pi}{n}\right)$$ I don't have a real idea on how to start approaching this question, some hints and suggestions would be helpful. REPLY [4 votes]: $\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ The roots of $\ds{z^{n} - 1 = 0}$ are given by $\ds{\braces{\exp\pars{{2\pi r \over n}\,\ic}\ \mid\ r = 0,1,\ldots,n - 1}}$. Note that $\ds{{1 \over n}\sum_{r = 0}^{n - 1}\exp\pars{{2\pi rs \over n}\,\ic}}$ is equal to $\ds{1}$ whenever $\ds{n \mid s}$ and it vanishes out otherwise. \begin{align} \color{#f00}{\sum_{s = 0}^{\infty}{1 \over \pars{sn}!}} & = \sum_{s = 0}^{\infty}{1 \over s!}\, \bracks{{1 \over n}\sum_{r = 0}^{n - 1}\exp\pars{{2\pi rs \over n}\,\ic}} = {1 \over n}\sum_{r = 0}^{n - 1}\sum_{s = 0}^{\infty}{1 \over s!}\, \bracks{\exp\pars{{2\pi r \over n}\,\ic}}^{s} \\[5mm] & = {1 \over n}\sum_{r = 0}^{n - 1}\exp\pars{\exp\pars{{2\pi r \over n}\,\ic}} = {1 \over n}\sum_{r = 0}^{n - 1}\exp\pars{\cos\pars{2\pi r \over n}} \exp\pars{\ic\sin\pars{2\pi r \over n}} \\[5mm] & = {1 \over n}\sum_{r = 0}^{n - 1}\exp\pars{\cos\pars{2\pi r \over n}}\bracks{% \cos\pars{\sin\pars{2\pi r \over n}} + \sin\pars{\sin\pars{2\pi r \over n}}\ic}\label{1}\tag{1} \end{align} \begin{align} \color{#f00}{\sum_{s = 0}^{\infty}{1 \over \pars{sn}!}} & = \color{#f00}{{1 \over n}\sum_{r = 0}^{n - 1}\exp\pars{\cos\pars{2\pi r \over n}} \cos\pars{\sin\pars{2\pi r \over n}}} \end{align} It's clear that the imaginary part of \eqref{1} vanishes out.<|endoftext|> TITLE: Are coordinates always less than norm QUESTION [5 upvotes]: Is it true in any normed vector space that the modulus of each coordinate of a given vector is less than the norm of the vector itself ? It seems clear for euclidean norms when using an orthonormal basis (even orthogonal), also true of the modulus on the complex plane, but whether true or not for any norm does anyone have a proof or counterexample ? REPLY [2 votes]: If I understand correctly, your question is the following: Let $X$ be a normed space and $B=\{b_i: i\in I\}$ be a (lets say normalized, to avoid trivial counterexamples) Hamel basis of it. Every $x\in X$ can be written as a linear combination $x=\sum_{i\in F_x} \lambda_i b_i$, where $F_x$ is a finite subset of $I$. Is it true that $|\lambda_i|\leq \|x\|$ for every $x=\sum_{i\in F_x} \lambda_i b_i\in X$ and $i\in F_x$ ? The answer depends on the underlying space, however any infinite dimensional Banach space will provide a counterexample. To see why this is the case, remember that for every element $b_i$ of the Hamel basis, we can associate a linear functional $b_i^{\#}$ with the property that $b_i^{\#}(b_j)=\delta_{ij}$, for every $j\in I$. 
These $b_i^{\#}$'s are called the coordinate functionals associated with the basis $\{b_i:i\in I\}$. It is known (for example see Are the coordinate functions of a Hamel basis for an infinite dimensional Banach space discontinuous?) that in every Banach space, at most finitely many coordinate functionals are continuous. To return to your question, the inequality $|\lambda_i|\leq \|x\|$, for every $x\in X$, implies that $b_i^{\#}$ is bounded with $\|b_i^{\#}\|\leq 1$. So, this can only happen for at most finitely many such $i's$. The answer, of course, is quite different if by "coordinates" you mean coordinates associated with a Schauder basis instead of a Hamel basis. In this case they are always bounded and in most working examples they are uniformly bounded as well.<|endoftext|> TITLE: $ab(a+b) + bc(b+c) + ac(a+c) \geq \frac{2}{3}(a^{2}+b^{2}+c^{2})+ 4abc$ for $\frac1a+\frac1b+\frac1c=3$ and $a,b,c>0$ QUESTION [6 upvotes]: Let $a$, $b$, and $c$ be positive real numbers with $\displaystyle \frac{1}{a}+\frac{1}{b}+\frac{1}{c}=3$. Prove that: $$ ab(a+b) + bc(b+c) + ac(a+c) \geq \frac{2}{3}(a^{2}+b^{2}+c^{2})+ 4abc. $$ Let us consider the following proofs. $$ a^{2}+b^{2}+c^{2} \geq ab+bc+ca $$ By the Arithmetic Mean-Geometric Mean Inequality we have $$ a^{2}+b^{2} \geq 2ab,\ \ b^{2}+c^{2} \geq 2bc,\ \ c^{2}+a^{2} \geq 2ca \tag{1} $$ If we add together all the inequalities $(1)$, we obtain $$ 2a^{2}+2b^{2}+2c^{2} \geq 2ab+2bc+2ca $$ By dividing both side by $2$, the result follows. Now let us consider, $$ ab(a+b) + bc(b+c) + ac(a+c) \geq 6abc \tag{2} $$ I already have proved $(2)$ Then, We are given, $$ \frac{1}{a}+\frac{1}{b}+\frac{1}{c}=3 \implies bc+ac+ab=3abc \tag{3} $$ Notice that we have $$ a^{2}+b^{2}+c^{2} \geq bc+ac+ab=3abc $$ So, $$ a^{2}+b^{2}+c^{2} \geq 3abc \tag{4} $$ Let us multiply both side of $(4)$ by $\displaystyle\frac{2}{3}$, yield $$ \frac{2}{3}(a^{2}+b^{2}+c^{2}) \geq 2abc $$ Here where I stopped. Would someone help me out ! Thank you so much REPLY [6 votes]: We note that \begin{align*} ab(a+b)+bc(b+c)+ac(a+c) &= a^2b+ab^2+b^2c+bc^2+a^2c+ac^2\\ &=\frac{a^2}{\frac{1}{b}}+\frac{b^2}{\frac{1}{a}} + \frac{b^2}{\frac{1}{c}}+ \frac{c^2}{\frac{1}{b}}+\frac{a^2}{\frac{1}{c}}+\frac{c^2}{\frac{1}{a}}\\ &\ge \frac{4(a+b+c)^2}{2(\frac{1}{a}+\frac{1}{b}+\frac{1}{c})} \qquad \mbox{(by the Schwarz inequality)}\\ &=\frac{2}{3}(a+b+c)^2\\ &=\frac{2}{3}(a^2+b^2+c^2+2ab+2bc+2ac)\\ &=\frac{2}{3}(a^2+b^2+c^2) + \frac{4}{3}(ab+bc+ca)\\ &=\frac{2}{3}(a^2+b^2+c^2) + 4abc. \end{align*} Here, we employed the Schwarz inequality of the from \begin{align*} &\ (a+b+b+c+a+c)^2 \\ =&\ \left(\frac{a}{\sqrt{\frac{1}{b}}}\sqrt{\frac{1}{b}}+\frac{b}{\sqrt{\frac{1}{a}}}\sqrt{\frac{1}{a}}+\frac{b}{\sqrt{\frac{1}{c}}}\sqrt{\frac{1}{c}}+\frac{c}{\sqrt{\frac{1}{b}}}\sqrt{\frac{1}{b}}+\frac{a}{\sqrt{\frac{1}{c}}}\sqrt{\frac{1}{c}}+\frac{c}{\sqrt{\frac{1}{a}}}\sqrt{\frac{1}{a}}\right)^2\\ \le&\ \left(\frac{a^2}{\frac{1}{b}}+\frac{b^2}{\frac{1}{a}} + \frac{b^2}{\frac{1}{c}}+ \frac{c^2}{\frac{1}{b}}+\frac{a^2}{\frac{1}{c}}+\frac{c^2}{\frac{1}{a}}\right)\left(2\big(\frac{1}{a}+\frac{1}{b}+\frac{1}{c} \big)\right). \end{align*}<|endoftext|> TITLE: The set of real sequences has no countable spanning set QUESTION [6 upvotes]: I'm working through some old Harvard Math 55 problem sets, and one problem in particular has got me stumped: If $F$ is field, then prove that the vector space $F^\infty$ over $F$ (the space of infinite tuples or sequences) has no countable spanning set. 
In the case that $F$ is countable or finite, we can show that the space spanned by a countable set is necessarily itself countable and therefore cannot span $F^\infty$, which is uncountable. However, that argument fails when the field is uncountable, since the span of even a single vector is an uncountable set. Any hints on how to proceed? REPLY [2 votes]: Here is one idea/hint for a proof by diagonalization. Given a countable set of sequences $C$, we want to make a sequence $s$ that is not in the span of $C$, which is to say $s$ is not a linear combination of any finite subset of $C$. To do this, it is sufficient to make an $s$ such that, for each finite subset $T$ of $C$, say of size $k_T = k$, there are some $k+1$ coordinates of $s$ such that the vectors we get in $\mathbb{R}^{k+1}$ by restricting $T$ to those coordinates do not span the vector in $\mathbb{R}^{k+1}$ obtained by restricting $s$ to those coordinates. This guarantees that $s$ is not in the span of $T$. No set of $k$ vectors in $\mathbb{R}^{k+1}$ can span $\mathbb{R}^{k+1}$, so if we choose a set of new coordinates for each finite subset $T$ of $C$, we can choose values of $s$ on those coordinates appropriately to ensure the previous condition holds. This means we just need to set up the appropriate construction to make $s$. Separately, there is a more well known argument that shows that no infinite dimensional Banach space can have a countable basis; see the question Let $X$ be an infinite dimensional Banach space. Prove that every Hamel basis of X is uncountable. In this case, we only have a vector space, so the techniques used there don't seem to apply.<|endoftext|> TITLE: Unbounded linear operator QUESTION [7 upvotes]: Let $(A, \|\cdot\|_A), (B, \|\cdot\|_B)$ be normed linear spaces. Consider $T \in L(A,B)$ The operator norm of $T$ is defined to be $$\|T\| = \sup\{\|Tx\|_B: \|x\|_A \leq 1\}$$ $T$ is bounded if $\|T\| < \infty$ otherwise it is unbounded. So can someone give me an example of an unbounded linear operator? This seems very counterintuitive to me because, that means $$\exists \space x \in A, \|Tx\|_B = \infty$$ but then any scalar multiples of $Tx$ would have an infinite norm. Then what would $T(0)$ be? REPLY [10 votes]: Take $A=B$ be the set of complex sequences with finitely many nonzero terms: $$ A=B=\{\,\{x_n\}\,:\ \exists m\in\mathbb N\ \text{ with }x_n=0\ \forall n\geq m\}. $$ Consider in both the supremum norm ($\|x\|=\max\{|x_1|,|x_2|,\ldots\}$). Define $$ T(x_1,x_2,\ldots,x_n,0,0,\ldots)=(x_1,2x_2,3x_3,\ldots,nx_n,0,0,\ldots). $$ Then $T$ is linear. And, if $e$ is the sequence $(0,\ldots,0,1,0,0,\ldots)$ (the 1 in the $n^{\rm th}$ position), then $\|e\|=1$ and $$ \|Te\|=n. $$ As we can do this for every $n$, $\|T\|=\infty$. As you can see, here $\|Tx\|<\infty$ for all $x$. Finally, you ask about $T(0)$; for a linear operator, $T(0)=0$ always (bounded or unbounded, it doesn't matter). For a different and maybe more natural example, consider $A=B$ the set of polynomials as a subset of $C[0,1]$, and let $$ Tp=p' $$ be the differentiation operator. This is an unbounded operator, since $\|x^n\|=1$ and $\|T(x^n)\|=n$ for all $n$.<|endoftext|> TITLE: Mean Width of Disjoint Union QUESTION [5 upvotes]: $\newcommand{\vol}{\operatorname{vol}}$Let $K$ be a convex body in $\Bbb R^n$ (a convex body is a convex, compact subset of $\Bbb R^n$ with nonempty interior). 
The mean width of $K$ is defined as $b(K)=\dfrac{2}{\vol_{n-1}(\partial B_2^n)} \int_{S^{n-1}} h_K(u) \,du$, where $h_K(u)=\sup_{x\in K}\langle x,u\rangle$ is the support function of $K$ and $du$ is shorthand for $dH^{n-1}(u)$. Is there a closed-form expression for the mean width of a disjoint union of convex bodies? Or perhaps a closed form expression for the support function of a disjoint union? For example, for convex bodies $K$ and $L$ the volume (and surface area) satisfy a valuation property $\vol_n(K\cup L)=\vol_n(K)+\vol_n(L)-\vol_n(K\cap L)$ (and similarly for surface area), even when $K\cup L$ is not convex. Is this true for the mean width as well, i.e. is it true that $b(K\cup L)+b(K\cap L)=b(K)+b(L)$? If not, does one of the directions in the inequality always hold? Or, by what formula or inequality are the support function $h_{K\sqcup L}$ of a disjoint union and $h_K,h_L$ related? My attempt at a solution: By Groemer's extension theorem, the mean width can be extended uniquely to a valuation on the class of polyconvex sets, so it satisfies the valuation property $b(K\cup L)+b(K\cap L)=b(K)+b(L)$. (seems fishy) Thank you in advance for your help. REPLY [2 votes]: these are actually two questions. If you define the "mean width" for non-convex bodies by the same integral over $h_K(x)=\sup_{v\in K} \langle x,v\rangle$, the "mean width" of the union of two convex bodies will in general depend on the concrete shape and the relative position of the bodies. Consider a simple example: Let $K$ be a unit disk centered at $(-d/2,0)$ and $L$ be a unit disk centered at $(d/2,0)$, $d>2$ Then $b(K)=b(L)=2$ and $b(K\cap L)=b(\emptyset)=0$. However, if an element of $S^1$ is represented by $u\in [0,2\pi]$: \begin{align*} b(K\cup L)&=\frac{1}{\pi}\int_0^{2\pi} (1+ d/2 |\cos u|) du \\ &= 2 + \frac{2d}{\pi} \int_0^{\pi/2} \cos u du = 2+ \frac{2d}{\pi} \; . \end{align*} Hence there can't be a formula expressing this integral just in terms of $b(K)$, $b(L)$ and $b(K\cap L)$. The second question is, whether $b$ can be extended from the class of convex bodies to the class of arbitrary unions of convex bodies, such that additivity holds. The answer is yes. The mean width is a valuation for convex bodies, as the support function mapping is a valuation, s. e.g. Rolf Schneider: Convex Bodies: The Brunn–Minkowski Theory, p. 330. See also p. 331 for an explicit additive extension of the support function and thus the mean width. (Groemer's extension theorem might have been used instead to prove the existence of such an extension.) The extension of the support function looks like $$h_K(x) = \sum_{\lambda \in {\mathbb R}} (\chi(H_{x,\lambda}\cap K) - \lim_{\mu\searrow \lambda} \chi(H_{x,\mu}\cap K)) \; ,$$ and the integral definition of the mean width with this extension of the support function yields the extension of the mean width.<|endoftext|> TITLE: Rooted Binary Trees and Catalan Numbers QUESTION [9 upvotes]: To form a rooted binary tree, we start with a root node. We can then stop, or draw exactly $2$ branches from the root to new nodes. From each of these new nodes, we can then stop or draw exactly $2$ branches to new nodes, and so on. We refer to a node as a parent node if we have drawn branches from it. This diagram shows all distinct rooted binary trees with at most $0,$ $1,$ $2,$ or $3$ parent nodes: (Note that, in the diagram, the roots are at the top and the branches extend downward -- somewhat contrary to what you'd expect for something called a "tree"!) 
Prove that the number of distinct rooted binary trees with exactly $n$ parent nodes is the $n^{\text{th}}$ Catalan number. To count the number of rooted binary trees, I think you do something with a power of 2, because there's two choices at each point. But that's all I have. And for the Catalan numbers, which is $C_n = \frac 1{n+1}\binom{2n}n,$ and I don't understand the other recurrence, if you could explain that, it would be great. Can someone walk me through this problem? Thanks in advance! REPLY [9 votes]: Catalan numbers satisfy the recurrence: $C_0 = 1, C_{n+1} = \sum_{i=0}^nC_iC_{n-i}, n \geq 0$ So it suffices that show that binary trees satisfy the same recurrence. Let $T_n$ be the number of binary trees with $n$ parent nodes. There is 1 tree with zero parent nodes. So $T_0=1$. For $n \geq 0$: A tree $t$ with $n+1$ parent nodes has a root with two subtrees as children $t_1$ and $t_2$. Since the root of $t$ is a parent node, $t_1$ and $t_2$ must have $n$ parent nodes together (i.e. if $t_1$ has $i$ parent nodes then $t_2$ has $n-i$ parent nodes). Then the number of ways to make children $t_1$ and $t_2$ is $\sum_{i=0}^nT_iT_{n-i}$.<|endoftext|> TITLE: What is the Cholesky Decomposition used for? QUESTION [9 upvotes]: It seems like a very niche case, where a matrix must be Hermitian positive semi-definite. In the case of reals, it simply must be symmetric. How often does one have a positive semi-definite matrix in which taking it's Cholesky Decomposition has a significant usage? REPLY [10 votes]: Symmetric and positive definite matrices that can be Cholesky factored appears in many applications: Normal equations for least squares problems. Discretizations of self adjoint partial differential equation boundary value problems. Hessians of convex functions (in many cases the Hessian is made to be convex) in optimization. Systems of equations arising from the primal-dual barrier method for linear programming. As to why one would use the Cholesky factorization rather than another matrix factorization such as the LU factorization, the answer is that Cholesky factorization is substantially faster than LU factorization because it can exploit the symmetry of the matrix and because pivoting isn't required. Cholesky factorization of sparse positive definite matrices is fairly simple in comparison with LU factorization because of the need to do pivoting in LU factorization.<|endoftext|> TITLE: Riemann and Darboux Integral of a product of two functions QUESTION [5 upvotes]: I'm studying the Darboux definition of integrability, which I completely explained here. There's an exercise that asks me to prove that the Darboux Integrability is equivalent to Riemann Integrability, but this Riemann integral is defined as the following: It first defines a 'pointed' partition (I don't know how to say it in ensligh) by the following: a 'pointed' partition $[a,b]$ is a pair $P^*=(E,ξ)$, where $P=\{t_0, t_1, \cdots, t_n\}$ is a partition of $[a,b]$ and $ξ = (ξ_1, \cdots, ξ_n)$ is a list of $n$ chosen numbers such that $t_{i-1}\le ξ_i\le t_i$ for each $i=1,\cdots ,n$. 
Now, the Riemann Integral is defined as: $$\sum(f,P^*) = \sum_{i=1}^n f(ξ_i)(t_i-t_{i-1})$$ (I didn't understand the notation for the left hand side of the equation, by the way) Finally, I'm asked to prove the following: given $f,g:[a,b]\to \mathbb{R}$ integrable functions, for the entire partition $P=\{t_0, \cdots, t_n\}$ of $[a,b]$ let $P^* = (P,ξ)$ and $P^{\#} = (P, η)$ be pointed partitions of $P$, then: $$\lim_{|P|\to 0}\sum f(ξ_i)g(η_i)(t_i-t_{i-1}) = \int_a^b f(x)g(x) \ dx$$ I guess here I need to prove that the Riemann Integral of the product of two functions if the darboux integral of $f(x)g(x)$, but it seems too obvious, I just need to verify that $f,g$ are integrable, then their product is too, isn't it? I'm pretty sure this should be a hard question. Is there another interpretation that I'm missing? REPLY [5 votes]: I have discussed the equivalence of definitions of Riemann integral based on Darboux sums and Riemann sums in this and this answer. The idea is that the Riemann sums are technically difficult to handle because of the arbitrary nature of $\xi_{i}$ in sub-interval $[t_{i - 1}, t_{i}]$ of a partition $T = \{t_{0}, t_{1}, t_{2}, \ldots, t_{n}\}$ of $[a, b]$. Hence the Darboux sums are used which replace the $f(\xi_{i})$ with supremum and infimum of $f$ on $[t_{i - 1}, t_{i}]$ and thereby remove dependency on the arbitrary $\xi_{i}$. Now I come to your question. Suppose $f, g$ are Riemann integrable on $[a, b]$. Then it is possible to prove that their product $fg$ is also Riemann integrable on $[a, b]$. By definition of Riemann integral as a limit of Riemann sums we see that $$\int_{a}^{b}f(x)g(x)\,dx = \lim_{|P| \to 0}\sum_{i = 1}^{n}f(\xi_{i})g(\xi_{i})(t_{i} - t_{i - 1})\tag{1}$$ where the notation is taken from your question. You are now asked to prove that the points $\xi_{i}$ need not necessarily chosen to be same for $f$ and $g$ so that we can use $\xi_{i}$ for $f$ and another set of points $\eta_{i}$ for $g$. This is correct and we prove it below. Note that \begin{align} |\Delta| &= \left|\lim_{|P| \to 0}\sum_{i = 1}^{n}f(\xi_{i})g(\xi_{i})(t_{i} - t_{i - 1}) - \lim_{|P| \to 0}\sum_{i = 1}^{n}f(\xi_{i})g(\eta_{i})(t_{i} - t_{i - 1})\right|\notag\\ &= \left|\lim_{|P| \to 0}\sum_{i = 1}^{n}f(\xi_{i})\{g(\xi_{i}) - g(\eta_{i})\}(t_{i} - t_{i - 1})\right|\notag\\ &\leq M\lim_{|P| \to 0}\sum_{i = 1}^{n}|g(\xi_{i}) - g(\eta_{i})|(t_{i} - t_{i - 1})\notag\\ &\leq M\lim_{|P| \to 0}\sum_{i = 1}^{n}\{M_{i}(g) - m_{i}(g)\}(t_{i} - t_{i - 1})\notag\\ &= M\lim_{|P| \to 0}\{U(P, g) - L(P, g)\}\tag{2}\\ &= M \cdot 0\notag\\ &= 0\notag \end{align} where $M$ is some bound for $|f|$ on $[a, b]$ and $M_{i}(g)$ and $m_{i}(g)$ are supremum and infimum of $g$ on $[t_{i - 1}, t_{i}]$. Hence $\Delta = 0$ and it follows that $$\int_{a}^{b}f(x)g(x)\,dx = \lim_{|P| \to 0}\sum_{i = 1}^{n}f(\xi_{i})g(\xi_{i})(t_{i} - t_{i - 1}) = \lim_{|P| \to 0}\sum_{i = 1}^{n}f(\xi_{i})g(\eta_{i})(t_{i} - t_{i - 1})\tag{3}$$ The limit in $(2)$ is $0$ and it is somewhat difficult to prove that it is $0$. I have proved it in the linked answer as the following theorem: Integrability Condition 2 A function $f$ bounded on $[a, b]$ is Riemann integrable over $[a, b]$ if and only if for every $\epsilon > 0$ there is a number $\delta > 0$ such that $$U(P, f) - L(P, f) < \epsilon$$ whenever norm $|P| < \delta$. Note: The result in question is called Bliss' Theorem which was proved by G. A. 
Bliss (see his paper) as an alternative to a more general result called Duhamel Principle.<|endoftext|> TITLE: Prove that $\sqrt{3} \notin \mathbb{Q}[\sqrt 2,\sqrt[3]{2},\dots,\sqrt[n]{2},\dots]$ QUESTION [5 upvotes]: I'm studying chapter 7.2 Algebraic extensions in Abstract Algebra by S. Lovett, and I'm stuck with an exercise problem. Let $S = \{ \sqrt[n]{2} : n \in \mathbb{Z}$ with $n \geq 2 \}$. Prove that $\sqrt{3} \notin \mathbb{Q}[S]$. Here is my argument. Suppose that $\sqrt{3} \in \mathbb{Q}[S]$. Since $\mathbb{Q}[S] = \cup \mathbb{Q}[\sqrt[n]{2}]$, there exists an $n$ such that $\sqrt{3} \in \mathbb{Q}[\sqrt[n]{2}]$. If $n$ is odd, $[\mathbb{Q}[\sqrt[n]{2}]:\mathbb{Q}]=[\mathbb{Q}[\sqrt[n]{2}]:\mathbb{Q}[\sqrt{3}]][\mathbb{Q}[\sqrt{3}]:\mathbb{Q}]$ and $[\mathbb{Q}[\sqrt{3}]:\mathbb{Q}]=2$, so we have a contradiction. Now I'm stuck at proving $\sqrt{3} \notin \mathbb{Q}[\sqrt[n]{2}]$ for $n$ even. I appreciate any help on this part or suggestion of another approach on whole problem. REPLY [3 votes]: If you can show the statement for $n$ a power of $2$, then you can argue with degrees to take care of all other $n$. Proof by induction works here. Let $\beta_k = \sqrt[2^k]{2}$. Suppose $\sqrt{3} \not\in \mathbb{Q}(\beta_k)$ where $k \leq m$. Now suppose $\sqrt{3} \in \mathbb{Q}(\beta_{m+1})$. Then there exist $a_0,a_1 \in \mathbb{Q}(\beta_m)$ such that $$\sqrt{3} = a_0 + a_1\beta_{m+1}.$$ Square both sides and try to get a contradiction.<|endoftext|> TITLE: Probability Theory, Symmetric Difference QUESTION [7 upvotes]: I'm trying to show this property of the symmetric difference between two sets defined for two sets in a universe $A$ and $B$ by $$ A\Delta B=(A\cap B^{c})\cup(B\cap A^{c}) $$ I need to show that $$ \mathbb{P}(A\Delta C)\leq \mathbb{P}(A\Delta B)+\mathbb{P}(B\Delta C) $$ for sets $A, B,$ and $C$ in the universe. I showed in the first part of the problem that $$ \mathbb{P}(A\Delta B)=\mathbb{P}(A)+\mathbb{P}(B)-2\mathbb{P}(A\cap B) $$ My idea was to note that $$ \mathbb{P}(A\Delta C)\leq\mathbb{P}(A\cap C^{c})+\mathbb{P}(C\cap A^{c}) $$ by probability laws and then leverage the fact that for any set I can write it as a union with another set. That is $$ \mathbb{P}(A)=\mathbb{P}(A\cap B)+\mathbb{P}(B^{c}\cap A) $$ and likewise for $C$ to substitute in for $P(A)$ and $P(B)$ terms. However, I end up running in circles. My TA did say I was on the right track, though. Any suggestions would be helpful. Thanks. REPLY [4 votes]: If $\chi_U$ denotes the characteristic function of the set $U$, then we have $\chi_{A\Delta B} = |\chi_A - \chi_B|$. Hence we have \begin{align*} \mathbb{P}(A\Delta C) &= \int |\chi_A - \chi_C| d\mu \\ &= \int |(\chi_A - \chi_B) + (\chi_B - \chi_C)| d\mu \\ &\leq \int (|\chi_A - \chi_B| + |\chi_B - \chi_C|) d\mu \\ &= \int (|\chi_A - \chi_B|d\mu + \int |\chi_B - \chi_C|) d\mu\\ &= \mathbb{P}(A\Delta B) + \mathbb{P}(B\Delta C) \end{align*}<|endoftext|> TITLE: In a math paper, is this considered redundant? QUESTION [26 upvotes]: If I want to put emphasis on a matrix being entrywise-nonnegative, can I write in my paper "...and thus there exists a nonnegative matrix $\mathcal{Q} \in \mathcal{M}_N(\mathbb{R^+})$..." or is that redundant and should be avoided? is it instead better to say "...and thus there exists a matrix $\mathcal{Q} \in \mathcal{M}_N(\mathbb{R^+})$..." Personally, I like the first choice, in order to put emphasis on the existence of a nonnegative matrix, but I wonder whether it's bad style for math writing. 
I'm using $\mathcal{M}_N(\mathbb{R^+})$ to denote the set of $n\times n$ matrices with nonnegative entries. Thanks, REPLY [20 votes]: First of all, I'm not sure "nonnegative matrix" means, unambiguously, "matrix with nonnegative entries." But my bigger issue with ...and thus there exists a nonnegative matrix $\mathcal{Q} \in \mathcal{M}_N(\mathbb{R^+})$... is that you may not need the notation at all. Sometimes programs that debug code will point out when you're declaring a variable but not using it for anything. It's not exactly an error, but it can complicate your writing. If you're going to refer to the set of $N\times N$ matrices with nonnegative entries several times, name it at the beginning of a paragraph. Then use the shorthand notation later on. As in: Let $\mathcal{M}_N(\mathbb{R^+})$ be the set of all $N\times N$ matrices with nonnegative entries. ... yada yada yada ... and thus there exists $\mathcal{Q} \in \mathcal{M}_N(\mathbb{R^+})$ such that ... This way, someone who missed what exactly $\mathcal{M}_N(\mathbb{R^+})$ was can scan backwards to the beginning of the paragraph to find it. Someone who remembers can just move on without stumbling over an inline declaration. If you're not going to refer to the set often, there's no need to name it with notation. Just say ... and thus there exists an $N\times N$ matrix $\mathcal{Q}$ with nonnegative entries such that ... Erdos wrote “the best notation is no notation.” Use it only if you need it.<|endoftext|> TITLE: Maths symbol that looks like Sputnik QUESTION [20 upvotes]: I'm doing some marking for a year 8 (12 to 13-year-old) scholarship paper and I saw this symbol that looks like the Sputnik probe and I have no idea what it is, does anyone know? REPLY [43 votes]: I think it is a typesetting problem with the parentheses. My best guess at the intended question would be: $$\left( 2-\frac 12 \right) \left( 2-\frac 23 \right) \left( 2-\frac 34 \right) \left( 2-\frac 45 \right)$$ This seems reasonable given the level of the other questions.<|endoftext|> TITLE: Arrangements of a,a,a,b,b,b,c,c,c in which no three consecutive letters are the same QUESTION [16 upvotes]: Q: How many arrangements of a,a,a,b,b,b,c,c,c are there such that $\hspace{5mm}$ (i). no three consecutive letters are the same? $\hspace{5mm}$ (ii). no two consecutive letters are the same? A:(i). 1314. ${\hspace{5mm}}$ (ii). 174. I thought of using the General Principle of Inclusion and Exclusion along with letting $p_i$ denote a property that.. , and doing so, I will evaluate $E(0)$, which gives us the number of arrangements without any of the properties. How do I go about doing that? I am trying to use the method stated above in solving this question, but I am unable to generalize the properties $p_i$. REPLY [11 votes]: This answer is based upon a generating function of generalized Laguerre polynomials \begin{align*} L_k^{(\alpha)}(t)=\sum_{i=0}^k(-1)^i\binom{k+\alpha}{k-i}\frac{t^i}{i!} \end{align*} The Laguerre polynomials have some remarkable combinatorial properties and one of them is precisely suited to answer problems of this kind. This is nicely presented in Counting words with Laguerre series by Jair Taylor.
We find in section $3$ of this paper: Theorem: If $m_1,\ldots,m_k,n_1,\ldots,n_k$ are non-negative integers, and $p_{m,n}(t)$ are polynomials defined by \begin{align*} \sum_{n=0}^\infty p_{m,n}(t)x^n=\exp\left(\frac{t\left(x-x^m\right)}{1-x^m}\right) \end{align*} then the total number of $k$-ary words that use the letter $i$ exactly $n_i$ times and do not contain the subwords $i^{m_i}$ is \begin{align*} \int_0^\infty e^{-t}\prod_{j=1}^k p_{m_j,n_j}(t)\,dt \end{align*} Here we consider a $3$-ary alphabet $\{a,b,c\}$ and words built from the characters $$a,a,a,b,b,b,c,c,c$$ First case: Bad words $\{aaa,bbb,ccc\}$ We have $m_1=m_2=m_3=3, n_1=n_2=n_3=3$ We obtain with some help of Wolfram Alpha \begin{align*} p_{3,3}(t)&=[x^3]\exp\left(\frac{t\left(x-x^3\right)}{1-x^3}\right)\\ &=[x^3]\left(1+tx+\frac{1}{2}t^2x^2+\left(\frac{1}{6}t^3-t\right)x^3+\cdots\right)\\ &=\frac{1}{6}t^3-t \end{align*} It follows \begin{align*} \int_0^\infty e^{-t}\left(\frac{1}{6}t^3-t\right)^3\,dt=1314 \end{align*} Second case: Bad words $\{aa,bb,cc\}$ We have $m_1=m_2=m_3=2, n_1=n_2=n_3=3$ and obtain \begin{align*} p_{2,3}(t)&=[x^3]\exp\left(\frac{t\left(x-x^2\right)}{1-x^2}\right)\\ &=[x^3]\left(1+tx+\left(\frac{1}{2}t^2-t\right)x^2+\left(\frac{1}{6}t^3-t^2+t\right)x^3+\cdots\right)\\ &=\frac{1}{6}t^3-t^2+t \end{align*} It follows \begin{align*} \int_0^\infty e^{-t}\left(\frac{1}{6}t^3-t^2+t\right)^3\, dt=174 \end{align*}<|endoftext|> TITLE: Is the trace of a unitary matrix always real? QUESTION [5 upvotes]: In a physics context I work with the SU(2) and SU(3) matrix groups. Let $U$ be such a special unitary matrix. There are expressions like $(U - \mathrm{h.c.})$ which turn out to be traceless. This means that the trace of $U$ is always real. I played around with Mathematica to generate matrices $U$ from the generators $\sigma_i$ and the matrix exponential function. The real part of the trace of an SU(2) matrix in terms of the three algebra components nicely oscillates: The imaginary part seems to be virtually zero, the numerical matrix exponential generates small imaginary parts of say $10^{-10}$. From the color coding this seems rather zero: I know that $U = \exp(\mathrm i \alpha_i \sigma_i)$, that $U^{-1} = U^\dagger$ and a couple of other identities. However I cannot deduce that the trace is always real. Is it true that the trace of special unitary matrices is always real? How can one show it? REPLY [8 votes]: The answer is yes for $2 \times 2$ special unitary matrices. We need some facts here: the determinant is the product of all eigenvalues; the trace is the sum of all eigenvalues; the eigenvalues of a unitary matrix have magnitude $1$. It follows that a $2 \times 2$ special unitary matrix has two complex eigenvalues satisfying $\lambda_1 \lambda_2 = \det U = 1$, as well as $|\lambda_1| = |\lambda_2| = 1$. However, this means that $\lambda_1 \overline{\lambda_1} = \lambda_1\lambda_2 = 1 \implies \overline{\lambda_1} = \lambda_2$ (the bar denotes the complex conjugate). We then have that the trace of $U$ will be $\lambda_1 + \overline{\lambda_1} = 2 \operatorname{Re}[\lambda_1]$. So, the trace of $U$ is necessarily real. The answer is no for $3 \times 3$ special unitary matrices. In particular, consider the matrix $U = e^{2 \pi i/3}I$.
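A minimal numerical sanity check of this counterexample, assuming NumPy is available:

```python
import numpy as np

# U = e^{2*pi*i/3} * I lies in SU(3): it is unitary with determinant 1,
# yet its trace 3*e^{2*pi*i/3} is not real.
w = np.exp(2j * np.pi / 3)        # primitive cube root of unity
U = w * np.eye(3)

print(np.allclose(U.conj().T @ U, np.eye(3)))   # True  -> U is unitary
print(np.isclose(np.linalg.det(U), 1.0))        # True  -> det(U) = w^3 = 1
print(np.trace(U))                              # approx (-1.5+2.598j), not real
```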
This will also fail to hold for larger matrices.<|endoftext|> TITLE: An amazing approximation of $e$ QUESTION [10 upvotes]: As we can read in Wolfram MathWorld's article on approximations of $e$, the base of natural logarithm, An amazing pandigital approximation to e that is correct to $18457734525360901453873570$ decimal digits is given by $$\LARGE \left(1+9^{-4^{6 \cdot 7}}\right)^{3^{2^{85}}}$$ found by R. Sabey in 2004 (Friedman 2004). The cited paragraph raises two natural questions. How was it found? I guess that Sabey didn't use the trial and error method. Using which calculator can I verify its correctness "to $184\ldots570$ decimal digits"? REPLY [15 votes]: $$\begin{aligned} (1+9^{-4^{42}})^{3^{2^{85}}} &=(1+9^{-4^{42}})^{3^{2\cdot2^{84}}}\\ &=(1+9^{-4^{42}})^{9^{2^{84}}} \\ &=(1+9^{-4^{42}})^{9^{4^{42}}}\\ &=\Bigl(1+\frac1{9^{4^{42}}}\Bigr)^{9^{4^{42}}},\qquad\text{which has the form }\left(1+\frac1n\right)^n\text{ with }n=9^{4^{42}}. \end{aligned}$$ This is just the limit definition of $e$ with a large number as an approximation for $\infty$. Edit: Numberphile just did a video on this, which also gives a pandigital approximation for $\pi$, but it's only accurate up to ten digits.<|endoftext|> TITLE: Generalization of Lagrange's theorem $(1768)$? QUESTION [5 upvotes]: Here's the theorem : Let $p$ be a prime number and $u_0,...,u_n$ a list of integers such that $p\not \mid u_n$. Then : $u_nx^n+...+u_1x+u_0 \equiv 0\pmod p$ admits at most $n$ solutions $\pmod p$. The proof can be done using induction on $n$ and the property of the prime number $p$. Now, I was wondering how it would work if we consider an integer $k$ instead of $p$. The statement will give : Let $k$ be an integer and $u_0,...,u_n$ a list of integers such that $\gcd(k,u_n)=1$. Then how many solutions $\pmod k$ does the equation $u_nx^n+...+u_1x+u_0 \equiv 0\pmod k$ admit? I think we can start with the decomposition theorem $k=p_1^{a_1}...p_l^{a_l}$. Maybe it will give a system in CRT style. Here's my attempt : First important fact : $k\mid u_n\Leftrightarrow p_1^{a_1}...p_l^{a_l}\mid u_n\Rightarrow \exists i\in \{1,...,l\}, \ p_i^{a_i}\mid u_n$. For a factor $p_i^{a_i}$ we try to find the number of solutions $\pmod{p_i^{a_i}}$ of the equation : $u_nx^n+...+u_1x+u_0\equiv 0 \pmod{p_i^{a_i}}$. By induction on $n$ I have : -For $n=1$ : the equation becomes $u_1x\equiv -u_0 \pmod{p_i^{a_i}}$. But we have $\gcd(u_1,p_i)=1$ and by property of Bézout we can deduce that $\gcd(u_1,p_i^{a_i})=1$. So $u_1$ has an inverse element and we can take $x\equiv -u_1^{-1}u_0 \pmod{p_i^{a_i}}$ which represents one solution (the only one). -For the inductive step from $n$ to $n+1$ : the equation becomes $u_{n+1}x^{n+1}+u_nx^n+...+u_1x+u_0\equiv 0 \pmod{p_i^{a_i}}$. If I consider $y$ a solution of the equation with multiplicity $e=1$ (for instance) we have the fact that we can factorize the equation by $(x-y)^{e}$. It gives $(x-y)^{e}P(x)\equiv 0 \pmod{p_i^{a_i}}$ with $P$ a degree $n$ polynomial and with highest coefficient $u_{n+1}$ such that $\gcd(p_i^{a_i},u_{n+1})=1$. So the equation admits at most $n+1$ solutions $\pmod{p_i^{a_i}}$. So there are at most $n$ solutions $\pmod{p_i^{a_i}}$ for each $i\in \{1,...,l\}$. If I want to use the CRT it gives a system of $l$ lines where each polynomial admits at most $n$ solutions. How can I conclude $\pmod k$ (it's not a field) ? If we suppose that for each $p_i^{a_i}$ there are at most $n$ solutions we can factorize the $l$ lines with $n$ factors.
Unfortunately this fact is false (look at $(x-1)(x-2)(x-4)\equiv 0 \pmod{9}$ which have $4$ solutions instead of $3$). Here is the main system : $\left\{\begin{array}{rl} u_{n}(x-x_{1_1})(x-x_{1_2})...(x-x_{1_n}) &\equiv 0 \pmod{p_1^{a_1}} \\ &\vdots \\ u_{n}(x-x_{i_1})(x-x_{i_2})...(x-x_{i_n}) &\equiv 0 \pmod{p_i^{a_i}} \\ &\vdots \\ u_{n}(x-x_{l_1})(x-x_{l_2})...(x-x_{l_n}) &\equiv 0 \pmod{p_l^{a_l}} \\ \end{array} \right.$ And for instance by Euclid's lemma (for the case of $(a_i)_{i\{1,...,l\}}=1$) to count the number of systems : for $u_{n}(x-x_{1_1})$ we have $(n^{(l-1)})$ systems possible with one solution. It's the same for each $u_{n}(x-x_{i_j})$ with $j\in \{1,...,n\}, \ i=1$ right.If it's the case it will give $n^l$ solutions. Thanks in advance ! REPLY [4 votes]: By CRT, the number of solutions mod $k$ is the product of the numbers of solutions mod $p_i^{a_i}$. So we reduce to working mod $p^a$. But you want to make your assumption $\gcd(k, a_n) =1$, not $k \not\mid a_n$. Suppose your polynomial $f(x)$ has degree $n$. Mod $p$ it may have up to $n$ linear factors, counted by multiplicity. Each solution mod $p^{a}$ is also a solution mod $p$. Hensel's lemma says if $f(r) \equiv 0 \mod p$ and $f'(r) \not\equiv 0 \mod p$, there is a unique solution of $f(x)=0 \mod p^a$ such that $x \equiv r \mod p$. But if $f'(r) \equiv 0 \mod p$ (i.e. $r$ is a multiple root mod $p$), there may be more: maybe as many as $p^{a-1}$. So if $a > 1$ we get a bound of $p^{a-1} n/2$ solutions mod $p^a$. This is probably not best possible, but it's a start.<|endoftext|> TITLE: How logarithm is properly defined on a field? QUESTION [8 upvotes]: Given a field $(F,+,\times)$, an exponential function is defined as a function $E:F\to F$ s.t. $E(x+y)=E(x)E(y)$ and $E(0)=1$ where $0$ is the additive identity and $1$ is the multiplicative identity. I am curious how logarithm is properly defined for $F$ and how the connection with the exponential function is made? Is it defined as a $L:F \to F$ function s.t. $L(xy)=L(x)+L(y)$ and $L(1) = 0$? Any explanation and reference is welcome. Thank you! REPLY [4 votes]: I suppose that the function defined by the functional equation is continuous, so to avoid ''wild'' solutions. In this case, to see when we can define an inverse (logarithm) function, define: $$ A_0=\{x\in F : E(x)=1\} $$ Now we can prove that if $A_0=\{0\}$ that $E(x)$ is invertible. Prove by contraposition: If we have $x_1,x_2 \in F$ such that $E(x_1)=E(x_2)$ than $E(x_1-x_2)=E(x_1)E(-x_2)=E(x_1)E(x_2)^{-1}=E(x_1)E(x_1)^{-1}=1$, so $x_1-x_2 \in A_0$: contradiction. In this case we can define an inverse of the $E$ function: $$ L=E^{-1}:E(F)\to F \quad L(a)=x \quad \mbox{such that}\quad E(x)=a $$ and we can prove that $L(ab)=L(E(x)E(y))=L(E(x+y))=x+y=L(a)+L(b)$. This is the case if $F=\mathbb{R}$. But, if there is $x_0\ne0 \in A_0$ than we can prove that $E(x)$ is a periodic function because: $E(x+nx_0)=E(x)E(nx_0)=E(x)E(x_0)^n=E(x)\cdot 1^n= E(x)$. So the function $E$ is not invertible and if we want define a ''logarithm'' we have to chose one period that fix a ''principal value'' for the inverse function. This is the case if $F=\mathbb{C}$.<|endoftext|> TITLE: Topology of the space of conformal classes of metrics QUESTION [7 upvotes]: Definitions: Consider on a fixed smooth manifold $M$ the space $\text{Met}(M)$ of Riemannian metrics on $M.$ This lives inside an infinite dimensional topological vector space (in fact, in is a Frechet space). 
Two metrics $h,g \in \text{Met}(M)$ are said to be conformally equivalent if there exists a nonvanishing (a fortiori positive) smooth function $f$ such that $g(-,-)=fh(-,-).$ This defines an equivalence relation on $Met(M).$ We define the quotient space $\text{Conf}(M):= \text{Met}(M)/\{\text{conformal equivalence}\}$ and endow it with the quotient topology. My question: It's clear that $\text{Met}(M)$ is contractible since any two metrics can be joined by a straight-line homotopy. Is it true that $\text{Conf}(M)$ is also contractible? Note that we have the fiber sequence $\{\text{positive functions on M}\} \to \text{Met}(M) \to \text{Conf}(M).$ Since the first two spaces are contractible by straight-line homotopies, it follows from the long exact sequence on homotopy groups that $\text{Conf}(M)$ has vanishing homotopy groups. If this space had the homotopy type of a CW complex, it would be contractible by Whitehead's theorem. However I don't see why it would have the homotopy type of a CW complex... REPLY [4 votes]: Since you did not specify the topology on the space of conformal classes, I will make up my own. Namely, the set of conformal structures on $M^n$ can be identified with the set of reductions of the frame bundle to the bundle whose structure group is the conformal group $CO(n)\cong R_+\times O(n)$. In other words, this is the set of sections of the bundle $E$ over $M$ whose fibers are copies of $F=GL(n,R)/CO(n)$, which is a contractible manifold. I will therefore equip $Conf(M)$ with the $C^\infty$-compact-open topology on the space of sections of $E\to M$. Now my answer to this question proves that $Conf(M)$ is contractible.<|endoftext|> TITLE: Can you transpose a matrix using matrix multiplication? QUESTION [7 upvotes]: Say you have a matrix A = \begin{bmatrix}a&b\\c&d\end{bmatrix} and I want it to look like $A^T$ = \begin{bmatrix}c&a\\d&b\end{bmatrix} Can this be done via matrix multiplication? Something like a matrix T such that $T*A = A^T$. REPLY [15 votes]: If there were such a $T$, we would have that $T = T \times I = I^T = I$, where $I$ is the identity matrix. But then it would follow that $A = I \times A = T \times A = A^T$ for all matrices $A$; i.e., that all matrices are their own transposes. As this is not true, we conclude there cannot be any such $T$ as desired.<|endoftext|> TITLE: If $C$ is the Cantor set, then $C+C=[0,2]$. QUESTION [11 upvotes]: Question : Prove that $C+C=\{x+y\mid x,y\in C\}=[0,2]$, using the following steps: We will show that $C\subseteq [0,2]$ and $[0,2]\subseteq C$. a) Show that for an arbitrary $n\in\mathbb{N}$ we can always find $x_n,y_n\in C_n$, where $$C_n=\left[0,\frac1{3^n}\right]\bigcup \dots \bigcup\left[\frac{3^{n}-1}{3^n},1\right]$$such that for a given $s\in[0,2]$ we have $x_n+y_n=s$. b) Then we will set $x=\lim x_n$ and $y=\lim y_n$, then $x+y=s$. My progress : Showing that $C+C\subseteq [0,2]$ is obvious, and I did part a) by showing that if $x_n,y_n$ are in different subintervals then concluding that $x_n+y_n$ covers $[0,2]$ (can be done using induction on $n$). My difficulty is in the second part. The sequence $x_n$ is bounded so it must have a convergent subsequence $(x_{n_{k}})$. If we set $x=\lim x_{n_{k}}$, then we can conclude $\lim y_{n_{k}}=y=s-x$, thus $x+y=s$. First I thought that $x,y$ will be in $C$ as it is closed. However $(x_{n_{k}})$ may not necessarily be in $C$, so $x$ can't be in $C$ for sure. Is this last thought correct? How do I overcome this last gap in my solution? Thanks for your help. 
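For intuition, here is a small Python experiment with the construction in a): it greedily chooses ternary digits from $\{0,2\}$ so that the resulting $x,y$ sum to a given $s$ (only a numerical illustration, not part of the proof).

```python
def split_target(s, digits=30):
    """Greedily pick ternary digits a_i, b_i in {0, 2} so that
    x = sum a_i 3^-i and y = sum b_i 3^-i satisfy x + y ~ s."""
    assert 0.0 <= s <= 2.0
    x = y = 0.0
    r = s                            # remaining target, kept in [0, 2]
    for i in range(1, digits + 1):
        r *= 3                       # now r lies in [0, 6]
        c = 0 if r <= 2 else (2 if r <= 4 else 4)   # c = a_i + b_i
        r -= c                       # back in [0, 2], so the greedy step can continue
        a, b = {0: (0, 0), 2: (2, 0), 4: (2, 2)}[c]
        x += a * 3.0 ** (-i)
        y += b * 3.0 ** (-i)
    return x, y

x, y = split_target(1.37)
print(x, y, x + y)   # x + y agrees with 1.37 up to about 2 * 3**(-30)
```

Since every number whose ternary expansion uses only the digits $0$ and $2$ lies in $C$, the truncated $x$ and $y$ are genuine Cantor-set points, and $x+y$ is within $2\cdot 3^{-30}$ of $s$.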
REPLY [7 votes]: To carry out (b), note that if $\ell\le n_k$, then $x_{n_k}\in C_\ell$. Thus, for each $\ell$ there is an $m(\ell)$ such that $x_{n_k}\in C_\ell$ whenever $k\ge m(\ell)$. Let $\ell$ be arbitrary. Clearly $$x=\lim_{k\ge m(\ell)}x_{n_k}\;,$$ and $C_\ell$ is closed, so $x\in C_\ell$. Thus, $x\in\bigcap_\ell C_\ell=C$. (You might be interested in comparing your argument with the first proof here; they’re basically the same idea, just expressed a bit differently.)<|endoftext|> TITLE: How to prove from scratch that there exists $q^2\in(n,n+1)$? QUESTION [6 upvotes]: Given $n$ a positive integer, how would you prove from scratch that there exists a rational number $q$ such that $n2\}$ $A=Q\setminus B$ where $Q^+$ denotes the positive rationals. So, for the purposes of the problem, I still don't even know what the real numbers are. REPLY [6 votes]: As $1<(\frac54 )^2<2<(\frac32 )^2<3$, we may assume wlog. that $n\ge 3$. With $q=\frac ab$, our task is to find $a,b$ such that $nb^24n^5\,\}$ is a non-empty (contains $3n^3$) subset of $\Bbb N$, hence has a minimal element $a$. Clearly, $a>2n^2>1$. Then $$(a-1)^2=a^2-2a+1>a^2\left(1-\frac 2a\right)>a^2\left(1-\frac 1{n^2}\right) $$ If we assume $a^2\ge 4n^5+4n^4$, this leads to $$ (a-1)^2>4n^5+4n^4-4n^3-4n^2=4n^5+4n^2((n-1)^2-2)>4n^5$$ contradicting minimality of $a$. Hence $a^2<4n^5+4n^4$, as desired. REPLY [2 votes]: Notice that for $n > 4$ $$ n\sqrt{n + 1} - n \sqrt{n} = \frac{n}{\sqrt{n + 1} + \sqrt{n}}\geq \frac{n}{2\sqrt{n + 1}} > 1$$ so there is an integer $k$ such that $$n\sqrt{n} \leq k TITLE: 3-manifold examples of homomorphisms between fundamental groups that are not induced by continuous maps QUESTION [8 upvotes]: I am looking for some examples of closed, orientable 3-manifolds $M$ and $N$ and a homomorphism $\phi : \pi_1(M) \to \pi_1(N)$ such that $\phi$ is not induced by any continuous map $f : M \to N$. Are there examples of this occurring for lens spaces? I do not believe that this phenomenon can occur for maps between surfaces. REPLY [4 votes]: To piggyback on Moishe's answer: Every map $T^3 \to T^3 \# T^3$ has degree zero. Thus one sees immediately by investigating the cohomology that the induced map on $H^1$ is not surjective, and in particular there are many homomorphisms that cannot be realized. Suppose the map has nonzero degree; make it transverse to the connected sum sphere. Write $T^3 = M_1 \cup M_2$, where $M_i$ is the inverse image of the $i$th component of the connected sum; the boundary $\partial M_1 = \partial M_2 = f^{-1}(S^2)$. Then we have a map $(M_i, \Sigma) \to (T^3 \setminus D^3, S^2)$. Then the map $H_3(M_i, \Sigma) \to H_3(T^3 \setminus D^3, S^2)$ is still multiplication-by-$d$ (degree is an entirely local property!) Collapsing the 2-sphere boundary to get a map $(M_i, \Sigma) \to (T^3, *)$, this is again degree $d$. This implies that $\Sigma$ is not a 2-sphere, since $T^3$ is irreducible, and any map $S^3 \to T^3$ has degree zero. I claim that this implies $\Sigma$ is incompressible inside the domain. For suppose you had a compressing disc. Because $\pi_2T^3$ is trivial, this implies the compressing disc is homotopic to a disc inside $\Sigma$; but by assumption the original loop was not null in $\Sigma$. This gives a contradiction. Now an incompressible surface is $\pi_1$-injective by the loop theorem. So it must be a map $T^2 \to T^3$ that's $\pi_1$-injective; one can verify that this means that the surface is homologically nontrivial in $T^3$. But it bounds the $M_i$! 
So we've arrived at a contradiction to the original claim, and thus every map $T^3 \to T^3 \# T^3$ is degree zero. This argument is not particularly special to $T^3$; you should be able to mimic it to get many more examples. The general result is that a map $Y \to N$ has degree zero if $N$ has more prime summands than $Y$ has disjoint, non-parallel, incompressible surfaces.<|endoftext|> TITLE: Inductive proof for $\binom{2n}{n}=\sum\limits_{k=0}^n\binom{n}{k}^2$ QUESTION [13 upvotes]: I want to prove the following identity using induction (not double counting method). Although it is a specific version of Vandermonde's identity and its inductive proof is presented here, but I need a direct inductive proof on this, not the general form. $$\binom{2n}{n}=\sum\limits_{k=0}^n\binom{n}{k}^2$$ I have tried to simplify $\binom{2n+2}{n+1}$ using Pascal's theorem, but did not get any result. Any help? REPLY [11 votes]: Suppose $$ \sum_{k=0}^n\binom{n}{k}^2=\binom{2n}{n}\tag{1} $$ then $$ \begin{align} \sum_{k=0}^{n+1}\binom{n+1}{k}^2 &=\sum_{k=0}^{n+1}\left[\binom{n}{k}+\binom{n}{k-1}\right]^2\tag{2a}\\ &=\sum_{k=0}^{n+1}\left[\binom{n}{k}^2+\binom{n}{k-1}^2+2\binom{n}{k}\binom{n}{k-1}\right]\tag{2b}\\ &=\binom{2n}{n}\left(1+1+\frac{2n}{n+1}\right)\tag{2c}\\ &=\binom{2n+2}{n+1}\tag{2d} \end{align} $$ Explanation: $\text{(2a)}$: Pascal's Triangle identity $\text{(2b)}$: algebra $\text{(2c)}$: apply $(1)$ and $(3)$ $\text{(2d)}$: $\binom{2n+2}{n+1}=\frac{4n+2}{n+1}\binom{2n}{n}$ Lemma: $$ \sum_{k=1}^n\binom{n}{k}\binom{n}{k-1}=\frac{n}{n+1}\sum_{k=0}^n\binom{n}{k}^2\tag{3} $$ Proof: Since $\binom{n}{k-1}=\frac{k}{n-k+1}\binom{n}{k}$, we have $\binom{n}{k}+\binom{n}{k-1}=\frac{n+1}{n-k+1}\binom{n}{k}$. Therefore, $$ \frac{n-k+1}{n+1}\left[\binom{n}{k}+\binom{n}{k-1}\right]\binom{n}{k-1}=\binom{n}{k}\binom{n}{k-1}\tag{3a} $$ Since $\binom{n}{k}=\frac{n-k+1}{k}\binom{n}{k-1}$, we have $\binom{n}{k}+\binom{n}{k-1}=\frac{n+1}{k}\binom{n}{k-1}$. Therefore, $$ \frac{k}{n+1}\left[\binom{n}{k-1}+\binom{n}{k}\right]\binom{n}{k}=\binom{n}{k-1}\binom{n}{k}\tag{3b} $$ Adding $(3a)$ and $(3b)$ and cancelling yields $$ \frac{n-k+1}{n+1}\binom{n}{k-1}^2+\frac{k}{n+1}\binom{n}{k}^2=\binom{n}{k-1}\binom{n}{k}\tag{3c} $$ Summing $(3c)$ over $k$, and substituting $k\mapsto k+1$ in the leftmost sum, gives $$ \frac{n}{n+1}\sum_{k=0}^n\binom{n}{k}^2=\sum_{k=1}^n\binom{n}{k-1}\binom{n}{k}\tag{3d} $$ QED<|endoftext|> TITLE: Given subsequences converge, prove that the sequence converges. QUESTION [8 upvotes]: I have looked through previous posts but have been struggling with this problem. The sequence is {$a_n$} and its subsequences {$a_{2k}$}, {$a_{2k+1}$}, {$a_{3k}$} converge. I have to prove that {$a_n$} converges. I know that a sequence converges if all of its subsequences converge. I'm suspecting that I have to prove that every subsequence belongs into these 3 subsequences. Thank you for your time and help. REPLY [12 votes]: Hint: it is sufficient to show that the convergent sequences $\{a_{2k}\}$ and $\{a_{2k+1}\}$ have the same limit. To see that they do have the same limit, note that $\{a_{2k}\}$ and $\{a_{2k+1}\}$ each have a subsequence that is a subsequence of the convergent sequence $\{a_{3k}\}$. REPLY [5 votes]: If all of the even subsequences converge and all of the odd subsequences converge, then the original sequence will converge iff both of the above subsequences converge to the same value. 
To show both the even and odd subsequences converge to the same value, you need a third subsequence that converges and covers more than a finite amount of even and odd values. That third subsequence is given here as $\{a_{3k}\}$. More generally, if two subsequences converge and every term in the original sequence belongs to one of the two subsequences, then convergence of the original sequence requires a third sequence that takes terms from both sequences infinitely many times.<|endoftext|> TITLE: A question about straight lines that bisect the area of a triangle QUESTION [5 upvotes]: Let $K$ be a convex subset of the Euclidean plane $E(2)$ whose boundary is a triangle. Is it true that there cannot exist 4 pairwise distinct concurrent straight lines in $E(2)$, each of which bisects the area of $K$? Many results in the literature strongly suggest that this is true, but I have not so far been able to find a theorem that actually states or implies that it is. REPLY [5 votes]: As @JeanMarie has mentioned, this problem is "affine invariant"; so, if we can settle it for, say, an equilateral triangle, then we will have solved it for all triangles. Now, given $\triangle ABC$, and a particular vertex, and a continuous parameter, we can consider the family of area-bisectors that form sub-triangles with the vertex. For instance, for vertex $A$, we get triangles $\triangle AB_t C_t$ where $$B_t = A + t\;(B-A) \quad\text{and}\quad C_t := A + \frac{1}{2t}\;(C-A) \quad\text{, for}\;\;\frac{1}{2}\leq t \leq 1 \tag{1}$$ Here, because $$2|\triangle AB_t C_t| = |\overline{AB_t}||\overline{AC_t}|\;\sin A = t\;|\overline{AB}|\cdot\frac{1}{2t}|\overline{AC}| \sin A = \frac{1}{2}|\overline{AB}||\overline{AC}|\sin A = |\triangle ABC|$$ we have that $\overline{B_tC_t}$ is in fact an area-bisector. (Note that the restrictions on $t$ guarantee that this bisector crosses the sides adjacent to $A$.) If we take our triangle to have vertices $A = (1, 0)$, $B = (\cos\frac{2\pi}{3}, \sin\frac{2\pi}{3})$, $C = (\cos\frac{4\pi}{3}, \sin\frac{4\pi}{3})$, then we compute the equation $$\overleftrightarrow{B_tC_t} :\qquad x\;(1+2 t^2) \;-\; y \;\sqrt{3}\;(1-2 t^2) \;-\; ( 1 - t )( 1 - 2t) \;=\; 0 \tag{2}$$ The "envelope" of this family of lines arises from eliminating $t$ from $(2)$ and its derivative with respect to $t$ $$4 xt\;+\;4y t\;\sqrt{3}\;+\; 3 -4 t\;=\;0 \tag{3}$$ to obtain this hyperbola: $$ 8 ( x-1 )^2 - 24 y^2 = 9 \tag{4}$$ Its center is at $A$, and its asymptotes are the extended sides $\overleftrightarrow{AB}$ and $\overleftrightarrow{AC}$. The corresponding envelopes related to vertices $B$ and $C$ are appropriately-rotated copies of this hyperbola: Observe that the three hyperbolas are pairwise tangent, and that the medians of the triangle form common tangent lines. In fact, when we restrict our attention in accordance with $(1)$, it turns out that we only care about the hyperbolic arcs determined by the points of tangency. Let's zoom in: This curvilinear triangle is cut into six regions by the original triangle's medians. Importantly, and quite clearly, no tangent line to any one hyperbolic arc enters either of the two regions "opposite" that arc. We see, then, that a point in the interior of any of those regions lies on three tangent lines; in this figure ... ... dots indicate how many tangents of a hyperbolic arc meet a given region. 
At points along the borders between regions ---that is, along the triangle's medians--- "four" tangents can meet, but two of them coincide with the median, bringing the number of distinct tangents down to three. (The center point has ostensibly "six" concurrent tangents ---two per arc--- though obviously only three are distinct.) Since tangents to the hyperbolic arcs are precisely the triangle's area-bisectors, we have shown that at most three distinct bisectors may concur at a point. $\square$<|endoftext|> TITLE: Efficiently computing Schur complement QUESTION [8 upvotes]: I would like to compute the Schur complement $A-B^TC^{-1}B$, where $C = I+VV^T$ (diagonal plus low rank). The matrix $A$ has $10^3$ rows/columns, while $C$ has $10^6$ rows/columns. The Woodbury formula yields the following expression for the Schur complement: $$A-B^TB+B^TV(I+V^TV)^{-1}V^TB.$$ This expression can be evaluated without storing large matrices, but the result suffers from numerical inaccuracies. Is there a numerically more stable way of computing the Schur complement, without high memory requirements? REPLY [3 votes]: Let us decompose the matrix $VV^T$ into its eigenvectors. Thus we have, $VV^T = \Sigma u_iu_i^T$ where $u_i$ is defined where $ u_i = \sqrt[2]\lambda_i \alpha_i$ where $\lambda_i$ is an eigenvalue of $VV^T$ and $\alpha_i$ is the corresponding eigenvector. Since the matrix $VV^T$ is positive definite its eigenvectors would be orthogonal meaning that $u_i^Tu_j= \sqrt[2]\lambda_i\alpha_i^T\alpha_j\sqrt[2]\lambda_j = 0$ when $i \neq j$.Applying Sherman-Morrison formula, for a $u_1$ we get - $(I+u_1u_1^T)^{-1} = I^{-1} - \frac{u_1u_1^T}{1 + u_1^Tu_1}$. This expression simplifies to $(I+u_1u_1^T)^{-1} = I - \frac{u_1u_1^T}{1 + \lambda_1}$ where $\lambda_1$ is the eigenvector corresponding to $u_1$. Now consider $(I^{-1}+ u_1u_1^T +u_2u_2^T) = (I+u_1u_1^T)^{-1} + \frac{(I - \frac{u_1u_1^T}{1 + \lambda_1})u_2u_2^T(I - \frac{u_1u_1^T}{1 + \lambda_1})}{1 + u_2^Tu_2}$. Since $u_1^Tu_2 =0$, $u_2u_2^Tu_1u_1^T = 0$. Hence the numerator simplifies as $u_2u_2^T$. Thus, $(I + u_1u_1^T + u_2u_2^T)^{-1} = I - \frac{u_1u_1^T}{1+\lambda_1} - \frac{u_2u_2^T}{1+\lambda_2}$. By induction it can be proven that $(I + \Sigma u_iu_i^T)^{-1} = I - \Sigma \frac{u_iu_i^T}{(1 + \lambda_i)}$. This can be used to compute the inverse of $(I + VV^T)$ with sufficient numerical stability. Regarding the final computation I think multiplication is inevitable. The time complexity of inversion by the given method is still $O(n^3)$ where $VV^T$ has dimensions $nxn$. So this would be the time complexity of the overall operation as well. But the memory required is $O(n^2)$. Also the net inaccuracy would be proportional to $\Sigma \Delta \lambda_i$ where $\Delta \lambda_i$ is the error in computing the eigenvalues.<|endoftext|> TITLE: Double Integral Math Olympiad Problem QUESTION [13 upvotes]: I was taking a Math Olympiad test and one of the questions was to calculate the following double integral: $$\int_0^\infty\int_0^\infty\frac{\log|(x+y)(1-xy)|}{(1+x^2)(1+y^2)}\ \mathrm{d}x\ \mathrm{d}y$$ Here, as usual, $\log a$ and $|a|$ are the natural logarithm and absolute value of $a$ respectively. I'm guessing that you're not supposed to solve it analytically, but rather find some symmetry argument or clever simplification that would make it straightforward. Since I don't even know where to start, any help is welcome. In case you want to know, this was taken from the 2016 Rio de Janeiro State Math Olympiad, known in Portuguese as OMERJ. 
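For reference, a rough numerical estimate of the integral (a quick SciPy sketch over a truncated domain; the cutoff $R$ is an ad hoc choice, and the logarithmic singularity along $xy=1$ may make the quadrature warn about slow convergence):

```python
import numpy as np
from scipy import integrate

def f(y, x):
    # integrand log|(x+y)(1-xy)| / ((1+x^2)(1+y^2)); the log blows up
    # (integrably) along x*y = 1 and at the origin.
    return np.log(abs((x + y) * (1 - x * y))) / ((1 + x**2) * (1 + y**2))

# Truncate the infinite square [0, inf)^2 at R; the tail decays roughly
# like log(x*y) / (x^2 * y^2), so it contributes little for large R.
R = 200.0
val, err = integrate.dblquad(f, 0.0, R, lambda x: 0.0, lambda x: R)
print(val, err)   # compare against whatever closed form one conjectures
```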
REPLY [5 votes]: $\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\color{#f00}{\int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{\bracks{x + y}\verts{1 - xy}} \over \pars{1 + x^{2}}\pars{1 +y^{2}}} \,\dd x\,\dd y}\ =\ \overbrace{\int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{x} \over \pars{1 + x^{2}}\pars{1 + y^{2}}}\,\dd x\,\dd y}^{\ds{=\ 0}} \\[5mm] +&\ \int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{1 + y/x} \over \pars{1 + x^{2}}\pars{1 + y^{2}}}\,\dd x\,\dd y + \int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{\verts{1 - xy}} \over \pars{1 + x^{2}}\pars{1 + y^{2}}}\,\dd x\,\dd y \end{align} In the second integral, in the RHS, lets $\ds{x\ \mapsto\ 1/x}$: \begin{align} &\color{#f00}{\int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{\bracks{x + y}\verts{1 - xy}} \over \pars{1 + x^{2}}\pars{1 +y^{2}}} \,\dd x\,\dd y} \\[5mm] = &\ \int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{1 + y/x} \over \pars{1 + x^{2}}\pars{1 + y^{2}}}\,\dd x\,\dd y + \int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{\verts{1 - y/x}} \over \pars{1 + x^{2}}\pars{1 + y^{2}}}\,\dd x\,\dd y \\[5mm] = &\ \int_{0}^{\infty}{1 \over x^{2} + 1}\int_{0}^{\infty} {\ln\pars{\verts{1 - y^{2}/x^{2}}} \over 1 + y^{2}}\,\dd y\,\dd x \,\,\,\stackrel{y/x\ \mapsto\ y}{=}\,\,\, \int_{0}^{\infty}{x \over x^{2} + 1}\int_{0}^{\infty} {\ln\pars{\verts{1 - y^{2}}} \over 1 + x^{2}y^{2}}\,\dd y\,\dd x \\[5mm] = &\ \int_{0}^{\infty}\ln\pars{\verts{1 - y^{2}}}\ \overbrace{\int_{0}^{\infty} {x \over \pars{x^{2} + 1}\pars{y^{2}x^{2} + 1}}\,\dd x} ^{\ds{\ln\pars{y} \over y^{2} - 1}}\ \,\dd y\ =\ \int_{0}^{\infty}{\ln\pars{\verts{1 - y^{2}}}\ln\pars{y} \over y^{2} - 1}\,\dd y \end{align} Now, we split the integral along $\ds{\pars{0,1}}$ and $\ds{\pars{1,\infty}}$. Later on, we make the substitution $\ds{y \mapsto 1/y}$ in the second integral $\pars{~\mbox{along}\ \pars{1,\infty}~}$: \begin{align} &\color{#f00}{\int_{0}^{\infty}\int_{0}^{\infty} {\ln\pars{\bracks{x + y}\verts{1 - xy}} \over \pars{1 + x^{2}}\pars{1 +y^{2}}} \,\dd x\,\dd y} \\[5mm] = &\ \int_{0}^{1}{\ln\pars{1 - y^{2}}\ln\pars{y} \over y^{2} - 1}\,\dd y + \int_{0}^{1}{\bracks{\ln\pars{1 - y^{2}} - 2\ln\pars{y}}\ln\pars{y} \over y^{2} - 1}\,\dd y \\[5mm] = &\ 2\int_{0}^{1} {\ln\pars{1 - y^{2}}\ln\pars{y} - \ln^{2}\pars{y} \over y^{2} - 1}\,\dd y \,\,\,\stackrel{y^{2}\ \mapsto\ y}{=}\,\,\, 2\int_{0}^{1} {\ln^{2}\pars{y}/4 - \ln\pars{1 - y}\ln\pars{y}/2 \over 1 - y}\,{1 \over 2}\,y^{-1/2}\dd y \\[5mm] = &\ {1 \over 4}\ \underbrace{% \int_{0}^{1}{y^{-1/2}\ln^{2}\pars{y} \over 1 - y}\,\dd y} _{\ds{=\ 14\zeta\pars{3}}}\ -\ {1 \over 2}\ \underbrace{\int_{0}^{1}{y^{-1/2}\ln\pars{y}\ln\pars{1 - y} \over 1 - y}\,\dd y}_{\ds{=\ -\pi^{2}\ln\pars{2} + 7\zeta\pars{3}}}\ =\ \color{#f00}{{1 \over 2}\,\pi^{2}\ln\pars{2}} \end{align} The last two integrals can be straightforward reduced to derivatives of the Beta Function and evaluated at suitable limits.<|endoftext|> TITLE: Why I can't divide by y in this equation: 4y = y? 
QUESTION [7 upvotes]: I have this equation $$ 4y = y $$ If I divide by y in both sides I would get this: $$ 4 = 1$$ And this does not have sense. I know that the solution is 0 but why I get this answer when dividing by y. What's the logic behind? REPLY [3 votes]: Whenever you do something like: $y^2 = 4y $ so $y^2/y = 4y/y $ Or $y (y+1)=2 (y+1) $ so $y (y+1)/(y+1)=2 (y+1)/(y+1) $ Ever SINGLE time, you absolutely MUST, with no exceptions EVER, you must acknowledge, the possibility that what you are dividing by might be zero and thus you can not do it, and you must make a case for this. You must do this EVERY time. So for example: If you are solving $y^2 = 7y $ and you say: $y^2 = 7y$ $y^2/y= 7/y$ $y=7$ Then you have done it WRONG!!! That is the wrong answer. You must do it this way instead. "$y^2 = 7y $" "There are two cases we must consider. Case 1: $y \ne 0$ And case 2: $y=0$. If $y\ne 0$ then we can divide both sides by $y $ and $y^2/y = 7y/y $ $y = 7$ BUT we must also consider case 2. If $y= 0$ we can NOT divide by $y $. In this case $y=0$ so we are done. Our answer is two possibilities: either $y=7$ or $y=0$. So.... To do $4y = y $ we MUST do two cases: Case 1: $y \ne 0$. IF so (and this is an "if"; it might not be true) we can divide by $y $ and get $4=1$. As this is impossible, this case is not true. It is NOT the case that $y\ne 0$. Case 2: $y = 0$. This must be true. So our answer is $y=0$ and we can not EVER divide by 0. You must do these two cases every single £¥₩@ing time. If you don't you are doing it wrong. Period.<|endoftext|> TITLE: Show that the numerator of $1+\frac12 +\frac13 +\cdots +\frac1{96}$ is divisible by $97$ QUESTION [14 upvotes]: Let $\frac{x}{y}=1+\frac12 +\frac13 +\cdots +\frac1{96}$ where $\text{gcd}(x,y)=1$. Show that $97\;|\;x$. I try adding these together, but seems very long boring and don't think it is the right way to solving. Sorry for bad english. REPLY [5 votes]: This is along the lines of DanielV's proposed solution. We know $97$ is prime. Multiplying both sides by $96!$, it's enough to show that $\sum_{n=1}^{96} \frac{96!}{n}$ (which is a sum of integers) is divisible by $97$, because $96!$ and $97$ are coprime. Let $a_n = \frac{96!}{n}$ for $n\in\{1,\ldots,96\}$. Then we have $n a_n = 96!$, so $a_n \equiv 96! n^{-1} \pmod {97}$, where $n^{-1}$ is the inverse of $n$ in the integers mod $97$. Then $\sum_{n=1}^{96} a_n$ is divisible by $97$ because $$ \sum_{n=1}^{96} a_n \equiv 96! \sum_{n=1}^{96} n^{-1} \color{red}{=} 96! \sum_{n=1}^{96} n = 96! \frac{96 \cdot 97}{2} \equiv 0 \pmod{97}.$$ The red $\color{red}{=}$ holds because the inverse $n \mapsto n^{-1}$ is a permutation of $\{1,\ldots,96\}$.<|endoftext|> TITLE: topological structure on smooth manifolds QUESTION [5 upvotes]: In John Lee's Introduction to Smooth Manifolds, a smooth manifold is defined as a topological manifold with a smooth structure. In do Carmo's Riemannian Geometry, a differentiable (smooth) manifold is defined by giving the smooth structure on merely a set $M$ and the author makes a remark that such smooth structure induces a natural topology on $M$. Here is my question: In Lee's definition, what is the relation between the topological structure and the smooth structure? Must the topology of the manifold (in Lee's definition) induced by the smooth structure in the way that do Carmo mentions? The following are the definition and the remark by do Carmo mentioned above. 
REPLY [2 votes]: There are two aspects: First, the two definitions are easily not the same, since in Lee's definition of a topological manifolds it is assumed that the topology is Hausdorff and second countable. Thus the two standard examples: The line with two origins (Non Hausdorff) and any uncountable set with the discrete topology (Non second countable) are smooth manifold in DoCarmo's definition, but not in Lee's definition. Second, the above is the only difference. If we assume further that in DoCarmo's definition the topology induced on the set $M$ is both Hausdorff and second countable, then $M$ with the topology given is a topological manifold, and the atlas $\{ (x_\alpha^{-1}, x_\alpha (U_\alpha)\}_{\alpha}$ is a smooth structure (as in Lee's definition) on $M$. Remark: Though Lee's definition is the "morally correct" one, in practical situation people almost always use DoCarmo's convention $x_\alpha : U_\alpha\to M$ to perform local calculations (one cannot do any meaningful local calculation on $x_\alpha(U_\alpha)$, which is just an open set of an abstract topological manifold).<|endoftext|> TITLE: Do I need to cite an old theorem, if I've strengthened it, wrote my own theorem statement, with a different proof? QUESTION [25 upvotes]: Let's say I have been studying an older theorem statement and its proof. I feel that it can be improved, e.g., I can make it a stronger double-implication theorem statement, with a different proof. Would I still need to cite the paper from which I first studied the theorem? Even though my work is completely organic and self-contained? Thanks, REPLY [43 votes]: Adding this only because I think the point wasn't emphasized enough in the other answers, but it may actually hurt the credibility - or, at least, the perceived professionalism - of your paper if you did not cite the old result. An important part of research is to establish context and relevance, and anyone who happens to know or find the old result might (wrongly) conclude that you didn't carry out that part as expected if you didn't cite the prior work.<|endoftext|> TITLE: Does $K[\alpha_1, ..., \alpha_n]=K(\alpha_1, ..., \alpha_n)$ imply $\alpha_1, ..., \alpha_n$ are algebraic over $K$? QUESTION [7 upvotes]: I know how to prove that if $\alpha_1, ..., \alpha_n$ are algebraic over $K$, then $K[\alpha_1, ..., \alpha_n]$ is a field (i.e., $K[\alpha_1, ..., \alpha_n]=K(\alpha_1, ..., \alpha_n)$). I also know the converse is true for $n=1$ and also know how to prove it. However, I'm having real trouble to deal with the case $n\geq 2$. I've tried to use the same strategy with $n=1$, which envolves the surjective homomorphism $\psi:K[X]\to K[\alpha]$ with $F \mapsto F(\alpha)$ and the fact that $K[X]$ is a principal ideal domain. But that doesn't work with a similar map $\psi:K[X_1, ..., X_n]\to K[\alpha_1, ..., \alpha_n]$ since $K[X_1, ..., X_n]$ is not a principal domain. I couldn't disprove it either. I tried to find a small example with $K=\mathbb{R}$ and $n=2$, but it also couldn't figure it out. Any ideas? Thanks! REPLY [6 votes]: Yes. This follows from (or is essentially the content of) the Zariski lemma: a finitely generated algebra over a field $k$ that is itself a field is finite over $k$. You can find several proofs in these notes by Pete Clark, around page 206.<|endoftext|> TITLE: Simplifying an Expression further QUESTION [5 upvotes]: I have trouble doing this and I'm not sure why are they the same and what are the steps I need to do to reach the simplified answer. For example ... 
$\frac{-8}{\sqrt{128}}$ This is the same as $\frac{-1}{\sqrt{2}} $ I'm not sure how to reach the simplified expression from the earlier expression .. Can anyone help me ? Thanks ! REPLY [5 votes]: Here are the steps: $$\frac {-8}{\sqrt{128}}=$$Simplify the bottom fraction $$ \frac {-8}{8\sqrt{2}}$$ cancel out the 8's $$ \frac {-1}{\sqrt{2}}$$<|endoftext|> TITLE: Proof of Binomial Sum via Double Counting QUESTION [5 upvotes]: I have attempted to double count the following equivalence but to no avail. I'm unable to arrive at the Left Hand Side. $\displaystyle \sum_{k=0}^n \frac{(-1)^k}{k+3} \binom{n}{k} = \frac{2}{(n+1)(n+2)(n+3)} = \frac{1}{n+1}-\frac{2}{n+2}+\frac{1}{n+3}$ REPLY [2 votes]: This is the $m = 2$ case of the more general identity $$\sum_{k=0}^n \frac{(-1)^k}{k + m + 1} \binom nk = \frac{m!n!}{(m + n + 1)!} = \sum_{k=0}^m \frac{(-1)^k}{k + n + 1} \binom mk.$$ We’ll prove the left equation, since the right equation is symmetric. Consider the following game. From a shuffled stack of $n + m + 1$ envelopes containing different numbers, one envelope is set aside, and $m$ envelopes are dealt to the player’s hand. The player chooses any subset of the remaining $n$ envelopes to add to their hand. Then all the envelopes are opened. The player wins if all numbers in their hand are higher than the number that was set aside. Eve plays a game, choosing a random even-sized subset of the remaining $n$ envelopes. Odin plays a game, choosing a random odd-sized subset. Their respective winning probabilities are clearly $$p_E = \frac{1}{2^{n - 1}}\sum_{\text{$k$ even}} \frac{1}{k + m + 1} \binom nk, \quad p_O = \frac{1}{2^{n - 1}}\sum_{\text{$k$ odd}} \frac{1}{k + m + 1} \binom nk.$$ To compare these strategies, construct a correspondence between Eve games and Odin games by finding the largest-numbered of the $n$ envelopes, and retroactively adding it to, or removing it from, the player’s hand. Any winning Odin game is converted into a winning Eve game (because adding an envelope larger than one already in the hand has no effect, and removing an envelope can only help). Any winning Eve game is converted back into a winning Odin game, unless Eve’s hand had none of the $n$ envelopes (only the required $m$), and all of the $m$ numbers are larger than the number set aside, and all of the $n$ numbers are smaller than the number set aside. The difference between Eve’s and Odin’s winning probabilities is the probability of this exception: $$p_E - p_O = \frac{1}{2^{n-1}} · \frac{m!n!}{(m + n + 1)!},$$ yielding the desired result.<|endoftext|> TITLE: If $A$ is a path connected subspace, is $q_*: \pi_1(X) \to \pi_1(X/A)$ a surjection? QUESTION [5 upvotes]: Let $X$ be a topological space and $A \subset X$ be a path connected subspace. Let $q: X \to X/A$ be the quotient map. Every example i work out it seems that the induced map $q_*: \pi_1(X) \to \pi_1(X/A)$ is a surjection. Is this always true? Are there some conditions under which it is true? Thank you in advance! REPLY [5 votes]: For CW complexes, you can do the following: Use the long exact sequence of homotopy groups to prove that $\pi_1(X)\rightarrow \pi_1(X,A)$ is surjective. Use excision for homotopy groups (Hatcher, Prop. 4.28) to prove that $\pi_1(X,A)\rightarrow \pi_1(X/A)$ is surjective. EDIT: There is a shorter argument using Van Kampen's theorem: $X/A$ is homotopy equivalent to the union of $X$ and $CA$ (the cone of $A$) along $A$. 
As all three spaces are path connected, we have that the homomorphism $\pi_1(X)*\pi_1(CA)\to \pi_1(X/A)$ is surjective, and since $CA$ is contractible, we have that $\pi_1(X)\to \pi_1(X/A)$ is surjective as well. Notice that this works in more generality than CW-complexes.<|endoftext|> TITLE: Is this Galois theory proof of Fundamental Theorem of Algebra correct? QUESTION [6 upvotes]: I am studying Galois theory through Lang's Algebra and Dummit-Foote's Abstract Algebra. While studying the Fundamental Theorem of Algebra's proofs from both books I spent a lot of time to understand and in the process tried to simplify or rewrite the proof. I like to do this several times. For the proof we need two facts or results as follows: (a) There are no non-trivial finite extensions of $\Bbb R$ of odd degree. (b) There are no quadratic extensions of $\Bbb C$. Fundamental Theorem of Algebra : $\Bbb C$ is algebraically closed. Proof : Since $\Bbb R$ has characteristic $0$, every finite extension is separable. Hence $\Bbb R(i)/ \Bbb R$ is separable (Because $\Bbb R(i)/ \Bbb R$ is finite extension). $\Bbb R(i)=\Bbb C$ is contained in a finite Galois extension $K$ over $\Bbb R$. (By Corollary 23(Dummit-Foote): Let $E/F$ be any finite separable extension. Then $E$ is contained in an extension K which is Galois over $F$ and is minimal in the sense that in a fixed algebraic closure of $K$ any other Galois extension of $F$ containing $E$ contains $K$. We used $E=\Bbb R(i)$ and $F=\Bbb R$.) Let $G$ be the Galois group of $K/ \Bbb R$. Using fact (a), since there are no non-trivial finite extensions of $\Bbb R$ of odd degree, we have $|G|$ is even. Therefore $|G|=2^n m$ where $m$ is an odd number and $n \ge 1$. Let $H$ be a sylow$-2-$subgroup of $G$ and $F$ be the fixed field of $H$. Hence $|G:H|=m=|F:\Bbb R|$. But again by fact (a), $|G:H|=m=1$ $\Rightarrow G=H$ is a $2-$group. We know that p-groups have subgroups of all orders and they all are normal subgroups. Also $[K:\Bbb R]=[K: \Bbb R(i)][\Bbb R(i): \Bbb R] \Rightarrow 2^n=[K: \Bbb R(i)](2) \Rightarrow [K: \Bbb R(i)]=2^{n-1}.$ Hence Gal$(K/\Bbb R(i))$ is a $2-$group of order $2^{n-1}$ where $n \ge 1$ where $n \gt 1$ would mean that this group is non-trivial and $n=1$ would mean that it is trivial. If $n \gt 1$, Since $2-$groups have subgroups of all orders (Being p-groups), there exists an extension of $\Bbb R(i)=\Bbb C$ of order $2$ which is contradiction to fact (b). So we can say that $n=1$ and Gal$(K/ \Bbb R(i))=1.$ Hence $K=\Bbb R(i)=\Bbb C$. I have ommited proofs of facts (a) and (b) as they are precisely the same as in Dummit and Foote. Also I have mentioned only those things of which I want to be sure whether they are correct or not. REPLY [4 votes]: Although there is a lot of correct material in this proof, I see two flaws. One of them is critical, the other is superficial. Flaw #1: The critical flaw is that you are not postulating an algebraic extension of $\mathbb{C}$. Thus in some sense the proof never begins. The statement that $\mathbb{C}$ is algebraically closed is the statement that if $K$ is an algebraic extension of $\mathbb{C}$, then $K=\mathbb{C}$. Thus you should begin the proof by supposing that $K$ is any algebraic extension of $\mathbb{C}$. This is not how you defined $K$. You introduced it as a Galois extension of $\mathbb{R}$ containing $\mathbb{C}$, which is guaranteed to exist by Dummit&Foote corollary 23. 
Thus when you prove things about $K$, you are not proving them about any algebraic extension of $\mathbb{C}$ but only about a specific one you have constructed in the proof. To drive the point home, you don't even need corollary 23 to construct a finite Galois extension $K$ over $\mathbb{R}$ containing $\mathbb{C}$. $K=\mathbb{C}$ is already such an extension. So you could have replaced the sentence "$\mathbb{C}$ is contained in a finite Galois extension $K$ over $\mathbb{R}$" with the sentence "$K=\mathbb{C}$ is a finite Galois extension of $\mathbb{R}$" and the logical work of the sentence wouldn't really have changed. But then if afterwards you proved that $K=\mathbb{C}$, you wouldn't have proved anything at all. Flaw #2: It is the case that $p$-groups have subgroups of every order dividing the group order (I assume this is what you mean by "all orders"), and that $p$-groups have normal subgroups of each of these orders as well, but it is not true that every subgroup of a $p$-group is normal, since nonabelian $p$-groups do exist. What is true is that for each order dividing the group order, there exist subgroups of that order, and at least one of them is normal.<|endoftext|> TITLE: For vector p-norm, can we prove it is decreasing without using derivative? QUESTION [6 upvotes]: Consider the vector $p$-norm defined as $\left(\sum_{i=1}^n |x_i|^p \right)^{\frac{1}{p}}$ for any $p\ge 1$ and vector ${\bf{x}}=(x_1,...,x_n)$. The following proves it is decreasing with respect to $p$ by taking the derivative (you don't need to read the whole proof, just have a look). However, I am wondering whether there is another approach that avoids the derivative. Is there any proof for the monotonicity of the $p$-norm without using derivatives? The upper bound can be proved by Holder's inequality, as in Relations between p norms. We have plenty of inequalities that lead to the definition of the $p$-norm: Young's inequality, Jensen's inequality, Holder's inequality, Minkowski’s inequality. Maybe there is a proof using those inequalities? REPLY [3 votes]: In Help show $(∑_{i=1}^n |x_i | )^p≥∑_{i=1}^n |x_i |^p $ using common inequalities (like Holder's inequality) you find a proof for the special case $$\|x\|_1 \ge \|x\|_p.$$ Now, replace $x$ by $|y|^r$ and you find $$\|y\|_r^{r}=\||y|^r\|_1 \ge \||y|^r\|_p = \|y\|_{rp}^{r}.$$ Taking the $r$th root you have $$\|y\|_r \ge \|y\|_{rp}$$ and this settles the general case.
However, theres clearly more to the argument then the boundedness of the continued fraction of $e$, because the terms of $e$'s continued fraction expansion are unbounded and yet it is not a Liouville number. Also, if possible, i would like to avoid using continued fractions at all. This book has the following as an exercise: Prove that $e$ is not a Liouville number. (Hint: Follow the irrationality proof of $e^n$ given in the supplements to Chapter 1.) Unfortunately, the supplements to Chapter 1 are not publically available in the sample and I do not want to buy that book. This book states: Given any $\varepsilon > 0$, there exists a constant $c(e,\varepsilon) > 0$ such that for all $p/q$ there holds $\frac{c(e,\varepsilon)}{q^{2+\varepsilon}} < \mid e - \frac{p}{q} \mid$. [...] Using [this] inequality, show that $e$ is not a Liouville number. Which, given the inequality, I managed to do. But I do not have any idea of how one would go about proving that inequality. I greatly appreciate any help! REPLY [5 votes]: Using Gauss continued fraction for $\tanh$, it is not difficult to show that the continued fraction of $e$ has the following structure: $$ e=[2;1,2,1,1,4,1,1,6,1,1,8,1,1,10,1,1,12,\ldots]\tag{1}$$ then by studying the sequence of convergents $\left\{\frac{p_n}{q_n}\right\}_{n\geq 1}$ through $$\left|\frac{p_n}{q_n}-\frac{p_{n+1}}{q_{n+1}}\right|=\frac{1}{q_n q_{n+1}}=\frac{1}{q_n(\alpha_{n+1}q_n+q_{n-1})}\tag{2}$$ and $$ \left|e-\frac{p_n}{q_n}\right| = \left|\sum_{k\geq n}\frac{(-1)^k}{q_k q_{k+1}}\right| \tag{3} $$ we may easily get that there is no rational approximation such that $$ \left|e-\frac{p_n}{q_n}\right|\leq \frac{1}{q_n^4}\tag{4} $$ hence $e$ is not a Liouville number. It is not difficult to use $(1)$ to prove the stronger statement The irrationality measure of $e$ is $2$.<|endoftext|> TITLE: Probability of 4 consecutive heads in 10 coin tosses QUESTION [7 upvotes]: I am trying to compute the probability of having 4 (or more) consecutive heads in 10 coin tosses. I tried using recursion but it led to a complicated expression so i think i did not quite manage. I saw similar questions asked here that were solved with difficult approaches, but this problem looks like it could be solved in a couple of lines so I must be doing something wrong. If anyone could help me understand it or propose a different approach for solving the problem i would be very grateful. Have a nice day! REPLY [7 votes]: In general, this question has been answered here several times, eg here. For these particular small numbers, you might group the total count by the position of the first successful run So $N = N_1 + N_2 + \cdots N_7$ $N_1 = 2^6$ [HHHH******] $N_2 = 2^5$ [THHHH*****] $N_3 = 2^5$ [*THHHH****] $N_4 = 2^5$ [**THHHH***] $N_5 = 2^5$ [***THHHH**] $N_6 = 2^5 -2$ [****THHHH*] minus [HHHHTHHHH*] $N_7 = 2^5 -3$ [*****THHHH] minus ([HHHHTTHHHH] , [THHHHTHHHH] , [HHHHHTHHHH]) Which gives $N=64 + 6 \times 32 - 5 = 251$ So the probability is $N/2^{10} = 251/1024$<|endoftext|> TITLE: Sum of series : $1+11+111+...$ QUESTION [10 upvotes]: Sum of series $1+11+111+\cdots+11\cdots11$ ($n$ digits) We have: $1=\frac {10-1}9,$ $11=\frac {10^2-1}9$ . . . 
$11...11= \frac {10^n-1}9$ (number with $n$ digits) and summing them we find the sum ($S$) as: $S=(10^{n+1}-9n-10)/81$ Also the general form of terms is: $s(n)=(10^{n+1}-10^n-9)/81$ Now consider the function: $f(x)=10^{x+1}−10^x-9$ Since $\Delta x= 1$, due to definition of integral we can write: $S=(1/81)\sum (10^{x+1}−10^x-9), [1, ∞]$ $ =(1/81)∫(10^{x+1}-10^x-9) dx ;[0, 1]$ but it does not work. Can someone say what went wrong, i.e, Why doesn't the integral give $S$ as I mentioned first? I realized now that this is more a sequence rather than a series. A sequence is a set of numbers which are resulted from a general term where as a series is a set of functional elements; the derivative of elements of a sequence is zero and its integration is pointless. So using the integration of general term of a sequence to find its sum is just not needed. REPLY [22 votes]: $$ \begin{align*} S &= \sum_{i=1}^n (10^i-1)/9 \\[6pt] &= \frac{1}{9} \left(\sum_{i=1}^n 10^i - \sum_{i=1}^n 1 \right) \\[6pt] &= \frac{1}{9} \left(\frac{10}{9}(10^n -1) - n\right) \\[6pt] &= \frac{10}{81} (10^n -1) - \frac{n}{9} \end{align*} $$<|endoftext|> TITLE: **Intuition** behind Lagrange Multipliers QUESTION [5 upvotes]: Why does this method actually work?I learnt that a while ago and have been using it ever since. I apply it to almost every optimisation problem. This method just seemed too good to be true. Now, I wonder why this works. (The answer does not need to be rigorous, an intuitive sketch will work just fine for me.) REPLY [11 votes]: In the simplest case, there is a constraint curve $C:g(x,y)=0$ in the $xy$-plane. Above it is a surface $z=f(x,y)$. If all the points on $C$ are plugged into $f$, then you get some sort of path wondering around the hills and valleys of the surface. In a Lagrange problem, you want to find the highest (or lowest) elevation on that path. If you look down at the $xy$-plane, you can see $C$ and also a bunch of concentric(-ish) level curves $k=f(x,y)$. Now you want find the level curve of $f$ with the largest $k$ that intersects $C$. So pick out a level curve and notice that it intersects $C$ a couple times. Now start increasing $k$. At some point, there will be a very last level curve that intersects $C$. In my mind the level curves are shrinking with increasing $k$ until that last curve is just touching $C$ and any larger $k$ gives a curve that misses $C$. If these curves are smooth (the level curves and $C$) then the last time they touch, they are tangent to each other. Which means their slopes in the $xy$-plane are the same. Which means their normal vectors are parallel. Which means the gradients of the 2-variable parent functions are parallel. So we write $\nabla f = \lambda \nabla g$. If this doesn't explain it well enough, try imagining me waving my hands about in an instructive fashion.<|endoftext|> TITLE: Solving $\int_0^\infty\dfrac{dx}{(1+x^n)^n}=1$ QUESTION [14 upvotes]: A friend has challenged me to find the value of $n$ such that $\displaystyle\int_0^\infty\dfrac{dx}{(1+x^n)^n}=1.$ I know that the value of $n$ is equal to $\phi,$ but I do not know how to prove this. REPLY [11 votes]: The easy part. 
One may observe that, by the change of variable $x \to \dfrac1x$, one gets $$ \begin{align} \int_0^\infty\frac{dx}{(1+x^\phi)^\phi}&=\int_0^\infty\frac{x^{\phi^2-2}}{(1+x^\phi)^\phi}\:dx \\\\&=\int_0^\infty\frac{x^{\phi-1}}{(1+x^\phi)^\phi}\:dx \\\\&=\frac1{\phi}\int_0^\infty\frac{(1+x^{\phi})'}{(1+x^\phi)^\phi}\:dx \\\\&=\frac1{\phi}\int_1^\infty u^{-\phi}\:du \\\\&=\frac{1}{\phi(\phi-1)} \\\\&=1 \end{align} $$ if $\phi>1$ and $\phi^2=\phi+1$. REPLY [8 votes]: I don't see how do we do that unless we bring the whole thing to Beta function with some clever (or maybe trivial) substitution. Say, $t={1\over1+x^n}$. Then $x=\sqrt[n]{{1\over t}-1}$. Then $dx=-{dt\over n t^2({1/t}-1)^{1-1/n}}$, and $$\int_0^\infty\dfrac{dx}{(1+x^n)^n}=-\int_1^0t^n\left({1\over t}-1\right)^{{1\over n}-1}{dt\over n\cdot t^2}={1\over n}\int_0^1t^{n-1-{1\over n}}(1-t)^{{1\over n}-1}dt={1\over n}B\left(n-{1\over n},{1\over n}\right)={1\over n}\cdot{\Gamma\left(n-{1\over n}\right)\Gamma\left({1\over n}\right)\over \Gamma(n)}={\Gamma\left(n-{1\over n}\right)\Gamma\left(1+{1\over n}\right)\over \Gamma(n)}$$ At this point, it is pretty easy to verify directly that $n=\varphi$ fits. (Note that $\varphi-{1\over\varphi}=1$ and $1+{1\over\varphi}=\varphi$). What could we do without that afterknowledge, I wonder...<|endoftext|> TITLE: Notation for set excluding element QUESTION [12 upvotes]: Consider the set $\mathbb{A}=\{a,b,c\}$. I want to refer to "the set $\mathbb{A}$ excluding the element $a$" Is the notation $\mathbb{A} \sim \{a\}$ equivalent to $\mathbb{A} \setminus \{a\}$? Is the former an abuse of logic notation? The latter is what I am used to. REPLY [17 votes]: The notation $\Bbb A - \{a\}$ is often used to mean the same thing as $\Bbb A \setminus \{a\}$ (the set difference), but I've never seen it with a tilde and can't find any references to it being used this way with Google. The tilde $\sim$ is sometimes used as a negation or "not" symbol in set theory, in which case $$\Bbb A \setminus \{a\} = \bigl\{x : x \in \Bbb A, \sim\!(x\in\{a\})\bigr\}.$$ The tilde is also used sometimes for equivalence relations, where $x \sim y$ means $x$ and $y$ are equivalent (in the same equivalence class) under some equivalence relation $\sim$. A particularly common example of this is with the cardinality of sets. We say $A \sim B$ if $A$ and $B$ have the same cardinality, that is $|A| = |B|$, and we call them equinumerous.<|endoftext|> TITLE: Does this trig equation have no solution? $\sqrt 2\sin \left(\sqrt 2x\right)+\sin (x)=0$ QUESTION [5 upvotes]: Solve the following trig. equation for $x$ $$\sqrt 2\sin \left(\sqrt 2x\right)+\sin (x)=0$$ My try: I divided by $\sqrt{\left(\sqrt2\right)^2+1^2}=\sqrt 3$, $$\frac{\sqrt 2}{\sqrt 3}\sin\left(\sqrt 2x\right)+\frac{1}{\sqrt 3}\sin (x)=0$$ $$\sqrt{\frac{2}{3}}\sin\left(\sqrt 2x\right)+\frac{1}{\sqrt 3}\sin (x)=0$$ I got stuck here, I have no clue to proceed to find the values of $x$. REPLY [2 votes]: Very interesting question. Here it is a summary of this answer: There are an infinite number of real roots; All the roots of the given function are real; The number of roots in the interval $[0,T]$ is extremely close to $T\pi^{-1}\sqrt{2}$. 1.) Given $f(x)=\sqrt{2}\sin(x\sqrt{2})+\sin(x)$, it is enough to prove that $f(x)$ is positive/negative at infinite points of $\pi\mathbb{Z}$ to get that $f(x)$ has an infinite number of zeroes, since it is a continuous function. 
So it is enough to study the behaviour of $f(k\pi)=\sqrt{2}\sin\left(\frac{2\pi k}{\sqrt{2}}\right)$ or the behaviour of $$ e_k = \exp\left(2\pi i\cdot\frac{k}{\sqrt{2}}\right). $$ Since $\sqrt{2}\not\in\mathbb{Q}$, $\{e_k\}_{k\in\mathbb{Z}}$ is dense in the unit circle (much more actually, it is a equidistributed sequence). The projection $z\to \text{Im}(z)$ (bringing $e_k$ to $\sqrt{1/2}\,f(k\pi)$) preserves density as any continuous map, hence it follows that $f(x)$ takes an infinite number of positive/negative values over $\pi\mathbb{Z}$, hence has an infinite number of real roots. 2.) Additionally, all the roots of $f(x)$ are real. This is a consequence of the Gauss-Lucas theorem, since the entire function $f(x)$ has an antiderivative $$ F(x) = -\cos(x)-\cos(x\sqrt{2}) = -2\cos\left(\frac{1+\sqrt{2}}{2}x\right)\cos\left(\frac{1-\sqrt{2}}{2}x\right) $$ with only real roots, hence $f(x)$ cannot have any root in $\mathbb{C}\setminus\mathbb{R}$. 3.) Using a bit of topological degree theory, we have that the number of zeroes $N(T)$ of $f(x)$ in the interval $[0,T]$ is extremely well-approximated by the real part of $$ \frac{1}{\pi}\int_{0}^{T}\frac{2e^{\sqrt{2}it}+e^{it}}{\sqrt{2} e^{\sqrt{2}it}+e^{it}}\,dt = \frac{T\sqrt{2}}{\pi}-\frac{\sqrt{2}-1}{\pi}\int_{0}^{T}\frac{dt}{1+\sqrt{2}e^{(\sqrt{2}-1)it}}$$ where the last integral is bounded. It follows that $N(T)\sim \frac{T\sqrt{2}}{\pi}$. Here it is a plot of the curve $\gamma(t)=\sqrt{2}e^{\sqrt{2}it}+e^{it}$ for $t\in[0,200]$: $\hspace1.5in$ by the triangle inequality, such a curve is contained in the annulus $\sqrt{2}-1\leq |z|\leq \sqrt{2}+1$.<|endoftext|> TITLE: Proving that $\iint\limits_{x^2+y^2\leq 1} e^x \cos y dxdy=\pi$ QUESTION [6 upvotes]: I want to prove that $$\iint\limits_{x^2+y^2<1} u(x,y) dxdy=\pi$$ where $u(x,y)=e^x \cos y$. There is a theorem which says that if $u\in C^2(\Omega)$ and $\nabla^2 u=0$ in a domain $\Omega\subseteq \mathbb{R}^n$, then for any ball $B=B_R(v)$ with $\overline{B}\subset\Omega$, $$u(v)=\frac{1}{\omega_n R^n}\int_B u dx$$ where $\omega_n$ is the volume of the ball of radius 1 in $\mathbb{R}^n$. The double integral above can be seen as a particular case of the theorem, since $\omega_2=\pi$, $R=1$ and $u(0,0)=1$. It's also clear that $\nabla^2 u=0$ (It's the real part of $e^z,z\in\mathbb{C}$). I want to prove it without using this mean value theorem. In a standard way, I get to the integral $$2\int_{-1}^1 e^x\sin \sqrt{1-x^2}dx$$ which seems crazy. Numerically it seems to be $\pi$ efectively. How could I calculate the integral? REPLY [4 votes]: Write \begin{align*} 2 \int_{-1}^{1} e^x \sin \sqrt{1-x^2} \, dx &= 2 \Im \int_{-1}^{1} e^{x+i\sqrt{1-x^2}} \, dx \\ &= 2 \Im \int_{0}^{\pi} e^{e^{i\theta}}\sin\theta \, d\theta \qquad (x = \cos\theta) \\ &= \Im \int_{0}^{\pi} i e^{e^{i\theta}} (e^{-i\theta} - e^{i\theta}) \, d\theta. \end{align*} Now expanding the double exponential term using the Taylor series, we can write \begin{align*} 2 \int_{-1}^{1} e^x \sin \sqrt{1-x^2} \, dx &= \sum_{n=0}^{\infty} \frac{1}{n!} \Im \int_{0}^{\pi} i (e^{(n-1)i\theta} - e^{(n+1)i\theta}) \, d\theta. \end{align*} But notice that we have the following sifting property: $$ \Im \int_{0}^{\pi} i e^{ik\theta} \, d\theta = \begin{cases} \pi, & k = 0 \\ 0, & k \neq 0 \end{cases}. $$ Therefore we get $$ 2 \int_{-1}^{1} e^x \sin \sqrt{1-x^2} \, dx = \frac{1}{0!}(0 - 0) + \frac{1}{1!}({\color{red}\pi} - 0) + \frac{1}{2!}(0 - 0) + \cdots = \pi. $$ REPLY [4 votes]: It is not that crazy. 
It is equivalent to $$ I = 2\cdot\text{Im}\int_{0}^{1}\exp\left(x+i\sqrt{1-x^2}\right)+\exp\left(-x+i\sqrt{1-x^2}\right)\,dx $$ or to $$ I = 2\cdot\text{Im}\int_{0}^{\pi/2}\left[\exp(e^{i\theta})+\exp(-e^{-i\theta})\right]\sin(\theta)\,d\theta$$ or, by exploiting $e^z=\sum_{n\geq 0}\frac{z^n}{n!}$, to $$ I = 2\cdot\sum_{n\geq 0}\frac{1}{n!}\int_{0}^{\pi/2}\left[\sin(n\theta)-(-1)^n \sin(n\theta)\right]\sin(\theta)d\theta $$ that simplifies to: $$ I = 4\cdot\sum_{m\geq 0}\frac{1}{(2m+1)!}\int_{0}^{\pi/2}\sin((2m+1)\theta)\sin(\theta)\,d\theta $$ where only the contribute given by $m=0$ is non-zero. It follows that: $$ I = 4\int_{0}^{\pi/2}\sin^2(\theta)\,d\theta = \color{red}{\pi}.$$<|endoftext|> TITLE: "Forcing a Cohen real inside a Cohen real" not a product forcing QUESTION [6 upvotes]: My professor mentioned that binary iterated forcings $P *\dot{Q}$ cannot always be written as product forcings $P \times R$ for some forcing notion $R$, hence iterated forcing is a proper generalization. I see that an easy counterexample is $P * \dot{Q}$ where $P$ is the Cohen forcing $\dot{Q}$ is the name of the Sacks forcing notion in the extension; by minimality of Sacks it is minimal over another extension, but if it were $P \times R$ then by the Product Lemma it would not be (since Cohen forcing is not minimal). However, the example my professor gave was that if we do the Cohen forcing, and then "force a Cohen real inside of one of the new Cohen reals" (force a Cohen subset of $\{n : s(n) = 1 \}$ where $s$ is a Cohen real), then that is not the same as $P\times R$ for any forcing notion $R$ where $P$ is the Cohen forcing. Why is this? REPLY [4 votes]: Good question! Let me begin with apparently claiming that your professor is wrong: the "Cohen-in-Cohen" extension your professor describes is exactly the product-of-two-Cohens extension. Let $G$ be Cohen-generic over $V$, let $\mathbb{P}$ be Cohen forcing, and let $\mathbb{Q}$ be Cohen-in-$G$ forcing. Let $H$ be $\mathbb{Q}$-generic over $V[G]$, and let $$J=\{i: \mbox{the $i$th element of $G$ is in $H$}\}.$$ Note that the Cohen poset is the same across all forcing extensions - this is not generally true. For example, let $\mathbb{S}_0$ be Sacks forcing in $V$, $G$ be $\mathbb{S}_0$-generic, and $\mathbb{S}_1$ be Sacks forcing in $V[G]$. Then $V[G]\models\mathbb{S}_0\not\cong\mathbb{S}_1$. This is why the square Sacks extension is different from the Sacks-over-Sacks extension. I claim that $J$ is Cohen-generic over $V[G]$. To see that $J$ is Cohen over $V[G]$, note that there is an obvious isomorphism from $\mathbb{Q}$ to $\mathbb{P}$, and that this isomorphism lives in $V[G]$ and carries $H$ to $J$. So what? Well, we have $V[G][H]=V[G][J]$ (since we can recover $H$ from $G$ and $J$, and vice-versa). So $\mathbb{P}*\dot{\mathbb{Q}}$ and $\mathbb{P}^2$ yield the same generic extensions. So I would say "$\mathbb{P}^2$ and $\mathbb{P}*\dot{\mathbb{Q}}$ are the same" . . . in a sense. The issue is what "same" means, here. The two posets $\mathbb{P}^2$ and $\mathbb{P}*\dot{\mathbb{Q}}$ are not, in fact, isomorphic; so it depends what notion of "sameness" we're looking for here. The example you give is a better one, since we can distinguish the two posets even at the level of generic extensions they yield.<|endoftext|> TITLE: fish tank problem QUESTION [7 upvotes]: A rectangular swimming pool with dimensions of 11m and 8m is built in a rectangular backyard. The area of the backyard is 1120m^2. 
If the strip of yard surrounding the pool is of uniform width, how wide is the strip? So I tried to find a diagram but it doesn't seem to make sense because the backyard could have any dimensions...so i made the backyard 28 by 40 and then the answer would be 8.5 and 16 which isn't correct. REPLY [6 votes]: The information that the strip of yard surrounding the pool is of uniform width is important. The following calculatins are done in meters. See the picture below ($w$ denotes the width and $m$ is meters; please note that some of the $w$'s are rotated!): Let a and b be the dimensions of the rectangular backyard. You know that $a\cdot b = 1120$. As the strip of surrounding yard is of uniform width, you also know that $a - 11 = b - 8$, because $(a - 11 = 2w = b - 8)$. Now you can isolate a: $a = b + 3$, thus you must solve $(b+3)\cdot b = 1120$. The positive solution to this equation is $b=32$. Thus the width of the strip is $\frac{32-8}{2}$=12. Here I have just isolated $w$ in the equation $8 + 2w = b$, see the figure.<|endoftext|> TITLE: Morphism that is not a mapping QUESTION [6 upvotes]: I have encountered a statement in Lang's Algebra (Revised Third Edition, page 53) concerning morphisms of objects that seems strange to me: "In practice, in this book we shall see that most of our morphisms are actually mappings, or closely related to mappings." I have always been under the impression that the terms 'mapping' and 'morphism' are synonymous in the context of categories. Perhaps it is just the case that Lang defines the two in a way that they disagree, but I can't find such an instance. Are they in fact different? If so, what is an example of a morphism that is not a mapping? REPLY [6 votes]: One of my favorite "counterexamples" to preconceived notions about categories is matrix algebra: The objects are natural numbers The arrows are matrices ($\hom(m,n)$ is the collection of $n \times m$ matrices) Composition of arrows is the matrix product<|endoftext|> TITLE: When do coproducts map canonically to products? QUESTION [10 upvotes]: I have noticed that in certain categories (e.g. $k$-vector spaces, pointed sets, ...), given an indexed family $\{x_i\}_i$ of objects, we always have a canonical map $$\bigsqcup_ix_i\longrightarrow\prod_ix_i.$$ I was wondering what conditions do we have to require on an arbitrary category to make it so that this always happens. My thoughts so far: I think that this is somehow linked to the fact that in these categories the initial object is the same as the terminal object. So assume $\mathcal{C}$ is a category with an object $*$ that is both initial and terminal. Then for an index $j$ we can define an object $\pi_j^{-1}(*)$ as the pullback $$\require{AMScd} \begin{CD} \pi_j^{-1}(*) @>>> \prod_ix_i\\ @VVV @VV{\pi_j := \prod_{i\neq j}p_i}V\\ * @>>> \prod_{i\neq j}x_i \end{CD}$$ where $p_i$ are the projections. Dually, using the coproduct we can define an object $\bigsqcup_ix_i\big/\bigsqcup_{i\neq j}x_i$. My idea is that if we can show that any of these two objects is canonically isomorphic to $p_j$ (and intuitively it is what happens in vector spaces and sets), then we'd obtain our morphism by universal property of either the product or the coproduct. Does anybody know if this is correct? And if it is, how can it be done? REPLY [8 votes]: You have the right idea. 
The fact that $*$ is both initial and terminal (it's then called a zero object) implies that between any two objects $x$ and $y$ there is a unique zero morphism $0 \colon x \to y$, which is the composite of the morphisms $x \to * \to y$. So if you have a coproduct $\epsilon_i \colon x_i \to \coprod x$ and a product $\pi_i \colon \prod x \to x_i$, you can define a family of maps $\tau_{ij} \colon x_j \to x_i$ by $$ \tau_{ij} = \begin{cases} 1_{x_i} & i = j, \\ 0 & i \ne j. \end{cases} $$ Then for each $i$, the coproduct gives you a unique map $\rho_i \colon \coprod x \to x_i$ satisfying $\rho_i \epsilon_j = \tau_{ij}$. But then the product gives you a unique map $\varphi \colon \coprod x \to \prod x$ satisfying $\pi_i \varphi = \rho_i$. Similarly, for each $j$, the product gives you a unique map $\sigma_j \colon x_j \to \prod x$ satisfying $\pi_i \sigma_j = \tau_{ij}$. But then the coproduct gives you a unique map $\varphi' \colon \coprod x \to \prod x$ satisfying $\varphi' \epsilon_j = \sigma_j$. I'll leave it to you to prove that $\varphi = \varphi'$ here. :-) In the case of a preadditive category, finite coproducts and finite products are equivalent, and in the above construction, $\rho_i = \pi_i$ and $\sigma_j = \epsilon_j$, and $\varphi$ is the identity. See my other answer for more on that.<|endoftext|> TITLE: Proving if it is possible to write 1 as the sum of the reciprocals of x odd integers QUESTION [11 upvotes]: Let $x$ be an even number. Is it possible to write 1 as the sum of the reciprocals of $x$ odd integers? Write a proof supporting your answer. I tried a lot of these, and I think it is no because I didn't find any possible combinations. REPLY [19 votes]: You can use contradiction to prove this. Suppose $$\frac1{k_1}+\frac1{k_2}+...\frac1{k_x}=1$$ Multiplying both sides by the denominators, you get $$k_2k_3...k_x+k_1k_3...k_x+...k_1k_2...k_{x-1}=k_1k_2...k_x$$ The left side is even but the right side is odd, and there's your contradiction.<|endoftext|> TITLE: Lottery Ticket Question QUESTION [5 upvotes]: A lottery ticket contains six different numbers, chosen from 1 to 39. The winning ticket will match all six numbers in the correct order, plus a bonus number, which may match the other six numbers. The second prize matches the six winning numbers in the correct order, but not the bonus number. What is the probability of winning first or second prize? REPLY [6 votes]: The portion "plus a bonus number, which may match the other six numbers. The second prize matches the six winning numbers in the correct order, but not the bonus number", clearly suggests that the bonus number is separately drawn from the $39$ numbers. Thus there are two cases: $\underline{Bonus\; number\; is\; among\; the\; six\; on\; the\; ticket}$ You can win only the first prize, with $Pr = \dfrac{1}{39\cdot38\cdot37\cdot36\cdot35\cdot34}$ $\underline{Bonus\; number\; is\; outside\; the\; six\; on\; the\; ticket}$ You can win the second prize, with $Pr = \dfrac{38}{39}\times\dfrac{1}{39\cdot38\cdot37\cdot36\cdot35\cdot34}$ You can win the first prize, with $Pr = \dfrac{1}{39}\times\dfrac{1}{39\cdot38\cdot37\cdot36\cdot35\cdot34}$<|endoftext|> TITLE: Prove Morera's Theorem in circles cases. QUESTION [5 upvotes]: Suppose that f is continuous on C, and $$ \oint_C f(z)dz=0 $$ for every circle $C\in \mathbb C$. Prove f is holomorphic in C. How to deal with this cirlce case? REPLY [6 votes]: Step 1. Assume first that $f \in C^1$. 
Then writing $f = u + iv$ for $u, v : \Bbb{C} \to \Bbb{R}$ shows that for any $z_0 \in \Bbb{C}$ and $r > 0$, $$ 0 = \oint_{\partial B_r(z_0)} f(z) \, dz = \oint_{\partial B_r(z_0)} (u \, dx - v \, dy) + i \oint_{\partial B_r(z_0)} (u \, dy + v \, dx)$$ and hence the real part and the imaginary part vanish simultaneously. Now by Green's theorem, $$ 0 = -\oint_{\partial B_r(z_0)} (u \, dx - v \, dy) = \iint_{B_r(z_0)} \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right) \, dxdy $$ $$ 0 = \oint_{\partial B_r(z_0)} (u \, dy + v \, dx) = \iint_{B_r(z_0)} \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\right) \, dxdy $$ Since this is true for any ball $B_r(z_0)$, dividing both equations by $|B_r(z_0)| = \pi r^2$ and taking $r \to 0$ yields the Cauchy-Riemann equations $$ \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \qquad \frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x}. $$ This shows that $f$ is holomorphic. Step 2. Now we only impose the condition that $f$ is continuous. In order to utilize the previous step, let $\varphi : \Bbb{C} \to \Bbb{R}$ be a compactly supported smooth function such that $$ \iint_{\Bbb{C}} \varphi(\mathrm{x}) \, d^2\mathrm{x} = 1. $$ Then it is not hard to check that $f_n$ defined by $$ f_n(z) = \iint_{\Bbb{C}} f(\mathrm{x})\varphi_n(z-\mathrm{x}) \, d^2\mathrm{x} = \iint_{\Bbb{C}} f(z-\mathrm{x})\varphi_n(\mathrm{x}) \, d^2\mathrm{x}, \qquad \varphi_n(\mathrm{x}) = n^2 \varphi(n\mathrm{x}) $$ are smooth and the sequence $(f_n)$ converges locally uniformly to $f$. Moreover, \begin{align*} \oint_{\partial B_r(z_0)} f_n(z) \, dz &= \oint_{\partial B_r(z_0)} \iint_{\Bbb{C}} f(z-\mathrm{x})\varphi_n(\mathrm{x}) \, d^2\mathrm{x} dz \\ &= \iint_{\Bbb{C}} \bigg( \oint_{\partial B_r(z_0)} f(z-\mathrm{x}) \, dz \bigg) \varphi_n(\mathrm{x}) \, d^2\mathrm{x} \\ &= \iint_{\Bbb{C}} \bigg( \oint_{\partial B_r(z_0-\mathrm{x})} f(z) \, dz \bigg) \varphi_n(\mathrm{x}) \, d^2\mathrm{x} \\ &= 0. \end{align*} Therefore $(f_n)$ is a sequence of holomorphic functions by Step 1. Since $f$ is a locally uniform limit of holomorphic functions, $f$ is holomorphic as well. (The Cauchy integral formula guarantees this.) Addendum. Locally uniform convergence of $f_n$: Let $\bar{B}(0, R)$ be any compact ball in $\Bbb{C}$ and $\epsilon > 0$ be given. Pick $\delta > 0$ and $N \in \Bbb{N}$ as follows: Since $f$ is uniformly continuous on the compact set $\bar{B}(0, R+1)$, there exists $\delta > 0$ such that $|f(z) - f(w)| < \epsilon$ whenever $z, w \in \bar{B}(0, R+1)$ and $|z-w| < \delta$. We may assume that $\delta < 1$. Since $\varphi$ is compactly supported, there exists $N > 0$ such that $\operatorname{supp} \varphi_n \subset B(0, \delta)$ for all $n \geq N$. Now for $z \in \bar{B}(0, R)$ and for $n \geq N$, \begin{align*} | f_n(z) - f(z) | &\leq \iint_{\Bbb{C}} |f(z-\mathrm{x}) - f(z)| |\varphi_n(\mathrm{x})| \, d^2\mathrm{x} \\ &= \iint_{B(0, \delta)} |f(z-\mathrm{x}) - f(z)| |\varphi_n(\mathrm{x})| \, d^2\mathrm{x} \\ &\leq \iint_{B(0, \delta)} \epsilon |\varphi_n(\mathrm{x})| \, d^2\mathrm{x} \\ &= C\epsilon, \end{align*} where $C = \iint_{\Bbb{C}}|\varphi| = \iint_{\Bbb{C}}|\varphi_n|$ is an absolute constant. This shows that $f_n \to f$ uniformly on $\bar{B}(0, R)$.<|endoftext|> TITLE: Runners and their chance of winning QUESTION [5 upvotes]: Six runners are entered in a track meet, and have equal ability. What is the probability that a) they will finish in ascending order of their ages?
b) Shanaze will finish first or Tanya will finish second? c) Shanaze and Tanya will not finish back-to-back? REPLY [6 votes]: $A)$ $\frac 1{6!}$. Only one possible way for them to finish in the right order out of all of the possible ways. $B)$ Chance of Shanaze finishing first is $\frac 16$, and chance of Tanya finishing second is $\frac 16$; subtract the probability that both events will happen from the combined probability of the two. Both events happen with probability $\frac 16\cdot\frac 15=\frac 1{30}$ (once Shanaze is first, Tanya is equally likely to occupy any of the remaining five places), so the answer is $\frac 16+\frac 16-\frac 1{30}=\frac {3}{10}.$ $C)$ Chance of Shanaze and Tanya not finishing back-to-back is $\frac 23$. REPLY [5 votes]: Part a: There are $6!$ possible arrangements, one of which is the one you want, so the probability is $$ \frac{1}{6!} $$ Part b: By inclusion exclusion, with $P(A\cap B)=\frac{4!}{6!}=\frac1{30}$ (placing Shanaze first and Tanya second leaves $4!$ arrangements of the rest), $$ P(A\cup B)=P(A)+P(B)-P(A\cap B)=1/6+1/6-1/30=\frac{3}{10} $$ For part c: Pretend the two people with names are holding hands, how many possible arrangements are there? $$ 2*5*4! $$ All the spots the pair can occupy, corrected by the number of ways to swap a pair, times the number of ways to permute the rest. So we take the complement of this and divide by the total arrangements as in part a to get $$ \frac{6!-240}{6!}=\frac{2}{3} $$<|endoftext|> TITLE: Evaluate : $\int _0^{\infty } \sin (ax^2)\cos \left(2bx\right)dx$ QUESTION [5 upvotes]: How can I solve $$\int_{0}^{\infty} \sin \left (ax^2 \right) \cos \left (2bx \right)dx$$ Thanks! REPLY [7 votes]: I will assume that $a \gt 0$. One way to attack this integral is to extend the integration region to the whole real line by symmetry, and then split up the product into a sum of sines. Thus, the integral is $$\frac14 \int_{-\infty}^{\infty} dx \, \sin{\left ( a x^2+2 b x \right )} + \frac14 \int_{-\infty}^{\infty} dx \, \sin{\left ( a x^2-2 b x \right )}$$ which may be expressed as $$\frac14 \operatorname{Im}{\left [\int_{-\infty}^{\infty} dx \, e^{i (a x^2+2 b x)} \right ]}+\frac14 \operatorname{Im}{\left [\int_{-\infty}^{\infty} dx \, e^{i (a x^2-2 b x)} \right ]} $$ Complete the squares and get $$\frac14 \operatorname{Im}{\left [ e^{-i b^2/a} \int_{-\infty}^{\infty} dx \, e^{i a (x+b/a)^2} \right ]} + \frac14 \operatorname{Im}{\left [ e^{-i b^2/a} \int_{-\infty}^{\infty} dx \, e^{i a (x-b/a)^2} \right ]} $$ Note that both integrals are equal - we just shift the respective origins. So we now have the integral being equal to $$\frac12 \operatorname{Im}{\left [ e^{-i b^2/a} \int_{-\infty}^{\infty} dx \, e^{i a x^2} \right ]}$$ The integral converges and may be shown to be equal to $\sqrt{\pi/(-i a)}$. Thus, we find that $$\int_0^{\infty} dx \, \sin{a x^2} \, \cos{2 b x} = \frac12 \sqrt{\frac{\pi}{2 a}} \left (\cos{\frac{b^2}{a}}-\sin{\frac{b^2}{a}} \right ) $$<|endoftext|> TITLE: How did Do Carmo get the following differential of the Gauss map? QUESTION [5 upvotes]: Below is an example from Do Carmo's Differential Geometry page 139 "The Geometry of the Gauss Map". Let us analyse the point $p=(0,0,0)$ of the hyperbolic paraboloid $z=y^2-x^2$. For this, we consider a parametrisation $\textbf{x}(u,v)$ given by $$\textbf{x}(u,v)=(u,v,v^2-u^2),$$ and compute the normal vector $N(u,v)$. We obtain successively $\textbf{x}_u=(1,0,-2u),$ $\textbf{x}_v=(0,1,2v),$ $N=\Big(\frac{u}{\sqrt{u^2+v^2+\frac{1}{4}}},\frac{-v}{\sqrt{u^2+v^2+\frac{1}{4}}},\frac{1}{2\sqrt{u^2+v^2+\frac{1}{4}}}\Big)$. Notice that at $p=(0,0,0)$ $\textbf{x}_u$ and $\textbf{x}_v$ agree with the unit vectors along the $x$ and $y$ axes, respectively.
Therefore, the tangent vector at $p$ to the curve $\alpha(t)=\textbf{x}(u(t),v(t))$, with $\alpha(0)=p$, has, in $\mathbb{R}^3$, coordinates $(u'(0),v'(0),0)$. I understand up until this point. Now my question is what follows: How can Do Carmo get the following: Restricting $N(u,v)$ to this curve and computing $N'(0)$, we obtain $N'(0)=(2u'(0),-2v'(0),0)$ I have little clue on how can he get $2u'(0)$ and $2v'(0)$? Could somebody please help clarify this confusion? Thanks. REPLY [4 votes]: You need to differentiate $N(u(t), v(t))$, given that $u(0) = v(0) = 0$. To calculate the first component of $N$, which is $\frac{u(t)}{\sqrt{u(t)^2 + v(t)^2+ 1/4}}$, we have $$\frac{d}{dt} \frac{u(t)}{\sqrt{u(t)^2 + v(t)^2+ 1/4}} = \frac{u'(t)}{\sqrt{u(t)^2 + v(t)^2+ 1/4}} - \frac{u(t)(u(t)u'(t) + v(t)v'(t))}{(u(t)^2 + v(t)^2+ 1/4)^{3/2}}$$ so setting $t=0$ gives $$\frac{u'(0)}{\sqrt{0^2 + 0^2+ 1/4}} = 2u'(0)$$ similar for the other components.<|endoftext|> TITLE: Why do we need finiteness of the first set in "continuity from above"? QUESTION [7 upvotes]: If $E_1 \supset E_2 \supset ...$ and $\mu(E_1)<\infty$ then $\mu(\bigcap E_j)=\lim \mu(E_j)$. But why need $\mu(E_1)<\infty$? Is $(-\infty,-n)$ an counter example? REPLY [6 votes]: Assume you have a family of sets $E_n = [n, +\infty)$ and a Lebesgue measure $\mu$. Then $\mu( \bigcap E_n) = \mu(\emptyset) = 0$ on the other hand for each $n$ $\mu(E_n) = \infty$ so $\lim_{n \to \infty} \mu(E_n) = \infty$ This fact works because proof (as I know it ) of the theorem in question relies on the fact that $$\mu(E_1) - \mu(\bigcap E_n) =\lim_{n \to \infty} \mu(E_1 \setminus E_n) = \mu(E_1) - \lim_{n \to \infty} \mu(E_n)$$ which cannot be correctly processed if $\mu(E_1) = \infty$ by definition of algebra for extended real numbers. To be more general we will have equality $$ \mu(\bigcap E_n) - \lim_{n \to \infty} \mu(E_n) = \infty - \infty $$ which is indeterminant. The straight implication of this fact is that many theorems of probability won't work in case of general measures e. g. Egoroff theorem.<|endoftext|> TITLE: Union of two countable sets is countable [Proof] QUESTION [5 upvotes]: Theorem: If $A$ and $B$ are both countable sets, then their union $A\cup B$ is also countable. I am trying to prove this theorem in the following manner: Since $A$ is a countable set, there exists a bijective function such that $f:\mathbb{N}\to A$. Similarly, there exists a bijective function $g:\mathbb{N}\to B$. Now define $h:\mathbb{N}\to A\cup B$ such that: $$h(n)=\begin{cases} f(\frac{n+1}{2})&\text{, n is odd}\\ g(n/2) & \text{, n is even} \\ \end{cases}$$ So in essence, $h(1)=f(1)$, $h(2)=g(1)$, $h(3)=f(2)$ and so on. Now we have to show that h is a bijection. h(n) is one-one: Proof: If $h(n_1)=h(n_2)$ then, if $n_1$ and $n_2$ are both either odd or even, we get $n_1=n_2$. But if, suppose $n_1$ is odd and $n_2$ is even, this implies that: $$f\left(\frac{n_1+1}{2}\right)=g\left(\frac{n_2}{2}\right)$$ How can one deduce from this equality that $n_1=n_2$? I tried to think about this and realized that if $A\cap B=\phi$ then this case is impossible as it would imply that there is a common element in both sets. On the other hand, if we assume that $A\cap B\neq \phi$, then either $f\left(\frac{n_1+1}{2}\right)\in A\cup B$ or $g\left(\frac{n_2}{2}\right)\in A\cup B$....Beyond this I'm clueless. Edit: Solution by the author- REPLY [4 votes]: A set $S$ is countable iff its elements can be enumerated. Since $A$ is countable, you can enumerate $A=\{a_1,a_2,a_3,...\}$. 
Since $B$ is countable you can enumerate $B=\{b_1,b_2,...\}$. Enumerate the elements of $A\cup B$ as $\{a_1,b_1,a_2,b_2,...\}$ and thus $A\cup B$ is countable.<|endoftext|> TITLE: Isometry group of a compact Pseudo-Riemannian manifold QUESTION [8 upvotes]: Can someone give an example of a compact Pseudo-Riemannian manifold (that is, a manifold with an indefinite metric) with non-compact isometry group? Here is some more background to my question: Myers and Steenrod proved that the isometry group of a Riemannian manifold is a Lie group and furthermore if the manifold is compact so is the isometry group. One can also prove that the isometry group of a Pseudo-Riemannian manifold is a Lie group too. But in this case the isometry group is not compact in general even if the manifold is. REPLY [2 votes]: Here are the details for my comment above. A matrix $A\in SL(2,{\mathbb Z})$ is called Anosov if its eigenvalues have absolute value different from $1$. Equivalently, $tr(A)\notin [-2,2]$. Given any two distinct 1-dimensional subspaces $L_1, L_2\subset {\mathbb R}^2$ there is a unique, up to scale, nondegenerate bilinear form $b$ on ${\mathbb R}^2$ which vanishes on both lines. (If $e_i$ is a generating vector of $L_i$ then the form $b$ is uniquely determined by $b(e_1,e_2)$. For instance, if $L_1, L_2$ are the coordinate axes then $b(x,y)=xy$.) This form defines a Lorentzian metric on ${\mathbb R}^2$. If $A\in SL(2, {\mathbb R})$ preserves both lines $L_1, L_2$ then it preserves the form $b$ as well. Now, given an Anosov matrix $A\in SL(2, {\mathbb Z})$ let $L_1, L_2$ be its eigenspaces (both eigenvalues have to be real and distinct) and $b$ be the corresponding bilinear form which has to be invariant under $A$. The linear transformation $A$ preserves the standard integer lattice ${\mathbb Z}^2$ in ${\mathbb R}^2$ and, hence, descends to an automorphism $f$ of $T^2= {\mathbb R}^2/{\mathbb Z}^2$. The bilinear form descends to a flat Lorentzian metric $g$ on $T^2$ invariant under the automorphism $f$. Thus, $Isom(T^2, g)$ contains the infinite cyclic group $\Gamma$ generated by $f$. I claim that $\Gamma$ is not contained in any Lie group $G$ with finitely many components acting (topologically) on $T^2$. Indeed, $f$ induces an infinite order automorphism on the 1st homology group $H_1(T^2, {\mathbb Z})$, namely the one given by the matrix $A$, where we use projections of the standard coordinate vectors in ${\mathbb R}^2$ as the generators of the homology group. If $G$ is a subgroup of $Homeo(T^2)$ containing $\Gamma$ then the path-connected component of the identity $G^0$ of $G$ acts trivially on the 1st homology group $H_1(T^2, {\mathbb Z})$. Hence, if $G$ has only finitely many connected components then the image of $G$ in $Aut(H_1(T^2, {\mathbb Z}))$ is finite. Thus, the automorphism $f$ as above cannot belong to such $G$. Since every compact group has only finitely many connected components, it follows that $f$ cannot belong to any compact Lie subgroup of $Homeo(T^2)$. Since the isometry group of any pseudo-Riemannian manifold is a Lie group, it follows that $Isom(T^2, g)$ is noncompact. qed<|endoftext|> TITLE: Zeta of 3, why cant we get the value the zeta of odd n? QUESTION [5 upvotes]: $f(x) = \pi^2 - x ^2$ on $|x|<\pi$ and $f(x+2\pi)=f(x)$. I found its Fourier expansion $f(x)=\frac{2}{3}\pi^2+\sum_{n \geq 1}\frac{4}{n^2}(-1)^{n+1}\cos (nx).$ And by putting $x=\pi$, I got the zeta of $2$, $\zeta(2) = \sum_{n \geq 1}\frac{1}{n^2} = \frac{\pi^2}{6}$.
Similarly, if i try do this process about $f(x) = \pi^4-x^4$ then I could get the value $\zeta(4)=\frac{\pi^4}{90}$ if so, why can't I get the value of $\zeta(3)$ with $f(x) = \pi^3 - x ^3$ ? I wonder the principle that I can't do in general, why can't we get the value the zeta of odd $n$? REPLY [8 votes]: Of course I don't have a proof that $\zeta(3)$ is a completely new constant, but I think this might be interesting : Let $\displaystyle f(z) = \frac{\pi^2}{\sin^2(\pi z)}$ and $\displaystyle g(z) = \frac{d^2}{dz^2} \log(\Gamma(z))$ $= \psi'(z)$. Using Liouville's theorem (and showing $\Gamma(z)$ has no zeros) you get that $$f(z) = \sum_{n=-\infty}^\infty \frac{1}{(z+ n)^2}, \qquad\quad g(z) = \sum_{n=0}^\infty \frac{1}{(z+n)^2}$$ Differentiating : $\displaystyle f^{(k)}(z) = (-1)^k \frac{(2+k)!}{2}\sum_{n=-\infty}^\infty \frac{1}{(z+ n)^{2+k}}$ so that $$f^{(2k+1)}(1/2) = 0, \qquad f^{(2k)}(1/2) = (2+2k)!(2^{2k+2}-1)\zeta(2k+2)$$ $$g^{(k)}(1/2) = \frac{(2+k)!}{2}(2^{k+2}-1)\zeta(k+2)$$ Finally, evaluating $\zeta(2k)$ is as easy as evaluating the derivatives of $\sin(\pi z)$ at $z=1/2$, while evaluating $\zeta(2k+1)$ is as hard as evaluating the derivatives of $\Gamma(z)$ at $z=1/2$.<|endoftext|> TITLE: Proof that $\sin(x(t))$ is not a linear and time-invariant system? QUESTION [5 upvotes]: Suppose $\sin(x(t))$. I want to prove whether $\sin(x(t))$ is both linear and time-invariant or not. Is this proof flawed, or is it sound and correct? Time Invariance: Suppose $x_1(t)$ is a particular input signal to the system. Now, suppose $x_2(t)$ is $x_1(t)$ but shifted by $T$ time units such that $x_2(t) = x_1(t - T)$. Suppose $y_1(t)$ and $y_2(t)$ are the responses of the input signals of the system. To prove time-invariance, we must determine whether $y_1(t - T) = y_2(t)$ holds or not. Therefore, we expand both sides: $$LHS: y_1(t - T) = \sin(x_1(t - T))$$ $$RHS: y_2(t) = \sin(x_2(t))$$ $$sin(x_1(t - T))$$ Lemma 1: Therefore, $\sin(x(t))$ is a time-invariant system. Linearity Suppose $x_1(t)$ and $x_2(t)$ represent two distinct input signals, and $y_1(t)$ and $y_2(t)$ represent the two responses of the input signals into the system, respectively. Therefore... $$y_1(t) = \sin(x_1(t))$$ $$y_2(t) = \sin(x_2(t))$$ To prove linearity, we must prove two properties: the scaling property and the additive property. Scaling To prove the scaling property, we must determine whether the following equation is true: $$Ay(t) = \sin( Ax(t))$$ such that $A \in \mathbb{R}$. After substitution... $$A \sin(x(t)) \neq \sin(Ax(t))$$ Therefore, the system does not satisfy the scaling property. Therefore, the system is not linear. Therefore, the system is not both linear and time-invariant. REPLY [3 votes]: It is correct. You could also find out about the nonliterary by looking at the Taylor series expansion of $\sin(x(t))$: $$\sin(x(t))=x(t)-\frac{x^3(t)}{3!}+\frac{x^5(t)}{5!}+\cdots$$ the higher degrees of $x(t)$ are all nonlinear.<|endoftext|> TITLE: The curves $y^2=x^{2k+1}$ are never isomorphic to each other QUESTION [8 upvotes]: For every $k \in \Bbb N$, we consider the affine variety $X_k=\Bbb V(y^2-x^{2k+1}) \subset \Bbb A^2$. Gathmann's note contain the following exercise: Show that the curves $X_k$ are pairwise nonisomorphic. Equivalently, We need to show that the rings $$\frac{\Bbb C[x,y]}{(y^2-x^{2k+1})}$$ are pairwise nonisomorphic. 
The fact that $X_1, X_0$ are not isomorphic is well -known: this is just saying that the cuspidal curve is not isomorphic to $\Bbb A^1$, since one is smooth and the other is not. It is suggested as a hint to look at the blow-up of $X_k$ at the origin, and it surely suffices to show that these blow-ups $\tilde X_k \subset \Bbb A^2 \times \Bbb P^1 $ are pairwise nonisomorphic. I'm also interested in knowing whether this is the "standard" proof, or there are other methods that work. (like the ring-isomorphism approach above.) If we denote the coordinates of $\tilde {\Bbb A^2}$ by $((x_1,x_2),(y_1:y_2))$ then Gathmann shows that the blow-up of $X_k$ is given in $U_1=\{y_1 \neq 0\}$ by the equation $y_2^2-x_1^{2k-1}=0$. Is it enough to show that these curves are pairwise nonisomorphic in $\Bbb A^2$? REPLY [4 votes]: The ring isomorphism is also easy to do (blowing up is also fine). If you have an isomorphism $f:A_k\to A_l$, where $A_n=\mathbb{C}[x,y]/(y^2-x^{2n+1})$, then it induces an isomorphism of their normalizations, which is just $\mathbb{C}[t]=R$, where the map from $A_n\to R$ is, sending $x\mapsto t^2, y\mapsto t^{2n+1}$. Now, $f$ induces an automorphism of $R$, which are just linear maps, $t\mapsto at+b$, with $a\neq 0$. Of course, we may assume $k\leq l$ and both at least 1. Then following $t^2$ coming from $A_k$, which should land inside the image of $A_l$, one checks that $b=0$. Now, it is elementary to check that $k=l$, by following where $t^{2k+1}$ goes.<|endoftext|> TITLE: Is $C[0,1]$ a Banach space for the $L^1$ norm? QUESTION [5 upvotes]: Is $C[0,1]$ a Banach space with respect to the norm $\|f\| = \int\limits_0^1|f(t)| \, dt$? People keep telling me it is, but lets consider: $f_n(x) = x^n$. This function defines a Cauchy sequence, yet the limit clearly isn't a continuous function! REPLY [4 votes]: Under the $L^1$ norm, $C[0,1]$ is not a Banach space. Your example $f_n(x) = x^n$ doesn't do the trick since $\|f_n(x) - 0\| \to 0$ as $n \to \infty$, so there is a limit to this sequence in $C[0,1]$. However, it is well known that you can create a sequence $g_n \in C[0,1]$ such that $g_n \to 1_{A}$ for any interval $A \subset [0,1]$, for example $A = (1/4, 3/4)$, but there is no $g \in C[0,1]$ such that $\|g - 1_A\| = 0$.<|endoftext|> TITLE: Proof that uniform convergence implies convergence in norm of function space QUESTION [5 upvotes]: In an arbitrary normed space $X$ consider a sequence of functions $f_n:X\to \mathbb{R}$ which converges uniformly to some function $f: X\to \mathbb{R}$. Now consider an arbitrary function space $Y$ (over $\mathbb{R}$) consisting of functions from $X$ to $\mathbb{R}$ with arbitrary norm $\|\cdot\|_Y$, which contains $f_n$ for each natural $n$ and contains $f$. Now intuitively I feel that uniform convergence should imply that $\|f-f_n\|_Y \xrightarrow[\infty]{n}0$. It is easy enough to prove this in any concrete function space like the $L^p$ spaces, but I'm struggling to find a proof for the general case, which is frustrating because it seems intuitively so obvious. I feel the proof should go along the lines of, because for any $\varepsilon>0$ we can find a natural $N$ such that for all $n>N$ $$|f(x)-f_n(x)|<\varepsilon \quad \forall x\in X,$$ then in some sense $f-f_n$ is in some sense within an $\varepsilon$ "distance" of $0$. I'm struggling to make this argument concrete using only the ideas of general normed spaces. I now believe that there exists a strange norm which makes this untrue. 
Translated to metric spaces obviously the discrete metric would give us a suitable counterexample. I can't find a suitable analogue in a normed space because of scalar multiplication. If anyone can give a proof or provide a counterexample as to whether uniform convergence implies convergence in the norm, or can direct me to a reference on the topic I'd be very appreciative. EDIT: I'd like to rephrase the question to deal exclusively with the kind of spaces I had in mind, which Daniel Fischer uncannily knew. As he pointed out a counter example will be any space $L^P(X)$ where $\mu(X)=\infty$. So let us rephrase the question and deal with a compact subspace of $X$, say $Z$. Then if we consider a sequence $f_n:Z\to \mathbb{R}$ converging to $f:Z\to \mathbb{R}$, and redefine $Y$ to be a function space consisting of functions from $Z$ to $\mathbb{R}$ with some arbitrary norm, does uniform convergence then imply convergence in the norm? As my measure theory course was rather disappointing I'm not entirely sure that the measure of a compact subset is finite. If not then obviously the answer stays the same. Are there any conditions that force the statement to be true? REPLY [3 votes]: This is not going to work with any norm. For instance if $X=[0,1]$, $Y=C^1(X)$ with the norm $$||f||_Y=\sup_{x\in [0,1]}\big(|f(x)|+|f'(x)|\big)$$ then uniform convergence of $f_n$ to $f$ (where $f_n$ and $f$ are $C^1$) does not ensure that $||f_n-f||_Y\rightarrow 0$.<|endoftext|> TITLE: Maximize $\int_\Gamma\,\langle{y,x}\rangle^2\, dx$ QUESTION [7 upvotes]: Given a bounded, compact, closed surface $\Gamma\subset\mathbb{R}^n$, I'm searching $$ \max_{y\in\mathbb{R}^n, \|y\|=1} \int_\Gamma \langle y, x\rangle^2. $$ Without the square, and with the enclosed volume $\Omega$, $$ \max_{y\in\mathbb{R}^n, \|y\|=1} \int_\Omega \langle y, x\rangle, $$ I'm guessing $y_\text{max}$ points towards the centroid of $\Omega$. Not sure how to prove that though. Any hints? REPLY [3 votes]: Let's do the second one first. Let \begin{align*} f(y_1, \dots, y_n) &= \int_\Omega (y_1x_1 + \dots y_n x_n)\,dV \\ g(y_1, \dots, y_n) &= y_1^2 + y_2^2 + \dots + y_n^2 \end{align*} We want the critical values of $f$ subject to the constraint that $g=1$. By the method of Lagrange multipliers, there is a constant $\lambda$ such that, for each $i$ from $1$ to $n$, $\frac{\partial f}{\partial y_i} = \lambda \frac{\partial g}{\partial y_i}$. That is, $$ \int_\Omega x_i \,dV = 2\lambda y_i $$ for each $i$. By squaring and summing all $n$ of these equations, we get $$ 4\lambda^2 = \sum_{i=1}^n \left(\int_\Omega x_i \,dV\right)^2 $$ The only way the right-hand side is zero is if $\int_\Omega x_i \,dV = 0$ for each $i$. In that case $f$ is identically zero and any $y$ will do. Otherwise, $\lambda \neq 0$, so $$ y_i = \frac{1}{2\lambda} \int_\Omega x_i \,dV $$ Since the $i$th coordinate of the centroid of $\Omega$ is $\bar x_i =\frac{1}{\operatorname{Vol}(\Omega)}\int_\Omega x_i \,dV$, I agree that $y$ is a multiple of $\bar x$. The first one is trickier, at least, to me. Let $$ h(y_1,\dots,y_n) = \int_\Gamma (x_1y_1+\dots+x_ny_n)^2\,ds $$ Again, we want to maximize $h$ subject to $g=1$. We have $$ 2\int_\Gamma (x_1y_1+\dots+x_ny_n)x_i\,ds = 2\lambda y_i \tag{$*$} $$ for each $i$. If we set $$ M_{ij} = \int_\Gamma x_i x_j \,ds $$ then $(*)$ is equivalent to $$ \sum_{j=1}^n M_{ij} y_j = \lambda y_i $$ In other words, $y$ is an eigenvector of the matrix $M$, corresponding to the eigenvalue $\lambda$. The matrix $M$ is symmetric, and I think, positive definite. 
I am guessing the latter can be shown by squaring each equation and adding them up like before. So there is an orthonormal basis of eigenvectors. Therefore the maximum value of $f$ is the maximum of the eigenvalues of $M$. I'm not sure if we can say more than that. This is a very interesting problem but I've already spent way too much time that I should be doing something else! \smiley<|endoftext|> TITLE: Can locally "a.e. constant" function on a connected subset $U$ of $\mathbb{R}^n$ be constant a.e. in $U$? QUESTION [8 upvotes]: Consider a non-empty connected open subset $U$ of $\mathbb{R}^n$. Suppose a measurable function $u:U\to\mathbb{R}$ is locally constant on $U$, then it must be constant on $U$ according to this question. Here is my question: What if one changes "locally constant" to "locally a.e. constant"? More precisely, assume that for every $x\in U$ there is an open neighborhood $V$ of $x$ in $U$ such that $u$ is constant a.e. in $V$. Can one conclude that $u$ is constant on $U$ a.e.? [Motivation] This question is mostly for a rigorous last step in the proof of this problem. REPLY [5 votes]: $\mathbb{R}^n$ is second countable, therefore all of its subspaces are also second countable, and thus Lindelöf spaces. By assumption, we can cover $U$ with open sets such that $u$ is a.e. constant on each of these sets. Let $\{ V_n : n \in \mathbb{N}\}$ be a countable subcover, and for $n \in \mathbb{N}$ let $N_n$ be a null set such that $u$ is constant on $V_n \setminus N_n$. Let $N = \bigcup_{n\in\mathbb{N}} N_n$. Then $N$ is a null set, and $u$ is constant on $U \setminus N$. To see the latter, let $c_n$ be the value $u$ takes on $V_n \setminus N_n$. If $V_n \cap V_k \neq \varnothing$, then $(V_n \setminus N_n) \cap (V_k \setminus N_k) \neq \varnothing$ (it has positive measure), whence $c_k = c_n$. Let $$W_m = \bigcup \{ V_n : c_n = c_m\}.$$ Then each $W_m$ is open, and either $W_m = W_k$ or $W_m \cap W_k = \varnothing$. By the connectedness of $U$, it follows that $W_{43} = U$.<|endoftext|> TITLE: $\sum_{n=1}^\infty\frac1{n^6}=\frac{\pi^6}{945}$ by Fourier series of $x^2$ QUESTION [5 upvotes]: Prove that $$\sum_{n=1}^\infty\frac1{n^6}=\frac{\pi^6}{945}$$ by the Fourier series of $x^2$. By Parseval's identity, I can only show $\sum_{n=1}^\infty\frac1{n^4}=\frac{\pi^4}{90}$. Could you please give me some hints? REPLY [3 votes]: Considering $f(x)=x^3$ By Parseval identity we can prove that $\sum_{n=1}^{\infty}\frac{1}{n^6}=\frac{\pi^6}{945}$. Then $$a_0=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)dx=0,$$$$a_n=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos(nx)dx=0$$ and \begin{align}b_n&=\frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin(nx)dx\\&=(-1)^{n+1}\frac{2\pi^2}{n}+(-1)^{n}\frac{12\pi}{n^3}\end{align} From the relation $$\frac{1}{\pi}\int_{-\pi}^{\pi}|f|^2dx=\frac{a_0}{2}+\sum_{n=1}^\infty(a_n^2+b_n^2)$$we get \begin{align}\sum_{n=1}^\infty(\frac{144}{n^6}+\frac{4\pi^4}{n^2}-\frac{48\pi^2}{n^4})=\frac{2\pi^6}{7}\end{align}$$\sum_{n=1}^\infty\frac{144}{n^6}=\frac{16\pi^6}{105}$$ $$\sum_{n=1}^\infty\frac{1}{n^6}=\frac{\pi^6}{945}$$<|endoftext|> TITLE: Functional recursion equations QUESTION [7 upvotes]: Lets denote n-times recursion as: $$ f(f(f ... f(x) ...)) = ({^nf}(x)), ({^0}f(x))=x $$ My question is: is there general approach to solve 'algebraic' functional equation? $$ a_n({^nf}(x))+a_{n-1}({^{n-1}f}(x)) ... 
+ a_0x = 0 $$ For example: $ {^3f}(x) - x = 0 $ has 2 known to me solutions: $f(x) = \frac{1}{1-x}$, $f(x)=x$ and $ {^2f}(x) - x = 0 $ has 2 simple solutions: $f(x) = 1-x$, $f(x)=x$ In some special cases solutions are easy to find, in another - not. I will be thankful for any references to research done in this area. ADDITION: For example, $({^2}f)(x)=x$ valid for all functions $y=f(x)$ that can be expressed in the form $\psi(x,y)=\psi(y,x)$, i.e. has parameters symmetry. REPLY [2 votes]: For some class of function there is the concept of Carleman/Bell matrices, converting the problem of functional iteration into that of matrix-powers. Basically that concerns functions for which you have a power-series having a nonzero radius of convergence. The best adapted cases are such functions, where the power series has no constant term: then the Carleman-matrix is triangular and admits powers and often even fractional powers giving power series for iterations and even fractional iterations accordingly. The field is very wide, has a lot of, sometimes complicated, requirements. Just to make the above a bit more intuitive: Consider a matrix $F$ containing columnwise the coefficients of the formal powerseries of some function $f(x)$ and of its powers $f(x)^0, f(x)^1, f(x)^2, ...$ Consider then a vectorexpression $V(x) = [1,x,x^2,x^3,...]$ Then the idea for the Carleman-matrix is, to be able to evaluate $$ \begin{array} {rll} V(x) \cdot M &= [1,f(x),f(x)^2, ...] & \text{ which is also }\\ V(x) \cdot M &= V(f(x)) & \text{ and then of course }\\ V(f(x)) \cdot M &= V(f°^2(x)) \\ V(f°^2(x)) \cdot M &= V(f°^3(x)) \\ \cdots & \text{ and finally }\\ V(x) \cdot M^h &= V(f°^h(x)) \end{array}$$ Then fractional iterates are constructed - at least their formal power series- if fractional powers of $M$ can be constructed, for instance by diagonalization or matrixlogarithm/matrixexponentiation. If this is all possible with some given function $f(x)$ then ony can approach your examples of functional equations on iterations with polynomials with the powers of F or even series on F. For instance, one approach for solving the problem of tetration is to approximate the solution of $\small (I-F)^{-1} $ (which is like a geometric series with matrix $F$ as quotient, an example can be found for instance as a sidenote in an article of P. Walker on the fractional iterate of the $\exp()$. Such series are called "Neumann-series" and have of course their own strong requirements) As I said above, this is far from being trivial and as well technical as principal problems must be solved, or often cannot be solved. Also this is only one approach to that problem of function-composition and -iteration, having already a vast amount of literature - books and online available articles... Sidenote: the diagonalization (if possible) of a Carlemanmatrix F for some function $f(x) = a_1x + a_2x^2 + ... $ reflects/implements also the concept of the Schröder-function (That concept has been introduced around 1890 by Ernst Schröder, and developed later by G. Koenigs and others) . In the 1950/60/70 E. Jabotinsky developed specifically the attempt using the Carlemanmatrix.<|endoftext|> TITLE: A continuous bijection from $l_2 $ onto a subset of $l_2$ whose inverse is everywhere discontinuous. QUESTION [7 upvotes]: I was reading an article from AMM, titled, A continuous bijection from $l_2 $ onto a subset of $l_2$ whose inverse is everywhere discontinuous. 
In this he constructed the function $T:l_2\rightarrow l_1$ as $T(x)=$$(\sigma(x_1)x_1^2, \cdots,\sigma(x_i)x_i^2, \cdots )$ And shown this function to be bijective, continuous whose inverse is everywhere discontinuous. My question is, What's the motivation behind this example? Why did the author find this example? What's the importance of this example? REPLY [3 votes]: Topological point of view. Any continuous bijection between compact Huasdorff spaces is a homeomorphism, that is its inverse is continuous. This example shows that compactness condition can not be dropped. Author's example shows necessity of compactness in a strong sense, because the inverse of $T$ is not merely discontinuous, its everywhere discontinuous. Functional analytic point of view. By bounded inverse theorem any continuous linear bijection between Banach spaces is a linear homeomorphism. Again, the author's example shows necessity of linearity in that theorem, otherwise you can get inverse map which is discontinuous (and even more - discontinuous at every point).<|endoftext|> TITLE: Integrate $x^2 e^{-x^2/2}$ QUESTION [6 upvotes]: Is it possible to integrate $$\int_0^{\infty} x^2 e^{-x^2/2}\, \mathrm dx$$ by hand? The answer is $\frac{1}{2\sqrt{2}}$ My apologies if this does not meet the standards of this blog. I will delete it if requested. REPLY [4 votes]: Using a standard probability distribution: If you know the Gaussian distribution $\mathcal{G}(\mu,\sigma)$: its pdf is $f_{\mu,\sigma}\colon\mathbb{R}\to\mathbb{R}$ defined by $$ f_{\mu,\sigma}(x) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}} $$ and you want to compute, for $X\sim\mathcal{G}(0,1)$, $$\begin{align*} \int_{0}^\infty x^2e^{-\frac{x^2}{2}}dx &= \frac{1}{2}\cdot\sqrt{2\pi}\int_{-\infty}^\infty x^2f_{0,1}(x)dx = \frac{\sqrt{2\pi}}{2} \mathbb{E}[X^2] = \frac{\sqrt{2\pi}}{2}\left( \mathbb{E}[X^2]-\mathbb{E}[X]^2\right) \\&= \frac{\sqrt{2\pi}}{2}\operatorname{Var}X = \frac{\sqrt{2\pi}}{2}\cdot 1 \\&= \sqrt{\frac{\pi}{2}} \end{align*}$$ where for the first step we used the fact that $x\mapsto x^2e^{-\frac{x^2}{2}}$ is an even function (hence the factor $\frac{1}{2}$ and the change of bounds in the integral).<|endoftext|> TITLE: What is the difference between a vector valued function and a vector field? QUESTION [6 upvotes]: I think that I understand that if I parameterize $y=x\,$ I can write it as $f(t)=(t,t)\,$. So I'm assigning position vectors (coming from the origin) to the point $(t_0,t_0)$ for $t=t_0$. So I can say that that a vector valued function takes a scalar parameter and assign to a vector in $R^n$. And if a $(x,y)$ is going from $R^2$ to $R^n$, it is a vector field, because for each position vector I'm associating a vector ? (instead of each scalar $t$) I'm confused because if vector valued functions can be going from $R^2$ to $R^2$ isn't it just like a vector field? (for each vector position I receive another vector in that point). REPLY [3 votes]: Sure. If you are in a region $U \subset \mathbb{R}^n$ a vector field on $U$ is a vector valued function on $U$ (taking values in $\mathbb{R}^n$). In my mind (although maybe there are different conventions) a "vector valued function" is just any function from some space to a vector space. (I'm intentionally being a bit vague by what I mean by a "space", but most reasonable subsets of $\mathbb{R}^n$ you could write down should be fine) In particular you could define a vector valued function from some region in $\mathbb{R}^3$ to $\mathbb{R}^2$ if you wanted to. 
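To make the distinction concrete, here is a tiny illustrative sketch (Python; the particular maps are arbitrary examples, nothing canonical):

```python
# a vector valued function of a scalar parameter: R -> R^2, t |-> (t, t)
def f(t):
    return (t, t)

# a vector valued function on (a region of) R^3 with values in R^2
def g(x, y, z):
    return (x + y, y * z)

# a map R^2 -> R^2; on an open subset of R^2 this is exactly the data of a
# vector field: "attach the output vector F(p) to the point p"
def F(x, y):
    return (-y, x)
```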
A vector field is something more specific, it is an assignment to each point a vector inside the tangent space of your space at that point. As you suggested, for some open set $U \in \mathbb{R}^2$ this is just the same as a vector valued function $U \to \mathbb{R}^2$. However if instead our space is the unit circle $S^1 = \{(x,y) |x^2+y^2 = 1 \}$ in $\mathbb{R}^2$, then a vector field on $S^1$ is a map from $S^1$ to $\mathbb{R}^2$ with the condition that the vector assigned to a given point is tangent to the circle at that point (a harder condition to meet than just being an arbitrary vector valued function).<|endoftext|> TITLE: The sigma function (sum of divisors) multiplicative proof QUESTION [8 upvotes]: I am trying to prove that $\sigma(p_1^a\cdot p_2^b) =\sigma(p_1^a)\cdot\sigma(p_2^b)$ where $p_1$ and $p_2$ are prime numbers. We know that $\sigma(p_1^a) = \frac{p_1^{a+1}-1}{p_1-1}$ and $\sigma(p_2^b) = \frac{p_2^{b+1}-1}{p_2-1}$. Now I am trying to find the divisors of $p_1^a\cdot p_2^b$ and add them: I found that the divisors are $1$, $p_1$, $p_1^{2},\dotsc,p_1^{a}$, $p_2$, $p_2^{2},\dotsc,p_2^{b}$, $p_1\cdot p_2$, $p_1\cdot p_2^2,\dotsc,p_1\cdot p_2^{b},\dotsc,p_1^{a}\cdot p_2,\dotsc,p_1^{a}\cdot p_2^{b}$. Now when we do their summation we get $\sum_{k=0}^a$ $p_1^k$ + $\sum_{k=1}^b$ $p_2^k$ + ($\sum_{k=1}^{a}$ $p_1^k\cdot\sum_{k=1}^b$ $p_2^k$), is this right? If yes I can't reach $\frac{p_1^{a+1}-1}{p_1-1}\cdot\frac{p_2^{b+1}-1}{p_2-1}$. REPLY [2 votes]: $$\sigma(n) = \sum_{d \mid n} d$$ If $\gcd(n,m) = 1$ then there is bijection $(d,d') \to dd'$ between the couples of divisors $ d \mid n, d' \mid m$ and the divisors of $ nm$, and hence $$\sigma(n)\sigma(m) = (\sum_{d \mid n} d)(\sum_{d' \mid m} d') = \sum_{d \mid n, \ d' \mid m} dd' = \sum_{k \mid nm} k=\sigma(nm)$$<|endoftext|> TITLE: Classes, sets and Russell's paradox QUESTION [14 upvotes]: As I understand, Russell's paradox demonstrates that not every class can be regarded as a set. He defines $$S:=\{x: x \text{ is a set such that }x\notin x\}$$ Assuming that $S$ is a set, this gives a contradiction. However, if in the above definition we replace "set" by "class", we find that $S$ cannot be a class. In other words, the paradox can be used for any structure, not just sets. My (naïve) understanding is that a set can be identified as a single object, while that's not necessarily true for classes, which can be any collection of objects. If that is true, then in the above definition we could not say things like "$x$ is a class such that…" since it identifies the class as a single object $x$. That would seem to resolve my confusion, but I saw in books sentences like "Let $A$ and $B$ be classes…" which confuse me further, since they again refer to classes (which are not necessarily sets) as individual objects $A$ and $B$. Surely my reasoning is wrong. What am I missing? What is the difference between a class and a set? REPLY [3 votes]: In a system like MK (Morse-Kelley) set theory where there are two sorts, intended to be one for sets and one for classes in general, we can indeed construct (as an object in the system) any class of the form $\{ x : Set(x) \land φ(x) \}$, where $Set$ is the predicate corresponding to the sort intended for sets. It is then possible to prove that this object is not a set, via Russell's proof. Namely in MK you would be able to prove the following sentence: $\neg Set(\{ x : Set(x) \land x \notin x \})$. 
Note that there is little point in having the predicate $Class$ corresponding to the sort intended for classes, because in MK we have essentially that $\forall x\ ( Class(x) )$. In practice when working in MK we say "$x$ is a set" to mean "$Set(x)$" and "$x$ is a class" to mean "$Class(x)$", which we just mentioned is totally redundant since everything is a class in MK. Well why do people still say it then? It is because most mathematical work actually is based on some informal type theory (see this article by De Bruijn and this book), and so we think of each object actually as having a type, rather than being a set or class! Now if you want to work in ZFC completely, then you cannot even talk about classes in the same way, since they are not even objects in the system. In ZFC, we can only define classes in the limited sense that we can define new predicate-symbols, if our system supports definitorial expansion. So the Russell class does not exist as an object in the system, but we can define the predicate-symbol $Russell$ as follows: Let $Russell$ be a $1$-place predicate such that $\forall x\ ( Russell(x) \equiv x \notin x )$. Being a predicate-symbol rather than a collection, it makes no sense to ask whether $Russell$ is a member of itself. Likewise, in ZFC "$x \in S$" when $S$ is a class should be considered as syntactic sugar for "$S(x)$". That is the precise sense in which we can handle classes in pure ZFC. Similarly, using definitorial expansion we can handle class-functions, because defining them amounts to defining new function-symbols. For example in ZFC the power-set function-symbol "$\mathcal{P}$" is not a function but a class-function.<|endoftext|> TITLE: Is the empty set homeomorphic to itself? QUESTION [30 upvotes]: Consider the empty set $\emptyset$ as a topological space. Since the power set of it is just $\wp(\emptyset)=\{\emptyset\}$, this means that the only topology on $\emptyset$ is $\tau=\wp(\emptyset)$. Anyway, we can make $\emptyset$ into a topological space and therefore talk about its homeomorphisms. But here, we seem to have an annoying pathology: is $\emptyset$ homeomorphic to itself? In order to this be true, we need to find a homeomorphism $h:\emptyset \to \emptyset$. It would be very unpleasant if such a homeomorphism did not exist. I was tempted to think that there are no maps from $\emptyset$ into $\emptyset$, but consider the following definition of a map: Given two sets $A$ and $B$, a map $f:A\to B$ is a subset of the Cartesian product $A\times B$ such that, for each $a\in A$, there exists only one pair $(a,b)\in f\subset A\times B$ (obviously, we denote such unique $b$ by $f(a)$, $A$ is called the domain of the map $f$ and $B$ is called the codomain of the map $f$). Thinking this way, there is (a unique) map from $\emptyset$ into $\emptyset$! This is just $h=\emptyset\subset \emptyset\times \emptyset$. This is in fact a map, since I can't find any element in $\emptyset$ (domain) which contradicts the definition. But is $h$ a homeomorphism? What does it mean for $h$ to have an inverse, since the concept of identity map is not clear for $\emptyset$? Nevertheless, $h$ seems to be continuous, since it can't contradict (by emptiness) anything in the continuity definition ("pre-images of open sets are open")… So is $\emptyset$ homeomorphic to itself? Which is the mathematical consensus about this? "Homeomorphic by definition"? "We'd rather not speak about empty set homeomorphisms…" "…"? 
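(If it helps to see the vacuous reasoning spelled out mechanically, here is a toy Python sketch, encoding a map as a set of pairs exactly as above; `all` over an empty iterable returns `True`, which is the "vacuous truth" doing the work:)

```python
empty = set()
h = set()   # h is the empty set of pairs, a subset of the empty Cartesian product

def is_function(h, dom, cod):
    # every a in dom has exactly one partner, and every pair lies in dom x cod
    return all(sum(1 for (a2, b) in h if a2 == a) == 1 for a in dom) \
        and all(a in dom and b in cod for (a, b) in h)

def is_bijection(h, dom, cod):
    image = {b for (_, b) in h}
    return is_function(h, dom, cod) and len(image) == len(h) and image == cod

print(is_function(h, empty, empty))    # True, vacuously
print(is_bijection(h, empty, empty))   # True: h is its own (empty) inverse
```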
REPLY [4 votes]: In every category, every object is isomorphic to itself, via the identity morphism. This follows immediately from the definitions of category theory, the existence of a identity morphism for every object is an axiom. So if you want to call topological spaces a category, and the empty space an object of that category, then you must accept that the identity morphism of that space is a homeomorphism. The same goes for the empty set as object in the category of sets, whose identity map is a bona fide bijection. By the way "identity" morphism (or arrow) is just a name, and does not have to be an identity map (or a map at all), though it does have to be neutral in the composition with other morphisms (and with itself). But in the categories of sets and topological spaces morphisms are maps, and the identity map is the only map that is neutral in composition with morphisms, so the identity morphism must be the identity map. And in case of an empty set/space, the identity map is the only map to itself around anyway.<|endoftext|> TITLE: $\Bbb F_2[X]$ modules with 8 elements QUESTION [6 upvotes]: Problem 4. Let $\Bbb F_2$ be the field with 2 elements and let $R=\Bbb F_2[X]$. List, up to isomorphism, all $R$-modules with 8 elements. Solution. We use the classification theorem of modules over a PID. Since $R$ is a finite module, it is in particular an $\Bbb F_2$ vector space. We can write $$M\cong R/n_1R\oplus R/n_2 R\oplus\cdots\oplus R/n_r R$$ for polynmials $n_1\mid n_2\mid\cdots\mid n_r$. In our case, we have $\sum_{i=1}^r\deg n_i=3$, so we have three options: $r=1,\deg n_1=3$, $r=2,\deg n_2=2$, and $r=3,\deg n_3=1$. The first case yields 8 options. For the second case, we need $n_2$ to be reducible, so we have $X(X+1),X^2,(X+1)^2$ as choices. The first choice yields 2 decompositions, and the latter choices yield 1 decomposition each, for a total of 4. For the linear case, we need the same linear term repeated thrice, which is 2 choices. Therefore there are 14 in all, listed by invariant factors below: $$\begin{align} &\{X^3+(0/1)X^2+(0/1)X+(0/1)\}\\ &\{X^2,X\},\{X^2+X,X+(0/1)\},\{X^2+1,X+1\}\\ &\{X,X,X\},\{X+1,X+1,X+1\}. \end{align}$$ My question is the problem above. I can understand the solution, but how can I see that the solutions given are from distinct isomorphism classes? For instance, is the below true? $$R/(X)\oplus R/(X)\oplus R/(X)\cong R/(X+1)\oplus R/(X+1)\oplus R/(X+1)\\ \cong F_2\oplus F_2\oplus F_2$$ Thanks for any help. REPLY [3 votes]: Let $L$ be the comprehensive list of nonisomorphic $\Bbb{F}_2[X]$-modules of order $8$. 
Then by the counting argument you cite above $L$ must be contained in the following list: $\Bbb{F}_2[X]/(X^3)$ $\Bbb{F}_2[X]/(X^3+X^2)=\Bbb{F}_2[X]/(X^2(X+1))\\\cong \Bbb{F}_2[X]/(X^2)\oplus \Bbb{F}_2[X]/(X+1)$ $\Bbb{F}_2[X]/(X^3+X)=\Bbb{F}_2[X]/(X(X+1)^2)\\\cong \Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/((X+1)^2)$ $\Bbb{F}_2[X]/(X^3+1)=\Bbb{F}_2[X]/((X+1)(X^2+X+1))\cong {\Bbb{F}_2[X]/(X+1)\oplus \Bbb{F}_2[X]/(X^2+X+1)}$ $\Bbb{F}_2[X]/(X^3+X^2+X)=\Bbb{F}_2[X]/(X(X^2+X+1))\\\cong \Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/(X^2+X+1)$ $\Bbb{F}_2[X]/(X^3+X^2+1)$ $\Bbb{F}_2[X]/(X^3+X+1)$ $\Bbb{F}_2[X]/(X^3+X^2+X^1+1)= \Bbb{F}_2[X]/((X+1)^3)$ $\Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/(X^2)$ $\Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/(X(X+1))\\\cong \Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/(X)\oplus\Bbb{F}_2[X]/(X+1)$ $\Bbb{F}_2[X]/(X+1)\oplus \Bbb{F}_2[X]/(X(X+1))\\\cong \Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/(X+1)\oplus\Bbb{F}_2[X]/(X+1)$ $\Bbb{F}_2[X]/(X+1)\oplus \Bbb{F}_2[X]/((X+1)^2)$ $\Bbb{F}_2[X]/(X)\oplus \Bbb{F}_2[X]/(X)\oplus\Bbb{F}_2[X]/(X)$ $\Bbb{F}_2[X]/(X+1)\oplus \Bbb{F}_2[X]/(X+1)\oplus\Bbb{F}_2[X]/(X+1)$, where the $\Bbb{F}_2[X]$-module isomorphisms $\cong$ are due to the structure theorem (and the coprimeness of pertaining polynomials). Note that $X$ and $X+1$ are irreducible in $\Bbb{F}_2[X]$ as they are of degree $1$. Likewise $X^2+X+1,X^3+X^2+1$ and $X^3+X+1$ are irreducible in $\Bbb{F}_2[X]$ since they have no roots in $\Bbb{F}_2$. Again applying the structure theorem to the final forms of each of these (isomorphism classes of) modules we can conclude that $L$ is precisely the list provided. Also note that 1-8 in $L$ is the comprehensive list of nonisomorphic cyclic $\Bbb{F}_2[X]$-modules of order $8$. We can write $L$ in a more compact way using the so-called module-type notation. Define $p_{1,1}(X):=X, p_{1,2}(X):=X+1, p_{2,1}(X):=X^2+X+1,p_{3,1}(X):=X^3+X^2+1$,$p_{3,2}(X):=X^3+X+1\in\Bbb{F}_2[X]$. Then $L$ is $(p_{1,1}^3)$ $(p_{1,1}^2,p_{1,2}^1)$ $(p_{1,1}^1,p_{1,2}^2)$ $(p_{1,1}^1,p_{2,1})$ $(p_{1,2}^1,p_{2,1})$ $(p_{3,1}^1)$ $(p_{3,2}^1)$ $(p_{1,2}^3)$ $(p_{1,1}^1,p_{1,1}^2)$ $(p_{1,1}^1,p_{1,1}^1,p_{1,2}^1)$ $(p_{1,1}^1,p_{1,2}^1,p_{1,2}^1)$ $(p_{1,2}^1,p_{1,2}^2)$ $(p_{1,1}^1,p_{1,1}^1,p_{1,1}^1)$ $(p_{1,2}^1,p_{1,2}^1,p_{1,2}^1)$ Remark: The initial counting argument does not provide a priori that distint polynomials provide distint isomorphism classes. We prove distinctness by factoring them into irreducibles and referring to the structure theorem. It should also be noted that the same strategy is not that practical in general, for instance consider the question of classifying all $\Bbb{F}_3[X]$-modules of order $3^4$. Then there are $5$ options: $$ 4; 1+3; 2+2; 1+1+2; 1+1+1+1,$$ and just for the cyclic case one has to worry about $3^3$ polynomials to begin with, wlog taking them to be monic. Regarding the last part of your question, I'd like to add a comment. I believe you build those (ring) isomorphisms by using the evaluations at $0$ and $1$, and then you say e.g. $\Bbb{F}_2[0]\cong\Bbb{F}_2(0)=\Bbb{F}_2$. But for this to make sense as an $\Bbb{F}_2[X]$-module homomorphism you need to have a map $\operatorname{sca}:\Bbb{F}_2[X]\times\Bbb{F}_2\to\Bbb{F}_2$ that defines an $\Bbb{F}_2[X]$-module structure on $\Bbb{F}_2$ to begin with, which does not make much sense.<|endoftext|> TITLE: What is the maximum convergent $x$ in the power tower $x^{x^{x^{x\cdots}}}$? 
QUESTION [8 upvotes]: In the power tower $x^{x^{x^{x\cdots}}}$ where there is an infinite stack of $x$'s, what is the maximum convergent number? I know the answer by playing with the form $x^y=y$ and using Mathematica, but I don't know how to solve this by hand. REPLY [7 votes]: Fix an $x > 0$. Since the map $a \mapsto x^a$ is continuous, we know that if the infinite tower $x^{x^{x^\cdots}}$ converges to some limit $y$ then $x^y = y$, which implies $x = y^{1/y}$. Elementary calculus shows that the maximum possible value of $y^{1/y}$ occurs at $y = e$, so it is impossible for the infinite tower to converge unless $x \le e^{1/e}$. It remains to show that the sequence actually converges for $x = e^{1/e}$. To prove this we need only establish the inequality $1 < y \le x^y \le e$ for any $y$ in the range $1 < y \le e$. Then a simple induction will show that the sequence $x, x^x, x^{x^x}, \ldots$ is increasing and bounded, hence convergent. Finally, the desired inequality is easy to prove using the aforementioned calculation that shows $e^{1/e}$ is the unique maximum of the function $y^{1/y}$.<|endoftext|> TITLE: Burnside group $B(2, 3)$, how to see has $27$ elements and isomorphic to certain group? QUESTION [6 upvotes]: The Burnside group $B(d, n)$ is defined as the quotient of the free group on $d$ generators by the normal subgroup generated by all $n$th powers. Question. How do I see that $B(2, 3)$ has $27$ elements and is isomorphic to the group of matrices of the form$$\begin{pmatrix} 1 & x & y \\ 0 & 1 & z \\ 0 & 0 & 1 \end{pmatrix}$$for $x,y,z\in\mathbb{F}_3$? REPLY [11 votes]: That $u^3=1$ for all $u$ in this matrix group is an immediate calculation, and moreover this matrix group is generated by the matrices $e_{12}(1)$ and $e_{23}(1)$. So this matrix group is a quotient of $B(2,3)$. Therefore it is enough to show that $B(2,3)$ has order $\le 27$. Let $x,y$ be its generators, and write $X=x^{-1}$, $Y=y^{-1}$, and $[u,v]=uvu^{-1}v^{-1}$. \begin{align*}[[x,y],y]= & xyXYyyxYXY\\ = & xyXyxxyx(XY)^3\\ = & xyXyXyx \\ = & xx(Xy)^3x=xxx=1\end{align*} Similarly $[[x,y],x]=1$. So $[x,y]$ is central. It follows that every element can be written as $x^ay^b[x,y]^c$; moreover by the exponent condition, $a,b,c$ can be chosen in $\{0,1,2\}$. This yields at most 27 elements and concludes the proof.<|endoftext|> TITLE: Partial marginalization of conditional probability QUESTION [5 upvotes]: I was reading about marginalization on Wikipedia, specifically I read: $$p_X(x) = \int_y p_{X\mid Y}(x\mid y)p_Y(y)\,dy$$ I was wondering if the following is true $$\int_y p_{X\mid YZ}(x\mid y,z)p_Y(y) \, dy = p_{X\mid Z}(x\mid z)$$ $X, Y$ and $Z$ are random variables, with pdf-s $p_X$, p$_Y$ and $p_Z$ respectively. REPLY [3 votes]: Only if ${p}_{\lower{0.5ex}{Y}}(y)={p}_{\lower{0.5ex}{Y\mid Z}}(y\mid z)$ By the Law of Total Probability: $${p}_{\lower{0.5ex}{X\mid Z}}(x\mid z) =\int_{\Bbb R} {p}_{\lower{0.5ex}{X\mid Y,Z}}(x\mid y,z)~{p}_{\lower{0.5ex}{Y\mid Z}}(y\mid z) \,\mathrm{d}y$$ However if $Y$ and $Z$ are pairwise independent, then indeed: $${p}_{\lower{0.5ex}{X\mid Z}}(x\mid z) =\int_{\Bbb R} {p}_{\lower{0.5ex}{X\mid Y,Z}}(x\mid y,z)~{p}_{\lower{0.5ex}{Y}}(y) \,\mathrm{d}y$$<|endoftext|> TITLE: Sub-modules of free modules: twisting question a little QUESTION [5 upvotes]: The following theorems appear many places in books and on this site also. Sub-modules of a free module are free, provided the ring of scalars is P.I.D. If $R$ is not P.I.D. then sub-module of a free $R$-module may not be free. 
I have gone through the proof of theorem as well as counterexample. But my next question comes from these two facts: Let $M$ is an $R$-module (and assume that $M$ has proper sub-modules). If $R$ is not a P.I.D., then is it necessary that there exists a sub-module of $M$ which is not free? REPLY [3 votes]: Yes. If $M$ is not free, then the result is trivial. Assume that $M$ is free. Then $M$ contains a copy of $R$, so it is sufficient to prove the claim for $R$. Since $R$ is not a PID, then either it contains a non-principal ideal, or it is not an integral domain. In the latter case, let $a$ be a zero-divisor; then the submodule of $R$ generated by $a$ is not free. Finally, assume that $R$ is an integral domain, and let $I$ be a non-principal ideal of $R$. If $I$ were a free $R$-module, then it would admit a basis $(x_j)_{j\in J}$, with $J$ a set containing at least two elements. But then, for any $i\neq j$ in $J$, we would have $$ x_ix_j - x_jx_i = 0, $$ contradicting the linear independence of $x_i$ and $x_j$. Thus $I$ is not a free module.<|endoftext|> TITLE: Improper integral of $\sin(1/x)/x$ from 0 to 1 vs Lebesgue Integral QUESTION [5 upvotes]: Q1) How do I prove that the improper Riemann Integral $$\int_0^1\frac{\sin(\frac 1x)}x\,dx$$ converges? It does, according to WolframAlpha (http://www.wolframalpha.com/input/?i=integrate+sin(1%2Fx)%2Fx+from+0+to+1). The estimates $\sin(\frac 1x)\leq\frac 1x$ or $\sin(\frac 1x)\leq 1$ both do not seem to work here since $\frac 1x$ and $\frac{1}{x^2}$ are both not integrable on $(0,1]$. Q2) How do we prove that as a Lebesgue integral, $$\int_0^1\frac{\sin(\frac 1x)}x\,dx$$ does not exist? Roughly I know it is because of the $\infty-\infty$ reason because $f^+=\infty$, $f^-=\infty$, but how do we show that? Or alternatively, we could show $|f|$ is not Lebesgue integrable? Thanks for any help. REPLY [2 votes]: For Q2), hint: Let $a_n = 1/(3\pi/4+n\pi), b_n = 1/(\pi/4+n\pi).$ Observe $$\int_{a_n}^{b_n} \frac{|\sin (1/x)|}{x}\, dx \ge \frac{\sqrt2 /2}{b_n}(b_n-a_n).$$<|endoftext|> TITLE: Given finitely many points in a vector space $V$, is there a basis such that the first coordinate of each point is distinct? QUESTION [8 upvotes]: Suppose I have some $n$-dimensional vector space $V$ and a finite collection of $m$ distinct points $v_1,\dotsc, v_m\in V$. Is there a basis of $V$ such that the first coordinate of each $v_i$ is distinct? This obviously fails when the base field is finite, but my intuition over $\mathbb{R}^n$ has convinced me it's true when the base field is infinite. I'm having a hard time proving it though. Any suggestions? REPLY [3 votes]: Let $V$ be a vector space of dimension $n$ over a finite field. Lemma: Given a finite set of vectors $F$ there exists $W\leq V$ of dimension $n-1$ with $W\cap F=\varnothing$. We prove every maximal subspace with $W\cap F=\varnothing$ must have dimension $n-1$. Suppose not: Let $W$ be a maximal subset with $W\cap F$ and let $U$ be a subspace of dimension $2$ with $W\cap U=\varnothing$. Notice that for each $u\in U$ we have $f-\alpha u \in W$ for some non-zero scalar $\alpha$ and $f\in F$. Given $f\in F$ let $X_f$ be the set of vectors $u\in U$ such that $f-\alpha u\in W$ for some scalar $\alpha$. We conclude $\bigcup\limits_{f\in F}X_f=U$. If we define $Y_f$ as the subspace spanned by $X_f$ then it should also be clear that $\bigcup\limits_{f\in F} {Y_f}=U$. 
This implies that $Y_f=U$ for some $f$ (because a finite union of proper subspaces of a vector space over an infinite field cannot be the entire vector space). Therefore we can obtain $u_1$ and $u_2$ linearly independent vectors in $X_f$. Notice that we have $f-\alpha u_1\in W$ and $f-\beta u_2\in W$, which implies $\beta u_2-\alpha u_1\in W$, but we also have $\beta u_2-\alpha u_1\in U$. Which implies $\beta u_2-\alpha u_1=0$. A contradiction. Hence the lemma is proved. Applying the lemma to our particular case is trivial. Let $v_1,v_2\dots v_m$ be the vectors and consider the set $u_1,u_2,\dots u_k$ constisting of all values $v_i-v_j$. By the lemma there exists a subspace of dimension $n-1$ $W$ such that $u_i\not\in W$ for all $1\leq i \leq k$. Let $b_2,b_3\dots b_n$ be a basis for $W$. Then any extension $b_1,b_2\dots b_n$ does the trick.<|endoftext|> TITLE: Is $x \sim y$ iff $y - x \in \mathbb Q$ an equivalence relation? QUESTION [5 upvotes]: I can easily show that the relation $x\sim y$ iff $y - x \in \mathbb Q$ and $x,y \in (0,1]$ is an equivalence relation since it is reflexive (the number zero is rational), symmetric (the negative of a rational number is rational) and transitive as the sum/difference of two rational numbers is rational but I get stuck in what follows. Specifically, I know that equivalence relations induce mutually disjoint equivalence classes but I can't verify that here. My silly counterexample is that if $x=\frac{1}{3}$ then clearly $y=\frac{3}{3}$ is in the equivalence class but $y=\frac{3}{3}$ is also in the equivalence class of $x=\frac{2}{3}$ as that difference too is rational. My question therefore is, what have I misunderstood here? Thank you. REPLY [4 votes]: The numbers $\frac{1}{3}, \frac{2}{3}, \frac{3}{3}$ are all in the same equivalence class, they are all equivalent to each other.<|endoftext|> TITLE: 2 conjectured recursion limits for $e$ and $\pi$. QUESTION [6 upvotes]: Consider the following recursions $$ x_{n+2} = x_{n+1} + \frac{x_n}{n} $$ $$y_{n+2} = \frac{ y_{n+1}}{n} + y_n $$ I have been toying around with different starting values ( complex Numbers ) , divergeance etc. But was not able to conclude much. However I noticed when $$ x_1 = 0 $$ $$y_1 = 0 $$ $$ x_2 = 1 $$ $$ y_2 = 1 $$ We get the following limit recursions $$ \lim_{n \to \infty} \frac{n}{x_n} = e $$ $$ \lim_{n \to \infty} \frac{2 n}{y_n ^2} = \pi $$ How to prove these ?? And how about the divergeance / convergeance for other complex initial values ? Edit : a partial answer occurs here Mirror algorithm for computing $\pi$ and $e$ - does it hint on some connection between them? http://www.pi314.net/eng/miroir.php But the issue of other starting values is not resolved yet. ( so this is not a complete duplicate ) For the first recursion we have an answer ( see below ) but at the time of posting , the second has no answer with respect to variable initial conditions yet. REPLY [5 votes]: Here is a solution for the second case: Let $(y_n : n \geq 1)$ satisfy the recurrence relation $$ y_{n+2} = \frac{y_{n+1}}{n} + y_n, \qquad y_1 = a, \quad y_2 = b. \tag{1}$$ Let $y$ be the generating function of $(y_n)$, i.e., $$ y(x) = \sum_{n=1}^{\infty} y_n x^n. $$ The recurrence relation $\text{(1)}$ translates to the following differential equation: $$ x(x^2 - 1) y'(x) + (x+2)y(x) = ax(x+1) $$ Solving this equation under the constraint $y(x) = ax + bx^2 + \mathcal{O}(x^3)$ gives $$ y(x) = \frac{ax}{1-x} + \frac{x^2}{1-x}\left( \frac{a \arcsin x}{\sqrt{1-x^2}} + \frac{b-a}{\sqrt{1-x^2}} \right). 
$$ Now the following results are useful for our computation: Fact. We have the following Taylor expansions: $$ \frac{1}{\sqrt{1-x^2}} = \sum_{n=0}^{\infty} \frac{(2n-1)!!}{(2n)!!} x^{2n} \quad \text{and} \quad \frac{\arcsin x}{\sqrt{1-x^2}} = \sum_{n=0}^{\infty} \frac{(2n)!!}{(2n+1)!!} x^{2n+1}. $$ From this, we find that $$ y(x) = a\left( \sum_{n=1}^{\infty} x^n \right) + x^2 \sum_{n=0}^{\infty} \Bigg( a \sum_{0 \leq 2k+1 \leq n} \frac{(2k)!!}{(2k+1)!!} + (b-a) \sum_{0 \leq 2k \leq n} \frac{(2k-1)!!}{(2k)!!} \Bigg) x^n$$ and hence we have $$ y_{n+2} = a + a \sum_{0 \leq 2k+1 \leq n} \frac{(2k)!!}{(2k+1)!!} + (b-a) \sum_{0 \leq 2k \leq n} \frac{(2k-1)!!}{(2k)!!}, \qquad n \geq 0. $$ Finally, from the Stirling's formula it is easy to see that $$ \frac{(2n)!!}{(2n+1)!!} \sim \frac{\sqrt{\pi}}{2} \frac{1}{\sqrt{n}} \quad \text{and} \quad \frac{(2n-1)!!}{(2n)!!} \sim \frac{1}{\sqrt{\pi n}} $$ as $n \to \infty$. Therefore, by the Cesàro-Stolz theorem we have $$ y_n \sim \left( \sqrt{\frac{\pi}{2}} a + \sqrt{\frac{2}{\pi}} (b-a) \right) \sqrt{n}, $$ or equivalently, $$ \lim_{n\to\infty} \frac{y_n^2}{2n} = \frac{1}{\pi}\left( b + \left(\frac{\pi}{2}-1\right) a \right)^2 . $$<|endoftext|> TITLE: An interesting exercise about converging positive series, involving $\sum_{n\geq 1}a_n^{\frac{n-1}{n}}$ QUESTION [12 upvotes]: Yesterday I stumbled across an interesting exercise (Indam test 2014, Exercise B3): (Ex) Given a positive sequence $\{a_n\}_{n\geq 1}$ such that $\sum_{n\geq 1}a_n$ is convergent, prove that $$ \sum_{n\geq 1}a_n^{\frac{n-1}{n}}$$ is convergent, too. My proof exploits an idea from Carleman's inequality. We have: $$ a_n^{\frac{n-1}{n}}=\text{GM}\left(\frac{1}{n},2a_n,\frac{3}{2}a_n,\ldots,\frac{n}{n-1}a_n\right) $$ and by the AM-GM inequality $$ a_n^{\frac{n-1}{n}}\leq \frac{1}{n}\left(\frac{1}{n}+a_n\sum_{k=1}^{n-1}\frac{k+1}{k}\right)\leq \frac{1}{n^2}+\left(1+\frac{\log n}{n}\right)a_n $$ hence $$ \sum_{n\geq 1}a_n^{\frac{n-1}{n}}\color{red}{\leq} \frac{\pi^2}{6}+\left(1+\frac{1}{e}\right)\sum_{n\geq 1}a_n.$$ Now my actual Question: Is there a simpler proof of (Ex), maybe through Holder's inequality, maybe exploiting the approximations $$ \sum_{m TITLE: Algebraic Closure of $\mathbf F_p$ [Lang, Algebra, Chapter 6, Problem 22] QUESTION [5 upvotes]: Problem. Let $K$ be the field obtained from $\mathbf F_p$ by adjoining all primitive $\ell$-th roots of unity for primes $\ell\neq p$. Then $K$ is algebraically closed. It suffices to show that the polynomial $x^{p^n}-x$ splits in $K$ for all $n$. In order to show this, it in turn suffices to show that the polynomial $x^{q^n}-1$ splits in $K$ for all primes $q\neq p$ and all $n$. This is because $x^{p^n}-1= x(x^{p^n-1}-1)$. Say $p^n-1=p_1^{k_1} \cdots p_m^{k_m}$, where $p_i$'s are distinct primes. Assuming each $f_i(x):=x^{p_1^{k_i}}-1$ splits in $K$, we deduce that $K$ has a primitive $p_i^{k_i}$-th root of unity for all $1\leq i\leq m$ since each $f_i$ is separable by the derivative test. If $\zeta_i$ denotes the primitive $p_i^{k_i}$-th root of unity in $K$, then we see that $\zeta_1\times \cdots\times \zeta_m$ is a primitive $p_q^{k_1}\times \cdots \times p_m^{k_m}$-th root of unity and we see that $x^{p^n-1}-1$ splits in $K$. So the problem boils down to showing that $x^{q^n}-1$ splits in $K$ for all primes $q\neq p$ and all $n$. I am stuck here. 
REPLY [6 votes]: The idea is that given any prime power $ q^k $, we may take a prime $ w $ such that $ w $ divides $ p^{q^k} - 1 $ but does not divide $ p^{q^{k-1}} - 1 $, in other words, such that $ p $ has order $ q^k $ modulo $ w $. First, assuming the existence of such a prime $ w $, we observe that $ \mathbf F_p(\zeta_w) $ is the finite field with $ p^{q^k} $ elements, so it is the splitting field of $ X^{p^{q^k}} - X $ over $ \mathbf F_p $. Now, we proceed with the argument. To see that such a prime $ w $ exists, we use the polynomial identity $$ \frac{(1 + X)^q - 1}{X} = \sum_{k=0}^{q-1} C(q, k+1) X^{k} $$ and write $$ a = \frac{p^{q^k} - 1}{p^{q^{k-1}} - 1} = \sum_{j=0}^{q-1} C(q, j+1) (p^{q^{k-1}} - 1)^j $$ Clearly, we have $ a > q $. On the other hand, if a prime $ v $ divides both $ a $ and the denominator, it must also divide $ q $ by the sum on the right hand side, and since $ q $ is prime we must have $ v = q $. However, in that case $ q^2 $ cannot divide $ a $, so $ a $ has a prime factor $ w \neq q $. Since $ w $ cannot be a divisor of the denominator, it is the desired prime number.<|endoftext|> TITLE: One sided Chebyshev's inequality QUESTION [8 upvotes]: How to prove the one-sided Chebyshev's inequality which states that if $X$ has mean $0$ and variance $\sigma^2$, then for any $a > 0$ $$P(X \geq a) \leq \frac{\sigma^2}{\sigma^2+a^2} \quad?$$ Attempted solution: I know the Chebyshev's inequality which States that$$P(|X-\mu| \geq a) \leq \frac{\mathrm{Var}(X)}{a^2}~.$$ If I first argue that for any $b > 0$ $$P(X \geq a) \leq P{[(X+b)^2 \geq (a+b)^2]} \\ \begin{align} \implies &P(X\geq a) \leq \frac{E(X+b)^2}{(a+b)^2} \\ &P(X \geq a) \leq \frac{E(X^2)+2E(X)b+b^2}{(a+b)^2} \\ &P(X \geq a) \leq \frac{\sigma^2+ b^2}{(a+b)^2} \end{align}$$ I got the correct answer. REPLY [16 votes]: I have a couple of proofs at http://www.se16.info/hgb/cheb.htm#OTProof and http://www.se16.info/hgb/cheb2.htm One of these, loosely based on Probability and Random Processes by Grimmett and Stirzaker, would give a proof like this: With $a>0$, for any $b\ge 0$ $$P(X\ge a) = P(X+b \ge a+b) \le E\left[\dfrac{(X+b)^2}{(a+b)^2}\right] = \dfrac{\sigma^2+b^2}{(a+b)^2}$$ But treating $\dfrac{\sigma^2+b^2}{(a+b)^2}$ as a function of $b$, the minimum occurs at $b = \sigma^2 / a$, so $$P(X\ge a) \le \dfrac{\sigma^2+(\sigma^2/a)^2}{(a+\sigma^2/a)^2} =\dfrac{\sigma^2(a^2+\sigma^2)}{(a^2+\sigma^2)^2} = \dfrac{\sigma^2}{ \sigma^2+a^2}.$$<|endoftext|> TITLE: Weighted sum of two dice such that the result is a random integer between $0$ and $35$ QUESTION [8 upvotes]: The numbers shown by 2 dice are labelled $d$ and $e$. $A, B$ and $C$ are constants, giving a score $S=Ad + Be + C$. Find $A, B$ and $C$ such that the range of possible values for $S$ covers all integers from $0$ to $35$, with an equal probability of each score. [Oxford PAT exam, 2011] I have tried to find equations in terms of $A, B, C$ for different values of $S$. REPLY [2 votes]: I'm sure a more rigorous approach is possible, but thinking about this intuitively: If we are rolling two dice, there are 6x6=36 possible outcomes. 0 to 35 is a range of 36 possible outcomes. Therefore, the solution must involve exactly one combination of the roll of the dice for each outcome: we can't have 2,1 and 1,2 both mapping to the same value, for example. So I imagine a 2-dimensional table with 6 rows and 6 columns, with each cell in the table having a different number, from 0 to 35. The first row would be 0 to 5, the second 6 to 11, and so on. 
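In code, the table I have in mind is just this (a throwaway Python sketch):

```python
vals = iter(range(36))
table = [[next(vals) for _ in range(6)] for _ in range(6)]
# table[0] == [0, 1, 2, 3, 4, 5], table[1] == [6, 7, 8, 9, 10, 11], and so on;
# cell (d, e) of the table will be the score assigned to the roll (d, e)
```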
Every number from 0 to 35 occurs. Each number occurs exactly once, that is, each number represents exactly one combination of the roll of two dice, so all have equal probability. I could generate the first number in each row by subtracting one from the die roll and then multiplying by 6. Or, if you prefer, multiply by 6 and then subtract 6. For the column offset I take the second die minus 1. That is: S=(d x 6 - 6) + (e - 1) or S=d x 6 + e - 7<|endoftext|> TITLE: Is there a "good" reason why $\left\lfloor \frac{n!}{11e}\right\rfloor$ is always even? QUESTION [59 upvotes]: (A follow-up of sorts to this question.) The quantity $\left\lfloor \frac{n!}{11e}\right\rfloor$ is always even, which can be proved as follows. Using the sum for $\frac{1}{e}$, we split the fraction up into three parts: $A_n=\sum_{k=0}^{n-11} (-1)^k\frac{n!}{11k!}$ is a multiple of the even integer $10! \binom{n}{11}$, and so can be ignored. $B_n=\sum_{k=n-10}^n (-1)^k\frac{n!}{11k!}=\frac{1}{11}\sum_{k=0}^{10} (-1)^{n-k}(n)_k$. This is a finite sum of falling factorials, all of which are polynomial in $n$ with integer coefficients. So $B_n$ is of the form $\frac{P(n)}{11}$ where $P(n)$ is a polynomial in $n$ with integer coefficients. $C_n = \sum_{k=n+1}^{\infty} (-1)^k\frac{n!}{11k!}$ is an alternating series whose terms decrease monotonically in absolute value, and so $|C_n|<\frac{n!}{11(n+1)!}<\frac{1}{11}$. Putting all this together, we can see that: Since $B_n$ is always an integer multiple of $\frac{1}{11}$ and $|C_n|<\frac{1}{11}$, $C_n$ can only affect the value of $\left\lfloor \frac{n!}{11e}\right\rfloor$ when $B_n$ is an integer. In this case it will change the parity when $C_n$ is negative (i.e., $n$ is even) and leave it alone when $C_n$ is positive. Since $P(n)=11B_n$ is a polynomial with integer coefficients, $B_n$'s integer status is $11$-periodic, which means that whether $C_n$ affects the parity of $\left\lfloor \frac{n!}{11e}\right\rfloor$ is $22$-periodic. Similarly, the parity of $\lfloor B_n \rfloor$ is also $22$-periodic. So the parity of $\left\lfloor \frac{n!}{11e}\right\rfloor$ is $22$-periodic. Moreover, we can compute its first $22$ values to be: $$ 0, 0, 0, 0, 0, 4, 24, 168, 1348, 12136, 121360, 1334960, 16019530, 208253902, 2915554640, 43733319612, 699733113794, 11895462934514, 214118332821268, 4068248323604100, 81364966472082010, 1708664295913722230 $$ (a sequence which does not appear to be in OEIS). All of these are even, and so $\left\lfloor \frac{n!}{11e}\right\rfloor$ must be even for all $n$. This is not a very satisfying proof, though; in the end, it looks like we need a random $2^{22}$-fold coincidence to go our way in order to get the result we want. (In fact, that's the only place we used the specific value of $11$ in our proof at all; the rest of the proof shows that the parity of $\left\lfloor \frac{n!}{ke}\right\rfloor$ is $2k$-periodic for all positive integers $k$.) Even though $11$ was chosen arbitrarily, it looks like the heuristic probability of everything lining up falls off rapidly enough that it's surprising it all works out for any $k$ which is even that large. Can someone convince me that this fact is less surprising than it looks? I would take either a completely different proof that established the result with less case analysis, or a reason why the parity of these numbers can be expected to be non-independent... 
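(For what it's worth, the claim is cheap to check exactly by machine. A short Python sketch, using the standard decomposition $n!/e = D_n + C_n$ with $D_n=\sum_{k=0}^{n}(-1)^k\,n!/k!\in\mathbb Z$, $|C_n|<1$ and $\operatorname{sign}(C_n)=(-1)^{n+1}$ — essentially the $A_n,B_n,C_n$ bookkeeping above with the factor $11$ pulled out:)

```python
def floor_fact_over_11e(n):
    # D_n = sum_{k=0}^{n} (-1)^k n!/k!  (an integer; D_n = n*D_{n-1} + (-1)^n, D_0 = 1)
    d = 1
    for k in range(1, n + 1):
        d = k * d + (-1) ** k
    q, r = divmod(d, 11)
    # n!/e = D_n + C_n with |C_n| < 1 and sign(C_n) = (-1)^(n+1)
    if r == 0 and n % 2 == 0:        # C_n < 0 drops us just below a multiple of 11
        return q - 1
    return q

print([floor_fact_over_11e(n) for n in range(22)])   # reproduces the 22 values listed above (n = 0,...,21)
print(all(floor_fact_over_11e(n) % 2 == 0 for n in range(2000)))   # True, consistent with the 22-periodicity argument
```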
REPLY [6 votes]: Half a suggestion for simplification: It seems a little awkward to work with the parity of the floor of $n!/11e$ when the more fundamental question is why the fractional part of $n!/ 22e$ is always less than 1/2 or $$\left\{\frac{n!}{22e}\right\}<\frac12.$$ Your $A_n/2$ is always an integer, so it has zero fractional part. The fractional part of $B_n/2=P(n)/22$ is of course 22-periodic. Evaluate the polynomial $P(n)$ modulo 22 for a complete residue system modulo 22, e.g. for the integers 0 through 21. (N.B. $P(n)$ is not strictly a polynomial -- it involves a factor of $(-1)^n$ -- but that does not affect its periodicity for any even period.) $C_n/2$ is small in magnitude and its sign alternates with a period 2 which divides 22. Although this is a small change, I feel that reworking the proof in this way may be more illuminating and remove some of the "magic" and obfuscation of the underlying ideas.<|endoftext|> TITLE: Knight and Knaves logic problem QUESTION [7 upvotes]: From "Discrete mathematics and its applications", a book by Kenneth H. Rosen, chapter 1.1 exercise 57, goes as: A says "I am a knave or B is a knight" and B says nothing. Knight always tell the truth and knaves always lie. We are to determine of which type are A and B. Assuming that, p: A is a knight q: B is a knight Can I arrive to the answer (provided by the book), which is "A is a knight and B is a knight" using a truth table? And if not, then how? REPLY [4 votes]: A truth table would help. In that table, there are four possible truths; (i) A and B are knights, (ii) A is a knight and B is a knave, (iii) A is a Knave and B is a knight, and (iv) A and B are knaves. Let's proceed with testing whether (i) is true or false. If both A and B are knights, then the statement by A that "I am either a knave or B is a knight" cannot be refuted. Next, let's test whether (ii) is true or false. If A is a knight and B is a knave, then the statement by A that "I am either a knave or B is a knight" cannot be true. By hypothesis, A is a knight and is telling the truth. So, A is not a knave and the statement by A must mean that B is a knight. Inasmuch as B is a knave by hypothesis, we have a contradiction. Therefore, the hypothesis that A is a knight and B is a knave is false. Can you continue from here?<|endoftext|> TITLE: $\lim_{x\to 0}\left[1^{\frac{1}{\sin^2 x}}+2^{\frac{1}{\sin^2 x}}+3^{\frac{1}{\sin^2 x}}+\cdots + n^{\frac{1}{\sin^2 x}}\right]^{\sin^2x}$ QUESTION [5 upvotes]: $$\lim_{x\to 0}\left[1^{\frac{1}{\sin^2x}}+2^{\frac{1}{\sin^2x}}+3^{\frac{1}{\sin^2x}}+\cdots + n^{\frac{1}{\sin^2x}}\right]^{\sin^2x}$$ Limit is of form $(\infty)^{0} $ $$\lim_{x\to 0}e^{\sin^2x\log{ {\left[1^{\frac{1}{\sin^2x}}+2^{\frac{1}{\sin^2x}}+3^{\frac{1}{\sin^2x}}+\cdots + n^{\frac{1}{\sin^2x}}\right]}}}$$ I don't know how to proceed further. REPLY [3 votes]: $$ \begin{align} &\lim_{x\to0}\left[1^{\frac1{\sin^2(x)}}+2^{\frac1{\sin^2(x)}}+3^{\frac1{\sin^2 (x)}}+\cdots+n^{\frac1{\sin^2(x)}}\right]^{\sin^2(x)}\\ &=\lim_{x\to\infty}\left[1^x+2^x+3^x+\cdots+n^x\right]^{1/x}\\ &=n\lim_{x\to\infty}\left[\left(\frac1n\right)^x+\left(\frac2n\right)^x+\left(\frac3n\right)^x+\cdots+\left(\frac{n-1}n\right)^x+1^x\right]^{1/x}\\[4pt] &=n\,[0+0+0+\cdots+0+1]^0\\[8pt] &=n \end{align} $$<|endoftext|> TITLE: Are there multidimensional algebraic numbers that aren't algebraic numbers? 
QUESTION [5 upvotes]: Note, in the following, when I'm talking about solutions in multiple dimensions compared to ones in one dimension, what I mean is that each coordinate on its own should be considered a single-dimensional solution. So for instance, if I get $\begin{bmatrix} x\\ y \end{bmatrix} = \begin{bmatrix} s_1\\ s_2 \end{bmatrix}$, then I consider $s_1$ and $s_2$ two separate solutions, eventhough they normally clearly are only one solution of a two-dimensional problem. This is so I can actually compare with one-dimensional solutions. The whole premise of the question wouldn't make sense if not for that. The set of algebraic numbers is the set of roots of polynomials of arbitrary degree $n$ with integer coefficients $a_i$. $$P_n{\left[x\right]} = \sum_{i = 0}^{n}{a_i x^i}\\ \mathbb{\overline{Q}_n} : \left\{ x | P_n{\left[x\right]} = 0 \right\}\\ \mathbb{\overline{Q}} : \text{values of } \mathbb{\overline{Q}_n}\text{ for any } n.$$ If you try plugging in algebraic numbers as coefficients for those polynomials, you won't find anything new. The solutions will all be algebraic numbers as well. Is this also true if you allow higher-dimensional polynomials? Say you have a polynomial of the form $a_{2 0} x^2 + a_{0 2} y^2 + a_{1 1} x y + a_{1 0} x + a_{0 1} y + a_{0 0} = 0$ where $a_{i j} \in \mathbb{Z}$ and $a_{2 0}, a_{0 2}, a_{1 1}$ can't all be zero. If I'm not mistaken, this will not yet suffice since you'll get infinitely many solutions that form a $1$-dimensional subspace. So let's add a second polynomial of the same form to fix a couple points. $$\begin{align*} a_{2 0} x^2 + a_{0 2} y^2 + a_{1 1} x y + a_{1 0} x + a_{0 1} y + a_{0 0} & = 0 \\ b_{2 0} x^2 + b_{0 2} y^2 + b_{1 1} x y + b_{1 0} x + b_{0 1} y + b_{0 0} & = 0 \\ a_{i j}, b_{i j} & \in \mathbb{Z} \end{align*}$$ and the six highest degree terms can't all be zero, so that at least one polynomial has degree $2$. As long as these equations are independent, only a discrete finite set of solutions should be left over. I declare these solutions to be two-dimensional algebraic numbers of the second degree. Are these solutions the same as (one-dimensional) algebraic numbers? By trying out this scheme for the first degree case (so just systems of linear equations), it is easy to see that the results will just be rational numbers which already are fully covered by (one-dimensional) first degree algebraic numbers (i.e. Integers and Rational Numbers). Based on that result, my hunch is that the answer to the title question will be negative: Algebraic numbers already fully cover this case for all degrees and all dimensions. But of course, the degree $1$ case is trivially represented in linear algebra, so it could easily be the case that non-algebraic solutions exist only for higher degrees. So is my hunch correct? Bonus question 1: (Assuming my hunch is right) Will higher dimensional solutions change in degree? I.e. Can I represent (one-dimensional) algebraic numbers of degree $>n$ as solutions to systems of multidimensional polynomial equations of degree $n$? Solution counting henceforth will be in the normal sense: $\begin{bmatrix} x\\y\end{bmatrix} = \begin{bmatrix}s_1\\s_2\end{bmatrix}$ is only one solution. Bonus question 2: Up to how many solutions will a system of polynomial equations of degree $n$ in $d$ dimensions generally have? REPLY [2 votes]: Here is a way of phrasing what Jyrki said slightly differently (although I like his wording better--this is just to add a different perspective). 
Let $R=k[x,y]/(f,g)$ and $X=\text{Spec}(R)$. Assume that $f$ and $g$ have no irreducible factors in common. I claim that this implies $X$ is $0$-dimensional. Indeed, let $Y\subseteq X$ be an irreducible component of $X$. Then, $Y$ corresponds to a minimal prime $\mathfrak{p}\supseteq (f,g)$. If $\mathfrak{p}$ weren't maximal then $\mathfrak{p}=(h)$ for some irreducible polynomial $h$ and this implies then that $h\mid f,g$ contradicting our assumptions. Thus, $\mathfrak{p}$ is maximal and thus $Y=\text{Spec}(k[x,y]/\mathfrak{p})$ is $0$-dimensional. This then implies that $R$ is a $0$-dimensional $\mathbb{Q}$-algebra. This is somewhat obvious if one uses Jyrki's result relating Krull dimension and transcendence degree, but there's an easier method in this case (using just basic commutative algebra). Namely, since $R$ is $0$-dimensional and Noetherian it's Artinian. But, this then means that $R=A_1\times\cdots\times A_n$ with $A_i$ Artin local rings. Now, if $\mathfrak{m}_i$ denotes the maximal ideal of $A_i$ then $A_i/\mathfrak{m}_i$ is finite-dimensional over $\mathbb{Q}$ (this follows since $A_i$ is finitely generated over $\mathbb{Q}$--this is the Nullstellensatz). Note then we have the natural filtration of $A_i$ by $\mathbb{Q}$-subspaces $$A_i\supseteq\mathfrak{m}_i\supseteq\cdots\supseteq\mathfrak{m}_i^k=\{0\}$$ we will then be done (showing that $A_i$ is finite dimensional) if we can show that each $\mathbb{Q}$-space $\mathfrak{m}_i^\ell/\mathfrak{m}_k^{\ell+1}$ is finite-dimensional. But, note that it's dimension over $A_i/\mathfrak{m}$ (which is a finite-dimensional $\mathbb{Q}$-algebra) is finite since it's dimension (by Nakayama's lemma) is a minimal generating set for $\mathfrak{m}_i^\ell$ (which is finite since $A_i$ is Noetherian). Thus, the $A_i$, and thus $R$ is finite-dimensional over $\mathbb{Q}$. So, now, suppose that $(x,y)\in K^2$ is a solution to $f=g=0$ (where $K$ is any field of characteristic $0$--perhaps $K=\mathbb{C}$ is the obvious choice from your setup). Then, note that $\mathbb{Q}[x,y]\subseteq K$ has the property that it's a quotient of $R$ and thus, finite-dimensional as a $\mathbb{Q}$-algebra. This implies that $$\mathbb{Q}[x,y]\subseteq\{\alpha\in K:\alpha\text{ algebraic over }\mathbb{Q}\}$$ or, said differently, that $x,y\in\overline{\mathbb{Q}}$. A perhaps nice way of thinking about this conceptually (with a lot swept under the rug) is the following. For a finite-type $\mathbb{Q}$-scheme the closed points of $X$ correspond (essentially--one must mod out by a Galois action) to the $\overline{\mathbb{Q}}$-points $X(\overline{\mathbb{Q}})$. So, one should see that $X$ produces non-algebraic points precisely when $X$ has non-closed points. That said, a (finite type $\mathbb{Q}$-scheme) scheme $X$ is zero-dimensional if and only if it consists of finitely many closed points. So, $X$ will have non-algebraic points if and only if it's zero-dimensional. If you like algebra, then one can phrase this last condition as follows. The system $f_1=\cdots=f_k=0$ (with $f_i\in \mathbb{Q}[x_1,\ldots,x_n]$) will have non-algebraic points if and only if $\sqrt{(f_1,\ldots,f_k)}$ is NOT a product of maximal ideals. So, in your specific example, if $(a_i,b_i)$ denote the finitely many solutions of $f=g=0$ then $$\sqrt{(f,g)}=\prod_i (x-a_i,y-b_i)$$ which shows why your example produces no non-algebraic numbers. 
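For concreteness, here is a small computational sketch of this (my own illustration; the pair of conics is an arbitrary choice, not taken from the question). Eliminating one variable with a resultant leaves a nonzero polynomial in the remaining variable with rational coefficients, so every coordinate of a common solution is algebraic:

    # Sketch: two integer-coefficient conics with no common factor.  The resultant
    # eliminating y is a nonzero polynomial in x alone over Q, so the x-coordinate of
    # any common solution is algebraic (and symmetrically for y).
    from sympy import symbols, resultant, Poly, solve

    x, y = symbols('x y')
    f = x**2 + y**2 - 4        # example conic (my own choice)
    g = x*y - 1                # example conic (my own choice)

    print(Poly(resultant(f, g, y), x))   # x**4 - 4*x**2 + 1 -> x is a root of this over Q
    print(solve([f, g], [x, y]))         # the four common points; all coordinates algebraic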
Just as a silly example: the non-algebraic point $(\pi,\pi^{-1})$ of $\mathbb{G}_{m,\mathbb{Q}}=\text{Spec}(\mathbb{Q}[x,y]/(xy-1))$ gives rise to the point $(0)=(xy-1)$ (since this is the kernel of the map $\mathbb{Q}[x,y]\to\mathbb{Q}[\pi,\pi^{-1}]$ with $x\mapsto \pi, y\mapsto \pi^{-1}$) which is a non-closed point. EDIT: Here is another criterion that I think you might like. I claim that if $f_i\in k[x_1,\ldots,x_n]$ then $f_1=\cdots=f_k=0$ has only algebraic solution (in some field extension $K/\mathbb{Q}$) if and only if for one (equivalently all) algebraically closed extensions $L/\mathbb{Q}$ the system $f_1=\cdots=f_k=0$ has finitely many roots. Indeed, label the implications $f_1=\cdots=f_k=0$ has only algebraic solutions. $f_1=\cdots=f_k=0$ has finitely many solutions in $\overline{\mathbb{Q}}$. $f_1=\cdots=f_k=0$ has finitely many solutions in any $K/\mathbb{Q}$. $f_1=\cdots=f_k=0$ has finitely many solutions in some $L/\mathbb{Q}$ with $L=\overline{L}$. -1. implies 2. since we showed that in this $X:=\text{Spec}(R)$ with $R:=k[x_1,\ldots,x_n]/(f_1,\ldots,f_k)$ is $0$-dimensional and so discrete and so (by Noetherianess) finite. -2. implies 1. follows from Noether normalization. Namely, if 1) didn't hold then $X$ would be positive dimensional, so surject onto some $\mathbb{A}^r_\mathbb{Q}$ with $r\geqslant 1$ and thus, consequently, have infinitely many $\overline{\mathbb{Q}}$-points. -2. implies 3. because since $X$ is a finite collection of $\overline{\mathbb{Q}}$-points by the equivalence of 1. and 2. -3. implies 4. obviously. -4. implies 2. obviously. So, your process of adding more equations to cut down the number of solutions to a finite amount doomed your quest for non-algebraic points from the beginning.<|endoftext|> TITLE: How can the equation of a hyperbola be $xy=1$? QUESTION [5 upvotes]: We know that the standard form of a hyperbola is $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$ which is not coherent with $xy=1$. So, how can we say that $xy=1$ is a hyperbola? REPLY [4 votes]: Actually equations in the form of $Ax^2+2Bxy+C^2+Dx+Ey=F$ can be simplified to an equation with variables $u,v$ depends on $x,y$ such that $\{u,v\}$ is an orthogonal basis for $\Bbb R^2$, i.e. one can rotate the graph in $uv$ coordinate into $xy$ plane. Let $Ax^2+2Bxy+C^2+Dx+Ey=F$. Then this equation can be written as $$q(x,y)=\begin{pmatrix}x&y\end{pmatrix}\begin{pmatrix}A&B\\B&C\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}+\begin{pmatrix}D&E\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix}=F$$ Find the eigenvalues and eigenvectors of $\begin{pmatrix}A&B\\B&C\end{pmatrix}$,namely $v_1,\lambda_1,v_2,\lambda_2$ . Then a rotation matrix$R$ can be defined as $$R=\begin{pmatrix}\frac{v_1}{|v_1|}&\frac{v_2}{|v_2|}\end{pmatrix}\text{ if }\det\begin{pmatrix}\frac{v_1}{|v_1|}&\frac{v_2}{|v_2|}\end{pmatrix}=1$$ $$R=\begin{pmatrix}\frac{v_1}{|v_1|}&-\frac{v_2}{|v_2|}\end{pmatrix}\text{ if }\det\begin{pmatrix}\frac{v_1}{|v_1|}&\frac{v_2}{|v_2|}\end{pmatrix}=-1$$ In such way $\begin{pmatrix}x\\y\end{pmatrix}=R\begin{pmatrix}u\\v\end{pmatrix}$. One can verify that the $xy$ term in the original given equation would be eliminated. Draw the graph with variables $u,v$ on the $uv$ plane. Then draw $xy$ plane with $u,v$ axis on it, then just draw the graph with respect to $u,v$ axis. For practice, try graph $x^2+2xy+y^2-3x+y=0$, which is a parabola.<|endoftext|> TITLE: Weak Riemann Hypothesis? 
QUESTION [6 upvotes]: The Riemann Hypothesis says that all non-trivial zeros of the Riemann zeta function lie on $Re(z)=\frac{1}{2}$ line instead this region $Re(z) \in (0,1)$. It seems a natural question to be that instead of proving that $Re(z)=\frac{1}{2}$ one could try to prove that $Re(z) \in (\epsilon,1-\epsilon)$. I am familiar with a zero-free region that is used in the analysis of Prime-number theorem but that is too weak to conclude some like this. My question is this question as hard as proving Riemann Hypothesis ?? Are there some developments toward proving this ?? Or a result that proving this would imply proving Riemann Hypothesis? REPLY [5 votes]: We don't know yet if $\zeta(s)$ has a sequence of zeros converging to $Re(s)=1$. There is an Euler product (but having no functional equation) having a sequence of zeros converging to $Re(s) = 1$. Let $$h(x) = x-\sum_{k=K}^\infty \frac{x^{1-1/k+ik^2}}{1-1/k+ik^2}$$ and iteratively for every prime $q$ : $$a_{q} = h(q) -\sum_{p < q} a_{p} $$ Finally let $a_n = \prod_{p | n} a_p$ and set $$F(s) = \sum_{n=1}^\infty a_n n^{-s} = \prod_p (1+\sum_{k \ge 1} a_{p^k}p^{-sk} )= \prod_p \left( 1+ \frac{a_p}{p^s-1}\right)$$ So that $$\log F(s) = \sum_p \log (1+ \frac{a_p}{p^s-1}), \qquad \frac{F'(s)}{F(s)} = \sum_p \frac{\frac{a_pp^{s}\ln(p)}{(p^s-1)^2)} }{1+ \frac{a_p}{p^s-1}} = \sum_p a_p p^{-s} + \sum_p \sum_{k \ge 2} b_{p^k}p^{-sk}$$ By the Abel summation formula you have $$\frac{F'(s)}{F(s)} = s \int_1^\infty g(x) x^{-s-1}dx, \qquad g(x) = \sum_{p < x} a_p+\sum_{p^k < x} b_{p^k}$$ And the prime gap shows that $$g(x) = \sum_{p < x} a_p + \mathcal{O}(x^{1/2+\epsilon}) = h(x) + \mathcal{O}(x^{1/2+\epsilon})$$ i.e. $$\frac{F'(s)}{F(s)}+\frac{1}{s-1}- \sum_{k=K}^\infty \frac{1}{s-1+1/k-ik^2} = s\int_1^\infty \left(g(x)-h(x)\right) x^{-s-1}dx$$ is analytic for $Re(s) > 1/2$, and hence $F(s)$ is meromorphic there, with one pole at $s=1$ and its zeros at $1-\frac{1}{k}+ik$<|endoftext|> TITLE: Integral using residue theorem complex analysis QUESTION [6 upvotes]: How to solve this integral using residue theorem? $$\int_0^∞ \frac{x^{(a-1)}}{(x+b)(x+c)} \, dx $$ $0 < a < 1, \ \ \ b > 0, \ \ \ c > 0$ REPLY [3 votes]: $\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\int_{0}^{\infty}{x^{a - 1} \over \pars{x + b}\pars{x + c}} \,\dd x:\ ?.\qquad 0 < a <1\,,\quad b >0\,,\quad c>0}$. $$\bbox[8px,#efe,border:0.1em groove navy]{\mbox{It's interesting to see that the integral can be performed by 'real methods'}}$$ \begin{align} &\color{#f00}{\int_{0}^{\infty}{x^{a - 1} \over \pars{x + b}\pars{x + c}}\,\dd x} = {1 \over c - b}\pars{\int_{0}^{\infty}{x^{a - 1} \over x + b}\,\dd x - \int_{0}^{\infty}{x^{a - 1} \over x + c}\,\dd x}\label{1}\tag{1} \end{align} The LHS integral converges whenever $\ds{0 < \Re\pars{a} < \color{#f00}{2}}$ albeit the OP asked for the condition $\ds{0 < a < \color{#f00}{1}}$. 
The RHS integrals converge whenever $\ds{0 < \Re\pars{a} < \color{#f00}{1}}$ which coincides with the OP condition $\ds{0 < a < 1}$ whenever $\ds{a \in {\mathbb R}}$. $\bbox[8px,#ffe,border:0.1em groove navy]{\mbox{Then,}\ \eqref{1}\ \mbox{is evaluated with}\ \ds{0 < \Re\pars{a} < \color{#f00}{1}}\ \mbox{which is more general that the above OP condition}}$ In the first integral we make $\ds{x/b \mapsto x}$ while in the second we make $\ds{x/c \mapsto x}$: \begin{align} &\color{#f00}{\int_{0}^{\infty}{x^{a - 1} \over \pars{x + b}\pars{x + c}}\,\dd x} = {b^{a - 1} - c^{a - 1} \over c - b}\int_{0}^{\infty}{x^{a - 1} \over x + 1} \,\dd x\label{2}\tag{2} \end{align} Now, $\ds{t \equiv {1 \over x + 1}\implies x = {1 \over t} - 1\implies \totald{x}{t} = -\,{1 \over t^{2}}}$: \begin{align} &\color{#f00}{\int_{0}^{\infty}{x^{a - 1} \over \pars{x + b}\pars{x + c}}\,\dd x} = {b^{a - 1} - c^{a - 1} \over c - b}\int_{1}^{0}t\pars{{1 \over t} - 1}^{a - 1} \,\pars{-\,{\dd t \over t^{2}}} \\[5mm] = &\ {b^{a - 1} - c^{a - 1} \over c - b}\int_{0}^{1}t^{-a}\pars{1 - t}^{a - 1}\,\dd t \label{3.a}\tag{3.a} = {b^{a - 1} - c^{a - 1} \over c - b}\, {\Gamma\pars{-a + 1}\Gamma\pars{a} \over \Gamma\pars{1}} \\[5mm] = &\ \color{#f00}{{b^{a - 1} - c^{a - 1} \over c - b}\,{\pi \over \sin\pars{\pi a}}} \label{3.b}\tag{3.b} \end{align} Note that the integral involved in \eqref{3.a} $\pars{~the\ Beta\ function~}$ converges whenever $$ \Re\pars{-a} > -1\,,\quad\Re\pars{a - 1} > -1\qquad\implies\qquad 0 < \Re\pars{a} < 1 $$ which defines the general condition for the integrals evaluation.<|endoftext|> TITLE: Only $\pi$ has this property? QUESTION [7 upvotes]: We know that, for any rational number $p$, we have that $\cos(p\pi)$ is an algebraic number. Since this property comes from the fact that $e^{ip\pi}$ is algebraic (as a root of unity), I suspect that $\pi$ is the unique transcendental number with such property, in the sense that there does not exists another transcendental number $\alpha\ne q \pi$, for rational $q$, such that $\cos(p\alpha)$ is an algebraic number. But I don't find a proof. It is true? REPLY [7 votes]: This answer shows that $\alpha=\cos^{-1}(3/5)$ is not a rational multiple of $\pi$. Additionally, the Lindemann-Weierstrass theorem shows that $\alpha$ is transcendental. Nevertheless, $\cos(p\alpha)$ is algebraic for all rational $p$. If $p=\frac{n}{m}$, then $\cos(p \alpha)$ satisfies the polynomial equation $T_m(\cos(p \alpha))=T_n(3/5)$, where $T_k$ is the $k$th Chebyshev polynomial.<|endoftext|> TITLE: How to determine if vector b is in the span of matrix A? QUESTION [6 upvotes]: Given a matrix A = \begin{bmatrix} 1 &2 &3 \\ 4 &5 &6 \\ 7 &8 &9 \end{bmatrix} Determine if vector $b$ is in $span(A)$ where $$ b = \begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} $$ REPLY [9 votes]: First we address $\mathrm{Span}(A)$, $$\mathrm{Span}(A) = \left\{ \left[ \begin{array}{c} x_1\\ x_2\\ x_3\\ \end{array}\right] : \left[ \begin{array}{c} x_1\\ x_2\\ x_3\\ \end{array}\right] = a \left[\begin{array}{c} 1\\ 4\\ 7\\ \end{array}\right] + b \left[\begin{array}{c} 2\\ 5\\ 8\\ \end{array}\right] + c \left[\begin{array}{c} 3\\ 6\\ 9\\ \end{array}\right]. 
a,b,c \in \mathbb{R} \right\}.$$ From this definition we can see that asking if vector $\vec{b} \in \mathrm{Span}(A)$ is equivalent to asking if there exists a vector $\vec{x}$ such that $A\vec{x} = \vec{b}$, because $$\vec{x} = \left[\begin{array}{c} x_1\\ x_2\\ x_3\\ \end{array}\right],$$ then $$A\vec{x} = \left[\begin{array}{ccc} 1 & 2 & 3\\ 4& 5 & 6\\ 7 & 8 & 9\\ \end{array}\right] \left[ \begin{array}{c} x_1\\ x_2\\ x_3 \end{array}\right] = x_1 \left[ \begin{array}{c} 1\\ 4\\ 7 \end{array}\right] + x_2 \left[ \begin{array}{c} 2\\ 5\\ 8 \end{array}\right] + x_3 \left[ \begin{array}{c} 3\\ 6\\ 9 \end{array}\right].$$ So if there exists $x_1, x_2, x_3$ such that the final line equals $\vec{b}$, then we know $\vec{b}$ is in $\mathrm{Span}(A)$. This means it suffices to ask if there exists a $\vec{x}$ such that $A\vec{x} = \vec{b}$. This final equation is equivalent to the matrix equation: $$\left[\begin{array}{ccc} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{array} \right] \vec{x} = \left[\begin{array}{c} 1\\ 2\\ 4\\ \end{array}\right],$$ which we can convert to the system: $$\left[\begin{array}{ccc|c} 1 & 2 & 3 & 1\\ 4 & 5 & 6 & 2\\ 7 & 8 & 9 & 4\\ \end{array}\right].$$ As you might have learned, we solve this system by row reduction (I used technology for this step, your instructor may require row reduction by hand): $$\mathrm{RREF}\left(\left[\begin{array}{ccc|c} 1 & 2 & 3 & 1\\ 4 & 5 & 6 & 2\\ 7 & 8 & 9 & 4\\ \end{array}\right]\right) = \left[\begin{array}{ccc|c} 1 & 0 & 0 & \frac{-4}{3}\\ 0 & 1 & 0 & \frac{8}{3} \\ 0 & 0 & 1 & -1\\ \end{array}\right]. $$ This implies the vector $\vec{x}$ we seek is given by: $$\vec{x} = \left[\begin{array}{c} \frac{-4}{3}\\ \frac{8}{3} \\ -1\\ \end{array} \right].$$ The existence of this $\vec{x}$ alone guarantees $\vec{b} \in \mathrm{Span}(A)$, but lets check our answer by computing $A\vec{x}$. $$A\vec{x} = \left[\begin{array}{ccc} 1 & 2 & 3\\ 4 & 5 & 6\\ 7 & 8 & 9 \end{array} \right] \left[\begin{array}{c} \frac{-4}{3}\\ \frac{8}{3} \\ -1\\ \end{array} \right] = \left[\begin{array}{c} 1\\ 2 \\ 4\\ \end{array} \right], $$ as desired. Some things to think about: are there any vectors in $\mathbb{R}^3$ that are not in Span($A$)? If so can you find them, if not can you justify it? Hope this answer helps!<|endoftext|> TITLE: Structures strictly between $\{\mathbb{R},+,<\}$ and $\{\mathbb{R},+,\cdot,<\}$. QUESTION [6 upvotes]: Let $\mathcal{R}_0=\{\mathbb{R},+,<\}$ be the order divisible abelian group of reals and $\mathcal{R}=\{\mathbb{R},+,\cdot,<\}$ be the real closed field of reals. Both $\mathcal{R}_0$ and $\mathcal{R}$ are o-minimal structures. Let $f:\mathbb{R}\setminus\{0\}\rightarrow \mathbb{R}$ be the map given by $f(x)=1/x$. Consider $\mathcal{R}_f=\{\mathbb{R},+,f,<\}$, the structure generated after expanding $\mathcal{R}_0$ by adding function $f$ as a definable set. Is $\mathcal{R}_f = \mathcal{R}$, i.e. is the field product in $\mathbb{R}$ definable in $\mathbb{R}_f$? The question arises from the more general doubt of whether there exists an o-minimal group containing a definable homeomorphism between a bounded and an unbounded set that doesn't end up defining also a field structure. REPLY [5 votes]: Multiplication is definable in $\mathcal R_f$. Step 0. All the additive group operations ($+, -, 0$) are definable. The operation $+$ is given in the language, while $0$ is the unique identity for $+$ and $-x$ is the unique additive inverse of $x$. Step 1. The singleton $\{1\}$ is definable. 
$1$ is the unique element $x$ satisfying the formula $(0 TITLE: The First Homology Group is the Abelianization of the Fundamental Group. QUESTION [25 upvotes]: I am trying to understand the proof of the following fact from Hatcher's Algebraic Topology, section 2.A. Theorem. Let $X$ be a path connected space. Then the abelianization of $\pi_1(X, x_0)$ is isomorphic to $H_1(X)$. I am having trouble understanding the last step of the proof. Step 1. First we define a map $h:\pi_1(X, x_0)\to H_1(X)$ which sends the homotopy class $[f]$ of a loop $f$ based at $x_0$ to the homology class of the cycle $f$. One checks that this is a well-defined group homomorphism. Further $h$ is surjective. Step 2. Now since $H_1(X)$ is abelian, the map $h$ factors through the abelianization $\pi_1(X, x_0)^{ab}$ of $\pi_1(X, x_0)$. Step 3. The main part is to show that the induced map $\pi_1(X, x_0)^{ab}\to H_1(X)$ is injective. To do this we need to show that if a loop $f$ based at $x_0$ in $X$ is a boundary, then $f$ is in the commutator subgroup. Since $f$ is a boundary, we have singular $2$-simplices $\sigma_i$ such that $f=\partial(\sum_i n_i\sigma_i)$. We have $\partial \sigma_i = \tau_{i0} - \tau_{i1} + \tau_{i2}$, where $\tau_{ij}$'s are singular one simplices obtained by restricting $\sigma_i$ on the edges of the standard $2$-simplex. Hatcher shows that one may assume that each $\tau_{ij}$ is a loop based at $x_0$. Step 4. This step is where I am having problem. Hatcher writes "Using the additive notation in the abelian group $\pi_1(X, x_0)^{ab}$, we have the formula $[f]=\sum_{i, j}(-1)^jn_i[\tau_{ij}]$ because of the cancelling pairs of $\tau_{ij}$'s. We can rewrite the summation $\sum_{i, j} (-1)^j n_i[\tau_{ij}]$ as $\sum_i n_i [\partial \sigma_i]$ where $[\partial \sigma_i]=[\tau_{i0}]-[\tau_{i1}]+[\tau_{i2}]$. Since $\sigma_i$ gives nullhomotopy of the composed loop $\tau_{i0}-\tau_{i1}+\tau_{i2}$, we conclude that $[f]=0$ in $\pi_1(X, x_0)^{ab}$." I do not seem to understand anything in the paragraph quoted above. Can somebody elaborate on that? The additive notation is especially confusing. So if possible please use the multiplicative notation. Thank you. REPLY [13 votes]: I follow the proof by Hatcher that the OP outlines. The first important point is the following, once you have proved that $f = \Sigma_{i,j}(-1)^jn_i\tau_{i,j}$ for your 1-cycle $f$, you need to remember that the group of 1-chains (which contains the 1-cycles as a subgroup) is the free $\mathbb{Z}$-module (ie the free abelian group) on the set of continuous maps from $\Delta_1$ to $X$. Hence, in particular, every element $f$ of this group as a unique expression of the form $n_1f_1+\ldots +n_pf_p$ with $n_i\in\mathbb{Z}$ and $f_i$ a continuous map from $\Delta_1$ to $X$. So, from the equality $f = \Sigma_{i,j}(-1)^jn_i\tau_{i,j}$ you conclude, as noted by Hatcher (third paragraph from the end), that $f$ is one of the $\tau_{i,j}$'s and the remaining $\tau_{i,j}$'s form canceling pairs. This allows you to state this equality between homotopy classes $[\Sigma_{i,j}(-1)^jn_i\tau_{i,j}] = \Sigma_{i,j}(-1)^jn_i[\tau_{i,j}]$. But $\Sigma_{i,j}(-1)^jn_i[\tau_{i,j}] = \Sigma_in_i[\partial\sigma_i]$. The second important point consists in noting the fact that, $\sigma_i$ being a singular 2-simplex with boundary given by $\tau_{i0} -\tau_{i1} +\tau_{i2}$, you can continuously deform this boundary, through $\sigma_i$, into the constant loop at $x_0$. 
Hence, one has the following equalities between homotopy classes $[\partial \sigma_i] = [\tau_{i0}]-[\tau_{i1}]+[\tau_{i2}]=0$. Now, third point, you need to realize this last equality means $[\tau_{i2}] = -([\tau_{i0}]-[\tau_{i1}])$, or in multiplicative notation as you wish $[\tau_{i2}]=([\tau_{i0}][\tau_{i1}]^{-1})^{-1}$. Thus $[\partial\sigma_i]=[\tau_{i0}] -[\tau_{i1}] +[\tau_{i2}]$, being an element followed by its inverse, belongs to the commutator subgroup and so $\Sigma_in_i[\partial\sigma_i]=[f]$ does, meaning that $[f]$ is trivial in $\pi(X,x_0)_{ab}$.<|endoftext|> TITLE: Natural realizations of closed orientable surfaces QUESTION [5 upvotes]: A beautiful fact is that The space of configurations of a 5-vertex polygon with unit length sides, two of whose vertices are fixed, is a closed orientable surface of genus $3$. Similarly, but much more simply, the torus is the space of configurations of a double pendulum. Are there other natural realizations of high genus closed orientable surfaces? REPLY [2 votes]: Actually, there are several relatively simple planar linkages whose configuration spaces are surfaces of higher genera. For instance, if $S_n$ is a linkage which is a "spider" with $n$ legs each of which has one joint and such that tips of the legs are glued to the plane, then the configuration space of $S_n$ is a surface of certain genus $g$. Such linkages (with $n=3$) first were analyzed (to the best of my knowledge) in a paper by Thurston and Weeks: W. Thurston and J. Weeks, The Mathematics of three dimensional manifolds, Scientific American, 251 (1984) 94-106. The following is a theorem proven in P.Mounoud, Sur l’espace des configurations d’une araignee. Osaka J. Math., 48(1):149–178, 2011. Theorem. Let $g$ be an natural number and $r$ the biggest integer such that $2r$ divides $g-1$. A compact orientable surface of genus $g$ is diffeomorphic to the configuration space of some spider if and only if $$ 2^{-r}(g − 1) \le 6r + 12.$$ On the other hand, if one allows linkages which are "centepides" then the oriented surface of any given genus is realized: D. Jordan and M. Steiner Compact surfaces as configuration spaces of mechanical linkages, Israel Journal of Mathematics, 122 (2001) 175–187,<|endoftext|> TITLE: Does the Riemann zeta function converge in higher dimensions? QUESTION [5 upvotes]: It is known that the Riemann zeta function $$\displaystyle \zeta(s) = \sum_{n = 1} \frac{1}{n^s}$$ converges for all $\mathrm{Re}(s) > 1$, and admits an analytic continuation to the rest of the complex plane apart from its poles. My question concerns a generalisation of this: suppose that we replace $n$ with a vector $m \in \mathbb{Z}^d$ and consider the sum $$\displaystyle \sum_{\substack{\ \ m \in \mathbb{Z}^d} \\ {\ \ \ m \neq 0}} \frac{1}{|m|^s}.$$ Does this sum also converge absolutely for $\mathrm{Re}(s) > 1$? It clearly converges if $d = 1$. If $d = 2$, then we have: $$\displaystyle \sum_{\substack{\ \ m_1 \in \mathbb{Z}} \\ {\ \ \ m_1 \neq 0}} \sum_{\substack{\ \ m_2 \in \mathbb{Z}} \\ {\ \ \ m_2 \neq 0}} \frac{1}{|m_1 + m_2|^s}, \ \ m_1 + m_2 \neq 0.$$ It doesn't seem clear to me why this should converge. The closest I've been able to find is the so-called Hurwitz zeta function, which looks somewhat similar to my sum except that I'm summing over the parameter $q$ as well. I've tried comparing the double sum to a double integral of the same thing over $\mathbb{R}^2$, but Mathematica tells me that the resulting integral does not converge for $s = 3/2$, which is not a good sign. 
Using Mathematica on the sum directly -- summing over $\mathbb{N}$ rather than $\mathbb{Z}$ -- yields $\zeta(s - 1) + \zeta(s)$ for the case $d = 2$. REPLY [2 votes]: use the change of variable $r = x+y$ for showing that $$\iint_{[1,\infty)\times[1,\infty)} f(x+y)dxdy = \int_2^\infty (r-2)\, f(r)dr$$ Then we can do the usual comparison with a Riemann integral $$\sum_{n,m \in \mathbb{N}^*\times \mathbb{N}^*} |n+m|^{-s}= \iint_{[1,\infty)\times[1,\infty)} (\lfloor x\rfloor+\lfloor y\rfloor)^{-s} dxdy$$ $$\iint_{[1,\infty)\times[1,\infty)} (x+y)^{-s}dxdy +\iint_{[1,\infty)\times[1,\infty)} (\lfloor x\rfloor+\lfloor y\rfloor)^{-s}-(x+y)^{-s}) dxdy$$ $$ = \int_2^\infty (r-2)r^{-s}dr + \iint_{[1,\infty)\times[1,\infty)} \mathcal{O}((x+y)^{-s-1})dxdy =\int_2^\infty (r-2)r^{-s}dr + \int_1^\infty \mathcal{O}(r^{-s})dr$$ where I used that $\displaystyle(\lfloor x\rfloor+\lfloor y\rfloor)^{-s} - (x+y)^{-s} = s \int_{\lfloor x\rfloor+\lfloor y\rfloor}^{x+y} t^{-s-1}dt = \mathcal{O}((x+y)^{-s-1})$ The same argument shows that $\displaystyle\sum_{(n_1, \ldots, n_d) \in (\mathbb{N}^*)^d} (n_1+\ldots +n_d)^{-s}$ converges for $Re(s) > d$.<|endoftext|> TITLE: Effective equidistribution QUESTION [10 upvotes]: I have an irrational number $\alpha$ (in this case, $\alpha=1/(2\pi)$, but hopefully answers will be more general) and I am interested in finding bounds on the size of $$ T=\{k: k\alpha-\lfloor k\alpha\rfloor \in I\} $$ for some interval $I\in[0,1)$. (Actually, I think it would be more natural to take intervals of $S^1$, or finite unions of same, but this is sufficient for my purposes.) By the equidistribution theorem we know that $$ \lim_{n\to\infty}\frac{|T\cap\{1,2,\ldots\}|}{n}=\ell $$ where $\ell$ is the length of the interval $I$. But I would like to know more about the error term $$ E_n=|T\cap\{1,2,\ldots\}|-\ell n. $$ REPLY [6 votes]: It is not possible to put a bound better than $o(n)$ for arbitrary irrational $\alpha$. More precisely, if $f\colon\mathbb{N}\to\mathbb{R}^+$ is any function with $\liminf_{n\to\infty}f(n)/n=0$ then there exists uncountably many irrational $\alpha$ with $\limsup_{n\to\infty}\lvert E_n\rvert/f(n)=\infty$. See my answer on mathoverflow. However, for specific $\alpha$ it is possible to do better. As shown in the answer linked above, we have $E_n=o(n^x)$ for any $x > 1/2$ so long as $\alpha$ has irrationality measure 2, and almost every real number has irrationality measure 2. It is an unsolved problem as to whether $\pi$ (and, equivalently, $1/(2\pi)$) has irrationality measure 2, but it is expected that it does.<|endoftext|> TITLE: Defining $|x|=-1$ QUESTION [9 upvotes]: Q 1a Is it possible to define a number $x$ such that $|x|=-1$, where $|\cdot|$ means absolute value, in the same manner that we define $i^2=-1$? I have no idea if it makes sense, but then again, $\sqrt{-1}$ used to not be a thing either. To be more explicit, I want as many properties to hold as possible, e.g. $|a|\times|b|=|a\times b|$ and $|a|=|-a|$, as some properties that seem to hold for all different types of numbers (or in some analogous way). Q 1b If we let the solution to $|x|=-1$ be $x=z_1$, and we allow the multiplicativeness property, $$|(z_1)^2|=1$$ Or, further, $$|(z_1)^{2n}|=1\tag{$n\in\mathbb N$}$$ Note that this does not mean $z_1$ is any such real, complex, or any other type of number. We used to think $|x|=1$ had two solutions, $x=1,-1$, but now we can give it the solution $x=e^{i\theta}$ for $\theta\in[0,2\pi)$. 
Adding in the solution $(z_1)^{2n}$ is no problem as far as I can see. However, there result in some problems I simply cannot quite see so clearly, for example, $$|z_1+3|=?$$ There exists no such way to define such values at the moment. Similarly, let $z_2$ be the number that satisfies the following: $$|z_2|=z_1$$ As far as I see it, it is not possible to create $z_2$, given $z_1$ and $z_0\in\mathbb C$. The following has a solution, in case you were wondering. $$|\sqrt{z_1}|=i$$ so no, I did not forget to consider such cases. But, more generally, I wish to define the following numbers in a recursive sort of way. $$|z_{n+1}|=z_n$$ since, as far as I can tell, $z_{n+1}$ is not representable using $z_k$ for $k\le n$. In this way, the nature of $z_n$ goes on forever, unlike $i$, which has the solution $\sqrt i=\frac1{\sqrt2}(1+i)$. So, my second question is to ask if anyone can discern some properties about $z_n$, defining them as we did above? And what is $|z_1+3|=?$ Q 2a This part is important, so I truly want you guys (and girls) to consider this: Can you construct a problem such that $|x|=-1$ will be required in a step as you solve the problem, but such that the final solution is a real/complex/anything already well known. This is similar to Casus irreducibilis, which basically forced $i$ to exist by establishing its need to exist. I am willing to give a large rep bounty for anyone able to create such a scenario/problem. Q 2b And if it is truly impossible, why? Why is it not possible to define some 'thing' the solution to the problem, keep a basic set of properties of the absolute value, and carry on? What's so different between $|x|=-1$ and $x^2=-1$, for example? Thoughts to consider: Now, Lucian has pointed out that there are plenty of things we do not yet understand, like $z_i\in\mathbb R^a$ for $a\in\mathbb Q_+^\star\setminus\mathbb N$. There may very well exist such a number, but in a field we fail to understand so far. Similarly, the triangle inequality clearly cannot coexist with such numbers as it is. For the triangle inequality to exist, someone has to figure out how to make triangles with non-positive/real lengths. As for the properties/axioms of the norm I want: $$p(v)=0\implies v=0$$ $$p(av)=|a|p(v)$$ REPLY [7 votes]: First of all, you can define $|\cdot|$ to mean whatever you want in any given context, as long as you're clear and upfront about it. That being said, one usually wants $|\cdot|$ to be a norm, which means it fulfills a certain list of criteria. Among them is $|x|\geq 0$. If you break these rules, does your operation really deserve to be called "absolute value"? Does your operation deserve to be written using $|\cdot |$? Personally, I would say it doesn't, which means that using that symbol wouldn't be wrong, per se, but it would make it more difficult for your readers to understand what's going on, simply because of what they expect from that notation. One notable exception, as pointed out in the comments, is the determinant of square matrices. And real / complex numbers are square matrices (of dimension $1\times1$), so in that context we really have $|-1|=-1$. But that's a different operation. REPLY [2 votes]: The absolute value is quite a different thing than a square. A square simply comes from multiplication and nothing else. Especially, a square does not need an order on the underlying structure. However, the absolute value can only be defined after an order in defined by setting $$ |x| = \begin{cases} x & x\geq 0\\ -x & x < 0\end{cases}. 
$$ So, it is indeed defined to be non-negative. It is not that you may have some algebraic structure with an absolute value and then ask yourself "What if $|x|$ is negative?" in the same way you ask about squares… Put differently: You can't deduce from the field axioms that $x^2 = -1$ has no solutions, but you can deduce from the axioms of the ordering that $|x|=-1$ has no solutions. To answer the actual question: I haven't seen a variant of absolute values (or norms, or metrics) that takes negative values, and I doubt that such a thing has been studied.<|endoftext|> TITLE: Multiplication by One QUESTION [14 upvotes]: Throughout school we are taught that when something is multiplied by 1, it equals itself. But as I am learning about higher level mathematics, I am understanding that not everything is as black and white as that (like how infinity multiplied by zero isn't as simple as it seems). Is there anything in higher-level, more advanced, mathematics that when multiplied by one does not equal itself? REPLY [4 votes]: A key aspect to this question is realizing that 1 is nothing more than a symbol with a vertical line in it. It has no intrinsic meaning. It is mathematicians who choose to give it meaning. As many have pointed out, it is very common to assign the multiplicative identity element the symbol 1. This is the symbol for the multiplicative identity in our usual arithmetic, and it turns out that this is very convenient for people to remember. However, it's just a symbol. Your multiplicative identity could be ☃ if you wanted. There might be some grumbling about your symbol choices, but it's legal. Now 1 is also the symbol given to $Su(0)$, that is "the successor to 0". This meaning for the symbol 1 comes from addition, rather than multiplication. It happens to be that, in normal arithmetic, the number that comes after 0 ($Su(0)$) and the multiplicative identity are the same number. If I may borrow Glare's excellent example of modulo 10 addition and multiplication over the set $\{0, 2, 4, 6, 8\}$, the successor of 0 is 2, but the multiplicative identity on this ring is 6. One valid reason you may see a lack of the symbol 1 is because of this situation. Because people often think about the number after 0 and the multiplicative identity as being the same thing, one may choose not to use that symbol if it could cause confusion. In Glare's example, the successor to 0 and the multiplicative identity are different. Maybe this is a good time to not use 1. (That being said, 1 as the multiplicative identity is very common, so even though I say you could choose not to use it that way... people will). Now I used numbers in that example. I used them for two reasons. One is because that's how Glare presented them in his answer. The other is because you and I are both very comfortable with how those symbols operate. I could have had addition and multiplication over $\{☀,☁,☂,☃,☄\}$ and provided you the following definitions for the addition and multiplication operators:

    add | ☀ ☁ ☂ ☃ ☄        mul | ☀ ☁ ☂ ☃ ☄
    ----+-----------        ----+-----------
     ☀  | ☀ ☁ ☂ ☃ ☄         ☀  | ☀ ☀ ☀ ☀ ☀
     ☁  | ☁ ☂ ☃ ☄ ☀         ☁  | ☀ ☂ ☄ ☁ ☃
     ☂  | ☂ ☃ ☄ ☀ ☁         ☂  | ☀ ☄ ☃ ☂ ☁
     ☃  | ☃ ☄ ☀ ☁ ☂         ☃  | ☀ ☁ ☂ ☃ ☄
     ☄  | ☄ ☀ ☁ ☂ ☃         ☄  | ☀ ☃ ☁ ☄ ☂

The resulting math would be the same, but your anger at me for using nonstandard symbols might be justified.
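For what it's worth, here is a tiny Python sketch (my own addition, not from the original answer) that rebuilds both tables from ordinary arithmetic modulo 10 on $\{0, 2, 4, 6, 8\}$, relabelling the elements with the weather symbols, and checks that ☃ (that is, 6) really is the multiplicative identity of this ring:

    # Rebuild the add/mul tables above from arithmetic mod 10 on {0, 2, 4, 6, 8},
    # relabelling 0, 2, 4, 6, 8 as the five weather symbols.
    elems = [0, 2, 4, 6, 8]
    name = {0: '☀', 2: '☁', 4: '☂', 6: '☃', 8: '☄'}

    def table(op, header):
        print(header, '|', *(name[b] for b in elems))
        for a in elems:
            print(' ', name[a], '|', *(name[op(a, b) % 10] for b in elems))

    table(lambda a, b: a + b, 'add')
    table(lambda a, b: a * b, 'mul')

    # the multiplicative identity here is 6 (alias ☃), not 2 (alias ☁):
    assert all(6 * a % 10 == a for a in elems)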
By using the common symbols 0, 2, 4, 6, and 8, in an environment where their behavior is very similar to how they are used in normal arithmetic, the whole process goes a lot smoother!<|endoftext|> TITLE: Show that $\{a \log n \}$ is not equidistributed for any $a$ QUESTION [5 upvotes]: I need to prove that the sequence $\{a \log n \}$ is NOT equidistributed for any $a \in \mathbb{R}$. Now, I think that Weyl's criterion will be a good idea to use here. So, I need to show that $$\frac{1}{N} \sum_{n=1}^{N} e^{2 \pi i k (a \log n)} \rightarrow 0 $$ as $N \rightarrow \infty$ for any $k \in \mathbb{Z} - \{0\}$ doesn't hold. I have two problems here. Firstly, I don't know whether the base of $\log$ is $10$ or $e$. So, this can be tried for both bases. Secondly, if I assume that the base is $e$, then I have $$ e^{\log n^{a 2 \pi i k }}$$ inside the summation. This is equal to $n^{a 2 \pi i k}. $ So, I am left with $$\frac{1}{N} \sum_{n=1}^{N} n^{ 2 \pi i k a} $$ Now suppose I take a negative $k$ and positive $a$ or vice-versa, then wouldn't this series converges to $0$ as $N \rightarrow \infty$ ? REPLY [2 votes]: Apply the Euler–Maclaurin summation formula. For $f\in C^1\big([1,N]\big)$, it reads $$\sum_{n=1}^N f(n)=\int_1^N f(x)\,dx+\frac{f(1)+f(N)}{2}+\int_1^N\left(\{x\}-\frac12\right)f'(x)\,dx,$$ where $\{x\}=x-\lfloor x\rfloor$ is the fractional part of $x$. For $f(x)=e^{ic\log x}$ with $c=2\pi ka$, we have $f'(x)=icf(x)/x$ and $$W_N:=\frac1N\sum_{n=1}^N e^{ic\log N}=I_N+F_N+R_N, \\I_N=\frac1N\left.\frac{e^{(1+ic)\log x}}{1+ic}\right|_{x=1}^{x=N}=\frac{e^{ic\log N}-1/N}{1+ic}, \\F_N=\frac{1+e^{ic\log N}}{2N},\quad|R_N|\leqslant\frac{|c|}{N}\int_1^N\frac12\frac{dx}{x}=\frac{|c|}{2}\frac{\log N}{N}.$$ As $N\to\infty$, clearly $F_N\to 0$ and $R_N\to 0$ but $I_N\not\to 0$. Hence, $W_N\not\to 0$.<|endoftext|> TITLE: Is any necessary and sufficient criteria for a topological space to be compact using continuous functions? QUESTION [11 upvotes]: We know that there is a criteria for a space being connected using continuous functions, namely, A topological space $(X,\tau)$ is said to be connected if for any continuous function $f:X\to\{\pm 1\}$ (where the topology on $\{\pm 1\}$ is the subspace topology that it inherits as a subspace of the topological space $\mathbb{R}$ with usual topology) is constant. I was wondering if there is any necessary and sufficient criterion for a topological space to be compact using continuous functions. More specifically, what I want is a definition of the form, A topological space $(X,\tau)$ is said to be compact if for any continuous function $f:X\to Y$ (where $Y$ is the topological space with property $P$) satisfies property $Q$. Is there any such criteria? REPLY [15 votes]: Here is a negative answer if you impose certain restrictions. Let us suppose we only care about Hausdorff spaces, so we require that the space $Y$ is Hausdorff but only require that the criterion work if $X$ is Hausdorff. Then if you require the criterion to involve just one space $Y$ and that property $Q$ cannot involve the topology on $X$ (i.e., it depends only on the underlying map of sets $X\to Y$), there is no such criterion. To prove that no such criterion exists, I will prove the following statement. For any Hausdorff space $Y$, there exist two different Hausdorff spaces $X$, $X'$ on the same underlying set such that $X$ is compact and $X'$ is not, and a map $f:X\to Y$ is continuous iff it is continuous as a map $X'\to Y$. 
(Actually, the argument will only require that limits of sequences in $Y$ are unique, which is weaker than $Y$ being Hausdorff.) To prove this, let $\kappa$ be a regular uncountable cardinal such that $\kappa>|Y|$. Let $X=(\kappa+1)\times(\omega+1)$ with the product topology, and let $X'$ be the same set with the topology generated by the product topology together with the set $\{(\kappa,\omega)\}\cup(\kappa+1)\times\omega$. Then $X$ is compact Hausdorff, and $X'$ is not compact since it has a strictly finer topology than a compact Hausdorff topology. Since the topology in $X'$ is finer than the topology on $X$, clearly a continuous map $X\to Y$ is also continuous as a map $X'\to Y$. Conversely, suppose $f:X'\to Y$ is continuous. Since the topologies of $X$ and $X'$ agree on the complement of the point $(\kappa,\omega)$, it suffices to show $f$ is continuous at $(\kappa,\omega)$ in the topology of $X$. Suppose $n<\omega$. Since $f$ is continuous at the point $(\kappa,n)$ and $Y$ is $T_1$, for each $y\in Y$ such that $y\neq f(\kappa,n)$, there exists $\beta_{n,y}<\kappa$ such that $f(\alpha,n)\neq y$ for all $\alpha>\beta_{n,y}$. Now define $$\beta=\sup\{\beta_{n,y}:n<\omega,y\neq f(\kappa,n)\}.$$ Since $\kappa$ has cofinality greater than $|\omega\times Y|$, $\beta<\kappa$. Thus we have found a single $\beta<\kappa$ such that for all $n<\omega$ and all $\alpha>\beta$, $f(\alpha,n)=f(\kappa,n).$ Write $g(n)=f(\kappa,n)$. Now for any $\alpha\in\kappa+1$ (including $\alpha=\kappa$), $(\alpha,n)$ converges to $(\alpha,\omega)$ in $X'$ as $n\to\infty$. Continuity of $f$ now implies that for any $\alpha>\beta$ (including $\alpha=\kappa$), $f(\alpha,\omega)$ is a limit of the sequence $g(n)$. Since $Y$ is Hausdorff, this limit is unique, and so actually $f(\alpha,\omega)$ takes the same value for all $\alpha>\beta$, which we will call $g(\omega)$. We have thus seen that the restriction of $f$ to the set $U=\{\alpha:\alpha>\beta\}\times(\omega+1)$ is just the composition of the projection onto $\omega+1$ with a certain map $g:\omega+1\to Y$. Morever, $g$ is continuous, since $g(n)$ converges to $g(\omega)$. It follows that $f$ is continuous with respect to the product topology on $U$. Since $U$ is a neighborhood of $(\kappa,\omega)$ in $X$, it follows that $f$ is continous at $(\kappa,\omega)$ in the topology of $X$. Thus $f$ is continuous as a map $X\to Y$. (The spaces used in this proof are variants of the Tychonoff plank, which is a famous source of counterexamples in general topology.) On the other hand, here is a positive answer if you allow a family of different spaces $Y$ (but a reasonably simple such family). Let $S=\{0,1\}$, topologized such that $\{1\}$ is open but $\{0\}$ is not. We write $\mathbf{0}\in S^I$ for the constant function $I\to S$ taking value $0$, for any set $I$. Let the patch topology on $S^I$ be the product topology with respect to the discrete topology on $\{0,1\}$. Then: A space $X$ is compact iff whenever $I$ is any set and $f:X\to S^I$ is a continuous map such that the point $\mathbf{0}$ is in the closure of the image of $f$ with respect to the patch topology, $\mathbf{0}$ is in the image of $f$. (So the spaces $Y$ are $S^I$ for any set $I$, and condition $Q$ is that the image of $f$ is "closed at $\mathbf{0}$ with respect to patch topology".) The proof is just a matter of unravelling definitions. 
A continuous map $f:X\to S$ is just the characteristic function of an open set, so a continuous map $f:X\to S^I$ is a collection of open subsets $U_i$ of $X$ indexed by $I$. To say that $\mathbf{0}$ is in the closure of the image of $f$ with respect to the patch topology is to say that no finite subcollection of the $U_i$ covers $X$ (since a patch neighborhood of $\mathbf{0}$ just consists of requiring that some finite collection of the indices are $0$). To say that $\mathbf{0}$ is in the image of $f$ is to say that the $U_i$ do not cover $X$. So the criterion says exactly that any open cover has a finite subcover.<|endoftext|> TITLE: Where does TREE(n) sit on the Fast Growing Hierarchy? QUESTION [6 upvotes]: I'm interested in where TREE(n) would rank on the fast growing hierarchy. I've read that the small tree(n) function is bound by the Small Veblen Ordinal. Given that TREE(n) grows faster, where would TREE(n) fall in the FGH? Assuming Friedman's definition of TREE(n): TREE(n) is the length of the longest sequence of trees T1,T2, … labeled from {1,2,3,..,n} such that, for all i, Ti has at most i vertices, and for all i, j such that $i T$. If $T$'s root has a larger label than $S$'s root, then $T > S$. Otherwise, if $S$'s root has more children than $T$, then $S > T$. If $T$'s root has more children than $S$, $T > S$. Otherwise, compare the children of $S$'s root with the children of $T$'s root, starting from the smallest (using this ordering), then the second smallest, and so on. Whichever tree first has a larger child, that tree is bigger; otherwise, the trees are identical, so they are equal. With some work one can show that this is a well-ordering of order type $\theta(\Omega^\omega \omega,0)$ that respects the ordering under label and infemum preserveing embedding. That $\theta(\Omega^\omega \omega,0)$ is an upper bound as well was proven in 1979 by Diana Schmidt in her thesis. More generally, the TREE ordering using labels from an ordinal $\alpha$ has length $\theta(\Omega^\omega \alpha, 0)$. The other part is using this fact to prove that TREE(n) grows at roughly $F_{\theta(\Omega^\omega \omega,0)}(n)$ in the fast-growing hierarchy. The lower bound is again easier. First, given a tree $T$ and a nonnegative integer $n$, define a finite sequence of trees $S(T,n)_i$ starting from $i=n$ by setting $T_n = T$, and defining $T_{i+1}$ to be the largest tree less than $T_i$ (using the above ordering) with at most $i+1$ vertices. Since the above ordering is a well-ordering, the sequence reaches the root tree after some finite number of steps, and the sequence ends there. Next, define $t(\alpha)$ to be the tree that corresponds to the ordinal $\alpha$ in the above well-ordering of trees, and let $T_\alpha(n)$ be the index of the final tree in the sequence $S(t(\alpha),n)$. It is then immediate that $T_\alpha(n)$ satisfies: $T_0(n) = n$ If $t(\alpha)$ has at most $n+1$ vertices, $T_{\alpha+1}(n) = T_\alpha(n+1)$ If $\alpha$ is limit, then $T_\alpha(n) = T_{\alpha[n+1]}(n+1)$, where $\alpha[n]$ is defined to be the ordinal such that $t(\alpha[n])$ is the largest tree less than $t(\alpha)$ with at most $n$ vertices. Note that this is very similar to the Hardy hierarchy $H_\alpha(n)$ using the same definition of $\alpha[n]$, except for the second rule where $t(\alpha)$ is required to have less than or equal to $n+1$ vertices. So long as this is true, then $T_\alpha(n) \ge H_\alpha(n)$. 
Since $F_\alpha(n) \approx H_{\omega^\alpha}(n)$, for $\varepsilon$-numbers $\alpha$ we will have $T_\alpha(n) \ge H_\alpha(n) \approx F_\alpha(n)$. Now, $\theta(\Omega^\omega \omega,0)$ does not correspond to a tree, but one can see that under the above rules $T_{\theta(\Omega^\omega \omega,0)}(n)$ will be the final index of the sequence $S(T,n)$ where $T$ is simply chosen to be the largest tree with at most n vertices. Therefore $TREE(n) \ge T_{\theta(\Omega^\omega \omega,0)}(n+1) \approx F_{\theta(\Omega^\omega \omega,0)}(n+1)$. The other direction is harder; I don't know the specifics, but from talking with Andreas Weiermann, upper bounds can be extracted "subrecusively" using the length of the ordering, but unfortunately this work remains unpublished. I believe the upper bounds will be in the same general range as the lower bound, either $F_{\theta(\Omega^\omega \omega,0)}(cn)$ or perhaps $F_{\theta(\Omega^\omega \omega,0)+1}(n)$. So I think it is reasonable to say that TREE(n) lies at $F_{\theta(\Omega^\omega \omega,0)}(n)$ in the fast-growing hierarchy.<|endoftext|> TITLE: Prove that the set of integers of the form $2^k−3$ contains an infinite subset of relatively prime numbers QUESTION [10 upvotes]: Prove that the set of integers of the form $2^k − 3(k = 2, 3, ...) $ contains an infinite subset in which every two members are relatively prime. Trying to solve this problem, I stumbled on this conjecture: There is an infinity of prime numbers $p$ such that $p=2^n - 1, n\in \mathbb{N}$ I'm interested in solving both the problem and the conjecture. REPLY [2 votes]: Let $p_i$ be the $i^\text{th}$ odd prime. Let $(k_r)_{r=1}^\infty$ be any sequence of integers such that $k_1 \geqslant 3$, $k_{r+1} > k_r$, and $k_{r+1}$ is divisible by $\prod_{i=1}^{m_r} (p_i - 1)$, where $p_{m_r}$ is the largest prime factor of $a_r = 2^{k_r} - 3$. Then $$ a_{r+1} = \left(2^{k_{r+1}/(p_i - 1)}\right)^{p_i - 1}\!\!\! - 3 \equiv -2 \pmod{p_i} \quad (i = 1, \ldots, m_r), $$ by Fermat's little theorem. Therefore, $a_{r+1}$ has no prime factor in common with $a_r$; and $m_{r+1} > m_r$. The sequences $(k_r)$, $(m_r)$, $(a_r)$ are strictly increasing. If $r, s$ are integers with $1 \leqslant r < s$, then $a_s$ is not divisible by any of the primes $p_1, \ldots, p_{m_{s-1}}$, in particular $p_1, \ldots, p_{m_r}$, therefore $a_s$ is prime to $a_r$.<|endoftext|> TITLE: How many natural solutions to $x_1+x_2+x_3+x_4=100$ with $x_1 \neq x_2 \neq x_3 \neq x_4$? QUESTION [10 upvotes]: I have invented a little exercise of combinatorics that I can't solve (without brute forcing): in how many ways can I choose 4 non-negative integers, the sum of which is 100 and they are all different? So: $x_1+x_2+x_3+x_4=100$ $x_1, x_2, x_3, x_4 \in \mathbb N$ $x_1 \neq x_2 \neq x_3 \neq x_4$ I have thought that the result have to be the number of total combination $\binom {100 + 4 - 1} {4 - 1}$ minus the ways I can solve $2x_1 + x_2 + x_3 = 100$. Anyway, I am not able to solve it either. The solution should be: 161664 or $\binom {100 + 4 - 1} {4 - 1} - 15187 = 176851 - 15187 = 161664$ Does anyone know how to calculate the combinations of this problem? REPLY [2 votes]: Here is the solution from my side. One can translate this problem into a partition problem. The much-studied partition function $p_k(n)$ is defined as the number of solutions of the equation $x_1+\cdots+x_k=n$ where $x_1\ge x_2\ge\cdots\ge x_k\ge1$; those solutions are called the partitions of $n$ into (exactly) $k$ parts. 
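Before the generating-function details, here is a quick brute-force cross-check in Python of the count this computation eventually arrives at (my own sketch, not part of the original argument):

    # Count 4 distinct non-negative integers summing to 100: first as unordered
    # 4-element sets (this equals the p_4(98) computed below), then as ordered tuples.
    from itertools import combinations
    from math import factorial

    unordered = sum(1 for c in combinations(range(101), 4) if sum(c) == 100)
    print(unordered)                 # 6736
    print(factorial(4) * unordered)  # 161664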
For fixed $k$, the generating function is $$\sum_{n=0}^\infty p_k(n)x^n=\frac{x^k}{(1-x)(1-x^2)\cdots(1-x^k)}.$$ For $n\ge k\gt1$ we have the recurrence equation $$p_k(n)=p_{k-1}(n-1)+p_k(n-k)$$ with initial values $p_0(0) = 1$ and $p_k(n) = 0$ if $n \le 0$ or $k \le 0$ and $n$ and $k$ are not both zero. Now back to the problem: $x_1 + x_2 + x_3 + x_4 = 100$, where each $x_i$ is a non-negative integer. Let $x_1 = a_1$, $x_2 = x_1 + a_2$, $x_3 = x_2 + a_3$, $x_4 = x_3 + a_4$, where $a_1 \ge 0$ and $a_2, a_3, a_4 \ge 1$. This is to ensure that the numbers stay distinct. Substituting gives $$4a_1 + 3a_2 + 2a_3 + a_4 = 100.$$ This is exactly the relation that appears when one solves the standard partition problem, except that there $a_1$ should be $\ge 1$ and $a_2, a_3, a_4$ should be $\ge 0$. We can arrange this by the transformations $a_1 = t_1 - 1$, $a_2 = t_2 + 1$, $a_3 = t_3 + 1$, $a_4 = t_4 + 1$. This satisfies our assumptions, and now $t_1 \ge 1$ and $t_2, t_3, t_4 \ge 0$ with $$4t_1 + 3t_2 + 2t_3 + t_4 = 98.$$ This is equivalent to partitioning 98 into exactly 4 parts. Since the 4 distinct numbers can be arranged in any order to form other ordered solutions $x_1, x_2, x_3, x_4$, the count should be multiplied by $4!$. The final answer is therefore $4! \cdot p_4(98)$. Now $p_4(98)$ can be calculated from the recurrence relation by hand or with a computer program, but luckily we have closed-form expressions available for certain values of $k$ in $p_k(n)$: $$p_4(n) = \left[ \frac{1}{144}t_4^3(n) - \frac{1}{48}t_4(n) \right]$$ for $n$ even, and $$p_4(n) = \left[ \frac{1}{144}t_4^3(n) - \frac{1}{12}t_4(n) \right]$$ for $n$ odd. Here $\left[ \cdot \right]$ stands for the nearest-integer function and $t_k(n) = n + \frac{1}{4}k(k-3)$. Substituting values gives $p_4(98) = \left[ \frac{99^3}{144} - \frac{99}{48} \right] = 6736$ and $4! = 24$, so $24 \cdot 6736 = 161\,664$. A generalized approach for this problem has been discussed in this StackExchange question. This answer was also inspired by this answer.<|endoftext|> TITLE: Number of $3$s in the units place QUESTION [6 upvotes]: I'm stuck on this problem: (1) In the sequence $7,7^2,7^3,7^4,\ldots,7^{2014}$ how many terms have $3$ as the units digit? After some random stuff, I have found that the units digits of the powers of $7$ go in the order $7,9,3,1$, and then back to $7$. But I don't know how to utilize this in the problem! REPLY [7 votes]: You are doing it correctly. Simply divide $2014$ by $4$: you get $503$ complete cycles, plus $7^{2013}, 7^{2014}$ left over at the end. Since the $3$ comes third in the cycle, it will not appear in those last two terms, so the answer is just $503$.<|endoftext|> TITLE: Scientific Notation Question QUESTION [5 upvotes]: I'm taking a practice test on scientific notation. The question is: What is $.000563 \times 10^{-7}$ in proper scientific notation? I chose the answer $5.63 \times 10^{-3}$. However, this answer was incorrect and I'm not sure why. Does anyone know what the proper answer would be? Thanks, Dave REPLY [8 votes]: Tedious, but important for getting the concept: $.000563\times 10^{-7} = .000563 \times 0.0000001 = .0000000000563$ (count the decimals) $= 5.63 \times 10^{-11}$. More straightforward, after you've gotten the concept: $.000563 \times 10^{-7} = 5.63 \times 10^{n} \times 10^{-7}$ (count the decimals) $= 5.63 \times 10^{-4} \times 10^{-7} = 5.63 \times 10^{-11}$. So what did you do wrong? You added the $4$ rather than subtracted the $4$.
Thing is $10^{-7}$ means make smaller means shift decimal point further to the left-- if the decimal is already to the left shift it further.<|endoftext|> TITLE: Why are homeomorphisms important? QUESTION [39 upvotes]: I attended a guest lecture (I'm in high school) hosted by an algebraic topologist. Of course, the talk was non-rigorous, and gave a brief introduction to the subject. I learned that the goal of algebraic topology is to classify surfaces in a way that it is easy to tell whether or not surfaces are homeomorphic to each other. I was just wondering now, why are homeomorphisms important? Why is it so important to find out whether two surfaces are homeomorphic to each other or not? REPLY [3 votes]: Just to add to the already-good answers provided so far: Human beings in general, and scientists in particular, are classifiers. We group "like things" in buckets and try to make statements that must be true for everything in the bucket (e.g., how biologists classify kingdoms, phyla, species, etc.) In the case of topology, we can place spaces in buckets according to their homeomorphism type. We could, instead, use other classification schemes (e.g., homotopy type, which is a looser form of equivalence.) Category theory provides a nice setting for making sense of this general, scientific process: When studying a certain class of objects, the fundamental (e.g., "important"/"interesting"/primary) goal is to classify the objects up to categorical isomorphism. And for the category of topological spaces and continuous maps, isomorphisms are homeomorphisms.<|endoftext|> TITLE: How do I formulate this under ZFC set theory? QUESTION [6 upvotes]: Here is a lemma given in Munkres-Elements of algebraic topology This statement seems impossible to be encoded in ZFC set theory. The condition $(a)$ can be formulated in ZFC, but $(b)$ seems impossible to be formulated in ZFC. Could it be encoded in ZFC set theory? For example, since there is a complete explicit way to construct the singular homology groups of topological spaces, saying "there is a homology functor $H_n:Top\rightarrow Ab$" can be completely formulated in ZFC. However, the condition $(b)$ asserts that there exists a class $\{D_X:X\in Top\}$ such that $D_X$ is natural for all $X$. This cannot be formulated in ZFC. I wonder if there is a smart trick to avoid this size problem. REPLY [6 votes]: Yes, this can be expressed in ZFC. The issue, as you state, is that the "map" $D:X\mapsto D_X$ is a class function, since there are proper class-many topological spaces. However, this is fine: the map $D$ is definable by a (very messy) formula $\psi$ in the language of set theory, so, rather than writing e.g. "$D_X$ satisfies ...", we can instead write "For every $A$, if $\psi(A, X)$ holds, then $A$ satisfies . . .". (Why is $D$ definable? Look at the proof of the lemma!) Of course, the general treatment of class maps does go beyond ZFC, either to a class theory like $NBG$ or $MK$, or to ZFC augmented with appropriately large sets, e.g. ZFC+Universes. But this is unnecessary here, and in most contexts.<|endoftext|> TITLE: Composition of 2 involutions QUESTION [7 upvotes]: How can we prove that any bijection on any set is a composition of 2 involutions ? Since involutions are bijections mapping elements of a set to elements of the same set, I find it weird that this applies to any bijection. Thanks for your help ! REPLY [5 votes]: I assume that by "bijection" on a set $S$ you mean a bijection from $S$ to itself. 
The question would not make sense for a bijection from $S$ to some other set. The bijection decomposes $S$ into orbits. It suffices to prove for a single orbit. An orbit under the bijection is either a finite cycle $p_0 \to p_1 \to p_2\to \cdots \to p_n = p_0$ or a two-sided infinite sequence $\cdots \to p_{-2} \to p_{-1} \to p_0 \to p_{1} \to p_2 \to \cdots$. In the infinite case, you can take the involutions $p_i \to p_{-i}$ and $p_i \to p_{1-i}$. In the finite case, $p_i \to p_{-i \pmod n}$ and $p_i \to p_{1-i \pmod n}$.<|endoftext|> TITLE: Decomposition group and inertia group QUESTION [9 upvotes]: Let $L/K$ be a Galois extension with Galois group $G$. Let $O_K$ and $O_L$ be the ring of algebraic integers of $K$ and $L$ respectively. Let $P\subseteq O_K$ be a prime. Let $Q\subseteq O_L$ be a prime lying over $P$ with ramification index e$(Q|P)=e$ and inertia degree f$(Q|P)=f$. Let $D(Q|P)$ be the decomposition group of $Q$. In other words $$D(Q|P)=\lbrace\sigma\in G\text{ }|\text{ }\sigma(Q)=Q\rbrace$$ Let $L_D$ be the decomposition field (the fixed field of $D(Q|P)$). Define $Q_D=Q\cap L_D$. The inertia group is $$E(Q|P)=\lbrace \sigma\in D(Q|P):\sigma(x)\equiv x\text{ mod } Q\text{ for all } x\in O_L\rbrace$$ Let $L_E$ be the inertia field (the fixed field of $E(Q|P)$). Define $Q_E=Q\cap L_E$. From algebraic number theory we know the following e$(Q_D|P)=1$ and f$(Q_D|P)=1$ e$(Q_E|P)=1$ and f$(Q_E|P)=f$ I want to find a prime $Q_D'$ of $L_D$ lying over $P$ such that e$(Q_D'|P)\neq1$ and f$(Q_D'|P)\neq1$. Similarly, is there any prime $Q_E'$ of $L_E$ lying over $P$ such that e$(Q_E'|P)\neq1$ and f$(Q_E'|P)\neq f$ ? What we know for sure is that to find such $Q_D'$ (resp. $Q_E'$), we must choose $P$ and $Q$ such that $D(Q|P)$ (resp. $E(Q|P)$) is not normal in $G$. (Corollary 2 of Theorem 28, Number fields, Daniel A Marcus) REPLY [4 votes]: We are looking for a collection of ramification and splitting behaviours over a single prime $P$, all of which are possible, but rarely combine in a single easily-calculated example. I have produced a case where each of the things you ask for does indeed occur, in order to illustrate the general case. But I should point out that it is easy to produce individual lower-degree examples demonstrating separately each of the requirements you have placed on the primes above $P$. Recall the well-known formula describing the splitting of a prime $P$ in an extension $L/K$ in terms of its ramification and residue field extension indices: \begin{equation*} \Sigma_{Q\mid P}\ e_{Q}f_{Q} = [L:K]. \end{equation*} As you have pointed out, when the extension is Galois all of the $e_Q$ are equal to some fixed $e$, and all of the $f_Q$ are equal to a fixed value $f$; so this reduces to \begin{equation} efg = [L:K]. \end{equation} where we have used the standard notation $g$ for the number of primes of $L$ above the prime $P$ of $K$ in a Galois extension. We would like to see different types of behaviour at each of the levels $L$, $L_D$ and $L_E$ for some prime $Q$ of $L$ above $P$, which implies that we need $e\geq2$ and $f\geq2$. Moreover in order to have any chance of seeing different outcomes above the same prime $P$, we need that $P$ split into at least 2 distinct primes at the level $L_D$, otherwise $L_D=K$. Hence we also need $g\geq2$. So the degree of the extension $L/K$ needs to be at least 8. 
Finally, as you also point out, we need the Galois group Gal($L/K$) to contain non-normal subgroups in order that any prime have a chance that the fixed field of its decomposition and/or inertia groups be non-Galois; otherwise we are just in a lower-degree version of the above formula. It is not sufficient, by the way, that the extension simply be non-Abelian, since for example the quaternion group $Q_8$ is non-Abelian but has no non-trivial non-normal subgroups. All of this forces the extension to have degree at least 16 (the non-normal subgroups of the dihedral group $D_4$ of order 8 do not allow enough varied behaviour). We would also like an extension whose Galois group is furnished with lots of non-normal subgroups. So the easiest place to start would be something with Galois group $S_4$. In order to simplify things let us assume that $K=\mathbb{Q}$, and take some "general" quartic extension, the simplest interesting one of which might be the splitting field $L$ of the polynomial $q(x) = x^4+x+1$. I used MAGMA for the following calculations. This extension $L/\mathbb{Q}$ is ramified only above $P=229$, splitting into six primes with residue field extension (= "inertia") degree $f=2$ and ramification degree $e=2$. Choosing any one of these primes $Q$, say, we have an inertia group $E(Q\mid P)$ which is cyclic of order 2 ($\cong C_2$) and a decomposition group $D(Q\mid P)$ which is isomorphic to $C_2^2$. We then calculate that there is always a prime $Q_D'$ of $L_D$ over $P$ with $e_{Q_D'}=f_{Q_D'}=2$. Similarly there is always a prime $Q_E'$ of $L_E$ over $P$ with $e_{Q_E'}=2$ and $f_{Q_E'}=1$.<|endoftext|> TITLE: Conditional probability question involving Bayes' theorem QUESTION [6 upvotes]: A jar has 300 buttons in it. 299 buttons have one side red, the other side blue. One of the buttons has both sides blue. I randomly take a button from the jar and toss it 5 times, getting 5 blues in a row. What is the probability I chose the special blue button? I did: $$ \begin{align} P(\text{chose blue button}\mid\text{5 blue tosses}) &= \frac{P(\text{chose blue button and 5 blue tosses})}{P(\text{5 blue tosses})}\\ &= \frac{\frac{1}{300}\cdot 1^5}{\frac{299}{300} \left(\frac{1}{2}\right)^5 + \frac{1}{300}\cdot 1^5} \end{align} $$ Is this correct? REPLY [5 votes]: Straightforward application of Bayes' Theorem. I agree with @msm that you have the right method and answer. Notice that your answer (approx. 0.1) is substantially greater than just the probability (approx. 0.003) of choosing the 'pure blue' button without the conditional info from 'testing out' the button with 5 tosses. Just for verification, I simulated the experiment 10 million times in R statistical software. Results are in reasonable agreement with your answer. (Because choosing a 'pure blue' button is so rare, such a simulated result may be accurate to only about 2, maybe 3, decimal places.)

    m = 10^7;  p = x = numeric(m)
    b = c(1, rep(.5, 299))            # buttons in jar
    for (i in 1:m) {
      p[i] = sample(b, 1)             # choose a button
      x[i] = rbinom(1, 5, p[i]) }     # toss 5 times and count 'blue's
    mean(p == 1)        # proportion of 'pure blue' buttons chosen
    ## 0.0033274
    1/300               # compare
    ## 0.003333333
    mean(p[x==5] == 1)  # when get 5 'blues', what proportion 'pure blue' buttons
    ## 0.09672815
    1/(299/32 + 1)      # compare (your answer)
    ## 0.09667674

<|endoftext|> TITLE: Submodule maximal if and only if quotient module simple?
QUESTION [5 upvotes]: Why is it the case that a submodule $L$ of a module $M$ is maximal, among submodules of $M$ distinct from $M$, if and only if the quotient module $M/L$ is simple? REPLY [2 votes]: As I said in the other question, the bijection between submodules of $M$ containing $L$ and submodules of $M/L$ tells us the following. If we assume that $L$ is maximal, i.e. that there exists no proper submodule strictly containing $L$, then there exist no non-trivial proper submodules of $M/L$, since such a submodule would correspond to a proper submodule strictly containing $L$, which we have already excluded. Ergo $M/L$ is simple. Conversely, assuming $M/L$ is simple means that there exist no non-trivial proper submodules of $M/L$. By the same reasoning as before, the existence of a proper submodule strictly containing $L$ would correspond to a non-trivial proper submodule of $M/L$, which we have already excluded, and so $L$ must be maximal.<|endoftext|> TITLE: Are all "numbers" just one unit value transformed by a function? QUESTION [45 upvotes]: I'm a programmer, and I've been thinking about this for some time. I have a saying: There are only three specific numbers: 0, 1, and ∞. ∞ is just a special case of 1. Our base function really just converts a unary value into a string (example: "101") that we can parse in a given base. Tonight, I've been putting some thought towards this, and I realize: I believe that the entire set of natural numbers can be represented by a recursive function that only works on a unit value, as follows (I apologize in advance for the terrible notation, this is not my forte): // Please ignore the fact that this is invalid code for various, various reasons N (the set of natural numbers) = add(1) where add(x) = return x + add(x); Following this, I believe that the set of integers can be found as a union of the natural number function and a similar function using subtract instead of add (resulting in the full set of negative numbers). Going further, I applied more thought - perhaps the set of real numbers can be obtained through division functions for the set of all integers divided by the set of all integers (which we've already established via the functions above). I suppose my question is this - as strange as it may seem, is there only one unit value? (Or, possibly, a very small subset of seed numbers - perhaps 0 and 1 (and maybe -1 for the complex plane?)) Is our number system just a system of transforms on top of one unit value? If this is a really stupid question, I apologize. It's just that the more I think about this, the more I really wonder about it. There's a good chance that I'm missing something or that this has already been derived, but I'd love to know. I'm sure this community knows much more than I do about this. I hope that this interests all of you as much as it does me. REPLY [4 votes]: You might want to check out Church numerals, where each natural number $n$ is defined as the function $f \mapsto f^n$; so $0 = f \mapsto \mathrm{id}$, $1 = f \mapsto f$, $2 = f \mapsto f \circ f$, $3 = f \mapsto f \circ f \circ f$, etc. It's easy to define a successor function, addition, multiplication, etc. using this style (omitting unnecessary parentheses in function applications by convention): $$Sn = f \mapsto f \circ nf$$ $$n + m = mS\;n$$ $$n * m = m (k\mapsto n+k)\;0$$ $$n^m = m(k \mapsto n * k)\; 1$$ etc.
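Loosely, these definitions can be written down as executable code. Here is a rough sketch in R using closures (the names, the helper toInt, which unfolds a numeral back into an ordinary integer, and the particular test values are just for illustration):

    zero  <- function(f) function(x) x                      # 0: apply f no times
    succ  <- function(n) function(f) function(x) f(n(f)(x)) # S n = f o n(f)
    add   <- function(n, m) m(succ)(n)                      # n + m = m S n
    mult  <- function(n, m) m(function(k) add(n, k))(zero)  # n * m = m (k -> n+k) 0
    pow   <- function(n, m) m(function(k) mult(n, k))(succ(zero))  # n^m
    toInt <- function(n) n(function(k) k + 1)(0)            # unfold to an ordinary integer

    two <- succ(succ(zero)); three <- succ(two)
    toInt(add(two, three))   # 5
    toInt(mult(two, three))  # 6
    toInt(pow(two, three))   # 8

In this encoding the whole of arithmetic is generated from nothing more than function application and a single successor step, which is quite close to the "one unit value" picture you describe.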
While this is mostly used today in a programming / programming languages context, I believe it was originally intended in a foundations of math context (so in math, the natural numbers would be taken to 'mean' the above functions). Of course, programming and programming languages as we know them today grew directly out of the work in foundations of mathematics during the 1920s and 1930s.<|endoftext|> TITLE: Help understanding Proof of Fundamental Theorem of Algebra QUESTION [6 upvotes]: In the book "Topology from the Differential Viewpoint", Milnor gives a proof of the fundamental theorem of algebra. It goes essentially like this: Consider the stereographic projection $h_{+}$ of $S^2$ onto $C$ from the north pole and the stereographic projection $h_{-}$ from the south pole. If $P$ is a non constant polynomial, define $f=h_{+}^{-1}Ph_{+}$. Then $f$ is smooth even in a neighborhood of the north pole. To see this consider the map $Q=h_{-}fh{-}^{-1}$. If $f(z)=\sum_{i=1}^n a_iz^i$ then $Q=\frac{z^n}{\sum \bar{a_i}z^{n-i}}$ which is smooth in a neighborhood of zero, hence $f$ is smooth in a neighborhood of the north pole. The singular values of $f$ are the zeros of $P'$ of which there are finitely many. Hence the set of regular values of $f$ is the sphere with finitely many points removed and is therefore connected. The locally constant function $\#f^{-1}(y)$ giving the number of points in $f^{-1}(y)$ is thus constant. Since it is not zero everywhere, it must be zero nowhere. In particular $f$ is surjective and so $P$ must have a zero The question that I have is: What is the point of introducing the stereographic projection? Why does the space need to be compact in order for this argument to work? I don't see where the compactness of $S^2$ comes into play. Also, when Milnor says that $\#f^{-1}(y)$ is locally constant, doesn't this only apply to the regular values? So wouldn't it only follow that $f$ is surjective onto the set of regular values? Why do we get that $f$ is surjective onto the entire sphere? Finally, when he says that $f$ is surjective onto the sphere does he mean onto the whole sphere or onto the sphere with the north pole removed? REPLY [3 votes]: He uses that if $f:M\to N$ is smooth and $M$ and $N$ have the same dimension, $M$ compact, then the cardinal of the fiber on the regular values is finite and locally constant, so, as $\mathbb C$ is not compact, he uses the Alexandorff's compactification of $\mathbb C$ to use that result. You get that for every $y\in N$ regular value, $f^{-1}(y)$ is not empty, so it rest to look what happen when $y$ is a critical value. By definition $y$ is a critical value if $f^{-1}(y)$ contains at leat a critical point, so by definition, $f^{-1}(y)$ is not empty for $y$ critical value. Then for every $y\in N$, $f^{-1}(y)$ is not empty, so $f$ is surjective. $f$ is surjective in the whole sphere, it takes the north pole to itself.<|endoftext|> TITLE: Does there exist an uncountable number of isolated points? QUESTION [6 upvotes]: This question arose from my thinking about order-preserving isomorphisms on the Real numbers. These functions must be injections (“one-to-one”) $$a\ne b \implies f(a)\ne f(b)$$ and preserve order $$a\le b \implies f(a)\le f(b)$$ In other words, I knew that these functions must be strictly increasing, perhaps with a finite number of points with zero slope. 
For if there were any intervals $(a,b)$ on which the function had zero or decreasing slope, it would no longer be an injection or preserve order: two different inputs would have the same output. But then I realized that even an infinite, albeit countable, number of points would suffice, for example, a function that is increasing everywhere except at every integer; something like $f(x)=x+sin(x)$. This function’s graph looks like a smoothed-out staircase, and its derivative $f^{\prime}(x)=1+cos(x)$ is tangent to the $x$-axis at those points. So then taking a step further, I wondered if perhaps an uncountable number of points would suffice, provided that these points weren’t “together in a straight line”, or put more precisely, they didn’t make an interval. My first thought was an uncountable set of irrational numbers within an interval, say, $[1,2]$. This function would be increasing at all rational points, but have zero slope at all irrational points. (There are uncountably many irrational numbers within $[1,2]$.) If such a function exists, and is continuous and differentiable, its derivative would be something like the Dirichlet function: $$ g^{\prime}(x) = \begin{cases} 1 &: &x < 1\\ 1 &: &x \in [1,2]\cap\mathbb{Q}\\ 0 &: &x \in [1,2]-\mathbb{Q}\\ 1 &: &x > 2\\ \end{cases} $$ I'm not sure if that’s possible. If it isn’t, then the points in my uncountable set must all be isolated, that is, there must exist an $\epsilon$-neighborhood around each point that doesn’t contain any other points in the set. Is it possible to select uncountably many isolated points? REPLY [6 votes]: For $x\in\mathbb R$ and $\epsilon\gt0$ let $B(x,\epsilon)$ denote the $\epsilon$-neighborhood $(x-\epsilon,x+\epsilon).$ A set $X\subseteq\mathbb R$ is discrete if, for each point $x\in X,$ there is a number $\epsilon\gt0$ such that $B(x,\epsilon)\cap X=\{x\}.$ Every discrete subset of $\mathbb R$ is countable. Proof. Assume for a contradiction that there is an uncountable discrete set $X\subseteq\mathbb R.$ For $n\in\mathbb N$ let $X_n=\{x\in X:B(x,\frac1n)\cap X=\{x\}\}.$ Then $X=\bigcup_{n\in\mathbb N}X_n,$ so $X_n$ is uncountable for some $n.$ Choose $n\in\mathbb N$ so that $X_n$ is uncountable. Then the sets $B(x,\frac1{2n}),\ x\in X_n$ constitute an uncountable family of pairwise disjoint open intervals, which is impossible. However, the fact that discrete subsets of $\mathbb R$ are countable seems unrelated to your question about slopes of order-isomorphisms of $\mathbb R'$ There is a strictly increasing continuously differentiable bijection $f:\mathbb R\to\mathbb R$ such that $\{x:f'(x)=0\}$ is uncountable and even has positive Lebesgue measure. Proof. Let $C$ be a compact nowhere dense subset of $\mathbb R$ with positive Lebesgue measure, e.g., a so-called fat Cantor set. Define $$g(x)=d(x,C)=\min\{|x-y|:y\in C\},$$ the distance from $x$ to $C,$ and define $$f(x)=\int_0^xg(t)dt.$$ Since $g$ is continuous and nonnegative, $f$ is continuously differentiable and nondecreasing. In fact, since $C$ is nowhere dense, $\{x:f'(x)\gt0\}$ is everywhere dense, whence $f$ is strictly increasing. Since $f(-\infty)=-\infty$ and $f(+\infty)=+\infty,\ $ $f$ is a bijection. Finally, the set $\{x:f'(x)=0\}=\{x:g(x)=0\}=C$ has positive Lebesgue measure.<|endoftext|> TITLE: Why is $\frac{p(1-p)}{1−(1−p)^2}=\frac{1-p}{2-p}$? QUESTION [6 upvotes]: I found this solution in an old exam: $$\frac{p(1-p)}{1−(1−p)^2}=\frac{1-p}{2-p}$$ Without any further explanation. Could someone explain to me how to do this transition? 
REPLY [4 votes]: By inspection (or by $(1-p)^2=1$), $p=0$ and $p=2$ are roots of the denominator. As the coefficient of the highest power is $-1$, it factors as $p(2-p)$. The rest is up to you.<|endoftext|> TITLE: Find an example of series which converges only absolutely on $\mathbb Q$ QUESTION [8 upvotes]: I am currently working on the completeness of metric spaces, so I studied the following theorem: If $E$ is a Banach space then any absolutely convergent series is convergent. Since $\mathbb Q$ is not complete, I wanted to illustrate this theorem with an absolutely convergent series on $\mathbb Q$ which wouldn't converges: $$\sum_{n=0}^\infty \vert r_n\vert\in\mathbb Q\quad\text{ and }\quad \sum_{n=0}^\infty r_n\in\mathbb R\setminus\mathbb Q \quad (\text{ with $r_n\in \mathbb Q$}).$$ Is was not able to find such series. Is there an example like it (as simple as possible ?). REPLY [2 votes]: The relationship between convergence and absolute convergence in an ordered field is studied in this note, which has its genesis in a question that was asked on this site (by me; it was answered by my coauthor, N.J. Diepeveen). The case of subfields of $\mathbb{R}$ makes for a nice undergraduate exercise (as seems to be happening here!): see Theorem 1. Or let me explain: let $x \in [-1,1] \setminus \mathbb{Q}$. Then there is a sequence (in fact a unique sequence, since $x \notin \mathbb{Q}$, but this is not needed for what follows) $\epsilon_n \in \{ \pm 1\}$ such that $x = \sum_{n=1}^{\infty} \frac{\epsilon_n}{2^n}$. The way you get $\epsilon_n$ is by a simple inductive process: for $n \geq 0$, having chosen $\epsilon_1,\ldots,\epsilon_n$ already, we choose $\epsilon_{n+1}$ to be $1$ if $\sum_{k=1}^n \frac{\epsilon_k}{2^k} < x$ and $-1$ if $\sum_{k=1}^n \frac{\epsilon_k}{2^k} > x$. A little thought shows that this converges to $x$. Now notice that no matter what the sign sequence $\epsilon_n$ is, the absolute series is $\sum_{n=1}^{\infty} \frac{1}{2^n} = 1 \in \mathbb{Q}$. Note that this argument works equally well with $\mathbb{Q}$ replaced by any proper subfield $F \subsetneq \mathbb{R}$, or in fancier terms, any incomplete Archimedean ordered field. Note also that this is essentially the same answer as @Martín-Blas Pérez Pinilla, but I wanted to give my take on it.<|endoftext|> TITLE: Non-Euclidean Geometrical Algebra for Real times Real? QUESTION [8 upvotes]: This question was triggered by a series of others and reading some references: Keshav Srinivasan & Euclid Eclid's Elements As quoted from the last reference: GEOMETRICAL ALGEBRA. We have already seen [ .. ] how the Pythagoreans and later Greek mathematicians exhibited different kinds of numbers as forming different geometrical figures. [ skip text ] The product of two numbers was thus represented geometrically by the rectangle contained by the straight lines representing the two numbers respectively. It only needed the discovery of incommensurable or irrational straight lines in order to represent geometrically by a rectangle the product of any two quantities whatever, rational or irrational; and it was possible to advance from a geometrical arithmetic to a geometrical algebra, which indeed by Euclid’s time (and probably long before) had reached such a stage of development that it could solve the same problems as our algebra so far as they do not involve the manipulation of expressions of a degree higher than the second. 
[ rest deleted ] As quoted from the second reference: Descartes began by interpreting the algebraic operations of addition, subtraction, multiplication, division, and extraction of square roots as geometric constructions on lines. He represented each (positive) magnitude by a line. Addition and subtraction were the same as Euclid’s. To add two lines, just extend one by the length of the other. To subtract one line from another, just take the remainder after cutting it off the other. Multiplication and division, however, were different from Euclid’s. Euclid represented the product of two lines by a rectangle, the product of three lines by a box in space, and Euclid didn’t represent the product of four lines. But Descartes took the product of two lines to be another line. [emphasis is mine] That required selecting a unit line, that is, a line of length $1$. Then to find the product $ab$ of two quantities $a$ and $b$, he only needed to find the fourth proportional of $1$, $a$, and $b$. Here is the construction by Descartes (figure on the left): After defining a unit OE and the lengths (= reals) OA and OB, in order to define the product OP, it is necessary to draw a line OP parallel to EB. Of course, there is a relationship between Euclid's and Descartes' construction: the areas of OARB and OEQP are equal. It should be noted though, that negative numbers are impossible within Euclid's system, while the (Des)Cartesian coordinate system ensures that they can be properly represented geometrically. Why is negative times negative = positive? Descartes' Euclidean Geometrical Algebra gives us a nice proof without words : Parallel lines seem to be essential all over the place, thus requiring an Euclidean geometry. And doesn't our common Equals sign not just represent two parallel $=$ lines? So the question is: does there exist a multiplication of reals a la Descartes in a non-Euclidean Geometrical Algebra ? Possible duplicate question, without an answer : Geometric basis for the real numbers . REPLY [6 votes]: There are two answers to your question. One answer is negative, namely, the one given in the book "Geometry: Euclid and Beyond" by R.Hartshorne, where he notes that the Euclidean parallels axiom (or its equivalent, Axiom P) is needed for his geometric definition of geometric multiplication. A side remark: Hartshorne's book is one of the few places in the literature (actually, the only one apart from Hilbert that I am aware of) where, starting with Euclidean axioms one carefully builds the Hilbert plane by constructing an ordered field, etc. Other treatments start with a version of Birkhoff's axioms, which is a bit of cheating since these axioms have real numbers already present in the set of axioms (since the distance function takes values in ${\mathbb R}$). On the other hand, if you allow (much more!) complicated geometric definitions of algebraic operations, then the answer is positive. You can find it in this paper by M.Kourganoff Universality theorems for linkages in homogeneous surfaces, where he uses certain configurations of line segments in the hyperbolic plane to define algebraic operations.<|endoftext|> TITLE: Delta function at the origin in polar coordinates QUESTION [8 upvotes]: I have some problems understanding what the best way of dealing with the delta functions in polar coordinates (I know there are many questions on the subjects on this website but they are all not satisfactory). 
In (Delta function integrated from zero), they claim that the delta function is given by $\delta^{(2)}=\frac{\delta(r)}{\pi r}$ while in (Dirac delta in polar coordinates) it is claimed that $\delta^{(2)}=\frac{\delta(r)}{2\pi r}$. However, the confusion probably comes from the fact that when evaluating a delta function in polar coordinates, one ends up with the expression $\int_0^\infty f(x)\delta(x)$. This expression is ill-defined as far as I can tell, since using different limiting functions for the delta function can give different results, and thus none of the above expressions can be a well-defined definition of the delta function in polar coordinates. So my question is, if I want to write down the delta function in polar coordinates, what is the best representation for working with it? In my particular case, I want to be able to start with the delta function in polar coordinates and then do coordinate transformations to obtain it in other coordinate systems, without any ambiguities. edit: The best representation I can come up with would be to regularize the radial direction, and write the delta function as $\delta=\frac{1}{r}\delta(r-\epsilon)\delta(\theta-\theta_0)$ for some arbitrary $\theta_0$ and then let $\epsilon\rightarrow0$ in the end. REPLY [2 votes]: see http://mathworld.wolfram.com/DeltaFunction.html eqn 46. The result given there corresponds to your first equation, $\delta^{(2)}=\frac{\delta(r)}{\pi r}$. However it can be more complicated: $\delta^{(2)}=\frac{\delta(r)}{\pi r}$ is only for a Dirac Delta located at the origin. See the pdf at https://www.google.com/#q=06_notes_2dfunctions page 18. This shows the result given in your first equation is for an Dirac Delta at the origin, but your final equation $\delta=\frac{1}{r}\delta(r-\epsilon)\delta(\theta-\theta_0)$ represents an Dirac Delta radially offset from the origin by $\epsilon$ and rotated through the angle $\theta_0$. So your equation should work for you, maybe rewritten $\delta(r-r_0)=\frac{1}{r_0}\delta(r-r_0)\delta(\theta-\theta_0)$ where $\epsilon$ is replaced with $r_0$ and no limiting process is needed. Consider removing 'at the origin' from your title unless you want that limitation. In that case your first equation would work.<|endoftext|> TITLE: What is the difference between a tensor product and an outer product? QUESTION [17 upvotes]: I have seen the tensor product written as $$ \left( \begin{array}{c} a \\ b \\ \end{array} \right) \otimes \left( \begin{array}{c} c \\ d \\ \end{array} \right) = \left( \begin{array}{c} ac \\ ad \\ bc \\ bd \\ \end{array} \right)$$ However I have also seen it written as $$ \left( \begin{array}{c} a \\ b \\ \end{array} \right) \otimes \left( \begin{array}{c} c \\ d \\ \end{array} \right) = \left( \begin{array}{c} a \\ b \\ \end{array} \right) \left( \begin{array}{c} c & d \\ \end{array} \right) = \left( \begin{array}{c} ac & ad\\ bc & bd \\\end{array} \right) $$ Which I have seen in the context of outer products. Why are there two ways to do a tensor product? REPLY [9 votes]: Both are different realizations of the tensor product. Consider $V=\mathbb{R}^ 2$. 
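As a quick numerical illustration (a sketch using base R's kronecker and outer on arbitrary sample vectors), the two layouts in your question contain exactly the same four products, just arranged as a length-4 vector versus a 2x2 matrix:

    u <- c(2, 3); v <- c(5, 7)          # play the roles of (a,b) and (c,d)
    kronecker(u, v)                     # 10 14 15 21 : the column (ac, ad, bc, bd)
    outer(u, v)                         # 2x2 matrix with rows (ac, ad) and (bc, bd)
    all(sort(kronecker(u, v)) == sort(as.vector(outer(u, v))))   # TRUE: same entries

Which arrangement you see just depends on which isomorphic copy of the tensor product is being used, as explained next.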
Your first "equality" arises from the isomorphism $$V \otimes V \to \mathbb{R}^4$$ $$e_1 \otimes e_1 \mapsto e_1;$$ $$e_1 \otimes e_2 \mapsto e_2;$$ $$e_2 \otimes e_1 \mapsto e_3;$$ $$e_2 \otimes e_2 \mapsto e_4.$$ The second "equality" arises from the isomorphism $V \otimes V \simeq V \otimes V^* \simeq Hom(V,V)$, where the first isomorphism comes from the isomorphism of the dual with the space arising from the standard inner product of $\mathbb{R}^2$ (which is essentially just transposition), and the second one comes from $(w,v^*) \mapsto v^*(\cdot) w.$ Note that it becomes a $2 \times 2$ matrix in the end, which is exactly (not exactly, but represents canonically) an element of $Hom(\mathbb{R}^2, \mathbb{R}^2)$. This is mathematically speaking. I don't know why one is more appropriate than the other in your context. What I can say is that the second way is very useful, because it allows us to translate an endomorphism in terms of something structurally and algebraically rich such as the tensor product. The first one seems to be simply a down to earth immediate way to realize the tensor product as an array.<|endoftext|> TITLE: The Chudnovsky pi formula $1/\pi$ revisited QUESTION [10 upvotes]: Define the constants, $$A=163\cdot1114806\\B=13591409\\C=640320$$ Given the binomial coefficient $\binom{n}{k}$, then we have the pi formulas, $$\frac{1}{\pi} =\frac{12}{(C)^{3/2}}\sum^\infty_{k=0} \frac{(6k)!}{(3k)!\,k!^3} \frac{3Ak+ B}{(-C^3)^k}$$ and $$\frac{1}{\pi} =\frac{12}{(C+4)^{3/2}}\sum_{k=0}^\infty\tbinom{2k}{k}\sum_{j=0}^{k/3} (-1)^j\tbinom{k}{3j}\tbinom{2j}{j}\tbinom{3j}{j}\frac{Ak+B-\color{blue}{1448}/3}{(C+4)^k}$$ $$\frac{1}{\pi} =\frac{12}{(C-4)^{3/2}}\sum_{k=0}^\infty\tbinom{2k}{k}\sum_{j=0}^{k/3} (+1)^j\tbinom{k}{3j} \tbinom{2j}{j}\tbinom{3j}{j}\frac{Ak+B+\color{blue}{1448}/3}{(-C+4)^k}$$ and $$\frac{1}{\pi} =\frac{12}{(C+12)^{3/2}}\sum_{k=0}^\infty\tbinom{2k}{k}\sum_{j=0}^k(-3)^{k-3j}\tbinom{k}{3j} \tbinom{2j}{j}\tbinom{3j}{j}\frac{Ak+B-\color{blue}{1448}}{(-C-12)^k}$$ $$\frac{1}{\pi}=\frac{12}{(C-12)^{3/2}}\sum_{k=0}^\infty\tbinom{2k}{k}\sum_{j=0}^k\,(+3)^{k-3j}\,\tbinom{k}{3j}\tbinom{2j}{j}\tbinom{3j}{j}\,\frac{Ak+B+\color{blue}{1448} }{(-C+12)^k}$$ The first is the Chudnovsky formula, while the rest are also Ramanujan-Sato series (of level 9?). One can give the general form of the Chudnovsky using Eisenstein series. Q: But what yields the blue number $\beta$? These are $\beta=4, 24, 76, 1448$ for $d=19,43,67,163$, respectively. (Note: Typo corrected.) P.S. A similar phenomenon happens for the Ramanujan pi formula which uses $d=58$. I discuss this briefly in my blog Ramanujan Once A Day. REPLY [3 votes]: This is mainly a re-post of my comment: In this question, I have defined $$A_N:=\sqrt{-N}\cdot\frac{E_2(\tau_N)-\frac{3}{\pi\cdot Im(\tau_N)}}{\eta^4(\tau_N)}$$ where $\eta$ denotes the Dedekind $\eta$-Function and $E_2$ is the Eisenstein series of weight $2$, and $\tau_N=\frac{N+\sqrt{-N}}{2}$ is a quadratic irrationality with class number $1$. For the terms $\beta_N$ of the question above, it holds $\color{red}{e^{i\pi/3}\,6\beta_N =A_N}$, or (with $N=d$): $$\beta=\frac{\sqrt{-d}}{e^{i\pi/3}\,6}\cdot\frac{E_2(\tau_d)-\frac{3}{\pi\cdot Im(\tau_d)}}{\eta^4(\tau_d)}$$ A proof that the $A_N$ are algebraic integers of $\mathbb Z$ can be found here in Appendix B (which uses Appendix A).<|endoftext|> TITLE: Why integral domain is also called entire ring? 
QUESTION [7 upvotes]: I remember reading somewhere (most probably in Lang's Algebra) that integral domain in also known by the name "entire ring". I was thinking that is it somehow connected with complex analysis, but unfortunately I could not figure out much. I know that if $ \Omega$ is a domain in $\mathbb C$ then $R=\{f: \Omega \to \mathbb C: f \text { is holomorphic}\}$ is an integral domain. Is it true that every integral domain can be obtained as ring of holomorphic function of some domain? Also what might be the possible reason for using the terminology 'entire ring' for integral domain? REPLY [2 votes]: Is it true that every integral domain can be obtained as ring of holomorphic function of some domain? No, this is not true. For one thing, this would imply an absolute upper bound on the cardinality of an integral domain. Moreover, this connection is not really the historical reason for the name, see where does the term "integral domain" come from? Also what might be the possible reason for using the terminology 'entire ring' for integral domain? I do not know what the actual reasoning was, but a a reason might be that "integral" is used in a different sense in ring theory, too, namely an element is called integral over a ring $R$ if it is the root of a monic polynomial over $R$; and, a domain is called integrally closed if it contains all the integral elements from its quotient-field. Both in French and in German two distinct words are used to signify those two notions, and one might want to follow the same practice in English. Namely "intègre" (F) and "integer" (G) for "integral" as in "integral domain" and "entier" (F) and "ganz" (G) for "integral" as in "intrgal element." What is strange though is that if this would be adopted the English usage would be somehow just the other way round relative to the French and German one, in that "entire" would not correspond to "entier" and "ganz." It might be further worth noting that "intègre" (F) and "integer" (G) rather evoke the meaning "integrous," which would not be completely non-intuitive either (though it is not the historical motivation in German).<|endoftext|> TITLE: Sum of a series indexed by ordinals QUESTION [9 upvotes]: If $\mu$ is an ordinal, how can we formalize that $$ \sum_{\lambda<\mu}x_{\lambda}=z $$ When $\mu=\omega$, this is just the usual infinite series, the partial sums converge to $z$. What is the definition for higher ordinals? I don't think this is quite the same as an uncountable sum, it is a sum over an well-ordered index. We can define it as a limit points of finite sums, but I was wondering whether there is a "better" (possibly equivalent) definition that takes advantage of the extra structure (well-oredring of the index). I suspect this will make sense only if countably many elements are non-zero. As for the context, I guess the most general one could be that of topological vector spaces, so that we have the vector operations and limits. REPLY [8 votes]: I'm assuming here that the $x_{\lambda}$ are real numbers. (Complex numbers would be fine too -- that doesn't matter. This was written before the edit mentioning topological vector spaces in general, and I haven't thought about it at that level of generality.) In the first part of this answer, we'll see how to define convergence of a sum indexed by a countable ordinal. 
In the second part of this answer, we'll see that there is no way to define the convergence of any well-ordered sum with uncountably many non-zero terms (it's easy to eliminate the case where all the non-zero terms are positive reals, but in fact we'll rule out any non-zero values at all). This will be done via an axiomatization of the notion of convergence of a well-ordered sum. COUNTABLE WELL-ORDERED SUMS In this part, we'll handle well-ordered sums where all but countably many of the terms are $0.$ For countable ordinals $\mu,$ the sum can be defined by transfinite induction, as follows: $\sum_{\alpha<0}x_{\alpha}=0,$ $\sum_{\alpha<\mu}x_{\alpha}=(\sum_{\alpha<\beta}x_{\alpha})+x_\beta,$ if $\mu=\beta+1,$ and, for $\mu$ a countable limit ordinal, $\sum_{\alpha<\mu}x_{\alpha}=z$ iff for every increasing function $f\colon\omega\to\mu$ which is cofinal in $\mu,$ we have $z=\lim_{n\to\infty}\sum_{\alpha<f(n)}x_{\alpha}.$<|endoftext|> TITLE: Bezier offset self intersections QUESTION [6 upvotes]: An "offset", or parallel curve, is "a curve whose points are at a fixed normal distance from a given curve". It might happen that the offset curve intersects itself, as shown here (the innermost green curve). This kind of "loop" is called "local" and appears when the curvature of the given curve is greater than $1/d$ (where $d$ is the offset distance). I would like to compute the approximate cubic Bezier offset curve/curves to a given cubic Bezier curve ($B$), but the local loops are giving me trouble. I thought I would split the original curve at the parameters ($t$) where the curvature radius ($r$) falls below the given offset distance. The signed curvature radius can be calculated as: $$r = \frac{\|B'(t)\|^3}{B'(t) \times B''(t)}$$ The problem I have is that I don't know how to find those parameter values. I thought I would somehow get some nice polynomial whose roots are the locations along the original curve where the splits would occur. But I cannot figure out how to derive this polynomial from the curvature formula, or whether it is even possible (there is a polynomial in the denominator). Could someone, please, give me a helping hand here? My question is: where to trim the original curve (into possibly more parts) so that the curvature radius would be at least the offset distance everywhere? Alternatively, there is a paper "Error Bounded Variable Distance Offset Operator for Free Form Curves and Surfaces" written by Elber and Cohen. In chapter 4, it describes a way of trimming off the local loops by use of tangents, but unfortunately I don't really understand it. Perhaps someone could cast some light on this method instead? REPLY [6 votes]: Let the control points of the Bezier curve be $\mathbf{p}_1$, $\mathbf{p}_2$, $\mathbf{p}_3$, and $\mathbf{p}_4$. Then the parametrized Bezier curve is $$ \mathbf{b}(t) = (1 - t)^3 \mathbf{p}_1 + 3 t(1 - t)^2 \mathbf{p}_2 + 3 t^2 (1 - t) \mathbf{p}_3 + t^3 \mathbf{p}_4 \,. $$ The slope (tangent) of $\mathbf{b}(t)$ is $$ \begin{align} \mathbf{t}(t) & = \mathbf{b}^{'}(t) \\ &= - 3(1 - t)^2 \mathbf{p}_1 + 3 [(1 - t)^2 - 2 t(1 - t)] \mathbf{p}_2 + 3 [2t(1 - t) - t^2] \mathbf{p}_3 + 3 t^2 \mathbf{p}_4 \\ & = - 3(1 - t)^2 \mathbf{p}_1 + 3 (1 - t)(1 - 3t) \mathbf{p}_2 + 3 (2t - 3t^2) \mathbf{p}_3 + 3 t^2 \mathbf{p}_4 \,. \end{align} $$ The normal to $\mathbf{b}(t)$ can be computed using the cross product of the tangent with the $\mathbf{e}_z$ vector where $\mathbf{e}_z = (0, 0, 1)$ and assuming $\mathbf{p}_i = (x_i, y_i, 0)$.
Then $$ \begin{align} \mathbf{n}(t) & = \mathbf{t}(t) \times \mathbf{e}_z = (t_2z_3-t_3z_2)\mathbf{e}_x-(t_1z_3-t_3z_1)\mathbf{e}_y+(t_1z_2-t_2z_1)\mathbf{e}_z \\ & = t_2\,\mathbf{e}_x - t_1\,\mathbf{e}_y + 0\,\mathbf{e}_z \\ \end{align} $$ Note that $\lVert\mathbf{n}(t)\rVert_{} = \lVert\mathbf{t}(t)\rVert_{}$. The convexity of $\mathbf{b}(t)$ is $$ \begin{align} \mathbf{c}(t) = \mathbf{b}^{''}(t) & = 6 (1 - t) \mathbf{p}_1 + 3 [-(1 - 3t) - 3(1 - t)] \mathbf{p}_2 + 3 (2 - 6 t) \mathbf{p}_3 + 6 t \mathbf{p}_4 \\ & = 6 (1 - t) \mathbf{p}_1 - 6 (2 - 3t) \mathbf{p}_2 + 6 (1 - 3 t) \mathbf{p}_3 + 6 t \mathbf{p}_4 \\ \end{align} $$ The offset curve is defined as $$ \mathbf{b}_d(t) = \mathbf{b}(t) + \frac{\mathbf{n}(t)}{\lVert\mathbf{n}(t)\rVert_{}} \,d =: \mathbf{b}(t) + \widehat{\mathbf{n}}(t)\,d $$ where $d$ is the offset distance. The tangent to the offset curve is $$ \mathbf{t}_d(t) = \mathbf{b}^{'}(t) + \cfrac{d \widehat{\mathbf{n}}}{d t}\,d = \mathbf{t}(t) + \cfrac{d \widehat{\mathbf{n}}}{d t}\,d\,. $$ The derivative of the unit normal is $$ \cfrac{d \widehat{\mathbf{n}}}{d t} = \cfrac{d }{d t}\left(\frac{\mathbf{n}(t)}{\lVert\mathbf{n}(t)\rVert_{}}\right) = \frac{1}{\lVert\mathbf{n}(t)\rVert_{}} \cfrac{d \mathbf{n}}{d t} + \cfrac{d }{d t}\left(\frac{1}{\lVert\mathbf{n}(t)\rVert_{}}\right)\mathbf{n}(t)\,. $$ Note that $$ \cfrac{d \mathbf{n}}{d t} = \cfrac{d }{d t}[\mathbf{t}(t) \times \mathbf{e}_z] = \cfrac{d \mathbf{t}}{d t} \times \mathbf{e}_z = \mathbf{b}^{''}(t) \times \mathbf{e}_z = \mathbf{c}(t) \times \mathbf{e}_z = c_2\,\mathbf{e}_x - c_1\,\mathbf{e}_y\,. $$ To find the derivative of the inverse of the norm of $\mathbf{n}(t)$, we note that $$ \frac{1}{\lVert\mathbf{n}(t)\rVert_{}} = \frac{1}{\sqrt{n_j n_j}} = (n_j n_j)^{-1/2} \,. $$ Therefore, $$ \cfrac{d }{d t}\left(\frac{1}{\lVert\mathbf{n}(t)\rVert_{}}\right) = -\frac{1}{2} (n_j n_j)^{-3/2} \left(2n_j \cfrac{d n_j}{d t}\right) = - \frac{1}{\lVert\mathbf{n}(t)\rVert_{}^3} \left(\mathbf{n}(t) \cdot \cfrac{d \mathbf{n}}{d t}\right) $$ and we have $$ \cfrac{d \widehat{\mathbf{n}}}{d t} = \frac{1}{\lVert\mathbf{n}(t)\rVert_{}} \cfrac{d \mathbf{n}}{d t} - \frac{1}{\lVert\mathbf{n}(t)\rVert_{}^3} \left(\mathbf{n}(t) \cdot \cfrac{d \mathbf{n}}{d t}\right) \mathbf{n}(t) = \frac{1}{\lVert\mathbf{n}(t)\rVert_{}} \cfrac{d \mathbf{n}}{d t} - \frac{1}{\lVert\mathbf{n}(t)\rVert_{}^3} \left[\mathbf{n}(t) \otimes \mathbf{n}(t)\right] \cdot \cfrac{d \mathbf{n}}{d t} $$ or $$ \cfrac{d \widehat{\mathbf{n}}}{d t} = \frac{1}{\lVert\mathbf{n}(t)\rVert_{}} \left[\boldsymbol{I} - \widehat{\mathbf{n}}(t) \otimes \widehat{\mathbf{n}}(t)\right] \cdot \cfrac{d \mathbf{n}}{d t} $$ where $\boldsymbol{I}$ is the identity matrix. We now have an algebraic expression for the offset tangent: $$ \mathbf{t}_d(t) = \mathbf{t}(t) + \frac{d}{\lVert\mathbf{n}(t)\rVert_{}} \left[\boldsymbol{I} - \widehat{\mathbf{n}}(t) \otimes \widehat{\mathbf{n}}(t)\right] \cdot [\mathbf{c}(t) \times \mathbf{e}_z] $$ Define $\mathbf{m}(t) := \mathbf{c}(t) \times \mathbf{e}_z$ to get $$ \mathbf{t}_d(t) = \mathbf{t}(t) + \frac{d}{\lVert\mathbf{n}(t)\rVert_{}} \left[\boldsymbol{I} - \widehat{\mathbf{n}}(t) \otimes \widehat{\mathbf{n}}(t)\right] \cdot \mathbf{m}(t) \,. $$ The Elber-Cohen approach uses the sign of $ \mathbf{t}(t) \cdot \mathbf{t}_d(t)$ to find the locations of the cusps. This computation has to be done numerically. Once the cusps have been found, the two segments are intersected numerically to find the crossing point. 
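For concreteness, here is a minimal sketch of that numerical sign test in R (it assumes the control points are given as 2-vectors, uses the tangent, convexity and offset-tangent formulas derived above, and simply samples $t$ on a uniform grid; the full script at the end of this answer does the same computation together with the plotting):

    bezier_tangent <- function(t, p1, p2, p3, p4) {
      -3*(1 - t)^2*p1 + 3*(1 - t)*(1 - 3*t)*p2 + 3*(2*t - 3*t^2)*p3 + 3*t^2*p4
    }
    bezier_convex <- function(t, p1, p2, p3, p4) {
      6*(1 - t)*p1 - 6*(2 - 3*t)*p2 + 6*(1 - 3*t)*p3 + 6*t*p4
    }
    offset_tangent <- function(t, p1, p2, p3, p4, d) {
      tv <- bezier_tangent(t, p1, p2, p3, p4)
      cv <- bezier_convex(t, p1, p2, p3, p4)
      nv <- c(tv[2], -tv[1])                    # n = t x e_z
      mv <- c(cv[2], -cv[1])                    # m = c x e_z = dn/dt
      nhat <- nv / sqrt(sum(nv^2))
      dnhat <- (mv - sum(nhat * mv) * nhat) / sqrt(sum(nv^2))
      tv + d * dnhat                            # t_d = t + d * d(nhat)/dt
    }
    # Sign changes of t(t) . t_d(t) on a grid bracket the cusps of the offset curve.
    find_cusps <- function(p1, p2, p3, p4, d, n = 1000) {
      ts <- seq(0, 1, length.out = n)
      dots <- sapply(ts, function(t) {
        sum(bezier_tangent(t, p1, p2, p3, p4) * offset_tangent(t, p1, p2, p3, p4, d))
      })
      ts[which(diff(sign(dots)) != 0)]  # approximate parameters where cusps occur
    }
    find_cusps(c(1, 2), c(1.45, 3), c(1.55, 3), c(1.7, 2), d = 0.1)

The control points in the example call are the same ones used in the full script below.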
Curvature Warning The algebra below needs to be checked for correctness. The curvature of the Bezier curve is $$ \frac{1}{d}\,\mathbf{e}_z = \kappa(t)\, \mathbf{e}_z = \frac{\mathbf{t}(t) \times \mathbf{c}(t)}{\lVert\mathbf{t}(t)\rVert_{}^3} \,. $$ Therefore, $$ d = \frac{\lVert\mathbf{t}(t)\rVert_{}^3}{\left[\mathbf{t}(t) \times \mathbf{c}(t)\right]\cdot \mathbf{e}_z} = \frac{\lVert\mathbf{t}(t)\rVert_{}^3}{\mathbf{t}(t) \cdot \left[\mathbf{c}(t) \times \mathbf{e}_z\right]} = \frac{\lVert\mathbf{t}(t)\rVert_{}^3}{\mathbf{t}(t) \cdot \mathbf{m}(t)} \,. $$ Plugging the above relation for $d$ into the expression for $\mathbf{t}_d$, we have $$ \begin{align} \mathbf{t}_d(t) & = \mathbf{t}(t) + \frac{\lVert\mathbf{t}(t)\rVert_{}^3}{\lVert\mathbf{n}(t)\rVert_{}\left[\mathbf{t}(t) \cdot \mathbf{m}(t)\right]} \left[\boldsymbol{I} - \widehat{\mathbf{n}}(t) \otimes \widehat{\mathbf{n}}(t)\right] \cdot \mathbf{m}(t) \\ & = \frac{\lVert\mathbf{n}\rVert_{} (\mathbf{t} \cdot \mathbf{m}) \,\mathbf{t} + \lVert\mathbf{t}\rVert_{}^3 \left[\boldsymbol{I} - \widehat{\mathbf{n}}\otimes\widehat{\mathbf{n}}\right] \cdot \mathbf{m}} {\lVert\mathbf{n}\rVert_{} (\mathbf{t} \cdot \mathbf{m})} \,. \end{align} $$ Using $\lVert\mathbf{n}\rVert_{} = \lVert\mathbf{t}\rVert_{}$, we get $$ \mathbf{t}_d = \frac{\lVert\mathbf{t}\rVert_{}^3 (\mathbf{t} \cdot \mathbf{m}) \,\mathbf{t} + \lVert\mathbf{t}\rVert_{}^3 \left[\lVert\mathbf{t}\rVert_{}^2\,\boldsymbol{I} - \mathbf{n} \otimes \mathbf{n}\right] \cdot \mathbf{m}} {\lVert\mathbf{t}\rVert_{}^3 (\mathbf{t} \cdot \mathbf{m})} = \frac{ (\mathbf{t} \cdot \mathbf{m}) \,\mathbf{t} + \left[\lVert\mathbf{t}\rVert_{}^2\,\boldsymbol{I} - \mathbf{n} \otimes \mathbf{n}\right] \cdot \mathbf{m}} { \mathbf{t} \cdot \mathbf{m}} \,. $$ The Elber-Cohen approach uses the equation $$ \mathbf{t}(t) \cdot \mathbf{t}_d(t) = 0 $$ to find the values of $t$ where trimming is needed. Thus, $$ \begin{align} \mathbf{t}(t) \cdot \mathbf{t}_d(t) &= \frac{ (\mathbf{t} \cdot \mathbf{m}) \,(\mathbf{t} \cdot \mathbf{t}) + \left[\lVert\mathbf{t}\rVert_{}^2\,\boldsymbol{I} - \mathbf{n} \otimes \mathbf{n}\right] : (\mathbf{t} \otimes \mathbf{m})} {\mathbf{t} \cdot \mathbf{m}} \\ & =\frac{\lVert\mathbf{t}\rVert_{}^2 (\mathbf{t} \cdot \mathbf{m}) + \left[\lVert\mathbf{t}\rVert_{}^2\,\boldsymbol{I} - \mathbf{n} \otimes \mathbf{n}\right] : (\mathbf{t} \otimes \mathbf{m})} {\mathbf{t} \cdot \mathbf{m}} \\ & =\frac{2\lVert\mathbf{t}\rVert_{}^2 (\mathbf{t} \cdot \mathbf{m}) - (\mathbf{n} \cdot \mathbf{t}) (\mathbf{n} \cdot \mathbf{m})} {\mathbf{t} \cdot \mathbf{m}} \\ & = 2\lVert\mathbf{t}\rVert_{}^2 = 0 \end{align} $$ where we have used $\mathbf{n} \cdot \mathbf{t} = 0$. The quantity $\lVert\mathbf{t}\rVert$ is never zero and hence the expression for $d$ cannot be used to solve the above quartic equation for $t$. Instead, Elber and Cohen use the expression $$ \mathbf{t} \cdot \mathbf{t}_d = \mathbf{t}\cdot \mathbf{t} + \frac{d}{\lVert\mathbf{t}\rVert}(\mathbf{t} \cdot \mathbf{m}) $$ to find the sign of the vector product as $$ \text{sign}[\mathbf{t} \cdot \mathbf{t}_d] = \text{sign}\left[1 + \frac{d}{\lVert\mathbf{t}\rVert^3}(\mathbf{t} \cdot \mathbf{m}]\right] = \text{sign}\left[1 + \kappa(t) d\right] \,. $$ This expression is used to justify the use of change in sign of the dot product of the two tangents in determining the cusps in the offset curve. 
R-Code require("ggplot2") # Install spatstat for polyline intersections if (!require(spatstat)) { install.packages("spatstat") library(spatstat) } setwd(".") Bezier <- function(t, x1, x2, x3, x4) { B <- (1.0 - t)^3*x1 + 3.0*t*(1- t)^2*x2 + 3.0*t^2*(1-t)*x3 + t^3*x4 #print(paste("t = ", t, " x1 = ", x1, " B = ", B)) return(B) } BezierTangent <- function(t, x1, x2, x3, x4) { Bp <- -3*(1 - t)^2*x1 + 3*(1 - t)*(1 - 3*t)*x2 + 3*(2*t - 3*t^2)*x3 + 3*t^2*x4 normBp <- sqrt(Bp[1]^2 + Bp[2]^2) Bphat <- Bp/normBp return(Bphat) } BezierNormal <- function(t, x1, x2, x3, x4) { tangent <- BezierTangent(t, x1, x2, x3, x4) normal <- c(tangent[2], -tangent[1]) return(normal) } BezierConvex <- function(t, x1, x2, x3, x4) { Bpp <- 6*(1 - t)*x1 - 6*(2 - 3*t)*x2 + 6*(1 - 3*t)*x3 + 6*t*x4 normBpp <- sqrt(Bpp[1]^2 + Bpp[2]^2) Bpphat <- Bpp/normBpp return(Bpphat) } BezierOffset <- function(B, N, d) { Bd = B + N*d return(Bd) } BezierOffsetTangent <- function(t, x1, x2, x3, x4, d) { # Compute tangent tvec <- -3*(1 - t)^2*x1 + 3*(1 - t)*(1 - 3*t)*x2 + 3*(2*t - 3*t^2)*x3 + 3*t^2*x4 # Compute normal nvec <- c(tvec[2], -tvec[1]) # Compute norm of normal norm_nvec <- sqrt(nvec[1]^2 + nvec[2]^2) # Compute nhat nhat <- nvec/norm_nvec # Compute convexity cvec <- 6*(1 - t)*x1 - 6*(2 - 3*t)*x2 + 6*(1 - 3*t)*x3 + 6*t*x4 # Compute dn_dt dn_dt <- c(cvec[2], -cvec[1]) # Compute nhat otimes nhat nhat.nhat = matrix(c(nhat[1]*nhat[1], nhat[1]*nhat[2], nhat[2]*nhat[1], nhat[2]*nhat[2]), nrow = 2, byrow = TRUE) # Compute (nhat o nhat).dn_dt nhat.nhat.dn_dt = nhat.nhat %*% dn_dt # Compute I.dn_dt II = matrix(c(1, 0, 0, 1), nrow = 2, byrow = TRUE) I.dn_dt = II %*% dn_dt # Compute I.dn_dt/||n|| I.dn_dt_n = I.dn_dt/norm_nvec # Compute dnhat_dt dnhat_dt_old = (I.dn_dt - nhat.nhat.dn_dt)/norm_nvec tdvec <- tvec + dnhat_dt_old*d norm_tdvec <- sqrt(tdvec[1]^2 + tdvec[2]^2) #print(tvec) # Compute dn_dt dn_dt = c(cvec[2], -cvec[1]) # Compute dn_dt/||n|| dn_dt_n = dn_dt/norm_nvec # Compare dn_dt/||n|| #print(dn_dt_n - I.dn_dt_n) # Compute nhat . dn_dt nhat.dn_dt = nhat[1]*dn_dt[1] + nhat[2]*dn_dt[2] # Compute (nhat . dn_dt) nhat nhat.dn_dt_nhat = nhat*nhat.dn_dt # Compare (nhat . dn_dt) nhat #print(nhat.dn_dt_nhat - nhat.nhat.dn_dt) # Compute (nhat . 
dn_dt) nhat/||n|| nhat.dn_dt_nhat_n = nhat.dn_dt_nhat/norm_nvec # Compute dnhat/dt dnhat_dt = dn_dt_n - nhat.dn_dt_nhat_n # Compare dnhat_dt #print(dnhat_dt - dnhat_dt_old) # Compute td td = tvec + dnhat_dt*d # Compute tdhat tdhat = td/sqrt(td[1]^2 + td[2]^2) # Compare td and tdvec #print(td - tdvec) return(tdhat) } x1 = c(1,2) x2 = c(1.45,3) x3 = c(1.55,3) x4 = c(1.7,2) tlist = seq(from = 0, to = 1, length.out = 50) B <- sapply(tlist, function(t) {Bezier(t, x1, x2, x3, x4)}) Bprime <- sapply(tlist, function(t) {BezierTangent(t, x1, x2, x3, x4)}) Bnorm <- sapply(tlist, function(t) {BezierNormal(t, x1, x2, x3, x4)}) Bpprime <- sapply(tlist, function(t) {BezierConvex(t, x1, x2, x3, x4)}) d = 0.1 Boffset <- BezierOffset(B, Bnorm, d) Boffsetprime <- sapply(tlist, function(t) {BezierOffsetTangent(t, x1, x2, x3, x4, d)}) T.Td <- mapply(function(Tx, Ty, Tdx, Tdy) { Tt = c(Tx, Ty) Td = c(Tdx, Tdy) Tx*Tdx + Ty*Tdy }, Bprime[1,], Bprime[2,], Boffsetprime[1,], Boffsetprime[2,]) print(T.Td) # Find sign chnage indices sign.T.Td = sign(T.Td) sign.changes <- function(d) { p <- cumsum(rle(d)$lengths) + 1 p[-length(p)] } indices = sign.changes(sign.T.Td) df = data.frame(x = B[1,], y = B[2,], label = "Curve") df = rbind(df, data.frame(x = c(x1[1], x2[1], x3[1], x4[1]), y = c(x1[2], x2[2], x3[2], x4[2]), label = "Control points")) df = rbind(df, data.frame(x = Boffset[1,], y = Boffset[2,], label = "Offset")) df_pts = rbind(data.frame(x = Boffset[1,indices[1]], y = Boffset[2,indices[1]]), data.frame(x = Boffset[1,indices[2]], y = Boffset[2,indices[2]])) df_pts$label = "Cusps" segment_1 = data.frame(x0 = Boffset[1,1:(indices[1]-1)], x1 = Boffset[1,2:indices[1]], y0 = Boffset[2,1:(indices[1]-1)], y1 = Boffset[2,2:indices[1]]) segment_2 = data.frame(x0 = Boffset[1,indices[2]:(ncol(Boffset)-1)], x1 = Boffset[1,(indices[2]+1):ncol(Boffset)], y0 = Boffset[2,indices[2]:(ncol(Boffset)-1)], y1 = Boffset[2,(indices[2]+1):ncol(Boffset)]) segment_window = owin(c(min(Boffset[1,]), max(Boffset[1,])), c(min(Boffset[2,]), max(Boffset[2,]))) segment_1.psp = as.psp(segment_1, window = segment_window) segment_2.psp = as.psp(segment_2, window = segment_window) crossings = as.data.frame(crossing.psp(segment_1.psp, segment_2.psp)) print(crossings) df_cross = data.frame(x = crossings$x, y = crossings$y, label = "Crossing") plt = ggplot() + geom_path(data = df[which(df$label == "Curve" | df$label == "Offset"),], aes(x = x, y = y, color = label), size = 1) + geom_point(data = df[which(df$label == "Control points"),], aes(x = x, y = y, color = label), size = 5) + geom_point(data = df_pts, aes(x = x, y = y, color = label), size = 2) + geom_point(data = df_cross, aes(x = x, y = y, color = label), size = 2) print(plt) dev.copy(png, "Crossings.png") dev.off() df_tan = data.frame(x = B[1,], xend = B[1,]+Bprime[1,]/20.0, y = B[2,], yend = B[2,]+Bprime[2,]/20.0, label = "Tangent") plt = plt + geom_segment(data = df_tan, aes(x = x, xend = xend, y = y, yend = yend, color = label), size = 0.75, arrow = arrow(length = unit(0.2, "cm"))) df_tan_orig = data.frame(x = Boffset[1,], xend = Boffset[1,]+Bprime[1,]/20.0, y = Boffset[2,], yend = Boffset[2,]+Bprime[2,]/20.0, label = "OrigTangent") plt = plt + geom_segment(data = df_tan_orig, aes(x = x, xend = xend, y = y, yend = yend, color = label), size = 0.5, arrow = arrow(length = unit(0.2, "cm"))) df_tan_off = data.frame(x = Boffset[1,], xend = Boffset[1,]+Boffsetprime[1,]/20.0, y = Boffset[2,], yend = Boffset[2,]+Boffsetprime[2,]/20.0, label = "OffsetTangent") plt = plt + geom_segment(data = 
df_tan_off, aes(x = x, xend = xend, y = y, yend = yend, color = label), size = 0.5, arrow = arrow(length = unit(0.2, "cm"))) #dev.new() #print(plt) df_nor = data.frame(x = B[1,], xend = B[1,]+Bnorm[1,]/20.0, y = B[2,], yend = B[2,]+Bnorm[2,]/20.0, label = "Normal") plt = plt + geom_segment(data = df_nor, aes(x = x, xend = xend, y = y, yend = yend, color = label), size = 0.5, arrow = arrow(length = unit(0.2, "cm"))) #dev.new() #print(plt) df_con = data.frame(x = B[1,], xend = B[1,]+Bpprime[1,]/20.0, y = B[2,], yend = B[2,]+Bpprime[2,]/20.0, label = "Convexity") plt = plt + geom_segment(data = df_con, aes(x = x, xend = xend, y = y, yend = yend, color = label), size = 0.5, arrow = arrow(length = unit(0.2, "cm"))) + coord_fixed() dev.new() print(plt) dev.copy(png, "TangentsNormals.png") dev.off() df_T = data.frame(x = Bprime[1,], y = Bprime[2,], label = "Tangent") df_T = rbind(df_T, data.frame(x = Boffsetprime[1,], y = Boffsetprime[2,], label = "OffsetTangent")) plt1 = ggplot(data = df_T) + geom_path(aes(x = x, y = y, color = label), size = 1.0) coord_fixed() #dev.new() #print(plt1) df_TTd = data.frame(t = tlist, T.Td = T.Td) plt2 = ggplot(data = df_TTd) + geom_path(aes(x = t, y = T.Td), size = 1.0) dev.new() print(plt2) dev.copy(png, "T_Td.png") dev.off()<|endoftext|> TITLE: Geometric intuition for directional derivatives QUESTION [6 upvotes]: What I'm trying to do in this post is to see that the intuition I've built is correct, and, if it's not, I would like someone to share its own intuition on why directional derivates are related with the gradient vector. My intuition: The formal definition of a directional derivative is: $$ \frac{\partial f}{\partial \vec{v}} =\nabla f(a,b) \cdot \vec{v} $$ where $\vec{v}$ is the vector that indicates the direction where we need to compute the rates of change. By the definition of partial derivatives, when we compute $\frac{\partial f}{\partial x}$ , we're fixing a plane in $y$ direction, and just analysing what a tiny change in $x$ effects our output. Same happens in $\frac{\partial f}{\partial y}$, we fix a plane in $x$ direction, and analyse what a tiny change in $y$ effects our output. Now, when we compute a directional derivate of $\vec{v}$, what we're doing (in my head) is fixing a plane, $\beta$, that has $\vec{v}$ as one of it's directional vectors and intersects the surface. Just like the picture below: Because $\beta$ has $\vec{v}$ as one of it's directional vectors, what we're essentially doing is checking what a tiny change in direction of $\vec{v}$ causes to our output (surface). But we already know what a tiny change in $x$ causes and what a tiny change in $y$ causes, respectively, $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$... 
Assuming that everything I've said is correct, we can decompose $\vec{v}$ in some linear combination of the basis vector of our space, in this case, the standard basis: $$ \vec{v} = \left[\begin{matrix} a\\ b\\ c\\ \end{matrix}\right] = a \cdot \left[\begin{matrix} 1\\ 0\\ 0\\ \end{matrix}\right] + b \cdot \left[\begin{matrix} 0\\ 1\\ 0\\ \end{matrix}\right] + c \cdot \left[\begin{matrix} 0\\ 0\\ 1\\ \end{matrix}\right] $$ Ignoring the third vector $\left[\begin{matrix} 0\\ 0\\ 1\\ \end{matrix}\right]$ because it deals with our output, we can see $\left[\begin{matrix} 1\\ 0\\ 0\\ \end{matrix}\right]$ and $\left[\begin{matrix} 0\\ 1\\ 0\\ \end{matrix}\right]$ as some change in $x$ and $y$ direction, that are computed by their partial derivatives, and we're looking on what a tiny change in $\vec{v}$ direction causes to our output, hence: $$ \frac{\partial f}{\partial \vec{v}} = a \cdot \frac{\partial f}{\partial x} + b \cdot \frac{\partial f}{\partial y}\\ \frac{\partial f}{\partial \vec{v}} = \nabla f(a,b) \cdot \vec{v} $$ Am I correct? REPLY [2 votes]: Let’s back up a bit. As Hans Ludmark points out in his comment above, the basic definition of the directional derivative in the direction specified by the unit vector $\mathbf u=(u_1,u_2)$ at a point $P=(a,b)$ is via a limit similar to the one from elementary calculus: $${\partial f\over\partial\mathbf u}(a,b)=\lim_{h\to0}{f(a+hu_1,b+hu_2)-f(a,b)\over h}.$$ As you’ve observed, this amounts to taking a vertical slice through the surface and then computing the ordinary derivative of that slice, as illustrated below. This derivative is, of course, the slope of the tangent line (blue) to the slice at that point. Observe that this line is also the intersection of the tangent plane at that point (grayish blue) with the cutting plane (violet), so we can interpret the directional derivative as the steepness of the tangent plane in a given direction. As you rotate the cutting plane around $P$, the slope of this line changes, reaching a maximum when the two planes are perpendicular, as we’ll see below. (You can also see that this is the case by visualizing cutting a cylinder parallel to the $z$-axis by a plane and imagining what happens to the high point as you move that plane around.) Let’s say that the tangent plane is given by the equation $\lambda x+\mu y-z=d$ with normal $\mathbf n_t=(\lambda,\mu,-1)$. A normal to the cutting plane is $\mathbf n_c=(-u_2,u_1,0)$, which is just $\mathbf u$ rotated ninety degrees. In $\mathbb R^3$ we can find the direction of the line of intersection via a cross product: $$\mathbf n_t\times\mathbf n_c=(u_1,u_2,\lambda u_1+\mu u_2)$$ and the slope of this line is thus $${\lambda u_1+\mu u_2\over\sqrt{u_1^2+u_2^2}}=\lambda u_1+\mu u_2=(\lambda,\mu)\cdot\mathbf u=\|(\lambda,\mu)\|\cos\phi,$$ where $\phi$ is the angle between the projection of $\mathbf n_t$ onto the $x$-$y$ plane and $\mathbf u$. The slope is therefore maximal when $\phi=0$, i.e., when $\mathbf u$ and the projection of $\mathbf n_t$ point in the same direction, but this happens when the two planes are perpendicular. The maximum value of this slope is $\|(\lambda,\mu)\|$. This is where the gradient of $f$ comes in. 
If we write the equation of the surface as $F(x,y,z)=f(x,y)-z=0$, then $\nabla F=(f_x,f_y,-1)$ is normal to the surface, so an equation of the tangent plane at $(a,b,f(a,b))$ is $$xf_x(a,b)+yf_y(a,b)-z=af_x(a,b)+bf_y(a,b)-f(a,b).$$ This is exactly in the form analyzed above, with $\lambda=f_x(a,b)$ and $\mu=f_y(a,b)$, so $${\partial f\over\partial\mathbf u}(a,b)=\nabla f(a,b)\cdot\mathbf u$$ with the maximal rate of change given by $\|\nabla f(a,b)\|$. This seems awfully coincidental, but it’s not. Going back to the plane equation $\lambda x+\mu y-z=d$ above, the coefficients $\lambda$ and $\mu$ are respectively the “$x$-slope” and “$y$-slope,” i.e., the slopes of the intersections with planes parallel to the $x$- and $y$-axes. These slopes are encoded in the normal $(\lambda,\mu,-1)$. For the tangent plane, these slopes are the directional derivatives in the directions of the coordinate axes, also known as the partial derivatives of $f$.<|endoftext|> TITLE: Laurent series for $\cot (z)$ QUESTION [6 upvotes]: I'm looking for clarification on how to compute a Laurent series for $\cot z$ I started by trying to find the $\frac{1}{\sin z}$. I've found multiple references that go from an Taylor expansion for $\sin z$ directly to an expression for $\frac{1}{\sin z}$ but I am unable to follow how they got there. This thread Calculate Laurent series for $1/ \sin(z)$ started to answer my question but I do not understand how to use the given formulas to "iteratively compute" the coefficients, and the example given has several coefficients in place and I'm not sure how they were obtained. REPLY [9 votes]: If you are looking for a trucated series, you could start from $$\tan(z)=z+\frac{z^3}{3}+\frac{2 z^5}{15}+\frac{17 z^7}{315}+\frac{62 z^9}{2835}+O\left(z^{11}\right)$$ which makes $$\cot(z)=\frac 1{z+\frac{z^3}{3}+\frac{2 z^5}{15}+\frac{17 z^7}{315}+\frac{62 z^9}{2835}+O\left(z^{11}\right)}=\frac 1z \frac 1{1+\frac{z^2}{3}+\frac{2 z^4}{15}+\frac{17 z^6}{315}+\frac{62 z^8}{2835}+O\left(z^{10}\right)}$$ and perform long division to get $$\cot(z)=\frac{1}{z}-\frac{z}{3}-\frac{z^3}{45}-\frac{2 z^5}{945}-\frac{z^7}{4725}+O\left(z^9\right)$$ If you want the infinite series consider that $$\cot(z)=\frac 1 {\tan(z)}=f(z)=\sum_{i=0}^\infty a_iz^{i-1}$$ what you can rewrite as $$1=\tan(z)\sum_{i=0}^\infty a_iz^{i-1}$$ that is to say $$1=\left(\sum^{\infty}_{n=1} \frac{B_{2n} (-4)^n (1-4^n)}{(2n)!} z^{2n-1}\right)\times \sum_{i=0}^\infty a_iz^{i-1}$$ For simplicity, let us define $$b_n=\frac{B_{2n} (-4)^n (1-4^n)}{(2n)!}$$ in order to solve $$1=\sum^{\infty}_{n=1}b_nz^{2n-1} \times \sum_{i=0}^\infty a_iz^{i-1}$$ Developing, we get $$1=a_0 b_1+a_1 b_1 z+ (a_2 b_1+a_0 b_2)z^2+ (a_3 b_1+a_1 b_2)z^3+ (a_4 b_1+a_2 b_2+a_0 b_3)z^4+ (a_5 b_1+a_3 b_2+a_1 b_3)z^5+ (a_6 b_1+a_4 b_2+a_2 b_3+a_0 b_4)z^6+(a_7 b_1+a_5 b_2+a_3 b_3+a_1 b_4)z^7 +\cdots$$ Now,we need to solve, for the $a_i$'s the equations $$a_0 b_1=1$$ $$a_1 b_1=0$$ $$a_2 b_1+a_0 b_2=0$$ $$a_3 b_1+a_1 b_2=0$$ $$a_4 b_1+a_2 b_2+a_0 b_3=0$$ $$a_5 b_1+a_3 b_2+a_1 b_3=0$$ $$a_6 b_1+a_4 b_2+a_2 b_3+a_0 b_4=0$$ $$a_7 b_1+a_5 b_2+a_3 b_3+a_1 b_4=0$$ This does not make much problem (using successive eliminations for example). This leads to the infinite series $$\cot(z)=\sum_{n=0}^\infty (-1)^n\frac{ 2^{2 n}\, B_{2 n} }{(2 n)!}z^{2 n-1}$$<|endoftext|> TITLE: Does $A_P \otimes_A A_Q$ have a nice description? QUESTION [5 upvotes]: Let $A$ be a ring and suppose $P$ and $Q$ are distinct maximal ideals. I am trying to understand the ring $A_P \otimes A_Q$. 
I am wondering if $A_P \otimes A_Q$ is ring isomorphic to something else I can get my hands on. Is there some other ring isomorphic to $A_P \otimes A_Q$ that can help me understand $A_P \otimes A_Q$? Thanks. REPLY [4 votes]: Here is my claim, which I prove below: Given multiplicative systems $S_1, S_2 \subseteq A$, we have $$S_1^{-1}A \otimes_A S_2^{-1}A \cong (S_1 S_2)^{-1}A,$$ where $S_1 S_2 = \{s_1 s_2 \mid s_1 \in S_1, s_2 \in S_2\}$ is the multiplicative system generated by $S_1 \cup S_2$. It is not hard to see that $(S_1S_2)^{-1}A = \bar S_1^{-1}(S_2^{-1}A)$, where $\bar S_1$ is the image of $S_1$ in $S_2^{-1}A$. Why is the claim true? Because localization and tensor product both have universal properties. Let me state the universal property for localization a little differently than usual: If $S \subseteq A$ is a multiplicative system, then $\eta_{S^{-1}A} \colon A \to S^{-1}A$ is an $A$-algebra with the universal property that, given any $A$-algebra $\eta_B \colon A \to B$ such that $\eta_B(S)$ is contained in the units $B^\times$ of $B$, there is a unique $A$-algebra morphism $f \colon S^{-1}A \to B$ (i.e. $f$ is a ring homomorphism with $f \circ \eta_{S^{-1}A} = \eta_B$). Put more concisely, $S^{-1}A$ is the initial object in the category consisting of $A$-algebras in which $S$ maps to units. Tensor product also has a universal property -- it's the coproduct in the category of $A$-algebras. Now we prove the claim. For brevity, let $T = S_1^{-1}A \otimes_A S_2^{-1}A$. We show that $T$ satisfies the universal property of $(S_1S_2)^{-1}A$. So let $\eta_B \colon A \to B$ be any $A$-algebra such that $\eta_B(S_1S_2) \subseteq B^\times$; we must show there is a unique $A$-algebra map $T \to B$. Now $S_i \subseteq S_1 S_2$ implies the existence of unique $A$-algebra map $f_i \colon S_i^{-1}A \to B$. Then since $T$ is a coproduct, there is a unique $A$-algebra map $f \colon T \to B$ such that $f \circ \iota_i = f_i$, where $\iota_i \colon S_i^{-1}A \to S_1^{-1}A \otimes S_2^{-1}A$ are the obvious maps. Now if $g \colon T \to B$ is any other $A$-algebra map, then $g \circ \iota_i = f_i$ by uniqueness of $f_i$, so $g = f$ by uniqueness of $f$.<|endoftext|> TITLE: Drawing two cards from a deck of 16 (4 ranks and 4 suits) QUESTION [9 upvotes]: I'm reviewing for an exam and have come across a problem marked incorrect on my homework. The problem reads, There are 16 cards in a deck. The cards have 4 ranks (Jack, Queen, King, and Ace) and 4 suits (Clubs, Diamonds, Hearts, and Spades). You are dealt two cards. What is the probability you get a Diamond card? I misread this question when I first asnwered it, and I'm unsure how to get the correct solution. The solution page says that the solution is $\frac{9}{20}$. What is the probability you get two cards of the same rank? I said that once the first card is drawn, you'll have three remaining cards with that same rank out of a total of 15 cards, so you have a $\frac{3}{15} = \frac{1}{5}$ chance. This answer was the same as the solution manual. Is my logic correct? What is the probability you don't get a Diamond card? This is just 1 - (the solution to part a) = $\frac{11}{20}$. What is the probability you don't get a Diamond card and you get two cards of the same rank? Let A be the event you don't get a Diamond card. Let B be the event you get two cards of the same rank. $P(A' \cup B) = P(B) - P(A \cap B) = \frac{1}{5} - \frac{1}{10} = \frac{1}{10}$ Is there an easier way to think about this? 
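To sanity-check my numbers I also tried a quick simulation sketch in R (the encoding of the deck and the number of trials are arbitrary):

    set.seed(1)
    deck <- expand.grid(rank = 1:4, suit = c("C", "D", "H", "S"))
    m <- 10^5
    res <- replicate(m, {
      hand <- deck[sample(nrow(deck), 2), ]
      c(diamond  = any(hand$suit == "D"),
        samerank = hand$rank[1] == hand$rank[2])
    })
    mean(res["diamond", ])                       # about 0.45 = 9/20
    mean(res["samerank", ])                      # about 0.20 = 1/5
    mean(!res["diamond", ] & res["samerank", ])  # about 0.10 = 1/10

The estimates land near 9/20, 1/5, and 1/10, which is consistent with the values above.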
I believe I have the most misunderstanding on the first question. Thank you for any assistance. REPLY [5 votes]: For the last question: What is the probability you don't get a Diamond card and you get two cards of the same rank? You can do this most easily directly. The first card must not be a diamond, but can be anything else: 12/16 or, simplified, 3/4. There are only two possibilities for the second card: the two non-diamond cards of the same rank as the first card, out of the fifteen remaining cards. 2/15. 3/4 * 2/15 = 1/10<|endoftext|> TITLE: Is it possible to apply a derivative to both sides of a given equation and maintain the equivalence of both sides? QUESTION [13 upvotes]: I have an equation with this shape, where $k,t_1,r \in \Bbb N$: $$2^{(x+1)^2}=k+t_1(x^2+r)$$ And I noticed that I can find $t_1$ in terms of $x$ as follows: $$ \frac{{\rm d} }{{\rm d}x} 2^{(x+1)^2} = \frac{{\rm d} }{{\rm d}x} \left( k+t_1(x^2+r) \right)$$ And then I would continue as follows: $$ 2(x+1)\cdot2^{(x+1)^2}\cdot\ln(2) = 2t_1x$$ $$ (x+1)\cdot2^{(x+1)^2}\cdot\ln(2) = t_1x$$ So finally: $$ t_1=\frac{(x+1)\cdot2^{(x+1)^2}\cdot\ln(2)}{x}$$ Is it possible to use a derivative in such a case? I think that if both functions $2^{(x+1)^2}$ and $t_1(x^2+r)$ are equivalent and the derivative exists it might be possible, but I am not very sure about the validity of that step. Thank you! REPLY [2 votes]: What is a derivative? Let $f$ be defined on an open $I \subset \mathbb{R}$ and fix $c \in I$. $f$ is differentiable at $c$ if and only if there exists a real number denoted $f'(c)$ such that $f(x) = f(c) + (x-c)f'(c) + o(x-c)$. It's easy to see, therefore, that the derivative is a "local" property. It depends not only on the value of $f$ at $c$, but on the values of $f$ in a neighbourhood of $c$. What does this have to do with your question, you ask? Suppose $f$ and $g$ are differentiable. If $f(a) = g(a)$ for some $a \in I$, then $f'(a) = g'(a)$ need not be true at such a point $a$. For example, if you have $x + 1 = 2$, this has a solution $x=1$, but the derivatives of $g(x) := 2$ and $f(x) := x+1$ never coincide. On the other hand, if $f(a) = g(a)$ for some $a \in I$ and for each $a$ satisfying $f(a) = g(a)$ there is a $\delta$ such that $f(x) = g(x)$ for all $x \in (a - \delta, a + \delta)$, then we can definitely say $f'(a) = g'(a)$ for all such $a$. Of course, if $f$ and $g$ are differentiable and equal everywhere then the above holds clearly and hence $f' = g'$ everywhere on $I$.<|endoftext|> TITLE: The Perfect Sharing Algorithm (ABBABAAB...) QUESTION [7 upvotes]: Less of a question and more of an exercise, it has to do with something I found while doing some programming and being unable to find things. Basically I wanted a formula for the perfect sharing algorithm, as I call it (ABBABAABBAABABBA...); I don't know the proper name of the sequence but it's used for truly fair sharing between 2 people. I couldn't find a formula, so I figured this out after a while: start with AB; every A turns into an AB and every B turns into a BA. AB -> ABBA -> ABBABAAB ... This allowed me to get a computer to achieve the algorithm in many ways, but it also raised some questions. Question 1: Can I repeat this forever and have it properly generate the algorithm? The algorithm is normally created by taking AB, then inverting each 2-state 'digit' and sticking it on the end (ABBA). You then take this entire sequence and repeat the process (ABBABAAB). This is an infinite sequence. Is what I'm doing going to generate the same sequence as the second method?
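A minimal Python sketch (illustration only; A is encoded as 0 and B as 1, and the helper names are not part of the original description) lets the two constructions be compared for as many rounds as you like:

def by_substitution(steps):
    # start from AB; at each step replace every 0 (A) by 0,1 and every 1 (B) by 1,0
    seq = [0, 1]
    for _ in range(steps):
        seq = [bit for x in seq for bit in ((0, 1) if x == 0 else (1, 0))]
    return seq

def by_inverting_and_appending(steps):
    # start from AB; at each step append the letter-by-letter inverse of the whole word
    seq = [0, 1]
    for _ in range(steps):
        seq = seq + [1 - x for x in seq]
    return seq

a = by_substitution(10)
b = by_inverting_and_appending(10)
print(len(a), len(b), a == b)              # 2048 2048 True
print("".join("AB"[x] for x in a[:16]))    # ABBABAABBAABABBA

(Agreement of finite prefixes is evidence rather than proof; the answer below addresses the general question.)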
Question 2. 2 people decide they want to share a task, so they use this algorithm. a) If they know how many turns have occurred but forget who's turn it is, can they generate an equation that tells them who's turn it is given the number of turns that have passed? b) The 2 people forget where they are in the sequence, but they know who's turn it is right now. How many previous turns will they need to remember in order to find their place again under the worse possible scenario? REPLY [5 votes]: This sequence is the Thue-Morse sequence about which you can find quite a bit online; you can find a plethora of facts about it at the Online Encyclopedia of Integer Sequences as pointed out by Q the Platypus in the comments. (As a side note this is a really good website for finding sequences; you only need to type in the first 7 elements $0,1,1,0,1,0,0$ to it to get the Thue-Morse sequence). The Thue-Morse sequence has some interesting properties, including the fact that it describes a fair way to share as you noted. The answer to your first question is yes, as Wolfram Mathworld confirms; specifically you can find here that: [This sequence] is constructed by following a few simple steps: (1) start with the two digit string, $01$ (2) replace every $0$ in the string by $01$, and replace every $1$ in the string by $10$. (3) with the newly-created string from the previous step, go back to the beginning of step 2, and replace each $0$ and $1$ with the same values as before. This (if you replace $0$ with $A$ and $1$ with $B$) is precisely your algorithm, which is an alternative way of generating the Thue-Morse sequence to the usual way which you mention (inverting the bits and adding the result on to the previous step - described here), so this substitution system does in fact generate the Thue-Morse sequence. To answer part (a) of your second question, here you can find a mention of a way of calculating the $n^{th}$ term in the series: To compute the nth element $t_{n}$, write the number n in binary. If the number of ones in this binary expansion is odd then $t_{n}=1$, if even then $t_{n}=0$. This can be used to calculate who's turn it is if they know how many turns they have had, by simply taking the number $n$ of the turn they are up to, writing it in binary, and if there are an odd number of ones in the binary expansion then it is $B$'s turn and $A$'s turn if the number of ones is even. The same Mathworld page writes this in the simple form $t_n=s_{2}(n)\mod{2}$ where $s_{2}(n)$ is the binary digit sum. As to part (b) of your second question, I do not know in general what information is required to be able to find your place in the series. However, simply knowing a past sequence of turns will not help if $A$ and $B$ do not know around how long they have been playing for, since if we let $S$ be the sequence of past turns that they do remember, and let $n$ be the actual position which they are at (which they want to find); then let $m=2^{\left\lceil\log_{2} n\right\rceil}$ be the next point at which the normal generating process will invert and add on the bits up to $m$. However, the $m$ bits that follow bit $m$ are the inverse of the previous bits, and the $2m$ bits that follow bit $2m$ are the inverse of the first $2m$ bits. But this means that the sequence of bits from $3m$ to $4m$ are identical to the first $m$ bits, so the sequence $S$ occurs there also (since it occurs in the first $m$ bits). 
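Both the lookup rule from part (a) and the repetition phenomenon just described are easy to check numerically; a small Python sketch (illustration only, with turns counted from $0$ so that the very first turn is A's):

def turn(n):
    # parity of the number of 1s in the binary expansion of n: 0 means A, 1 means B
    return bin(n).count("1") % 2

word = [turn(n) for n in range(1 << 12)]   # the first 4096 turns

print(turn(6), turn(7))   # 6 = 110 has two 1s -> A (0);  7 = 111 has three 1s -> B (1)

# A remembered block of recent turns reappears over and over again,
# so the block alone cannot pin down the current position.
S = word[100:110]
hits = [i for i in range(len(word) - len(S) + 1) if word[i:i + len(S)] == S]
print(len(hits), hits[:5])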
$S$ will also occur (as well as in many other places) between bits $15m$ and $16m$ and between bits $63m$ and $64m$ and so on. Thus $S$ actually occurs infinitely many times eventually in the Thue-Morse sequence, a property formally expressed here by noting that the Thue-Morse sequence is a uniformly recurrent word, i.e. for any past sequence $S$ they can remember, there is some length $l_S$ such that $S$ occurs in every block of length $l_S$ in the entire sequence (even though the Thue-Morse sequence is not periodic). The following image (where black dots are $1$s and white dots are $0$s) illustrates the recurrent nature of the sequence: Thus if they know the last sequence of moves $S$, they would at the very least need to know around how many moves they have made already in order to be able to find where they are up to in the sequence, since the same sequence $S$ occurs infinitely many times throughout the game.<|endoftext|> TITLE: Basics of infinite-dimensional Lie algebras QUESTION [6 upvotes]: I am a physicist with some background in the representation theory of finite-dimensional Lie algebras. Having now hard times with the very basics of the infinite-dimensional ones. In most sources the infinite-dimensional Lie algebra is defined either by its generalised Cartan matrix (finite-sized) or via commutation relations between its simple roots (again, finite number). See e.g. wiki or any of the papers here. We generate the rest of the algebra's elements by taking commutators of simple roots. For the finite-dimensional Lie algebras this process terminates at some point, and new commutators are no longer producing new (linearly independent of the ones obtained earlier) elements. In the case of the infinite-dimensional algebras, we should be able to go infinitely far in this process. Question 1 (see wiki for notations) If for certain natural number $(1 - c_{ij})$ we have $$\operatorname{ad}^{1-c_{ij}} (e_i)\,e_j = 0 \quad,$$ then how can we keep generating new elements infinitely? Looks like at some point this process should terminate, just as in the finite-dimensional case. Question 1, rephrased Why does relaxing the condition of positive definiteness of the Cartan matrix leads to such dramatic changes in the structure of the Lie algebra? What exactly change at the level of the commutation relations between $\{e_i, f_i, h_i\}$? (if it does change...) Question 2 Some authors (page 13 here) write the commutation relations for the Kac-Moody and Virasoro algebras in the following way: $$[T_m^a, T_n^b] = i f^{ab}{}_c T^c_{m+n} \quad,$$ $$[L_m, L_n] = (m-n) L_{m+n} \quad.$$ From these it's really obvious that one has the infinite number of linearly independent elements in the algebras. How can these form be translated to the language of the finite number of simple roots $\{e_i, f_i, h_i\}$? Thanks. REPLY [3 votes]: 1) Because we can build words of arbitrary length (= roots of arbitrary height) by mixing generators. For example, for the simplest affine case $A_1^{(1)}$ (= affine $SU(2)$ algebra), we have $ad^{3}(e_1)e_2 = 0$, but $ad(e_2)ad^2(e_1)e_2 \neq 0$ and in fact $[ad^2(e_2)ad^2(e_1)]^n e_2 \neq 0$. 2) The first set of commutation relations apply to the affine case (also known as affine Lie algebras) only, and they are missing the "level" term: \begin{equation} [T_m^a, T_n^b] = if^{ab}_c T^c_{m+n} + \delta_{m,-n} mk \end{equation} In that case, you start with a finite-dimensional Lie algebra, written in terms of Chevalley relations as before (with $e_1...e_n$, etc.). 
You then distinguish a specific positive root $\delta$ by $\delta \cdot \alpha_i \geq 0 \forall i$. This root is unique. You append to this algebra a generator $e_0$ in such a way that the generalized Cartan matrix now has determinant 0 and rank n. The original simple roots $e_i, i=1...n$ map onto $T_0^\alpha$ for $\alpha$ chosen to be in the "positive" root space of the finite-dimensional algebra. The new simple root corresponds to $T^{-\delta}_1$. Likewise the generators $f_i$ map to $T_0^{-\alpha}$ and $T^{\delta}_{-1}$. The central elements map to a subset of the $T_0^a$ in the center of the finite Lie algebra and to the "level" $k$. The way the Virasoro algebra enters into this is as quadratic terms in the universal enveloping algebra.<|endoftext|> TITLE: Nonlinear functions from $\mathbb{R}^n$ to $\mathbb{R}^n$ that preserve or grow the angle between any two vectors? QUESTION [5 upvotes]: Do there exist differentiable almost-everywhere functions on $\mathbb{R}^n \rightarrow \mathbb{R}^n$ such that $\frac{|\langle x, y \rangle|}{|x||y|} \geq \frac{|\langle f(x), f(y) \rangle|}{|f(x)||f(y)|}$? How does one go about constructing one? REPLY [5 votes]: I claim that those maps are precisely those that preserves lines through the origin, followed by an orthogonal movement. For $n=1$, the condition is void (except that we demand $0\mapsto 0$ perhaps?) and hence the claim holds. Your inequality demands that image vectors are "at least as orthogonal" as the input vectors. In particular, such a map preserves orthogonality. Thus for any $v\ne0$ with $f(v)\ne 0$, it induces a map with the same properties from $v^\perp$ to $f(v)^\perp$, i.e., $\Bbb R^{n-1}\to\Bbb R$. If $e_1,\ldots, e_n$ is the standard basis of $\Bbb R^n$, then $f(e_1),\ldots, f(e_n)$ is an orthogonal base of $\Bbb R^n$ and by performing an orthogonal movement, we may assume $f(e_i)=c_ie_i$ with $c_i>0$. We may assume that $f|_{e_n^\perp}$ is of the claimed form. For $v\in\Bbb R^n$ write $c=ae_n+w$ with $w\in e_n^\perp$. Assume $c\ne0$. Then $v^\perp$ intersects $e_n^\perp$ in an $\Bbb R^{n-2}$ left invariant under$f$, hence $f(v)$ is confined to the perpendicular space of that, which is a $2$-plane. Thus it remains to show the claim for $n=2$. Indeed, $v=ae_1+be_2$ with non-zero $a,b$ can map at most to $a'e_1+b'e_2$ with $a':b'=\pm a:b$. IN case of negative sign, add a reflection at one of the axes. Then for all other vectors $w$ in the plane, the angle condition relative to $e_1$, $e_2$, $v$ determine that $f(w)$ is on the same line as $w$.<|endoftext|> TITLE: Tensors and matrices multiplication QUESTION [6 upvotes]: I have to prove an equality between matrices $R=OTDO$ where $R$ is a $M\times M$ matrix $O$ is a $2\times M$ matrix $T$ is a $M\times M\times M$ tensor $D$ is a diagonal $2\times 2$ matrix The entries of the matrices and the tensor are probabilities so the result should somehow be the consequence of Bayes formula. The problem is that I have no idea how to compute that because I don't know how to use tensors. I had an algebra course about tensor products of vector spaces a long time ago but it was very abstract so I don't know how to multiply tensors in practice. I'm surprised because the first matrix of the product has $2$ rows, the last one has $M$ columns and yet the result is a $M\times M$ matrix. Could you explain how to do this? For example, what's the dimension of $OT$? I am familiar with the Kronecker product of matrices, is it useful here? EDIT Stupid me... 
I've spent hours trying to understand this product and... this was a typo. It was $O^TDO$ and the equality was straightforward... I've been confused by the fact that the tensor $T$ did exist and there could be and equality involving it. At least I've learnt a few things about tensors, thank you again for the answers! REPLY [7 votes]: The multiplication of a tensor by a matrix (or by a vector) is called $n$-mode product. Let $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ be an $N$-order tensor and $\mathbf{M} \in \mathbb{R}^{J \times I_n}$ be a matrix. The $n$-mode product is defined as $$ (\mathcal{T} \times_{n} \mathbf{M})_{i_{1}\cdots i_{n-1}ji_{n+1}\cdots i_N} = \sum \limits_{i_n = 1}^{I_n} \mathcal{T}_{i_{1}i_{2}\cdots i_{n}\cdots i_{N}}\mathbf{M}_{ji_{n}}.$$ Note this is not a standard product like the product of matrices. However, you could perform a matricization of the tensor along its $n$-mode (dimension $n$) and thus effectuate a standard multiplication. The $n$-mode matricization of $\mathcal{T}$, say $\mathbf{T}_{(n)}$, is an $I_{n} \times I_{1}\cdots I_{n-1}I_{n+1}\cdots I_{N}$ matrix representation of $\mathcal{T}$. In other words, it is just a matrix form to organize all the entries of $\mathcal{T}.$ Hence, the multiplications below are equivalent $$\mathcal{Y} = \mathcal{T} \times_{n} \mathbf{M} \iff \mathbf{Y}_{(n)} = \mathbf{M} \mathbf{T}_{(n)}, $$ where $\mathbf{Y}_{(n)}$ is the $n$-mode matricization of the tensor $\mathcal{Y} \in \mathbb{R}^{I_1 \times \cdots \times I_{n-1} \times J \times I_{n+1} \times \cdots \times I_N}$. For more details, see Tensor Decompositions and Applications.<|endoftext|> TITLE: Architecture of Cantor's proof QUESTION [5 upvotes]: Cantor's diagonal argument consists of two parts: bijection and the extraction of the new number. If he shows that a given architecture of bijection doesn't work, why does it imply that any other architecture of bijection should not work either? Just an addendum to make my point clearer: Another possible architecture that comes to my mind is to write real numbers in the table and correspond them to the nth power of 2. Then we start to construct new numbers that weren't listed in our table and correspond them with powers of 3. After that we construct new numbers from the table of power of 3 (even if without checking we had them in the first table) we correspond them to the nth power of 5, and so on and so forth. It is easy to notice that this is a poor bijection architecture as you can put them back into one list and just repeat the construction of the new number. Why can it be mapped back to a simple list for any architecture? On the other hand we can easily think of correspondence of natural numbers to itself, in a way that we will end up having extra unlisted numbers.E.g.(1->2,2->4, etc...). Obviously it doesn't imply that there are more natural numbers than 'natural numbers'. REPLY [7 votes]: Long comment Can be useful to read Cantor's original proof of the theorem : There are infinite sets that cannot be put into one-to-one correspondence with the set of positive integers into : Georg Cantor (1892), "Ueber eine elementare Frage der Mannigfaltigkeitslehre", Jahresbericht der Deutschen Mathematiker-Vereinigung 1890–1891, 1: 75–78 : Let consider a set $M$ of elements of the form $E = (x_1, x_2, \ldots, x_{\nu}, \ldots)$ where each "coordinate" $x_i$ is either $m$ or $w$. 
If $E_1, E_2, \ldots, E_{\mu}, \ldots$ is any infinite list [unendliche Rehie] of elements of the set $M$, then there is always and element $E_0$ of $M$ that does not match any $E_{\nu}$ [keine $E_{\nu}$ übereinstimmt]. The "diagonal argument" follows. The key-points of the proof are : its generality : introducing sequences of abstract symbols, Cantor shows that the uncountability is not depending on some specific property of real numbers the proof is "constructive" : for any given list, it gives us a "procedure" to manufacture a new element not in the list. See also : Robert Gray, "Georg Cantor and Transcendental Numbers", American Mathematical Monthly, 101: 819–832.<|endoftext|> TITLE: Is "The empty set is a subset of any set" a convention? QUESTION [70 upvotes]: Recently I learned that for any set A, we have $\varnothing\subset A$. I found some explanation of why it holds. $\varnothing\subset A$ means "for every object $x$, if $x$ belongs to the empty set, then $x$ also belongs to the set A". This is a vacuous truth, because the antecedent ($x$ belongs to the empty set) could never be true, so the conclusion always holds ($x$ also belongs to the set A). So $\varnothing\subset A$ holds. What confused me was that, the following expression was also a vacuous truth. For every object $x$, if $x$ belongs to the empty set, then $x$ doesn't belong to the set A. According to the definition of the vacuous truth, the conclusion ($x$ doesn't belong to the set A) holds, so $\varnothing\not\subset A$ would be true, too. Which one is correct? Or is it just a convention to let $\varnothing\subset A$? REPLY [4 votes]: Every theory has axioms, which are some propositions held to be true without being proven from anything else, and are not provable from each other. Subsequent truths of the theory derived from the axioms are theorems. The properly termed question is whether the empty set being a subset of every other set is axiom of set theory, or a theorem. It depends on how "subset" is defined. If $A\subset B$ means that every element of $A$ is in $B$, it is not necessarily true that $\emptyset$ is a subset of anything, since it has no elements. In this case, $\emptyset \subset A$ can be added as an axiom. It doesn't conflict with anything, and simplifies all reasoning about subsets. Alternatively, if $A\subset B$ is defined as "$A$ has no elements that are not also in $B$", then we do not require the extra axiom for the $\emptyset$ case. If $A$ has no elements at all, it has no elements that are not in $B$. Suppose that we use the first, positively termed definition of subset, and then adopt as an axiom not $\forall A:\emptyset \subset A$, but rather its negation: $\exists A:\emptyset \not\subset A$, or the outright proposition $ \forall A:\emptyset \not\subset A$. This is just going to cause problems. We can "do" set theory as before, but all the theorems will be uglified by having to avoid the special cases involving the empty set. In any derivation step in which we rely on a subset relation being true, or assert one, we will have to add the verbiage of an additional statement which asserts that the variable in question doesn't denote the empty set. This proposition then has to be carried in all the remaining derivations, unless something else makes it superfluous (some unrelated assurance from elsewhere that the set in question isn't empty). 
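As a side illustration (a minimal sketch, not part of the set-theoretic argument): the "no elements that are not also in $B$" reading mentioned above is exactly how a subset test is usually written in code, and under it $\varnothing\subseteq B$ comes out true automatically, with no extra case:

def is_subset(A, B):
    # True exactly when A has no element that fails to lie in B
    return all(x in B for x in A)

print(is_subset(set(), {1, 2, 3}))   # True: there is nothing in A to check (vacuous truth)
print(is_subset(set(), set()))       # True for the same reason
print(is_subset({1}, {2, 3}))        # False: 1 is a counterexample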
Working with this clumsy subset definition that doesn't work with the empty set very well, someone is eventually going to have an epiphany and introduce a new subset-like relation which doesn't have these ugly problems: a new $A\ \mathbf{subset*}\ B$ binary relation which reduces exactly to $A\subset B$ when neither $A$ nor $B$ are $\emptyset$, and which, simply by definition, reduces to a truth whenever $A = \emptyset$, regardless of $B$. That person will then realize that all the existing work is simpler if this $\mathbf{subset*}$ operation is used in place of $\subset$. At the end of the day it boils down to criteria like: is the system consistent (doesn't contradict itself), is it complete (does it capture the truths we want) and also is it convenient: are the rules configured so that we do not trip over unnecessary cases and superfluous logic.<|endoftext|> TITLE: Does randomness exist in computers and in nature? QUESTION [10 upvotes]: In the programming language Python, you can import random and then with random.random() you can get a random number between $0$ and $1$. But is it truly random or are there constraints in how computers are built that makes them not truly random number generators? For instance, could there be a bias around the first value generated or value already in memory? How would one figure out the difference? REPLY [7 votes]: Pseudorandom number generators are based on well-defined mathematical sequences and are perfectly deterministic. Given the same seed, they will always generate the same sequence. This is actually often a desirable property, as it allows you to test at will in the same conditions. Sources of true randomness are present in a computer due to the many independent (asynchronous) tasks running and events occurring all the time. RAM content at well chosen addresses or clock time are indeed unpredictable and random. You can use this data to seed your generator every now and then. Pseudorandom generators are designed to simulate a uniform distribution and they are validated by theory and by statistical tests to ensure that. On the opposite, other sources of randomness may exhibit different kind of bias and non-uniformities. So in a way, they will look less random. There are probably some random sources (like described above) that you might detect by statistical tests or by discovering patterns, but this is probably a difficult exercise.<|endoftext|> TITLE: $E|A_1|<\infty$ and i.o events for $\left\{A_n\right\}$ are iid QUESTION [5 upvotes]: Given $\left\{A_n\right\}$ are i.i.d. Show that $E(|A_1|)< \infty$ $\iff \ P\left\{|A_n| > n \ \ \text{i.o}\right\} = 0$. My attempt: ($\Rightarrow$) Assume $E(|A_1|)< \infty$. Since $\left\{A_n\right\}$ are i.i.d, by the Strong Law of Large Number, $\frac{1}{n}\sum_{i=1}^{n} |A_i|\rightarrow E(|A_1|)$ almost surely. This is equivalent to $\forall\ \epsilon > 0$, $\lim_{n\rightarrow \infty} P(\cup_{k\geq n}|(\frac{1}{k}\sum_{i=1}^{k} |A_i|)-E(|A_1|)|\geq \epsilon)= 0$. This implies for sufficiently large $k$, $\frac{1}{k}\sum_{i=1}^{k} |A_i| - E(|A_1|)\ <\ \epsilon$. Since we don't know whether $E(|A_1|) > 1$ or not, we cannot choose $\epsilon = 1-E(|A_1|)$ to get $|A_k|$ bounded above by $k$. Could someone please help with this part? ($\Leftarrow$) If $P\left\{|A_n| > n \ \ \text{i.o}\right\} = 0$, this implies for sufficiently large $k$, $k$ is fixed, $|A_k|< k$. This implies $E(|A_k|) = E(|A_1|) < E(k) = k < \infty$ (the first equality is due to $\left\{A_n\right\}$ are iid). 
My question: Could someone help complete my "solution" above for the forward direction? If I'm on the wrong track, please help point that out to me. REPLY [3 votes]: The forward part is an easy consequence of Borel-Cantelli'e lemma. First of all see that: $$ \mathbb E(|A_1|)\geq \sum_{n=1}^\infty \mathbb P(|A_1|>n). $$ Using i.i.d. assumption: $$ \sum_{n=1}^\infty \mathbb P(|A_1|>n)= \sum_{n=1}^\infty \mathbb P(|A_n|>n). $$ So $\sum_{n=1}^\infty \mathbb P(|A_n|>n)<\infty$ and therefore from Borel-Cantelli's lemma: $$ \mathbb P(|A_n|>n , \rm{ i.o. })=0. $$<|endoftext|> TITLE: Unit square inside triangle. QUESTION [16 upvotes]: Some time ago I saw this beautiful problem and think it is worth to post it: Let $S$ be the area of triangle that covers a square of side $1$. Prove that $S \ge 2$. REPLY [3 votes]: If some edge, say $BC$ of a triangle $ABC$, does not touch the square, by moving $BC$ towards $A$, we may obtain a smaller similar triangle such that the side parallel to the original $BC$ touches the square. Therefore, we may assume that all three sides of the triangle touch the square. Consequently, the triangle can have at most one vertex (say, possibly $A$) that lies on the open interiors of the NW, NE, SW or SE regions to the four corners (because, if there are two such vertices, the edge join them will not touch the square). So we get something like the figure below: A : : : : NW : : NE ~~~~.-------.~~~~ | | B | | | | ~~~~.-------.~~~~ SW : : SE : : : C : Drop a perpendicular of $B$ to its adjacent side of the square. Let $P$ be where the extension of this line segment intersects the other side of the square (see the figure below) and let $D$ be where this line extension meets $AC$. Drop a perpendicular of $CQ$ to $PB$. Also, extend the side of the square opposite to $B$. Let this extended line intersects $AB$ at $E$: A \ E \ | \ .-------. D---P-Q-----|------B \ | | | \ | | | \.-------. \ | \| C Now the area of $ABC$ is greater than the total area of the triangles $CQD, CQB$ and $BPE$. So, it suffices to prove the following proposition: if $R$ is an inscribed rectangle of a right-angled triangle $T$ such the two geometric entities share a common vertex (which must be the right angle of $T$), then the area of $T$ is at least double the area of $R$. But the truthfulness or falseness of the above proposition is invariant if we scale the whole figure along the direction of any side of $R$, because both the areas of $R$ and $T$ scale by the same proportion. Therefore we may further assume that $R$ is a square. Now, borrowing the words of Hagen von Eitzen in another answer here, it is obvious that if we flip the outside tips of $T$ inside, the two flipped tips of $T$ will always "envelope" the interior of the square seamlessly, with the longer tip poking out. Hence the proposition is true.<|endoftext|> TITLE: The 'Square root' Function QUESTION [14 upvotes]: $G := \{f : f:[0,1] \rightarrow [0,1]$ such that it is bijective function and strictly increasing } Now the question is For any $ h \in G,$does there exist $g \in G$ such that $h=g \circ g $? Is such a $g$, if it exist , unique? My observation : $G$ is a group under function composition.(Is it helpful?) Every function in $G$ is continuous. Conjecture: if $h \in G$ has $n \in \mathbb{N}$ fixed points in (0,1) then it has $n+1$ 'square root' functions. Please help me to solve the question! REPLY [10 votes]: The square root always exists and, except for the special case where $f$ is the identity, there are uncountably many square roots. 
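Before the general construction, one concrete instance may help (an illustrative sketch with a specific choice that is not forced by the proof): for $f(x)=x^2$, which lies in $G$ and fixes only the endpoints, $g(x)=x^{\sqrt 2}$ is one functional square root, and the construction below produces uncountably many others.

import math

f = lambda x: x ** 2
g = lambda x: x ** math.sqrt(2)   # one possible square root of f under composition

for x in (0.1, 0.3, 0.7, 0.95):
    print(x, g(g(x)), f(x))       # the last two columns agree up to rounding error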
Case I: $f$ has no fixed points in $(0,1)$. By the intermediate value theorem, we either have $f(x) > x$ for all $x\in(0,1)$ or, $f(x) < x$ for all $x\in(0,1)$. I will consider (1) (the situation with $f(x) < x$ can be handled similarly). Choose any $x_0\in(0,1)$ and set $x_k=f^k(x_0)$, which is a strictly increasing sequence over $k\in\mathbb{Z}$. The limits of $x_k$ as $k\to\pm\infty$ are fixed points of $f$, so are equal to $1$ and $0$ respectively. Now, choose any increasing homeomorphism $\theta\colon[0,1]\to[x_0,x_1]$. Extend $\theta$ to all of $\mathbb{R}$ by setting $$ \theta(k+x)=f^k(\theta(x)) $$ for $k\in\mathbb{Z}$ and $x\in[0,1)$. This defines a homeomorphism from $\mathbb{R}$ to $(0,1)$. Furthermore, $$ f(\theta(x))=\theta(x+1). $$ We can define a square root by $$ g(x) = \theta(\theta^{-1}(x)+1/2) $$ for $x\in(0,1)$, and $g(0)=0$, $g(1)=1$. Note that $g(x_0)=\theta(1/2)$, which can be chosen to be any value in $(x_0,x_1)$, so there are infinitely many square roots. Case II: Now, for the general case. Let $S\subseteq[0,1]$ be the set of fixed points of $f$. We define the square root to also be the identity on $S$. As $S$ is closed, its complement is a countable union of disjoint open intervals $(a,b)$ and, restricted to each such interval, $f$ gives a homeomorphism of $[a,b]$ with no fixed points in $(a,b)$. So, applying the construction above, there are uncountably many square roots on each such interval. So, $f$ has a square root and, except in the case where $S$ is all of $[0,1]$, there are uncountably many square roots.<|endoftext|> TITLE: $|G|=p(p+1)$ for $p$ prime, then $G$ has a normal subgroup of order $p$ or $p+1$ QUESTION [8 upvotes]: I am trying to solve the above question, as an application of Sylow's theorem. Let $P$ be the p-Sylow subgroup. Then $n_p | (p+1)$ and $n_p \equiv 1 \pmod{p}$. If $n_p =1$, $P$ is normal and we are done, else $n_p = p+1$. Now, \begin{equation} 1+n_p(p-1) = 1 + (p+1)(p-1) = p^2, \end{equation} is the total number of elements in the p-Sylow subgroups. So if $n_p = p+1$, that means the number of elements not in any p-Sylow subgroup is $|G|-p^2=p$. If these $p$ elements and the identity form a subgroup then its a subgroup of the smallest prime index, so we are done. But how do I show that all the elements not in the p-Sylow subgroups form a subgroup, i.e. the subgroup generated by these elements has trivial intersection with the $p-$Sylow subgroups? REPLY [6 votes]: I will expand my comment into an answer. Let $S$ be the set of elements that do not lie in any Sylow $p$-subgroup of $G$. You have shown by a counting argument that $|S|=p$. Let $q$ be any prime that divides $p+1$.Then $S$ must contain some element $g$ of order $q$. Since $n_p=p+1$, we have $N_G(P) = P$, so $g \not\in N_G(P)$. Let $x$ be a generator of $P$. Then the powers $x,x^2,\ldots, x^{p-1}$ of $x$ all generate $P$, so none of them can centralize $g$. Hence the $p$ elements $\{ g, g^x,g^{x^2}, \ldots, g^{x^{p-1}} \}$ (where $g^h$ means $hgh^{-1}$) are all distinct. Since they all have order $q$, they all lie in $S$, and so $S = \{ g, g^x,g^{x^2}, \ldots, g^{x^{p-1}} \}$. 
So every element of $S$ has order $q$, and hence $q$ must be the only prime dividing $p+1$, so $p+1$ is a power of $q$, and $S \cup \{ 1 \}$ must be the unique Sylow $q$-subgroup of $G$.<|endoftext|> TITLE: Asymptotic evaluation of $\int_0^{\pi/4}\cos(x t^2)\tan^2(t)dt$ QUESTION [15 upvotes]: In Bender-Orszag's Advanced Mathematical Methods for Scientists and Engineers on page 313 we encounter the following integral $$ I(x)=\int_0^{\pi/4}\cos(x t^2)\tan^2(t)dt $$ and it is asked to derive the first two terms of the asymptotic expansion of $I(x)$ as $x\rightarrow +\infty$ . Setting $x=i y$ and analytically continuing at the end of the calcultion I was able to show that $$ I(x)\sim 2\frac{\sin(\frac{x\pi^2}{16})}{\pi x}+\mathcal{O}(x^{-2}) $$ using standard Laplace method. Despite the fact that this seems to fit comparsion with numerical data, I am not satisfied by this approach for three reasons: 1) It is quiet cumbersome (at least for me) to derive higher order terms with my method. Is there a fast way to do so? 2) The question can be found in relation with "Method of steepest descent" so I suppose one should attack it by this method. How can it be applied? I somehow fail here 3) I am not totally satisfied by my analytic continuation argument. How can this made rigourous? I am grateful to anyone who can shed light on any of my questions! REPLY [4 votes]: I know that this may not be the direction that OP want, but we can revive @Olivier Oloa's idea to give an elementary solution. Taking integration by parts, we get \begin{align*} I_0(x) :\!\!&= \int_{0}^{\frac{\pi}{4}} \cos(xt^2)\tan^2 t \, dt \\ &= \frac{2}{\pi}\frac{\sin\big(\frac{\pi^2}{16}x\big)}{x} + \frac{1}{2x} \underbrace{\int_{0}^{\frac{\pi}{4}} \sin(xt^2) \bigg( \frac{\tan^2 t}{t^2} - \frac{2\tan t\sec^2 t}{t} \bigg) \, dt}_{=:I_1(x)}. \end{align*} Now we want to apply a similar technique to $I_1(x)$ to extract higher order terms, but this technique need a modification in the following way: Write \begin{align*} I_1(x) &= \int_{0}^{\frac{\pi}{4}} \sin(xt^2) \bigg( 1 + \frac{\tan^2 t}{t^2} - \frac{2\tan t\sec^2 t}{t} \bigg) \, dt - \int_{0}^{\frac{\pi}{4}} \sin(xt^2) \, dt. \end{align*} This modification takes care of the singularity that would otherwise have popped up under integration by parts. So the same technique applies to the first integral above and shows that it is of order $\mathcal{O}(x^{-1})$. So we get $$ I_1(x) = - \int_{0}^{\frac{\pi}{4}} \sin(xt^2) \, dt + \mathcal{O}(x^{-1}). $$ On the other hand, using the substitution $u = xt^2$ and assuming $x > 0$, we have \begin{align*} \int_{0}^{\frac{\pi}{4}} \sin(xt^2) \, dt &= \frac{1}{2\sqrt{x}} \int_{0}^{\frac{\pi^2}{16}x} \frac{\sin u}{\sqrt{u}} \, du \\ &= \frac{1}{2\sqrt{x}} \int_{0}^{\infty} \frac{\sin u}{\sqrt{u}} \, du - \frac{1}{2\sqrt{x}} \int_{\frac{\pi^2}{16}x}^{\infty} \frac{\sin u}{\sqrt{u}} \, du \\ &= \frac{1}{2\sqrt{x}} \sqrt{\frac{\pi}{2}} - \frac{1}{2\sqrt{x}} \bigg( \underbrace{\frac{4}{\pi}\frac{\cos \big(\frac{\pi^2}{16}x\big)}{\sqrt{x}} + \frac{1}{2} \int_{\frac{\pi^2}{16}x}^{\infty} \frac{\cos u}{u^{3/2}} \, du}_{=\mathcal{O}(x^{-1/2})} \bigg). \end{align*} Combining altogether, we get $$ I_0(x) = \frac{2}{\pi}\frac{\sin\big(\frac{\pi^2}{16}x\big)}{x} - \frac{1}{4}\sqrt{\frac{\pi}{2}}x^{-3/2}+ \mathcal{O}(x^{-2}). $$<|endoftext|> TITLE: Explicit formula for higher order derivatives in higher dimensions QUESTION [5 upvotes]: Let $f:\mathbb{R}^n\to\mathbb{R}^m$ be a function at least $C^k$ and fix some point $x_0\in\mathbb{R}^n$. 
I want an explicit formula for its derivatives of higher order. I already read this threads: What are higher derivatives? Using higher order derivatives One thing is clear, $D^kf(x_0)$ is a $k$-linear map from $\underbrace{\mathbb{R}^n\times\ldots\times\mathbb{R}^n}_{k \text{ times}}$ to $\mathbb{R}^m$. Yet, the answers given do not address my question, they only give the abstract description but I want a concrete description. For $k=1$ we have that $Df(x_0)v = [Df(x_0)]\cdot v$, where $ [Df(x_0)] = \left(\frac{\partial f_i(x_0)}{\partial x_j}\right)$ stands for the Jacobian matrix of $Df(x_0)$. So we have an explicit formula. For $k\geq 2$, it's not clear how to get an explicit formula for $D^kf(x_0)(v_1,\ldots,v_k)$ from the definition and the case $k=1$. There must be some way to do this. I hope you can help me. Thank you very much. REPLY [5 votes]: Write $${\bf v}_1 =\sum_{i_1=1}^n v_1^{i_1}{\bf e}_{i_1}, \cdots , {\bf v}_k =\sum_{i_k=1}^n v_k^{i_k}{\bf e}_{i_k},$$where $\{{\bf e}_1,\ldots, {\bf e}_n\}$ is the standard basis. Also, let $\{{\bf e}^\ast_1,\ldots, {\bf e}^\ast_n\}$ be the dual basis. Then: $$\begin{align} D^kf({\bf x}_0)({\bf v}_1,\cdots,{\bf v}_k) &= D^kf({\bf x}_0)\left(\sum_{i_1=1}^n v_1^{i_1}{\bf e}_{i_1},\cdots, \sum_{i_k=1}^n v_k^{i_k}{\bf e}_{i_k}\right) \\ &= \sum_{i_1,\ldots,i_k=1}^n v_1^{i_1}\ldots v_k^{i_k} D^kf({\bf x}_0)({\bf e}_{i_1},\cdots,{\bf e}_{i_k}) \\ &=\sum_{i_1,\ldots,i_k=1}^n v_1^{i_1}\ldots v_k^{i_k} \frac{\partial^k f}{\partial x_{i_1}\cdots \partial x_{i_k}}({\bf x}_0) \\ &= \sum_{i_1,\ldots,i_k=1}^n \frac{\partial^k f}{\partial x_{i_1}\cdots \partial x_{i_k}}({\bf x}_0) {\bf e}^\ast_{i_1}\otimes \cdots \otimes {\bf e}^\ast_{i_k}({\bf v}_1,\cdots,{\bf v}_k) \end{align}$$Hence: $$D^kf({\bf x}_0) = \sum_{i_1,\cdots,i_k=1}^n \frac{\partial^k f}{\partial x_{i_1}\cdots \partial x_{i_k}}({\bf x}_0) {\bf e}^\ast_{i_1}\otimes \cdots \otimes {\bf e}^\ast_{i_k}. $$ We could have used any basis instead of the standard one - the only difference is that we'd have iterated directional derivatives as the coefficients of that tensor in the new basis. I guess that's as concrete we can get.<|endoftext|> TITLE: How does topology on a space relate to differentiation? QUESTION [6 upvotes]: I read in Chapter 1 of Lee's Introduction to Smooth Manifolds that there's no way to define a purely topological property that would serve as a criterion for smoothness. So, I tried to think about the meaning of this sentence and I couldn't really link topology to differentiation! I mean derivatives of functions are defined on open domains, and openness is a topological abstract concept. But how are the two related? I mean assume that I change the topology of the real line from the Euclidean metric to some other topology. For example, discrete topology or the topology generated by half-open intervals $[a,b)$. How will the notion of differentiation change then? I assume we have to study functions defined on $\mathbb{R}$ to answer this. So, some examples of functions that are differentiable with respect to the Euclidean topology but fail to be differentiable in some other topology or vice versa are appreciated. REPLY [5 votes]: As far as I can guess what Lee might have meant, on the basic level, the reason is that smoothness properties are not preserved by homeomorphisms, and hence smoothness of a function is not a topological property. There is no need to go into exotic topologies (where it doesn't a priori make sense to talk about smoothness${}^\dagger$). 
For example, the identity function on the real line is certainly smooth, but if you compose it with a non-smooth homeomorphism of the line (say, a piecewise linear strictly increasing function), you will get a non-smooth function. On a somewhat deeper level, you can have two differential manifolds which are homeomorphic, but have incompatible differential structures, such as regular and exotic spheres. ($\dagger$ There are more robust notions of smoothness and manifolds than those of real manifolds. For example, given any local field (like the $p$-adics), you can pretty much rewrite the standard definitions of smoothness and a manifold and they work just fine. But I am not aware of any such notion which would work for the lower limit topology.)<|endoftext|> TITLE: What will be the value of the following determinant without expanding it? QUESTION [27 upvotes]: $$\begin{vmatrix}a^2 & (a+1)^2 & (a+2)^2 & (a+3)^2 \\ b^2 & (b+1)^2 & (b+2)^2 & (b+3)^2 \\ c^2 & (c+1)^2 & (c+2)^2 & (c+3)^2 \\ d^2 & (d+1)^2 & (d+2)^2 & (d+3)^2\end{vmatrix} $$ I tried many column operations, mainly subtractions without any success. REPLY [6 votes]: If you expand the squares, you'll see that every column is a linear combination of $$ \begin{bmatrix}a^2 \\ b^2 \\ c^2 \\ d^2\end{bmatrix}, \begin{bmatrix}a \\ b \\ c \\ d\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\ 1 \\ 1 \\ 1\end{bmatrix}. $$ There are four columns that are linear combinations of three vectors. So they can't be linearly independent, thus the determinant is $0$. (Or: The column space of the matrix is spanned by $3$ vectors. So the rank of the matrix is at most $3$, which means the matrix is singular. So the determinant is $0$.)<|endoftext|> TITLE: $\text{null}\,T^k\subsetneq\text{null}\,T^{k+1}$ and $\text{range}\,T^k\supsetneq\text{range}\,T^{k+1}$ for all $k\in\mathbb{N}$ QUESTION [6 upvotes]: Let $V$ be a vector space over $\mathbb{F}=\mathbb R$ or $\mathbb C$ and $T$ an operator on $V$. It is well known that $$\forall k\in\mathbb{N},\,\text{null}\,T^k\subseteq\text{null}\,T^{k+1}\,\land\,\text{range}\,T^{k+1}\supseteq\text{range}\,T^k$$ Exercise $21$ page $251$ in Sheldon Axler's Linear Algebra Done Right is: Find a vector space $W$ and $T\in\mathcal{L}(W)$ sich that $\text{null}\,T^k\subsetneq\text{null}\,T^{k+1}$ and $\text{range}\,T^k\supsetneq\text{range}\,T^{k+1}$ for every positive integer $k$. It is well known that if $\dim V=n$ is finite then $$\text{null}\,T^n=\text{null}\,T^{n+1}=\text{null}\,T^{n+2}=\cdots$$ and that $$\text{range}\,T^n=\text{range}\,T^{n+1}=\text{range}\,T^{n+2}=\cdots$$ Therefore we must choose an infinite dimensional vector space. I chose $W=\mathbb{F}^\infty$ the set of all sequences $(a_1,a_2,\cdots)$ over $\mathbb F$. Consider $\mathcal{B_c}=\{e_1,e_2,\cdots\}$ its canonical basis. When we define $T$ such as $T(a_1,a_2,\cdots)=(a_2,a_3,\cdots)$ we have $\forall k\in\mathbb{N}\backslash\{0\},\,\text{null}\,T^k=\text{span}\{e_1,\cdots,e_k\}\,\land\,\text{range}\,T^k=\mathbb F^\infty$, which satisfies only one condition. On the other hand, defining $T$ by $T(a_1,a_2,\cdots)=(0,a_1,a_2,\cdots)$ gives $\text{null}\,T^k=\{0\}\,\land\,\text{range}\,T^k=\text{span}\{e_{k+1},e_{k+2},\cdots\}$, $T$ satisfies the other. Projections are no good as $T=T^2$ and I tried some few other examples but I couldn't find a good one. The idea I kept in mind while looking for an example is that as we move on from $T^k$ to $T^{k+1}$, one vector is "transported" from $\text{range}\,T^k$ to $\text{null}\,T^k$. 
Unfortunately, I failed to find such an operator. Could you please provide me with some examples? REPLY [5 votes]: We can zip your examples for strictly increasing null spaces and for strictly decreasing ranges together to obtain an example of an endomorphism having both. If $A \colon X \to X$ has $\operatorname{null} A^k \subsetneq \operatorname{null} A^{k+1}$ for all $k \in \mathbb{N}$, and $B \colon Y \to Y$ has $\operatorname{range} B^k \supsetneq \operatorname{range} B^{k+1}$ for all $k \in \mathbb{N}$, then the operator $C \colon X \times Y \to X \times Y$ given by $C(x,y) = (Ax,By)$ has strictly increasing null spaces - $x \in \operatorname{null} A^{k+1} \setminus \operatorname{null} A^k \iff (x,0) \in \operatorname{null} C^{k+1}\setminus \operatorname{null} C^k$ - and strictly decreasing ranges - $y \in \operatorname{range} B^k \setminus \operatorname{range} B^{k+1} \iff (0,y) \in \operatorname{range} C^k \setminus \operatorname{range} C^{k+1}$. In your examples you use $X = Y = W$, and then we can weave the two examples together using an isomorphism $W\times W \to W$, for example the operator $$T \colon (a_1, a_2, a_3,\dotsc) \mapsto (a_3, 0, a_5, a_2, a_7, a_4, \dotsc )$$ shifting odd-indexed entries two positions left and even indexed entries two positions right has the desired property. We can also use an "infinite matrix" with nilpotent Jordan blocks of unbounded size on the diagonal to achieve the goal, for example $$M = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 &\cdots\\ 0 & 0 & 0 & 0 & 0 & 0 &\cdots\\ 0 & 0 & 0 & 1 & 0 & 0 &\cdots\\ 0 & 0 & 0 & 0 & 1 & 0 &\cdots\\ 0 & 0 & 0 & 0 & 0 & 0 &\cdots\\ \vdots & & & & & &\ddots \end{pmatrix}$$ having a Jordan block of size $2$ followed by a block of size $3$, then a block of size $4$ and so on. Since each row contains at most one nonzero entry, the product $Mx$ is well-defined for $x \in W$, and $T\colon x \mapsto Mx$ is a well-defined linear operator on $W$ with the desired property. This differs from the previous example, since there we had $\dim (\operatorname{null} T^{k+1} / \operatorname{null} T^k) = 1$ and $\dim (\operatorname{range} T^k/\operatorname{range} T^{k+1}) = 1$ for all $k$ while in the later example the dimensions are all infinite.<|endoftext|> TITLE: Least upper bound property implies Cauchy completeness QUESTION [6 upvotes]: Given an ordered field $F$, the following two statements are equivalent: $F$ has the Least-Upper-Bound property. $F$ is Archimedean and $F$ is "sequentially complete"/"Cauchy complete" (all Cauchy sequences in $F$ converge). For some reason I have been unable to find a proof of this result on the web. REPLY [4 votes]: First we will establish some intermediate results: Theorem 1: Every Cauchy sequence is bounded. Proof: Take $\epsilon=1$, then there is $N\in\mathbb{N}$ such that $$m, n \geq N\quad\implies |x_m-x_n|<1$$ So that $$|x_m|<|x_N|+1\quad\quad\forall\;m\geq N$$ therefore $$|x_n|\leq\max\lbrace |x_1|,\ldots,|x_N|,|x_N|+1\rbrace\quad\quad\forall\;n\in \mathbb{N}$$ Theorem 2: Every sequence has a monotone subsequence. Proof: Lemma in Bolzano-Weierstrass Theorem 3: Let $F$ be an ordered field with the LUB property, then every bounded and monotone sequence in $F$ is convergent. Proof: WLOG lets assume the sequence is increasing. As it is bounded it has a supremum: $s=\sup_{n\in\mathbb{N}}\lbrace x_n\rbrace$ We claim that $s$ is the limit of the sequence. 
Let $\epsilon>0$; then there is $N\in\mathbb{N}$ such that $$s-\epsilon<x_N\leq s$$ (otherwise $s-\epsilon$ would be an upper bound smaller than $s$). Since the sequence is increasing, $s-\epsilon<x_n\leq s$ for all $n\geq N$, so $x_n\to s$. Theorem: If $F$ has the LUB property, then $F$ is Archimedean, that is, for every $x\in F$ there is $n\in\mathbb{N}$ such that $n>x$. Proof: If this is not the case, integers would be bounded by some $x$, hence by the LUB property they would have a supremum $u$; but then $u-1$ is not an upper bound, so $n>u-1$ for some integer $n$, i.e. $n+1>u$, a contradiction. Theorems $(1)$, $(2)$ and $(3)$ above prove Cauchy completeness. For the reverse implication $(2)\implies (1)$: Theorem 4: Suppose $F$ has the Archimedean property; then every monotone bounded sequence is Cauchy. Proof: WLOG take the sequence to be increasing. If $\lbrace x_n\rbrace_{n}$ is not Cauchy then there exists $\epsilon>0$ so that for every $N\in\mathbb{N}$ there are $n>m\geq N$ with $$x_n-x_m\geq \epsilon.$$ We are going to extract a subsequence that is not bounded. For $N=1$ choose ${n_1},\ {n_2}$ such that $x_{n_2}-x_{n_1}\geq\epsilon$. Now take $N^{\prime}>n_2$ and choose ${n_3},\ {n_4}$ such that $x_{n_4}-x_{n_3}\geq\epsilon$. Continue in this way to construct a subsequence. Note that there are infinitely many differences of at least $\epsilon$, so by the Archimedean Principle the subsequence diverges. This contradicts the fact that the sequence was bounded. Observe that if $F$ is also Cauchy complete then every monotone bounded sequence is convergent. We have established: Cauchy completeness + Archimedean Property $\implies$ Convergence of every monotone and bounded sequence. The answer in this post proves: Convergence of every monotone and bounded sequence $\implies$ LUB Property.<|endoftext|> TITLE: Does there exist $\alpha \in \mathbb{R}$ and a field $F \subset \mathbb{R}$ such that $F(\alpha)=\mathbb{R}$? QUESTION [6 upvotes]: I was thinking about what would be the opposite of a field extension, and I suppose it might be this: A field $F$ can have the element $\alpha$ deleted if there exists a subfield $E\neq F$ such that $E(\alpha)=F$. You might call $F\setminus(\alpha)$ something like a deletion. The thing about this is that you can then undo any field extension. That is, $F(\alpha) \setminus (\alpha)=F$. Does this concept have a name? So naturally, I am wondering if any elements can be "deleted" in this manner from $\mathbb{R}$. I realized that for $\alpha=\sqrt{2}$, there is no such field $E$ such that $E(\alpha)=\mathbb{R}$. If there was, there would be $a,b \in E$ such that $2^{1/4}=a+b\sqrt{2}$, as $\mathbb{R}$ is an extension of $E$ and the fourth root is in $\mathbb{R}$. Squaring, and with some algebra (being careful to avoid division by zero), one gets the contradiction that $\sqrt{2} \in E$. I haven't thought this through carefully, but I believe for any algebraic number we have a similar failure. That is, there is a polynomial of degree $n$, $p \in E[x]$, with $p(\alpha)=0$. Then in particular $\alpha^{1/k} = p_k(\alpha)$ for $p_k$ of degree $n$. Raising to the $k$th power we get $\alpha = q_k(\alpha)$ for $q_k$ polynomial of degree $n$. This gives a linear system in $\alpha^k$ and my hunch is that it is nonsingular, or that the singular cases can be dealt with. The above is for algebraic numbers. I think the above can be used to show, for instance, that $\pi$ cannot work, as it is likely algebraic over $E$ since $E$ contains many transcendental numbers. This is much more murky to me but makes me think that there is no element you can delete from $\mathbb{R}$. So my question: Does there exist $\alpha \in \mathbb{R} \setminus \mathbb{Q}$ and a field $F \subset \mathbb{R}$ such that $F(\alpha)=\mathbb{R}$? More specifically, I want $F$ with $\alpha \notin F$. REPLY [8 votes]: No such subfield $F$ exists.
First, $\mathbb{R}$ cannot be of finite degree over $F$, else $\mathbb{C}$ would be of finite degree over $F$, and this is only possible if $F=\mathbb{R}$ or $\mathbb{C}$ by the Artin-Schreier theorem. Thus, if such a field existed, then $\mathbb{R}$ would be of infinite degree over $F$, and $a$ would be transcendental over $F$. But then the field $F(a)$ would have many non-trivial automorphisms (for instance, the ones fixing all elements in $F$ and sending $a$ to any degree $1$ polynomial in $a$); since we know that the only field automorphism of $\mathbb{R}$ is the identity, then this is not possible. A simpler argument pointed out by arctic tern is that neither of $a$ and $-a$ have a square root in $\mathbb{R}$, which is impossible for elements of $\mathbb{R}$.<|endoftext|> TITLE: How to use Karush-Kuhn-Tucker (KKT) conditions in inequality constrained optimization QUESTION [5 upvotes]: I am trying to understand how to use the Karush-Kuhn-Tucker conditions, similar as asked but not answered in this thread. Assume the target function is given by $f(x)$, where $x$ a vector. Let $g(x) \ge 0 $ be an inequality constraint under which we wish to maximize $f$. The Lagrange function is given by $L(x,\lambda)=f(x)+\lambda g(x)$. From a textbook I know that I have to maximize this function "with respect to the conditions": $$ g(x) \ge 0 \\ \lambda \ge 0 \\ \lambda g(x) = 0$$ Now I do not fully understand what this means. Does this mean I have to maximize $L$ as if $g(x)=0$ and then I 'check' whether the three conditions are met? An example. Let $f(x)= 1 - x_1^2 - x_2^2$ and $g(x)= 1-x_1-x_2$. We find $$ \nabla_x L = 2 \begin{bmatrix} -1 & 0 \\ 0 & -1 \\ \end{bmatrix} x + \lambda \begin{bmatrix} -1 \\ -1 \\ \end{bmatrix} =0 $$ $$ \nabla_{\lambda} L = \begin{bmatrix} -1 \\ -1 \\ \end{bmatrix} x + 1 = 0 $$ From here we can simplify to find $\lambda=-1$ and $$x=\begin{bmatrix} 1/2 \\ 1/2 \\ \end{bmatrix} $$ We can verify $g(x) = 0$ as well as $\lambda g(x) = 0$ as required. However, $\lambda<-1$. So it seems the optimizations 'failed', but I am not sure what this means. It seems to suggest we found a minimum? Please help me with how to work with these constraints. REPLY [6 votes]: Think about what an inequality constraint means for optimality: either the optimum is away from the boundary of the optimization domain, and so the constraint plays no role; or the optimum is on the constraint boundary. In this case $f$ has a local minimum if the "downhill direction" is directly opposed to the constraint: $$\nabla f = \lambda \nabla g$$ for some positive $\lambda$. (Consider that if $\nabla f$ is merely parallel to $\nabla g$, but has opposite direction, then you can lower $f$ by moving slightly inside the domain -- separating from the constraint boundary. And of course if $\nabla f$ and $\nabla g$ are not parallel, you can decrease $f$ either by sliding along the constraint boundary, or moving inside the domain.) This either-or condition is usually called complementarity and the difficult part of solving an inequality-constrained minimization problem is figuring out the status of each constraint: classifying it either as inactive (first case above, where $g(x) > 0$ at the optimum) or active (second case above, where $g(x) = 0$.) The trio of conditions \begin{align*} g(x) &\geq 0\\ \lambda &\geq 0\\ g(x)\lambda &= 0 \end{align*} is just a clever way of encoding that every inequality constraint must be either active or inactive. 
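Applied to the example in the question, a small Python sketch (illustration only; it uses the question's maximization convention $L=f+\lambda g$, so stationarity reads $\nabla f+\lambda\nabla g=0$) shows how the trio of conditions sorts the two candidate points out:

# maximize f(x) = 1 - x1^2 - x2^2  subject to  g(x) = 1 - x1 - x2 >= 0
def kkt_report(x1, x2, lam):
    g = 1 - x1 - x2
    return {
        "point": (x1, x2), "lambda": lam, "f": 1 - x1**2 - x2**2,
        "stationary":      (-2 * x1 - lam == 0) and (-2 * x2 - lam == 0),
        "primal feasible": g >= 0,
        "dual feasible":   lam >= 0,
        "complementarity": lam * g == 0,
    }

# constraint assumed inactive: lambda = 0 forces grad f = 0, i.e. x = (0, 0)
print(kkt_report(0.0, 0.0, 0.0))    # every condition holds -> this is the maximizer

# constraint assumed active: g = 0 gives x = (1/2, 1/2) and lambda = -1
print(kkt_report(0.5, 0.5, -1.0))   # dual feasibility fails -> reject this candidate

The negative multiplier in the second case is exactly the signal that the maximizer does not sit on the constraint boundary in this example.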
As a practical matter, you can solve the optimization problem by trying all possible active sets: i.e., for a single constraint, try solving the problem assuming the constraint is active, and solve it again assuming the constraint is inactive. Check which of the solutions are valid, as you suggest in your post. Of course for large numbers $n$ of inequality constraints this method doesn't scale well, since there are $2^n$ possible active sets. Sophisticated numerical algorithms exist which try to deal with this. Keywords for further study: active set methods, interior point methods.<|endoftext|> TITLE: Finding the sum $\frac{x}{x+1} + \frac{2x^2}{x^2+1} + \frac{4x^4}{x^4+1} + \cdots$ QUESTION [9 upvotes]: Suppose $|x| < 1$. Can you give any ideas on how to find the following sum? $$ \frac{x}{x+1} + \frac{2x^2}{x^2+1} + \frac{4x^4}{x^4+1} + \frac{8x^8}{x^8+1} + \cdots $$ REPLY [2 votes]: Evaluating $$\sum_{q\ge 0} \frac{2^q x^{2^q}}{1+x^{2^q}}$$ we obtain $$\sum_{q\ge 0} 2^q \sum_{k\ge 0} (-1)^k x^{(k+1)2^q} = \sum_{n\ge 1} x^n \sum_{2^q|n} 2^q (-1)^{n/2^q-1}.$$ Now observe that $$\sum_{2^q|n} 2^q (-1)^{n/2^q-1} = \sum_{p=0}^{v_2(n)} 2^p (-1)^{n/2^p-1}$$ where $v_2(n)$ is the exponent of the highest power of $2$ that divides $n.$ This is $$-\sum_{p=0}^{v_2(n)-1} 2^p + 2^{v_2(n)} = - (2^{v_2(n)}-1) + 2^{v_2(n)} = 1.$$ because $n/2^p$ is even unless $p=v_2(n).$ (This also goes through when $n$ is odd and we have one value for $p$, namely zero.) Hence the end result is $$\sum_{n\ge 1} x^n = \frac{x}{1-x}.$$<|endoftext|> TITLE: cardinality of the Borel $\sigma$-algebra of a second countable space QUESTION [8 upvotes]: Second countability by itself doesn't restrict the cardinality of a topological space, since every set with the trivial topology is a second countable space, but it seems natural to ask whether second countability restricts the cardinality of the Borel $\sigma$-algebra of the space. Can the cardinality of the Borel $\sigma$-algebra of a second countable space be arbitrarily big? If this is the case is there a simple construction for a second countable space with an arbitrarily big Borel $\sigma$-algebra? REPLY [9 votes]: More generally, suppose $A$ is a collection of subsets of a set $X$ and let $B$ be the $\sigma$-algebra generated by $A$. We can construct $B$ inductively as follows. We define an increasing sequence $B_\alpha$ of collections of subsets of $X$, for each $\alpha<\omega_1$. Let $B_0=A$, and given $B_\alpha$, let $B_{\alpha+1}$ be the set of complements, countable unions, and countable intersections of elements of $B_\alpha$. If $\alpha$ is a limit ordinal, define $B_\alpha=\bigcup_{\beta<\alpha} B_\beta$. Then I claim that $$B=\bigcup_{\alpha<\omega_1} B_\alpha.$$ Indeed, clearly by induction $B_\alpha\subseteq B$ for all $\alpha$. On the other hand, $\bigcup_{\alpha<\omega_1}B_\alpha$ is a $\sigma$-algebra, since given countably many elements of it they are all contained in some $B_\alpha$, and then their union and intersection (and the complement of any one of them) is contained in $B_{\alpha+1}$. Since $B$ is by definition the smallest $\sigma$-algebra containing $A$, $B\subseteq\bigcup_{\alpha<\omega_1}B_\alpha$. We can now use this to bound the cardinality of $B$. Notice that $|B_1|\leq |A|^{\aleph_0}$ (assuming $|A|>1$). Since $$(|A|^{\aleph_0})^{\aleph_0}=|A|^{\aleph_0^2}=|A|^{\aleph_0^2}=|A|^{\aleph_0},$$ it follows by induction that $|B_\alpha|\leq |A|^{\aleph_0}$ for all $\alpha$. 
Thus $$|B|\leq \aleph_1\cdot |A|^{\aleph_0}=|A|^{\aleph_0}.$$ Finally, let us apply this to your question. If $X$ is a second-countable space, we can take $A$ to be a countable basis. Every open set is a countable union of elements of $A$ and is thus in $B$, so $B$ will be the Borel algebra of $X$. We thus conclude that the Borel algebra has cardinality at most $$|A|^{\aleph_0}=2^{\aleph_0}.$$ (Of course, this upper bound is easy to achieve, for instance for $X=\mathbb{R}$.)<|endoftext|> TITLE: Do positive semidefinite matrices have to be symmetric? QUESTION [27 upvotes]: Do positive semidefinite matrices have to be symmetric? Can you have a non-symmetric matrix that is positive definite? I can't seem to figure out why you wouldn't be able to have such a matrix, but all my notes specify positive definite matrices as "symmetric $n \times n$ matrices." Can anyone help me with an example of a non-symmetric positive definite matrix, or some insight into a proof for why it would need to be symmetric should that be the case? Thanks! REPLY [8 votes]: Let me just add that there exists a branch of optimization and variational analysis in which the notion of a nonnegative definite matrix which is not symmetric is fundamental. Consider a smooth convex function; by convexity its Hessian is nonnegative definite, while by Schwarz's theorem it is also symmetric. However, there exist "non-gradient operators", i.e. operators which are not the gradient of any function but still possess a notion of convexity. This is the notion of a monotone operator, whose Jacobian, in the smooth case, is a nonnegative definite matrix which may not be symmetric.<|endoftext|> TITLE: Greatest open ball in the unit $n$-cell QUESTION [5 upvotes]: Let $n\geq2$ be an integer, let $C=[-1,1]^n$ and let $A$ be the set of all real numbers $r$ such that $r$ is the radius of some open ball $V$ such that $V$ is contained in $C$ and $V$ is disjoint from the open ball whose center is $\bf 0$ and radius is $1.$ Find $\sup A.$ I think the problem of finding $y=\sup A$ (without proof) isn't very difficult, because we can look at the case when $n=2$ and then "generalize" (graphically, using the Pythagorean theorem; to generalize, we use the Pythagorean theorem in $n$ dimensions). I found that $y=(\sqrt n-1)/(\sqrt n +1)$. I intuitively see that a point $x$ (there are exactly $2^n$ such points) at which we can center an open ball $V$ of radius $y$ such that $V\subseteq C$ and $V\cap B(\mathbf{0},1)=\varnothing$ belongs to the line connecting the vector $\bf 0$ and the vector $\bf 1$, all of whose coordinates are $1.$ Thus $x=t\mathbf{1}$ for some real $t$ with $0<t<1$ and $\|x\|=t\sqrt n>1$ (because $x\notin B(\mathbf{0},1)$ and also $x$ must be an interior point of the complement of this ball). But then I don't know what to do. I would like to be as rigorous as possible. Thank you for any help. REPLY [2 votes]: Consider a point $p ∈ [-1, 1]^n \setminus B(\mathbf{0}, 1)$. Observe that $B(p, r) ⊆ [-1, 1]^n \setminus B(\mathbf{0}, 1)$ if and only if $r$ is not greater than the distance from $p$ to any of the faces of the cube and also not greater than the distance from $p$ to $B(\mathbf{0}, 1)$. But we can calculate those! For $n = 2$ and $p = (x, y)$ the distances from the faces are $1 - x$, $1 - y$, $x - (-1) = 1 + x$, and $y - (-1) = 1 + y$. The distance from the ball is $\sqrt{x^2 + y^2} - 1$. To find the supremum it is enough to maximize the minimum of the distances.
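Before carrying out that maximization by hand, a crude numerical check (an added Python sketch; it only scans centers of the form $t\mathbf 1$ on the main diagonal, which is where the question expects the optimum to lie) already points at the conjectured value $(\sqrt n-1)/(\sqrt n+1)$:

import math

def best_diagonal_radius(n, steps=2000):
    # for a center p = t*(1,...,1) with 0 <= t <= 1 the nearest faces are at distance 1 - t,
    # and the distance to the closed unit ball is t*sqrt(n) - 1 (only centers outside count)
    best = 0.0
    for k in range(steps + 1):
        t = k / steps
        dist_ball = t * math.sqrt(n) - 1.0
        if dist_ball <= 0.0:
            continue
        best = max(best, min(1.0 - t, dist_ball))
    return best

for n in (2, 3, 4, 9):
    # the two columns agree up to the grid resolution
    print(n, round(best_diagonal_radius(n), 4), round((math.sqrt(n) - 1) / (math.sqrt(n) + 1), 4))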
By the symmetry of the problem, we may suppose $0 ≤ x ≤ y ≤ 1$, and hence consider only the minimum of $1 - y$ and $\sqrt{x^2 + y^2} - 1$. We consider two cases, depending on which of the quantities is smaller. This is decided by the condition $1 - y ≤ \sqrt{x^2 + y^2} - 1$, which is in our situation equivalent to $2 \sqrt{1 - y} ≤ x$. If $2 \sqrt{1 - y} ≤ x$, then $r = 1 - y$, which we are maximizing. Equivalently, we are minimizing $y$ under the condition $0 ≤ 2 \sqrt{1 - y} ≤ x ≤ y ≤ 1$. Note that as $y$ decreases from $1$ to $0$, the quantity $2 \sqrt{1 - y}$ increases from $0$ to $2$. Hence, when minimizing $y$ under the condition we obtain $x = y = 2\sqrt{1 - y}$. If $2 \sqrt{1 - y} ≥ x$, then $r = \sqrt{x^2 + y^2} - 1$, which we are maximizing. $r$ increases as $x$ increases, so we put $x = \min(y,\, 2\sqrt{1 - y})$ and we maximize $\sqrt{\min(y,\, 2\sqrt{1 - y})^2 + y^2} - 1$ for $0 ≤ y ≤ 1$. We again consider two cases: if $y ≤ 2 \sqrt{1 - y}$, then we are maximizing $\sqrt{y^2 + y^2} - 1 = \sqrt{2} y - 1$, or equivalently $y$, under the conditions $0 ≤ y ≤ \min(2\sqrt{1 - y},\, 1)$. By the same observation on the behavior of the two quantities we obtain again $y = 2 \sqrt{1 - y}$. Otherwise, if $y ≥ 2 \sqrt{1 - y}$, we maximize $\sqrt{4 (1 - y) + y^2} - 1 = 1 - y$ and therefore minimize $y$ under $0 ≤ 2 \sqrt{1 - y} ≤ y ≤ 1$, which we have already done. In all cases we get $x = y = 2 \sqrt{1 - y}$ with solution $x = y = 2 \sqrt{2} - 2$ and $r = 1 - y = \sqrt{x^2 + y^2} - 1 = 3 - 2 \sqrt{2} = (\sqrt{2} - 1)/(\sqrt{2} + 1)$, which is the desired result. The computation can hopefully be generalized to arbitrary $n$. We can again suppose $0 ≤ x_1 ≤ x_2 ≤ … ≤ x_n ≤ 1$ and maximize $\min(1 - x_n,\, \sqrt{∑_{i = 1}^n x_i^2} - 1)$. Also note that $(\sqrt{n} - 1)/(\sqrt{n} + 1) \to 1$ as $n \to ∞$. Therefore, at higher dimensions the smaller balls have almost the same radius as $B(\mathbf{0}, 1)$, and so are intersecting. They are touching exactly when $n = 9$. More precisely, every two of the $2^9$ balls with centers sharing all but one coordinate are touching. I gave this problem to a colleague of mine and he found the result for $n = 2$ a different way: he considered a copy of the square rotated by $45$ degrees and then inscribed the small circle we are looking for in one of the resulting triangles. Then by using simple trigonometry we get $r = \tan^2(π/8)$. As a byproduct we obtain $\tan^2(π/8) = (\sqrt{2} - 1)/(\sqrt{2} + 1)$.<|endoftext|> TITLE: Best way to learn maths - proofs or exercises? QUESTION [8 upvotes]: My question is regarding the most effective way to learn maths. Should one concentrate on churning through the exercises, or is it better to concentrate on understanding and reproducing the proofs? I understand that ideally one would do both, however my time is limited (father of two children under 3, run a small business, full course load). I am in the first year of a mathematics degree and am struggling. I go to the lectures religiously and ask questions, however more often than not I leave not fully understanding the subject matter. I usually end up poring over the textbooks, scouring the internet or harassing the teaching assistants till I understand, but this eats into the time I have left to do the exercises and review the proofs. Any advice or insights gratefully accepted! REPLY [9 votes]: Early math classes (especially calculus) are very often taught from an applied perspective, where proofs and understanding aren't very important.
More advanced classes (should you take them) will be entirely focused on understanding and proof. So if you plan to continue in math, it's definitely worth learning the deeper nature of ideas. Maybe you're just trying to learn some applied knowledge for, say, engineering. In that case, it's not necessary to learn anything well- as long as you can solve the problems. But it is often easier to learn ideas instead of algorithms: if you understand how to derive other differentiation rules from the chain rule, you have to memorize much less. I support the proof approach for two reasons: firstly, I think it's a lot more enjoyable and interesting. But secondly, it's easy to forget details if you don't know why they matter. Imagine someone teaches you how to make an origami crane. Then you make one on your own: you know if you made a mistake, because it doesn't look like a crane. You can try to back up and figure out your error, but you have an intuition about what should happen. But suppose you learned how to make an origami crane without any paper, just by memorizing sequences of folds. If you had to write down the folds, it would be very easy to make a mistake: you don't really know what's going on, or what it leads to, and there's no way to differentiate "a pretty good crane" from "something really weird." And then suppose you really got into origami, and understood exactly how certain folds would lead to the final result. You can even make your own constructions. Now, if you try to make a crane, you barely need to remember anything- just a general idea of what it should look like will be enough. It would take a while to get this level of knowledge, but it'd be really hard to make a bad crane. Math is similar: you can memorize algorithms, you can learn when an argument makes sense, and you can learn how to make good arguments yourself. But the third option makes everything so much easier, once you get past the initial hurdle.<|endoftext|> TITLE: Is it possible that isomorphic $\pi_n$'s not induced by a map? QUESTION [6 upvotes]: This might be a stupid question, but I want to make sure as a beginner of AT. A map between CW-complexes $f: A \rightarrow B$ is defined to be a weak homotopy equivalence if it induces isomorphisms $f_*: \pi_n(A) \rightarrow \pi_n(B)$ for all $n$. But is it true that $\pi_n(A) \cong \pi_n(B)$ for all $n$ implies that such a map $f$ exists? If not, what is the reason? And what is a good counterexample? REPLY [6 votes]: Another easy/interesting counter example: Consider $X=S^1\vee S^3$ and its double cover $X_2$, i.e attach two copies of $S^3$ one in north pole and one in south pole of $S^1$. Then $\pi_1(X) \cong \mathbb{Z} \cong \pi_1(X_2)$ and the covering map induces isomorphisms in $\pi_n$ for all $n\geq 2$. But $X$ and $X_2$ are not homotopically equivalent since their Euler Characteristics are different. So there cannot be any induced isomorphism by Whitehead theorem.<|endoftext|> TITLE: Forcing and PDE's QUESTION [7 upvotes]: Anybody knows where can I read about some relations between PDE's and set theory? Something like "There exists a PDE such that the existence of a solution for it cannot be determined in ZFC" or "Given a PDE, the existence of a solution is absolute". I'm just looking for some references. Thank you! REPLY [5 votes]: Shoenfield absoluteness shows that you're not going to get any simple examples of this. 
Roughly speaking, there is a hierarchy of mathematical propositions in terms of their quantifier complexity, and Shoenfield absoluteness states that sentences of a simple enough form cannot have their truth values altered by forcing. This level is $\Pi^1_2$: statements of the form "For every real $r$, there is a real $s$ such that [stuff]", where [stuff] involves only quantification over natural numbers (or equivalently rationals). The negation of a $\Pi^1_2$ sentence is $\Sigma^1_2$ and is also absolute. Let's look at a very special case: the statement "$E$ has a solution defined on all of $\mathbb{R}$," where $E$ is some differential equation in $f, f', f'', . . . $. Well, continuous functions are coded by real numbers; so the outer quantifier is $\Sigma^1_1$, and we need to examine how complicated the statement "$r$ codes a solution to $E$" is. With a bit of work, it's not hard to show that this is $\Pi^1_1$ (in fact, I believe this is overkill), so the original statement is $\Sigma^1_2$; hence its truth value cannot be altered by forcing. Similarly, saying that there is at most one solution to $E$ is $\Pi^1_2$: "For all $r, s$, either one of $r$ or $s$ does not code a solution to $E$, or $r=s$." So again, this statement's truth value cannot be altered by forcing. (This shouldn't be too surprising, given the philosophy that the solution space carved out by a differential equation is a reasonably tame object; this suggests that the quantifier complexity of the relevant statements isn't too great.) Now, more complicated statements such as "any differential equation of such-and-such a form has a unique solution" are on the face of it more susceptible to set-theoretic techniques, but even here we run into problems: most nice classes of differential equations have low-quantifier-complexity descriptions, and I believe most of the relevant statements can still be expressed in a $\Pi^1_2$ or $\Sigma^1_2$ manner. Finally, even given a sufficiently complex statement, note that large cardinals provide extensions of Shoenfield absoluteness all the way up the projective hierarchy. So to get a statement which provably can be altered by forcing, we would need something like the Continuum Hypothesis which is fundamentally a statement about sets of reals. But such statements are quite rare outside of set theory. This doesn't rule out the possibility of still using forcing in the context of PDEs: there are examples of theorems proved using forcing, as follows - force to produce a more-easily-analyzed model of set theory in which you show $\varphi$ holds, then argue via appropriate absoluteness that $\varphi$ must have been true to begin with. These arguments are (in my opinion) spectacular, but also few and far between. Currently, I do not know of any instances in differential equations.<|endoftext|> TITLE: The best performing (theoretical complexity-wise) algorithm to solve this quadratic program QUESTION [5 upvotes]: Find the best performing (complexity-wise) algorithm to solve the following quadratic program $$\begin{array}{ll} \text{minimize} & \frac 12\|\mathrm x - \mathrm v\|_2^2\\ \text{subject to} & 1_n^T \mathrm x = \mathrm 1\\ & \mathrm x \geq 0_n\end{array}$$ where $\mathrm v \in \mathbb R^n$ is given. I have started learning Moreau Yoshida Regularization to try to solve this problem, as I was hinted by my supervisor that the best performing algorithm makes use of that theory. I will be adding the work that I have done on this question soon. 
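In the meantime, one baseline I am aware of is the classical sort-based Euclidean projection onto the simplex, which runs in $O(n\log n)$; here is a tentative sketch (my own code, assuming numpy, with made-up names; it is not necessarily the Moreau-Yosida-based method my supervisor had in mind).

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.

    Sort-based method, O(n log n): sort v in decreasing order, find the last
    index whose shifted prefix average stays below the entry, then clip.
    """
    u = np.sort(v)[::-1]                  # descending
    cssv = np.cumsum(u) - 1.0
    k = np.arange(1, len(v) + 1)
    rho = k[u - cssv / k > 0][-1]         # largest index with a positive gap
    theta = cssv[rho - 1] / rho
    return np.maximum(v - theta, 0.0)

# illustrative made-up input
v = np.array([0.4, 1.2, -0.3, 0.9])
x = project_to_simplex(v)
print(x, x.sum())                         # nonnegative entries summing to 1
```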
REPLY [5 votes]: Your problem corresponds to orthogonally projecting the point $\textbf{v} \in \mathbb R^n$ onto the unit simplex. The problem can be solved analytically in $\mathcal O(n)$ using Kiwiel's algorithm (e.g see Algorithm 3 of this paper). Needless to say this is the best possible theoretical bound. If you want something a bit simpler (implementation-wise) with essentially the same complexity, then this $\mathcal O(n\log n)$ algorithm will do.<|endoftext|> TITLE: Why isn't $\arccos(\cos x)$ equal to $x$? QUESTION [5 upvotes]: How did we get $2\pi - x$? Kindly provide a general answer because many other similar questions have the same issue. REPLY [7 votes]: First, $\arccos (\cos x)\ne x$ in your case because $x$ is defined to be from $(\pi, 2\pi)$ and range of $\arccos x$ is $[0,\pi]$, that is $\arccos x$ cannot return values you want - it cannot return $x$. That is why we have to shift $x$ in range of $[0,\pi]$. We can do that by letting $y=2\pi-x$. Remember that $\cos(2\pi-x)=\cos x$, so we did not change original equation. Now, since $y\in [0,\pi]$ we can write $\arccos (\cos y)=y$ or $\arccos(\cos (2\pi-x))=2\pi-x$ or (final answer) $\arccos (\cos x)=2\pi-x$.<|endoftext|> TITLE: 1-Smoothness of the Symmetric Softmax Function QUESTION [9 upvotes]: Define the symmetric softmax of a vector $x\in \mathbb{R}^n$ to be $$L(x)=\log\sum_i(e^{x_i}+e^{-x_i}).$$ Equation (6) in this paper states that for all $x$ and $y$ $$|\nabla L(x)-\nabla L(y)|_1 \leqslant ||x-y||_{\infty}.$$ (Apparently, this property is called 1-smoothness in optimisation) I'm having a hard time proving this. I also tried to look for a proof but couldn't find one. I'd appreciate someone pointing me to a reference containing a proof. Thanks. REPLY [2 votes]: Here's an actual proof. It is inspired by one I found on page 116 of "First-Order Methods in Convex Optimization", although the book builds on a lot of theory that isn't really necessary when we are only interested in $L(x)$, so I cut out all of that and reduced it to the essential steps. As the question notes, we are interested in proving 1-smoothness of the symmetric softmax. To that extent, first define in general terms what we mean by L-smoothness: Definition ($L$-Smoothness, somewhat informal): Let $f: \mathbb{R}^n \to \mathbb{R}$ be some function and consider some norm $\lVert\cdot\rVert$ on $\mathbb{R}^n$. $f$ is $L$-smooth if it is differentiable and $\lVert \nabla f(x) - \nabla f(y)\rVert_* \le L \lVert x - y \rVert$, where $\lVert\cdot\rVert_*$ is the dual norm defined for the dual space ${\mathbb{R}^n}^*$ of $\mathbb{R}^n$ through $\lVert v \rVert_* = \max_x \{ \langle v, x \rangle \mid \lVert x \rVert = 1 \}$ for $v \in {\mathbb{R}^n}^*$. Now in our case, $\lVert\cdot\rVert$ is the supremum norm, and its dual norm is the 1-norm, which shows that 1-smoothness is indeed the property we are interested in. We make use of the following lemma, proved at the end: Lemma: If $f$ is convex, then $f$ is $L$-smooth if $\langle d, \nabla^2 f(x) \cdot d \rangle \le L \lVert d \rVert^2$ for all $d \in \mathbb{R}^n$. (As an aside: The proof hence proceeds exactly as in this paper, Proposition 4, but with different norms) To simplify the calculation, we will prove a stronger result, namely 1-smoothness of the unsymmetric softmax $S(x) = \log\left( \sum_i e^{x_i} \right)$, because as user24121 observes, we can always plug in $x = [x_1, \ldots, x_n, -x_1, \ldots, -x_n]$ to obtain the symmetric softmax. 
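Before the formal argument, a quick numerical sanity check of the claimed inequality may be reassuring (a sketch of my own, assuming numpy; it is not part of the cited book). The same doubling trick gives $\nabla L$ from an ordinary softmax, and the bound $\lVert \nabla L(x)-\nabla L(y)\rVert_1 \le \lVert x-y\rVert_\infty$ can be tested on random points:

```python
import numpy as np

def grad_L(x):
    """Gradient of L(x) = log(sum_i exp(x_i) + exp(-x_i)).

    Uses the ordinary softmax of the doubled vector z = [x, -x]:
    dL/dx_i = softmax(z)_i - softmax(z)_{n+i}.
    """
    z = np.concatenate([x, -x])
    p = np.exp(z - z.max())
    p /= p.sum()
    n = len(x)
    return p[:n] - p[n:]

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = 3 * rng.normal(size=(2, 5))          # random test points
    lhs = np.abs(grad_L(x) - grad_L(y)).sum()   # ||grad L(x) - grad L(y)||_1
    rhs = np.abs(x - y).max()                   # ||x - y||_inf
    assert lhs <= rhs + 1e-12
print("inequality held on all random samples")
```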
After some calculation, we see that the Hessian of $S$ is given by $$ \nabla^2 S(x) = \mathrm{diag}(\sigma(x)) - \sigma(x)\sigma(x)^T $$ where $$ \sigma(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}} $$ And now the computation is straightforward: \begin{align} d^T \nabla^2 S(x) d &= d^T \mathrm{diag}(\sigma(x))d - (\sigma(x)^Td)^2 \\ &\le d^T \mathrm{diag}(\sigma(x))d \\ &\le \lVert d \rVert_\infty^2 \lVert \sigma(x) \rVert_1 \le \lVert d \rVert_\infty^2 \end{align} The only thing left to do now is to prove the lemma: Proof of the Lemma: By Taylor's Theorem, for any $x, d \in \mathbb{R}^n$, there exists $\xi \in \mathbb{R}^n$ such that $$ f(x + d) = f(x) + \nabla f(x)^T d + \frac12 d^T \nabla^2 f(\xi) d \overset{\text{assumption}}\le f(x) + \nabla f(x)^T d + \frac{L}{2} \lVert d \rVert^2 $$ We continue by proving that any $f$ satisfying the inequality is $L$-smooth. To do so, we must overcome the hurdle that we only have expressions of the form $\langle \nabla f(x + d), d \rangle$, but the definition of the dual norm requires the second argument to be independent of the first. To get around this, we take a detour via the Bregman distance: Let $D_f(x, d) = f(x + d) - f(x) - \nabla f(x)^T d \le \frac{L}{2}\lVert d \rVert^2$ (this is the Bregman distance, although we do not require $f$ to be strictly convex). Note that because $f$ is convex, $f(x + d) \ge f(x) + \nabla f(x)^Td$ and hence $D_f(x, d) \ge 0$ for all $d$. At the same time, for any $\delta \in \mathbb{R}^n$, $$ D_f(x, d + \delta) = f(x + d + \delta) - f(x) - \nabla f(x)^T (d + \delta) $$ Bounding $f((x + d) + \delta)$ through the bound at the start of this proof yields \begin{align} D_f(x, d + \delta) &\le f(x + d) - f(x) - \nabla f(x)^T (d + \delta) + \nabla f(x + d)^T \delta + \frac{L}{2}\lVert \delta \rVert^2 \\ &= D_f(x, d) + \langle \nabla f(x + d) - \nabla f(x), \delta \rangle + \frac{L}{2} \lVert \delta \rVert^2 \end{align} Now with the very specific choice of $\delta = -\frac{\lVert \nabla f(x + d) - \nabla f(x) \rVert_*}{L} v$ for any $v \in \mathbb{R}^n$ with $\lVert v \rVert = 1$ and $\langle \nabla f(x + d) - \nabla f(x), v \rangle = \lVert \nabla f(x + d) - \nabla f(x) \rVert_*$, we can compute \begin{align} 0 &\le D_f(x, d + \delta) \le D_f(x, d) - \frac{\lVert \nabla f(x + d) - \nabla f(x) \rVert_*}{L} \langle \nabla f(x + d) - \nabla f(x), v \rangle + \frac{1}{2L} \lVert \nabla f(x + d) - \nabla f(x) \rVert_*^2 \\ &= D_f(x, d) - \frac{1}{2L} \lVert \nabla f(x + d) - \nabla f(x) \rVert_*^2 \end{align} Hence $$ \frac{1}{2L} \lVert \nabla f(x + d) - \nabla f(x) \rVert_*^2 \le D_f(x, d) \le \frac{L}{2} \lVert d \rVert^2 $$ Multiplying by $2L$ and taking the square root completes the proof.<|endoftext|> TITLE: Solution of SDE $dX_t = \mu(t)X_tdt + \sigma X_t dW_t$ QUESTION [5 upvotes]: I didn't study stochastic processes in a systematic way, but I need to use them in financial analysis. Here's my question. I know the solution of the SDE $dX_t = \mu X_tdt + \sigma X_t dW_t$, given that $\mu$ and $\sigma$ are constants, and now I was asked to solve the following SDE $dX_t = \mu(t)X_tdt + \sigma X_t dW_t$, given that $\mu(t)$ is a growth function. My attempt: I simply follow the same procedure as in the constant-$\mu$ case. Let $y=f(x)= \ln X$ (actually I don't know why $\ln X$ is chosen, could anyone explain it to me?) and apply Ito's lemma.
We have $$dy=(\mu (t) - \frac{\sigma ^2}{2})dt + \sigma dW_t$$ Integrating both sides, the solution is given by $$y_t =y_0 + \int_0^t \mu (s)\, ds - \frac{\sigma ^2}{2}t + \sigma W_t $$ Since $X=e^y$, we have $$X_t = e^{y_0 + \int_0^t \mu (s)\, ds - \frac{\sigma ^2}{2}t + \sigma W_t} =X_0 e^{ \int_0^t \mu (s)\, ds - \frac{\sigma ^2}{2}t + \sigma W_t}$$ Is my try correct? REPLY [5 votes]: The reason $\ln(x)$ is chosen is this: (Note: where I say $B_{t}$ I mean Brownian motion, which you denoted in your question as $W_{t}$) The SDE you provided is one of the few we can explicitly solve. I'll talk about Geometric Brownian Motion (GBM) $dX_{t} = \mu X_{t} \,dt + \sigma X_{t} \,dB_{t}$, but as you mentioned in your question, your case is the same when $\mu$ becomes a function of $t$. You can "multiply" the stochastic differential equation (SDE) in its differential form by $\frac{1}{X_{t}}$ to get $$\frac{1}{X_{t}}dX_{t} = \frac{1}{X_{t}}\mu X_{t} \,dt + \frac{1}{X_{t}}\sigma X_{t} \,dB_{t} $$ and this simplifies to $$ \frac{1}{X_{t}}dX_{t} = \mu \,dt + \sigma \,dB_{t}.$$ Notice that the right hand side no longer depends on $X_{t}$. Now recall Ito's formula for a $C^{2,1}$ function $f(t,x)$ ($C^{2,1}$ means $f$ is twice differentiable in $x$ and once differentiable in $t$). Ito's formula tells us if $X_{t}$ satisfies the previous SDE, then $f(t,X_{t})$ will satisfy: $$d(f(t,X_{t})) = \frac{\partial f}{\partial t}(t,X_{t}) \,dt + \frac{\partial f}{\partial x}(t,X_{t}) \,dX_{t} + \frac{1}{2} \frac{\partial^{2} f}{\partial x^{2}}(t,X_{t}) \,d[X]_{t}$$ where $[X]_{t}$ is the quadratic variation process of $X_{t}$. Notice that in the SDE given to us by Ito's formula, one of the terms on the right hand side is $\frac{\partial f}{\partial x}(t,X_{t}) \,dX_{t}$. This almost looks like our $\frac{1}{X_{t}} \,dX_{t}$, which was the left hand side of our original SDE. If we can choose $f(t,X_{t})$ wisely such that these are equal (i.e., such that $\frac{\partial f}{\partial x}(t,X_{t})= \frac{1}{X_{t}}$), then we can solve this special SDE. Hopefully you see that we should have $f(t,x) = \ln(x)$ for the above equality to hold. Okay, so since $f$ doesn't depend on $t$, let's call it $f(x)$ to save space. Substituting this $f$ into Ito's formula above gives: $$d(\ln(X_{t})) = 0 \,dt + \frac{1}{X_{t}} \,dX_{t} - \frac{1}{2} \frac{1}{X_{t}^2} \,d[X]_{t}$$ Hmm... on the right hand side there is conveniently a $\frac{1}{X_{t}} \,dX_{t}$ (which was the whole point! That's why we chose $f$ as we did) and we have an SDE for this already. We have that $\frac{1}{X_{t}}dX_{t} = \mu \,dt + \sigma \,dB_{t}$. Okay, so solving for $\frac{1}{X_{t}}\,dX_{t}$ in the Ito's formula SDE gives $$\frac{1}{X_{t}} \,dX_{t} =d(\ln(X_{t})) + \frac{1}{2} \frac{1}{X_{t}^2} \,d[X]_{t} $$ and this allows us to set the right hand side of the above equal to $\mu \,dt + \sigma \,dB_{t}$. So we have $$d(\ln(X_{t})) + \frac{1}{2} \frac{1}{X_{t}^2} \,d[X]_{t} = \mu \,dt + \sigma \,dB_{t}. $$ What is $d[X]_{t}$? If you do the computation, you get $\sigma^{2} X_{t}^{2} \,dt$, so that our equation becomes $$d(\ln(X_{t})) + \frac{1}{2} \sigma^{2} \,dt = \mu \,dt + \sigma \,dB_{t}. $$ This simplifies to $$\ln(X_{t}) = \ln(X_{0}) + \int \limits_{0}^{t}(\mu - \frac{1}{2} \sigma^{2}) \,dt + \int \limits_{0}^{t}\sigma \,dB_{t} $$ so that $X_{t} = X_{0}e^{\int \limits_{0}^{t}(\mu - \frac{1}{2} \sigma^{2}) \,dt + \int \limits_{0}^{t}\sigma \,dB_{t}}$.<|endoftext|> TITLE: How can the surd $\sqrt{2-\sqrt{3}}$ be expressed?
QUESTION [5 upvotes]: I was wondering how $\sqrt{2-\sqrt{3}}$ could be expressed in terms of $\frac{\sqrt{3}-1}{\sqrt{2}}$. I did try to evaluate both expressions separately but they did not seem to match. I would appreciate it if someone could also mention the procedure. REPLY [8 votes]: Theorem: Given a nested radical of the form $\sqrt{X\pm Y}$, it can be rewritten into the form $$\sqrt{\frac {X+\sqrt{X^2-Y^2}}{2}}\pm\sqrt{\frac {X-\sqrt{X^2-Y^2}}{2}}\tag{1}$$ where $X>Y$. Therefore, we have $X=2,Y=\sqrt{3}$ because $2>\sqrt{3}$. So plugging that into $(1)$ gives us $$\sqrt{\frac {2+\sqrt{4-3}}{2}}-\sqrt{\frac {2-\sqrt{4-3}}{2}}\tag{2}$$ Simplifying $(2)$ gives us $$\sqrt{\frac {2+1}{2}}-\sqrt{\frac {2-1}{2}}\implies \sqrt{\frac 32}-\sqrt{\frac 12}$$ $$\therefore\sqrt{2-\sqrt{3}}=\frac {\sqrt{3}-1}{\sqrt{2}}$$ Alternatively, one can rewrite it as a sum of two surds, and simplify from there. Specifically, let $\sqrt{2-\sqrt3}$ equal $\sqrt d-\sqrt e$. Squaring, we get\begin{align*} & 2-\sqrt3=d+e-2\sqrt{de}\\ & \therefore\begin{cases}d+e=2\\de=\frac 34\end{cases}\end{align*} Solving for $d$ and $e$ gives the simplification.<|endoftext|> TITLE: Find all functions $f:\mathbb Z \rightarrow \mathbb Z$ such that $f(0)=2$ and $f\left(x+f(x+2y)\right)=f(2x)+f(2y)$ QUESTION [7 upvotes]: Find all functions $f:\mathbb Z \rightarrow \mathbb Z$ such that $f(0)=2$ and $$f\left(x+f(x+2y)\right)=f(2x)+f(2y)$$ for all $x \in \mathbb Z$ and $y \in \mathbb Z$ My work so far: 1) $x=0$ $$f\left(f(2y)\right)=f(2y)+2$$ 2) $y=0$ $$f\left(x+f(x)\right)=f(2x)+2$$ 3) Let $n\ge 0$. Using induction we have: if $f(2n)=2n+2$ then $$f(f(2n))=f(2n+2)=2n+2+2=2n+4$$ Hence, if $k=2m\ge0$ then $f(k)=k+2$ 4) $n<0$: I need help here. REPLY [2 votes]: Using your results, we find that $$\tag1f(x+f(x+2y))=2x+2y+4\qquad\text{for }x,y\ge0 $$ In particular, $$\tag2 f(x+f(x))=2x+4\qquad\text{for }x\ge0$$ Let $S=\{\,k\in\Bbb Z\mid f(2k)=2k+2\,\}$. You essentially showed that $k\in S\implies k+1\in S$ and hence from the given $0\in S$, we have $\Bbb N_0\subseteq S$. With $x=-2y$ we have $$ \tag3f(-2y+2)=f(-4y)+f(2y)\qquad\text{for }y\in\Bbb Z$$ Thus if two of $1-y,-2y,y$ are in $S$, then so is the third. In particular, for $y<0$ we already know $1-y,-2y\in S$; we conclude $S=\Bbb Z$, i.e., $$\tag4f(x)=x+2\qquad \text{for }x\in2\Bbb Z$$ and hence $$\tag{1'}f(x+f(x+2y))=2x+2y+4\qquad\text{for }x,y\in\Bbb Z $$ and in particular $$\tag{2'} f(x+f(x))=2x+4\qquad\text{for }x\in\Bbb Z$$ Assume that for some odd $x=2n+1$ the value $f(x)=2m$ is even. Then for $k\in\Bbb Z$ $$\begin{align}f(1+2k)&=f\bigl(1+2(k-m)+f(2n+1)\bigr)\\ &=f\Bigl((1+2k-2m)+f\bigl((1+2k-2m)+2(n-k+m)\bigr)\Bigr)\\ &= 2(1+2k-2m)+2(n-k+m)+4\\ &=6+2k-2m+2n\end{align}$$ and so with the odd constant $c:=2n-2m+5$ $$f(x)=x+c\qquad\text{for }x\in 2\Bbb Z+1$$ Then $6=f(1+f(1))=f(2+c)=2+2c$ implies $c=2$, contradicting that $c$ is odd. Therefore $f(x)$ is odd for all odd $x$. But then $x+f(x)$ is even and from $(4)$, we get $2x+4=f(x+f(x))=x+f(x)+2$ and so $$f(x)=x+2 $$<|endoftext|> TITLE: Bezier curvature extrema QUESTION [8 upvotes]: For a planar cubic Bezier curve $B (x(t),y(t))$, I would like to find the values of the parameter $t$ where the curvature (or curvature radius) is greatest/smallest. The formula for the curvature radius is: $$r = \dfrac{(x'^2+y'^2)^{(3/2)}}{x' (t) y''(t) - y'(t) x''(t)}$$ The problem is that there is that square root in it, so I was wondering whether it is possible to express the curvature extrema by some combination of the curve derivatives?
The idea is that for finding, say, the values where the slope of the curve is parallel to x-axis one needs to solve the quadratic function of the curve's first derivative. So I thought that maybe the extremes are in fact the values where the acceleration along the curve (or something similar) is greatest/smallest and I hoped that it would be possible to write it down as a polynomial. REPLY [8 votes]: Short summary The points of extremal curvature which are not inflection points will correspond to roots of the quintic polynomial $$(x'''y' - x'y''')(x'^2 + y'^2)-3(x'x'' + y'y'')(x''y' - x'y'')$$ Finding these roots generally requires numeric methods; there can be no simple formula using radicals (i.e. fractional exponents) nor a ruler and compass construction of the points in question. Avoiding square roots The problem is that there is that square root in it For this I'd indeed go with John's comment: the extrema of the curvature are extrema of the squared curvature. The converse may not hold if the curvature passes through zero, but if you are interested in the extrema of the absolute value of the curvature, then all maxima of the squared curvature will be relevant. So look at $$k^2=\frac{(x'\,y'' - y'\,x'')^2}{(x'^2+y'^2)^3}$$ Characterizing extrema Compute the derivative of that with respect to $t$, concentrate on the numerator of the resulting rational function, and you will get a polynomial of degree $7$ in $t$. It contains two factors: one of degree $2$ which is equivalent to $x'\,y'' - y'\,x''$ and describes the inflection points where the curvature is zero. And one of degree $5$ which gives you the remaining points of extremal curvature. A concrete and illustrated example I tried to determine whether all of these points are indeed relevant, and found several situations where all seven critical values for $t$ were real numbers between $0$ and $1$. So far I haven't found a really beautiful setup, where all seven points are easy to see, with sufficient distance between them and sufficient differences in curvature. Here is the best I have so far: $$P_1=(0,4),P_2=(5,4),P_3=(7,0),P_4=(5,3)\\ t\in\{0.211, 0.326, 0.508, 0.634, 0.734, 0.852, 0.920\}$$ The curves looks like depicted below. Control points in cyan, inflection points in magenta and other extremal curvature points in red. I also included the osculating circles (light red) resp. tangent lines (light magenta) at the critical points, but they are all so close together that it's really hard to see. Below is what the curvature looks like for the example above. It clearly depicts the fact that the curvature hardly differs at all between some of these extremal points, while at one maximum it's so large that it's way off the graph. I'm sorry I don't have a more balanced example. Simple closed form impossible Since there are at least five (and depending on your definition perhaps even seven) relevant solutions, I doubt you can get around solving this numerically. And if you do tackle this numerically, the above approach is probably easiest to understand and implement, so I'd not try looking for a different description. But more on that in the next section. To be more precise: the five non-inflection points of extremal curvature in the example above are the roots of $$18056\,t^5 - 48720\,t^4 + 49540\,t^3 - 23550\,t^2 + 5200\,t - 425 = 0$$ The Galois group of this polynomial is $S_5$, so there can be no way to describe its roots using radicals. 
Since most ruler and compass geometric constructions can't perform operations more complex than taking square roots, that also rules out reasonably simple geometric descriptions of these points. Numeric approach According to Wikipedia (and thanks to the comment by Jean Marie), the derivatives of your point can be computed like this: \begin{align*} B(t)&=(1-t)^3\,P_1+3(1-t)^2t\,P_2+3(1-t)t^2\,P_3+t^3\,P_4\\ B'(t)&=3(1-t)^2\,(P_2-P_1)+6(1-t)t\,(P_3-P_2)+3t^2\,(P_4-P_3)\\ B''(t)&=6(1-t)\,(P_3-2P_2+P_1)+6t\,(P_4-2P_3+P_2)\\ B'''(t)&=6(P_4-3P_3+3P_2-P_1)\\ B''''(t)&=0 \end{align*} According to my computer algebra system, and unless I made a mistake, the derivative of the squared curvature is $$\frac{\mathrm d\,k^2}{\mathrm dt}= 2\frac{\bigl(x''y' - x'y''\bigr)\bigl((x'''y' - x'y''')(x'^2 + y'^2) -3(x'x'' + y'y'')(x''y' - x'y'')\bigr)}{(x'^2 + y'^2)^4}$$ Since you want this to be zero, you can ignore the denominator here. If you don't care about the inflection points, you can omit the first parenthesis in the numerator as well. If you know the coordinates of your four points, you can compute the coefficients of $t$ in the individual polynomials, combine them to the resulting polynomial (of degree 5 if you dropped the inflection condition or 7 if not) and then find roots of that polynomial numerically. It is interesting to note that $(x'''y' - x'y''')$ has degree $1$ in $t$ and $(x''y' - x'y'')$ has degree $2$ in $t$, even though the degrees of the involved terms would suggest a higher resulting degree. But the terms in the highest coefficient cancel out, otherwise the second big parenthesis in the numerator would be a polynomial of degree $6$. If you do this in integer arithmetic, you will be able to cancel a common factor of $162$ from all the coefficients of this polynomial, but for floating point arithmetic it doesn't really matter. Responses to comments I'll try to answer some questions from your comment. why at most 7 roots? Because the relevant condition can be expressed as a polynomial of degree $7$ so there can never be more than these. If I image a cubic Bezier I have a hard time finding seven places where the curvature could be most extreme. I also have a hard time finding one where all the different points and curvatures can be seen well. But the example above should illustrate the kind of situation this is about. Whether you consider five or seven solutions depends on whether you care about inflection points with zero curvature or not. Also, perhaps there are other ways to get the positions than squaring the curvature formula? There may well be. But the solutions have to be equivalent. At least in the example above, we know that there are five resp. seven relevant points, so other solution cannot end up with a simpler description of these solutions. At least not simpler in terms of the degree of the describing polynomial. And lastly, would something like the dot product of first and second derivative work? If not, why? That would result polynomial of degree 3. Well, that's simply due to the definition of the curvature. While it sounds plausible that this dot product of yours would share some properties of curvature, it's a different quantity with different extremal parameters, so it does not fit the established concept of curvature as the inverse radius of osculating circles. If you can't use that dot product all by itself, you could instead use it as part of a larger formula. 
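As a concrete companion to the numeric approach above, here is a small sketch (my own code, assuming numpy) that builds the degree-$7$ numerator of $(k^2)'$ from the quoted derivative formulas, using the control points of the worked example, and extracts its real roots in $[0,1]$:

```python
import numpy as np

# Control points of the example above: P1=(0,4), P2=(5,4), P3=(7,0), P4=(5,3).
P = np.array([[0, 4], [5, 4], [7, 0], [5, 3]], dtype=float)

# Cubic Bezier in power form, with numpy's highest-power-first convention.
M = np.array([[-1,  3, -3, 1],    # t^3
              [ 3, -6,  3, 0],    # t^2
              [-3,  3,  0, 0],    # t^1
              [ 1,  0,  0, 0]])   # t^0
x, y = M @ P[:, 0], M @ P[:, 1]

x1, y1 = np.polyder(x), np.polyder(y)        # B'
x2, y2 = np.polyder(x1), np.polyder(y1)      # B''
x3, y3 = np.polyder(x2), np.polyder(y2)      # B'''

pm, ps, pa = np.polymul, np.polysub, np.polyadd
cross12 = ps(pm(x2, y1), pm(x1, y2))         # x''y' - x'y''   (degree 2)
cross13 = ps(pm(x3, y1), pm(x1, y3))         # x'''y' - x'y''' (degree 1)
speed2  = pa(pm(x1, x1), pm(y1, y1))         # x'^2 + y'^2
dot12   = pa(pm(x1, x2), pm(y1, y2))         # x'x'' + y'y''

quintic = ps(pm(cross13, speed2), 3 * pm(dot12, cross12))
numerator = pm(cross12, quintic)             # degree-7 numerator of (k^2)'

roots = np.roots(numerator)
ts = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and 0 <= r.real <= 1)
print(np.round(ts, 3))                       # compare with the seven values above
```

If the example values quoted earlier are correct, the printed parameters should reproduce them up to rounding.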
Returning to the dot product: the determinant (or 2d cross product, or whatever you want to call it) is proportional to the sine of the angle, just as the dot product is proportional to the cosine. So generally speaking you have $$\langle v,w\rangle^2+(v\times w)^2=\lVert v\rVert^2\cdot\lVert w\rVert^2$$ for arbitrary vectors $v$ and $w$. Which means that in the case at hand you could write the squared curvature as $$k^2=\frac{(x'^2+y'^2)(x''^2+y''^2)-(x'x''+y'y'')^2}{(x'^2+y'^2)^3}$$ but you gain nothing by doing this. The actual increase in degree comes from computing the derivative of such an expression, in order to find the extrema.<|endoftext|> TITLE: Rings with a given number of (prime, maximal) ideals QUESTION [13 upvotes]: Given three cardinal numbers $1\leq a\leq b\leq c$ and $\mu\neq \lambda^{\nu}$ for any cardinal $\nu$. Say that a triple $(a,b,c)$ is realizable if there is a nontrivial commutative unital ring with $a$ maximal ideals, $b$ prime ideals, and $c$ ideals. If $R$ is a commutative unital ring write $t(R)$ for its associated triple, $(a,b,c)$. I might write $a(R)$ for the number of maximal ideals of $R$, $b(R)$ for the number of prime ideals of $R$, and $c(R)$ for the number of ideals of $R$. We must have $1\leq a\leq b\leq c$, since nontrivial rings have at least one maximal ideal, maximal ideals are prime, and prime ideals are ideals. I will always consider triples like this. As Jay observed, if $c$ is finite, then a triple is realizable iff $a=b$ and $c$ can be factored as $(e_1+1)\cdots (e_a+1)$ where each $e_j$ is a positive integer. To prove this, let $R$ be a commutative ring realizing $(a,b,c)$ with $c$ finite. $R$ must be Artinian, so it is isomorphic to a product $L_1\times \cdots\times L_k$ of nontrivial local rings. Necessarily $k = a = b$. Each ideal of $R$ has the form $I_1\times \cdots\times I_k$ with $I_j\lhd L_j$ for all $j$, since $R$ is unital, so $c(R) = c(L_1)\cdots c(L_k)$. Each $c(L_j)\geq 2$, since a nontrivial ring has at least $2$ ideals, hence $c(L_j)=e_j+1$ for some positive integer $e_j$. Thus $c(R)$ factors as $(e_1+1)\cdots (e_a+1)$ where $e_j+1=c(L_j)\geq 2$ for all $j$. This shows that the conditions mentioned for triples of finite numbers must hold. Conversely, suppose the conditions for triples of finite numbers hold, namely that $a=b$ and $c = (e_1+1)\cdots (e_a+1)$ where $e_j\geq 1$ for each $j$. Then $(a,b,c)$ is realized by the ring $$ \mathbb Z_{p_1^{e_1}\cdots p_a^{e_a}}\cong \mathbb Z_{p_1^{e_1}}\times \cdots\times \mathbb Z_{p_a^{e_a}}, $$ where $p_1<\cdots<p_a$ are distinct primes.<|endoftext|> TITLE: Subordinate matrix norm of 1-norm QUESTION [7 upvotes]: Show that for the vector norm $$\|x\|_1=\sum_{i=1}^n |x_i|$$ the subordinate matrix norm is $$ \|A\|_1=\max_{1\leq j\leq n} \sum_{i=1}^n |a_{ij}| $$ I've managed to narrow it down doing the following: $$ \|A\|_1=\sup_{\|u\|_1=1} \left\{\sum_{i=1}^n |(Au)_i|\right\} = \sup_{\|u\|_1=1} \left\{\sum_{i=1}^n \left|\sum_{j=1}^n a_{ij}u_j\right|\right\} $$ Now intuitively I can see why the subordinate norm is what it is. I just can't come up with a mathematical way of proving it from here. I know $u_j=1$ when the sum in the sub. matrix norm is largest, however how do I prove that doing this makes the maximum equal to the supremum? Thank you in advance.
REPLY [3 votes]: Let us carry on $\sum_{i=1}^{n}\left|\sum_{j=1}^{n}a_{ij}u_j\right| \leq \sum_{i=1}^{n}\sum_{j=1}^{n}\left|a_{ij}u_j\right|$ Let us inverse the summation order: $\sum_{i=1}^{n}\sum_{j=1}^{n}\left|a_{ij}u_j\right| =\sum_{j=1}^{n}\sum_{i=1}^{n}\left|a_{ij}u_j\right| = \sum_{j=1}^{n}\left|u_j\right|\sum_{i=1}^{n}\left|a_{ij}\right| \leq \sum_{j=1}^{n}\left|u_j\right|\bigg(\max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big)\bigg)$ Because $||u||_1 = 1$, we have that: $ \sum_{j=1}^{n}\left|u_j\right|\bigg(\max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big)\bigg)=\max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big)$ Therefore, we have proved throughout these inequalities that $||A||_1 \leq \max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big)$. Let $j_\star$ the column index such that $\max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big) = \sum_i^n |a_{ij_\star}|$. Then define the vector $x_\star = (0,0,\dots,1,0,\dots 0) $ where $1$ is at the $j_\star$-th position. Then $||Ax_\star||_1 = \sum_i^n |a_{ij_\star}| = \max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big)$. Now remember we have the following lemma (left as an exercise): Lemma: If $\sup_x A(x) \leq c$ and there exists $x_\star$ such that $A(x_\star) = c$, then $\sup_x A(x) = c$. Hence $||A||_1 = \max_{j=1,...,n}\big(\sum_{i=1}^{n}\left|a_{ij}\right|\big)$<|endoftext|> TITLE: Conditional expectation when minimum is given. QUESTION [5 upvotes]: I try to solve this: Let $X,Y$ be two independent exponential r.v. with parameters $\mu,\lambda>0$. > Let $T:=\min(X,Y)$ Compute $\mathbb{E}(T\vert X)$ Now there is a hint to compute $\mathbb{E}(Tf(X))$ for some measurable function $f:\mathbb{R}\to \mathbb{R}$ but what confuses me, is that $\min(x,y)$ has two components and is not from $\mathbb{R}\to \mathbb{R}$. So I tried to rewrite $$\mathbb{E}(Tf(X))=\mathbb{E}(\mathbb{E}(Tf(X)\vert X))=\mathbb{E}(f(X)\mathbb{E}(T\vert X))$$ but I don't see whether this is useful or not. My second attempt was to work with the density. I computed the CDF for $T$ which is $F(t)=1-e^{-t(\lambda + \mu)}$ and the PDF $$f(t)=(\lambda + \mu)e^{-t(\lambda + \mu)}$$ But I'm not sure, if this is useful here. REPLY [3 votes]: For this question, you can also work it out directly as follows: \begin{align*} E(\min(X, Y) \mid X) &=E(X1_{Y\ge X} \mid X) + E(Y1_{X\ge Y} \mid X)\\ &=X \int_X^{\infty}\lambda e^{-\lambda y} dy + \int_0^{X}\lambda y e^{-\lambda y} dy\\ &=\frac{1}{\lambda}\left(1-e^{-\lambda X} \right). \end{align*}<|endoftext|> TITLE: Let A be skew-symmetric, and denote its singular values by $\sigma_1\geq \sigma_2\geq \dots \sigma_n\geq0$. QUESTION [5 upvotes]: Let A be skew-symmetric, and denote its singular values by $\sigma_1\geq \sigma_2\geq \dots \sigma_n\geq0$. Show that a) If n is even, then $\sigma_{2k}=\sigma_{2k-1}\geq 0, k= 1,2,\dots n/2.$ If n is odd, then the same relationship holds up to $k=(n-1)/2$ and also $\sigma_n=0$. b) The eigenvalues $\lambda_j=(-1)^ji\sigma_j$, $j=1,2,\dots,n$. I know that skew symmetric means $-A=A^T$ and I know that the eigenvalues of a skew-symmetric matrix are either purely imaginary or zero. I am not able to get this though and I have been trying all week... Thanks for your time. REPLY [5 votes]: Hint: The singular values of $A$ are the (positive) square roots of the eigenvalues of the matrix $A^TA = -A^2$. So, if $\lambda$ is an eigenvalue of $A$, then $\sqrt{-\lambda^2} = |\lambda|$ is a singular value. Because $A$ is real, its complex (i.e. 
non-real) eigenvalues must come in complex-conjugate pairs.<|endoftext|> TITLE: Without excluded middle: simple example of field with nilpotents? QUESTION [5 upvotes]: I would like to verify I'm not making a mistake. Does the statement "fields do not have non-zero nilpotents" depend on the law of excluded middle? All proofs of this fact I can conjure rely on the dichotomy $a=0\vee a\neq 0$ along with the implication (for fields) that $a\neq 0\implies a$ invertbile. Moreover, in algebraic geometry it's possible to show the generic ring of Zariski topoi is internally a field (has the implication $a\neq 0\implies a$ invertible, but not necessarily excluded middle). Is there some super simple example of a (commutative unitary) ring which is internally a field and has nilpotents? Added. What about the statement "in a field, the only nilpotent is zero"? Remark. The implication ($\neg (a=0)\implies a$ is invertible) is not of my invention; it is taken as the definition of a "ring of fractions" in section 2 of this paper by Kock. Fields are defined analogously. REPLY [2 votes]: I think the intuitionistic definition of field you assume has to be disputed: First, you have $$(\ddagger):\qquad\forall s: \neg(s=0)\Rightarrow (s\text{ invertible})\quad\Longleftrightarrow\quad \forall s: \neg(s=0)\Leftrightarrow (s\text{ invertible}).$$ There, since $\neg\neg\neg\Phi\Leftrightarrow\neg\Phi$ is intuitionistically valid for any proposition $\Phi$, you get that $$(\star)\qquad\left[\neg(s=0)\Leftrightarrow (s\text{ invertible})\right]\quad\Longrightarrow\quad \left[\neg\neg(s\text{ invertible})\Leftrightarrow (s\text{ invertible})\right].$$ It's questionable you want to impose this restriction, because e.g. it does not hold when interpreting the domain of $s$ as the structure sheaf ${\mathscr O}_X$ of a reduced scheme $X$, which internally you might want to think of as a field. The reason the R.H.S. of $(\star)$ does not hold in this case is that the truth value of $(s\text{ invertible})$ is the (scheme-theoretic) non-vanishing locus $\{x\in X\ |\ s(x)\neq 0\text{ in }k(x)\}$ of $s$, so $\neg\neg(s\text{ invertible})$ holds iff the latter is dense in $X$, which is the case iff $s$ is regular. On the other hand, the alternative implication/equivalence defining intuitionistic fields $$(\dagger):\qquad\forall s: \neg(s\text{ invertible})\Rightarrow (s=0)\quad\Longleftrightarrow\quad \forall s: \neg(s\text{ invertible})\Leftrightarrow (s=0)$$ does hold for reduced schemes. Also, with this definition you can quickly prove that $$s\text{ nilpotent}\quad\Longrightarrow\quad s=0$$ since $s\text{ nilpotent}\Rightarrow\neg(s\text{ invertible})$. Source: Ingo Blechschmidt's paper is a very interesting and pleasant read on all of this.<|endoftext|> TITLE: Proof $\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ QUESTION [6 upvotes]: $\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$ I've tried converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. 
Help and hints are both appreciated REPLY [4 votes]: Using the formula for the sum of a geometric series: $$ \sum_{k=0}^\infty x^k=\frac1{1-x}\tag{1} $$ Taking the derivative of $(1)$ and tossing the terms which are $0$: $$ \sum_{k=1}^\infty kx^{k-1}=\frac1{(1-x)^2}\tag{2} $$ Taking the derivative of $(2)$ and tossing the terms which are $0$: $$ \sum_{k=2}^\infty k(k-1)x^{k-2}=\frac2{(1-x)^3}\tag{3} $$ Reindexing the sum in $(3)$: $$ \sum_{k=0}^\infty(k+2)(k+1)x^k=\frac2{(1-x)^3}\tag{4} $$ Plug in $x=\frac14$: $$ \begin{align} \sum_{k=0}^\infty\frac{(k+2)(k+1)}{4^k} &=\frac2{\left(\frac34\right)^3}\\[6pt] &=\frac{128}{27}\tag{5} \end{align} $$<|endoftext|> TITLE: Has anyone ever actually seen this Daniel Biss paper? QUESTION [43 upvotes]: A student asked me about a paper by Daniel Biss (MIT Ph.D. and Illinois state senator) proving that "circles are really just bloated triangles." The only published source I could find was the young adult novel An Abundance of Katherines by John Green, which includes the following sentence: Daniel [Biss] is world famous in the math world, partly because of a paper he published a few years ago that apparently proves that circles are basically fat, bloated triangles. This is probably just Green's attempt to replicate something Biss told him about topology (an example of homotopy, perhaps). But the statement seems to have intrigued students and non-mathematicians online. So I'm curious: has anyone seen such a paper? Is this a simplified interpretation of a real result (maybe in The homotopy type of the matroid grassmannian?). REPLY [62 votes]: The book with those words was published in 2006, before the retraction of Biss' major results on combinatorial differential geometry. In the cited paper, Biss had published an amazing breakthrough along the lines that Green understood, showing that certain continuous geometric objects were equivalent to discrete combinatorial objects. That is similar to, but much more general than, the relation between a geometric circle and the triangle (in the sense of graph theory, 3 vertices connected by 3 edges, oriented to go around the triangle, and not a Euclidean geometry triangle or a physical triangular plate). I quote and annotate an excerpt from the introduction to Biss' article, to give the idea of how incredible it must have sounded at the time. Had it been correct then "world famous in the math world" would have been a good description. [p.931]...the theory of matroid bundles is the same as the theory of vector bundles [!!]. This gives substantial evidence that a CD [Combinatorial Differential] manifold has the capacity to model many properties of smooth manifolds. To make this connection more precise, we give... a definition of morphisms that makes CD manifolds into a category admitting a functor from the category of smoothly triangulated manifolds. Furthermore, these morphisms have appropriate naturality properties for matroid bundles and hence [combinatorially defined] characteristic classes [!!!], so many maneuvers in differential topology carry over verbatim to the CD setting. This represents the first demonstration that the CD category succeeds in capturing structures contained in the smooth but absent in the topological and PL categories [!!!!!!!], and suggests that it might be possible to develop a purely combinatorial approach to smooth manifold topology [!!!]. The boldface and bracketed material are my annotations, with ! marks as a subjective rating of how amazing each statement would have been if true. 
The first sentence in boldface seems to include the combinatorial construction of Pontrjagin classes, a major research problem. That would have been a big achievement, but the paper claims to do it as part of something even bigger, as the next two sentences elaborate. The last two items in boldface, doing smooth (beyond the topological and piecewise-linear categories) topology on discrete combinatorial objects, was considered science fiction. It was not expected then or now that such a thing is possible and to do it in 20 pages must have struck many people as some kind of dream. Nevertheless, the error was apparently subtle enough to pass the reviewers, though experts were stating their skepticism about the paper soon after it was published. The official retraction happened several years later and the author of a book published in 2006 would not necessarily have known of the problems with the paper. People were challenging the correctness of the article by 2005, and the "the problem was already acknowledged and discussed in private correspondence between experts in April 2006" but during the time the book (or his contribution to it) was written, Biss may have believed that his results were correct or could be fixed. Biss became a politician in Illinois, and his election opponents mentioned the situation with his papers during one of the election campaigns.<|endoftext|> TITLE: Are there infinitely many primes $p$ such that $p-2$ and $p+2$ are composite? QUESTION [9 upvotes]: Are there infinitely many primes $p$ such that $p-2$ and $p+2$ are composite? If $p\neq3$ then either $p+2$ or $p-2$ is a multiple of three, but this does not settle the matter for both. We know that there are infinitely many primes. But it is not known whether there are infinitely many twin primes, so something extra is needed here. REPLY [7 votes]: Almost all primes have that property, as a result of the scarcity of twin primes. Infiniteness of non-twin primes.<|endoftext|> TITLE: Laplace transform for dummies QUESTION [15 upvotes]: The question Fourier transform for dummies has an amazing answer: https://math.stackexchange.com/a/72479/115703 Could the Laplace transform be explained in as illuminating a way? Why should the Laplace transform work? What's some of the history behind it? REPLY [5 votes]: Vague one liner: Laplace Transform is the weight of the forced response of a system for an exponential input $e^{st}$ Answer: Forget the Laplace Transform for some time. Let’s solve an Ordinary Differential Equation, of the form, $$y'(t)+ay(t)=x(t)$$ Let $D:\dfrac{d}{dx}$ and $\dfrac{1}{D}:\int_{-\infty}^tdt$, where $D$ is called the Heaviside-Operator, the former is true, only when $f(-\infty) =0$. Now the ODE can be written as, $$(D+a)y=x \tag*{...(1)}$$ $$ \implies y=\dfrac{1}{D+a}x$$ So finding the operator $\dfrac{1}{D+a}$ gives us the solution. To do so, consider the following, $$D[e^{at}\,y(t)]=ae^{at}y(t)+e^{at}D[y(t)]$$ $$D[e^{at}\,y(t)]=e^{at}(D+a)[y(t)]\tag*{...(2)}$$ Now, looking at $(1)$&$(2)$ it's evident that, $$D[y\,e^{at}]=e^{at}\,x$$ $$y=e^{-at}\dfrac{1}{D}[e^{at}\,x(t)]$$ $$y=e^{-at}\int_{-\infty}^t e^{at}\,x(t)dt$$ $$\implies\dfrac{1}{D+a}[x(t)]=e^{-at}\int_{-\infty}^t e^{at}\,x(t)dt$$ For higher order ODEs, for example a second order ODE, $$y''(t)+Ay'(t)+By(t)=x(t)$$ $$(D^2+AD+B)[y(t)]=x(t)$$ $$y(t)=\dfrac{1}{D^2+AD+B}[x(t)]$$ $$y(t)=\dfrac{1}{(D+a)(D+b)}[x(t)]$$ which can be operated one after another, or expanded with partial fractions and then considered as two first order systems(superposition). 
In general, $$y(t)=H(D)x(t)\tag*{...(3)}$$ We can arrive at the solution another way: convolution. I'm not going to explain convolution in detail, but only discuss the results. $$y(t)=h(t)*x(t)=\int_{-\infty}^{\infty}h(\tau)x(t-\tau)d\tau$$ where $h(t)$ is the impulse response (the response to a Dirac delta input) of the system. Now suppose we express the input and output functions $x(t)$ and $y(t)$ in terms of $\delta(t)$. $$x(t)=X(D)\delta(t)$$ $$y(t)=Y(D)\delta(t)$$ $$(3)\implies Y(D)\delta(t)=H(D)X(D)\delta(t)$$ $$\bbox[5px,border:2px solid red]{Y(D)=H(D)X(D)}$$ So, a convolution in the time domain, $$y(t)=h(t)*x(t)$$ is converted to multiplication in the "operator domain", $$Y(D)=H(D)X(D)$$ So, now when we are given an ODE with an input $x(t)$, we find the corresponding operator $X(D)$ that gives us $x(t)$ when operated on $\delta(t)$, find $Y(D)$ by multiplying $X(D)$ and the system operator $H(D)$, and hence find $y(t)$ from $Y(D)$. It is convenient to form a catalog of $X(D)$ for certain known inputs. Here $u(t)$ is the unit step function ($u(t) = 1 \text{ for } t>0 \text{ and } 0 \text{ for } t<0$). $$\begin{array} {|r|r|}\hline \mathbf{x(t)} & \mathbf{X(D)} \\ \hline \delta(t) & 1 \\ \hline u(t) & \dfrac{1}{D} \\ \hline tu(t) & \dfrac{1}{D^2} \\ \hline \dfrac{t^m}{m!}u(t) & \dfrac{1}{D^{m+1}} \\ \hline e^{-rt} & \dfrac{1}{D+r} \\ \hline \cos(\omega_0t)u(t) & \dfrac{D}{D^2+\omega_0^2} \\ \hline \sin(\omega_0t)u(t) & \dfrac{\omega_0}{D^2+\omega_0^2} \\ \hline e^{-\sigma t}\cos(\omega_0t)u(t) & \dfrac{D+\sigma}{(D+\sigma)^2+\omega_0^2} \\ \hline e^{-\sigma t} \sin(\omega_0t)u(t) & \dfrac{\omega_0}{(D+\sigma)^2+\omega_0^2} \\ \hline \end{array} $$ For example, consider this second order ODE $$y''(t)+3y'(t)+2y=x(t)$$ $$\text{for } x(t)=4\cos(t)\,u(t)$$ $$(D^2+3D+2)[y(t)]=x(t)$$ $$y(t)=\dfrac{1}{D^2+3D+2}x(t)$$ $$\implies H(D)=\dfrac{1}{D^2+3D+2}$$ $$X(D)=\dfrac{4D}{D^2+1}\tag*{...refer the catalog}$$ $$\implies Y(D)=\left(\dfrac{1}{D^2+3D+2}\right) \left(\dfrac{4D}{D^2+1}\right)$$ $$Y(D)=\dfrac{4D}{(D+1)(D+2)(D^2+1)}$$ $$Y(D)=\dfrac{-2}{D+1}+\dfrac{8/5}{D+2}+\frac{2}{5}\dfrac{D}{D^2+1}+\frac{2}{5}\dfrac{3}{D^2+1}$$ $$\implies y(t)= -2e^{-t}u(t) + \frac{8}{5}e^{-2t}u(t) + \frac{2}{5} \cos(t)u(t) + \frac{6}{5} \sin(t)u(t)\tag*{...refer catalog}$$ Now suppose we provide an input $x(t)=e^{st}, \, s\in \mathbb{C} , s=\sigma+i\omega$. Finding the response to this input is highly beneficial, because this complex input covers many inputs of our catalog. So, if $H(D)$ is the system operator $$y(t)=H(D)x(t)$$ $$X(D)=\dfrac{1}{D-s}$$ $$Y(D)=\dfrac{H(D)}{D-s}$$ $^\dagger$Provided that $s$ is not a pole of $H(D)$ $$\dfrac{H(D)}{D-s}= \dfrac{K}{D-s} + Y_n(D)$$ $$(D-s)\dfrac{H(D)}{D-s}\biggr\rvert_{D=s}= K$$ $$\implies K=H(s)$$ $$\therefore Y(D)=\dfrac{H(s)}{D-s}+Y_n(D)$$ $$\implies \bbox[yellow,5px,border:2px solid red]{y(t)=\underbrace{H(s)e^{st}}_{\text{forced response}} + \underbrace{y_n(t)}_{\text{natural response}}} \tag*{...(4)}$$ Now, let's solve the ODE for the same input, with convolution: $$y(t) = h(t)*x(t)$$ $$y(t)=\int_{-\infty}^{\infty}h(\tau)e^{s(t-\tau)}d\tau$$ $$y(t)=\left(\int_{-\infty}^{\infty}h(\tau)e^{-s\tau}d\tau\right)\,e^{st}$$ $$y(t)=H(s)e^{st}$$ The response of the system is also of the form $e^{st}$, weighted by $H(s)$. This $H(s)$ is also called the Laplace transform $\mathcal{L}[h(t)]$. So the Laplace transform is a special case of this "operator method".
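As a quick consistency check of the worked second-order example above (a sketch of mine, assuming sympy is installed), one can confirm that the solution found there satisfies $y''+3y'+2y=4\cos t$ for $t>0$, where the step factors equal $1$:

```python
import sympy as sp

t = sp.symbols('t')
# Particular solution obtained above for y'' + 3y' + 2y = 4*cos(t), valid for t > 0
# (where the unit step factors equal 1).
y = (-2*sp.exp(-t) + sp.Rational(8, 5)*sp.exp(-2*t)
     + sp.Rational(2, 5)*sp.cos(t) + sp.Rational(6, 5)*sp.sin(t))

residual = sp.diff(y, t, 2) + 3*sp.diff(y, t) + 2*y - 4*sp.cos(t)
print(sp.simplify(residual))   # expected output: 0
```

With that reassurance, the transform itself is just the integral that produced $H(s)$ above: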
$$\mathcal{L[h(t)]}=\int_{-\infty}^{\infty}h(t)e^{-st} dt$$ If our system is causal, meaning $h(t)=0$, for $t<0$ $$\implies \mathcal{L[h(t)]}=\int_{0^-}^{\infty}h(t)e^{-st} dt $$ $^\dagger$: This condition, implies that the forced response of the system cannot be calculated with the Laplace transform at the poles. This actually led me to a question; What does taking Laplace transform on both sides mean? Reference: Network Analysis and synthesis by Ali Hajimiri<|endoftext|> TITLE: Generating the equation for probability QUESTION [7 upvotes]: I have one red die and a green die. I want to find the probability of rolling at least one 4. Now, because they are 2 different dice, there are 36 possibilities. So the first possibility is a 4 from the red and some other number for the green. Second possibility is a 4 from the green and some other number for the red. Both roll a 4. So I can clearly see that the answer is 3 ways. However, if I want to arrive at the answer using combinatoric equations, how would I do so? REPLY [7 votes]: Actually, there are more than three ways of rolling a four. There are $1 \cdot6=6$ ways to roll a $4$ for the red and another number for the green, and $6\cdot1=6$ ways of rolling a $4$ for the green and another number for the red. Now, you subtract $1$ from the total, because you counted the combination $(6,6)$ twice. $12-1=11$, and there are $6\cdot6=36$ total possibilities, so the answer is $$\dfrac {11}{36}$$<|endoftext|> TITLE: Möbius strip in non-orientable surface QUESTION [8 upvotes]: So I am trying to go over the proof of classification of surfaces and along the way, I would like to prove most result that are commonly used. So far, we can suppose the existence of a triangulation. Let establish the different definition that I would like to use here. Definition: A compact surface $X$ without boundary is said to be orientable if and only if $H_2(X, \mathbb{Z}) \neq 0$. If $X$ is not orientable, then it is said to be unorientable. Let $X$ be a surface and let $\mbox{Mob}$ be a Möbius strip, which is the quotient space $$\mbox{Mob} := (\mathbb{R}/2\mathbb{Z} \times [-1, 1] )/\langle \tau \rangle.$$ where $\tau :\mathbb{R}/2\mathbb{Z} \times [-1, 1] \to \mathbb{R}/2\mathbb{Z} \times [-1, 1]$ is given by $t(s,t)=(s+1,-t)$. The core curve of $\mbox{Mob}$ is the simple close curve given by $\{(x,0)| x \in \mathbb{R}/2\mathbb{Z}\} $. We can also assume the existence of a PL neighborhood for our curve. Definition: A curve $\gamma$ is one-sided if there exist an embedding $\varphi : \mbox{Mob} \to X$ such that the $\gamma$ is the core curve of $\varphi (\mbox{Mob})$. Using these definitions, I would like to prove the following proposition: Proposition: A surface $X$ is orientable if and only if it does not contain any one-sided curve. Here's a proof that if $X$ is orientable, then it does not contain any one-sided curve. However, I did not find a satisfactory proof of the converse. I would ideally like to have a net proof of this result using these definitions and most importantly without using the classification of surfaces (in particular the concept of genus). $(\Rightarrow)$ Suppose $e: \mathbb{R}/\mathbb{Z} \to X$ is an embedding, and let $f: \mbox{Mob} \to X$ be such that $f \circ c= e$. 
Define $M= f(\mbox{Mob})$ and let $V= X-f (\mbox{Mob}_{\frac{1}{2}})$ where $$ \mbox{Mob}_{\frac{1}{2}} = (\mathbb{R}/2\mathbb{Z} \times [-\frac{1}{2}, \frac{1}{2} ] )/\langle \tau \rangle.$$ It is straightforward to construct a deformation retraction from $M$ to $e(\mathbb{R}/\mathbb{Z})$. In particular, $H_i(M) \cong H_i(\mathbb{R}/\mathbb{Z})$. From the inclusions $M \cap V \to X$, $\iota_M: M \cap V \to M$, and $\iota_V: M \cap V \to V$, we obtain the Mayer-Vietoris long exact sequence $$ \cdots \to H_2(M) \oplus H_2(V) \to H_2(X)~ \stackrel{\delta}{\to} H_1(M \cap V) \stackrel{\iota_M \oplus \iota_V}{\longrightarrow} H_1(M) \oplus H_1(V) \to \cdots.$$ The space $V$ is an open 2-dimensional manifold and hence $H_2(V) = \{0\}$. In addition, $H_2(M) \cong H_2(\mathbb{R}/\mathbb{Z})=\{0\}$, and so the map $\delta$ is an injection. On the other hand, $H_1(M \cap V)$ is generated by the 1-cycle $\partial M$. The map $\iota_M$ is induced by the inclusion into $M$, and the class of $\partial M$ in $H_1(M)$ is nonzero. (Indeed, in $H_1(M)$, we have $[\partial M] = 2 \cdot [c(\mathbb{R}/\mathbb{Z})]$ and $[c(\mathbb{R}/\mathbb{Z})]$ generates $H_1(M)$.) Therefore $\iota_M$---and hence $\iota=\iota_M \oplus \iota_V$---is injective. Since the sequence is exact, the image of $\delta$ equals the kernel of $\iota$, and therefore $H^2(X) = \{0\}$. REPLY [16 votes]: We need a more local definition of orientability to answer your question. One way to do this is to say that for any point $p$ on an $n$-manifold $M$, a local orientation at $p$ is choice of a generator $g_p$ of the relative homology group $H_n(M, M \setminus p)$ (which is isomorphic to $\Bbb Z$ by excision). A global orientation on $M$ is then choice of an orientation at $x$ for every $x\in M$ so that the choice is "consistent", in the sense that for any point $p \in M$ there is a chart around $p$ containing a ball $B$ of finite radius such that all the orientations $g_x$ for $x \in B$ are images of one single generator $g_B$ of $H_n(M, M \setminus B)$ by the isomorphism $H_n(M, M \setminus B) \to H_n(M, M \setminus x)$ induced from the inclusion $(M, M \setminus B) \hookrightarrow (M, M \setminus x)$. There's a curious construction you could do using this machinery. Namely, consider the set $\widetilde{M}$ of all local orientations at all the points of $M$. There's a projection map $f: \widetilde{M} \to M$ that sends each local orientation to the point it orients, i.e., $f(g_p) = p$. Clearly every fiber of $f$ has cardinality two, because there are exactly two local orientations possible at each point on the manifold ($\pm 1$ are the only possible generators of $\Bbb Z$). Give $\widetilde{M}$ the topology generated by the basis of sets of the form $\mathcal{U}(g_B)$ consisting of orientations $g_x$ which are images of the generator $g_B$ of $H_n(M, M \setminus B)$ by the map $H_n(M, M \setminus B) \to H_n(M, M \setminus x)$ for balls $B$ of finite radius on $M$. This makes $f$ into a two fold covering map. $\widetilde{M}$ is known as the "orientation double cover" Notice that local orientations of $M$ at a point $x$ are exactly same as a fiber $f^{-1}(x)$ of this covering projection. A global orientation is a section/trivialization of the orientation double cover. 
There's a natural morphism $H_n(M) \to \Gamma$ where $\Gamma$ is the $\Bbb Z$-module generated by the global sections of the orientation double cover; this is given by sending each homology class $\alpha \in H_n(M)$ to the "section" $s(x) = \alpha_x$ where $\alpha_x$ is image of $\alpha$ by the homomorphism $H_n(M) \to H_n(M, M \setminus x)$. $s$ is not a section of the orientation cover because $\alpha_x$ is not necessarily a generator of $H_n(M, M \setminus x)$, but it is a multiple of a section of the orientation cover. This is in fact an isomorphism (Hatcher Chapter 3.3., Lemma 3.27). If $M$ is not orientable, $\widetilde{M}$ does not admit a global section ($\Gamma \cong 0$) which immediately implies $H_n(M) = 0$ (and vice versa). If it is orientable, $\widetilde{M}$ admits a global section which generates $\Gamma$. The morphism $\Gamma \to H_n(M, M \setminus x) \cong \Bbb Z$ for any $x \in M$, sending a section to it's value on the fiber $f^{-1}(x)$ is an isomorphism irrespective of the chosen $x$, as $M$ is connected. Hence, orientability of $M$ implies $H_n(M) \cong \Bbb Z$. This explains why your definition is equivalent to this one. Suppose $M$ is a closed surface not containing embedded Moebius strips. Pick a point $p$ on $M$ and define an orientation on it by choosing a generator $g_p$ of $H_2(M,M \setminus p)$. For any other point $q$ on $M$, choose a path $\gamma$ joining $p$ and $q$, take a tubular neighborhood $U_\gamma$ of the path and consider the diagram $$H_2(M, M \setminus p) \leftarrow H_2(M, M \setminus U_\gamma) \rightarrow H_2(M, M \setminus q)$$ where both of the arrows are induced from the inclusion maps $(M, M \setminus U_\gamma) \hookrightarrow (M, M \setminus p)$ and $(M, M \setminus U_\gamma) \hookrightarrow (M, M \setminus q)$. By a long exact sequence argument, you can argue these are both isomorphisms. So push the generator of $H_2(M, M \setminus p)$ to $H_2(M, M \setminus q)$ using the sequence of arrows in this diagram. This gives an orientation $g_q$ at $q$. To prove that this orientation is canonical we must verify that it does not depend on the path $\gamma$ chosen from $p$ to $q$. Suppose $\gamma'$ is another path from $p$ to $q$ that gives the orientation $-g_q$ at $q$ in the above process. This means the loop $\gamma' \cdot \gamma^{-1}$ based at $q$ is "orientation reversing", i.e., transports orientation from $g_q$ to $-g_q$. This means $\gamma' \cdot \gamma^{-1}$ lifts to a path on the orientation double cover $\widetilde{M}$ joining the two preimages in the fiber $f^{-1}(p)$. Taking $V = U_\gamma \cup U_{\gamma'}$ to be a neighborhood of this loop (where $U_{\gamma'}$ is a tubular neighborhood of $\gamma'$, similarly), we can say that this means $f$ restricts to a connected orientation double cover $f : f^{-1}(V) \to V$ of $V$, i.e., $V$ is non-orientable. Surfaces are smoothable, so giving $M$ a smooth structure, we can invoke tubular neighborhood theorem to say that $V$ is topologically an interval bundle over a circle. There are only two such objects - $S^1 \times (0, 1)$ and the (open) Moebius strip, only the latter of which admits a connected orientation double cover. This means $V$ is homeomorphic to a Moebius strip, contradicting hypothesis on $M$. Thus $g_q$ does not depend on the chosen path $\gamma$, and we could use the technique to canonically push the chosen orientation $g_p$ to an orientation $g_x$ on every point $x \in M$ by a path to have a consistent global orientation on $M$. 
Thus, $M$ is orientable.<|endoftext|> TITLE: Is $x^2+x+1$ ever a perfect power? QUESTION [5 upvotes]: Using completing the square and a factoring method, I could show that the equation $x^2+x+1=y^n$, where $x,y$ are positive odd integers and $n$ is a positive even integer, does not have a solution, but I could not show whether, for positive odd $x,y$ and odd $n>1$, the equation does or does not have a solution. Thank you for your contribution. REPLY [6 votes]: Your equation can be rewritten as $$\frac{x^{3}-1}{x-1} = y^{N}.$$ The Diophantine equation $$ \frac{x^{n} - 1}{x-1} = y^{q}, \quad x > 1, \quad y>1, \quad n>2, \quad q \geq 2 \tag1$$ was the subject matter of a couple of papers of T. Nagell from the $1920$s. Some twenty-odd years later, W. Ljunggren clarified some points in Nagell's arguments and completed the proof of the following result. Theorem. Apart from the solutions $$\frac{3^{5}-1}{3-1}=11^{2}, \quad \frac{7^{4}-1}{7-1}=20^{2}, \quad \frac{18^{3}-1}{18-1} = 7^{3},$$ the equation in $(1)$ has no other solution $(x, y, n, q)$ if any one of the following conditions is satisfied: (1) $q = 2$; (2) $3$ divides $n$; (3) $4$ divides $n$; (4) $q = 3$ and $n$ is not congruent to $5$ modulo $6$. Clearly enough, this theorem implies that there are only two solutions to the equation you are considering: $(x=1, y=3, N=1)$ and $(x=18, y=7, N=3)$.<|endoftext|> TITLE: What was the motivation behind Poincaré saying that Topology was a disease? QUESTION [6 upvotes]: Poincaré once said that: "Point set topology is a disease from which the human race will soon recover." Today we know that he was clearly wrong. What was on his mind when he said that? REPLY [6 votes]: The purported Poincaré quotation is about set theory [Mengenlehre]. See: Jeremy Gray, Did Poincaré say "set theory is a disease"? (1991). According to Gray, the source is James Pierpont's quotation from a footnote in Otto Hölder's Die mathematische Methode, 1924, page 556. Gray translates it as: Poincaré at the International Mathematical Congress of 1908 in Rome said that one would later look back on set theory [Mengenlehre] as a disease one has overcome. Regarding Poincaré's point of view against Cantorism, see: Henri Poincaré, Science and Method (original ed., 1908): Ch. I: II. The Future of Mathematics; Ch. II: III. Mathematics and Logic.<|endoftext|> TITLE: Evaluation of $\lim_{n\rightarrow \infty}\binom{2n}{n}^{\frac{1}{n}}$ QUESTION [5 upvotes]: Evaluation of $$\lim_{n\rightarrow \infty}\binom{2n}{n}^{\frac{1}{n}}$$ without using limit as a sum and the Stirling approximation. $\bf{My\; Try:}$ Using $$\binom{2n}{n} = \sum^{n}_{r=0}\binom{n}{r}^2$$ Using the $\bf{Cauchy\; Schwarz}$ Inequality, $$\left[\sum^{n}_{r=0}\binom{n}{r}^2\right]\cdot \left[\sum^{n}_{r=0}1\right]\geq \left(\sum^{n}_{r=0}\binom{n}{r}\right)^2 = 2^{2n} = 4^n$$ So $$\frac{4^n}{n+1}<\sum^{n}_{r=0}\binom{n}{r}^2=\binom{2n}{n}$$ But I did not understand how I can calculate an upper bound so that I can apply the squeeze theorem.
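For reference, one simple upper bound (not part of the original post) pairs with the lower bound above and lets the squeeze theorem finish the computation: since $\binom{2n}{n}$ is one term of the binomial expansion of $(1+1)^{2n}$, $$\binom{2n}{n} \le \sum_{k=0}^{2n}\binom{2n}{k} = 2^{2n} = 4^n, \qquad\text{hence}\qquad \left(\frac{4^n}{n+1}\right)^{1/n} < \binom{2n}{n}^{1/n} \le 4 .$$ Because $(n+1)^{1/n} \to 1$, both bounds tend to $4$, so the limit equals $4$.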
REPLY [3 votes]: $\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ By "$\textsf{using limits}$", the Stoltz-Ces$\grave{a}$ro Theorem yields: \begin{align} \lim_{n \to \infty}{1 \over n}\,\ln\pars{{2n \choose n}} & = \lim_{n \to \infty}{\ln\pars{{2\bracks{n + 1} \choose n + 1}} - \ln\pars{{2n \choose n }} \over \pars{n + 1} - n} = \lim_{n \to \infty}\ln\pars{{2n + 2 \choose n+1} \over {2n \choose n}} \\[5mm] & = \lim_{n \to \infty}\ln\pars{\bracks{2n + 2}\bracks{2n + 1} \over \bracks{n + 1}\bracks{n + 1}} = \ln\pars{4} \end{align} $$ \color{#f00}{\lim_{n \to \infty}{2n \choose n}^{1/n}} = \color{#f00}{4} $$<|endoftext|> TITLE: Why is the exponential function not in the subspace of all polynomials? QUESTION [16 upvotes]: The exponential function can be written as $$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots.$$ The subspace of all polynomials is $$\text{span}\{1, x,x^2, \dots \}$$ Sure $e^x$ is in this set? REPLY [7 votes]: $\mathrm{span}(A)$ is the set of finite linear combinations of terms from $A$. Infinite sums require notions of limits and bring up issues of convergence radii (there are plenty of infinite polynomial that converge only at a single point).<|endoftext|> TITLE: The $C^*$-algebra generated by two projections QUESTION [5 upvotes]: The following exercise is taken from the book "An introduction to K-theory for $C^*$ algebras, exercise 13.4. Let $A$ be the sub-$C^*$ algebra of $C([0,1],M_2(\Bbb{C}))$ consisting of all functions $f$ such that $$f(0)=\begin{pmatrix} d & 0 \\0 & 0 \end{pmatrix} \ \ \text{ and } \ \ f(1)=\begin{pmatrix} d_1 & 0 \\0 & d_2 \end{pmatrix}$$ for some $d,d_1,d_2\in \Bbb{C}$, and consider the elements $p,q\in A$ given by $$p(t)=\begin{pmatrix} 1 & 0 \\0 & 0 \end{pmatrix},\ \ \ \ q(t)=\begin{pmatrix} 1-t & \sqrt{t(1-t)} \\ \sqrt{t(1-t)} & t \end{pmatrix}.$$ $(i)$ I have shown that $p,q$ are projections in $A$. $(ii)$ Show that $A=C^*(p,q)$ by the following hints: (a). Use $$pq(t)=\begin{pmatrix} 1-t & \sqrt{t(1-t)} \\ 0 & 0 \end{pmatrix},\ \ \ (p-q)^2(t)=\begin{pmatrix} t & 0\\ 0 & t \end{pmatrix}.$$ I've shown that for any $d,d_1,d_2\in \Bbb{C}$ there exists $f\in C^*(p,q)$ with $f(0)=\text{diag}(d,0)$ and $f(1)=\text{diag}(d_1,d_2)$. If it is relevant, my $f$ is given by $f=d_1p(p-q)^2+d_2(1-p)(p-q)^2+dpqp$. Here is my problematic step: (c). If $\{e_{i,j}\}_{i,j=1,2}$ denote the standard matrix units for $M_2(\Bbb{C})$, and $0< \delta < 1/2$, then there exists $h_{i,j}\in C^*(p,q)$ with $\|h_{i,j}\|\leq 1$ and $h_{i,j}(t)=e_{i,j}$ for $t\in [\delta, 1-\delta]$. My idea: Assume we can get $g(t)=\begin{pmatrix} t(1-t) & t(1-t) \\ t(1-t) & t(1-t) \end{pmatrix} \in C^*(p,q)$, it should be possible and not very difficult, I think... Then if I take $\min \{t(1-t), \delta(1-\delta)\}$ in each coordinate, after some normalizing,I get the desired matrix. But I couldn't find combinations that give me such a matrix (I've tried also the famous formula for minimum between two functions). 
Moreover, after these steps I should conclude $(ii)$, and I'm not sure how to do it... any hints would be appreciated. Thank you for your time. REPLY [4 votes]: Note that you cannot really use $1-p$ for your $f$, since the identity is not in your subalgebra. But you can still write $(p-q)^2-p(p-q)^2$. The idea of the hint is that by choosing $d_1$ and $d_2$ in your $f$ you can get $$ t\,e_{11}=\begin{bmatrix}t&0\\0&0\end{bmatrix},\ \ \ t\,e_{22}=\begin{bmatrix}0&0\\0&t\end{bmatrix} $$ in $C^*(p,q)$. If you now calculate $p(p-q-te_{11}+te_{22})$ you get $$ \begin{bmatrix}0&-\sqrt{t(1-t)}\\0&0\end{bmatrix}\in C^*(p,q). $$ For $t\in (\delta,1-\delta)$, we can guarantee that $\sqrt{t(1-t)}>\sqrt{\delta-\delta^2}>0$. Now, let $k\in C[0,1]$ be $$k(t)=\begin{cases}\dfrac t{\delta^2},&\ t\in[0,\delta]\\ \ \\ \dfrac1t,&\ t\in(\delta,1-\delta)\\ \ \\ \dfrac{1-t}{\delta(1-\delta)},&\ t\in [1-\delta,1 ]\end{cases}$$ and put $g(t)=\sqrt{k(t)\,k(1-t)}$. We get $$ g(te_{11})=\begin{bmatrix}g(t)&0\\0&0\end{bmatrix}\in C^*(p,q). $$ As $g(t)\,\sqrt{t(1-t)}=1$ on $(\delta,1-\delta)$, $$ h_{12}(t)=g(te_{11})\begin{bmatrix}0&\sqrt{t(1-t)}\\0&0\end{bmatrix}\in C^*(p,q) $$ is a function such that $h_{12}(t)=e_{12}$ for $t\in(\delta,1-\delta)$, and $\|h_{12}\|\leq1$. With a similar idea one can obtain $h_{11}$ and $h_{22}$, by doing functional calculus on $te_{11}$ and $te_{22}$ (now with $g(0)=g(1)=1$), and $h_{21}=h_{12}^*$. So, if you now take $f\in A$, you have $$f(t)=\begin{bmatrix}f_{11}(t)&f_{12}(t)\\ f_{21}(t)&f_{22}(t)\end{bmatrix},$$ with $f_{12}(0)=f_{12}(1)=0$, $f_{21}(0)=f_{21}(1)=0$, $f_{22}(0)=0$. You have $$ \begin{bmatrix}f_{11}(t)&0\\0&0\end{bmatrix}=f_{11}(te_{11})\in C^*(p,q). $$ Similarly, $f_{22}(t)\,e_{22}=f_{22}(te_{22})\in C^*(p,q)$. We can also get $f_{12}(t)e_{11}\in C^*(p,q)$. If we now write $g_n$ for the $g$ constructed above in the case $\delta=1/n$, we get that $f_{12}=\lim_ng_nf_{12}$ (it is essential here that $f_{12}(0)=f_{12}(1)=0$ for the limit to be uniform). Then $$ \begin{bmatrix}0&f_{12}(t)\\ 0&0\end{bmatrix}=\lim_n g_n(te_{11})f_{12}(te_{11})\,\begin{bmatrix}0&\sqrt{t(1-t)}\\0&0\end{bmatrix}\in C^*(p,q). $$<|endoftext|> TITLE: Canonical Topology QUESTION [6 upvotes]: Let $V$ be an n-dimensional vector space over $\mathbb{R}$. Of course $V \cong \mathbb{R^n}$ since we can define an isomorphism $$f:V\longrightarrow \mathbb{R^n}$$ by mapping basis elements to basis elements. But such isomorphism requires a choice of basis. I can define a topology $\tau = \{f^{-1}(V); V \ $is open in$\ \mathbb{R^n}\}$ But is $\tau$ a canonical topology? An isomorphism is canonical if it is defined without having to choose a basis. But what does canonical mean in terms topology? In my case, I had to "choose" a basis to define an isomorphism, then use that isomorphism to define open sets. So I'm not sure if it works... REPLY [10 votes]: It is a canonical topology in the following two senses: A priori, the topology $\tau$ depends on the choice of an isomorphism $f \colon V \rightarrow \mathbb{R}^n$ so let's denote $\tau$ by $\tau_f$. However, if you pick a different linear isomorphism $g \colon V \rightarrow \mathbb{R}^n$ then you have $\tau_f = \tau_g$ so in fact, the topology doesn't depend on the choice of an isomorphism. To see that, note that we can find a linear isomorphism $T \colon \mathbb{R}^n \rightarrow \mathbb{R}^n$ such that $T \circ f = g$. 
Since $T$ is a linear isomorphism, it is, in particular, a homeomorphism of $\mathbb{R}^n$ (with the standard topology) and so a set $U$ is open in $\mathbb{R}^n$ iff $T^{-1}(U)$ is open in $\mathbb{R}^n$. But since $g^{-1}(U) = f^{-1}(T^{-1}(U))$, the resulting topologies are the same. You can characterize $\tau$ without any choice by saying that $\tau$ is the weakest topology on $V$ with respect to which, any linear map $T \colon V \rightarrow \mathbb{R}^n$ (or even any linear map $T \colon V \rightarrow \mathbb{R}^m$) is continuous (where the right hand side gets the usual topology). Thus $$ \tau = \{ T^{-1}(U) \, | U \subseteq \mathbb{R}^n \textrm{ is open}, T \colon V \rightarrow \mathbb{R}^n \textrm{ is a linear isomorphism}\} \\ = \{ T^{-1}(U) \, | \, U \subseteq \mathbb{R}^n \textrm{ is open}, T \colon V \rightarrow \mathbb{R}^n \textrm{ is linear} \} \\ = \{ T^{-1}(U) \, | \, U \subseteq \mathbb{R}^m \textrm{ is open}, T \colon V \rightarrow \mathbb{R}^m \textrm{ is linear} \} $$ and you eliminate any "choice" by taking "all the possible choices". To see that the three definitions for $\tau$ coincide, let $T \colon V \rightarrow \mathbb{R}^m$ be a linear map and let $U \subseteq \mathbb{R}^m$ be an open set. We need to show that we can find an open $\tilde{U} \subseteq \mathbb{R}^n$ and a linear isomorphism $\phi \colon V \rightarrow \mathbb{R}^n$ such that $T^{-1}(U) = \phi^{-1}(\tilde{U})$. Choose some linear isomorphism $\phi \colon V \rightarrow \mathbb{R}^n$ and write $T = (T \circ \phi^{-1}) \circ \phi$ where $T \circ \phi^{-1} \colon \mathbb{R}^n \rightarrow \mathbb{R}^m$ is linear, hence continuous. Then $$ T^{-1}(U) = \phi^{-1} \left( \underbrace{\left( T \circ \phi^{-1} \right)^{-1} (U)}_{\tilde{U}} \right). $$<|endoftext|> TITLE: Show that the Euclidean Algorithm terminates in less than seven times the number of digits in $b$. QUESTION [5 upvotes]: Let $b=r_o, r_1, r_2,\dots$ be the successive remainders in the Euclidean algorithm applied to $a$ and $b$. Show that after every two steps, the remainder is reduced by at least one half. In other words, verify that $$r_{i+2} \lt\frac12 r_i$$ for every $i=0,1,2,\dots$. Conclude that the Euclidean algorithm terminates in at most $2\log_2 b$ steps, In particular, show that the number of steps is at most seven times the number of digits in $b$. [Hint. What is the value of $\log_2 10$ ?] Several times I have seen that logarithm is used to find something relating to number of digits of a number, but really can't find out the insight behind this. So, can you people please help me acquiring that knowledge please explain it to me, or at least provide a link to some article clearly described for beginners. And here I really can't figure out why the cap is seven times the number of digits in $b$, how can I relate this to the result about $\lt \log_2a+\log_2b$, I just know simple facts about logarithm, really can't make out where it has any reference to the number of digits. Please help me. Just because I have no idea, this question does not alleviate my confusion a little bit. And also, what's the significance of $\log_2$? Why not considering other bases? REPLY [3 votes]: This is how we can relate logarithms to numbers of digits: Claim. For $n$ a positive integer, let $d$ be the number of digits of $n$ in base $B$. Then $d-1 \le \color{blue}{\log_B(n) < d}$. For example, in base $B=10$, any $4$-digit number $n$ satisfies $1000 \le n < 10000$, so $3 = \log_{10}(1000) \le \log_{10} n < \log_{10}(10000) = 4$. Proof. 
If $n$ has $d$ digits in base $B$, then (since the $d$th place has value $B^{d-1}$) we must have $B^{d-1} \le n < B^d$. Taking logarithms, and noting that $\log_B$ is an increasing function, we get $d-1 \le \log_B(n) < d$. $\Rule{1ex}{1ex}{0pt}$ So if you have proved that the Euclidean algorithm terminates in at most $2 \log_2(b)$ steps, then we also have the fact that $\color{blue}{\log_{10}(b) < d}$, where $d$ is the number of (base-$10$) digits of $b$. So, the number of steps is at most $2 \log_2(b) = 2 \log_2(10) \log_{10}(b) < 2 \log_2(10) d \approx 6.64d$.<|endoftext|> TITLE: Can we use trigonometric idendities to calculate $\cos(x)$ and $\sin(x)$ for extremely large $x$? QUESTION [6 upvotes]: If we want to calculate $\sin(x)$ and $\cos(x)$ for very large $x$ , lets say $10^5$ , the usual way is to reduce the number $x$ modulo $2\pi$. If the number is a large power of a small number, for example $2^{200}$, we could also use $$\cos(2x)=\cos^2x-\sin^2x$$ and $$\sin(2x)=2\sin(x)\cos(x)$$ multiple times. Do we have any chance, if the power is too high for both methods ? For example, can we calculate $\sin(x)$ and $\cos(x)$ for $x=10^{10^{10^{10}}}$ ? Note, that the power tower must be calculated from above. So, we have $$x=10^{(10^{(10^{10))}}}$$ In tetration-notation, we can write $x=10\uparrow\uparrow 4$ REPLY [3 votes]: By computing $\cos(x)$ and $\sin(x)$ (and thus $\exp(ix)$), you are in effect computing the fractional part of $x/(2\pi)$. Any numerical procedure that attempts to do this (approximately) must deal with the fact this is extremely sensitive to small relative errors in $x$, when $x$ is large. Thus if you want to approximate $\cos\left(10^{10^{10}}\right)$ to any accuracy, you'd need to do computations with at least $10^{10}$ significant digits. This might be near the boundary of feasibility with today's computers. But for $\cos\left(10^{10^{10^{10}}}\right)$, you'd need $10^{10^{10}}$ significant digits; the known universe wouldn't be big enough to store that many digits.<|endoftext|> TITLE: Intuition for the chain homotopy of Poincaré lemma. QUESTION [7 upvotes]: In the proof of Poincaré lemma, Bredon essentially constructs a chain homotopy between the identity and the null map. Namely, for an open convex set $U \subset \mathbb{R}^{n+1}$ containing the origin, he builds $$\phi: \Omega^{p+1} (U) \to \Omega^{p}(U) $$ as follows. For $\omega=fdx_{j_0} \wedge \cdots dx_{j_p}$, define $$\phi(\omega)=\big(\int_0^1t^pf(tx)\big)\eta ,$$ where $\eta=\sum_{i=0} ^p (-1)^i x_{j_i}dx_{j_0} \wedge \cdots \wedge \widehat{dx_{j_i}} \wedge \cdots \wedge dx_{j_p}.$ Then extend linearly. Now to check that it is a chain homotopy is just computation. What I don't get is how to come up with that idea. Is there some intuition/motivation for considering this chain homotopy? REPLY [3 votes]: Expanding on the comments: This is a special case of the chain homotopy associated with a homotopy $M \times [0,1] \to N$. With $\pi \colon M \times [0,1] \to M$ being the projection, we can let $\phi = \pi_* \circ h^*$ $$ \Omega^{p+1}(N) \xrightarrow{h^*} \Omega^{p+1}(M \times [0,1]) \xrightarrow{\pi_*} \Omega^p(M) $$ where $\pi_*$ is integration along the fiber. Then $$ d\circ \phi + \phi \circ d = h_1^* - h_0^* \colon \Omega^p(N) \to \Omega^p(M)$$ so that $\phi$ is a chain homotopy between the chain maps $h_0^*$ and $h_1^*$. In your case we have a homotopy $h \colon U \times [0,1] \to U, h(x,t) = tx$ from the constant map to the identity map (which only exists because $U$ is star-convex around $0$). 
Then we have $h^*(dx_j) = t \,dx_j + x_j \,dt$, so you can work out that, for $\omega=f \,dx_{j_0} \wedge \cdots dx_{j_p}$, $$h^*(\omega)(x,t) = t^{p+1} f(tx) dx_{j_0} \wedge \cdots \wedge dx_{j_p} + t^p f(tx) \,dt \wedge \eta,$$ where $\eta$ is as you defined. Then integrate: $$ \phi(\omega)(x) = \pi_*(h^*(\omega))(x) = \int_0^1 t^p f(tx) \,dt \cdot \eta.$$ Now $\phi$ is a chain homotopy between $h_0^* = 0$ and $h_1^* = \operatorname{id}$ and you're done.<|endoftext|> TITLE: How to know when a quintic is solvable QUESTION [7 upvotes]: So according to Abel-Ruffini Theorem, it states that there is no algebraic solution, in the form of radicals, to general polynomials of degree $5$ or higher. But I'm wondering if there is a way to decide whether a polynomial, such as $$x^5+14x^4+12x^3+9x+2=0$$ has roots that can be expressed in radicals or not just by having a glance at the polynomial. REPLY [4 votes]: As the others have commented, to know when a quintic (or higher) is solvable in radicals requires Galois theory. However, there is a rather simple aspect when it is not solvable that is easily understood and can be used as a litmus test. Theorem: An irreducible equation of prime degree $p>2$ that is solvable in radicals has either $1$ or $p$ real roots. (Irreducible, simply put, means it has no rational roots.) By sheer coincidence, the irreducible quintic you chose has $3$ real roots so, by looking at its graph, you can indeed tell at a glance that this is not solvable in radicals. Going higher, if an irreducible septic has $3$ or $5$ real roots, then you automatically know it is not solvable. And so on. P.S. And before you ask, it does not work the other direction: if it has $1$ or $p$ real roots, it does not imply it is solvable in radicals. It is a necessary but not sufficient condition.<|endoftext|> TITLE: Sum of Orthogonal projection operator in closed subspace M,N of Hilbert is Projection operator on sum of two subspaces iff M and N are orthogonals QUESTION [5 upvotes]: I want to prove this proposition: Let $H$ be an Hilbert space and let $M,N$ closed subspaces of $H$. Let$P_M$ and $P_N$ orthogonal projections with range(M) and range(N) respectively. Prove that $P_M + P_N = P_{M+N}$ if and only if $M \perp N$. According to what I have studied in Regression analysis and linear algebra, I understand this proposition perfectly and my geometrical imagination capacity says that it is something obvious, but I don't know how to prove it, nethier how to start the proof (For me, this is just by definition of projection). REPLY [4 votes]: Suppose first that $M\perp N$. Then, for any $x,y\in H$, $$ \langle P_MP_Nx,y\rangle=\langle P_Nx,P_My\rangle=0. $$ As we can do this for all $x,y$, we get that $P_MP_N=0$. Then $$ (P_M+P_N)^2=P_M^2+P_N^2=P_M+P_N $$ and $P_N+P_M$ is a projection. Given $x+y\in M+N$ with $x\in M$ and $y\in N$, $$(P_M+P_N)(x+y)=P_Mx+P_Nx+P_My+P_Ny=x+y,$$since $P_Mx=x$, $P_nx=0$, $P_My=0$, $P_Ny=y$. If $z\in (M+N)^\perp$, then $$\langle(P_M+P_N)z,w\rangle=\langle z,(P_M+P_N)w\rangle=0.$$ So $P_M+P_N=P_{M+N}$. Conversely, if $P_M+P_N=P_{M+N}$, then $$ P_M+P_N=P_{M+N}=P_{M+N}^2=P_M+P_N+P_NP_M+P_MP_N. $$ Then $$\tag{*}P_MP_N+P_NP_M=0.$$ Multiply on the left by $I-P_M$, and we get $(I-P_M)P_NP_M=0$. This is $P_MP_NP_M=P_NP_M$. Then $$ P_MP_N=(P_NP_M)^*=(P_MP_NP_M)^*=P_MP_NP_M=P_NP_M. $$ Then $P_NP_M=P_MP_N$ and now $(*)$ becomes $P_MP_N=0$. If we take $x\in M$ and $y\in N$, $$ \langle x,y\rangle=\langle P_Mx,P_Ny\rangle=\langle P_NP_Mx,y\rangle=0. 
$$ So $M\perp N$.<|endoftext|> TITLE: When is the Composite with Cube Root Smooth QUESTION [10 upvotes]: Let $f:\mathbf R\to \mathbf R$ be a smooth map and $g:\mathbf R\to \mathbf R$ be defined as $g(x)=f(x^{1/3})$ for all $x\in \mathbf R$. Problem. Then $g$ is smooth if and only if $f^{(n)}(0)$ is $0$ whenever $n$ is not an integral multiple of $3$. One direction is easy. Assume $g$ is smooth. Then we have $f(x)=g(x^3)$ for all $x$. Differentiating and using the chain rule gives that the required derivatives of $f$ vanish. I am struggling with the converse. Assume $f^{(n)}(0)=0$ whenever $n$ is not an integral multiple of $3$. We need to show that $g$ is smooth. Since $x^{1/3}$ is smooth at all points except $x=0$, we see that $g$ too is so. So the only problem is at $0$. I am only able to show that the first derivative of $g$ at $0$ exists. Here is what I have done. Let $x>0$ be arbitrary. By Taylor we know that $$f(x)= f(0)+f'(0)x+ \frac{f''(0)}{2}x^2+ \frac{f'''(\lambda_x x)}{6}x^3$$ for some $0<\lambda_x<1$. This gives by hypothesis that $$f(x) - f(0) = \frac{f'''(\lambda_x x)}{6}x^3$$ Thus $$g(x)-g(0)=\frac{f'''(\lambda_{x^{1/3}} x^{1/3})}{6} x$$ and therefore $$\frac{g(x)-g(0)}{x}=\frac{f'''(\lambda_{x^{1/3}} x^{1/3})}{6}$$ Since $\lim_{x\to 0}f'''(\lambda_{x^{1/3}} x^{1/3})$ exists, we see that $g'(0)$ exists. But I am not able to extend this argument to show that $g''(0)$ etc. also exist. REPLY [3 votes]: Just wanted to cover the $\implies$ case, which the other answers didn't address, and which the OP explained only vaguely with "differentiating and using the chain rule gives that the required derivatives of $f$ vanish". As stated in a similar question, it is not just that simple. Let's show that if $f \circ \psi^{-1}\colon \widetilde{\mathbb{R}}\to \mathbb{R}$ is smooth, where $\psi^{-1}(x) = x^{\frac{1}{3}} $, then $f^{(n)} (0) = 0$ for all $n$ not divisible by $3$. We can express $f = (f \circ \psi^{-1}) \circ \psi$, and apply the combinatorial form of Faà di Bruno's formula to express the $n$-th derivative: $$ \frac{d ^n}{d x^n} (\varphi \circ \psi) (x)= \sum_{\pi \in \Pi} \frac{d^{|\pi|} \varphi}{d x^{|\pi|} } \left( \psi(x) \right) \cdot \prod_{B \in \pi} \frac{d^{|B|} \psi}{d x^{|B|} } (x)\,, $$ where $\varphi = f \circ \psi^{-1}$, $\pi$ runs through $\Pi$, which is the collection of all partitions of the set $\{i\}_{i=1}^n$, and $B$ runs through all blocks of the partition $\pi$. The values of $\psi^{(n)} (0)$ are all $0$, except for $n = 3$. So, the product is non-zero only if $|B| = 3$ for all $B \in \pi$. It follows that the formula above can only give non-zero values if $n$ is divisible by $3$.<|endoftext|> TITLE: Terminology: Independent Copy of Random Variables QUESTION [7 upvotes]: Suppose $X_1,\ldots, X_n$ are (independent) RVs. What does it mean to say that $X_1',\ldots, X_n'$ is an independent copy of $X_1,\ldots, X_n$? Does it mean that each $X_i'$ is independent of $X_i$, or does it mean that the joint distribution of $(X_1,\ldots, X_n)$ is the same as the joint distribution of $(X_1',\ldots, X_n')$? Or does it mean something else entirely? I find the term a bit confusing since I am not sure how you can be both independent and a copy (since being a copy would imply being dependent). REPLY [3 votes]: Consider the case when $n=1$. When we say that $X'$ is an independent copy of $X$, what we mean is that the distribution of $X'$ is the same as the distribution of $X$ AND that $X$ and $X'$ are independent. (You can use any other symbol for $X'$.)
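To make the definition concrete (a minimal sketch of my own, not from the original answer; the standard normal distribution is an arbitrary choice), producing an independent copy in code simply means drawing again from the same distribution with fresh randomness:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200_000  # number of draws used to check the two properties empirically

# X and X' are generated from the same (standard normal) distribution but from
# separate randomness, so X' is an independent copy of X.
X  = rng.standard_normal(M)
Xp = rng.standard_normal(M)

print(X.mean(), Xp.mean(), X.var(), Xp.var())  # matching moments: same distribution
print(np.corrcoef(X, Xp)[0, 1])                # close to 0: consistent with independence
```

The first line of output shows matching sample moments (same distribution); the second shows a sample correlation near $0$, as expected for independent draws.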
This expression is usually used when you want to repeat the same exact statistical experiment a number of times independently. So, people tend to use the same symbol with a prime or a tilde. For example, one use, in Monte Carlo simulations, is to approximate the expectation of a function $\varphi(X_1, X_2, \dots, X_n)$ in which the joint random variable \begin{equation} (X_1, X_2, \dots, X_n) \sim P_X. \end{equation} To do this, we need the realizations of a number of independent copies of $(X_1, X_2, \dots, X_n)$. In your question you only have 2 copies. In Monte Carlo methods, we should use as many copies as possible. The estimator of the expectation is then given by \begin{equation} \hat{\mathbb{E}}[\varphi(X_1, X_2, \dots, X_n)] := \frac{1}{M} \sum_{m=1}^M \varphi(X^{(m)}_1, X^{(m)}_2, \dots, X^{(m)}_n) \end{equation} in which \begin{equation} X^{(m)}_1, X^{(m)}_2, \dots, X^{(m)}_n \quad \text{for all } m \in \{1, \dots ,M\}\setminus\{i\} \quad \text{for some } i \in \{1, \dots ,M\} \end{equation} are independent copies of a random variable \begin{equation} X^{(i)}_1, X^{(i)}_2, \dots, X^{(i)}_n \sim P_X. \end{equation} Observe that \begin{equation} \hat{\mathbb{E}}[\varphi(X_1, X_2, \dots, X_n)] \end{equation} is a random variable that depends on the $M$ copies. Once you realize all the copies, you get an estimate of the expectation.<|endoftext|> TITLE: The "functions" of untyped lambda calculus are not (set theoretic) functions so what are they? QUESTION [18 upvotes]: You can't model $\lambda$ functions as set functions because the domain of a $\lambda$ function includes that function itself. This violates foundation. However, they clearly are some sort of arrow. Are the arrows of $\lambda$ calculus their own thing or can they be represented in some other way? REPLY [3 votes]: What you are missing is that $f(f)$ is not the same as $f(code(f))$. A function $f$ may be coded as another object $code(f)$, which may be in the domain of $f$. In computation theory, $\lambda$-functions are coded as strings of a certain form, which can be thought of as programs of some sort. Applying one program $p$ to another program $q$ requires a machine/interpreter to perform the application, such as a universal Turing machine $U$, such that $U(p,x)$ is the output of executing the program $p$ on input $x$. $U$ itself is not a program, but corresponds to one, namely $code(U)$. Multiple inputs can be coded; for example, we can have a function $pair$ such that a program given $pair(x,y)$ can obtain $x$ and $y$. Indeed, we could design $code(U)$ such that $U(code(U),pair(p,x)) = U(p,x)$. Similarly, we have $f(code(f)) = U(code(f),code(f))$.<|endoftext|> TITLE: About 2-state graph system QUESTION [8 upvotes]: Let $G$ be a finite graph, and assign a state $0$ or $1$ to each vertex of $G$. For each vertex $v$, let $N(v)$ be the set consisting of $v$ and the vertices in the 1-neighborhood of $v$. In the next phase, for each vertex $v$, the state of $v$ is changed to the majority state in $N(v)$. (If the count is tied, the state of $v$ is not changed.) For example, see the attached picture below. In the top case, $10/01$ and $01/10$ appear alternately with period $2$. In the bottom case, $1/11/11$ persists permanently. I want to prove that every graph ultimately converges either to a configuration alternating with period $2$ or to one that never changes. I don't have any plausible approach yet. If you give me some idea, I'll be happy. Thank you. +) Is this kind of dynamical system already known? If you know information relevant to this system, please give me some link.
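Since the update rule is completely explicit, it is easy to experiment with it in code. Below is a minimal simulation sketch (the function names and the example graphs are my own, not from the question): it applies the synchronous majority update with ties keeping the current state, and reports the length of the cycle that a configuration eventually enters, which should always come out as $1$ or $2$ if the claimed result holds.

```python
def step(adj, state):
    """One synchronous update: each vertex takes the majority state over
    N(v) = {v} union neighbours(v); on a tie it keeps its current state."""
    new = {}
    for v, nbrs in adj.items():
        closed = set(nbrs) | {v}
        ones = sum(state[u] for u in closed)
        zeros = len(closed) - ones
        new[v] = 1 if ones > zeros else 0 if zeros > ones else state[v]
    return new

def eventual_period(adj, state, max_steps=10_000):
    """Iterate until a configuration repeats and return the cycle length."""
    seen, s = {}, dict(state)
    for t in range(max_steps):
        key = tuple(sorted(s.items()))
        if key in seen:
            return t - seen[key]
        seen[key] = t
        s = step(adj, s)
    return None

# A 4-cycle whose two 1's sit at opposite vertices oscillates with period 2,
# while an all-ones path is already a fixed point (period 1).
square = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(eventual_period(square, {0: 1, 1: 0, 2: 0, 3: 1}))  # 2
path = {0: [1], 1: [0, 2], 2: [1]}
print(eventual_period(path, {0: 1, 1: 1, 2: 1}))          # 1
```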
[Added as an afterthought] I solved the problem by myself. I want to explain my solution, but It is hard for me to write formal proof. So I write just sketch of proof. (1) It is easy to show there is period $k$ because there is only finite possible states. (2) For each vertex $v$, define $s(v)$ : sequence of $0$, $1$ showing state of $v$ in period $k$. (or we can think $s(v)$ is $k$-tuple. I called $s(v)$ spectrum of $v$.) (3) When two vertices $v$ and $w$ are adjacent, define $p(v, w) = (v(2)w(1)+v(3)w(2)+\cdots+v(1)w(k)) - (v(1)w(2) + v(2)w(3) + \cdots + v(k)w(1)) $ (where $v(i)$ denote $i$'th component of $s(v)$) If $v(i-1)=0$ and $v(i+1)=1$ and $w(i)=1$ then $w(i)$ contribute $+1$ to $p(v, w)$ and if $v(i-1)=1$ and $v(i+1)=0$ and $w(i)=1$ then $w(i)$ contribute $-1$ to $p(v, w)$ and otherwise $w(i)$ does not contribute to $p(v, w)$. Let $w_1, w_2, \cdots , w_m$ is all vertices adjacent to $v$. Naively, If $v(i-1)=0$ and $v(i+1)=1$, that means $w_j(i)=1$ for many $j$, so many $+1$ are contributed to $p(v, w)$. In contrast, If $v(i-1)=1$ and $v(i+1)=0$, that means $w_j(i)=1$ for little $j$, so small $-1$ are contributed to $p(v, w)$. Therefore we can expect that $\sum_{j=1}^{m} p(v, w_j)$ is maybe positive. (4) In fact, we can show that [a] $\sum_{j=1}^{m} p(v, w_j)= 0$ (if $s(v)$ has period 1 or 2) [b] $\sum_{j=1}^{m} p(v, w_j) > 0$ (if $s(v)$ has period larger than 2) [c] $p(v, w) + p(w, v) = 0$ So summation of $p(v, w)$ about all two possible adjacent vertex $v, w$ must be 0 (by [c]). But if there is vertex $v$ that s(v) has period larger than 2, its value must be positive (by [a], [b]). Sorry for poor English and unfriendly explain. REPLY [2 votes]: Unfortunately, I don’t found the respective talk at the conferences pages (but there still is an option to ask about my colleague who attended the same conferences). Nevertheless, I googled two relevant links, and there should be more of them. The links which I found: “On the Voting Time of the Deterministic Majority Process” by Dominik Kaaser, Frederik Mallmann-Trenn, and Emanuele Natale. In particular, it seems that your conjecture is true (see the abstract and the beginning of page 4). Section 1.3 "Related work" of the book “Language, Culture, Computation” by eds. Nachum Dershowitz and Ephraim Nissan should contain some relevant references.<|endoftext|> TITLE: Possible tiling patterns for equiangular hexagon with alternating side lengths QUESTION [6 upvotes]: I want to make a pattern of an equiangular hexagons with side lengths of 1-5-1-5-1-5. Here are 3 examples I made: Which different patterns exist? Are there more than the ones I show? And which one is the most space-efficient, so that it minimizes the wasted space between the hexagons? I checked the wikipedia page on hexagonal tilings but I cannot find examples for this specific hexagon. REPLY [8 votes]: Here are the patterns I could think of, represented by their unit cells and with their densities listed beside: The first tiling shown in the question has no code in my diagram and has density $\frac{46}{49}=0.9388$. This is not very interesting because it is a rearrangement of the A0 tiling (the second in the question), which has the same density. From the A0 tiling we can shift the hexagons on their long edges by one to five units to create the A1 to A5 tilings. 
As the holes between the tiles grow, the density decreases: A1 has density $\frac{23}{26}=0.8846$; A2 has density $\frac{46}{61}=0.7541$; A3 has density $\frac{23}{38}=0.6053$; A4 has density $\frac{46}{97}=0.4742$; A5 has density $\frac{23}{62}=0.3710$. The B tiling is the last one shown in the question and has density $\frac{23}{27}=0.8519$. Now to answer the question of the densest possible tiling. When I started typing this I thought it was tiling M, which has density $\frac{23}{24}=0.9583$ and features rows of tiles that can slide over each other. Then I realised the rows could be pushed into each other a bit more, resulting in the real winner: tiling N with density $\frac{46}{47}=0.9787$. So if you want to tile a real wall or floor with this shape, your best bets are tilings M and N. While M is less dense, it isn't chiral like N, and its rows may make it better suited to the (usually) rectangular areas tiles are supposed to cover.<|endoftext|> TITLE: What is the definition of a complex manifold with boundary? QUESTION [6 upvotes]: Can anybody help me to be clear about this definition? I know the definition of a real manifold with boundary (as in Lee's book) and the definition of a complex manifold (locally diffeomorphic to an open set in $\mathbb{C}^{n}$, with holomorphic transition maps). What is the definition of a complex manifold with boundary? I see it many times while reading about complex Monge-Ampère equations on Kähler manifolds. REPLY [5 votes]: The usual definition is that $M$ is a complex manifold with boundary if and only if it has an atlas of biholomorphically compatible charts, each of which has as its image either an open subset of $\mathbb C^n$ or a set of the form $\{z\in U :f(z)\le 0\}$, where $U\subseteq\mathbb C^n$ is open and $f\colon U\to\mathbb R$ is a $C^\infty$ submersion. It's important to allow such "curved" model boundaries instead of insisting that the image be a relatively open subset of a half-space, because most hypersurfaces in $\mathbb C^n$ are not biholomorphically equivalent to a plane such as $\{z: \operatorname{Im} z=0\}$.<|endoftext|> TITLE: One relator groups shortest word in quotient QUESTION [9 upvotes]: Suppose we have a group $\langle S \vert r \rangle \cong F_S /\langle \langle r \rangle \rangle$ where $S$ is a finite set of generators and $r \in F_{S}$, i.e. a finitely generated one relator group. Suppose $r$ is cyclically reduced (i.e. $r \not = l^c$ for some $\vert l \vert_S < \vert r \vert_S$); then is there some $u \in \langle \langle r \rangle \rangle$ such that $0 < \vert u \vert_S < \vert r \vert_S$? I believe such a $u$ doesn't exist, however when getting pen and paper out to try and prove it I get a bit stuck (I have tried expressing such a $u$ as a product of conjugated copies of $r$ to play around for a contradiction, as well as an induction on the length of such an expression). I have looked at the literature surrounding one relator groups and it seems like it is predominantly dealing with the groups themselves, not the presentations. I expect the answer to be pretty trivial but I really can't see it, sorry! The actual question I care about has $u$ as a subword of $r$ (which might make it easier) but I was looking to see if the general question was solved and what methods were involved.
REPLY [11 votes]: "One-relator theory" can be annoyingly difficult and these "obvious" results tend to be difficult, and almost always use some form of Magnus' method, which you can read about in Combinatorial Group Theory by Lyndon and Schupp (or an other book of the same title by Magnus, Karrass, Solitar). Although maybe it is naive to think it should be easy, after all the only restriction we are placing is that the group can be defined by one relation, why should there be any other connection in the class of one-relator groups? For the subword of a relation problem, there is a proof in the paper On relators and diagrams for groups with one defining relation by Weinbaum which shows that no subword is trivial in the group (subword proper, not trivial). That means that at least the part of your question you care about holds. Howie also has a proof of this result (and a more general result) in his paper On locally indicable groups. This latter paper gives a more topological interpretation of the Magnus method by using something called towers, which were originally introduced to prove some fundamental results in 3-manifolds, like the loop theorem. In general there can be $u$ which are trivial in the one relator group, with length less than $r$: Let $r=x^2zxzxz$ and consider $$\begin{align*} r(xz)^{-1}r^{-1} (xz) &=(x^2zxzxz)(xz)^{-1}r^{-1}xz \\ &=(x^2zxz)(x^2zxzxz)^{-1} (xz) \\ &= xz^{-1}x^{-1}z \end{align*}$$ Which has length $4<7$. Similar examples to this can be constructed by finding different basis elements of $F_2$ (which are long with respect to your choosen basis) and use them as relators which give a presentations for $\mathbb{Z}$. That works since you end up with all elements commuting with each other, and you can find a short commutator relation(something similar can be done with $\mathbb{Z}^2$ like my example above). This has the problem that if I changed basis I could make the above relation length $1$, and we would no longer have any strictly smaller words which are relations. We can come up with a less trivial class of examples though. Consider Baumslag-Solitar groups $BS(m,n)=\langle a,b \mid ba^mb^{-1}=a^n \rangle$, and note that $[a,ba^mb^{-1}]=[a,a^n]=1$, the length of $[a,ba^mb^{-1}]$ is $6+2m$, and the length of the defining relation is $2+m+n$. It is not hard to see that you can arrange $6+2m<2+m+n$ by making $n$ large enough. I am pretty sure that the standard defining relation for $BS(m,n)$ is the shortest defining relation (although I am not one hundred percent positive). It would be interesting to have a hyperbolic example, where the defining relation is the shortest possible, but there are smaller relations. Random one-relator groups (whatever that means) satisfy small cancellation condition $C'(1/6)$, which implies hyperbolic, and this class of groups do not have relations smaller than the defining relation (by Dehn's algorithm). In fact any Dehn presentation of a group (a group is hyperbolic iff it has a Dehn presentation!) will have the property you will not have relations smaller than the smallest relation in the presentation, by applying Dehn's algorithm. This is a hyperbolic example update from an answer(and question) of mine on mathoverflow. Below I provide an example of a one ended, one relator, hyperbolic group where there are shorter relators than the defining relation: Using a handy characterizations in a paper by Ivanov and Schupp called On hyperbolicity of small cancellation groups and one-relator groups. 
Consider $\langle a,b,c \mid ab^2ac^{12}\rangle$. By checking Whitehead automorphisms this relation is as short as possible in the $Aut(F_3)$ orbit and has every generator in the relation, so does not have infinitely many ends(Thanks to ADL for the correction). One can look at the abelianization to rule out zero or two ends. Theorem 3 in the above paper says that this group is hyperbolic since it has exactly two occurrences of $a$, no $a^{-1}$, and $b^2c^{-12}$ is not a proper power in the free group generated by $a,b,c$. Now $$(ab^2ac^{12})c(c^{-12}a^{-1}b^{-2}a^{-1})c^{-1}=ab^2aca^{-1}b^{-2}{a^{-1}}c^{-1}$$ which is shorter than the defining relation.<|endoftext|> TITLE: Prove that a subset of meager set is also meager QUESTION [6 upvotes]: I have to prove that a subset $A \subset F$ of meager set $F$ is also a meager set. From the definition of meager set $F = \bigcup_{n=1}^{\infty}F_n$, where $F_n$ is a nowhere dense set. As I know we have to show that $A$ has also similiar form, but the question is how do we find these nowhere dense sets, that sum to the whole $A$? REPLY [7 votes]: Notice that $A=F\cap A=\bigcup_{n=1}^\infty F_n\cap A$. Is $A_n:=F_n\cap A$ a nowhere dense set?<|endoftext|> TITLE: Prove that $\int_1^{\infty}\frac{dx}{x\sqrt{x^2-1}}=\frac{\pi}{2}$ using complex analysis QUESTION [7 upvotes]: I'm trying to show that $\int_1^{\infty}\frac{dx}{x\sqrt{x^2-1}}=\frac{\pi}{2}$ by letting $f(z)=\frac{1}{z\sqrt{z^2-1}}=\frac{1}{z}e^{-\log(z^2-1)^{\frac{1}{2}}}$. and I need to show that, on the 4 straight lines, $$L_1^+=\{z|z:R+\epsilon i \rightarrow \rho+1+\epsilon i\}, L_1^-=\{z|z:\rho+1-\epsilon i \rightarrow R-\epsilon i\}, $$$$L_2^+=\{z|z:-\rho-1+\epsilon i \rightarrow -R+\epsilon i\}, L_2^-=\{z|z:-R+\epsilon i \rightarrow -\rho-1+\epsilon i\}$$, which are the parts of a simple closed contour which is surrounding the simple pole $z=0$ of $f$ and excluding $\{x\in R | x$ is less than or equal to $-1$ or $x$ is greater than or equal to $1 \}$, $$\lim_{\rho\rightarrow 0,R\rightarrow \infty, \epsilon\rightarrow 0 }\int_{L_{1,2}^{+,-}}f(z)dz=-\frac{\pi}{2}.$$ it is negative since I chose the contour negatively oriented at the first. I was able to show that the other parts go to $0$, but somehow those four line integrals cancel out each other and make me crazy. specifically, I got the following results: $L_1^+ \rightarrow -\frac{\pi}{2}$ $L_1^- \rightarrow \frac{\pi}{2}$ $L_2^+ \rightarrow \frac{\pi}{2}$ $L_2^- \rightarrow -\frac{\pi}{2}$ I think the problem is to choose new branch cut when I make $\epsilon$ go to $0$. if $z=\sigma e^{i \phi}$ and $(z^2-1)^\frac{1}{2}=r e^{i\theta}$, then $2\theta=Arg(\sigma^2 e^{2i \phi}-1)$, so $\theta$ behaves similarly with $\phi$, and from here the above results come out and I don't know where I did wrong. I know the other methods like letting $u=\sqrt{x^2-1}$, but I just want to do it this way to get used to it. any helps or hints? thanks in advance. REPLY [2 votes]: Here is the contour of integration and the signs of $\sqrt{z^2-1}$ near the real axis. For large $|z|$, in the upper half-plane, $\sqrt{z^2-1}\approx+z$, whereas in the lower half-plane, $\sqrt{z^2-1}\approx-z$. Note that along the imaginary axis, $\sqrt{z^2-1}$ is positive imaginary, so at $z=0$, we have $\sqrt{z^2-1}=+i$. This means that $\operatorname*{Res}\limits_{z=0}\left(\frac1{z\sqrt{z^2-1}}\right)=-i$. The integral along the blue contours vanishes. 
On the large blue circles, the size of the integrand is $\sim\frac1{|z|^2}$ and on the small blue circles, the size of the integrand is $\sim\frac1{\sqrt{2|z-1|}}$ and $\sim\frac1{\sqrt{2|z+1|}}$ The integral along each of the red and green contours is the integral we want due to the signs of $z$, $\sqrt{z^2-1}$, and the direction of the contour. Therefore, $$ \begin{align} 4\int_1^\infty\frac{\mathrm{d}z}{z\sqrt{z^2-1}} &=2\pi i\operatorname*{Res}_{z=0}\left(\frac1{z\sqrt{z^2-1}}\right)\\ &=2\pi \end{align} $$ Thus, $$ \int_1^\infty\frac{\mathrm{d}z}{z\sqrt{z^2-1}}=\frac\pi2 $$<|endoftext|> TITLE: Gauss-divergence theorem for volume integral of a gradient field QUESTION [21 upvotes]: I need to make sure that the derivation in the book I am using is mathematically correct. The problem is about finding the volume integral of the gradient field. The author directly uses the Gauss-divergence theorem to relate the volume integral of gradient of a scalar to the surface integral of the flux through the surface surrounding this volume, i.e. $$\int_{CV}^{ } \nabla \phi dV=\int_{\delta CV}^{ } \phi d\mathbf{S}$$ The book page is available via this link: http://imgh.us/Esx.jpg Is that true? is there any mathematical derivation available for Gauss-divergence theorem (or similar theorem) when we consider gradient instead of divergence? Does that has any physical significance as in case of divergence? REPLY [8 votes]: $\newcommand{\bbx}[1]{\bbox[8px,border:1px groove navy]{{#1}}\ } \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{\mrm{CV}}\nabla\phi\,\dd V & = \sum_{i}\hat{x}_{i}\int_{\mrm{CV}}\partiald{\phi}{x_{i}}\,\dd V = \sum_{i}\hat{x}_{i}\int_{\mrm{CV}}\nabla\cdot\pars{\phi\,\hat{x}_{i}}\,\dd V \\[5mm] & = \sum_{i}\hat{x}_{i}\int_{\mrm{\partial CV}}\phi\,\hat{x}_{i}\cdot\dd\vec{S} = \sum_{i}\hat{x}_{i}\int_{\mrm{\partial CV}}\phi\,\pars{\dd\vec{S}}_{i} \\[5mm] & = \int_{\mrm{\partial CV}}\phi\,\sum_{i}\pars{\dd\vec{S}}_{i}\hat{x}_{i} =\ \bbx{\int_{\mrm{\partial CV}}\phi\,\dd\vec{S}} \end{align} One interesting application of this identity is the Archimedes Principle derivation ( the force magnitude over a body in a fluid is equal to the weight of the mass of fluid displaced by the body ): $$ \left\{\begin{array}{rl} \ds{P_{\mrm{atm.}}:} & \mbox{Atmospheric Pressure.} \\[1mm] \ds{\rho:} & \mbox{Fluid Density.} \\[1mm] \ds{g:} & \mbox{Gravity Acceleration}\ds{\ \approx 9.8\ \mrm{m \over sec^{2}}.} \\[1mm] \ds{z:} & \mbox{Depth.} \\[1mm] \ds{m_{\mrm{fluid.}}:} & \ds{\rho V_{\mrm{body}} = \rho\int_{\mrm{CV}}\,\dd V} \end{array}\right. $$ \begin{align} \int_{\mrm{\partial CV}}\pars{P_{\mrm{atm.}} + \rho gz}\pars{-\dd\vec{S}} & = -\int_{\mrm{CV}}\nabla\pars{P_{\mrm{atm.}} + \rho gz}\,\dd V \\[2mm] & = -\int_{\mrm{CV}}\rho g\,\hat{z}\,\dd V = -m_{\mrm{fluid}}\, g\,\hat{z} \end{align}<|endoftext|> TITLE: Fundamental group of closed $3$-manifold and its subgroups. 
QUESTION [7 upvotes]: Let $M$ be a closed $3$-manifold with $\pi_2(M) = 0$. Can the fundamental group of $M$ contain a subgroup isomorphic to $\mathbb{Z}^4$? If the fundamental group of $M$ contains a subgroup isomorphic to $\mathbb{Z}^3$, must that subgroup be of finite index? REPLY [5 votes]: Observe that to satisfy those conditions, the universal cover $N$ of $M$ has to be non-compact. Now $\pi_2(N)=\pi_2(M)=0$. So, by Poincaré duality for non-compact spaces, $H_k(N)=0$ for all $k>0$, and thus the Hurewicz theorem implies $N$ is contractible. So $M$ is a $K(G,1)$ space. Now any cover of $M$ is also of the same type. So $\pi_1(M)$ cannot contain $\mathbb Z^4$: the corresponding cover of $M$ would be a $K(\mathbb Z^4,1)$, hence homotopy equivalent to the closed $4$-manifold $S^1\times S^1\times S^1\times S^1$, which is not possible since that cover is a $3$-manifold whose 4th homology group is trivial. And if $\mathbb Z^3$ is not a finite index subgroup, then $M$ has a non-compact $3$-manifold cover with fundamental group $\mathbb Z^3$. But then it is homotopy equivalent to $S^1\times S^1\times S^1$, which is not possible, since the 3rd homology of a non-compact $3$-manifold is zero. EDIT: One can also see that the only free abelian group $\mathbb Z^n$ that can occur as $\pi_1(M)$ is $\mathbb Z^3$, the fundamental group of the $3$-torus, since the $n$-torus is a $K(\mathbb Z^n,1)$ and this has $H_n$ non-zero and $H_i$ zero for $i>n$.<|endoftext|> TITLE: A question on an inequality relating a function and its derivative QUESTION [5 upvotes]: Let $f:[0,1]\to \Bbb R$ be a differentiable function such that $f(0)=0$ and $|f'(x)| \le k|f(x)|\;\forall x \in [0,1]$ (with $k>0$). Then which of the following is always true? (A) $f(x)=0 \; \forall \; x \in \Bbb R$ (B) $f(x)=0 \; \forall \; x \in [0,1]$ (C) $f(x) \ne 0 \; \forall \; x \in [0,1]$ (D) $f(1) = k$ This question appeared in a test I gave today (it's obviously completed). I would love a hint on how to approach this question, and also some insight on how I should have thought about it from the beginning. Since mean value theorems were on the syllabus (Lagrange's mean value theorem, Rolle's theorem), I suspect their use is required, though I don't see how. Thank you! REPLY [3 votes]: Since this is a multiple-choice question, we can settle on (B) just by process of elimination: (A) makes no sense because the domain of the function is $[0,1]$, and the constant function $0$ shows that (C) and (D) are false. But this is a bit unsatisfying, so let's show that (B) is true: define $$ g(x)=f(x)^2e^{-2kx}$$ for $0\leq x\leq 1$. Then $g(0)=0$, and $$ g^{\prime}(x)=2f(x)f^{\prime}(x)e^{-2k x}-2kf(x)^2e^{-2kx}=2e^{-2kx}(f^{\prime}(x)f(x)-kf(x)^2)$$ $$ \leq 2e^{-2kx}(|f^{\prime}(x)||f(x)|-kf(x)^2)\leq 2e^{-2kx}(k|f(x)|^2-kf(x)^2)=0 $$ Therefore $g$ is non-negative and non-increasing on $[0,1]$, so $0\leq g(x)\leq g(0)=0$ for all $x\in[0,1]$. This implies that $f=0$ on $[0,1]$.
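As a small numerical sanity check of the integrating-factor mechanism used above (my own illustration with an arbitrarily chosen test function, not part of the original answer): any $f(x)=e^{ax}$ with $|a|\le k$ satisfies $|f'(x)|\le k|f(x)|$, though with $f(0)=1$ rather than $0$, and for it the auxiliary function $g(x)=f(x)^2e^{-2kx}$ is indeed non-increasing; adding the condition $f(0)=0$ is exactly what pins $g$, and hence $f$, at $0$.

```python
import numpy as np

k, a = 2.0, 1.5                      # any |a| <= k gives |f'| = |a| e^{a x} <= k |f|
xs = np.linspace(0.0, 1.0, 1001)
f = np.exp(a * xs)                   # test function satisfying the differential inequality
g = f**2 * np.exp(-2.0 * k * xs)     # the auxiliary function from the answer
print(np.all(np.diff(g) <= 1e-12))   # True: g never increases on [0, 1]
```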