TITLE: How to prove $\sum\limits_{n=1}^\infty\frac{\sin(n)}n=\frac{\pi-1}2$ using only real numbers. QUESTION [9 upvotes]: I noticed that a lot of the time, people ask whether the following sum converges: $$\sum_{n=1}^\infty\frac{\sin(n)}n$$ Though I've never stopped to ask what it equaled. According to this other post, the sum is given as $$\sum_{n=1}^\infty\frac{\sin(n)}n=\frac{\pi-1}2$$ The solution involves realizing $\sin(n)=\Im e^{in}$ and the Taylor expansion for the natural logarithm. While that's great and all, how can I prove this using only real numbers? REPLY [9 votes]: As suggested in the comments, let's use Fourier series. =) From here we have the Fourier series of $x$, valid in the range $[-\pi, \pi]$: $$ x = -2\sum_{n=1}^\infty\frac{(-1)^{n}}{n}\sin(nx) $$ If we insert $x=\pi-1$, it will eliminate the $(-1)^n$ from the formula: $$ \sin(nx) = \sin(n\pi - n) = \sin(n\pi)\cos(n)-\cos(n\pi)\sin(n) = -(-1)^n\sin(n) $$ Then: $$ \pi-1 = 2\sum_{n=1}^\infty\frac{1}{n}\sin(n) \quad\implies\quad \sum_{n=1}^\infty\frac{\sin(n)}{n} = \frac{\pi-1}{2} $$<|endoftext|> TITLE: Why does this pattern of "nasty" integrals stop? QUESTION [27 upvotes]: We have (typo corrected), $$\begin{aligned} \pi &=\int_{-\infty}^{\infty}\frac{(x-1)^2}{\color{blue}{(2x - 1)}^2 + (x^2 - x)^2}\,dx,\quad\text{(by Mark S.)}\\[1.8mm] \pi &=\int_{-\infty}^{\infty}\frac{(x+1)^2}{\color{blue}{(x + 1)}^2 + (x^2 + x)^2}\,dx\\[1.8mm] \pi &=\int_{-\infty}^{\infty}\frac{(x+1)^2}{\color{blue}{(x^2 - x - 1) }^2 + (x^2 + x)^2}\,dx\\[1.8mm] \color{red}\pi &=\int_{-\infty}^{\infty}\frac{(x+1)^2}{\color{blue}{(x^3 + 2x^2 - x - 1)}^2 + (x^2 + x)^2}\,dx\\[1.8mm] \pi &=\int_{-\infty}^{\infty}\frac{(x-1)^2}{\color{blue}{(x^3 - 3x^2 + 1)}^2 + (x^2 - x)^2}\,dx\\[1.8mm] ?? &=\int_{-\infty}^{\infty}\frac{(x\pm1)^2}{\color{blue}{(x^5 + 3x^4 - 3x^3 - 4x^2 + x + 1)}^2 + (x^2 \pm x)^2}\,dx \end{aligned}$$ where those in blue are the minimal polynomials of $x=\frac{1}{2\cos(2\pi/p)}$ for $p=1,3,5,7,9,11$. Note: The red pi is the notorious one in the post, A nasty integral of a rational function, $$\int_0^{\infty} \frac{x^8 - 4x^6 + 9x^4 - 5x^2 + 1}{x^{12} - 10 x^{10} + 37x^8 - 42x^6 + 26x^4 - 8x^2 + 1} \, dx = \frac{\pi}{2}$$ as well as in this post after some manipulation. Q1: Why did the "pattern" of using minimal polynomials work, then stop at $p=11$, and how can we make it continue by adjusting the other parameters? $\color{green}{Update:}$ Based on an insight from an old post, using the "negative" case of $p=7$, its denominator is still a sextic with a solvable Galois group and we find, $$\int_{-\infty}^{\infty}\frac{(x\color{red}-1)^2}{\color{blue}{(x^3 + 2x^2 - x - 1)}^2 + (x^2 \color{red}- x)^2}\,dx=\pi\sqrt{\frac{u}{\color{green}{12833}}}$$ where $u$ is a root of a monic nonic also with a solvable Galois group, $$\small -\color{green}{12833}^3*1782434241^2 - 41120374319577904376201744753 u - 354521093943488815427187669 u^2 - 550802363395052799639795 u^3 - 176617825075778391189 u^4 + 116970252692553921 u^5 - 20201478347596 u^6 + 1625465206 u^7 - 63997 u^8 + u^9=0$$ The denominator for $p=9$ is also solvable. However, for $p=11$, it no longer is. Q2: Was the pattern interrupted because the denominator of $p=11$ no longer has a solvable Galois group? REPLY [6 votes]: Partial answer.
Mathematica 11.0.1.0 (64-bit Windows version) seems to symbolically evaluate $$ \int_{-\infty}^{\infty} \frac{(x - 1)^2}{(1 + x - 4 x^2 - 3 x^3 + 3 x^4 + x^5)^2 + (x^2 - x)^2} \, \mathrm{d}x =2\pi y $$ where $ y $ is a root (the 5th root in Mathematica's ordering), of \begin{align*} \small F(y)=55936138949897200689844509841956235222126377325 - 2082209926471466695895506312399091645554188710590 y + 49399208260228586110040380712822122163293326842296 y^2 - 904097593617672391563622547821611243428330356636656 y^3 + 14127632726315977701496334077804393041066245226028208 y^4 - 192534883415138070802102412843131348551007666040509024 y^5 + 2248032708977729589700543648210682328879792825038892288 y^6 - 22010013756272539692699272127690186099540607721493676800 y^7 + 177728824048935169179013735666882776433001119535910888192 y^8 - 1170270214760621202108304618484485542592211842152325435904 y^9 + 6226689208769791815298929222960276164825821302955689534464 y^{10} - 26437408929821178291367173439675032999610116594417230776320 y^{11} + 87275205150008062398776420803782617539547332212906935361536 y^{12} - 209632027731557385765045313738415590487122817707525011718144 y^{13} + 284829590179494874220555955086122649413365826411704845058048 y^{14} + 245738741392479529396402731465119601079938307163739661565952 y^{15} - 2744252632383133719563152613313766366008892259189754592296960 y^{16} + 9042239242455966498125021473251288480014205602523431668940800 y^{17} - 19642481348541153825949628077511851598796849639028033440972800 y^{18} + 31384454408136427453055038714389257858518560896664228069376000 y^{19} - 37847103175390150688294536889184184478935891337063789625344000 y^{20} + 34290036775233047407263179281808801381553538237009356390400000 y^{21} - 22732262960008031643227099738915285612779131750417374904320000 y^{22} + 10440433388762840105269721355193655567662001399784538112000000 y^{23} - 2974656530310569079556114222635017838466182586996359168000000 y^{24} + \color{blue}{831141777440}^5 y^{25}=0 \end{align*} The discriminant $ d $ of the integrand's denominator $$G(x)=(1 + x - 4 x^2 - 3 x^3 + 3 x^4 + x^5)^2 + (x^2 - x)^2$$ is $d=-2^5\times\color{blue}{831141777440}$. The discriminant of the $25$-deg $F(y)$ is divisible by $d^{65}$. However, its constant term is not integrally divisible by $ d $. Moreover, $ y' = 831141777440 y $ is an algebraic integer. The discriminant of the minimal polynomial of $ y' $ is divisible by $d^{246}$. The constant term is divisible by $ d^{10} $, but the quotient is not a perfect power of an integer. The positive case ++ passes the first test, but also fails on the second. The constant term is divisible by $ d^{990} $, but the quotient is not a perfect power.<|endoftext|> TITLE: On the integral $\int_0^\infty \eta^2(i x) \,dx = \ln(1+\sqrt{3}+\sqrt{3+2 \sqrt{3}})$ and its cousins QUESTION [25 upvotes]: While experimenting with integrals involving the Dedekind Eta function, I came across a family of integrals which seem to follow a very simple pattern. 
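For readers who want to sanity-check the identities above before worrying about the Galois-theoretic pattern, the simpler entries of the list are easy to confirm numerically. Below is a minimal sketch using Python's mpmath (not part of the original posts); the interior split points handed to quad are only there to help the quadrature, and the precision setting is arbitrary.

```python
from mpmath import mp, quad, inf, pi

mp.dps = 30

# Two of the claimed identities from the question above, checked numerically:
# the p = 5 integrand and the p = 7 ("red pi") integrand.
f5 = lambda x: (x + 1)**2 / ((x**2 - x - 1)**2 + (x**2 + x)**2)
f7 = lambda x: (x + 1)**2 / ((x**3 + 2*x**2 - x - 1)**2 + (x**2 + x)**2)

print(quad(f5, [-inf, -1, 0, 1, inf]) - pi)   # should be ~0
print(quad(f7, [-inf, -1, 0, 1, inf]) - pi)   # should be ~0
```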
With $y \in \mathbb{N}$, define: $$A(y) = \int_0^{\infty} \eta( i x)\,\eta(i x y)\,dx.$$ The integral can be rewritten in the following infinite series forms: \begin{align} A(y) & = \frac{12}{\pi} \sum_{(n,m) \in \mathbb{Z}^2} \frac{(-1)^{n+m}}{(6n+1)^2+y \, (6m+1)^2} \\[8pt] & =\frac{2 \sqrt{3}}{\sqrt{y}} \sum_{n \in \mathbb{Z}} \frac{(-1)^n}{6n+1} \, \dfrac{ \sinh \frac{\pi \sqrt{y}}{3} (6n+1)}{\cosh \frac{\pi \sqrt{y}}{2} (6n+1)} \\[8pt] & = \frac{2}{\sqrt{y}} \sum_{n \in \mathbb{Z}} (-1)^n \tanh^{-1} \left( \frac{\sqrt{3}}{2} \operatorname{sech}(\pi \sqrt{y} (n+1/6))\right). \end{align} Numerical computations seem to confirm that \begin{align} A(1) & = \ln\left(1+ \sqrt{3} +\sqrt{3+2 \sqrt{3}} \right) \tag{1} \\[8pt] A(2) & = \frac1{\sqrt{2}} \ln \left(1+ \sqrt{2} + \sqrt{2+ 2 \sqrt{2}} \right) \tag{2} \\[8pt] A(3) & = \frac1{\sqrt{3}} \ln \left( 1+ 2^{1/3} + 2^{2/3} \right) \tag{3} \end{align} And generally, it looks like $$A(y) = \frac1{\sqrt{y}} \,\ln u \tag{4}$$ where $u$ is the root closest to $1$ from above, of a polynomial $P_y$. I've checked dozens of different $y$'s and made a list of those polynomials - check this pastebin link. Some are missing, e.g. I could not find $P_6$. Others seem to follow patterns of their own, for example the Heegner numbers. Here's the polynomial for $y=163$: $$\small P_{163}(u) = u^{12} + 640314 u^{10} + 1280624 u^9 + 640287 u^8 - 1280736 u^7 - 2561412 u^6 - 1280736 u^5 + 640287 u^4 + 1280624 u^3 + 640314 u^2 + 1 = 0$$ Other interesting things to look at are the behaviour of $P_y(1)$ and $P_y(-1)$, with regard to $y \pmod{24}$, and approximations to $\pi$ which follow from terminating the infinite series at its first term. However, I have got no clue how to prove it. What would be a way to prove $(4)$? What can be said about the polynomials $P_y$? Also, can you help me find $P_6$, or other missing polynomials from my list? Edit. Finally, I was able to produce a closed form for this integral thanks to @DaveHuff's hints. The idea is to rewrite the infinite series as $$A(y) = \frac2{\sqrt{y}} \sum_{n=0}^{\infty} \tanh^{-1}\left( \dfrac{\cos \frac{\pi}{6} (2n+1)}{\cosh \frac{\pi \sqrt{y}}{6} (2n+1)}\right),$$ and then, using $\displaystyle \,\,\,\tanh^{-1}x = \frac12 \ln \left( \frac{1+x}{1-x} \right),$ proceed to factorize the summand and obtain $$\sqrt{y} \,A(y) = \sum_{n=1}^{\infty} \ln \left( \dfrac{(1-e^{5 \pi i n/6-\pi n\sqrt{y}/6})(1-e^{-5 \pi i n/6-\pi n\sqrt{y}/6})}{(1-e^{ \pi i n/6-\pi n\sqrt{y}/6})(1-e^{-\pi i n/6-\pi n\sqrt{y}/6})} \right),$$ which means: $$A(y) = \frac1{\sqrt{y}} \,\ln \left( \dfrac{\eta\left(\frac{i \sqrt{y}+5}{12}\right)\eta\left(\frac{i \sqrt{y}-5}{12}\right)}{\eta\left(\frac{i \sqrt{y}+1}{12}\right)\eta\left(\frac{i \sqrt{y}-1}{12}\right)}\right).$$ I still don't know enough eta quotient theory, so I don't know how to show that this eta quotient is in fact algebraic for every natural $y$ (let alone bring it to the implicit form in @TitoPiezasIII's answer), but this is still good progress. REPLY [5 votes]: Let $\color{blue}{\tau =\frac{1+\sqrt{-y}}{2}}$ and $y$ a positive integer. The well-known the j-function $j(\tau)$ would then be an algebraic number. 
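Before following this reply's argument, the closed forms $(1)$–$(3)$ claimed in the question are easy to test numerically. A small sketch using Python's mpmath is below (not part of the original answer); $\eta(ix)$ is computed from its $q$-product via mpmath's q-Pochhammer function, with the modular transformation $\eta(ix)=\eta(i/x)/\sqrt{x}$ used for small $x$ to keep the product well-conditioned. The precision setting is an arbitrary choice.

```python
from mpmath import mp, exp, pi, sqrt, log, quad, qp, inf

mp.dps = 25

def eta_ix(x):
    # Dedekind eta at tau = i*x; for small x use eta(i*x) = eta(i/x)/sqrt(x).
    if x < 1:
        return eta_ix(1/x) / sqrt(x)
    q = exp(-2*pi*x)
    return exp(-pi*x/12) * qp(q)      # qp(q) = (q; q)_infinity

def A(y):
    return quad(lambda x: eta_ix(x) * eta_ix(y*x), [0, inf])

print(A(1) - log(1 + sqrt(3) + sqrt(3 + 2*sqrt(3))))            # ~0
print(A(2) - log(1 + sqrt(2) + sqrt(2 + 2*sqrt(2))) / sqrt(2))  # ~0
```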
Consider the OP's relations, $$A(y) = \frac{2}{\sqrt{y}}\,\tanh^{-1}\sqrt{z-1} = \frac{1}{\sqrt{y}}\,\ln\frac{1+\sqrt{z-1}}{1-\sqrt{z-1}}$$ where, $$z=\frac{2}{k}\left(1-\sqrt{1-k+k^2}\right)$$ $$k =\frac{1}{4}e^{2\pi\, i /3}\left(\frac{\sqrt{2}\,\eta(2\tau)}{\eta(\tau)}\right)^8$$ It is known that, $$j(\tau) = \frac{(x+16)^3}{x}$$ where $x = \left(\frac{\sqrt{2}\,\eta(2\tau)}{\eta(\tau)}\right)^{24}$. So if $j(\tau)$ is an algebraic number, then so is $x$ and $z$. What remains (based on an update by the OP) is to show that, $$\frac{1+\sqrt{z-1}}{1-\sqrt{z-1}}=\frac{\eta\big(\tfrac{\tau+2}{6}\big)\,\eta\big(\tfrac{\tau-3}{6}\big)}{\eta\big(\tfrac{\tau}{6}\big)\,\eta\big(\tfrac{\tau-1}{6}\big)}\tag0$$ though this step seems difficult. An alternative way to show that $z$ also is an algebraic number is by directly expressing it in terms of $j(\tau)$ itself. Define, $$h = \big(\tfrac{1}{27}\,j(\tau)\big)^{1/3}\tag1$$ and the cubic in $v$, $$v^3-3h^2v-2(h^3-128)=0\tag2$$ The discriminant $D$ of this is $D=64-h^3$. Since $\tau=\frac{1+\sqrt{-y}}{2}$ and $y>3$ has negative $h$, this implies the cubic has only one real root. Using the real root $v$, then $z$ satisfies the simple relation, $$z^2-(h+v)(z-1)=4\tag3$$ Since $h$ is an algebraic number, then so is $z$. P.S. Of course, this is also another way to solve for $z$. However, the appropriate root of $(3)$ has to be used.<|endoftext|> TITLE: Self study Control Theory QUESTION [11 upvotes]: Please forgive the long setup but I think it is relevant to my question. I am a third year Electrical Engineering student (before dismissing me a an engineer please read the rest of the question) and I am planning on doing graduate studies in Control Theory. I find it really brings together pure math and some sort of distant application which is enough for me. As such I've taken the usual engineering math courses (Calculus, Linear Algebra, Complex Analysis, Dynamical Systems, a whole ton of Fourier analysis, PDEs, Probabilities and such) where they proceeded to completely disregard any rigor. The only thing close to rigorous math that I actually did was in our Algorithms course which was fascinating (P=NP, Graphs, etc) and actually satisfyingly rigorous. Anyways, I am now at a point where I want to strengthen my actual math knowledge and especially work towards a really good knowledge of Differential Geometry, Complex Analysis and Topology. As such I began studying the basics: real analysis with Chapman Pugh which I am really enjoying. However I would have appreciated some input on what you think is the best way to proceed from here. My plan was next to do Topology with Munkres, Abstract Algebra with Dummit (perhaps not everything but at the very least a good coverage of group theory) and sometime after Smooth Manifolds by Lee and Papa Rudin. What do you think? REPLY [7 votes]: I have a background in EE and I am studying control theory, therefore I have some different insight than someone from pure or applied mathematics. Let me preface my answer with a warning: Most people are turned on by the IDEA of studying complicated subjects, but these are the same people who are turned off actually having to do the work. You see these questions on Mathstackexchange all the time, people listing dozens of books each 700 pages long and asking whether if it is feasible to go through them during the course of their undergrad. I think engineering students in general are more prone to burn out while studying if the material is not tethered to applications. 
To gain a basic background in control theory for application or research, here are the courses that are a must:
- The basics: linear algebra, complex analysis, multivariable calculus, ODEs
- Frequency domain control theory and state space control theory
- Nonlinear dynamical systems
- Linear operator theory
These are the prerequisites that most control books will contain or teach you, such as the ones by Sontag, Dullerud, Sastry, Khalil, Vidyasagar. To do more advanced work you need a course on (usually in grad school):
- Optimization
- Real analysis and topology
- Classical mechanics
- Probability and random processes
These will cover optimal control, control of time-varying systems, robotics and stochastic control, and coming up with new results. There are other fields that I haven't even mentioned which require (more) sophisticated tools like finite automata, machine learning, etc. Now the question is which field you want to apply your control in. This opens the door to biology, quantum mechanics, circuit theory, signal and image processing. Hope this helps.<|endoftext|> TITLE: Prove that all nxn nilpotent matrices of order n are similar. QUESTION [5 upvotes]: I have to show that all $n \times n$ nilpotent matrices of order n are similar. My initial approach was to show that for all such nilpotent matrices the minimal polynomial is of the form: $$\lambda^n$$ Is this sufficient? Can someone show me a formal approach to this problem? Thanks! REPLY [6 votes]: Examples of nilpotent matrices of the same order that are not similar: $$ A=\begin{pmatrix} 0 & & & \\ & 0 & & \\ & & 0 & 1 \\ & & & 0 \end{pmatrix}, \ \ B=\begin{pmatrix}0 & 1& & \\ & 0 & & \\ & & 0 & 1 \\ & & & 0 \end{pmatrix}. $$ They both have $$ A\neq 0, \ B\neq 0 , \ A^2 = B^2 = 0. $$ However, these matrices $A$ and $B$ are not similar. Proof that the nilpotent $n\times n$ matrices of order $n$ are similar: for a nilpotent $n\times n$ matrix of order $n$, there is only one possible Jordan form. Since it is nilpotent, it has only $0$ as an eigenvalue. Since it is nilpotent of order $n$, it must be similar to the following Jordan block: $$ J(0, n) = \begin{pmatrix} 0 & 1 & & \cdots & \\ & 0 & 1 & \cdots & \\ &\cdots & \\ & & \cdots & 0 & 1 \\ & & \cdots & & 0 \end{pmatrix}. $$ If the matrix has all eigenvalues zero, and does not have the Jordan form as above, then the nilpotency order is less than $n$.<|endoftext|> TITLE: Is it possible to use polynomial interpolation to show that $\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^nx^{2n}}{(2n)!}$? QUESTION [5 upvotes]: Several months ago, I discovered that one can make use of a system of linear equations to obtain a polynomial that approaches certain functions. And I know this is the series representation: $$\displaystyle \cos(x) = 1 - {x^{2} \over 2!} + {x^{4} \over 4!} - \cdots = \sum_{n=0}^{\infty} \frac{(-1)^nx^{2n}}{(2n)!}$$ So I decided to test this and try to obtain this series with the background I learned above.
So I did the following: I'd need a systems of equations in the following form: $$\begin{cases} a_1 x_1+a_0=\cos(x_1) \\ a_1 x_2+a_0=\cos(x_2) \\ \end{cases}$$ $$\begin{cases} a_2 x^2_1+a_1 x_1+a_0=\cos(x_1) \\ a_2 x^2_2+a_1 x_2+a_0=\cos(x_2) \\ a_2 x^2_3+a_1 x_3+a_0=\cos(x_3) \\ \end{cases}$$ $$\begin{cases} a_3 x^3_1+a_2 x_1^2+a_1 x_1+a_0=\cos(x_1) \\ a_3 x^3_2+a_2 x^2_2+a_1 x_2+a_0=\cos(x_2) \\ a_3 x^3_3+a_2 x^2_3+a_1 x_3+a_0=\cos(x_3) \\ a_3 x^3_4+a_2 x^2_4+a_1 x_4+a_0=\cos(x_4)\\ \end{cases}$$ So, find a solution for $a_n$ should give me the coefficients of a polynomial that approaches the $\cos(x)$ at $x_n$. And hence, I did this: $$\begin{cases} a_0+\frac{\pi a_1}{4}=\frac{1}{\sqrt{2}} \\ a_0+\frac{\pi a_1}{2}=0 \\ \end{cases}$$ $$\begin{cases} a_0+\frac{\pi a_1}{4}+\frac{\pi ^2 a_2}{16}=\frac{1}{\sqrt{2}} \\ a_0+\frac{\pi a_1}{2}+\frac{\pi ^2 a_2}{4}=0 \\ a_0+\frac{3 \pi a_1}{4}+\frac{9 \pi ^2 a_2}{16}=-\frac{1}{\sqrt{2}} \\ \end{cases}$$ $$\begin{cases} a_0+\frac{\pi a_1}{4}+\frac{\pi ^2 a_2}{16}+\frac{\pi ^3 a_3}{64}=\frac{1}{\sqrt{2}} \\ a_0+\frac{\pi a_1}{2}+\frac{\pi ^2 a_2}{4}+\frac{\pi ^3 a_3}{8}=0 \\ a_0+\frac{3 \pi a_1}{4}+\frac{9 \pi ^2 a_2}{16}+\frac{27 \pi ^3 a_3}{64}=-\frac{1}{\sqrt{2}} \\ a_0+\pi a_1+\pi ^2 a_2+\pi ^3 a_3=-1 \\ \end{cases}$$ And (using Mathematica), I've found the solutions: $$\begin{array}{cc} a_0= \sqrt{2} & a_1= -\frac{2 \sqrt{2}}{\pi } \\ \end{array}$$ $$\begin{array}{ccc} a_0= \sqrt{2} & a_1= -\frac{2 \sqrt{2}}{\pi } & a_2= 0 \\ \end{array}$$ $$\begin{array}{cccc} a_0= 1 & a_1= \frac{2 \left(8 \sqrt{2}-11\right)}{3 \pi } & a_2= -\frac{16 \left(\sqrt{2}-1\right)}{\pi ^2} & a_3= \frac{32 \left(\sqrt{2}-1\right)}{3 \pi ^3} \\ \end{array}$$ My thinking is that as we put more equations in the system for more values of $\cos$, it will give a polynomial that better approaches $\cos$. 
And as I put more equations with more values for cosine at the system, the solutions were: $$\begin{array}{cc} a_0= \sqrt{2} & a_1= -\frac{2 \sqrt{2}}{\pi } \\ \end{array}$$ $$\begin{array}{ccc} a_0= \sqrt{2} & a_1= -\frac{2 \sqrt{2}}{\pi } & a_2= 0 \\ \end{array}$$ $$\begin{array}{cccc} a_0= 1 & a_1= \frac{2 \left(8 \sqrt{2}-11\right)}{3 \pi } & a_2= -\frac{16 \left(\sqrt{2}-1\right)}{\pi ^2} & a_3= \frac{32 \left(\sqrt{2}-1\right)}{3 \pi ^3} \\ \end{array}$$ $$\begin{array}{ccccc} a_0= 5-3 \sqrt{2} & a_1= -\frac{122-91 \sqrt{2}}{3 \pi } & a_2= -\frac{2 \left(129 \sqrt{2}-164\right)}{3 \pi ^2} & \\ a_3= \frac{16 \left(17 \sqrt{2}-22\right)}{3 \pi ^3} & a_4= -\frac{32 \left(3 \sqrt{2}-4\right)}{3 \pi ^4} \\ \end{array}$$ $$\begin{array}{cccccc} a_0= -5 \left(2 \sqrt{2}-3\right) & a_1= \frac{2 \left(707 \sqrt{2}-990\right)}{15 \pi } & a_2= -\frac{4 \left(222 \sqrt{2}-307\right)}{3 \pi ^2} &\\ a_3= \frac{8 \left(153 \sqrt{2}-214\right)}{3 \pi ^3} & a_4= -\frac{64 \left(12 \sqrt{2}-17\right)}{3 \pi ^4} & a_5= \frac{128 \left(7 \sqrt{2}-10\right)}{15 \pi ^5} \\ \end{array}$$ $$\begin{array}{ccccccc} a_0= 35-24 \sqrt{2} & a_1= \frac{8 \left(434 \sqrt{2}-615\right)}{15 \pi } & a_2= -\frac{4 \left(9014 \sqrt{2}-12725\right)}{45 \pi ^2} & \\ a_3= \frac{128 \left(31 \sqrt{2}-44\right)}{3 \pi ^3} & a_4= -\frac{32 \left(317 \sqrt{2}-452\right)}{9 \pi ^4} & a_5= \frac{1024 \left(7 \sqrt{2}-10\right)}{15 \pi ^5} & \\ a_6= -\frac{512 \left(7 \sqrt{2}-10\right)}{45 \pi ^6} \\ \end{array}$$ $$\begin{array}{cccccccc} a_0= -3 (16 \sqrt{2}-23) & a_1= \frac{2 \left(25220 \sqrt{2}-35733\right)}{105 \pi } & a_2= -\frac{4 \left(20270 \sqrt{2}-28671\right)}{45 \pi ^2} & \\ a_3= \frac{8 \left(19044 \sqrt{2}-26999\right)}{45 \pi ^3} & a_4= -\frac{32 \left(989 \sqrt{2}-1404\right)}{9 \pi ^4} & a_5= \frac{256 \left(360 \sqrt{2}-511\right)}{45 \pi ^5} & \\ a_6= -\frac{512 \left(55 \sqrt{2}-78\right)}{45 \pi ^6} & a_7= \frac{2048 \left(12 \sqrt{2}-17\right)}{315 \pi ^7} \\ \end{array}$$ Now, here I expected a miracle: $\quad \quad \quad\quad \quad\quad$ I expected that somehow, the coefficients of $a_n$ converged to the coefficients of the series I gave in the beginning (because of the better polynomial that approaches $\cos$) but just by looking at it, this doesn't seems to be the case. But I don't know how to prove this is actually not the case: Is it possible that the method I employed could yield the results I expect or is it impossible? I understand that my thinking is poorly justified and perhaps too vague but I want to know if something can be made with it or if it is possible to use polynomial interpolation to show that $\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^nx^{2n}}{(2n)!}$? REPLY [2 votes]: You can do what you wanted to do here with Mathematica. Unprotect[Power]; 0^0 = 1; ClearAll[sys, doit, a]; sys[n_?OddQ] := Table[With[{x = Pi (m - (n - 1)/2)/(n - 1)}, Sum[a[k] x^k/k!, {k, 0, n - 1}] == Cos[x]], {m, 0, n - 1}]; doit[n_?OddQ] := (soln = Solve[sys[n], Array[a, n, 0]][[1]]; f[x_] := Sum[a[k] x^k/k!, {k, 0, n - 1}] /. soln); doit[7]; Print[soln, N[Table[soln[[n, 2]], {n, 7}], 5]]; Plot[{Cos[x], f[x]}, {x, -Pi, Pi}] (* {a[0] -> 1, a[1] -> 0, a[2] -> (-517 + 270*Sqrt[3])/(5*Pi^2), a[3] -> 0, a[4] -> (-216*(-68 + 39*Sqrt[3]))/Pi^4, a[5] -> 0, a[6] -> (46656*(-26 + 15*Sqrt[3]))/Pi^6} {1.0000, 0, -0.99996, 0, 0.99789, 0, -0.93361} *) The purpose of 0^0=1 is to avoid a glitch. 
The sys[] function returns a list of polynomial equations, The main doit[] function solves the polynomials equations to find the coefficients. The doit[7] sets the function f[] to be a 6th degree polynomial with the coefficients found by solving the system of equations. The found coefficients are then printed and the Plot[] does a comparison of the actual Cos[] function with the found f[] polynomial function. The key idea here is to interpolate the $\,\cos\,$ function at a set of points uniformly distributed over the whole interval $\,[-\pi,\pi].$ The difference in the function and its polynomial interpolation is pretty good even for $\,n=7.$ In fact, the coefficients are getting very close to the actual coefficients of the Taylor series.<|endoftext|> TITLE: Constructing a cubic given four points QUESTION [8 upvotes]: Question: Is there an easier way to solve this problem? Suppose the polynomial $f(x)$ is of degree $3$ and satisfies $f(3)=2$, $f(4)=4$, $f(5)=-3$, and $f(6)=8$. Determine the value of $f(0)$. My Attempt: I started off with the general cubic $ax^3+bx^2+cx+d=f(x)$ and manually plugged in each point to get the following system:$$\begin{align*} & 27a+9b+3c+d=2\\ & 64a+16b+4c+d=4\\ & 125a+25b+5c+d=-3\\ & 216a+36b+6c+d=8\end{align*}\tag1$$ Solving the system with the handy matrix gives the solutions as $a=\frac 92,b=-\frac {117}2,c=245,d=-328$. Thus, $f(0)=-328$. Even though I (think) solved the problem correctly, this method seems a bit "bulky" especially when everything becomes a higher degree. So I'm wondering if there is a quicker way to evaluate this kind of problem. REPLY [5 votes]: You can directly find out the polynomial $f$ by considering it according as the $x$-values available: Let $f(x)=a_0+a_1(x-3)+a_2(x-3)(x-4)+a_3(x-3)(x-4)(x-5)$ for real constants $a_0,a_1,a_2,a_3$. Note that we don't need to take into account the value $x=6$ as this is already a cubic polynomial. Then, $f(3)=2\Rightarrow a_0=2$ $f(4)=4\Rightarrow a_1=2$ $f(5)=-3\Rightarrow a_2=-\frac{9}{2}$ $f(6)=8\Rightarrow a_3=\frac{9}{2}$ Thus, $f(x)=2+2(x-3)-\frac{9}{2}(x-3)(x-4)+\frac{9}{2}(x-3)(x-4)(x-5)$. I think the calculations are pretty simple this way as you have chosen $f$ to be such.<|endoftext|> TITLE: Counting outcomes for coin tosses QUESTION [6 upvotes]: Don't laugh, this is a dumb question, but my brain just doesn't work mathematically. A question in my math class says A coin is tossed 4 times. Compute the probability of at least 2 tails occurring. OK, so I know I figure out how many total events are in the sample, then figure out how many possible ways at least 2 tails are occurring, and divide. My problem is, I can NEVER seem to figure out how many total events there are! I start with HHHH, HHHT, HHTH, HTHH, and so on, but I always get lost somewhere along the way, miss an event, and never get them all. My book says there are 16 different possibilities. Is there a better way of figuring out how many different events could happen?? REPLY [2 votes]: Partition the possible results into three sets: $T_2$ (exactly two tails occur), $T_{>2}$ (more than two tails occur), and $T_{<2}$ (fewer than two tails occur). These sets are exclusive and exhaustive, so $P(T_2)+P(T_{>2})+P(T_{<2})=1$. 
Furthermore, $P(T_{>2})=P(H_{<2})$, and by symmetry, $P(T_2)+2P(T_{<2})=1$, so $P(T_{2})=1-2P(T_{<2})$ It’s not hard to find the probability P(T_2), which is the probability that the sequence of throws is an arrangement of $2$ $H$’s and $2$ $T$’s, of which there are $4\choose2$ many, giving a probability of ${4\choose2} / 2^4=\frac38$, so $P(T_{<2})=\frac{1-\frac38}2=\frac5{16}$, and therefore the probability of having not fewer than $2$ tails is $1-\frac5{16}=\frac{11}{16}$.<|endoftext|> TITLE: How to solve $\sqrt{6}\cdot x^4 - (\sqrt{3}+\frac{3}{2}\sqrt{2})x^2 +\frac{3}{2} = 0 $ QUESTION [5 upvotes]: Of course I could put this is mathematica/wolframalpha or use a formula, but I think in here is a trick how to solve it very simply, but I can't figure it out. Help/Hints very appreciated $\sqrt{6}\cdot x^4 - (\sqrt{3}+\frac{3}{2}\sqrt{2})x^2 +\frac{3}{2} = 0 \Leftrightarrow $ $x^2(\sqrt{6}x^2-\sqrt{3}+ \frac{3}{2}\sqrt{2})= -\frac{3}{2}\Leftrightarrow $ REPLY [5 votes]: As there is no $x^3$ or $x$ term you can treat it as a quadratic in $x^2$. Let $t=x^2$ as dvix suggested. $$\sqrt{6}\cdot t^2-\sqrt{3}t-\frac{3}{2}\sqrt{2}t+\frac{3}{2}=0$$ Then factorize as suggested by G.Sassetelli. $$\sqrt{3}t(\sqrt{2}t-1)-\frac{3}{2}(\sqrt{2}t-1)=0$$ $$\left(\sqrt{3}t-\frac{3}{2}\right)\left(\sqrt{2}t-1\right)=0$$ $$t=\frac{\sqrt{3}}{2}\text{ or }t=\frac{1}{\sqrt{2}}$$ Note: This makes me think the original question was trig related so it might have been useful to include your steps which let up to the equation you were trying to solve. $$x^2=\frac{\sqrt{3}}{2}\text{ or }x^2=\frac{1}{\sqrt{2}}$$ $$x=\frac{\sqrt[4]{3}}{\sqrt{2}}\text{ or }x=\frac{1}{\sqrt[4]{2}}$$ Or with rational denominators: $$x=\frac{\sqrt[4]{12}}{2}\text{ or }x=\frac{\sqrt[4]{8}}{2}$$<|endoftext|> TITLE: Calculate $\int_{0}^\infty\frac{dx}{\left(1+\frac{x^3}{1^3}\right)\left(1+\frac{x^3}{2^3}\right)\left(1+\frac{x^3}{3^3}\right)\ldots}$ QUESTION [22 upvotes]: I'm interested in the integral $$ I=\int_{0}^\infty\frac{dx}{\left(1+\frac{x^3}{1^3}\right)\left(1+\frac{x^3}{2^3}\right)\left(1+\frac{x^3}{3^3}\right)\ldots}.\tag{1} $$ So far I have been able to reduce this integral to an integral of an elementary function in the hope that it will be more tractable $$ I=\frac{8\pi}{\sqrt{3}}\int_{-\infty}^\infty\frac{e^{ix\sqrt{3}}\ dx}{\left(e^x+e^{-x}+e^{ix\sqrt{3}}\right)^3},\tag{2} $$ using the approach from this question. In that question it was also proved that $$ \int_{-\infty}^\infty\frac{dx}{\left(e^x+e^{-x}+e^{ix\sqrt{3}}\right)^2}=\frac{1}{3},\tag{3} $$ which gives some indication that the integral in the right hand side of $(2)$ might be calculable. Also note that the integrand in $(1)$ can be expressed as $$ \Gamma(x+1)\left|\Gamma\left(1+e^{\frac{2\pi i}{3}}x\right)\right|^2. $$ Bending the contour of integration in the integral on the RHS of $(2)$ one obtains an alternative representation $$ I=8\pi\int_0^\infty\frac{e^{x\sqrt{3}}~dx}{\left(2\cos x+e^{x\sqrt{3}}\right)^3}.\tag{4} $$ There are some calculable integrals containing the infinite product $\prod\limits_{k=1}^\infty\left(1+\frac{x^3}{k^3}\right)$, e.g. $$ \int_{0}^\infty\frac{\left(1-e^{\pi\sqrt{3}x}\cos\pi x\right)e^{-\frac{2\pi}{\sqrt{3}}x}\ dx}{x\left(1+\frac{x^3}{1^3}\right)\left(1+\frac{x^3}{2^3}\right)\left(1+\frac{x^3}{3^3}\right)\ldots}=0. $$ Q: Is it possible to calculate $(1)$ in closed form? 
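As a quick aside before the answer below: the gamma-function form of the integrand noted in the question, $\Gamma(x+1)\left|\Gamma\left(1+e^{2\pi i/3}x\right)\right|^2$, makes a high-precision numerical evaluation of $(1)$ straightforward. Here is a sketch with Python's mpmath (it only produces a numerical value; no closed form is claimed, and the precision setting is arbitrary):

```python
from mpmath import mp, gamma, exp, pi, quad, inf

mp.dps = 30

omega = exp(2j * pi / 3)   # primitive cube root of unity

def integrand(x):
    # 1 / prod_{k>=1} (1 + x^3/k^3) = Gamma(1+x) * |Gamma(1 + omega*x)|^2 for real x >= 0
    return gamma(1 + x) * abs(gamma(1 + omega * x))**2

print(quad(integrand, [0, inf]))   # numerical value of the integral I
```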
REPLY [3 votes]: Define: $$ Q^n(x) = \prod_{a=1}^n\left(1-\left(\frac{x}{a}\right)^3\right) = \left(1-\left(\frac{x}{a}\right)^3\right) \cdot\prod_{b\ne a}\left(1-\left(\frac{x}{b}\right)^3\right) = \left(1-\left(\frac{x}{a}\right)^3\right)\cdot Q_a(x) $$ using: $$\begin{align*} (a) && & 1-\left(\frac{x}{a}\right)^3 = 0 \Leftrightarrow x=az_i, \; (z_i)^3=1,\; i=0,1,2 \\ (b) && & \frac{d}{dx}\left(1-\left(\frac{x}{a}\right)^3\right) = -3\frac{x^2}{a^3}; \; (x=az_i)\; -3\frac{\bar{z}_i}{a} \\ (c) && & (b\ne a) \Rightarrow1-\left(\frac{az_i}{b}\right)^3 = 1-\left(\frac{a}{b}\right)^3 \\ (d) && & \frac{d}{dx}Q^n(x)=\left(1-\left(\frac{x}{a}\right)^3\right)Q'_a(x) - 3\frac{x^2}{a^3}Q_a(x) \\ (a,b,c,d) \Rightarrow (e) && & Q'(az_i)=0-3\frac{\bar{z}_i}{a}Q_a(az_i) = -3\frac{\bar{z}_i}{a} \prod_{b\ne a}{\left(1-\left(\frac{a}{b}\right)^3\right)} = -3\frac{\bar{z}_i}{a}P(a) \end{align*}$$ $$\begin{align*} \frac1{Q^n(x)} && & \stackrel{pfd}{=} \sum_{a=1}^n{\sum_{i=0}^{2}{\frac1{(x-az_i)\cdot Q'(az_i)}}} \\ && & \stackrel{(e)}{=} \sum_{a=1}^n{\sum_{i=0}^{2}{\frac1{-3\bar{z_i}(x-az_i)} \cdot \frac{a}{P(a)}}} \\ && & = \sum_{a=1}^n{\frac{a}{P(a)}\sum_{i=0}^{2}{\frac{z_i}{-3(x-az_i)}}}\\ && & = \sum_{a=1}^n{\frac{a}{P(a)}\cdot\frac{a^2}{a^3-x^3}}\\ \end{align*}$$ $$\begin{align*} \int_0^{\infty}{\frac1{Q^n(-x)}dx} && & = \int_0^{\infty}{ \sum_{a=1}^n{\frac{a}{P(a)}\cdot\frac{a^2}{a^3+x^3}}dx} \\ && & = \sum_{a=1}^n{\frac{a}{P(a)}\cdot\int_0^{\infty}{\frac{a^2}{a^3+x^3}dx}}\\ && & = \sum_{a=1}^n{\frac{a}{P(a)}\cdot\frac{2\pi}{3\sqrt{3}}} \\ \end{align*}$$ $$\begin{align*} \lim_{n\to\infty}\int_0^{\infty}{\frac1{Q^n(-x)}dx} && & = \lim_{n\to\infty}\sum_{a=1}^n{\frac{a}{P(a)}\cdot\frac{2\pi}{3\sqrt{3}}}\\ && & = \frac{2\pi}{3\sqrt{3}}\lim_{n\to\infty}\sum_{a=1}^n{\frac{a}{P(a)}}\\ \end{align*}$$ Answer: $$\frac{2\pi}{3\sqrt{3}}\sum_{a=1}^{\infty}{a\prod_{b\ne a}{\left(1-\left(\frac{a}{b}\right)^3\right)^{-1}}}$$<|endoftext|> TITLE: Evaluate limit. QUESTION [5 upvotes]: Let $f : \mathbb R \to \mathbb R$ be differentiable at $x = a$. Evaluate: $$ \lim_{n\to \infty}\large[{f(a +\frac{1}{n^2})}+{f(a +\frac{2}{n^2})}+...+{f(a +\frac{n}{n^2})}-nf(a)] $$ Answer: $\frac{1}{2}f'(a)$ My attempt: $$ \lim_{n\to \infty}\large[{f(a +\frac{1}{n^2})}-f(a)+{f(a +\frac{2}{n^2})}-f(a)+...+{f(a +\frac{n}{n^2})}- f(a)] $$ I don't know how to proceed from here. Please just give me a hint; I want to solve this question by myself. Thank you. REPLY [3 votes]: From Taylor's Theorem with the Peano remainder, $$\bbox[5px,border:2px solid #C0A000]{f(a+k/n^2)-f(a)=f'(a)\frac k{n^2}+h(k/n^2)\frac k{n^2}} \tag 1$$ where $\displaystyle \lim_{k/n^2\to 0}h(k/n^2)=0$. Using $(1)$, we can write $$\begin{align} \sum_{k=1}^n f(a+k/n^2)-nf(a)&=\frac1{n^2}\sum_{k=1}^n kf'(a)+\frac{1}{n^2}\sum_{k=1}^nkh(k/n^2)\\\\ &=\frac{n(n+1)}{2n^2}f'(a)+\frac1{n^2}\sum_{k=1}^n kh(k/n^2) \tag 2 \end{align}$$ It is easy to see that the limit of the first term on the right-hand side of $(2)$ is $\frac12f'(a)$. Next, note that for all $\epsilon>0$ there exists a number $\delta(\epsilon)>0$ such that for $k/n^2<\delta(\epsilon)$, $|h(k/n^2)|\le \epsilon$. Therefore, given $\epsilon>0$, $$\left|\frac{1}{n^2}\sum_{k=1}^n kh(k/n^2)\right|<\epsilon \frac{n(n+1)}{2n^2}\le \epsilon$$ whenever $k/n^2\le 1/n<\delta$. Therefore, we have $$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty }\sum_{k=1}^n f(a+k/n^2)-nf(a)=\frac12 f'(a)}$$ And we are done!<|endoftext|> TITLE: k piles with $k(k+1)/2$ balls QUESTION [8 upvotes]: Given $\frac{k(k+1)}{2}$ balls arranged in $m$ piles.
A player picks a ball from each pile and creates a new pile. As a result, piles with one ball disappear, and a new pile with $m$ balls is created. For example ($k=2$), if we sort the piles by their size: $$(1,1,1)\rightarrow (3) \rightarrow (1,2) \rightarrow (1,2)$$ It is easy to see that for any $k$: $$(1,2,\dots,k)\rightarrow (1,2,\dots,k)$$ Let $(1,2,\dots,k)$ be denoted as the stationary state. Show that no matter what the initial configuration of piles is, the pile configuration will converge to the stationary state. REPLY [4 votes]: Thanks to the comment by Michael Lugo, I found a paper covering the "Bulgarian solitaire" problem and the solution. I knew the solution is based on setting up a function that decreases in every move; however, I was having trouble coming up with the function. The function the paper describes is the potential energy (height times mass times g) of the balls, when the piles are vertical and placed in a box, in the case that the box is rotated to 45 degrees. See the full paper here. I've tested their solution in Python and it works: f = lambda x: sum([(len(x)-i)*t+(t+1)*t/2.0 for i,t in enumerate(x)]) where x is a sorted tuple that describes how many balls are in each pile. You can see my full code here.<|endoftext|> TITLE: On Selberg's Proof of the formula for the Selberg Integral QUESTION [7 upvotes]: I was looking at the proof of Selberg's Integral Formula, which is given below: Selberg Integral Formula. Let $$\Delta(x_1,\ \cdots,\ x_n)\equiv\Delta(\vec{x}) = \prod_{1\le i<j\le n}(x_i - x_j)$$<|endoftext|> TITLE: How to determine the number of coin tosses to identify one biased coin from another? QUESTION [6 upvotes]: If coin $X$ and coin $Y$ are biased, and have the probability of turning up heads at $p$ and $q$ respectively, then given one of these coins at random, how many times must coin A be flipped in order to identify whether we're dealing with coin $X$ or $Y$? We assume a 0.5 chance that we can get either coin. REPLY [2 votes]: If you know beforehand that $p=1$ and $q=0$ or vice versa then one flip is enough. If you know beforehand that $p=1$ and $0<q<1$<|endoftext|> TITLE: Purpose of Inverse Functions QUESTION [21 upvotes]: Finding inverse functions and understanding their properties is fairly basic within mathematics. During my studies it was found fairly simple and easy to comprehend that it was a swapping of the outputs and inputs of a function. But now it has reappeared in calculus as finding the derivative of inverse functions and has me thinking what is the actual real world application of inverses. Like how studying quadratics is extremely useful in modeling objects in free fall. I know my question may seem trivial to many here, I'm a high school student and we never get to have these discussions in class. REPLY [2 votes]: Another more complex set of examples comes from the area of study usually called the Time Value of Money. This problem domain is full of functions and inverses that allow you to compute missing terms of a loan from various viewpoints. For example, given a sum to borrow, say, $1 million, an interest rate, say, 10%, and a number of periods (months or years), how much must I pay each period to retire (pay off) the loan? Conversely, if you want to pay $25,000 per period, how many payments must you make to retire the debt?<|endoftext|> TITLE: How do you prove this very different method for evaluating $\sum_{k=1}^nk^p$ QUESTION [8 upvotes]: I found the following formula in my previous question.
This differs from my previous question in that I want an alternative proof of the below recursive formula for calculating $\displaystyle\sum_{k=1}^nk^p$. Suppose I had a function recursively defined as $$f(x,p)=a_px+p\int_0^xf(t,p-1)dt$$ $$a_p=1-p\int_0^1f(t,p-1)dt$$ For $p\in\mathbb N$. For $p=0$, we trivially get $f(x,0)=x$, which shall be our initial condition. It can then be noticed that $$a_1=1-\int_0^1tdt=\frac12$$ $$f(x,1)=\frac12x+\int_0^xtdt=\frac12x+\frac12x^2$$ $$a_2=1-2\int_0^1\frac12t+\frac12t^2dt=\frac16$$ $$f(x,2)=\frac16x+\int_0^x\frac12t+\frac12t^2dt=\frac16x+\frac14x^2+\frac16x^3$$ And the general pattern is $f(x,p)=\sum_{k=1}^xk^p$ whenever $x\in\mathbb N\quad(?)$ How do I prove that whenever $x\in\mathbb N$ $$f(x,p)=\sum_{k=1}^xk^p$$ without applying the methods mentioned in the link above? REPLY [2 votes]: Define $$B_p(x):=p\left(f(x,p-1)-x^{p-1}\right)+a_p$$ then $$B'_p(x)=p\left(f'(x,p-1)-(p-1)x^{p-2}\right)=p\left [\left(a_{p-1}x+(p-1)\int_0^xf(t,p-2)\textrm{d}t\right)^{'}-(p-1)x^{p-2} \right]=p\left(a_{p-1}+(p-1)f(x,p-2)-(p-1)x^{p-2}\right)=pB_{p-1}(x)$$ and $$\int_0^1B_p(t)\textrm{d}t=\int_0^1\left[p\left(f(t,p-1)-t^{p-1}\right)+a_p\right]\textrm{d}t=p\int_0^1f(t,p-1)\textrm{d}t-p\int_0^1t^{p-1}\textrm{d}t+a_p=1-a_p-p\frac{1}{p}+a_p=0$$ With the starting value $B_0(w)=1$ this uniquely determines the sequence of Bernoulli polynomials see Koenigsberger, K. (2003) Analysis 1: Springer. It follows that $$B_p(0)=p\left(f(0,p-1)-0^{p-1}\right)+a_p=a_p=B_p$$ and $$\frac{B_{p+1}(x+1)-B_{p+1}}{p+1}=\frac{(p+1)(f(x+1,p)-(x+1)^p)+a_{p+1}-a_{p+1}}{p+1}=f(x+1,p)-(x+1)^p=\sum_{k=1}^xk^p.$$<|endoftext|> TITLE: Darboux coordinate for contact geometry QUESTION [5 upvotes]: I'm reading Geiges' notes. (https://arxiv.org/pdf/math/0307242.pdf) In the proof of Theorem 2.44 on page 17, the existence of the contact version Darboux coordinate is reduced to solving $H_t$ for each $t$, the PDE near the origin of $\mathbb{R}^{2n+1}$ $$\dot{\alpha}_t (R_{\alpha_t})+dH_t(R_{\alpha_t} )= 0$$ where $\alpha_t$ is a $1$-parameter family of contact forms and $R_{\alpha_t}$ is the corresponding reeb vector field. And he said that this equation always has a solution by integration if the neighborhood is small enough so that $R_{\alpha_t}$ has no closed orbit. My question is why this is obvious? What I know is that this equation is a quasilinear first order PDE and can possibly be solved by the method of characteristics. But I can't find a reference that contains a clear statement when this kind of equation can be solved. Thank you. REPLY [2 votes]: First, let us prove the following general result: Proposition. Let $M$ be a manifold and $p\in M$, let $X$ be a vector field on $M$ and let $g\colon M\rightarrow\mathbb{R}$ smooth. If $X(p)\neq 0$, then there exists $U$ an open neighborhood of $p$ in $M$ and $f\colon U\rightarrow\mathbb{R}$ smooth such that: $$\mathrm{d}f(X)=g_{\vert U}.$$ Furthermore, if $g(p)=0$, one may assume that $f(p)=0$ and $\mathrm{d}f_p=0$. Proof. Using the straightening theorem, there exists $(U,\phi)$ a chart of $M$ around $p$ such that $\phi_*X=\frac{\partial}{\partial x_1}$. 
Furthermore, one may assume that for $(x_1,\ldots,x_n)\in\phi(U)$, one has the following property: $$s\in[\min(0,x_1),\max(0,x_1)]\Rightarrow(s,x_2,\ldots,x_n)\in\phi(U)\tag{$\star$}.$$ Therefore, one can define a smooth map $F\colon\phi(U)\rightarrow\mathbb{R}$ by the following formula: $$F(x_1,\ldots,x_n):=\int_{0}^{x_1}g(\phi^{-1}(s,x_2,\ldots,x_n))\,\mathrm{d}s.$$ Notice that one has $F(0)=0$ and $\frac{\partial F}{\partial x_1}=g\circ\phi^{-1}$. In this paragraph, assume that $g(p)=0$; then there exist constants $a_2,\ldots,a_n$ such that $\mathrm{d}F_0=\sum\limits_{i=2}^na_i\mathrm{d}x_i$, and let us define $\widetilde{F}\colon\phi(U)\rightarrow\mathbb{R}$ in the following fashion: $$\widetilde{F}(x_1,\ldots,x_n):=F(x_1,\ldots,x_n)-\sum_{i=2}^na_ix_i.$$ Notice that $\widetilde{F}(0)=0$, $d\widetilde{F}_0=0$ and $\frac{\partial\widetilde{F}}{\partial x_1}=g\circ\phi^{-1}$, so that one can assume that $F(0)=0$ and $\mathrm{d}F_0=0$. Finally, with $f=F\circ\phi$, using the chain rule, for all $x\in U$, one has: $$\mathrm{d}f_x(X(x))=(\mathrm{d}F_{\phi(x)}\circ T_x\phi)(X(x))=\mathrm{d}F_{\phi(x)}\left(\frac{\partial}{\partial x_1}_{\big\vert\phi(x)}\right)=\frac{\partial F}{\partial x_1}_{\big\vert\phi(x)}=g(x).$$ In addition, if $g(p)=0$, then one has $\mathrm{d}f_p=\mathrm{d}F_0\circ T_p\phi=0$. Whence the result. $\Box$ Remark. When I say that $(U,\phi)$ is a chart around $p$, I mean $\phi(p)=0$. Remark. Let $\|\cdot\|$ be the product norm on $\mathbb{R}^n$; since $\phi(U)$ is an open set containing $0$, there exists $\varepsilon>0$ s.t. $B(0,\varepsilon)\subset \phi(U)$. Let $x\in B(0,\varepsilon)$ and $s\in[\min(0,x_1),\max(0,x_1)]$, then notice that $|s|\leqslant|x_1|$, so that one has: $$(s,x_2,\ldots,x_n)\in B(0,\varepsilon).$$ Therefore, shrinking $U$ to $\phi^{-1}(B(0,\varepsilon))$ establishes the technical assumption $(\star)$. For all $t\in[0,1]$, since $R_{\alpha_t}(0)\neq 0$ (for example, $R_{\alpha_t}(0)\not\in\xi_0$), applying the result to the map $-\dot{\alpha_t}(R_{\alpha_t})$, there exists $U_t$ an open neighborhood of $0$ in $\mathbb{R}^{2n+1}$ and a map $H_t\colon U_t\rightarrow\mathbb{R}$ such that: $$\dot{\alpha_t}(R_{\alpha_t})+\mathrm{d}H_t(R_{\alpha_t})=0.$$ To conclude, one has to see that there exists a single neighborhood on which all $H_t$, $t\in[0,1]$ are defined. For all $t\in[0,1]$, let us define the following time of existence: $$\varepsilon_t:=\sup\{\varepsilon>0\textrm{ s.t. }B(0,\varepsilon)\subset U_t\},$$ then $t\mapsto\varepsilon_t$ is a lower semicontinuous function and by compactness of $[0,1]$, one has $\varepsilon:=\inf\limits_{t\in[0,1]}\varepsilon_t>0$. Finally, for all $t\in[0,1]$, $H_t$ is defined on $B(0,\varepsilon)$.<|endoftext|> TITLE: Limits of a distribution property QUESTION [6 upvotes]: I've been cracking my brain for several days now trying to solve this exercise. I'll first state the problem and then clarify what I know and what I've tried to do. So, we are given a real-valued r.v. on some probability space s.t. $\mathbb E(|X|)<\infty$ and by $F$ we denote the distribution function. What we need to show is $$\lim_{z \ \rightarrow \ - \ \infty}z F(z) = \lim_{z \ \rightarrow \ + \ \infty}z (1-F(z)) = 0$$ We know that $\lim_{z \ \rightarrow \ - \ \infty}F(z) = 0$ and $\lim_{z \ \rightarrow \ + \ \infty}F(z) = 1$ and what I thought of was to somehow show that the distribution function increases to $1$ faster than $z$ goes to infinity. Also, at this step we still don't know whether a density exists (it will be mentioned later in the exercise).
I realize that the solution might be quite simple but I'm really stuck with it. Any help would be highly appreciated. REPLY [6 votes]: It suffices to prove that $$\tag{*}\lim_{z\to +\infty}z\Pr\left(\left|X\right|\gt z\right)=0,$$ since the two wanted limits are, up to sign, those of $z\Pr\left(X\gt z\right)$ and $z\Pr\left(-X\gt z\right)$ as $z$ goes to $+\infty$. The convergence (*) follows from Markov's inequality: $$z\Pr\left(\left|X\right|\gt z\right) \leqslant \mathbb E\left[\left|X\right|\mathbf 1\left\{\left|X\right| \gt z\right\}\right]$$ and then use monotone convergence.<|endoftext|> TITLE: The limit of $(1+\frac1x)^x$ as $x\to\infty$ QUESTION [6 upvotes]: I was experimenting with a graphing calculator to check $$\lim_{x\to\infty} (1+\frac1x)^x=e$$ I plotted the graph of $y=(1+\frac1x)^x$, and I was surprised when I zoomed out enough. It is oscillating until it reaches $x=9\times 10^{15}$, where $y$ becomes $1$ and stays there afterwards. How could we explain this? REPLY [14 votes]: Very easily: your calculator is using double-precision floating-point arithmetic. So the values of $1+\dfrac1x$ are truncated to $53$ bits and the larger $x$, the more significant bits are lost. You are actually computing $(1+\epsilon)^x$ where $\epsilon$ is a truncation of $\dfrac1x$ to a multiple of $2^{-53}$. In the intervals such that $\epsilon$ remains constant, you get an exponential curve. When you exceed $x=2^{53}\approx9.0072\times10^{15}$, then $\epsilon=0$. As you can verify, the previous jumps occur precisely at $$\epsilon=3\cdot2^{-53}\to x\approx3.0024\times10^{15},$$ $$\epsilon=6\cdot2^{-53}\to x\approx1.5012\times10^{15},$$ and probably $$\epsilon=7\cdot2^{-53}\to x\approx1.2867\times10^{15}.$$ The particular integers that appear depend on the details of the division algorithm used (with possible guard bits and rounding). For the computation of the powers, logarithms are used, yielding the formula $$\left(1+\frac1x\right)^x\approx e^{x\ln(1+\epsilon)}.$$ As $\ln(1+\epsilon)\approx\epsilon$ in theory, we can expect that $\ln(1+\epsilon)$ is evaluated as $\eta$, a value close to $\epsilon$, but not necessarily a simple multiple (also because of the inner details of the algorithm). At the jump points, $x=1/\epsilon$ and the function value is $e^{\eta/\epsilon}$. This explains why the largest value is exactly $e^2=7.3890\cdots$, due to $\epsilon=2^{-53}$ and $\eta=2\cdot2^{-53}$. The two values at $\epsilon=3\cdot2^{-53}$ must be exactly $e^{2/3}$ and $e^{4/3}$, corresponding to $\eta=2\cdot2^{-53}$ and $4\cdot2^{-53}$ respectively. The values at $\epsilon=6\cdot2^{-53}$ can be conjectured to be $e^{19/24}$ and $e^{29/24}$ (from $\eta=4.75\cdot2^{-53}$ and $7.25\cdot2^{-53}$), but this is less sure. Below is a qualitative simulation of this phenomenon with the formula $$e^{\frac x{64}\left[\frac{64}x\right]}$$ where the brackets denote a rounding operation. The "envelope" curves are the exponentials $e^{1\pm x/128}$.
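The truncation effect described in this answer is easy to reproduce directly. Here is a minimal Python illustration (not from the original answers) that prints the stored increment $\epsilon$ and the computed value of $(1+1/x)^x$ for a few large $x$, including one beyond $2^{53}$; the particular sample values of $x$ are arbitrary choices.

```python
# In IEEE double precision, 1 + 1/x is rounded before exponentiation, so the
# computed (1 + 1/x)**x depends only on the rounded increment eps = (1 + 1/x) - 1.
for x in [1.0e15, 2.0e15, 3.5e15, 6.0e15, 8.9e15, 9.1e15, 1.0e16]:
    base = 1.0 + 1.0 / x
    eps = base - 1.0                   # the increment actually stored
    print(f"x = {x:.2e}   eps = {eps:.3e}   (1+1/x)^x = {base**x:.6f}")
```

Past $x = 9.1\times10^{15} > 2^{53}$ the increment collapses to $0$ and the printed value is exactly $1.000000$, matching the plateau the asker observed.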
You can note that on each of the separate continuous parts, the function looks very much like $a^x$ would for some constant $a$. After $9\cdot 10^{15}$, the program just gives up completely, says "as far as I'm concerned, $1 + \frac1x = 1$" and your function becomes constant.<|endoftext|> TITLE: What conditions on a map of schemes guarantee that pullback of global sections is injective? QUESTION [7 upvotes]: Consider a morphism of schemes $f: X \rightarrow Y$. What conditions on $f, X, Y$ are sufficient to guarantee that $H^0(Y, \mathscr{F}) \rightarrow H^0(X, f^* \mathscr{F})$ is injective for All $\mathscr{O}_Y$ modules $\mathscr{F}$? All (quasi-)coherent $\mathscr{O}_Y$ modules $\mathscr{F}$? A necessary condition for the first and second questions is that $f$ is surjective on closed points, since if $y \in Y \setminus f(X)$ is closed, we can take $\mathscr{F} = i_* \mathscr{O}_{k(y)}$. Then for any $x \in X$, $(f^* \mathscr{F})_x \simeq \mathscr{F}_{f(x)} = 0$, so $f^* \mathscr{F} = 0$. But $H^0(Y, \mathscr{F}) = k(y) \neq 0$. In the case that $\mathscr{F}$ is a locally free sheaf, $f$ is quasi-compact and $Y$ is reduced, this condition is also sufficient (and in fact, we only need $f$ to be dominant). To see this, the map $H^0(Y, \mathscr{F}) \rightarrow H^0(X, f^* \mathscr{F})$ is the global part of the canonical map of sheaves $\mathscr{F} \rightarrow f_* f^* \mathscr{F}$. Let $U$ be an affine open set such that $\mathscr{F}|_U \simeq \mathscr{O}_U$. Then on $U$, the canonical map is identified with the structure map $\mathscr{O}_U \rightarrow f_* \mathscr{O}_{f^{-1}(U)}$, which is injective since $\mathscr{O}_Y \rightarrow f_* \mathscr{O}_X$ is injective under these hypotheses. Is this condition also sufficient for the first two questions? The affine version of the question is: Which extensions of rings $A \subseteq B$ have the property that for any (resp. any finitely presented) $A$-module $M$, the map $M \rightarrow M \otimes_A B$ is injective? Since this always holds when $M$ is flat (which implies locally free in the finitely presented case), it seems natural to guess that we should require $B$ to be faithfully flat over $A$. However, I cannot find a way to use this property or a counterexample when $\mathrm{Spec} \ B \rightarrow \mathrm{Spec} \ A$ is surjective but $B$ is not flat over $A$. EDIT I found a partial answer to this question in EGA IV-2 2.2.8: if $X, Y$ are arbitrary and $f$ is faithfully flat, then the canonical morphism is injective for all sheaves of quasi-coherent modules. This goes by identifying the global sections of $\mathscr{F}$ with morphisms $u: \mathscr{O}_Y \rightarrow \mathscr{F}$ and noting that the canonical map agrees with the map $u \rightarrow f^*(u)$. Faithful flatness says that if $f^*(u) = 0$ then $u = 0$. Actually, Remark 2.2.9 says that the $\mathscr{O}_Y$-module $\mathscr{F}$ does not need to be quasi-coherent for this proof to go through, although the proof is unclear to me. Also, Alex provided a counterexample in the case that $f$ is surjective but not flat. So now, the remaining question is: If $X$ and $Y$ are "nice enough", does $f$ really have to be flat? EDIT 2 Here is an easy counterexample when $X, Y$ are both smooth but not integral. Let $Y = \mathbf{A}^1$, $X = (\mathbf{A}^1 \setminus \{0\}) \sqcup \{0\}$, and $f$ the map given on rings by $k[x] \rightarrow k \times k[x,x^{-1}]$, $x \mapsto (0, x)$. Then let $\mathscr{F}$ be the coherent sheaf corresponding to the module $k[x]/x^2$. 
$f^* \mathscr{F} \simeq k$, since $(0,1) = x^2 * x^{-2}$ is killed. Then, $x$ maps to $0$ in the global section map. REPLY [3 votes]: If $Y$ is Noetherian, so that any quasi-coherent sheaf can be written as a direct limit of coherent sheaves, then we see that the questions for coherent and quasi-coherent sheaves are equivalent. Furthermore, the pushforward of a quasi-coherent sheaf on any open $U\subset Y$ is quasi-coherent on $Y$, and so the question for $f:X \to Y$ is equivalent to the analogous question for each $f^{-1}(U) \to U$, as $U$ ranges over the members of an open cover of $Y$, say an affine open cover. If $f$ is quasi-compact (e.g. if $X$ is also Noetherian), then $f^{-1}(U)$ is a union of finitely many affines, and so we may find a surjective map $V \to f^{-1}(U)$ with affine source which is locally on the domain an open immersion, and so the question reduces to the case of $V \to U$, with both $U$ and $V$ affine. In this case, as has basically been observed already, a sufficient condition is that the corresponding ring map $A \to B$ admit a section (as a morphism of $A$-modules), or even admits a section after a faithfully flat base-change. (Thus if $X\to Y$ is faithfully flat, we get a positive answer, because if $A \to B$ is faithfully flat, then the base-changed morphism $B \to B\otimes_A B$ admits a section, even as a map of rings, namely the multiplication map $B\otimes_A B \to B$.) There are other cases where one necessarily has sections: e.g. Hochster's direct summand conjecture, whose proof was recently completed by Andre (see here), shows that if $R \subset S$ is a finite extension with $R$ a Noetherian regular ring, then the inclusion splits as a map of $R$-modules. So if $X\to Y$ is a finite surjective morphism of Noetherian schemes, with $Y$ furthermore regular, then the question has a positive answer (but you should check that my reductions to the affine case are correct!).<|endoftext|> TITLE: A non-Vandermonde matrix with Vandermonde-like determinant? QUESTION [6 upvotes]: This question is related to the previous one. Consider $n$ variables $x_1,x_2,\ldots,x_n$ and the following $n\times n$ matrix: $$ A=\begin{bmatrix} 1 & \cdots & 1 \\ x_2 + x_3 + \dots + x_n & \dots & x_1 + x_2 + \dots + x_{n-1} \\ x_2{x_3} + x_2{x_4}+ \dots + x_{n-1}x_n & \dots & x_1{x_2} + x_1{x_3}+ \dots + x_{n-2}x_{n-1 } \\ \vdots & \dots & \vdots\\ x_2 x_3 \dots x_n & \dots & x_1 x_2 \dots x_{n-1} \\ \end{bmatrix}. $$ When $i>1$, the element $a_{ij}$ is the sum of all possible products of $i-1$ variables $x_k$'s with distinct indices, except that $x_j$ is not participating in any term on column $j$. Formally, $$ a_{ij}=\sum_{\substack{k_1<\cdots<k_{i-1}\\ k_m\neq j}} x_{k_1}\cdots x_{k_{i-1}}. $$ For $j>1$, we have $$ a_{ij}-a_{i1}=(x_1-x_j)\sum_{\substack{k_1<\cdots<k_{i-2}\\ k_m\notin\{1,j\}}} x_{k_1}\cdots x_{k_{i-2}}. $$<|endoftext|> TITLE: Is there a primitive recursive function which gives the nth digit of $\pi$, despite the table-maker's dilemma? QUESTION [9 upvotes]: I asked a question about this a while ago and it got deleted, so I've looked into it a bit more and I'll explain my problem better. Planetmath.org told me that there is a primitive recursive function which gives the nth digit of $\pi$, but didn't prove it. When I asked before how one might prove it, a couple of people suggested using Gregory's series.
Now I can see that if you know that to find the nth digit of $\pi$ you need to calculate it to within $10^{-m}$ where m is a p.r.f of n, you can define $$\lfloor{{10}^n\pi}\rfloor = \lfloor\frac{4\sum_{i=1}^{5\times10^{m-1}} \lfloor{\frac{10^{2m+1}}{2i-1}}\rfloor(-1)^{i-1}}{10^{2m+1-n}}\rfloor$$ and then you're basically there. The problem is this: can you ever find an m such that calculating pi to accuracy $10^{-m}$ gives you the nth digit correct with probability 1? Isn't there always a small probability that the digits between the nth and the mth would be all 9 or all 0, and so you could still get the nth digit wrong, because say they were all 9, you could have calculated a number which had the nth digit one higher, say ...300000... instead of ...299999... which would still be accurate to within $10^{-m}$. In fact if as is suspected $\pi$ is normal, doesn't the sequence of n nines occur an infinite number of times for any n? This problem is called the table-maker's dilemma, but I haven't found it explicitly mentioned in this context. So, my question is, is it the case that either a) you can't really define a primitive recursive function using an arithmetic series like this, or b) there is actually some way of finding m as a function of n. Thanks! REPLY [6 votes]: If we simply want the function to be computable (i.e. "recursive" in the jargon of computability theory), this is not a problem. We have a modulus of convergence for a series for $\pi$, so we know an error bound on each partial sum, with those error bounds going to zero. In effect, we have a computable sequence of nested rational intervals, whose intersection is just $\pi$. So, if we simply compute better and better approximations, because $\pi$ is irrational we will eventually see that $\pi$ is greater than, or less than, any particular rational number $r$, because $r$ will eventually fall outside one of the intervals in our sequence. Because finite decimal expansions give rational numbers, this means we can pin down the precise decimal expansion by repeatedly finding the next digit in this way. However, this process is not obviously primitive recursive, because there is no bound on how small the interval will need to be before it excludes our target rational $r$. This is exactly the issue mentioned in the question, where a long string of $9$s in an approximation could eventually cause a change in a much earlier digit of the expansion. The algorithm I just mentioned uses an unbounded search to find an interval that excludes $r$, while primitive recursive methods are not generally able to perform unbounded searches. In general, if we look at numbers other than $\pi$, it is not clear at all that we could convert an arbitrary Cauchy sequence of rationals with a given modulus of convergence into a decimal expansion via a single primitive recursive process. So, if the decimal expansion of $\pi$ is indeed primitive recursive, some other method, or some additional (nontrivial) information about the number $\pi$ or the sequence of approximations will be necessary.<|endoftext|> TITLE: Finding UMVUE of $\theta$ when the underlying distribution is exponential distribution QUESTION [7 upvotes]: Hi I'm solving some exercise problems in my text : "A Course in Mathematical Statistics". I'm in the chapter "Point estimation" now, and I want to find a UMVUE of $\theta$ where $X_1 ,...,X_n$ are i.i.d random variables with the p.d.f $f(x; \theta)=\theta e^{-\theta x}, x\gt0$. 
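For concreteness, the interval-refinement procedure just described can be written down directly. The sketch below (Python, exact rational arithmetic, deliberately naive and slow; the choice of Gregory's series and the doubling of the cutoff are arbitrary) brackets $\pi$ between consecutive partial sums and refines until both endpoints agree on the requested digit — the refinement loop is exactly the unbounded search discussed next.

```python
import math
from fractions import Fraction

def pi_interval(k):
    # Consecutive partial sums of Gregory's series 4*sum (-1)^i/(2i+1) straddle pi,
    # giving a rational interval [lo, hi] that contains pi.
    s = Fraction(0)
    for i in range(k + 1):
        s += Fraction(4 * (-1)**i, 2*i + 1)
    t = s + Fraction(4 * (-1)**(k + 1), 2*k + 3)
    return min(s, t), max(s, t)

def nth_digit(n):
    # n-th decimal digit of pi (n = 0 gives the leading 3): refine the interval
    # until both endpoints agree on that digit -- an unbounded search.
    k = 1
    while True:
        lo, hi = pi_interval(k)
        if math.floor(lo * 10**n) == math.floor(hi * 10**n):
            return math.floor(lo * 10**n) % 10
        k *= 2

print([nth_digit(n) for n in range(4)])   # -> [3, 1, 4, 1]
```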
I know that $E(X_i)=1/\theta$ for each $i$, and also that $\bar{X}$ (or equivalently $\sum_1^n X_i$) is a complete sufficient statistic for $\theta$. But I cannot go any further here. Can somebody help me? REPLY [10 votes]: You have $\overline{X}$ complete & sufficient and moreover $E[ \overline{X} ] = 1/\theta$; i.e. $\overline{X}$ is the UMVUE for $1/\theta$. It seems reasonable to guess that $1/\overline{X}$ may be the UMVUE for $\theta$. Note that $\sum_{i=1}^n X_i \sim \Gamma(n,\theta)$ since each $X_i$ is exponential rate $\theta$ and they're iid. Let $Z \sim \Gamma(n,\theta)$. \begin{align*} E[1/\overline{X}] = n E[1/Z] &= n \int_0^\infty \dfrac{1}{z} \dfrac{\theta^n}{\Gamma(n)} z^{n-1} e^{- \theta z} \; dz \\ &= n \int_0^\infty \dfrac{\theta^n}{\Gamma(n)} z^{n-2} e^{-\theta z } \; dz \\ &= n \theta \dfrac{\Gamma(n-1)}{\Gamma(n)} \underbrace{\int_0^\infty \dfrac{\theta^{n-1}}{\Gamma(n-1)} z^{n-2} e^{-\theta z } \; dz}_{=1} \\ &= \dfrac{n \theta \Gamma(n-1)}{\Gamma(n)} = \dfrac{n \theta}{n-1} \end{align*} So $ \dfrac{n-1}{n} \cdot \dfrac{1}{\overline{X}} = \dfrac{n-1}{\sum_{i=1}^n X_i}$ is the UMVUE for $\theta$.<|endoftext|> TITLE: Show that invertible matrices with an additional condition are diagonalizable. QUESTION [6 upvotes]: Let $A$ and $B$ be invertible $2 \times 2$ matrices such that $AB = -BA$ over the complex numbers. Show that $A$ and $B$ are diagonalizable. REPLY [6 votes]: We have $B=A^{-1}(-B)A$, hence $B$ and $-B$ are similar. Since $B$ is invertible, $0$ is not an eigenvalue of $B$. Now, if $\lambda_0$ is an eigenvalue of $B$, then $-\lambda_0$ is an eigenvalue of $-B$. By similarity, $-\lambda_0$ is an eigenvalue of $B$. The $2 \times 2$ matrix $B$ therefore has the two distinct eigenvalues $\lambda_0$ and $-\lambda_0$, hence $B$ is diagonalizable; the same argument applies to $A$.<|endoftext|> TITLE: Proof: $\mathbb Q \cap [0,1]$ is not compact (with the definition) QUESTION [6 upvotes]: I have to prove that $\mathbb Q \cap [0,1]$ is not compact, directly with the definition with open covers (I am not allowed to use theorems like Heine-Borel). My attempt: So I need to find an open cover of $\mathbb Q \cap [0,1]$ that has no finite subcover. I assume that we need to do this by something like approximating an irrational number, i.e. $$ U_n := \left]-1, \frac{\sqrt2}{2} - \frac{1}{n}\right[ \cup \left]\frac{\sqrt2}{2}+\frac{1}{n},2\right[ $$ This is an open cover of $\mathbb Q \cap [0,1]$ but I am not sure if this does not have a finite subcover, like for example just $\mathopen]-1,2\mathclose[$. Any hints? REPLY [3 votes]: More generally (and easily), consider any irrational number $\alpha\in[0,1]$. There is a decreasing sequence $(b_n)$ in $\mathbb{R}_{>0}$ (even in $\mathbb{Q}$, actually) such that $\alpha+b_0<1$ and $\lim_{n\to\infty}b_n=0$. Then you can consider the open sets $$ U_n=(-1,\alpha)\cup(\alpha+b_n,2) $$ Clearly, $U_n\subseteq U_{n+1}$, for all $n$, and so $$ U_{n_1}\cup U_{n_2}\cup \dots\cup U_{n_k}=U_m $$ where $m=\max\{n_1,n_2,\dots, n_k\}$. Since $\alpha<\alpha+b_m$, there is a rational number $q\in\mathbb{Q}\cap[0,1]$ with $\alpha<q<\alpha+b_m$; such a $q$ does not belong to $U_m$, so no finite subfamily covers $\mathbb{Q}\cap[0,1]$. On the other hand, the sets $U_n$ do cover $\mathbb{Q}\cap[0,1]$: if $q<\alpha$, then $q\in U_n$ for every $n$; if $q>\alpha$, there exists $n$ such that $q>\alpha+b_n$, so $q\in U_n$. Note that compactness means every open cover has a finite subcover; the finite covers do, but this is irrelevant.<|endoftext|> TITLE: If $A$ is a symmetric positive definite matrix, then $A_{ii}(A^{-1})_{ii}\geq1$ for all $i$. Equality? QUESTION [6 upvotes]: I know that, if $A$ is an $n\times n$ symmetric positive definite matrix, then $A_{ii}(A^{-1})_{ii}\geq1$ for all $i=1,\ldots,n$.
A proof is the following: we can write $A=PDP^T$ and $A^{-1}=PD^{-1}P^T$, where $D$ is diagonal and $P$ orthogonal. Then $$ A_{ii}=\sum_{l=1}^n P_{il}^2 D_{ll},\quad (A^{-1})_{ii}=\sum_{l=1}^n \frac{P_{il}^2}{D_{ll}}.$$ Then, by the Cauchy-Schwarz inequality and the fact that the rows of $P$ have norm $1$, $$ A_{ii}(A^{-1})_{ii}=\left(\sum_{l=1}^n (P_{il}\sqrt{D_{ll}})^2\right)\left(\sum_{l=1}^n\left(\frac{P_{il}}{\sqrt{D_{ll}}}\right)^2\right)\geq \left(\sum_{l=1}^n P_{il}\sqrt{D_{ll}}\frac{P_{il}}{\sqrt{D_{ll}}}\right)^2=1.$$ My question is about when the equality holds. Looking at the above proof, we have to use the fact that there is equality in the Cauchy-Schwarz inequality if and only if the vectors are linearly dependent. So, $A_{ii}(A^{-1})_{ii}=1$ if and only if there is a $\lambda_i\in\mathbb{R}$ such that $$P_{il}\sqrt{D_{ll}}=\lambda_i \frac{P_{il}}{\sqrt{D_{ll}}}.$$ I think that, if $A_{ii}(A^{-1})_{ii}=1$ for all $i=1,\ldots,n$, then $A$ is diagonal, but I am not sure. The idea would be to prove that each row of $P$ has a component $1$ and the rest are zeros. So suppose that, for a row $i$, there are two columns $l_i$ and $k_i$ such that $P_{i,l_{i}}\neq0$ and $P_{i,k_{i}}\neq0$. Then $D_{l_i,l_i}=D_{k_i,k_i}=\lambda_i$, that is, $A$ has two equal eigenvalues. I do not know how to proceed or whether this fact is important. Any ideas? REPLY [5 votes]: Equality holds for all $i$ if and only if $A$ is diagonal. This is evident if you recall that $(A^{-1})_{ii}$ is the inverse of a Schur complement. Let $A=\pmatrix{a_{11}&b^\top\\ b&C}$. As $A$ is positive definite, so is $C$. Now, using the Schur complement, we get $(A^{-1})_{11}=(a_{11}-b^\top C^{-1}b)^{-1}\ge a_{11}^{-1}$. So, equality holds if and only if $b^\top C^{-1}b=0$, i.e. iff $b=0$. Applying the same argument to every index, it follows that $A$ is diagonal if $A_{ii}(A^{-1})_{ii}=1$ for all $i$.<|endoftext|> TITLE: Find all real functions $f:\mathbf{R} \to \mathbf{R}$ satisfying the relation $f(x^2+y(f(x)))=x(f(x+y))$. QUESTION [9 upvotes]: While doing some old INMO (Indian National Mathematical Olympiad) problems I am stuck on a question which is as follows: Find all functions $f:\mathbf{R} \to \mathbf{R}$ satisfying the relation $f(x^2+y(f(x)))=x(f(x+y))$. Though I have worked on many problems related to functions, I am still clueless about this one. I shall be highly thankful if you can give me some hints/suggestions. Thanks. REPLY [4 votes]: Claim: $f(x)=0$ or $f(x)=x$ for all $x \in \mathbf{R}$. (1) Set $x=0$: then $f(yf(0))=0$ for all $y$, hence $f\equiv 0$ is a solution. Otherwise $f$ is not constant and $f(0)=0$. (2) Set $y=0$: then $f(x^2)=xf(x)$ for all $x$. In particular $x^2=(-x)^2$ hence $xf(x)=-xf(-x)$. Therefore $f$ is an odd function. (3) Suppose there exists $x_0\neq 0$ such that $f(x_0)=0$. Setting $x=x_0$ we have $f(x_0^2)=x_0f(x_0+y)$ for all $y$, so $f$ would be constant; but $f$ is not constant, a contradiction. Hence $0$ is the only zero of $f$. (4) Set $x+y=0$: then $f(x^2-xf(x))=0$. Using (2) and (3), we have $f(x^2-f(x^2))=0$ hence $x^2-f(x^2)=0$. Therefore $f(z)=z$ for all $z\ge 0$. Since $f$ is odd by (2), $f(z)=z$ for all $z$, i.e. $f$ is the identity.<|endoftext|> TITLE: How many squares can be made from points on $ z(t) = e^{2\pi i\, t} + \frac{1}{\sqrt{3}} e^{2\pi i\, 3t} $? QUESTION [9 upvotes]: Inspired by the Toeplitz Square Problem, how many squares can be drawn on the curve: $$ z(t) = e^{2\pi i\, t} + \frac{1}{\sqrt{3}} e^{2\pi i\, 3t} $$ with $t \in [0, 2\pi]$. Here is an image: We're up to ten squares so far (earlier counts were one, five, and nine). Here is an example that is not aligned with the axes. Can it be proved that there is only one square here?
It's not quite square, can we move it around to be a square? E.g. Can this quadrilateral be massaged into a square? Whose points all lie on this cubic curve? i heard the existence of one square is known for algebraic curves like this. maybe with no guarantee of exact count. a dimension count has that a quadrilateral is defined by 8 real numbers. the squares in Euclidean plane can be defined by 4 numbers. the quadrilaterals on a curve are defines by 4 numbers. generically these curves should intersect in a $$4+4-8=0$$ dimensional set. possibly an empty collection of points. Other possible obstructions is when these curves are very bumpy. Then I think one introduces really tiny squares! REPLY [6 votes]: Summary - There are $13$ squares inside the curve and $9$ of them are axis-aligned. Part I - how to locate the axis-aligned squares. Following is a picture showing $5$ of the axis-aligned ones. To locate the axis-aligned squares, we first define two auxillary functions for $t \in [0,2\pi]$. $$\begin{cases} X(t) &= \cos(t) + \frac{\cos(3t)}{\sqrt{3}}\\ Y(t) &= \sin(t) + \frac{\sin(3t)}{\sqrt{3}} \end{cases}$$ In terms of $X(t), Y(t)$, the original curve (in blue) is given by the parametrization $$[0,2\pi] \ni t \mapsto \gamma(t) = X(t) + iY(t) \in \mathbb{C}$$ Next, we define two auxillary curves $$ [0,2\pi] \ni t \mapsto \begin{cases} \gamma_1(t) = (X(t)+2Y(t)) + Y(t)i & \text{( light red )}\\ \gamma_2(t) = X(t) + (Y(t)+2X(t))i & \text{( light blue )} \end{cases} \in \mathbb{C} $$ In order for a pair of points $z_{\pm} = x \pm iy$ to form a vertical edge of a square, one of the two pairs $z_{\pm} - 2y$ or $z_{\pm} + 2y$ need to lie on $\gamma$. Let's say $z_{\pm} - 2y$ lies on $\gamma$, then $z_+$ lies on the slanted curve $\gamma_1$ (the light red one). We can locate axis-aligned squares centered on real axis by intersecting $\gamma$ with $\gamma_1$. By a similar argument, we can locate axis-aligned squares centered on imaginary axis by intersecting $\gamma$ with $\gamma_2$. In first quadrant ($\Re z, \Im z > 0$) $\gamma$ and $\gamma_1$ intersect at three points. They are the points $A,B,C$ in above diagram. $\gamma$ and $\gamma_2$ also intersect in first quadrant in three points. One of them is $C$ and the other two are the points $D,E$ in above diagram. From these $5$ points, one can construct $5$ squares whose vertices lies completely on $\gamma$. If you reflect the $4$ squares containing $A, B, D, E$ as vertices with respect to origin, one get $4$ more squares. This means there are $9$ axis aligned squares whose vertices lie on $\gamma$. It turns out this exhausts all axis-aligned squares. Part II - how to count the number of squares. To count the total number of squares, we treat $p, q$ as two variables from $[0, 2\pi]$. For each $p,q$, consider the square $PUQV$ with vertices $$\begin{array}{rl} P = \gamma(p),& U = \frac{1+i}{2}\gamma(p) + \frac{1-i}{2}\gamma(q)\\ Q = \gamma(q),& V = \frac{1-i}{2}\gamma(p) + \frac{1+i}{2}\gamma(q) \end{array} $$ As functions of $(p,q)$, $P(p,q)$ and $Q(p,q)$ always lie on $\gamma$. If we can figure out the two loci in $pq$-plane for $U(p,q)$ lies on $\gamma$ and for $V(p,q)$ lies on $\gamma$, the intersection of these loci will be the parameters $(p,q)$ one need to construct a square all of its vertices lie on $\gamma$. To achieve this, we need a simple criterion to tell whether a point $z = x+iy$ lies on $\gamma$ or not (or at the least, a way to filter out most points that doesn't lie on $\gamma$). 
It turns out there is one: Let $a = \frac{1}{\sqrt{3}}$, points on $\gamma$ are given by the parametrization: $$z = x + iy = e^{it}(1 + a e^{2it})$$ Taking absolute value and square, we get $$z\bar{z} = 1 + 2a\cos 2t + a^2$$ Taking real part and square, we get $$\begin{align}a(z + \bar{z})^2 &= 4a(\cos t + a\cos 3t)^2 = 4a\cos\theta^2(1 + a(4\cos^2 t - 3))^2\\ &= (2a + 2a\cos 2t) ( 1 - a + 2a\cos 2t)^2\\ &= (z\bar{z} - (1-a)^2)(z\bar{z} - a(a+1))^2 \end{align} $$ Substitute $a$ back by $\frac{1}{\sqrt{3}}$ and simplify, we get $$\Lambda(z) \stackrel{def}{=} \frac{1}{\sqrt{3}}(z^2+\bar{z}^2) - (z\bar{z})^3 + 2(z\bar{z})^2 + \frac{4}{27} = 0$$ This means $\gamma$ is contained inside the hexic curve $$\Lambda(x+iy) = \frac{2}{\sqrt{3}}(x^2 - y^2) - (x^2+y^2)^3 + 2(x^2+y^2)^2 + \frac{4}{27} = 0$$ In principle, we can locate the desired parameters by finding the two loci in $(p,q) \in [0,2\pi]$ for $$\Lambda(U(p,q)) = \Lambda\left(\frac{1+i}{2}\gamma(p) + \frac{1-i}{2}\gamma(q)\right) = 0\\ \Lambda(V(p,q)) = \Lambda\left(\frac{1-i}{2}\gamma(p) + \frac{1+i}{2}\gamma(q)\right) = 0 $$ and compute their intersections. Excluding those with $p = q$, these are the parameters for constructing the squares we seek. Part III - the result. To understand what the two loci look like, I wrote a program to compute $\Lambda(U(p,q))$ and $\Lambda(V(p,q))$ as some sort of heatmap. This is the heatmap I get: Above heatmap cover the parameter space for $(p,q) \in [0,2\pi]^2$. A point is red when $\Lambda(U(p,q))$ is close to zero. A point is blue when $\Lambda(V(p,q))$ is close to zero. Outside the diagonal $p = q$, the red and blue "strips" intersect at $56$ points. Four of them, e.g. the point labelled by $X$, comes from the two self intersection points of $\gamma$, they don't give us any squares. The remaining $52$ intersections falls into $\frac{52}{4} = 13$ groups. Each group give us one square. For each group, I have picked one of the member and label them: Squares $A, B, C, D, E$ contain a vertex with same name in first figure. Squares $A', B', D' E'$ are images of square $A, B, C, D$ under $z \to -z$. Square $Y$ is the non-axis aligned square with vertices at: $$\begin{array}{lr} (+1.191840337616712, &+0.8762580923680066),\\ (-0.1554901579762806, &+0.5869927399759873),\\ (+0.1337751944157387, &-0.7603377556170049),\\ (+1.481105690008731, &-0.4710724032249856) \end{array}$$ Squares $Y', Z, Z'$ are the image of square $Y$ under $z \to -z$, $z \to \bar{z}$ and $z \to -\bar{z}$ respectively. In short, if I didn't make any mistake in analyzing above heatmap, there are $13$ squares formed from points on $\gamma$. $9$ of them is axis-aligned while the remaining $4$ non-axis aligned.<|endoftext|> TITLE: Mathematical Expectation. E[E[x]]=E[x] QUESTION [6 upvotes]: Is it true that $ E[E[x]]=E[x]$? I can't find this property. If it isn't true than why $E[(X −E[X])^2]=E[X^2]−E[X]^2$? REPLY [8 votes]: Yes, $E[E[X]] = E[X]$. This is because $E[X]$ is just a number, it's not random in any way. So when we ask for $E[E[X]]$, i.e., our best guess for the number $E[X]$, well since $E[X]$ is just a constant number which is not random, we know its value, so our best guess for it should be it, i.e., $E[E[X]] = E[X]$. 
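If it helps to see this numerically, here is a minimal NumPy sketch (the exponential sample is an arbitrary choice made purely for illustration): it checks that the sample analogue of $E[X]$ is just a fixed number, and previews the identity for $E[(X-E[X])^2]$ derived next.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10**6)  # any distribution would do; Exp(scale=2) is arbitrary

m = x.mean()                           # sample analogue of E[X]: a plain number, not random
print(m, np.mean(np.full_like(x, m)))  # "E[E[X]]" versus E[X]: identical, since m is a constant

# Sample analogue of E[(X - E[X])^2] = E[X^2] - E[X]^2; the two agree up to floating point
print(np.mean((x - m) ** 2), np.mean(x ** 2) - m ** 2)
```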
To calculate $E[ (X - E[X])^{2}]$, we first multiply the inside and get: $$(X - E[X])^{2} = X^{2} - 2XE[X] + E[X]^{2}.$$ Now, taking the expectation of both sides gives: \begin{split} E[(X - E[X])^{2}] &= E[X^{2} - 2XE[X] + E[X]^{2}] \\ &= E[X^{2}] - \underbrace{E[2XE[X]]}_{2E[X]*E[X]} + \underbrace{E[E[X]^2]}_{E[X]^2} \end{split} where we used $E[2XE[X]] = 2E[X]*E[X]$ since $2E[X]$ is just a constant number and we know $E[cX] = cE[X]$ for any constant number $c$. So, the above last line equals $E[X^{2}] - 2E[X]^{2} + E[X]^{2}$ which simplifies to $E[X^{2}] - E[X]^{2}$.<|endoftext|> TITLE: why scalar projection does not yield coordinates? QUESTION [11 upvotes]: Suppose we have an ordered basis $\{v_1,\dots,v_n\}$ in some inner product space. Let us project a vector $v$ on each $v_i$ by multiplying $v_i$ by the "scalar projection" $(v,v_i)/\|v_i\|$. Intuitively, it seems that each scalar projection $(v,v_i)/\|v_i\|$ indicates the amount of $v$ that goes in $v_i$ and therefore the $i^{th}$ coordinate of $v$ should be $(v,v_i)/\|v_i\|$. But that does not happen unless the basis is orthogonal. Mathematically I can justify this but can someone give an intuitive reason as to what goes wrong. For example with $B=\{(1,0),(1,1)\}$ in the Euclidean space $\mathbb R^2$ where $v=(0,1)$? REPLY [4 votes]: I think your intuition should be that there is a huge difference between "the portion of $v$ that points in the direction of $v_1$" and "the portion of $v_1$ that is necessary to compose $v$". You may be carrying the intuition that they are the same from the orthogonal case, but you really shouldn't be. Here are a few reasons why: $v_2$ may also point significantly in the direction of $v_1$, but it makes no sense to include this in the coefficient of $v_1$. E.g. consider $v = 1v_1 + 1v_2$ and dot both sides with $v_1$. The portion of $v$ pointing towards $v_1$ is a quantity that depends purely on $v$ and $v_1$, but the basis representation of $v$ depends critically on all the basis vectors. It's unreasonable to expect the former to be able to tell you the latter, even approximately. Fundamentally, representing a vector in terms of a basis is an inversion problem (it involves solving a matrix equation). The process you are proposing is fundamentally a multiplication process (let's simplify to the case of a basis of unit vectors, and it literally is multiplication). So in a very general, abstract sense you are doing it backwards. But in the special case of orthogonal basis $B$ we have $B^{-1} = B^T$ so we can indeed take inverses by multiplying. The fallacy here is believing that one can, in the generic case compute $B^{-1}$ just by rescaling $B$ by some $\|B\|^2$: that's not how matrix inverses work.<|endoftext|> TITLE: Showing that numbers of the form 10101010...1 are composite QUESTION [6 upvotes]: I want to prove that all numbers of the form 1010101010...1 are composite except for 101. I'm able to prove it for all numbers with an even number of ones, but I can't figure out any ideas for the remaining numbers. REPLY [6 votes]: You have numbers of the form $\dfrac{100^n-1}{99}=\dfrac{(10^{n}-1)(10^{n}+1)}{9\times 11}$: If $n$ is odd you have $\dfrac{100^{2k-1}-1}{99}=\dfrac{10^{2k-1}-1}{9} \times \dfrac{10^{2k-1}+1}{11}$ If $n$ is even you have $\dfrac{100^{2k}-1}{99}=\dfrac{10^{2k}-1}{99} \times \dfrac{10^{2k}+1}{1}$ and these factors are all integers, though some may be equal to $1$ when $k=1$ REPLY [2 votes]: Hint: view the number as a geometric series. The first term is $1$ and the ratio is $100$. 
Use the formula for the sum of a geometric series, then see the numerator as a difference of two squares.<|endoftext|> TITLE: Matrices whose all principal $k\times k$ sub-matrices are positive semidefinite QUESTION [6 upvotes]: I would like to know whether the set of $n\times n$ Hermitian matrices whose all ${{n}\choose{k}}$ principal $k\times k$ sub-matrices---the matrices obtained by removing $n-k$ columns as well as the corresponding rows---are positive semidefinite is a well-studied set, and if so under which name. This is for a given $1\leq k\leq n$. If $k=1$, this is the set of $n\times n$ matrices whose diagonal entries are non-negative, and, if $k=n$, it is the set of $n\times n$ positive-semidefinite matrices. I am interested in what is known for the general case $1\leq k\leq n$, in particular for the non-trivial case $1 TITLE: Is a map that preserves lines and fixes the origin necessarily linear? QUESTION [34 upvotes]: Let $V$ and $W$ be vector spaces over a field $\mathbb{F}$ with $\text{dim }V \ge 2$. A line is a set of the form $\{ \mathbf{u} + t\mathbf{v} : t \in \mathbb{F} \}$. A map $f: V \to W$ preserves lines if the image of every line in $V$ is a line in $W$. A map fixes the origin if $f(0) = 0$. Is a function $f: V\to W$ that preserves lines and fixes the origin necessarily linear? REPLY [3 votes]: You may be interested in this paper Affinity of a Permutation of a Finite Vector Space It discusses the problem of how many k-flats (cosets of a k-dimentional subspace) of an n-dimensional vector space over a finite field must be preserved by a permutation to force the permutation to preserve all k-flats. See the references for the history of this problem for other fields. And, by the way, Vilmos Totik and Wen-Xiu Ma proved (personal communication) that if f is a transformation of Euclidean n-space, n > 1, such that for all but countably many lines L the image f(L) is a line, then the image of any line is a line, hence f is an affine transformation.<|endoftext|> TITLE: Is a function which preserves zero and affine lines necessarily linear? QUESTION [11 upvotes]: My question is directly inspired by this other recent question, but I was trying to figure out whether or not it holds for $\mathbb R$. This led me to two questions. Let $n \ge 2$ be an integer (we're not including $n = 1$ as there are trivial counterexamples in dimension $1$). Let $V$ and $W$ be real vector spaces, both of dimension $n$, and let $f: V \to W$ be a bijection which sends zero to zero and affine lines to affine lines. Is $f$ necessarily linear? Let $V$ be a real vector space of dimension $n$. If we forget the vector space structure on $V$, and remember only the data of what the affine lines in $V$ are and $n$, can we recover the topology on $V$? An affirmative to question 1 implies an affirmative to question 2: Simply choose any bijection $f: V \to \mathbb R^n$ which preserves affine lines, and $f$ is necessarily a homeomorphism. In dimension $n=2$, question $2$ is equivalent to the following problem: Using only the data of what the affine lines are, given $3$ distinct parallel lines (which can be characterized as those which don't intersect with each other), determine which line is in the middle. Once you know that, you can describe open sets in terms of the unions of lines between two parallel lines. Similarly, for $n=2$, question 2 is also equivalent to: Using only the data of what the affine lines are, given a line and $3$ distinct points, determine which point is in the middle. 
It would also be good to know whether, and how, the answers to 1 and 2 depend on the dimension of $V$. REPLY [3 votes]: I'll provide an outline of a proof - thanks to Moishe Cohen and Christian Sievers for providing some of the hints. Theorem: Let $F$ be any field with at least $3$ elements, let $V$ and $W$ be vector spaces over $F$, both of dimension $\ge 2$, and let $f: V \to W$ be a bijection which sends $0$ to $0$ and affine lines to affine lines. Then $f$ is semi-linear, that is, there is an automorphism $\phi: F \to F$ such that $f(av+w) = \phi(a)f(v)+f(w)$, for every $v, w \in V$ and $a \in F$. In the case of $F = \mathbb R$, there are no nontrivial automorphisms, so every such bijection is linear. Of course, the presense of $0$ is a minor technicality; it enables the easy use of linear algebra. Without $0$, every map is "semi-affine". In the following outline, there is good Euclidean Geometric intuition for $F = \mathbb R$, but all steps can be carried out algebraically for arbitrary fields. Affine planes in $V$ can be characterized as follows: For two lines $L_1$ and $L_2$ which intersect at a unique point, their affine span $A$ is the union of the two lines with all lines $L$ which intersect $L_1$ and $L_2$ at distinct points. This is the step that requires $|F| \ge 3$; everything after hinges only on knowing what the affine lines and affine planes and $0$ are. We can therefore characterize parallel lines: They are the pairs of lines which are disjoint and lie in the same affine plane. We can characterize addition: If $v$ and $w$ are linearly independent, then denote by $L_v$ and $L_w$ the lines which go through $0$ and $v$, and $0$ and $w$ respectively. Let $L_v'$ be the line parallel to $L_v$ which passes through $w$, and let $L_w'$ be the line parallel to $L_w$ which passes through $v$. Then $v+w$ is the unique point in the intersection of $L_v'$ and $L_w'$. Furthermore, $(-w)$ can be characterized by $(v+w)+(-w) = v$. Then for $v'$ linearly dependent with $v$, $v+v' = ((v+w)+v')+(-w)$. For nonzero vectors $v$, denote $L_v$ as in 3. For linearly independent $v$ and $w$, we can characterize the bijection $L_v \to L_w$ given by $a v \mapsto a w$, for each $a \in F$. Explicitly, $aw$ is the intersection with $L_w$ of the line through $av$ which is parallel to the line passing through $v$ and $w$. For $v'$ nonzero and linearly dependent with $v$, we can characterize the same bijection $L_v \to L_{v'}$ via an auxiliary linearly independent $w$. Fix an arbitrary $v \neq 0$ in $V$. The above constructions give us addition on $L_v$ and allow us to construct the multiplication map $L_v \times L_v \to L_v$ given by $(av, bv) \mapsto abv$. This assigns the structure of a field to $\mathbb L_v := L_v$. This field is, of course, isomorphic to $F$. The above observations allow us to define the vector space action of $\mathbb L_v$ on $V$. The function $f$ must restrict to an isomorphism of fields $\mathbb L_v \to \mathbb L_{f(v)}$, and be linear with respect to this isomorphism. The isomorphism of fields need not necessarily agree with the map $av \mapsto af(v)$, which is why $f$ may not necessarily be linear.<|endoftext|> TITLE: Dodgy Turing degrees QUESTION [8 upvotes]: Below, I'm specifically interested in weak truth table (wtt) reducibility, but other reducibilities between truth table and Turing are interesting to me, too, if the question happens to be easier to answer for them. Let $d$ be a Turing degree, and fix a representative $X\in d$. 
Say $d$ is (wtt-)dodgy if there is some $d$-computable functional $F=\Phi_e^{X\oplus -}$ such that for all $Y$ with $deg(Y)=d$, we have $\Phi_e^{X\oplus Y}=F(Y)$ is total, $F(Y)\equiv_TY$, but $F(Y)\not\le_{wtt}Y$. (Originally this read "$\not\equiv_{wtt}$", but it was pointed out to me that we can trivially build such an $F$ by "padding out" $Y$ with lots of irrelevant bits; this strategy does not obviously work for this version of the question, however.) (Note that I demand nothing about $F(Z)$ for $Z\not\in d$; in particular, $F$ only needs to output reals when fed elements of $d$, it may fail to be total elsewhere.) Dodginess is most interesting for "sufficiently large" degrees - for example, above $0'$ every Turing degree splits into infinitely many $wtt$-degrees, so the question is nontrivial. Dodginess is a reasonably definable property, so by Martin's Cone Theorem, either every sufficiently large degree is dodgy or every sufficiently large degree is not dodgy. My question is, Which of these two holds? My feeling is that a fairly simple trick should show that every sufficiently large (indeed, $\ge_T0'$) degree will not be dodgy; however, I don't see how to do this. In particular, the Recursion Theorem doesn't seem to immediately kill it: suppose $d$ is a sufficiently large degree, and fix $X\in d$. Then $F$ can be identified with a total computable function $g$: $F(\Phi_e^X)\sim\Phi_{g(e)}^X$. Now $g$ is total computable, so it has a fixed point $c$: $\Phi_c^X\sim \Phi_{g(c)}^X$. However, there's no reason to believe that $\Phi_c^X$ is total, let alone an element of $d$, so I don't see how to get any leverage here. REPLY [2 votes]: This question has been answered at mathoverflow. I've made this CW so I don't gain reputation for someone else's work.<|endoftext|> TITLE: If $\sum n a_n$ converges, then $\sum a_n$ converges and $\sum |a_n|^p$ converges for $p>1$. QUESTION [6 upvotes]: As in the title, consider the problem: If $\sum n a_n$ converges, then $\sum a_n$ converges and $\sum |a_n|^p$ converges for $p>1$. My intuition for the first part of the proof is to use this equality, $$\sum a_n = \sum n a_n -\big(\sum a_n(n-1)\big) $$ Obviously the first part of the RHS converges by assumption, and I want to say the second part of the RHS does as well, however I don't know if it's entirely obvious that it does. Is this reasoning valid? And if there is a more clever way, please let me know. Additionally, I'm not sure where to begin in showing that the hypothesis implies $\sum|a_n|^p$ converges. All help is greatly appreciated. REPLY [6 votes]: The reasoning presented is not valid; the two expressions are equal only if you can rearrange terms, which requires $\sum a_n$ to be absolutely convergent (which it need not be). Part 1: Use Abel's test, with $b_n=\frac{1}{n}$. Part 2: You might begin by noting that since $\sum na_n$ converges, we must have $\lim_{n\to\infty} na_n=0$. REPLY [4 votes]: For part 2: Since $\lim_{n \rightarrow \infty} na_n = 0$, there must exist some $N$ such that for $n \geq N$, $|na_n| \leq 1$. This implies $|a_n| \leq \frac{1}{n}$, and therefore $|a_n|^p \leq \frac{1}{n^p}$, and you are done by the comparison test. 
REPLY [4 votes]: Use the summation by parts $$ \sum_{n=1}^N a_n = \sum_{n=1}^N na_n \frac{1}{n} = \frac{1}{N}\sum_{n=1}^N n a_n + \sum_{n=1}^{N-1} (\frac{1}{n}-\frac{1}{n+1})\sum_{m=1}^n m a_m$$ note that $|\sum_{m=1}^n m a_m| < B$ and $\frac{1}{n}-\frac{1}{n+1} = \frac{1}{n (n+1)}$ For the second one it is trivial : $\sum_{n=1}^\infty n a_n$ converges means that $n a_n \to 0$ so that $n^p |a_n|^p \to 0$ i.e. $|a_n|^p < C n^{-p}$ and $\sum_{n=1}^\infty |a_n|^p $ converges for $p > 1$<|endoftext|> TITLE: Can we prove the law of total probability for continuous distributions? QUESTION [28 upvotes]: If we have a probability space $(\Omega,\mathcal{F},P)$ and $\Omega$ is partitioned into pairwise disjoint subsets $A_{i}$, with $i\in\mathbb{N}$, then the law of total probability says that $P(B)=\sum_{i=1}^{n}P(B|A_{i})P(A_i{})$. This law can be proved using the following two facts: \begin{align*} P(B|A_{i})&=\frac{P(B\cap A_{i})}{P(A_{i})}\\ P\left(\bigcup_{i\in \mathbb{N}} S_{i}\right)&=\sum_{i\in\mathbb{N}}P(S_{i}) \end{align*} Where the $S_{i}$'s are a pairwise disjoint and a $\textit{countable}$ family of events in $\mathcal{F}$. However, if we want to apply the law of total probability on a continuous random variable $X$ with density $f$, we have (like here): $$P(A)=\int_{-\infty}^{\infty}P(A|X=x)f(x)dx$$ which is the law of total probabillity but with the summation replaced with an integral, and $P(A_{i})$ replaced with $f(x)dx$. The problem is that we are conditioning on an $\textit{uncountable}$ family. Is there any proof of this statement (if true)? REPLY [10 votes]: Excellent question. The issue here is that you first have to define what $\mathbb{P}(A|X=x)$ means, as you're conditioning on the event $[X=x]$, which has probability zero if $X$ is a continuous random variable. Can we still give $\mathbb{P}(A|X=x)$ a meaning? In the words of Kolmogorov, "The concept of a conditional probability with regard to an isolated hypothesis whose probability equals 0 is inadmissible." The problem with conditioning on a single event of probability zero is that it can lead to paradoxes, such as the Borel-Kolmogorov paradox. However, if we don't just have an isolated hypothesis such as $[X=x]$, but a whole partition of hypotheses $\{[X=x] ~|~ x \in \mathbb{R}\}$ with respect to which our notion of conditional probability is supposed to make sense, we can give a meaning to $\mathbb{P}(A|X=x)$ for almost every $x$. Let's look at an important special case. Continuous random variables in Euclidean space In many instances where we might want to apply the law of total probability for continuous random variables, we are actually interested in events of the form $A = [(X,Y) \in B]$ where $B$ is a Borel set and $X,Y$ are random variables taking values in $\mathbb{R}^d$ which are absolutely continuous with respect to Lebesgue measure. For simplicity, I will assume here that $X,Y$ take values in $\mathbb{R}$, although the multivariate case is completely analogous. Choose a representative of $f_{X,Y}$, the density of $(X,Y)$, and a representative of $f_X$, the density of $X$, then the conditional density of $Y$ given $X$ is defined as $$ f_{Y|X}(x,y) = \frac{f_{X,Y}(x,y)}{f_{X}(x)}$$ at all points $(x,y)$ where $f(x) > 0$. We may then define for $A = [(X,Y) \in B]$ and $B_x := \{ y \in \mathbb{R} : (x,y) \in B\}$ $$\mathbb{P}(A | X = x) := \int_{B_x}^{} f_{Y|X}(x,y)~\mathrm{d}y, $$ at least at all points $x$ where $f(x) > 0$. 
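Before looking at the finer points of this definition, here is a small numerical sanity check of the resulting law of total probability. The joint model below ($X$ exponential, $Y\mid X=x$ normal with mean $x$, and $A=[Y>1]$) is purely an illustrative assumption; the sketch compares a Monte Carlo estimate of $\mathbb{P}(A)$ with the integral $\int \mathbb{P}(A\mid X=x)\,f_X(x)\,dx$ computed by quadrature.

```python
import numpy as np
from scipy import integrate, stats

rng = np.random.default_rng(0)

# Illustrative model (an assumption for this sketch): X ~ Exp(1), Y | X = x ~ N(x, 1), A = [Y > 1]
n = 10**6
x = rng.exponential(1.0, size=n)
y = rng.normal(loc=x, scale=1.0)
p_monte_carlo = np.mean(y > 1.0)                 # direct estimate of P(A)

# Law of total probability: P(A) = ∫ P(A | X = x) f_X(x) dx, with P(A | X = x) = P(N(x,1) > 1)
integrand = lambda t: stats.norm.sf(1.0, loc=t, scale=1.0) * stats.expon.pdf(t)
p_integral, _ = integrate.quad(integrand, 0.0, np.inf)

print(p_monte_carlo, p_integral)                 # the two agree up to Monte Carlo error
```

The same comparison can be run for any joint density for which the conditional probability is available in closed form.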
Note that this definition depends on the choice of representatives we made for the densities $f_{X,Y}$ and $f_{X}$, and we should keep this in mind when trying to interpret $P(A|X=x)$ pointwise. Whichever choice we made, the law of total probability holds as can be seen as follows: \begin{align*} \mathbb{P}(A) &= \mathbb{E}[1_{B}(X,Y)] = \int_{B} f_{X,Y}(x,y)~\mathrm{d}y~\mathrm{d}x = \int_{-\infty}^{\infty}\int_{B_x} f_{X,Y}(x,y)~\mathrm{d}y~\mathrm{d}x \\ &= \int_{-\infty}^{\infty}f_{X}(x)\int_{B_x} f_{Y|X}(x,y)~\mathrm{d}y~\mathrm{d}x = \int_{-\infty}^{\infty}\mathbb{P}(A|X=x)~ f_X(x)~\mathrm{d}x. \end{align*} One can convince themselves that this construction gives us the properties we would expect if, for example, $X$ and $Y$ are independent, which should give us some confidence that this notion of conditional probability makes sense. Disintegrations The more general name for the concept we dealt with in the previous paragraph is disintegration. In complete generality, disintegrations need not exist, however if the probability space $\Omega$ is a Radon space equipped with its Borel $\sigma$-field, they do. It might seem off-putting that the topology of the probability space now comes into play, but I believe for most purposes it will not be a severe restriction to assume that the probability space is a (possibly infinite) product of the space $([0,1],\mathcal{B},\lambda)$, that is, $[0,1]$ equipped with the Euclidean topology, Borel $\sigma$-field and Lebesgue measure. A one-dimensional variable $X$ can then be understood as $X(\omega) = F^{-1}(\omega)$, where $F^{-1}$ is the generalized inverse of the cumulative distribution function of $X$. The disintegration theorem then gives us the existence of a family of measures $(\mu_x)_{x \in \mathbb{R}}$, where $\mu_x$ is supported on the event $[X=x]$, and the family $(\mu_x)_{x\in \mathbb{R}}$ is unique up to $\text{law}(X)$-almost everywhere equivalence. Writing $\mu_x$ as $\mathbb{P}(\cdot|X=x)$, in particular, for any Borel set $A \in \mathcal{B}$ we then again have $$\mathbb{P}(A) = \int_{-\infty}^{\infty} \mathbb{P}(A|X=x)~f_X(x)~\mathrm{d}x.$$ Reference for Kolmogorov quote: Kolmogoroff, A., Grundbegriffe der Wahrscheinlichkeitsrechnung., Ergebnisse der Mathematik und ihrer Grenzgebiete 2, Nr. 3. Berlin: Julius Springer. IV + 62 S. (1933). ZBL59.1152.03.><|endoftext|> TITLE: Is the finite sum of factorials constant modulo the summation limit? QUESTION [10 upvotes]: The answer to the following question would give an alternative solution to an old olympiad question if it is true. Prove that there is no (constant) integer $c$ such that $$1!+2!+\dots + q! \equiv c \bmod q \text{ for all $q \in \mathbb N^\ast$.}$$ ($\mathbb N^\ast = \mathbb N \setminus \{0\}$) REPLY [3 votes]: Let $ K ( q ) = \sum _ { k = 1 } ^ { q - 1 } k ! $. Since $ q ! \equiv 0 \pmod q $, thus $ c $ is an integer such that for every positive integer $ q $, we have $ K ( q ) \equiv c \pmod q $. for instance, we have $ K ( q ! ) \equiv c \pmod { q ! } $. But by definition of $ K $, we know that $ K ( q ! ) \equiv K ( q ) \pmod { q ! } $, which leads to $ K ( q ) \equiv c \pmod { q ! } $. Hence there is a sequence of integers like $ ( k _ q ) _ { q \in \mathbb Z ^ + } $ such that $ c = k _ q \cdot q ! + K ( q ) $. Now for every positive integer $ q $: $$ 0 = c - c = k _ { q + 1 } \cdot ( q + 1 ) ! + K ( q + 1 ) - k _ q \cdot q ! - K ( q ) = \big( ( q + 1 ) k _ { q + 1 } - k _ q + 1 \big) q ! 
$$ $$ \therefore \quad k_{q+1} = \frac{k_q - 1}{q+1} $$ Now using induction we show that for every natural number $n$, we must have $|k_q| \ge q^n$. For the base case, we note that if $k_q = 0$, then $k_{q+1}$ can't be an integer, so $|k_q| \ge 1 = q^0$. For the induction step, we have: $$ \frac{|k_q| + 1}{q+1} \ge \frac{|k_q - 1|}{q+1} = |k_{q+1}| \ge (q+1)^n $$ $$ \therefore \quad |k_q| \ge (q+1)^{n+1} - 1 \ge q^{n+1} $$ But this leads to an obvious contradiction. So $c$ doesn't exist.<|endoftext|> TITLE: Prove $4a^2+b^2+1\ge2ab+2a+b$ QUESTION [5 upvotes]: Prove $4a^2+b^2+1\ge2ab+2a+b$ $4a^2+b^2+1-2ab-2a-b\ge0$ $(2)^2(a)^2+(b)^2+1-2ab-2a-b\ge0$ Any help from here? I am not seeing how this can be factored. REPLY [3 votes]: Multiply by $2$, then we get: $$8a^2+2b^2+2 \ge 4ab+4a+2b$$ and rearranging we get: $$(4a^2-4ab+b^2)+(b^2-2b+1)+(4a^2-4a+1) \ge 0$$ and then $$(2a-b)^2+(b-1)^2+(2a-1)^2 \ge 0$$ REPLY [2 votes]: High school solution: $$ 4a^2+b^2+1-2ab-2a-b=\frac{(2a-1)^2+(2a-b)^2+(b-1)^2}{2}\ge 0. $$<|endoftext|> TITLE: Question on proof that $|G| = pqr$ is not simple QUESTION [5 upvotes]: Assume $|G| = pqr$ where $p,q,r$ are primes with $p < q < r$. Then $G$ is not simple. I have a problem understanding the proof (see for example here). In the proof one assumes that $n_p,n_q,n_r > 1$ (number of each $p,q,r$-Sylow subgroups respectively) and then by Sylow we have $$n_r | pq \qquad \text{and} \qquad n_r = 1 + kr, k\in \mathbb{N}_0$$ Now one deduces that $n_r = pq$, which I do not understand. REPLY [5 votes]: Since $n_r$ is not $1$, we have $k>0$, which implies $n_r=1+kr > r > q > p$; the only divisor of $pq$ exceeding both $p$ and $q$ is $pq$ itself, so $n_r=pq$.<|endoftext|> TITLE: When do Eigenvalues of $Q^TAQ$ and $A$ coincide? QUESTION [5 upvotes]: Let's say I have some $n\times n$ matrix $A$ and an $n\times l$ matrix $Q$, whose columns form a basis for some subspace $\mathcal{L} \subset \mathbb{R}^n$. My intuition tells me that I would need $Q$ to be a full orthogonal basis of $\mathbb{R}^n$ (i.e. $l=n$), such that all eigenvalues $\lambda_i$ of $Q^TAQ$ and $A$ coincide. However, I don't see where in the proof for the latter assertion the same dimensionality is needed: Let $v\in\mathbb{R}^n\setminus\{0\}$ be an eigenvector of $A$, i.e. $Av=\lambda v$. Then $\lambda$ is an eigenvalue of $Q^TAQ$ with eigenvector $Q^Tv$: $$ (Q^TAQ)(Q^Tv)=Q^TA(QQ^T)v=Q^TAv=\lambda(Q^Tv). $$ Let $v\in\mathcal{L}\setminus\{0\}$ be an eigenvector of $Q^TAQ$, i.e. $(Q^TAQ)v=\lambda v$. Then $\lambda$ is an eigenvalue of $A$ with eigenvector $Qv$: $$ A(Qv)=IAQv=QQ^TAQv=Q(Q^TAQv)=\lambda(Qv). $$ Maybe you can help me out here. Thanks a lot in advance! REPLY [3 votes]: In case $Q$ forms an orthonormal basis of $\mathcal{L}$, you have $Q^TQ = I$, but as long as $l<n$ the product $QQ^T$ is only the orthogonal projection onto $\mathcal{L}$, not the identity, so the two computations above (both of which replace $QQ^T$ by $I$) break down; in general the eigenvalues of $Q^TAQ$ need not be eigenvalues of $A$ unless $l=n$.<|endoftext|> TITLE: Every class in $H_{n - 1}(M; \mathbb{Z})$ represented by closed, smooth, orientable submanifold of codimension $1$? QUESTION [9 upvotes]: Let $M$ be a closed, smooth, orientable $n$-manifold. How do I see that every class in $H_{n - 1}(M; \mathbb{Z})$ (resp. $H_{n - 2}(M; \mathbb{Z})$) is represented by a closed, smooth, orientable submanifold of codimension $1$ (resp. $2$)? Thoughts. Perhaps we want to use the fact that $S^1$ is a $K(\mathbb{Z}, 1)$ (resp. $\mathbb{C}P^\infty$ is a $K(\mathbb{Z}, 2)$) and represent a cohomology class by a smooth map, then use general position? REPLY [5 votes]: Your approach is the right one, as also mentioned in the comments.
Let me spit out some details: $H_{n-1}(M;\mathbb Z)$ isomorphic to $H^1(M;\mathbb Z)\cong [M,K(\mathbb Z,1)]=[M, S^1]$. Now given $f:M\rightarrow S^1$, take a regular value p, and check that $[f^{-1}(p)]$ is the required homology class. The story for codimension $2$ is very similar. Then $H_{n-2}(M;\mathbb Z)$ corresponds to $H^2(M,\mathbb Z)\cong [M,K(\mathbb{Z},2)]= [M,\mathbb{CP}^\infty]$. Now $\mathbb{CP}^\infty$ is not a manifold, so we can't use general position directly. However, it can be well approximated by manifolds, which is enough: By cellular approximation, any map $M\rightarrow \mathbb{CP}^\infty$ can be homotoped to a map into $\mathbb{CP}^{\dim M}$. Now $\mathbb{CP}^{\dim M-1}\subset \mathbb{CP}^{\dim M}$, and up to homotopy, $f$ will meet $CP^{\dim M-1}$ transversely. Check that $f^{-1}(\mathbb{CP}^{\dim M-1})$ represents the right homology class. For higher codimension homology classes the story along these lines ends here I believe. However Thom investigated this problem in general, and gave more answers (certain multiples of homology classes can be represented by submanifolds, but generally this does not work. Over $\mathbb{Z}/2\mathbb{Z}$ it is always possible)<|endoftext|> TITLE: Can a right triangle have odd-length legs and even-length hypotenuse? QUESTION [11 upvotes]: Is it possible to have an even integer hypotenuse and odd integer legs (perpendicular and base) in a right triangle? If yes, please give an example. If no then please prove that. REPLY [24 votes]: Suppose that you are right and there is a possible right triangle such that $a=2k+1, b=2m+1$ and $c=2n$ {legs are odd while hypotenuse is even} and you have $a^2+b^2=c^2$. Now on expanding (substituting for $a, b, c$) you will get that: $$(2k+1)^2+(2m+1)^2=(2n)^2$$ $$4(k^2+m^2+k+m)+2=4n^2$$ On dividing the equation by $2$, you will get $$2(k^2+m^2+k+m)+1=2n^2$$ Notice that term on left is odd while the term on right is even. A contradiction. So, you were wrong and therefore there does not exist such a right triangle<|endoftext|> TITLE: Can you have negative sets? QUESTION [41 upvotes]: I figure that since you can, of course, have members in a set, have only a single member in a set, and then have no members in a set, it seems not then a big step forward (or backwards depending how you think of it) to think of a set with negative members. I shall elucidate. Since set theory deals with membership, and it deals not with the quantity, but the quality of those members, perhaps it be possible to have a set with negative members which subtract members from another set whose positive counterparts is contained therein. For example, the union of the sets $A$ and $B$, where set $A = \{1,2,3\}$ and set $B =\{-3\}$ would result in the set $A ∪ B = {1,2}$. Two notes: First, you can arbitrarily construct any set one desires, but when applied to the real world, perhaps this may be of use?; Second, the empty set seems frivolous but turned out to be quite useful, maybe the same may be said for negative sets? As someone pointed out, and they are of course correct, the set would actually be $\{1,2,3,-3\}$. However, in sticking with the principle, is what am describing denotable? REPLY [4 votes]: If you seek a generalization of multisets where elements can have negative membership count (signed multisets) then this has been investigated many times in the past. One place to start learning about the history is Wayne D. Blizard's paper Negative Membership, which cites prior work by T. Hailperin, M. Kline, E. 
Fischbein, H, Whitney, R. Rado, M. P. Schutzenberger, S. Eilenberg, R Feynman and others. See also Blizard's The Development of Multiset Theory and also this prior question.<|endoftext|> TITLE: Exercise 3A.7 of "Finite group theory", M. Isaacs QUESTION [6 upvotes]: Let $G$ finite group and $\sigma \in \text{Aut}(G)$, suppose that at most two prime numbers divide $o(\sigma)$. Show that $\left \langle \sigma \right \rangle$ has a regular orbit on $G$. Suppose $o(\sigma)=p^\alpha q^\beta$ with $\alpha, \beta$ non-negative numbers, $\alpha+\beta>0$. Decompose (as a partition) $G$ in orbits under the action of $\left \langle \sigma \right \rangle$: \begin{gather} G=x_1^{\left \langle \sigma \right \rangle}\cup \dots \cup x_n^{\left \langle \sigma \right \rangle} \end{gather} Write $\lambda_i=x_i^{\left \langle \sigma \right \rangle}$ and $m=\mathrm{lcm}\{o(\lambda_1), \dots ,o(\lambda_n)\}$ and suppose $m0$. If $\beta = 0$, that is if $o(\sigma)=p^{\alpha}$, then let $\tau=\rho=\sigma^{p^{\alpha-1}}$. If $\beta>0$, then let $\tau = \sigma^{p^{\alpha-1}q^{\beta}}$ and $\rho = \sigma^{p^{\alpha}q^{\beta-1}}$. In either case the elements $\tau$ and $\rho$ generate the only minimal subgroups of $\Sigma$. Since both $\tau$ and $\rho$ are nonidentity automorphisms of $G$, the subgroups $C_G(\tau), C_G(\rho)$ are proper subgroups of $G$. $G$ cannot be the union of any two proper subgroups, so there is an element $g\in G-(C_G(\tau)\cup C_G(\rho))$. I claim that the orbit $\Sigma g$ is regular. By definition, $\Sigma$ acts transitively on $\Sigma g$. To see that $\Sigma$ acts regularly on $\Sigma g$, it suffices (since $\Sigma$ is abelian) to show that $C_{\Sigma}(g) = \{1\}$. This equality holds because, by the choice of $g$, we have $\tau, \rho\notin C_{\Sigma}(g)$. Since any nonidentity subgroup of $\Sigma$ contains either $\tau$ or $\rho$, this forces $C_{\Sigma}(g)=\{1\}$. \\\<|endoftext|> TITLE: How is the Königsberg 7 bridge problem related to topology? QUESTION [5 upvotes]: How can we use topology to solve the famous konigsberg 7 bridge problem? By using graph theory we can say that there does not exists any such path but I want to know the application of topology on the 7 bridge problem. Could anybody explain it to me? Thanks REPLY [2 votes]: I don't think there's a natural solution of it using topology or any real reason to attempt to state it and solve it using tools of modern topology. The reason that this problem is commonly mentioned when talking about the history of topology and regarded as part of the begginings of topology is that problem is "topological" in the following sense: We are talking about a "shape". A city with divided by some rivers connected by some bridges. We can easily draw this city. In this sense the problem seems geometrical, we are talking about properties of a "shape". But what differentiates it from other geometrical problems like finding perimeters, areas, or lengths is that if we bend and stretch the shape (the city, or its drawing) the answer to the problem does not change. There either is or isn't a walk that crosses all the bridges and no ammount of stretching and bending will change the answer to it. So the problem suggests that shapes have fundamental properties that are invariant under smooth deformation (no cutting or gluing) these properties can be referred to as the topological properties of the shape. Studying these properties is the subject of topology. 
But this particular problem is more easily solved in terms of graph theory, it's just a simple and early example of a property that is invariant under smooth deformation; a property that is determined by some mysterious relationship between the points in a shape, a relationship that is very independent to the distance between them.<|endoftext|> TITLE: Prove $2^{1/3} + 2^{2/3}$ is irrational QUESTION [6 upvotes]: What's the nice 'trick' to showing that the following expression is irrational? $2^{1/3} + 2^{2/3}$ REPLY [5 votes]: Here's a slightly more 'sledgehammer' approach: since $x^3-2$ is irreducible over the rationals, the minimal polynomial of its root $z=2^{1/3}$ must be of degree three. But if $2^{1/3}+2^{2/3}$ were rational, say $\frac ab$, then that would imply that $z+z^2=\frac ab$, or $bz^2+bz-a=0$, contradicting the minimal-degree statement above. OTOH, we get a nice prize for all this 'heavy machinery': the exact same argument shows in one fell swoop that $a2^{1/3}+b2^{2/3}$ is irrational for all (nonzero) rational $a,b$; in other words, $2^{1/3}$ and $2^{2/3}$ are linearly independent over $\mathbb{Q}$.<|endoftext|> TITLE: The smallest odd perfect number must exceed $10^{300}$. QUESTION [6 upvotes]: I am studying about perfect numbers from last two week and have experienced so much adventure in studying such an interesting topic. The basic sources have been Wikipedia and the book Euler: Master of us all. After proving so many results and reading so much theory I m stuck on one of the results mentioned at the end of the the book "Euler: Master of us all". The result is as follow: The smallest odd perfect number must exceed $10^{300}$. Since the name of mathematician who gave the result is not given in the book so I can't even find it on internet. I shall be highly thankful if you can give me a hint to approach for this result or can supply a direct proof. Forgive me if this result is trivial and I m missing very common thing. Thanks. REPLY [6 votes]: The Wikipedia article gives a stronger lower bound, $10^{1500}$, which is shown in a paper by Ochem and Rao (2012) that says they obtained the improvement by modifying the method by which Brent, Cohen, and te Riele (1991) got the bound you ask about. See the PDF here or the Math. Comp. journal page here. REPLY [5 votes]: That's hardly a "common thing". The paper establishing the $10^{300}$ bound dates back to $1991$ and can be downloaded from the author's page: Improved techniques for lower bounds for odd perfect numbers . Abstract If $N$ is an odd perfect number, and $q^k$ is the highest power of $q$ dividing $N$, where $q$ is prime and $k$ is even, then it is almost immediate that $N \gt q^{2k}$. We prove here that, subject to certain conditions verifiable in polynomial time, in fact $N > q^{5k/2}$. Using this and related results, we are able to extend the computations in an earlier paper to show that $N > 10^{300}$. See also the OddPerfect.org preaanouncement.<|endoftext|> TITLE: Examples about that $\exp(X+Y)=\exp(X) \exp(Y)$ does not imply $[X,Y]=0$ where $X,Y$ are $n \times n $ matrix QUESTION [6 upvotes]: I read the https://en.wikipedia.org/wiki/Matrix_exponential There is a saying that "The converse is not true in general. The equation $\exp(X+Y)=\exp(X) \exp(Y)$ does not imply that X and Y commute." I would like to know some concrete examples. 
REPLY [11 votes]: Hint Consider $X=\begin{pmatrix} \pi i&0\\0&-\pi i\end{pmatrix}$ and $Y=\begin{pmatrix} 0&1\\0&-2\pi i\end{pmatrix}.$<|endoftext|> TITLE: Maximum difference inequality QUESTION [6 upvotes]: Let for $k = 1, \dots, K$, $A_k \in \mathbb{R}$ and $B_k \in \mathbb{R}$ be numbers that depend on $k$. I think the following holds true, but I don't know how to show it. $$\left| \max_{k}|A_k| - \max_{k} |B_k| \right| \leq \max_{k}\left|A_k - B_k\right|.$$ Here $|\cdot|$ denotes absolute value. Using the triangle inequality doesn't help here, and I don't know what else can be used. REPLY [6 votes]: For $1\le k\le K$ you have $$\vert A_k \vert =\vert A_k-B_k +B_k \vert \le \vert A_k-B_k \vert +\vert B_k \vert $$ Hence $$\max\limits_k \vert A_k \vert \le \max\limits_k \vert A_k-B_k\vert + \max\limits_k \vert B_k \vert$$ or $$\max\limits_k \vert A_k \vert - \max\limits_k \vert B_k \vert \le \max\limits_k \vert A_k - B_k\vert$$ by symmetry you also get the inequality $$\max\limits_k \vert B_k \vert - \max\limits_k \vert A_k \vert \le \max\limits_k \vert A_k - B_k\vert$$ and therefore the desired inequality $$\left| \max_{k}|A_k| - \max_{k} |B_k| \right| \leq \max_{k}\left|A_k - B_k\right|$$<|endoftext|> TITLE: Alice and Bob are flipping coins... QUESTION [6 upvotes]: Alice and Bob are playing a game. They randomly determine who starts, then they take turns flipping a number of coins (N) and adding them to a growing pile. The first one to collect their target number of tails (T) wins. When Alice's variables are equal to Bob's ($N_{A} = N_{B}$, $T_{A} = T_{B}$), the odds of her winning are obviously 50%. However, for $N_{A} = 2, T_{A} = 20, N_{B} = 1, T_{B} = 10$ Alice's chances of victory appear to be slightly lower than 50%. This is based on running a few hundred thousand simulations of the game in Python. This outcome is, unfortunately, unintuitive to me. What is the mathematical reason for it? Note: This is a specific example chosen to highlight an issue I'm having in a more general problem. In the general problem, the players each have: Odds of an attempt getting them a point (O), number of attempts they get to make on their turn (N), and total number of collected points needed to win (T). If someone could also provide an equation that predicts the probability of Alice or Bob winning, given $O_{A}, N_{A}, T_{A}, O_{B}, N_{B}$, and $T_{B}$, I would be grateful. REPLY [7 votes]: You're right! This is rather non-intuitive. Since the player who starts is chosen at random, we can ignore this in our calculations since it will average out (you can show this more formally). On each turn let $A$ be the number of tails that Alice flips, and $B$ the number of tails for Bob. $A \sim Binomial(2,1/2)$ $B \sim Binomial(1,1/2)$ Now let $A_j$ be the number of tails that Alice has after turn j. Define $B_j$ similarly for Bob. We can see that $A_j = A_{j-1} + A$ and $B_j = B_{j-1} + B$. So at each turn, $A_j$ is the sum of iid binomials, and $B_j$ is the sum of different iid binomials. Hence $A_j$ and $B_j$ are both still binomial. I'll omit this proof but if you're interested I can add it. Now we have: $A_j \sim Binomial(2j, 1/2)$ $B_j \sim Binomial(j, 1/2)$ So for each turn j, we should compare the probabilities $P(A_j \geq 20)$ and $P(B_j \geq 10)$. These can be found in terms of the Binomial CDF. For your problem I have plotted these probabilities for turns 1 through 30. Interestingly, as the game goes on longer, it appears the Alice has a higher chance of winning! 
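Since the plot itself is not reproduced here, the following Python sketch (Python rather than the R used for the original computations, with the parameters $N_A=2, T_A=20, N_B=1, T_B=10$ taken from the question) recomputes the same per-turn probabilities with scipy, and also runs a quick Monte Carlo of the full game of the kind described in the question.

```python
import random
from scipy.stats import binom

# Per-turn win probabilities: P(A_j >= 20) with A_j ~ Bin(2j, 1/2), P(B_j >= 10) with B_j ~ Bin(j, 1/2)
for j in (5, 10, 15, 20, 25, 30):
    p_alice_by_j = binom.sf(19, 2 * j, 0.5)   # sf(19, ...) = P(X > 19) = P(X >= 20)
    p_bob_by_j = binom.sf(9, j, 0.5)          # P(X >= 10)
    print(f"turn {j:2d}: P(A_j >= 20) = {p_alice_by_j:.4f}, P(B_j >= 10) = {p_bob_by_j:.4f}")

def alice_wins(n_a=2, t_a=20, n_b=1, t_b=10):
    """Play one game: random starting player, alternate turns, first to reach the target wins."""
    players = [("A", n_a, t_a), ("B", n_b, t_b)]
    random.shuffle(players)                   # randomly determine who starts
    totals = {"A": 0, "B": 0}
    while True:
        for name, n_coins, target in players:
            totals[name] += sum(random.random() < 0.5 for _ in range(n_coins))  # tails this turn
            if totals[name] >= target:
                return name == "A"

games = 200_000
print("estimated P(Alice wins) =", sum(alice_wins() for _ in range(games)) / games)  # a bit below 1/2
```

(These numbers match the message of the plot: Alice only overtakes Bob when the game runs long.)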
But on average, the game will end after 20 turns and Bob still has a slightly higher probability at this point (.58 vs .56). Bob has a higher chance of winning early in the game, which ends up working in his favor! UPDATE: We can actually derive an equation for P(Alice Wins). Since it is in terms of the Binomial CDF it has to be computed numerically, as the Binomial CDF depends on the Incomplete Beta Function. THE EQUATION: $P(\text{Alice Wins}) = \sum_{j=1}^\infty P(\text{Alice Wins On Turn j})$ $P(\text{Alice Wins On Turn j}) = \\ P(A_j \geq T_A, \ A_{j-1} < T_A)\bigl[P(B_j < T_B) + \frac{1}{2}P(B_j \geq T_B, \ B_{j-1} < T_B)\bigr] $ Let's break it down. $P(\text{Alice Wins on Turn j}) = P_1(P_2 + \frac{1}{2}P_3)$ $P_1$ is just the probability that Alice reaches her Target on turn j (not before). $P_2$ is the probability that Bob hasn't yet reached his target, and $P_3$ is the probability that Bob also JUST reached his target. If this happens, then Alice wins only if she went first, which was a 50-50 chance, hence the $\frac{1}{2}$. Now we must simply calculate the $P_i$.The easiest of which is $P_2$, just the cdf $F_{B_j}(T_B-1)$. $P_1$ and $P_2$ are somewhat more involved, although there calculations are the same exact idea. $P(A_j \geq T_A, \ A_{j-1} < T_A) = \sum_{m=1}^{N_A}\sum_{n=m}^{N_A}P(A_{j-1} = T_A - m)P(A = n) $ I skipped the details, but the above is due to the fact that we can write what we want in terms of $A_{j-1}$ and $A$ (instead of $_j$), and these variable are independent. It's a complicated problem, but we have all the peices we need to compute the exact probability that Alice wins, given parameters $N_A, T_A, O_A, N_B, T_B, O_B$ and even $P_A$, the probability that Alice goes first. $P(\text{Alice Wins}) = \sum_{j=1}^\infty\\ \biggl[\sum_{m=1}^{N_A}\sum_{n=m}^{N_A}P(A_{j-1} = T_A - m)P(A = n)\biggr]\biggl[1-F_{B_j}(T_B-1) + \\ P_A\sum_{m=1}^{N_B}\sum_{n=m}^{N_B}P(B_{j-1} = T_B - m)P(B = n)\biggr]$ Indeed for the problem you described above, we see that Alice has only a 46.32907 % chance of winning. If you want to see more details, or play around with different values, the R-code which calculates the equation I've just described can be found here.<|endoftext|> TITLE: Is there an intuition for cyclic monotonicity? QUESTION [5 upvotes]: Cyclic monotonicity says that if we have a correspondence, $x(w)$, the $x$ is cyclically monotone in $w$ if for a finite sequence $w_1,\cdots w_k$ and $x^*(w_i)\in x(w_i)$, we have $\sum_{i=1}^k (w_i-w_{i+1})\cdot x^*(w_i) \leq 0$ (I guess this is technically the definition for cyclically monotone increasing). I am wondering if there is some intuition (or clear) explanation of what this says. To me, it seems to say that when $w_{i+1} >w_i$ then $x_i > x_{i+1}$, in some sort of "general" sense. REPLY [3 votes]: R.T. Rockefellar was the convex analyst who showed that a (multivalued) linear operator is the subdifferential of a convex function iff the operator is cyclically monotone. To quote one of his papers on that topic: The cyclic monotonicity condition can be viewed heuristically as a discrete substitute for two classical conditions: that a smooth convex function has a positive semi-definite second differential, and that all circuit integrals of an integrable vector field must vanish.<|endoftext|> TITLE: Show that $(1+\frac{x}{n})^n \rightarrow e^x$ uniformly on any bounded interval of the real line. QUESTION [13 upvotes]: Show that $(1+\frac{x}{n})^n \rightarrow e^x$ uniformly on any bounded interval of the real line. 
I am trying to argue from the definition of uniform convergence for a sequence of real-valued functions, but am struggling quite a lot. My efforts so far have concentrated on trying to find a sequences, ${a_n}$ which tends to zero, such that $$|(1+\frac{x}{n})^n -e^x |\leq a_n$$ for all $n$. But I have been unsuccessful thus far. All help is greatly appreciated. REPLY [6 votes]: Herein, we present an approach that for any given $\epsilon>0$, produces a number $N$, which depends on $\epsilon$ and not $x$, such that $\displaystyle \left|e^x-\left(1+\frac xn\right)^n\right|<\epsilon$ whenever $n>N$. To do this we will use the inequalities, which I established in THIS ANSWER using only the limit definition of the exponential function and Bernoulli's Inequality. The inequalities used in the ensuing analysis are $$\bbox[5px,border:2px solid #C0A000]{e^x\le 1+x} \tag 1$$ for $x<-1$ and $$\bbox[5px,border:2px solid #C0A000]{\log(x)\ge \frac{x-1}{x}} \tag 2$$ for $x>0$. To this end, we proceed. We assume that $x\in [a,b]$ and that $\epsilon>0$ is given. Furthermore, we will choose $n$ such that $n>-x$ for all $x\in[a,b]$. Using $(1)$ and $(2)$ we can write $$\begin{align} \left|e^x-\left(1+\frac xn\right)^n\right|&=\left|e^x-e^{n\log\left(1+\frac xn\right)}\right|\\\\ &\le \left|e^x-e^{\frac{x}{1+x/n}}\right|\\\\ &=e^x\,\left|1-e^{-x^2/(x+n)}\right|\\\\ &\le e^x\,\left|\frac{x^2}{x+n}\right|\\\\ &\le e^{b}\frac{|\max^2(a,b)|}{n+a}\\\\ &<\epsilon \end{align}$$ whenever $ \displaystyle n>\frac{e^b\,\max^2(a,b)}{\epsilon}-a$. We take $\displaystyle N(\epsilon)=1+\left\lfloor \frac{e^b\,\max^2(a,b)}{\epsilon}-a \right\rfloor$ and we are done!<|endoftext|> TITLE: Is there a closed-form for $\displaystyle\frac{d^ny}{dx^n}$? QUESTION [6 upvotes]: I am dealing with this... Question Given $y$ is a function of $x$, $x^n+y^n=1$, where $n$ is a positive integer. Find $\displaystyle\frac{d^ny}{dx^n}$ in terms of $x$, $y$ and $n$. Example 1 For example, when $n=1$, $x+y=1$ then $\displaystyle\frac{dy}{dx}=-1$. Example 2 When $n=3$, $x^3+y^3=1$ then $\displaystyle\frac{d^3y}{dx^3}=-\left(\frac{10x^6}{y^8}+\frac{12x^3}{y^5}+\frac{2}{y^2}\right)$. Example 3 When $n=6$, $x^6+y^6=1$ then $\displaystyle\frac{d^6y}{dx^6}=-\left(\frac{623645x^{30}}{y^{35}}+\frac{1612875x^{24}}{y^{29}}+\frac{1425875x^{18}}{y^{23}}+\frac{482625x^{12}}{y^{17}}+\frac{46100x^6}{y^{11}}+\frac{120}{y^5} \right)$. Table of values Below is a triangular array of values for the coefficient: \begin{array}{|c|c|c|c|} \hline n \backslash k& 1 & 2 & 3 & 4 & 5 & 6 &.\\ \hline 1 & 1\\ \hline 2 & 1 & 1&\\ \hline 3 & 10& 12&2&\\ \hline 4 & 231&378&153&6 &\\ \hline 5 & 9576&20160&12960&2400 & 24&\\ \hline 6 & 623645&1612875 & 1425875&482625&46100&120&\\ \hline. \end{array} Denote $a_{n,k}$ (where $k$ is a positive integer $\le n$) as the number in the $n$-th row and the $k$-th column. (e.g. $a_{3,2}=12$) Therefore, \begin{align} \displaystyle \frac{d^ny}{dx^n}=-\sum_{k=1}^n a_{n,k}\left(\frac{x^{n^2-kn}}{y^{n^2-kn+n-1}}\right).\end{align} I found that \begin{align}\boxed{\textbf{E1:} \quad \displaystyle \sum_{k \rm \ is \ odd} a_{n,k} =\sum_{k \rm \ is\ even} a_{n,k} \ \text{for}\ n>1\ \ \ },\end{align} (i.e. $a_{n,1}+a_{n,3}+a_{n,5}+...=a_{n,2}+a_{n,4}+a_{n,6}+...$) and \begin{align}\boxed{\textbf{E2:}\qquad \qquad \qquad a_{n,n}=(n-1)! \qquad \quad \ }\end{align} related to the factorial. E1 and E2 are the two equalities I have discoverd. 
Moreover, \begin{align}\boxed{ \ a_{n,k}\ \text{is divisible by}\ (n-1) \text{.}\qquad \text{(i.e.}\ \ (n-1)|a_{n.k}\ \text{)}\ }\end{align} Someone has mentioned the generalized binomial theorem which can reduce $\displaystyle \frac{d^ny}{dx^n}$ to \begin{align} \displaystyle \frac{d^ny}{dx^n}=\sum_{k=1}^{\infty} \binom{1/n}{k} \frac{(kn)!}{\left(\left(k-1\right) n\right) !} x^{(k-1)n}\end{align} by rewriting $y=\left(1-x^n\right)^{1/n}$ for $|x|<1$. It could be the answer, but now I'm more interested in finding the closed-form for $a_{n,k}$. Is there a closed-form for $\displaystyle\frac{d^ny}{dx^n}$ (in terms of $a_{n,k}$)? OR Is there a closed-form for $a_{n,k}$? Thanks. REPLY [2 votes]: We derive a recurrence formula for $a_{n,k}$ which could be helpful to find a closed formula. We start with some introductory remarks. We can write the function $y=y(x)$ in the form \begin{align*} y(x)&=\left(1-x^n\right)^{\frac{1}{n}}\\ \end{align*} and the first derivatives are \begin{align*} \frac{d}{dx}y(x)&=-x^n\left(1-x^n\right)^{\frac{1}{n}-1}\\ \frac{d^2}{dx^2}y(x)&=-\left[(n-1)x^{2n-2}\left(1-x^n\right)^{\frac{1}{n}-2}+(n-1)x^{n-2}\left(1-x^n\right)^{\frac{1}{n}-1}\right]\\ \frac{d^3}{dx^3}y(x)&=-\left[(2n-1)(n-1)x^{3n-3}\left(1-x^n\right)^{\frac{1}{n}-3} +3(n-1)^2x^{2n-3}\left(1-x^n\right)^{\frac{1}{n}-2}\right.\\ &\qquad\qquad\left.+(n-1)(n-2)x^{n-3}\left(1-x^n\right)^{\frac{1}{n}-1}\right]\tag{1} \end{align*} OPs example 2 has the following representation \begin{align*} \frac{d^3y}{dx^3}&=-\left(\frac{10x^6}{y^8}+\frac{12x^3}{y^5}+\frac{2}{y^2}\right)\\ &=-\left(10x^6\left(1-x^3\right)^{\frac{1}{3}-3}+12x^3\left(1-x^3\right)^{\frac{1}{3}-2} +2\left(1-x^3\right)^{\frac{1}{3}-1}\right)\tag{2} \end{align*} The expression (2) corresponds to (1) when setting $n=3$. From (1) we derive a general formula. Claim: We consider the $k$-th derivative of $y$ in the form \begin{align*} \frac{d^k}{dy^k}y(x)=-\sum_{j=1}^k a_{k,k-j+1}x^{jn-k}(1-x^n)^{\frac{1}{n}-j}\qquad\qquad 1\leq k\leq n\tag{3} \end{align*} with $a_{k,j}$ polynomials in $n$. The following is valid for $1\leq j\leq k\leq n$: \begin{align*} a_{1,1}&=1\\ a_{k,k-j+1}&=((j-1)n-1)a_{k-1,k-j+1}+(jn-k+1)a_{k-1,k-j}\\ \end{align*} We also set $a_{k,0}=a_{k,k+l}=0$ for $k,l\geq 1$. $$ $$ The start value $a_{1,1}=\color{blue}{1}$ follows from the first derivative already stated above: \begin{align*} \frac{d}{dx}y(x)&=-\color{blue}{1}x^n\left(1-x^n\right)^{\frac{1}{n}-1}\\ \end{align*} We obtain from (3) for $1\leq k\leq n$ \begin{align*} \frac{d^k}{dy^k}y(x)&=\frac{d}{dy}\left(\frac{d^{k-1}}{dy^{k-1}}y(x)\right)\\ &=\frac{d}{dx}\left(-\sum_{j=1}^{k-1} a_{k-1,k-j}x^{jn-k+1}(1-x^n)^{\frac{1}{n}-j}\right)\tag{4}\\ &=-\sum_{j=1}^{k-1} a_{k-1,k-j}(jn-k+1)x^{jn-k}(1-x^n)^{\frac{1}{n}-j}\\ &\qquad -\sum_{j=1}^{k-1} a_{k-1,k-j}x^{jn-k+1}\left(\frac{1}{n}-j\right)(1-x^n)^{\frac{1}{n}-j-1}\left(-nx^{n-1}\right)\tag{5}\\ &=-\sum_{j=1}^{k-1} a_{k-1,k-j}(jn-k+1)x^{jn-k}(1-x^n)^{\frac{1}{n}-j}\\ &\qquad -\sum_{j=1}^k a_{k-1,k-j}\left(jn-1\right)x^{(j+1)n-k}(1-x^n)^{\frac{1}{n}-j-1}\tag{6}\\ &=-\sum_{j=1}^k a_{k-1,k-j}(jn-k)x^{jn-k-1}(1-x^n)^{\frac{1}{n}-j}\\ &\qquad -\sum_{j=2}^{k+1} a_{k-1,k-j+1}\left((j-1)n-1\right)x^{jn-k}(1-x^n)^{\frac{1}{n}-j}\tag{7}\\ &=-\sum_{j=1}^k \left[(jn-k)a_{k-1,k-j}+\left((j-1)n-1\right)a_{k-1,k-j+1}\right] x^{jn-k}(1-x^n)^{\frac{1}{n}-j}\tag{8} \end{align*} and the claim follows. Comment: In (4) we use the expression (3) and replace $k$ with $k-1$ to represent the $(k-1)$-st derivative of $y(x)$. 
In (5) we apply the product rule to the series. In (6) we collect terms of the right-hand series. In (7) we shift the index $j$ of the second series by one to prepare merging of both parts. In (8) we collect both series and note that for $j=1$ and $j=k$ the summands $a_{k-1,0}=a_{k-1,k}=0$. Example: $k=1,\ldots,5$ We use the recurrence relation to show polynomials $a_{k,j}$ for small $k$ revealing thereby a regular structure. \begin{array}{rllll} a_{1,1}&=1\\ \hline a_{2,1}&=(n-1)a_{1,1}\\ a_{2,2}&=(n-1)a_{1,1}\\ \hline a_{3,1}&=(2n-1)a_{2,1}\\ a_{3,2}&=(2n-2)a_{2,1}&+(n-1)a_{2,2}\\ a_{3,3}&=&+(n-2)a_{2,2}\\ \hline a_{4,1}&=(3n-1)a_{3,1}\\ a_{4,2}&=(3n-3)a_{3,1}&+(2n-1)a_{3,2}\\ a_{4,3}&=&+(2n-3)a_{3,2}&+(n-1)a_{3,3}\\ a_{4,4}&=&&+(n-3)a_{3,3}\\ \hline a_{5,1}&=(4n-1)a_{4,1}\\ a_{5,2}&=(4n-4)a_{4,1}&+(3n-1)a_{4,2}\\ a_{5,3}&=&+(3n-4)a_{4,2}&+(2n-1)a_{4,3}\\ a_{5,4}&=&&+(2n-4)a_{4,3}&+(n-1)a_{4,4}\\ a_{5,5}&=&&&+(n-4)a_{4,4}\\ \end{array}<|endoftext|> TITLE: compact support intuition needed QUESTION [6 upvotes]: I am studying Distribution theory. But I am curious about that why we coin compact support. In what situation is it useful? Can any one give an intuitive way to explain this concept? REPLY [4 votes]: Why do we require that $\varphi(x) = 0$ for $x$ large enough ? Because it allows us to integrate by parts without fear : $$\int_{-\infty}^\infty T(x) \varphi'(x)dx = \lim_{x \to \infty} T(x)\varphi(x)-T(-x)\varphi(-x)-\int_{-\infty}^\infty T'(x) \varphi(x)dx$$ Here $\varphi \in C^\infty_c$ and $T$ is a distribution, so $T(x)\varphi(x)$ doesn't make sense, but if you assume that $\varphi(x) = 0$ for $|x| > M$ then clearly $\lim_{x \to \infty} T(x)\varphi(x)-T(-x)\varphi(-x) = 0$ and it makes sense to write $$\langle T,\varphi' \rangle =\int_{-\infty}^\infty T(x) \varphi'(x)dx = -\int_{-\infty}^\infty T'(x) \varphi(x)dx=-\langle T',\varphi \rangle \tag{1}$$ which is exactly what we need for defining $\delta'$ the derivative of the Dirac delta and the derivatives of distributions in general. Now the big question : how do you prove that $(1)$ makes sense (that it doesn't lead to some contradictions) ? Well you can take it as a definition, so nothing to prove, or you can show that when defining the distributions as linear operators $C^\infty_c \to \mathbb{R}$ continuous for the test function space topology, then the differentiation operator $\langle T,.\rangle \mapsto \langle T',.\rangle$ is continuous in the sense of distributions. See also the Schwartz space, where we replace the compact support property by a decay $o(x^{-k})$ at $\infty$, discarding the distributions with a too large grow rate and keeping the so-called tempered distributions.<|endoftext|> TITLE: On Reshetnikov's integral $\int_0^1\frac{dx}{\sqrt[3]x\ \sqrt[6]{1-x}\ \sqrt{1-x\,\alpha^2}}=\frac{1}{N}\,\frac{2\pi}{\sqrt{3}\,|\alpha|}$ QUESTION [14 upvotes]: V. Reshetnikov gave the remarkable integral, $$\int_0^1\frac{dx}{\sqrt[3]x\,\sqrt[6]{1-x}\,\sqrt{1-x\left(\sqrt{6}\sqrt{12+7\sqrt3}-3\sqrt3-6\right)^2}}=\frac\pi9(3+\sqrt2\sqrt[4]{27})\tag1$$ More generally, given some integer/rational $N$, we are to find an algebraic number $\alpha$ that solves, $$\int_0^1\frac{dx}{\sqrt[3]x\ \sqrt[6]{1-x}\ \sqrt{1-x\,\alpha^2}}=\frac{1}{N}\,\frac{2\pi}{\sqrt{3}\,|\alpha|}\tag2$$ and absolute value $|\alpha|$. (Compare to the similar integral in this post.) 
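For what it is worth, $(1)$ is easy to confirm numerically to high precision; a small mpmath sketch (tanh-sinh quadrature copes with the endpoint singularities):

```python
from mpmath import mp, mpf, sqrt, pi, quad

mp.dps = 40
alpha = sqrt(6)*sqrt(12 + 7*sqrt(3)) - 3*sqrt(3) - 6      # the alpha appearing in (1)
f = lambda t: t**(-mpf(1)/3) * (1 - t)**(-mpf(1)/6) * (1 - t*alpha**2)**(-mpf(1)/2)

lhs = quad(f, [0, 1])
rhs = pi/9 * (3 + sqrt(2) * mpf(27)**(mpf(1)/4))
print(lhs - rhs)    # should be ~0 at (nearly) working precision
```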
Equivalently, to find $\alpha$ such that, $$\begin{aligned} \frac{1}{N} &=I\left(\alpha^2;\ \tfrac12,\tfrac13\right)\\[1.8mm] &= \frac{B\left(\alpha^2;\ \tfrac12,\tfrac13\right)}{B\left(\tfrac12,\tfrac13\right)}\\ &=B\left(\alpha^2;\ \tfrac12,\tfrac13\right)\frac{\Gamma\left(\frac56\right)}{\sqrt{\pi}\,\Gamma\left(\frac13\right)}\end{aligned} \tag3$$ with beta function $\beta(a,b)$, incomplete beta $\beta(z;a,b)$ and regularized beta $I(z;a,b)$. Solutions $\alpha$ for $N=2,3,4,5,7$ are known. Let, $$\alpha=\frac{-3^{1/2}+v^{1/2}}{3^{-1/2}+v^{1/2}}\tag4$$ Then, $$ - 3 + 6 v + v^2 = 0, \quad N = 2\\ - 3 + 27 v - 33v^2 + v^3 = 0, \quad N = 3\\ 3^2 - 150 v^2 + 120 v^3 + 5 v^4 = 0, \quad N = 5\\ - 3^3 - 54 v + 1719 v^2 - 3492v^3 - 957 v^4 + 186 v^5 + v^6 = 0, \quad N = 7$$ and (added later), $$3^4 - 648 v + 1836 v^2 + 1512 v^3 - 13770 v^4 + 12168 v^5 - 7476 v^6 + 408 v^7 + v^8 = 0,\quad N=4$$ using the largest positive root, respectively. The example was just $N=2$, while $N=4$ leads to, $$I\left(\tfrac{1-\alpha}{2};\tfrac{1}{3},\tfrac{1}{3}\right)=\tfrac{3}{8},\quad\quad I\left(\tfrac{1+\alpha}{2};\tfrac{1}{3},\tfrac{1}{3}\right)=\tfrac{5}{8}$$ I found these using Mathematica's FindRoot command, and some hints from Reshetnikov's and other's works, but as much as I tried, I couldn't find prime $N=11$. Q: Is it true one can find algebraic number $\alpha$ for all $N$? What is it for $N=11$? REPLY [2 votes]: I. Duplication Following Nemo's lead in this answer, we find the formula, $$\frac{1}{2}I(p^2;\tfrac{1}{2},\tfrac{1}{3})=I(1+q^3;\tfrac{1}{2},\tfrac{1}{3})$$ where $p,q$ are related by the $12$-deg, $$p^2(-2 + 2 q + q^2)^6 = 36(1 + q^3) (4 + 4 q + 6 q^2 - 2 q^3 + q^4)^2$$ This then enables us to find infinitely many $\displaystyle\frac{1}{2^n N}$. For example, since $I(p^2;\tfrac{1}{2},\tfrac{1}{3})=\frac{1}{3}$ is known, then solving for $I(\alpha^2;\tfrac{1}{2},\tfrac{1}{3})=\frac{1}{6}$ turns out to involve a $36$-deg equation. II. Triplication (Courtesy of Nemo.) Starting with, $$B\left(z;\frac{1}{2},\frac{1}{3}\right)=2 \sqrt{z} \, _2F_1\left(\frac{1}{2},\frac{2}{3};\frac{3}{2};z\right). $$ The transformation $$ \, _2F_1\left(\frac{1}{2},\frac{2}{3};\frac{3}{2};-\frac{3 z \left(1-\frac{z}{9}\right)^2}{(1-z)^2}\right)=\frac{(1-z) \, }{1-\frac{z}{9}}{}_2F_1\left(\frac{1}{2},\frac{2}{3};\frac{3}{2};z\right) $$ applied two times gives $$ \frac{1}{3} B\left({\frac{(9-z)^2 z \left(z^3+225 z^2-405 z+243\right)^2}{729 (1-z)^2 (z+3)^6}};\frac{1}{2},\frac{1}{3}\right)=B\left(z;\frac{1}{2},\frac{1}{3}\right). $$<|endoftext|> TITLE: Simple connectedness of basin of attraction QUESTION [6 upvotes]: I want to prove that the immediate basin of attraction of a finite attracting fixed or periodic point is simply connected. We are talking about complex numbers ! According to Remark 2 p. 281 and Exercise 4.2 p. 283 of the text of Devaney [1], If $z_0$ is a finite attracting orbit (i.e., $z_0 \neq +\infty$), then any component of its basin of attraction is simply connected. This fact is an easy consequence of the Maximum Principle (see Exercise 4.2). Exercise 4.2. Prove that the immediate attracting basin of a (finite) attracting periodic point is simply connected. Apparently easy so I must be overlooking something. Who can give me an accurate proof ? [1] Robert L. Devaney, An Introduction to Chaotic Dynamical Systems, 2nd ed., Westview Press, 2003. REPLY [3 votes]: If you want to be rigorous then I don't think it is that easy (but I am not a real expert in the domain). 
One possible approach is to use two topological results: Lemma 1: If $A\subset \widehat{\Bbb C}$ (the Riemann sphere) is connected then each connected component of $ \widehat{\Bbb C}\setminus A$ is simply connected. (more or less obvious) You may use this in the following way: Let $\gamma: [0,1]\rightarrow {\Bbb C}$, $\gamma(0)=\gamma(1)$ be a continuous map (a loop). Let $\Omega\subset \widehat{\Bbb C}$ be the connected component of $\widehat {\Bbb C}\setminus \gamma$ containing $\infty$. We define $\Phi(\gamma) = \widehat{\Bbb C}\setminus \Omega$ to be its compliment. By the above lemma, both $\Omega$ and (more interestingly here) $\Phi(\gamma)\subset {\Bbb C}$ are simply connected. Furthermore, $\partial \Phi(\gamma) \subset \gamma$. The set $\Phi(\gamma)$ contains intuitively the points "encircled by $\gamma$". The second topological result comes from the maximum principle. Lemma 2: If $D\subset {\Bbb C}$ is a bounded domain and $f:{\Bbb C}\rightarrow {\Bbb C}$ is analytic then $\partial f (D) \subset f (\partial D)$, i.e. the image of the boundary contains the boundary of the image. To see this, note that if $w_0=f(z_0)\in \partial f(D)$, $z_0\in D$ but $w_0$ is not in $f(\partial D)$ then you get a contradiction with the maximum principle for the function $z\in D \mapsto 1/(f(z)-w)$ by choosing $w$ close enough to $w_0$ but in the complement of $f(D)$. Now, let $p$ be an attractive fixed point of a periodic point of an entire map $f:{\Bbb C}\rightarrow {\Bbb C}$ (or some iterate of it in the case of a periodic point). Let $\gamma$ be a loop consisting of points in a basin of attraction of $p$ (possibly the immediate basin but it need not be) and define as above $D=\Phi(\gamma)$. By the two Lemmas $$ \partial f^n (D) \subset f^n(\partial D) \subset f^n(\gamma)$$ which implies that $f^n(D)$ converges to $p$ since $f^n (\gamma)$ does. Thus, $D$ belongs to the same basin of attraction as $\gamma$ and since $D$ is simply connected so is the basin.<|endoftext|> TITLE: Linear Transformation from Infinite dimensional to Finite dimensional Space QUESTION [5 upvotes]: Let $T:V\to V$ be a linear transformation, where $V$ is an infinite-dimensional vector space over a field $F$. Assume that $T(V)=\{T(v):v\in V\}$ is finite-dimensional. Show that $T$ satisfies a nonzero polynomial over $F$, that is, there exists $a_0,\dots, a_n\in F$, with $a_n\neq 0_F$ such that $$a_0v+a_1T(v)+\dots+a_nT^n(v)=0_V$$ for all $v\in V$. I am not very sure how to approach this question. Suppose the dimension of $T(V)$ is $n$. I tried considering the set $\{T(v),T^2(v),\dots,T^{n+1}(v)\}$ which has to be linearly dependent thus there exists $a_i$ such that $a_1T(v)+\dots+a_{n+1}T^{n+1}(v)=0$. This seems to be similar to what the question whats, except that the polynomial is dependent on $v$, while the question wants a polynomial that works for all $v\in V$. Thanks for any help. REPLY [3 votes]: You're almost there. Let $v_1, \dots, v_n \in V$ such that $T(v_1), \dots, T(v_n)$ is a basis for $T(V)$. Let $P_i(T)$ be a polynomial in $T$ such that $P_i(T)(v_i)=0$. Take $P(X)=XP_1(X)\cdots P_n(X)$. Take $v \in V$. Then $T(v)=a_1T(v_1)+\dots+a_nT(v_n)$ and so $v = a_1 v_1 + \dots + a_n v_n + u$ with $u \in \ker T$. Therefore, $P(T)(v) = 0$ because $T,P_1(T),\dots, P_n(T)$ commute.<|endoftext|> TITLE: Existence of orthogonal coordinates on a Riemannian manifold QUESTION [8 upvotes]: This is probably a very naive question, but so far I could not find an answer: Let $(M,g)$ be a Riemannian manifold. 
Can we always find "orthogonal coordinates" locally? More precisely, I am asking if for every $p \in M$ there exists a neighbourhood $U$ and a diffeomorphism $\phi:\mathbb{R}^n \to U$, such that $g_{ij}=g(d\phi(e_i),d\phi(e_j))=0$ for $i \neq j$. Clarification: Note that I want $g_{ij}=0$ on all $U$, not just at $p$. Also, I allow $g_{ii} \neq g_{jj}$ for $i \neq j$ (the special case where $g_{ii}$ is independent of $i$ is called isothermal coordinates-and corresponds to conformal flatness of $U$). Of course, this is weaker than requiring $M$ to be conformally flat, since a (linear) map which maps an orthogonal basis to an orthogonal basis does not need to be conformal. REPLY [11 votes]: A Riemannian metric $g$ on an $n$-dimensional manifold is called locally diagonalizable if it is locally isometric to a Riemannian metric on a domain in $R^n$ with diagonal metric tensor. In dimension $n=2$ every Riemannian metric is locally diagonalizable due to existence of isothermal coordinates. For $n\ge 3$ the problem of local diagonalizability was solved in D. DeTurck, D. Yang, Existence of elastic deformations with prescribed principal strains and triply orthogonal systems. Duke Math. J. 51 (1984), no. 2, 243–260. They proved that for $n=3$ every Riemannian metric is indeed locally diagonalizable while for all $n\ge 4$ there are obstructions to local diagonalizability. For instance, $W(e_i, e_j, e_k, e_l)=0$ for every orthonormal frame with distinct $i, j, k, l$, here $W$ is the Weyl tensor.<|endoftext|> TITLE: How to convert numerical claims to first order logic? QUESTION [6 upvotes]: A). There are atmost 2 apples ? B). There are exactly 2 apples ? C). There is atmost 1 apple ? D). There is exactly 1 apple ? Is there any procedure to convert these type of english sentences as generally we have "For All" and "There Exists" in the sentences, but how is this different ? REPLY [19 votes]: Here are some possible ways to make these kinds of numerical claims in general: 'At least n' (Method 1) "There is at least 1 P" : $\exists x P(x)$ "There are at least 2 P's" : $\exists x \exists y (P(x) \land P(y) \land x \not = y)$ "There are at least 3 P's" : $\exists x \exists y \exists z (P(x) \land P(y) \land P(z) \land x \not = y \land x \not = z \land y \not = z)$ Etc. Note that with this method you need to use $n \choose 2$ non-identity claims (in addition to $n$ $P$ claims) so that increases rather quickly (e.g. to express there are at least 10 P's, we would need 45 non-identity statements! ... can we improve on that? Yes! But first let's discuss 'at most': 'At most n' (Method 1) One method to do 'at most n' is to deny 'at least n+1'. So: "There is at most 1 P": $\neg \exists x \exists y (P(x) \land P(y) \land x \not = y)$ If you bring the negation inside, this is equivalent to: $\forall x \forall y ((P(x) \land P(y)) \rightarrow x = y)$ "There are at most 2 P's": $\neg \exists x \exists y \exists z (P(x) \land P(y) \land P(z) \land x \not = y \land x \not = z \land y \not = z)$ Again, bringing the negation inside you get: $\forall x \forall y \forall z ((P(x) \land P(y) \land P(z)) \rightarrow (x = y \lor x = z \lor y = z))$ ... which is what goblin did! Etc. OK, so here we get even more (non-)identity statements: $n+1 \choose 2$ (in addition to n+1 $P$ claims). Later we will see how we can do this more efficiently, but first: 'Exactly n' (Method 1) One method is to recognize that 'Exactly n' is equivalent to 'at least n and at most n'. 
Doing a straightforward conjunction, we thus get: "There is exactly 1 P" : $\exists x P(x) \land \neg \exists x \exists y (P(x) \land P(y) \land x \not = y)$ or, equivalently: $\exists x P(x) \land \forall x \forall y ((P(x) \land P(y)) \rightarrow x = y)$ "There are exactly 2 P's" : $\exists x \exists y (P(x) \land P(y) \land x \not = y) \land \neg \exists x \exists y \exists z (P(x) \land P(y) \land P(z) \land x \not = y \land x \not = z \land y \not = z)$ or, equivalently: $\exists x \exists y (P(x) \land P(y) \land x \not = y) \land \forall x \forall y \forall z ((P(x) \land P(y) \land P(z)) \rightarrow (x = y \lor x = z \lor y = z))$ Etc. 'Exactly n' (Method 2) OK, so these claims get really big really fast. Can we do better? Yes. In stead of just conjuncting the 'at least' and 'at most' claims, let's integrate these two ideas: To say there are exactly $n$ P's, we can say that there are $n$ different P's ... but no others: "There is exactly 1 P" : $\exists x (P(x) \land \neg \exists y (P(y) \land x \not = y))$ or, equivalently: $\exists x (P(x) \land \forall y (P(y) \rightarrow x = y))$ "There are exactly 2 P's" : $\exists x (\exists y (P(x) \land P(y) \land x \not = y) \land \neg \exists z (P(z) \land z \not = x \land z \not = y))$ or, equivalently: $\exists x \exists y ((P(x) \land P(y) \land x \not = y) \land \forall z (P(z) \rightarrow (z = x \lor z = y)))$ "There are exactly 3 P's" : $\exists x \exists y \exists z ((P(x) \land P(y) \land P(z) \land x \not = y \land x \not = z \land y \not = z \land \neg \exists w (P(w) \land w \not = x \land w \not = y \land w \not = z))$ or, equivalently: $\exists x \exists y \exists z ((P(x) \land P(y) \land P(z) \land x \not = y \land x \not = z \land y \not = z \land \forall w (P(w) \rightarrow (w = x \lor w = y \lor w = z)))$ Etc. Interestingly, we see that in the second half of the statement, we no longer get $n+1 \choose 2$ (non-)identity claims plus $n+1$ $P$ claims, but merely $n$ (non-)identity claims and exactly one $P$ claim, because we end up saying: once you have your $n$ different P's, then any P is one of the $n$ objects, so you can't have any more than $n$. This is an idea that we can use to write the 'at least' and 'at most' claims more efficiently as well: 'At most n' (Method 2) As just observed, we can say that there are exactly $n$ P's by saying that if you already have $n$ different P's, you can't get any other $P$. But to say that there are 'at most' $n$ P's, we don't have to require those $n$ P's to be different. In fact, we don;t even have to require that they be P's: we can simply say that we can pick $n$ objects, that may or may not be different, and that may or may not be P's, such that there is no object that is different from all those, and that is a P. So: 'There is at most one P' : $\exists x \forall y (P(y) \rightarrow y = x)$ Again, notice that I am not saying that $x$ is a P ... this statement would also be true if there are not any P's at all (of course, I do need a non-empty domain, but that assumption is typically built into our logic systems). However, any P's that do exist will have to be the same ... hence there is at most 1 P. "There are at most 2 P's" : $\exists x \exists y \forall z (P(z) \rightarrow (z = x \lor z = y))$ Again, this is not saying that $x$ and $y$ are P's, but they could be. And I am also not claiming that $x$ and $y$ are different ... but they could be. Hence, you can have 0,1, or 2 different P's, but you definitely can't have 3 or more! 
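(As a sanity check, formulas like these can be verified mechanically on small finite domains. The brute-force Python sketch below, with names of my own choosing, confirms that Method 1 and Method 2 for "there are at most 2 P's" agree for every unary predicate on a 5-element domain.)

```python
from itertools import product

D = range(5)                                   # a small finite domain

def at_most_two_v1(P):
    # Method 1:  not (Ex Ey Ez)( P(x) & P(y) & P(z) & x!=y & x!=z & y!=z )
    return not any(P[x] and P[y] and P[z] and x != y and x != z and y != z
                   for x, y, z in product(D, repeat=3))

def at_most_two_v2(P):
    # Method 2:  (Ex Ey)(Az)( P(z) -> z = x or z = y )
    return any(all(not P[z] or z == x or z == y for z in D)
               for x, y in product(D, repeat=2))

for bits in product([False, True], repeat=len(D)):   # every unary predicate on D
    P = dict(enumerate(bits))
    assert at_most_two_v1(P) == at_most_two_v2(P)
```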
"There are at most 3 P's" : $\exists x \exists y \exists z \forall w (P(w) \rightarrow (w = x \lor w = y \lor w = z))$ Etc. So, in general, the claim is that there are $n$ objects such that any $P$ has to be one of those objects, and that means that you can't have more than $n$ P's. The nice thing about this expression is that it uses exactly $n$ identity statements, so it is quite a bit more efficient to write than the first method for expressing 'at most', especially for large n. In fact, you only have to use exactly one $P$ predicate, instead of $n$. 'At least n' (Method 2) We pointed out earlier that 'at most n' is the negation of 'at least n+1', but that also means that 'at least n+1' is the negation of 'at most n'. So: "There are at least 2 P's": $\neg \exists x \forall y (P(y) \rightarrow y = x)$ which is equivalent to $\forall x \exists y (P(y) \land y \not = x)$ The latter statement says that for whatever object $x$ I pick, I can always find a different object that is a P. So, if there would be only one P, then that wouldn't be true, since I can pick that P for the $x$, and now there is no different $y$ that is also a $P$. So, you need at least 2 P's to make this true. "There are at least 3 P's": $\neg \exists x \exists y \forall z (P(z) \rightarrow (z = x \lor z = y))$ which is equivalent to: $\forall x \forall y \exists z (P(z) \land z \not = x \land z \not= y)$ Etc. To say that there are at least $n$ P's, we can say that no matter how you pick $n-1$ objects (so even if those are all different and all P's), you can always find an object different from all those, and that is a P. And, once again, this method saves us a lot of writing: only $n-1$ identity claims plus 1 $P$ claim. 'Exactly n' (Method 3) In the second method for 'exactly n', we saw that a claim like "There are exactly 3 P's" translated into: $\exists x \exists y \exists z (P(x) \land P(y) \land P(z) \land x \not = y \land x \not = z \land y \not = z \land \forall w (P(w) \rightarrow (w = x \lor w = y \lor w = z))$ Thus, while in the second half we reduce the number of identity and $P$ statements in comparison to the first method, we still get $3 \choose 2$ non-identity claims, and $n$ $P$ claims in the first half. Now, one trick to get rid of all the $P$ claims in the first half is to do this: "There is exactly 1 P" : $\exists x \forall y (P(y) \leftrightarrow x = y)$ "There are exactly 2 P's" : $\exists x \exists y (x \not = y \land \forall z (P(z) \leftrightarrow (z = x \lor z = y))$ "There are exactly 3 P's" : $\exists x \exists y \exists z ( x \not = y \land x \not = z \land y \not = z \land \forall w (P(w) \leftrightarrow w = x \lor w = y \lor w = z))$ That is, by using a biconditional, any of the existentially quantified objects will have to be a $P$, so we don;t have to individually specify this. Of course, the drawback of this method is that we still have $n \choose 2$ non-identity claims. 'Exactly n' (Method 4) Especially for large $n$ then, the most efficient way to express 'exactly n' seems to be the conjunction of the efficient ways to expressing 'at least n' and 'at most n'. 
That is: "There are exactly 2 P's" : $\forall x \exists y (P(y) \land y \not = x) \land \exists x \exists y \forall z (P(z) \rightarrow (z = x \lor z = y))$ "There are exactly 3 P's" : $\forall x \forall y \exists z (P(z) \land z \not = x \land z \not= y) \land \exists x \exists y \exists z \forall w (P(w) \rightarrow (w = x \lor w = y \lor w = z))$ (ok, so these two are not any more efficient than earlier ones, but again, once $n$ gets large, this method will be most efficient, as it has $2n-1$ non-identity claims, and 2 $P$ claims.) Wow, ok, so that turned out to be a much larger post than I intended, sorry!<|endoftext|> TITLE: What defines arrows equality QUESTION [5 upvotes]: I have decided to learn basics of category theory, but have stumbled upon the very first exercise: given a category C, prove that identity arrow is unique among arrows with domain of X and codomain of X, where X is from the objects of the given category C. But I fail to see or find any definition of arrows equality or inequality. In other words, given 2 arrows: $f:X\to Y $ and $g:X\to Y$, how can I say if they are same or not? REPLY [5 votes]: Say you have two identity arrows $\operatorname{Id}_1, \operatorname{Id}_2:X\to X$. By the defining property of identity arrow, we have $$ \operatorname{Id}_1 = \operatorname{Id}_1\circ \operatorname{Id}_2 = \operatorname{Id}_2 $$ and thus the two are equal. Basically, the axioms and definitions of your theory will tell you when two things are equal. In this case, an identity arrow $\operatorname{Id}:X\to X$ is defined by the following: for any $f:X\to Y$ and any $g:Z\to X$, we have $f = f\circ{\operatorname{Id}}$ and $g = {\operatorname{Id}}\circ g$. One can deduce general results (usually called theorems) which will assist you in less simple cases so you don't have to appeal directly to the axioms all the time, but in the end, all equalities are proven from whatever equalities your axioms and definitions give you. Exactly how you should prove that $f$ and $g$ in your question are equal will depend greatly on how they are defined, and what you know about the category in which you are working. Some categories only have one arrow for each (ordered) pair of objects, and in that case, they will automatically be equal. Other categories are more complicated. In most common categories, like the categories of groups (abelian or general), topological spaces, and so on, equality of arrows is not commonly shown on a category theoretical level, although some specific cases can benefit greatly from a category theoretical formulation.<|endoftext|> TITLE: Find the number of triples $(x,y,z)$ of real numbers satisfying the equation $x^4+y^4+z^4+1=4xyz$ QUESTION [5 upvotes]: QUESTION Find the number of triples $$(x,y,z)$$ of real numbers satisfying the equation $$x^4+y^4+z^4+1=4xyz$$ I have tried to solve this problem but can't quite understand how to manipulate the given data to reach a clear result. Could someone please explain how I should approach and solve this problem.Thanks :) REPLY [3 votes]: If $ x, y, z\ge 0$, then by AM-GM inequality we have $$ x^4+y^4+z^4+1\ge 4\sqrt[4]{x^4y^4z^4}=4xyz. $$ So the equality holds when $ x=y=z=1$. On the other hand, if one or three of the variables are negative, the RHS would be negative, but the LHS is always positive. Thus there are exactly two variables $< 0$. Wlog suppose that $ y, z <0$. Set $ u=-y $ and $ v=-z $. The equation becomes $ x^4+u^4+v^4+1=4xuv $. 
But this equation has the same form than the original one and we get that its solution is $ x=u=v=1$. Then $ y=z=-1$. Hence the solutions for the equation are $(x, y, z)=(1,1,1), (1-1,-1) $ and permutations.<|endoftext|> TITLE: On the integral $\int_0^1\frac{dx}{\sqrt[4]x\ \sqrt{1-x}\ \sqrt[4]{1-x\,\gamma^2}}=\frac{1}{N}\,\frac{2\pi}{\sqrt{2\gamma}}$ QUESTION [27 upvotes]: V. Reshetnikov gave the interesting integral, $$\int_0^1\frac{\mathrm dx}{\sqrt[4]x\ \sqrt{1-x}\ \sqrt[4]{2-x\,\sqrt3}}=\frac{2\,\sqrt2}{3\,\sqrt[8]3}\pi\tag1$$ After some experimentation, it turns out that more generally, given some integer/rational $N$, we are to find an algebraic number $\gamma$ that solves, $$\int_0^1\frac{dx}{\sqrt[4]x\ \sqrt{1-x}\ \sqrt[4]{1-x\,\gamma^2}}=\frac{1}{N}\,\frac{2\pi}{\sqrt{2\gamma}}\tag2$$ (Compare to the similar integral in this post.) Equivalently, to find $\gamma$ such that, $$\begin{aligned} \frac{1}{N} &=I\left(\gamma^2;\ \tfrac14,\tfrac14\right)\\[1.8mm] &= \frac{B\left(\gamma^2;\ \tfrac14,\tfrac14\right)}{B\left(\tfrac14,\tfrac14\right)} \end{aligned} \tag3$$ with beta function $B(a,b)$, incomplete beta $B(z;a,b)$ and regularized beta $I(z;a,b)$, and $B\left(\tfrac14,\tfrac14\right)=\frac{\sqrt\pi}{\Gamma^2\left(\frac14\right)}$. Reshetnikov's example, after tweaking, was just the case $N=\frac{3}{2}$ and $\gamma=\frac{3^{1/4}}{\sqrt{2}}$. Solutions for prime $N=2,3,5,7$ are known. Let $v=\gamma$, then, $$-1 + 2 v^2 = 0\quad\quad N=2\\ - 1 + 2 v + 2 v^2 = 0\quad\quad N=3\\ - 1 + 8 v - 4 v^2 - 8 v^3 + 4 v^4 = 0\quad\quad N=5$$ etc, with $N=7$ using a $12$-deg equation. I found these using Mathematica's FindRoot command but, unlike the other post, I couldn't find a nice common form for $\gamma$. (The pattern of this family is also different. I had expected $N=7$ to also involve a sextic only.) Q: Is it true one can find algebraic number $\gamma$ for all prime $N$? What is it for $N=11$? Update, Aug 16, 2019 In this comment, Reshetnikov gave the explicit solution to, $$I\left(\gamma^2;\ \tfrac14,\tfrac14\right) = \tfrac17$$ as, $$\small\gamma = \frac16\left(5\cos x-\sqrt3\sin x-1-\sqrt3\sqrt{7+4\sqrt7-(11+2\sqrt7)\cos x+\sqrt3(5+2\sqrt7)\sin x}\right)$$ where $x = \tfrac13\arccos\big(\tfrac{13}{14}\big)$. P.S. I forgot I also found $\gamma$ in this 2016 post as, $$\gamma = \tfrac12\left(2\cos\tfrac{2\pi}7-\sqrt{2\cos\tfrac{4\pi}7+\sqrt2\csc\tfrac{9\pi}{28}}\right)$$ REPLY [9 votes]: For any rational $0 TITLE: Combinatorial puzzle reminiscent of knapsack problem. Is this classic? QUESTION [9 upvotes]: I have $n$ red integers $a_1,\ldots,a_n$ (not necessarily distinct), all with $1\leq a_i\leq n$. I also have $n$ blue integers $b_1,\ldots,b_n$ with same constraints. I want to show that there is a (red) subset of the $a_i$'s and a blue subset of the $b_j$'s that add up to the same value. I.e. that there exist two non-empty subsets $I,J\subseteq\{1,\ldots,n\}$ such that $\sum_{i\in I}a_i = \sum_{j\in J}b_j$. This is obvious if $a_1=a_2=\cdots=a_n$ and $b_1=b_2=\cdots=b_n$ and it seems that allowing the $a_i$'s and the $b_j$'s to be distinct only give me more opportunities for the existence of matching subsets but I did not manage to find a proof (or a counter-example?). At the moment the best I can prove is: if the $a_i$'s and $b_j$'s are all $\leq \frac{n}{2}$ then a solution exists and one can even find a solution where $I,J$ are convex subsets of $\{1,\ldots,n\}$ (i.e., they are subintervals). The solution is found by a greedy algorithm that runs in linear-time. 
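(A brute-force search finds no counterexample for small $n$; a quick exploratory Python sketch, exponential in $n$ but fine up to $n=10$ or so:)

```python
import random
from itertools import combinations

def has_matching_subsets(a, b):
    """True iff some nonempty subset of a and some nonempty subset of b
    have equal sums (exhaustive search, fine for small n)."""
    n = len(a)
    sums_a = {sum(c) for r in range(1, n + 1) for c in combinations(a, r)}
    sums_b = {sum(c) for r in range(1, n + 1) for c in combinations(b, r)}
    return bool(sums_a & sums_b)

random.seed(0)
for _ in range(2000):
    n = random.randint(1, 10)
    a = [random.randint(1, n) for _ in range(n)]
    b = [random.randint(1, n) for _ in range(n)]
    assert has_matching_subsets(a, b), (a, b)   # no counterexample expected
```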
This looks like a classic problem but I am outside my field ... REPLY [8 votes]: Yes, it's a "classic problem" in pigeonhole principle for olympiad problems. Proof by contradiction. Suppose not. Without loss of generality, $ \sum a_i > \sum b_i $. Define $f(k)$ to be the smallest value $j$ such that $$ \sum_{i=1}^j a_i > \sum_{i=1}^k b_i$$ Note that the difference between these partial sums $ \sum_{i=1}^{f(k)} a_i - \sum_{i=1}^k b_i $ is in the set $\{1, 2, \ldots , n-1\}$, since it is not 0, and it is capped by $a_j-1$. (Use the fact that $ \sum_{i=1}^{j-1} a_i < \sum_{i=1}^k b_i$) By pigeonhole principle, since there are $n$ differences but only $n-1$ possibilities, thus there exists 2 differences that are identical. IE $$ \sum_{i=1}^{f(k)} a_i - \sum_{i=1}^k b_i = \sum_{i=1}^{f(l)} a_i - \sum_{i=1}^l b_i $$ Now, take the differences of the partial sums, and they will have the same sum. IE (with $k>l$) $$ \sum_{i=f(l) + 1}^{f(k)} a_i = \sum_{i=l+1}^k b_i $$ This also proves the observation that it is sufficient to consider subintervals. This also shows why the condition of $1 \leq a_i \leq n$ is the best possible, with obvious counterexamples if we relax the condition further. Putnam 1993 has a similar problem, with a similar solution that you are welcome to find. Let $x_1, \ldots , x_{19}$ be positive integers less than or equal to 93. Let $y_1, \ldots , y_{93}$ be positive integers less than or equal to 19. Prove that there exists a (nonempty) sum of some $x_i$’s equal to a sum of some $y_i$’s.<|endoftext|> TITLE: Relationship Between the Albanese Variety and $\rm{Pic}^{0}(X)$ QUESTION [5 upvotes]: I've been learning about the Albanese variety $\rm{Alb}(X)$ of a projective variety $X$. As discussed very well in John Baez' blog (https://golem.ph.utexas.edu/category/2016/08/the_magic_of_algebraic_geometr.html) there is apparently an analogy between $\rm{Alb}(X)$ and the free abelian group on a pointed set. To my knowledge the correspondence goes something like this: Let $\rm{Set}_{*}$ be the category of pointed sets. One can define a functor from $\rm{Set}_{*}$ into the category of abelian groups $\rm{AbGrp}$, defined by taking a pointed set to the free abelian group it generates. This construction satisfies the universal property you would expect. In the world of varieties, the Albanese map is given as a functor $$\rm{Var}_{*} \to \rm{AbVar}$$ where you send a pointed variety $X$ to its Albanese variety $\rm{Alb}(X)$. The Albanese is classified by a universal property, so I guess this is what motivates people to think of the Albanese as the "free abelian variety of a projective variety." This makes sense to me, as it seems like most of the literature defines the Albanese by this universal property, and its more concrete geometrical properties are proved as theorems. So my questions are about its geometrical properties: I've heard the Albanese variety is dual to the degree-zero Picard group: $$\rm{Alb}(X) \simeq (\rm{Pic}^{0}(X))^{*}$$ I know that $\rm{Pic}^{0}(X)$ classifies degree-zero line bundles on $X$, but I was hoping someone could lay out the correspondence between its dual and the Albanese. Perhaps my confusion is arising from that dual being the dual in the category of abelian varieties. Secondly, what, if anything, does the analogy with the free abelian group say about this concrete realization of the Albanese as the dual of a variety which classifies line bundles? 
REPLY [9 votes]: Well, this is true when $X$ is normal and projective over $k$ characteristic $0$ (thanks to Ariyan for mentioning below that I forgot to say this--if you want to this work over arbitrary perfect field $k$ you need to replace $\text{Pic}^0_{X/k}$ with ites reduced subgroup scheme--in characteristic $0$ smoothness is guaranteed), say. First of all it's a somewhat important technical point to note that the Albanese is a constructed associated to pointed varieties. Namely, it's associated to a pair $(X,x)$ where $X$ is a variety and $x\in X(k)$. The claim then is that there is an initial map amongst maps from $(X,x)\to (A,e)$ where $A$ is an abelian variety over $k$ and $e\in A(k)$ its identity section. The idea is actually somewhat simple: suppose that $(X,x)\to (A,e)$ is a map of pointed varieties. This then induces a map $\text{Pic}_{A/k}\to \text{Pic}_{X/k}$ where, and here's where we use the section, we take our choice of $\text{Pic}_{X/k}$ to mean the group scheme representing $$T\mapsto \left\{(\mathscr{L},i):\mathscr{L}\in\text{Pic}(X\times_k T)\text{ and }i:x^\ast\mathscr{L}\xrightarrow{\approx}\mathcal{O}_T\right\}$$ and similarly for $A$. The point being that the choice of a section gives us a natural functorial description of the connected component of the Picard scheme (i.e. the functor representing the fppf sheafification of $T\mapsto \text{Pic}(X\times_k T)$). Now, it then follows that the connected component of the identity $\text{Pic}^0_{A/k}\subseteq\text{Pic}_{A/k}$ maps into the connected component of the idenity $\text{Pic}^0_{X/k}\subseteq\text{Pic}_{X/k}$ and thus we obtain a map $\text{Pic}^0_{A/k}\to \text{Pic}^0_{X/k}$. We then need two facts: 1) The group scheme $\text{Pic}^0_{X/k}$ is an abelian variety (I think this is due to Raynaud?). $\newcommand \Pic{\mathrm{Pic}}$ 2) $(\text{Pic}^0_{A/k})^\vee\cong A$--this is almost by definition. Thus, we obtain a map of abelian varieties $(\text{Pic}^0_{X/k})^\vee\to A$. I leave it for you to unravel that if $(X,x)\to(\text{Pic}^0_{X/k})^\vee$ is the obvious map (what is this?) then the above actually shows that the composition $(X,x)\to ((\text{Pic}^0_{X/k})^\vee,e)\to (A,e)$ is $(X,x)\to (A,e)$. Thus, $\text{Alb}(X,x)=(\text{Pic}^0_{X/k},e)$. So, you see that this was almost definitionally trivial, that the dual of the Picard scheme is the Albanese, once you know 1) and 2) above. In summary: maps of group varieties give you maps (in the opposite direction) on Picard schemes, and so one gets a map of group varieteis (in the right direction!) on the dual of Picard schemes. Once you know now the Picard schemes look like for general varieties, and once you realize we defined the dual abelian variety in terms of its Picard scheme, everything is perfect. I think for your question about 'free abelian variety' on $(X,x)$ you mean the 'group law theorem' coming from the Abel-Jacobi map. I have to run now--let me explain that later if no one else has. EDIT: Now that I have more time, let me tie up some loose ends. $\newcommand \Alb{\mathrm{Alb}}$ Let me first address this discussion of $\text{Alb}(X,x)$ as being the 'free abelian variety on the points of $X$'. Specifically, let me state the result that says something to this affect, and then interpret it more philosophically: Theorem: Let $X$ be a variety over $k$ (a perfect field) and $f:(X,x)\to (\Alb(X,x),e)$ an Albanese variety (i.e. an initial abelian variety). 
Then, for $n\gg 0$ the map $X^n\to \Alb(X,x)$ defined by $\displaystyle (x_1,\ldots,x_n)\mapsto \sum_{i=1}^n f(x_i)$ is a surjection. What this tells you is that, roughly, you can think of $\text{Alb}(X,x)$ as being something like $X^n/\sim$ (for a suitable equivalence relation $\sim$) with addition being formal addition in $X^n$. In fact, you can see that since $\Alb(X,x)$ is abelian, the map $X^n\to \Alb(X,x)$ factors through $X^{(n)}:=X^n/S_n$. To give an indication why this is not so surprising from the perspective of Picard varieties, let's restrict our attention for a second to the case when $X=C$ a smooth projective curve over $k$ (algebraically closed, and characteristic $0$ for convenience). Why should something like $C^n$ have anything to do with $\text{Pic}^0_{C/k}$--why should 'free abelian groups on points' have anything to do with Picard groups? The answer is actually simpler than you might think! Namely, let's imagine what an element of the free abelian group on $C$ is. It's really just a formal linear combination $n_1 p_1+\cdots+n_mp_m$ of points on $C$. But, such objects are also known by another name: divisors. But, divisors are classically known to have a lot to do with line bundles: they're the same thing! Namely, recall that for any divisor $D$ one can associate a line bundle $\mathcal{O}(D)$ and this map from the class group $\text{Cl}(C)$ (divisors modulo principal divisors) to $\text{Pic}(C)$ is an isomorphism. So, for example what is the map $C^n\to\Alb(X,x)$ if we think of $\Alb(C,x)$ as being $\Pic^0_{C/k}$ (NB: for pointed curves the Picard variety is principally polarized--there is a canonical identification between $\Pic^0_{C/k}$ and its dual--one usually calls this common object the Jacobian variety of $C$)? Well, the naive guess is simple. Namely, to an $n$-tuple $(p_1,\ldots,p_n)$ we get a divisor $p_1+\cdots+p_n$ and thus we get a line bundle $\mathcal{O}(p_1+\cdots+p_n)$. But, this can't be the map $C^n\to\text{Pic}^0_{C/k}$ since $\mathcal{O}(p_1+\cdots+p_n)$ isn't degree $0$--it's degree $n$. That said, this is precisely where our choice of base point becomes pivotal. Namely, we want a canonical way to take $p_1+\cdots+p_n$ and turn it into a degree $0$ divisor, and given our fixed base point $x$ the way is clear: $p_1+\cdots+p_n-nx$. Thus, our map $C^n\to\text{Pic}^0_{C/k}$ is $(p_1,\ldots,p_n)\mapsto \mathcal{O}(p_1+\cdots+p_n-nx)$. So the above Theorem then says something to the effect that every degree $0$ line bundle looks like $\mathcal{O}(p_1+\cdots+p_n-nx)$ for points $(p_1,\ldots,p_n)\in C^n$ if $n\gg 0$. So, to summarize: what is the content of the relationship between the 'free abelian variety on the points of $X$' perspective and the 'Picard variety perspective'? It's that the Picard group is the same thing as the class group!<|endoftext|> TITLE: Evaluate the integral $ \int_{0}^{\infty}\frac{\sin(x^{2})x^{2}\ln(x)}{e^{x^2}-1}dx $ QUESTION [7 upvotes]: Does the Following integral admit a closed form? $$ \int_{0}^{\infty}\dfrac{\sin(x^{2})x^{2}\ln(x)}{e^{x^2}-1}dx $$ What I tried was: Define another integral $ I(a) $ as: $$ I(a)= \int_{0}^{\infty}\dfrac{\sin(x^{2})x^{a}}{e^{x^{2}}-1}dx $$ Write it as: $$ I(a) = \text{Im} \left[ \sum_{r=1}^{\infty} \int_{0}^{\infty} x^{a}e^{-x^{2}(r-\iota)}dx \right] $$ Clearly the required integral is $ I'(2) $. 
The above simplifies to: $$ \text{Im}\left[\frac{\Gamma(\frac{a+1}{2})}{2}\sum_{r=1}^{\infty}\frac{1}{(r-\iota)^{\frac{a+1}{2}}} \right] $$ which further simplifies to : $$ I(a) = \frac{\Gamma(\frac{a+1}{2})}{2}\sum_{r=1}^{\infty} \frac{\sin(\frac{a+1}{2}\tan^{-1}(\frac{1}{r}))}{(r^{2}+1)^{\frac{a+1}{4}}} $$ Let alone $I'(a) $ I could not evaluate even $I(a)$ in general form The only one which i could solve was $ a=1 $ SO that $$ I(1) = \int_{0}^{\infty}\dfrac{\sin(x^{2})x}{e^{x^{2}}-1}dx = \frac{1}{2}\left[\frac{e^{2\pi}(\pi -1)+(\pi +1)}{e^{2\pi}-1}\right] $$ Any other approach or hints/suggestions are more than welcome! REPLY [4 votes]: $$\begin{eqnarray*}\int_{0}^{+\infty}\frac{\sin(x^2)x^2\log x}{e^{x^2}-1}\,dx &=& \frac{1}{4}\int_{0}^{+\infty}\frac{\sin(z)\sqrt{z}\log(z)}{e^z-1}\,dz\\ &=&\frac{1}{4}\left.\frac{d}{d\alpha}\int_{0}^{+\infty}\frac{\sin(z)z^{\alpha+1/2}}{e^z-1}\,dz\,\right|_{\alpha=0^+}\\&=&\frac{1}{4}\left.\frac{d}{d\alpha}\sum_{n\geq 1}\int_{0}^{+\infty}\sin(z)z^{\alpha+1/2}e^{-nz}\,dz\,\right|_{\alpha=0^+}\\&=&\frac{1}{4}\text{Im}\left.\frac{d}{d\alpha}\sum_{n\geq 1}\int_{0}^{+\infty}z^{\alpha+1/2}e^{(i-n)z}\,dz\,\right|_{\alpha=0^+}\\&=&\frac{1}{4}\text{Im}\left.\frac{d}{d\alpha}\sum_{n\geq 1}\frac{\Gamma\left(\alpha+3/2\right)}{(n-i)^{\alpha+3/2}}\right|_{\alpha=0^+}\\&=&\frac{1}{4}\text{Im}\left[\sum_{n\geq 1}\frac{\Gamma'(3/2)}{(n-i)^{3/2}}+\sum_{n\geq 1}\frac{\Gamma(3/2)\log(n-i)}{(n-i)^{3/2}}\right]\end{eqnarray*}$$ depends on the imaginary part of a Hurwitz zeta function and its derivative at $s=\frac{3}{2}$. Here we have $\Gamma(3/2)=\tfrac{\sqrt{\pi}}{2}$ and $\Gamma'(3/2)=\Gamma(3/2)\psi(3/2) = \tfrac{\sqrt{\pi}}{2}(2-\log 4-\gamma)$.<|endoftext|> TITLE: Integral calculus sine functions: $\frac{1}{2\pi }\int_{-\pi }^{\pi }\frac{\sin\left((n+1/2)\,x\right)}{\sin\left(x/2\right)}\,dx = 1$ QUESTION [6 upvotes]: For an integer, $n$, how do I show the following? $$ \frac{1}{2\pi }\int_{-\pi }^{\pi }\frac{\sin\left((n+1/2)\,x\right)}{\sin\left(x/2\right)}\,dx = 1. $$ Can I use induction? REPLY [11 votes]: To use induction, first establish a base case. If $n=0$, then we see trivially that $$\frac{1}{2\pi}\int_{-\pi}^\pi \frac{\sin\left((n+1/2)x\right)}{\sin(x/2)}\,dx=\frac{1}{2\pi}\int_{-\pi}^\pi \frac{\sin\left(x/2\right)}{\sin(x/2)}\,dx=1$$ Next, we assume that for some integer $N\ge 1$ we have $$\frac{1}{2\pi}\int_{-\pi}^\pi \frac{\sin\left((N+1/2)x\right)}{\sin(x/2)}\,dx=1$$ We now examine the integral for $n=N+1$. Proceeding, we have $$\begin{align} \frac{1}{2\pi}\int_{-\pi}^\pi \frac{\sin\left((N+1+1/2)x\right)}{\sin(x/2)}\,dx&=\frac{1}{2\pi}\int_{-\pi}^\pi \frac{\sin\left((N+1/2)x\right)+2\cos((N+1)x)\sin(x/2)}{\sin(x/2)}\,dx\\\\ &=\color{blue}{\frac{1}{2\pi}\int_{-\pi}^\pi \frac{\sin\left((N+1/2)x\right)}{\sin(x/2)}\,dx}+\color{red}{\frac1\pi \int_{-\pi}^\pi \cos((N+1)x)\,dx}\\\\ &=\color{blue}{1}+\color{red}{0}\\\\ &=1 \end{align}$$ as was to be shown!<|endoftext|> TITLE: Can weak convergence be checked on an orthonormal basis? QUESTION [6 upvotes]: Let $H$ be a Hilbert space with orthnormal basis $(e_i)_{i\in I}$. Let $(x_n)_{n\in\mathbb N}$ be a sequence in $H$ with $\langle x_n, e_i\rangle \to \langle x, e_i\rangle$ for some $x\in H$ and all $i\in I$. Can we conclude from this that that $(x_n)_{n}$ converges weakly? My intuition says no but I didn't find a counterexample. REPLY [4 votes]: Yes, if you know that your sequence (or net) is bounded. See here and here for more general results. 
For unbounded nets you may still have pointwise convergence to 0, without being weakly convergent, so in general the answer is no.<|endoftext|> TITLE: Zero Lyapunov exponent for chaotic systems QUESTION [6 upvotes]: In addition to a positive Lyapunov exponent (for sensitivity to ICs), why do continuous chaotic dynamical systems also require a zero Lyapunov exponent? REPLY [11 votes]: Every continuous-time dynamical system with a bounded, non fixed-point dynamics has at least one zero Lyapunov exponent. This does not only apply to chaotic dynamics but also to periodic or quasiperiodic ones. To see why this is the case, let $x$ and $y$ be the two trajectory segments, whose separation ($x-y$) you consider for defining or calculating the Lyapunov exponents. At every point of the attractor (or invariant manifold), we can represent this separation in a basis of Lyapunov vectors, each of which corresponds to one Lyapunov exponent. In this representation, each component of the separation grows or shrinks independently according to the respective Lyapunov exponent (on average). For example, in chaos with one positive Lyapunov exponent, the separation will quickly point in the corresponding direction because this Lyapunov exponent dominates the other ones. Now, suppose that the trajectory segment $y$ is such that $y(t) = x(t+ε)$ for some time $t$, i.e., it is a temporally slightly advanced version of $x$. The separation of these segments may grow and shrink with time, depending on the speed of the phase-space flow, but on average it should stay constant due to the following: Since the dynamics is bounded, the trajectory $x$ will need to get close to $x(t)$ again, i.e., there needs to be some $τ$ such that $x(t+τ) \approx x(t)$. Due to the phase-space flow being continuous, we also have $y(t+τ) = x(t+τ+ε) \approx x(t+ε) = y(t)$ and thus: $$ |x(t+τ) - y(t+τ)| \approx |x(t)-y(t)|$$ Therefore, separations in the direction of time neither shrink nor grow (on average) and in this direction we get a zero Lyapunov exponent: If we consider only such separations to compute a Lyapunov exponent, we obtain: $$ \begin{align} λ &= \lim_{τ→∞} \; \lim_{|x(t)-y(t)|→0}\; \frac{1}{τ} \ln\left(\frac{|x(t+τ)-y(t+τ)|}{|x(t)-y(t)|}\right)\\ &= \lim_{τ→∞} \; \lim_{|x(t)-y(t)|→0}\; \frac{1}{τ} \ln\left(\frac{|x(t)-y(t)|}{|x(t)-y(t)|}\right)\\ &=0 \end{align} $$ (We now have $=$ instead of $\approx$ due to the limits averaging everything and allowing us to consider arbitrarily close $x(t)$ and $x(t+τ)$.) Finally, it’s intuitive that separations along the time direction do not mingle with separations in other directions and thus correspond to one distinct Lyapunov vector at every point on the attractor. Therefore, all such dynamical systems must have at least one zero Lyapunov exponent. For a more rigorous and detailed discussion, see H. Haken – At least one Lyapunov exponent vanishes if the trajectory of an attractor does not contain a fixed point, Phys. Lett. A (1983).<|endoftext|> TITLE: Dimension of space of linear maps between vector spaces QUESTION [5 upvotes]: Let $F$ be a field, $V$ and $W$ are vector spaces over the field $F$. The dimension of $V$ is $n$ and the dimension of $W$ is $m$, where $m, n$ are natural numbers. Let $\mathcal{L}$ be a vector space of all linear maps from V to W. Determine the dimension of $\mathcal{L}$ depending on values $m,n$. I know there should be a solution using isomorphism between $\mathcal{L}$ an a vector space of $m \times n$ matrices, but I can't prove it. 
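(A concrete sanity check of the expected answer $\dim\mathcal{L}=mn$: over a finite field it predicts $|\mathcal{L}|=|F|^{mn}$, which can be verified by brute force for small cases. A purely illustrative Python sketch over $F=\mathbb{F}_2$ with $n=2$, $m=3$:)

```python
from itertools import product

# If dim L(V, W) = m*n, then over F_2 there are exactly 2^(m*n) linear maps
# F_2^n -> F_2^m.  Count them by brute force for n = 2, m = 3.
n, m = 2, 3
V = list(product((0, 1), repeat=n))
W = list(product((0, 1), repeat=m))
add = lambda u, v: tuple((a + b) % 2 for a, b in zip(u, v))

def is_linear(f):
    # over F_2 additivity is the only thing that has to be checked
    return all(f[add(u, v)] == add(f[u], f[v]) for u in V for v in V)

count = sum(is_linear(dict(zip(V, images))) for images in product(W, repeat=len(V)))
print(count, 2 ** (m * n))     # both should be 64
```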
REPLY [8 votes]: Here is the outline for the proof: Let $B=\{ v_1,...,v_n\}$ be a basis for $V$ and $C=\{w_1,..., w_m\}$ be a basis for $W$. We will now try to find a basis for $$\mathscr{L}(V,W) = \{T:V\rightarrow W\ |\ T \ \text{is linear} \}. $$ For each element of $\{1,..., m\} \times \{1,..., n\}$, consider the linear transformation $E^{p,q}$ whose image in the basis $B$ is given by $$E^{p,q}(v_i) = \begin{cases} 0, & \text{if $i\neq q$} \\ w_p, & \text{if $i = q$} \end{cases}$$ All is left to do now is to prove that these linear transformations are linearly independent and that they span $\mathscr{L} (V,W)$, that is that they form a basis for that space. Since the set of these linear transformations has $nm$ elements, it follows that dim$\ \mathscr{L} (V,W) = nm$. The details are for you to fill in, but the main idea is there. You should be able to prove that there are indeed linear transformations satisfying the given image on the basis and that the set is linearly independent and spans $\mathscr{L} (V,W)$ (if you don't, please let me know). This proof was taken from Hoffman and Kunze's Linear Algebra book. The idea of the proof is pretty much the same as establishing an isomorphism between $\mathscr{L} (V,W)$ and $M_{m \times n}(F)$ (think about the matricial representation of a linear transformation from $V$ to $W$ with respect to the basis $B$ and $C$), which is "the usual" proof.<|endoftext|> TITLE: Eigenvalues and elementary row operations QUESTION [6 upvotes]: We know that elementary row operations do not change the determinant of a matrix but may change the associated eigenvalues. Consider an example, say two $5 \times 5$ matrix are given: $$A = \begin{pmatrix} 0 & 1 & 0 & 0 & 0\\ a & b & 0 & 0 & 0\\ 0 & 0 & p & q & r\\ 0 & 0 & s & t & u\\ 0 & 0 & v & w & x\\ \end{pmatrix}, \hspace{2cm} B = \begin{pmatrix} 0 & 1 & 0 & 0 & 0\\ a & b & 0 & 0 & 0\\ 0 & 0 & p & q & r\\ 0 & 0 & s & t & u\\ ka & kb & v & w & x\\ \end{pmatrix} $$ Now $B$ can easily be reduced to $A$ by using the following operation on $B$ $$R_5 - kR_2$$ Now these two have the same eigenvalues. It is cumbersome to try to symbolically calculate the eigenvalues to show they are indeed same (I have tried tons of matrices with random numbers in Mathematica). $A$ is a block diagonal matrix and $B$ is reduceable to one. If you talked about systems, $A$ shows two decoupled spaces (of dimensions $2$ and $3$) within the $5-D$ vector space of $A$. Can anyone prove that such a pair of matrices always have same eigenvalues? Is there any property that says so? Does $A$ being a block diagonal matrix have to do anything with the eigenvalues being the same? Any insight or discussion is welcome! Please correct me if I used any term loosely or wrongly. REPLY [4 votes]: You may already know that $$\det\pmatrix{A&0\\B&C}=\det\pmatrix{A&0\\0&C}=\det A\cdot\det C$$ which can be shown using the fact that the determinant doesn't change by elementary row operations. Also note that the eigenvalues of $M$ are the roots of $\det(\lambda I-M)=0$. Now let $M=\pmatrix{A&0\\B&C}$ then $$\begin{align}\det(\lambda I-M)&=\det\pmatrix{\lambda I-A&0\\B&\lambda I-C}\\&=\det\pmatrix{A_1&0\\B&C_1}\\&=\det\pmatrix{A_1&0\\0&C_1}=\det\pmatrix{\lambda I-A&0\\0&\lambda I-C} \end{align}$$<|endoftext|> TITLE: Show that $f_n(x) = \frac{\sin(nx)}{\sqrt{n}}$ converges uniformly to $0$ QUESTION [5 upvotes]: I need to show that $f_n(x) = \frac{\sin(nx)}{\sqrt{n}}$ converges uniformly to $0$. 
My book has a theorem that says that if $|f_n(x)|\leq a_n$ for all $x$ and $a_n\to 0$, then $f_n$ converges uniformly. Since $|\sin(nx)|\leq 1$ we have $|f_n(x)|\leq\frac{1}{\sqrt{n}}$ for every $x$, so given $\epsilon>0$ we can choose $n_0\geq\frac{1}{\epsilon^2}$: then $n>n_0\implies n>\frac{1}{\epsilon^2}\implies \sqrt{n}>\frac{1}{\epsilon}\implies \frac{1}{\sqrt{n}}<\epsilon$ therefore it converges uniformly. Now, I need to also show that $f'_n$ diverges at every point of the interval $[0,1]$. The derivative is: $$\frac{n\cos nx}{\sqrt{n}}$$ should I just say that this limit goes to infinity because $n$ grows faster than $\sqrt{n}$? REPLY [3 votes]: If $x=0$ we have $\sqrt n\cos (nx) = \sqrt n \to \infty.$ It's trickier if $x\in (0,1].$ Let $A = \{e^{it}: t \in [0,1]\},$ $ B = \{e^{it}: t \in [\pi,\pi +1]\}.$ Claim*: For each $x\in (0,1],$ $e^{inx}\in A$ for infinitely many $n,$ and $e^{inx}\in B$ for infinitely many $n.$ Accepting this (it's a nice exercise), we see that for each $x\in (0,1],$ $\sqrt n\cos (nx) > \cos 1\cdot \sqrt n$ for infinitely many $n,$ $\sqrt n\cos (nx) < -\cos 1\cdot \sqrt n$ for infinitely many $n.$ So for all $x\in (0,1],$ the sequence $\sqrt n\cos (nx)$ oscillates wildly, taking on arbitrarily large positive and negative values. $\text{*}$Proof: If $A$ is an arc on the unit circle of arc length $1,$ and $x\in (0,1],$ then $ e^{inx} \in A$ for infinitely many $n.$ Proof: The sequence $e^{i nx}$ marches around the circle infinitely many times in steps of fixed arc-length $x\le 1.$ Now remember what your momma taught you: You can't step over a puddle that's larger than your stride. Apply this to our sequence: Because the steps are no more than the length of $A,$ we have to land in $A$ at least once every orbit. That's the idea, and you can make it perfectly rigorous.<|endoftext|> TITLE: Coupling showing that $\operatorname{Bin}(n,\frac{1}{n+1})$ stochastically dominates $\operatorname{Bin}(n-1,\frac{1}{n})$ QUESTION [8 upvotes]: The classical inequality $$ \left(1-\frac{1}{n}\right)^{n-1} > \frac{1}{e} $$ has a probabilistic generalization: the binomial distribution $\operatorname{Bin}(n-1,\frac{1}{n})$ is stochastically dominated by the Poisson distribution $\operatorname{Po}(1)$. A simple coupling proof of this can be found in Klenke and Mattner, Stochastic ordering of classical discrete distributions. They also prove the stronger result that $\operatorname{Bin}(n,\frac{1}{n+1})$ stochastically dominates $\operatorname{Bin}(n-1,\frac{1}{n})$, which generalizes the fact that the sequence $(1-\frac1n)^{n-1}$ is decreasing. Is there a natural coupling that shows that $\operatorname{Bin}(n,\frac{1}{n+1})$ stochastically dominates $\operatorname{Bin}(n-1,\frac{1}{n})$? What I have in mind is an elementary argument that involves a balls-into-bins scenario, but any other simple coupling would be great. (Just to be clear: I already know that there exists some coupling, this follows from the stochastic domination.) REPLY [2 votes]: I am not sure whether this is a good answer, but I would say monotone coupling would be one possibility. Illustration for $n=2$ For example let $n=2$, $\ X\sim \mu=\text{Bin}\bigl(2, \frac13 \bigr)$, $ \ Y\sim \nu=\text{Bin}\bigl(1,\frac12 \bigr)$. Since $X$ can only take three values a natural candidate for a suitable probability space for $X$ would be $\Omega = \{\omega_0, \omega_1, \omega_2\}$ where $X(\omega_i)=i$. But it turns out that we have to split $\omega_1$ into two parts to achieve monotone coupling. So we pick $\Omega = \{\omega_0, \omega_{1a}, \omega_{1b}, \omega_2\}$ and define $X$ as $X(\omega_{i})=i$ for $i=0,2$ and $X(\omega_{1a}) = X(\omega_{1b})=1$. This will become clear later.
Then we define a probability measure $\mathbb{P}$ such that $X\sim \mu$ under $\mathbb{P}$, namely $$ \mathbb{P}(\omega_0) = \frac49, \quad \mathbb{P}(\omega_{1a}) = \frac1 {18}, \quad \mathbb{P}(\omega_{1b}) = \frac 7 {18}, \quad \mathbb{P}(\omega_{2}) = \frac19. $$ We want to construct now a random variable $Y$ on $\Omega$ such that $Y \sim \nu$ under $\mathbb{P}$ and we do this by monotone coupling. The mass of $X$ (or resp $\mu$) is monotonly transferred to $Y$ (or respectively to $\nu$). In the end $Y$ should only take two values (0 and 1) with probability $\frac12$ each. We start from the left and put all of $\mu$'s mass from zero (i.e. $\omega_0$) and put it into $\{Y=0 \}$, which means that $Y(\omega_0)=0$. Then there is no more mass left at $\{X=0\}$ but there is still some room at $\{Y=0 \}$ (since $\mathbb{P}(\omega_0)= \frac 49$ but $\mathbb{P}(Y=0)$ should be $\frac 12$). So we continue monotonically: we take some mass form $\{X=1\}$ and put it into $\{Y=0\}$ until $\{Y=0\}$ is full (i.e has mass $\frac 12$). Thats why we had to split up $\omega_1$ into $\omega_{1a}$ and $\omega_{1b}$: we can now assign $\omega_{1a}$ with $\{Y=0 \}$. $\omega_{1a}$ has exactly the right mass to guarantee that $\mathbb{P}(Y=0)= \frac 12$. The remaining mass at $\{X=1\}$ (i.e $\omega_{1b}$) is transferred to $\{Y=1 \}$, i.e $Y(\omega_{1b})=1$. Then we continue monotonically and transfer the mass from $\{X=2\}$ (which is all the remaining mass) to $\{Y=1 \}$, which means that $Y(\omega_2)=1$. To sum up $$ (X,Y)(\omega_0)=(0,0), \quad (X,Y)(\omega_{1a})=(1,0), \quad (X,Y)(\omega_{1b})=(1,1), \quad (X,Y)(\omega_2)=(2,1), $$ and $X \geq Y$. General Case Of course this was only for $n=2$, but the general case is similar. You use the same technique of monotone coupling. Therefore you take all the mass from $\{X=0\}$ and assign it to $\{Y=0\}$. Then you split the mass at $\{X=1\}$, assign one part of it to $\{Y=0\}$ and the other part to $\{Y=1\}$, and so on.<|endoftext|> TITLE: On the Whitney-Graustein theorem and the $h$-principle. QUESTION [6 upvotes]: If you are in a hurry and this question still has caught your interest, please jump directly to the last proposition, where my question lies. Throughout this question I am going to identity $\mathbb{S}^1$ and $[0,1]/\partial[0,1]$. Therefore, when I will talk about mappings from $\mathbb{S}^1$ to $\mathbb{R}^2$, I will consider maps $f\colon[0,1]\rightarrow\mathbb{R}^2$ such that $f(0)=f(1)$. Let $I(\mathbb{S}^1,\mathbb{R}^2)$ be the set of immersions of $\mathbb{S}^1$ into $\mathbb{R}^2$, that is the set of $C^1$-mappings from $\mathbb{S}^1$ to $\mathbb{R}^2$ such that their derivatives do not vanish. My goal is to prove the well-known: Theorem. (Whitney-Graustein) The turning number gives a bijection from $\pi_0(I(\mathbb{S}^1,\mathbb{R}^2))$ to $\mathbb{Z}$. For sake of clarity, by now let $X:=C^0(\mathbb{S}^1,\mathbb{R}^2\setminus\{(0,0)\})$. Inspired by the Gromov's $h$-principle, I introduced the following map: $$J\colon\left\{\begin{array}{ccc}I(\mathbb{S}^1,\mathbb{R}^2)&\rightarrow&X\\f&\mapsto&f'\end{array}\right..$$ I claim that one has the following: Theorem. The map $J$ induces a well-defined bijection $$\pi_0(J)\colon\left\{\begin{array}{ccc}\pi_0(I(\mathbb{S}^1,\mathbb{R}^2))\rightarrow\pi_0(X)\\ [f]_0\mapsto [f']_0\end{array}\right..$$ I have already prove the well-definedness and the following: Proposition. 
Let $f\in X$, there exists $g\in I(\mathbb{S}^1,\mathbb{R}^2)$ and $H\colon\mathbb{S}^1\times[0,1]\overset{C^0}{\rightarrow}\mathbb{R}^2\setminus\{(0,0)\}$ such that $H(\cdot,0)=f$ and $H(\cdot,1)=g'$. Proof. On request. $\Box$ Which has direct corollary $\pi_0(J)$ being surjective. Hence, I am left to establish the following: Proposition. Let $g_1,g_2\in I(\mathbb{S}^1,\mathbb{R}^2)$ such that there exists $H\colon\mathbb{S}^1\times [0,1]\overset{C^0}\rightarrow\mathbb{R}^2\setminus\{(0,0)\}$ such that $H(\cdot,0)={g_1}'$ and $H(\cdot,1)={g_2}'$. Then, there exists $F\colon\mathbb{S}^1\times [0,1]\overset{C^1}{\rightarrow}\mathbb{R}^2$ such that $F(\cdot,0)=g_1$, $F(\cdot,1)=g_2$ and for all $t\in [0,1],F(\cdot,t)\in I(\mathbb{S}^1,\mathbb{R}^2)$. Proof. My idea is to integrate the homotopy $H$, that is introducing: $$F(x,t):=\int_0^xH(u,t)\,\mathrm{d}u-x\int_0^1H(u,t)\,\mathrm{d}u.$$ The removed corrective term is here to ensure that for all $t\in [0,1]$, $F(0,t)=F(1,t)$, that is for the well-definedness of $F(\cdot,t)$ on $\mathbb{S}^1$. Notice that one has $F(\cdot,0)=g_1-g_1(0)$ and $F(\cdot,1)=g_2-g_2(0)$. Therefore, if for all $t\in [0,1]$, $F(\cdot,t)\in I(\mathbb{S}^1,\mathbb{R}^2)$, I am almost done. However, it is not clear and wrong in all generality, that the following quantity is nonzero: $$\frac{\mathrm{d}}{\mathrm{d}x}F(x,t)=H(x,t)-\int_{0}^1H(u,t)\mathrm{d}u.$$ That is where I am stuck. $\Box$ Question. If $H(x,\cdot)$ is non-constant and belongs to $\mathbb{S}^1$, I am done. Indeed, $\displaystyle\int_0^1H(u,t)\mathrm{d}u$ will lie in the interior of the unit disk. However, it is not clear to me that I can boil down my problem to this case and doing a naive radial on $H(x,\cdot)$ homotopy does not seem to help in anything. If the latter proposition is true, I am done with the injectivity of $\pi_0(J)$ and with the Whitney-Graustein theorem. Indeed, I will have the following commutative diagramm, where all arrows are bijections: $$\require{AMScd}\begin{CD} \pi_0(I(\mathbb{S}^1,\mathbb{R})) @>\textrm{turning number}>> \mathbb{Z}\\ @VV\pi_0(J)V @AA\deg A\\ \pi_0(X) @>\textrm{str. def. retract}>> \pi_1(\mathbb{S}^1)\end{CD}$$ Any enlightenment will be greatly appreciated. If my approach proving the last proposition is plain wrong, could you provide me some other thoughts to manage my way toward the proof? REPLY [4 votes]: You're pretty close to having all of the details. First of all, you can assume (by homotopies of the initial $g_1$ and $g_2$) that $g_1$ and $g_2$ both have total length $1$ and are parametrized by arc length. In this case, the ranges of $g_1^\prime$, $g_2^\prime$, and $H$ are all $S^1$, which resolves one of the issues in your final question. Now, for any fixed $t \in (0,1)$, $H(x,t) : S^1 \to S^1$ is homotopic to $g_1^\prime$ and $g_2^\prime$, and therefore has the same degree as those two maps. In particular, if the degree of those two is nonzero, then $H(x,t)$ cannot be constant for any fixed $t$, and your argument works. It remains to show that if the degrees are zero, you can choose $H$ so that $H(x,t)$ is nonconstant for every fixed $t$. If the maps $H(\cdot,0),H(\cdot,1):S^1 \to S^1$ have degree $0$, we can lift to maps $h_0,h_1 : S^1 \to \mathbb{R}$ (with respect to the usual covering $\mathbb{R} \to S^1 : t \mapsto e^{it}$). Note that this would not be possible for other degrees, since the image would not be a loop. 
We can also homotope the initial $g_1$ and $g_2$ so that $h_0(x) = h_1(x)$ for all $x\in[0,\varepsilon]$ for some small $\varepsilon$, and also so that $h_0$ and $h_1$ are nonconstant on this interval. Now define $h_t$ as the straight-line homotopy from $h_0$ to $h_1$: $$h_t(x) = (1-t)h_0(x) + th_1(x),$$ and finally, $$H(x,t) = ( \cos(h_t(x)), \sin(h_t(x)) ),$$ which is a homotopy with all the properties you need. $\square$ If you haven't done so, check out the book "h-Principles and Flexibility in Geometry" by Geiges. It's an excellent, readable intro to the h-principle which includes this proof!<|endoftext|> TITLE: Proving that if $(ab)^{p}=a^{p}\,b^{p}$, then the p-Sylow subgroup is normal QUESTION [9 upvotes]: So, while studying Abstract Algebra, I ran into this problem (I. N. Herstein, second edition, chapter 2.12) and have been stuck since: Given a group $G$ of finite order and a prime $p$ that divides $o(G)$, suppose that $\forall\,a,b\in G$, $(ab)^{p}=a^{p}\,b^{p}$. Prove that the $p$-Sylow subgroup is normal in $G$. What I've tried: I defined a mapping $\varphi:G\to H=\lbrace x^{p}:\,x\in G\rbrace;\,\,\varphi(x)=x^{p}$, which would be a surjective homomorphism. Then, I proved that $\ker(\varphi)\subseteq P$, where $P$ is a $p$-Sylow subgroup. If I could prove either that $P\subseteq\ker(\varphi)$ or that $o(\ker(\varphi))=o(P)$, that would end it, because that would imply that $P=\ker(\varphi)$, and I know that $\ker(\varphi)\unlhd G$. One idea to follow up on those would be to use the first isomorphism theorem ($G/\ker(\varphi)\simeq Im(\varphi)$) to get $o(G)=o(\ker(\varphi))o(Im(\varphi))$ and from there work something out about the orders, but I cannot think of how to do that. I've also tried proving that $G$ only has one $p$-Sylow subgroup, using Sylow's third theorem, but I believe it's a dead end. Any ideas? REPLY [3 votes]: Let's try mathematical induction with respect to the group order $n = |G|$. The claim is obviously true for $n=1$. Let it be true for $n=N$. To show it is true for $n=N+1$, let $|G| = N+1$ with $p^\alpha \big | |G|$, where $\alpha$ is the largest power of $p$ dividing $N+1$, and $(ab)^p = a^p b^p$. Let $K = \ker(\varphi)$. First notice that $p \big | |K|$. Consider the quotient group $G/K$, assuming $K$ is not the $p$-Sylow subgroup (otherwise we are done). By induction, $G/K$ (which has order $< N+1$ and satisfies $(ab)^p = a^p b^p$) contains a $p$-Sylow subgroup $\tilde{P}$ which is normal in $G/K$. Let $P = \{x\in G \mid xK \in \tilde{P}\}$, which is a normal subgroup of $G$. We can notice that $p^\alpha \big | |P|$ and hence $P$ contains a $p$-Sylow subgroup of $G$ which is normal in $P$ (by induction and verify that $o(P)<|endoftext|> TITLE: Prime Factorization of sequence QUESTION [5 upvotes]: I am looking into Integer Factorization and have found an interesting pattern which I cannot explain. For a composite number $N = PQ$, all factors of $(P + Q)$ are found amongst the factors of $(Nx^2 + 1)$ for integer $x > 0$. For example, $5713 = 29\cdot197$, so $(P + Q) = 226 = 2 \cdot 113$. The sequence $(5713 x^2 + 1)$ is: $5714, 22853, 51418, 91409, 142826\cdots$ The unique prime factorization of this sequence is: $2, 17, 19, 43, 47, 53, 67, 71, 73, 89, 103, 109, 113, 127, 139, 151, 163\cdots$ Note that not every prime is listed, $3, 5, 7, 11, 13\dots$ are missing. However, the prime factors of $(P + Q)$ are present (in this case $2$ and $113$). Is anyone able to shed some light on: 1. Why some primes are missing from the unique prime factorization of the sequence? 2.
Why the factors of $(P + Q)$ are contained in the factorization of this sequence? Highly factorable numbers work too: $3795 = 3 \cdot 5 \cdot 11 \cdot 23$, so $(P + Q)$ can be any of $(124, 148, 188, 268, 356, 764, 1268)$, and the factors of all of these can be found amongst the unique prime factorization of $(3795 x^2 + 1)$. The only example I found where this doesn't work is when $N$ is a square (e.g. $4, 9, 16$) or a multiple of a square (e.g. $12, 75, 98$). I apologize for any terminology issues, my maths knowledge is not great. It was suggested that I ask on math.stackexchange rather than stackoverflow. REPLY [2 votes]: Very nice observation! I've never seen this before, but here's a simple proof. Suppose $p \mid P+Q$ where $p$ is a prime. Then $Q \equiv -P \pmod p$, and $PQ \equiv -P^2 \pmod p$. For $p$ to divide an element of the sequence we require $p \mid (PQx^2 + 1)$ for some $x$. That is, $PQx^2 \equiv -1 \pmod p$. Combining these two relationships we obtain $$ P^2x^2 \equiv 1 \pmod p, $$ which is satisfied whenever $x \equiv \pm P^{-1} \pmod p$. To understand why some primes are missing from the factors of the sequence, you need to learn about quadratic residues. These are covered in any introductory text on number theory. The simple answer is that for a prime $p$ to occur as a factor in the sequence, $-PQ$ must be a quadratic residue modulo $p$.<|endoftext|> TITLE: question about line bundle on projective scheme QUESTION [5 upvotes]: Let $X$ be a projective variety. Suppose $\mathcal{L}$ is a basepoint-free, globally generated line bundle, so we get a map $\pi:X\to \mathbb{P}^N$ induced by $\mathcal{L}$. Let $R_n=H^0(X,\mathcal{L}^{\otimes{n}})$ and $R=\oplus_{n=0}^{\infty}R_n$. I vaguely remember that $\overline{\pi(X)}\cong \mathrm{Proj}(R)$ (the grading is by natural numbers). Is it true? In particular, when $\mathcal{L}$ is very ample, do we have $X\cong \mathrm{Proj}(R)$? Thank you! REPLY [4 votes]: For very ample line bundles this is true; if not, it is false in general. Just as an example, take a degree 2 line bundle $L$ on an elliptic curve $X$. Then $N=1$ in your notation, so $\pi(X)=\mathbb{P}^1$. But $\mathrm{Proj}\, R$ is just $X$, since $L$ is ample.<|endoftext|> TITLE: How tall should my Christmas tree be? QUESTION [14 upvotes]: This question has vexed me for the 20 years we've lived at my current house. There is a fir tree in the front that I dress every Christmas with lights. It grows. I prune it. This is what it looks like with the lights on... The bulbs (purple dots) are all on a single string that I start at the top and helically wrap down to the bottom. There are 100 bulbs spaced 300mm apart. I have decided that the tree looks best if the height is twice the width at the base. Q. What height should I maintain the tree at so that all the bulbs are equispaced from each other? I take this to mean that the next wrap around the tree is 300mm in Z below the previous wrap. Not perfect equidistance, but it will do for the neighbours and me. (There are similar questions, but I believe none so specific.) REPLY [8 votes]: Nice question! I learned a few things as I was looking for the solution. I assume that the length of your light cord is $0.3\text{ meters} \times 99 = 29.7$m (explanation: in the picture you posted, it seems that the cord starts and ends with a light bulb, so there are $99$ segments in between, each with a length of $0.3$m). I also assume (as you state in the comments) that you want each twist to be $0.3$m apart.
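(For readers who want to reproduce the numbers, the computation carried out in the rest of this answer can be scripted in a few lines. This is only a sketch: it uses the arc-length formula and the parameter values derived further below, and assumes numpy/scipy are available.)

```python
import numpy as np
from scipy.optimize import brentq

r = 0.25                                      # radius grows 0.25 m per metre below the tip (height = twice base width)
alpha = (2 * np.pi / 0.3) * np.sqrt(17) / 4   # winding density: successive twists 0.3 m apart on the cone's surface
L = 0.3 * 99                                  # cord length: 100 bulbs, 99 gaps of 0.3 m

def arc_length(t):
    """Arc length of the conical spiral (t r cos(alpha t), t r sin(alpha t), t) from the tip down to t."""
    return (0.5 * t * np.sqrt(1 + r**2 * (1 + alpha**2 * t**2))
            + (1 + r**2) / (2 * alpha * r) * np.arcsinh(alpha * r * t / np.sqrt(1 + r**2)))

height = brentq(lambda t: arc_length(t) - L, 0.1, 10.0)   # solve arc_length(t) = 29.7
print(round(height, 3))                          # about 3.3 (metres)
print(round(alpha * height / (2 * np.pi), 1))    # number of full twists of the string
```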
Note that this is not the same as a bulb being equidistant with the bulbs around it, as the bulbs on different twists will be somewhat further apart than $0.3$m. But it is a good enough approximation. Besides, I think that your original restriction (exactly equidistant bulbs) might not be possible with a conical spiral. In any case, since you are fine with each twist being $0.3$m apart, we will work with this assumption, as it makes the problem easier to solve. The general parametric equations that define a conical spiral are: $$\begin{array}{rl} x =& t\cdot r\cdot \cos(\alpha \cdot t)\\ y =& t\cdot r\cdot \sin(\alpha \cdot t) \\ z =& t \end{array} $$ Where $t$ is a variable that expresses the vertical distance from the tip of the cone, $r$ is the radius of the cone at $t=1$ and $a$ is a parameter that affects how densely the twists are wound around the cone. The bigger the $a$ the more dense the winding. What is $r$ in our problem? We want the height to be double the diameter so at distance $1$m from the cone tip we simply want $r = \frac14$ meters (all distance units will be expressed in meters). What should $\alpha$ be? Setting $\alpha \cdot t = 2\pi$ means a full turn/twist around the cone, and since we want the starting point of the twist with the ending point of the twist to be $0.3$m apart, this means that $\alpha = \frac{2\pi}{0.3}$. Edit: no, this means that they are $0.3$m apart in the vertical direction ($t$ is vertical distance). What we need is that the spirals are $0.3$m apart on the surface of the cone. So, how much is $t$ if the distance on the surface of the cone is $0.3$? If we take a cross section of the cone we can form a right triangle, where the hypotenuse is $0.3$, one side (the vertical distance) is $t$, and the other side is $t/4$. Applying the pythagorean theorem we find that $t = 0.3\cdot \frac{4}{\sqrt{17}}$. So we want $\alpha \cdot \left( 0.3\cdot \frac{4}{\sqrt{17}}\right) = 2\pi \iff \alpha = \frac{2\pi}{0.3} \cdot \frac{\sqrt{17}}{4}$ We have established parameters $\alpha$ and $r$, so our conical spiral is defined fully. But how to we find the height of the cone/tree? The arc length of a conical spiral is: $$\text{length}(t) = \frac12t \sqrt{1+r^2(1+\alpha^2t^2)}+\frac{1+r^2}{2\alpha r}\text{sinh}^{-1}\left( \frac{\alpha\cdot r\cdot t}{\sqrt{1+r^2}}\right)$$ Plugging in $\text{length}(t)=29.7$, $r=\frac14$, $\alpha = \frac{2\pi}{0.3}\cdot \frac{\sqrt{17}}{4}$ we can solve for t to get $t \approx \bbox[5px,border:2px solid red]{3.295}$ meters. So if you make your tree about $3.3$ meters tall and make your twists about $0.3$ meters apart, then you will have the coverage that you want. Here's how your light bulb spiral might look like (it was a bit tricky to place the $100$ bulbs on the graph, I was happy that I succeeded in the end): $\hspace{2cm}$ And here's a side view of the spiral. As you can see there are about $11.5$ twists. $\hspace{3cm}$ You can find the Python code I wrote to create the graphs here. I hope this answer can help you with your lights installation. Merry Christmas! :)<|endoftext|> TITLE: Applications of Euclidean harmonic analysis to geometry? QUESTION [5 upvotes]: The term "Euclidean harmonic analysis" means the studying of the classical Fourier transform for functions on $\mathbb{R}^n$ or $\mathbb{T}^n$. 
This includes the basic properties of the Fourier transform, such as Plancherel's theorem for $L^2$ functions, as well as other more sophisticated techniques and results, for example the Fourier transform for $L^p$ functions, the Hardy-Littlewood maximal function, Littlewood-Paley functionals, etc. My question is whether there are any applications of these theories to geometry. What I know about applications of the Fourier transform to geometry is that it can be used to define Sobolev spaces for $L^2$ functions on manifolds and then pseudodifferential operators. Combining these with topological $K$-theory, one can prove the Atiyah-Singer index theorem, which gives a lot of interesting results for compact manifolds. A standard reference for this is Lawson's book, Spin Geometry. But to do this we don't need anything that is essentially more delicate than Plancherel's theorem or formulae such as $\widehat{\partial_x^\alpha u}(\xi)=\xi^\alpha \widehat{u}(\xi)$. So do all those $L^p$ results, Hardy-Littlewood maximal functions, etc. help us understand more about geometry? If so, does anyone have a reference for this kind of result? REPLY [4 votes]: Analysis results involving $L^p$ spaces for $p \neq 2$ show up naturally when you are studying non-linear partial differential equations. This occurs for example in the theory of $J$-holomorphic curves in symplectic geometry, which gives you a powerful tool to study symplectic invariants of your manifold. The basic idea is that given a symplectic manifold $(M,\omega)$, one can choose an almost complex structure $J$ that is compatible with $\omega$ and study the solutions inside $M$ of the non-linear Cauchy-Riemann equation $du \circ j + J \circ du = 0$ where $u \colon C \rightarrow M$ and $(C,j)$ is a Riemann surface. Such maps are called pseudoholomorphic or $J$-holomorphic curves (inside $M$). In local coordinates, this equation is a non-linear version of the regular Cauchy-Riemann equation, and one wants to show various properties such as regularity and removal of singularities. Since $C$ is a (real) two-dimensional surface, the natural arena on which the equation is defined is a Sobolev space $W^{1,p}$ with $p > 2$ (by the Sobolev embedding theorem, this is the minimal Sobolev space in which functions are guaranteed to be continuous, and there are various problems in even making sense of the equation if $p \leq 2$). By applying standard techniques, one is led to use results from the theory of elliptic regularity in $L^p$ spaces for $p \neq 2$. Such results are in turn based on more advanced analysis involving the theory of singular integrals and the Calderón-Zygmund decomposition. A standard reference for what I described above is the book "J-Holomorphic Curves and Symplectic Topology" by McDuff and Salamon, in which there is a whole appendix dedicated to proving and using hard analysis to study the properties of $J$-holomorphic curves.
Since knowing that a limit has the form $0\cdot \infty$ is insufficient to determine the value of that limit, such limits are said to have indeterminate form. Long Answer Typically, one of the first theorems which is taught when students start learning about limits is something like: Theorem 1: Suppose that $f$ and $g$ are two functions, and that there are $L,M\in\mathbb{R}$ such that $$ \lim_{x\to a} f(x) = L \qquad\text{and}\qquad \lim_{x\to a} g(x) = M. $$ Then $$ \lim_{x\to a} f(x)g(x) = LM. $$ This theorem states that if two functions each have a limit at some point, then (1) the product of the two functions has a limit at that point and (2) the limit of the product is the product of the limits. Assuming that the hypotheses are met (i.e. both functions have finite limits at some point), then we can determine the limit of the product. In cases where the hypotheses are not met (for example, if $\lim_{x\to a} f(x) = +\infty$), the theorem does not apply. However, we might still like to compute limits as though the theorem did apply (that is, we might like to determine a more relaxed set of hypotheses under which the statement of the theorem might still hold). For example, Theorem 2: Suppose that $f$ and $g$ are two functions, that there is $L>0$ such that $$ \lim_{x\to a} f(x) = L, \qquad\text{and}\qquad \lim_{x\to a} g(x) = +\infty. $$ Then $$ \lim_{x\to a} f(x) g(x) = +\infty. $$ This theorem says, more or less, that $L \cdot (+\infty) = +\infty$ whenever $L>0$. Even though $g$ has no limit at $a$ (or, if you prefer, $g$ has an infinite limit at $a$), we can still determine how the limit of the product behaves at $a$. It is in this sense that we may treat infinity like a very large real number. It isn't actually a very large real number, but in this context, it behaves kind of like a very large real number. Now, once we start treating infinity like a very large real number, it is very tempting to attempt to give the following theorem: Non-Theorem: Suppose that $f$ and $g$ are two functions. Then $$ \lim_{x\to a} f(x) g(x) = \left( \lim_{x\to a} f(x) \right) \left( \lim_{x\to a} g(x) \right). $$ If one attempts to use such a theorem, however, one should quickly realize that there are many examples where this simply doesn't work. Even if $f$ and $g$ behave relatively nicely near $a$ (e.g. one of them diverges to $\pm \infty$), it may not be possible to determine the limit. In any such case, the limit could be said to be indeterminate. For example, let $f(x) = x^{-1}$ and let $g(x) = x^{\alpha}$. If $\alpha > 0$, then $$ \lim_{x\to +\infty} g(x) = +\infty, $$ and $$ \lim_{x\to +\infty} f(x) = 0 $$ in any case. Therefore, with respect to the non-theorem above, we might try to conclude that $$ \lim_{x\to a} f(x) g(x) "=" 0 \cdot (+\infty). $$ That is, the limit has the form $0\cdot \infty$. But note that there are (at least) three distinct cases: If $0 < \alpha < 1$, then $$ \lim_{x\to \infty} f(x) g(x) = \lim_{x\to \infty} x^{\alpha - 1} = 0, $$ since $\alpha - 1 < 0$. If $\alpha = 1$, then $$ \lim_{x\to \infty} f(x) g(x) = \lim_{x\to\infty} x^{0} = \lim_{x\to \infty} 1 = 1. $$ If $\alpha > 1$, then $$ \lim_{x\to\infty} f(x) g(x) = \lim_{x\to \infty} x^{1-\alpha} = +\infty, $$ since $\alpha - 1 > 0$. Hence knowing that the limit has the form $0\cdot \infty$ does not give us enough information to determine the value of that integral. Indeed, we can come up with examples like the above which take any possible value, including both positive and negative infinity. 
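(A quick numerical illustration of these three cases, using the same functions $f(x)=x^{-1}$ and $g(x)=x^{\alpha}$ as above; just a sketch, not part of the argument.)

```python
import numpy as np

def product(alpha, xs):
    """f(x) * g(x) with f(x) = 1/x -> 0 and g(x) = x**alpha -> +inf for alpha > 0."""
    return xs ** (alpha - 1.0)

xs = np.array([1e2, 1e4, 1e6, 1e8])
for alpha in (0.5, 1.0, 2.0):
    print(alpha, product(alpha, xs))
# alpha = 0.5: the products shrink toward 0
# alpha = 1.0: the products are identically 1
# alpha = 2.0: the products blow up toward +infinity
```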
Since knowing the form of the limit is not enough to determine the value of the limit, we reasonably say that the limit is of indeterminate form.<|endoftext|> TITLE: Spectrum of the inverse operator? QUESTION [6 upvotes]: How can you prove that the spectrum of the inverse of an operator $A^{-1}$ is given by all $\frac{1}{\lambda}$ for all $\lambda \in \sigma(A)\backslash \{0\}$? REPLY [10 votes]: We suppose that $A$ is invertible . By $\rho(T)$ we denote the resolvent set of an operator $T$. Let $ \lambda \ne 0$. Then: $$ (*) \quad A^{-1}- \lambda I=A^{-1}(I-\lambda A)=\lambda A^{-1}(\frac{1}{\lambda}I-A).$$ From $(*)$ we see: $\lambda \in \rho(A^{-1})$ iff $\frac{1}{\lambda} \in \rho(A)$<|endoftext|> TITLE: How many $3 \times 3$ integer matrices are orthogonal? QUESTION [9 upvotes]: Let $S$ be the set of $3 \times 3$ matrices $\rm A$ with integer entries such that $$\rm AA^{\top} = I_3$$ What is $|S|$ (cardinality of $S$)? The answer is supposed to be 48. Here is my proof and I wish to know if it is correct. So, I am going to exploit the fact that the matrix A in a set will be orthognal, so if the matrix is of the form \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \\ \end{bmatrix} Then each column and row will have exactly one non-zero element which will be +1 or -1. Thus, I have split possibilities for the first column into three cases and counted the possibilities in each case as follows :- $$a_{11} \neq 0$$ or $$ a_{21} \neq 0$$ or $$ a_{31} \neq 0$$ In case 1), we obviously have two possibilities(+1 or -1) so we consider the one where the entry is +1. Now, notice that the moment we choose the next non-zero entry, all the places for non-zero entries will be decided because of the rule 'each column and row will have exactly one non-zero element'. Meaning, if b and c are remaining two non-zero entries, we only have two possibilities left \begin{bmatrix} 1 & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \\ \end{bmatrix} or \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & c \\ 0 & b & 0 \\ \end{bmatrix} Using the fact that b and c are simply $$\pm1$$ In each of the above matrices, we get 4 possibilities for each of the matricies. Thus, 8 possibilities in totality. Basically, we are getting 8 possibilities on the assumption that $$a_{11} = 1$$ Thus, we get 16 possibilities on the case that $$a_{11} \neq 0$$ Following, the second and third cases analogously, we get a total of 16 possibilities in each of them and 48 possibilities in total. Source :- Tata Institute of Fundamental Research Graduate School Admissions 2016 REPLY [3 votes]: Note that each column of $A$ must be an integer vector of unit length which means that each column is of the form $\pm e_i$ for some $1 \leq i \leq 3$ (where $e_i$ are the standard basis vectors). Thus, we need to pick a permutation of the $e_i$'s to put as columns and then, for each column independently, decide whether it gets a plus or a minus sign. This results in a total of $3! \cdot 2^3 = 6 \cdot 8 = 48$ options for $A$.<|endoftext|> TITLE: Why is it necessary to have convexity outside the feasible region? QUESTION [5 upvotes]: In the book "convex optimization" by Prof. Boyd and Prof. 
Vandenberghe, a convex problem is defined as $$ \begin{aligned} & \text{minimise} & & f_0(x) \\ & \text{subject to} & & f_i(x) \leqslant 0, \quad \forall i \in \{1,2,\dots,m\}\\ &&& a_i^Tx=b_i, \quad \forall i \in \{1,2,\dots,p\}, \end{aligned} $$ where $f_0(x), f_1(x), \dots, f_m(x)$ are convex. Let $X$ be the feasible set, defined as $X=\{x \in \mathbb{R}^n \mid f_i(x) \leqslant 0 \ \forall i =1,2,\dots,m, \ a_i^Tx=b_i \ \forall i =1,2,\dots,p \}$. My question is: why is it necessary for $f_0(x), f_1(x), \dots, f_m(x)$ to be convex outside $X$? REPLY [6 votes]: Frankly, it's for largely practical reasons, though as LinAlg points out below, even some theoretical results depend on convexity outside of the feasible region. One of the most important reasons that we care about convex optimization problems is that they are theoretically tractable, and can be readily solved in practice. So now let's consider how we go about solving them. One of the distinctions that we often make when classifying algorithms that solve convex programs is between feasible methods and infeasible methods. A feasible method is guaranteed to consider only points within the feasible region; an infeasible method makes no such guarantee. Infeasible algorithms can be easier to work with, but of course they do require convexity outside of the feasible region. On the other hand, a feasible algorithm doesn't care if your constraint functions are convex outside of the feasible region. Or does it? The complication is that a feasible method must be initialized with a feasible point. If you happen to know one, you win. But if not, you must employ a two-phase approach. The first phase is used to find a feasible starting point for the second phase. One way to do it is to solve a model like this: $$\begin{array}{lll} \text{minimize} & z \\ \text{subject to} & f_i(x) \leq z & i= 1,2,\dots,m \\ & a_i^T x = b_i & i=1,2,\dots, p \\ & z \geq 0 \end{array}$$ The good news is that it's easy to find a feasible initial point for this problem: just choose any $x$ that satisfies the equality constraints, and set $z=\max\{0,f_1(x),\dots,f_m(x)\}$. With that point, you can use your standard feasible optimization method to solve this problem; and once you achieve $z=0$, you can stop and transition to phase 2. The bad news is that, if $z>0$, you're starting outside the original feasible region. So you can't rely on the fact that the functions are convex just within the feasible region; they must be convex outside of that region, too. Indeed, we now have to guarantee that the functions are convex for all $x$ satisfying the equality constraints. In limited circumstances, you might be able to find a way to expand your search space carefully to avoid regions of non-convexity. But the point remains that it is not sufficient to have convexity simply within the feasible region.<|endoftext|> TITLE: How do algebraists intuitively picture normal subgroups and ideals? QUESTION [18 upvotes]: I have developed some intuitive picture of homomorphisms and kernels, but have trouble developing useful insight into normal subgroups and ideals. REPLY [8 votes]: To elaborate on a very important point briefly touched on in Matt Samuel's answer: in algebraic geometry, a ring $R$ is viewed as representing a space $X$, with elements of the ring being some kind of functions on the space, extending the way that e.g.
the polynomial ring $\mathbb{Z}[x,y]$ can be viewed as a ring of functions on the 2-dimensional plane. Any subspace $X' \subset X$ of the space should then induce an ideal $I_{X'} \subseteq R$, the ideal of functions that vanish (i.e. are constantly 0) on the subspace $X'$. So an ideal can represent a subspace; for instance, the ideal $(x-1) \subseteq \mathbb{Z}[x,y]$ represents the line $x = 1$ in the plane, since a polynomial vanishes on that subspace exactly if it's a multiple of $x-1$. Slightly more generally, the functions that "vanish to higher order" on a subspace also form an ideal. E.g. $((x-1)^2)$ represents the functions that vanish to second order on the line $x=1$. If you haven't met this idea of higher-order vanishing before, look at (or imagine) a 3-d graph of the function $z = (x-1)^2$ on the $xy$-plane, and see how it's not just zero on the line $x=1$, it's flat on that line. Intuitively, one can think of it as vanishing not just on that line, but on an infinitesimal thickening of that line. (And $(x-1)^3$ vanishes on a slightly thicker infinitesimal thickening, and so on.) So intuitively, in the algebraic geometry picture, ideals of a ring correspond to subspaces of a space.<|endoftext|> TITLE: Help understanding proof of every topological group is regular QUESTION [8 upvotes]: I found a proof that any topological group is regular here, but I got lost in the last part. The whole argument goes like this: Consider the map $f:G \times G \to G$ defined by $f(a,b)=ab^{-1}$. This map is always continuous in a topological group. Now take $x \in U$, where $U$ is an open set in $G$. Then $f^{-1}(U)$ contains $(x,e)$, so we have $(x,e)\in V \times W \subseteq f^{-1}(U)$ for some open subsets $V$ and $W$ such that $x\in V$ and $e\in W$. Hence, $x\in V$. Furthermore, $V\cap (X-U)W=\emptyset$, since any element in the intersection corresponds to $a\in V$, $b\in W$, such that $ab^{-1}\notin U$, which is a contradiction. Since $(X-U)W$ is an open set containing $X-U$, $X$ is regular. The boldface argument is what I don't understand. How can I see that $V\cap (X-U)W=\emptyset$? REPLY [6 votes]: If $y\in V\cap (X - U)W$, then $y\in V$ and $y = st$ for some $s\notin U$ and $t\in W$. Then $(y,t)\in V\times W$, so $f(y,t) \in U$, i.e., $yt^{-1}\in U$. On the other hand, $s = yt^{-1}\in U$. This contradiction shows $V\cap (X - U)W = \emptyset$.<|endoftext|> TITLE: Is a function whose derivative vanishes at rationals constant? QUESTION [9 upvotes]: I'm trying to make a problem for my advanced calculus students. I was thinking: if we have a differentiable function $f:\mathbb{R}\to\mathbb{R}$ such that $f'(q)=0$ for all $q\in\mathbb{Q}$, can we say that $f$ is constant? REPLY [3 votes]: The function $f$ doesn't have to be constant! A non-constant function $f$ with the required properties is given as Exercise 13.J in A. C. M. van Rooij, W. H. Schikhof, A Second Course on Real Functions, based on an example due to Y. Katznelson and Karl Stromberg, in Everywhere differentiable, nowhere monotone, functions, Amer. Math. Monthly 81 (1974), 349–354, jstor. Another example has been constructed by Dimitrie Pompeiu in Sur les fonctions dérivées, Math. Ann. 63 (1907), no. 3, 326–332, doi: 10.1007/BF01449201, eudml, GDZ. You can have a look here.<|endoftext|> TITLE: If $x+\frac{1}{x}=\frac{1+\sqrt{5}}{2}$ then $x^{2000}+\frac{1}{x^{2000}}= $?
QUESTION [9 upvotes]: If $x+\frac{1}{x}=\frac{1+\sqrt{5}}{2}$ then $$x^{2000}+\frac{1}{x^{2000}}=?$$ My try: $$\left(x^{1000}\right)^2+\left(\frac{1}{x^{1000}}\right)^2=\left(x^{1000}+\frac{1}{x^{1000}}\right)^2-2$$ Continuation ? REPLY [7 votes]: Here's another approach for the record. The equation $x+ \frac{1}{x}=\alpha$ where $\alpha \in [-2,2]$ can be solved as follows. Identify $\alpha$ is $2\cos (\theta)$, and observe that letting $z=e^{i\theta}$,from the definition of $\cos (\theta)$, we have $$z+ \frac{1}{z}=2\cos(\theta)=\alpha.$$ Also, from the definition of $\cos$, $$z^k + \frac{1}{z^k} = 2\cos (k \theta).$$ Now back to our question. Let $\alpha = \frac{1+\sqrt{5}}{2}$. Then $\cos(\theta) = \frac{1+\sqrt{5}}{4}$. This gives us $\theta = \pm \frac{\pi}{5} \mod 2\pi$. Therefore the answer to the problem is $2\cos (400\pi)=2$.<|endoftext|> TITLE: Calculate: $\lim\limits_{n\to\infty} \sum_{k=0}^{n} \frac{2n+k}{n^2+(2n+k)^2}$ QUESTION [7 upvotes]: Calculate: $$\lim\limits_{n\to\infty} \sum_{k=0}^{n} \frac{2n+k}{n^2+(2n+k)^2}$$ I thought a Riemann sum could lead to something, but couldn't find a suitable partition. Hint, please? REPLY [10 votes]: One may recognize a Riemann sum, by writing $$ \sum_{k=0}^{n} \frac{2n+k}{n^2+(2n+k)^2}=\frac1n \cdot\sum_{k=0}^{n} \frac{2+\frac{k}n }{1+(2+\frac{k}n)^2}, $$ then letting $n \to \infty$, to obtain $$ \frac1n \cdot\sum_{k=0}^{n} \frac{2+\frac{k}n}{1+(2+\frac{k}n)^2} \to \int_0^1 \frac{2+x }{1+(2+x)^2}\:dx.\tag1 $$ Add-on. Since $f:[0, 1] \rightarrow [0, 1]$ with $f(x)=\frac{2+x}{1+(2+x)^2}$ satisfies $f \in \mathcal{C}^1([0,1])$, then one is allowed to apply the standard result $$ \frac1n\sum_{k=0}^{n} f\left(\frac{k}{n}\right) =\int_0^1 f(x)\,dx + \frac{f(0) + f(1)}{2n}+o\left(\frac1n \right) \tag2 $$ giving, as $n \to \infty$, $$ \sum_{k=0}^{n} \frac{2n+k}{n^2+(2n+k)^2}=\frac{\ln 2}2+\frac{7}{20\: n}+o\left(\frac1n \right). \tag3 $$ One may in fact express the given sum in terms of the digamma function, using $$ \sum_{k=0}^{n} \frac{1}{k+b}=\psi\left(n+b+1\right)-\psi\left(b\right), \qquad \text{Re}\:b>0, $$ and writing $$ \sum_{k=0}^{n} \frac{2n+k}{n^2+(2n+k)^2} = \text{Re}\:\sum_{k=0}^{n} \frac{1}{k+(2+i)n} $$ then recalling the asymptotics of the digamma function, as $n \to \infty$, one obtains $$ \sum_{k=0}^{n} \frac{2n+k}{n^2+(2n+k)^2}=\frac{\ln 2}2+\frac7{20\:n}+\frac1{300\:n^2}+\frac7{60\:000 \:n^4}+o\left(\frac1{n^5} \right).\tag4 $$<|endoftext|> TITLE: Showing n! is greater than n to the tenth power QUESTION [21 upvotes]: I'd like to show $n!>n^{10} $ for large enough n ( namely $ n \geq 15 $). By induction, I do not know how to proceed at this step: $$ (n+1)\cdot n!>(n+1)^{10} $$ As I can't see how to simplify $(n+1)^{10} $. This seems like such a trivial thing (and it probably is), yet I can't do it. Isn't there an easier way to show this? (P.S. I need to refrain from the use of derivatives, integrals etc., I suppose, then you could work something out with the slope of the respective functions) REPLY [3 votes]: There's no need for induction or anything beyond elementary arithmetic. 
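(Not part of the argument, but a one-liner makes it easy to check numerically where the inequality first kicks in; a small Python sketch.)

```python
from math import factorial

# find the smallest n with n! > n**10; the claim is that this happens at n = 15
n = 1
while factorial(n) <= n ** 10:
    n += 1
print(n)   # prints 15
```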
For $n\geq 15$, \begin{align*} n!&\geq n(n-1)\cdots(n-9)\times 5\cdot 4\cdot 3\cdot 2 \\ &= 120n(n-1)\cdots(n-9)\\ &> n \cdot \frac{15}{14}(n-1)\cdot \frac{15}{13}(n-2)\cdots \frac{15}{6}(n-9)\\ &\geq n^{10}\,, \end{align*} where the strict inequality is because the third line is approximately $53n(n-1)\cdots(n-9)$ and the final inequality is because, for all $i$ and $n$ with $1\leq i<15\leq n$, $$\frac{15}{15-i}(n-i) = n + \frac{(n-15)i}{15-i} \geq n\,.$$<|endoftext|> TITLE: Geometry of $2 \times 2$ matrix mappings QUESTION [6 upvotes]: Suppose $\mathbf{A}$ is a $2 \times 2$ matrix. I'm interested in the geometry of the mapping $T(\mathbf{x}) = \mathbf{A}\mathbf{x}$ from $\mathbb{R}^2$ to $\mathbb{R}^2$. The effect on the vectors $\mathbf{e_1} = (1,0)$ and $\mathbf{e_2} = (0,1)$ is clear -- $T(\mathbf{e_1})$ and $T(\mathbf{e_2})$ are just the columns of $\mathbf{A}$. So, by linearity, a unit square gets mapped to a parallelogram. The green square gets mapped to the pink parallelogram in the pictures below. Now let's consider the effect on a unit circle. It gets mapped to an ellipse, and I'm interested in how the geometry of this ellipse is related to the matrix $\mathbf{A}$. One case seems clear: if $\mathbf{A}$ is symmetric, then the axes of the ellipse are the eignevectors of $\mathbf{A}$, and its semi-axis lengths are the eigenvalues. The "stretching" of the circle to form the ellipse is nicely related to eigenvalues and eigenvectors. Fabulous. This is illustrated in the following picture: Another case is also clear: if the eigenvalues of $\mathbf{A}$ are not real, then presumably there is no relationship whatsoever to the geometry of the ellipse. Now the case that's puzzling me: what if $\mathbf{A}$ is not symmetric, but still has real eigenvalues. What sort of geometric relationship exists in this case, if any? This case is illustrated in the following picture: REPLY [3 votes]: The relevant keyword to read about is "singular values". For a map $T \colon \mathbb{R}^2 \rightarrow \mathbb{R}^2$, you can always find an orthonormal basis $(v_1,v_2)$ of the domain (with respect to the standard Euclidean metric) and an orthonormal basis $(w_1,w_2)$ of the range such that $T(v_i) = \sigma_i w_i$ for $\sigma_i \geq 0$. The numbers $\sigma_i$ are called the singular values of $T$ and are the eigenvalues of $\sqrt{T^{*}T}$ or $\sqrt{TT^{*}}$ (in your cases, $\sqrt{A^TA}$ or $\sqrt{AA^T}$). Since the bases $(v_i),(w_i)$ are orthonormal, this means that assuming $T$ is invertible (and then $\sigma_i > 0$ for all $i$), a unit circle will be mapped to an ellipse whose axes are $w_1,w_2$ of lengths $\sigma_1,\sigma_2$ respectively. It $T$ is not invertible, it will map the unit circle to a "circle" of lower dimension. If $T$ is positive definite, then $T$ is orthogonally diagonalizable, you can get $v_i = w_i$ and the $\sigma_i$ will be the eigenvalues of $T$. If $T$ is symmetric, the singular values will be the absolute values of the eigenvalues of $T$. If $T$ is not symmetric, then the only relation you can expect between the singular values and the eigenvalues is that $\sigma_1 \cdot \sigma_2 = |\lambda_1 \cdot \lambda_2| = |\det(A)|$. For example, consider the family of matrices $$ A = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}. $$ The only eigenvalue of $A$ is $1$ and if $a \neq 0$ then $A$ is not diagonalizable. The singular values of $A$ are given by $$ \sigma_1 = \sqrt{\frac{2 + a^2 + a\sqrt{4 + a^2}}{2}}, \sigma_2 = \sqrt{\frac{2 + a^2 - a\sqrt{4 + a^2}}{2}}. 
$$ They satisfy $\sigma_1 \sigma_2 = 1$ and $\sigma_1,\sigma_2 > 0$, and when $a$ runs over $(0,\infty)$, the singular value $\sigma_1$ runs over $(1,\infty)$ while $\sigma_2 = \frac{1}{\sigma_1}$ runs between $1$ and $0$, so you get all possible pairs of singular values subject to the constraint $\sigma_1 \sigma_2 = 1$. Geometrically, since $\det(A) = 1$, the matrix $A$ must map the unit disc to an ellipse of area $1$. As $\sigma_1 \to \infty$, the matrix $A$ maps the unit disc to an ellipse in which one axis (corresponding to the singular value $\sigma_1$) becomes longer and longer while the second axis becomes shorter and shorter to keep the area of the ellipse constant.<|endoftext|> TITLE: On solutions of a certain $n\times n$ linear system whose coefficients are all $\pm1$ QUESTION [5 upvotes]: Let $n>2$ and $A \in M_n(\mathbb{R})$ be a $\{-1,1\}$-matrix (whose elements are $-1$ or $1$). Let $b\in\mathbb R^n$ contain the row-wise counts of minus ones in $A$ (i.e. $b_i$ is the number of minus ones in the $i$-th row of $A$). Suppose $Av = b$. Prove or disprove that if $Ax=0$ has only the trivial solution, then $v$ has at least two identical elements. For example, $$ A = \pmatrix{1 & 1 & 1 \\ -1 & 1 & 1\\ -1 & -1 & 1}, \ b=\pmatrix{0\\ 1\\ 2}, \ v=\pmatrix{-\frac12\\ -\frac12\\ 1}. $$ I couldn't find a counterexample, so for the moment I'm assuming it is indeed true. Perhaps the more pertinent issue I'm having is not knowing where to start, given the peculiar structure of the matrix and of $b$. I considered proving the contrapositive (if $v_1 = \ ...\ = v_n$, then there exists some nonzero/nontrivial solution to $Ax = 0$), but I didn't get far with that. I appreciate all help, even if it's just a nudge in the right direction (or if anyone finds a counterexample). Thank you kindly! Edit: In addition, I have not found a counterexample for the case where each $a_{ij}$ is either $-\delta$ or $\delta$ for $\delta \in \mathbb{R}$, so if the initial statement is indeed true, then something that would point to this fact would be optimal (or at the very least interesting). Thanks again! REPLY [2 votes]: Edit. The hypothesis is false. The smallest counterexample I found was $5\times5$. We have $\det(A)=-16$ and $Av=b$ for $$ A=\pmatrix{-1&1&-1&-1&-1\\ 1&-1&1&-1&1\\ -1&-1&-1&1&1\\ -1&1&1&1&-1\\ 1&1&-1&1&1}, \ b=\pmatrix{4\\ 2\\ 3\\ 2\\ 1}, \ v=\frac12\pmatrix{-11\\ 9\\ 4\\ -6\\ 14}. $$<|endoftext|> TITLE: Proof that the exponential martingale is a Brownian Motion QUESTION [8 upvotes]: Let $B_t$ be a Brownian motion, $\delta \in \mathbb{R}$. I have to determine whether $e^{\delta B_t - \frac{\delta^2 t}{2}}$ is also a Brownian motion. I can't use any Itô calculus because it is not part of the course this problem comes from. But I can use any other tools from martingales, the Lévy characterization theorem, and Brownian motion properties and theorems. This is what I've tried: A Brownian motion must have independent stationary increments with normal distribution, and continuous paths. And it must start at a value in $\mathbb{R}$. The function $e^{x+y}$ is continuous and Borel-measurable, so if $B_t - B_s$ has normal distribution with continuous paths, the process $e^{\delta (B_t - B_s) - \frac{\delta^2 (t-s)}{2}}$ is also normally distributed with continuous paths. It starts at the value 1. The problem comes when proving independence.
If I choose the values of the intervals $[t,s]$ carefully, I can have the intervals of the function $e^{\delta (B_t - B_s) - \frac{\delta^2 (t-s)}{2}}$ overlapping; then the resulting value will be a function of values from different intervals (no more independence!). Then I tried the Lévy characterization theorem: the process $e^{\delta B_t - \frac{\delta^2 t}{2}}$ has continuous paths and it is a martingale (for the proof see the book Brzezniak Z., Zastawniak T., Basic Stochastic Processes, A Course Through Exercises, Springer, 2002, solution to exercise 6.35). Meh, I blatantly copy/pasted the proof for you to enjoy: This time my problem comes from proving that $e^{2\delta B_t - \delta^2 t}-t$ is a martingale. If I make $\alpha = 2\delta$ and repeat the solution of 6.35, I have no way to get rid of the $-t$ term. As far as I can prove here, the process $e^{\delta B_t - \frac{\delta^2 t}{2}}$ is a martingale but it is not a Brownian motion. Please tell me if I did something wrong. Edit: I fixed a typo: I had written a plus sign in the exponential instead of a minus sign. I'm almost sure (no pun intended) the exponential martingale is not a Brownian motion; I was getting confused with geometric Brownian motion, but they are different processes. REPLY [4 votes]: It's clearly not a Brownian motion. $$ P(B_1 <0 ) =0.5. $$ $$ P(e^{\delta B_t - \frac{\delta^2 t}{2}} < 0 )=0. $$ It is a martingale: for $s < t$, $$ e^{\delta B_t - \frac{\delta^2 t}{2}}=e^{\delta B_s - \frac{\delta^2 s}{2}}e^{\delta (B_t-B_s) - \frac{\delta^2 (t-s)}{2}}. $$ Taking the conditional expectation at time $s$, the second factor has expectation $1$ and the result follows.<|endoftext|> TITLE: Line integral with respect to arc length QUESTION [13 upvotes]: I ran into a problem that, initially, I thought was a typo. $$\int_C\ e^xdx $$ where $C$ is the arc of the curve $x=y^3$ from $(-1,-1)$ to $(1,1)$. I have only encountered line integrals with $ds$ before, not $dx$ (or $dy$, for that matter). At first, I thought the $dx$ was supposed to be a $ds$, but that led to an unsolvable integral. Unfortunately, my book does not cover this topic very well, and the online answers I have found are rather vague. I tried plugging in $x=y^3$ to get $$ \int_{-1}^{1}\ e^{y^3}3y^2dy $$ which evaluates to $e - \frac{1}{e}$. This is also what I get when I do $ \int_{-1}^{1}\ e^{x}dx $, so I am inclined to believe it is correct. However, I'm not sure, and I'd like to know for certain if my intuition is valid. REPLY [2 votes]: It is fairly standard to write $\int_C P (x,y)dx+Q (x,y)dy $ for the line integral $\int_C\vec F\cdot d\vec r $, where $\vec F (x,y)=(P (x,y),Q (x,y))$. So in this case $\vec F (x,y)=(e^x ,0)$. Here one can solve as you did; or you can notice that $\vec F $ is conservative, with potential function $f (x,y)=e^x $. Thus $$\int_C e^x\,dx=f (1,1)-f (-1,-1)=e-\frac1e. $$ It is important to note that all this works because $P $ depends only on $x $.<|endoftext|> TITLE: Too Restrictive Axiom- Example QUESTION [6 upvotes]: Can someone give me an example of an object where the rules (axioms) are so restrictive that the result leads to few mathematical structures? REPLY [4 votes]: An axiomatization is said to be categorical if all of its models are isomorphic; this is as restrictive as you can get.
However, by the Löwenheim-Skolem theorem, no first-order theory with an infinite model is categorical (because any theory with at least one infinite model has models of every infinite cardinality, and models of different cardinalities can't be isomorphic). Here are two ways of proceeding: (1) Look at second-order logic. (In first-order logic, we can quantify only over members of the domain $M$ of the model in question. In second-order logic, we can quantify over relations on the model; in other words, we can quantify over subsets of $M^k$ for each natural number $k$.) There are important examples of categorical theories in second-order logic: One is second-order Peano arithmetic; here the induction axiom applies to all subsets of the model (unlike first-order Peano arithmetic, where the induction axiom applies only to those subsets of the model that can be defined by a first-order formula). Second-order Peano arithmetic is categorical; its only model, up to isomorphism, is the usual model of the natural numbers. So this second-order theory characterizes the structure of the natural numbers. Another example is the second-order theory of the real numbers, with a completeness axiom that says that every set of reals with an upper bound has a least upper bound (not just those subsets that are definable by a first-order formula). Again, this second-order theory is categorical; it characterizes the structure of the real numbers. (2) Go back to first-order logic (which is much more tractable than second-order logic), and, instead of categoricity, look at categoricity in power. ("Power" here is used to mean "cardinality".) If $\kappa$ is some cardinal number, a first-order theory $T$ with an infinite model is said to be $\kappa$-categorical if all models of $T$ of cardinality $\kappa$ are isomorphic. This is as restrictive as you can get in first-order logic. A major example of this is the first-order theory of dense linear orderings without endpoints. Cantor proved that every countable model of this theory is isomorphic to the set of rational numbers with the usual ordering. So this theory is $\aleph_0$-categorical. Morley proved a wonderful theorem on categoricity in power: If a countable first-order theory $T$ is $\kappa$-categorical for some uncountable cardinal $\kappa,$ then $T$ is $\kappa$-categorical for every uncountable cardinal $\kappa.$ This leaves four possibilities for a countable first-order theory $T$ with infinite models: (a) $T$ is not categorical in any infinite power; (b) $T$ is $\aleph_0$-categorical but is not categorical in any uncountable power; (c) $T$ is not $\aleph_0$-categorical but is categorical in every uncountable power; (d) $T$ is categorical in every infinite power. There are examples of each of (a), (b), (c), (d) above.<|endoftext|> TITLE: Maximum Likelihood Estimation with Indicator Function QUESTION [5 upvotes]: I need to solve this exercise from the book below. $\textbf{Mathematical Statistics, Knight (2000)}$ $\textbf{Problem 6.17}$ Suppose that $X_1,\ldots,X_n$ are i.i.d. random variables with frequency function \begin{equation} f(x;\theta)=\begin{cases} \theta & \text{for $x=-1$}\\ (1- \theta)^2 \theta^x & \text{for $x=0,1,2,\ldots$} \end{cases} \end{equation} (a) Find the Cramer-Rao lower bound for unbiased estimators based on $X_1,\ldots,X_n$.
(b) Show that the maximum likelihood estimator of $\theta$ based on $X_1,\ldots,X_n$ is $$\hat{\theta}_n = \frac{2 \sum_{i=1}^{n} I_{(X_{i}=-1)} + \sum_{i=1}^n X_i}{2n + \sum_{i=1}^n X_i}$$ and show that $\{\hat{\theta}_n\}$ is consistent for $\theta$. (c) Show that $\sqrt{n}(\hat{\theta}_n-\theta)\to_d N(0,\sigma^2(\theta))$ and find the value of $\sigma^2(\theta)$. Compare $\sigma^2(\theta)$ to the Cramer-Rao lower bound in part (a). No clue on how to solve (a) or (c). I started to solve (b) but I can't seem to arrive at the desired solution. I'm getting this: \begin{align} \mathcal{L} &= \prod_{i=1}^n (1-\theta)^2 \theta^{x_i I_{(X_i \geq 0)} + I_{(X_{i}=-1)}} \\ \mathcal{L} &= (1-\theta)^{2 \sum_{i=1}^n I_{(X_i \geq 0)}} \theta^{\sum_{i=1}^n x_i I_{(X_i \geq 0)} + \sum_{i=1}^n I_{(X_{i}=-1)}} \\ \log \mathcal{L} &= 2 \sum_{i=1}^n I_{(X_i \geq 0)} \log(1-\theta) + \sum_{i=1}^n x_i I_{(X_i \geq 0)} \log \theta + \sum_{i=1}^n I_{(X_i=-1)} \log \theta \end{align} $\textbf{FOC}$ \begin{align} 0 &= - \frac{2 \sum_{i=1}^n I_{(X_i \geq 0)}}{1-\theta} + \frac{\sum_{i=1}^n x_i I_{(X_i \geq 0)}} \theta + \frac{\sum_{i=1}^n I_{(X_i=-1)}} \theta \\ \\ \hat{\theta}_n &= \frac{\sum_{i=1}^n I_{(X_i=-1)} + \sum_{i=1}^n x_i I_{(X_i \geq 0)}}{\sum_{i=1}^n I_{(X_i=-1)} + 2 \sum_{i=1}^n I_{(X_i \geq 0)} + \sum_{i=1}^n x_i I_{(X_i \geq 0)}} \end{align} which differs from the result I'm given... Any help would be greatly appreciated. REPLY [3 votes]: \begin{align} L(\theta) & = \prod_{i=1}^n \theta^{I_{x_i=-1}} (1-\theta)^{2 I_{x_i\ge 0} } \theta^{x_i I_{x_i\ge 0}}. \\[10pt] \ell(\theta) = \log L(\theta) & = (\log\theta)\sum_{i=1}^n (I_{x_i=-1} + x_i I_{x_i \ge 0} ) + 2(\log(1-\theta)) \sum_{i=1}^n I_{x_i\ge0} \\[10pt] \ell\,'(\theta) & = \frac 1 \theta \sum_{i=1}^n (I_{x_i=-1} + x_i I_{x_i \ge 0}) - \frac 2 {1-\theta} \sum_{i=1}^n I_{x_i\ge 0} = \frac A \theta -2\frac B {1-\theta} \\[10pt] & = 0 \text{ if and only if } A(1-\theta) - 2B\theta = 0, \\[10pt] & \qquad \text{and that holds precisely if } A = 2B\theta+A\theta = (A+2B)\theta, \text{ so} \\[10pt] \theta & = \frac A {A+2B} = \frac{\sum_{i=1}^n (I_{x_i=-1} + x_i I_{x_i \ge 0})}{\sum_{i=1}^n (I_{x_i=-1} + x_i I_{x_i \ge 0}) + 2\sum_{i=1}^n I_{x_i\ge 0}}. \end{align}<|endoftext|> TITLE: Minimizing a quadratic-over-linear fractional function QUESTION [5 upvotes]: This is from the Convex Optimization book by Boyd and Vandenberghe. Show that $$ \min \ \frac{\|Ax-b\|_2^2}{c^T x + d} $$ $x \in \left\{x : c^T x +d > 0 \right\}$ has a minimizer $x^* = x_1 +t x_2$ where $x_1=(A^TA)^{-1}A^T b$, $x_2=(A^TA)^{-1}c$ and $t \in \mathbb{R}$ is obtained by solving a quadratic equation. From the structure of the solution, it seems like I am supposed to split the problem into two parts, but apart from that I don't really understad how to solve this. I tried to differentiate to find the minimizer, but I didn't get anything of this form. (In the problem before this, we had to show that f is closed, if that is relevant). REPLY [7 votes]: Let us rewrite the problem to a convex optimization problem by adding a variable $s$: $$\min \{ s||Ax-b||^2 : s = 1/(c^Tx + d) \}$$ and then substituting $y = xs$: $$\min \{ s||A(y/s)-b||^2 : (c^Ty + ds) = 1 \}$$ Note that the objective function is the perspective of a convex function, and is therefore convex. 
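Before writing down the optimality conditions, here is a small numerical sanity check of the claimed form of the minimizer (just a sketch with randomly generated data; the particular optimizer and tolerances are incidental):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n = 20, 4
A, b = rng.normal(size=(m, n)), rng.normal(size=m)
c, d = rng.normal(size=n), 5.0

def f(x):
    den = c @ x + d
    return np.inf if den <= 0 else np.linalg.norm(A @ x - b) ** 2 / den

x1 = np.linalg.solve(A.T @ A, A.T @ b)   # (A^T A)^{-1} A^T b
x2 = np.linalg.solve(A.T @ A, c)         # (A^T A)^{-1} c

# general-purpose minimization over all of R^n (x = 0 is feasible since d > 0)
res = minimize(f, x0=np.zeros(n), method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-14, "maxiter": 50000, "maxfev": 50000})
t = (res.x - x1) @ x2 / (x2 @ x2)        # project the offset from x1 onto the direction x2
print(np.linalg.norm(res.x - (x1 + t * x2)))   # small (up to solver accuracy): the minimizer lies on x1 + t*x2
```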
The KKT stationarity conditions for $y$ and $s$ read: $$2(A^TA(y/s)-A^Tb) + \lambda c = 0$$ $$-||A(y/s)||^2 + b^T b + \lambda d = 0$$ The first condition can be solved for $y/s$: $$x = \frac{y}{s} = (A^TA)^{-1}A^Tb-\frac{1}{2} \lambda (A^TA)^{-1} c$$ Your $t$ is now $-\lambda/2$. To find $\lambda$, consider the KKT stationarity condition for $s$, and plug in $s = 1/(c^Tx + d)$ to obtain the quadratic equation.<|endoftext|> TITLE: Can kurtosis measure peakedness? QUESTION [6 upvotes]: Wikipedia says kurtosis only measures tailedness but not peakedness. But I remember my teacher said several times that high excess kurtosis usually corresponds to fat tails AND a thin peak. High excess kurtosis accompanied by fat tails can easily be seen from the usual definition of kurtosis (fourth central moment). But what about peakedness? If kurtosis doesn't measure it, is there any statistic that can do the job? My Statistics textbook isn't clear about this part. REPLY [2 votes]: Kurtosis Kurtosis is the value $$E\left[{\left( \frac{X-\mu}{\sigma} \right)^4}\right],$$ the average/expectation of the fourth power of the standardized variable. It is a measure of the tendency of values to be spread far out over a large distance relative to the distance of one standard deviation. View in terms of the quantile function The following graphic might illustrate it intuitively. In this graphic, we express the expectation/mean of a function $h(x)$ (for instance $h(x) = x^4$ if we compute the 4-th moment) as an integral over the quantiles (with $f(x)$ the density and $Q(p)$ the quantile function): $$E_{X}[h(x)] = \int_{-\infty}^\infty h(x) f(x) dx = \int_0^1 h(Q(p)) dp $$ Let's call the quantile function for the squared distance $R^2(p)$. In the graphic we have plotted the quantile function $R^2(p)$ for the standard normal distribution (this actually matches the quantile function of a chi-squared distribution with 1 degree of freedom, which is the distribution of the square of a standard normal variable). This expresses the distribution of the squared distance from the mean. Now, in this view: The variance $\sigma^2$ is equal to the area under the curve. And if the variable $X$ is standardized, then the area should be equal to 1. In this image, we have stressed this by marking the areas above and below the line equal to 1. The areas of these two should be equal in order to have $\sigma^2 = 1$. The 4-th moment (and the kurtosis if the curve is normalized) is equal to the integral of the square of this function $R^2(p)$. Kurtosis dependency on tail So we see that the deviation of the kurtosis from 1 is much like the familiar difference between the mean of a square and the square of a mean. If the quantile function $R^2(p)$ varies a lot around its mean, then you will have a larger kurtosis value. The way in which the quantile function $R^2(p)$ varies will have a large influence. The kurtosis does not depend so much on the amount of the red area in the image, but more on whether this area is spread out over a range that includes large values (a few large values, which count more strongly than many small values).
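This tail-dominance is easy to see on simulated data. The sketch below (numpy only; the choice of distributions is arbitrary) splits the kurtosis $E[Z^4]$ into the parts contributed by $|Z|\le 1$ and $|Z|>1$:

```python
import numpy as np

rng = np.random.default_rng(1)

def kurtosis_split(sample):
    """Split the kurtosis E[Z^4] into the parts contributed by |Z| <= 1 and |Z| > 1."""
    z = (sample - sample.mean()) / sample.std()
    inner = np.mean(z**4 * (np.abs(z) <= 1))   # contribution of values within one standard deviation
    outer = np.mean(z**4 * (np.abs(z) > 1))    # contribution of values beyond one standard deviation
    return inner, outer

for name, sample in [("normal", rng.normal(size=10**6)),
                     ("Student t(10)", rng.standard_t(10, size=10**6))]:
    inner, outer = kurtosis_split(sample)
    print(name, round(inner, 3), round(outer, 3), round(inner + outer, 3))
# The totals come out near 3 (normal) and near 4 (t with 10 degrees of freedom), but in
# both cases the part coming from |Z| <= 1 stays well below 1: essentially all of the
# kurtosis is contributed by the values beyond one standard deviation.
```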
The image below shows the quantile function for a discrete variable with the mass function $$f(x) = \begin{cases} \hphantom{-} 0.1/(a-1) & \text{if} & x = \pm \sqrt{a} \\ -0.1/(a-1)+0.4 & \text{if} & x = \pm 1 \\ \hphantom{-}0.2 & \text{if} & x = 0 \\ \hphantom{-}0 & \text{else} \end{cases}$$ In this image you see that depending on the distribution of the values above $\sigma$ you can have a higher or lower kurtosis. The surface area needs to be the same, but you can spread it out over a few values with high value (large $a$) or over many values with a low value. A high kurtosis means that the values above $1\sigma$ are spread out a lot. Relationship with Westfall's theorem In this post on the statistics site I learned about some interesting theorems. Main Theorem: Let $Z_X = (X - \mu_X)/\sigma_X$ and let $\kappa(X) = E(Z_X^4)$ denote the kurtosis of $X$. Then for any distribution (discrete, continuous or mixed, which includes actual data via their discrete empirical distribution), $E\{Z_X^4 I(|Z_X| > 1)\}\le\kappa(X)\le E\{Z_X^4 I(|Z_X| > 1)\} +1$. and Refined Theorem: Assume $X$ is continuous and that the density of $Z_X^2$ is decreasing on [0,1]. Then the “+1” of the main theorem can be sharpened to “+0.5”. We can quickly and intuitively get these theorems from the representation in terms of the quantile function. (and also we can improve the refined theorem) In the below image we see how the computation of the kurtosis can be split up into two different parts. The values $<\sigma$ and the values $>\sigma$. We see that the contribution to the kurtosis of the values $<\sigma$ is very low. At most it can be 1 which is the case when the green area fills nearly the entire area below the line $1 \sigma$. With the refined theorem we assume that the density of $Z_X^2$ is decreasing on [0,1]. In that case the curve must be neccesarily below the diagonal that crosses with the point (0,0) and the point (x,1) where the quantile function equals 1. (in the image we have drawn this line in black). So with the conditions of the refined theorem the largest possible contribution from the values below $1\sigma$ will be $$\int_0^1 x^2 dx = 1/3$$ So, we could pose New Refined Theorem: Assume $X$ is continuous and that the density of $Z_X^2$ is decreasing on [0,1]. Then the “+1” of the main theorem can be sharpened to “+$1/3$”. Wrap up It is not a rule that higher kurtosis means peakedness. A high kurtosis stems from two 'sources' Many values below $1\sigma$ and many above $1\sigma$ (this does relate indirectly to peakedness; in order to have this discrepancy you need to have many values close to the mean) The values above $1\sigma$ are spread out over a large range. Instead of a many values a little bit above $1\sigma$ we have a few values a lot above $1\sigma$ So kurtosis relates to a combination of two aspects. This is why in general (and in practice) we observe high peakedness and high kurtosis together, but it should not be regarded as a rule that high kurtosis means high peakedness or vice versa. Peakedness If kurtosis doesn't measure it, is there any statistic that can do the job? My Statistics textbook isn't clear about this part. You could have a wide variety of measures, but broadly you could see peakedness as many values close to the mean (in the graphic a large green area). So the mass of the distribution within $\pm k \sigma$ might be a measure. And the value of $k$ will depend on the particular application (I actually can not think of any application for peakedness. 
I believe that people are more interested in the mass of the distribution outside $\pm k \sigma$ where $k$ is large)<|endoftext|> TITLE: Algorithm for Hensel's Lifting QUESTION [5 upvotes]: If I know the solution for $\;x^2 \equiv n \mod p\;$ (where $p$ is prime and $n$ is a quadratic residue modulo $p$), how can I find the solution of $\;x^2 \equiv n \mod p^2$, in an algorithmic way? REPLY [3 votes]: As you've mentioned, using Hensel's lifting lemma is the way to go here. First define $f(x) = x^2 - n$ for some fixed $n$. As we have a zero for $f$ modulo $p$, let it be $x_1$. Now let $x_2 = x_1 + pt$. By Hensel's lemma we have: $$f(x_2) \equiv f(x_1) + ptf'(x_1) \pmod {p^2}$$ Now equate the right-hand side to $0$ modulo $p^2$ and solve for $t$. Here's an example. Let $p=7, n = 2$; then we can take $x_1=3$ and $f(x) = x^2 - 2$. Now we have: $$f(x_1) + ptf'(x_1) \equiv 0 \pmod{p^2}$$ $$\frac{f(x_1)}{p} + tf'(x_1) \equiv 0 \pmod p$$ $$\frac 77 + 6t \equiv 0 \pmod p$$ $$6t \equiv 6 \pmod p \implies t \equiv 1 \pmod p$$ Therefore we can take $t=1$ and we have $x_2 = x_1 + pt = 3 + 1 \cdot 7 = 10$. And indeed $7^2 \mid 10^2 - 2 = 98$.<|endoftext|> TITLE: Distributivity of subspaces QUESTION [6 upvotes]: I need to either give a proof or find a counterexample to a statement: $$L+(M\cap N) = (L + M)\cap(L + N)$$ where $L$, $M$, $N$ are subspaces of a vector space $V$. I could do the $LHS\subseteq RHS$ proof, but I'm stuck with the backwards direction. I would be really grateful if someone could help me out. REPLY [12 votes]: Take $M=\mathbb R\times\{0\}$ and $N=\{0\}\times \mathbb R$. Then set $L=\{(x,x)\mid x\in\mathbb R\}$. Then $M\cap N=\{(0,0)\}$ (it should be obvious why), meaning that the left side is equal to $L+\{0\}=L$. But $L+M = L+N = \mathbb R^2$ (this should be easy to prove), meaning the right side is equal to $\mathbb R^2\cap \mathbb R^2 = \mathbb R^2$. Since $L\neq \mathbb R^2$, clearly the equality does not hold, and you can only conclude that $LHS\subseteq RHS$.<|endoftext|> TITLE: Find all function $ f:\Bbb R\to\Bbb R$ such that $ f(x+y)f(x-y)=(f(x)+f(y))^2-4x^2f(y)$. QUESTION [5 upvotes]: While doing some INMO questions, one entry went this way: Find all functions $ f:\Bbb R\to\Bbb R$ such that $f(x+y)f(x-y)=(f(x)+f(y))^2-4x^2f(y)$. I made an approach similar to this: on putting $x=0, y=0$ we get $$f(0+0)f(0-0)=(f(0)+f(0))^2-4\times0^2f(0)$$ $$f(0)^2=(2\times f(0))^2$$ which gives us $f(0)=0$. Then, on putting $x=1, y=1$, $$f(1+1)f(1-1)=(f(1)+f(1))^2-4\times1^2f(1)$$ $$f(2)f(0)=(2\times f(1))^2-4\times f(1)$$ $$4\times f(1)=4\times f(1)^2$$ which gives us $f(1)=0 $ or $f(1)=1$. From here, I can't go further. I think the method I am working on is quite right and will take me to the right answer. But the problem is that I can't find that right answer. I shall be thankful if you can provide me a hint or a complete solution. Thanks. SIDE NOTE: I have been using this method for a while (I got it from the answer to another post). Now I am thinking of switching; if you know some other method to solve such functional equations, please try to give your answer using that method. REPLY [2 votes]: HINT: First: $$x=y=0 \Rightarrow f^{2}(0)=4f^{2}(0) \Rightarrow f(0)=0$$ Second: $$x=y \Rightarrow 0=4f^{2}(x)-4x^{2}f(x) \Rightarrow f(x)[f(x)-x^{2}]=0 \tag{1}$$ If we check, we can see that $f(x)=0$ and $f(x)=x^{2}$ are solutions. Now you have to prove that those are the only ones. For that some ideas can help.
$$x=0 \Rightarrow f(y)[f(-y)-f(y)]=0 (2) $$ $$y=-x \Rightarrow 0=[f(x)+f(-x)]^{2}-4x^{2}f(-x)(3)$$ Suppose that we have a solution $f(x)$ such that for some $x_0$ we have $f(x_0) \ne 0$; then going back to $(1)$ and $(2)$ we get $f(x_0)=f(-x_0)=x_{0}^{2}$. You have to prove that $x_0=0.$<|endoftext|> TITLE: When Two Connections Determine the Same Geodesics QUESTION [12 upvotes]: It's my first question! I hope I'm correctly formatting it. I'm trying to prove that two connections $\nabla, \widetilde{\nabla}$ on a manifold determine the same geodesics iff their difference tensor is alternating. The difference tensor of two connections is the tensor $D(X,Y) = \widetilde{\nabla}_XY - \nabla_XY$, so this is saying that the geodesics are the same for the two connections iff $D(X,Y) = -D(Y,X)$ for all vector fields $X, Y$ on $M$. Since $D$ is a tensor it can be written as a sum of its symmetric and alternating parts, $D = S + A$. Since $A(X,X) \equiv 0$ we have $D(X,X) = S(X,X)$. So if $S(X,X) = 0$ then $D(X,X) = 0$ as well, and if $D(X,X)$ is always $0$, then $$ D(X+Y,X+Y) =S(X+Y, X+Y) = S(X,X) + S(Y,Y) + 2S(X,Y) = 2S(X,Y)$$ Then $S(X,Y) \equiv 0$. So it's enough to check that having the same geodesics is the same as $D(X,X) \equiv 0$. This observation was made in Spivak Vol. II Ch. 6, Addendum 1, but his proof that this latter condition is equivalent to the connections having the same geodesics doesn't make sense to me. Does anyone know how to show that if the connections have the same geodesics then the difference tensor is $0$ on the diagonal? That's the direction I'm stuck on. REPLY [6 votes]: Assume that the difference tensor $D$ vanishes on the diagonal, i.e. $D(X,X)=0$ for every vector field $X$. Let $\alpha$ be a geodesic with respect to one of the connections, say $\nabla$. Then $$\nabla_TT\equiv 0$$ where $T=\dot \alpha$. Using $D(T,T)=0$, we get $$\bar\nabla_TT=0$$ i.e. the connection $\bar\nabla$ has the same geodesic. Conversely, we assume that $\nabla$ and $\bar\nabla$ have the same geodesics. Let $X$ be any vector field. We will prove that $D(X,X)$ vanishes point-wise. Let $p\in M$ with a neighbourhood $U$ and let $\gamma$ be the geodesic with $\gamma(0)=p,\dot\gamma(0)=X_p$ (ODE theory guarantees that there is a unique such geodesic); then $$D(X_p,X_p)=(\bar\nabla_XX-\nabla_XX)_p=0$$ i.e. $D(X,X)$ vanishes at $p$ and consequently at every point.<|endoftext|> TITLE: Comparing LU or QR decompositions for solving least squares QUESTION [6 upvotes]: Let $X \in \mathbb R^{m\times n}$ with $m>n$. We aim to solve $y=X\beta$ where $\hat\beta$ is the least squares estimator. The least squares solution $\hat\beta = (X^TX)^{-1}X^Ty$ can be obtained using a QR decomposition of $X$ or an $LU$ decomposition of $X^TX$. The aim is to compare these. I noticed that we can use the Cholesky decomposition instead of $LU$, since $X^TX$ is symmetric and positive definite. Using $LU$ we have: $\hat\beta = (X^TX)^{-1}X^Ty=(LU)^{-1}X^Ty$; compute $a=X^Ty$, which is of order $O(2nm)$, then solve $Lb=a$ at cost $\sum_{k=1}^{n} (2k-1)$ and finally solve $U\hat\beta=b$ at the same cost of $\sum_{k=1}^{n} (2k-1)$. I didn't count the cost of computing $L$ and $U$. Using $QR$ we have: $\hat\beta = (X^TX)^{-1}X^Ty=((QR)^TQR)^{-1}R^TQ^Ty=R^{-1}Q^Ty$, where we compute $a=Q^Ty$ at cost $O(n^2)$ and then solve $R\hat\beta=a$ with cost $\sum_{k=1}^{n} (2k-1)$. Comparing the decompositions: It seems that the QR decomposition is much better than LU. I think the cost of computing QR is higher than LU, which is why we could prefer to use LU. On the other hand, if we are given the decompositions, we should use QR.
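For a concrete feel for the two routes, here is a minimal numerical sketch with NumPy (the sizes, data and variable names below are arbitrary illustrations, not taken from the question): it solves one least-squares problem via the normal equations with a Cholesky factor and via a QR factorization of $X$, and checks that the two agree.

```python
import numpy as np

# Minimal sketch (illustrative sizes/data): solve one least-squares problem
# via the normal equations (Cholesky factor of X^T X) and via QR of X.
rng = np.random.default_rng(0)
m, n = 200, 5
X = rng.normal(size=(m, n))
beta_true = np.arange(1, n + 1, dtype=float)
y = X @ beta_true + 0.01 * rng.normal(size=m)

# Normal equations: (X^T X) beta = X^T y, with X^T X = L L^T.
G = X.T @ X
c = X.T @ y
L = np.linalg.cholesky(G)
b = np.linalg.solve(L, c)            # forward substitution: L b = c
beta_chol = np.linalg.solve(L.T, b)  # back substitution: L^T beta = b

# QR route: X = Q R, then R beta = Q^T y.
Q, R = np.linalg.qr(X)
beta_qr = np.linalg.solve(R, Q.T @ y)

print(np.allclose(beta_chol, beta_qr))  # True: both give the same estimate
```

With the factorizations in hand, each triangular solve costs $O(n^2)$; the remaining differences are the cost of forming the factorizations and the fact that the conditioning of $X^TX$ is the square of that of $X$.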
$SVD$ decomposition: Is there any advantage to use SVD decomposition? REPLY [3 votes]: Your reasoning at the top is really odd. The LU decomposition is twice as fast as the standard QR decomposition and it will solve most systems. There are however pathologically ill-conditioned systems and non-square systems. That is where it will use the QR or SVD. The main reason for the SVD is it allows you to be selective about your condition number. There are many other decompositions. The Cholesky decomposition is twice as fast as the LU decomposition but only for positive definite Hermitian matrices. All of this neglects the sparsity of the matrix as well.<|endoftext|> TITLE: A nice pattern for the regularized beta function $I(\alpha^2,\frac{1}{4},\frac{1}{2})=\frac{1}{2^n}\ $? QUESTION [5 upvotes]: In this post, the problem was given integer/rational $N$, to solve for algebraic number $z$ in the equation, $$\begin{aligned}\frac{1}{N} &=I\left(z^2;\ a,b\right)\\[1.5mm] &= \frac{B\left(z^2;\ a,b\right)}{B\left(a,b\right)} \end{aligned} $$ using the beta function $B(a,b)$, incomplete beta $B(z;a,b)$ and regularized beta $I(z;a,b)$. It seems for some $a,b,$ there can be a pattern for the solutions. Given the equation for $n>1$, $$I\left(x_1^2;\ \tfrac14,\tfrac12\right)=\frac{1}{2^n}\tag1$$ First define $\color{blue}{\gamma = u+\sqrt{-1+u^2}},$ with fundamental unit $u=1+\sqrt{2}.\ $ Then, For $n=2$: $$x_1=x_2-\sqrt{1+x_2^2},\quad \color{blue}{x_2 =\gamma}$$ For $n=3$: $$x_1=x_2-\sqrt{1+x_2^2},\quad \color{green}{x_2 = x_3+\sqrt{-1+x_3^2},\quad x_3=x_4+\sqrt{1+x_4^2},}\quad \color{blue}{x_4 = \gamma}$$ For $n=4$: $$x_1=x_2-\sqrt{1+x_2^2},\quad x_2 = x_3+\sqrt{-1+x_3^2},\quad x_3=x_4+\sqrt{1+x_4^2},\\ \quad \color{green}{x_4 = x_5+\sqrt{-1+x_5^2},\quad x_5=x_6+\sqrt{1+x_6^2},}\quad \color{blue}{x_6 = \gamma}$$ and so on, where we add the same two nested layers (in green) each time and so end with even index $x_m$. Questions: Does this pattern really hold for all $2^n$ and $n>1?$ Why the regularity? What is the integral associated with $(1)$ similar to the ones in the post cited above? REPLY [2 votes]: We have $$ B\left(z;\ \tfrac14,\tfrac12\right)=4\sqrt[4]{z}\, _2F_1\left(\frac{1}{2},\frac{1}{4};\frac{5}{4};z\right). $$ By formula 2.1.15 from Erdelyi, "Higher transcendental functions", vol.I $$ _2F_1\left(\frac{1}{2},\frac{1}{4};\frac{5}{4};z\right)=\sqrt{\frac{1}{1-z}} \, _2F_1\left(\frac{1}{2},\frac{1}{4};\frac{5}{4};-\frac{4 z}{(1-z)^2}\right). $$ Since $z$ and $-\frac{4 z}{(1-z)^2}$ have different signs when $z$ is real we need to apply this formula one more time $$ \, _2F_1\left(\frac{1}{2},\frac{1}{4};\frac{5}{4};z\right)=\frac{1}{2}\sqrt[4]{\frac{16 (1-z)^2}{(z+1)^4}} \, _2F_1\left(\frac{1}{2},\frac{1}{4};\frac{5}{4};\frac{16 z (1-z)^2}{(z+1)^4}\right). $$ In terms of incomplete Beta function one gets $$ B\left(z;\ \tfrac14,\tfrac12\right)=\frac12 B\left(\tfrac{16 z (1-z)^2}{(z+1)^4};\ \tfrac14,\tfrac12\right). $$ I think this formula answers the question 1. From this formula one can work out the recursion for the argument.<|endoftext|> TITLE: If series is divergent will a constant also keep it diverging? QUESTION [5 upvotes]: If $\sum_{n=0}^{\infty} b_n$ diverges, and $c \in \mathbb{R}$ then does $\sum_{n=0}^{\infty} cb_n$ diverge? My answer: NO. Let $c=0$, then the sum is $\sum_{n=0}^{\infty} 0 = 0$. True conclusion? REPLY [2 votes]: Your answer is correct. However, if $c\neq 0$, then the sum does, indeed, diverge. 
This is because if $\sum_{n=0}^\infty b_n$ converges, then the sum $\sum_{n=0}^\infty a\cdot b_n$ also converges for any constant $a\in\mathbb R$. So, if the right-hand sum converged and $c\neq 0$, you could multiply it by $\frac1c$ and get that the left-hand sum must converge (therefore proving $\neg B\implies \neg A$, which also proves $A\implies B$). REPLY [2 votes]: Your conclusion is true. Now what if $c\in\mathbb{R}\backslash\{0\}$? The answer will be YES! $0$ is an exception. In fact you'll even have equivalence when $c\neq 0$: $\sum_{n=0}^\infty b_n$ converges if and only if $\sum_{n=0}^\infty cb_n$ converges.<|endoftext|> TITLE: The Projection Matrix is Equal to its Transpose QUESTION [6 upvotes]: By inspecting its formula one can see easily that the matrix for projection onto a subspace is equal to its transpose. But what is the underlying "geometric" reason for this equality? I have a hard time figuring out why it has to be so. EDIT: After reviewing the suggested links, it became more clear, but the reasoning was still escaping my intuition. However, I think that I can (based on the formal arguments in the links) put forward a rough argument. Basically, for any operator $A$ and two vectors $x$, $y$ $$\langle Ax,y\rangle=\langle x,A^Ty\rangle$$ For a vector $y$ in the orthogonal complement to the subspace $S$ we're projecting onto using the $P^S$ projection operator, $$\langle P^Sx,y\rangle=0=\langle x,(P^S)^Ty\rangle$$ Because we took any vector $x$, it means that $(P^S)^Ty=0$. As $y$ is in the orthogonal complement, $P^Sy=0$, so this means that $$(P^S)^Ty=0=P^Sy$$ (in plain English, for any vector in the orthogonal complement to $S$ the action of $P^S$ makes it 0, which is a symmetric action). For $y$ in $S$ and any $x$, $\langle P^Sx,y\rangle=\langle x,y\rangle$ and $P^Sy=y$ (by the properties of the projection operation), but also $\langle P^Sx,y\rangle=\langle x,(P^S)^Ty\rangle$ (true for any operator). Putting these together yields $$(P^S)^Ty=y=P^Sy.$$ (in plain English, for any vector in $S$ the action of $P^S$ is the identity, which is a symmetric action). To sum it up, this shows that the operator $P^S$ acts symmetrically both on elements of $S$ and on elements of its orthogonal complement. Since it is a linear operator, it therefore acts symmetrically on all vectors $x$. A big part of why this works (I think) is that the concept of projection implies a (finite-dimensional) inner product space, and that is much more structured than a simple (finite-dimensional) vector space. LAST EDIT: Even more clear after digesting the answer of Trial and Error. Given a subspace $\mathcal{M}$ to project onto, any vector $z_1$ can be written as the sum of two orthogonal components: $$z_1= (z_1-P_{\mathcal{M}}z_1) + P_{\mathcal{M}}z_1.$$ But the key geometric thing is that these two components belong to orthogonal subspaces, $\mathcal{M}$ and $\mathcal{M}^\perp$ (any vector from one subspace is perpendicular to any vector from the other subspace). The projection operator will leave the component in $\mathcal{M}$ unchanged and annihilate the component in $\mathcal{M}^\perp$. Because the subspaces themselves are orthogonal, the inner product of ANY two arbitrary vectors $z_1, z_2$, $$\langle z_1,z_2\rangle=\langle (z_1-P_{\mathcal{M}}z_1) + P_{\mathcal{M}}z_1,\,(z_2-P_{\mathcal{M}}z_2) + P_{\mathcal{M}}z_2\rangle,$$ becomes (by the orthogonality of $\mathcal{M}$ and $\mathcal{M}^\perp$): $$\langle z_1,z_2\rangle=\langle z_1-P_{\mathcal{M}}z_1,\,z_2-P_{\mathcal{M}}z_2\rangle+\langle P_{\mathcal{M}}z_1,\,P_{\mathcal{M}}z_2\rangle$$ But applying $P_{\mathcal{M}}$ annihilates/zeroes the first bracket and leaves the second bracket unchanged. This holds whether we apply it to $z_1$ or to $z_2$. Ergo the effect of $P_{\mathcal{M}}$ is the same whether applied to the first or second argument of the scalar product. So its matrix should be equal to its transpose.
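A quick numerical illustration of the same point (a sketch using NumPy; the matrix $A$ below is an arbitrary example): build the projector onto the column space of $A$ and check that it equals its transpose and is idempotent.

```python
import numpy as np

# Sketch: the orthogonal projector onto the column space of an arbitrary
# full-column-rank matrix A, P = A (A^T A)^{-1} A^T, is symmetric and idempotent.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 2))               # an arbitrary 2-dimensional subspace of R^5
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P, P.T))    # True: P equals its transpose
print(np.allclose(P @ P, P))  # True: P is idempotent
```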
The bolded text indicates key issues that I was not fully appreciating before. Thank you for your patience! REPLY [6 votes]: As you learned in Calculus, the orthogonal projection $P$ of a vector $x$ onto a subspace $\mathcal{M}$ is obtained by finding the unique $m \in \mathcal{M}$ such that $$ (x-m)\perp \mathcal{M}. \tag{1} $$ So the orthogonal projection operator $P_{\mathcal{M}}$ has the defining property that $(x-P_{\mathcal{M}}x)\perp \mathcal{M}$. And $(1)$ also gives $$ (x-P_{\mathcal{M}}x) \perp P_{\mathcal{M}}y,\;\;\; \forall x,y. $$ Consequently, $$ \langle P_{\mathcal{M}}x,y\rangle=\langle P_{\mathcal{M}}x,(y-P_{\mathcal{M}}y)+P_{\mathcal{M}}y\rangle= \langle P_{\mathcal{M}}x,P_{\mathcal{M}}y\rangle $$ From this it follows that $$ \langle P_{\mathcal{M}}x,y\rangle=\langle P_{\mathcal{M}}x,P_{\mathcal{M}}y\rangle = \langle x,P_{\mathcal{M}}y\rangle. $$ That's why orthogonal projection is always symmetric, whether you're working in a real or a complex space.<|endoftext|> TITLE: Approximating $\sqrt{2}$ in rational numbers QUESTION [5 upvotes]: Let a sequence of rational numbers be defined recursively as $x_{n+1} = (\frac{x_n}{2} + \frac{1}{x_n})$ with $x_1$ some arbitrary positive rational number. We know that, in the universe of real numbers, this sequence converges to $\sqrt{2}$. But suppose we don't know anything about real numbers. How do we show that ${x_n}^2$ gets arbitrarily close to $2$? I've already shown that ${x_n}^2 > 2$ and that the sequence is decreasing. But I'm having difficulty showing that ${x_n}^2$ gets as close to $2$ as we want using nothing but inequalities. Since we're assuming no knowledge of real numbers, I don't want to use things like the monotone convergence theorem, the least upper bound property, etc. This exercise is of interest to me because it can help explain the development of irrational numbers to a student who knows nothing about them. REPLY [2 votes]: The question raised by Vishal is interesting, so I have decided to give my contribution by answering in a detailed way a slight generalization of it. Precisely, I want to show how to approximate by rational numbers the square root of every positive rational number $a$ by proving that the sequence $\langle x_n \rangle_{n\in\mathbb{N}}$, defined as $$ x_{n+1}= \begin{cases} x_1 & n=0\\ \\ \dfrac{1}{2}\left(x_n+\dfrac{a}{x_n}\right) & n\geq 1 \end{cases} $$ is such that $x_n^2\to a$ as $n \to +\infty$ for any choice of the positive rational number $x_1$, i.e. for any $x_1>0$, $x_1\in\mathbb{Q}$.
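Before the algebra, a small numerical illustration (a sketch in Python with exact rational arithmetic; the starting value and the number of steps are arbitrary choices) shows $x_n^2-a$ shrinking rapidly while every $x_n$ stays in $\mathbb{Q}$.

```python
from fractions import Fraction

# Sketch: run x_{n+1} = (x_n + a/x_n)/2 in exact rational arithmetic and
# watch x_n^2 - a shrink, without ever leaving Q.
a = Fraction(2)
x = Fraction(3, 2)          # arbitrary positive rational starting value
for n in range(1, 7):
    x = (x + a / x) / 2
    print(n, x, float(x * x - a))
```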
First of all, by squaring both sides of the defining equation of $x_{n+1}$ for all $n\geq 1$ we obtain $$ x_{n+1}^2 = \frac{x_n^2}{4} + \frac{a}{2} + \frac{a^2}{4x_n^2}$$ Subtracting $a$ from both sides, we get $$ \begin{split} x_{n+1}^2 - a & = \frac{x_n^2}{4} - \frac{a}{2} + \frac{a^2}{4x_n^2}\\ & = \frac{1}{4}\left(x_n^2 - 2a + \frac{a^2}{x_n^2}\right)\\ & = \frac{1}{4}\left(x_n - \frac{a}{x_n} \right)^2 = \frac{(x_n^2 - a)^2}{4x_n^2}\geq 0 \end{split} $$ Specializing the equation for $n=1, 2$ we obtain $$ x_2^2 - a = \frac{(x_1^2 - a)^2}{4x_1^2} $$ and $$ \begin{split} x_3^2 - a & = \frac{(x_2^2 - a)^2}{4x_2^2} = \frac{(x_2^2 - a)}{4x_2^2}(x_2^2-a)\\ & = \frac{1}{4}\frac{(x_1^2 - a)^2}{4x_2^2x_1^2}(x_2^2-a) = \frac{1}{4}\frac{(x_1^2 - a)^2}{(x_1^2+a)^2}(x_2^2-a) \\ & \leq \frac{1}{4}(x_2^2-a) = \frac{1}{16}\frac{(x_1^2 - a)^2}{x_1^2} \end{split} $$ From the calculations above, it seems plausible to suppose that the following estimate holds: $$ 0 \leq x_n^2 - a \leq \frac{1}{2^{2(n-1)}} \frac{(x_1^2 - a)^2}{x_1^2}\quad \forall n\geq 2 $$ This is true, as can easily be shown by generalized induction. Proceeding to do so, we first note that for $n=2$ the estimate holds, as we have already shown above; then, assuming it to be true for $n$, we obtain $$ \begin{split} x_{n+1}^2 - a & = \frac{(x_n^2 - a)^2}{4x_n^2} = \frac{(x_n^2 - a)}{4x_n^2}(x_n^2-a)\\ & = \frac{1}{4}\frac{(x_{n-1}^2 - a)^2}{4x_n^2x_{n-1}^2}(x_n^2-a) = \frac{1}{4}\frac{(x_{n-1}^2 - a)^2}{(x_{n-1}^2+a)^2}(x_n^2-a) \\ & \leq \frac{1}{4}(x_n^2-a) \leq \frac{1}{4\cdot 2^{2(n-1)}}\frac{(x_1^2 - a)^2}{x_1^2} =\frac{1}{2^{2(n+1-1)}}\frac{(x_1^2 - a)^2}{x_1^2} \end{split} $$ Therefore the estimate holds true for every $n\geq2$ and this, by the sandwich theorem, implies $x_n^2\to a$ for any choice of the positive rational $x_1$. A few notes: The proof uses only elementary inequalities and the result of every step is in $\mathbb{Q}$, since this field is closed with respect to the four basic arithmetical operations. If we take $a=2$ we have the direct answer to the question of Vishal. However, despite being needed by the didactic aims of Vishal, the hypothesis $a, x_1\in\mathbb{Q}$ is not needed by the logical structure of the reasoning: everything works perfectly for any $a,x_1\in\mathbb{R}_+$. This is exactly what Emanuel Fischer asks the reader to do in his beautiful (even if flawed by many typos) "Intermediate Real Analysis" (1983, Springer Verlag, exercise III.8.8 page 139). However, since in that textbook the reals have been introduced earlier as Dedekind cuts, the formal development needed to solve the exercise is slightly simpler due to the fact that it is not needed to square the terms of the sequence in order to work inside $\mathbb{Q}$. An observation from the "approximation theorist" point of view: for $n\geq 2$ the approximation error $x_n^2-a$ is cut by at least a factor of four at every iteration, whatever the value of the initial value $x_1>0$ is.<|endoftext|> TITLE: Proving a function is well defined QUESTION [5 upvotes]: The question I'm working on asks: Let $J_4 =\{0,1,2,3\}$. Then $J_4 −\{0\}=\{1,2,3\}$. Student C tries to define a function S: $J_4 -\{0\}\to J_4 -\{0\}$ as follows: For each $x \in J_4 - \{0\}$, $S(x)$ is the number $y$ so that $(xy) \bmod 4 = 1$. Student F claims that $S$ is not well defined. Who is right: student C or student F? Justify your answer. I have several questions: What is the salience of $J_4 - \{0\}$? I see that it removes $0$ from the list of elements, but this notation confuses me.
I'm not sure exactly what it does, and it appears to be done twice (See function $S$). Does subscript in $J_4$ mean $\bmod 4$ anything in the set? The statement "$S(x)$ is the number $y$ so that $(xy) \bmod 4 = 1$" is also confusing me. $x$ remains the input and $y$ is the result of function $S$, right? To show that this is ill-defined would require showing that for an $x$, there are multiple $y$, making this not a function. To that it is well-defined I would have to do the opposite, right? For a problem like this, should I start off by just plugging numbers in and seeing what happens, or is there a systematic approach I should be aware of? REPLY [2 votes]: 1.) I take $J_4-\{0\}$ in this context to just mean $J_4 \setminus \{0\}$, which is a more common notation in set theory, i.e $J$ "minus" $\{0\}$. There does not seem to be any ambiguity here. 2.) No, this is just standard (more or less) notation for $J_n=\{0,1,2,...,n-1\}$, which will have $n$ elements. Modulo is then a relation that you could invoke on the set. So we could say that we are "counting in $\mathbb{Z}_4=\{0,1,2,3\}$". This would usually refer to counting modulo $4$, in which we only use the numbers in $\mathbb{Z}_4$. (To be formal one defines these things using equivalence classes, but that is somewhat out of scope.) 3.) I understand how it can sound confusing but yes, it is exactly as it is written (I assume of course, I did not write it). So $S(x)=y$. 4.) If you think about it for a while, what is $S(1)$? It can only be sent to any of the elements in $J_4\setminus \{0\}$, i.e to any of $1,2,3$. But what number(s) $y$ has the property that $(xy)=1 \mod 4$? In conclusion, that it is not well defined refers to that it is not defined for all elements of its domain, i.e $J_4\setminus \{0\}$ in this case. Hope this helps.<|endoftext|> TITLE: Combinatorics for a 3-d rotating automaton QUESTION [8 upvotes]: Let's suppose that we have some kind of special 3-dimensional rotating automaton. The automaton is capable to generate rotation about selected $X$ or $Y$ or $Z$ axis (in a current frame) in steps by only constant +$\dfrac{\pi}{6}$ angle (i.e. rotation can be generated only in one direction - reverse rotation is prohibited) so transition from matrix $R_{i-1}$ to $R_{i}$ (right-lower indices denote here states before and after a single step) is achieved with the use of formula: $R_{i}=Rot_{x,y,z}( \dfrac{\pi}{6})R_{i-1}$ Initial state is coded as the identity matrix $R_0=I$, all other states are described as rotation matrices in reference to the frame representing by this $I$ matrix. Questions: how many $n$ distinct states (coded in generated matrices) can be achieved for not limited number of steps. This full set of achievable states coded $\{^{1}R ,^{2}R ...{^{n}R} \}$ might be named to be a full space of rotating automaton (here left-upper indices should be somehow reasonably organized, but hard to say how - it's open issue) - all states and transitions between states can be, perhaps, visualized with the use of a graph how many distinct states can be generated by exactly $6$ steps in the automaton (...maybe there is a general formula for $n$ steps ?) by how many ways can be achieved multi-step rotation from $I$ to $I$ with the condition that on this trajectory of states the same one step transition ${^{j}R}{\rightarrow}{^{k}R}$ (if possible) is allowed only one time. (for example if it were only rotation about a single axis allowed - the number would be obviously $3$ i.e. 
three 12-step transitions, but in general case rotations about different axes can be mixed) REPLY [4 votes]: Note: this is a barely-better-than-nothing answer! How many different positions can be generated given an unlimited number of steps? The answer is: an infinite number of positions. I believe I have a proof, but it appears long and involved and full of potential pitfalls. But the essence lies in the impossibility of having a closed curve on the sphere if you alternate between a $\pi/6$ rotation along one axis, a rotation along a second axis, the "reverse" of the first (which you obtain with $11$ steps in the other direction) and the "reverse" of the second: think "up", "left", "down", "right". I believe anyone reasonably well-versed in spherical trigonometry should be able to show this. In fact, I believe one can prove that, even if one can use only $2$ of the admissible rotations, and actually for any "step" that is an angle other than an integer multiple of $\pi/2$, for any final $R$ and any arbitrarily small $\epsilon$ there is a sequence of rotations that brings the robot to some $R'$ such that the maximum difference between any element of $R$ and $R'$ is less than $\epsilon$. How many distinct positions can you generate with $6$ steps? Clearly no more than $3^6$. I believe that it's actually $3^6-3$, since based on the same arguments of point $1$ above the only possibility of ending up in the same position with a sequence of $6$ steps is to take two different sequences each resulting in a $\pi/2$ rotation (so, $3$ identical steps, followed by $3$ identical steps). Out of the $9$ possible cases, $3$ (with "identical" subsequences) lead to distinct positions, the remaining $6$ lead form $3$ pairs that lead to $3$ distinct positions. "By how many ways can be achieved multi-step rotation...?" I am not sure I understand the question correctly, but I believe that the answer is infinite, based on $1$. The idea is this: assume a "generalized move" $R$ is a sequence of $s$ identical rotation steps around the same axis, with $1\leq s \leq 11$. Denote by $-R$ the "reverse" of $R$, a sequence of $12-s$ identical steps around the same axis: obviously applying first $R$ and then $-R$ brings you back to the starting position without ever being in the same state twice in-between. Now assume you can reach from I a position R, and consider a sequence of generalized moves $R_1,...,R_n$ that is "minimal", i.e. there is no shorter sequence of generalized moves from I to R. Then the sequence $R_1,...,R_n,-R_n,...,-R_1$, brings you back to $I$, and never performs the same sequence of rotations between two identical positions twice. Note: I've abstracted and generalized questions 1 and 3 in the related set of questions that you can find here!<|endoftext|> TITLE: How to compute $\int\limits_{\mathbb{R}^3}\det D^2 u(\mathbf{r}) d\mathbf{r}$? QUESTION [9 upvotes]: Let $u\in C^3(\mathbb{R}^3)$ and $u$ is zero outside some bounded domain. I need to find the value of $$ \int\limits_{ \large\mathbb{R}^3}\det \mathrm D^2 u(\mathbf{r}) \, \mathrm d\mathbf{r} $$ I suspect it equals zero. I can show this for some specific cases. Here $\mathrm D u$ is a gradient of $u$, while $\mathrm D^2u$ is a Hessian. REPLY [6 votes]: Note that the Hessian matrix for $u: \mathbb{R}^3\to \mathbb{R}$ is the Jacobian of the gradient, $$\text D^2u = \text D(\text Du) = \text J(\nabla u )$$ Let $\text{D}u = v = (v_x,v_y,v_z)$. 
Then, since $u$ and $v$ are continuous on $\mathbb{R}^3$ and vanish outside of some bounded domain $\Omega$, they must also vanish on the boundary. In particular, $v|_{\partial\Omega}\equiv 0$. Recall that the Jacobian determinant is used to change coordinates in integration, since $$\det \text Dv \ \mathrm{d}x\mathrm{d}y\mathrm{d}z = \mathrm{d}v_x\mathrm{d}v_y\mathrm{d}v_z$$ for any $v:\mathbb{R}^3\to\mathbb{R}^3$ (and in particular, $v=\text Du$), and so $$\int_{\Omega}\det \text{D}^2u \ \mathrm{d}x\mathrm{d}y\mathrm{d}z = \int_{\Omega}\det \text D(\text{D}u) \ \mathrm{d}x\mathrm{d}y\mathrm{d}z = \int_{\Omega}\mathrm{d}v_x\mathrm{d}v_y\mathrm{d}v_z$$ By Gauss-Ostrogradsky Theorem, $$\int_{\partial\Omega}F(r(s,t))\|\partial_s r\times \partial_t r\| \ \mathrm{d}s\mathrm{d}t = \int_{\partial \Omega}F \ \mathrm{d}S = \int_\Omega \text{tr}\text DF \ \mathrm{d}V$$ where $F: \mathbb{R}^3\to\mathbb{R}^3$ is any continuously differentiable function and $r: I_s \times I_t \to\mathbb{R}^3$ is a parameterization of $\partial \Omega$. Let $F(v)=\frac{1}{3}v$ so that $\text DF = \frac{1}{3}\mathbf 1$ and so $\text{tr} \text DF = 1$. Then, for any $r$, $$F(r(s,t)) = \frac{1}{3}v|_{\partial\Omega} \equiv 0$$ which gives us the result $$\int_\Omega \det\text{D}^2u \ \mathrm{d}x\mathrm{d}y\mathrm{d}z = \int_{\Omega} \det \text D(\text{D}u) \ \mathrm{d}x\mathrm{d}y\mathrm{d}z = \int_\Omega \mathrm{d}v_x\mathrm{d}v_y\mathrm{d}v_z = \int_{\partial\Omega} 0\cdot\|\partial_s r\times \partial_t r\| \ \mathrm{d}s\mathrm{d}t = 0$$<|endoftext|> TITLE: Proving that a function is multiplicative. QUESTION [5 upvotes]: Let $f(x)$ be a polynomial with integral coefficients, and let $\psi(n)$ denote the numbers of values $f(0), f(1), ..., f(n-1)$ which are coprime to $n$. I must show that $\psi$ is multiplicative, meaning that: $$\psi(mn) = \psi(m) \cdot \psi(n)$$ assuming $\gcd(m,n)=1$. Furthermore, I must show that $$\psi(p^\alpha) = p^{\alpha-1} \cdot (p-b_p)$$ where $b_p$ is the number of integers $f(0), f(1), ..., f(p-1)$ which are divisible by the prime $p$. I thought that this proof would be similar to the proof of multiplicity for Euler's totient function, but I have not been able to make the connection. I think once I find a proof for the first part, maybe I will be better able to understand the second part. Any help is appreciated! REPLY [2 votes]: We start with a lemma. Lemma: Let $p$ be a prime number. Then for every $f(x)\in \mathbb{Z}[x]$ and $k\in \mathbb{Z}$ $$f(p+k)\equiv f(k) \pmod p.$$ Proof: Let's suppose that $f(x)=c_{0}x^n+\ldots +c_{1}x+c_{n}$, then $$f(p+k)=c_{0}(p+k)^n+\ldots c_{1}(p+k)+c_{n}\equiv c_{0}k^n+\ldots c_{1}k+c_{n}=f(k) \pmod p. \tag*{$\blacksquare$}$$ First, let's take $n=p^a$, we can make $p^a/p=p^{a-1}$ groups of $p$ elements in the set $\{f(0), f(1),\ldots, f(p-1), f(p),\ldots, f(2p-1),\ldots, f(p^a-p),\ldots, f(p^a-1)\}$. By the lemma each group has $b_{p}$ numbers which are divisible by $p$. Then, in total we have $p^a-p^{a-1}\cdot b_{p}=p^{a-1}(p-b_{p})$ numbers which aren't divisibles by $p$, i.e. coprime with $p^a$. So, $\psi(p^a)=p^{a-1}(p-b_{p})$. Now, let's suppose that $n=p_{1}^{a_{1}}\cdots p_{r}^{a_{r}}$, the idea is to apply the previous reasoning for each prime factor $p_{i}$, but to avoid repetitions, i.e., numbers which are multiple of more than one prime factor, we'll successively be subtracting the multiples of $p_{i}$, starting with $p_{1}$. 
Set $n=n_{0}$, and pick $p_{1}$, the number of multiples of $p_{1}$ is $$m_{1}=(p_{1}^{a_{1}}\cdots p_{r}^{a_{r}}/p_{1})\cdot b_{p_{1}}=p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}}b_{p_{1}},$$ then we put $n_{1}=n_{0}-m_{1}=p_{1}^{a_{1}}\cdots p_{r}^{a_{r}}-p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}}b_{p_{1}}=p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}}(p_{1}-b_{p_{1}})$. Now, for $n_{1}$ we count the number of multiples of $p_{2}$. As in the case of $p_{1}$ we have $m_{2}=p_{1}^{a_{1}-1}p_{2}^{a_{2}-1}\cdots p_{r}^{a_{r}}(p-b_{p_{1}})b_{p_{2}}$, and thus $$n_{2}=n_{1}-m_{2}=p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}}(p-b_{p})-p_{1}^{a_{1}-1}p_{2}^{a_{2}-1}\cdots p_{r}^{a_{r}}(p-b_{p_{1}})b_{p_{2}}=$$ $$p_{1}^{a_{1}-1}p_{2}^{a_{2}-1}\cdots p_{r}^{a_{r}}(p-b_{p_{1}})(p_{2}-b_{p_{2}}).$$ After applying the same process $r-2$ more times, we deduce that $$\psi(n)=n_{r}=n_{r-1}-m_{r}=$$ $$=p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}}(p_{1}-b_{p_{1}})\ldots (p_{r-1}-b_{p_{r-1}})-p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}-1}(p_{1}-b_{p_{1}})\ldots (p_{r-1}-b_{p_{r-1}})b_{p_{r}}=$$ $$=p_{1}^{a_{1}-1}\cdots p_{r}^{a_{r}-1}(p_{1}-b_{p_{1}})\ldots (p_{r}-b_{p_{r}}).$$ Finally, applying the last formula it's easy to deduce that if $\gcd(m,n)=1$, then $$\psi(mn)=\psi(m)\psi(n).$$ Note: If the process isn't very clear I recommend you to apply it to $n=30$.<|endoftext|> TITLE: Details of Spivak's Proof of Stokes' Theorem QUESTION [6 upvotes]: In Spivak's Calculus on Manifolds, the proof of Stokes Theorem on $\mathbb{R}^n$ begins as follows... It seems to me that there's something here which can be very confusing: When you pull back the $k-1$ form $f dx^1 \wedge ... \wedge \widehat{dx^i} \wedge ... \wedge dx^k$ along ${I^k}_{(i,\alpha)}$, the result is again a $k-1$ form, which should be integrated over a $(k-1)$-cube. However, in the line below, the integral is over $[0,1]^k$. It amounts to the same thing, since ${I^k}_{(i,\alpha)}^*(f dx^1 \wedge ... \wedge \widehat{dx^i} \wedge ... \wedge dx^k) = f(x^1, ..., x^{i-1},\alpha,x^i, ..., x^{k-1})\,dx^1 \wedge ... \wedge dx^{k-1}$ and then $$ \begin{aligned}& \int_{[0,1]^{k-1}}f(x^1, ..., x^{i-1},\alpha,x^i, ..., x^{k-1})\,dx^1 \wedge ... \wedge dx^{k-1} \\ = & \int_{[0,1]}\left(\int_{[0,1]^{k-1}}f(x^1, ..., x^{i-1},\alpha,x^i, ..., x^{k-1})\,dx^1 \wedge ... \wedge dx^{k-1}\right)dx^k \\ = & \int_{[0,1]^{k}}f(x^1, ..., x^{i-1},\alpha,x^i, ..., x^{k-1})\,dx^1 \wedge ... \wedge dx^k \\ = & \int_{[0,1]^{k}}f(x^1, ..., x^{i-1},\alpha,x^{i}, ..., x^{k-1})\,dx^1 ... dx^k \\ = & \int_{[0,1]^{k}}f(x^1, ..., x^{i-1},\alpha,x^{i+1}, ..., x^{k})\,dx^1 ... dx^k\end{aligned}$$ where the second line follows since the pulled back form is constant with respect to $x^k$ and the last line follows since we're working with the Riemann integral over $[0,1]^k$ so we're really just renaming variables. I think it's a bit of a stretch to ask the reader to 'note' that without any further indication as to why it's true. Spivak pulls a similar trick later on in the proof, which I noticed another StackExchange question on. After having run through the steps of the proof on a small example, I'm guessing that the reason for doing this is to avoid having to talk about renaming variables. So, my two questions are: Is there a simpler way to make sense of the 'note' which I addressed above? Am I correct in thinking that the extra integration is done to make the proof more concise and avoid discussion of renaming variables? Or is there some other reason I'm missing? 
REPLY [3 votes]: I agree that it's conceptually somewhat unsatisfying to turn the $(k-1)$-dimensional integrals into $k$-dimensional integrals, but it avoids all sorts of ugly notation. Note, for example, that in the second line of your second paragraph, you got it wrong: You should have written $$(I^k_{i,\alpha}){}^*f dx^1\wedge\dots\wedge\widehat{dx^i}\wedge\dots\wedge dx^k = f(x^1,\dots,x^{i-1},\alpha,x^{i+1},\dots,x^k)dx^1\wedge\dots\wedge\widehat{dx^i}\wedge\dots\wedge dx^k.$$ Rewriting all the integrals over the $k$-cube avoids the notational morass. (It's not a matter of "remaining" variables; it's a matter of notating which one is omitted. But most likely that's what you intended.)<|endoftext|> TITLE: What are some counter-intuitive results in mathematics that involve only finite objects? QUESTION [231 upvotes]: There are many counter-intuitive results in mathematics, some of which are listed here. However, most of these theorems involve infinite objects and one can argue that the reason these results seem counter-intuitive is our intuition not working properly for infinite objects. I am looking for examples of counter-intuitive theorems which involve only finite objects. Let me be clear about what I mean by "involving finite objects". The objects involved in the proposed examples should not contain an infinite amount of information. For example, a singleton consisting of a real number is a finite object, however, a real number simply encodes a sequence of natural numbers and hence contains an infinite amount of information. Thus the proposed examples should not mention any real numbers. I would prefer to have statements which do not mention infinite sets at all. An example of such a counter-intuitive theorem would be the existence of non-transitive dice. On the other hand, allowing examples of the form $\forall n\ P(n)$ or $\exists n\ P(n)$ where $n$ ranges over some countable set and $P$ does not mention infinite sets would provide more flexibility to get nice answers. What are some examples of such counter-intuitive theorems? REPLY [3 votes]: $()()$ is not a palindrome but $())($ is. Intuition tells us that if we can put a mirror in the centre and it reflects, then it's a palindrome. But because the mirror exchanges left brackets for right ones, our intuition deceives us in this particular instance.<|endoftext|> TITLE: What can you say about a mapping $f : \Bbb Z\to \Bbb Q$? QUESTION [7 upvotes]: Written with StackEdit. Which of the following can be true for a mapping $$ f : \Bbb Z \to \Bbb Q $$ A. It is bijective and increasing B. It is onto and decreasing C. It is bijective and satisfied $f(n) \ge 0$ if $n \le 0$ D. It has uncountable image Here's my attempt at finding a mapping that satisfies C(Following Method to prove countability of $\Bbb Q$ from Stephen Abbot ) Define $A_n = \{ \frac ab : a,b\ge 0, \gcd(a,b) = 1,a+b=n,b \ne 0 \}$ $B_n = \{ \frac {-a}{b} : a,b > 0, \gcd(a,b) = 1, a+b = n \} $ Mapping the negative integers and 0 to the set $A_n \forall n \in \Bbb N$ and positive integers to the set $B_n \ \forall n \ \in \Bbb N$ in the same manner as Stephen Abbot, the mapping will be surjective because the sets $A_n$ and $B_n$ would certainly cover $\Bbb Q$ and would be injective because no two sets have any elements in common. Correct Answer - C Source - Tata Institute of Fundamental Research Graduate Studies 2014 Is my proof correct? In any case, I can't imagine such a method to be required in this problem so is there a more 'easy' mapping? 
Also, why can't we have a mapping as desired in A and B? REPLY [4 votes]: A. False. Suppose $f\colon\mathbb{Z}\to\mathbb{Q}$ is bijective and increasing. Then $f(0)<f(1)$, and the infinitely many rationals strictly between $f(0)$ and $f(1)$ could only be images of integers strictly between $0$ and $1$, of which there are none; this contradicts surjectivity. B. False for the same reason, with the inequality reversed. C. True: let $g$ be a bijection onto $\mathbb{Q}_{>0}$ (positive rationals). Now define $$ f(n)=\begin{cases} -g(n-1) & \text{if $n>0$}\\[4px] 0 & \text{if $n=0$}\\[4px] g(-n+1) & \text{if $n<0$} \end{cases} $$ (note: $\mathbb{N}=\{0,1,2,\dotsc\}$).<|endoftext|> TITLE: Sum of all consecutive natural root differences on a given power QUESTION [10 upvotes]: I accidentally observed that $\sqrt{n} - \sqrt{n-1}$ tends to $0$ for higher values of $n$, so I've decided to try to sum it all, but that sum diverged. So I've tried to make it converge by giving it a power $m$. $$S_m=\sum_{n=1}^\infty (\sqrt{n} - \sqrt{n-1})^m$$ How would one calculate the values of this sum for a chosen $m\in\mathbb R$? Not just estimate them, but write them in a non-decimal form if possible, preferably using a better-converging formula. It converges if $m>2$. The values seem to tend to numbers with non-repeating decimals. Thanks to achille hui and his partial answer here, it looks like $S_m$ for odd values of $m$ is a linear combination of the Riemann zeta function at negative half-integer values: \begin{align} S_3 &\stackrel{}{=} -6\zeta(-\frac12) \approx 1.247317349864128...\\ S_5 &\stackrel{}{=} -40\zeta(-\frac32) \approx 1.019408075593322...\\ S_7 &\stackrel{}{=} -224\zeta(-\frac52) - 14\zeta(-\frac12) \approx 1.00261510344449... \end{align} If we decide to replace the constant $m$ with $n\times{k}$ where $k$ is a new constant, then we can talk about $S_k$, which converges if $k>0$: $$S_k=\sum_{n=1}^\infty (\sqrt{n} - \sqrt{n-1})^{nk}$$ I wonder if these values could also be expressed in a similar way to $S_m$. The values still tend to numbers with seemingly non-repeating decimals according to Wolfram Alpha: $$ S_1 \approx 1.20967597004937847717395464774494290 $$ Also notice that the functions of these sums are similar to the zeta function, $\color{blue}{S_m} \sim \color{red}{\zeta}$. But I think it's only due to the fact that they all approach $1$? REPLY [3 votes]: It does converge for $m > 2$, since $\sqrt{n} - \sqrt{n-1} \sim 1/(2\sqrt{n})$ and $\sum_n n^{-m/2}$ converges for $m > 2$.<|endoftext|> TITLE: Does $\sum\limits_{i=1}^{\infty}|a_i||x_i| < \infty$ whenever $\sum\limits_{i=1}^{\infty} |x_i| < \infty $ imply $(a_i)$ is bounded? QUESTION [9 upvotes]: Written with StackEdit. Suppose $(a_i)$ is a sequence in $\Bbb R$ such that $\sum\limits_{i=1}^{ \infty} |a_i||x_i| < \infty$ whenever $\sum\limits_{i=1}^{\infty} |x_i| < \infty$. Then is $(a_i)$ a bounded sequence? Look at the end of the question for the right answer. If the statement '$(a_i)$ is a properly divergent sequence implies that there exists some $k \in \Bbb N$ such that $\sum\limits_{i=1}^{\infty} {1/{a_i}}^k$ is convergent' were true, we could have easily proven $(a_i)$ is bounded by using subsequences, but since that is disproven by $\ln(n)$, can we use something around it? Like, can all the functions which do not satisfy the 'statement' I mentioned be considered as a special case of functions? Correct Answer - Yes, $(a_i)$ is bounded. Source - Tata Institute of Fundamental Research Graduate Studies 2013 REPLY [2 votes]: Hint: Look at this simple fact: If the positive series $\sum a_n$ diverges and $s_n=\sum\limits_{k\leqslant n}a_k$ then $\sum \frac{a_n}{s_n}$ diverges as well.
Hint2: Consider $x_n = \dfrac{1}{\sum\limits_{i=1}^{n}a_i}$<|endoftext|> TITLE: Easiest way to show $x^2 - 229y^2 = 12$ has no solutions in integers QUESTION [11 upvotes]: Asking Wolfram Alpha doesn't count, but for what it's worth, I have already done so. I have tried squares modulo a few different values. For example, modulo $36$, we have the possibility that $x^2 \equiv 13$ (e.g., if $x = 7$) and $229y^2 \equiv 1$ (e.g., if $y = 7$ also). This $13 - 1$ problem shows up in most of the other moduli I have tried. Is modular arithmetic the way to go, just that I haven't tried the right modulus yet, or is a different method the way to go? This goes to showing that $3$ is irreducible but not prime in $\mathcal{O}_{\textbf{Q}(\sqrt{229})}$. Since the fundamental unit is of norm $-1$, I don't need to worry about $x^2 - 229y^2 = -12$, and since the fundamental unit is a so-called "half-integer," I don't need to worry about $x^2 - 229y^2 = \pm 3$ either. REPLY [16 votes]: The continued fraction for $\sqrt{229}$ is $[15,\dot7,1,1,7,\dot{30}]$. The convergents to $\sqrt{229}$ derived from the continued fraction are $p/q=15/1,106/7,121/8,227/15,1710/113,\dots$ The values of $p^2-229q^2$ are $-4,15,-15,4,-1,\dots$ – these repeat (with changed sign), since the continued fraction is periodic. Now there's a theorem that says that if $p^2-Dq^2=c$ with $|c|<\sqrt D$ then $p/q$ is a convergent to the continued fraction of $\sqrt D$. We have $D=229$, $c=12$ so the condition holds, but no convergent gives us $p^2-229q^2=12$, so the equation has no solution. The theorem is stated at http://mathworld.wolfram.com/PellEquation.html but also in any textbook treatment of continued fractions and Pell's equation, where you'll also find details behind the other assertions I have made here. EDIT. Note that Dario Alpern has a solver for $ax^2+bxy+cy^2+dx+ey+f=0$. In Step-by-step mode, it tells you exactly what it does on any given problem. For this problem, it uses exactly the method I've used (although it's even sparser on detail than I have been). I had some trouble getting the solver to work, though – no luck on Safari, nor on Firefox, but it worked fine on Chrome. Dario has some other tools on his site that are worth knowing about.<|endoftext|> TITLE: Find $ \sum_{ (m,n) \neq (0,0)} \sqrt{m^2 + n^2}$ or $\zeta_{\mathbb{Q}[i]}(-1)$ QUESTION [6 upvotes]: I would like to compute $\zeta_{\mathbb{Q}[i]}(-1)$ - a Dedekind zeta function. Mimicking the computation for $\zeta(-1)$, we can observe the following diverges: $$ \frac{1}{4}\sum_{ (m,n) \neq (0,0)} \sqrt{m^2 + n^2} = 1+\sqrt{2}+2 + 2 \sqrt{5} + \sqrt{8} + \dots $$ and I would like to gives infinite divergent sum a finite value along the same line as these answers: Why does $1+2+3+\cdots = -\frac{1}{12}$? In particular there is Abel's theorem which I am going to misuse slightly. If $\sum a_n$ converges then: $$ \lim_{x \to 1^{-}} \sum a_n x^n = \sum a_n $$ which is a statement about continuity of the infinite series in $x$. Trying to make it work here. $$ \sum_{(m,n) \neq (0,0)} \sqrt{m^2 + n^2} \;x^{\sqrt{m^2 + n^2}} = \frac{d}{dx}\Bigg[\sum_{(m,n) \neq (0,0)} x^{\sqrt{m^2 + n^2}} \Bigg]$$ This is not so helpful as I now have a Puisieux series (what on earth is $x^\sqrt{2}$ ?) and there is no closed form. 
What about: $$ \sum_{(m,n) \neq (0,0)} \sqrt{m^2 + n^2} \;x^{m+n} = \frac{d}{dx}\bigg[\sum_{(m,n) \neq (0,0)} x^{m+n} \bigg]$$ This could converge as long as we have an estimate for the sum (this could be a separate strategy): $$ \sum_{m+n = N} \sqrt{m^2 + n^2} $$ maybe zeta-function regularization is our only option. The Dedekind function does have a Mellin transform $$ \sum_{(m,n) \neq (0,0)} \sqrt{m^2 + n^2} \;e^{t\sqrt{m^2 + n^2}} = \frac{d}{dt}\Bigg[\sum_{(m,n) \neq (0,0)} e^{t\sqrt{m^2 + n^2}} \Bigg]$$ similar to what I have found. So that zeta regularization and Abel regularization are kind of the same. Note As I've written it $\sum \sqrt{m^2 + n^2} = \zeta_{\mathbb{Q}(i)}(-\frac{1}{2})$ which I imagine should not attain any special value :-/ REPLY [6 votes]: You may find here a derivation of the identity : $$\tag{1}\sum_{(n,k)\neq (0,0)}\frac{1}{\left ( n^2+k^2 \right )^s}=4\,\zeta(s)\,\beta(s),\quad\Re(s)>1$$ with $\beta$ the Dirichlet beta function so that analytic continuation to the (out of bounds) value $\,s=-\frac 12\,$ should give you the (regularized) series : \begin{align} \tag{2}\frac{1}{4}\sum_{ (m,n) \neq (0,0)} \sqrt{m^2 + n^2} &=\zeta\left(-\frac 12\right)\,\beta\left(-\frac 12\right)\\ &\approx -0.0572060775943\\ \end{align}<|endoftext|> TITLE: On Dedekind Cuts and Decimal Expansions QUESTION [5 upvotes]: In grade school and high school, I was taught a real number is a number with a decimal expansion--that is, a finite sequence of digits followed by a decimal point followed by an infinite sequence of digits. When I moved on to studying analysis, I was introduced to the Dedekind cut construction of the real numbers, and then proved every real number could be expressed as a decimal expansion. Now presumably, the real numbers could also be rigorously constructed as decimal expansions, very similarly to the Cauchy sequence construction. Is there a particular reason why, when Dedekind performed his original construction of the reals, he chose to use cuts rather than decimal expansions (or base 2 expansions for that matter)? Is there some unforeseen difficulty in rigorously constructing the reals by means of decimal expansions? REPLY [4 votes]: One problem is that multiple decimal representations correspond to the same real number. Of course, this is easily solved, though. I think the real point was foundational: defining reals as decimal expansions is defining them as sequences of rationals (the Cauchy definition is via equivalence classes of sequences of rationals - the decimals approach picks out a "canonical" Cauchy sequence for a given real). By contrast, Dedekind cuts define a real as a set of rationals. On the philosophical side, sets are slightly simpler than sequences, and Dedekind was very interested in developing the foundations of mathematics. Dedekind's definition is also more natural in that it doesn't fix a base: so it really defines the real numbers without making any arbitrary choices.<|endoftext|> TITLE: Number of intersections of $n-2$ dimensional spheres inside $S^{n-1}$ QUESTION [5 upvotes]: Consider the sphere $S^{n-1} = \{ (x_1,\ldots, x_n)\in\mathbb{R}^n:\ x_1^2+\ldots +x_n^2=1 \}$ and denote by $S_i$ (for $i=1\ldots n$) the $n-2$ dimensional subsphere of $S^{n-1}$ orthogonal to $e_i$. More precisely, $S_i = \{ (x_1,\ldots, x_{i-1},0,x_{i+1},\ldots, x_n):\ x_1^2+\ldots+x_{i-1}^2+x_{i+1}^2+\ldots+x_n^2=1 \}$. The image below gives an illustration in the 3 dimensional case. 
In dimension 3, it's easy to see the number of intersections among the $S_i$'s. We have that $\sharp (S_1\cap S_2) = \sharp(S_1\cap S_3) = \sharp(S_2\cap S_3) = 2,$ $\sharp(S_1\cap S_2\cap S_3) = 0$. I'm interested in the number of intersections in the general case. My guess is the number of intersections will be a power of 2, depending on how many $S_i$'s I am intersecting. I don't know how to make these calculations and couldn't find any article or textbook about this. My last hope is to share my problem here. Thank you! REPLY [4 votes]: $\newcommand{\Reals}{\mathbf{R}}$Generally, $0 \leq k \leq n$ distinct coordinate hyperplanes in $\Reals^{n}$ mutually intersect in a subspace of dimension $n - k$, whose intersection with the unit sphere $S^{n-1}$ is a great sphere $S^{n-k-1}$. The number of points in this set is equal to: Infinity if $k < n - 1$; Two if $k = n - 1$; Zero if $k = n$. For example, the intersection of the unit $3$-sphere $S^{3} \subset \Reals^{4}$ with the hyperplanes $\{x_{1} = 0\}$ and $\{x_{2} = 0\}$ is the unit circle lying in the $(x_{3}, x_{4})$-plane, which contains infinitely many points.<|endoftext|> TITLE: What does this $0$ mean in Wolfram|Alpha's application of the chain rule? QUESTION [6 upvotes]: If I enter this command to differentiate $(2+3x)^4$ I get the step-by-step output displayed here. The relevant part of the output is as follows: Possible derivation: $\frac{d}{d x}\left((2+3x)^4\right)$ Using the chain rule, $\frac{d}{d x}\left((3x+2)^4\right) = \frac{d u^4}{d u}0$, where $u=3x+2$ and $\frac{d}{d u}\left(u^4\right)=4u^3$: $4(2+3x)^3\left(\frac{d}{d x}(2+3x)\right)$ I picked here a very simple example that could easily be done by hand. However, Wolfram|Alpha would use the same notation even if I picked a more complex example. I understand in basic terms what needs to be done to reach a solution, but I don't understand what the $0$ is doing in the notation. REPLY [2 votes]: My guess would be a bug in their software. The actual calculation on Wolfram Alpha is most probably done by a Mathematica kernel in the background. My guess is that the $du/dx$ that is to be shown at that point is erroneously passed as an expression to the kernel as something to evaluate. Trying to differentiate u rather than u[x], the kernel then interprets u as a constant, of which the derivative of course is $0$. But of course that's only an educated guess.<|endoftext|> TITLE: Does a glass of water sing because of the SO(2) symmetry? QUESTION [11 upvotes]: This might very well be crucially flawed reasoning, but I think it has to have something true behind it. I was trying to explain basic ideas of the representation of Lie groups to an 11-year-old girl who asked what I was studying. What I wanted to explain was the relation between special functions and symmetries. The mathematical thing I wanted to explain was that if we have a group $G$ acting on a space $X$, and we look at the space of infinitely differentiable functions on $X$, i.e. $\mathcal C^\infty(X)$, then there is a natural representation of $G$ on $C^\infty(X)$. So I thought about the simplest example I had in mind, which was the case of the circle $G = S^1$ (which has $SO(2)$ symmetry). Then the representations are given by the harmonic analysis of Fourier series.
To explain this I said to consider a glass of water, which has cylindrical symmetry ("if I rotate the glass you cannot say how much I rotated the glass, so it has a symmetry"); then the vibrations and deformations of the edge of the glass are functions on $S^1$ that can be classified into harmonics... That's why - I concluded - glasses are used to sing with water... I said that, but really I'm not sure at all if it's actually the case. I mean that I'm not sure to what extent my reasoning was correct. Is the role of the water just to annihilate every representation but one (or some), which gets excited, and is that why the glass emits only one definite sound? Do you think the reasoning has a tragic flaw somewhere? To what extent is it valid? REPLY [2 votes]: Well, we can certainly have singing/ringing from non-axisymmetric objects, cf. e.g. this and this Phys.SE post and links therein. So $SO(2)$ symmetry is not necessary in that sense. However, according to e.g. Ref. 1, it is apparently a good mathematical model to consider a wineglass as having topology $S^1\times I$, where the interval $I$ has 1 free and 1 fixed endpoint. The circle $S^1$ has $SO(2)$ symmetry and can be analyzed via Fourier series. The interval $I$ typically has no symmetry. The lowest mode has 2 nodes along $S^1$ and 0 nodes along $I$. There are also higher harmonics/overtones. References: Jundt, Radu, Fort, Duda, Vach & Fletcher, Vibrational modes of partly filled wine glasses, J. Acoust. Soc. Am. 119 (2006) 3793.<|endoftext|> TITLE: Prove that two matrices commute iff the square of the matrices commute QUESTION [7 upvotes]: In my textbook there is a task in which I have to prove the relation \begin{equation} AB=BA\Leftrightarrow A^2B^2=B^2A^2. \end{equation} For ($\Rightarrow$) it is easy \begin{equation} AB=BA\Rightarrow (AB)^2=(BA)^2\Rightarrow ABAB=BABA\Rightarrow BBAA=AABB. \end{equation} But how do I prove ($\Leftarrow$)? REPLY [2 votes]: The implication '$\Leftarrow$' is so obviously false it surprises me that one should even ask this question. Though commutation of matrices can arise in many ways, one of the most simple ways is when one of the matrices is a scalar matrix (multiple of the identity). So if '$\Leftarrow$' were true, it would mean at least that whenever $A^2$ is a scalar matrix then $A$ commutes with every other matrix $B$; this clearly cannot be true. There is a multitude of kinds of matrices whose square is scalar without any reason for the matrix itself to commute with all other matrices: the matrix of any reflection operation ($A^2=I$), that of a rotation by a quarter turn ($A^2=-I$), or a nilpotent matrix of index$~2$ (i.e., $A\neq0$ but $A^2=0$). These give many choices for a counterexample. (In fact the only way $A$ can commute with all other matrices is for $A$ to be scalar itself, but you don't need to know this fact to find counterexamples to '$\Leftarrow$'.)<|endoftext|> TITLE: Inner product in dual Hilbert space QUESTION [5 upvotes]: Let $H^*$ be the dual space of a Hilbert space $H$. Then an inner product is defined as $$(f,g)=(J^{-1}f,J^{-1}g),$$ where $f,g\in H^*$ and $J\colon H\to H^*$ is the canonical isomorphism. I want to prove that $$\|f\|=\|J^{-1}f\|=\sqrt{(J^{-1}f,J^{-1}f)}=\sqrt{(f,f)}.$$ Any ideas on how to approach this proof?
REPLY [2 votes]: $$ \|f\|_{H^*} = \sup_{\|x\|_H\le 1} f(x) = \sup_{\|x\|_H\le 1} (J^{-1}f,x)_H = \sup_{\|g\|_{H^*}\le 1} (J^{-1}f,J^{-1}g)_H = \|J^{-1}f\|_H $$ first equality: definition of operator norm, second: use isomorphism, third: $J$ is isometric, fourth: Cauchy-Schwarz.<|endoftext|> TITLE: Understanding the orientable double cover QUESTION [14 upvotes]: Definition: if $M$ is a smooth manifold, define the orientable double cover of $M$ by: $$\widetilde{M}:=\{(p, o_p)\mid p\in M, o_p\in\{\text{orientations on }T_pM\}\}$$ together with the function $\pi:\widetilde{M}\to M$ with $\pi((p,o_p))=p$. There are three things I'm trying to understand about $\widetilde{M}$: What is its differentiable structure? Why is $\widetilde{M}$ orientable? Why is the connectedness of $\widetilde{M}$ equivalent to the non-orientability of $M$? Here's where I'm at: first, for the topology of $\widetilde{M}$, one may define $\widetilde{U}\subset\widetilde{M}$ as open $\Leftrightarrow \exists U\subset M$ open with $$\widetilde{U}=\{(p,o_p)\mid p\in U, o_p\in\{\text{orientations on }T_pM\}$$ Now I'm trying to figure out some chart $(\widetilde{U},\widetilde{\phi})$ at $(p,o_p)$ based on $(\phi, U)$ at $p$. I've tried this: \begin{align*} \widetilde{\phi}:\widetilde{U}&\to\mathbb{R}^n\\ (p, o_p)&\mapsto \phi(p) \end{align*} But that obviously doesn't work because it is not even injective. Somehow I have to involve the orientation $o_p$ in the definition, but I really don't know how to do it. About the orientability, I guess it will have something to do with the orientability of the atlas $\{(\widetilde{U}_{\alpha}, \widetilde{\phi}_{\alpha})\}$, but since I can't figure out the definition of $\widetilde{\phi}$, I'm stuck. Now for the connectedness of $\widetilde{M}$ and non-orientability of $M$, that I have no idea. REPLY [21 votes]: Almost $2$ years later, I'll give a complete answer to my own question. Step 1 (Topology of $\widetilde{M}$): Take an atlas $\{(U_\alpha,\varphi_\alpha)\}_{\alpha\in\Lambda}$ such that $\{U_\alpha\}_{\alpha\in\Lambda}$ is a countable basis for $M$. Define the following subsets of $\widetilde{M}$: $$U_\alpha^+:=\left\{(p,o_p)\in\widetilde{M}\mid p\in U_\alpha,\, o_p=\left[\left.\frac{\partial }{\partial\varphi_\alpha^1}\right|_p,...,\left.\frac{\partial }{\partial\varphi_\alpha^n}\right|_p\right]\right\},$$ $$U_\alpha^-:=\left\{(p,o_p)\in\widetilde{M}\mid p\in U_\alpha,\,o_p=-\left[\left.\frac{\partial }{\partial\varphi_\alpha^1}\right|_p,...,\left.\frac{\partial }{\partial\varphi_\alpha^n}\right|_p\right]\right\}.$$ We define the topology of $\widetilde{M}$ as the one generated by the basis $\{U_\alpha^+,U_\alpha^-\}_{\alpha\in\Lambda}$. This is a countable basis since $\Lambda$ is countable. In order to check that this topology is Hausdorff we only need to use the fact that the topology of $M$ is Hausdorff. We prove, in addition, that this makes $\pi$ a continuous, open map (in fact, a double covering). Indeed, notice that for any $\alpha\in\Lambda$ we have $\pi^{-1}(U_\alpha)=U_\alpha^+\cup U_\alpha^-$ and $\pi(U_\alpha^\pm)=U_\alpha$. Since $\{U_\alpha\}_{\alpha\in\Lambda}$ is a basis for $M$ and $\{U_\alpha^+,U_\alpha^-\}_{\alpha\in\Lambda}$ is a basis for $\widetilde{M}$, consequently $\pi$ is continuous and open. 
Moreover, for an arbitrary $p\in M$, any open set $U_\alpha$ containing $p$ is such that $\pi^{-1}(U_\alpha)=U_\alpha^+\cup U_\alpha^-$ (disjoint union) and $\pi|_{U_\alpha^\pm}:U_\alpha^\pm\to U_\alpha$ is a homeomorphism, which shows that $\pi$ is a double covering. Step 2 (Differentiable Structure of $\widetilde{M}$): Define $\varphi_\alpha^+:U^+_\alpha\to \varphi_\alpha(U_\alpha)\subset\mathbb{R}^n$ by $\varphi^+_\alpha=\varphi_\alpha\circ\pi|_{U_\alpha^+}$ and, similarly, $\varphi_\alpha^-:U^-_\alpha\to\varphi_\alpha(U_\alpha)\subset\mathbb{R}^n$ by $\varphi^-_\alpha=\varphi_\alpha\circ\pi|_{U_\alpha^-}$. Both $\varphi_\alpha^+,\varphi_\alpha^-$ are homeomorphisms, because $\varphi_\alpha$ and $\pi|_{U_\alpha^\pm}$ are homeomorphisms. Moreover: \begin{align*} \varphi_\alpha^\pm\circ(\varphi_\beta^\pm)^{-1}(x_1,...,x_n)&=\varphi_\alpha^\pm\left(\underbrace{\varphi_\beta^{-1}(x_1,...,x_n)}_{=:p},\pm\left[\left.\frac{\partial }{\partial\varphi_\beta^1}\right|_p,...,\left.\frac{\partial }{\partial\varphi_\beta^n}\right|_p\right]\right)\\ &=\underbrace{\varphi_\alpha\circ\varphi_\beta^{-1}}_{\text{smooth}}(x_1,...,x_n). \end{align*} (the upper indexes $\pm$ are not relevant to this argument) This shows that the atlas $\{(U_\alpha^+,\varphi_\alpha^+),(U_\alpha^-,\varphi_\alpha^-)\}_{\alpha\in\Lambda}$ is compatible, which makes $\widetilde{M}$ a smooth manifold. This also makes $\pi$ a local diffeomorphism, since $\pi|_{U_\alpha^\pm}=\varphi_\alpha^{-1}\circ\varphi_\alpha^\pm$ and $\varphi_\alpha,\varphi_\alpha^\pm$ are diffeomorphisms. Step 3 (Orientability of $\widetilde{M}$): Let's construct a pointwise orientation $O:(p,o_p)\mapsto O_{(p,o_p)}$ on $\widetilde{M}$. Take an arbitrary $(p,o_p)\in\widetilde{M}$. Since $\pi$ is a local diffeomorphism, $(d\pi)_{(p,o_p)}$ is a bijective linear transformation and we may find a unique $O_{(p,o_p)}$ which corresponds to $o_p$ via $d\pi$. More precisely, define $O_{(p,o_p)}:=[(d\pi)_{(p,o_p)}^{-1}e_1,...,(d\pi)_{(p,o_p)}^{-1}e_n]$, where $\{e_1,...,e_n\}$ is any basis for $T_pM$ with $o_p=[e_1,...,e_n]$. We show that $O$ is continuous. Notice that for a neighbourhood $U_\alpha$ of $p$, we either have $(p,o_p)\in U_\alpha^+$, in which case $O_{(q,o_q)}=\left[\left.\frac{\partial }{\partial(\varphi_\alpha^+)^1}\right|_{(q,o_q)},...,\left.\frac{\partial }{\partial(\varphi_\alpha^+)^n}\right|_{(q,o_q)}\right]$ for all $(q,o_q)\in U_\alpha^+$, or $(p,o_p)\in U_\alpha^-$, in which case $O_{(q,o_q)}=\left[\left.\frac{\partial }{\partial(\varphi_\alpha^-)^1}\right|_{(q,o_q)},...,\left.\frac{\partial }{\partial(\varphi_\alpha^-)^n}\right|_{(q,o_q)}\right]$ for all $(q,o_q)\in U_\alpha^-$. Since $(p,o_p)$ is arbitrary, this means that $O$ is continuous. Thus $\widetilde{M}$ is orientable. Step 4 (Orientability of $M$ vs. Connectedness of $\widetilde{M}$): Suppose $\widetilde{M}$ is disconnected. Since $\pi$ is a double cover, this means that $\widetilde{M}=U\cup V$, where $U,V$ are disjoint open subsets such that both $\pi|_U:U\to M$ and $\pi|_V:V\to M$ are diffeomorphisms. As $\widetilde{M}$ is orientable, in particular $U$ is orientable, so $M$ inherits an orientation from $U$ via $\pi|_U$. Conversely, suppose $M$ is orientable and take an oriented atlas $\{U_\alpha,\varphi_\alpha\}_{\alpha\in\Lambda}$. We show that $\widetilde{M}$ is the disjoint union of the open sets $\bigcup_\alpha U_\alpha^+$ and $\bigcup_\alpha U_\alpha^-$, which means that $\widetilde{M}$ is disconnected. 
Assume by contradiction that $U_\alpha^+\cap U_\beta^-\neq \emptyset$ for some $\alpha,\beta\in\Lambda$. If $(p,o_p)\in U_\alpha^+\cap U_\beta^-$, this means that $p\in U_\alpha\cap U_\beta$ with $o_p=\left[\left.\frac{\partial}{\partial \varphi_\alpha^1}\right|_p,...,\left.\frac{\partial}{\partial \varphi_\alpha^n}\right|_p\right]=$ $-\left[\left.\frac{\partial}{\partial \varphi_\beta^1}\right|_p,...,\left.\frac{\partial}{\partial \varphi_\beta^n}\right|_p\right]$, therefore $\det(D(\varphi_\alpha\circ\varphi_\beta^{-1})(\varphi_\beta(p)))<0$ (absurd, since the atlas is oriented). $_\blacksquare$<|endoftext|> TITLE: How to get inverse of formula for sum of integers from 1 to n? QUESTION [11 upvotes]: I know very well that the sum of integers from $1$ to $n$ is $\dfrac{n\times(n+1)}2$. What I'm interested in today, and cannot find a solution for, is performing the opposite operation. Let $m = \dfrac{n^2 + n} 2$. Knowing the value of $m$, how do I figure out the value of $n$? I could easily program a solution but I'd much prefer an algebraic one. REPLY [11 votes]: You have got that $m = \dfrac{n^2 + n} 2$ which will give you $2m=n(n+1)$. You can make a quadratic equation $n^2+n-2m=0$. On solving the quadratic equation you get that $n=\frac{-1 \pm\sqrt{1+8m}}{2}$. Now solve this (as you know the $m$, you can easily find $n$) and eliminate the negative solution (As $n$ can not be negative).<|endoftext|> TITLE: When can an algebraic number be approximated by a $p$-adic number? QUESTION [6 upvotes]: Let $F$ be an algebraic function field in one variable over the finite field $\mathbb{F}_{p}$. In particular, $F$ is not perfect. Let $a \in F-F^p$ and $$f(Y)=Y^p - a \in F[Y]$$ be a purely inseparable (and irreducible) polynomial. Let $\mathcal{P}$ be a place of $F$, so we can consider the $\mathcal{P}$-adic distance. I have two related questions: 1) Could a $\mathcal{P}$-adic element (an element of the completion of $F$ at $\mathcal{P}$) be a root of $f$? Hensel's lemma does not apply here since $f' \equiv 0$ in characteristic $p$. 2) If not, how well can an element of $F$ approximate a root of $f$ with respect to some choice of distance? The quantity I am interested in is $$\max_{x \in F_{\mathcal{P}}} \quad \upsilon_{\mathcal{P}}(x^p-a)$$ Can we say something about $\max_{x \in F_{\mathcal{P}}} \upsilon_{\mathcal{P}}(x^p-a)$ for "most" places $\mathcal{P}$? After all, for most places $\mathcal{P}$, $\upsilon_{\mathcal{P}}(a)=0$, so let us ignore the finitely many places that are zeros or poles of $a$. REPLY [2 votes]: A nice question. The answer to (1) is No. You have a global field over the constant field $\Bbb F_p$, and I’ll call it $K$ instead of $F$ if you don’t mind. In adjoining a (the) root of the inseparable polynomial $f(Y)=Y^p-a$, you are adjoining the $p$-th root of $a$, and your field is a (pure) inseparable extension of degree $p$, and thus is $K^{1/p}$. In asking whether an element of any completion of $K$ could be a (the) root of $f$, you are asking whether $a$ itself is a $p$-th power in any completion of $K$. In other words, you are asking whether, in a completion $K_{\mathcal P}$, $a$ can possibly be a $p$-th power, in other words an element of $(K_{\mathcal P})^p$. We may as well agree what a field $K_{\mathcal P}$ looks like: it’ll be Laurent series in a uniformizer $t$, but with coefficients from the residue field of $\mathcal P$, necessarily a finite extension of $\Bbb F_p$, and so some finite field $\kappa$. Of course $t$ may be taken to be an element of $K$. 
Now consider the two inseparable extensions $K\supset K^p$ and $K_{\mathcal P}\supset(K_{\mathcal P})^p$, both of degree $p$. For the latter extension, $\{1,t,t^2,\cdots,t^{p-1}\}$ is certainly a good basis, and indeed it's a good basis for $K$ over $K^p$ as well, since $t\notin K^p$. Since $a\notin K^p$, when we express it as a linear combination of the basis elements $t^i$ with coefficients in $K^p$, say $$a=\sum_{i=0}^{p-1}\alpha_it^i\,,$$ at least one of the $\alpha_i$ with $1\le i\le p-1$ is nonzero. Since each $\alpha_i$ lies in $K^p\subseteq(K_{\mathcal P})^p$ and $\{1,t,\ldots,t^{p-1}\}$ is also a basis for $K_{\mathcal P}$ over $(K_{\mathcal P})^p$, the same expression shows that $a$ is not a $p$-th power in $K_{\mathcal P}$, so no completion of $K$ contains a root of $f$.<|endoftext|> TITLE: Axis of Symmetry for a General Parabola QUESTION [5 upvotes]: Given a general parabola $$(Ax+Cy)^2+Dx+Ey+F=0$$ what is the axis of symmetry in the form $ax+by+c=0$? It is possible of course to first work out the angle of rotation such that $xy$ and $y^2$ terms disappear, in order to get an upright parabola $y=px^2+qx+r$ and proceed from there. This may involve some messy trigonometric manipulations. Could there be another approach perhaps, considering only quadratic and linear equations? Addendum From the solution (swapped) by Meet Taraviya and some graphical testing, the equation for the axis of symmetry is Axis of Symmetry: $$\color{red}{Ax+Cy+\frac {AD+CE}{2(A^2+C^2)}=0}$$ which is quite neat. Note that the result is independent of $F$. Awaiting further details on the derivation. Addendum 2 Here is an interesting question on MSE on a similar topic. Addendum 3 (added 23 May 2018) Tangent at Vertex: $$Cx-Ay+\frac {(A^2+C^2)(F-k^2)}{CD-AE}=0$$ where $k=\frac {AD+CE}{2(A^2+C^2)}$. Note that the parabola can also be written as $$\underbrace{Cx-Ay+d}_{\text{Tangent at Vertex if $=0$}} =m\;\big(\underbrace{Ax+Cy+k}_{\text{Axis of Symmetry if $=0$}}\big)^2$$ where $$m=\frac {A^2+C^2}{AE-CD}$$ and $d=\frac {(A^2+C^2)(F-k^2)}{CD-AE}$. See Desmos implementation here. REPLY [7 votes]: Write the equation as:- $$(Ax+Cy+t)^2+(D-2At)x+(E-2Ct)y+F-t^2=0$$ The choice of $t$ is made such that $A\cdot(D-2At)+C\cdot(E-2Ct)=0$. Then $Ax+Cy+t=0$ is the symmetry axis of the parabola. Also, the line $(D-2At)x+(E-2Ct)y+F-t^2=0$ is the tangent at the vertex of the parabola. Explanation :- Interpret $y^2=4ax$ as:- $$(Distance \space from \space x=0)^2=4a\cdot (Distance \space from \space y=0)$$ Note that $y=0$ is the symmetry axis of the parabola and $x=0$ is the tangent at the vertex. Also, they are perpendicular to each other. (This explains why $A\cdot(D-2At)+C\cdot(E-2Ct)=0$ must hold for these lines to play those roles; the condition is equivalent to $m_1m_2=-1$.) This property holds true for a general parabola. Thus a parabola can be represented as:- $$(Distance \space from \space L_1)^2=4a\cdot (Distance \space from \space L_2)$$ where $L_1$ and $L_2$ are the symmetry axis and the tangent at the vertex of the parabola, respectively.<|endoftext|> TITLE: How to understand the projective compactification of a vector bundle? QUESTION [8 upvotes]: The following question confused me a bit: Given a rank $n$ vector bundle or locally free sheaf $\mathcal{E}$ on $X$, each fiber of this vector bundle is a vector space of dimension $n$. Therefore, we can compactify it into a projective space $\mathbb{P}^n$. And if we do this compatibly for every fiber we get a $\mathbb{P}^n$-projective bundle. Now my question is whether this projective bundle is $\mathbb{P}(\mathcal{E}\oplus\mathcal{O}_X)$ or $\mathbb{P}(\mathcal{E}^{\vee}\oplus\mathcal{O}_X)$. Here by $\mathbb{P}(\mathcal{F})$ for some locally free sheaf $\mathcal{F}$ I mean the projective bundle of hyperplanes (instead of the projective bundle of lines).
At first I thought the answer should be $\mathbb{P}(\mathcal{E}\oplus\mathcal{O}_X)$. Then something seems to be wrong. If I consider the zero section of the vector bundle $ \mathcal{E}$, it can also be viewed as a section of the projective bundle $\mathbb{P}(\mathcal{E}\oplus\mathcal{O}_X)$. Then it should corresponds to the surjection $\mathcal{E}\oplus\mathcal{O}_X \rightarrow \mathcal{O}_X\rightarrow 0$ by sending $\mathcal{E}$ to $0$ because what else it can be. However if I consider a concrete example, namely the blowing up of $\mathbb{P}^2$ at a point, this is not the case. The blow-up can be viewed as a $\mathbb{P}^1$-projective bundle (i.e. ruled surface) over $\mathbb{P}^1$. Let $X=\mathbb{P}^1$. Then the resulting space of the blow-up is $\mathbb{P}(\mathcal{O}_X(1)\oplus\mathcal{O}_X)$. Therefore, we can also see it as the projective compactification of the line bundle $\mathcal{O}_X(1)$. But then the surjection $$ \mathcal{O}_X(1)\oplus\mathcal{O}_X\rightarrow \mathcal{O}_X\rightarrow 0 $$ actually determines the section at infinity, which is the exceptional divisor of the blow-up. Instead, for any section $s\in H^0(X,\mathcal{O}_X(1))$, there is a natural map $$ \mathcal{O}_X\rightarrow \mathcal{O}_X(1) $$ given by multiplication of $s$. Therefore, when considered as a section of $\mathbb{P}(\mathcal{O}_X(1)\oplus\mathcal{O}_X)$, $s$ is determined by the induced surjection $$ \mathcal{O}_X(1)\oplus\mathcal{O}_X\rightarrow \mathcal{O}_X(1)\rightarrow 0. $$ This example makes me believe that the projective compactification of $\mathcal{E}$ is actually $\mathbb{P}(\mathcal{E}^{\vee}\oplus\mathcal{O}_X)$. Now everything seems to be consistent. Let $s\in H^0(X,\mathcal{E})$. Then we have a map $$ \mathcal{O}_X\rightarrow\mathcal{E} $$ given by multiplication of $s$. Take the dual we have a map $$ \mathcal{E}^{\vee}\rightarrow \mathcal{O}_X. $$ And the induced surjection $$ \mathcal{E}^{\vee}\oplus\mathcal{O}_X\rightarrow \mathcal{O}_X \rightarrow 0 $$ defines the section $s$. It seems a little bit weird that the projective compactification of $\mathcal{E}$ might be $\mathbb{P}(\mathcal{E}^{\vee}\oplus\mathcal{O}_X)$ instead of $\mathbb{P}(\mathcal{E}\oplus\mathcal{O}_X)$. Is there any explanation for this (if I am not completely talking about nonsense)? My guess is that it should have something to do with the fact I am considering projective bundle of hyperplanes. According to my previous experience, if some results involving projective bundle differs by a dual, it is caused by the two different conventions whether it is projective bundle of lines or hyperplanes. REPLY [11 votes]: First let me end the suspense: the projective closure of the vector bundle $\mathbb V(\mathcal E)$ associated to a locally free sheaf $\mathcal E$ on the scheme $X$ is $$\widehat {\mathbb V(\mathcal E)}=\mathbb P(\mathcal E\oplus \mathcal O_X)\quad (\bigstar)$$ (EGA II,Proposition (8.4.2), page 168). And now here are some explanations relating to this confusing subject: Given a quasicoherent sheaf $\mathcal E$ on a scheme $X$, Grothendieck associates to it an $X$-scheme $\pi:\mathbb V(\mathcal E)\to X$ by defining $\mathbb V(\mathcal E)=Spec(\mathbb S(\mathcal E))$, where $\mathbb S(\mathcal E)$ is the Symmetric Algebra sheaf associated to the sheaf of $\mathcal O_X$-Modules $\mathcal E$. As with all morphisms of schemes one can associate to $\pi$ the $\mathcal O_X$-sheaf $\mathcal S$ of its sections, with $\mathcal S(U)$ consisting of the morphisms $s:U\to \mathbb V(E)$ such that $\pi \circ s=Id_U$. 
The unfortunate result of these definitions however is that $\mathcal S=\mathcal {E}^\vee$, the source of much confusion ! So if you have a geometric vector bundle $E$ on $X$ with associated sheaf of sections $\mathcal E$, beware that $E=\mathbb V(\mathcal E^\vee)$. And the projective closure of your $X$-scheme $ E$ is, according to formula $(\bigstar)$, the $X$-scheme $$\widehat {E}=\widehat {\mathbb V(\mathcal E^\vee)}=\mathbb P(\mathcal E^\vee \oplus \mathcal O_X)$$<|endoftext|> TITLE: Compatible germs and the espace étalé QUESTION [5 upvotes]: The following is a confusion I'm having that I cannot find answers to anywhere. If this question has already been asked, I apologise, but I couldn't find any answers after some pretty extensive searching. I know this is four questions, but I think really it's just one (i.e. how do I understand these two concepts in light of each other). After starting to read Vakil's The Rising Sea (which is fantastic, by the way), I have one big confusion. There is the concept of compatible germs and also the concept of the étalé space. They seem very linked, but I can't quite pin down how. Edit: question. In the comments and answers there has been plenty of help with the first and last question, so it's really just the two questions in bold that I'm left with now :) Here's what I've come up with after thinking about this some more, as a more concrete version of the remaining questions (hopefully): we know that taking sections of $p\colon\sqcup_{x\in X}\mathcal{F}_x\to X$ gives us the sheafification of $\mathcal{F}$, as does taking compatible germs. So is there an association between compatible germs and sections $\sigma$ of $p$, e.g. a bijection between the two? Let $\mathcal{F}$ be a sheaf (of sets) on a topological space $X$, and $U$ an open set of $X$. Here are some facts/definitions (largely from Vakil's The Rising Sea): The natural map $\varphi\colon\mathcal{F}(U)\to\prod_{x\in U}\mathcal{F}_x$ is injective. An element $(s_x)_{x\in U}\in\prod_{x\in U}\mathcal{F}_x$ is a collection of compatible germs if any of the following equivalent properties hold: for all $x\in U$ there exists a neighbourhood $U_x\subset U$ and a section $f\in\mathcal{F}(U_x)$ such that for all $y\in U$ we have $s_y=f_y$ (where $f_y$ is the germ of $f$ at $y$); $(s_x)_{x\in U}$ is the image of a section $f$ under the map $\varphi$ (i.e. the above condition holds but with $U_x=U$ for all $x$). The espace étalé $\Lambda(\mathcal{F})$ associated to $\mathcal{F}$ (or more generally any presheaf) is constructed as follows: as a set, $\Lambda(\mathcal{F})=\coprod_{x\in X}\mathcal{F}_x$; as a topological space, the basis for the open sets of $\Lambda(\mathcal{F})$ is given by the $\{V_{U,\,f}\mid U\in\mathsf{Op}(X), f\in\mathcal{F}(U)\}$ where $V_{U,\,f}=(f_x)_{x\in U}$; as an étalé space, the local homeomorphism is given by projection, i.e. $p\colon\Lambda(\mathcal{F})\to X$ acts as $f_x\mapsto x$. The sheaf $\Gamma(p\colon E\to X)$ associated to a continuous map $p\colon E\to X$ acts on open sets as follows: $\Gamma(p\colon E\to X)(U)=\{\sigma\colon U\to E \mid p\circ\sigma=\mathrm{id}_U\}$. Sheafification, which can be constructed by taking only compatible germs, is just $\Gamma\Lambda$. (The last fact is emphasised because it seems to me like it should be the thing that ties everything together.) Questions: Is all of the above correct? How can we think of compatible germs in terms of the étalé space of a (pre)sheaf? 
I am almost certain that I have read somewhere it is equivalent to the continuity of the sections $\sigma$ or something similar, but I can't find this anywhere. It seems like a collection of germs is compatible if and only if it is open in $\Lambda(\mathcal{F})$, but this doesn't sound right to me (or at least not the whole picture), especially when you ask... ...why does the germ map $\varphi$ use the product of sets while the étalé space uses the coproduct? Does this mean we can't link the two concepts? Is there a less confusing notation for elements of $\prod_{x\in U}\mathcal{F}(U)$? Writing $(s_x)_{x\in U}$ always looks to me like we take one section $s$ and look at all of its germs (i.e. compatibility!), but writing something like $(s_x^{(x)})_{x\in U}$ (trying to emphasise that the section that we take the germ of varies with the point we're taking the germ at) seems quite cumbersome (and also something I've never seen!). REPLY [2 votes]: As about 2, of course with these definitions a basis for the étalé space topology is exactly made up of compatible germs (they just satisfy the second definition). About 3, I think it has already been cleared up: the elements of $\prod_{x\in X} \mathcal{F}_x$ are sequences of germs $(g_x)_{x\in S}$, while elements in $\coprod_{x\in X}\mathcal{F}_x$ are just germs $g_x$, with a label attached to remind the point they come from. I think there is no way to avoid the confusion in 4, except maybe to reserve a special notation for compatible germs, but notations are already quite heavy in these topics. In practice, the nature of the object you are considering is made clear from the context. What is the use of this strange object? Well, it is an older point of view over sheaf theory which retains somewhat more geometric intuition than the current modern definition. You can view a sheaf $\mathcal{F}$ on a topological space $X$ as the triple $(X,\Lambda,p)$, where $\Lambda$ is a topological space and $p:\Lambda \longrightarrow X$ is a local homemorphism; in fact, just define $\Lambda$ as the étalé space and $p$ as above. It gives some advantages also in topoi theory: see for instance this MO thread.<|endoftext|> TITLE: When do we use hidden induction? QUESTION [22 upvotes]: If I'm correct, hidden induction is when we use something along the lines of "etc..." in a proof by induction. Are there any examples of when this would be appropriate (or when it's not appropriate but used anyway)? REPLY [4 votes]: Hidden induction happens a lot in cases where you go backwards from $n$ to $1$, using some kind of reduction argument. For example, the proof that every number can be written as a product of primes: Let $n$ be some number. If it's prime, then we're done. Otherwise it can be written as $ab$, with $a, b < n$. Again, if both $a$ or $b$ are prime, we're done, otherwise they can be broken up in the same way. Since the factors are getting smaller and smaller, this process must stop eventually, but the only way it can stop is if one of all of the numbers involved are prime. Induction serves to tighten up the structure of the argument, replacing a vague "this process must stop" with an explicit invocation of an axiom: Suppose every number less than $n$ can be written as a product of primes. If $n$ is prime, we're done. Otherwise, $n=ab$ with $a, b < n$, so $a$ and $b$ can be written as products of primes. 
Therefore $n$ can be written as a product of primes.<|endoftext|> TITLE: Exercise 4.9, Chapter I, in Hartshorne QUESTION [5 upvotes]: Let $X$ be a projective variety of dimension $r$ in $\mathbf{P}^n$ with $n\geq r+2$. Show that for suitable choice of $P\notin X$, and a linear $\mathbf{P}^{n-1}\subseteq \mathbf{P}^n$, the projection from $P$ to $\mathbf{P}^{n-1}$ induces a birational morphism of $X$ onto its image $X'\subseteq \mathbf{P}^{n-1}$. My way: W.L.O.G., assume that $X\setminus U_0\neq\emptyset$. Since $X$ is a projective variety, then $K(X)\cong S(X)_{(0)}$, which implies that $K(X)=k(x_1/x_0,\dots,x_n/x_0)$. Since $\dim X=r$, by Theorem 4.8A and Theorem 4.7A on page 27 in Hartshorne, then W.L.O.G., we can assume that $x_1/x_0,\dots,x_r/x_0$ is a separating transcendence base for $K(X)$ over $k$, which implies that $x_{r+1}/x_0,\dots,x_{n}/x_0$ are separable over $k(x_1/x_0,\dots,x_r/x_0)$. By Theorem 4.6A on page 27 in Hartshorne, $K(X)=k(x_1/x_0,\dots,x_r/x_0)[y]$, where $y$ is a $k(x_1/x_0,\dots,x_r/x_0)$-linear combination of $x_{r+1}/x_0,\dots,x_n/x_0$. Now I do not how to continue. REPLY [6 votes]: I have rewritten this answer. We actually need a stronger version of Hartshorne's Theorem 4.6A (the theorem of the primitive element). Theorem 4.6A$^\star$. Let $L$ be a finite separable extension field of a field $K$, and suppose that $K$ contains an infinite subset $S$. Then, there is an element $\alpha \in L$ which generates $L$ as an extension field of $K$. Furthermore, if $\beta_1,\beta_2,\ldots,\beta_n$ is any set of generators of $L$ over $K$, then $\alpha$ can be taken to be a linnear combination $$\alpha = c_1\beta_1 + c_2\beta_2 + \cdots + c_n\beta_n$$ of the $\beta_i$ with coefficients $c_i \in S$. Proof. This follows from the proof of [Zariski–Samuel, Ch. II, §9, Thm. 19], which uses Kronecker's "method of indeterminates," but we rewrite their proof below. Consider the field extension $$L \subseteq L(X,X_1,X_2,\ldots,X_n)$$ of $L$, where $X,X_1,X_2,\ldots,X_n$ are a set of indeterminates, and consider the subfields \begin{align*} K^\star &= K(X_1,X_2,\ldots,X_n)\\ L^\star &= L(X_1,X_2,\ldots,X_n) \end{align*} in $L(X,X_1,X_2,\ldots,X_n)$. Then, $L^\star = K^\star(\beta_1,\beta_1,\ldots,\beta_n)$, and $L^\star$ is a finite separable extension of $K^\star$ since the $\beta_i$ are separable over $K$, and hence also separable over $K^\star$ (see [Zariski–Samuel, Ch. II, §5, Lem. 2]). Consider the element $$\beta^\star = X_1\beta_1 + X_2\beta_2 + \cdots + X_n\beta_n \in L^\star.\tag{1}\label{eq:zs1}$$ Let $F(X)$ be the minimal polynomial of $\beta^\star$ in $K^\star[X]$. The coefficients of $F(X)$ are rational functions of $X_1,X_2,\ldots,X_n$ with coefficients in $K$; let $g(X_1,X_2,\ldots,X_n) \in K[X_1,X_2,\ldots,X_n]$ be a common denominator of these rational functions. Then, $$g(X_1,X_2,\ldots,X_n) \cdot F(X) = f(X,X_1,X_2,\ldots,X_n) \in K[X,X_1,X_2,\ldots,X_n],$$ and we have $$f(\beta^\star,X_1,X_2,\ldots,X_n) = 0.\tag{2}\label{eq:zs2}$$ Let $$G(X_1,X_2,\ldots,X_n) = f(X_1\beta_1+X_2\beta_2+\cdots+X_n\beta_n,X_1,X_2,\ldots,X_n).\tag{3}\label{eq:zs3}$$ Then, $G(X_1,X_2,\ldots,X_n)$ is a polynomial in $X_1,X_2,\ldots,X_n$ with coefficients in $L$, and \eqref{eq:zs2} says $G(X_1,X_2,\ldots,X_n) = 0$. Thus, all partial derivatives $\partial G/\partial X_i$ are zero for $i \in \{1,2,\ldots,n\}$. 
By \eqref{eq:zs3}, we then have $$\beta_i \cdot f'(\beta^\star,X_1,X_2,\ldots,X_n) + f_i(\beta^*,X_1,X_2,\ldots,X_n) = 0\tag{4}\label{eq:zs4}$$ for every $i \in \{1,2,\ldots,n\}$, where \begin{align*} f'(X,X_1,X_2,\ldots,X_n) &= \frac{\partial f(X,X_1,X_2,\ldots,X_n)}{\partial X},\\ f_i(X,X_1,X_2,\ldots,X_n) &= \frac{\partial f(X,X_1,X_2,\ldots,X_n)}{\partial X_i}. \end{align*} The left-hand side in each equation \eqref{eq:zs4} is a polynomial in $L[X_1,X_2,\ldots,X_n]$ by \eqref{eq:zs1}, and hence is the zero polynomial. Thus, the equations \eqref{eq:zs4} remain valid if we substitute for $X_1,X_2,\ldots,X_n$ any elements of $K$. On the other hand, we have $$f'(X,X_1,X_2,\ldots,X_n) = g(X_1,X_2,\ldots,X_n)\,F'(X)$$ where $F'(X) = dF/dX$, and hence $$f'(\beta^\star,X_1,X_2,\ldots,X_n) \ne 0,$$ since $\beta^\star$ is separable over $K^\star$ and therefore $F'(\beta^\star) \ne 0$. Thus, $f'(\beta^\star,X_1,X_2,\ldots,X_n)$ is a nonzero polynomial in $L[X_1,X_2,\ldots,X_n]$. Since $S \subseteq L$ and $S$ is an infinite subset, we can find elements $c_1,c_2,\ldots,c_n \in S$ such that $(c_1,c_2,\ldots,c_n)$ is not a zero of that polynomial [Zariski–Samuel, Ch. I, §18, Thm. 14]. Setting $$\beta = c_1\beta_1 + c_2\beta_2 + \cdots + c_n\beta_n,$$ we have that $$f'(\beta,c_1,c_2,\ldots,c_n) \ne 0\tag{5}\label{eq:zs5}$$ and $$\beta_i \, f'(\beta,c_1,c_2,\ldots,c_n) + f_i(\beta,c_1,c_2,\ldots,c_n) = 0\tag{6}\label{eq:zs6}$$ for every $i \in \{1,2,\ldots,n\}$. The equation \eqref{eq:zs6} and the inequality \eqref{eq:zs5} imply that $\beta_i \in K(\beta)$, and since $\beta \in L$, we therefore see that $L = K(\beta)$. This completes the proof of the theorem. $\blacksquare$ We now prove Hartshorne's exercise, which we restate below. Exercise [Hartshorne, Ch. I, Exer. 4.9]. Let $X$ be a projective variety of dimension $r$ in $\mathbf{P}^n$, with $n\geq r+2$. Show that for a suitable choice of $P \notin X$, and a linear $\mathbf{P}^{n-1} \subseteq \mathbf{P}^n$, the projection from $P$ to $\mathbf{P}^{n-1}$ induces a birational morphism of $X$ onto its image $X' \subseteq \mathbf{P}^{n-1}$. Proof. Let $k$ denote the ground field over which $X$ is defined. After permuting coordinates, we may assume without loss of generality that $X\cap U_0\neq\emptyset$. Then, the images of the rational functions $x_i/x_0$ generate $K(X)$ over $k$. Since $k$ is algebraically closed, we see the extension $K(X)/k$ is separably generated by [Hartshorne, Ch. I, Thm. 4.8A]. Since $\dim X = r$, after possible permutation of coordinates, we have that $x_1/x_0,x_2/x_0,\ldots,x_r/x_0$ form a separating transcendence basis for $K(X)$ over $k$ by [Hartshorne, Ch. I, Thm. 4.7A] to give the chain $$k \subseteq k(x_1/x_0,x_2/x_0,\ldots,x_r/x_0) \subseteq K(X)$$ of field extensions. Next, by setting $S = k$ in the theorem of the primitive element (Theorem 4.6A$^\star$ above), we see that $K(X)$ is generated by $$\alpha = \sum_{i=r+1}^n c_i\frac{x_i}{x_0}$$ where $c_i \in k$ for every $i$. After a linear change of coordinates, we may assume that $\alpha = x_{r+1}/x_0$. Now consider the map \begin{align*} \mathbf{P}^n &\dashrightarrow \mathbf{P}^n\\ [x_0:\cdots:x_{n-1}:x_n] &\mapsto [x_0:\cdots:x_{n-1}:0] \end{align*} This is the projection away from the point $P = Z(x_0,x_1,\ldots,x_{n-1})$ to the hyperplane $Z(x_n)$. 
Restricting the codomain to $Z(x_n) \simeq \mathbf{P}^{n-1}$, we obtain the rational map \begin{align*} \pi\colon \mathbf{P}^n &\dashrightarrow \mathbf{P}^{n-1}\\ [x_0:\cdots:x_{n-1}:x_n] &\mapsto [x_0:\cdots:x_{n-1}] \end{align*} which we claim induces a birational map $\pi\rvert_X\colon X \dashrightarrow \pi(X)$. Let $X'$ denote the image of $X$ through this map $\pi$. Then, $K(X')$ is generated by the rational functions $x_i/x_0$ for $1 \le i \le r+1$. The map on function fields corresponding to $\pi$ is \begin{align*} K(X') &\hookrightarrow K(X)\\ x_i/x_0 &\mapsto x_i/x_0 \end{align*} Since $x_i/x_0$ for $1 \le i \le r+1$ generate $K(X)$, we see that $K(X') = K(X)$, and so the map $\pi$ is birational. $\blacksquare$ We note that to prove Exercise 3.14 with this method, we can perform the operation above $n - (r+1)$ times to find a birational map between $X$ and a hypersurface in $\mathbf{P}^{r+1}$. In the notation of the proof above, composing all of these projections from points can be described as the linear projection \begin{align*} \mathbf{P}^n &\dashrightarrow \mathbf{P}^n\\ [x_0:\cdots:x_n] &\mapsto [x_0:\cdots:x_{r+1}:0:0\cdots:0] \end{align*} away from the $(n-r-1)$-plane $Z(x_1,x_2,\ldots,x_{r+1})$ to the $(r+1)$-plane $Z(x_{r+2},x_{r+3},\ldots,x_n)$; see for example [Shafarevich, Ex. 1.27]. Restricting the codomain to the $(r+1)$-plane $Z(x_{r+2},x_{r+3},\ldots,x_n) \simeq \mathbf{P}^{r+1}$, this gives a rational map $\pi\rvert_X\colon X \dashrightarrow \mathbf{P}^{r+1}$ that is birational onto its image.<|endoftext|> TITLE: Is it obvious that $q = \frac{2p+2}{p+2} > p$, how does one easily show $q > p$? QUESTION [5 upvotes]: I was going through walter Rudin's first example when trying to show $A = \{p \in Q_+ : 0 p$ because $p \in A \implies p^2 < 2 \iff p^2 - 2 < 0 $. So it means $ - \frac{p^2 - 2}{p+2}$ has to be positive and therefore $q$ is a little bigger than $p$ since it starts off at $p$ and only increases. However, if I only had the second statement I would have no idea if $q > p$. Is there a way to see that the second statement is larger than $p$ also without appealing to the first statement? Or what is the purpose of the second statement on Rudin's example? REPLY [3 votes]: Since $p^2<2$, we have $p^2+2p<2+2p$, so $p(p+2)<2p+2$, whence $p<\dfrac{2p+2}{p+2}$. (Note that $p+2>0$.)<|endoftext|> TITLE: How many entries in $3\times 3$ matrix with integer entries and determinant equal to $1$ can be even? QUESTION [11 upvotes]: Let $A$ be a $3\times 3$ matrix with integer entries such that $\det(A)=1$. At most how many entries of $A$ can be even? I get a possible solution as $6$ by considering the $3 \times 3$ identity matrix. But I am not sure about that is it possible to have more than $6$ even entries. Please help me enumerate this problem to prove my answer. REPLY [13 votes]: Using Laplace expansion or Sarrus's rule, we have $$ \begin{vmatrix}a&b&c\\d&e&f\\g&h&i\end{vmatrix}=aei-afh-bdi+bfg+cdh-ceg$$ In order for this expression to be equal to $1$, it must be odd, meaning that at least one of the $6$ products must be odd. And if one of the products is odd, then all three of the terms in the product must be odd. Therefore there can be at most $6$ even entries, and the identity matrix shows that there can be exactly six. 
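As a quick sanity check of the parity argument above, here is a small brute-force sketch (Python, standard library only; the helper name det3_mod2 is just for illustration). Reducing mod $2$, an integer matrix with determinant $1$ gives a $3\times 3$ matrix over $\mathbb{F}_2$ with determinant $1$, so it suffices to scan all $2^9$ binary matrices and record the largest number of zero (i.e. even) entries among those with odd determinant.

```python
from itertools import product

def det3_mod2(entries):
    # entries = (a, b, c, d, e, f, g, h, i), the matrix read row by row;
    # determinant by the rule of Sarrus, reduced mod 2
    a, b, c, d, e, f, g, h, i = entries
    return (a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h) % 2

# Largest number of zero entries among 3x3 binary matrices with odd determinant.
best = max(m.count(0) for m in product((0, 1), repeat=9) if det3_mod2(m) == 1)
print(best)  # prints 6
```

It prints $6$, in agreement with the bound above, and the identity matrix shows the bound is attained.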
REPLY [7 votes]: If there are 7 or more even entries, you can show that the determinant is even, since the expansion is a sum of the products of three numbers, one of which is always even.<|endoftext|> TITLE: Sylow $p$-subgroups of $GL_n(\mathbb{F}_p)$ QUESTION [6 upvotes]: How to find the number $n_p$ of Sylow $p$-subgroups of $GL_n(\mathbb{F}_p)$? For example, in $GL_3(\mathbb{F}_p)$, $|GL_3(\mathbb{F}_p)|=(p^3-1)(p^3-p)(p^3-p^2) = p^3(p-1)^3(p+1)(p^2+p+1)$, we have the Heisenberg group $$ \begin{pmatrix} 1&x&y\\ 0&1&z\\ 0&0&1 \end{pmatrix} $$ as a Sylow $p$-subgroup. And its normalizer is at least the upper triangular matrices, which has order $p^3(p-1)^3$. So we now know $n_p\mid(p+1)(p^2+p+1)$. How to proceed from here to determine $n_p$? REPLY [5 votes]: The book Groups and Representations by Alperin-Bell is a nice exposition for $\rm{GL}_n(\mathbb{F}_q)$. Here is a way to find the number of Sylow-$p$ subgroups in your question, whose details you can try to fill with the help of tools in the book mentioned. (1) The group $\rm{U}_n(\mathbb{F}_p)$ of upper triangular matrices with all $1$ on diagonal is a Sylow-$p$ subgroup of $\rm{GL}_n(\mathbb{F}_p)$. (2) The number of Sylow-$p$ subgroups in any group is equal to the index of normalizer of Sylow-$p$ subgroup. (3) For the group $\rm{U}_n(\mathbb{F}_p)$, try to show that the group $B_n(\mathbb{F}_p)$ of upper triangular invertible matrices is the normalizer. For this, proceed as follows. (3.0) Show that $\rm{U}_n(\mathbb{F}_p)$ is normal in $\rm{B}_n(\mathbb{F}_p)$. (For this, consider a map from $\rm{B}_n(\mathbb{F}_p)$ to itself, which sends any matrix to a diagonal matrix with same diagonal entries as in original; clear? Show that this is a homomorphism. Then what is kernel? Kernel is always a normal subgroup.) (3.1) Suppose $g\in \rm{GL}_n(\mathbb{F}_p)$ normalizes $\rm{U}_n(\mathbb{F}_p)$. (3.2) By Bruhat decompositon (follow book), $g$ can be written (uniquely) as $b_1wb_2$ where $b_1,b_2$ are in $\rm{B}_n(\mathbb{F}_p)$ and $w$ is a permutation matrix. (3.3) Since $g=b_1wb_2$ and $b_1,b_2$ normalizes $\rm{U}_n(\mathbb{F}_p)$ (by (3.0)) hence $w$ normalizes $\rm{U}_n(\mathbb{F}_p)$. (3.4) An easy marix computation shows that the only permutation matrix normalizing $\rm{U}_n(\mathbb{F}_p)$ is identity. Hence $g=b_1b_2\in \rm{B}_n(\mathbb{F}_p)$. (3.5) Find the order of the normalizer $\rm{B}_n(\mathbb{F}_p)$ of Sylow-$p$ subgroup, and find its index in $\rm{GL}_n(\mathbb{F}_p)$. q.e.d<|endoftext|> TITLE: An interesting definite integral $\int_0^1(1+x+x^2+x^3+\cdot\cdot\cdot+x^{n-1})^2 (1+4x+7x^2+\cdot\cdot\cdot+(3n-2)x^{n-1})~dx=n^3$ QUESTION [20 upvotes]: How to prove $~~ \forall n\in\mathbb{N}^+$, \begin{align}I_n=\int_0^1(1+x+x^2+x^3+\cdot\cdot\cdot+x^{n-1})^2 (1+4x+7x^2+\cdot\cdot\cdot+(3n-2)x^{n-1})~dx=n^3.\end{align} My Try: Define $\displaystyle S(n)=\sum_{k=0}^{n-1}x^k=1+x+x^2+x^3+\cdot\cdot\cdot+x^{n-1}=\frac{x^n-1}{x-1}$. 
Then, \begin{align}\frac{d}{dx}S(n)=S'(n)=1+2x+3x^2+\cdot\cdot\cdot(n-1)x^{n-2}=\sum_{k=0}^{n-1}kx^{k-1}.\end{align} Therefore, \begin{align} I_n&=\int_0^1 S^2(n)\left(3S'(n+1)-2S(n)\right)~dx\\ &=3\int_0^1 S^2(n)S'(n+1)~dx-2\int_0^1 S^3(n)~dx\\ &=3\int_0^1 S^2(n)(S'(n)+nx^{n-1})~dx-2\int_0^1 S^3(n)~dx\\ &=3\int_0^1 S^2(n)~d(S(n))+3\int_0^1 S^2(n)(nx^{n-1})~dx-2\int_0^1 S^3(n)~dx\\ &=n^3-1+\int_0^1 S^2(n)(3nx^{n-1}-2S(n))~dx\\ &=n^3-1+\int_0^1 \left(\frac{x^n-1}{x-1}\right)^2\left(3nx^{n-1}-2\cdot\frac{x^n-1}{x-1}\right)~dx \end{align} So the question becomes: Prove \begin{align}I'=\int_0^1 \left(\frac{x^n-1}{x-1}\right)^2\left(3nx^{n-1}-2\cdot\frac{x^n-1}{x-1}\right)~dx=1.\end{align} \begin{align}I'&=\int_0^1 \frac{3nx^{n-1}(x^n-1)^2}{(x-1)^2}-\frac{2(x^n-1)^3}{(x-1)^3}~dx\\ &=\int_0^1 \frac{(x-1)^2\left(\frac d {dx} (x^n-1)^3\right)-2(x^n-1)^3(x-1)}{(x-1)^4}~dx\\ &=\int_0^1 \frac d {dx} \left(\frac{(x^n-1)^3}{(x-1)^2}\right)~dx\\ &=\lim_{x \to 1} \frac{(x^n-1)^3}{(x-1)^2}-\frac{(0^n-1)^3}{(0-1)^2}\\ \end{align} $$\therefore I'=1.$$ \begin{align}\therefore I_n=n^3.\end{align} There MUST be other BETTER ways evaluating $I_n$. Could anyone give me some better solutions? Thanks. REPLY [48 votes]: First apply the substitution $x = t^3$. Then \begin{align*} I_n &= \int_{0}^{1} (1 + t^3 + \cdots + t^{3n-3})^2 (1 + 4t^3 + \cdots + (3n-2)t^{3n-3}) \cdot 3t^2 \, dt \\ &= \int_{0}^{1} 3 (t + t^4 + \cdots + t^{3n-2})^2 (1 + 4t^3 + \cdots + (3n-2)t^{3n-3}) \, dt. \end{align*} Now let $u = u(t) = t + t^4 + \cdots + t^{3n-2}$. Then $$ 3 (t + t^4 + \cdots + t^{3n-2})^2 (1 + 4t^3 + \cdots + (3n-2)t^{3n-3}) = 3u^2 \frac{du}{dt}.$$ Therefore $$ I_n = \left[ u(t)^3 \right]_{t=0}^{t=1} = u(1)^3 - u(0)^3 = n^3. $$<|endoftext|> TITLE: Closed-forms for $\int_0^\infty\frac{dx}{\sqrt[3]{55+\cosh x}}$ and $\int_0^\infty\frac{dx}{\sqrt[3]{45\big(23+4\sqrt{33}\big)+\cosh x}}$ QUESTION [10 upvotes]: (This summarizes results for cube roots from here and here. The fourth root version is this post.) Define $\beta= \tfrac{\Gamma\big(\tfrac56\big)}{\Gamma\big(\tfrac13\big)\sqrt{\pi}}=\frac1{B\big(\tfrac{1}{3},\tfrac{1}{2}\big)}$ with beta function $B(a,b)$. Then we have the nice evaluations, $$\begin{aligned}\frac{3}{5^{5/6}} &=\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-4\big)\\ &=\beta\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+4x^3}}\\[1.7mm] &=\beta\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{\color{blue}{9+4\sqrt{5}}\,x}}\\[1.7mm] &=2^{1/3}\,\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{9+\cosh x}} \end{aligned}\tag1$$ and, $$\begin{aligned}\frac{4}{7} &=\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-27\big)\\ &=\beta\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+27x^3}}\\[1.7mm] &=\beta\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{\color{blue}{55+12\sqrt{21}}\,x}}\\[1.7mm] &=2^{1/3}\,\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{55+\cosh x}} \end{aligned}\tag2$$ Note the powers of fundamental units, $$U_{5}^6 = \big(\tfrac{1+\sqrt{5}}{2}\big)^6=\color{blue}{9+4\sqrt{5}}$$ $$U_{21}^3 = \big(\tfrac{5+\sqrt{21}}{2}\big)^3=\color{blue}{55+12\sqrt{21}}$$ Those two instances can't be coincidence. 
Question: Is it true this observation can be explained by, let $b=2a+1$, then, $$\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+ax^3}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{b+\sqrt{b^2-1}\,x}}=2^{1/3}\int_0^\infty\frac{dx}{\sqrt[3]{b+\cosh x}}$$ Example: We assume it is true and use one of Noam Elkies' results as, $$\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6}; -a\big) = \frac{6}{11^{11/12}\, U_{33}^{1/4}} $$ where $a=\sqrt{11}\,(U_{33})^{3/2}$ with fundamental unit $U_{33}=23+4\sqrt{33}$. Since $b=2a+1=45\,U_{33}$, we then have the nice integral, $$2^{1/3}\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{45\big(23+4\sqrt{33}\big)+\cosh x}}=\frac{6}{11^{11/12}\,U_{33}^{1/4}}=0.255802\dots$$ where $\beta= \tfrac{\Gamma\big(\tfrac56\big)}{\Gamma\big(\tfrac13\big)\sqrt{\pi}}.\,$ So is it true in general? REPLY [3 votes]: I've noticed that for $$ I = \int_0^\infty \frac{dx}{\sqrt[3]{b + \cosh(x)}} $$ if we take a Mellin transform with respect to $b$, we get $$ \mathcal{M}_b[I](s) = \frac{\Gamma(\frac{1}{3}-s)\Gamma(s)}{\Gamma(\frac{1}{3})}\int_0^\infty \frac{\text{sech}^{-s}{x}}{\cosh^{1/3}(x)}\;dx $$ $$ \mathcal{M}_a[I](s) = \frac{\sqrt{\pi}\Gamma(\frac{1}{3}-s)\Gamma(\frac{1}{6}-\frac{s}{2})\Gamma(s)}{2\Gamma(\frac{1}{3})\Gamma(\frac{2}{3}-\frac{s}{2})}, \; \Re(s)<\frac{1}{3} $$ $$ I(s) = \mathcal{M}^{-1}_s\left[\frac{\sqrt{\pi}\Gamma(\frac{1}{3}-s)\Gamma(\frac{1}{6}-\frac{s}{2})\Gamma(s)}{2\Gamma(\frac{1}{3})\Gamma(\frac{2}{3}-\frac{s}{2})}\right](b) $$ $$ I(s) = \frac{\Gamma(\frac{1}{6})^2}{2^{5/3}\Gamma(\frac{1}{3})}\;_2F_1\left(\frac{1}{6},\frac{1}{6},\frac{1}{2},b^2\right) - \frac{b \Gamma(\frac{2}{3})^2}{2^{2/3}\Gamma(\frac{1}{3})}\;_2F_1\left(\frac{2}{3},\frac{2}{3},\frac{3}{2},b^2\right) $$ $$ 2^{1/3}\beta I(s) = \frac{\sqrt{3}}{2^{2/3}}\;_2F_1\left(\frac{1}{6},\frac{1}{6},\frac{1}{2},b^2\right) - \frac{2\sqrt{\pi}b \Gamma(\frac{5}{6})}{\Gamma(\frac{1}{6})^2}\;_2F_1\left(\frac{2}{3},\frac{2}{3},\frac{3}{2},b^2\right) $$ so you are looking for $a$ and $b$ such that satisfy $$ \,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-a\big) = \frac{\sqrt{3}}{2^{2/3}}\;_2F_1\left(\frac{1}{6},\frac{1}{6},\frac{1}{2},b^2\right) - \frac{2\sqrt{\pi}b \Gamma(\frac{5}{6})}{\Gamma(\frac{1}{6})^2}\;_2F_1\left(\frac{2}{3},\frac{2}{3},\frac{3}{2},b^2\right) $$ i.e. $a=4,b=9$ and $a=27,b=55$, and would need to prove $$ \,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-a\big) = \frac{\sqrt{3}}{2^{2/3}}\;_2F_1\left(\frac{1}{6},\frac{1}{6},\frac{1}{2},(2a+1)^2\right) - \frac{2\sqrt{\pi}(2a+1) \Gamma(\frac{5}{6})}{\Gamma(\frac{1}{6})^2}\;_2F_1\left(\frac{2}{3},\frac{2}{3},\frac{3}{2},(2a+1)^2\right) $$ The residues if you subtract the two and plot are around machine epsilon. I hope this is helpful.<|endoftext|> TITLE: Prove that $\vert\sin(x)\sin(2x)\sin(2^2x)\cdots\sin(2^nx)\vert < \left(\frac{\sqrt{3}}{2}\right)^n$ QUESTION [7 upvotes]: For $x$ in $\mathbb R^*$ and $n$ in $\mathbb N$, define $$U_n = \sin(x)\sin(2x)\sin(2^2x)\cdots\sin(2^nx) = \prod_{k=0}^n\sin(2^k x)$$ Prove that $$\vert U_n\vert \leq \left(\frac{\sqrt{3}}{2}\right)^n$$ Can anyone help please. (Edited by @River Li) I found it was B6(c) in the 34th Putnam (1973): B-6: On the domain $0 \le \theta \le 2\pi$: (a) Prove that $\sin^2\theta \sin 2\theta$ takes its maximum at $\pi/3$ and $4\pi/3$ (and hence its maximum at $2\pi/3$ and $5\pi/3$). (b) Show that $$\left|\sin^2\theta \, \Big[\sin^3(2\theta) \cdot \sin^3(4\theta) \cdots \sin^3(2^{n - 1}\theta)\Big]\, \sin (2^n\theta)\right|$$ takes its maximum at $\theta = \pi/3$. 
(The maximum may also be attained at other points.) (c) Derive the inequality: $$\sin^2\theta \cdot \sin^2(2\theta) \cdot \sin^2(4\theta) \cdots \sin^2(2^n\theta)\le (3/4)^n.$$ See: The American Mathematical Monthly, Vol. 81, No. 10, Dec., 1974, (Page 1086-1095) or https://prase.cz/kalva/putnam/putn73.html REPLY [15 votes]: Easy to see that $|U_1(x)|\leq\frac{\sqrt3}{2}$ and $|U_2(x)|\leq\frac{3}{4}$ for every $x$. Assume that $|U_k(x)|\leq\left(\frac{\sqrt3}{2}\right)^k$ for every real $x$ and every $k\leq n$, for some $n\ge2$. If $|\sin{x}|\leq\frac{\sqrt3}{2}$, then $$|U_{n+1}(x)|=|\sin x|\cdot|U_n(2x)|\leq\frac{\sqrt3}{2}\cdot\left(\frac{\sqrt3}{2}\right)^n=\left(\frac{\sqrt3}{2}\right)^{n+1}$$ If $|\sin{x}|\geq\frac{\sqrt3}{2}$, then $|\cos{x}|\leq\frac{1}{2}$ and $|\sin{x}\sin2x|=|2(1-\cos^2x)\cos{x}|\leq\frac{3}{4}$. Thus, $$|U_{n+1}(x)|=|\sin{x}\sin2x|\cdot|U_{n-1}(4x)|\leq\frac{3}{4}\cdot\left(\frac{\sqrt3}{2}\right)^{n-1}=\left(\frac{\sqrt3}{2}\right)^{n+1}$$ By induction on $n\ge1$, we are done.<|endoftext|> TITLE: I take it so long everytime I learn mathematics myself. What should I do? QUESTION [35 upvotes]: Stack-Exchange, I am a freshman in Mathematics Department. I always learn mathematics the hard way. Everytime I learn mathematics on my own by reading from Mathematics textbooks, I would read the definition, try to figure out what it really means, stick new definitions which concepts that I already know, prove theorem without ever reading given proof. But recently, I find this method cumbersome. In order to get a deep level of understanding those concept and solve a good amount of problems, I trade it off with a lot of my time, and my energy. My seniors tell me that I should learn from different source of material to get more insight view, and separate the processes of learning theory and problem solving. I should read all the theory before trying to apply them to save times. But I'm used to the way of Pólya's mouse, so I am worried that I won't be able to catch up with all my classes like this. I want to try the way my seniors told me; that might be faster. But I want to know if the method would come with a loss of depth in understanding. So I am asking for help. Also, please give me some tips to boost up the performance and the quality of learning mathematics all by myself. Thank you all very much. REPLY [2 votes]: I wish I had you for a teacher or a student. I am long past the age when I can take classes in school, and everything I do is driven by curiosity. Recently, I became interested in retro-tech and began with the table of four-place logarithms we were given in high school. (Before computers or calculators log tables were the one of the instruments by which human computing power could be amplified.) Wow! What a bunch of errors in that thing, but I am finding ways to weed out the worst mistakes and improve the accuracy. I think I have discovered things about log tables that nobody else knows. It is an empowering feeling. And some day, when some conference needs some kind of paper, I will have one pre-generated. Most likely this would be the HHC 2017 conference dealing with HP calculators. It is a good way to learn. Never quit!<|endoftext|> TITLE: What is a covering set of a Sierpinski number? What does it do? QUESTION [6 upvotes]: Recently a new prime number has been discovered, which eliminates one of the six remaining candidates for the smallest Sierpinski numbers. 
So I was reading the wikipedia article about the Sierpinski number, where I came across what is called a covering set of primes for a Sierpinski number. Different Sierpinski numbers has different covering sets. I understood that the elements belongs to the covering set divides the Sierpinski number, associated with the covering set. But what a covering set do? How it helps in finding smallest Sierpinski number ? Can anyone guide me through this? Thanks. REPLY [5 votes]: A covering set doesn't help in "finding smallest Sierpinski number". It is merely used in order to show that a given $k\in\mathbb{N}$ is a Sierpinski number, as part of proving that the expression $k\cdot2^n+1$ is composite for every $n\in\mathbb{N}$ (becuse it is divisible by one of the values in the covering set). In other words, the covering set is inferred during the process of proving that $k$ a Sierpinski number. You could say that a covering set is part of the proof's output rather than input: We take a $k$ and prove that it has a covering set, not vice-versa. For the record, allow me to emphasize that I became familiar with these numbers only a few days ago while reading about this on the news, so the answer above is based solely on my understanding of the same Wikipedia article that you mention.<|endoftext|> TITLE: Basis for a vector space and normed space QUESTION [5 upvotes]: While defining the basis for a vector space we impose two conditions (linearly independence and spaning) for the basis set. I am unable to see the condition of linearly independence in the case of schauder basis for a normed space. Can any body explain it to me why there is no need of being linearly independence for basis set in case of normed spaces and what is the role of linearly independence in case of vector space? REPLY [3 votes]: Schauder bases are linearly independent. Indeed, let $(e_n)_{n=1}^\infty$ be a Schauder basis for a Banach space. If it were linearly dependent, then the zero vector would have two expansions contradicting uniqueness: $$0= \sum_{k=1}^\infty 0\cdot e_k = \sum_{k=1}^\infty a_k e_k$$ where $a_k$ are eventually zero, but some of them are non-zero and add up to 0 when multiplied by $e_k$. However, linear independence in not really the primary issue when dealing with Schauder bases.<|endoftext|> TITLE: LU decomposition; do permutation matrices commute? QUESTION [14 upvotes]: I have an assignment for my Numerical Methods class to write a function that finds the PA=LU decomposition for a given matrix A and returns P, L, and U. Nevermind the coding problems for a moment; there is a major mathematical problem I'm having that my professor seems incapable of addressing, and after searching for hours (perhaps inefficiently) I could find no accounting of it. How do we extricate the permutation matrices from the row elimination matrices? Essentially the idea, if I understand it correctly, is that we perform a series of transformations on a matrix $A$ by applying successive lower triangular matrices that eliminate single elements, thus $L_nL_{n-1}...L_2L_1A = U$. In my understanding, this is computationally useful because lower triangular atomic matrices can be inverted by changing the sign of the off-diagonal element, so $A = L_1^{-1}L_2^{-1}...U$. That's all fine (assuming I'm correct), but the introduction of pivot matrices between each $L_j$ seems to make the problem intractable. 
In every accounting I've seen some sorcery occurs that looks like this: $$L_nP_nL_{n-1}...P_3L_2P_2L_1P_1A = U \Rightarrow P_nP_{n-1}...P_2P_1L_nL_{n-1}...L_2L_1A = U$$ And no one bothers to explain how this happens or in fact even states it explicitly. If possible I would like to know a) Is this operationally acceptable? b) What properties of these respective classes of matrices make this kind of willy-nilly commutation legal? c) Is my understanding of the method and its advantages accurate? REPLY [2 votes]: $\require{color}$ I found @Calle explanation interesting but have a hard time following the arguments. @kuzooroo explanation fits better my way of thinking but the arguments are too dense. I decided to start a new answer since my arguments will not fit in a comment slot. Following @Calle: Note that, since each $P_i$ is a one-cycle permutation (one interchange), $P_i^{-1}=P_i$ since doing an interchange twice bring the matrix to the same original form. We now should prove that $\Lambda$ is a lower triangular matrix with ones in the diagonal. $\Lambda_3$ is, of course, a lower triangular with 1 in the diagonal. To prove that $\Lambda_2$ is lower triangular with ones in the diagonal we take into account the fact that $L_2$ is atomic. That is, \begin{eqnarray*} L_2 = \begin{pmatrix} 1 & 0 & \cdots & 0 & \cdots & \cdots &\cdots & \cdots & 0 \\ 0 & 1 & \cdots & 0 & \cdots & \cdots &\cdots & \cdots & 0 \\ \vdots & l_{32} & \ddots & \vdots & \vdots &\vdots & \vdots & \vdots & \vdots\\ \vdots & \vdots & \cdots & 1 & 0 & \cdots &\cdots& \cdots & \vdots \\ \vdots & l_{i2} & \cdots & 0 & 1 & 0 &\cdots & \cdots & \vdots \\ \vdots & \vdots & \cdots & \vdots & \ddots & \ddots & \ddots & \cdots & \vdots \\ \vdots & l_{k2} & \vdots & \vdots & \vdots & 0 & 1 & 0 & \vdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots &\ddots & \vdots & 0 \\ 0 & 0 & \cdots & 0 & \cdots & \cdots &\cdots & 0 & 1 \end{pmatrix} \nonumber. \end{eqnarray*} Let us assume that $P_3$ interchanges rows $i$ with $k$. (note that both $i$ and $k$ are larger than 2). We have that \begin{eqnarray*} P_3 L_2 = \begin{pmatrix} 1 & 0 & \cdots & 0 & \cdots & \cdots &\cdots & \cdots & 0 \\ 0 & 1 & \cdots & 0 & \cdots & \cdots &\cdots & \cdots & 0 \\ \vdots & l_{32} & \ddots & \vdots & \vdots &\vdots & \vdots & \vdots & \vdots\\ \vdots & \vdots & \cdots & 1 & 0 & \cdots &\cdots& \cdots & \vdots \\ \vdots & \boxed{l_{k2}} & \vdots & \vdots & \textcolor{green}{0} & 0 & \textcolor{blue}{1} & 0 & \vdots \\ \vdots & \vdots & \cdots & \vdots & \ddots & \ddots & \ddots & \cdots & \vdots \\ \vdots & \boxed{l_{i2}} & \cdots & 0 & \textcolor{green}{1} & 0 & \textcolor{blue}{0} & \cdots & \vdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots &\ddots & \vdots & 0 \\ 0 & 0 & \cdots & 0 & \cdots & \cdots &\cdots & 0 & 1 \end{pmatrix} \nonumber. \end{eqnarray*} Finally $P_3 L_2 P_3^{-1}= P_3 L_2 P_3$ ineterchanges columns $i$ (green) with $k$ (blue) on $P_3 L_2$. The $1$ in column $k$ (blue) takes the place of the 0 in column $i$ (green) . In the same way the $1$ in green in row $k$ takes the position of the blue $0$ on the same row. 
That is \begin{eqnarray*} P_3 L_2 P_3^{-1} = \begin{pmatrix} 1 & 0 & \cdots & 0 & \cdots & \cdots &\cdots & \cdots & 0 \\ 0 & 1 & \cdots & 0 & \cdots & \cdots &\cdots & \cdots & 0 \\ \vdots & l_{32} & \ddots & \vdots & \vdots &\vdots & \vdots & \vdots & \vdots\\ \vdots & \vdots & \cdots & 1 & 0 & \cdots &\cdots& \cdots & \vdots \\ \vdots & \boxed{l_{k2}} & \vdots & \vdots & \textcolor{green}{1} & 0 & \textcolor{blue}{0} & 0 & \vdots \\ \vdots & \vdots & \cdots & \vdots & \ddots & \ddots & \ddots & \cdots & \vdots \\ \vdots & \boxed{l_{i2}} & \cdots & 0 & \textcolor{green}{0} & 0 & \textcolor{blue}{1} & \cdots & \vdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots &\ddots & \vdots & 0 \\ 0 & 0 & \cdots & 0 & \cdots & \cdots &\cdots & 0 & 1 \end{pmatrix} \nonumber. \end{eqnarray*} This is, again, an atomic matrix with the entries $l_{i2}$ and $l_{k2}$ interchanged. In the same way, $P_2 L_1 P_2^{-1}$ is atomic, and so is $\Lambda_1 = P_3 P_2 L_1 P_2^{-1} P_3^{-1}$. In general, for \begin{eqnarray*} U= L_{n-1} P_{n-1} L_{n-2} P_{n-2} \cdots P_2 L_1 P_1 A, \end{eqnarray*} we define \begin{eqnarray*} \Lambda_i = P_{n-1} \cdots P_{i+2} P_{i+1} L_i P_{i+1}^{-1} P_{i+2}^{-1} \cdots P_{n-1}^{-1}, \end{eqnarray*} where we assume $P_0=I$. The same procedure shown above guarantees that each $\Lambda_i$ is atomic, and so $\Lambda=\prod \Lambda_i$ is lower triangular with ones on the diagonal. Hence this completes what is needed to verify that $PA=LU$.<|endoftext|> TITLE: Number of matrices on $\mathbb Z_{p}$ with a given characteristic polynomial QUESTION [11 upvotes]: How can I find the number of $n\times n$ matrices on $\mathbb Z_{p}$ with a given characteristic polynomial? For example: if $p$ is a prime number such that $p \equiv 3 \pmod 4$, then the number of $2\times 2$ matrices on $\mathbb Z_{p}$ whose characteristic polynomial is $x^2+1$ would be equal to $p(p-1)$. I don't have a proof of the above example. REPLY [4 votes]: $\newcommand{\Size}[1]{\left\lvert #1 \right\rvert}$For your special case, with characteristic polynomial $x^{2} + 1$ and $p \equiv 3 \pmod{4}$, the matrix is conjugate to its rational canonical form $$ R = \begin{bmatrix} 0 & 1\\ -1 & 0\\ \end{bmatrix}. $$ So it is just a matter of seeing how many conjugates $R$ has. Alternatively, one may compute the centralizer of $R$, and discover that it consists of the matrices $$ a I + b R $$ with $a, b$ not both zero. So the centralizer has order $p^2 - 1$, hence $R$ has $$\dfrac{\Size{\operatorname{GL}(2,p)}}{p^2 - 1} = \dfrac{(p^{2} - 1)(p^{2} - p)}{p^2 - 1} = p(p-1)$$ conjugates. When $p \equiv 1 \pmod{4}$, then the matrix can be put in the form $$ R = \begin{bmatrix} a & 0\\ 0 & -a\\ \end{bmatrix}. $$ where $a$ is a primitive fourth root of unity. This time the centralizer has order $(p-1)^{2}$, as it consists of the diagonal matrices, hence $R$ has $$\dfrac{\Size{\operatorname{GL}(2,p)}}{(p - 1)^{2}} = \dfrac{(p^{2} - 1)(p^{2} - p)}{(p - 1)^{2}} = p(p+1)$$ conjugates.<|endoftext|> TITLE: What is semantics in the context of mathematical logic? QUESTION [5 upvotes]: I have been trying to familiarize myself with the foundations of mathematics, which led me to discussions about propositional, first-order, and second-order logic. I understand that semantics is related to model theory and the satisfiability of models; but I feel that I'm not taking away what I am supposed to. I understand that in language, semantics defines the "meaning" of words and phrases. Is this analogous to the use of the term in mathematical logic?
If so, how does one rigorously talk about the meaning of a statement in logic or math? Additional insight, sources, and reading recommendations would be greatly appreciated. REPLY [4 votes]: Mitchell Spector explained the basic ideas and the completeness (+soundness) theorem. But let me give you some more in-depth explanation of the model-theoretic perspective. In model theory, usually${}^\dagger$, we only really care about semantics. I believe that for most model theorists, the true meaning of the completeness theorem is that we can use the familiar rules of inference to derive semantically true statements from other semantically true statements. This is (as far as I know) in contrast to non-first order logics, where we may have no (complete) rules of inference. Consider the formulas $\varphi_1(x)= 0\leq x$ and $\varphi_2(x)=\exists y (y\cdot y=x)$ in the language of ordered rings $\{0,1,\leq,+,\cdot\}$. Syntactically, they are very different: the first one is quantifier-free, while the second one is not. The first one uses the symbol $\leq$, while the other one doesn't. However, if we consider them in the field of real numbers (or any real closed field), then they are equivalent: any real number satisfies $\varphi_1$ (in the real numbers) if and only if it satisfies $\varphi_2$. More generally, if $T$ is any first-order theory in a language $L$, while $\varphi_1(x)$ and $\varphi_2(x)$ are arbitrary $L$-formulas with the same free variables, then we say that $\varphi_1$ is equivalent to $\varphi_2$ under $T$ if $T\vdash \varphi_1\leftrightarrow \varphi_2$, that is, $T$ proves that $\varphi_1$ and $\varphi_2$ are equivalent. Essentially by completeness, this is exactly equivalent to saying that in any model of $T$, the interpretations of $\varphi_1$ and $\varphi_2$ (i.e. the sets of elements satisfying them) are exactly the same. It follows that equivalence under $T$ is an equivalence relation on the set of all $L$-formulas in variable $x$ (or any other prescribed set of free variables), and that (in models of $T$), all the semantic content of a formula is expressed by its equivalence class. For example, if we are in a real closed field, then $\varphi_1(x)$ and $\varphi_2(x)$ express the same thing, so if we only care about the meaning of the formulas, there is no need to distinguish between them. This idea gives rise to the notion of the Lindenbaum-Tarski algebra, i.e. the Boolean algebra of all equivalence classes of formulas with prescribed free variables. In fact, we can (and do) go one step further. One can show that if we have any model $M$ of a complete theory $T$, then we can find a larger $L$-structure $N$ (a so-called elementary extension, which means, intuitively, that $M\subseteq N$ and anything we can say about an element of $M$ is true in $N$ if and only if it is true in $M$) which has the property that two elements of $N$ satisfy the same $L$-formulas if and only if there is an automorphism of $N$ which takes one of them to the other (we say that $N$ is strongly homogeneous). This reduces the problem of checking which formulas a given element satisfies to the problem of checking which orbit of $\operatorname{Aut}(N)$ it belongs to. Moreover, if we pick $N$ to be sufficiently rich (whence it is called a monster model), virtually all interesting properties of $T$ are captured by $N$, its automorphism group, and the various Lindenbaum-Tarski algebras, or rather, their interpretations in $N$ (which we call definable sets). 
$\dagger$ sometimes we do care about syntax because it tells us something about semantics, or makes it easier to understand them. For example, the theory of the real field in the language of ordered rings has quantifier elimination. This is a syntactic property, but it implies that the theory of real closed fields (in any equivalent language, for example the language of rings) is model complete, which is an important semantic property.<|endoftext|> TITLE: Finding how many prime numbers lie in a given range QUESTION [7 upvotes]: How many prime numbers $p$ are there which satisfy this condition? $$13! +1 \lt p \leq 13! +13$$ Which method should I use to solve this, or could you help with the first steps? REPLY [2 votes]: What you have to remember is that $n!$ is divisible by each prime number $p \leq n$. Then $n! + p$ is also divisible by $p$. In the specific case of $n = 13$, it follows that $13!$ is divisible by $2, 3, 5, 7, 11, 13$, and consequently, $13! + 2$ is divisible by $2$, $13! + 3$ is divisible by $3$, you get the idea. So yeah, no primes there.<|endoftext|> TITLE: What are topoi? QUESTION [16 upvotes]: I have been hearing a lot about the concept of "topos". I asked a friend of mine in the know and he said that topoi are a generalization of sheaves on a topological space. In particular, topoi were usefull when an actual topology was not available. Can anyone elaborate on this or make this idea more clear? REPLY [11 votes]: Topoi can be looked at from many points of view. Topoi can be seen as categories of sheaves on (generalized) spaces. Indeed, the premier example of a (Grothendieck) topos is the category $\mathrm{Sh}(X)$ of set-valued sheaves on a topological space $X$. Instead of spaces, also sites work. Topoi can be seen as generalized spaces. For instance we have a functor from the category of topological spaces to the category of topoi, namely the functor $X \mapsto \mathrm{Sh}(X)$. This functor is fully faithful if we restrict to sober topological spaces. (Soberness is a very weak separation axiom. Every Hausdorff space is sober and so is every scheme from algebraic geometry.) Many geometrical concepts generalize to topoi, for instance there are: point of a topos, open and closed subtopos, connected topos, continuous map between topoi, coverings of topoi, ... Topoi can be seen as alternate mathematical universes. The special topos $\mathrm{Set}$, the category of sets and maps, is the usual universe. Any topos admits an "internal language" which can be used for working inside of a topos as if it consisted of plain sets. Any theorem which admits an intuitionistic proof (a proof not using the law of excluded middle or axiom of choice) is valid in any topos. For instance the statement "For any short exact sequence $0 \to M' \to M \to M'' \to 0$ of modules, the module $M$ is finitely generated if $M'$ and $M''$ are" is such a theorem and therefore also holds in the topos of sheaves on a ringed space. In this way it automatically yields the statement "For any short exact sequence $0 \to \mathcal{F}' \to \mathcal{F} \to \mathcal{F''} \to 0$ of sheaves of $\mathcal{O}_X$-modules, the sheaf $\mathcal{F}$ is of finite type if $\mathcal{F}'$ and $\mathcal{F}''$ are". In the internal language of some topoi, exotic statements such as "any map $\mathbb{R} \to \mathbb{R}$ is smooth" or "there exists a real number $\varepsilon$ such that $\varepsilon^2 = 0$ but $\varepsilon \neq 0$" hold. This is useful for synthetic differential geometry. 
Topoi can be seen as embodiments of logical theories: For any (so-called "geometric") theory $\mathbb{T}$ there is a classifying topos $\mathrm{Set}[\mathbb{T}]$ whose points are precisely the models of $\mathbb{T}$ in the category of sets, and conversely any (Grothendieck) topos is the classifying topos of some theory. The classifying topoi of two theories are equivalent if and only if the theories are Morita-equivalent. I learned this from the nLab entry on topoi. The main examples for topoi are: The category $\mathrm{Set}$ of sets and maps. The category $\mathrm{Sh}(X)$ of set-valued sheaves on any site. Grothendieck conceived topoi because of this example: he needed it for étale cohomology. The "étale topology" on a scheme is not an honest topology, but a Grothendieck site. The effective topos associated to any model of computation. In the internal language of such a topos, the statement "for any natural number $n$, there is a prime number $p > n$" holds if and only if there is a program in the given model of computation which computes, given any number $n$, a prime number $p > n$. The statement "any map $\mathbb{N} \to \mathbb{N}$ whatsoever is given by a Turing machine" is true in many of those topoi. Topoi can for instance be used in algebraic geometry to work with generalized topologies like the étale topology, in logic to construct interesting models of theories, in computer science to compare models of computation, and as tools to build bridges between different subjects of mathematics. Very fine resources for learning about topoi include: Tom Leinster's informal introduction to topos theory. Start here! The textbook Sheaves in Geometry and Logic by Saunders Mac Lane and Ieke Moerdijk. The reference Sketches of an Elephant: A Topos Theory Compendium by Peter Johnstone. If you are in a hurry, then enjoy Luc Illusie's two-page note in the AMS series "What is …?".<|endoftext|> TITLE: What's wrong with this reasoning that $\frac{\infty}{\infty}=0$? QUESTION [61 upvotes]: $$\frac{n}{\infty} + \frac{n}{\infty} +\dots = \frac{\infty}{\infty}$$ You can always break up $\infty/\infty$ into the left-hand side, where $n$ is an arbitrary number. However, on the left-hand side $\frac{n}{\infty}$ is always equal to $0$. Thus $\frac{\infty}{\infty}$ should always equal $0$. REPLY [5 votes]: You cannot use an "infinite" distributive law with division by the symbol $\infty$. In other words the statement $$\frac{\sum_{n=1}^\infty a_n}{\infty} = \sum_{n=1}^\infty \frac{a_n}{\infty}$$ is not valid if $\sum_n a_n$ is a divergent series. For the left-hand side cannot have any meaning when that series diverges, whereas the right-hand side is the zero series (provided that each $a_n$ is an ordinary finite number) which has the sum $0$ for any reasonable summation definition. The law $\frac{a+b}{d}=\frac{a}{d}+\frac{b}{d}$ is true for real or complex numbers $a$, $b$ and $d$ provided $d\ne 0$, but it does not quite generalize to the form you use.<|endoftext|> TITLE: Which elements of the fundamental group of a surface can be represented by embedded curves? QUESTION [6 upvotes]: Let $F$ be a closed orientable genus $g$ surface. Which elements of $\pi_1(F)$ can be represented by simple closed curves? I know that the standard generators can, and I read the claim that if an element of $\pi_1(F)$ can be represented by a simple closed curve and is homologically nontrivial then it must necessarily be one of the standard generators. Why is this true?
Additionally, if an element is homologically trivial then it is necessarily a product of commutators of the standard generators - which of these products of commutators can be represented by simple closed curves? Any references are also appreciated, thanks! REPLY [2 votes]: You are looking for the work of Birman and Series, I believe. Here is a paper they wrote which looks at this for simple closed curves on orientable surfaces with boundary. I know that there is a paper for surfaces without boundary, also by Birman and Series, which came out after this one, but I cannot find it at the moment. I will look around and let you know if I find the other paper, but maybe someone else knows where it is. They give an algorithm which lets you determine if a word in $\pi_1(F)$ represents a simple closed curve on the surface.<|endoftext|> TITLE: Measuring $\pi$ with alternate distance metrics (p-norm). QUESTION [30 upvotes]: How/why does $\pi$ vary with different metrics in p-norms? Full question is below. Background Long ago I did an investigation on Taxicab Geometry using basic geometry. One thing I recall is that a circle (as defined by all points at equal distance from a centre point) 'looks' like a diamond. The 'circumference' of this circle is 8. As an extension I looked at other metrics of the form: $$D_n\left((x_1,y_1),(x_2,y_2)\right)=(|x_2-x_1|^n+|y_2-y_1|^n)^\frac{1}{n}$$ (My limited reading of wikipedia suggests I should call this a p-norm.) More recently, using differing values of $n$, I calculated the 'circumference' of unit circles in these metrics. I took the definition of a unit circle to be all points a distance of one unit from the origin. This gave me a formula for a semi-circle: $$y=\left(1-|x|^n\right)^\frac{1}{n}$$ I took the normal arc length formula of: $$\int_a^b\sqrt{1+\left(\frac{dy}{dx}\right)^2}\,dx$$ and replaced all the powers of $2$ with powers of $n$ to get: $$\int_a^b\left(1+\left|\frac{dy}{dx}\right|^n\right)^\frac{1}{n}dx$$ Combining the circle with the arc length formula (taking a quarter circle and multiplying it by 4) gave the following integral: $$4\int_0^1\left(1+\left|\frac{d}{dx}\left(1-x^n\right)^\frac{1}{n}\right|^n\right)^\frac{1}{n}dx$$ Then $\pi(n)$ is found by dividing the 'circumference' by two (twice the radius). Doing so led to this graph of $\pi(n)$ against $n$. Interestingly, $n=2$ is a minimum (both local and absolute), making our commonly used value of $\pi$ special. Question (EDIT) Math Question: Is my distance formula for a different metric correct? (Moishe Cohen's comment suggests it might not be). Math Question: Assuming the math above is ok, is there a reason for $(2,\pi)$ to be a minimum? Math/Philosophy Question: Assuming the above is ok, is this why we observe the metric $D_2$ in the real world? Note I have not formally studied metrics, tensors or vector spaces or related topics (but am happy to do some light reading if your answer requires it). REPLY [17 votes]: Math Question: Assuming the math above is ok, is there a reason for $(2,\pi)$ to be a minimum? The $p=2$ norm is the only p-norm that has the $SO(2)$ Lie group structure. In other words, it is rotation invariant. Try it: you can rotate the coordinate system without changing the length. You can't do that with the Taxicab metric; the length you get will change. The deep answer to your question is that only $p=2$ has a continuous symmetry, namely rotation. All the other p-norms have only a finite number of symmetries.
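To see the rotation-invariance claim concretely, here is a small numerical sketch (my own illustration, not part of the original answer; the sample vector and rotation angle are arbitrary). Only the $p=2$ norm should be unchanged by the rotation:

```python
import numpy as np

def p_norm(v, p):
    """p-norm of a 2-D vector."""
    return (abs(v[0])**p + abs(v[1])**p)**(1.0 / p)

def rotate(v, theta):
    """Rotate a 2-D vector by the angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([c * v[0] - s * v[1], s * v[0] + c * v[1]])

v = np.array([3.0, 1.0])
for p in (1, 2, 4):
    print(p, p_norm(v, p), p_norm(rotate(v, 0.7), p))
# Only p=2 prints the same length before and after the rotation;
# for p=1 and p=4 the "length" changes, so those norms are not rotation invariant.
```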
Here's the thing: a p-unit circle is something like every possible vector that can be generated from the group, glued together. Imagine starting with a small vector, then copying and rotating it just a bit, attaching it to the original vector, then repeating with the copy. Eventually, you'll generate the circle. You generate any of the p-norm circles using a similar process with their symmetries, albeit a more opaque one. So why is $p=2$ special? The circle is made with the shortest vectors, and only the shortest vectors, available to construct it. Why? Because they're all the same length! This isn't true unless you're rotationally invariant, and guess what? Only $p=2$ is!! Math/Philosophy Question: Assuming the above is ok, is this why we observe the metric $D_2$ in the real world? Quick run through the physics. In physics, you have a theory described by a Lagrangian that describes the amount of action particles accumulate over a length of time. It's a scalar function of distance, speed, and time. The symmetries of the Lagrangian correspond to conserved quantities according to Noether's Theorem. In standard classical mechanics, we assume three things: (1) The action of a particle moving through some path along space is unchanged if you shift the start of the motion forward or backward in time. (Energy is Conserved) (2) The action is unchanged if you translate the start of the motion to another location. (Momentum is Conserved) (3) The action is unchanged if you rotate the motion of the particle. (Angular Momentum is Conserved) The last one isn't talked about too much; however, it's very important and highly relevant to our discussion. Since our Lagrangian, which encodes our physical theory, is invariant under rotation, there can never be a preferred direction in our theory. This means that the metric we are using to measure distance and speed must also be rotation invariant. The $p=2$ norm is the only p-norm that'll work.<|endoftext|> TITLE: On $\big(\tfrac{1+\sqrt{5}}{2}\big)^{12}=\small 161+72\sqrt{5}$ and $\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{161+72\sqrt{5}\,x}}$ QUESTION [14 upvotes]: (This summarizes scattered results from here, here, here and elsewhere. See also this older post.) I. Cubic Define $\beta= \tfrac{\Gamma\big(\tfrac56\big)}{\Gamma\big(\tfrac13\big)\sqrt{\pi}}= \frac{1}{48^{1/4}\,K(k_3)}$. Then we have the nice evaluations, $$\begin{aligned}\frac{3}{5^{5/6}} &=\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-4\big)\\ &=\beta\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+4x^3}}\\[1.7mm] &=\beta\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{\color{blue}{9+4\sqrt{5}}\,x}}\\[1.7mm] &=2^{1/3}\,\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{9+\cosh x}} \end{aligned}\tag1$$ and, $$\begin{aligned}\frac{4}{7} &=\,_2F_1\big(\tfrac{1}{3},\tfrac{1}{3};\tfrac{5}{6};-27\big)\\ &=\beta\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+27x^3}}\\[1.7mm] &=\beta\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{\color{blue}{55+12\sqrt{21}}\,x}}\\[1.7mm] &=2^{1/3}\,\beta\,\int_0^\infty\frac{dx}{\sqrt[3]{55+\cosh x}} \end{aligned}\tag2$$ Note the powers of fundamental units, $$U_{5}^6 = \big(\tfrac{1+\sqrt{5}}{2}\big)^6=\color{blue}{9+4\sqrt{5}}$$ $$U_{21}^3 = \big(\tfrac{5+\sqrt{21}}{2}\big)^3=\color{blue}{55+12\sqrt{21}}$$ Those two instances can't be coincidence. II. Quartic Define $\gamma= \tfrac{\sqrt{2\pi}}{\Gamma^2\big(\tfrac14\big)}= \frac{1}{2\sqrt2\,K(k_1)}=\frac1{2L}$ with lemniscate constant $L$.
Then we have the nice, $$\begin{aligned}\frac{2}{3^{3/4}} &=\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{3}{4};-3\big)\\ &=\gamma\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+3x^4}}\\[1.7mm] &\overset{\color{red}?}=\gamma\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{\color{blue}{7+4\sqrt{3}}\,x}}\\[1.7mm] &=2^{1/4}\,\gamma\,\int_0^\infty\frac{dx}{\sqrt[4]{7+\cosh x}} \end{aligned}\tag3$$ and, $$\begin{aligned}\frac{3}{5}&=\,_2F_1\big(\tfrac{1}{4},\tfrac{1}{4};\tfrac{3}{4};-80\big)\\ &=\gamma\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+80x^4}}\\[1.7mm] &\overset{\color{red}?}=\gamma\,\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{\color{blue}{161+72\sqrt{5}}\,x}}\\[1.7mm] &=2^{1/4}\,\gamma\,\int_0^\infty\frac{dx}{\sqrt[4]{161+\cosh x}} \end{aligned}\tag4$$ with $a=161$ given by Noam Elkies in this comment. (For $4$th roots, I just assumed the equality using the blue radicals based on the ones for cube roots.) Note again the powers of fundamental units, $$U_{3}^2 = \big(2+\sqrt3\big)^2=\color{blue}{7+4\sqrt{3}}$$ $$U_{5}^{12} = \big(\tfrac{1+\sqrt{5}}{2}\big)^{12}=\color{blue}{161+72\sqrt{5}}$$ Just like for the cube roots version, these can't be coincidence. Questions: Is it true these observations can be explained by, let $b=2a+1$, then, $$\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[3]{x^2+ax^3}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small2/3} \sqrt[3]{b+\sqrt{b^2-1}\,x}}=2^{1/3}\int_0^\infty\frac{dx}{\sqrt[3]{b+\cosh x}}$$ $$\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[4]{x^3+ax^4}}=\int_{-1}^1\frac{dx}{\left(1-x^2\right)^{\small3/4} \sqrt[4]{b+\sqrt{b^2-1}\,x}}=2^{1/4}\int_0^\infty\frac{dx}{\sqrt[4]{b+\cosh x}}$$ REPLY [3 votes]: Too long for a comment : In general, for strictly positive values of n we have $$\begin{align} \sqrt[n]2\int_0^\infty\frac{dx}{\sqrt[n]{\cosh2t~+~\cosh x}} ~&=~ \int_0^1\frac{dx}{\sqrt{1-x}\cdot\sqrt[n]{x^{n-1}~+~x^n\cdot\sinh^2t}} \\\\ ~&=~ \int_{-1}^1\frac{dx}{\sqrt[n]{(1-x^2)^{n-1}}\cdot\sqrt[n]{\cosh2t~+~x\cdot\sinh2t}} \end{align}$$<|endoftext|> TITLE: Are kernels unique to homomorphisms? QUESTION [8 upvotes]: I would like to ask if two different homomorphisms can share the same kernel. For instance for the kernel $n \mathbb{Z} $, is it possible to come up with homomorphisms other than the function mapping integers to residue classes modulo $n$? Thanks. REPLY [3 votes]: Think of $k\mapsto\exp(\frac{2k\pi}n\mathbf i):\Bbb Z\to\Bbb C^\times$ which has kernel $n\Bbb Z$.<|endoftext|> TITLE: Finding the area inside the plot $x^4+y^4=x^2+y^2$ QUESTION [5 upvotes]: Find the area inside the plot $x^4+y^4=x^2+y^2$. REPLY [3 votes]: After proceeding with the polar coordinate, it suffices to evaluate the following: $$I = 4\int_{0}^{\frac{\pi}{2}}\dfrac{d\theta}{2-\sin^2(2\theta)}= 2\int_0^{\pi}\dfrac{dx}{2-\sin^2x}$$. Now do a tangent substitution: $$\tan x = t\Rightarrow \sin^2x = \dfrac{t^2}{1+t^2},\,\, dx = \dfrac{dt}{1+t^2}$$ Finally, $$I = 4\int_0^{\infty}\dfrac{dt}{2+t^2} =2\sqrt{2}\cdot(\arctan(\infty) - \arctan(0)) = \sqrt{2}\pi = 4.4429...$$ Or this is also a routine exercise on complex contour integration using the Euler identity $\sin x = \dfrac{e^{ix}-e^{-ix}}{2i}$.<|endoftext|> TITLE: Existence of an onto group homomorphism from $S_4$ to $\Bbb Z_4$ QUESTION [5 upvotes]: Let $S_n$ be the symmetric group of $n$ letters. Then does there exist an onto group homomorphism from $S_4$ to $\Bbb Z_4$? My try: Suppose that $f:S_4 \to \Bbb Z_4$ is a group homommorphism. 
Then $S_4/\ker f\cong \Bbb Z_4\implies o(\ker f)=6\implies \ker f$ is isomorphic to $S_3$ or $\Bbb Z_6$. If $\ker f=\Bbb Z_6\implies S_4\cong \Bbb Z_6\times \Bbb Z_4$ which is false as $S_4$ is not commutative whereas $\Bbb Z_6\times \Bbb Z_4$ is. If $\ker f=S_3\implies S_3$ is a normal subgroup of $S_4$. Now take $S_3=\{e,(12),(23),(13),(123),(132)\}$.Then $(14)(123)(14)=(234)\notin S_3$.Hence $S_3$ is not normal. Is my solution correct?? REPLY [5 votes]: Your argument up to ''$\ker f=S_3\text{ or }\mathbb{Z}_6$'' is correct. But after this, it is possible but lengthy to continue the arguments; for example if kernel is isomorphic to $S_3$ then you have taken it equal to $\{(1), (123),..\}$; this is correct but needs a justification. Better is the following: $|\ker f|=6$, so $\ker f$ contains an element of order $3$. Since elements of order $3$ in $S_4$ are precisely $3$-cycles (easy to prove) and any two $3$-cycles are conjugate, hence all the $3$-cycles of $S_4$ should be in the kernel (since kernel is normal). But now we get a contradiction. How many $3$-cycles are there in $S_4$? What is size of kernel?<|endoftext|> TITLE: Can sum of a series be uncountable QUESTION [12 upvotes]: There are several methods to say whether sum of series is finite or not. Can we say whether sum of series is countable or not. For example $S_n=\Sigma_{0 \leq i \leq n}{2^i}= 2^{n+1}-1$ So for $n=\aleph_0$ $ S_{\aleph_0}=2^{\aleph_0}-1$ So can we say that S has a value which is not countable. But wouldn't that mean that sum of integers turns out be a non integer, as all integers form a set of countables. Or can we say that the series $S_i$ has a limit which lies outside set of inetgers so $\{S_1, S_2, S_3 ...\} \subset I$ has a limit $p$, such that $p \notin I$ Edit: I understood that $\{S_1, S_2, S_3 ...\} \subset I$ has a limit $p$ is incorrect as neither $\aleph_0$ nor $2^{\aleph_0}$ lie in $\mathbf{R}$ but I am still confused about other things. Edit: Suppose we have sets $A_1, A_2, ... A_i...$ each having $2^i$ elements for $i=1,2 ...$ i.e. $\#(A_i)=2^i$. Suppose all these sets are disjoint can we say whether the set $A=\bigcup_{1 \leq i \leq \infty} A_i$ is countable or not. REPLY [14 votes]: Recall that the statement $S_n = 2^{n+1} - 1$ is proven by induction. Induction is not magic - it cannot apply to things that aren't in its domain. For example, just because $S_n = 2^{n+1} - 1$ doesn't mean $S_{\mathrm{apple}} = 2^{\mathrm{apple}+1}-1$. Induction operates in two steps: first, the base case shows that the claim holds at $n = 0$. Second, the induction step shows that if the claim holds for $n$ then it holds for $n+1$. Now, to show that (for example) it works for $10$, you notice that the induction step says that if it works for $9$ then it works for $10$. If it works for $8$, then it works for $9$. And so on and so forth, back to $0$, which we already know works. The thing is that for apples or $\aleph_0$, there isn't a way to step back to $0$ one by one. The argument by induction doesn't work, and so the final result has no reason to work either. EDIT: You mentioned in a comment the equation $(x - 1)(x^n + x^{n - 1} + \ldots + 1) = x^{n+1} - 1$. This equation is proven by induction on $n$. It therefore holds for all natural numbers, but $\aleph_0$ is not covered by the induction. I'll draw your attention, for example, to the case where $n$ is negative, or $n = \pi$. The equation doesn't even make sense for those values of $n$, because those values are too different from positive natural numbers. 
The same is true of $\aleph_0$. As I mentioned in a comment below, virtually nothing that is true of finite $n$ is also true of $\aleph_0$. Even the simple statement $n < n + 1$ is false when $n = \aleph_0$.<|endoftext|> TITLE: Can we always extend a Holder-boundary continuous function to whole domain? QUESTION [9 upvotes]: Let $\Omega\subseteq\mathbb{R}^{n}$ be a smooth domain, and let $f\in C^{\alpha}\left(\partial\Omega\right),$ where $\alpha\in\left(0,1\right).$ Do we always have that there exists a function $\widetilde{f}\in C^{\alpha}\left(\overline{\Omega}\right)$ so that $\left.\widetilde{f}\right|_{\partial\Omega}\equiv f?$ Note that, if $\alpha>1$ the result is true and can be found in the book by Gilbarg + Trundinger (Lemma 6.38, p 137). REPLY [6 votes]: We will use the fact that $$ |x_{1}-x_{2}|^{\alpha}\leq(|x_{1}-x_{3}|+|x_{2}-x_{3}|)^{\alpha}\leq |x_{1}-x_{3}|^{\alpha}+|x_{2}-x_{3}|^{\alpha}. $$ Let $E\subseteq\mathbb{R}^{n}$ and let $f:E\rightarrow\mathbb{R}$ be such that $ |f(x)-f(y)|\leq L|x-y|^{\alpha} $ for all $x,y\in E$. Define $$ h(x):=\inf\left\{ f(y)+L|x-y|^{\alpha}:\,y\in E\right\} ,\quad x\in\mathbb{R}^{n}. $$ If $x\in E$, then taking $y=x$ we get that $h(x)\leq f(x)$. To prove that $h(x)$ is finite for every $x\in\mathbb{R}^{n}$, fix $y_{0}\in E$. If $y\in E$ then $$ f(y)-f(y_{0})+L|x-y|^{\alpha}\geq-L|y-y_{0}|^{\alpha}+L|x-y|^{\alpha}% \geq-L|x-y_{0}|^{\alpha}, $$ and so \begin{align*} h(x) =\inf\left\{ f(y)+L|x-y|^{\alpha}:\,y\in E\right\} \geq f(y_{0})-L|x-y_{0}|^{\alpha}>-\infty. \end{align*} Note that if $x\in E$, then we can choose $y_{0}:=x$ in the previous inequality to obtain $h(x)\geq f\left( x\right) $. Thus $h$ extends $f$. Next we prove that $$ \left\vert h(x_{1})-h\left( x_{2}\right) \right\vert \leq L|x_{1}% -x_{2}|^{\alpha}% $$ for all $x_{1}$,$\,x_{2}\in\mathbb{R}^{n}$. Given $\varepsilon>0$, by the definition of $h$ there exists $y_{1}\in E$ such that $$ h(x_{1})\geq f(y_{1})+L|x_{1}-y_{1}|^{\alpha}-\varepsilon. $$ Since $h\left( x_{2}\right) \leq f(y_{1})+L|x_{2}-y_{1}|^{\alpha}$, we get \begin{align*} h(x_{1})-h\left( x_{2}\right) & \geq L|x_{1}-y_{1}|^{\alpha}-L|x_{2}% -y_{1}|^{\alpha}-\varepsilon\\ & \geq-L|x_{1}-x_{2}|^{\alpha}-\varepsilon. \end{align*} Letting $\varepsilon\rightarrow0$ gives $h(x_{1})-h\left( x_{2}\right) \geq-L|x_{1}-x_{2}|^{\alpha}$. Interchanging the roles of $x_{1}$ and $x_{2}$ proves that $h$ is Holder continuous.<|endoftext|> TITLE: Is there a onto group homomorphism from $\Bbb Q$ to $\Bbb Z$? QUESTION [8 upvotes]: I am learning group-homomorphisms. I have two questions: Is there a onto group homomorphism from $\Bbb Z$ to $\Bbb Q$? Is there a onto group homomorphism from $\Bbb Q$ to $\Bbb Z$? I have the answer of the first one. $\Bbb Z$ is cyclic and homomorphic image of a cyclic group is cyclic but $\Bbb Q$ is not. I am stuck here. Please help me. REPLY [21 votes]: Your answer to the first question is correct. For the second question, suppose that there were an onto homomorphism $f : \mathbb{Q} \to \mathbb{Z}$. Then there exists some $q \in \mathbb{Q}$ such that $f(q) = 1$. But then, $x = f(q/2)$ is an integer satisfying $$2x = x+x = f(q/2) + f(q/2) = f(q/2 + q/2) = f(q) = 1,$$ which is impossible. Therefore there is no such $f$.<|endoftext|> TITLE: Maurer-Cartan form and curvature form on a Lie Group QUESTION [5 upvotes]: We know that, given a Lie group, we can build its Maurer-Cartan form $\omega_G=P^{-1}dP$ (I don't explain now the meaning of these symbols, which I think to be generally known). 
This is a left-invariant form and satisfies the Maurer-Cartan equation $d\omega_G+\omega_G\wedge\omega_G=0$. My question is: how can we relate this form with the so-called "curvature forms" on the left-invariant metric (I use "the" since they're all equivalent)? I'll explain this question by steps: 1) By "connection form" I mean a matrix-valued 1-form on a manifold which satisfies $\nabla e_i=\omega^j_i\otimes e_j$ given a frame $\{e_j\}$. Is there a clear way to obtain the connection form from the M-C form? 2) Given the connection form $\omega$, we define the curvature form $\Theta=d\omega+\omega\wedge\omega$ (i.e. $\Theta^j_i=d\omega_i^j+\omega^j_k\wedge\omega^k_i$). It turns out that $\Theta$ is of the form $\frac{1}{2}R^i_{jkt}\theta^k\wedge\theta^t$, where $\{\theta^j\}$ is the dual coframe of $\{e_j\}$ and $R^i_{jkt}=R_{ijkt}$ is the $(4,0)$ curvature tensor $$R(X,Y,Z,W)=g(\tilde R(Z,W)Y,X)$$ where $\tilde R$ is the well-known curvature $(3,1)$ tensor $\tilde R(Z,W)Y=\nabla_Z\nabla_W Y-\nabla_W\nabla_Z Y-\nabla_{[Z,W]}Y$. 3) Now, if a relation between the M-C form and the connection form $\omega$ exists, it must not be that they simply are the same thing, since in that case the Maurer-Cartan equation would say that $\Theta$ is $0$, and therefore so is the curvature tensor, which isn't true for all Lie groups. But since the expression of $\Theta$ and the Maurer-Cartan equation are so similar, I suspect that a (less trivial) relation must exist. REPLY [2 votes]: A) We consider the trivial principal fiber bundle $P(M,G)$, with $P=M\times G$. Then a regular flat connection is defined on $P$ by considering as horizontal subspace at each $u=(x,a) \in P$ the tangent space of $M\times \{a\}$. We consider now the Maurer-Cartan form $ω_{G}$ on $G$ and the projection map $p:M\times G \rightarrow G$ which induces the form $ω=p^{*}ω_{G}$ on $P$. This is the connection form for the regular flat connection of $P=M\times G$. It can be proved that the curvature is zero: $dω=d(p^{*}ω_{G})=p^{*}(dω_{G})=p^{*}(-[ω_{G},ω_{G}])=-[p^{*}ω_{G},p^{*}ω_{G}]=-[ω,ω]$. Now, set $M=G$ in the above, so we have the trivial principal bundle $p:G\times G\rightarrow G$ with connection form the pullback of the Maurer-Cartan form on $G$. Thus, for this connection the curvature is zero. B) Let's now consider the same question by using another, more general bundle: the principal fiber bundle $L$ of linear frames on $G$. A connection on that bundle is by definition a linear connection on the corresponding relative bundle $E$ of $G$. We remark that the connection notation $∇(X,Y)$ refers to the relative bundle $E$ and it is the "infinitesimal analogue" of the connection structure on $E$, the latter induced by the connection structure on the principal bundle $L$. By Helgason's book, there is a 1-1 correspondence between the set of the left invariant connections on a Lie group $G$ and the set of the bilinear functions $α:g \times g \rightarrow g$, where $g$ is the Lie algebra, with $α(X,Y)=∇(X',Y')$, where $X',Y'$ are the left invariant fields corresponding to $X,Y$. We choose $α$ to be identically zero. So, $α(X,Y)=∇(X',Y')=0$. Then we have a corresponding linear connection on $G$. For this connection we have that the Christoffel symbols $Γ^{k}_{ij}=0$, since it is identically a zero connection, so for the connection forms we have $ω_{j}^{i}=\sum Γ^{i}_{kj}ω^{k}=0$, where $ω^{k}$ give the dual basis locally.
So in the general formula $dω^{i}=-\sum ω_{k}^{i} \wedge ω^{k}+\frac{1}{2}\sum T_{jk}^{i}ω^{j}\wedge ω^{k}$ the first sum on the right side is zero and we get $dω^{i}=\frac{1}{2}\sum T_{jk}^{i}ω^{j}\wedge ω^{k}$. Here $T$ is the torsion tensor. But it can be proved that $T_{jk}^{i}=Γ_{jk}^{i}-Γ_{kj}^{i}-c_{jk}^{i}=-c_{jk}^{i}$, where $c$ are the structural constants of the group $G$. So we get the formula $dω^{i}=-\frac{1}{2}\sum c_{jk}^{i}ω^{j}\wedge ω^{k}$ which is the Maurer-Cartan equation for that connection, and it is not related to the usual Maurer-Cartan equation for $ω_{G}$, see (A). The curvature of that connection is also zero, not because of the last formula, but because the connection form is identically zero.<|endoftext|> TITLE: Frobenius Morphism on Elliptic Curves QUESTION [7 upvotes]: I am having some confusion concerning the Frobenius morphism of an elliptic curve over a finite field $\mathbb{F}_q$ with $q = p^r$ and $p$ prime. I am working with Silverman's "Arithmetic of Elliptic Curves" and currently on the following example: My question is, if the Frobenius endomorphism fixes exactly $E(\mathbb{F}_q)$ and further $E^{(q)}=E$, what does this endomorphism do? Isn't it then just the identity, leaving every point fixed? Or shouldn't it fix the points $E(\mathbb{F}_p)$? I just don't see what the Frobenius morphism does on an elliptic curve over a finite field, if all the points in question are left fixed. I hope someone understands my somewhat messy and unclear question and can help me to free the knot in my head. REPLY [6 votes]: Assume that $E$ is defined over $\Bbb{F}_q$. A point here is that while $\phi_q$ maps the curve $E$ to itself, and fixes all the points of $E(\Bbb{F}_q)$, it does not fix all the points of $E(\Bbb{F}_{q^r})$ for $r>1$. In fact, its fixed points are exactly the points in $E(\Bbb{F}_q)$. But there is more to an elliptic curve defined over $\Bbb{F}_q$ than its rational points! When we consider "all of the curve", we want to include the points of $E(\overline{\Bbb{F}_q})$ with coordinates in an algebraic closure $\overline{\Bbb{F}_q}$ of $\Bbb{F}_q$, i.e. over the union of all the extension fields $\Bbb{F}_{q^r}$. Viewing $\phi_q$ as a mapping from $E(\overline{\Bbb{F}_q})$ to itself brings with it a lot of tools, largely because algebraic geometry really should be done over an algebraically closed field. The set of points in $E(\Bbb{F}_q)$ as such is just a finite collection of points not worthy of being called a curve. As already indicated in the quoted passage, study of the action of $\phi_q$ on $E(\overline{\Bbb{F}_q})$ is at the heart of many a further development. The Hasse bound on the number of the points is just the tip of that iceberg. The celebrated point counting algorithm due to Schoof-Elkies-Atkin depends on the study of the action of $\phi_q$. I am the wrong person to describe all the connections in detail, but I think that the use of $\phi_q$ here is an analogue of the Lefschetz fixed point counting formula that you may have seen in algebraic topology.<|endoftext|> TITLE: Closure of $\mathbb{Q}\times\mathbb{Q}$ in British Rail metric QUESTION [5 upvotes]: I'm wondering what is the closure of $\mathbb{Q}\times\mathbb{Q}$ in $(\mathbb{R}^{2},d)$ where $d$ is the British Rail metric: $$ d(x,y) = \left\{ \begin{array}{lr} ||x-y|| & \text{if} \; \; x,y,0 \; \; \text{are collinear,}\\ || x || + ||y||& \;\;\;\; \text{otherwise.} \end{array} \right. $$
At this moment I'm thinking about the set $$\{(x,y)\in\mathbb{R}^{2}:\exists q\in\mathbb{Q} \quad qx=y\lor x=0\}$$ because I think it is the set of all points that lie on lines passing through $(0,0)$ with rational slope. Is it the correct answer to my question? REPLY [2 votes]: Since this looks like homework, I will only provide some hints. Given two points $p, q$ in $R^2$ I will say that the pair $(p,q)$ has type I if $p, q, 0$ are collinear and type II otherwise. Given any point $p\in R^2$ and a sequence $(q_n)$ in $R^2$ I will say that the sequence $(q_n)$ is of type I if all pairs $(p,q_n)$ are of type I. Similarly for type II. Now, fix $p$. Any sequence $(q_n)$ splits into (at most) two infinite subsequences, each of which has either type I or type II. Assuming that $(q_n)$ is of type II, what can you say about $$ \lim \inf_{n\to\infty} d(p, q_n) ? $$ For what points $p$ is there a sequence $(q_n)$ of type I consisting entirely of points in ${\mathbb Q}^2$? Once you have answered these two questions, you will obtain a description of the closure of ${\mathbb Q}^2$ (with respect to the metric $d$) as a union of certain lines in $R^2$. See also this, for Brian Scott's answer which will help.<|endoftext|> TITLE: Area of parallelogram QUESTION [5 upvotes]: Find the area of the parallelogram spanned by the two vectors (6,0,1,3) and (2,1,3,1). The area is the magnitude of the cross product of the two vectors. Right now the only way I was taught to do the cross product is by taking the determinant after putting the vectors into an i,j,k matrix. I don't know how to do it when the vectors have 4 components. REPLY [3 votes]: The area of your (v,w)-parallelogram is equal to $\sqrt{Gram(v,w)}=\sqrt{\det(AA^T)}$, where $A$ is the $2\times 4$ matrix whose rows are $v$ and $w$. See, for example, Why is the volume of a parallelepiped equal to the square root of $\sqrt{det(AA^T)}$ or How do I compute the area of this parallelogram.<|endoftext|> TITLE: Show that the Galois group is cyclic QUESTION [6 upvotes]: Let $p$ be a prime, $n\in \mathbb{N}$ and $f=x^{p^n}-x-1\in \mathbb{F}_p[x]$ irreducible. Let $a\in \overline{\mathbb{F}}_p$ (the algebraic closure of $\mathbb{F}_p$) be a root of $f$. We have that $\mathbb{F}_p(a)$ contains all the roots of $f$, that for each $b\in \mathbb{F}_{p^n}$ the element $a+b$ is a root of $f$, and that $\mathbb{F}_{p^n}\leq \mathbb{F}_p(a)$. How can we show that $n=p^i$ for any $i\in \{0, 1, \ldots , n\}$? I want to show that $Gal(\mathbb{F}_p(a)/\mathbb{F}_{p^n})$ is cyclic and let $\tau$ be a generator. Since $a+b$ is a root of $f$ for each $b\in \mathbb{F}_{p^n}$, we have that $f$ is separable, or not? So, $\mathbb{F}_p(a)$ is the splitting field of the separable polynomial $f\in \mathbb{F}_p[x]$. Therefore the extension $\mathbb{F}_p(a)/\mathbb{F}_p$ is Galois, with $|Gal(\mathbb{F}_p(a)/\mathbb{F}_p)|=[\mathbb{F}_p(a):\mathbb{F}_p]=\deg f=p^n$. How could we continue? REPLY [4 votes]: We know from your other topic that this is a Galois extension. Now consider $|K|=|\Bbb F_p(a)| = p^k(=p^{p^n})$, then $[K:\Bbb F_p]=k \; (=p^n)$. We also know that the defining characteristic of elements of $K$ is that $\alpha^{p^k}-\alpha=0$, and $k$ is the smallest positive integer for which this holds for all $\alpha\in K$. But then the field automorphism $x\mapsto x^p$ has order $k$ in the Galois group. But as $|\text{Gal}(K/\Bbb F_p)|=k$, we see that this automorphism generates the group.
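As a small computational illustration of the Frobenius map generating the Galois group (my own sketch, not part of the original answer; it uses the concrete field $\Bbb F_{2^4}$ rather than the field in the question), one can check directly that $x\mapsto x^p$ has order exactly $k=[K:\Bbb F_p]$:

```python
# Elements of GF(2^4) are encoded as 4-bit integers (coefficient vectors of
# polynomials over GF(2)); multiplication reduces modulo the irreducible x^4 + x + 1.
MODULUS = 0b10011  # x^4 + x + 1

def gf_mul(a, b):
    """Multiply two elements of GF(2^4)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:   # degree reached 4, reduce
            a ^= MODULUS
    return r

def frobenius(a):
    """The Frobenius map x -> x^2 (here p = 2)."""
    return gf_mul(a, a)

elements = list(range(16))
images = {a: a for a in elements}
for m in range(1, 6):
    images = {a: frobenius(v) for a, v in images.items()}
    if all(v == a for a, v in images.items()):
        print("Frobenius has order", m)   # prints 4, which is [GF(16) : GF(2)]
        break
```

Since the Galois group of $\Bbb F_{2^4}/\Bbb F_2$ has order $4$, an automorphism of order $4$ necessarily generates it, matching the argument above.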
Note: Just as in your previous topic, there is absolutely nothing special about the polynomial you chose: this is true for all finite extensions of finite fields.<|endoftext|> TITLE: Self Teaching Analysis, Topology and Differential Geometry QUESTION [5 upvotes]: I posted some questions similar to this one not long ago, but I think I phrased them wrong and as such got valuable input but not really an answer to my questions. If reposting a similar question like this is against the rules of this forum please tell me! I am still new to this site and eager to learn. Basically I am in electrical engineering (3rd year) but I think I should have done an undergrad in math. I have a limited interest in application and am really much more interested in class when we do rigorous math. At this point I am considering either doing a master's degree in math or going into Control Theory (not control systems, but rather the mathematical theories behind nonlinear control, which I guess is more applied math than engineering). What I want to do now is go through textbooks on my own to make up for the butchery of math that happened in my engineering classes. The great thing is that I will have a year-long internship starting in May during which I will have time to dedicate to this. I want to emphasize that although there is a strong chance I will go into Control Theory, I want to cover rigorous math and detach myself almost entirely from engineering. My real question/concern is what textbooks to use for each subject and what order to do it all in. Here is my current plan, I would very much like your input: Real Analysis by Chapman Pugh (I'm already quite deep into it and loving it); Topology by Munkres (Part I: General Topology); Abstract Algebra by Dummit and Foote (Group Theory); Topology by Munkres (Part II: Algebraic Topology); Smooth Manifolds by John M. Lee. And then perhaps more of the Abstract Algebra textbook and/or an intro to Dynamical Systems, Chaos and Fractals. Please note I do have some experience in proofs despite engineering. Some of it is due to computer science courses being quite rigorous, and the rest is self practice. For example, I am finding Chapman Pugh quite accessible. REPLY [2 votes]: I think that Eric Auld's answer is fantastic, and I agree with most of what he said. I'm adding an answer just to throw in my own two cents. Learning some algebra is good, but if you want to do control theory you might be better off initially spending less time on that and more time on learning some dynamics/ODEs. For a first dynamics introduction, "Nonlinear Dynamics and Chaos" by Strogatz is very good and a pleasure to read. For ODEs, I'm a fan of "Differential Equations, Dynamical Systems, and Linear Algebra" by Hirsch/Smale (the 1974 edition), and also the ODE text by V.I. Arnold. Chapters 1-3 of Munkres are a good first exposure to topology, but I think that Lee's topological manifolds book is especially useful background for differential topology/geometry. I also liked the algebraic topology introduction in Lee's book a little better than the one in Munkres. Lee's smooth manifolds book is now my favorite on the subject, but I read other books first. You might also like the books by Boothby and Guillemin/Pollack. I was in your (almost) exact same shoes 5 years ago: third year electrical engineering student, unhappy with rigor in engineering classes, realized I should have been a math major, reading mathematics books, etc. I am now an electrical engineering PhD student in control theory/dynamics.
Feel free to PM me if you ever want to talk more!<|endoftext|> TITLE: Show that a convex compact set in $R^2$ can be cut into 4 sets of equal area by 2 perpendicular lines QUESTION [9 upvotes]: Okay, I need to show this using calculus and the mean value theorem. My try: Let $D$ be a convex and compact set in $R^2$. Now let $R$ be a compact closed rectangle such that $D \subset R$. Draw two lines parallel to the axes such that $D$ is now composed of four subsets; name them so that $R_1 \cup R_2 \cup R_3 \cup R_4 = D$. Okay, now create these functions: $$F(p)= 1 \ \text{if}\ p\in R_1 \\ =0 \ \text{else}\\\\G(p)= 1 \ \text{if}\ p\in R_2 \\ =0 \ \text{else} \\\\ H(p)= 1 \ \text{if}\ p\in R_3 \\ =0 \ \text{else} \\\\ T(p)= 1 \ \text{if}\ p\in R_4 \\ =0 \ \text{else}$$ Then we have $$\int\int_{R}F(p)+\int\int_{R}G(p)+\int\int_{R}H(p)+\int\int_{R}T(p)=A(D)$$ where $A(D)$ denotes the area of this compact convex set $D$. Now since $$\int\int_{R}F(p)= A(R_1)=\sum_{R_{ij}: \bigcup R_{ij}=R_1} A(R_{ij})$$ we can say that $\exists R_{ij}: \bigcup R_{ij}=R_1$ such that $$\sum_{R_{ij}: \bigcup R_{ij}=R_1} A(R_{ij})> \frac{A(D)}{4}$$ Also $\exists R_{ij}: \bigcup R_{ij}=R_1$ such that $$\sum_{R_{ij}: \bigcup R_{ij}=R_1} A(R_{ij}) < \frac{A(D)}{4}$$ Then by the intermediate value theorem $\exists R_{ij}: \bigcup R_{ij}=R_1$ such that $$\sum_{R_{ij}: \bigcup R_{ij}=R_1} A(R_{ij}) = \frac{A(D)}{4}$$ Then we can find $R_{ij}$ which form $R_1$ so that the above is the case. I can apply the same procedure to the other functions. What do you think about this approach? Is it correct or incorrect, and what kind of correction does it need? REPLY [4 votes]: Let me sketch the proof (it is going to be relatively brief, so a lot of details need to be checked...). Say $S$ is your compact convex set in the plane; take a huge circle $C$ that contains $S$. Fix a point $p$ on the circle, and let $D_p$ be the corresponding diameter of $C$. A simple exercise shows that we can find a line $L_p$ parallel to $D_p$ and cutting $S$ in two pieces of the same area, and similarly we can find a line $L_p'$ orthogonal to $D_p$ and cutting $S$ in two pieces of the same area. These two lines $L_p$ and $L_p'$ determine a division of $S$ into $4$ pieces (number them say counterclockwise) $S_1(p)$, ... , $S_4(p)$ and it is clear that $S_1(p)$ and $S_3(p)$ have the same area, and similarly for $S_2(p)$ and $S_4(p)$. So we just need to ensure that there is a point $p$ on the circle $C$ for which $S_1(p)$ and $S_2(p)$ have the same area. Now study the continuous function given by $$A(p) := \text{area}(S_1(p)) - \text{area}(S_2(p))$$ and apply the intermediate value theorem (it is easy to find two points $p$, $p'$ such that $A(p) = -A(p')$).<|endoftext|> TITLE: Prove that $\sqrt{5n+2}$ is irrational QUESTION [9 upvotes]: I'm trying to follow this answer to prove that $\sqrt{5n+2}$ is irrational. So far I understand that the whole proof relies on being able to prove that $(5n+2)|x^2 \implies (5n+2)|x$ (which is why $\sqrt{4}$ doesn't fit, but $\sqrt{7}$ etc. does); this is where I got stuck. Maybe I'm overcomplicating it, so if you have a simpler approach, I'd like to know about it. :) A related problem I'm trying to wrap my head around is: Prove that $\frac{5n+7}{3n+4}$ is irreducible, i.e. $(5n+7)\wedge(3n+4) = 1$. REPLY [3 votes]: Since $$ \{0,1,2,3,4\}\overset{x^2}{\longrightarrow}\{0,1,4\}\pmod{5}\tag{1} $$ and $$ 2\not\in\{0,1,4\}\pmod{5}\tag{2} $$ we know that $2$ is not a square mod $5$. That means that $$ 5n+2=x^2\tag{3} $$ has no solutions with $n,x\in\mathbb{Z}$.
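A quick numerical sanity check of $(1)$ and $(2)$ (my own sketch, not part of the original argument):

```python
squares_mod_5 = {x * x % 5 for x in range(5)}
print(squares_mod_5)        # {0, 1, 4}
print(2 in squares_mod_5)   # False: 2 is not a square mod 5, so 5n+2 is never a perfect square
```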
If $\sqrt{5n+2}\in\mathbb{Q}$, then there is some $x\in\mathbb{Q}$ so that $$ x^2-(5n+2)=0\tag{4} $$ which implies that $x$ is a rational algebraic integer; therefore, by this answer, it must be an integer. However, this contradicts that $(3)$ has no integer solutions. Therefore, $$ \sqrt{5n+2}\not\in\mathbb{Q}\tag{5} $$ Since $$ 3(5n+7)-5(3n+4)=1\tag{6} $$ we know that $5n+7$ and $3n+4$ have no common factors.<|endoftext|> TITLE: Asymptotic behavior of digit sum of $2^{n}$ QUESTION [6 upvotes]: Terence Tao in his brilliant book Solving Mathematical Problems: a Personal Perspective states (page 17): It is highly probable (though not proven!) that the digit-sum of $2^{n}$ is approximately $(4.5 \log_{10} 2)n\approx 1.355n$ for large $n$. This problem sounds very interesting to me! Do you know something more about this problem? What is its exact formulation (replacing the word approximately with a limit)? Is it still not proven? I found only this, which does not satisfy me enough. Thank you very much for your answers and have a nice week! REPLY [4 votes]: As discussed in the comments: Informally, this is a statement about the distribution of the digits in $2^n$. If we imagined that they were distributed uniformly, then the claim would follow at once: The average value of a randomly selected digit is $\frac 92=4.5$ and the number of digits in $2^n$ is $\lceil n\log_{10} 2\rceil$. Of course, it isn't at all clear that this assumption is justified, nor is it clear how one might go about proving it. Indeed, the units place is obviously not uniform (it must be even) nor is the lead digit uniform (the low digits are favored disproportionately, see, e.g., this). Of course, a couple of digits at the ends do not have any material impact on the overall distribution of the digits.<|endoftext|> TITLE: Evaluate $\sum_{n=1}^{\infty}{1\over n^5}$ up to the second decimal place QUESTION [5 upvotes]: I am trying to evaluate $$\sum_{n=1}^{\infty}{1\over n^5}$$ up to the second decimal place. While the series is convergent, I have no idea how to construct such a bound, preferably using basic properties of series and sequences. Any hints? REPLY [3 votes]: Alternating Series $$ \begin{align} &1+\frac1{2^5}+\frac1{3^5}+\frac1{4^5}+\frac1{5^5}+\frac1{6^5}+\dots\\ &\phantom{1}-\frac{2}{2^5}\phantom{+\frac1{3^5}1}-\frac2{4^5}\phantom{+\frac1{5^5}1}-\frac{2}{6^5}\tag*{$\left(-\frac2{2^5}\text{ times the line above}\right)$}\\ =&1-\frac1{2^5}+\frac1{3^5}-\frac1{4^5}+\frac1{5^5}-\frac1{6^5}+\dots\\ \end{align} $$ Thus, the alternating sum is $\frac{15}{16}$ of the non-alternating sum. By the Alternating Series Test, the error at any point is less than the first unused term. Thus to get two digits of precision, we only need to go up to, but not including, $\frac1{3^5}=\frac1{243}$. That is, $$ \begin{align} \sum_{k=1}^\infty\frac1{k^5} &\approx\frac{16}{15}\left(1-\frac1{2^5}\right)\\ &=1.0333333 \end{align} $$ to within $\frac{16}{15}\frac1{243}=0.0043896$. In fact, the Alternating Series Test says that $$ 1.0333333\le\sum_{k=1}^\infty\frac1{k^5}\le1.0377229 $$ where the upper bound is $$ \frac{16}{15}\left(1-\frac1{2^5}+\frac1{3^5}\right) $$ The next lower bound is $$ \frac{16}{15}\left(1-\frac1{2^5}+\frac1{3^5}-\frac1{4^5}\right)=1.0366812 $$ So now we can say that $$ \bbox[5px,border:2px solid #C0A000]{1.0366812\le\sum_{k=1}^\infty\frac1{k^5}\le1.0377229} $$ Thus, to two decimal places, the sum would be $1.04$.
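Here is a short numerical check of these alternating-series bounds (a sketch of my own, not part of the original answer), compared against a brute-force partial sum of the series:

```python
from fractions import Fraction

def alternating_bound(n_terms):
    """16/15 times a truncation of 1 - 1/2^5 + 1/3^5 - ... (exact rational arithmetic)."""
    s = sum(Fraction((-1) ** (k + 1), k ** 5) for k in range(1, n_terms + 1))
    return Fraction(16, 15) * s

lower = alternating_bound(4)   # truncation ending on a negative term -> lower bound
upper = alternating_bound(3)   # truncation ending on a positive term -> upper bound
direct = sum(1.0 / k ** 5 for k in range(1, 100001))   # brute-force partial sum

print(float(lower), float(upper))   # ~1.0366812 and ~1.0377229, as above
print(direct)                       # ~1.0369278, which indeed lies inside the bracket
```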
Euler-Maclaurin Sum Formula Using the Euler-Maclaurin Sum Formula: $$ \begin{align} &\sum_{k=1}^{100}\frac1{k^5}-\left[-\frac1{4n^4}+\frac1{2n^5}-\frac5{12n^6}+\frac7{24n^8}-\frac1{2n^{10}}+\frac{11}{8n^{12}}-\frac{65}{12n^{14}}\right]_{n=100}\\[9pt] &=1.036927755143369926331365486457 \end{align} $$ The next term is $\frac{691}{24n^{16}}$, so the error is on the order of $3\times10^{-31}$. Estimating the Error with Bernoulli's Inequality Using the Alternating Series Test is pretty easy, but we can also use Bernoulli's Inequality to estimate the error. For $p\ge1$, $$ \begin{align} \frac1{n^p}-\frac1{(n+1)^p} &=\frac1{n^p}\left[1-\frac1{\left(1+\frac1n\right)^p}\right]\\ &\ge\frac1{n^p}\left[1-\frac1{1+\frac pn}\right]\\ &=\frac1{n^p}\frac{\frac pn}{1+\frac pn}\\ &\ge\frac1{n^p}\frac{\frac pn}{\left(1+\frac1n\right)^p}\\[4pt] &=\frac p{n(n+1)^p}\\[8pt] &\ge\frac p{(n+1)^{p+1}} \end{align} $$ The first two inequalities are due to Bernoulli's inequality, the last is simply because $n\lt n+1$. Using this inequality, we get that $$ \begin{align} \sum_{k=1}^\infty\frac1{k^{p+1}}-\sum_{k=1}^n\frac1{k^{p+1}} &=\sum_{k=n+1}^\infty\frac1{k^{p+1}}\\ &=\sum_{k=n}^\infty\frac1{(k+1)^{p+1}}\\ &\le\frac1p\sum_{k=n}^\infty\left(\frac1{k^p}-\frac1{(k+1)^p}\right)\\ &=\frac1{p\,n^p} \end{align} $$ Thus, the error in approximating the infinite sum using $n$ terms is at most $\frac1{4n^4}$. Therefore, the error in $$ \sum_{k=1}^3\frac1{k^5}=1.0353652 $$ is at most $\frac1{4\cdot3^4}=0.0030864$. That is, $$ \bbox[5px,border:2px solid #C0A000]{1.0353652\le\sum_{k=1}^\infty\frac1{k^5}\le1.0384516} $$ Thus, to two decimal places, the sum would be $1.04$.<|endoftext|> TITLE: Is $\sqrt{1 + \sqrt{2}}$ a unit in some ring of algebraic integers? QUESTION [7 upvotes]: Since $\sqrt{1 + \sqrt{2}}$ has a minimal polynomial $x^4 - 2x^2 - 1$, it seems to me like this number should be a unit in some ring of algebraic integers. My first thought was that maybe it's a unit in the ring of algebraic integers of $\mathbb{Q}(\root 4 \of 2)$, but $$\frac{\sqrt{1 + \sqrt{2}}}{1 - \root 4 \of 2} = -\sqrt{17 + 12 \sqrt{2} + 2 \sqrt{140 + 99 \sqrt{2}}},$$ an algebraic integer of degree $8$. So much for that. Next it occurs to me that maybe what I'm looking for is the ring of algebraic integers of $\mathbb{Q}(\sqrt{1 + \sqrt{2}})$, which may be integrally closed but I can't say for sure. REPLY [9 votes]: Letting $\alpha = \sqrt{1 + \sqrt{2}}$ and $K = \mathbb{Q}(\sqrt{1 + \sqrt{2}})$, then $\alpha$ is indeed a unit in the ring of integers $O_K$, and even in the ring $\mathbb{Z}[\alpha]$ (which may or may not be the full ring of integers). One can see this from the minimal polynomial that you found: $$ \alpha^4 - 2 \alpha^2 - 1 = 0 \implies 1 = (\alpha^3 - 2 \alpha)\alpha \implies \frac{1}{\alpha} = \alpha^3 - 2 \alpha \, . $$ In fact, an element is a unit in the ring of algebraic integers iff the constant term of its minimal polynomial is a unit in $\mathbb{Z}$, i.e., $\pm 1$. (Note that the constant term of the minimal polynomial is $\pm$ the field norm $N_{K/\mathbb{Q}}(\alpha)$.)<|endoftext|> TITLE: Why is the upper Riemann integral the infimum of all upper sums? 
QUESTION [7 upvotes]: I was reading the theory of Riemann integration when I cam across the following , If $f$ is bounded on $[a,b]$, and $P = \{x_0,x_1,x_2.......x_n\}$ is a partition of $[a,b]$, let $$M_j = \sup_{x_{j-1}\leq x\leq x_j}f(x)$$ The upper sum of f over P is $$S(P) = \sum_{j=1}^{n} M_j(x_j-x_{j-1})$$ and the upper integral of $f$ over $[a,b]$, denoted by $$\int_{a}^{b^-} f(x)dx$$ is the infimum of all upper sums. The theorem similarly goes on to state the result for lower sums. My doubt is : I do not understand how is $$\int_{a}^{b^-} f(x)dx$$ the infimum of all upper sums. I understand that if we refine the partition P, then the upper sum would decrease, so it may be a lower limit for all the upper sums computed on the refinements of P (but still being the lower limit does not prove that it is the infimum) and what about those partitions for which P itself is the refinement of? How do I know that it will be a lower limit for those, let alone a infimum? REPLY [15 votes]: Your question does have some ambiguity. From the wording of your question and comments it appears that you want to know: Does the limit of upper sums (with respect to partitions getting finer and finer) equal the infimum of all upper sums? First of all note that when we are dealing with limits of things dependent on a partition of an interval then there are two ways in which the limit operation can be defined: 1) Limit via refinement of a partition: Let $P = \{x_{0}, x_{1}, x_{2},\ldots, x_{n} \}$ be a partition of $[a, b]$ where $$a =x_{0} < x_{1} < x_{2} < \cdots < x_{n} = b$$ A partition $P'$ of $[a, b]$ is said to be a refinement of $P$ (or finer than $P$) if $P \subseteq P'$. Let $\mathcal{P}[a, b]$ denote the collection of all partitions of $[a, b]$ and let $F:\mathcal{P}[a, b] \to \mathbb{R}$ be a function. A number $L$ is said to be the limit of $F$ (via refinement) if for every $\epsilon > 0$ there is a partition $P_{\epsilon}\in \mathcal{P}[a, b]$ such that $|F(P) - L| < \epsilon$ for all $P \in \mathcal{P}[a, b]$ with $P_{\epsilon} \subseteq P$. 2) Limit as norm of parititon tends to $0$: If $P = \{a = x_{0}, x_{1}, x_{2}, \ldots, x_{n} = b\}$ is a partition of $[a, b]$ then the norm $||P||$ of partition $P$ is defined as $||P|| = \max_{i = 1}^{n}(x_{i} - x_{i - 1})$. Let $\mathcal{P}[a, b]$ denote the collection of all partitions of $[a, b]$ and let $F: \mathcal{P}[a, b] \to \mathbb{R}$ be a function. A number $L$ is said to be limit of $F$ as norm of partition tends to $0$ if for every $\epsilon > 0$ there is a $\delta > 0$ such that $|F(P) - L| < \epsilon$ for all $P\in \mathcal{P}[a, b]$ with $||P|| < \delta$. This is written as $\lim_{||P|| \to 0}F(P) = L$. Note that for a given function $F:\mathcal{P}[a, b] \to \mathbb{R}$ the limiting behavior of $F$ can be different according to these two definitions given above. In fact if $F(P) \to L$ as $||P||\to 0$ then $F(P) \to L$ via refinement but the converse may not hold in general. Let us establish that if $F(P) \to L$ as $||P||\to 0$ then $F(P) \to L$ via refinement. Let $\epsilon>0$ be arbitrary and let $\delta>0$ be such that $|F(P) -L|<\epsilon$ whenever $||P||<\delta$. Let us now choose any specific partition $P_{\epsilon} $ with $||P_{\epsilon} ||<\delta$. If $P_{\epsilon} \subseteq P$ then $$||P||\leq ||P_{\epsilon} ||<\delta\tag{A} $$ and hence by our assumption $|F(P) - L|<\epsilon $. Therefore it follows that $F(P) \to L$ via refinement also. Notice that the argument here crucially hinges on inequality $(\text{A}) $. 
Starting with an $\epsilon>0$ we first found a $\delta>0$ via the given assumption $\lim_{||P||\to 0}F(P)=L$. The process of finding a suitable partition $P_{\epsilon} $ crucially depends on the implication $$P, Q\in\mathcal{P} [a, b], P\subseteq Q\implies ||Q||\leq||P||$$ which leads to inequality $(\text{A}) $ above. If the reverse implication $$P, Q \in \mathcal{P} [a, b], ||Q||\leq||P||\implies P\subseteq Q $$ were true then one could provide a similar argument as in last paragraph to prove that if $F(P) \to L$ via refinement then $F(P) \to L$ as $||P||\to 0$. We just need to set $\delta=||P_{\epsilon} ||$ and we are done. But this is not the case. Now let $f$ be a function defined and bounded on $[a, b]$ and let $P = \{x_{0}, x_{1}, x_{2}, \ldots x_{n}\}$ be a partition of $[a, b]$. Let $M_{k} = \sup\,\{f(x), x \in [x_{k - 1}, x_{k}]\}$ and let $\mathcal{P}[a, b]$ denote the collection of all partitions of $[a, b]$. We define the upper sum function $S:\mathcal{P}[a, b] \to \mathbb{R}$ by $$S(P) = \sum_{k = 1}^{n}M_{k}(x_{k} - x_{k - 1})$$ It is easy to prove that if $m = \inf\,\{f(x), x \in [a, b]\}$ then $S(P) \geq m(b - a)$ for all $P \in \mathcal{P}[a, b]$ and further if $P, P' \in \mathcal{P}[a, b]$ are such that $P \subseteq P'$ then $S(P') \leq S(P)$. It follows that $J = \inf\,\{S(P), P \in \mathcal{P}[a, b]\}$ exists. Your question can now be worded more concretely into one of the following two forms: Does $S(P) \to J$ via refinement? or Does $\lim_{||P|| \to 0}S(P) = J$? The answer to the first question is obviously "yes" and you should be able to prove this using the definition of limit via refinement given above. The answer to second question is also "yes" but it is difficult to prove. We first prove the result for a non-negative function $f$. Let $\epsilon > 0$ be given. Since $J = \inf\,\{S(P), P \in \mathcal{P}[a, b]\}$, there is a partition $P_{\epsilon} \in \mathcal{P}[a, b]$ such that $$J \leq S(P_{\epsilon}) < J + \frac{\epsilon}{2}\tag{1}$$ Let $P_{\epsilon} = \{x_{0}', x_{1}', x_{2}', \ldots, x_{N}'\}$ and let $M = \sup\,\{f(x), x \in [a, b]\} + 1$. Let $\delta = \epsilon / (2MN)$ and consider a partition $P = \{x_{0}, x_{1}, x_{2}, \ldots, x_{n}\}$ with $||P|| < \delta$. We can write $$S(P) = \sum_{k = 1}^{n}M_{k}(x_{k} - x_{k - 1}) = S_{1} + S_{2}\tag{2}$$ where $S_{1}$ is the sum corresponding to the index $k$ for which $[x_{k - 1}, x_{k}]$ does not contain any point of $P_{\epsilon}$ and $S_{2}$ is the sum corresponding to other values of index $k$. Clearly for $S_{1}$ the interval $[x_{k - 1}, x_{k}]$ lies wholly in one of the intervals $[x_{j - 1}', x_{j}']$ made by $P_{\epsilon}$ and hence $S_{1} \leq S(P_{\epsilon})$ (note that $f$ is non-negative). For $S_{2}$ we can see that the number of such indexes $k$ is no more than $N$ and hence $S_{2} < MN\delta = \epsilon / 2$ (note that $f$ is non-negative here). It follows that $$J \leq S(P) = S_{1} + S_{2} < S(P_{\epsilon}) + \frac{\epsilon}{2} < J + \epsilon\tag{3}$$ for all $P \in \mathcal{P}[a, b]$ with $||P|| < \delta$. It follows that $S(P) \to J$ as $||P|| \to 0$. Extension to a general function $f$ can be achieved by writing $f(x) = g(x) + m$ where $m = \inf\,\{f(x), x \in [a, b]\}$ and noting that $g$ is non-negative. Another interesting example showing the difference between two limit definitions is given in this answer. Note: The limit of a Riemann sum is based on the two definitions given above but there is a slight complication. 
A Riemann sum depends not only on a partition but also on a choice of tags corresponding to that partition. Formally, one can view a Riemann sum not as a function from $\mathcal{P}[a, b]$ to $\mathbb{R}$ but rather as a relation from $\mathcal{P}[a, b]$ to $\mathbb{R}$ such that it relates every partition of $[a, b]$ to one or more real numbers.<|endoftext|> TITLE: Interesting tiling with a lot of symmetrical shapes QUESTION [10 upvotes]: I have an interesting observation: if I take a square grid and rotate it over itself by $\arctan(3/4)$, it forms a structure which has four axes of reflection symmetry: The resulting structure is really mesmerizing; I see a lot of symmetric shapes in it: octagons, stars, rhombuses. And all of them appear inside this structure in different sizes, namely scaled by an integer factor: Also, looking at minimal periods, I notice that I can create exactly the same structure from rhombus grids laid over each other at 90 degrees, namely from rhombuses whose height is double their width. And the same structure, only with one additional square grid over it, I can create from a parallelogram. I once asked a question about this parallelogram (Very special geometric shape - parallelogram (No name yet?)). The relation between all shapes which spawn these grids: To spawn it from the parallelogram, I copy-reflect the parallelogram grid and then copy-rotate the whole by 90 degrees:
Note that in Alex Wertheim's comment, he used choice to pick a representative of $\{a: f(a)=x\}$; however, since the domain of our map was well-ordered, we didn't need choice, and we could simply pick the least element. This works in general for any well-orderable set (and every countable set is well-orderable, by definition). By contrast, if $X$ is not well-orderable, then we may have a $Y$ which is larger than $X$ ($X$ injects into $Y$ but $Y$ doesn't inject into $X$) but such that $X$ does surject onto $Y$. Math without choice is weird. In the specific case $Y=\mathbb{R}$, we can do even better: given a map $f:\mathbb{N}\rightarrow\mathbb{R}$, define a sequence of nontrivial closed intervals $I_n=[a_n, b_n]$ such that $I_n\supseteq I_{n+1}$ and $f(n)\not\in I_n$ (it's easy to show that such a sequence of intervals exists. Then $J=\bigcap I_n$ is nonempty; but any element of $J$ is not in the range of $f$. This is, in fact, Cantor's original argument for the uncountability of $\mathbb{R}$.<|endoftext|> TITLE: Relationship between properties of linear transformations algebraically and visually QUESTION [5 upvotes]: I learned from 3Blue1Brown's Linear Algebra videos that a 2-D transformation is linear if it follows these rules: lines remain lines without getting curved the origin remains fixed in place grid lines remain parallel and evenly spaced I'm now going through linear algebra from a textbook, which lays out this definition of a linear transformation: T(u+v) = T(u) + T(v) T(cu) = cT(u) I'm wondering, is there a connection between these two ways of thinking of linear transformations? Do the visual ways of seeing 2-D linear transformations correspond to the formal definition when in 2-D? REPLY [2 votes]: If we make reasonable algebraic definitions for the three points above, then the answer is yes: there is a correspondence between the geometric perspective and the algebraic perspective. Let $f$ be a map that fixes the origin ($f(\vec{0}) = \vec{0}$) and sends lines of evenly spaced points to lines of evenly spaced points ($f(\vec{u} + c\vec{v}) = \vec{u}' + c\vec{v}'$). Then we have $$ f(c\vec{v}) = c\vec{v}' $$ If we take $c=1$ then we get $f(\vec{v}) = \vec{v}'$. So $f(c\vec{v}) = cf(\vec{v})$. Now consider $$ f(c_1\vec{u} + c_2\vec{v}) = \vec{u}' + c_2\vec{v}' $$ If we set $c_2 = 0$, we get $$ f(c_1\vec{u}) = c_1 f(\vec{u}) = \vec{u}' $$ So we have $$ f(c_1 \vec{u} + c_2 \vec{v}) = c_1f(\vec{u}) +c_2\vec{v}' $$ But if $c_1 = 0$, we then have $$ f(c_2 \vec{v}) = c_2 f(\vec{v}) = c_2 \vec{v}' $$ So $f(c_1 \vec{u} + c_2 \vec{v}) = c_1 f(\vec{u}) + c_2f(\vec{v})$. So $f$ is linear. Therefore, if $f$ fixes the origin and sends lines of evenly spaced points to evenly spaced points, then $f$ is linear. Can you show the converse ?