TITLE: Symmetries and eigenvalues of the Laplacian. QUESTION [26 upvotes]: Let's consider a sufficiently smooth domain $\Omega \subseteq \mathbb R^2$, and the eigenvalue problem for the Laplacian \begin{align} -\Delta u &= \lambda u &x\in\Omega\\ u &= 0 &x\in \partial \Omega \end{align} I am interested in an explicit relation between the group of symmetries of $\Omega$ and the multiplicity of the eigenvalues of $-\Delta$. The idea is that if I have a symmetry $R$ and an eigenfunction $u$, then $u(R(x))$ will also be an eigenfunction with the same eigenvalue. For instance, in the case of the square we have as symmetry group $D_4$. In this case a basis of eigenfunctions is (for the square with sides of length $\pi$) $$u_{nm}(x,y) = \sin(n x)\sin(m y)$$ so for $n \not=m$ we have that $u_{nm}$ and $u_{mn}$ are two linearly independent functions associated to the same eigenvalue, and we can obtain one from the other with the symmetry $R:(x,y) \rightarrow (y,x)$. But not every degeneracy can be attributed to the symmetries: for instance $u_{5,5}$ and $u_{1,7}$ are associated to the same eigenvalue ($5^2+5^2 = 50 = 1^2+7^2$), but there is no symmetry relating them. On the other hand, symmetry does not imply degeneracy: for instance, for a rectangle whose aspect ratio is not the square root of a rational, the spectrum is simple, but the rectangle still has the symmetry group $\mathbb Z_2 \times \mathbb Z_2$. In particular, I would like to know if having no symmetries implies no degeneracy of the spectrum. REPLY [2 votes]: For the last question, you already gave an answer. You noticed that on the square, $u_{5,5}$ and $u_{1,7}$ are not linked by symmetries, but have the same eigenvalue. Notice now that the sine functions vanish at every multiple of $\pi$, so you can build a non-symmetric domain $\Omega$ by gluing together adjacent squares. For example, take $$ \Omega = [-\pi,\pi]^2 \,\,\cup\,\, [-\pi,\pi]\times[\pi,3\pi]\,\,\cup\,\, [\pi,3\pi]\times[-\pi,\pi]\,\,\cup\,\, [3\pi,5\pi]\times[-\pi,\pi] $$ This domain has no symmetries, but $u_{5,5}$ and $u_{1,7}$ are still linearly independent and have the same eigenvalue.<|endoftext|> TITLE: What does it mean to integrate a Brownian motion with respect to time? QUESTION [10 upvotes]: I am reading about stochastic processes, but could not make sense of one equation I encountered. Can anyone help me understand it? The equation states that if $R(s)$ is an interest rate process, then the discount process is $D(t)=e^{- \int_0^t R(s)ds} $. Suppose $R(t)=W(t)$ is a simple Brownian motion; what does $\int_0^t R(s)ds$ mean? Is it a Lebesgue integral? Or is it an Itô integral? How do I interpret it intuitively? This is from chapter 5 of Shreve's Stochastic Calculus for Finance, equation 5.2.17 on page 215. REPLY [8 votes]: It might be easier to think about if we abstract a little. Brownian motion is just a continuous, time-dependent random variable determined by some probability space $(X,\mathcal F, P)$. Let us view it as a function of two variables, $f(x,t)$. If $x$ is fixed, we have a continuous function, $f_x(t)=f(x,t)$, and we can compute its integral $F(x,t)=F_x(t)=\int_0^t f_x(s) ds$, where we have just taken the usual integral of a continuous function (Riemann, Lebesgue, whatever; all the definitions agree for nice continuous functions). Thus, we have a transformation of the space of time-dependent continuous random variables on $X$ (i.e., functions on $X\times \mathbb R$). The fact that we are working in particular with Brownian motion doesn't enter into things.
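For one fixed $x$ (one sample path), this is something you can even compute numerically; here is a minimal Python sketch of that pathwise view (the horizon, step count, and seed are arbitrary choices of mine, not anything from Shreve):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 100_000            # time horizon and number of grid steps (arbitrary)
dt = T / n

# One fixed sample path of Brownian motion on a grid:
# W(0) = 0 and independent increments ~ N(0, dt).
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# For this fixed path, int_0^T W(s) ds is an ordinary Riemann integral,
# approximated here by a left-endpoint Riemann sum.
integral = np.sum(W[:-1]) * dt
print(integral)
```

Averaging this over many simulated paths should reproduce the known facts $E\left[\int_0^T W(s)\,ds\right]=0$ and $\operatorname{Var}\left[\int_0^T W(s)\,ds\right]=T^3/3$, which is a handy sanity check on the simulation.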
A more complicated question is what it means to integrate a function or a random variable with respect to Brownian motion. This gets you to the Itô integral (and other similar variants), which is more subtle.<|endoftext|> TITLE: How do we prove that $\int_{0}^{1}\int_{0}^{1}{\left(\ln{x}\ln{y}\right)^s\over 1-xy}dxdy=\Gamma^2(1+s)\zeta(2+2s)$? QUESTION [9 upvotes]: How do we prove that $$\int_{0}^{1}\int_{0}^{1}{\left(\ln{x}\ln{y}\right)^s\over 1-xy}dxdy=\Gamma^2(1+s)\zeta(2+2s)$$ Integrate with respect to $x$ first, and let $s=1$: $$\int_{0}^{1}{\ln{y}\ln{x}\over 1-yx}dx$$ $u=\ln{x}\rightarrow xdu=dx$ $$\ln{y}\int_{0}^{\infty}{u\over y-e^{-u}}du$$ I don't think I am on the right track here; any hints please. REPLY [17 votes]: Hint. One may write, for $s>-1$, $0<|endoftext|> TITLE: Is the set of hyperreal numbers a quotient ring? QUESTION [6 upvotes]: It is easy to see that the set of real sequences $\mathbb{R}^{\mathbb{N}}$ is a ring. It suffices to define, for all $r,s\in\mathbb{R}^{\mathbb{N}}$, the operations $r\oplus s =(r_n+s_n)_{n\in\mathbb{N}}$ and $r\odot s=(r_n\cdot s_n)_{n\in\mathbb{N}}$. Let $\mathcal{U}$ be a nonprincipal ultrafilter on $\mathbb{N}$. For all $r\in\mathbb{R}^{\mathbb{N}}$, we define the set $r^{(0)}=\{n\in\mathbb{N} \mid r_n=0\}$. My question: Is the set of $\textit{almost null sequences}$ $$ \mathbb{I} = \{r\in\mathbb{R}^{\mathbb{N}}\mid r^{(0)}\!\in\mathcal{U}\} $$ a two-sided ideal of $\mathbb{R}^{\mathbb{N}}$? I think yes, because if $s\in\mathbb{I}$ and $r\in\mathbb{R}^{\mathbb{N}}$, then $(s\odot r)^{(0)}\in\mathcal{U}$ (i.e. the product of any sequence and an almost null sequence is almost null). If yes, is the set of hyperreal numbers the quotient ring $\mathbb{R}^{\mathbb{N}}\diagup \mathbb{I}$? In this case two sequences should belong to the same class if their difference is almost null (namely, they agree on an index set which belongs to the ultrafilter $\mathcal{U}$). REPLY [6 votes]: Yes, $\mathbb{I}$ is an ideal. One needs to check closure under addition, but this is straightforward: the intersection of two sets in the ultrafilter is in the ultrafilter. The quotient construction you described is correct; it is the usual construction of an ultrapower. It is one of the ways to construct a non-standard model of analysis. There are others: a non-standard model of analysis is not necessarily an ultrapower. Also, the index set need not be the natural numbers. It is not really accurate to speak of the hyperreals, since there are non-standard models of analysis of arbitrarily large cardinality. Note that the hyperreals are much more than a field. In particular, the ultrapower construction gives, for each function $f:\mathbb{R}^n\to\mathbb{R}$, and for every relation $A\subset \mathbb{R}^n$, a natural extension to the ultrapower that preserves first-order properties.<|endoftext|> TITLE: Pigeonhole Principle Question: Jessica the Combinatorics Student QUESTION [16 upvotes]: Jessica is studying combinatorics during a $7$-week period. She will study a positive integer number of hours every day during the $7$ weeks (so, for example, she won't study for $0$ or $1.5$ hours), but she won't study more than $11$ hours in any $7$-day period. Prove that there must exist some period of consecutive days during which Jessica studies exactly $20$ hours. Here are my thoughts so far: Let $f(n)$ represent the total number of hours Jessica has studied after day $n$. Clearly, there are $49$ days in total, and the domain of $f$ is the integers in the interval $[0,49]$.
Proving that there must exist some period of consecutive days during which Jessica studies exactly $20$ hours is equivalent to proving that there must exist $i$ and $j$ such that $f(i)-f(j)=20$. This is a really interesting question, but I don't see a clear path forward. How do you solve this question? Please try not to use extremely advanced math or I won't understand :P REPLY [6 votes]: OK, this was an interesting problem in itself, and the pigeonhole principle is implicit in practically any argument, so I just felt I would try to reason it out directly. With all kudos to Brian M Scott's more "hands-off" solution. I'll proceed by trying to devise a schedule that allows Jessica to avoid any run of days with a 20-hour study total. First observe that Jessica will not study more than 5 hours in one day, since she studies at least 1 hour every day, and no more than 11 hours in a seven-day period (so she has only 4 hours of excess over the minimum of $7 \times 1$ hours per week). Next note that any run of 4 consecutive single-hour study days means that we can find a run of days with a total study time of exactly 20 hours, as follows: extend the 4 single-hour days out (forwards or backwards) until the total study time reaches or exceeds 20 hours. The maximum possible total at this point is 24 hours, due to the 5-hour maximum per day. Then shorten the range by discarding from the single-hour study days until 20 hours is achieved. Now note that a 5-hour study day will actually mean that the adjacent period has a run of 6 single-hour study days. So we need not consider any sequence with a 5-hour study day further. So the working maximum study day is now 4 hours, which means a run of 3 single-hour study days will automatically allow us to find a 20-hour run of days, as before. But the 6 days after/before a 4-hour day must contain a run of 3 single-hour study days, so a 4-hour study maximum will also lead to a 20-hour run. So now consider a 3-hour limit on study per day. Now this means that a run of 2 successive single-hour study days will allow finding a 20-hour run as before, and a 3-hour study day will mean that the 6-day period before and after has only at most 8 hours to allocate to those days, which will inevitably generate successive single-hour study days. Finally, a 2-hour study limit per day only requires, under the previous construction, that there is a single day somewhere with only one hour studied, and we know from the pigeonhole principle that there are at least 3 such days in every 7-day period - quite apart from the fact that studying 2 hours every day would easily allow us to find a 20-hour run in any case. The above construction implies that the 7-week course length is significantly more than is necessary to force the existence of a run of days with a 20-hour study total.<|endoftext|> TITLE: Limit of $\lim_{t \to \infty} \frac{ \int_0^\infty \cos(x t) e^{-x^k}dx}{\int_0^\infty \cos(x t) e^{-x^p}dx}$ QUESTION [20 upvotes]: Let \begin{align} f(t,k,p)= \frac{ \int_0^\infty \cos(x t) e^{-x^k}dx}{\int_0^\infty \cos(x t) e^{-x^p}dx}, \end{align} My question: How to find the following limit of the function $f(t,k,p)$ \begin{align} \lim_{t \to \infty} f(t,k,p), \end{align} for any $p>0$ and $k>0$. What is known Some facts about the function. Note that $\int_0^\infty \cos(x t) e^{-x^k}dx$ is the Fourier (cosine) transform of $e^{-|x|^k}$. For $0<k\le 2$ the transform is positive (these are the symmetric stable densities), while for $k>2$ we know that $\int_0^\infty \cos(x t) e^{-x^k}dx$ has countably many zeros. See this question.
A related question was asked here. Because for the case of $p>2$ the denominator has countably many zeros, I am not sure if $\lim_{t \to \infty} f(t,k,p)$ even exists. It would be nice to show whether it exists or not. Other trivial cases include $k=1,p=2$ and $k=2,p=1$, since the inverse Fourier transforms of $e^{-|x|}$ and $e^{-|x|^2}$ are known in closed form. Clearly, the case of $k=p$ is trivial. So, we would like to analyze $k>p$ and $k<p$. My conjecture is that \begin{align} \lim_{t \to \infty} f(t,k,p)&=0, \ k>p, \\ \lim_{t \to \infty} f(t,k,p)&=\infty, \ k<p. \end{align} REPLY: Let us write $$I(m,n)=\int_0^\infty \cos(nx)\,e^{-x^m}dx.$$ Rescaling $x\to n^{\frac{1}{m-1}}x$ gives $$I(m,n)=\alpha_{m,n}\,\Re\int_{\mathcal{O}}f(z)\,dz,\qquad f(z)=e^{-\beta_{m,n}\,g(z)},\quad g(z)=z^m-iz,$$ with $\alpha_{m,n}=n^{\frac{1}{m-1}}$, $\beta_{m,n}=n^{\frac{m}{m-1}}$, and $\mathcal{O}$ the positive real axis. Crucial example (1): $m=3$. The picture shows the regions of the complex plane where $\Re(g(z))<0$ (red) and $\Re(g(z))>0$ (blue). To apply the method of steepest descent we have to deform our original contour of integration $\mathcal{O}$ into a path of constant phase $\mathcal{C}$ (except for a phase jump at infinity) with converging constituents. This means that since $\Im g(0)=0$ we have to follow initially the red contour $C_0$ into the converging blue region up to $e^{i\pi/3}\infty$. We call this piece $\mathcal{C}_{0}$. Now, because we are at complex infinity, we can jump to a contour with a different phase, which is the part of the green dashed contour $C_1$ which connects $e^{i\pi/3}\infty\rightarrow \infty$ going through the saddle point at $z_1$. We call this piece $\mathcal{C}_{1}$. This means that $\mathcal{C}=\mathcal{C}_{0}+\mathcal{C}_{1}$. Now we have done the hardest part and can conclude by analyticity $$ \int_{\mathcal{O}}f(z)dz=\underbrace{\int_{\mathcal{C}_0}f(z)dz}_{J_0}+\underbrace{\int_{\mathcal{C}_1}f(z)dz}_{J_1} $$ In the next step we need to determine which contributions dominate the above integral. By construction $J_1$ is dominated by the contributions of the saddle point. Because $\Re(z_1^3-i z_1)>0$ this part will decay exponentially, with an exponent $\text{const} \times n^{3/2}$. Due to the usual exponential decay of $f(z)$ in $J_0$, it is clear that this part will be dominated by its contribution from around the origin (this argument DOESN'T WORK for the original contour $\mathcal{O}$, since $e^{-x^3}\sim \mathcal{O}(1)$ for many periods of $\cos(nx)$, so we can't use only the contributions around $x=0$ to determine the leading-order behaviour of the integral). We might write $$ J_0=\int_{\mathcal{C_0}}f(z)dz \sim i\int_{0}^{\infty}e^{i \beta_{3,n}y^3}e^{-\beta_{3,n} y}dy\sim i\int_{0}^{\infty}(1+i \beta_{3,n} y^3+\mathcal{O}(\beta_{3,n}^2 y^{6}))e^{-\beta_{3,n} y}dy=\\ \frac{i}{\beta_{3,n}}-\frac{3!}{\beta_{3,n}^3}+\mathcal{O}(\beta_{3,n}^{-5}) $$ Please ask if you have questions at this point! So we can conclude that for $n$ large enough $J_0 \gg J_1$; taking real parts and remembering the definitions of $\beta_{m,n}$ and $\alpha_{m,n}$, $$ I(3,n)\sim\alpha_{3,n}\Re(J_0)\sim-\alpha_{3,n}\frac{3!}{\beta^3_{3,n}}+\mathcal{O}(\alpha_{3,n}\beta_{3,n}^{-5})=\\-\frac{3!}{n^4}+\mathcal{O}(n^{-6})\quad \text{as} \,\, n\rightarrow+\infty $$ General result (1) The above argument can easily be repeated for arbitrary odd $m$, since we always have the same type of contour: a phase-zero part, dominated by the origin, yielding a power law, and a steepest descent part through one saddle point, which gives an exponentially decaying contribution that is subdominant (but gets more important as $m$ increases, since $\beta_{m,n}$ is a monotonically decreasing function of $m$ with $\beta_{\infty,n}=n$). Following the steps as above we get $$ I(m,n)\sim (-1)^{\nu(m)}\frac{m!}{n^{m+1}} \quad \text{as} \,\, n\rightarrow+\infty \,\,\text{and} \,\, m\,\,\text{is odd} $$ where $\nu(m)=1$ if $m = 4l-1$ and $\nu(m)=0$ if $m = 4l+1$ with $l\in \mathbb{N_{+}}$ (is there a nice name for this function?)
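(Not part of the argument itself, but here is a small numerical sanity check of this odd-$m$ formula that I find convincing. It is a sketch using scipy's Fourier-weighted quadrature; the particular values of $m$ and $n$ are arbitrary.)

```python
import math
import numpy as np
from scipy.integrate import quad

def I(m, n):
    # I(m, n) = int_0^inf cos(n x) exp(-x^m) dx.
    # weight='cos', wvar=n invokes QUADPACK's oscillatory (Fourier) rule,
    # which handles the cos(n x) factor far better than naive quadrature.
    val, _ = quad(lambda x: np.exp(-x**m), 0, np.inf, weight='cos', wvar=n)
    return val

def asymptotic(m, n):
    # (-1)^nu(m) * m! / n^(m+1), with nu(m) = 1 for m = 4l-1 and 0 for m = 4l+1.
    sign = -1.0 if m % 4 == 3 else 1.0
    return sign * math.factorial(m) / n**(m + 1)

for m in (3, 5):
    for n in (10, 20, 40):
        print(f"m={m} n={n}  quad={I(m, n):+.3e}  asymptotic={asymptotic(m, n):+.3e}")
```

The ratio of the two printed numbers should drift toward $1$ as $n$ grows.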
After this huge achievement let's pause for the moment, take a deep breath, and then let us go on with the case that $m$ is even. Also in this case we will first focus on the "simple" case $m=4$ and then see what we can learn from it to find a general result. Crucial example (2): $m=4$ Let us start again with a picture (observe that I use parity to extend the range of integration to the whole real line); the coloring is as above, the only changes are that we now have three saddle points at $(\frac{\cos(\pi/6+2k\pi/3)}{4^{1/3}},\frac{\sin(\pi/6+2k\pi/3)}{4^{1/3}})$ with $k\in \{0,1,2\}$ and that $g(z)=z^4-i z$. Now one might notice that there are two contours of steepest descent: $\Im(g(z))=0$ passing through $z_3$, called $\color{\red}{\mathcal{C}_0}$, and another one passing through $z_1$ via the part of $\color{\green}{C_1}$ connecting $\infty\rightarrow i \infty$ (again denoted by $\mathcal{C}_1$) and the part of $\color{\orange}{C_2}$ connecting $i \infty\rightarrow - \infty$, denoted $\mathcal{C_2}$. Which one should we choose? Suppose we want to take $\mathcal{C}_0$; then, to get a closed contour of constant phase, we would have to add pieces from non-convergent regions, which is forbidden (the integral doesn't converge in this case). Furthermore the corresponding saddle point contribution is suppressed since $\Re(g(z_3))>\Re(g(z_{1}))=\Re(g(z_{2}))$, so we can safely take $\mathcal{C}=\mathcal{C}_1+\mathcal{C}_2$. From the above relation we also see that both saddle points on this contour will contribute equally, so we have to take them both into account. Again, by analyticity we can state $$ \int_{\mathcal{O}} f(z)dz=\underbrace{\int_{\mathcal{C}_1} f(z)dz}_{J_1}+\underbrace{\int_{\mathcal{C}_2} f(z)dz}_{J_2} $$ Since both $J_1$ and $J_2$ are dominated by their respective saddle points, we can linearize the contour of integration around both of them. For example we have at $z_1$ that $g(z)\sim g(z_1+e^{-i\pi/12}t)\sim g(z_1)+g''(z_1)e^{-i\pi/6}t^2/2$ where $t\in\mathbb{R}$. We get (as an exercise, prove this; it is an application of the standard Laplace method) $$ J_1\sim e^{-i\pi/12}e^{-\beta_{4,n}g(z_1)} \int_{-\infty}^{\infty}dte^{-\beta_{4,n}e^{-i\pi/6}g''(z_1)t^2/2}=e^{-i\pi/12}\frac{\sqrt{2\pi}}{\sqrt{\beta_{4,n}{e^{-i \pi/6}g''(z_1)}}}e^{-\beta_{4,n}g(z_1)} $$ and since $z_1=-z_2^*$ we can prove that $$ J_2=J_1^* $$ from which it follows that $$ \int_{\mathcal{O}}f(z)dz=2\Re(J_1) $$ or $$ \int_{\mathcal{O}}f(z)dz\sim\frac{\sqrt{8 \pi}}{n^{2/3}\sqrt{|g''(z_1)|}}\cos(n^{4/3}\Im(g(z_1))-\frac{\pi}{6})e^{-n^{4/3}\Re(g(z_1))} $$ From this the original integral of interest can be deduced by dividing by two (note that our integral is already real), since we doubled the integration range in the beginning: $$ I(4,n)\sim\frac{\sqrt{2 \pi}}{n^{1/3}\sqrt{|g''(z_1)|}}\cos(n^{4/3}\Im(g(z_1))-\frac{\pi}{6})e^{-n^{4/3}\Re(g(z_1))} $$ which is totally different from the power-law behaviour we know from the case that $m$ is odd (why?). General result (2) The general case can be done along the same lines by showing that $$ \Re(g(z_{k,m}))=-\underbrace{\left(\frac1m-1\right)}_{<0}\Im(z_{k,m}) $$ which shows that the saddle points with the smallest imaginary part will minimize the real part of $g(z_{k,m})$. Furthermore it is straightforward to prove that $\min(\Im(z_{k,m}))=\Im(z_{1,m})=\Im(z_{m/2-1,m})$, so we always have two saddles which contribute equally to the asymptotic expansion of the integral.
The rest of the argument goes through as in the case $m=4$ (except that the contours of steepest descent have to be modified by a small amount, but I leave this part to you; the answer is already way too long), yielding $$ I(m,n)\sim\frac{\sqrt{2 \pi}\alpha_{m,n}}{\sqrt{|g''(z_{1,m})|\beta_{m,n}}}\cos(\beta_{m,n}\Im(g(z_{1,m}))-\arg(z_{1,m}))e^{-\beta_{m,n}\Re(g(z_{1,m}))}\\ \quad \text{as} \,\, n\rightarrow+\infty \,\,\text{and} \,\, m\,\,\text{is even} $$ with corrections proportional to $\frac{\text{expression above}}{\beta_{m,n}}$<|endoftext|> TITLE: Sum to closed form QUESTION [8 upvotes]: I need to evaluate the following summation: $$ \sum_{n\in\mathbb{Z}} \frac{-1}{i(2n+1)\pi -\mu} $$ where $n$ is summed over all the integers from $-\infty$ to $\infty$, including 0. Putting this into Mathematica gives $\frac{1}{2}\tanh\frac{\mu}{2}$. What are the intermediate steps to get from the summation to the closed form? REPLY [2 votes]: Start with Euler's infinite product formula for the sine function: $$\text{sin }z=z \prod_{n=1}^\infty \left(1-\frac{z^2}{n^2 \pi^2}\right).$$ It follows that $$\text{sinh}\;z = -i\;\text{sin}\;iz=z \prod_{n=1}^\infty \left(1+\frac{z^2}{n^2 \pi^2}\right).$$ As a result, $$\text{sinh}\;\frac{z}{2} = \frac{z}{2} \prod_{n=1}^\infty \left(1+\frac{z^2}{4n^2 \pi^2}\right) = \frac{z}{2} \prod_{n\;\text{even}\;\geq 2} \left(1+\frac{z^2}{n^2 \pi^2}\right).$$ Using the identity $$\text{sinh}\;2z=2\;\text{sinh}\;z\;\text{cosh}\;z,$$ we find that $$\text{cosh}\;\frac{z}{2} = \frac{\text{sinh}\;z}{2\;\text{sinh}\;\frac{z}{2}}=\frac{z \prod_{n=1}^\infty \left(1+\frac{z^2}{n^2 \pi^2}\right)}{2\cdot\frac{z}{2} \prod_{n\;\text{even}\;\geq 2} \left(1+\frac{z^2}{n^2 \pi^2}\right)}=\prod_{n\;\text{odd}\;\geq 1} \left(1+\frac{z^2}{n^2 \pi^2}\right).$$ Taking the logarithmic derivative $\frac{f'}{f}$ on both sides yields $$\frac{1}{2}\text{tanh}\;\frac{z}{2}=\sum_{n\;\text{odd}\;\ge 1}\frac{2z/(n^2 \pi^2)}{1+\frac{z^2}{n^2 \pi^2}}=\sum_{n\;\text{odd}\;\ge 1}\frac{2z}{n^2 \pi^2+z^2}.$$ Now, $$\frac{2z}{n^2 \pi^2+z^2}=\frac{1}{z+n \pi i}+\frac{1}{z-n \pi i},$$ so $$\frac{1}{2}\text{tanh}\;\frac{z}{2}=\sum_{n\;\text{odd}\;\ge 1}\left(\frac{1}{z+n \pi i}+\frac{1}{z-n \pi i}\right),$$ which is the same as the desired sum. (There are convergence issues with the doubly-infinite sum in the question as stated, but this is the principal value of the sum, which is probably what was intended.)<|endoftext|> TITLE: $\int_a^af(x) \, dx$ always $0$? QUESTION [7 upvotes]: I was studying integrals and, just out of curiosity: does there exist any 'continuous' function such that $\int_a^af(x) \, dx$ ($a$ is any number) equals a value other than $0$? Since continuous functions are Riemann integrable, I think it should be $0$. Is this correct? Also, without the condition 'continuous', does there exist any function such that $\int_a^af(x) \, dx$ isn't $0$? EDIT I'm looking for any function such that $\int_a^a f(x)\,dx \neq 0$. Can anyone find me one? REPLY [3 votes]: No, for $f(x)$ any continuous function, $\int_a^a f(x)dx= 0$. As for non-continuous functions, my first thought was a "Dirac delta function" for which $\int_C \delta(x)dx= 1$ for any set $C$ containing $0$. However, a "delta function" is not a true function - it is a "generalized function" or "distribution".<|endoftext|> TITLE: Hypergeometric Random Variable Expectation QUESTION [12 upvotes]: In a binomial experiment we know that every trial is independent and that the probability of success, $p$, is the same in every trial.
This also means that the expected value of any individual trial is $p$. So if we have a sample of size $n$, by the linearity property of the expectation, the expected value of the sample total is just $n \cdot p$. This is all intuitive. When the population size is finite and we don't replace the items after every trial, we can't use the binomial distribution to get the probability of $k$ successes in a sample of size $n$, where the population is of size $N$ and the number of successes in it is $R$, simply because the probability of obtaining a success changes from trial to trial as $R$ and/or $N$ change. So far so good. Yet when they calculate the expected value of the hypergeometric random variable, it is $(n \cdot R/N)$. This seems to me the same as saying that the probability of obtaining a success in every trial is the same ($R/N$), which is not intuitive at all, because I should at least be expecting to see $N$ reducing by $1$ after every trial. I know that there's a flaw in my thinking. Can someone help point it out? Edit: I think I'm going to give up on understanding why the expected value of the hypergeometric random variable (HRV) is as it is. None of the answers have alleviated my confusion. I don't think I've made my confusion clear enough. My problem is I'm going about the process of finding the expected value of the HRV in the same way as that of the binomial random variable (BRV). In the BRV's case, if the sample is of size $n$ and we consider each item in the sample as a random variable of its own, then $X = X_1+X_2+\cdots+X_n$. To find $E[X]$, we simply add the $E[X_i]$. Since an item is returned after it is checked, the probability of success does not change. In the case of the HRV, I should expect the probability of success to change, because an item is not returned to the population. However, this doesn't seem to be the case. This is my problem. REPLY [2 votes]: As others have pointed out, the probability of a red ball at each of your $n$ draws actually is $R/N$. The draws are just correlated. You can also compute this expectation directly from the identity $$ \sum_{r=0}^n r\binom{R}{r}\binom{N-R}{n-r} = R\binom{N-1}{n-1} $$ To see this, the rhs counts the number of ways to pick a red ball and then $n-1$ other balls of any colour. The lhs counts the number of ways to pick $r$ red balls and $n-r$ white balls, and then single out one of the $r$ red balls. These are the same. Since $$ N\binom{N-1}{n-1} = n\binom{N}{n} $$ by a similar argument, the expectation you want is $$ \frac{R\binom{N-1}{n-1}}{\binom{N}{n}} = n\frac{R}{N} $$<|endoftext|> TITLE: Intuition Behind the Hyperbolic Sine and Hyperbolic Cosine Functions QUESTION [5 upvotes]: After enough time studying mathematics, we develop an instinct for the sine and cosine functions and their relationship to our standard Euclidean geometry. I have come across the functions $\sinh(x)$ and $\cosh(x)$ multiple times while studying math, including: $(1)$ Lorentz transformations, $(2)$ integrals and identities, $(3)$ complex analysis. Taken at face value, I understand these functions and their definitions $-$ but I feel like I'm missing the point. What is a natural way for me to understand these functions as intuitively as I understand $\sin(x)$ and $\cos(x)$? Note: I have consulted other answers looking for the answer to this question. I am searching for a more fundamental explanation of how these functions came about, analogous to the natural representations of $\sin$ and $\cos$ in terms of angles on the unit circle.
Of course, if I overlooked such an explanation, please simply point me to it. REPLY [7 votes]: There is an absolutely fascinating little booklet called "Hyperbolic Functions" by V. G. Shervatov in which the author develops circular and hyperbolic functions in parallel from a purely geometric viewpoint. It is from the "Russian Series In Mathematics" and was written decades ago (1950s, I think) and is out of print, but is still out there if you search for it. Google is your friend in this regard. I bought a copy of this as a kid and I think it changed my life. It may well be the reason I became a mathematician.<|endoftext|> TITLE: Proof about a topological space being arc connected QUESTION [5 upvotes]: While reading a book I found a topological space described as: Let $(X,\tau)$ be the topological space formed by adding to the ordinary closed unit interval $[0,1]$ another right end point, say $1^*$, with the sets $(a,1)\cup\{1^*\}$ as a local neighborhood basis. Then it says that such a topological space is arc connected. I found almost exactly the same question here, which has yet to be solved; however, I'll provide some details. The book itself states that since $[0,1]$ and $[0,1)\cup\{1^*\}$ are homeomorphic as subspaces, and the subspace topology on $[0,1]$ is Euclidean, $X$ is the union of two compact subspaces and thus compact; by the same reasoning it is arc connected. How can such an argument show that there is an injective path from $1$ to $1^*$? Is it possible to exhibit such a path explicitly? Further details: As the book states: path and arc connectedness relate to the existence of certain continuous functions from the unit interval into a topological space. Continuous functions from the unit interval are called paths; if they are one-to-one they are arcs. REPLY [4 votes]: This space is not arc-connected. Indeed, suppose $f:[0,1]\to X$ is an arc such that $f(0)=1$ and $f(1)=1^*$. Then $f(1/2)\in [0,1)$, and $f|_{[0,1/2]}$ is a (reparametrized) path from $1$ to $f(1/2)$ in $[0,1]$. Thus $f([0,1/2])$ must contain all of $[f(1/2),1)$. But by a similar argument, $f([1/2,1])$ also must contain all of $[f(1/2),1)$. This contradicts injectivity of $f$.<|endoftext|> TITLE: The Jordan Decomposition Theorem, Folland QUESTION [5 upvotes]: The Jordan Decomposition Theorem - If $\nu$ is a signed measure, there exist unique positive measures $\nu^+$ and $\nu^-$ such that $\nu = \nu^+ - \nu^-$ and $\nu^+\perp \nu^-$. Attempted proof - Let $X = P\cup N$ be a Hahn decomposition for $\nu$, where $P$ and $N$ are positive and negative sets respectively. Then let us define the positive and negative measures as $$\nu^{+}(E) := \nu(E\cap P) \ \ \ \nu^{-}(E) := - \nu(E\cap N)$$ then, \begin{align*} \nu^+(E) - \nu^-(E) &= \nu(E\cap P) + \nu(E\cap N)\\ &= \nu(E) \end{align*} So we have $\nu = \nu^+ - \nu^-$ and $\nu^+\perp \nu^-$. Now I believe that to complete this proof we need to show uniqueness. That is, assuming we have $\nu = \tilde{\nu^{+}} - \tilde{\nu^{-}}$ as another such pair with $\tilde{\nu^{+}}\perp\tilde{\nu^{-}}$, we can find $\tilde{P}$ and $\tilde{N}$ such that $X = \tilde{P}\cup\tilde{N}$, and we need to check that $\tilde{P}$ is positive and $\tilde{N}$ is negative (not sure how to do that yet). Then we need to show that $\tilde{\nu^{+}} = \nu^+$ and $\tilde{\nu^{-}} = \nu^-$; again I am not sure how to do that either yet. Any suggestions are greatly appreciated.
REPLY [6 votes]: The Jordan Decomposition Theorem - If $\nu$ is a signed measure, there exist unique positive measures $\nu^+$ and $\nu^-$ such that $\nu = \nu^+ - \nu^-$ and $\nu^+\perp \nu^-$. Proof - Let $X = P\cup N$ be a Hahn decomposition for $\nu$, where $P$ and $N$ are positive and negative sets respectively. Then let us define the positive and negative measures as $$\nu^{+}(E) := \nu(E\cap P) \ \ \ \nu^{-}(E) := - \nu(E\cap N)$$ then, \begin{align*} \nu^+(E) - \nu^-(E) &= \nu(E\cap P) + \nu(E\cap N)\\ &= \nu(E) \end{align*} So we have $\nu = \nu^+ - \nu^-$ and $\nu^+\perp \nu^-$. To complete this proof, let us show uniqueness. Suppose that $\nu = \tilde{\nu^{+}} - \tilde{\nu^{-}}$ is another such pair with $\tilde{\nu^{+}}\perp\tilde{\nu^{-}}$. Then we can find $\tilde{P}$ and $\tilde{N}$ such that $X = \tilde{P}\cup\tilde{N}$, $\tilde{P}$ is null for $\tilde{\nu^{-}}$ and $\tilde{N}$ is null for $\tilde{\nu^{+}}$. So, for any measurable set $E\subset \tilde{P}$, $$ \nu(E)= \tilde{\nu^{+}}(E) - \tilde{\nu^{-}}(E)=\tilde{\nu^{+}}(E)-0 = \tilde{\nu^{+}}(E) \geq 0$$ So $\tilde{P}$ is a positive set for $\nu$. In a similar way, for any measurable set $F\subset \tilde{N}$, $$ \nu(F)= \tilde{\nu^{+}}(F) - \tilde{\nu^{-}}(F)=0-\tilde{\nu^{-}}(F)= -\tilde{\nu^{-}}(F) \leq 0$$ So $\tilde{N}$ is a negative set for $\nu$. So $\tilde{P}$, $\tilde{N}$ is another Hahn decomposition for $\nu$. So, from the Hahn Decomposition Theorem (3.3), we have that $P\Delta \tilde{P} = N \Delta \tilde{N}$ is a null set for $\nu$. So $P\setminus \tilde{P}$, $\tilde{P}\setminus P$, $N\setminus \tilde{N}$ and $\tilde{N} \setminus N$ are null sets for $\nu$. For any measurable set $H\subset X$, we have \begin{align*} \nu^+(H)&= \nu(H\cap P)= \nu(H\cap(P\cap\tilde{P} ))+\nu(H\cap(P\setminus \tilde{P}))=\nu(H\cap(P\cap\tilde{P}) )=\\&= \nu(H\cap(P\cap\tilde{P}) )+\nu(H\cap(\tilde{P} \setminus P))= \nu(H\cap \tilde{P})= \tilde{\nu^{+}}(H\cap \tilde{P}) - \tilde{\nu^{-}}(H\cap \tilde{P})=\\&= \tilde{\nu^{+}}(H\cap \tilde{P})= \tilde{\nu^{+}}(H\cap \tilde{P}) + \tilde{\nu^{+}}(H\cap \tilde{N})= \tilde{\nu^{+}}(H) \end{align*} So we have that $\nu^+=\tilde{\nu^{+}}$. In a similar way, we can prove that $\nu^-=\tilde{\nu^{-}}$. (Or, if we had assumed, without loss of generality, that $\nu$ does not assume the value $+\infty$, we have $\nu^-=\nu - \nu^+ = \nu - \tilde{\nu^{+}}= \tilde{\nu^{-}}$.)<|endoftext|> TITLE: Product of a vector and its transpose (Projections) QUESTION [20 upvotes]: I am doing a basic course on linear algebra, where the guy says $a^Ta$ is a number and $aa^T$ is a matrix, not a number. Why? Background: Say we are projecting a vector $b$ onto a vector $a$. By the condition of orthogonality, the dot product is zero: $$a^T(b-xa)=0$$ then $$x =\frac{a^Tb} {a^Ta}.$$ The projection vector $p$, since it lies on $a$, is: $$p=ax$$ $$p=a\frac{a^Tb} {a^Ta}$$ $$p=\frac{aa^T} {a^Ta}b$$ To me both $aa^T$ and $a^Ta$ are dot products and the order shouldn't matter. Then $p$ would equal $b$. But it does not. Why? REPLY [26 votes]: You appear to be conflating the dot product $a\cdot b$ of two column vectors with the matrix product $a^Tb$, which computes the same value. The dot product is symmetric, but matrix multiplication is in general not commutative. Indeed, unless $A$ and $B$ are both square matrices of the same size, $AB$ and $BA$ don’t even have the same shape. In the derivation that you cite, the vectors $a$ and $b$ are being treated as $n\times1$ matrices, so $a^T$ is a $1\times n$ matrix.
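(If it helps to see this concretely before the algebra, here is a quick numpy sketch; the particular numbers and the choice $n=3$ are mine and arbitrary:)

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])   # an n x 1 column vector, n = 3
b = np.array([[4.0], [5.0], [6.0]])

print((a.T @ a).shape)   # (1, 1): effectively the scalar a . a
print((a.T @ b).shape)   # (1, 1): effectively the scalar a . b
print((a @ a.T).shape)   # (3, 3): a rank-one matrix, not a scalar

# The projection p = (a a^T / a^T a) b from the question:
p = (a @ a.T) @ b / (a.T @ a)
print(p.ravel())         # a multiple of a, and definitely not b
```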
By the rules of matrix multiplication, $a^Ta$ and $a^Tb$ result in a $1\times1$ matrix, which is equivalent to a scalar, while $aa^T$ produces an $n\times n$ matrix: $$ a^Tb = \begin{bmatrix}a_1&a_2&\cdots&a_n\end{bmatrix}\begin{bmatrix}b_1\\b_2\\ \vdots\\b_n\end{bmatrix} = \begin{bmatrix}a_1b_1+a_2b_2+\cdots+a_n b_n\end{bmatrix} \\ a^Ta = \begin{bmatrix}a_1&a_2&\cdots&a_n\end{bmatrix}\begin{bmatrix}a_1\\a_2\\ \vdots\\a_n\end{bmatrix} = \begin{bmatrix}a_1^2+a_2^2+\cdots+a_n^2\end{bmatrix} $$ so $a^Tb$ is equivalent to $a\cdot b$, while $$aa^T = \begin{bmatrix}a_1\\a_2\\ \vdots\\a_n\end{bmatrix}\begin{bmatrix}a_1&a_2&\cdots&a_n\end{bmatrix} = \begin{bmatrix}a_1^2&a_1a_2&\cdots&a_1a_n \\ a_2a_1&a_2^2&\cdots&a_2a_n \\ \vdots&\vdots&\ddots&\vdots \\ a_na_1&a_na_2&\cdots&a_n^2\end{bmatrix}.$$ Note in particular that $b\cdot a=b^Ta$, not $ba^T$, as the latter is also an $n\times n$ matrix. The derivation of the projection might be easier to understand if you write it slightly differently. Start with dot products: $$p={a\cdot b\over a\cdot a}a={1\over a\cdot a}a(a\cdot b)$$ then replace the dot products with equivalent matrix products: $$p={1\over a^Ta}a(a^Tb).$$ This expression is a product of the scalar ${1\over a^Ta}$ with three matrices. Since matrix multiplication is associative, we can regroup this as $${1\over a^Ta}(aa^T)b.$$ This is a scalar times an $n\times n$ matrix times an $n\times1$ matrix, i.e., a vector. Addendum: The scalar factor can be absorbed into the $n\times n$ matrix $aa^T$; the resulting matrix $\pi_a$ represents orthogonal projection onto (the span of) $a$. That it is a projection is easy to verify: $$\pi_a^2 = \left({aa^T\over a^Ta}\right)^2 = {(aa^T)(aa^T)\over (a^Ta)(a^Ta)} = {a(a^Ta)a^T\over(a^Ta)(a^Ta)} = {(a^Ta)(aa^T)\over(a^Ta)(a^Ta)} = {aa^T\over a^Ta} = \pi_a,$$ again using associativity of matrix multiplication and the fact that $a^Ta$ is a scalar and so commutes with matrices. In addition, $$\pi_aa = {aa^T\over a^Ta}a = {a^Ta\over a^Ta}a = a,$$ as expected. In the above derivation of projection onto $a$, $b$ was an arbitrary vector, so for all $b$, $\pi_ab$ is some scalar multiple of $a$. In other words, the image (column space) of $\pi_a$ is spanned by $a$—it’s the line through $a$—and so the rank of $\pi_a$ is one. This can also be seen by examining $aa^T$ directly: each column is a multiple of $a$. As a final note, the above derivation requires that the vectors and matrices be expressed relative to a basis that’s orthonormal with respect to the dot product. It’s possible to remove this restriction, but the expression for the projection matrix will be more complex.<|endoftext|> TITLE: What is the expected distortion of a linear transformation? QUESTION [5 upvotes]: Let $A: \mathbb{R}^n \to \mathbb{R}^n$. I am interested in the "average distortion" caused by the action of $A$ on vectors (i.e., stretching or contraction of the norm). Consider for instance the uniform distribution on $\mathbb{S}^{n-1}$, and the random variable $X:\mathbb{S}^{n-1} \to \mathbb{R}$ defined by $X(x)=(\|A(x)\|_2)^2$. What is the expectation of $X$? Using SVD, it is easy to check that the problem reduces to $A$ being a diagonal matrix with non-negative entries. So, the question amounts to calculating $$\int_{\mathbb{S}^{n-1}} \sum_{i=1}^n (\sigma_ix_i)^2 $$ (and dividing by the volume of $\mathbb{S}^{n-1}$). Is there a closed formula for this integral?
Also, one could take the expected value of the norm, and not its square (I thought this should be easier if there are no square roots involved...) REPLY [4 votes]: Using the comment given by Hagen von Eitzen, we get: $$ \int_{\mathbb{S}^{n-1}} \sum_{i=1}^n (\sigma_ix_i)^2 = \sum_{i=1}^n \int_{\mathbb{S}^{n-1}} (\sigma_ix_i)^2= \sum_{i=1}^n \sigma_i^2\int_{\mathbb{S}^{n-1}} x_i^2$$ By symmetry, we see that $\int_{\mathbb{S}^{n-1}} x_i^2$ is independent of the index $i$, hence: $$\int_{\mathbb{S}^{n-1}} x_i^2=\frac{1}{n} \sum_{j=1}^n \int_{\mathbb{S}^{n-1}} x_j^2 = \frac{1}{n} \int_{\mathbb{S}^{n-1}} \sum_{j=1}^n x_j^2=\frac{1}{n} \int_{\mathbb{S}^{n-1}} 1=\frac{1}{n} \operatorname{Vol}(\mathbb{S}^{n-1})$$ Hence the average distortion is: $$E(X)=\frac{1}{n}\sum_{i=1}^n \sigma_i^2 $$<|endoftext|> TITLE: Intuitive reasons of ring modulo maximal ideal or prime ideal QUESTION [5 upvotes]: Are there any intuitive reasons that can help us remember that $R/I$ is a field iff $I$ is a maximal ideal; $R/I$ is an integral domain iff $I$ is a prime ideal? (I can understand the proof, but have problems remembering the result correctly and understanding it intuitively.) I can roughly understand that if $I$ is a maximal ideal, $R/I$ will have the minimal number of ideals, namely the zero ideal and itself, which is the criterion for being a field? How about if $I$ is a prime ideal? How do we intuitively see that $R/I$ is a domain? REPLY [3 votes]: I had just the same questions when I first met those propositions, so I'd like to share my intuitions. Before starting, let me say that I do think going through nilpotent elements - radical ideal - reduced ring first helps with understanding prime and maximal ideals. At least, this was my learning process, and I found it easier to find an intuitive explanation of why a ring modulo a radical ideal gives a reduced ring. So maybe have a look at Relationship between nilpotents - radical ideals - reduced rings. (If you understand that, I think you may even have all the elements to come up with the rest of the answer yourself.) Now to prime and maximal ideals. Prime ideals: a domain is a place where no two nonzero elements, when multiplied together, yield zero. We always need to keep this in the back of our minds, because this is what we would like to achieve. So how can we achieve that? Take whatever ring you want: how can we, in a way, build a domain ring out of it? We say: okay, the problem is when $a*b = 0$ and $a, b \neq 0$. It seems stupid, but the whole solution is in the latter condition. If we could magically say that every time $a*b = 0$, either $a$ or $b$ would become zero (whatever this means), then we would indeed have built a domain ring. But lo!, we do have a way of transforming elements to zero, and that is through a quotient space! In fact, imagine going through all possible products between all ring elements: every time one of those products yields zero, you take one of the factors and put it in a set. At the end of the story, you have a set of most null-divisor elements (I say most, because in $\mathbb{Z}_{6}$, for example, $2*3 = 0$, and you can put in your set either $2$ or $3$; you don't need to take them both.) What is true is that such a set is more than a set: it is indeed an ideal, what we call a prime ideal. Now, what happens if we quotient the ring by a prime ideal? We are transforming to zero all elements which gave us problems in the ring (i.e.
which gave rise to null-divisors), and lo!: if $a*b = 0$ with $a,b \neq 0$ in the ring, we are now sure that in the quotient either $a$ or $b$ is indeed $0$! Thus we have a domain ring! Maximal ideals: this gets way trickier, and I fear I don't have a ready intuition to share. I do have some hints, but I am not entirely sure they are correct, so take them as they are. As above, we want to set to zero all elements which don't have a multiplicative inverse. First of all, we know that if a maximal ideal contains an invertible element, then that ideal is actually the whole ring (i.e. it is not a proper ideal). So all invertible elements will for sure be out of a maximal ideal, which is made up entirely of non-invertible elements (the ones we want to get rid of). However, things aren't so easy, because the strategy we used before doesn't work here: grouping together all non-invertible elements doesn't give us an ideal (think about the non-invertible elements of $\mathbb{Z}_{10}$, for example), so we can't expect to build a field just by taking away all non-invertible elements. For example, we know that for $\mathbb{Z}$, all ideals of the form $(p)$ with $p$ prime are maximal, and indeed all $\mathbb{Z}_{p}$ are fields. But look at what happens! We are not taking away non-invertible elements ($2$ is non-invertible in $\mathbb{Z}$, and yet it's still there), but in that quotient, elements manage to find a multiplicative inverse among the ones that are left! This fact (that inverses are created in the quotient) is what makes it very difficult to have an intuition here, I believe.<|endoftext|> TITLE: 1-1 correspondence of class group of an order '$\mathcal{O}$' and elliptic curves having complex multiplication by $\mathcal{O}$ QUESTION [5 upvotes]: I came across these two results. Let $\mathcal{O}$ be an order in an imaginary quadratic field. 1. There is a 1-1 correspondence between the ideal class group $C(\mathcal{O})$ and the homothety classes of lattices with $\mathcal{O}$ as their full ring of complex multiplication. 2. There is a 1-1 correspondence between invertible ideal classes of the ring $\mathcal{O}$ and the set of triples $$\left\{(a,b,c) \in \mathbb{Z}^3:{\begin{split}& a>0;\ \gcd(a,b,c)=1 ;\\&|b| \leq a\leq c;\ b^2-4ac=D;\\& b\ >0\text{ whenever }|b|=a\text{ or }a=c\end{split} }\right\}$$ where $D$ is the discriminant of the number ring $\mathcal{O}$. My question is: how can I relate both of these results? It looks to me that 2 can be inferred from 1. Any reference for the same would be of great help. REPLY [2 votes]: The reference for this answer is D. Cox, 'Primes of the form $x^2+ny^2$'. Formulation 1 is Theorem 10.14 and Corollary 10.20, and 2 is Theorem 7.7 (ii). Then it is more natural that 1 is inferred from 2 than that 2 is inferred from 1. The elements in the set of triples described in 2 can be considered as a set of equivalence classes of quadratic forms $C(D)$ with discriminant $D$, where $D$ is the discriminant of $\mathcal{O}$. We write a lattice $L$ generated by $\alpha$ and $\beta$ as $L=[\alpha, \beta]$. This means $L=\{ m\alpha+n\beta: m, n \in \mathbb{Z}\}$. We write $L\sim L'$ if two lattices $L$ and $L'$ are homothetic (i.e., nonzero multiples of each other). The two statements are far apart in the book, but we can relate them via points in the fundamental domain for $\Gamma_1 = \mathrm{SL}(2,\mathbb{Z})$. First, for any lattice $L$ in $\mathbb{C}$, there is a nonzero $\lambda\in\mathbb{C}$ such that $L=\lambda [1,z]$ with $z$ in the upper half plane $\mathbb{H}$. Then $L\sim [1,z]$.
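(As a quick computational aside, not from Cox: the finite set of triples in statement 2 is easy to enumerate by machine. Here is a small Python sketch; the discriminant $D=-23$ is an arbitrary illustrative choice, for which the enumeration should return $3$ classes.)

```python
from math import gcd, isqrt

def reduced_triples(D):
    # Enumerate the triples (a, b, c) of statement 2:
    #   a > 0, gcd(a, b, c) = 1, |b| <= a <= c, b^2 - 4ac = D,
    #   and b >= 0 whenever |b| = a or a = c (the usual reduction convention).
    assert D < 0 and D % 4 in (0, 1)
    triples = []
    for a in range(1, isqrt(-D // 3) + 1):     # |b| <= a <= c forces 3a^2 <= -D
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a) != 0:
                continue
            c = (b * b - D) // (4 * a)
            if c < a or gcd(gcd(a, b), c) != 1:
                continue
            if b < 0 and (abs(b) == a or a == c):
                continue
            triples.append((a, b, c))
    return triples

print(reduced_triples(-23))   # expect the 3 classes (1, 1, 6) and (2, ±1, 3)
```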
Also, it is shown in chapter 10 of the book that two lattices $L$ and $L'$ are homothetic if and only if $j(L)=j(L')$. Here $j(L)$ means the $j$-invariant of the lattice $L$. Moreover, we can regard $j$ as a function from the upper half plane $\mathbb{H}$ to $\mathbb{C}$. This is easy to check: if $L\sim [1,z]$, we set $j(z) = j(L)$. The bijection in 2 maps an element $ax^2+b xy+cy^2$ in $C(D)$ to $[a, \frac{-b+\sqrt{D}}2]$ in $C(\mathcal{O})$. This fractional $\mathcal{O}$-ideal is homothetic to $[1,\tau]$ where $\tau = \frac{-b+\sqrt{D}}{2a}$. The restrictions described in 2 put the point $\tau$ in the fundamental domain $\mathcal{F}$ for $\Gamma_1 = \mathrm{SL}(2,\mathbb{Z})$. An important property of the $j$ function is that it is injective on $\mathcal{F}$. Thus, for any distinct points $z$, $z'$ in $\mathcal{F}$, we have $j(z)\neq j(z')$. Equivalently, $[1,z]\not\sim [1,z']$. The $\tau$ obtained from the quadratic form $ax^2+bxy+cy^2$ with the restrictions in 2 is uniquely determined for each form. This means that distinct triples in 2 will give different $\tau$. Thus, we obtain statement 1, since distinct ideal classes in $C(\mathcal{O})$ fall in distinct homothety classes. From this point of view, it is easy to see that $C(D)=C(\mathcal{O})$ is finite. To see this, consider the $\tau$ obtained from $ax^2+bxy+cy^2$. Then the possibilities for $a$ are finite, since $\mathcal{F}$ is a positive distance away from the real axis. The possibilities for $b$ are also finite, since $|b|\leq a$.<|endoftext|> TITLE: Conditional expectation and independence on $\sigma$-algebras and events QUESTION [6 upvotes]: In many statistics papers, proofs might proceed as follows: Under the event $A$, the random variables $X$ and $Y$ are independent. (Often this means that on $A^C$, they might be dependent.) Then some properties of conditional independence might be used, e.g. to calculate $ \mathbb E[ X \mid A, Y] $. I feel quite uncomfortable with the latter type of manipulation. Therefore, I am wondering two things: 1) What is the proper definition of conditional independence when conditioning on both events and $\sigma$-algebras? My guess would be that we have to check the factorization property only for sets $A \cap B_X$ and $A \cap B_Y$, where $B_X \in \sigma(X)$ and $B_Y \in \sigma(Y)$, with $\sigma(X)$, $\sigma(Y)$ the $\sigma$-algebras generated by the above random variables. (Similarly for the definition of conditional expectation.) 2) Is there any reference on properties of such expectations which are taken conditionally on both $\sigma$-algebras and events? Something akin to the standard properties of conditional expectations (conditioned on $\sigma$-algebras) used to simplify expressions or to calculate them. Or is there any simple trick to generalize the standard results to this setting? REPLY [5 votes]: I will try to answer my question above; it would be great if someone can confirm (since, again, I did not find any textbook describing this, except the one exercise in Billingsley mentioned below)! To set things up, let $(\Omega, \mathcal{F}, \mathbb P)$ be a probability space. $A \in \mathcal{F}$ is an event with probability $\mathbb P(A) > 0$, $X: \Omega \to \mathbb R$ is a random variable and $\mathcal{G} \subset \mathcal{F}$ is a sub-$\sigma$-algebra. We are interested in defining $\mathbb E[X \mid A, \mathcal{G}]$. There are two "natural" ways to do this.
First, we will do this by just using the standard definition of conditional expectation, but with respect to the measure $\mathbb P_A$, where this is just the conditional probability measure with mass $\mathbb P_A(B)$ on $\mathcal{F}$-measurable sets $B$: $$ \mathbb P_A(B) = \frac{\mathbb P(A \cap B)} {\mathbb P(A)}$$ Thus we define $\mathbb E[X \mid A, \mathcal{G}]$ for $X \in L^1(\mathbb P_A)$ by the following properties: 1. $\mathbb E[X \mid A, \mathcal{G}]$ is $\mathcal{G}$-measurable. 2. $\int_B \mathbb E[X \mid A, \mathcal{G}] d\mathbb P_A = \int_B X d\mathbb P_A $ for all $\mathcal{G}$-measurable sets $B$. We can quickly see that to check $X \in L^1(\mathbb P_A)$ it is sufficient to check $X \in L^1(\mathbb P)$, while for property 2 we can just check: $\int_B \mathbb E[X \mid A, \mathcal{G}] d\mathbb P = \int_B X d\mathbb P $ for all sets $B \in \{ G \cap A \mid G \in \mathcal{G}\}$. The second way of defining $\mathbb E[X \mid A, \mathcal{G}]$ is by defining it for indicator variables of $\mathcal{F}$-measurable sets $B$ as (also see the related math.se post): $$ \mathbb E[ \mathbf{1}_{B} \mid A , \mathcal{G}] = \frac{\mathbb E[ \mathbf{1}_{B}\mathbf{1}_{A} \mid \mathcal{G}] }{\mathbb E[ \mathbf{1}_{A} \mid \mathcal{G}]}$$ By exercise 34.4 a) in the book "Probability and Measure" by Billingsley, we get that in fact these two definitions are equivalent. So we are good to go. Now we are still interested in the calculus of such conditional expectations. It turns out to be simple, since we can just use the standard calculus, where the expectations are taken w.r.t. the measure $\mathbb P_A$! Properties such as "on the event $A$, $X$ is independent of $\mathcal{G}$" also just mean that $X$ is independent of $\mathcal{G}$ under the measure $\mathbb P_A$.<|endoftext|> TITLE: How to write the commutator subgroup in terms of the generators of the group? QUESTION [5 upvotes]: Let $G=\langle\ S\ |\ R\ \rangle$ be a finitely presented group. The commutator subgroup of $G$ is the group generated by $\{[a,b]\ |\ a,b\in G\}$ and is denoted by $[G,G]$, where $[a,b]=aba^{-1}b^{-1}$. 1. Is it true that $[G,G]$ is generated by $\{[s,t]\ |\ s,t\in S\}$? I need to show that any $[a,b]$ can be written as a word in the $[s,t]$, but I am not sure if it can be done. The above is a generalized question which I asked myself when considering the particular case below. Let $W=\langle\ S\ |\ R\ \rangle$ where $$S=\{s_1,\cdots,s_{2n}\}$$ $$R=\left\{s_i^2 : 1\le i\le 2n\right\}\bigcup\left\{(s_is_j)^2:1\le i,j\le 2n,j\neq i+n\right\}$$ 2. Then is $[W,W]$ generated by $\{[s_i,s_{i+n}]\ |\ 1\le i\le n \}$? Thank you. REPLY [7 votes]: The answer is no. If $G$ is a free group of rank greater than $1$ then $[G,G]$ is not even finitely generated. If what you were asking were true, then for any $2$-generator group $[G,G]$ would be cyclic, which is clearly not the case. It is true in your specific example. In the case $n=1$, $W$ is the infinite dihedral group with $[W,W]$ generated by $[s_1,s_2]$. For general $n$, $W$ is the direct product of its subgroups $\langle s_i,s_{i+n}\rangle$, so $[W,W]$ is free abelian of rank $n$, with generators $[s_i,s_{i+n}]$.<|endoftext|> TITLE: Finding the roots $x^4-4x^3-x^2-8x+4=0$ (contest math) QUESTION [7 upvotes]: So the problem is: $x^4-4x^3-x^2-8x+4=0$; find all solutions. A tip that I have gotten is to divide both sides by $x^2$. I've tried that, but I do not manage to see any further. Does anyone know how this tip could help me?
(Yes, I'm aware that the polynomial above can be factorized into two degree 2 polynomials, which promptly gives me the answer. But that factorization would be extremely hard to spot, which is why I'm asking about the dividing.) Thanks in advance :) Edit: meant to write $x^2$, not $2$ REPLY [4 votes]: To try to factor $x^4 - 4x^3 - x^2 -8 x + 4$ as a product of two polynomials of degree two, I will try $$ x^4 -4x^3 -x^2-8x+4=(x^2+ax+c)(x^2+dx+e). $$ The constant term is $4$, so we have two choices: $c=1, e=4$ or $c=2, e=2$. If you choose the second, you get the equations $a+d=-4$, $4+ad=-1$, $2a+2d=-8$. Solving them gives $a=1$ and $d=-5$ (or vice versa), hence $$x^4-4x^3-x^2-8x+4=(x^2+x+2)(x^2-5x+2),$$ and the roots follow from the quadratic formula. (If the system had had no solution, you would try the other choice instead.)<|endoftext|> TITLE: Is the projective line minus one point always isomorphic to the affine space? QUESTION [7 upvotes]: I'm thinking about the following problem: If I take a general point $p \in \mathbb{P}^1$ out of the projective line, is $\mathbb{P}^1 - \{ p \}$ isomorphic to the affine space $\mathbb{A}^1$? I ask this because if $p = [1, 0] \in \mathbb{P}^1$, the map $[x, 1] \mapsto x$ gives an isomorphism $\mathbb{P}^1 - \{[1, 0] \} \cong \mathbb{A}^1$. I guess that this should be true somehow for a general point $p$, but I can't quite get my head around the map that I need to define.
Community (shared) cards are drawn one at a time from the deck without replacement until there is a winner for a hand. The rules are player A can initially win if $4$ triples appear (such as $KKK,444,AAA,777$). Player B wins if a single quad appears (such as $QQQQ$). As soon as there is a winner, the hand is finished, the win is awarded, all drawn cards are returned to the deck, the cards are reshuffled well, and the next hand will be drawn. There is one twist however. If B wins a hand, then next hand will have a lower win threshold for A. For example, initially the win threshold for A is $4$ triples. However, if B wins the first hand, then the new threshold will be $3$ triples. Conversely, each time A wins, A's threshold will be increased by $1$. So for example, if A wins the first hand, A's new win threshold will be $5$ triples to win. The minimum # of required triples is $1$. No more than $39$ cards will ever need to be drawn to determine a winner since even if $13$ triples are required, the $39$th card will guarantee a winner but it is likely a quad would have appeared way before then. Note that the triples and quads need not be in any order. For example, $Q,2,Q,4,Q,7,A,10,Q$ is a quad. The requirement is not $4$ like ranks in a row however that is also a quad but very unlikely. I am thinking since the difficulty of A winning is variable based on who wins the previous hand(s), this should reach some type of equilibrium where it is about equally likely for each to win in the longrun but how can I show this mathematically? I ran a computer simulation and it looks like my initial hypothesis is right. There seems to be an equally likely chance for A or B to win on average. Even with as few as $10$ trials ($10$ winning hands), I am seeing mostly $5$ wins for each but sometimes $4$ vs. $6$ but I didn't even see $3$ vs. $7$ yet so it seems to hit equilibrium VERY quickly. Since the # of required trials (wins) is so low, you can probably just try this with a real deck of cards and confirm it is about $50/50$ with as few as $10$ winning hands. Just remember to update the winning threshold for A properly (for example, $4, 5, 4, 3$...). Unlike a fair coin toss where $8$, $9$, or even $10$ heads are possible, it seems almost impossible for that to happen with this type of "self adjusting" game. So how can it be shown mathematically that this type of game will reach equilibrium where either player has about the same chance to win overall? If you do a computer simulation of this, it is somewhat amazing how many times it will hit exactly half and half. For example, if I run $1000$ decisions, I usually get $500$ A wins and $500$ B wins. It is very consistent (yet I am using different random numbers each run). This is WAY more predictable than something like fair coin flips which could get off to a "rocky" start. I think you can call this type of game "self adjusting" in that it will reach a "fair equilibrium" very quickly. This seems like a hard problem to state mathematically so I put a bounty of $100$ points on it as an extra incentive to those of you who want to try to solve it. If $2$ different users submit good answers then what I usually do is give one person the checkmark and the other the bounty to be more fair to both. Good luck. REPLY [3 votes]: By drawing a picture, you can get a really intuitive solution. 
Roughly speaking, yes you'll converge toward an equal probability of A and B winning in this game, and in any game where the following conditions hold (I've named them for convenience): Zero-sum. In every game, either A wins or B wins. Monotonicity The game gets harder for A whenever A wins, and easier whenever A loses. Straddling. There's a difficulty level where A wins with probability no less than ½, and a difficulty level where A wins with probability no more than ½. I like to think of the game as being played like this: A and B shuffle the deck fairly, then spread out all the cards in order and see whether A or B would have won if they had drawn the cards one-by-one. Every time they shuffle the deck, there is a clear winner— either A or B. (There is never a case where they both win, for example.) Let $n$ denote the game's current difficulty level for $A$, meaning the number of triples that $A$ must obtain in order to win. Let $p_n$ be the probability that when the deck is fairly-shuffled, the result yields a win for A— a deck where A will find $n$ triples before B finds a single quad. Of course, the more difficult the game, the harder it is for $A$ to win: $p_1 > p_2 > p_3 > \ldots \geq 0 $. (This is true because every deck that's a win for A at some certain high level of difficulty is also a win for A with respect to each lower level of difficulty— the win conditions are nested.) And we know that $p_1 = 1$, because A will always find a single triple before B finds a single quad. We also know that A can never win when $n=14$ because there simply aren't that many triples in the deck—we have that $p_{14} = 0$ Now we can draw an abstract diagram of the game possibilities which looks sort of like this: There is a node for each of the difficulty levels $n=1, 2, 3, $ and so on. Each node represents the game played at a different difficulty level for A. There is an arrow from each node to the one after it, representing the fact that the game can get more difficult if A wins. There is an arrow from each node to the one before it, representing the fact that the game can get easier if A loses. (As a special case, there is an arrow from $n=1$ to itself since we don't let the game get easier than that.) Each of the arrows is labeled with the probability of making that particular transition. The rightward arrows are labeled with the probability of $A$ winning. The leftward probabilities are labeled with the probability of $A$ losing. (Because of the zero-sum property, if probability of going forwards is $p_n$, then the probability of going backwards is $1-p_n$.) Now instead of playing our card game, we can simply play on this diagram: start on the $n=1$ node. With probability $p_n$, move right. With probability $1-p_{n}$, move left. We can ask about the game's equilibrium by asking whether there's an equilibrium state in this diagram. In fact, there must be an equilibrium state. Here's why: We have the straddling property: $p_1 = 1$ and $p_{14} = 0$. We have the monotonicity property: $p_1 > p_2 > \ldots $. We have the zero-sum property: the probability of moving left is one minus the probability of moving right. Putting this together, we conclude: the probability of $A$ winning starts out at $p_1 = 1$, and smoothly/monotonically decreases up until $p_{14}=0$, taking on values in between. At the same time, the probability of $B$ winning starts out at 0 when $n=1$, and smoothly/monotonically increases up to $n=14$, where the probability is certain. 
Of course, we recall that we move rightward whenever $A$ wins and leftward whenever $B$ wins. This means that you can draw a dividing line that separates the nodes into two groups: the left group where $p_n \geq \frac{1}{2}$, and the right group where $p_n \leq \frac{1}{2}$. Note that if you are in the left group, you are more likely to move rightward, and if you are in the right group, you are more likely to move leftward! (This is the "self-adjusting" property!) So at the boundary between the left and the right groups, there's a state $k$ where $p_k\approx \frac{1}{2}$. This game will gravitate toward an equilibrium state $k$ where $p_k \approx \frac{1}{2}$; such a state must exist because the probabilities smoothly/monotonically vary between extremes of $p_1 = 1$ and $p_{14}=0$, and such a state attracts equilibrium because you are more likely to move rightward for all states $n< k$, and more likely to move leftward for all states $n>k$. (!!) I hope this helps! If you want an even more formal rendition of this intuitive result, you can express our diagram in the language of Markov chains, and this equilibrium solution as the steady state of such a Markov chain. Edit: About fast convergence The series of games happens in roughly two phases. In the first phase, the game converges toward an equilibrium node $k$. In the second phase, the game fluctuates around this equilibrium node. During the first phase, you are essentially playing this game: Flip an unfairly-weighted coin until you've seen $k$ more heads than tails. The weight of the coin will change depending on the outcome: initially, the coin will always land on heads. And as you see more and more heads than tails, the coin will be less and less likely to come up heads again. But the game is always weighted in your favor because the probability of heads is always greater than ½. If we were tossing a fair coin, we would expect that it would take roughly $2k$ tosses to reach an equilibrium node. If we were tossing a coin that always came up heads, we would expect it would take exactly $k$ tosses to reach the equilibrium state. With an adaptive coin as in this game, our rough expectation is that it will take somewhere between $k$ and $2k$ tosses to reach equilibrium. In this game, $k$ might be around 7, so we would expect it to take around 11 games to reach equilibrium. At the end of the first phase, we know for certain that $\text{# A wins} = k+\text{# B wins}$. (Think about the number of left and right transitions required to reach equilibrium state $k$.) So based on the expected number of hands, A's expected share of the wins at the end of this phase is somewhere between $\frac{3k/2}{2k} = \frac{3}{4}$ (if the phase lasts $2k$ hands) and $\frac{k}{k} = 1$ (if it lasts only $k$ hands). During the second phase, the game fluctuates around the equilibrium node. The game's self-corrective behavior is key— what emerges from the rules of this game is that if A has lost more games than won, A is more likely to win, and vice versa. A theoretical fair coin toss is memoryless, meaning that the probability of getting heads does not depend on previous coin tosses. In contrast, this game is self-correcting: whenever the game moves leftward, the probability shifts to make it more likely to move rightward, and vice-versa. The other key property is that, I expect, there is a sharp transition where at one difficulty level $k$, we have that $p_k$ is reasonably more than ½, while for the very next difficulty level $k+1$, we have that $p_{k+1}$ is reasonably less than ½. A standard deck has thirteen faces (A, 2, 3, ..., J, Q, K).
If you played with a deck of cards with more than the standard number of faces — perhaps you played with cards labeled (A, B, C, ..., X, Y, Z) — then I think this transition would become less sharp. The sharpness of the transition controls the strength of the self-correcting behavior (when you move too far left/right, how significantly the odds change to favor correcting it.) Edit: Calculated probabilities The qualitative analysis above is enough to prove that convergence happens, and happens quickly. But if you want to know quantitatively how likely A is to win at each level of difficulty, here are the results:

 n   P(A wins)           P(B wins)
------------------------------------------
 1   1.0                 0.0
 2   0.877143650546      0.122856349454
 3   0.713335184606      0.286664815394
 4   0.540136101559      0.459863898441
 5   0.379534087296      0.620465912704
 6   0.245613806098      0.754386193902
 7   0.144695490795      0.855304509205
 8   0.0763060803548     0.923693919645
 9   0.0351530543328     0.964846945667
10   0.0136337090929     0.986366290907
11   0.00419040924575    0.995809590754
12   0.000911291868977   0.999088708131
13   0.000105680993713   0.999894319006
14   0.0                 1.0

Note: I wrote a computer program to compute these, so feel free to double-check my results. But qualitatively, the numbers do seem correct, and they agree with my hand calculations for $n=1$, $n=13$, and $n=14$. Importantly, these probabilities show that the game is essentially fair right from the start (where initially $n=4$)— the game starts near equilibrium. And because there is a sharp transition between $n=4$ (where A is more likely to win, $0.54$) and $n=5$ (where A is more likely to lose, $0.38$) (overall difference $\approx 0.54-0.38 = 0.16$), we expect this equilibrium to be very stable and self-correcting. Edit: Expected number of hands before reaching a given win threshold for A Because we can model this game as a Markov chain, we can directly compute the expected number of rounds you'd have to play before reaching a certain difficulty threshold for $A$. (Feel free to use a simulation to check these formal results.) Following the methods of this webpage: https://www.eecis.udel.edu/~mckennar/blog/markov-chains-and-expected-value, I did the following:
Define the transition matrix $P$. $P$ is a $14\times 14$ matrix, where the entry $P_{ij}$ is the probability of going from difficulty level $i$ to difficulty level $j$. Of course, based on the rules of the game, $P_{ij}$ is zero unless $j = i+1$ or $j = i-1$, in which case $P_{ij}$ is the probability of A winning (respectively losing) when the win threshold is $i$. (See the table for exact values of these probabilities.)
Form the matrix $T \equiv I - P$, where $I$ is the $14\times 14$ identity matrix.
Pick a difficulty threshold $d$ I'm interested in reaching. (For example, $d=12$.)
Delete row $d$ and column $d$ from the matrix $T$.
Form the size-13 column vector $\text{ones}$ where each entry is 1.
Solve the equation $T \cdot \vec{x} = \text{ones}$ for $\vec{x}$. Each entry $x_i$ in the solution is the expected number of hands to get from state $i$ to state $d$.
Here are the results: the expected number of hands to get from difficulty 4 to ...
... difficulty 1 is 150 hands.
... difficulty 2 is 22.11 hands.
... difficulty 3 is 5.34 hands.
... difficulty 4 is 0 hands (already there.)
... difficulty 5 is 3.48 hands.
... difficulty 6 is 11.81 hands.
... difficulty 7 is 41.46 hands.
... difficulty 8 is 223.65 hands.
... difficulty 9 is 2,442 hands.
... difficulty 10 is 63,362 hands.
... difficulty 11 is 4,470,865 hands. (over 1 million.)
... difficulty 12 is 1,051,870,809 hands. (over 1 billion.)
... difficulty 13 is 1,149,376,392,099 hands. (over 1 trillion.)
... difficulty 14 is 12,354,872,878,705,208 hands. (over 10 quadrillion.)<|endoftext|> TITLE: Real roots of $z^2+\alpha z + \beta=0$ QUESTION [7 upvotes]: Question: If the equation $z^2+\alpha z + \beta=0$ has a real root, prove that $$(\alpha\bar{\beta}-\beta\bar{\alpha})(\bar{\alpha}-\alpha)=(\beta-\bar{\beta})^2$$ I tried goofing around with the discriminant but was unable to come up with anything good. Just a hint towards a solution might work. REPLY [2 votes]: Let $x$ be a real root of the given equation. Then we have \begin{align*} x^2+\alpha x+\beta&=0\\ x^2+\overline\alpha x+\overline\beta&=0 \end{align*} and after subtracting we get \begin{gather*} (\alpha-\overline\alpha)x+(\beta-\overline\beta)=0\\ x=-\frac{\beta-\overline\beta}{\alpha-\overline\alpha} \end{gather*} Now let us plug this into the original equation \begin{align*} x^2&=-\alpha x -\beta\\ \frac{(\beta-\overline\beta)^2}{(\alpha-\overline\alpha)^2} &= \frac{\alpha(\beta-\overline\beta)}{\alpha-\overline\alpha} - \beta\\ \frac{(\beta-\overline\beta)^2}{(\alpha-\overline\alpha)^2} &= \frac{\alpha(\beta-\overline\beta)}{\alpha-\overline\alpha} - \frac{\beta(\alpha-\overline\alpha)}{\alpha-\overline\alpha}\\ \frac{(\beta-\overline\beta)^2}{(\alpha-\overline\alpha)^2} &= \frac{\beta\overline\alpha-\alpha\overline\beta}{\alpha-\overline\alpha} \\ (\beta-\overline\beta)^2 &= (\beta\overline\alpha-\alpha\overline\beta)(\alpha-\overline\alpha) \end{align*} This is basically what we wanted to prove. (In the given problem, the sign in both brackets is changed to the opposite, which does not change the expression.)<|endoftext|> TITLE: Infinite Ordinal Sum QUESTION [6 upvotes]: When working with ordinal numbers, would it be correct to say that: $$ \sum_{i=0}^{\infty}1 = \omega$$ Or does this simply not make sense? In the ordinals, does the notation $\sum^\infty_{i=0}$ even make sense, or would $\sum_{i=0}^\alpha$, with $\alpha$ being a (potentially infinite) ordinal, be the only correct notation? Thank you very much. REPLY [8 votes]: Ordinal summation requires an ordinal index. And $\infty$ is not an ordinal. Other than that, the summation does make sense in general. If $I$ is a linearly ordered set, and $x_i$ is a linearly ordered set for each $i\in I$, then $\sum_{i\in I}x_i$ would be the order type obtained by replacing $i$ with $x_i$, and considering the "[somewhat-]lexicographic order" obtained. If $I$ is an ordinal and each $x_i$ is an ordinal, it turns out that the sum is an ordinal as well. Which is why everything works out. As far as notation goes, I'd probably go for $\sum_{i<\alpha}$ and not $\sum_{i=1}^\alpha$. Which will also allow you to catch those pesky limit cases.<|endoftext|> TITLE: Proof/derivation of $\lim\limits_{n\to\infty}{\frac1{2^n}\sum\limits_{k=0}^n\binom{n}{k}\frac{an+bk}{cn+dk}}\stackrel?=\frac{2a+b}{2c+d}$? QUESTION [30 upvotes]: I just came up with the following identity while solving some combinatorial problem but am not sure if it's correct. I've done some numerical computations and they coincide. $$\lim_{n\to \infty}{\frac{1}{2^n}\sum_{k=0}^{n}\binom{n}{k}\frac{an+bk}{cn+dk}}\;\stackrel?=\;\frac{2a+b}{2c+d}$$ Here $a$, $b$, $c$, and $d$ are reals, except that $c$ mustn't be $0$ and $2c+d\neq0$. I wish I could explain how I came up with it, but I did nothing but compare numbers with the answer, then formulated the identity, and just did numerical computations.
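A quick numerical check of the conjecture along those lines might look like this (a sketch; the constants $a,b,c,d$ below are arbitrary test values satisfying the stated restrictions):

```python
from math import comb

def lhs(n, a, b, c, d):
    """The finite expression (1/2^n) * sum_k C(n,k) * (an+bk)/(cn+dk)."""
    return sum(comb(n, k) * (a*n + b*k) / (c*n + d*k) for k in range(n + 1)) / 2**n

a, b, c, d = 3.0, -1.0, 2.0, 5.0
for n in (10, 100, 500):
    print(n, lhs(n, a, b, c, d))      # approaches (2a+b)/(2c+d) = 5/9 = 0.555...
print((2*a + b) / (2*c + d))
```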
REPLY [4 votes]: I believe a probabilistic argument can be given here. Let $X_1, X_2, \ldots $ be iid Bernoulli random variables with success probability $p$. Then by the Strong Law of Large Numbers $$\bar{X}_n:=\frac{1}n\sum_{i=1}^n X_i \stackrel{a.s.}{\to} p$$ Also for any continuous function $g$ we have $g(\bar{X}_n) \stackrel{a.s.}{\to} g(p)$. Now suppose $g$ is such that there exists a random variable $Y$ with finite expectation such that $g(\bar{X}_n) \le Y$ almost surely. Then by DCT we have $E(g(\bar{X}_n)) \to g(p)$. Now this has lots of applications. Example 1: Take $p=\frac12$ and $g(x)=\frac{a+bx}{c+dx}$. Make sure $g$ is continuous on $[0,1]$. Then clearly $g$ is bounded. Hence by DCT $$E(g(\bar{X}_n))=\sum_{k=0}^n \frac{\binom{n}{k}}{2^n}g\left(\frac{k}{n}\right) \to g\left(\frac12\right)=\frac{2a+b}{2c+d}$$ Example 2: Take $p=\frac12$, $g(x)=e^{-x^2}$. Clearly $g$ is continuous and bounded. Hence $$\frac1{2^n}\sum_{k=0}^n \binom{n}{k}e^{-k^2/n^2} \to e^{-1/4}$$ Similarly we can obtain a plethora of such results.<|endoftext|> TITLE: Prove that $(2^n-1)(3^n-1)$ is not a perfect square QUESTION [12 upvotes]: Prove that $(2^n-1)(3^n-1)$ is not a perfect square. I have tried this problem for a few days already and I feel I am really far from solving it. Most of my approaches have been analyzing how many times 2 divides the number, and how many times 3 divides it, as well as various mods. I am starting to think the proof is going to be factoring in a weird field or something like that instead. We can see that if $n$ is odd then $3^n-1$ is divisible by $2$ exactly one time, so the exponent of $2$ in the prime factorization of the number is $1$ and thus it is not a perfect square. Furthermore, by the lifting-the-exponent lemma we know that since $n$ is even, the exponent of $2$ in the prime factorization of $3^n-1$ is $v_2(3-1)+v_2(3+1)+v_2(n)-1 = 2+v_2(n)$, so we need $v_2(n)$ to be even. Therefore it is greater than or equal to $2$, i.e. $4$ divides $n$. Similarly by lifting we can see that the exponent of $3$ in $2^n-1$ is $1+v_3(n)$, so we have that $v_3(n)$ is odd, i.e. $3$ divides $n$. Therefore if the expression is a perfect square we must have $12|n$. REPLY [6 votes]: That there are no solutions was proved by Szalay in 1997; a generalization to the equation $$ (2^n-1)(3^m-1) = z^2 $$ was given by Walsh in 2000 or so : http://mysite.science.uottawa.ca/gwalsh/slov1.pdf The proof follows from elementary arguments about (binary) recurrence sequences and local considerations at the primes $2$ and $3$.<|endoftext|> TITLE: (Tournament of towns 1994) Prove the inequality QUESTION [7 upvotes]: Let $a_1,a_2,\ldots,a_n$ be real positive numbers. Prove that $$\left(1+\frac{a_1^2}{a_2}\right)\left(1+\frac{a_2^2}{a_3}\right) \cdots \left(1+\frac{a_n^2}{a_1}\right) \geq(1+a_1)(1+a_2) \cdots (1+a_n)$$ REPLY [12 votes]: By the Cauchy-Schwarz inequality, we have the following: $$(1+a_2)\left(1+\frac{a_1^2}{a_2}\right)\geq (1+a_1)^2$$ $$(1+a_3)\left(1+\frac{a_2^2}{a_3}\right)\geq (1+a_2)^2$$ $$\vdots$$ $$(1+a_1)\left(1+\frac{a_n^2}{a_1}\right)\geq (1+a_n)^2$$ from where we have: $$\left(1+\frac{a_1^2}{a_2}\right)\left(1+\frac{a_2^2}{a_3}\right)\cdots\left(1+\frac{a_n^2}{a_1}\right)\prod_{i=1}^n (1+a_i)\geq \prod_{i=1}^n (1+a_i)^2.$$ By division by $\prod_{i=1}^n (1+a_i)$ we get $$\left(1+\frac{a_1^2}{a_2}\right)\left(1+\frac{a_2^2}{a_3}\right)\cdots\left(1+\frac{a_n^2}{a_1}\right)\geq \prod_{i=1}^n (1+a_i).$$<|endoftext|> TITLE: How to find the smallest set of generating elements in a group?
QUESTION [5 upvotes]: Is there a systematic procedure for finding the smallest set of generating elements of a finite group? REPLY [3 votes]: The following method can be used to find the smallest set of generating elements in a finite group. It is based on the following theorem: Suppose $G$ is a finite group and $\{X_i\}_{i = 1}^{n}$ are i.i.d. uniformly distributed random elements of $G$. Then $P(\langle \{X_i\}_{i = 1}^{n} \rangle = G) = \sum_{H \leq G} \mu(G, H) {\left(\frac{|H|}{|G|}\right)}^n$, where $\mu$ is the Moebius function for the subgroup lattice of $G$. Thus, the smallest possible cardinality of a generating set can be described by the following formula: $$\min\left\{n \in \mathbb{N} \,\middle|\, \sum_{H \leq G} \mu(G, H) {\left(\frac{|H|}{|G|}\right)}^n > 0\right\}$$ And if we know the smallest possible cardinality of a generating set (let's denote it by $s$), then we can find an example by checking each of the $\binom{|G|}{s}$ subsets of that size on whether it lies in one of the maximal proper subgroups or not. If a subset lies in no maximal proper subgroup, it generates $G$ and so is a smallest possible generating set.<|endoftext|> TITLE: higher K-theory of complex numbers QUESTION [8 upvotes]: What is known about the higher algebraic K-theory of the complex numbers $\mathbb{C}$? It's obvious that $K_0(\mathbb{C}) = \mathbb{Z}$. According to Wikipedia, it seems like we should have $K_1(\mathbb{C}) = \mathbb{C}^\times$. It seems like one can write down something for $K_2$, though I don't really understand it very well. Roughly, I want to get a sense of how hard this problem is (blind guess: it is hard). If it turns out this problem is easy, then the follow-up question is what is the higher algebraic K-theory of $BG$, i.e. the exact category of $G$-representations, for $G$ an algebraic group? REPLY [9 votes]: I'm definitely not an expert on algebraic $K$-theory, but here is at least some idea of what is true for $K_i(\mathbf{C})$. The main result is the following: Theorem (Suslin 1984). Modulo uniquely divisible groups, we have $$K_i(\mathbf{C}) = \begin{cases} 0 & \text{if}\ i\ \text{even}\\ \mathbf{Q}/\mathbf{Z} & \text{if}\ i\ \text{odd} \end{cases}$$ I don't know if you can get a more complete description of the uniquely divisible part. The key idea is to use the following Theorem (Suslin 1984). $\mathrm{BGL}(\mathbf{C})^+ \to \mathrm{BGL}(\mathbf{C})^\mathrm{top}$ induces isomorphisms on homology and homotopy groups with finite coefficients. Here, $\mathrm{BGL}(\mathbf{C})$ denotes the classifying space for the discrete group $\mathrm{GL}(\mathbf{C})$, while $\mathrm{BGL}(\mathbf{C})^\mathrm{top} \simeq \mathrm{BU}$ is the classifying space of the topological group $\mathrm{GL}(\mathbf{C})^\mathrm{top}$. The first Theorem then follows by a result due to Weibel; look at the last section in Suslin's paper. For other references, the relevant chapter (§3) in Weibel's book (I've linked to a draft version) is useful, as is this ICM lecture (§2) by Suslin, and this survey (§22) by Grayson. Note also that Jardine has an alternate proof of the first theorem using sheaf cohomology on the big étale site. For your second question, I think this example shows that one approach is to look for a comparison like in the second theorem above between $\mathrm{BG}$ and $\mathrm{BG}^\mathrm{top}$, and then to get information about $\mathrm{BG}^\mathrm{top}$ somehow. Even in the case above, though, Weibel's calculation looks difficult.<|endoftext|> TITLE: Are the elements of a module also called vectors? QUESTION [8 upvotes]: Are the elements of a module also called vectors?
Or if someone says 'vector', are they talking only about a vector space? If no context is given, are there some standard assumptions? REPLY [6 votes]: No. Normally we do not have special names like this ...
Elements of a group are called groupies.
Elements of a ring are called ringlets.<|endoftext|> TITLE: $\|D_{f}(x) v\|=\|v\|$ $\implies$ $f$ is an isometry QUESTION [7 upvotes]: Let $f\colon \mathbb{R}^m \to \mathbb{R}^m$ be a $C^2$ map such that $\|D_f(x)v\|=\|v\|$ for all $v\in\mathbb{R}^m$, where $D_f(x)$ is the derivative of $f$ at $x$. Then I am asked to prove that $\|f(x)-f(y)\|=\|x-y\|$ for all $x,y\in\mathbb{R}^m$. There's a hint that says to use the Schwarz theorem. I can only prove that $\|f(x)-f(y)\| \le \|x-y\|$, using the image of $\alpha \colon [0,1] \to \mathbb{R}^m$, $\alpha(t)=x+(y-x)t$. Consider $\gamma\colon [0,1] \to \mathbb{R}^m$, $\gamma(t)=f(\alpha(t))$; we have that $\gamma '(t)=D_{f}(x +(y-x)t).(y-x)$. Since $\gamma$ connects $f(x)$ and $f(y)$ and is $C^{1}$ we have: $\|f(x)-f(y)\| \leq L(\gamma) = \displaystyle\int_{[0,1]} \|\gamma '(t)\|dt=\displaystyle\int_{[0,1]}\|D_{f}(x +(y-x)t).(y-x)\|dt$. By assumption this equals $\displaystyle\int_{[0,1]} \|x-y\| \, dt = \|x-y\|$. I have no idea how to use the Schwarz theorem in this case. Any help is welcome. REPLY [5 votes]: $f$ is an isometry if its second partials vanish. Here's a quick explanation why: If the second partials vanish, then $D_f(\cdot)$ is constant, so that $$ \begin{eqnarray} \forall(v\in\mathbb{R}^m) \: f(v) & = & f(0) + \int_0^1 \frac{d}{dt} f(tv)\: dt \\ & =& f(0) + \int_0^1 D_f(tv) \cdot v \:dt \\ & = & f(0) + \int_0^1 D_f(0) \cdot v \:dt \\ & = & f(0) + D_f(0)\cdot v. \end{eqnarray} $$ Then $\forall(v,w \in \mathbb{R}^m) \: ||f(v)-f(w)|| = ||D_f(0)v - D_f(0)w|| = ||D_f(0) (v-w)|| = ||v-w||$, as desired. The more interesting part is showing that the second partials all vanish: Some notation: Let $f^i$ be component $i$ of $f$ (so $f=(f^1,f^2,\dots,f^m)^T$); let $f^i_j := \partial_j f^i$ be the partial of $f^i$ in direction $e_j$; and let $f_j := \partial_j f = (f^1_j,f^2_j,\dots,f^m_j)^T$. $||D_f(x) v|| = ||v||$ for all $v$ means that $D_f(x)$ is an orthogonal matrix. So $D_f(x)^T D_f(x) =D_f(x) D_f(x)^T = \mathcal{I}_{m\times m}$, or in components $\sum_j f^i_j f^k_j = \delta^{ik}$ and $\sum_if^i_j f^i_k = \delta_{jk} = f_j \cdot f_k$. Apply a partial derivative to the second of these relations to get $0 = \partial_l (f_j \cdot f_k) = f_{jl} \cdot f_k + f_j \cdot f_{kl}$, so that $f_{jl} \cdot f_k = -f_j \cdot f_{kl}$, which holds for all indices $j,k,l$. Now we have $$ \begin{eqnarray} f_{jl} \cdot f_k & = & -f_j \cdot f_{kl} \\ & = & - f_{j} \cdot f_{lk} \\ & = & f_{jk} \cdot f_{l} \\ & = & f_{kj} \cdot f_{l} \\ & = & -f_{k} \cdot f_{lj} \\ & = & -f_k \cdot f_{jl} \\ \end{eqnarray} $$ so that $f_{jl}\cdot f_k = 0$. Note that I have swapped the order of differentiation (i.e. used Schwarz's theorem) numerous times above. Now since $D_f(x)$ is an orthogonal matrix, the columns $f_i$ form an orthonormal basis for $\mathbb{R}^m$. Since $f_{jk}$ is orthogonal to all $f_i$, it must therefore be the zero vector. This shows that all second partials of $f$ vanish.<|endoftext|> TITLE: If B(X) is isomorphic to B(Y), does that mean X is isomorphic to Y (for X and Y Banach spaces)?
QUESTION [6 upvotes]: Let $X$ and $Y$ be Banach spaces such that $\mathcal{B}(X)$ is linearly isomorphic to $\mathcal{B}(Y)$ (where $\mathcal{B}(\cdot)$ denotes the algebra of bounded linear operators). Must it always be the case that $X$ is therefore isomorphic to $Y$? I suspect this is not true, but I can't immediately see how to produce a counter-example. Surely, whether it is true or false, this must be well-known. REPLY [4 votes]: Ben, the answer is no. Note that if $X$ is reflexive, then $B(X)$ is isometric to $B(X^*)$ via $T\mapsto T^*$. Note that this map is an anti-isomorphism of Banach algebras. As for less trivial examples, $B(\ell_p)$ is Banach-space isomorphic to $B(L_p)$, as well as to $B(X)$ for any other separable, infinite-dimensional $\mathscr{L}_p$-space, yet $\ell_p$ is not isomorphic to $L_p$ unless $p=2$. This was observed by Arias and Farmer, but I suppose it had been known before to the big shots in the field.<|endoftext|> TITLE: In set theory, what does it mean for a variable to have a bar symbol above it? QUESTION [8 upvotes]: What does the bar symbol, as in $\bar{B}$, mean in this context? REPLY [10 votes]: The complement of the set $B$, also commonly denoted as $B'$ or $B^c$. It is the set: $$\bar{B} = B^c = B' = \{x \mid x \not \in B\}$$<|endoftext|> TITLE: Rational numbers as vectors in infinite dimensional space with the basis $( \log 2,\log 3, \log 5, \log 7, \dots, \log p, \dots) $ QUESTION [7 upvotes]: Since every natural number can be represented as $a=2^{n_1}3^{n_2}5^{n_3}7^{n_4}\cdots p_k^{n_k}\cdots$ it makes sense to represent natural numbers by vectors, using the properties of logarithms: $$\log a=n_1 \log 2+n_2 \log3+n_3 \log5+\cdots$$ This space appears to be similar to the usual Euclidean space if we extend it to an infinite number of dimensions. If we allow negative coordinates, we can also put all rational numbers in this space; for example, one can plot the part of the plane spanned by $(\log 2, \log 3)$. Does this space have any application in number theory? If it is studied, then how is it usually defined? Are the usual Cartesian vector dot product and the usual Euclidean norm used? Or does it make sense to use a different norm (for example, the taxicab norm)? REPLY [3 votes]: There is a generalization of vector spaces called "modules" which allows any ring to serve as scalars. When you use the integers as the ring of scalars, a "module" is the same thing as an "abelian group". The group of 'factorizations' is indeed a free abelian group, which is the kind of abelian group that behaves most similarly to a vector space. Factorizations are indeed important in number theory. More generally, rather than the rationals you might consider number fields or even global fields. You would then consider things like prime ideals or places instead of prime numbers. Formally taking logarithms like you are is somewhat superfluous — what you're doing is mainly just changing the notation of the group operation to $+$ so that it's easier to think about it in terms of linear algebra. It can indeed be useful to extend to real coefficients rather than merely integer coefficients. E.g. after restricting to a finite set of primes, number theorists like to view the group of factorizations as a lattice contained in the corresponding vector space $\mathbb{R}^n$ and use geometric methods to study things.
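To make the exponent-vector picture concrete, here is a small sketch that maps a positive rational to its finitely supported coordinate vector over the primes. It uses sympy's factorint; the helper name `to_vector` is hypothetical, and only positive rationals are handled.

```python
from fractions import Fraction
from sympy import factorint

def to_vector(q):
    """Map a positive rational to {prime: exponent}; denominator primes get negative exponents."""
    q = Fraction(q)
    assert q > 0, "this sketch handles positive rationals only"
    vec = dict(factorint(q.numerator))
    for p, e in factorint(q.denominator).items():
        vec[p] = vec.get(p, 0) - e
    return vec

print(to_vector(Fraction(12, 35)))   # {2: 2, 3: 1, 5: -1, 7: -1}
```

Multiplication of rationals then becomes coordinatewise addition of these vectors, which is exactly the linear-algebra analogy the question is drawing.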
The most natural norm to take here is a weighted $L^1$ norm $$ \left\| \sum_{n=0}^{\infty} a_n \log p_n \right\| = \sum_{n=0}^{\infty} |a_n| \ln p_n $$ This way, the norm of the factorization of an integer is precisely the natural logarithm of the magnitude of that integer. More generally, if $a$ and $b$ are relatively prime nonzero integers, then the norm of the factorization of $a/b$ is simply $\ln|ab|$.<|endoftext|> TITLE: When is $\overline{K}/K$ a Galois extension of $K$? QUESTION [6 upvotes]: When is $\overline{K}/K$ a Galois extension of $K$, where $\overline{K}$ stands for the algebraic closure of $K$? I have the following three extensions: $\overline{\mathbb{Q}}/\mathbb{Q}$, $\overline{\mathbb{F}_p}/\mathbb{F}_p$, $\overline{\mathbb{F}_p (t)}/\mathbb{F}_p (t)$. I know that the algebraic closure is always normal, so I only need to check whether these algebraic closures are separable or not. But an algebraic extension of a characteristic zero field or a finite field is always separable, so all three are Galois extensions. Is my reasoning correct? Thank you. REPLY [6 votes]: As others have said, your reasoning is correct for the first two examples but not the third. Let me discuss how to answer the question for a general field. In general, if $K$ is a field, then $\overline{K}/K$ is Galois iff either $K$ has characteristic $0$ or $K$ has characteristic $p$ and every element of $K$ has a $p$th root. Such a field is called perfect. Indeed, if some element $a\in K$ does not have a $p$th root, then $\sqrt[p]{a}\in \overline{K}$ is not separable over $K$. Conversely, if every element of $K$ has a $p$th root and $a\in\overline{K}$ is not separable, let $f(x)$ be the minimal polynomial of $a$. Since $a$ is not separable, we can write $f(x)=g(x^p)$ for some polynomial $g(x)\in K[x]$. But since every element of $K$ has a $p$th root, we can take the $p$th roots of all the coefficients of $g$ to get $h(x)\in K[x]$ such that $f(x)=h(x)^p$. This contradicts irreducibility of $f(x)$. In particular, a finite field is perfect since $x\mapsto x^p$ is injective and hence surjective, since any injection from a finite set to itself is surjective. But $\mathbb{F}_p(t)$ is not perfect, since $t$ has no $p$th root.<|endoftext|> TITLE: $\ker \phi = (x_1-a_1, ..., x_n-a_n)$ for a ring homomorphism $\phi: R[x_1, ..., x_n] \to R$ QUESTION [12 upvotes]: Let $R$ be a commutative ring, $a_1, ..., a_n$ its elements, and $\phi: R[x_1, ..., x_n] \to R$ defined by $ \phi(f(x_1, ..., x_n)) = f(a_1, ... ,a_n)$ a ring homomorphism. Prove: $\ker \phi = (x_1-a_1, ..., x_n-a_n)$ It is obvious that $(x_1 - a_1, ..., x_n -a_n) \subseteq \ker \phi$. I'm not sure how to prove the converse. At this point I don't know any division algorithms for multivariable polynomials, only for the ones in $R[x]$ (and the book from which I took the exercise doesn't assume the reader knows anything beyond the basics of rings and ideals, and the division algorithm for $R[x]$). Though I know this could be solved for $n = 1$ by dividing by $x-a$: Let $f(x) \in \ker \phi$, divide by $x-a$: $f(x) = q(x)(x-a) + r$, $f(a) = r = 0$, so $f(x) = q(x)(x-a) \in (x-a)$. REPLY [3 votes]: This proof is pretty notation-heavy, for which I apologize, but the idea is not complicated: a polynomial $f$ is usually written as a Taylor series centered at $(0,\ldots, 0)$. We use the Binomial Theorem to re-center it at $(\alpha_1, \ldots, \alpha_n)$ and show that the constant term of this series is $f(\alpha_1, \ldots, \alpha_n)$.
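In one variable, the re-centering step is easy to see concretely with sympy (a sketch; the polynomial below is an arbitrary example):

```python
from sympy import symbols, expand, Poly

x, t, a = symbols('x t a')
f = x**3 - 2*x + 1
g = Poly(expand(f.subs(x, t + a)), t)   # rewrite f in powers of t = x - a
print(g.coeff_monomial(1))              # constant term: a**3 - 2*a + 1, i.e. f(a)
```

If $f(a)=0$, every remaining term carries a factor of $t = x-a$, so $f$ lies in the ideal $(x-a)$; the lemma below runs the same computation in $n$ variables.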
Lemma: Let $R$ be a unital commutative ring and $R[x_1, \ldots, x_n]$ be the $n$-variable polynomial ring over $R$. For $\alpha_1, \ldots, \alpha_n \in R$, let \begin{align*} \varphi = \text{eval}_{\alpha_1, \ldots, \alpha_n}: R[x_1, \ldots, x_n] &\to R\\ x_1, \ldots, x_n &\mapsto \alpha_1, \ldots, \alpha_n \end{align*} be the evaluation homomorphism. Then $\ker(\varphi) = (x_1-\alpha_1, \ldots, x_n-\alpha_n)$ and the induced map $\overline{\varphi}: R[x_1, \ldots, x_n]/(x_1-\alpha_1, \ldots, x_n-\alpha_n) \to R$ is an isomorphism. Proof: Certainly $(x_1-\alpha_1, \ldots, x_n-\alpha_n) \subseteq \ker(\varphi)$ so it remains to show the reverse inclusion. Given $f \in \ker(\varphi)$, we may write $f(x_1, \ldots, x_n) = \sum_{\gamma} a_\gamma {x_1}^{\gamma_1} \cdots {x_n}^{\gamma_n}$ where $a_\gamma = 0$ for all but finitely many multi-indices $\gamma$. Writing $x_i = (x_i - \alpha_i) + \alpha_i$ and expanding by the binomial theorem, we have \begin{align*} f(x_1, \ldots, x_n) &= \sum_{\gamma} a_\gamma {x_1}^{\gamma_1} \cdots {x_n}^{\gamma_n} = \sum_{\gamma} a_\gamma ((x_1 -\alpha_1) +\alpha_1)^{\gamma_1} \cdots ((x_n -\alpha_n) +\alpha_n)^{\gamma_n}\\ &= \sum_{\gamma} a_\gamma \left(\sum_{\kappa_1=0}^{\gamma_1} \binom{\gamma_1}{\kappa_1} (x_1 -\alpha_1)^{\kappa_1}\alpha_1^{\gamma_1 - \kappa_1}\right) \cdots \left(\sum_{\kappa_n=0}^{\gamma_n} \binom{\gamma_n}{\kappa_n} (x_n -\alpha_n)^{\kappa_n}\alpha_n^{\gamma_n - \kappa_n}\right)\\ &= \sum_{\gamma} a_\gamma \sum_{\kappa_1=0}^{\gamma_1} \cdots \sum_{\kappa_n=0}^{\gamma_n} \binom{\gamma_1}{\kappa_1} \cdots \binom{\gamma_n}{\kappa_n}\alpha_1^{\gamma_1 - \kappa_1} \cdots\alpha_n^{\gamma_n - \kappa_n} (x_1 -\alpha_1)^{\kappa_1} \cdots (x_n -\alpha_n)^{\kappa_n}\\ &= \sum_{\gamma} a_\gamma \sum_{\kappa} \binom{\gamma_1}{\kappa_1} \cdots \binom{\gamma_n}{\kappa_n}\alpha_1^{\gamma_1 - \kappa_1} \cdots\alpha_n^{\gamma_n - \kappa_n} (x_1 -\alpha_1)^{\kappa_1} \cdots (x_n -\alpha_n)^{\kappa_n}\\ &=\sum_{\kappa} \underbrace{\left(\sum_{\gamma}a_\gamma\binom{\gamma_1}{\kappa_1} \cdots \binom{\gamma_n}{\kappa_n}\alpha_1^{\gamma_1 - \kappa_1} \cdots\alpha_n^{\gamma_n - \kappa_n}\right)}_{b_\kappa} (x_1 -\alpha_1)^{\kappa_1} \cdots (x_n -\alpha_n)^{\kappa_n}\\ &=\sum_{\kappa} b_\kappa (x_1 -\alpha_1)^{\kappa_1} \cdots (x_n -\alpha_n)^{\kappa_n}. \end{align*} Note that the constant term $b_{(0,\ldots, 0)}$ is \begin{align*} b_{(0,\ldots, 0)} = \sum_{\gamma} a_\gamma\binom{\gamma_1}{0} \cdots \binom{\gamma_n}{0}\alpha_1^{\gamma_1} \cdots\alpha_n^{\gamma_n} = f(\alpha_1, \ldots,\alpha_n) = \varphi(f) = 0 \end{align*} since $f \in \ker(\varphi)$. Now all other terms in the sum $\sum_{\kappa} b_\kappa (x_1 -\alpha_1)^{\kappa_1} \cdots (x_n -\alpha_n)^{\kappa_n}$ have a factor of $(x_i - \alpha_i)^{\kappa_i}$ for some $i$ with $\kappa_i \geq 1$, hence belong to $(x_1 - \alpha_1, \ldots, x_n - \alpha_n)$. Thus $f(x_1, \ldots, x_n) = \sum_{\kappa} b_\kappa (x_1 -\alpha_1)^{\kappa_1} \cdots (x_n -\alpha_n)^{\kappa_n} \in (x_1 - \alpha_1, \ldots, x_n - \alpha_n)$.<|endoftext|> TITLE: List of theorems named after non-human animals QUESTION [9 upvotes]: I think it would be entertaining if we could come up with a list of theorems named after non-human animals (so excluding names like "Gauss's lemma" and the like). So far, I have only encountered two, or if there's more I can't remember them: $1.$ The Butterfly Lemma (aka the Zassenhaus Lemma in group theory) $2.$ The Snake Lemma of homological algebra Is anyone aware of any more theorems named after animals? 
And if so, why does it have the name it has? (I'm also not sure as to whether or not I should tag this question as a soft question, as it seems that whether or not a theorem has such a name has an objective answer, but I am happy to edit if need be.) REPLY [4 votes]: Here are some mathematical theorems or solved problems named after farm animals:
King chicken theorem and other theorems about pecking orders (with chicken comics!)
Archimedes' cattle problem
Goat problem
Then there are mathematical objects (or related algorithms) named after animals:
Caterpillar tree, its cousin centipede graph, and the more distantly related lobster graph
Crocodile dilemma
Hydra game (technically Hydra is mythical so this doesn't count, but it's an interesting game.)
Ant colony optimization
REPLY [3 votes]: Here are some which, like the other answers, are not all strictly theorems:
The busy beaver function, a classic example of uncomputability
Dragon fractals, which I've seen appear in the theory of wavelets
The butterfly theorem in plane geometry. Note this is distinct from the butterfly lemma which the OP mentioned.
The Baire category theorem in analysis (kidding).<|endoftext|> TITLE: Can different choices of regulator assign different values to the same divergent series? QUESTION [16 upvotes]: Physicists often assign a finite value to a divergent series $\sum_{n=0}^\infty a_n$ via the following regularization scheme: they find a sequence of analytic functions $f_n(z)$ such that $f_n(0) = a_n$ and $g(z) := \sum_{n=0}^\infty f_n(z)$ converges for $z$ in some open set $U$ (which does not contain 0, or else $\sum_{n=0}^\infty a_n$ would converge), then analytically continue $g(z)$ to $z=0$ and assign $\sum_{n=0}^\infty a_n$ the value $g(0)$. Does this prescription always yield a unique finite answer, or do there exist two different sets of regularization functions $f_n(z)$ and $h_n(z)$ that agree at $z=0$, such that applying the analytic continuation procedure above to $f_n(z)$ and to $h_n(z)$ yields two different, finite values? REPLY [8 votes]: The way you have the question written, the procedure can absolutely lead to different, finite, results depending on one's choice of the $f_n(z)$. Take the simple example of $1-1+1-1+\ldots$. The most obvious possibility is to take $f_n(z)=\frac{(-1)^n}{(z+1)^n}$ (i.e., a geometric series), in which case $$ g(z)=\sum_{n=0}^{\infty}f_n(z)=\sum_{n=0}^{\infty}\frac{(-1)^n}{(z+1)^{n}}=\frac{1}{1+\frac{1}{z+1}}=\frac{z+1}{z+2}, $$ where the sum converges for $|z+1|>1$, and $g(0)=1/2$. But if you don't insist on the terms forming a power series, then there are other possibilities. For instance, let $f_{2m}(z)=(m+1)^z$ and $f_{2m+1}(z)=-(m+1)^z$ (i.e., zeta-regularize the positive and negative terms separately); then $g(z)=0$ everywhere, where the sum converges for $\Re(z) < -1$ and is analytically continued to $z=0$. By taking an appropriate linear combination of the first and second possibilities, you can get $1-1+1-1+\ldots$ to equal any value at all. Specifically, taking $$ f_n(z)=(-1)^n \left(\frac{2\beta}{(z+1)^n}+(1-2\beta)\left\lceil\frac{n+1}{2}\right\rceil^z\right), $$ you find $g(z)=2\beta(z+1)/(z+2)$, convergent in an open region of the left half-plane, and $g(0)=\beta$.<|endoftext|> TITLE: Show that a point is a midpoint of a side of a triangle QUESTION [6 upvotes]: In $\Delta ABC$, the bisector of $\angle A$ intersects $BC$ at $D$. The perpendicular to $AD$ from $B$ intersects $AD$ at $E$. The line through $E$ parallel to $AC$ intersects $BC$ at $G$, and $AB$ at $H$.
Prove that $H$ is the mid-point of $AB$. I have no idea how to prove this. Any hint would be much appreciated. REPLY [2 votes]: We need to consider only the triangle $ABE$ with the point $H$ on $AB$, where $\angle HEA=\angle EAC=\angle BAE=\theta$ (the first equality uses $HE \parallel AC$). We have $\angle BHE=2\theta$ and $\angle BEH=\frac12\pi-\theta$; so $\angle HBE=\frac12\pi-\theta$. Hence triangle $BHE$ is isosceles, as is $EHA$, with $|HB|=|HE|=|HA|.$<|endoftext|> TITLE: Compute $\sum\limits_{k=0}^{100}\frac{1}{(100-k)!(100+k)!}$ QUESTION [6 upvotes]: $$\sum_{k=0}^{100}\frac{1}{(100-k)!(100+k)!}$$ My work (with $n=100$): $$\sum_{k=0}^{n}\frac{(2n)!}{(2n)!\,(n-k)!(n+k)!}$$ $$\sum_{k=0}^{n}\frac{^{2n}C_{n-k}}{(2n)!}$$ $$\sum_{k=0}^{n}\frac{^{2n}C_{n+k}}{(2n)!}$$ How should I proceed further? REPLY [2 votes]: Solving the problem for $n \in \mathbb{N}$ instead of only $n = 100$, you get \begin{align} \sum_{k=0}^{n} \frac{1}{(n-k)!(n+k)!} &= \frac{1}{(2n)!} \sum_{k=n}^{2n} \frac{(2n)!}{k!(2n-k)!} = \frac{1}{(2n)!} \sum_{k=n}^{2n} \binom{2n}{k} \\ &= \frac{1}{2(2n)!} \left( \binom{2n}{n} + 2^{2n} \right) \\ \end{align} In the last step you use the fact that $$ \sum_{k=n}^{2n} \binom{2n}{k} = \frac{1}{2} \left( \binom{2n}{n} + 2^{2n} \right) $$ which is shown by \begin{align} 2^{2n} &= \sum_{k=0}^{2n} \binom{2n}{k} \\ &= \sum_{k=0}^{n-1} \binom{2n}{k} ~+~ \binom{2n}{n} ~+~ \sum_{k=n+1}^{2n} \binom{2n}{k} \\ &= \sum_{k=n+1}^{2n} \binom{2n}{2n-k} ~+~ \binom{2n}{n} ~+~ \sum_{k=n+1}^{2n} \binom{2n}{k} \\ &= \sum_{k=n+1}^{2n} \binom{2n}{k} ~+~ \binom{2n}{n} ~+~ \sum_{k=n+1}^{2n} \binom{2n}{k} \\ &= 2 \sum_{k=n}^{2n} \binom{2n}{k} ~-~ \binom{2n}{n} \end{align} from which you get $$ \frac{1}{2} \left( 2^{2n} + \binom{2n}{n} \right) = \sum_{k=n}^{2n} \binom{2n}{k} $$<|endoftext|> TITLE: Mordell curves with many integral points QUESTION [5 upvotes]: For $k\in{\mathbb Z},k\neq 0$, denote by $f(k)$ the number of integral points on the Mordell curve $y^2-x^3=k$. According to the data at http://tnt.math.se.tmu.ac.jp/simath/MORDELL , the largest value of $f$ on the interval $[-10000,10000]$ is 32, attained for $k=1025$. Is it known/conjectured whether $f$ can take arbitrarily high values ? REPLY [6 votes]: The largest value for $f(k)$ that I know of is an example due to Noam Elkies with $k = 509142596247656696242225$, where there are (at least) $125$ pairs of solutions (so $f(k)=250$ in your notation). If ranks of elliptic curves over the rationals are absolutely bounded, then so is $f(k)$ (as long as one restricts to $6$th power free values of $k$ to avoid trivial scaling). Nothing is provable at this point, however.<|endoftext|> TITLE: A condition for being a prime: $\;\forall m,n\in\mathbb Z^+\!:\,p=m+n\implies \gcd(m,n)=1$ QUESTION [14 upvotes]: If $\;p=m+n$ where $p\in\mathbb P$, then $m,n$ are coprime, of course. But what about the converse? Conjecture: $p$ is prime if $\;\forall m,n\in\mathbb Z^+\!:\,p=m+n\implies \gcd(m,n)=1$ Tested (and verified) for all $p<100000$. REPLY [4 votes]: For $p = 1$ this is obviously wrong: for all positive integers $m, n$ with $m+n=p$ we have $\gcd(m,n)=1$ vacuously (there are no such $m, n$ at all, which doesn't matter), but $p=1$ is not prime.<|endoftext|> TITLE: Are all infinite sums not divergent? In quantum field theory QUESTION [7 upvotes]: I am a physicist interested in physics. In particular this question is related to quantum field theory. I recently came across a derivation of the infinite sum $1+1+1+1+\cdots$
that produced the result $-1/2$, aka zeta regularization (from Terry Tao's blog). This was quite surprising to me, as I had previously known an infinite sum of 1s to be divergent - from taking a math physics course by the guy that wrote "the book" on asymptotic methods. (Indeed I've known the sum of all positive integers to be assigned a finite value for quite some time, as well as many other "divergent-looking" sums, and I get the whole idea behind summation methods like Padé, Shanks, Euler, etc.) Anyways this prompted me to wonder: Are ALL infinite sums not divergent? If not, then how can one determine whether a sum is divergent or not? What was all this business in undergrad calculus about learning tests of convergence, and all this business in complex analysis about series, if weird things like $1+1+1+1+\cdots$ are actually convergent?? I'm still confused by all this stuff. And I haven't found an answer to these questions that "clicks". Any help understanding this topic would be kindly appreciated. REPLY [17 votes]: No — many infinite sums converge, and many others do not. An infinite sum (or "series") $a_0 + a_1 + a_2 + \dots$ is defined to converge to a value $S$ if the limit $$ S = \lim_{n\rightarrow \infty} \sum_{i=0}^n a_i$$ exists. For example, if the value of $a_i$ falls off exponentially quickly or as a power law faster than $1/i$, then the series converges. The various convergence tests you learned in calculus can give you more precise criteria for convergence. The infinite series $1 + 1 + 1 + \dots$ is not convergent - the above limit does not exist. However, it is regularizable - that is, you can play some tricks on it that "beat it into shape" well enough that you can assign some finite number to it. But this finite number is not actually the sum, which does not exist. Being regularizable is a much weaker criterion than being convergent. So whenever you come across a divergent series in QFT and replace it with its regularized value, it's very important that you take into account that the two quantities aren't actually equal. Despite its being very important, there are approximately zero physicists who actually do it. Edit: the OP asked a very good question in the comments that I thought was worth addressing in my main answer: whether imposing different regulators on the same divergent series always yields the same result. If anyone has any thoughts, I've posed that question at Can different choices of regulator assign different values to the same divergent series?. Also, I once asked a related question at https://physics.stackexchange.com/questions/254051/how-can-dimensional-regularization-analytically-continue-from-a-discrete-set for which I never got a satisfactory answer. REPLY [2 votes]: Read the first part of Terry Tao's post more carefully. He replaces the regular partial sums with a smoothed sum $$\sum_{n=1}^N n^s \to \sum_{n=1}^\infty \eta(n/N) n^s$$ where $\eta$ is a cutoff function. The result he finds for the smoothed sum of $1+1+1+\dots$ (case $s=0$) is $$\sum_{n=1}^\infty \eta(n/N) = - \frac 1 2 + C_{\eta,0} N + O(1/N)$$ where $$C_{\eta,0} =\int_0^\infty \eta(x) dx$$ On the right side there is an asymptotic expansion of the smoothed sum. As you can see, there is indeed a constant term $-1/2$, but there is also a term $C_{\eta,0}N$, which diverges in the limit $N \to \infty$. So it is misleading to state that $$1+1+1+\dots = -1/2$$ because $-1/2$ is just the constant term of an asymptotic expansion which is divergent in the limit $N\to \infty$.
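This is easy to see numerically. Take for illustration the particular cutoff $\eta(x)=e^{-x^2}$ (an assumption; any smooth $\eta$ with $\eta(0)=1$ behaves similarly), so that $C_{\eta,0}=\int_0^\infty e^{-x^2}dx=\frac{\sqrt\pi}{2}$. Subtracting the divergent piece $C_{\eta,0}N$ exposes the constant term:

```python
from math import exp, sqrt, pi

def smoothed_constant(N):
    """sum_{n>=1} eta(n/N) - C*N for eta(x) = exp(-x^2), C = sqrt(pi)/2."""
    s = sum(exp(-(n / N) ** 2) for n in range(1, 20 * N))  # tail beyond 20N is negligible
    return s - sqrt(pi) / 2 * N

for N in (10, 100, 1000):
    print(N, smoothed_constant(N))   # tends to -0.5 as N grows
```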
What is usually done in QFT (see Luboš Motl's answer here for example) is to cancel the leading divergence by means of a local counterterm. Basically, a "trick" is used to get rid of the divergence and be able to write $1+1+1+\dots=-1/2$.<|endoftext|> TITLE: How many lines are needed on the plane to get all angles from $1$ to $359$ degrees? QUESTION [5 upvotes]: I am trying to solve the following problem: How many lines are needed on the plane to get all angles from $1$ to $359$ degrees? We can move lines in parallel. I am thinking that since moving in parallel doesn't change angles, we can put them intersecting in one point. It is obvious that $180$ lines are enough (at directions $0,...,179$ degrees). We need to find the minimal number. For every number from $1$ to $359$ there should be two lines that intersect at that angle. REPLY [3 votes]: Trying to find $15$ such lines, I found only two dozen (non-equivalent) such sets, each with a defect in $1$ angle. And it is interesting that all the sets found have only these defects: $9^{\circ} \;(171^{\circ})$; $27^{\circ} \;(153^{\circ})$; $63^{\circ} \;(117^{\circ})$; $81^{\circ} \;(99^{\circ})$. Examples:
defect $9^{\circ}$: $0,4,23,29,39,41,56,61,87,101,126,129,136,137,150$; $0,8,36,40,42,43,55,60,65,86,98,108,119,135,149$; $0,5,8,22,28,29,40,55,59,75,98,100,123,136,146$; $0,7,11,13,25,40,48,50,53,74,85,104,105,121,143$; $0,8,26,32,39,68,69,73,85,88,90,102,113,123,140$; $0,5,26,36,38,42,59,60,89,100,103,108,115,128,135$;
defect $27^{\circ}$: $0,1,3,9,22,38,67,93,103,107,122,127,132,139,150$; $0,2,6,7,9,22,39,47,57,68,91,96,110,122,146$; $0,5,23,31,36,53,56,60,62,72,100,114,115,135,146$; $0,1,5,24,49,62,64,71,74,92,95,103,109,129,145$; $0,1,8,10,13,31,34,48,59,63,83,102,108,124,144$; $0,15,16,20,54,60,71,73,82,85,95,96,103,106,132$;
defect $63^{\circ}$: $0,3,25,32,33,46,51,74,84,86,90,101,110,121,146$; $0,4,7,15,20,25,37,39,68,94,95,104,118,140,146$; $0,2,8,31,41,59,66,89,92,102,106,111,126,127,138$; $0,11,12,17,30,33,53,58,62,82,89,97,99,123,137$; $0,3,4,21,31,33,40,56,71,76,82,90,95,114,136$; $0,16,20,21,22,55,56,65,73,80,92,103,105,106,134$;
defect $81^{\circ}$: $0,1,15,18,20,24,31,52,60,85,95,107,132,142,154$; $0,5,20,36,45,53,55,58,79,102,106,108,109,120,148$; $0,1,7,10,25,33,38,50,54,73,84,111,123,125,145$; $0,5,13,33,54,84,85,98,110,111,121,123,127,130,145$; $0,3,5,24,41,50,51,54,70,85,93,107,113,118,125$; $0,6,13,16,18,23,37,38,64,67,71,78,106,114,123$.
So, I am convinced that Hagen von Eitzen's answer "$16$ lines" is the final answer.<|endoftext|> TITLE: Generalized Harmonic Number Summation $ \sum_{n=1}^{\infty} {2^{-n}}{(H_{n}^{(2)})^2}$ QUESTION [14 upvotes]: Prove that $$ \sum_{n=1}^{\infty} \dfrac{(H_{n}^{(2)})^2}{2^n} = \tfrac{1}{360}\pi^4 - \tfrac16\pi^2\ln^22 + \tfrac16\ln^42 + 2\mathrm{Li}_4(\tfrac12) + \zeta(3)\ln2 $$ Notation: $ \displaystyle H_{n}^{(2)} = \sum_{r=1}^{n} \dfrac{1}{r^2}$. We can solve the above problem using the generating function $\displaystyle \sum_{n=1}^{\infty} (H_{n}^{(2)})^2 x^n $, but it gets rather tedious, especially taking into account the indefinite polylogarithm integrals involved. Can we solve it using other methods, like the Euler Series Transform or properties of summation?
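Before attempting a proof, the claimed closed form can at least be checked numerically (a sketch using mpmath; `polylog(4, 1/2)` is $\mathrm{Li}_4(\tfrac12)$):

```python
from mpmath import mp, mpf, pi, log, zeta, polylog, nsum, inf

mp.dps = 30

def H2(n):
    """Generalized harmonic number H_n^(2)."""
    return sum(mpf(1) / k**2 for k in range(1, n + 1))

lhs = nsum(lambda n: H2(int(n))**2 / mpf(2)**int(n), [1, inf])
rhs = (pi**4 / 360 - pi**2 * log(2)**2 / 6 + log(2)**4 / 6
       + 2 * polylog(4, mpf(1) / 2) + zeta(3) * log(2))
print(lhs, rhs)   # both approximately 1.38689...
```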
REPLY [3 votes]: Here is a second approach, suggested by Cornel Ioan Valean, using summation by parts. Let's start with the following sum, where ${N \in \mathbb{N}_{\geq 1}}$: \begin{align} \sum_{n=1}^N\frac{\left(H_{n-1}^{(2)}\right)^2}{2^n}=\sum_{n=1}^N\frac{\left(H_n^{(2)}\right)^2}{2^n}-2\sum_{n=1}^N\frac{H_n^{(2)}}{n^22^n}+\sum_{n=1}^N\frac1{n^42^n}\tag{1} \end{align} On the other hand: \begin{align} \sum_{n=1}^N\frac{\left(H_{n-1}^{(2)}\right)^2}{2^n}=\sum_{n=1}^{N-1}\frac{\left(H_{n}^{(2)}\right)^2}{2^{n+1}}=\sum_{n=1}^{N}\frac{\left(H_{n}^{(2)}\right)^2}{2^{n+1}}-\frac{\left(H_{N}^{(2)}\right)^2}{2^{N+1}}\tag{2} \end{align} From $(1)$ and $(2)$ we reach $$\sum_{n=1}^N\frac{\left(H_{n}^{(2)}\right)^2}{2^n}=4\sum_{n=1}^N\frac{H_n^{(2)}}{n^22^n}-2\sum_{n=1}^N\frac{1}{n^42^n}-2\frac{\left(H_{N}^{(2)}\right)^2}{2^{N+1}}$$ Letting $N$ approach $\infty$ we get $$\sum_{n=1}^\infty\frac{\left(H_{n}^{(2)}\right)^2}{2^n}=4\sum_{n=1}^\infty\frac{H_n^{(2)}}{n^22^n}-2\sum_{n=1}^\infty\frac{1}{n^42^n}-0$$ I was able here to prove $$\begin{align*} \sum_{n=1}^{\infty}\frac{H_n^{(2)}}{{n^22^n}}=\operatorname{Li_4}\left(\frac12\right)+\frac1{16}\zeta(4)+\frac14\ln2\zeta(3)-\frac14\ln^22\zeta(2)+\frac1{24}\ln^42 \end{align*}$$ which, together with $\sum_{n=1}^\infty\frac{1}{n^42^n}=\operatorname{Li_4}\left(\frac12\right)$, gives $$\sum_{n=1}^\infty\frac{\left(H_{n}^{(2)}\right)^2}{2^n}=2\operatorname{Li_4}\left(\frac{1}{2}\right)+\frac14\zeta(4)+\ln2\zeta(3)-\ln^22\zeta(2)+\frac16\ln^42 $$<|endoftext|> TITLE: What is the intuition behind $\mathbb{P} (A \text{ and }B) = \mathbb{P}(A) · \mathbb{P}(B)$ if they are independent events? QUESTION [10 upvotes]: I am unable to understand this formula intuitively. REPLY [2 votes]: The Theory of Probability is an axiomatic structure. As you can see in Definition 1 in section 5 of Kolmogorov's Theory of Probability, this formula defines what we mean by independent events. That is, it is not a theorem and it is not based on anything empirical. It just is. A definition is not something one arrives at. It is something one takes as given. Saying "N events are independent" and "the probability of all of them happening is the multiplication of the probabilities of each" mean the same thing.<|endoftext|> TITLE: How to explain combinatorial identities? QUESTION [6 upvotes]: The setup of the binomial expansion formula can be traced by two paths, one of which is a "pure" proof by induction (using properties of combinatorial numbers), the other a "practical" comprehension by operation (considering subsets of a finite set). There are some more examples of things like this. $$\binom n 1 + 2\binom n 2 + \cdots + n\binom n n = n 2^{n - 1}$$ (Make a team out of $n$ people, and appoint a leader.) or $$\binom n 0 ^2 + \binom n 1 ^2 + \cdots + \binom n n ^2 = \binom {2n} n$$ (Choose $n$ people from $n$ ladies and $n$ gentlemen.) Sadly I cannot figure out what this means "in real life": $$\binom n 1 + 3\binom n 3 + \cdots = 2\binom n 2 + 4\binom n 4 + \cdots$$ Any hint will be appreciated. (BTW: Is it always possible to "explain" combinatorial identities by "reality"? I wonder sometimes it may seem too "artificial"...) REPLY [4 votes]: An “in real life” interpretation of the last identity is that the number of different chaired even-sized committees from $n$ people equals the number of chaired odd-sized committees from $n$ people. Note that this is not an identity if $n=1$, as the left side is $1$ and the right side is $0$. However, it is an identity for $n>1$.
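A brute-force count confirms this quickly (a sketch; each term $k\binom{n}{k}$ counts size-$k$ committees with a designated chair):

```python
from math import comb

for n in range(2, 8):
    odd = sum(k * comb(n, k) for k in range(1, n + 1, 2))    # chaired odd-sized committees
    even = sum(k * comb(n, k) for k in range(2, n + 1, 2))   # chaired even-sized committees
    print(n, odd, even)   # the two counts agree for every n > 1
```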
A combinatorial proof (that is, a proof not using algebra) is possible, even though it isn’t as slick as the double-counting proofs of the other identities you mention. You can prove this by describing a one-to-one correspondence, or pairing, between the committees counted on one side of the identity and the committees counted on the other side. In other words, if you can match every even-sized chaired committee with a different odd-sized chaired committee so that all committees are matched, the number of even-sized chaired committees must equal the number of odd-sized chaired committees. Here’s one way to do it. Since we have decided that $n>1$, we can choose two different people from the $n$ people and call them $A$ and $Z$. Take any odd-sized chaired committee $c$. We’ll look at three disjoint and exhaustive cases. Case 1: If $A$ is not on the committee, add $A$ as a member to get an even-sized committee. This is the match for $c$. Case 2: If $A$ is on the committee, but is not the chair, remove $A$ from the committee (and keep the existing chair) to get the even-sized match for $c$. Note that so far, the matches in the two cases are different, because in Case 1, the match for $c$ includes $A$ and in Case 2 it doesn’t. The only case left to handle is if $A$ is the chair of $c$. To find the match for $c$ in this case, add or remove $Z$ from the committee, depending on whether $Z$ is a member. The matched committee has even size. The match has $A$ as chair, which did not occur in Case 1 or 2, so it is not the same as any previous match. It remains to show that every even-sized committee was matched (or to describe the inverse matching process). I’ll leave that up to you. [Added: There is actually a simpler way to describe the matching, still using two named people $A$ and $Z$, that matches any chaired committee with one of different parity and that is pretty clearly a bijection, because if applied twice, it gets you the committee you started with. But I won’t spoil the fun of coming up with a description.]<|endoftext|> TITLE: Norm of the Resolvent QUESTION [9 upvotes]: Let $\mathbb{H}$ be a Hilbert space, $A$ a self-adjoint operator with domain $D_{A}$, $R_{A}$ the resolvent of $A$, and $z$ a point in the resolvent set $\rho(A)$. How could you prove the inequality \begin{equation} ||R_{A}(z)|| \leq 1/ d(z,\sigma(A)), \end{equation} where $\sigma(A)$ is the spectrum of $A$, and $d(z,\sigma(A))$ the distance of $z$ from $\sigma(A)$? I found this inequality in Hislop & Sigal, Introduction to Spectral Theory, Sect 5.2, where they reference Reed and Simon, Methods of Modern Mathematical Physics, vol. I, but I could not find a proof in that book. Thank you very much in advance. PS I just note here that since for any closed operator \begin{equation} ||R_{A}(z)|| \geq 1/ d(z,\sigma(A)), \end{equation} (just note that all the points $w$ such that $|z-w| < 1/||R_{A}(z)||$ belong to $\rho(A)$), the above inequality must actually hold with equality. REPLY [7 votes]: For a bounded normal operator $N$, the norm and spectral radius of $N$ are the same. That is, $\|N\|=\sup_{\lambda\in\sigma(N)}|\lambda|$. Let $\lambda \notin \sigma(A)$. Assume $A$ is unbounded. Then $(A-\lambda I)^{-1}$ is bounded and normal, with $$ \sigma((A-\lambda I)^{-1})=\frac{1}{\sigma(A)-\lambda}\cup\{0\}=\left\{ \frac{1}{\mu-\lambda} : \mu\in\sigma(A) \right\}\cup\{0\}.
$$ Therefore, $$ \|(A-\lambda I)^{-1}\|=\sup_{\xi\in\sigma((A-\lambda I)^{-1})}|\xi| = \sup_{\mu\in\sigma(A)}\frac{1}{|\mu-\lambda|} = \frac{1}{\mbox{dist}(\lambda,\sigma(A))}. $$<|endoftext|> TITLE: Are Pandemic chain reactions confluent? (vertex spills weight to neighbors at threshold, once) QUESTION [6 upvotes]: Are resolutions of chain reactions order-independent in the board game Pandemic? More formally: You're given an undirected graph $G = (V, E)$ and a vertex weight $w \colon V \to \{0, \ldots, 3\}$. Each vertex is also in a state $s \colon V \to \{\textsf{no outbreak}, \textsf{outbreak}\}$. To increase a vertex $v$: If $s(v) = \textsf{outbreak}$ then nothing happens. Else, if $w(v) < 3$ then $w(v) \gets w(v) + 1$. Else $s(v) \gets \textsf{outbreak}$ and increase all of $v$'s neighbours. It is not specified in which order $v$'s neighbours should be increased. Is such a specification superfluous? I observe that setting $s(v) \gets \textsf{outbreak}$ is essentially the same as removing $v$ from $G$. I don't know how that helps, though. Also, if I understand what confluence means, I think my question can be stated: "is the following rewriting system confluent?", where each element is a specification of $w$ and $s$ and a number of due increases for each vertex $v$, and each rewrite is an increase of some $v$ (for which an increase is due; also decrement the number of increases due for $v$). REPLY [2 votes]: This can be argued directly, by considering the directed acyclic graph (DAG) within $(V, E)$ that encodes all of the resulting updates; a directed edge $(u, v)$ corresponds to an outbreak at $u$ that either incremented $w(v)$ or set $s(v) \leftarrow \text{outbreak}$. In this sense, the in-degree of a node corresponds to the number of cubes added (plus one if an outbreak was also triggered). The claim is that any order of updates will always result in a DAG with the same in-degree at each node (i.e., the same number of cubes added, and the same outbreaks). Assume by way of contradiction that two different orderings result in two DAGs: $D_1 = (V, E_1)$ and $D_2 = (V, E_2)$, with in-degree $\deg^-_{D_1}(v) > \deg^-_{D_2}(v)$ for some $v \in V$. Thus there exists some $u \in V$ such that $e := (u, v) \in E_1$ but $e \not\in E_2$. This can only happen if the parent $u$ had an outbreak in $D_1$ but not in $D_2$ (since $\deg^-_{D_1}(v) > \deg^-_{D_2}(v)$ implies that $D_2$ would have updated $v$ from $u$ if it could). Thus $\deg^-_{D_1}(u) > \deg^-_{D_2}(u)$ as well (since $u$ had an outbreak in $D_1$ but not $D_2$). This argument repeats until we hit the root vertex of $D_1$ (since we are traversing up a directed acyclic path in $D_1$). But the in-degree of the root vertex must be $0$ in both DAGs, a contradiction. $\square$ This could also be turned into an inductive proof to be a bit more formal, but I feel this would only obscure the intuition. Also note this works for the case of multiple roots as well. In this case we create an imaginary vertex connected to each root (with 3 cubes), and then induce an outbreak at the new root.<|endoftext|> TITLE: Does the proof that the Nested Set Theorem implies the Axiom of Completeness require the Archimedean property? QUESTION [5 upvotes]: Here is the beginning of a standard proof that the Nested Set Theorem implies the Axiom of Completeness (e.g. see here and here): Suppose $E$ is a non-empty set bounded above by $b$ and let $a$ be an element of $E$.
Define the following sequence of sets: \begin{gather*} [a_1, b_1] = [a,b] \\[8pt] [a_{n+1}, b_{n+1}] = \begin{cases} \left[a_n, \dfrac{a_n + b_n}{2}\right] & \text{if $\, \dfrac{a_n + b_n}{2}\, $ is an upper bound of $E$} \\[8pt] \left[\dfrac{a_n + b_n}{2}, b_n\right]& \text{otherwise} \end{cases} \end{gather*} Then $[a_1, b_1]$ is bounded and $\{[a_n, b_n]\}_{n=1}^\infty$ is a descending countable collection of nonempty closed sets of real numbers, so the Nested Set Theorem implies that the intersection $\bigcap_{n=1}^\infty [a_n,b_n]$ is non-empty. Let $x$ be an element of this intersection. Correct me if I'm wrong, but to show that $x$ is the least upper bound of $E$ you need to know that $b_n \to x$ and $a_n \to x$. Is it possible to know this without invoking the Archimedean property of the reals? REPLY [4 votes]: You did not say what exactly you mean by the nested set theorem, but I will assume that you mean the nested interval property: If $\{F_n\}$ is a sequence of closed bounded intervals such that $F_{n+1} \subseteq {F_n}$ for every $n \in \mathbb{N}$, then $\bigcap_{n=1}^\infty F_n \ne \emptyset$. This, combined with the ordered field axioms, does not imply order-completeness. A useful reference for this type of question is James Propp's Real analysis in reverse. On page 13 (of version 4) it says "The Nested Interval Property (18) does not imply completeness" and gives a number of references for counterexamples. So you do need some extra hypothesis, such as the Archimedean property.<|endoftext|> TITLE: Solid tori, meridians, and longitudes QUESTION [6 upvotes]: I am working through some of Rolfsen's "Knots and Links" and I have needed to go back and take a more careful look at the first few sections, where he carefully discusses curves on solid tori. Let $V$ be a solid torus. A simple (i.e. injective) closed curve that is essential (i.e. not homotopic to a point) in $\partial V$ is called a meridian if it bounds a disc in $V$. 1) Why does such a curve $J$ bounding a disc in $V$ imply that $J$ is the image of $\{ 1\} \times S^1$ under a homeomorphism $S^1 \times D^2 \to V$? 2) Why are all meridians in $V$ ambiently isotopic (thus permitting us to talk about "the" meridian)? My next questions regard the equivalence of various definitions of a longitude in a solid torus. A simple closed curve $K$ is called a longitude if it is the image of $S^1 \times \{1\}$ under some homeomorphism $S^1 \times D^2 \to V$. If $K$ is a longitude of $V$ then we see that $K$ represents a generator of $H_1(V) = \pi_1(V)$ and $K$ intersects some meridian of $V$ transversely at a single point. However, I do not understand the converses to these statements: 3) If a simple closed curve in $\partial V$ represents a generator of $H_1(V)$, why is it a longitude of $V$? 4) If $K$ intersects some meridian of $V$ (transversely) in a single point, then why does it follow that $K$ is a longitude? Finally, I understand that an automorphism of $\partial V$ that extends to an automorphism of $V$ must map a meridian to a meridian, but: 5) Why is it that an automorphism of $\partial V$ that maps a meridian of $V$ to a meridian of $V$ extends to an automorphism of $V$? Is there an analogous result giving a criterion for when automorphisms of boundaries of arbitrary genus handlebodies extend to automorphisms over the whole handlebody? Thanks - sorry if this is too many questions packed into one post, but I figured that they are all interrelated.
REPLY [9 votes]: There are a couple of theorems from algebraic/geometric topology that you need to address these questions. The first theorem is that the set of isotopy classes of nontrivial simple closed curves on $\partial V$ corresponds bijectively with the set of ordered pairs $(m,n)$ of relatively prime integers, where the isotopy class of $c$ corresponds to $(m,n)$ if and only if $c$ is homologous to the cycle $m \cdot \mu + n \cdot \lambda$, if and only if (letting $p$ be a common base point of $\mu$ and $\lambda$) the curve $c$ is freely homotopic to the concatenated closed curve $\mu^m \lambda^n$. Since $\mu$ is trivial in $\pi_1(V)$ whereas $\lambda$ generates $\pi_1(V)$, it follows that $c$ is homotopically trivial in $V$ if and only if $(m,n) = (\pm 1,0)$. Thus all homotopically trivial curves in $V$ are in a single isotopy class modulo change of orientation, which answers 2). Question 1) follows because an ambient isotopy in $\partial V$ between two meridian curves in $\partial V$ easily extends to an ambient isotopy of $V$ (why? that's your question 5) which I'll address below). To address 3), a curve $c$ corresponding to $(m,n)$ represents a generator of $H_1(V)$ if and only if $n = \pm 1$. So now all one has to do is to explicitly construct a longitude representing $(m,\pm 1)$. To do this, take a longitude representing $(0,\pm 1)$, and take its image under the $m^{\text{th}}$ power of a Dehn twist on $V$ around a meridian disc. To address 4), we need an additional theorem: given $c,d$ corresponding to pairs $(m,n)$ and $(p,q)$, the algebraic intersection number of $c,d$ is equal to the determinant $mq-np$. If $c,d$ intersect transversely in a single point then $mq-np=\pm 1$. If in addition $c$ is a meridian and so $(m,n)=(\pm 1,0)$, then $q=\pm 1$, and so $d$ is a longitude. Finally, to turn to question 5), suppose $c,c'$ are isotopic simple closed curves in $\partial V$. First let me show that there exists an ambient isotopy of $V$ that takes $c$ to $c'$. Let $h : \partial V \times [0,1] \to \partial V$ be the ambient isotopy between $c,c'$, where $h_0(x)=h(x,0)=x$ and $h_1(x)=h(x,1)$ takes $c$ to $c'$. Pick a collar neighborhood $N(\partial V) \subset V$ and a homeomorphism $f : N(\partial V) \approx \partial V \times [0,1]$. By exploiting the fact that the domain of $h$ equals the range of $f$, you can combine $f$ and $h$ into an isotopy of $V$ between $c$ and $c'$ which is stationary in $V-N(\partial V)$. You can think of this as saying that any isotopy of $\partial V$ can be "absorbed" into $V$. I'll leave further details to you. So now you ask, why can any automorphism of $\partial V$ taking $c$ to $c'$ be extended to an automorphism of $V$ taking $c$ to $c'$? From what I've said above, your question reduces to the special case that $c=c'$. Now we need a third theorem: the automorphism group of $\partial V$, or more precisely the mapping class group which is the group of self-homeomorphisms modulo isotopy, is isomorphic to $GL_2(\mathbb{Z})$, where an automorphism of $\partial V$ corresponds to a matrix $\begin{pmatrix}m & n \\ p & q\end{pmatrix}$ if and only if the meridian maps to a $(m,n)$ curve and the longitude maps to a $(p,q)$ curve. Thus an automorphism fixes the meridian if and only if it corresponds to a matrix of the form $\begin{pmatrix}1 & 0 \\ p & \pm 1\end{pmatrix}$, so all you have to do is to construct specific automorphisms of $V$ whose restriction to $\partial V$ corresponds to each of those matrices.
When the lower right entry is $+1$ then these are simply the Dehn twist powers around a meridian disc. Otherwise, when it is $-1$, then compose the Dehn twist powers with a reflection.<|endoftext|> TITLE: Relation between open sentences and sets (conceptual question) QUESTION [5 upvotes]: Hi I'm a college student getting into the more proof oriented side of math. I was reviewing Mathematical Proofs, A Transition to Advanced Mathematics 2nd edition and after thinking about chapters 1 and 2 came up with a question. Can sets be thought of as the solution set of an open sentence over a specific domain? Sets can be described in a number of ways: a list of numbers following some pattern {2,4,6,...}; an expansion of sorts {2x: x is a natural number}; and what got me thinking about this whole thing, elements which satisfy some condition {x is even: x>0}. For example x>0 could be considered an open sentence with the set of even numbers as the domain. If all sets could be described as elements that satisfy some condition (or open sentence) wouldn't that mean sets are the solution sets of open sentences? Then normal set operations like intersections, unions, differences of sets, or partitions of sets could be thought of in terms of doing those operations to solution sets of open sentences. These open sentences may even be seemingly unrelated except for sharing similar solution sets. Also for an open sentence to have a solution set it needs to have a set as its domain. However this domain could be considered as a solution set to an open sentence containing its own domain (if what I'm asking is true at least), and that type of thinking could go on infinitely. So is this connection between sets and open sentences correct? Any thoughts or advice about this whole topic are welcome. Maybe it seems like a pointless question, but I enjoy finding interrelations between seemingly separate parts of math. REPLY [8 votes]: Your musings make excellent sense, but they lead into somewhat subtle territory where there are some pitfalls to avoid that you can't be expected to notice for yourself. When set theory was first invented in the late 1800s, the general idea among the inventors was that a set is simply a more convenient way to talk about an open sentence, and use algebraic techniques to manipulate them. This is the idea you're trying to formulate here. After a few decades of work, however, it became clear that this way of thinking, as useful as it is in many situations, can also sometimes lead to blatant nonsense. Most famously this is demonstrated by Russell's paradox, which considers the open sentence "$x$ is a set that does not have itself as an element". If this open sentence corresponds to a set, then that set would need to be an element of itself exactly if it isn't an element of itself. Madness! These discoveries were the cause of much arguing and grief among mathematicians in the years around 1900, but the consensus that eventually emerged -- and which is still considered fundamental -- is that some but not all open sentences determine sets. The rules for which open sentences you can use to make sets are called the axioms of set theory. Several different sets of such rules have been proposed, but the one that essentially everyone uses these days is known as ZFC set theory (for Zermelo-Fraenkel set theory with Choice).
As an undergraduate, it is likely you won't be shown the precise ZFC rules unless you actively seek them out -- instead you're more or less expected to develop a feeling for what one can do by imitating your textbooks and professors. This apparently mad approach to education seems to work better in practice than it has any right to -- but of course, now that you know what to search for, you have the opportunity to find the real thing yourself. Quickly and informally, these are the ZFC rules: It is always allowed to use an open sentence of the form "$x$ is one of the following (finitely many) things ...". This corresponds to directly listing the elements of the set, as in $\{42,108,117,666\}$. It is always allowed to use any open sentence of the form "$x$ is an element of $A$, and additionally such-and-such", where $A$ is a set you already know exists. This allows set-builder notation, $\{x\in A\mid \text{such-and-such}(x) \}$. This is the workhorse rule of ZFC, the axiom of separation, also sometimes known as the axiom of set comprehension or subsets. It says, in the language you use here, that as long as your open sentence has a domain that you already know is a set, you're good and don't need to worry. It is allowed to use the open sentences "$x$ is a subset of $A$" and "$x$ is an element of some element of $A$", again where $A$ is an already known set. This produces the power set $\mathcal P(A)=\{x\mid x\subseteq A\}$ and the union of $A$'s elements: $\cup A = \{x\mid \exists y: x\in y\in A\}$. It is allowed to use the open sentence "$x$ is a natural number", guaranteeing that $\mathbb N$ is a set. (The actual technical formulation of this axiom of infinity is a bit subtler than this, but it will do for a first glimpse). If $F$ is any function you can come up with a precise definition for, then you may use the open sentence "$x$ is the value of $F$ when the input is some element of $A$", producing $\{F(y)\mid y\in A\}$. This is the axiom of replacement, providing the F of ZFC (since it was invented by Adolf Fraenkel). Essentially it says that when $A$ is a set, you can replace each of its elements with the value of $F$ on this element, and what you get from replacing all of them is still a set. Finally there's the axiom of choice, which states that there exist sets with certain properties which may not correspond to any open sentence. A full discussion of this would be too long for this answer; the important thing to be aware of is that in mainstream mathematics we do not assume that every set we see is given by some open sentence that can be written down.<|endoftext|> TITLE: Why can we exchange numbers when working with modulo expressions? QUESTION [6 upvotes]: Please excuse me if the answer is obvious because I'm a beginner. Why can we exchange numbers when working with modulo expressions? For example: $$4^2 \equiv (-1)^2 \pmod{5}$$ You may say the replacement between $4$ and $-1$ is justified because: $$4\equiv -1 \pmod{5}$$ I understand that equality, when you divide $4$ by $5$ you get a remainder $4$ and if we subtract $5$ from that we get $-1$. But I still don't understand why we can replace $4$ with $-1$. Furthermore if $a\equiv c \pmod{b}$ are we justified in replacing $a$ with $c$ in every occasion? REPLY [3 votes]: You need the function you are dealing with to preserve multiplication. In fancier language, that means it is a homomorphism from $(\mathbb{Z},\cdot)$ to $(\mathbb{Z}/n\mathbb{Z},\cdot)$. 
In simpler language, that means that if $x,y$ are integers then $f(x \cdot y)=f(x) \cdot f(y)$, where the first $\cdot$ is integer multiplication and the second $\cdot$ represents multiplication mod $n$. (Here $[z]$ denotes the equivalence class of $z$ mod $n$; note that we often represent $[z]$ by the remainder of $z$ after division by $n$.) For example $f : \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z},f(x)=[x^2]$ is such a homomorphism, so $a^2 \equiv b^2 \mod n$ whenever $a \equiv b \mod n$. On the other hand, although $4 \equiv 9 \mod 5$, $2^4$ and $2^9$ are not equivalent mod 5.<|endoftext|> TITLE: Puiseux Expansion of Gamma Function about Infinity QUESTION [6 upvotes]: In trying to find interesting proofs that Student's T Distribution converges to the Regularized Normal Distribution when $k$ (the number of degrees of freedom) grows without bounds (i.e. $= \infty$), one of the ways I tried involved trying to find an expansion for $\frac{\Gamma\left(\frac{v+1}{2}\right)}{\Gamma\left(\frac{v}{2}\right)}$ about $v=\infty$, though I could not make any headway on finding the expansion and I feared it would be extremely complicated. However, when I asked Wolfram Alpha I got a beautiful Puiseux Series, which is expressed as: $$\frac{\Gamma\left(\frac{v+1}{2}\right)}{\Gamma\left(\frac{v}{2}\right)}\simeq \frac{v^{1/2}}{\sqrt2}-\frac{v^{-1/2}}{4\sqrt2}+\frac{v^{-3/2}}{32\sqrt2}+5\frac{v^{-5/2}}{128\sqrt2}-21\frac{v^{-7/2}}{2048\sqrt2}+\cdots$$ (more terms can be found in the link provided if desired). I can't help but notice that the denominators are simply $2^n \sqrt{2}$ for $n\in\{0,2,5,7,11,\ldots\}$. However, I can't find any pattern in $n$, nor can I explain the additional coefficients that appear starting on the fourth term, so I can't think of a way to work backward to find a proof. Regardless, all I need is the first term for the purpose of taking a limit, as all the other terms will vanish anyway. Does anyone know how to prove the expansion? Edit: I should note that I have already attacked the problem using Stirling's series, but I found that to be somewhat brute force for such a simple summation and I hoped for a more clever argument (it does however shed some light on where the coefficients come from).
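For what it's worth, a quick numeric check of the first few terms (a sketch of my own, assuming Python with the mpmath library; the helper names are just for illustration) agrees for large $v$:

```python
# Compare the Gamma ratio with the first five terms of the expansion above.
from mpmath import mp, gamma, mpf, sqrt

mp.dps = 30  # working precision

def lhs(v):
    # the ratio Gamma((v+1)/2) / Gamma(v/2)
    return gamma((v + 1) / 2) / gamma(v / 2)

def claimed_series(v):
    # coefficients of v^(1/2 - k) / sqrt(2) for k = 0, 1, 2, 3, 4
    coeffs = [mpf(1), -mpf(1) / 4, mpf(1) / 32, mpf(5) / 128, -mpf(21) / 2048]
    return sum(c * v ** (mpf(1) / 2 - k) for k, c in enumerate(coeffs)) / sqrt(2)

for v in (mpf(10), mpf(100), mpf(1000)):
    print(v, lhs(v), claimed_series(v), lhs(v) - claimed_series(v))
```

The residual shrinks like the first omitted term, which is of order $v^{-9/2}$.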
Again, I only need to know the series up to $\frac{v^{1/2}}{\sqrt2} + O(v^{-1/2})$ REPLY [5 votes]: Powers of $\boldsymbol{2}$ in the denominator This is only an observational guess, but I have computed the asymptotic series for $$ \frac{\Gamma\left(x+\frac12\right)}{\Gamma(x)\sqrt{x}} $$ out a large number of terms and the exponents of $2$ in the denominators seem to be $$ 3n+\sum_{k=1}^\infty\left\lfloor\frac{n}{2^k}\right\rfloor $$ Thus, the exponents of $2$ in the denominators of the asymptotic series for $$ \frac{\Gamma\left(\frac{v+1}2\right)}{\Gamma\left(\frac{v}2\right)\sqrt{\frac{v}2}} $$ would be $$ \bbox[5px,border:2px solid #C0A000]{2n+\sum_{k=1}^\infty\left\lfloor\frac{n}{2^k}\right\rfloor} $$ This gives denominators of $$ 2^0,2^2,2^5,2^7,2^{11},2^{13},2^{16},2^{18},2^{23},2^{25},2^{28},\dots $$ The Asymptotic Series To compute the asymptotic series, we can compute the series at infinity $$ \begin{align} &\textstyle\log\left(x-\tfrac12\right)-\log(x-1)\\ &\textstyle=\log\left(1-\tfrac1{2x}\right)-\log\left(1-\tfrac1x\right)\\ &\textstyle=\frac1{2x}+\frac3{8x^2}+\frac7{24x^3}+\frac{15}{64x^4}+\frac{31}{160x^5}+\frac{21}{128x^6}+\frac{127}{896x^7}+\frac{255}{2048x^8}+\frac{511}{4608x^9}+\frac{1023}{10240x^{10}}\\ &\textstyle+\frac{2047}{22528x^{11}}+\frac{1365}{16384x^{12}}+\frac{8191}{106496x^{13}}+\frac{16383}{229376x^{14}}+\frac{32767}{491520x^{15}}+\frac{65535}{1048576x^{16}}+\frac{131071}{2228224x^{17}}\\ &\textstyle+\frac{29127}{524288x^{18}}+\frac{524287}{9961472x^{19}}+\frac{209715}{4194304x^{20}}+\frac{299593}{6291456x^{21}}+\frac{4194303}{92274688x^{22}}+\frac{8388607}{192937984x^{23}}+O\!\left(\frac1{x^{24}}\right)\tag{1} \end{align} $$ Then we can apply the Euler-Maclaurin Sum Formula to get $$ \begin{align} &\textstyle\log\left(\frac{\large\Gamma\left(x+\frac12\right)}{\large\Gamma\left(x\right)\vphantom{\frac12}}\right)\\ &\textstyle=\frac12\log(x)-\frac1{8x}+\frac1{192x^3}-\frac1{640x^5}+\frac{17}{14336x^7}-\frac{31}{18432 x^9}+\frac{691}{180224x^{11}}-\frac{5461}{425984x^{13}}\\ &\textstyle+\frac{929569}{15728640x^{15}}-\frac{3202291}{8912896x^{17}}+\frac{221930581}{79691776x^{19}}-\frac{4722116521}{176160768x^{21}}+O\!\left(\frac1{x^{23}}\right)\tag{2} \end{align} $$ Then we can exponentiate to get $$ \begin{align} &\textstyle\frac{\large\Gamma\left(x+\frac12\right)}{\large\Gamma\left(x\right)\vphantom{\frac12}\sqrt{x}}\\ &\textstyle=1-\frac1{8x}+\frac1{128x^2}+\frac5{1024x^3}-\frac{21}{32768x^4}-\frac{399}{262144x^5}+\frac{869}{4194304x^6}+\frac{39325}{33554432x^7}\\ &\textstyle-\frac{334477}{2147483648x^8}-\frac{28717403}{17179869184x^9}+\frac{59697183}{274877906944x^{10}}+\frac{8400372435}{2199023255552x^{11}}-\frac{34429291905}{70368744177664x^{12}}\\ &\textstyle-\frac{7199255611995}{562949953421312x^{13}}+\frac{14631594576045}{9007199254740992x^{14}}+\frac{4251206967062925}{72057594037927936x^{15}}-\frac{68787420596367165}{9223372036854775808x^{16}}\\ &\textstyle-\frac{26475975382085110035}{73786976294838206464x^{17}}+\frac{53392138323683746235}{1180591620717411303424x^{18}}+\frac{26275374869163335461975}{9444732965739290427392x^{19}}\\ &\textstyle-\frac{105772979046693606062363}{302231454903657293676544x^{20}}-\frac{64759060397977041632277937}{2417851639229258349412352x^{21}}+\frac{130175508110590141461339987}{38685626227668133590597632x^{22}}\\ &\textstyle+O\!\left(\frac1{x^{23}}\right)\tag{3} \end{align} $$ Then we can substitute $x=\frac v2$ to get $$ \begin{align} 
&\textstyle\frac{\large\Gamma\left(\frac{v+1}2\right)}{\large\Gamma\left(\frac{v}2\right)\sqrt{\frac{v}2}}\\ &\textstyle=1-\frac1{4v}+\frac1{32v^2}+\frac5{128v^3}-\frac{21}{2048v^4}-\frac{399}{8192v^5}+\frac{869}{65536v^6}+\frac{39325}{262144v^7}\\ &\textstyle-\frac{334477}{8388608v^8}-\frac{28717403}{33554432v^9}+\frac{59697183}{268435456v^{10}}+\frac{8400372435}{1073741824v^{11}}-\frac{34429291905}{17179869184v^{12}}\\ &\textstyle-\frac{7199255611995}{68719476736v^{13}}+\frac{14631594576045}{549755813888v^{14}}+\frac{4251206967062925}{2199023255552v^{15}}-\frac{68787420596367165}{140737488355328v^{16}}\\ &\textstyle-\frac{26475975382085110035}{562949953421312v^{17}}+\frac{53392138323683746235}{4503599627370496v^{18}}+\frac{26275374869163335461975}{18014398509481984v^{19}}\\ &\textstyle-\frac{105772979046693606062363}{288230376151711744v^{20}}-\frac{64759060397977041632277937}{1152921504606846976v^{21}}+\frac{130175508110590141461339987}{9223372036854775808v^{22}}\\ &\textstyle+O\!\left(\frac1{v^{23}}\right)\tag{4} \end{align} $$<|endoftext|> TITLE: Is $x^5 + x^3 + 1$ irreducible in $\mathbb{F}_{32}$ and $\mathbb{F}_8$? QUESTION [6 upvotes]: Problem: Is $f(x) = x^5 + x^3 + 1$ irreducible in $\mathbb{F}_{32}$ and $\mathbb{F}_8$? My thought: $f(x)$ is irreducible in $\mathbb{F}_2$ and has degree $5$. So we can conclude that $\mathbb{F}_{32} \simeq \mathbb{F}_2[x]/f(x)$. Then apparently $f$ is not irreducible in $\mathbb{F}_{32}$. But I don't know how to work on the case $\mathbb{F}_8$. I know that $\mathbb{F}_8 \simeq \mathbb{F}_2[x]/g(x)$ where $g(x)$ is some irreducible polynomial of degree $3$ in $\mathbb{F}_2 [x]$. For example, it can be $g(x) = x^3 + x + 1$. But how would that help me? REPLY [6 votes]: Here’s another approach, based on your good start. You’ve observed that $f$ is $\Bbb F_2$-irreducible, and that indeed $\Bbb F_{32}$ is its splitting field. In other words, $f(x)=\prod_i(x-\alpha_i)$, where the $\alpha_i$ are five different elements of $\Bbb F_{32}$. If you could take fewer than five of the factors above and multiply them together to get a polynomial in $\Bbb F_8[x]$, then its coefficients would be in $\Bbb F_{32}\cap\Bbb F_8=\Bbb F_2$, in other words it would be a proper $\Bbb F_2$-divisor of $f$. Thus the only way you can multiply some of those factors together to get an $\Bbb F_8$-polynomial is to use all of them. So $f$ is $\Bbb F_8$-irreducible.<|endoftext|> TITLE: Showing the set of real values for which the pre-image has measure greater than zero is measure zero QUESTION [6 upvotes]: The question is stated as follows: Show that if $f: \mathbb{R} \rightarrow \mathbb{R}$ is measurable, then the set $E = \{x \in \mathbb{R} \ | \ m(f^{-1}(x)) > 0 \}$ has measure zero. This problem seems simple enough at first glance, and I feel like I had a solution. We begin by showing that $E$ is measurable using a definition from Royden. That is, since $E \subset \mathbb{R}$ and we have \begin{align} m^*(\mathbb{R}) = m^*(\mathbb{R} \cap E) + m^*(\mathbb{R} \cap E^C) \end{align} it follows that $E$ is a measurable set. Traditionally, if the inverse of $f$ is defined, then $f$ is bijective and for each $y \in \mathbb{R}$, there should be one $x \in \mathbb{R}$ such that $f(x) = y$. But then for each $y \in \mathbb{R}$, $m(f^{-1}(y)) = m(\{x\}) = 0$. From this, it should follow not only that $E$ is empty, but that $m(E) = 0$. Here is where I'm not sure.
If we just consider $f^{-1}$ to return the pre-image of a point $y \in \mathbb{R}$, and say that $f$ is a constant function defined by $f(x) = c$, then $m(f^{-1}(c)) = m(\mathbb{R}) = \infty > 0$. In this case, however, $E = \{c\}$, which has measure zero. With this new interpretation of $f^{-1}$, I'm not exactly sure how to proceed. If the measure of $E$ was not zero, there would necessarily be an interval $(y_1,y_2) \subset E$ for which, for each point $y \in (y_1,y_2)$, we have $f^{-1}(y) = (x_1,x_2)$. I'm not sure what implications this might have, if any. Should I work at obtaining a contradiction here, or should I attempt to prove that $m(E) = 0$ directly? Edit: As was pointed out in the comments, my argument for the measurability of $E$ is insufficient. Further, my conclusion that an interval lived in $E$ is equally false (thanks Cantor) REPLY [2 votes]: The function $f$ doesn't even have to be measurable for this to be true. (Of course, if $f$ isn't measurable, then there may exist $x \in \mathbb{R}$ such that $f^{-1}(x)$ is not a measurable set, but that's OK — any such $x$ won't be in $E$ since it's not true that $m\left(f^{-1}(x)\right)\gt 0$.) If $f$ isn't required to be measurable, we're interpreting $E$ as $\lbrace x\;\vert\;f^{-1}(x)\text{ is measurable and }m(f^{-1}(x))>0\rbrace$ or, equivalently, $\lbrace x\;\vert\;f^{-1}(x)\text{ has positive measure}\rbrace.$ As @DavidC.Ullrich pointed out, $E$ will actually turn out to be countable. For positive integers $s$ and $t,$ let $E_{s,t} = \lbrace x\;\vert\; m\left([t,t+1) \cap f^{-1}(x) \right) \gt \frac{1}{s} \rbrace . $ Claim: Each $E_{s,t}$ is finite. In fact, the cardinality of $E_{s,t}$ is less than $s.$ Proof of claim: Suppose $x_1, \dots, x_n$ are $n$ distinct elements of $E_{s,t};$ we'll show that $n \lt s.$ The sets $f^{-1}(x_1), \dots, f^{-1}(x_n)$ are pairwise disjoint, so $$1 = m\left([t,t+1)\right) \ge \sum_{k=1}^{n}m\left([t,t+1)\cap f^{-1}(x_k)\right) \gt \frac{n}{s},$$ where the last inequality is because each of the $n$ summands is greater than $\frac{1}{s}.$ It follows that $n \lt s.$ Now define $E_t = \lbrace x\;\vert\; m\left([t,t+1) \cap f^{-1}(x) \right) \gt 0 \rbrace .$ A real number $a$ is greater than $0$ iff there is a positive integer $s$ such that $a$ is greater than $\frac{1}{s};$ so $E_t = \cup \lbrace E_{s,t}\;\vert\; s\text{ is a positive integer}\rbrace.$ This is a countable union of finite sets, so each $E_t$ is countable. Next we observe that if a subset $A$ of $\mathbb{R}$ has positive measure, then, for some integer $t,$ the set $A \cap [t,t+1)$ has positive measure. After all, each of the sets $A \cap [t,t+1)$ is the intersection of two measurable sets, so is measurable; but if they all had measure $0,$ then $A$ would be the countable union of measure $0$ sets, so $A$ would have measure $0$ itself. It follows that $E \subseteq \cup \lbrace E_t\;\vert\;t\text{ is an integer}\rbrace.$ The union here is a countable union of countable sets, so it must be countable. Its subset $E$ must therefore be countable also. Any countable set has measure $0,$ so we're done. Finally, a note on the axiom of choice: this argument doesn't use the axiom of choice although it may at first appear to. (The general theorem that a countable union of finite sets is countable does require the axiom of choice, as does the theorem that a countable union of countable sets is countable. But in the particular cases used here, the axiom of choice is not needed.)
The reason that choice isn't required here is that the reals are linearly ordered (by the usual ordering), and this linear ordering provides a specific, definable well-ordering of each of our finite sets $E_{s,t}.$ This in turn gives us a way to define a specific counting of each of the sets $E_t,$ and that's all that's needed to conclude that their union is countable.<|endoftext|> TITLE: Is the universal enveloping algebra functor exact? QUESTION [6 upvotes]: The universal enveloping algebra is a functor from Lie algebras to unital associative algebras, and is left adjoint to the functor which sends a unital associative algebra to a Lie algebra with bracket given by the commutator. Being a left adjoint, the universal enveloping algebra construction is obviously right exact, but is it left exact? It would be nice if it was, but I have a feeling it isn't. Unfortunately I don't know enough about Lie algebras to think of a counterexample. REPLY [4 votes]: No; it already fails to preserve finite products. It sends a product of Lie algebras to a tensor product of algebras.<|endoftext|> TITLE: Irreducibility of a quadric QUESTION [5 upvotes]: I am struggling with a problem in Shafarevich's Basic Algebraic Geometry. First, some context: Fix $k$ an algebraically closed field. Lines in $\mathbb{P}^3$ correspond to planes through the origin in $4$-dimensional space. Thus lines in $\mathbb{P}^3$ have an interpretation as points of the $(2,4)$-Grassmannian, which has an embedding in $\mathbb{P}^5$ given by Plücker coordinates. Call this embedding $\Pi$. In section 1.6 of Shafarevich's book, it is detailed how points corresponding to lines in a surface in $\mathbb{P}^3$ are given by a projective subvariety of $\Pi$. The problem I am struggling with asks you to show that points in the Plücker surface corresponding to lines in an irreducible quadric $Q$ in $\mathbb{P}^3$ are given by two disjoint conics. One way to approach this seems to be to find a "nice" form for $Q$ where the solution becomes obvious, for example by using the fact that one can pick coordinates to "diagonalize" $Q$ and then solve the problem there. However, I am having the following issues: (i) I am not sure what the rank of the quadratic form corresponding to an irreducible quadric should be. According to Georges Elencwajg's answer Quadrics are birational to projective space, it seems that the rank should be 4, but $x_0^2+x_1^2+x_2^2$, which would correspond to a quadratic form of rank 3, seems pretty irreducible to me. I understand that the rank cannot be less than 3 though (2 is a problem because then we have a change of coordinates to $x_0^2+x_1^2$ which is obviously reducible, and 1, well...). (ii) Diagonalizing seems to require a "Gram-Schmidt" process using the bilinear form associated with the quadratic form, which is only available for fields with characteristic different from 2, and since Shafarevich does not specify this in his statement of the exercise, this approach to the problem does not work in all cases. Thus another approach is necessary, and I don't have one. Any help would be appreciated. REPLY [2 votes]: Let me try to answer your first question at least (and leave the second to someone who knows more about that). If $Q$ were a quadric cut out by a quadratic form of rank 3, then yes it would be irreducible, but it would not be smooth. It would be a projective cone over a plane conic and of course the vertex of this cone is the unique singularity. E.g.
$Q := V(x_0^2 + x_1^2 + x_2^2)$ can be viewed as the cone over the conic $Q\cap V(x_3)$ cut out by the same equation in the plane $V(x_3)$ thought of as $\mathbb{P}^2$ sitting inside $\mathbb{P}^3$ with $Q$ having vertex at $[0:0:0:1]$. In this case, any line contained in the cone $Q$ contains the vertex. (Suppose not; then let $v$ be the vertex point and $p \in Q$ a non-vertex point with $L\subset Q$ a line containing $p$ but not $v$. If $H = \overline{vL}$ is the unique plane containing $v$ and $L$ then it intersects $Q$ in two (not necessarily distinct) lines which meet at $v$, and it also contains $L$ - that's a degree (at least) 3 intersection, which contradicts Bézout's theorem). This means that the parameter space $\Phi$ (in the Grassmannian $G(2,4)$) of lines lying in $Q$ can't be disconnected because if you take the family of lines over $\Phi$ (i.e. $\Psi := \{(p,[L]) \in Q \times \Phi : p \in L\}\subset \mathbb{P}^3\times G(2,4) $), it maps surjectively to $Q$ (via $(p,[L]) \mapsto p$) so that the preimage of a non-vertex point $p \in Q$ contains points $(p,[L]) \in \Psi$ where $L$ is any line in $Q$ containing $p$. Since we saw above that $L$ is unique for $p$ a non-vertex point, this is an isomorphism away from the preimage of the vertex and it's not hard to check that this couldn't happen if $\Phi$ (and hence $\Psi$) were disconnected (check, for example, that the map would not contract a connected component to the vertex point). Indeed, $\Psi$ can be realized as the strict transform of $Q$ under the blowup of $\mathbb{P}^3$ at the vertex of $Q$ and is isomorphic to $\mathbb{P}_C(\mathcal{O}\oplus \mathcal{O}(1)) = \mathbb{F}_2$ for $C$ the conic in $\mathbb{P}^2$ that $Q$ is a cone over.<|endoftext|> TITLE: How to find a symmetric matrix that transforms one ellipsoid to another? QUESTION [9 upvotes]: Given two origin-centered ellipsoids $E_0$ and $E_1$ in $\mathbb{R}^n$, I'd like to find an SPD (symmetric positive definite) transformation matrix $M$ that transforms $E_0$ into $E_1$. Let's say $E_0$ and $E_1$ are specified by SPD matrices that take the unit sphere to the respective ellipsoid: $$ M_0 = R_0 D_0 {R_0}^{-1} \mathrm{\ takes\ unit\ sphere\ to\ }E_0\\ M_1 = R_1 D_1 {R_1}^{-1} \mathrm{\ takes\ unit\ sphere\ to\ }E_1 $$ where each $R_i$ is a rotation matrix and $D_i$ is a positive diagonal matrix whose diagonal entries are the respective ellipsoid's principal radii. Then of course the matrix $M_1 {M_0}^{-1}$ will take $E_0$ to $E_1$, but it is in general not a symmetric matrix (even though each $M_i$ is symmetric), so that's not a solution. I strongly suspect there is a unique SPD matrix that takes $E_0$ to $E_1$. How can it be computed? More generally, if $R$ is any rotation matrix, then $M_1 R {M_0}^{-1}$ will take $E_0$ to $E_1$. In fact I suspect the matrices that take $E_0$ to $E_1$ are precisely the matrices of this form. So perhaps the question boils down to "Given SPD $M_0$, $M_1$, find a rotation $R$ such that $M_1 R {M_0}^{-1}$ is symmetric". This comes from a statistics question: given a point cloud in $\mathbb{R}^n$ and a target covariance matrix, I want to find an SPD transformation such that the transformed point cloud has the target covariance. Intuitively, it seems like what's needed is an SPD transformation that transforms the point cloud's error ellipsoid (found by taking the square root of its covariance matrix) to the error ellipsoid of the target covariance matrix; so that's why I'm asking this question.
However, even if I found such a transform, I'm not certain the transformed point cloud will have exactly the desired target covariance; if it turns out that it doesn't, I'll ask a different question for the underlying statistics problem. REPLY [3 votes]: Denote by $B$ the unit sphere. As $QB=B$ for every real orthogonal matrix $Q$, if we can solve the equation $$PM_0Q=M_1\tag{1}$$ for some positive definite $P$ and some real orthogonal $Q$, then $P$ will take $E_0$ to $E_1$, because $PE_0=PM_0B=PM_0QB=M_1B=E_1$. Now, $(1)$ is equivalent to $(M_0^\top PM_0)Q=M_0^\top M_1$. Hence we may simply perform a polar decomposition $\tilde{P}Q$ on $M_0^\top M_1$ and set $P=(M_0^\top)^{-1}\tilde{P}M_0^{-1}$. Note that we do not require that $M_0$ and $M_1$ are positive definite in the above. All we need is that $M_0$ is invertible.<|endoftext|> TITLE: What's wrong with my understanding of the Freyd-Mitchell Embedding Theorem? QUESTION [9 upvotes]: It's truly bizarre that there exists no full modern exposition of this theorem, as noted elsewhere. Anyway, I thought I'd poke through and see if I could get the gist of how it works as somebody who has familiarity with categorical techniques, if not abelian techniques. Here's how it goes, following Swan and Wikipedia. We have a small abelian category $\mathcal{A}$, and we want a full exact embedding into the category of modules for some ring. The first step is to take the Yoneda embedding $\mathcal{A} \to \mathcal{L}^\mathrm{op}$, where $\mathcal{L} = \mathrm{Lex}(\mathcal{A},\mathsf{Ab})$. There are other ways to denote $\mathcal{L}$ -- it's $\mathrm{Ind}(\mathcal{A}^\mathrm{op})$, or $\mathrm{Pro}(\mathcal{A})^\mathrm{op}$. So it's a general fact that this embedding is exact, and I totally believe that $\mathcal{L}^\mathrm{op}$ is abelian. But unless I'm reading something wrong, the point is that $\mathcal{L}^\mathrm{op}$ actually is a category of modules over a ring -- one constructs a projective generator in it. This can't be right. Because $\mathcal{L}^\mathrm{op} = \mathrm{Ind}(\mathcal{A}^\mathrm{op})^\mathrm{op}$ is the opposite of a locally presentable category! So $\mathcal{L}^\mathrm{op}$ can't be locally presentable (the opposite of a locally presentable category is never locally presentable unless the category is a preorder -- cf the nlab Counterexample 7, or Thm 1.64 in Adámek and Rosický), and hence it can't be the category of modules over a ring. What am I misunderstanding? The obvious thing to do is to dualize and embed $\mathcal{A}$ into $\mathrm{Lex}(\mathcal{A}^\mathrm{op},\mathsf{Ab}) = \mathrm{Ind}(\mathcal{A})$, which is locally presentable. But if you do this it seems it would take some kind of miracle for the generator to be projective. REPLY [5 votes]: I just wanted to outline a proof of the Freyd-Mitchell embedding theorem that even I can understand. Proposition 1. If $\mathcal{A}$ is an abelian category, then $\mathrm{Ind}(\mathcal{A})$ is abelian, and the inclusion $\mathcal{A} \to \mathrm{Ind}(\mathcal{A})$ is fully faithful, exact, takes values in compact objects, and preserves generators and projective objects. Proof. The full faithfulness, exactness, and compactness are standard (true for any category $\mathcal{A}$ whatsoever). For abelianness, see here. The preservation of generators is not hard to see using the "quotient of a coproduct" characterization of generators in a category with coproducts.
The preservation of projective objects follows from the fact that every epimorphism in $\mathrm{Ind}(\mathcal{A})$ is the cokernel of a levelwise map of filtered colimits of objects of $\mathcal{A}$. Proposition 2. Any locally finitely presentable category $\mathcal{C}$ contains an injective cogenerator $I$. Proof. By the small object argument, it's easy to construct an object $I$ which is injective with respect to monomorphisms between small objects. Then an induction on the rank of objects, similar to the one here, shows that $I$ is actually injective with respect to all monomorphisms. In the abelian case, this is also an old theorem of Grothendieck. Proposition 3. A cocomplete abelian category $\mathcal{A}$ with a compact projective generator $P$ is equivalent to the category of $\mathrm{End}(P)$-modules. Proof. The functor $\mathcal{A}(P,-): \mathcal{A} \to \mathrm{End}(P)\mathrm{-Mod}$ preserves all colimits and is fully faithful on the object $P$. One shows that the subcategory of $\mathcal{A}$ on which the functor is fully faithful is closed under colimits, so is all of $\mathcal{A}$. So this functor embeds $\mathcal{A}$ as a full subcategory of $\mathrm{End}(P)\mathrm{-Mod}$ closed under colimits and containing $\mathrm{End}(P)$, which thus is all of $\mathrm{End}(P)\mathrm{-Mod}$. Theorem. If $\mathcal{A}$ is a small abelian category, then there is a fully faithful, exact embedding of $\mathcal{A}$ into a category of modules over a ring. Proof. First embed $\mathcal{A}$ into $\mathrm{Pro}(\mathcal{A}) = \mathrm{Ind}(\mathcal{A}^\mathrm{op})^\mathrm{op}$; by Proposition 1 this category is abelian and the embedding is fully faithful and exact. It is also the opposite of the locally (finitely) presentable category $\mathrm{Ind}(\mathcal{A}^\mathrm{op})$, so it contains a projective generator $P$ by Proposition 2. Let $\mathcal{B} \subset \mathrm{Pro}(\mathcal{A})$ be the closure of $\mathcal{A} \cup \{P\}$ under finite limits and colimits; this is a full abelian subcategory, and the inclusion $\mathcal{A} \to \mathcal{B}$ is fully faithful and exact. By Proposition 1, so is the inclusion $\mathcal{B} \to \mathrm{Ind}(\mathcal{B})$, and moreover, $P$ is a compact projective generator in $\mathrm{Ind}(\mathcal{B})$. So by Proposition 3, the embedding $\mathcal{A} \to \mathrm{Ind}(\mathcal{B})$ is the desired embedding.<|endoftext|> TITLE: Could Euclid have proven Dedekind's definition of real number multiplication? QUESTION [8 upvotes]: In Euclid's day, the modern notion of real number did not exist; Euclid did not believe that the length of a line segment was a quantity measurable by number. But he did think it made sense to talk about the ratio of two lengths. In fact, he devotes Book V of his Elements to the study of such ratios, using the so-called Eudoxian theory of proportions. Here's how it works. Let $w$ and $x$ be two magnitudes of the same kind (for instance two lengths), and let $y$ and $z$ be two magnitudes of the same kind (for instance two areas). Then according to Euclid, the ratio of $w$ to $x$ is said to be equal to the ratio of $y$ to $z$ if for all positive integers $m$ and $n$, if $nw$ is greater, equal, or less than $mx$, then $ny$ is greater, equal, or less than $mz$, respectively. Or to put it in modern language, $w/x = y/z$ if the same rational numbers $m/n$ are less than both, the same rational numbers are equal to both, and the same rational numbers are greater than both.
In other words, a ratio is defined by the classes of rational numbers which are less than, equal to, and greater than it. If you've studied real analysis, this should look familiar to you: it is how the real number system is constructed using Dedekind cuts! In fact, Dedekind took the Eudoxian theory of proportions in Euclid's Book V as the inspiration for his Dedekind cut construction. So to sum up, while Euclid wouldn't have thought of them as numbers, his notion of "ratios" basically corresponds to our notion of "positive real numbers". Now with that background, my question is about the multiplication of real numbers. Here is how Euclid defines the product of ratios: we say that the product of $w/x$ and $y/z$ is equal to $u/v$ if there exist magnitudes $r,s,$ and $t$ such that $w/x = r/s$, $y/z = s/t$, and $u/v = r/t$. (This is well-defined by Euclid's proposition V.22) But this is not the standard way that multiplication is defined in the Dedekind cut construction of the real numbers, where you form a new cut by taking the products of the rational numbers in the two cuts that you're multiplying. So my question is, how can we prove that Euclid's definition of real number multiplication is equal to the Dedekind cut definition? If I'm not mistaken, the problem basically reduces to proving the following: For all positive integers $l$, $m$, and $n$ and all magnitudes $x$, $y$, and $z$: If $l/m < x/y$ and $m/n < y/z$ then $l / n < x/ z$ If $l/m = x/y$ and $m/n = y/z$ then $l / n = x/ z$ If $l/m > x/y$ and $m/n > y/z$ then $l / n > x/ z$ And that in turn is equivalent to the following: For all positive integers $l$, $m$, and $n$ and all magnitudes $x$, $y$, and $z$: If $ly < mx$ and $mz < ny$ then $lz < nx$ If $ly = mx$ and $mz = ny$ then $lz = nx$ If $ly > mx$ and $mz > ny$ then $lz > nx$ So does anyone have any idea how to go about proving that? For reference, addition of magnitudes is associative and commutative, and magnitudes also obey the following properties: V.1. Multiplication by numbers distributes over addition of magnitudes. $m(x_1 + x_2 + ... + x_n) = m x_1 + m x_2 + ... + m x_n$ V.2. Multiplication by magnitudes distributes over addition of numbers. $(m + n)x = mx + nx$ V.3. An associativity of multiplication. $m(nx) = (mn)x$ V.5. Multiplication by numbers distributes over subtraction of magnitudes. $m(x – y) = mx – my$ V.6. Multiplication by magnitudes distributes over subtraction of numbers. $(m – n)x = mx – nx$ EDIT: Out of the three statements I wanted to prove, I just realized that Euclid proved statement 2 in his proposition V.22. So now I just need to prove statements 1 and 3. REPLY [3 votes]: Here's a proof of the first statement; the third statement can be proven similarly. First note that $x>y$ implies $nx>ny$ (since $nx-ny=n(x-y)$ by V.5), and so (assuming any two magnitudes are comparable) it follows that if $nx<ny$ then $x<y$. Now suppose $ly<mx$ and $mz<ny$. Multiplying the first inequality by $n$ and the second by $l$ (and using V.3) gives $(nl)y<(nm)x$ and $(lm)z<(ln)y$, so $(lm)z<(nm)x$, that is, $m(lz)<m(nx)$, and hence $lz<nx$ by the cancellation property noted above.<|endoftext|> TITLE: Adding digits to make a number prime or composite QUESTION [10 upvotes]: Players A and B alternate writing one digit to make a six-figure number. That means A writes digit $a$, B writes digit $b$, ... to make a number $\overline{abcdef}$. $a,b,c,d,e,f$ are distinct, $a\neq 0$. A is the winner if this number is composite, otherwise B is. Is there any way to help A or B always win? REPLY [2 votes]: The existing answers can be simplified: their case splits are unnecessary. For B to win it is necessary but not sufficient that the final digit B picks is in $\{1, 3, 7, 9\}$. If A's first pick is $3$ and A ensures that the first three digits include $9$ (i.e.
A picks $9$ for the third digit if B didn't pick it for the second digit), then when it comes to A's third pick (choosing the fifth digit from six unpicked digits) we have the following guarantees: Both $3$ and $9$ have been picked. There is at least one unpicked digit equal to $1 \bmod 3$, and at least one equal to $2 \bmod 3$. Either there is at least one unpicked digit equal to $0 \bmod 3$, or the sum of the first four digits is $0 \bmod 3$. Therefore A can always choose the fifth digit such that the sum of the first five digits is $2 \bmod 3$. Then since B needs the final digit to be $1$ or $7$, the number will end up being divisible by $3$ and hence B cannot win.<|endoftext|> TITLE: Arnold's proof of Abel's theorem QUESTION [13 upvotes]: I'm seeking help understanding this video. The author considers the equation $ax^5+bx^4+cx^3+dx^2+ex+f = 0$ and shows both the coefficients $a, b$... and solutions $x_1, x_2$... in the complex plane. The author claims that if the coefficients can be varied along loops so that they return to their original values while the solutions do not return to their original values but instead exchange places, then an expression for the solution cannot be found. What is the significance of the solutions not returning to themselves? REPLY [18 votes]: I think the claim is a bit more complicated than that. For simplicity, let's look at a quadratic polynomial. If the roots are $r$ and $s$, and we impose the condition that the leading coefficient be $1$, then the polynomial is $$ (x-r)(x-s)=x^2-(r+s)x+rs=x^2+bx+c, $$ with $b=-(r+s)$ and $c=rs$. So we can find $b$ and $c$ if we know $r$ and $s$. The problem at hand, however, is the reverse: to find $r$ and $s$ given $b$ and $c$. To solve this in general means to find functions $f$ and $g$ such that $$ r=f(b,c),\qquad s=g(b,c). $$ The issue is that this is a somewhat paradoxical demand. The expressions for $b$ and $c$ in terms of $r$ and $s$ are symmetric under interchange of $r$ and $s$ (as they must be, since permuting the roots doesn't change the product $(x-r)(x-s)$). Because of the symmetry between $r$ and $s$, how can the function $f$ know which of $r$ and $s$ it is supposed to be finding? This issue is put into sharp relief by the idea of setting $r$ and $s$ in motion. If $r$ and $s$ move around and then return to their starting values, but with $r$ and $s$ interchanged, $b$ and $c$ will return to their original values. Therefore $f(b,c)$ and $g(b,c)$ would seem to return to their original values, which were $r$ and $s$. But this appears to be wrong: since $r$ moved around and ended up at $s$, it seems that $f(b,c)$ should equal $s$ at the end of this motion, not $r$. The resolution of the paradox is that, in complex analysis, functions can have multiple branches. So the value of the function $re^{i\theta}\mapsto\sqrt{r}e^{i\theta/2}$ does not return to its original value if $r$ is held fixed (say to $1$) and $\theta$ varies from $0$ to $2\pi$, but, instead, picks up a minus sign. A second circuit around the origin does return the value to the starting value. This is exactly what is needed to allow $r$ and $s$ to move in such a way that $b$ and $c$ end up with their original values, but $f(b,c)$ and $g(b,c)$ end up swapping values. In fact we know that the quadratic formula makes use of the function $re^{i\theta}\mapsto\sqrt{r}e^{i\theta/2}$, which is what allows this magic to occur.
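One can even watch this happen numerically. Here is a short sketch (my own illustration, assuming Python with numpy; the particular loop $R(t)=e^{i\pi t}$, $S(t)=-e^{i\pi t}$ is chosen just for concreteness): the coefficients $(B,C)$ traverse a closed loop while a root tracked continuously (always taking the nearest root at each small step) ends up at the other root.

```python
# Swap two roots along a loop; the coefficients of x^2 + Bx + C close up,
# but a continuously tracked root lands on the other root.
import numpy as np

ts = np.linspace(0.0, 1.0, 2001)
R = np.exp(1j * np.pi * ts)   # R(t): runs from  1 to -1
S = -R                        # S(t): runs from -1 to  1, so the roots swap
B = -(R + S)                  # B(t) = -(R+S), here identically 0
C = R * S                     # C(t) = -exp(2*pi*i*t), a closed loop

tracked = R[0]                # follow one root continuously in t
for b, c in zip(B[1:], C[1:]):
    roots = np.roots([1.0, b, c])
    tracked = roots[np.argmin(np.abs(roots - tracked))]  # nearest branch

print("coefficients returned:", np.isclose(B[0], B[-1]), np.isclose(C[0], C[-1]))
print("tracked root went from", R[0], "to", tracked)     # from 1 to -1
```

The tracked value is exactly what a continuous branch of the quadratic formula computes, and it comes back swapped.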
In summary, there is, in the final analysis, no problem with roots not returning to their original values at the same time that the coefficients in the polynomial do return to their original values. So how does the proof in the video work? What it does is to try to construct very special paths so that, not only do $b$ and $c$ return to their original values, but any allowable formulas one might propose for $f(b,c)$ and $g(b,c)$ also return to their original values, while at the same time $r$ and $s$ swap values. ("Allowable formulas" means formulas that involve only taking roots and using the four arithmetic operations.) If such paths could be found, then we would have a proof by contradiction that formulas for $f(b,c)$ and $g(b,c)$ cannot exist. We, of course, know this strategy must fail in the quadratic case, since the quadratic formula does, in fact, exist. But it is exactly this strategy that works in the quintic case. While in the quadratic case (and also in the cubic and quartic cases) any path the forces the values of $f(b,c)$ and $g(b,c)$ back to their starting values also ends up forcing $r$ and $s$ back to their original values (rather than swapping them), in the case of five roots, there are paths such that allowable formulas expressing the roots in terms of coefficients, $g(b,c,d,e,f)$, $h(b,c,d,e,f)$, ..., are all forced to return to their original values, but the roots, $r$, $s$, ..., are permuted. This produces the proof by contradiction. Added: I hope I was clear about two points in my original answer: (1) the paradox described in the paragraph beginning "The issue is put into sharp relief..." is only apparent; (2) that paragraph does not contain the idea of the proof of Abel's theorem, although the correct idea is somewhat related to the apparent paradox described there. So what is the apparent paradox? To make precise how the roots move, we imagine continuous functions $R(t)$, $S(t)$ of $t\in[0,1]$ such that $R(0)=r$, $S(0)=s$, $R(1)=s$, $S(1)=r$. Let the coefficients of the quadratic be $B(t)=-(R(t)+S(t))$, $C(t)=R(t)S(t)$. These are continuous functions of $t$ as well. Suppose that our hypothetical formulas for the roots are such that $f(B(0),C(0))=r$ and $g(B(0),C(0))=s$. The apparent paradox is that two different lines of reasoning give different values for $f(B(1),C(1))$. One argument says that $B(1)=B(0)$ and $C(1)=C(0)$, and therefore $$ \begin{aligned} f(B(1),C(1))=f(B(0),C(0))&=r,\\ g(B(1),C(1))=g(B(0),C(0))&=s. \end{aligned} $$ (As discussed above, this argument is actually incorrect.) The other argument is that $R(t)$ and $S(t)$ change continuously with $t$, and therefore so do $B(t)$, $C(t)$, $f(B(t),C(t))$, and $g(B(t),C(t))$. From this, we conclude that, since $f(B(0),C(0))=R(0)$, $f(B(t),C(t))$ remains equal to $R(t)$ throughout the entire motion, and therefore $f(B(1),C(1))=R(1)=s$. (The roots typically remain a finite distance apart throughout their motion, so the only way to end up with $f(B(1),C(1))=r$ is for a discontinuous jump to occur at some point, which cannot happen.) If both arguments had been correct, we would have a proof by contradiction that the function $f$ (and $g$) cannot exist. But since the first argument is incorrect, this does not prove the nonexistence of $f$. In fact, $f$ does exist in the quadratic case. 
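Explicitly, for the monic quadratic $x^2+bx+c$ a worked instance of the functions $f$ and $g$ is $$ f(b,c)=\frac{-b+\sqrt{b^2-4c}}{2},\qquad g(b,c)=\frac{-b-\sqrt{b^2-4c}}{2}, $$ and along such a loop the exchange of values is carried entirely by the two branches of $\sqrt{b^2-4c}$.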
To prove the nonexistence of the analogous functions in the quintic case requires a more elaborate argument, as discussed above.<|endoftext|> TITLE: On the Liouville-Arnold theorem QUESTION [11 upvotes]: A system is completely integrable (in the Liouville sense) if there exist $n$ Poisson commuting first integrals. The Liouville-Arnold theorem, however, requires additional topological conditions to find a transformation which leads to action-angle coordinates and, in this set of variables, the Hamilton-Jacobi equation associated to the system is completely separable so that it is solvable by quadratures. What I would like to understand is whether the additional requirement of the Liouville-Arnold theorem (the existence of a compact level set of the first integrals in which the first integrals are mutually independent) means, in practice, that a problem with an unbounded orbit is not treatable with this technique (for example the Kepler problem with parabolic trajectory). If so, is there a general approach to systems that have $n$ first integrals but do not fulfill the other requirements of the Arnold-Liouville theorem? Are they still integrable in some way? REPLY [4 votes]: Let $M= \{ (p,q) \in \mathbb{R}^{n} \times \mathbb{R}^n \}$ ($p$ denotes the position variables and $q$ the corresponding momentum variables). Assume that $f_1, \cdots, f_n$ are $n$ commuting first integrals; then you get that $M_{z_1, \cdots, z_n} := \{ (p,q) \in M \; : \; f_1(p,q)=z_1, \cdots , f_n(p,q)=z_n \} $ with $z_i \in \mathbb{R}$ is a Lagrangian submanifold. Observe that if the compactness and connectedness condition is satisfied then there exist action angle variables which means that the motion lies on an $n$-dimensional torus (which is a compact object). The compactness condition is equivalent to saying that a position variable, $p_k$, or a momentum variable, $q_j$, cannot become unbounded for fixed $z_i$. Consequently, if the compactness condition is not satisfied there is no way you can expect to find action angle variables since action angle variables imply that the motion lies on a torus which is a compact object.<|endoftext|> TITLE: How to find the matrix of a quadratic form? QUESTION [6 upvotes]: I was wondering. If I have a bilinear symmetric form, it is easy to find its matrix. But, when I have a quadratic form, what is the procedure to do that? I heard that one possibility is: If $q$ is my quadratic form, then $$f(x,y) = \frac{1}{4}q(x+y) - \frac{1}{4}q(x-y)$$ is the bilinear symmetric form associated, so the method reduces to finding the matrix of $f(x,y).$ The point is that this seems a little cumbersome... How to find, in practice, the matrix of a quadratic form? Thanks in advance. REPLY [3 votes]: It suffices to note that if $A$ is the matrix of your quadratic form, then it is also the matrix of your bilinear form $f(x,y) = \frac 14[q(x+y) - q(x - y)]$, so that $$ a_{ij} = f(e_i,e_j) = \frac 14(q(e_i+e_j) - q(e_i - e_j)) $$ where $\{e_1,e_2,\dots,e_n\}$ is the standard basis of $\Bbb R^n$. I think that's the most direct way to get the matrix of your quadratic form.<|endoftext|> TITLE: What are some standard methods for solving functional equations? QUESTION [8 upvotes]: I have searched the internet for methods on solving functional equations; unfortunately, most of them consist mainly of substituting values for the variables or guessing solutions.
I think those methods sound like either an impossible task due to the infinite number of possible substitutions or a leap of faith also due to the infinite number of guesses there can be. How could one, for example, solve: $$f(1+x)=1+f(x)^2$$ I have tried many different ideas on this, from trying guesses to applying integrals, to eliminating the $1$ by calculating $f(2+x)$ and subtracting equations, then trying to simplify the answer through sums or products, but nothing seems to work. I think this is a rather interesting subject because $f(x)=f(x-1)^2$ has a solution that is very easy to find but somehow adding a $1$ makes it a much harder problem. Are there ways to solve this equation or even a polynomial functional equation? REPLY [3 votes]: As for iterating polynomials, there are two general forms of polynomials that we currently know how to iterate. They are polynomials of the form $$P(x)=a(x-b)^d+b$$ and $$T(x)=\cos(k\arccos(x))$$ and yes, I know the second one doesn't look like a polynomial, but for integer values of $k$, it is a polynomial where the inverse cosine is defined, and can be extended to other values (accurately, for our purposes) using the hyperbolic cosine function. For more info, see Chebyshev Polynomials. The iteration formulas for each of these are given by $$P^n(x)=a^{\frac{d^n-1}{d-1}}(x-b)^{d^n}+b$$ and $$T^n(x)=\cos(k^n\arccos(x))$$ In case you were wondering how this applies to your problem of finding $f$ given that $$f(1+n)=(Q\circ f)(n)$$ where $Q$ is a polynomial, here's how: if you assign a value for $f(0)$, say $y_0$, then you can say that $$f(n)=(Q^n\circ f)(0)=Q^n(y_0)$$ which allows you to find $f:\mathbb Z\to\mathbb Z$.<|endoftext|> TITLE: Find all positive integers $k,m,n$ satisfying: $\frac1k+\frac1m+\frac1n=\frac{1}{1996}$ QUESTION [5 upvotes]: Find all positive integers $k,m,n$ satisfying: $\frac1k+\frac1m+\frac1n=\frac{1}{1996}$ The trivial answer is: $k=m=n=3\cdot 1996$ $kmn=1996(km+mn+nk)=499\times4\times(km+mn+nk)$; now $kmn$ must be divisible by all factors in RHS, but how can we get more info about $k,m,n$? REPLY [4 votes]: Solutions only for $k=m$. Assume $k=m$, then $\frac{1}{1996}=\frac{2n+m}{mn}$ or $mn=1996\cdot 2n + 1996m$ or $$mn - 1996\cdot 2n - 1996\cdot m =0$$ or $$(n-1996)(m-1996\cdot 2)=2\cdot 1996^2.$$ If $d\mid 2\cdot 1996^2$ then $n=1996+d$ and $m-1996\cdot 2 = \frac{2\cdot 1996^2}{d}$ or $m=1996\cdot 2 + \frac{2\cdot 1996^2}{d}$. Since $1996=2^2\cdot 499$, then $d\mid 2^5\cdot 499^2$, which already yields 18 possible solutions. The extreme case, when $d=1$, gives $n=1997$ and $m=k=2\cdot 1996 \cdot 1997$. When $d=2\cdot 1996^2$, $n=1996(1+2\cdot 1996)$ and $m=k=1996\cdot 2 + 1$. Algorithm for finding all solutions: If you assume $k\le m\le n$, then $\frac1k<\frac{1}{1996}\le\frac{3}{k}$, so $1996<k\le 3\cdot 1996$; for each such $k$, the equation $\frac1m+\frac1n=\frac{1}{1996}-\frac1k$ can be solved for $m\le n$ by the same factoring trick as above.<|endoftext|> TITLE: Prove: if $f(0)=0$ and $f'(0)=0$ then $f''(0)\geq 0$ QUESTION [8 upvotes]: Let $f$ be nonnegative and twice differentiable on the interval $[-1,1]$. Prove: if $f(0)=0$ and $f'(0)=0$ then $f''(0)\geq 0$ Are all the assumptions on $f$ necessary for the result to hold? What can be said if $f''(0)= 0$? Looking at the Taylor polynomial and Lagrange remainder we get: $$f(x)=f(0)+f'(0)x+\frac{f''(c)x^2}{2}$$ $$f(x)=\frac{f''(c)x^2}{2}$$ Because the function is nonnegative and $\frac{x^2}{2}\geq 0$, so $f''(c)\geq 0$. For 1., all the data is needed but I cannot find a valid reason. For 2., can we conclude that the function is the null function? REPLY [3 votes]: Another approach is via the method of contradiction. Assume that $f''(0) < 0$.
Then the function $f'$ is strictly decreasing at $0$ which means that there is a neighborhood $I$ of $0$ such that if $x \in I, x < 0$ then $f'(x) > f'(0) = 0$ and if $x \in I, x > 0$ then $f'(x) < f'(0) = 0$. We can obviously choose $I$ of the form $(-h, h)$ and hence observing the sign of $f'$ in $(-h, h)$ we see that $f$ is strictly increasing in $(-h, 0]$ and strictly decreasing in $[0, h]$ and since $f(0) = 0$ it follows that $f(x) < 0$ for all $x \in I$ except $x = 0$. And this is contrary to the hypothesis that $f$ is non-negative. Thus we obtain the desired contradiction. BTW the above argument can be replaced by the following concise argument. Since $f'(0) = 0$ and $f''(0) < 0$, the point $0$ is a local strict maximum of $f$ and hence $f(x) < f(0) = 0$ for all $x$ in some neighborhood $I$ of $0$ except $x = 0$. And this contradicts that $f$ is non-negative. The argument in the first paragraph actually shows how the vanishing of derivative and sign of second derivative guarantee local maxima/minima and if the reader is well aware of the proof of the second derivative test for maxima/minima, it is preferable to adopt the argument in the second paragraph. As can be seen from the arguments above we don't need continuity of $f''$. And we can avoid difficult theorems like Taylor or L'Hospital. The argument used in the first paragraph leads to one of the simpler proofs of Taylor's Theorem with Peano's form of remainder. Update: The condition $f'(0)=0$ is superfluous here. The function is non-negative and $f(0)=0$ and hence $0$ is a point of local minimum so that $f'(0)=0$.<|endoftext|> TITLE: A simple question about the Hamming weight of a square QUESTION [5 upvotes]: Let us define the Hamming weight $H(n)$ of $n\in\mathbb{N}^*$ as the number of $1$s in the binary representation of $n$. Two questions: Is it possible that $H(n^2)<H(n)$?<|endoftext|> TITLE: Solving what Mathematica could not QUESTION [20 upvotes]: Right, so as the final step of my project draws near and after having made a bad layout sort of question, I am posting a new one, very clear and unambiguous. I need to find this specific definite integral which Mathematica could not solve: $$ \int_{x=0}^\pi \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \sin \left( \frac{1+A}{\sqrt{A}} \omega \tanh^{-1} \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \right) \, dx.$$ where $0 < A$ and $\omega > 0$ are parameters of the problem. I tried to use a substitution of the argument of the hyperbolic arctan but that seemed to make it worse. I am posting here in the hope of receiving help: if someone could tell me whether it is at all possible to solve it analytically via a trick of sorts or a clever substitution, or maybe it is an elliptic integral in disguise. I thank all helpers. My question on the Melnikov integral can be found here; I just used trig identities to soften it up. REPLY [6 votes]: Here is a sketch for a residue-free solution: First, consider the formula $$\int_0^{\infty} \frac{\cos( a x)}{\cosh(\frac{\pi}{2} x)}dx = \operatorname{sech} a. \tag{1}$$ Since $\displaystyle \,\,\sin(a)\sin(b) = \frac12 \cos(a-b)-\frac12 \cos(a+b), \,\,$ we obtain $$ \int_0^{\infty} \frac{\sin( a x) \sin(b x)}{\cosh(\frac{\pi}{2} x)}dx = \frac12 \operatorname{sech}(a-b) - \frac12 \operatorname{sech}(a+b).
\tag{2}$$ Now, using a fourier inversion argument $\displaystyle\left( f(a)=\int_0^{\infty} g(x)\sin(a x) dx \iff \frac{\pi}{2} g(a) = \int_0^{\infty} f(x) \sin(a x) dx \right),\,$ we obtain $$\int_0^{\infty} \sin( b x) ( \operatorname{sech}(a-x) - \operatorname{sech}(a+x)) dx = \pi \cfrac{ \sin(a b)}{\cosh( \frac{\pi}{2} b)}. \tag{3}$$ Finally, letting $a \mapsto i \tan^{-1} \sqrt{a},\,$ and noting that $$\operatorname{sech}(a-b) - \operatorname{sech}(a+b) = 2 \frac{\sinh a}{\cosh^2 a} \,\frac{\sinh b}{\cosh^2 b}\, \frac1{1-\tanh^2 a \, \tanh^2 b},$$ we find that $$\frac{\pi}{\sqrt{a}} \cfrac{\sinh( b \tan^{-1} \sqrt{a} )}{\cosh( \frac{\pi}{2} b)} = 2 \sqrt{1+a} \int_0^{\infty} \cfrac{\sin(b x) \sinh x}{\cosh^2 x + a \, \sinh^2 x} dx, \tag{4}$$ which is exactly your integral, with $b= \frac{1+a}{\sqrt{a}} \omega,$ and the substitution $$\tanh^{-1} \frac{\cos \frac{x}{2}}{\sqrt{1+ a \sin^2 \frac{x}{2}}} \mapsto x. \tag{5}$$<|endoftext|> TITLE: Is there a way to measure how (non)convex a function is, maybe analogous to condition number? QUESTION [7 upvotes]: Consider the functions $f(x) = \sin x$ and $g(x) = (x+1)^2 (x-1)^2$. We know that $f$ has an infinite number of local minimizers and is nonconvex on a non-compact subset of $R$. We know that $g$ has two local minimizers and is nonconvex only on a compact subset of $R$. So, in a sense, $g$ is "closer" to being a convex function than $f$ is. Is there some test or measure that one might use to represent this fact? I realize that one can look at the hessian and see where it is positive semidefinite, but that can get difficult to interpret, particularly if the function in question is defined on a high-dimensional space or has a complicated hessian. REPLY [3 votes]: Along the lines of my previous comment: Let $n$ be a positive integer and let $\mathcal{X} \subseteq \mathbb{R}^n$ be a convex set. Motivated by the standard definition of $\mu$-strong convexity, we can define a (possibly nonconvex) function $f:\mathcal{X}\rightarrow\mathbb{R}$ to have “convexity parameter $\mu$” if $\mu$ is the largest real number such that $$ f(x) - \frac{\mu}{2} ||x||^2$$ is a convex function over $x \in \mathcal{X}$. Here, $||x||= \sqrt{\sum_{i=1}^nx_i^2}$ is the standard Euclidean norm. Intuitively, the value $\mu$ is the weight associated with the largest parabola that can be subtracted from $f$ while ensuring the resulting function is convex. It can be shown that the function $f$ is convex if and only if $\mu \geq 0$. If $\mu>0$ the function is said to be strongly convex. Larger values of $\mu$ correspond to "stronger" forms of convexity. In the special case when $\mathcal{X}=\mathbb{R}$ and the function $f$ is twice differentiable, the value $\mu$ is equal to: $$ \mu = -\inf_{x \in \mathbb{R}} f’’(x) $$ and hence represents the degree of curvature of the function. Examples: 1) $f(x) = \sin(x)$. Then: \begin{align} h(x) &= \sin(x) - \frac{\mu}{2}x^2\\ h'(x) &= \cos(x) - \mu x \\ h''(x) &= -\sin(x) - \mu \end{align} The largest value of $\mu$ for which $h''(x) \geq 0$ for all $x$ is $\mu=-1$. 2) $g(x) = (x+1)^2(x-1)^2$. Then: $$ g''(x) = 12x^2-4 \geq -4$$ The largest value of $\mu$ for which $g(x) - \frac{\mu}{2}x^2$ is convex is $\mu=-4$. Thus, by this measure, this function $g$ is "more nonconvex" than the previous function $f$. Some useful properties of this definition: 1) If $f$ has convexity parameter $\mu \in \mathbb{R}$, then for any constants $a \in \mathbb{R}^n$ and $b \in \mathbb{R}$, the function $f(x) + a^Tx+b$ also has convexity parameter $\mu$. 
2) If $f$ and $g$ are two functions that have convexity parameters $\mu$ and $\lambda$, respectively, then $f+g$ has convexity parameter greater than or equal to $\mu+\lambda$. 3) If $f$ has convexity parameter $\mu \in \mathbb{R}$, then $f(x) - \frac{r}{2}||x-c||^2$ is a convex function for every constant $c \in \mathbb{R}^n$ and $r \leq \mu$. 4) Suppose $f$ is a convex function. Fix $c \in \mathbb{R}^n$ and $\mu \in \mathbb{R}$. Then $f(x) + \frac{\mu}{2}||x-c||^2$ has convexity parameter greater than or equal to $\mu$.<|endoftext|> TITLE: Giving a basis for the column space of A QUESTION [8 upvotes]: Let $A = \begin{bmatrix}3&3&3\\3&5&1\\-2&4&-8\\-2&-4&0\\4&9&-1\end{bmatrix}$ Give a basis for the column space of A So what I've done so far is put it in RREF (which was a task itself) and got $\begin{bmatrix}1&0&2\\0&1&-1\\0&0&0\\0&0&0\\0&0&0\end{bmatrix}$, but I'm not sure what to do next to give the "basis" of the column space of A REPLY [7 votes]: Fact. Let $A$ be a matrix. The nonzero rows of $\DeclareMathOperator{rref}{rref}\rref(A^\top)$ form a basis of the column space of $A$. In our case we have $$ A= \left[\begin{array}{rrr} 3 & 3 & 3 \\ 3 & 5 & 1 \\ -2 & 4 & -8 \\ -2 & -4 & 0 \\ 4 & 9 & -1 \end{array}\right] $$ Row-reducing $A^\top$ gives $$ \rref(A^\top)= \left[\begin{array}{rrrrr} 1 & 0 & -\frac{11}{3} & \frac{1}{3} & -\frac{7}{6} \\ 0 & 1 & 3 & -1 & \frac{5}{2} \\ 0 & 0 & 0 & 0 & 0 \end{array}\right] $$ The fact above implies that \begin{align*} \langle1, 0, -{11}/{3}, {1}/{3}, -{7}/{6} \rangle && \langle0,1, 3, -1, 5/2 \rangle \end{align*} forms a basis of the column space of $A$.<|endoftext|> TITLE: The square of a infinite series. QUESTION [7 upvotes]: Say we have $\sum_{n=1}^{\infty} f(n,x)=f(x)$, which often happens with Taylor series: Can we express: $$\left(\sum_{n=1}^{\infty} f(n,x) \right)^2$$ As something that does not involve the square. I.e can multiply this out : $$\left(f(1,x)+f(2,x)+f(4,x)+.... \right)\left(f(1,x)+f(2,x)+f(3,x)+...\right)$$ I know by distributive property we have: $$=f(1,x)\sum_{n=1}^{\infty} f(n,x)+f(2,x)\sum_{n=1}^{\infty} f(n,x)+f(3,x)\sum_{n=1}^{\infty} f(n,x)+..$$ Can we simplify further? Why I ask this is that I'm really interested if we can express $\left(\frac{\ln (x)}{x-1} \right)^2$ as a series by only using the Taylor series for $\ln x$. I know I can just write the coefficients and multiply them out , but the new coefficients to the new series do not form an easy pattern to right down. REPLY [4 votes]: What you are referring to is called the Cauchy product. If you consider multiplying out term by term, you might expect to see something like $$(a_1+a_2+a_3+\cdots)(b_1+b_2+b_3+\cdots)=a_1b_1+(a_1b_2+a_2b_1)+(a_1b_3+a_2b_2+a_3b_1)+\cdots$$ where we are generalizing the FOIL method in the only way that makes sense. The question is $(1)$ whether the sum converges and $(2)$ if it converges to what we expect it to (that is, if the first converges to $A$ and the second to $B$, then the product converges to $AB$). There is no answer in general but there are a few theorems to keep in mind: $(1)$: if one (or both) of the series converges absolutely (and the other converges), then the product converges to $AB$. $(2)$ if all three series converge, then the product indeed converges to $AB$. In any particular case, like the one you mentioned, it may be difficult to show convergence of the product since the terms do not generalize well. 
One possibility is to find a bound for the product (perhaps using partial fractions) and show that the bound goes to zero face enough to guarantee convergence.<|endoftext|> TITLE: Have I found an example of norm-Euclidean failure in $\mathbb Z [\sqrt{14}]$? QUESTION [10 upvotes]: Based on the proof that $\mathcal O_{\mathbb Q (\sqrt{-19})}$ is not Euclidean because it lacks universal side divisors, I have convinced myself that $\mathbb Z [\sqrt{14}]$ is Euclidean because it has as universal side divisors numbers with a norm of $2$, e.g., $4 + \sqrt{14}$ (though I have not proved this rigorously). Clearly an example of norm-Euclidean failure must involve numbers with odd norms. I've been looking at $\gcd(3, 7 + 2 \sqrt{14})$. There is a question that's coming up as "similar" that gives $\gcd(3, 3 + \sqrt{14})$ as a possible example of norm-Euclidean failure (for all I know this question might wind up being closed as a duplicate of that one). I have done calculations with both pairs, and I have found (unless I've made errors), that in $3 = q(7 + 2 \sqrt{14}) + r$ results in larger $|N(r)|$ than $3 = q(3 + \sqrt{14}) + r$. But even if I have made no mistakes of arithmetic, this still does not prove either of these examples leads to norm-Euclidean failure. July 27, 2016: Just so that hopefully no one can say I have undefined terms: by "an example of norm-Euclidean failure" in this domain I mean a pair of numbers $a, b$ in this domain such that it is impossible to find suitable numbers $q, r$ in this domain to satisfy $a = qb + r$ with $|N(a)| > |N(b)|$ and $|N(b)| > |N(r)|$. REPLY [13 votes]: Although $\mathbb{Z}[\sqrt{14}]$ is not norm-Euclidean, you haven't proved it yet. At least, you can successfully get through one step of the Euclidean algorithm: $$3 = (7+2\sqrt{14})(-2+\sqrt{14}) + (-11 - 3\sqrt{14}),$$ and the absolute value of the norm of $-11 - 3\sqrt{14}$ is $5$, which is less than the absolute value of the norm of $7+2\sqrt{14}$, which is 7. ADDED: Mr. Brooks: $(3,3+\sqrt{14})$ certainly does not work. In fact, you can write $$3 = (3 + \sqrt{14})(-4) + (15 + 4\sqrt{14})$$ and note that $15 + 4\sqrt{14}$ is a unit so we've proven $3$ and $3+\sqrt{14}$ are relatively prime. By the way, here is the trick to use the norm to carry out a Euclidean algorithm: Goal: Given $a$ and $b$ in the ring of integers, find $q$ in the ring of integers such that $$a = bq + r$$ and $N(r) < N(b)$. Strategy: Take the quotient $a/b$, which is in the number field, and try to find some integral $q$ such that $N(a/b -q) < 1$. Then set $r = a - bq$ and observe that $$N(r) = N(a - bq) = N(a/b - q)N(b) < N(b).$$ In fact, this leads to a general computational method for trying to prove a number ring is a norm-Euclidean domain: Given a field $K$ and ring of integers $\mathcal{O}$, try to prove that: For all $x \in K$, there exists $y \in \mathcal{O}$ such that $N(x-y) < 1$. This gives a geometric approach to the problem. There are several nice papers by J.P. Cerri which discuss norm-Euclidean domains. I really recommend them if you are interested in this subject (in addition of course to the survey papers by Lenstra and by Lemmermeyer). PROOF that $\mathbb{Z}[\sqrt{14}]$ is not a norm-Euclidean domain: I'm not sure what the standard proof is, but here is one that works: Consider $x = (1+\sqrt{14})/2$. We want to show that for any $y \in \mathbb{Z}[\sqrt{14}]$ that $N(x+y) > 1$. Write $y = a + b\sqrt{14}$, with $a$ and $b$ integers. 
Then suppose $$N(x+y) = \left|\left(a+\frac12\right)^2 - 14\left(b+\frac12\right)^2\right| = \left|a^2 + a - 14b^2 - 14b -\frac{13}{4}\right| < 1.$$ Since $a$ and $b$ are integers, the only possibilities are $$a^2 + a - 14b^2 - 14b = 3 \text{ or } 4.$$ We can rule out $3$ because the left hand side is always even. To rule out 4, we observe that $a^2 + a$ can only equal 0, 2, 6 or 12 modulo 14, and in particular never equals 4 modulo 14. So we have a contradiction, proving that $\mathbb{Z}[\sqrt{14}]$ is not norm-Euclidean. There is another proof at PlanetMath.<|endoftext|> TITLE: Fun with combinatorics and 80 business customers QUESTION [6 upvotes]: In business with 80 workers, 7 of them are angry. If the business leader visits and picks 12 randomly, what is the probability of picking 12 where exactly 1 is angry? (7/80)(73/79)(72/78)(71/77)(70/76)(69/75)(68/74)(67/73)(66/72)(65/71)(64/70)*(63/69)*12=0.4134584151106464 What is the probability more than 2 are angry? My idea is to calculate the probability of 2,3,4,5,6, and 7 angry people just like did in the previous example and then add them altogether. In the previous example I can seat the one person 12 times. In all the different 12 spots, and then times by 12. The problem I have now is, how many times can I seat 2 people in 12 spots? If I use the combinatorics formula I will get a negative factorial. There must be a much easier way than this. REPLY [3 votes]: The denominators of the fractions stay constant. The total multiplication across the denominators is all the ways to pick 12 people from 80, where the order is retained: $$ 80\cdot 79\cdot 78\cdot 77\cdot 76\cdot 75\cdot 74\cdot 73\cdot 72\cdot 71\cdot 70\cdot 69 = \frac{80!}{68!} $$ We say that order is unimportant, so $\frac{80!}{68!\,12!} = {80 \choose 12}$ options Then the numerators are the combination of the choices from the angry $(k)$ and non-angry $(12-k)$ groups, which are ${7 \choose k}$ and ${73 \choose 12-k}$, so overall the probability is $$\frac{{7 \choose k}{73 \choose 12-k}}{80 \choose 12}$$ and checking this against your result for $k=1$ we have $$\frac{{7 \choose 1}{73 \choose 11}}{80 \choose 12} = \frac{68!\,12!}{80!}\frac{7!}{1!\,6!}\frac{73!}{ 62!\,11!} = \frac{12\cdot 7\cdot 68\cdot 67\cdot 66\cdot 65\cdot 64\cdot 63}{80\cdot 79\cdot 78\cdot 77\cdot 76\cdot 75\cdot 74} \approx 0.413458415 $$ The easiest of the possible calculations is where none of the chosen employees are angry $(k=0)$, which is $$\frac{{7 \choose 0}{73 \choose 12}}{80 \choose 12} = \frac{68!\,12!}{80!}\cdot 1 \cdot\frac{73!}{ 61!\,12!} = \frac{68\cdot 67\cdot 66\cdot 65\cdot 64\cdot 63\cdot 62}{80\cdot 79\cdot 78\cdot 77\cdot 76\cdot 75\cdot 74} \approx 0.305171687$$ Then all other cases can be worked similarly. You talk about getting a negative factorial when finding the combination of how to choose $2$ seats from $12$, but that is just ${12 \choose 2}= \frac {12!}{10!\,2!} = \frac{12\cdot 11}{2} = 66$ so I don't know how you arrived at a negative factorial.<|endoftext|> TITLE: Why are Lie groups automatically analytic manifolds? QUESTION [8 upvotes]: In the book by Kolar, Michor, and Slovak, it is shown that multiplication $\mu:G\times G\to G$ is analytic in some neighborhood of $e$. Specifically, they show that in the chart given by $\exp^{-1}$, with domain some neighborhood of the origin, multiplication is real analytic. They claim that it follows that $\mu$ is analytic on all of $G$. 
By this, I think they mean there always exists a real analytic structure on $G$ in which $\mu$ is everywhere analytic. Could someone please supply the details? I'm assuming the thought is to push this neighborhood of analyticity around the group by using the multiplication map and the fact that a Lie group is generated by any neighborhood of $e$. However, this is only true for connected Lie groups, so this approach would fail for e.g. $\mathrm{O}(n)$, $n>2$. REPLY [2 votes]: Asssume that your group is linear (a closed subgroup of $GL(n,R)$), and let $\cal G \subset M(n, R)$ its Lie algebra. Then in a neigbourhood of the identity, $G= \{g\in M(n,R)/ \log g \in \cal G \}$, where $r>0$ is a small number, and $\log : B(Id, r)\subset M(n,R)$ is an analytic function. This proves that in the neigbourhood of the identity, $G$ is an analytic submanifold of $Gl(n,R)$, as $\cal G$ is just a linear vector space. But left translation by a matrix $A$ is certainly an analitic (even polynomial) automorphism of $Gl(n,R)$, so $G$ is an analytic submanifold of the algebraic hence analytic. So $G$ is an analytic submanifold of $Gl(n,R)$ and a subgroup, therefore an analytic group. For the general case, you use the fact that $\cal G$ can be embedded as a Lie subalgebra of $M(n,R)$, and apply the same sort of argument.<|endoftext|> TITLE: Quotients of Elliptic Curves QUESTION [7 upvotes]: I am fairly inexperienced with elliptic curves so there might be aspects of my question that may need better wording but let me know if there are any issues: Question: Say I have an elliptic curve over $\mathbb{F}_7$ and this curve has 12 points. I take a subgroup of size 3 and I quotient the curve by that subgroup. Magma and Sage can easily tell me the equation of the curve where the quotient lives. Not surprisingly, extra points pop up when taking a quotient that where not defined over $\mathbb{F}_7$ but become defined over $\mathbb{F}_7$ when you take the quotient. So the curve they spit out may (and usually does according to my random sample) still have 12 points. What is happening on the function field side? Is the function field of the original curve an extension of the function field of the other curve? If not what is going on? REPLY [6 votes]: Such a map is called an isogeny, and isogenies over finite fields preserve the number of points (in fact, two elliptic curves over a finite field are isogenous iff they have the same number of points). On the function field side, this corresponds to an embedding of the function field of the original curve into the second one. By the existence of the dual isogeny, you also have an embedding of the second into the first. All of this is explained quite nicely in the first few chapters of Silverman's Arithmetic of Elliptic Curves book.<|endoftext|> TITLE: Disproving existence statements QUESTION [6 upvotes]: I am trying to get some practice on disproving existence statements and I was really stuck on this one: "There exists an example of three distinct positive integers different from $a,2a,3a$ for some $a \in \mathbb{N}$ having the property that each divides the sum of the other two." I have tried working with the negation for all distinct positive integers different from $a,2a,3a$ for some $a \in \mathbb{N}$ where it each does not divide the sum of the other two, and thought about it using the contrapositive, except that I am not quite sure how to start it. In fact I was rather stumped on expressing it in generalized terms. Would someone mind helping me out with this ? 
I think this is quite a good example and was quite difficult for me so I would really like to know the best way to approach such a problem. Help is greatly appreciated. REPLY [16 votes]: One should not worry too much about logic (negation, contrapositives), and instead think about the concrete problem, that is, just think about numbers. What can we discover about numbers such the sum of any two is divisible by the third? If $x,y,z$ are positive integers, with $x\lt y\lt z$, then it looks "hardest" for $z$ to divide $x+y$. Since $x+y\lt 2z$, the only way we can have divisibility is if $x+y=z$. Nice simplification! Now we want $y$ to divide $x+z$, so we want $y$ to divide $2x+y$. That can only happen if $y$ divides $2x$. But since $x\lt y$, that forces $2x=y$, and we are finished.<|endoftext|> TITLE: By what measure does the busy beaver function grow faster than any computable function? QUESTION [5 upvotes]: It has been proven that the busy beaver function grows faster than any computable function. But I wouldn't think that speed of growth is well-defined. What is the definition? Is there some index? REPLY [8 votes]: It's a good question. Let's say $BB(n)$ denotes the busy beaver function of a positive integer $n$. Of course, you can always define a computable function that is bigger than $BB(n)$ for small $n$. For example, here's a computable function: $$ f(n) = BB(10000). $$ It's computable because it just returns a specific number, and it's bigger than $BB(n)$ for $1 \le n \le 9999$. So when we say $BB(n)$ grows faster than any computable function, we must mean something for large enough $n$. One way of saying it is this: Proposition 1. For any computable function $f$, there is a positive integer $N$ such that for all $n > N$, $BB(n) > f(n)$. In other words, once you go out far enough (past the integer $N$), busy beaver beats $f(n)$. A stronger thing to say (and more in line with what "grows faster" usually means in math, actually) is the following statement, also true: Proposition 2. For any computable function $f$, $\lim_{n \to \infty} \frac{BB(n)}{f(n)} = \infty$. If you're not familiar with limits, it's saying that eventually, $BB(n)$ will be at least $100$ times as large as $f(n)$; if you go even further it will be at least $1000$ times as large, and so on; for any constant number $C$, it will eventually be that $BB(n)$ is $C$ times or bigger than $f(n)$. In practice, "eventually" here is a gross understatement: the busy beaver function will become astronomically larger (i.e., not just $10$ or $100$ times larger) than most of the usual computable functions you can think of very quickly, probably after $n = 5$ or so.<|endoftext|> TITLE: Left vs right semi direct products QUESTION [5 upvotes]: I just want to make sure that I am not doing anything silly here, but if we let $G$ be a group with $H,K$ subgroups, $H\lhd G$, and $\phi:K\rightarrow Aut(H)$, then is $$H\rtimes_\phi K \approx K \ _\phi\ltimes H$$ where the multiplication in the first is given by $$(h_1,k_1)(h_2,k_2) = (h_1\phi_{k_1}(h_2),k_1k_2). $$ Basically does it make complete sense just to switch the "slots" in the order pair? This idea has come up in a project that I have been looking at for some time and using this notion would help me simplify some calculations greatly. Thanks in advance. 
REPLY [4 votes]: The free product $A*B$ of two groups is formed by considering all "words" formed using "letters" from $A$ and $B$, subject only to the condition that multiplying two elements of $A$ gives the same result as it does in $A$ itself, and similarly for $B$, but otherwise multiplying one element from $A$ with another from $B$ does not simply. In general, then, elements of $A*B$ look like $a_1b_1a_2b_2\cdots$. If $\phi:K\to\mathrm{Aut}(H)$, then we may impose the relations $khk^{-1}=\phi_k(h)$ within $H*K$. Formally this means we take the quotient of $H*K$ by the normal subgroup generated by the set of all elements of the form $khk^{-1}\phi_k(h)^{-1}$. Call this quotient group $G$. For any product $kh\in H*K$, its image in $G$ may be equated with $(khk^{-1})k=\phi_k(h)k$. Using this sliding rule, every $h_1k_1h_2k_2\cdots$ can be simplified to just $hk$. But no two elements of the form $hk$ can be equal, for $h_1k_1=h_2k_2$ implies $h_2^{-1}h_1=k_2k_1^{-1}$ which is in $H\cap K=\{e\}$ within $H*K$ and hence in $G$. For this reason, we may identify $G$ with the cartesian product $H\times K$, but it remains to see what the multiplication operation is. In fact, to evaluate $(h_1k_1)(h_2k_2)$, simply use the sliding rule on the middle two terms $k_1h_2=\phi_{k_1}(h_2)k_1$ to obtain $h_1k_1h_2k_2=h_1\phi_{k_1}(h_2)\cdot k_1k_2$. This is where the multiplication rule in the usual formal definition of $H\rtimes_\phi K$ comes from. But there was no reason to use $H\times K$ instead of $K\times H$. The sliding rule applies just as well the other way, with $hk=k(k^{-1}hk)$. Then $(k_1h_1)(k_2h_2)=k_1k_2\cdot\phi_{k_2^{-1}}(h_1)h_2$. We can use this if we want to define a $K{}_\phi\ltimes H$ semidirect product. Then $H\rtimes_\phi K$ and $K{}_\phi\ltimes H$ should be isomorphic because they are both just $H*K$ modulo $khk^{-1}=\phi_k(h)$. Indeed, within the latter group we know that $kh=\phi_k(h)k$, so $K{}_\phi\ltimes H\xrightarrow{\sim} H\rtimes_\phi K$ should just be $(k,h)\mapsto(\phi_k(h),k)$.<|endoftext|> TITLE: Proving that $2^{2a+1}+2^a+1$ is not a perfect square given $a\ge5$ QUESTION [5 upvotes]: I am attempting to solve the following problem: Prove that $2^{2a+1}+2^a+1$ is not a perfect square for every integer $a\ge5$. I found that the expression is a perfect square for $a=0$ and $4$. But until now I cannot coherently prove that there are no other values of $a$ such that the expression is a perfect square. Any help would be very much apreciated. REPLY [3 votes]: I will assume that $a \ge 1$ and show that the only solution to $2^{2a+1}+2^a+1 = n^2$ is $a=4, n=23$. This is very non-elegant but I think that it is correct. I just kept charging forward, hoping that the cases would terminate. Fortunately, it seems that they have. If $2^{2a+1}+2^a+1 = n^2$, then $2^{2a+1}+2^a = n^2-1$ or $2^a(2^{a+1}+1) = (n+1)(n-1)$. $n$ must be odd, so let $n = 2^uv+1$ where $v $ is odd and $u \ge 1$. Then $2^a(2^{a+1}+1) = (2^uv+1+1)(2^uv+1-1) = 2^u v(2^u v+2) = 2^{u+1} v(2^{u-1} v+1) $. If $u \ge 2$, then $a = u+1$ and $2^{a+1}+1 =v(2^{u-1} v+1) $ or $2^{u+2}+1 =v(2^{u-1} v+1) =v^22^{u-1} +v $. If $v \ge 3$, the right side is too large, so $v = 1$. But this can not hold, so $u = 1$. Therefore $2^a(2^{a+1}+1) = 2^{2} v( v+1) $ so that $a \ge 3$. Let $v = 2^rs-1$ where $s$ is odd and $r \ge 1$. Then $2^{a-2}(2^{a+1}+1) = v( 2^rs) $ so $a-2 = r$ and $2^{a+1}+1 = vs \implies 2^{r+3}+1 = vs = (2^rs-1)s = 2^rs^2-s $. 
Therefore $s+1 =2^rs^2-2^{r+3} =2^r(s^2-8) \ge 2(s^2-8) \implies 2s^2-s \le 17$ so $s = 1$ or $3$. If $s = 1$, then $2^{r+3}+1 =2^r-1 $ which can not be. If $s = 3$ then $2^{r+3}+1 =9\cdot 2^r-3 \implies 4 =9\cdot 2^r-2^{r+3} =2^r \implies r = 2, v = 11, a = 4$ and $2^9+2^4+1 =512+16+1 =529 =23^2 $.<|endoftext|> TITLE: A question concerning non-algebraic extension. QUESTION [5 upvotes]: Let $\tau:F \to \overline{F}$ be a field embedding. Then is $\overline{F}/\tau(F)$ algebraic extension? I don't think so but I cannot find a counterexample. Would you let me know a counterexample? REPLY [9 votes]: It is a bit unclear what is asked, but judging from the comments of others the following interesting interpretation comes to mind. We start with a field $F$ and its algebraic closure $\overline{F}$, and then we wonder whether $\overline{F}/\tau(F)$ is algebraic for all field homomorphisms $\tau:F\to\overline{F}$. The answer to that question is 'No'. Let $F=\Bbb{Q}(x_0,x_1,\ldots)$ be a purely transcendental extension of the rationals of a countably infinite transcendence degree. Let us define $\tau:F\to\overline{F}$ by declaring $\tau(x_i)=x_{i+1}$ and extending that to a homomorphism of fields in the obvious way. Then the element $x_0\in\overline{F}$ will be transcendental over $\tau(F)$.<|endoftext|> TITLE: Optimization with box constraints - via nonlinear function QUESTION [10 upvotes]: I have the following convex optimization problem over $\mathbb{R}^n$ with box constraints: $$\begin{align}\text{minimize }&\;f(x)\\ \text{subject to }&\;x \in [-1,1]^n\end{align}$$ I can see two approaches to handle this: Approach 1. Use a dedicated convex optimization method that handles box constraints (e.g., projected gradient descent; L-BFGS-B, i.e., L-BFGS with box constraints). Approach 2. Apply a nonlinear transform to force $x$ to be in the box $[-1,1]^n$. In particular, define $$g(w) = f(\varphi(w))$$ where $\varphi: \mathbb{R}^n \to \mathbb{R}^n$ applies the hyperbolic tangent coordinate-wise, i.e., $\varphi(w)_i = \tanh w_i$. Then, solve the unconstrained optimization problem $$\text{minimize }\;g(w)$$ where now we allow $w$ to range over all of $\mathbb{R}^n$. Note that the solution $w$ to this unconstrained problem immediately yields a solution to the original problem, by taking $x = \varphi(w)$. We can then use any optimization procedure to solve the unconstrained problem -- we're not limited to ones that explicitly support box constraints. Basically, I'm just applying a substitution or change-of-variables to force the box constraints to always hold. My question. Is there any reason to prefer one approach over the other? Should we expect one to converge significantly faster, or be more robust, or something? In the readings I've done so far, I've seen lots of references to Approach 1, but I've never seen anyone mention Approach 2. Are there any pitfalls with Approach 2 that I should be aware of? I have a convex optimization solver that seems to work well in my domain, but doesn't support box constraints. So, if there are no pitfalls or shortcomings of Approach 2, it would be convenient to apply the solver I already have to solve the unconstrained optimization problem. (The existing solver already has many well-tuned heuristics; it would be painful to implement projected gradient descent or L-BFGS from scratch and implement all of those heuristics.) However, I want to find out if there are some pitfalls I might not be aware of with Approach 2. 
Approach 2 seems vaguely reminiscent of barrier methods (or interior point methods), though it's not the same. Does the theory in that area offer any relevant insights into this question? REPLY [2 votes]: Even if the question is outdated, let me suggest some other approaches in case someone happens to have a similar problem and finds its way here. The Projected Newton method has been very useful to me when you need to minimize a quadratic objective under box constraints. Especially so if the Hessian matrix turns to be sparse or has some structure that allows fast inverse calculation. Alternatively if you problem is not quadratic but the gradient is cheap to compute I would suggest the already mentioned Projected Gradient, or the Frank-Wolfe method. Usually the tricky part of this method involves solving a subproblem where you minimize a linear function subject to your problem constraints. Fortunately, having just box constraints makes this very easy!<|endoftext|> TITLE: Is category equivalence unique up to isomorphism? QUESTION [7 upvotes]: Let $\mathbf C$, $\mathbf D$ be categories and $F, F':\mathbf C \to \mathbf D$ and $G, G':\mathbf D \to \mathbf C$ be functors of the shown direction. Is it the case that, if each of $(F,G)$ and $(F',G')$ is an equivalence, then $F\cong F'$ (isomorphic in $\mathbf D^\mathbf C$)? Initially I thought (without thinking) that there couldn't be more than "one" equivalence (up to isomorphism) between two categories. But, once I tried to prove this, I couldn't find any reason whay it should be so. Was I wrong? Any help will be appreciated. REPLY [2 votes]: Peter Freyd's book "Abelian Categories" contains a discussion of this topic in Exercise B at the end of Chapter 1. He defines the automorphism class group of a category $C$ to be the group of equivalences $C\to C$ modulo those naturally isomorphic to the identity. As an example where this is non-trivial, he gives the category of ordered sets and order-preserving functions. The functor that reverses the ordering is an equivalence but is not isomorphic to the identity.<|endoftext|> TITLE: Blow-up of a point on a smooth simply connected variety QUESTION [6 upvotes]: Apologies if this is an obvious question. Suppose I have a variety which I know is smooth and simply connected and blow-up a smooth point so that the resulting variety is smooth. Does the exact point that I blow-up matter or will the resulting variety be the same regardless of which points I blow-up? REPLY [8 votes]: If "the same" is interpreted to mean "isomorphic", then this is not true. I will give a counterexample below, but first let me mention a positive result: If the variety is homogeneous, by which I mean that its automorphism group acts transitively, then the isomorphism class is independent of the point you blow up. (This also implies you can't get counterexamples from really simple varieties such as $\mathbf P^n$ or quadrics.) Now for a counterexample: Here's the simplest one I can think of. Let $\pi: X \rightarrow \mathbf P^2$ be the blowup of $\mathbf P^2$ in any 2 distinct points $p_1$, $p_2$. Now let $f: Y \rightarrow X$ be the blowup of a third point $p$. If $\pi(p)$ does not lie on the line joining $p_1$ and $p_2$, then $Y$ will not have any smooth curves of self-intersection $-2$, but if $\pi(p)$ does lie on the line, there will be such a curve. So the isomorphism class of $Y$ changes depending on $p$. 
Maybe that is not so satisfying, because I only showed that the isomorphism class changes for "special" positions of the point $p$. But there are examples where the isomorphism class varies continuously with $p$: Let $X$ be the blowup of $\mathbf P^2$ in 5 general points, and $f: Y \rightarrow X$ the blowup of a sixth point $p$. Then $Y$ is a cubic surface. There is a 4-dimensional moduli space of cubic surfaces up to automorphism, which implies that the isomorphism class of $Y$ must be nonconstant as we vary $p$. (I can give more details if necessary.) Finally, in spite of these negative results, there is a weaker sense in which the blowups are "the same", which might suffice for many purposes: The family of blowups parametrised by $p \in X$ forms a flat family over $X$. So any property of varieties that is constant in flat families is independent of the choice of blown-up point of $p$. In particular, if $X$ is a complex projective variety then the diffeomorphism class of the blowup is independent of $p$.<|endoftext|> TITLE: Is a group homomorphism a module homomorphism? QUESTION [5 upvotes]: Let $A$, $B$ be two abelian groups, let $f$ be a group homomorphism from $A$ to $B$. Now we consider $A$, $B$ as $R$-modules over a ring $R$. Show that $f$ is also a module homomorphism between them. I know is clear that $f(a+b)=f(a)+f(b)$, but how can I show that $f(ra)=rf(a)$? REPLY [18 votes]: In general there is no reason for $f$ to be an $R$-module homomorphism just because it is an abelian group homomorphism. Consider complex conjugation $\bar{\cdot} : \mathbb{C} \to \mathbb{C}$. It is clearly an abelian group morphism since $\overline{z+z'} = \bar{z} + \bar{z}'$. But if you take $R = \mathbb{C}$, it's clearly not an $R$-module homomorphism, since e.g. $\overline{i \cdot 1} = - i \neq i \cdot \bar{1} = i$. If $R = \mathbb{Z}$ then as quid mentions it would actually be true. If $n \ge 0$ is an integer then $$f(n \cdot x) = f(x + \dots + x) = f(x) + \dots + f(x) = n \cdot f(x),$$ because $f$ is a group morphism, and then $f((-n) \cdot x) = -f(n \cdot x)$ (still because $f$ is a group morphism) thus $f((-n) \cdot x) = (-n) \cdot f(x)$. But this is a very special situation.<|endoftext|> TITLE: what is domain of $f(x)=(-1)^x$ QUESTION [5 upvotes]: What exactly is the domain of the $$f(x)=(-1)^x$$ Because if $x=\pm0.5$ or $x=\pm0.25$ etc we get imaginary numbers. But if we take $x=\pm\frac{1}{3}$ we get a real number, so how to define its domain? Do we restrict it to only integers? REPLY [7 votes]: There is no problem in defining such a function over the rational numbers that can be represented with an odd denominator. However such a function will not obey the standard rules for exponentials: $$ (-1)^{1/3}=-1 $$ but, on the other hand, $$ ((-1)^{2/1})^{1/6}=1 $$ The usefulness of such a function is unclear. However, you can define it as a multivalued function over the complex numbers, using the fact that $-1=e^{i\pi+2ki\pi}$ ($k$ any integer) $$ (-1)^x=e^{(2k+1)ix\pi} $$ Note that in general this will have infinitely many values, unless $x$ is rational. If $x$ is rational and $x=a/b$, with odd $b$, then only one of the values will be real and, conversely, if one of the values is real, then $x$ is rational and $x=a/b$ with odd $b$. 
(Exercise on complex numbers.)<|endoftext|> TITLE: Volume in higher dimensions QUESTION [5 upvotes]: Let me first state the statement which I want to prove (encountered while studying "Geometry of Number"): Suppose $A$ is a convex, measurable, compact and centrally symmetric subset of $\mathbb{R}^n$ where $\mathbf{x}=(x_1, \ldots, x_r, x_{r+1}, \ldots, x_n)\in A$ iff $|x_i|\leq 1$ for $1\leq i \leq r$ and $x_{r+1}^2 + x_{r+2}^2 \leq 1, \ldots, x_{n-1}^2 + x_{n}^2 \leq 1$ Then $\text{vol}(A) = 2^r \pi^{\frac{n-r}{2}}$ The definition of various terms used above : $A\subset \mathbb{R}^n$ is called a convex set if $\mathbf{x}$ and $\mathbf{y}$ are in $A$ then so is the entire line segment joining them. Measurable refers to Lebesgue measure in $\mathbb{R}^n$; the Lebesgue measure $\text{vol}(A)$ coincides with any reasonable intuitive concept of n-dimensional volume, and Lebesgue measure is countably additive. Centrally symmetric means symmetric around $\mathbf{0}$: if $\mathbf{x} \in A$ then so is $-\mathbf{x}$. I have no idea about how to approach this problem. Edit: It is also given that $n-r$ is an even number. REPLY [3 votes]: To keep things simple we will do some induction: Set $A_r = \{(x_1, \dots, x_r) \in \mathbb R^r : \forall i \in \{1,\dots, n\}: \vert x_i \vert \leq 1\}$. Consider the integral $\int_{A_r} 1 \ d\lambda(x_1, \dots, x_r)$, we will prove by induction that $$\int_{A_r} 1 \ d\lambda(x_1, \dots, x_r) = 2^r.$$ For $n = 1$ we get $$\int_{A_1} 1 \ d x_1 = \int_{-1}^1 1\ dx_1 = 2.$$ Now let the statement hold for $r \in \mathbb N$. Then we get for $r + 1$ with Fubini that $$\int_{A_{r+1}} 1 \ d\lambda(x_1, \dots, x_{r+1}) = \int_{-1}^1 \bigg(\int_{A_r} 1 \ d\lambda(x_1, \dots, x_r) \bigg)dx_{r+1} = \int_{-1}^1 2^r\ dx_{r+1} = 2^{r +1}.$$ Now to the second part. Let $k := n - r$ and $B_{k} = \{(x_{r+1}, \dots, x_n) \in \mathbb R^{n-r} : x_{r+1}^2 + x_{r+2}^2 \leq 1, \ldots, x_{n-1}^2 + x_{n}^2 \leq 1 \}$ for even $k$. Now we prove by induction that $$\int_{B_k} 1 \ d\lambda(x_{r+1}, \dots, x_n) = \pi^{k/2}.$$ For $k = 2$ we get by using polar coordinates $$ \int_{B_2} 1\ d\lambda(x_{r+1}, x_{r+2}) = \int_0^1 \int_0^{2 \pi} r\ d\varphi dr = \int_0^1 2\pi r\ dr = \pi.$$ Now let the statement hold for $k \in \mathbb N$ even. Then we get for $k + 2$ with Fubini that \begin{align*}\int_{B_{k+2}} 1\ d\lambda(x_{r+1}, \dots, x_{n + 2}) &= \int_{B_2} \bigg( \int_{B_{k}} 1\ d\lambda(x_{r+1}, \dots, x_n)\bigg) d\lambda(x_{n+1}, x_{n + 2}) = \int_{B_2} \pi^{k/2}\ d\lambda(x_{n+1}, x_{n + 2}) \\ &= \pi^{k/2} \int_0^1 \int_0^{2 \pi} r\ d\varphi dr = \pi^{k/2} \pi = \pi^{(k + 1)/2}\end{align*} All together we can get know with Fubini \begin{align*}\operatorname{vol}(A) &= \int_A 1\ d\lambda(x_1, \dots, x_n) = \int_{A_r} \bigg(\int_{B_k} 1\ d\lambda(x_{r+1}, \dots, x_n) \bigg) d\lambda(x_1, \dots, x_r) \\ &= \int_{A_r} \pi^{k/2}\ d\lambda(x_1, \dots, x_r) = \pi^{k/2} \int_{A_r}1 \ d\lambda(x_1, \dots, x_r) = \pi^{k/2} 2^r.\end{align*} And that was the result we were aiming for. The "physics" version: Denote the unit disk by $D = \{x \in R^2 : x_1^2 + x_2^2 \leq 1\}$. Then $A = [-1,1]^r \times D^{(n - r)/2}$. Show like above that $\operatorname{vol}([-1,1]) = 2$ and $\operatorname{vol}(D) = \pi$. 
Then we got $$\operatorname{vol}(A) = \operatorname{vol}([-1,1]^r \times D^{(n - r)/2}) = \operatorname{vol}([-1,1])^r \operatorname{vol}(D)^{(n - r)/2} = 2^r \pi^{(n - r)/2}.$$ This prove makes use of the fact you can write $A$ as $A = [-1,1]^r \times D^{(n - r)/2}$ and the relation between the one-dimensional Lebesgue measure and the multi-dimensional Lebesgue measure. Hope it helps :)<|endoftext|> TITLE: Trace norm of a triangular matrix with only ones above the diagonal QUESTION [8 upvotes]: For $n\in\mathbb N^*$, we consider the triangular matrix $$ T_n = \begin{pmatrix} 1 & \cdots & 1 \\ & \ddots & \vdots \\ 0 & & 1 \end{pmatrix} \in M_{n,n}(\mathbb R) \,. $$ The trace norm of $T_n$, that is the sum of the singular values of $T_n$, is denoted by $\|T_n\|_{\text{Tr}}$. Is it true that $$ \sup_{n\in\mathbb N^*} \Big\{\frac{1}{n}\|T_n\|_{\text{Tr}}\Big\} < \infty \,? $$ EDIT Is it true that $$ \sup_{n\in\mathbb N^*} \Big\{\frac{1}{n\log(n)}\|T_n\|_{\text{Tr}}\Big\} < \infty \,? $$ An equivalent definition of the trace norm is $\|T_n\|_{\text{Tr}}:=\text{Tr}[\sqrt{T_n^T T_n}]$, where the square root $\sqrt{A}$ of a nonnegative matrix $A$ is the only nonnegative matrix such that $\sqrt{A}^2=A$. (And by $A$ nonnegative I mean $⟨u,Au⟩\geq0$ for all vector $u\in\mathbb R^n$). EDIT 2 One can explicitly compute the singular values of $T_n$. $$T_n^{-1} = \begin{pmatrix} 1 & -1 & & 0 \\ & \ddots & \ddots & \\ & & \ddots & -1 \\ 0 & & & 1 \end{pmatrix} \in M_{n,n}(\mathbb R) \,. $$ The singular values $\sigma_1,\dots,\sigma_n$ of $T_n$ are related to those, $\lambda_1,\dots,\lambda_n$, of $T_n^{-1}$ through $\sigma_j=\lambda_j^{-1}$. It is easier to compute the singular values of $T_n^{-1}$ because the eigenvalues $\mu_j=\lambda_j^2$ of $$A_n = (T_n^{-1})^* T_n^{-1} = \begin{pmatrix} 1 & -1 & & & 0 \\ -1 & 2 & -1 & & \\ & -1 & \ddots & \ddots & \\ & & \ddots & \ddots & -1 \\ 0 & & & -1 & 2 \end{pmatrix} \,,$$ can be computed explicitly (as for the discrete laplacian). Eigenvalues of $A_n$ First, $A_n$ is real symmetric, hence it can be diagonalized in an orthonormal basis, and its eigenvalues $\mu_j$ are real. Then using, say, Gershgorin's circle theorem, the eigenvalues are in the interval $[0,4]$. If $\psi=(\psi_1,\dots,\psi_n)^T$ is an eigenvector of $A_n$ associated with the eigenvalue $\mu$, then \begin{align} \psi_2 & = (1-\mu)\psi_1 & (1) \\ \psi_3 & = (2-\mu)\psi_2 - \psi_1 & (2)\\ & \vdots \\ \psi_{j+2} &= (2-\mu)\psi_{j+1} - \psi_j & (j)\\ & \vdots \\ \psi_n & = (2-\mu)\psi_{n-1} - \psi_{n-2} & (n-1)\\ (2-\mu)\psi_n &= \psi_{n-1} & (n) \end{align} From Eq. $(2)$ to $(n-1)$, one can see that $\psi_j$ is linear, recursive sequence of order two. Since the roots of the polynomial $X^2+(\mu-2)X+1$ are $$\frac{2-\mu\pm i \sqrt{\mu(4-\mu)}}{2}=e^{\pm i\theta}$$ with $\theta\in [0,\pi]$ and $\cos(\theta)=1-\frac{\mu}{2}$, $\psi_j=\Re(a e^{i(j-1)\theta})$ with $a$ a complex number. Up to a (real) normalization factor $\psi_j=\Re(e^{i(\varphi+(j-1)\theta)})$ for some $\varphi\in\mathbb R$. Using $\mu = 2-e^{i\theta}-e^{-i\theta}$ and Eq.(1) and (n), yields \begin{align} e^{i(\varphi+\theta)}+e^{-i(\varphi+\theta)}&=(e^{i\theta}+e^{-i\theta}-1)(e^{i\varphi}+e^{-i\varphi}) \\ (e^{i\theta}+e^{-i\theta})(e^{i(\varphi+(n-1)\theta)}+e^{-i(\varphi+(n-1)\theta)})&=e^{i(\varphi+(n-2)\theta)}+e^{-i(\varphi+(n-2)\theta)} \end{align} i.e. \begin{align} \cos(\varphi-\theta)&=\cos(\varphi) & (1)'\\ \cos(\varphi+n\theta)&=0 & (n)' \end{align} From $(1)'$, either $\theta=0$ or $\varphi = \frac{\theta}{2}+k\pi$. 
$\theta=0$ would give $\mu=0$ which is excluded since $A_n$ is an invertible matrix. Hence using $\varphi = \frac{\theta}{2}+k\pi$ and $(n)'$: $$(n+\frac{1}{2})\theta=(k+\frac{1}{2})\pi$$ Using that $\theta\in[0,\pi]$, we get that $\theta\in \Big\{\frac{j-\frac{1}{2}}{n+\frac{1}{2}} \pi \mid j=1,\dots,n\Big\}$. And in fact each of these values yields an eigenvalue and an eigenvector. The corresponding eigenvalues are $\mu_j=4\sin^2\Big(\frac{j-\frac{1}{2}}{n+\frac{1}{2}} \frac{\pi}{2}\Big)$, $ j=1,\dots,n$. Trace Norm of $T_n$ The singular values of $T_n$ can now be deduced: $$\sigma_j=\frac{1}{2\sin\Big(\frac{j-\frac{1}{2}}{n+\frac{1}{2}} \frac{\pi}{2}\Big)}\,,\quad j=1,\dots,n$$ and the trace norm is $$ \|T_n\|_{\text{Tr}}=\frac{1}{2}\sum_{j=1}^n \frac{1}{\sin\Big(\frac{j-\frac{1}{2}}{n+\frac{1}{2}} \frac{\pi}{2}\Big)} \,.$$ Using that $\sin x\geq \frac{2}{\pi}x$ on $[0,\frac{\pi}{2}]$ one gets the upper bound: $$ \|T_n\|_{\text{Tr}}\leq \frac{n+\frac{1}{2}}{2}\sum_{j=1}^n \frac{1}{j-1/2}\leq \frac{n+\frac{1}{2}}{2} \Big(\frac{1}{2}+\ln(2n+1)\Big)\,,$$ which implies that $$ \limsup_{n\in\mathbb N^*} \Big\{\frac{1}{n\log(n)}\|T_n\|_{\text{Tr}}\Big\} \leq \frac{1}{2} \,. $$ Actually one also has a lower bound $$ \frac{n+1/2}{\pi}\Big(\ln(\tan(\frac{\pi}{4}))-\ln(\tan(\frac{\pi}{4(n+1/2)}))\Big) \leq \frac{n+1/2}{\pi} \int_{\frac{\pi}{2n+1}}^{\frac{\pi}{2}} \frac{dx}{\sin(x)} \leq \|T_n\|_{\text{Tr}} \,,$$ which implies that $$ \frac{1}{\pi} \leq \liminf_{n\in\mathbb N^*} \Big\{\frac{1}{n\log(n)}\|T_n\|_{\text{Tr}}\Big\} \,. $$ REPLY [6 votes]: In Loewner ordering, we have $$ T_n^{-1}(T_n^{-1})^\top =\pmatrix{2&-1\\ -1&\ddots&\ddots\\ &\ddots&2&-1\\ &&-1&1} \preceq\pmatrix{2&-1\\ -1&\ddots&\ddots\\ &\ddots&2&-1\\ &&-1&2}=P. $$ Using the spectral formula for tridiagonal Toeplitz matrices, the eigenvalues of $P$ are given by $2+2\cos\left(\frac{k\pi}{n+1}\right)=4\cos^2\left(\frac{k\pi}{2(n+1)}\right)$ with $k=1,2,\ldots,n$. Therefore, the $k$-th smallest singular value of $T_n$ is bounded below by $\frac12\sec\left(\frac{k\pi}{2(n+1)}\right)$ and \begin{align} \frac1n\|T_n\|_{\text{Tr}} &\ge\frac1n\sum_{k=1}^n\frac12\sec\left(\frac{k\pi}{2(n+1)}\right)\\ &\ge\frac1\pi\int_0^{n\pi/2(n+1)}\sec x\,dx\\ &=\frac1\pi\ln\left(\sec\frac{n\pi}{2(n+1)} + \tan \frac{n\pi}{2(n+1)}\right), \end{align} which is unbounded when $n\to\infty$.<|endoftext|> TITLE: Find all positive integers satisfying: $x^5+y^6=z^7$ QUESTION [7 upvotes]: Find all positive integers satisfying: $x^5+y^6=z^7$ No algebraic method came into my mind,just tried to find some answers and failed! Of course it's very simple to write a computer program finding at least one solution but I prefer not to use computer. To me it's like Fermat's equation!!! So hard!! REPLY [4 votes]: I have found that the noble art of programing is much underestimated, for it often can provide vital clues on the path to finding many solutions to an equation. However, finding all solutions to this equation is a task certainly beyond my skills. I expect you’ve heard of the, frequently putdown, Beal’s Conjecture and the million dollar prize apparently offered for its resolution. For this problem, it states that $gcd(x,y,z)>1$. However, a bigger clue that solutions exist with $gcd(x,y,z)>1$ is to be found within the smallest solution, $$(x,y,z)=(262144,32768,8192)$$ which can obtained using the formula given by @Robys. 
In this case, $gcd(x,y,z)=8192$ Hence, if we define $f= gcd(x,y,z)$, $x=fu$, $y=fv$ and $z=fw$ we obtain $$f^5u^5+f^6v^6=f^7z^7$$ Divide by $f^5$ to give a quadratic in $f$, $$u^5+fv^6=f^2w^7$$ Hence, $$f=\frac{v^6 + \sqrt{v^{12}+4u^5w^7}}{2w^7}$$ It’s now much more practical to code a program to produce a reasonable number of results, without using convoluted techniques, by testing that f is integer. The aforementioned solution, $$(x,y,z)=(262144,32768,8192)$$ now becomes $$(f,u,v,w)=(8192,32,4,1)$$ or even $$(f,u,v,w)=(2^{13},2^5,2^2,1)$$ This perhaps seems less trivial with the next in this family of solutions, where $$(x,y,z)=(1152921504606846976,1125899906842624,8796093022208)$$ becomes $$(f,u,v,w)=(2^{43},2^{17},2^7,1)$$ Here are my results, first in numbers, then in prime factors. I’ve eliminated the unwanted results, with $f$ integer but $gcd(u,v,w)>1$, manually. The first two are not new to this post. $$(f,u,v,w)$$ $$(8192,32,4,1)$$ $$(8796093022208,131072,128,1)$$ $$(6530347008,7776,36,1)$$ $$(7086739046912,134456,98,1)$$ $$(35664401793024,248832,144,1)$$ $$(8605184,392,14,1)$$ $$(51018336,972,18,1)$$ $$(916132832,1922,31,1)$$ $$(16307453952,9216,48,1)$$ $$(233861123808,18252,78,1)$$ $$(1250000000000,50000,100,1)$$ $$(3185049600000,57600,120,1)$$ $$(40814668800000,194400,180,1)$$ $$(201689413697376,175692,242,1)$$ $$(260161285128192,254016,252,1)$$ $$(72900000,1350,15,1)$$ $$(6083264512,8112,26,1)$$ $$(30375000000000,225000,150,1)$$ $$(850305600000,48600,90,1)$$ $$(3119171623488,77976,114,1)$$ $$(101629210098393,267126,211,1)$$ $$(3796875,1125,15,2)$$ $$(33038369407,16129,127,2)$$ ======= $$(f,u,v,w)$$ $$(2^{13},2^5,2^2,1)$$ $$(2^{43},2^{17},2^7,1)$$ $$(2^{12}*3^{13},2^5*3^5,2^2*3^3,1)$$ $$(2^9*7^{12},2^3*7^5,2*7^2,1)$$ $$(2^{26}*3^{12},2^{10}*3^5,2^4*3^2,1)$$ $$(2^9*7^5,2^3*7^2,2*7,1)$$ $$(2^5*3^{13},2^2*3^5,2*3^2,1)$$ $$(2^5*31^5,2*31^2,31,1)$$ $$(2^{26}*3^5,2^{10}*3^2,2^4*3,1)$$ $$(2^5*3^9*13^5,2^2*3^3*13^2,2*3*13,1)$$ $$(2^{10}*5^{13},2^4*5^5,2^2*5^2,1)$$ $$(2^{22}*3^5*5^5,2^8*3^2*5^2,2^3*3*5,1)$$ $$(2^{13}*3^{13}*5^5,2^5*3^5*5^2,2^2*3^2*5,1)$$ $$(2^5*3^5*11^{10},2^2*3*11^4,2*11^2,1)$$ $$(2^{18}*3^{10}*7^5,2^6*3^4*7^2,2^2*3^2*7,1)$$ $$(2^5*3^6*5^5,2*3^3*5^2,3*5,1)$$ $$(2^{14}*13^5,2^4*3*13^2,2*13,1)$$ $$(2^9*3^5*5^{12},2^3*3^2*5^5,2*3*5^2,1)$$ $$(2^9*3^{12}*5^5,2^3*3^5*5^2,2*3^2*5,1)$$ $$(2^6*3^9*19^5,2^3*3^3*19^2,2*3*19,1)$$ $$(3^5*211^5,2*3*211^2,211,1)$$ $$(3^5*5^6,3^2*5^3,3*5,2)$$ $$(2^5*31^5,2*31^2,31,1)$$ $$(127^5,127^2,127,2)$$ I apologise, in advance, for any errors in these results.<|endoftext|> TITLE: What is the link between Primes and zeroes of Riemann zeta function? QUESTION [5 upvotes]: Usually Riemann hypothesis is introduced along this lines. (1.1) Geometric progressions were known since forever (1.2) Euler factorization links a product of primes and a sum of natural numbers (1.3) Harmonic series diverges, thus there are infinity many primes (1.4) Riemann expanded zeta function to whole complex plane and conjured that all non-trivial zeroes have $\Re=1/2$ So far so good. Nice and steady. How did he come to this conclusion isn't obvious, but this is why RH is an open problem. Suddenly... (2.1) Non-trivial zeroes of zeta function impose constraints on distribution of prime numbers including the accuracy of prime counting function distribution of (a) gaps between primes, (b) twin primes ... Indeed, (1.2) linked all natural numbers to primes. But why distribution of zeroes is relevant? Why not Fourier transform of zeta function, not distribution of ones? 
In other words, I understand why a certain property of zeta function is relevant to primes. But why this certain property happen to be the distribution of zeroes? Edit: I've read this What is so interesting about the zeroes of the Riemann $\zeta$ function? but roughly around this point "This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their 'expected' positions." "Roughly speaking, the explicit formula says the Fourier transform of the zeros of the zeta function is the set of prime powers plus some elementary factors." it gets hard to follow. REPLY [4 votes]: with $\psi(x) = \sum_{p^k < x} \ln p$ you have the Riemann explicit formula $$\psi(x) = x - \sum_\rho \frac{x^\rho}{\rho} + \mathcal{O}(1) $$ ($\rho$ being the non trivial zeros of $\zeta(s)$, obtained from the residue theorem applied to $\frac{-\zeta'(s)}{\zeta(s)} = \sum_{p^k} p^{-sk}\ln p $ $= s \int_2^\infty \psi(x) x^{-s-1}dx$ $ \implies \psi(x) = \frac{1}{2i\pi} \int_{\sigma-i\infty}^{\sigma+i\infty} \frac{-\zeta'(s)}{\zeta(s)} \frac{x^s}{s}ds$ by inverse Laplace/Mellin transform) then the Riemann hypothesis is that $$\frac{\psi(e^u) - e^u}{e^{u/2}} = - \sum_\rho \frac{e^{i u \text{ Im}(\rho)}}{\rho} + \mathcal{O}(e^{-u/2})$$ i.e. $\frac{\psi(e^u) - e^u}{e^{u/2}}$ is almost periodic, having an expansion that is "almost" a Fourier series (you doubt it would have a huge impact on the distribution of prime numbers)<|endoftext|> TITLE: Perfect circles in the Mandelbrot set? QUESTION [11 upvotes]: It is known that the boundary of the period 2 hyperbolic component of the Mandelbrot set is a perfect circle of radius $\frac{1}{4}$ centered at $-1$. Moreover it is known that the boundaries of the circle-like period 3 hyperbolic components are not perfect circles ("A Parameterization of the Period 3 Hyperbolic Components of the Mandelbrot Set", Dante Giarrusso, Yuval Fisher). Question: is the period 2 component the only hyperbolic component in the Mandelbrot set whose boundary is a perfect circle? Similarly, is the period 1 component the only hyperbolic component in the Mandelbrot set whose boundary is a perfect cardioid? I suspect the answer to both questions is "yes", but haven't found any conclusive references. REPLY [3 votes]: Here's something I tried, thanks to Adam's comment for the basic idea for $p \ge 3$. This answer is missing some details, comments suggesting improvements are welcome, as would be other answers that fill in the gaps. It also relies on the fact (?) that: $$\forall 0 \neq a \in \mathbb{C}, 0 \neq b \in \mathbb{C}, 1 < m \in \mathbb{N} . \exists x . |a e^{i x} + b e^{i m x}| \neq 1$$ Let $F(z, c) = z^2 + c$ with $F^{p+1}(z, c) = F^p(F(z, c), c)$. Now the boundary of a hyperbolic component can be parameterized by $\theta \in \mathbb{R}$ by the solution of the equation system: $$ F^p(z,c) = z \\ \frac{\partial}{\partial z}F^p(z,c) = e^{i \theta} $$ Now the question reduces to showing $c$ is of the form $c = c_0 + r_0 e^{i \phi}$ where $c_0 \in \mathbb{C}$ and $r_0 \in \mathbb{R}$ are constants and $\phi \in \mathbb{R}$. $F^p(z,c) = z$ defines a polynomial of even degree $P(z) = 0$, whose constant coefficient is the product of its roots and is a polynomial in $c$ of degree $2^{p-1}$. The roots include those of $F^q(z, c) = z$ where $q | p$. Also, $\frac{\partial}{\partial z}F^p(z, c) = 2^p \Pi z_k$ where the $z_k$ are the $p$ roots in the periodic orbit of the desired solution $z$ (all $z_k$ are roots of $F^p(z, c) = z$, the remaining roots have lower period). 
Case $p = 1$: $$ z^2 + c_0 + r_0 e^{i \phi} = z \\ \therefore z = \frac{1 \pm \sqrt{1 - 4(c_0 + r_0 e^{i \phi})}}{2} \\ \frac{\partial}{\partial z} = 2 z = e^{i \theta} $$ Now $|e^{i \theta}| = 1$ but $\exists x . |2 z| = |1 \pm \sqrt{x}| \neq 1$, so conclude that period $1$ component is not a perfect circle. Case $p = 2$: The equations reduce to $$ 4(1 + c_0 + r_0 e^{i \phi}) = e^{i \theta} $$ with obvious solution $c_0 = -1, r_0 = \frac{1}{4}, \phi = \theta$, so conclude that the period 2 component is a perfect circle. Case $p = 3$: The equations reduce to $$ 8 (c^3 + 2c^2 + c + 1) = e^{i \theta} \text{ where } c = c_0 + r_0 e^{i \phi}$$ For this to hold, the coefficients of $e^{i k \phi}$ must be zero for all $k > 1$. But setting $k = 3$ implies $r_0^3 = 0$ but we know that $r_0 > 0$ as hyperbolic components have non-empty interior. Contradiction, conclude that no period 3 component is a perfect circle. Case $p > 3$: Similarly to the $p = 3$ case, get a polynomial of degree $m > 1$ in $e^{i \phi}$ whose highest term has coefficient $r_0^m$. It remains to show that the polynomial really does have degree greater than $1$. The constant coefficient (product of roots) is a polynomial of degree $2^{p-1}$ in $c$, divided by the corresponding constant coefficient of all smaller divisors of the period gives: $$m = 2^{p-1} - \sum_{q | p, q < p} 2^{q-1}$$ which solved numerically gives: $$\begin{aligned} p & & & m \\ 1 & & & 1 \\ 2 & & & 1 \\ 3 & & & 3 \\ 4 & & & 5 \\ 5 & & & 15 \\ 6 & & & 25 \\ \vdots \end{aligned}$$ Finally, $m > 1$ for all $p \ge 3$ because $\exists q > 1 . q \nmid p$.<|endoftext|> TITLE: Roots of Unity with Rational Real Parts QUESTION [5 upvotes]: All of the $4^{\text{th}}$ and $6^{\text{th}}$ roots of unity have real parts that are rational numbers. Are these the only roots of unity $z$ such that $\text{Re}(z)\in \mathbb{Q}$ ? REPLY [5 votes]: Suppose $z^n=1$ and $\Re(z)\in \mathbb{Q}$. Then also $\left(\overline{z}\right)^n=1$, so $z$ and $\overline{z}$ are both algebraic integers. This means that $z+ \overline{z} = 2 \Re(z)$ is also an algebraic integer. The only algebraic integers that are rational are integers, so $2 \Re(z)$ must be an integer. Since $z$ lies on the unit circle we find that the only possible values for $2\Re(z)$ are $\{-2,1,0,1,2\}$. These values indeed correspond to the 6th roots of unity together with $\pm i$.<|endoftext|> TITLE: Why is the category of finitely generated modules over a non-noetherian ring not abelian? QUESTION [12 upvotes]: I am learning about abelian categories for a talk I have to give next week. One of the first questions I had upon learning this definition is "does there exist an additive category that is not abelian?" The question, Additive category that is not abelian, gives many great answers to the question but I am curious about the details, in particular about the example of finitely generated modules over a non-noetherian ring. The definition I am using for abelian categories is as followed: A category $\mathcal{C}$ is abelian if 1) $\mathcal{C}$ is an additive category 2) Every morphism in $\mathcal{C}$ has a kernel and cokernel 3) Every monomorphism is the kernel of a map, and every epimorphism is a cokernel of a map. Thus my question is: Given the above definition of abelian categories, why is the category of finitely-generated modules over a non-noetherian ring not abelian? 
My guess would be that this could fail because some kernels/cokernels might not be finitely generated, but I am blanking on how to construct an example. REPLY [12 votes]: Claim: The inclusion $R\text{-mod}\hookrightarrow R\text{-Mod}$ preserves kernels. Once this is known, it follows that a kernel in $R\text{-mod}$, if it exists, must be isomorphic to the corresponding kernel in $R\text{-Mod}$; in particular, the latter is finitely generated. Specializing this observation to projections $\pi_I: R\to R/I$ for an ideal $I$ in $R$, it follows that $\ker(\pi_I)$ exists in $R\text{-mod}$ if and only if $I$ is finitely generated. Letting $I$ vary, we see that $R\text{-mod}$ is abelian only if all ideals of $R$ are finitely generated. Proof of claim: Suppose $f: M\to N$ is a morphism of finitely generated $R$-modules and $k: K\to M$ is a kernel of $f$ in $R\text{-mod}$. Further, let $g: T\to M$ be another $R$-module homomorphism with $fg=0$ and $T$ arbitrary. Then, by assumption, for any finitely generated submodule $\iota: S\subseteq T$ the composite $g\iota$ factors uniquely through $k$ via some $t_S: S\to K$. Since any module is the union of its finitely generated submodules, it follows that a factorization of $g$ through $k$ is unique, if it exists. In turn, applying this uniqueness it moreover follows that for any other $\iota^{\prime}: S^{\prime}\subseteq T$, the factorizations $S\to K$ and $S^{\prime}\to K$ of $\iota$ resp. $\iota^{\prime}$ agree on $S\cap S^{\prime}$. Therefore, all $t_S$ glue to a factorization $t: T\to K$ of $g$ through $k$, proving that $k$ is a kernel in $R\text{-Mod}$. Addendum (independent of the rest): If you like it more technically, you can package the same argument as follows: Consider any category ${\mathscr C}$ (generalizing $R\text{-Mod}$), any diagram $D: I\to {\mathscr C}$ over some index category $I$ (generalizing $\bullet\rightrightarrows\bullet$), and any cone $c: D\to X$ over it, that is, you have $X\in{\mathscr C}$ and for any $i\in I$ you have a morphism $c_i: X\to D(i)$ such that $$X\xrightarrow{c_i} D(i)\xrightarrow{D(\alpha)} D(j) = X\xrightarrow{c_j} D(j)$$ for any arrow $\alpha$ in $I$. In other words, $X\to D$ is a candidate for a limit-cone for $I$, and you might ask: Question: Which objects of ${\mathscr C}$ indeed 'see' $X$ as the limit of $D$? Formally, this means that for some $Y\in{\mathscr C}$ you can check whether the natural morphism in $\textsf{Set}$, $${\mathscr C}(Y,X)\to {\lim}_I{\mathscr C}(Y,D(i))$$ is an isomorphism. Call the respective subcategory ${\mathscr C}_D$ for lack of a better name. Now you have two facts: Since inverse limits commute, ${\mathscr C}_D$ is closed under colimits in ${\mathscr C}$. If $X$ and the codomain of $I$ are contained in some full subcategory ${\mathscr D}$ in which $X\to D$ is indeed an inverse limit, then ${\mathscr D}\subseteq{\mathscr C}_D$. Combining both, it follows that ${\mathscr C}_D={\mathscr C}$ if there's a full subcategory ${\mathscr D}\subset{\mathscr C}$ containing $X$ and the codomain of $D$, for which $X\to D$ is a limit, and such that any object of ${\mathscr C}$ is a colimit of a diagram in ${\mathscr D}$. 
This applies to the embedding $R\text{-mod}\subset R\text{-Mod}$ and shows that it preserves all limits, in particular kernels.<|endoftext|> TITLE: Solving the ODE $y^{\prime\prime}(x)-y(x)=g(x)$ using the Fourier transform, without missing solutions QUESTION [6 upvotes]: I'm supposed to solve the ODE $y^{\prime\prime}(x)-y(x)=g(x)$ using the Fourier transform and then explain if I got the most general solution. First of all, I don't know what "solve" means here, because the furthest I can get is $$-\frac{\hat g(\omega)}{1+\omega^2}=\hat y(\omega)$$ which by the convolution theorem tells me $y=-(g\ast \frac 12 e^{-|t|})$, and I don't see what more I can do. Second, I don't understand what solutions I am missing... Help! REPLY [6 votes]: You have a solution: $$ -\frac{1}{2}\int_{-\infty}^{\infty}g(t)e^{-|t-x|}dt $$ That solution works for a large class of functions $g$, but it is not the most general solution because you can add solutions of $y''-y=0$. So a more general solution is $$ y(x)= A e^{x}+Be^{-x}-\frac{1}{2}\int_{-\infty}^{\infty}g(t)e^{-|t-x|}dt, $$ where $A$ and $B$ are arbitrary constants.<|endoftext|> TITLE: Is there any known relationship between $\sum_{i=1}^{n} f(i)$ and $\sum_{i=1}^{n} \dfrac {1}{f(i)}$ QUESTION [6 upvotes]: If we knew $\sum_{i=1}^{n} f(i)=S$ ($n$ can be $\infty$), is there anything we could say about $\sum_{i=1}^{n} \dfrac {1}{f(i)}$ in terms of $S$? The only thing I was able to find on the web is a link to a dead question. If there is no known relationship, is it possible for there to exist one that we just haven't discovered? REPLY [7 votes]: $\newcommand{\pars}[1]{\left(\,{#1}\,\right)}$ \begin{align} \color{#f00}{\sum_{i = 1}^{n}\mathrm{f}\pars{i} + \sum_{i = 1}^{n}{1 \over \mathrm{f}\pars{i}}} & \geq \color{#f00}{2n} \end{align} by termwise AM-GM, assuming each $\mathrm{f}\pars{i}>0$.<|endoftext|> TITLE: If $A$ is a $2\times2$ matrix, what is $\det(4A)$ in terms of $\det(A)$? QUESTION [11 upvotes]: If $A$ is a $2\times2$ matrix, what is $\det(4A)$ in terms of $\det(A)$? This seems trivial, but I'm not sure exactly what they are asking. I'm guessing I have some matrix $A = \begin{bmatrix}a&b\\c&d\end{bmatrix}$ where I know $\det(A) = ad - bc$. So if they want to know what $\det(4A)$ is, wouldn't it just be $4A = 4\begin{bmatrix}a&b\\c&d\end{bmatrix} = \begin{bmatrix}4a&4b\\4c&4d\end{bmatrix}$, so that $\det(4A) = 16ad - 16bc = 16\det(A)$? REPLY [2 votes]: Another view of the multilinear aspect of the determinant of a square matrix $A$ is the "recursive form" involving minors. If the matrix is $n\times n$, one can compute the determinant with minors stemming from $(n-1)\times (n-1)$ matrices: $$|A| = \sum (-1)^{i+j} a_{ij} M_{ij}\,.$$ The determinant of a $1\times 1$ matrix (a scalar) grows linearly with the factor $\lambda$ applied to the (unique) matrix element.
Hence the determinant of a $2\times 2$ matrix scales as $\lambda^2$: you get one $\lambda$ from the $a_{ij}$ terms (for instance $a$ and $c$ for you), and one from the $M_{ij}$ terms, which derive from dimension $1\times 1$ matrices (for instance $b$ and $d$ for you), as you wrote with $\lambda=4$. For a $3\times 3$ matrix, you get one $\lambda$ for the $a_{ij}$ terms, and $\lambda^2$ for the $M_{ij}$ terms, coming from dimension $2\times 2$ objects. More generally, you get one $\lambda$ for the $a_{ij}$ terms, and $\lambda^{n-1}$ for the $M_{ij}$ terms, which are computed from dimension $(n-1)\times (n-1)$ matrices, hence $\lambda^n$ in total.<|endoftext|> TITLE: Correspondence between maximal ideals and multiplicative functionals of a non unital, commutative Banach algebra. QUESTION [11 upvotes]: Let $\mathcal{A}$ be a (not necessarily unital) commutative Banach algebra, and let $$ M_{\mathcal{A}} = \{ \phi:\mathcal{A} \to \mathbb{C} : \phi \mbox{ is multiplicative and not trivial}\} $$ and $$ \mathrm{Max}(\mathcal{A})=\{ I \lhd \mathcal{A} : I \mbox{ maximal} \}.$$ If $\mathcal{A}$ is unital, it is well known that there is a bijection between $M_{\mathcal{A}}$ and $\mathrm{Max}(\mathcal{A})$ sending each functional to its kernel (the inverse is given by the quotient and the Gelfand-Mazur theorem). My question is, is this still a bijection in the non-unital case? I'm aware that if $\mathcal{A}$ is a commutative C*-algebra it is still a bijection. Also that the restriction gives a bijection from $M_{\tilde{\mathcal{A}}} \setminus \{ \pi:\tilde{\mathcal{A}} \to \mathbb{C} \}$ to $M_{\mathcal{A}}$; but this fact doesn't seem enough to conclude the result. I haven't been able to find a source for this. Thanks in advance. REPLY [5 votes]: Linear-multiplicative functionals (aka characters) on complex Banach algebras are automatically continuous, so their kernels are closed. (You will find a slick proof of this fact on p. 181 of Allan's and Dales' Introduction to Banach Spaces and Algebras.) However, in the non-unital case it may well happen that a maximal ideal is dense. The right notion to look at is the notion of a maximal modular ideal. Such ideals are bijectively associated to (kernels of) characters. You may also be interested in this thread.<|endoftext|> TITLE: What is a good book for math students to learn machine learning in depth? QUESTION [11 upvotes]: I am a math master's student and have done fundamental math courses like probability theory, measure theory, and linear algebra, and I know a little bit about functional analysis. What is a good way for me to learn machine learning in depth? I read the classical text Pattern Recognition and Machine Learning last summer; my impression was that it was very ineffective to read the book chapter by chapter like a mathematical text. The book does not go deep enough for many algorithms and skips too many steps considered too technical by engineers. Is there a machine learning book that maybe does not cover too many topics, but treats each one in depth and takes advantage of math when necessary? It would be great to be able to connect fundamental mathematical objects with machine learning (I am thinking about $L^p$ spaces, Hilbert spaces, etc.).
REPLY [4 votes]: With a background in pure math you will surely enjoy these books: Mohri's Foundations of Machine Learning, which is available for free; Shalev-Shwartz's Understanding Machine Learning: From Theory to Algorithms, also available for free; Devroye's A Probabilistic Theory of Pattern Recognition; Lattimore's Bandit Algorithms. You will also like the publications of Foundations and Trends in Machine Learning.<|endoftext|> TITLE: Dividing a circle into $3$ equal pieces using $2$ parallel lines QUESTION [9 upvotes]: I originally found this question in James Stewart's Calculus, specifically in one of the Problems Plus sections. The question asks how $3$ people can share a pizza while making just $2$ cuts, instead of the usual $3$. The basic idea is to divide a circle into $3$ equal pieces using $2$ parallel lines that are the same distance from the center, as in the picture below. I am having trouble getting started on this question, mainly because I have no idea how to find the areas of the $3$ pieces. I can see that the cuts should not be too close to the edges or the center, as this would make the middle piece too large or too small respectively. But I cannot see much more than this. REPLY [6 votes]: Picture an $x$-axis running at a right angle to the two vertical lines in your picture. Draw the ray from the center to the point on the pizza in the upper right where your vertical line on the right intersects the boundary. Let $\theta$ be the angle that that ray makes with the $x$-axis. Similarly draw a ray from the center to the lower right intersection of a vertical line with the boundary, corresponding to the angle $-\theta$. Then the length of the intersection of that line with the pizza is $2\sin\theta$ (where the radius of the pizza is $1$). The fraction of the pizza between those rays is $2\theta/(2\pi)$ of the whole pizza (since $2\theta$ is the angle between those bounding rays and $2\pi$ is the angle encompassing the whole pizza). The part to the right of that vertical line is what you want to be $1/3$ of the pizza. That part is the whole part between those rays minus the part between those rays that is to the left of that line. The part between those rays to the left of that line is a triangle. The area of a triangle is $\frac 1 2\times\text{base}\times\text{height}$. Call that vertical part of length $2\sin\theta$ the base; then the height is $\cos\theta$. The area of the triangle is therefore $\frac 1 2 (2\sin\theta)(\cos\theta)$. The area to the right of that vertical line is therefore \begin{align} & \left( \frac{2\theta}{2\pi}\times\text{area of the whole pizza} \right) - (\sin\theta\cos\theta) \\[10pt] = {} & \frac\theta\pi\cdot\pi - \sin\theta\cos\theta \\[10pt] = {} & \theta-\sin\theta\cos\theta\qquad = \theta - \frac 1 2 \sin(2\theta). \end{align} You want to make that equal to $1/3$ of the whole pizza, thus to $(1/3)\pi$. You can do that by Newton's method.<|endoftext|> TITLE: Structure and normality of the Galois group of $x^{15}-15 \in \mathbb{Q}[x]$. QUESTION [7 upvotes]: Let $f(x) = x^{15}-15\in \mathbb{Q}[x]$. By Eisenstein's Criterion (using 3 or 5), $f$ is irreducible. Then $L=\mathbb{Q}(\sqrt[15]{15}, \omega)$ is the splitting field of $f$, where $\omega$ is a primitive $15$th root of unity. We then have that the order of the Galois group $G=Gal(L/\mathbb{Q})$ is just $15\cdot \phi(15) =120$ - the product of the degrees of the two extensions. My question is, knowing this, can we determine which Sylow subgroups are normal along with their structure?
Of course the 3- and 5-Sylow subgroups will be isomorphic to $\mathbb{Z}_3$ and $\mathbb{Z}_5$ respectively, but I'm not sure if there's enough info to determine the 2-Sylow subgroup, nor the number of each of them to determine normality. Should I focus on looking at subfields of $L$? REPLY [2 votes]: First let us see that indeed $[L:\Bbb Q]=120$. To ease notation set $a=15$. Consider the subfields $K_1=\Bbb Q(\sqrt[3]a,\omega_3)$ and $K_2=\Bbb Q(\sqrt[5]a,\omega_5).$ As $3\nmid[\Bbb Q(\omega_3):\Bbb Q]$, no root of $x^3-a$ lies in $\Bbb Q(\omega_3)$, thus $x^3-a$ is irreducible over $\Bbb Q(\omega_3)$; hence $[K_1:\Bbb Q]=6.$ Similarly $[K_2:\Bbb Q]=20$, and $x^3-a$ is irreducible over $\Bbb Q(\omega_{15})$. As $[\Bbb Q(\sqrt[3]a,\omega_{15}):\Bbb Q]=24,$ $K_1(\omega_5)=\Bbb Q(\sqrt[3]a,\omega_{15})$ and $[K_1:\Bbb Q]=6$, we obtain $[K_1(\omega_5):K_1]=4.$ Similarly $[K_2(\omega_3):K_2]=2$. Since $5\nmid[\Bbb Q(\sqrt[3]a,\omega_{15}):\Bbb Q],$ $x^5-a$ is irreducible over $\Bbb Q(\sqrt[3]a,\omega_{15}),$ therefore as $L=\Bbb Q(\sqrt[3]a,\omega_{15})(\sqrt[5]a),$ it follows that $[L:K_1]=20$ and $[\Bbb Q(\sqrt[15]a,\omega_{15}):\Bbb Q]=120$. Similarly $[L:K_2]=6$. Furthermore, as $[K_1:\Bbb Q]=6=[L:K_2],$ we obtain that $K_1$ and $K_2$ are linearly disjoint over $\Bbb Q$. However $K_1\cdot K_2=L$, therefore \begin{equation} G_{\Bbb Q}^L\simeq G_{K_1}^L\times G_{K_2}^L, \tag{1} \end{equation} since $K_1$ and $K_2$ are normal over $\Bbb Q$. As $L=K_1(\sqrt[5]{a},\omega_5)$ and $L=K_2(\sqrt[3]{a},\omega_3)$ we may consider $G_{K_1}^L$ and $G_{K_2}^L$ as subgroups of $S_5$ and $S_3$ respectively. We have $|G_{K_2}^L|=[L:K_2]=6$, in consequence $G_{K_2}^L\simeq S_3$. It is easy to see $G_{K_1}^L$ has an element $\sigma$ of order $4$; recall that $L=K_1(\sqrt[5]{a},\omega_5)$. As $|G_{K_1}^L|=[L:K_1]=20$, the subgroup $\langle\sigma\rangle$ is a Sylow $2$-subgroup. Note that $G_{K_1}^L$ is nonabelian: otherwise every subfield of $L$ would be normal over $K_1$, so $K_1(\sqrt[5]a)/K_1$ would be a Galois extension of degree $5$, forcing $\omega_5\in K_1(\sqrt[5]a)$, which is impossible since $[K_1(\omega_5):K_1]=4$. Consequently $\langle\sigma\rangle$ is not normal in $G_{K_1}^L$ (if it were, $G_{K_1}^L$ would be the direct product of its normal Sylow subgroups, hence abelian), and by Sylow's third theorem it has exactly five conjugates. The subgroup of $G_{K_1}^L$ of size $5$ is normal in $G_{K_1}^L$ by the same theorem (the number of Sylow $5$-subgroups divides $4$ and is $\equiv1\pmod5$). Thus we obtain the Sylow subgroups of $G_{\Bbb Q}^L$ using $(1)$: Let $H_3$ be the Sylow $3$-subgroup of $S_3$; then $\{e\}\times H_3$ is the only Sylow $3$-subgroup of $G_{\Bbb Q}^L$, hence normal. Let $H_5$ denote the Sylow $5$-subgroup of $G_{K_1}^L$; then $H_5\times\{e\}$ is the only Sylow $5$-subgroup of $G_{\Bbb Q}^L$, hence normal as well. If $H_2^{1},H_2^{2}$ and $H_2^{3}$ are the Sylow $2$-subgroups of $S_3$ and $H_2^{(1)},\ldots,H_2^{(5)}$ are the Sylow $2$-subgroups of $G_{K_1}^L$, then the fifteen products $H_2^{(j)}\times H_2^{i}$ are the Sylow $2$-subgroups of $G_{\Bbb Q}^L$, each isomorphic to $C_4\times C_2$, and none of them is normal.<|endoftext|> TITLE: how far the distribution from the uniform distribution QUESTION [5 upvotes]: I have two discrete probability distributions $P$ and $Q$, where $P=(p_1,...,p_n)$ and $Q=(q_1,...,q_n)$; in addition I have the uniform distribution $U=(\frac{1}{n},...,\frac{1}{n})$. The question is how to measure which distribution, $P$ or $Q$, is closer to the uniform distribution. I am not sure if I can use Kullback–Leibler divergence, because it is not a "correct" distance. Also, I don't know if I can use entropy. REPLY [5 votes]: What prevents you from using Kullback-Leibler divergence (KL divergence) as a measure of distance from the uniform distribution? I do agree with you on the fact that KL divergence is not a true measure of "distance" because it does not satisfy (a) symmetry, and (b) the triangle inequality. Nonetheless, it can serve as a criterion for measuring how far/close a distribution is to the uniform distribution.
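Before the formal definition, here is a minimal numerical sketch of this idea (an illustration only, assuming Python with NumPy; the two distributions $P$ and $Q$ below are made-up examples, not data from the question):

```python
import numpy as np

def kl_from_uniform_bits(p):
    """D(P || U) in bits, where U is uniform on len(p) outcomes.
    Uses the convention 0 * log 0 = 0, so zero entries drop out."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] * n)))

P = [0.30, 0.25, 0.25, 0.20]   # nearly uniform
Q = [0.70, 0.10, 0.10, 0.10]   # heavily skewed

print(kl_from_uniform_bits(P))  # ~0.015 bits: close to uniform
print(kl_from_uniform_bits(Q))  # ~0.643 bits: far from uniform
```

The computation below shows why this reduces to comparing entropies.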
Suppose $\mathcal{X}=\{x_{1},\ldots,x_{n}\}$ is a finite alphabet, and $P=\{p_{1},\ldots,p_{n}\}$ and $U=\{1/n,\ldots,1/n\}$ are two distributions on $\mathcal{X}$, with $U$ being the uniform distribution. Then, the KL-divergence between $P$ and $U$, denoted as $D(P||U)$, is defined to be the following quantity: \begin{align} D(P||U)&=\sum\limits_{i=1}^{n}P(x_{i})\log_{2}\left(\frac{P(x_{i})}{U(x_{i})}\right)\\ &=\sum\limits_{i=1}^{n}p_{i}\log_{2}\left(\frac{p_{i}}{1/n}\right)\\ &=\log_{2}\left(n\right)+\sum\limits_{i=1}^{n}p_{i}\log_{2}\left({p_{i}}\right)\\ &=\log_{2}(n)-H(P), \end{align} where $H(P)=\sum\limits_{i=1}^{n}p_{i}\log_{2}\left(\frac{1}{p_{i}}\right)$ is the (Shannon) entropy of the distribution $P$. Since $D(\cdot||\cdot)\geq 0$, it is clear that the uniform distribution is the most "random" distribution that can be assigned to an alphabet, since its entropy is equal to $\log_{2}(n)$ bits. If there is another distribution $Q=\{q_{1},\ldots,q_{n}\}$ defined on $\mathcal{X}$, and if $D(P||U)<D(Q||U)$, then $H(P)>H(Q)$, and thus, $P$ is more "random" than $Q$ (which makes sense since $P$ is closer to the uniform distribution than $Q$). Thus, the closer a distribution is to the uniform distribution (closer in the sense of KL divergence), the more "random" it is.<|endoftext|> TITLE: Prove that $\sqrt{a^2+3b^2}+\sqrt{b^2+3c^2}+\sqrt{c^2+3a^2}\geq6$ if $(a+b+c)^2(a^2+b^2+c^2)=27$ QUESTION [29 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers such that $(a+b+c)^2(a^2+b^2+c^2)=27$. Prove that: $$\sqrt{a^2+3b^2}+\sqrt{b^2+3c^2}+\sqrt{c^2+3a^2}\geq6$$ A big problem occurs around $(a,b,c)=(1.6185...,0.71686...,0.4926...)$. In this case we get $\sum\limits_{cyc}\sqrt{a^2+3b^2}-6=0.000563...$. My attempt. Let $a^2+3b^2=4x^2$, $b^2+3c^2=4y^2$ and $c^2+3a^2=4z^2$, where $x$, $y$ and $z$ are non-negative. Hence, we need to prove that $$\sum\limits_{cyc}\sqrt{x^2-3y^2+9z^2}\leq\frac{\sqrt7(x+y+z)^2}{\sqrt{3(x^2+y^2+z^2)}}$$ Let $k$ and $m$ be non-negative numbers for which $x-ky+mz>0$, $y-kz+mx>0$, $z-kx+my>0$ and $1-k+m>0$. By C-S, $\left(\sum\limits_{cyc}\sqrt{x^2-3y^2+9z^2}\right)^2\leq(1-k+m)(x+y+z)\sum\limits_{cyc}\frac{x^2-3y^2+9z^2}{x-ky+mz}$. Thus, it remains to prove that $$(1-k+m)\sum\limits_{cyc}\frac{x^2-3y^2+9z^2}{x-ky+mz}\leq\frac{7(x+y+z)^3}{3(x^2+y^2+z^2)}$$ It's a sixth-degree inequality, but I didn't find values of $k$ and $m$ such that the last inequality holds. In this way we can prove that $\sum\limits_{cyc}\sqrt{a^2+2b^2}\geq3\sqrt3$ is true, but it's not comforting. Also I tried to use Hölder, but without success. Thank you! REPLY [3 votes]: There seem to be bugs in the segment after the mark "***"; it needs checking. When I saw the form $\sqrt{a^2+3b^2}$ I thought of the absolute value of a complex number. So let $$u=a+\sqrt{3}bi$$ $$v=b+\sqrt{3}ci$$ $$w=c+\sqrt{3}ai$$ And now what you want to prove becomes $$|u|+|v|+|w|\geq6$$ $$u+v+w=(1+\sqrt{3}i)(a+b+c)$$ $$|u|^2+|v|^2+|w|^2=4(a^2+b^2+c^2)$$ $$(u+v+w)^2(|u|^2+|v|^2+|w|^2)$$ $$=(1+\sqrt{3}i)^2(a+b+c)^2\cdot4(a^2+b^2+c^2)$$ $$=4(1+\sqrt{3}i)^2(a+b+c)^2(a^2+b^2+c^2)$$ $$=4(1+\sqrt{3}i)^2\cdot27$$ $$|u+v+w|^2(|u|^2+|v|^2+|w|^2)=|4(1+\sqrt{3}i)^2\cdot27|=4\cdot27\cdot4$$ Now I thought I should separate $|u+v+w|$ into $|u|+|v|+|w|$, so that all the elements in the expression would be the independent quantities $|u|$, $|v|$, $|w|$, and its form would be closer to the inequality we want to prove.
So I used the triangle inequality, $$|u|+|v|+|w|\geq|u+v|+|w|\geq|u+v+w|$$ $$(|u|+|v|+|w|)^2(|u|^2+|v|^2+|w|^2)\geq|u+v+w|^2(|u|^2+|v|^2+|w|^2)=4\cdot27\cdot4$$ Let $x=|u|,\ y=|v|,\ z=|w|$. Then the problem becomes: $$\mathrm{if\ \ }(x+y+z)^2(x^2+y^2+z^2)\geq4\cdot27\cdot4$$ $$\mathrm{prove\ that\ \ }x+y+z\geq6$$ Proof: let $k$ be a number so that $x+y+z\geq k\geq0$ is always true. A known formula: $$(x-y)^2+(y-z)^2+(z-x)^2=3(x^2+y^2+z^2)-(x+y+z)^2\geq0$$ ***Then $$3(x^2+y^2+z^2)\geq(x+y+z)^2\geq k^2$$ $$\implies (x+y+z)^2(x^2+y^2+z^2)\geq k^2\cdot\frac{k^2}{3}=\frac{k^4}{3}$$ But it's already known that $(x+y+z)^2(x^2+y^2+z^2)\geq4\cdot27\cdot4$ has to be true. So to let $$(x+y+z)^2(x^2+y^2+z^2)\geq\frac{k^4}{3}$$ be always true, $$\frac{k^4}{3}\leq4\cdot27\cdot4$$ $$k\leq6$$ This proof doesn't need $a,b,c\geq0$. It just needs them to be real numbers.<|endoftext|> TITLE: Sections of tensor bundle are tensor product of sections QUESTION [11 upvotes]: Given $E,F$ vector bundles over a manifold $M$, I would like to know a proof of $\Gamma(E\otimes F) = \Gamma(E) \otimes_{C^\infty(M)} \Gamma(F)$, where $\Gamma$ denotes the smooth sections over $M$. Thanks REPLY [6 votes]: The statement is trivially true if $E$ is a trivial vector bundle. Note also that if the statement is true for $E$, then it is also true for any direct summand of $E$. The conclusion follows from the fact that any vector bundle on a manifold is a subbundle of a trivial vector bundle.<|endoftext|> TITLE: Negation of definition of continuity QUESTION [11 upvotes]: This should be a very easy question but it might just be that I'm confusing myself. So we have the definition of a function $f$ on $S$ being continuous at $x_0$: For any $\epsilon>0$, there exists $\delta>0$ such that: whenever $|x-x_0|<\delta$, we have $|f(x)-f(x_0)|<\epsilon$. And I assume the negation is: There exists $\epsilon>0$ such that for all $\delta>0$, $|x-x_0|<\delta$ yet $|f(x)-f(x_0)|\ge \epsilon$. Now I want to show that the function $f(x)=\sin(\frac{1}{x})$ together with $f(0)=0$ cannot be made into a continuous function at $x=0$. So I need to show that there exists $\epsilon>0$ such that for all $\delta>0$, $|x|<\delta$ yet $|f(x)|\ge\epsilon$. Let $\epsilon = \frac{1}{2}$. Then no matter what $\delta$ we choose, let $|x|<\frac{1}{2}$. It is certainly possible that $|f(x)|\ge \frac{1}{2}$, because, well, $\frac{1}{x}$ can really take on arbitrarily large values as $x$ gets small. Now, what confuses me is that, as $x$ gets small, $f(x)$ can certainly be greater than $\frac{1}{2}$ infinitely often, but it will be less than that infinitely often, too. But I suppose it doesn't really matter. So I think there's something wrong with my negation but I couldn't figure out where. Update: The correct version can be found here. Watch for Lemma 4.6 REPLY [10 votes]: The negation is: There exists $\epsilon>0$ such that for all $\delta>0$, there is an $x_\delta$ such that $|x_\delta-x_0|<\delta$ yet $|f(x_\delta)-f(x_0)|\geq\epsilon$ REPLY [7 votes]: Your negation is correct; you should specify though that what you're defining is continuity at the point $x_0$, which is distinct from continuity in the whole domain $S$. Your choice of $\epsilon=1/2$ is fine. However you need to do some more work to show that $f$ can't be continuous. Suppose we try to make $f$ into a continuous function by assigning $f(0)=y_0$. Take any $\delta>0$. Case 1: Suppose $y_0<0$. Let $x=1/(\pi/2+2\pi N)$ where $N$ is chosen large enough so $|x|<\delta$.
Then $|f(x)-f(x_0)|=|1-y_0|\geq1>\epsilon$, which proves discontinuity. Case 2: Suppose $y_0\geq0$. Let $x=1/(-\pi/2+2\pi N)$ where $N$ is chosen large enough so $|x|<\delta$. Then $|f(x)-f(x_0)|=|-1-y_0|\geq1>\epsilon$, which again proves discontinuity. Thus we conclude there's no choice of $y_0=f(0)$ which makes $f$ continuous at zero. REPLY [2 votes]: The negation is: there exists $\epsilon>0$ such that for any $\delta>0$ we can find an $x$ such that $|x-x_0|<\delta$ and $|f(x)-f(x_0)| \ge \epsilon$. And you have just proved this.<|endoftext|> TITLE: Distribution of sums of random variables over finite field QUESTION [5 upvotes]: Let $q$ be an odd prime, $X_1, \ldots, X_n$ be i.i.d. random variables over $\mathbb Z_q$, and $0 < p < 1$ be some constant. Let $X_i$ take on the value $0$ with probability $p$, and the remaining probability mass be distributed uniformly among the other $q - 1$ values, i.e. $$ \begin{align*} \Pr[X_i = 0] &= p \\ \Pr[X_i = j] &= \frac{1 - p}{q - 1}\text{ for }j \ne 0\text{.} \end{align*} $$ What can one say about the distribution of $\sum\nolimits_{i = 1}^{n} X_i \pmod q$? I tried doing things inductively, but couldn't arrive at a pattern. I can tell that there are $q + 1$ ways to obtain each number $k \in \mathbb Z_q$ in $X_1 + X_2$, so it shouldn't be too hard to argue that $\Pr[X_1 + X_2 = j]$ for $j \ne 0$ remains distributed uniformly among some probability mass, but what probability mass exactly gets left for them? REPLY [4 votes]: We can first solve the easier problem of finding the probability $p_k$ that $k$ uniformly random non-zero elements of $\mathbb Z_q$ add up to $0$. This probability satisfies the recurrence $$ p_{k+1}=\frac1{q-1}(1-p_k) $$ with the initial value $p_0=1$, with solution $$ p_k=\frac{1-(1-q)^{1-k}}q\;. $$ You're adding $k$ non-zero elements with probability $\binom nk(1-p)^kp^{n-k}$, so the total probability to get $0$ is \begin{align} \sum_{k=0}^n\binom nk(1-p)^kp^{n-k}\cdot\frac{1-(1-q)^{1-k}}q &= \frac1q\left(1-(1-q)^{1-n}(1-p+p(1-q))^n\right) \\ &= \frac1q\left(1+(q-1)\left(\frac{pq-1}{q-1}\right)^n\right)\;, \end{align} and the remaining probability is distributed uniformly among the non-zero elements. Alternatively, you could directly solve the recurrence $$ a_{n+1}=pa_n+\frac{1-p}{q-1}(1-a_n)\;, $$ which tracks the probability $a_n$ of having a zero sum after $n$ summands: with probability $p$ a zero summand is added to a zero sum, and with probability $\frac{1-p}{q-1}$ the right non-zero summand is added to a non-zero sum to make it zero.<|endoftext|> TITLE: $e^{\left(\pi^{(e^\pi)}\right)}\;$ or $\;\pi^{\left(e^{(\pi^e)}\right)}$. Which one is greater than the other? QUESTION [16 upvotes]: $\newcommand{\bigxl}[1]{\mathopen{\displaystyle#1}} \newcommand{\bigxr}[1]{\mathclose{\displaystyle#1}} $ $$\large e^{\bigxl(\pi^{(e^\pi)}\bigxr)}\quad\text{or}\quad\pi^{\bigxl(e^{(\pi^e)}\bigxr)}$$ Which one is greater? Effort. I know that $$e^\pi\ge \pi^e$$ Then $$\pi^{(e^\pi)}\ge e^{(\pi^e)}$$ But I can't say $$e^{\bigxl(\pi^{(e^\pi)}\bigxr)}\le \pi^{\bigxl(e^{(\pi^e)}\bigxr)}$$ or $$e^{\bigxl(\pi^{(e^\pi)}\bigxr)}\ge \pi^{\bigxl(e^{(\pi^e)}\bigxr)}$$ REPLY [4 votes]: The function $x^{\frac{1}{x}}$ is strictly decreasing for $x>e$, and the maximum over $x>0$ is $e^{\frac{1}{e}}$, attained at $x=e$.
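Before the algebraic chain, a quick numerical sanity check (a minimal sketch, assuming only Python's standard math module): comparing $\ln\ln$ of both towers keeps the numbers small instead of forming the astronomically large values themselves.

```python
from math import e, pi, log

# X = e**(pi**(e**pi))  gives  log X = pi**(e**pi),
# hence log(log X) = (e**pi) * log(pi).
# Y = pi**(e**(pi**e)) gives  log Y = (e**(pi**e)) * log(pi),
# hence log(log Y) = pi**e + log(log(pi)).
loglogX = e**pi * log(pi)        # about 26.49
loglogY = pi**e + log(log(pi))   # about 22.59
print(loglogX > loglogY)         # True, so e^(pi^(e^pi)) is the larger one
```

With that confirmed, the monotonicity just stated gives the same conclusion algebraically.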
Therefore: $e\le a<b$ $\Rightarrow$ $a^{\frac{1}{a}}>b^{\frac{1}{b}}$ $\Rightarrow$ $a^b>b^a$ $\Rightarrow$ $a^b-1>b^a-1$ $\Rightarrow$ $(a^b-1)\ln b>(b^a-1)\ln a$ $\Rightarrow$ $b^{a^b-1}>a^{b^a-1}$ $\Rightarrow$ $b^{a^b-1}\frac{\ln a}{a}>a^{b^a-1}\frac{\ln b}{b}$ $\Rightarrow$ $a^{b^{a^b}}>b^{a^{b^a}}$. Here: $a:=e$ and $b:=\pi$, so $e^{\left(\pi^{(e^\pi)}\right)}>\pi^{\left(e^{(\pi^e)}\right)}$.<|endoftext|> TITLE: Why are $L$-functions a big deal? QUESTION [26 upvotes]: I've been studying modular forms this semester and we did a lot of calculations of $L$-functions, e.g. $L$-functions of Dirichlet-characters and $L$-functions of cusp-forms. But I somehow don't see why they are considered a big deal. To me it sounds more like a fun fact to say: "You know, the Riemann-$\zeta$ has analytic continuation.", and I don't even know what to say about $L$-functions of cusp-forms. So why are $L$-functions such a big thing in automorphic forms and analytic number theory? Thanks! REPLY [19 votes]: There's a lot one could say, but I'll try to be brief. Roughly the idea (just like with the zeta functions) is that L-functions provide a way to analytically study arithmetic objects. Specifically a lot of interesting data is encoded in the location of the zeroes and poles of L-functions, and because L-functions are analytic objects, you can now use analysis to study arithmetic. Here are some examples: The fact that $\zeta(s)$ has a pole at $s=1$ implies the infinitude of primes. (added) The Riemann hypothesis and its generalizations, which are about the location of nontrivial zeroes of zeta-/L-functions, have lots of implications, such as refined information about the distribution of prime numbers. The fact that Dirichlet L-functions do not have a zero at $s=1$ implies there are infinitely many primes in arithmetic progressions. Dirichlet introduced the notion of L-functions to prove this fact. If $E : y^2 = x^3+ax+b$ is an elliptic curve and its $L$-function $L(s,E)$ (which, by modularity, is also the $L$-function of a weight-$2$ modular form) is nonzero at the central value $s=1$, then $y^2=x^3+ax+b$ has only finitely many rational solutions. This is the known direction of the Birch and Swinnerton-Dyer conjecture. (added) In addition to knowing just the locations of zeroes and poles of L-functions, the actual values of L-functions at special points contain further arithmetic information. For instance, if $\chi_K$ is the quadratic Dirichlet character associated to an imaginary quadratic field $K$, then the class number formula says $L(1,\chi_K)$ is essentially the class number of $K$. Similarly, the value of $L(1,E)$ in the previous example is conjecturally expressed in terms of the size of the Tate-Shafarevich group of $E$ and the number of rational points on $E$. As mentioned in the comments, $L$-functions are also a convenient tool to associate different kinds of objects to each other, e.g., elliptic curves and modular forms, but are not strictly needed to do this. Nice $L$-functions will have at least meromorphic continuation to $\mathbb C$, Euler products, and certain bounds on their growth. For instance, L-functions of eigencusp forms and Dirichlet L-functions. These properties make $L$-functions nice analytic objects to work with. In particular, the Euler product provides a way to study global objects from local data (one finite set of data for each prime number $p$). (added) See also this MathOverflow question.<|endoftext|> TITLE: What is the exact role of logic in the foundations of mathematics?
QUESTION [7 upvotes]: At high school and in the beginning of my university studies, I used to believe the following "equation": Foundations of mathematics = Logic + Set Theory. Of course, this "equation" does not hold absolutely, the most notorious example being category theory. However, examples like this usually question the second term of the "right hand side" only. At least to my knowledge, almost nobody seems to challenge the thesis that logic indeed forms one of the foundations of mathematics. However, my belief in this thesis was shattered when I encountered the usual "definition" of the semantics of first-order predicate logic. This definition is formulated in the meta-language – for instance, the semantics of the universal quantifier is defined by declaring that $\forall x P(x)$ is true iff $P(x)$ is true for all $x$. Obviously, this can never be formalized, as any attempt at a formal definition would require already defined semantics of higher order logic, and so on. So the "definition" of first-order semantics is stated utterly on the intuitive level. Nobody usually bothers to explain why this is OK. I have become slightly more comfortable with this after reading comments to this question addressing the issue. It seems that it simply is not the aim of logic to make these matters formal. On the other hand, logic simply studies formal reasoning "from above", sometimes using slightly informal definitions or arguments (as in any other part of mathematics). However, there are several important points that I am not comfortable with so far: Logic is usually said to be a foundation of mathematics because it makes mathematical reasoning formal. However, as demonstrated above, some parts of logic are highly informal themselves. Is it possible that the real foundation of mathematics is not logic as a whole, but only the formal systems that are studied by logic (e.g., predicate logics viewed as collections of axioms and inference rules, etc.)? Is it true that in higher-level mathematics, it can only be formally established that a given statement is provable, while saying that a statement is true requires a resort to intuition (because of the above reasons)? Why does one even speak about the definition of first-order semantics? Would it not be more precise to speak about its description or something similar? In a field as formal and low-level as logic, I find this a bit disturbing and confusing. Are my views correct, or am I completely wrong? REPLY [2 votes]: I think that the problem lies in the multiple meanings the word logic assumes in mathematics and more generally in the deductive sciences. If I understood correctly, I believe that you are right on the first point. If by logic you mean a formal system of inference rules (for instance first-order logic), then mathematics, like any other deductive science, is founded on such a logic, because it needs these inference rules to prove mathematical statements (i.e. theorems). To be fairly specific, one needs the syntactic part of logic: a description of what formulas are and what the operations/inference rules for deriving theorems are. The semantics is not fundamental for the purpose of foundations (at least in my opinion), except for a psychological reason: as an intuitive motivation for justifying the validity of the inference rules. On the second point, if by true you mean the intuitive concept, then I think you are right again. Mathematics cannot tell what it means for a statement to be true in the intuitive way; that kind of problem belongs to philosophy ...
I think. On the other hand (and this addresses point three in your question), mathematics makes it possible to provide a formal concept of truth, which basically is a neat trick that allows one to reduce proofs of properties of some kinds of objects (structures like groups, topological spaces, etc.) to theorems in some meta-theory. This is what model theory basically is all about, and the usefulness of its results is the reason why we also need a formal definition of the semantics of first-order logic (inside set theory). Hope this helps; if not, feel free to ask for any details or specifications.<|endoftext|> TITLE: Let $S$ be a diagonalizable matrix and $S+5T=I$. Then prove that $T$ is also diagonalizable. QUESTION [13 upvotes]: My solution: Since $S$ is diagonalizable, we can write $S=P^{-1}DP$, where $P$ is an invertible matrix and $D$ is a diagonal matrix. Now $5T=I-S=P^{-1}P-P^{-1}DP=P^{-1}(I-D)P$. So $T=P^{-1}\frac{1}{5}(I-D)P$. Since $I-D$ is also a diagonal matrix, $T$ is diagonalizable. Is my proof correct? Can it be done in another way? Thanks. REPLY [2 votes]: Proposition 1. If $F$ is any field and $A\in M_{n}(F)$ is a diagonalizable matrix then, for every polynomial $f(x)$, $f(A)\in M_{n}(F)$ is also diagonalizable. Proof. We have $A=P^{-1}DP$ for some invertible matrix $P$ and diagonal matrix $D$. Then $f(A)=P^{-1}f(D)P$. Since $f(D)$ is diagonal, $f(A)$ is diagonalizable. Corollary. If the minimal polynomial of a matrix $A\in M_{n}(F)$ is a product of distinct linear factors then, for every polynomial $f(x)$, the minimal polynomial of $f(A)$ is also a product of distinct linear factors. Proposition 2. If $F$ is any field and $A\in M_{n}(F)$ is a triangulable matrix then, for every polynomial $f(x)$, $f(A)\in M_{n}(F)$ is also triangulable. Proof. We have $A=P^{-1}\Delta P$ for some invertible matrix $P$ and (upper or lower) triangular matrix $\Delta$. Then $f(A)=P^{-1}f(\Delta)P$. Since $f(\Delta)$ is (upper or lower) triangular, $f(A)$ is triangulable. Corollary. If the minimal polynomial of a matrix $A\in M_{n}(F)$ factors completely over $F$ then, for every polynomial $f(x)$, the minimal polynomial of $f(A)$ also factors completely over $F$.<|endoftext|> TITLE: Name of this famous question? QUESTION [30 upvotes]: I think that this question is well known but I cannot remember its name, and now I am interested in it and wanted to look it up, but cannot find anything just based on a description. If anyone knows the name or can find it (or anything similar), that would be very helpful. The question is as follows: You have a rigid rod of unit length, and some curve in 3d space, which is linear for all but a finite portion of its length. We say the rod is "on the curve" if each of its endpoints is on the curve. Call the "ends of the curve" the 2 linear portions that are infinite. Prove or disprove that for all possible such curves, we can move the rod from one end of the curve to the other while staying on the curve the whole time. REPLY [25 votes]: The problem seems to be surprisingly dependent on the precise assumptions we make. Consider, for example, this curve $ \kern-2em\displaystyle \gamma(t) = \begin{cases} \langle \max(t,-1),t \rangle & t \le 0 \\ \langle t,\frac34 t \sin(5 \log(t)) \rangle & 0 < t \le 1 \\ \langle \cos(t-1),\sin(t-1) \rangle & 1 \le t \le 1+\pi \\ \langle -1, t-1-\pi \rangle & 1+\pi \le t \end{cases} $ This is a continuous curve with a Lipschitz continuous parameterization (the derivative of $t\sin(\log(t))$ is bounded) and is therefore rectifiable.
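For readers who want to see the wiggles, here is a short plotting sketch of this parameterization (a convenience only, assuming NumPy and Matplotlib; the dashed lines are the tangents $y=\pm\frac34x$ used in the argument below):

```python
import numpy as np
import matplotlib.pyplot as plt

def gamma(t):
    # piecewise definition of the curve above
    if t <= 0:
        return (max(t, -1.0), t)
    if t <= 1:
        return (t, 0.75 * t * np.sin(5 * np.log(t)))
    if t <= 1 + np.pi:
        return (np.cos(t - 1), np.sin(t - 1))
    return (-1.0, t - 1 - np.pi)

ts = np.linspace(-2, 3 + np.pi, 20001)
xy = np.array([gamma(t) for t in ts])
plt.plot(xy[:, 0], xy[:, 1], lw=0.7)
plt.plot([0, 1], [0, 0.75], 'r--')    # upper red line y = (3/4) x
plt.plot([0, 1], [0, -0.75], 'r--')   # lower red line y = -(3/4) x
plt.gca().set_aspect('equal')
plt.show()
```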
And the ends are in opposite directions as required. Yet, the ladder cannot be moved continuously along the curve. Imagine that it moves from high parameter values to low ones. Whenever the front end is at one of the points where the curve touches the upper red line ($y=\frac34x$), the back end will need to be around point $A$ on the circular arc, since that is the only place on the curve behind the front end that has the right distance from it. ($A$ is the center of the radius-$1$ circle that has $y=\frac34x$ as tangent at the origin). Conversely, when the front end is at one of the points where the curve touches the lower red line, the back end must be around $B$. Since the front oscillates between the two red lines infinitely many times before it reaches the origin, so must the back end oscillate between $A$ and $B$. But that means that the position of the back end cannot tend to a limit while the front end approaches $(0,0)$, so a continuous movement of the ladder is impossible. (The factors of $\frac34$ and $5$ are just there to allow the diagram to show enough of the wiggles to make it clear what is going on. It would work just as well with $t\sin(\log t)$ as with $\frac 34t\sin(5\log t)$, only not as visibly. It is important to keep the largest of the wiggles from extending beyond the unit circle, though -- otherwise the argument about where the back end of the ladder has to be won't work). Requiring the curve to be (piecewise) smooth will probably avoid this kind of trouble.<|endoftext|> TITLE: Compute a diagonalizable matrix close in matrix exponential QUESTION [8 upvotes]: It is known that for any matrix $A$, one can perturb $A$ slightly so that the resulting $A(\epsilon)$ is diagonalizable. I am wondering whether for any matrix $A$ and $\epsilon>0$, there is an algorithm to compute a diagonalizable matrix $A(\epsilon)$ such that $$\|e^A - e^{A(\epsilon)}\|_2\leq \epsilon$$ Here $e^A$ is the matrix exponential, and $\|\cdot\|_2$ is the 2-norm. REPLY [2 votes]: A matrix $A$ is diagonalizable over the field of complex numbers if all eigenvalues of $A$ are distinct. To see this, consider the Jordan decomposition $A = VJV^{-1}$, where $J$ is a complex matrix in Jordan form. If all eigenvalues of $A$ are distinct, then $J$ contains only blocks of size $1\times 1$, hence $J$ is diagonal. In order to compute a perturbation that makes $A$ diagonalizable, consider the complex Schur factorization of $A$, i.e. $A = UTU'$, where $U$ is unitary and $T$ is upper triangular with complex entries, and $U'$ denotes conjugate transposition. The diagonal of $T$ contains the eigenvalues of $A$. Let $\tilde{A} = U(T+D)U'$, where $D$ is a diagonal matrix such that all diagonal elements of $T+D$ are distinct and $\|D\|_2\leq \mu$ for some $0 < \mu \leq 1$. Such a matrix can be easily constructed. Then $\tilde{A} = A + UDU'\equiv A + P$, and $\|P\|_2 = \|D\|_2 \leq \mu$. The matrix $\tilde{A}$ is diagonalizable. For any two matrices $X$, $Y$ and any matrix norm: $$\|e^{X+Y} - e^X\| \leq \|Y\|e^{\|X\|}e^{\|Y\|}$$ Thus $$\|e^{A+P} - e^A\|_2 \leq \|P\|_2e^{\|A\|_2}e^{\|P\|_2} \leq \mu e^{\|A\|_2}e^{\mu} \leq \mu e^{\|A\|_2 + 1}$$ If we take $\mu = \min(1,\epsilon / e^{\|A\|_2 + 1})$, then $\|e^{A+P} - e^A\|_2\leq \epsilon$. For a real matrix $A$ with complex eigenvalues it is not possible to find a small perturbation $P$ such that $A + P$ is diagonalizable over the field of real numbers, but it is possible if $A$ has only real eigenvalues.
Then this construction can be repeated using the real Schur factorization.<|endoftext|> TITLE: Necessary and sufficient condition for branch points on a Riemann surface. QUESTION [8 upvotes]: I've been reading out of a book by V.B. Alekseev about Abel's theorem on the insolubility of the quintic, and I'm a bit troubled by its presentation of Riemann surfaces. My question is as follows: Suppose $X$ is the Riemann surface defined by the zero locus of the polynomial $P \in \mathbb{C}[z, w]$. I'm confused as to the nature of the branch points and how to find them. I've heard people say that the branch points occur when $\frac{\partial P}{\partial w}$ vanishes. This criterion breaks down for $P(z, w) = w^{2} - z^{2}$ at the origin, where the whole gradient vanishes. I'm wondering if there's a precise condition about which points are branch points versus which points are singularities. Also I'm quite confused about what happens when $P$ is reducible. It seems that if $P$ contains a square factor then it will always share a root with $\frac{\partial P}{\partial w}$, so this vanishes for infinitely many $z$. I'm sorry if this is quite vague, but this book never seems to indicate how to find branch points and I'm just looking for guidance as to how to do it in general. REPLY [2 votes]: Your book seems a bit loose with defining what a branch point is. I'll give a vague description/intuition, then I'll give you a reference to make everything rigorous. Here is the simplest example of a branch point. Consider the function $f: \mathbb C\to \mathbb C$ given by $f(z)=z^2$. $0$ (in the image copy of $\mathbb C$) is a branch point because it is "hit with multiplicity 2". $4$ (in the image copy of $\mathbb C$) is not a branch point because $4$ is hit by $2$ and $-2$, but at $2$ and $-2$, $f$ "has multiplicity 1". Being a branch point depends on a map between two spaces. Here is a connection to the definition that your book gives (changing sheets of a multi-valued function). Say we want to define a square root function on $\mathbb C$. Clearly this is multi-valued, so we somehow want to capture this behavior. Define $X = \{ (x, y) \in \mathbb C^2 \mid y^2 = x \}$. Take $f: X \to \mathbb C$ by sending $(x, y) \to x$. While I've set up all this in $\mathbb C$, let's just think about the picture in $\mathbb R^2$. So $X$ is just a sideways parabola and $f$ is just projection to the $x$ axis. Note any function $g$ on an open set $U \subset \mathbb C$ into $X$ such that $f \circ g$ is the identity on $U$ is a "local square root function". So if I'm at $x=1$, I can choose the positive square root or the negative square root. The upper half and lower half are thought of as the sheets of the function. At $0$ these meet ("with multiplicity 2"), and so according to your book's definition $0$ is a branch point. Everything I've said here is vague and imprecise, so I'll refer you to Rick Miranda's book Algebraic Curves and Riemann Surfaces. Specifically Chapter 2, Section 4. He has explicit exercises at the end to find ramification and branch points, and you can check your answer with the Hurwitz Formula. Hope that helps! PS. We take the polynomial $P$ to be irreducible so that the zero set of $P$ is a complex manifold, and the spaces I mentioned above need to be complex manifolds.<|endoftext|> TITLE: Is diameter of a set a measure? QUESTION [8 upvotes]: Suppose the diameter of a nonempty set $A$ is defined as $$\sigma(A) := \sup_{x,y \in A} d(x,y)$$ where $d(x,y)$ is a metric. Is $\sigma(\cdot)$ a 'measure'?
I.e., how do I prove the countable additivity for this particular case? REPLY [4 votes]: Let $A$ be a closed disk of diameter 1, and let $B$ be a closed disk of diameter 2, having the same center as $A$. Note that $A \subseteq B$. Now, $\sigma(B) = 2$, yet $\sigma(B \setminus A) + \sigma(A) = 2 + 1 = 3$.<|endoftext|> TITLE: Have I found an error in Williams' "Martingales" exercises? QUESTION [7 upvotes]: I want to solve Problem EG.2. from Probability with Martingales: Planet X is a ball with centre O. Three spaceships A, B and C land at random on its surface, their positions being independent and each uniformly distributed on the surface. Spaceships A and B can communicate directly by radio if $\measuredangle AOB < 90^\circ$. Show that the probability that they can keep in touch (with, for example, A communicating with B via C if necessary) is $(\pi + 2)/(4\pi)$. I believe I have shown that the probability is in fact $(2 \pi + 1) / (4 \pi)$. To begin, we denote by $AB$ (resp. $AC$, $BC$) the event that $A$ and $B$ are within $90^\circ$ of each other. We then observe that the probability that communication is possible is given by: $$P(comms) = P(AB \cup (AC \cap BC)),$$ which after some basic manipulation may be re-written as $$P(comms) = P(AB) + P(AC \cap BC) - P(AB \cap AC \cap BC).$$ We then proceed by computing each term on the right-hand side. (A warning that my argument and notation are somewhat loose, and I would appreciate any ideas about tightening them.) For $P(AB)$, we have $$P(AB) = \iint_S P(AB | A = x) \frac{1}{|S|} dS(x),$$ where $S$ denotes the planet's surface and $|S|$ its surface area. By symmetry, we see $P(AB|A) = 1/2$, so that $P(AB) = (1/2)(1/|S|)|S|=1/2$. Next, $$P(AC \cap BC) = \iint_S P(AC|C = x) P(BC|C = x) \frac{1}{|S|} dS(x).$$ By the same argument as above, $P(AC|C) = P(BC|C) = 1/2$, and hence $P(AC \cap BC) = 1/4$. Finally, we have $$P(AB \cap AC \cap BC | A=x) = \iint_{S/x} P(AC \cap BC | A = x, B = y) \frac{1}{2|S|} dS(y),$$ where $S/x$ denotes the hemisphere of $S$ centred on $x$. Now, $P(AC \cap BC|A=x,B=y)$ corresponds to the fraction of $S$ covered by the intersection of $S/x$ and $S/y$. If we define a spherical coordinate system on $S$ with $x$ corresponding to polar angle $\theta = 0$, then this fraction is given by $1/2 - \theta(y)/(2 \pi)$ for $0 \leq \theta(y) \leq \pi/2$ (i.e. $\theta(y)$ in $S/x$), and hence $$P(AB \cap AC \cap BC | A=x) = \frac{1}{4} - \frac{1}{4\pi}$$ (where I skip performing the double integral above explicitly for brevity). All up, we then obtain $$P(comms) = \frac{2 \pi + 1}{4 \pi},$$ as stated above, which disagrees with Williams' stated result, although no mistake is obvious to me. Moreover, I have written a Monte Carlo simulator in Python that seems to confirm my result (and which I can provide if anyone would find it useful). In summary, I ask the following: (1) Is Williams correct, or am I?; and (2) How can my argument be made tighter, more rigorous, or more elegant? REPLY [6 votes]: The problem is ambiguously worded. You sensibly took "they" to refer to spaceships A and B, the subject of the previous sentence, but it's meant to refer to all three spaceships, the subject of the sentence before that. Your calculations are correct, and for the probability that all three spaceships are in contact they yield \begin{align} P(AB\cap BC\cap CA)+3P(AB\cap BC\cap\overline{CA}) &= 3P(AB\cap BC)-2P(AB\cap BC\cap CA) \\ &= 3\cdot\frac14-2\cdot\left(\frac14-\frac1{4\pi}\right) \\ &= \frac{\pi+2}{4\pi}\;.
\end{align}<|endoftext|> TITLE: Does multiplying all a number's roots together give a product of infinity? QUESTION [65 upvotes]: This is a recreational mathematics question that I thought up, and I can't see if the answer has been addressed either. Take a positive, real number greater than 1, and multiply all its roots together. The square root, multiplied by the cube root, multiplied by the fourth root, multiplied by the fifth root, and on until infinity. The reason it is a puzzle is that while multiplying a positive number by a number greater than 1 will always lead to a bigger number, the roots themselves are always getting smaller. My intuitive guess is that the product will go towards infinity, but it could be that the product is finite. I also apologize that as a simple recreational mathematics enthusiast, I perhaps did not phrase this question perfectly. REPLY [6 votes]: While the other answers are valid, they do tend to refer to previously established results, which (in a recreational context) is a bit frustrating. So here is a self-contained answer, for those who prefer that sort of thing. First step: $$a^{1/2}\times a^{1/3}\times a^{1/4}\times…=a^{1/2+1/3+1/4+…}$$ Second step:$$\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\frac{1}{9}+…\text{equals:}$$ $$\frac{1}{2}\ge\frac{1}{2}$$ $$…+\frac{1}{3}+\frac{1}{4}\gt\frac{1}{4}+\frac{1}{4}=\frac{1}{2}$$ $$…+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}\gt\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}=\frac{1}{2}$$ $$\text{…and so on.}$$ That is, $\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+…$ is made up of infinitely many pieces, each of which is at least $\frac{1}{2}$. So it adds up to infinity, and so $$a^{1/2}\times a^{1/3}\times a^{1/4}\times…\text{ or}$$ $$a^{1/2+1/3+1/4+…}$$ is infinite if $a>1$, zero when $a<1$, and $1$ when $a=1$. And not a $\sum$ sign anywhere to be seen!<|endoftext|> TITLE: Green's Function for Laplacian on $S^1 \times S^2$ QUESTION [8 upvotes]: As indicated by the title, I am looking to find the Green's function for the Laplacian on $S^1 \times S^2$. Is such a function known? If not, does anyone have an approach to constructing such a function? My first idea is to combine the Green's function on $S^2$ in a nice way with some $1$-periodic function on $\mathbb{R}$, but I haven't had much luck. REPLY [4 votes]: Here's a way to get a representation as a series. It works in a more general situation. Let $M$ and $N$ be smooth closed Riemannian manifolds. Denote by $\{\varphi_i\}_{i=1}^\infty$, $\{\psi_j\}_{j=1}^\infty$ the eigenfunctions with eigenvalues $\lambda_i$ and $\mu_j$ of the Laplace operators on $M$ and $N$ respectively. Let the $L_2$ norms of the eigenfunctions be equal to one, so delta-functions can be expanded as $$ \delta_M(x-x')=\sum_{i=1}^\infty \varphi_i(x) \varphi_i(x'), $$ $$ \delta_N(y-y')=\sum_{i=1}^\infty \psi_i(y) \psi_i(y'). $$ Then the Green's function for $M$ is given by the series $$ G_M(x,x')=\sum_{i=1}^\infty \frac{\varphi_i(x) \varphi_i(x')}{\lambda_i} $$ and analogously for $N$. (On a closed manifold the constant eigenfunction has eigenvalue $0$, so the Green's function inverts the Laplacian only on the orthogonal complement of the constants; sums with $\lambda_i$, $\mu_j$ or $\lambda_i+\mu_j$ in the denominator should be read as running over the nonzero part of the spectrum.) The Green's function for $M\times N$ is $$ G_{M\times N}(x,y,x',y')= \sum_{i,j=1}^\infty \frac{\varphi_i(x)\varphi_i(x') \psi_j(y)\psi_j(y')}{\lambda_i+\mu_j} $$ since $$ \Delta_{x,y} G_{M\times N}(x,y,x',y')= \sum_{i,j=1}^\infty \varphi_i(x)\varphi_i(x') \psi_j(y)\psi_j(y')= $$ $$ \delta_M(x-x')\delta_N(y-y')=\delta_{M\times N}(x-x',y-y'). $$ So one has to combine eigenfunctions rather than Green's functions themselves.
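To make the recipe concrete for the manifold in the question, here is a rough numerical sketch, my own illustration assuming Python with NumPy/SciPy and arbitrary truncation orders. On $S^1$ the eigenfunctions are $e^{imt}/\sqrt{2\pi}$ with eigenvalue $m^2$; on $S^2$ the spherical-harmonic index is collapsed with the addition theorem $\sum_k Y_{lk}(y)\overline{Y_{lk}(y')}=\frac{2l+1}{4\pi}P_l(\cos\gamma)$. The truncated double series converges slowly (and only conditionally), so this is an illustration rather than an efficient evaluator:

```python
import numpy as np
from scipy.special import eval_legendre

def green_s1xs2(dt, cos_gamma, M=80, L=80):
    """Partial sum of the eigenfunction series for G on S^1 x S^2.
    dt = separation on the circle, cos_gamma = cosine of the sphere angle.
    The joint constant mode (total eigenvalue 0) is skipped, as it must be."""
    g = 0.0
    for m in range(-M, M + 1):
        for l in range(L + 1):
            lam = m * m + l * (l + 1)
            if lam == 0:
                continue
            g += (np.cos(m * dt) / (2 * np.pi)
                  * (2 * l + 1) / (4 * np.pi)
                  * eval_legendre(l, cos_gamma) / lam)
    return g

# value at circle separation 1 and sphere angle pi/3
print(green_s1xs2(1.0, np.cos(np.pi / 3)))
```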
For your case $\varphi_i$ are cosines (up to a constant) and $\psi_j$ are spherical harmonics. The possibility of getting a closed form for this series seems rather thin to me.<|endoftext|> TITLE: What is a singular space? QUESTION [6 upvotes]: A book I am reading on orbifolds uses the word singular space but doesn't say what it means. The book is Orbifolds and Stringy Topology by ALR; the quote is "Orbifolds are singular spaces that are locally modelled on a quotient of a smooth manifold by the action of a finite group". REPLY [3 votes]: A manifold is locally modeled on $\Bbb R^n$. An orbifold is locally modeled on $\Bbb R^n/G$, where $G$ is a finite subgroup of $O(n)$. On the one hand, you can think of this as allowing for singular models of manifolds. For $\Bbb Z/n \subset SO(2)$, $\Bbb R^2/(\Bbb Z/n)$ is a cone with cone angle $2\pi/n$; or for $\Bbb Z/2 \subset O(n)$ (where the nontrivial element acts by negation on $\Bbb R^n$), the resulting object is homeomorphic to an open cone on $\Bbb{RP}^{n-1}$. On the other hand, note that the resulting object is more structured than a topological space (or even a smooth manifold): the charts are not just open subsets of $\Bbb R^n$, but quotients $\Bbb R^n/G$ carrying the extra data of the orthogonal group action. For instance, the cone example above is topologically equivalent to $\Bbb R^2$, but not equivalent as an orbifold. So your mental image of an orbifold should really look somehow like a quotient.<|endoftext|> TITLE: Two randomly chosen coprime integers QUESTION [11 upvotes]: This is a twist on the problem commonly known to have solution $6/\pi^2$. Suppose when choosing from all natural numbers $\mathbb{N}$, the probability of choosing $n \in \mathbb{N}$ is given by $P(n)=\frac{1}{2^n}$. Now when choosing two natural numbers, what is the probability (in closed form) of choosing two coprime numbers? Notice, the probability of choosing something divisible by $p$ is $$\frac{1}{2^p}+\frac{1}{2^{2p}}+\frac{1}{2^{3p}}+\frac{1}{2^{4p}}+\ldots=\frac{1}{2^p-1}$$ so the probability of choosing two numbers both divisible by $p$ is $$\frac{1}{(2^p-1)^2}$$ Meaning $$P(a,b;p)=1-\frac{1}{(2^p-1)^2}$$ where $P(a, b;p)$ is the probability that either $a$ or $b$ is not divisible by $p$. Then the answer I'm looking for is $$P(a,b)=\prod_{p\text{ prime}}P(a,b;p)=\prod_{p\text{ prime}}\left(1-\frac{1}{(2^p-1)^2}\right)$$ where $P(a,b)$ is the probability that $a$ and $b$ are coprime. Anyway, I'm curious about a closed form expression for this number, similar to the original problem I mentioned. Edit As Mark Fischler has pointed out below, this product representation assumes the events of $p|a$ and $p|b$ are independent, which should not be the case. If anyone can also explain a way of constructing a more correct probability, it would be very helpful. REPLY [4 votes]: By @miracle173 's answer, we are only left with $P(x\leq n, y\leq n, (x,y)=1)$. We can find an asymptotic formula as an application of the Möbius function: $$ \begin{align} P(x\leq n, y\leq n, (x,y)=1)&= \sum_{\substack{{x\leq n, y\leq n}\\{(x,y)=1}}} 2^{-x-y}\\ &= \sum_{x\leq n, y\leq n} 2^{-x-y} \sum_{d|(x,y)} \mu(d)\\ &=\sum_{d\leq n} \mu(d) \sum_{a\leq \frac nd, b\leq \frac nd} 2^{- d(a+b) }\ \ \ (\textrm{substitute }x=da, \ y=db)\\ &=\sum_{d\leq n}\mu(d) \left( \frac{\frac{1}{2^d}}{1-\frac{1}{2^d}}+O(2^{-n})\right)^2\\ &=\sum_{d\leq n}\mu(d) \frac{1}{(2^d-1)^2}+O(n 2^{-n}).
\end{align} $$ Thus, the probability has to converge to $$ \sum_{d=1}^{\infty}\frac{\mu(d)}{(2^d-1)^2}\approx 0.867630801985022350790508146212902422392760107477\ldots $$ according to SAGE.<|endoftext|> TITLE: What is actually the standard definition for Radon measure? QUESTION [13 upvotes]: I see that there are various definitions of Radon measure, and they are NOT equivalent, but they are equivalent on locally compact Hausdorff spaces. I think this is the reason why the Radon measure has several different definitions, but I'm curious what the standard one is. On locally compact Hausdorff spaces, the representation theorem lets Radon measures play an important role. However, I think there are areas of mathematics where Radon measures still play an important role even when the given space is NOT a locally compact Hausdorff space. In that situation, what is the definition of a Radon measure? (For example, under the definition given in Folland's Real Analysis, I found that the Vitali-Carathéodory theorem is true for Radon measures on a Hausdorff space, not necessarily locally compact.) REPLY [11 votes]: You don't give explicitly the options you are choosing between, so I am not a priori certain if I am answering your question. However, I do want to give it a try. For all of the details, see: https://en.wikipedia.org/wiki/Radon_measure (honestly this is mostly paraphrasing from the relevant articles there, although with some commentary of my own included and large parts of the content from there omitted). Radon measure - a measure $\mu$ which satisfies two properties: $\mu$ is locally finite; $\mu$ is inner regular. Radon measures are only defined for Hausdorff spaces. This is because of both the local finiteness condition as well as the inner regularity condition. Just to make sure we are on the same page: locally finite measure - a measure $\mu$ on a $\sigma-$algebra $\Sigma$ of a Hausdorff topological space $(X, \tau)$ such that: $\tau \subset \Sigma$; For every $x \in X$, there exists an open neighborhood (i.e. open set) $U_x \in \tau$ such that $x\in U_x$ and $$\mu(U_x) < \infty$$ Note that the second condition relies upon the first, because otherwise we are not guaranteed that the expression $\mu(U_x)$ is even defined. inner regular measure - a measure $\mu$ on a $\sigma$-algebra $\Sigma$ of a Hausdorff topological space $(X, \tau)$ such that: $\tau \subset \Sigma$; For every set $A \in \Sigma$, and for every $\varepsilon > 0$, there exists a compact subset $K \subset A$ such that $$\mu(A \setminus K):=\mu(A-K) < \varepsilon$$ Note that an equivalent formulation of the second condition for inner regularity is: $$\mu(A) = \sup\{\mu(K) :\ K\text{ is compact}, K \subset A \}$$ Also note that because $(X,\tau)$ is by assumption Hausdorff, all compact sets $K$ are closed (this is not true for general topological spaces). This makes inner regularity a true dual notion to outer regularity, at least in those spaces satisfying the Heine-Borel theorem, since then it means "every measurable set can have its measure be approximated from inside by closed (and bounded) sets", while outer regularity means "every measurable set can have its measure be approximated from outside by open sets".
Both are true for Lebesgue measure on the real line (for which Heine-Borel also holds), but one probably sees outer regularity used more often in general in measure theory (for example in the Caratheodory extension theorem) because the open-ness condition extends more elegantly to general spaces which don't satisfy the Heine-Borel theorem, in contrast to inner regularity, for which one must switch explicitly to compact sets from closed sets in order not to lose generality; but one loses a certain degree of intuition in the process. Radon measures are meant to some extent to be generalizations of both the Lebesgue and Dirac measures on the real line, since they interact well with the underlying topology of the space and because the measure of points does not have to be zero (in contrast to the Lebesgue measure). Technically we can extend the definition to non-Hausdorff spaces by changing "compact" to "closed-compact" in the inner regularity property, but this seems not to be useful, so it is almost never done. (Note that even in the non-Hausdorff case a Radon measure has to be defined on a $\sigma-$algebra containing the Borel $\sigma-$algebra, and thus the underlying topology, in order for both the local finiteness as well as the inner regularity conditions to make sense. We need "closed compact" and not just compact, so that $X \setminus K$ is an open set, and thus $A \setminus K = A \cap U$ for some $U \in \tau$.) Locally Compact Hausdorff spaces: On these spaces (but these spaces only, not on general Hausdorff spaces), Radon measures are particularly nice. In this case, Radon measures can be expressed in terms of continuous linear functionals on the space of continuous functions of compact support. (Note that since our space is Hausdorff, having compact support means having closed support, which is good because one usually wants the support to be closed, or to define the support to be the closure of the set on which the function is non-zero.) Specifically, in the case of a locally compact Hausdorff space, the space of continuous functions with compact support $C_C (X, \tau)$ is a vector space, and every Radon measure $\mu$ on $(X, \tau)$ defines a continuous linear functional on $C_C(X, \tau)$ of the form: $$ f \mapsto \int f \mathrm{d}\mu$$ By the Riesz-Markov-Kakutani theorem, every (positive) continuous linear functional on $C_C(X, \tau)$ can be written in the above form. Thus some authors define Radon measures in this manner for locally compact Hausdorff spaces only, and ignore the case of Radon measures on general Hausdorff spaces entirely (e.g. Bourbaki, Hewitt & Stromberg, Dieudonné).<|endoftext|> TITLE: Geometric interpretation of a result from commutative algebra QUESTION [6 upvotes]: I have come across the following result in Hartshorne, Algebraic Geometry, I.6.5, for those who have the book. The result says that if $K$ is a finitely generated extension of some base (algebraically closed) field $k$ of transcendence degree $1$, and if $\mathcal{C}_{K}$ is the set of discrete valuation rings of the extension $K/k$, then for any $x \in K$ the set $\left\lbrace R \in \mathcal{C}_{K} \mid x \notin R \right\rbrace $ is finite. So basically, if we take a rational function $x$ on a smooth projective curve, then there are only finitely many discrete valuation rings that $x$ fails to belong to. Would someone be able to give a geometric interpretation of this result? I think that would vastly improve my understanding of the underlying commutative algebra. Any insight would be greatly appreciated.
Thanks REPLY [2 votes]: The intuition is that if you write $K=\text{Quot}(R)$ for some discrete valuation ring $R$ with prime element $\pi$, then this presentation corresponds to fixing a point on the curve, and writing an element ('meromorphic function') of $K$ in the form $\pi^k\varepsilon$ with $k\in{\mathbb Z}$ and $\varepsilon\in R^{\times}$ determines whether it has a zero of order $k$ (for $k\geq 0$) or a pole of order $-k$ (for $k\leq 0$) at that point. Therefore, fixing $x\in K$ and looking at $R$ such that $x\notin R$ means looking at the set of points at which $x$ has a pole, of which there should be only finitely many.<|endoftext|> TITLE: If $B$ is an ideal of $A$ then $B[x]$ is an ideal of $A[x]$ - what's wrong with my proof? QUESTION [6 upvotes]: This is exercise E.2 from chapter 24 of Pinter's A Book of Abstract Algebra: If $B$ is an ideal of $A$, $B[x]$ is not necessarily an ideal of $A[x]$. Give an example to prove this contention. It seems pretty easy to me to construct a proof that $B[x]$ is indeed an ideal of $A[x]$, so I would like to know what's wrong with it: First we want to show that $B[x]$ is a subgroup of $A[x]$ under addition: Let $p, q \in B[x]$. To calculate $p+q$ we simply add the corresponding coefficients. Since the coefficients are in $B$ and $B$ is a subgroup, the coefficients of $p+q$ belong to $B$ and so $p+q\in B[x]$. Let $p \in B[x]$. Again, since $B$ is a subgroup, $-p$ is in $B[x]$. Now I show that, given any $p \in B[x]$ and $r \in A[x]$, $pr$ and $rp$ are in $B[x]$. The coefficients of $pr$ are given by $$(pr)_i = \sum_{j+k=i} p_j r_k = \sum_{j=0}^i p_j r_{i-j}$$ Each term of the sum is a product of some $p_j$, which is in $B$, and some element of $A$. Since $B$ is an ideal, all $p_j r_k$ are in $B$, and so is the sum; therefore, $pr \in B[x]$. The same argument works for $rp$. I seem to have proved that $B[x]$ is an ideal of $A[x]$. Where have I gone wrong? REPLY [2 votes]: Your proof is correct. I don't own the book, but I was able to find a version of this via Google (Google-Books should be good enough), and the corrected statement reads: If $B$ is an $\textit{ideal}$ of $A$, $B[x]$ is an ideal of $A[x]$. (For those wondering, the first "ideal" is probably printed in italics because the exercise before asks to show the same statement for subrings instead of for ideals.)<|endoftext|> TITLE: How to convert a min-min problem to a linear programming problem? QUESTION [9 upvotes]: I have the following problem: set $P=\{1,2,\ldots,n\}$ for index $i$, set $K=\{1,2,\ldots,m\}$ for index $k$. Value $B_i^k$ is indexed by both $i$ and $k$, while value $l_i$ is indexed by only $i$. Here the objective is that for any $i$ we find the minimum $B^k_i$ value $\mathop {\min }\limits_{k \in K} B_i^k$, minus $l_i$, then accumulate it over $i$. I don't mention the constraints here because they are at least 10 constraint equations on $B_i^k$ and other decision variables that are not included in the objective function, such as binary variables $x_{ij}$, etc. All the constraints can be converted to linear constraints. Is there a way to convert the objective function to standard LP format? Thank you. The objective is: $\min \sum\limits_{i \in P}^{} {\left( {\left( {\mathop {\min }\limits_{k \in K} B_i^k} \right) - {l_i}} \right)} $ =============================================================== Update: I still need help with this problem.
As @user25004 pointed out, naively I can define a new variable $A_i=\mathop {\min }\limits_{k \in K} B_i^k$, and add more constraints on $A_i$: $A_i \le B_i^k$, for any $k$. But this is not correct because the outside is also a $\min$, so the solver will make $A_i$ infinitely low to get a "minimum". I have looked up other related "minimizing over minimum" or "maximizing over maximum" problems. Not much luck, but in this post https://stackoverflow.com/questions/10792139/using-min-max-within-an-integer-linear-program , one answer mentions "min over min". He suggests, without details, that we should try "defining new binary variables and use the big-M coefficient". I was trying to make $\mathop {\min }\limits_{k \in K} B_i^k = \sum_{k}B_i^k y_k$, where the $y_k$ are binary variables. And then $B_i^k y_k = B_i^k - M_k(1-y_k)$ using the big-M coefficient. But I know this big-M method is useful when it is in the constraints; the problem is that it is now in the objective. I just couldn't continue. Does anybody know how to effectively formulate the inner minimum $\mathop {\min }\limits_{k \in K} B_i^k$? Thank you so much. REPLY [11 votes]: Greg Glockner showed how to linearize the following example: $$ \min\left\{\min\{x_1,x_2,x_3\}\right\} $$ For the sake of clarity, I will explain how he achieves this. He introduces a variable $z=\min\{x_1,x_2,x_3\}$ and binary variables $y_1,y_2,y_3$ to deactivate extra (big-$M$) constraints: $$ z\ge x_1-My_1\\ z\ge x_2-My_2\\ z\ge x_3-My_3\\ y_1+y_2+y_3=2 $$ Let's analyze these constraints. First $y_1+y_2+y_3=2$ imposes that exactly $2$ binaries equal $1$, so that exactly two constraints are no longer active. The only constraint that will remain active (the one for which $y_i=0$) is the one for which $x_i$ is the minimum. For example, if $x_1=\min\{x_1,x_2,x_3\}$, then you will have $y_1=0$ and $y_2=y_3=1$, so that the only constraint that remains active is $$ z\ge x_1 $$ And since the solver is minimizing $z$, it will try to fix it to $x_1$ at least, ensuring tightness. Now, back to your problem, replace the constraints $A_i\le B_i^k$ by similar ones, with one binary variable $y_i^k$ per pair $(i,k)$: $$ A_i\ge B_i^k-My_i^k\quad \forall i\in P,\ \forall k\in K\\ \sum_{k\in K}y_i^k=|K|-1\quad \forall i\in P $$<|endoftext|> TITLE: what is the nature of a ball that goes over a "corner" of the real projective plane? QUESTION [6 upvotes]: I'm making a little computer program to help me understand different 2d topological spaces (such as the torus and the Möbius band). I'm having issues with drawing balls that go over a corner of the real projective plane. By "corner" I am referring to a rectangle that is mapped to the projective plane as shown in the wolfram link. The images below were drawn with my program. It shows the boundary of a ball drawn around point p. The top image makes sense to me (the red, blue, orange, and purple dots show points that are identified with one another). The bottom picture shows my program's interpretation of when the ball goes over the corner, and you can see that the circles in the bottom right are overlapping. Does the ball around p, in the bottom case, contain all points in either circle at the bottom right (i.e. their union)? Is my program not drawing the balls correctly? REPLY [5 votes]: When a rectangle's edges are identified to make a torus or Klein bottle (each of which admits a flat metric), the flat geometry is smooth across the corner: the four right angles add up to $2\pi$. Contrast with the situation of the projective plane: At the corners of the Euclidean rectangle, the model does not conformally represent a neighborhood.
Instead, there are cone points, one for each pair of diagonally opposite vertices. Each cone point has incident angle $\pi$, from the two right angles. In your top picture, the disk does not enclose the cone point at the northwest/southeast corner, but you can see the disk's boundary "puckering" near the southeast corner. In the bottom picture, the disk contains a cone point, and the boundary circle winds twice around it. (The circle's unit tangent field has a total turning of $2\pi$, but there's only an intrinsic angle of $\pi$ incident at the cone point.) If it's feasible to represent the projective plane by using antipodal boundary identifications on a disk (which eliminates the cone points), you'll get boundary behavior that better aligns with intuition. Edit: There's a pleasant cross-cap model of the projective plane that can be made in about one minute from a rectangular sheet of paper (and taped closed as desired) to illustrate physically what's happening with the disk in the original question. Fold the paper in half along the vertical midline. Unfold, and repeat along the horizontal midline. The sheet should now have two perpendicular valley folds running diametrically across. Slit the paper from the midpoint of the top side to the center (the bold segment). Fold quarter I down onto II, then fold II rightward onto III, then fold III upward onto IV. All four quarters lie over IV in a four-ply arrangement. Identify (e.g., tape closed) the right and top edges of the top two plies (I and III), and the right and top edges of the bottom two plies (II and IV). The slit, running along the left edge, must be mentally re-joined; this (together with the actual segment joining II and III) is the line of self-intersection of the cross cap. When I and II are folded rightward onto III, the top edge of I and the bottom edge of III are brought together with opposite orientation, and the left edge of I and the right of III are brought together with opposite orientation. Similarly, when I, II, and III are folded upward onto IV, the free edges of II and IV are brought together with the correct orientations to make a projective plane. The finished (taped) model can be opened up into a double-sheeted cone (like a paper hat) whose vertex is the center of the original sheet. The "cone points" are the corners of the original sheet. At each, only two right angles' worth of paper touch, so each point has angular defect $\pi$. At every other point (including the center of the sheet), a small neighborhood has incident angle $2\pi$.<|endoftext|> TITLE: Why is $\mathrm{Spec}(\mathbb{Z})$ a terminal object in the category of affine schemes? QUESTION [5 upvotes]: I've seen this claim repeated in many places (always without source or proof), that $\mathrm{Spec}(\mathbb{Z})$ is a terminal object – however, the most I've been able to prove myself is that for any affine scheme $\mathrm{Spec}(R)$, $\exists$ a morphism $\varphi:\mathrm{Spec}(R)\to\mathrm{Spec}(\mathbb{Z})$. How? Let $X=\mathrm{Spec}(\mathbb{Z})$, $Y=\mathrm{Spec}(R)$, then $\forall\ \emptyset\neq U\subseteq X$ open, $\mathcal{O}_X(U)=\mathbb{Z}$, so that we may let $\varphi$ be the map induced by the canonical homomorphism $\mathbb{Z}\to R,n\mapsto \overline{n}=1+\cdots+1$, so that $\varphi(\mathfrak{p}):=\mathfrak{p}^c$. 
It can be checked that $\varphi$ is continuous, and $\forall\ U\subseteq X$ open, we let $\varphi^\#(U):\mathcal{O}_X(U)\to(\varphi_*\mathcal{O}_Y)(U)=\mathcal{O}_Y(\varphi^{-1}(U))$, with $\varphi^\#(U)(n)=\overline{n}\in\mathcal{O}_Y(\varphi^{-1}(U))$, and it can be checked that this commutes with the restriction homomorphisms, and defines therefore a morphism of sheaves. But how do I check that $\varphi$ is unique? My first thought would be a category theoretic proof; clearly, $\mathbb{Z}$ is initial in the category of commutative rings, and it can be shown that $\mathrm{Spec}:\mathbf{CRng}\to\mathbf{AfSc}$ is a contravariant functor; unfortunately this functor is easily seen not to be faithful(*), and is therefore immediately not an equivalence of categories. According to Wikipedia, if a functor preserves direct/inverse limits, then it maps initial/terminal objects to initial/terminal objects, respectively. Is there an analogous statement for contravariant functors, and does it apply here? Or is there some other way to prove the result? *Edit: I now realize that statement was incorrect; though two functions might induce the same map on spaces, the maps induced on the entire scheme are nonetheless distinct. My difficulty is now in proving exercise II.2.4 of Hartshorne, which shows essentially that we can define the desired equivalence of categories. REPLY [7 votes]: The category of affine schemes is nothing but the opposite category of commutative rings. So $\mathrm{Spec}\, \mathbb Z$ is a terminal object in the category of affine schemes because $\mathbb Z$ is an initial object in the category of commutative rings (the only morphism $\mathbb Z \to R$ is the one sending $1$ to $1_R$...).<|endoftext|> TITLE: Does every finite dimensional real nil algebra admit a multiplicative basis? QUESTION [5 upvotes]: We say that a finite dimensional real commutative and associative algebra $\mathcal{A}$ is nil if every element $a \in \mathcal{A}$ is nilpotent. By multiplicative basis, I mean a basis $\{ v_1, \dots , v_n \}$ for $\mathcal{A}$ as a real vector space such that for each $v_i$ and $v_j$, the algebra multiplication $v_i \star v_j = c v_k$ for some $c \in \mathbb{R}$, and some other element $v_k$ of the basis. Given such a nil algebra $\mathcal{A}$, does it always admit a multiplicative basis in the sense described above? If not, what is an example of a nil algebra which does not admit a multiplicative basis? REPLY [3 votes]: No. The following constructs a counterexample. Let $R$ be the graded ring $\mathbb{R}[x_1, \ldots, x_5]$ and $I = (x_1, \ldots, x_5)$. In any homogeneous degree $d$, define the "pure" polynomials to be the products of $d$ linear polynomials, and let the rank of a homogeneous polynomial $f$ be the minimum number of terms needed to express $f$ as a sum of pure polynomials. I assert the following: (1) the grade one piece $R_1$ is isomorphic to the vector space of $1 \times 5$ matrices; (2) the grade two piece $R_2$ is isomorphic to the vector space of symmetric $5 \times 5$ matrices; (3) the product $R_1 \times R_1 \to R_2$ corresponds to the symmetrized outer product $(v,w) \mapsto \frac{1}{2}(v^Tw + w^Tv)$ (phrased differently: $R_1$ is the space of linear forms, and $R_2$ is the space of symmetric bilinear forms); (4) the rank of a matrix has a similar characterization: $\text{rank}(A)$ is the smallest number of terms you need to express $A$ as a sum of outer products $\sum_i v_i^T w_i$.
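As a quick numerical illustration of assertion (3) (a sketch in Python with numpy; this check is my own addition, not part of the original argument), one can verify that the symmetrized outer product really does represent the product of two linear forms:

```python
import numpy as np

rng = np.random.default_rng(0)
v, w, x = rng.standard_normal((3, 5))  # v, w define linear forms; x is a test point

# symmetric matrix corresponding to the quadratic polynomial (v . x)(w . x)
A = 0.5 * (np.outer(v, w) + np.outer(w, v))

# x^T A x agrees with the product of the two linear forms evaluated at x
print(np.isclose(x @ A @ x, (v @ x) * (w @ x)))  # True
```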
Of particular note is that if a homogeneous quadratic polynomial $f$ corresponds to the matrix $A$, then $\text{rank}(A) \leq 2 \text{rank}(f)$. Consequently, there exists a homogeneous quadratic polynomial $f$ such that $\text{rank}(f) \geq 3$. One such example is $f = \sum_i x_i^2$. Now, consider the graded algebra $A = I / (I^3 + fR)$. Its grade 1 piece is 5-dimensional and its grade 2 piece is 14-dimensional. Suppose we have a collection of polynomials in $I$ that form a multiplicative basis for $A$. The basis must consist of at least five polynomials that span $I/I^2$. There are 15 products of pairs of these polynomials, and they are all distinct elements of $I^2 / I^3$. Suppose two of these products were the same in $A$. That would imply we have two rank one quadratic polynomials $g$ and $h$ with the property that $rg = sh + tf$ for some scalars $r,s,t$. However, we would have $rt^{-1}g + (-st^{-1})h = f$ which is impossible, because the left hand side has rank at most 2, but the right hand side has rank 3. Consequently, the 15 pairwise products of the multiplicative basis for $A$ are all distinct (and nonzero) elements of $A$, and they are distinct from the original $5$ polynomials as well. Consequently, the basis must have at least 20 elements, contradicting the fact that $A$ is 19-dimensional.<|endoftext|> TITLE: Regarding OCD about Reading Mathematical Books QUESTION [6 upvotes]: Dear Math Stack Exchange advisers, I recently started to develop an OCD-like symptom about reading books in mathematics. Whenever I read previous pages and proceed to the next, I always feel under a huge suspicion and fear that I did not understand and memorize the materials presented on those previous pages, which leads me to re-read those pages, doubt again, re-read those pages word for word, and waste my time eventually. I am sure this is not a normal behavior, and I would like to seek your advice and opinions about this anxiety coming from learning. Somehow, I am always under a huge fear that I did not perfectly understand the previous pages of the book, even if I do understand them for the most part, and under an involuntary response of re-reading those pages. I am quite frustrated about this behavior. I need to stop it as it demands a lot of time, but I simply cannot stop due to anxiety. REPLY [3 votes]: I think this is a perfectly valid question. By no means am I a good mathematician/math learner, but here is what I think based on my own experience. Usually, if you doubt whether you have understood something or not, it is quite likely that you did not. For example, will you ever doubt that the solutions of $ax^2 + bx + c=0$ are given by $\frac{-b\pm \sqrt{b^2 - 4ac}}{2a}$? Probably not. Because you have solved quadratic equations so many times. Even if somehow you start doubting whether this is true or not, you can always verify easily that this indeed is true. Things get a little bit trickier when you study more advanced math, for example, real analysis. My suggestion would be that, if you have enough time, it does not really hurt to redo/relearn the proofs that you did before but that confuse you now. Note, however, that if you have limited time then this is probably not the best idea. To give you a concrete example, yesterday I was playing around with the following problem: Consider the function $f(x)=\sin(\frac{1}{x})$ when $x\ne 0 $ and $f(x)=k$ when $x=0$. For what $k$ is the graph of the function not connected? This is a topology question.
However, somehow I was reminded of the continuity of the function. I remember that this function is not continuous at $x=0$ because I did the exact same question when I was taking analysis. However, when I tried to redo the proof, I started confusing myself, which resulted in the following question: "Negation of definition of continuity". Was it normal that I confused myself? It is hard to say. However, one thing is for sure: I relearned something that I probably overlooked, and now I have a better understanding of the subject, which is good. Going back to your question, what you are experiencing is normal. What should you do? Think it over and over until you are totally convinced. As long as you are learning, you are never wasting your time.<|endoftext|> TITLE: what is sine of a real number QUESTION [19 upvotes]: I never understood what the trigonometric function sine is. We had a table of values of sine for different angles; we learned it by heart, applied it to some problems, and there the matter ended. Till then, the sine function was related to triangles and angles. Then comes the graph. We have been told that the figure below is the graph of the function sine. This function takes angles and gives numbers between $-1$ and $1$, and we have been told that it is a continuous function, as is clear from the graph. Then comes the Taylor expansion of sine and we have $$\sin (x)=x-\frac{x^3}{3!}+\cdots$$ I know that for any infinitely differentiable function, we have a Taylor expansion. But how do we define differentiability of the function sine? We define differentiability of a function from real numbers to real numbers. But sine is a function that takes angles and gives real numbers. Then how do we define differentiability of such a function? Are we saying the real number $1$ is the degree 1? I am confused. Help me. The content above is a copy-paste of a mail I received from my friend, a 1st-year undergraduate. I could answer some things vaguely, but I am not happy with my own answers. So, I am posting it here. Help us (me and my friend) to understand the sine function in a better way. REPLY [3 votes]: Try this answer by first just looking at the images (middle-click to enlarge in new tab). If those aren't enough by themselves, then try reading the description. Where the input is in radians: Imagine a circle, like the black one below: For convenience, let's call the up direction on this diagram "north." Imagine you're sitting on the circle at the red dot, in a vehicle that has a very precise odometer. You can imagine the vehicle is a train car, and the black circle is the track, if that helps. Now imagine traveling along the circle and tracing out the orange arc. Suppose you stop at some point. The reading on the odometer is x. (You can imagine the units to be miles, kilometers, megameters, or other generic units that someone decided to call "radians.") The "sine" of x is the distance you'd have to travel south, to reach your original latitude. Stated another way, the "sine" of x is how far north you are of the horizontal blue line. If you've gone more than halfway around the circle, and are in the "southern" half, that distance is going to be a negative number. If you are exactly half way around, or all the way around, it'll be 0. If you are one quarter of the way around, it'll be 1. If you are three quarters of the way around, it'll be -1. A circle's radius is the distance between the center and any point on its edge. (The fact that this is constant is what makes it a circle.)
This circle is centered where the blue lines cross, and the distance from that center to the red dot (a point on the circle's outside edge) is 1. This means the radius of this circle is 1. (Any circle with radius 1 gets the special name "unit circle," but you don't actually have to know that to understand this explanation.) A circle's diameter (the distance across the circle through the middle) is always twice the radius. Therefore, the diameter of this circle is 2. $\pi$ is defined as the ratio between a circle's diameter and its circumference (the distance around the outside edge of a circle). Since $\pi$ = circumference / diameter, circumference = diameter * $\pi$, and the circumference of this circle is $2\pi$. Once the odometer reads $2\pi$, you'll be back to exactly where you started. If you keep going, you'll be tracing out the same path, and for every reading on the odometer, the distance you'd have to travel south to reach the same latitude you started at is exactly the same as when your odometer read $2\pi$ less than it does now. In other words, the sine value will be exactly the same as it was last time you were there. This is why the sine function is periodic (meaning, it repeats itself). $\sin(x) = \sin(x-2\pi) = \sin(x+2\pi)$. Where the input is in degrees: Imagine a circle, like the black one below: For convenience, let's call the up direction on this diagram "north." Imagine there is an obelisk at the point where the blue lines meet, which is visible from everywhere on the circle. Imagine you're sitting on the circle at the red dot, one unit east of the obelisk. Imagine the line between that obelisk and your starting point is permanently marked, by what I'll call the "positive horizontal axis." Now imagine traveling along the circle and tracing out the orange arc. Suppose you stop at some point, and make a purple line between where you are and the obelisk. Let's call the angle that sweeps from the positive horizontal axis to that line, $\theta$. The "sine" of $\theta$ is the distance you'd have to travel south, to reach your original latitude. Stated another way, the "sine" of $\theta$ is how far north you are of the horizontal blue line. If you've gone more than halfway around the circle, and are in the "southern" half, that distance is going to be a negative number. If you are exactly half way around, or all the way around, it'll be 0. If you are one quarter of the way around, it'll be 1. If you are three quarters of the way around, it'll be -1. Once you get back to where you started, $\theta$ will be 360 degrees, because there are 360 degrees in a circle (by definition). If you keep going, you'll be tracing out the same path, and for every point you reach again, the sine value will be exactly the same as it was last time you were there. This is why the sine function is periodic (meaning, it repeats itself). $\sin(\theta) = \sin(\theta-360°) = \sin(\theta+360°)$. Bonus explanation: cosine. In both examples, the cosine of x, or $\theta$, is the distance you'd have to travel "west" to reach the vertical blue line. Stated another way, it's how far "east" you are of the vertical blue line. If you drew a horizontal line between the end of the arc and the vertical blue line, it's the length of that line. It's the length of the blue side of the triangle in the second diagram above (where the length of the arc is still x). At the starting point, it's 1. In the "western" half of the circle, it's a negative number. If you are exactly half way around, it'll be -1.
If you are one quarter of the way around, or three quarters of the way around, it'll be 0. Cosine is periodic in the same way sine is; you could write "cos" instead of "sin" consistently in the formula lines above (that have two = signs) and each would still be correct. Mathematicians call the location of the obelisk the "origin," but I chose not to use that term here to avoid confusion with a more common definition of "origin," namely "where you started," because that refers to a different location in the narrative underlying this answer. I also made an alternate version of the second diagram which uses "origin" instead. This posting also uses the word "line" in multiple places where "line segment" would be more specifically correct; I think the meaning is clear in context and the language used better facilitates understanding here.<|endoftext|> TITLE: Formal symbol for the integer division operation QUESTION [12 upvotes]: Integer division is a common and useful operation in Computer Science. It comes up in many domains, as in the manipulation of matrices and grids. Is there any formal symbol for this operation? Or at least a widely recognisable symbol that can be easily differentiated from the standard division (i.e. inverse of multiplication)? REPLY [2 votes]: I often use $\text{div}$, which is complementary with $\text{mod}$. There are many alternatives (some used in programming languages) such as $/,//,\backslash,\div,\%$ but these don't go without saying. Also $\left\lfloor\dfrac\cdot\cdot\right\rfloor$.<|endoftext|> TITLE: Is "closedness" a proper word? QUESTION [5 upvotes]: In one of my papers I had to prove a list of properties of a set, say, $S=\{a,b,c\}$. Among them we have a fact that $S$ is downward closed with respect to a binary relation $R$. I found it awkward to start proving the property by saying "Regarding downward closedness, the set $\{a,b,c\}$ is downward closed, since ... ." Is using the word "closedness" a good style or is there a better replacement? How would you reformulate the sentence? REPLY [5 votes]: Sure. The phrase "downward closedness" returns plenty of Google hits from math books. But your example feels clunky, because you're essentially introducing the proof twice. Just eliminate the "regarding downward closedness" bit and you're good to go! Some more alternatives: "$S$ is downward closed: …" "$S$ is downward closed. To show this, …" "To show that $S$ is downward closed, …"<|endoftext|> TITLE: How to check two circles are linked or not? (without using topology) QUESTION [5 upvotes]: In $\mathbb{R}^6$, three loops $$C_1:=\{(0,x,-x;0,y,-y)\mid x^2+y^2=1\}\\ C_2:=\{(x,0,-x;y,0,-y)\mid x^2+y^2=1\}\\ C_3:=\{(x,-x,0;y,-y,0)\mid x^2+y^2=1\}$$ are embedded. Is there a pair of circles that are linked? REPLY [2 votes]: No, the subspaces are not linked. Consider that if I can move $C_1$ arbitrarily far away from $C_2$ without them ever intersecting, then they are unlinked. Define $$ C_{1,h} = \{(h,x,-x,0,y,-y) \mid x^2+y^2=1 \}, \qquad h \in \mathbf{R}$$ Now let us move $C_1$ far away from $C_2$. Suppose these two subspaces intersected one another as $h$ in $C_{1,h}$ got large, so we had $$ (h,x_1,-x_1,0,y_1,-y_1) = (x_2,0,-x_2,y_2,0,-y_2), \qquad x_1^2+y_1^2=1, \quad x_2^2+y_2^2=1, \quad h > 0 $$ That is, $$ (h-x_2,x_1,-x_1+x_2,-y_2,y_1,y_2-y_1) = 0 $$ so $x_1=0$ and $y_1=0$, violating the requirement $x_1^2+y_1^2=1$. Then the above equality never holds, so the spaces never intersect as we vary $h$.
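For readers who want to double-check the computation, here is a small sanity check in Python with sympy (my own sketch mirroring the componentwise equations above, not part of the original argument):

```python
from sympy import symbols, Eq, solve

x1, y1, x2, y2, h = symbols('x1 y1 x2 y2 h', real=True)

# componentwise equality of a point of C_{1,h} with a point of C_2,
# together with the two unit-circle constraints
system = [
    Eq(h, x2), Eq(x1, 0), Eq(-x1, -x2),
    Eq(0, y2), Eq(y1, 0), Eq(-y1, -y2),
    Eq(x1**2 + y1**2, 1),
    Eq(x2**2 + y2**2, 1),
]

# an empty solution list means the loops never meet, whatever h is
print(solve(system, [x1, y1, x2, y2, h]))  # []
```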
So we can move $C_1$ as far away from $C_2$ as we want without intersections; thus the spaces are not linked. Now what's the point behind this problem? Probably to show you why two one-dimensional subspaces of a high-dimensional space are never linked. It is always possible to move through the larger space to move the subspaces far from one another, without the subspaces ever intersecting.<|endoftext|> TITLE: Why are $S$, $Z$ and $M$ used to denote the Conductor, Cyclic subspace and Annihilator in linear algebra? QUESTION [8 upvotes]: In the text Linear Algebra (by Hoffman and Kunze), there are notations S, Z, M. What are these short for – that is, why are these particular three letters used for the following concepts? (i) S. Let $W$ be an invariant subspace for $T$ and let $\alpha$ be a vector in $V$. The $T$- conductor of $\alpha$ into $W$ is the set $S_{T}(\alpha ; W)$, which consists of all polynomials $g$ (over the scalar field) such that $g(T)\alpha$ is in $W$. (ii) Z. If $\alpha$ is any vector in $V$, the $T$- cyclic subspace generated by $\alpha$ is the subspace $Z(\alpha ;T)$ of all vectors of the form $g(T)\alpha$, $g$ in $F[x]$. (iii) M. If $\alpha$ is any vector in $V$, the $T$-annihilator of $\alpha$ is the ideal $M(\alpha ; T)$ in $F[x]$ consisting of all polynomials $g$ over $F$ such that $g(T)\alpha = 0$. REPLY [2 votes]: From the comments above. The $T$-conductor of $\alpha$ into $W$ is denoted by the letter S possibly because the $T$-conductor is also called the "stuffer" (das einstopfende Ideal), as the authors mention on page 201 (2nd edition). The $T$-cyclic subspace generated by $\alpha$ is denoted by the letter Z possibly because it is also called Zyklische UnterModuln in German, as mentioned by @Bernard in the comments above.<|endoftext|> TITLE: Proving that $\lceil f(x) \rceil$ $=$ $\lceil f(\lceil x \rceil )\rceil$ when $f(x) =$ integer $\implies x =$ integer QUESTION [5 upvotes]: On P. 71 in 'Concrete Mathematics' the following Theorem is given: Let $f$ be any continuous, monotonically increasing function on an interval of the real numbers, with the property that \begin{equation}f(x) = \mathit{integer}\ \ \ \implies\ \ \ x = \mathit{integer} . \end{equation} Then we have \begin{equation} \lfloor f(x) \rfloor = \lfloor f(\lfloor x \rfloor )\rfloor\ \ \ \ and\ \ \ \lceil f(x) \rceil = \lceil f(\lceil x \rceil )\rceil, \end{equation} whenever $f(x)$, $f(\lfloor x \rfloor)$, and $f(\lceil x \rceil)$ are defined. The proof for the second equation goes as follows: If $x = \lceil x\rceil$, there's nothing to prove. Otherwise $x < \lceil x\rceil$, and $f(x) < f(\lceil x \rceil)$ since $f$ is increasing. Hence $\lceil f(x)\rceil \leq \lceil f(\lceil x\rceil)\rceil$, since $\lceil \cdot\rceil$ is nondecreasing. If $\lceil f(x) \rceil < \lceil f(\lceil x\rceil)\rceil$, there must be a number $y$ such that $x \leq y < \lceil x\rceil$ and $f(y) = \lceil f(x)\rceil$, since $f$ is continuous. This $y$ is an integer, because of $f$'s special property. But there cannot be an integer strictly between $\lfloor x\rfloor$ and $\lceil x\rceil$ . This contradiction implies that we must have $\lceil f(x) \rceil = \lceil f(\lceil x \rceil )\rceil$. The part of the proof that I don't understand: If $\lceil f(x) \rceil < \lceil f(\lceil x\rceil)\rceil$, there must be a number $y$ such that $x \leq y < \lceil x\rceil$ and $f(y) = \lceil f(x)\rceil$, since $f$ is continuous. Why is $f(y)=\lceil f(x) \rceil$? 
REPLY [2 votes]: Consider the case where $x< \lceil x\rceil $ and assume that $\lceil f(x)\rceil <\lceil f(\lceil x\rceil)\rceil$. Firstly, $f(x)$ cannot be an integer or we'd have that $x$ must be an integer, a contradiction to $x< \lceil x\rceil$. Hence $$ f(x)<\lceil f(x)\rceil. $$ Also, since $\lceil f(x)\rceil$ is an integer, $\lceil f(x)\rceil <\lceil f(\lceil x\rceil)\rceil$ implies that $$ \lceil f(x)\rceil \le \lceil f(\lceil x\rceil)\rceil - 1 < f(\lceil x\rceil). $$ Therefore $f(x) < \lceil f(x)\rceil < f(\lceil x\rceil)$, and since $f$ is continuous, the intermediate value theorem produces a $y$ with $x < y < \lceil x\rceil$ and $f(y) = \lceil f(x)\rceil$; this is exactly why $f(y)=\lceil f(x)\rceil$ for the $y$ in the quoted proof.<|endoftext|> TITLE: Given two dice, what's the probability that you land on the last spot on the board? QUESTION [5 upvotes]: So me and my colleagues are discussing board games and we land on the subject of the Danish "Matador" (Monopoly) and on that board there are 40 spaces. You start on Space 1 and are given two dice to make it around the board. The dice are standard 6-sided dice. There is one colleague in particular who argues that the game is not really random, even though it is, as your movement is decided by dice throws and thus what land you can buy and so forth. What would be the probability of landing on the last field on the board? REPLY [7 votes]: It is certainly random, but not necessarily uniform. Also, the probability of ever landing on the last field (i.e., allowing enough rounds to complete) is $1$. I assume you want to know: What is the probability of landing on space 40 before completing the round? We can compute the probability $P_n$ of landing on space $n$ before going beyond $n$: Clearly, $P_1=1$. We may also set $P_n=0$ for $n\le 0$. Then for all $n>1$ we have $$P_n= \sum_{k=2}^{12}p_kP_{n-k}$$ where $p_k$ is the probability of rolling $k$ (so $p_2=\frac1{36}$, $p_3=\frac1{18}$, etc). One finds $$P_1=1;\ P_2=0;\ P_3=\frac1{36};\ P_4=\frac1{18};\ P_5=\frac{109}{1296};\ \ldots\ ;\ P_{40}=0.142805773\ldots$$ In the limit as $n\to \infty$ we should find $P_n\to\frac17$ (why?), and we see that $P_{40}$ differs from this by only $\approx0.0008$. If one makes a graph from the $P_n$ computed above, one notices: The probability increases from $P_2=0$ to $P_8\approx 0.182227$, then decreases to $P_{14}\approx0.1247$ and then quickly approaches $\approx \frac 17$.<|endoftext|> TITLE: Complex numbers as exponents QUESTION [7 upvotes]: Is there any formula to calculate $2^i$ for example? What about $x^z$? I was surfing through different pages and I couldn't seem to find a formula like de Moivre's with $z^x$. REPLY [8 votes]: By definition, for non-rational exponents, $$ x^z=e^{z\log(x)} $$ This definition is fine as far as it goes, but the limitation is on the values of $\log(x)$ for $x\in\mathbb{C}$. Since $e^{2\pi i}=1$, logarithms, as inverses of the exponential function, are unique up to an integer multiple of $2\pi i$. Usually, when the base is a positive real number, we use the real value of the logarithm, so $$ 2^i=e^{i\log(2)}=\cos(\log(2))+i\sin(\log(2)) $$ However, if $2$ is viewed as a complex number, we might equally well say $$ 2^i=e^{i\log(2)-2k\pi}=e^{-2k\pi}\cos(\log(2))+ie^{-2k\pi}\sin(\log(2)) $$ for any $k\in\mathbb{Z}$.<|endoftext|> TITLE: $f(x)$ is a quadratic polynomial with $f(0)\neq 0$ and $f(f(x)+x)=f(x)(x^2+4x-7)$ QUESTION [5 upvotes]: $f(x)$ is a quadratic polynomial with $f(0) \neq 0$ and $$f(f(x)+x)=f(x)(x^2+4x-7)$$ It is given that the remainder when $f(x)$ is divided by $(x-1)$ is $3$. Find the remainder when $f(x)$ is divided by $(x-3)$.
My Attempt: Let $f(x)=ax^2+bx+c$ with $a,c \neq 0$. I got $a+b+c=3$, and by the functional equation $$a[ax^2+(b+1)x+c]^2+b[ax^2+(b+1)x+c]+c= [ax^2+bx+c][x^2+4x-7].$$ Then by putting $x=0$, $ac^2 +bc+c=-7c$. Since $c \neq 0$, we have $ac+b+8=0$. Then comparing the coefficient of $x^4$, we get $a^3=a$. Since $a \neq 0$, $a=-1$ or $a=1$. Then how to proceed with two values of $a$? Or is there a polynomial satisfying these conditions at all? REPLY [3 votes]: In general, if $F$ is a field, whose algebraic closure is $\bar{F}$, and $q(X)\in F[X]$ is a polynomial of degree $2$ with leading coefficient $k\neq 0$, then all solutions to $$f\big(\alpha\,f(X)+X\big)=f(X)\,q(X)\,,\tag{*}$$ where $\alpha\neq 0$ is a fixed element of $F$ and $f(X)\in \bar{F}[X]$, are given by $$f(X)=0\,,$$ $$f(X)=+\frac{1}{\alpha\sqrt{k}}\,q\left(X-\frac{1}{\sqrt{k}}\right)\,,$$ and $$f(X)=-\frac{1}{\alpha\sqrt{k}}\,q\left(X+\frac{1}{\sqrt{k}}\right)\,,$$ where $\sqrt{k}$ is one of the square roots of $k$. Thus, if $X^2-k$ is irreducible in $F[X]$, then the only solution to (*) in $F[X]$ is $f(X)=0$. Hint: It is easy to see that nonconstant solutions to (*) must be of degree $2$. Suppose that $$q(X)=k\,(X-u)(X-v)\,,$$ where $u,v\in\bar{F}$. Then, it follows immediately that $$f(X)=\pm \frac{\sqrt{k}}{\alpha}\,(X-p)(X-r)$$ for some $p,r\in\bar{F}$. Compute $$\frac{f\big(\alpha\, f(X)+X\big)}{k\,f(X)}$$ in terms of $X,p,r,\sqrt{k},\alpha$. This should be the same as $$\frac{q(X)}{k}=(X-u)(X-v)\,.$$ In particular, if $F=\mathbb{C}$, $q(X)=X^2+4\,X-7$ (so that $k=1$), and $\alpha=1$, then the nonzero solutions to the functional equation (*) are $$f(X)=+q(X-1)=+X^2+2\,X-10$$ and $$f(X)=-q(X+1)=-X^2-6\,X+2\,.$$ Note that the extra condition that $f(1)=3$ is incompatible with all such polynomials, whence there are no solutions to the OP's conditions.<|endoftext|> TITLE: What is the intuition behind right-continuous filtration? QUESTION [7 upvotes]: I cannot understand the concept. So a filtration is right continuous if for every $t$ it holds that: $\mathcal{F}_t=\bigcap\limits_{\varepsilon>0}\mathcal{F}_{t+\varepsilon}$ But if it holds for every $t$, then it also holds for $t=0$. And if I choose a large $\epsilon$, does that mean that at time zero I know all the information about the process? REPLY [11 votes]: The idea is that you gain no additional information by taking an infinitesimal step forward in time. Remember that an $\mathit{intersection}$ means that we are taking only the elements contained in EVERY set in the intersection. So, if we think of each $\mathcal{F}_t$ as the information contained in the system up to time $t$, the intersection $\cap_{\epsilon > 0} \mathcal{F}_{t+\epsilon}$ contains only the information in EVERY $\mathcal{F}_{t+\epsilon}$ for every possible value of $\epsilon > 0$. That is, only the information contained up until $t+\epsilon$ for every $\epsilon > 0$, in particular, any arbitrarily small $\epsilon$. So, in this intersection, we have added only the information gained by taking an infinitesimally small step forward in time. Thus, the idea of right continuity, $\mathcal{F}_t=\cap_{\epsilon > 0} \mathcal{F}_{t+\epsilon}$, is that no information is added in this infinitesimal step. In other words, there are no instantaneous developments of the system, it evolves in a continuous fashion going forward in time. (Much credit for this answer is due to Huyen Pham, whose book I'm currently using to review some of this material.)<|endoftext|> TITLE: What's the best way to catch wild Pokémon in Pokémon GO?
QUESTION [15 upvotes]: In the newly released Pokémon GO, one of the major activities of the game is to catch wild Pokémon. These Pokémon are shown in the "nearby" list and their "rough distance" (RD) to you can be 0, 1, 2, or 3 footprints. If they are further than 3 footprints away, they disappear from the nearby list. So a Pokémon cannot appear with 4 or more footprints. Although Niantic Labs has not released a concrete definition of what the footprints mean, I'll define them as follows. An RD of $x \in \{0,1,2,3\}$ means the Pokémon in question is in the circular band enclosed by two circles of radius $x$ and $x-1$, closed on the outer circumference and open on the inner circumference. E.g., if the number of footprints is 3, then the Pokémon's actual distance from you (at the origin of the Cartesian plane) is in $(2,3]$. What is the best way to triangulate the position of a Pokémon? One of the ambiguities of this question is the fact that "footprints" is undefined in terms of metres. It is said that one footprint is roughly 100 metres, but I can't confirm it. So, it is not true to say that walking 300 metres in any direction will affect the footprints, as 3 footprints may be much larger than 300 metres. Given a Pokémon at an unknown point $(x,y)$ with RD $r$ footprints, pick a random direction and observe if the RD decreases or stays the same. If it does not and instead increases, stop (call this point $A$). Go in the opposite direction (do an about-face) and continue walking. Stop when the $RD$ increases again. Call this point $B$. Go to the midpoint of $A$ and $B$. Choose a direction perpendicular to your original line. Continue in that direction until you reach the Pokémon (RD becomes zero), or, if it increases, turn around and head in the opposite direction, continuing past the midpoint of $A$ and $B$. Discussion: Will this method always find the Pokémon? I believe it will, because we have symmetry in the RD. I.e., if the Pokémon is 3 footprints from you, you are 3 footprints from it. So we can view your path as a line intersecting the circular bands of the Pokémon's $RD$. Since the line segment between $A$ and $B$ defines a chord on the Pokémon's $RD$ (circular bands), a line perpendicular to $AB$ intersecting the midpoint of $A$ and $B$ will pass through the centre of the circle (the Pokémon's position). In the worst case, you would walk in all four directions. In the best case, you would've picked the direction the Pokémon is in and found them without backtracking. Is there a better way to do this? I imagine something involving spirals may be better, but I'm not sure. REPLY [5 votes]: Here's a picture that should help. Basically, you walk around until you find a boundary where the number of footprints changes. The most efficient way to do that is to walk in one direction. When you reach the boundary, mark your point, then walk in a tight circle around the point. Pay attention to the number of paw prints. There will be two points where the number of paw prints changes, mark them. You now have three points equidistant from your Pokémon. Triangulate and Go Catch Em All!<|endoftext|> TITLE: Why is the Minimum in the Min-Max Principle for Self-Adjoint Operators attained? QUESTION [8 upvotes]: Let's consider a self-adjoint operator $A$ (not necessarily bounded) on a Hilbert space which is bounded from below, with domain $D$ and whose resolvent is compact.
Then, the spectrum consists solely of isolated eigenvalues which are given (in increasing order) by the min-max principle: \begin{equation} \lambda_k = \min_{\substack{V \subset D\\ \dim V = k}} \max_{\substack{x \in V \\ x \neq 0}} \frac{\langle \,x , Ax \rangle}{\langle \, x, x \rangle}, \ k \in \mathbb{N}. \end{equation} The proof I know shows $\lambda_k \geq \min \max \frac{\langle \,x , Ax \rangle}{\langle \, x, x \rangle}$ and $\lambda_k \leq \min \max \frac{\langle \,x , Ax \rangle}{\langle \, x, x \rangle}$ by using an orthonormal basis of eigenvectors. As seen here: Why is the Maximum in the Min-Max Principle for Self-Adjoint Operators attained?, we know that the maximum is attained since the unit sphere is compact in a finite dimensional vector space. But why is the minimum also attained? REPLY [3 votes]: In the finite dimensional case, we get a similar argument to work: the max is continuous (as a function of $V$), and the space of $k$-dimensional vector spaces is compact, so the minimum is reached. In the infinite dimensional case, the main difference is that $\mathcal{G} (k, \infty)$ is no longer compact. It is still a metric space, and the application $F: V \mapsto \max_{x \in V, \|x\|=1} \langle x, Ax \rangle$ is still continuous. I am not sure about the best way to make this line of reasoning work in this case, although I strongly suspect that $F$ is proper (so its infimum is a minimum). One would have to use the fact that the resolvent is compact, though. At worst, one can use the spectral decomposition to show that the minimum is reached. Take the first $k$ eigenvalues, take corresponding unit eigenvectors $(e_i)_{1 \leq i \leq k}$ (choosing them orthogonal if necessary), and put $V = Span((e_i))$. But this can only be done once we know the spectral decomposition, and I would prefer a more elementary (and geometrical) proof. Now, I'll sketch a proof of the geometrical argument. There are a few holes, which should not be too hard to fill in. 1) Let $\mathcal{G} (k,n)$ be the space of $k$-dimensional vector subspaces in $\mathbb{C}^n$ (this is also called a Grassmannian). Let $(V_j)$ be a sequence in $\mathcal{G} (k,n)$, and $V \in \mathcal{G} (k,n)$. We say that $\lim_j V_j = V$ if there exists a sequence $(e_1^{(j)}, \ldots, e_k^{(j)})$ of orthonormal bases of $V_j$, and an orthonormal basis $(e_1, \ldots, e_k)$ of $V$, such that $\lim_j e_i^{(j)} = e_i$ for all $1 \leq i \leq k$. This topology is metrizable. For instance, you can define: $$d (V_1, V_2) := \inf \max_{1 \leq i \leq k} \{\|e_i-f_i\|\},$$ where the infimum runs over all orthonormal bases $(e_i)$ of $V_1$ and $(f_i)$ of $V_2$. It's easy to check that $d$ is symmetric, non-negative and satisfies the triangle inequality, and that $d(V,V) =0$ for all $V$. Proving that $d(V_1, V_2) >0$ if $V_1 \neq V_2$ is harder. Let $V_1 \neq V_2$ be in $\mathcal{G} (k,n)$. Let $u_1 \in V_1 \setminus V_2$ with unit length. Then there exists $\varepsilon >0$ such that $| \langle u_1, v_2 \rangle | \leq 1-\varepsilon$ for all unit $v_2 \in V_2$ (using the compactness of $\mathbb{S}_{n-1}\cap V_2$). Let $(e_i)$ and $(f_i)$ be orthonormal bases of $V_1$ and $V_2$ respectively. Write $u_1 = \sum_i \lambda_i e_i$, with $\sum_i |\lambda_i|^2 = 1$. Then, since $\sum_i \lambda_i f_i$ is a unit vector of $V_2$: $$2\varepsilon \leq 2-2 \mathfrak{Re} \langle u_1, \sum_i \lambda_i f_i \rangle = \left\| u_1 - \sum_i \lambda_i f_i\right\|^2, \qquad \left\| u_1 - \sum_i \lambda_i f_i\right\| \leq \sum_i |\lambda_i| \|e_i-f_i\| \leq k \max_i \|e_i-f_i\|,$$ so that $\sqrt{2\varepsilon} \leq k \max_i \|e_i-f_i\|$. Since this holds for every pair of orthonormal bases, taking the infimum gives $\sqrt{2\varepsilon} \leq k\, d(V_1,V_2)$. Hence, $d(V_1, V_2) >0$.
2) In addition, $\mathcal{G} (k,n)$ is compact. Let $\mathcal{E} (k,n)$ be the space of all $k$-uplets of orthonormal vectors (or the space of orthonormal $k \times n$ matrices). Then $\mathcal{E} (k,n)$ is closed and bounded in the space of $k \times n$ matrices, so compact, and there is an application $\varphi: \mathcal{E} (k,n) \to \mathcal{G} (k,n)$, defined by: $$\varphi (e_1, \ldots, e_k) = Span (e_1, \ldots, e_k).$$ This application is continuous, and even $1$-Lipschitz (it follows easily from the definition of the distance $d$). Hence, the image of $\varphi$ is compact. But $\varphi$ is surjective, so $\mathcal{G} (k,n)$ is compact. 3) Let us define, for $V \in \mathcal{G} (k,n)$: $$F(V) := \max_{\substack{x \in V \\ x \neq 0}} \frac{\langle x, Ax\rangle}{\langle x, x \rangle} = \max_{\substack{x \in V \\ \|x\|=1}} \langle x, Ax \rangle$$ Let $V_1$, $V_2$ be in $V$. Let $(e_i)$, $(f_i)$ be bases of $V_1$ and $V_2$ respectively. Let $u := \sum_i \lambda_i e_i \in \mathbb{S}_{n-1} \cap V_1$. Then: $$\langle \sum_i \lambda_i f_i, A\sum_i \lambda_i f_i \rangle \leq \langle u, Au \rangle+\left| \langle \sum_i \lambda_i (f_i-e_i), Au \rangle \right|+\left| \langle \sum_i \lambda_i (f_i-e_i), A\sum_i \lambda_i f_i \rangle \right| \leq \langle u, Au \rangle + 2 \|A\| \left\| \sum_i \lambda_i (f_i-e_i) \right\| \leq \langle u, Au \rangle + 2k \|A\| \max_i \|f_i-e_i \|$$ Since this is true for all $(\lambda_i)$, we get, for all $v \in V_2$: $$\langle v, Av \rangle \leq \max_{\substack{u \in V_1 \\ \|u\|=1}} \langle u, Au \rangle+2k\|A\| \max_i \|f_i-e_i \|.$$ Since this holds for all bases $(e_i)$ and $(f_i)$, we can choose them as close as possible, whence: $$\langle v, Av \rangle \leq \max_{\substack{u \in V_1 \\ \|u\|=1}} \langle u, Au \rangle+2k\|A\| d(V_1, V_2).$$ Finally, taking the maximum over all unit $v \in V_2$, we get: $$F(V_2) \leq F(V_1)+2k\|A\| d(V_1, V_2).$$ If we exchange $V_1$ and $V_2$, we get $F(V_1) \leq F(V_2)+2k\|A\| d(V_1, V_2)$. Hence, $F$ is Lipschitz, and thus continuous. Since $\mathcal{G} (k,n)$ is compact and $F$ is continuous on $\mathcal{G} (k,n)$, the function $F$ reaches its minimum.<|endoftext|> TITLE: $\lfloor x\rfloor \cdot \lfloor x^2\rfloor = \lfloor x^3\rfloor$ means that $x$ is close to an integer QUESTION [6 upvotes]: Suppose $x>30$ is a number satisfying $\lfloor x\rfloor \cdot \lfloor x^2\rfloor = \lfloor x^3\rfloor$. Prove that $\{x\}<\frac{1}{2700}$, where $\{x\}$ is the fractional part of $x$. My heuristic is that $x$ needs to be "small": i.e. as close to $30$ as possible to get close to the upper bound on $\{x\}$, but I'm not sure how to make this a proof. REPLY [3 votes]: Let $\lfloor x \rfloor =y$ and $\{x\}=b$ Then $\lfloor x\rfloor \cdot \lfloor x^2\rfloor = \lfloor x^3\rfloor =y\lfloor y^2+2by+b^2 \rfloor= \lfloor y^3+3y^2b+3yb^2+b^3\rfloor$ One way this can happen is that $b$ is small enough that all the terms including $b$ are less than $1$, which makes both sides $y^3$. This requires $3y^2b \lt 1$, which gives $b \lt \frac 1{2700}$ as required. Now you have to argue that if $2by+b^2 \ge 1$ the right side will be too large.<|endoftext|> TITLE: Embeddings of pure cubic field in complex field QUESTION [5 upvotes]: I know that the complex embeddings (purely real included) for quadratic field $\mathbb{Q}[\sqrt{m}]$ where $m$ is square free integer, are $a+b\sqrt{m} \mapsto a+b\sqrt{m}$ $a+b\sqrt{m} \mapsto a-b\sqrt{m}$ So, norm of $a+b\sqrt{m}$ is $a^2-mb^2$. 
Motivated by this, I want to calculate the norm of $a+\sqrt[3]{n}$ in $\mathbb{Q}[\sqrt[3]{n}]$ where $n$ is a positive cubefree integer. I am able to calculate the norm to be $a^3+n$ using the fact that it's equal to the negative of the constant term of the minimal polynomial. But I don't get the same answer when I assume the embeddings to be $a+\sqrt[3]{n} + 0\sqrt[3]{n^2} \mapsto a+\sqrt[3]{n} + 0\sqrt[3]{n^2}$, $a+\sqrt[3]{n} + 0\sqrt[3]{n^2} \mapsto a-\sqrt[3]{n} + 0\sqrt[3]{n^2}$, $a+\sqrt[3]{n} + 0\sqrt[3]{n^2} \mapsto a+\sqrt[3]{n} - 0\sqrt[3]{n^2}$. So what are the correct conjugation maps for the pure cubic field case? REPLY [3 votes]: The norm can be computed without leaving $\mathbb Q$. The norm of $\alpha \in \mathbb{Q}[\sqrt[3]{n}]$ is the determinant of the linear map $x \mapsto \alpha x$. Taking the basis $1, \sqrt[3]{n}, \sqrt[3]{n^2}$, the matrix of this map for $\alpha=a+\sqrt[3]{n}$ is $$ \pmatrix{ a & 0 & n \\ 1 & a & 0 \\ 0 & 1 & a } $$ whose determinant is $a^3+n$.<|endoftext|> TITLE: Finding $\min f(x)$ where $f(x)=\int_0^1 |t-x|t\,dt \quad \forall x \in \mathbb{R}$ QUESTION [5 upvotes]: Can I write the integral as $f(x)=\int_0^{x} |t-x|t\,dt + \int_{x}^1 |t-x|t\,dt$ so that $f(x)=\frac{2x^3-x}{2}+\frac{1-2x^3}{3}$ But here I'm restricting $x$ to the interval $(0,1)$ and I need $x$ to be any real number. So what should be the correct approach here to find the minimum value of $f$? REPLY [4 votes]: If $x \le 0$ then $|t-x| = t-x$ for all $t \in [0,1]$ so that $$\int_0^1 |t-x|t \, dt = \int_0^1 (t-x)t \, dt = \frac 13 - \frac x2$$ which has its minimum value of $\dfrac 13$ at $x=0$. Likewise, if $x \ge 1$ then $|t-x| = x-t$ for all $t \in [0,1]$ and the minimum can be worked out accordingly.<|endoftext|> TITLE: Solving for a function QUESTION [5 upvotes]: How can I find a general solution to the following equation, $$ f\left(\frac{1}{y}\right)=y^2 f(y). $$ I know that $f(y) = \frac{1}{1 + y^2}$ is a solution but are there more? Is there a general technique that I can read up about for problems of this kind? REPLY [2 votes]: Let $g(x)$ be any even function. Then $f(x)=\frac{1}{x}g(\ln|x|)$ satisfies the given equation. Proof: $$f\left(\frac{1}{y}\right)=y^2f(y),\quad y\not=0$$ $$\frac{1}{1/y}g\left(\ln\left|\frac{1}{y}\right|\right)=y^2\frac{1}{y}g(\ln|y|)$$ $$yg(-\ln|y|)=yg(\ln|y|)$$ $$g(-\ln|y|)=g(\ln|y|),\quad \ln|y|=z$$ $$g(-z)=g(z)$$ The last equation is true because $g(x)$ is even.<|endoftext|> TITLE: Density of primes among the prime powers QUESTION [6 upvotes]: What is the relative density of the prime numbers among the set of prime powers? In particular, let $\Pi(x)$ be the number of prime powers less than $x$ and let $\pi(x)$ be the number of primes less than $x$. What can be said of the limit $$\lim_{x \to \infty} \frac{\pi(x)}{\Pi(x)}?$$ In other words, what is the density of prime powers which are square-free? My intuition says the answer should be $1$, but I'm not sure. -- in $\mathbb{Z}$, the square-free integers make up only $\frac{6}{\pi^2}$. On the other hand, in $\mathbb{F}_q[x]$, the square-free polynomials of degree $n$ are $1 + o_n(1)$.
REPLY [3 votes]: We can observe that if $p^{n}\leq x$ then $p\leq x^{1/n}$, so we can write $$\Pi\left(x\right)=\pi\left(x\right)+\pi\left(x^{1/2}\right)+\dots+\pi\left(x^{1/n}\right), \qquad n=\left\lfloor \log_{2}x\right\rfloor, $$ hence $$\Pi\left(x\right)=\pi\left(x\right)+O\left(\frac{\sqrt{x}}{\log\left(x\right)}\right).$$<|endoftext|> TITLE: A closed form of the series $ \sum_{n=1}^{\infty} q^n \sin(n\alpha) $ QUESTION [5 upvotes]: I am having problems with the following series: $$ \sum_{n=1}^{\infty} q^n \sin(n\alpha), \quad|q| < 1. $$ No restrictions on $\alpha$. I need to find out whether it converges and if yes, evaluate its sum. I can see that it's convergent using the comparison test. But I fail to find its sum. So far I tried grouping subsequent terms and using trigonometric formulas, but it didn't help me much. Where should I start when I see trigonometric functions in a series? In general, I have no idea where to take off in such situations. Thanks in advance. REPLY [2 votes]: I'd like to expand Olivier Oloa's hint:
$$ \sum_{n=1}^{\infty} q^{n}\sin(n\alpha) = q\sin\alpha + q^{2}\sin 2\alpha + \dots + q^{n}\sin n\alpha + \dots \tag{1} $$
$$ \sum_{n=1}^{\infty} q^{n}\cos(n\alpha) = q\cos\alpha + q^{2}\cos 2\alpha + \dots + q^{n}\cos n\alpha + \dots, \qquad |q|<1 \tag{2} $$
Let us denote the partial sums of $(1)$ and $(2)$ as follows:
$$ u_{n} = q\sin\alpha + q^{2}\sin 2\alpha + \dots + q^{n}\sin n\alpha, \qquad v_{n} = q\cos\alpha + q^{2}\cos 2\alpha + \dots + q^{n}\cos n\alpha. $$
By using Euler's formula $e^{i\varphi} = \cos\varphi + i\sin\varphi$ we get
$$ v_{n} + iu_{n} = q\left(\cos\alpha + i\sin\alpha\right) + q^{2}\left(\cos 2\alpha + i\sin 2\alpha\right) + \dots + q^{n}\left(\cos n\alpha + i\sin n\alpha\right) = \sum_{m=1}^{n}\left(qe^{i\alpha}\right)^{m} = \frac{qe^{i\alpha} - q^{n+1}e^{i(n+1)\alpha}}{1 - qe^{i\alpha}}. $$
Since $|q|<1 \Rightarrow |qe^{i\alpha}|<1$, we have
$$ \lim_{n\rightarrow\infty}\left(q^{n+1}e^{i(n+1)\alpha}\right) = 0. $$
Finally we get
$$ v + iu = \lim_{n\rightarrow\infty}\left(v_{n} + iu_{n}\right) = \frac{qe^{i\alpha}}{1 - qe^{i\alpha}} = q\left(\frac{\cos\alpha - q}{1 - 2q\cos\alpha + q^{2}} + i\,\frac{\sin\alpha}{1 - 2q\cos\alpha + q^{2}}\right), $$
where
$$ u = \sum_{n=1}^{\infty} q^{n}\sin(n\alpha) = \frac{q\sin\alpha}{1 - 2q\cos\alpha + q^{2}}, \qquad v = \sum_{n=1}^{\infty} q^{n}\cos(n\alpha) = \frac{q\left(\cos\alpha - q\right)}{1 - 2q\cos\alpha + q^{2}}. $$
To understand the last step more clearly:
$$ \frac{qe^{i\alpha}}{1 - qe^{i\alpha}} = q\,\frac{\cos\alpha + i\sin\alpha}{(1 - q\cos\alpha) - iq\sin\alpha} = q\,\frac{(\cos\alpha + i\sin\alpha)\big((1 - q\cos\alpha) + iq\sin\alpha\big)}{(1 - q\cos\alpha)^{2} + q^{2}\sin^{2}\alpha} = q\,\frac{\cos\alpha - q\cos^{2}\alpha + iq\cos\alpha\sin\alpha + i\sin\alpha - iq\sin\alpha\cos\alpha - q\sin^{2}\alpha}{1 - 2q\cos\alpha + q^{2}\cos^{2}\alpha + q^{2}\sin^{2}\alpha} = q\,\frac{\cos\alpha - q + i\sin\alpha}{1 - 2q\cos\alpha + q^{2}} = q\left(\frac{\cos\alpha - q}{1 - 2q\cos\alpha + q^{2}} + i\,\frac{\sin\alpha}{1 - 2q\cos\alpha + q^{2}}\right). $$
(Note that the sine series asked about is the imaginary part, $u$.)<|endoftext|> TITLE: Where do you see cyclic quadrilaterals in real life? QUESTION [7 upvotes]: I've just been studying cyclic quads in geometry at school and I'm thinking they seem pretty interesting, but where would I actually find these in the real world? They seem pretty useless to me... REPLY [7 votes]: Theorem 3 of the Bern-Eppstein paper cited below proves that any polygon of $n$ vertices may be partitioned into $O(n)$ cyclic quadrilaterals. A hint of how this might be achieved can be glimpsed in the figure below, where all the white quadrilaterals are cyclic. (Fig. 5 from the cited paper.) Quadrilateral meshing is important in many applications. The cyclic quads produced by their algorithm have desirable "quality" characteristics. Bern, Marshall, and David Eppstein. "Quadrilateral meshing by circle packing." International Journal of Computational Geometry & Applications 10.04 (2000): 347-360. (Pre-journal arXiv abstract.) (Journal link.)<|endoftext|> TITLE: Show that $\int_0^1 \int_0^1 {x\ln x\over (1-xy)\ln(xy)} \, dx \, dy=1-\gamma.$ QUESTION [5 upvotes]: $$\int_{0}^{1}\int_{0}^{1}{x\ln x\over (1-xy)\ln(xy)} \, dx \, dy=1-\gamma.\tag1$$ Let $u=xy$ $$\int_0^1 {1\over y^2}\int_0^y {u\ln u -u\ln y \over (1-u)\ln(u)}dudy\tag2$$ $$\int_0^y {u\over 1-u} \, du-\ln y\int_0^y {u\over \ln u} \, du\tag3$$ $$-y-\ln(1-y)-\ln y \int_0^y {u\over \ln u} \, du\tag4$$ As for this integral: setting $n=1$ in $$f(n,u)=\int_0^y {u^n\over \ln u} \, du\tag5$$ we can remove $\ln u$ by differentiating: $${df\over dn}=\int_0^y u^n \, du = {y^{n+1}\over n+1}\tag6$$ How can I move on to the next step?
REPLY [5 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\color{#f00}{% \int_{0}^{1}\int_{0}^{1}{x\ln\pars{x} \over \pars{1 - xy}\ln\pars{xy}} \,\dd x\,\dd y} = \int_{0}^{1}\int_{0}^{y}{\pars{x/y}\ln\pars{x/y} \over \pars{1 - x}\ln\pars{x}}\,{\dd x \over y}\,\dd y \\[3mm] = &\ \int_{0}^{1}{1 \over \pars{1 - x}\ln\pars{x}} \int_{x}^{1}\bracks{x\ln\pars{x}\,{1 \over y^{2}} - x\,{\ln\pars{y} \over y^{2}}}\,\dd y\,\dd x \\[3mm] = &\ \int_{0}^{1}{1 \over \pars{1 - x}\ln\pars{x}}\braces{% x\ln\pars{x}\pars{-1 + {1 \over x}} - x\bracks{1 - x + \ln\pars{x} \over x}}\,\dd x \\[3mm] = &\ \int_{0}^{1}\bracks{1 - {1 - x + \ln\pars{x} \over \pars{1 - x}\ln\pars{x}}} \,\dd x = 1\ -\ \underbrace{\int_{0}^{1}{1 - x + \ln\pars{x} \over \pars{1 - x}\ln\pars{x}}} _{\ds{\color{#f00}{\large ?} = \color{#f00}{\large\gamma}}} \,\dd x = \color{#f00}{1 - \gamma} \end{align} \begin{align} \color{#f00}{\large ?} & = \int_{0}^{1}{1 - x + \ln\pars{x} \over \pars{1 - x}\ln\pars{x}}\,\dd x = -\int_{0}^{1}{\pars{x - 1}/\ln\pars{x} - 1 \over 1 - x}\,\dd x = -\int_{0}^{1}{1 \over 1 - x}\int_{0}^{1}\pars{x^{t} - 1}\,\dd t\,\dd x \\[3mm] & = \int_{0}^{1}\int_{0}^{1}{1 - x^{t} \over 1 - x}\,\dd x\,\dd t = \int_{0}^{1}\bracks{\Psi\pars{t + 1} + \gamma}\,\dd t = \ln\pars{\Gamma\pars{2}} - \ln\pars{\Gamma\pars{1}} + \gamma = \color{#f00}{\gamma} \end{align} Note that $$ \int_{0}^{1}\pars{x^{t} - 1}\,\dd t = {1 \over \ln\pars{x}}\int_{0}^{1}x^{t}\ln\pars{x}\,\dd t - 1 = {1 \over \ln\pars{x}}\int_{0}^{1}\partiald{x^{t}}{t}\,\dd t - 1 = {x - 1 \over \ln\pars{x}} - 1 $$<|endoftext|> TITLE: Fractal dimension of the function $f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n}$ QUESTION [32 upvotes]: Consider the function $$ f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n}\, . $$ This is a bizarre and fascinating function. A few properties of this function that SEEM to be true: 1) $f(x)$ is $2\pi$-periodic and odd around $\pi$. 2) $\lim_{x\rightarrow \pi_-} f(x) = \ln 2$. (Can be proven by letting $x = \pi-\epsilon$, expanding the sine function, and taking the limit as $\epsilon\rightarrow 0$.) 3) $\int_0^{\pi}dx\, f(x) = \frac{\pi^3}{8}$ (Can be "proven" by integrating each $\mathrm{sign}\left(\sin(nx)\right)$ term separately. Side question: Is such a procedure on this jumpy function even meaningful?) All of this despite the fact that I can't really prove that this function converges anywhere other than when $x$ is a multiple of $\pi$! A graph of this function (e.g. in Mathematica) reveals an amazing fractal shape. My question: What is the fractal dimension of the graph of this function? Does the answer depend on which definition of fractal dimension we use (box dimension, similarity dimension, ...)? 
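For anyone who wants to reproduce the graph, here is a minimal Python sketch of my own that plots the truncated series; the truncation point N_MAX and the sampling grid are arbitrary choices, and whether the full series converges at a given $x$ is exactly what is being asked:

```python
import numpy as np
import matplotlib.pyplot as plt

# Partial sums of f(x) = sum_{n=1}^{N_MAX} sign(sin(n x)) / n.
# The truncated sum is only a proxy for the series, whose pointwise
# convergence is the open question here.
N_MAX = 2000
x = np.linspace(0.01, 2 * np.pi, 5000)
f = np.zeros_like(x)
for n in range(1, N_MAX + 1):
    f += np.sign(np.sin(n * x)) / n

plt.plot(x, f, linewidth=0.3)
plt.xlabel("x")
plt.ylabel("partial sum")
plt.show()
```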
This question doesn't come from anywhere other than from my desire to see if an answer exists. As requested, a plot of this function on the range $x\in[0,2\pi]$: Edited to add: Other, perhaps more immediate, questions about this function: 1) Does it converge? Conjecture: It converges whenever $x/\pi$ is irrational, but doesn't necessarily diverge if $x/\pi$ is rational. See, e.g., $x = \pi$, where it converges to zero, and apparently to $\pm \ln 2$ on either side of $x = \pi$. 2) I would guess that it diverges as $x\rightarrow 0_+$. How does it diverge there? If this really is a fractal function, I would suppose that the set of points where it diverges is dense. For instance, it appears to have a divergence at $x = 2\pi/3$. Edit 2: Another thing that's pretty straightforward to prove is that: $$ \lim_{x\rightarrow {\frac{\pi}{2}}_-} f(x) = \frac{\pi}{4} + \frac{\ln 2}{2} $$ and $$ \lim_{x\rightarrow {\frac{\pi}{2}}_+} f(x) = \frac{\pi}{4} - \frac{\ln 2}{2} $$ Final Edit: I realize now that the initial question about this function - what is its fractal dimension - is (to use a technical term) very silly. There are much more immediate and relevant question, e.g. about convergence, etc. I've already selected one of the answers below as answering a number of these questions. One final point, for anyone who stumbles on this post in the future. The term $\mathrm{sign}(\sin(nx))$ is actually a square wave, and so we can use the usual Fourier series of a square wave to derive an alternate way of expressing this function: $$ f(x)=\sum_{n=1}^{\infty}\frac{\mathrm{sign}\left(\sin(nx)\right)}{n} = \frac{4}{\pi}\sum_{n=1}^{\infty}\sum_{m=1}^{\infty} \frac{\sin(n(2m-1)x)}{n(2m-1)} $$ By switching the order of the sums and doing the $n$ sum first, this could also be represented as a weighted sum of sawtooth waves. REPLY [3 votes]: Although I've already selected one of the excellent answers above, I thought I'd post an answer that I figured out to one of the questions I posed in the original post, namely: "I would guess that it diverges as $x\rightarrow 0_+$. How does it diverge there?" So, as noted above, the term $\mathrm{sign}(\sin(n x))$ is a square wave with period $2\pi/n$. Suppose $x/\pi$ is irrational. Then $\mathrm{sign}(\sin(n x))$ is always -1 or +1, never 0. Define $$ \phi(x,k) = \left\lfloor \frac{k\pi}{x} \right\rfloor $$ as the largest integer such that $\phi(x,k) \, x < k \pi$. Defining $H_m$ as the $m^{\mathrm{th}}$ harmonic number, the original series can then be divided into finite positive and negative subsequences of terms of the form $\pm 1/n$ and rewritten as \begin{align} f(x) &= \sum_{n = 1}^{\phi(x,1)} \frac{1}{n} \;-\;\sum_{n = \phi(x,1)+1}^{\phi(x,2)} \frac{1}{n} \;+\;\sum_{n = \phi(x,2)+1}^{\phi(x,3)} \frac{1}{n} \;-\; ...\\ &= H_{\phi(x,1)} - \left(H_{\phi(x,2)} - H_{\phi(x,1)}\right)+ \left(H_{\phi(x,3)} - H_{\phi(x,2)}\right) - ...\\ &= H_{\phi(x,1)} \;+\; \sum_{n = 1}^{\infty}{(-1)}^n\left(H_{\phi(x,n+1)} - H_{\phi(x,n)}\right) \end{align} for irrational $x/\pi$. In fact, this last expression can be used to calculate the original function. Although this harmonic number representation is ostensibly only valid for irrational $x/\pi$, it seems to overlay the original function very nicely, as shown below: Now, as to the divergence for small $x$. 
When $x$ is small, $\phi(x,n)$ becomes large, and we have \begin{align} H_{\phi(x,n)}&\approx \ln\left(\phi(x,n)\right) + \gamma\\ &= \ln\left(\left\lfloor \frac{n\pi}{x} \right\rfloor\right) + \gamma\\ &\approx \ln\left(\frac{n\pi}{x}\right) + \gamma\, . \end{align} The above expression for $f(x)$ then reduces to $$ f(x) \approx \ln\left(\frac{\pi}{x}\right) + \gamma + \sum_{n=1}^{\infty}{(-1)}^n \ln\left(1 + \frac{1}{n}\right)\, , $$ where \begin{align} \sum_{n=1}^{\infty}{(-1)}^n \ln\left(1 + \frac{1}{n}\right) &= \ln\left(\prod_{n=1}^{\infty}\left(1 - \frac{1}{4n^2}\right)\right)\\ &= \ln\left(\frac{2}{\pi}\right)\qquad \text{By the Wallis product}\, . \end{align} The small-$x$ approximation then reduces to $$ f(x) \approx -\ln x \;+\; \gamma \;+\; \ln 2\, . $$ This is plotted below along with the original function. So the answer is: A logarithmic divergence as $x\rightarrow 0_+$.<|endoftext|> TITLE: How can I find the dimension of the eigenspace? QUESTION [9 upvotes]: The matrix $A = \begin{bmatrix}9&-1\\1&7\end{bmatrix}$ has one eigenvalue of multiplicity 2. Find this eigenvalue and the dimension of the eigenspace. So I found the eigenvalue by doing $A - \lambda I$ to get: $\lambda = 8$ But how exactly do I find the dimension of the eigenspace? REPLY [9 votes]: By definition, an eigenvector $v$ with eigenvalue $\lambda$ satisfies $Av = \lambda v$, so we have $Av-\lambda v =Av - \lambda I v = 0$, where $I$ is the identity matrix. Thus, $$(A-\lambda I)v = 0,$$ and $v$ is in the nullspace of $A-\lambda I$. Since the eigenvalue in your example is $\lambda = 8$, to find the eigenspace related to this eigenvalue we need to find the nullspace of $A - 8I$, which is the matrix $$\left[\begin{array}{cc}1 & -1 \\ 1 & -1 \\ \end{array} \right].$$ We can row-reduce it to obtain $$\left[\begin{array}{cc} 1 & -1 \\ 0 & 0 \\ \end{array} \right].$$ This corresponds to the equation $$x-y = 0,$$ so $x = y$ for every eigenvector associated to the eigenvalue $\lambda = 8$. Therefore, if $(x,y)$ is an eigenvector, then $(x,y) = (x,x) = x(1,1)$, meaning that the eigenspace is $$W=[(1,1)],$$ and its dimension is $1$.<|endoftext|> TITLE: Is this inequality provable? $e^{\left(\pi^{e^{\pi^{.^{.^{.^{e^\pi}}}}}}\right)}\ge \pi^{\left(e^{\pi^{e^{.^{.^{.^{\pi^e}}}}}}\right)}$ QUESTION [9 upvotes]: I am interested in proving the following inequalities: $e^\pi\ge\pi^e$, $\quad \pi^{(e^\pi)}\ge e^{(\pi^e)}$, and $\quad e^{(\pi^{(e^\pi)})}\ge \pi^{(e^{(\pi^e)})}.$ How we can prove these inequalities? (The dots may denote an infinite power tower. I think this does not matter.) $\boxed{e^{\left(\pi^{\left(e^{\left(\pi^{\left(.^{\left(.^{e^\pi}\right)}\right)}\right)}\right)}\right)}\ge\pi^{\left(e^{\left(\pi^{\left(e^{\left(.^{\left(.^{\pi^e}\right)}\right)}\right)}\right)}\right)}}$ or $\boxed{e^{\left(\pi^{\left(e^{\left(\pi^{\left(.^{\left(.^{e^\pi}\right)}\right)}\right)}\right)}\right)}\le\pi^{\left(e^{\left(\pi^{\left(e^{\left(.^{\left(.^{\pi^e}\right)}\right)}\right)}\right)}\right)}}$ A related question: $e^{\left(\pi^{(e^\pi)}\right)}\;$ or $\;\pi^{\left(e^{(\pi^e)}\right)}$. Which one is greater than the other? REPLY [8 votes]: Define the sequences $E_n$ and $P_n$ by $$E_{n+1}=e^{\pi^{E_n}}\qquad\text{and}\qquad P_{n+1}=\pi^{e{^{P_n}}}$$ with $E_1=e^{\pi}$ and $P_1=\pi^e$. We know that $E_1\gt P_1$ from elementary calculus considerations (or simply by direct numerical calculations). We wish to show that $E_n\gt P_n$ for all $n$. Doing so takes care of towers of even length. 
But it also takes care of odd-length towers as well, since $E_n\gt P_n$ and $\pi\gt e$ together imply $\pi^{E_n}\gt e^{P_n}$. Note that $$E_{n+1}\gt P_{n+1}\iff\ln\ln E_{n+1}\gt\ln\ln P_{n+1}\iff E_n\ln\pi\gt P_n+\ln\ln\pi$$ Thus to finish off a proof by induction, it suffices to show that $$E_n(\ln\pi-1)\gt\ln\ln\pi$$ for all $n$. But $E_1\gt1$ kicks off a mini-induction $E_{n+1}=e^{\pi^{E_n}}\ge e^{\pi^{E_1}}\gt e^{\pi^1}= E_1$ for all $n$, so it's enough to show $$e^{\pi}(\ln\pi-1)\gt\ln\ln\pi$$ There might be some slick analytic way to establish this inequality without any messy calculation, but for the moment at least, let's just note that the two sides aren't even close: $e^\pi(\ln\pi-1)\approx3.3491498$, while $\ln\ln\pi\approx0.1351687$. Remark: I skipped over the proof that $E_1\gt P_1$ in part because the OP, in the linked-to related question, accepts it as known, so the primary interest here seems to be the higher tetrations. I and others gave purely analytic (non-computational) answers there to the inequality I'm expressing here as $E_2\gt P_2$. Those answers' approaches might generalize to all subscripts $n$, but offhand I don't see how. That's why I took a partly computational approach here. REPLY [2 votes]: We can use the function $y = x^{{1\over x}}$ to prove $e^{\pi} \gt \pi ^e$: $$y = x^{{1\over x}}$$ Taking the logarithm of both sides: $$\ln y = \frac{\ln x}{x}$$ Differentiating with respect to $x$: $$\frac{1}{y}\frac{dy}{dx}=\frac{1- \ln x}{x^2}$$ $$\frac{dy}{dx}=\frac{x^{{1\over x}}}{x^2}{(1-\ln x)}$$ $$\frac{dy}{dx}=0 \implies \ln x = 1 \implies x=e$$ At $x=e$: $$\frac{d^2y}{dx^2} = \frac{e^{{1\over e}}}{e^2}\left(\frac{-1}{e}\right) \lt 0 $$ so $y$ has a maximum at $x=e$. Since $x=e$ is the only extremum, $y$ is greatest at $x=e$: $$e^{{1\over e}} \ge x^{{1\over x}}\,,\quad \forall x \gt 0 $$ $$e^{{1\over e}} \gt \pi^{{1\over \pi}}$$ Raising both sides to the power $e\pi$, we have $$e^\pi \gt \pi^e$$<|endoftext|> TITLE: Sample proportion and the Central Limit Theorem QUESTION [5 upvotes]: Suppose that $ (\Omega,\Sigma,\mathsf{P}) $ is a probability space and that $ (X_{k})_{k \in \mathbb{N}} $ is a sequence of i.i.d. Bernoulli trials on $ (\Omega,\Sigma,\mathsf{P}) $, each with probability of success $ p \in (0,1) $. If we define another sequence $ (\hat{P}_{n})_{n \in \mathbb{N}} $ of random variables on $ (\Omega,\Sigma,\mathsf{P}) $ by $$ \forall n \in \mathbb{N}: \qquad \hat{P}_{n} \stackrel{\text{df}}{=} \frac{1}{n} \sum_{k = 1}^{n} X_{k}, $$ then according to the Central Limit Theorem, we have $$ \forall z \in \mathbb{R}: \qquad \lim_{n \to \infty} \mathsf{P} \! \left( \frac{\hat{P}_{n} - p}{\sqrt{p (1 - p) / n}} \leq z \right) = \Phi(z), $$ where $ \Phi $ denotes the standard normal c.d.f. For each $ n \in \mathbb{N} $, we call $ \hat{P}_{n} $ a sample proportion for a sample of size $ n $. When most statistics textbooks discuss confidence intervals for a sample proportion, they implicitly claim that $$ \frac{\hat{P}_{n} - p}{\sqrt{\hat{P}_{n} (1 - \hat{P}_{n}) / n}} \stackrel{\text{d}}{\longrightarrow} \operatorname{N}(0,1), $$ which is the same as saying that $$ \forall z \in \mathbb{R}: \qquad \lim_{n \to \infty} \mathsf{P} \! \left( \frac{\hat{P}_{n} - p}{\sqrt{\hat{P}_{n} (1 - \hat{P}_{n}) / n}} \leq z \right) = \Phi(z). $$ However, I was unable to rigorously establish this claim using the Central Limit Theorem. Could anyone kindly provide references? Thanks!
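In the meantime, the claim is easy to check empirically. A quick Monte Carlo sketch in Python (my own; the values of $p$, $n$ and the replication count are arbitrary illustration choices):

```python
import numpy as np

# Empirical check that (P_hat - p) / sqrt(P_hat (1 - P_hat) / n) is
# approximately standard normal for moderately large n.
rng = np.random.default_rng(0)
p, n, reps = 0.3, 1000, 200_000
p_hat = rng.binomial(n, p, size=reps) / n
z = (p_hat - p) / np.sqrt(p_hat * (1 - p_hat) / n)

# Empirical tail frequencies vs. the standard normal tails:
# P(Z >= c) ≈ 0.1587, 0.0250, 0.0049 for c = 1, 1.96, 2.58.
for c in (1.0, 1.96, 2.58):
    print(c, np.mean(z <= -c), np.mean(z >= c))
```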
REPLY [5 votes]: The most straightforward proof of this result requires knowledge of Slutsky's theorem, which in turn requires the concept of convergence in probability. Write $$\frac{\hat{P}_{n} - p}{\sqrt{\hat{P}_{n} (1 - \hat{P}_{n}) / n}}= \frac{\hat{P}_{n} - p}{\sqrt{p(1-p) / n}} \cdot \sqrt{ \frac{p(1-p)}{\hat P_n(1-\hat P_n)}}, $$ a product of two factors. The first factor converges in distribution to the standard normal, by the central limit theorem. The second factor converges almost surely to the constant value $1$, by the law of large numbers. Now apply Slutsky's theorem, since convergence a.s. implies convergence in probability.<|endoftext|> TITLE: How to understand "tensor" in commutative algebra? QUESTION [12 upvotes]: The tensor product is surely an important concept in commutative algebra, but its definition is rather abstract, so is there an easier way to understand it? Thanks in advance! The definition I have seen is the one defined via modules. Proposition 2.12. Let $M, N$ be $A$-modules. Then there exists a pair $(T,g)$ consisting of an $A$-module $T$ and an $A$-bilinear mapping $g \colon M \times N \to T$, with the following property: Given any $A$-module $P$ and any $A$-bilinear mapping $f \colon M \times N \to P$, there exists a unique $A$-linear mapping $f' \colon T \to P$ such that $f = f' \circ g$ (in other words, every bilinear function on $M \times N$ factors through $T$). Moreover, if $(T,g)$ and $(T',g')$ are two pairs with this property, then there exists a unique isomorphism $j \colon T \to T'$ such that $j \circ g = g'$. REPLY [5 votes]: Let's start with bilinear maps. In particular, choose a map $f:M\times N\to P$. Now consider the pairs $(u,\alpha v)$ and $(\alpha u,v)$ where $u\in M$, $v\in N$ and $\alpha$ is a scalar. We find that $f(u,\alpha v)=\alpha f(u,v) = f(\alpha u, v)$, and this does not depend on what bilinear map we have chosen. In other words, as far as bilinear maps are concerned, there's no difference between the pairs $(u,\alpha v)$ and $(\alpha u,v)$. Therefore, a natural question arises: Can we write down an object that captures exactly the features of pairs of vectors that are essential for the bilinear map? That is, can we have a set $T$ and a function $g:M\times N\to T$ so that $g(u,v) = g(u',v')$ exactly if for any bilinear map $f$ we have $f(u,v)=f(u',v')$? Obviously, if we can find such a function $g$, then we can write any bilinear map as $f = f'\circ g$ where $f'$ maps objects from $T$ to $P$. After all, by definition the objects in $T$ capture exactly those properties of the pairs of vectors which are relevant for $f$. Now since we are doing linear algebra, we would additionally like $T$ to also be a linear space, and $f'$ to be linear. It is not hard to see that in this case, the function $g$ has to be bilinear. The theorem you quoted now states that such a function $g$ not only always exists, but moreover is unique up to isomorphisms.<|endoftext|> TITLE: Origins of Differential Geometry and the Notion of Manifold QUESTION [42 upvotes]: The title can potentially lend itself to a very broad discussion, so I'll try to narrow this post down to a few specific questions. I've been studying differential geometry and manifold theory for a couple of years now. Over this time the notion of a manifold, as some object that locally looks Euclidean though globally may not be, has become a very comfortable notion for me.
However, the definition of a (smooth) manifold is quite abstract, and it's not particularly obvious from the outset why such an object would be of importance. The modern definition as I've come to know it, which can be found in many textbooks (John Lee's book, or this reprint of the definition), I'm guessing is the byproduct of a long evolution of definitions. It's my guess that many candidate definitions were adopted and discarded until we eventually settled on the one we use today. What is the origin of our modern definition of manifold? As succinctly as possible, what was the evolution of this definition and what was the original inspiration for defining such an object? What were the origins of some of the standard manifolds that are used in practice? For instance, I'm certain that some of the objects that gave impetus for the definition of a manifold consisted of $\mathbb{S}^2, SO(3)$, and $\mathbb{T}^2$, but what are the origins of, say, the projective spaces $\mathbb{RP}^n, \mathbb{CP}^n$, as well as the Grassmann and Stiefel manifolds? The role of manifolds in Hamiltonian mechanics? Are there any "vestigial" concepts that were at one time considered important but eventually discarded due to their ineffectiveness, or ones that possibly yielded contradictory results? How did the evolution of topology as a subject intermingle with that of differential geometry? What are some good books tracing the history of differential geometry (that is, the evolution of the ideas)? I know of a few math history books, including Boyer's book, but the parts about differential geometry/topology are left almost as afterthoughts, with the main text dealing with ancient civilizations leading up to the calculus. This is a rather long question, and I don't expect anyone to answer each point in its entirety. Partial answers are welcome. REPLY [20 votes]: [2016-07-25]: Section Differential Geometry added. Although OP narrowed down the post, there are still many more important historical facts needed to adequately answer the question than I can give in this answer. Nevertheless, here are some aspects which might be interesting. At least we will see that OP is right in thinking that many different candidate definitions of manifolds competed to become the most suitable one. We start with question (5), good books addressing the history of differential geometry/topology. A History of Algebraic and Differential Topology 1900 - 1960 by Jean Dieudonné. I strongly recommend this book, which provides a wealth of historical information as well as technical details. Most of it is regrettably beyond my scope, but it's great for me to get at least a glimpse of the exciting development when going through some parts of the book. In what follows I focus on OP's question (1) and provide some small samples of text mostly cited verbatim from the book. As we can read in chapter I, the modern development started with the work of Poincaré. It was his groundbreaking long paper Analysis Situs, published in 1895 and followed by five so-called Complements between 1899 and 1905. The Work of Poincaré [ch 1, § 1.] Concepts and results belonging to algebraic and differential topology may already be noted in the eighteenth and nineteenth centuries, but cannot be said to constitute a mathematical discipline in the usual sense. Before Poincaré we should therefore only speak of the prehistory of algebraic topology; ... But note that topological space had not yet been defined at that time.
But some intuitive notion of manifolds was already available. Of course, before Fréchet (1906) and Hausdorff (1914) the general notion of topological space had not been defined; what had become familiar after the work of Weierstrass and Cantor were the elementary topological notions (open sets, closed sets, neighborhoods, continuous mappings, etc.) in the spaces $\mathbb{R}^n$ and their subspaces; these notions had been extended by Riemann (in an intuitive way and without any precise definition) to $n$-dimensional manifolds (or rather what we now would call $C^r$-manifolds with $r\geq 1$). In chapter I Dieudonné examines Poincaré's Analysis Situs in considerable detail. He explains that Poincaré was the first to introduce the idea of computing with topological objects, not only with numbers. Most importantly, he introduced the concepts of homology and fundamental group. With respect to manifolds we also find: Poincaré appealed to the concept of oriented manifold, introduced by Klein for surfaces and generalized by von Dyck to manifolds of arbitrary dimension. In Analysis Situs, Poincaré gave a characterization of orientable manifolds by what is still one of the modern criteria: there exist charts $(U_\lambda,\psi_{\lambda})$ such that the transition diffeomorphisms $$\psi_{\lambda}\left(U_\lambda\cap U_{\mu}\right)\rightarrow\psi_{\mu}\left(U_{\lambda}\cap U_{\mu}\right)$$ have positive determinants for all pairs of indices such that $U_{\lambda}\cap U_{\mu}\neq \emptyset$. But later on Dieudonné also addresses some weak points in connection with this definition. In fact we won't find a final definition of the term manifold by Poincaré, as we can read in the next paragraph. Nevertheless the far-reaching character of his work is tremendous: ... As in so many of his papers, he gave free rein to his imaginative powers and his extraordinary intuition, which only very seldom led him astray; in almost every section is an original idea. But we should not look for precise definitions, and it is often necessary to guess what he had in mind by interpreting the context. ... Thus ends this fascinating and exasperating paper, which, in spite of its shortcomings, contains the germs of most of the developments of homology during the next 30 years. In section I. § 4, Duality and Intersection Theory on Manifolds, there is a subsection $A$ titled The Notion of Manifold [ch 1, § 4.A] After the invariance problem had been solved, two main items remained in the implementation of the program outlined by Poincaré: a rigorous proof of the duality theorem and a complete theory of intersection barely begun by Poincaré. Obvious examples show that in neither question can one work with a general cell complex; some restrictions have to be introduced in order to make available the arguments Poincaré used for his manifolds. ... In the meantime, in order to use simplicial methods, topologists had to settle for more tractable definitions of manifolds. In fact, several definitions were proposed ([308], pp. 342-343); Dieudonné refers here to Algebraic Topology by S. Lefschetz from 1942. We can find there $9$ different types of manifolds, all of which are supposed to be $n$-dimensional:

- Combinatorial manifolds: Let $X=Y-Z$ be an open simple complex, where $Y$ is closed simple and $Z$ is a closed subcomplex of $Y$. We say that $X$ is an orientable combinatorial manifold whenever the following two conditions are fulfilled. (1) The dual $X^\star$ of $X$ has a closed simple weak isomorph $\overline{X}$. (2) If $x,x^\prime$ are distinct elements of $X$, then $\text{Cl } x \cap \text{St } x^\prime$ is acyclic or void. They may be finite or infinite, absolute or relative, orientable or non-orientable, simplicial or merely simple complexes.
- Geometric manifolds: Euclidean realizations of the preceding simplicial types.
- Manifolds in the sense of Brouwer: Euclidean complexes such that the star of each vertex is isomorphic with a set of simplexes in a Euclidean $\mathcal{E}^n$ having a common vertex $P$ and making up a neighborhood of $P\in\mathcal{E}^n$.
- Manifolds in the sense of Newman: Euclidean complexes such that if $a$ is a vertex and $\text{St}\, a = aB$, then $B$ is partition-equivalent to an $(n-1)$-sphere.
- Manifolds in the sense of Poincaré and Veblen: Topological complexes such that every point has for neighborhood an $n$-cell.
- Topological manifolds: An $M^n$ of this type is a separable metric space with a countable locally finite open covering consisting of $n$-cells. Noteworthy special cases: $C^r$-manifolds, differentiable manifolds, analytical manifolds, $\Gamma$-manifolds, group manifolds.
- Generalized manifolds: Locally compact spaces discussed by Čech, Lefschetz, Wilder and others, characterized by certain properties of so-called local connectedness or local connectedness in the sense of homology, and also by the property: each point is $n$-cyclic.
- Pseudo-manifolds: This term has been applied by Brouwer and other authors to what we have called a simple geometric $n$-circuit.
- Manifolds of grade $p$: Simplicial $n$-complexes, investigated by Čech, which behave like an $M^n$ only as regards the two consecutive dimensions $p-1,p$.

Of course this answer hardly scratches the surface of the information provided in J. Dieudonné's book. ... curious? :-) A nice historical survey is The Concept of Manifold, 1850-1950 by Erhard Scholz. Section 5 is devoted to the development of the modern manifold concept. In subsection 5.4 he describes the birth of the "modern" axiomatic concept in differential geometry. Differential Geometry There was, of course, still another line of research, more closely linked to differential geometry, where manifolds played an essential role, and purely topological aspects (independently of whether continuous, combinatorial, or homological ones) did not suffice and still needed elaboration. ... and later on Whitehead and Veblen presented their axiomatic characterization of manifolds of class $G$, first in a research article in the Annals of Mathematics (1931) and in final form in their tract on the Foundations of Differential Geometry (1932). Their book contributed effectively to a conceptual standardization of modern differential geometry, including not only the basic concepts of continuous and differentiable manifold of different classes, but also the modern reconstruction of the differentials $dx=(dx_1,\ldots,dx_n)$ as objects on tangent spaces to $M$. Basic concepts like Riemannian metric, affine connection, holonomy group, covering manifolds, etc. followed in a formal and symbolic precision such that even by the strict logical standards of the 1930s there remained no doubt about the well-foundedness of differential geometry in manifolds.<|endoftext|> TITLE: Prove that $\dfrac{b^{n-1}a(a+b)(a+2b)\cdots(a+(n-1)b)}{n!}$ is an integer QUESTION [9 upvotes]: Let $a$ and $b$ be integers and $n$ a positive integer. Prove that $$\dfrac{b^{n-1}a(a+b)(a+2b)\cdots(a+(n-1)b)}{n!}$$ is an integer. Define $v_p(x)$ such that if $v_p(x) = n$, then $p^n \mid x$ but $p^{n+1} \nmid x$.
Then we need to show that $v_p(b^{n-1})+v_p(a)+v_p(a+b)+\cdots+v_p(a+(n-1)b)\geq v_p(n!)$ for all primes $p$. How should we do that? REPLY [2 votes]: Fix a prime natural number $p$. For an integer $m$ and a nonnegative integer $r$, let $p^r\parallel m$ denote the condition that $p^r\mid m$ but $p^{r+1}\nmid m$. We shall ignore the trivial cases (namely, $n=1$, $a=0$, and $b=0$). Suppose that $p^k\parallel a$ and $p^l\parallel b$ for some integers $k,l\geq 0$. First, we assume that $k < l$ (whence $l\geq 1$). It follows immediately that $v_p(a+jb)=v_p(a)=k$ for all $j=0,1,2,\ldots,n-1$. Thus, $$v_p\left(b^{n-1}\,\prod_{j=0}^{n-1}\,(a+jb)\right)=v_p\left(b^{n-1}\right)+\sum_{j=0}^{n-1}\,v_p(a+jb)=(n-1)l+nk\,.$$ Note that $$v_p(n!)=\sum_{j=1}^\infty\,\left\lfloor\frac{n}{p^j}\right\rfloor<\sum_{j=1}^\infty\,\frac{n}{p^j}\leq \sum_{j=1}^\infty\,\frac{n}{2^j}=n\,.$$ Consequently, $\displaystyle v_p(n!)\leq n-1\leq (n-1)l\leq (n-1)l+nk=v_p\left(b^{n-1}\,\prod_{j=0}^{n-1}\,(a+jb)\right)$. Now, suppose that $k \geq l$. Then, it is evident that, for every $s=1,2,\ldots$, the congruence $$a+jb\equiv 0\,\pmod{p^{l+s}}$$ has at least $\left\lfloor\dfrac{n}{p^s}\right\rfloor$ solutions $j\in\{0,1,2,\ldots,n-1\}$. That is, $$\sum_{j=0}^{n-1}\,v_p(a+jb)\geq nl+\sum_{s=1}^{\infty}\,\left\lfloor\frac{n}{p^s}\right\rfloor=nl+v_p(n!)\geq v_p(n!)\,.$$ Ergo, we again obtain $\displaystyle v_p(n!)\leq v_p\left(b^{n-1}\,\prod_{j=0}^{n-1}\,(a+jb)\right)$. That is, in all cases, $\displaystyle v_p(n!)\leq v_p\left(b^{n-1}\,\prod_{j=0}^{n-1}\,(a+jb)\right)$. Because $p$ is arbitrary, we conclude that $n!$ divides $\displaystyle b^{n-1}\,\prod_{j=0}^{n-1}\,(a+jb)$. I believe that there should be a combinatorial proof of this statement, and hope to see it.<|endoftext|> TITLE: How can math help reduce the terms and conditions of someone's dying wish? QUESTION [8 upvotes]: Good morning everyone... This is my very first question here, so I apologise in advance for anything I may unintentionally do wrong. Here is a little background story. I'm working at a law firm, and there was a case of distributing a man's assets after his death to his 5 children from 3 different wives. Luckily the case has been settled, but the problem is still bugging me. I'll try to describe the problem as clearly as possible, but I won't go into detail about the case (although I'm pretty sure you can make some wild guesses, which might be right, haha). Suppose $X_1, X_2, Y_1, Y_2 > 0$; $X_1 > X_2$; and $Y_1 > Y_2$. How can we simplify an expression for $Z$ that satisfies the following conditions?

- If $X_2 < X_1 < Y_2 < Y_1$, then $Z = 0$
- If $Y_2 < Y_1 < X_2 < X_1$, then $Z = 0$
- If $X_2 < Y_2 < X_1 < Y_1$, then $Z = X_1 - Y_2$
- If $Y_2 < X_2 < X_1 < Y_1$, then $Z = X_1 - X_2$
- If $Y_2 < X_2 < Y_1 < X_1$, then $Z = Y_1 - X_2$
- If $X_2 < Y_2 < Y_1 < X_1$, then $Z = Y_1 - Y_2$

The goal is to reduce the expression for $Z$ while still satisfying the above conditions. I've tried scratching around for simpler expressions during my days off, but none of my attempts were correct. Is it even possible to do this (I am beginning to think not)? If it is, then you can imagine how many terms and conditions could be reduced in this legal document (although in the real world it might not be applied). Thank you so much for your kind response. REPLY [5 votes]: Let's take a look at cases 3-6. In cases 3 & 4, the first number that you need to calculate (i.e. the number from which you subtract the other one) is $X_1$. If we compare this with the table of inequalities, this is the case exactly when $X_1 < Y_1$. In cases 5 & 6, $Y_1$ needs to be calculated, which corresponds to the case $Y_1 < X_1$. More simply, the first number is $\min(X_1, Y_1)$. A similar consideration shows that the number you subtract is $\max(X_2,Y_2)$. This means that a formula to simplify cases 3-6 would be $\min(X_1, Y_1) - \max(X_2, Y_2)$. Now let's see what happens if we also apply this formula to the first two cases. In the first case we get $X_1 - Y_2$ and in the second case $Y_1 - X_2$. Now how can we distinguish this from cases 3-6? Well, our formula so far yields a positive value in cases 3-6 and a negative value in cases 1 & 2. This means that if our formula yields a negative value, we just need to return $0$. This can all be accomplished in one simple formula: $$\max(\min(X_1, Y_1) - \max(X_2, Y_2), 0)$$<|endoftext|> TITLE: Any "interesting" theorems for element-wise matrix product? QUESTION [7 upvotes]: From the point of view of linear algebra, the "natural" multiplication operation for matrices is the usual matrix product, and there are lots of theorems involving this product---e.g. the result $\det(AB) = \det(A)\det(B)$, or $\text{tr}(AB) = \text{tr}(BA)$, etc. However, there are lots of matrices one encounters in practice whose structure allows them to be written in a convenient way as an element-wise (Hadamard) product of two other matrices. This is one of the reasons why the default multiplication of arrays is element-wise in many programming languages (e.g. Python). In situations where element-wise products appear, it could be very nice to have theorems (like the above determinant & trace relations) concerning the linear algebraic character of the element-wise product. My question is: Do any "interesting" such theorems exist? [I don't expect to find any results as slick as the above $\det$ and $\text{tr}$ identities, but perhaps there are analogous inequalities, or maybe some non-trivial statements about diagonalizability, or eigenvalue relations, etc.] REPLY [5 votes]: A matrix $A$ is called doubly nonnegative (DN) if it is entrywise nonnegative and positive semidefinite. For $A \in M_n (\mathbb{C})$ and $\alpha \in \mathbb{R}$, denote by $A^{(\alpha)}$ the entrywise Hadamard power, i.e., $A^{(\alpha)} = [a_{ij}^\alpha]$. Let $A$ be a DN matrix. Horn and Fitzgerald [MR0506356; J. Math. Anal. Appl. 61 (1977), no. 3, 633–642] showed that $A^{(\alpha)}$ is DN if and only if $\alpha \in \mathbb{N} \cup [n-2,\infty)$. The methods in this paper are also used to give another proof of the Schur product theorem (cited in another answer). Interestingly, this lower bound is the same for conventional matrix powers; Johnson et al. [MR2810562; Linear Algebra Appl. 435 (2011), no. 9, 2175–2182] established the existence of a critical exponent and conjectured that $A^\alpha$ is DN for every $\alpha \ge 2$. Guillot et al. [MR3091314; Linear Algebra Appl. 439 (2013), no. 8, 2422–2427] settled the conjecture in the affirmative. For $A \in M_n(\mathbb{C})$, denote by $\rho(A)$ the spectral radius of $A$. It is known that the spectral radius is sub-multiplicative with respect to the Hadamard product ($\odot$) for non-negative matrices; i.e., $$ \rho(A \odot B) \le \rho(A)\rho(B),~\forall A,B \ge 0.
$$ The standard reference for this is Topics in matrix analysis by Horn and Johnson.<|endoftext|> TITLE: a formula involving order of Dirichlet characters, $\mu(n)$ and $\varphi(n)$ QUESTION [5 upvotes]: Let $p$ be a prime number, let ${q_{1}},\ldots,{q_{r}}$ be the distinct primes dividing $p-1$, let ${\mu}$ be the Möbius function, ${\varphi}$ Euler's phi function, ${\chi}$ a Dirichlet character $\bmod{p}$, and ${o(\chi)}$ the order of ${\chi}$. How can I show that $$\sum\limits_{d|p - 1} {\frac{{\mu (d)}}{{\varphi (d)}}} \sum\limits_{o(\chi ) = d} {\chi (n)} = \prod\limits_{j = 1}^r \left(1 - \frac{1}{{\varphi ({q_j})}} \sum\limits_{o(\chi ) = {q_{j}}} {\chi (n)} \right) \quad ?$$ REPLY [6 votes]: Fix $n$ and define $$f(d)=\sum_{o(\chi)=d}\chi(n).$$ Let's show $f$ is multiplicative. First off, let $g$ be a generator for $(\mathbb{Z}/p\mathbb{Z})^\times$ and write $n=g^k$, then let $\psi$ be a generator for the group $\{\chi:\chi^d=1\}$, in which case we may say $o(\chi)=d\iff \chi=\psi^e$ for a unit $e$ mod $d$. As $\psi(g)$ is a primitive $d$th root of unity, the values $\psi(g)^e$ (as $e$ ranges over units mod $d$) will be all primitive $d$th roots of unity, i.e. all $\zeta$ with $o(\zeta)=d$. Thus $$ f(d)=\sum_{(m,d)=1} \psi^m(g^k)=\sum_{o(\zeta)=d}\zeta^k. $$ This is known as a Ramanujan sum. We want to see that $f(d_1d_2)=f(d_1)f(d_2)$ whenever $d_1,d_2$ are coprime. (Technically, since $k$ is an integer mod $p-1$, the formula only makes sense when $d\mid(p-1)$; however, if we interpret it as a function of a usual integer $k$ we can talk about any $d$ values we want and it will in fact be multiplicative.) The key is the Chinese Remainder Theorem $(\mathbb{Z}/d_1d_2\mathbb{Z})^\times\cong(\mathbb{Z}/d_1\mathbb{Z})^\times\times(\mathbb{Z}/d_2\mathbb{Z})^\times$ which for our purposes means every $d_1d_2$th root of unity is uniquely expressible as a product of $d_1$th and $d_2$th roots of unity, and in particular primitive $d_1d_2$th roots of unity are uniquely expressible as products of primitive $d_1$th and $d_2$th roots of unity. Therefore, $$ \begin{array}{ll} f(d_1)f(d_2) & \displaystyle =\left(\sum_{o(\zeta)=d_1}\zeta^k\right)\left(\sum_{o(\xi)=d_2}\xi^k\right) \\[7pt] & \displaystyle =\sum_{\substack{o(\zeta)=d_1 \\ o(\xi)=d_2}} (\zeta\xi)^k =\sum_{o(\eta)=d_1d_2}\eta^k \\[7pt] & =f(d_1d_2). \end{array} $$ Now, if $g(m)$ is any multiplicative function then we may write $g(m)=\prod_{q^r\| m}g(q^r)$, where $q^r\|m$ means $q^r$ is the power of a prime $q$ that appears in $m$'s prime factorization. Moreover, if $g(m)$ is multiplicative then so is $\sum_{d\mid m}g(d)$ as a function of $m$, in which case it also gets a factorization that looks like $\prod_{q^r\|m} \sum_{d\mid q^r}g(d)$. If $g(d)$ has a $\mu(d)$ factor, then $\sum_{d\mid q^r}g(d)=g(1)+g(q)$ because $g(q^r)=0$ if $r>1$. Therefore we have $$ \sum_{d\mid(p-1)}\frac{\mu(d)}{\varphi(d)}f(d)=\prod_{q\mid (p-1)}\left(1-\frac{1}{\varphi(q)}f(q)\right) $$ as desired.<|endoftext|> TITLE: What is the name of the circle that is tangent to three mutually-tangent circles centered at the vertices of a triangle? QUESTION [15 upvotes]: I want some information about the little 'tangent circle', but I don't have its name to search for it on the internet. What is it called? REPLY [21 votes]: I think that's the Inner Soddy Circle.
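As an aside, once the three mutually tangent circles are given (radii $r_1,r_2,r_3$; in the triangle picture $r_i+r_j$ equals the corresponding side length), the size of the inner Soddy circle follows from Descartes' circle theorem. A small Python sketch of my own, with arbitrarily chosen radii:

```python
import math

# Descartes' circle theorem: for curvatures k_i = 1/r_i of three mutually
# tangent circles, the small circle tangent to all three in the inner gap
# has curvature k4 = k1 + k2 + k3 + 2*sqrt(k1*k2 + k2*k3 + k3*k1).
r1, r2, r3 = 1.0, 2.0, 3.0                 # arbitrary example radii
k1, k2, k3 = 1 / r1, 1 / r2, 1 / r3
k4 = k1 + k2 + k3 + 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
print(1 / k4)                              # inner Soddy radius, ≈ 0.2609 here
```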
REPLY [14 votes]: I think that (inner) Soddy circle is what you want.<|endoftext|> TITLE: Proof of the square root inequality $2\sqrt{n+1}-2\sqrt{n}<\frac{1}{\sqrt{n}}<2\sqrt{n}-2\sqrt{n-1}$ QUESTION [7 upvotes]: I stumbled on the following inequality: For all $n\geq 1,$ $$2\sqrt{n+1}-2\sqrt{n}<\frac{1}{\sqrt{n}}<2\sqrt{n}-2\sqrt{n-1}.$$ However I cannot find the proof of this anywhere. Any ideas how to proceed? Edit: I posted a follow-up question about generalizations of this inequality here: Square root inequality revisited REPLY [5 votes]: Let $f(x)=2\sqrt{x}$. Using mean value theorem we get $$\frac{f(n+1)-f(n)}{(n+1)-n} = f'(c)$$ for some $c \in (n, n+1)$. Equivalently $$2\sqrt{n+1} - 2\sqrt{n} = \frac{1}{\sqrt c}.$$ Since $c>n$, $$\frac{1}{\sqrt c} < \frac{1}{\sqrt{n}},$$ therefore $$2\sqrt{n+1}-2\sqrt{n} < \frac 1{\sqrt n}.$$ Right inequality can be proved in a similar manner.<|endoftext|> TITLE: Find $\int\frac1{\sqrt{1+x^4}}\,dx$ QUESTION [5 upvotes]: $$ \text{Find } \int \frac 1 {\sqrt{1+x^4}} \, dx$$ Let $x^2=\tan u$ $\implies 2x \,dx=\sec^2 u \,du$ $\implies dx=\dfrac{\sec^2 u}{2\sqrt{\tan u}}\,du$ $$= \int \frac{\sec^2 u}{2\sec u\sqrt{\tan u}} \, du $$ $$= \int \frac{\sec u}{2\sqrt{\tan u}} \, du $$ I am unsure how to continue.. REPLY [3 votes]: Let $$I = \int\frac{1}{\sqrt{1+x^4}}dx\;,$$ Now put $x^2=\tan t\;,$ Then $2xdx = \sec^2 tdt$ So $$I = \frac{1}{2}\int\frac{\sec^2 t}{\sec t \sqrt{\tan t}}dt = \frac{1}{\sqrt{2}}\int\frac{1}{\sqrt{\sin 2 t}}dt$$ Now put $\displaystyle 2t=\frac{\pi}{2}-2\theta\;,$ Then $dt = -d\theta\;,$ So we get $$I = -\frac{1}{\sqrt{2}}\int\frac{1}{\sqrt{\cos 2\theta}}d\theta$$ So $$I = -\frac{1}{\sqrt{2}}\int\frac{1}{\sqrt{1-2\sin^2 \theta}}d\theta = -\frac{1}{\sqrt{2}}\int^{\theta}_{0}\frac{1}{\sqrt{1-2\sin^2 u}}du = -\frac{1}{\sqrt{2}}F\left(\theta\mid 2\right)+\mathcal{C}$$ $$ = -\frac{1}{\sqrt{2}}F\left(\frac{\frac{\pi}{4}-t}{2}\mid 2\right) +\mathcal{C}= -\frac{1}{\sqrt{2}}F\left(\frac{\frac{\pi}{4}-\tan^{-1}x^2}{2}\mid 2\right)+\mathcal{C}$$ Using elliptical integral of first kind, $$F(\phi\mid k^2) = \int^{\phi}_{0}\frac{1}{\sqrt{1-k^2\sin^2\theta}}d\theta$$<|endoftext|> TITLE: An inequality involving two complex numbers QUESTION [5 upvotes]: Let $z_1, z_2 \in \mathbb C$ and $a,b \in \mathbb{R} \setminus \{0\}$. Prove that $$|z_1|^2+|z_2|^2-|z_1^2+z_2^2|\le 2\dfrac{|az_1+bz_2|^2}{a^2+b^2}\le |z_1|^2+|z_2|^2+|z_1^2+z_2^2|$$ Attempt at a solution: Let $z=a+ib$. Then, $2\dfrac{|az_1+bz_2|^2}{a^2+b^2}$ can be simplified into the following $$\begin{equation} 2\dfrac{|az_1+bz_2|^2}{a^2+b^2} \implies 2\left|\dfrac{\left(\dfrac{z+\bar{z}}{2}\right)z_1+\left(\dfrac{z-\bar{z}}{2i}\right)z_2}{z}\right|^2 \implies\dfrac{\left|\Re(z(z_1+iz_2))\right|^2}{|z|^2} \end{equation}$$ I tried substituting $z=a+ib; z_1=x_1+iy_1; z_2=x_2+iy_2$ but it just became a mess which I think I can't rearrange to make it something useful for proving the inequality. If possible please also provide the geometrical meaning of this inequality. REPLY [2 votes]: Let $z_1=x_1+iy_1,z_2=x_2+iy_2$ where $x_1,y_1,x_2,y_2\in\mathbb R$. 
Then, $$|z_1|^2+|z_2|^2-|z_1^2+z_2^2|\le 2\dfrac{|az_1+bz_2|^2}{a^2+b^2}\le |z_1|^2+|z_2|^2+|z_1^2+z_2^2|$$ is equivalent to $$x_1^2+y_1^2+x_2^2+y_2^2-\sqrt{(x_1^2-y_1^2+x_2^2-y_2^2)^2+(2x_1y_1+2x_2y_2)^2}\le 2\dfrac{(ax_1+bx_2)^2+(ay_1+by_2)^2}{a^2+b^2}\le x_1^2+y_1^2+x_2^2+y_2^2+\sqrt{(x_1^2-y_1^2+x_2^2-y_2^2)^2+(2x_1y_1+2x_2y_2)^2}$$ So, it is sufficient to prove that $$\sqrt{(x_1^2-y_1^2+x_2^2-y_2^2)^2+(2x_1y_1+2x_2y_2)^2}\ge \left|x_1^2+y_1^2+x_2^2+y_2^2-2\dfrac{(ax_1+bx_2)^2+(ay_1+by_2)^2}{a^2+b^2}\right|$$ Squaring both sides, $$(x_1^2-y_1^2+x_2^2-y_2^2)^2+(2x_1y_1+2x_2y_2)^2\ge \left(x_1^2+y_1^2+x_2^2+y_2^2-2\dfrac{(ax_1+bx_2)^2+(ay_1+by_2)^2}{a^2+b^2}\right)^2$$ which is equivalent to $$(x_1^2-y_1^2+x_2^2-y_2^2)^2+(2x_1y_1+2x_2y_2)^2-(x_1^2+y_1^2+x_2^2+y_2^2)^2\ge -4(x_1^2+y_1^2+x_2^2+y_2^2)\dfrac{(ax_1+bx_2)^2+(ay_1+by_2)^2}{a^2+b^2}+4\left(\dfrac{(ax_1+bx_2)^2+(ay_1+by_2)^2}{a^2+b^2}\right)^2$$ which is equivalent to $$-(x_1y_2-y_1x_2)^2\ge \dfrac{(ax_1+bx_2)^2+(ay_1+by_2)^2}{a^2+b^2}\cdot\frac{-(ax_2-bx_1)^2-(ay_2-by_1)^2}{a^2+b^2}$$ Multiplying both sides by $-(a^2+b^2)^2$ (which reverses the direction of the inequality), $$(a^2+b^2)^2(x_1y_2-y_1x_2)^2\color{red}{\le} ((ax_1+bx_2)^2+(ay_1+by_2)^2)((ay_2-by_1)^2+(bx_1-ax_2)^2)$$ Now, this inequality holds by the Cauchy–Schwarz inequality.<|endoftext|> TITLE: Solve an overdetermined system of linear equations QUESTION [5 upvotes]: I have doubts about how to solve this system of equations: \begin{cases} x+y=r_1\\ x+z=c_1\\ x+w=d_1\\ y+z=d_2\\ y+w=c_2\\ z+w=r_2 \end{cases} Is it an overdetermined system, since there are more equations than unknowns? Can we still solve this system in a simple way? REPLY [2 votes]: Start with the linear system $$ \begin{align} \mathbf{A} x &= b \\ \left[ \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 1 \\ \end{array} \right] % \left[ \begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array} \right] % &= % \left[ \begin{array}{c} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \\ b_{5} \\ b_{6} \end{array} \right] % \end{align} $$ We see that $$ \mathbf{A} \in \mathbb{R}^{6\times 4}_{4}, \quad b \in \mathbb{R}^{6}, \quad x \in \mathbb{R}^{4}; $$ that is, $\mathbf{A}$ has $m=6$ rows, $n=4$ columns, and rank $\rho=4$. The nullspace vectors reveal $$ \mathcal{N}\left(\mathbf{A}^{*}\right) = \text{span} \left\{ \left[ \begin{array}{r} 1 \\ 0 \\ -1 \\ -1 \\ 0 \\ 1 \end{array} \right] , \left[ \begin{array}{r} 0 \\ 1 \\ -1 \\ -1 \\ 1 \\ 0 \end{array} \right] % \right\} $$ Let's look at this problem using the method of least squares. A least squares solution always exists; it solves the system exactly precisely when $b$ is orthogonal to $\mathcal{N}\left(\mathbf{A}^{*}\right)$. The normal equations offer easy resolution: $$ \begin{align} \mathbf{A}^{*}\mathbf{A} x &= \mathbf{A}^{*} b \\[5pt] % \left[ \begin{array}{cccc} 3 & 1 & 1 & 1 \\ 1 & 3 & 1 & 1 \\ 1 & 1 & 3 & 1 \\ 1 & 1 & 1 & 3 \\ \end{array} \right] % \left[ \begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array} \right] % &= % \left[ \begin{array}{c} b_1 + b_2 + b_3 \\ b_1 + b_4 + b_5 \\ b_2 + b_4 + b_6 \\ b_3 + b_5 + b_6 \end{array} \right]. % \end{align} $$ The least squares solution is $$ \begin{align} x_{LS} &= \left( \mathbf{A}^{*}\mathbf{A} \right)^{-1} \mathbf{A}^{*} b \\[5pt] % &= % \frac{1}{12} \left[ \begin{array}{rrrr} 5 & -1 & -1 & -1 \\ -1 & 5 & -1 & -1 \\ -1 & -1 & 5 & -1 \\ -1 & -1 & -1 & 5 \\ \end{array} \right] % \left[ \begin{array}{c} b_1 + b_2 + b_3 \\ b_1 + b_4 + b_5 \\ b_2 + b_4 + b_6 \\ b_3 + b_5 + b_6 \end{array} \right]\\[5pt] % &= % \frac{1}{6} \left[ \begin{array}{rcrcrcrcrcrc} 2 b_1 & + & 2 b_2 & + & 2 b_3 & - & b_4 & - & b_5 & - & b_6 \\ 2 b_1 & - & b_2 & - & b_3 & + & 2 b_4 & + & 2 b_5 & - & b_6 \\ -b_1 & + & 2 b_2 & - & b_3 & + & 2 b_4 & - & b_5 & + & 2 b_6 \\ -b_1 & - & b_2 & + & 2 b_3 & - & b_4 & + & 2 b_5 & + & 2 b_6 \end{array} \right] % \end{align} $$ Because the problem has full column rank $n=\rho=4$, $\mathcal{N}\left( \mathbf{A}\right) = \left\{ \mathbf{0} \right\}$, and the solution is unique. Is this the direct solution where $\mathbf{A}x - b = 0$? The residual error vector shows the constraints required for a direct solution. $$ r(x) = \mathbf{A}x - b = % \frac{1}{6} % \left[ \begin{array}{rcrcrcrcrcrc} -2 b_{1} & + & b_{2} & + & b_{3} & + & b_{4} & + & b_{5} & - & 2 b_{6} \\ b_{1} & - & 2 b_{2} & + & b_{3} & + & b_{4} & - & 2 b_{5} & + & b_{6} \\ b_{1} & + & b_{2} & - & 2 b_{3} & - & 2 b_{4} & + & b_{5} & + & b_{6} \\ b_{1} & + & b_{2} & - & 2 b_{3} & - & 2 b_{4} & + & b_{5} & + & b_{6} \\ b_{1} & - & 2 b_{2} & + & b_{3} & + & b_{4} & - & 2 b_{5} & + & b_{6} \\ -2 b_{1} & + & b_{2} & + & b_{3} & + & b_{4} & + & b_{5} & - & 2 b_{6} \end{array} \right] $$<|endoftext|> TITLE: Calculating probability of winning best-of-7-games tournament. Why is my method wrong? QUESTION [10 upvotes]: The question is as follows: A and B participate in a tournament of "best of 7 games". It is equally likely that either A wins the game, B wins the game, or the game ends in a draw. What is the probability that A wins the tournament? So I tried an approach like this. I made a table: $$ \begin{array}{c|c} \text{Ways of Winning} & \text{Probability} \\ \hline \text{W W W W _ _ _} & (1/3)^4 \\ \color{red}{\text{W W W L }} \text{W _ _} & (1/3)^5\times 4 \\ \color{red}{\text{W W W L L }} \text{W _} & (1/3)^6\times \dfrac{5!}{3!\,2!} \\ \color{red}{\text{W W W L L L }} \text{W} & (1/3)^7\times \dfrac{6!}{3!\,3!} \\ \color{red}{\text{W W W D }} \text{W _ _} & (1/3)^5\times 4 \\ \color{red}{\text{W W W D L }} \text{W _} & (1/3)^6\times \dfrac{5!}{3!} \\ \color{red}{\text{W W W D L L }} \text{W} & (1/3)^7\times \dfrac{6!}{3!\,2!} \\ \vdots & \vdots \\ \color{red}{\text{W D D D D L }} \text{W} & (1/3)^7\times \dfrac{6!}{4!} \\ \color{red}{\text{W D D D D D }} \text{W} & (1/3)^7\times 6 \\ \color{red}{\text{W D D D D D D}} & (1/3)^7\times 7 \\ \end{array} $$ Here W represents a win for A, L represents a loss, and D represents a draw. If a character is in red, it means that it can be exchanged (rearranged) with the other characters in red. If in black, its position is fixed. Underscores mean that any value can be taken at that point. The procedure I followed is that, for zero draws, I kept adding an extra red L before the last W and still letting A win the tournament. Then I added a red D, then kept adding a red L, again letting A win the tournament. I wrote all such arrangements in this fashion and wrote the corresponding probabilities and added them together. There were 16 such rows for me. The answer I got was $651/3^7$ or $217/729$, but the answer given is $299/729$.
They calculated it by subtracting the probability of a draw from one and then dividing by two. I understand why they did it; what I don't understand is why our answers don't match! So, what is wrong with my approach? Am I missing some cases? Or is it totally scrap? REPLY [4 votes]: Alternative route Let $N_{A}$ denote the number of times that $A$ wins, and $N_{B}$ the number of times that $B$ wins. Then: $\Pr\left(\text{no winner}\right)=\sum_{k=0}^{3}\Pr\left(N_{A}=N_{B}=k\right)=3^{-7}\sum_{k=0}^{3}\frac{7!}{k!\,k!\,\left(7-2k\right)!}$ $\Pr\left(A\text{ wins}\right)+\Pr\left(B\text{ wins}\right)+\Pr\left(\text{no winner}\right)=1$ $\Pr\left(A\text{ wins}\right)=\Pr\left(B\text{ wins}\right)$ This leads to: $$\Pr\left(A\text{ wins}\right)=\frac{1}{2}\left[1-3^{-7}\sum_{k=0}^{3}\frac{7!}{k!\,k!\,\left(7-2k\right)!}\right]=\frac{1794}{4374}=\frac{299}{729}$$<|endoftext|> TITLE: A Continuous Function with a Divergent Fourier Series QUESTION [14 upvotes]: This is a Q&A; I hope simply posting a question and then answering it is the right protocol. This is stuff I thought everybody knew, but in at least two recent threads it's turned out to be somewhat mysterious. So: Q: How do you show that there exists a continuous function on the circle whose Fourier series diverges at the origin? Edit @TrialAndError points out that Stackexchange officially encourages asking and answering your own question. REPLY [14 votes]: This is one of the reasons functional analysis is a useful thing: Giving an explicit example is not easy (there's one in Zygmund, due to Fejer), but proving the existence using a little bit of Banach-space theory is very simple. We need the following special case of the Uniform Boundedness Principle, aka the Banach-Steinhaus Theorem: Theorem (UBP, Special Case) Suppose $X$ is a Banach space and $S\subset X^*$. If $\sup_{\Lambda\in S}||\Lambda||=\infty$ then there exists $x\in X$ with $\sup_{\Lambda\in S}|\Lambda x|=\infty$. Now define $\Lambda_n\in C(\Bbb T)^*$ by saying $\Lambda_n f$ is the $n$-th partial sum of the Fourier series at the origin: $$\Lambda_n f=s_n(f,0)=\frac{1}{2\pi}\int_0^{2\pi}f(t)D_n(t)\,dt,$$where $D_n$ is the Dirichlet kernel $$D_n(t)=\sum_{k=-n}^ne^{ikt}=\frac{\sin\left((n+\frac12)t\right)}{\sin\left(\frac12 t\right)}.$$Suppose we can prove two things: $$||D_n||_1\to\infty\quad(n\to\infty)$$and $$||\Lambda_n||_{C(\Bbb T)^*}=||D_n||_1.$$Then UBP says there exists $f\in C(\Bbb T)$ such that $\Lambda_n f$ is unbounded and we're done. Showing that $||D_n||_1\to\infty$ is easy: $$\int_0^\pi|D_n(t)|\,dt\ge2\int_0^\pi\frac{\left|\sin\left((n+\frac12)t\right)\right|}{t}\,dt=2\int_0^{(n+1/2)\pi}\frac{|\sin(t)|}{t}\,dt.$$ The fact that $||\Lambda_n||_{C(\Bbb T)^*}=||D_n||_1$ is immediate from the Riesz Representation Theorem, plus the fact that the norm of an $L^1$ function is the same as its norm as a complex measure. One can also see it directly: Choose $\phi_n\in C(\Bbb T)$ so that $|\phi_n|\le 1$, and such that $\phi_n=1$ on "most" of the set where $D_n>0$ while $\phi_n=-1$ on most of the set where $D_n<0$.<|endoftext|> TITLE: Show that $u_1^3+u_2^3+\cdots+u_n^3$ is a multiple of $u_1+u_2+\cdots+u_n$ QUESTION [12 upvotes]: Let $k$ be a positive integer. Define $u_0 = 0\,,\ u_1 = 1\ $ and $\ u_n = k\,u_{n-1}\ -\ u_{n-2}\,,\ n \geq 2$. Show that for each integer $n$, the number $u_{1}^{3} + u_{2}^{3} + \cdots + u_{n}^{3}\ $ is a multiple of $\ u_{1} + u_{2} + \cdots + u_{n}$.
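(The claim is easy to confirm by brute force before looking for a proof; below is a quick Python check of my own, with arbitrary small ranges for $k$ and $n$.)

```python
# Brute-force check: the sum of cubes of u_1..u_n is divisible by their sum.
# The tested ranges for k and n are arbitrary.
for k in range(1, 8):
    u = [0, 1]
    for _ in range(2, 30):
        u.append(k * u[-1] - u[-2])
    for n in range(1, 30):
        s = sum(u[1:n + 1])
        s3 = sum(t ** 3 for t in u[1:n + 1])
        if s == 0:
            assert s3 == 0          # happens for k = 1, where u is periodic
        else:
            assert s3 % s == 0
print("divisibility verified for all tested k, n")
```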
Computing a few terms I found \begin{align*}u_0 &= 0\\u_1 &= 1\\u_2 &= k\\u_3 &= k^2-1\\u_4 &= k(k^2-1)-k = k^3-2k\\u_5 &= k(k^3-2k)-(k^2-1) = k^4-3k^2+1\\u_6 &= k(k^4-3k^2+1)-(k^3-2k) = k^5-4k^3+3k.\end{align*} I am not sure how we can use this to solve the question, but I think it may help. Cubing these expressions seems very computational so there must be an easier way. REPLY [2 votes]: Given the equation $$ u_n=ku_{n-1}-u_{n-2}\tag{1} $$ where $u_0=0$ and $u_1=1$, we get the solution $$ u_n=\frac{\alpha^n-\alpha^{-n}}{\alpha-\alpha^{-1}}\tag{2} $$ where $$ \alpha=\frac{k+\sqrt{k^2-4}}2\tag{3} $$ except when $k=2$ where the solution is $$ u_n=n\tag{4} $$ and the result for $k=2$ follows from the fact that the sum of the cubes of the first $n$ consecutive integers is the square of the sum of the first $n$ consecutive integers. A proof without words is given in this answer. For the solution $(2)$, we get $\alpha^2+\alpha+1=(k+1)\alpha$ and $$ \begin{align} &\sum_{j=0}^{n-1}\left(\frac{\alpha^j-\alpha^{-j}}{\alpha-\alpha^{-1}}\right)^3\\ &=\sum_{j=0}^{n-1}\frac{\alpha^{3j}-3\alpha^j+3\alpha^{-j}-\alpha^{-3j}}{\alpha^3-3\alpha+3\alpha^{-1}-\alpha^{-3}}\\ &=\frac1{\alpha^3-3\alpha+3\alpha^{-1}-\alpha^{-3}}\left(\frac{\alpha^{3n}-1}{\alpha^3-1}-3\frac{\alpha^n-1}{\alpha-1}+3\frac{\alpha^{-n}-1}{\alpha^{-1}-1}-\frac{\alpha^{-3n}-1}{\alpha^{-3}-1}\right)\\ &=\frac1{\alpha^3-3\alpha+3\alpha^{-1}-\alpha^{-3}}\left(\frac{\left(\alpha^{3n}-1\right)\left(1-\alpha^{3-3n}\right)}{\alpha^3-1}-\frac{3\left(\alpha^n-1\right)\left(1-\alpha^{1-n}\right)}{\alpha-1}\right)\tag{5} \end{align} $$ and $$ \begin{align} &\sum_{j=0}^{n-1}\frac{\alpha^j-\alpha^{-j}}{\alpha-\alpha^{-1}}\\ &=\frac1{\alpha-\alpha^{-1}}\left(\frac{\alpha^{n}-1}{\alpha-1}-\frac{\alpha^{-n}-1}{\alpha^{-1}-1}\right)\\ &=\frac1{\alpha-\alpha^{-1}}\left(\frac{\left(\alpha^{n}-1\right)\left(1-\alpha^{1-n}\right)}{\alpha-1}\right)\tag{6} \end{align} $$ Therefore, we can compute the ratios $$ \begin{align} r_{n-1} &=\left.\sum_{j=0}^{n-1}u_j^3\middle/\sum_{j=0}^{n-1}u_j\right.\\ &=\left.\sum_{j=0}^{n-1}\left(\frac{\alpha^j-\alpha^{-j}}{\alpha-\alpha^{-1}}\right)^3\middle/\sum_{j=0}^{n-1}\frac{\alpha^j-\alpha^{-j}}{\alpha-\alpha^{-1}}\right.\\ &=\frac1{\alpha^2-2+\alpha^{-2}}\left(\frac{\left(\alpha^{2n}+\alpha^n+1\right)\left(\alpha^{2-2n}+\alpha^{1-n}+1\right)}{(k+1)\alpha}-3\right)\\ &=\frac1{k^2-4}\left(\frac{\left(\alpha^{2n}+\alpha^n+1\right)\left(\alpha^{2n-2}+\alpha^{n-1}+1\right)}{(k+1)\alpha^{2n-1}}-3\right)\\ &=\frac1{k^2-4}\left(\frac{\alpha^{2n-1}+\alpha^{n}+\alpha^{n-1}+\alpha^{1}+1+\alpha^{-1}+\alpha^{1-n}+\alpha^{-n}+\alpha^{1-2n}}{k+1}-3\right)\\ &=\frac1{k^2-4}\left(\frac{\alpha^{2n-1}+\alpha^{n}+\alpha^{n-1}+\alpha^{1-n}+\alpha^{-n}+\alpha^{1-2n}}{k+1}-2\right)\tag{7} \end{align} $$ Due to the equation $$ \begin{align} &(x-1)\left(x-\alpha\right)\left(x-\alpha^{-1}\right)\left(x-\alpha^2\right)\left(x-\alpha^{-2}\right)\\ &=(x-1)\left(x^2-kx+1\right)\left(x^2-\left(k^2-2\right)x+1\right)\\ &=x^5-mx^4+kmx^3-kmx^2+mx-1\tag{8} \end{align} $$ where $m=k^2+k-1$, the ratios $r_n$ in $(7)$ satisfy the relation $$ r_n=mr_{n-1}-kmr_{n-2}+kmr_{n-3}-mr_{n-4}+r_{n-5}\tag{9} $$ Computing the first few values of $r_{n-1}$ yields $$ \begin{align} r_{-2} &=\frac1{k^2-4}\left(\frac{\alpha^{-3}+\alpha^{-1}+\alpha^{-2}+\alpha^2+\alpha^1+\alpha^3}{k+1}-2\right)\\ &=1\\ r_{-1} &=\frac1{k^2-4}\left(\frac{\alpha^{-1}+\alpha^0+\alpha^{-1}+\alpha^1+\alpha^0+\alpha^1}{k+1}-2\right)\\ &=0\\ r_{0} 
&=\frac1{k^2-4}\left(\frac{\alpha^1+\alpha^1+\alpha^0+\alpha^0+\alpha^{-1}+\alpha^{-1}}{k+1}-2\right)\\ &=0\\ r_{1} &=\frac1{k^2-4}\left(\frac{\alpha^3+\alpha^2+\alpha^1+\alpha^{-1}+\alpha^{-2}+\alpha^{-3}}{k+1}-2\right)\\ &=1\\ r_{2} &=\frac1{k^2-4}\left(\frac{\alpha^5+\alpha^3+\alpha^2+\alpha^{-2}+\alpha^{-3}+\alpha^{-5}}{k+1}-2\right)\\ &=k^2-k+1 \end{align}\tag{10} $$ The recurrence $(9)$ and the computations $(10)$ ensure that $r_n\in\mathbb{Z}$ for all $n$.<|endoftext|> TITLE: Why work with squares of error in regression analysis? QUESTION [8 upvotes]: In regression analysis one finds a line that fits best by minimizing the sum of squared errors. But why squared errors? Why not use the absolute value of the error? It seems to me that with squared errors the outliers gain more weight. Why is that justified? And if it is justified to give the outliers more weight, then why give them exactly this weight? Why not, for example, take the least sum of exponential errors? Edit: I am not so much interested in the fact that it might be easier to calculate. Rather the question is: does squaring the errors result in a better fitting line compared to using the absolute value of the error? Furthermore I am looking for an answer in layman's terms that can enhance my intuitive understanding. REPLY [3 votes]: Basically, you can ask the same question in the much simpler setting of finding the "best" average of values $x_1,\ldots,x_n$, where I here refer to average in the general sense of finding a single value to represent them, such as the (arithmetic) mean, geometric mean, median, or $l_p$-mean (not sure if that's the right name). For data that actually come from a normal distribution, the mean will be the most powerful estimator of the true mean. However, if the distribution is long-tailed (or has extreme values) the median will be more robust. You can also use the $l_p$ norm and find the $l_p$-mean, $u$, that minimises $\sum_i |x_i-u|^p$ for any $p\ge1$. (For $p<1$ this need no longer be unique.) For $p=2$ we have the traditional square distance, while for $p=1$ we get the median (almost). I once found $p=1.5$ to behave well in terms of both power and robustness. So, switching from least squares regression ($l_2$-norm) to using absolute distance ($l_1$-norm) corresponds to switching from mean to median. Which is better depends on the data, and also on the context of the analysis: what you are actually looking for. The mean does have the advantage that it is an unbiased estimator of the true mean no matter what the underlying distribution is, but usually accuracy is more important than unbiasedness.<|endoftext|> TITLE: Given a finite sequence, can we always find a relation that generates that sequence? QUESTION [5 upvotes]: This is just something I've been wondering about, but I have no idea what the answer is. I suspect it's yes. Given an arbitrary finite sequence, can we always find a relation that generates that sequence? For example, given $4, 7, 10, 13$ we can find at least one relation that generates this sequence, $a_n=3n+1$. Is it always possible to find a relation for any arbitrary finite sequence? If so, how about an infinite sequence? (You would have to be given an infinite number of terms, I guess.) REPLY [4 votes]: Consider a sequence of length $n+1$, $\{a_k\}_{k=0}^{n}$. The sequence specifies $n+1$ coordinates: $(0, a_0), \ldots, (n, a_n)$.
You could use Lagrange's Interpolation Formula on these coordinates to find a polynomial which interpolates your specified sequence, i.e., some $p(x) \in \mathbb{R}_{n}[x]$ such that: $$\{a_k\}_{k=0}^n = \{p(k)\}_{k=0}^n$$ So in terms of a specific relation: $a_n = p(n)$.<|endoftext|> TITLE: What exactly is $\mathbb{P}_\mathbb{Z}^n$? QUESTION [6 upvotes]: So, I have the following definition of $\mathbb{P}_A^n$ for an arbitrary (commutative) ring $A$, from Hartshorne: Set $S=A[x_0,\ldots,x_n]$, so that $S=\bigoplus_{d\geq 0}S_d$ as a graded ring, $S_+=\bigoplus_{d\geq 1}S_d$, and for convenience, let $S^\mathrm{H}=\bigcup_{d\geq 0} S_d$ denote the homogeneous elements. We define the set $\mathrm{Proj}\ S=\{\mathfrak{p}\subset S \mid \mathfrak{p} \mathrm{\ hmg.\ prime}, S_+ \nsubseteq\mathfrak{p}\}$ with closed sets $V(\mathfrak{a})=\{\mathfrak{p}\in\mathrm{Proj}\ S\mid \mathfrak{p}\supseteq\mathfrak{a}\}$ for all homogeneous ideals $\mathfrak{a}\subseteq S$. Next, for all $\mathfrak{p}\in\mathrm{Proj}\ S$, we set $T_\mathfrak{p}=S^\mathrm{H}\setminus\mathfrak{p}$, $S_{(\mathfrak{p})}=\{\frac{f}{g}\in T_\mathfrak{p}^{-1} S \mid f\in S^\mathrm{H}, \deg f = \deg g\}$. Finally, for any open subset $U\subseteq\mathrm{Proj}\ S$, we define $$\mathcal{O}(U)=\{s:U\to\bigsqcup_\mathfrak{p} S_{(\mathfrak{p})} \mid \forall\ \mathfrak{p}\in U, s(\mathfrak{p})\in S_{(\mathfrak{p})}, \exists\ \mathfrak{p}\in V\subseteq U \mathrm{\ open}, a,f \in S^\mathrm{H} \mathrm{\ s.t.\ } \deg\frac{a}{f} = 0, \mathrm{\ and\ }\forall\ \mathfrak{q}\in V, f\notin\mathfrak{q}, s(\mathfrak{q})=\frac{a}{f}\in S_{(\mathfrak{q})}\}$$ When $A$ is an algebraically closed field, I can see the analogy with the projective space $\mathbb{P}^n$ of classical algebraic geometry. But for arbitrary rings, even for the simplest case of $A=\mathbb{Z}$, I struggle to make sense of this mess of symbols. Is there some good intuition to keep in mind when working with $\mathbb{P}_A^n$, or some simpler way of describing the ring of regular functions? At the very least, I'd like to understand what's going on in $\mathbb{P}_\mathbb{Z}^n$, to gain some intuition for the more general case. REPLY [2 votes]: A typical way to picture this is to look at affine patches. If, for example, $X,Y,Z$ are your standard coordinates on $\mathbb{P}^2$, then if we remove the subscheme $Z=0$, we are left with a copy of the affine plane $\mathbb{A}^2 \cong \mathrm{Spec}\left(\mathbb{Z}[\frac{X}{Z}, \frac{Y}{Z}] \right) \subseteq \mathbb{P}^2$. The subscheme $Z=0$, incidentally, is isomorphic to the projective line $\mathbb{P}^1$. So you can think of the projective plane as the affine plane together with a projective line encircling it. This picture works in higher dimensions as well. Similarly, if we remove $Y=0$, we get a different copy of the affine plane $\mathbb{A}^2 = \mathrm{Spec}\left(\mathbb{Z}[\frac{X}{Y}, \frac{Z}{Y}] \right)$. These two copies of the affine plane overlap: their intersection is $$ \mathrm{Spec}\left(\mathbb{Z}\left[\frac{X}{Z}, \frac{Y}{Z}, \frac{X}{Y}, \frac{Z}{Y}\right] \right) = \mathrm{Spec}\left(\mathbb{Z}\left[\frac{X}{Z}, \frac{Y}{Z}, \left(\frac{Y}{Z}\right)^{-1}\right] \right) \cong \mathbb{A}^2 \setminus \mathbb{A}^1 $$ If we remove $X=0$, we get yet another copy of the affine plane $\mathbb{A}^2 = \mathrm{Spec}\left(\mathbb{Z}[\frac{Y}{X}, \frac{Z}{X}] \right)$. These three copies of the affine plane are enough to cover the projective plane. 
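To see the gluing concretely, here is a small symbolic sketch (my own illustration, not from Hartshorne; it assumes SymPy is available) of the transition between the patches $Z \neq 0$ and $Y \neq 0$:

    import sympy as sp

    # Coordinates on the patch Z != 0: u = X/Z, v = Y/Z.
    u, v = sp.symbols('u v')

    # On the overlap with the patch Y != 0 (where v = Y/Z is invertible),
    # the coordinates s = X/Y and t = Z/Y are expressed as:
    s = u / v      # X/Y = (X/Z) / (Y/Z)
    t = 1 / v      # Z/Y = 1 / (Y/Z)

    # The inverse transition recovers u and v, confirming the gluing:
    assert sp.simplify(s / t - u) == 0   # X/Z = (X/Y) / (Z/Y)
    assert sp.simplify(1 / t - v) == 0   # Y/Z = 1 / (Z/Y)
    print("transition maps are mutually inverse on the overlap")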
So if you need to, you can do computations in these three affine patches, transferring from one to another as needed. Or, to define things, define them on each patch and ensure they're consistent. (Or other things, like define something on one affine patch and then extend it by continuity or whatever to the whole projective space.)<|endoftext|> TITLE: Amazing isomorphisms QUESTION [46 upvotes]: Just as a recreational topic, what group/ring/other algebraic structure isomorphisms do you know that seem unusual, or downright unintuitive? Are there such structures which we don't yet know whether they are isomorphic or not? REPLY [9 votes]: "N.H." has already posted this one, but only very tersely, and it's worth a few comments. $$ L^2(\mathbb S^1) \cong \ell^2(\mathbb Z) $$ Let us take the former to mean the set of all complex-valued functions $f$ on $\mathbb S^1 = \mathbb R/(2\pi)$, i.e. periodic functions of period $2\pi$, that satisfy $$ \int_{\mathbb S^1} |f(x)|^2 \, dx < \infty. $$ Note that since $f(x)$ is complex and not necessarily real, we need to say $|f(x)|^2$ rather than $f(x)^2$. Such functions have a Fourier series $$ f(x) \sim \sum_{n=-\infty}^\infty c_n e^{inx}. $$ The meaning of $\text{“}{\sim}\text{''}$ might take some discussion, but for now let us note that Carleson's theorem says the measure of the set of points $x$ at which the series fails to converge to $f(x)$ is $0$, and a much more easily proved theorem says it converges in $\ell^2(\mathbb Z)$. The space $\ell^2(\mathbb Z)$ is the space of sequences $\{c_n\}_{n=-\infty}^\infty$ for which $\displaystyle\sum_{n=-\infty}^\infty |c_n|^2 < \infty$ with the inner product $\displaystyle \langle b,c\rangle = \sum_{n=-\infty}^\infty b_n \overline{c}_n$, where $\overline c$ is the complex conjugate of $c$. On $L^2(\mathbb S^1)$ we have the inner product $\displaystyle \langle f,g\rangle = \int_{\mathbb S^1} f(x) \overline{g(x)}\,dx$. The Riesz–Fischer theorem says these two inner product spaces are isomorphic, and in particular that the transform from $f$ to $\{c_n\}_{n=-\infty}^\infty$ is an isomorphism. Notice that here we have to construe $f$ to mean the equivalence class of $f$ where two functions are equivalent if they're equal almost everywhere. If we didn't do that then (1) two functions differing on a non-empty subset of measure $0$ in the domain would be at distance $0$ from each other, so we wouldn't quite have an inner product space, and (2) the cardinalities of the underlying sets of the two spaces would differ. This last fact enables us to say just what the cardinality of the space $L^2(\mathbb S^1)$ is: it's not actually bigger than that of $\ell^2(\mathbb Z)$.<|endoftext|> TITLE: Finding $\int \frac{\mathrm{d}x}{1 + \frac{2}{x} - x}$ QUESTION [5 upvotes]: I want to solve: $$\int\frac{1}{1+\frac{2}{x}-x} \mathrm{d}x $$ I don't know how to start; maybe I should use partial fractions? REPLY [8 votes]: Multiply by $\frac{x}{x}$ to get a more recognisable form $$\int \frac{x}{2 + x - x^2} \, \mathrm{d}x = \int \frac{-x}{(x+1)(x-2)} \, \mathrm{d}x$$ Some partial fractions (I'll leave the gory details to you) yield $$-\frac{1}{3}\int \frac{1}{x+1} + \frac{2}{x-2} \, \mathrm{d}x$$ which are easy logarithmic anti-derivatives.<|endoftext|> TITLE: Proving $\frac{1}{\cos^2\frac{\pi}{7}}+ \frac {1}{\cos^2\frac {2\pi}{7}}+\frac {1}{\cos^2\frac {3\pi}{7}} = 24$ QUESTION [10 upvotes]: Someone gave me the following problem, and using a calculator I managed to find the answer to be $24$.
Calculate $$\frac {1}{\cos^2\frac{\pi}{7}}+ \frac{1}{\cos^2\frac{2\pi}{7}}+\frac {1}{\cos^2\frac{3\pi}{7}}\,.$$ The only question left is, why? I've tried using Euler's identity, using a heptagon with the Law of Cosines and Ptolemy's theorem, etc., but the fact that the cosine values are all squared and in the denominator keeps getting me stuck. If $\zeta=e^{\frac{2\pi i}{7}}$, then the required expression is $$4\left(\frac{\zeta^2}{(\zeta+1)^2}+\frac{\zeta^4}{(\zeta^2+1)^2}+\frac{\zeta^6}{(\zeta^3+1)^2}\right).$$ How do we simplify this result further? REPLY [3 votes]: First notice that since $\cos^2(x)=\cos^2(\pi-x)$, we have $$1+2\left(\frac{1}{\cos(\frac{\pi}{7})^2}+\frac{1}{\cos(\frac{2\pi}{7})^2}+\frac{1}{\cos(\frac{3\pi}{7})^2}\right)=\sum_{k=0}^6 \frac{1}{\cos(\frac{k\pi}{7})^2}$$ Now, $x\mapsto 2x$ is a bijection of the integers mod $7$, so we may make the summands $\cos(\frac{2\pi k}{7})^{-2}$. Using $\cos(\frac{2\pi k}{n})=(\zeta^k+\zeta^{-k})/2$ where $\zeta=e^{2\pi i/n}$ combined with the geometric sum formula (valid for odd $n$) $$\frac{a^n+b^n}{a+b}=\sum_{r=0}^{n-1} (-1)^r a^{(n-1)-r}b^r,$$ and the fact that for $n$th roots of unity $\xi$, $$\sum_{k=0}^{n-1} \xi^k =\begin{cases} n & \xi=1 \\ 0 & \xi\ne1 \end{cases} $$ we may derive $$\sum_{k=0}^{n-1}\frac{1}{\cos(\frac{2\pi k}{n})^m}=\sum_k \left(\frac{2}{\zeta^k+\zeta^{-k}}\right)^m=\sum_k \left(\frac{\zeta^{nk}+\zeta^{-nk}}{\zeta^k+\zeta^{-k}}\right)^m $$ $$=\sum_k\left(\sum_{r=0}^{n-1}(-1)^r \zeta^{-(2r+1)k}\right)^m=\sum_k \sum_{\substack{r_1,\cdots,r_m \\ \sum r_i=r}}(-1)^r\zeta^{-(2r+m)k}$$ $$ =\sum_{\substack{r_1,\cdots,r_m \\ \sum r_i=r}} (-1)^r \sum_k (\zeta^{-2r-m})^k=n(A-B).$$ Therefore, in conclusion, we have the following theorem. $$\sum_{k=0}^{n-1} \frac{1}{\cos(\frac{2\pi k}{n})^m}=n(A-B)$$ where $A$ and $B$ count the solutions to $r_1+\cdots+r_m\equiv -m/2$ mod $n$ with $\sum_i r_i$ even and odd respectively (and $0\le r_1,\cdots,r_m\le n-1$).<|endoftext|> TITLE: Why is $\{\{1\}\}$ not equal to $\{1,\{1\}\}$? QUESTION [6 upvotes]: Determine whether each of these pairs of sets are equal$$A = \{\{1\}\} \qquad \qquad B = \{1, \{1\}\}$$ I believe $A$ is equal to $B$ because all elements in $A$ are in $B$, but the answer says that it's not. REPLY [43 votes]: Think of $A$ as a bag which contains within it another smaller bag with a one in it. $A=\underbrace{\{~~~~~~~\overbrace{\{1\}}^{\text{second bag}}~~~~~~~~\}}_{\text{first bag}}$ On the other hand, $B$ is a bag which contains in it not only a second bag with a one in it, but also a one which is loose. $B=\underbrace{\{~~~~~~~~\overbrace{\{1\}}^{\text{second bag}}~~~~~\overbrace{1}^{\text{this too}}~~~~~~~\}}_{\text{first bag}}$ $1\in B$ but $1\not\in A$. There is no "loose 1" in $A$; there is only a bag with a one in it in $A$. Thus, $A\neq B$. REPLY [27 votes]: You're correct that all elements in $A$ are in $B$, but not the other way around - $B$ includes the element $1$, but $A$ only has $\{1\}$. Think of it like boxes - $B$ is a box that includes one item and also a box that itself contains one item; $A$ is just a box containing a box containing an item.<|endoftext|> TITLE: Have I found all the numbers less than 50,000 with exactly 11 divisors? QUESTION [17 upvotes]: The math problem I am trying to solve is to find all positive integers that meet these two conditions: they have exactly 11 divisors, and they are less than 50,000. My starting point is that a number with exactly 11 divisors is of the form: $c_1p_1 * c_2p_2 * c_3p_3 * ...
* c_{11}p_{11}$ where: $c_i$ is an integer and $p_i$ is prime, and noting that 1 can be a divisor but can only be included once in the list of divisors. Therefore the smallest such number is: $1*2^{10} = 1024$ The next such number would be: $2^{11} = 2048$ So far $c_i$ has been 1, but I will now start letting some of them be 2 until I reach the upper bound of 50,000. This gives me: $2^{12}$, $2^{13}$, $2^{14}$ and $2^{15}$ (as $2^{16} > 50,000$) I feel like I'm onto a useful pattern. So I will start using 3s as well (again until I reach the upper bound): $3*2^{10}$, $3^2*2^9$, $3^3*2^8$, $3^4*2^7$, $3^5*2^6$, $3^6*2^5$, $3^7*2^4$ Now 4s: $4*2^{10}$, $4^2*2^9$, $4^3*2^8$, $4^4*2^7$ Then 5s: $5*2^{10}$, $5^2*2^9$, $5^3*2^8$ Then 6s: $6*2^{10}$, $6^2*2^9$ Then 7s: $7*2^{10}$, $7^2*2^9$ Then 8s: $8*2^{10}$, $8^2*2^9$ Then 9s: $9*2^{10}$, $9^2*2^9$ Then 10s: $10*2^{10}$ I've now reached the point where there's only one number in the sub-sequence. Therefore, I also have: $11*2^{10}, 12*2^{10}, 13*2^{10}, ..., 48*2^{10}$ I now feel I've exhausted my algorithm. However I'm not sure if my answer is correct and complete. So, have I enumerated all such numbers? Or have I missed some or doubled-up? REPLY [7 votes]: To flog a dead horse, consider how many factors $p^k$ has. The factors are: $1,p,p^2,p^3,\ldots,p^k$ That's $k+1$ factors. How many factors does $p^kq^m$ have? $1,p,p^2,p^3,\ldots,p^k$ $q,qp,qp^2,qp^3,\ldots,qp^k$ $q^2,q^2p,q^2p^2,q^2p^3,\ldots,q^2p^k$ $\vdots$ $q^m,q^mp,q^mp^2,q^mp^3,\ldots,q^mp^k$ That is $(k+1)(m+1)$ total factors. Now how many factors does $\prod p_i^{k_i}$ have? Well, each prime $p_i$ contributes factors for every power from 0 to $k_i$. That's $k_i+1$ factors. Then each of those factors can be multiplied by the powers of any other prime, $\{1,p_j,p_j^2,\ldots,p_j^{k_j}\}$. The total number of combinations is therefore $\prod (k_i +1)$. So if $c = \prod p_i^{k_i}$ then $c$ has $\prod (k_i+1) =11$ factors. As 11 is prime, $11 = k+1$, so $c=p^{10}$ for some prime $p$. The only such number in range is $c=2^{10}=1024$, which has 11 factors: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024.<|endoftext|> TITLE: Definition of Ordinals in Set Theory in Layman Terms QUESTION [5 upvotes]: I've taken a huge interest in the mathematical concept of infinity and have often contemplated it over the years. But the fundamental concept of set-theoretic ordinals continues to evade my understanding. The questions below comprise (more or less) the gaps in my comprehension of the mathematical infinite: Ordinal numbers in general (1st, 2nd, 3rd, 4th...) are entirely different from ordinal numbers in set theory, correct? I understand that set theory ordinals are basically sets that contain a least element by definition. But, is it necessary for the elements of an ordinal to be strictly in order? For example, must the ordinal 4 be represented as {∅, {∅}, {∅,{∅}}, {∅, {∅}, {∅,{∅}}}} and not as {{∅,{∅}}, ∅, {∅}, {∅, {∅}, {∅,{∅}}}}? The cardinality of ω is א‎0 (please correct me if I'm wrong), but where exactly is the position of ω along the number line? Is it the א‎0th position (so to speak)? I apologize for the naivety of the questions above (honestly, I really can't find a layman explanation of ordinals anywhere on the web. I saw a very good YouTube video though). The objective is to understand the core concept of set theory ordinals (well enough to be able to explain the same to a layman) rather than memorizing formal, mathematical definitions with little to no true comprehension of the same. Thanks in advance!
REPLY [3 votes]: To understand ordinals you first need to understand what a well-order is. Well-orders are a type of strict partial orders which satisfy the following axiom: Every non-empty subset has a minimum. This implies that well-orders are linear orders, and that they look kinda like the natural numbers. The most important property of well-orders is that we can use them for inductive constructions and proofs, much like the natural numbers. Now, when we want to do something by induction and use some well-ordered set, the specific set does not matter. What matters is what the set "looks like". In mathematical terms we care about its order type, rather than its actual elements. Two partially ordered sets have the same order type if there is an order isomorphism between them: a bijection between the two sets which preserves the order attached to each set. Let's consider a quick example: $\Bbb N$ and $\Bbb N\cup\{-42\}$, both ordered by the usual order of the integers, are isomorphic: they have the same order type. The isomorphism can even be given explicitly, $$f(x)=\begin{cases}0 & x=-42\\ x+1 & x\in\Bbb N\end{cases}$$ I will let you check and see that this is an isomorphism. Two important theorems about well-ordered sets are these: If $A$ and $B$ are two well-ordered sets, and there is an order isomorphism from $A$ into some $B'\subseteq B$ and from $B$ into some $A'\subseteq A$, then in fact there is an isomorphism between $A$ and $B$. Moreover, this isomorphism is unique. This is a non-trivial fact. Take a look at $\Bbb Q$ and $\Bbb Q\cap[0,1]$. We can embed each into the other, but they are not isomorphic: one of them has a minimum and maximum and the other does not. And even if we consider a simpler example, $\Bbb Z$ and $\Bbb Z$ itself, there are many ways to move the elements around while preserving the order. If $A$ and $B$ are two well-ordered sets, then either $A$ is isomorphic to an initial segment of $B$ or $B$ is isomorphic to an initial segment of $A$. (And this isomorphism is unique, as a consequence of the previous theorem.) Again, this is nontrivial. Consider $\Bbb Z$ and $\Bbb Q$. Neither one is isomorphic to the other, and neither one is isomorphic to an initial segment of the other. So what do these two theorems tell us? They tell us that well-orders are very robust: they have great comparability properties, and so if we only consider equivalence classes of well-orders under isomorphism, this induces a natural ordering between the classes, which itself happens to be a well-ordering. That's great! But now we have a mild problem. Other than the empty order, there is a proper class of well-ordered sets of each order type. Just note that for the order type of "one element", every singleton can be a candidate for the well-ordered set, and there is a proper class of those. This is not a huge problem, but wouldn't it be nice if we could replace these proper classes by sets? At some point in the early years of set theory, von Neumann suggested that instead of thinking in terms of abstract well-orderings, since they are so nice, we pick canonical representatives from each isomorphism class. And he proved that you can find a unique set which is transitive and well-ordered by $\in$ inside each class. (Here we say that $A$ is a transitive set if whenever $a\in A$, it follows that $a\subseteq A$.) And this is the von Neumann ordinal assignment. It picks from each order type the unique transitive set which is well-ordered by $\in$.
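As a concrete illustration of this assignment in the finite case (a small sketch of my own, anticipating the successor operation described in the next paragraph), the first few von Neumann ordinals can be built literally as nested sets in Python:

    # Illustrative only: finite von Neumann ordinals as frozensets,
    # so that sets can contain sets.
    zero = frozenset()                      # 0 is the empty set

    def successor(alpha):
        """The successor ordinal: alpha union {alpha}."""
        return alpha | frozenset({alpha})

    ordinals = [zero]
    for _ in range(4):
        ordinals.append(successor(ordinals[-1]))

    # Each ordinal is the set of all smaller ordinals, so "n" has exactly
    # n elements, and membership coincides with the order:
    for n, alpha in enumerate(ordinals):
        assert len(alpha) == n
        assert all(beta in alpha for beta in ordinals[:n])
    print("0, 1, 2, 3, 4 built as transitive sets ordered by membership")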
Then a magical thing happens: it turns out that $0=\varnothing$, and the successor of $\alpha$ is $\alpha\cup\{\alpha\}$, and that if $\{\alpha_i\mid i\in I\}$ has been defined, and $I$ is a well-ordered set without a maximal element, then $\bigcup\{\alpha_i\mid i\in I\}$ is the limit of the $\alpha_i$'s. And that gives us a very robust structure for the von Neumann ordinals: $\alpha=\{\beta\mid\beta<\alpha\}$, where $<$ here indicates "isomorphic to a proper initial segment". Let me digress now, and since you brought $\aleph_0$ into the discussion, talk about cardinals for a moment. Cardinals are also isomorphism classes, like order types, only here we use just plain ol' bijections without requiring that they preserve any structure. In the days of yore cardinals were also proper classes (except that of the empty set). We would say that $A$ has cardinality $\aleph_0$ if it has a bijection with the natural numbers, for example. As set theory grew, we realized that if $A$ can be well-ordered, then there is a smallest order type of such a well-ordering (because the order types of well-ordered sets have a well-ordering themselves). This meant that we can assign to a set that can be well-ordered the least ordinal that can be put in bijection with that set. And after you have the von Neumann ordinals, this points at a specific set, too. Of course, assuming the axiom of choice every set can be well-ordered and that's fine, but not assuming choice, we need to find a better solution, and I won't discuss it here. We have since come to confuse cardinals with ordinals on a regular basis. This is fine if all parties involved are well aware of the differences and the implicit contexts and types. But I would generally advise against it (and I try my best to distinguish in my work between the cardinal and the ordinal, whenever possible). The two systems are equipped with addition, multiplication and exponentiation. And they often disagree. $\aleph_0+\aleph_0=\aleph_0$ as cardinal addition, but $\omega+\omega\neq\omega$ as ordinal addition. So talking about $\aleph_0$ as if it were an ordinal is technically not a mistake, but it can lead others to mistaken conclusions (I recall several occasions on which I struggled with such notation; and historically we can find examples of people who didn't understand implicit contexts and made mistakes). Now, you asked whether or not ordinals denote "first" and "second" and "$\alpha$th". And the answer is yes. An ordinal is the answer to "how long is the queue to the bathroom". Cardinals, however, answer the question "how many people are in the queue to the bathroom". As long as the queue is finite, the answers are the same. But if the queue has order type $\omega$, then every person has to wait only a finite time before using the facilities; but if you join at the end of the queue, we didn't increase the "how many" answer, yet you clearly have to wait an infinite time before you can get to the bathroom. So using $\aleph_0$ to denote a location in the queue is not ideal, as I remarked. Sure, you can insist that $\aleph_0$ and $\omega$ are both designated by the same set, therefore you can talk about the $\aleph_0$th ordinal being $\omega$. But it is much better to separate in your mind the two notions of cardinals and ordinals, at least until you have a better grasp of the two.<|endoftext|> TITLE: Counting: how many ways of climbing a stair? QUESTION [9 upvotes]: You are climbing a staircase. At each move, you can climb either $1$ step or $2$ steps.
Say the staircase has height $3$. You can climb it in $3$ ways $(1-1-1,\ 1-2,\ 2-1)$. Say the staircase has height $4$; you can climb it in $5$ ways. Given a staircase of height $n$, can you figure out in how many ways you can climb it? Attempt: This is actually a programming problem. I have already written a recursive C++ solution, but I just don't know how to verify my program using mathematical skills. I feel this is not a complicated math problem, but I couldn't solve it. So I am asking for your help. REPLY [12 votes]: Sure, that code looks fine. When I run my own version of the code, I get the Fibonacci sequence— do you? $$1,\ 2,\ 3,\ 5,\ 8,\ 13,\ 21,\ \cdots$$ This makes sense: suppose $F_n$ represents the number of ways of climbing $n$ steps. If we must climb $n$ steps, we have two choices: we can take 1 step first, then we will have $F_{n-1}$ choices. Or we can take 2 steps first, then we will have $F_{n-2}$ choices. In other words, $F_{n} = F_{n-1} + F_{n-2}$ total choices. As a special base case, we have that $F_0 = 1$ (the base case in your program), and $F_1 = 1$. REPLY [3 votes]: This is a typical example of the Fibonacci sequence. To reach, say, the $N^{th}$ step, you need to first get to either the $(N-1)^{th}$ or the $(N-2)^{th}$ step. This is true for all $N \in \{1,2,3,\ldots\}$. It is thus a recursion, as is evident from your code: if (numStairs-BIG>=0) return CountWays(numStairs-SMALL)+CountWays(numStairs-BIG); else return CountWays(numStairs-SMALL);<|endoftext|> TITLE: If domain of $f(x)$ is $[-1,2]$ then what will be the domain of $f([x]-x^2+4)$ $?$ QUESTION [5 upvotes]: If the domain of $f(x)$ is $[-1,2]$, then what will be the domain of $f([x]-x^2+4)$? Here $[.]$ denotes the greatest integer function. Attempt: since the domain of $f(x)$ is $[-1,2]$, for $f([x]-x^2+4)$ we need $-1\le[x]-x^2+4\le2$ $\Rightarrow x^2\le[x]+5$ and $x^2\ge[x]+2$. Solving the first inequality, as $x^2$ is always positive, $x\ge-5$. Now I can start taking intervals of $x$ and solve them, but this brute-force method is not taking me anywhere near the correct answer. Can someone explain how this problem is solved? Please give an elaborate solution. REPLY [2 votes]: An $x\in{\mathbb R}$ is admissible if $-1\leq\lfloor x\rfloor-x^2+4\leq2$, or if $$2\leq x^2-\lfloor x\rfloor\leq 5\ .$$ Since for any $x$ one has $x-1<\lfloor x\rfloor\leq x$, an admissible $x$ necessarily has to fulfill $$1\leq x^2-x\leq 5\ .$$ The auxiliary function $g(x):=x^2-x$ is $\geq6$ if $x\leq-2$ or $x\geq3$, and $g(x)\leq0$ for $0\leq x\leq1$. It follows that admissible $x$ can only be found in the two intervals $\>]-2,0[\>$ and $\>]1,3[\>$. This means that we have to treat separately the intervals $$J_1:=\>]-2,-1[\>,\quad J_2:=\>[-1,0[\>,\quad J_3:=\>]1,2[\>,\quad J_4:=\>[2,3[\>\ .$$ In each of these the term $\lfloor x\rfloor$ is constant, so that it should not be too difficult to locate the truly admissible $x$ in these intervals.<|endoftext|> TITLE: Weighted War - Game of Mind and Probability QUESTION [6 upvotes]: Weighted War is a game of bidding, where: Both players have cards valued from $1$ to $11$ in their hands. There is a third pile of cards from $1$ to $11$, face down on the table and shuffled, with one random card removed from it at the beginning of the game. At the beginning of each turn, one random card from the table pile is turned face up, and the players offer one of their own cards face down. When both players have decided on their bid card, the cards are flipped and the higher value takes the table card.
The bid cards are then put aside, and a new turn begins. If the bid cards are equal, they are put aside and the players start the next turn by flipping the next table card; this turn they bid for both cards. If the equal value repeats, they continue to add table cards to the bid pile until someone wins it. (If both players run out of bidding cards and the bid pile hasn't been won yet, it goes to no one and stays aside.) When all cards are won, players count their points by adding the values of the table cards they won. The winner is the one with more points. I'm wondering what would be the optimal strategy that maximizes your chances of winning and minimizes your chances of losing. I could find barely anything on this game online. One trivial thing is that playing $1$ doesn't make sense, since there is one card fewer on the table than in your hand. Also, playing a random card won't be any good for you, since cases like bidding $11$ for a low-valued card will rarely have any good effect for you. The video also mentions that they found it's best to play the same-valued card as the table card, but I couldn't find any proof of that. A counter to that would be playing one card higher, and a counter to that would be occasionally sacrificing some low cards to gain an advantage in the endgame. Anyway, I'm also interested in how much of an effect luck has here. Pattern: I attempted to develop the optimal strategy for when there are only $2$, $3$ or $4$ cards, in hopes of helping me find a strategy for the $11$-card game. Assuming both players use the optimal strategy: For the $2$-card case, there is no point in playing $1$, so both play $2$ and end up in a draw. For the $3$-card case, a draw is forced $\frac{2}{3}$ of the time; the remaining $\frac{1}{3}$ happens when the first table card is $2$, and then both players have an equal chance of either winning, losing or ending up in a draw again. That depends on the table card that was removed at the beginning of the game. For the $4$-card case, a draw is forced $\frac{3}{4}$ of the time; the remaining $\frac{1}{4}$ happens when the first table card is $4$. A draw should also be forced then, since $4$ is the best choice to play for both players, but if a $4$ is countered with a $2$ then both players again have an equal chance of either winning, losing or ending up in a draw if they continue to play perfectly. ($3$ always beats $2$ and $4$ always beats $3$, if the rest of the choices are the best possible from both players.) That means the safest option in this $\frac{1}{4}$ case is $4$, and it is also the best option against random play; but that always results in a draw if both players play perfectly. Thus, if both perfect players play a set of games until the match is resolved, and observe that they both keep drawing with a $4$, one might attempt to break the draw chain by playing $2$ and give both players an equal chance to resolve the match, without needing to worry that the opponent might suddenly play $3$; does that make it the optimal strategy? But if the other player can predict your $2$ and counter with a $3$, he beats your method. That's why I'm not sure about the strategy for the $4$-card case if players are going for a win rather than accepting the draw as the best way of minimizing their losing chances, so I decided to post a separate question. All in all, I have solved these $2$-, $3$- and $4$-card cases by observing each possible game state to determine what would be the optimal play.
If the pattern holds, the original game of $11$ cards should have an optimal strategy which, if used by both players, always results in a draw and/or equal chances for both players to win, lose or end up in a draw. But I still don't know how to create a general optimal strategy other than evaluating all possible states by brute force. I wonder if this can be solved by an optimal set of rules rather than a brute-force approach. (Either way, I would need a proof of the solution.) REPLY [2 votes]: Some work has been done on this game, under the name Goofspiel or Game of Pure Strategy. As of writing, that Wikipedia article references Rhoads and Bartholdi 2012 (doi:10.3390/g3040150), which claims to have found the optimal strategy for maximizing score. Unfortunately, their solution is numerical, and it's hard to pick out much intuitive advice for a human player.<|endoftext|> TITLE: How to get the idea of the formula for the mean value property for the heat equation QUESTION [11 upvotes]: From the mean-value property for Laplace's equation, we have the following: $$ u(x)=\frac{1}{a(n)r^n}\int_{B(x,r)}u\,dy. $$ But for the mean-value property of the heat equation, Evans' book defines a heat ball: $$ E(x,t,r)=\left\{(y,s)\in R^{n+1}\bigg|s\leq t, \Phi(x-y,t-s)\geq \frac{1}{r^n}\right\}. $$ Then the theorem claims that if $u\in C^2_1(U_T)$ solves the heat equation, then $$ u(x,t)=\frac{1}{4r^n}\iint\limits_{E(x,t,r)}u(y,s)\frac{|x-y|^2}{(t-s)^2}\,dy\,ds. $$ My question is: is there any explanation (or a guessed one) for the discovery of this theorem? The mean-value property is intuitive. But how can we know that we can achieve the goal by taking the integrand to be the product of $u(y,s)$ with such a strange factor, $\frac{|x-y|^2}{(t-s)^2}$, over a nonintuitive heat ball? REPLY [3 votes]: I highly recommend you skim through the original paper, Fulks, W., A mean value theorem for the heat equation, Proc. Am. Math. Soc. 17, 6-11 (1966). ZBL0152.10503. The idea is to find an analogue of Green's second identity (with $\Delta$ replaced by the heat operator $H = \Delta - \partial_t$), so by choosing a suitable region of integration (not too surprisingly we will choose the heat ball $E$, as it reflects the symmetry inherent in the heat equation) and some suitable test functions we arrive at \begin{equation} u(x, t) = \int_{\partial E(x,t,1/c)} Q(x-y,t-s)u(y,s)\;d\mathcal{H}^1\end{equation} where $Q(x,t)=cx^2[4x^2t^2 + (2t-x^2)^2]^{-1/2}$ (see the paper for the details).
Personally I prefer this derivation than that presented by Evans, as it seems we may apply this method (Green's 2nd identity) to derive mean value formulas for a wide class of PDE.<|endoftext|> TITLE: solve this 1999 problem with geometry QUESTION [9 upvotes]: if $\bigodot P\bigcap \bigodot Q=A,B$,and the common tangent is $C,D$,and $E\in BA$,and $EC\bigcap \bigodot P=F,ED\bigcap \bigodot Q=G$,and if $\angle FAH=\angle HAG$ show that $$\angle FCH=\angle GDH$$ it seem hard, I can't get this answer For Weijie Chen answer,then I have add a fig,let we clear understand REPLY [7 votes]: It's not that difficult but it took me 1h. Here's my solution (Sadly I don't even found the contest at AoPS): Let $M=CD\cap FG$, $X=MA\cap \bigodot P$ and $Y=MA\cap \bigodot Q$ different from A. Observe that if $\bigodot (CHD)$ is tangent to $FG$ we whould finish (angle chasing). Notice that the tangent from $M$ to any circle that passes through $CD$ has the same lenght. Because it is the power from the point $M$. And nos I claim that the lenght is $MA$. Proof: $MC^2=MX\cdot MA$ and $MD^2=MA\cdot MY$ it's easy to show that $XCDY$ is cyclic so we have $MX\cdot MY=MC\cdot MD$ hence $MC\cdot MD=MA^2$ So if we proof that $MA=MH$ we would finish. That is easy since $\bigodot(FAG)$ is tangent to $MA$ because $CDFG$ is cyclic (quite obvious) hence $MC\cdot MD=MA^2=MF\cdot MG$. This means that $\angle FAM=\angle AGF$ now by angle chasing we can show that $\angle MHA=\angle MAH$ hence $MH=MA$ and done. If there's anything that is unclear please let me know.<|endoftext|> TITLE: Differentiating $\mbox{tr} (ABA^TC)$ w.r.t. $A$ QUESTION [5 upvotes]: Why is $\nabla_A \mbox{tr} (ABA^TC) = CAB + C^TAB^T$? Here $A, B, C, D$ are all $n \times n$ matrices. $$\nabla_A f(A) = \left[\begin{matrix} \frac{\partial f}{\partial A_{11}}... \frac{\partial f}{\partial A_{1n}}\\ ...\\ \frac{\partial f}{\partial A_{n1}}... \frac{\partial f}{\partial A_{nn}}\\ \end{matrix}\right]$$ I tried to prove it in this way: $$\begin{align} \nabla_A \mbox{tr} (ABA^TC) &= \nabla_Atr (BA^TC)A\\ &= \nabla_A \mbox{tr} DA ......let \ D=BA^TC\\ &= \nabla_A \mbox{tr} AD\\ &=D^T\\ &=B^TAC^T\end{align}$$ Since $B^TAC^T \neq CAB + C^TAB^T$, there must be something wrong in my derivation. How to prove this property? 
REPLY [4 votes]: Given $\mathrm A, \mathrm B, \mathrm C \in \mathbb R^{n \times n}$, define $f : \mathbb R^{n \times n} \to \mathbb R$ by $$f (\mathrm X) := \mbox{tr} (\mathrm A \mathrm X \mathrm B \mathrm X^T \mathrm C)$$ The directional derivative of $f$ in the direction of $\mathrm V$ at $\mathrm X$ is $$\begin{array}{rl} D_{\mathrm V} f (\mathrm X) &= \displaystyle\lim_{h \to 0} \frac{1}{h} \left( f (\mathrm X + h \mathrm V) - f (\mathrm X) \right) \\\\ &= \mbox{tr} (\mathrm A \mathrm V \mathrm B \mathrm X^T \mathrm C) + \mbox{tr} (\mathrm A \mathrm X \mathrm B \mathrm V^T \mathrm C)\\\\ &= \mbox{tr} ((\mathrm A^T \mathrm C^T \mathrm X \mathrm B^T)^T \mathrm V) + \mbox{tr} (\mathrm V^T \mathrm C \mathrm A \mathrm X \mathrm B )\\\\ &= \langle \mathrm A^T \mathrm C^T \mathrm X \mathrm B^T , \mathrm V \rangle + \langle \mathrm V, \mathrm C \mathrm A \mathrm X \mathrm B \rangle\\\\ &= \langle \mathrm A^T \mathrm C^T \mathrm X \mathrm B^T + \mathrm C \mathrm A \mathrm X \mathrm B, \mathrm V \rangle\end{array}$$ Hence, $$\nabla_{\mathrm X} f (\mathrm X) = \mathrm A^T \mathrm C^T \mathrm X \mathrm B^T + \mathrm C \mathrm A \mathrm X \mathrm B$$ If $\mathrm A = \mathrm I_n$, then $$\nabla_{\mathrm X} f (\mathrm X) = \color{blue}{\mathrm C^T \mathrm X \mathrm B^T + \mathrm C \mathrm X \mathrm B}$$<|endoftext|> TITLE: Closed form of the summation $\sum_{r=1}^{n}\frac{r^24^r}{(r+1)(r+2)}$ QUESTION [5 upvotes]: I have the following summation: $$\displaystyle\sum_{r=1}^{n}\frac{r^24^r}{(r+1)(r+2)}.$$ I have to find the closed form, or a general formula, for the sum of this series. I know summation up to telescoping series and some special series like those in AP or GP, but I have no idea how to begin on this problem. Thanks for any help!! REPLY [5 votes]: Since you requested a more detailed answer than that of @N.S, here you go. We start with the series you give, $$\displaystyle\sum_{r=1}^{n}\frac{r^24^r}{(r+1)(r+2)}$$ We now want to simplify this into a series you can deal with; the first step will be to run partial fraction decomposition on the term $$\frac{r^2}{(r+1)(r+2)}=\frac{r^2+3r+2-3r-2}{(r+1)(r+2)}=\frac{(r+1)(r+2)-3r-2}{(r+1)(r+2)}$$ $$=1-\frac{3r+2}{(r+1)(r+2)} =1+\frac{1}{r+1}-\frac{4}{r+2}$$ Let me know if you need help with this partial fraction decomposition. We can now rewrite the above series as $$\displaystyle\sum_{r=1}^{n}\left(4^r+\frac{4^r}{r+1}-\frac{4^{r+1}}{r+2}\right)=\displaystyle\sum_{r=1}^{n}\left(4^r\right)+\sum_{r=1}^{n}\left(\frac{4^r}{r+1}-\frac{4^{r+1}}{r+2}\right)\tag{1}$$ The first series is geometric, which you say you know how to evaluate.
(It comes out to be $\frac{4}{3}(4^n - 1)$.) To evaluate the second series in $(1)$ we will look at the expanded terms: $$\sum_{r=1}^{n}\left(\frac{4^r}{r+1}-\frac{4^{r+1}}{r+2}\right)$$ $$= \left(\frac{4}{2}\color{red}{-\frac{4^2}{3}}\right)+\left(\color{red}{\frac{4^2}{3}}\color{green}{-\frac{4^3}{4}}\right)+\left(\color{green}{\frac{4^3}{4}}\color{purple}{-\frac{4^4}{5}}\right)+\cdots+\left(\color{olive}{\frac{4^{n-1}}{n}}\color{fuchsia}{-\frac{4^n}{n+1}}\right)+\left(\color{fuchsia}{\frac{4^n}{n+1}}\color{navy}{-\frac{4^{n+1}}{n+2}}\right)$$ As the colored terms above hopefully make clear, almost all the terms will cancel out with their neighboring term except for the first and last terms (this is called a telescoping series), and so this series comes out to be $$\sum_{r=1}^{n}\left(\frac{4^r}{r+1}-\frac{4^{r+1}}{r+2}\right) = \frac{4}{2}-\frac{4^{n+1}}{n+2}=2-\frac{4^{n+1}}{n+2}$$ We can now plug this back into $(1)$ and rewrite your original series as $$\frac{4}{3}(4^n - 1)+2-\frac{4^{n+1}}{n+2}= 4^{n+1}\left(\frac{1}{3}-\frac{1}{n+2}\right)+\frac{2}{3}$$ Note that, as $n\to\infty$, this expression becomes closer and closer to $$4^{n+1}\left(\frac{1}{3}-0\right)+\frac{2}{3}=\frac{4^{n+1}+2}{3}$$ and so the partial sums clearly diverge to $+\infty$ as $n\to\infty$.<|endoftext|> TITLE: ⋇ "Division Times" operator in Unicode (U+22C7)? QUESTION [6 upvotes]: I found this maths operator in Unicode: ⋇ It is called "Division Times" (U+22C7). Does it behave like ±? For example: 3 ± 2 means it is an element of {1, 5}. So 3 ⋇ 2 means it is an element of {1.5, 6}? REPLY [10 votes]: The intent ought to be what you said. So $a ⋇ b$ is short for "$a \ b$ or $a \ b^{-1}$"; that is, it is the multiplicative analogue of $\pm$. Yet a more common way to write $a ⋇ b$ is just $a \ b^{\pm 1}$. In addition, I have never heard of this operator; all the early search hits are for Unicode tables, not math, and at least to me it looks quite similar to an asterisk at small font size. In short, it seems more like an artifact of getting a symbol set somehow "complete" rather than something that is actually used. That said, I just learned that there is a LaTeX command (needs amssymb) and also a MathJax command for it: \divideontimes, giving $\divideontimes$.<|endoftext|> TITLE: There exists a regular language A such that for all languages B, A ∩ B is regular. QUESTION [5 upvotes]: There exists a regular language A such that for all languages B, A ∩ B is regular. The above statement is true, but I couldn't construct or find a proof. It is an objective-type question asked here, to determine whether the given statement is true or false. I want to know how to conclude that the given statement is true. REPLY [27 votes]: Yes, that's true. Consider $A=\emptyset$ (which is regular), then $\emptyset \cap B=\emptyset$ (which is regular). REPLY [25 votes]: If $A$ is a finite language, then it is regular and meets your condition. On the other hand, if $A$ is any infinite regular language, then since it is countably infinite it has uncountably many ($2^{\aleph_0}$) sublanguages. Every regular language is defined by a finite regular expression (of which there are only $\aleph_0$), so there will be sublanguages of $A$ which are not regular. So finiteness is necessary and sufficient.<|endoftext|> TITLE: Can the derivative prove my function has only one root? QUESTION [5 upvotes]: I have a function: $$f(x)=x-\ln(x^2+1)+2$$ I want to prove my function has exactly one root.
If I differentiate: $$f'(x)=1-\frac{2x}{x^2+1}$$ I can see this value is nonnegative for every $x$ (it vanishes only at $x=1$). Does this prove that my function is strictly increasing? All the theorems I know on this topic apply to a closed interval, so I don't know if my reasoning is valid. Should I also prove that the function takes both positive and negative values? REPLY [3 votes]: $g(x)=\log(1+x^2)$ is a convex function on $(-1,1)$ and a concave function on $(-\infty,-1)$ and $(1,+\infty)$, since: $$ g''(x) = 2\frac{1-x^2}{(1+x^2)^2}. $$ By computing the behaviour in a neighbourhood of $x=1$, we have that $g(x)\leq x$ for any $x\geq 0$, hence the only solutions of $g(x)=x+2$ have to belong to $\mathbb{R}^-$. Over $\mathbb{R}^-$, $g(x)$ is a decreasing function and $x+2$ is an increasing function: by comparing the behaviours in a left neighbourhood of $x=0$ and as $x\to-\infty$, we get that $g(x)=x+2$ has a unique (negative) real solution. A few steps of Newton's method with starting point $-1$ give that such a root is $\approx -1.15369622$.<|endoftext|> TITLE: Proof of the summation $n!=\sum_{k=0}^n \binom{n}{k}(n-k+1)^n(-1)^k$? QUESTION [8 upvotes]: $$n!=\sum_{k=0}^n \binom{n}{k}(n-k+1)^n(-1)^k$$ Could anyone give a proof of the above equation? Thanks in advance! REPLY [3 votes]: Suppose we seek to evaluate $$\sum_{k=0}^n {n\choose k} (-1)^k (n-k+1)^n.$$ Introduce $$(n-k+1)^n = \frac{n!}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} \exp((n-k+1)z) \; dz.$$ We get for the sum $$\frac{n!}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} \exp((n+1)z) \sum_{k=0}^n {n\choose k} (-1)^k \exp(-kz) \; dz \\ = \frac{n!}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} \exp((n+1)z) (1-\exp(-z))^n \; dz \\ = \frac{n!}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{n+1}} \exp(z) (\exp(z)-1)^n \; dz.$$ This is $$n! [z^n] \exp(z) (\exp(z)-1)^n.$$ Now $$\exp(z)-1 = z + \frac{z^2}{2} + \frac{z^3}{6} +\cdots$$ and hence $$(\exp(z)-1)^n = z^n + \cdots.$$ Therefore the result is $$n! [z^0] \exp(z) = n!.$$<|endoftext|> TITLE: How to remember Stolz Angle correctly QUESTION [5 upvotes]: The Stolz angle is a condition used in Abel's theorem: $$|1-z|\leq M(1-|z|)$$ Q1) How do I intuitively remember (and understand) this? Q2) In particular, is there a quick way to see that $$(1-|z|)\leq M|1-z|$$ is the wrong condition? Thanks for any help. REPLY [12 votes]: Hmm, this is a slightly "soft" question... A Stolz angle $S$ is also known as a "non-tangential approach region", the point being that if you approach $1$ along a curve $\gamma\subset S$ then $\gamma$ is not tangent to the unit circle at $1$. Imagine a curve $\gamma$ in the unit disk that approaches $1$, but which is tangent to the unit circle at $1$. Draw a picture. Points on that curve close to $1$ are much closer to the boundary than they are to $1$, right? Tangential approach to the boundary says $1-|z|$ is much smaller than $|1-z|$. So non-tangential approach says the opposite, that $1-|z|$ is not much smaller than $|1-z|$, which is to say $|1-z|\le M(1-|z|)$. Or look at it this way: A lot of formulas are simpler for the upper half plane $y>0$. A non-tangential approach region (to the origin) in the upper half plane is defined by $$|x|\le My,$$ a cone with vertex at the origin.<|endoftext|> TITLE: Prove that $\sum^{n-1}_{i=1}i^{(n-1)} \equiv -1$ (mod $n$) for all prime $n\in\mathbb{N}$. QUESTION [7 upvotes]: Prove that $\sum^{n-1}_{i=1}i^{(n-1)} \equiv -1$ (mod $n$) for all prime $n\in\mathbb{N}$. I'm having a difficult time proving this problem. I was able to verify that it works for prime $n$ up to 5000 using Python.
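For reference, a minimal Python sketch of this kind of check (an illustration, not necessarily the exact script that was used):

    # Check: sum_{i=1}^{n-1} i^(n-1) == -1 (mod n) for every prime n below 5000.

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    for n in range(2, 5000):
        if is_prime(n):
            total = sum(pow(i, n - 1, n) for i in range(1, n)) % n
            assert total == (n - 1) % n, f"fails at n = {n}"   # -1 mod n is n - 1
    print("holds for every prime n below 5000")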
I'm assuming induction would be useful in this case? Would love some help with this problem! REPLY [7 votes]: It's good that you can get Python to calculate up to 5000 for you, but sometimes what might really help you is to calculate two or three cases by hand to see if you can spot any patterns. Let's try $n = 5$: $1^4 = 1 \equiv 1 \pmod 5$. Since $1^x = 1$ regardless of what $x$ is, we might as well go ahead and regard 1 as the taxi meter drop in this problem for all $n$. $2^4 = 16 \equiv 1 \pmod 5$. $3^4 = 81 \equiv 1 \pmod 5$. $4^4 = 256 \equiv 1 \pmod 5$. These add up to 4, which is one less than 5. Now let's try $n = 6$. I know, that's not prime, but it might still give us an insight. 1 is the taxi meter drop. $2^5 = 32 \equiv 2 \pmod 6$. $3^5 = 243 \equiv 3 \pmod 6$. $4^5 = 1024 \equiv 4 \pmod 6$. $5^5 = 3125 \equiv 5 \pmod 6$. These residues add up to 15, which is $3 \pmod 6$. In some ways, 6 is a little unusual (don't expect $i^{n - 1} \equiv i \pmod n$ whenever $n$ is composite), but we can still learn from this that if $\gcd(i, n) > 1$, then $i^{n - 1} \equiv 1 \pmod n$ is impossible. In fact, $i^{n - 1} \equiv d \pmod n$ where $d$ is either 0 or a residue sharing a nontrivial common factor with $n$. For example, for $n = 8$, if $i$ is even, then $i^7 \equiv 0 \pmod 8$. This is because $8 = 2^3$ and $i$ being even means that $i = 2j$. Then $i^7 = 2^7 j^7$, which is obviously a multiple of 8. So, if $n$ is prime, then all $i$ from 1 to $n - 1$ are coprime to $n$, which means that it's possible that $i^{n - 1} \equiv 1 \pmod n$ for each $i$. In fact, each $i$ does give $i^{n - 1} \equiv 1 \pmod n$, a fact that Pierre de Fermat noticed. I don't know if Fermat himself proved this "little theorem," but plenty of other people have, and the proof is easy enough to follow, though I won't repeat it here. Nor will I repeat the Short One's answer, which will hopefully now be easier to understand. There's only one thing left to address, and that's induction. Many things can be proven with induction, even when it's not the best method of proof. For this particular problem, I think the best method is simply to apply Fermat's little theorem and just let everything fall into place.<|endoftext|> TITLE: When is a stochastic integral a martingale? QUESTION [11 upvotes]: In what follows, let the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ as well as the chosen filtration $(\mathcal{F}_t)_{t \ge 0}$ be known, and let $f$ denote an arbitrary locally bounded progressively measurable process (i.e. bounded on compact intervals and for all $t$ measurable with respect to $\mathcal{B}([0,t])\otimes \mathcal{F_t}$). Consider the process $(Y_t)_{t \ge 0}$ defined by the stochastic integral of $f$ with respect to $(X_t)_{t\ge 0}$: $$Y_t = \int_0^t f(s) \mathrm{d}X_s$$ Which of the following are true? 1. $X_t$ is a martingale $\implies$ $Y_t$ is a local martingale but not necessarily a true martingale. 2. $X_t$ is a locally $L^2-$bounded true martingale $\implies Y_t$ is a locally $L^2$-bounded true martingale. 3. $X_t$ is a local martingale $\implies$ $Y_t$ is a local martingale. 4. $X_t$ is a semimartingale $\implies$ $Y_t$ is a semimartingale. I just want to know which results are true, and then I will supply the proofs on my own later. It seems like 2. only holds when we have the additional condition on $f$ that it is locally $L^2$ bounded. In fact, it seems like I may have had it mixed up: $X_t$ and $Y_t$ need to be globally $L^2$ bounded, i.e.
square integrable, but $f$ only needs to be locally $L^2$ bounded, i.e. square integrable on compact intervals -- do I have this right? I'm still not sure. https://fabricebaudoin.wordpress.com/2012/09/14/lecture-19-stochastic-integrals-with-respect-to-square-integrable-martingales/ EDIT: OK, fixing the assumptions in the manner mentioned above, this is almost certainly true, see Theorem 19 on p.34 of this document: http://math.bu.edu/people/prakashb/Math/stochint.pdf; this also follows from Theorem 6.3 and Lemma 6.4 on p.35 here: https://staff.fnwi.uva.nl/p.j.c.spreij/onderwijs/master/si.pdf; also pp. 137-140 in Revuz and Yor, where being locally bounded also supposedly implies being locally $L^2-$bounded (see the last paragraph of p.140 and consider that $\langle B, B \rangle=t$ and that Brownian motion is a continuous local martingale). So seemingly the condition of being locally bounded is unnecessarily strong in many places where it is used. I am not sure if it can be replaced by locally $L^2-$bounded in all places. Actually a closer reading of Definition 2.1 on p. 137 of Revuz and Yor implies that 2. holds only for square-integrable integrands $f$, not just locally $L^2-$bounded. If we relax the condition to locally $L^2-$bounded, then only Proposition 2.7 on p.140 is applicable (since martingales are local martingales after all), and we can only conclude that the stochastic integral is a local martingale (although the example given between the end of p.139 and the beginning of p.140 about Brownian motion suggests otherwise -- perhaps if both the integrator and the integrand are locally $L^2$-bounded martingales then we get a martingale, but not square-integrable, as the result?) Also it is worth noting that a corollary of the martingale representation theorem and the associativity of the stochastic integral (when applicable) allows us to phrase most of the conditions given in terms of integration w.r.t. $\langle M, M \rangle_s$ as conditions in terms of $s$, i.e. $L^2$-boundedness, because the quadratic variation of Brownian motion is $t$. REPLY [4 votes]: Well, in the form stated above, none of the statements are true, because you're only assuming $f$ to be progressive and not predictable, and you're not assuming that the integrator $X$ has continuous sample paths. I'd say that point (4) is neither true nor false but undefined, as the stochastic integral is not necessarily well-defined for integrands which are only progressive and not predictable. As regards the other three points, as a counterexample, take e.g. $N$ to be a standard Poisson process and let $f(t) = N_t$, $X_t = N_t - t$ and let $(\mathcal{F}_t)$ be the filtration induced by $N$. Then $f$ is locally bounded (by e.g. the sequence of stopping times corresponding to the jump times of $N$), bounded on compacts (because it has cadlag sample paths) and is progressive (because it is cadlag and adapted). Furthermore, the integral is well-defined since $X$ has sample paths of finite variation, so the integral can be defined as a pathwise Lebesgue-Stieltjes integral. It holds that $$ Y_t = \int_0^t f(s) dX_s = \int_0^t (N_{s-} + \Delta N_s) dX_s \\ = \int_0^t N_{s-}dX_s + \sum_{0 < s \le t} \Delta N_s \, \Delta X_s = \int_0^t N_{s-}dX_s + N_t, $$ where the last equality uses $\Delta X_s = \Delta N_s$ and $(\Delta N_s)^2 = \Delta N_s$. The first term is a local martingale, while the second term $N_t$ is increasing and nonconstant, so $Y$ is not a local martingale.<|endoftext|> TITLE: Notation of the square (or other power) of a function $f(x)$ QUESTION [5 upvotes]: How do you notate the square (or other power) of a function $f(x)$? Is it $f^2(x)$ (similar to $\sin^2(x)$ for example), $f(x)^2$ or do you have to use $(f(x))^2$? Thanks in advance.
REPLY [7 votes]: The standard way to write this is $f(x)^2$ (this always means $(f(x))^2$ rather than $f(x^2)$, so you don't have to worry about any ambiguity). You will occasionally see $f^2(x)$, but I would not recommend this--$f^2(x)$ more often means $f(f(x))$. The notation $f^2(x)$ for squaring $f(x)$ is generally used for only a few particular functions (typically ones that you might often want to square and whose argument is often written without parentheses), such as trigonometric functions and $\log x$.<|endoftext|> TITLE: On the separation axiom in a Lawvere or "generalized" metric space QUESTION [5 upvotes]: According to the nLab, Lawvere metric spaces occur rather naturally (that is, as certain enriched categories). A Lawvere metric space is a set $X$ equipped with a function $d : X\times X \to [0,\infty]$, such that: $d(x,x) = 0$ $d(x,y) + d(y,z) \geq d(x,z)$ I don't have any background in enriched category theory, so I can only see the theory of Lawvere metric spaces on an elementary level. Omitting the separation axiom, that is: $$d(x,y) = 0 \Rightarrow x = y$$ seems to cause some weird things. Consider the category of Lawvere metric spaces, where a morphism $f : X \to Y$ is a function such that: $$d(x,y) \geq d(fx,fy)$$ It is well known that if such a morphism is also an isometry, then $f$ is injective. But it appears that for Lawvere metric spaces you cannot prove that every isometry is even mono in the category of Lawvere spaces. I can only show that these maps are "almost" injective in the sense that: $$fx \cong fy \Rightarrow x \cong y$$ where $x\cong y \Leftrightarrow d(x,y) = d(y,x) = 0$. How can I view this "situation" (of isometries being "almost" mono / injective), such that it seems natural instead of "weird" and unwanted? REPLY [3 votes]: Jean Goubault-Larrecq has written a detailed textbook-level introduction to "Lawvere" metric spaces from a topological point of view in Chapter 6 of his book Non-Hausdorff Topology and Domain Theory. He defines a hemi-metric space to be a set $X$ equipped with a set-function $d\colon X\times X\to[0,\infty]$ such that $d(x,x)=0$ $d(x,y)\leq d(x,z)+d(z,y)$ Recall that a topological space satisfies the $T_0$ separation axiom, i.e., is Kolmogoroff, if and only if for any pair $x\neq y\in X$, there is an open $U$ so that either $x\in U,y\not\in U$, or $x\not\in U,y\in U$. In other words, a topological space is Kolmogoroff if and only if points are "topologically distinguishable". Any hemi-metric space $(X,d)$ has a topology with a basis given by open balls $B_{x,<\epsilon}^d=\{y\in X:d(x,y)<\epsilon\}$ for $\epsilon\in(0,\infty)$. A hemi-metric space $(X,d)$ is $T_0$ if and only if it satisfies $d(x,y)=0=d(y,x)\implies x=y$, in which case Goubault-Larrecq calls it a quasi-metric space. If $(X,d)$ is a quasi-metric space and $(X',d')$ is a hemi-metric space, then isometric embeddings $(X,d)\to(X',d')$ are also topological embeddings. In other words, metric embeddings are topological embeddings if the topology associated to the domain distinguishes points. In fact, the converse holds because $d(x,y)=0=d(y,x)$ implies $d(x,z)\leq d(x,y)+d(y,z)=d(y,z)\leq d(y,x)+d(x,z)=d(x,z)$ and $d(z,x)\leq d(z,y)+d(y,x)=d(z,y)\leq d(z,x)+d(x,y)=d(z,x)$, hence the quotient $X\twoheadrightarrow X/\sim$ by the relation $x\sim y$ if $d(x,y)=0=d(y,x)$ is an isometry $(X,d)\to (X/\sim,d)$ from a hemi-metric space to its "quasi-metrication".
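To make this quotient concrete, here is a small computational sketch (a toy example of my own, not from the book): a three-point hemi-metric in which two points are topologically indistinguishable, together with its $T_0$ quotient.

    import itertools

    # A toy hemi-metric d on points {0, 1, 2}: d[i][i] == 0 and the
    # triangle inequality d[i][j] <= d[i][k] + d[k][j] holds.
    # Points 0 and 1 satisfy d(0,1) = d(1,0) = 0: indistinguishable.
    d = [[0, 0, 1],
         [0, 0, 1],
         [2, 2, 0]]
    n = len(d)
    assert all(d[i][k] + d[k][j] >= d[i][j]
               for i, j, k in itertools.product(range(n), repeat=3))

    # Quotient by x ~ y iff d(x,y) = 0 = d(y,x); by the triangle-inequality
    # argument quoted above, the induced value does not depend on the
    # chosen representatives.
    classes = []
    for x in range(n):
        if not any(d[x][y] == 0 == d[y][x] for cls in classes for y in cls):
            classes.append([z for z in range(n) if d[x][z] == 0 == d[z][x]])

    q = [[d[a[0]][b[0]] for b in classes] for a in classes]
    print(classes)  # [[0, 1], [2]]
    print(q)        # [[0, 1], [2, 0]] : now a quasi-metric (T0 holds)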
The above relationship between isometries and injections, and particularly why isometries are not always injections, can be understood using the theory of topological categories. A quick reference is Section 11 of Oswald Wyler's Lecture Notes on Topoi and Quasitopoi; a more detailed treatment can be found in The Joy of Cats. What I'm writing below is an example application of the theory to hemi-metric spaces. Consider the category $\mathcal Hem$ of hemi-metric spaces and $1$-Lipschitz functions between them, i.e. functions $(X,d)\xrightarrow{f}(X',d')$ such that $d'(f(x),f(y))\leq d(x,y)$. The forgetful functor $\mathcal Set\xleftarrow{U}\mathcal Hem$ is faithful, that is, injective on morphisms, hence establishes $\mathcal Hem$ as a concrete category. $U$ is transportable in the sense that for any bijection $X\overset f\cong X'$ and a hemi-metric structure $(X',d')$ there exists a unique hemi-metric structure $(X,d)$ with $(X,d)\overset f\cong (X',d')$ an isomorphism in $\mathcal Hem$. This is the categorical version of the statement that if $f$ is an isomorphism in $\mathcal Hem$, then $f$ is an isometry. Given a collection of hemi-metric spaces $(X'_i,d'_i)$ and a collection of set functions $X\xrightarrow{f_i}X'_i$ (i.e. a source for $\mathcal Set\xleftarrow{U}\mathcal Hem$), the hemi-metric structure $(X,d)$ given by $d(x,y)=\sup_id_i'(f_i(x),f_i(y))$ is an initial lift in the sense that a set function $X''\xrightarrow{g}X$ is $1$-Lipschitz for $(X'',d'')\xrightarrow{g}(X,d)$ if and only if each $(X'',d'')\xrightarrow{f_i\circ g}(X'_i,d'_i)$ is $1$-Lipschitz. Indeed $d'_i(f_i(g(x'')),f_i(g(y'')))\leq d''(x'',y'')$ for each $i$ if and only if $d(g(x''),g(y''))\leq d''(x'',y'')$. You should think of this as analogous to having a weakest topology on a set so that a certain collection of set functions to topological spaces will be continuous. Given a collection of hemi-metric spaces $(X_i,d_i)$ and a collection of set functions $X_i\xrightarrow{g_i}X'$ (i.e. a sink for $\mathcal Set\xleftarrow{U}\mathcal Hem$), consider the source consisting of all set-functions $X'\xrightarrow{f_j}(X''_j,d''_j)$ such that $(X_i,d_i)\xrightarrow{f_j\circ g_i}(X''_j,d''_j)$ is $1$-Lipschitz for each $i$. The initial lift $(X',d')$ of this source given by $d'(x',y')=\sup_jd''_j(f_j(x'),f_j(y'))$ is also a final lift of the sink under consideration. This is an analog of the strongest topology on a set so that a certain collection of set functions from topological spaces will be continuous. Note that the best explicit computation of the hemi-metric of the final lift is the inequality $d'(x',y')\leq\inf_{i,x,y:g_i(x)=x',g_i(y)=y'}d_i(x,y)$. Since $\mathcal Set\xleftarrow{U}\mathcal Hem$ is faithful, transportable, and has initial lifts of all sources, it gives $\mathcal Hem$ the structure of a topological category over $\mathcal Set$. Given a set $X$, the initial lift $(X,d)$ of the empty source from $X$ is the indiscrete or trivial hemi-metric space given by $d(x,y)=0$: any set-function from a hemi-metric space to the trivial space is $1$-Lipschitz. Dually, the final lift $(X,d)$ of the empty sink to $X$ is the discrete hemi-metric space given by $d(x,y)=\begin{cases}\infty&x\neq y\\0&x=y\end{cases}$: any set-function out of it to a hemi-metric space is $1$-Lipschitz. We therefore have explicit discrete and trivial functors $D,T\colon \mathcal Set\to\mathcal Hem$ which are by definition the left and right adjoints to the forgetful functor $\mathcal Set\xleftarrow{U}\mathcal Hem$, i.e. $D\dashv U\dashv T$.
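As a sanity check on the initial-lift formula $d(x,y)=\sup_i d_i'(f_i(x),f_i(y))$ (again a toy numerical sketch of my own, not from Wyler's notes; the particular matrices and maps are arbitrary), one can verify on a small example that the lift is a hemi-metric and that each $f_i$ is $1$-Lipschitz:

    import itertools

    # Two target hemi-metrics on {0, 1} (entries d'[i][j]):
    d1 = [[0, 3], [1, 0]]
    d2 = [[0, 2], [5, 0]]

    # A source: two functions f1, f2 from X = {0, 1, 2} to {0, 1}.
    f1 = {0: 0, 1: 1, 2: 1}
    f2 = {0: 0, 1: 0, 2: 1}

    # Initial lift: d(x, y) = sup_i d_i'(f_i(x), f_i(y)).
    X = range(3)
    d = [[max(d1[f1[x]][f1[y]], d2[f2[x]][f2[y]]) for y in X] for x in X]

    # It is again a hemi-metric, and each f_i is 1-Lipschitz by construction.
    assert all(d[x][x] == 0 for x in X)
    assert all(d[x][z] <= d[x][y] + d[y][z]
               for x, y, z in itertools.product(X, repeat=3))
    assert all(d1[f1[x]][f1[y]] <= d[x][y] and d2[f2[x]][f2[y]] <= d[x][y]
               for x, y in itertools.product(X, repeat=2))
    print(d)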
Since $U$ is faithful, it follows that a $1$-Lipschitz function is a monomorphism (resp. epimorphism) in $\mathcal Hem$ if and only if it is a monomorphism (resp. epimorphism) in $\mathcal Set$, i.e. if and only if it is injective (resp. surjective). Furthermore, a $1$-Lipschitz function $(X,d)\xrightarrow{f}(X',d')$ is a strong monomorphism (i.e. an embedding) if and only if the set function $X\xrightarrow{f}X'$ is a strong monomorphism and $(X,d)\xrightarrow{f}(X',d')$ is coarse in the sense that $(X,d)$ is the initial lift for the source $X\xrightarrow{f}X'$. Since all injective functions in $\mathcal Set$ are strong monomorphisms, the first condition is just the requirement that $(X,d)\xrightarrow{f}(X',d')$ be injective, while the second requirement is that $d(x,y)=d'(f(x),f(y))$, i.e. that $f$ is an isometry. Thus one way to answer your question is to say that in a topological category over $\mathcal Set$, coarse maps do not have to be embeddings. The analogy with $\mathcal Top$ is that a continuous map $X\xrightarrow{f} Y$ is a topological embedding if and only if it is injective and $X$ has the weakest topology for which $f$ is continuous (if $f$ is injective, this is the subspace topology). A $1$-Lipschitz map $(X,d)\xrightarrow{f}(X',d')$ is thus an isometry if $(X,d)$ has the "weakest" hemi-metric for which $(X,d)\xrightarrow{f}(X',d')$ is a $1$-Lipschitz map; being injective is a completely separate condition. The $T_0$ separation axiom comes into play not from the isometries, i.e. from the coarse morphisms, but from "computable" "co-isometries". Dual to the previous result, a $1$-Lipschitz function $(X,d)\xrightarrow{f}(X',d')$ is a quotient in $\mathcal Hem$, i.e. a strong epimorphism, if and only if the set-function $X\xrightarrow{f}X'$ is a strong epimorphism and $(X,d)\xrightarrow{f}(X',d')$ is fine, i.e. $(X',d')$ is the final lift of the sink $X\xrightarrow{f}X'$. Since every epimorphism in $\mathcal Set$ is strong, the first condition just says that $X\xrightarrow{f}X'$ is surjective, but the second condition remains in general uncomputable because at best we can say $d'(x',y')\leq\inf_{x,y: f(x)=x',f(y)=y'}d(x,y)$. Let's say a fine morphism $(X,d)\xrightarrow{f}(X',d')$ is computably fine if equality holds, i.e. $d'(x',y')=\begin{cases}0&x'=y'\\\inf_{x,y: f(x)=x',f(y)=y'}d(x,y)&x'\neq y'\end{cases}$. Then a function $X\xrightarrow{f}X'$ from a hemi-metric space $(X,d)$ has a computably fine lift if and only if $\inf_{x,y: f(x)=x',f(y)=y'}d(x,y)$ satisfies the triangle inequality. But the triangle inequality fails if and only if there exist $x_0,y_0,z_1,z_2\in X$ with $f(z_1)=f(z_2)$ and $\inf_{x,y: f(x)=f(x_0),f(y)=f(y_0)}d(x,y)>d(x_0,z_1)+d(z_2,y_0)$. On the other hand, we always have $d(x,y)\leq d(x,x_0)+d(x_0,z_1)+d(z_1,z_2)+d(z_2,y_0)+d(y_0,y)$. To prevent the failure of the triangle inequality, it is sufficient to require that $f(z_1)=f(z_2)$ implies $d(z_1,z_2)=0$ (which by symmetry of $f(z_1)=f(z_2)$ also implies $d(z_2,z_1)=0$). I think this condition is equivalent to the condition that the associated topology to $(X',d')$ is the strongest topology that makes $X\xrightarrow{f}X'$ continuous from $X$ equipped with the open ball topology of $(X,d)$, hence I'll call such Lipschitz functions $(X,d)\xrightarrow{f}(X',d')$ topologically fine. Accordingly, we have an explicit class of strong epimorphisms in $\mathcal Hem$ given by those surjective $(X,d)\xrightarrow{f} (X',d')$ such that $f(x)=f(y)\implies d(x,y)=0=d(y,x)$ (i.e.
quotients by some amount of topological indistinguishability) and $d'(x',y')=\inf_{x,y:f(x)=x',f(y)=y'}d(x,y)$ (hence topologically fine quotients in $\mathcal Hem$). This class is the left class in an orthogonal factorization system whose right class consists of those morphisms such that $x\neq y\wedge f(x)=f(y)\implies\neg(d(x,y)=0=d(y,x))$. This means that any $1$-Lipschitz function $(X,d)\to(X',d')$ factors uniquely (up to isomorphism) as $(X,d)\to (X'',d'')\to (X',d')$ where the first map is in the left class and the second in the right class, and furthermore in a commutative square $$\require{AMScd}\begin{CD} W @>f>> Y\\ @VLVV @VRVV\\ X @>g>> Z \end{CD}$$ there is a unique diagonal morphism $X\to Y$ factoring $g$ through $R$ and $f$ through $L$. Indeed, $W\xrightarrow{L}X$ is a quotient, and $L(w_1)=L(w_2)$ implies $d(w_1,w_2)=0=d(w_2,w_1)$ and $R\circ f(w_1)=g\circ L(w_1)=g\circ L(w_2)=R\circ f(w_2)$. The first equality and $f$ being $1$-Lipschitz give $d(f(w_1),f(w_2))=0=d(f(w_2),f(w_1))$, hence $f(w_1)=f(w_2)$ by the second equality and the condition on the right class. Since morphisms in the left class are fine in $\mathcal Hem$, the diagonal map itself is $1$-Lipschitz. Note that the orthogonal factorization system is not a strong epi-mono factorization system because the right class of morphisms does not consist of injective functions -- but note that this lack of injectivity is never because topologically indistinguishable points are getting collapsed. It follows that the factorization of a morphism $(X,d)\to (X',d')$ where $(X',d')$ is a quasi-metric space is exactly the unique factorization through the quasi-metrication of the hemi-metric space. Note that symmetry is not used anywhere; a hemi-metric space $(X,d)$ satisfies the stronger condition $d(x,y)=0\implies x=y$ if and only if the associated topological space satisfies the $T_1$ separation axiom: for any pair $x\neq y\in X$, there is an open $U$ so that $x\in U,y\not\in U$.<|endoftext|> TITLE: $\int_{-\infty}^\infty \frac{\sin (t) \, dt}{t^4+1}$ must be zero and it isn't QUESTION [11 upvotes]: I'm trying to evaluate the integral $$\int_{-\infty}^\infty \frac{\sin (t) \, dt}{t^4+1}$$ using residue and complex plane integration theory. Let $f(t):=\frac{\sin (t)}{t^4+1}$, $f(z):= \frac{\sin (z)}{z^4+1}$. Then $f(z)$ has four singular points, two of which lie inside the semicircle of radius $R>2$ in the upper half of the complex plane: $p_1:=\exp\{i\frac{\pi}{4}\}$ and $p_2:=\exp\{i\frac{3\pi}{4}\}$. We know that $$\int_{-R}^R \frac{\sin (t) \, dt}{t^4+1}+\int_{C_R}\frac{\sin (z) \, dz}{z^4+1}=2\pi i(\operatorname{Res}_{z=p_1}+\operatorname{Res}_{z=p_2})$$ But here's the mysterious part: $$\lim_{R\to\infty}\int_{C_R}\frac{\sin (z) \, dz}{z^4+1}=0$$ yet $$(\operatorname{Res}_{z=p_1}+\operatorname{Res}_{z=p_2})\ne 0.$$ But the original integral must be equal to zero. I'd appreciate it if someone could point out what I'm not doing right. REPLY [13 votes]: You have $\displaystyle f(t) = \frac{\sin t}{t^4 + 1}$. Instead of considering $f(z)$ for $z \in \Bbb{C}$, consider the function $\displaystyle g(z) = \frac{e^{iz}}{z^4 + 1}$ for $z \in \Bbb{C}$. We do this in general because $e^{iz}$ behaves better (i.e., it's bounded) on $\{z = x + iy \in \Bbb{C} : y \ge 0 \}$ than $\sin z$ does on this set. Let $\gamma$ be the standard semicircular contour. Then we have $$ \int_\gamma \frac{e^{iz}}{z^4 + 1} \, dz = \int_{-R}^R \frac{e^{it}}{t^4+1} \, dt + \int_{\text{arc}} \frac{e^{iz}}{z^4+1} \, dz,$$ where "arc" represents the curved part of the contour $\gamma$.
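To see explicitly why the arc term will vanish (this spells out the standard ML-estimate used in the next step, assuming $R>1$): on the arc, $z=Re^{i\theta}$ with $\theta\in[0,\pi]$, so $|e^{iz}|=e^{-R\sin\theta}\le 1$ and $|z^4+1|\ge R^4-1$, hence $$\left|\int_{\text{arc}} \frac{e^{iz}}{z^4+1} \, dz\right| \le \frac{\pi R}{R^4-1} \to 0 \quad \text{as } R \to \infty.$$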
So the integral over "arc" tends to zero as $R \to +\infty$; this is exactly the parameterization $z = Re^{i\theta}$ together with the ML-estimate, as in the sketch above. Also, we can evaluate the left-hand side with residues: $$ \int_\gamma \frac{e^{iz}}{z^4 + 1} \, dz = 2\pi i \sum_{z \in \Gamma} \text{Res } g(z),$$ where I'm using $\Gamma$ to represent the interior of the closed contour $\gamma$. There are two residues as you mentioned. They occur at $z = e^{\pi i/4}$ and $z = e^{3\pi i/4}$. Note that I'm using exponential form for simplicity. Also note that, in exponential notation, $z^4 + 1$ factors as $$z^4 + 1 = (z-e^{\pi i/4})(z - e^{3\pi i/4})(z - e^{5\pi i/4})(z - e^{7\pi i/4}).$$ First residue: \begin{align} \text{Res}\left(g(z), z= e^{\pi i/4}\right) &= \lim_{z \to e^{\pi i/4}} \left(z - e^{\pi i/4}\right) \frac{e^{iz}}{(z-e^{\pi i/4})(z - e^{3\pi i/4})(z - e^{5\pi i/4})(z - e^{7\pi i/4})}\\[0.3cm] &= \lim_{z \to e^{\pi i/4}} \frac{e^{iz}}{(z - e^{3\pi i/4})(z - e^{5\pi i/4})(z - e^{7\pi i/4})}\\[0.3cm] &= \frac{e^{ie^{\pi i/4}}}{(e^{\pi i/4} - e^{3\pi i/4})(e^{\pi i/4} - e^{5\pi i/4})(e^{\pi i/4} - e^{7\pi i/4})}\\[0.3cm] &= -e^{ie^{\pi i/4}}\frac{\sqrt{2}}{8}(1 + i) \end{align} Similarly, the other residue is $e^{ie^{3\pi i/4}}\dfrac{\sqrt{2}}{8}\left(1 - i\right)$. Yes, I'm skipping a lot of messy arithmetic here. Exercise left for the reader. :) Anyway, with the help of a computer algebra system, I find that the sum of the residues is (equivalent to) $$ -\frac{i}{4} e^{-1/\sqrt{2}}\sqrt{2}\left(\cos \frac{1}{\sqrt{2}} + \sin\frac{1}{\sqrt{2}}\right). $$ This is a mess. But that's ok. Note that it's purely imaginary, and that's the only thing we care about regarding this expression. This means that if we multiply it by $2\pi i$, we get a real value. This means that $$ \int_\gamma \frac{e^{iz}}{z^4 + 1} \, dz $$ is real. Finally, note also that $e^{it} = \cos t + i\sin t$. Therefore, when we take the limit as $R \to +\infty$, we have: \begin{align} \int_\gamma \frac{e^{iz}}{z^4 + 1} \, dz &= \int_{-\infty}^{+\infty} \frac{e^{it}}{t^4+1} \,dt\\[0.3cm] &= \int_{-\infty}^{+\infty} \frac{\cos t + i\sin t}{t^4 + 1} \, dt\\[0.3cm] &= \underbrace{\int_{-\infty}^{+\infty} \frac{\cos t}{t^4+1} \, dt}_{\text{real part}} + i \cdot \underbrace{\int_{-\infty}^{+\infty} \frac{\sin t}{t^4 + 1} \, dt}_{\text{imaginary part}} \end{align} Recall that the integral over $\gamma$ is real. Therefore its imaginary part is zero. So if we equate real and imaginary parts then we get $$ \int_{-\infty}^{+\infty} \frac{\sin t}{t^4 + 1} \, dt = 0.$$<|endoftext|> TITLE: If $K$ is compact and $C$ is closed in $\mathbb{R}^k$, prove that $K + C$ is closed using a "direct" proof QUESTION [6 upvotes]: Rudin Exercise 4.25(a) reads: If $K$ is compact and $C$ is closed in $\mathbb{R}^k$, prove that $K + C$ is closed. The hints in the problem suggest a proof by proving that the complement of $K + C$ is open, a path which I was able to follow into a successful proof. However, I want to prove it "directly", by showing that any limit point of $K + C$ must be within $K + C$, but I run into the following problem: My Attempt: Suppose $z$ is a limit point of $K + C$. Then there is a sequence $\{z_n\} \to z$ in $K + C$. Since each $z_i$ is an element of $K + C$, we can write $z_i = k_i + c_i$ for sequences $\{k_i\}$, $\{c_i\}$ in $K$ and $C$ respectively. Now, we simply must show that $\{k_i\} \to k \in K$ and $\{c_i\} \to c \in C$, and we would be done. However, I noticed that $\{k_n\}$ and $\{c_n\}$ do not necessarily converge when their sum does.
As an example, in $\mathbb{R}$, take $k_i = (-1)^i$ and $c_i = (-1)^{i+1}$. Then neither $\{k_n\}$ nor $\{c_n\}$ converge, but their sum does. Are there any suggestions on how to get around this problem and complete this more "direct" proof? Thanks! REPLY [9 votes]: As $K$ is compact, there is a subsequence $k_{i_j}$ that converges to some $k \in K$. In particular, the subsequence $z_{i_j} \rightarrow z$. Thus, $c_{i_j}= z_{i_j} -k_{i_j} \rightarrow z - k \in C$, since $C$ is closed. Then, $$ z= k + (z-k) \in K + C. $$ REPLY [4 votes]: There does exist a proof using your start. Your missing step was fully using the compactness of $K$: Let $z$ be any limit point of $K + C$. Then there exists an infinite sequence of points $\{z_n\} \in K + C$ that converges to $z$. Define sequences $\{k_n\} \in K$ and $\{c_n\} \in C$ such that $z_n = k_n + c_n$ for all $n$. $\{k_n\}$ is an infinite sequence in a compact space, therefore $\{k_n\}$ must have some limit point $k \in K$. Thus for any $\varepsilon > 0$, there is an index $n$ for which $|k - k_n| < \varepsilon / 2$ and $|k_n + c_n - z| < \varepsilon / 2$. Put $c = z - k$. By the triangle inequality, we have $|c_n - c| < \varepsilon$ at this $n$. Hence $c$ is a limit point of $\{c_n\}$, and since $C$ is closed, $c \in C$. This shows that $z = k+c$ for $k \in K$ and $c \in C$; therefore $K + C$ is closed.<|endoftext|> TITLE: Russell's paradox from Cantor's QUESTION [7 upvotes]: I learnt how Russell's paradox can be derived from Cantor's theorem here, but also from S C Kleene's Introduction to Metamathematics, page 38. In his book, Kleene says that if $M$ is the set of all sets, then $\mathcal P(M)=M$, but since this implies $\mathcal P(M)$ has the same cardinality as $M$, there exists a subset $T$ of $M$ which is not an element of the power set $\mathcal P(M)$. This $T$ is the desired set for Russell's paradox, i.e., it is the set of all sets which are not members of themselves. I can't understand how $T$ is the desired set for Russell's paradox. Also, how is Kleene's argument similar to the Quora answer? REPLY [11 votes]: Cantor's theorem shows that for any set $X$ and any function $f:X\to \mathcal{P}(X)$, there is some subset $T\subseteq X$ that is not in the image of $f$. Specifically, $T=\{x\in X:x\not\in f(x)\}$. Kleene is saying that if you apply this theorem to the identity function $f:M\to\mathcal{P}(M)$, the counterexample $T$ you get is exactly Russell's set. Indeed, this is immediate from the definition of $T$ given above. The Quora answer is just carrying out the diagonal proof that $T$ is not in the image of $f$ in this particular example.<|endoftext|> TITLE: Bounding $\int_0^1 f(x) dx $ under the condition $\int_0^1 f'(x)^2 dx \le 1$ QUESTION [13 upvotes]: Any tips on how to solve this? Problem 1.1.28 (Fa87) Let $S$ be the set of all real $C^1$ functions $f$ on $[0, 1]$ such that $f(0) = 0$ and $$\int_0^1 f'(x)^2 dx \le 1 \;. $$ Define $$J(f) = \int_0^1 f(x) dx \; .$$ Show that the function $J$ is bounded on $S$, and compute its supremum. Is there a function $f_0 \in S$ at which $J$ attains its maximum value? If so, what is $f_0$? I tried using Cauchy-Schwarz and got a bound of $\frac23$ but it doesn't seem strong enough. REPLY [8 votes]: Another way for proving the upper bound is the following.
We have, using Fubini's theorem and then the Cauchy-Schwarz inequality, that $$\begin{align} \left|\int_{0}^{1}f\left(x\right)dx\right|= & \left|\int_{0}^{1}\int_{0}^{x}f'\left(t\right)dt\,dx\right| \\ = & \left|\int_{0}^{1}\left(1-t\right)f'\left(t\right)dt\right| \\ \leq & \left(\int_{0}^{1}\left(1-t\right)^{2}dt\right)^{1/2}\left(\int_{0}^{1}f'\left(t\right)^{2}dt\right)^{1/2} \\ \leq & \left(\int_{0}^{1}\left(1-t\right)^{2}dt\right)^{1/2} \\ = & \frac{1}{\sqrt{3}}. \end{align}$$<|endoftext|> TITLE: Zeros and poles of some meromorphic 1-forms on the Riemann sphere QUESTION [6 upvotes]: Let $X=\mathbb C_{\infty}$ be the Riemann sphere with the local coordinates $\{z\ ,1/z\}$. I want to show the following two statements: i) There does not exist any non-vanishing holomorphic 1-form on $X$. ii) Where are the poles and zeros of the meromorphic 1-forms $dz$ and $dz/z$? Also determine their orders. My attempt: i) Let $w$ be a non-vanishing holomorphic 1-form on $X$. Then we can write $w=f(z)dz$ in the coordinate $z$ for a holomorphic function $f$. Then in the other chart we have $w=f(\frac{1}{z})(-\frac{1}{z^2})\,d(1/z)$. Now the Laurent series of $f$ around $0$ has only non-negative exponents, hence the above function has a pole in $0$, which contradicts the assumption that $w$ is holomorphic. ii) For $w=1\, dz$: $1$ has no zeros or poles in $\mathbb C$. Let's consider $\infty$: In the other chart we have $w=-1/z^2$ which has a pole of order two in zero, hence we have $ord_{\infty}w=-2$ and $ord_p(w)=0$ for $p\in\mathbb C$. For $w=dz/z$: $1/z$ has only a pole (of order $1$) in zero. In the other chart we have $w=-1/z$ which also has just a pole of order $1$ in zero. Hence we have $ord_0(w)=-1$, $ord_{\infty}(w)=-1$ and $ord_p(w)=0$ otherwise. Since I am a beginner I would like it if someone could check my solutions. Thanks in advance!:) REPLY [4 votes]: First, for these kinds of questions, the book of Rick Miranda, Algebraic curves and Riemann surfaces, is really well done and has lots of details. Remark: maybe for notation it is clearer to write $\frac{1}{z}$ as $w$ and a differential form as $\omega$ (for example, when you change coordinates you can write $\omega' = -\frac{dw}{w}$ or something like this, instead of using $z$ for two different coordinates). Otherwise, your computations all seem OK to me! REPLY [4 votes]: Your reasoning is sound! The only (minor) complaints I have are regarding the formatting. For instance, you sometimes forget a differential at the end of a local expression for $\omega$. You should write, for instance, In the other chart we have $\omega=(-1/z^2) d(1/z)$. Or, later: In the other chart we have $\omega=(-1/z) d(1/z)$. Also, note that $\omega$ is not the same thing as $w$ (assuming you're using Miranda's book). But, regarding the actual mathematics, everything is perfectly fine.
Granted, I've been juggling some light research interests as well, since I lean toward applied mathematics (they like the applied people to get into research as soon as possible at my institution), but I feel like that's a poor excuse, as I'm not the only one. Whenever I attempt to work through problems, I'll come to points where I'm at a total loss: I can't even identify what I don't understand or what other approaches to try. And even if I can, I get sucked down rabbit holes trying to fill in the gaps in my understanding. I want to develop good habits to help me for these exams and for future endeavors when I need to become familiar with a new field for research. So I guess I'm asking if there's any agreed-upon protocol or good practice for learning advanced math? Like a reliable troubleshooting method? Mine seems to be grossly ineffective. All perspectives are appreciated. I just really want to pass these exams. REPLY [10 votes]: I'm a third year graduate student who had to pass three qualifying exams. I used the same strategy to study for all three. This is what worked for me. Step one (general comprehension): Spend two to three hours a day reading proofs and important results for the exam you're studying for. Try to work examples as well as understand theorems. Memorize the proofs of important theorems. Spend two more hours working problems on your own that are related to what you were just reading. If you're getting nowhere on a problem for 20 - 30 minutes, ask someone how to solve it, understand the solution and move on. Carry on for a month or however long it takes for you to feel like you're somewhat comfortable with the subject. Step two (getting good at solving problems): Every day, take a practice qualifying exam under test conditions. Hopefully your university has a collection of old exams you can look at; otherwise, use a different university's. So, don't use your notes and continue trying to solve the problems even if you are stuck and have an hour left. You will be surprised what you can remember and make use of if you force yourself into a situation where you cannot get hints. After the time is up, find the solutions to all the problems you didn't get. Keep doing this for a month. Step three (tying up loose ends): Maybe a week before the exam, stop taking practice tests and just rest up. Continue to work new problems, but don't spend too much energy trying to figure them out if you get totally stuck. Instead, ask for help shortly after you get stuck (for example, on stackexchange). Having memorized the solutions of a lot of problems shortly before the exam, combined with the huge amount of problem-solving practice you just did, you should be in pretty good shape. If you're lucky, with the huge amount of problems you have encountered, one or two of them will appear on your exam and it'll be free points.<|endoftext|> TITLE: (Non-)Canonicity of using zeta function to assign values to divergent series QUESTION [8 upvotes]: This article http://blogs.scientificamerican.com/roots-of-unity/does-123-really-equal-112/ got me thinking about the "identity" $$1 + 2 + 3 + \cdots = -1/12,$$ and I wanted to convince myself there was nothing particularly unique about this identity or the Riemann zeta construction. More precisely, this identity only really makes sense if you think of an integer $n$ as being the specialization at $z=-1$ of the function $n^{-z}$.
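(Spelling this out: for $\Re z>1$ one has $$\sum_{n=1}^{\infty} n^{-z}=\zeta(z),$$ and the "identity" comes from continuing $\zeta$ analytically and evaluating at $z=-1$, where $\zeta(-1)=-\frac{1}{12}$.)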
So here's a question: For any complex number $c$, does there exist a domain $\Omega \subset \mathbb{C}$, and analytic functions $F(n, s)_{n\in\mathbb{N}}$ and $f(s)$ on $\Omega$, such that the following hold: i. $F(n,0) = n$ ii. $\sum_{n=1}^\infty F(n,s) = f(s)$ on $\Omega$ in some reasonable sense (maybe converges uniformly on compact subsets of $\Omega$?) iii. $f$ can be extended holomorphically to some domain containing both $\Omega$ and $0$ such that $f(0) = c$. So shifting the Euler series and Riemann zeta would be such a construction for $c=-1/12$. As the question stands, I feel that the answer is almost certainly yes, although to be fair the functions $n^{-s}$ have a lot more structure than "holomorphic functions on some domain". So a follow-up question would be: are there "natural" additional constraints for which the answer to this question is No? I apologize that this is kind of open-ended, but the goal is to convince myself that there is nothing particularly canonical about $-1/12$ (or to hear an explanation of why it is canonical). REPLY [2 votes]: Note that $1+2+3+\ldots = (1+1+1+\ldots)+(0+1+2+3+\ldots)$; so if you can regularize $1+1+1+\ldots$ into something nonzero, then you can shift your result away from $-1/12$. Specifically, try $$ 1+2+3+\ldots=\sum_{n=1}^{\infty}\left(n^{z} + (n-1)^{z+1}\right)\big\vert_{z=0}=\zeta(-z)+\zeta(-z-1)\big\vert_{z=0}=\zeta(0)+\zeta(-1)=-\frac{7}{12}, $$ where the sum converges for $\Re (z) < -2$.<|endoftext|> TITLE: Formal definition of "counterexample". QUESTION [6 upvotes]: What is the preferred formal definition of "counterexample" as in: zero is a counterexample for "every integer is either positive or negative". Where in the literature is the notion of "counterexample" formally defined? And what are the main theorems involving this notion? And what questions concerning it remain open? REPLY [4 votes]: My guess is that formally, a counterexample to $$\forall(x \in X, y \in Y)\varphi$$ is basically a substitution $(x:= x_0, y:= y_0)$ together with a proof that $$(x:= x_0, y:= y_0) \varphi \rightarrow \bot.$$ Unfortunately, this means that the "set-of-counterexamples function" fails to be preserved by equality of booleans. For instance, the booleans $$\forall x \in \mathbb{R},x+1=x \qquad \forall x \in \mathbb{R},x^2=x$$ are equal, in that they're both FALSE; but, the corresponding sets of counterexamples are different. For instance, $(x:=1)$ gives a counterexample to the former, but not to the latter. (I don't know whether or not this is an actual problem.) By the way, we can think of a substitution like $(x:= x_0,y:= y_0)$ as being a bit like an ordered tuple, in this case $(x_0,y_0)$. The difference is that the "order" is replaced by a name for each individual element; so in this case, $x_0$ is in the "$x$" position (rather than the 1st position) and $y_0$ is in the "$y$" position (rather than the 2nd position.) I think computer scientists call such things records, which are viewed as elements of "named cartesian products" (aka "record types.")<|endoftext|> TITLE: $\{f_n\}$ is uniformly integrable if and only if $\sup_n \int |f_n|\,d\mu < \infty$ and $\{f_n\}$ is uniformly absolutely continuous? QUESTION [7 upvotes]: Let $(X, \mathcal{A}, \mu)$ be a measure space. A family of measurable functions $\{f_n\}$ is uniformly integrable if given $\epsilon$ there exists $M$ such that$$\int_{\{x : |f_n(x)| > M\}} |f_n(x)|\,d\mu < \epsilon$$for each $n$.
The sequence is uniformly absolutely continuous if given $\epsilon$ there exists $\delta$ such that$$\left|\int_A f_n\,d\mu\right| < \epsilon$$for each $n$ if $\mu(A) < \delta$. Suppose $\mu$ is a finite measure. How do I see that $\{f_n\}$ is uniformly integrable if and only if $\sup_n \int |f_n|\,d\mu < \infty$ and $\{f_n\}$ is uniformly absolutely continuous? REPLY [5 votes]: Assume that $\{f_n\}$ is uniformly integrable. Let $\varepsilon > 0$ and let $M_\varepsilon$ be as in the definition of uniform integrability. Now \begin{align} \int_X |f_n| = \int_{X \cap \{|f_n| \geq M_\varepsilon\}} |f_n| + \int_{X \cap \{|f_n| < M_\varepsilon\}} |f_n| \leq \varepsilon + M_\varepsilon \mu(X) \end{align} for all $n$. Thus the supremum over $n$ is finite. To get uniform absolute continuity, notice that \begin{align} \left| \int_A f_n \right| & \leq \int_{A \cap \{|f_n| \geq M_\varepsilon\}} |f_n| + \int_{A \cap \{|f_n| < M_\varepsilon\}} |f_n| \\ & \leq \varepsilon + M_\varepsilon \mu(A) \end{align} for all $n$. Now choose $\delta < \varepsilon/M_\varepsilon$. Now assume $\sup_n \|f_n\|_1 < \infty$ and the uniform abs. continuity. Let $\varepsilon > 0$ and let $\delta > 0$ be such that $\mu(A) < \delta$ implies $ |\int_A f_n| < \varepsilon$ for all $n$. By Chebyshev's inequality, \begin{align} \mu \{ |f_n| > M \} \le \frac{1}{M}\int |f_n| \le \frac{1}{M}\sup_k \|f_k\|_1 , \end{align} so we may choose $M$ (independent of $n$) so large that $\mu\{ |f_n| > M \} < \delta$ for all $n$. Now \begin{align} \int_{|f_n| > M } |f_n| &= \int_{f_n > M} f_n + \int_{f_n < -M} (-f_n) \\ &= \left| \int_{f_n > M} f_n \right| + \left| \int_{f_n < -M} f_n \right| \\ &< \varepsilon + \varepsilon \end{align} for all $n$, since the sets over which we integrate have measure less than $\delta$.<|endoftext|> TITLE: Compute $E(X\mid X+Y)$ if $(X,Y)$ is centered normal with known covariance matrix QUESTION [5 upvotes]: The random variable $(X,Y)$ has a two dimensional normal distribution with mean $(0,0)$ and covariance matrix $\begin{pmatrix} 4&2 \\ 2&2 \end{pmatrix}$. Find $E(X\mid X+Y)$. I am completely lost with this question. REPLY [4 votes]: Let's set $Z = X+Y$. The random variable $(X,Z)$ has a two dimensional normal distribution with mean $$(E(X),E(Z)) = (E(X), E(X)+E(Y)) = (0,0)$$ and covariance matrix $$\pmatrix{E(X^2) & E(XZ)\\E(ZX) & E(Z^2)} = \pmatrix{E(X^2) & E(X^2)+E(XY) \\E(X^2) + E(YX) & E(X^2)+E(Y^2)+E(2XY)} = \\ \pmatrix{4 & 4+2 \\4 + 2 & 4 + 2 + 2\cdot2} = \pmatrix{4 & 6\\6 & 10}.$$ The correlation coefficient of $X$ and $Z$ is: $$\rho = \frac{E(XZ)}{\sqrt{E(X^2)E(Z^2)}} = \frac{6}{\sqrt{4 \cdot 10}} = 3\frac{\sqrt{10}}{10}.$$ The conditional expected value is: $$E(X\mid Z) = \mu_X + \rho \sqrt{\frac{E(X^2)}{E(Z^2)}}(Z-\mu_Z) = \\ = 0 + 3\frac{\sqrt{10}}{10}\sqrt{\frac{4}{10}}(Z-0) = \frac{3}{5}Z.$$ For the last step, take a look here.<|endoftext|> TITLE: Factoring polynomials with a 2nd degree coefficient greater than $1$ QUESTION [11 upvotes]: I'm learning how to factor polynomials, but I'm having a hard time understanding the approach when the 2nd degree coefficient is greater than $1$. For example, when I begin to factor $12k^4 + 22k^3 - 70k^2$, I first break it down to $2k^2(6k^2 + 11k - 35)$. I would think that I'd want to find two numbers that sum up to $11$ and have a product of $-35$, but instead I'm told we need to multiply $-35$ by $6$ so that I now have to find two numbers that sum up to $11$ and have a product of $-210$. Can anyone help me understand why $-35$ is multiplied by the coefficient $6$? Why isn't $11k$ also multiplied by $6$?
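For what it's worth, here is where the method lands when I work it through (assuming the intended pair of numbers is $21$ and $-10$, since $21\cdot(-10)=-210$ and $21+(-10)=11$): $$6k^2+11k-35 = 6k^2+21k-10k-35 = 3k(2k+7)-5(2k+7) = (2k+7)(3k-5).$$ So the method does produce a factorization; I just don't see why the product must be $-210$ rather than $-35$.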
REPLY [2 votes]: (High School Level) GUESS AND CHECK METHOD: This method works well if you are just beginning, and before too long you will start to see patterns and factor quickly. This speed comes in handy if you need to factor more than one expression in the same question or if you need to do a long list of factoring questions. (1) GUESS: First, think of two possible factors of 6 and of 35. \begin{matrix} {6}{k}^{2} & +11k & -35 & \\ 2 & & 7 & \\ 3 & & 5 & \end{matrix} (2) CHECK: Multiply the factors diagonally, then subtract the results to see if we get 11. \begin{matrix} {6}{k}^{2} & +11k & -35 & \\ 2 & & 7 & 3\times 7=21\\ 3 & & 5 & 2\times 5=10 \end{matrix} \begin{matrix}Check: 21-10=11\end{matrix} NOTE 1: If the expression ends with a -, as with -35, we subtract in the check. If it ends in a + then we add in the check. NOTE 2: If the check did not work, we go back to step (1) and change the order of the factors or try different factors. (3) SIGNS. If the expression ends with a -, we want the signs of the factors in the second column to be different, with one - and one +. We put the negative sign in the correct place so that the check still works. \begin{matrix} {6}{k}^{2} & +11k & -35 & \\ 2 & & 7 & 3\times 7=21\\ 3 & & -5 & 2\times -5=-10 \end{matrix} \begin{matrix}Check: 21-10=11\end{matrix} If the expression ends in a +, we can skip this step because the signs will be the same. (4) RESULT: Write down the result from left to right. \begin{matrix}{6}{k}^{2}+11k-35 = (2k+7)(3k-5)\end{matrix} NOTE to teachers and tutors: This method is also excellent for making up questions for factoring practice. Simply make up the factoring table first (the factors that we guessed when we began the process) and use them to generate the desired polynomial.<|endoftext|> TITLE: Show $A\cap B \neq \varnothing \Rightarrow \operatorname{dist}(A,B) = 0$, and $\operatorname{dist}(A, B) = 0 \not\Rightarrow A\cap B \neq \varnothing$ QUESTION [6 upvotes]: I have a question. Let $d$ be a metric on $X$, and define the set to set distance $$\operatorname{dist}(A,B) = \inf\{d(x,y): x\in A, y \in B\}$$ where $A,B \subseteq X$ are nonempty sets. Show that $A\cap B \neq \varnothing \Rightarrow \operatorname{dist}(A,B) = 0$, and $\operatorname{dist}(A, B) = 0 \not\Rightarrow A\cap B \neq \varnothing$ First: ($A\cap B \neq \varnothing \Rightarrow \operatorname{dist}(A,B) = 0$) Trivial, since $A \cap B \neq \varnothing \implies \exists z \in A\cap B$, so $\operatorname{dist}(A,B) \le d(z,z) = 0$ Second: ($\operatorname{dist}(A, B) = 0 \not\Rightarrow A\cap B \neq \varnothing$) We want to produce $A, B$ with $A \cap B = \varnothing$ such that $\operatorname{dist}(A,B) = 0$. Is there a metric space where this can happen? I've checked the discrete metric, all the $\ell_p$ metrics. I don't think you can have disjoint sets with their distance zero. REPLY [8 votes]: Let $X=\mathbb{R}$, with the Euclidean metric. Let $A=(-1,0)$ and $B=(0,1)$. Then these give the desired counterexample. For the second statement to become an implication ($\operatorname{dist}(A,B) = 0 \Rightarrow A\cap B \neq \varnothing$), you need $A$ and $B$ to be compact. REPLY [4 votes]: Hint: think $(0,1)$ and $(1,2)$. REPLY [3 votes]: Look for sets $A$ and $B$ so that $A \cap B = \emptyset$ but $\overline{A} \cap \overline{B} \ne \emptyset$, where $\overline{E}$ is the closure of a set $E$ in the $d$-metric. Note that this cannot happen when the sets are compact (compact sets equal their closures), which is why the implication does hold in the compact case.
Some intuition about why this is what you'd look for: Because you're looking at sequences within the sets $A$ and $B$ approximating the distance between $A$ and $B$, it's natural to think of the closures of $A$ and $B$ instead. So you're looking for two sets whose closures intersect but who don't intersect, which is why you'd look for something missing part of its boundary. For further Googling: a related (but distinct) notion is the Hausdorff metric.<|endoftext|> TITLE: Are there infinitely many primes $n$ such that $\mathbb{Z}_n^*$ is generated by $\{ -1,2 \}$? QUESTION [15 upvotes]: Let $n$ be a prime, and let $\mathbb{Z}_n$ denote the integers modulo $n$. Let $\mathbb{Z}^*_n$ denote the multiplicative group of $\mathbb{Z}_n$. Are there infinitely many $n$ such that $\mathbb{Z}^*_n$ is generated by $\{ -1, 2 \}$? Artin's conjecture on primitive roots implies something even stronger: that there are infinitely many $n$ such that $\mathbb{Z}^*_n$ is generated by $\{ 2 \}$. Although likely to be true (in particular it is implied by the Generalized Riemann Hypothesis), as far as I know this conjecture remains open. I am wondering if it is possible that with generators $\{-1,2 \}$, this is known unconditionally. (One could, of course, ask this for any two generators. For reasons that I'll omit here, I am especially interested in the generating set $\{-1,2 \}$.) REPLY [3 votes]: I am afraid this is out of reach. As quid comments, one can do this with three prime generators, but even two prime generators is too hard; and including $-1$ as a generator does not help much.<|endoftext|> TITLE: Permutation of 2 or more groups while keeping the ordering of the groups QUESTION [5 upvotes]: I've been trying to get a general formula for this, but I couldn't find exactly what I need. What I want is, let's say we have 3 groups: (x,y,z),(a,b,c) and (k,l,m) What is the total number of permutations of these three occurring in a single group while they keep their initial order, but can intertwine with other groups? Is there a general formula for this or can we derive it somehow? e.g. (x,a,b,k,y,c,l,m,z) or (k,l,a,x,b,y,z,m,c) notice that x always comes before y and y always comes before z in the combined group, same goes for other groups too. In a small scale with 2 groups: (1,2) and (3,4) All possibilities: (1,2,3,4) (1,3,4,2) (1,3,2,4) (3,4,1,2) (3,1,2,4) (3,1,4,2) Is there a formula which would give me the number "6" for these two groups? Thanks! REPLY [2 votes]: If you haven't graduated to the multinomial coefficient, here is a simple way. Using the same example of $(x,y,z),(a,b,c),(k,l,m)$, imagine $9$ slots are available. Place $(x,y,z)$, in this order, in $\binom 93$ ways, and $(a,b,c)$, similarly in the remaining slots in $\binom63$ ways, to get a total of $\binom 93 \binom 63$ ways [ The last group, of course, automatically goes into the remaining $3$ slots ]<|endoftext|> TITLE: Variant of dominated convergence theorem, does it follow that $\int f_n \to \int f$? QUESTION [5 upvotes]: Suppose $f_n$, $g_n$, $f$ and $g$ are integrable, $f_n \to f$ almost everywhere, $g_n \to g$ almost everywhere, $|f_n| \le g_n$ for each $n$, and $\int g_n \to \int g$. Does it follow that $\int f_n \to \int f$? REPLY [7 votes]: First observe that $|f|\leq g$ since $|f_n|\leq g_n$ for all $n$ and $f_n\to f,g_n\to g$ almost everywhere.
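To spell out a step used next: by the triangle inequality, $|f-f_n|\leq |f|+|f_n|\leq g+g_n$ almost everywhere.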
Therefore the function $$ h_n=g+g_n-|f-f_n| $$ is non-negative, so we can apply Fatou's lemma to conclude that $$ \int \liminf_{n\to\infty}h_n\leq \liminf_{n\to\infty}\int h_n.$$ Since $h_n\to 2g$ almost everywhere, the left-hand side equals $\int 2g$, while, using the fact that $\int g_n\to \int g<\infty$, the right-hand side equals $2\int g-\limsup_{n\to\infty}\int|f-f_n|$. It follows that $$ \limsup_{n\to\infty}\int |f_n-f|=0, $$ and since $$ \Big|\int f_n-\int f\Big|\leq \int |f-f_n|, $$ this shows that $\int f_n\to\int f$.<|endoftext|> TITLE: Convex optimization with $\ell_0$ "norm" QUESTION [6 upvotes]: I have an optimization problem of the form $$\begin{align*}\text{minimize }\;&f(x)\\ \text{subject to }\;&||x||_0 \le t,\end{align*}$$ where $t$ is a given constant and $f:\mathbb{R}^d \to \mathbb{R}$ is a convex, differentiable function and $x$ ranges over $\mathbb{R}^d$. Or, roughly equivalently, I'd like to solve $$\text{minimize }\; f(x) + \lambda \cdot ||x||_0.$$ What techniques are available for this? Techniques I've found in my reading so far: $L_1$ norm optimization: Replace the $L_0$ norm with an $L_1$ norm, and then solve the resulting problem. In other words, solve $$\text{minimize }\; f(x) + \lambda \cdot ||x||_1.$$ Optionally, follow this with iterative re-weighting: given a candidate solution $\hat{x}$, solve $$\text{minimize }\; f(x) + \lambda \sum_{i=1}^d \frac{1}{|\hat{x}_i|} |x_i|,$$ and then replace $\hat{x}$ with the resulting solution; iterate until convergence. Apparently, the $L_1$ norm encourages sparse solutions. (I guess this is the idea behind Lasso and compressed sensing?) Approximate the $L_0$ norm: Replace the $L_0$ norm with the $L_{1/2}$ norm, or with an $L_p$ norm for $p \in (0,1)$, or with a function $g(x)$ defined by an approximation such as $$g(x) = \sum_{i=1}^d \log(1 + |x_i|/\alpha).$$ Also, two other techniques that I haven't seen described anywhere, but seem like natural algorithms: Forward greedy selection. Start by finding a one-element set $S_1$ that makes the following as small as possible: $$\begin{align*}\text{minimize }\;&f(x)\\ \text{subject to }\;&x_j = 0 \text{ for all } j \notin S_1;\end{align*}$$ Then find a two-element $S_2$ such that $S_1 \subset S_2$ and that makes the following as small as possible: $$\begin{align*}\text{minimize }\;&f(x)\\ \text{subject to }\;&x_j = 0 \text{ for all } j \notin S_2;\end{align*}$$ Iterate, adding one coefficient to the set at each stage. This minimization step can be solved efficiently by trying all possibilities for the one index you add to $S$ in each step. Backward greedy selection. Similar to above, but you start with $S_0=\{1,2,\dots,d\}$. In the $i$th iteration you look for $S_i$ such that $S_i \subset S_{i-1}$ and $|S_i|=|S_{i-1}|-1$ and that makes the following as small as possible: $$\begin{align*}\text{minimize }\;&f(x)\\ \text{subject to }\;&x_j = 0 \text{ for all } j \notin S_i;\end{align*}$$ Are there any other techniques worth knowing about? Are there any "dominance" relationships between these (e.g., method X usually beats method Y, so don't bother with method Y)? I know that the L0 norm $||\cdot||_0$ isn't convex, and in fact, isn't even a norm. I know the optimization problem I'm trying to solve is NP-hard in the worst case, so we can't expect efficient solutions that always produce the optimal answer, but I'm interested in pragmatic heuristics that will often work well when $f(x)$ is nice (smooth, etc.). REPLY [2 votes]: You should have a look at a nice package called Smoothed $ {L}_{0} $ (Smoothed L0 / Smoothed L Zero). They approximate the $ {L}_{0} $ "Pseudo Norm" (which isn't convex) by a Gaussian Kernel.
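For intuition, the surrogate can be sketched as follows (this is the standard smoothed-$\ell_0$ idea from the literature; the package may differ in details): replace the counting norm by $$\|x\|_0 \;\approx\; d - \sum_{i=1}^{d} \exp\left(-\frac{x_i^2}{2\sigma^2}\right),$$ which is smooth for every fixed $\sigma>0$ and recovers $\|x\|_0$ exactly in the limit $\sigma \to 0$.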
The idea is iterating on the parameter $\sigma$ defining the kernel (Warm Start). It seems to work nicely and fit your problem.<|endoftext|> TITLE: Implications of existence of two inaccessible cardinals? QUESTION [5 upvotes]: Many years ago in an oral exam I was asked, what could be concluded from the existence of an inaccessible cardinal in ZFC? I knew that would provide a model for ZFC and imply the consistency of ZFC. Then I was asked, what could be concluded from the existence of two inaccessible cardinals? I had no clue then and it still bothers me to this day. I've never seen anything like that in the literature. Any insights on what can be inferred from the existence of two inaccessible cardinals? REPLY [6 votes]: The obvious answer is that you get the consistency of $\sf ZFC$ with an inaccessible. But you get more. You get the existence of a transitive model of this theory. But you actually get more. Since an inaccessible cardinal is the limit of worldly cardinals,$^1$ you get a transitive model of $\sf ZFC$+There exists an inaccessible cardinal+There is a proper class of worldly cardinals. But of course, this is the dry and obvious answer. Let's take a look at a few theories which have been proved to be equiconsistent with the existence of an inaccessible cardinal. Since you get a model of $\sf ZFC$+There exists an inaccessible cardinal, you actually get a model of these theories. $\sf ZF+DC$+Every set of reals is Lebesgue measurable; or $\sf ZFC$+Every projective set of reals is Lebesgue measurable. Shelah proved that it is enough to consider $\Sigma^1_3$ sets of reals in order to conclude an inaccessible must exist. Similarly, you can replace Lebesgue measurable by "every uncountable set of reals contains a perfect subset" (which holds in Solovay's model), which implies $\omega_1$ is a limit in $L$, so in the presence of $\sf DC$, implies the existence of an inaccessible cardinal in an inner model. The consistency of $\sf ZF$+There exists an infinite Dedekind-finite set+Every two Dedekind-finite cardinals are comparable. This was proved by Sageev; the inaccessible is most likely unnecessary, but we don't quite know yet. The consistency of the Kelley-Morse set theory; but also the fact that $\sf ZFC_2$ has at least two non-isomorphic models (one without inaccessible cardinals, and one with exactly one). The consistency of $\sf ZFC$+There are no Kurepa trees. The consistency of $\sf ZF$+For every $\alpha$ there is a set $X_\alpha$ which is a countable union of countable sets, but $\mathcal P(X_\alpha)$ can be mapped onto $\omega_\alpha$. Like Sageev's result, it is unclear if the inaccessible cardinal is actually needed (and unlike Sageev's result, this one was never published as a paper: it was announced in Notices of the AMS, and it should appear in a Ph.D. thesis from ages ago). The consistency of $\sf ZFC$+$\omega_1$ is inaccessible to reals. There are various results in descriptive set theory about $\sigma$-ideals on Polish spaces which have implications when assuming $\omega_1$ is inaccessible to reals, which means that these consequences become true in some models. This list is by no means complete, or even remotely close to being complete. But it does give you a small taste of what people did with just inaccessible cardinals over the years. Frankly, however, for a lot of interesting results inaccessible cardinals turn out to be too weak and too mundane to provide us with the necessary structure for carrying out these results. Footnotes.
(1) $\kappa$ is worldly if $V_\kappa$ is a model of $\sf ZFC$. Every inaccessible cardinal is worldly, but the least worldly has countable cofinality, and an inaccessible cardinal has a club of worldly cardinals below it.$^2$ (2) Note that if we replace the Replacement schema by its second-order axiom, then we do get that $V_\kappa\models\sf ZFC_2$ if and only if $\kappa$ is inaccessible.<|endoftext|> TITLE: Open interval $(0,1)$ with the usual topology admits a metric space QUESTION [7 upvotes]: Which of the following is/are true? (1) $(0,1)$ with the usual topology admits a metric which is complete. (2) $(0,1)$ with the usual topology admits a metric which is not complete. (3) $[0,1]$ with the usual topology admits a metric which is not complete. (4) $[0,1]$ with the usual topology admits a metric which is complete. This question came up in a competitive exam of mine. I think this is a wrong question, because completeness is a metric-space property, not a topological property. In the official answer key, the answer given is (1) and (4), and I want to send my representation. So please check my representation. Thank you. Let $X = (0,1)$ and let $d$ be the Euclidean metric on $X$, which induces the usual topology on $X$. The sequence $\{\frac{1}{n} \}$ is a Cauchy sequence in the Euclidean metric, but does not converge in $X$. So $(0,1)$ with the usual topology admits a metric (the Euclidean one) which is not complete. On the other hand, the map $$ f:(0,1)\to\mathbb{R}:x\mapsto\tan\pi\left(x-\frac{1}{2}\right) $$ is a bijection which allows you to define the metric $$ d(x,y)=|f(x)-f(y)|, $$ which makes $((0,1),d)$ complete. Since $f$ maps intervals to intervals, both topologies are equivalent. So completeness is not a topological property, and the question seems ill-posed. I would be thankful if someone checked my representation. REPLY [4 votes]: I think it is a legit question to ask if a topological space admits a metric. If I give you a topological space and ask you if there is a metric (with certain properties) which induces this topology, there is nothing wrong about it, right? (If I got the word admit right...) It is right that completeness may not make sense on any space, but if I give you a space I can ask you if it makes sense. There is a broader class of spaces in which it makes sense. If you are interested you can read about uniform spaces. Now why are 1 and 4 correct? The space in 4 is a closed subspace of a complete space ($\Bbb R$) which is complete. And for 1 you need that $(0,1)$ is homeomorphic to $\Bbb R$ which is complete. Clearly to show that 2 is correct you can show that $(0,1)$ with the usual metric is not complete; just choose a Cauchy sequence which converges to $0$. Most interesting is 3. You have to know that $[0,1]$ with any metric inducing the standard topology is complete. Here you can use the fact that any compact metric space is complete, since compactness is clearly a topological property. So either I got the question wrong or 2 is also correct. In the second case you can have a look at completely metrizable spaces, which are topological spaces whose topology is/(can be) induced by a complete metric.<|endoftext|> TITLE: Alternate proof of the dominated convergence theorem by applying Fatou's lemma to $2g - |f_n - f|$? QUESTION [5 upvotes]: Here is a proof of the dominated convergence theorem. Theorem. Suppose that $f_n$ are measurable real-valued functions and $f_n(x) \to f(x)$ for each $x$. Suppose there exists a nonnegative integrable function $g$ such that $|f_n(x)| \le g(x)$ for all $x$.
Then$$\lim_{n \to \infty} \int f_n\,d\mu = \int f\,d\mu.$$ Proof. Since $f_n + g \ge 0$, by Fatou's lemma,$$\int f + \int g = \int (f + g) \le \liminf_{n \to \infty} \int (f_n + g) = \liminf_{n \to \infty} \int f_n + \int g.$$Since $g$ is integrable,$$\int f \le \liminf_{n \to \infty} \int f_n.\tag*{$(*)$}$$Similarly, $g - f_n \ge 0$, so$$\int g - \int f = \int (g - f) \le \liminf_{n \to \infty} \int (g - f_n) = \int g + \liminf_{n \to \infty} \int (-f_n),$$and hence$$-\int f \le \liminf_{n \to \infty} \int (-f_n) = -\limsup_{n \to \infty} \int f_n.$$Therefore$$\int f \ge \limsup_{n \to \infty} \int f_n,$$which with $(*)$ proves the theorem.$$\tag*{$\square$}$$ My question is as follows. Can we get another proof of the dominated convergence theorem by applying Fatou's lemma to $2g - |f_n - f|$? REPLY [5 votes]: Yes, absolutely. And actually, applying Fatou to $2g - \lvert f_n - f \rvert$ gives the stronger result that $$\int \lvert f_n -f \rvert d \mu \to 0$$ as $n\to \infty$. From this and $$\left \lvert \int f_n d\mu - \int f d\mu \right \rvert = \left \lvert \int (f_n - f) d\mu \right \rvert \le \int \lvert f_n -f \rvert d\mu$$ we recover the slightly weaker version that is proven above. The dominated convergence theorem is ordinarily proven using $2g - \lvert f_n - f \rvert$.<|endoftext|> TITLE: Is there a meaning to the notation "\arg \sup"? QUESTION [6 upvotes]: When $f$ is a function on a set $A$, the notation: $\arg\max_{x\in A} f(x)$ denotes the set of elements of $A$ for which $f$ attains its maximum value. This set may be empty, for example, if $f(x)=x$ and $A=(0,1)$, then $f$ has no maximum on $A$, so: $$\arg\max_{x\in (0,1)} f(x) = \emptyset$$ However, $f$ always has a supremum (that can be $\infty$ if $f$ is unbounded), so apparently we can define an "argument-supremum". In this case, this would be: $$\arg\sup_{x\in (0,1)} f(x) = \{1\}$$ Can this "arg sup" operator be defined formally? I thought to define it in the following way: $$\arg\sup_{x\in A} f(x) = \arg \max_{x\in \text{closure}(A)}f(x)$$ Is this definition meaningful? Is it used anywhere in mathematics? REPLY [3 votes]: I think the main problem is not non-closedness but non-compactness. Consider $f(x) =-e^x$. Its supremum is zero, but the function is nowhere zero. However, as with every supremum, you can approximate it by function values, i.e. there is a sequence $x_n$ such that $f(x_n)$ converges to $0$, but the sequence itself does not converge. This problem would not occur if the base set were compact. Taking some compactification of the base set may do the trick, but the problem is that you need to ensure some continuity of the extension of the function to the compactification.<|endoftext|> TITLE: finite polynomials satisfy $|f(x)|\le 2^x$ QUESTION [10 upvotes]: This is a problem from a Tsinghua University math competition for high school students. Prove that there exist only finitely many polynomials $f\in \mathbb{Z}[x]$ such that for any $x\in \mathbb{N}$, $|f(x)|\le 2^x$. My attempts: since $f(x)=o(2^x)$, the bound is only restrictive when $x$ is small; for example $|f(0)|\le 1$, $|f(1)|\le 2$, and so on. Thus, using Lagrange interpolation, one concludes that for any given degree $n\in \mathbb{N}$ the number of such polynomials is finite. But I don't know how to proceed from here. Is my method wrong, or is there a better way? Also, I'd like to know something about the background of this problem. Thanks in advance.
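(To spell out my Lagrange interpolation remark: a polynomial of degree at most $n$ is determined by its values at $0,1,\dots,n$, and each value $f(k)$ is an integer with $|f(k)|\le 2^k$, hence has at most $2^{k+1}+1$ possibilities; so there are at most $\prod_{k=0}^{n}(2^{k+1}+1)$ such polynomials of degree at most $n$.)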
REPLY [5 votes]: Solution credits to Rui Yao: According to a property of finite differences, if we set $c_n\in \mathbb{Z}$ to be the leading coefficient of $f(x)$, which has degree $n$, then $$n!c_n=\Delta ^n [f](0)=\sum_{i=0}^{n}\binom{n}{i} f(i)(-1)^{n-i}.$$ Thus $$|n!c_n|=\left|\sum_{i=0}^{n}\binom{n}{i} f(i)(-1)^{n-i}\right|\le \sum_{i=0}^{n}\left|\binom{n}{i} f(i)(-1)^{n-i}\right|\le \sum_{i=0}^{n}2^i\binom{n}{i}=3^n.$$ Combined with $c_n\in \mathbb{Z}\setminus\{0\}$, which gives $|n!c_n|\ge n!$, this leads to $n!\le 3^n$ and hence $n\le 6$. Since we've shown that for any given degree $n$ there are only finitely many polynomials satisfying the condition, and the degree is bounded by $6$, we can conclude that there are only finitely many polynomials satisfying the condition in total.<|endoftext|> TITLE: Why should I learn modern category theory if my interest mainly is structured sets? QUESTION [8 upvotes]: A long time ago I studied mathematics at the University of Stockholm. I had a romantic view of modern algebra and managed to pass the first two algebra courses by self-study in order to immediately study homological algebra, Galois theory and such topics. That is not the best way to study. Later as a graduate student I did rather well - until the gaps in my basic knowledge and abilities began to matter too much. Then I stopped focusing on mathematics about 35 years ago. I did self studies in category theory because we were supposed to do that and because it was a good idea. Category theory worked fine with the mathematics as it had evolved by 1950 or so. The universal definitions and duality simplified a lot of mathematics, such as tensor products and injective/projective modules, and the functors opened new possibilities. Over the last 40 years or so, the interest in and the development of category theory has exploded, and it seems nowadays to be very abstract but also very consistent. My question is, what modern category theory could be interesting for a person mainly interested in the mathematics concerning structured sets? The bounty will soon expire and there is 50+ in reputation to earn - isn't there anything to say on this topic? REPLY [2 votes]: Perhaps the most compelling answer to the question: My question is, what modern category theory could be interesting for a person mainly interested in the mathematics concerning structured sets? is just that a great deal of modern algebraic geometry, as a result of Grothendieck and others, makes heavy use of category theory. Here, I'm basically replacing "structured sets" with "rings, fields and modules." However, the study of groups and monoids has also been immensely impacted by category theory. In particular, category theory has played a really large role in representation theory. If "structured sets" also includes topological spaces then there's even more category theory that ends up being relevant. And of course, topological spaces, and the study of their invariants, are important in things mentioned above like algebraic geometry and representation theory. One problem with your question is that it's immensely broad. You're basically asking "How is category theory used in algebra?" And well, the answer at this point is almost "In what cases is it not used?" My suggestion, if you want to have some sense of what category theory is all about, is pick up Saunders Mac Lane's book and force yourself to learn the foundations (e.g. categories, functors, natural transformations) from a purely formal point of view, and then read about the examples. Then pick some topic you like (e.g.
ring theory, or representation theory) and ask a more specific question about category theory in, say, representation theory. Again, for any of this to make any sense at all, you'll have to have a pretty good grip on whatever topic it is you're interested in. Category theory tends to produce "large scale" structural theorems, and so if you're not familiar enough with a topic to be interested in how all of its pieces fit together, it (in my opinion) will be very hard to motivate category theory. However, after writing all of this, and then reading through the comments above a bit more, I see that someone has really already provided you with an answer, which is this MSE question, so that's probably a pretty good place to start.<|endoftext|> TITLE: What is the correct definition of a group? QUESTION [7 upvotes]: What is the correct definition of a group? More precisely, the predicate "being a group"? According to Wikipedia, "A group is a set, G, together with an operation • (called the group law of G) that..." How should one interpret this? $\textbf{Definition A)}\\ \quad \quad G \text{ is a set},\\ \quad \quad +:G\times G\to G \\ \langle G,+\rangle \text{ is a group} :\iff\\ \quad \quad +\text{ is associative},\\ \quad \quad \exists 0\in G : \forall x\in G:x+0=0+x=x \text{ and } \exists y:x+y=y+x=0 $ or $\textbf{Definition B)}\\ \quad \quad G \text{ is a set}\\ G \text{ is a group} :\iff\\ \quad \quad \exists +:G\times G\to G:\\ \quad \quad \quad +\text{ is associative},\\ \quad \quad \quad \exists 0\in G : \\ \quad \quad \quad \quad\forall x\in G:x+0=0+x=x \text{ and } \exists y:x+y=y+x=0 $ And is there a separate notion of "$G$ being a group with operation $+$"? REPLY [3 votes]: You appear to be asking whether on one hand a set is a group, if an operation with the correct properties exists, or on the other hand whether the group comprises both the set and the operation. The correct definition is that the group is the set together with the operation.<|endoftext|> TITLE: The expected distortion of a linear transformation (continued) QUESTION [5 upvotes]: Let $A: \mathbb{R}^n \to \mathbb{R}^n$ be a linear transformation. I am interested in the "average distortion" caused by the action of $A$ on vectors. Consider the uniform distribution on $\mathbb{S}^{n-1}$, and the random variable $X:\mathbb{S}^{n-1} \to \mathbb{R}$ defined by $X(x)=\|A(x)\|_2$. Question: What is the expectation of $X$? (Is there a closed formula?) Using SVD, the problem reduces to $A$ being a diagonal matrix with non-negative entries. So, the question amounts to calculating $$\int_{\mathbb{S}^{n-1}} \sqrt{\sum_{i=1}^n (\sigma_ix_i)^2} $$ (and dividing by the volume of $\mathbb{S}^{n-1}$). This question is related to these two, which ask about the expected distortion of the square of the norm (which is easier, since no square roots are involved). For the above problems, a successful approach was to use standard normal variables, in order to generate a unit random vector (see here). However, it does not seem to help in this case. REPLY [2 votes]: Asaf, I believe at least part of the answer you are looking for is in the paper Surface area and other measures of ellipsoids by Igor Rivin. Look at Equation (3) to see the relationship between the quantity you seek and the ratio of the surface area and the volume of an ellipsoid. Look at Theorem 3 that relates this ratio to an expectation of a function of Gaussian random variables. Look at Equation (10) for an almost closed form solution, which is actually obtained from the book by A. M.
Mathai and Serge B. Provost, Quadratic Forms in Random Variables (Theory and Applications).<|endoftext|> TITLE: Help understanding convolutions for probability? QUESTION [5 upvotes]: I have been trying to do some problems in probability that use convolutions, but there has not been much of an explanation of what a convolution is or the purpose of using a convolution. For example in the following problem: Let X and Y be two independent exponential distributions with mean $1$. Find the distribution of $\frac{X}{Y}$. So I define $U=\frac{X}{Y}$ and $V=Y$; then, including the Jacobian $|v|$ of the transformation, $$f_U(u)=\int_{-\infty}^{\infty}f_{XY}(uv,v)\,|v|\,dv=\int_{0}^{\infty}v\,e^{-uv}e^{-v}dv=\frac{1}{(u+1)^2}$$ Maybe one could explain a simpler problem: Let X and Y be two random variables with joint density function $f_{XY}$. Compute the pdf of $U = Y − X$. So I tried the following, and maybe it's correct, I don't know, just using formulas: $$f_U(u)=\int_{-\infty}^{\infty}f_{XY}(x,u+x)dx=\int_{-\infty}^{\infty}f_{XY}(y-u,y)dy$$ I was given the formula $(f*g)(z)=\int_{-\infty}^{\infty}f(z-y)g(y)dy=\int_{-\infty}^{\infty}f(x)g(z-x)dx$ I do not fully understand what I am supposed to be putting into the $f(z-y)g(y)$ part of the integrals, specifically for $(z-y)$. REPLY [7 votes]: I will try to start from the simplest case possible and then build up to your situation, in order to hopefully develop some intuition for the notion of convolution. Convolution essentially generalizes the process of calculating the coefficients of the product of two polynomials. See for example here: Multiplying polynomial coefficients. This also comes up in the context of the Discrete Fourier Transform. If we have $C(x)=A(x)B(x)$, with $A(x)=\sum_k a_k x^k$ and $B(x)=\sum_k b_k x^k$ polynomials, the coefficients of $C$ are $$c_j = \sum_{k=0}^{j} a_k b_{j-k}$$ (the formula, displayed as an image in the original, is from Cormen et al, Introduction to Algorithms, p. 899). This type of operation also becomes necessary when calculating the probability distributions of sums of discrete random variables. In fact, this type of formula allows us to prove that the sum of independent Bernoulli random variables is binomially distributed. If we want to calculate the probability distribution of the sum of two discrete random variables with infinite support (for example, the Poisson distribution, which can take infinitely many possible values with positive probability), then we need to use the Cauchy product to calculate the convolution. This just generalizes the formula given above for infinite series: $$\left(\sum_{i=0}^{\infty}a_i\right)\left(\sum_{j=0}^{\infty}b_j\right)=\sum_{k=0}^{\infty}\sum_{l=0}^{k}a_l b_{k-l}$$ (shown as an image from Wikipedia in the original). Now as you probably already know, the (Riemann) integral is the limit of infinite series, hence it should not be surprising that this convolution formula for infinite series also generalizes to a convolution formula for integrals. This is what you are working with for probability distributions of continuous random variables, as in your problem above. Here is the formula (from Wikipedia again): $$(f*g)(t)=\int_{-\infty}^{\infty}f(\tau)\,g(t-\tau)\,\mathrm{d}\tau.$$ $U=Y-X = Y + (-X)$, so therefore (for independent $X$ and $Y$, which is what the convolution formula requires) $$f_U(u) =(f_Y * f_{-X})(u) = \int_{-\infty}^{\infty} f_Y(t) f_{-X}(u-t) \mathrm{d}t$$ Right now you only have the joint density, so in order to use the convolution formula, we have to calculate the marginal densities from the joint density. This will lead to us having a double integral. See for example here: How do I find the marginal probability density function of 2 continuous random variables? or Help understanding convolutions for probability? More specifically (see e.g.
here), $$f_Y(y) = \int_{-\infty}^{\infty}f_{XY}(x,y)dx$$ $$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x,y) dy$$ So now we have the (marginal) densities for $X$ and $Y$, but what we need are the densities for $-X$ and $Y$, so we need to calculate the density of $-X$ based on the density for $X$, which is done as follows (for $X$ continuous such that $\mathbb{P}(X=c)=0$ for any $c \in \mathbb{R}$): $$\mathbb{P}(a \le -X < b)= \mathbb{P}(-b < X \le -a) \\ \implies \mathbb{P}(a \le -X < b) = \int_{-b}^{-a} f_X(x)dx = \int_b^a [f_X(-x)](-1) \mathrm{d}x=\int_a^b f_X(-x) \mathrm{d}x$$ In other words, $$f_{-X}(x)=f_X(-x) = \int_{-\infty}^{\infty} f(-x,y)\mathrm{d}y.$$ So finally, $$f_U(u)= \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} f_{XY}(x,t)\mathrm{d}x\right] \left[\int_{-\infty}^{\infty} f_{XY}(-(u-t),y)\mathrm{d}y \right]\mathrm{d}t = \int_{-\infty}^{\infty} \left[\int_{-\infty}^{\infty} f_{XY}(x,u-t)\mathrm{d}x \right] \left[ \int_{-\infty}^{\infty} f_{XY}(-t,y)\mathrm{d}y \right]\mathrm{d}t$$ The second version might be easier to calculate; both are equivalent.<|endoftext|> TITLE: Differentiating the binomial coefficient QUESTION [12 upvotes]: I took a lecture in combinatorics this semester and the professor did the following step in a proof: He showed that the function $f: x \mapsto \binom{x}{r}$ is convex for $x > r - 1$ (in order to use Jensen's inequality on $f$) and did this in the following way: "By the product-rule we have $$f''(x) = \frac{2}{r!} \sum_{0 \leq i < j \leq r - 1} \prod_{l = 0}^{r - 1} ( x - l) \frac{1}{(x - i ) (x - j)} \geq 0$$ for all $ x > r - 1$." I am a bit confused on his definition: How would one extend the binomial coefficient to $x \notin \mathbf{N}$? I first thought about piecewise linear interpolation, but then I can't differentiate it. I also thought of plugging in the Gamma-function for the factorials, but I doubt that this is the definition he used here. Can anyone explain to me what's happening here? Thanks! REPLY [6 votes]: As a small supplement: A generalisation of the binomial coefficient is used in the binomial series representation \begin{align*} (1+x)^\alpha=\sum_{r=0}^\infty\binom{\alpha}{r}x^r\qquad\qquad |x|<1, \,\alpha\in\mathbb{C} \end{align*} where the binomial coefficient \begin{align*} \binom{\alpha}{r}=\frac{1}{r!}\alpha(\alpha-1)\cdots(\alpha-r+1) \end{align*} can be defined even for complex $\alpha$. This implies that we can consider the binomial coefficient as a real-valued (or complex-valued) polynomial of degree $r$ \begin{align*} &f:\mathbb{R}\rightarrow\mathbb{R}\\ &f(x)=\binom{x}{r}\\ &\qquad=\frac{1}{r!}x(x-1)\cdots(x-r+1) \end{align*} which is thus accessible to analytical operations (differentiation, etc.). And you're right, it's not necessary to involve the Gamma-function here.<|endoftext|> TITLE: Finding integer solutions to $y^2=x^3-2$ QUESTION [5 upvotes]: I have the equation: $$y^2=x^3-2$$ It seems deceptively simple, yet I simply cannot crack it. It is obviously equivalent to finding a perfect cube that is two more than a perfect square, and a brute force check shows no solutions other than $y=5$ and $x=3$ under 10,000. However, I can't prove it. Are there other integer solutions to this equation? If so, how many? If not, can you prove that there aren't? Bonus: What about the more general equation $$y^2=x^3-c$$ where $c$ is a positive integer? REPLY [6 votes]: Fact: $\mathbb{Z}[\sqrt{-2}]$ is a unique factorization domain.
Lemma: in every UFD, if the product of two relatively prime elements is a cube, then each of them must be a cube (up to a unit; the units of $\mathbb{Z}[\sqrt{-2}]$ are $\pm 1$, which are themselves cubes). Now write $$x^3 = (y+\sqrt{-2})\times(y-\sqrt{-2})$$ The greatest common divisor of these factors divides their difference $2\sqrt{-2}$, which leads to only finitely many cases; the cases with a nontrivial common factor can be shown impossible by modular and congruence arithmetic, so the two factors are relatively prime. Finally we have: $$y+\sqrt{-2}=(a+b\sqrt{-2})^3$$ which leads us to the following system of equations: $$a^3-6ab^2=y$$ and $$3a^2b-2b^3=1$$ then $$b(3a^2-2b^2)=1$$ which implies the assertion.<|endoftext|> TITLE: If $f$ is a smooth real valued function on the real line such that $f'(0)=1$ and $|f^{(n)} (x)|$ is uniformly bounded by $1$, then $f(x)=\sin x$? QUESTION [26 upvotes]: Let $f : \mathbb R \to \mathbb R$ be a smooth (infinitely differentiable everywhere) function such that $f'(0)=1$ and $|f^{(n)} (x)| \le 1 , \forall x \in \mathbb R , \forall n \ge 0$ (as usual denoting $f^{(0)}(x):=f(x)$); then is it true that $f(x)=\sin x , \forall x \in \mathbb R$? REPLY [2 votes]: Here's my attempt; some of the details are missing. It is likely that an easier solution can be found. Clarification: As David pointed out, there is no such Paley-Wiener type theorem. I will sketch what I think can be done to fix it in the end. As David mentioned in the comments, your assumptions imply $f$ is an entire function (of exponential type $1$), and also give the following bound: $$| f(z) | \le e^{|\Im z |}.$$ Suppose that a Paley-Wiener type theorem implies we can write $f$ as a Fourier transform: $\begin{equation}f(z) = \int_{-1}^{1} e^{i t z} \, \rm{d} \nu(t), \tag{a}\label{a}\end{equation}$ where $\nu$ is a complex measure on $[-1,1]$. In particular, $\begin{equation}f^{(n)}(z) = \int_{-1}^{1} (it)^n e^{i t z} \, \rm{d} \nu(t). \tag{b}\label{b}\end{equation}$ Consider odd $n>0$. Notice that the function $(it)^n$ is not continuous on $[-1,1]$, if we identify $-1$ and $1$. To fix it, note that $(it)^n e^{-\frac12 i\pi t}$ is continuous, and we have an absolutely convergent Fourier series, $$ (it)^n e^{-\frac12 i\pi t} = \sum_{k \in \mathbb{Z}} C_n(k) e^{i \pi k t}, \quad t\in [-1,1]. \tag{c}\label{c} $$ For example, $$ it e^{-\frac12 i\pi t} = \frac{4}{\pi^2} \sum_{k \in \mathbb{Z}} \frac{(-1)^k}{(2k + 1)^2} e^{i \pi k t}, \quad t\in [-1,1]. \tag{d}\label{d}$$ Combining $\eqref{a}$, $\eqref{b}$, and $\eqref{d}$, we find $$\begin{align*} f^\prime (z) & = & \int_{-1}^1 i t e^{izt} \, \rm{d} \nu(t) = \int_{-1}^1 i t e^{-\frac12 i \pi t} e^{i(z + \frac12 \pi)t} \, \rm{d} \nu(t)\\ &=& \frac{4}{\pi^2} \sum_{k \in \mathbb{Z}} \frac{(-1)^k}{(2k+1)^2} \int_{-1}^1 e^{i t (z + \pi k + \frac12 \pi)} \, \rm{d} \nu(t) \\ &=& \frac{4}{\pi^2} \sum_{k \in \mathbb{Z}} \frac{(-1)^k}{(2k+1)^2} \, f\left(z + \pi (k+\frac12)\right). \end{align*}$$ In particular, $$ 1 = f^\prime(0) = \frac{4}{\pi^2} \sum_{k \in \mathbb{Z}} \frac{(-1)^k}{(2k+1)^2} \, f\left(\pi (k+\frac12)\right). $$ Since $\frac{4}{\pi^2} \sum_{k \in \mathbb{Z}} \frac{1}{(2k+1)^2} = 1$, and $|f|\le 1$, we find that $f\left(\pi (k+\frac12)\right) = (-1)^k$. Notice also that $f^{\prime\prime}(0) = 0$, since otherwise this would contradict $|f^\prime(z)|\le 1$, for $z$ close to $0$.
Now (for odd $n>0$), using $\eqref{b}$, and $\eqref{c}$, we get in a similar way to the above $$f^{(n)}(0) = \sum_{k\in\mathbb{Z}} C_n(k) \, f\left(\pi (k+\frac12)\right) = \sum_{k\in\mathbb{Z}} C_n(k) \, (-1)^k = i^n \cdot e^{-\frac12 \pi i} = (-1)^{\frac12 (n-1)}.$$ So for example, $f^{(3)}(0) = -1$, and this implies $f^{(4)}(0)=0$, and so on. Since $f$ is analytic and has the same derivatives as $\sin(z)$ at $z=0$, we find that $\sin(z) + c $ is the only function satisfying the requirements, for some real constant $c$. But $c=0$ is the only possibility, since $|f| \le 1$. One can prove that $f$ can be approximated uniformly on compact sets by a sequence $$\begin{equation} g_j(z) = \int_{-1}^{1} e^{i t z} \, \rm{d} \nu_j(t), \end{equation}$$ where $\nu_j$ are complex measures on $[-1,1]$. See Boas, Entire Functions, Theorem 6.8.14. We have to prove by induction that the first $n$ derivatives of $f$ and $\sin(z)$ at $z=0$ are the same. Then we need to repeat the above arguments, using $\varepsilon, \delta$ methods whenever we had equalities before. Since we have uniform convergence of entire functions, we can approximate (a fixed number of) the derivatives of $f$ on any compact set using the functions $g_j$.<|endoftext|> TITLE: Prove that $\overline{f(z)}$ is differentiable at $a \in D(0;1)$ if and only if $f'(a)=0$ QUESTION [5 upvotes]: Let $f$ be holomorphic in $D(0;1)$ and define $k$ by $k(z)=\overline{f(z)}$. Prove that $k$ is differentiable at $a\in D(0;1)$ if and only if $f'(a)=0$. What I tried was first, assuming $k$ is differentiable and letting $f=u+iv$, we have (first when $h \in \mathbb{R}$) $$k'(z)= \lim_{h \to 0} \frac{u(x+h,y)-u(x,y)}{h} -i\frac{v(x+h,y)-v(x,y)}{h} = u_x -iv_x$$ and when $h=ik, \ k\in \mathbb{R}$ $$k'(z)=\lim_{k \to 0} \frac{u(x,y+k)-u(x,y)}{ik} -\frac{v(x,y+k)-v(x,y)}{k} = \frac{1}{i}u_y -v_y$$ And equating real and imaginary parts, we get that $$u_x=-v_y, \; u_y=v_x$$ Since $f$ is holomorphic, it satisfies the Cauchy-Riemann equations and thus $$u_x=v_y, \; u_y=-v_x$$ so $$f'(a)=-f'(a)$$ and then $f'(a)=0$. I don't know if this works, so please correct me if I'm wrong. Besides that, I'm stuck on proving the other implication. So far I did $$0=f'(a)=\lim_{h\to 0} \frac{f(a+h)-f(a)}{h}=\overline{\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}}=\lim_{h\to 0}\frac{\overline{f(a+h)} -\overline{f(a)}}{\overline{h}}=\overline{f'(a)}=k'(a)$$ But again, I'm not sure if this is right. Any help will be highly appreciated, and thanks in advance! REPLY [5 votes]: $\displaystyle \lim_{h\to0} |\dfrac{f(z+h)-f(z)}{h}|=0 \Leftrightarrow \displaystyle \lim_{h\to0} |\dfrac{\overline{f(z+h)-f(z)}}{h}|=0$ But the expression $\dfrac{\overline{f(z+h)-f(z)}}{h}= \dfrac{\overline{f(z+h)-f(z)}}{\overline{h}}\times\dfrac{\overline{h}}{h}$ does not tend to a limit if the first half of it tends to $w=\overline{f'(a)}\ne0$, because the second half can be made to have any complex value of unit length.<|endoftext|> TITLE: Continuous function on a closed bounded interval is uniformly continuous. Don't understand the proof. QUESTION [10 upvotes]: I'm self-studying real analysis from Wade's "An Introduction to Real Analysis" and I've come across a proof that I don't understand. I was hoping that someone might be able to walk me through it. The theorem is as follows. Theorem. Suppose that $I$ is a closed, bounded interval. If $f:I\rightarrow\mathbb{R}$ is continuous on $I$, then $f$ is uniformly continuous on $I$. Proof. Suppose to the contrary that $f$ is continuous but not uniformly continuous on $I$.
Then there is an $\varepsilon_0>0$ and points $x_n, y_n \in I$ such that $|x_n-y_n|<\frac{1}{n}$ and $$|f(x_n)-f(y_n)|\geq \varepsilon_0,\;\;\;\;n\in\mathbb{N}$$ By the Bolzano-Weierstrass Theorem and the Comparison Theorem, the sequence $\{x_n\}$ has a subsequence, say $x_{n_k}$, which converges, as $k\rightarrow\infty$, to some $x \in I$. Similarly the sequence $\{y_{n_k}\}_{k\in\mathbb{N}}$ has a convergent subsequence, say $y_{n_{k_j}}$, which converges, as $j\rightarrow \infty$, to some $y \in I$. Since $x_{n_{k_j}} \rightarrow x$ as $j\rightarrow \infty$ and $f$ is continuous, it follows from above that $|f(x)-f(y)|\geq \varepsilon_0$; that is, $f(x)\neq f(y)$. But $|x_n-y_n|<\frac{1}{n}$ for all $n \in \mathbb{N}$, so the Squeeze Theorem implies $x=y$. Therefore, $f(x)=f(y)$, a contradiction. Why in this proof do we need to take a sub-subsequence; why won't subsequences suffice? I have seen slightly different proofs of this theorem which use only subsequences and the triangle inequality. If someone could help me I would be most grateful. REPLY [10 votes]: Under the assumption that $f$ is not uniformly continuous on $I$, we have $\epsilon_0$ and sequences $\{x_n\}$ and $\{y_n\}$ such that $|x_n-y_n|<\frac{1}{n}$ for all $n \in \mathbb{N}$, but $|f(x_n)-f(y_n)| \geq \epsilon_0$. Note that, by framing the inequality in this way, we require $x_n$ and $y_n$ to have the same index; so, in particular, we cannot say $|f(x_n)-f(y_m)| \geq \epsilon_0$ unless $n=m$. Therefore, we cannot simply take any convergent subsequence of each, say $\{x_{n_k}\}$ and $\{y_{m_k}\}$, since then we're not able to guarantee that $n_k=m_k$ for each $k \in \mathbb{N}$. However, once we choose $\{x_{n_k}\}$, we can look at the sequence $\{y_{n_k}\}$. This may or may not converge. If it converges, we are good. If not, we can use Bolzano-Weierstrass once more to furnish a convergent subsequence of it (i.e. a sub-subsequence of $\{y_n\}$), $\{y_{n_{k_j}}\}$. Since subsequences of convergent sequences are also convergent, we now know that both $\{x_{n_{k_j}}\}$ and $\{y_{n_{k_j}}\}$ converge. Since they have the same index, we can use the inequalities above and proceed with the proof. Feel free to ask for more clarification if anything is unclear.<|endoftext|> TITLE: Conjectured value of $\int_{0}^{\infty}\left(\frac{x-1}{\ln^2 x}-\frac{1}{\ln x}\right)\frac{\mathrm{d}x}{x^2+1}$ QUESTION [34 upvotes]: I was curious whether this integral has a closed form expression: $$\int_{0}^{\infty}\left(\frac{x-1}{\ln^2 x}-\frac{1}{\ln x}\right)\frac{\mathrm{d}x}{x^2+1}$$ The integrand has a singularity at $x=1$, but it's removable. And as $x \to \infty$, the integrand behaves like $\frac{1}{x \ln^{2}x}$. So the integral clearly converges. Although I have not been able to derive its closed form, I think, by reverse symbolic calculators, that up to 20 digits it could be $$I=\frac{4G}{\pi}$$ where $G$ is Catalan's constant. Is it true or is it completely fabulous? EDIT. NOTE: For better searchability I have renamed the title from Conjectured value of logarithmic definite integral, which was ambiguous and did not say anything, to the current one with the integral written explicitly.
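A quick numerical sanity check of the conjecture, as a minimal sketch in Python using the mpmath library (the choice of mpmath is an assumption here; any arbitrary-precision quadrature would do). Splitting the integration range at $x=1$ keeps the quadrature nodes away from the removable singularity:

    # Compare the integral numerically with the conjectured value 4G/pi,
    # where G is Catalan's constant.
    from mpmath import mp, quad, log, catalan, pi, inf

    mp.dps = 30  # work with 30 significant digits

    def integrand(x):
        L = log(x)
        return ((x - 1) / L**2 - 1 / L) / (x**2 + 1)

    # quad() treats [0, 1, inf] as two subintervals and never samples
    # the integrand exactly at the singular point x = 1.
    numeric = quad(integrand, [0, 1, inf])
    print(numeric)           # approximately 1.1662436...
    print(4 * catalan / pi)  # agrees with the value above

Both values print as $\approx 1.1662436$, consistent with the conjecture $I = 4G/\pi$.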
REPLY [5 votes]: $\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Note that $\ds{\quad{x - 1 \over \ln\pars{x}} = \int_{0}^{1}x^{t}\,\dd t\,,\quad x \in \pars{0,1}}$. \begin{align} &\color{#f00}{\int_{0}^{\infty}\bracks{% {x - 1 \over \ln^{2}\pars{x}} - {1 \over \ln\pars{x}}} \,{\dd x \over x^{2} + 1}} \\[5mm] = &\ \int_{0}^{1}\bracks{% {x - 1 \over \ln^{2}\pars{x}} - {1 \over \ln\pars{x}}} \,{\dd x \over x^{2} + 1} + \int_{1}^{0}\bracks{% {1/x - 1 \over \ln^{2}\pars{1/x}} - {1 \over \ln\pars{1/x}}} \,{-\,\dd x/x^{2} \over 1/x^{2} + 1} \\[5mm] = &\ \int_{0}^{1}{\pars{x - 1}^{2} \over x\ln^{2}\pars{x}}\,{\dd x \over x^{2} + 1} = \int_{0}^{1}{1 \over x\pars{x^{2} + 1}}\int_{0}^{1}x^{y}\,\dd y\int_{0}^{1}x^{z}\,\dd z\,\dd x \\[5mm] = &\ \int_{0}^{1}\int_{0}^{1}\int_{0}^{1}{x^{y + z - 1} \over x^{2} + 1} \,\dd x\,\dd y\,\dd z = \int_{0}^{1}\int_{0}^{1}\int_{0}^{1} {x^{y + z - 1} - x^{y + z + 1}\over 1 - x^{4}}\,\dd x\,\dd y\,\dd z \\[5mm] \stackrel{x^{4}\ \mapsto\ x}{=}\,\,\, &\ {1 \over 4}\int_{0}^{1}\int_{0}^{1}\int_{0}^{1} {x^{y/4 + z/4 - 1}\,\,\, -\,\,\, x^{y/4 + z/4 - 1/2}\over 1 - x} \,\dd x\,\dd y\,\dd z \\[5mm] = &\ {1 \over 4}\int_{0}^{1}\int_{0}^{1}\bracks{% \Psi\pars{{y + z \over 4} + \half} - \Psi\pars{{y + z \over 4}}}\,\dd y\,\dd z \\[5mm] = &\ 4\int_{0}^{1/4}\int_{0}^{1/4}\bracks{% \Psi\pars{y + z + \half} - \Psi\pars{y + z}}\,\dd y\,\dd z\tag{1} \end{align} $\ds{\Psi}$ is the Digamma Function and we used its well known integral representation $\ds{\pars{~\gamma\ \mbox{is the}\ Euler\mbox{-}Mascheroni\ Constant~}}$ $$ \Psi\pars{z} = -\gamma + \int_{0}^{1}{1 - t^{z - 1} \over 1 - t}\,\dd t\,, \qquad\Re\pars{z} > 0 $$ Since $\ds{\Psi\pars{z}\ \stackrel{\mbox{def.}}{=}\ \totald{\ln\pars{\Gamma\pars{z}}}{z}}$ $\ds{\pars{~\Gamma\ \mbox{is the}\ Gamma\ Function~}}$, $\ds{\pars{1}}$ is reduced to: \begin{align} &\color{#f00}{\int_{0}^{\infty}\bracks{% {x - 1 \over \ln^{2}\pars{x}} - {1 \over \ln\pars{x}}} \,{\dd x \over x^{2} + 1}} \\[5mm] = &\ 4\int_{0}^{1/4}\bracks{\ln\pars{\Gamma\pars{z + {3 \over 4}}} - \ln\pars{\Gamma\pars{z + {1 \over 4}}} - \ln\pars{\Gamma\pars{z + \half}} + \ln\pars{\Gamma\pars{z}}}\,\dd z \\[5mm] = &\ 4\int_{0}^{1}\ln\pars{\Gamma\pars{z}}\,\dd z + 8\int_{0}^{1/4}\ln\pars{\Gamma\pars{z}}\,\dd z - 8\int_{0}^{3/4}\ln\pars{\Gamma\pars{z}}\,\dd z\tag{2} \end{align} The $\ds{\ln\Gamma}$-integrals are evaluated $\ds{\pars{~\mbox{the first one is rather trivial and it's equal to}\ \half\,\ln\pars{2\pi}~}}$ with the identity ( $\ds{\,\mathrm{G}}$ is the Barnes-G Function ) $$ \int_{0}^{z}\ln\pars{\Gamma\pars{z}}\,\dd z = \half\,z\pars{1 - z} + \half\,\ln\pars{2\pi}z + z\ln\pars{\Gamma\pars{z}} - \ln\pars{\,\mathrm{G}\pars{1 + z}} $$ Namely, \begin{equation} \left\lbrace\begin{array}{rcl}
\ds{\int_{0}^{1}\ln\pars{\Gamma\pars{z}}} & \ds{=} & \ds{\half\,\ln\pars{2\pi}\ \mbox{because}\ \Gamma\pars{1} = \,\mathrm{G}\pars{2} = 1.} \\[3mm] \ds{\int_{0}^{1/4}\ln\pars{\Gamma\pars{z}}} & \ds{=} & \ds{{3 \over 32} + {1 \over 8}\,\ln\pars{2\pi} + {1 \over 4}\,\ln\pars{\Gamma\pars{1 \over 4}} - \ln\pars{\,\mathrm{G}\pars{5 \over 4}}} \\[1mm] & \ds{=} & \ds{{3 \over 32} + {1 \over 8}\,\ln\pars{2\pi} - {3 \over 4}\,\ln\pars{\Gamma\pars{1 \over 4}} - \ln\pars{\,\mathrm{G}\pars{1 \over 4}}} \\[3mm] \ds{\int_{0}^{3/4}\ln\pars{\Gamma\pars{z}}} & \ds{=} & \ds{{3 \over 32} + {3 \over 8}\,\ln\pars{2\pi} + {3 \over 4}\,\ln\pars{\Gamma\pars{3 \over 4}} - \ln\pars{\,\mathrm{G}\pars{7 \over 4}}} \\[1mm] & \ds{=} & \ds{{3 \over 32} + {3 \over 8}\,\ln\pars{2\pi} - {1 \over 4}\,\ln\pars{\Gamma\pars{3 \over 4}} - \ln\pars{\,\mathrm{G}\pars{3 \over 4}}} \end{array}\right.\tag{3} \end{equation} In these expressions we used $\ds{\,\mathrm{G}\pars{1 + z} = \,\mathrm{G}\pars{z}\Gamma\pars{z}}$. Fortunately, values of $\ds{\,\mathrm{G}\pars{z}}$ at $\ds{z = {1 \over 4}, {3 \over 4}}$ are known: \begin{align} \,\mathrm{G}\pars{1 \over 4} & = A^{-9/8}\,\,\Gamma^{\, -3/4}\pars{1 \over 4} \exp\pars{{3 \over 32} - {K \over 4\pi}}\tag{4} \\[5mm] \,\mathrm{G}\pars{3 \over 4} & = A^{-9/8}\,\,\Gamma^{\, -1/4}\pars{3 \over 4} \exp\pars{{3 \over 32} + {K \over 4\pi}}\tag{5} \end{align} $\ds{A}$ and $\ds{K}$ are the Glaisher-Kinkelin and the Catalan Constants, respectively. With $\ds{\pars{4}\ \mbox{and}\ \pars{5}}$, $\ds{\pars{3}}$ becomes \begin{equation} \left\lbrace\begin{array}{rcl} \ds{\int_{0}^{1}\ln\pars{\Gamma\pars{z}}} & \ds{=} & \ds{\phantom{-\,}\half\,\ln\pars{2\pi}} \\[1mm] \ds{\int_{0}^{1/4}\ln\pars{\Gamma\pars{z}}} & \ds{=} & \ds{\phantom{-\,}{K \over 4\pi} + {1 \over 8}\ln\pars{2\pi} + {9 \over 8}\,\ln\pars{A}} \\[1mm] \ds{\int_{0}^{3/4}\ln\pars{\Gamma\pars{z}}} & \ds{=} & \ds{-\,{K \over 4\pi} + {3 \over 8}\ln\pars{2\pi} + {9 \over 8}\,\ln\pars{A}} \end{array}\right.\tag{6} \end{equation} With $\ds{\pars{6}}$, the expression $\ds{\pars{2}}$ is reduced to $\ds{\pars{~\ul{the\ final\ result}~}}$: $$ \color{#f00}{\int_{0}^{\infty}\bracks{% {x - 1 \over \ln^{2}\pars{x}} - {1 \over \ln\pars{x}}} \,{\dd x \over x^{2} + 1}} = \color{#f00}{4\,{K \over \pi}} \approx 1.1662 $$<|endoftext|> TITLE: Area of circle (double integral and cartesian coordinates)? QUESTION [13 upvotes]: I know that the area of a circle, $x^2+y^2=a^2$, in cylindrical coordinates is $$ \int\limits_{0}^{2\pi} \int\limits_{0}^{a} r \, dr \, d\theta = \pi a^2 $$ But how can find the same result with a double integral and only cartesian coordinates? REPLY [3 votes]: Just as an alternative solution to the qbert answer. Note that $|x|$, $|y|$ are not strictly less than $r$, but instead $|x| \leq r$, $|y| \leq r$. This can still insure that $r^2 \geq y^2$ and so $\sqrt{r^2-y^2}$ is real. After the first integral $$ \int_{-r}^r\int_{-\sqrt{r^2-y^2}}^{\sqrt{r^2-y^2}}\mathrm dx\mathrm dy= \int_{-r}^r2\sqrt{r^2-y^2}\mathrm dy $$ Instead of the change of variable, you can remember the known integral: $$\int \sqrt{r^2 - y^2} = \frac{1}{2} \left( r^2 \arcsin \frac{y}{r} + y \sqrt{r^2 - y^2} \right) + C$$ whose condition $|y| \leq |r|$ has just been mentioned and it is satisfied. 
The result follows almost immediately: $$\int_{-r}^r 2 \sqrt{r^2 - y^2} \mathrm{d} y = \left[ r^2 \arcsin \frac{y}{r} + y \sqrt{r^2 - y^2} \ \right]_{-r}^r = r^2 \arcsin (1) - r^2 \arcsin (-1) = r^2 \frac{\pi}{2} + r^2 \frac{\pi}{2} = \pi r^2$$ So, using the cartesian coordinates, the only important observation is simply to use the right integration limits for each variable. Using the circumference equation $x^2 + y^2 = r^2$, you can choose $x$ as a function of $y$, obtaining $x = \pm \sqrt{r^2 - y^2}$, which will be the integration limits for $x$. Then, you let $y$ sweep from $-r$ to $r$, which are the integration limits for $y$. Of course, you can alternatively do vice-versa, with $y$ as a function of $x$. Unlike in cylindrical coordinates, there is no angular variation here, but only a horizontal variation between $- \sqrt{r^2 - y^2}$ and $\sqrt{r^2 - y^2}$ as regards $x$, and a vertical variation between $-r$ and $r$ as regards $y$.<|endoftext|> TITLE: When is the universal cover of a Riemannian manifold complete? QUESTION [6 upvotes]: Let $(M,g)$ be a connected Riemannian manifold which admits a universal cover $(\tilde{M}, \tilde{g})$, where $\tilde{g}$ is the Riemannian metric such that the covering is a Riemannian covering. I want to know under what conditions the universal cover $\tilde{M}$ is complete. The reason for this question is that I want to know under what conditions on $M$ the Hopf-Rinow theorem can be applied to the universal cover. On Wolfram (http://mathworld.wolfram.com/CompleteRiemannianMetric.html) it says that if $M$ is compact, its universal cover is complete. Would someone be able to give a proof of this? And what deductions can we make if $M$ is complete (and possibly fulfills some other conditions)? (I'm not really looking for curvature conditions like corollaries of the Bonnet-Myers theorem). Thanks in advance for any help! REPLY [8 votes]: It's actually true that $M$ is complete if and only if its universal cover $\widetilde{M}$ is complete. Let $p: \widetilde{M} \to M$ be the universal covering map, $q \in M$ and $\tilde{q} \in p^{-1}(q)$. As has already been stated, by Hopf-Rinow, all we need to do to conclude that completeness of $\widetilde{M}$ implies completeness of $M$ is to prove the corresponding statement for the exponential maps based at $q$ and $\tilde{q}$. Now if $\widetilde{M}$ is complete and $\widetilde{E}: T_{\widetilde{q}}\widetilde{M} \to \widetilde{M}$ is its exponential map based at $\widetilde{q}$, define a map $$ E = p \circ \widetilde{E} \circ (dp)^{-1}: T_qM \to M$$ You can show that $p$ sends geodesics to geodesics by showing their images are locally length minimizing. Since $(dp)^{-1}$ is linear it sends radial lines to radial lines, and you can use this to show that $E$ is exactly the exponential map for $M$ based at $q$. Then from the above it follows immediately that $E$ is defined on the whole tangent space. The other answer shows the reverse implication.<|endoftext|> TITLE: If $f(x)$ has a vertical asymptote, does $f'(x)$ have one too? QUESTION [26 upvotes]: So here is what I understand: If $f(x)$ is increasing/decreasing, then its derivative $f'(x)$ is positive/negative and... If $f(x)$ is increasing/decreasing, then the derivative of $f'(x)$ (which is $f''(x)$) is concave up/concave down So my question is: if a graph has a vertical asymptote, the derivative must also have a vertical asymptote, too, right? Does it also work vice versa? I feel like there is a trick to it, but I'm not sure. I have a graph from GeoGebra here.
The dotted line is the derivative. REPLY [50 votes]: "if a graph has a vertical asymptote, the derivative must also have a vertical asymptote too, right?" No. A counterexample: $$f(x)=\frac{1}{x}+\sin\left(\frac{1}{x}\right)$$ This function is monotone and has a vertical asymptote at $x=0$. But its derivative has no limit.<|endoftext|> TITLE: Why does every vector space have a spanning set? QUESTION [6 upvotes]: I thought about this question, but I'm not sure if my proof is correct. In the book, this is stated as an observation right after the definition of spanning sets, so I tried to prove it. My attempt: Suppose that there exists a vector space $V$ with no spanning set $S = \{ v_1, v_2, \ldots, v_n \}$; then there exists $v_{n+1} \in V$ such that $v_{n+1} \neq \sum_{i=1}^{n} v_i$, so $S \cup \{ v_{n+1} \}$ could be a spanning set of $V$; but then there may exist $v_{n+2} \in V$ such that $v_{n+2} \neq \sum_{i=1}^{n+1} v_i$, so $S \cup \{ v_{n+1}, v_{n+2} \}$ could be a spanning set of $V$, and we can continue this cycle indefinitely. My doubt is: what ensures that we don't need infinitely many vectors to span $V$? REPLY [8 votes]: Your argument doesn't work - there are indeed vector spaces which have no finite spanning set (an interesting example of this is $\mathbb{R}$ as a $\mathbb{Q}$-vector space). However, there is a much simpler proof: if $V$ is a vector space, then $V$ is a spanning set of itself. If you want a basis - that is, a spanning set which is linearly independent - then things are much trickier. In fact, without the axiom of choice, they need not exist! We can however construct a basis for $V$ if the axiom of choice holds, using either Zorn's Lemma or transfinite induction (they're ultimately the same, just packaged differently).<|endoftext|> TITLE: How to complete Vakil's proof that the composition of projective morphisms are projective when the target is quasicompact? QUESTION [16 upvotes]: For this question, a morphism $\pi : X \rightarrow Y$ is projective iff there exists a finite type quasicoherent sheaf $\mathcal{E}$ on $Y$ such that $X$ is isomorphic (as a $Y-$scheme) to a closed subscheme of $\mathbb{P}(\mathcal{E})$. I am interested in finding a way to solve this problem as it is presented in Vakil, and not in solving it using completely different methods (such as can be found in e.g. EGA II $5.5.5$, or Stacks Tag $01W7$, although under the additional assumptions that $Y$ is either quasi-separated or Noetherian). In exercise 17.3.B of Vakil's "Foundations of Algebraic Geometry" notes, he asks us to show that if $\pi : X\rightarrow Y$ and $\rho : Y \rightarrow Z$ are projective morphisms and $Z$ is quasi-compact, then $\rho \circ \pi$ is also projective. The hint he gives is to show that in the case where $Z$ is affine, if $\mathcal{L}, \mathcal{M}$ are the very ample line bundles on $X,Y$ coming from pulling back the respective $\mathcal{O}(1)$ bundles from the projective bundles $X$ and $Y$ are closed subschemes of, then there is some $m$ such that $\mathcal{L}\otimes \pi^*(\mathcal{M})^{\otimes m}$ is $\rho\circ \pi-$very ample. He then suggests using that $Z$ is quasicompact to cover it by finitely many open affine pieces, but I can't work out how to use this to prove the result. My first instinct would be to glue together the morphisms constructed over each affine piece, or to extend the construction globally, but in this case neither approach works.
I can see that covering $Z$ by finitely many affine pieces $U_i$ allows us to find a fixed $m$ such that $\mathcal{L}\otimes \pi^*(\mathcal{M})^{\otimes m}$ is $\rho \circ \pi-$relatively very ample upon restriction to each $(\rho \circ \pi)^{-1}(U_i)$, but I don't understand how to use this to show that it is globally $\rho \circ \pi-$relatively very ample. Vakil does mention several times (and later proves) that with locally Noetherian hypotheses, the property of a line bundle on the source being relatively very ample can be checked affine-locally on the target, which would finish the proof if $Z$ were Noetherian, but this isn't part of the hypothesis of $17.3.B$. My question, then, is this: Is it possible to finish this approach to the exercise, perhaps by showing that $\mathcal{L}\otimes \pi^*(\mathcal{M})^{\otimes m}$ is $\rho \circ \pi-$relatively very ample globally given that it is locally? Or is it really the case that you need to either assume that $Z$ is Noetherian, or take a completely different approach to the proof (such as characterising projective morphisms to quasicompact schemes as those that are both quasiprojective and proper)? REPLY [4 votes]: Edit. I decided to just rewrite a proof. I still need quasi-separatedness, however. Theorem [Stacks, Tag 0C4P]. Suppose $\pi\colon X \to Y$ and $\rho\colon Y \to Z$ are projective morphisms, and $Z$ is quasi-compact and quasi-separated. Then, $\rho \circ \pi$ is projective. Proof. Let $\mathscr{M}$ be the $\rho$-very ample line bundle on $Y$. Let $X \hookrightarrow \mathbf{P}_Y(\mathscr{E})$ be the closed embedding factoring $\pi$, where $\mathscr{E}$ is a finite type quasicoherent sheaf on $Y$. Now we claim the following: Key Claim. There exists a finite type quasi-coherent sheaf $\mathscr{G}$ on $Z$ and a surjection $$\rho^*\mathscr{G} \twoheadrightarrow \mathscr{E} \otimes \mathscr{M}^{\otimes m}$$ for $m \gg 0$. We postpone the proof of the Key Claim for now. Using this surjection, we have a sequence of morphisms $$X \hookrightarrow \mathbf{P}_Y(\mathscr{E}) \cong \mathbf{P}_Y(\mathscr{E} \otimes \mathscr{M}^{\otimes m}) \hookrightarrow \mathbf{P}_Y(\rho^*\mathscr{G})$$ whose composition is still a closed embedding. Moreover, we have an isomorphism $$\mathbf{P}_Y(\rho^*\mathscr{G}) \cong \mathbf{P}_Z(\mathscr{G}) \times_Z Y$$ by [EGAII, 4.1.3.1]. Next, let $Y \hookrightarrow \mathbf{P}_Z(\mathscr{F})$ be the closed embedding factoring $\rho$, where $\mathscr{F}$ is a finite type quasicoherent sheaf on $Z$. Then, we have a closed embedding $$X \hookrightarrow \mathbf{P}_Z(\mathscr{G}) \times_Z Y \hookrightarrow \mathbf{P}_Z(\mathscr{G}) \times_Z \mathbf{P}_Z(\mathscr{F})$$ and composing by the (relative) Segre embedding [EGAII, §4.3], we get a closed embedding $$X \hookrightarrow \mathbf{P}_Z(\mathscr{G} \otimes \mathscr{F})$$ Since $\mathscr{G}$ and $\mathscr{F}$ were finite type quasi-coherent sheaves on $Z$, we have that $\rho \circ \pi$ is indeed projective. $\blacksquare$ We now return to the proof of the Key Claim. This is where we use that $Z$ is quasi-separated. Proof of Key Claim. Since projective morphisms are proper, we can apply [Vakil, 17.3.9] to say that $\mathscr{M}$ is in fact $\rho$-ample, and so for $m \gg 0$, we have that $\mathscr{E} \otimes \mathscr{M}^{\otimes m}$ is $\rho$-globally generated, that is, we have that the canonical map $$\rho^*\rho_*\!\left(\mathscr{E} \otimes \mathscr{M}^{\otimes m}\right) \twoheadrightarrow \mathscr{E} \otimes \mathscr{M}^{\otimes m}$$ is a surjection.
By [Görtz–Wedhorn, 10.50] we can write $$\rho_*\!\left(\mathscr{E} \otimes \mathscr{M}^{\otimes m}\right) = \varinjlim \mathscr{G}_\lambda$$ for the filtered system of finite type quasi-coherent subsheaves $\mathscr{G}_\lambda \subset \rho_*\!\left(\mathscr{E} \otimes \mathscr{M}^{\otimes m}\right)$. Since $\rho^*$ is the left adjoint of $\rho_*$, it preserves colimits, and the surjection above becomes a surjection $$\varinjlim \rho^*\mathscr{G}_\lambda \twoheadrightarrow \mathscr{E} \otimes \mathscr{M}^{\otimes m}$$ and by [Görtz–Wedhorn, 10.47], for $\lambda$ large enough, we have a surjection $$\rho^*\mathscr{G}_\lambda \twoheadrightarrow \mathscr{E} \otimes \mathscr{M}^{\otimes m}$$ as desired. $\blacksquare$ Remark. Here are possible ideas for getting rid of the quasi-separatedness assumption: Try using the affine case. Since $\mathscr{M}$ is ample, we locally have surjections $$\rho^*\mathcal{O}_{U_i}^{\oplus n_i} \twoheadrightarrow \left.\left(\mathscr{E} \otimes \mathscr{M}^{\otimes m_i} \right)\right\rvert_{\rho^{-1}(U_i)}$$ on each element $U_i$ of a finite open affine cover of $Z$. Then, we could hope to glue these surjections together somehow. One method is to extend the sheaves on the left-hand side to finite type quasi-coherent sheaves on all of $Z$, and use [EGAInew, 6.9.10.1], but that still uses quasi-separatedness. The issue is that extension theorems for finite type quasi-coherent sheaves need quasi-separatedness to make their glueing arguments work; see [EGAInew, 6.9; Görtz–Wedhorn, §10.11; Stacks, Tag 01PD]. In the proof above, to apply [Görtz–Wedhorn, 10.47], all we needed was to write $\rho_*\!\left(\mathscr{E} \otimes \mathscr{M}^{\otimes m}\right)$ as a filtered colimit of finite type quasi-coherent sheaves. Perhaps this can be done in this case even without quasi-separatedness. Remark. The Stacks Project [Stacks, Tag 0C4P] also restricts to $Z$ quasi-separated.<|endoftext|> TITLE: The sum of more than two consecutive natural numbers cannot be prime. QUESTION [5 upvotes]: The sum of more than two consecutive natural numbers cannot be prime. Is the statement true and is there any way to prove it? I was able to prove that the sum of an odd number of consecutive numbers cannot be prime: Since the sum of consecutive integers is $x+(x+1)+(x+2)+(x+3)$ etc., we can also write this as $$nx + n(n-1)/2 = n(x + (n-1)/2)$$ with $n$ the number of terms and $x$ the first number in the row. So, for odd $n\neq 1$, we get a product which will never result in a prime. Any way to prove this for all $n \ge 2$? Thanks for all the help. REPLY [3 votes]: For a sum of three or more consecutive positive integers $S = x + (x+1) + (x + 2) + ..... + (x + n -1)$, $x > 0; n > 2$, write the same sum in reverse order: $S = (x + n-1) + (x + n - 2) + ..... + (x + 1)+x$ Add 'em together. $2S = (2x + n -1) + (2x + n-1) + .... (2x + n-1) = n(2x + n-1)$ Case 1: $n$ is even. Then $S=\frac n2(2x + n -1)$ is not prime as $n/2 > 1$ and $2x + n - 1 > 2$. Case 2: $n$ is odd. Then $2x + n - 1$ is even and $S = n\frac{2x + n - 1}2$, which is not prime as $n > 1$ and $(2x + n - 1)/2 > 1$. You were 90% of the way there. You just needed to hit it with your paddle a few more times.<|endoftext|> TITLE: The entry-level PhD integral: $\int_0^\infty\frac{\sin 3x\sin 4x\sin5x\cos6x}{x\sin^2 x\cosh x}\ dx$ QUESTION [38 upvotes]: I hope you find this integral interesting.
Evaluate $$\int_0^\infty\frac{\sin\left(\,3x\,\right)\sin\left(\,4x\,\right) \sin\left(\,5x\,\right)\cos\left(\,6x\,\right)}{x\,\sin^{2}\left(\,x\,\right)\cosh\left(\,x\,\right)}\,\,\mathrm{d}x\tag1$$ This problem is taken from the PhD graduate entry tests in my college. I've tried to use the product-to-sum trigonometric identities $$2\sin 4x\sin 3x=\cos x-\cos 5x$$ and $$2\cos 6x\sin 5x=\sin 11x-\sin x$$ I got a bunch of integrals of the following form $$\int_0^\infty\frac{\sin \alpha x\cos \beta x}{x\sin^2 x\cosh x}\ dx\quad\Longrightarrow\quad\int_0^\infty\frac{\sin \gamma x}{x\sin^2 x\cosh x}\ dx\tag2$$ I tried $$I'(\gamma)=\int_0^\infty\frac{\cos \gamma x}{\sin^2 x\cosh x}\ dx\tag3$$ but the latter form is not easy to evaluate either. Can anyone here help me to evaluate $(1)$? Thanks in advance. REPLY [12 votes]: In fact, we have $$ \begin{align} I(M,N)&=\int_0^\infty\frac{\sin Nx\sin(N+1)x\sin Mx\cos(M+1)x}{x\sin^2 x\cosh x}\ dx\\[10pt] &=\sum_{m=1}^M\sum_{n=1}^N\left[\arctan\left( e^{(m+n)\pi} \right)-\arctan\left( e^{(m-n)\pi} \right)\right]\\[10pt] &=\frac{1}{2}\sum_{m=1}^M\sum_{n=1}^N\bigg[\operatorname{gd}\!\big((m+n)\pi\big)-\operatorname{gd}\!\big((m-n)\pi\big)\bigg] \end{align} $$ and the desired integral is $I(5,3)$. Sorry for the Cleo-style answer but right now I'm busy playing Pokemon Go, so I'll post the complete solution when I'm free. See ya...<|endoftext|> TITLE: Meta proof-searching QUESTION [7 upvotes]: Suppose you have a particular theory (ex: $ZFC$) in which you want to prove a statement $\phi$. One can attempt to find a proof of $\phi$ that can be verified, but another tactic can be to find a proof for the existence of a proof of $\phi$. Are there any examples of such "non-constructive proofs of proofs", where someone proved that "a proof of $\phi$" exists but did not explicitly find that proof itself? I am curious whether, for certain problems, this would be more tractable/efficient than computing the original proof of the sentence. REPLY [5 votes]: It has not turned out to be very helpful to prove that specific theorems are provable indirectly, when no actual proof of the theorem was previously known. There are two settings in mathematical logic, however, where we do show indirectly that particular theorems are provable in particular systems or in particular ways. The difference is that, when we work with particular theorems in these settings, we already know (or assume) the theorems are provable in general, and we are just concerned with the specific form of a proof. The first setting is related to cut elimination. Certain kinds of formal proofs are called "cut free"; these are of a particularly simple form. A general theorem shows that, in many systems, if a theorem is provable then it is provable via a cut free proof. This theorem is effective - there is a procedure to create a cut free formal proof from an arbitrary formal proof. But the size of the cut free proof is often much, much larger than the size of the original proof. So we rarely work with cut-free proofs explicitly. They are used primarily as hypothetical objects. The second setting is related to showing that particular theorems are provable in weak systems. Sometimes, it can be shown that if a theorem of some particular syntactic form is provable in a stronger system $S$, then it is also provable in a weaker system $W$. These are called "conservation results".
For example, a $\Pi^0_2$ theorem that is provable in the system $\mathsf{WKL}_0$ of second-order arithmetic is also provable in the system $\mathsf{PRA}$ of first-order primitive recursive arithmetic. It is much, much easier to work in $\mathsf{WKL}_0$ than $\mathsf{PRA}$. For example, Kikuchi and Tanaka (1994) showed that certain versions of the incompleteness theorem are provable in $\mathsf{PRA}$ by showing that the theorems are provable in $\mathsf{WKL}_0$, where they can use much more general methods. This only gives a theoretical provability result – no actual proof in $\mathsf{PRA}$ is constructed. In principle, the conservation result gives a method to turn a formal proof in $\mathsf{WKL}_0$ into a formal proof in $\mathsf{PRA}$, but the theoretical provability is of primary interest, not the actual formal proof. Many conservation results are known. Another example is Shoenfield's absoluteness theorem, which is often used to show that particular results are theoretically provable in ZF set theory, based on the syntactic form of the theorem and the provability of the theorem in ZFC. This allows us to show that some theorems of particular syntactic forms are provable without the axiom of choice based on their provability in ZFC with the axiom of choice.<|endoftext|> TITLE: Baby/Papa/Mama/Big Rudin QUESTION [52 upvotes]: Recently, I was looking for reviews of some Analysis books when I encountered terms such as Baby/Papa/Mama/Big Rudin. At first, I thought that these were the names of a book! But it turned out that these are nicknames used for the books of Walter Rudin. So I was wondering: $1$. What are the books corresponding to these nicknames? $2$. Why were such nicknames chosen? What are their origins? REPLY [48 votes]: In order to sum up the above comments, the corresponding books for these nicknames are $1$. Baby = Principles of Mathematical Analysis; $2$. Papa/Big = Real and Complex Analysis; $3$. Grandpa = Functional Analysis; and it seems that the difficulty of the books' contents grows with the age of the nicknames! Firstly, you are a baby and things are easy to handle. Then you grow up and become a papa and things get more complicated. Finally, when you are a grandpa you should take care of your legacy very carefully, which needs hard work! So $1$ is a prerequisite of $2$ and $2$ is a prerequisite of $3$.<|endoftext|> TITLE: Why does this ring have rank $k!$? QUESTION [10 upvotes]: Let $R$ be any ring free of rank $k$ over $\mathbb{Z}$ having non-zero discriminant. Let $R^{\otimes k} = R \otimes_{\mathbb{Z}} \cdots \otimes_{\mathbb{Z}} R$. Then $R^{\otimes k}$ is a ring of rank $k^k$ in which $\mathbb{Z}$ lies naturally as a subring via $n \mapsto n(1 \otimes \cdots \otimes 1)$. Denote by $I_R$ the ideal in $R^{\otimes k}$ generated by elements of the form $$ x \otimes \cdots \otimes 1 + 1 \otimes x \otimes \cdots \otimes 1 + \cdots + 1 \otimes \cdots \otimes x - \text{Tr} (x) $$ for $x \in R$. Let $$ J_R = \{ r \in R^{\otimes k} : nr \in I_R \text{ for some } n \in \mathbb{Z} \}. $$ Then the ring $$ R^{\otimes k} / J_R $$ has rank $k!$. I have been trying to work out why this ring has rank $k!$, but I am not quite seeing it yet. I would appreciate any explanation. Thank you very much! PS This is a ring that comes up in ``Higher composition laws III'' by Manjul Bhargava. REPLY [3 votes]: Please see the following paper by Bhargava and Satriano here, which contains the details of this computation and other related results. M. Bhargava and M.
Satriano, On a notion of “Galois closure” for extensions of rings, Journal of the European Mathematical Society 16, 1881-1913 (2014). The aim of the authors is to define for commutative ring extensions a notion similar to the notion of Galois closure of fields. For a commutative ring $B$ and a commutative $B$-algebra $A$ which is locally free of rank $n$, they construct a commutative ring $G(A|B)$ which is called the $S_n$-closure of $A$ over $B$. If $B\subseteq A$ is a field extension of degree $n$ with Galois group $S_n$, it is easily seen that $G(A|B)$ is exactly the Galois closure of $A$ as a field extension of $B$. The authors investigate several properties of the $S_n$-closure, such as for example the functoriality, and the behavior with respect to base change or to general finite products. They also investigate several cases of natural ring extensions, e.g. monogenic or étale ones. Finally, they extend the notion of $S_n$-closure to schemes.<|endoftext|> TITLE: Maximum area of triangle inside a convex polygon QUESTION [8 upvotes]: Prove that within any convex polygon of area $A$, there exists a triangle with area at least $cA$, where $c=\tfrac{3}{8}$. Are there any better constants $c$? I'm not sure how to approach this problem. It is easily proven that such a triangle should have its vertices on the perimeter of the polygon, but I don't know how to proceed from here. REPLY [5 votes]: As first pointed out by Mark Fischler in his comment, one can take $c = \frac{3\sqrt{3}}{4\pi}$. In addition to Steiner symmetrization mentioned in Jack D'Aurizio's answer$\color{blue}{{}^{[1],[2]}}$, there is another elegant analytic approach which can be generalized to inscribed $n$-gons. E. Sas (1939) - For any $n \ge 3$, let $c_n = \frac{n}{2\pi}\sin\left(\frac{2\pi}{n}\right)$. For any convex body $B$ in the plane, there exists an $n$-gon $P$ inside $B$ such that $$\verb/Area/(P) \ge c_n \verb/Area/(B) \tag{*1}$$ When $n = 3$ and $B$ is a convex polygon, the claim that we can take $c = c_3 = \frac{3\sqrt{3}}{4\pi}$ follows immediately. The following argument is based on a paper by E. Sas in German$\color{blue}{{}^{[3]}}$. All mistakes are mine; in case anything looks fishy, please refer to Sas's paper for the correct statement. Since I am lazy, I will assume $B$ is a convex body whose boundary $\partial B$ is a smooth Jordan curve. This avoids all sorts of potential pathologies and saves me from justifying a lot of stuff. Let $2\ell$ be the diameter of $B$. Let $L$, $R$ be two points on $\partial B$ at a distance $2\ell$ apart. Choose a coordinate system such that $L,R$ are located at $(-\ell,0)$ and $(\ell,0)$ respectively. Under such a coordinate system, $\partial B$ has a parametrization $\gamma$ of the form: $$[0,2\pi] \ni t \quad\mapsto\quad \gamma(t) = (\ell\cos t, e(t)\sin t ) \in \partial B$$ where $e(t) > 0$ is some smooth function. Extend $\gamma(t)$ and $e(t)$ to smooth periodic functions over $\mathbb{R}$ with period $2\pi$. For any fixed $t \in [0,2\pi]$ and $k \in \mathbb{Z}$, let $t_k = t + \frac{2k\pi}{n}$. It is clear that $t_{k+n} = t_k + 2\pi$, so $\gamma(t_{k+n}) = \gamma(t_k)$. Let $P(t)$ be the $n$-gon with vertices $\gamma(t_0), \gamma(t_1), \gamma(t_2), \ldots, \gamma(t_{n-1})$. Since $B$ is convex and $\gamma(t_k) \in B$, $P$ lies inside $B$. Let $f(t)$ be the area of $P(t)$.
It is easy to work out $$f(t) = \frac{\ell}{2} \sum_{k=1}^n e(t_k)\sin t_k(\cos(t_{k-1}) - \cos(t_{k+1})) = \ell\sin\left(\frac{2\pi}{n}\right)\sum_{k=1}^n e(t_k)\sin^2(t_k)$$ Now treat $t$ as a variable and average $f(t)$ over $[0,2\pi]$; one gets $$\frac{1}{2\pi}\int_0^{2\pi} f(t) dt = \frac{n}{2\pi}\sin\left(\frac{2\pi}{n}\right)\times \ell\int_0^{2\pi} e(t)\sin^2(t)dt = c_n \verb/Area/(B)$$ This implies there exists a $t_{*} \in [0,2\pi]$ such that $f(t_{*}) \ge c_n \verb/Area/(B)$. In other words, there exists a polygon $P = P(t_{*}) \subset B$ whose area is at least $c_n$ times that of $B$. Back to the problem at hand for convex polygons. For any polygon $P$, let $|P|$ be its number of sides. Given any convex polygon $Q$ and any $0 < c < c_n$, approximate $Q$ from inside by a convex body $B \subset Q$ with smooth boundary whose area is at least $\frac{c}{c_n}\verb/Area/(Q)$. By $(*1)$, there is an $n$-gon $P \subset B \subset Q$ with $\verb/Area/(P) \ge c_n\verb/Area/(B) \ge c\verb/Area/(Q)$. Since the set of polygons $P \subset Q$ with $|P| \le n$ is compact under the topology induced from $\mathbb{R}^2$ and $\verb/Area/(\cdot)$ is continuous with respect to this topology, there is a polygon $P_{*} \subset Q$ with $|P_{*}| \le n$ and $\verb/Area/(P_*) \ge c_n \verb/Area/(Q)$. If $|P_{*}| < n$, we can turn $P_*$ into an $n$-gon by adding some extra vertices along its edges. Taking $n = 3$, this means there is a triangle inside $Q$ whose area is at least $c_3 = \frac{3\sqrt{3}}{4\pi}$ times that of $Q$. Notes $\color{blue}{[1]}$ - For a proof of $(*1)$ when $n = 3$, see this answer by Christian Blatter. $\color{blue}{[2]}$ - Christian Blatter's answer uses Steiner symmetrization. For more detail, please refer to Wilhelm Blaschke, Über affine Geometrie III: Eine Minimumeigenschaft der Ellipse. Leipziger Berichte 69 (1917), pages 3–12. $\color{blue}{[3]}$ - E. Sas, Über eine Extremaleigenschaft der Ellipsen, Compositio Math. 6 (1939), 468–470.<|endoftext|> TITLE: Bound on $c-b$ for $a^n+b^n=c^n$ QUESTION [5 upvotes]: Let $a\leq b\leq c$ be positive real numbers and $n$ a positive integer with $a^n+b^n=c^n$. Prove that $c-b\leq(\sqrt[n]{2}-1)a$. The desired inequality can be written as $c-b+a\leq \sqrt[n]{2}a$. Raising to the power of $n$, this is $(c-b+a)^n\leq 2a^n$. If it were true that $(c-b+a)^n\leq c^n-b^n+a^n$ we would be done, since the latter is just $2a^n$. REPLY [4 votes]: Fix $a$. By convexity of $f(x) = x^n$ on $\mathbb{R}_+$, an increase in $b$ leads to a lesser increase in $c$. Hence, the difference $c-b$ is maximized when $a=b$. The inequality follows.<|endoftext|> TITLE: Restriction of a $C^{\infty}$ vector bundle over a regular submanifold. QUESTION [6 upvotes]: This question is about the content of page 134 of Tu's An Introduction to Manifolds. A $C^{\infty}$ vector bundle of rank $r$ is a triple $(E,M,\pi)$ consisting of manifolds $E$ and $M$ and a surjective smooth map $\pi:E\rightarrow M$ that is locally trivial of rank $r$. More precisely, (i) each fiber $\pi^{-1}(p)$ has the structure of a vector space of dimension $r$, (ii) for each $p\in M$, there are an open neighborhood $U$ of $p$ and a fiber-preserving diffeomorphism $\phi:\pi^{-1}(U)\rightarrow U\times\mathbb{R}^r$ such that for every $q\in U$ the restriction $$\phi\mid_{\pi^{-1}(q)}:\pi^{-1}(q)\rightarrow \{q\}\times\mathbb{R}^r$$ is a vector space isomorphism. My question is: Given any regular submanifold $S\subset M$, is the triple $(\pi^{-1}S,S,\pi\mid_{\pi^{-1}S})$ also a $C^{\infty}$ vector bundle over $S$?
I tried to resolve this issue by checking the conditions (i) and (ii). Condition (i) is obviously satisfied. To prove (ii), I chose an adapted chart $(U,\phi)$ with $p\in U$ (which gives a chart for $S$) and tried showing that the map $$\phi\mid_{\pi^{-1}(U\cap S)}:\pi^{-1}(U\cap S)\rightarrow (U\cap S)\times\mathbb{R}^r$$ is a fiber-preserving diffeomorphism. But is this map necessarily a diffeomorphism? This is a restriction of a diffeomorphism, and even though it possesses an inverse (using $\phi^{-1}$) which looks like a smooth inverse for $\phi$, we don't know yet what manifold structures the sets $\pi^{-1}(U\cap S)$ and $(U\cap S)\times\mathbb{R}^r$ have, and I hesitate to say for sure that those maps are smooth maps, let alone inverse maps to each other. In summary: Do the sets $\pi^{-1}(U\cap S)$ and $(U\cap S)\times\mathbb{R}^r$ have manifold structures? Canonical ones? Is this true in general? In what sense? (See: Restriction of a vector bundle is a vector bundle.) Thanks in advance. REPLY [4 votes]: First, note that $\pi: E \to M$ is a submersion, so in particular is transverse to any submanifold $S \subset M$. So $\pi^{-1}(S)$ has a canonical manifold structure. Similarly for $\pi^{-1}(U \cap S)$. Any smooth map, restricted to a submanifold, is automatically smooth, so $\phi$ is smooth, as is its inverse.<|endoftext|> TITLE: A surprising inequality about a $\limsup$ for any sequence of positive numbers QUESTION [19 upvotes]: Prove the following: Let $\{x_n\}_{n=1}^{\infty}$ be a sequence of positive real numbers. Then $$ \limsup \frac{x_1+x_2+...+x_n+x_{n+1}}{x_n} \geq 4 \ .$$ Note: Yes, $4$ is sharp: the sequence $x_n=2^n$. I have already proved that $$ \liminf x_n < \infty \implies \limsup \frac{x_1+x_2+...+x_n+x_{n+1}}{x_n} = \infty.$$ So, the interesting case is when $\lim x_n = \infty$. REPLY [4 votes]: Solution (as in my lecture). Suppose that for a positive constant $c$ and a number $N$ $$ \frac{a_1+\dots+a_{n+1}}{a_n} \le c \quad \text{ for all }n \ge N $$ Let $b_n:= \sum_{i=1}^n a_i$; the inequality above is equivalent to $$b_{n+1} \le c(b_{n}-b_{n-1}) \text{ for all }n \ge N$$ This form of the inequality is somewhat easier to deal with, but let's take it further: letting $c_{n}$ denote $\frac{b_n}{b_{n-1}}$, we get $$c_{n+1}+\frac{c}{c_n} \le c$$ Thus $(c_n , n \ge N)$ is a bounded sequence of real numbers, so we can choose a subsequence $(c_{n_k},k\ge 1)$ such that $$\lim_{k \rightarrow \infty} c_{n_k}= \underbrace{\limsup_{n \rightarrow \infty } c_n}_{=:a}$$ Thus $$c \ge \limsup_{k \rightarrow \infty} \left(c_{n_k}+\frac{c}{c_{n_k-1}} \right)=a+\limsup_{k \rightarrow \infty} \frac{c}{c_{n_k-1}} \ge a+ \frac{c}{ \limsup c_n} =a+\frac{c}{a} \ge 2\sqrt{c} $$ That is, $c$ must be at least $4$. This means $$\text{if } c \ge \limsup \frac{ a_1+\dots+a_{n+1}}{a_n} \text{ then } c \ge 4$$ Hence the conclusion. $\square$ A small story: This question brings back lots of memories. The first time I encountered it was when solving the famous analysis problem books by Kaczor and Nowak, a very long time ago. Then after that, in a small tutoring class with gifted high-school students, I told them that it was important to find a good presentation of your problem. And this question was the example I used in that class.<|endoftext|> TITLE: Is $X^*$ complete with weak*-topology QUESTION [6 upvotes]: Suppose $X$ is a topological vector space and $X^*$ is its topological dual space. Equip $X^*$ with the weak*-topology. Is $X^*$ complete?
Suppose $f_s$ is a Cauchy net in $X^*$; it is easy to see that $f=\lim f_s$ exists. We can prove that $f$ is linear, but I couldn't see if it is continuous. REPLY [5 votes]: No. For $X$ Hausdorff locally convex, the completion of $(X^*,\sigma^*)$ consists of all linear functionals on $X$ (the reason is that the semi-norms of $\sigma^*$ are determined by finite subsets of $X$, and on a finite dimensional subspace each linear functional is continuous and can be extended to an element of $X^*$ by Hahn-Banach). For example, if $X$ is an infinite dimensional Banach space the weak$^*$ dual is incomplete (this depends on the axiom of choice).<|endoftext|> TITLE: Does the series $1-\frac12+\frac12-\frac1{2^2}+\frac13-\frac1{2^3}+\frac14-\frac1{2^4}+\frac15-\frac1{2^5}+\cdots$ converge or diverge? QUESTION [9 upvotes]: $1-\frac{1}{2}+\frac{1}{2}-\frac{1}{2^2}+\frac{1}{3}-\frac{1}{2^3}+\frac{1}{4}-\frac{1}{2^4}+\frac{1}{5}-\frac{1}{2^5}+\cdots$ I've been trying to figure out how to write this series symbolically so I can examine its limit, but I'm having trouble. So far the best I've come up with is: $\sum_{n=0}^{\infty}(-1)^n\left(\frac{1}{2^n}-\frac{1}{2^{n+1}}\right)$ But obviously the above does not properly reproduce the series. REPLY [5 votes]: Observe that $$S_{n}=\displaystyle\sum_{i=1}^{\left\lceil \frac{n}{2}\right\rceil}\left(\dfrac{1}{i}\right)-\displaystyle\sum_{i=1}^{\left\lfloor \frac{n}{2}\right\rfloor}\left(\dfrac{1}{2^i}\right)\ge\left(\displaystyle\sum_{i=1}^{\left\lceil\frac{n}{2}\right\rceil}\dfrac{1}{i}\right)-1$$ Since the sequence of partial sums diverges (the right-hand side is a harmonic sum minus $1$), the series also diverges.<|endoftext|> TITLE: Find the average value of the function... QUESTION [5 upvotes]: Find the average value of the function $$F(x) = \int_x^1 \sin(t^2) \, dt$$ on $[0,1]$. I know the average value of a function $f(x)$ on $[a,b]$ is $f_\text{avg}=\dfrac1{b-a} \int_a^b f(x) \, dx$, but I don't know how to apply that to this question... The function looks like the Fresnel integral? But that doesn't quite help me either. REPLY [7 votes]: $$ \begin{align} \int_0^1\int_x^1\sin\left(t^2\right)\,\mathrm{d}t\,\mathrm{d}x &=\int_0^1\int_0^t\sin\left(t^2\right)\,\mathrm{d}x\,\mathrm{d}t\\ &=\int_0^1\sin\left(t^2\right)\,t\,\mathrm{d}t\\ &=\frac12\int_0^1\sin\left(t^2\right)\,\mathrm{d}t^2\\ &=\frac12\left[-\cos\left(t^2\right)\right]_0^1\\[3pt] &=\frac{1-\cos(1)}2 \end{align} $$<|endoftext|> TITLE: Real Analysis, Folland Theorem 3.18 Differentiation on Euclidean Space QUESTION [5 upvotes]: Background Information: A measurable function $f:\mathbb{R}^n\rightarrow \mathbb{C}$ is called locally integrable (w.r.t. Lebesgue measure) if $\int_K |f(x)|dx < \infty$ for every bounded measurable $K\subset \mathbb{R}^n$. We denote the space of locally integrable functions by $L^1_{loc}$. If $f\in L^1_{loc}$, $x\in \mathbb{R}^n$, and $r > 0$, we define $A_r f(x)$ to be the average value of $f$ on $B(r,x)$ (the ball of radius $r$ around $x$): $$A_r f(x) = \frac{1}{m(B(r,x))}\int_{B(r,x)} f(y) dy$$ Maximal Theorem - There is a constant $C > 0$ such that for all $f\in L^1$ and all $\alpha > 0$, $$m(\{x:Hf(x) > \alpha\}) \leq \frac{C}{\alpha}\int |f(x)|dx$$ Question: Theorem 3.18 - If $f\in L^1_{loc}$ then $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in\mathbb{R}^n$. I am trying to understand Folland's proof but I am having some trouble. He first begins by saying that it suffices to show that for $N\in\mathbb{N}$, $A_r f(x)\rightarrow f(x)$ for a.e. $x$ with $|x| \leq N$. Why does $|x| \leq N$ suffice?
Then he states that for $|x|\leq N$ and $r\leq 1$ the values $A_r f(x)$ depend on $f(y)$ for $|y|\leq N + 1$. Again, why is $|y|\leq N+1$? Then he says by Theorem 2.41 we can find a continuous integrable function $g$ such that $\int |g(y) - f(y)|dy < \epsilon$. Then the rest is not so bad to follow. If there is another way of proving this please let me know; otherwise I just need to understand the beginning points and I think I should be able to understand the proof. Second question: As mentioned, Folland uses Theorem 2.41 to find a continuous integrable function $g$ such that $\int |g(y) - f(y)|dy < \epsilon$. By continuity of $g$ we have that for $x\in\mathbb{R}^n$ and $\delta > 0$ there exists an $r > 0$ such that $|g(y) - g(x)| < \delta$ whenever $|y - x| < r$, and hence $$|A_r g(x) - g(x)| = \frac{1}{m(B(r,x))}\left|\int_{B(r,x)} [g(y) - g(x)] dy \right| < \delta$$ therefore $A_r g(x)\rightarrow g(x)$ as $r\rightarrow 0$ for all $x$, so \begin{align*} \limsup_{r\rightarrow 0}|A_rf(x) - f(x)| &= \limsup_{r\rightarrow 0} |A_r(f-g)(x) + (A_r g - g)(x) + (g - f)(x)|\\ &\leq H (f-g)(x) + 0 + |f-g|(x) \end{align*} Let $$E_{\alpha} = \{x:\limsup_{r\rightarrow 0} |A_r f(x) - f(x)| > \alpha\}, \ \ \ F_{\alpha} = \{x: |f - g|(x) > \alpha\}$$ This is where I get confused again. Folland says to note that $$E_{\alpha} \subset F_{\alpha/2}\cup \{x: H(f-g)(x) > \alpha/2\}$$ Why does he have $F_{\alpha/2}$? I understand why $E_{\alpha}$ is a subset of this union, but I don't understand the intuition. Then he says that $$(\alpha/2)m(F_{\alpha/2}) \leq \int_{F_{\alpha/2}} |f(x) - g(x)| dx < \epsilon$$ Is he using the Maximal Theorem there? REPLY [2 votes]: Let us prove it step by step. I wrote a detailed proof following Folland's approach. Theorem 3.18 - If $f\in L^1_{loc}$ then $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in\mathbb{R}^n$. Proof: Step 1: The purpose of this step is to prove that, without loss of generality, we can assume $f$ to be in $L^1$. Let $\{X_j\}_j$ be any sequence of measurable sets such that $\mathbb{R}^n=\bigcup_j X_j$. IF, for all $j$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$, THEN we have that $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in\mathbb{R}^n$. In fact, let $F=\{x \in \mathbb{R}^n \: : \: \lim_{r\rightarrow 0} A_r f(x) \neq f(x) \}$. If, for all $j$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$, it means that, for all $j$, we have $m(F\cap X_j)=0$. Since $$F=F \cap\mathbb{R}^n= F\cap \bigcup_j X_j = \bigcup_j (F\cap X_j)$$ we have $$m(F)\leq \sum_j m(F\cap X_j)=0$$ So we have that $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in\mathbb{R}^n$. Now let us pick one specific sequence $\{X_j\}_j$ of measurable sets such that $\mathbb{R}^n=\bigcup_j X_j$. Take $$X_j=\{ x \in \mathbb{R}^n \: : \: |x|\leq j\}$$ By what we have shown above, we have 1.a: to prove that $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in\mathbb{R}^n$, all we need is to prove that for all $j$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$. Since we want to prove that $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in\mathbb{R}^n$, we can, without loss of generality, restrict our attention to $r<1$. Given any $j$, and given any $x \in X_j$, we have that $B(1,x) \subset X_{j+1}$, and since, for $r<1$, $A_r f(x)$ depends only on the value of $f$ on $B(1,x)$, we have that, for all $r<1$, $$A_r f(x)= A_r (f\chi_{X_{j+1}})(x)$$ In particular, we have $\lim_{r \to 0} A_r f(x)= \lim_{r \to 0}A_r (f\chi_{X_{j+1}})(x)$.
So we get that $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$ iff $\lim_{r \to 0}A_r (f\chi_{X_{j+1}})(x)= f(x)$ for a.e. $x\in X_j$. Note that, since $X_{j+1}$ is bounded, we have that $f\chi_{X_{j+1}}\in L^1$. So, we have 1.b: to prove that for all $f\in L_{loc}^1$ and all $j$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$, all we need to prove is that, for all $f\in L^1$ and all $j$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$. Combining 1.a and 1.b, to prove the theorem we need only prove that for all $f\in L^1$ and all $j$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in X_j$. In fact, in the next steps below, we will prove that: for all $f\in L^1$, $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e. $x\in \mathbb{R}^n$. Step 2: Lemma: If $f \in L^1$ then, for all $\alpha>0$, $m(\{x:\limsup_{r\rightarrow 0} |A_r f(x) - f(x)| > \alpha\})=0$. Given any $f \in L^1$ and any $\epsilon>0$, by Theorem 2.41 we can find a continuous integrable function $g$ such that $\int |g(y) - f(y)|dy < \epsilon$. By continuity of $g$ we have that for $x\in\mathbb{R}^n$ and $\delta > 0$ there exists an $r > 0$ such that $|g(y) - g(x)| < \delta$ whenever $|y - x| < r$, and hence $$|A_r g(x) - g(x)| = \frac{1}{m(B(r,x))}\left|\int_{B(r,x)} [g(y) - g(x)]\, dy \right| \leq \frac{1}{m(B(r,x))} \int_{B(r,x)} |g(y) - g(x)|\, dy \\ <\frac{1}{m(B(r,x))} \,\delta\, m(B(r,x)) = \delta$$ therefore $A_r g(x)\rightarrow g(x)$ as $r\rightarrow 0$ for all $x$, so \begin{align*} \limsup_{r\rightarrow 0}|A_rf(x) - f(x)| &= \limsup_{r\rightarrow 0} |A_r(f-g)(x) + (A_r g - g)(x) + (g - f)(x)|\\ &\leq H (f-g)(x) + 0 + |f-g|(x) \tag{1} \end{align*} Now, given any $\alpha >0$, let $$E_{\alpha} = \{x:\limsup_{r\rightarrow 0} |A_r f(x) - f(x)| > \alpha\}.$$ For all $x\in E_{\alpha}$, from $(1)$, we have that either $ H (f-g)(x) > \alpha/2$ or $|f - g|(x) \geq \alpha/2$. It means $$E_{\alpha} \subset \{x: |f - g|(x) \geq \alpha/2\} \cup \{x: H (f-g)(x) > \alpha/2\} \tag{2}$$ Let $ F_{\alpha/2} = \{x: |f - g|(x) \geq \alpha/2\}$. Since, for all $x\in F_{\alpha/2}$, $\alpha/2 \leq|f - g|(x)$, we have $$(\alpha/2)m(F_{\alpha/2}) \leq \int_{F_{\alpha/2}}|f(x)-g(x)|dx \leq \int |f(x)-g(x)|dx <\epsilon$$ So we have $$m(\{x: |f - g|(x) \geq \alpha/2\})=m(F_{\alpha/2})<\frac{2\epsilon}{\alpha}$$ On the other hand, by the maximal theorem, $$m(\{x: H (f-g)(x) > \alpha/2\})\leq \frac{2(3^n)\epsilon}{\alpha}$$ From $(2)$ we have: $$m(E_{\alpha}) \leq m( \{x: |f - g|(x) \geq \alpha/2\}) + m( \{x: H (f-g)(x) > \alpha/2\} ) < \frac{2\epsilon}{\alpha} + \frac{2(3^n)\epsilon}{\alpha}$$ Since $\epsilon>0$ is arbitrary, we have that $m(E_{\alpha})=0$, for all $\alpha >0$. That means: $m(\{x:\limsup_{r\rightarrow 0} |A_r f(x) - f(x)| > \alpha\})=0$, for all $\alpha>0$. Step 3: Given any $f \in L^1$, let $$H=\{x : \lim_{r\rightarrow 0} A_r f(x) \textrm{ does not exist or } \lim_{r\rightarrow 0} A_r f(x) \neq f(x)\}$$ Note that $$H= \{x: \limsup_{r\rightarrow 0} |A_r f(x) - f(x)|>0\} =\bigcup_{k=1}^\infty\left \{x:\limsup_{r\rightarrow 0} |A_r f(x) - f(x)| > \frac{1}{k} \right\}$$ So, by the lemma in step 2, we have \begin{align*} m(H) &= m(\{x: \limsup_{r\rightarrow 0} |A_r f(x) - f(x)|>0\}) \\ &\leq \sum_{k=1}^\infty m\left (\left \{x:\limsup_{r\rightarrow 0} |A_r f(x) - f(x)| > \frac{1}{k}\right\}\right ) = \sum_{k=1}^\infty 0 =0 \end{align*} So, $$m(H)=0$$ which means that $\lim_{r\rightarrow 0} A_r f(x) = f(x)$ for a.e.
$x\in\mathbb{R}^n$.<|endoftext|> TITLE: Homology and cohomology of 7-manifold QUESTION [10 upvotes]: I have the following problem: Let $M$ be a connected closed $7$-manifold such that $H_1(M,\mathbb{Z}) = 0$, $H_2(M,\mathbb{Z}) = \mathbb{Z}$, $H_3(M,\mathbb{Z}) = \mathbb{Z}/2$. Compute $H_i(M,\mathbb{Z})$ and $H^i(M,\mathbb{Z})$ for all $i$. I know that if $M$ is orientable, using Poincaré duality, the fact that $\chi(M)=0$ and the exact sequence for $H^i(M,\mathbb{Z})$ I can get the result. But, I don't know how to prove that $M$ is orientable. I know that is the case if $M$ does not have $2$-torsion on $\pi_1(M)$, but I don't see why this $2$-torsion should descend to $H_1(M)$. REPLY [4 votes]: A quick proof using Stiefel–Whitney classes: a manifold $M$ is orientable iff the first SW class $w_1(M) \in H^1(M;\mathbb{Z}/2\mathbb{Z})$ is zero. But by the universal coefficient theorem, $$H^1(M;\mathbb{Z}/2\mathbb{Z}) = \operatorname{Hom}_\mathbb{Z}(H_1(M;\mathbb{Z}), \mathbb{Z}/2\mathbb{Z}) = 0.$$ Of course under the hood I don't think there's anything more than what you can find in Eric Wofsey's answer, but this argument is quite simple and shows the power of characteristic classes.<|endoftext|> TITLE: Why is uniqueness important for PDEs? QUESTION [30 upvotes]: Every text on PDEs I come across will spend a lot of time on showing the existence and uniqueness of solutions to a particular PDE. The importance of the existence of a solution to a PDE is obvious, but I can't see why so much time is spent on uniqueness. Why do we care whether a solution is unique or not as long as we know that there is a solution? Is uniqueness just shown for the sake of it, i.e. the sake of completeness, or is there some deeper reason why it's considered important to show uniqueness? REPLY [4 votes]: Suppose you show that $$y=e^x \rightarrow \left(\frac{dy}{dx} = y \wedge y(0) = 1\right).$$ Have you solved the IVP? In my opinion, no: it's not solved until you've also proved that $$\left(\frac{dy}{dx} = y \wedge y(0) = 1\right) \rightarrow y=e^x.$$ That's what uniqueness gives you; it lets you derive the second formula from the first, via this principle of logic that I still don't have a name for.<|endoftext|> TITLE: An exact sequence of compact topological groups. QUESTION [6 upvotes]: Let $A, B, C $ be abelian topological groups such that we have the following exact sequence: $$0\to A \to B \to C \to 0. $$ Assume also that $A$, $C$ are compact and all the maps are open. Then is it true that $B$ is also compact? If this is false, I would be interested in possible ways to strengthen the hypothesis so that it is true. If it's true, I would also be interested in various ways to weaken the hypothesis. In particular I would like to get rid of the open hypothesis if possible. REPLY [3 votes]: First I give names to the maps by $$ 0\to A \xrightarrow{f} B \xrightarrow{g} C \to 0. $$ If $A$ is open in $B$, then consider $$ B = \coprod_{[x] \in B/A } x.A .$$ Now each $x.A$ is open so this is an open cover of $B$. But this means $C$ is discrete and compact, so $C$ is finite. So $B$ is covered by a finite number of open compact sets, thus compact. If $g$ is not open this is in general not true. Just consider the inclusion of $G_{\text{discrete}}$ in $G$ for any compact topological group. By the way, if $B$ is compact, then $B/A$ is compact and so the continuous bijective homomorphism from $B/A$ to $C$ is an iso. So $g$ is a quotient map of topological groups and thus open. So $g$ necessarily has to be open.
We can also show that $g$ is closed by this result. The interesting question is: what happens when $f$ is not open?<|endoftext|> TITLE: Inverse Mills ratio for non normal distributions. QUESTION [7 upvotes]: We have the well-known result of the inverse Mills ratio: $$ \mathbb{E}[\,X\,|_{\ X > k} \,] = \mu + \sigma \frac {\phi\big(\tfrac{k-\mu}{\sigma}\big)}{1-\Phi\big(\tfrac{k-\mu}{\sigma}\big)},$$ where $\phi(.)$ and $\Phi(.)$ are the PDF and CDF of the Gaussian distribution respectively. The literature seems to confine this ratio to the Gaussian. Are there results for other, more general classes of distributions? For instance, can one derive it for the general Pearson class of distributions defined as $$f'(x)=-\frac{\left(a_1 x+a_0\right) }{b_2 x^2+b_1 x+b_0}f(x),$$ and express it as a function of parameters? $\textbf{Added:}$ We can derive the inverse Mills ratio as follows. Let $f(.)$ and $F(.)$ be the PDF and CDF, respectively, for a generic distribution with infinite support on the right and $\overline{F}(.)=1-F(.)$. Since $\mathbb{E}(X|_{\ X > k }) = \frac{\int_k^\infty x f(x) \,dx}{\int_k^\infty f(x) \,dx}$, and integrating the numerator by parts: $$\int_k^\infty x f(x) \,dx= k\, \overline{F}(k)+\int_k^\infty \overline{F}(x) \,\mathrm{d} x,$$ we get: $$ \mathbb{E}(X|_{ X > k}) = k+ \frac{\int_k^\infty \overline{F}(x) \,\mathrm{d} x}{\overline{F}(k)}.$$ Now the idea is to generalize from this equation. $\textbf{Note:}$ For a Gaussian $\int_k^\infty \overline{F}(x) \,\mathrm{d} x=\frac{1}{2} (\mu -k) \left(1-\text{erf}\left(\frac{k-\mu }{\sqrt{2} \sigma }\right)\right)+\sigma \phi\left(\frac{k-\mu}{\sigma}\right)$. REPLY [6 votes]: We start with the Pearson differential equation: $$f'(x)=-\frac{\left(a_1 x+a_0\right) }{b_2 x^2+b_1 x+b_0}f(x),$$ Define $g(x)=b_2 x^2+b_1 x+b_0$ (using a trick from Diaconis et al. (1991)). Consider $(f g)'(x)$, also written $(f(x) g(x))'=f'(x) g(x) +f(x) g'(x)$. We have $$(f g)'(x)=(-a_0 + b_1 - a_1 x + 2 b_2 x) f(x)$$ We assume that the distribution either has compact support or decays so that $\lim_{x\rightarrow +\infty} f(x) g(x)=0$. The idea is to use a probability distribution as a test function in a Schwartz distribution so $\int f' p=-\int f p'$. Integrating on both sides: The lhs, by parts, $\int_k^\infty f'(x) g(x) \,dx+\int_k^\infty f(x) g'(x) \, dx = -f(k)g(k)$. The rhs: $(b_1-a_0) \overline{F}(k)+ (2 b_2 -a_1)\int_k^\infty x f(x)\,dx$. Therefore: $$\int_k^\infty x f(x)\,dx=\frac{ \left(-b_0-b_1 k-b_2 k^2\right)}{2 b_2-a_1}f(k)-\frac{(b_1-a_0) }{2 b_2-a_1}\overline{F}(k).$$ We note that $\mathbb{E}(X)=-\frac{(b_1-a_0) }{2 b_2-a_1}$, since, integrating by parts, $\int_{-\infty}^\infty (f g)'(x)=0$, and solving $b_1-a_0+(2 b_2-a_1)\int_{-\infty}^\infty x\, f(x) \, dx=0$. Hence $$\mathbb{E}(X|_{X>k})=\frac{ \left(-b_0-b_1 k-b_2 k^2\right)}{2 b_2-a_1}\frac{f(k)}{\overline{F}(k)}+\mathbb{E}(X)$$<|endoftext|> TITLE: About the definitions of direct product and direct sum of modules. QUESTION [5 upvotes]: An element of a direct product of modules can have infinitely many nonzero coordinates, but an element of a direct sum of modules can have only finitely many nonzero coordinates. What is the point of these definitions? Why are they defined like this?
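To make the distinction concrete for myself, I wrote a toy sketch in Python (the representations and names are my own, purely illustrative): componentwise operations make sense in both constructions, but a functional that genuinely sums over all coordinates only makes sense on the direct sum, where the support is finite.

```python
from collections import defaultdict

# Element of the direct SUM over n in N of Z: finitely many nonzero entries,
# stored as a dict {index: value}.  Element of the direct PRODUCT: an
# arbitrary function index -> value.

def add_sum(x, y):
    """Componentwise addition in the direct sum; the result has finite support."""
    z = defaultdict(int)
    for k, v in list(x.items()) + list(y.items()):
        z[k] += v
    return {k: v for k, v in z.items() if v != 0}

def add_prod(x, y):
    """Componentwise addition in the direct product."""
    return lambda n: x(n) + y(n)

def total(x):
    """'Sum of all coordinates': well defined on the direct sum only."""
    return sum(x.values())

a = {0: 1, 3: -2}
b = {3: 2, 5: 7}
print(add_sum(a, b), total(add_sum(a, b)))   # {0: 1, 5: 7} 8

ones = lambda n: 1               # (1,1,1,...) lives in the product, not the sum
print(add_prod(ones, ones)(10))  # 2; but total(ones) would be a divergent sum
```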
REPLY [2 votes]: Additionally, you will see in category theory (if you go down that road) that direct products and direct sums satisfy different universal mapping properties: https://en.wikipedia.org/wiki/Direct_sum_of_modules As you notice in the Wiki page, the direct sum of modules is a $coproduct$, meaning it satisfies the universal mapping property $opposite$ of that of the direct product of modules; the direct product satisfies the universal mapping property of a $product$ of objects in a category. If you're a bit mixed up by that, don't worry too much. Suffice it to say that constructions in category theory are often identified and delineated by their universal mapping properties, so products are unique up to isomorphism in a given category. Coproducts, too, are unique up to isomorphism by a duality argument. And it is by their universal mapping properties that products (e.g. direct products of modules) and coproducts (e.g. direct sums of modules or disjoint unions mod some equivalence relation in other familiar categories) are most clearly distinguished (for me). For an introduction to category theory (which should be appropriate for you given your experience with modules so far), you can look at Vakil's Algebraic Geometry (using a quick Google search with the word "PDF" at the end) or Steve Awodey's Category Theory text (using the same kind of search). Awodey also gave some lectures at Oregon State on the foundations of category theory which you can find conveniently on YouTube by just looking up his name.<|endoftext|> TITLE: If $H$ is a subgroup of $G$ of index $n$, then $g^{n!} \in H \ \forall g \in G$ QUESTION [5 upvotes]: Let $G$ be a finite group and $H$ a subgroup of $G$ of index $n$, i.e., $[G:H]=n$. Prove that $$\forall g \in G,\; g^{n!} \in H.$$ This is a question I've had in a past exam for Group Theory and I'm really struggling to come up with a solution. There were two hints given in the question, namely that $\# G = n \cdot \# H$ (which is just from Lagrange's Theorem), and we were told to recall that $\#S_{n} = n!$. I've thought about using Cayley's theorem, that every group is isomorphic to a subgroup of $S_{n}$, but can't figure out how to get the desired result. REPLY [5 votes]: Alternatively, one can argue by contradiction. So suppose there is a $g\in G$ with the property $g^{n!}\notin H$. Then none of the elements $g,g^{2},...,g^{n}$ can belong to $H$ (if $g^{k}\in H$ for some $1\leq k\leq n$, then $k$ divides $n!$ and so $g^{n!}=(g^{k})^{n!/k}\in H$). This implies that $$H, Hg, Hg^{2},...,Hg^{n}$$ are distinct right cosets (if $Hg^{i}=Hg^{j}$ with $0\leq i<j\leq n$, then $g^{j-i}\in H$ with $1\leq j-i\leq n$). But this violates the assumption $[G:H]=n$.<|endoftext|> TITLE: How to find$\sum_{i,j,k\in \mathbb{Z}}\binom{n}{i+j}\binom{n}{j+k}\binom{n}{i+k}$ for $n \in \mathbb{N}$ QUESTION [6 upvotes]: Yeah, it's $$\sum_{i,j,k\in \mathbb{Z}}\binom{n}{i+j}\binom{n}{j+k}\binom{n}{i+k}$$ and we are summing over all possible triplets of integers. It appears quite obvious that the result is not infinite. I tried to calculate $$\sum_{i,j \in \mathbb{Z}} \binom{n}{i+j}\sum_{k \in \mathbb{Z}} \binom{n}{i+k} \binom{n}{j+k}$$ but it doesn't get easier for me. My intuition says that we are calculating something over a set of size $3n$, but I couldn't get any idea to work. I'd appreciate some help on this super old exam task.
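For what it's worth, a quick brute-force check in Python (my own sketch: each binomial vanishes unless its lower index lies in $[0,n]$, so the triple sum is genuinely finite and the truncation below loses nothing) suggests the value $2^{3n-1}$:

```python
from math import comb

def S(n):
    """Brute force: the three constraints force |i|, |j|, |k| <= n, so this
    finite triple loop covers every nonzero term of the sum."""
    return sum(comb(n, i + j) * comb(n, j + k) * comb(n, i + k)
               for i in range(-n, n + 1)
               for j in range(-n, n + 1)
               for k in range(-n, n + 1)
               if 0 <= i + j <= n and 0 <= j + k <= n and 0 <= i + k <= n)

for n in range(1, 7):
    print(n, S(n), 2 ** (3 * n - 1))   # the last two columns agree
```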
REPLY [5 votes]: $\newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{equation} \mathbf{\mbox{The Question:}\quad} \sum_{j,k,\ell\ \in\ \mathbb{Z}}\,\, {n \choose j + k}{n \choose k + \ell}{n \choose j + \ell} =\ ?\tag{1} \end{equation} \begin{equation}\mbox{Note that}\ \sum_{j,k,\ell\ \in\ \mathbb{Z}}\,\, {n \choose j + k}{n \choose k + \ell}{n \choose j + \ell}= \sum_{j,k\ \in\ \mathbb{Z}}\,\, {n \choose j + k}\ \overbrace{\sum_{\ell\ \in\ \mathbb{Z}}{n \choose k + \ell} {n \choose j + \ell}}^{\ds{\equiv\ \,\mathcal{I}}}\tag{2} \end{equation} \begin{align} \fbox{$\ds{\ \,\mathcal{I}\ }$} & = \sum_{\ell\ \in\ \mathbb{Z}}{n \choose k + \ell}{n \choose j + \ell} = \sum_{\ell\ \in\ \mathbb{Z}}{n \choose k + \ell}{n \choose n - j - \ell} \\[4mm] & = \sum_{\ell\ \in\ \mathbb{Z}}{n \choose k + \ell}\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{n} \over z^{n - j - \ell + 1}}\,{\dd z \over 2\pi\ic} = \oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{n} \over z^{n - j + 1}} \sum_{\ell\ \in\ \mathbb{Z}}{n \choose k + \ell}z^{\ell}\,{\dd z \over 2\pi\ic} \\[4mm] & = \oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{n} \over z^{n - j + k + 1}} \sum_{\ell\ \in\ \mathbb{Z}}{n \choose \ell}z^{\ell}\,{\dd z \over 2\pi\ic} \\[4mm] & = \oint_{\verts{z} = 1^{-}}{\pars{1 + z}^{n} \over z^{n - j + k + 1}} \pars{1 + z}^{n}\,{\dd z \over 2\pi\ic} = {2n \choose n - j + k} = {2n \choose n + j - k} \\[4mm] & = \fbox{$\ds{\ \oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{2n} \over z^{n + j - k + 1}}\,{\dd z \over 2\pi\ic}\ }$} = \fbox{$\ds{\ \,\mathcal{I}\ }$} \end{align} The original summation $\ds{\pars{1}}$ is reduced to $\pars{~\mbox{see expression}\ \pars{2}~}$: \begin{align} &\color{#f00}{\sum_{j,k,\ell\ \in\ \mathbb{Z}}\,\, {n \choose j + k}{n \choose k + \ell}{n \choose j + \ell}} = \sum_{j,k\ \in\ \mathbb{Z}}{n \choose j + k}\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{2n} \over z^{n + j - k + 1}}\,{\dd z \over 2\pi\ic} \\[4mm] = &\ \sum_{j\ \in\ \mathbb{Z}}\,\,\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{2n} \over z^{n + j + 1}} \sum_{k\ \in\ \mathbb{Z}}{n \choose j + k}z^{k}\,{\dd z \over 2\pi\ic} = \sum_{j\ \in\ \mathbb{Z}}\,\,\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{2n} \over z^{n + 2j + 1}} \sum_{k\ \in\ \mathbb{Z}}{n \choose k}z^{k}\,{\dd z \over 2\pi\ic} \\[4mm] = &\ \sum_{j\ \in\ \mathbb{Z}}\,\,\oint_{\verts{z} = 1^{-}} {\pars{1 + z}^{3n} \over z^{n + 2j + 1}}\,{\dd z \over 2\pi\ic} = \sum_{j\ \in\ \mathbb{Z}}{3n \choose n + 2j} = \sum_{j\ \in\ \mathbb{Z}}{3n \choose n + j}\,{1 + \pars{-1}^{j} \over 2} \\[4mm] = &\ \half\sum_{j\ \in\ \mathbb{Z}}{3n \choose j} + \half\,\pars{-1}^{n}\sum_{j\ \in\ \mathbb{Z}}{3n \choose j}\pars{-1}^{j} = \half\,\pars{1 + 1}^{3n} + \half\,\pars{-1}^{n}\pars{1 - 1}^{3n} \\[4mm] = &\ \color{#f00}{2^{3n - 1} + \half\,\delta_{n0}}
\end{align}<|endoftext|> TITLE: Can the Substitution Rule be Interpreted as a "Change of Measure"? QUESTION [7 upvotes]: I just started learning measure and rigorous integration theory on my own alongside my calculus class and I've noticed that with the substitution rule, you have something that looks like this $$ \int^{b}_{a} f(g(x))g'(x) \, dx=\int^{u(b)}_{u(a)} f(u) \, du $$ where $u(x)=g(x)$. I don't know how to formally connect this with measure theory yet but when I was listening to the lecture in class, my instincts/intuition were screaming at me that this has to connect directly to measure theory. I know the statement of the Radon-Nikodym theorem, though I haven't been able to quite grasp its significance; that would be my first guess, but a Google search only yielded some faint allusions to this connection that were rather unsatisfactory. I just feel like the change of the bounds from $[a,b]$ to $[u(a),u(b)]$ has to be some sort of "change of measure" or something to that effect. I apologize if this is a very elementary or stupid question but I can't get it off my mind and I'm not making much progress trying to formalize it myself. Any ideas or just some hints that will point me in the right direction would be awesome. Thanks in advance. REPLY [2 votes]: Yes, your guess is correct. If $g: [a,b] \to g([a, b])$ is a diffeomorphism and $\lambda$ is the Lebesgue measure on $[a,b]$, then you may consider the push-forward of $\lambda$ through $g$, denoted $g_* \lambda$. The push-forward is defined by $$(g_* \lambda) (A) = \lambda (g^{-1} (A))$$ for every measurable $A$. This can be rewritten using integrals instead of measures as $$\int _A 1 \ \Bbb d (g_* \lambda) = \int _{g^{-1} (A)} 1 \ \Bbb d \lambda ,$$ or equivalently $$\int 1_A \ \Bbb d (g_* \lambda) = \int 1_{g^{-1} (A)} \ \Bbb d \lambda .$$ Notice now that $$1_{g^{-1} (A)} (x) = \begin{cases} 1, & x \in g^{-1}(A) \\ 0, & x \notin g^{-1}(A) \end{cases} = \begin{cases} 1, & g(x) \in A \\ 0, & g(x) \notin A \end{cases} = 1_A (g(x)) = (1_A \circ g) (x), $$ so we may rewrite the above equality as $$\int 1_A \ \Bbb d (g_* \lambda) = \int (1_A \circ g) \ \Bbb d \lambda .$$ Remembering that integrable functions are limits of step functions, the above leads to $$\int f \ \Bbb d (g_* \lambda) = \int (f \circ g) \ \Bbb d \lambda$$ for every integrable function $f$. Finally, since $g$ is a diffeomorphism one can show that $g_* \lambda$ has density $|(g^{-1})'| = 1/|g' \circ g^{-1}|$ with respect to $\lambda$; applying the identity above to $f \, |g' \circ g^{-1}|$ instead of $f$ and using $(g^{-1})' = 1/(g' \circ g^{-1})$, it gets rewritten as $$\int (f \circ g)(x) \ |g' (x)| \ \Bbb d \lambda (x) = \int f \ \Bbb d \lambda .$$ In particular, if $g$ is increasing (notice that $g$ must be strictly monotonic, because $g' \ne 0$ by virtue of $g$ being a diffeomorphism), and using that the Lebesgue integral is just the Riemann integral for compact intervals (i.e. $\int _{[a,b]}$ is just the usual $\int _a ^b$), one obtains $$\int \limits _a ^b f (g (x)) g' (x) \ \Bbb d x = \int \limits _{g(a)} ^{g(b)} f (x) \ \Bbb d x .$$<|endoftext|> TITLE: Normalized vector of Gaussian variables is uniformly distributed on the sphere QUESTION [9 upvotes]: I have seen in various places the following claim: Let $X_1, X_2, \cdots, X_n \sim \mathcal{N}(0, 1)$ be independent. Then, the vector $$ X = \left(\frac{X_1}{Z}, \frac{X_2}{Z}, \cdots, \frac{X_n}{Z}\right) $$ is a uniform random vector on $S^{n-1}$, where $Z = \sqrt{X_1^2 + \cdots + X_n^2}$.
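(Before asking, I ran a quick Monte Carlo sanity check, a Python/numpy sketch of my own: for $n=3$, uniformity on $S^2$ forces each coordinate to be uniform on $[-1,1]$ by Archimedes' hat-box theorem, and the simulation agrees. This is of course evidence, not a proof.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 3))
U = X / np.linalg.norm(X, axis=1, keepdims=True)   # normalized Gaussian vectors

# For the uniform distribution on S^2 each coordinate is uniform on [-1, 1]
# (Archimedes), so the empirical CDF of U[:, 0] should be close to (t + 1)/2.
for t in (-0.5, 0.0, 0.5):
    print(t, (U[:, 0] <= t).mean(), (t + 1) / 2)
```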
Many sources claimed this fact follows easily from the orthogonal-invariance of the normal distribution, but somehow I couldn't construct a rigorous proof. (One such "sketch" can be found here.) How to prove this rigorously? Edit: It has been brought to my attention that this question was already asked before here. However, I find the answer there to be incomplete: it shows that $X$ is orthogonally invariant, but does not explicitly explain why that implies it is uniform. Therefore I think there is value in keeping this copy as well, as I guess we cannot transfer the answer. In my question, I explicitly asked for a complete rigorous proof, and I find the answer there to be incomplete. REPLY [15 votes]: The two-word answer is "polar coordinates". In more detail, let $f:S^{n-1}\to\Bbb R$ be a continuous function. Then, writing $z=\sqrt{x_1^2+\cdots+x_n^2}$, $$ \eqalign{ \Bbb E[f(X)] &=\int_{\Bbb R^n}f(x_1/z,\ldots,x_n/z)(2\pi)^{-n/2}e^{-z^2/2}\,dx_1\cdots dx_n\cr &=(2\pi)^{-n/2}\int_0^\infty\left[\int_{S^{n-1}} f(u)\,\sigma_{n-1}(du)\right]e^{-r^2/2}r^{n-1}\,dr\cr &=c_n\int_{S^{n-1}} f(u)\,\sigma_{n-1}(du).\cr } $$ Here $\sigma_{n-1}$ is the "surface area" measure on the sphere $S^{n-1}$ and $$ c_n=(2\pi)^{-n/2}\int_0^\infty e^{-r^2/2}r^{n-1}\,dr = \pi^{-n/2}2^{-1}\Gamma(n/2). $$ (Thus $2\pi^{n/2}/\Gamma(n/2)$ is the surface area of $S^{n-1}$.)<|endoftext|> TITLE: Does there exist a multiplicative $f:\mathbb{Q}^+\to\mathbb{Q}^+$ such that $f\neq x\mapsto x^a$ for all $a$? QUESTION [6 upvotes]: If we consider the functional equation: $f:\mathbb{Q}^+\to\mathbb{R}$ such that $$ f(xy)=f(x)f(y) $$ for all $x,y\in\mathbb{Q}^+$, I think I have constructed a solution which is not of the form $x\mapsto x^a$. Namely, if we consider $V=\mathbb{R}$ as a $\mathbb{Q}$-vector space, we can find a base $B$ of $V$ such that $\log(2),\log(3)\in B$. We set $f(x)=\exp(\lambda(\log(2),\log(x)))$, where $\lambda(b,x)$ for $b\in B$ and $x\in \mathbb{R}$ denotes the coefficient of $b$ in the base $B$ representation of $x$. First of all, is this construction correct? Now, what if we impose the codomain to be $\mathbb{Q}^+$, i.e. $f:\mathbb{Q}^+\to\mathbb{Q}^+$? Can we still find non-ordinary solutions? Intuitively, I would say yes, but the above construction fails in this case. How to proceed? REPLY [10 votes]: Let $f \ne 0$ satisfy the given functional equation. Then there exists $x$ with $f(x) \ne 0$, and since $f(x \cdot 1) = f(x) f(1)$ we get $f(1) = 1$. Next, taking $y = \frac 1 x$ we get $1 = f(1) = f(x \cdot \frac 1 x) = f(x) f(\frac 1 x)$, whence it follows that $f(\frac 1 x) = \frac 1 {f(x)}$. This shows that if $\frac m n$ is a fraction written in lowest terms, with $m,n \in \Bbb N$, then $f (\frac m n) = f(m) f( \frac 1 n ) = \frac {f(m)} {f(n)}$, therefore it is enough to give $f$ on $\Bbb N$. Even more, since every $n \in \Bbb N$ can uniquely be written as $p_1 ^{a_1} \dots p_{k_n} ^{a_{k_n}}$ with all the $p_i$ primes, we have that $f(n) = f(p_1 ^{a_1}) \dots f(p_{k_n} ^{a_{k_n}}) = f(p_1) ^{a_1} \dots f(p_{k_n}) ^{a_{k_n}}$. This means that, if you give the action of $f$ on the prime numbers, the fact of being multiplicative allows you to extend it to $\Bbb Q ^+$ (the values of $f$ on primes will be the building blocks of $f$). Finally, and this is the crux of the matter, you are free to define $f$ on the prime numbers in any way you like; there is no constraint here. In particular, you may very well choose $f(p) = p^p$ for every prime $p$, which clearly is not of the form $p^a$ for some fixed $a$.
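Here is a minimal sketch in Python of this construction (the helper names are mine): the values on the primes are prescribed, here $f(p)=p^p$, and the extension to $\Bbb Q^+$ is forced exactly as described above, so multiplicativity holds by construction.

```python
from fractions import Fraction

def factor(n):
    """Trial-division factorization of a positive integer (fine for a demo)."""
    out, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def f(q):
    """Multiplicative f: Q+ -> Q+ determined by f(p) = p**p on primes.
    The point: ANY assignment of values on the primes would extend this way."""
    q = Fraction(q)
    val = Fraction(1)
    for p, e in factor(q.numerator).items():
        val *= Fraction(p) ** (p * e)
    for p, e in factor(q.denominator).items():
        val /= Fraction(p) ** (p * e)
    return val

assert f(6) == f(2) * f(3) == 108
assert f(Fraction(3, 2)) == f(3) / f(2) == Fraction(27, 4)
print(f(2), f(3), f(6), f(Fraction(3, 2)))   # 4 27 108 27/4
```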
To conclude: you are right, not all $f$ like in your question need be of the form $x^a$. Concerning your own example, it looks to be a complicated way of saying the following: if $x = \frac m n$, with $\frac m n$ in lowest terms, and if $m = 2^a m'$ and $n = 2^b n'$ with $2 \nmid m'$ and $2 \nmid n'$, then you define $f(x) = 2^{a-b}$. You are right, this is yet another multiplicative function as desired (and it is equal to $\frac 1 {\| x \| _2}$, where $\| x \| _2$ is the "$2$-adic norm" of $x$).<|endoftext|> TITLE: Clarification on "Every polynomial function of degree $\ge1$ has at least $1$ zero in the complex number system." QUESTION [10 upvotes]: The Fundamental Theorem of Algebra says "Every polynomial function of degree $\ge1$ has at least $1$ zero in the complex number system." My question is, where do the rest of the zeroes of the polynomial lie? Can it happen that they do not belong to the complex number system? Would we then have to pass to a number system beyond the complex numbers? REPLY [15 votes]: This is a statement of the Fundamental Theorem of Algebra that is completely correct, but that in a way obscures what is actually going on if one doesn't think about what it actually means... Consider a complex polynomial $p(z) = a_0 + a_1z + ... + a_nz^n$. By the fundamental theorem of algebra, this has at least 1 zero. It is a well-known result that when a polynomial has a zero at $r$ we can write: $$p(z) = (z-r)(b_0 + ... + b_{n-1}z^{n-1}) = (z-r)(q(z))$$ for some polynomial $q(z)$ of degree $n-1$. By the fundamental theorem of algebra we have that $q(z)$ (as long as it is not constant) has at least 1 zero, so we factor that zero out and repeat until the not-yet-factored part (in this case, $q(z)$) is constant. An induction argument can make this far more formal. The confusion ensues because the zero of $q(z)$ can be the same as the zero of $p(z)$. So we could have that a degree $n$ polynomial $p(z)$ has only one (distinct) zero. For example $p(z) = (z+1)^n$. This polynomial only has 1 distinct zero, but it has multiplicity $n$. But we can use the argument I had outlined above to phrase the fundamental theorem of algebra in a different way (or I've heard some people claim this is a corollary): A nonconstant polynomial of degree $n$ has, counting multiplicity, exactly $n$ zeros.<|endoftext|> TITLE: Are there any natural proofs of irrationality using the decimal characterization? QUESTION [6 upvotes]: Mathematicians typically define rational number to mean quotient of two integers. It is not hard to show that a number is rational by that definition if and only if its decimal expansion terminates or repeats. Let us call that the “decimal characterization” of rationality. The proof of that characterization of rational numbers is obviously just as applicable to other bases as it is to base $10$. Question: Are there any proofs of the irrationality of $\pi$ or $e$ or $\sqrt 2$ or $\log_2 3$ or any other naturally occurring number that use the decimal characterization, showing directly that the decimal expansion does not terminate or repeat, and that are at least as simple as any proof that uses the characterization that is conventionally taken to be the definition? REPLY [5 votes]: I just learnt this; it is not about the decimal expansion, but I think it is clearly related to your question: a simple proof that $e$ is irrational uses its factorial expansion $2+\sum_{n=2}^\infty \frac{1}{n!}$.
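(As a concrete warm-up, here is a short exact computation in Python with Fraction, my own sketch: it runs the greedy digit algorithm stated below on a long partial sum standing in for $e-1$, and returns the digit $1$ in every place.)

```python
from fractions import Fraction
from math import factorial, floor

# Exact rational stand-in for e - 1: the partial sum up to 1/30!.  Its first
# factorial digits agree with those of e - 1 well past the range printed here.
x = sum(Fraction(1, factorial(n)) for n in range(1, 31))

digits = []
for n in range(1, 21):
    a = floor(x * factorial(n))      # a_n = floor(x_n * n!)
    digits.append(a)
    x -= Fraction(a, factorial(n))   # x_{n+1} = x_n - a_n / n!
print(digits)                        # [1, 1, 1, ..., 1]: the digit 1 in every place
```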
The theorem is that $$\sum_{n=1}^\infty \frac{a_n}{n!}, \qquad a_1 \in \mathbb{Z},\quad a_n \in \{0, \ldots, n-1\}$$ is irrational if and only if $a_n >0$ infinitely often and $a_n < n-1$ infinitely often. The proof is as follows: any real number can be written as $$\sum_{n=1}^\infty \frac{a_n}{n!}, \qquad a_1 \in \mathbb{Z},\quad a_n \in \{0, \ldots, n-1\}$$ with the same algorithm as for the decimal expansion: $$a_n = \lfloor x_n n!\rfloor, \qquad x_{n+1} = x_n- \frac{a_n}{n!}$$ With this algorithm, every rational number has a finite expansion, so if $x$ has a finite expansion then the algorithm finds it. More generally, if applied to $x = \sum_{n=1}^\infty \frac{c_n}{n!}, c_1\in \mathbb{Z},c_n \in \{0, \ldots, n-1\}$ where $c_n < n-1$ infinitely many times, then the algorithm recovers $a_n = c_n$. When applied to $e-1$, the algorithm finds $a_n = 1$, thus $e-1$ doesn't have any such finite expansion, hence it is irrational. The conclusion is that the factorial expansion (more generally the Cantor series expansion, if someone knows a better reference?) is much easier than the base $N$ expansion for proving the irrationality of a real number.<|endoftext|> TITLE: Is the smallest ellipsoid enclosing a convex set unique? QUESTION [5 upvotes]: Let $S \subset \mathbb{R}^n$ be a convex set. Assume that it is bounded. We want to find an ellipsoid $E$ of smallest volume such that $S \subset E$. Is $E$ unique? REPLY [4 votes]: My claim is that the answer is affirmative by the following Lemma. If two different ellipsoids $E_1,E_2$ with the same (hyper-)volume intersect, there is some ellipsoid $E_3$ enclosing $E_1\cap E_2$ with the property that $V(E_3)< V(E_1)$. Sketch of the proof: I will deal just with the 3D case. $E_1\cap E_2$ is described by something like: $$ \left \{ \begin{array}{rcl}\frac{(x-x_0)^2}{a^2}+\frac{(y-y_0)^2}{b^2}+\frac{(z-z_0)^2}{c^2}&\leq& 1\\\frac{x^2}{A^2}+\frac{y^2}{B^2}+\frac{z^2}{C^2}&\leq& 1\end{array} \right.\tag{1}$$ where $abc=ABC$. Every point $(x,y,z)$ fulfilling the previous inequalities also fulfills $$ \frac{(x-x_0)^2}{a^2}+\frac{x^2}{A^2}+\frac{(y-y_0)^2}{b^2}+\frac{y^2}{B^2}+\frac{(z-z_0)^2}{c^2}+\frac{z^2}{C^2}\leq 2, \tag{2} $$ which is the equation of an ellipsoid whose volume is proportional to the product: $$ \frac{1}{\sqrt{\frac{1}{2a^2}+\frac{1}{2A^2}}}\cdot \frac{1}{\sqrt{\frac{1}{2b^2}+\frac{1}{2B^2}}}\cdot\frac{1}{\sqrt{\frac{1}{2c^2}+\frac{1}{2C^2}}} \tag{3}$$ Through the Cauchy-Schwarz inequality or other means, it is not difficult to show that under the assumption $abc=ABC$, the previous product is $\leq abc$, with equality achieved only at $(a,b,c)=(A,B,C)$, proving the Lemma. Moreover, the approach is dimension-independent. Back to the original question: assuming that two different ellipsoids $E_1, E_2$ with the same minimal volume enclose $S$, the ellipsoid $E_3$ found through the above "averaging" procedure also encloses $S$, but $V(E_3)<V(E_1)$, a contradiction.<|endoftext|> TITLE: If $\lim_{n\to\infty} x_n =0$, then $\{f(x_n)\}$ is Cauchy? QUESTION [6 upvotes]: For each positive integer $n$, let $x_n$ be a real number in $\left(0,\frac{1}{n}\right)$. Is the following true? If $f$ is a continuous real-valued function defined on $(0,1)$, then $\{f(x_n)\}_{n=1}^\infty$ is a Cauchy sequence. I can't see why this is wrong. My quick thought was that, if $x_n$ is Cauchy and $f$ is continuous, then $f(x_n)$ is Cauchy as well. I am very sure this is true for a compact domain. Is something going wrong at $x=0$? Could someone give a counterexample?
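(For experimentation, here is a small Python sketch with my own choice $x_n = 1/(n+1) \in (0, 1/n)$, trying two natural candidates for $f$; it is only numerical evidence, not an argument.)

```python
import math

# x_n = 1/(n+1) lies in (0, 1/n).  For f(x) = 1/x the images are n + 1, which
# is certainly not Cauchy; for f(x) = sin(1/x) they oscillate without settling.
xs = [1 / (n + 1) for n in range(1, 16)]
print([1 / x for x in xs])                        # 2.0, 3.0, ..., 16.0
print([round(math.sin(1 / x), 3) for x in xs])    # 0.909, 0.141, -0.757, ...
```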
REPLY [2 votes]: This would be true if $f$ is uniformly continuous (basically, a function that maps Cauchy sequences into Cauchy sequences). You're right to be sure if the domain is compact, because a continuous real function on a compact set is uniformly continuous. Note that $(x_n)$ converges to $0$ by the squeeze theorem, so it is Cauchy. Now you just need a function that's not uniformly continuous on $(0,1)$, and one that has infinite limit at $0$ is good enough. Another example is $f(x)=\sin(1/x)$ and $x_n=1/(2n)$, because $$ \lim_{n\to\infty}\sin(2n) $$ does not exist (not very easy to show, though, but I'm sure you can find it on the site).<|endoftext|> TITLE: Show that $a_n = [n \sqrt{2}]$ contains an infinite number of integer powers of $2$ QUESTION [10 upvotes]: Show that the sequence $\{a_n\}_{n \geq 1}$ defined by $a_n = [n \sqrt{2}]$ contains an infinite number of integer powers of $2$. ($[x]$ is the integer part of $x$.) I tried listing out the first few values, but I didn't see a pattern: $1,2,4,5,7,8,9,11,\ldots.$ Should we do a proof by contradiction? REPLY [6 votes]: Let $1/\sqrt{2} = \sum_{j=1}^\infty d_j 2^{-j}$ be the base-2 expansion of $1/\sqrt{2}$, where each $d_j$ is $0$ or $1$. Since $1/\sqrt{2}$ is irrational, there are infinitely many $0$'s and infinitely many $1$'s. Let $x_N = \sum_{j=1}^N d_j 2^{-j}$ and $n_N = 1 + 2^N x_N = 1 + \sum_{j=1}^N d_j 2^{N-j}$, which is a positive integer. If $d_{N+1} = 1$ we have $$1/\sqrt{2} - 2^{-N-1} < x_N + 2^{-N-1} < 1/\sqrt{2}$$ and then $$2^N < \sqrt{2}\; n_N = \sqrt{2} (x_N + 1/2^N)2^N < 2^N+\sqrt{2}/2 < 2^N + 1$$ so that $2^N = \lfloor \sqrt{2} n_N \rfloor$.<|endoftext|> TITLE: How many elliptic curves have complex multiplication? QUESTION [21 upvotes]: Let $K$ be a number field. Suppose we order elliptic curves over $K$ by naive height. What is the natural density of elliptic curves without complex multiplication? More generally, suppose we order $g$-dimensional abelian varieties over $K$ by Faltings height. What is the natural density of such varieties without complex multiplication? REPLY [7 votes]: The natural density of elliptic curves with complex multiplication is 0 (say we order by the coefficients $A$, $B$ in $y^2 = x^3 + Ax + B$). This follows by Proposition 5 of http://arxiv.org/abs/0804.2166 combined with Hilbert irreducibility: since the family of elliptic curves given above has surjective mod-$\ell$ Galois representation for all $\ell$, it follows from Hilbert irreducibility that a density 1 subset of its members have surjective mod-$\ell$ image. Then, by Proposition 5, these members have trivial endomorphism ring. This same proof works for rational families of arbitrary genus with surjective mod-$\ell$ Galois representation. Note that the assumption on rationality is important to apply Hilbert irreducibility, so the same proof will not go through for the moduli space of Abelian varieties in genus more than 7, as the moduli space is not rational (or even unirational).<|endoftext|> TITLE: Confusion about Vakil's proposition 17.4.5, on finite morphisms between curves QUESTION [6 upvotes]: Proposition 17.4.5 in Vakil's Foundations of Algebraic Geometry states: Suppose that $\pi: C \rightarrow C'$ is a finite morphism, where $C$ is a (pure dimension $1$) curve with no embedded points, and $C'$ a regular curve. Then $\pi_*\mathcal{O}_C$ is locally free of finite rank.
The proof boils down to showing that if $C'$ is a regular integral affine curve $\operatorname{Spec}(A')$ and $C = \operatorname{Spec}(A)$, then for any maximal ideal $\mathfrak{m}$ of $A'$, $A_{\mathfrak{m}}$ is a free $A'_{\mathfrak{m}}$-module. $A'_{\mathfrak{m}}$ is a DVR, so we may take a uniformiser $t$ and it suffices to prove the case where $t \in A'$. Then since $A_{\mathfrak{m}}$ is a finitely generated $A'_{\mathfrak{m}}$-module, it is a direct sum of a free module and modules of the form $A'_{\mathfrak{m}}/(t^n)$ for some $n$. But if there were any components of the latter form, then the image of $t$ in $A$ would be a zero divisor. The part that I don't understand is what follows: he says that, since any zero divisor is contained in some associated prime, there is some associated point of $C$ in $\pi^{-1}([\mathfrak{m}])$, which contradicts the fact that $C$ has no embedded points. Firstly, I don't understand why the fact that we know the image of $t$ is contained in an associated prime of $C$ means that we know that there is an associated point in the fibre. Is it the case that any prime containing the image of $t$ maps to $[\mathfrak{m}]$? Certainly any such prime maps to a prime containing $t$, but is $\mathfrak{m}$ the only one? Secondly, why couldn't a generic point of $C$ map to $[\mathfrak{m}]$? I know that continuity would imply that everything in that component would then also map to $[\mathfrak{m}]$, but I'm not sure why this couldn't happen. I thought it might have something to do with the fact that finite morphisms have finite fibres, so this component must be finite, but there also exist curves with finitely many points so I don't think this leads anywhere. REPLY [2 votes]: I've now understood what was confusing me, and will post this answer for future readers who may have the same problem: A regular scheme is, by definition, locally Noetherian! The confusion arose from the fact that we only define regular points for locally Noetherian schemes, and so $C'$ is in fact locally Noetherian, which I didn't realise at the time. Given this, I can answer my questions as follows: Firstly, at the same time as assuming $t \in A'$, we may assume that $\mathfrak{m} = (t)$, since $\mathfrak{m}$ has finitely many generators, and considering each as an element of $A'_{\mathfrak{m}}$ we see that each is of the form $t \cdot f/g$ for some $f/g \in A'_{\mathfrak{m}}$, and so by considering the distinguished affine open piece corresponding to the product of all such $g$ (of which there are finitely many) we have that $(t) = \mathfrak{m}$. It is now clear that any prime of $A$ containing the image of $t$ pulls back to a prime ideal containing $(t)$, and so equaling $\mathfrak{m}$ by maximality. Secondly, any non-empty fibre of an integral morphism has dimension $0$ (exercise $11.1.E$ in Vakil) so, under finite (even integral) morphisms, only closed points can map to closed points (if a non-closed point mapped to a closed point, so would everything in its closure by continuity, and then the fibre would have dimension strictly positive).<|endoftext|> TITLE: Relation between semiring of sets and semiring in abstract algebra. QUESTION [11 upvotes]: Let $\mathcal R$ be a family of subsets of $\Omega$ that is closed under finite union and relative complement. We say that $\mathcal R$ is a ring of sets in $\Omega$. Symbolically, for any $A,B\in\mathcal R$ we have $$\begin{align} &1.)\quad A\cup B \in \mathcal R\\ &2.)\quad A\backslash B \in \mathcal R.
\end{align}$$ It follows that $\mathcal R$ is also closed under symmetric difference $\Delta$ and finite intersection $\cap$, and that $(\mathcal R,\Delta,\cap)$ is a ring in the sense of abstract algebra. However, a semiring of sets is defined as a family $\mathcal S$ of subsets of $\Omega$ such that for any $A,B\in\mathcal S$ $$\begin{align} &1.)\quad \emptyset \in\mathcal S \\ &2.)\quad A\cap B \in \mathcal S\\ &3.)\quad A\backslash B = \bigcup_{i=1}^n A_i\quad\text{for some disjoint}\ A_i\in\mathcal S. \end{align}$$ What is the relation between a semiring of sets and a semiring in the abstract algebra sense? $\mathcal S$ is not even closed under $\Delta$, so we cannot think of it as a semiring $(\mathcal S,\Delta,\cap)$, where $(\mathcal S,\Delta)$ is a commutative monoid. I tagged measure theory because this structure is commonly found in an introductory chapter on construction of Lebesgue measure on $\Bbb R^n$. $\mathcal S$ is the family of $n$-dimensional intervals of the form $[a,b)$. REPLY [5 votes]: There is no connection. A semiring is a weakening of "ring" and "semiring of sets" is a weakening of "ring of sets", and that is all.<|endoftext|> TITLE: Fundamental group of the plane minus a Cantor set QUESTION [10 upvotes]: If $C\subseteq\mathbb R$ is the Cantor set, what is the rank of the (necessarily free) fundamental group $\pi_1(\mathbb R^2 - C\times\{0\})$? Since the complement of the Cantor set is open, and an open set in $\mathbb R$ is always a union of at most countably many disjoint open intervals, the number of "gaps" between the points of the Cantor set (through which we may braid loops in the plane) is countable. This leads me to believe that there is a countable set of generators for the fundamental group in question, if indeed every loop may be deformed to such a well-behaved finite braiding. However, I don't know how to proceed. REPLY [3 votes]: The space $X=\mathbb{R}^2\setminus (C\times\{0\})$ is a smooth manifold and thus is triangulable, with countably many cells. It follows that $\pi_1(X)$ is countable, and so must have countable rank. It is clear the rank cannot be finite (since, for instance, $\pi_1(X)$ has a surjection to a free group of rank $n$ for each $n$, by mapping $X$ to a plane with $n$ points removed), so it must be countably infinite. For an alternate way to prove $\pi_1(X)$ is countable, note that we can write $C$ as an intersection $\bigcap C_n$ where each $C_n$ is a finite union of closed intervals. We can thus write $X$ as a union of open subsets $U_n=\mathbb{R}^2\setminus(C_n\times \{0\})$ such that $\pi_1(U_n)$ is free of finite rank for each $n$. By compactness, every loop in $X$ is contained in some $U_n$, and so $\pi_1(X)$ must be countable since each $\pi_1(U_n)$ is countable.<|endoftext|> TITLE: Why this abuse of notation correctly solves the heat equation QUESTION [6 upvotes]: Here's a stupid method I observed to solve the heat equation in $\mathbb R^d$, \begin{align*} \partial_tu=\Delta u,\quad u|_{t=0}=u_0. \end{align*} Pretend that $\Delta$ is a constant so this just becomes a linear ODE in $t$. The solution then is obviously \begin{align*} u=e^{t\Delta}u_0. \end{align*} Interpreting this as a Fourier multiplier and using the well-known Fourier transform of the Gaussian, we get \begin{align*} u(t,x)=e^{t\Delta}u_0(x)=(e^{-t|\cdot|^2})^\vee*u_0(x)=\frac{1}{(4\pi t)^{d/2}}e^{-|\cdot|^2/4t}*u_0(x) \end{align*} which is the right answer!
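(A quick numerical sanity check, a 1-D Python/numpy sketch of my own on a large periodic box standing in for $\mathbb R$, confirms that the multiplier and the kernel convolution give the same evolution, up to periodization and roundoff error.)

```python
import numpy as np

# Evolve u0 with the Fourier multiplier exp(-t k^2) and, separately, convolve
# u0 with the heat kernel (4 pi t)^(-1/2) exp(-x^2/(4t)); the two should agree.
L, N, t = 40.0, 2048, 0.7
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
u0 = np.exp(-x ** 2) * np.cos(3 * x)          # some nice, rapidly decaying datum

u_mult = np.fft.ifft(np.exp(-t * k ** 2) * np.fft.fft(u0)).real

kernel = (4 * np.pi * t) ** -0.5 * np.exp(-x ** 2 / (4 * t))
u_conv = np.fft.ifft(np.fft.fft(u0) * np.fft.fft(np.fft.ifftshift(kernel))).real * (L / N)

print(np.max(np.abs(u_mult - u_conv)))        # tiny: the two agree
```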
However, this is all absurd since $\Delta$ is not a constant and the exponential would usually multiply the initial condition, not operate on it as it does here. I'm curious if anyone can give a rigorous explanation for why this abuse of notation turns out to work. REPLY [2 votes]: Abstractly, time evolution is an exponential when the system does not depend on time. And that's what you have here. If you know the state $f$ of the system (a function of $x$) at some time $t$, then the state $E(t)f$ observed $t$ seconds later is a function only of the system (which does not depend on $t$ in this discussion) and depends on time $t$. Therefore, if you start with state $f$, then evolve through $t$ seconds, and then evolve this new state $E(t)f$ through $s$ seconds, you must obtain the same as if you were to start with $f$ and evolve through $t+s$ seconds. In other words, $$ E(s)(E(t)f) = E(t+s)f \mbox{ or } E(s)E(t)=E(s+t). $$ This applies to any physical system whose characteristics do not vary with time, assuming, of course, that the rules of the Universe don't change along the way. And $E(0)=I$ because you're evolving through no time. This is a very abstract principle first described in operator form by Oliver Heaviside in the late 1800s. He was really the first to describe things in terms of operators, and to play with them in an algebraic way. He discovered this exponential property, which eventually morphed into what we now call the Laplace transform. Heaviside's operator methods included the $D$ operator method in ODEs, with annihilator, partial fractions, and the cover-up method for partial fractions. These were devised by Heaviside to solve electric circuit equations. The exponential solution operator was a brilliant observation by Heaviside, but it definitely confused people at the time. When you add stability in some norm sense, which is $\lim_{t\downarrow 0}E(t)f=f$, then there is a way to interpret this in terms of an exponential $e^{tA}$ where $A$ is the "generator" of the semigroup. The generator is computed through a derivative: $$ Af = \lim_{t\downarrow 0}\frac{1}{t}\{E(t)-I\}f. $$ In your case, you already know the generator $$ \lim_{t\downarrow 0}\frac{u(x,t)-u(x,0)}{t}=u_{t}(x,0)=\Delta u $$ So the generator is $A=\Delta$. You found a way to represent this exponential operator $E$ with the Fourier transform. Adding the stability condition at $t=0$ makes the semigroup unique. You found it, and it is unique, and it is stable at $t=0$ in every $L^p$ for $1 \le p < \infty$.<|endoftext|> TITLE: How does a complex algebraic variety know about its analytic topology? QUESTION [8 upvotes]: This question has two parts. The first is a reference request regarding a result I assume is standard, and the second is a soft question asking for philosophy and intuition about an issue the first question raises for me. Apologies in advance that it might be hard to answer completely. Nonetheless I expect and hope that both the question and any answers will be interesting and useful to denizens of the site. Part 1: the analytic topology is an isomorphism invariant, right? Let $V,W$ be affine algebraic sets (i.e. vanishing sets of ideals of polynomials) embedded in $\mathbb{C}^m,\mathbb{C}^n$ respectively, and suppose there exists an isomorphism $f:V\rightarrow W$ of algebraic varieties. If $V,W$ are equipped with their subspace topologies as subspaces of the Euclidean spaces $\mathbb{C}^m$ and $\mathbb{C}^n$, is $f$ necessarily a homeomorphism?
I assume that the answer is yes and that this is a standard fact, since it seems to me it should be necessary in order to define the analytic topology on a general complex analytic variety. (If the answer is not "yes", then "the analytic topology" is meaningless because it would depend on choices of coordinates.) On the other hand, it is not obvious to me, because the analytic/Euclidean topology on an algebraic subset of $\mathbb{C}^n$ is so much finer than its Zariski topology. How does knowing that $f$ is an algebraic isomorphism allow us to control all the extra data in the analytic topology? This last question suggests the broader inquiry of part 2 to me - Part 2: how does the algebraic variety know about its analytic topology? This is the soft question. If two complex algebraic sets can't be isomorphic as varieties without being homeomorphic as subsets of Euclidean space, this suggests that their structure as algebraic varieties "knows something" about this finer topology. In fact, the variety must in some sense "know" the entire analytic topology, as it can be recovered by embedding into affine space, endowing the latter with the Euclidean topology, and taking the subspace topology. Given this - Where in the data defining the variety (the Zariski topology + the structure sheaf) does the data defining the analytic topology hide? In other words, it must be that given a subset $U$ of a complex algebraic variety $V$, one can test whether $U$ is open for the analytic topology using only the data contained in $V$'s Zariski topology and structure sheaf - What is the test? Addendum 7/21: Hoot's comment and KReiser's answer have clarified for me an aspect of Part 2 that I would like to make explicit. As I insinuated at the beginning of part 2, although given a complex algebraic variety (i.e. scheme of finite type over $\mathbb{C}$), one can recover the analytic topology by embedding its affines individually in complex affine space and taking the subspace topology, I am somehow unsatisfied by this as an explanation of how the variety knows about its analytic topology. Hoot's comment and KReiser's answer clarify for me why I am unsatisfied: what this is doing, philosophically, is using the sections of the structure sheaf to "carry" the analytic topology on $\mathbb{C}^n$ back to the variety, using the fact that the sections can be thought of as functions to $\mathbb{C}$. The reason I wasn't satisfied with this (and hence asked the part 2 question) is that the analytic topology on $\mathbb{C}^n$ is data that lies outside the data that defines the variety. Hence, I would like to know if there is a way to test $U\subset V$ for analytic openness that can query any info contained in the data defining $V$ (its Zariski topology and structure sheaf, i.e. the $\mathbb{C}$-algebra structure of its rings of sections and the restriction maps) but never queries whether or not a subset of any $\mathbb{C}^n$ is analytically open. Is there a test as above that does not consult the a-priori-known analytic topology on $\mathbb{C}$? Another way to sharpen the question: KReiser's answer shows that the reason that the analytic topology is an isomorphism invariant is essentially that polynomials are continuous. (KReiser noted even analyticity, so that an algebraic isomorphism is also an analytic isomorphism, but the only feature needed to conclude it's a homeomorphism is continuity.)
Now any topology on $\mathbb{C}$ induces one on $\mathbb{C}^n$, and we can ask with respect to any such topology whether polynomial functions $\mathbb{C}^n\rightarrow\mathbb{C}^m$ are continuous. If they are, the exact same proof shows that the given topology is an isomorphism invariant of affine complex algebraic varieties, and thus can be imposed in a well-defined way on an arbitrary variety. The Zariski and analytic topologies are two such topologies. The latter is a refinement of the former. Can we say anything about the family of topologies described in the last paragraph? How big is it? What do they have in common? Can any of them be recovered from the data defining the variety, without a priori knowledge of the topology? [Obv. the Zariski topology can, because it is part of the data defining the variety.] REPLY [5 votes]: Answer to part 1: If $f$ is an isomorphism of algebraic varieties, it must be a regular map (which is therefore analytic, as polynomials and their appropriately-taken quotients are analytic). Similarly, its two-sided inverse must also be regular and thus analytic, and so we have that $f$ is also an isomorphism of analytic varieties. Therefore it must be a homeomorphism. Attempt at part 2: You've already said the basic response that would jump into my head first - you can embed a variety into $\mathbb{C}^n$ and give it the subspace topology from the usual analytic topology. In this strategy, the information you're using comes from a choice of generators for $\mathcal{O}_X(X)$ - this determines the embedding into $\mathbb{A}^n_\mathbb{C}$ which then gets you access to the complex topology after taking $\mathbb{C}$-points. Here's an attempt at a simpler way to test for $U\subset V$ analytically open: if you can build $U$ as an appropriate union and intersection of $f^{-1}(W)$ for some $f$ a section of the Zariski structure sheaf and $W\subseteq \mathbb{C}$ analytically open. This idea could probably use some fleshing out (I'm not completely sure it's correct), but the theme is that regular functions are already equipped to work in the analytic world (as we saw in the answer to part 1).<|endoftext|> TITLE: Is my proof ok? If $\sum u_n$ diverges then $\sum \frac {u_n} {u_1 + u_2 + \dots + u_n}$ also diverges QUESTION [5 upvotes]: The question is: If $\sum u_n$ is a divergent series of positive real numbers and $s_n = u_1 + u_2 + \dots + u_n$, prove that the series $\sum \frac {u_n} {s_n}$ is divergent. I tried my best. But I failed. Please help me by giving me a hint. Thank you in advance. My solution: Let $\{t_n\}$ be the sequence of partial sums of the series $\sum \frac {u_n} {u_1 +u_2 + \dots +u_n}$. Let $s_n=u_1 +u_2+ \dots +u_n$. If we can prove that $\{t_n\}$ is not Cauchy then our purpose will be served. For this we have to show that $\exists \epsilon>0$ such that $\forall n \in \mathbb N, \exists p \in \mathbb N$ for which $t_{n+p}-t_n \geq \epsilon$. Now since $\sum u_n$ is divergent, then so is $\{s_n\}$. Hence $\forall n \in \mathbb N, \exists q \in \mathbb N$ such that $s_{n+q}>2s_n$. Now, $t_{n+q}-t_n= \frac {u_{n+1}} {s_{n+1}} + \frac {u_{n+2}} {s_{n+2}} + \dots + \frac {u_{n+q}} {s_{n+q}} > \frac {u_{n+1} +u_{n+2}+\dots +u_{n+q}} {s_n} > \frac {s_{n+q} -s_n} {s_n} >1$ which proves that $\{t_n\}$ is not a Cauchy sequence, hence the series $\sum \frac {u_n} {s_n}$ is divergent.
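(To convince myself the statement itself is true, I also ran a quick numerical experiment, a Python sketch of my own with $u_n = 1/n$, so that $\sum u_n$ diverges slowly and $s_n \sim \log n$; the partial sums $t_N$ keep growing, roughly like $\log\log N$.)

```python
from math import log

# u_n = 1/n: sum u_n diverges and s_n ~ log n, so the claim predicts that
# t_N = sum_{n<=N} u_n/s_n still diverges; heuristically t_N ~ log s_N ~ log log N.
s = t = 0.0
for n in range(1, 1_000_001):
    u = 1.0 / n
    s += u
    t += u / s
    if n in (10, 100, 10_000, 1_000_000):
        print(n, round(t, 3), round(log(log(n)), 3))  # both columns keep growing
```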
REPLY [3 votes]: Your idea is nice; unfortunately its very last step has a crucial mistake: the largest of the numbers $s_{n+1}, \dots, s_{n+q}$ is $s_{n+q}$ (because $\{s_n\}$ increases), so the correct inequality is $$\frac {u_{n+1}} {s_{n+1}} + \frac {u_{n+2}} {s_{n+2}} + \dots + \frac {u_{n+q}} {s_{n+q}} > \frac {u_{n+1} +u_{n+2}+\dots +u_{n+q}} {\color {red} {s_{n+q}}} = \frac {s_{n+q} -s_n} {\color {red} {s_{n+q}}} = 1 - \frac {s_n} {s_{n+q}}$$ and with this you have reached a dead end, because $1 - \frac {s_n} {s_{n+q}} < 1$, which does not help you. This question has been asked awfully many times here; just see this instance of it and from there follow the links that take you along a full chain of duplicates, full of various solutions.<|endoftext|> TITLE: What do mathematicians mean when they say "form"? QUESTION [6 upvotes]: As in differential form, modular form, quadratic form? I'm sorry if this is a really silly question. REPLY [7 votes]: A main possibly non-intuitive usage of "form" is as a somewhat particular type of map/function. Traditionally, the word function was used in a more restrained way and it was mainly used for real and complex functions only. For example, classically in (real) functional analysis one would have: A function would be a map from $\mathbb{R}$ to $\mathbb{R}$. An operator would be a map from a space of functions to a space of functions. Example: the map "derivative" so $f \mapsto f'$. The term differential operator is still very common. A form would be a map from a space of functions to $\mathbb{R}$, often a linear one. Example: definite integral, $f \mapsto \int_0^1 f(t)dt$. It is still common in this context and in linear algebra that forms (linear, bilinear, etc.) map from a (vector) space to the reals (more generally the scalar field). Thus, a form is often a map from a 'complicated' domain (often some space) to a simpler co-domain (typically a field). The terminology differential form also goes under this umbrella. And, while slightly less clear, I'd argue quadratic form (and the like), too. That said, there are altogether different usages of the word "form" too, as mentioned in the other answer, notably in the compound "normal form." But there the usage seems more in line with a common sense dictionary meaning of the word. When there are several different yet equivalent ways to express something, then you can write it in one way or in another way, in one form or in another form. But if the "form" is naturally seen as a map, frequently the co-domain will be 'simpler' than the domain.<|endoftext|> TITLE: Each $2\times 5$ rectangle contains $1\times 3$ rectangle QUESTION [5 upvotes]: A $60\times 60$ board is partitioned into rectangles of size $2\times 5$ (or $5\times 2$). Is it true that there always exists another partition into rectangles of size $1\times 3$ (or $3\times 1$) such that any $2\times 5$ (or $5\times 2$) rectangle contains a $1\times 3$ (or $3\times 1$) rectangle? For the "simple" partition into $2\times 5$ rectangles, this is certainly true: use the simple partition into $1\times 3$ rectangles. Another way to partition is to first partition the $60\times 60$ board into $12\times 10$ boards, and then for each such board, put two $2\times 5$ at the top and ten $5\times 2$ below. It is not hard to tile each such $12\times 10$ board with $1\times 3$ (or $3\times 1$) rectangles so that each $2\times 5$ (or $5\times 2$) contains a $1\times 3$ (or $3\times 1$) rectangle.
(I'm assuming that an $m\times n$ rectangle has $m$ rows and $n$ columns.) REPLY [6 votes]: We can even get every $2\times 5$ to contain two $1\times 3$s: After tiling your board with $2\times 5$ rectangles, divide it into squares of $3\times 3$ each, and then subdivide each of these squares into three $1\times 3$ strips either horizontally or vertically such that if possible one of the strips lies completely within a $2\times 5$ rectangle. (This can't be possible both horizontally and vertically; if neither direction works then just choose one arbitrarily.) Now, consider one of the $2\times 5$ rectangles; let's say wlog that it is lying horizontally. No matter how it is placed with respect to the $3\times 3$ grid, there will be 3 of its 5 columns that make up one column of the $3\times 3$ grid. One or two $3\times 3$ squares in this column will overlap our $2\times 5$ rectangle, and due to the above principle, it or they will have been divided horizontally into strips.<|endoftext|> TITLE: How to prove the parallel projection of an ellipsoid is an ellipse? QUESTION [6 upvotes]: Take the following ellipsoid in implicit form as an example: $$x^2 + 2 y^2 + 3 z^2 + x y + y z - 2 xz = 5$$ The parallel projection of the ellipsoid onto the $xOy$ coordinate plane can be obtained as: $$ 8 x^2 + 16 x y+23 y^2=60$$ Is it possible to prove that the parallel projection of an ellipsoid is always an ellipse, and how? I guess this should generalize to: the perspective projection of an ellipsoid is a conic curve. How to prove it? In projective geometry, the quadratic form of conics is useful in such proofs. This one seems a little more difficult. REPLY [3 votes]: $\newcommand{\dd}{\partial}$No claim of elegance, but Cartesian coordinates handle both questions, and the answers are "yes": Up to translation, a general ellipsoid can be written in the form $$ Ax^{2} + By^{2} + Cz^{2} + 2(Dxy + Exz + Fyz) = 1 \tag{1} $$ for some positive-definite coefficient matrix $$ \left[\begin{array}{@{}ccc@{}} A & D & E \\ D & B & F \\ E & F & C \\ \end{array}\right]. $$ For definiteness, project the ellipsoid to the $(x, y)$-plane along the $z$-axis, and call the image the shadow. A point $p = (x, y, z)$ on the ellipsoid projects to the boundary of the shadow if and only if the tangent plane to the ellipsoid at $p$ is parallel to the $z$-axis, if and only if $$ 0 = \frac{\dd}{\dd z}\bigl(Ax^{2} + By^{2} + Cz^{2} + 2(Dxy + Exz + Fyz)\bigr) = 2(Ex + Fy + Cz). $$ That is, the boundary of the ellipsoid's shadow is the shadow of a plane section of an ellipsoid (an ellipse), hence itself an ellipse. Let $p_{0} = (x_{0}, y_{0}, z_{0})$ be an arbitrary point outside the ellipsoid. The ray from $p_{0}$ to a point $p = (x, y, z)$ on the ellipsoid is tangent to the ellipsoid if and only if the normal to the ellipsoid at $p$ is orthogonal to the ray, if and only if $$ \nabla\bigl(Ax^{2} + By^{2} + Cz^{2} + 2(Dxy + Exz + Fyz)\bigr) \cdot (p - p_{0}) = 0, $$ or (after dividing by $2$) $$ (Ax + Dy + Ez)(x - x_{0}) + (Dx + By + Fz)(y - y_{0}) + (Ex + Fy + Cz)(z - z_{0}) = 0. \tag{2} $$ After expanding, the second-order terms are precisely the left-hand side of (1), which equals $1$ on the ellipsoid; that is, on the ellipsoid (2) reduces to a linear equation.
Consequently, the "horizon" of the ellipsoid from an arbitrary exterior center of projection is a plane section, so it projects to a (possibly degenerate) ellipse regardless of the "screen" plane.<|endoftext|> TITLE: Conjecture: Every prime number is the difference between a prime number and a power of $2$ QUESTION [11 upvotes]: Conjecture: $\forall p\in\mathbb P\exists q\in\mathbb P\exists n\in \mathbb N: q-p=2^n$ Verified for the first 100 primes. REPLY [17 votes]: This question discusses the existence/infinitude of primes $p$ that can be written in the form $$p = q \pm 2^n$$ where $q$ is a prime and $n \in \mathbb{Z}^+$. In particular for $p = q - 2^n$, Gjergji Zaimi mentions in a comment to his answer that $$p = 47,867,742,232,066,880,047,611,079$$ is a counterexample.<|endoftext|> TITLE: Uniqueness of minimizing geodesic $\Rightarrow$ uniqueness of connecting geodesic? QUESTION [8 upvotes]: Let $M$ be a complete connected Riemannian manifold. Fix $p \in M$. Assume every point in $M$ has a unique minimizing geodesic connecting it to $p$. Is it true that for every point, the only geodesic connecting it to $p$ is the minimizing one? (Does the answer change if we count periodic geodesics as one?) Remarks: (1) In all the examples I have checked so far this holds. Of course, if for some $q \in M$ there is a unique minimizing geodesic, it does not imply there is a unique geodesic from $p$ to $q$ (think of a circle). For spheres, tori, cylinders - the claim vacuously holds, since there is no point satisfying the hypothesis (every point has an "antipodal" point where there is more than one minimizing geodesic). (2) If the assertion is true, it implies something quite strong on manifolds where minimizing geodesics are always unique; by this answer such a manifold must be diffeomorphic to $\mathbb{R}^n$ (via the exponential map). Actually, one can adapt the argument of that answer to see that if there is one such point $p$ (that minimizing geodesics from it to all other points are unique), then the manifold will be diffeomorphic to $\mathbb{R}^n$. Thus, to refute the conjecture, it is enough to find a manifold with one such point which is not diffeomorphic to $\mathbb{R}^n$. REPLY [3 votes]: According to section 4 of this paper: Stephanie B. Alexander, I. David Berg, and Richard L. Bishop, Cut loci, minimizers, and wavefronts in Riemannian manifolds with boundary, Michigan Math Journal, Volume 40, Issue 2 (1993), 229-237, for a connected complete Riemannian manifold $(M,g)$ without boundary the cut-locus $C(p)$ of a point $p\in M$ equals the closure of the set $N(p)$ consisting of points $q\in M$ such that there is more than one minimizing geodesic between $p$ and $q$. Now, your assumption is that $N(p)=\emptyset$ for some $p\in M$. Hence, $C(p)=\emptyset$ for this $p\in M$. Thus, according to the answer to your earlier question (I think you asked this question several times, I remember writing an answer), it follows that any point in $M$ is connected to $p$ by a unique geodesic as $\exp_p$ is a diffeomorphism.<|endoftext|> TITLE: Can $\sqrt[n]{\sqrt{a}+\sqrt{b}}+\sqrt[n]{\sqrt{a}-\sqrt{b}}$ be an integer? QUESTION [8 upvotes]: The number $\sqrt{a}+\sqrt{b}$ cannot be an integer if $a,b$ are integers such that $\sqrt{b}$ is not an integer. (In fact, this is true for any number of square roots, and I believe even for cube roots, etc.) Therefore $\sqrt[n]{\sqrt{a}+\sqrt{b}}$ cannot be an integer either, for any positive integer $n$. What about if we sum it with the conjugate?
That is, do there exist positive integers $n\geq 2,a,b$ such that $\sqrt{b}$ is not an integer but $\sqrt[n]{\sqrt{a}+\sqrt{b}}+\sqrt[n]{\sqrt{a}-\sqrt{b}}$ is an integer? Update: As Jyrki Lahtonen points out in the comment, there do exist such integers. What about if we also require that $\sqrt{a}$ is not an integer either? REPLY [5 votes]: The answer is yes. Pell's equations solve the problem. Let $$A\cdot N'^{2}+1=M'^{2}$$ be the generic Pell equation, the solution of which is: $N'=\frac{(M+N\sqrt{A})^{n}}{2\sqrt{A}}-\frac{(M-N\sqrt{A})^{n}}{2\sqrt{A}}$, $M'=\frac{(M+N\sqrt{A})^{n}}{2}+\frac{(M-N\sqrt{A})^{n}}{2}$ where $M$ and $N$ are the basic solutions of $A$ (for example, if we choose $A=8$, $N=1$ and $M=3$). $\sqrt[n]{\sqrt{a}+\sqrt{b}}+\sqrt[n]{\sqrt{a}-\sqrt{b}}=M'$, $\sqrt[n]{\sqrt{a}+\sqrt{b}}-\sqrt[n]{\sqrt{a}-\sqrt{b}}=N'\sqrt{A}$; $a=2^{-2(n+1)}\Big((M+N\sqrt{A})^{n}+(M-N\sqrt{A})^{n}\Big)^{2}$, $b=2^{-2(n+1)}\Big((M+N\sqrt{A})^{n}-(M-N\sqrt{A})^{n}\Big)^{2}$. Let's take an example: $A=13$, $N=180$ and $M=649$; we choose $n=4$, then $\sqrt[4]{\sqrt{a}+\sqrt{b}}+\sqrt[4]{\sqrt{a}-\sqrt{b}}=1419278889601$, $a=\frac{1053711982714216551873491874429255128997076465231993345538436469128908160121757992166220331730227201}{256}$, $b=4116062432477408405755827634489277847644829942312474006009517457534797500475617156899298170821200$. If we choose $A=8$, $M=3$, $N=1$ and $n=5$, we get: $\sqrt[5]{\sqrt{a}+\sqrt{b}}+\sqrt[5]{\sqrt{a}-\sqrt{b}}=3363$, $a=\frac{47370562574818466708936539960450008969}{1024}$, $b=\frac{5921320321852308338617067495056251121}{128}$, and so on…<|endoftext|> TITLE: Good closed form approximation for iterates of $x^2+(1-x^2)x$ QUESTION [8 upvotes]: Let $f(x) := x^2+(1-x^2)x$. Is there a nice nontrivial closed form approximation $g_n(x)$ over $[0,1]$ for the $n$-fold composition $f^{\circ n}(x)$? Obviously near $0$ we have that $f^{\circ n}(x) = x+nx^2+...$ but this is not much use to me. Rather than try to pin down what "nice" ought to mean, I'll channel Potter Stewart and just say I (and I'm sure also a respondent) would know it upon sight. One might be tempted to mumble "solve Schroder's equation" but I don't see how that helps. Nor do I see how computing the Carleman matrix of $f$ helps (but for what it's worth, I believe the matrix elements are $M_{jk} := \sum_{r=0}^j \binom{j}{r} (-1)^{j-r} \binom{r}{k-3j-2r}$). Such tactics are suggested in How would I go about finding a closed form solution for $g(x,n) = f(f(f(...(x))))$, $n$ times? REPLY [2 votes]: FYI, here is a picture illustrating Count's answer at https://math.stackexchange.com/a/1872168/241. Blue curves are the zeroth through tenth iterates of $f$; red curves are obtained by numerically inverting $F(f^{\circ N}(x)) \approx N + F(x)$ for $F(x) := \log \frac{x}{1-x} + \frac{1}{x}$. I will mention that this is an improvement on the exact fourth-order approximation $f^{\circ N}(x) \approx x+Nx^2+(N^2-2N)x^3+([N-1]^3-2[N-1]^2-3[N-1])x^4$ near $0$, and I imagine an improvement on all exact fixed-order approximations.
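To make this concrete, here is a small numerical sketch of mine (not part of the linked answer) comparing direct iteration with that quartic truncation for small $x$:

    def f(x):
        # f(x) = x^2 + (1 - x^2) x = x + x^2 - x^3
        return x + x**2 - x**3

    def iterate(x, n):
        # n-fold composition f(f(...f(x)...))
        for _ in range(n):
            x = f(x)
        return x

    def quartic(x, n):
        # the exact fourth-order approximation quoted above
        m = n - 1
        return x + n*x**2 + (n**2 - 2*n)*x**3 + (m**3 - 2*m**2 - 3*m)*x**4

    for n in (2, 5, 10):
        print(n, iterate(0.01, n), quartic(0.01, n))

For $x=0.01$ the two columns agree closely, with the discrepancy entering at order $x^5$; the agreement degrades as $x$ grows toward $1$, which is why the $F$-based approximation is preferable away from $0$.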
On a separate note, I will also mention that the Carleman approach I originally brought up can be pursued along the lines of http://www.sciencedirect.com/science/article/pii/S0022247X98959868.<|endoftext|> TITLE: Evaluate $\int \frac{\sin^4 x}{\sin^4 x +\cos^4 x}{dx}$ QUESTION [6 upvotes]: $$\int \frac{\sin^4 x}{\sin^4 x +\cos^4 x}{dx}$$ $$\sin^2 x =\frac{1}{2}{(1- \cos2x)}$$ $$\cos^2 x =\frac{1}{2}{(1+\cos2x)}$$ $$\int \frac{(1- \cos2x)^2}{2\,(1+\cos^2 2x)}{dx}$$ $$\frac{1}{2} \int \left[1-\frac{2 \cos2x}{1+\cos^22x}\right] dx$$ What should I do next? Please also tell me an alternative way to do this. REPLY [4 votes]: $$I=\int \frac{\sin^4 x}{\sin^4 x +\cos^4 x}{dx}=\int \frac{1-2\cos^2(x)+\cos^4(x)}{1-2\cos^2(x)+2\cos^4(x)}dx$$ $$=\int\frac{1}{2}-\frac 12\frac{-1+2\cos^2(x)}{1-2\cos^2(x)+2\cos^4(x)}dx$$ $$=\int\frac{1}{2}-\frac 12\frac{\cos(2x)}{1-2\cos^2(x)\sin^2(x)}dx$$ $\color{blue}{\text{This last step above is a few manipulations away from being the last step in your question.}}$ Let $u=\cos(x)\sin(x)$; then $du=\cos(2x)\,dx$ and $$\boxed{I=\frac x2-\frac 12\int \frac 1{1-2u^2}du=\frac x2 -\frac12\frac{\tanh^{-1}(\sqrt2 u)}{\sqrt 2} +k_1}$$ Replace $u$ and you're good to go...<|endoftext|> TITLE: Curious integrals for Jacobi Theta Functions $\int_0^1 \vartheta_n(0,q)dq$ QUESTION [11 upvotes]: There are various identities for the Jacobi Theta Functions $\vartheta_n(z,q)$ on the MathWorld page and on the Wikipedia page. But I found no integral identities for these functions. Meanwhile, there are beautiful identities for the simple case of $z=0$: $$\int_0^1 \vartheta_2(0,q)dq=\pi \tanh \pi$$ $$\int_0^1 \vartheta_3(0,q)dq=\frac{\pi}{ \tanh \pi}$$ $$\int_0^1 \vartheta_4(0,q)dq=\frac{\pi}{ \sinh \pi}$$ I found these identities using the series approach and Mathematica for summation. Surprisingly enough, Mathematica can't take the integrals themselves, and numerically for $\vartheta_2(0,q),\vartheta_3(0,q)$ they are extremely hard to compute because of the sharp increase around $q=1$. Using the same method, it's possible to find some other interesting integrals, for example: $$\int_0^1 \vartheta_2(0,q) \ln \frac{1}{q} dq=\frac{\pi}{2} \left( \tanh \pi-\frac{\pi}{\cosh^2 \pi} \right)$$ $$\int_0^1 \vartheta_3(0,q) \ln \frac{1}{q} dq=\frac{\pi}{2} \left( \frac{1}{ \tanh \pi}+\frac{\pi}{\sinh^2 \pi} \right)$$ $$\int_0^1 \vartheta_4(0,q) \ln \frac{1}{q} dq=\frac{\pi}{2 \sinh \pi} \left( \frac{\pi}{ \tanh \pi}+1 \right)$$ Where can I find out more about the integral identities for the Jacobi Theta Functions? Are there some identities for the general case of $z \neq 0$? Moreover, is there some intuition behind the relationship between theta functions and hyperbolic functions? (I can understand $\pi$, since they are related to elliptic functions, but where do $\tanh, \sinh, \cosh$ come from?) REPLY [12 votes]: The connection to hyperbolic functions is due to the following Fourier series, valid when $|z|<\pi$: $$\frac{\pi}{x} \frac{\cosh z x}{\sinh \pi x} = \sum_{n \in \mathbb{Z}}\frac{(-1)^n \cos( z n)}{n^2+x^2} .\tag{1}$$ It is obtained quite straightforwardly. For instance, see @Machinato's related calculation in this answer, or the calculation on this page.
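(Identity $(1)$ is also easy to test numerically; a minimal sketch of mine:

    import math

    def lhs(z, x):
        return math.pi / x * math.cosh(z * x) / math.sinh(math.pi * x)

    def rhs(z, x, N=100000):
        # n = 0 term, plus the n and -n terms paired together
        s = 1.0 / x**2
        for n in range(1, N):
            s += 2 * (-1)**n * math.cos(z * n) / (n**2 + x**2)
        return s

    print(lhs(1.0, 0.7), rhs(1.0, 0.7))  # the two values agree to many digits

The tail terms decay like $1/n^2$ with alternating signs, so the truncation error is tiny.)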
Now, using the series expansion of $\vartheta_4(z,q)$ and integrating term by term we obtain: $$\int_0^1 q^{s-1} \vartheta_4(z,q)\,dq= \int_0^1 q^{s-1} \sum_{n \in \mathbb{Z}} (-1)^n q^{n^2} \cos(2 n z) \, dq \\= \sum_{n \in \mathbb{Z}} \frac{(-1)^n \cos(2 n z)}{n^2+s} = \frac{\pi}{\sqrt{s}} \frac{\cosh 2 z \sqrt{s}}{\sinh \pi \sqrt{s}}.\tag{2}$$ Multiplication of the integrand by powers of $\ln q$ is done by successive differentiation with respect to $s$, and to obtain the similar cases for $\vartheta_2$ and $\vartheta_3$ we only need to keep in mind that $$\vartheta_3(z,q) = \vartheta_4(z-\pi/2, q) \tag{3}$$ and $$\vartheta_2(z,q) = \vartheta_3(z,q)-\vartheta_4(z/2,q^{1/4}). \tag{4}$$<|endoftext|> TITLE: What is the computational complexity of calculating $\pi(x)$ exactly? QUESTION [7 upvotes]: The prime counting function $\pi(x)$ has been determined for $x=10^{26}$. The list of the $10^n$-th primes, however, ends at $n=18$. The $10^{18}$-th prime has $20$ digits. Apparently, the determination of $\pi(x)$ is easier than the determination of $p_n$ (the $n$-th prime). What is the computational complexity of determining $\pi(n)$ exactly? What is the computational complexity of determining the $n$-th prime? Is the second problem actually harder than the first? (I think this is not the case because with binary search, it should be possible to determine $p_n$ nearly as efficiently as the determination of $\pi(n)$.) It is true that the problem is not of great practical interest because $\operatorname{li}(x)$ is a very good approximation of $\pi(x)$. I am just curious how far the exact calculation could go with the current computational power available. REPLY [4 votes]: Right now there are two competing methods for determining $\pi(x)$ when $x$ is large: the combinatorial method of Meissel-Lehmer-Lagarias-Miller-Odlyzko-Deleglise-Rivat (see here), which requires $O(\frac{x^{\frac{2}{3}}}{(\log x)^2})$ time and $O(x^{\frac{1}{3}}(\log x)^2)$ space, and the analytical method of Lagarias-Odlyzko (see here), which requires $O(x^{\frac{1}{2}+\epsilon})$ time and $O(x^{\frac{1}{4}+\epsilon})$ space. (Although actually, the latter method can be modified to require $O(x^{\frac{3-2\delta}{5}+\epsilon})$ time and $O(x^{\delta+\epsilon})$ space for $0 \le \delta \le \frac{1}{4}$.) Interestingly, the record of $\pi(10^{26})$ was obtained using the combinatorial method, which is slower asymptotically; perhaps the crossover point has not yet been reached. Another factor is likely the difficulty of implementing the analytical method. As you say, we can use $\pi(n)$ to determine $p_n$, and the computational complexities will not be much different asymptotically; however, the difference is likely enough to cause $p_n$ to lag behind, since we would need to evaluate $\pi(n)$ many times just to determine one value of $p_n$. But, I would say that the difference of a factor of $10^6$ between the two records is probably due to less attention paid to $p_n$.<|endoftext|> TITLE: Are there ways to build mathematics without axiomatizing? QUESTION [11 upvotes]: Every time I read about a theory in mathematics, it usually starts with axiomatizing the most fundamental concepts that are going to be treated. Recently, I have started finding this troubling. In the foundational crisis, we tried to root all of mathematics in set theory and build it up from there. I believe this is a supremely elegant idea, but I have to ask myself why. I understand why axioms are the brick wall against which all infinite regressions crash.
We cannot, after all, keep asking "why" indefinitely. There must come a time when we simply say: because it is. But why? What happens if we throw logic out of the window and attempt to start everything from scratch? I have read about model and category theory and all types of order logics, but none of them seem to be enough because they are all rooted in something that eventually leads to a so-called "self-evident truth". What if infinite regressions are similar to infinite series: something that at first we assumed was nonsensical but actually turns out to be really useful? My question is: are there ways to build mathematics without axiomatizing? If no, is there a proof? REPLY [3 votes]: I remember reading the abstract of an article (or description of a book perhaps) that claimed to answer this using the principles of evolutionary biology; essentially, the author performed various simulations suggesting that organisms that take, as their fundamental logic, anything other than $2$-valued boolean logic tend to die off in the long run. I think if you Google around, you'll probably be able to dig something up in that vein. One might object: ah, but you're using classical logic to build computer simulations and interpret the result of those simulations. That's circular! My gut feeling is that actually, this isn't circular (but my thoughts on this aren't sufficiently well-developed that it's worth me trying to write them here.)<|endoftext|> TITLE: Fixed-point free map of the 2-sphere which has order 4 QUESTION [6 upvotes]: The antipodal involution of $\mathbb{S}^2$ clearly has no fixed points. However, I cannot think up an example of a homeomorphism of order $4$ which has no fixed points. Could you help me? REPLY [4 votes]: Consider the linear transformation of $\Bbb{R}^3$ corresponding to $$ A=\left(\begin{array}{ccc}0&-1&0\\1&0&0\\0&0&-1\end{array}\right). $$ The transformation is orthogonal, hence preserves lengths, hence maps $S^2$ to itself. It is clearly of order four. The eigenvalues of $A$ are $i,-i,-1$, so it has no fixed points. This is in the set of homeomorphisms described by Antonio: a 90 degree rotation followed by a reflection.<|endoftext|> TITLE: Difference between | and / QUESTION [5 upvotes]: "If we find a prime $p$ such that $p\mid n$, then $n/p$ is a positive integer that's smaller than $n$." I understand $n/p$ is $n$ divided by $p$ but what is $p\mid n$? REPLY [5 votes]: The expression "$n/p$" represents the number that you get when you divide $n$ by $p$, whereas the expression "$p\mid n$" represents the statement that $p$ divides evenly into $n$.<|endoftext|> TITLE: Upper bound of $x_1x_2x_3+x_2x_3x_4+\dots+x_{n}x_1x_2$ QUESTION [6 upvotes]: Let $n\geq 3$ be a positive integer and let the $x_i$ be non-negative real numbers with $x_1+x_2+\dots+x_n=1$. What is the maximum of $x_1x_2x_3+x_2x_3x_4+\dots+x_{n}x_1x_2$? If the sum were symmetric, consisting of all terms of the form $x_ix_jx_k$ for $i<j<k$ …<|endoftext|> TITLE: Is a finite inverse limit of noetherian rings noetherian? QUESTION [7 upvotes]: Let $\{A_i\}$ be an inverse system of (commutative, unital) Noetherian rings with a finite index set. Is $\varprojlim A_i$ also a Noetherian ring? REPLY [6 votes]: The answer is no.
Let $\varphi : k[x,y] \to k[x,y], f \mapsto f(x,0)$ and $$A = \{ f \in k[x,y] ~|~ f(x,0) \in k \} = \varphi^{-1}(k).$$ $A$ is well known to be non-noetherian - $(y,xy,x^2y,x^3y, \dotsc)$ is not finitely generated - but it fits in the following cartesian square (the horizontal arrows are inclusions): $$\require{AMScd} \begin{CD} A @>>> k[x,y]\\ @VV\varphi V @VV\varphi V \\ k @>>> k[x,y] \end{CD}$$<|endoftext|> TITLE: Decompose $5^{1985}-1$ into factors QUESTION [12 upvotes]: Decompose the number $5^{1985}-1$ into a product of three integers, each of which is larger than $5^{100}$. We first notice the factorization $x^5-1 = (x-1)(x^4+x^3+x^2+x+1)$. Now to factorize $x^4+x^3+x^2+x+1$ we get $$(x^2+ax+1)(x^2+bx+1) = x^4+(a+b)x^3+(ab+2)x^2+(a+b)x+1 = x^4+x^3+x^2+x+1$$ which implies $a+b = 1$, $ab+2 = 1$. Thus, $$x^4+x^3+x^2+x+1 = (x^2+\left(\frac{1+\sqrt{5}}{2}\right)x+1)(x^2+\left(\frac{1-\sqrt{5}}{2}\right)x+1).$$ Is it possible to continue from this approach, because now the factors I have aren't integers, or is there a better way? REPLY [2 votes]: A low-tech solution. $1985=5\cdot 397$, and $ \Phi_5(x) $ is a palindromic polynomial, decomposable as $$ \left(x^2+\frac{x}{2}+1\right)^2-\frac{5}{4}x^2 \tag{1}$$ or as: $$(x^2+3x+1)^2-5x(x+1)^2 \tag{2}$$ and the latter, with $x=5^{397}$, is the difference of two squares, $a^2-b^2=(a-b)(a+b)$ (note that $5x=5^{398}$ is a perfect square), with both $a-b$ and $a+b$ being $>5^{100}$. The claim hence follows from $$ 5^{5\cdot 397}-1 = (5^{397}-1)\cdot\Phi_5\left(5^{397}\right).\tag{3} $$<|endoftext|> TITLE: How to find the centroid of the intersecting region between three circles of differing diameters QUESTION [9 upvotes]: This question is a follow-up to this question I asked earlier which deals with finding the midpoint of the intersecting region of two circles of differing diameters. Using the parametric equation of a line as suggested in the accepted answer, it works perfectly. Now I want to take it a step further. In the image below, you'll see I have three circles of differing diameters which all intersect at various points. I would like to find the centroid of the region at which all three circles overlap. The only information available is the coordinates of the circles' centers and their respective diameters. It doesn't appear to be as straight-forward as the last question, though. The intersection of the circles does not always occur on the line between the centers. I imagine one would need to average the X and Y coordinates of the points of intersection to find it, but I'm not sure how to find the points of intersection in this scenario. Any thoughts or guidance in the right direction is appreciated. REPLY [3 votes]: Get the coordinates of the three intersection points between pairs of circles by solving $$\begin{cases}(x-x_1)^2+(y-y_1)^2=r_1^2\\(x-x_2)^2+(y-y_2)^2=r_2^2\end{cases}$$ (subtract the equations to get a linear one, express $y$ in terms of $x$ and plug in the first equation to get a quadratic equation in $x$.) The area of the triangle is given by half the cross-product between two sides. The centroid is at the arithmetic mean of the coordinates. For each circular segment, you know the radius and the chord length and you easily derive the aperture angle ($2\arcsin\frac{l}{2r}$). Hence the area, which is the difference between a sector and a triangle ($r^2(\theta-\sin\theta)/2$). The centroid is located on the symmetry axis at a distance from the arc center equal to $$\frac{4r\sin^3\frac\theta2}{3(\theta-\sin\theta)}.$$ See http://mathworld.wolfram.com/CircularSegment.html.
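In code, this segment step might look like the following sketch (my own illustration, not from the original answer):

    import math

    def segment_area_and_centroid_offset(r, l):
        """Given radius r and chord length l of a circular segment, return
        (area, distance of the segment's centroid from the circle's center)."""
        theta = 2 * math.asin(l / (2 * r))            # aperture angle
        area = r**2 * (theta - math.sin(theta)) / 2   # sector minus triangle
        d = 4 * r * math.sin(theta / 2)**3 / (3 * (theta - math.sin(theta)))
        return area, d

As a sanity check, for the half-disc ($l=2r$, so $\theta=\pi$) this returns area $\pi r^2/2$ and centroid offset $4r/(3\pi)$, the familiar values.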
For each circle, find the translation and rotation that brings the segment into the reference position (as in the link). The translation just cancels the center coordinates. The rotation angle is found from the slope of the chord. Compute the coordinates of the centroids of the segments, counter-rotate them and translate the centers back. This will give you the absolute coordinates of the centroids. The global centroid is the weighted average of the centroids, where the weights are the areas $$\bar x=\frac{\bar x_{t}A_t+\bar x_{s_1}A_{s_1}+\bar x_{s_2}A_{s_2}+\bar x_{s_3}A_{s_3}}{A_t+A_{s_1}+A_{s_2}+A_{s_3}}$$ and similarly for $y$. A complete analytical formula is possible but will probably be pretty large.<|endoftext|> TITLE: Algebras with a self-dual congruence lattice QUESTION [5 upvotes]: There are well-known (Mal'tsev) conditions that characterize certain properties of the congruence lattice of an algebra. The existence of a 3-ary term $p$, together with the familiar identities $p(x,y,y) \sim x \sim p(y,y,x)$, is equivalent to congruence permutability, for example. And similar conditions exist for other lattice-theoretic properties, such as distributivity and modularity. Is there a term-condition, or an identity, that characterizes (or implies) varieties where every algebra has a self-dual congruence lattice? A second question is whether there is a Mal'tsev condition that implies that every algebra has a complemented congruence lattice. For example, the congruence lattice of a finite abelian group is self-dual. Or the congruence lattice of a finite dimensional vector space. REPLY [6 votes]: This question is about the following two properties of varieties: The property that all members of $\mathcal V$ have self-dual congruence lattices. The property that all members of $\mathcal V$ have complemented congruence lattices. As Pedro observed, if all members have complemented congruence lattices, then the subdirectly irreducible members must be simple [in a nonsimple SI the monolith has no complement]. We can also say something about the SI's under the assumption that all members have self-dual congruence lattices. First note that if all members have self-dual congruence lattices, then any interval in a congruence lattice is also self-dual. This follows from repeated use of self-duality together with the fact that upper intervals are congruence lattices of quotients. Now assume that $\mathcal V$ has the property that its members have self-dual congruence lattices. The bottom element of the congruence lattice of an SI ${\mathbf S}$ is completely meet irreducible, so the top element must be completely join irreducible. By the observation of the last paragraph, every nontrivial upper interval is self-dual. Since these intervals $[\sigma,1]$ have completely join irreducible top element, they must also have completely meet irreducible bottom element. Consequently every non-top element in the congruence lattice of ${\mathbf S}$ is completely meet irreducible, and similarly every non-bottom element is completely join irreducible. The only algebraic lattices where every non-top element is completely meet irreducible and every non-bottom element is completely join irreducible are finite chains. Conclusion: If every algebra in ${\mathcal V}$ has a self-dual congruence lattice, then the congruence lattice of any SI in $\mathcal V$ is a finite chain.
Continuing to assume that members of $\mathcal V$ have self-dual congruence lattices, if ${\mathbf A}\in {\mathcal V}$ is nontrivial and $\alpha$ is a proper congruence on $\mathbf A$, then $\alpha$ can be enlarged to $\beta$ so that ${\mathbf A}/\beta$ is SI. By the result of the last paragraph, we may enlarge $\beta$ further to $\gamma$ if necessary where ${\mathbf A}/\gamma$ is simple. This shows that every proper congruence $\alpha$ on a nontrivial algebra in $\mathcal V$ can be enlarged to a coatom $\gamma$. By self-duality we get that every nonzero congruence must lie above an atom. This is a strong property to impose throughout a variety. It is investigated in this paper: Atomicity and nilpotence, Canad. J. Math. 42 (1990), 365-382. A result found there is that if every nonzero congruence on every nontrivial algebra in $\mathcal V$ lies above an atom, then the ascending central series for any ${\mathbf A}\in {\mathcal V}$ reaches the top congruence, although perhaps after transfinitely many steps. What it means here is that if we are in the case where $\mathcal V$ has self-dual congruence lattices, then SI's in $\mathcal V$ have congruence lattices that are finite chains, and the SI algebras themselves are nilpotent. (It is plausible that the SI's must even be abelian, but I see how to prove this only for congruence modular varieties at the moment.) If we are in the case where $\mathcal V$ has complemented congruence lattices, then SI's in $\mathcal V$ are simple and, as one can show by following the argument in the above paper, the SI's are abelian. In this case $\mathcal V$ is an abelian variety. Now for the answers to these questions: Is there a Maltsev condition that implies that every algebra has a complemented congruence lattice? and Is there a Maltsev condition that implies that every algebra has a self-dual congruence lattice? Claim: The only such Maltsev conditions are those expressing that the variety is trivial. This Claim rests on the following Observation: If $\Sigma$ is a Maltsev condition that is satisfied by a nontrivial variety, then it is satisfied by a nontrivial discriminator variety. [If $\mathcal V$ satisfies $\Sigma$, just add the discriminator $d$ to each member of $\mathcal V$ and generate a variety ${\mathcal V}^d$ with the resulting algebras. ${\mathcal V}^d$ is a discriminator variety satisfying any Maltsev condition true in ${\mathcal V}$.] To see how the Observation implies the Claim, assume that $\Sigma$ is a Maltsev condition implying either that congruence lattices are self-dual or that congruence lattices are complemented. Let $\mathcal V$ satisfy $\Sigma$. The discriminator variety ${\mathcal V}^d$ satisfies $\Sigma$, so the SI's in ${\mathcal V}^d$ are nilpotent. But the only nilpotent members of a discriminator variety are trivial, so ${\mathcal V}^d$ is trivial. $\mathcal V$ must also be trivial.<|endoftext|> TITLE: Complementary text for mathematical Quantum Mechanics lectures QUESTION [9 upvotes]: I'm looking for a text to complement Frederic Schuller's lectures on QM. His approach is very mathematical -- in fact it looks like the first 12 of 21 lectures are just about the mathematical foundations of QM. He introduces Hilbert and Banach spaces from scratch (from basic linear algebra and analysis really), derives the functional analysis version of the spectral theorem, and gives what I assume are more rigorous definitions.
For instance, in all of the undergrad books I've looked at, quantum states -- if they're given any definition at all -- are said to be elements of the Hilbert space. But Schuller claims that that is not correct. States are in fact positive trace-class linear maps on the Hilbert space. Does anyone know a good undergrad level QM book that I can follow along with so I have some exercises and extra material to work through as I go through the lectures? Thanks. REPLY [5 votes]: I'll just make my comments into an answer. Reed and Simon Volume 1: Functional Analysis (Methods of Modern Mathematical Physics) here on amazon has a table of contents. Reed and Simon Volume 2: Fourier Analysis, Self-Adjointness (Methods of Modern Mathematical Physics) here on amazon, again you can see the toc. Cohen-Tannoudji - Quantum Mechanics (2 vol. set). Reasonably rigorous and may fit Schuller to some extent - lots of end-of-chapter appendices. amazon link Messiah Quantum Mechanics (2 Volumes in 1) - the two volumes have a lot in them, might not be as rigorous as you want. amazon link Quantum Mechanics in Hilbert Space: Second Edition, suggested by user254665 - The preface to the first edition starts "This book was developed from a fourth-year undergraduate course given at the University of Toronto to advanced undergraduate and first-year graduate students in physics and mathematics. It is intended to provide the inquisitive student with a critical presentation of the basic mathematics of nonrelativistic quantum mechanics at a level which meets the present standards of mathematical rigor." Seems to fit the course reasonably well judging by the toc - amazon, toc, preface etc. R&S Volume one introduces Hilbert spaces, Banach spaces, the spectral theorem etc. and leads from bounded to unbounded operators and the Fourier transform. R&S Volume 2 is very physics orientated, with topics on Fourier transforms, Hamiltonians in non-rel QM, and talks about self-adjoint operators, and a bit about time dependent Hamiltonians. As a note on quantum states, there are various definitions. They can be: vectors in a Hilbert space $\mathcal{H}$, specifically ones that satisfy a Schrodinger equation you're interested in (assuming you're in the Schrodinger picture in non-rel QM; the state space would be a subspace of the Hilbert space); elements of the projective Hilbert space $\Bbb P\mathcal{H}$; trace-class positive operators of trace $1$, normally called $\textit{density matrices}$; or rank-one projection operators. It depends on what you want to do with them. The first I think is the most common, as when teaching quantum mechanics, the wave functions usually belong to an $L^2$ space, and are found by solving the Schrodinger PDE. The second-to-last one is useful for statistical mixtures and open quantum systems; there you can have pure and mixed states. The pure states can be identified with the last item on the list. As a side note, this was asked by a different user on Physics Stack Exchange (the same question, about Schuller's course) and it was closed, even though it's physics and a pretty reasonable request. It might be useful to check there for the one answer that was able to be posted before it was closed. https://physics.stackexchange.com/questions/259583/good-texts-on-quantum-mechanics-to-accompany-this-online-course#comment579079_259583<|endoftext|> TITLE: $m^2+2017=n^3$ has no solutions QUESTION [6 upvotes]: Show that $m^2+2017=n^3$ has no solutions for positive integers $m,n$. I'm having trouble tackling this one, especially since $\mathbb{Z}[\sqrt{-2017}]$ isn't a UFD.
We can write the equation as $m^2+45^2=n^3+8$ or $m^2+17^2=n^3-12^3$, but I can't do much with either. REPLY [22 votes]: Your claim is false. $$81060^2 + 2017 = 1873^3$$<|endoftext|> TITLE: Closed form for $\sum_{n=1}^{\infty}\frac{(-1)^n}{\sqrt{n^2+a^2}}$ QUESTION [9 upvotes]: Does the convergent sum $$\sum_{n=1}^{\infty}\frac{(-1)^n}{\sqrt{n^2+a^2}}$$ possess a closed form? ($a \in \mathbb{R}$) A special case is known: for $a=0$ one recalls the well-known alternating harmonic series: $$\sum_{n=1}^{\infty}\frac{(-1)^n}{n}=-\ln 2$$ REPLY [3 votes]: As we are considering a function of $a$, it is always useful to build a Taylor series for it. Unfortunately, in this case it's possible only when $|a|<1$, which we are going to assume here. Using the binomial series and some identities, we can expand the radical as: $$\frac{1}{\sqrt{n^2+a^2}}=\frac{1}{n} \sum_{k=0}^\infty \frac{(-1)^k (2k)!}{k!^2} \left( \frac{a}{2n} \right)^{2k}$$ Now we need to find a closed form for the following series, which is well known: $$\sum_{n=1}^\infty \frac{(-1)^n}{n^{2k+1}}=-\left(1-\frac{1}{2^{2k}} \right) \zeta(2k+1)$$ We need to carefully separate the case with $k=0$ so we don't need to deal with divergences. Finally we have: $$\sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n^2+a^2}}=- \log 2 -\sum_{k=1}^\infty \frac{(-1)^k (2k)!}{k!^2}\left(1-\frac{1}{2^{2k}} \right) \zeta(2k+1) \left( \frac{a}{2} \right)^{2k}$$ Introducing a new function: $$g(y)=\sum_{k=1}^\infty \frac{(-1)^k (2k)!}{k!^2} \zeta(2k+1) y^{2k}$$ We can write: $$\sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n^2+a^2}}=- \log 2 -g \left( \frac{a}{2} \right)+g \left( \frac{a}{4} \right)$$ $$|a|<1$$ Using the integral form for the zeta function we can write: $$\zeta(2k+1)=\frac{1}{(2k)!} \int_0^\infty \frac{x^{2k}}{e^x-1}dx$$ Now after summation the function under the integral has a closed form in terms of the Bessel function: $$g(y)=\int_0^\infty \frac{J_0 (2 yx)-1}{e^x-1}dx$$ Which makes it possible to write the original series neatly as: $$\sum_{n=1}^\infty \frac{(-1)^n}{\sqrt{n^2+a^2}}=- \log 2 -\int_0^\infty \frac{J_0 (a x)-J_0 (a x/2)}{e^x-1}dx$$ What's more important, this formula works for $|a|>1$ as well, which can be justified by analytic continuation of the Taylor series.<|endoftext|> TITLE: Finding a special subsequence of any Cauchy sequence QUESTION [6 upvotes]: Let $(X,d)$ be a metric space and let $(x_n)$ be a Cauchy sequence in $X$. Let $(\epsilon_n)$ be a sequence of real numbers decreasing to $0$. Show that there is a subsequence $(x_{n_k})$ of $(x_n)$ such that: $$d(x_{n_j},x_{n_k})<\epsilon_{\min\{j,k\}}\:\:\: j,k=1,2,\dots$$ I'm not sure about my solution. Each time I try to find a subsequence, it just makes me more confused. Here is my solution. Since $(x_n)$ is Cauchy, for $\epsilon_1$ there is an integer $N_1\in\mathbb{N}$ such that for any $j,m\geq N_1$, we have $d(x_m,x_j)<\epsilon_1.$ Now define a subsequence $(x_{n_k})$ of $(x_n)$ such that: $x_{n_1}=x_{N_1},\: x_{n_2}=x_{N_1+1}$ and so on. Obviously, the subsequence $(x_{n_k})$ is Cauchy. Similarly, for $\epsilon_2$, there is an integer $N_2$ such that for any $m,j\geq N_2$, we have $d(x_{n_m},x_{n_j})<\epsilon_2.$ Without loss of generality, let $(x_{n})=(x_{n_k})$. Define the subsequence $(x_{n_k})$ of $(x_{n})$ such that $x_{n_1}=x_{N_2}, \: x_{n_2}=x_{N_2+1}$. Hence, $d(x_{n_1},x_{n_2})<\epsilon_2<\epsilon_1$ and also, $d(x_{n_2},x_{n_3})<\epsilon_2$.
Continuing this method $n'$ times, for $\epsilon_{n'}$, there is an integer $N_{n'}$ such that for any $m,j\geq N_{n'}$, we have $d(x_{m},x_{j})<\epsilon_{n'}$. Now define a subsequence $(x_{n_k})$ of $(x_n)$ such that: $x_{n_1}=x_{N_{n'}},\: x_{n_2}=x_{N_{n'}+1}$ and so on. Now for the subsequence $(x_{n_k})$ we have: $$d(x_{n_j},x_{n_k})<\epsilon_{n'}<\epsilon_{\min\{j,k\}}\:\:\: j,k=1,2,\dots,n' $$ REPLY [2 votes]: I think that your way of thinking is correct, but it may also help conceptually to think of these sequences as sets of points which you can manipulate all at once: We have our original Cauchy sequence $S = \{x_1, x_2, x_3,\ldots\}$ and a decreasing sequence $E$ of positive numbers $\epsilon_1 > \epsilon_2 > \epsilon_3 > \ldots$ which converges monotonically toward zero. Because the sequence $S$ is Cauchy, we know that for every $\epsilon_i \in E$, there exists an integer $N_i$ so that all of the terms in the tail of $S$ are within $\epsilon_i$ of each other (i.e. for all $x_a,x_b$ with $a,b>N_i$, we have $d(x_a,x_b)<\epsilon_i$.) The first trick is to notice that we can choose the $N_i$ so that they form an increasing sequence $N_1 < N_2 < N_3 < \ldots$. (Intuitively, this is because if we have some $N_i$, we may freely increase its value as much as we like without affecting the $\epsilon_i$ property.) Now let us define $S_i$ to refer to the tail of the sequence $S$ starting at term $N_i$: $$S_i \equiv \{x_n \in S : n > N_i\}\qquad\text{for }i=1,2,3,\ldots$$ The $S_i$ sets have two important properties: Because the $N_i$ are increasing, the $S_i$ are nested: $S_1 \supseteq S_2 \supseteq S_3 \supseteq \ldots$. Because $S$ is Cauchy, every $S_i$ is nonempty, and every pair of points in $S_i$ is within $\epsilon_i$ of each other. (In fact, every $S_i$ contains infinitely many points!) Now the construction is essentially finished: Pick any $y_1 \in S_1$ for the first term in the subsequence. You could pick $y_2 \in S_2$, but you must ensure that it comes after $y_1$ in the original sequence. Therefore, choose $y_2$ from $S_2$ excluding the finitely many points that came before $y_1$ in the original sequence.[1] Similarly, we can pick each successive term $y_{i+1}$ from the set $S_{i+1}$, excluding the finitely many terms that came before $y_i$ in the original sequence. It will always be possible to pick the $y_i$ because each $S_i$ contains infinitely many points ($S$ is Cauchy), and because we are only ever excluding finitely many of them in the $i$th step. [1] A technicality about this exclusion process: of course sequences may contain the same point more than once. If for the $i$th term in our subsequence, we pick the $j$th term $x_j$ of the original sequence, this exclusion process only ensures that the $(i+1)$th term in our subsequence comes later ($x_k$ for some $k>j$). Hence, we are not excluding points in $X$, only terms from the sequence $S$.<|endoftext|> TITLE: If $x+y=10^{200}$ then prove that 50 divides $x$ QUESTION [7 upvotes]: Let $x$ be a positive integer and let $y$ be another integer obtained by rearranging the digits of $x$. If $x+y=10^{200}$ then prove that $x$ is divisible by 50. My attempt: Since $y$ is a digit rearrangement of $x$, we have $x \equiv y \pmod{9}$; combining this with $x+y=10^{200}\equiv 1 \pmod{9}$ we get $x \equiv 5 \pmod{9}$ and $y \equiv 5 \pmod{9}$. Also, the possible last digits of $x$ and $y$ are $(0,0), (1,9), (2,8), (3,7), (4,6), (5,5), (6,4), (7,3), (8,2), (9,1)$.
For last digits $(0,0), (2,8), (4,6), (6,4), (8,2)$ divisibility by $2$ is ensured, but divisibility by $25$ and the general case are eluding me. Can someone help? Thanks in advance. REPLY [5 votes]: We can assume $x$ and $y$ are non-zero. So, with suitable initial $0$-padding, each has $200$ digits. If $x$ ends in $00$ we are finished. Suppose now that $x$ ends in $0$ but not $00$. Then $y$ also ends in $0$, and the next-to-last digits of $x$ and $y$ are $10$'s complements of each other, and non-zero. Each of $x$ and $y$ has $198$ digits that are $9$'s complements of each other. These come in pairs, since the digits are permuted. So the next-to-last digits of $x$ and $y$ are equal, and therefore each is $5$. And we cannot have the last digit non-zero, for in the rest of $x$ and $y$ the digits come in $9$'s complement pairs, and $199$ is odd.<|endoftext|> TITLE: Find $\lim_{n \to \infty} n \int_0^1 (\cos x - \sin x)^n dx$ QUESTION [13 upvotes]: Find: $$\lim_{n \to \infty} n \int_0^1 (\cos x - \sin x)^n dx$$ This is one of the problems I have to solve so that I can join college. I tried using integration by parts, I tried using notations, but nothing works. If someone could please help me I would deeply appreciate it. Thanks in advance! I know the answer to the limit is $1$, but I need help proving it. REPLY [2 votes]: The following common inequalities suffice here: $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ $1+x+\lfrac12 x^2 \le \exp(x) \le 1 + x + x^2$ for every real $x \in [0,\ln(2)]$. $1 - \lfrac12 x^2 \le \cos(x) \le 1 - \lfrac12 x^2 + \lfrac1{24} x^4$ for every real $x$. $x - \lfrac16 x^3 \le \sin(x) \le x$ for every real $x \ge 0$. They can be proven elementarily by recursively comparing derivatives. As $n \to \infty$:   Let $r = n^{2/3}$.   Given any $x \in [0,\lfrac1r]$:     $\cos(x) - \sin(x) \le 1 - x + \lfrac16 x^3 \le \exp(-x)$.     $\cos(x) - \sin(x) \ge 1 - x - \lfrac12 x^2 \ge \exp(-x-2x^2)$ since $x \to 0$.   Thus $(\cos(x)-\sin(x))^n \in \exp(-x) ^ n \cdot [\exp(-\frac{2}{r^2}),1]^n$.   Thus ${\displaystyle\int}_0^{\lfrac1r} n ( \cos(x) - \sin(x) )^n\ dx \in [\exp(-\frac{2n}{r^2}),1] \cdot {\displaystyle\int}_0^{\lfrac1r} n \exp(-nx)\ dx$.   Also ${\displaystyle\int}_0^{\lfrac1r} n \exp(-nx)\ dx = 1 - \exp(-\lfrac{n}{r}) \to 1$.   Given any $x \in [\lfrac1r,1]$:     $\cos(x) - \sin(x) \le 1 - x - x^2 ( \lfrac12 - \lfrac16 x - \lfrac1{24} x^2 ) \le 1 - \lfrac1r$.     $\cos(x) - \sin(x) \ge 1 - x - \lfrac12 x^2 \ge -\frac12$.     Thus $| n ( \cos(x) - \sin(x) )^n | \le n ( 1 - \lfrac1r )^n \to 0$.   Thus ${\displaystyle\int}_{\lfrac1r}^1 n ( \cos(x) - \sin(x) )^n\ dx \to 0$.   Therefore ${\displaystyle\int}_0^1 n ( \cos(x) - \sin(x) )^n\ dx \to 1$.<|endoftext|> TITLE: Closed form for $\sum_{n=1}^{\infty}\frac{1}{\sinh^2\!\pi n}$ conjectured QUESTION [13 upvotes]: By trial and error I have found numerically $$\sum_{n=1}^{\infty}\frac{1}{\sinh^2\!\pi n}=\frac{1}{6}-\frac{1}{2\pi}$$ How can this result be derived analytically?
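A quick numerical confirmation of the conjectured value (a minimal sketch of mine):

    import math

    s = sum(1 / math.sinh(math.pi * n)**2 for n in range(1, 30))
    print(s)                        # 0.00751172...
    print(1/6 - 1/(2*math.pi))      # 0.00751172...

The terms decay like $4e^{-2\pi n}$, so a handful of terms already fixes many digits.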
REPLY [9 votes]: Although when I came up with the sum above I couldn't find a proper way to prove it - I had only flawed ideas which arose from "magic" manipulations of some "magic" formula - now, working on some different problem, I finally came up with a proper solution. It is not my style to answer my own questions, but a friend of mine persuaded me to write it here anyway, so here it goes: This approach requires simple techniques from complex analysis - let us define a meromorphic function $f(z)$: $$f(z) = \frac{\cot\pi z}{\sinh^2\pi z}$$ This function has poles $z_k$ at $z=n$ and $z=ni$, where $n\in\mathbb{Z}$. Consider a square contour as in the picture in the original post $(m\in\mathbb{N})$. By the residue theorem: $$\oint _{\gamma} f(z)\, \mathrm{d}z=2\pi i\sum\mathrm{Res}_{z=z_k}{f(z)}\tag{1}$$ For the residues we have: $$\begin{align} & \mathrm{Res}_{z=n}{\frac{\cot\pi z}{\sinh^2\pi z}} = \frac{1}{\pi\sinh^2\pi n} \\ \\ & \mathrm{Res}_{z=ni}{\frac{\cot\pi z}{\sinh^2\pi z}} = \frac{1}{\pi\sinh^2\pi n} \\ \\ & \mathrm{Res}_{z=0}{\frac{\cot\pi z}{\sinh^2\pi z}} = -\frac{2}{3\pi} \end{align}$$ When $m\rightarrow\infty$ we have $$\frac{\cot\pi (x\pm (mi+\frac12))}{\sinh^2\pi (x\pm (mi+\frac12))}\rightarrow \frac{\mp i}{\cosh^2\pi x}$$ Since the integrals along the sides vanish as $m\rightarrow\infty$, we rewrite $(1)$ using the residues, taking the limit as $m\rightarrow\infty$: $$-2i\int_{-\infty}^\infty\frac{\mathrm{d}x}{\cosh^2\pi x}=2\pi i\left(-\frac{2}{3\pi}+\frac{4}{\pi}\sum_{n=1}^\infty\frac{1}{\sinh^2\pi n}\right)$$ Immediately, since $\int_{-\infty}^\infty\frac{\mathrm{d}x}{\cosh^2\pi x}=\frac{1}{\pi}\tanh\pi x\bigg{|}_{-\infty}^\infty=\frac{2}{\pi}$, after simple manipulations we get the desired result: $$\sum_{n=1}^{\infty}\frac{1}{\sinh^2\!\pi n}=\frac{1}{6}-\frac{1}{2\pi}$$ DECLARATION: I am neither the first nor the last to discover the exact value of the sum, and it is not in my competence to name it the way the Sophomore's dream has a name; however, I have decided to make an exception because of its breathtaking beauty, and after the tradition of "dreams" we shall refer to it as Nike's dream, after the Greek goddess of victory - Nike.<|endoftext|> TITLE: Intuition for Fredholm operators? QUESTION [17 upvotes]: A lot of the material I'm reading lately seems to mention Fredholm operators and the 'Fredholm alternative' and operators being 'Fredholm of index $0$'. Can someone give me a high level overview of what's the reason for caring whether an operator is Fredholm or not? What does it enable us to do with the operator? Is being Fredholm of index $0$ a good thing or a bad thing? Would we prefer an operator to be, say, Fredholm of index $2$ for example? REPLY [25 votes]: A Fredholm operator $T$ is an operator for which the solutions of the nonhomogeneous linear problem $Tx = y$ can be described using "finitely many pieces of data" just like in the finite dimensional case, even though the operator acts on a possibly infinite dimensional space. More explicitly, if $T$ is Fredholm then $\ker(T)$ is finite dimensional and so we can (at least theoretically) find a finite basis $v_1, \dots, v_n$ for $\ker T$. Since the cokernel is also finite dimensional, we can find finitely many linear functionals $\varphi^1, \dots, \varphi^k$ such that $y \in \operatorname{Im}(T)$ if and only if $\varphi^1(y) = \dots = \varphi^k(y) = 0$. Then: The equation $Tx = y$ has a solution if and only if $\varphi^1(y) = \dots = \varphi^k(y) = 0$.
If the equation $Tx = y$ has a solution, it has a finite-dimensional affine space of solutions given by $x_0 + \left< v_1, \dots, v_n \right>$ where $x_0$ is some arbitrary particular solution to the inhomogeneous problem (that is, $Tx_0 = y$). The index of $T$ is $n - k$. The important thing about the index is that unlike the quantities $k$ and $n$ alone, it is invariant under compact perturbations and it is a continuous map on the (open) set of Fredholm operators. In fact, the connected components are precisely in bijection with the index. The fact that $k$ and $n$ are not invariant along families but $n - k$ is can already be seen in the finite dimensional case. If $T \colon \mathbb{C}^n \rightarrow \mathbb{C}^n$ is a linear map, it is automatically Fredholm of index zero. When $\lambda$ runs over the complex numbers, the kernel of $T - \lambda I$ is usually trivial and $T - \lambda I$ is onto unless $\lambda$ hits an eigenvalue and then the dimension of the kernel jumps up (by the geometric multiplicity of $\lambda$) while the dimension of the image jumps down (by the same quantity, a consequence of the rank-nullity theorem) so the index stays zero. In the infinite dimensional case, we don't have the rank-nullity theorem and in general there is no relation between the dimensions of the kernel of $T$ and the image of $T$ (which is usually infinite dimensional). However, for Fredholm operators we have the next best thing - for a Fredholm operator $T \colon X \rightarrow Y$ of index $L$, the difference between the dimensions of the kernel and the cokernel always equals $L$, and if $X = Y$ and $T - \lambda I$ stays Fredholm for all $\lambda$ then whenever the dimension of the kernel of $T - \lambda I$ jumps up as $\lambda$ hits an eigenvalue, the "dimension of the image jumps down" by a corresponding amount, keeping the difference at $L$. If $L = 0$, the jumps balance out. If $L > 0$, $T - \lambda I$ can't be injective. If $T - \lambda I$ is surjective, we know that the dimension of the kernel is precisely $L$. In general, the dimension of the kernel can jump from $L$ to $L + r$ (with $r > 0$) but then the "dimension of the image will jump down" by $r$ to compensate (more precisely, the dimension of the cokernel will also jump up from $0$ to $r$). If $L < 0$, $T - \lambda I$ can't be surjective and a similar analysis follows. For me, the most important examples of Fredholm operators come from elliptic operators. In that context, the fact that an elliptic operator is Fredholm is tied closely with elliptic regularity. Consider for example a second order elliptic operator $L \colon H^2(\Omega) \cap H^1_0(\Omega) \rightarrow L^2(\Omega)$ where $\Omega \subseteq \mathbb{R}^n$ is nice enough. In that context, the fact that $L$ is Fredholm is pretty much equivalent to the estimate $$ ||u||_{H^2(\Omega)} \leq C \left( ||Lu||_{L^2(\Omega)} + ||u||_{L^2(\Omega)} \right) $$ which lies at the basis of elliptic regularity theory that guarantees that the solutions to the equation $Lu = v$ are as regular as $v$ and $L$ allow. The inequality above implies in a purely formal way (without knowing anything about PDEs, only a little about Sobolev spaces) that $L$ is semi-Fredholm (has finite dimensional kernel and closed image) and if you know that $L$ is (semi-)Fredholm, you can deduce the inequality above.
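A standard concrete pair of examples to keep in mind: on $\ell^2$, the unilateral shift $S(x_1,x_2,\dots) = (0,x_1,x_2,\dots)$ is injective with one-dimensional cokernel, so it is Fredholm of index $-1$, while the backward shift $S^*(x_1,x_2,\dots) = (x_2,x_3,\dots)$ is surjective with one-dimensional kernel, hence Fredholm of index $+1$; composing them in either order gives an operator of index $0$, illustrating that the index is additive under composition.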
Finally, another thing worth noting is that while the dimensions of the kernel and cokernel of a Fredholm operator are often impossible to compute explicitly, there are many methods to compute the index, which is a much coarser invariant. For example, determining the multiplicity of a specific eigenvalue for the Laplacian with Dirichlet boundary conditions on a domain in $\mathbb{R}^n$ with $n \geq 2$ is usually impossible, but since the Laplacian is self-adjoint, the index is zero. The index does give you some information in some cases. For example, a Fredholm operator of index $2$ must have kernel of dimension at least $2$, so if you want many solutions to $Lu = 0$, you might prefer to have a positive index. You might not be able to determine the dimension of the kernel but if you can determine the index, you will at least have a lower bound.<|endoftext|> TITLE: For which values of $n$ the sum $\sum_{k=1}^n k^2$ is a perfect square? QUESTION [7 upvotes]: Question. For which values of $n$ is the sum $\sum_{k=1}^n k^2$ a perfect square? Clearly, $n=24$ is one such value, and I was wondering whether this is the only value for which the above holds. The question is equivalent to finding the positive integer solutions of the Diophantine equation $$ n(n+1)(2n+1)=6N^2. $$ The first thing to observe is that the numbers $n,n+1$ and $2n+1$ are pairwise relatively prime, and once we divide one of them by 2 and one of them by 3, the resulting three numbers should all be perfect squares. This leads to a combination of Pell's equations. It seems to me that there should be a simpler way to solve it. REPLY [2 votes]: Partial answer - the easy cases eliminated, the hard ones isolated. First, since $n,n+1,2n+1$ are pairwise relatively prime, we can consider the different factorings $6=abc$ with $c$ odd, and we then need $n=ax^2,n+1=by^2,2n+1=cz^2$ for some integers $x,y,z$ with $xyz=N$. This gives us the options for $(a,b,c)$: $$(1,6,1)\\(2,3,1)\\(3,2,1)\\(6,1,1)\\(1,2,3)\\(2,1,3)$$ If $(a,b,c)=(1,6,1)$ you need $6y^2-x^2=1$, which is not possible modulo $3$. If $(a,b,c)=(2,3,1)$ then you need $z^2-4x^2=1$, so $(z-2x)(z+2x)=1$, which is not possible unless $x=0,z=1$, yielding $N=0$. If $(a,b,c)=(3,2,1)$ then you need $z^2-4y^2=-1$, which is not possible modulo $4$. If $(a,b,c)=(2,1,3)$ then you need $3z^2-4x^2=1$, which is not possible modulo $3$. If $(a,b,c)=(1,2,3)$ then you need $2y^2-x^2=1$ and $3z^2-2x^2=1$; these cannot be excluded by congruences, since $x=y=z=1$ works and gives the trivial solution $n=1$, and I do not see how to rule out larger solutions in this case. So besides that, the hard case is $(a,b,c)=(6,1,1)$. The case $n=24$ actually gives us $(a,b,c)=(6,1,1)$, so you'd need to prove that there is only one solution $(x,y,z)$ to: $$z^2-12x^2=1\\z^2-2y^2=-1$$ The known solution corresponding to $n=24$ is $(x,y,z)=(2,5,7)$. I'm stuck on how to prove there are no other solutions, however.<|endoftext|> TITLE: Is it possible to endow $\text{GL}_2(\Bbb R)$ with a ring structure? QUESTION [7 upvotes]: My question is the following: Is it possible to find a binary operation $*$, seen as an addition, such that $(\text{GL}_2(\Bbb R),*,\cdot)$ has a ring structure (not necessarily with a unit)? [We are given the multiplication of the ring, and we are searching for the addition.] I remember that these questions are pretty similar: (1), (2), but in that case we are asked whether an abelian group $(A,+)$ admits a ring structure.
This question asked whether the abelian group $(\text{GL}_1(\Bbb Q),\cdot)$ admitted a ring structure $(\text{GL}_1(\Bbb Q),\cdot, \star)$, which is a bit different from my question. I tried to think about what the characteristic of such a ring could be (in the case it has a unit). I also tried to find what $(\text{GL}_2(\Bbb R),\cdot)$ was isomorphic to, in order to eventually transport the structure of another ring... without any success. Thank you for your help! REPLY [15 votes]: No nontrivial group $(G,\cdot)$ can be extended to a ring structure $(G,+,\cdot)$, because $G$ has no zero element with respect to the group operation $\cdot$. A zero element, also called an absorbing element in a monoid, is an element $0\in G$ for which $g\cdot 0=0=0\cdot g$ for all $g\in G$. Cancelling $0$ from both sides of $0=0\cdot g$ would imply every $g\in G$ is the identity element, i.e. $G$ is trivial. It's also not possible if you want $\mathrm{GL}_2(\mathbb{R})$ to be the monoid of nonzero elements in a ring. For if it was, that ring would be an associative four-dimensional division algebra (as every nonzero element has an inverse), which by the Frobenius theorem entails it must be the quaternions $\mathbb{H}$. But it cannot be the quaternions since $\mathbb{H}^\times\not\cong\mathrm{GL}_2(\mathbb{R})$, because $\mathbb{H}^\times$ only has two solutions of $x^2=1$ whereas $\mathrm{GL}_2(\mathbb{R})$ has at least four, $\mathrm{diag}(\pm1,\pm1)$. (If topology were relevant we could proceed differently. Every quaternion has a polar form, a positive real times a unit quaternion, which entails $\mathbb{H}^\times\simeq \mathbb{R}\times\mathbb{S}^3$. On the other hand, $\mathrm{GL}_2(\mathbb{R})$ has two connected components corresponding to positive and negative determinant. Moreover, $\mathrm{GL}_2^+(\mathbb{R})$ may be decomposed as a direct product of positive scalar multiples of $I_2$ times $\mathrm{SL}_2(\mathbb{R})$, and in turn $\mathrm{SL}_2(\mathbb{R})$ admits an Iwasawa decomposition, from which we may finally conclude that $\mathrm{GL}_2(\mathbb{R})\simeq\mathbb{R}^3\times \mathbb{S}^1\times\mathbb{S}^0$. Therefore $\mathbb{H}^\times$ and $\mathrm{GL}_2(\mathbb{R})$ cannot be homeomorphic, since they have different homotopy groups $\pi_0,\pi_1,\pi_3$.)<|endoftext|> TITLE: How to prove by induction that $3^{3n}+1$ is divisible by $3^n+1$ for $(n=1,2,...)$ QUESTION [8 upvotes]: So this is what I've tried: Checked the statement for $n=1$ - it's valid. Assume that $3^{3n}+1=k(3^n+1)$ where $k$ is a whole number (for some $n$). Proving for $n+1$: $$3^{3n+3}+1=3^3\cdot 3^{3n}+1=3^3(3^{3n}+1)-26=3^3k(3^n+1)-26=3^3k(3^n+1)-3^3+1=3^3[k(3^n+1)-1]+1$$ and I'm stuck. Any help please? REPLY [4 votes]: It's true for $n=1$. Assume it holds for $n$, i.e.: $3^{3n} + 1 = k(3^n +1)$, then consider $$3^{3(n+1)} +1 = 27 \cdot 3^{3n} + 1 = 27 (3^{3n} +1) - 26 = 27k(3^n +1) - 26$$ Let's get it into a more amenable form $$3^{3(n+1)} +1 = 9k\cdot 3^{n+1} + 27k - 26 = 9k(3^{n+1}+1) + 18k - 26 \tag{1}$$ So we want to show that $3^{n+1} + 1$ always divides $18k - 26$ where $$k = \frac{3^{3n} + 1}{3^n + 1} = 3^{2n} - 3^n + 1$$ Equivalently we want to show that $3^{n+1} + 1$ divides $18\cdot 3^{2n} - 18 \cdot 3^n - 8 $. But: $$18\cdot 3^{2n} - 18 \cdot 3^n - 8 = 2(3^{n+1} - 4)(3^{n+1} + 1) \quad \quad (\star)$$ It is then clear that $3^{n+1} + 1$ divides $18k - 26$ since $2(3^{n+1} - 4)$ is an integer.
And hence, since $3^{3(n+1)} + 1$ is the sum of two terms (see $(1)$), each divisible by $3^{n+1} + 1$, where the divisibility holds due to a relation we exploited from the inductive hypothesis, we are done inductively. If $(\star)$ seems a bit magical, simply look at $f(x) = 18x^2 - 18x - 8 = 2(3x-4)(3x+1)$ and substitute in $x = 3^n$.<|endoftext|> TITLE: Not understanding derivative of a matrix-matrix product. QUESTION [36 upvotes]: I am trying to figure out the derivative of a matrix-matrix multiplication, but to no avail. This document seems to show me the answer, but I am having a hard time parsing it and understanding it. Here is my problem: We have $\mathbf{D} \in \Re^{m \times n}$, $\mathbf{W} \in \Re^{m \times q}$, and $\mathbf{X} \in \Re^{q \times n}$. Furthermore, $\mathbf{D} = \mathbf{W}\mathbf{X}$. (NOT an element-wise multiplication - a normal matrix-matrix multiply.) I am trying to derive the derivative of $\mathbf{D}$, w.r.t $\mathbf{W}$, and the derivative of $\mathbf{D}$, w.r.t $\mathbf{X}$. The class notes this is taken from seem to indicate that $$ \frac{\delta \mathbf{D}}{\delta \mathbf{W}} = \mathbf{X}^{T} \text{ and that } \frac{\delta \mathbf{D}}{\delta \mathbf{X}} = \mathbf{W}^{T}, $$ but I am floored as to how this was derived. Furthermore, in taking the derivatives, we are asking ourselves how every element in $\mathbf{D}$ changes with perturbations by every element in, say, $\mathbf{X}$ - so wouldn't the resulting combinations blow up to be a lot more than what $\mathbf{W}^{T}$ has? I can't even see how the dimensionality is right here. EDIT: I'd like to add the context of this question. It's coming from here, and here is my marked screen-shot of my problem. How are they deriving those terms? (Note: I understand the chain-rule aspect, and I am not wondering about that. I am asking about the simpler intermediate step.) Thanks. REPLY [10 votes]: Like most articles on Machine Learning / Neural Networks, the linked document is an awful mixture of code snippets and poor mathematical notation. If you read the comments preceding the code snippet, you'll discover that dX does not refer to an increment or differential of $X,$ or to the matrix-by-matrix derivative $\frac{\partial D}{\partial X}.\;$ Instead it is supposed to represent $\frac{\partial \phi}{\partial X}$, i.e. the gradient of an unspecified objective function $\Big({\rm i.e.}\;\phi(D)\Big)$ with respect to one of the factors of the matrix argument: $\;D=WX$. Likewise, dD does not refer to an increment (or differential) of D but to the gradient $\frac{\partial \phi}{\partial D}$. Here is a short derivation of the mathematical content of the code snippet.
$$\eqalign{ D &= WX \\ dD &= dW\,X + W\,dX \quad&\big({\rm differential\,of\,}D\big) \\ \frac{\partial\phi}{\partial D} &= G \quad&\big({\rm gradient\,wrt\,}D\big) \\ d\phi &= G:dD \quad&\big({\rm differential\,of\,}\phi\big) \\ &= G:dW\,X \;+ G:W\,dX \\ &= GX^T\!:dW + W^TG:dX \\ \frac{\partial\phi}{\partial W} &= GX^T \quad&\big({\rm gradient\,wrt\,}W\big) \\ \frac{\partial\phi}{\partial X} &= W^TG \quad&\big({\rm gradient\,wrt\,}X\big) \\ }$$ Unfortunately, the author decided to use the following variable names in the code: dD   for $\;\frac{\partial\phi}{\partial D}$ dX   for $\;\frac{\partial\phi}{\partial X}$ dW   for $\;\frac{\partial\phi}{\partial W}$ With this in mind, it is possible to make sense of the code snippet $$\eqalign{ {\bf dW} &= {\bf dD}\cdot{\bf X}^T \\ {\bf dX} &= {\bf W}^T\cdot{\bf dD} \\ }$$ but the notation is extremely confusing for anyone who is mathematically inclined. (NB: This answer simply reiterates points made in GeorgSaliba's excellent post.)<|endoftext|> TITLE: A limit in a Feynman "proof" about Fermat's Theorem. QUESTION [12 upvotes]: As perhaps some of you already know, Richard P. Feynman, the famous physicist, tried a non-orthodox (in his usual way, I suppose) proof of Fermat's Last Theorem. He tried a probabilistic "proof" that no rigorous mathematician in the world would have accepted, but that shows his enormous creativity and insight. It is very well explained here: http://www.lbatalha.com/blog/feynman-on-fermats-last-theorem I have followed the derivation of Mr. Luis Batalha, but at some point he challenges the reader to prove that the value: $$c_{n} = \int_{1}^{\infty}\int_{1}^{\infty}(u^{n} + v^{n})^{-1 + \frac{1}{n}} \, du\,dv $$ when $n \to \infty $ is approximately $1/n$ ($c_{n} \approx 1/n$). Well, Mr. Batalha says that for big $n$ it tends to $1/n$, and I think we can say for $n \to \infty$. I'm afraid I have tried to solve the limit but I am clueless. Thank you and congratulations to Mr. Batalha for such an interesting post. EDIT: Typo in the definition of $c_{n}$: the lower limit of the integral is 1, not 0 as it was before. I'm afraid the typo is also present in the link. EDIT 2: When I asked this question a few days ago, I couldn't imagine so many rich and fruitful comments and answers. Thank you to all. I am going to leave the question open for a few days more if someone wants to continue adding solutions. I think all the comments have been superb. I think I am going to choose the answer by @tired. Thank you all. REPLY [5 votes]: I can give a heuristic derivation of the results found numerically by @RaymondManzoni and @Arentino. I will try to make things more rigorous in the next days. First of all, we observe the invariance of the integral under the transformation $x \leftrightarrow y$ (I relabel $u\rightarrow x, v\rightarrow y$).
Therefore we can write by symmetry $$ I_n=\color{blue}{2}\int_1^{\infty}dx\int_x^{\infty}dy(x^n+y^n)^{-1+1/n} $$ now setting $(y/x)^n=q$ we obtain $$ I_n=\frac{\color{blue}{2}}{n}\int_1^{\infty}dx\frac{1}{x^{n-2}}\int_1^{\infty}dq(1+q)^{-1+1/n}q^{-1+1/n} $$ the integral over $x$ is trivial and yields $$ I_n=\frac{\color{blue}{2}}{n}\frac{1}{n-3}\int_1^{\infty}dq(1+q)^{-1+1/n}q^{-1+1/n} $$ the trick now is to recognize that the $n$-dependence of the remaining integral is very weak if $n$ gets big (to be more precise it decreases in powers of $1/n$) so to leading order we may just ignore it $$ I_n\sim\frac{\color{blue}{2}}{n}\frac{1}{n-3}\int_1^{\infty}dq(1+q)^{-1}q^{-1}+\mathcal{O}\left(\frac{1}{n^3}\right) $$ and therefore $$ I_n\sim\frac{\color{blue}{2}}{n}\frac{1}{n-3}\log(2)+\mathcal{O}\left(\frac{1}{n^3}\right)\sim\frac{\color{blue}{2}\log(2)}{n^2}+\mathcal{O}\left(\frac{1}{n^3}\right) $$ in agreement with numerical evaluations, and FEYNMAN SEEMS TO BE WRONG ;) Remark: By pushing the expansions one step further, I can also confirm Raymond's calculations up to $1/n^3$, so I'm sure now that my method is correct<|endoftext|> TITLE: Sum of roots rational but product irrational QUESTION [16 upvotes]: Suppose that $x_1,x_2,x_3,x_4$ are the real roots of a polynomial with integer coefficients of degree $4$, and $x_1+x_2$ is rational while $x_1x_2$ is irrational. Is it necessary that $x_1+x_2=x_3+x_4$? For example, the polynomial $x^4-8x^3+18x^2-8x-7$ has roots $$x_1=1-\sqrt{2},x_2=3+\sqrt{2},x_3=1+\sqrt{2},x_4=3-\sqrt{2}.$$ It holds that $x_1+x_2$ is rational while $x_1x_2$ is irrational, and we have $x_1+x_2=x_3+x_4$. REPLY [5 votes]: Since $x_1x_2$ is irrational, there is an automorphism $\sigma$ of $\overline {\Bbb Q}$ that changes $x_1x_2$ into something else. Since $\sigma$ acts as a permutation on the roots, we must have $\sigma(x_1)=x_i$ and $\sigma(x_2) = x_j$ where $i,j \in \{1;2;3;4\}$ and $i \neq j$, and importantly, $x_1x_2 \neq x_ix_j$. Since $x_1+x_2$ is rational it is fixed by $\sigma$ and so $x_1+x_2 = x_i+x_j$. If say $x_i = x_1$, then from this we get $x_j=x_2$, and then $x_ix_j = x_1x_2$ which is impossible. And so we must have $\{i;j\} = \{3;4\}$, and so, $x_1+x_2 = x_3+x_4$. Another way to tell this story is that we have shown that if $x_1+x_2$ is rational and $x_1+x_2-x_3-x_4 \neq 0$ then $x_1x_2$ is rational. In fact, let $x_1,x_2,x_3,x_4$ be indeterminates and consider the Galois extension $K = \Bbb Q(x_1,x_2,x_3,x_4)^{S_4} \subset M = \Bbb Q(x_1,x_2,x_3,x_4)$. Let $L = K(x_1+x_2)$. By the fundamental theorem of Galois theory, $L$ is the subfield of $M$ that is fixed by a certain subgroup $H$ of $S_4$. This subgroup $H$ is the set of permutations of $\{1;2;3;4\}$ that fixes the unordered pair $\{1;2\}$ (because it only has to fix $x_1+x_2$ in addition to the elementary symmetric polynomials), so $H$ is the subgroup $\{id ; (12) ; (34) ; (12)(34) \}$. Since $x_1x_2$ is also fixed by $H$ this means that $x_1x_2 \in K(x_1+x_2)$: you can express $x_1x_2$ as a rational fraction in terms of $x_1+x_2$ and the elementary symmetric polynomials (i.e. the rational coefficients of your polynomial). Then, what we have proved says that the denominator of that fraction has to be a power of $(x_1+x_2-x_3-x_4)$. Indeed, someone can just waltz in and trivialize this problem by saying that $(x_1+x_2-x_3-x_4)x_1x_2 = (x_1+x_2)(x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4) - (x_1x_2x_3+x_1x_2x_4+x_1x_3x_4+x_2x_3x_4) - (x_1+x_2)(x_1+x_2)(x_3+x_4)$.
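As a quick sanity check on this identity (a small Python sketch of my own, assuming SymPy is available; the variable names are mine), one can expand both sides symbolically and confirm that the difference vanishes:

import itertools
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
xs = [x1, x2, x3, x4]
# elementary symmetric polynomials e2 and e3 in the four roots
e2 = sum(a*b for a, b in itertools.combinations(xs, 2))
e3 = sum(a*b*c for a, b, c in itertools.combinations(xs, 3))
s, t = x1 + x2, x3 + x4
lhs = (s - t)*x1*x2
rhs = s*e2 - e3 - s**2*t
print(sp.expand(lhs - rhs))  # prints 0, so the identity holds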
So if $x_1+x_2 \neq x_3+x_4$ and $x_1+x_2$ is rational, this gives you a formula proving that $x_1x_2$ is rational too.<|endoftext|> TITLE: What is the probability that $7$ cards are chosen and no suit is missing? QUESTION [6 upvotes]: Cards are drawn one by one from a regular deck ($13$ cards for each of the $4$ suits). If $7$ cards are drawn, what is the probability that no suit will be missing? Ok, so I tried the approach where I choose $1$ suit out of $4$, and then I don't know what to do next. I don't know how I am supposed to arrange the cards in such a random manner, and I found the total, which is obvious: $52$ choose $7$. REPLY [4 votes]: There are $4$ conditions, one for each suit that needs to be included. The number of (unordered) draws that violate $k$ particular conditions is $\binom{13(4-k)}7$, so by inclusion-exclusion the desired probability is \begin{align} \binom{52}7^{-1}\sum_{k=0}^3(-1)^k\binom4k\binom{13(4-k)}7 &= \binom{52}7^{-1}\left(\binom{52}7-4\binom{39}7+6\binom{26}7-4\binom{13}7\right) \\ &= \frac{63713}{111860} \\ &\approx57\%\;. \end{align}<|endoftext|> TITLE: Is there a topology such that $(\Bbb R, +, \mathcal T)$ is a compact Hausdorff topological group? QUESTION [13 upvotes]: I already know that it is impossible for $(\Bbb Q, +, \mathcal T)$ to be a compact Hausdorff topological group (notice that the trivial topology does not work because it is not Hausdorff). Indeed, this follows from Baire's theorem: for if $(\Bbb Q, +, \mathcal T)$ were a compact Hausdorff topological group, then $\Bbb Q$ would be the union of the countable collection of closed sets $\{r\}$ (with $r \in \Bbb Q$). As we have a locally compact Hausdorff space, Baire's theorem tells us that at least one of the closed sets has non-empty interior, i.e. some $\{r\}$ is open. Since we have a topological group, it follows that $(\Bbb Q, +, \mathcal T)$ is discrete and hence not compact. But what about $(\Bbb R, +, \mathcal T)$? My first idea was to use the group (and even vector space) isomorphism $\Bbb R \cong \Bbb Q^{(\Bbb N)}$. Transporting the topology $\mathcal T$ on $\Bbb Q^{(\Bbb N)}$ preserves compactness, and we could try to use the projection $\Bbb Q^{(\Bbb N)} \to \Bbb Q$. But I was not sure what to do then. Any comment would be appreciated! REPLY [16 votes]: A good way to think about this is in terms of Pontryagin duality. Since we only care about the abelian group structure of $\mathbb{R}$, let's first get a nice characterization of this structure. As an abelian group, $\mathbb{R}$ is the unique $\mathbb{Q}$-vector space of its cardinality (up to isomorphism). An abelian group $A$ is a $\mathbb{Q}$-vector space iff for each nonzero $n\in \mathbb{Z}$, the multiplication by $n$ map $n:A\to A$ is an isomorphism. Now the neat thing about this is that this condition is self-dual under Pontryagin duality. If $A$ is a locally compact abelian group, then the dual of the map $n:A\to A$ is just the map $n:\hat{A}\to\hat{A}$ on the dual group. So this says that a locally compact abelian group is a $\mathbb{Q}$-vector space iff its dual is a $\mathbb{Q}$-vector space. In particular, let us use this to classify the compact abelian groups which are isomorphic (as groups) to $\mathbb{R}$. These are just the Pontryagin duals $\hat{V}$ of all $\mathbb{Q}$-vector spaces $V$ (with the discrete topology) for which $\hat{V}$ has cardinality $2^{\aleph_0}$. It is not hard to show that $\hat{\mathbb{Q}}$ has cardinality $2^{\aleph_0}$.
If $V$ is a $\kappa$-dimensional $\mathbb{Q}$-vector space then $\hat{V}$ is a product of $\kappa$ copies of $\hat{\mathbb{Q}}$, which has cardinality $2^{\aleph_0\cdot\kappa}$. So to sum up, there are indeed compact group topologies on $\mathbb{R}$. Up to continuous isomorphism, there is one such topology for each cardinal $\kappa$ such that $2^{\aleph_0\cdot\kappa}=2^{\aleph_0}$ (in particular, this includes all $\kappa$ such that $0<\kappa\leq\aleph_0$). The Pontryagin dual of this compact group is a $\mathbb{Q}$-vector space of dimension $\kappa$. The case $\kappa=1$ gives $\hat{\mathbb{Q}}$, which is a solenoid. Explicitly, $\hat{\mathbb{Q}}$ is the inverse limit of the sequence $\dots\to S^1\stackrel{4}{\to}S^1\stackrel{3}{\to}S^1\stackrel{2}{\to}S^1$, since $\mathbb{Q}$ is the direct limit of the sequence $\mathbb{Z}\stackrel{2}{\to}\mathbb{Z}\stackrel{3}\to\mathbb{Z}\stackrel{4}{\to}\mathbb{Z}\to\dots$. For general $\kappa$, you just have a product of $\kappa$ copies of $\hat{\mathbb{Q}}$.<|endoftext|> TITLE: What Stochastic Calculi Other Than Ito And Stratonovich Exist? QUESTION [9 upvotes]: When learning about stochastic calculus, you typically encounter Ito and Stratonovich calculi, usually in that order. There are many differences between the two (Ito processes have better martingale and Markov properties, while Stratonovich processes obey the chain rule from ordinary calculus), but at the fundamental level, these differences stem from how the integrals of each calculus are defined: The Ito calculus is just integration using the forward Euler scheme: $$dX_t=a(t,X_t)dt + b(t,X_t)dW_t \Rightarrow$$ $$ X_t-X_0=\lim_{\Delta t\to 0}\Big(\sum_{n} a(t_n,X_{t_n}) (t_{n+1}-t_n) + \sum_{n} b(t_n,X_{t_n}) (W_{t_{n+1}}-W_{t_n}) \Big)$$ The Stratonovich calculus is just integration using the Trapezoidal rule: $$dX_t=a(t,X_t)dt + b(t,X_t)\circ dW_t \Rightarrow$$ $$ X_t-X_0=\lim_{\Delta t\to 0}\Big(\sum_{n} \frac{a(t_{n+1},X_{t_{n+1}})+a(t_{n},X_{t_{n}})}{2} (t_{n+1}-t_n) + \sum_{n} \frac{b(t_{n+1},X_{t_{n+1}})+ b(t_n,X_{t_n})}{2} (W_{t_{n+1}}-W_{t_n}) \Big)$$ (The above are just rough sketches, especially the sum bounds. I denote Brownian motion by $W_t$, and $\Delta t=t_{n+1}-t_n$ is assumed constant above even though the actual time mesh is unimportant) So...what happens if I choose another integration method, like Simpson's rule? Runge-Kutta? (I remember from Kloeden and Platen that R-K is not possible for some reason) Backward Euler? Et cetera? Is it even possible to do so? Will I end up with something reducible to Ito or Stratonovich, does it lead to "garbage" calculi (i.e. nothing of interest), or is there some other useful calculus out there? REPLY [9 votes]: Actually, the answer to this lies in a considerably more advanced topic called rough path theory (beware: PDF). A rough path is a way of "enhancing" an $\alpha$-Hölder continuous path with some extra information. A rough path is an ordered pair, $\textbf{X}=(X, \mathbb{X})$, where $X\colon [0,T]\to V$ with $V$ some Banach space (typically $\Bbb{R}$), together with a second-order process $\Bbb{X}\colon [0,T]^2\to V\otimes V$. The pair must satisfy $\Bbb{X}_{s,t}-\Bbb{X}_{s,u}-\Bbb{X}_{u,t}=X_{s,u}\otimes X_{u,t}$. The second-order process defines the following integral: $$\int_s^t X_{s,r}\otimes dX_r=\colon\Bbb{X}_{s,t}$$ Rough paths can be thought of as a generalization of Ito and Stratonovich calculus. We can have the Ito rough path, $(B,\Bbb{B}^{Ito})$, and the Stratonovich rough path, $(B,\Bbb{B}^{Strat})$.
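To see the two enhancements concretely, here is a small numerical sketch of my own (assuming NumPy is available; the variable names are mine): approximating $\int_0^1 B\,dB$ with left-point sums gives the Ito value, the trapezoidal rule gives the Stratonovich value, and the gap converges to $\tfrac12$, matching the $\frac12(t-s)$ correction term mentioned below.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                                  # grid points on [0, 1]
dB = rng.normal(0.0, np.sqrt(1.0 / n), size=n) # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))     # Brownian path at the grid points

ito = np.sum(B[:-1] * dB)                      # left-point (Ito) Riemann sum
strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)    # trapezoidal (Stratonovich) sum
print(ito, strat, strat - ito)                 # difference is ~0.5 = (t - s)/2 for t = 1, s = 0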
So your question boils down to, "how many different types of Brownian motion rough paths are there?". And the answer is given in the PDF I linked, in example 4.13 (page 59). If you have the process $B$, which is $\alpha$-Hölder continuous, with an enhancement, $\Bbb{B}$, then you can always add on another $2\alpha$-Hölder continuous function. Meaning you can invent any stochastic calculi you want just by adding functions. For example, Stratonovich calculus is just Ito calculus with an added term: $\Bbb{B}^{Strat}_{s,t}=\Bbb{B}^{Ito}_{s,t}+\frac12(t-s)I$. I believe but cannot find proof that ALL such Brownian rough paths are functions added onto the Ito rough path. EDIT: I asked my adviser, and it is indeed true that all Brownian rough paths are increments of functions added onto the Ito rough path; this is not so bad to prove. Essentially, just take the difference between $\Bbb{B}^{Ito}$ and your rough path; the triple difference should be $0$, which as a theorem implies the difference is a difference of two variables, i.e. the increment of a function. So this is indeed an IFF.<|endoftext|> TITLE: Find the common divisors of $a_{1986}$ and $a_{6891}$ QUESTION [5 upvotes]: Let $(a_n)_{n \in \mathbb{N}}$ be the sequence of integers defined recursively by $a_0 = 0$, $a_1 = 1, a_{n+2} = 4a_{n+1}+a_{n}$ for $n \geq 0$. Find the common divisors of $a_{1986}$ and $a_{6891}$. I think it is true that $\gcd(a_m,a_n) = a_{\gcd(m,n)}$, but I am not sure how to prove it. From this the answer follows since $\gcd(1986,6891) = 3$ and so $\gcd(a_{1986},a_{6891}) = a_3 = 17$. REPLY [3 votes]: Your $\,a_i\,$ satisfy the same addition law as the Fibonacci numbers (with the same easy proof as in this answer using matrix multiplication). Therefore the short proof I gave in this answer shows that this sequence too is a strong divisibility sequence, i.e. $\, (a_m,a_n) = a_{\large (m,n)}\,$ which immediately yields the sought result. Remark $\ $ Responding to comments, below are further details. As in the first link we have $$ \begin{bmatrix}a_2 &a_1\\ a_1 & a_0\end{bmatrix} = \begin{bmatrix}4 &1\\ 1 & 0\end{bmatrix},\quad \begin{bmatrix} a_{n+2} &\!\!\! a_{n+1}\\ a_{n+1} & \!\!\!a_n\end{bmatrix} = \begin{bmatrix} a_{n+1} &\!\! a_{n}\\ a_{n} & \!\!\!\!a_{n-1}\end{bmatrix} \begin{bmatrix}4 &1\\ 1 & 0\end{bmatrix}$$ Thus we infer by induction $$ A_n := \begin{bmatrix} a_{n+1} &\!\! a_{n}\\ a_{n} &\!\!\! a_{n-1}\end{bmatrix} = \begin{bmatrix}4 &1\\ 1 & 0\end{bmatrix}^n\! =\, A_1^n $$ Therefore we deduce that $\,A_{m+n} = A_1^{m+n} = A_1^m A_1^n = A_m A_n,\ $ i.e. $$ \begin{align} \begin{bmatrix} a_{m+n+1} &\!\! a_{m+n}\\ a_{m+n} &\!\!\!\! a_{m+n-1}\end{bmatrix} &= \begin{bmatrix} a_{m+1} &\!\! a_{m}\\ \!a_{m} & \!\!\!\!a_{m-1}\end{bmatrix} \begin{bmatrix} a_{n+1} &\!\! a_{n}\\ a_{n} & \!\!\!\!a_{n-1}\end{bmatrix}\\[.5em] &= \begin{bmatrix}a_{m+1}a_{n+1}+a_m a_n &\! a_{m+1}a_n+a_m a_{n-1}\\ a_m a_{n+1}+a_{m-1}a_n &\! a_m a_n + a_{m-1} a_{n-1} \end{bmatrix}\\ \end{align} $$ This yields the addition law $\ a_{m+n} =\, a_{m+1} a_n +a_m a_{n-1}.\ $ For example $$ \begin{align} \begin{bmatrix}a_8 & a_7\\ a_7 & a_6\end{bmatrix} &= \begin{bmatrix}a_4 & a_3\\ a_3 & a_2\end{bmatrix} \begin{bmatrix}a_5 & a_4\\ a_4 & a_3\end{bmatrix}\\[.4em] &= \begin{bmatrix}72 & \!17\\ 17 &\! 4\end{bmatrix} \begin{bmatrix}305 &\!\!\!\!72\\ 72 &\!\!\!\!\! 17\end{bmatrix} = \begin{bmatrix}23184 &\!\!\! 5473\\ 5473 &\!\!\!
1292\end{bmatrix} \end{align} $$ Regarding $\,f_n = \dfrac{x^n-y^n}{x-y},\,$ which satisfies $\,f_{n+2} = (x\!+\!y) f_{n+1}-xy\, f_n,\,$ a similar proof as above shows that it satisfies the addition law $\, f_{m+n} = f_{m+1} f_n - xy\, f_m f_{n-1},\ $ i.e. $$ \dfrac{x^{m+n}\!-y^{m+n}}{x-y}\,=\, \dfrac{x^{m+1}\!-y^{m+1}}{x-y}\,\dfrac{x^{n}\!-y^{n}}{x-y} - xy\, \dfrac{x^{m}-y^{m}}{x-y}\,\dfrac{x^{n-1}-y^{n-1}}{x-y}$$ To help dispel doubts in the comments, here is an Alpha verification of the prior equation.<|endoftext|> TITLE: The completeness relation from QM in terms of inner products QUESTION [5 upvotes]: I remember from QM that the completeness relation says $$ \sum_{n=1}^\infty |e_n\rangle \langle e_n | = I$$ so that $\langle x\mid y\rangle =\sum_{n=1}^\infty \langle x\mid e_n\rangle \langle e_n \mid y\rangle$. I was recently trying to prove a result on trace operators and one calculation was $$\sum_{k=1}^n \langle A g_k , h_k \rangle = \sum_{k=1}^n \operatorname{tr}(A(g_k\otimes h_k))$$ where, apparently, if $f\in X^*$ and $y\in Y$ we define $y\otimes f:X\to Y$ by $(y\otimes f)(x) = f(x)y$; my attempt at this: $$\sum_{k=1}^n \operatorname{tr}(A(g_k\otimes h_k)) = \sum_{k=1}^n\sum_{i=1}^\infty \langle A(g_k \otimes h_k) e_i, e_i\rangle$$ $$= \sum_{k=1}^n\sum_{i=1}^\infty \langle A(\langle e_i,h_k\rangle g_k), e_i\rangle = \sum_{k=1}^n\sum_{i=1}^\infty \langle Ag_k,e_i\rangle\langle e_i,h_k\rangle$$ So using my naive approach from quantum mechanics I just conclude that the last term is $\sum_{k=1}^n \langle A g_k , h_k \rangle $. However, I feel uneasy about this because $\langle \cdot , \cdot \rangle$ is an inner product, while $\langle \cdot \mid \cdot \rangle$ is the bra-ket notation... whatever that means. 1) Can I apply the completeness relation to make my conclusion? 2) Is there a canonical relation between $\langle \cdot, \cdot \rangle$ and $\langle \cdot \mid \cdot \rangle$? 3) How can I prove the completeness relation (I believe it's an axiom in QM, but I reckon its equivalence (in functional analysis (if it exists)) is a theorem). REPLY [3 votes]: Theorem: Let $H$ be a real or complex Hilbert space, and let $\{e_{\alpha}\}_{\alpha\in\Lambda}$ be an orthonormal subset of $H$. The following are equivalent: 1. $\{ e_{\alpha} \}_{\alpha\in\Lambda}$ is a complete orthonormal set, meaning that the only $x\in H$ that is orthogonal to every $e_{\alpha}$ is $x=0$. 2. Parseval's identity $\|x\|^2=\sum_{\alpha\in\Lambda}|\langle x,e_{\alpha}\rangle|^2$ holds for all $x \in H$. 3. The identity $\langle x,y\rangle = \sum_{\alpha\in\Lambda}\langle x,e_{\alpha}\rangle\langle e_{\alpha},y\rangle$ holds for all $x,y\in H$. 4. The subspace consisting of all finite linear combinations of the $e_{\alpha}$ is dense in $H$. The conventions in quantum mechanics provide for ket vectors $|x\rangle$. The bras are linear functionals. On a Hilbert space there is a canonical map $x \mapsto x^*$ from vectors to linear functionals given by $x^*(y)=\langle y,x\rangle$. In this way, you may treat $\langle y | x\rangle$ in the same way as $\langle x,y\rangle$ by use of the canonical map.<|endoftext|> TITLE: Geometric interpretation of primitive element theorem? QUESTION [6 upvotes]: The primitive element theorem is a basic result about field extensions. I was wondering whether there are nice geometric ways to visualize it or think about it. Since field spectra are singletons, it has to be about the non-trivial automorphisms of points (I think), and I don't know how to think about it.
REPLY [10 votes]: Here's a geometric argument that has nothing to do with algebraic geometry. You may find it insufficiently rigorous, but the idea is certainly sound. Consider a separable extension $K\supset k$. One consequence of separability is that there are only finitely many intermediate fields $E$, $K\supset E\supset k$. Consider the finitely many proper subfields, and look at their (set-theoretic) union — not their join, not their compositum, just their union. This is a union of finitely many proper $k$-subspaces of $K$, each of them of dimension less than the dimension $n=[K:k]$ of $K$ as a $k$-space. But certainly, if $k$ is infinite, you can't fill up an $n$-dimensional $k$-space with finitely many subspaces of lower dimension. So there has to be an element $\alpha\in K$ that isn't in any proper subfield of $K$. This $\alpha$ must therefore generate $K$ as a field.<|endoftext|> TITLE: Ways to find irrational roots of an n degree polynomial QUESTION [6 upvotes]: I am trying to write a program to find the roots of a given polynomial of degree $N$, of the form $$ A_{0}X^{N}+A_{1}X^{N-1}+A_{2}X^{N-2}+A_{3}X^{N-3}+...+A_{N} $$ I know that if there are rational roots at all, I can find an exhaustive list with the rational root theorem, and then factor them out using synthetic division to find any and all rational roots. I also know that I am fine if I can factor down to degree two, but I would like to know how to find the irrational roots of an $n$th degree polynomial without numerical methods like Newton's method, to be able to display the polynomial thusly: $$ (x+2)(x-6)(x\pm\sqrt{8})... $$ Any help to be had would be appreciated. REPLY [5 votes]: An efficient algorithm to factor a polynomial into irreducible polynomials is given in this article. The lattice basis reduction algorithm they developed for this purpose is the famous LLL algorithm, which has many applications besides its use in polynomial factorization problems.<|endoftext|> TITLE: Is there a mathematical difference between currying and partial application? QUESTION [5 upvotes]: I know the following example doesn't make what I am saying rigorous, but hopefully it clarifies to some extent what I mean. For various computer implementations, dividing by 2 and multiplying by 0.5 require a different number of CPU cycles, even though the two operations are mathematically equivalent. (The first is performing the inverse operation of multiplication with the number 2, which is defined to be multiplication by the multiplicative inverse of 2, which is $\frac{1}{2}$ or 0.5 in decimal.) Google "practical difference between currying and partial application" and at least the entire first page of results explains some of my skepticism about whether there is a mathematical difference between currying and partial application -- none of the results treat the subject mathematically, i.e. by defining them in terms of Hom functors, and instead discuss how currying and partial application are implemented differently in most functional programming languages.
(In a nod to other websites in the StackExchange network, I will post the results from them: https://stackoverflow.com/questions/218025/what-is-the-difference-between-currying-and-partial-application https://softwareengineering.stackexchange.com/questions/290131/what-is-the-difference-between-currying-and-partial-function-application-in-prac) Notice that in both of the examples given above, they fail to explain any mathematical difference -- instead the difference is explained with examples of lines of code. While in practice, as in the case of differentiating between dividing by 2 and multiplying by 0.5, there is a difference in implementation, it does not seem to amount to a theoretical difference. REPLY [4 votes]: Currying and partial application are two different concepts, which impose different requirements on the category $\mathcal{C}$. Currying is a strictly stronger concept: one can express partial application using currying, but partial application is not sufficient for currying. Currying (in a cartesian closed category $\mathcal{C}\,$) In order to talk about currying, the category has to be cartesian closed. In particular, it has to have exponential objects $Y^X$ for any two objects $X$ and $Y$. Currying is then one direction of the adjunction $$ \mathcal{C}(X \times Y, Z) \cong \mathcal{C}(X, Z^Y)\qquad,$$ meaning that whenever we are given an $f: X \times Y \to Z$, we can obtain the curried version $\lambda f: X \to Z^Y$. Partial application (in a category with finite products) For partial application, one needs only a category with all finite products (not necessarily cartesian closed). Since there is a terminal object $\mathbb{T}$ (nullary product), it makes sense to talk about elements of objects $X$: an element of $X$ can be defined as a morphism $\mathbb{T} \to X$. Given an $f: X \times Y \to Z$, we can partially apply $f$ to $x$ as follows: $$ f \circ \langle x \circ \dagger_Y, Id_Y \rangle $$ where $\dagger_Y$ is the terminal morphism $Y \to \mathbb{T}$, the little circle $\circ$ denotes morphism composition, and $\langle - , - \rangle$ denotes the mediating morphism for products. If we now take any element of $Y$, that is, a morphism $y: \mathbb{T} \to Y$, and feed it to the above construction, we obtain: $$ f \circ \langle x \circ \dagger_Y, Id_Y \rangle \circ y = f \circ \langle x \circ \dagger_Y \circ y, Id_Y \circ y \rangle = f \circ \langle x \circ Id_{\mathbb{T}}, y \rangle = f \circ \langle x, y \rangle \quad,$$ which is exactly what we would expect from a "partially applied function". Notice: we have built the morphism for "f partially applied to x" without using $\lambda$ or exponential objects. We don't need the full-blown cartesian closedness for that. Expressing partial application through currying Suppose that $\mathcal{C}$ is cartesian closed, and let $f$, $x$, $y$ be as above. We can curry $f$ and apply it to $x$: $$ \lambda f \circ x \quad ,$$ obtaining an element of $Z^Y$. In order to be able to postcompose it with $y$, we have to "unpack" the element of $Z^Y$ using the evaluation morphism $\epsilon: Z^Y \times Y \to Z$.
Thus, the morphism that represents "f partially applied to x" is: $$ \epsilon \circ \langle \lambda f \circ x \circ \dagger_Y, Id_Y\rangle \,.$$ Using the universal property of $\lambda f$ and $\epsilon$ (the commuting triangle that one sees in the definition of exponential objects), we can calculate: $$ \epsilon \circ \langle \lambda f \circ x \circ \dagger_Y, Id_Y\rangle = \epsilon \circ (\lambda f \times Id_Y) \circ \langle x \circ \dagger_Y, Id_Y \rangle = f \circ \langle x \circ \dagger_Y, Id_Y \rangle \,,$$ thus our notion of "partial application" defined directly coincides with the notion of "partial application" defined through currying. That means it does not matter which definition we take in cartesian closed categories. When does the difference matter? ...in programming: Since categories built into functional programming languages (like $\mathrm{Hask}$) are cartesian closed, it does not really matter when creating simple functional programs. However, as soon as we start to implement other categories inside our functional programming languages, the difference between "currying" and "partial application" begins to influence our design decisions. In general, it should be much easier to provide only finite products (which allow us to implement partial application), without providing currying. If we use those categories to represent certain computations, providing exponential objects and currying makes the framework more expressive, but we usually lose the possibility to inspect the structure of the computation before evaluating it. This is somewhat similar to the trade-off between monads and applicatives: monads are more powerful, but applicatives have a more rigid structure, which can be analysed and optimized much better. ...in mathematics: There are countless examples when one wants to have partial application, but does not want to think about whether the category one is working in is cartesian closed. For example, in classical multivariate calculus, we look at smooth functions from $\mathbb{R}^n \to \mathbb{R}^m$, and often want to fix some of the variables, and compute partial derivatives. It is crucial that we do this using only the product structure and partial application, but no currying: if we took a multivariate function $f: \mathbb{R}^2 \to \mathbb{R}$ and then curried it, we would obtain something like a function from $\mathbb{R}$ to the function space $\mathbb{R} \to \mathbb{R}$, and would thereby fall out of the realm of classical multivariate calculus and land in the realm of functional analysis. Same with topology: we often want to take some continuous function that depends on multiple variables (some homotopy or something), then fix some of the variables, and argue that the partially applied function is still continuous, and work with that. In basic topology, we do not want to spend a lot of time on defining topologies on function spaces and trying to make the category of topological spaces cartesian closed. We don't have to, because partial application works without currying. Code examples The fact that we do not need currying in order to implement partial application can be illustrated in any functional programming language. The following code snippets show how to implement partialApplication in terms of compose, identity and product, without using lambdas.
Scala (with detailed comments):

// suppose that we are given only identity morphisms,
// composition, terminal morphisms, and products of morphisms
// (mediating morphism for product object)
// [BEGIN: assume that we can treat Scala functions as a "category with products"]
def identity[X]: X => X = x => x
def compose[X,Y,Z](g: Y => Z, f: X => Y): X => Z = x => g(f(x))
def terminal[X]: X => Unit = x => ()
def product[X, Y, Z](fst: X => Y, snd: X => Z): X => (Y, Z) = x => (fst(x), snd(x))
def constant[X](x: X): Unit => X = u => x
// [END: assume]

// We do not need any lambdas in order to express
// partial application!

// non-strict version (x is a morphism that takes unit)
def partialApplication[X, Y, Z](f: ((X, Y)) => Z, x: Unit => X): Y => Z =
  compose(f, product(compose(x, terminal[Y]), identity[Y]))

// strict version, uses constant to transform `x` into a constant function
def partialApplication[X, Y, Z](f: ((X, Y)) => Z, x: X): Y => Z =
  compose(f, product(compose(constant(x), terminal[Y]), identity[Y]))

// example
val f: ((Int, Double)) => Double = { case (n, t) => math.cos(2 * math.Pi * t) }
val f42 = partialApplication(f, 42)
println(f42(1.0))

The same in Haskell (same comments as in Scala apply):

identity :: a -> a
identity x = x

compose :: (y -> z) -> (x -> y) -> (x -> z)
compose g f = g . f

prod :: (x -> y) -> (x -> z) -> (x -> (y, z))
prod a b = \x -> (a x, b x)

terminal :: a -> ()
terminal x = ()

constant :: x -> (() -> x)
constant x = \u -> x

partialApplication :: ((x, y) -> z) -> (() -> x) -> (y -> z)
partialApplication f x = compose f (prod (compose x terminal) identity)<|endoftext|> TITLE: Books for maths olympiad QUESTION [6 upvotes]: I want to prepare for the maths olympiad and I was wondering if you can recommend me some books about combinatorics, number theory and geometry at a beginner and intermediate level. I would appreciate your help. REPLY [2 votes]: If you're looking for a good Geometry book, you can check out Evan Chen's book, Euclidean Geometry In Mathematical Olympiads. There are no prerequisites to the book; all you need to do is know how to read proofs. A PDF of the book is $28: http://www.maa.org/press/ebooks/euclidean-geometry-in-mathematical-olympiads As a sample, here's a free PDF of chapter 2: http://www.maa.org/sites/default/files/pdf/ebooks/pdf/EGMO_chapter2.pdf<|endoftext|> TITLE: Use $\delta-\epsilon$ to show that $\lim_{n\to\infty} a^{\frac{1}{n}} = 1$? QUESTION [5 upvotes]: Hope this is a meaningful question, but I'm curious whether it is possible to show that $$\lim_{n\to\infty} a^{\frac{1}{n}}=1, \text{where }a>0$$ using $\delta-\epsilon$ directly or other methods.
One method that I am aware of is to use the following: If $\{s_n\}$ is a nonzero sequence, then $\liminf\bigl|\frac{s_{n+1}}{s_n}\bigr|\le \liminf |s_n|^{\frac{1}{n}}\le \limsup |s_n|^{\frac{1}{n}}\le\limsup\bigl|\frac{s_{n+1}}{s_n}\bigr|$ REPLY [6 votes]: The case $a=1$ is obvious. Let now $a>1$; then $\sqrt[n]{a}>1$ and $$a={\left( 1+\left( \sqrt[n]{a}-1 \right) \right) }^{n}>1+n\left( \sqrt[n]{a}-1 \right)>n\left( \sqrt[n]{a}-1 \right)$$ (Bernoulli's inequality was used), and from here we get that $0<\sqrt[n]{a}-1<\frac{a}{n}<\varepsilon$ when $n>\frac{a}{\varepsilon}$, $\left( \varepsilon>0 \right)$, so $\sqrt[n]{a}\rightarrow 1$ as $n\rightarrow\infty$. Now consider $0<a<1$; then $\frac{1}{a}>1$, and in this case we also have $\sqrt[n]{\frac{1}{a}}\rightarrow 1$ as $n\rightarrow\infty$, so that $$\lim_{n\rightarrow\infty}{\sqrt[n]{a}}=\lim_{n\rightarrow\infty}{\frac{1}{\sqrt[n]{1/a}}}=\frac{1}{\lim_{n\rightarrow\infty}{\sqrt[n]{1/a}}}=1$$<|endoftext|> TITLE: Is the set of aleph numbers countable? QUESTION [8 upvotes]: If I write the set of aleph numbers in this way $\{\aleph_0, \aleph_1, \aleph_2, \aleph_3, \dots\}$ it seems obvious to me that this set is countable, because the aleph numbers have integer indices. However, maybe we are using the wrong notation for aleph numbers: how do we know that the aleph numbers are really countable? I think my question can be rephrased like this: what is the cardinality of the set $\{\mathbb{N}, \mathscr{P}(\mathbb{N}), \mathscr{P}(\mathscr{P}(\mathbb{N})), \mathscr{P}(\mathscr{P}(\mathscr{P}(\mathbb{N}))), \dots\}$, where $\mathscr{P}(\mathbb{N})$ represents the power set of $\mathbb{N}$? How do I prove it? REPLY [6 votes]: Clive Newstead's answer is absolutely correct; I would just like to add a short addendum with regard to the second phrasing of your question using power sets. As Clive already pointed out, the "set" of $\{\aleph_0,\aleph_1,\aleph_2,\cdots\}$ need not have much of anything to do with the "set" $\{\omega,\mathscr{P}(\omega),\mathscr{P}(\mathscr{P}(\omega)),\cdots\}$. Though the second expression seems more likely to be interpreted as an actual set, i.e. $\{\mathscr{P}^n(\omega);\;n\in\omega\}$, due to the fact that you have to introduce a new type of operation to interpret it as anything else. You can't iterate the power set operation more than an arbitrary finite number of times. At some point you have to take a union. It is a standard convention, where it makes sense, to define $\mathscr{P}^\alpha$ for $\alpha\geq\omega$ by the standard recursion approach: if $\alpha$ is a limit then $\mathscr{P}^\alpha(\omega)=\bigcup_{\beta<\alpha}\mathscr{P}^\beta(\omega)$, and when $\alpha=\beta+1$ you just have $\mathscr{P}^\alpha(\omega)=\mathscr{P}(\mathscr{P}^\beta(\omega))$. Post script: Becoming quite curious as to how much distance you can put between the two sets you mention, I asked a question of my own, to which Asaf Karagila kindly provided an answer. To summarize: assuming ZFC, the second class is a subset of the first class, but can easily be quite "thin" in it (for example you can make $\mathscr{P}(\omega)> \aleph_\kappa$ where $\kappa$ is the first ordinal such that $\aleph_\kappa=\kappa$).
If you drop choice, then it is consistent that the only thing the two classes have in common is $\omega=\aleph_0$.<|endoftext|> TITLE: Number of integer triplets $(a,b,c)$ such that $a<b<c$ and $a+b+c=n$ QUESTION: If $a+b>c$, $b+c>a$, $a+c>b$ hold, then the problem can be seen as $a,b,c$ being the sides of a triangle with perimeter $n$. I would like a hint on how to do that as well. REPLY [2 votes]: This answer is a continuation and completion of the nice approach by @SteveKass. Part 1: Looking for the number of triples $(a,b,c)$ with \begin{align*} 0\leq a<b<c\qquad a+b>c\qquad b+c>a\qquad c+a>b \end{align*} In fact we only have to consider $a+b>c$, since the other inequalities follow from $a<b<c$. Writing $b=a+1+i$ and $c=a+2+i+j$ with $i,j\geq 0$, the condition $a+b>c$ becomes \begin{align*} a+b&>c\\ a+(a+1+i)&>a+2+i+j\\ a&>j+1 \end{align*} The condition $a>j+1$ is equivalent with $a=k+j+1, k\geq 0$ and we conclude from (2) the number of solutions is \begin{align*} 0\leq i,j,k\qquad \text{with}\qquad &3(k+j+1)+2i+j=n-3\\ &4j+3k+2i=n-6\tag{7}\\ \end{align*} We can now proceed in the same way as we did in Part 1. We obtain from (7) the generating function $H(x)$ with \begin{align*} H(x)&=\frac{x^6}{(1-x^2)(1-x^3)(1-x^4)}\\ &=-\frac{5}{32}\cdot\frac{1}{1+x}+\frac{1}{16}\cdot\frac{1}{(1+x)^2}\\ &\qquad+\frac{23}{288}\cdot\frac{1}{1-x}-\frac{1}{8}\cdot\frac{1}{(1-x)^2}+\frac{1}{24}\cdot\frac{1}{(1-x)^3}\\ &\qquad-\frac{1}{16}\cdot\frac{1+i}{1-ix}-\frac{1}{16}\cdot\frac{1-i}{1+ix} +\frac{1}{9}\cdot\frac{1}{1-e^{-\frac{2\pi i}{3}}x}+\frac{1}{9}\cdot\frac{1}{1-e^{\frac{2\pi i}{3}}x}\tag{8}\\ &=\color{blue}{1}x^6+\color{blue}{0}x^7+\color{blue}{1}x^8+\color{blue}{1}x^9+\color{blue}{2}x^{10}+\color{blue}{1}x^{11}+\color{blue}{3}x^{12}+\color{blue}{2}x^{13}+\color{blue}{4}x^{14}\\ &\qquad+\color{blue}{3}x^{15}+\color{blue}{5}x^{16}+\color{blue}{4}x^{17}+\color{blue}{7}x^{18}+\color{blue}{5}x^{19}+\color{blue}{8}x^{20}+\color{blue}{8}x^{21}\cdots \end{align*} From (8) we obtain a closed formula for the coefficients $h_n=[x^n]H(x)$ for $n\geq 6$: \begin{align*} [x^n]H(x)&=-\frac{5}{32}(-1)^{n}+\frac{1}{16}\binom{-2}{n}+\frac{23}{288} -\frac{1}{8}\binom{-2}{n}(-1)^{n}+\frac{1}{24}\binom{-3}{n}(-1)^{n}\\ &\qquad-\frac{1}{16}(1+i)i^n-\frac{1}{16}(1-i)(-i)^n+\frac{2}{9}\cos\left(\frac{2\pi n}{3}\right)\\ &=\frac{5}{32}(-1)^{n+1}+\frac{1}{16}\binom{n+1}{1}(-1)^n+\frac{23}{288}-\frac{1}{8}\binom{n+1}{1}+\frac{1}{24}\binom{n+2}{2}\\ &\qquad-\frac{1}{16}(1+i)i^n-\frac{1}{16}(1-i)(-i)^n+\frac{2}{9}\cos\left(\frac{2\pi n}{3}\right)\\ &=\frac{1}{32}(-1)^{n}(2n-3)+\frac{1}{288}\left(6n^2-18n-1\right)\\ &\qquad-\frac{1}{16}i^n\left((1+i)+(1-i)(-1)^n\right)+\frac{2}{9}\cos\left(\frac{2\pi n}{3}\right)\\ &=\begin{cases} \frac{1}{48}n^2\qquad&\qquad n\equiv 0(4), n\equiv 0(3)\\ \frac{1}{48}\left(n^2-16\right)\qquad&\qquad\qquad\qquad\, n\not\equiv 0(3)\\ \frac{1}{144}\left(3n^2-18n-1\right)\qquad&\qquad n\equiv 1(4), n\equiv 0(3)\\ \frac{1}{144}\left(3n^2-18n+15\right)\qquad&\qquad\qquad\qquad\, n\not\equiv 0(3)\\ \frac{1}{48}\left(n^2+12\right)\qquad&\qquad n\equiv 2(4), n\equiv 0(3)\\ \frac{1}{48}\left(n^2-4\right)\qquad&\qquad\qquad\qquad\, n\not\equiv 0(3)\\ \frac{1}{48}\left(n^2-6n+9\right)\qquad&\qquad n\equiv 3(4), n\equiv 0(3)\\ \frac{1}{144}\left(3n^2-18n-21\right)\qquad&\qquad\qquad\qquad\, n\not\equiv 0(3)\\ \end{cases} \end{align*} We can also find this sequence in OEIS: A shifted variant of the sequence $(h_n)_{n\geq 0}$ with generating function $x^{-3}H(x)$ and beginning with \begin{align*} 0, 0, 0, 1, 0, 1, 1, 2, 1, 3, 2, 4, 3, 5, 4, 7, 5, 8, 7, 10, 8, 12, 10, 14, 12, 16, 14, 19, 16, 21,\cdots \end{align*} is known to OEIS as Alcuin's sequence.
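As a cross-check on the generating function (a brute-force Python sketch of my own; the function and variable names are mine), the coefficients of the shifted series $x^3/\big((1-x^2)(1-x^3)(1-x^4)\big)$ can be compared against a direct count of integer triangles:

def triangles(n):
    # count integer triples a <= b <= c with a + b + c = n and a + b > c
    count = 0
    for a in range(1, n + 1):
        for b in range(a, n + 1):
            c = n - a - b
            if c >= b and a + b > c:
                count += 1
    return count

N = 40
coeff = [0] * (N + 1)
coeff[3] = 1                      # numerator x^3
for d in (2, 3, 4):               # multiply by 1/(1 - x^d): cumulative sum with stride d
    for k in range(d, N + 1):
        coeff[k] += coeff[k - d]
print(all(coeff[n] == triangles(n) for n in range(N + 1)))   # prints True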
Alcuin's sequence counts the number of triangles with integer sides and perimeter $n$.<|endoftext|> TITLE: Maximum and minimum of $f(x)=\cos(\sin(x))-\sin(\cos(x))$ QUESTION [5 upvotes]: Given the function: $$f(x)=\cos(\sin(x))-\sin(\cos(x))$$ it has absolute maxima at $x=(2k+1)\pi$ with $k=0,1,\ldots,N$ and relative maxima at $x=2k\pi$. It is not clear where the minima are. Setting the derivative to zero doesn't help. Any suggestion on how to find the minimum value and where it is? Thanks. REPLY [2 votes]: $f(x)$ is a $2\pi$-periodic function, whose graph is symmetric with respect to $x=\pi$, so it is enough to study $f(x)$ over the interval $[0,\pi]$. We have: $$ f'(x) = \sin(x)\cos(\cos x)-\cos(x)\sin(\sin x) $$ hence the endpoints of $[0,\pi]$ are for sure stationary points due to the vanishing of $\sin(x)$ and $\sin(\sin x)$. There is a third stationary point inside that interval (an absolute minimum) associated with the solution (unique in $(0,1)$) of a transcendental equation, $$ \frac{\sin u}{u}=\frac{\cos\sqrt{1-u^2}}{\sqrt{1-u^2}}. $$ Numerically, the first relative minimum occurs at $x_0 \approx 0.69272857$ and $$ f(x_0)\approx 0.107127,$$ so we have: $$\boxed{\forall x\in\mathbb{R},\qquad \cos(\sin(x))-\sin(\cos x)\geq \frac{1}{10}.} $$<|endoftext|> TITLE: When evaluating a limit by making a change of variable, how does the sign (+/-) change? QUESTION [5 upvotes]: Say I have $$\lim_{x \rightarrow 4} f(x)=\frac{\sqrt{x}-2}{\sqrt{x^3}-8}.$$ My homework paper says to do a change of variable for $u=\sqrt{x}.$ If I do that, I get $$\lim_{u^2 \rightarrow 4} f(x)=\frac{u-2}{\sqrt{u^6}-8},$$ and from there we simplify like the following: $$\lim_{u^2 \rightarrow 4} f(x)=\frac{u-2}{u^3-8}$$ $$\lim_{u^2 \rightarrow 4} f(x)=\frac{u-2}{(u-2)(u^2+2u+4)}$$ $$\lim_{u^2 \rightarrow 4} f(x)=\frac{1}{u^2+2u+4}$$ My question is: when I take the square root of the $u^2$, as in $u^2\rightarrow4$, does it become $u\rightarrow2$ or $u\rightarrow-2$, and why? It's apparent from the first equation that it "makes sense" or at least "follows some pattern" for $u$ to equal positive $2$ because it would make the numerator $0,$ but this isn't really valid reasoning. So, should $u$ approach $2$ or $-2,$ and how do we know? REPLY [3 votes]: Perhaps it should be made explicit verbally at the outset that the chosen substitution is that $u$ is the positive square root of $x$. In the problem as stated before the substitution is done, the numerator is $\sqrt x - 2$, and “$\sqrt x$” denotes one of the two square roots of $x$ and not the other one. If what one knows about $u$ is ONLY that $u^2$ is approaching $4$, then one might conclude in certain contexts that $u$ is approaching either $2$ or $-2$ but one cannot tell which. But in a broader context even that conclusion is not justified: $u$ could be alternating between something approaching $2$ and something approaching $-2$, as $u^2$ approaches $4$, so that $u$ would not be approaching any limit at all. However, in this case the “ONLY” mentioned above is not applicable.
I have seen a couple of other examples of questions like this online, but the domains of each piecewise function were the same, so the compositions weren't difficult to determine. In this case, I have assumed that, in finding $g(f(x))$, one must consider only the domain of $f(x)$. Thus, I think it would make sense to test for individual cases: for example, I would try to find $g(f(x))$ when $x \le 0$. $g(f(x))$ when $x \le 0$ would thus be $-2x-1$, right? However, I feel like I'm missing something critical, because I'm just assuming that the condition $x < 2$ for $g(x)$ can just be treated as $x \le 0$ in this case. Sorry for my rambling, and many thanks to anyone who can help lead me to the solution. REPLY [6 votes]: $$g(x) = \begin{cases} -x, & x < 2 \\ 5, & x \ge 2 \end{cases} $$ Therefore $$g(f(x)) = \begin{cases} -f(x), & f(x) < 2 \\ 5, & f(x) \ge 2 \end{cases} $$ So now we need to know when $f(x) < 2$ and when $f(x) \ge 2$. $$f(x) = \begin{cases} 2x + 1, & x \le 0 \\ x^2, & x > 0 \end{cases} $$ Let's look at one piece of $f$ at a time. On the first piece, $2x + 1 \ge 2$ means $x \ge 1/2$. But this is impossible because $x \le 0$ on the first piece. Also on the first piece, $2x + 1 < 2$ means $x < 1/2$. Well, on the first piece we have $x \le 0 < 1/2$, therefore $f(x) < 2$ on the entire first piece, i.e., $f(x) < 2$ if $x \le 0$. On the second piece, $x^2 \ge 2$ means $x \ge \sqrt{2}$ or $x \le -\sqrt{2}$. On the second piece we always have $x > 0$, therefore we have $x^2 \ge 2$ when $x \ge \sqrt{2}$. Also on the second piece, $x^2 < 2$ means $-\sqrt{2} < x < \sqrt{2}$. And since on the second piece we always have $x > 0$, then we must have $0 < x < \sqrt{2}$ in order to have $x^2 < 2$. Putting it all together so far, we have the following: $$ f(x) \ge 2 \text{ if and only if } x \ge \sqrt{2}$$ $$ f(x) < 2 \text{ if and only if } x \le 0 \text{ or } 0 < x < \sqrt{2} $$ Notice that this last one can be simplified but we need to keep them separate. Why is this? We'll see as we continue. Recall: $$g(f(x)) = \begin{cases} -f(x), & f(x) < 2 \\ 5, & f(x) \ge 2 \end{cases} $$ Therefore: $$g(f(x)) = \begin{cases} -f(x), & x \le 0 \text{ or } 0 < x < \sqrt{2} \\ 5, & x \ge \sqrt{2} \end{cases} $$ Separating the conditions gives us: $$g(f(x)) = \begin{cases} -f(x), & x \le 0\\ -f(x), & 0 < x < \sqrt{2} \\ 5, & x \ge \sqrt{2} \end{cases} $$ And we need to do this because $f(x)$ itself is different for $x \le 0$ and $0 < x < \sqrt{2}$. Finally, we end up with: $$ h(x) := g(f(x)) = \begin{cases} -(2x+1), & x \le 0 \\ -x^2, & 0 < x < \sqrt{2} \\ 5, & x \ge \sqrt{2} \end{cases} $$<|endoftext|> TITLE: Value of this convergent series: $\frac{1}{3!}+\frac2{5!}+\frac3{7!}+\frac{4}{9!}+\cdots$ QUESTION [9 upvotes]: What is the value of $$\frac{1}{3!}+\frac2{5!}+\frac3{7!}+\frac{4}{9!}+\cdots$$ I wrote its general term as $\sum\frac{n}{(2n+1)!}$. As the series converges it should be telescopic (my thought). But I don't know how to proceed. I also know $\sum\frac{1}{n!}=e$. Any help/hints appreciated. Thanks! REPLY [14 votes]: We know that $$\frac{\sinh x}{x}=\sum_{n=0}^{\infty }\frac{x^{2n}}{(2n+1)!}$$ Let $x\rightarrow \sqrt{x}$: $$\frac{\sinh \sqrt{x}}{\sqrt{x}}=\sum_{n=0}^\infty \frac{x^n}{(2n+1)!}$$ $$\left(\frac{\sinh \sqrt{x}}{\sqrt{x}}\right)'=\sum_{n=1}^\infty \frac{nx^{n-1}}{(2n+1)!}$$ Let $x=1$ to get what you want.
QUESTION [7 upvotes]: We know that $|P(\Bbb{R})|=|L(\Bbb{R})|$ ($L(\Bbb{R})$ is the set of all Lebesgue-measurable sets). Note that $L(\Bbb{R}) \subsetneq P(\Bbb{R})$. What is the cardinality of a non-measurable set? Is this set countable? REPLY [14 votes]: A standard argument to show that $\mathcal{L}(\mathbb R)$ has the same size as $\mathcal{P}(\mathbb R)$ is to note that the Cantor subset of $[0,1]$ has the same size as $\mathbb R$ and measure 0, so any of its subsets also has measure 0. You can use the same idea to find the size of $\mathcal P(\mathbb R)\setminus\mathcal L(\mathbb R)$: Fix a nonmeasurable subset $N$ of $[2,3]$, and note that $N\cup A$ is nonmeasurable for any subset $A$ of the Cantor set. (This was asked before, by the way, see here.) The question of what sizes nonmeasurable sets can have is harder. Of course, any such set is uncountable. If $\kappa$ is the least possible size of a nonmeasurable set, then there are nonmeasurable sets of size $\tau$ for any $\tau$ with $\kappa\le\tau\le|\mathbb R|$, by the same argument as in the previous paragraph, so the problem is to see what one can say about $\kappa$ itself. It turns out that $\mathsf{ZFC}$ is not strong enough to give us much information: it is consistent that $\kappa=\aleph_1$ while $|\mathbb R|$ itself can be arbitrarily large. It is also consistent that $\kappa=|\mathbb R|$ and $\mathbb R$ can be as large as wanted. Other behaviors are also consistent. This number $\kappa$ has been studied in the context of cardinal characteristics (or "cardinal invariants") of the continuum, where it is denoted $\mathrm{non}(\mathcal L)$. There are several survey articles containing more information. See for instance the chapters by Andreas Blass and by Tomek Bartoszynski in the Handbook of set theory.<|endoftext|> TITLE: Why the attachment to simplices in (co)homology? QUESTION [7 upvotes]: I've been thinking a bit about why we define the singular homology and cohomology groups with simplices rather than, say, cubes, and it seems to me that the elementary aspects of the theory would all become more elegant if we used cubes: Say we define $C_n(X)$ to be the free abelian group on maps $[0,1]^n \to X$, the boundary operator $\partial$ as Spivak defines it in his first differential geometry book (or probably Calculus on Manifolds), and $H_n(X)$ to be the homology groups of the resulting complex. The first thing that really needs proof is that homotopic maps $f,g: X \to Y$ induce chain homotopic maps $S(f),S(g):C_n(X) \to C_n(Y)$, and this isn't the most obvious thing in the world if we use simplices - we have to decompose $\Delta ^n \times [0,1]$ into a union of $(n+1)$-simplices. But with cubes the proof reduces to the following: Define $P:C_n(X) \to C_{n+1}(Y)$ on singular cubes $\sigma : [0,1]^n \to X$ by $P(\sigma) = F \circ (\sigma \times id)([0,1]^{n+1})$, where $F$ is a homotopy from $f$ to $g$. Then the proof that $\partial P = S(g)-S(f)-P \partial$ follows formally from the definition of $\partial$, but is also quite clear - it says that the boundary of the singular cube $P(\sigma)$ is the top minus the bottom minus the sides (also, $P(\sigma)$ is a singular cube, and if we work with simplices, it's only a singular chain). Next we'd want to show that homology groups can be computed using small cubes. From this we easily obtain the excision theorems and Mayer-Vietoris sequences.
With simplices we have to define the barycentric subdivision, which is a beautiful geometric idea but seems to be impossible to define without some decidedly ugly notation. However, for cubes, we can use the standard subdivision of a cube into $2^n$ cubes with side lengths halved. That is, if $I_0 = [0,1/2]$ and $I_1 = [1/2,1]$, we could define for $\sigma : [0,1]^n \to X$, $B(\sigma) = \sum_f (-1)^{\sum f(i)} \sigma | I_{f(1)} \times I_{f(2)} \times \cdots \times I_{f(n)}$, the sum taken over functions $\{1,2,\dots , n\} \to \{0,1\}$. No inductive formula necessary for the subdivision! Showing that $B$ gives a chain map homotopic to the identity is conceptually easier than with the barycentric subdivision, again since $[0,1]^n \times [0,1] = [0,1]^{n+1}$ and the subdivision of the $(n+1)$-cube is related in a relatively clear way to the subdivision of the $n$-cube. Last note: say we want to define for a smooth manifold a map from the de Rham complex into the cochain complex Hom$(C_n(X), \mathbb{R})$. As usual, we define it as $\alpha \mapsto \big(c \mapsto \int _c \alpha\big)$. The fact that this is a chain map is exactly Stokes' theorem for cubes, which boils down to the ordinary fundamental theorem of calculus and interchanging orders of integration, and could be done in an ad-hoc way in any elementary algebraic topology textbook without taking more than half a page. To summarize, it seems like the formalism at the beginning of an algebraic topology course could be expedited and made more intuitive if we defined the homology groups using cubes. Moreover, the equivalence of the cube definition and the simplex definition is easy because we can decompose a cube into simplices. So why don't introductory algebraic topology books use cubes? Maybe people think simplices aren't that much more difficult to manipulate than cubes and there's some historical inertia, but cubes also seem to have a central role in other fields of mathematics (e.g. Whitney cubes, rectangular paths like those most books use to prove Runge's theorem, the usual proof of Stokes' theorem/Green's theorem). Can anyone please name a place where simplices can be used but cubes can't, or give a good justification for the use of simplices instead of cubes? REPLY [8 votes]: The very first thing we need is that $H_*(pt) = \Bbb Z$ in degree zero and is zero elsewhere. Your definition, as stated, does not have this. Clearly $C_*(pt) = \Bbb Z$ in all degrees, but the boundary map is always identically zero (as opposed to the simplicial case, where the boundary map alternates between being the identity and being zero). So the homology groups $H_n(pt) = \Bbb Z$ in every degree. The way to avoid this is to mod out by "degenerate cubes", those that only depend on $(n-1)$ of the $n$ variables in your cube. When you then define $C_n(X)/Q_n(X)$ to be the chain complex of cubes modulo degenerate cubes, you recover singular homology. If you'd like to see this theory developed, see Massey's book "Singular homology theory". So one reason people might favor simplices is that one doesn't need to worry about modding out by degeneracy. (There's also the fact that simplices are better suited to proving that there's a natural suspension isomorphism.) Suppose you don't mind modding out by degeneracies, but you'd really like that easy suspension isomorphism back. Why not work with some class of 'probing objects' that contains $I$, is closed under product and cone, like some set of polyhedra?
You can do that (indeed, if I recall some folks already have, though I forget the reference)... or you could go even more generally and probe by maps out of smooth manifolds with corners, which not only allows an easy suspension isomorphism, an easy product map, etc, but also easy ways of doing "smooth operations" like taking inverse images etc. See Lipyanskiy, Geometric homology.<|endoftext|> TITLE: Geometric Significance of some features of the Exterior Algebra QUESTION [6 upvotes]: I've been tinkering with differential forms for a while now, and I've had a few questions all rolled into one trying to understand them. The exterior derivative is quite natural to me - it looks just like a regular old derivative, and so I believe I have a good intuition about how it works. The first question that struck me was that I did not have a similar intuition about the interior derivative. I wanted to know in what sense it was a derivative, if any, and either way I wanted to get an idea of what this guy really does. A geometric interpretation would have been ideal. At one point I was thinking about tensor contractions and how that's kind of like a trace, but that's not really what the interior derivative is anyway, so I ended up putting that aside for a while. But I noticed something interesting that has given me renewed interest in the question when I learned about Hodge duality. We will work in $\mathbb R^3$ with the usual coordinates. Consider an arbitrary $1$-form which we'll write down as $F_x dx + F_y dy + F_z dz$. We'll compute the exterior derivative of this, and we end up with $$ \bigg ( \frac{\partial F_x}{\partial y} - \frac{\partial F_y}{\partial x} \bigg ) dx \wedge dy + \bigg ( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \bigg ) dz \wedge dx + \bigg ( \frac{\partial F_y}{\partial z} - \frac{\partial F_z}{\partial y} \bigg ) dy \wedge dz $$ Amazingly (at least to me), this is the Hodge dual of the curl! Now, we can do a similar computation using $d(i_X)$. Let $X = \alpha \frac{\partial }{\partial x} + \beta \frac{\partial}{\partial y} + \gamma \frac{\partial}{\partial z}$. $X$ is a vector field, so these could be functions of $x,y,z$; it doesn't really matter. Now let $\omega$ denote the $3$-form $dx \wedge dy \wedge dz$. We can now compute. The computation is a little laborious to do directly from the definition, but there's a nice little formula which we can use to shortcut the computation. What we end up with is $$ \alpha dy \wedge dz + \beta dz\wedge dx + \gamma dx \wedge dy$$ after a few little rearrangements. Now we take the exterior derivative of this, and we have $$\bigg (\frac{\partial \alpha}{\partial x} + \frac{\partial \beta}{\partial y} + \frac{\partial \gamma}{\partial z} \bigg ) (dx \wedge dy \wedge dz) $$ And again in a way that totally amazes me, this is the Hodge dual of the divergence of $X$! Now this motivates the question: What is the geometric significance of Hodge duality? If there was some nice way to think about what the Hodge dual of something is, it would help me to understand the interior derivative, maybe. There is a problem though, which makes me worry that all of this is just coincidence of some sort - the above analysis depends on the fact that the Hodge dual on a 3-manifold takes 1-forms to 2-forms, which is how we are able to see what's going on with the curl. In particular, curl is not even defined in dimensions different from 3. Divergence is defined in general though, so that much is a welcome sign.
So somehow, I want to say that what I have learned will give me an avenue to understanding interior and exterior derivatives better, but I don't know how to interpret Hodge duality, and I have these concerns. In Summary: How is the interior derivative a derivative, geometrically? What is the geometric content of Hodge duality? Can I put these together to understand interior and exterior derivatives? REPLY [2 votes]: How is the interior derivative a derivative? I wouldn't say it is. My background is in Clifford algebra, and that discipline's equivalent of this operation is universally referred to as a product operation, not a derivative operation. What is the geometric content of Hodge duality? Short version: you're finding the orthogonal complement of whatever you're dualizing. Longer version: That orthogonal complement picture doesn't generalize to non-simple forms (i.e. forms that cannot be factored into wedge products of vectors). Only simple forms correspond to subspaces. Can I put these together to understand interior and exterior derivatives? Perhaps, but not in the way I think you want. I think what you really want is the codifferential $\delta$, such that $\delta \omega = \star d \star \omega$ up to a minus sign. The interior derivative is really more of a dual analogue to the wedge product. Let $\chi$ be the 1-form corresponding to a vector field $X$ under some symmetric bilinear form $g$ that determines the Hodge dual operation $\star$. Then $\iota_X \omega = \star^{-1} (\chi \wedge (\star \omega))$.<|endoftext|> TITLE: Find all solutions to $f\left(x^2+xf(y)\right)=xf(x+y)$ QUESTION [9 upvotes]: Find all functions $f:\mathbb{R}\to\mathbb{R}$ such that $$f\left(x^2+xf(y)\right)=xf(x+y)$$ for all $x,y\in\mathbb{R}$. This is somewhat related to this question, but with an $xf(y)$ term instead of $yf(x)$. I've managed to get $f(0)=0$, $f\left(x^2\right)=xf(x)$ and $f$ is odd, but not much more. I think that $f(x)=x$ and $f(x)=0$ are the only solutions, but I'm not sure. REPLY [4 votes]: The function defined by $f(x) = 0$ for all $x \in \mathbb{R}$ is a solution to the functional equation. Suppose that $f$ is some other solution. Then there is a real number $c$ such that $f(c) \neq 0$. For any $r \in \mathbb{R}$, let $x = \frac{r}{f(c)}$, and let $y = c - x$. The functional equation then gives us that $$ f \left( x^2 + xf(y) \right) = xf(x + y) = xf(c) = \frac{r}{f(c)} \cdot f(c) = r $$ showing that $r \in f(\mathbb{R})$, i.e. the function is surjective. As you noted, we have that $f(0) = 0$, which follows by taking $x = y = 0$ in the functional equation. Now let $x = -f(y)$ in the functional equation. We obtain that $$ 0 = f(0) = f \left( f(y)^2 - f(y) f(y) \right) = -f(y) f(y - f(y)) $$ for all $y \in \mathbb{R}$. Then for any $y \in \mathbb{R}$, we have that if $f(y) = 0$, then $f(y - f(y)) = f(y) = 0$, and if $f(y) \neq 0$, then $-f(y) f(y - f(y)) = 0$ implies that $f(y - f(y)) = 0$. We thus have that $$ f(y - f(y)) = 0 $$ for all $y \in \mathbb{R}$. Also as noted by you, we have that $f(x^2) = xf(x)$ by taking $y = 0$ in the functional equation. Now for any real number $x$, we showed earlier that there is some $y \in \mathbb{R}$ such that $f(y) = x$. We then have that $$ xf(x) = f(y) f(f(y)) = f(f(y)^2) = f \left( f(y)^2 + f(y) f \left( y - f(y) \right) \right) $$ using the fact that $f(y - f(y)) = 0$. The functional equation then gives us that $$ xf(x) = f(y) f \left( f(y) + y - f(y) \right) = f(y) f(y) = x^2 $$ holds for all $x \in \mathbb{R}$.
<|endoftext|> TITLE: Totally real Galois extension of given degree QUESTION [9 upvotes]: Let $n≥1$ be an integer. I would like to prove (or disprove) the existence of a subfield $K \subset \Bbb R$ such that $K/\Bbb Q$ is Galois and has degree $n$. It is easy to construct such a subfield for $n=2^k$: one can take $K=\Bbb Q(\sqrt 2, \sqrt 3, \sqrt 5, ..., \sqrt{p_k})\;$ where $p_k$ is the $k$-th prime number. I found that the polynomial $x^3-3x+1 \in \Bbb Q[X]$ (irreducible, since it has no rational root) has three real roots and square discriminant $81$, so its splitting field $K$ is totally real, cyclic of degree $3$, and satisfies my conditions for $n=3$. Then I'm done with $n=3\cdot 2^k$ thanks to the compositum of fields. Apparently, a Galois extension of odd degree must be totally real (otherwise complex conjugation would give an element of order $2$ in the Galois group). In any case, I don't know how to construct, in general or in particular cases ($n$ prime for instance), such a totally real Galois extension $K/\Bbb Q$ of degree $n$. Thank you for your help! REPLY [14 votes]: Fix $n$, and choose a prime $p$ such that $p\equiv 1$ (mod $2n$). Dirichlet's theorem on primes in arithmetic progressions guarantees the existence of infinitely many such primes. If $K=\mathbb{Q}(\zeta_p)$, then $K$ is a Galois extension of $\mathbb{Q}$ with $\mathrm{Gal}(K/\mathbb{Q})$ cyclic of order $p-1$. If $F=\mathbb{Q}(\zeta_p+\zeta_p^{-1})$ then $F$ is totally real and $[K:F]=2$, hence $F$ is Galois over $\mathbb{Q}$ with $\mathrm{Gal}(F/\mathbb{Q})$ cyclic of order $\frac{p-1}{2}$. Now $n$ divides $\frac{p-1}{2}$ by our choice of $p$, hence $G=\mathrm{Gal}(F/\mathbb{Q})$ has a subgroup $H$ of index $n$. Let $E=F^H$, then $E$ is Galois over $\mathbb{Q}$, is totally real, and $[E:\mathbb{Q}]=[G:H]=n$.<|endoftext|> TITLE: How unique is $e$? QUESTION [8 upvotes]: Is the property of a function being its own derivative unique to $e^x$, or are there other functions with this property? My working for $e$ is that for any $y=a^x$, $\ln(y)=x\ln a$, so $\frac{dy}{dx}=\ln(a)a^x$, which equals $a^x$ if and only if $a=e$. Considering equations of different forms, for example $y=mx+c$: we get $\frac{dy}{dx}=m$, and $mx+c=m$ only when $m=0$ and $c=0$, so there is no solution other than $y=0$. For $y=x^a$, $\frac{dy}{dx}=ax^{a-1}$, which I think equals $x^a$ only when $a=x$, and therefore no solutions for a constant $a$ exist other than the trivial $y=0$. Is this property unique to equations of the form $y=a^x$, or do there exist other cases where it is true? I think this is possibly a question that could be answered through differential equations, although I am unfortunately not familiar with them yet! REPLY [4 votes]: The equation $$ \frac{\mathrm{d}}{\mathrm{d}x} f(x) = f(x) $$ is a linear (thus Lipschitz continuous), first-order ordinary differential equation on $\mathbb{R}$. By the Picard-Lindelöf theorem, such an equation has a unique solution for any initial condition of the form $$ f(0) = y_0 $$ with $y_0 \in \mathbb{R}$. In particular, for the condition $$ f(0) = 1 $$ the unique solution is $f = \exp$, so given that condition, $e \equiv \exp(1) = f(1)$ is unique. For the general initial condition, you get, because the ODE is linear, that the solution is always $$ f(x) = y_0 \cdot \exp(x). $$
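A numerical illustration of this uniqueness (a sketch; the solver settings and test values are my own choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate f' = f from f(0) = y0 and compare with y0 * exp(x):
# for every initial value the two agree, as Picard-Lindelof predicts.
for y0 in (1.0, -2.5, 0.0):
    sol = solve_ivp(lambda x, y: y, (0.0, 2.0), [y0],
                    t_eval=np.linspace(0.0, 2.0, 5), rtol=1e-10, atol=1e-12)
    print(y0, np.max(np.abs(sol.y[0] - y0 * np.exp(sol.t))))  # error ~ 0
```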
<|endoftext|> TITLE: For $n\geq 2$, any continuous map $f:\mathbb{C}P^n\rightarrow S^2$ induces the zero map on $H_2(*)$ QUESTION [7 upvotes]: I am working through an old qualifying exam from another university. My course did not cover as much material as what is on this test (e.g. we did not cover cohomology). So I am just working through the problems which look like fair game for the material covered in my course. I tried the following problem (because it looks like I should know how to do it), and I am very stuck on it: For $n\geq 2$, any continuous map $f:\mathbb{C}P^n\rightarrow S^2$ induces the zero map on $H_2(*)$. I know that $H_2(\mathbb{C}P^n)=\mathbb{Z}$ and $H_2(S^2)=\mathbb{Z}$ as well. So it is not obvious yet that $f_*$ is the zero map. I tried looking at the long exact sequence for the pair $(\mathbb{C}P^n, S^2)$ and computed the relative homology groups; all of the seemingly relevant ones are trivial (i.e. $i=1,2,3$). So the long exact sequence doesn't seem to give me any more information. I am not sure what I am missing here. REPLY [6 votes]: The cohomology ring of $\Bbb{CP}^n$ is $\Bbb Z[x]/(x^{n+1})$, with $|x|=2$. Any map $f: \Bbb{CP}^n \to S^2$ induces zero on cohomology when $n>1$, because if $z \in H^2(S^2)$ is a generator, then $z^2 = 0$ (as $H^4(S^2)=0$), so $0 = f^*(z^2) = f^*(z)^2$; since squaring is injective on $H^2(\Bbb{CP}^n)$ for $n \geq 2$, $f^*(z)$ must be zero. Now use the fact that the universal coefficient theorem is natural to see that the induced map on $H_2$ is also zero. (I don't really want to draw the diagram.) EDIT: Here's the above, stated differently. Let $f: S^2 \to S^2$ be a map of nonzero degree. I claim you cannot extend $f$ to $\Bbb{CP}^2$. Since $\Bbb{CP}^2$ is obtained from $S^2$ by attaching a 4-cell along the Hopf map $\eta: S^3 \to S^2$, it's only possible to extend the map over the 4-cell if $f\eta$ is null-homotopic; but $f\eta = (\deg f)^2[\eta] \in \pi_3(S^2) \cong \Bbb Z$, where the Hopf map is a generator of this homotopy group. The reason these are the same answer is that the way one often proves $n\eta$ is not null-homotopic is by calculating the cohomology ring structure on $X_n = S^2 \cup_{n\eta} D^4$ and seeing that it's nontrivial.<|endoftext|> TITLE: Proving that ${x +y+n- 1 \choose n}= \sum_{k=0}^n{x+n-k-1 \choose n-k}{y+k-1 \choose k} $ QUESTION [10 upvotes]: How can I prove that $${x +y+n- 1 \choose n}= \sum_{k=0}^n{x+n-k-1 \choose n-k}{y+k-1 \choose k} $$ I tried the following: We use the falling factorial power: $$y^{\underline k}=\underbrace{y(y-1)(y-2)\ldots(y-k+1)}_{k\text{ factors}},$$ so that $\binom{y}k=\frac{y^{\underline k}}{k!} .$ Then $${x +y+n- 1 \choose n} = \frac{(x +y+n- 1)!}{n! \, ((x +y+n- 1) - n)!} = \frac{1}{n!}\cdot(x +y+n \color{#f00}{-1})^{\underline n} $$ And $$ {x+n-k-1 \choose n-k}{y+k-1 \choose k}$$ $$=\frac{1}{(n-k)!}\cdot(x+n-k-1)^{\underline{n-k}}\cdot\frac{1}{k!}\cdot(y+k-1)^{\underline{k}}$$ $$=\frac{1}{k!\,(n-k)!}\cdot(x+n-k-1)^{\underline{n-k}}\cdot(y+k-1)^{\underline{k}}$$ so the right-hand side, multiplied by $n!$, is $$\sum_{k=0}^n{n \choose k}(x+n-k-1)^{\underline{n-k}}\,(y+k-1)^{\underline{k}}$$ By the Vandermonde identity for falling factorials this should be $$ ((x+n-k-1) + (y+k-1))^{\underline{n}}$$ $$= (x+y+n\color{#f00}{- 2})^{\underline{n}}$$ What is wrong? And how can I continue? :/ REPLY [3 votes]: Hint: The binomial formula with the Cauchy product \begin{align*} (x+y)^n=\sum_{k=0}^n\binom{n}{k}x^ky^{n-k} \end{align*} does not use falling factorials $x^{\underline{k}}$ resp. $y^{\underline{n-k}}$; moreover, the Vandermonde convolution $(x+y)^{\underline{n}}=\sum_{k=0}^n\binom{n}{k}x^{\underline{k}}\,y^{\underline{n-k}}$ requires bases $x$ and $y$ that do not depend on $k$, whereas your bases $x+n-k-1$ and $y+k-1$ vary with $k$; that is what goes wrong. Here is a step-by-step answer similar to that by @MarkoRiedel. It's convenient to use the coefficient-of operator $[z^k]$ to denote the coefficient of $z^k$ in a series.
This way we can write e.g. \begin{align*} \binom{n}{k}=[z^k](1+z)^n \end{align*} We obtain \begin{align*} \sum_{k=0}^{n}&\binom{x-1+n-k}{n-k}\binom{y-1+k}{k}\\ &=\sum_{k=0}^{\infty}\binom{x-1+n-k}{n-k}\binom{-y}{k}(-1)^k\tag{1}\\ &=\sum_{k=0}^\infty [t^{n-k}](1+t)^{x-1+n-k}[z^k](1+z)^{-y}(-1)^k\tag{2}\\ &=[t^n](1+t)^{x-1+n}\sum_{k=0}^\infty(-1)^kt^k(1+t)^{-k}[z^k](1+z)^{-y}\tag{3}\\ &=[t^n](1+t)^{x-1+n}\sum_{k=0}^\infty\left(-\frac{t}{1+t}\right)^k[z^k](1+z)^{-y}\\ &=[t^n](1+t)^{x-1+n}\left(1-\frac{t}{1+t}\right)^{-y}\tag{4}\\ &=[t^n](1+t)^{x+y-1+n}\tag{5}\\ &=\binom{x+y-1+n}{n} \end{align*} and the claim follows. Comment: In (1) we use the binomial identity $\binom{-p}{q}(-1)^q=\binom{p+q-1}{q}$ and we extend the upper limit of the series to $\infty$ without changing anything, since we are only adding zeros. In (2) we apply the coefficient-of operator twice. In (3) we do some rearrangements using the linearity of the coefficient-of operator, and we also use the rule \begin{align*} [z^{p-q}]A(z)=[z^p]z^{q}A(z) \end{align*} In (4) we apply the substitution rule \begin{align*} A(t)=\sum_{k=0}^\infty a_kt^k=\sum_{k=0}^\infty t^k[z^k]A(z)\\ \end{align*} with $z=-\frac{t}{1+t}$. In (5) we simplify $(1+t)^{x-1+n}\left(\frac{1}{1+t}\right)^{-y}=(1+t)^{x+y-1+n}$. In the last line we select the coefficient of $t^n$.
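For reassurance, the identity is easy to spot-check for small positive integer parameters (my own test harness):

```python
from math import comb

def lhs(x, y, n):
    return comb(x + y + n - 1, n)

def rhs(x, y, n):
    return sum(comb(x + n - k - 1, n - k) * comb(y + k - 1, k)
               for k in range(n + 1))

# Exhaustive check on a small grid of positive integers x, y.
print(all(lhs(x, y, n) == rhs(x, y, n)
          for x in range(1, 9) for y in range(1, 9) for n in range(9)))  # True
```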
<|endoftext|> TITLE: "Alternatives" to Natural Transformations QUESTION [8 upvotes]: I would like someone to either (1) point out the mistake in what follows or (2) confirm what is said is correct. This would be accomplished by addressing the part in yellow only. The rest of the question is context, optional for the reader to consider. Natural transformations are often explained as being "morphisms between functors". morphism between functors - consider a category $\mathscr{F}$ with objects that are functors (for some other category $\mathcal{C}$), and the morphisms are whatever as long as they satisfy the category axioms. The "morphisms between functors" are the morphisms of the category $\mathscr{F}$. However, I feel like this characterization is insufficient, because although natural transformations are the best way to define morphisms between functors, they are not the only way. As far as I can tell, there are other conditions imposed on natural transformations in their definition which are not strictly necessary for being a morphism between functors. Is this true? Is the only thing that differentiates natural transformations from arbitrary morphisms between functors that natural transformations satisfy the interchange law? The question is essentially whether or not the morphisms of such a category $\mathscr{F}$ must also be natural transformations for the category $\mathcal{C}$. I do not think this is the case, and I have given a purported counterexample below using an arrow category, but I am not sure if it is correct, since I have not seen something like this discussed before in any textbook I have looked at. Counterexample??? If we form the arrow category $\mathscr{D}^{\to}$ of some category $\mathscr{D}$ whose objects are categories $\mathcal{C} \in \mathcal{Ob}(\mathscr{D})$, then the objects of this arrow category $\mathscr{D}^{\to}=\mathscr{F}$ are functors and the morphisms of this arrow category $\mathscr{D}^{\to}=\mathscr{F}$ are therefore morphisms between functors, but not necessarily natural transformations. (I think, because these can be defined even between functors which do not have the same source and target categories, whereas natural transformations always map between functors with the same source and target categories, hence one reason why they are an appropriate choice of morphisms for functor categories whose objects are functors between two fixed categories.) As far as I can tell, natural transformations can be used to generalize the notion of homotopy (and perhaps were even invented for this purpose), hence the homotopy hypothesis and homotopy type theory. Since homotopies satisfy a (weak version of?) the interchange law, any attempt to generalize them must be a 2-morphism, and not an arbitrary morphism between functors. Hence the "proper", or "more natural", way to think about natural transformations is as the 2-morphisms of the 2-category $Cat$ of locally small categories, since thinking of them only as morphisms between functors seems to suggest less structure than they actually have. REPLY [9 votes]: Here is, in my opinion, a good explanation of why natural transformations are the most natural notion of morphism between functors, based on the algebra of $\mathbf{Cat}$. Suppose $\mathcal{C}$ is a category. The objects of $\mathcal{C}$ are precisely the functors $\mathbf{1} \to \mathcal{C}$, where $\mathbf{1}$ is the terminal category (one object, no nonidentity arrows). The arrows of $\mathcal{C}$ are precisely the functors $\mathbf{2} \to \mathcal{C}$, where $\mathbf{2}$ is the arrow category (two objects, one nonidentity arrow from one to the other). Now, suppose there were a good notion of a functor category: a category $\mathcal{D}^\mathcal{C}$ whose objects are functors $\mathcal{C} \to \mathcal{D}$. If we insist that the usual relationship between products and exponentials holds, then the following notions are equivalent: A functor $\mathbf{2} \to \mathcal{D}^\mathcal{C}$ A functor $\mathbf{2} \times \mathcal{C} \to \mathcal{D}$ Objects of the first type tell us what morphisms between functors should be. Objects of the second type we can handle explicitly, and it's not hard to show they give the usual definition of natural transformation.<|endoftext|> TITLE: A pair of sequences defined by mutual addition/multiplication QUESTION [11 upvotes]: Define sequences $\{a_n\},\,\{b_n\}$ by mutual recurrence relations: $$a_0=b_0=1,\quad a_{n+1}=a_n+b_n,\quad b_{n+1}=a_n\cdot b_n.\tag1$$ The sequence $\{a_n\}$ begins: $$1,\,2,\,3,\,5,\,11,\,41,\,371,\,13901,\,5033531,\,69782910161,\,...\tag2$$ It appears in OEIS as $A003686$ under the name "Number of genealogical $1$-$2$ rooted trees of height $n$." (note that indexing starts with $1$ rather than $0$ in OEIS). It seems that for $n\to\infty$, $$\ln\ln a_n=n\cdot\ln\phi+c+o(1),\tag3$$ where $\phi=\frac{1+\sqrt5}2$ is the golden ratio, the last term $o(1)$ exponentially decays in absolute value (and seems to have alternating signs for $n\ge8$), and the constant $$c\approx-1.11328370529375397170010672464407271138948509227239...\tag4$$ (see more digits here) The equivalent asymptotics is given in OEIS without proof. How can we prove $(3)$? Is there a closed form or some alternative representation (integral, series, product, continued fraction, etc) for the constant $c$? Is it irrational? Is there an efficient way to numerically calculate $c$ to a higher precision?
REPLY [3 votes]: Given a sequence $\{A_n\}$ with the Fibonacci recurrence relation $$A_{n+2} = A_{n+1} + A_n$$ we get the well-known asymptotic $$A_n \sim C\phi^n$$ where $\phi$ is the golden ratio and $C$ depends on the initial terms $A_1$ and $A_2$. For example if $A_1 = A_2 = 1$ we get the Fibonacci sequence and $C = 1/\sqrt{5}$. But if we start with different initial values we get a different constant $C$. Now consider a more general recurrence: $$A_{n+2} = A_{n+1} + A_n + g_n$$ where $g_n \rightarrow 0$ as $n \rightarrow \infty$. Then it seems clear that we'll also get the asymptotic $$A_n \sim C\phi^n$$ where now the constant $C$ depends also on the function $g_n$. In particular, for our sequence we have $$\log a_{n+2} = \log a_{n+1} + \log a_{n} + \log \left(1 + \frac{1}{a_n} - \frac{a_n}{a_{n+1}}\right)$$ where the last term goes to zero pretty quickly, so we have $$\log a_n \sim C\phi^n$$ for some $C$, but I am doubtful that we can get a closed formula for $C$. DETAILED PROOF ADDED LATER: LEMMA: Let $$A_{n+2} = A_{n+1} + A_n + c$$ be a recurrent sequence with constant $c$ and initial conditions $A_1$ and $A_2$. Then $$A_n = \frac{\phi^{n-1}-\psi^{n-1}}{\sqrt{5}} A_2 + \frac{\phi^{n-2}-\psi^{n-2}}{\sqrt{5}} A_1 + \left[\left(\frac{5+3\sqrt{5}}{10}\right)\phi^{n-2} + \left(\frac{5-3\sqrt{5}}{10}\right)\psi^{n-2} -1 \right]c$$ where $\phi = (1+\sqrt{5})/2$ and $\psi = (1-\sqrt{5})/2$. In particular, as $n \rightarrow \infty$, $$A_n = \left(\frac{\phi A_2 + A_1}{\sqrt{5}} + \frac{5+3\sqrt{5}}{10} c\right)\phi^{n-2} - c + o(1).$$ PROPOSITION: Let $$A_{n+2} = A_{n+1} + A_n + c_n$$ be a recurrent sequence with initial conditions $A_1$ and $A_2$ and $c_n$ a bounded sequence. Then $$A_n \sim C \phi^n$$ for some constant $C$ and $\phi = (1+\sqrt{5})/2$, i.e. the limit $$\lim_{n \rightarrow \infty} \frac{A_n}{\phi^n}$$ exists. Proof of LEMMA: Write the recurrence in matrix form, $$\begin{pmatrix}A_{n+2}\\A_{n+1}\\c\end{pmatrix}=\begin{pmatrix}1&1&1\\1&0&0\\0&0&1\end{pmatrix}\begin{pmatrix}A_{n+1}\\A_n\\c\end{pmatrix},$$ and take matrix powers. Proof of PROPOSITION: We can use the Lemma to bound our sequence by sequences with constants $c = M$ and $c = -M$ to see that $A_n/\phi^n$ is bounded, say $|A_n/\phi^n| \leq B.$ We will further prove that $A_n/\phi^n$ is a Cauchy sequence, and therefore converges to a limit. We have $|c_n| < M$ for some $M$. Let $\epsilon > 0$. Choose $N$ large enough so that $M / \phi^N < \epsilon$. For $n \geq N$, we again use the Lemma to bound our sequence by sequences with constants $c = M$ and $c = -M$, but now starting from initial values $A_{N-1}$ and $A_N$. So we have $$\left| A_n - \left(\frac{\phi^{n-N+1}-\psi^{n-N+1}}{\sqrt{5}} A_N + \frac{\phi^{n-N}-\psi^{n-N}}{\sqrt{5}} A_{N-1} \right)\right|$$ $$\leq \left(\left(\frac{5+3\sqrt{5}}{10}\right)\phi^{n-N} + \left(\frac{5-3\sqrt{5}}{10}\right)\psi^{n-N} -1 \right)M.$$ We divide through by $\phi^n$ to get $$\left| \frac{A_n}{\phi^n} - \left(\phi \frac{1 - (\psi/\phi)^{n-N+1}}{\sqrt{5}} \frac{A_N}{\phi^N} + \frac{1}{\phi}\frac{1-(\psi/\phi)^{n-N}}{\sqrt{5}} \frac{A_{N-1}}{\phi^{N-1}} \right)\right|$$ $$\leq \left(\left(\frac{5+3\sqrt{5}}{10}\right) + \left(\frac{5-3\sqrt{5}}{10}\right)\left(\frac{\psi}{\phi}\right)^{n-N} - \frac{1}{\phi^{n-N}} \right) \frac{M}{\phi^N} < 2\epsilon.$$ Now choose $N_2 > N$ large enough so that $|\psi/\phi|^{N_2-N} B < \epsilon$. Then for $n \geq N_2$ we have $$\left| \frac{A_n}{\phi^n} - \left(\phi \frac{1}{\sqrt{5}} \frac{A_N}{\phi^N} + \frac{1}{\phi}\frac{1}{\sqrt{5}} \frac{A_{N-1}}{\phi^{N-1}} \right)\right| < 3 \epsilon.$$ By the triangle inequality, when $n \geq N_2$, we have $$\left| \frac{A_n}{\phi^n} - \frac{A_{N_2}}{\phi^{N_2}}\right| < 6 \epsilon,$$ proving that $A_n/\phi^n$ is a Cauchy sequence.
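Numerically, the correction term decays so fast that $c$ can be read straight off the recursion for $L_n=\log a_n$. A small sketch (my own; plain float64 limits the accuracy to roughly ten digits, and rerunning the same loop under mpmath with higher working precision recovers further digits, which is one answer to the precision question asked above):

```python
import math

# a_{n+2} = a_{n+1} + a_n*b_n = a_n * a_{n+1} * (1 + 1/a_n - a_n/a_{n+1}),
# so with L_n = log(a_n) the recurrence closes over L alone.
phi = (1 + math.sqrt(5)) / 2
L_prev, L_cur = math.log(1), math.log(2)      # L_0, L_1
for n in range(2, 61):
    corr = math.log(1 + math.exp(-L_prev) - math.exp(L_prev - L_cur))
    L_prev, L_cur = L_cur, L_cur + L_prev + corr
print(math.log(L_cur) - 60 * math.log(phi))   # approx -1.1132837053
```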
<|endoftext|> TITLE: A strong version of the Dominated Convergence Theorem QUESTION [5 upvotes]: Let $(X, \Sigma, \mu)$ be a measure space, and let $f, f_n:X\rightarrow \mathbb{C}$ be measurable functions with $f_n\rightarrow f$ pointwise. Assume that there are integrable functions $G, g_n:X\rightarrow[0, \infty]$ with finite integrals such that $|f_n|\leq G+g_n$ for every $n$. Also assume that $\int g_n \rightarrow 0$. We need to show that $\int |f-f_n|\rightarrow 0$. I've tried using the dominated convergence theorem but couldn't find the dominating function. I have 2 main "problems" with the statement: $\underset{n}{\sup} g_n$ doesn't necessarily have a finite integral; $g_n$ doesn't have to converge pointwise to anything. That's what makes this statement different from all the others on this site I've checked. REPLY [6 votes]: Note that a sequence of real numbers converges to a limit $L$ iff every subsequence has a subsubsequence that converges to $L$. Since your hypotheses are preserved by passing to subsequences, it suffices to show that $\int|f-f_{n_k}|\to 0$ for some subsequence $(f_{n_k})$. In particular, since $\int g_n\to 0$, there is a subsequence $(g_{n_k})$ of $(g_n)$ which converges to $0$ pointwise almost everywhere. Assume first that the $f_n$ are real-valued; the complex case follows by treating real and imaginary parts separately. Write $h_n(x)=\min(f_n(x),g_n(x))$ if $f_n(x)\geq 0$ and $h_n(x)=\max(f_n(x),-g_n(x))$ if $f_n(x)<0$. Define $F_n(x)=f_n(x)-h_n(x)$. Since $|f_n|\leq G+g_n$, $|F_n|\leq G$. Furthermore, on our subsequence we have that $F_{n_k}$ converges to $f$ almost everywhere. By the dominated convergence theorem, $\int |f-F_{n_k}|\to 0$. Since $$\int |f-f_{n_k}|\leq\int|f-F_{n_k}|+\int |h_{n_k}|\leq\int|f-F_{n_k}|+\int g_{n_k},$$ it follows that $\int |f-f_{n_k}|\to 0$.<|endoftext|> TITLE: Quotient ring of Gaussian integers $\mathbb{Z}[i]/(a+bi)$ when $a$ and $b$ are NOT coprime QUESTION [17 upvotes]: The isomorphism $\mathbb{Z}[i]/(a+bi) \cong \Bbb Z/(a^2+b^2)\Bbb Z$ is well-known when the integers $a$ and $b$ are coprime. But what happens when they are not coprime, say $(a,b)=d>1$? — For instance if $p$ is prime (which is not coprime with $0$) then $$\mathbb{Z}[i]/(p) \cong \mathbb{F}_p[X]/(X^2+1) \cong \begin{cases} \mathbb{F}_{p^2} &\text{if } p \equiv 3 \pmod 4\\ \mathbb{F}_{p} \times \mathbb{F}_{p} &\text{if } p \equiv 1 \pmod 4 \end{cases}$$ (because $-1$ is a square mod $p$ iff $(-1)^{(p-1)/2}=1$). — More generally, if $n=p_1^{r_1} \cdots p_m^{r_m} \in \Bbb N$, then each pair of integers $p_j^{r_j}$ are coprime, so that by CRT we get $$\mathbb{Z}[i]/(n) \cong \mathbb{Z}[i]/(p_1^{r_1}) \times \cdots \times \mathbb{Z}[i]/(p_m^{r_m})$$ I was not sure how to find the structure of $\mathbb{Z}[i]/(p^{r}) \cong (\Bbb Z/p^r \Bbb Z)[X] \,/\, (X^2+1)$ when $p$ is prime and $r>1$. — Even more generally, in order to determine the structure of $\mathbb{Z}[i]/(a+bi)$ with $a+bi=d(x+iy)$ and $(x,y)=1$, we could try to use the CRT, provided that $d$ is coprime with $x+iy$ in $\Bbb Z[i]$. But this is not always true: for $d=13$ and $x+iy=2+3i$, we can't find Gaussian integers $u$ and $v$ such that $du + (x+iy)v=1$, because this would mean that $(2+3i)[(2-3i)u+v]=1$, i.e.
that $2+3i$ is a unit in $\Bbb Z[i]$, which it is not, because its norm is $13 \neq \pm1$. — I was not able to go further. I recall that my general question is to know what $\mathbb{Z}[i]/(a+bi)$ is isomorphic to, when $a$ and $b$ are integers which are not coprime (for instance $a=p^r,b=0$ or $d=(a,b) = a^2+b^2>1$). Thank you for your help! REPLY [10 votes]: The best approach is to recall that $\mathbb{Z}[i]$ is a PID (Principal Ideal Domain), which is in fact a Euclidean domain with respect to its usual norm. Once you notice this, you will realize that your approach using the Chinese Remainder Theorem is the correct one. The only problem is that you are factoring over $\mathbb{Z}$ instead of over $\mathbb{Z}[i]$. In this way, take $z\in\mathbb{Z}[i]$, factor it over $\mathbb{Z}[i]$ as $\prod q_k^{r_k}$, and you will obtain by the CRT that $$\mathbb{Z}[i]/(z)\cong \prod_k\mathbb{Z}[i]/(q_k^{r_k})$$ For example, in your example with $13(2+3i)$, write it as $(2+3i)^2(2-3i)$ and so you obtain $$\mathbb{Z}[i]/(13(2+3i))\cong \mathbb{Z}[i]/(2+3i)^2\times \mathbb{Z}[i]/(2-3i)$$ Now, the only remaining problems are determining which are the primes of $\mathbb{Z}[i]$ and what the structure of $\mathbb{Z}[i]/(q^r)$ is for $q$ prime in $\mathbb{Z}[i]$. The first question can be answered using the fact that $z$ is prime in $\mathbb{Z}[i]$ iff $\mathbb{Z}[i]/(z)$ is a field (you have already worked out, perhaps without noticing, which primes of $\mathbb{Z}$ remain prime in $\mathbb{Z}[i]$; I leave the proof to you), so we obtain: The primes of $\mathbb{Z}[i]$ are of the form: $(1+i)$. Up to multiplication by units, $1+i$ is the only prime associated to $2$. $p\in \mathbb{Z}$ prime integer with $p \equiv 3$ (mod 4). Up to multiplication by units, $p$ is the only prime of this form for a given integer prime $p \equiv 3$ (mod 4). $q=(x+iy)\in\mathbb{Z}[i]$ with $q\overline{q}$ prime integer. Up to multiplication by units, $q$ and $\overline{q}=(x-iy)$ are the only primes of this form for a given integer prime $q\overline{q}\equiv 1$ (mod 4). Once this is known, one should determine the structure of $\mathbb{Z}[i]/(q^r)$ for each one of these primes. We only have to distinguish three cases (we just use the isomorphism theorems): $q=1+i$. When $r=2s$ is even, we reduce to $$\mathbb{Z}/(2^s)[X]/(X^2+1)$$ that can be realized as the matrix subalgebra of $M_2(\mathbb{Z}/(2^s))$ given by $$\mathbb{Z}/(2^s)\left[\begin{pmatrix}0&-1\\1&0\end{pmatrix}\right]$$ Note that for $s=1$, this is $\mathbb{F}_2[X]/((X+1)^2)$, the ring of dual numbers over $\mathbb{F}_2$. When $r=2s-1$ is odd, we reduce to $$\mathbb{Z}/(2^s)[X]/(X^2+1,2^{s-1}(X+1))$$ for which the best realization I can think of is a quotient of the subalgebra of $M_2(\mathbb{Z}/(2^s))$ $$\mathbb{Z}/(2^s)\left[\begin{pmatrix}0&-1\\0&0\end{pmatrix}\right]$$ by the ideal generated by $$\begin{pmatrix}2^{s-1}&2^{s-1}\\2^{s-1}&2^{s-1}\end{pmatrix}$$ $q=p$ is an integer prime. Then as noted by you, we reduce to $$\mathbb{Z}/(p^r)[X]/(X^2+1)$$ A concrete realization can be obtained by considering the matrix subalgebra of $M_2(\mathbb{Z}/(p^r))$ given by $$\mathbb{Z}/(p^r)\left[\begin{pmatrix}0&-1\\1&0\end{pmatrix}\right]$$ $q=a+bi$ is not an integer prime. In this case, it should be noted that for $(a+bi)^n=a_n+b_ni$, the integers $a_n$ and $b_n$ must be coprime, because otherwise $(a+bi)^n$ would be divisible by a prime not associate to $a+bi$, violating the unique factorization. Hence $$\mathbb{Z}[i]/(q^r)\cong \mathbb{Z}/((q\overline{q})^r)$$ in this case by your cited result. This settles the question in general, and in a completely satisfactory way in many cases.
In the example you considered, we obtain $\mathbb{Z}/(13)\times \mathbb{Z}/(13^2)$. However, maybe there are better presentations for some of the above cases. In any case, the strategy I give works for any PID.
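As a computational footnote (my own sketch, and it recovers only the additive group, not the ring structure): additively, $\mathbb{Z}[i]/(a+bi)$ is $\mathbb{Z}^2$ modulo the column span of the multiplication-by-$(a+bi)$ matrix $\begin{pmatrix}a&-b\\b&a\end{pmatrix}$ in the basis $\{1,i\}$, and for a $2\times 2$ integer matrix the Smith normal form is immediate:

```python
from math import gcd

def additive_structure(a, b):
    """Underlying abelian group of Z[i]/(a+bi) as Z/d1 x Z/d2 with d1 | d2.

    For a 2x2 integer matrix the Smith normal form is diag(d1, d2), where
    d1 is the gcd of the entries and d1*d2 = |det|; here det = a^2 + b^2.
    """
    d1 = gcd(a, b)                    # gcd of the entries of [[a, -b], [b, a]]
    return d1, (a*a + b*b) // d1

print(additive_structure(26, 39))     # (13, 169): the example 13(2+3i) above
print(additive_structure(2, 3))       # (1, 13)
print(additive_structure(3, 0))       # (3, 3): F_9 has additive group (Z/3)^2
```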