TITLE: Is every irrational number containing only $2$ distinct digits, transcendental? QUESTION [13 upvotes]: If we have an irrational number, consisting of only $2$ distinct digits, for example: $$0.01011011101111011111 \cdots$$ Can we conclude that the number is transcendental? It is conjectured that every irrational algebraic number is normal in base $10$. This would imply that the answer to my question is yes. But can we prove it? REPLY [2 votes]: You may want to look at this answer in mathoverflow. Our conditions are not strong enough to use the theorem though, as we just have $c_x(n)\leq 2^n$.<|endoftext|> TITLE: Asymptotic value of a sequence QUESTION [7 upvotes]: Assume a real sequence $1=a_1\leq a_2\le \cdots \leq a_n$, and $a_{i+1}-a_i\leq \sqrt{a_i}$. Does this hold: $$\sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} \in O(\log n)$$ REPLY [4 votes]: Lemma 1: If $1 = a_1 \leq a_2 \leq a_3 \leq \cdots$ and $a_{i+1}-a_i \leq \sqrt{a_i}$ then $$ \sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} = \Theta(\log a_n) $$ for all $n$. Proof: By the assumptions we have $$ 0 \leq \frac{a_{i+1}-a_i}{a_i} \leq \frac{1}{\sqrt{a_i}} \leq 1, $$ and since $$ \frac{a_{i+1} - a_i}{a_i} = \frac{a_{i+1}}{a_i} - 1 \tag{1} $$ this is equivalent to $$ 1 \leq \frac{a_{i+1}}{a_i} \leq 2. \tag{2} $$ If $1 \leq x \leq 2$ then $$ \log x \leq x-1 \leq \frac{\log x}{\log 2}, $$ so setting $x = a_{i+1}/a_i$ in equations $(1)$ and $(2)$ yields $$ \log \frac{a_{i+1}}{a_i} \leq \frac{a_{i+1} - a_i}{a_i} \leq \frac{1}{\log 2} \log \frac{a_{i+1}}{a_i}. $$ Summing this over the range $i=1,2,\ldots,n-1$ yields $$ \log \frac{a_n}{a_1} \leq \sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} \leq \frac{1}{\log 2} \log \frac{a_n}{a_1}. $$ $$ \tag*{$\square$} $$ Lemma 2: If $a_i \geq 1$ and $a_{i+1}-a_i \leq \sqrt{a_i}$ then $1 \leq a_i \leq i^2+2$. Proof: Summing $a_{i+1}-a_i \leq \sqrt{a_i}$ over the range $i=1,2,\ldots,n-1$ yields $$ a_i - 1 = \sum_{j=1}^{i-1} (a_{j+1} - a_j) \leq \sum_{j=1}^{i-1} \sqrt{a_i} \leq i\sqrt{a_i} $$ and hence $$ a_i - i\sqrt{a_i} - 1 \leq 0. \tag{3} $$ The parabola $y = x^2 - ix - 1$ lies below the $x$-axis for $1 \leq x \leq \frac{1}{2}\left(i + \sqrt{4 + i^2}\right)$, so equation $(3)$ combined with the assumption $a_i \geq 1$ yields $$ 1 \leq a_i \leq \left(\tfrac{i}{2} + \tfrac{1}{2}\sqrt{4 + i^2}\right)^2 \leq i^2 + 2, $$ where the last inequality follows from Jensen's inequality. $$ \tag*{$\square$} $$ Claim: If $1 = a_1 \leq a_2 \leq a_3 \leq \cdots$ and $a_{i+1}-a_i \leq \sqrt{a_i}$ then $$ \sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} = O(\log n) $$ for all $n$. Proof: The result is trivially true for $n=1$ so suppose $n \geq 2$. Combining Lemmas 1 and 2 yields $$ 0 \leq \sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} \leq C\log a_n \leq C\log(n^2+2) \leq D \log n $$ for some constants $C$ and $D$. $$ \tag*{$\square$} $$ Intuition The sum in question behaves in many ways like a "discrete logarithm", and in the sense of Lemma 1 we have something like $$ \sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} \approx \log \frac{a_n}{a_1}. $$ For example, if we double every term of the sequence $(a_n)$ then the values on both sides of the $\approx$ remain unchanged. Further, if $a_n$ is the constant sequence $a_n = a_1$ then both sides of the $\approx$ are equal. (I'm not sure what the analogue of $\log xy = \log x + \log y$ would be.) We could try to approach this problem by looking at the smooth analogues of the sequence and sum. 
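Before passing to smooth analogues, the claim is easy to sanity-check numerically, e.g. with the extremal sequence $a_{i+1}=a_i+\lfloor\sqrt{a_i}\rfloor$ (a throwaway Python sketch; the step count is arbitrary):

```python
import math

a = 1
total = 0.0
for _ in range(10**5):
    step = math.isqrt(a)          # largest allowed increment, floor(sqrt(a_i))
    total += step / a             # (a_{i+1} - a_i) / a_i
    a += step

print(total, math.log(a))        # total stays between log(a_n) and log(a_n)/log(2)
```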
The difference $a_{i+1} - a_i$ can be thought of as a discrete derivative and the sum as a discrete integral. So if we can find some function $f$ with $f(n) \approx a_n$ and $$ f'(n) \approx a_{n+1} - a_n $$ then we might expect that $$ \sum_{i=1}^{n-1} \frac{a_{i+1}-a_i}{a_i} \approx \int_1^n \frac{f'(x)}{f(x)}\,dx = \log \frac{f(n)}{f(1)} \approx \log \frac{a_n}{a_1}. $$ This observation was what led me to the approach in this answer.<|endoftext|> TITLE: What is the point of a $1\times 1$ matrix? QUESTION [5 upvotes]: Surely a $1\times 1$ matrix can only 'produce' vectors with one entry, and can also take only one-entry vectors as input. So, is there any use for $1\times 1$ matrices? Since to me they do the same as scalars, only worse. REPLY [2 votes]: The point of an $n\times n$ (or $m\times n$) matrix is usually to generalize to $n$ dimensions what you already knew in $1$ dimension. And the point of the $1\times 1$ matrix is to give you back what you knew in $1$ dimension. So you won't need to prove something twice, once for one dimension and once for multiple dimensions (ಠ‿ಠ)<|endoftext|> TITLE: Abelian group (Commutative group) QUESTION [5 upvotes]: Prove that if in a group $(ab)^2= a^2 b^2$ then the group is commutative. I am having a hard time doing this. Here is what I have so far: Proof: $a^2 b^2 = a^1 a^1 b^1 b^1 = aa^{-1}bb = ebb$. Hence, $aa^{-1}=e$. I am stuck; I do not know if this is the right process for proving this. REPLY [5 votes]: $aabb=(ab)^2=abab$ implies that $a^{-1}aabbb^{-1} = a^{-1}ababb^{-1}$. So, $eabe = ebae$. Hence $ab=ba$.<|endoftext|> TITLE: Projection of a function onto another QUESTION [5 upvotes]: In an inner product space, I can imagine the projection of one vector onto another (via the dot product). And in general, for vectors of the form $[v_1,v_2,v_3,\cdots]$, I know how to compute their projection onto one another. But for continuous vectors, i.e. when using functions as vectors, I can't quite imagine what the dot product (or inner product) or a projection means. Can anyone help me with this? REPLY [5 votes]: Let's reconsider the familiar vector space $\mathbb R^2$ for a moment, examine the properties of that space and how they might inform an intuition about vector spaces of functions, and try to distinguish good intuition from harmful intuition. Consider the vectors $(1,1)$ and $(2,-2).$ If you look only at their $x$ coordinates, the vectors appear to be in the same direction. If you look only at $y$ coordinates, the vectors appear to be opposite. In fact, the vectors are orthogonal, but to find this from their coordinates you must look at all of their coordinates. If all you know about a vector is its $x$ coordinate, you know very little about its direction. It could be almost straight up, almost straight down, or anything in between. Trying to compare functions $f(x)$ and $g(x)$ by looking at only one value of $x$ is worse than trying to find the angles of vectors by looking at only one coordinate. To decide whether functions are orthogonal you have to look at the entire region over which the inner product is measured, and find out whether the functions tend to imitate one another, oppose one another, or neither. Another property of vectors in $\mathbb R^2$ is that if you project an arbitrary vector onto each of two orthogonal vectors, and then add the two projected vectors, you get back the original vector. This does not work if the vectors you project onto are not orthogonal.
But it does work for projecting a function onto other functions, provided that the functions you project onto are orthogonal. (It also may require an infinite set of such functions to project onto, because the typical function vector spaces we look at have that many dimensions.) Consider the space of continuous real-valued functions $f$ such that $f(x+2\pi)=f(x),$ with the inner product $$ \langle f,g \rangle = \frac{1}{2\pi} \int_0^{2\pi} f(x)g(x)\,dx. $$ The sine and cosine functions are in this space and are orthogonal. Although they are almost always both non-zero, they oppose each other as much as they reinforce each other, and the integral comes out to exactly zero. For $0<\alpha<\frac\pi2,$ the function $\sin(x+\alpha)$ is also in this space but is not orthogonal to either $\sin(x)$ or $\cos(x).$ Instead, its inner product with $\sin(x)$ is $\frac12\cos(\alpha)$ and its inner product with $\cos(x)$ is $\frac12\sin(\alpha).$ But if we take the inner product of $\sin(x)$ with itself, we get $\frac12,$ and the same is true for $\cos(x).$ So if we use the usual formula for projecting a vector $v$ onto a vector $u,$ $$ \left(\frac{\langle v,u \rangle}{\langle u,u \rangle}\right) u, $$ and use it to project $\sin(x+\alpha)$ onto the sine and cosine functions, we get (respectively) $$ \cos(\alpha)\sin(x) \qquad \text{and} \qquad \sin(\alpha)\cos(x). $$ And as we know from the identity for the sine of an angle sum, the sum of these two functions gives us back the function $\sin(x+\alpha).$ So this particular intuition from the vector space $\mathbb R^2$ does carry over at least somewhat into at least one inner product space of functions. If you look at the derivation of a Fourier series for a periodic function, this idea of adding the projections onto orthogonal functions works quite well.<|endoftext|> TITLE: Intuition behind lack of cycles in the Collatz Conjecture QUESTION [5 upvotes]: The Collatz Conjecture concerns the function $f(n) = \begin{cases} n/2, & \text{if $n$ is even} \\ 3n+1, & \text{if $n$ is odd} \end{cases}$. The conjecture says that if you start with any natural number and repeatedly apply the function, you will eventually end up at 1, after which you will cycle forever between 1, 4, and 2. To prove the conjecture, one would need to prove that (1) no sequence generated in this fashion grows without bound, and (2) no sequence generated in this fashion falls into a cycle other than the aforementioned 1, 4, 2 cycle. The first of these requirements seems very likely to be true, but I don't see why the second holds up so well (computers have verified an absence of other cycles for mindbogglingly large starting numbers). It seems to me that if you're bouncing around the integers in a relatively random pattern, every now and then you should end up back where you started. Is there an intuitive explanation for why cycles are so rare? REPLY [11 votes]: Consider the "compacted" formula (as also used in the "Syracuse" version of the Collatz problem), defining one "step" from an odd $a \ge 1$ to an odd $b \ge 1$: $$ b = {3a+1\over 2^A } \tag 1$$ where $A$ is the number of halving steps. Now to have a (very) simple cycle of just one step we must have: $$ a = {3a+1\over 2^A } \tag 2$$ It is simple to show that there is no solution except $a=1$, just by rearranging: $$ a = {3a+1\over 2^A } \\ 2^A = {3a+1\over a } $$ $$ 2^A = 3+{1\over a } \tag3\\ $$ For the rhs to be a perfect power of $2$ we need $a=1$, which allows (only) the well known "trivial" cycle $1 \to 1$.
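This absence of one-step cycles is also easy to confirm by brute force; a small sketch of the compacted map (the search bound $10^6$ is arbitrary):

```python
def syracuse(a):
    """One compacted Collatz step on an odd a: divide 3a+1 by 2 until odd again."""
    b = 3 * a + 1
    while b % 2 == 0:
        b //= 2
    return b

fixed = [a for a in range(1, 10**6, 2) if syracuse(a) == a]
print(fixed)   # [1] -- only the trivial cycle, as eq. (3) predicts
```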
Now extending this to a two-step cycle we need to have $$\begin{array}{} b = {3a+1\over 2^A } & a = {3b+1\over 2^B }\\ \end{array} \tag 4$$ with some so far undetermined $A$ and $B$. To see the space of possible solutions better, we can take the product of the lhs's, $a \cdot b$, which must equal the product of the rhs's: $$\begin{array}{} a \cdot b = {3a+1\over 2^A } \cdot {3b+1\over 2^B } \end{array}$$ resulting in the required equality $$\begin{array}{} 2^{A+B} = (3+{1\over a}) \cdot (3+{1\over b }) \end{array} \tag 5$$ This shows that we can only have a solution if the rhs is an integer at all, which is difficult enough, but it must actually equal a perfect power of $2$ larger than $3^2=9$, namely $16$. Now what values must $a,b$ have so that the rhs equals $16$? Both parentheses require $a=b=1$, which is the already known trivial cycle, and no other solution is possible. Now you see the pattern of how this generalizes to the disproof of the 3-step cycle, the 4-step cycle, and so on for the $N$-step cycle. Unfortunately, for several $N$ the possibility of small $a,b,c,\ldots$ exists "theoretically", meaning that the rhs-product can reach a perfect power of $2$ for some pairwise distinct $a_1, a_2, a_3, \ldots, a_N$ all greater than $1$. To get a better intuition one might just try some $N$ by hand, remembering the conditions that all members of an assumed cycle must be greater than $1$, odd, not divisible by $3$, and pairwise distinct. To proceed further we bring in the knowledge that, by simple heuristics, we already know $a_k \gt 1000$ (this can be checked manually with a programming language), or $a_k \gt 1\,000\,000$, and even $a_k \gt 2^{60}$ (the latter by the extensive numerical searches of Oliveira e Silva and of Roosendaal). If we use that knowledge and assume the minimal $a_1$ to be, say, $a_1=1+2^{60}$, $a_2 = {3a_1+1\over 2^{A_1}}$ and so on, we find that for all manually reachable $N$ the rhs is disappointingly near the value of $3^N$, and no perfect power of $2$ is within any realistic distance of it. Using the continued fraction of $\beta = \log(3)/\log(2)$ one can find that we need $N \gt 150\,000$ (or even much more; I don't have the actual value at hand). Well, this is just meant to give an intuition as to why cycles are so unlikely. If you would like to do more, allow negative values for $a_1,a_2,\ldots$ and see which solutions for small $N$ you can get. Or go on and compare the $5x+1$ problem with this: there we actually find two or three possible cycles with small $N$ and $a_1 \lt 100$, but beyond that the above method can be used to say much more about the nonexistence of certain $N$-step cycles. I have even found two cycles for the $181x+1$ version with $N=2$ and small $a_1 \lt 100$, but beyond that the formula comparing the rhs-product for $N$ steps to perfect powers of $2$ indicates the same "difficulty" for cycles to exist. [Update] Just to give an example of how a (possibly infinite, I don't know at the moment) set of $N$ can be ruled out as possible cycle lengths, assume for example $N=12$. Then we have on the rhs $12$ parentheses of the form $(3+1/a_k)$, whose product should equal $2^S$ (where I use in general the letter $S$ for $S=A_1+A_2+\cdots+A_{12}$, just the sum of exponents, which is also the number of even steps). Then what is the next possible perfect power of $2$ above $3^{12}$?
We get, using $\beta=\log(3)/ \log(2)$: $S=\lceil N \cdot \beta \rceil = 20$, thus we have the condition $$2^{20} =(3+ {1\over a_1})(3+ {1\over a_2})...(3+ {1\over a_{12}}) \tag 6$$ and for the smallest admissible, pairwise distinct $a_k$ the candidate solution is $$2^{20} \overset?=(3+ {1\over 5})(3+ {1\over 7})(3+ {1\over 11})...(3+ {1\over 37}) \tag 7$$ Of course we could simply compute the values of the LHS and the RHS, getting $$ 2^{20}= 1048576 \gt 697035.738776 $$ and because increasing the values of the $a_k$ would only decrease the value of the rhs further, there is no solution and thus no $N=12$-step cycle. But for the sake of generality we proceed differently. Instead we make a rough estimate of the mean value of the $a_k$. Assume all $a_k$ are equal to their "mean" $a_m$; then we can rewrite the equation as $$ 2^S = (3+{1\over a_m})^N \\ 2^{S/N} = (3+{1\over a_m})\\ 2^{S/N} -3 = {1\over a_m}\\ {1 \over 2^{S/N} -3} = a_m $$ $$ a_m = {1 \over 2^{20/12}-3}\approx 5.72075494219... \tag 8 $$ and because $a_m$ is a rough mean, some $a_k$ must be smaller and some must be larger. But there is only one possible value for any $a_k \lt a_m$, namely $a_1=5$. After that, moving one parenthesis with that assumed value over to the lhs in eq. (6) we get $$ 2^S/(3+1/5) = (3+{1\over a_m})^{N-1} \\ 2^{20} \cdot 5/16 = (3+{1\over a_m})^{11} \\ a_m \approx 5.79638745091$$ and we find that there is no way for $a_m$ to be a rough mean of the remaining $11$ values $a_k$: some of them would have to be smaller than $a_m$, but no admissible odd integer below it is left ($1$ is excluded, $3$ is divisible by $3$, and $5$ is already used), so an $N=12$-step cycle cannot exist. We see nicely that for some $N$ we can exclude the possibility of a cycle just from the basic assumptions on the form of the members $a_k$ of a possible cycle via formula (6), and for such $N$ we do not need to resort to the bound $a_k \gt 2^{60}$ found by Oliveira e Silva and Roosendaal. However (and this leads to the observation that the general solution of the cycle problem needs some deeper thinking), there are some $N$ for which $2^S$ is comfortably near $3^N$, so a set of small $a_k$ can make the rhs approach the lhs. The continued fraction of $\beta$ gives us pairs of $N$ and $S$ for which $2^S$ is especially near $3^N$ and for which a cycle cannot be excluded by this method alone. [update 2] I had not done this explicitly before, but taking the continued-fraction convergents and filling in for the $a_k$ the consecutive smallest possible integers ($5,7,11,13,...$) we get the following small table:

    N       S        lhs = 2^S              rhs = (3+1/a_1)...(3+1/a_N)   lhs/rhs   a_m
    1       2        4                      3.2                           1.25      1             ~ 2^0
    5       8        256                    292.571428571                 0.875     31.81         ~ 2^5
    41      65       3.68934881474 E19      5.44736223436 E19             0.677     1192.08       ~ 2^10
    306     485      9.98959536101 E145     5.57867460455 E146            0.179     99780.79      ~ 2^16
    15601   24727    3.70427126979 E7443    1.06756786898 E7444           0.346     285817586.21  ~ 2^28
    79335   125743   2.59863196329 E37852   8.97264176433 E37852          0.289     7216102492.69 ~ 2^33

and we see that the RHS can reach the LHS for those specific $N$, and thus the assumption of the smallest possible values for the $a_k$ does not suffice to exclude the possibility of a cycle.
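For reference, the lhs/rhs column of this table can be recomputed directly, taking for the $a_k$ the smallest admissible values $5,7,11,13,\ldots$ (odd, greater than $1$, not divisible by $3$); a short sketch:

```python
from math import log, log2, ceil

def smallest_admissible(n):
    """First n odd integers > 1 that are not divisible by 3."""
    out, k = [], 5
    while len(out) < n:
        if k % 3 != 0:
            out.append(k)
        k += 2
    return out

beta = log(3) / log(2)
for N in (1, 5, 41, 306, 15601, 79335):        # convergents of beta
    S = ceil(N * beta)
    log2_rhs = sum(log2(3 + 1 / a) for a in smallest_admissible(N))
    print(N, S, 2 ** (S - log2_rhs))           # last value is lhs/rhs
```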
The "mean" $a_m$, estimated by the geometric mean of all parentheses as proposed above, are in the last column; they increase with the size of $N$ and we get an impression as at which cyclelength $N$ this allows values of $a_1 >2^{60}$ and thus this method has its heuristical limit: the last entry in the last row means $a_m \approx 2^{33}$ and this means, that the knowledge that $a_1>2^{60}$ suffices to have disproved all cycles of steps $N \le 79335$. But it might anyway be interesting to see, what explicitely smallest value for $a_1$ (which we can assume to be the smallest of the cycle) and the following sequence of consecutive possible members would suffice to have the LHS smaller than the RHS. That would surely be a nice exercise ...<|endoftext|> TITLE: How to evaluate the integral $\int_0^{2\pi} \theta\exp(x\cos(\theta) + y\sin(\theta))) d\theta$ QUESTION [11 upvotes]: I found five other related integrals whose proofs I am studying now A, B, C, D, and E $$\int^{2\pi}_0e^{\cos \theta}\cos(a\theta -\sin \theta)\,d \theta = \frac{2\pi}{a!}$$ $$\int_0^{2\pi} \exp(\cos(\theta)) \cos(\theta + \sin(\theta)) = 0$$ $$ \int_0^{2\pi} \exp(\alpha \cos(\theta))\cos(\sin(\theta)) = 2\pi I_0(\sqrt{1 - \alpha^2})$$ $$\int_0^{2\pi} \exp(x\cos(\theta) + y\sin(\theta))) = 2\pi I_0(\sqrt{x^2 + y^2})$$ $$ \int_0^\dfrac{\pi}{2}\beta^\alpha\exp\left(-\beta\cos(\theta)\right)d\theta = \dfrac{1}{2}\beta^\alpha\pi\left(J_0(\beta)-L_0(\beta)\right)$$ I was also able to find a very general statement in Gradshteyn as entry number 3.338. $$\int_{-\pi}^{\pi} \frac{\exp{\frac{a + b\sin x + c \cos x}{1 + p \sin x + q \cos x}}}{1 + p \sin x + q \cos x} dx = \frac{2\pi e^{-\alpha}I_0(\beta)}{\sqrt{1 - p^2 - q^2}}$$ $$\textrm{where } \alpha = \frac{bp + cq -a}{1 - p^2 - q^2},\; \beta = \sqrt{\alpha^2 - \frac{a^2 - b^2 - c^2}{1 - p^2 - q^2}}$$ But the simplest approach of using integration by parts to reduce my problem to one of these does not work. Background Here's some background into why I am interested in this integral, let $v = [x, y] \in \mathbb{R}^2$ and $r = [\cos(\theta), \sin(\theta)] \in \mathbb{R}^2$, Consider the value of $$\underset{\theta \tilde{} \textrm{Hill}}{E}[\exp(v^Tr)]$$ This is the expected value of exponential of the projection of a random vector chosen using the Hill distribution, where the "Hill" is an unnormalized distribution that linearly increases from $0$ at $-\pi$ to $1$ at $0$ and then decreases linearly from $0 \textrm{ to } \pi$. Discarding normalizing factor of Hill, This expectation will become: $$ \int_{-\pi}^{0} (\theta + \pi)\exp(x\cos\theta + y\sin\theta) d\theta + \int_{0}^{\pi} (\pi - \theta) \exp(x\cos\theta + y\sin\theta) d\theta $$ Now, there are simplifying unnormalized distributions I could assume in my model, instead of Hill, such as Uniform from 0 to $2\pi$, or $\exp(\cos(\theta))$ both of these distribution allow analytical calculation of the above expectation just based on the identities written below, but I want to know which distributions I can compute this expectation for (Can I do this for Hill?) I will guess that I can only do it for distributions that have some finite decomposition in terms of spherical harmonics. Unfortunately, my knowledge is lacking in complex analysis and spherical harmonics so I can't quickly assess my options. 
REPLY [3 votes]: Not hard to see that \begin{align*} \int_0^{2 \pi } \theta \cdot e^{x \cos \theta +y \sin \theta } \, \mathrm{d}\theta &=\int_{0}^{2\pi }\theta \sum_{k=0}^{\infty }\frac{\left ( x \cos \theta +y \sin \theta \right )^{k}}{k!}\, \mathrm{d}\theta \\ &=\int_{0}^{2\pi }\theta \sum_{k=0}^{\infty }\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}y^{j}\sin^{j}\theta x^{k-j}\cos^{k-j}\theta \, \mathrm{d}\theta \\ &=\sum_{k=0}^{\infty }\frac{1}{k!}\sum_{j=0}^{k}\binom{k}{j}y^{j}x^{k-j}\int_{0}^{2\pi }\theta \sin^{j}\theta \cos^{k-j}\theta \, \mathrm{d}\theta \end{align*} For the last integral, with the help of Mathematica we get the following complex result \begin{align*} \int_{0}^{2\pi }\theta \sin^{m}\theta \cos^{n}\theta \, \mathrm{d}\theta =&\frac{(-1)^{m+n} \pi ^2 \csc \left(\frac{n \pi }{2}\right) \csc \left(\frac{\pi m}{2}+\frac{n \pi }{2}\right) \Gamma \left(\frac{n}{2}+\frac{1}{2}\right)}{4 \Gamma \left(\frac{1}{2}-\frac{m}{2}\right) \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}+\frac{\pi ^2 \csc \left(\frac{n \pi }{2}\right) \csc \left(\frac{\pi m}{2}+\frac{n \pi }{2}\right) \Gamma \left(\frac{n}{2}+\frac{1}{2}\right)}{4 \Gamma \left(\frac{1}{2}-\frac{m}{2}\right) \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+(-1)^{m+n} 2^{\frac{m}{2}-\frac{1}{2}} \cos \left(\frac{m \pi }{2}\right) \Gamma \left(\frac{m}{2}+\frac{1}{2}\right) \Gamma \left(-\frac{m}{2}-n-\frac{1}{2}\right) \Gamma (n+1) \, _2F_1\left(\frac{1-m}{2},\frac{m+1}{2};\frac{m}{2}+n+\frac{3}{2};\frac{1}{2}\right)\\ &+\frac{(-1)^{m+2 n} 2^{\frac{m}{2}-\frac{1}{2}} \pi \Gamma \left(\frac{m}{2}+\frac{1}{2}\right) \Gamma (n+1) \, _2F_1\left(\frac{1-m}{2},\frac{m+1}{2};\frac{m}{2}+n+\frac{3}{2};\frac{1}{2}\right)}{\Gamma \left(\frac{m}{2}+n+\frac{3}{2}\right)}\\ &+\frac{(-1)^{m+n} \Gamma \left(\frac{n}{2}\right) \Gamma \left(\frac{m}{2}+1\right) \, _3F_2\left(\frac{1}{2},1,\frac{m}{2}+1;\frac{3}{2},1-\frac{n}{2};1\right)}{2 \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+\frac{\Gamma \left(\frac{n}{2}\right) \Gamma \left(\frac{m}{2}+1\right) \, _3F_2\left(\frac{1}{2},1,\frac{m}{2}+1;\frac{3}{2},1-\frac{n}{2};1\right)}{2 \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+\frac{(-1)^n \Gamma \left(\frac{m}{2}\right) \Gamma \left(\frac{n}{2}+1\right) \, _3F_2\left(\frac{1}{2},1,\frac{n}{2}+1;\frac{3}{2},1-\frac{m}{2};1\right)}{2 \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+\frac{(-1)^{m+2 n} \Gamma \left(\frac{m}{2}\right) \Gamma \left(\frac{n}{2}+1\right) \, _3F_2\left(\frac{1}{2},1,\frac{n}{2}+1;\frac{3}{2},1-\frac{m}{2};1\right)}{2 \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+\frac{(-1)^n \pi \Gamma \left(\frac{m}{2}+\frac{1}{2}\right) \left(\pi \csc \left(\frac{m \pi }{2}\right) \csc \left(\frac{\pi m}{2}+\frac{n \pi }{2}\right)+\pi \sec \left(\frac{n \pi }{2}\right)\right)}{4 \Gamma \left(\frac{1}{2}-\frac{n}{2}\right) \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+\frac{(-1)^{m+2 n} \pi \Gamma \left(\frac{m}{2}+\frac{1}{2}\right) \left(\pi \csc \left(\frac{m \pi }{2}\right) \csc \left(\frac{\pi m}{2}+\frac{n \pi }{2}\right)+\pi \sec \left(\frac{n \pi }{2}\right)\right)}{4 \Gamma \left(\frac{1}{2}-\frac{n}{2}\right) \Gamma \left(\frac{m}{2}+\frac{n}{2}+1\right)}\\ &+\frac{(-2)^{m+n} \pi ^{5/2} \Gamma \left(\frac{m}{2}+\frac{1}{2}\right) \sec \left(\frac{\pi m}{2}+n \pi \right)}{\Gamma \left(\frac{1}{2}-\frac{n}{2}\right) \Gamma \left(-\frac{m}{2}-\frac{n}{2}+\frac{1}{2}\right) \Gamma (m+n+1)} \end{align*} Looking at this horrible result, if I'm not doing it the wrong way, I don't think there will be a closed form
for the integral; at least, it won't be a short one.<|endoftext|> TITLE: Patterns in $\frac{80}{81}$ and $\frac{10}{81}$. QUESTION [6 upvotes]: The decimal form of $\frac{80}{81}$ is $0.987654320\ldots$; notice the expected $1$ is missing. The decimal form of $\frac{10}{81}$ is $0.12345679\ldots$; notice the expected $8$ is missing. Can someone explain why the decimal forms are the way they are? I think it has something to do with $(10-1)^2=81$. REPLY [3 votes]: This is because $\dfrac1{(1-x)^2} =\sum_{n=0}^{\infty} (n+1)x^n $. Putting $x=.1$, this is $\dfrac1{.9^2} =\dfrac1{.81} =\dfrac{100}{81} =1+2/10+3/100+4/1000 + ... $ so $\dfrac{10}{81} =1/10+2/100+3/1000+4/10000 + ... =0.1234567... $. The next terms are $8/10^8+9/10^9 +10/10^{10}+... $, but we get a carry here (from the $10/10^{10}$) and these terms have a value of $9/10^8+0/10^9 +0/10^{10}+... $, which explains the $....6790...$. The other is just $1-x$ where $x$ is a decimal: $\dfrac{80}{81} =1-\dfrac{1}{81} $.<|endoftext|> TITLE: Interesting closed form for $\int_0^{\frac{\pi}{2}}\frac{1}{\left(\frac{1}{3}+\sin^2{\theta}\right)^{\frac{1}{3}}}\;d\theta$ QUESTION [25 upvotes]: Some time ago I used a formal approach to derive the following identity: $$\int_0^{\frac{\pi}{2}}\frac{1}{\left(\frac{1}{3}+\sin^2{\theta}\right)^{\frac{1}{3}}}\;d\theta=\frac{3^{\frac{1}{12}}\pi\sqrt{2}}{AGM(1+\sqrt{3},\sqrt{8})}\tag{1}$$ where $AGM$ is the arithmetic-geometric mean. Wolfram Alpha does not tell me whether this is correct, but it does appear to be accurate to many decimal places. I have three questions. First: can anyone verify whether $(1)$ is in fact correct? Second: is there a way of generalizing $(1)$ to integrals of the form $\int_0^{\frac{\pi}{2}}\left(a+\sin^2{\theta}\right)^{-\frac{1}{3}}\;d\theta$, or is this integral more special? My derivation (see below) appears to work only for $a=\frac{1}{3}$. Third: there is a superficial similarity between $(1)$ and elliptic integrals (e.g. the $AGM$ evaluation); is there a way to transform this integral into an elliptic integral that I have missed, or is it merely a coincidence that an integral of this form is the reciprocal of an $AGM$? Derivation: I have put this here in case it helps to see where I am coming from; I apologize for its length.
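(Before the derivation, here is a quick numerical check of $(1)$, with the AGM computed by straightforward iteration; purely a sanity check, not a proof:)

```python
from math import pi, sqrt, sin
from scipy.integrate import quad

def agm(a, b):
    """Arithmetic-geometric mean by direct iteration."""
    while abs(a - b) > 1e-15 * a:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

lhs, _ = quad(lambda t: (1/3 + sin(t)**2) ** (-1/3), 0, pi/2)
rhs = 3**(1/12) * pi * sqrt(2) / agm(1 + sqrt(3), sqrt(8))
print(lhs, rhs)   # the two values agree to quadrature precision
```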
I began by using a multiple integration trick of squaring the integral and converting to polar coordinates to evaluate $\int_0^\infty e^{-x^6}dx=\frac{1}{6}\Gamma(\frac{1}{6})$ as follows: $$\left[\int_0^\infty e^{-x^6}\;dx\right]^2=\int_0^\infty\int_0^{\frac{\pi}{2}}re^{-r^6(\cos^6\theta\;+\;\sin^6\theta)}\;d\theta\;dr={\int_0^\infty re^{-r^6}\int_0^{\frac{\pi}{2}}e^{3r^6\cos^2\theta\sin^2\theta}\;d\theta\;dr}$$ $$=\int_0^\infty re^{-r^6}\int_0^{\frac{\pi}{2}}e^{\frac{3r^6}{4}\sin^22\theta}\;d\theta\;dr={\int_0^\infty re^{-r^6}\int_0^{\frac{\pi}{2}}e^{\frac{3r^6}{4}\cos^2\theta}\;d\theta\;dr}$$ I then made use of the following formula (see here): $$\sum_{n=0}^{\infty}\frac{(2n)!}{(n!)^3}x^n=\frac{2}{\pi}\int_0^{\frac{\pi}{2}}e^{4x\cos^2\theta}\;d\theta\tag{2}$$ Using $(2)$ and formally interchanging integration and summation we get: $$\frac{\Gamma(\frac{1}{6})^2}{36}=\int_0^\infty re^{-r^6}\int_0^{\frac{\pi}{2}}e^{4\left(\frac{3r^6}{16}\right)\cos^2\theta}\;d\theta\;dr=\frac{\pi}{2}\int_0^\infty re^{-r^6}\sum_{n=0}^{\infty}\frac{(2n)!}{(n!)^3}\left(\frac{3r^6}{16}\right)^n\;dr$$ $$=\frac{\pi}{2}\sum_{n=0}^{\infty}\frac{(2n)!}{(n!)^3}\left(\frac{3}{16}\right)^n \int_0^\infty r^{6n+1}e^{-r^6}\;dr=\frac{\pi}{12}\sum_{n=0}^{\infty}\frac{(2n)!}{(n!)^3}\left(\frac{3}{16}\right)^n \Gamma\left(n+\frac{1}{3}\right)$$ I then used Laplace transform identities and $(2)$, freely interchanging integrals and sums, to write: $$\sum_{n=0}^{\infty}\frac{(2n)!}{(n!)^3}\frac{\Gamma\left(n+\frac{1}{3}\right)}{s^{n+\frac{1}{3}}}=L\left[\sum_{n=0}^\infty \frac{(2n)!}{(n!)^3}t^{n-\frac{2}{3}}\right](s)={\frac{2}{\pi}L\left[t^{-\frac{2}{3}}\int_0^\frac{\pi}{2}e^{4t\cos^2\theta}\;d\theta\right](s)}={\frac{2}{\pi}\int_0^\frac{\pi}{2}L\left[t^{-\frac{2}{3}}e^{4t\cos^2\theta}\right](s)\;d\theta}={\frac{2}{\pi}\int_0^\frac{\pi}{2}\frac{\Gamma(\frac{1}{3})}{(s-4\cos^2\theta)^{\frac{1}{3}}}\;d\theta}$$ Setting $s=\frac{16}{3}$ (so that $s^{-n}=(3/16)^n$), and noting that $\frac{16}{3}-4\cos^2\theta=4(\frac{4}{3}-\cos^2\theta)$ where $\frac{4}{3}-\cos^2\theta=\frac{1}{3}+\sin^2{\theta}$, we can deduce that: $$\frac{\Gamma(\frac{1}{6})^2}{36}=\frac{\Gamma(\frac{1}{3})}{6}\left(\frac{4}{3}\right)^\frac{1}{3}\int_0^\frac{\pi}{2}\frac{1}{(\frac{1}{3}+\sin^2\theta)^{\frac{1}{3}}}\;d\theta$$ Reflection and duplication give $\Gamma(\frac{1}{6})=2^{-\frac{1}{3}}\sqrt{\frac{3}{\pi}}\Gamma(\frac{1}{3})^2$ and hence we have the following identity: $$\int_0^{\frac{\pi}{2}}\frac{1}{\left(\frac{1}{3}+\sin^2{\theta}\right)^{\frac{1}{3}}}\;d\theta=\frac{3^\frac{1}{3}\Gamma(\frac{1}{3})^3}{2^\frac{7}{3}\pi}\tag{3}$$ while $(1)$ may be obtained by using the following identity (see here): $$\Gamma\left(\frac{1}{6}\right)=\frac{2^\frac{14}{9}3^\frac{1}{3}\pi^\frac{5}{6}}{AGM(1+\sqrt{3},\sqrt{8})^\frac{2}{3}}$$ This completes the derivation; I cannot see how a method like this (especially with the conversion to polar coordinates) could be used to give results more general than $(1)$ and $(3)$. REPLY [6 votes]: (Too long for a comment.) Expanding on a concluding remark by Nemo, there is an interesting connection between the OP's, $$\int_0^{\pi/2}\frac1{\sqrt[3]{\color{blue}{\alpha}+\sin^2 x}}dx=\frac{\pi}{2\,\sqrt[3]{\alpha+1}}\,_2F_1\Big(\frac12,\frac13;1;\,\frac1{\color{blue}{\alpha}+1}\Big)\tag1$$ and the integral involved in this post, $$\int_0^{\infty}\frac1{\sqrt[3]{\color{blue}{1+2\alpha}+\cosh x}}dx=2^{2/3}\,3^{1/4}K(k_3)\;_2F_1\Big(\frac13,\frac13;\frac56;-\color{blue}{\alpha}\Big)\tag2$$ Equations $(1),(2)$ are valid for arbitrary $\alpha$.
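(As a quick numerical spot-check of $(1)$, here is a short scipy sketch; the value $\alpha=2$ is an arbitrary choice:)

```python
from math import pi, sin
from scipy.integrate import quad
from scipy.special import hyp2f1

alpha = 2.0   # arbitrary non-special test value
lhs, _ = quad(lambda x: (alpha + sin(x)**2) ** (-1/3), 0, pi/2)
rhs = pi / (2 * (alpha + 1) ** (1/3)) * hyp2f1(1/2, 1/3, 1, 1/(alpha + 1))
print(lhs, rhs)   # agree to quadrature accuracy
```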
But if it is chosen such that $$\color{blue}{\alpha} =\alpha(\tau)= \frac1{4\sqrt{27}}\big(\lambda^3-\sqrt{27}\lambda^{-3}\big)^2$$ where $$\lambda =\frac{\eta\big(\frac{\tau+1}3\big)}{\eta(\tau)},\quad \tau=\frac{1+N\sqrt{-3}}2$$ then there is a beautifully simple relationship between $(1)$ and $(2)$: $$\frac{N+1}{2^{1/3}\,3^{1/2}}\,\int_0^{\pi/2}\frac1{\sqrt[3]{\color{blue}{\alpha}+\sin^2 x}}dx=\int_0^{\infty}\frac1{\sqrt[3]{\color{blue}{1+2\alpha}+\cosh x}}dx\tag3$$ The closed-form for the RHS (via its hypergeometric equivalent) is already given in this answer, so that automatically leads to the form for the LHS as well. Some rational values are $$\alpha\big(\tfrac{1+3\sqrt{-3}}2\big)=\large\tfrac13$$ $$\alpha\big(\tfrac{1+5\sqrt{-3}}2\big)=4$$ $$\alpha\big(\tfrac{1+7\sqrt{-3}}2\big)=27$$ which will be algebraic for integer $N>1$. The first one was re-discovered by the OP and led to the nice evaluations $$\int_0^{\pi/2}\frac1{\sqrt[3]{\tfrac13+\sin^2 x}}dx=3^{1/12}\,K(k_3)$$ $$\int_0^{\pi/2}\frac1{\sqrt[3]{4+\sin^2 x}}dx=\frac{3^{3/4}}{5^{5/6}}\,K(k_3)$$ $$\int_0^{\pi/2}\frac1{\sqrt[3]{27+\sin^2 x}}dx=\frac{3^{3/4}}7\,K(k_3)$$<|endoftext|> TITLE: Find a minimum of $x^2+y^2$ under the condition $x^3+3xy+y^3=1$ QUESTION [6 upvotes]: As in the title, I've tried to find a maximum and minimum of $x^2+y^2$ when $x^3+3xy+y^3=1$ holds. It is not too hard to show that $x^2+y^2$ has no maximum, but I can't find a minimum. The method of Lagrange multipliers leads to a messy calculation that I can't handle. Is there any elegant way to find it? Thanks for any help. p.s. Sorry, I made a typo in the $xy$-coefficient. REPLY [2 votes]: Let $x=u+v$ and $y=u-v$. Then $x^2+y^2=2(u^2+v^2)$, while the equation $x^3+3xy+y^3=1$ becomes $2u^3+6uv^2+3(u^2-v^2)=1$, or $$2u^3+3u^2-1=-3v^2(2u-1)$$ Factoring the cubic on the left hand side, we obtain $$(u+1)^2(2u-1)=-3v^2(2u-1)$$ So either $2u-1=0$, in which case $v$ can be anything, or else $(u+1)^2=-3v^2$, which, because of the minus sign, is possible only if $u+1=v=0$. The latter possibility leads to $$x^2+y^2=2((-1)^2+0^2)=2$$ while the former leads to $$x^2+y^2=2\left(\left(1\over2\right)^2+v^2\right)={1\over2}+2v^2$$ which is clearly minimized when $v=0$. So the minimum value of $x^2+y^2$ is $1\over2$, and it occurs at $(x,y)=({1\over2}+0,{1\over2}-0)=({1\over2},{1\over2})$.<|endoftext|> TITLE: How to say (pronounce) $\partial$ in the context of homology? QUESTION [5 upvotes]: I am curious what the common way is to say (in English contexts) the name of $\partial$ (the boundary homomorphism) in homology theory, i.e. when giving a talk or speaking. The boundary homomorphism is $\partial: C_n\to C_{n-1}$ when $(C_\bullet, \partial_\bullet)$ is a chain complex. Some possibilities I can think of are: "Boundary"? "Del"? "Partial"? "Dee"? (it looks like a letter $d$) "Delta"? Thanks for any insights. I haven't found any answer online despite some searching. The question "How do you pronounce (partial) derivatives?" is similar in a sense (same symbol $\partial$) but the context is entirely different. REPLY [2 votes]: In my experience there is no single standard pronunciation, though "del" is probably the most common. I have heard all of the possibilities you suggest except "delta" used. If you want to maximize the chance that you will be understood, I would recommend just saying "(the) boundary of" rather than trying to pronounce it as a single word.
In cases where you really want a shorter pronunciation, I think "del" is probably the best choice, but as Mike Miller commented, the most important thing is to be consistent and unambiguous.<|endoftext|> TITLE: Is the interior of a smooth manifold with boundary connected? QUESTION [7 upvotes]: Let $M$ be a smooth connected manifold with boundary. Is it true that the interior of $M$ is also connected? (As usual I am assuming $M$ is second-countable and Hausdorff.) I am trying to rule out something like two "tangent disks" (or circles) where the (topological) interior is obviously not connected. This case is not a counter-example, since the union of two tangent disks (or circles) is not a manifold with boundary (the point of tangency is pathological). REPLY [7 votes]: Assume you have two connected components $M_1$ and $M_2$ of $\operatorname{int} M$. The boundary of each is a subset of the boundary of $M$. Since $M$ is connected, the closures of $M_1$ and $M_2$ intersect at some boundary point $p$. Let $X:\mathbb{H}^n\to U\subset M$ be a coordinate chart around $p$, where $\mathbb{H}^n$ is the upper half of $\mathbb{R}^n$ (including boundary). Then $X^{-1}(M_i)$ is an open set in $\mathbb{R}^n$ whose boundary is a subset of the boundary of $\mathbb{H}^n$. But then $X^{-1}(M_i)$ must be the whole $\mathbb{H}^n\setminus \partial\mathbb{H}^n$, which contradicts the disjointness of $M_1$ and $M_2$.<|endoftext|> TITLE: Understanding the limit in $\mathbb{Z_p}$ QUESTION [7 upvotes]: Let $A_n = \mathbb{Z}/p^n \mathbb{Z}$. We define the ring of $p$-adic integers $\mathbb{Z_p}$ as $$\mathbb{Z_p} := \lim_{\longleftarrow} (A_n , \phi_n)$$ where $\phi_n : A_n \rightarrow A_{n-1}$ is the natural reduction homomorphism. It's called the projective limit of the system $(A_n , \phi_n)$. Any element of $\mathbb{Z_p}$ is a sequence $x = (... , x_n, ...,x_1)$ with $x_n \in A_n$ and $\phi_n(x_n) = x_{n-1}$ for $n >1$. I do not understand what kind of limit is defined for $\mathbb{Z_p}$. Can someone please explain this notation to me? REPLY [5 votes]: I'll examine in some detail what it means from an example. An element in $\mathbf Z_p$ is formally a sequence $(x_1,x_2,\dots,x_n,\dots)$, where each $x_n\in\mathbf Z/p^n\mathbf Z$, satisfying the condition that the canonical image of $x_n\in \mathbf Z/p^n\mathbf Z$ in $\mathbf Z/p^{n-1}\mathbf Z$ is $x_{n-1}$. First consider $x_1\in\mathbf Z/p\mathbf Z$: this congruence class is represented by an integer $a_0\in\{0,1,\dots,p-1\}$. Next $x_2\in\mathbf Z/p^2\mathbf Z$: it is represented by an integer in $\{0,\dots,p-1,p,\dots,p^2-1\}$, which we can write, by Euclidean division, as $y_2p+r_2$ with $0\le r_2 < p$. The compatibility condition says exactly that $r_2=a_0$, so $x_2$ is represented by $y_2p+a_0$. Continuing in this way, one recovers the familiar description of a $p$-adic integer as a formal series $a_0+a_1p+a_2p^2+\cdots$ with digits $a_i\in\{0,1,\dots,p-1\}$.<|endoftext|> TITLE: How would one describe the set of elements of $\mathbb{Z}[\sqrt{-d}]$ whose norm is divisible by a rational prime $p$? QUESTION [6 upvotes]: If we consider non-UFD's with $d > 1$, how would one go about describing the integers in $\mathbb{Z}[\sqrt{-d}]$ whose norm is divisible by a prime $p$, but under the stipulation that it must be done using principal ideals? Can it be done? I'll detail what I mean; an example for a UFD would be the following. Let $f(\alpha)=1$ for $\alpha \in \mathbb{Z}[\sqrt{-1}]$, and $0$ otherwise. I want a new function, $g(\alpha)$, such that $g(\alpha)=1$ if $p \mid N\alpha$, and otherwise $g(\alpha)=0$. For an inert prime $p$, $g(\alpha)=f(\frac{\alpha}{p})$. If $p$ splits, then $g(\alpha)=f(\frac{\alpha}{\mathfrak{p}})+f(\frac{\alpha}{\bar{\mathfrak{p}}})-f(\frac{\alpha}{N(\mathfrak{p})})$ where $N(\mathfrak{p})=p$.
In other words, how can we construct $g$ using only $f$ for imaginary quadratic fields without unique factorization? An example would be much appreciated. REPLY [4 votes]: The functions $f$ and $g$ can be defined, but they have to operate on ideals rather than numbers, and those ideals may or may not be principal. First, though, let's tidy up the notation. $f_R(\alpha) = 1$ (or True) if $\alpha$ is a number in the ring $R$, and $0$ (or False) otherwise. And $g_R^p(\alpha) = 1$ if $\alpha$ is a number in the prime ideal generated by $p$, or by a splitting factor of $p$, or co-generated by $p$. Now let's examine $\alpha = 82$, $p = 41$ and $R = \textbf{Z}[i]$. Since $41$ splits as $(4 - 5i)(4 + 5i)$, we have $$g_R^p(\alpha) = f_{\textbf{Z}[i]}\left(\frac{82}{4 - 5i}\right) + f_{\textbf{Z}[i]}\left(\frac{82}{4 + 5i}\right) - f_{\textbf{Z}[i]}\left(\frac{82}{41}\right) = 1 + 1 - 1 = 1.$$ That's correct, right? Now try $\alpha = 86$, $p = 43$. As $43$ is inert, we have $g_R^p(\alpha) = 1$, as expected. Let's move on to $g_{\textbf{Z}[\sqrt{-5}]}^{41}(82)$. Since $41$ splits as $(6 - \sqrt{-5})(6 + \sqrt{-5})$, the result should be much the same as $g_{\textbf{Z}[i]}^{41}(82)$. And now we get to something really interesting: $g_{\textbf{Z}[\sqrt{-5}]}^{43}(76 + 18 \sqrt{-5})$. $43$ is irreducible in this ring (there is no element of norm $43$, though the ideal $\langle 43 \rangle$ does factor into non-principal primes), and yet $N(76 + 18 \sqrt{-5}) = 7396$, a multiple of $43$. Worse, $$\frac{76 + 18 \sqrt{-5}}{43} = \frac{2}{43}(38 + 9 \sqrt{-5}),$$ so $$f_{\textbf{Z}[\sqrt{-5}]}\left(\frac{76 + 18 \sqrt{-5}}{43}\right) = 0.$$ If instead we defined $g_R^p(\alpha)$ to look for the containment of $\langle \alpha \rangle$ in $\langle p \rangle$ or $\langle p, x \rangle$ or $\langle p, \overline x \rangle$, we might get the result we want, since $\langle 76 + 18 \sqrt{-5} \rangle$ is in either $\langle 43, 9 + \sqrt{-5} \rangle$ or $\langle 43, 9 - \sqrt{-5} \rangle$, one of those.<|endoftext|> TITLE: Prove that two angles are equal QUESTION [9 upvotes]: $M$ is the midpoint of $BC$ in the triangle $\Delta ABC$. $D$ lies on $AC$, and $AD = BD$. $E$ lies on the line $AM$, $DE$ is parallel to $AB$. How can I prove that the angles $D\hat{B}E$ and $A\hat{C}B$ are equal? REPLY [3 votes]: Let $L$ be chosen on the line $AM$ so that $AM = ML$, i.e. $M$ is the midpoint of segment $AL$. Then $ABLC$ is a parallelogram. Thus $AB$ is parallel to $DE$ and $CL$. Let $N$ be the intersection of line $CL$ with line $BD$. Lemma 1. Since $AD = BD$, the trapezoid $ABCN$ is isosceles with $AN = BC$ and $\angle \, ANB = \angle \, ACB$. Proof: Indeed, the fact that triangle $ABD$ is isosceles implies that $\angle \, ABD = \angle \, BAD = \alpha$. Furthermore, since $AB$ is parallel to $CN$ $$\angle \, NCD = \angle \, NCA = \angle \, BAC = \angle\, BAD = \alpha$$ and $$\angle \, CND = \angle \, CNB = \angle \, ABN = \angle\, ABD = \alpha$$ Consequently, triangle $CDN$ is isosceles with $DC = DN$. Therefore triangles $BDC$ and $ADN$ are congruent so $AN = BC$. Hence $\angle \, ANB = \angle \, ACB$. Lemma 2. Let $F$ and $Q$ denote the points where the line through $D$ and $E$ (parallel to $AB$) meets $BL$ and $AN$, respectively. Then the following equalities hold: $$\frac{DF}{NL} = \frac{BF}{BL} = \frac{AD}{AC} = \frac{AQ}{AN} =\frac{QE}{NL}$$ yielding $QE = DF = AB = CL$. Proof: Look at triangle $BNL$ and the line $DF$ parallel to $NL$. Then $$\frac{DF}{NL} = \frac{BF}{BL}$$ by the intercept theorem for the pair of lines $BN$ and $BL$ intersected by the two parallel lines $DF$ and $NL$. Alternatively this result follows from the similarity between the triangles $BDF$ and $BNL$.
Since quad $ABLC$ is a parallelogram and $DF$ is parallel to both $AB$ and $CL$, the quad $ABFD$ is also a parallelogram, so $BF = AD$; and $BL = AC$, which implies $$\frac{BF}{BL} = \frac{AD}{AC}$$ Next, look at triangle $ANC$ and the line $QD$ parallel to $NC$. Then $$\frac{AD}{AC} = \frac{AQ}{AN}$$ by the intercept theorem for the pair of lines $AN$ and $AC$ intersected by the two parallel lines $QD$ and $NC$. Alternatively this result follows from the similarity between the triangles $AQD$ and $ANC$. Finally, look at triangle $ANL$ and the line $QE$ parallel to $NL$. Then $$\frac{AQ}{AN} = \frac{QE}{NL}$$ by the intercept theorem for the pair of lines $AN$ and $AL$ intersected by the two parallel lines $QE$ and $NL$. Alternatively this result follows from the similarity between the triangles $AQE$ and $ANL$. Completing the proof of the main result: Since $QE = AB$ and $QE$ is parallel to $AB$, the quad $ABEQ$ is a parallelogram, which means that $BE$ is parallel to $AQ$ and therefore $BE$ is parallel to $AN$ too. This means that $\angle \, DBE = \angle \, NBE = \angle \, ANB = \angle \, ACB$.<|endoftext|> TITLE: Expectation stopped Brownian motion with drift QUESTION [7 upvotes]: Let $\{X_t:t\geq 0\}$ be a Brownian motion with drift $\mu>0$ and define a stopping time $\tau$ by $$\tau=\inf\{t\geq 0:X_t=a\}.$$ Now I want to show that $$\mathbb{E}(e^{-\lambda\tau})=e^{(\mu-\sqrt{\mu^2+2\lambda})a}$$ for $\lambda>0$. Now as a hint I know that I need to use the martingale $M_t=e^{\alpha X_t-\alpha\mu t-\frac{1}{2}\alpha^2t}$. Obviously I need to use Doob's optional stopping theorem, but I do not know how. Does anyone have a suggestion? REPLY [5 votes]: Hints: Check that for fixed $\alpha>0$ the process $$M_t := \exp \left(\alpha X_t-\alpha \mu t- \frac{1}{2} \alpha^2 t \right)$$ is a martingale (with respect to the canonical filtration of the Brownian motion). Then: 1. By the optional stopping theorem, $$\mathbb{E}(M_{\tau \wedge t}) = \mathbb{E}(M_0) = 1, \qquad t \geq 0.$$ 2. Show that $|M_{t \wedge \tau}| \leq e^{\alpha a}$. 3. Deduce from the dominated convergence theorem that $$\mathbb{E}(M_{\tau}) = 1.$$ 4. Since $(X_t)_{t \geq 0}$ has continuous sample paths, we have $X_{\tau}=a$. Hence, $$M_{\tau} = e^{\alpha a} \exp \left( - \left[ \mu \alpha + \frac{1}{2} \alpha^2 \right] \tau \right).$$ It follows from steps 3 and 4 that $$\mathbb{E} \exp\left( - \left[ \mu \alpha + \frac{1}{2} \alpha^2 \right] \tau \right) = e^{-\alpha a}.$$ Setting $\lambda := \mu \alpha + \frac{1}{2} \alpha^2$ proves the assertion.<|endoftext|> TITLE: Find solution in integers of $x^3+x-y^2=1$ QUESTION [6 upvotes]: Find integers $x$ and $y$ such that $$x^3+x-y^2=1.$$ My try: $$x^3+x-y^2=1 \implies x^3+x-1=y^2.$$ Now, when is $x^3+x-1$ a perfect square? REPLY [3 votes]: The elliptic curve $y^2=x^3+x-1$ has only finitely many integral points, according to the Magma online calculator (see here at MO), namely the points $$ (x,y)=(1,\pm 1),(2,\pm3),(13,\pm 47). $$<|endoftext|> TITLE: Why is $9 \times 11{\dots}12 = 100{\dots}08$? QUESTION [5 upvotes]: While I was working on a Luhn algorithm implementation, I discovered something unusual. $$ 9 \times 2 = 18 $$ $$ 9 \times 12 = 108 $$ $$ 9 \times 112 = 1008 $$ $$ 9 \times 1112 = 10008 $$ I hope you can observe the pattern here. How would one prove this? What is its significance?
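The pattern itself is easy to confirm by brute force, e.g.:

```python
for k in range(1, 12):
    x = int("1" * (k - 1) + "2")        # 2, 12, 112, 1112, ...
    print(9 * x, 9 * x == 10**k + 8)    # 18, 108, 1008, ... all True
```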
REPLY [5 votes]: The repunit, $R_k = \overbrace{111\ldots 111}^{k \text{ ones}}$, can be written as $R_k = \dfrac{10^k-1}{9}$. Your nice pattern corresponds to $9\times (R_k+1) = (10^k-1)+9 = 10^k+8$.<|endoftext|> TITLE: Polar decomposition of composition of two $2 \times 2$ matrices QUESTION [6 upvotes]: In one of Ruelle's papers, "Rotation Numbers for Flows and Diffeomorphisms", Ruelle has the following calculation, which I do not understand completely. Assume you have two invertible $2 \times 2$ matrices $A$ and $B$ with polar decompositions $A = U(\theta(A))|A|$ and $B = U(\theta(B))|B|$, where $U(\theta)$ is the planar rotation matrix by $\theta$ and $|B|=\sqrt{BB^T}$ etc. Then he says that $$ |\theta(AB)-\theta(A)-\theta(B)| \leq \pi $$ I don't quite understand how he gets this result without a constant depending on the norms of $A$ and $B$. One can start by saying $$ AB = U(\theta(AB))|AB| = U(\theta(A))|A| U(\theta(B))|B| $$ $$ = U(\theta(A)+\theta(B))U(-\theta(B))|A|U(\theta(B))|B| $$ so that $$ U(\theta(AB)-\theta(A)-\theta(B)) = U(-\theta(B))|A|U(\theta(B))|B||AB|^{-1}. $$ Somewhere in the paper he gives as a hint $|\theta(PQ)| \leq \pi$ if $P$ and $Q$ are positive, but I can't see how to use it. REPLY [3 votes]: Let $P=U(-\theta(B))|A|U(\theta(B))=U(\theta(B))^t|A|U(\theta(B))$ and $Q=|B|$; then $AB=U(\theta(A))|A|U(\theta(B))|B|=U(\theta(A))U(\theta(B))PQ=U(\theta(A)+\theta(B))PQ$. Hence, $PQ=U(-\theta(A)-\theta(B))U(\theta(AB))|AB|=U(\theta(AB)-\theta(A)-\theta(B))|AB|=U(\theta(PQ))|PQ|$. By the uniqueness of the polar decomposition of an invertible matrix, we get $\theta(PQ)=\theta(AB)-\theta(A)-\theta(B)$. Since $P$ and $Q$ are positive definite, $|\theta(PQ)|<\pi$, and the result follows.<|endoftext|> TITLE: Prove that $\underline{\lim} \{a_n\}+ \underline{\lim} \{b_n\} \leq \underline{\lim} (a_n + b_n)$ QUESTION [5 upvotes]: In my real analysis class, my instructor proved that $\underline{\lim} \{a_n\}+ \underline{\lim} \{b_n\} \leq \underline{\lim} (a_n + b_n)$. (Note that $\underline{\lim}$ is the limit inferior of a sequence.) But a lot of details were left out, and I want to make sure that I have a correct proof and that my logic is correct. Note that $\underline{\lim} \{a_n\} = \lim_{n \rightarrow \infty} \inf\{a_k \mid k \geq n\}$ and in my class we define $t_n =\inf\{a_k \mid k \geq n\}$. My proof: Suppose that $\underline{\lim} \{a_n\} = a$ and $\underline{\lim} \{b_n\} = b$. First consider $\underline{\lim} \{a_n\} = a$. This means that $ \displaystyle \lim_{n \rightarrow \infty} t_n = a$. So for all $\epsilon > 0$ there exists $N_1 \in \mathbb{N}$ such that when $n \geq N_1$ we have that $ | t_n - a | < \frac{ \epsilon } { 2}$. It follows that $a - \frac{\epsilon}{2} < t_n \leq a_n$. By a similar argument, there exists $N_2$ such that for all $n \geq N_2$, we have $b - \frac{\epsilon}{2} < b_n$. So if we choose $N = \max \{N_1, N_2 \}$, for all $n \geq N$ we have $a - \frac{\epsilon}{2} < a_n$ and $b - \frac{\epsilon}{2} < b_n$. Combining these inequalities gives $a_n + b_n > a+b-\epsilon$. Then since $a+b - \epsilon$ is a lower bound for the set $\{ a_n + b_n \mid n \geq N \}$, we must have $\inf\{a_n + b_n \mid n \geq N \} \geq a+b- \epsilon$ and $ \displaystyle \lim_{N \rightarrow \infty} \inf\{a_n + b_n \mid n \geq N \} \geq a+b- \epsilon$ (because the inequality was true for all $n \geq N$). Then since $\epsilon$ was arbitrary, $ \underline{\lim} (a_n + b_n) \geq a+b$. REPLY [7 votes]: It looks correct, assuming both inferior limits are finite and not $-\infty$ (otherwise there's nothing to prove).
Here's another approach. Set $t_n=\inf\{a_k\mid k\ge n\}$, $u_n=\inf\{b_k\mid k\ge n\}$ and $v_n=\inf\{a_k+b_k\mid k\ge n\}$. If $k\ge n$, then $t_n\le a_k$ and $u_n\le b_k$, so $t_n+u_n\le a_k+b_k$. Since $k$ is arbitrary, we have $$ t_n+u_n\le v_n $$ and letting $n\to\infty$ gives the claim.<|endoftext|> TITLE: Conditional expectation of $X$ given $|X|$ QUESTION [6 upvotes]: Let $X$ be an integrable random variable with density $f$ with respect to the Lebesgue measure. Compute the conditional expectation: $ \operatorname{E} \left[ X\, \Big|\, |X| \,\right] $ My ansatz was: $ \operatorname{E} \left[ X\, \Big|\, |X| \,\right] = \operatorname{E} \left[ \operatorname{sgn}(X)\cdot |X| \, \Big|\, |X| \,\right] = |X| \operatorname{E} \left[ \operatorname{sgn}(X) \, \Big|\, |X| \,\right] $ At this point I am stuck, since one cannot assume in general that $\operatorname{sgn}(X)$ is independent of $|X|$, nor that it is $\sigma(|X|)$-measurable. REPLY [7 votes]: Disclaimer: This answer is currently downvoted. Needless to say, it is perfectly correct, and it answers the question, apparently to the OP's satisfaction. The downvote might be due to extra-mathematical reasons not worthy of being further commented upon. Happy reading! Let $Y=|X|$. When $X$ has a PDF $f$, you might convince yourself that $$E(X\mid Y)=g(Y)$$ where $g(y)$ is defined on $y\geqslant0$ by $$g(y)=y\cdot\frac{f(y)-f(-y)}{f(y)+f(-y)}$$ In the general case, some more measure theoretical machinery is needed. Consider the measures defined by $$\mu_+(B)=P(X\in B)\quad \mu_-(B)=P(-X\in B)\quad \mu(B)=P(Y\in B)$$ for every Borel set $B\subset\mathbb R_+$. Then $\mu_\pm\leqslant\mu$ hence $\mu_\pm\ll\mu$, hence there exist measurable functions $f_\pm$ such that $$\mu_\pm(B)=\int_Bf_\pm d\mu$$ for every Borel set $B$. Then $E(X\mid Y)=g(Y)$ where $g(y)$ is defined on $y\geqslant0$ by $$g(y)=y\cdot\frac{f_+(y)-f_-(y)}{f_+(y)+f_-(y)}$$ Edit 1: This conditional expectation reflects the fact that, for every $y>0$, the conditional distribution of $X$ conditionally on $Y=y$ is given by $$P(X=y\mid Y=y)=\frac{f_+(y)}{f_+(y)+f_-(y)}$$ and $$P(X=-y\mid Y=y)=\frac{f_-(y)}{f_+(y)+f_-(y)}$$ which can be summarized as $$P(X=Y\mid Y)=\frac{f_+(Y)}{f_+(Y)+f_-(Y)}=1-P(X=-Y\mid Y)$$ Edit 2: As often on this site, one meets a specific problem when answering questions involving conditional expectations, which is that every nontrivial one requires a solid definition of the concept. To understand why the OPs interested in such questions regularly lack any such definition would require a separate analysis; hence, for the time being, we will concentrate on adding some sketchy explanations of this definition, referring the interested reader to an accessible and rigorous source (such as the small blue textbook Probability with Martingales, by David Williams).
Thus, in full generality, one considers random variables $X$ and $Y$ defined on the same probability space with $X$ integrable; then $E(X\mid Y)$ is defined as the (class of) random variable(s) $u(Y)$, for some measurable function $u$ such that $u(Y)$ is integrable and, for every bounded measurable function $w$, $$E(Xw(Y))=E(u(Y)w(Y))$$ Equivalently, one can ask that $u(Y)$ is integrable and that, for every event $A$ in $\sigma(Y)$, $$E(X\mathbf 1_A)=E(u(Y)\mathbf 1_A) $$ In the present case, to prove the formulas proposed in this post when $X$ has PDF $f$, one should find $g$ such that, for every bounded measurable function $w$, $$E(Xw(Y))=E(g(Y)w(Y))$$ that is, since $Y=|X|$ and $X$ has PDF $f$, $$\int xw(|x|)f(x)dx=\int g(|x|)w(|x|)f(x)dx$$ which is equivalent to $$\int_{y>0}yw(y)(f(y)-f(-y))dy=\int_{y>0}g(y)w(y)(f(y)+f(-y))dy$$ This identity holds for every function $w$ if and only if $$y\cdot(f(y)-f(-y))=g(y)\cdot(f(y)+f(-y))$$ almost everywhere for $y>0$, and $g$ follows.<|endoftext|> TITLE: Proving that $\int_{-1}^0 \frac{e^x+e^{1/x}-1}{x}dx=\gamma$ QUESTION [5 upvotes]: This came up while I was looking at the asymptotic behavior of $f(x)=\int_0^x \frac{e^t-1}{t}dt=\sum_{h=1}^\infty\frac{x^h}{h\cdot h!}$ (a nice entire function) as $x\to -\infty$; specifically, I believe that $f(x)\sim -\ln(-x)-\gamma$ as $x\to -\infty$. I've reduced that to the problem of proving $$\int_{-1}^0 \frac{e^x+e^{1/x}-1}{x}dx=\gamma$$ where $\gamma\approx 0.577$ is the Euler-Mascheroni constant. REPLY [3 votes]: I'm going to take a different approach than Dr. MV, and evaluate the integral directly. First, due to the singularity at $x=0$, the integral should be interpreted as $\displaystyle \lim_{\epsilon\to 0^-}\int_{-1}^\epsilon \frac{e^x+e^{1/x}-1}x dx$. With this in mind, we can integrate. $$\int\frac{e^x+e^{1/x}-1}x dx=\mathrm{Ei}(x)-\mathrm{Ei}\left(\frac1x\right)-\ln{x}+C\tag{1}$$ It's easy to see that the value at $x=-1$ is $-i\pi$, but the value near $0$ is a tad more complex (no pun intended). Luckily, the first term can be quickly disposed of: $$-\lim_{x\to 0^-}\mathrm{Ei}\left(\frac1x\right)=-\lim_{t\to\infty}\mathrm{Ei}(-t)=0.$$ This leaves us with $\displaystyle\lim_{x\to 0^-}\left[\mathrm{Ei}(x)-\ln{x}\right]$. For $x<0$, we can use the series expansion for $E_1(z)$ to write $$\mathrm{Ei}(x)=\gamma+\ln(-x)+\sum_{n=1}^\infty\frac{x^n}{n\,n!}.\tag{2}$$ We now have $$\lim_{x\to 0^-}\left[\mathrm{Ei}(x)-\ln{x}\right]=\lim_{x\to 0^-}\left[\gamma+\ln(-x)-\ln{x}+\sum_{n=1}^\infty\frac{x^n}{n\,n!}\right]=\gamma-i\pi\tag{3}$$ and may conclude that $$\int_{-1}^0\frac{e^x+e^{1/x}-1}x dx=\gamma+i\pi-i\pi=\gamma.\tag{4}$$ Q.E.D.<|endoftext|> TITLE: If $a^2+b^2+c^2=3$ so $a^2b+b^2c+c^2a\geq3\sqrt[3]{a^2b^2c^2}$ QUESTION [7 upvotes]: Let $a$, $b$ and $c$ be non-negative numbers such that $a^2+b^2+c^2=3$. Prove that: $$a^2b+b^2c+c^2a\geq3\sqrt[3]{a^2b^2c^2}$$ I tried Rearrangement, uvw, BW and more, but without any success. REPLY [2 votes]: Squaring both sides and using $a^2+b^2+c^2=3$, the inequality is equivalent to $$(a^2b+b^2c+c^2a)^2 \ge 9(abc)^{4/3}=3(abc)^{4/3}(a^2+b^2+c^2)$$ that is, $$\sum_{cyc}a^4b^2+2\sum_{cyc}a^3bc^2 \ge 3\sum_{cyc}\sqrt[3]{a^{10}b^4c^4}$$ By AM-GM: $$a^4b^2+2a^3bc^2 \ge 3\sqrt[3]{a^{10}b^4c^4}$$<|endoftext|> TITLE: Norm of the product of a self-adjoint operator and one of its spectral projections QUESTION [6 upvotes]: Let $A$ be a bounded self-adjoint operator on a Hilbert space $H$ such that $A$ has a cyclic vector. That is, there exists $x \in H$ such that the linear subspace spanned by $\{ x, Ax, A^2x,...\}$ is dense in $H$.
Also let $E$ denote the spectral measure corresponding to $A$, i.e. $E : \Sigma \rightarrow B(H)$ with $E(\Omega)=E_{\Omega}=\chi_{\Omega \cap \sigma(A)} (A)$ given by the functional calculus for $A$. Here $\Sigma$ is the Borel $\sigma$-algebra on $\mathbb{R}$, $\chi_M$ denotes the characteristic function of a set $M$, and $\sigma(A)$ is the spectrum of $A$. Question: Assume that the interval $[0,1]$ is contained in the spectrum $\sigma(A)$ of $A$. What can we say about the norm $||E_\Omega A||$ if $\Omega$ is, for example, a subinterval of $[0,1]$, say $\Omega=[\tfrac{1}{4}, \tfrac{1}{2})$? What if $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$? One thing we know is that $||E_\Omega A|| = ||A E_\Omega||$. So we only have to compute the norm on the range of $E_\Omega$. But I don't really see how the cyclic vector comes into play. It tells us that $H$ is separable. I don't know how that could be helpful. REPLY [2 votes]: In the second case, where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$, there is no definite answer even if we assume the existence of a cyclic vector. First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function $1$, since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi(x)$. In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. Computing its norm amounts to computing its spectrum, which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$. Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{2/5}$, where $\lambda$ denotes the Lebesgue measure. I.e., we give the rational point $\tfrac{2}{5} \in [\tfrac{1}{3},\tfrac{1}{2}) \cap \mathbb{Q} \subset \Omega$ mass $1$ (note that $\tfrac{1}{2}$ itself does not lie in $\Omega$). Then, by the same argument as above, we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{2}{5} \}$, so $|| E_\Omega A ||= \tfrac{2}{5}$. Note: In the second case, we also have a cyclic vector, namely the constant function $1$. This is a consequence of the following: for finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$, the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. the sup-norm), so in particular they are dense w.r.t. the $L^2$ norm.<|endoftext|> TITLE: Left ideals of matrix ring over a field QUESTION [8 upvotes]: The claim is made here that for $k$ a field the left ideals of $M_n(k)$ are all of the form $$\{A \in M_n(k) \mid \operatorname{ker}A \supseteq S \}, \rlap{ \qquad \text{for some subspace $S$.}}$$ I was trying to think through that claim. I understand why all the left ideals of $M_n(k)$ are of the form $$\{A \in M_n(k) \mid \operatorname{Rowspace}(A) \subseteq S \}, \rlap{ \qquad \text{for some subspace $S$.}}$$ In the presence of an inner product, we have that $\operatorname{ker}A^T = \operatorname{Range}(A)^{\bot}$, and therefore we get the characterization we want. But what if there isn't an inner product? Does the same hold for $M_n(\Delta)$, where $\Delta$ is a division ring?
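For what it's worth, the inner-product identity $\operatorname{ker}A^T = \operatorname{Range}(A)^{\bot}$ is easy to illustrate numerically (a small numpy/scipy sketch with a random rank-deficient matrix):

```python
import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))   # rank 2

K = null_space(A.T)                 # basis of ker A^T
R = orth(A)                         # orthonormal basis of Range A
print(np.allclose(K.T @ R, 0))      # True: ker A^T is orthogonal to Range A
print(K.shape[1] + R.shape[1])      # 4 = dim ker A^T + dim Range A
```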
REPLY [2 votes]: Using the theory of nondegenerate symmetric bilinear forms, it is possible to show that $\operatorname{ker}A^T = \operatorname{Range}(A)^{\bot}$ without the need for the existence of an inner product (when we are discussing matrices over a field). The symbol $\bot$ would then represent an orthogonal complement with respect to the relevant nondegenerate symmetric bilinear form. Here is a copy from some work of mine showing that the null space of a matrix is the right orthogonal complement of its row space with respect to what I would regard as the most natural nondegenerate symmetric bilinear form: Consider the function $f: \mathscr{F}^n \times \mathscr{F}^n \rightarrow \mathscr{F}$, defined by $f(v,w)=v^Tw$. It is easy to see that: $f_{w_0}:\mathscr{F}^n \rightarrow \mathscr{F}$ defined by $f_{w_0}(v)=f(v,w_0)$ for any given $w_0 \in \mathscr{F}^n$ is a linear functional and therefore belongs to the dual space of $\mathscr{F}^n$. Similarly $f_{v_0}:\mathscr{F}^n \rightarrow \mathscr{F}$ defined by $f_{v_0}(w)=f(v_0,w)$ for any given $v_0 \in \mathscr{F}^n$ is a linear functional and therefore belongs to the dual space of $\mathscr{F}^n$. By the properties above, $f$ is a bilinear form, and furthermore it is a symmetric bilinear form since $f(v,w)=f(w,v)$ for any $v,w \in \mathscr{F}^n$. A symmetric bilinear form $g:\mathcal{V} \times \mathcal{V} \rightarrow \mathscr{F}$ is non-degenerate if and only if, for every nonzero $v \in \mathcal{V}$ there exists $w \in \mathcal{V}$ so that $g(v,w) \neq 0$ (Golan, p. 455). Let $v=(v_1,v_2,\ldots,v_n)^T \in \mathscr{F}^n$ be nonzero. Suppose $i \in \{1,2,\ldots,n\}$ is the least integer so that $v_i$ is nonzero. Let $w=(0,\ldots,0,v_i,0,\ldots,0)^T \in \mathscr{F}^n$ (with $v_i$ in entry $i$ of $w$). Then $$f(v,w)=v^Tw=v_i^2 \neq 0,$$ since a field contains no divisors of zero. This proves that $f$ is non-degenerate for any field $\mathscr{F}$. The right $f$-orthogonal complement (Golan, p. 458) of row$(G)$ is \begin{eqnarray} \nonumber \text{row}(G)^{\perp_f}&=&\{w \in \mathscr{F}^n:f(v,w)=0 \text{ for all } v \in \text{row}(G) \} \\ \nonumber &=& \{w \in \mathscr{F}^n:Gw=0 \} \\ &=& \text{N}(G). \end{eqnarray} The theory can be studied in detail on pages 455-458 of this textbook, which is the source referenced above. As you can see, $f$ is nondegenerate due to the fact that a field has no divisors of zero. For a division ring the symmetric property would fail in general, but I don't actually think this is absolutely necessary: you can see in the proof above that I mention the right $f$-orthogonal complement, and the last part would still hold in the absence of commutativity.<|endoftext|> TITLE: Why axiom 3) of topology is redundant? QUESTION [6 upvotes]: I'm learning topology and I'm reading Dugundji's General Topology book. He gives the axioms of a topology $\tau$ on $X$: 1) & 2) (a topology is closed under arbitrary unions and finite intersections), 3) $\emptyset$ and $X$ belong to the topology $\tau$. Then he says: "Observe that since the union (resp. intersection) of an empty family of sets in $X$ is $\emptyset$ (resp. $X$), axiom 3 is actually redundant." My questions are: why does he consider an empty family of sets of $X$ in the observation? Without axiom 3, how do we know that the empty set in fact belongs to the topology $\tau$? I really don't understand his observation. Any help would be appreciated. REPLY [9 votes]: Property 1 says an arbitrary union of open sets is open.
An arbitrary union of sets $$\bigcup_{A\in \mathcal{A}} A$$ is the union of all the sets in some collection $\mathcal{A}$ of sets. In particular, you could take $\mathcal{A}=\emptyset,$ the empty collection of sets. The union over the empty collection is the empty set (what else?). Thus property 1 implies that the empty set is open. Likewise the intersection over the empty collection is naturally defined as $X.$ This makes sense since intersecting with more and more sets restricts the elements more and more. It makes sense to 'start out' with all the elements, so the intersection of the empty collection of sets is $X.$ Also, it is a finite intersection (since the empty collection is finite). Thus, property 2 implies that $X$ is open. EDIT: As commenters have emphasized, at the end of the day, assigning a value to an intersection of an empty collection of sets is a convention. It only makes sense (to my knowledge) if you have a notion of a total space $X$ that every set under consideration is a subset of, as is the case in topology. In this scenario, the notion is well-defined and I tried to make it intuitive in my answer. Formally, in the scenario with a total space $X,$ the definitions of the union and intersection are given by $$\bigcup_{A\in \mathcal{A}} A = \{x\in X\mid \exists A\in\mathcal{A}\mbox{ such that } x\in A \}\\\bigcap_{A\in \mathcal{A}} A = \{x\in X\mid \forall A\in\mathcal{A},\; x\in A \}.$$ Applying the definition for the intersection in the case $\mathcal{A}=\emptyset$ gives the result $X$ as indicated when you apply the notion of vacuous truth to the quantified statement $\forall A\in \emptyset,\ldots$, in which case it is always true regardless of what $\ldots$ is. So while it's true that the 3rd property is redundant, this fact is somewhat formal and not all that interesting. That is why pretty much everyone retains the 3rd property for clarity.<|endoftext|> TITLE: Diffeomorphism invariance of the Ricci tensor QUESTION [8 upvotes]: It may be a very simple question, but I would be happy to have a quick answer on how we can show that the Ricci tensor is invariant under a diffeomorphism. To be precise, if $ \phi : M\to M$ is a diffeomorphism, I want to show $$ \text{Ric} (\phi^* g) = \phi^* \text{Ric} (g).$$ REPLY [3 votes]: This question bothered me as well and later I found the solution, so I'm writing it here. It is known from basic Riemannian geometry that curvature is preserved by isometries. So if $\phi : (M,g) \to (\tilde{M}, \tilde{g})$ is an isometry, then $\phi^{*}R(\tilde{g}) = R(g)$. But in our case, $\phi$ is just a diffeomorphism. But it is an isometry if considered as a map $\phi: (M, \phi^{*}g) \to (M,g)$. Thus using isometry invariance of curvature we get that $$ \phi^{*}\text{Ric}(g) = \text{Ric}(\phi^{*}g). $$<|endoftext|> TITLE: Lagrange's theorem and divisibility consequences. QUESTION [8 upvotes]: There are some simple, but sometimes intriguing, divisibility statements that are straightforward consequences of Lagrange's theorem. For instance: $p$ divides $a^{p-1}-1$ for $p\nmid a$ (Fermat's little thm) $n!$ divides $(p^n-1)(p^n-p)\cdots(p^{n}-p^{n-1}).$ The latter one can be derived from the fact that $S_n \hookrightarrow GL_{n}(\mathbb{F}_p)$. I've noticed that simple examples like those can be very compelling for students (beginners). Question: Are there more interesting divisibility statements that are immediate consequences of Lagrange's thm? That is, coming from the simple fact that a group $H$ is a subgroup of a finite group $G$?
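As a quick computational sanity check of the second statement, here is a minimal Python sketch (standard library only; the helper name gl_order is my own, chosen for illustration):

```python
from math import factorial

def gl_order(p, n):
    """Order of GL_n(F_p): the product (p^n - 1)(p^n - p)...(p^n - p^(n-1))."""
    result = 1
    for i in range(n):
        result *= p**n - p**i
    return result

# Lagrange's theorem applied to the embedding S_n -> GL_n(F_p)
# predicts that n! divides |GL_n(F_p)|; spot-check small cases.
for p in (2, 3, 5, 7, 11):
    for n in range(1, 8):
        assert gl_order(p, n) % factorial(n) == 0
print("n! divides |GL_n(F_p)| in all sampled cases")
```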
REPLY [2 votes]: For $a \geq 2$ and $n\geq 1$, $n\mid\phi(a^n - 1)$ ($\phi$ being Euler's $\phi$-function). This follows from noting that the order of $a$ in $\left(\Bbb Z/(a^n-1)\Bbb Z\right)^\times$ is $n$.<|endoftext|> TITLE: What do the Hurwitz quaternions have to do with the Hurwitz quaternion order? QUESTION [6 upvotes]: The Hurwitz quaternions are the ring formed by the elements of the form $w+xi+yj+zij$ where $i^2=j^2=-1$, $ij=-ji$, and where $w,x,y,z$ are either all integers or all half-integers. These form a maximal order of the quaternion algebra $\Big(\frac{-1,-1}{\mathbb{Q}}\Big)$. The Hurwitz quaternion order, on the other hand, is defined as follows (according to Wikipedia). Let $\rho$ be a primitive seventh root of unity and let $K$ be the maximal real subfield of $\mathbb{Q}(\rho)$. Let $\eta=2\cos(\frac{2\pi}{7})$ (so that $\mathbb{Z}[\eta]$ is the ring of integers of $K$) and consider the quaternion algebra $\Big(\frac{\eta,\eta}{K}\Big)$ (where $i^2=j^2=\eta$). Then let $\tau=1+\eta+\eta^2$ and $j'=\frac{1+\eta i+\tau j}{2}$, and the Hurwitz quaternion order is the maximal order $\mathbb{Z}[\eta][i,j,j']$ in $\Big(\frac{\eta,\eta}{K}\Big)$. It seems that the Hurwitz quaternion order should be some sort of generalization of the Hurwitz quaternions but there are a lot of decisions here that seem arbitrary to me. What is the motivation for the similar nomenclature? What is special about the order $\mathbb{Z}[\eta][i,j,j']$ in $\Big(\frac{\eta,\eta}{K}\Big)$ and what does it have in common with the Hurwitz quaternions in $\Big(\frac{-1,-1}{\mathbb{Q}}\Big)$? REPLY [3 votes]: What they have to do with each other is that they are both related to things studied by Hurwitz. The "Hurwitz quaternions" are an object typically considered in number theory (and as a number theorist, if I hear "Hurwitz order" that's what I'll think of), and were directly studied by Hurwitz. The other "Hurwitz order" comes up in geometry because it is related to Hurwitz surfaces, which are compact Riemann surfaces that have $84(g-1)$ automorphisms ($g =$ genus). This is the maximal number possible by a theorem of Hurwitz. The smallest possible genera for Hurwitz surfaces are 3, 7 and 14. The first two have automorphism groups of the form PSL(2,$q$), whereas the third has automorphism group the triangle group (2,3,7). The latter group is essentially the unit group in the other "Hurwitz order," hence the name, though as far as I know it was not actually studied by Hurwitz.<|endoftext|> TITLE: Is there a prime of the form $10^n + 1$ except $2, 11, 101$? QUESTION [8 upvotes]: Is there a prime of the form $10^n + 1$ except $2, 11, 101$? I have confirmed there is no such prime for $n < 64$. REPLY [5 votes]: Note that $(10^{n}+1)\mid10^{n(2k+1)}+1$ for $n$, $k\in \mathbb{N}$, so $10^m+1$ is composite whenever $m$ is not a power of $2$. According to the Table $\boldsymbol{1}$ (page $24$ or $\frac{30}{55}$ of the pdf) from this journal: $10^{2^{n}}+1$ has no (known?) prime factors for $n=13$, $14$, $21$, $23$, $24$, $25$, $\ldots $ That means $10^m+1$ is composite for all $3\le m \le 8191$.<|endoftext|> TITLE: How did Euler prove this identity? QUESTION [9 upvotes]: While studying Fourier analysis last semester, I saw an interesting identity: $$\sum_{n=1}^{\infty}\frac{1}{n^2-\alpha^2}=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}$$ whenever $\alpha \in \mathbb{C}\setminus \mathbb{Z}$, for which I learned two proofs, using Fourier series and residue calculus.
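(Numerically the identity is easy to confirm; a small sketch, assuming numpy, truncating the series, with the tail of the sum being $O(1/N)$:)

```python
import numpy as np

# Compare sum_{n>=1} 1/(n^2 - alpha^2) with 1/(2 alpha^2) - pi/(2 alpha tan(pi alpha))
# for a sample real alpha; two million terms give roughly six digits of agreement.
alpha = 0.3
n = np.arange(1, 2_000_000, dtype=np.float64)
lhs = np.sum(1.0 / (n**2 - alpha**2))
rhs = 1.0 / (2 * alpha**2) - np.pi / (2 * alpha * np.tan(np.pi * alpha))
print(lhs, rhs)  # both approximately 1.7516
```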
More explicitly, we can deduce the theorem using the Fourier series of $f(\theta)=e^{i(\pi - \theta)\alpha}$ on $[0,2\pi]$ or a contour integral of the function $g(z)=\frac{\pi}{(z^2-\alpha^2)\tan\pi z}$ along large circles. But these techniques, as far as I know, weren't fully developed in Euler's time. So what was Euler's method to prove this identity? Is there a proof at an elementary level? REPLY [12 votes]: According to Wikipedia (https://en.wikipedia.org/wiki/Basel_problem), Euler was the first to give a representation of the sine function as an infinite product: $$(*) \hspace{2cm}\sin (\pi \alpha)=\pi \alpha \prod\limits_{n=1}^{\infty}(\frac{n^2-\alpha^2}{n^2}),$$ which was formally proved by Weierstrass about 100 years later. Now taking "$\ln$" on both sides of (*) gives $$\ln(\sin (\pi \alpha))=\ln(\pi \alpha)+ \sum \limits_{n=1}^{\infty} \ln (\frac{n^2-\alpha^2}{n^2}),$$ and after taking derivatives on both sides we arrive at $$\sum_{n=1}^{\infty}\frac{1}{n^2-\alpha^2}=\frac{1}{2\alpha^2}-\frac{\pi}{2\alpha\tan\pi\alpha}.$$<|endoftext|> TITLE: Is it possible that the formula for the $n^{th}$prime number is a polynomial in n? QUESTION [7 upvotes]: I know that no pattern has been found yet. And prime numbers are weird, so the formula being a polynomial would be too simple to be true. Has some proof been given that the expression for the $n^{th}$ prime number can't be a polynomial in $n$? I also found this thing on the internet that $\frac{\sin^2\frac{\pi(j-1)!^2}{j}}{\sin^2\frac{\pi}{j}}$ is equal to one if and only if $j$ is prime. One thing that I got from simplifying this is that $\frac{(j-1)!^4-1}{j}$ is an integer if and only if $j$ is prime. So, if there's some equation which is only satisfied by integers, then it will also be satisfied by $\frac{(j-1)!^4-1}{j}$ and hence it will also be satisfied by all prime numbers. Is there some equation involving continuous functions which is only satisfied by integers? I couldn't find any equation like that. It should be in terms of some standard continuous functions and shouldn't involve discontinuous functions like the greatest integer function and smallest integer function. And it shouldn't be like $n\%1 =0$ only if $n$ is an integer. That won't help. And also not anything like $\sin(n\pi)=0$ only if $n$ is an integer. REPLY [9 votes]: In fact there is no non-constant polynomial whose values at the positive integers are all primes. First, note that any polynomial whose values at the positive integers are primes must have rational coefficients (see e.g. Integer-valued polynomials). Suppose $P$ is such a polynomial, and let $P(x) = p$ where $p$ is coprime to the denominators of all coefficients of $P$. Then since $(x+p)^k \equiv x^k \mod p$ for all nonnegative integers $k$, we get that $P(x+kp) \equiv P(x) \equiv 0\mod p$ for every $k \geq 0$; since a non-constant polynomial takes the value $p$ only finitely many times, some $P(x+kp)$ is a proper multiple of $p$ and hence can't be prime.<|endoftext|> TITLE: Prove this $ \int_0^\infty\frac{\coth^2x-1}{(\coth x-x)^2+\frac{\pi^2}{4}}dx=\frac45 $ QUESTION [8 upvotes]: I have trouble with this seemingly simple problem $$ K=\int_0^\infty\frac{\coth^2x-1}{(\coth x-x)^2+\frac{\pi^2}{4}}dx=\frac45.\tag{A} $$ Here is the Wolfram Alpha computation. Attempt: I tried to find the residue at the pole $x=\frac{\pi i}{2}$ and got $\frac{9}{5\pi i}$ (link to WA). Therefore the residue theorem tells me that $K=\frac{18}{5}\neq\frac45$. I'm stuck. How to prove (A)? REPLY [8 votes]: To answer your question, it is not correct because, while you have calculated the residue of the integrand correctly, you are assuming that the contour of integration is a semicircle.
This semicircle encompasses other poles in the complex plane, which explains why your result is not right. Evaluating a real integral via the Residue Theorem can be a tricky business. Here's a start: what contour? The semicircle seems hopeless: how would we account for all the other poles? Better is a closed contour that contains only the pole at $z=i \pi/2$. Now we need to determine the form of the integrand to use in the contour integral. Remember, somehow we should get the original integral back using a direct parameterization of the contour. So, without further ado... Consider the contour integral $$\oint_C dz \, \tanh{\left [z-\operatorname{arctanh}{\left (z-i \frac{\pi}{2} \right )} \right ]} $$ which is equal to $$\oint_C dz \, \frac{\displaystyle 1-z \coth{z} + i \frac{\pi}{2} \coth{z}}{\displaystyle \coth{z}-z+i \frac{\pi}{2}} $$ where $C$ is the rectangle with vertices $\pm R$ and $\pm R+i \pi$. The contour integral is then equal to $$\int_{-R}^R dx \frac{\displaystyle 1-x \coth{x} + i \frac{\pi}{2} \coth{x}}{\displaystyle \coth{x}-x+i \frac{\pi}{2}} + i \int_0^{\pi} dy \, \frac{\displaystyle 1-(R+i y)\coth{(R+i y)} + i \frac{\pi}{2} \coth{(R+i y)}}{\displaystyle \coth{(R+i y)} - (R+i y) + i \frac{\pi}{2}} \\ + \int_{R}^{-R} dx \frac{\displaystyle 1-x \coth{x} - i \frac{\pi}{2} \coth{x}}{\displaystyle \coth{x}-x-i \frac{\pi}{2}}\\ + i \int_{\pi}^0 dy \, \frac{\displaystyle 1-(-R+i y)\coth{(-R+i y)} + i \frac{\pi}{2} \coth{(-R+i y)}}{\displaystyle \coth{(-R+i y)} - (-R+i y) + i \frac{\pi}{2}}$$ Now consider the second and fourth integrals, i.e., those over the vertical edges of the rectangle. We consider the limit as $R \to \infty$. Note that the integrand of the second integral approaches $1$ in this limit, while the integrand of the fourth integral approaches $-1$. (Exercise for the reader.) Thus, the sum of these two integrals is $i 2 \pi$. We can also combine the integrands of the first and third integrals to get the integrand of the integral we seek, times $i \pi$. Thus, the contour integral is equal to (after exploiting the evenness of that integrand) $$i 2 \pi \int_0^{\infty} dx \, \frac{\coth^2{x}-1}{\displaystyle (\coth{x}-x)^2+\frac{\pi^2}{4}} + i 2 \pi$$ By the residue theorem, the contour integral is also equal to $i 2 \pi$ times the residue at the pole $z=i \pi/2$. Interestingly enough, this pole is not simple, but is instead a third-order pole. Given the integrand, I find it easier to compute the Laurent series directly and use that to find the residue. However, I will simply state the result as follows: $$\operatorname*{Res}_{z=i \pi/2} \frac{\displaystyle 1-z \coth{z} + i \frac{\pi}{2} \coth{z}}{\displaystyle \coth{z}-z+i \frac{\pi}{2}} = \frac{9}{5} $$ so that $$i 2 \pi \int_0^{\infty} dx \, \frac{\coth^2{x}-1}{\displaystyle (\coth{x}-x)^2+\frac{\pi^2}{4}} + i 2 \pi = i 2 \pi \frac{9}{5} $$ or $$ \int_0^{\infty} dx \, \frac{\coth^2{x}-1}{\displaystyle (\coth{x}-x)^2+\frac{\pi^2}{4}} = \frac{4}{5} $$ ADDENDUM Let's take a look at that residue calculation.
For this, it helps to know that $$\coth{\left ( z + i \frac{\pi}{2} \right )} = \tanh{z}$$ and $$\tanh{z} = z - \frac13 z^3 + \frac{2}{15} z^5 + O \left ( z^7 \right )$$ so that the integrand looks like, in the neighborhood of $z=i \pi/2$ (writing $z$ for the shifted variable $z - i\pi/2$), $$ \frac{1-z \tanh{z}}{\tanh{z}-z} $$ Now we can find the Laurent expansion of this function about $z=0$, which looks like $$-\frac{3}{z^3} \frac{\displaystyle 1-z^2 +O \left ( z^4 \right )}{\displaystyle 1-\frac{2}{5} z^2+O \left ( z^4 \right )} = -\frac{3}{z^3} \left [1-z^2 +O \left ( z^4 \right ) \right ] \left [1+\frac{2}{5} z^2+O \left ( z^4 \right ) \right ]$$ The residue is $-3$ times the coefficient of $z^2$ in the bracketed product; that coefficient is $\frac{2}{5}-1=-\frac{3}{5}$, so the residue may simply be read off as $9/5$ as stated above. ADDENDUM II How do we show that $z=i \pi/2$ is the only pole inside the rectangle? We can use Rouché's theorem. For example, we would just need to show that, on the rectangle, $$\left | \coth{z}-z \pm i \frac{\pi}{2} \right | \gt |\coth{z} | $$ On the horizontal sides of the rectangle, we see that $$\left | \coth{x}-x \pm i \frac{\pi}{2} \right |^2 - |\coth{x} |^2 = \frac{\pi^2}{4} - \left (2 x \coth{x}-x^2 \right ) $$ which is indeed $\gt 0$ for all $x \in \mathbb{R} $. On the vertical sides of the rectangle, the inequality is obviously satisfied because $|\coth{(R + i y)}|$ approaches $1$ as $R \to \pm \infty$. Therefore, by Rouché's theorem, the denominator of the integrand has the same number of zeroes inside the rectangle as $\coth{z}$, which is just the one at $z=i \pi/2$ and no others. Thus, the only pole inside the rectangle is at $z=i \pi/2$.<|endoftext|> TITLE: Quadratic equation of characteristic $2$ QUESTION [12 upvotes]: We know it is easy to decide whether a quadratic equation has a root in a field of characteristic $\neq$ 2. It is equivalent to deciding whether there exists an element whose square is a specific number (complete the square to cancel the linear term). But this method doesn't work in characteristic $2$. Is there a way to handle this case? REPLY [3 votes]: The point of this post is to give a slightly more precise description of the criteria listed in Sharding4's great answer in the case of finite fields of characteristic two. I thought I had explained this already somewhere on the site but cannot find it now; the closest match is here, but there the focus is very different. Let $F=\Bbb{F}_{2^m}$, aka $GF(2^m)$, be the field of cardinality $2^m$. We have the so called (absolute) trace $T:F\to\Bbb{F}_2$ given by the sum of the Galois conjugates (= powers of the Frobenius): $$ T(x)=x+x^2+x^4+x^8+\cdots+x^{2^{m-1}}. $$ 1. Because $T(x)^2=T(x)$ we always have $T(x)\in\Bbb{F}_2$ as claimed. 2. $T$ is an additive homomorphism, $T(x+y)=T(x)+T(y)$, because the powers of Frobenius are. 3. We trivially also have $T(x^2)=T(x)$, because we can square $T(x)$ term-by-term, and the square of the last term is $(x^{2^{m-1}})^2=x^{2^m}=x$ for all $x\in F$. 4. The mapping $L:F\to F, x\mapsto x+x^2$ is also an additive homomorphism, and its kernel clearly equals $\mathrm{Ker}(L)=\{0,1\}$. It follows that $\mathrm{Im}(L)$ has $2^{m-1}$ elements. 5. By items 2 and 3 it follows that $\mathrm{Im}(L)\subseteq\mathrm{Ker}(T)$. Because $T$ is a polynomial of degree $2^{m-1}$, it cannot have more than $2^{m-1}$ zeros in $F$. Therefore we must have equality: $$ \mathrm{Im}(L)=\mathrm{Ker}(T).$$ Let us then look at the solutions of the equation $$ x^2+ax+b=0\qquad(*) $$ with $a,b\in F$.
If $a=0$, we have $x^2=b$, and this has a double root $x=\sqrt b=b^{2^{m-1}}$ that always exists in a finite field of characteristic two. If $a\neq0$ then we introduce the new variable $y=x/a$ and rewrite $(*)$ divided by $a^2$ to read $$ y^2+y=c\qquad(**) $$ with $c=b/a^2$. By item 5, $(**)$ has two solutions $y\in F$ if $T(c)=0$ and no solutions in $F$ when $T(c)=1$. This is standard in textbooks covering characteristic-two finite field arithmetic. If $y$ is one solution then $y+1$ is the other, either by Vieta relations or by linearity of the Frobenius; really, we are looking for a coset of $\mathrm{Ker}(L)$. A less well known trick of the trade for finding one of the solutions is to use the so called half-trace. It is a function $H:\mathrm{Ker}(T)\to F$ with the property that $$ H(y)+H(y)^2=y $$ for all $y\in\mathrm{Ker}(T)$. This is by no means a unique function, but we can make the following observations. If $m=2k+1$ is odd, we can use $$H(y)=y^2+y^8+\cdots+y^{2^{m-2}}$$ because then $H(y)^2=y^4+y^{16}+\cdots+y^{2^{m-1}}$. Therefore $$H(y)+H(y)^2=\sum_{j=1}^{m-1}y^{2^j}=y+T(y)=y$$ as we are assuming that $T(y)=0$. So here roughly one half of the terms of the trace $T(y)$ appear in $H(y)$ — hence the name half-trace. See also here. When $m\equiv2\pmod4$ there is a similar explicit formula for the half trace as an $\Bbb{F}_4$ linear combination of the powers of the Frobenius. This time the coefficients are third roots of unity — they are each other's squares and their sum $=1$, and that makes it tick. If we use a normal basis $b_i=b_0^{2^i}$, $i=0,1,\ldots,m-1$, with $b_0$ a carefully chosen element of $F$, we store an element $x\in\Bbb{F}_{2^m}$ by saving its coordinates $a_i\in\Bbb F_2$ w.r.t. the normal basis, that is $x=\sum_ia_ib_i$. Then we know that $T(x)=\sum_ia_i$, and (one of the points of using normal bases) $x^2=\sum_ia_ib_{i+1\bmod m}$, i.e. the coordinates of $x^2$ are gotten by cyclically shifting those of $x$. So if $x\in\mathrm{Ker}(T)$ then there is an even number of $1$s among the coordinates. It follows that we can solve the system $c_{m-1}=0=c_{-1}$, $c_{j-1}+c_j=a_j$, $j=0,1,\ldots,m-2$, whence $H(x)=\sum_i c_ib_i$ works. This is useful for large $m$, when normal bases are a common option. If we need to solve a large number of quadratic equations over the same field $F$, we can build a useful look-up table. Let $\{x_1,x_2,\ldots,x_{m-1}\}$ be a basis of $\mathrm{Ker}(T)$ over the prime field. Using linearity of $L$ we can easily build a table of elements $y_1,\ldots,y_{m-1}$ such that $L(y_i)=x_i$. It follows that with $c=\sum_i a_ix_i$ we can then use $H(c)=\sum_ia_iy_i$.<|endoftext|> TITLE: Explain why $\int_0^\infty\frac{\sin{4x}}{x}\prod\limits_{k=1}^n \cos\left(\frac{x}{k}\right) dx\approx\frac{\pi}{2}$ QUESTION [8 upvotes]: Why do we have, for every $n\in\mathbb N$, $$\int_0^\infty\left(\prod_{k=1}^n \cos\left(\frac{x}{k}\right)\right)\frac{\sin{4x}}{x}dx\approx\frac{\pi}{2}\ ?$$ REPLY [9 votes]: Hint. One may use the transformation $$ \begin{align} \prod_{k=1}^n \cos \frac{x}{k} & = \frac{1}{2^n}\sum_{e\in S} \cos\left[\left(\frac{e_1}1+\cdots+\frac{e_n}n\right)x\right] \quad \text{where }S=\{1,-1\}^n \end{align} $$ and the addition formula $$ 2\sin a \cos b = \sin(a + b) + \sin(a - b).
$$ Thus, for every positive natural integer $n$, the integral $$ I_n=\int_0^\infty\left(\prod_{k=1}^n \cos\frac{x}{k}\right)\frac{\sin{4x}}{x}\:dx $$ is such that $$2^nI_n=\sum_{e\in S} \int_0^\infty\frac{\sin\left[\left(4+\frac{e_1}1+\cdots+\frac{e_n}n\right)x\right]}{2x}dx+\sum_{e\in S} \int_0^\infty\frac{\sin\left[\left(4-\frac{e_1}1-\cdots-\frac{e_n}n\right)x\right]}{2x}dx. $$ By symmetry of the set $S$ the two sums coincide, hence $$ 2^nI_n=\sum_{e\in S} \int_0^\infty\frac{\sin\left[\left(4+\frac{e_1}1+\cdots+\frac{e_n}n\right)x\right]}{x}dx=\sum_{e\in S} \frac\pi2\cdot \mathrm{sgn}\left(4+\frac{e_1}1+\cdots+\frac{e_n}n\right)$$ where we have used that, for every $\alpha \in \mathbb{R}$, $\alpha \ne0$, $$ \int_0^\infty \frac{\sin (\alpha x)}x\:dx=\frac \pi2 \cdot \text{sgn}(\alpha). $$ If $n\leq30$, then $$\left|\frac{e_1}1+\cdots+\frac{e_n}n\right|<4$$ for every $(e_1,\ldots,e_n)$ in $S$, hence all the signs are $+1$ and $$ I_n=\frac \pi2. $$ If $n\ge31$, then one only gets the semi-explicit formula $$ I_n=\frac\pi2\cdot\frac{1}{2^n}\sum_{e\in S}\mathrm{sgn}\left(4+\frac{e_1}1+\cdots+\frac{e_n}n\right) $$ which is nevertheless enough to show that $$0<I_n<\frac\pi2.$$<|endoftext|> TITLE: proof: Hilbert Schmidt operator is compact QUESTION [7 upvotes]: Consider the Hilbert Schmidt operator $K: L^2(\Omega) \rightarrow L^2(\Omega)$, $\Omega \subset \subset \mathbb R^N$, with $k \in L^2(\Omega \times \Omega)$ and $f \in L^2(\Omega)$, $$(Kf)(x) := \int_\Omega k(x,y)f(y)\, dy.$$ I want to show that the Hilbert Schmidt operator $K$ is a compact operator. Therefore I'm using this characterization. Let $X$, $Y$ be normed linear spaces and $X$ reflexive. A continuous linear operator $T: X \rightarrow Y$ that maps weakly convergent sequences onto strongly convergent sequences is compact. (We already know that $K$ is well-defined as is proven here.) My question here is, isn't it obvious that $K$ is compact? We know that $K$ is linear and bounded, hence continuous. Every continuous map takes weakly convergent sequences to weakly convergent sequences. The norm itself is also continuous. Weak convergence together with convergence of the norms implies convergence. Thus $K$ is compact. Am I missing something here? Or better: What am I missing here? $\,$ I'm also adding the proof from the textbook for completeness: Proof. Let $(f_n)_{n \in \mathbb N} \subset L^2(\Omega)$ be a weakly convergent sequence; then $(f_n)_{n \in \mathbb N}$ is bounded. That is, $\exists C > 0 $ such that $||f_n||_{L^2(\Omega)} \leq C$, $\forall n \in \mathbb N$. By Fubini's theorem we have for almost every $x\in \Omega$ that $$ || k(x,\cdot) ||_{L^2(\Omega)}^2 = \int_\Omega |k(x,y)|^2 \, dy < \infty .$$ Thus for almost every $x \in \Omega$ we have $\begin{align} \lim_{n \rightarrow \infty} (Kf_n)(x) & = \lim_{n \rightarrow \infty} \int_\Omega k(x,y)f_n(y) \, dy = \lim_{n \rightarrow \infty} \langle k(x,\cdot), f_n \rangle_{L^2(\Omega)} \\ & = \langle k(x,\cdot), f \rangle_{L^2(\Omega)} = \int_\Omega k(x,y)f(y) \, dy = (Kf)(x) \end{align}$ By the Cauchy-Schwarz inequality we have $$ |(Kf_n)(x)| \leq ||f_n||_{L^2(\Omega)} \left(\int_\Omega |k(x,y)|^2 \, dy\right)^{1/2} \leq C \, \left(\int_\Omega |k(x,y)|^2 \, dy\right)^{1/2} $$ Hence by Lebesgue's dominated convergence theorem we have convergence of the norms $$ \lim_{n \rightarrow \infty} \int_\Omega |(Kf_n)(x)|^2 \, dx = \int_\Omega |(Kf)(x)|^2 \, dx ,$$ that is $|| Kf_n ||_{L^2(\Omega)} \rightarrow || Kf ||_{L^2(\Omega)}\, \, (n\rightarrow \infty)$. Since weak convergence together with (strong or normal) convergence of the norms implies (strong) convergence, $K$ is compact.
REPLY [3 votes]: The norm is continuous as a map $\|\cdot\|: (X,\|\cdot\|_X)\to \mathbb R$ but not when $X$ carries its weak topology. This is where your general argument fails. The proof from your textbook is fine; however, one can show in general that every Hilbert-Schmidt operator is already compact: One can represent the finite rank operators in a Hilbert space as a tensor product. We have several natural norms on this space, whose completions lead to several classes of operators (nuclear operators, Hilbert-Schmidt operators and compact operators), and those norms dominate each other in such a way that we have the inclusions: every nuclear operator is a Hilbert-Schmidt operator, and every Hilbert-Schmidt operator is a compact operator.<|endoftext|> TITLE: Surcomplex numbers and the largest algebraically closed field QUESTION [12 upvotes]: It's well known that the surreal numbers $\mathbf{No}$ are the largest ordered "field" (more accurately, they form a proper class with field structure, which is sometimes called a Field with capital F), in the sense that every other ordered field can be embedded in them. Since the surreal numbers are real closed, their algebraic closure is given by $\mathbf{No}[i]$, the surcomplex numbers. My question is, is there a similar characterization of the surcomplex numbers as the largest algebraically closed Field of characteristic zero? If they aren't the largest, is it possible to find a proper class with that property? REPLY [10 votes]: Let $F$ be an algebraically closed field of characteristic zero. Let $\kappa$ be the cardinal of a transcendence basis of $F$ over its prime subfield $\mathbb{Q}$. Since the ${\omega_0}^{{\omega_0}^{\alpha}}$, $\alpha < \kappa$ satisfy the relation $\forall \alpha < \beta < \kappa, \forall n \in \mathbb{N}, n.({\omega_0}^{{\omega_0}^{\alpha}})^n < {\omega_0}^{{\omega_0}^{\beta}}$, they form an algebraically independent family over $\mathbb{Q} \subset No$. So $No[i]$, being algebraically closed, contains an isomorphic copy of $F$ as the relative algebraic closure of $\mathbb{Q}(({\omega_0}^{{\omega_0}^{\alpha}})_{\alpha < \kappa})$. This is also true in NBG with global choice if $F$ is an algebraically closed Field of characteristic zero, and to see this one only needs to repeat the same argument with $Ord$ instead of $\kappa$. Note that being a universal algebraically closed Field of characteristic zero characterises $No[i]$ as a field whereas being a universal real closed Field does not characterise $No$. The theory of algebraically closed fields of a given characteristic is indeed more stable than that of real closed fields.<|endoftext|> TITLE: For every triangle prove that $\sum\limits_{cyc}\frac{a}{(a+b)^2}\geq\frac{9}{4(a+b+c)}$ QUESTION [7 upvotes]: Let $a$, $b$ and $c$ be the side-lengths of a triangle. Prove that: $$\frac{a}{(a+b)^2}+\frac{b}{(b+c)^2}+\frac{c}{(c+a)^2}\geq\frac{9}{4(a+b+c)}$$ The Buffalo way kills it, but I am looking for a nice proof for this nice inequality. SOS (sums of squares) gives $\sum\limits_{cyc}\frac{(a-b)^2(ab+b^2-ac)}{(a+b)^2}\geq0$ and I don't see what comes next.
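For what it's worth, a random numerical scan (a sketch assuming numpy; the Ravi substitution in the code guarantees valid triangles) finds no violations and shows the gap closing only near $a=b=c$:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs(a, b, c):
    return a / (a + b)**2 + b / (b + c)**2 + c / (c + a)**2

worst = np.inf
for _ in range(100_000):
    # Ravi substitution: a = x+y, b = y+z, c = z+x with x, y, z > 0
    # always yields a valid triangle.
    x, y, z = rng.uniform(0.01, 1.0, size=3)
    a, b, c = x + y, y + z, z + x
    gap = lhs(a, b, c) - 9.0 / (4.0 * (a + b + c))
    worst = min(worst, gap)
print("smallest observed gap:", worst)  # stays >= 0; ~0 when a = b = c
```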
REPLY [2 votes]: After using the Ravi substitution, we need to prove $$\sum \frac{x+y}{(2y+z+x)^2} \geqslant \frac{9}{8(x+y+z)}.$$ By the Cauchy-Schwarz inequality we get $$\sum \frac{x+y}{(2y+z+x)^2} \geqslant \frac{\left[\displaystyle \sum (x+y)(42x+3y+55z) \right]^2}{\displaystyle \sum (2y+z+x)^2(x+y)(42x+3y+55z)^2}.$$ Therefore, we will show that $$8(x+y+z)\left[\displaystyle \sum (x+y)(42x+3y+55z) \right]^2 \geqslant 9\sum (2y+z+x)^2(x+y)(42x+3y+55z)^2,$$ or $$11700 \sum xy^2(y-z)^2 + \sum \left(1954xyz+6723x^2y+\frac{3104xy^2}{3}+\frac{5053yz^2}{3}\right)(z+x-2y)^2 \geqslant 0.$$ Done.<|endoftext|> TITLE: Simplifying fraction with factorials: $\frac{(3(n+1))!}{(3n)!}$ QUESTION [6 upvotes]: I was trying to solve the limit: $$\lim_{n \to \infty} \sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}$$ By using the Stolz–Cesàro criterion (which is valid in this case, since $b_n$ increases monotonically): $$L= \lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n \to \infty} \frac{a_{n+1}-a_n}{b_{n+1}-b_n}$$ Now I realise using Stirling's formula would make everything easier, but my first approach was simplifying the factorial after applying the criterion I mentioned before. So, after a few failed attempts I looked it up on Mathematica and it said that $\frac{(3(n+1))!}{(3n)!}$ (which is one of the fractions you have to simplify) equals $3(n+1)(3n+1)(3n+2)$. Since I can't get there myself I want to know how you would do it. Just so you can correct me, my reasoning was: $$\frac{(3(n+1))!}{(3n)!} = \frac{3\cdot 2 \cdot 3 \cdot 3 \cdot 3 \cdot 4 \cdot (...) \cdot 3 \cdot (n+1)}{3 \cdot 1 \cdot 3 \cdot 2 \cdot 3 \cdot 3 \cdot (...) \cdot 3 \cdot n } = $$ $$= \frac{3^n(n+1)!}{3^{n}n!} = \frac{(n+1)!}{n!} = n+1$$ Which apparently isn't correct. I must have failed at something very silly. Thanks in advance! REPLY [3 votes]: I thought it might be useful to present an approach that relies on elementary tools only. To that end, we proceed. First, we write $$\begin{align} \frac{1}{n}\log((3n)!)&=\frac1n\sum_{k=1}^{3n}\log(k)\\\\ &=\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n) \tag 1 \end{align}$$ Now, note that the parenthetical term on the right-hand side of $(1)$ is the Riemann sum for $\int_0^3 \log(x)\,dx=3\log(3)-3$. Using $(1)$, we have $$\begin{align} \lim_{n\to \infty}\sqrt[n]{\frac{(3n)!}{(5n)^{3n}}}&=\lim_{n\to \infty}e^{\frac1n \log((3n)!)-3\log(5n)}\\\\ &=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)+3\log(n)-3\log(5n)}\\\\ &=\lim_{n\to \infty}e^{\left(\frac1n\sum_{k=1}^{3n}\log(k/n)\right)-3\log(5)}\\\\ &= e^{3\log(3)-3-3\log(5)}\\\\ &=\left(\frac{3}{5e}\right)^3 \end{align}$$ as expected! Tools used: Riemann sums and the continuity of the exponential function.<|endoftext|> TITLE: Show that three complex numbers $z_1, z_2, z_3$ are collinear iff $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$ QUESTION [8 upvotes]: I need to show that $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0 \iff z_1,z_2,$ and $z_3$ are collinear. I know that $\operatorname{Im}(\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1) = 0$ implies that $\overline{z_1}z_2+\overline{z_2}z_3+\overline{z_3}z_1 \in \mathbb{R}$, but I am not sure how to argue in either direction. Please help. Thank you REPLY [2 votes]: Here is another approach that relies on the idea that it's easy to detect if three complex numbers lie on a line through the origin, and we attempt to reduce the original problem to this simpler one.
To this end, we might hope that translating our complex numbers (in this case, subtracting $z_1$ from each) doesn't affect whether the function $f(z_1, z_2, z_3) = \overline{z_1}z_2 + \overline{z_2}{z_3} + \overline{z_3}{z_1}$ takes a real value or not. And indeed, we'll see that $$\operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) = \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big).$$ Observe that \begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &= \operatorname{Im}\Big(\overline{z_2 - z_1}(z_3 - z_1)\Big) \\[7pt] &= \operatorname{Im}\Big(\overline{z_2}z_3 - \overline{z_1}z_3 - \overline{z_2}z_1 + \overline{z_1}{z_1}\Big) \\[7pt] &= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) + \underbrace{\operatorname{Im}\big(|z_1|^2\big)}_{=0} \end{align*} where $\overline{z_1}{z_3} = \overline{z_1\overline{z_3}}$ so the two have opposite imaginary parts; that is, $\operatorname{Im}\big(z_1\overline{z_3}) = - \operatorname{Im}(\overline{z_1}z_3)$. Now picking up where we left off, \begin{align*} \operatorname{Im}\Big(f(0, z_2 - z_1, z_3 - z_1)\Big) &= \operatorname{Im}\big(\overline{z_2}z_3) - \operatorname{Im}\big(\overline{z_1}z_3\big) - \operatorname{Im}\big(\overline{z_2}z_1\big) \\[7pt] &= \operatorname{Im}\big(\overline{z_2}z_3) + \operatorname{Im}\big(\overline{z_3}z_1\big) + \operatorname{Im}\big(\overline{z_1}z_2\big)\\ &= \operatorname{Im}\Big(f(z_1, z_2, z_3)\Big) \end{align*} Now we simply need to show that $z_1, z_2, z_3$ are collinear if and only if $f(0, z_2 - z_1, z_3 - z_1) \in \Bbb R$. Letting $w_2 = z_2 - z_1$ and $w_3 = z_3 - z_1$, this is equivalent to showing that $0, w_2, w_3$ are collinear if and only if $f(0, w_2, w_3) \in \Bbb R$. But since $f(0, w_2, w_3) = \overline{w_2}w_3$, this is just the well-known fact that $\overline{w_2}w_3 \in \Bbb R$ if and only if $w_3 = c w_2$ for some real scalar $c$ (when $w_2 \neq 0$; if $w_2 = 0$ the three points are trivially collinear).<|endoftext|> TITLE: Is this a valid way to prove, that $a^2 + b^2 \neq 3c^2$ for all integers $a, b, c$ ? (except the trivial case) QUESTION [5 upvotes]: Update: (because of the length of the question, I put an update at the top) I appreciate recommendations regarding the alternative proofs. However, the main emphasis of my question is about the correctness of the reasoning in the 8th case of the provided proof (with a diagram). Original question: I would like to know whether the following proof is a valid way to prove that $a^2 + b^2 \neq 3c^2$ for all $a, b, c \in Z$ (except the trivial case, when $a=b=c=0$). More formally, we have to prove the correctness of the following statement: $$P: (\forall a,b,c \in Z, a^2 + b^2 \neq 3c^2 \lor (a=b=c=0))$$ Proof. (by contradiction) For the sake of contradiction let's assume that there exist such $a, b, c \in Z$, that $a^2 + b^2 = 3c^2$ (and the combination of $a,b,c$ is not the trivial case). More formally, let's assume that $\neg P$ is true: $$\neg P: (\exists a,b,c \in Z, a^2 + b^2 = 3c^2 \land \neg (a=b=c=0))$$ There are $2^3$ possible combinations of different parities of $a,b,c$ (8 disjoint cases, which cover all of $Z^3$). So, in order to prove the original statement, we have to consider each case, and show that the truth of $\neg P$ always leads to some sort of contradiction.
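For what it's worth, a brute-force search over a small box (a Python sketch; of course this proves nothing) finds no nontrivial solutions, which is consistent with $P$:

```python
# Exhaustive check in a small box (not a proof): no nontrivial solution
# of a^2 + b^2 = 3 c^2 with |a|, |b|, |c| <= N.
N = 60
hits = [(a, b, c)
        for a in range(-N, N + 1)
        for b in range(-N, N + 1)
        for c in range(-N, N + 1)
        if a * a + b * b == 3 * c * c and (a, b, c) != (0, 0, 0)]
print(hits)  # expected: []
```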
Let's consider 8 possible cases (7 of which are simple, whereas the 8th case looks a bit intricate, and I am not sure regarding its correctness): Case 1) $a$ is odd, $b$ is odd, $c$ is odd Thus: $a = (2x + 1)$, $b = (2y + 1)$, $c = (2z + 1)$ for some $x, y, z \in Z$ So: $$ a^2 + b^2 = 3c^2 \\ \implies (2x + 1) ^2 + (2y + 1)^2 = 3 \cdot (2z + 1)^2 \\ \implies 2 \cdot (2x^2 + 2x + 2y^2 + 2y + 1) = 2 \cdot (6z^2 + 6z + 1) + 1 \\ \implies even\ number = odd\ number \\ $$ However, the derived result contradicts the fact that an even number and an odd number can't be equal. Hence: $(even\ number = odd\ number) \land (even\ number \neq odd\ number)$, or equivalently: $(even\ number = odd\ number) \land \neg (even\ number = odd\ number)$. Contradiction. Case 2) $a$ is odd, $b$ is odd, $c$ is even Thus: $a = (2x + 1)$, $b = (2y + 1)$, $c = 2z$ for some $x, y, z \in Z$ So: $$ a^2 + b^2 = 3c^2 \\ \implies (2x + 1) ^2 + (2y + 1)^2 = 3 \cdot (2z)^2 \\ \implies 2 \cdot (2x^2 + 2x + 2y^2 + 2y + 1) = 12z^2 \\ \implies 2 \cdot (x^2 + x + y^2 + y) + 1 = 6z^2 \\ \implies odd\ number = even\ number $$ Contradiction. Case 3) $a$ is odd, $b$ is even, $c$ is odd Thus: $a = (2x + 1)$, $b = 2y$, $c = (2z + 1)$ for some $x, y, z \in Z$ So: $$ a^2 + b^2 = 3c^2 \\ \implies (2x + 1) ^2 + (2y)^2 = 3 \cdot (2z + 1)^2 \\ \implies 4x^2 + 4x + 1 + 4y^2 = 12z^2 + 12z + 3 \\ \implies 4\cdot(x^2 + x + y^2) = 2 \cdot (6z^2 + 6z + 1) \\ \implies 2\cdot(x^2 + x + y^2) = 6z^2 + 6z + 1 \\ \implies even\ number = odd\ number $$ Contradiction. Case 4) $a$ is odd, $b$ is even, $c$ is even The square of an odd number is odd (so, $a^2$ is odd). The square of an even number is even (so, $b^2$ and $3c^2$ are even). Fact: the sum of an even number and an odd number is odd. However, the equality $a^2 + b^2 = 3c^2$ leads to the conclusion that $odd\ number + even\ number = even\ number$. Contradiction. Case 5) $a$ is even, $b$ is odd, $c$ is odd Symmetric to Case 3 (because $a$ and $b$ are mutually exchangeable), which shows the contradiction. Case 6) $a$ is even, $b$ is odd, $c$ is even Symmetric to Case 4, which shows the contradiction. Case 7) $a$ is even, $b$ is even, $c$ is odd Thus: $a = 2x$, $b = 2y$, $c = (2z + 1)$ for some $x, y, z \in Z$ So: $$ a^2 + b^2 = 3c^2 \\ \implies 4x^2 + 4y^2 = 12z^2 + 12z + 3 \\ \implies even\ number = odd\ number $$ Contradiction. Case 8) $a$ is even, $b$ is even, $c$ is even Thus: $a = 2x$, $b = 2y$, $c = 2z$ for some $x, y, z \in Z$ So: $$ a^2 + b^2 = 3c^2 \\ \implies 4x^2 + 4y^2 = 3 \cdot 4z^2 \\ \implies x^2 + y^2 = 3z^2 $$ Now, we are faced with a similar instance of the problem; however, the size of the problem is strictly smaller ($x = {a \over 2}$, $y = {b \over 2}$, $z = {c \over 2}$). At first glance, it seems that we have to consider again the eight possible parities of $x, y, z$. However, if we analyze all dependencies between the cases of the problem, we will notice that the only possible outcomes are either a contradiction or the trivial case (repeated halving must terminate: eventually the parities fall into one of Cases 1–7, unless $a=b=c=0$). We have shown the contradiction in all cases, hence we have subsequently proved the original statement. $\blacksquare$ So I would like to know: is there any problem with the reasoning in the 8th case? REPLY [2 votes]: There is truth in your method for case 8.
It is called infinite descent and is equivalent to induction (i.e., alternatively you might start by assuming that $(a,b,c)$ is the smallest non-trivial solution; then $(a/2,b/2,c/2)$ would be a strictly smaller non-trivial solution, a contradiction).<|endoftext|> TITLE: If $f$ is continuous and $f(f(x))+f(x)+x=0$ for all $x$ in $\mathbb R^2$, then $f$ is linear? QUESTION [6 upvotes]: Let $f\in C^0(\mathbb R^2,\mathbb R^2)$, with $f(f(x))+f(x)+x=0$ for all $x$ in $\mathbb R^2$. Is $f$ linear (i.e., $\forall (x,y) \in (\mathbb R^2)^2$, $f(x+y)=f(x)+f(y)$)? This problem can be translated to a problem in plane geometry. REPLY [5 votes]: The computational part of Eric Wofsey's answer can be simplified a little bit; in particular it can be done entirely by hand as I show below. Following this answer, let us start with the three points $p=(0,1),q=(1,0),r=(-1,-1)$. Let $b$ be a $C^1$ bijection $[0,1]\to [0,1]$, to be defined later, and let $$ \begin{array}{lclcl} A(t) &=& (1-t)p+tq &=& (t,1-t) \\ B(t) &=& (1-b(t))q+b(t)r &=& (1-2b(t),-b(t)) \\ C(t) &=& -(A(t)+B(t)) &=& (2b(t)-(t+1),t+b(t)-1). \\ \end{array} $$ Then there is a unique bijection $f$, defined on the triangle $pqr$, such that $f$ cyclically permutes every triple $(A(t),B(t),C(t))$ for $t\in [0,1]$. As shown in Eric's answer, the only problem is then to find a $b$ such that $f$ is non-linear and such that the angle $(Or,OC(t))$ varies monotonically as $t$ goes from $0$ to $1$. It is easy to see that $f$ is linear iff $b$ is the identity (indeed, $f$ cyclically permutes $p,q$ and $r$, and this suffices to uniquely define $f$ if $f$ is linear). A little computation shows that the straight lines $(pr)$ and $(OC(t))$ intersect at $D(t)=(1-d(t))r+d(t)p$ where $$ d(t)=\frac{2t-b(t)}{3t-3b(t)+1} $$ So it suffices to find a $b$ such that the derivative of $d$ is never zero on $(0,1)$. Another little computation shows that the numerator of $d'$ is (here $u$ and $v$ denote the numerator and denominator of $d$ respectively) $$ N(t)=u'(t)v(t)-u(t)v'(t)=(3t-1)b'(t)+2-3b(t) $$ If we try $b(t)=t^2$, we are lucky: $N(t)=3t^2-2t+2=3(t-\frac{1}{3})^2+\frac{5}{3}$ is never zero.<|endoftext|> TITLE: $\ker S\subset\ker T\Leftrightarrow\exists R, T=RS$ QUESTION [5 upvotes]: Let $V,W$ be two vector spaces over $F=\mathbb{R}$ or $\mathbb{C}$, such that $W$ is finite dimensional and $S,T\in L(V,W)$. Show that $\ker S\subset\ker T$ if, and only if, there exists a linear operator $R:W\to W$ such that $T=RS$. The converse was easy to verify, since if $Sv = 0$, then $Tv = RSv = R(0) = 0$. I tried to prove the other assertion by constructing such $R$, considering bases for $\operatorname{range} T$ and $\operatorname{range} S$ and sending one to the other, but this is as far as I got. Could anyone give me a hint? REPLY [2 votes]: Consider a basis $\{v_1,\dots,v_r\}$ of $\ker S$, which we can extend to a basis $\{v_1,\dots,v_n\}$ of $V$. The set $B=\{S(v_{r+1}),\dots,S(v_n)\}$ is a basis for the image of $S$, as is readily verified. The map $R$ must satisfy $RS=T$, so $RS(v_i)=T(v_i)$ is the forced choice when $r+1\le i\le n$. Just complete $B$ to a basis of $W$ and define $R$ to be zero (or whatever) on the remaining elements. For $1\le i\le r$, we have $RS(v_i)=0=T(v_i)$. Thus $RS$ and $T$ coincide on a basis of $V$. Note. Finite dimensionality is not required, provided one assumes that every vector space has a basis.<|endoftext|> TITLE: Why does the mandelbrot set and its different variants follow similar patterns to epi-/hypotrochoids and circular multiplication tables?
QUESTION [5 upvotes]: So the $z^2 + c$ variant has a cardioid shape at the center. This shape is made by an epitrochoid with a ratio of the radii being one, or from the two times table when we display it in a circle (as seen in this video https://www.youtube.com/watch?v=qhbuKbxJsk8). The next variation $z^3 +c$ has a nephroid as its central bulb; this shape is made by an epitrochoid with a ratio of the radii being two (or from the 3 times table when we display it as above). $z^4 + c$ follows the pattern: its central bulb is produced by an epitrochoid with a ratio of the radii being three (or the 4 times table). So, to generalise: the central bulb has a shape made by an epitrochoid with ratio $n-1$, where $n$ is the exponent in $z^n +c$. This also holds true for the mandelbar sets (when we flip the sign on the "imaginary" component). The first fractal in the set $\bar{z}^ 2 +c$ has a central bulb of the shape made by a hypotrochoid with ratio 3. The next in the sequence $\bar{z}^ 3 +c$ has a central bulb of the shape made by a hypotrochoid with ratio 4. For the mandelbar set the central bulb is produced by a hypotrochoid of ratio $n+1$, where $n$ is the exponent in $\bar{z}^ n +c$. What causes these links? This site has some diagrams of the different fractals mentioned http://www.relativitybook.com/CoolStuff/erkfractals_powers.html. On this site it also talks about the rotational symmetry of each fractal and how it follows the same pattern (for the mandelbrot sets the rotational symmetry is $n-1$ with $n$ the exponent, and for the mandelbar sets the rotational symmetry is $n+1$). To me it seems odd that the mandelbrot set follows the same rule as epitrochoids and the mandelbar sets (the inverse of the mandelbrot sets to some extent) follow the same rule as the inverse of the epitrochoids, the hypotrochoids. REPLY [4 votes]: The components of the Mandelbrot set correspond to sets of $c$ values such that the corresponding function $f_c(z)=z^2+c$ has some specific behavior. For example, $c$ is in the main cardioid if and only if the function $f_c$ has an attractive fixed point. The disk of radius $1/4$ attached to the left of the main cardioid is the period two bulb; a point $c$ is in this bulb if and only if the function $f_c$ has an attractive orbit of period 2. The set of period 3 parameters is a bit more complicated, since it is not connected. It consists of two disk-like components near the top and bottom of the main cardioid and a small copy of the cardioid on the negative real axis. Again, a parameter value $c$ is in this period 3 region if and only if the corresponding function $f_c$ has an attractive orbit of period 3. All of these regions are outlined in the accompanying image. Note that those outlines are generated using actual parametrizations that explain exactly how those shapes arise. Again, the parameter value $c$ is in the main cardioid if and only if the corresponding function $f_c$ has an attractive fixed point. Algebraically: \begin{align} f_c(z) &= z \\ |f_c'(z)| &< 1. \end{align} The equation on top states that $z$ is a fixed point. The inequality on the bottom states that the fixed point is attractive. Now, on the boundary we should have: \begin{align} f_c(z) &= z \\ |f_c'(z)| &= 1. \end{align} That is, the fixed point is no longer strictly attractive on the boundary but just neutral.
Since any complex number of absolute value 1 can be expressed in the form $e^{it}$, this pair of equations can be written without the absolute value: \begin{align} f_c(z) &= z \\ f_c'(z) &= e^{it}, \end{align} for some $t$. Taking into account the fact that $f_c(z) = z^2+c$, we get \begin{align} z^2+c &= z \\ 2z &= e^{it}. \end{align} This system of equations can be solved for $z$ and $c$ in terms of $t$. It's not even particularly hard. The second equation yields $z=e^{it}/2$. This can be plugged into the first equation to get $$c = e^{it}/2 - e^{2it}/4.$$ This happens to be the parametric representation of a cardioid and is exactly the formula used to generate the image above. To obtain parametrizations of higher order bulbs, we can do similar things with (for example) \begin{align} f_c(f_c(z)) &= z \\ (f_c\circ f_c)'(z) &= e^{it} \end{align} or \begin{align} \left(c+z^2\right)^2+c &= z \\ 4 z\left(c+z^2\right) &= e^{i t}. \end{align} This system can be solved for $c$ to obtain $c=-1+e^{it}/4$, which describes the period two disk attached to the left of the main cardioid. Similar things can be done for higher order bifurcation loci. For $f_c(z)=z^3+c$, for example, we obtain for the main body: \begin{align} z^3 + c &= z \\ 3z^2 &= e^{i t}. \end{align} This can be solved for $c$ to yield $$c = \frac{e^{{3 i t}/{2}}-3 e^{{i t}/{2}}}{3\sqrt{3}}.$$ I used that parametrization to generate the corresponding image.<|endoftext|> TITLE: Proof: $\lim\limits_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=e^{-1}$ QUESTION [5 upvotes]: Prove that $\displaystyle\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=e^{-1}$. Here's my solution: Stirling's formula tells us $$\lim_{n\to\infty}\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$$which implies $$\lim_{n\to\infty}\sqrt[n]{\frac{n!}{n^ne^{-n}\sqrt{2\pi n}}}=1$$then simplifying the left side we have $$\lim_{n\to\infty}\sqrt[n]{\frac{n!}{n^n}}\lim_{n\to\infty}\sqrt[n]{e^{n}}\lim_{n\to\infty}\frac{1}{\sqrt[2n]{2\pi n}}=e\lim_{n\to\infty}\frac{\sqrt[n]{n!}}{n}=1$$ since $\lim_{n\to\infty}\sqrt[2n]{2\pi n}=1$. Divide both sides by $e$ and we're done. Is this correct? This is a problem from Problems in Real Analysis by Radulescu and Andreescu. The book gives two other proofs. Thanks! REPLY [4 votes]: Let $$A_n=\frac{\sqrt[n]{n!}}n.$$ Then $$\log(A_n)=\frac 1n \log(n!)-\log(n)=\frac 1n \sum_{k=1}^n\log k-\log(n)=\sum_{k=1}^n\frac1n\log(\frac{k}{n})\to\int_0^1\log xdx=-1$$ and hence $$ \lim_{n\to\infty} A_n=e^{-1}. $$<|endoftext|> TITLE: problem on convergence of series $\sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a$ QUESTION [5 upvotes]: Find the set of all positive values of $a$ for which the series $$ \sum_{n=1}^\infty\left(\frac1n-\tan^{-1} \frac1n\right)^a $$ converges. How does convergence depend on the value of $a$, i.e., on the power of the term? After expanding the arctan term I get a sum of terms of the form $\left[\frac{1}{n^3}\left(\frac13-\frac{1}{5n^2}+\cdots\right)\right]^a$. Now how does it depend on $a$? REPLY [4 votes]: Hint.
One may use a Taylor series expansion: as $n \to \infty$, $$ \arctan \frac1n=\frac1n-\frac{1}{3n^3}+O\left(\frac1{n^5}\right) $$ giving, as $n \to \infty$, $$ \left(\frac1n-\arctan \frac1n\right)^a=\left(\frac{1}{3n^3}+O\left(\frac1{n^5}\right)\right)^a\asymp\frac1{n^{3a}} $$ I hope you can take it from here (compare with a $p$-series).<|endoftext|> TITLE: Derivative of arcsin QUESTION [7 upvotes]: In my assignment I need to analyze the function $f(x)=\arcsin \frac{1-x^2}{1+x^2}$, and so I need to compute the first derivative; my result is: $-\dfrac{4x}{\left(x^2+1\right)^2\sqrt{1-\frac{\left(1-x^2\right)^2}{\left(x^2+1\right)^2}}}$ But in the solution of this assignment it says $f'(x)=-\frac{2x}{|x|(1+x^2)}$ I don't understand how they get this. I checked my answer on an online calculator and it is the same. REPLY [2 votes]: Use that $(1+x^2)^2\sqrt{1-\frac{(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\sqrt{(1+x^2)^2-\frac{(1+x^2)^2(1-x^2)^2}{(1+x^2)^2}}=(1+x^2)\sqrt{((1+x^2)+(1-x^2))((1+x^2)-(1-x^2))}=(1+x^2)\sqrt{2\cdot2x^2}$<|endoftext|> TITLE: Compute the following integral: $I = \int_1^\infty \log^2 \left(1-\frac 1 x\right) \, dx$ QUESTION [9 upvotes]: Compute $$I = \int_1^\infty \log^2 \left(1-\frac 1 x\right) \, dx$$ I made the substitution: $$t=\frac 1 x$$ It follows: $$I=\int_0^1 \frac{\log^2(1-t)}{t^2} \, dt$$ My next step would be to compute the derivative of the following integral with parameter $y$, w.r.t. $y$: $$F(y)=\int_0^1 \frac{\log^2(y-t)}{t^2} \, dt$$ Or something like this. I think it would be a nice solution to use this kind of approach. But I am getting stuck after computing the derivative. REPLY [10 votes]: I thought it might be instructive to present an approach that relies on integration by parts and recognition of the value of the integral, $\int_0^1 \frac{\log(1-x)}{x}\,dx=-\text{Li}_2(1)=-\frac{\pi^2}{6}$. To that end, we proceed. Let $I$ be given by $$I=\int_1^\infty \log^2\left(1-\frac1x\right)\,dx=\int_0^1\frac{\log^2(1-x)}{x^2}\,dx$$ Integrating by parts with $u=\log^2(1-x)$ and $v=-1/x$ reveals $$\begin{align} I&=\lim_{\epsilon\to 0}\left(\left.\left(-\frac{\log^2(1-x)}{x}\right)\right|_{0}^{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{x(1-x)}\,dx\right)\\\\ &=\lim_{\epsilon\to 0}\left(\frac{-\log^2(\epsilon)}{1-\epsilon}-2\int_0^{1-\epsilon}\frac{\log(1-x)}{1-x}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &=\lim_{\epsilon\to 0}\left(-\frac{\log^2(\epsilon)}{1-\epsilon}+2\int_0^{1-\epsilon}\frac12\frac{d\log^2(1-x)}{dx}\,dx -2\int_0^1\frac{\log(1-x)}{x}\,dx\right)\\\\ &=-2\int_0^1\frac{\log(1-x)}{x}\,dx\\\\ &=\frac{\pi^2}{3} \end{align}$$ as was to be shown!<|endoftext|> TITLE: Find all $x$ such that $x^6 = (x+1)^6$. QUESTION [5 upvotes]: Find all $x$ such that $$x^6=(x+1)^6.$$ So far, I have found the real solution $x= -\frac{1}{2}$, and the complex solution $x = -\sqrt[3]{-1}$. Are there more, and if so, what would be the most efficient way to find all the solutions to this problem? I am struggling to find the rest of the solutions. REPLY [2 votes]: We can simplify the problem by substituting $x=y-\frac{1}{2}$. $$x^6 = \left(x+1\right)^6$$ $$\left(y-\frac{1}{2}\right)^6 = \left(y+\frac{1}{2}\right)^6$$ If we expand both sides and then collect the terms, the even powers of $y$ drop out and only the odd powers remain. $$6y^5+5y^3+\frac{3}{8}y=0$$ You can factor out the root at $y=0$ (i.e. corresponding to the solution $x=-\frac{1}{2}$) and then factor the remaining polynomial into two quadratics.
$$6y\left(y^2+\frac{3}{4}\right)\left(y^2+\frac{1}{12}\right)=0$$ Think you can take it from here?<|endoftext|> TITLE: Definition of stabilizer of a set QUESTION [9 upvotes]: What is a stabilizer of a set? I know what a stabilizer of $x\in X$ with respect to a group $G$ that acts on $X$ is, specifically: $$\{g\in G:g\cdot x=x\}.$$ But when defining the stabilizer of a set $Y\subset X$ this could go two ways: $$\{g\in G:g\cdot x=x,\quad \forall x\in Y\},$$ $$\{g\in G:g\cdot Y\subset Y\}.$$ Which is it? I couldn't find the definition online. The latter allows, for example, that if we had $Y=\{a,b\}$ then $g$ with $g\cdot a =b$, $g\cdot b =a$ is a viable element of that 'stabilizer'. REPLY [7 votes]: If a group $G$ acts on a set $\Omega$, we may extend this to an action of $G$ on the set of all subsets of $\Omega$ (its power set). This is done by declaring for $S \subseteq \Omega$ that $g \cdot S = \{g \cdot s : s \in S\} \subseteq \Omega$. In this case, the stabilizer of a subset $S$ consists of the group elements that fix $S$ as a subset, not necessarily fixing each $s \in S$. In the latter case, when $g \cdot s = s$ for all $s \in S$, we say that $g$ fixes $S$ pointwise (I guess we could say "stabilizes pointwise" but it's much less common in my experience). In general, stabilizers of subsets may permute elements within the subset. If I hear something about the stabilizer of a subset, I automatically think "not necessarily pointwise" unless it is explicitly mentioned; I don't think I'm in the minority. But if I'm doing the writing, I'll always explicitly mention whether things are fixed pointwise or not, because the extra clarification never hurts.<|endoftext|> TITLE: Combinatorics - Show that $2^n = 1987 \dots $ QUESTION [5 upvotes]: Attempt: The sequence $2^k$ $(k = 1, 2, . . .)$ is infinite, while the set of residue classes modulo $1000$ is finite, so there are two different integers $n < m$ such that $2^n \equiv 2^m \pmod {1000}$. REPLY [4 votes]: This is certainly not the intended solution if this was originally a contest problem, but here's one argument. Let $\{x\}$ denote the fractional part of $x$. The leading $4$ digits of $2^n$ are $1987$ if \begin{align*} \log_{10^4} 1987 \leq \{n \log_{10^4} 2\} < \log_{10^4} 1988. \end{align*} The map $x \to x + \log_{10^4} 2$ is uniquely ergodic on $\mathbb{R}/\mathbb{Z}$, so such an $n$ exists; in fact, the set of $n$ satisfying this condition has density $\log_{10^4}(1988/1987)$.<|endoftext|> TITLE: How can I solve $u_{xt} + uu_{xx} + \frac{1}{2}u_x^2 = 0$ with the method of characteristics. QUESTION [9 upvotes]: I am trying to solve the following PDE: $u_{xt} + uu_{xx} = -\frac{1}{2}u_x^2$, with initial condition: $u(x,0) = u_0(x) \in C^{\infty}$ using the method of characteristics. I am a beginner with the method of characteristics and PDEs in general. Here is what I have so far. Define $\gamma(x,t)$ as the characteristic curves. $\frac{\partial}{\partial t} u_x(\gamma(x,t),t) = u_{xt} + u_{xx}\gamma_t(x,t) = - \frac{1}{2}u_x^2$ Set $u_t = u_x$ $\Rightarrow \frac{\partial}{\partial t} u_x(\gamma(x,t),t)= (u_t)_x + u_{xx}\gamma_t(x,t)$ $ = u_{xx} + u_{xx}\gamma_t = - \frac{1}{2}u_x^2$ From this I get $\gamma_t = -\frac{1}{2}\frac{u_x^2}{u_{xx}} - 1$ However, I am not sure this is the right approach and do not fully understand how to use the method of characteristics when the solution $u(x,t)$ is constant on the characteristic curves. Any help is much appreciated.
Edit: I made some progress by using $v=u_x$ and getting $\frac{dv}{dt} = \frac{-1}{2} v^2$ and $\frac{\partial x}{\partial{t}} = 1$. Then separating the first ODE, I get $\frac{2}{v} = t + c$. However, I am not sure if my solution after integrating with respect to $x$ and using the initial condition is correct. I end up with $u(x,t) = \frac{2}{t+c}x + c_1$, $u(x,0) = \frac{2}{c}x + c_1$. REPLY [2 votes]: Modifying the problem. Rather than consider the PDE $u_{xt}+u\ u_{xx}+\frac{1}{2}u_{x}^2=0$ with initial condition $u(x,0)=u_0(x)$ as asked above, I will consider the following variant. $$ \text{Solve }u_{xy}+u\ u_{xx} + u_x^2=0\text{ subject to }u(x,0)=f(x).\qquad(\star) $$ There are three differences between this question and that which was asked originally. The coefficient of $u_x^2$ has changed from $\frac{1}{2}$ to $1$. The variable $t$ has been renamed to $y$. The initial function $u_0(x)$ has been renamed to $f(x)$. Only (1) represents a significant modification of the problem. It makes the solution more tractable and enables it to be found using an elementary application of the method of characteristics. For these reasons, it is conceivable that this was the intended question. Note: I will not delve into regularity of the solutions in this answer. Reduction to a first order quasilinear PDE. Write the equation as $$ \frac{\partial}{\partial x}\left(u_y+u\ u_x\right)=0. $$ Thus $(\star)$ is equivalent to $$ u_y+u\ u_x=g(y),\qquad u(x,0)=f(x),\qquad (\star\star) $$ where $g(y)$ is an arbitrary function of $y$ (with sufficient regularity). Method of characteristics. Perhaps the simplest formulation of the method of characteristics is for quasilinear first order PDEs. These are PDEs of the form $$a(x,y,u)u_x+b(x,y,u)u_y=c(x,y,u).$$ To solve this equation, one regards the solution as a surface $z=u(x,y)$ in $xyz$-space. Let $s$ parametrize the initial curve $\bigl(s,0,f(s)\bigr)$ and let $t$ be a second parameter, which can be thought of as the distance flowed along a characteristic curve emanating from $\bigl(s,0,f(s)\bigr)$. The characteristic equations are then $$ \frac{dx}{dt}=a(x,y,z),\quad \frac{dy}{dt}=b(x,y,z),\quad \frac{dz}{dt}=c(x,y,z). $$ Returning to our equation $(\star\star)$, this reduces to $a(x,y,u)=u$ and $b(x,y,u)=1$ and $c(x,y,u)=g(y)$. Thus $$ \frac{dx}{dt}=z,\quad \frac{dy}{dt}=1,\quad \frac{dz}{dt}=g(y) $$ with initial conditions $x(0)=s$ and $y(0)=0$ and $z(0)=f(s)$. The solution to this system is $$ y=t,\quad z=f(s)+h(t),\quad x=s+f(s)\,t+H(t), $$ where $h(t)$ is the antiderivative of $g(t)$ satisfying $h(0)=0$, and $H(t)$ is the antiderivative of $h(t)$ satisfying $H(0)=0$. Since $g$ was arbitrary, so is $h$ given $h(0)=0$. The solution. Now we eliminate all occurrences of $t$ by replacing them with $y$. Then, from $z=f(s)+h(y)$ we get $f(s)=z-h(y)$, hence $s=x-(z-h(y))\,y-H(y)$. Finally, replace $z$ with $u$ to obtain the implicit equation $$ \boxed{u=f\bigl(x-(u-h(y))\,y-H(y)\bigr)+h(y)}, $$ where $h(y)$ is any sufficiently regular function satisfying $h(0)=0$ and $H$ is its antiderivative with $H(0)=0$. (One checks directly that this satisfies $u_y+u\,u_x=h'(y)$, whose $x$-derivative recovers $(\star)$.) This is an implicit equation for the general solution of $(\star)$. TL;DR. Change the $\frac{1}{2}$ in the original question to $1$ to obtain a PDE solvable by the method of characteristics.<|endoftext|> TITLE: Do there Exist Proper Classes that aren't "Too Big" QUESTION [27 upvotes]: Some proper classes are "too big" to be a set in the sense that they have a subclass that can be put in bijection with $\alpha$ for every cardinal $\alpha$. It is implied in this post that every proper class is "too big" to be a set in this sense; however, I have been unable to prove it.
It's true if every two proper classes are in bijection, but it's consistent with ZFC for there to be a pair of non-bijective classes. So, is the following true in ZFC: for all proper classes $C$ and all $\alpha\in\mathbf{Card}$, $\exists S\subset C$ such that $|S|=\alpha$? If not, is there something reasonably similar that preserves the intuition about classes that are "too big to be sets"? REPLY [14 votes]: Here is an alternative proof to that of Eric. Since $C$ is a proper class, there is no $\alpha$ such that $C\subseteq V_\alpha$. Consider the class $\{C\cap V_\alpha\mid\alpha\in\mathrm{Ord}\}$. If there were an upper bound $\kappa$ on the cardinalities occurring in this class, then the $\subseteq$-increasing chain $C\cap V_\alpha$ could take at most $\kappa^+$ different values, so it would stabilize, giving $C=C\cap V_\alpha$ for some $\alpha$, which would make $C$ a set. Therefore there are arbitrarily large $C\cap V_\alpha$'s, and therefore there is one larger than your given cardinal. Interestingly, without choice, it is consistent that there is a proper class which does not have any countably infinite subsets. It is still true, though, that every proper class can be mapped onto the class of ordinals.<|endoftext|> TITLE: Prove known closed form for $\int_0^\infty e^{-x}I_0\left(\frac{x}{3}\right)^3\;dx$ QUESTION [12 upvotes]: I know that the following identity is correct, but I would love to see a derivation: $$\int_0^\infty e^{-x}I_0\left(\frac{x}{3}\right)^3\;dx=\frac{\sqrt{6}}{32\pi^3}\Gamma\left(\frac{1}{24}\right)\Gamma\left(\frac{5}{24}\right)\Gamma\left(\frac{7}{24}\right)\Gamma\left(\frac{11}{24}\right)\tag{1}$$ where $I_0(x)$ is a modified Bessel function of the first kind. The form of the RHS makes me assume that the elliptic integral singular value $K(k_6)$ is somehow involved but I see no obvious way of transforming the LHS into an elliptic integral. My request is to see a way of evaluating this integral directly to show that it equals the RHS. Background: This integral arises in the theory of the recurrence of random walks on a cubic lattice. Jonathan Novak elegantly shows here using generating functions that the recurrence probability of a random walk on $\mathbb{Z}^d$ is $p(d)=1-\frac{1}{u(d)}$ where: $$u(d)=\int_0^\infty e^{-x}I_0\left(\frac{x}{d}\right)^d\;dx\tag{2}$$ The MathWorld page lists several ways of writing $u(3)$, including $(1)$, but in none of the references could I find a specific derivation of $(1)$. Most relevant discussions I found (e.g. here and here where elliptic integrals do appear) work from an alternative way of writing $u(3)$, namely: $$u(3)=\frac{3}{(2\pi)^3}\int_{-\pi}^\pi\int_{-\pi}^\pi\int_{-\pi}^\pi\frac{dx\;dy\;dz}{3-\cos{x}-\cos{y}-\cos{z}}\tag{3}$$ rather than a representation like $(2)$. I believe $(3)$ comes from an alternative way of solving the random walk problem (e.g. see here) which I am not very familiar with. Accordingly, I am asking for a derivation of $(1)$ that works directly from the Bessel function integral, rather than considering other ways of calculating $u(3)$ (deriving the RHS of $(1)$ from $(3)$ would not answer my question). Although I appreciate that there may be simpler proofs of $(1)$ from the theory of random walks, I am wondering if there is a nice way of evaluating the integral in $(1)$ directly. REPLY [2 votes]: I had hesitations about writing this post because I will definitely not derive the closed form in question. Yet, having glanced through all the material linked to this question, I think I can contribute more than what I have seen so far.
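Before anything else, it is reassuring to check $(1)$ numerically. A minimal sketch using Python's mpmath (the extra interior quadrature points are only there to help with the slowly decaying tail of the integrand):

```python
from mpmath import mp, mpf, quad, besseli, gamma, exp, sqrt, pi

mp.dps = 30  # working precision (decimal digits)

# LHS of (1): the Bessel moment
lhs = quad(lambda x: exp(-x) * besseli(0, x / 3) ** 3, [0, 10, 100, mp.inf])

# RHS of (1): the Gamma-product closed form
rhs = sqrt(6) / (32 * pi ** 3) * gamma(mpf(1) / 24) * gamma(mpf(5) / 24) \
      * gamma(mpf(7) / 24) * gamma(mpf(11) / 24)

print(lhs, rhs)  # both should print 1.516386... (the value of u(3))
```

Both values agree to many digits, so the identity is at least numerically sound.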
We start from the formula (3); to reduce clutter I will drop the overall constant factor $\frac{3}{\pi^3}$ relating $u(3)$ to the integral over $[0,\pi]^3$ (the integrand is symmetric), since the route from (1) to (3) is fairly simple and has been presented in the comments already. \begin{eqnarray} u(3) &\propto& \int\limits_{[0,\pi]^3} \frac{1}{3-\cos(t)-\cos(u)-\cos(v)} dt du dv\\ &=&-\pi \int\limits_{[0,\pi]^2} \frac{1}{\sqrt{-4+\cos(t)+\cos(u)}\sqrt{-2+\cos(t)+\cos(u)}} dt du\\ &\underbrace{=}_{v=\tan(u/2)}& 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{(-3-5 v^2+(1+v^2) \cos(t))(-1-3 v^2 +(1+v^2) \cos(t))}}dt dv\\ &=& 2\pi \int\limits_{[0,\pi] \times [0,\infty)} \frac{1}{\sqrt{5-6 \cos(t)+\cos(t)^2}}\cdot \frac{1}{\sqrt{(1+v^2)\left(1+v^2 \frac{(\cos(t)-3)^2}{(\cos(t)-5)(\cos(t)-1)}\right)}} dt dv\\ &=& 2\pi \int\limits_{[0,\pi]} \frac{K(-\frac{4}{5-6 \cos(t)+\cos(t)^2})}{\sqrt{5-6 \cos(t)+\cos(t)^2}} dt\\ &\underbrace{=}_{v=\tan(t/2)}& 2\pi \int_{[0,\infty)} \frac{K(-\frac{(1+v^2)^2}{v^2(2+3 v^2)})}{v\sqrt{2+3 v^2}} dv\\ &\underbrace{=}_{z=-\frac{(1+v^2)^2}{v^2(2+3 v^2)}}& \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} K(z) dz\\ &=& \frac{\pi}{2} \int\limits_{(-\infty,-1/3)} \sqrt{\frac{-1+3 z-3\sqrt{z(-1+z)}}{(1+3z)(-1+z)z}} \frac{1}{\sqrt{1-z}}K(\frac{z}{z-1}) dz\\ &\underbrace{=}_{y=z/(z-1)}&\frac{\imath \pi}{2} \int\limits_{(1/4,1)} \frac{K(y)}{\sqrt{y(1-3 \sqrt{y}+2 y)}}dy\\ &\underbrace{=}_{y\leftarrow \sqrt{y}}& \imath \pi \int\limits_{(1/2,1)} \frac{K(y^2)}{\sqrt{(1-y)(1-2 y)}} dy\\ &\underbrace{=}_{y\leftarrow 2y-1}& \frac{\pi}{\sqrt{2}} \int\limits_{(0,1)} \frac{K(\frac{(1+y)^2}{4})}{\sqrt{(1-y)y}} dy\\ &\underbrace{=}_{y\leftarrow \sqrt{y}}&\pi \sqrt{2} \int\limits_{(0,1)}\frac{K(\frac{(1+y^2)^2}{4})}{\sqrt{1-y^2}} dy\\ &\underbrace{=}_{y=\sin(\phi)}& \pi \sqrt{2} \int\limits_{(0,\pi/2)} K\left( \frac{(1+\sin(\phi)^2)^2}{4}\right) d\phi\\ &\underbrace{=}_{\phi\leftarrow 2\phi}& \frac{\pi \sqrt{2}}{4} \int\limits_{[-\pi,\pi]} K\left( \frac{19}{32} - \frac{3}{8} \cos(\phi)+\frac{1}{32} \cos(2 \phi)\right) d\phi\\ &\underbrace{=}_{z=\exp(\imath \phi)}& \frac{\pi \sqrt{2}}{4} \oint\limits_{|z|=1}K\left(\frac{19}{32} - \frac{3}{8}\frac{z+1/z}{2}+\frac{1}{32} \frac{z^2+1/z^2}{2} \right) \frac{dz}{\imath z} \end{eqnarray} I guess all the steps are pretty clear. The last expression lends itself to Cauchy residue theorem computations. However, for the time being my knowledge of the singularities of the elliptic integral is not sufficient to be able to complete this computation.<|endoftext|> TITLE: What is the most general form of the Navier-Stokes equation? QUESTION [5 upvotes]: From a mathematical standpoint, what is the most general form of the Navier-Stokes tensor equation? I'm looking at this from an abstract perspective, so the equations need not be limited to the standard domain of $\mathbb{R}\times\mathbb{R}^3$. I've also seen references to a form for compressible flow as well as one for incompressible flow. Is one more general than the other, or are they separate entities? If there is a single over-arching equation that describes all possible special cases of the Navier-Stokes equation, what is it? REPLY [6 votes]: First of all, some assumptions: the first is that the fluid is treated as a continuum, that is, not as made of discrete particles but rather as a continuous substance. Another assumption: all the fields of interest like pressure, velocity, density, temperature and so on are differentiable, weakly at least. The equations are derived from the basic principles of conservation of mass, momentum, and energy.
For this purpose, it is sometimes necessary to consider a finite arbitrary volume, called a control volume, over which these principles can be applied. We start with the so-called material derivative. Changes in properties of a moving fluid can be measured in two different ways. One can measure a given property by either carrying out the measurement on a fixed point in space as particles of the fluid pass by, or by following a parcel of fluid along its streamline. The derivative of a field with respect to a fixed position in space is called the Eulerian derivative while the derivative following a moving parcel is called the convective or material derivative. It's defined as $$\frac{D}{Dt} := \frac{\partial}{\partial t} + v\cdot \nabla$$ where $v$ is the velocity of the fluid. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (i.e. the derivative on a fixed reference frame, representing changes at a point with respect to time) whereas the second term represents changes of a quantity with respect to position. This "special" derivative is in fact the ordinary derivative of a function of many variables along a path following the fluid motion; it may be derived through application of the chain rule. For example, the measurement of changes in wind velocity in the atmosphere can be obtained with the help of an anemometer in a weather station or by mounting it on a weather balloon. The anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the fluid. Now we need some conservation laws, in particular for mass, momentum and energy. This is done via the Reynolds transport theorem, an integral relation stating that the sum of the changes of some extensive property defined over a control volume $\Omega$ must be equal to what is lost (or gained) through the boundaries of the volume plus what is created/consumed by sources and sinks inside the control volume. This is expressed by the following integral equation: $$\frac{d}{dt}\int_{\Omega} L\ \text{d}V = -\int_{\partial\Omega} Lv\cdot n\ \text{d}A - \int_{\Omega} Q\ \text{d}V$$ where $L$ is the density (per unit volume) of the extensive property in question, as above, and $Q$ represents its sources and sinks in the fluid. A straightforward application of the divergence theorem and the Leibniz rule leads us to this conclusion: $$\frac{\partial L}{\partial t} + \nabla\cdot (Lv) + Q = 0$$ From this valuable relation (a very generic continuity equation), three important concepts may be concisely written: conservation of mass, conservation of momentum, and conservation of energy. Validity is retained if $L$ is a vector, in which case the vector-vector product in the second term will be a dyad. Conservation of Momentum The most elemental form of the Navier–Stokes equations is obtained when the conservation relation is applied to momentum. Writing momentum as $\rho v$ gives: $$\frac{\partial (\rho v)}{\partial t} + \nabla\cdot (\rho v v) + Q = 0$$ where $v v$ is what we called before a dyad: a special case of tensor product, which results in a second rank tensor; the divergence of a second rank tensor is again a vector (also called: a first rank tensor).
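(Side remark: the regrouping carried out below, $\partial_t(\rho v)+\nabla\cdot(\rho v v)=v\left[\partial_t\rho+\nabla\cdot(\rho v)\right]+\rho\left[\partial_t v+(v\cdot\nabla)v\right]$, is pure product rule, and can be checked componentwise with a computer algebra system. A minimal sympy sketch, with $\rho$ and $v$ arbitrary symbolic fields in Cartesian coordinates:)

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (x, y, z)
rho = sp.Function('rho')(t, x, y, z)
v = [sp.Function(f'v{i}')(t, x, y, z) for i in range(3)]

# i-th component of  d/dt(rho v) + div(rho v (x) v)
lhs = [sp.diff(rho * v[i], t)
       + sum(sp.diff(rho * v[i] * v[j], X[j]) for j in range(3))
       for i in range(3)]

# i-th component of  v [d rho/dt + div(rho v)] + rho [dv/dt + (v.grad)v]
continuity = sp.diff(rho, t) + sum(sp.diff(rho * v[j], X[j]) for j in range(3))
rhs = [v[i] * continuity
       + rho * (sp.diff(v[i], t) + sum(v[j] * sp.diff(v[i], X[j]) for j in range(3)))
       for i in range(3)]

print([sp.simplify(a - b) for a, b in zip(lhs, rhs)])  # [0, 0, 0]
```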
Noting that a body force (notated $b$) is a source or sink of momentum (per volume) and expanding the derivatives completely: $$v\frac{\partial\rho}{\partial t} + \rho \frac{\partial v}{\partial t} + v\,(v \cdot \nabla\rho) + \rho v\,(\nabla\cdot v) + \rho\, (v\cdot \nabla) v = b$$ Note that the gradient of a vector is a special case of the covariant derivative; the operation results in second rank tensors, and except in Cartesian coordinates it's important to understand that this isn't simply an element-by-element gradient. Rearranging and recognizing that $$v \cdot \nabla\rho + \rho\, \nabla\cdot v = \nabla\cdot(\rho v),$$ we can rewrite the general formulation above as $$v\left(\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho v)\right) + \rho\left(\frac{\partial v}{\partial t} + v\cdot\nabla v\right) = b$$ The leftmost expression enclosed in parentheses is, by mass continuity (shown in a moment), equal to zero. Noting that what remains on the left side of the equation is the convective derivative: $$\rho\left(\frac{\partial v}{\partial t} + v\cdot\nabla v\right) = b ~~~~~ \to ~~~~~ \rho\frac{Dv}{Dt} = b$$ Conservation of mass Mass may be considered also. Taking $Q = 0$ (no sources or sinks of mass) and taking for $L$ the mass density $\rho$ (per unit volume): $$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho v) = 0$$ This equation is called the mass continuity equation, or simply "the" continuity equation. This equation generally accompanies the Navier–Stokes equation. In the case of an incompressible fluid, $\rho = $ constant and so we obtain $$\nabla\cdot v = 0$$ GENERAL FORM The generic body force $b$ seen previously is made specific first by breaking it up into two new terms, one to describe forces resulting from stresses and one for "other" forces such as gravity. By examining the forces acting on a small cube in a fluid, it may be shown that $$\rho \frac{Dv}{Dt} = \nabla\cdot\sigma + f$$ where $\sigma$ is the stress tensor and $f$ accounts for the body forces present. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of any continuum that conserves mass. $\sigma$ is a rank two symmetric tensor given by its covariant components: $$\sigma_{ik} = \begin{pmatrix} \sigma_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \tau_{zz} \end{pmatrix} $$ The $\sigma$'s are the normal stresses and the $\tau$'s are the shear stresses. The tensor can be split into two terms: $$\sigma = -p\,\mathbb{1} + \mathbb{T},$$ where $\mathbb{T}$ is the deviatoric stress tensor, $\mathbb{1}$ is the identity matrix, and $$p = -\frac{1}{3}(\sigma_{xx} + \sigma_{yy} + \sigma_{zz})$$ The motivation for doing this is that pressure is typically a variable of interest, and also this simplifies application to specific fluid families later on since the rightmost tensor $\mathbb{T}$ in the equation above must be zero for a fluid at rest. Note that $\mathbb{T}$ is traceless. The Navier–Stokes equation may now be written in the most general form: $$\boxed{\rho\frac{Dv}{Dt} = -\nabla p + \nabla\cdot\mathbb{T} + f}$$ This equation is still incomplete.
For completion, one must make hypotheses on the form of $\mathbb{T}$, that is, one needs a constitutive law for the stress tensor which can be obtained for specific fluid families; additionally, if the flow is assumed compressible an equation of state will be required, which will likely further require a conservation of energy formulation.<|endoftext|> TITLE: Sum of series $\sum \limits_{k=1}^{\infty}\frac{\sin^3 3^k}{3^k}$ QUESTION [21 upvotes]: Calculate the following sum: $$\sum \limits_{k=1}^{\infty}\dfrac{\sin^3 3^k}{3^k}$$ Unfortunately I have no idea how to handle this problem. Could anyone show its solution? REPLY [11 votes]: An overkill. Let $\mathfrak{M}\left(*,s\right)$ denote the Mellin transform. Using the identity $$\mathfrak{M}\left(\underset{k\geq1}{\sum}\lambda_{k}g\left(\mu_{k}x\right),\, s\right)=\underset{k\geq1}{\sum}\frac{\lambda_{k}}{\mu_{k}^{s}}\mathfrak{M}\left(g\left(x\right),s\right) $$ we have $$\mathfrak{M}\left(\underset{k\geq1}{\sum}\frac{\sin^{3}\left(3^{k}x\right)}{3^{k}},\, s\right)=\underset{k\geq1}{\sum}\left(\frac{1}{3^{s+1}}\right)^{k}\mathfrak{M}\left(\sin^{3}\left(x\right),s\right) $$ and since $$\mathfrak{M}\left(\sin^{3}\left(x\right),s\right)=\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)}{4}\left(3-\frac{1}{3^{s}}\right) $$ we have for $\textrm{Re}\left(s\right)>-1 $ $$\mathfrak{M}\left(\underset{k\geq1}{\sum}\frac{\sin^{3}\left(3^{k}x\right)}{3^{k}},\, s\right)=\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)}{4}\left(3-\frac{1}{3^{s}}\right)\frac{1}{3^{s+1}-1}=\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)3^{-s}}{4} $$ and so inverting we get $$\underset{k\geq1}{\sum}\frac{\sin^{3}\left(3^{k}x\right)}{3^{k}}=\frac{1}{2\pi i}\int_{1/2-i\infty}^{1/2+i\infty}\frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)3^{-s}}{4}x^{-s}ds $$ now taking $x=1$ and shifting the integration line to the left, the residue theorem tells us we have to evaluate the residues at the poles $s=-(2k+1)$, $k\geq 0$ (the poles of $\Gamma$ at even non-positive integers are cancelled by the zeros of $\sin\left(\frac{\pi}{2}s\right)$): $$\textrm{Res}_{s=-(2k+1)}\, \frac{\Gamma\left(s\right)\sin\left(\frac{\pi}{2}s\right)3^{-s}}{4}=\frac{\left(-1\right)^{k}\,3^{2k+1}}{4\left(2k+1\right)!},$$ hence $$\sum_{k\geq1}\frac{\sin^{3}\left(3^{k}\right)}{3^{k}}=\frac{1}{4}\sum_{k\geq 0}\frac{\left(-1\right)^{k}}{\left(2k+1\right)!}3^{2k+1}=\color{red}{\frac{\sin\left(3\right)}{4}}.$$<|endoftext|> TITLE: Why a random minimum spanning tree is not an uniform spanning tree? QUESTION [5 upvotes]: A spanning tree chosen randomly from among all the spanning trees with equal probability is called a uniform spanning tree. A model for generating spanning trees randomly but not uniformly is the random minimum spanning tree. In this model, the edges of the graph are assigned random weights and then the minimum spanning tree of the weighted graph is constructed [Wikipedia]. Why is a random minimum spanning tree not a uniform spanning tree? REPLY [3 votes]: A random minimum spanning tree is a spanning tree obtained via Kruskal's algorithm run on a uniformly random permutation (ordering) of the edges. Take a graph that is a square with one diagonal (4 vertices, 5 edges). There are 8 spanning trees (4 that include the diagonal, and 4 that don't), each should be picked with probability $1/8$. However, that is not so for the random minimum spanning tree. Consider the following 5 cases (each happens with equal probability):
1. The diagonal is first: the diagonal edge is certainly picked.
2. The diagonal is second: the diagonal edge is certainly picked.
3. The diagonal is third:
the first edge is irrelevant, and there are 3 equally likely cases for the second edge: on the same side of the diagonal as the first (no diagonal in the tree); on the other side and adjacent to the first (the diagonal is picked); on the other side and not adjacent (the diagonal is picked).
4. The diagonal is fourth: the first 3 edges form a spanning tree, so no diagonal.
5. The diagonal is fifth: same as above, no diagonal.
All in all, we get the following distribution; as you can see, the split is not equal: no diagonal: $\frac{2}{5} + \frac{1}{5}\cdot\frac{1}{3} = \frac{7}{15}$; diagonal: $\frac{2}{5} + \frac{1}{5}\cdot \frac{2}{3} = \frac{8}{15}$. I hope this helps $\ddot\smile$<|endoftext|> TITLE: The fractional part of $n\log(n)$ QUESTION [12 upvotes]: When I was thinking about my other question on the sequence $$p(n)=\min_a\left\{a+b,\ \left\lfloor\frac {2^a}{3^b}\right\rfloor=n\right\}$$ I found an interesting link with the sequence $$q(n)=\{n\log(n)\}=n\log(n)-[n\log(n)]$$ the fractional part of $n\log(n)$. If we draw the sequence $q$, we get this (for $n$ up to $520$, $5\,000$ and $30\,000$ respectively): We can see some gaps looking like parabolas. What is causing those gaps? Why do they look like this? In my other question, we can observe similar structures: An answer tells me that it is due to the continued fraction of $\log2/\log 3$. Could the two questions be linked? Edit. Thanks to Gottfried Helms' idea, here are some more drawings with the modulus varying. The moduli take the following values respectively: $$1,2,3,4,5,6,7,8,9,10,20,30,40,50,60,70,80,90,100.$$ A modulus of $1$ corresponds to the fractional part. REPLY [3 votes]: Not an answer, but some more illustration (out of my own curiosity): I've looked at your function mod 10 and mod 100. Nice pictures... Ok, just for fun: "fine wood". Also see how the picture can nicely be stacked vertically; here I stacked the mod-100 picture 3 times:<|endoftext|> TITLE: Looking for closed form of $\int_{0}^{\pi}{\sin^{2k-1}(x)\over 1+\cos^2(x)}\mathrm dx=2^{k-2}\pi+F(k)$ QUESTION [5 upvotes]: I need help finding the closed form for $(1)$; I managed to find part of it. $$\int_{0}^{\pi}{\sin^{2k-1}(x)\over 1+\cos^2(x)}\mathrm dx=2^{k-2}\pi+F(k)\tag1$$ for $k\ge1$. I only managed to work out the first few values: $k=1$: ${\pi\over 2}$, $k=2$: $\pi- 2$, $k=3$: $2\pi-{16\over 3}$, $k=4$: $4\pi-{176\over 15}$. REPLY [6 votes]: $$I(k)=\int_{0}^{\pi}\frac{\sin(x)^{2k-1}}{1+\cos^2(x)}\,dx = 2\int_{0}^{\pi/2}\frac{\sin(x)^{2k-1}}{1+\cos^2(x)}\,dx=2\int_{0}^{\pi/2}\frac{\cos(x)^{2k-1}}{1+\sin(x)^2}\,dx$$ leads to: $$ I(k) = 2 \int_{0}^{1}\frac{(1-t^2)^{k-1}}{1+t^2}\,dt = 2\int_{0}^{1}\frac{(2-(1+t^2))^{k-1}}{1+t^2}\,dt $$ where: $$ (2-(1+t^2))^{k-1} = \sum_{j=0}^{k-1}\binom{k-1}{j}(1+t^2)^j (-1)^j 2^{k-1-j} $$ gives a way to put $I(k+1)-2\,I(k)$ in a simple closed form. We also have $$ \sum_{k\geq 1}I(k)\,z^{2k}=\int_{0}^{\pi}\frac{z^2\sin(x)}{\left(1-z^2\sin^2(x)\right)\left(1+\cos^2(x)\right)}\,dx =\frac{z^2}{1-2z^2}\left(\frac{\pi}{2}-\frac{2z}{\sqrt{1-z^2}}\,\arctan\left(\frac{z}{\sqrt{1-z^2}}\right)\right)$$ as suggested by Sangchul Lee, so the problem boils down to computing the Taylor series of $\frac{1}{\sqrt{1-z^2}}$ or $\arcsin(z)$ (note that $\arctan\bigl(z/\sqrt{1-z^2}\bigr)=\arcsin(z)$) in a neighbourhood of the origin, which is simple via the extended binomial theorem.
One way or another we get: $$ \delta_k\stackrel{\text{def}}{=}I(k+1)-2\,I(k) = -2\int_{0}^{\pi/2}\sin(x)^{2k-1}\,dx = -\frac{4^k}{k\binom{2k}{k}}$$ and $$\begin{eqnarray*} \delta_k + 2 \delta_{k-1} + 4\delta_{k-2}+\ldots +2^{k-1} \delta_{1} &=& I(k+1)-2^k I(1)\\&=&-\sum_{j=1}^{k}\frac{4^j 2^{k-j}}{j\binom{2j}{j}}\end{eqnarray*}$$ so: $$\boxed{ \int_{0}^{\pi}\frac{\sin(x)^{2k-1}}{1+\cos(x)^2}\,dx = \color{red}{\pi 2^{k-2}-\frac{1}{2}\sum_{j=1}^{k-1}\frac{2^{j+k}}{j\binom{2j}{j}}}. }$$ An interesting consequence is that the last identity proves $$ \sum_{j\geq 1}\frac{2^j}{j\binom{2j}{j}}=\frac{\pi}{2} $$ since $\lim_{k\to +\infty}I(k)=0$, obviously. By Laplace's method, $$ \boxed{\int_{0}^{\pi}\frac{\sin(x)^{2k-1}}{1+\cos(x)^2}\,dx \approx \color{red}{\frac{2\sqrt{\pi}}{\sqrt{4k+2}}}} $$ for large values of $k$.<|endoftext|> TITLE: Can cohomologies always be defined as the quotient of the kernel by the image of a certain operation? QUESTION [8 upvotes]: For example, concerning de Rham cohomology as explained on the lower part of page 5 here, one considers the vector space of all forms on a manifold $M$, $A_{DR}(M)$ in the notation of the paper (?). The $p$-th de Rham cohomology is then defined as the quotient of the kernel and the image of the exterior derivative $d$ acting on the space of $p$ and $p-1$ forms respectively as $$ H_{DR}^p(M) = \frac{Ker(d: A_{DR}^{p}(M)\rightarrow A_{DR}^{p+1}(M))} {Im(d: A_{DR}^{p-1}(M)\rightarrow A_{DR}^{p}(M))} $$ Can cohomology always be defined as the kernel divided by the image of a certain operation? What does cohomology mean or "measure", in terms as simple and intuitive as possible, in the de Rham case but also more generally? PS: I tried to read wikipedia of course, but it was (not yet?) very enlightening for me ... REPLY [3 votes]: $\newcommand{\dd}{\partial}\newcommand{\Brak}[1]{\left\langle #1\right\rangle}$To give a more elementary answer intended to complement the existing answers, here are some multivariable calculus-type observations. Throughout, $M$ denotes a compact, smooth manifold of dimension $n$ (possibly disconnected), and differential forms are smooth. Briefly, $p$-dimensional de Rham cohomology measures "how many linearly-independent $p$-dimensional real homology classes $M$ contains". (William S. Massey's Algebraic Topology article in the Encyclopedia Britannica gives a clear, geometric introduction to homology.) If $X$ is a smooth $k$-chain and $\omega$ is a smooth $p$-form on $M$, let $$ \Brak{X, \omega} = \begin{cases} \int_{X} \omega & k = p, \\ 0 & k \neq p, \end{cases} $$ denote the "integration pairing". If $X$ is a $(p + 1)$-chain, Stokes's theorem gives $$ \Brak{\dd X, \omega} = \int_{\dd X} \omega = \int_{X} d\omega = \Brak{X, d\omega}. $$ With respect to the integration pairing, the boundary operator on chains and the exterior derivative operator on forms are formally adjoint. 1. A closed form $\omega$ (i.e., with $d\omega = 0$) satisfies $$ \Brak{\dd X, \omega} = \Brak{X, d\omega} = 0. $$ In words, the integral of $\omega$ over a $p$-chain bounding some $(p + 1)$-chain vanishes. Consequently, if $X_{1}$ and $X_{2}$ are homologous $p$-chains (i.e., whose difference is a boundary, say $X_{1} - X_{2} = \dd Y$), then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$ 2. An exact $p$-form $\omega$ (i.e., with $\omega = d\eta$ for some $(p - 1)$-form $\eta$) satisfies $$ \Brak{X, \omega} = \Brak{X, d\eta} = \Brak{\dd X, \eta}. $$ Particularly, the integral of $\omega$ over a boundaryless $p$-dimensional chain vanishes.
Consequently, if $X_{1}$ and $X_{2}$ are $p$-chains having the same boundary, then $$ \Brak{X_{1}, \omega} = \Brak{X_{2}, \omega}. $$ Via the integration pairing, a $p$-form determines a real-valued function on $p$-chains (with real coefficients). Item 1 says that a closed $p$-form determines a real-valued function on real homology classes. Item 2 says this function is unchanged upon addition of an exact $p$-form, i.e., is constant on de Rham cohomology classes.<|endoftext|> TITLE: About a refined Burnside's theorem in prime characteristic. QUESTION [6 upvotes]: The following result is due to W. Burnside: Theorem. Let $G$ be a subgroup of $\textrm{GL}_n(\mathbb{C})$. If $G$ has finite exponent, then $G$ is finite. The proof relies on the following: Lemma. Let $A\in\mathcal{M}_n(\mathbb{C})$ be such that $\textrm{tr}(A^k)=0$ for all $k\in\{1,\ldots,n\}$; then $A$ is nilpotent. It is not hard to see that the theorem still holds over fields of characteristic zero. Indeed, to establish the lemma it suffices to consider a splitting field of the characteristic polynomial of $A$. However, the theorem fails to be true over infinite fields of prime characteristic; consider the following subgroup of $\textrm{GL}_2(\mathbb{F}_p(t))$: $$G:=\left\{\begin{pmatrix}1&f\\0&1\end{pmatrix};f\in\mathbb{F}_p(t)\right\}.$$ Notice that $G$ is infinite albeit having exponent $p$. My conjecture is that the following refinement is true: Conjecture. Let $k$ be an infinite field of prime characteristic $p$ and let $G$ be a subgroup of $\textrm{GL}_n(k)$. If the exponent of $G$ is finite and coprime to $p$, then $G$ is finite. I already proved the conjecture for small $n$.<|endoftext|> TITLE: Prove that $m^*(A\cup B)=m^*(A)+m^*(B)$ whenever $\exists \alpha>0$ such that $|a-b|>\alpha$ for any $a\in A,b\in B$ QUESTION [5 upvotes]: Let $A,B$ be bounded sets in $\Bbb R$ for which $\exists \alpha>0$ such that $|a-b|>\alpha$ for any $a\in A,b\in B$. Prove that $m^*(A\cup B)=m^*(A)+m^*(B)$. By countable sub-additivity of outer measure $m^*(A\cup B)\le m^*(A)+m^*(B)$. To show the reverse inequality let $\epsilon>0$. Then there exists a sequence of open intervals $\{I_n\}$ such that $A\cup B\subset \cup I_n$ and $m^*(A\cup B)+\epsilon >\sum l(I_n)$. If I can show that for each sequence of intervals $\{I_n^{'}\}$ and $\{I_n^{"}\}$ covering $A,B$ respectively we have $\sum l(I_n)>\sum l(I_n^{'})+\sum l(I_n^{"})$ then we are done. But I am stuck. Any help? REPLY [3 votes]: Fix $\epsilon > 0$. Then there is a countable set $\{I_n \}$ of intervals such that $$\sum_n l(I_n) < m^*(A \cup B) + \epsilon/2.$$ Write each open interval $I_n$ as a finite union of open subintervals $\{ J_{i,n}\}_{i=1}^{N(n)}$ such that $l(J_{i,n}) < \alpha$ and $\sum_{i=1}^{N(n)}l(J_{i,n}) < l(I_n) + \frac{\epsilon}{2^{n+1}}$. Now, the sets $C(A) = \{J_{i,n} : J_{i,n} \cap A \neq \emptyset \}$ and $C(B) = \{J_{i,n} : J_{i,n} \cap B \neq \emptyset \}$ are disjoint open covers of $A$ and $B$, respectively. (Why?) Hence, we have $$m^*(A \cup B) + \epsilon > \sum_n l(I_n) + \epsilon/2 > \sum_n \sum_{i=1}^{N(n)} l (J_{i,n}) \geq \sum_{J' \in C(A)}l(J') + \sum_{J'' \in C(B)}l(J'') \geq m^*(A) + m^*(B).$$ As $\epsilon$ is arbitrary, we are done.<|endoftext|> TITLE: A monoid is $(M, •)$ or $(M, μ, η)$?
QUESTION [7 upvotes]: I have read two definitions of Monoid: wiki link In category theory, a monoid (or monoid object) $(M, μ, η)$ in a monoidal category $(C, ⊗, I)$ is an object $M$ together with two morphisms $μ: M ⊗ M → M$ called multiplication, $η: I → M$ called unit wiki link Suppose that $S$ is a set and $•$ is some binary operation $S × S → S$, then $S$ with $•$ is a monoid if it satisfies the following two axioms: Associativity: For all $a, b, c \in S$, the equation $(a • b) • c = a • (b • c)$ holds. Identity element: There exists an element $e \in S$ such that for every element $a \in S$, the equations $e • a = a • e = a$ hold. I know the 1st definition is from category theory's perspective, the 2nd one is from abstract algebra's. In my opinion, these two definitions define the same concept in different ways, but how can a monoid $(S, •)$ be considered as a category $(M, μ, η)$? REPLY [9 votes]: A monoid in the second sense is a monoid object in the category of sets with its cartesian monoidal structure. Specifically, take the category $\operatorname{Set}$ of sets, with the monoidal structure $A\otimes B := A \times B$ the cartesian product, and whose unit is a set with one element, $I:=\{ *\}$. Then a monoid object in this category is a set $M$ with a multiplication $\mu: M\times M \to M:(a,b)\mapsto a\cdot b$ and a unit $\eta:\{*\} \to M: *\mapsto e$. The axioms of the monoid object give you the associativity of $\mu$ and the unitality of $e$. Remember in general the triple $(M,\mu,\eta)$ refers to an object $M$ in the monoidal category, not a category itself. So for example, we could take the monoidal category $\operatorname{AbGp}$ of abelian groups, with the tensor product as its monoidal structure. Then a monoid object in $\operatorname{AbGp}$ is precisely the same thing as a ring (with unit). Similarly algebras are monoid objects in the category $\operatorname{Vect}$ of vector spaces.<|endoftext|> TITLE: Hilbert space adjoint vs Banach space adjoint? QUESTION [14 upvotes]: I have read that there are two 'options' for an adjoint when dealing with Hilbert spaces. Let $T : X \to Y$ be a bounded linear operator between the Hilbert spaces $X$ and $Y$. The Hilbert space adjoint: Define $T^* : Y \to X$ by $( T^*(y), x)_X = (y, T x)_Y$. The "usual" Banach space adjoint: Define $T^* : Y' \to X'$ by $\langle T^*(y^*), x \rangle_{X',X} = \langle y^*, T x\rangle_{Y',Y}$. Here, $(\cdot,\cdot)_X$ refers to the scalar product in $X$, whereas $\langle \cdot, \cdot \rangle_{X',X}$ refers to the duality product between $X'$ and $X$. It seems to me that these are both exactly the same, it's just that we identify the duality product with the scalar product because we are in a Hilbert space and have an inner product at our disposal. Is this correct or are the adjoints in 1. and 2. fundamentally different? REPLY [9 votes]: $\newcommand{\hdual}{\mathsf H}$Let $T^\hdual$ denote the "Hilbertian" adjoint (usually called conjugate transpose) and $T^*$ the "Dual" or "Banachian" adjoint. Let $\mathrm E_X:X \to X^*$ be the natural embedding of a Hilbert space onto its dual, namely $x \mapsto \langle x,\cdot \rangle$. Here the inner product is taken to be linear in the second component and conjugate-linear in the first. We can show that $$ T^\hdual = \mathrm E_X^{-1} T^* \mathrm E_Y^{\vphantom{-1}} $$ REPLY [2 votes]: They are not fundamentally different. But the difference exists and comes into focus when one writes down the formula $T^*T$.
This composition makes sense for the Hilbert space adjoint, but not for the Banach space adjoint, since the domain of the latter is $Y'$ and not $Y$. It's true, as Bananach said, that the Hilbert space adjoint is the composition of the Banach space adjoint with two anti-linear duality maps (so it ends up being linear).<|endoftext|> TITLE: The closure of a set is closed QUESTION [14 upvotes]: Definition: The closure of a set $A$ is $\bar A=A\cup A'$, where $A'$ is the set of all limit points of $A$. Claim: $\bar A$ is a closed set. Proof: (my attempt) If $\bar A$ is a closed set then that implies that it contains all its limit points. So suppose to the contrary that $\bar A$ is not a closed set. Then $\exists$ a limit point $p$ of $\bar A$ such that $p\not \in \bar A.$ Clearly, $p$ is not a limit point of $A$ because if it were then $p\in \bar A$. This means that $\exists$ a neighborhood $N_r(p)$ which does not contain any point of $A$. But $p$ is a limit point of $\bar A$, so $N_r(p)$ must contain an element $y\in \bar A$; since $N_r(p)$ contains no point of $A$, in fact $y\in \bar A-A$. Of course, $y$ is then a limit point of $A$. Now, $0<d(p,y)<r$, so the neighborhood $N_{r-d(p,y)}(y)$ is contained in $N_r(p)$; being a neighborhood of the limit point $y$ of $A$, it must contain a point of $A$, contradicting the choice of $N_r(p)$.<|endoftext|> TITLE: Can a non-local ring have only two prime ideals? QUESTION [13 upvotes]: Can a non-local ring have only two prime ideals? The only way this would be possible is if the ring $R$ had two distinct maximal ideals $\mathfrak{m}$ and $\mathfrak{n}$, and no other prime ideals. I suspect that such a ring exists, though I don't know how to construct it. REPLY [16 votes]: If $(A,\mathfrak p)$ and $(B,\mathfrak q)$ are local rings with only one prime ideal, then $R=A\times B$ has this property: a prime ideal must contain either $(1,0)$ or $(0,1)$ since their product is $0$, and then it is easy to see the prime must be either $A\times\mathfrak q$ or $\mathfrak p\times B$, respectively. (More generally, if $R=A\times B$ for any two rings $A$ and $B$, the primes in $R$ are exactly the sets of the form $A\times \mathfrak q$ or $\mathfrak p\times B$ where $\mathfrak q$ is a prime of $B$ or $\mathfrak p$ is a prime of $A$.) Conversely, every example has this form. Indeed, if $R$ is not local and has exactly two prime ideals, both prime ideals must be maximal. It follows that $\operatorname{Spec}(R)$ is a discrete space with two points $\mathfrak p$ and $\mathfrak q$. From the fact that the structure sheaf on $\operatorname{Spec}(R)$ is a sheaf it follows that the canonical map $R\to R_{\mathfrak p}\times R_{\mathfrak q}$ is an isomorphism (since $R_{\mathfrak p}$ is exactly the value of the structure sheaf on the open set $\{\mathfrak p\}$ and $R_{\mathfrak q}$ is exactly the value of the structure sheaf on the open set $\{\mathfrak q\}$).<|endoftext|> TITLE: Step forward, turn left, step forward, turn left ... where do you end up? QUESTION [17 upvotes]: Take $1$ step forward, turn $90$ degrees to the left, take $1$ step forward, turn $90$ degrees to the left ... and keep going, alternating a step forward and a $90$-degree turn to the left. Where do you end up walking? It's very easy to see that you end up walking on the perimeter of the same square, with side equal to $1$ step (every $4$ steps you are back at the point where you started), if you are walking on a plane. But what happens if you walk on the surface of a sphere? The answer obviously depends on how long your step is in relation to the sphere's radius $R$. If your step length is $\pi R$, for example, you end up walking back and forth over the same lune, formed by two half circumferences at a $90$-degree angle.
If your step length is $\pi R/2$, you end up walking on a triangle, formed by quarter circumferences at $90$ degrees angle. I believe one can prove the following: a) For any $x\geq 0$ and any arbitrarily small $\epsilon>0$, there is a step length $y\in(x,x+\epsilon)$ such that the process above takes you back to the starting point in a finite number of steps. b) No step length that brings you back to the starting point in a finite number of steps is a rational multiple of the length of the circumference, $2\pi R$, except for integer multiples of $\pi R/2$ (which keep you cycling over the same $1$,$2$, or $3$ points). c) Any step length that does not bring you back to the starting point in a finite number of steps eventually brings you all over the surface of the sphere, in the sense that for any point $P$ on the sphere's surface and any arbitrarily small $\epsilon>0$, you eventually end up within distance $\epsilon$ of $P$. In fact, for any point $P$ and any angle $\alpha$, you eventually end up within distance $\epsilon$ of $P$ and with an angle between $\alpha$ and $\alpha+\epsilon$ of your starting direction (measured as the angle between the respective great circles)! Can you prove any or all of the above? Note that a proof of b) would automatically yield an answer to this related question. REPLY [4 votes]: Let our walk $A_1, A_2, A_3,\ldots$ take place on a unit sphere of center $O$, and let $\theta$ be the step length (of course we move along great circles): $\angle A_1OA_2=\angle A_2OA_3=\ldots=\theta$, while each plane $A_iOA_{i+1}$ is orthogonal to plane $A_{i+1}OA_{i+2}$. Consider now circle $\Gamma$, passing through $A_1$, $A_2$ and $A_3$, let $C$ be its center and set $\phi=\angle A_1CA_2=\angle A_2CA_3$. A counterclockwise rotation of angle $\phi$ about axis $OC$ carries $A_1$ to $A_2$, $A_2$ to $A_3$ and $A_3$ to some point $B$. But $\angle A_3OB=\theta$ and plane $A_3OB$ is orthogonal to plane $A_2OA_3$. In other words: $B=A_4$, and by the same reasoning all points $A_i$ must belong to $\Gamma$. It follows that proposition c) in the question is false: the walk makes me move only along a well defined circle, depending only on my starting point and on my step length $\theta$. To compute angle $\phi$ as a function of $\theta$, we can set up a coordinate system such that $A_1=(0,0,1)$ and $A_2=(\sin\theta,0,\cos\theta)$. Point $A_3$ can be obtained by rotating $A_1$ clockwise by $\pi/2$ about line $OA_2$, yielding: $A_3=(\sin\theta\cos\theta,\sin\theta,\cos^2\theta)$. But $\angle A_1A_2A_3=\pi-\phi$, so that: $$ \cos(\pi-\phi)={(A_1-A_2)\cdot(A_3-A_2)\over ||A_1-A_2||^2}, $$ whence: $$ \phi=\pi-\arccos\left({1-\cos\theta\over2}\right). $$ The path returns to its starting point if $n\phi=2m\pi$, with $n$ and $m$ positive integers, which by the previous equation is equivalent to: $$ \cos\theta=1+2\cos\left(2{m\over n}\pi\right). $$ Given $t\in[-1,1]$, the right hand side can be as close to $t$ as one wants, if rational number ${m\over n}$ varies in the range ${1\over4}\le{m\over n}\le{1\over2}$. It follows that proposition a) is true: for any $x\ge0$ and any arbitrarily small $ϵ>0$ one can always find ${m\over n}$ such that the above expression for $\cos\theta$ takes a value comprised between $\cos x$ and $\cos(x+\epsilon)$. The corresponding value for $\theta$ will then give a step length between $x$ and $x+\epsilon$. Proposition b) could also be true, because such a value of theta is not, in general, a rational multiple of $2\pi$. 
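(As a numerical sanity check of the closure criterion: pick a rational $m/n$, compute $\theta$ from $\cos\theta=1+2\cos(2\pi m/n)$, and compose step and turn $n$ times. A rough Python/numpy sketch; the walk closes regardless of which way one consistently turns:)

```python
import numpy as np

m, n = 2, 5                       # any integers with 1/4 <= m/n <= 1/2
theta = np.arccos(1 + 2 * np.cos(2 * np.pi * m / n))  # step length on unit sphere

p = np.array([0.0, 0.0, 1.0])     # current position on the unit sphere
d = np.array([1.0, 0.0, 0.0])     # current direction (unit tangent at p)
start = p.copy()

for _ in range(n):
    # walk an arc of length theta along the great circle through p in direction d
    p, d = np.cos(theta) * p + np.sin(theta) * d, \
           np.cos(theta) * d - np.sin(theta) * p
    d = np.cross(p, d)            # then turn 90 degrees, always to the same side

print(np.linalg.norm(p - start))  # ~1e-15: the walk closes after n steps
```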
One should prove that the above equation for $\theta$ as a function of ${m\over n}$ only gives a rational multiple of $2\pi$ when $\theta=k\pi/2$. I don't know if an answer is possible: I asked that in a new question (see EDIT below). The diagram below is taken from an interactive GeoGebra worksheet, which draws the first eight steps of the path and allows one to set the step length with a slider. EDIT. Point b) of the question, as shown above, amounts to proving that the equation $$ \cos A-2\cos B=1, $$ where $A$ and $B$ are rational multiples of $2\pi$, has only solutions with $A$ a multiple of $\pi/2$. It turns out that this is indeed true, see the answer to the question I asked. After that answer was given, I found a nice paper by Conway and Jones which explicitly refers to rational sums of cosines of rational angles: their Theorem 7 can be immediately applied to our case, showing that the only solutions of the above equation have $A$ and $B$ both multiples of $\pi/2$. We can thus confirm that proposition b) is true.<|endoftext|> TITLE: Prove that if $x^6+(x^2+y)^3$ is a perfect square, then $y$ is a multiple of $x^2$ QUESTION [6 upvotes]: Let $x$ and $y$ be integers with $x \neq 0$. Prove that if $x^6+(x^2+y)^3$ is a perfect square, then $y \equiv 0 \pmod{x^2}$. We can expand the given expression as $$x^6+(x^2+y)^3 = 2x^6+3x^4y+3x^2y^2+y^3 = (2x^2+y)(x^4+x^2y+y^2)=k^2,$$ for some integer $k$. Now assume $x \neq 0$. We proceed by contradiction. Let $y = mx^2+d$, where $0 < d \leq x^2-1$ for some $m$. Then the original equation is equivalent to $$((2+m)x^2+d)(x^4+mx^4+dx^2+(m^2x^4+2dmx^2+d^2))$$ and to $$((2+m)x^2+d)((1+m+m^2)x^4+(d+2dm)x^2+d^2).$$ This seemed like a computational way of solving this, but could we still solve it like this? REPLY [6 votes]: As far as I am aware, this is how one can prove it. Let $a,b$ be integers that satisfy your condition. So $$a^6+(a^2+b)^3=t^2$$ Then, dividing by $a^6$, we have $$1+\left(1+\frac{b}{a^2} \right)^3=\left(\frac{t}{a^3} \right)^2$$ However, $1+x^3=y^2$ is an elliptic curve with rank $0$, so it only has finitely many rational points, all of which are "torsion", i.e. of finite order. As its rational points have finite order, it follows from the Nagell-Lutz theorem that all rational points are integer points. As $\left(1+\frac{b}{a^2}, \frac{t}{a^3} \right)$ is a rational point on $1+x^3=y^2$, it follows that $1+\frac{b}{a^2}$ is an integer, or $$b \equiv 0 \pmod{a^2}$$ As desired. We are done. (Also see here).<|endoftext|> TITLE: How much practice does Timmy need to pass his math subject GRE test? QUESTION [6 upvotes]: Timmy is preparing his math GRE subject test. Unfortunately Timmy sucks at math, but he has been practicing with a bunch of previous GRE tests. The GRE consists of $66$ questions; since Timmy is no coward, he answers every question with one of the options $a,b,c,d$ or $e$. Timmy knows that he needs $50$ correct answers to get into his desired college. Since Timmy really sucks at math he is going to select the answers to the questions beforehand, even though he doesn't even know what the questions are! All that Timmy knows is that the answer-code for the GRE never repeats, and Timmy has already taken $N$ previous GREs (the answer-codes to these are random). What is the minimum number $N$ such that Timmy can always find a new answer-code that is sure to reach at least $50$ correct answers?
REPLY [3 votes]: There are $4^{66-i}\binom{66}{i}$ answer keys that will end up giving Timmy $i$ correct answers: $\binom{66}{i}$ ways to choose the $i$ questions that will be answered correctly and $4^{66-i}$ ways to choose the answers for the incorrect questions. Since Timmy needs at least $50$ correct answers, the total number of answer keys on which a fixed guess scores at least $50$ is $$\sum_{i=50}^{66} 4^{66-i}\binom{66}{i}=984401002589851920172825$$ (as computed by Wolfram Alpha). Therefore, since there are a total of $5^{66}$ answer keys, Timmy will need to have seen at least $$5^{66}-984401002589851920172825=13552527156068805425092175609871681540902092800$$ answer keys.<|endoftext|> TITLE: Multiple choice exam where no two consecutive answers are the same (2) QUESTION [8 upvotes]: Continuing from this question, except I've changed the statement so that no two questions have the same answer. You are given a multiple choice exam. It has twelve questions, and each question has four possible answers labelled (a)-(d). You didn't study for the exam at all, so you might as well just guess the answer to each question. But you do have one important piece of information: the exam was designed so that no two consecutive correct answers have the same label. So if one answer is (c), the next one cannot be (c). What strategy should you adopt to maximize the probability that you get (i) at least half the questions correct, (ii) at least one question correct? I'm asking this question because I really have no intuition on how these two questions are that different from the previous (linked) question. REPLY [3 votes]: To maximize the chance of getting at least half the answers right, answer a,b,c,d,a,b,c,d,a,b,c,d. To maximize the chance of getting at least one answer right, answer a,a,a,a,a,a,a,a,a,a,a,a. As has been noted elsewhere, every strategy will yield the same expected number of correct answers (one quarter of 12, i.e. 3). If you ensure that every answer of yours equals the preceding $3$ answers as often as possible, you decrease the probability of it being right if they are right, and by the same token also the probability of it being wrong if they are wrong. Thus, you increase the probability your score is closer to the average, and in particular that it's at least $1$. Conversely, if you ensure that every answer differs from the preceding $3$ as often as possible, you decrease the probability of it being wrong if they are right, and decrease the probability of getting just an "average" result: you are more likely to get an improbably bad result, but also an improbably good one such as getting at least $6$ answers right! It's easier to reason about this type of problem looking at the case where the answers to the first and second questions are guaranteed to be different, and so are those to the third and fourth, to the fifth and sixth etc., and to ask what's the best strategy to avoid getting them all wrong, or to succeed in getting them all right. If you answer all questions randomly, the probability of getting any given pair of answers both wrong is $(1-1/4)(1-1/4)=9/16$, and the probability of getting all answers wrong is then $(9/16)^6=0.03167...$. If you give all identical answers, the probability of getting both answers of a pair wrong is $(1-1/4)(1-1/3)=1/2$, which is less than $9/16$ obtained above! The probability of not getting a single answer right is $(1/2)^6=0.01562...$, about half of the value for random answers. But of course, getting them all right is impossible, since if one is right, the other of the pair is certainly wrong.
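(Back in the original twelve-question problem, both effects are easy to confirm by simulation; here is a rough Python sketch, assuming the key is drawn uniformly from all sequences with no two consecutive labels equal:)

```python
import random

def random_key(length=12, options='abcd'):
    """Uniform over answer keys with no two consecutive labels equal."""
    key = random.choice(options)
    while len(key) < length:
        key += random.choice([c for c in options if c != key[-1]])
    return key

strategies = {'all equal': 'aaaaaaaaaaaa', 'cycling': 'abcdabcdabcd'}
trials = 200_000
for name, guess in strategies.items():
    scores = [sum(g == k for g, k in zip(guess, random_key()))
              for _ in range(trials)]
    print(name,
          'P(at least 1 right) ~', sum(s >= 1 for s in scores) / trials,
          'P(at least 6 right) ~', sum(s >= 6 for s in scores) / trials)
# expected: 'all equal' wins on P(at least 1), 'cycling' wins on P(at least 6)
```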
If you answer all questions randomly, the probability of getting any given pair both right is $(1/4)(1/4)=1/16$. If you give different answers, however, the probability becomes a slightly larger $(1/4)(1/3)=1/12$. Note that the probability of getting the second right is $1/3$ if the first is right and the two are different! It turns out that the probability of getting all answers right is tiny in both cases, but in the second case it's $(4/3)^6=5.61...$ times larger. But you could also show that with all answers different, the probability of getting all wrong increases compared to the random baseline.<|endoftext|> TITLE: Closed form for the limit of the iterated sequence $a_{n+1}=\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2}$ QUESTION [7 upvotes]: Is there a general closed form or an integral representation for the limit of the sequence: $$a_{n+1}=\frac{\sqrt{(a_n+b_n)(a_n+c_n)}}{2} \\ b_{n+1}=\frac{\sqrt{(b_n+a_n)(b_n+c_n)}}{2} \\ c_{n+1}=\frac{\sqrt{(c_n+a_n)(c_n+b_n)}}{2}$$ in terms of $a_0,b_0,c_0$? $$L(a_0,b_0,c_0)=\lim_{n \to \infty}a_n=\lim_{n \to \infty}b_n=\lim_{n \to \infty}c_n$$ For the simplest case $a_0=b_0$ we have some interesting closed forms in terms of inverse hyperbolic or trigonometric functions: $$L(1,1,\sqrt{2})=\frac{1}{\ln(1+\sqrt{2})}$$ $$L(1,1,1/\sqrt{2})=\frac{2 \sqrt{2}}{\pi}$$ $$L(1,1,2)=\frac{\sqrt{3}}{\ln(2+\sqrt{3})}$$ $$L(1,1,1/2)=\frac{3 \sqrt{3}}{2 \pi }$$ $$L(1,1,3)=\frac{\sqrt{2}}{\ln(1+\sqrt{2})}$$ $$L(1,1,1/3)=\frac{2 \sqrt{2}}{3 \arcsin \left( \frac{2 \sqrt{2}}{3} \right)}$$ $$L(1,1,\sin 1)=\frac{2 \cos 1}{\pi -2}$$ $$L(1,1,\sin 1/2)=\frac{2 \cos 1/2}{\pi -1}$$ $$L(1,1,\cosh 1)=\sinh 1$$ $$L(1,1,\cosh 2)=\frac{\sinh 2}{2}$$ These values are obtained by Wolfram Alpha and Inverse Symbolic Calculator and checked with Mathematica. This result seems familiar to me, but I'm pretty sure I don't know how to obtain a general closed form or even integral representation. This question is closely related, but the sequence is very different. Update: It seems likely that the closed form for the special case $a_0=b_0=1$ is: $$L(1,1,x)=\frac{\sinh \left(\cosh ^{-1}\left(x\right)\right)}{\cosh ^{-1}\left(x\right)}$$ However, the proof would be nice as well as the more general case. Thanks to Sangchul Lee I now see that this sequence is exactly Carlson's transformation. Change: $$a^2=A,\quad b^2=B,\quad c^2=C$$ $$A_{n+1}=\frac{1}{4} (A_n+\sqrt{A_nB_n}+\sqrt{B_nC_n}+\sqrt{C_nA_n})$$ $$B_{n+1}=\frac{1}{4} (B_n+\sqrt{A_nB_n}+\sqrt{B_nC_n}+\sqrt{C_nA_n})$$ $$C_{n+1}=\frac{1}{4} (C_n+\sqrt{A_nB_n}+\sqrt{B_nC_n}+\sqrt{C_nA_n})$$ According to Wikipedia and the references therein, we have: $$R_F(A_{n+1},B_{n+1},C_{n+1})=R_F(A_n,B_n,C_n)$$ $$R_F(A,B,C)=\frac{1}{2} \int_0^\infty \frac{dt}{\sqrt{(t+A)(t+B)(t+C)}}$$ which is exactly the 'closed form' I wanted. REPLY [2 votes]: Here is an observation: As in your link, if $b_0 = c_0$ then we can prove recursively that $b_n = c_n$ for all $n \geq 0$. Then plugging this into OP's recurrence relation, we find that the sequence $(a_n, b_n)$ satisfies the Schwab-Borchardt recurrence relation $$ a_{n+1} = \frac{a_n + b_n}{2}, \qquad b_{n+1} = \sqrt{a_{n+1}b_n}.
$$ So if we write $(a_0, b_0) = (a, b)$, then the limit is given by the Schwab-Borchardt mean $$ \lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = SB(a_0, b_0) = \begin{cases} \dfrac{\sqrt{a^2 - b^2}}{\operatorname{arccosh}(a/b)}, & a > b \\ \dfrac{\sqrt{b^2 - a^2}}{\operatorname{arccos}(a/b)}, & a < b \\ a, & a = b \end{cases} \tag{*} $$ Anyway, let me write down my failed attempt to mimic Carlson's proof of $\text{(*)}$ so that future me does not repeat this mistake. Define $I(a,b,c)$ by $$ I(a,b,c) := \lim_{R\to\infty} \int_{-R}^{R} \frac{dx}{(x+a^2)^{1/3}(x+b^2)^{1/3}(x+c^2)^{1/3}}, $$ where $z^{1/3} = \exp(\frac{1}{3}\log z)$ with the branch cut $-\pi < \arg(z) \leq \pi$. Then we can check that $I(a, b, c)$ converges. Now using the substitution $x \mapsto 4x + ab + bc + ca$, we find that $$ I(a, b, c) = I\left( \frac{\sqrt{(a+b)(a+c)}}{2}, \frac{\sqrt{(b+c)(b+a)}}{2}, \frac{\sqrt{(c+a)(c+b)}}{2} \right). $$ This tells us that $I(a_n,b_n,c_n) = \cdots = I(a_0,b_0,c_0)$. If we recall how the AGM is computed from Landen's transformation, this may possibly allow us to analyze $L$ using $I$. Well, it turns out that the function $I$ has a serious issue, which is that $I$ is identically zero! This can be checked either by using the fact $I(a,a,a) = 0$ or by interpreting $I(a,b,c)$ as a value of the Schwarz–Christoffel mapping.<|endoftext|> TITLE: Showing $A-I$ is invertible, when $A$ is a skew-symmetric matrix QUESTION [8 upvotes]: Let $A\in M_{n\times n}(\mathbb{R})$ be a skew-symmetric matrix. Show that $A-I$ is invertible and $(A-I)^{-1}(A+I)$ is an orthogonal matrix. $|A-I|=|(A-I)^T|=|-(A+I)|=(-1)^n|A+I|.$ I have no clue how to solve this problem; I appreciate any help. REPLY [3 votes]: Here is a slightly different approach: Note that $A$ has a basis of orthonormal eigenvectors over $\mathbb{C}$ (in the same manner as Hermitian matrices). Suppose $Av= \lambda v$ for a unit vector $v$, then $v^*Av = \lambda = -v^* A^* v = - \overline{\lambda} $, hence all eigenvalues are purely imaginary. In particular, they are not equal to one, hence $A-I$ is invertible. To show that $(A-I)^{-1}(A+I)$ is orthogonal, it is sufficient to show that $\|(A-I)^{-1}(A+I) v \| = 1$ for all unit eigenvectors $v$. Since $(A-I)^{-1}(A+I) v = {\lambda +1 \over \lambda -1 }v$ and $|{\lambda +1 \over \lambda -1 }| = 1$, we have the desired result.<|endoftext|> TITLE: Ask about beautiful properties of $e$ QUESTION [9 upvotes]: One of my students asked me about "some beautiful properties (or relations) of $e$". Then I listed the ones below \begin{align} & e \equiv \lim_{x \to \infty} \left(1+\frac{1}{x} \right)^x\\[10pt] & e = \sum_{k=0}^\infty \frac{1}{k!}\\[10pt] & \frac{d}{dx} (e^x) = e^x\\[10pt] & e^{ix} = \cos x + i \sin x \quad \text{(Euler)}\\[10pt] & e^{i \pi} + 1 = 0 \end{align} After this, he asked me for more relations or properties. I said I'll think and answer ... Now I want help adding some relations, properties, or visual things (like proofs without words). Please help me add something more. Thanks in advance. *** The class was Math 1 for engineering. REPLY [4 votes]: Here are some more relations which might be pleasing. From section 1.3 of Mathematical Constants by S.R.
Finch: A Wallis-like infinite product is \begin{align*} e=\frac{2}{1}\cdot\left(\frac{4}{3}\right)^{\frac{1}{2}} \cdot\left(\frac{6\cdot 8}{5\cdot 7}\right)^{\frac{1}{4}} \cdot\left(\frac{10\cdot 12\cdot 14\cdot 16}{9\cdot 11\cdot 13\cdot 15}\right)^{\frac{1}{8}}\cdots \end{align*} From Stirling's formula we derive \begin{align*} e=\lim_{n\rightarrow \infty}\frac{n}{(n!)^{\frac{1}{n}}} \end{align*} Another continued fraction representation is \begin{align*} e&=2+\frac{\left.2\right|}{\left|2\right.} +\frac{\left.3\right|}{\left|3\right.} +\frac{\left.4\right|}{\left|4\right.} +\frac{\left.5\right|}{\left|5\right.}+\cdots\\ &=2+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\cdots}}}} \end{align*} In the section "Intriguing Results" of Real Infinite Series by D.D. Bonar and M.J. Khoury we find Gem 89 (American Math. Monthly 42:2, pp. 111-112): \begin{align*} e=\frac{1}{5}\left(\frac{1^2}{0!}+\frac{2^2}{1!}+\frac{3^2}{2!}+\frac{4^2}{3!}+\cdots\right) \end{align*}<|endoftext|> TITLE: Triviality of vector bundles over Lie groups QUESTION [5 upvotes]: I was wondering whether the triviality of the tangent bundle of a Lie group is a feature shared by all other vector bundles over Lie groups. Naturally, the first example of a non-trivial vector bundle that comes to mind is the Moebius strip, which is a vector bundle over $S^1$ (a Lie group), so the answer is no. However, the tangent bundle of a Lie group is also a Lie group, and is hence orientable (unlike the Moebius strip). So, my question is: Are orientable vector bundles over Lie groups trivial? Or, actually better, does anybody know an example of a non-trivial vector bundle over a Lie group which is itself a Lie group? I thought that a way around this question was to inspect the Euler class of the given bundle, because I thought that I knew the cohomology of Lie groups. I had Borel's theorem in mind: "If $G$ is a compact connected Lie group, then the cohomology ring is an exterior algebra on generators of odd degree" I thought that this implied that even rank bundles would be necessarily trivial. Unfortunately, later I realized this doesn't say that there are no elements of even degree... Also, this would only work for compact groups, and I would like to say something about non-compact groups as well. REPLY [5 votes]: Here's an indirect way to see that most compact connected Lie groups have nontrivial complex vector bundles over them. The starting point is the observation that if $X$ is a finite CW complex then taking Chern characters gives an isomorphism $$K(X) \otimes \mathbb{Q} \cong H^{2 \bullet}(X, \mathbb{Q})$$ between the rationalized complex K-theory of $X$ and the rational even cohomology of $X$. Now, almost all compact connected Lie groups have nontrivial rational even cohomology: perhaps the simplest example is $S^1 \times S^1$, and the simplest simply connected example is $SU(2) \times SU(2) \cong \text{Spin}(4)$, which has rational cohomology $\mathbb{Q}[x_3, y_3]$ where $x_3, y_3$ are odd, and hence their product $x_3 y_3$ is even. Because the Chern character isomorphism is defined in terms of Chern classes, the conclusion is that not only does there exist a nontrivial complex vector bundle on $SU(2) \times SU(2)$, but there exists one with nontrivial Chern class $c_3$. More explicitly, the classification of complex line bundles over $S^1 \times S^1$ is given by $H^2(S^1 \times S^1, \mathbb{Z}) \cong \mathbb{Z}$, so there are countably many nontrivial complex line bundles over $S^1 \times S^1$.
They can all be described as holomorphic line bundles over elliptic curves in terms of divisors and meromorphic functions, if you like. I know less about real vector bundles; I expect most compact connected Lie groups have nontrivial real K-theory (even the simply connected ones, in which case every vector bundle is orientable), but I don't know how to prove that off the top of my head.<|endoftext|> TITLE: What is the weak*-topology on a set of probability measures? QUESTION [8 upvotes]: While trying to answer this question, I've come across the notion of the weak*-topology on a set of probability measures. I'd like some clarification about what this means. More specifically, let $(\Omega, \mathcal{F})$ be a measurable space. We don't assume that $\Omega$ has any metric or topological structure. What does it mean to equip the set $\mathcal{M}$ of probability measures on this space with the weak* topology? I understand that the weak*-topology is the weakest topology on the dual space $V'$ of a normed vector space $V$ that makes the evaluation functionals defined by $\lambda_f(\phi) = \phi(f)$, $\phi \in V'$ and $f \in V$, continuous. What I don't understand is how $\mathcal{M}$ can be equipped with this topology as it's not a vector space. From what I've read, I think that measures in $\mathcal{M}$ are being identified with linear functionals on a space of measurable functions. For instance, $P \in \mathcal{M}$ gives rise to a linear functional $\phi$ on the normed linear space of bounded $\mathcal{F}$-measurable functions, equipped with the $\sup$-norm, by $\phi(f) := \int f dP$. Is something like this correct? Which underlying vector space of measurable functions should be used? I would appreciate it if someone could please sketch the relevant theory for me and/or refer me to a comprehensive textbook treatment of this topic. Addendum. My current understanding of this topic is summarized as part of my attempt to answer my own question in the link above. REPLY [4 votes]: You can consider the set of probabilities over $\mathcal{F}$ as a subset of the linear space $V(\mathcal{F})$ of all finitely additive (bounded) scalar measures over $\mathcal{F}$ endowed with the variation norm (see Theory of charges (K. Bhaskara Rao, M. Bhaskara Rao), chapter 7). We need to show that this space is the topological dual of another (locally convex) topological vector space. When $\mathcal{F}$ is a Boolean algebra of subsets of $\Omega$ (which in this case does not have to be a topological space) we define $S(\mathcal{F})$ as the linear space generated by $\{\chi_A:\ A\in\mathcal{F}\}$ (the characteristic functions of the sets in $\mathcal{F}$). $S(\mathcal{F})$ is called the space of simple functions. Now, for every $f\in S(\mathcal{F})$, $\Vert f\Vert_s:=\sup\vert f\vert<\infty$, so $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is a normed space. It is not hard to prove that the dual of $(S(\mathcal{F}),\Vert\cdot\Vert_s)$ is the space $V(\mathcal{F})$. In fact, on one hand every $\lambda\in V(\mathcal{F})$ defines the bounded linear functional $f\mapsto \int f\ d\lambda$ (for $f\in S(\mathcal{F})$); on the other hand, every $x^*\in S(\mathcal{F})^*$ defines on $\mathcal{F}$ the measure $\lambda_{x^*}(A):=x^*(\chi_A)$ (for $A\in\mathcal{F}$). See Topological Riesz spaces and measure theory (Fremlin) for a more complete reference. Having shown that the set of probabilities over $\mathcal{F}$ is contained in the topological dual of the normed space $S(\mathcal{F})$, it is clear why we can talk about weak$^*$-convergence.
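Concretely, since every simple function is a finite linear combination of characteristic functions, a net of probabilities $(P_\alpha)$ converges weak$^*$ to $P$ in this duality if and only if $$\int \chi_A\,dP_\alpha = P_\alpha(A)\ \longrightarrow\ P(A)\qquad\text{for every } A\in\mathcal{F},$$ i.e. weak$^*$-convergence here is just setwise convergence of the measures.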
PS: It is worth observing that by the Stone representation Theorem for Boolean rings any Boolean ring $\mathcal{R}$ is isomorphic to the ring of clopen sets in a locally compact Hausdorff space (see Measure theory Vol III (Fremlin), or this survey by Tao). Following this line, the approach shown by Tomasz can be proved to be much closer to the one exposed so far than one would think. It would be interesting to go through this idea.<|endoftext|> TITLE: What exactly is a Non Integrable function? How can they be solved? QUESTION [6 upvotes]: I was trying to find the integral of $\frac{\sin x}{x}$ recently, and nothing I tried appeared to get me any closer to a solution, so I looked it up and apparently it's a Non-integrable function. What makes a function Non-integrable and is there a way to solve it? Why can't it be solved with the "normal" methods? Forgive my error-ridden terminology, I'm new. REPLY [12 votes]: "Integrability" (more specifically, Lebesgue integrability) is a technical condition similar to absolute convergence of a sequence where a function is only considered integrable on $(a,b)$ if $$ \int_a^b|f(x)|dx < \infty.$$ By comparison with $1/x,$ $\sin(x)/x$ fails this criterion on $(-\infty, \infty)$ and is thus considered a non-integrable function on $(-\infty, \infty).$ This is likely the sense that was meant when you read that $\sin(x)/x$ is not integrable. However, just like series can be convergent but not absolutely convergent, it turns out that since $\sin(x)/x$ oscillates as $|x|\rightarrow\infty$, the 'area under the curve' cancels out over the oscillations and is finite. The integral $$ \int_{-\infty}^\infty \frac{\sin(x)}{x}dx$$ can be defined as an improper Riemann integral and happens to equal $\pi.$ This is just a warning that 'not integrable' (in the Lebesgue sense) doesn't mean that the definite integral doesn't have a value. All of this has little to do with your concern of finding an antiderivative for $\sin(x)/x$ and thus computing the integral over an interval using the fundamental theorem of calculus. As others have mentioned, it turns out that the antiderivative cannot be expressed as an elementary function, so it's no surprise you haven't been successful in applying various integration tricks. Even though you cannot compute the integral in the usual way by taking an antiderivative, remember that, since $\sin(x)/x$ is a nice bounded, continuous function, the definite integral $\int_a^b\frac{\sin(x)}{x}dx$ exists as a well-defined number (area under the curve) for any $a$ and $b$. Likewise, as others have mentioned, an antiderivative function $\mathrm{Si}(x) = \int_0^x\frac{\sin(t)}{t}dt$ exists and we have $$\int_a^b\frac{\sin(x)}{x}dx = \mathrm{Si}(b)-\mathrm{Si}(a).$$ It's just that $\mathrm{Si}(x)$ does not have a nice expression in terms of elementary functions like $\sin$, $\cos,$ $\ln$, $e^x,$ etc like you would usually compute with integration techniques.... it's its 'own thing' (called the sine-integral function). Functions without an elementary antiderivative are usually called 'functions without an elementary antiderivative' and not called 'not integrable', although the latter would be perfectly reasonable terminology if it weren't already reserved for the technical condition I mentioned above.<|endoftext|> TITLE: How to solve this definite integral involving a logarithm: $\int_0^1\left( \ln (4-3^x)+2 \ln(1+3^x)\right)dx$ QUESTION [8 upvotes]: $$\int_0^1\left( \ln (4-3^x)+2 \ln(1+3^x)\right)dx.$$ The given answer is $\log 16$. Can anyone explain this question for me? Thanks a lot.
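The claimed value is easy to sanity-check numerically before attempting any symbolic computation; a minimal Python sketch, assuming scipy is available:

import numpy as np
from scipy.integrate import quad

# Numerically check that the integral equals log(16).
integrand = lambda x: np.log(4 - 3**x) + 2 * np.log(1 + 3**x)
value, abserr = quad(integrand, 0, 1)
print(value, np.log(16))  # both approximately 2.7725887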
REPLY [5 votes]: This integral involves the use of Polylogarithms. Recall that:$$\int-\frac{\ln(1-x)}{x}dx=\mathrm{Li}_2(x)+C$$ For your case we will need to apply the definition given above. We have:$$\int_{0}^{1} \left(\ln (4-3^x)+2 \ln(1+3^x)\right)dx=\int_{0}^{1} \ln (4-3^x)\,dx+ \int_0^1 2 \ln(1+3^x)\,dx$$Now let's take a look at the first one:$$\int_{0}^{1} \ln \left(4\left(1-\frac{3^x}{4}\right)\right)dx= \int_{0}^{1} \left(\ln (4)+\ln \left(1-\frac{3^x}{4}\right)\right)dx$$$$\bbox[5px,border:2px solid red] {u=\frac{3^x}{4}\quad \quad du=\ln(3)u\ dx}$$$$\int_{\frac14}^{\frac{3}{4}} \frac{\ln (4)+\ln(1-u)}{\ln(3)\,u}\,du= \left[x\ln(4)-\frac{\mathrm{Li}_2(\frac{3^x}{4})}{\ln(3)}\right]_0^1$$ Now the second one: $$\int_0^1 2 \ln(1+3^x)\,dx$$$$\bbox[5px,border:2px solid red] {z=-3^x\quad \quad dz=\ln(3)z\ dx}$$$$2\int_{-1}^{-3}\frac{\ln(1-z)}{\ln(3)z}dz=\left[\frac{-2\,\mathrm{Li}_2(-3^x)}{\ln(3)}\right]_0^1$$ Therefore, we finally have: $$\left[\frac{x\ln(4)\ln(3)-\mathrm{Li}_2(\frac{3^x}{4})-2\,\mathrm{Li}_2(-3^x)}{\ln(3)}\right]_0^1=\frac{\ln(3)\ln(4)-2\,\mathrm{Li}_2(-3)-\mathrm{Li}_2(\frac{3}{4})-\frac{\pi^2}{6}+\mathrm{Li}_2(\frac{1}{4})}{\ln(3)} \approx$$$$\bbox[5px,border:2px solid black] {2.773}$$<|endoftext|> TITLE: Can the following be done without Zorn's lemma QUESTION [5 upvotes]: Prove that $R$ is a local ring with maximal ideal $M$ if and only if every element of $R\backslash M$ is a unit. I understand that this is an easy consequence of Zorn's lemma. But I'm wondering if this can be done without Zorn's lemma. Any help or insight is deeply appreciated. REPLY [10 votes]: As noted in comments, Zorn's Lemma is known to be equivalent to the statement that every (nonzero, unital) commutative ring has a maximal ideal. Suppose $S$ is a commutative ring without a maximal ideal, and $k$ a field. Then $R=k\times S$ has a unique maximal ideal $M=0\times S$, but not every element of $R\setminus M$ is a unit. So the full strength of Zorn's Lemma is needed to prove that if a commutative ring $R$ has a unique maximal ideal $M$, then every element of $R\setminus M$ is a unit.<|endoftext|> TITLE: Prove or disprove my conjecture about triangles. QUESTION [8 upvotes]: Prove or disprove: For a set of at least 3 points not all collinear, we can always construct a triangle that contains the points, with the added condition that each of the triangle's edges has a point at its center. Example: See the figure below. 7 points are scattered about at random, and the black triangle contains all of them. (Note that I consider the red point in the bottom left contained despite being intersected by the edge of the triangle.) Furthermore, the triangle's sides each have a point from the set at their center. I have colored these center points orange for convenient viewing, but there is nothing special about them: I might have selected another three center points when building a triangle to contain this set. REPLY [4 votes]: Your conjecture is true, and for a surprisingly (to me) elementary reason. I strongly suggest you draw my solution for yourself - it isn't deep. A finite set of points $S$ will determine finitely many triangles; take the largest of these in area, $T$. I claim that all points will fall inside the triangle $M(T)$ which has $T$ as its medial triangle (which will clearly satisfy our requirements on each side of the triangle having a midpoint belonging to $S$). Now suppose that some point $s\in S$ fell outside $M(T)$; let $L$ be the side of $M(T)$ that $s$ lies above.
(By this I mean that if we extend the sides of $M(T)$ to infinity, there will be three sections of the plane, determined by these lines, which touch the sides of the triangle; if $s$ is in the section that touches $L$, we can visualize it as "lying above $L$.") Now, take the side $K$ in the original triangle $T$ which is parallel to $L$ in $M(T)$. Because $s$ lies above $L$, it lies above the vertex $v$ in $T$ that falls along $L$, which means - since $K$ and $L$ are parallel - that the altitude of $s$ from $K$ is higher than that of $v$ from $K$. But then the triangle having $s$ as a vertex instead of $v$ would have strictly larger area (take $K$ as the triangle's base to see this). This would contradict our hypothesis that $T$ was the largest triangle we could make from points in $S$!<|endoftext|> TITLE: $(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})...$ QUESTION [5 upvotes]: Is there any well-known value (or approximation) for this? $$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})...$$ We know that it converges, as $$\sum_{i=2}^{\infty}\frac{(-1)^{i+1}}{i}=\ln 2-1$$ So there is a trivial upper bound $\frac{2}{e}$ for it. Is there any better result? In addition, is there any similar result for $$(1-\frac{1}{2})(1-\frac{1}{4})(1-\frac{1}{8})(1-\frac{1}{16})...$$ or $$(1+\frac{1}{2})(1+\frac{1}{4})(1+\frac{1}{8})(1+\frac{1}{16})...$$ REPLY [2 votes]: $$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... $$ Except for the first term, consider consecutive couples of terms: $(1+\frac{1}{3})(1-\frac{1}{4})= (\frac{4}{3})(\frac{3}{4})= 1$ $(1+\frac{1}{5})(1-\frac{1}{6})= (\frac{6}{5})(\frac{5}{6})= 1$ $(1+\frac{1}{7})(1-\frac{1}{8})= (\frac{8}{7})(\frac{7}{8})= 1$ And so on: all couples $=1$. So, only the first term remains: $(1-\frac{1}{2})=\frac{1}{2}$ Finally $$(1-\frac{1}{2})(1+\frac{1}{3})(1-\frac{1}{4})(1+\frac{1}{5})(1-\frac{1}{6})(1+\frac{1}{7})... = \frac{1}{2}$$<|endoftext|> TITLE: $\int_{0}^{1}{(1-x)(1-2x^{\phi})+\phi(x-x^{\phi})\over (1-x)^2}\cdot{\left(1-x^{\phi}\over 1-x\right)^{1\over \phi}}\mathrm dx=\phi^{\phi}$ QUESTION [14 upvotes]: How can we show that $$\int_{0}^{1}{(1-x)(1-2x^{\phi})+\phi(x-x^{\phi})\over (1-x)^2}\cdot{\left(1-x^{\phi}\over 1-x\right)^{1\over \phi}}\mathrm dx=\phi^{\phi}\tag1$$ where $\phi$ is the golden ratio? This integral looks too complicated. I tried the substitution $u=1-x$, but it did not simplify things: $$\int_{0}^{1}{u[1-2(1-u)^{\phi}]+\phi[1-u-(1-u)^{\phi}]\over u^2}\cdot\left[1-(1-u)^{\phi}\over u\right]^{1\over \phi}\mathrm du$$ $$\int_{0}^{1}{\phi-{u\over \phi}-(1-u)^{\phi}(2u+\phi)\over u^2}\cdot\left[1-(1-u)^{\phi}\over u\right]^{1\over \phi}\mathrm du$$ Simplified $(1)$: $$\int_{0}^{1}{1+{x\over \phi}+x^{\phi}(2x-\phi{\sqrt{5}})\over (1-x)^2}\cdot\left({1- x^\phi}\over 1-x\right)^{1\over \phi}\mathrm dx=\phi^{\phi} \tag2$$ I have no idea where to go from here. REPLY [12 votes]: Using the relation $\phi - 1 = 1/\phi$, we find that \begin{align*} &\int_{0}^{1} \frac{(1-x)(1-2x^{\phi})+\phi(x-x^{\phi})}{(1-x)^2}\cdot\left(\frac{1-x^{\phi}}{1-x}\right)^{1/\phi} \, \mathrm{d}x \\ &= \int_{0}^{1} \left( 2 - \frac{\phi^2}{1-x^{\phi}} + \frac{\phi}{1-x} \right) \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} \, \mathrm{d}x \\ &= \bigg[ x \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} \bigg]_{0}^{1} \\ &= \phi^\phi.
\end{align*} As a corollary, we have $$ \int \frac{(1-x)(1-2x^{\phi})+\phi(x-x^{\phi})}{(1-x)^2}\cdot\left(\frac{1-x^{\phi}}{1-x}\right)^{1/\phi} \, \mathrm{d}x = x \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} + C. $$ Here is my line of reasoning that led to this solution: I tried to simplify the integrand so that it minimizes the amount of cancellation as well as mimics partial fraction decomposition. Now, the integrand in the second line looks similar to what we obtain when we apply the logarithmic differentiation $$ \frac{d}{dx}\left(\frac{1-x^{\phi}}{1-x}\right)^{\phi} = \left( \frac{\phi}{1-x} - \frac{\phi^2 x^{\phi-1}}{1-x^\phi} \right) \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi}. \tag{1} $$ Although this is not exactly the same as what we want, it hints that we might actually compute the antiderivative. Playing a little bit, we find that $$ \frac{d}{dx}\frac{(1-x^{\phi})^{\phi}}{(1-x)^{\phi-1}} = \left( -2 -\frac{\phi^2 x^{\phi-1}}{1-x^{\phi}} + \frac{\phi^2}{1-x^{\phi}} \right) \left(\frac{1-x^{\phi}}{1-x}\right)^{\phi}. \tag{2}$$ Bingo! $\text{(1)} - \text{(2)}$ gives exactly what we want and we are done.<|endoftext|> TITLE: If every element of R has a (multiplicative) inverse, then $R = \{0\}$ QUESTION [7 upvotes]: I am struggling to understand why this is the case. I need to prove this, but I don't understand how it's true. For example, if every non-zero element of $R$ has a multiplicative inverse, then it's a field. So how does $R=\{0\}$? Thank you for your time :) REPLY [3 votes]: First of all, if every non-zero element of $R$ has a multiplicative inverse, that makes it a division ring and not necessarily a field; a good example is the quaternions. It only becomes a field if the multiplication is commutative. Secondly, by the axioms of ring theory, $0$ does not have a multiplicative inverse, except in the ring $\{0\}$.<|endoftext|> TITLE: Do the Chern classes of a principal bundle depend on an embedding $G\hookrightarrow U(n)$? QUESTION [7 upvotes]: Let $G$ be a compact Lie group and $\pi:P\to M$ a principal $G$-bundle. Since $G$ is compact, it has an embedding $G\hookrightarrow U(n)$ for some $n$. This embedding determines a unique principal $U(n)$-bundle over $M$ and hence we obtain Chern classes $c_k\in H^{2k}_{\mathrm{dR}}(M;\Bbb{R})$ for $k=1,\ldots,n$ (by applying the Chern-Weil homomorphism to the $U(n)$-invariant polynomials $\mathfrak{u}(n)\to\mathbb{R}$ obtained by expanding the function $\det(\lambda I-\frac{1}{2\pi i}X)$ in powers of $\lambda$). To what extent do these cohomology classes depend on the choice of embedding $G\hookrightarrow U(n)$? In other words, if $G\hookrightarrow U(m)$ is another embedding and we construct the respective Chern classes $\tilde{c}_k\in H^{2k}_{\mathrm{dR}}(M;\Bbb R)$, do we have $\tilde{c}_k=c_k$ for $1\leq k\leq\min(m,n)$ and all other classes are zero? REPLY [2 votes]: This certainly depends on the embedding. For instance, let $G=U(1)$; the corresponding $\mathbb{C}^n$-bundle splits as the sum of line bundles $L^{n_i}$, where the $n_i$ are the weights of the representation and $L$ is the line bundle associated with the identity representation $U(1)\to U(1)$. Thus the total Chern class is the product $\prod_{i=1}^n (1+n_ic)$. Something like this is true in the general case; the computation reduces to the set of irreducible representations of $U(n)$.<|endoftext|> TITLE: What is the intuitive meaning of the Wronskian? QUESTION [11 upvotes]: What is the intuition behind the Wronskian? I do not want an explanation that involves a rigorous proof.
I just want to learn how to intuitively think about it. Edit: And since it is tied to the weight function in the theory of differential equations, it would also be great to have an intuitive connection between the two included in the answer! REPLY [5 votes]: To me, the key formula in understanding the Wronskian of a linear homogeneous first order differential system is the following: $$ \text{if}\quad \frac{d A}{dt}(t)=B(t)A(t)\quad \text{then}\quad \frac{d}{dt}\left(\det A(t)\right) = \mathrm{tr}\, B(t)\det A(t).$$ Here $A, B$ are square matrices and $\mathrm{tr}$ stands for "trace" and this formula is often called "Liouville's formula". You apply it as follows. Consider the system of $n$ differential equations $$ \frac{d\boldsymbol{x}}{dt} = B(t) \boldsymbol x(t).$$ (Boldface denotes elements of $\mathbb R^n$, considered as column vectors). Taking $n$ solutions $\boldsymbol{x}_1\ldots \boldsymbol{x}_n$ one forms the matrix $$ A(t)=\begin{bmatrix} \boldsymbol{x}_1,\ \ldots\, , \boldsymbol{x}_n\end{bmatrix}\in\mathbb{R}^{n\times n}.$$ The Wronskian determinant $\det A(t)$ is geometrically interpreted as the oriented volume of the parallelepiped spanned by $\boldsymbol{x}_1\ldots \boldsymbol{x}_n$. Liouville's formula says that this volume's evolution is governed by the trace of $B(t)$. For a single differential equation of order $n$ $$ x^{(n)}(t)+ b_{n-1}(t)x^{(n-1)}(t)+\ldots +b_0(t) x(t) = 0$$ one introduces the vector $$ \boldsymbol{x}(t)= \begin{bmatrix} x(t) \\ x'(t) \\ \vdots \\ x^{(n-1)}(t)\end{bmatrix}$$ and the matrix ("companion matrix") $$ B(t)=\begin{bmatrix} 0& 1& 0& \ldots& \ldots& \ldots \\ 0 & 0 & 1 & 0 & \ldots&\ldots \\ & & &\vdots & & \\ 0& \ldots& \ldots& \ldots& 0 & 1 \\ -b_0(t) & -b_1(t)& \ldots &\ldots & -b_{n-2}(t) & -b_{n-1}(t)\end{bmatrix}$$ so that the equation is equivalent to $\frac{d\boldsymbol{x}}{dt}=B(t)\boldsymbol{x}(t)$. Often, one says that $\boldsymbol{x}$ belongs to the phase space of the original equation. (This is especially true when $n=2$, in which case the phase space is typically interpreted as the Cartesian product of position and velocity.) In the case of a linear homogeneous differential equation of order $n$, the Wronskian is interpreted as an oriented volume in phase space. Liouville's formula says that the evolution of the Wronskian is governed by the term $\mathrm{tr}\, (B(t))=-b_{n-1}(t)$. Further development. This interpretation fits well in the frame of Liouville's theorem for Hamiltonian systems. See also here.<|endoftext|> TITLE: Find the remainder when $f(x)$ is divided by $(x - 3)$ given $f(1)$ and $f(7)$ QUESTION [8 upvotes]: A polynomial $f(x)$ is given. All we know about it is that all its coefficients are non-negative integers, $f(1) = 6$ and $f(7) = 3438$. Hence find the remainder when $f(x)$ is divided by $(x-3)$. I have no clue on how to go about solving this. REPLY [13 votes]: The polynomial would be $x^4+ 3x^3 +x+1$. The first observation, from $f(1)=6$, is that all the coefficients are at most $6$, so they are valid base-$7$ digits. So $f(7)=3438$ is the decimal equivalent of the base 7 number formed by the coefficients $1 3 0 1 1$ ($=3438_{10}$). So by the remainder theorem the remainder is $f(3)= 166$.<|endoftext|> TITLE: Probability of double matchings QUESTION [5 upvotes]: Yesterday, I solved this problem from probability theory: Two similar decks of $N$ distinct cards are matched against a similar target deck. Find the probability of exactly $m \leq N$ matches. I proceeded in the following manner.
Let $A_i$ denote the event that the $i^{\text{th}}$ card is matched (from both the decks) against the target deck. Let $P(N,m)$ denote the probability that exactly $m$ matches occur (don't mind the bad notation please); then $$P(N,m) = S_m - \binom{m+1}{m}S_{m+1} + \binom{m+2}{m}S_{m+2} - \ldots + (-1)^{N-m}\binom{N}{m} S_{N}$$ where $S_1 = \sum_{1\leq i \leq N} P(A_i)$, $S_2 = \sum_{1\leq i \lt j \leq N} P(A_i \cap A_j) \ldots$ Clearly, we have $$S_{m+k} = \binom{N}{m+k} \frac{(N-m-k)!^2}{N!^2}$$ Therefore, \begin{align*} P(N,m) &= \sum_{k=0}^{N-m} (-1)^k \binom{m+k}{m} \binom{N}{m+k} \frac{(N-m-k)!^2}{N!^2} \\ &= \frac{1}{m!} \frac{1}{N!} \sum_{k=0}^{N-m} (-1)^k \frac{(N-m-k)!}{k!} \end{align*} After obtaining the above expression, I wondered whether there exists some nice closed formula for the series. So I plugged it into W|A but it doesn't return one (in terms of elementary functions). Next, I started wondering how this probability function behaves as $N \rightarrow \infty$, because this limit might be an actual translation of some real world phenomenon (although that is something to be considered later on). So, I first tried to check for $m=0$ \begin{align*} \lim_{N \rightarrow \infty} P(N,0) &= \lim_{N \rightarrow \infty} \frac{1}{N!} \sum_{k=0}^{N} (-1)^k \frac{(N-k)!}{k!} \\ &= \lim_{N \rightarrow \infty} \left(1 - \frac{1}{N} + \frac{1}{2!}\frac{1}{N(N-1)} - \ldots \right) \end{align*} It doesn't strike me how to solve this limit, as I cannot evaluate the limit pointwise since it is an infinite sum. So I thought of setting up a recurrence (which may help?). This is what I found: $$P(N+2,0) = P(N+1,0) + \frac{P(N,0)}{(N+1)(N+2)} + \frac{(-1)^{N}}{(N+2)!^2}$$ But again, I still couldn't figure out much. I even expressed this as an integral (just because sometimes it does help) and then tried to do some manipulations, but still no clue $$P(N,0) = (N+1) \int_0^1 \sum_{k=0}^N\frac{ t^k(1-t)^{N-k}}{k!^2} \mathrm{d}t$$ So these are the questions I am trying to find a solution to: Is there any nice closed form for the expression? How does the probability function, which I derived, behave when $N \rightarrow \infty$ for a fixed $m$? What happens as $N \rightarrow \infty$ and $m \rightarrow \infty$? Any help would be appreciated. Edit 1: I figured out with some computations that $P(N,0) \rightarrow 1$ as $N \rightarrow \infty$, but I guess it still requires a rigorous proof. REPLY [2 votes]: Let $X_k = \mathbf{1}_{A_k}$, $k=1,\dots,N$, be the variable indicating a (double) match in the $k$th card so that $X = X_1+\dots + X_N$ is the number of matches. By linearity, $$ \mathbb{E}[X] = \sum_{k=1}^N \mathbb{E}[X_k] = N\cdot \frac{1}{N^2} = \frac{1}{N}. $$ By the Markov inequality, $\mathbb{P}(X\ge 1) \le \mathbb{E}[X] \to 0$, $N\to\infty$. Therefore, $P(N,0)\to 1$, $N\to\infty$.<|endoftext|> TITLE: How to prove that this iterated mean gives $\frac{2 \cdot 2^{7/8}}{\vartheta _2\left(0,\frac{1}{\sqrt{2}}\right)}$ for $a_0=2,b_0=1$? QUESTION [6 upvotes]: I tried a very simple iterated mean and got a very strange closed form for a particular value. The sequence in question goes like this: $$a_{n+1}=\frac{2a_nb_n}{a_n+b_n}, \qquad b_{n+1}=\frac{a_{n+1}+b_n}{2}$$ Note that this looks like the Arithmetic-Harmonic mean (i.e. just the Geometric mean, or the method for computing square roots); however, it is different, since we substitute the current value $a_{n+1}$ when computing $b_{n+1}$, instead of the previous value $a_n$.
$$a_{n+1}=\frac{2a_nb_n}{a_n+b_n}, \qquad b_{n+1}=\frac{3a_nb_n+b_n^2}{2(a_n+b_n)}$$ Thus, this is similar in spirit to the Schwab-Borchardt mean, but this is a rational sequence. It's easy to show that it converges: $$L(a_0,b_0)=\lim_{n \to \infty}a_n=\lim_{n \to \infty}b_n$$ What is truly surprising is the closed form I've got for the simple case: $$L(2,1)=\frac{2 \cdot 2^{7/8}}{\vartheta _2\left(0,\frac{1}{\sqrt{2}}\right)}=\frac{2}{\sum_{k=0}^\infty 2^{-k(k+1)/2}}=$$ $$=1.2182994221324572310422292086114491025901998820372$$ I obtained the closed form in a roundabout way using Inverse Symbolic Calculator, OEIS and Mathematica. This is one of the Jacobi theta functions. So far I haven't found any connection between the theta functions and this kind of iterated mean, except of course for the elliptic integrals and the arithmetic-geometric mean, but that is a different case. Neither was I able to find the general closed form for $L(x,y)$, or any other particular closed form. Some elementary considerations: $$a_{n+1}-b_{n+1}=\frac{b_n}{a_n+b_n}\cdot\frac{a_n-b_n}{2}$$<|endoftext|> TITLE: Proof about inclusion of subspaces QUESTION [5 upvotes]: I have the following question and I'm not sure my proof is correct/approached correctly: Let $V$ be an $n$-dimensional vector space, let $U_i \subset V$ be subspaces of $V$ for $i = 1,2,\dots,r$ where $$U_1 \subset U_2 \subset \dots \subset U_r$$ If $r>n+1$ then there exists an $i<r$ such that $U_i = U_{i+1}$. If $r>n+1$, the above result proves that not all inclusions are proper. Let $j$ be the maximum index such that $U_{j}\subsetneq U_{j+1}$ (or $j=0$ if all inclusions are equalities): by the result above, $j<r$.<|endoftext|> TITLE: How to prove that $\frac{1000!}{(500!)^2}$ is not divisible by 7? QUESTION [8 upvotes]: How to prove that $$\frac{1000!}{(500!)^2}$$ is not divisible by 7? I reduced the above fraction to: $$\frac{\prod_{k=501}^{1000}k}{500!} $$ But I don't know how to proceed. Please help. REPLY [7 votes]: It is a general formula that the number of times a prime $p$ occurs in the prime factorisation of $n!$ is given by $$ \left[\frac{n}{p}\right] + \left[\frac{n}{p^2}\right] + \left[\frac{n}{p^3}\right] + \cdots $$ where $[x]$ denotes the greatest integer $≤x$. In $1000!$ you'd have $$ \left[\frac{1000}{7}\right] + \left[\frac{1000}{7^2}\right] + \left[\frac{1000}{7^3}\right] + \cdots = 164 $$ $7$'s, and in $500!$ you have $$ \left[\frac{500}{7}\right] + \left[\frac{500}{7^2}\right] + \left[\frac{500}{7^3}\right] + \cdots = 82 $$ $7$'s. So you'll have a total of $164$ $7$'s in $(500!)^2$. The rest follows. REPLY [2 votes]: In $\{1,\dots, 500\}$, there are 71 multiples of 7, 10 multiples of $7^2$ and one multiple of $7^3$. How many multiples of $7$, $7^2$, $7^3$ do you have in $\{501,\dots, 1000\}$? Same amount.<|endoftext|> TITLE: What is the difference between relational logic and predicate logic? QUESTION [5 upvotes]: I am studying the Introduction to logic course from Stanford University and I began learning about relational logic. However, when I search on Google for the terms there, I often end up with results from websites that teach predicate logic. Is there a difference between the two types of logic? I am talking about THIS course from Stanford: http://logic.stanford.edu/intrologic/notes/chapter_06.html REPLY [4 votes]: In the Stanford course "relational logic" just means First-order logic (FOL) with Herbrand semantics. In other words, FOL without model theory.
You could say that Herbrand semantics introduces a new kind of model theory, but it really is just a way to set model theory aside and focus on logic. I think this is a good idea because it emphasizes the ancient distinction between logic and grammar, where logic is concerned with form, and grammar is concerned with meaning and interpretation. Taking this view to the extreme, we could say that model theory should not even be seen as part of logic, but rather grammar. They are closely related, but the distinction is important.<|endoftext|> TITLE: Cover for Cech cohomology of the constant sheaf $\mathbb{Z}$ over $\mathbb{P}^n$ QUESTION [7 upvotes]: First of all consider $\mathbb{P}^n$ as a complex analytic manifold. In Griffiths and Harris's Principles of Algebraic geometry p.145 it is stated $$ H^2 (\mathbb{P}^n, \bf{\mathbb{Z}}) \cong \mathbb{Z}, $$ that is, the second Cech cohomology group of $\mathbb{P}^n$ over the constant sheaf $\mathbb{Z}$ is isomorphic to $\mathbb{Z}$. I wanted to check this using the usual algebraic cover of open sets: $\mathcal{U} = \{ U_i \}_{0 \le i \le n}$, where $U_i = \{ x_i \neq 0 \}$, but already for $\mathbb{P}^1$ this fails, meaning that $\mathcal{U}$ is not fine enough for computing Cech cohomology. Edit: As Rene remarked in the answer below, there is a theorem of Leray which states that given a sheaf $\mathcal{F}$ and a cover $\mathcal{U}= \{ U_i \}$ such that $$ H^p(U_{i_0} \cap \dotsb \cap U_{i_k}, \mathcal{F}) = 0 $$ for all finite intersections of $\mathcal{U}$ and all $p \geq 1$, then $$ H^p(\mathcal{U},\mathcal{F}) \rightarrow H^p(X, \mathcal{F}) $$ is an isomorphism for all $p$. This still gives rise to questions: Is there a standard, or at least well known, cover of $\mathbb{P}^n$ satisfying this condition? Is there any intuition on how to choose such covers? Is there any other direct method of seeing the isomorphism $H^2 (\mathbb{P}^n, \bf{\mathbb{Z}}) \cong \mathbb{Z}$? REPLY [2 votes]: 1. Is there a standard, or at least well known, cover of $\mathbb{P}^n$ satisfying this condition? Not an explicit one as far as I know. 2. Is there any intuition on how to choose such covers? Any covering consisting of geodesically convex sets works. 3. Is there any other direct method of seeing the isomorphism $H^2 (\mathbb{P}^n, \bf{\mathbb{Z}}) \cong \mathbb{Z}$? The simplest approach in my opinion is to use the cell complex structure of $\mathbb{P}^n$: it has exactly one cell in each even dimension $0$, $2$, $\ldots$ , $2n$. Then use the definition of homology for CW complexes, and apply it to this cell decomposition.<|endoftext|> TITLE: Finding a Galois extension of $\Bbb Q$ of degree $3$ QUESTION [12 upvotes]: I want to find a Galois extension $K/\mathbb{Q}$ such that $[K:\mathbb{Q}]=3$. I thought about this for a while, but haven't been able to come up with one yet. What I tried so far: (i) Taking a separable polynomial $f\in\mathbb{Q}[x]$ of degree three and considering its splitting field. (ii) Looking at the splitting fields of primitive roots of unity. The second one doesn't work because the splitting field over such a root has as degree a value in the range of Euler's totient function, and this doesn't contain three. The first approach also didn't work. I tried polynomials of the form $(x-\sqrt{p})(x-\sqrt{q})(x-\sqrt{r})$ for primes, but the splitting fields of those have degree $8$.
I then tried 'third roots' $\alpha$, but the minimal polynomials of those have complex as well as real roots, so the simple extensions $K(\alpha)$ aren't normal unless they're trivial. Could anyone please give me a hint on what else to try? REPLY [11 votes]: Consider the cyclotomic polynomial $\Phi_7(x) = x^6+x^5+\ldots + x + 1$ which is irreducible and generates an extension of $\Bbb Q$ of degree $6$ which is abelian (i.e. it is Galois with abelian Galois group). Then if $\zeta_7$ is a primitive $7^{th}$ root of $1$, $F=\Bbb Q(\zeta_7)$ is the extension. The element $\zeta_7+\zeta_7^{-1}$ is fixed by complex conjugation (an element of order $2$) and no other automorphism (you can check directly by noting that $\zeta_7\mapsto \zeta_7^{k}$, $1\le k\le 6$, are the automorphisms of $F$ and that any other automorphism besides $k=6$ gives a different element). But then $K= \Bbb Q(\zeta_7+\zeta_7^{-1})\subseteq F$ is an extension of degree $3$, because that is the index of the subgroup generated by complex conjugation. Hence $K/\Bbb Q$ is the desired extension. You can even describe it explicitly as $K=\Bbb Q\left(\cos\left({2\pi\over 7}\right)\right)$. Working out the details you can see it is generated by the polynomial $$p(x) = x^3+x^2-2x-1.$$<|endoftext|> TITLE: Objects are shuffled. What is the probability that exactly one object remains in its original position? QUESTION [7 upvotes]: We have a deck with $n$ cards enumerated $1,2,\ldots,n$. The deck is shuffled. What is the probability of exactly one card remaining in its original position? What is the limit as $n$ rises to infinity? $$ \begin{array}{rcl} \{1\} & : & \dfrac 11 \\[6pt] \{12,21\} & : & \dfrac 02 \\[6pt] \{123,132,213,231,312,321\} & : & \dfrac 36 \\[6pt] & \vdots \end{array} $$ At $n = 1000$ and $10\,000$ trials: $$ \begin{array}{rcl} \text{value of }n & & \text{probability} \\ \hline 1000 & & 0.3739 \\ 1001 & & 0.3689 \\ 1002 & & 0.3722 \\ 1003 & & 0.3638 \\ 1004 & & 0.3707 \\ 1005 & & 0.3664 \\ 1006 & & 0.3616 \\ 1007 & & 0.3728 \\ 1008 & & 0.3702 \\ 1009 & & 0.3801 \end{array} $$ At $n = 100\,000$ and $10\,000$ trials: $$ \begin{array}{rcl} \text{value of }n & & \text{probability} \\ \hline 100000 & & 0.3659 \\ 100001 & & 0.3552 \\ 100002 & & 0.356 \\ 100003 & & 0.367 \\ 100004 & & 0.3738 \\ 100005 & & 0.3647 \\ 100006 & & 0.3654 \\ 100007 & & 0.3637 \\ 100008 & & 0.3718 \\ 100009 & & 0.3708 \end{array} $$ Apparently, the probability approaches $0.36-0.38$, but how can one derive it analytically? REPLY [11 votes]: By the inclusion-exclusion principle, in the symmetric group $S_n$ there are $$ n!\sum_{k=0}^{n}\frac{(-1)^k}{k!} $$ permutations without fixed points (see derangements). It follows that in the same group there are $$ n\cdot (n-1)!\sum_{k=0}^{n-1}\frac{(-1)^k}{k!} $$ elements with exactly one fixed point, and the limit probability is $\color{red}{\large\frac{1}{e}}$ in both cases.<|endoftext|> TITLE: Is there a meaningful example of probability of $\frac1\pi$? QUESTION [29 upvotes]: A large portion of combinatorics problems have probabilities of $\frac1e$; the secretary problem is one such example. Excluding trivial cases (a variable has a uniform distribution over $(0,\pi)$ - what is the probability for the value to be below $1$?), I can't recall any example where $\frac1\pi$ would be the solution to a probability problem. Are there "meaningful" probability theory cases with probability approaching $\frac1\pi$?
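One can at least experiment numerically. For the inscribed-square dartboard example discussed in the answer below, where the probability is $2/\pi$, here is a minimal Monte Carlo sketch in Python (sample sizes and seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
n = 10**6
# Uniform points in the unit disk, by rejection from the square [-1, 1]^2.
pts = rng.uniform(-1, 1, size=(3 * n, 2))
pts = pts[(pts**2).sum(axis=1) <= 1][:n]
# Inscribed square with vertices on the circle: |x|, |y| <= sqrt(2)/2.
p_hat = (np.abs(pts).max(axis=1) <= np.sqrt(2) / 2).mean()
print(p_hat, 2 / np.pi)  # both approximately 0.6366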
REPLY [4 votes]: There are quite a few geometric probability problems involving $\pi$. The simplest I can think of: imagine this being a darts board. The probability of hitting the square, assuming the shot didn't miss the board, is $\frac{2}{\pi}$. If the radius of the circle is $r$, then one side of the square is $r\sqrt{2}$ and $p=\frac{S_{square}}{S_{circle}}=\frac{2 r^2}{\pi r^2}=\frac{2}{\pi}$<|endoftext|> TITLE: Difficult exponential equation $6^x-3^x-2^x-1=0$ QUESTION [8 upvotes]: $$6^x-3^x-2^x-1=0$$I don't have any idea how to solve this equation. I don't know how to show that the function $f(x)=6^x-3^x-2^x-1$ is strictly increasing (if that is the case). The derivative doesn't show me anything. I also tried to use the Lagrange theorem (like in this equation for example: $2^x+9^x=5^x+6^x$), but it does not apply here! But I need a solution without calculus. Please help! Edit: In my country the theorem that states, roughly, $f'(c)=\dfrac{f(b)-f(a)}{b-a}$, where $f:[a,b]\to\mathbb{R}$, is called the Lagrange theorem (i.e. the mean value theorem). REPLY [2 votes]: I quite liked your question, so I thought this might be of assistance: next time you face a question similar to this one, you can employ some mathematical software to help, e.g. Maple. You can rewrite the equation to look like this $$ f(x) = g(x) $$ where $f(x)= 6^x - 3^x -2^x$ and $g(x)=1$ is a constant function. So it's an intersection problem. The graph is below. Clearly, they intersect at $x=1$, or you can use the following code to be $100\%$ sure fsolve({f(x) = g(x)}, {x}, x = -1 .. 2) where $f(x)$ and $g(x)$ are defined as above.<|endoftext|> TITLE: Showing a Mobius strip is a surface QUESTION [5 upvotes]: A Mobius strip is a surface which is diffeomorphic to the surface $S$ defined below. Let $α : R → R^{3}$ be defined as $α(t) = (\cos{t},\sin{t}, 0)$. Let $ψ : R × (−1, 1) → R^{3}$ be defined as $ψ(t, s) = 5α(t) + s(\cos(\frac {t}{2})(0, 0, 1) + \sin(\frac {t}{2})α(t))$. $(5\cos{t},5\sin{t}, 0) + s \sin(\frac {t}{2}) (\cos{t},\sin{t}, 0) + s \cos(\frac {t}{2})(0, 0, 1) $ In this form it's a little easier to view this in cylindrical coordinates. With some help I have finally been able to visualize this as a circle of radius 5 with a line segment at the end of the radius of the circle, moving around the entire circle while completing a flip of the line segment with one rotation about the axis. $s$ appears to control the length of this line segment, and it seems to change the orientation of the line segment as it transitions through $s=0$. Sketch the image $S$ of $\psi$: a Mobius band, turning counterclockwise when $s$ is negative and clockwise when $s$ is positive; it appears to be a half turn each way. When $s=0$ we get a circle of radius 5. Show that $S$ is a $C^{\infty}$ surface in $\mathbb{R}^{3}$: I think I need to do this in a range from $0$ to $\pi$ and $\pi$ to $2\pi$, where the area is $s \cdot 2.5\pi$ for each half? Bounty Award: I would like an approach to show it's a $C^{\infty}$ surface, preferably by using patches, not the IFT; any idea on how to set this up is sufficient for the bounty. I am given in the question that this is diffeomorphic to a Mobius strip. Can I somehow use that, and the fact that a Mobius strip is a surface, to show that this is a surface? REPLY [2 votes]: The strategy about splitting the Möbius strip into two pieces and using IFT is the right way to do the problem from the point of view of Pressley's definitions. However, there is another way to look at this: a surface does not really depend on its embedding into $\mathbb{R}^3$.
For example, if I gave you two spheres with the same radii but different centers, you would still think of them as really being the same sphere, just put into $\mathbb{R}^3$ at different places. And you can think of the Klein bottle as being a surface, probably, even though it actually can't be embedded into $\mathbb{R}^3$ (you need one more dimension). Indeed, if you wanted to construct a sphere, it suffices to give you two disks to be surface patches and tell you how to glue them together by specifying smooth and invertible transition functions. Similarly, to construct a Möbius strip, all I actually have to do is take two copies of $(-1,1) \times (-1,1)$ and specify smooth transition functions on the pieces I want to glue together (picture attached). The embedding into $\mathbb{R}^3$ is extraneous data.<|endoftext|> TITLE: A cubic nonlinear Euler sum QUESTION [6 upvotes]: Any idea how to solve the following Euler sum $$\sum_{n=1}^\infty \left( \frac{H_n}{n+1}\right)^3 = -\frac{33}{16}\zeta(6)+2\zeta(3)^2$$ I think it can be solved using contour integration but I am interested in solutions using real methods. REPLY [2 votes]: \begin{align} S&=\sum_{n=1}^\infty\left(\frac{H_n}{n+1}\right)^3=\sum_{n=1}^\infty\left(\frac{H_{n-1}}{n}\right)^3\\ &=\sum_{n=1}^\infty\frac{H_n^3}{n^3}-3\sum_{n=1}^\infty\frac{H_n^2}{n^4}+3\sum_{n=1}^\infty \frac{H_n}{n^5}-\sum_{n=1}^\infty\frac{1}{n^6} \end{align} Substituting the following results: $$\sum_{n=1}^\infty\frac{H_n^3}{n^3}=\frac{93}{16}\zeta(6)-\frac52\zeta^2(3)$$ $$\sum_{n=1}^\infty \frac{H_n^2}{n^4}=\frac{97}{24}\zeta(6)-2\zeta^2(3)$$ $$\sum_{n=1}^\infty\frac{H_n}{n^5}=\frac74\zeta(6)-\frac12\zeta^2(3)$$ We get $$\boxed{S=2\zeta^2(3)-\frac{33}{16}\zeta(6)}$$ Note that the first and second sums are proved here and here respectively. As for the third sum, it can be obtained using the Euler identity.<|endoftext|> TITLE: Terrible equation $(x+x^{\ln x})^{10}=2^{10}$ QUESTION [5 upvotes]: $$(x+x^{\ln x})^{10}=2^{10}$$ I tried to take the logarithm of both sides and to apply the 10th root, but didn't get too far. Please help! Edit: I have the following result which tells me that it has only one solution. $$x^{\ln x}=2-x$$ we take the logarithm of both sides $$\ln^2{x}=\ln{(2-x)}$$ $\ln^2{x}$ is a strictly increasing function (for $x>0$) and the function $\ln(2-x)$ is strictly decreasing (because it is a composition of a strictly increasing one with a decreasing one). So we have only one solution $x=1$. What is wrong here? REPLY [12 votes]: After taking the 10th root on both sides, we obtain: $$|x+x^{\ln{x}}|=2$$ Note that since $x \in \mathbb{R}^+$, we can deduce that: $$x+x^{\ln{x}}=2$$ Now, this equation has a solution which is obvious, at $x=1$. However, there is another solution to this equation. I do not think there is a closed form solution in terms of elementary functions for $x$. Therefore, we must use a numerical method. I will use the Newton-Raphson Method. The process is as follows: $$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$ We choose an initial starting point $x_0=0.2$, a reasonable estimate of the solution. We use the function $f(x)=x+x^{\ln x}-2$ and find its derivative: $f'(x)=2\ln{x} \cdot x^{\ln{x}-1}+1$ To obtain the iteration: $$x_{n+1}=x_n-\frac{x_n+{x_n}^{\ln{x_n}}-2}{2\ln{x_n} \cdot {x_n}^{\ln{x_n}-1}+1}$$ Doing this gives the solution: $$\begin{array}{c|c}n&x_n\\\hline0&0.2\\1&0.253997\\2&0.322907\\3&0.402142\\4&0.476180\\5&0.523933\\6&0.539434\\7&0.540775\\8&0.540784\\9&0.540784\end{array}$$ As the iterations $n \to \infty$, $x_n \to x$, the solution to the equation. Therefore: $$x \approx 0.54078414712$$ You can implement this method in a spreadsheet, or in more sophisticated software such as MATLAB.
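The same iteration also fits in a few lines of Python; a minimal sketch (the iteration count of 10 is an arbitrary choice, enough to reproduce the table above):

import math

def f(x):
    return x + x**math.log(x) - 2

def fprime(x):
    # d/dx of x^(ln x) = e^((ln x)^2) is 2 ln(x) * x^(ln x - 1)
    return 2 * math.log(x) * x**(math.log(x) - 1) + 1

x = 0.2  # starting point, as above
for _ in range(10):
    x = x - f(x) / fprime(x)
print(x)  # ~0.54078414712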
<|endoftext|> TITLE: How to define a covariant derivative on a smooth vector bundle using only an Ehresmann connection? QUESTION [5 upvotes]: I'm trying to get a clear picture of covariant differentiation in my head. I'm looking for a definition of the covariant derivative using only the structure of an Ehresmann connection on a smooth vector bundle. The data of an Ehresmann connection on any submersion can be specified in the three usual equivalent ways of specifying a splitting of a short exact sequence: If $f:X\to Y$ is a submersion, there's the short exact sequence of Atiyah, of bundles over $X$ $$0\to\mathrm VX\to \mathrm TX\to f^\ast \mathrm TY\to 0.$$ If we take a right splitting $\nabla:f^\ast \mathrm TY\to \mathrm TX$, the only new possibility seems to be to horizontally lift vector fields. How to define a covariant derivative on a smooth vector bundle $f:X\to Y$ using only an Ehresmann connection? Update following levap's great answer. If I understand correctly, here's the diagram describing the maps of levap's answer. $$\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}} \begin{array}{c} \mathrm T Y & \ra{\mathrm d s} & \mathrm T X & \ra{K} & \mathrm VX & \ra{\Phi} & X\times _YX & \ra{\pi_2} & X\\ \da{} & & \da{} & & \da{} & & \da{\pi_1} & & \da{f}\\ Y & \ras{s} & X & \ras{=} & X & \ras{=} & X & \ras{f} & Y \\ \end{array}$$ Here, $K$ is a section of the bundle map $\mathrm VX\to \mathrm TX$ over $X$, which fiberwise projects from a tangent space to its vertical subspace. $\Phi$ is fiberwise $\mathrm T_pf^{-1}(y)\cong f^{-1}(y)$ given by identifying the vector space $f^{-1}(y)$ with its tangent space at $p\in f^{-1}(y)$. I still don't feel I understand the geometry, so I'll try to describe what I do. The bundle I'm visualizing is the "infinite Möbius strip" over the circle. The differential $\mathrm ds$ is by functoriality a section of $\mathrm df$, which means fiberwise $\mathrm d_ys$ is a section of $\mathrm d_pf$. Now, the fiber $(\mathrm d_pf)^{-1}(v)$ consists of tangents upstairs. The fiber of a nonzero vector in $\mathrm T_yY$ consists of tangents with a horizontal component, since they're not in the kernel. At any rate, $\mathrm ds$ smoothly chooses a subbundle of $\mathrm TX\to X$. For the infinite Möbius strip this amounts to drawing a single arrow on each of the fibers in a smoothly varying way. $\pi_2\circ \Phi \circ K$ then projects this subbundle onto the vertical bundle. For the infinite Möbius strip we project each arrow on a fiber to the fiber itself, identified with its tangent space. The picture is then a smooth array of vertical (in the direction of the fiber) arrows, one on each fiber.
Finally, given a vector field $\mathcal Y$ downstairs, $\nabla_\mathcal{Y}s$ is simply precomposition of $\nabla s$ with $\mathcal Y$. Why does the projection to the vertical bundle capture the same information as differentiating the parallel transport? It looks like we're ignoring variation between fibers by projecting onto the vertical bundle - exactly the opposite of parallel transport. I don't understand the intuition here... Here's the best I have: the fact we're parallel along fibers amounts to saying we're moving vectors "without changing them w.r.t the horizontal direction". This is somehow analogous to the projection on the vertical bundle, which also ignores horizontal changes. The vertical changes are the ones intrinsic to the manifold upstairs because the fibers of $f$ are "straight", as opposed to general tangents which may "point outside of the surface". So is the covariant derivative of a section of a vector bundle $f:X\to Y$ sort of like "partial differentiation along the directions of the fibers of $f$?" REPLY [6 votes]: I'll change your notation a little to make things clearer (in my opinion, at least). Let $\pi \colon E \rightarrow M$ be a smooth vector bundle. With it comes the associated short exact sequence $$ 0 \rightarrow VE \hookrightarrow TE \xrightarrow{d\pi} \pi^{*}(TM) \rightarrow 0 $$ of vector bundles over $E$. For the purpose of defining the covariant derivative, it is better to consider a left splitting $K \colon TE \rightarrow VE$ (over $E$). Note that $VE \cong \pi^{*}(E)$ using the natural isomorphism which allows us to identify $V_{(p,v)}E = T_{(p,v)}(E_p)$ (vectors which are tangent to the fiber $E_p$) with the vector space $E_p$. Denote this isomorphism by $\Phi$ and let $\pi_{\sharp} \colon \pi^{*}(E) \rightarrow E$ be the natural map of vector bundles that covers $\pi$. Then we can define the covariant derivative of a section $s \in \Gamma(E)$ by $$ \nabla s = \pi_{\sharp} \circ \Phi \circ K \circ ds.$$ More explicitly, $s$ is a map from $M$ to $E$ and $ds \colon TM \rightarrow TE$ is the regular differential. To get the covariant derivative, we take the regular derivative $ds$, project it to the vertical space using $K$ and then identify the vertical space with $E$ to get back a section of $E$ over $M$. If the splitting $K$ satisfies the equivariance conditions appropriate for a connection on a vector bundle, this will reconstruct the usual covariant derivative. Let us try and see concretely how the process above works when $E = M \times \mathbb{R}^k$ is the trivial bundle. Fix some coordinate neighborhood $U$ with coordinates $x^1,\dots,x^n$ and let $\xi^1,\dots,\xi^k$ denote the coordinates on $\mathbb{R}^k$. Then $\pi^{-1}(U)$ is a coordinate neighborhood with coordinates I'll denote by $\tilde{x}^1,\dots,\tilde{x}^n$ and $\tilde{\xi}^1,\dots,\tilde{\xi}^k$. We have $\tilde{x}^i = x^i \circ \pi_1$ and $\tilde{\xi}^i = \xi^i \circ \pi_2$ and I use the $\tilde \,$ to differentiate between the coordinates on the base / fiber and on the total space. With this notation, the vertical space $V_{(p,v)}E$ at $(p,v)$ is precisely $$\operatorname{span} \left \{ \frac{\partial}{\partial \tilde{\xi}^1}|_{(p,v)}, \dots, \frac{\partial}{\partial \tilde{\xi}^k}|_{(p,v)} \right \}. $$ A projection $K$ from $TE$ onto $VE$ will look like: $$ K|_{(p,v)} = a_i^j(p,v) d\tilde{x}^i \otimes \frac{\partial}{\partial \tilde{\xi}^j} + d\tilde{\xi}^i \otimes \frac{\partial}{\partial \tilde{\xi}^i}$$ (the image must be the vertical bundle and it must satisfy $K^2 = K$).
Now, let $s \colon M \rightarrow M \times \mathbb{R}^k$ be a section and write $s(p) = (p, f(p))$ for some $f = (f^1,\dots,f^k) \colon M \rightarrow \mathbb{R}^k$. Set $$e_i(p) := (p, \underbrace{(0,\dots,0,1,0,\dots,0)}_{i\text{th place}})$$ to be the constant sections corresponding to the standard basis vectors so $s = f^i e_i$. Let us see what the covariant derivative of $s$ in the direction $\frac{\partial}{\partial x^l} = \partial_l$ (in the base) at the point $p$ looks like: $$ ds|_{p} = dx^i \otimes \frac{\partial}{\partial \tilde{x}^i} + \frac{\partial f^i}{\partial x^j} dx^j \otimes \frac{\partial}{\partial \tilde{\xi}^i}, \\ K \circ ds = \left( a_i^j(p,f(p)) + \frac{\partial f^j}{\partial x^i}(p) \right) dx^i \otimes \frac{\partial}{\partial \tilde{\xi}^j}, \\ \nabla_l(s)(p) = \left( a_l^j(p, f(p)) + \frac{\partial f^j}{\partial x^l}(p) \right) e_j(p). $$ Note that $\nabla_l(s)(p)$ has two components. The second is the regular directional derivative of the components of $s$ with respect to the frame $(e_1,\dots,e_k)$ in the direction $\partial_l$. The first comes from the the projection $K$. If $a_i^j \equiv 0$, this is gone. Also, the components $a_i^j$ depend both on the point $p$ and the value $f(p)$ (this reflects the fact that $K$ gives us a projection of $TE$ onto $\pi^{*}(E)$). For a general vector bundle, this is the local picture. Regarding your questions, we're not ignoring the variation between fibers. This is encoded in the particular way $K$ projects onto $VE$ (through the coefficients $a_i^j$ which give rise under certain assumptions to the Christoffel symbols $\Gamma_{ik}^j$ of the connection). While the image of $K$ is always $VE$, the kernel of $K$ is different at each point and provides us with the horizontal space. The horizontal space tells us how we should identify fibers infinitesimally along curves over the base space. Covariant differentiation allows us to differentiate a section along a vector field on $M$ and get back a section. It is done by performing regular differentiation and obtaining a tangent vector in $E$ which is necessarily not tangent to the fiber. The connection mechanism, via $K$, provides us with a way to project this tangent vector in a consistent way to get a vector which is tangent to the fiber and then identify it with an element of the fiber.<|endoftext|> TITLE: When does intersection commute with tensor product QUESTION [9 upvotes]: Given two submodules $U,V \subseteq M$ over a (commutative) ring $R$, and a flat $R$-module $A$, I can interpret $U \otimes_R A$ and $V \otimes_R A$ as submodules of $M \otimes_R A$. Is it necessarily true that $$(U \cap V) \otimes_R A \cong (U \otimes_R A) \cap (V \otimes_R A) ?$$ I think it should be true in many cases, with intuition coming from $\mathbb{Z}$-modules and $A = \mathbb{Q}$, but I'm unsure about what happens in general. REPLY [6 votes]: Finite intersections are pullbacks, so in particular are finite limits, and a module is flat iff tensoring with it commutes with all finite limits (exercise).<|endoftext|> TITLE: Is there an explanation for the multiple large entries of those continued fraction expansions? QUESTION [6 upvotes]: I searched for numbers $N$ such that the continued fraction of $N^{1/3}$ has very large entries. I only searched for a single large entry, but I was surprised that two continued fractions contained not only one large entry, but multiple large entries.
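Such a search is easy to reproduce; a minimal Python sketch using mpmath (the working precision of 1000 digits is my own generous choice, since large entries consume many digits):

from mpmath import mp, mpf, floor

def cf_terms(x, nterms):
    # First nterms entries of the continued fraction of x.
    out = []
    for _ in range(nterms):
        a = int(floor(x))
        out.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return out

mp.dps = 1000  # working precision in decimal digits
terms = cf_terms(mpf(102175) ** (mpf(1) / 3), 100)
print(terms, max(terms))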
Here are the two amazing expansions: $$102175 [46, 1, 2, 1, 8741, 2, 186, 2, 13112, 1, 6, 1, 8, 2, 9, 2, 623, 1, 33, 1, 9, 1, 2, 2, 17484, 14, 2, 2, 1, 4, 19021, 2, 1, 1, 1, 1, 1, 1, 3437888, 2, 2, 6, 21510, 2, 1, 2, 55063048, 1, 1, 1, 1, 1, 2, 8, 44, 2, 1, 4, 1, 4, 61, 2, 16661, 2, 1, 3, 1, 1, 23, 1, 4, 2, 2, 8, 3, 3, 1, 1, 2, 6, 3, 1, 1, 3, 5, 17, 21, 17, 3, 168, 3, 1, 1, 17, 1, 3, 2, 3, 4, 3] 55063048$$ $$267090 [64, 2, 2, 31104, 1, 4, 64, 4, 1, 46657, 1288, 55545, 1127, 62210, 2, 2, 40, 1, 1, 2, 1, 1, 4, 559, 8, 1, 1, 1, 1, 2, 3, 2, 1, 1, 1, 1, 1, 101091, 3, 1, 1, 8, 6, 10, 3, 1, 2, 2, 1, 1, 2, 17, 2, 1, 1, 2, 1, 4897902700, 1, 54, 1, 288, 1, 1, 1, 20, 1, 1, 5, 31360929, 1, 15, 9, 1, 1, 30, 1, 5, 6, 2, 7, 16, 2, 3, 1, 2, 3, 9935, 1, 3, 2, 1, 5, 4, 4, 2, 1, 28, 1, 27] 4897902700$$ Explanation: First, the number $N$ is displayed, then the first $100$ terms of the continued fraction of $N^{1/3}$, and finally the maximum of the entries. The cube roots of the numbers $102175$ and $267090$ seem to have a very special continued fraction. In the second continued fraction, we even have $5$ consecutive large entries, and in both continued fractions we have an entry larger than $10^6$ besides the maximum entry. This is not at all what I expected, in particular because almost every real number has a continued fraction expansion that follows a special distribution mentioned here: Typicality of boundedness of entries of continued fraction representations The continued fraction expansions above are far away from this distribution (even if we do not consider the maximum entry). How can this phenomenon be explained? REPLY [2 votes]: This does not answer the question, but it may give some ideas as to where to look for an answer. In 1965, Brillhart found that the real root of $x^3-8x-10$ has several very large partial quotients in its continued fraction expansion, e.g., $a_{17}=22986$ $a_{33}=1501790$ $a_{59}=35657$ $a_{81}=49405$ $a_{103}=53460$ $a_{121}=16467250$ $a_{139}=48120$ $a_{161}=325927$ After this, it "calms down". There is an explanation for this behavior, involving some fairly advanced stuff (modular forms and the like). See, for example, Churchhouse and Muir, Continued fractions, algebraic numbers and modular invariants, J. Inst. Maths Applics 5 (1969) 318-328. Stark, An explanation of some exotic continued fractions found by Brillhart, in Atkin and Birch, eds., Computers in Number Theory, Academic Press 1971, 21-35. https://oeis.org/A002937 is also worth a look, and possibly R.P. Brent, Alfred J. van der Poorten, Herman J.J. te Riele: "A comparative study of algorithms for computing continued fractions of algebraic numbers", Algorithmic Number Theory (Talence, 1996), pages 35-47, Lecture Notes in Computer Science, 1122, Springer, Berlin, 1996.<|endoftext|> TITLE: A limit related to asymptotic growth of tetration QUESTION [16 upvotes]: The tetration is denoted $^n a$, where $a$ is called the base and $n$ is called the height, and is defined for $n\in\mathbb N\cup\{-1,\,0\}$ by the recurrence $$ {^{-1} a} = 0, \quad {^{n+1} a} = a^{\left({^n a}\right)},\tag1$$ so that $${^0 a}=1, \quad {^1 a} = a, \quad {^2 a} = a^a, \quad {^3 a} = a^{a^a}, \, \dots \quad {^n a} = \underbrace{a^{a^{{.^{.^{.^a}}}}}}_{n\,\text{levels}}.\tag2$$ Let $a$ be a real number in the interval $e^{-e} < a < e^{1/e}$. It is known that the following limit exists $$L(a) = \lim_{n\to\infty} {^n a},\tag3$$ where $L(a)$ satisfies $a^{L(a)}=L(a)$. For example, $L\left(\!\sqrt2\right)=2$.
It is also known that $$\lim_{n\to\infty} \, \frac{L(a) - {^{n+1} a}}{L(a) - {^n a}} = \ln L(a).\tag4$$ Finally, it is known that the following limit exists $$C(a) = \lim_{n\to\infty} \, \frac{L(a) - {^n a}}{\left(\ln L(a)\right)^n}.\tag5$$ Apparently, no closed form for the function $C(a)$ is known. But the numerical evidence suggests the following conjecture (basically, this is the coefficient of the linear term in the Taylor series expansion of $C(a)$ near $a=1$): $$C'(1) = \lim_{a\to1} \, \frac{C(a)}{a-1} \stackrel?= 1.\tag{$\diamond$}$$ How can we prove it? Can we find values of some higher-order derivatives? Are they all integers? Is there a general formula, recurrence or an efficient algorithm to compute them? Related questions: [1][2][3]. REPLY [6 votes]: I am aware of this answer as well as of this question in MO and its answer. There are, of course, some overlaps, but I proceed differently and answer the questions of the OP. To fix notation, we consider $a$ close to 1, $b=b(a)=\log(a)$ and $L=L(a)$ the solution close to 1 of $L=a^{L}(=\exp(b\,L))$. Keep in mind that $L$ is close to 1 and $b$ is close to $0$. The sequence ${^n\!a}$, $n\in\mathbb N$ is defined by $^0 a=1$ and ${^{n+1}\!a}=\exp(b\ {^n\! a})$. We want to study the function defined by $$C=C(a)=\lim_{n\to\infty}\frac{L-{^n\! a}}{(bL)^n}\mbox{ for }a\neq1.$$ We show Theorem: The function $C(a)$ has a power series around $a=1$. Its derivatives $C^{(n)}(1)$ at $a=1$ are integers for all $n$. These numbers can be computed using computer algebra. Algorithms to compute the numbers $C^{(n)}(1)$ are also contained in other answers, but I have not seen a proof that they are all integers. Part 1. Since $b$ is close to 0, it is more convenient to consider it as the basic variable and have $a=\exp(b)$ etc. For convenience, we do not indicate the dependence upon $b$ when unnecessary. As also used in other answers, the idea is to linearise the iteration ${^{n+1}\!a}=\exp(b\ {^n\! a})$ around its fixed point $L$. So we introduce the sequence $z_n={^n\!a}-L$ and the function $f(z)=f(b,z)=L\left(\exp(bz)-1\right)$. Then the iteration can be written $z_0=1-L$, $z_{n+1}=f(z_n)$ and we have $f(0)=0$, $f'(0)=bL$. The OP asks about properties of $C=-\lim_{n\to\infty}(bL)^{-n}z_n$. Lemma 1: There is a uniquely determined series $T(b,z)=z+\sum_{k=2}^{\infty}d_k(b)z^k$ having a convergence radius larger than 1 for all sufficiently small $b$ such that $$T(b,f(b,z))=bL T(b,z)\mbox{ for all small }b\mbox{ and }|z|<1.$$ As a consequence, we have $C=-T(1-L)$: Indeed, on the one hand $(bL)^{-n}T(z_n)=(bL)^{-n}z_n(1+o(1))$ tends to $-C$, on the other hand, by induction using the functional equation, we have $(bL)^{-n}T(z_n)=T(z_0)=T(1-L)$. This shows that $C$ is a holomorphic function of small $b$ and hence in a small disk around $a=1$. For the proof, we rewrite the functional equation for $T$. We first put $T(z)=z+V(b,z)$. The functional equation for $V$ is then \begin{equation}\label{eqv}\tag{1}V(b,z)=\frac1b(\exp(bz)-1-bz)+\frac1{bL}V(b,L(\exp(bz)-1)). \end{equation} Now we put $V(b,z)=z^2U(b,z)$ and obtain the functional equation \begin{equation}\label{equ}\tag{2}\begin{array}{rcl} U(b,z)=(\varphi(U))(b,z)&:=& b\,\frac{\exp(bz)-1-bz}{b^2z^2}\\&&+bL\left(\frac{\exp(bz)-1}{bz}\right)^2U(b,L(\exp(bz)-1)). \end{array}\end{equation} Now consider some $R>1$, $\delta>0$ and the Banach space $\mathcal B$ of all bounded holomorphic functions $U$ defined for $|z|<R$.<|endoftext|> TITLE: What is the meaning of $\sum_{i,j = 1}^n$?
QUESTION [5 upvotes]: In the multi-dimensional Ito formula, this notation pops up:$$\sum_{i,j = 1}^n$$ What does that even mean? Do we take $i = 1$, then $j = 1,\ldots,n$, then $i =2$ and $j = 1,\ldots,n$, and so on? Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on? If it is the latter, then why don't they just write $\sum_{k=1}^n$, and then everywhere inside the summation, exchange both $j$ and $i$ for $k$? If it is the former, then surely the double summation notation makes the point clearer and is also better suited for calculation (i.e., if you want to take a constant in the inner sum out into the outer). REPLY [2 votes]: It's the summation of a square array. Here's a concrete example: $$\sum_{i, j = 1}^5 \gcd(i, j).$$ You could start with $\gcd(1, 1)$, then $\gcd(1, 2)$, $\gcd(1, 3)$, and so on and so forth until reaching $\gcd(5, 5)$. But like Moop said, addition is commutative, so you could fix $j$ while you iterate $i$, or you could proceed in diagonals, or you could even go through it randomly (of course keeping track of what you've already calculated). I personally would prefer $$\sum_{i = 1}^5 \sum_{j = 1}^5 \gcd(i, j)$$ because you can more easily change that to a nonsquare rectangular array if you need to. But it could be a parsing speed bump if someone gets temporarily confused and thinks there is multiplication involved. Or do we take both $i$ and $j$ equal to 1, and then both $i$ and $j$ equal to 2, and so on? As you've already noted, that would be inefficient, since $i$ and $j$ could be collapsed to a single variable. It's the kind of mistake I would not expect in a book or a peer-reviewed journal article. But on a post on this website? It could happen; I presume we're all human here.<|endoftext|> TITLE: Convexity of $x\left(1+\frac1x\right)^x,\ x\ge 0$ QUESTION [16 upvotes]: This may turn out to be really simple but I do not see a quick way to the proof. How would one show $\displaystyle x\Big(1+\frac1x\Big)^x,\ x\ge 0$ is convex? I derived the second derivative. It has a negative term. I suppose I could combine certain terms to make the negativeness disappear. But I hope there is a really clever way to see it right away. REPLY [4 votes]: I just want to add to the elegant proof of @wangtwo, and remedy the only blemish, an ugly proof of the convexity of $\frac{\ln(1+x)}x$, in a piece of otherwise perfect work. Here is an elegant replacement. $$\frac{\ln(1+x)}x=\int_0^1 \frac1{1+xt}dt$$ is convex since the integrand is convex with respect to $x$. Here is a closely related problem and my own solution as mentioned by @wangtwo in his comment below.<|endoftext|> TITLE: What am I missing in this simple equation from Nesterov's paper? QUESTION [6 upvotes]: In this paper by Prof Nesterov, First-order methods of smooth convex optimization with inexact oracle, proof of Theorem 2, there is the following very simple equation which I think is wrong: \begin{align} \Vert x_{k+1}-x^*\Vert^2 = \Vert x_{k}-x^*\Vert^2 + 2\langle B(x_{k+1}-x_k),x_{k+1}-x^*\rangle -\Vert x_{k+1}-x_k\Vert^2 \end{align} (They defined the norm as $\Vert x\Vert^2 =\langle Bx,x\rangle$.) Obviously, it must be \begin{align} \Vert x_{k+1}-x^*\Vert^2 = \Vert x_k-x^*+x_{k+1}-x_{k}\Vert^2 = \Vert x_{k}-x^*\Vert^2 + 2\langle B(x_{k+1}-x_k),x_{k}-x^*\rangle \\ +\Vert x_{k+1}-x_k\Vert^2 \end{align} Am I missing something? I also checked the journal version in Mathematical Programming. REPLY [8 votes]: The author is correct.
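Indeed, the paper's identity can be checked numerically before unwinding the algebra; a minimal numpy sketch (random data; $B = MM^T + I$ is my own choice of a positive definite matrix):

import numpy as np

rng = np.random.default_rng(0)
m = 5
M = rng.standard_normal((m, m))
B = M @ M.T + np.eye(m)  # symmetric positive definite
x_k, x_k1, x_star = (rng.standard_normal(m) for _ in range(3))

sq = lambda z: z @ B @ z  # ||z||^2 = <Bz, z>
lhs = sq(x_k1 - x_star)
rhs = sq(x_k - x_star) + 2 * (B @ (x_k1 - x_k)) @ (x_k1 - x_star) - sq(x_k1 - x_k)
print(np.isclose(lhs, rhs))  # True: the paper's version holds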
The norm in question is induced by the real inner product $(x,y):=\langle Bx,y\rangle$ where $B$ is positive definite with respect to $\langle\cdot,\cdot\rangle$. Let $u = x_{k+1} - x^\ast$ and $v = x_{k+1}-x_k$. Then the equation is simply saying that $$ \|u\|^2 = \|u-v\|^2 + 2(v,u) - \|v\|^2, $$ which is just a rearrangement of terms in the cosine law $$ \|u-v\|^2 = \|u\|^2 - 2(v,u) + \|v\|^2. $$<|endoftext|> TITLE: Proving a matrix inequality QUESTION [32 upvotes]: Let $A, B \in \mathbb R^{m\times m}$ be symmetric positive semi-definite matrices. Is it true that $$\sup_{\|x\| = 1} \left| \|Ax\| - \|Bx\| \right| \geq c(m) \|A-B\|,$$ with $c(m) > 0$ and where $\|\cdot\|$ denotes the 2-norm? Here is how I approached the problem. Let us introduce the notation $\Delta = A -B$ for convenience. Without loss of generality, we can assume that $B$ is diagonal, that $\| \Delta\|$ = 1, and that $\Delta$ has an eigenvalue equal to $+1$. We have $$ \begin{align} \|Ax\| = \sqrt{x^T (B+\Delta)^2 x} &= \sqrt{x^T B^2 x + x^T (B \Delta + \Delta B) x + x^T \Delta^2 x} \\ &= \sqrt{\left(\sqrt {x^T B^2 x} + \sqrt{x^T \Delta^2 x}\right)^2 - 2 \left(\sqrt{x^T B^2 x} \sqrt{x^T \Delta^2 x} - x^T B \Delta x\right)}. \end{align} $$ By the Cauchy-Schwarz inequality, the expression in the second brackets is non-negative, so $$ \begin{align} \sup_{\|x\| = 1} \, \left| \|Ax\| - \|Bx\| \right| &\geq \sup_{\|x\| = 1}\frac{|2 \, x^T B \Delta x + x^T \Delta^2 x|}{2(\sqrt {x^T B^2 x} + \sqrt{x^T \Delta^2 x})} \end{align} $$ From here things get more complicated and any help would be appreciated. Of course, there might be other approaches to the problem. Thank you. Bonus question: Is the statement true for positive self-adjoint operators on a Hilbert space? REPLY [8 votes]: Here's a proof of the inequality. Start by setting $\epsilon=\lVert A-B\rVert > 0$. I am using the operator norm on $\mathbb{R}^{m\times m}$, so this is the maximum absolute eigenvalue of $A-B$. Exchanging $A$ and $B$ if necessary, there exists a unit vector $e_1$ with $(B-A)e_1=\epsilon e_1$. Diagonalising the bilinear form $(x,y)=x^TAy$ on the orthogonal complement of $e_1$ we extend to an orthonormal basis $(e_1,e_2,\ldots,e_m)$ with respect to which $A$ is $$ A = \left(\begin{array}{ccccc}a_1&-u_2&-u_3&-u_4&\cdots\\ -u_2&a_2&0&0&\cdots\\ -u_3&0&a_3&0&\cdots\\ -u_4&0&0&a_4&\\ \vdots&\vdots&\vdots&&\ddots\end{array}\right) $$ Positive semidefiniteness of $A$ gives $\sum_{k=2}^mu_k^2/a_k\le a_1$ (any terms with $a_k=0$ necessarily have $u_k=0$, and I am setting the ratio to zero). This inequality follows from $x^TAx\ge0$ where $x_1=1$ and $x_k=u_k/a_k$ for $k\ge2$. Now, choose a $\delta_0$ small enough, the precise value to be chosen later. As we have $m$ distinct intervals $(\delta^2,\delta)$ for $\delta=\delta_0^{2^k}$ ($k=0,1,\ldots,m-1$), at least one of these will be disjoint from $\{\lvert u_k/a_k\rvert\colon k=2,\ldots,m\}$. Let $S$ be the set of $k=2,\ldots,m$ with $a_k=0$ or $\lvert u_k\rvert/a_k\ge\delta$, and $S^\prime=\{2,\ldots,m\}\setminus S$, so that $\lvert u_k\rvert/a_k\le\delta^2$ for $k\in S^\prime$. Define $x\in\mathbb{R}^m$ by $x_1=1$ and $$ x_k=\begin{cases} 0,&k\in S,\\ u_k/a_k,&k\in S^\prime. \end{cases} $$ We can compute $$ (Ax)_1=a_1-\sum_{k\in S^\prime}u_k^2/a_k\ge\sum_{k\in S}u_k^2/a_k\ge\delta\sum_{k\in S}\lvert u_k\rvert\ge\delta u $$ where $u=\sqrt{\sum_{k\in S}u_k^2}$. And, $(Ax)_k=-u_k$ for $k\in S$ and $(Ax)_k=0$ for $k\in S^\prime$.
If we define $C\in\mathbb{R}^{m\times m}$ by $C_{11}=B_{11}=A_{11}+\epsilon$ and $C_{ij}=A_{ij}$ otherwise, then $$ \lVert Cx\rVert=\sqrt{((Ax)_1+\epsilon)^2+u^2}. $$ As $(Ax)_1\ge\delta u$, this implies $$ \lVert Cx\rVert\ge\frac{\epsilon\delta}{\sqrt{1+\delta^2}}+\lVert Ax\rVert. $$ Here, I have used the simple fact that the derivative $(d/da)\sqrt{a^2+u^2}\ge\delta/\sqrt{1+\delta^2}$ over $a\ge\delta u$. We also have $\lVert C-B\rVert\le\epsilon$ and, as $\lvert x_k\rvert\le\delta^2$ for $k=2,\ldots,m$, $$ \left\lvert\lVert Cx\rVert-\lVert Bx\rVert\right\rvert \le\lVert(C-B)x\rVert\le\epsilon\delta^2\sqrt{m-1}. $$ Hence, $$ \lVert Bx\rVert-\lVert Ax\rVert\ge\frac{\epsilon\delta}{\sqrt{1+\delta^2}}-\epsilon\delta^2\sqrt{m-1}. $$ So, as we have $\lVert x\rVert\le\sqrt{1+(m-1)\delta^4}$ and $\epsilon=\lVert A-B\rVert$, $$ \sup_{\lVert x\rVert=1}\left\lvert\lVert Ax\rVert-\lVert Bx\rVert\right\rvert\ge\left(\frac{1}{\sqrt{1+\delta^2}}-\delta\sqrt{m-1}\right)\frac{\delta}{\sqrt{1+(m-1)\delta^4}}\lVert A-B\rVert. $$ As $\delta$ is in the range $\delta_0^{2^{m-1}}\le\delta\le\delta_0$, $$ \sup_{\lVert x\rVert=1}\left\lvert\lVert Ax\rVert-\lVert Bx\rVert\right\rvert\ge\left(\frac{1}{\sqrt{1+\delta_0^2}}-\delta_0\sqrt{m-1}\right)\frac{\delta_0^{2^{m-1}}}{\sqrt{1+(m-1)\delta_0^4}}\lVert A-B\rVert. $$ Choosing $\delta_0$ small enough to make the multiplier on the right hand side positive, which can be done independently of $A,B$ (e.g., $\delta_0=1/(2\sqrt{m})$), gives the result.<|endoftext|> TITLE: If $\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 0$, show that $\lim_{n\to\infty}{a_n} =0$. QUESTION [5 upvotes]: Problem: If $\lim_{n\to\infty} \frac{a_{n+1}}{a_n} = 0$, show that $\lim_{n\to\infty}a_n = 0$. Attempted Proof: Let $\epsilon > 0$. From the hypothesis, $\exists \ N \in \mathbb{P}$ such that if $n \geq N$, then $$\left|\ \frac{a_{n+1}}{a_n} - 0 \ \right| < \epsilon.$$ This implies $$\left| {a_{n+1}} \right| < \epsilon \left| a_n \right|.$$ Thus, let $n\geq N'$ such that $$\left|\ \frac{a_{n+1}}{a_n} - 0 \ \right| < \epsilon\left|a_n\right|.$$ Then we have $\left|a_{n+1}\right| < \epsilon,$ which implies $\lim_{n\to\infty}a_n = 0$. My main concern with my proof is that $\epsilon$ depends on $a_n$. Is this an issue? REPLY [6 votes]: Since $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=0$, for $\epsilon=\frac12$ there is $N\in\mathbb{N}$ such that when $n\ge N$, $$ \bigg|\frac{a_{n+1}}{a_n}\bigg|<\epsilon. $$ Thus for $n>N$, one has $$ \bigg|\frac{a_{N+1}}{a_N}\bigg|<\epsilon, \bigg|\frac{a_{N+2}}{a_{N+1}}\bigg|<\epsilon,\cdots,\bigg|\frac{a_{n}}{a_{n-1}}\bigg|<\epsilon$$ and hence $$ \bigg|\frac{a_{n}}{a_N}\bigg|<\epsilon^{n-N}$$ or $$ |a_n|<\epsilon^{n-N}|a_N|. $$ So $$ \lim_{n\to\infty}a_n=0.$$ REPLY [3 votes]: That would be an issue, yes. The quickest way to prove this is to recognize that $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=0$ tells you that $\sum_0^\infty a_n$ converges by the ratio test. Thus $a_n$ must go to $0$.<|endoftext|> TITLE: The formula of the induced homomorphism of chain maps QUESTION [7 upvotes]: It is known that a chain map $f_\bullet$ between chain complexes $(A_\bullet, d_{A,\bullet})$ and $(B_\bullet, d_{B,\bullet})$ induces a homomorphism $$(f_\bullet)_*: H_\bullet(A_\bullet)\to H_\bullet(B_\bullet).$$ (https://en.wikipedia.org/wiki/Chain_complex#Chain_maps) The sources (Hatcher / wikipedia) I read do not explicitly mention how the homomorphism is induced, so I would like to confirm if my idea is correct?
For $\alpha+\text{Im}\,\partial_{A,n+1}\in H_n(A_\bullet)$, $(f_n)_*(\alpha+\text{Im}\,\partial_{A,n+1})=f_n(\alpha)+\text{Im}\,\partial_{B,n+1}$? And this works since $f_n$ maps cycles to cycles and boundaries to boundaries? Thanks for any help. REPLY [8 votes]: You are exactly right. To elaborate, the fact that $f_n$ sends cycles to cycles means that we have a map (abusing notation slightly) $$f_n:\ker(d_n^A)\to\ker(d_n^B)$$ and the fact that $f_n$ sends boundaries to boundaries means that $f_n(\operatorname{Im}(d_{n+1}^A))\subseteq \operatorname{Im}(d_{n+1}^B)$. Hence the map $$\pi_n^B\circ f_n:\ker(d_n^A)\to H_n(B_{\bullet})$$ (where $\pi_n^B:\ker(d_n^B)\to H_n(B_{\bullet})$ is the projection) sends every element of $\operatorname{Im}(d_{n+1}^A)\subseteq \ker(d_n^A)$ to the identity in $H_n(B_{\bullet})$. Therefore, by the universal property of quotient groups, $\pi_n^B\circ f_n$ factors uniquely through $H_n(A_{\bullet})$: $$\pi_n^B\circ f_n:\quad \ker(d_n^A)\xrightarrow{\ \pi_n^A \ }H_n(A_{\bullet})\xrightarrow{\ f_* \ }H_n(B_{\bullet}).$$ Since $\pi_n^B\circ f_n = f_*\circ\pi_n^A$, we see that $$f_*(\alpha+\operatorname{Im}(d_{n+1}^A)) = f_n(\alpha)+\operatorname{Im}(d_{n+1}^B),$$ just as you claimed.<|endoftext|> TITLE: Intuition Behind, or Canonical Examples of Finite Type Morphisms QUESTION [12 upvotes]: I'm new to the world of schemes, so I'm still trying to grasp some of the basics. I understand most of the simple topological properties of schemes, as well as some of the sheaf-theoretic properties like reducedness, integrality, normality, etc. But I'm having a hard time digesting what finite and finite type morphisms really entail concretely. Also, Hartshorne doesn't seem to include many nice examples from what I can tell. Pretty much the only example I have down at this point is what it means for a scheme $X$ to be of finite type over a field $k$. In such a case, we'd have a morphism of schemes $f:X \to \rm{Spec}(k)$, and since the spectrum of a field is simply a point, the inverse image is all of $X$. Hence, just applying the definitions, we see that $X$ has a finite open cover by affine schemes $\rm{Spec}A_{i}$, where each $A_{i}$ is a finitely generated $k$-algebra. If we add integrality of $X$, we observe that a variety over $k$ is simply an integral scheme of finite type over a field $k$. So this is nice, but beyond this I'm failing to grasp any intuition. Does anyone have any canonical examples that I should work through? Or maybe any pointers on how to think about finite and finite type. Eisenbud and Harris point out that pretty much all geometrically interesting morphisms are usually in these two regimes. For example, I guess spectra of local rings are "non-geometric" as they aren't of finite type over a field. This is a great example of something I don't understand well enough yet. REPLY [2 votes]: I suggest reflecting upon the following themes. Consider, say, an affine scheme of finite type $X = \text{Spec}\,A$ over an algebraically closed field $k$. In the sense of classical algebraic geometry, its points are in bijection with the maximal ideals. But in the schematic environment, its points are all prime ideals, for example $(0)$, if $A$ has no zero divisors. This $(0)$ is called the generic point of $X$. Describe all points of the closure of a set consisting of one point. Show that this closure is an affine scheme itself. What is its function ring?
In particular, the localization $\text{Spec}\,A_\mathfrak{p}$ generally has more points than just one closed and one generic point! There are generic points of other localizations: which ones? This was about zooming. But you can also start with, say, one closed point, and then successively extend its "neighborhood", but keeping it infinitesimally small: consider spectra of $A/(\mathfrak{p}^n)$, $n = 1, 2, 3, \ldots$.<|endoftext|> TITLE: Is there a notion of "codependent" types for coinductive types? QUESTION [6 upvotes]: Inductively defined types are common within the type theory literature, and there are many ways of characterizing them. In the Homotopy Type Theory book, inductive types are characterized using certain "induction principles" describing how to construct (dependent) maps out of or into such types. Thus, in some sense, we can construct inductive types from dependent types. My question is: Is there some characterization of coinductive types that is dual to this "induction principle" characterization of inductive types? Moreover, if this is the case, does this notion still rely on the standard notion of dependent types (with the role of dependent sum and dependent product swapped, I suppose), or some new notion of "codependent types"? REPLY [5 votes]: There is a dual notion of coinductive types, yes, and we don't need to look past the framework of dependent types. One way to think about inductive types is as initial algebras. In this perspective, the defining data for an inductive type is a functor $F : \operatorname{Type} \to \operatorname{Type}$ of a particular form, and an $F$-algebra is a type $A$ together with a map $F A \to A$. All the data for the inductive type you wish to define (the type itself, its recursion rule, and its computation rules) is then equivalent to the type of initial $F$-algebras. Dually, the coinductive type should be the type of terminal $F$-coalgebras. You can see Section 5.4 in the HoTT book for details (at least for the inductive case), but it will be helpful to see an example. Consider the inductive definition of the natural numbers: Inductive Nat : Type := | zero : Nat | succ : Nat -> Nat Directly from this definition, you can read off a particular functor: \begin{align*} F := \lambda A : \operatorname{Type} \mathop{.} 1 + A \end{align*} Think of the arguments to $+$ as the data you need to provide to the "zero" constructor and the "succ" constructor, respectively, replacing Nat by the type parameter $A$ when it occurs. From here, we can define what it means for a type to have an $F$-algebra and $F$-coalgebra structure: \begin{align*} \operatorname{FAlgebra} &:= \lambda A : \operatorname{Type} \mathop{.} F A \to A \\ \operatorname{FCoalgebra} &:= \lambda A : \operatorname{Type} \mathop{.} A \to F A \end{align*} An $F$-algebra structure on a type $A$ is familiar: it consists just of the data you need to inductively define a function $\operatorname{Nat} \to A$. (To see why, remember that a function $1 + A \to A$ consists equivalently of a function $1 \to A$ together with a function $A \to A$. These are the base case and inductive case for the inductive definition, respectively.) Following that intuition, the natural numbers are then just an $F$-algebra $\operatorname{Nat}$ with the property that there is a unique homomorphism (that is, a map preserving the $F$-algebra structure in a suitable sense) from $\operatorname{Nat}$ to any other $F$-algebra. That is, $\operatorname{Nat}$ is the initial $F$-algebra.
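To see the initiality concretely, here is a minimal Python sketch (the encoding and the names are mine, not from the HoTT book): an $F$-algebra for $F(A) = 1 + A$ is just a pair of a base case $z$ and a step function $s$, and the unique algebra homomorphism out of $\operatorname{Nat}$ is the familiar fold.

def fold_nat(z, s, n):
    # The unique F-algebra homomorphism Nat -> A determined by the algebra
    # (z, s): it sends zero to z and succ(k) to s(fold_nat(z, s, k)).
    acc = z
    for _ in range(n):
        acc = s(acc)
    return acc

# Example: the algebra (0, lambda a: a + 2) interprets the number n as 2*n.
print(fold_nat(0, lambda a: a + 2, 5))  # prints 10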
The coinductive case is dual. An $F$-coalgebra structure on $A$ is the data necessary to define a function from $A$ to a type built from zero and succ: for any element of $A$, think of the corresponding element you build in $1 + A$ as deciding whether that element maps to zero or a successor and, if it maps to a successor, what it is the successor of. The target type of this construction is the corresponding coinductive type: the terminal $F$-coalgebra is an $F$-coalgebra called $\operatorname{CoNat}$ with the property that there is a unique $F$-coalgebra homomorphism from any $F$-coalgebra to $\operatorname{CoNat}$. Bonus: you didn't ask about the possibility of "higher" coinductive types, but I should say something about them anyway. The HoTT book doesn't talk about them as far as I know, and they seem a bit delicate. To be concrete, consider the following higher inductive definition: Inductive Foo : Type := | zero : Foo | succ : Foo -> Foo | succ_path : forall x:Foo, succ x = x This definition should let us build terms in Foo out of zero and successor, as for the natural numbers, but should also let us uniformly prove that every Foo is equal to its successor. In order to get a function from Foo to some type $A$, you ought to provide a base case $z : A$, an inductive case $s : A \to A$, and a proof that $s a = a$ for all $a$. With that in mind, we at least know what it should mean to be an algebra: \begin{align*} \operatorname{FooAlgebra} A = \sum_{z : A} \sum_{s : A \to A} \prod_{a:A} s a = a \end{align*} If we want to see what a coalgebra should be, one approach would be to find a functor $F : \operatorname{Type} \to \operatorname{Type}$ such that the type $F A \to A$ is equivalent to $\operatorname{FooAlgebra} A$, and look at coalgebras of that functor. I'm not sure how plausible it is to find such a functor, however. Even without it, though, it may (or may not) be possible to give an independent account of what the coalgebras should look like. I don't know what work has been done in this area.<|endoftext|> TITLE: Integration with respect to counting measure when $X=\mathbb{R}$ QUESTION [5 upvotes]: Consider the measure space $(X,\mathcal{M},\mu)$ where $X=\mathbb{R}$, $\mathcal{M}=\mathcal{P}(X)$ and $\mu$ is the counting measure. I want to show that for every measurable function $f : \mathbb{R} \mapsto [0,+\infty]$, given $x \in \mathbb{R}$, $\int_{\{ x\}} f \, d \mu = f(x) $. Generally speaking $\int_{X} f \, d \mu = \sum_{x \in X} f(x) $. I can do this in the case $X=\mathbb{N}$ with a monotone convergence argument, but when $X$ is uncountable I find some trouble. How can I fix things when $X$ is uncountable? REPLY [3 votes]: First note that by definition, $$\int_{\{x\}} f\;d\mu=\int f\cdot1_{\{x\}}d\mu.$$ However, note that $f\cdot 1_{\{x\}}=f(x)1_{\{x\}}$. Therefore we use linearity of the integral to compute $$\int_{\{x\}}f\;d\mu=\int f(x)1_{\{x\}}d\mu=f(x)\int 1_{\{x\}}d\mu=f(x)\,\mu(\{x\})=f(x)$$ where the last equality is by definition of the counting measure.
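For a concrete feel, here is a tiny Python sketch (the helper name is my own, not from any library): with respect to counting measure, integrating over a finite set is just summing values, and over the singleton $\{x\}$ it returns $f(x)$.

def integrate_counting(f, E):
    # Integral of f over the finite set E with respect to counting measure:
    # every point has measure 1, so the integral reduces to a sum of values.
    return sum(f(x) for x in E)

f = lambda x: x ** 2
print(integrate_counting(f, {3}))        # f(3) = 9
print(integrate_counting(f, {1, 2, 3}))  # 1 + 4 + 9 = 14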
I'd recommend working on the more general case on your own, using what we just proved.<|endoftext|> TITLE: Burnside's Lemma applied to grids with interchanging rows and columns QUESTION [7 upvotes]: I've recently learned about Burnside's Lemma (https://en.wikipedia.org/wiki/Burnside%27s_lemma) and its applications to rotating necklaces, coloring cubes and such, but I fear my understanding of it isn't mature enough and am unable to apply it to the following situation: Suppose you have a 2x3 matrix, and a set of 3 distinct colors $R, Y, B$. How many non-equivalent ways are there to color the matrix? Note that two matrices $m_1$ and $m_2$ are considered equivalent if you can turn the former into the latter by swapping any rows and/or columns as many times as you want. So I got started by setting up Burnside's equation as follows: \begin{align*}\frac{1}{|G|}\sum_{g \in G} |X^g| &= \frac{1}{H! W!} \sum_{\sigma \in S_H}\sum_{\tau \in S_W} |X^{(\sigma, \tau)}|. \end{align*} where $|G|$ is the total no. of elements that act to permute the set $X$ of matrices with height and width $H\times W$. Since there are $H!$ ways to permute the rows and $W!$ ways to permute the columns, the total no. of ways to permute the matrix is $|G| = H!W!$. So we can iterate through every way to permute the rows and every way to permute the columns (hence the double summation above), and for each permutation, find out the value of $|X^{(\sigma, \tau)}|$ -- which is the number of matrices that are fixed (a.k.a. not changed) when applying permutation $\sigma$ to the rows and $\tau$ to the columns. For example, say we have a 2x3 grid with the following indices for each cell:

(1,1) (1,2) (1,3)
(2,1) (2,2) (2,3)

There would be 2! = 2 ways to permute the rows and 3! = 6 ways to permute the columns, so Burnside's equation becomes: \begin{align*}\frac{1}{2! 3!} \sum_{\sigma \in S_2}\sum_{\tau \in S_3} |X^{(\sigma, \tau)}| \end{align*} It is here that I hit a stumbling block. Given we have 3 distinct colors to work with, it isn't clear to me how to count up $|X^{(\sigma, \tau)}|$ for each permutation. If someone could show me a step-by-step way to compute the answer for this specific example, I feel I could probably learn to apply it to a more general situation, with arbitrary values of $W$, $H$, and number of colors. Thanks! REPLY [9 votes]: Let me just observe that while rigorous notation is a must for any serious mathematician it can sometimes block the path of the beginning reader. What is happening here is very simple. We need the cycle index of the permutations from $S_2\times S_3$ acting on the slots of the matrix. While the present case can be computed by inspection I will explain the general method. We require the cycle index $Z(M_{2,3})$ of the pairs $(\sigma,\tau)$ of row and column permutations acting simultaneously on the slots of the matrix ($6$ of them, and with $12$ permutations total). Start from the two cycle indices for the row and column permutations $$Z(S_2) = \frac{1}{2} a_1^2 + \frac{1}{2} a_2$$ and $$Z(S_3) =\frac{1}{6} b_1^3 + \frac{1}{2} b_1 b_2 + \frac{1}{3} b_3.$$ The method here is as follows. We draw a diagram of the cycle types of $\sigma$ and $\tau,$ perhaps one beneath the other. Now for the cycle index of the cartesian product of $S_2$ and $S_3$ we must factor the combined action of the two permutations on the slots into cycles. We represent row-column pairs i.e.
slots by marking say the row and the column and connecting them with an edge, not to be confused with the directed edges of the two cycles. This edge travels in parallel along the two cycles it is on and returns to its initial position after $\mathrm{lcm}(l_1, l_2)$ steps, where $l_{1,2}$ are the lengths of the cycles. As the pair of cycles contributes $l_1\times l_2$ slot identifiers we get for the contribution to the combined cycle index the term $$a_{\mathrm{lcm}(l_1, l_2)}^{l_1 l_2/\mathrm{lcm}(l_1, l_2)}.$$ We now do the computation. There are thus six possible combinations of cycle types that combine to form $Z(M_{2,3})$. We process these in turn, leaving the most difficult part for last. First, combining $a_1^2$ and $b_1^3.$ This fixes all six slots for a contribution of $$\frac{1}{12} a_1^6.$$ Second, combining $a_1^2$ and $b_3.$ This partitions the pairs into three-cycles for a contribution of $$\frac{1}{6} a_3^2.$$ Third, combining $a_2$ and $b_1^3.$ Here we have everything on two-cycles to get $$\frac{1}{12} a_2^3.$$ Fourth, combining $a_2$ and $b_3$. This will produce a six-cycle to get $$\frac{1}{6} a_6.$$ Now for the tricky part. Fifth, combining $a_1^2$ and $b_1 b_2.$ There are two pairs that are fixed by these, and two pairs on two-cycles and we obtain $$\frac{1}{4} a_1^2 a_2^2.$$ Finally, sixth, combining $a_2$ and $b_1 b_2.$ We have everything on two-cycles and obtain $$\frac{1}{4} a_2^3.$$ Adding everything we now have the cycle index $$Z(M_{2,3}) = \frac{1}{12} a_1^6 + \frac{1}{3} a_2^3 + \frac{1}{6} a_3^2 + \frac{1}{4} a_1^2 a_2^2 + \frac{1}{6} a_6.$$ In order to apply Burnside and ask about colorings with at most $N$ colors we have that the assignment of the colors must be constant on each cycle and we obtain $$\frac{1}{12} N^6 + \frac{1}{3} N^3 + \frac{1}{6} N^2 + \frac{1}{4} N^4 + \frac{1}{6} N.$$ This yields the sequence $$M_n = 1, 13, 92, 430, 1505, 4291, 10528, 23052, \ldots$$ which is OEIS A027670. In particular we find there are $92$ colorings using at most three distinct colors. We could apply PET at this point since we have the cycle index. E.g. for three colors we obtain $$1/12\, \left( R+G+B \right) ^{6} \\ +1/4\, \left( R+G+B \right) ^{2} \left( {B}^{2}+{G}^{2}+{R }^{2} \right) ^{2}+1/6\, \left( {B}^{3}+{G}^{3}+{R}^{3} \right) ^{2} \\ +1/3\, \left( {B}^{2}+ {G}^{2}+{R}^{2} \right) ^{3}+1/6\,{B}^{6} +1/6\,{G}^{6}+1/6\,{R}^{6}$$ which expands to $${B}^{6}+{B}^{5}G+{B}^{5}R+3\,{B}^{4}{G}^{2}+3\,{B}^{4}GR \\ +3\,{B}^{4}{R}^{2}+3\,{B}^{3}{G}^{3}+6\,{B}^{3}{G}^{2}R \\ +6\,{B}^{3}G{R}^{2}+3\,{B}^{3}{R}^{3}+3\,{B}^{2}{G}^{4} \\ +6\,{B}^{2}{G}^{3}R+11\,{B}^{2}{G}^{2}{R}^{2}+6\,{B}^{2}G{R}^{3} \\ +3\,{B}^{2}{R}^{4}+B{G}^{5}+3\,B{G}^{4}R+6\,B{G}^{3}{R}^{2} \\ +6\,B{G}^{2}{R}^{3}+3\,BG{R}^{4}+B{R}^{5}+{G}^{6}+{G}^{5}R \\ +3\,{G}^{4}{R}^{2}+3\,{G}^{3}{R}^{3}+3\,{G}^{2}{R}^{4} \\+G{R}^{5}+{R}^{6}.$$ The reader might want to verify some of these with pen and paper. We may also apply inclusion-exclusion to obtain the count of colorings of the matrix with exactly $N$ colors. The nodes of the poset correspond to sets $P\subseteq [N]$ of colors which include colorings that use some subset of these colors, with the top node representing at most $N$ colors, which is the only $P$ that includes colorings with exactly $N$ colors. Colorings with exactly $p \lt N$ colors where $p\ge 1$ are included at all nodes that are supersets of the $p$ colors.
We thus obtain for the total weight $$\sum_{q=p}^N {N-p\choose q-p} (-1)^{N-q} = (-1)^{N-p} \sum_{q=0}^{N-p} {N-p\choose q} (-1)^q = 0$$ since $N-p\ge 1.$ Colorings with less than $N$ colors have total weight of zero in the poset. We thus obtain $$\sum_{q=1}^N {N\choose q} (-1)^{N-q} M_q.$$ We get a finite sequence since with six slots it is not possible to have a coloring with more than six distinct colors: $$1, 11, 56, 136, 150, 60$$ The last term represents colorings with exactly six colors. This means all slots in the matrix are distinct. Therefore all orbits have the same size, the number of permutations in the group, which is twelve, and indeed $6!/12 = 60.$ The term for two colors indicates that the two monochrome colorings have been excluded. This MSE link has the Maple code for the general case. Addendum. Here is what we mean when we say in the introduction that the cycle index can be computed by inspection. This refers to the isomorphism between $M_{2,3} = S_2\times S_3$ and $D_6$, the dihedral group (reflections and rotations of regular polygons) acting on six slots in this case. The cycle indices $Z(D_p)$ are tabulated and have simple closed forms, consult e.g. Wikipedia. Label the vertices of a hexagon in clockwise order with the labels $(0,0), (1,1), (0,2), (1,0), (0,1)$ and $(1,2).$ Then it is not difficult to see that the rotations of the hexagon are in a bijection with the pairs of cycles from $C_2 \times C_3$ embedded in $M_{2,3}.$ E.g. the rotation that takes the top vertex to its clockwise neighbor corresponds to the two cycles $(0,1)$ and $(0,1,2)$. The reflections in an axis passing through opposite vertices preserve parity (permutation $(0)(1)$ from $S_2$) and fix one element from $0,1,2$ while permuting the other two in two-cycles. The reflections in an axis passing through opposite edges flip parity (permutation $(0,1)$ from $S_2$) and fix one of three elements from $0,1,2$ while exchanging the other two. In this way we have bijectively accounted for all permutations and the proof of the isomorphism is complete.<|endoftext|> TITLE: Find all integers $(x, y)$ such that $1 + 2^x + 2^{2x + 1} = y^2$ QUESTION [5 upvotes]: Find all integers $$(x, y)$$ such that $$1 + 2^x + 2^{2x + 1} = y^2$$ So I basically used $$ f(x) = 1 + 2^x + 2^{2x + 1} = y^2$$ and created a table from 0 to 20. I got two pairs of integers: $$(0, \pm2)$$ and $$(4, \pm23)$$ I want to know if there's another method or if there are any other pairs of integers, or a generalisation. REPLY [4 votes]: $1 + 2^x + 2^{2x + 1} = y^2 \\\implies 2^x(2^{x+1}+1) = (y+1)(y-1)$

$x<-1$ gives a non-integer LHS (no solutions)
$x=-1$ gives LHS $= 1$ with no solutions for $y$
$x=0$ gives LHS $= 3$ and $y=\pm 2$

For $x>0$, $y$ is odd so put $y=2k+1$ and $2^x(2^{x+1}+1) = (2k+2)(2k) = 4k(k+1)$ which is divisible by $8$ so $x\ge 3$ and $2^{x-2}(2^{x+1}+1) = k(k+1)$. Clearly we cannot have $k=2^{x-2}$ or $k+1=2^{x-2}$ so we need $(2^{x+1}+1)$ to split into (odd) factors $r,s$ such that $2^{x-2}r = s\pm 1$. Then $|r|<8$ otherwise $2^{x-2}|r| > (2^{x+1}+1)$. Also $2^{x-2}r^2 = sr\pm r$ so $|r|=3$ is the only viable choice and $2^{x-2}\cdot 9 = (2^{x+1}+1) \pm 3$ gives $2^{x-2} = 4$ i.e. $x=4$ (and $y=\pm 23$) as the only other solution.
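As a sanity check (my own addition, not part of the argument above), a quick brute-force search over small exponents finds exactly these solutions:

from math import isqrt

for x in range(60):
    v = 1 + 2**x + 2**(2*x + 1)
    y = isqrt(v)
    if y * y == v:          # v is a perfect square
        print(x, y)         # prints only x = 0 (y = 2) and x = 4 (y = 23)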
In summary: the only solutions $(x,y)$ are $(0,\pm 2)$ and $(4,\pm 23)$, those you found.<|endoftext|> TITLE: If $abc=8$ then $(a+1)(b+1)(c+1)\ge 27$ QUESTION [7 upvotes]: If $$abc=8, \qquad a,b,c\in \mathbb{R}_{> 0},$$ then $$(a+1)(b+1)(c+1)\ge 27.$$ My try: $$(a+1)(b+1)(c+1)=1+(a+b+c)+(ab+ac+bc)+abc$$ $$(a+1)(b+1)(c+1)=1+(a+b+c)+(ab+ac+bc)+8, $$ then? REPLY [2 votes]: By Hölder $$\prod\limits_{cyc}(a+1)\geq\left(\sqrt[3]{abc}+1\right)^3=27$$<|endoftext|> TITLE: What knot is this? QUESTION [56 upvotes]: My headphone cables formed this knot: however I don't know much about knot theory and cannot tell what it is. In my opinion it isn't a figure-eight knot and certainly not a trefoil. Since it has $6$ crossings that doesn't leave many other candidates! What is this knot? How could one figure it out for similarly simple knots? REPLY [80 votes]: Arthur's answer is completely correct, but for the record I thought I would give a general answer for solving problems of this type using the SnapPy software package. The following procedure can be used to recognize almost any prime knot with a small number of crossings, and takes about 10-15 minutes for a new user. Step 1. Download and install the SnapPy software from the SnapPy installation page. This is very quick and easy, and works in Mac OS X, Windows, or Linux. Step 2. Open the software and type: M = Manifold() to start the link editor. (Here "manifold" refers to the knot complement.) Step 3. Draw the shape of the knot. Don't worry about crossings to start with: just draw a closed polygonal curve that traces the shape of the knot. Here is the shape that I traced: If you make a mistake, choose "Clear" from the Tools menu to start over. Step 4. After you draw the shape of the knot, you can click on the crossings with your mouse to change which strand is on top. Here is my version of the OP's knot: Step 5. Go to the "Tools" menu and select "Send to SnapPy". My SnapPy shell now looks like this: Step 6. Type M.identify() The software will give you various descriptions of the manifold, one of which will identify the prime knot using Alexander-Briggs notation. In this case, the output is [5_1(0,0), K5a2(0,0)] and the first entry means that it's the $5_1$ knot.<|endoftext|> TITLE: Pullback of a differential form by a local diffeomorphism QUESTION [11 upvotes]: Suppose I have two smooth oriented manifolds, $M$ and $N$, and a local diffeomorphism $f : M \rightarrow N$. Let $\omega$ be a differential form of maximal degree on $N$, say $r$. How can I rewrite $$\int_N \omega$$ in terms of the integral of the pullback of $\omega$ by $f$, $f^*\omega$? So, I know that when $f$ is a diffeomorphism, then $$\int_N\omega = \pm \int_M f^*\omega$$ depending on whether $f$ preserves the orientation or not. But that's in part due to the fact that $f$ is bijective, but that condition is removed when assuming $f$ is a local diffeomorphism. So is there a nice way to write that integral in terms of the pullback? REPLY [10 votes]: Yeah, and you can make it work even if $f$ is not a local diffeomorphism, only a proper map. The relevant notion is that of the degree of a smooth proper map between oriented manifolds of the same dimension. Assume for simplicity that $M,N$ are closed (compact, without boundary), non-empty and connected and let $f \colon M \rightarrow N$ be an arbitrary smooth map. Since $M,N$ are oriented, we can choose a generator $[\omega_1] \in H^{\text{top}}(M)$ such that $\omega_1$ is consistent with the orientation on $M$ and $\int_M \omega_1 = 1$.
Choose $[\omega_2]$ similarly for $N$. The map $f$ induces a map $f^{*} \colon H^{\text{top}}(N) \rightarrow H^{\text{top}}(M)$ on cohomology and since the top cohomology groups are one-dimensional, we must have $f^{*}([\omega_2]) = c [\omega_1]$ for some $c \in \mathbb{R}$. This $c = \deg(f)$ is called the degree of $f$ and is a priori a real number. However, it can be shown that $c$ is in fact an integer which can be computed by counting the number of preimages of a regular value $p \in N$ with appropriate signs which take the orientations of $M,N$ into consideration. Knowing that, given $\omega \in \Omega^{\text{top}}(N)$, write $[\omega] = c[\omega_2]$ for some $c \in \mathbb{R}$. Then $$ [f^{*}(\omega)] = f^{*}([\omega]) = f^{*}(c[\omega_2]) = c \deg(f) [\omega_1] $$ so $$ \int_M f^{*}(\omega) = \deg(f) c \int_M \omega_1 = \deg(f) c = \deg(f) c\int_N \omega_2 = \deg(f) \int_N \omega. $$ In particular, if $f$ is a diffeomorphism, $\deg(f) = \pm 1$ so this generalizes your starting point. If $f$ is a local diffeomorphism then $f \colon M \rightarrow N$ is a covering map and $\deg(f)$ will be the number of points in an arbitrary fiber, counted with appropriate signs. For many more details and proofs, see the book "Differential Forms in Algebraic Topology".<|endoftext|> TITLE: About the roots of cubic polynomial QUESTION [8 upvotes]: Let $\alpha, \beta, \gamma$ be the complex roots of the polynomial $P_3(x)=ax^3+bx^2+cx+d$. Is there any known formula for calculating $\alpha^2 \beta+\beta^2 \gamma+ \gamma^2\alpha \; , \; \alpha \beta^2+\beta \gamma^2+\gamma\alpha^2$ (in terms of $a,b,c,d$)? If no, can someone obtain it? REPLY [22 votes]: Let $u=\alpha^2\beta+\beta^2\gamma+\gamma^2\alpha$ and $v=\alpha\beta^2+\beta\gamma^2+\gamma\alpha^2$. We have the following relations between the elementary symmetric polynomials $e_1$, $e_2$ and $e_3$ of $\alpha$, $\beta$ and $\gamma$ and the coefficients of $P_3$: $$ e_1 = \alpha+\beta+\gamma = -\frac{b}{a} \\ e_2 = \alpha\beta+\beta\gamma+\gamma\alpha = +\frac{c}{a} \\ e_3 = \alpha\beta\gamma = -\frac{d}{a} $$ We find \begin{align*} p &:= u+v = e_1e_2-3e_3 \\ q &:= uv = e_1^3e_3-6e_1e_2e_3+e_2^3+9e_3^2 \end{align*} From this, you can obtain $u$ and $v$ by solving $t^2-pt+q=0$.<|endoftext|> TITLE: Limit $\lim_{n\to\infty} n^{-3/2}(1+\sqrt{2}+\ldots+\sqrt{n})=\lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + ... + \sqrt{n}}{n\sqrt{n}}$ QUESTION [5 upvotes]: How do I find the following limit? $$ \lim_{n \to \infty} \frac{\sqrt{1} + \sqrt{2} + ... + \sqrt{n}}{n\sqrt{n}} $$ The answer (from Wolfram) is $\frac{2}{3}$, but I'm not sure how to proceed. Is this an application of the Squeeze theorem? I'm not quite sure. REPLY [4 votes]: All the other answers are by Riemann sums, but one can also use the Stolz theorem. The limit becomes $$\frac{\sqrt{n}}{n\sqrt{n}-(n-1)\sqrt{n-1}}=\frac{\sqrt{n}\left(n\sqrt{n}+(n-1)\sqrt{n-1}\right)}{n^3-(n-1)^3}=$$ $$\frac{n^2+(n-1)\sqrt{n^2-n}}{n^2+n(n-1)+(n-1)^2}\to\frac{2}{3}$$<|endoftext|> TITLE: Number theory/combinatorics problem? QUESTION [6 upvotes]: Given any set of $14$ (different) natural numbers, prove that for some $k$ ($1 ≤ k ≤ 7$) there exist two disjoint $k$-element subsets $\{a_1,...,a_k\}$ and $\{b_1,...,b_k\}$ such that the sums of the reciprocals of all elements in each set differ by less than $0.001$, i.e. $|A−B| < 0.001$, where $A =$ the sum of the reciprocals of the elements of the first subset and $B =$ the sum of the reciprocals of the elements of the second subset.
Note: this problem is from the $1998$ Czech and Slovak Math Olympiad REPLY [13 votes]: Consider the $\binom{14}{7} = 3432$ $7$-element subsets of the fourteen numbers, and look at the sums of the reciprocals of the numbers in each subset. Each sum is at most$$1 + {1\over2} + \ldots + {1\over7} = {{363}\over{140}} < 2.60,$$so each of the $3432$ sums lies in one of the $2600$ intervals$$\left(0, {1\over{1000}}\right], \text{ }\left({1\over{1000}}, {2\over{1000}}\right], \text{ }\ldots\text{ }, \text{ }\left({{2599}\over{1000}}, {{2600}\over{1000}}\right].$$Then by the pigeonhole principle, some two sums lie in the same interval. Taking the corresponding sets and discarding any possible common elements, we obtain two satisfactory subsets $A$ and $B$.<|endoftext|> TITLE: Roll an N sided die K times. Let S be the side that appeared most often. What is the expected number of times S appeared? QUESTION [7 upvotes]: For example, consider a 6 sided die rolled 10 times. Based on the following monte-carlo simulation, I get that the side that appears most will appear 3.44 times on average.

from random import randint

n = 6
k = 10
samples = 10000
results = []
for _ in range(samples):
    counts = {s: 0 for s in range(n)}
    for _ in range(k):
        s = randint(0, n - 1)
        counts[s] += 1
    results.append(max(counts.values()))
print(sum(results) / float(len(results)))

But I can't figure out how to get this in a closed form for any particular N and K. REPLY [2 votes]: Here is a closed form; we will need more sophisticated methods for the asymptotics. Following the notation introduced at this MSE link we suppose that the die has $m$ faces and is rolled $n$ times. Rolling the die with the most frequently occurring value being $q$ and instances of this size being marked yields the species $$\mathfrak{S}_{=m} (\mathfrak{P}_{=0}(\mathcal{Z}) + \mathfrak{P}_{=1}(\mathcal{Z}) + \cdots + \mathcal{V}\mathfrak{P}_{=q}(\mathcal{Z})).$$ This has generating function $$G(z,v) = \left(\sum_{r=0}^{q-1} \frac{z^r}{r!} + v\frac{z^q}{q!}\right)^m.$$ Subtracting the values where sets of size $q$ did not occur we obtain the generating function $$H_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m - \left(\sum_{r=0}^{q-1} \frac{z^r}{r!}\right)^m.$$ This also follows more or less by inspection. We then obtain for the desired quantity the closed form $$\bbox[5px,border:2px solid #00A000]{ \frac{n!}{m^n} [z^n] \sum_{q=1}^n q H_q(z).}$$ Introducing $$L_{q}(z) = \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^m$$ we thus have $$\frac{n!}{m^n} [z^n] \sum_{q=1}^n q (L_{q}(z) - L_{q-1}(z)).$$ This is $$\frac{n!}{m^n} [z^n] \left(n L_n(z) - \sum_{q=0}^{n-1} L_q(z)\right).$$ We also have $$[z^n] L_q(z) = \sum_{k=0}^{\min(q, n)} \frac{1}{k!} [z^{n-k}] \left(\sum_{r=0}^{q} \frac{z^r}{r!}\right)^{m-1}$$ Furthermore we obtain for $m=1$ $$[z^n] L_q(z) = [[n \le q]] \times \frac{1}{n!}.$$ With these we can implement a recursion, which in fact on being coded proved inferior to Maple's fast polynomial multiplication routines. It is included here because it memoizes coefficients of $L_q(z)$, thereby providing a dramatic speed-up of the plots at the cost of allocating more memory. All of this yields the following graph where we have scaled the plot by a factor of $n/m.$ This is it for a six-sided die: [ASCII point plot of the scaled expectation for $n=1,\ldots,120$ rolls of a six-sided die] And here is the plot for a twelve-sided die.
(I consider it worth observing that we have the exact value for the expectation in the case of $120$ rolls of this die, a case count that has $130$ digits, similar to what appeared in the companion post.) [ASCII point plot of the scaled expectation for $n=1,\ldots,120$ rolls of a twelve-sided die] This was the Maple code.

with(plots);
with(combinat);

ENUM :=
proc(n, m)
option remember;
local rolls, res, ind, counts, least, most;
    res := 0;
    for ind from m^n to 2*m^n-1 do
        rolls := convert(ind, base, m);
        counts := map(mel->op(2, mel),
            convert(rolls[1..n], `multiset`));
        res := res + max(counts);
    od;
    res/m^n;
end;

L := (m, rmax) -> add(z^r/r!, r=0..rmax)^m;

X :=
proc(n, m)
option remember;
local H;
    H := q -> expand(L(m,q)-L(m,q-1));
    n!/m^n*coeff(add(q*H(q), q=1..n), z, n);
end;

LCF :=
proc(n,m,q)
option remember;
    if n < 0 then return 0 fi;
    if m = 1 then
        if n <= q then return 1/n! fi;
        return 0;
    fi;
    add(1/k!*LCF(n-k,m-1,q), k=0..min(q,n));
end;

LVERIF := (m, q) -> add(LCF(n, m, q)*z^n, n=0..q*m);

XX :=
proc(n, m)
option remember;
local res;
    res := n*LCF(n,m,n) - add(LCF(n,m,q), q=0..n-1);
    res*n!/m^n;
end;

DICEPLOT :=
proc(nmx, m)
local pts;
    pts := [seq([n, XX(n,m)/(n/m)], n=1..nmx)];
    pointplot(pts);
end;<|endoftext|> TITLE: Complete the short exact sequence $0\to \mathbb Z\to A\to Z_n \to 0$ QUESTION [5 upvotes]: I want to find all possible $A$ in the following SES: $$0\to \mathbb Z\to A\to \mathbb Z_n \to 0$$ I know by the structure theorem of finitely generated abelian groups, $A\cong \mathbb Z\oplus \mathbb Z_d$. Then how can we find the relation between $d$ and $n$? What are the two maps in the middle? Since this is an exercise in Hatcher's book at the beginning section of homology theory, a solution without using Ext will be much better. REPLY [5 votes]: This is a nice exercise for a beginning section of homology theory, since doing this without Ext will make you really appreciate the machinery homological algebra has to offer. I do not think that I will go through all details, but let us start: First of all, we have the case $\mathbb Z_d = 0$ (one could refer to this as $d=1$), i.e. a short exact sequence $$0 \to \mathbb Z \to \mathbb Z \to \mathbb Z_n \to 0.$$ Of course the first map is given by $\cdot m$ for some $m \in \mathbb Z$ and by looking at the cokernel, we get that $m = \pm n$. The second map is given by $1 \mapsto e$, where $e$ is co-prime to $n$. The much more involved case is the case $d \geq 2$, i.e. where the summand $\mathbb Z_d$ actually occurs. Call the first map $g: \mathbb Z \to \mathbb Z \oplus \mathbb Z_d$ and the second map $f: \mathbb Z \oplus \mathbb Z_d \to \mathbb Z_n$. Denote the kernel of $f$ by $K$. By exactness we get $K \cong \mathbb Z$. Note that $f(0,n)=0$. $K$ is torsion free, but $(0,n) \in \mathbb Z \oplus \mathbb Z_d$ is of course torsion (annihilated by $d$), hence we have $(0,n)=(0,0)$, i.e. $d|n$. So we have all possibilities for $A$, namely $A \in \{\mathbb Z \} \cup \{\mathbb Z \oplus \mathbb Z_d ~ | ~ d \geq 2,d | n\}$.
We still have to figure out the maps in the second case: Of course $g$ is given by $g(1) = (m,e)$ for some $m,e \in \mathbb Z$, $0 \leq e < d$. Writing $C_{m,e}$ for the cokernel of the map $\mathbb Z^2 \to \mathbb Z^2$ given by the matrix $\begin{pmatrix}m&0\\e&d\end{pmatrix}$, we obtain a commutative diagram with exact rows $$\begin{CD} 0 @>>> \mathbb Z^2 @>\begin{pmatrix}m&0\\e&d\end{pmatrix}>> \mathbb Z^2 @>>> C_{m,e} @>>>0\\ @VVV @VXVV @VVYV @VVV @VVV \\ 0 @>>> \mathbb Z^2 @>>\begin{pmatrix}n&0\\0&1\end{pmatrix}> \mathbb Z^2 @>p>> \mathbb Z_n @>>>0 \end{CD}$$ where $X$ and $Y$ are obtained via the algorithm to determine the Smith normal form. The map $p \circ Y$ factors through $\mathbb Z \oplus \mathbb Z_d$ and the factorization is $f$.<|endoftext|> TITLE: Show that $x^2+y^2+z^2=999$ has no integer solutions QUESTION [10 upvotes]: The question is asking us to prove that $x^2+y^2+z^2=999$ has no integer solutions. Attempt at a solution: So I've noticed that since 999 is odd, either one of the variables or all three of the variables must be odd. If I assume that only one variable is odd, I can label the variables like this: $$x=2k_1+1$$ $$y=2k_2$$ $$z=2k_3$$ By substituting, and doing some algebra, I can conclude that $k_1^2+k_2^2+k_3^2+k_1=249.5$, which is not possible since all $k_i\in\Bbb Z$. If all three are odd, I can rename the variables like this: $$x=2k_1+1$$ $$y=2k_2+1$$ $$z=2k_3+1$$ Eventually I conclude that $k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249$, but I don't know where to go from there. An alternative I've considered is brute-forcing it, but I'd rather avoid that if I can. Any assistance here would be greatly appreciated. REPLY [5 votes]: My immediate solution was the same as Jorge Fernández Hidalgo's, using $\bmod 8$ limits, but carrying on from your sticking point (and trusting your work to that point): $$k_1^2+k_2^2+k_3^2+k_1+k_2+k_3 = 249 \\ (k_1^2+k_1) + (k_2^2+k_2)+(k_3^2+k_3) = 249 \\ k_1(k_1+1) + k_2(k_2+1)+k_3(k_3+1) = 249 \\ $$ and we have three even terms summing to an odd number, which cannot therefore exist.<|endoftext|> TITLE: Function that satisfies $f(x+ y) = f(x) + f(y)$ but not $f(cx)=cf(x)$ QUESTION [5 upvotes]: Is there a function from $ \Bbb R^3 \to \Bbb R^3$ such that $$f(x + y) = f(x) + f(y)$$ but not $$f(cx) = cf(x)$$ for some scalar $c$? Is there one such function even in one dimension? If so, what is it? If not, why? I came across a function from $\Bbb R^3$ to $\Bbb R^3$ such that $$f(cx) = cf(x)$$ but not $$f(x + y) = f(x) + f(y)$$, and I was wondering whether there is one with the converse. Although there is another post titled Overview of the Basic Facts of Cauchy valued functions, I do not understand it. If someone can explain in simplest terms the function that satisfies my question and why, that would be great. REPLY [5 votes]: Take a $\mathbb Q$-linear function $f:\mathbb R\rightarrow \mathbb R$ that is not $\mathbb R$-linear and consider the function $g(x,y,z)=(f(x),f(y),f(z))$. To see such a function $f$ exists notice that $\{1,\sqrt{2}\}$ is linearly independent over $\mathbb Q$, so there is a $\mathbb Q$-linear function $f$ that sends $1$ to $1$ and $\sqrt{2}$ to $1$. So clearly $f$ is not $\mathbb R$-linear. (Zorn's lemma is used for this.)<|endoftext|> TITLE: Approximation of a summation by an integral QUESTION [11 upvotes]: I am going to approximate $\sum_{i=0}^{n-1}(\frac{n}{n-i})^{\frac{1}{\beta -1}}$ by $\int_{0}^{n-1}(\frac{n}{n-x})^{\frac{1}{\beta -1}}dx$, for $n$ sufficiently large. Is the above approximation valid? If it is, by which theorem or method (like Newton's method) can it be justified? What is the error of the approximation?
REPLY [2 votes]: Using the Riemann–Siegel formula you can approximate $\sum_{i=0}^{n-1} \left( \frac{n}{n-i} \right)^s \simeq n^s\left\{\zeta(s)-\frac{(n+1/2)^{1-s}}{s-1}\right\}.$ This gives very accurate values for real $s>1$ even for very small $n$. For instance, for $n=2$ and $s=3$, one obtains $\zeta(3)\simeq 241/200= 1.205$.<|endoftext|> TITLE: Problem in showing $\lim_{(x,y)\to (0,0)} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0$ using polar coordinates QUESTION [5 upvotes]: I'm trying to show that $$\lim_{(x,y)\to (0,0)} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0$$ I've used polar coordinates but when I do this I get the possibility of $\frac{0}{0}$ if $\cos(\theta)\to 0$ as $r\to 0$. So it must be that I need some sort of bound to make sense of this limit. But I'm not sure how to proceed. REPLY [5 votes]: If we wish to use polar coordinates $(r,\theta)$, then we can write $$\begin{align} \frac{12x^3y^5+4x^4y^4}{x^6+4y^8}&=r^2\left(\frac{12\cos^3(\theta)\sin^5(\theta)+4\cos^4(\theta)\sin^4(\theta)}{\cos^6(\theta)+4r^2\sin^8(\theta)}\right)\\\\ &=r\left(3\sin(\theta)+\cos(\theta)\right)\left(\frac{4r\cos^3(\theta)\sin^4(\theta)}{\cos^6(\theta)+4r^2\sin^8(\theta)}\right)\tag1 \end{align}$$ Let $ g(r,\theta)=\frac{4r\cos^3(\theta)\sin^4(\theta)}{\cos^6(\theta)+4r^2\sin^8(\theta)}$. Denote $\sin(\theta)$ by $s$ and $\cos(\theta)$ by $c$. We will view $g(r,\theta)$ as a function of $\theta\in \mathbb{R}$, which is differentiable and $2\pi$-periodic. Therefore the extrema occur at points for which $\frac{\partial g(r,\theta)}{\partial \theta}=0$. Then, taking the partial derivative with respect to $\theta$, we have $$\begin{align} \frac{\partial g(r,\theta)}{\partial \theta}&=4rs^3c^2\,\left(\frac{(4c^2-3s^2)(c^6+4r^2s^8)-s^2c^2(32r^2s^6-6c^4)}{(c^6+4r^2s^8)^2}\right)\\\\ &= 4rs^3c^2\,\left(\frac{(3+c^2)(c^6-4r^2s^8)}{(c^6+4r^2s^8)^2}\right)\tag 2 \end{align}$$ We see that $\frac{\partial g(r,\theta)}{\partial \theta}=0$ when $\sin(\theta)=0$ or $\cos(\theta)=0$ or $\cos^6(\theta)=4r^2\sin^8(\theta)$. When $\sin(\theta)=0$ or $\cos(\theta)=0$, $g(r,\theta)=0$. When $\cos^6(\theta)=4r^2\sin^8(\theta)$, $$g(r,\theta)=\text{sgn}(\cos(\theta))\tag 3$$ Finally, using $(3)$ in $(1)$ reveals $$\left|\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}\right|\le r|3\sin(\theta)+\cos(\theta)|$$ whereupon applying the squeeze theorem yields the coveted limit $$\bbox[5px,border:2px solid #C0A000]{\lim_{(x,y)\to (0,0)}\frac{12x^3y^5+4x^4y^4}{x^6+4y^8}=0}$$<|endoftext|> TITLE: How can I show that $v_i\otimes v_j$ is non zero in $V \otimes_\mathbb {F}V$ QUESTION [5 upvotes]: Let $V$ be a finite-dimensional vector space over a field $\mathbb F.$ Let $\rm dim (V)=n>0$ and $\mathcal {B}=\{v_1,\ldots,v_n\}$ be a basis of $V.$ Now we know the dimension of $V \otimes_\mathbb {F}V$ is $n^2$ as $V \otimes_\mathbb {F}V \cong \mathbb {F}^{n^2}.$ Now since the set $\mathcal {A}=\{v_i\otimes v_j:1 \leq i,j\leq n\}$ spans $V \otimes_\mathbb {F}V$ and the number of elements in $\mathcal {A}$ is $n^2$, $\mathcal {A}$ forms a basis for $V \otimes_\mathbb {F}V$ as a vector space over $\mathbb F.$ In this way every element in $\mathcal {A}$ is nonzero. Now my question is: if I don't want to use the above arguments, how can I show that for any $i$ and $j$ the element $v_i\otimes v_j$ is nonzero in $V \otimes_\mathbb {F}V$? If it can be shown directly from the construction of the tensor product, that would help me. Many thanks.
REPLY [4 votes]: Whatever your definition of the tensor product is, it should be true that any bilinear function $$ V\times V\longrightarrow \mathbb{F} $$ must factor uniquely through $$ V\times V\longrightarrow V\otimes_\mathbb{F} V\longrightarrow \mathbb{F} $$ Now take any linear functional $\ell :V\to\mathbb{F}$ such that $\ell(v_i)\neq0 $ and $\ell(v_j)\neq 0$. Define a bilinear function $f:V\times V\to \mathbb{F}$ as $$ f(u,v) = \ell(u)\ell(v) $$ Since $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\otimes_\mathbb{F} V\to \mathbb{F}$, i.e. $g(u\otimes v) = f(u,v)$. Since $g(v_i\otimes v_j) = f(v_i,v_j) = \ell(v_i)\ell(v_j)\neq 0$, $v_i\otimes v_j$ is nonzero. REPLY [2 votes]: Since $v_i$ and $v_j$ are non-zero, there is a bilinear map $\beta : V \times V \to k$ with $\beta(v_i,v_j) = 1$ and by the universal property of the tensor product we get a map $b : V \otimes V \to k $ with $b(v_i \otimes v_j) = 1$ so it cannot be zero.<|endoftext|> TITLE: How can I find $\sum_{cyc}\sin x\sin y$ QUESTION [6 upvotes]: $x$, $y$ & $z$ are real numbers such that $$\frac{\sin{x}+\sin{y}+\sin{z}}{\sin{(x+y+z)}}=\frac{\cos{x}+\cos{y}+\cos{z}}{\cos{(x+y+z)}}=2$$ find the value $$\sum_{cyc}\sin x\sin y$$ All help would be appreciated. REPLY [3 votes]: Since \begin{align*} 2 \cos(x+y+z) &= \cos x + \cos y + \cos z \\ 2 \sin(x+y+z) &= \sin x + \sin y +\sin z \end{align*} \begin{align*} 2 e^{i(x+y+z)} & = e^{ix} + e^{iy} + e^{iz} \end{align*} Multiplying throughout by $e^{-ix}$, we get \begin{align*} 2e^{i(y+z)} = 1 + e^{i(y-x)} + e^{i(z-x)} \end{align*} Equating the real parts, we get \begin{align*} 2\cos(y+z) &= 1 + \cos(y-x) +\cos(z-x)\\ 2(\cos y \cos z - \sin y \sin z) & = 1 + \cos x \cos y + \sin x \sin y + \cos z \cos x + \sin z \sin x \end{align*} Similarly, we get \begin{align*} 2(\cos x \cos z - \sin x \sin z) &= 1 + \cos x \cos y + \sin x \sin y + \cos z \cos y + \sin z \sin y\\ 2(\cos x \cos y - \sin x \sin y) &= 1 + \cos x \cos z + \sin x \sin z+ \cos z \cos y + \sin z \sin y\\ \end{align*} Adding, and canceling $2 \sum \cos x \cos y$ from both sides, we get $$\sum \sin x \sin y = -\frac{3}{4}$$ REPLY [3 votes]: I used Blue's beautiful idea. Let $e^{ix}=a$, $e^{iy}=b$ and $e^{iz}=c$. Hence, $\sin{x}=\frac{a-\frac{1}{a}}{2i}=\frac{a^2-1}{2ai}$, $\cos{x}=\frac{a^2+1}{2a}$, $\sin{y}=\frac{b^2-1}{2bi}$, $\sin{z}=\frac{c^2-1}{2ci}$ and $\cos{z}=\frac{c^2+1}{2c}$. Thus, $\sum\limits_{cyc}\sin{x}=2\sin(x+y+z)$ gives $\sum\limits_{cyc}(a^2bc-ab)=2(a^2b^2c^2-1)$ and $\sum\limits_{cyc}\cos{x}=2\cos(x+y+z)$ gives $\sum\limits_{cyc}(a^2bc+ab)=2(a^2b^2c^2+1)$ or $ab+ac+bc=2$ and $a+b+c=2abc$. Thus, $$-\sum\limits_{cyc}\sin{x}\sin{y}=\sum_{cyc}\frac{(a^2-1)(b^2-1)}{4ab}=\frac{\sum\limits_{cyc}c(a^2-1)(b^2-1)}{4abc}=$$ $$=\frac{abc(ab+ac+bc)-\sum\limits_{cyc}(a^2b+a^2c)+a+b+c}{4abc}=\frac{abc(ab+ac+bc)-(a+b+c)(ab+ac+bc)+3abc+a+b+c}{4abc}=\frac{1}{2}-1+\frac{3}{4}+\frac{1}{2}=\frac{3}{4}$$<|endoftext|> TITLE: Find $\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $ without L'hopital's rule QUESTION [5 upvotes]: I have to find the limit without L'hopital's rule: $$\lim_{x \to 0} \frac{\ln (x^2+1)} {x^2} $$ Is it possible? I thought about using squeeze theorem or something, but it didn't work out. Hints are more than welcome! P.S - I didn't study Taylor series or Integrals yet. REPLY [3 votes]: With series: $ \ln(x^2+1)=x^2-\frac{x^4}{2}+\frac{x^6}{3}-+...$ for $|x|<1$, hence $\frac{\ln (x^2+1)} {x^2}=1-\frac{x^2}{2}+\frac{x^4}{3}-+...
\to 1$ for $x \to 0$<|endoftext|> TITLE: Find the limit $\lim_{x \to 0^{+}} (x^{x^{x}} - x^x)$ QUESTION [5 upvotes]: How do we find $$\lim_{x \to 0^{+}} (x^{x^{x}} - x^x)??$$ The answer given is equal to $-1$. Any help is appreciated. REPLY [9 votes]: Write $$ x^{x^x}=e^{x^x\log x}=e^{e^{x\log x}\log x}$$ and $$x^x=e^{x\log x}$$ thus the limit becomes $$\lim_{x\rightarrow 0^+}\Big( e^{e^{x\log x}\log x}\space -\space e^{x\log x}\Big)=\lim_{x\rightarrow 0^+}e^{e^{x\log x}\log x}-\lim_{x\rightarrow 0^+}e^{x\log x}$$ Now it is easy to see that the first limit is $0$ and the second is $1$; thus the difference is $-1$.<|endoftext|> TITLE: Show that the closed unit-ball in $\mathbb{R}^2$ is homeomorphic to the unit-2 sphere in $\mathbb{R}^3$ QUESTION [5 upvotes]: This problem is taken from Topology: A First Course by Munkres. Let $X$ be the closed unit ball $$\{ \langle x, y\rangle \mid x^2 + y^2 \le 1\}$$ in $\mathbb{R}^2$, and let $X^{\ast}$ be the partition of $X$ consisting of all the one-point sets $\{ \langle x, y\rangle\}$ for which $x^2 + y^2 < 1$, along with the set $S^1= \{ \langle x, y\rangle \mid x^2 + y^2 = 1 \}$. One can show that $X^{\ast}$ is homeomorphic with the subspace of $\mathbb{R}^3$ called the unit 2-sphere, defined by $$S^2 = \{\langle x, y, z\rangle \mid x^2 + y^2 + z^2 =1 \}.$$ My Attempted Proof $(X^*, \mathcal{T})$ is the quotient space of $X$ where $\mathcal{T}$ is the topology induced by the quotient map $p : X \to X^*$. $$\mathcal{T} = \left\{U \subset X^* \ \middle| \ p^{-1}(U) = X \cap W \text{ for $W$ open in $\mathbb{R}^2$}\right\}$$ Since $S^2 \subset \mathbb{R}^3$, the topology $\mathcal{M}$ on $S^2$ is given by $$\mathcal{M} = \left\{S^2 \cap V \ | \ V \text{ open in $\mathbb{R}^3$}\right\}$$ Now define $h : X^* \to S^2$ by $$h\left(\langle x, y \rangle\right) = \begin{cases} \langle x, y, \sqrt{1-x^2 - y^2} \rangle \ \ \ \text{ if $x^2 + y^2 < 1$} \\ \langle x, y, 0 \rangle \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ if $x^2 + y^2 = 1$} \\ \end{cases}$$ We now show that $h$ is a homeomorphism. Pick $U \in \mathcal{T}$, we first show that $h(U) \in \mathcal{M}$. $U$ is of the form $U = \{ \langle x, y \rangle \ | \ x^2 + y^2 = 1 \text{ or } x^2 + y^2 < 1 \}$, and by definition of $h$ we have $h(U) = \{\langle x, y , z\rangle \ | \ x^2 + y^2 +z^2 = 1 \} \subset \mathbb{R}^3$. Fix $\epsilon > 0$ and choose $V = (-1 - \epsilon, 1 + \epsilon) \times (-1 - \epsilon, 1 + \epsilon) \times (-1 - \epsilon, 1 + \epsilon)$. Then $V$ is open in $\mathbb{R}^3$ and we have $h(U) \cap V = h(U) \in \mathcal{M}$. Thus $U \in \mathcal{T} \implies h(U) \in \mathcal{M}$. We now have to prove that $h(U) \in \mathcal{M} \implies U \in \mathcal{T}$. But that is where I got stuck. I'm not sure how to show $U \in \mathcal{T}$ if we pick $h(U) \in \mathcal{M}$. Am I on the correct track, are there any errors so far in my proof, is it non-rigorous? If so, please let me know. If not then how can I go about proving the reverse implication to show that $h$ is a homeomorphism? REPLY [3 votes]: OK, so the question is clearer if you write it like this: There's a relation on an $n$-dimensional closed unit ball $D^n$: $$x\sim y\mbox{ if and only if }x=y\mbox{ or }\lVert x\rVert =\lVert y\rVert =1$$ Prove that $D^n\ /\sim$ is homeomorphic to $S^n$. First of all note that your function $h$ is not a homeomorphism because it is not "onto" (your last coordinate is always nonnegative). Proof. Start by looking the other way around.
Define $$\theta:\mathbb{R}^{n}\to S^n$$ $$\theta(x_1, \ldots, x_n)=\bigg(\frac{S-1}{S+1}, \frac{2x_1}{S+1},\cdots,\frac{2x_n}{S+1}\bigg)$$ where $S=\sum x_i^2$. This map is also known as the stereographic projection (or more accurately its inverse). Note that the image consists of all points on the sphere except for $(1,0,\ldots, 0)$. Also $\theta$ is one-to-one. Now define a map from the open disk to $\mathbb{R}^n$: $$g:B^n\to\mathbb{R}^n$$ $$g(x)=\begin{cases} \tan\bigg(\frac{\pi}{2}\lVert x\rVert\bigg)\frac{x}{\lVert x\rVert}\mbox{ if }x\neq 0 \\ 0\mbox{ otherwise } \end{cases}$$ This is a homeomorphism, with an inverse given by the analogous rescaling using $\arctan$. Finally we are ready to define our function: $$f:D^{n}\to S^n$$ $$f(x)=\begin{cases} \theta\big(g(x)\big)\mbox{ if } \lVert x\rVert < 1 \\ (1,0,\ldots,0)\mbox{ otherwise} \end{cases}$$ You can easily check that it is continuous (that's because of properties of the stereographic projection: the closer arguments get to infinity in $\mathbb{R}^n$, the closer values get to $(1,0,\ldots,0)$) and "onto". Also $$f(x)=f(y)\mbox{ if and only if }x=y\mbox{ or }\lVert x\rVert = \lVert y\rVert = 1$$ In particular $f$ induces the mapping $$\overline{f}:D^n\ /\sim\to S^n$$ $$\overline{f}([x])=f(x)$$ This map is continuous, one-to-one and "onto". Since $D^n$ is compact, so is $D^n\ /\sim$ (you only need to verify that it is Hausdorff) and thus $\overline{f}$ is also closed. Therefore the inverse is continuous, so $\overline{f}$ is a homeomorphism. $\Box$<|endoftext|> TITLE: Show that the dual space of $\ell^1$ is isomorphic to $\ell^{\infty}$ QUESTION [8 upvotes]: I've started taking my first course in multivariable analysis and in our notes there is an exercise about the dual space of the sequence space $\ell_1$ and I'm unsure how to prove that it is isomorphic to $\ell_{\infty}$ as in class we haven't gone over the dual space in detail. If somebody could also give me some extra intuition about dual spaces that would be much appreciated :) REPLY [13 votes]: The dual space consists of all continuous linear functionals on the space. What continuity buys you is that, if you can determine the functionals on a dense subspace $M$, you can bootstrap to the full space by continuity. Continuity of a linear function $F$ on the normed space assures that $F$ is determined by its values on the dense subspace $M$. For example, because $\ell^1$ has a basis of sequences $$ e_0 = \{ 1, 0, 0, 0, \cdots \} \\ e_1 = \{ 0, 1, 0, 0, \cdots \} \\ e_2 = \{ 0, 0, 1, 0, \cdots \} \\ \vdots $$ there is a natural way to bootstrap. Specifically, if $x=\{ \alpha_k \}\in\ell^1$, then $$ \left\| x - \sum_{n=0}^{N}\alpha_n e_n \right\|_{\ell^1} \le \sum_{k=N+1}^{\infty}|\alpha_n|\rightarrow 0 \mbox{ as } N\rightarrow\infty. $$ Therefore, by continuity, if $F$ is a continuous linear functional on $\ell^1$, then $$ F(x) = \lim_{N\rightarrow\infty}F\left(\sum_{k=0}^{N}\alpha_k e_k \right) = \lim_{N\rightarrow\infty} \sum_{k=0}^{N}\alpha_k F(e_k) = \sum_{k=0}^{\infty}\alpha_k F(e_k). $$ So, already you have a representation of any such linear functional. The sum on the right is guaranteed to converge. In fact it must converge absolutely because you can choose unimodular constants $u_k$ such that $u_k\alpha_k F(e_k)=|\alpha_k||F(e_k)|$ and apply the above to $\{u_k\alpha_k\}$ instead. Continuity of $F$ is equivalent to boundedness, meaning the existence of a smallest non-negative constant $\|F\|$ such that $|F(x)| \le \|F\|\|x\|_{\ell^1}$ for all $x\in\ell^1$.
<|endoftext|> TITLE: The diophantine equation $a^{a+2b}=b^{b+2a}$ QUESTION [8 upvotes]: While working on the recent inequality question $\qquad$Find the $least$ number $N$ such that $N=a^{a+2b} = b^{b+2a}, a \neq b$. posted by Nirbhay, I decided to fool with the associated diophantine equation, and I did manage to solve it. Here is the problem, offered as a fun challenge: Prove that the only positive integer pairs $(a,b)$ with $a<b$ satisfying $a^{a+2b}=b^{b+2a}$ are $(16,32)$ and $(16,64)$. REPLY: One can show that $a\mid b$, so write $b=ka$ with an integer $k\ge 2$. The equation becomes $a^{a(2k+1)}=(ka)^{a(k+2)}$, that is $a^{k-1}=k^{k+2}$, and dividing by $k^{k-1}$ gives $n^{k-1}=k^3$ for the integer $n=a/k$. If $n\ge 9$ then $n^{k-1}>k^3$ for all $k\ge 2$. Therefore $n\le 8$. If $n=8$, $8^{k-1}>k^3$ for every $k\ge 3$. If $k=2$ we get an equality, and this gives us $a=16$ and $b=32$. A similar reasoning gives us no solutions for every $n\le 7$, except for $n=4$, where we find $k=4$, and thus $a=16$ and $b=64$. Hence, all the solutions are $(a,b)=(16, 32)$ and $(a, b)=(16, 64)$ as you stated.<|endoftext|> TITLE: Trace inequality on the product of positive semi-definite matrices QUESTION [5 upvotes]: Let $A_1$ and $A_2$ be positive semi-definite matrices such that Tr$(A_1) \leq$ Tr$(A_2)$. Let $B$ be another positive semi-definite matrix. Is it true that Tr$(A_1B) \leq$ Tr$(A_2B)$? REPLY [6 votes]: No. Set $$A_1=\begin{bmatrix} 3 & 0 \\ 0 & 3 \end{bmatrix}$$ $$A_2=\begin{bmatrix} 10 & 0 \\ 0 & 1 \end{bmatrix}$$ $$B=\begin{bmatrix} 0.1 & 0 \\ 0 & 2 \end{bmatrix}$$
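A quick numerical check of this counterexample (a minimal sketch, assuming NumPy):

```python
import numpy as np

A1 = np.diag([3.0, 3.0])
A2 = np.diag([10.0, 1.0])
B  = np.diag([0.1, 2.0])

assert np.trace(A1) <= np.trace(A2)        # 6 <= 11
print(np.trace(A1 @ B), np.trace(A2 @ B))  # 6.3 vs 3.0, so Tr(A1 B) > Tr(A2 B)
```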
<|endoftext|> TITLE: Prob. 2, Sec. 1.4 in Kreyszig's Functional Analysis Book: Any Cauchy sequence with a convergent subsequence converges QUESTION [6 upvotes]: Here is Prob. 2, Sec. 1.4 in the book Introductory Functional Analysis With Applications by Erwin Kreyszig. If $\left( x_n \right)$ is Cauchy and has a convergent subsequence, say, $x_{n_k} \to x$, show that $\left( x_n \right)$ is convergent with the limit $x$. My effort: Let $(X, d)$ be the metric space in which $\left( x_n \right)$ is a Cauchy sequence, and let $\varphi \colon \mathbb{N} \to \mathbb{N}$ be a strictly increasing function such that the sequence $\left( x_{\varphi(n)} \right)$ converges to a point $x$ of $X$. And, we can put $$n_k = \varphi(k) \ \mbox{ for all } k \in \mathbb{N}.$$ Then, given a real number $\varepsilon > 0$, we can find a natural number $N$ such that $$ (1) \ \ \ \ d \left( x_m, x_n \right) < \frac{\varepsilon}{2} \ \mbox{ for any pair } (m, n) \mbox{ of natural numbers such that } m > N \mbox{ and } n > N.$$ Now for each $k \in \mathbb{N}$, we know that $n_k = \varphi(k) \in \mathbb{N}$; moreover, $$ n_k = \varphi(k) < \varphi(r) = n_r \ \mbox{ if $k \in \mathbb{N}$, $r \in \mathbb{N}$, and $k < r$},$$ because $\varphi$ is a strictly increasing function. Thus, $n_1 = \varphi(1) \in \mathbb{N}$ and so $n_1 \geq 1$. Let $k$ be any natural number, and suppose that $$n_k = \varphi(k) \geq k.$$ Then since $\varphi$ is a strictly increasing function and since $k+1 \in \mathbb{N}$, therefore we can conclude that $$n_{k+1} = \varphi(k+1) > \varphi(k) \geq k.$$ But $\varphi(k+1) \in \mathbb{N}$. So we can conclude that $\varphi(k+1) \geq k+1$. Hence by induction it follows that $$n_k = \varphi(k) \geq k \mbox{ for all } k \in \mathbb{N}. $$ Thus we know that, for each $k \in \mathbb{N}$, (i) $\varphi(k) \in \mathbb{N}$ and (ii) $\varphi(k) \geq k$. Now since $$\lim_{k \to \infty} x_{\varphi(k)} = \lim_{k \to \infty} x_{n_k} = x, $$ therefore we can find a natural number $K$ such that $$ (2) \ \ \ \ d\left( x_{n_k}, x \right) = d\left( x_{\varphi(k)}, x \right) < \frac{\varepsilon}{2} \ \mbox{ for any natural number } k \mbox{ such that } k > K.$$ Now let $M$ be any natural number such that $M > \max \left( K, N \right)$. Such an $M$ exists since the set of natural numbers is not bounded above (or by the Archimedean property of $\mathbb{R}$). Let $n \in \mathbb{N}$ such that $n > M$. Then we note that $$n_{M+1} = \varphi(M+1) \geq M+1 > M > K$$ so that (2) can be used, and we also note that $$n_{M+1} \in \mathbb{N}, \ \ \ n \in \mathbb{N}, \ \ \ n_{M+1} > M > N, \ \ \ \mbox{ and } \ n > N$$ so that (1) can be used. Then, from (1) and (2) above, we see that, $$ d\left( x_n, x\right) \leq d \left( x_n, x_{n_{M+1}} \right) + d \left( x_{n_{M+1}}, x \right) < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon,$$ for any natural number $n > M$. Thus we have shown that, corresponding to every real number $\varepsilon > 0$, we can find a natural number $M$ such that $$ d \left( x_n, x \right) < \varepsilon \ \mbox{ for any natural number } n > M.$$ Hence the sequence $\left(x_n \right)$ converges in the metric space $(X, d)$ to the point $x \in X$. Is the above proof correct? If so, then is the presentation good enough? If not, then where lies the flaw? REPLY [4 votes]: The above proof is correct, but there are a lot of parts that can be trimmed. For one, you don't really need to introduce the function $\phi$ at all. It just introduces confusion, and you can always write $n_k$ instead of $\phi(k)$ and lose no clarity. So, a trimmed down proof: Let $(x_n)_{n\in\mathbb N}$ be a Cauchy sequence, and let $(x_{n_k})_{k\in\mathbb N}$ be a convergent subsequence of it. Also, let $x=\lim_{k\to\infty} x_{n_k}$. Let $\epsilon > 0$. Then, (1) there exists some $N$ such that if $m,n\ge N$, we have $d(x_n, x_m)<\frac{\epsilon}{2}$; (2) there exists some $K$ such that if $k\ge K$, we have $d(x, x_{n_k})<\frac\epsilon2$. Set $N'$ to be the first value of the sequence $(n_K, n_{K+1},\dots)$ that is greater than $N$, and let $n>N'$. From point (2), we know that $d(x, x_{N'})<\frac\epsilon 2$. From point (1), we know that $d(x_{N'}, x_n)\leq \frac\epsilon 2$ (because $N'>N$ and $n>N$). Then, $$d(x, x_n) \leq d(x, x_{N'}) + d(x_{N'}, x_n) < \frac\epsilon 2 + \frac\epsilon2=\epsilon$$ and the proof is over.<|endoftext|> TITLE: On $Z = \max \left(X_1, X_2, \dots, X_N \right)$ where $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$ QUESTION [6 upvotes]: How does the maximum of $N$ normal random variables behave? If they are i.i.d., then on the right tail ($x \rightarrow \infty$) it behaves as if it were Gumbel-distributed. But are there any results for the generic case? That is, how does $Z = \max \left(X_1, X_2, \dots, X_N \right)$ behave where $X_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$, or with shared variance $X_i \sim \mathcal{N}(\mu_i, \sigma^2)$? I assume the shared variance & different mean case should behave similarly to the i.i.d. case, but are there any known results? REPLY [3 votes]: Ross 2003 gives an upper bound for the more general case, where the $X_i$ can be dependent: $$\mathbb{E} \left[ \max_i X_i \right] \le c + \sum_{i=1}^N \left(\sigma^2_i f_i(c) + (\mu_i - c) Pr\left\{X_i > c \right\} \right)$$ for all values of $c$, where $f_i(c)$ is the pdf of $X_i$. The tightest bound can be obtained by using a $c$ that satisfies $$\sum_{i=1}^N Pr\left\{X_i > c \right\} = 1 \: . $$
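Here is a rough Monte Carlo sketch of the bound for independent normals (assuming NumPy/SciPy; the parameter values are made up for illustration). Note that the minimizing $c$ also satisfies the condition above:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma = np.array([0.0, 0.5, 1.0]), np.array([1.0, 2.0, 0.5])

# Monte Carlo estimate of E[max_i X_i] (independent case, for illustration).
mc = rng.normal(mu, sigma, size=(200_000, 3)).max(axis=1).mean()

def bound(c):
    # c + sum_i [ sigma_i^2 f_i(c) + (mu_i - c) Pr{X_i > c} ]
    return c + np.sum(sigma**2 * norm.pdf(c, mu, sigma) + (mu - c) * norm.sf(c, mu, sigma))

cs = np.linspace(-2.0, 6.0, 2001)
c_star = cs[np.argmin([bound(c) for c in cs])]
print(mc, bound(c_star))                 # the MC mean stays below the tightest bound
print(norm.sf(c_star, mu, sigma).sum())  # approximately 1 at the minimizer
```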
<|endoftext|> TITLE: A condition on a non convex subset of $\mathbb{R}^2$ QUESTION [8 upvotes]: In the euclidean plane $\mathbb{R}^2$ it's possible to state the following theorem: Theorem: Consider any finite set of $n$ points $\{x_1,\dots,x_n\}\subset \mathbb{R}^2$ and any $n$-tuple of positive real numbers $(r_1,\dots,r_n)$ such that $$\cap_{i=1}^nB_{r_i}(x_i)\neq\emptyset$$ where $B_{r_i}(x_i)$ is the closed ball of center $x_i$ and radius $r_i$. Then for any other set of $n$ points $\{x_1',\dots,x_n'\}\subset \mathbb{R}^2$ such that $$|x'_i-x'_j|\le|x_i-x_j|$$ for every $i,j=1,\dots,n$ it also happens that $$\cap_{i=1}^nB_{r_i}(x'_i)\neq\emptyset$$ Question: does this result still hold true if we consider $$X:=\{x\in\mathbb{R}^2 \text{ such that } 0\le\text{arg}(x)\le 7\pi/4 \}$$ instead of $\mathbb{R}^2$? By this I mean that both $\{x_1,\dots,x_n\}$ and $\{x_1',\dots,x_n'\}$ will be in $X$ and the condition $|x'_i-x'_j|\le|x_i-x_j|$ is replaced by $d(x'_i,x_j')\le d(x_i,x_j)$ where $d$ is the path metric induced by the euclidean metric of $\mathbb{R}^2$ on $X$. Also, the intersections are considered in $X$ and the balls are defined using $d$. This doesn't seem easy to me because the only proof I know for the theorem in $\mathbb{R}^2$ isn't really easy (it's contained in "A Lipschitz Condition Preserving Extension for a Vector Function" by Valentine and it's for $\mathbb{R}^n$) and doesn't seem to adapt to the case of $X$. As user Moishe Cohen pointed out, I should sketch it. Lemma 1 (by Alexandrov and Hopf): Let $A_1,\dots,A_n$ be closed sets in $\mathbb{R}^m$ which cover the simplex $T$. Suppose that each face of $T$ spanned by vertices $a_{i_1},\dots,a_{i_\tau}$ satisfies $[a_{i_1},\dots,a_{i_\tau}]\subset A_{i_1}\cup\dots\cup A_{i_\tau}$. Then $A_1\cap \dots\cap A_n\neq \emptyset$. Lemma 2: Let $\Delta(x_1',\dots,x_n')$ be the euclidean simplex with vertices $x_1',\dots,x_n'$. Under the hypothesis of the theorem $\Delta(x_1',\dots,x_n')$ is covered by the sets $B_{r_i}(x'_i)$. Proof of the lemma: suppose $\Delta(x_1',\dots,x_n')$ is not covered by the sets $B_{r_i}(x'_i)$; then we can choose $$x\in \cap_{i=1}^nB_{r_i}(x_i)\neq\emptyset,\qquad x'\in \Delta(x_1',\dots,x_n')\setminus \cup_{i=1}^nB_{r_i}(x'_i).$$ Call $R_i$ the vector from $x$ to $x_i$ and $R_i'$ the vector from $x'$ to $x_i'$. Since $x\in B_{r_i}(x_i)$ while $x'\notin B_{r_i}(x'_i)$, and since $|x'_i-x'_j|\le|x_i-x_j|$, it follows that $$ |R_i'|>|R_i|, \qquad |R_i'-R_j'|\le|R_i-R_j|$$ for every $i,j=1,\dots,n$. This implies $R_i'\cdot R_j'>R_i\cdot R_j$. Let $(a_1',\dots,a_n')\in \mathbb{R}_+^n$ be such that $\sum_{i=1}^na_i'R_i'=0$; then from the previous inequality it follows that $\left|\sum_{i=1}^na_i'R_i\right|^2<\left|\sum_{i=1}^na_i'R_i'\right|^2=0$, which is impossible. Proof of the theorem: since $\cap_{i=1}^nB_{r_i}(x_i)\neq\emptyset$ it follows $B_{r_i}(x_i)\cap B_{r_j}(x_j)\cap \overline{x_ix_j}\neq \emptyset$ and since $|x'_i-x'_j|\le|x_i-x_j|$ it follows $B_{r_i}(x'_i)\cap B_{r_j}(x'_j)\cap \overline{x'_ix'_j}\neq \emptyset$. The result then follows from the two lemmas. As I said this proof is for the general case of $\mathbb{R}^n$, $n>1$ and is quite complicated. I don't know if for $n=2$ there's a simpler one which would adapt to the case of $X$. I can't understand how to adapt this one because it doesn't seem possible to define $\Delta(x_1',\dots,x_n')$ in $X$ in the first place.
REPLY [3 votes]: The answer is negative already for $n=3$: For every nonconvex cone $C\subset E^2$ there are points $x_1, x_2, x_3, x_1', x_2', x_3'\in C$ and positive numbers $r_1, r_2, r_3$ such that: $d(x_i, x_j)= d(x_i', x_j')$ for all $i, j=1, 2, 3$. $$ \{ x\}= \bigcap_{i} \bar{B}(x_i, r_i)\ne \emptyset. $$ $$ \bigcap_{i} \bar{B}(x'_i, r_i)= \emptyset. $$ In my examples $r_1+r_2=d(x_1, x_2)$, $r_i=d(x,x_i), i=1, 2, 3$; thus $x\in x_1x_2$. The point is that for every nonconvex cone $C$, there exists a geodesic triangle $x_1x_2x_3$ and a point $x\in x_1x_2$ such that in the Euclidean comparison triangle $x_1'x_2'x_3'$ and the comparison point $x'\in x'_1x'_2$, we have $$ d(x'_3, x')> d(x_3, x). $$ [To find such examples place $x_1, x_2$ on the different boundary rays of the cone $C$ and place $x_3$ (in the cone interior) so that the geodesic $x_1x_3$ passes through $x=$ the apex of the cone.] It follows that $$ \bigcap_{i} \bar{B}(x'_i, r_i)= \emptyset, $$ since $$ \bar{B}(x_1',r_1)\cap \bar{B}(x_2',r_2)=x', $$ but $x'\notin \bar{B}(x_3',r_3)$. Now, the cone $C$ contains arbitrarily large flat disks, so we can place the points $x_i'$ as in your question in such disks, so that they necessarily span flat comparison triangles.<|endoftext|> TITLE: Fitting $\cos(x)$ with a polynomial QUESTION [29 upvotes]: Consider the graph of $\cos(x)$ on the interval $[-\pi/2,\pi/2]$. It looks very close to a parabola. I would like to find the parabola (or possibly family of parabolas) that "fit" the $\cos$ function the best on this interval. By "fit" the best, I mean I want to minimize the absolute area between the curves as much as possible, i.e. minimize $$\int_{-\pi/2}^{\pi/2}|\cos(x)-p_2(x)|dx,$$ where $p_2$ is a degree $2$ polynomial. My initial approach was to look at the Taylor approximation about $x=0$ of degree $2$ $$\cos(x)\approx1-\frac {x^2}2$$ The area between these curves is $2-\pi+\frac{\pi^3}{24}\approx0.150335$. Surely we can do better. I then looked at the interpolating polynomial through the points $(-\pi/2,0),(0,1),(\pi/2,0)$, which gives the function $1-\frac{4x^2}{\pi^2}$. The area between these two curves is $\frac23(\pi-3)\approx0.094395$. Better! Next, I tried looking at which quadratic, with vertex $(0,1)$, satisfies $$\int_{-\pi/2}^{\pi/2}\cos(x)-p_2(x)dx=0.$$ We then find that $p_2(x)=1-\frac{12(\pi-2)}{\pi^3}x^2$. So, the signed area between these two curves is $0$, but the absolute area is approximately $0.0539276$. Checking numerically, this appears to be the best-fitting parabola that has vertex $(0,1)$. Is the polynomial $p(x)=1-\frac{12(\pi-2)}{\pi^3}x^2$ the unique quadratic that minimizes $$\int_{-\pi/2}^{\pi/2}|\cos(x)-p_2(x)|dx,$$ where $p_2$ is a degree two polynomial? Of course, it could be that there is a better-fitting polynomial that does not have its vertex at $(0,1)$. If not, then I simply ask: Find a quadratic polynomial $p_2(x)$ that minimizes $$\int_{-\pi/2}^{\pi/2}|\cos(x)-p_2(x)|dx.$$ Now, this can be extended to other degree polynomials. For example, the best fitting constant is $c=\sqrt{2}/2$, and that gives $$\int_{-\pi/2}^{\pi/2}|\cos(x)-\frac{\sqrt2}2|dx=2(\sqrt2-1)\approx0.82843.$$ This raises my more general question. Let $n\in\mathbb{N}$ and let $p_n(x)$ be an $n$th degree polynomial. For each $n$, what polynomial minimizes $$\int_{-\pi/2}^{\pi/2}|\cos(x)-p_n(x)|dx?$$ REPLY [3 votes]: Just examples of approximations by such polynomials, with a comparison to the Taylor approximations.
Here $t_n(x)$: degree-$n$ polynomials obtained from the Taylor series $$ \cos(x) = 1 - \frac{1}{2!} x^2 + \frac{1}{4!}x^4 - \frac{1}{6!} x^6 + \ldots $$ And $p_n(x)$: "best fitting" polynomials found empirically. \begin{array}{|c|l|l|l|} \hline n & t_n(x) \mbox { or } p_n(x) & \epsilon & -\log_{10}(\epsilon) \\ \hline 2 & t_2(x)=1.000000 - 0.500000x^2 & 0.150335 & 0.823 \\ 2 & p_2(x)=0.985095 - 0.427001x^2 & 0.044907 & 1.348 \\ \hline 4 & t_4(x)= 1.0000000 - 0.5000000x^2 + 0.0416667x^4 & 0.0090497 & 2.043 \\ 4 & p_4(x)= 0.9996916 - 0.4969942x^2 + 0.0375584x^4 & 0.0009478 & 3.023 \\ \hline 6 & t_6(x)=1.00000000-0.50000000x^2+0.04166667x^4-0.00138889x^6 & 3.1380\times10^{-4} & 3.503 \\ 6 & p_6(x)=0.99999658-0.49994448x^2+0.04153112x^4-0.00128529x^6 & 1.0604\times10^{-5} & 4.975 \\ \hline 8 & t_8(x)=1.0000000000-0.5000000000x^2+0.0416666667x^4-0.0013888889x^6+0.0000248016x^8 & 7.0852\times10^{-6} & 5.150 \\ 8 & p_8(x)=0.9999999763-0.4999994242x^2+0.0416644880x^4-0.0013860546x^6+0.0000233125x^8 & 7.3437\times10^{-8} & 7.134 \\ \hline \end{array}
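Such "best fitting" coefficients can be reproduced numerically; here is a sketch for the $n=2$ row (assuming NumPy/SciPy, and restricting to even quadratics since the optimum is symmetric). It should land near the table's values, up to the tolerance of the optimizer:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def l1_error(params):
    a, b = params  # candidate p(x) = a + b x^2
    err, _ = quad(lambda x: abs(np.cos(x) - (a + b * x**2)),
                  -np.pi / 2, np.pi / 2, limit=200)
    return err

res = minimize(l1_error, x0=[1.0, -0.5], method="Nelder-Mead")
print(res.x, res.fun)  # roughly (0.985, -0.427) with error ~ 0.0449
```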
<|endoftext|> TITLE: Why weak convergence doesn't imply convergence? QUESTION [16 upvotes]: Let $(H,\langle,\rangle)$ be a Hilbert space. We have that $u_n$ converges to $u$ weakly if $$\lim_{n\to \infty}\langle u_n,v\rangle=\langle u,v\rangle$$ for all $v\in H$. But why doesn't it converge strongly? Indeed, if $v=u_m$, then, $$\lim_{n\to \infty }\langle u_n,u_m\rangle=\langle u,u_m\rangle$$ for all $m$, and thus $$\lim_{m\to \infty }\lim_{n\to \infty }\langle u_n,u_m\rangle=\lim_{m\to \infty }\langle u,u_m\rangle=\langle u,u\rangle,$$ therefore, $$\lim_{n\to \infty }\langle u_n,u_n\rangle=\langle u,u\rangle\implies \lim_{n\to \infty }\|u_n\|=\|u\|$$ and thus it converges strongly. What's wrong here? REPLY [14 votes]: This can be understood geometrically. To make things easier, specify the limit point to be zero: $u = 0$. The general case $u \neq 0$ follows from affine translation. The set $$\{ u : |\langle u, v \rangle| \le \epsilon \}$$ describes a semi-infinite slab contained between two closely spaced parallel hyperplanes with normal vector $v$. The convergence $$\langle u_n, v \rangle \rightarrow 0$$ occurs successfully if for every slab width, no matter how thin, a tail of the sequence is contained within the slab. I.e., the sequence is asymptotically "controlled" in the $v$-direction. The sequence converges weakly if this holds for every possible slab direction. I.e., for each direction v: for each width ε: A tail of the sequence is contained in the slab with direction v and width ε. In contrast, the sequence converges strongly if every scaled copy of the unit ball, $$\{ u : ||u|| \le \epsilon \},$$ no matter how small, contains a tail of the sequence: for each width ε: A tail of the sequence is contained within the ball of radius ε. Strong convergence controls the convergence in all directions simultaneously. In infinite dimensions a problem occurs since one can create a well-spaced infinite sequence without going anywhere, by cycling through the infinity of different dimensions: \begin{align*} u_1 &= (1,0,0,\dots) \\ u_2 &= (0,1,0,\dots) \\ u_3 &= (0,0,1,\dots) \\ \vdots & \end{align*} The sequence converges weakly since every direction is controlled eventually, but it does not converge strongly since you have to wait infinitely long for every direction to be controlled.<|endoftext|> TITLE: Please critique my proof that $\sqrt{12}$ is irrational QUESTION [6 upvotes]: I would like critiques on correctness, conciseness, and clarity. Thanks! Proposition: There is no rational number whose square is 12. Proof: Suppose there were such a number, $a \in \mathbb{Q}$ s.t. $a^2 = 12$. This implies that $\exists$ $m, n \in \mathbb{Z}$ s.t. $\frac{m^2}{n^2} = 12.$ Assume without loss of generality that $m,~ n$ have no factors in common. $\Rightarrow m^2 = 12n^2$. This implies that $m^2$ is even, and therefore that $m$ is even; it can thus be written $2k = m$ for some $k \in \mathbb{Z}$. Thus $m^2 = 12n^2 $ $\Rightarrow 4k^2 = 12n^2 $ $\Rightarrow \frac{k^2}{3} = n^2$ Because $n^2$ is an integer, it is clear that $3$ divides $k^2$ which implies that $k$ has $3$ or $\frac{k}{n}$ has a factor (because $\frac{k^2}{n^2}= 3$) Suppose that the former is true, and $3$ is a factor of $k$. Then $k = 3j$ for some integer j, which implies that $(3j)^2 = 3n^2$ $\Rightarrow 9j^2 = 3n^2 $ $\Rightarrow n^2 = 3j^2 $ $\Rightarrow n^2 = \frac{k^2}{n^2}j^2$ $\Rightarrow k = \frac{n^2}{j}$ but this implies that $j$ divides $n^2$, but $j$ divides $m$, and by initial assumption $n$ and $m$ have no factors in common, so this is a contradiction. Suppose now that $\frac{k}{n}$ is a factor of k. Then $k = \frac{k}{n}j$ for some integer $j$. Then $(\frac{k}{n}j)^2 = 3n^2$ which implies that $3j^2 = 3n^2 \Rightarrow j^2 = n^2 \Rightarrow j = n$. But this means that $n$ divides $m$, which again is a contradiction. Thus any rational representation of the number whose square equals $12$ leads to a contradiction and this number must therefore have no rational representation. REPLY [2 votes]: "This implies that $\exists$ $m,n\in\mathbb Z$ s.t. $\frac{m^2}{n^2}=12$. Assume without loss of generality that $m, n$ have no factors in common." I, personally, would not argue "without loss of generality". $q \in \mathbb Q$ is defined as $q = \frac mn$ for some relatively prime integers. So we declare them to have no factors in common by fiat -- not merely by lack of loss of generality. (It's not that big of an issue.) "This implies that $m^2$ is even, and therefore that m is even" I'd accept this but dxiv very much has a point, that it should require some justification. I personally would simply put it in more definitive language. I'd say: "Therefore $2|m^2$ and, as $2$ is prime, $2|m$". This could require a little justification in that all numbers have a unique prime factorization so that for prime $p$ we know if $p|ab$ then $p|a$ or $p|b$ so if $p|m^2$ then $p|m$ or $p|m$. "Because $n^2$ is an integer, it is clear that 3 divides $k^2$ which implies that k has 3 or $\frac kn$ has a factor ". As $\frac {k^2}3$ is an integer, it implies $3|k^2$. Period. That always happens. That anything else may happen doesn't matter. It may have $k/n$ as a factor or it may have $7$ as a factor. Or it may not. Those don't matter. Also, if $\frac kn$ is an integer at all, then it is trivial that $\frac kn$ is a factor of $k$ whether or not $k^2/3$ is an integer or not. And if $\frac kn$ is not an integer then the statement $\frac kn$ is a factor of $k$ is meaningless. And if $k/n$ is an integer, then $n|m = 2k$ and as $n,m$ have no factor in common then $n = 1$. (Which would mean $\sqrt{12} = 2\sqrt{3}$ is an integer, which is easy to verify is not the case.) "if the former ($3|k$) then .... " All that is just fine and the rest is unneeded. But the rest is a bit of a mess. "Suppose now that $k/n$ is a factor of $k$" Again, this is trivial if $n|k$ and is meaningless if $n \not \mid k$. And we can rule out $n|k$ as that would imply $\sqrt{12} = m/n = 2k/n$ is an integer, which it is not. But beware.
This is true of all numbers and nothing relevant is likely to arise. And it doesn't: " Then $(\frac knj)^2=3n^2$ which implies that $3j^2=3n^2$" Actually, no, it implies $3j^2 = 3n^4$. And thus we get $j = n^2$ (we can assume $j$ is positive this time as we can assume $k$ and $n$ are positive). "But this means that n divides m, which again is a contradiction." Actually it's not a contradiction if $n = 1$. But this isn't a contradiction that needed to be reached. "$k/n$ is a factor of $k$" only makes sense if $n|k$, which would imply $n|m = 2k$.<|endoftext|> TITLE: Standard inner product of matrices QUESTION [6 upvotes]: What is the correct definition of the standard inner product of two given matrices? REPLY [4 votes]: It's worth noting that, with this definition (see answer by @Dietrich Burde), the standard inner product of two rectangular real matrices (with the same dimensions) is: $$\left\langle A,B\right\rangle=\sum_{1\le i,j\le n}a_{i,j}b_{i,j}$$ which clearly reminds us of the way we calculate a (standard) inner product in $\mathbb{R}^n$: adding the products of coordinates of the same index.<|endoftext|> TITLE: Derivative of $\frac{1}{\sqrt{x-5}}$ via definition of derivative QUESTION [5 upvotes]: A high school student has asked me to help with a limit. The teacher wants them to calculate the derivative of $$\frac{1}{\sqrt{x-5}}$$ at the point $x=9$ using the definition of the derivative. AND! They don't know $(1+x)^\alpha \approx 1 + \alpha x$. I'm puzzled since I don't know how to proceed without it. $$\left.\left(\dfrac{1}{\sqrt{x-5}}\right)'\,\right|_{x=9} = \lim_{h\to 0} \dfrac{1}{h}\left(\dfrac{1}{\sqrt{4+h}}-\dfrac{1}{2}\right)= \lim_{h\to 0} \dfrac{1}{2h}\left((1+h/4)^{-1/2}-1\right)\color{red}{{\bf=}}-1/16 $$ Is there really a way to work around it?.. BTW, what is the easiest way to derive $(1+x)^\alpha \approx 1 + \alpha x$ for $\alpha \in \mathbb{R}$? I forgot how we did this in school. REPLY [3 votes]: You have $$ \frac1h\,\left(\frac1{\sqrt{4+h}}-\frac12\right) =\frac{2-\sqrt{4+h}}{2h\sqrt{4+h}} =-\frac1{2\sqrt{4+h}(2+\sqrt{4+h})}, $$ after multiplying and dividing by $2+\sqrt{4+h}$.
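The limit can also be confirmed numerically from the raw difference quotient (plain Python):

```python
# (1/h) * (1/sqrt(4+h) - 1/2) should tend to -1/16 = -0.0625 as h -> 0.
for h in [1e-2, 1e-4, 1e-6]:
    print(h, ((4 + h) ** -0.5 - 0.5) / h)
```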
<|endoftext|> TITLE: Proving that Null Cauchy Sequences Form A Maximal Ideal QUESTION [7 upvotes]: Suppose we have a field $F$ with norm function $|| \cdot ||.$ Let $\mathcal{F}$ be the set of Cauchy sequences in $F$ (with respect to the given norm), made into a ring in the usual way by addition and multiplication of corresponding terms in the sequence. That is, we define $$\left\{\alpha_n\right\}_{n\ge 0} + \left\{\beta_n\right\}_{n\ge 0} = \left\{\alpha_n+\beta_n\right\}_{n\ge 0}$$ and $$\left\{\alpha_n\right\}_{n\ge 0} \cdot \left\{\beta_n\right\}_{n\ge 0} = \left\{\alpha_n\cdot\beta_n\right\}_{n\ge 0}$$ for any $\left\{\alpha_n\right\}_{n\ge 0}, \left\{\beta_n\right\}_{n\ge 0} \in \mathcal{F}.$ Finally, take $I$ to be the set of sequences $\left\{\alpha_n\right\}_{n\ge 0} \in \mathcal{F}$ satisfying $$\lim_{n\to \infty} ||\alpha_n|| = 0.$$ One can show that $I$ is an ideal. Is there a direct way to prove that $I$ is a maximal ideal? By direct method, I mean a proof that does not appeal to the fact that $\mathcal{F}/I$ is a field. REPLY [9 votes]: Let $e = (1,1,1,...)$. Then $e$ is the multiplicative identity of $\mathcal{F}$. Suppose $x \in \mathcal{F}$, $x \notin I$. To show that $I$ is a maximal ideal of $\mathcal{F}$, it suffices to show $(I,x) = (e)$. Since $x \notin I$, at most finitely many terms of $x$ are zero. Thus, suppose $x_n \ne 0$ for $n > N$. Let $f$ be the element of $\mathcal{F}$ such that $f_k = 1$ for $0 \le k \le N$ and $f_k = 0$ for $k > N$. Clearly $f \in I$. Let $y = x - xf + f$. Then $y \in (I,x)$, $\;y \notin I$. By choice of $f$, all terms of $y$ are nonzero. Let $1/y$ denote the sequence $(1/y_0,1/y_1,1/y_2,...)$. Since $y$ is a Cauchy sequence with all terms nonzero, and $y \notin I$, the terms of $y$ are bounded away from zero, hence the products $y_my_n$ are also bounded away from zero. Since \begin{align*} &\bullet\;\; \left\Vert{1/y_m - 1/y_n}\right\Vert = \left\Vert\frac{y_n - y_m}{y_my_n}\right\Vert\\[6pt] &\bullet\;\; \text{The denominators $y_my_n$ are bounded away from zero}\\[6pt] &\bullet\;\; y \text{ is a Cauchy sequence}\\ \end{align*} it follows that $1/y$ is a Cauchy sequence, so $1/y \in \mathcal{F}$. Since $y \in (I,x)$ and $1/y \in \mathcal{F}$, it follows that $y(1/y) \in (I,x)$. Thus $e \in (I,x)$, so $(I,x) = (e)$. Therefore $I$ is a maximal ideal of $\mathcal{F}$, as was to be shown.<|endoftext|> TITLE: How does $x^4-2x^3-2x+1$ factor in $\mathbb{F}_{p}$? QUESTION [7 upvotes]: I've been struggling to prove the general formula for how the polynomial in the title factors mod $p$, for some arbitrary prime $p$. This is how it must factor, although I should mention this "guess" wasn't formulated by factoring for some primes and then assuming any pattern noticed held; $x^4-2x^3-2x+1=Q_{1}\cdot L_{1} \cdot L_{2}$ if $p \equiv 11 \pmod{12}$ $x^4-2x^3-2x+1=Q_{1}\cdot Q_{2}$ if $p \equiv 7 \pmod{12}$ $x^4-2x^3-2x+1=Qu_{1}$ if $p \equiv 5 \pmod{12}$ $x^4-2x^3-2x+1=L_{1} \cdot L_{2} \cdot L_{3} \cdot L_{4}$ if $p = a^2+36 b^2$ $x^4-2x^3-2x+1=Q_{1}\cdot Q_{2}$ if $p=4a^2+9b^2$ Where $Q, L, Qu$ are irreducible in $\mathbb{F}_{p}$, and $Q_{i}, L_{j}, Qu_{k}$ are quadratic, linear, and quartic polynomials respectively. I'm working on various other polynomials of varying degree as well, all with solvable Galois groups. Any help as to how to factor them like this formula would be much appreciated. REPLY [4 votes]: The polynomial is reciprocal, so there is an easy way of solving the corresponding equation: dividing through by $x^2$ gives $$ x^2 - 2x - \frac{2}{x} + \frac1{x^2} = \Big(x + \frac1x\Big)^2 - 2 \Big(x + \frac1x\Big) - 2 = 0,$$ hence $$ x + \frac1x = 1 \pm \sqrt{3}. $$ Multiplying through by $x$ gives the quadratic $$ x^2 - \omega x + 1 = 0, $$ where $\omega = 1 \pm \sqrt{3}$. This shows that the field generated by the roots of the polynomial is ${\mathbb Q}(\sqrt{2\sqrt{3}}) = {\mathbb Q}(\sqrt[4]{12})$. The remaining calculations involve quadratic and quartic reciprocity for describing the splitting of primes in pure quartic extensions. As an example, look at the normal closure of ${\mathbb Q}(\sqrt[4]{12})$, which is $L = {\mathbb Q}(i, \sqrt[4]{-3})$. Its maximal abelian subfield $F$ is the field of $12$-th roots of unity, hence exactly the primes $p \equiv 1 \bmod 12$ split completely in $F$. Among these primes, the ones splitting completely in $L$ are exactly those for which $-3$ is a quartic residue modulo $p$. By classical criteria due to Gauss this happens if and only if $p = a^2 + 4b^2$ for $b$ divisible by $3$. A direct approach for describing the splitting of primes in dihedral extensions using binary quadratic forms requires the concept of ring class fields. You might want to look into Cox's book on primes of the form $x^2 + ny^2$, where this is explained in detail.<|endoftext|> TITLE: What is edge maximal graph?
QUESTION [5 upvotes]: Going through graph theory, I found: we call a graph $G$ edge-maximal with respect to a given graph property if $G$ itself has the property but no graph $G+xy$ does, for non-adjacent vertices $x,y \in G$. I am not getting what it really means! Please help me out! REPLY [13 votes]: A graph with a certain property is called edge maximal for that property if you cannot add another edge but keep the property. For instance, a tree is an edge-maximal cycle-free graph. You cannot add an edge while keeping it cycle-free, because adding an edge to a tree always adds a cycle. Similarly, if your graph consists of two components, each of which is a complete graph, then this graph is edge maximal disconnected: adding any edge to the graph turns it into a connected graph.<|endoftext|> TITLE: Where is the mathematical logic with this exponent behaviour? QUESTION [6 upvotes]: I'm learning how the Diffie-Hellman key exchange algorithm works and I came to a mathematical conflict whose logic I can't understand. This is the story (you can bypass this, it's just a brief). Alice and Bob want to generate the same secret number that they can use to encrypt data. (Eve is the bad guy, listening to every transmission over the network.) Alice has secret 15, Bob has secret 13. Alice says to Bob: let's choose the formula `3^e % 17`. (Eve now knows 3 and 17.) Alice sends 3^15 % 17 = 6 ----> (Eve now knows 3, 17, 6.) Bob sends 3^13 % 17 = 12 <---- (Eve now knows 3, 17, 6, 12.) Alice computes (Bob's 12)^secret % 17, i.e. 12^15 % 17 = 10; Bob computes (Alice's 6)^secret % 17, i.e. 6^13 % 17 = 10. They got the same number 10, and it's hard for Eve to figure out 10 with very large numbers. And it's because (3^13 % 17) ^ 15 % 17 is equal to ((3^15 % 17) ^ 13 % 17). But I was wondering about that: (3^13 % 17) ^ 15 % 17 It appears that (3^13 % 17) ^ 15 % 17 is equal also to (3^13) ^ 15 % 17 My question is: what is the logic behind it? I'm not asking for the accurate mathematical proof, I just want to understand the simple human logic behind it. REPLY [2 votes]: Assuming the number and base are positive, all the modulo operator n % 17 does is to give you the last "digit" of $n$ when written in base 17. Now when you want to calculate the last digit (base 17) of $(n^{13})^{15}$, you could calculate $x=n^{13}$, then raise $x$ to the power $15$, then look at the last digit. In order to raise your big number $x$ to the power $15$, you have to multiply a bunch of numbers (all equal to $x$) together. But if all you care about is the last digit (base 17) of the answer, all you need to know is the last digit of $x$, because when you do a multiplication longhand only the digits in the last column affect the last column of the answer. This is perhaps easier to convince yourself of working in base 10, since that's how we're used to doing long multiplication, but it works equally well in any base -- the last digit of $a\times b$ only depends on the last digits of $a$ and $b$.
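The whole exchange, including the identity being asked about, can be replayed with Python's built-in three-argument `pow(base, exp, mod)` (a quick sketch using the toy numbers above):

```python
p, g = 17, 3                  # the public numbers from the story
a_secret, b_secret = 15, 13   # Alice's and Bob's secrets

A = pow(g, a_secret, p)       # Alice sends 6
B = pow(g, b_secret, p)       # Bob sends 12
assert pow(B, a_secret, p) == pow(A, b_secret, p) == 10  # shared secret

# The identity in question: reducing mod 17 between exponentiations
# does not change the final residue.
assert pow(pow(g, 13, p), 15, p) == (g ** 13) ** 15 % p == pow(g, 13 * 15, p)
```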
<|endoftext|> TITLE: Proving a well-known inequality using S.O.S QUESTION [6 upvotes]: Using the $AM-GM$ inequality, it is easy to show for $a,b,c>0$, $$\frac{a}{b} + \frac{b}{c} + \frac{c}{a} \ge 3.$$ However, I can't seem to find an S.O.S form for $a,b,c$ $$f(a,b,c) = \frac{a}{b} + \frac{b}{c} + \frac{c}{a} - 3 = \sum_{cyc}S_A(b-c)^2 \ge 0.$$ Update: Please note that I'm looking for an S.O.S form for $a, b, c$, or a proof that there is no S.O.S form for $a, b, c$. Substituting other variables may help to solve the problem using the S.O.S method, but those are S.O.S forms for some other variables, not $a, b, c$. REPLY [2 votes]: Here is another SOS (shortest) $$ab^2+bc^2+ca^2-3abc={\frac {a \left( c-a \right) ^{2} \left({b}^{2}+ ac+cb \right) +b \left( 2\,ab+{c}^{2}-3\,ac \right) ^{2}}{4\,ab+ \left( c-a \right) ^ {2}}} \geqslant 0$$<|endoftext|> TITLE: Recently proposed problem by George Andrews on partitions in Mathstudent Journal (India) QUESTION [6 upvotes]: Show that the number of parts having odd multiplicities in all partitions of $n$ is equal to the difference between the number of odd parts in all partitions of $n$ and the number of even parts in all partitions of $n.$ Example: $n=5.$ The partitions $5',\quad4'1',\quad3'2',\quad3'11,\quad221',\quad2'1'11,\quad1'1111$ have $10$ parts with odd multiplicities (marked with $'$). On the other hand, the number of odd parts is $15$, $$5,1,3,3,1,1,1,1,1,1,1,1,1,1,1$$ and the number of even parts is $5$, $$4,2,2,2,2.$$ I gave up solving the problem because in this problem parts are taken over all partitions, not in any single partition. I tried thinking of a bijective proof or an analytic proof using generating functions but gave up, as it seems to me a very different problem. Does anyone have any idea? The problem is very nice and it has recently been proposed by George Andrews in the Mathstudent Journal (India). REPLY [3 votes]: This answer is based upon generating functions. We start by considering the generating function of all partitions of $n$ \begin{align*} \prod_{m=1}^\infty\frac{1}{1-z^m}=\prod_{m=1}^\infty\frac{1}{(1-z^{2m})(1-z^{2m-1})}\tag{1} \end{align*} From the RHS of (1) we derive $A(z,t)$, a generating function of all partitions with marked odd parts and $B(z,t)$, a generating function with marked even parts. \begin{align*} A(z,t)&=\prod_{m=1}^\infty\frac{1}{(1-z^{2m})(1-tz^{2m-1})}\\ &=1+tz+(t^2+1)z^2+(t^3+2t)z^3+(t^4+2t^2+2)z^4\\ &\qquad+(t^5+2t^3+4t)z^5+\cdots\\ B(z,t)&=\prod_{m=1}^\infty\frac{1}{(1-tz^{2m})(1-z^{2m-1})}\\ &=1+z+(t+1)z^2+(t+2)z^3+(t^2+2t+2)z^4\\ &\qquad+(t^2+3t+3)z^5+\cdots \end{align*} Similarly we obtain a generating function $C(z,t)$ with marked odd multiplicities by separating terms with odd and even index \begin{align*} C(z,t)&=\prod_{m=1}^{\infty}\left(\frac{1}{1-z^{2m}}+\frac{tz^m}{1-z^{2m}}\right)\\ &=\prod_{m=1}^{\infty}\frac{1+tz^m}{1-z^{2m}}\\ &=1+tz+(t+1)z^2+(t^2+2t)z^3+(t^2+2t+2)z^4\\ &\qquad+(3t^2+4t)z^5+\cdots \end{align*} In terms of generating functions the equality of the difference between the number of odd and even parts in the partitions of a number $n$ and the number of odd multiplicities in the partitions of $n$ translates to the following Claim: \begin{align*} \left.\frac{\partial}{\partial t}A(z,t)\right|_{t=1}-\left.\frac{\partial}{\partial t}B(z,t)\right|_{t=1} =\left.\frac{\partial}{\partial t}C(z,t)\right|_{t=1} \end{align*} Before we show the claim we do a plausibility check and look at partitions of small $n=1,\ldots,5$. \begin{array}{c|ccc|l} n &[z^n]\left.\frac{\partial}{\partial t}A(z,t)\right|_{t=1} &[z^n]\left.\frac{\partial}{\partial t}B(z,t)\right|_{t=1} &[z^n]\left.\frac{\partial}{\partial t}C(z,t)\right|_{t=1} &partitions\\ 1&1&0&1&1\\ 2&2&1&1&2,1^2\\ 3&5&1&4&3,21,1^3\\ 4&8&4&4&4,31,2^2,21^2,1^4\\ 5&15&5&10&5,41,32,31^2,2^21,21^3,1^5\\ \hline \end{array} Here we see the coefficients of the power series for $n=1,\ldots,5$ and observe that the difference of odd and even parts coincides with the number of odd multiplicities.
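A brute-force check of the same counts, for anyone who wants to test further values (plain Python, fine for small $n$):

```python
def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing lists of parts."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield []
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

for n in range(1, 9):
    odd_mult = odd_parts = even_parts = 0
    for p in partitions(n):
        odd_mult += sum(1 for part in set(p) if p.count(part) % 2 == 1)
        odd_parts += sum(1 for part in p if part % 2 == 1)
        even_parts += sum(1 for part in p if part % 2 == 0)
    assert odd_mult == odd_parts - even_parts
```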
:-) The right-most column lists the partitions of $n$ with the compact notation e.g. $21^3$ is a short-hand for $2+1+1+1$. We obtain using logarithmic differentiation \begin{align*} \left.\frac{\partial}{\partial t}A(z,t)\right|_{t=1} &=\left.\frac{\partial}{\partial t}\prod_{m=1}^\infty\frac{1}{(1-z^{2m})(1-tz^{2m-1})}\right|_{t=1}\\ &=\left.\prod_{m=1}^\infty\frac{1}{(1-z^{2m})(1-tz^{2m-1})}\sum_{k=1}^\infty\frac{z^{2k-1}}{1-z^{2k-1}}\right|_{t=1}\\ &=\prod_{m=1}^\infty\frac{1}{(1-z^{2m})(1-z^{2m-1})}\sum_{k=1}^\infty\frac{z^{2k-1}}{1-z^{2k-1}}\\ &=\prod_{m=1}^\infty\frac{1}{1-z^{m}}\color{blue}{\sum_{k=1}^\infty\frac{z^{2k-1}}{1-z^{2k-1}}}\\ \end{align*} and similarly \begin{align*} \left.\frac{\partial}{\partial t}B(z,t)\right|_{t=1} &=\left.\frac{\partial}{\partial t}\prod_{m=1}^\infty\frac{1}{(1-tz^{2m})(1-z^{2m-1})}\right|_{t=1}\\ &=\prod_{m=1}^\infty\frac{1}{1-z^{m}}\color{blue}{\sum_{k=1}^\infty\frac{z^{2k}}{1-z^{2k}}}\\ \end{align*} Next we get \begin{align*} \left.\frac{\partial}{\partial t}C(z,t)\right|_{t=1} &=\left.\frac{\partial}{\partial t}\prod_{m=1}^\infty\frac{1+tz^m}{1-z^{2m}}\right|_{t=1}\\ &=\left.\prod_{m=1}^\infty\frac{1+tz^m}{1-z^{2m}}\sum_{k=1}^\infty\frac{z^k}{1+tz^k}\right|_{t=1}\\ &=\prod_{m=1}^\infty\frac{1+z^m}{1-z^{2m}}\sum_{k=1}^\infty\frac{z^k}{1+z^k}\\ &=\prod_{m=1}^\infty\frac{1}{1-z^{m}}\color{blue}{\sum_{k=1}^\infty\frac{z^k}{1+z^k}} \end{align*} We observe the claim is valid if the following holds: \begin{align*} \color{blue}{\sum_{k=1}^\infty\frac{z^{2k-1}}{1-z^{2k-1}} -\sum_{k=1}^\infty\frac{z^{2k}}{1-z^{2k}} =\sum_{k=1}^\infty\frac{z^k}{1+z^k}}\tag{2} \end{align*} We start with the left-hand side of (2) and get \begin{align*} \color{blue}{\sum_{k=1}^\infty}&\color{blue}{\frac{z^{2k-1}}{1-z^{2k-1}} -\sum_{k=1}^\infty\frac{z^{2k}}{1-z^{2k}}}\\ &=\sum_{k=1}^\infty\left(\frac{z^{k}}{1-z^{k}}-\frac{z^{2k}}{1-z^{2k}}\right) -\sum_{k=1}^\infty\frac{z^{2k}}{1-z^{2k}}\\ &=\sum_{k=1}^\infty\left(\frac{z^{k}}{1-z^{k}}-\frac{2z^{2k}}{1-z^{2k}}\right)\\ &=\sum_{k=1}^\infty\frac{z^k}{1-z^k}\left(1-\frac{2z^k}{1+z^k}\right)\\ &=\sum_{k=1}^\infty\frac{z^k}{1-z^k}\cdot\frac{1-z^k}{1+z^k}\\ &\color{blue}{=\sum_{k=1}^\infty\frac{z^k}{1+z^k}} \end{align*} and the claim follows.<|endoftext|> TITLE: Limit $\lim_{x\to 0} \frac{\tan ^3 x - \sin ^3 x}{x^5}$ without l'Hôpital's rule. QUESTION [5 upvotes]: I need to solve $$\lim_{x\to 0} \dfrac{\tan ^3 x - \sin ^3 x}{x^5}$$ I did like this: $\lim \limits_{x\to 0} \dfrac{\tan ^3 x - \sin ^3 x}{x^5} = \lim \limits_{x\to 0} \dfrac{\tan ^3 x}{x^5} - \dfrac{\sin ^3 x}{x^5}$ $=\dfrac 1{x^2} - \dfrac 1{x^2} =0$ But it's wrong. Where I have gone wrong and how to do it? REPLY [2 votes]: $$\frac{\tan ^3 x - \sin ^3 x}{x^5}=\tan^3x\dfrac{1 - \cos ^3 x}{x^5}=\left(\dfrac{\tan x}{x}\right)^3\left(\dfrac{(1-\cos x)(1+\cos x+\cos^2x)}{x^2}\right)=\\ =\left(\dfrac{\tan x}{x}\right)^3\left(\dfrac{1-\cos x}{x^2}\right)(1+\cos x+\cos^2x)=\\ =\left(\dfrac{\tan x}{x}\right)^3\left(\dfrac{\sin x}{x}\right)^2\left(\dfrac{1}{1+\cos x}\right)(1+\cos x+\cos^2x)$$ Now use the fundamental limits: $$\lim_{x\to 0}\frac{\sin x }{x}=1=\lim_{x\to 0}\frac{\tan x }{x}$$<|endoftext|> TITLE: An inequality for convolution using Hölder inequality QUESTION [5 upvotes]: Let $p , q , r$ be three real numbers in $[1 , + \infty]$ such that $\frac{1}{p} + \frac{1}{q} = 1 + \frac{1}{r}$. 
If $f \in L^p({\mathbb{R}}^n)$, $g \in L^q({\mathbb{R}}^n)$ and $r < \infty$, I have to prove that $$ {|(f*g)(x)|}^r \leq {\|f\|}_p^{r - p} {\|g\|}_q^{r - q} \int_{{\mathbb{R}}^n} {|f(y)|}^p {|g(x - y)|}^q dy $$ and the hint I was given is to use the Hölder inequality for three functions; my intuition says that $f$ and $g$ are two of those functions but I have to obtain the third. Can you help me with this problem please? Thank you very much. REPLY [4 votes]: Let us define $$\alpha = \left(\frac{1}{p} - \frac{1}{r}\right)^{-1}, \quad \beta = \left(\frac{1}{q} - \frac{1}{r}\right)^{-1}.$$ Notice that they are such that $$ \frac{1}{\alpha} + \frac{1}{\beta} + \frac 1 r = 1. $$ We have: $$\begin{align} |(f*g)(x)| &\leq \int_{\mathbb R^n} |f(y)|\,|g(x-y)|\,dy \\ &= \int_{\mathbb R^n} \left(|f(y)|^{1-\frac{p}{r}}\right) \, \left(|g(x-y)|^{1 - \frac{q}{r}}\right)\, \left(|f(y)|^{p} \, |g(x-y)|^{q}\right)^{\frac{1}{r}}\,dy \\ &= \int_{\mathbb R^n} \left(|f(y)|^{\frac{p}{\alpha}}\right) \, \left(|g(x-y)|^{\frac{q}{\beta}}\right)\, \left(|f(y)|^{p} \, |g(x-y)|^{q}\right)^{\frac{1}{r}}\,dy \end{align}$$ Now apply Hölder's inequality with exponents $\alpha, \beta, r$ to obtain: $$\begin{align} |(f*g)(x)| &\leq \|f\|_p^{\frac{p}{\alpha}} \, \|g\|_q^{\frac{q}{\beta}} \, \left(\int_{\mathbb R^n} |f(y)|^{p} \, |g(x-y)|^{q}\,dy\right)^{\frac{1}{r}}, \end{align}$$ from which the statement follows by raising both sides to the power $r$ and simplifying.<|endoftext|> TITLE: Given $ x_1 + x_2 + .... + x_{1994} = 1994$ find all $x_i$ QUESTION [5 upvotes]: $ x_1 + x_2 + ..... + x_{1994} = 1994$ $ x_1^3 + x_2^3 + .... +x_{1994}^3 = x_1^4 + x_2^4 + .... +x_{1994}^4$ Find all $x_i$, where all are real numbers. I tried to prove all are equal to $1$ using inequalities but couldn't do anything useful. I tried to prove the base case where there are only $x_1$ and $x_2$, but no luck that way. Any help would be appreciated. REPLY [5 votes]: We can apply the Tchebychev inequality for ordered sequences; it asserts that if for real numbers $a_1,...,a_n$ and $b_1,...,b_n$ we have $a_1≤...≤a_n$ and $b_1≤...≤b_n$ then $$ \sum_{i=1}^n a_ib_i≥\frac{1}{n}\left(\sum_{i=1}^n a_i\right)\left(\sum_{i=1}^n b_i\right), $$ with equality if and only if one of the sequences is constant. We then see that $x_1^3,...,x_{1994}^3$ and $x_1-1,...,x_{1994}-1$ are ordered in the same way and thus $$ 0=\sum_{i=1}^{1994} x_i^4-\sum_{i=1}^{1994} x_i^3=\sum_{i=1}^{1994} x_i^3(x_i-1)≥\frac{1}{1994}\left(\sum_{i=1}^n x_i^3\right)\left(\sum_{i=1}^n (x_i-1)\right)=0. $$ Hence one of the sequences is constant and therefore $x_1=...=x_{1994}=1$. REPLY [2 votes]: $$1994\sum_{i=1}^{1994}x_i^4\geq\sum_{i=1}^{1994}x_i\sum_{i=1}^{1994}x_i^3$$ The equality occurs for $x_i=1$. It is enough to prove the above inequality for all $x_i\geq0$, and there it is true by Muirhead.<|endoftext|> TITLE: If $abc=1$ so $\sum\limits_{cyc}\sqrt{\frac{a}{4a+2b+3}}\leq1$. QUESTION [11 upvotes]: Let $a$, $b$ and $c$ be positive numbers such that $abc=1$. Prove that: $$\sqrt{\frac{a}{4a+2b+3}}+\sqrt{\frac{b}{4b+2c+3}}+\sqrt{\frac{c}{4c+2a+3}}\leq1$$ The equality "occurs" also for $a\rightarrow+\infty$, $b\rightarrow+\infty$ and $a>>b$. I tried AM-GM, C-S and more, but without any success. REPLY [2 votes]: With the substitutions $a = \frac{x}{y}, \ b = \frac{y}{z}$ and $c = \frac{z}{x}$, it suffices to prove that $$\sum_{\mathrm{cyc}} \sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le 1.$$ For this, it suffices to prove the following termwise bound (the desired result then follows by summing cyclically):
$$\sqrt{\frac{zx}{4zx + 2y^2 + 3yz}} \le \frac{9x^2+9z^2 + 16xy + 4yz + 34zx}{18x^2+18y^2+18z^2+54xy+54yz+54zx}.$$ Squaring both sides, it suffices to prove that $f(x, y, z) \ge 0$ where \begin{align} f(x,y,z) &= 162 x^4 y^2-549 x^4 y z+504 x^4 z^2+576 x^3 y^3-452 x^3 y^2 z-1300 x^3 y z^2+1708 x^3 z^3+512 x^2 y^4\nonumber\\ &\quad +1144 x^2 y^3 z-1148 x^2 y^2 z^2-1582 x^2 y z^3+504 x^2 z^4-68 x y^4 z-440 x y^3 z^2-596 x y^2 z^3\nonumber\\ &\quad+180 x y z^4+32 y^4 z^2+192 y^3 z^3+378 y^2 z^4+243 y z^5. \end{align} We use the Buffalo Way. There are three possible cases: 1) $z = \min(x,y,z)$: Let $y = z + s, \ x = z+ t; \ s, t\ge 0$. We have $$f(z+t, z+s, z) = a_4z^4 + a_3z^3 + a_2z^2 + a_1z + a_0$$ where \begin{align} a_4 &= 5616 s^2-1728 s t+1728 t^2, \\ a_3 &= 3376 s^3+12864 s^2 t-1176 s t^2+1000 t^3, \\ a_2 &= 476 s^4+7400 s^3 t+10156 s^2 t^2-1376 s t^3+117 t^4, \\ a_1 &= 956 s^4 t+4920 s^3 t^2+1924 s^2 t^3-225 s t^4, \\ a_0 &= 512 s^4 t^2+576 s^3 t^3+162 s^2 t^4. \end{align} It is easy to prove that $a_4\ge 0, \ a_3\ge 0, \ a_2 \ge 0, \ a_0 \ge 0$ and $4a_2a_0 \ge a_1^2$. Thus, $f(z+t, z+s, z) \ge 0$. 2) $y = \min(x,y,z)$: Let $z = y+s, \ x = y+t; \ s, t\ge 0$. We have \begin{align} f(y+t, y, y+s) &= (5616 s^2-1728 s t+1728 t^2) y^4+(6400 s^3+6600 s^2 t+5088 s t^2+1000 t^3) y^3\nonumber\\ &\quad +(2277 s^4+6116 s^3 t+11626 s^2 t^2+3908 s t^3+117 t^4) y^2\nonumber\\ &\quad +(243 s^5+1188 s^4 t+5558 s^3 t^2+5840 s^2 t^3+459 s t^4) y+504 s^4 t^2+1708 s^3 t^3+504 s^2 t^4. \end{align} Clearly, $f(y+t, y, y+s)\ge 0$. 3) $x = \min(x,y,z)$: Similar.<|endoftext|> TITLE: Find orthocentre of a triangle given equations of its three sides, without finding triangle's vertices QUESTION [6 upvotes]: Find the coordinates of the orthocenter of the triangle, the equations of whose sides are $x + y = 1$, $2x + 3y = 6$, $4x − y + 4 = 0$. I know how to find it by first finding all the vertices (the orthocenter is the point of intersection of all altitudes). How can we find it without finding the coordinates of its vertices? REPLY [6 votes]: Here is a solution bypassing the computation of the vertices' coordinates, by using "pencils of lines". Given two lines $L_1$ and $L_2$ with resp. equations $$\begin{cases}u_1x+v_1y+w_1=0\\u_2x+v_2y+w_2=0\end{cases}$$ any line passing through the point $L_1 \cap L_2$ has general equation $$m(u_1x+v_1y+w_1)+(1-m)(u_2x+v_2y+w_2)=0, \ \ m \in \mathbb{R}.$$ The set of all these lines is called the "pencil of lines" defined by $L_1$ and $L_2$. Thus, the pencil of lines defined by $x+y-1=0$ and $2x+3y-6=0$ is: $$a(x+y-1)+(1-a)(2x+3y-6)=0$$ $$\tag{1} \iff \ (2-a)x+(3-2a)y+(5a-6)=0$$ Among these lines, one is the altitude.
This altitude is characterized by the fact that its normal vector is orthogonal to the normal vector of the third line: $$\binom{2-a}{3-2a} \perp \binom{4}{-1} \ \ \iff \ \ (2-a)4+(3-2a)(-1)=0 \ \ \iff \ \ a=\frac52$$ (recall: a line with equation $ux+vy+w=0$ has normal vector $\binom{u}{v}.$) Plugging this value of $a$ in (1) gives the equation of the altitude: $$\tag{2}x+4y-13=0.$$ Working in the same way with a group of two other sides: $$b(2x+3y-6)+(1-b)(4x-y+4)=0$$ $$\tag{3} \iff \ \ (4-2b)x+(4b-1)y+(4-10b)=0$$ $$\binom{4-2b}{4b-1} \perp \binom{1}{1} \ \ \iff \ \ (4-2b)1+(4b-1)1=0 \ \ \iff \ \ b=-\frac32$$ Plugging this value of $b$ in (3) gives the equation of the second altitude: $$\tag{4}7x-7y+19=0.$$ Solving the system formed by (2) and (4) yields the coordinates of the orthocentre $$H\left(\frac{3}{7},\frac{22}{7}\right).$$<|endoftext|> TITLE: What is the Arctangent of Tangent? QUESTION [5 upvotes]: This is probably just me misunderstanding trig properties. I remember from my trig days that $\tan(\arctan(x)) = x$ But I can't remember if that holds the other way. Can someone help me out? Is this valid: $$\arctan(\tan(\theta)) = \theta$$ REPLY [9 votes]: It is true that $\tan\arctan x=x$, for every $x$. The converse is not true and it cannot be, because the tangent is not an injective function. Recall that $\arctan x$ returns a number (an angle if you prefer) in the interval $(-\pi/2,\pi/2)$. So we have the equality $$ \arctan(\tan\theta)=\theta $$ if and only if $\theta\in(-\pi/2,\pi/2)$. A formula can be given for any $\theta$ in the domain of the tangent: you just need to "reduce" the angle to the right interval by subtracting an integral multiple of $\pi$ so that $$ -\frac{\pi}{2}<\theta-k\pi<\frac{\pi}{2} $$ which is equivalent to $$ -\frac{1}{2}<\frac{\theta}{\pi}-k<\frac{1}{2} $$ or $$ 0<\frac{\theta}{\pi}+\frac{1}{2}-k<1 $$ or $$ k<\frac{\theta}{\pi}+\frac{1}{2}<k+1, $$ that is, $k=\left\lfloor \frac{\theta}{\pi}+\frac{1}{2}\right\rfloor$, and therefore $$ \arctan(\tan\theta)=\theta-\pi\left\lfloor \frac{\theta}{\pi}+\frac{1}{2}\right\rfloor. $$
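This closed form is easy to test numerically (plain Python; the test angles are arbitrary, chosen away from odd multiples of $\pi/2$):

```python
import math

def reduced(theta):
    # theta - pi * floor(theta/pi + 1/2), the claimed value of arctan(tan(theta))
    return theta - math.pi * math.floor(theta / math.pi + 0.5)

for theta in [-7.3, -2.0, -0.4, 0.9, 2.5, 10.0]:
    assert math.isclose(math.atan(math.tan(theta)), reduced(theta), abs_tol=1e-9)
```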
<|endoftext|> TITLE: How can a Fields Medallist be 'not very good at logic'? QUESTION [6 upvotes]: Source 2: Sir Michael Atiyah on math, physics and fun. I think the way a lot of people think about mathematics, since it's all based on logic, it can't be unpredictable. Well, the idea that mathematics is synonymous with logic is a great ridiculous statement that some people make. Mathematics is very difficult to define, actually, what constitutes mathematics. Logical thinking is a key part of mathematics, but it's by no means the only part. You've got to have a lot of input and material from somewhere, you've got to have ideas coming from physics, concepts from geometry. You've got to have imagination, you're going to use intuition, guesswork, vision, like a creative artist has. In fact, proofs are usually only the last bit of the story, when you come to tie up the... dot the i's and cross the T's. Sometimes the proof is needed to hold the whole thing together like the steel structure of a building, but sometimes you've stopped putting it together, and the proof is just the last little bit of polish on the surface.   So most of the time mathematicians are working, they're concerned with much more than proofs, they're concerned with ideas, understanding why this is true, what leads where, possible links. You play around in your mind with a whole host of ill-defined things.   And I think that's one thing the field can get wrong when they're being taught to students. They can see a very formal proof, and they can see, this is what mathematics is. My story I can tell. When I was a student I went to some lectures on analysis where people gave some very formal proofs about this being less than epsilon and this is bigger than that. Then I had private supervision from a Russian mathematician called Besicovitch, a good analyst, and he'd draw a little picture and say, this -- this is small, this -- this is very small. Now that's the way an analyst thinks. None of this nonsense about precision. Small, very small. You get an idea what is going on. And then you can work it out afterwards. And people can be misled, if you read books, textbooks or go to lectures, and you see this very formal approach and you think, gosh that's the way I gotta think, and they can be turned off by that because that's not an interesting thing, mathematics, you see. You aren't thinking at that point imaginatively.   But you mustn't get carried away by the other extreme. You mustn't go all the time with airy-faery ideas that you can't actually write and solve a problem. That's a danger. But you've got to have a balance between being able to be disciplined and solve problems and apply logical thinking when necessary. And at other times you've got to be able to freely float in the atmosphere like a poet and imagine the whole universe of possibilities, and hope that eventually you come down to Earth somewhere else. So it's very exciting to be a practicing mathematician or a physicist. Physicists in principle have to tie themselves down to Earth more than mathematicians, and one day look at experimental data. Well mathematicians have to tie themselves down in other ways too. A proof is one of the things that instantly ties them down. But it's a mistake to think that mathematics and logic are the same. They overlap in important ways, but it's a big mistake. $\color{red}{\text{I'm not very good at logic.}}$ Per the italicised sentences above, I understand that math needs some logic, but is more than logic; not all mathematicians need to specialise in Mathematical Logic. Assumption: Sir Michael means the red (last sentence) literally, and not humbly or self-effacingly. Then how is the red true? An excellent mathematician must have mastered the fundamentals of logic, e.g. undergraduate Mathematical Logic. So does the red instead mean research-level Mathematical Logic? REPLY [4 votes]: I think the sentiment expressed by the author is pretty clear: being a good mathematician is being able to go back and forth between being able to be really precise (logic), and being able to explore (and maybe even 'intuit') interesting relationships and connections, play with new ideas, speculate, hypothesize, etc. But, as such, I do believe Sir Michael did (probably intentionally) overstate his case a bit: as he says himself, at some point you need to take these new ideas and suspected theorems and make them hard, and that requires being good at logic.
But in the end you're right: to say that he is not good at logic at all, seems overstated, though most likely intentionally so and for dramatic effect: I think it's his way of trying to get those of us who do tend to stay in the purely 'precise' domain to 'venture out' a little more!<|endoftext|> TITLE: $\forall \ x>0: \lim_{\ n\to\infty}f(a_nx)=0$, then $\lim_{\ x\to\infty}f(x)=0$ QUESTION [5 upvotes]: Prove or Disprove: Let $(a_n)_{n=1}^{\infty}$ be an increasing monotonic sequence of positive real number such that: $a_n\to\infty \ $ and $\ \frac{a_{n+1}}{a_n}\to 1$. If $f:\mathbb{R}\to\mathbb{R}$ is a continuous function such that: $\forall \ x>0: \lim_{\ n\to\infty}f(a_nx)=0$, then $\lim_{\ x\to\infty}f(x)=0$. REPLY [4 votes]: The claim is true, and the proof proceeds along similar lines to the classical $ a_n = n $ case. Fix $ \epsilon > 0 $, and define the sets $$ A_n = \{ x \in \mathbb R^{+} : |f(a_n x)| \leq \epsilon \} $$ $$ B_n = \bigcap_{m \geq n} A_m $$ By continuity of $ f $, both $ A_n $ and $ B_n $ are closed. Furthermore, the given condition implies that $$ \mathbb R^{+} = \bigcup_{n} B_n $$ By the Baire category theorem, it follows that one of the $ B_n $, say $ B_M $, contains a nonempty open interval, say $ (a, b) $. Pick $ \delta > 0 $ such that $ b(1 - \delta) > a $. By the convergence $ a_{n+1} / a_n \to 1 $, we can pick $ N $ sufficiently large such that for all $ n > N $, we have $ a_n > (1 - \delta) a_{n+1} $. Thus, we have that $$ a_n b > a_{n+1} (1 - \delta) b > a_{n+1} a $$ and the intersection $ (a_n a, a_n b) \cap (a_{n+1} a, a_{n+1} b) $ is nontrivial for $ n > N $. The divergence $ a_n \to \infty $ then implies that $$ S = \bigcup_{n > \max\{M, N\}} (a_n a, a_n b) = (C, \infty) $$ for some $ C $. By definition of $ B_M $ and the inclusion $ (a, b) \subset B_M $, it follows that for all $ x \in S $, we have that $ |f(x)| \leq \epsilon $. Since $ \epsilon > 0 $ was arbitrary, it follows that $ f(x) \to 0 $ as $ x \to \infty $.<|endoftext|> TITLE: Can Peano's Axioms be derived from ZFC without AOI? QUESTION [5 upvotes]: From "Axiom of Infinity," Wikipedia: [T]here is a set I (the set which is postulated to be infinite), such that the empty set is in I and such that whenever any x is a member of I, the set formed by taking the union of x with its singleton {x} is also a member of I.[...] The infinite set I is a superset of the natural numbers. To show that the natural numbers themselves constitute a set, the axiom schema of specification can be applied to remove unwanted elements, leaving the set N of all natural numbers. I presume that some version of Peano's Axioms will be derived in this process of extracting the natural numbers. It has been suggested by a contributor at sci.math (not a crank), that it may be possible to derive Peano's Axioms using ZFC without AOI. Can this be true? EDIT: Specifically, without citing AOI, can we prove the existence of a set $N$, function $S: N \to N$ and $0\in N$ such that: $\forall x,y \in N: [S(x)=S(y) \implies x=y]$ $\forall x\in N: S(x)\ne 0$ $\forall P\subset N: [0\in P \land \forall x\in P: [S(x)\in P] \implies P=N]$ REPLY [4 votes]: EDIT: For the new version of the question, the answer is no: it is a good exercise to show that $V_\omega$, the class of hereditarily finite sets, is a counterexample. Note that while of course there is a class in $V_\omega$ with the desired properties - namely, $\omega$ itself - there is no such set. 
And this is an important distinction, with the class version of the question having a number of subtleties; see my original answer, which is below the fold. Incidentally, ZF-AOI is equiconsistent with (first-order) PA. PA proves that PA has no finite model; roughly, this means that models of ZF-AOI which "come from" models of PA (essentially, which satisfy $\neg AOI$ - in particular $V_\omega$ "comes from" $\mathbb{N}$ itself) do not have an inductive set. As usual with this type of question, the answer depends on exactly what you mean. Here's a thing that is true: If $M$ is a model of ZF-AOI, then $FinOrd^M$ - the set (class, as far as $M$'s concerned, but set externally) of $M$-finite ordinals is a model of first-order $PA$ (with $+, \times$ defined as usual in this context). Now, I suspect you're actually interested in second-order PA, which I'll denote "$PA_2$". But this produces some subtleties. Namely, second-order PA only makes sense in a model of set theory. One might respond instinctively, "Well, look at $M$'s version of $PA_2$!" But there's a problem with that - remember that $FinOrd^M$ is (a priori) a class in $M$, rather than a set! So when we ask "Does $FinOrd^M$ satsisfy $PA_2^M$?", what we really mean is "Does $FinOrd^M$ satisfy the 'class version' of $PA_2^M$?". That is, I think the right way to ask the question you're getting at is the following: Let $M\models ZF-AOI$. Then is it necessarily the case that, for all formulas $\varphi(x, \overline{y})$ in the language of set theory and all parameters $\overline{a}\in M$, we have $$M\models \varphi(0, \overline{a})\wedge \forall x\in FinOrd[\varphi(x, \overline{a})\implies \varphi(x+1, \overline{a})]\implies \forall x\in FinOrd(\varphi(x,\overline{a}))?$$ (The point is that the classes of the form "$\{x\in FinOrd^M: M\models\varphi(x,\overline{a})\}$" represent all the sub"sets" of $FinOrd^M$ which $M$ can "see", and so they're the ones it makes sense to query with respect to satisfying the right version of $PA_2$. The answer to this question is yes, since ZF-AOI proves that the ordinals are well-ordered and that every finite ordinal has a predecessor. However, there are a couple caveats (for simplicity I assume the $M$'s below satisfy $\neg$AOI): First, note that while $M$ can express "$FinOrd\models \varphi$" for a specific (first-order in the language of set theory - in particular, no quantifying over classes!) sentence $\varphi$, $M$ can't express "$FinOrd\models\Gamma$" for $\Gamma$ a set of sentences. So in a certain sense, $M$ doesn't know that $FinOrd^M$ is a model of the $M$-class-version of $PA_2$. Second, note that if $\varphi$ is a nonstandard first-order sentence in $M$ - that is, a nonstandard finite ordinal in $M$ which $M$ thinks is the Goedel number of a sentence - then $M$ can't express "$FinOrd\models\varphi$"! This isn't really an issue here, but it's worth keeping in mind. Finally, it's worth noting that ZF-AOI does not prove the Completeness Theorem - in particular, a model of ZF-AOI may not contain a set model of even first-order PA, even if it thinks that theory is consistent! ($V_\omega$ is an example of this.)<|endoftext|> TITLE: How to evaluate $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$ QUESTION [7 upvotes]: Is there an easy way to evaluate the integral $\int_0^\pi \cos(x) \cos(2x) \cos(3x) \cos(4x)\, dx$? I know that I can plugin the $e$-function and use the linearity of the integral. However this would lead to 16 summands which I really dont want to calculate separately. 
REPLY [2 votes]: Using Werner's formula with $a\ge b>0$ so that $a\pm b$ are integers $$\int_0^\pi\cos ax\cos bx\ dx=\dfrac12\int_0^\pi\{\cos(a+b)x+\cos(a-b)x\} dx=\cdots=\begin{cases}0&\mbox{if } a\ne b\\ \dfrac\pi2 & \mbox{if } a=b\end{cases}$$ Now, $(2\cos x\cos4x)(2\cos2x\cos3x)=(\cos3x+\cos5x)(\cos x+\cos5x)$ $=\cos3x\cos x+\cos x\cos5x+\cos3x\cos5x+\cos5x\cdot\cos5x$ So, $\displaystyle4\int_0^\pi\cos x\cos2x\cos3x\cos4x\ dx=\dfrac\pi2$ for $a=b=5$<|endoftext|> TITLE: Let $\omega=e^{i2\pi/2015}$, evaluate $\sum_{k=1}^{2014}\frac{1}{1+\omega^k+\omega^{2k}}$ QUESTION [9 upvotes]: Let $\omega=e^{i2\pi/2015}$, evaluate: $$S=\sum_{k=1}^{2014}\frac{1}{1+\omega^k+\omega^{2k}}$$ My Attempt: Clearly, $1,\omega,\omega^2,\omega^3,...,\omega^{2014}$ are the $2015$th roots of unity. Thus, $\omega^{2014}=\bar\omega$, $\omega^{2013}=\bar\omega^2$, and so on. Note that $$\dfrac{1}{1+\omega^k+\omega^{2k}}+\dfrac{1}{1+\bar\omega^k+\bar\omega^{2k}}=\dfrac{1}{1+\omega^k+\omega^{2k}}+\dfrac{\omega^{2k}}{1+\omega^k+\omega^{2k}}=\dfrac{1+\omega^{2k}}{1+\omega^k+\omega^{2k}}$$ hence $$S=\sum_{k=1}^{1007}\frac{\omega^k+\omega^{-k}}{1+\omega^k+\omega^{-k}}=\sum_{k=1}^{1007}\frac{2\cos\left(\frac{2k\pi}{2015}\right)}{2\cos\left(\frac{2k\pi}{2015}\right)+1}$$ How to proceed from here? It appears to be a standard trigonometric expression, but I am not able to recall it. REPLY [7 votes]: For simplicity I will write $n=2015$. Since $X^n-1=\prod\limits_{k=0}^{n-1}(X-\omega^k)$ we conclude that $$\frac{nX^{n-1}}{X^n-1}=\sum_{k=0}^{n-1}\frac{1}{X-\omega^k}$$ Substituting $X=j=e^{2i\pi/3}$ and $X= \overline{j}$ and then subtracting we get $$\frac{nj^{n-1}}{j^n-1}-\frac{n\bar{j}^{n-1}}{\bar{j}^n-1}=\sum_{k=0}^{n-1}\left(\frac{1}{j-\omega^k}-\frac{1}{\bar{j}-\omega^k}\right)=(\bar{j}-j)\sum_{k=0}^{n-1}\frac{1}{1+\omega^k+\omega^{2k}} $$ Finally, since $n=2015\equiv 2\mod 3$ we see that $j^{n-1}=j$ and $j^n=j^2=\bar{j}$, so we get $$\frac{1}{-i\sqrt{3}}\left(\frac{nj}{j^2-1}-\frac{n\bar{j}}{j-1}\right)=\frac{1}{3}+\sum_{k=1}^{n-1}\frac{1}{1+\omega^k+\omega^{2k}}$$ The final step is easy and we get $$\sum_{k=1}^{n-1}\frac{1}{1+\omega^k+\omega^{2k}}=\frac{2n-1}{3}=1343$$ This conclusion is valid for every $n$ which is equal to $2\pmod3$.<|endoftext|> TITLE: A month for Olympiad-level geometry. QUESTION [18 upvotes]: In one month, I'll be writing a second round of a mathematical Olympiad. My biggest concern is geometry. While I think I'm doing pretty well in number theory, algebra, combinatorics etc., I still can't say I really understood Olympiad geometry (and bashing is not always possible). Obviously, there are always problems one can and one can't solve, regardless of the training, but my question is: What are the best books/sets of exercises/articles that I could study from to get a greater grasp of Olympiad-level geometry a month before the competition? I mean, a full month, $30$ days, no school or any other activities. I must say, I'd prefer ones that don't use overly sophisticated notations and don't go into super-advanced theorems you happen to use once or never. And I'm aware that a month is not really a long time and this question may seem a bit like: "quickly, what should I learn to win an Olympiad". I don't mean it that way. I'm just asking - what should I study Olympiad geometry from for the greatest benefit, given that I am fully determined? EDIT: I'd like to address some comments regarding taking Olympiad too seriously.
While I agree to a great extent with this reasoning and I appreciate your concern, I'd like to clarify: it's not that I'm going to be working $\frac{24}7$ on maths only. I'd just like to work with the best resources available and thus not waste my time on exercises that aren't of much benefit to my geometry skills. REPLY [10 votes]: I'm supporting Wolfram's comment: you shouldn't take the Math Olympiad too seriously. Take this as fun/relaxation. I would suggest the following books (please read the whole post) - Geometry Revisited by Coxeter & S.L. Greitzer. This book is a must. You should have completed it at least. This contains the very basics of Olympiad-level problems. 103 Trigonometry Problems from USA Math Camp by Titu Andreescu. This is also a must. All the books from USA Math Camp are really cool, i.e. 101 Problems in Algebra, 102 Combinatorial Problems, 103 Trigonometry Problems from USA Math Camp, 104 Number Theory Problems, all by Titu Andreescu. You must take a look at these books, as geometry is not all there is in Math Olympiads. In our country we are assured that if anyone completes these 4 books, he doesn't need to worry about Nationals. Problems in Plane and Solid Geometry by Viktor Prasolov. This book covers all sides of plane and solid geometry. The book is 495 pages long, consisting only of problems. So, I don't think you can cover this in 30 days only. But you should solve at least chapters 1 to 5. By this your idea about the theorems will become very clear. And the solutions in this book are really cool. Geometry Unbound by Kiran S. Kedlaya. At last this. This book is specially written for students preparing for the IMO. You should touch this book after reading the previous books I suggested. Our country's coaches said that if you can solve at least 35% of this book, no one will be able to defeat you and take your place on the IMO Team. Geometry is not all of the Math Olympiad. Almost all coaches say that you may not solve the Algebra, Combinatorics or Number Theory problems, but you should be able to solve the Geometry one. Here are problem-solving books that you should try, to get a better place in the Olympiad: Number Theory: Structures, Examples, and Problems, also by Titu Andreescu. I think this is the best book on Number Theory I have seen. It covers all sides of Number Theory. This is also written mainly for students preparing for the IMO. Principles and Techniques in Combinatorics; this covers many things from beginner to advanced. But I think 102 Combinatorial Problems will be sufficient, if you have previous knowledge. Functional Equations, also by Titu Andreescu. All you need about Functional Equations. Elementary Number Theory by David M. Burton. It contains most of the theorems you need. But I prefer Titu Andreescu's book more. If you have time you can take a look at this after Titu's book. Then you'll need to see only the exercises. As you don't have so much time, here is my suggestion for the order in which you should read the books - One fourth of the problems in the Olympiad will be from Geometry. It is not a good idea to spend all your time on Geometry. I think you should read the 4 books from USA Math Camp. Each of them will need about 3 or 4 days if you try heart and soul. You should be able to read all the theory of each of these books in 12 hours, then 24 hours to solve the Introductory problems, and then 36 hours to solve the Advanced problems. So, here nearly 14 days will be gone. After this you need to see which area you are weak in.
Then read books on that area in these 4 days. Now every 4 days you need to cover a whole topic. Spend more time where you are weak and less time where you are strong. If you see you are weak in geometry, then first read the book by Viktor Prasolov. I don't expect that one can cover this even in 6 months. Try heart and soul. For a second round you will not need all of it. Try to solve 5 chapters. You must look at Titu Andreescu's Number Theory book. He is Director of MOP, Head Coach of the USA IMO Team and Chairman of the USAMO. His book is also great. But you don't need all of them to be solved. Just see the techniques used in the book, the approach to solving NT problems. You should take this book after everything is covered on the other sides. And try this book till the end. As I am a contest programmer I didn't need Combinatorics much. It always seemed easy to me, and I think it is easy for everyone. You shouldn't spend more than 2 days on it. I don't expect that after all this you will have any time left. But if you have time, see Geometry Unbound. Try to solve at least 10%. If you can't solve a problem you don't need to get stuck at that problem. Just continue to the next problem. Note: You should see more and more SOLUTIONS even if you solved the problems. You need to see the techniques. Seeing solutions has a bad effect, but I think not in this area. Whether or not you solve the problem, see the solution / technique. Good luck! Wish you get a chance to be on the IMO Team :) P.S: Ignore my bad English. I am not responsible if the links I provided violate any copyright law, as I just searched Google and gave the first links that came up.<|endoftext|> TITLE: Keyhole Contour with Square Root Branch Cut on Imaginary Axis QUESTION [5 upvotes]: Consider integrating the following function of a complex variable, $z$, $$ f(z)=\frac{z e^{irz}}{\sqrt{z^2+m^2}}, $$ around the contour shown here. It seems straightforward to show that the infinite radius arc segments, $C_1$ and $C_2$, vanish and we are left with $$ 0=I_1+I_2+I_3, $$ via the residue theorem. Now, I expect that the integrals along each side of the branch cut which runs from $im$ to $i\infty$ will differ "by a phase" so that they will add together rather than cancel. However, I don't see how to show this explicitly: \begin{align} I_2 & = \int^m_\infty \frac{Re^{i\pi/2}}{\sqrt{R^2e^{i\pi}+m^2}}e^{irRe^{i\pi/2}}e^{i\pi/2}dR \\ & = \int^\infty_m \frac{R}{\sqrt{-R^2+m^2}}e^{-rR}dR \\ \end{align} But \begin{align} I_3 &= \int^\infty_m \frac{Re^{i3\pi/2}}{\sqrt{R^2e^{i3\pi}+m^2}}e^{irRe^{i3\pi/2}}e^{i3\pi/2}\,dR\\ &= -\int^\infty_m \frac{R}{\sqrt{-R^2+m^2}}e^{rR}dR, \end{align} so it appears $I_2\ne I_3$ which I believe to be wrong. REPLY [4 votes]: $$I_2 = i \int_{\infty}^m dy \, \frac{i y \, e^{-r y}}{+i \sqrt{y^2-m^2}} $$ $$I_3 = i \int_m^{\infty} dy \, \frac{i y \, e^{-r y}}{-i \sqrt{y^2-m^2}} $$ The difference in phase comes from how we define the behavior of the square root in the vicinity of the branch cut $[i m,i \infty)$. So the contributions from $I_2$ and $I_3$ add. One should not necessarily ignore the contribution around the branch point where $z=i m + \epsilon e^{i \phi}$, $\phi \in [\pi/2,-3 \pi/2]$: $$I_4 = i \epsilon \int_{\pi/2}^{-3 \pi/2} d\phi \, e^{i \phi} \frac{\left (i m +\epsilon e^{i \phi} \right ) e^{i r \left (i m +\epsilon e^{i \phi} \right )} }{\sqrt{m^2+\left (i m +\epsilon e^{i \phi} \right )^2}}$$ which indeed vanishes as $\epsilon \to 0$.
Thus, by Cauchy's theorem we have the relation $$\int_{-\infty}^{\infty} dx \frac{x \, e^{i r x}}{\sqrt{m^2+x^2}} = i 2 \int_m^{\infty} dy \, \frac{y \, e^{-r y}}{\sqrt{y^2-m^2}} = i 2 m K_1(r m)$$<|endoftext|> TITLE: Semigroup analogue to the classification of finite simple groups? QUESTION [11 upvotes]: Is there a semigroup analogue to the classification of finite simple groups? If so, what are some of the major results? --Edit, reference links--- Classification of Finite Simple Groups Special Classes of Semigroups One of my semigroup equation sequences in OEIS, g(f(x)) = f(f(f(x))) REPLY [5 votes]: In semigroup theory, the term "simple" has a specialized meaning. If you interpret it to mean that the semigroup has no quotients other than itself and the trivial semigroup, then there is a complete classification. In semigroup theory, these are called "congruence-free semigroups". (In universal algebra, "simple" means no non-trivial quotients.) There is a complete classification of finite congruence-free semigroups. They are either: a two-element semigroup a finite simple group a completely 0-simple semigroup. Most of these are not congruence-free, but the ones that are have a complete description. They are basically parametrized by 0-1 matrices. The proof that a congruence-free semigroup that is not covered by cases 1 or 2 must have a zero is sketched in this Math Overflow answer. (A zero is just an element $0$ such that $0a = a0 = 0$ for all $a$.) I don't know of a great online source for the third case. This preprint states the theorem, with reference, as Theorem 3.1, but it's not the main subject of the paper.<|endoftext|> TITLE: Number of ways to form a partition from another. QUESTION [6 upvotes]: Suppose I'm given two partitions of $n = a_1+a_2+\cdots+a_r$ and $n=b_1+b_2+\cdots+b_s$ with $s<r$.<|endoftext|> TITLE: How do I find the equation of a curve formed by a series of lines tangent to a curve? QUESTION [6 upvotes]: When I was young I used to draw a sequence of straight lines on graph paper which made a curve after I finished. On a coordinate plane, the lines would be equivalent to starting at $y=9$ on the $y$ axis and ending at $x=1$ on the $x$ axis. With each line, I would decrease the $y$ by one unit and increase the $x$ by one unit. Here is a desmos graph that illustrates. https://www.desmos.com/calculator/u4ea8swmfg Here is a similar example where the angle between the lines is 60 degrees. https://drive.google.com/open?id=0B5QHq_oPha0ybGdrbFNhUHRPOGc I think each line is basically a tangent line along the curve produced by taking a first derivative. Are these types of curves parabolas or possibly hyperbolae? And how could I find the equation of such a curve? REPLY [4 votes]: The curve you have described is in fact a parabola; specifically, it is the parabola generated as the quadratic Bézier curve with the control points as the ends of your lines. The equation of this parabola is, for three points $a, b, c$: $$f(t)=(1-t)^2a+2t(1-t)b+t^2c$$ This is a parametric formula; you enter a $t$ value and it gives the $x$ and $y$ values of a point on the curve, as opposed to entering an $x$ value and getting the $y$ value out. In the first case, with the right angle, the parabola is simply $$f(t)=\left(10t^2,10(1-t)^2\right)$$ shown in red on this figure.
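As a sketch of this construction (in R; the control points $a=(0,10)$, $b=(0,0)$, $c=(10,0)$ are an assumed reading of the right-angle case, with lines from $(0,10-k)$ to $(k,0)$), one can draw the lines and overlay the parametric curve to see them trace the same parabola:

# draw the "string art" lines and overlay the quadratic Bezier parabola
plot(NULL, xlim = c(0, 10), ylim = c(0, 10), asp = 1, xlab = "x", ylab = "y")
for (k in 1:9) segments(0, 10 - k, k, 0)         # the family of tangent lines
t <- seq(0, 1, length.out = 200)
lines(10 * t^2, 10 * (1 - t)^2, col = "red")     # f(t) = (10 t^2, 10 (1 - t)^2)

The middle control point $b$ sits at the corner where the two axes meet, which is why the $t(1-t)$ term drops out of the right-angle formula.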
The $60^\circ$ angle looks like this.<|endoftext|> TITLE: Find all prime solutions of equation $5x^2-7x+1=y^2.$ QUESTION [10 upvotes]: Find all prime solutions of the equation $5x^2-7x+1=y^2.$ It is easy to see that $y^2+2x^2=1 \mod 7.$ Since $\mod 7$-residues are $1,2,4$ it follows that $y^2=4 \mod 7$, $x^2=2 \mod 7$ or $y=2,5 \mod 7$ and $x=3,4 \mod 7.$ In the same way from $y^2+2x=1 \mod 5$ we have that $y^2=1 \mod 5$ and $x=0 \mod 5$ or $y^2=-1 \mod 5$ and $x=4 \mod 5.$ How to put together the two cases? A computer search finds two prime solutions $(3,5)$ and $(11,23).$ REPLY [2 votes]: Completing the square and multiplying by $20$, we have $$ (10 x - 7)^2 - 20 y^2 = 29$$ Thus $z = 10 x - 7$ and $w = 2 y$ are solutions of the Pell-type equation $$ z^2 - 5 w^2 = 29$$ The positive integer solutions of this can be written as $$\pmatrix{z\cr w\cr} = \pmatrix{9 & 20\cr 4 & 9\cr}^n \pmatrix{7\cr 2\cr} \ \text{or}\ \pmatrix{9 & 20\cr 4 & 9\cr}^n \pmatrix{23\cr 10\cr}$$ for nonnegative integers $n$. Now you want $w$ to be even and $z \equiv 3 \mod 10$. All the solutions will have $w$ even, and $z$ alternately $\equiv \pm 3 \mod 10$. Thus for $n$ even you get integers for $x,y$ with $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 & 20\cr 4 & 9\cr}^n \pmatrix{23\cr 10\cr}$$ and for $n$ odd, $$ \pmatrix{z_n\cr w_n\cr} = \pmatrix{9 & 20\cr 4 & 9\cr}^n \pmatrix{7\cr 2\cr}$$ You do get primes for $n=0$ ($z_0 = 23, w_0 = 10, x_0 = 3, y_0 = 5$) and $n=1$ ($z_1 = 103, w_1 = 46, x_1 = 11, y_1 = 23$). In general, $x_n \equiv 0 \mod 3$ for $n \equiv 0$ or $3 \mod 4$. $x_n \equiv 0 \mod 17$ for $n \equiv 2$ or $3 \mod 6$. $x_n \equiv 0 \mod 5$ or $y_n \equiv 0 \mod 5$ for $n \equiv 0,3, 4, 5, 6, 9 \mod 10$. $x_n \equiv 0 \mod 11$ for $n \equiv 1, 8 \mod 10$. $x_n \equiv 0 \mod 13$ for $n \equiv 3, 5, 6, 7, 8, 10 \mod 14$. $y_n \equiv 0 \mod 23$ for $n \equiv 1, 2, 5, 6 \mod 8$. And every integer $n$ is in at least one of these classes. We conclude there are no other prime solutions.<|endoftext|> TITLE: Is this hierarchy of manifolds correct? QUESTION [5 upvotes]: Question: a Hausdorff differentiable manifold (locally Euclidean space): $$ \text{is metrizable} \iff \text{is paracompact} \iff \text{admits a Riemannian metric} \,?$$ Does one also have for a locally Euclidean Hausdorff space (not necessarily differentiable): $$\text{second countable} \iff \text{metrizable with countably many connected components}? $$ Thus second countability is strictly stronger for such spaces than metrizability/paracompactness/existence of a Riemannian metric? (According to this question, it seems they are all equivalent when connectedness is assumed, and according to this question even equivalent to the existence of a universal cover when connected.) Attempt: By the Smirnov metrization theorem, a locally metrizable space (e.g. a locally Euclidean space) is metrizable if and only if it is Hausdorff and paracompact. According to Wikipedia, Riemannian manifolds are metrizable. (Is this only true for connected Riemannian manifolds, or can we use the trick where we make the metric less than 1 on each connected component and 1 for distances between points in different components? Or perhaps only when there are at most countably many components?) Finally, according to Wikipedia, any differentiable paracompact manifold admits a Riemannian metric (I'm not sure if the differentiable hypothesis is necessary -- this question seems related).
Thus (at least for connected, differentiable Hausdorff manifolds) "admits Riemannian metric" $\implies$ "metrizable" $\implies$ paracompact $\implies$ "admits Riemannian metric". Context: This question is motivated by how the definition of manifold used by Spivak in Comprehensive Introduction to Differential Geometry (see here for a related question) is different from the one used by Lee in Introduction to Smooth Manifolds (where second countability is required). In particular, I had thought that the definition my professor was using (Hausdorff, metrizable, locally Euclidean) was equivalent to Lee's, until Spivak started mentioning a bunch of counterexamples which I recognized as not being second countable. Although I don't remember if my professor specified countably many components, which would rule out most of Spivak's counterexamples consisting of uncountable spaces with the discrete topology and metric. REPLY [2 votes]: It seems like the answer to many of these questions can be found on p.459, Appendix A, of Spivak's Comprehensive Introduction to Differential Geometry, 3rd edition 1999. Specifically, the Theorem appears to address my confusion regarding how the cardinality of the number of components of a manifold plays into these properties/definitions. Note that this seems to hold even for topological manifolds, i.e. not necessarily smooth ones. Theorem The following properties are equivalent for any manifold $M$: (a) Each component of $M$ is $\sigma$-compact. (b) Each component of $M$ is second countable (has a countable base for the topology). (c) $M$ is metrizable. (d) $M$ is paracompact. (In particular, a compact manifold is metrizable.)<|endoftext|> TITLE: Spectrum of the derivative operator QUESTION [12 upvotes]: Let $X$ be the subspace of $L^2(\Bbb{R})$ which consists of continuously differentiable functions with derivatives in $L^2(\Bbb{R})$. Then it asks to perform spectral analysis on the differential operator $A=\frac{d}{dx}$, i.e. find the spectrum of $A$. The statement of the question itself is vague. From my understanding, I think the question means the following: we need to find the spectrum of the linear operator $$A:X\to S,\quad Af=f',$$ where $S=C(\Bbb{R})\cap L^2(\Bbb{R})$, and both $X,S$ are endowed with the $L^2$ norm $\|\cdot\|_2$. About the definition of spectrum I have: Let $T: V\to W$ be a bounded linear operator between two normed linear spaces; we say that $\lambda\in\mathbb{C}$ is in the point spectrum of $T$ if $\lambda I-T$ is not injective the residual spectrum of $T$ if $\lambda I-T$ is injective but the range of $\lambda I-T$ is not dense in $W$ the continuous spectrum of $T$ if $\lambda I-T$ is injective and the range of $\lambda I-T$ is dense in $W$, but the inverse $(\lambda I-T)^{-1}$ defined on the range is unbounded. My attempt: First I found the point spectrum $\sigma_p(A)$. For any $\lambda\in\Bbb{C}$, let $f\in Ker(\lambda I-A)$. This implies that $\lambda f(t)-f'(t)=0$ for all $t\in\Bbb{R}$. Then we can see that $f(t)=ce^{\lambda t}$ for $c\in\Bbb{C}$. Since $f$ is square integrable on $\Bbb{R}$, this forces $c=0$. Hence we see that $f\equiv 0$ and therefore $\sigma_p(A)=\emptyset.$ Then I was trying to find the residual and continuous spectra, and I got stuck at this point. Any help is greatly appreciated.
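A numerical illustration may help before the formal answer (an R sketch; the grid size, the central-difference scheme, and the periodic wrap-around are all assumptions made for the experiment, not part of the question). The discretized $\frac{d}{dx}$ is a skew-symmetric matrix, so its eigenvalues are purely imaginary, consistent with a spectrum contained in $i\mathbb{R}$:

# central-difference approximation of d/dx on a periodic grid of n points
n <- 200
h <- 2 * pi / n
D <- matrix(0, n, n)
for (j in 1:n) {
  D[j, (j %% n) + 1]       <-  1 / (2 * h)   # neighbour j+1, wrapping around
  D[j, ((j - 2) %% n) + 1] <- -1 / (2 * h)   # neighbour j-1, wrapping around
}
ev <- eigen(D)$values
max(abs(Re(ev)))   # ~1e-14: the eigenvalues lie on the imaginary axis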
REPLY [16 votes]: $\newcommand{\id}{I}$ As it was mentioned in the comments, the domain where you defined the operator is not correct - if you take $C^1$-functions with derivatives in $L^2$ the domain will be "too small" in the sense that $A-\lambda I$ does not have closed range and therefore cannot be surjective for any $\lambda$ (The range would be the continuous functions in $L^2$; it's easy to show that this is not closed under the $L^2$-norm.). This means that $A-\lambda I$ cannot be invertible for any $\lambda$, hence $\sigma(A)=\mathbb{C}$. In this case, the range of $A-\lambda I$ is the set of continuous $L^2$-functions, which are known to be dense in $L^2$, hence $\sigma(A)=\sigma_c(A)$ since you already showed there cannot be any eigenfunctions. Now considering the "correct" domain for this operator: Naturally, this will be given by the Sobolev space $H^1(\mathbb{R})$. Since we are in one dimension, this space can be characterized as a subspace of the absolutely continuous $L^2$-functions on $\mathbb{R}$, since for these functions you may define a derivative almost everywhere. The subspace you consider as a domain is then just the functions such that this a.e. derivative ("weak derivative") again is $L^2$-integrable. I will now multiply your operator by $-i$, since this operator is a bit easier to handle: You may check or look up that the operator $\hat{p}=-i\frac{d}{dx}=-iA$ is self-adjoint with this domain, i.e. $\hat{p}=\hat{p}^\dagger$ where the domain of $\hat{p}^\dagger$ is canonically defined. On a side note, this operator corresponds to the momentum in quantum mechanics. You may also check that for densely defined linear operators, we have $$ R(A)^\perp=\operatorname{Kern}(A^\dagger). $$ Using this relation, you can in particular prove that self-adjoint operators have a spectrum which is fully contained in the real line, and that self-adjoint operators have empty residual spectrum. You can also look up this stuff in any basic textbook on functional analysis which contains some operator theory, e.g. Rudin or the first volume of Reed-Simon. I will now prove that $\sigma (\hat{p})=\mathbb{R}$, which implies that $\sigma(A)=i\mathbb{R}$. Let $\lambda\in\mathbb{R}$; we show that $\hat{p}-\lambda I$ is not invertible. Consider a nonzero smooth function $f\in C_0^{\infty}(\mathbb{R})\subset D(\hat{p})$, i.e. $f$ has compact support, and define $g_k(x)=\frac{1}{\sqrt{k}}e^{i\lambda x}f(k^{-1}x)$. Clearly, $g_k\in D(\hat{p})$ and you may check that $\|f\|=\|g_k\|$. Furthermore, you may calculate $\|(\hat{p}-\lambda\id) g_k\|=\frac{1}{k}\|f'\|$. If $\hat{p}-\lambda\id$ had a bounded inverse, then there would hold $$ \|f\|=\|g_k\| =\|(\hat{p}-\lambda\id)^{-1}(\hat{p}-\lambda\id)g_k\| $$ $$ \leq\|(\hat{p}-\lambda\id)^{-1}\| \|(\hat{p}-\lambda\id)g_k\| = k^{-1}\|(\hat{p}-\lambda\id)^{-1}\| \|f'\| $$ for any(!) $k\in\mathbb{N}$, which would imply $f$ being constant zero. Since $f$ was chosen not to be constant zero, this is a contradiction, and hence $\mathbb{R}\subset\sigma(\hat{p})$. Since $\hat{p}$ is self-adjoint, the spectrum cannot be any larger, hence the claim follows. Since the residual spectrum is empty and you already showed that the point spectrum is empty, this implies that the continuous spectrum of $A=\frac{d}{dx}$ is $i\mathbb{R}$ and all other spectra are empty. You may of course ask yourself "how the f does one come up with such a function?", the reasoning is the following: For $\hat{p}$, $e^{i\lambda x}$ is an eigenfunction to $\lambda$, but it does not lie in $L^2$.
However, it is not that bad of a function in the sense that you can approximate it sufficiently well with functions from the domain of $\hat{p}$ - this is exactly what $g_k$ does; the function $f$ serves as a cutoff function. Diving further into the theory, you will find that $g_k$ is what we call a "Weyl sequence" for $\hat{p}$ at $\lambda$.<|endoftext|> TITLE: $\lim_{r\to 0}\frac{\operatorname{vol}f(B(a;r))}{\operatorname{vol}B(a;r)}=|\det f'(a)|$ QUESTION [6 upvotes]: I'm trying to solve this question: Let $U\subset \mathbb R^m$ be an open set and $f:U\to \mathbb R^m$ a function of class $C^1$. Suppose there is $a\in U$ such that $f'(a):\mathbb R^m\to \mathbb R^m$ is an isomorphism. Show $$\lim_{r\to 0}\frac{\operatorname{vol}f(B(a;r))}{\operatorname{vol}B(a;r)} = |\det f'(a)|$$ My attempt Using the inverse function theorem, there is $\delta>0$ such that $f_{|B(a,\delta)}$ is a diffeomorphism (henceforth for simplicity let's still call this restriction $f$). Suppose from now on $r\lt \delta$. Since $f$ is a diffeomorphism and $B(a,r)$ is compact we can use change of variables: \begin{align} & \operatorname{vol}f(B(a,r)) \\[10pt] = {} & \int_{f(B(a;r))}1\cdot dy \\[10pt] = {} & \int_{B(a;r)}1\bigl(f(x)\bigr)\cdot|\det f'(a)| \, dx \\[10pt] = {} & |\det f'(a)|\int_{B(a;r)}1 \, dx \\[10pt] = {} & |\det f'(a)|\operatorname{vol}B(a;r) \end{align} So I think I have proved that $\dfrac{\operatorname{vol}f(B(a;r))}{\operatorname{vol}B(a;r)}=|\det f'(a)|$ which is stronger than what the question asks. What is wrong with my answer and how can I correct that? REPLY [3 votes]: Actually you have $$ \text{vol}(f(B(a,r)))=\int_{f(B(a,r))}1\, dy=\int_{B(a,r)}\lvert\det f'(y)\rvert\,dy. $$ Now you need to use the continuity of $y\mapsto\lvert\det f'(y)\rvert$ on the ball $B(a,r)$ to conclude that $$\lim_{r\to 0}\frac{\text{vol}(f(B(a,r)))}{\text{vol}(B(a,r))}=\lvert\det f'(a)\rvert.$$ Namely, given $\varepsilon>0$, there exists $r_0>0$ such that $$ \lvert\det f'(y)-\det f'(a)\rvert<\varepsilon $$ for $y\in B(a,r)$ and $r<r_0$.<|endoftext|> TITLE: A known closed form for Borchardt mean (generalization of AGM) - why doesn't it work? QUESTION [9 upvotes]: There is a curious four parameter iteration introduced by Borchardt: $$a_{n+1}=\frac{a_n+b_n+c_n+d_n}{4} \\ b_{n+1}=\frac{\sqrt{a_n b_n}+\sqrt{c_n d_n}}{2} \\ c_{n+1}=\frac{\sqrt{a_n c_n}+\sqrt{b_n d_n}}{2} \\ d_{n+1}=\frac{\sqrt{a_n d_n}+\sqrt{b_n c_n}}{2} $$ Apparently, the limit of this iteration, denoted hereafter $B(a_0,b_0,c_0,d_0)$, has a closed form in terms of a certain double integral. However, this 'closed form' doesn't check out when I try implementing it in Mathematica. For a complete description see this paper, section 2.5. For more general information see this paper. Here is the description of the closed form from the first link above. First, define: $$A = a_0 + b_0 + c_0 + d_0 \\ B = a_0 + b_0 - c_0 - d_0 \\ C = a_0 - b_0 + c_0 - d_0 \\ D = a_0 - b_0 - c_0 + d_0$$ (Note the problem - some of the numbers above can be negative). Then define: $$B_1=\frac{\sqrt{A B}+\sqrt{C D}}{2} \\ B_2=\frac{\sqrt{A B}-\sqrt{C D}}{2} \\ C_1=\frac{\sqrt{A C}+\sqrt{B D}}{2} \\ C_2=\frac{\sqrt{A C}-\sqrt{B D}}{2} \\ D_1=\frac{\sqrt{A D}+\sqrt{B C}}{2} \\ D_2=\frac{\sqrt{A D}-\sqrt{B C}}{2} $$ (Note the problem - some of the numbers above can be complex).
Then define: $$\Delta=\sqrt[4]{ABCDB_1C_1D_1B_2C_2D_2}$$ And finally, the main parameters: $$\alpha_0=\frac{A C B_1}{\Delta} \\ \alpha_1=\frac{C C_1 D_1}{\Delta} \\ \alpha_2=\frac{A C_2 D_1}{\Delta} \\ \alpha_3=\frac{B_1 C_1 C_2}{\Delta}$$ And define a function: $$R(x):=x(x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3)$$ According to the paper, the limit of the iteration above is equal to: $$\frac{\pi^2}{B(a_0,b_0,c_0,d_0)}=\int _0^{\alpha_3}\int _{\alpha_2}^{\alpha_1}\frac{x-y}{\sqrt{R(x) R(y)}}dxdy \tag{1}$$ I tried to implement this 'closed form' in Mathematica. However, for most initial conditions, even simple ones, Mathematica has trouble computing the integral and gives complex values anyway. How can I tell if $(1)$ is correct? Is there a typo somewhere in my formulas? Or maybe there is some typo in the linked paper? I want to know if $(1)$ is correct, and if not what is the correct form. I don't need the full proof. I tried to find Borchardt's original paper, but I couldn't. And it's not in English anyway. REPLY [2 votes]: Ok, my answer got a little bit out of hand, but I hope it helps to figure out what the problem is. In my opinion the problems are mainly due to typos that are carried over. My answer is based on two articles. One is the original article of Borchardt, where the link was provided in Ewan's answer. [1] Borchardt, K. W. (1876): Hr. Borchardt las über das arithmetisch-geometrische Mittel aus vier Elementen. Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (Berlin: Verl. d. Kgl. Akad. d. Wiss., 1877), Berlin. (http://bibliothek.bbaw.de/bibliothek-digital/digitalequellen/schriften/anzeige/index_html?band=09-mon/1876&seite:int=646) [2] J. M. Borwein and P. B. Borwein. On the Mean Iteration $(a,b) \leftarrow \left(\frac{a+3b}{4},\frac{\sqrt{ab}+b}{2}\right)$. Mathematics of Computation, 53(187):311–326, 1989. (https://pdfs.semanticscholar.org/5ead/200192d495859702eb47bc54babb20ab75ae.pdf) I didn't double check the mathematics (i.e. the reasoning and proofs) in [1] and [2], but I compared the relevant formulas and implemented them numerically. This finally led to a solution that I think is numerically satisfying (but see below). Basically, [2] uses the same definitions as in the question, so I'll start from [1] and highlight the differences. The first important difference is the conditions on $a,b,c,e$. In [1] (on top of page 612) the real numbers $a,b,c,e$ have to be such that (i) $a > b > c > e > 0$ and (ii) $a - b - c + e > 0$. I will call this condition $(C)$. On the other hand in [2] page 324 we have $a > b > c > e > 0$ such that $ae - bc > 0$, and I'll call this condition $(C^*)$. The conditions $(C)$ and $(C^*)$ are clearly there to prevent problems with negative square roots etc., which should already solve some of the problems. However, it seems (numerically) that $(C^*)$ is the one that should finally be used (see below). From the original article [1], page 612, I take $a_0 = a, b_0 = b, c_0 = c, e_0 = e$ and \begin{align} a_{n+1} &= (a_n+b_n+c_n+e_n)/4,\\ b_{n+1} &= (\sqrt{a_nb_n}+\sqrt{c_ne_n})/2,\\ c_{n+1} &= (\sqrt{a_nc_n}+\sqrt{b_ne_n})/2,\\ e_{n+1} &= (\sqrt{a_ne_n}+\sqrt{b_nc_n})/2, \end{align} and I set $g = \lim_{n \to \infty} a_n$ (the limits are all the same, so I pick wlog the one for $a_n$).
From [1], page 617, I also take \begin{align} A &= a+b+c+e,\\ B &= a+b-c-e,\\ C &= a-b+c-e,\\ E &= a-b-c+e, \end{align} and \begin{align} B_1 &= (\sqrt{AB}+\sqrt{CE})/2,\\ B_2 &= (\sqrt{AB}-\sqrt{CE})/2,\\ C_1 &= (\sqrt{AC}+\sqrt{BE})/2,\\ C_2 &= (\sqrt{AC}-\sqrt{BE})/2,\\ E_1 &= (\sqrt{AE}+\sqrt{BC})/2,\\ E_2 &= (\sqrt{AE}-\sqrt{BC})/2, \end{align} where it is important to notice that [1] defines $E_1 = (\sqrt{AE}-\sqrt{BC})/2 = E_2$ which I believe is a typo and hence I switch the $-$ for a $+$. So far we are in agreement with the definitions in the question and in [2], but considering the values of $\alpha_i$, $i=0,1,2,3$ there are differences. From [1], page 618, I take \begin{align} \Delta = (abce B_1 C_1 E_1 B_2 C_2 E_2)^{1/4}, \end{align} which is different from the question or the definition in [2], see page 325, in that I use the starting values $a,b,c,e$ and not the derived values $A,B,C,E$ (as stated in [1]). The values of $\alpha_i$ also differ between [1] and [2]. From [1], page 618, I take \begin{align} \alpha_0 &= \frac{acB_1}{\Delta},\\ \alpha_1 &= \frac{c C_1 E_1}{\Delta},\\ \alpha_2 &= \frac{a C_2 E_1}{\Delta},\\ \alpha_3 &= \frac{B_1 C_1 C_2}{\Delta}, \end{align} where again the starting values are used, and not $A,B,C,E$ as in the question or in [2], page 325. All sources however agree on the definition of $R(x)$ as \begin{align} R(x) = x (x-\alpha_0)(x-\alpha_1)(x-\alpha_2)(x-\alpha_3). \end{align} Now [1], page 618, states that \begin{align} 4 \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where it is important to notice the factor $4$ that is missing in [2] and in the question. My numerical experiments show that neither [1] nor [2] is correct. In fact I think the following statement should hold: Theorem: For $a > b > c > e > 0$ such that $ae - bc > 0$ we have \begin{align} \frac{\pi^2}{g} = \int_{0}^{\alpha_3}\int_{\alpha_2}^{\alpha_1} \frac{x-y}{\sqrt{R(x)R(y)}} dx dy, \end{align} where $g = \lim_{n \to \infty} a_n$ and the $\alpha_i$, $i=0,1,2,3$ are defined as in [1], i.e., as above. It is important to notice that the statement uses condition $(C^*)$ from [2], page 324, uses the definition from [1] regarding $\Delta$ and $\alpha_i$, $i=0,1,2,3$, and does not include the factor $4$ which is present in [1] but not in [2]. In summary, it's a wild mix of [1] and [2]... I got to this basically following [1] and then using $(C^*)$ instead of $(C)$ and realizing that the value was $4$ times too high. You can double check this with your favorite numerical software tool; I have added my R implementation below. Finally, I have to say that the values don't match perfectly. The numerical integration of the rational function seems to be difficult, but the numbers are always (for random starting values obeying $(C^*)$) close (enough) together in my opinion. As long as I enforce $(C^*)$ I also don't experience any trouble with negative square roots or the like (but this is possible if $(C)$ is used).
R Implementation

#non-random starting values that work with Borwein & Borwein page 324 Theorem 3 condition
a <- 7
b <- 4
c <- 3
e <- 2

#get some random starting values that are OK
goodStart <- FALSE
while (goodStart == FALSE) {
  X <- sort(rexp(4, rate = 20))
  #Borwein & Borwein page 324 Theorem 3 condition
  if (X[4] * X[1] - X[2] * X[3] > 0) {
    a <- X[4]
    b <- X[3]
    c <- X[2]
    e <- X[1]
    goodStart = TRUE
  }
}

#Borwein & Borwein page 324 Theorem 3 condition
if (a * e - b * c <= 0) { stop("Borwein & Borwein coefficient condition not ok") }
#Borchardt condition - seems to be wrong
#if (a - b - c + e <= 0) { stop("coefficient condition not ok") }

#iterate the Borchardt mean
a_i <- a
b_i <- b
c_i <- c
e_i <- e
for (i in 1:1000) {
  tmp_a <- (a_i + b_i + c_i + e_i) / 4
  tmp_b <- (sqrt(a_i * b_i) + sqrt(c_i * e_i)) / 2
  tmp_c <- (sqrt(a_i * c_i) + sqrt(b_i * e_i)) / 2
  tmp_e <- (sqrt(a_i * e_i) + sqrt(b_i * c_i)) / 2
  a_i <- tmp_a
  b_i <- tmp_b
  c_i <- tmp_c
  e_i <- tmp_e
}
#print(max(a_i, b_i, c_i, e_i) - min(a_i, b_i, c_i, e_i))
#just pick any of the four
g <- a_i

#compute closed form solution
I1 <- pi^2 / g
print("sequence")
print(I1)

#page 617 bottom
A <- a + b + c + e
B <- a + b - c - e
C <- a - b + c - e
E <- a - b - c + e
B1 <- (sqrt(A * B) + sqrt(C * E)) / 2
B2 <- (sqrt(A * B) - sqrt(C * E)) / 2
C1 <- (sqrt(A * C) + sqrt(B * E)) / 2
C2 <- (sqrt(A * C) - sqrt(B * E)) / 2
E1 <- (sqrt(A * E) + sqrt(B * C)) / 2  #this seems to be wrong in the original
E2 <- (sqrt(A * E) - sqrt(B * C)) / 2
Delta <- (a * b * c * e * B1 * C1 * E1 * B2 * C2 * E2)^(1/4)
alpha0 <- a * c * B1 / Delta
alpha1 <- c * C1 * E1 / Delta
alpha2 <- a * C2 * E1 / Delta
alpha3 <- B1 * C1 * C2 / Delta

integrand <- function(x, y, param) {
  alpha0 <- param$alpha0
  alpha1 <- param$alpha1
  alpha2 <- param$alpha2
  alpha3 <- param$alpha3
  Rx <- x * (x - alpha0) * (x - alpha1) * (x - alpha2) * (x - alpha3)
  Ry <- y * (y - alpha0) * (y - alpha1) * (y - alpha2) * (y - alpha3)
  z <- (x - y) / sqrt(Rx * Ry)
  return(z)
}
p <- list(alpha0 = alpha0, alpha1 = alpha1, alpha2 = alpha2, alpha3 = alpha3)

#Gauss-Legendre quadrature for the double integral
library(statmod)
gauss <- gauss.quad(500, kind = "legendre")
w <- gauss$weights
x <- gauss$nodes
upper_x <- alpha1
lower_x <- alpha2
upper_y <- alpha3
lower_y <- 0
I <- 0
for (i in 1:length(x)) {
  for (j in 1:length(x)) {
    xx <- (upper_x - lower_x) * x[i] / 2 + (upper_x + lower_x) / 2
    yy <- (upper_y - lower_y) * x[j] / 2 + (upper_y + lower_y) / 2
    ww <- w[i] * w[j]
    I <- I + ww * integrand(xx, yy, p)
  }
}
I <- (upper_x - lower_x) * (upper_y - lower_y) * I / 4
print("numerical integral:")
print(I)<|endoftext|> TITLE: On the integral $\int\sqrt{1+\cos x}\text{d} x$ QUESTION [8 upvotes]: I've been wondering about the following integral recently: $$ I = \int\sqrt{1+\cos x}\,\text{d}x $$ The way I integrated it is I used the identity $\sqrt{1+\cos x} = \sqrt2\cos\frac x2$, and so $$ I = 2\sqrt2\sin\frac x2 + C $$ The problem is that $\sqrt{1+\cos x}$ is actually integrable over the entire real line, but the derivative of $2\sqrt2\sin\frac x2$ is only equal to $\sqrt{1+\cos x}$ in certain intervals. This is because the actual identity is $\sqrt{1+\cos x} = \sqrt2\left|\cos\frac x2\right|$. Now, I wasn't exactly sure how to integrate the absolute value, so I thought I would "fix" the function after integration. The first fix we can do is making sure that the sign of the result of integration is correct: $$ I = 2\sqrt2\sin\frac x2\text{sgn}\cos\frac x2 + C $$ The problem with just $\sin\frac x2$ was that sometimes it was "flipped": on certain intervals it was actually the antiderivative of $-\sqrt{1+\cos x}$ (this is because we dropped the absolute value signs). There is one further problem with this, however.
The above function is not continuous, meaning it's not continuously differentiable with derivative equal to $\sqrt{1+\cos x}$ everywhere. Namely, it is discontinuous at $x=(4n\pm1)\pi$ where $n\in\mathbb{Z}$. I noticed, however, that the limits of the derivative on either side of $x=(4n\pm1)\pi$ existed and were equal to each other. Hence, I figured that I could somehow "stitch" these continuous sections end to end and get a continuous result whose derivative was $\sqrt{1+\cos x}$ everywhere. The resulting function I got was $$ I = 2\sqrt2\sin\frac x2\text{sgn}\cos\frac x2 + 4\sqrt2\left\lfloor\frac1{2\pi}x+\frac12\right\rfloor + C $$ Now, I was wondering, is there any way to arrive at this result just by integrating $\sqrt{1+\cos x}$ using the usual techniques? The method I used can be boiled down to "integrating" then "fixing", but I'm just wondering if you can arrive at a result that is continuous and differentiable on the entire real line by doing just the "integrating" part. Any help would be appreciated. Thanks! Edit: To be clear, I'm not looking for exactly the function above, but rather simply any function that is "nice", continuous and differentiable on $\mathbb{R}$, and has derivative equal to $\sqrt{1+\cos x}$ everywhere, and which is attainable through "simple" integration methods. REPLY [2 votes]: The function to be integrated is $$ \sqrt{1+\cos x}=\sqrt{2}\left|\cos\frac{x}{2}\right| $$ so we can as well consider the simpler $$ \int\lvert\cos t\rvert\,dt $$ or, with $t=u+\pi/2$, the even simpler $$ \int|\sin u|\,du $$ One antiderivative is $$ \int_0^u\lvert\sin v\rvert\,dv $$ Note that the function has period $\pi$, so let $u=k\pi+u_0$, with $0\le u_0<\pi$. Then $$ \int_0^u\lvert\sin v\rvert\,dv= \int_0^{k\pi}\lvert\sin v\rvert\,dv+ \int_{k\pi}^{k\pi+u_0}\lvert\sin v\rvert\,dv \\=2k+\int_0^{u_0}\sin v\,dv= 2k+1-\cos(u-k\pi) $$ Now do the back substitution; write $k=\lfloor u/\pi\rfloor$ and $u_0=u\bmod\pi$, if you prefer.<|endoftext|> TITLE: Proving property of Fibonacci sequence with group theory QUESTION [11 upvotes]: I stumbled upon this problem in a chapter about group theory, and I can't seem to get a grasp on it: Let $p$ be a prime and $F_1 = F _2 = 1, F_{n +2} = F_{n +1} + F_n$ be the Fibonacci sequence. Show that $F_{2p(p^2−1)}$ is divisible by $p$. The book gives the following hint: Look at the group of $2×2$ matrices mod $p$ with determinant $± 1$ I know I have to use Lagrange's theorem at some point, but how do I establish the link between the Fibonacci sequence and the matrices? I mean, I know that there's a special $2×2$ matrix such that it yields the $n^{th}$ Fibonacci number when taking the $n^{th}$ power, I even used it to obtain a general formula for the numbers during my linear algebra course, but how can I use that to prove this? REPLY [8 votes]: Call the group $G = \left\{A \in (\mathbb{Z}/p\mathbb{Z})^{2\times 2} \ \middle\vert\ \det A = \pm 1\right\}$. First, we count the order of $G$. We have elements $\left(\array{a & b \\ c & d}\right)$, with $a,b,c,d \in \mathbb{Z}/p\mathbb{Z}$ and $ad - bc \equiv \pm 1$ ($\equiv$ means congruent modulo $p$). Let us first count the elements with determinant one. $\mathbb{Z}/p\mathbb{Z}$ is a field, which means that every element except $0$ has a multiplicative inverse. Therefore, $ad - bc \equiv 1$ is equivalent to $(a \equiv 0\ \land\ c \equiv -b^{-1})\ \lor\ (a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))$.
Counting the possibilities gives $$ \underbrace{(a \equiv 0\ \land\ c \equiv -b^{-1})}_{\text{$d$: $p$ choices, $b$: $p - 1$ choices}}\ \lor\ \underbrace{(a \not\equiv 0 \land d \equiv a^{-1}(1 + bc))}_{\text{$a$: $p - 1$ choices, $b,c$: $p$ choices}} $$ for a total of $p(p-1) + (p-1)p^2 = p(p+1)(p-1) = p(p^2 - 1)$ elements with determinant $1$. By symmetry (swapping $a$ with $b$ and $d$ with $c$) there has to be an equal number of elements with determinant $-1$. So the order of $G$ is $|G| = 2p(p^2 - 1)$. Now, let $M = \left(\array{1 & 1 \\ 1 & 0}\right)$. You already know that $M^n = \left(\array{F_{n+1} & F_{n} \\ F_{n} & F_{n-1}}\right)$. For $M$, as for any element in a finite group, we know that $M^{|G|} = I$ (consider the cyclic subgroup $\langle M\rangle$ generated by $M$ and use Lagrange's theorem). This fact translates to $$ \left(\array{F_{2p(p^2 - 1) + 1} & F_{2p(p^2 - 1)} \\ F_{2p(p^2 - 1)} & F_{2p(p^2 - 1) - 1}}\right) \equiv \left(\array{1 & 0 \\ 0 & 1}\right) \mod{p}.$$ The off-diagonal components of this equation are what you wanted to show.<|endoftext|> TITLE: What restrictions on decimal expansions lead to countably infinite subsets of the reals? QUESTION [5 upvotes]: Consider real numbers $S : x \in [0,1]$ whose decimal expansions are $x = 0.d_1 d_2 d_3 \ldots$. Now institute various exclusions, listed below. I am interested to learn of general principles that will allow me to conclude that $S_X$ is either countable or uncountable. $S_{!5}$: The decimal representation excludes all $5$'s. $S_{5^{\textrm{th}}}$: In the decimal representation, every $5^{\textrm{th}}$ digit is $5$. $S_{\textrm{odd}}$: The decimal representation excludes all even digits: $0,2,4,6,8$. $S_{01}$: The decimal representation excludes all but the two digits: $0$ and $1$. $S_{1}$: The decimal representation uses (after the $0.$) only the digit $1$: $0.1, 0.11, 0.111, \ldots$. $S_{\ge}$: The decimal representation is non-decreasing: successive digits are the same or larger. E.g., $.1144456777777788999\ldots$. $S_{k\pm}$: The decimal representation $k$-oscillates: The sequence consists of $k$ or more digits (non-strictly) increasing, followed by a sequence of $k$ or more digits that (non-strictly) decreases, and so on. E.g., for $k=5$, $0.11339 \; 966432 \; 567777 \; \ldots$. $S_{\textrm{max/min}}$: The decimal representation is finitely oscillatory: there are only a finite number of digit minima and maxima in the sequence of digits. Perhaps each case must be handled separately? I am particularly interested in difficult, borderline cases that are not so straightforward to settle, which could be used as good student exercises to distinguish countable from uncountable. REPLY [3 votes]: Don't know that I'd call it a general principle, but most of these follow from remapping the restricted set to the reals, perhaps in another base. $S_{!5}$: The decimal representation excludes all $5$'s. Decrement all digits larger than $5$, then the set becomes the representation of the entire $[0,1]$ in base $9$, thus uncountable. $S_{5^{\textrm{th}}}$: In the decimal representation, every $5^{\textrm{th}}$ digit is $5$. Drop every $5^{th}$ digit, then the set becomes the decimal representation of the entire $[0,1]$, thus uncountable. $S_{\textrm{odd}}$: The decimal representation excludes all even digits: $0,2,4,6,8$. Remap the $5$ remaining digits $1,3,5,7,9$ to $0,1,2,3,4$, then the set becomes the representation of the entire $[0,1]$ in base $5$, thus uncountable.
$S_{01}$: The decimal representation excludes all but the two digits: $0$ and $1$. That gives the representation of the entire $[0,1]$ in base $2$, thus uncountable. $S_{1}$: The decimal representation uses (after the $0.$) only the digit $1$: $0.1, 0.11, 0.111, \ldots$. This can be indexed by the number of decimal digits, thus countable. $S_{\ge}$: The decimal representation is non-decreasing: successive digits are the same or larger. E.g., $.1144456777777788999\ldots$. The sequence of digits will become constant eventually, thus countable. $S_{k\pm}$: The decimal representation $k$-oscillates: The sequence consists of $k$ or more digits (non-strictly) increasing, followed by a sequence of $k$ or more digits that (non-strictly) decreases, and so on. E.g., for $k=5$, $0.11339 \; 966432 \; 567777 \; \ldots$. Drop every other group of $k$ digits, and truncate the remaining ones to the first $k$ digits, which leaves groups of exactly $k$ increasing decimal digits. Consider each such group as one digit in some base made up of all increasing sequences of $k$ decimal digits. Then the set becomes the representation of the entire $[0,1]$ in that base, thus uncountable. $S_{\textrm{max/min}}$: The decimal representation is finitely oscillatory: there are only a finite number of digit minima and maxima in the sequence of digits. The sequence of digits will become constant eventually, thus countable. [ EDIT ] Numbered the points to stay in sync with the OP, and made a minor change to #7 which I had first (mis)read as exactly $k$ digits.<|endoftext|> TITLE: Does $I \varprojlim M_n = \varprojlim I M_n$? QUESTION [6 upvotes]: Let $I$ be an ideal of a Noetherian ring $A$ and $(M_n)_{n \geq 0}$ an inverse system of finitely-presented $A$-modules whose inverse limit can also be finitely presented. Is it then the case that $$I\varprojlim M_n = \varprojlim IM_n ?$$ (In the hoped-for application, $A$ is a power series ring in finitely many indeterminates over a subring of $\mathbb Q$.) REPLY [3 votes]: I don't know if you have sorted this out so far (in case you have, feel free to give a comment upon what you have found out). I thought about your question for a while and here is a version of what you're asking (some facts are probably well known and elementary, so if you already know them skip them!) I misread previously what you had written so this is another proof: I will assume moreover that the above inverse system $(M_n)_{n \geq 0}$ consists of finite-length, flat modules such that the inverse limit is flat and that the ideal $I$ is finitely generated. It's true that the inverse limit of flats need not be flat (in fact in our case finitely presented $+$ flat implies projective, and inverse limits of projectives need not be projective even in ideal cases as for abelian groups. A standard counterexample is the inverse system $(\prod_{i=1}^{m}\mathbb{Z}, \pi_m)$, where the maps are projections). In that case we have a characterization for any commutative ring $A$ and a given $A$-module $M$, of the form $$ M \textit{ is flat iff the natural map } I \otimes_A M \to IM \textit{ is an isomorphism for all ideals } I \trianglelefteq A . $$ Although tensor products preserve direct limits, they do not necessarily preserve inverse limits. In fact the latter is true whenever the inverse system consists of modules of finite length (that is, Noetherian and Artinian modules, and in our case just Artinian suffices), and the tensor product is taken with respect to a finitely presented $A$-module.
Now, any Noetherian ring is coherent, and coherence is equivalent to the fact that finitely generated ideals are finitely presented. So our ideal in $A$ is finitely presented; hence from our assumption that the inverse limit is flat we get the following isomorphism $$ I \otimes_A (\varprojlim_{n\in \mathbb{N}} M_n) \cong \varprojlim_{n\in \mathbb{N}}(I \otimes_A M_n),$$ which implies for the left-hand side $I \otimes_A (\varprojlim_{n\in \mathbb{N}} M_n) \cong I(\varprojlim_{n\in \mathbb{N}} M_n)$, whilst for the right one $\varprojlim_{n\in \mathbb{N}}(I \otimes_A M_n) \cong \varprojlim_{n\in \mathbb{N}} IM_n$. By the latter your answer follows. I couldn't figure out a proof without this assumption, or a concrete counterexample to prove the opposite. In fact I believe that this is probably not true in the general case. If you have found out something more general or a counterexample please do let me know.<|endoftext|> TITLE: Proof of Hilbert's Nullstellensatz QUESTION [6 upvotes]: Let $k$ be an algebraically closed field and $$K=\frac{k[x_1,\dots,x_n]}{m}$$ be a finitely generated $k$-algebra, where $m$ is a maximal ideal. $K$ is algebraic over $k$. Then why is $k$ isomorphic to $K$? Sorry if this is obvious. REPLY [3 votes]: This answer is from a book: Elementary Algebraic Geometry, Klaus Hulek, p. 25: Theorem: Let $k$ be a field with infinitely many elements, and let $A=k[a_1,...,a_n]$ be a finitely generated $k$-algebra. If $A$ is a field, then $A$ is algebraic over $k$. In the proof of Hilbert's Nullstellensatz we have that $K$ is a field and a finitely generated $k$-algebra, so the theorem implies that $K$ is algebraic over $k$. Then we have $k \hookrightarrow k[x_1,...x_n] \to k[x_1,...x_n]/m =K $, so $K$ is an algebraic extension of $k$ and hence $K\cong k$, since $k$ is algebraically closed.<|endoftext|> TITLE: What is a homomorphism and what does "structure preserving" mean? QUESTION [22 upvotes]: I am not a mathematician and have not formally studied mathematics so I hope someone will be able to explain this to me in a way that I can understand given my level of mathematical understanding. I have read the other posts about this question but the answers seem to assume some knowledge that I don't have. I am learning about multilinear algebra and topological manifolds. It is said that "linear maps" (of which I understand the definition) between vector spaces are so-called "structure preserving" and are therefore called "homomorphisms". Could someone explain in both an intuitive way, and with a more formal definition, what it means for a "structure to be preserved"? REPLY [22 votes]: There are many sorts of "structures" in mathematics. Consider the following example: On a certain set $X$ an addition is defined. This means that some triples $(x,y,z)$ of elements of $X$ are "special" in so far as $x+y=z$ is true. Write $(x,y,z)\in{\tt plus}$ in this case. To be useful this relation ${\tt plus}\subset X^3$ should satisfy certain additional requirements, which I won't list here. Assume now that we have a second set $Y$ which carries an addition ${\tt plus'}$ (satisfying the extra requirements as well), and that a certain map $$\phi:\quad X\to Y,\qquad x\mapsto y:=\phi(x)$$ is defined by a formula, some text, or geometric construction, etc. Such a map is called a homomorphism if $$(x_1,x_2,x_3)\in{\tt plus}\quad\Longrightarrow\quad\bigl(\phi(x_1),\phi(x_2),\phi(x_3)\bigr)\in{\tt plus'}\tag{1}$$ for all triples $(x_1,x_2,x_3)$.
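To make $(1)$ concrete, here is a toy R check (the example $X=Y=\mathbb{Z}$ with ordinary addition and $\phi(x)=2x$ is an illustration, not part of the answer): special triples are indeed mapped to special triples.

# verify: x1 + x2 == x3 implies phi(x1) + phi(x2) == phi(x3) for phi(x) = 2x
phi <- function(x) 2 * x
ok <- TRUE
for (x1 in -5:5) for (x2 in -5:5) {
  x3 <- x1 + x2                               # (x1, x2, x3) is a special triple
  ok <- ok && (phi(x1) + phi(x2) == phi(x3))  # its image is special as well
}
ok   # TRUE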
In $(1)$ the idea of "structure preserving" works only in one direction: special $X$-triples are mapped to special $Y$-triples. Now it could be that the given $\phi$ is in fact a bijection (a one-to-one correspondence), and that instead of $(1)$ we have $$(x_1,x_2,x_3)\in{\tt plus}\quad\Longleftrightarrow\quad\bigl(\phi(x_1),\phi(x_2),\phi(x_3)\bigr)\in{\tt plus'}$$ for all triples $(x_1,x_2,x_3)$. In this case $\phi$ is called an isomorphism between the structures $X$ and $Y$. The elements $x\in X$ and the elements $y\in Y$ could be of totally different "mathematical types", but as far as addition goes $X$ and $Y$ are "structural clones" of each other.<|endoftext|> TITLE: Definition of split chain complex QUESTION [5 upvotes]: I'm trying to reconcile two definitions in homological algebra. One of them is standard and the other is found in Weibel's Homological Algebra (so probably also standard, but I can't say from my own experience). First, an exact sequence $0\to A\to B\to C\to 0$ in an abelian category is split if any of the following equivalent conditions hold: the map $A\to B$ has a retraction, the map $B\to C$ has a section, $B=A\oplus C'$ for some subobject $C'$ of $B$, and $B=A'\oplus C$ for some quotient object $A'$ of $B$. Second, Weibel defines a chain complex $C_\bullet$ to be split if there are maps $s_n:C_n\to C_{n+1}$ such that $d_{n+1}s_nd_{n+1}=d_{n+1}$. Such a chain complex has very nice properties: the maps $s_n$ induce splittings of $C_n$ and $Z_n(C_\bullet)$. Although Weibel's definition has nice properties, it comes from nowhere. Is there some way to view his definition as a particular case of the first? It seems like if you just constructed the right short exact sequence of chain complexes, perhaps something involving a shifted complex like $C[-1]_\bullet$, then these two definitions could be reconciled. I can't see how to do it, though. REPLY [7 votes]: I think it's more natural to view the definition of a split short exact sequence as a special case of the definition of a split chain complex. Say you have a chain complex $$\cdots\xrightarrow{\ \ \ }C_{n+2}\xrightarrow{\ d_{n+2} \ }C_{n+1}\xrightarrow{\ d_{n+1} \ }C_n\xrightarrow{\ d_n \ }C_{n-1}\xrightarrow{\ d_{n-1} \ }C_{n-2}\xrightarrow{\ \ \ }\cdots$$ which is split. Then there exist maps $\{s_n:C_n\to C_{n+1}\}$ satisfying $d_n\circ s_{n-1}\circ d_n=d_n$ for all $n$. Now, suppose that $C_{n+2}=C_{n-2}=0$, and we are exact at $n-1$, $n$, and $n+1$. Then, we have a short exact sequence: $$0\xrightarrow{\ \ \ }C_{n+1}\xrightarrow{\ d_{n+1}\ }C_n\xrightarrow{\ d_n \ }C_{n-1}\xrightarrow{\ \ \ }0.$$ Since $d_{n+1}\circ s_n\circ d_{n+1} = d_{n+1}$ and $d_{n+1}$ is injective, this implies that $s_n\circ d_{n+1}=id_{C_{n+1}}$, which gives the first condition. Similarly, we have $d_n\circ s_{n-1}\circ d_n=d_n$, and as $d_n$ is surjective, it has a right-inverse, and composing on the right with this inverse implies $d_n\circ s_{n-1}=id_{C_{n-1}}$, giving the second condition. Lastly, it can be checked that the map $$\phi:C_n\to C_{n+1}\oplus C_{n-1},\quad \phi(x) = (s_n(x),d_n(x))$$ is an isomorphism. I'll let you work out the other two equivalent statements. Now as you said, Weibel's definition of a split chain complex doesn't come with much motivation so I'll briefly motivate that as well. As explained in this question, for exact complexes the condition $d_n=d_n\circ s_{n-1}\circ d_n$ for all $n$ is equivalent (after possibly modifying the maps $s_n$) to $id = d_{n+1}\circ s_{n}+s_{n-1}\circ d_n$ for all $n$.
So, being split exact is analogous to the topological property of being contractible, since the condition $id = d_{n+1}\circ s_{n}+s_{n-1}\circ d_n$ is really just saying that the identity map is null-homotopic. This can be made even more concrete using the ideas in the question: Can we think of a chain homotopy as a homotopy?<|endoftext|> TITLE: Sums of squares are closed under division QUESTION [8 upvotes]: Surprisingly, we got only one question for our 2-hour exam and I think nobody solved it. Here is the problem: Assuming that $K$ is a field, show that $S$ is stable under addition, multiplication and division, where $S$ is defined as follows: $$S=\left\{\sum_{i=1}^{n}{x_i}^2 \mid n\in \Bbb N ,\ x_i\in K\right\}.$$ REPLY [14 votes]: $\begin{align}{\bf Hint} \ \ a\neq 0\ \Rightarrow&\ \ \color{#c00}{a^{\large -2}} =\, (a^{\large -1})^{\large 2} \in S\ \ \text{by $\,S\,$ contains all squares}\\ {\rm so}\ \ \ a\in S\ \Rightarrow&\ \ a^{\large -1} =\, a\cdot \color{#c00}{a^{\large-2}}\in S\ \ \text{by $\,S\,$ is closed under multiplication} \end{align}$ Remark $ $ You asked how it generalizes. The above idea works not only for squares, but for any positive power, i.e. we can obtain $\,a^{-1}\,$ from any of its positive powers via $\,a^{\large -1} = a^{\large n-1}(a^{\large -1})^{\large n}.$ Also the only property of squares (or powers) employed in proving multiplicative closure is that they are closed under multiplication. These observations lead to the following generalization. Theorem $ $ Suppose $K$ is a field, $M$ is a subset of $K$ and $S$ is the set of all sums of elements of $M.$ $(1)\ \ S$ is closed under multiplication if $M$ is closed under multiplication. $(2)\ \ S$ is closed under division (by all nonzero elements) if $S$ is both closed under multiplication, and every $\,a\in K$ has some positive power $\,a^{\large n}\in S.$ Proof $\ (1)\,$ is an immediate consequence of the distributive law. $\ (2)\ $ follows as above, namely if $\,0\neq s\in S\,$ then by hypothesis $\,(s^{\large -1})^{\large n}\in S\,$ for some $\,n\geq 1\,$ thus $\,s^{\large n-1}(s^{\large -1})^{\large n} = s^{\large -1}\in S\,$ because $S$ is closed under multiplication. Hence $\,a\in S\,\Rightarrow\,as^{-1} = a/s\in S,\,$ so $S$ is closed under division.<|endoftext|> TITLE: Limit $\lim_{x\rightarrow +\infty}\sqrt{x}e^{-x}\left(\sum_{k=1}^{\infty}\frac{x^{k}}{k!\sqrt{k}}\right)$ QUESTION [10 upvotes]: $$\lim_{x\rightarrow +\infty}\sqrt{x}e^{-x}\left(\sum_{k=1}^{\infty}\frac{x^{k}}{k!\sqrt{k}}\right)$$ Any hint will be appreciated. Note: There is a related question on MathOverflow: Asymptotic expansion of $\sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n!{\sqrt{n}}}$. The MO question references this question and it also links to some other posts containing similar expressions. REPLY [9 votes]: (I have overwritten my previous incorrect answer.) For any positive constant $c > 1$ we have: \begin{align*}S(cx) = e^{-x}\sum\limits_{k \ge cx} \frac{x^k}{k!} &= e^{-x}\frac{x^{cx}}{(cx)!}\left(1 + \frac{x}{(cx+1)} + \frac{x^2}{(cx+1)(cx+2)} + \cdots\right) \\& \le e^{-x}\frac{x^{cx}}{(cx)!}\sum\limits_{k=0}^{\infty}\frac{1}{c^k} < \frac{c}{c-1}e^{-x}x^{cx}\frac{e^{cx}}{(cx)^{cx}} = \frac{ce^{-((c\log c) -c +1)x}}{c-1}\end{align*} where we used the inequality $x! > e^{-x}x^{x}$ for sufficiently large $x$.
Also note that $(c\log c) - c + 1 > 0$ for all $c > 1$. Again, for sufficiently large $x$: \begin{align*}T(x/c) = e^{-x}\sum\limits_{0 \le k < x/c} \frac{x^k}{k!} = e^{-x}\sum\limits_{0 \le k < x/c} \frac{c^k(x/c)^k}{k!} & \le e^{-x}\frac{(x/c)^{x/c}}{(x/c)!}\sum\limits_{0 \le k < x/c} c^k \\& < e^{-x}e^{x/c}\frac{xc^{x/c}}{c} = \frac{x}{c}e^{-\left(1 - \frac{1}{c} - \frac{\log c}{c}\right)x}\end{align*} where we note that $(1 - \frac{1}{c} - \frac{\log c}{c}) > 0$. Hence, we have for any $c>1$, $$\displaystyle e^{-x}\sum\limits_{x/c \le k \le cx} \frac{x^k}{k!} = 1 - S(cx) - T(x/c) \quad \underbrace{\longrightarrow}_{ x \to +\infty} \quad 1 \tag{1}$$ Now, we have the simple estimates $$\frac{e^{-x}}{\sqrt{cx}}\sum\limits_{x/c \le k \le cx} \frac{x^k}{k!} < e^{-x}\sum\limits_{x/c \le k \le cx} \frac{x^k}{k!\sqrt{k}} < \frac{e^{-x}}{\sqrt{x/c}}\sum\limits_{x/c \le k \le cx} \frac{x^k}{k!} \tag{2}$$ and $$e^{-x}\sum\limits_{k \not\in [x/c,cx]} \frac{x^k}{k!\sqrt{k}} < T(x/c) + S(cx) \tag{3}$$ where both terms in the upper bound decay exponentially. Hence, from $(1),(2)$ and $(3)$ we have that the required limit: $$\displaystyle \lim\limits_{x \to \infty} \sqrt{x}e^{-x}\sum\limits_{k=1}^{\infty} \frac{x^k}{k!\sqrt{k}} \in \left(\frac{1}{\sqrt{c}},\sqrt{c}\right)$$ for arbitrary $c > 1$, i.e., the required limit is $1$ (as pointed out by user tired in the comments). I suppose for any function $\alpha(x)$ with at most polynomial growth and continuous at $x = 1$ we can slightly modify the above argument and show $$\lim\limits_{x \to \infty} e^{-x}\sum\limits_{k=1}^{\infty} \alpha\left(\frac{k}{x}\right)\frac{x^k}{k!} = \alpha(1)$$<|endoftext|> TITLE: Show that $45 < x_{1000} < 45.1$ QUESTION: The sequence is given by $x_{0}=5$ and $x_{n+1}=x_{n}+\frac{1}{x_{n}}$; show that $45 < x_{1000} < 45.1$. REPLY: Claim: $x_{n} > \sqrt{2n+25}$. PROOF This holds for $n=1$. This can be checked numerically. Assume this holds for $n=m$. Then, we can prove the left hand inequality through induction as $$x_{m+1}^2=x_{m}^2+2+\frac{1}{x_{m}^2}>2m+27 \implies x_{m+1} > \sqrt{2(m+1)+25}$$ Now from $x_{m+1}^2-x_{m}^2=2+\frac{1}{x_{m}^2}$, we have $$x_{m}^2-x_{0}^2-2m=\sum_{n=0}^{m-1} \frac{1}{x_{n}^2}<\sum_{n=0}^{m-1}\frac{1}{2n+25}<\sum_{n=1}^{m}\frac{1}{2n}\le \frac{1+\ln m}{2} \tag{1}$$ Thus, our claim is proved. The desired result follows from our Claim. $(1)$: See here NOTE: My original answer was wrong, which is the reason for this rather large edit. REPLY [10 votes]: $x_n^2-x_{n-1}^2=2+\frac{1}{x_{n-1}^2}$ for $n\geq1$. Thus, $$x_{1000}^2=2\cdot1000+25+\frac{1}{x_0^2}+\frac{1}{x_1^2}+...+\frac{1}{x_{999}^2}>2025$$ and $$x_{1000}^2=2\cdot1000+25+\frac{1}{x_0^2}+\frac{1}{x_1^2}+...+\frac{1}{x_{999}^2}<2025+\frac{100}{x_0^2}+\frac{900}{x_{100}^2}<$$ $$<2025+4+\frac{900}{225}=2033<45.1^2$$ because $x_{100}^2>225$.<|endoftext|> TITLE: How do I differentiate under the integral sign here. QUESTION [6 upvotes]: Assume we are working on the interval $[a,b]$ and that $f \in L^1 [a,b]$. Let $\gamma \in (0,1)$. We then have that $$\int_a^uf(t)(u-t)^{1-\gamma}dt, u \in [a,b]$$ is well-defined, because $(u-t)^{1-\gamma}$ is continuous on $[a,u]$ and hence bounded. A book I am reading states that: $$\frac{d}{du }\int_a^uf(t)(u-t)^{1-\gamma}dt=(1-\gamma)\int_a^uf(t)(u-t)^{-\gamma}dt.$$ But why is this the case? I tried using the Leibniz integral rule. But from what I see I can not use it because the derivative $(u-t)^{-\gamma}$ does not behave well when $t=u$. Another problem I have is that by Fubini/Tonelli we may have that $(1-\gamma)\int_a^uf(t)(u-t)^{-\gamma}dt$ is well defined only a.e. on $[a,b]$?
But do you see how we can prove that identity if $(1-\gamma)\int_a^uf(t)(u-t)^{-\gamma}dt$ is well-defined? I tried writing the definition of the derivative, and then taking limits etc. But I do not get that the fraction is bounded (so I can use the dominated convergence theorem), because $(u-t)^{-\gamma}$ is not bounded. If I am supposed to solve it using this I get: $\frac{\int_a^{u+\Delta u}f(t)(u+\Delta u-t)^{1-\gamma}dt-\int_a^{u}f(t)(u-t)^{1-\gamma}dt}{\Delta u}=\int_a^u\frac{f(t)[(u+\Delta u-t)^{1-\gamma}-(u-t)^{1-\gamma}]dt}{\Delta u}+\int_u^{u+\Delta u}\frac{f(t)(u+\Delta u-t)^{1-\gamma}dt}{\Delta u}.$ Do you see what happens with the last terms when $\Delta u$ goes to zero? (If what I wrote in the start is correct, the last term is supposed to go to zero. And in the second last term we are supposed to be able to write the derivative inside, but how do I argue for this using DCT?) ANSWER For those who are interested I think I found the answer. The book I was reading was a book containing articles, and I think the article my question is based on was badly written. I looked at the source for the article and it seemed better. Note first if we can show that $\int_a^u f(t)(u-t)^{1-\gamma}dt=(1-\gamma)\int_a^u[\int_a^t\frac{f(s)ds}{(t-s)^\gamma}]dt , H(t)=\int_a^t\frac{f(s)ds}{(t-s)^\gamma}\in L^1[a,b]$, we are done, because then the result follows from the fundamental theorem of calculus. First we show that $H$ is well defined, and simultaneously show that we can use Fubini to interchange limits: $\int_a^b\int_a^t\frac{|f(s)|ds}{(t-s)^\gamma}dt=\int_a^b \int_s^b|f(s)|(t-s)^{-\gamma} dtds=\int_a^b |f(s)|(b-s)^{1-\gamma} \cdot1/(1-\gamma)ds<\infty$. And now the same calculation with $u$ instead of $b$, and $f(s)$ instead of $|f(s)|$, tells us that $\int_a^u f(t)(u-t)^{1-\gamma}dt=(1-\gamma)\int_a^u[\int_a^t\frac{f(s)ds}{(t-s)^\gamma}]dt$. REPLY [2 votes]: Since $(u-t)^{-\gamma}$ behaves fine on $[a,u-\epsilon]$ for $\epsilon\gt0$, $$ \frac{\mathrm{d}}{\mathrm{d}u}\int_a^{u-\epsilon}f(t)(u-t)^{1-\gamma}\,\mathrm{d}t =f(u-\epsilon)\epsilon^{1-\gamma}+(1-\gamma)\int_a^{u-\epsilon}f(t)(u-t)^{-\gamma}\,\mathrm{d}t $$ As long as $$ \lim_{\epsilon\to0^+}f(u-\epsilon)\epsilon^{1-\gamma}=0 $$ we get $$ \frac{\mathrm{d}}{\mathrm{d}u}\int_a^uf(t)(u-t)^{1-\gamma}\,\mathrm{d}t =(1-\gamma)\int_a^uf(t)(u-t)^{-\gamma}\,\mathrm{d}t $$<|endoftext|> TITLE: Cutting a $m \times n$ rectangle into $a \times b$ smaller rectangular pieces QUESTION [7 upvotes]: How many $a \times b$ rectangular pieces of cardboard can be cut from an $m \times n$ rectangular piece of cardboard so that the amount of waste ("left over" cardboard) is a minimum? This question was given to me by my Mathematics Teacher as a "brain teaser". At first I divided $m \times n$ by $a \times b$, but then I realized that it is not possible in the given situation, as the "left over" cardboard will also be "divided" among $a \times b$. Next I started thinking in terms of perimeter. I imagined all the $a \times b$ rectangles lying side by side in a "rows and columns" format (sort of a grid), without any space between them, thus forming another rectangle/square. I assumed $\lambda$ rows and $\sigma$ columns of those smaller rectangles. After a bit of calculation, my answer came out as follows: Number of smaller $a \times b$ rectangles $\geq \sigma \times \lambda$ = $\left(\dfrac {m - m \bmod a }{a}\right) \times \left(\dfrac {n-n \bmod b}{b}\right).$ Is my "second" approach correct? Is there any alternate way of tackling this problem?
This might seem a 'vague' question, but can this problem be generalized to any given shape? Any help will be gratefully acknowledged :). REPLY [2 votes]: A simple example, for $a=2, b=3$, and a few values for $n$ and $m$, shows that the problem is not easily approachable, especially in the last configuration. --- added --- It is in fact a Cutting Stock Problem as rightly indicated by @sas, however simplified by having rectangular and fixed size items. It is quite an interesting subject, so starting from the sketch above, let's try and establish some facts about it, without pretending to be rigorous or to provide proofs. Premising that we use the following symbols for the integral and fractional part and for the modulus $$ \begin{gathered} x = \left\lfloor x \right\rfloor + \left\{ x \right\} \hfill \\ \frac{x} {t} = \left\lfloor {\frac{x} {t}} \right\rfloor + \left\{ {\frac{x} {t}} \right\} \hfill \\ x = t\left\lfloor {\frac{x} {t}} \right\rfloor + t\left\{ {\frac{x} {t}} \right\} = t\left\lfloor {\frac{x} {t}} \right\rfloor + x\bmod t \hfill \\ \end{gathered} $$ then Upper bound for $N$ Clearly we must have $N a b \leqslant nm$, i.e. $$ \bbox[lightyellow]{ N \leqslant \left\lfloor {\frac{{n\,m}} {{a\,b}}} \right\rfloor = \left\lfloor {\,\frac{{n\,m/\gcd (nm,ab)}} {{a\,b/\gcd (nm,ab)}}} \right\rfloor = \left\lfloor {\frac{{n'\,}} {{a'}}\,\frac{{m'}} {{b'}}} \right\rfloor \tag {1} \\ } $$ This also indicates an equivalence in the problem when the parameters are scaled so as to keep the ratio $(nm)/(ab)$ constant, and that we can always reduce the parameters and get $n$ and $m$ to be coprime vs. $a$ and vs. $b$. But, geometrically speaking, the down-scaling looks to be applicable only if $\max(a',b') \leqslant \min(n',m')$. An example is given in this sketch Equivalence under rotation, reflection Clearly the problem does not change under rotation or reflection. "Cross histogram" Consider taking a horizontal scan and recording, for each unit row, the number of layers of horizontal dimension $a$ and $b$ traversed. Let's do the same on a vertical scan. Clearly, every $b$ counts of a layer of width $a$ in one direction will correspond to $a$ counts of a layer of width $b$ in the other direction, and vice versa. Conclusions Condensing all the considerations above we are led to claim the following: a) Among the various possible partitions of $m$ and $n$ (or, after scaling, of $n'$ and $m'$) as a linear combination of $a$ and $b$, with non-negative integral coefficients, as $$ \bbox[lightyellow] { \left\{ \begin{gathered} n = n_{\,a} a + n_{\,b} b + n_{\,r} \hfill \\ m = m_{\,a} a + m_{\,b} b + m_{\,r} \hfill \\ \end{gathered} \right. \tag {2} }$$ we shall choose those which, while giving minimal remainder ($n_r,m_r$), also assure best symmetry around $n/2$, $m/2$, that is $$ \bbox[lightyellow] { \left\{ \begin{gathered} n_{\,a} a \approx n/2 \approx n_{\,b} b \hfill \\ m_{\,a} a \approx m/2 \approx m_{\,b} b \hfill \\ \end{gathered} \right. \tag {3} }$$ b) Note that the above goal of symmetry is to be achieved globally on $n$ and $m$, so it may happen that we shall compromise somehow on one of the parameters, to keep the best on the other. When that happens, for the unbalanced parameter we are asked to choose two partitions on opposite sides of the symmetry, so that they make an optimal one on average, inserting a remainder if necessary (refer to the case $n=4,m=5$ in the first sketch). So, practically, we are determining a pair of partitions for each parameter.
c) The above situation can be faced when $n,m$ are relatively small with respect to $a,b$. When they are much larger than $a,b$ then a partition (or two on average) close to symmetry can be found for both. Therefrom a possible strategy seems to be as follows (you can follow the process on one of the sketches given): partition one side of the rectangle (e.g. $m$) according to one of its pair of partitions, putting first all the $a$'s, then the remainder, then the $b$'s; following the perimeter, partition the contiguous side ($n$), this time starting with the $b$'s; pass to the next side ($m$), and apply the remaining partition of the pair associated with $m$; do the same for the last side; then expand the partition of the perimeter towards the inner part, as allowed. From the examples, it looks like it is possible to reach $N= \left\lfloor {\frac{{n\,m}}{{a\,b}}} \right\rfloor$ in "many" cases.<|endoftext|> TITLE: When is the derivative of an inverse function equal to the reciprocal of the derivative? QUESTION [20 upvotes]: When is this statement true? $$\dfrac {\mathrm dx}{\mathrm dy} = \frac 1 {\frac {\mathrm dy}{\mathrm dx}}$$ where $y=y(x)$. I think that $y(x)$ has to be bijective in order to have an inverse and to let the expression $\dfrac {\mathrm dx}{\mathrm dy}$ make sense. But is there any other condition? REPLY [11 votes]: The answers so far are arguably incorrect; they merely give sufficient but not necessary conditions, and one of them even states that their conditions are necessary. We do not need differentiability in some (open) neighbourhood of the point, even for the conventional (very restrictive) definition of derivative. Furthermore, if we work with a natural generalized definition of derivative, we do not even need one-to-one correspondence between the values of $x$ and values of $y$ near the point, for the derivative to exist there. I shall first state and prove the general fact, and then give examples that refute the necessity of these two conditions. $ \def\less{\smallsetminus} \def\rr{\mathbb{R}} \def\lfrac#1#2{{\large\frac{#1}{#2}}} $ Theorem If $\lfrac{dy}{dx}$ exists and is not zero, then $\lfrac{dx}{dy}$ exists and is the reciprocal. This holds in any framework where $\lfrac{dy}{dx}$ is the limit of $\lfrac{Δy}{Δx}$ as $Δt \to 0$ (undefined if the limit is undefined), where $x,y$ are variables that vary continuously with respect to some parameter $t$ (which could be $x$ itself). Here "$Δx$" denotes change in $x$ from a given point, and so "$Δt \to 0$" essentially captures the limiting behaviour as $t$ approaches (but does not reach) a certain value. This captures not only the usual situations such as derivatives of functions, but also allows simple yet rigorous implicit differentiation even for constraints that are not locally bijective. (See below for notes justifying this framework.) Proof Take any variables $x,y$ varying with parameter $t$. Take any point where $\lfrac{dy}{dx} \in \rr \less \{0\}$. As $Δt \to 0$: $\lfrac{Δy}{Δx} \approx \lfrac{dy}{dx} \ne 0$. Thus $\lfrac{Δy}{Δx} \ne 0$ and hence $Δy \ne 0$ (eventually). Thus $\lfrac{Δx}{Δy} = (\lfrac{Δy}{Δx})^{-1} \approx (\lfrac{dy}{dx})^{-1}$. Therefore $\lfrac{dx}{dy} = (\lfrac{dy}{dx})^{-1}$. Example 1 Consider $f : \rr \to \rr$ such that $f(0) = 0$ and $f(x) = \lfrac{2}{\lfrac1x+2(1-(\frac1x\%2))}$ for every $x \in \rr \less \{0\}$, where "$x\%y$" is defined to mean "$x-\lfloor \lfrac{x}{y} \rfloor y$".
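As a quick numerical sanity check of this construction (a sketch in Python; the sample points are arbitrary, and `pct` implements the "$\%$" operation just defined), the difference quotient $f(x)/x$ approaches the gradient $2$ claimed below:

```python
import math

def pct(x, y):
    # x % y in the sense above: x - floor(x/y) * y
    return x - math.floor(x / y) * y

def f(x):
    # the function from Example 1, with f(0) = 0
    if x == 0:
        return 0.0
    return 2.0 / (1.0 / x + 2.0 * (1.0 - pct(1.0 / x, 2.0)))

# difference quotients f(x)/x at 0 tend to 2
for x in [0.1, 0.003, 1e-5, 1e-7]:
    print(x, f(x) / x)
```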
Then $f$ is a bijection from $\rr$ to $\rr$ and has gradient $2$ at $0$ but is clearly not differentiable on any open interval around $0$. Since $\lfrac{d(f(x))}{dx} = 2$, satisfying the condition I stated, $f^{-1}$ has gradient $\lfrac12$ at $0$. Note that $f'(0)$ and ${f^{-1}}'(0)$ both exist even under the conventional definition of derivative, because $f$ is bijective, and $y=f(x)$ is squeezed between $y=\frac2{1/x+2}$ and $y=\frac2{1/x-2}$, which are tangent at the origin. So this provides a counter-example to the claim that we need differentiability in some open neighbourhood. Example 2 Let $t$ be a real parameter and $x,y$ be variables varying with $t$ such that $(x,y) = (0,0)$ if $t = 0$ and $(x,y) = (t+2t^3\cos(\lfrac1{t^2}),t+2t^3\sin(\lfrac1{t^2}))$ if $t \ne 0$. Then $\lfrac{dy}{dx} = \lfrac{dx}{dy} = 1$ when $t = 0$ despite the curve having no local bijection between the values of $x$ and the values of $y$ in any open ball around the origin! Notice that the conventional framework for real analysis cannot even state the fact that the curve has gradient $1$ at the origin! This is one kind of situation where the framework I am using is superior; another kind involves path integrals. Notes This framework is self-consistent and more general than the conventional one in 'elementary calculus' where you can only write "$\lfrac{dy}{dx}$" when $y$ is a function of $x$. If you think about it a little, you would realize that "function of $x$" is nonsense in the logical sense. In any standard foundational system, no object $y$ can be both a function and a real number. So it is utterly meaningless to say "$y$ is a function of $x$". Yet people write things like "$y = f(x)$ where $f$ is a function from $\rr$ to $\rr$". This technically is equally nonsensical, because either $x$ is previously defined and so $y$ is just a single real number, or $x$ is treated as a parameter so $y$ is actually an expression in the language of the foundational system. Only in the latter case does it make sense to ask for the derivative of $y$ with respect to $x$, which is also an expression; otherwise it is senseless. If you are actually rigorous, you would find that many texts use ambiguous or inconsistent notation for this very reason. However, the framework I used above is rigorous yet logically consistent. Specifically, when we say that a set of variables vary with a parameter $t$, it should be interpreted as meaning that each variable is a function over the range of $t$, and every expression involving the variables denotes a function by interpreting "$t$" to be its input parameter and all operations to be pointwise. For example if we say that $x,y$ vary with $t \in \rr$, we should interpret $x,y$ as functions on $\rr$ and interpret expressions like "$xy+t$" to be the pointwise sum of $x,y$ plus the input, namely $( \rr\ t \mapsto x(t)y(t)+t )$. Similarly we should interpret "$Δx$" to denote "$( \rr\ t \mapsto x(t+Δt)-x(t) )$", where "$Δt$" is interpreted as a free parameter with exactly the same function as "$h$" in "$\lim_{h \to 0} \lfrac{x(t+h)-x(t)}{h}$". Finally, we permit the evaluation of the variables at a given point, so for example we might say "when $x = 0$, ..." which should be interpreted as "for every $t$ such that $x(t) = 0$, ...". Also, we must make a distinction between "$→$" and "$≈$". "$x → c$" means "$x$ eventually stays close but not equal to $c$", while "$x ≈ y$" means "$x$ eventually stays close to $y$ (possibly equal)".
You could express these via the typical ε-δ definition of limits, but it is easier to view them topologically; "$x ≈ y$ as $Δt → 0$" would mean "given any ball $B$ around $0$, $(x-y)(t+Δt)$ lies in $B$ for every $Δt$ in some sufficiently small punctured ball around $0$". (An alternative view that is equivalent under a weak choice principle is via sequential continuity; "$x ≈ y$ as $Δt → 0$" would mean "for every sequence $Δt$ that is eventually nonzero but converges to zero, the sequence $(x-y)(t+Δt)$ converges to zero".) Now it is easy to check that my above definition of "$\lfrac{dy}{dx}$" is absolutely rigorous and not only matches the intuitive notion of gradient far better but also is far more general. In fact, as I've shown above, it is easier to translate intuitive arguments for properties of gradients into this framework. For example, the above proof is a direct translation of the symmetry of ratios. Finally, this framework is built upon and hence completely compatible with standard real analysis, using no unnecessary set-theoretic axioms, unlike non-standard analysis. It also extends naturally to asymptotic notation.<|endoftext|> TITLE: What is the meaning of second derivative? QUESTION [8 upvotes]: I looked for various answers and couldn't find anything helpful. What I know is that the derivative of a curve at a point is the slope of the tangent line drawn at that point. Now what does the second derivative mean: the slope of the slope of the line? I know it in terms of velocity, acceleration etc., but it is hard for me to get it in math. For example: the first derivative of $x^3$ is $3x^2$, but why is it not a straight line? In my book, it is written as the slope of the tangent. Please help regarding all these doubts in simple terms. I'm just in high school. REPLY [10 votes]: The second derivative tells you something about how the graph curves on an interval. If the second derivative is always positive on an interval $(a,b)$ then any chord connecting two points of the graph on that interval will lie above the graph. If the second derivative is always negative on the interval $(a,b)$ then any chord connecting two points of the graph on that interval will lie below the graph. In the graph below of $y=x(x-1)(x+1)$ the graph has a negative second derivative on the interval $(-\infty,0)$ and a positive second derivative on the interval $(0,\infty)$, so it is concave down and concave up, respectively, on the two intervals. Another way of expressing the same idea is that if a function with continuous second derivative has a positive second derivative at a point $(x_0,y_0)$ then on some neighborhood of $(x_0,y_0)$ the tangent line at $(x_0,y_0)$ lies below the graph (except at the point of tangency). If the second derivative is negative at the point of tangency the tangent line lies above the graph on some neighborhood of the point of tangency (except at the point of tangency). REPLY [6 votes]: You are correct that the first derivative of $x^3$ is $\frac{d}{dx}x^3=3x^2$. This is of course not a straight line, but rather a parabola. However, consider wanting to find the slope at a specific point along $x^3$, say $x=5$. The slope at this point is obtained simply by plugging $x=5$ into the derivative of $x^3$. That is, the slope, $m$, of $x^3$ there is $3(5)^2=75$. This, I think you will agree, describes a straight tangent line. That is, at the point $x=5$, $y$ changes by $75$ as $x$ changes by $1$. In this sense, the derivative is not a tangent line, but rather a function to generate the tangent line at each point along a curve.
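This "slope-generating" view of the derivative is easy to check numerically (a minimal sketch in Python; the step size `h` and the sample points are arbitrary):

```python
def slope_formula(x):
    # the derivative of x**3
    return 3 * x**2

def slope_numeric(x, h=1e-6):
    # central-difference approximation of the slope of x**3 at x
    return ((x + h)**3 - (x - h)**3) / (2 * h)

for x in [1, 2, 5]:
    print(x, slope_formula(x), round(slope_numeric(x), 3))
# at x = 5 both give 75, the tangent slope computed above
```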
Similarly, one can think of the second derivative as the function which generates the rate of change of the first derivative at every point along the function. As stated above, if the second derivative is positive, it implies that the derivative, or slope, is increasing, while if it is negative, the slope is decreasing. As a graphical example, consider the graph $y=x(x-2)(x-3)$, which looks like this. The derivative of this function is $3x^2-10x+6$, which we graph as the following. We can then read off the slope at any point along the original curve. For example, at $x=2$, we see that the slope is $3(2)^2-10(2)+6=-2$, which is negative. Looking at the graph of $x(x-2)(x-3)$, we can see that the tangent line drawn at $x=2$ would indeed have a negative slope. As for the second derivative: it is given by $6x-10$ (which again you may verify for yourself), and it looks like this. The point at which it changes from negative to positive is at $6x-10=0\rightarrow x=\frac{10}{6}=\frac{5}{3}$. This does not mean that the slope is positive at $\frac{5}{3}$; rather, it means that the slope, on the interval $[\frac{5}{3},\infty)$, is increasing. On this interval the slope, generated by the first derivative $3x^2-10x+6$, is increasing, such that $3(x+\Delta x)^2-10(x+\Delta x)+6>3x^2-10x+6$ for all $x$ in the interval, where $\Delta x$ is some small positive change in $x$. For example, the slope at $x=2$, which is in the interval, is given by $-2$, whereas the slope at $x=3$, with $3$ obviously greater than $2$, is $3(3)^2-10(3)+6=3$, which is greater than the slope at $x=2$. It's harder to see in this graph, but the significance of the second derivative is that, if you drew tangent lines at every point on the initial curve, the slope of these tangent lines would be decreasing as you increased $x$ where the second derivative is negative, and increasing where it is positive.<|endoftext|> TITLE: Why isn't $\int \frac{1}{x}~dx = \frac{x^0}{0}$? QUESTION [30 upvotes]: I know that $\int \frac{1}{x}~dx = \ln|x| + C$ and I know the antiderivative method works for all powers of $x$ except $-1$. But why is that the case? I am still in high school and teachers aren't really helpful with these questions. Edit: I do realize dividing by zero is meaningless. That's not the question. Just because dividing by zero is meaningless doesn't mean mathematicians just choose to ignore it and make up a new answer. There has to be an explanation, and that's what I am asking for. The explanation. REPLY [4 votes]: Notice that, because integration involves addition of a constant, the following is also an antiderivative of $x^n$: $$\frac{x^{n+1}-1}{n+1}$$ It is indeterminate at $n=-1$, but one might instead consider $$\lim_{n\to -1}\frac{x^{n+1}-1}{n+1}$$ which, by L'Hopital's rule, is equal to $$\begin{align} \lim_{n\to -1}\frac{x^{n+1}-1}{n+1} &=\lim_{n\to -1}\frac{e^{(n+1)\ln(x)}-1}{n+1}\\ &=\lim_{n\to -1}\frac{e^{(n+1)\ln(x)}\ln(x)}{1}\\ &=e^0\ln(x)\\ &=\ln(x)\\ \end{align}$$<|endoftext|> TITLE: Why is volume of a high-D ball concentrated near its surface? QUESTION [6 upvotes]: I came across the following sentence while reading a book on applied math: Volume of a high dimensional unit ball is concentrated near its surface and is also concentrated at its equator. This is from the book's introduction, and I believe the sentence will be explained at some later point in the book. However, it is hard for me to accept it on an intuitive level. Can some of you explain this sentence to me, in a sort of layman style, if possible?
REPLY [6 votes]: The book probably means that the volume of the unit ball in $\mathbb{R}^n$ has an unexpected behaviour as a function of $n$: it is given by $$ V_n = \frac{\pi^{n/2}}{\Gamma\left(1+\frac{n}{2}\right)} $$ and the surface area is given by $$ A_n = n V_n = \frac{n\pi^{n/2}}{\Gamma\left(1+\frac{n}{2}\right)}. $$ In particular, $\frac{V_n}{V_{n-1}}$ (which is a measure of "concentration around the equator") behaves like $\sqrt{\frac{2\pi}{n}}$ and $\frac{V_n}{A_n}$ (which is a measure of "concentration around the boundary") is exactly $\frac{1}{n}$. Additionally, by the central limit theorem the behaviour of $$ A(\tau)=\mu\{(x_1,\ldots,x_n)\in\mathbb{R}^n: x_1+\ldots+x_n = \tau, x_1^2+\ldots+x_n^2\leq 1\} $$ is approximately Gaussian and more and more concentrated around $\tau=0$ as $n\to +\infty$.<|endoftext|> TITLE: Example of non quasi-compact scheme. QUESTION [8 upvotes]: Is there an example of a scheme $X$ and $f\in \Gamma(X,\mathcal{O}_X)$ such that the map $$\Gamma(X,\mathcal{O}_X)_f\rightarrow \mathcal{O}_X(X_f)$$ is not injective? Hartshorne II 2.16 b) states it's injective under the assumption of quasi-compactness. Notation: $X_f=\{x\in X|f_x\not \in \mathfrak{m}_x\}$. REPLY [2 votes]: Let $X_n=\operatorname{Spec} k[t]/t^n$, and consider $X=\coprod_{n>0} X_n$ with the global section $f=(t,t,\cdots)$. It is clear that the restriction of $f$ to any local ring is a nilpotent and therefore belongs to the maximal ideal, so $X_f=\emptyset$ and $\mathcal{O}_X(X_f)=0$. On the other hand, $\Gamma(X,\mathcal{O}_X)_f$ is nonzero: if $1=0$ in the localization, then we would necessarily have $f^m=0$ for some $m$, which doesn't happen (look at the restriction of $f$ to the local ring at $X_{m+1}$).<|endoftext|> TITLE: Does $\sum_{n=1}^{\infty} \frac{p_n}{n!}$ converge? QUESTION [6 upvotes]: Introduction I was so fascinated the first time I saw Euler's number $e$ as the sum of the infinite series over $\frac{1}{n!}$ that I began playing with variants of it, learning a lot on the way. However, when I started playing around with primes things became tricky. I found: $$\sum_{n=1}^{\infty} \frac{p_n}{n!}=\frac{2}{1!}+\frac{3}{2!}+\frac{5}{3!}+\dots = 4.73863870268..$$ It intrigued me when I noticed that it appeared to converge, as I know about the divergence of the sums of reciprocal primes. I couldn't find this in any literature and couldn't find a closed form. I have run simulations and computed this for fun to hundreds of decimal places. However, I have plotted a graph of $\frac{p_{n+1}}{(n+1)!} - \frac{p_n}{n!}$ for $n$ ranging from $1$ to $9$ (plot). I have checked with a program whether each successive difference is smaller than the previous, and this holds up to $n=1000$ (with the idea that if this holds forever the differences will eventually become $0$ and thus the series will converge to a constant value). Question #1: Does it converge? Question #2: Is this something known (probably), and if yes, where could I read up on it? REPLY [6 votes]: Yes, it converges. As a hint, the prime number theorem gives us an asymptotic estimate $$p_n \sim n \log n. $$ See if you can now apply any of the convergence tests you know.<|endoftext|> TITLE: Does product and sum rule characterize the derivative?
QUESTION [7 upvotes]: If $F$ is a field (one can do it in any ring, but let's restrict ourselves to fields), then we can define a formal derivative of a polynomial $$ f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots + a_1x+a_0 \in F[x]$$ as $$ D_xf(x)=na_nx^{n-1}+(n-1)a_{n-1}x^{n-2}+\cdots+2a_2x+a_1 \in F[x].$$ With this definition, one can prove the usual rules for the derivative of the sum and product of polynomials, they are \begin{equation}\tag{1} D_x(f(x)+g(x))=D_x(f(x))+D_x(g(x)) \end{equation} and \begin{equation}\tag{2} D_x(f(x)g(x))=D_x(f(x))g(x)+D_x(g(x))f(x). \end{equation} From the product rule we can deduce that $D_x(\lambda f(x))=\lambda D_x(f(x))$ for any $\lambda \in F$ and $f(x) \in F[x],$ so the derivative is a linear map. Question I was wondering if the relations (1) and (2) do in fact characterize the formal derivative of a polynomial, at least in fields such as $\mathbb{R}$ and $\mathbb{C}$. I mean, if I have a linear map $L:\mathbb{C}[x]\to\mathbb{C}[x]$ (or $L:\mathbb{R}[x]\to\mathbb{R}[x]$) that satisfies (1) and (2), must it necessarily be the formal derivative? Also, I was wondering if this can be generalized to general linear maps between smooth functions from $\mathbb{R}^n$ to $\mathbb{R}^n$ (or from $\mathbb{R}$ to $\mathbb{R}$), so that if $L:C^{\infty}(\mathbb{R}^n) \to C^{\infty}(\mathbb{R}^n)$ is a linear map that satisfies (1) and (2), then $L$ must be the derivative. If not, I was wondering if there are some other conditions that $L$ must satisfy to guarantee that it must be the derivative. I appreciate any comments on the problem. Note: I had a little problem with the tags of this question, as I asked about formal derivatives over fields but also considered the question for general linear maps between $\mathbb{R}^n$ and $\mathbb{R}^n$. I would appreciate it if someone could help me with the tags too. REPLY [2 votes]: In general, a $\mathbb{C}$-linear function $L\colon\mathbb{C}[x]\to\mathbb{C}[x]$ satisfying $$L[f(x)g(x)] = f(x)\,L[g(x)] + L[f(x)]\,g(x)$$ is called a derivation. (This definition makes sense for any algebra over a field.) Note that such a map must satisfy $L[1] = 0$, since $$ L[1] \,=\, L[1\cdot1] \,=\, 1\,L[1] + 1\,L[1] \,=\, 2\,L[1]. $$ However, not every derivation satisfies $L[x] = 1$. Indeed, if $k(x)$ is any (fixed) polynomial in $\mathbb{C}[x]$, we can define a derivation $L\colon \mathbb{C}[x]\to\mathbb{C}[x]$ by $$ L[f(x)] \,=\, k(x)\,f'(x). $$ It is easy to check that $L$ is a derivation, and that $L[x] = k(x)$. It is also easy to check that every derivation must be of this form, since any two derivations $L_1$ and $L_2$ satisfying $L_1[x]=L_2[x]$ must be equal by induction on the degree of a polynomial.<|endoftext|> TITLE: Existence and uniqueness of $-\Delta u + u^3 =f$ QUESTION [6 upvotes]: I want to show the existence and uniqueness of solutions of the equation $-\Delta u+u^3=f\in L^2(\Omega),\quad u=0~\mbox{on}~ \partial\Omega,$ with $\Omega\subset\mathbb{R}^3$ bounded. I know one proof using the Minty-Browder theorem and one by the direct method of the calculus of variations; I'm looking for other proofs. Any hints? What about $\mathbb{R}^2$? I would be really thankful. REPLY [2 votes]: Only for $n=3$. Let the norm of $H_0^1(\Omega)$ and $L^p(\Omega)$ be $\|\cdot\|_2$ and $|\cdot|_p$. Note $H_0^1(\Omega)\hookrightarrow L^6(\Omega)$ is continuous. First let us get a priori estimates. Multiplying both sides of the equation by $u$ gives $$ \|u\|_2^2+|u|_4^4=\int fudx\le \frac{1}{4\epsilon}|f|^2_2+\epsilon|u|_2^2\le\frac{1}{4\epsilon}|f|^2_2+\epsilon\sqrt{|\Omega|}\,|u|_4^2.
\tag{1}$$ From this, we have $\|u\|_2\le C,|u|_4\le C, |u|_6\le C$ for some $C$ depending on $\Omega, f$. Let $M=\{u\in H^1_0(\Omega): \|u\|_2\le C\}$. Now we use the Schauder fixed point theorem to prove the existence of a solution. Note that the problem $$ \left\{\begin{array}{ll}-\Delta u=h,&x\in\Omega\\ u=0,&x\in\partial\Omega\end{array}\right. $$ has a unique solution for $h\in L^2(\Omega)$. Define $u=Kh$ and then $K: L^2(\Omega)\to L^2(\Omega)$ is compact. The equation can be written as $$ \left\{\begin{array}{ll}-\Delta u=-u^3+f,&x\in\Omega\\ u=0,&x\in\partial\Omega\end{array}\right. $$ which can be written as $u=K(-u^3+f)$. Let $Tu=K(-u^3+f)$. Now we first show $T: M\to H^1_0(\Omega)$ is compact. In fact, let $\{u_n\}\subset M$. Then there is a subsequence, still denoted by $\{u_n\}$, such that $$ u_n\to u \text{ weakly in }H^1_0(\Omega)\text{ and strongly in } L^4(\Omega). $$ Let $u_n^*=Tu_n$, namely $u_n^*\in H^1_0(\Omega)$ satisfies $$ -\Delta u_n^*+u_n^3=f. \tag{2}$$ Similarly we have $\|u_n^*\|_2\le C$ which implies that there is $u^*\in H_0^1(\Omega)$ such that $$ u_n^*\to u^* \text{ weakly in }H^1_0(\Omega)\text{ and strongly in } L^q(\Omega), \ q<6. $$ Using $u_n^*-u^*$ as a test function in (2), we have $$ \int_{\Omega}|\nabla(u_n^*-u^*)|^2dx+\int_{\Omega}\nabla u^*\nabla(u_n^*-u^*)dx+\int_{\Omega}u_n^3(u_n^*-u^*)dx=\int_{\Omega}f(u_n^*-u^*)dx. $$ Letting $n\to\infty$, we have $u_n^*\to u^*$ in $H^1_0(\Omega)$. So $T$ is compact. It is easy to show that $T$ is continuous. Next we show that the set $$ \{x\in H_0^1(\Omega): x=\lambda Tx \text{ for }\lambda\in[0,1]\} $$ is bounded. It is equivalent to showing the solution of $$ -\Delta u=\lambda(-u^3+f) $$ is bounded in $H_0^1(\Omega)$. Using the same argument as in (1), it is easy to see $\|u\|_2\le C$. Thus by the Schauder fixed point theorem, $T$ has a fixed point. Finally, uniqueness follows from the monotonicity of $u\mapsto u^3$: if $u_1,u_2$ are two solutions then, subtracting the two equations and testing with $u_1-u_2$, $$ \int_{\Omega}|\nabla(u_1-u_2)|^2dx+\int_{\Omega}(u_1^3-u_2^3)(u_1-u_2)dx=0, $$ and both terms are nonnegative, so $u_1=u_2$.<|endoftext|> TITLE: Why are hyperbolic circles in the upper half plane also Euclidean circles? QUESTION [5 upvotes]: In a few places I've seen, when talking about the upper half plane as a model of hyperbolic geometry, it mentioned offhand that circles (in the sense of a set of points equidistant from a given point under a given metric) in the Euclidean and hyperbolic senses coincide, but that the centers are different! Is there a good way to show this? I've tried using the hyperbolic metric by brute computation, and tried using some properties of Möbius transformations, but am stuck. REPLY [6 votes]: One way to show this is to go through the Poincaré disk model. In particular: The hyperbolic circles in the Poincaré disk model centered at the center point of the disk are clearly the same as the Euclidean circles centered at that point. Since the isometries of the Poincaré disk model are Möbius transformations, and since Möbius transformations map circles to circles, it follows that the hyperbolic circles centered at any point in the Poincaré disk are Euclidean circles (possibly with a different center). Finally, since there is a Möbius transformation that maps the Poincaré disk model isometrically to the upper half plane model, it follows that the same holds for the upper half plane model.<|endoftext|> TITLE: What kinds of algebraic integers are of degree $4$? QUESTION [8 upvotes]: Not that I fully understand quadratic integer rings yet, but I've been wondering about quartic integer rings. That leads me to the question: what kinds of algebraic integers generate quartic integer rings?
I am fairly certain only square roots of squarefree integers are algebraic integers of degree $2$. So for degree $4$, these are the numbers that I am aware of: Fourth roots of squarefree integers with prime factors, like $\root 4 \of 5$, $\root 4 \of 6$, etc. (so $-i$ is not among these, despite being a fourth root of $1$). Sums of two square roots of coprime squarefree integers, like $\sqrt 2 + \sqrt 3$, $\sqrt 5 + \sqrt 7$. What am I missing? REPLY [3 votes]: I choose not to be daunted by this question, even though in some regards it is daunting, however much that has been overplayed so far. I think that a lot of these algebraic integers of degree $4$ can be boiled down to $a + b \theta + c \theta^2 + d \theta^3$, where $a, b, c, d$ are all integers, or perhaps all rational numbers satisfying a certain condition, and $\theta$ is an algebraic integer of degree $4$. Something tells me that it is this $\theta$ that you're actually interested in, namely, your apparent ignorance of quadratic integers like $\omega$ and $\phi$ despite your earlier demonstrated acquaintance with them. It is for these $\theta$ that things get hairy. What I have pieced together so far, mainly from your question and from comments: Fourth roots of integers, provided they are not perfect powers and not divisible by any fourth powers. Sums of two square roots of coprime squarefree integers. Square root of an integer plus a square root. Maybe an integer plus a square root, divided by a square root or the cube of a square root? At least we don't have to worry about Abel's impossibility theorem at this degree.<|endoftext|> TITLE: Proving $\frac 1{R(q)}-1-R(q)=\frac {f(-q^{1/5})}{q^{1/5} f(-q^5)}$ (Ramanujan's formula) QUESTION [6 upvotes]: Question: How would you prove $$\dfrac 1{R(q)}-1-R(q)=\dfrac {f(-q^{1/5})}{q^{1/5} f(-q^5)}\tag{1}$$ where $f(-q)=(q;q)_\infty$ and $R(q)$ is defined as $$R(q)=q^{1/5}\dfrac {(q;q^5)_\infty (q^4;q^5)_\infty}{(q^2;q^5)_\infty (q^3;q^5)_\infty}\tag{2}$$ I'm not sure what to do. I started with Euler's Pentagonal Number Theorem $$f(-q)=\sum\limits_{n=-\infty}^{\infty} (-1)^n q^{n(3n+1)/2}\tag3$$ and got the following: $$\begin{align*} & \color{red}{f(-q^{1/5})}=\sum\limits_{n=-\infty}^{\infty} (-1)^n q^{n(3n+1)/10}\tag4\\ & \color{blue}{f(-q^5)}=\sum\limits_{n=-\infty}^{\infty} (-1)^n q^{5n(3n+1)/2}\tag5\end{align*}$$ Thus, the RHS can be expressed as $$\dfrac {f(-q^{1/5})}{q^{1/5} f(-q^5)}=\dfrac {\color{red}{\sum\limits_{n=-\infty}^\infty (-1)^n q^{n(3n+1)/10}}}{q^{1/5}\color{blue}{\sum\limits_{n=-\infty}^\infty (-1)^n q^{5n(3n+1)/2}}}\tag6$$ However, I'm not sure how to further reduce down $(6)$. And "even furthermore," I don't know how to simplify $R(q)$ to get $$R(q)=q^{1/5}\dfrac {(q;q^5)_\infty (q^4;q^5)_\infty}{(q^2;q^5)_\infty (q^3;q^5)_\infty}\tag7$$ REPLY [3 votes]: The proof I present can be found in Duke's paper [Du]. Another proof can be found in Berndt's book [Ber] but it is much more involved (it requires quite complicated transformations of theta series) and is still lengthy. If $|q|<1$, then $$ R(q)=q^{1/5}\frac{\sum^{\infty}_{n=-\infty}(-1)^nq^{5n^2/2+3n/2}}{\sum^{\infty}_{n=-\infty}(-1)^nq^{5n^2/2+n/2}}=q^{1/5}\prod^{\infty}_{n=1}(1-q^n)^{(n|5)},\tag A $$ where $(n|m)$ is the usual Jacobi symbol. We define the theta functions with parameters $\alpha,\beta$ in $\textbf{R}$ by $$ \theta(\alpha,\beta;z):=\sum^{\infty}_{n=-\infty}e\left(\frac{1}{2}\left(n+\frac{\alpha}{2}\right)^2z+\frac{\beta}{2}\left(n+\frac{\alpha}{2}\right)\right)\textrm{, }Im(z)>0,\tag 1 $$ where $e(z):=e^{2\pi i z}$.
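Before the proof, here is a quick numerical sanity check of the identity in question, using only the product expansions (a sketch in Python; the truncation level `N` and the sample value `q = 0.1` are arbitrary choices):

```python
N = 400          # truncation level for the infinite products
q = 0.1

def f_minus(t):
    # f(-t) = (t; t)_infinity = prod_{n >= 1} (1 - t**n)
    p = 1.0
    for n in range(1, N):
        p *= 1.0 - t**n
    return p

def R(q):
    # product form: q^{1/5} (q;q^5)(q^4;q^5) / ((q^2;q^5)(q^3;q^5))
    p = 1.0
    for n in range(1, N):
        if n % 5 in (1, 4):
            p *= 1.0 - q**n
        elif n % 5 in (2, 3):
            p /= 1.0 - q**n
    return q**0.2 * p

lhs = 1.0 / R(q) - 1.0 - R(q)
rhs = f_minus(q**0.2) / (q**0.2 * f_minus(q**5))
print(lhs, rhs)  # the two values should agree up to floating-point error
```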
For these theta functions we have the following properties: Theorem 1. (see [FK]) If $l,m\in\textbf{Z}$ and $N$ is a positive integer, then $$ \theta(\alpha,\beta;z)=e\left(\mp \frac{\alpha m}{2}\right)\theta\left(\pm \alpha+2l,\pm \beta+2m;z\right)\tag 2 $$ $$ \theta(\alpha,\beta;z)=\sum^{N-1}_{k=0}\theta\left(\frac{\alpha+2k}{N},N\beta;N^2z\right)\tag 3 $$ The theta function (1) also satisfies, for $a,b,c,d\in\textbf{Z}$ with $ad-bc=1$, the following functional (modular) relation $$ \theta\left(\alpha,\beta;\frac{az+b}{cz+d}\right)=\kappa\sqrt{cz+d}\cdot\theta\left(a\alpha+c\beta-ac,b\alpha+d\beta+bd;z\right),\tag 4 $$ where $$ \kappa=e\left(-\frac{1}{4}(a\alpha+c\beta)bd-\frac{1}{8}(ad\alpha^2+cd\beta^2+2bc\alpha\beta)\right)\kappa_0,\tag 5 $$ with $\kappa_0$ being an eighth root of unity, depending only on $a,b,c,d$. In particular we have $$ \theta(\alpha,\beta;z+1)=e\left(-\frac{\alpha}{4}\left(1+\frac{\alpha}{2}\right)\right)\theta(\alpha,\beta;z)\tag 6 $$ and $$ \theta\left(\alpha,\beta;\frac{-1}{z}\right)=e\left(-\frac{1}{8}\right)\sqrt{z}\cdot e\left(\frac{\alpha\beta}{4}\right)\theta\left(\beta,-\alpha;z\right).\tag 7 $$ The following product formula also holds: $$ \theta(\alpha,\beta;z)= $$ $$ e\left(\frac{\alpha\beta}{4}\right)q^{\alpha^2/8}f(-q)\prod^{\infty}_{n=1}\left(1+e\left(\beta/2\right)q^{n-\frac{\alpha+1}{2}}\right)\left(1+e\left(-\beta/2\right)q^{n+\frac{\alpha-1}{2}}\right),\tag B $$ where $q=e(z)$, $Im(z)>0$. Theorem 2 If $q_1=e(-1/z)$ and $q=e(z)$, $Im(z)>0$, then there exists a fifth root of unity $\zeta=\zeta(q)=e\left(\nu(q)/5\right)$ such that $\nu(q)\in\{0,1,2,3,4\}$ and $$ R(q_1)=\overline{\zeta\left(q_1\right)}\cdot\frac{1-\phi \zeta(q) R(q)}{\phi+\zeta(q) R(q)},\tag 8 $$ where $\phi:=\frac{1+\sqrt{5}}{2}$ and where $\overline{z}$ denotes the complex conjugate of $z$. Proof. If $z$ is in the upper half plane, then relation (A) can be written, with the help of (1), as $$ R(q)^5=\left(e\left(-1/10\right)\frac{\theta(3/5,1;5z)}{\theta(1/5,1;5z)}\right)^5.\tag 9 $$ The 5th power on both sides of (9) is taken to avoid the factor $q^{1/5}$ of (A). Hence we can write $$ \zeta(q)\cdot R(q)=e\left(-1/10\right)\frac{\theta(3/5,1;5z)}{\theta(1/5,1;5z)},\tag {10} $$ where $\zeta(q)$ is a 5th root of unity. Using (7),(2),(3) we get $$ \zeta(q_1)R(q_1)= e\left(-1/10\right)\frac{\theta(3/5,1;-5/z)}{\theta(1/5,1;-5/z)}=\frac{\theta(1,3/5;z/5)}{\theta(1,1/5;z/5)}= $$ $$ =\frac{\sum^{4}_{\nu=0}\theta(\frac{2\nu+1}{5},3;5z)}{\sum^{4}_{\nu=0}\theta(\frac{2\nu+1}{5},1;5z)}.\tag{11} $$ But $$ \sum^{4}_{\nu=0}\theta\left(\frac{2\nu+1}{5},3;5z\right)=\theta(1/5,3;5z)+\theta(3/5,3;5z)+\theta(1,3;5z)+ $$ $$ +\theta(7/5,3;5z)+\theta(9/5,3;5z)= $$ $$ =e(1/10)\theta(1/5,1;5z)+e(3/10)\theta(3/5,1;5z)+e(1)\theta(1,1;5z)+ $$ $$ +e(7/10)\theta(2-7/5,1,5z)+e(9/10)\theta(2-9/5,1,5z)= $$ $$ =\left(e^{\frac{i\pi}{5}}+e^{\frac{18i\pi}{5}}\right)\theta(1/5,1;5z)+\left(e^{\frac{3i\pi}{5}}+e^{\frac{4i\pi}{5}}\right)\theta(3/5,1,5z).\tag{12} $$ In the same way we get $$ \sum^{4}_{\nu=0}\theta\left(\frac{2\nu+1}{5},1,5z\right)=\left(1+e^{\frac{9\pi i}{5}}\right)\theta(1/5,1;5z)+\left(1+e^{\frac{7\pi i}{5}}\right)\theta(3/5,1;5z).\tag{13} $$ From (10),(11),(12) and (13) we get the result. $qed$ Notes 1 Actually the ''constant'' term $\zeta=\zeta(q)=e^{2\pi i\nu(q)/5}$ in (8) and (10) is changing along with the value of $Re(z)$. Theorem 3 Let $q=e(z)$, $Im(z)>0$, with $-1/2<Re(z)\leq 1/2$, and define $r_0(z):=\zeta(q) R(q)$ (according to (10)). Then $$ r_0(z+1)=e^{2\pi i/5}r_0(z)\tag{14} $$ and $$ r_0(-1/z)=\frac{1-\phi r_0(z)}{\phi+r_0(z)}\tag{15} $$ Proof.
To prove (14) we use (6) in (10). The result follows easily. The second relation is an immediate consequence of the definition of $r_0(z)$ and Theorem 2. $qed$ Definition 1 i) We denote by $\textbf{H}$ the upper half complex plane. ii) We call fundamental domain $D$ the set $$ D=\{z\in\textbf{H}\textrm{ : }|Re(z)|\leq\frac{1}{2}\textrm{, }|z|\geq 1\}\tag{16} $$ iii) $$ \Gamma=\left\{\left( \begin{array}{cc} a\textrm{ }b\\ c\textrm{ }d \end{array} \right)\textrm{ : }a,b,c,d\in\textbf{Z}\textrm{, }ad-bc=1\right\}\tag{17} $$ iv) We call a Möbius transformation $\sigma$ any map $\sigma:\textbf{C}\rightarrow\textbf{C}$ such that $$ \sigma(z)=\frac{az+b}{cz+d}\textrm{, }ad-bc\neq0.\tag{18} $$ Notes 2 (see [Mor,Wag] pg.134-135) The main properties of the fundamental domain $D$ are 1) For every $z\in\textbf{H}$, there is a Möbius transformation $\sigma$ in $\Gamma$ such that $\sigma(z)\in D$, and 2) If $z\in D$ and $\sigma\in \Gamma$ are such that $\sigma(z)\in D$ and $\sigma$ is not the identity, then either $Re(z)=\pm\frac{1}{2}$ and $\sigma(z)=z\pm 1$, or $|z|=1$ and $\sigma(z)=-1/z$. Lemma 1 If $h$ is a bounded holomorphic function in $\textbf{H}$ such that $h(z+1)=h(z)$ and $h(-1/z)=h(z)$, for all $z$ in $\textbf{H}$, then $h$ is constant. Proof. (Heilbronn) Since the function $h(z)$ is $1$-periodic in $\textbf{H}$, there exists a function $g:\textbf{K}\rightarrow\textbf{C}$, where $\textbf{K}=\{z\in\textbf{C}:|z|<1\}$, such that $h(z)=g(q)$, $q=e(z)$, $Im(z)>0$. But $h(z)$ is bounded and holomorphic in $\textbf{H}$. Hence we conclude that $g$ is regular at the origin. By subtracting the value $g(0)$ we may assume that $h(z)\rightarrow 0$ as $Im(z)\rightarrow\infty$. Now for any point $z\in\textbf{H}$ there is a Möbius transformation $\sigma$ in $\Gamma$ with $\sigma(z)\in D$, where $D$ is the fundamental domain. Assume that $|h(z)|$ attains its maximum at some point $z_0$ in $D$. This point is in $\partial D$ but not at infinity. Suppose $|h(z_0)|=M$ is the maximum, where $z_0\in\partial D$. If $h(z)$ were not a constant, we could find another point $z_1$ in a neighborhood of $z_0$ and not in $D$ with $|h(z_1)|>M$. Hence for the Möbius map $\sigma$ we have $\sigma(z_1)\in D$ and $|h(z_1)|>M\geq |h(\sigma(z_1))|=|h(z_1)|$, which is a contradiction. This proves the lemma. $qed$ Lemma 2 If $h$ is a bounded holomorphic function in $\textbf{H}$ such that $h(z+T)=h(z)$, $T\in\textbf{N}:=\{1,2,3,\ldots\}$, and $h(-1/z)=1/h(z)$, for all $z\in \textbf{H}$, then $h$ is constant. Proof. Suppose that $h(z)$ is $T$-periodic, bounded, not constant, and holomorphic in $\textbf{H}$. Then there exists a function $g:\textbf{K}\rightarrow\textbf{C}$ such that $h(z)=g(q)=g\left(e^{2\pi i z/T}\right)$, $Im(z)>0$. Since $h(z)$ is bounded in $\textbf{H}$, $h$ is bounded in $D$. Also there exist $M>0$ and $z_0$ in $\partial D$ ($z_0$ being the point at which $h(z)$ attains its maximum in $D$) such that $\frac{1}{M}=|h\left(\frac{-1}{z_0}\right)|\leq|h(z)|\leq |h(z_0)|= M$ for all $z\in D$ (since $h(-1/z)=1/h(z)$). Obviously the point $z_0$ is not at infinity. Also in the neighborhood of $z_0$ in the upper half plane (not in $D$) there exists a $z_1$ such that $|h(z_1)|>M$. But for the Möbius map $\sigma\in\Gamma$, $\sigma(z):\textbf{H}\rightarrow D$, we have $h\left(\sigma(z_1)\right)=\frac{1}{h(z_1)}$ and $|h(\sigma(z_1))|\geq\frac{1}{M}$, or equivalently $\frac{1}{|h(z_1)|}\geq\frac{1}{M}$, or equivalently $M<|h(z_1)|\leq M$. But this is a contradiction.
$qed$ Definition 2 We define the Dedekind eta function $\eta(z)$ from $\textbf{H}$ to $\textbf{C}$ as $$ \eta(z):=q^{1/24}f(-q)\textrm{, }q=e(z)\textrm{, }Im(z)>0,\tag{19} $$ where $$ f(-q):=\prod^{\infty}_{n=1}(1-q^n) $$ For the $\eta$ function the following modular relation holds: Theorem 4 $$ \eta(-1/z)=\sqrt{-iz}\cdot\eta(z)\textrm{, }Im(z)>0.\tag{20} $$ Theorem 5 Let $q=e(z)$, $Im(z)>0$, then $$ r_0(z)^{-1}-1-r_0(z)=\frac{\eta(z/5)}{\eta(5z)}\tag{21} $$ Proof. For $q_1=e(-1/z)$, $q=e(z)$, with $Im(z)>0$ we set $$ h(z):=\left(r_0(z)^{-1}-1-r_0(z)\right)\frac{\eta(5z)}{\eta(z/5)}. $$ Then $$ h(-1/z)=\left(r_0(-1/z)^{-1}-1-r_0(-1/z)\right)\frac{\eta(-5/z)}{\eta(-1/(5z))} $$ Using (15) and (20) we get after elementary algebraic evaluations $$ h(-1/z)=\frac{5r_0(z)}{1-r_0(z)-r_0(z)^2}\cdot\frac{\sqrt{-iz/5}\cdot\eta(z/5)}{\sqrt{-i5z}\cdot\eta(5z)}=1/h(z). $$ From (14) $h(z)$ is $5$-periodic, i.e. $h(z+5)=h(z)$. The boundedness of $h$ follows from the fact that $r_0\left(x+it\right)$ has no poles or roots when $x\in\textbf{R}$, $0<t<+\infty$, and the quotient $\eta(5z)/\eta(z/5)$, $Im(z)>0$, is continuous and has no roots and poles. Also we have $$ R(q)=q^{1/5}\left(1-q+O\left(q^2\right)\right)\textrm{, }q\rightarrow 0 $$ $$ R(q)^{-1}=q^{-1/5}\left(1+q+O\left(q^3\right)\right)\textrm{, }q\rightarrow 0 $$ and $$ \frac{f(-q^{1/5})}{q^{1/5}f(-q^5)}=q^{-1/5}-1-q^{1/5}+q^{4/5}+q^{6/5}+O\left(q^{11/5}\right)\textrm{, }q\rightarrow 0. $$ From these arguments we have the limit $$ \lim_{t\rightarrow +\infty}\left(r_0(x+it)^{-1}-1-r_0(x+it)\right)\frac{\eta\left(5(x+it)\right)}{\eta\left((x+it)/5\right)}=1. $$ Hence, by Lemma 2, we conclude that $h(x+it)=1$ for $x\in\textbf{R}$ and $0<t<+\infty$, which proves the theorem. $qed$ Theorem 6 If $q=e(z)$, $Im(z)>0$, then $$ R(q)^{-5}-11-R(q)^5=r_0(z)^{-5}-11-r_0(z)^5=\left(\frac{\eta(z)}{\eta(5z)}\right)^6\tag{22} $$ Proof. Suppose that $q_1=e(-1/z)$, $q=e(z)$, $Im(z)>0$. Then set $$ A_1(z):=\left(r_0(z)^{-5}-11-r_0(z)^5\right)\left(\frac{\eta(5z)}{\eta(z)}\right)^6.\tag{23} $$ and $$ h_1(z):=r_0(z)^{-5}-11-r_0(z)^5.\tag{24} $$ Hence $$ A_1(z)=h_1(z)\left(\frac{\eta(5z)}{\eta(z)}\right)^6. $$ A simple algebraic calculation using (15) can show us that $$ h_1(-1/z)=h_1(z)\frac{125}{h_2(z)^6},\tag 25 $$ where from Theorem 5: $$ h_2(z)=r_0(z)^{-1}-1-r_0(z)=\frac{\eta(z/5)}{\eta(5z)}.\tag{26} $$ Hence from (20),(23),(25),(26) we get $$ \frac{A_1(-1/z)}{A_1(z)}=\frac{h_1(-1/z)}{h_1(z)}\frac{\eta(-1/(z/5))^6}{\eta(-1/z)^6}\frac{\eta(z)^6}{\eta(5z)^6}= $$ $$ =\frac{h_1(z)125}{h_1(z)h_2(z)^6}\frac{\eta(-1/(z/5))^6}{\eta(-1/z)^6}\frac{\eta(z)^6}{\eta(5z)^6}= $$ $$ =\frac{125}{h_2(z)^6}\frac{\left(\sqrt{-iz/5}\cdot\eta(z/5)\right)^6}{\left(\sqrt{-iz}\cdot\eta(z)\right)^6}\frac{\eta(z)^6}{\eta(5z)^6} =\frac{125\eta(5z)^6}{\eta(z/5)^6}\frac{\eta(z/5)^6}{125\eta(z)^6}\frac{\eta(z)^6}{\eta(5z)^6}=1. $$ Hence $$ A_1(-1/z)=A_1(z).\tag{27} $$ Observe also (easily) that $A_1(z)$ is $1$-periodic. Hence using Lemma 1 we get that $A_1(z)$ is constant and from this we easily get that $A_1(z)=1$, for all $z$ in the upper half plane. $qed$ Notes 3 Actually the following more general identities hold: for all $|q|<1$, we have $$ \frac{1}{R(q)}-1-R(q)=\frac{f(-q^{1/5})}{q^{1/5}f(-q^5)} $$ and $$ \frac{1}{R(q)^5}-11-R(q)^5=\frac{f^6(-q)}{q f^6(-q^5)}. $$ [Ber]: B.C. Berndt, 'Ramanujan's Notebooks, Part III'. Springer-Verlag, New York (1991). [Du]: W. Duke. 'Continued fractions and modular functions'. Bull. Amer. Math. Soc. (N.S.), 42 (2005), 137-162. [Mor,Wag]: Carlos J. Moreno, Samuel S. Wagstaff, Jr. 'Sums of Squares of Integers'. Chapman and Hall/CRC, Taylor and Francis Group (2006). [FK]: H. Farkas and I.
Kra, 'Theta Constants, Riemann Surfaces and the Modular Group'. Amer. Math. Soc., Providence, RI, 2001. Note. If there are any problems or gaps in the proof please let me know.<|endoftext|> TITLE: Solving higher-order logarithm integrals without the beta function QUESTION [8 upvotes]: Prove the following $$\int^1_0 \frac{\log(x)^3}{\sqrt{x(1-x)}}dx= -\pi (12 \zeta(3) + \log^3(4) + \pi^2 \log(4))$$ The basic approach is differentiating the beta function three times, which I am trying to avoid here. A contour method would be nice. Question 1 Hence, how does one solve the integral without having to differentiate the beta function? My question can be generalized to any problem involving higher-order logarithms. Question 2 Is there a general formula for $$\frac{\partial^{n+m}}{\partial x^n \,\partial y^m} B(x,y)\,?$$ REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\int_{0}^{1}{\ln^{3}\pars{x} \over \root{x\pars{1 - x}}}\,\dd x \,\,\,\stackrel{x\ =\ \sin^{2}\pars{\theta}}{=}\,\,\, 16\int_{0}^{\pi/2}\ln^{3}\pars{\sin\pars{\theta}}\,\dd\theta \\[5mm] = &\ \left.16\,\Re\int_{0}^{\pi/2}\ln^{3}\pars{{1 - z^{2} \over 2z}\ic} \,{\dd z \over \ic z}\,\right\vert_{\ z\ =\ \exp\pars{\ic\theta}} = \left.16\,\Im\int_{0}^{\pi/2}\ln^{3}\pars{{1 - z^{2} \over 2z}\ic} \,{\dd z \over z}\,\right\vert_{\ z\ =\ \exp\pars{\ic\theta}} \\[5mm] \stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\sim} & -16\,\Im\int_{\pi/2}^{0} \bracks{-\ln\pars{2\epsilon} + \pars{{\pi \over 2} - \theta}\ic}^{\,3}\ \ic\,\dd\theta - 16\,\Im\int_{\epsilon}^{1} \bracks{\ln\pars{1 - x^{2} \over 2x} + {\pi \over 2}\,\ic}^{3}\,{\dd x \over x} \\[5mm] = &\ 16\int_{0}^{\pi/2}\bracks{-\ln^{3}\pars{2\epsilon} + 3\ln\pars{2\epsilon}\,\theta^{2}}\,\dd\theta - 16\int_{\epsilon}^{1}\bracks{3\ln^{2}\pars{1 - x^{2} \over 2x}\,{\pi \over 2} -{\pi^{3} \over 8}}\,{\dd x \over x} \\[1cm] = &\ -8\pi\ln^{3}\pars{2\epsilon} + \color{#f00}{2\pi^{3}\ln\pars{2\epsilon}} - 24\pi\int_{\epsilon}^{1}{\ln^{2}\pars{1 - x^{2}} - 2\ln\pars{1 - x^{2} }\ln\pars{2x} + \ln^{2}\pars{2x} \over x}\,\dd x \\[5mm] &\ \color{#f00}{\mbox{} - 2\pi^{3}\ln\pars{\epsilon}} \\ \stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\sim} &\ \color{#f00}{-8\pi\ln^{3}\pars{2\epsilon}} + 2\pi^{3}\ln\pars{2} - 12\pi\ \overbrace{\int_{0}^{1}{\ln^{2}\pars{1 - x} \over x}\,\dd x} ^{\ds{2\zeta\pars{3}}}\ +\ 24\pi\ln\pars{2}\ \overbrace{\int_{0}^{1}{\ln\pars{1 - x} \over x}\,\dd x} ^{\ds{-\,\mrm{Li}_{2}\pars{1} = -\,{\pi^{2} \over 6}}} \\[5mm] &\ \mbox{} + 12\pi\ \underbrace{\int_{0}^{1}{\ln\pars{1 - x}\ln\pars{x} \over x}\,\dd x} _{\ds{\zeta\pars{3}}}\ -\ 24\pi\ \underbrace{\int_{2\epsilon}^{2}{\ln^{2}\pars{x} \over x}\,\dd x} _{\ds{{\ln^{3}\pars{2} \over 3} - \color{#f00}{\ln^{3}\pars{2\epsilon} \over 3}}} \label{1}\tag{1} \end{align} Note that $$ \left\{\begin{array}{rcl} \ds{\int_{0}^{1}{\ln\pars{1 - x}\ln\pars{x} \over x}\,\dd x} & \ds{=} &
\ds{-\int_{0}^{1}\mrm{Li}_{2}'\pars{x}\ln\pars{x}\,\dd x = \int_{0}^{1}{\mrm{Li}_{2}\pars{x} \over x}\,\dd x = \int_{0}^{1}\mrm{Li}_{3}'\pars{x}\,\dd x} \\[1mm] & \ds{=} & \ds{\mrm{Li}_{3}\pars{1} = \bbx{\ds{\zeta\pars{3}}}} \\[5mm] \ds{\int_{0}^{1}{\ln^{2}\pars{1 - x} \over x}\,\dd x} & \ds{=} & \ds{\int_{0}^{1}{\ln^{2}\pars{x} \over 1 - x}\,\dd x = 2\int_{0}^{1}{\ln\pars{1 - x}\ln\pars{x} \over x}\,\dd x = \bbx{\ds{2\,\zeta\pars{3}}}} \end{array}\right. $$ Expression \eqref{1} becomes \begin{align} &\int_{0}^{1}{\ln^{3}\pars{x} \over \root{x\pars{1 - x}}}\,\dd x \,\,\,\stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\to}\,\,\, 2\pi^{3}\ln\pars{2} - 24\pi\,\zeta\pars{3} - 4\pi^{3}\ln\pars{2} + 12\pi\,\zeta\pars{3} - 8\pi\ln^{3}\pars{2} \\[5mm] = &\,\,\, \bbox[15px,#ffe,border:1px dotted navy]{\ds{-2\pi\bracks{\vphantom{\Large A}6\,\zeta\pars{3} + 4\ln^{3}\pars{2} + \pi^{2}\ln\pars{2}}}} \approx -96.6701 \end{align}<|endoftext|> TITLE: L'Hopital's rule and $\frac{\sin x}x$ QUESTION [11 upvotes]: I have heard people say that you can't (or shouldn't) use L'Hopital's rule to calculate $\lim\limits_{x\to 0}\frac{\sin x}x=\lim\limits_{x\to 0}\cos x=1$, because the result $\frac d{dx}\sin x=\cos x$ is derived by using that limit. But is that opinion justified? Why should I be wary of applying L'Hopital's rule to that limit? I don't see any problem with it. The sine function fulfills the conditions of L'Hopital's rule. Also, it is a fact that the derivative of sine is cosine, no matter how we proved it. Certainly there is a way to prove $\frac d{dx}\sin x=\cos x$ without using the said limit (if someone knows how, they can post it) so we don't even have any circular logic. Even if there isn't, $\frac d{dx}\sin x=\cos x$ was proven sometime without referencing L'Hopital's rule, so we know it is true. Why wouldn't we then freely apply L'Hopital's rule to $\frac {\sin x}x$? PS I'm not saying that this is the best method to derive the limit or anything, but that I don't understand why it is so frowned upon and often considered invalid. REPLY [7 votes]: I expand my comment to ziggurism's answer here. Suppose that by a lot of hard work, patience and ... (add more nice words if you like) I have obtained these three facts: $\cos x$ is continuous. $\dfrac{d}{dx}(\sin x) = \cos x$ L'Hospital's Rule and my next goal is to establish the following fact: $$\lim_{x \to 0}\frac{\sin x}{x} = 1\tag{*}$$ Because of all that hard work, patience and ... I know that my goal $(*)$ is an immediate consequence of just the fact $(2)$ alone stated above. Then why would I combine all the three facts mentioned above to achieve my goal? To borrow an idea from user "zhw.", wouldn't a person doing this be considered silly? More often than not, many students don't really understand what's going on behind the scenes when we use the mantra of "differentiate and plug" championed by L'Hospital's Rule. The act of differentiation itself entails that we know certain limits (and rules of differentiation) and further most of the derivatives are continuous (so that plugging in works after the differentiation step). If one is so fond of L'Hospital's Rule, why not put it to a better use to solve complex problems (like this and this) instead of using it to obtain limits which are immediate consequences of differentiation formulas.<|endoftext|> TITLE: Braid groups as knot groups? QUESTION [5 upvotes]: Let $B_n$ denote the braid group on $n$ strands. The knot group of the trefoil knot is isomorphic to $B_3$.
Are there other knots $K_n$ such that the knot group of $K_n$ is $B_n$? REPLY [7 votes]: No. Knot complements are aspherical by the sphere theorem, so the cohomology of knot groups coincides with that of the knot complement, which vanishes in degrees above 1. Now we have the following calculation, due to Fuks, of the Poincare polynomial of pure braid groups: $$\sum_{i=0}^\infty \text{rk} H^i(P_n, \Bbb Z) t^i = \prod_{j=1}^{n-1} (1 + jt).$$ In particular, for $n > 3$, the pure braid group has cohomology in degrees 3 and higher. But if $B_n$ were the fundamental group of a knot complement, then a finite (degree $n!$) cover of this 3-manifold would have fundamental group $P_n$. But that's preposterous, since a noncompact 3-manifold cannot have nontrivial cohomology in degrees 3 and higher. Alternatively, starting at $B_6$ there are evident $\Bbb Z^3$ subgroups of the braid groups, and an aspherical noncompact 3-manifold cannot have fundamental group $\Bbb Z^3$ for the same cohomological reasons as before (but now you can calculate it yourself, since $\Bbb Z^3$ is the fundamental group of $T^3$). But then $B_4$ and $B_5$ need special arguments. You can find $\Bbb Z^3 \subset B_4$ in a somewhat more complicated way: it's generated by $\sigma_1$, $\sigma_3$, and $(\sigma_1 \sigma_2 \sigma_3)^4$.<|endoftext|> TITLE: Given positive integers $ x , y$ and that $3x^2 + x = 4y^2 +y$ Prove that $x-y$ is a perfect square QUESTION [5 upvotes]: This is some old Olympiad problem (I vaguely remember it's Iranian). I factorised it as $(x-y)(3x + 3y +1) = y^2$. Then I fruitlessly tried to prove $\gcd(x-y, 3x + 3y +1) = \gcd(6x+1,6y+1) = 1$ for proving that $x-y$ is a perfect square. How do I continue my method, or are there other ways to approach this intuitively? REPLY [8 votes]: Defining $z = x-y$, from $y^2 = (x-y)(3x + 3y +1)$, we have $$ y^2 = 3z^2 + 6yz+ z$$ which is equivalent to $$(y - 3z)^2 = z(12z+1).$$ Since $\gcd(z, 12z+1)=1 $, $z$ and $12z+1$ both must be perfect squares.<|endoftext|> TITLE: If $S = x_1 + x_2 + .. + x_n$, Prove that $ (1+x_1)(1+x_2)..(1+x_n) \le 1 + S + \frac{S^2}{2!} + .. + \frac{S^n}{n!}$ QUESTION [11 upvotes]: Let $x_1, x_2, \ldots ,x_n$ be positive real numbers, and let $$ S = x_1 + x_2 + \cdots + x_n.$$ Prove that $$ (1+x_1)(1+x_2)\ldots(1+x_n) \le 1 + S + \frac{S^2}{2!} + \cdots + \frac{S^n}{n!}$$ Here's my attempt: Case 1: Let $x_1 = x_2 = x$ (when all terms are equal). $LHS = (1+x)(1+x) = 1+2x+x^2$, $RHS = 1+\frac{x+x}{1!}+\frac{(x+x)^2}{2!}=1+2x+\color{blue}{2}x^2$. Hence when all terms are equal, $LHS < RHS$.<|endoftext|> TITLE: Simple closed form for $\int_0^\infty\frac{1}{\sqrt{x^2+x}\sqrt[4]{8x^2+8x+1}}\;dx$ QUESTION [7 upvotes]: Some time ago, I used a fairly formal method (in the second sense of this answer) to derive the following integral, and am wondering whether it is correct or not: $$\int_0^\infty\frac{1}{\sqrt{x^2+x}\sqrt[4]{8x^2+8x+1}}\;dx=\frac{\Gamma\left(\frac{1}{8}\right)^2}{2^{\frac{11}{4}}\Gamma\left(\frac{1}{4}\right)}\tag{1}$$ Wolfram Alpha doesn't evaluate this integral, but gives the first 5 decimal places of the transformed integral $\int_0^1\frac{1}{\sqrt{x-x^2}\sqrt[4]{x^2-8x+8}}\;dx$ which are correct. The method I used was formal and convoluted, with much rearrangement of integrals and series whose convergence I did not know; such methods have given me incorrect results in the past (e.g. here and here).
Nevertheless, the integral appears simple and perhaps there is a change of variables which would enable easy evaluation of the RHS (in particular I wonder whether elliptic integrals are involved) but I cannot find one. My question is: is the above result correct, and if so is there a simpler way to prove it than that below (which feels like I might have almost come in a circle)? Derivation: I began with the fact that $\int_0^\infty e^{-x^n}dx=\frac{\Gamma(\frac{1}{n})}{n}$ and then proceeded as follows: $$\left[\int_0^\infty e^{-x^8}\;dx\right]^2=\frac{\Gamma(\frac{1}{8})^2}{64}=\int_0^\infty\int_0^\infty e^{-x^8-y^8}\;dx\;dy=\int_0^\infty\int_0^\frac{\pi}{2}re^{-r^8(\cos^8\theta\;+\;\sin^8\theta)}\;d\theta\;dr={\int_0^\infty\int_0^\frac{\pi}{2}re^{-r^8(1\;-\;\sin^22\theta\;+\;\frac{\sin^42\theta}{8})}\;d\theta\;dr}={\int_0^\infty re^{-r^8}\int_0^\frac{\pi}{2}e^{r^8(\sin^22\theta\;-\;\frac{\sin^42\theta}{8})}\;d\theta\;dr}$$ In order to evaluate the inner integral I used Maclaurin series, the binomial theorem, and freely rearranged integrals and sums as follows: $$\int_0^\frac{\pi}{2}e^{a(\sin^22\theta\;-\;\frac{\sin^42\theta}{8})}\;d\theta=\int_0^\frac{\pi}{2}\sum_{n=0}^\infty \frac{a^n}{n!}\left(\sin^22\theta-\frac{\sin^42\theta}{8}\right)^n\;d\theta={\int_0^\frac{\pi}{2}\sum_{n=0}^\infty \frac{a^n}{n!}\sum_{m=0}^n\frac{n!}{m!(n-m)!}\sin^{2n-2m}2\theta\;\left(\frac{-1}{8}\right)^m\sin^{4m}2\theta\;d\theta}={\sum_{n=0}^\infty\sum_{m=0}^n\frac{a^n\left(\frac{-1}{8}\right)^m}{m!(n-m)!}\int_0^\frac{\pi}{2}\sin^{2n+2m}2\theta\;d\theta}$$ Using $\int_0^\frac{\pi}{2}\sin^{2k}2\theta\;d\theta=\frac{1}{2}\int_0^\pi\sin^{2k}\theta\;d\theta=\int_0^\frac{\pi}{2}\sin^{2k}\theta\;d\theta=\frac{\pi(2k)!}{2^{2k+1}(k!)^2}$, I got: $${\sum_{n=0}^\infty\sum_{m=0}^n\frac{a^n\left(\frac{-1}{8}\right)^m}{m!(n-m)!}\int_0^\frac{\pi}{2}\sin^{2n+2m}2\theta\;d\theta}={\frac{\pi}{2}\sum_{n=0}^\infty\sum_{m=0}^n\left(\frac{a}{4}\right)^n\left(\frac{-1}{32}\right)^m\frac{(2n+2m)!}{m!(n-m)!(n+m)!^2}}$$ So that I had: $$\int_0^\frac{\pi}{2}e^{a(\sin^22\theta\;-\;\frac{\sin^42\theta}{8})}\;d\theta=\frac{\pi}{2}\sum_{m=0}^\infty\sum_{n=0}^\infty\left(\frac{a}{4}\right)^n\left(\frac{-a}{128}\right)^m\frac{(2n+4m)!}{m!\;n!\;(n+2m)!^2}\tag{2}$$ Substituting $(2)$ into our original integral, using the series and integral definitions of the hypergeometric function, and rearranging integrals and sums gave: $$\frac{\Gamma(\frac{1}{8})^2}{64}=\frac{\pi}{2}\int_0^\infty re^{-r^8}\sum_{m=0}^\infty\sum_{n=0}^\infty\left(\frac{r^8}{4}\right)^n\left(\frac{-r^8}{128}\right)^m\frac{(2n+4m)!}{m!\;n!\;(n+2m)!^2}\;dr={\frac{\pi}{2}\sum_{m=0}^\infty\sum_{n=0}^\infty\left(\frac{1}{4}\right)^n\left(\frac{-1}{128}\right)^m\frac{(2n+4m)!}{m!\;n!\;(n+2m)!^2}\int_0^\infty r^{8n+8m+1}e^{-r^8}\;dr}={\frac{\pi}{16}\sum_{m=0}^\infty\sum_{n=0}^\infty\left(\frac{1}{4}\right)^n\left(\frac{-1}{128}\right)^m\frac{(2n+4m)!}{m!\;n!\;(n+2m)!^2}\Gamma\left(m+n+\frac{1}{4}\right)}={\frac{\sqrt{\pi}}{16}\sum_{m=0}^\infty\left(\frac{-1}{8}\right)^m\frac{\Gamma(m+\frac{1}{4})\;\Gamma(2m+\frac{1}{2})}{m!\;\Gamma(2m+1)}\sum_{n=0}^\infty\frac{\Gamma(m+n+\frac{1}{4})}{\Gamma(m+\frac{1}{4})}\frac{\Gamma(2m+1)}{\Gamma(n+2m+1)}\frac{\Gamma(n+2m+\frac{1}{2})}{\Gamma(2m+\frac{1}{2})}}={\frac{\sqrt{\pi}}{16}\sum_{m=0}^\infty\left(\frac{-1}{8}\right)^m\frac{\Gamma(m+\frac{1}{4})\;\Gamma(2m+\frac{1}{2})}{m!\;\Gamma(2m+1)}\;_2F_1\left(m+\frac{1}{4},2m+\frac{1}{2};2m+1;1\right)}={\frac{1}{16}\sum_{m=0}^\infty\left(\frac{-1}{8}\right)^m\frac{\Gamma(m+\frac{1}{4})}{m!}\int_0^1
x^{2m-\frac{1}{2}}(1-x)^{-\frac{1}{2}}(1-x)^{-m-\frac{1}{4}}\;dx}={\frac{1}{16}\int_0^1 x^{-\frac{1}{2}}(1-x)^{-\frac{3}{4}}\sum_{m=0}^\infty\frac{\Gamma(m+\frac{1}{4})}{m!}\left(-\frac{x^2}{8(1-x)}\right)^m\;dx}$$ In order to evaluate a sum of the form present in the integrand I used basic Laplace transform identities and interchanged sum and Laplace transform to get: $$\sum_{n=0}^\infty\frac{1}{n!}\frac{\Gamma(n+\frac{1}{4})}{s^{n+\frac{1}{4}}}=\sum_{n=0}^\infty\frac{1}{n!}L\left[t^{n-\frac{3}{4}}\right](s)=L\left[t^{-\frac{3}{4}}e^t\right](s)=\frac{\Gamma(\frac{1}{4})}{(s-1)^\frac{1}{4}}$$ and hence $\sum\limits_{n=0}^\infty\frac{x^n}{n!}\Gamma\left(n+\frac{1}{4}\right)=\frac{\Gamma(\frac{1}{4})}{(1-x)^\frac{1}{4}}$ so that I had: $$\frac{\Gamma(\frac{1}{8})^2}{64}={\frac{\Gamma(\frac{1}{4})}{16}\int_0^1 x^{-\frac{1}{2}}(1-x)^{-\frac{3}{4}}\left(1+\frac{x^2}{8(1-x)}\right)^{-\frac{1}{4}}\;dx}={\frac{8^{\frac{1}{4}}\Gamma(\frac{1}{4})}{16}\int_0^1 x^{-\frac{1}{2}}(1-x)^{-\frac{1}{2}}(8-8x+x^2)^{-\frac{1}{4}}\;dx}$$ From which $(1)$ follows immediately by the substitution $u=\frac{1}{x}-1$. My questions are: Is $(1)$ correct? Could $(1)$ have been derived by a more straightforward method? REPLY [8 votes]: Here is a way to go. Perform the substitution $y=8 x^2 +8 x +1$. Then your integral assumes the form $$\int_1^\infty\!dy\,\frac{1}{2y^{1/4}(y^2-1)^{1/2}}.$$ In the next step introduce $z=1/y^2$ and you obtain $$\frac{1}{4}\int_0^1\!dz\,z^{-7/8}(1-z)^{-1/2}.$$ This is solved by the Beta function and you obtain the result $$ \frac14 B(1/8, 1/2) = \frac{\Gamma(1/8) \Gamma(1/2)}{4\Gamma(5/8)}. \tag{1} $$ Edit: To show that (1) is equivalent to the expression given by the OP, we may employ the duplication formula $$2^{3/4} \pi^{1/2} \Gamma(1/4) = \Gamma(1/8) \Gamma(5/8)$$ to replace $\Gamma(5/8)$ in (1) and the fact that $\Gamma(1/2)=\pi^{1/2}$. Edit 2: Some more details for the first substitution: $$\frac{dy}{dx} = 16 x + 8$$ and thus $$\frac{1}{\frac{dy}{dx} \sqrt{x^2+x}} =\frac{1}{\sqrt{(16 x+8)^2(x^2+x)}} = \frac{1}{\sqrt{4 [(8 x^2 +8 x +1)^2-1]}}.$$ For the second substitution, we have $$\left| \frac{dz}{dy}\right| = \frac{2}{y^3}$$ and thus $$ \frac{1}{\left| \frac{dz}{dy}\right| 2 y^{1/4}(y^2-1)^{1/2}} =\frac{y^{11/4}}{4(y^2-1)^{1/2}} = \frac{1}{4 z^{7/8} \sqrt{1-z}}.$$<|endoftext|> TITLE: Let $f$ be continuous on $ [a,b]$, differentiable on $ (a,b) $ and $ f(x) \neq 0 \forall x \in (a,b) $ Prove $ \exists c \in (a,b) $ such that QUESTION [5 upvotes]: To Prove: $$ \frac{f'(c)}{f(c)} = \frac{1}{a-c} + \frac{1}{b-c} $$ I think I should proceed in the following way: Define $g(x) = \ln|f(x)| + \ln|x-a| + \ln|b-x|$ such that $$g'(x) = \frac{f'(x)}{f(x)} - \frac{1}{a-x} - \frac{1}{b-x} $$ Then, using Rolle's Theorem, I have to prove that $\exists c \in (a,b)$ such that $g'(c) = 0$ But I am not able to solve it any further. REPLY [5 votes]: Let $$g(x) = f(x) (x-a) (b - x) $$ then $g(a) =g(b) = 0$ and hence there is a $c\in (a, b) $ for which $g'(c) =0$.
Clearly this means that $$ f'(c) (c-a) (b - c) + f(c) (b-c) - f(c) (c-a) =0$$ Dividing by $f(c)(c-a) (b-c) $ we are done.<|endoftext|> TITLE: Evaluation of ratio of two binomial expression QUESTION [8 upvotes]: If $\displaystyle A = \sum_{k=0}^{24}\binom{100}{4k}.\binom{100}{4k+2}$ and $\displaystyle B = \sum_{k=1}^{25}\binom{200}{8k-6}.$ Then find $\displaystyle \frac{A}{B}$. $\bf{My\; Try:}$ For evaluation of $$A= \sum_{k=0}^{24}\binom{100}{4k}.\binom{100}{4k+2}= \sum^{24}_{k=0}\binom{100}{100-4k}\cdot \binom{100}{4k+2}$$ $$ = \binom{100}{100}\cdot \binom{100}{2}+\binom{100}{96}\cdot \binom{100}{6}+\cdots \cdots+\binom{100}{4}\cdot \binom{100}{98} = \binom{200}{102}$$ Using $$(1+x)^{100} = \binom{100}{0}+\binom{100}{1}x+\binom{100}{2}x^2+\cdots +\binom{100}{100}x^{100}$$ and $$(x+1)^{100} = \binom{100}{0}x^{100}+\binom{100}{1}x^{99}+\binom{100}{2}x^{98}+\cdots +\binom{100}{100}$$ Now finding Coefficients of $x^{102}$ in $\displaystyle (1+x)^{100}\cdot (x+1)^{100} = \binom{200}{102}$ Now how can I calculate $B$? Help required, thanks. REPLY [4 votes]: We obtain \begin{align*} \color{blue}{A}&\color{blue}{=\sum_{k=0}^{24}\binom{100}{4k}\binom{100}{4k+2}}\\ &=\sum_{k=0}^{24}\binom{100}{4k}\binom{100}{98-4k}\tag{1}\\ &=[z^{98}]\sum_{n=0}^{200}\left(\sum_{k=0}^{24}\binom{100}{4k}\binom{100}{n-4k}\right)z^n\tag{2}\\ &=[z^{98}]\sum_{n=0}^{200}\left(\sum_{{4k+l=n}\atop{k,l\geq 0}}\binom{100}{4k}\binom{100}{l}\right)z^n\\ &=[z^{98}]\frac{1}{4}\left((1+z)^{100}+(1+iz)^{100}\right.\\ &\qquad\qquad\quad\left.+(1-z)^{100}+(1-iz)^{100}\right)(1+z)^{100}\tag{3}\\ &=[z^{98}]\frac{1}{4}\left((1+z)^{200}+\left(1-z^2\right)^{100}\right)\tag{4}\\ &\,\,\color{blue}{=\frac{1}{4}\left[\binom{200}{98}-\binom{100}{49}\right]}\tag{5} \end{align*} Comment: In (1) we use the binomial identity $\binom{p}{q}=\binom{p}{p-q}$. In (2) we introduce the coefficient-of operator $[z^n]$ and interpret the expression as convolution of the product of two polynomials in $z$. In (3) we recall the default case $$\sum_{n=0}^{200}\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\binom{100}{k}\binom{100}{l}\right)z^n=(1+z)^{100}(1+z)^{100}.$$ We use series multisection with the $4$-th roots of unity to filter all elements which are not a multiple of $4$. In (4) we skip terms which do not contribute. In (5) we select the coefficient of $z^{98}$. We obtain \begin{align*} \color{blue}{B}&\color{blue}{=\sum_{k=1}^{25}\binom{200}{8k-6}}\\ &=\frac{1}{8}\sum_{k=1}^8\left(\omega_{8}^k\right)^6\left(1+\omega_8^k\right)^{200}\tag{6}\\ &=\frac{1}{8}\sum_{k=1}^8\left(\frac{1+i}{\sqrt{2}}\right)^{6k}\left(1+\left(\frac{1+i}{\sqrt{2}}\right)^k\right)^{200}\tag{7}\\ &=\frac{1}{8}\left((-i)(1+\omega_8)^{200}-(1+i)^{200}+i\left(1-\overline{\omega}_8\right)^{200}+(1-1)^{200}\right.\\ &\qquad\qquad\left.+(-i)(1-\omega_8)^{200}-(1-i)^{200}+i\left(1+\overline{\omega}_8\right)^{200}+(1+1)^{200}\right)\\ &=\frac{1}{8}\left((1+1)^{200}-\left((1+i)^{200}+(1-i)^{200}\right)\right)\tag{8}\\ &\,\,\color{blue}{=2^{197}-2^{98}}\tag{9} \end{align*} Comment: In (6) we use again multisection of series as we did in (3). This is formula (6.20) in Binomial Identities Derived from Trigonometric and Exponential Series by H.W. Gould. In (7) we note that the primitive $8$-th root of unity is $\omega_8=\frac{1+i}{\sqrt{2}}$. We recall the powers of $\omega_8$ modulo $8$: $\{\omega_8,i,-\overline{\omega}_8,-1,-\omega_8,-i,\overline{\omega}_8,1\}$ which are used in the next line. In (8) we skip terms which do not contribute.
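As an independent cross-check of (5) and (9), one can compare the defining sums with the closed forms in exact integer arithmetic; a small Python sketch (not part of the derivation):

```python
from math import comb

# A and B computed directly from their defining sums:
A = sum(comb(100, 4*k) * comb(100, 4*k + 2) for k in range(25))
B = sum(comb(200, 8*k - 6) for k in range(1, 26))

# Compare with the closed forms (5) and (9):
print(A == (comb(200, 98) - comb(100, 49)) // 4)  # expected: True
print(B == 2**197 - 2**98)                        # expected: True
```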
We finally conclude from (5) and (9) \begin{align*} \color{blue}{\frac{A}{B}=\frac{\binom{200}{98}-\binom{100}{49}}{2^{199}-2^{100}}} \end{align*}<|endoftext|> TITLE: How do you solve a system of quadratic equations? QUESTION [9 upvotes]: I was watching a video where a problem in Galois theory was posed such that it became necessary to tell if a certain element was a perfect square in a finite field extension of the rationals. By writing a general element in terms of its $\mathbb{Q}$-basis and squaring it, the result was a system of quadratic equations. In particular, it was a system of three non-homogeneous equations in three unknowns. The video deferred the solution to a computer algebra system, which felt unsatisfying. I was wondering what techniques would be useful for solving general equations of this kind. My only thought was that Groebner bases might be useful (and I'm sure that's how the computer algebra system solves it). But I don't know much about them, and I wonder how amenable the Buchberger algorithm is to solving it by hand. REPLY [5 votes]: It seems systems like these can be solved with the use of resultants. Say we have three polynomials $f(x, y, z) = g(x, y, z) = h(x, y, z) = 0$ in three variables. We can treat each as a polynomial in the ring $R[x,y][z]$. We can use a resultant to form a new, simpler system $F(x, y) = Res(f, g) = 0$ and $G(x, y) = Res(g, h) = 0$. This effectively eliminates the $z$ variable from our system. Doing this again, we get a final polynomial $R(x) = Res(F, G)$ (taken with respect to $y$ this time over the ring $R[x]$). We solve $R(x) = 0$ to get all possible values of $x$. Then, we go through each of these possible values and plug them into the system $F(x, y) = G(x, y) = 0$. This gives us the corresponding values for $y$. And then, with values for $x$ and $y$ in hand, we solve the original system $f(x, y, z) = g(x, y, z) = h(x, y, z) = 0$ to get the corresponding $z$ values.<|endoftext|> TITLE: How to solve this limit: $\lim\limits_{x \to 0}\left(\frac{(1+2x)^\frac1x}{e^2 +x}\right)^\frac1x$ QUESTION [7 upvotes]: $$\lim\limits_{x \to 0}\left(\frac{(1+2x)^\frac{1}{x}}{e^2 +x}\right)^\frac{1}{x}=~?$$ I cannot solve this limit; I already tried taking the logarithm, but this is where I run out of ideas. Thanks. REPLY [2 votes]: All such limits can be mechanically computed easily using asymptotic expansions. One should not use L'Hopital unless it is obvious that it works well. $ \def\lfrac#1#2{{\large\frac{#1}{#2}}} \def\wi{\subseteq} $ The complete solution produced by the mechanical computation is as follows. As $x \to 0$:   $\Big(\lfrac{(1+2x)^{1/x}}{e^2+x}\Big)^{1/x}$ $= \Big(\lfrac{\exp(\lfrac1x\ln(1+2x))}{e^2+x}\Big)^{1/x}$ $\in \Big(\lfrac{\exp(\lfrac1x(2x-2x^2+O(x^3)))}{e^2+x}\Big)^{1/x}$   $= \Big(\lfrac{\exp(2-2x+O(x^2))}{e^2+x}\Big)^{1/x}$ $= e^{-2} \Big(\lfrac{\exp(O(x^2))}{1+e^{-2}x}\Big)^{1/x}$ $\wi e^{-2} \Big(\lfrac{1+O(x^2)}{1+e^{-2}x}\Big)^{1/x}$   $\wi e^{-2} \Big((1+O(x^2))(1-e^{-2}x)\Big)^{1/x}$ $\wi e^{-2} \Big(1-e^{-2}x+O(x^2)\Big)^{1/x}$   $= e^{-2} \exp\!\Big(\lfrac1x\ln(1-e^{-2}x+O(x^2))\Big)$ $\wi e^{-2} \exp\!\Big(\lfrac1x(-e^{-2}x+O(x^2))\Big)$   $= e^{-2} \exp(-e^{-2}+O(x))$ $= e^{-2-e^{-2}} \exp(O(x))$ $\wi e^{-2-e^{-2}}(1+O(x))$   $= e^{-2-e^{-2}}+O(x)$. The two asymptotic expansions used in the above solution are: $\exp(x) \in 1+O(x)$ if $x \in o(1)$. $\ln(1+x) \in x-\lfrac1{2}x^2+O(x^3)$ if $x \in o(1)$. One question is how I know how many terms of the asymptotic expansions to use. The answer is I do not know in advance.
I just start with using the first two terms, and if at some point I cannot simplify due to the terms cancelling and leaving only asymptotic classes, then I trace where those remaining terms arose from and increase the number of terms in previous asymptotic expansions where needed. This is a mechanical process and is actually used in computer algebra systems to find such limits.<|endoftext|> TITLE: Find all complex numbers $z$ such that $|z|=\frac{1}{|z|}=|1-z|$ QUESTION [5 upvotes]: Problem: Find all complex numbers $z$ such that $|z|=\frac{1}{|z|}=|1-z|$. Basically I have an idea how to solve this and I get $x=\frac12$ but how should I express it mathematically? Should I go and find $y$ also? REPLY [4 votes]: Hint: $\require{cancel}$ $|z|^2=|1-z|^2$ $\iff$ $z \bar z = (1-z)(1-\bar z)$ $\iff$ $\cancel{z \bar z}=1-z-\bar z+\cancel{z \bar z}\,$ Since $1=|z|^2=z \bar z$ it follows that $\bar z = \cfrac{1}{z}$ so the last equality becomes $z^2-z+1=0\,$, which is a simple quadratic having the two possible values of $z\,$ as roots.<|endoftext|> TITLE: Tiling of regular polygon by rhombuses QUESTION [5 upvotes]: A regular decagon and a regular dodecagon have been tiled with rhombuses. In each case, the sides of the rhombuses are the same length as the sides of the regular polygon. How many rhombuses will there be in a tiling by rhombuses of a $2002$-gon? $A) 2002$ $B) 500 × 1001$ $C) 1000 × 1001$ $D) 1000 × 2002$ Could someone please share the thought process for this question? I am not able to get started. REPLY [4 votes]: This is not an answer but a hint. (There was no other way to include a figure.) Look at the labeling I have done of your figures: how many rhombi do you have with the same label? Generalize... by thinking of a certain arithmetic progression.<|endoftext|> TITLE: countable intersection of connected sets QUESTION [6 upvotes]: Let X be a topological space. Given a decreasing sequence of closed and connected subsets $J_1\supset J_2 \supset J_3\supset\cdots$, consider $J:=\bigcap_{i=1}^\infty J_i$. Is it always true that $J$ is connected when $X=\mathbb{R}^2$? And when $X$ is compact but not Hausdorff? (if $X$ is compact and Hausdorff $J$ is connected) REPLY [4 votes]: Here is a counterexample when you remove the Hausdorff assumption. Let $X=[-1,1]\cup\{0'\}$ be topologized like the line with two origins, so neighborhoods of $0'$ are neighborhoods of $0$ but with $0$ replaced by $0'$. Note that $X$ is compact but is not Hausdorff, since $0$ and $0'$ do not have disjoint neighborhoods. Let $J_n=[-1/n,1/n]\cup\{0'\}$. Then each $J_n$ is closed and connected, but their intersection is just $\{0,0'\}$, which has the discrete topology and thus is disconnected.<|endoftext|> TITLE: Number of nonnegative integral solutions of $3x+y+z \leq 25$ QUESTION [6 upvotes]: Find the number of nonnegative integral solutions of $$3x+y+z \leq 25$$ I can get the answer for $3x+y+z=25$ but I can't get the answer with the inequality. Please help.
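For reference, a brute-force enumeration gives a number that any closed form must match; a quick Python sketch (the loop bounds just enforce $3x+y+z\le 25$):

```python
# Count nonnegative integer solutions of 3x + y + z <= 25 by enumeration.
count = sum(1
            for x in range(9)                 # 3x <= 25  =>  x <= 8
            for y in range(26 - 3 * x)        # y <= 25 - 3x
            for z in range(26 - 3 * x - y))   # z <= 25 - 3x - y
print(count)  # 1215
```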
REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ The number of solutions $\ds{\,\mc{S}_{s}}$ with $\ds{3x + y + z = s}$ $\ds{\pars{~\mbox{with}\ x,y,z\ \in\ \mathbb{N}_{\geq 0}\ \mbox{and}\ s \geq 0~}}$ is given by: \begin{align} \mc{S}_{s} & \equiv \bracks{t^{s}}\sum_{x = 0}^{\infty}t^{3x} \sum_{y = 0}^{\infty}t^{y}\sum_{z = 0}^{\infty}t^{z} = \bracks{t^{s}}{1 \over \pars{1 - t^{3}}\pars{1 - t}^{2}} \\[5mm] & = \bracks{t^{s}}\sum_{i = 0}^{\infty}t^{3i} \sum_{j = 0}^{\infty}{-2 \choose j}\pars{-t}^{j} = \bracks{t^{s}}\sum_{i = 0}^{\infty}\sum_{j = 0}^{\infty} \pars{j + 1}\sum_{k = 0}^{\infty}\delta_{k,3i + j}\,t^{k} \\[5mm] & = \bracks{t^{s}}\sum_{k = 0}^{\infty}\bracks{\sum_{i = 0}^{\infty}\sum_{j = 0}^{\infty}\pars{j + 1}\delta_{k,3i + j}}t^{k} = \sum_{i = 0}^{\infty}\sum_{j = 0}^{\infty}\pars{j + 1}\delta_{s,3i + j} = \sum_{i = 0}^{\left\lfloor s/3 \right\rfloor}\pars{s - 3i + 1} \end{align} The number of solutions with $\ds{3x + y + z \leq 25}$ $\ds{\pars{~\mbox{with}\ x,y,z\ \in\ \mathbb{N}_{\geq 0}~}}$ is given by: \begin{align} \sum_{s = 0}^{25}\mc{S}_{s} & = \sum_{s = 0}^{25}\sum_{i = 0}^{\left\lfloor s/3\right\rfloor}\pars{s - 3i + 1} = \bbx{\ds{1215}} \end{align}<|endoftext|> TITLE: Gambler's ruin and martingale QUESTION [5 upvotes]: I don't understand 2 steps of the solution to a gambler's ruin exercise. Set-up for the gambler's ruin problem: $(X_n)_{n\geq 1}$ are i.i.d. rv with $P(X_1=1)=1-P(X_1=-1)=p$ and $p\in (0,1),\ p\neq 1/2$. We have integers $0 < k < N$.<|endoftext|> TITLE: Hölder continuity definition through distributions. QUESTION [8 upvotes]: I am trying to prove that for a given Hölder parameter $\alpha \in (0, 1)$ and a distribution $f \in \mathcal{D}'(\mathbb{R}^d)$ the following are equivalent: $f \in C^{\alpha}$ For any $x$ there exists a polynomial $P_x$ such that $| \langle f - P_x, \phi_x^{\lambda} \rangle | \le C \lambda^{\alpha}.$ Where the latter estimate holds uniformly over all $x$ and $\phi \in \mathcal{D}$ with compact support in the unit ball, and: $$\phi_x^{\lambda}(\cdot) = \lambda^{-d} \phi\left( \frac{\cdot \ - \ x}{\lambda} \right)$$ Proving that the first implies the second is fairly easy. The other way around gives me some problems. The first step I took is to realize that we care only about the order zero term of $P_x,$ since all other terms vanish at an order higher than $\alpha.$ Here we assume that the polynomial is centered in $x.$ Then I would like to prove that $P_x(x) = g(x)$ defines an $\alpha$-Hölder function. Thus we would get that $g \in \mathcal{D}'.$ Eventually I would like to prove that $g = f$ in $\mathcal{D}'.$ My problem is that I can't prove any implications between 2,3,4 nor any of 2,3,4 starting from 1. Any hints, help, suggestions? REPLY [4 votes]: Your steps are the right ones, in this answer I'll fill in the details.
Throughout this answer, I will call $\phi_x^\lambda$ a test function of scale $\lambda$ at $x$ (where $\phi \in \mathcal{D}$ has support in $B(0,1)$). Firstly, let's check that $g$ defines an $\alpha$-Hölder function. Let $\rho$ be a smooth probability density with support in $B(0,1)$. Notice here that $\rho_x^\lambda$ is essentially just a recentered mollifier. This means that we have $$|g(x) - g(y)| = \lim_{\lambda \to 0} |\langle P_x, \rho_x^\lambda \rangle - \langle P_y, \rho_y^\lambda \rangle|$$ Now we seek to bound the right hand side. \begin{align*} |\langle P_x, \rho_x^\lambda \rangle - \langle P_y, \rho_y^\lambda \rangle| \leq& |\langle P_x - f, \rho_x^\lambda \rangle| + |\langle f, \rho_x^\lambda - \rho_y^\lambda \rangle | + | \langle f - P_y, \rho_y^\lambda \rangle | \\ \leq & C_1 \lambda^\alpha + |\langle f, \rho_x^\lambda - \rho_y^\lambda \rangle | \end{align*} We are left to control $|\langle f, \rho_x^\lambda - \rho_y^\lambda \rangle |$. To do this, we notice that $\Phi_{x,y}^\lambda = \rho_x^\lambda - \rho_y^\lambda$ is a test function at scale $\frac{|x-y|}{2} + \lambda$ centered at $\frac{x+y}{2}$ such that $\langle C, \Phi_{x,y}^\lambda \rangle = 0$ for any constant $C$. This means that \begin{align*} |\langle f, \Phi_{x,y}^\lambda \rangle| =& |\langle f - P_{\frac{x+y}{2}}\big(\frac{x+y}{2}\big), \Phi_{x,y}^\lambda \rangle| \\ \leq & |\langle f - P_{\frac{x+y}{2}}, \Phi_{x,y}^\lambda \rangle| + |\langle P_{\frac{x+y}{2}} - P_{\frac{x+y}{2}}\big(\frac{x+y}{2}\big), \Phi_{x,y}^\lambda \rangle| \\ \leq& C_2 (|x-y| + \lambda)^\alpha \end{align*} where we used $\alpha$-Hölder continuity of $P_{\frac{x+y}{2}}$ to bound the second term in the second to last line. In total, $$|g(x) - g(y)| = \lim_{\lambda \to 0} |\langle P_x, \rho_x^\lambda \rangle - \langle P_y, \rho_y^\lambda \rangle| \leq \lim_{\lambda \to 0} C_1 \lambda^\alpha + C_2(|x-y| + \lambda)^\alpha = C_2 |x-y|^\alpha$$ so $g$ is $\alpha$-Hölder. This already implies that $g$ defines a regular distribution. Finally, we must show that $f = g$ in $\mathcal{D}'$. This will follow from the fact that any distribution that vanishes at order $\alpha > 0$ must be the zero distribution. Lemma: Suppose $F \in \mathcal{D}'(\mathbb{R}^d)$ and for every compact set $k \subseteq \mathbb{R}^d$, $$|\langle F, \phi_x^\lambda \rangle| \leq C \lambda^\alpha$$ for $\alpha>0$ where $C$ is uniform over $x \in k$, $\lambda \in (0,1]$ and $\phi \in \mathcal{D}(\mathbb{R}^d)$ with $\operatorname{supp} \phi \subseteq B(0,1)$. Then $F = 0$. Proof of Lemma: Fix a smooth probability density $\rho$ with compact support in $B(0,1)$. Then for any test function $\varphi \in \mathcal{D}$, by standard facts about mollification, $$\langle \varphi, \rho_x^\lambda \rangle = \int_{\mathbb{R}^d} \varphi(y) \rho_x^\lambda(y) dy \to \varphi(x)$$ as $\lambda \to 0$ and in fact, the convergence takes place in $\mathcal{D}$ where $\Phi^\lambda(x) := \langle \varphi, \rho_x^\lambda \rangle$ is considered as a function of $x$.
Hence \begin{align*} |\langle F, \varphi \rangle| =& \lim_{\lambda \to 0} |\langle F, \Phi^\lambda \rangle| \\ =& \lim_{\lambda \to 0} \bigg |\int_{\mathbb{R}^d} \langle F, \rho_y^\lambda \rangle \varphi(y) dy \bigg|\\ \leq& \lim_{\lambda \to 0} C\lambda^\alpha \int_{\mathbb{R}^d} |\varphi(y)| dy = 0 \end{align*} In our particular case, we have \begin{align*} |\langle f-g, \phi_x^\lambda \rangle| \leq& |\langle f - P_x, \phi_x^\lambda \rangle| + |\langle P_x - g(x), \phi_x^\lambda \rangle| + |\langle g - g(x), \phi_x^\lambda \rangle | \\ \leq& C_3 \lambda^\alpha \end{align*} by our assumption on $f$ and $\alpha$-Hölder continuity of $g$ and $P_x$ (since $P_x - g(x) = P_x-P_x(x)$). Hence, by the lemma, $f=g$ as desired.<|endoftext|> TITLE: 6th Grade (12 Years Old) Mathematics - Triangle Inequality Question QUESTION [15 upvotes]: My son's math homework has this question which I find to be extremely confusing. Neither my wife nor I can wrap our heads around what they are expecting as an answer: Two sides of a triangle have lengths $7$ and $9$. The third side has length $x$. Tell whether each of the following statements is always, sometimes but not always or never true. a) $x = 12$ b) $x = 2$ c) $x < 2$ d) $x < 16$ e) $x < 15$ To my mind the answer is sometimes to all of them. For $x = 12$ and $x = 2$, when considering all the possible values of $x$ it will sometimes be those values. For $x < 2$, $x < 15$, and $x < 16$, it's sometimes because all values less than those make it true, until you hit $0$ which forms a line and no longer a triangle. It looks like I've found the teacher's edition of the book (scroll to page 6 in the pdf; question 12), and that just confuses me even further! REPLY [21 votes]: I think you and your wife and your son can have fun figuring this out. Even before you start thinking about triangles, you should be clear about distinguishing among "sometimes", "always" and "never". Each requires a different kind of argument. That's what I'll focus on in this answer. I'd also recommend cutting out some strips of paper $7$ and $9$ inches long, and a strip $16$ inches long marked off in inches. Then play around a bit before trying to think about the arithmetic. You'll notice that if you put the two shorter strips together at one end, the other ends will be at most $16$ inches apart and at least $2$ inches apart – the extreme cases when they're lined up. You will have discovered something that you already know: the sum of any two sides of a triangle must be strictly greater than the third side. If the sum is equal, the "triangle" lies on one line. Now you're ready for a careful reading, one statement at a time. a) $x=12$.  There is a triangle with a third side of $12$, but the third side does not have to be $12$.  So the answer is that it's sometimes $12$. My original answer was wrong. I'll leave it here since it's instructive to think about why. a) $\require{enclose}\enclose{horizontalstrike}{x=12}$.  Since there's just one number to think about, the answer can't be "sometimes". It has to be "always" or "never".  Your strips of paper will tell you it's "always".  The arithmetic is $\enclose{horizontalstrike}{9-7 < 12 < 9 + 7}$. b) and c) Clearly "never", since $9-7 = 2$. d) $x < 16$. Well, every triangle you managed to build had a third side less than $16$. The question starts with an actual triangle, so the answer is "always". That's not the same as "for every $x < 16$ you have a triangle".
That's false, as you observed in a comment, since $2 < 16$ and there's no triangle with third side of $2$. e) $x < 15$. OK, we have a triangle with third side $x$. Can it be less than $15$? Well, yes, obviously. Must it be less than $15$? No. It must be less than $16$, but it might be $15.5$.  So the answer to this one is "sometimes".  (Even if $x$ had to be an integer (not specified in the question), it could be exactly $15$.) Note: I've deliberately written a rather long-winded answer. Thinking things out this way is the best way to learn, even if what your son ends up turning in to his teacher is closer to "just the answer".<|endoftext|> TITLE: Trapezoids - Which definition has a stronger case? QUESTION [11 upvotes]: Today my daughter Ella asked me "Is a trapezoid an irregular polygon?" and I realized I cannot give her a definitive answer. According to the Internet, trapezoids are alternately defined as having only one pair of parallel lines, and also at least one pair of parallel lines. My understanding is that this is simply an unresolved ambiguity in mathematics. My question is, which definition has the stronger case? So far I have this: The case for "only one": Many people seem to think this is more intuitive and/or traditional The case for "at least one": Inclusive definitions are generally more useful (if this is true I'd like to learn why) It's the only definition that fits with the concept of trapezoidal sums in calculus. What am I missing? REPLY [2 votes]: It's something of a truism to say that inclusive definitions are better, because any properties proved of a more general category automatically apply to all special cases contained within it. However, consider the following theorems: Theorem. The angle bisectors of any rectangle intersect at four points which form the vertices of a square. Theorem. The perpendicular bisectors of any rhombus intersect at four points which form the vertices of another rhombus. (See Figures.) These theorems are true statements if rectangle and rhombus are defined exclusively, i.e. if "rectangle" is defined as "a quadrilateral that is equiangular but not equilateral" and if "rhombus" is defined as "a quadrilateral that is equilateral but not equiangular". However, if inclusive definitions are used, these theorems are false, because if the rectangle (resp. rhombus) is a square, then the angle bisectors (resp. perpendicular bisectors) would be concurrent, and the "figure" formed by them would be a single point. In other words: the theorems are true for the general category only if the more specific category is excluded. Depending on whether or not you think these theorems are things you want to be true in your theory, this could be a reason in support of adopting exclusive definitions. There is another option, of course: one could define "rectangle" and "rhombus" inclusively, but in the statement of the theorem specify in the hypothesis that it is true of a non-square rectangle or a non-square rhombus. But if you have to do this often enough, at a certain point it might make sense to introduce a name for "non-square rectangle" and "non-square rhombus" -- and then we are right back to exclusive definitions again. What about trapezoids, the topic the OP specifically asks about? As discussed in this answer, if trapezoids are defined exclusively, then it is possible to calculate the area of a trapezoid given only its four lengths (provided you know which ones are the parallel ones and which ones are the non-parallel ones).
(Even if you do not know which sides are the parallel ones, you can determine that the area must be one of six uniquely determined possible values.) However, if trapezoids are defined inclusively, then the four lengths do not determine the area at all, as the area of a trapezoid with side lengths $a,b,a,b$ can take on any real value between $0$ and $ab$. (Related: For historical background on when the "inclusive" definitions of quadrilaterals came to be standard, see also this answer on MESE.)<|endoftext|> TITLE: Divisor effective on compact Riemann surface QUESTION [5 upvotes]: I have the following problem: $\texttt {(1) Show that every divisor of degree ≥ p on a compact Riemann surface}$ $\texttt {of genus p is linearly equivalent to an effective divisor.}$ My initial plan is to prove: (2) a divisor $D$ on a compact Riemann surface is linearly equivalent to an effective divisor $\Leftrightarrow h^{0}(D)\ge 1$. So if I prove (2), I prove (1). I wonder if this is reasonable. And any ideas on how to prove (2)? Thank you! REPLY [3 votes]: As you already noticed, we need to apply Riemann-Roch for a divisor of degree $k\ge p$. This gives $$\dim H^0(X, O_D) - \dim H^1(X,O_{D}) = 1-p + k\ge1.$$ Thus $\dim H^0(X, O_D)\ge 1$, which means there is a non-zero function $f \in O_D(X)$. So the divisor of $f$ satisfies $(f)+D\ge 0$. But then the divisor $\tilde D:=(f) + D$ is effective and linearly equivalent to $D$, i.e. $D$ is linearly equivalent to the effective divisor $\tilde D$.<|endoftext|> TITLE: All subsets of $\mathbb{R}$ which are countable or the complement of a countable set QUESTION [8 upvotes]: Let C be the class of all subsets of $\mathbb{R}$ which are either countable or are the complement of a countable set. Show that C is a $\sigma$ field. This is what I have right now: (i) $\varnothing$ $\in$ $C$ since $\emptyset$ is countable. (ii) A $\in$ C since A is countable since given C is countable. (iii) $\bigcup^{\infty}_{i=1}A_i$ $\in$ C, is true since $A_1,A_2$, are countable then so is $\cup^{}_{n}A_n$ since a countable union of countable sets is countable. I need help understanding (ii) and (iii). REPLY [12 votes]: Let me point out a minor nitpick before answering: "Countable" here means "at most countable", i.e. "finite or countable". If we were to take "countable" as strictly meaning "in bijection with the natural numbers", this would not produce a $\sigma$-algebra, since $\varnothing$ is finite (cardinality 0) and its complement is uncountable, and hence $\varnothing$ would not be in our class. That said, I take most of what I say below from comments. $$\text{(i: The empty set)}$$ $\varnothing$ is finite, hence at most countable, hence $\varnothing\in C$. I am assuming $\varnothing\in\emptyset$ is a typo for $\varnothing\in C$, otherwise that "formula" is gibberish to me. $$\text{(ii: Closure under taking complements)}$$ This point is a bit unclear, but it seems to be trying to show closure of $C$ under complement. The other interpretation would be that it's trying to show the "universe set" is in $C$, but that "universe" is $\mathbb{R}$ here, and it has never been called $A$ in the question. So I assume $A\subseteq\mathbb{R}$, and in fact $A\in C$. We then have two cases: $A$ is at most countable; $A$ is the complement of an at most countable set. In the first case, $\mathbb{R}\smallsetminus A$ (alias $A^C$) is the complement of an at most countable set, hence $A^C\in C$.
In the second case, $A$ is the complement of an at most countable set $B$, i.e. $A=B^C$; then $A^C=B$ is at most countable, hence again $A^C\in C$; $$\text{(iii: Closure under countable unions)}$$ If $A_i\in C$ for all $i$, we have three cases: a) All $A_i$'s are at most countable; b) All $A_i$'s are complements of at most countable sets; c) There are some $A_i$'s that are at most countable, and some that are complements of at most countable sets; in case a), a countable union of countable sets is countable, as you remarked; so your (iii) works for case a), but neglects the other cases. In case b), the union of the $A_i$'s is: $$\bigcup_iA_i=\bigcup_i(A_i^C)^C=\left(\bigcap_iA_i^C\right)^C,$$ and that intersection is contained in, e.g., $A_1^C$, which is at most countable; hence the intersection is itself at most countable. In case c), we remark that the union operator is associative and commutative, so we reduce $\bigcup_iA_i$ to: $$\bigcup_iA_i=\left(\bigcup_{i:|A_i|\leq|\mathbb N|}A_i\right)\cup\left(\bigcup_{i:|A_i^C|\leq|\mathbb N|}A_i\right).$$ So this union is the union of two sets, the first of which is at most countable as shown in case a), and the second of which is the complement of an at most countable set as shown in case b). Hence, all that is left to prove is that, if $|A|\leq|\mathbb N|$ and $|B^C|\leq|\mathbb N|$ (i.e. $A$ is at most countable as is $B^C$), then $A\cup B$ is either at most countable or the complement of an at most countable set. Now, $B\subseteq A\cup B$, so if $B$ is not at most countable, $A\cup B$ cannot be either. Let us look at the complement: $$(A\cup B)^C=A^C\cap B^C\subseteq B^C,$$ and $B^C$ is at most countable. Hence, $A\cup B$ is the complement of an at most countable set, which completes the proof that $C$ is a $\sigma$-algebra (or $\sigma$-field, as you call it). I hope this is clear.<|endoftext|> TITLE: Is the classifying space $B^nG$ the Eilenberg-MacLane space $K(G, n)$? QUESTION [9 upvotes]: Question: How should we interpret and understand the classifying space $B^nG$? Is that the Eilenberg-MacLane space $K(G,n)$? What one can learn about $BG$ starts from the basics: A classifying space $BG$ of a topological group $G$ is the quotient of a weakly contractible space $EG$ (i.e. a topological space for which all its homotopy groups are trivial) by a free action of $G$. It has the property that any principal $G$-bundle over a paracompact manifold is isomorphic to a pullback of the principal bundle $EG \to BG$. For a discrete group $G$, $BG$ is, roughly speaking, a path-connected topological space $X$ such that the fundamental group of $X$ is isomorphic to $G$ and the higher homotopy groups of $X$ are trivial, that is, $BG$ is an Eilenberg-MacLane space, or a $K(G,1)$. So if $G$ is a finite discrete group, what can one say about the higher homotopy groups of $B^nG$? REPLY [8 votes]: If $G$ is a topological group, then there is a universal principal $G$-bundle $EG \to BG$ where $EG$ is weakly contractible. Using the long exact sequence in homotopy, we see that $$\dots \to \pi_{i+1}(EG) \to \pi_{i+1}(BG) \to \pi_i(G) \to \pi_i(EG) \to \dots$$ As $EG$ is weakly contractible, $\pi_{i+1}(EG) = 0$ and $\pi_i(EG) = 0$, so $\pi_{i+1}(BG) = \pi_i(G)$. Now, if $BG$ is a topological group (which happens if and only if $G$ is an $E_2$ space), then the above argument shows that $\pi_{i+2}(B^2G) = \pi_{i+2}(B(BG)) = \pi_{i+1}(BG) = \pi_i(G)$. In general, if $G$ is an $E_k$ space, then $B^kG$ is defined and satisfies $\pi_{i+k}(B^kG) = \pi_i(G)$.
If $G$ is a discrete group, then $\pi_0(G) = G$ and $\pi_i(G) = 0$ for $i > 0$, so $\pi_k(B^kG) = G$ and $\pi_i(B^kG) = 0$ for $i \neq k$. Therefore $B^kG$ is an Eilenberg-MacLane space $K(G, k)$.<|endoftext|> TITLE: Does the category with two objects, one arrow exist? QUESTION [5 upvotes]: According to pg. 10 of Categories for the Working Mathematician, there exists a category called $\mathbf{2}$ s.t. $\mathbf{2}$ is the category with two objects $a$, $b$ and just one arrow $a \rightarrow b$ not the identity. But doesn't this violate the axiom that all categories must have, for each object, an identity arrow? Here there are no identity arrows $id_a$ and $id_b$, which seems to suggest that $\mathbf{2}$ is in fact not a category. REPLY [8 votes]: This is a wording issue. The phrase "just one arrow $a\rightarrow b$ not the identity" means that the category has only one non-identity arrow; so this is a category with three arrows total, the identities on $a$ and $b$ and the unique $a\rightarrow b$. (Note that the non-identity arrows determine the entire category!) It would probably be clearer to write it as "just one arrow which is not an identity", but oh well.<|endoftext|> TITLE: Is the following limit finite ....? QUESTION [12 upvotes]: I would like to see some clue for the following problem: Let $a_1=1$ and $a_n=1+\frac{1}{a_1}+\cdots+\frac{1}{a_{n-1}}$, $n>1$. Find $$ \lim_{n\to\infty}\left(a_n-\sqrt{2n}\right). $$ REPLY [4 votes]: As other answers indicate, this sequence obeys the following recurrence relation: $$a_1=1\quad\mathrm{and}\quad\forall n\in\mathbb{N},\,a_{n+1}=a_n+\frac{1}{a_n}$$ This proves that the sequence is increasing. Supposing its convergence to some finite limit $L>0$ would lead to $L=L+\dfrac{1}{L}$, a contradiction. Hence the sequence diverges towards $+\infty$. Now, for all $n\in\mathbb{N}$, we have: $$a_{n+1}^2-a_n^2=\left(a_n+\frac{1}{a_n}\right)^2-a_n^2=2+\frac{1}{a_n^2}\longrightarrow 2$$ By Cesàro's lemma: $$\frac{1}{n-1}\left(a_n^2-a_1^2\right)=\frac{1}{n-1}\sum_{k=1}^{n-1}\left(a_{k+1}^2-a_k^2\right)\longrightarrow 2$$ Thus $$a_n\sim\sqrt{2n}$$ Now, we will use ... Lemma Given a sequence $(u_n)_{n\ge1}$ of positive real numbers such that $u_n\sim\dfrac{1}{n}$, we have: $$\sum_{k=1}^nu_k\sim\ln(n)$$ (See below for a proof.) Since $a_{n+1}^2-a_n^2-2\sim\dfrac{1}{2n}$ and by the previous lemma: $$a_n^2-2n\sim\frac{\ln(n)}{2}$$ which can be written: $$a_n=\sqrt{2n}\,\sqrt{1+\frac{\ln(n)}{4n}+o\left(\frac{\ln(n)}{n}\right)}$$ Using now the Taylor expansion $\sqrt{1+t}=1+\frac{t}{2}+o(t)$ as $t\to0$, we finally get: $$\boxed{a_n=\sqrt{2n}\left(1+\frac{\ln(n)}{8n}+o\left(\frac{\ln(n)}{n}\right)\right)}$$ In particular, we see that $\lim_{n\to\infty}\left(a_n-\sqrt{2n}\right)=0$, but the result above is much more accurate.
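Before proving the lemma, the boxed expansion is easy to sanity-check numerically; a minimal Python sketch (the iteration count is an arbitrary choice):

```python
import math

# Iterate a_{n+1} = a_n + 1/a_n from a_1 = 1 and compare a_n with the
# asymptotic expansion sqrt(2n) * (1 + ln(n)/(8n)) obtained above.
a, n = 1.0, 1
while n < 10**6:
    a += 1.0 / a
    n += 1

approx = math.sqrt(2 * n) * (1 + math.log(n) / (8 * n))
print(a - approx)            # much smaller than the gap below
print(a - math.sqrt(2 * n))  # about 2.4e-3 here: tends to 0, but slowly
```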
Proof of the above lemma Given $\epsilon>0$, there exists $N\in\mathbb{N}^\star$ such that: $$k>N\implies\left|u_k-\frac{1}{k}\right|\le\frac{\epsilon}{k}$$ As soon as $n>N$, we have: $$\left|\sum_{k=1}^nu_k-\sum_{k=1}^n\frac{1}{k}\right|\le\underbrace{\left|\sum_{k=1}^N\left(u_k-\frac{1}{k}\right)\right|}_{=A}+\sum_{k=N+1}^n\left|u_k-\frac{1}{k}\right|\le A+\epsilon\sum_{k=N+1}^n\frac{1}{k}$$ And a fortiori: $$\left|\sum_{k=1}^nu_k-\sum_{k=1}^n\frac{1}{k}\right|\le A+\epsilon\sum_{k=1}^n\frac{1}{k}$$ Since $\lim_{n\to\infty}\sum_{k=1}^n\frac{1}{k}=+\infty$, there exists $N'\in\mathbb{N}^\star$ such that: $$n>N'\implies\sum_{k=1}^n\frac{1}{k}>\frac{A}{\epsilon}$$ Finally: $$n>\max\{N,N'\}\implies\left|\sum_{k=1}^nu_k-\sum_{k=1}^n\frac{1}{k}\right|\le2\epsilon\sum_{k=1}^n\frac{1}{k}$$ This proves that: $$\sum_{k=1}^nu_k\sim\sum_{k=1}^n\frac{1}{k}$$ But we know that $\sum_{k=1}^n\frac{1}{k}\sim\ln(n)$ as $n\to\infty$, hence the conclusion (by transitivity of $\sim$).<|endoftext|> TITLE: Finding out the limit $\lim_{a \to \infty} \frac{f(a)\ln a}{a}$ QUESTION [12 upvotes]: For any real number $a \geq 1$ let $f(a)$ denote the real solution of the equation $x(1+\ln x)=a$ then the question is to find out $$ \lim_{a \to \infty} \frac{f(a)\ln a}{a}$$. It is clear that if we denote $h(a)$ by $h(a)=a(1+\ln a)$ then $f(a)$ is the inverse function of $h(a)$. Also, $f(a)$ is an increasing function in its domain. The form of the limit also suggests using L'Hospital's rule, but I cannot see how to apply it here. Thanks. REPLY [4 votes]: The function $g(x)=x(1+\ln x)$ defined over $[1,\infty)$ has derivative $$ g'(x)=1+\ln x+1=2+\ln x>0 $$ and $\lim_{x\to\infty}g(x)=\infty$, so the function is increasing and therefore it has an inverse function defined over $[g(1),\infty)=[1,\infty)$. Its inverse is exactly the function $f$ you have to analyze the behavior of. Now you can use the substitution $a=g(x)$ so the limit becomes $$ \lim_{x\to\infty}\frac{x\ln(g(x))}{g(x)}= \lim_{x\to\infty}\frac{x\ln\bigl(x(1+\ln x)\bigr)}{x(1+\ln x)}= \lim_{x\to\infty}\frac{\ln x}{1+\ln x}+ \lim_{x\to\infty}\frac{\ln(1+\ln x)}{1+\ln x}=1 $$<|endoftext|> TITLE: $ \lim_{n \rightarrow \infty} n \left( \frac{1}{(n+1)^2} + \frac{1}{(n+2)^2} + \cdots + \frac{1}{(2n)^2} \right)$ as Riemann sum? QUESTION [6 upvotes]: I am trying to evaluate the limit $$ \lim_{n \rightarrow \infty} n \left( \frac{1}{(n+1)^2} + \frac{1}{(n+2)^2} + \cdots + \frac{1}{(2n)^2} \right).$$ I have been trying to convert it to the Riemann sum of some integral, but have been unable to recognize what the integral should be. How should I go about solving this problem? REPLY [3 votes]: The polygamma functions are great for solving this kind of problem. $$\sum_{k=1}^N \frac{1}{(x+k)^2}=\psi^{(1)}(x+1)-\psi^{(1)}(N+x+1)$$ $\psi^{(1)}(z)$ is the trigamma function, i.e. the polygamma$[1,z]$ function. With $x=n$ and $N=n$: $$n\sum_{k=1}^n \frac{1}{(n+k)^2}=n\left(\psi^{(1)}(n+1)-\psi^{(1)}(2n+1) \right)$$ The asymptotic expansion of the trigamma function is: $\psi^{(1)}(z+1)=\frac{1}{z}-\frac{1}{2z^2}+O\left(\frac{1}{z^3}\right)$ $$n\sum_{k=1}^n \frac{1}{(n+k)^2}=n\left(\frac{1}{n}-\frac{1}{2n^2}-\frac{1}{2n}+\frac{1}{8n^2}+O\left(\frac{1}{n^3}\right) \right) = \frac{1}{2}-\frac{3}{8n}+O\left(\frac{1}{n^2}\right)$$ $$\lim_{n \rightarrow \infty} n\sum_{k=1}^n \frac{1}{(n+k)^2}=\frac{1}{2}$$<|endoftext|> TITLE: Is $(11)$ a prime ideal of $\mathbb{Z}[\sqrt{-5}]$? QUESTION [6 upvotes]: Is $(11)$ a prime ideal of $\mathbb{Z}[\sqrt{-5}]$?
I know that $11$ is an irreducible element in $\mathbb{Z}[\sqrt{-5}]$. Now to determine whether it is prime we can say $\mathbb{Z}[\sqrt{-5}]$ is isomorphic to $\mathbb{Z}[x]/(x^2 + 5)$. So we get an isomorphism $$ \mathbb{Z}[\sqrt{-5}]/(11) \;\;\simeq\;\; \mathbb{Z}_{11}[x]/(x^2 + 5) \,.$$ Since $\mathbb{Z}_{11}$ is a field, $\mathbb{Z}_{11}[x]$ is a PID, and since $(x^2 + 5)$ is irreducible over $\mathbb{Z}_{11}[x]$, the ring $\mathbb{Z}_{11}[x]/(x^2 + 5)$ is a field. Hence $(11)$ is a maximal ideal, and hence also a prime ideal, in the ring $\mathbb{Z}[\sqrt{-5}]$. REPLY [3 votes]: As indicated in the comments, yes, you are correct. It might be wise to justify why $(x^2+5)$ is irreducible over $\mathbb{Z}_{11}[x]$ though. I'm posting this CW answer so that users who confidently concur have something to vote on, and so this question doesn't stagnate in the Unanswered Questions Queue. If, however, anyone would like to write a more substantial response to the question, please downvote this answer and post your response.<|endoftext|> TITLE: Irrationality of Euler's constant $\gamma=0.577216...$ QUESTION [5 upvotes]: Other constants such as $\pi$, $e$, $\phi$, $\zeta(3)$ etc, have been proven to be irrational. There are many series, infinite products and integrals that represent Euler's constant, and yet its irrationality is still an open problem. Why is it so hard to prove whether or not Euler's constant is irrational? REPLY [4 votes]: Other constants such as $\pi$, $e$, $\phi$, $\zeta(3)$ etc, have been proven to be irrational. Perhaps. But, in case you haven't noticed it by now, those proofs of irrationality are not one and the same. In other words, there is no catch-all method for proving that something is irrational in general. Various methods do exist for various situations $($such as the Gelfond-Schneider theorem, for instance$)$, but they do not cover all possible cases. Indeed, they don't even cover a majority of cases, but only some countable subset, whereas irrationals $($more specifically, transcendentals$)$ are uncountable. Hope this helps.<|endoftext|> TITLE: lattice of subgroup QUESTION [6 upvotes]: In the spirit of category theory, we use relations between objects to describe the objects themselves. In the same sense I want to use the lattice structure of subgroups to decide whether a subgroup is normal. We know that if $H \lhd G$ and $K \subset G$, then there is a lattice-preserving bijection from subgroups of $K$ containing $H \cap K$ to subgroups of $HK$ containing $H$. This gives us a categorical description of normality. My question is: Does the converse hold? That is, if for every $K \subset G$, this bijection also holds, must $H$ be normal in $G$? (Here we need a modification: $HK$ needs to be replaced by $\langle H, K \rangle$.) REPLY [3 votes]: Even after reading Alex Kruckman's nice answer, the OP might still wonder if the structure of a subgroup lattice could give some information about the location, or number, of normal subgroups. Generally speaking, it cannot. For example, there is an abelian group with the very same subgroup lattice as the one in Alex's example---namely, the group $\mathbb Z/3 \times \mathbb Z/3$. Of course, all subgroups are normal in this case (unlike in Alex's example). On the other hand, there are certainly special cases in which the subgroup lattice tells us which subgroups are normal, but usually that's a consequence of the lattice structure telling us much more.
For example, the group $A_4$ can be uniquely identified by its subgroup lattice. That is, no other group has the same subgroup lattice. And in that case, we know that the only nontrivial proper normal subgroup happens to be the top of the $M_3$ sublattice of the subgroup lattice of $A_4$ (but this was "a priori" information---we didn't derive it directly from the lattice structure per se). As another trivial example, when a subgroup lattice is a chain, every subgroup is normal. In this case, we can deduce this information from the shape of the subgroup lattice. (If any of the subgroups were non-normal, they would have conjugate subgroups at the same height in the lattice.) There are many more examples like these, and much more to say about what properties of a group can be inferred from the structure of its subgroup lattice. See Roland Schmidt's book "Subgroup Lattices of Groups."<|endoftext|> TITLE: Subgroup of finite index contains a normal subgroup of finite index QUESTION [8 upvotes]: Let $G$ be a group and $H\leq G$. Suppose $[G:H]$ is finite. Show that there exists a normal subgroup $N \subseteq H$ in $G$ which is also of finite index in $G$. My idea was to use $$N := \bigcap_{g \in G} gHg^{-1}$$ It is clear that $N$ is normal in $G$ and that $H$ contains $N$. Is it true that this specific $N$ is of finite index? How would I show it? REPLY [7 votes]: Thanks to the hint of user1952009 I managed to solve the exercise with proof. Let $$X := \{ xH : x \in G\}$$ and consider the group action $$\begin{cases} G \times X\to X\\ (g,xH) \mapsto gxH\end{cases}$$ Naturally we can associate with each group action a bijection $$\lambda_g:\begin{cases}X\to X\\ xH \mapsto gxH\end{cases}$$ for $g \in G$ and a homomorphism $$\lambda:\begin{cases}G\to S_X\\ g \mapsto \lambda_g\end{cases}$$ where $S_X$ denotes the symmetric group of $X$. Now we show $$\ker \lambda = \bigcap_{g \in G}gHg^{-1}$$ Let $y \in \bigcap_{g \in G}gHg^{-1}$. Thus for each $x \in G$ we can write $y = xh_x x^{-1}$ for some $h_x \in H$. Therefore for any $x \in G$ we have $$\lambda_y(xH) = yxH = xh_xx^{-1}xH = xh_xH = xH$$ Conversely, if $y \in \ker \lambda$ we have $\lambda_y = \operatorname{id}_X$. Hence $$yxH = xH$$ for all $xH \in X$. Therefore $$yxh_1 = xh_2 \quad \Leftrightarrow \quad y = xh_2h_1^{-1}x^{-1} \in xHx^{-1}$$ for all $x \in G$. Thus $y \in\bigcap_{g \in G}gHg^{-1}$. By the isomorphism theorem we have $$G/\ker \lambda = G/\bigcap_{g \in G}gHg^{-1} \cong \operatorname{Im} \lambda \leq S_X$$ And since $[G:H]$ is finite, $X$ is finite of cardinality $[G:H]$, so $S_X$ is a finite group of order $[G:H]!$. Thus we conclude that $[G:\bigcap_{g \in G}gHg^{-1}]$ is also finite.<|endoftext|> TITLE: Why is the "Buffalo Way" considered inelegant? QUESTION [7 upvotes]: I was going through an "article" on the "Buffalo Way", where the author said that one should NEVER use the Buffalo Way for proving inequalities in actual real-time contests as it is "highly inelegant". What is the reason behind this notion? In Mathematics, there are a whole lot of ways to attempt a given question. If the BW provides a proof for some inequality, then why is it given the downvote? REPLY [8 votes]: I don't think we can delete BW from our list of methods; with its help we can prove polynomial inequalities. This method is very useful. See here: http://www.artofproblemsolving.com/community/c6h522084 There are polynomial inequalities which we can prove in a nice way, but finding this way during a competition is very hard. By the way, BW can give a quick proof sometimes.
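To make the mechanics concrete, here is a minimal sympy sketch of BW on a toy inequality of my own choosing (not one from the links): assuming WLOG $a\le b\le c$, substitute $b=a+u$ and $c=a+u+v$ with $u,v\ge 0$, expand, and check that every coefficient is nonnegative:

```python
from sympy import symbols, expand, Poly

a, u, v = symbols('a u v', nonnegative=True)
b = a + u        # WLOG a <= b <= c, so b = a + u
c = a + u + v    # and c = b + v with u, v >= 0 (the BW substitution)

# Toy example: a^2 + b^2 + c^2 >= ab + bc + ca.
expr = expand(a**2 + b**2 + c**2 - a*b - b*c - c*a)
print(expr)  # u**2 + u*v + v**2

# All coefficients nonnegative => the inequality holds for a, u, v >= 0.
print(all(coeff >= 0 for coeff in Poly(expr, a, u, v).coeffs()))
```

Done by hand, this same expand-and-inspect computation is exactly what gets called inelegant: it trades insight for sheer expansion.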
There are polynomial inequalities for which the only known proof is by BW. See here: Olympiad Inequality $\sum\limits_{cyc} \frac{x^4}{8x^3+5y^3} \geqslant \frac{x+y+z}{13}$ By the way, there are polynomial inequalities for which BW does not help. See here: Inequality $\sum\limits_{cyc}\frac{a^3}{13a^2+5b^2}\geq\frac{a+b+c}{18}$<|endoftext|> TITLE: Let $S$ be the set of all circles on the unit sphere in $\mathbf{R}^3$. Give a smooth manifold structure to $S$. QUESTION [5 upvotes]: I am not sure how to do this. $S$ includes singletons too (radius $0$). I have tried mapping a circle to the pair consisting of its center point and its radius, but it isn't injective. REPLY [2 votes]: Here is my favorite way to see this. Consider $R^3$ as an affine chart in $RP^3$, the real projective space. Let $B\subset RP^3$ denote the closed unit ball, with the boundary sphere $S^2$. For any point $p\in RP^3 - B$ consider the conic which is the union of lines through $p$ tangent to $S^2$. The set of points of tangency is a circle. Conversely, every circle in $S^2$ appears this way. One way to see this is to find a projective transformation preserving $B$ and sending the given circle $C$ to a great circle $C'\subset S^2$. Then for $C'$ the relevant conic is the cylinder through $C'$ with the axis $L$ passing through the center of $B$ (plus the point $p$ in $RP^3 - R^3$ represented by the line $L$). Thus, with any reasonable topological structure on the set of circles of positive radius in $S^2$, it is homeomorphic to $RP^3 - B$, which is a manifold, being an open subset of $RP^3$. Now, if you allow circles of zero radius, you have to add to $RP^3 - B$ the boundary sphere $S^2$: If a sequence of points $p_n\in RP^3 -B$ converges to a point $p\in S^2$, the corresponding circles $C_n\subset S^2$ converge to $p$ as well. Hence, your space $S$ is homeomorphic to $RP^3 - int(B)$, which is a smooth manifold with boundary.
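To make the point-to-circle correspondence concrete in the affine chart: a line through $p$ (with $|p|>1$) is tangent to $S^2$ at $x$ exactly when $(x-p)\cdot x=0$, i.e. $\langle x,p\rangle=1$, so the tangency circle is the section of $S^2$ by that plane. A small numerical sketch (the function name is mine; great circles correspond to points at infinity, as described above):

```python
import numpy as np

def tangency_circle(p):
    # Center and radius of the circle where tangent lines from p touch S^2.
    # Requires |p| > 1; the circle is S^2 intersected with <x, p> = 1.
    p = np.asarray(p, dtype=float)
    n2 = p @ p                       # |p|^2, must exceed 1
    center = p / n2                  # closest point of that plane to 0
    radius = np.sqrt(1.0 - 1.0 / n2)
    return center, radius

c, r = tangency_circle([0.0, 0.0, 2.0])
print(c, r)   # [0, 0, 0.5] and sqrt(3)/2: the latitude circle z = 1/2
```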