TITLE: How do I visualize if three points represent a right or left turn? QUESTION [5 upvotes]: Consider three points: $P_1$, $P_2$ and $P_3$. We have to decide whether $P_1P_2P_3$ represents a "right turn" (i.e. a turn in clockwise order) or a "left turn" (i.e. a turn in counter-clockwise order). Here's a method to determine it. For three points $P_1 = (x_1, y_1)$, $P_2 = (x_2, y_2)$ and $P_3 = (x_3, y_3)$, compute the z-coordinate of the cross-product of the vectors $\overrightarrow{P_1P_2}$ and $\overrightarrow{P_1P_3}$, which is given by the expression $(x_2 - x_1)(y_3 - y_1) - (y_2 - y_1)(x_3 - x_1)$. If the result is $0$, the points $P_1$, $P_2$ and $P_3$ are collinear. If the result is positive, the three points constitute a "left turn" (or a counter-clockwise orientation); otherwise, the points represent a "right turn" (or a clockwise orientation). This reasoning assumes counter-clockwise numbered points. I am not interested in these calculations; rather, I would like to be able to identify a "right" or "left" turn by looking at an image containing three points $P_1$, $P_2$ and $P_3$. How do you usually interpret such pictures? REPLY [2 votes]: How about simply checking, as you approach a point following the line, whether the next point lies in the left half-plane or the right half-plane w.r.t. the line (of your motion)? Just as if you were driving a car, and your example were a road map.<|endoftext|> TITLE: Is it possible to have a spherical object with only hexagonal faces? QUESTION [45 upvotes]: If so, what would be the most efficient algorithm for generating spheres with different numbers of hexagonal faces at whatever interval is required to make them fit uniformly, or how might you calculate how many hexagonal faces are required for each subdivision? REPLY [10 votes]: If a compromise is acceptable, the $2p, 4p, 8p, \ldots$ subdivisions of each side of an icosahedron in Buckminster Fuller domes leave behind a pentagon at each of the 12 vertices. Otherwise, it is possible by means of a stereographic projection of a flat regular hexagonal net (a node of which touches the south pole, while the other nodes/junctions connect to the north pole); the curvilinear hexagonal boundary cells shrink to zero towards the north pole according to the standard stereographic scaling. They can be seen on the POV-Ray image provided by user PM 2Ring below. A part of a spiral has been traced connecting opposite vertices of some hexagons. It is a collection of log spirals centered at the south pole that isogonally (i.e., conformally) project to rhumb lines (loxodromes of constant inclination $\pm \pi/6$ to the meridians), with the corresponding latitude circles drawn. I could make an image later if you wish to see it. Since these loxodromes are not overtly seen on the above image, some segments are indicated across some hexagon diameters.<|endoftext|> TITLE: A question on the proof of Kronecker's Theorem regarding Cyclotomic Polynomials QUESTION [6 upvotes]: Theorem: ${f \in \Bbb{Z}[X]}$ is irreducible and monic, and all roots of ${f}$ have absolute value at most 1. Then $f$ is a cyclotomic polynomial. Here is the part of the proof that confuses me: It suffices to prove that every root of $f$ is a root of unity. If $f(X)=(X-\alpha_1)\cdots(X-\alpha_r)$, consider the family of polynomials $f_n(X) =(X-\alpha_1^n)\cdots(X-\alpha_r^n)$. The coefficients of ${f_n}$ are algebraic integers, since they are calculated using multiplications and additions starting from ${\alpha_1,...,\alpha_r}$.
On the other hand, the coefficients of ${f_n}$ are symmetric polynomials in ${(\alpha_i)}$, so they are rational, and therefore integers. I don't understand the claim "the coefficients of ${f_n}$ are symmetric polynomials in ${(\alpha_i)}$, so they are rational, and therefore integers." Why are they rational? And why, therefore, integers? REPLY [7 votes]: As it is written right now, the claim is false. Indeed, $f=X$ is an irreducible monic polynomial with integer coefficients whose roots lie in the unit disk. However, it is not a cyclotomic polynomial. The result will hold if you assume that $0$ is not a root of the given polynomial! Note that I am being picky; this is the only counterexample. In fact, one has: Theorem. An irreducible monic polynomial with integer coefficients whose roots all lie in the unit disk is either $X$ or a cyclotomic polynomial. Regarding your question, notice that for $\ell\in\{0,\cdots,r\}$, the coefficient of $X^{r-\ell}$ in $f_n$ is: $$(-1)^\ell\sigma_{\ell}({\alpha_1}^n,\ldots,{\alpha_r}^n),$$ where $\sigma_{\ell}$ is the $\ell$-th elementary symmetric polynomial. Furthermore, notice that $\sigma_{\ell}({X_1}^n,\ldots,{X_r}^n)$ is a symmetric polynomial and therefore there exists $S_{\ell}\in\mathbb{Z}[X_1,\cdots,X_r]$ such that: $$\sigma_{\ell}({X_1}^n,\ldots,{X_r}^n)=S_\ell(\sigma_1(X_1,\ldots,X_r),\ldots,\sigma_r(X_1,\ldots,X_r)).$$ In particular, one has: $$\sigma_{\ell}({\alpha_1}^n,\ldots,{\alpha_r}^n)=S_\ell(\sigma_1(\alpha_1,\ldots,\alpha_r),\ldots,\sigma_r(\alpha_1,\ldots,\alpha_r)).$$ However, for all $k\in\{0,\cdots,r\}$, $\sigma_k(\alpha_1,\cdots,\alpha_r)$ is itself an integer, as it is the coefficient of $X^{r-k}$ in $f\in\mathbb{Z}[X]$ multiplied by $(-1)^k$. Whence the result, since $\mathbb{Z}$ is closed under addition and multiplication. The result you are trying to prove is a consequence of: Theorem. (Kronecker) Let $f$ be a monic polynomial with integer coefficients whose complex roots are nonzero and lie in the unit disk; then the roots of $f$ are roots of unity. Remark. The key point is that if $A$ is a commutative ring with a unit, then $A[X_1,\cdots,X_n]^{S_n}$ is a finitely generated $A$-algebra. Moreover, the elementary symmetric polynomials are generators: $$\sigma_k=\sum_{\substack{I\subseteq\{1,\cdots,n\}\\|I|=k}}\prod_{i\in I}X_i$$<|endoftext|> TITLE: Prove or disprove $p_1p_2\cdots p_n+1$ is prime for all $ n\geq 1$ QUESTION [5 upvotes]: Let $p_1=2$, $p_2=3$, $p_3=5$ and, in general, let $p_i$ be the $i$-th prime. Prove or disprove that $$p_1p_2 \cdots p_n+1$$ is prime for all $ n\geq 1$ Well, I was able to find a counterexample when $n=6$, but I do not have a general way to show why it shouldn't be prime. REPLY [4 votes]: Maybe the best we can do is give an intuition based on the prime number theorem. If $p_n$ is the $n$th prime, then the prime counting function $\pi(p_n) = n$. Denote by $p_n\#$ the product of the first $n$ primes. Then, as you already know, $p_n\# + 1$ is not divisible by any of the first $n$ primes. This suggests that it is prime, contradicting the idea that the primes are finite. But if $p_n\# + 1$ is composite, as you already know, it also contradicts the idea that the primes are finite, since its prime factors are all new. If $p_n\# + 1$ is indeed composite, its least prime factor must be greater than $p_n$ but at most $\sqrt{p_n\# + 1}$.
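For concreteness, here is a quick check of the $n=6$ counterexample mentioned in the question (a minimal sketch; Python with sympy is assumed to be available):

```python
from sympy import prime, factorint

primorial = 1
for i in range(1, 7):
    primorial *= prime(i)   # p_6# = 2*3*5*7*11*13 = 30030

N = primorial + 1           # 30031
print(factorint(N))         # {59: 1, 509: 1}, so N is composite
# Its least prime factor 59 is greater than p_6 = 13 and at most
# sqrt(30031) ~ 173.3, consistent with the bounds just described.
```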
Since there are something like $$\frac{\sqrt{p_n\# + 1}}{\log \sqrt{p_n\# + 1}}$$ primes less than $\sqrt{p_n\# + 1}$, there are about $$\frac{\sqrt{p_n\# + 1}}{\log \sqrt{p_n\# + 1}} - n$$ potential least prime factors for $p_n\# + 1$. And since this count is much greater than $1$ for $n > 3$, it seems likelier than not that $p_n\# + 1$ does indeed have a nontrivial least prime factor.<|endoftext|> TITLE: Is the product of primes + 1 always eventually composite and, if so, how long does it take? QUESTION [23 upvotes]: I was thinking about Euclid's proof of the infinitude of primes the other day and started thinking about what would happen for different initial sets of primes than the usual first $N$ primes. Is there a finite subset of the primes which never actually hits a composite number when generating numbers as in Euclid's proof? Consider the following algorithm: Take a set $A_{k}=\left\{q_{1},\ldots,q_{k}\right\}$ of primes and let $n = q_{1}\cdots q_{k}+1$. If $n$ is prime, start over with $A_{k+1} = A_{k}\cup\left\{n\right\}$. If $n$ is composite, terminate. Written in a C-like pseudocode: function euclid(array primes[]) { n = 1; steps = 1; for(i = 0; i < primes.size(); i++) { n *= primes[i]; } n++; while(isPrime(n)) { steps++; primes.append(n); n = n*(n-1) + 1; } return steps; } My question is: Does this always terminate for any initial set of primes? If so, how can we express the number of steps in terms of the initial set of primes? Is the number of steps taken unbounded if we vary the initial set of primes? The only thing I've been able to determine on my own, so far, is that this obviously terminates after the first step whenever 2 isn't included, since the product of odd primes plus one is even. Here are some examples (all include 2): $\{2\}\to\{2,3\}\to\{2,3,7\}\to\{2,3,7,43\}\to\{2,3,7,43,1807\}\to$ Terminate: $13\mid1807$. 4 steps. $\{2,5\}\to\{2,5,11\}\to\{2,5,11,111\}\to$ Terminate: $3\mid111$. 2 steps. $\{2,7\}\to\{2,7,15\}\to$ Terminate: $3\mid15$. 1 step. $\{2,3,5\}\to\{2,3,5,31\}\to\{2,3,5,31,931\}\to$ Terminate: $7\mid931$. 2 steps. Since $q_{\ell+1} = q_{\ell}\left(q_{\ell}-1\right)+1$ for all $\ell>k$, it seems that looking at the polynomial $x^{2}-x+1$ may provide some insight. If anyone can link to references for this or similar problems, I'd really appreciate it. REPLY [12 votes]: I found some chains of length 5. I am using the probable-prime command from GMP; my impression is that it is guaranteed correct up to much larger numbers than these. However, if this is more than curiosity, checking would be a good idea. I read the documentation and bumped up the number of repetitions (50 rounds of Miller-Rabin) to the maximum recommended.
Slowed it down, but got the same half dozen up to $10^8.$ Mon Jan 30 19:25:50 PST 2017 13115173 172007749704757 29586665958494159812918724293 875370802539517140393632331212098638022167320345629625557 766274041938678308049319803711051212986259395320990707203549752995233768068251390160273049235842065222326397934693 23272621 541614864937021 293346661920746958223431417421 86052264060045013489275157603303175774028761968533710873821 7404992149859714708930737706369339944615210649341690766424129198306152332487064738593882427023567001814322241672266221 43796593 1918141514611057 3679266870074397876472472046193 13537004701227056184212572777137962994585421608693327853747057 183250496281043420667204993845631634549777423549644826314801936966317119658847689648539960945358606795775203767987482414193 64693357 4185230375236093 17516153293798843629675114668557 306815626211860078798689815788249593808206393118777152849793693 94135828487775841446943380610380023116334939583097326111040294547737602950518680763144469059375179058255558887576839812784557 72387043 5239883921896807 27456383514952658161000834898443 753852995720164083849269319663386535220519708325540409288925807 568294339156265728521041918046170074189415178100247125572427897938945308853001216856867572395396326046090335520807838661675443 74448109 5542520859227773 30719537474974965545765035311757 943689982676391281914926286332303062191089120887819365147115293 890550783403767677768012591611033133058156175355170445099083019318797635091721580565901937527616358694962036159921434287360557 Mon Jan 30 19:39:19 PST 2017 Here are some chains of length 4, where each $p$ is succeeded by $p^2 - p + 1$ 55441 3073649041 9447318424166570641 89251825407597135537814006922276580241 202987 41203519183 1697729993022645468307 2882287129208671830499464750943820695977943 275059 75657178423 5724008646853999588507 32764274989259355373312611751567271326900543 287491 82650787591 6831152689329948795691 46664647064939791926935807581286611311371791 381991 145916742091 21291695622305494310191 453336302472902950617751284445540769432146291 393583 154907184307 23996235749767959885943 575819330158441883939292524503281817609113307 520717 271145673373 73519976188626455523757 5405186898776201001723230343143689030735871293 703123 494381250007 244412820357989456250043 59737626755346825192835475875901093166281251807 761377 579694174753 336045336241982008436257 112926468009986746720388592656212511936423733793 916189 839401367533 704594655815431145138557 496453629003665878435240241092992595903582903693 996367 992746202323 985545022225746104394007 971298990833946382893772169070355680806593122043 Here are the primes $p < 10000$ such that $q = p^2 - p + 1$ is also prime. Note that, for $p > 3,$ it is necessary to have $p \equiv 1 \pmod 3,$ as $2^2 - 2 + 1 = 3 \equiv 0 \pmod 3.$ At the same time, be aware that there is no proof that there are infinitely many prime values of $x^2 - x + 1$ for integer $x.$ There are infinitely many primes $r = x^2 - xy+ y^2,$ indeed all primes $r \equiv 1 \pmod 3;$ that does not seem to help you, though. I see what kinds of restrictions do show up... 
We already said that $p$ is the first term of your prime sequence for which the next entry is $p^2 - p + 1.$ If $p > 3,$ we must have $p \not\equiv 2 \pmod 3.$ If $p > 7,$ we must have $p \not\equiv 3,5 \pmod 7.$ This is where $31$ failed, as $31 \equiv 3 \pmod 7.$ If $p > 13,$ we must have $p \not\equiv 4,10 \pmod {13}.$ This is where $43$ failed, as $43 \equiv 4 \pmod {13}.$ In general, given a prime $s \equiv 1 \pmod 3,$ there are two square roots of $-3 \pmod s,$ and the two forbidden residues are the roots of $x^2-x+1 \pmod s$: $$ \mbox{If} \; \; p > s, \; \; \color{red}{ p \not\equiv \frac{1 \pm \sqrt{-3}}{2} \pmod s}.$$ Here are the pairs $p, \; p^2-p+1$ for the primes $p < 10000$ mentioned above: 3 7 7 43 13 157 67 4423 79 6163 139 19183 151 22651 163 26407 193 37057 337 113233 349 121453 379 143263 457 208393 541 292141 613 375157 643 412807 727 527803 769 590593 919 843643 991 981091 1021 1041421 1093 1193557 1117 1246573 1201 1441201 1231 1514131 1381 1905781 1423 2023507 1549 2397853 1567 2453923 1597 2548813 1621 2626021 1693 2864557 1747 3050263 1789 3198733 1801 3241801 1933 3734557 1987 3946183 2011 4042111 2017 4066273 2113 4462657 2137 4564633 2143 4590307 2239 5010883 2281 5200681 2557 6535693 2647 7003963 2659 7067623 2683 7195807 2689 7228033 2731 7455631 3049 9293353 3271 10696171 3331 11092231 3511 12323611 3541 12535141 3607 13006843 3733 13931557 3847 14795563 3889 15120433 3919 15354643 4003 16020007 4057 16455193 4111 16896211 4159 17293123 4327 18718603 4447 19771363 4507 20308543 4561 20798161 4813 23160157 5011 25105111 5179 26816863 5209 27128473 5527 30542203 5641 31815241 5749 33045253 5779 33391063 5839 34088083 6007 36078043 6043 36511807 6091 37094191 6217 38644873 6379 40685263 6397 40915213 6421 41222821 6427 41299903 6451 41608951 6553 42935257 6577 43250353 6637 44043133 6703 44923507 6883 47368807 7027 49371703 7393 54649057 7573 57342757 7753 60101257 7933 62924557 7951 63210451 8017 64264273 8089 65423833 8269 68368093 8563 73316407 8647 74761963 8689 75490033 8719 76012243 8731 76221631 8761 76746361 8803 77484007 8887 78969883 8929 79718113 9001 81009001 9043 81766807 9127 83293003 9157 83841493 9181 84281581 9199 84612403 9241 85386841 9319 86834443 9463 89538907 9601 92169601 9613 92400157 9661 93325261 9769 95423593 9781 95658181 9829 96599413 9883 97663807 9967 99331123
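Here is a minimal sketch of the kind of search that produces these lists (an illustration only, not the GMP-based program described above; it assumes sympy is available, with `sympy.isprime` standing in for the probable-prime test):

```python
from sympy import isprime

def chain_length(p):
    """Number of consecutive primes in the chain p, p^2 - p + 1, ..."""
    length = 0
    while isprime(p):
        length += 1
        p = p * p - p + 1
    return length

# starting primes below 10^5 whose chain has at least three prime terms
print([p for p in range(3, 100000, 2) if chain_length(p) >= 3])
```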
================================ These are the primes $p < 100,000$ for which a chain of at least three primes shows up. 3 7 43 379 143263 20524143907 1789 3198733 10231889606557 2143 4590307 21070913763943 3889 15120433 228627478987057 6553 42935257 1843436250720793 8929 79718113 6354977460562657 9661 93325261 8709604247392861 11467 131480623 17287154092987507 12853 165186757 27286664522990293 15439 238347283 56809427075134807 17497 306127513 93714053909437657 19531 381440431 145496802020025331 25999 675922003 456870553463610007 27409 751225873 564340311513386257 31123 968610007 938205344691930043 32869 1080338293 1167130826241815557 33601 1128993601 1274626549969953601 45697 2088170113 4360454418738262657 49627 2462789503 6065332133624197507 52837 2791695733 7793565062858711557 54541 2974666141 8848638647437165741 55441 3073649041 9447318424166570641 56431 3184401331 10140411833690170231 58657 3440584993 11837625090616225057 60943 3713988307 13793709140818737943 62773 3940386757 15526647790800590293 62983 3966795307 15735465003670428943 63361 4014552961 16116635472659314561 64849 4205327953 17684783188077842257 65167 4246672723 18034229212025562007 67033 4493356057 20190248650485231193 70393 4955104057 24553056210742755193 71947 5176298863 26794069913918793907 83227 6926650303 47978484413123341507 87049 7577441353 57417617450577029257 87511 7658087611 58646305850093599711 89611 8030041711 64481569872369765811 95803 9178119007 84237868497476547043 97213 9450270157 89307606030834534493<|endoftext|> TITLE: What is the difference between the Taylor and Maclaurin series? QUESTION [26 upvotes]: What is the difference between the Taylor and the Maclaurin series? Is the series representing sine the same both ways? Can someone describe an example for both? REPLY [37 votes]: A Taylor series centered at $x=x_0$ is given as follows: $$f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n$$ while a Maclaurin series is the special case of being centered at $x=0$: $$f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}x^n$$ You may find this very similar to a power series, which is of the form $$f(x)=\sum_{n=0}^\infty a_n(x-x_0)^n$$ particularly where $a_n=\frac{f^{(n)}(x_0)}{n!}$. If a function is equal to its Taylor series locally, it is said to be an analytic function, and it has a lot of interesting properties. However, not all functions are equal to their Taylor series, even if a Taylor series exists. One may note that most of the most famous Taylor series are Maclaurin series, probably since they look nicer. For example, $$\sin(x)=\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)!}$$ or $$\sin(x)=\sum_{n=0}^\infty\frac{(-1)^n(x-2\pi)^{2n+1}}{(2n+1)!},$$ which is trivially due to the fact that $\sin$ is a periodic function. So, if you had to choose, you'd probably choose the first representation. Just a convention. The geometric series is a rather beautiful, well-known Maclaurin series, which one may derive algebraically without taking derivatives: $$\frac1{1-x}=\sum_{n=0}^\infty x^n=1+x+x^2+x^3+\dots$$ However, it gets a little bit more involved when you try to take the Taylor series at a different point.
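As a quick illustration of the two expansions of $\sin$ given above, here is a minimal sketch (assuming sympy is available):

```python
from sympy import sin, series, symbols, pi

x = symbols('x')

# Maclaurin series: the Taylor series centered at 0
print(series(sin(x), x, 0, 8))     # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)

# Taylor series centered at 2*pi: the same coefficients, now in powers
# of (x - 2*pi), because sin is 2*pi-periodic
print(series(sin(x), x, 2*pi, 8))
```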
<|endoftext|> TITLE: Why does any nonzero number to the zeroth power = 1? QUESTION [8 upvotes]: I can't properly wrap my head around this odd concept, though I feel I'm almost there. A non-zero base raised to the power of 0 always results in 1. After some thinking, I figured this is the proof: $\frac{{x}^{2}}{{x}^{2}}= x^{2-2}=x^{0}=1$ Assuming that's true, would it be correct to assume that anything raised to 0 is a "whole" (1)? Because if $\frac{{x}^{2}}{{x}^{2}}=1$, then no matter what $x$ is, it will always result in $1$. I would like to understand this concept intuitively and deeply, rather than just memorizing that $x^{0}=1$. EDIT: Thank you all for the answers. Each and every one of them has been insightful, and I've now gained a deeper understanding. This is a new account, so it seems I can't upvote, but if I could I would upvote each and every one of you. Thanks :) REPLY [4 votes]: Let me start off by saying that I am not a mathematician, and that I will be using some pseudo-mathematical terms in the interest of writing something more akin to simplified English rather than accurate mathematical jargon. The question as stated is: Why does any non-zero number to the zeroth power equal one? To answer this question let's first talk about what is meant by "zeroth" power. "Zeroth power" refers to exponentiation. To understand why the zeroth power works the way it does, it's important that we properly define exponentiation. Exponentiation is the act of raising a number to the power of another number. That's actually not too helpful, because now you need to know what "raising to the power" means. ... But first, let's talk about multiplication. Multiplication is the act of adding a number ($a$) some other number ($b$) of times ($a \times b$). $$2 + 2 + 2 = 2 \times 3$$ This is all well and good, but when we talk about multiplying by $0$ we need to know what number to put on the left hand side: $$? = 2 \times 0$$ The base for addition is $0$. It's the additive identity. Every addition equation may be implicitly started with $0$. This means that above, two times three is actually: $$0 + 2 + 2 + 2 = 2 \times 3$$ In this form certain behaviors become quite clear: $$0 + 2 + 2 + 2 = 2 \times 3$$ $$0 + 2 + 2 = 2 \times 2$$ $$0 + 2 = 2 \times 1$$ $$0 = 2 \times 0$$ Negatives also make sense, because instead of adding numbers, you do the opposite, you un-add (often called "subtraction"): $$0 = 2 \times 0$$ $$0 - 2 = 2 \times -1$$ $$0 - 2 - 2 = 2 \times -2$$ ...Ok, with all that in mind, now it's time to look at exponentiation. Exponentiation is the act of multiplying a number ($a$) some other number ($b$) of times ($a ^ b$). $$2 \times 2 \times 2 = 2 ^ 3$$ This is all well and good, but when we talk about "raising to the power of 0" we need to know what number to put on the left hand side: $$? = 2 ^ 0$$ The base for multiplication is $1$. It's the multiplicative identity. Every multiplication equation may be implicitly started with $1$.
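The "start from the identity" idea translates directly into code; here is a minimal sketch (Python, integer exponents only; the helper name `power` is ours):

```python
from fractions import Fraction

def power(base, exponent):
    """Repeated (un-)multiplication starting from the multiplicative identity 1."""
    result = Fraction(1)              # every multiplication implicitly starts at 1
    for _ in range(abs(exponent)):
        if exponent > 0:
            result *= base            # multiply, exponent times
        else:
            result /= base            # un-multiply (divide), |exponent| times
    return result

print(power(2, 3))    # 8
print(power(2, 0))    # 1: the loop never runs, leaving the identity
print(power(2, -2))   # 1/4
```

Note that the zero-exponent case needs no special handling: with no multiplications performed, the identity $1$ is all that remains.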
This means that, above, two to the power of three is actually: $$1 \times 2 \times 2 \times 2 = 2 ^ 3$$ In this form certain behaviors become quite clear: $$1 \times 2 \times 2 \times 2 = 2 ^ 3$$ $$1 \times 2 \times 2 = 2 ^ 2$$ $$1 \times 2 = 2 ^ 1$$ $$1 = 2 ^ 0$$ Likewise, negatives also make sense, because instead of multiplying numbers, you do the opposite, you un-multiply (often called "division"): $$1 = 2 ^ 0$$ $$1 \div 2 = 2^{-1}$$ $$1 \div 2 \div 2 = 2^{-2}$$ Note that these patterns hold regardless of the base: $$n \times n \times n \times 1 = n^3$$ $$n \times n \times 1 = n^2$$ $$n \times 1 = n^1$$ $$1 = n^0$$ $$1 \div n = n^{-1}$$ $$1 \div n \div n = n^{-2}$$ $$1 \div n \div n \div n = n^{-3}$$<|endoftext|> TITLE: Generalization to higher dimension of $e^r \not \in \mathbb Q$ QUESTION [6 upvotes]: The following is well known and not difficult to prove: $$\forall r \in \mathbb Q^*, e^r \not \in \mathbb Q.$$ See for instance https://proofwiki.org/wiki/Exponential_of_Rational_Number_is_Irrational Could this be generalized to the following result, for $n\geq 2$: $$\forall M \in \mathrm{GL}(n,\mathbb Q), \exp(M) \in \mathrm{GL}(n,\mathbb R)\setminus \mathrm{GL}(n,\mathbb Q)?$$ If yes, do you have any proof or reference? REPLY [2 votes]: Actually, a stronger result is true: If $A$ is a non-singular matrix with algebraic (over $\mathbb{Q}$) entries, then $\exp A$ has at least one transcendental entry. To prove this, you can use the Jordan canonical form and assume that $A$ is upper triangular. Since $A$ is non-singular, we have $A_{ii}\neq 0$ for some $i$, and so the $(i,i)$ entry of $\exp A$, which is $e^{A_{ii}}$, is transcendental by Lindemann--Weierstrass. (The above argument turned out to be contained essentially in the earlier answer by Pierre-Guy.)<|endoftext|> TITLE: Evaluate$\sum_{n=0}^\infty\binom{3n}{n}x^n$ QUESTION [9 upvotes]: The question is: Evaluate $$\sum_{n=0}^\infty\binom{3n}{n}x^n$$ After applying a few numbers as $x$ in Wolfram Alpha, I guess that the answer is probably: $$2\sqrt{\frac1{4-27x}}\cos\left( \frac13\sin^{-1}\frac{3\sqrt{3x}}{2} \right)$$ which I can never prove. (Interestingly, the above becomes simply $2\cos\frac{\pi}9$ when $x=\frac19$.) (*) To give you the background, the motivation that led me to this question is the Algebra problem #$10$ in the Harvard-MIT Math Test in Feb. 2008, that concludes to: $$\sum_{n=0}^\infty\binom{2n}{n}x^n=\frac1{\sqrt{1-4x}}$$ And then I thought about what would happen if it were $3n$ instead of $2n$. REPLY [7 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Let $\ds{\,\mrm{f}\pars{x} = \sum_{n = 0}^{\infty}{3n \choose n}x^{n}. \qquad\mrm{f}\pars{0} = 1}$. Then, \begin{align} \mrm{f}'\pars{x} & = \sum_{n = 1}^{\infty}{3n \choose n}nx^{n - 1} = \sum_{n = 0}^{\infty}{3n + 3 \choose n + 1}\pars{n + 1}x^{n} = \sum_{n = 0}^{\infty}{\pars{3n + 3}!
\over n!\pars{2n + 2}!}\,x^{n} \\[5mm] & = \sum_{n = 0}^{\infty}{\pars{3n + 3}\pars{3n + 2}\pars{3n + 1}\pars{3n}! \over n!\pars{2n + 2}\pars{2n + 1}\pars{2n}!}\,x^{n} = {3 \over 2}\sum_{n = 0}^{\infty}{\pars{3n + 2}\pars{3n + 1} \over 2n + 1} {3n \choose n}\,x^{n} \\[5mm] & = {27 \over 8}\ \underbrace{\sum_{n = 0}^{\infty}{3n \choose n}\,x^{n}}_{\ds{\mrm{f}\pars{x}}}\ +\ {27 \over 4}\ \underbrace{\sum_{n = 0}^{\infty}{3n \choose n}\,n\,x^{n}} _{\ds{x\,\mrm{f}'\pars{x}}}\ -\ {3 \over 8}\sum_{n = 0}^{\infty}{1 \over 2n + 1}{3n \choose n}\,x^{n} \label{1}\tag{1} \end{align} where $\ds{\mrm{f}'\pars{0} = 3}$. Note that \begin{align} \sum_{n = 0}^{\infty}{1 \over 2n + 1}{3n \choose n}\,x^{n} & = \sum_{n = 0}^{\infty}{3n \choose n}\,x^{n}\int_{0}^{1}t^{2n}\,\dd t = \int_{0}^{1}\sum_{n = 0}^{\infty}{3n \choose n}\,\pars{xt^{2}}^{n}\,\dd t = \int_{0}^{1}\mrm{f}\pars{xt^{2}}\,\dd t \\[5mm] & = {1 \over 2}\,x^{-1/2}\int_{0}^{x}{\mrm{f}\pars{t} \over t^{1/2}}\,\dd t \end{align} Expression \eqref{1} is reduced to \begin{align} \pars{16x^{1/2} - 108x^{3/2}}\,\mrm{f}'\pars{x} & = 54x^{1/2}\,\mrm{f}\pars{x} -3\int_{0}^{x}{\mrm{f}\pars{t} \over t^{1/2}}\,\dd t \end{align} Moreover, \begin{align} &\pars{8x^{-1/2} - 162x^{1/2}}\,\mrm{f}'\pars{x} + \pars{16x^{1/2} - 108x^{3/2}}\,\mrm{f}''\pars{x} \\[5mm] = &\ 27x^{-1/2}\,\mrm{f}\pars{x} + 54x^{1/2}\,\mrm{f}'\pars{x} - 3\,\mrm{f}\pars{x}x^{-1/2} \end{align} and \begin{equation}\bbx{\ds{ \pars{16x - 108x^{2}}\,\mrm{f}''\pars{x} + \pars{8 - 216x}\,\mrm{f}'\pars{x} - 24\,\mrm{f}\pars{x} = 0\,,\qquad \left\{\begin{array}{rcl} \ds{\mrm{f}\pars{0}} & \ds{=} & \ds{1} \\[2mm] \ds{\mrm{f'}\pars{0}} & \ds{=} & \ds{3} \end{array}\right.}}\label{2}\tag{2} \end{equation} The solution of differential equation \eqref{2} is given by: $$\bbox[#ffe,25px,border:1px dotted navy]{\ds{% {\root{3} \over \root{4 - 27x}}\bracks{% \cos\pars{{1 \over 3}\,\mrm{arccsc}\pars{2 \over \root{4 - 27x}}} + \sin\pars{{1 \over 3}\,\mrm{arccsc}\pars{2 \over \root{4 - 27x}}}}}} $$ An equivalent expression is $$\bbox[#ffe,25px,border:1px dotted navy]{\ds{% {\root{6} \over \root{4 - 27x}} \cos\pars{{1 \over 3}\,\mrm{arccsc}\pars{% 2 \over \root{4 - 27x}} - {\pi \over 4}}}} $$<|endoftext|> TITLE: Proving Ramanujan's Integral Formula QUESTION [16 upvotes]: In a letter to Hardy, Ramanujan described a simple identity valid for $0<|endoftext|> TITLE: Can $n+3\; \text{and}\; n^2+3$ both be perfect cubes at the same time? QUESTION [7 upvotes]: Can $n+3\; \text{and}\; n^2+3$ both be perfect cubes at the same time? Here $n$ is an integer, not necessarily positive. I tried writing $x^3 = n+3$ and expressing $n^2+3$ in terms of $x$. I found $x^6 -6x^3+12$, but this doesn't help. How do I prove this? REPLY [12 votes]: Slightly overkill, but if $n + 3$ and $n^2 + 3$ are both cubes, then so is their product, and so $$ (n + 3)(n^2 + 3) = n^3 + 3n^2 + 3n + 9 = (n + 1)^3 + 2^3 $$ would be a cube. But as is well known (this is the exponent-$3$ case of Fermat's Last Theorem), the only possible solutions to this occur when one of the cubes is $0$, and so we have that either $n = -3$ or $n = -1$, and we can verify that neither of these yields a solution.<|endoftext|> TITLE: Differentiation of a double summation QUESTION [6 upvotes]: How does one reach from (45) to (46)? REPLY [9 votes]: We split the double sum with respect to the occurrence of the variable $x_k$ and consider each sum separately.
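Before the index bookkeeping, here is a quick numerical sanity check of the identity derived below, $\frac{\partial}{\partial x_k}\sum_{i,j}a_{ij}x_ix_j=\sum_i a_{ik}x_i+\sum_j a_{kj}x_j$ (a sketch; numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n))        # general, not necessarily symmetric
x = rng.standard_normal(n)

alpha = lambda v: v @ A @ v            # alpha = sum_{i,j} a_ij v_i v_j

claimed = A[:, k] @ x + A[k, :] @ x    # sum_i a_ik x_i + sum_j a_kj x_j

e = np.zeros(n)
e[k] = 1e-6
numeric = (alpha(x + e) - alpha(x - e)) / 2e-6   # central difference in x_k

print(np.isclose(claimed, numeric))    # True
```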
\begin{align*} \frac{\partial \alpha}{\partial x_k} &=\frac{\partial }{\partial x_k}\left(\sum_{j=1}^n\sum_{i=1}^n a_{ij} x_i x_j\right)\\ &=\frac{\partial }{\partial x_k}\left(\sum_{{j=1}\atop{j\ne k}}^n\sum_{{i=1}\atop{i \ne k}}^n a_{ij} x_i x_j+x_k\sum_{{i=1}\atop{i \ne k}}^n a_{ik} x_i+x_k\sum_{{j=1}\atop{j \ne k}}^n a_{kj} x_j+a_{kk}x_k^2\right)\tag{1}\\ &=0+\sum_{{i=1}\atop{i \ne k}}^n a_{ik} x_i+\sum_{{j=1}\atop{j \ne k}}^n a_{kj} x_j+2a_{kk}x_k\tag{2}\\ &=\sum_{i=1}^n a_{ik} x_i+\sum_{j=1}^n a_{kj} x_j\\ \end{align*} Comment: In (1) we split the double sum according to the occurrence of $x_k$. We consider an index $k$ with $1\leq k\leq n$ arbitrary, fixed. The general term \begin{align*} a_{i,j}x_ix_j\qquad\qquad 1\leq i,j\leq n \end{align*} contains No $x_k$: This is the case if neither $i$ nor $j$ is equal to $k$. We select all terms having no $x_k$ as \begin{align*} \sum_{{j=1}\atop{j\ne k}}^n\sum_{{i=1}\atop{i \ne k}}^n a_{ij} x_i x_j \end{align*} Precisely one $x_k$: This is the case if $j=k$ and $i\ne k$, or on the other hand if $i=k$ and $j\ne k$. We select the terms having precisely one $x_k$ as \begin{align*} x_k\sum_{{i=1}\atop{i \ne k}}^n a_{ik} x_i+x_k\sum_{{j=1}\atop{j \ne k}}^n a_{kj} x_j \end{align*} The product $x_k^2$: This is the case if both indices $i=k$ and $j=k$. There is only one term in the double sum to select, namely \begin{align*} a_{kk}x_k^2 \end{align*} In (2) we differentiate the terms according to the occurrence of the variable $x_k$.<|endoftext|> TITLE: not well founded $\omega$-models of ZFC QUESTION [7 upvotes]: I would like to know how to obtain non-well-founded $\omega$-models of ZFC. Are there any books about this? Other references to the literature are also welcome. REPLY [9 votes]: Once you know that the consistency of "There is a transitive model of ZFC" implies the consistency of "There is an $\omega$-model of ZFC", assuming the existence of the former will provide you with the existence of the latter. Of course, it is consistent that there are no $\omega$-models, even if ZFC is consistent. So just obtaining them out of a model of ZFC is impossible. But here is a nice way to get what you want. Start with a transitive model of ZFC, $M$. Fix some regular $\kappa>\omega$, and now add a generic ultrafilter and consider the generic ultrapower of $M$. It will have critical point $\kappa$, so it remains an $\omega$-model, but it will be ill-founded, as long as you didn't use a precipitous ideal for your forcing.<|endoftext|> TITLE: How to evaluate $\sum\limits_{k=0}^{n}\arctan f(k)$ where $f(k)$ is a rational fraction QUESTION [6 upvotes]: Find the closed form of the sum $$\sum_{k=0}^{n}\arctan{\dfrac{k^4+6k^3+10k^2-k-9}{(k+1)(k+2)(k+3)(k^3+7k^2+15k+8)}}$$ For problems involving sums, the idea is to use trigonometric identities to write the sum in the form $$\sum_{k=1}^{n}[g(k)-g(k-1)]$$ and I initially considered pairing every two terms up to use the $\arctan x+\arctan y$ trick, but it doesn't work because each $\arctan$ term has a different coefficient.
REPLY [4 votes]: By setting $f(k)=3+(7+2i)k+(5+i)k^2+k^3$ we are dealing with: $$ \sum_{k=1}^{n}\text{arg}\left(f(k)\;\overline{f(k+1)}\right) =\text{arg }f(1)-\text{arg }f(n+1).\tag{1}$$ In order to notice that, I factored $(k+1)(k+2)(k+3)(k^3+7k^2+15k+8)+i(k^4+6k^3+10k^2-k-9)$, getting: $$\left(3+(7+2 i) k+(5+i) k^2+k^3\right) \left((16-3 i)+(20-4 i) k+(8-i) k^2+k^3\right)\tag{2}$$ and checked that $(2)$ has the $f(k)\,\overline{f(k+1)}$ structure.<|endoftext|> TITLE: Residue Proof of Fourier's Theorem Dirichlet Conditions QUESTION [18 upvotes]: Whittaker gives two proofs of Fourier's theorem, assuming Dirichlet's conditions. One proof is Dirichlet's proof, which involves directly summing the partial sums and is found in many books. The other proof is an absolutely stunning proof of Fourier's theorem in terms of residues, treating the partial sums as the residues of a meromorphic function and showing that, on taking the limit, we end up with Dirichlet's conditions. My question is about understanding the latter half of the residue proof, given here. The gist of the proof is to consider a trigonometric series with real coefficients, assume the coefficients are Fourier coefficients of a function $f$, and then simplify the partial sum \begin{align} S_k(f) &= a_0 + \sum_{m=1}^k (a_m \cos(mz) + b_m \sin(mz)) \\ &= \frac{1}{2 \pi} \int_0^{2 \pi} f(t)dt + \frac{1}{\pi} \sum_{m=1}^k \int_0^{2 \pi} f(t)\cos[m(z-t)] dt \\ &= \sum_{m=-k}^k \frac{1}{2\pi} \int_0^{2 \pi} f(t)e^{im(z-t)} dt \\ &= \sum_{m=-k}^k \frac{1}{2\pi} \int_0^z f(t)e^{im(z-t)} dt + \sum_{m=-k}^k \frac{1}{2\pi} \int_z^{2 \pi} f(t)e^{im(z-t)} dt \\ &= U_k + V_k. \end{align} Next we try to turn $U_k$ into the sum of the residues of a meromorphic function derived from this, so we try to modify it: \begin{align} U_k(z) &= \sum_{m=-k}^k \frac{1}{2\pi} \int_0^z f(t)e^{im(z-t)} dt \\ &= \sum_{m=-k}^k \frac{w}{2\pi w} \int_0^z f(t)e^{w(z-t)} dt |_{w = im, m \neq 0} \\ &= \sum_{m=-k}^k \frac{w}{1 + 2\pi w - 1} \int_0^z f(t)e^{w(z-t)} dt |_{w = im, m \neq 0} \\ &\to \frac{1}{1 + 2\pi w + \dots - 1} \int_0^z f(t)e^{w(z-t)} dt \\ &= \frac{1}{e^{2 \pi w} - 1} \int_0^z f(t)e^{w(z-t)} dt \end{align} to find $$\phi(w) = \frac{1}{e^{2 \pi w} - 1} \int_0^z f(t)e^{w(z-t)} dt$$ so that, if $C_k$ is a circle in the $w$ plane containing $0,i,-i,2i,-2i,\dots,ki,-ki$ and no more poles, say of radius $k+1/2$, we see $$ \frac{1}{2 \pi i} \int_{C_k} \phi(w) dw = U_k.$$ From this we integrate over the boundary explicitly via $w = (k + 1/2)e^{i\theta}$ so that $U_k$ reduces to $$U_k = \frac{1}{2 \pi} \int_0^{2 \pi} w \phi(w) d \theta$$ and from here on we are supposed to end up with Dirichlet's conditions. Can anybody explain the rest of the proof? Since this aspect of the proof seems to be the crux of other flawed proofs, I need to make sure I get the rest of it with no hand-waving; as it stands, it seems unmotivated. REPLY [7 votes]: The motivation is that you're trading the sum of all finite residues for a single "residue at infinity." For example, if you had only a finite number of poles in the plane for a holomorphic function $F$, and a contour $\Gamma$ enclosing those poles in its interior, then the sum of the residues would be $$ \frac{1}{2\pi i}\oint_{\Gamma}F(\lambda)d\lambda = -\frac{1}{2\pi i}\oint_{1/\Gamma}F(1/\mu)\frac{1}{\mu^2}d\mu.
$$ The negative would cancel the new negative orientation of $1/\Gamma$, and the end result of the integration, as you let $\Gamma$ expand without bound in, say, a circle, would be $$ \lim_{\mu\rightarrow 0}F(1/\mu)(1/\mu)= \lim_{\lambda\rightarrow \infty}\lambda F(\lambda). $$ There are lots of issues in how that limit is achieved, but that's the basic idea. This idea works very nicely for matrices and operators, too. Fredholm was the pioneer in this type of analysis, and his work fueled the earliest forms of Spectral Theory through the use of the resolvent operator $R(\lambda)=(L-\lambda I)^{-1}$. (Resolvent was a term coined by Fredholm, who also was the first to define a linear operator.) For example, if you have an $N\times N$ selfadjoint matrix $L$, then you can show that $(\lambda I-L)^{-1}$ has simple poles at the eigenvalues, and the residue at such a pole is the projection onto the eigenspace associated with that eigenvalue. Then completeness of the eigenvectors is a consequence of the fact that the sum of these residues is $$ \lim_{\lambda\rightarrow\infty}\lambda (\lambda I-L)^{-1} = \lim_{\lambda\rightarrow\infty}\frac{\lambda}{\lambda I-L}=I. $$ And you really can make this rigorous, because you can show the above limit is $I$ for any $N\times N$ matrix. Furthermore, regardless of the matrix, you can show that the residue at an eigenvalue is a projection onto the space spanned by the Jordan blocks associated with the eigenvalue. So you have completeness due to an unusual conservation law associated with holomorphic functions. By looking at the Riemann sphere, you see that you can trade the finite residues for the residue at $\infty$. For normal and selfadjoint matrices, all poles are first order and the residues are the projections onto the corresponding eigenspaces. And the Complex Analysis trick then shows that the sum of all of the projections onto eigenspaces of a normal matrix must be the identity $I$, thereby proving that the eigenvectors form a basis. This trick also works for more general operators, such as the differentiation operator $Lf = \frac{1}{i}\frac{d}{dx}$ on $L^2[0,2\pi]$, where the domain is chosen to consist of continuously differentiable periodic functions with $f' \in L^2$ (absolutely continuous is even better.) The resolvent can be computed directly for this operator by solving $(L-\lambda I)g=f$ for $g$ as a function of $\lambda$. This is done by assuming a given $f$, and solving a first order ODE for $g$ as a function of $\lambda$: $$ \frac{1}{i}g'-\lambda g = f \\ g(0)=g(2\pi) $$ It is my understanding that Cauchy came up with the proof given by Whittaker by considering this equation.
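Here is a small numerical illustration of the matrix statement above, namely that the residue of the resolvent at an eigenvalue is the projection onto the corresponding eigenspace (a sketch; numpy assumed, with the residue computed as a discretized contour integral around one eigenvalue):

```python
import numpy as np

L = np.array([[2.0, 1.0],
              [1.0, 3.0]])                   # selfadjoint, distinct eigenvalues
w, V = np.linalg.eigh(L)
lam0 = w[0]
P_exact = np.outer(V[:, 0], V[:, 0])         # projection onto that eigenspace

# residue of (lambda I - L)^{-1} at lam0: (1/(2 pi i)) times the integral
# over a circle small enough to exclude the other eigenvalue
r, m = 0.1, 2000
P_num = np.zeros((2, 2), dtype=complex)
for t in np.linspace(0.0, 2.0 * np.pi, m, endpoint=False):
    lam = lam0 + r * np.exp(1j * t)
    dlam = 1j * r * np.exp(1j * t) * (2.0 * np.pi / m)
    P_num += np.linalg.inv(lam * np.eye(2) - L) * dlam
P_num /= 2j * np.pi

print(np.allclose(P_num.real, P_exact))      # True (up to discretization error)
```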
To solve this boundary-value problem, multiply by $i$ and then by an integrating factor $e^{-i\lambda t}$ to obtain $$ e^{-i\lambda t}g'-i\lambda e^{-i\lambda t}g = ie^{-i\lambda t}f \\ \frac{d}{dt}(e^{-i\lambda t}g)=ie^{-i\lambda t}f \\ e^{-i\lambda t}g(t) = i\int_{0}^{t}e^{-i\lambda s}f(s)ds + C \\ g(t) = ie^{i\lambda t}\int_{0}^{t}e^{-i\lambda s}f(s)ds+Ce^{i\lambda t} $$ The constant $C$ is determined by requiring periodicity: $$ C = g(0) = g(2\pi)=ie^{2\pi i\lambda}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds+Ce^{2\pi i\lambda} \\ C(1-e^{2\pi i\lambda})=ie^{2\pi i\lambda}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds \\ C = \frac{ie^{2\pi i\lambda}}{1-e^{2\pi i\lambda}}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds $$ Therefore, $g(t,\lambda)$ is given by $$ g(t,\lambda)=\left(i\int_{0}^{t}f(s)e^{-i\lambda s}ds +\frac{ie^{2\pi i\lambda}}{1-e^{2\pi i\lambda}}\int_{0}^{2\pi}e^{-i\lambda s}f(s)ds\right) e^{i\lambda t} $$ Notice that the residues of this expression are negatives of the projections onto the one-dimensional eigenspaces associated with $e^{int}$: $$ R_n = -\frac{1}{2\pi}\int_{0}^{2\pi}f(s)e^{-ins}ds\, e^{int} $$ (The negative is because of using $(L-\lambda I)^{-1}$ instead of $(\lambda I-L)^{-1}$.) You can write this in a more symmetric form simply by splitting the integral over $[0,2\pi]$ into integrals over $[0,t]$ and $[t,2\pi]$, and that's how the analysis is carried out for $\lim_{\lambda\rightarrow\infty}\lambda g(t,\lambda)$, after noting that the $g$ associated with a constant $f \equiv C$ is easily found to be $-C/\lambda$, because a constant function is periodic and $(\frac{1}{i}\frac{d}{dt}-\lambda)\frac{-C}{\lambda}=C$. (I believe Whittaker's analysis stems from the use of the symmetric form.) Reference: E. C. Titchmarsh, Eigenfunction Expansions Associated with Second Order Differential Equations. Titchmarsh proves a general expansion theorem that includes the ordinary Fourier case in the first 20 pages of the first chapter. Titchmarsh was a student of G. H. Hardy, and pioneered much of the rigorous pointwise analysis for this subject. Further Detail: Rewrite $g(x,\lambda)=R(\lambda)f$ as \begin{align} R(\lambda)f & = \frac{i}{1-e^{2\pi i\lambda}}\left\{ \int_{0}^{x} e^{i\lambda (x-t)}f(t)\,dt - \int_{x}^{2\pi}e^{i\lambda(2\pi-(t-x))}f(t)\,dt\right\} \\ & = \frac{i}{1-e^{-2\pi i\lambda}} \left\{ -\int_{0}^{x} e^{-i\lambda(2\pi-(x-t))}f(t)\,dt + \int_{x}^{2\pi}e^{-i\lambda(t-x)}f(t)\,dt \right\} \end{align} The first form is convenient for examining the resolvent for $\Im\lambda > 0$, and the second is convenient for $\Im\lambda < 0$. When examining the integral of the resolvent on a circle of half-integer radius $|\lambda|=N+1/2$, the function $1/(1-e^{2\pi i\lambda})$ is uniformly bounded by a constant $M$ for $\Im\lambda \ge 0$, and the exponentials in the integrals are well-behaved. A similar analysis may be carried out using the second form for $\Im\lambda \le 0$. More to come later ...<|endoftext|> TITLE: Space of smooth structures QUESTION [11 upvotes]: Is there a space of smooth structures on a manifold, analogous to the moduli space of complex structures on a manifold? If so, what is the natural topology? Also, I am a bit confused about why smooth structures usually form a discrete space; for example, there are finitely many smooth structures on spheres. I think one possible exception is that $\mathbb{R}^4$ admits a continuum of smooth structures, but I think this is with some unusual topology.
Intuitively this is because if we deform a smooth structure, we are deforming "smoothly", so we can undo this deformation using a diffeomorphism, and hence smooth structures that are deformation equivalent are diffeomorphic. We can similarly deform complex structures using diffeomorphisms, but then nearby complex structures are not connected by a biholomorphism but by a diffeomorphism, and hence are different in the moduli space. It would be great if someone could make this argument rigorous. As a separate but related question: is it true that any compact topological manifold admits at most finitely many smooth structures (up to homeomorphism of the underlying topological space)? This is true for spheres. This fails for $\mathbb{R}^4$, which is non-compact. REPLY [14 votes]: Let me answer your last question. Theorem. (Kirby, Siebenmann) Let $M^n$ be a closed $n$-dimensional topological manifold, where $n\ge 5$. Then the set of isomorphism classes of smooth structures on $M$ is finite. You can extract this from their Classification theorem, page 155 of Essay IV of their book "Foundational essays on topological manifolds, smoothings and triangulations", vol. 88 of Annals of Mathematics Studies, Princeton University Press, 1977. The basic reason for finiteness is that (according to their classification theorem) the isotopy classes of smooth structures on $M$ are in bijective correspondence with vertical homotopy classes of sections of a certain bundle $E\to M$, and the homotopy groups of the fiber of this bundle are all finite. (The latter is because of the finiteness of the group of smooth structures on $S^n$ for fixed $n$, which was proven by Kervaire and Milnor.) In dimensions $\le 3$ every topological manifold (compact or not) has a unique (up to isotopy) smooth structure. What happens in dimension 4 is anybody's guess. There are examples of closed 4-manifolds supporting infinitely many nondiffeomorphic smooth structures (R. Friedman and J. Morgan, On the diffeomorphism types of certain algebraic surfaces, I and II, J. Diff. Geom. 27 (1988), 297-398). It is conceivable that this is the case for all closed 4-manifolds. It is known (again Kirby and Siebenmann) that in dimension 4 the PL category is isomorphic to the DIFF category (every PL manifold admits a smooth structure and the latter is unique). From this you can easily see that every closed 4-manifold has at most countably many smooth structures. Edit 1. A direct proof of the fact that there are only countably many diffeomorphism classes of smooth compact manifolds is a corollary of S. Peters, Cheeger's finiteness theorem for diffeomorphism classes of Riemannian manifolds, Journal für die reine und angewandte Mathematik 349 (1984), p. 77-82. Namely, he gives a self-contained differential-geometric proof of Cheeger's theorem that given $n$, $D$, $V$ and $K$, there are only finitely many diffeomorphism classes of Riemannian $n$-manifolds of volume $\ge V$, diameter $\le D$ and sectional curvature in the interval $[-K, K]$. (Cheeger's original proof used results of Kirby and Siebenmann.) Now, take $D$ and $K$ to be natural numbers and let $V$ be of the form $1/N$, where $N$ is a natural number. As I said in my comments, the proof is quite painful, and you need to know some basic Riemannian geometry (say, the first 5 chapters of do Carmo's "Riemannian Geometry") to appreciate it. Of course, it is still much, much easier than reading Kirby and Siebenmann.
If you really decide to understand his proof, you can do it in less than two months (starting with the definition of a smooth manifold). In contrast, you probably will never get to the point of understanding any proofs in Kirby-Siebenmann. Edit 2. Here is a possible topology on the space of (isomorphism classes of) smooth structures on an $m$-dimensional compact manifold $M$, which is inspired by the proof of Cheeger's theorem. Fix a finite smooth atlas for a smooth structure $s$ on $M$. This atlas determines (and is determined by) the collection of its transition maps, which are diffeomorphisms between open bounded subsets of $R^m$, $f_{ij}: U_{ij}\to V_{ij}$. Then you declare an open $\epsilon$-neighborhood of $s$ to consist of those smooth structures $s'$ on $M$ which admit a finite atlas with the collection of transition maps $f'_{ij}: U'_{ij}\to V'_{ij}$ such that: The domains $U_{ij}, U'_{ij}$ are within $\epsilon$-Hausdorff distance from each other. Set $U''_{ij}:= U_{ij}\cap U'_{ij}$. The $C^1$-uniform distance between the maps $f_{ij}|U''_{ij}, f'_{ij}|U''_{ij}$ is $<\epsilon$. One needs to check that this defines a basis of a topology (this seems OK). I think this topology will be discrete, because a smooth map between closed manifolds that is (sufficiently) $C^1$-close to a diffeomorphism is a diffeomorphism. However, I do not want to do either one of these things.<|endoftext|> TITLE: At least ten languages are spoken QUESTION [5 upvotes]: At a party of $250$ mathematicians, each mathematician speaks one or more languages. It is found that for any two mathematicians, each speaks at least one language not spoken by the other. Show that there are at least $10$ different languages spoken at the party. I have a feeling that the Pigeonhole Principle is to be used, but I am clueless how to use it. Please help. REPLY [2 votes]: It's more complicated than that, and you need to use Sperner's Theorem. If you consider the set of languages spoken by each mathematician, you know that all these sets are different, and none of the sets is a subset of another. This makes it a Sperner family (antichain), and the theorem gives the maximum size of such a family: with $n$ languages it is $\binom{n}{\lfloor n/2\rfloor}$. Since $\binom{9}{4}=126<250\le 252=\binom{10}{5}$, at least ten languages are needed.<|endoftext|> TITLE: Show that $\mathbb{Q_p} $ is locally compact QUESTION [5 upvotes]: Suppose $\mathbb{Q_p} $ is the fraction field of $\mathbb{Z_p}$ ($p$-adic integers), i.e. $$\mathbb{Q_p} = \left\lbrace\frac{x}{y} \space \bigg{|} \space x,y \in \mathbb{Z_p} , y\neq 0 \right\rbrace$$ Now, with respect to the topology defined by $d(x,y) = e^{-v_p(x-y)}$ ($v_p$ is the $p$-adic valuation), we need to show that $\mathbb{Q_p} $ is locally compact. Any suggestions? REPLY [6 votes]: The accepted answer is not complete, because it does not argue why $p^n\mathbb{Z}_p$ is compact. I think the following idea deals with this: To prove that $\mathbb{Q}_p$ is locally compact, it is enough to prove that $\mathbb{Z}_p$ is compact. For this, thinking of the $p$-adic integers as digit sequences, we have $$\mathbb{Z}_p=\{(a_0,a_1,a_2,...)\mid a_n\in \mathbb{Z}/p\mathbb{Z} \}=(\mathbb{Z}/p\mathbb{Z})^\mathbb{N}$$ Now, by Tychonoff, $(\mathbb{Z}/p\mathbb{Z})^\mathbb{N}$ is compact with the product topology, and it is also easy to see that $x+p^n\mathbb{Z}_p$ is an open set of this topology.
So every cover of $\mathbb{Z}_p$ by sets of the form $x+p^n\mathbb{Z}_p$ will have a finite subcover, and hence $\mathbb{Z}_p$ is compact.<|endoftext|> TITLE: Fields having exactly one quadratic extension (up to isomorphism) QUESTION [7 upvotes]: Let $F$ be an infinite field such that $F$ has, up to field isomorphisms$^{[1]}$, exactly one extension $K/F$ of degree $2$. Does it imply that $[\overline F : F]<\infty$? What happens if $F$ has characteristic $0$? I don't think that this holds, even in characteristic $0$, but I don't have an example of a field $F$ of characteristic $0$ ($\implies F$ is infinite) such that $[\overline F : F] = \infty$ and in $F^*$, any product of two non-squares is a square, and there is at least one non-square (see 4) below). My thoughts: 1) The only example of such $F$ I know is $\Bbb R$: any quadratic extension embeds in $\Bbb C$, which already has degree $2$ over $\Bbb R$. My question is to know if other examples exist: if $[\overline F : F]<\infty$ then (by the Artin-Schreier theorem) $F$ is a real closed field. 2) Any finite field $\Bbb F_q$ has exactly one extension of degree $n$ (namely $\Bbb F_{q^n}$), up to field isomorphisms, for every $n \geq 1$. 3) It implies that all the quadratic extensions are isomorphic as fields, but this is not sufficient, precisely when $F$ has no quadratic extension, e.g. $\overline F=F$ (or $\bigcup_{n \geq 0} K_n$, with $K_0=\Bbb Q,K_{n+1}=\{x \in \Bbb C \mid x^2 \in K_n\}$). 4) In characteristic different from $2$, any quadratic extension of $F$ is separable and has the form $F(\sqrt a)$ where $\sqrt a \not \in F$. Therefore, as mentioned here, the quadratic extensions of $F$ correspond to $A:=F^* / F^{*,2}$ where $F^{*,2} = \{x^2 \mid x \in F^*\}$. Notice that $A$ is an $\Bbb F_2$-vector space, via $[a]_2 \cdot [x]_{F^{*,2}} = [x^a]_{F^{*,2}}$. So the most interesting case is when we are looking for fields of characteristic $\neq 2$ such that $A=F^* / F^{*,2}$ has order $2$. Equivalently, in $F^*$, any product of two non-squares is a square, and there is at least one non-square (because $x,y$ non-squares $\implies x,1/y$ non-squares $\implies x/y = a^2 \in F^{*,2} \implies [x]_{F^{*,2}} = [y]_{F^{*,2}}$). Let $a$ be a non-square in $F^*$ and let $i = \sqrt a$. Showing that $F(i)$ is algebraically closed is not reasonable. Notice that this condition about $[F^* : F^{*,2}]=2$ is involved in 3. here, which is precisely the situation where $a=-1$ is not a square. Then I thought of some extension $F$ of $K=\mathrm{Frac}(\Bbb R[x,y]/(x^2+y^2+1))$, since $-1$ is not a square in $K^*$ but is a sum of squares; we just need $[F^* : F^{*,2}]=2$, and then it's a counterexample, since $F$ won't be a formally real field. I only know how to do $[F^* : F^{*,2}]=1$, see my $K_n$'s in 3). 5) I tried $F = \overline{\Bbb F_2}(t)$, because $t^{1/n}$ has degree $n$ for any $n$, so $[\overline F : F]=\infty$. I think that any extension $F\left(\sqrt{P(t)/Q(t)}\right)$ is isomorphic to $F(\sqrt t) = \overline{\Bbb F_2}(\sqrt t)$ when $P,Q \in \overline{\Bbb F_2}[t]$, i.e. $P/Q \in F$, because in characteristic $2$ we have $\sqrt{a+b}=\sqrt a + \sqrt b$ in the sense that $x^2=a,y^2=b \implies (x+y)^2 = a+b$, so that $\sqrt{P(t)/Q(t)}$ is just a rational fraction in $\sqrt t$, i.e. belongs to $F(\sqrt t)$. However, in characteristic $2$, it is not clear that all quadratic extensions arise as $F(\sqrt a)$ for some non-square $a \in F$. 6) In characteristic 0 (at least $\neq 2$), it is not clear that $F(\sqrt{t+1}) \not \cong F(\sqrt t)$.
If there were a field isomorphism, then there would be $u \in F(\sqrt t)$ such that $u^2=t+1$, hence there would be $a,b \in F[t]$ such that $(a(\sqrt t)/b(\sqrt t))^2 = t+1$, which yields $a(x)^2=(x^2+1)b(x)^2$ as polynomials in $F[x]$... $^{[1]}$ I'm only interested in field isomorphisms, not in "field extension" isomorphisms (i.e. not in $F$-algebra isomorphisms – these are equivalent to saying that $f : K \stackrel{\cong}{\to} K'$ commutes with the embeddings $i : F \to K$ and $i' : F \to K'$). REPLY [5 votes]: Finite fields still give loads of examples. Let $\kappa$ be any finite field, and let $q$ be any prime (not necessarily distinct from the characteristic). Now take the union $\mathcal K$ of all fields $K_n$ for which $[K_n:\kappa]=q^n$. Then $\mathcal K$ is infinite, and will have a unique extension of every degree prime to $q$, in particular only one quadratic extension, if $q\ne2$. EDIT — Addition: On looking more closely at your question, I see that you make some guesses about characteristic two. There the quadratic-extension picture is simultaneously simpler and more complicated. When you adjoin the square root of something, you're making an inseparable extension of degree $p$, and the truth of the matter is that when the base is perfect, as $\overline{\Bbb F_2}$ is, and your field is finitely generated over the base, and the transcendence degree is only one, then there is precisely one inseparable extension of degree $p$, indeed only one purely inseparable extension of degree $p^m$ for each $m$. In other words, starting with $k=\overline{\Bbb F_2}(t)$, no matter what rational function you adjoin the square root of, you get the same extension, $k^{1/2}$. On the other hand, there are infinitely many nonisomorphic quadratic separable extensions of $k=\overline{\Bbb F_2}(t)$, for instance the ones gotten by adjoining the roots of $X^2+t^mX+t$.<|endoftext|> TITLE: When does $n$ divide $2^n+1$? QUESTION [10 upvotes]: For which $n$ does $n\mid2^n+1$? My hypothesis is that the only solution is $n=3^k$, for some positive integer $k$. REPLY [2 votes]: More generally: If $n\mid 2^{n}+1$ and $d\mid 2^n+1$ then $nd\mid 2^{nd}+1$. This is because $n$ and $d$ are odd, so if $2^{n}\equiv -1\pmod d$ then $(-2)^n\equiv 1\pmod d$ and $$\begin{align}\frac{2^{nd}+1}{2^n+1}&=\frac{1-((-2)^n)^d}{1-(-2)^n} \\ &= 1+(-2)^n+(-2)^{2n}+\cdots +(-2)^{n(d-1)}\\ &\equiv 1+1+1+\cdots+1 \\&= d\equiv 0\pmod{d}.\end{align}$$ So $(2^n+1)d\mid 2^{nd}+1$ and hence $nd\mid 2^{nd}+1$. So we can say a solution $n$ is absolutely primitive if, for each prime $p\mid n$, $2^{n/p}+1$ is not divisible by $\mathrm{lcm}(n/p,p).$ For example, $171=3^2\cdot 19$ is not absolutely primitive because $\mathrm{lcm}(171/19,19)=171\mid 2^{171/19}+1$. And $3$ is not primitive since $\mathrm{lcm}(3/3,3)=3\mid 2^{3/3}+1$. Indeed, I wonder if there are any absolutely primitive $n$ other than $1$. If a prime $p\mid n$ and $p\mid 2^n+1$, then $2^{n}=( 2^{n/p})^p\equiv 2^{n/p}\pmod p$, and hence if $n$ is absolutely primitive, then $\frac{n}{p}$ can't be a divisor of $2^{n/p}+1$ for any prime divisor $p$ of $n$. Now given two distinct prime factors $p,q$ of $n$, if $q^k\mid 2^n+1$ and $q^k\not \mid 2^{n/p}+1$ then we must have $q\mid \sum_{i=0}^{p-1} (-2)^{ni/p}$. But if $q\mid 2^{n/p}+1$, we'd have $(-2)^{n/p}\equiv 1\pmod{q}$ and thus $q\mid p$, which is not possible. This means that, if $n$ is absolutely primitive, then for each prime $p\mid n$ there must be another prime $q\mid n$ such that $q\not\mid 2^{n/p}+1$ and $q\mid 2^{n}+1$.
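As a computational aside before the argument concludes, both the hypothesis and the multiplicative lemma above are easy to probe numerically; a minimal sketch (plain Python):

```python
# n | 2^n + 1  iff  pow(2, n, n) == n - 1  (for n > 1)
solutions = [n for n in range(2, 10**5) if pow(2, n, n) == n - 1]
print(solutions[:8])   # [3, 9, 27, 81, 171, 243, 513, 729]: not only powers of 3

# the lemma in action: 9 | 2^9 + 1 and 19 | 2^9 + 1 = 513,
# so 9*19 = 171 should divide 2^171 + 1
assert (2**9 + 1) % 9 == 0 and (2**9 + 1) % 19 == 0
assert (2**171 + 1) % 171 == 0
```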
Continuing: if $2^{n/p}+1$ is not divisible by $q$ but $2^{n}+1$ is divisible by $q$, then $-1$ must be a non-trivial $p$-th power modulo $q$, and thus $q-1$ must be divisible by $p$. So if $p$ is the largest prime factor of $n$, we get that no such $q$ can exist. Therefore, for any $n$ with $n\mid 2^n+1$, either $n=1$ or the largest prime $p\mid n$ satisfies $\frac{n}{p}\mid 2^{n/p}+1$ and $p\mid 2^{n/p}+1$.<|endoftext|> TITLE: If $\sum a_n$ converges then $\liminf na_n=0$ QUESTION [5 upvotes]: I am trying to prove the following statement: Let $(a_n)_{n\in\mathbb N}$ be a positive sequence such that the series $\sum a_n$ is convergent. Prove that $$\liminf_{n}na_n=0$$ Now, if $(a_n)_{n\in \mathbb N}$ is a real valued sequence, then $\liminf a_n=\alpha$ if and only if the following two conditions hold: if $\beta \in \mathbb R \cup \{\pm \infty\}$ is such that $\beta<\alpha$, then there exists $n_0\in \mathbb N$ such that $a_n>\beta$ for every $n>n_0$; and there exists a subsequence $(a_{\varphi (k)})_{k\in \mathbb N}$ of $(a_n)_{n\in \mathbb N}$ that converges to $\alpha$. The first one is trivial, since $(na_n)_{n\in \mathbb N}$ is positive, so the problem basically boils down to constructing a subsequence of $(na_n)_{n\in\mathbb N}$ that converges to zero. The given hypothesis implies that $a_n\to 0$ and that $a_n<\frac{1}{n}$ eventually, but I cannot seem to be able to use these bits of information to construct a subsequence that converges to $0$. I also tried by contradiction, assuming that $\liminf na_n\neq 0$, but did not get very far. This brings me to the following two questions: Is it possible to explicitly construct a subsequence of $(na_n)_{n\in \mathbb N}$ that converges to zero? How can I prove the given statement? REPLY [10 votes]: A constructive argument is not always necessary - it is not, at least, in this problem; we shall reason by contradiction. Assume that $\liminf n a_n > 0$. This means that no subsequence of $(na_n)_n$ tends to $0$, which means that there exist $n_0 \in \Bbb N$ and $r>0$ such that $n a_n \ge r$ for $n \ge n_0$ (in simple words: from $n_0$ onwards, no term of $(n a_n)_n$ comes closer to $0$ than $r$ - otherwise, if there were terms coming arbitrarily close to $0$, they would form a subsequence tending to $0$, which would contradict our assumption). In this case, then, $$\sum _{n=0} ^\infty a_n = \sum _{n=0} ^{n_0 - 1} a_n + \sum _{n=n_0} ^\infty a_n \ge \sum _{n=0} ^{n_0 - 1} a_n + \sum _{n=n_0} ^\infty \frac r n = \infty$$ because the second sum is essentially the harmonic series, which is divergent - but this contradicts the convergence of $\sum a_n$! Therefore, our assumption must be false, so $\liminf n a_n = 0$.<|endoftext|> TITLE: Restriction of self-adjoint operator self-adjoint? QUESTION [5 upvotes]: Consider an unbounded self-adjoint operator $A$ on a Hilbert space $\mathcal{H}$. Let $\mathcal{J} \subset \mathcal{H}$ be a closed subspace reducing $A$, i.e. such that $P A \subset A P$, where $P$ denotes the orthogonal projection onto $\mathcal{J}$. Equivalently, $P \mathcal{D}(A) \subset \mathcal{D}(A)$ and $P A \psi = A P \psi$ for all $\psi \in \mathcal{D}(A)$. Then the restriction $A|\mathcal{J}$ is a densely defined operator on $\mathcal{J}$ with domain $\mathcal{D}(A) \cap \mathcal{J}$. My question is this: is $A|\mathcal{J}$ again self-adjoint? The reason I am interested in this question is the following: Take $\mathcal{H} = L^2(\mathbb{R}^{3N})$, and $\mathcal{J} = \Lambda L^2(\mathbb{R}^{3N})$, where $\Lambda$ denotes the totally antisymmetric subspace, i.e.
those functions $\psi(\vec{x_1},...,\vec{x_N})$ with the property that for any permutation $\sigma \in S_N$, \begin{equation} \psi(\vec{x_{\sigma(1)}},...,\vec{x_{\sigma(N)}}) = \mathrm{sign}(\sigma) \psi(\vec{x_1},...,\vec{x_N}) \end{equation} Then $\mathcal{H}$ is the phase space of an atom consisting of $N$ electrons, with the nucleus fixed at the origin, and $\mathcal{J}$ is the phase space for the same system, but respecting the Pauli principle. My operator on $\mathcal{H}$ is the self-adjoint operator given by \begin{equation} H^N = - \sum_{j=1}^{N} \Delta_j + \sum_{j = 1}^{N} V_{en}(x_j) + \sum_{i < j} V_{ee}(x_i - x_j) \end{equation} where the $V_{ee}$ terms denote electron-electron repulsion, and $V_{en}$ electron-nucleus attraction. In fact, in this case I know of a proof: since the Fourier transform maps antisymmetric functions to antisymmetric functions, one can first show that $H_0 = - \Delta$ is self-adjoint when restricted to $\mathcal{J}$. Then use the fact that the remaining terms are $H_0$-bounded with $H_0$-bound $0$, which remains true for the restriction. I am aware of the related question at Selfadjoint operators. However, the case I am interested in is very different, in the sense that the restricted operator $A|\mathcal{J}$ is considered an operator on $\mathcal{J}$ rather than on the full Hilbert space $\mathcal{H}$. Indeed, considering $A|\mathcal{J}$ to be an operator on $\mathcal{H}$, it is in general (and certainly in my case) not even densely defined. REPLY [4 votes]: The answer is yes. Since $A|\mathcal{J}$ certainly remains symmetric, it suffices to show that $(A \pm i I)(\mathcal{D}(A) \cap \mathcal{J}) = \mathcal{J}$. But we know that $A \pm i I : \mathcal{D}(A) \rightarrow \mathcal{H}$ are surjective since $A$ is self-adjoint. Hence for $\psi \in \mathcal{J} \subset \mathcal{H}$ there exist elements $\phi_{\pm} \in \mathcal{D}(A)$ such that $(A \pm i I)\phi_{\pm} = \psi$. Now replacing $\phi_{\pm}$ by $P \phi_{\pm}$ where $P$ is the orthogonal projection to $\mathcal{J}$, we see that $P \phi_{\pm} \in \mathcal{D}(A) \cap \mathcal{J}$ and $(A \pm i I)P \phi_{\pm} = P(A \pm i I) \phi_{\pm} = P \psi = \psi$. Hence $A | \mathcal{J}: \mathcal{D}(A) \cap \mathcal{J} \rightarrow \mathcal{J}$ is self-adjoint.<|endoftext|> TITLE: Expanding integers into distinct egyptian fractions - what is the optimal way? QUESTION [7 upvotes]: We know that there are infinitely many ways to represent $1$ as a sum of distinct unit fractions (i.e. egyptian fractions). The most optimal one (the least denominators and the least number of fractions) is: $$1=\frac{1}{2}+\frac{1}{3}+\frac{1}{6}$$ But how to represent other integers in the same way? The rules are as follows: we can't use $1$ as a denominator and no repetitions are allowed. So far I found a way, which is surely not optimal: for any integer $a$ we find the closest harmonic number such that $H_n-1<\dots$<|endoftext|> TITLE: Solve $y^2-x(\frac{dy}{dx})^2 = 1$ using proposed change of variables QUESTION [6 upvotes]: I am kind of stuck with this nonlinear differential equation. I am preparing for my finals and I cannot get this one. Full question goes like this: Find all the solutions to the equation $y^2-x(\frac{dy}{dx})^2 - 1= 0$ stating in each case the maximal solution interval. Hint: Use $u=y'\sqrt{-x},\,x<0$ and $u=y'\sqrt{x},\,x>0$. The final solutions are also given: $y=1$ and $y=-1 \quad\forall x $, $y(x)=\cosh(2\sqrt{x}+K),\quad x>0$, $y=\cos(2\sqrt{-x}+K),\quad x<0$. What I have done so far.
Let $u=y'\sqrt{x}\Rightarrow u'=y''\sqrt{x}+y'\frac{1}{2\sqrt{x}}$ and I plug it into the original equation and I differentiate w.r.t. $x$: $y^2-u^2=1\Rightarrow 2yy'-2u(y''\sqrt{x}+\frac{y'}{2\sqrt{x}})=0$. Substituting back $u=y'\sqrt{x},\,x>0$, we get: $2yy'-2y'\sqrt{x}(y''\sqrt{x}+\frac{y'}{\sqrt{x}})=yy'-y'y''x-y'^2 =y'(y-y''x-y')=0$. So we get $y'=0 \Rightarrow y=C$, which is not one of the stated solutions, or $(y-y''x-y')=0$, which does not make a lot of sense to me as we ended up with a second order equation, which needs two arbitrary constants, when we actually started with a first order equation. I actually got the first three solutions using a different approach, which is not the one hinted, but I posted it nonetheless in case it might help someone in order to help me :) $y'^2=\frac{y^2-1}{x}\Rightarrow\frac{1}{\sqrt{y^2-1}}dy=\pm\frac{1}{\sqrt{x}}dx$. Solutions $y=\pm 1$ appear at this step. Using the substitution $y=\cosh t$ we get $t=\pm2\sqrt{x}+C$ which gives the third one: $y=\cosh(2\sqrt{x}+C)$. However I just cannot get the proposed substitution to work and I cannot find the last solution when $x$ is negative. Any help is really appreciated!!!! Thanks!!! REPLY [2 votes]: So we get $y′=0⇒y=C$, which is not one of the stated solutions You are right. It is not one of the stated solutions. It is two of them. Namely the solutions $y=1$ and $y = -1$. Recall that to reach this stage, you differentiated your original equation: $y^2−u^2=1$ to get the equation you have at this point. Just like squaring equations in algebra, this introduces additional solutions. So you have to check the solutions you get against the original equation to see which ones work for it. $y = C$ for an arbitrary $C$ is a solution to the differentiated equation, but when you plug that solution back into the original, you find that it reduces to $C^2 = 1$. I.e., it is only a solution to the original equation when $C = 1$ or $C = -1$. Similarly, not all solutions of $y−y′′x−y′=0$ will be solutions of the original equation either. As you noted, it will have two arbitrary constants. But plugging those solutions into the original equation will give you an equation relating those two arbitrary constants, reducing the degrees of freedom back down to one. Concerning your own method of separation of variables, when you went from $y'^2=\frac{y^2-1}{x}$ to $\frac{1}{\sqrt{y^2-1}}dy=\pm\frac{1}{\sqrt{x}}dx$, you implicitly assumed that $y^2 > 1$. If instead you examine $y^2 < 1$, then you get the cosine solution.<|endoftext|> TITLE: What is $\, _4F_3\left(1,1,1,\frac{3}{2};\frac{5}{2},\frac{5}{2},\frac{5}{2};1\right)$? QUESTION [39 upvotes]: I have been trying to evaluate the series $$\, _4F_3\left(1,1,1,\frac{3}{2};\frac{5}{2},\frac{5}{2},\frac{5}{2};1\right) = 1.133928715547935...$$ using integration techniques, and I was wondering if there is any simple way of finding a closed-form evaluation of this hypergeometric series. What is a closed-form expression for the above series? REPLY [19 votes]: General Principle. Let $A$ (resp. $M, N, B$) be a vector with all components in $\mathbb Z/2$ (resp.
$\mathbb N, \mathbb N, \mathbb C$), $A, M$ and $B, N$ are of the same length, and $S, T$ vectors that meet one of the five following conditions ($k,m,n,i,j\in\mathbb Z$): $$\color{blue}{0.\ S=\{k\},\ T=\emptyset}\ \ \ \ \color{green}{1.\ S=\{k+1/2\},\ T=\emptyset}\ \ \ \ \color{purple}{2.\ S=\{k,m\},\ T=\{n+1/2\}}$$ $$\color{red}{3.\ S=\{k+1/2, m+1/2\},\ T=\{n\}}\ \ \color{orange}{4.\ S=\{k,m,n\},\ T=\{i+1/2,j+1/2\}}$$ Then the hypergeometric series $\, _{q+1}F_q(S,A,B;T,A+M,B-N;1)$, whenever convergent and non-terminating, is expressible via level $4$ MZVs. OP's series belongs to case $4$ and is of low weight, thus solved without much difficulty. For the statement's proof and various examples, see Theorem $1$ here. To show its power we illustrate a $_4F_3$ table. One may generate an infinitude of $_4F_3$ with half-integer parameters based on the principle above. The table below consists of all known $_4F_3$ with $z=1$ and all parameters in $\{1/2,1,3/2,2\}$ that have an MZV or Gamma closed form. $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};\frac{3}{2},\frac{3}{2},\frac{3}{2};1\right)=\frac{\pi ^3}{48}+\frac{1}{4} \pi \log ^2(2)$ $\small\pi \, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};1,\frac{3}{2},\frac{3}{2};1\right)=-16 \Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)+\frac{3 \pi ^3}{8}+\frac{1}{2} \pi \log ^2(2)$ $\small\pi \, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{2};\frac{3}{2},\frac{3}{2},2;1\right)=-8 C-32 \Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)+\frac{3 \pi ^3}{4}+4+\pi \log ^2(2)$ $\small\, _4F_3(1,1,1,1;2,2,2;1)=\zeta (3)$ $\small\, _4F_3\left(1,1,1,1;\frac{3}{2},\frac{3}{2},2;1\right)=2 \pi C-\frac{7 \zeta (3)}{2}$ $\small\, _4F_3\left(1,1,1,1;\frac{1}{2},2,2;1\right)=\frac{7 \zeta (3)}{4}+\frac{\pi ^2}{2}-\frac{1}{2} \pi ^2 \log (2)$ $\small\, _4F_3\left(1,1,1,1;\frac{3}{2},2,2;1\right)=\frac{1}{2} \pi ^2 \log (2)-\frac{7 \zeta (3)}{4}$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},1;\frac{3}{2},\frac{3}{2},\frac{3}{2};1\right)=\frac{7 \zeta (3)}{8}$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},1;\frac{3}{2},\frac{3}{2},2;1\right)=-\pi +2+\pi \log (2)$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},1;2,2,2;1\right)=8-\frac{16 \Gamma \left(\frac{3}{4}\right) \Gamma \left(\frac{7}{4}\right)}{\pi \Gamma \left(\frac{5}{4}\right)^2}$ $\small\pi \, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},1;\frac{3}{2},2,2;1\right)=16 C-24+4 \pi$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},2;1,\frac{3}{2},\frac{3}{2};1\right)=\frac{\pi }{4}+\frac{1}{4} \pi \log (2)$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},2;\frac{3}{2},\frac{3}{2},\frac{3}{2};1\right)=\frac{7 \zeta (3)}{16}+\frac{\pi ^2}{16}$ $\small\, _4F_3\left(1,1,1,\frac{3}{2};2,2,2;1\right)=\frac{\pi ^2}{3}-4 \log ^2(2)$ $\small\, _4F_3\left(1,1,1,\frac{1}{2};2,2,2;1\right)=-\frac{\pi ^2}{3}+8+4 \log ^2(2)-8 \log (2)$ $\small\, _4F_3\left(1,1,1,\frac{1}{2};2,2,\frac{3}{2};1\right)=4 \log (2)-\frac{\pi ^2}{6}$ $\small\, _4F_3\left(1,1,1,\frac{1}{2};2,\frac{3}{2},\frac{3}{2};1\right)=4 C-\frac{\pi ^2}{4}$ $\small\, _4F_3\left(1,1,1,\frac{1}{2};\frac{3}{2},\frac{3}{2},\frac{3}{2};1\right)=\frac{7 \zeta (3)}{2}-\pi C$ $\small\, _4F_3\left(\frac{3}{2},\frac{3}{2},\frac{3}{2},1;2,2,2;1\right)=\frac{8 \pi }{\Gamma \left(\frac{3}{4}\right)^4}-8$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},1,1;\frac{3}{2},\frac{3}{2},\frac{3}{2};1\right)=4
\Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)-\frac{\pi ^3}{32}-\frac{1}{8} \pi \log ^2(2)$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},1,1;\frac{3}{2},\frac{3}{2},2;1\right)=\frac{\pi ^2}{4}-2 \log (2)$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},1,1;\frac{3}{2},2,2;1\right)=2 \pi -8+4 \log (2)$ $\small\pi \, _4F_3\left(\frac{1}{2},\frac{1}{2},1,1;2,2,2;1\right)=-32 C-16 \pi +48+16 \pi \log (2)$ $\small\pi \, _4F_3\left(1,1,\frac{3}{2},\frac{3}{2};2,2,2;1\right)=16 \pi \log (2)-32 C$ $\small\, _4F_3\left(\frac{1}{2},\frac{1}{2},1,2;\frac{3}{2},\frac{3}{2},\frac{3}{2};1\right)=C+2 \Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)-\frac{\pi ^3}{64}-\frac{1}{16} \pi \log ^2(2)$ $\small\pi \, _4F_3\left(1,1,\frac{1}{2},\frac{3}{2};2,2,2;1\right)=32 C+8 \pi -16-16 \pi \log (2)$ To demonstrate its full power, we illustrate higher-weight examples (one for each case). $\small \, _7F_6\left(\{1\}_6,\frac{3}{2};\{2\}_3,\{\frac52\}_3;1\right)=1512 \pi C+2592 \pi \Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)+3456 \pi \Im\left(\text{Li}_4\left(\frac{1}{2}+\frac{i}{2}\right)\right)-2592 \text{Li}_4\left(\frac{1}{2}\right)-1728 \text{Li}_5\left(\frac{1}{2}\right)-3024 \zeta (3)+\frac{5859 \zeta (5)}{4}-\frac{81}{8} \pi \zeta \left(4,\frac{1}{4}\right)+\frac{81}{8} \pi \zeta \left(4,\frac{3}{4}\right)-\frac{369 \pi ^4}{10}\\ \scriptsize-1620 \pi +4536+\frac{72 \log ^5(2)}{5}-108 \log ^4(2)-6 \pi ^2 \log ^3(2)+27 \pi ^2 \log ^2(2)+\frac{123}{5} \pi ^4 \log (2)$ $\small \, _7F_6\left(\frac{1}{2},1,\{\frac54\}_5;\frac{3}{2},\{\frac94\}_5;1\right)=-\frac{3125 C}{81}-\frac{96875 \zeta (5)}{96}-\frac{21875 \zeta (3)}{216}+\frac{756250}{243}-\frac{3125 \pi ^2}{648}-\frac{3125 \pi ^4}{864}-\frac{3125 \pi ^3}{864}-\frac{3125 \pi }{972}-\frac{15625 \pi ^5}{4608}-\frac{3125}{486} \log (2)+\frac{3125 }{2304}\left(\zeta \left(4,\frac{3}{4}\right)-\zeta \left(4,\frac{1}{4}\right)\right)$ $\small \, _8F_7\left(\{\frac12\}_4,\frac{7}{6},\frac{5}{4},\frac{4}{3},\frac{3}{2};\frac{1}{6},\frac{1}{4},\frac{1}{3},\{\frac52\}_4;1\right)=\frac{2835 \pi \zeta (3)}{32}-\frac{17739 \pi }{128}-\frac{1593 \pi ^3}{512}+\frac{945}{16} \pi \log ^3(2)-\frac{4779}{128} \pi \log ^2(2)+\frac{945}{64} \pi ^3 \log (2)-\frac{3645}{64} \pi \log (2)$ $\small \, _8F_7\left(\{\frac12\}_4,1,1,\frac{4}{3},\frac{5}{3};\frac{1}{3},\frac{2}{3},\{\frac32\}_4,\frac{5}{2};1\right)=-\frac{3}{8} S+\frac{3}{8} T-\frac{105 C}{64}+\frac{105}{16} \Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)+\frac{3}{4} \Im\left(\text{Li}_4\left(\frac{1}{2}+\frac{i}{2}\right)\right)-3 \Im\left(\text{Li}_5\left(\frac{1}{2}+\frac{i}{2}\right)\right)+\frac{3 \zeta \left(4,\frac{3}{4}\right)}{2048}-\frac{3 \zeta \left(4,\frac{1}{4}\right)}{2048}+\frac{35 \pi ^5}{8192}+\frac{105}{128}-\frac{105 \pi ^3}{2048}+\frac{1}{512} \pi \log ^4(2)+\frac{1}{256} \pi \log ^3(2)+\frac{3 \pi ^3 \log ^2(2)}{1024}-\frac{105}{512} \pi \log ^2(2)+\frac{3 \pi ^3 \log (2)}{1024}$ $\small \pi \, _7F_6\left(\{-\frac12\}_2,\{1\}_5;\{2\}_6;1\right)=-\frac{2560}{9} S+\frac{9728}{27} T-\frac{47104 C}{243}-\frac{14336}{27} \Im\left(\text{Li}_3\left(\frac{1}{2}+\frac{i}{2}\right)\right)-\frac{32768}{27} \Im\left(\text{Li}_4\left(\frac{1}{2}+\frac{i}{2}\right)\right)-\frac{16384}{9} \Im\left(\text{Li}_5\left(\frac{1}{2}+\frac{i}{2}\right)\right)+\frac{256 \pi \zeta (3)}{27}-\frac{64}{9} \pi \zeta (3) \log (2)+\frac{32 \zeta \left(4,\frac{1}{4}\right)}{9}-\frac{32 \zeta \left(4,\frac{3}{4}\right)}{9}+\frac{4}{27} \zeta
\left(4,\frac{1}{4}\right) \log (2)-\frac{4}{27} \zeta \left(4,\frac{3}{4}\right) \log (2)+\frac{25 \pi ^5}{9}+\frac{112 \pi ^3}{9}-\frac{46784 \pi }{729}+\frac{117248}{729}-\frac{32}{9} \pi \log ^4(2)+\frac{512}{27} \pi \log ^3(2)+\frac{16}{3} \pi ^3 \log ^2(2)-\frac{448}{9} \pi \log ^2(2)-\frac{128}{9} \pi ^3 \log (2)+\frac{23552}{243} \pi \log (2)$ Here $S,T$ denote $\Im \sum_{k>j>0} \frac{i^k}{k^4 j},\ \ \Im \sum_{k>j>0} \frac{i^k (-1)^j}{k^4 j}$ respectively, which are irreducible level $4$ MZVs. See paper linked above for more.<|endoftext|> TITLE: When to use integral? QUESTION [5 upvotes]: I have a question concerning when to use an integral and what the difference is between two formulations, one with an integral and one without. I have formulated a simple example: Let's assume we have m = 1 kg of water that is heated. And because of this, there is some vapor forming. Now let's say that the fraction of vapor ($\theta$) takes values from 0 to 1 and has the profile shown below: Now here is my problem. For calculating the mass of the vapor ($m_{vapor}$), there are two formulations in my mind: $m_{vapor} = m \cdot \theta(t)$ and $m_{vapor} = m \cdot \int_t \theta (t) dt$ But the problem is that I don't know why I would use one over the other, and here is where I need help. Can anyone please help me to understand why one would use the integral formulation or not? What makes them different? I have serious problems understanding the function of the integral, other than the fact that it represents the area under a curve. But when to use it? I would highly appreciate it if anyone can explain it to me in detail. Thank you in advance! REPLY [11 votes]: You can't tell whether to use an integral just by looking at the graph of $\theta(t)$. Whether or not to use an integral depends on two things: What does the formula mean? What is the answer you need? In this case, you were told $\theta(t)$ is the fraction of vapor as a mass $m$ of water is heated. I take this to be the following definition of $\theta(t)$: $$ \theta(t) = \frac{m_\mathrm{vapor}(t)}{m}. $$ From this, simple algebra tells us immediately that $m_\mathrm{vapor}(t) = m \cdot \theta(t).$ Therefore no integral is necessary. If instead of the fraction of vapor, you had some measurement of the rate at which the water was being turned into vapor, then you could use an integral to determine how much water was turned to vapor between two times. The two times could be "before any water was vaporized" and "now", if that's appropriate to what is being asked.<|endoftext|> TITLE: What are the simply-connected two-dimensional Lie groups? QUESTION [5 upvotes]: I would like to know what the simply-connected Lie groups of dimension $2$ are. It is well-known that for every Lie algebra, there is exactly one simply-connected Lie group having it as its Lie algebra. I know that there are two two-dimensional Lie algebras, namely the abelian Lie algebra and another Lie algebra with basis $\{x, y\}$ such that $[x, y]=x$. The simply-connected Lie group corresponding to the abelian Lie algebra is $\mathbb{R}^2$. Can you tell me what the Lie group corresponding to the other Lie algebra is? REPLY [2 votes]: Here is perhaps a more explicit answer. Since $G$ is simply connected, it suffices to look at the Lie algebra. Let $G$ be a Lie group with a two-dimensional Lie algebra $\mathfrak{g}$ spanned by vectors $X,A.$ If all Lie brackets of $\mathfrak{g}$ are trivial then $\mathfrak{g}$ is just a commutative Lie algebra and there is really nothing else to say.
Suppose next that $\mathfrak{g}$ is not commutative, and let us consider the adjoint representation of $\mathfrak{g}$ with respect to the ordered basis $\mathcal{B}=\left( X,A\right) .$ Then the matrix representation of $ad\left( X\right) $ with respect to this fixed basis is given by $$ \left[ ad\left( X\right) \right] _{\mathcal{B}}=\left[ \begin{array} [c]{cc} 0 & a\\ 0 & b \end{array} \right] $$ Similarly, the matrix representation of $ad\left( A\right) $ with respect to our fixed basis is given by $$ \left[ ad\left( A\right) \right] _{\mathcal{B}}=\left[ \begin{array} [c]{cc} c & 0\\ d & 0 \end{array} \right] . $$ Next, $\left[ ad\left( A\right) \right]_{\mathcal{B}}$ has eigenvalues $c,0.$ Note that $c$ cannot be zero. Otherwise, we could then find a basis for the Lie algebra such that the endomorphism $adA$ is zero; contradicting the fact that $\mathfrak{g}$ is not commutative. Assuming next that $c$ is not zero, then $c^{-1}ad\left( A\right) $ has eigenvalues $1,0.$ As such, $c^{-1}ad\left( A\right) $ is diagonalizable and its Jordan canonical form is $$ \left[ \begin{array} [c]{cc} 1 & 0\\ 0 & 0 \end{array} \right] . $$ Consequently, there exists a basis $\left( Y,B\right) $ for the Lie algebra $\mathfrak{g}$ such that the only non-trivial Lie brackets are given by $\left[ B,Y\right] =Y.$ This completes the classification of all two-dimensional Lie algebras.<|endoftext|> TITLE: Prove: If $x+y+z=xyz$ then $\frac {x}{1-x^2} +\frac {y}{1-y^2} + \frac {z}{1-z^2}=\frac {4xyz}{(1-x^2)(1-y^2)(1-z^2)}$ QUESTION [8 upvotes]: If $x+y+z=xyz$, prove that: $$\frac {x}{1-x^2} +\frac {y}{1-y^2} + \frac {z}{1-z^2}=\frac {4xyz}{(1-x^2)(1-y^2)(1-z^2)}$$. My Attempt: $$L.H.S=\frac {x}{1-x^2}+\frac {y}{1-y^2}+\frac {z}{1-z^2}$$ $$=\frac {x(1-y^2)(1-z^2)+y(1-x^2)(1-z^2)+z(1-x^2)(1-y^2)}{(1-x^2)(1-y^2)(1-z^2)}$$ $$=\frac {x+y+z-xz^2-xy^2+xy^2z^2-yz^2-yx^2+x^2yz^2-zy^2-zx^2+zx^2y^2}{(1-x^2)(1-y^2)(1-z^2)}$$. I could not move on from here. Please help. Thanks REPLY [2 votes]: $\textbf{HINT:}$ Try putting $x=\tan(\alpha);y=\tan(\beta);z=\tan(\gamma)$. Use: $$\tan(2\theta)=\frac{2\tan(\theta)}{1-\tan^2(\theta)}$$ $\textbf{Note that}$ $\alpha +\beta+\gamma=\pi$ by the condition, since: $$\tan(\alpha+\beta+\gamma)=\frac{\Sigma \tan(\alpha)-\tan(\alpha)\tan(\beta)\tan(\gamma)}{1-\Sigma \tan(\alpha)\tan(\beta)}$$ Where $\Sigma$ denotes cyclic summation.<|endoftext|> TITLE: What is the color number of the 3D space, if we allow only convex regions? QUESTION [7 upvotes]: I am thinking about the analogue of the well-known 2D coloring problem for the 3D space (with the trivial geometry & topology). As this reference says, simply increasing the dimensions by one doesn't work. It would elevate the color number to infinity, because: In this case, however, once you go to three dimensions, you can make partitions of space into regions for which you need N colors to color the regions in order that no two adjacent regions will have the same color for any N. You can make an example by starting with one ball. Now, add a ball to the picture and connect it with a thin tube to the first ball. Now, add a third ball to the picture and connect this ball with two thin tubes to the two balls already in the picture. You can keep adding balls and connecting them to all the other balls like this because there is enough space in three dimensions to work with. If the balls represent regions, since each ball is touching every other ball, you need at least as many colors as there are balls to color them.
I think it would be useful if we added a restriction: all of the regions should be convex. Also, I think I am not the first one who has thought of this possibility. Is it possible? How would one even start to think about such a problem? What could be the result? REPLY [2 votes]: I actually considered this very problem some years ago and managed to convince myself that the answer should be finite if we restricted to convex regions (I thought it would be less than 20). Unfortunately, it seems that even when we restrict to convex regions, the 'colour number' can still be arbitrarily large. There are two references which I've found online which claim to prove this, specifically when we restrict to just using cuboid regions. Painting the Office by Reed and Allwright. Colouring Rectangular Blocks in 3-space by Magnant and Martin. The first paper also has references to earlier works where it was proved that the colour number is unbounded for convex regions.<|endoftext|> TITLE: Norms of linear maps QUESTION [12 upvotes]: Let $M_n(\mathbb{C})$ denote the algebra of $n \times n$ complex matrices, and $H_n(\mathbb{C})$ denote the linear space of $n \times n$ Hermitians. Both spaces are endowed with the usual operator norm. Assume that $$ \Phi \colon H_n(\mathbb{C}) \rightarrow H_n(\mathbb{C})$$ is a (real) linear map of norm $1.$ Then $\Phi$ has a natural linear extension to $M_n(\mathbb{C})$ by $$\Phi(A) := \Phi\left({A+A^* \over 2}\right) + i\Phi\left({A-A^* \over 2i}\right).$$ Could you give me an example with $\|\Phi\| > 1$? Or an explanation why this might happen? REPLY [6 votes]: I think I have an example showing the norm can be greater than $1$. For Hermitian $H=\begin{bmatrix}a&b\\\overline{b}&d\end{bmatrix}\in H_2(\mathbb C)$, define $\Phi(H)=\begin{bmatrix}0&\dfrac{a+di}{\sqrt2}\\\dfrac{a-di}{\sqrt2}&0\end{bmatrix}.$ We have $\|\Phi(H)\|=\frac1{\sqrt2}\sqrt{a^2+d^2}\leq\max\{|a|,|d|\}\leq\|H\|,$ with equality holding in case $|a|=|d|$ and $b=0$. Consider the extension applied to $A=\begin{bmatrix}1&0\\0&i\end{bmatrix}$. We have $\|A\|=1$ while $\tilde{\Phi}(A)=\begin{bmatrix}0&0\\\sqrt2&0\end{bmatrix},$ so $\|\tilde{\Phi}(A)\|=\sqrt2$. Motivation: To have the norm of the extension increase, it makes sense to look for cases where the real and imaginary parts of $A$ are "nonoverlapping," so that $A+A^*$ and $A-A^*$ can be as large as possible without increasing the norm of $A$. Then if we can have $\Phi$ map each of those parts in such a way that they do "overlap," an increase in norm can happen. This was done by taking distinct diagonal entries and placing them in the same off-diagonal positions. The off-diagonal was needed to allow the diagonal entries from the Hermitian matrices to be sent to real and imaginary parts, keeping the norm of $\Phi$ from being too large on $H_2(\mathbb C)$.<|endoftext|> TITLE: Approximation of the exponential of an operator to the second order QUESTION [5 upvotes]: The second-order approximation of the exponential function of a real variable $x$ is $$e^{x}\approx 1+x+\frac{x^2}{2!}, \hspace{0.3cm}\text{for}\hspace{0.3cm}x^3\to 0$$ and we can write $$e^{x}= 1+x+\frac{x^2}{2!}+\mathcal{O}(x^3), \hspace{0.3cm}\text{for}\hspace{0.3cm}x^3\to 0$$ The exponential of an operator $\hat{A}$ is defined as: $$e^{\lambda\hat{A}}=I+\lambda\hat{A}+\frac{(\lambda\hat{A})^2}{2!}+\frac{(\lambda\hat{A})^3}{3!}+\cdots$$ where $\lambda$ is some parameter (real or complex number). When are we allowed to write $$e^{\lambda\hat{A}}\approx I+\lambda\hat{A}+\frac{(\lambda\hat{A})^2}{2!}$$ i.e.
to approximate it to second order? We cannot just say that the operator $\hat{A}$ has to be small such that $\hat{A}^3\to 0$, because that does not make sense the way it does for the real variable $x$. Is it enough to have $\lambda^3 \to 0$? REPLY [5 votes]: The short answer is that you need to define a norm to measure the "size" of each operator. In order to answer your question, we need to ask what it means to say that $e^{\hat{A} } = I + \hat{A} + \frac{ \hat{A}^2 }{2!} + \mathcal{O}(\hat{A}^3 )$. This is where we need to define a norm on your space of operators. This is a function mapping from your space of operators to $[0,\infty)$. I'm going to denote the norm of an operator as $||\hat{A}||$. Once you've defined a norm, i.e. a way of measuring the "size" of an operator, then we can talk about things like $\mathcal{O}(\hat{A} )$. So once you've defined (or been given) a norm $||\hat{A}||$, what you're looking for is basically the same as you have in the real variable $x$ as in your post, but you're sending the norm of the operator to zero, and saying that as this goes to zero, you want the norm of the higher order terms to go to zero faster than the lower order terms. So the higher order terms are asymptotically negligible, and so for very small values of $||\hat{A}||$, we can ignore the higher order terms without it making too much of a difference. Formally, I'd write this as: $$||e^{\hat{A} } - ( I + \hat{A} + \frac{ \hat{A}^2 }{2!}) || = \mathcal{O}(||\hat{A}^3|| ),$$ which is just like how in the real variable case you can say $|e^x - (1+ x + x^2/2)| = O(x^3)$. Hopefully this helps you see the parallels between the real variable and the operator cases. Also, it turns out that if you multiply by $\lambda$ and then send $\lambda$ to zero, this is going to be the same as sending the norm of the operator to zero. That is, sending $\lambda$ to zero sends $||\lambda \hat{A}||$ to zero. So you were on the right track with your suggestion of using a scalar multiplicative constant. :)<|endoftext|> TITLE: A proof for the identity $\sum_{n=r}^{\infty} {n \choose r}^{-1} = \frac{r}{r-1}$ QUESTION [15 upvotes]: Do you have any idea how to prove the following identity via a combinatorial (or algebraic) method? $\sum_{n=r}^{\infty} {n \choose r}^{-1} = \frac{r}{r-1}$ This is Exercise 71 in Chapter 2 of the book Chen C.C., Koh K.M. Principles and techniques in combinatorics. The book does not give a solution, although it mentions: "see H. W. Gould, Combinatorial Identities, Morgantown, W.V. (1972), 18-19". many thanks in advance, Shahram REPLY [3 votes]: \begin{align} \sum_{n=r}^\infty \binom{n}{r}^{-1} &= \sum_{n=r}^\infty \, (n+1)\int_0^1 x^{n-r} (1-x)^{r}\,dx \\ &= \int_0^1 \left( \sum_{n=r}^\infty (n+1)x^{n-r} \right) (1-x)^r \,dx \\ &= \int_0^1 \frac{1+r-rx}{(1-x)^2} (1-x)^r \,dx \\ &= \frac{r}{r-1} \end{align}<|endoftext|> TITLE: How did Descartes come up with the spoof odd perfect number $198585576189$? QUESTION [10 upvotes]: We call $n$ a spoof odd perfect number if $n$ is odd and $n=km$ for two integers $k, m > 1$ such that $\sigma(k)(m + 1) = 2n$, where $\sigma$ is the sum-of-divisors function. In a letter to Mersenne dated November $15$, $1638$, Descartes showed that $$d = {{3}^2}\cdot{{7}^2}\cdot{{11}^2}\cdot{{13}^2}\cdot{22021} = 198585576189$$ would be an odd perfect number if $22021$ were prime. Here is my question: How did Descartes come up with the spoof odd perfect number $198585576189$?
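For reference, the spoof condition is easy to check by machine. The following is a minimal pure-Python sketch written for this post (the helper sigma below is a naive divisor sum of my own, not a library function):

def sigma(n):
    # naive sum-of-divisors by trial division; fine for n around 9*10**6
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

k = 3**2 * 7**2 * 11**2 * 13**2
m = 22021
n = k * m
print(n)                            # 198585576189
print(sigma(k) * (m + 1) == 2 * n)  # True: the spoof condition holds
print(m == 19**2 * 61)              # True: 22021 is composite, not prime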
REPLY [2 votes]: If $22021$ were prime, we would have $$\sigma(d)=\sigma(3^2\cdot 7^2\cdot 11^2\cdot 13^2)\cdot 22022=(1+3+3^2)\cdot(1+7+7^2)\cdot(1+11+11^2)\cdot(1+13+13^2)\cdot 22022=2d$$ which can be verified by multiplication. I guess Descartes calculated $\sigma(3^2\cdot 7^2\cdot 11^2\cdot 13^2)=3^2\cdot 7\cdot 13\cdot 19^2\cdot 61$, tried to multiply in the factor $19^2\cdot 61$, which does not fit the $3^2\cdot 7^2\cdot 11^2\cdot 13^2$ part, and was lucky.<|endoftext|> TITLE: If $[\overline F : F] = \infty$, does $F$ have extension of degree $n$ for any $n \geq 1$? QUESTION [5 upvotes]: Let $F$ be a field and assume that $[\overline F : F] = \infty$. Does it imply that for any $n \geq 1$, there is a field extension $K/F$ of degree $n$? Notice that if $[\overline F : F] < \infty$ then $F$ is a real closed field, so the answer is no (for $n > 2$). Moreover, it is not interesting to ask for extensions $\overline F/K$ such that $[\overline F : K] = [\overline K : K] = n$ because, by Artin-Schreier, this implies $n \leq 2$. I know that there exist extensions $K/F$ with arbitrarily large degree (take $x_0 \in \overline F \setminus F$ then $K_0 = F(x_0)$ has finite degree over $F$, so we can find $x_1 \in \overline F \setminus K_0$, then $K_1=K_0(x_1)$ has finite degree over $F$ and so on). Clearly, there are $F$-vector subspaces of $\overline F$ of dimension $n$ over $F$, but they might not be subfields. I know that $L = \Bbb Q(\sqrt p \mid p \text{ prime}) / \Bbb Q$ has no sub-extension $K/\Bbb Q$ of degree $3$. But here $L$ is not the algebraic closure of $\Bbb Q$. REPLY [8 votes]: No. Let $ p $ be any prime and let $ F $ be any field whose absolute Galois group is the profinite completion $ \bar{\mathbf Z} $ of $ \mathbf Z $, and consider the compositum $ L $ of all finite extensions of $ F $ of degree prime to $ p $ in some fixed algebraic closure $ \bar{F} $. Then, $ \textrm{Gal}(\bar{L}/L) \cong \mathbf Z_p $, and any finite extension of $ L $ has degree a power of $ p $. Since the Galois group is infinite, it follows that the extension is also infinite. Concrete examples include $ F = \mathbb F_q $ for any prime $ q $, and $ F = \mathbb C((T)) $.<|endoftext|> TITLE: Formula for a "Fairness Variance" QUESTION [5 upvotes]: Short question: Propose a formula to apply to $x_0, x_1, x_2, \dots, x_n$ that returns a number which can sort these 7 datasets in this order: Medium question: Given 3 datasets, I want to have a formula that returns a number that represents the "(un)fairness" of a dataset, so I can sort/compare the datasets on that. Let's define fairness as the best situation for the worst, then the best situation for the second worst, and so on. For example, suppose we want to make assigning 15 shifts to 5 employees as fair as possible. In the above example, the middle dataset is the fairest, because the worst-off employee (the one with the most shifts, purple) is as well off as possible (only 5 shifts in the middle dataset). However, if we calculate the variance (2.8) on these datasets, the second and third datasets have the same value. Is there a formula for a number (let's call it Fairness Variance for now) that would allow us to sort these datasets by fairness? Long question: See this blog article which demonstrates that all common formulas (including standard deviation, etc.) don't work properly. Does such a formula even exist? Can anyone prove it does or doesn't?
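To make the tie concrete: the actual datasets are only in the pictures above, so here are two made-up stand-ins (hypothetical numbers, chosen only to reproduce the described situation) that both assign 15 shifts to 5 employees and both have variance 2.8, yet differ under the "best situation for the worst, then the second worst, ..." ordering. A minimal Python sketch:

# two hypothetical schedules: 5 employees, 15 shifts each, same variance
a = [1, 2, 2, 5, 5]
b = [1, 1, 4, 4, 5]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(variance(a), variance(b))  # 2.8 2.8 -> variance cannot separate them

# fairness as defined above: compare the worst-off first, then the next, ...
# i.e. sort descending and compare lexicographically (smaller = fairer)
key_a = sorted(a, reverse=True)  # [5, 5, 2, 2, 1]
key_b = sorted(b, reverse=True)  # [5, 4, 4, 1, 1]
print(key_b < key_a)             # True: b is fairer under this ordering

Note that this lexicographic key is a sortable object rather than the single number asked for; it only pins down the intended ordering precisely.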
REPLY [2 votes]: An ideal measuring function $\,g(x)\,$ should indicate that the ideal schedule (all employees have the ideal number $\bar{x}$ of tasks) scores lower than an equal schedule in which all employees have the same number of tasks $\alpha$, which in turn scores lower than an almost perfect schedule in which one employee has $\alpha+1$ tasks and the rest have $\bar{x}$: $$ n\cdot g(\bar{x}) \,\color{red}{\lt}\, n\cdot g(\alpha) \,\color{red}{\lt}\, (n-1)\cdot g(\bar{x})+g(\alpha+1) $$ Where: $\,\qquad\qquad\,\bar{x}\,\colon\,$ ideal number of tasks. $\,\left(\,\bar{x}=3\,\right)\,$. $\,\qquad\qquad\,n\,\colon\,$ number of employees. $\,\left(\,n=5\,\right)\,$. And by considering the inequality $\,n\,(n+1)^{\alpha}\,\lt\, (n+1)^{\alpha+1}\,$, it is possible to create a good formula as follows: $$ \begin{align} {\small\text{Measuring}\,\text{function}}\quad g(x_i) &=n^{\left| x_i-\bar{x} \right|} \\[3mm] {\small\text{Deviation}\,\,\,\text{function}}\quad d(\,n\,) &= \frac{\sum_{i=1}^{n}g(x_i)}{n} =\frac{\sum_{i=1}^{n}\,n^{\left| x_i-\bar{x} \right|}}{n} \\[3mm] {\small\text{Unfairness}\,\text{function}}\quad f(\,n\,) &= \log_{n}\frac{\sum_{i=1}^{n}g(x_i)}{n} = \color{red}{\frac{\log\left(\sum_{i=1}^{n}\,n^{\left| x_i-\bar{x} \right|}\right)}{\log{n}}-1} \\[3mm] \end{align} $$ Where the logarithmic scale shall keep the numbers reasonably readable.<|endoftext|> TITLE: Normal derivative of a $H^1$- Sobolev function QUESTION [6 upvotes]: Let $u\in H^1(\Omega)$, where $\Omega$ is a bounded open set of $\mathbb{R}^n$ with Lipschitz boundary. We denote the outward unit normal as $n$, defined a.e. on $\partial\Omega$, and the normal derivative of $u$ as $$ \frac{\partial u}{\partial n}:=\nabla u\cdot n. $$ Which space does the normal derivative belong to? Is it possible to show $\frac{\partial u}{\partial n}\in L^2(\partial\Omega)$? I think it's not possible if we don't require at least that $u\in H^2(\Omega)$. Indeed it is easy to get $$ \|\frac{\partial u}{\partial n}\|_{L^2(\partial \Omega)}\le \|\nabla u \|_{L^2(\partial \Omega)}. $$ By the Trace theorem, we know that $\nabla u \in L^2(\partial\Omega)$ if $\nabla u\in H^1(\Omega)$, i.e. $u\in H^2(\Omega)$. Note that my notation is quite messy when I deal with the norm of the gradient... REPLY [4 votes]: If $u$ belongs merely to $H^1(\Omega)$, then you cannot define a normal-derivative-trace operator. Indeed, if $T : H^1(\Omega) \to S(\partial\Omega)$ were such an operator, where $S(\partial\Omega)$ is some Banach space on the boundary, and if $T$ were linear, you arrive at the following contradiction: if $T$ is reasonably defined, you would have $T \varphi = 0$ for all $\varphi \in C_c^\infty(\Omega)$. By density of $C_c^\infty(\Omega)$ in $H_0^1(\Omega)$ and continuity of $T$, this implies $T u = 0$ for all $u \in H_0^1(\Omega)$. But this is absurd (consider $u \in C^1(\bar\Omega)$). On the other hand, if $u \in H^1(\Omega)$ and $\Delta u \in L^2(\Omega)$, you can define the trace of the normal derivative in $H^{-1/2}(\partial\Omega)$ by duality. Indeed, for regular $v$, you have $$\int_\Omega \Delta u \, v + \nabla u \cdot \nabla v \, \mathrm{d}x = \int_{\partial\Omega} \frac{\partial u}{\partial n} \, v \, \mathrm{d}s.$$ Now, if $v \in H^{1/2}(\partial\Omega)$ is arbitrary (and $\Omega$ possesses some regularity), you find $E v \in H^1(\Omega)$ with $(Ev)|_{\partial\Omega} = v$.
Then, you define $$\langle \frac{\partial u}{\partial n}, v \rangle := \int_\Omega \Delta u \, Ev + \nabla u \cdot \nabla Ev \, \mathrm{d}x.$$<|endoftext|> TITLE: Proof that the Lorentz Group SO(3,1) is a manifold QUESTION [6 upvotes]: I am trying to prove that the Lorentz group $SO(3,1)$ is a Lie group. To prove that it is a manifold, I was thinking of proving that it is a closed subgroup of $GL(4,\mathbb{R})$. Firstly, I have not convinced myself that it is in fact closed. If it is, I am not sure how to start that proof. I have also considered proving it is a manifold by means of the constant-rank level set theorem in the way that O(n) is proven to be a regular submanifold of $GL(n,\mathbb{R})$ by devising a constant rank map $f: GL(n,\mathbb{R}) \rightarrow GL(n,\mathbb{R})$ of which $SO(3,1)$ is the preimage of a point. However, I have yet to find such a map. Does anyone have any hints to get me started on this proof, or a link to an alternate proof? edit: I am now thinking that the best route to take is to prove that $SO(3,1)$ is the zero set of polynomial equations on $GL(n,\mathbb{R})$. If I can show that, then I can prove that $SO(3,1)$ is closed. However, the definition of $SO(3,1)$ is more complicated than that of $O(n)$ or $SL(n,\mathbb{R})$, so I am still stuck on this proof. REPLY [6 votes]: The Lorentz group $\mathrm{O}(3,1)$ is the zero level set of $$ f : \mathrm{GL}(n)\to\mathrm{GL}(n), \Lambda\mapsto \Lambda \eta \Lambda^T - \eta $$ by definition, where $\eta$ is the usual Minkowski metric of signature $(3,1)$.<|endoftext|> TITLE: Quaternions multiplication order (to rotate & unrotate) QUESTION [7 upvotes]: Say, I have a parent object rotation quaternion Qp & a child object rotation (local - relative to parent) quaternion Qch. 1) In which order should I multiply them to get the child object's world (total) rotation QW? QW = Qp * Qch or QW = Qch * Qp? And what is the geometric interpretation of this order - same or reverse order of rotation execution? 2) And one more question: If I already have the resulting total rotation QW of the child object (calculated in the proper way, see #1), I also know Qp & want to calculate Qch. In which order should I multiply Qp.Inverse & QW? 3) And last - if we have situation #2, but opposite: QW & Qch are known, & we need to get Qp. What order of QW & Qch.Inverse multiplication should we use? Thanks a lot! REPLY [5 votes]: 1): QW == Qp * Qch It means we apply Qch 1st, & then Qp. So the order of applying rotations is always from right to left. 2): Qch == Qp.Inversed * QW So we apply QW 1st, then unrotate it back by Qp. 3): Qp == QW * Qch.Inversed So we apply the inverse Qch rotation, & then the total QW. It yields Qp. Note: #2 & #3 work like this provided that QW is obtained by the #1 formula. Ps: This answer of mine is just a concise translation of @DavidK's answer into 3D-engine implementation terms. See it for a more fundamental understanding of what is happening under the hood.<|endoftext|> TITLE: Is it true that the integral of $\delta(x)/x$ between symmetrical limits is zero? QUESTION [5 upvotes]: My professor is claiming that the following is true: $$\int_{-\infty}^{\infty}\frac{\delta(x)}{x}dx=0,$$ where $\delta(x)$ is the Dirac delta "function", as he calls it. I think the integral diverges, judging from the definition of the delta "function", but his rationale is that the solution must be zero because the integrand is "an odd function of $x$".
I think that if he is correct then the definition of the delta distribution is basically meaningless. I know it is true that $\delta(x)=\delta(-x)$, but I think that his explanation that the integrand is an odd function fails because the delta distribution isn't a function at all, and (presumably) distributions don't have this usual integration property. Would somebody please confirm or deny, and if possible explain why a distribution doesn't have to integrate to zero in the same way as a function when it is odd-valued? REPLY [3 votes]: As written in the other answers, "$\delta_0(x)/x$" does not mean a priori anything (since there is no definition for the product of two singular distributions in general). In general, one can only define the multiplication of $\delta_0$ with functions $g$ continuous at $0$ as the distribution acting on smooth functions $\varphi$ as $$ \langle g\,\delta_0,\varphi\rangle = g(0)\,\varphi(0). $$ However, one can look in the particular case of "$\delta_0(x)/x$" what is the distribution $T$ that has the closest properties to what we would expect. A way is to take $T$ as a solution of the equation $$ x\,T(x) = \delta_0(x) $$ However there are several solutions to this equation (since if $T$ is a solution, then $T + c\, \delta_0$ is also a solution). An additional constraint can be to require the solution to be homogeneous. In this case, we are left with only one solution (see e.g. here for more details): $$ T = -\delta_0', $$ which is a well defined distribution defined by $\langle T,\varphi\rangle = \varphi'(0)$. Remarking that this defines a linear functional not only on smooth compactly supported functions, but also on $C^1$ functions, we can take $\varphi = 1$, which yields $$\boxed{ \langle T,1\rangle = 0} $$ And this is the rigorous version of the result claimed by your professor. Indeed, with less rigorous notations, we could define the (possibly misleading) notation $"\frac{\delta_0(x)}{x}" := T = - \delta_0'$ and then the (possibly misleading) notation $"∫_{-\infty}^\infty T(x)φ(x)\,\mathrm d x" := \langle T,φ\rangle$. With these notations, $\langle T,1\rangle = 0$ becomes $$ "∫_{-\infty}^\infty \frac{\delta_0(x)}{x}\,\mathrm d x" = 0. $$<|endoftext|> TITLE: How could I calculate this limit without using L'Hopital's Rule $\lim_{x\rightarrow0} \frac{e^x-1}{\sin(2x)}$? QUESTION [6 upvotes]: I want to calculate the limit above without using L'Hopital's rule: $$\lim_{x\rightarrow0} \frac{e^x-1}{\sin(2x)}$$ REPLY [5 votes]: Equivalents: $\;\mathrm e^x-1\sim_0 x$, $\;\sin 2x\sim_0 2x$, so $\;\dfrac{\mathrm e^x-1}{\sin 2x}\sim_0\dfrac{x}{2x}=\dfrac12.$<|endoftext|> TITLE: Why does one have to check if axioms are true? QUESTION [28 upvotes]: In Tao's book Analysis 1, he writes: Thus, from the point of view of logic, we can define equality on a [remark by myself: I think he forgot the word "type of object" here] however we please, so long as it obeys the reflexive, symmetry, and transitive axioms, and it is consistent with all other operations on the class of objects under discussion in the sense that the substitution axiom was true for all of those operations. Does he mean that, if one wants to define equality on a specific type of object (like functions, ordered pairs, for example), one has to check that these axioms of equality (he refers to these four axioms of equality as "symmetry", "reflexivity", "transitivity", and "substitution") hold in the sense that one has to prove them?
It seems so, because of these two passages: [In section 3.3 Functions] We observe that functions obey the axiom of substitution: if $x=x'$, then $f(x) = f(x')$ (why?). (My answer would be "because that's an axiom", but Tao apparently wouldn't accept that.) And after defining equality of sets ($A=B:\iff \forall x(x\in A\iff x\in B)$), Tao writes (on page 39): One can easily verify that this notion of equality is reflexive, symmetric, and transitive (Exercise 3.1.1). Observe that if $x\in A$ and $A = B$, then $x\in B$, by Definition 3.1.4. Thus the "is an element of" relation $\in $ obeys the axiom of substitution. So he gives the exercise to prove the axioms of equality for sets. Why does one have to prove axioms? Or, put differently: if one can prove these things, why does he state them as axioms? REPLY [2 votes]: Austin Mohr's answer is excellent; however, as it led to an extensive argument in the comments I wish to put forth a slight simplification and restatement of it which may add something. (I posted this as a comment originally, but wanted to expand on it.) So he gives the exercise to prove the axioms of equality for sets. Why does one have to prove axioms? Or, put differently: if one can prove these things, why does he state them as axioms? The basic point to realize here is that English words have meaning outside of math. You're free to invent any relation you wish, with whatever properties you wish. You can label it with whatever made-up term you want without any necessity to prove anything about it. If you do this, you are just defining what you are talking about, and you can then proceed to say something using your stated definitions. However, if you use an English word to name or describe your relation, then you should take into account the English meaning of the word. That is, in choosing an English word as a name, you should choose one which aligns with the properties your relation has. And the corollary: If you want to use a particular English word as a name for your relation, you should be sure your relation has the properties that would be implied by that name. This applies whether we are naming relations, operators, or anything else. If I define a unary operator and call it the "inverse," but my operator has the property that repeated application of that operator will never produce the original input, I have misnamed it. If I define a relation and call it the "equality relation," but it is not transitive nor symmetric, I have again chosen the wrong name. (Can you misname your relations and operators? Of course you can. The only thing that will break down is your communication with other people, which is of course the only reason to have names for things in the first place.) A relation which is not reflexive, symmetric, and transitive will violate the English-language meaning of the word "equal," so if your relation may not have those properties then use another name for it instead. In this case, the "equality relation" between sets has been defined in your textbook, and it is now your task to show the appropriateness of the label "equality relation" for the defined relation, by showing that it has the properties one would expect out of the English-language meaning of the word "equality."<|endoftext|> TITLE: If the map induces identity on all homotopic groups then it is homotopic to identity QUESTION [6 upvotes]: Recall that, in general, maps of CW complexes $X\to Y$ which induce the same maps of all homotopy groups $\pi_*(X,x)\to \pi_*(Y,y)$ need not be homotopic.
Assume, however, that we have a continuous self-map $f: (X,x) \rightarrow (X,x)$ of a connected CW complex $X$ which induces identity morphisms on all homotopy groups. Is it true that $f$ is actually homotopic to the identity map on $X$? Note that in this setting Whitehead's theorem says $f$ is a homotopy equivalence. REPLY [4 votes]: Shih proved that, if $X$ is simply connected with two nontrivial homotopy groups, then the group of self-homotopy equivalences of $X$ inducing the identity on the homotopy groups is naturally identified with the cohomology group $H^m(K(\pi_n(X),n),\pi_m(X))$, and so is frequently nontrivial.<|endoftext|> TITLE: Showing a number $n$ is prime if $(n-2)! \equiv 1 \pmod n$ QUESTION [7 upvotes]: I need to show that if $(n-2)! \equiv 1 \pmod n$ then $n$ is prime. I think that if $n$ is composite, then we'll have every factor of $n$ in $(n-2)!$, and it would yield that $(n-2)! \equiv 0 \pmod n$. However, I didn't use the fact that it is specifically congruent to $1 \bmod n$, so I think I'm getting something fundamental wrong. Is my solution correct? Why do we demand congruence to $1 \bmod n$? REPLY [4 votes]: Your solution is correct aside from the small detail of $n = 4$, which has already been pointed out in the comments. 4 is somewhat special, being the first composite number, and the square of the "oddest" prime. So I don't think this little mistake is at all "fundamental." Still, I think it's better to use the fact of $1 \bmod n$, since it guarantees that $\gcd((n - 2)!,\, n) = 1$. The least prime factor of $n$, if $n$ is not itself prime, is less than or equal to $\sqrt n$, and $\sqrt n < (n - 2)$ for all $n > 4$. Therefore, if $n$ is composite, then $(n - 2)!$ is divisible by the least prime factor of $n$, making $(n - 2)! \equiv 1 \bmod n$ impossible. Yet another way you can go about it is this: If $n$ is even and composite, then $n - 2$ is also even and therefore $(n - 2)! \equiv 2k \bmod n$, where $k$ is some nonnegative integer we don't care too much about. But 1 is not even, proving $n$ is not an even composite number. If $n$ is odd and composite, it is divisible by some odd prime $p \leq \sqrt n$. And since $p \leq \sqrt n < (n - 2)$, it follows that $p \mid (n - 2)!$ and $(n - 2)! \equiv pk \bmod n$. And $pk \neq 1$, proving $n$ is not an odd composite number. And yet another way is to use Wilson's theorem, but then I would be merely restating one or two of the other answers.<|endoftext|> TITLE: Continuing logarithm $\log(\log(\dots\log(z)))$ QUESTION [5 upvotes]: I don't know the best way to describe it in technical terms, but what is the result of a continuing logarithm of $z$, for example: $$\log(\log(\dots\log(z)))$$ where we take the logarithm of the logarithm and so on, infinitely many times? How would this type of thing behave? Does it converge, go off to infinity, or become infinitesimal? Does the resulting behavior depend on whether $z$ is imaginary or real? Positive or negative? REPLY [2 votes]: If $\Im(z)$ (the imaginary part of $z$) is greater than or equal to zero, then this converges to roughly $z_\infty = 0.318152 + 1.33724 i$. If $\Im(z)<0$ it converges to $z_\infty = 0.318152 - 1.33724 i$. This of course relies on the usual branch cut for $\log z$. The exceptional cases are any cases where a finite number of iterations lands on $1$ (or starting with $z=0$). These include $1, e, e^e$, and so forth. However, there are isolated points for which the iterated log neither goes to infinity nor converges.
For example, for any $z$ such that $$e^z = \log z \neq z$$ the iterated log oscillates between $z$ and $\log z$. I think there are such points; for example, there is an unstable 2-cycle at roughly $$ z= 0.883998 + 6.922346 i $$<|endoftext|> TITLE: Examples of interesting and creative problems about differentiation (in one variable) QUESTION [5 upvotes]: Problems about differentiation of functions in one variable that we find in the majority of textbooks are usually boring, that is, they are only a simple application of well-known rules. So, what I want in this post is examples of derivatives (functions of one variable) that are interesting to take. I'd like the problems to be original, but if they aren't, feel free to share them all the same. REPLY [2 votes]: Here are two examples that involve conic sections. Suppose $a > 0$, and consider the line $L$ tangent to $y = \frac{1}{x}$ at $x = a$. Find the area of the triangular region between $L$, the $x$-axis, and the $y$-axis. This problem's fun because it turns out the area doesn't depend on $a$ at all! I remember this is from Stewart's Calculus. Given the parabola $f(x) = a(x - h)^2 + k$ where $a > 0$, consider its tangent line at $x = x_0$. Show that this tangent line crosses the parabola's axis of symmetry $f(x_0) - k$ units below the vertex. I'm having a hard time wording this one. But the cool fact is that if we focus on the $y$-coordinates only, $(x_0, f(x_0))$ and the intersection of the tangent line with the axis of symmetry will be equally spaced on either side of the vertex. I don't remember where I saw this one. I guess the "theme" for both problems is to have to differentiate and work comfortably with parameters floating around. In the first, you can try a couple of values of $a$ to get a sense of what's happening, but you eventually have to work with $a$ as a fixed but unknown value. It's similar with the second, but more involved. Possibly so involved that you'd want to give a specific parabola if you don't think you have really strong students.<|endoftext|> TITLE: $\sum\limits_{k\in\mathbb{Z}}$ versus $\sum\limits_{k=-\infty}^{\infty}$ QUESTION [5 upvotes]: If $a_k\in\mathbb{C}$ ($k\in\mathbb{Z}$), here are two equivalent definitions for $$ \sum_{k=-\infty}^{\infty}a_k $$ For reference, the two definitions are: $$\sum_{k=-\infty}^{\infty}a_k=L\\ \Updownarrow\\\forall\epsilon>0,\ \exists N:\ m,n> N\implies\left|\sum_{k=-m}^na_k-L\right|<\epsilon$$ $$ \sum_{k=-\infty}^{\infty}a_k=L\\ \Updownarrow\\ \sum_{k=0}^{\infty}a_k\text{ and }\sum_{k=1}^{\infty}a_{-k}\text{ both exist and }\sum_{k=0}^{\infty}a_k+\sum_{k=1}^{\infty}a_{-k}=L $$ Questions: Is the notation $\displaystyle\sum_{k\in\mathbb{Z}}a_k$ usually defined to mean $\displaystyle\sum_{k=-\infty}^{\infty}a_k$ (according to one of the two equivalent definitions given)? Is the notation $\displaystyle\sum_{k\in\mathbb{Z}}a_k$ a particular case of a definition of a summation of complex numbers over arbitrary index sets (see here)? If the answer to question 2. is yes, then are the definitions I just gave consistent with the general definition? REPLY [6 votes]: I am not sure if there is a unanimous consensus on the meaning of the notation $\sum_{k\in\Bbb{Z}} a_k$, but it is often defined as one of the following equivalent notions: $\sum_{k\in\Bbb{Z}} a_k$ is the limit of the net $ \{ \sum_{k \in F} a_k : F \subset \Bbb{Z} \text{ and $F$ is finite} \}$.
$\sum_{k\in\Bbb{Z}} a_k = \sum_{k=-\infty}^{\infty} a_k$ when the series is absolutely convergent, i.e., $\sum_{k=-\infty}^{\infty} |a_k| < \infty$. As you can see from the second definition, the sum $\sum_{k\in\Bbb{Z}} a_k$ is a strictly stronger notion than the doubly infinite sum $\sum_{k=-\infty}^{\infty} a_k$. Example. Let us consider $a_k = (-1)^k /k$ for $k \neq 0$ and $a_0 = 0$, then $\sum_{k=-\infty}^{\infty} a_k = 0$ is easy to check. On the other hand, $\sum_{k\in\Bbb{Z}} a_k$ is simply undefined.<|endoftext|> TITLE: When does $\left\lfloor\sqrt{2015(n-1)}\right\rfloor = \left\lfloor\sqrt{2015n}\right\rfloor$ hold? QUESTION [14 upvotes]: For how many integers $1 \leq n \leq 2015$ does the following equation hold? $$\left\lfloor\sqrt{2015(n-1)}\right\rfloor = \left\lfloor\sqrt{2015n}\right\rfloor$$ I have been struggling with this simple-looking problem for a while. What I have done so far: If $n \leq 504$, we can make use of $\sqrt{2015(n-1)}<\sqrt{2015n}-1$ to write $$\left\lfloor\sqrt{2015(n-1)}\right\rfloor \leq \sqrt{2015(n-1)} < \sqrt{2015n}-1 < \left\lfloor\sqrt{2015n}\right\rfloor,$$ and hence the equation does not hold. For $n>504$, I get stuck. I wrote a MATLAB code to find such $n$ that satisfies the equation. The first few values of $n$ are $544, 565, 581, 595, \dots $ but I can't find a pattern. Can you please give me a hint? PS. The problem comes from the 27th Chilean (2015-16) mathematical olympiad. REPLY [8 votes]: Edit: Directly solving OP's question (using the same idea as below): $$\sqrt{2015n}-\sqrt{2015(n-1)}\le 1$$ is true for $n\ge 504$ and false for $n\le 503$. So when $n=1,2,\ldots,503$ we have that $\lfloor\sqrt{2015n}\rfloor$ takes only distinct values (as the difference between consecutive values of $\sqrt{2015n}$ is greater than $1$), for a total of $503$ values. For $n=504,505,\ldots,2015$, we have that $\lfloor\sqrt{2015n}\rfloor$ does not skip any values (as the difference between consecutive values of $\sqrt{2015n}$ is less than $1$), so $\lfloor\sqrt{2015n}\rfloor$ takes all the values between $\lfloor \sqrt{2015\cdot 504}\rfloor=1007$ and $\lfloor\sqrt{2015\cdot 2015}\rfloor=2015$, a total of $2015-1007+1=1009$ values. So when $1\le n\le 2015$, we have that $\lfloor\sqrt{2015n}\rfloor$ takes $1512$ distinct values; the non-distinct values duplicating the previous values, since they are increasing with $n$. So the number of solutions of the equation is $2015-1512=503$. The original problem in the link asks to determine the number of different values of $\big\lfloor\frac{n^2}{2015}\big\rfloor$, for $1\le n\le 2015$. I think that your equation is equivalent to the number of duplicates (I will verify when I get a chance). The solution is to take the difference of two consecutive terms and compare it with $1$: $$\frac{n^2}{2015}-\frac{(n-1)^2}{2015}=\frac{2n-1}{2015}\le1\iff n\le1008$$ As long as $n\le 1008$, no value will be skipped, so all values between $0$ and $\big\lfloor\frac{1008^2}{2015}\big\rfloor=504$ will be taken ($505$ values). When $n>1008$ each term will generate a new value, so we have $2015-1008=1007$ new values. The total is $505+1007=1512$ distinct values. The number of duplicates is $2015-1512=503$. So for $503$ values your equation holds.<|endoftext|> TITLE: How can I visualize a four-dimensional point inside a Schlegel diagram of a tesseract?
QUESTION [5 upvotes]: I would like to draw a Schlegel diagram of a tesseract to visualize, via a Cartesian coordinate system inside the tesseract, the symmetry of some four-dimensional points whose integer coordinates are no larger than half the length of the side of the tesseract. (The questions are at the end of the explanation.) My idea is as follows: For instance in two dimensions, it is possible to visualize two-dimensional points $(x_1,x_2)$ inside a square whose side length is for instance $s$, where $| x_1 |,|x_2| \le \frac{s}{2}$ if the center of the Cartesian system is located in the center of the square. In the same fashion, it is possible to do the same inside a cube for three-dimensional points $(x_1,x_2,x_3)$, locating the center of the Cartesian system in the exact center of the cube, keeping the same restriction for the values of the coefficients of the points (smaller than half the length of the side of any face of the cube). I want to apply that idea to a set of four-dimensional points $(x_1,x_2,x_3,x_4)$ in the Schlegel diagram of a tesseract. But in this case I am not sure how to visualize a given point $(x_1,x_2,x_3,x_4)$. My guess is something like this: In a static image of the tesseract my intuition is that I should be able to visualize three coefficients of a given point $(x_1,x_2,x_3,x_4)$, for instance let us say for this example that they are $(x_1,x_2,x_3)$ (if they belong to the current position of visualization of the tesseract), but I am not sure where the fourth one should be located (always assuming that it should be visible in the current position of visualization of the tesseract). I would like to ask the following questions: Is it possible to visualize four-dimensional points in a Schlegel diagram of a tesseract? Is there a known technique to do it correctly? In the example above I have assumed that in a static view of the Schlegel diagram of the tesseract I can visualize three dimensions of a given point, for instance $x_1,x_2$ and $x_3$. Is that intuition wrong? Where should I plot/locate/visualize the remaining fourth dimension, $x_4$? Is it possible in a static image, or must the diagram be shown in motion so all the dimensions of each point are visualized depending on the "rotation" through the dimensions of the tesseract? Is this kind of approach in use for visualization of four-dimensional problems in some field of Mathematics? Are there online tools (initially I did not find one) to make this kind of visualization? Any hints are very welcome, thank you! UPDATE: I have found a very related question here. REPLY [3 votes]: Well, gathering the information regarding the basic theory here and the nice explanation here regarding the projection, I was able to build my own version of the tesseract. Yes, it is possible to show a point inside the tesseract, and I was wrong in the assumptions I made regarding the methodology applied to visualize the 4D point. Basically, if we want to show a point inside the tesseract, we need to project the tesseract first, and then project the desired point as well, following the same projection rules. The definition of the tesseract (credits to draks...) The tesseract is a four dimensional cube. It has 16 edge points $v=(a,b,c,d)$, with $a,b,c,d$ either equal to $+1$ or $-1$. Two points are connected if their distance is $2$. Given a projection $P(x,y,z,w)=(x,y,z)$ from four dimensional space to three dimensional space, we can visualize the cube as an object in familiar space.
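Before going on to rotations and projections, here is a minimal Python sketch (standard library only, written for this answer and independent of the full frame-generation code further below) that builds the $16$ vertices and recovers the $32$ edges from the distance-$2$ rule of the quoted definition:

from itertools import product
from math import dist  # available since Python 3.8

# the 16 vertices (a, b, c, d) with each coordinate +1 or -1
vertices = list(product((-1, 1), repeat=4))

# two vertices are joined by an edge exactly when their distance is 2,
# i.e. when they differ in a single coordinate
edges = [(i, j)
         for i in range(len(vertices))
         for j in range(i + 1, len(vertices))
         if dist(vertices[i], vertices[j]) == 2]

print(len(vertices), len(edges))  # 16 32

The snippet below uses the same rule at edge length $1000$, i.e. adjacent vertices sit at distance $1000$ instead of $2$, per the distance formula above.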
The effect of a linear transformation like a rotation $$ R(t)=\pmatrix{1&0&0&0\\0&1&0&0&\\0&0&\cos(t)&\sin(t)\\0&0&-\sin(t)&\cos(t)} $$ in $4D$ space can be visualized in $3D$ by viewing the points $v(t) = P R(t) v$ in $\mathbb R^3$. The definition of the projection (credits to Andrew D. Hwang) $$ P(x, y, z, w) = \frac{h}{h - w}(x, y, z). $$ And finally, the definition of the distance between two four-dimensional points is calculated as follows (this is used to show the edges of the tesseract properly, making lines between the correct projected vertices): $$d=\sqrt{(x_0-x'_0)^2+(x_1-x'_1)^2+(x_2-x'_2)^2+(x_3-x'_3)^2}$$ I have prepared a Python code snippet that creates the frames (jpg) of an animation of a tesseract including an internal point. In this case, the length of the edge is $1000$ so the distance between the vertices is not $2$, but $1000$. For the projection I have used a light source located at three times the length of the edge, this is $h=3000$. Finally, I have applied a rotation as defined above. The star is marking the location of the point $(\frac{3}{4} \cdot \frac{1000}{2}, \frac{3}{4} \cdot \frac{1000}{2}, \frac{3}{4} \cdot \frac{1000}{2}, \frac{3}{4} \cdot \frac{1000}{2})$ while we rotate the tesseract. Be aware that the position of the camera in the animation is lateral. The typical location of the camera is from above, which is the usual "square inside a square" view. But for visualization purposes (we want to see clearly the movement of the projection of the point due to the rotation of the tesseract) the camera in this case was located in a lateral position. Please use and modify it freely (for instance instead of one point it is possible to show a set of points and verify if there is symmetry, etc.): from math import pi, sin , cos, sqrt import matplotlib.pyplot as plt import matplotlib as mpl from mpl_toolkits.mplot3d import Axes3D edge_length=1000 edge_half_length= int(edge_length/2) lotuples=[] list_of_loxt_lists=[] list_of_loyt_lists=[] list_of_lozt_lists=[] rotation_accuracy=100 filled_once=False for ratio in range(0,rotation_accuracy): angle= ((2*pi)*ratio)/rotation_accuracy loxt=[] loyt=[] lozt=[] #t=edge_half_length (positive) a=-edge_half_length b=edge_half_length ret0=-edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=-edge_half_length b=-edge_half_length ret0=-edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=edge_half_length ret0=-edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: 
lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=-edge_half_length ret0=-edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=edge_half_length ret0=edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=-edge_half_length ret0=edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=-edge_half_length b=edge_half_length ret0=edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=-edge_half_length b=-edge_half_length ret0=edge_half_length ret1=edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) #t=-edge_half_length (negative) a=-edge_half_length b=edge_half_length ret0=-edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=-edge_half_length b=-edge_half_length ret0=-edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=edge_half_length ret0=-edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = 
((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=-edge_half_length ret0=-edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=edge_half_length ret0=edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=edge_half_length b=-edge_half_length ret0=edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=-edge_half_length b=edge_half_length ret0=edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) a=-edge_half_length b=-edge_half_length ret0=edge_half_length ret1=-edge_half_length finala=a finalb=b finalret0=(ret0*cos(angle))+(ret1*sin(angle)) finalret1=(ret0*(-sin(angle)))+(ret1*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-(finalret1))) loxt.append(light_projection_factor*finala) loyt.append(light_projection_factor*finalb) lozt.append(light_projection_factor*finalret0) if filled_once==False: lotuples.append([a,b,ret0,ret1]) filled_once=True list_of_loxt_lists.append(loxt) list_of_loyt_lists.append(loyt) list_of_lozt_lists.append(lozt) list_of_loxi_lists=[] list_of_loyi_lists=[] list_of_lozi_lists=[] list_of_loxi_lists_axis=[] list_of_loyi_lists_axis=[] list_of_lozi_lists_axis=[] for ratio in range(0,rotation_accuracy): angle= ((2*pi)*ratio)/rotation_accuracy loxi=[] loyi=[] lozi=[] finala=int((3/4)*edge_half_length) finalb=int((3/4)*edge_half_length) finalret0=(int((3/4)*edge_half_length)*cos(angle))+(int((3/4)*edge_half_length)*sin(angle)) finalret1=(int((3/4)*edge_half_length)*(-sin(angle)))+(int((3/4)*edge_half_length)*cos(angle)) light_projection_factor = ((edge_length*3)/((edge_length*3)-finalret1)) loxi.append(light_projection_factor*finala) loyi.append(light_projection_factor*finalb) lozi.append(light_projection_factor*finalret0) list_of_loxi_lists.append(loxi) list_of_loyi_lists.append(loyi) list_of_lozi_lists.append(lozi) # Show projection of refence axes BEGIN loxi=[] loyi=[] lozi=[] 
finala_axis=finala finalb_axis=0 finalret0_axis=0 finalret1_axis=0 light_projection_factor = ((edge_length*3)/((edge_length*3)-finalret1_axis)) loxi.append(light_projection_factor*finala_axis) loyi.append(light_projection_factor*finalb_axis) lozi.append(light_projection_factor*finalret0_axis) finala_axis=0 finalb_axis=finalb finalret0_axis=0 finalret1_axis=0 light_projection_factor = ((edge_length*3)/((edge_length*3)-finalret1_axis)) loxi.append(light_projection_factor*finala_axis) loyi.append(light_projection_factor*finalb_axis) lozi.append(light_projection_factor*finalret0_axis) finala_axis=0 finalb_axis=0 finalret0_axis=finalret0 finalret1_axis=0 light_projection_factor = ((edge_length*3)/((edge_length*3)-finalret1_axis)) loxi.append(light_projection_factor*finala_axis) loyi.append(light_projection_factor*finalb_axis) lozi.append(light_projection_factor*finalret0_axis) finala_axis=0 finalb_axis=0 finalret0_axis=0 finalret1_axis=finalret1 light_projection_factor = ((edge_length*3)/((edge_length*3)-finalret1_axis)) loxi.append(light_projection_factor*finala_axis) loyi.append(light_projection_factor*finalb_axis) lozi.append(light_projection_factor*finalret0_axis) list_of_loxi_lists_axis.append(loxi) list_of_loyi_lists_axis.append(loyi) list_of_lozi_lists_axis.append(lozi) # Show projection of refence axes END for ratio in range(0,rotation_accuracy): fig = plt.figure() ax = fig.gca(projection='3d') ax.view_init(elev=17., azim=-152) for i in range(0,len(lotuples)): for j in range(i+1,len(lotuples)): distance = int(sqrt(((lotuples[i][0]-lotuples[j][0])**2)+((lotuples[i][1]-lotuples[j][1])**2)+((lotuples[i][2]-lotuples[j][2])**2)+((lotuples[i][3]-lotuples[j][3])**2))) if distance<=edge_length: ax.plot([list_of_loxt_lists[ratio][i],list_of_loxt_lists[ratio][j]],[list_of_loyt_lists[ratio][i],list_of_loyt_lists[ratio][j]],[list_of_lozt_lists[ratio][i],list_of_lozt_lists[ratio][j]],"r") ax.plot([-edge_length],[edge_length],[-edge_length],"w") ax.plot([-edge_length],[-edge_length],[-edge_length],"w") ax.plot([edge_length],[edge_length],[-edge_length],"w") ax.plot([edge_length],[-edge_length],[-edge_length],"w") ax.plot([edge_length],[edge_length],[edge_length],"w") ax.plot([edge_length],[-edge_length],[edge_length],"w") ax.plot([-edge_length],[edge_length],[edge_length],"w") ax.plot([-edge_length],[-edge_length],[edge_length],"w") ax.plot([list_of_loxi_lists[ratio][0]], [list_of_loyi_lists[ratio][0]], [list_of_lozi_lists[ratio][0]], "r*") #projection of refence axes around the point ax.plot([0,list_of_loxi_lists_axis[ratio][0]],[0,list_of_loyi_lists_axis[ratio][0]],[0,list_of_lozi_lists[ratio][0]],"b") ax.plot([0,list_of_loxi_lists_axis[ratio][1]],[0,list_of_loyi_lists_axis[ratio][1]],[0,list_of_lozi_lists_axis[ratio][1]],"b") ax.plot([0,list_of_loxi_lists_axis[ratio][2]],[0,list_of_loyi_lists_axis[ratio][2]],[0,list_of_lozi_lists_axis[ratio][2]],"b") ax.plot([0,list_of_loxi_lists_axis[ratio][3]],[0,list_of_loyi_lists_axis[ratio][3]],[0,list_of_lozi_lists_axis[ratio][3]],"b") ax.dist=8 mpl.pyplot.savefig("tesseract_movie_"+str(ratio)+".png") print("End ratio "+str(ratio)) (The animated gif was generated joining the jpg files with VirtualDub). UPDATE 2017/02/06: I have included both in the image and the Python code in blue color the reference axes of the point (the projection of the $(x_0,0,0,0),(0,x_1,0,0),(0,0,x_2,0),(0,0,0,x_3)$ points and the reference axes generated with them).<|endoftext|> TITLE: In which order the woman should bring the cats back in order to minimize the time? 
QUESTION [5 upvotes]: A woman watches her cats leave one by one with different speeds in different directions. She took a motorcycle with one extra seat and follows the cats and picks up one cat at a time and brings them back home. Each cat moves with constant individual speed $V_i$ and left home at time $T_i$. In which order the woman should bring the cats back in order to minimize the time? I am trying to solve this problem but do not know how to begin. REPLY [3 votes]: I would suggest considering simple example with two cats first. Suppose the women is deciding which cat to go after first. Currently, both cats have already left and are $D_{1}$ and $D_{2}$ far away. Suppose the women decided to go after cat $1$ first and then after $2$. What time she needs to be done? (Assume the women's speed is $V_{0}$ and is larger than any of the cats.) To catch up with the first cat she needs $t_{1}$ that solves $t_{1}V_{0}=D_{1}+t_{1}V_{1}$, or, equivalently, $t_{1}=\frac{D_{1}}{V_{0}-V_{1}}$. By $2t_{1}$ she is back home with the first cat at which point the second cat is $D_{2}+2t_{1}V_{2}$ far away. To catch up with the second cat she needs $t_{2}$ that solves $t_{2}V_{0}=D_{2}+2t_{1}V_{2}+t_{2}V_{2}$, or, equivalently, $t_{2}=\frac{D_{2}+2t_{1}V_{2}}{V_{0}-V_{2}}$. Hence the entire operation takes $2(t_{1}+t_{2})$. Substituting in the model parameters, the length of the operation is $$2\frac{D_{2}(V_0-V_1)+D_{1}(V_0+V_2)}{(V_0-V_1)(V_0-V_2)}=2\frac{V_{0}(D_{1}+D_{2})+D_{1}V_{2}-D_{2}V_{1}}{(V_0-V_1)(V_0-V_2)}$$ Similarly, if the women chooses the other order, the operation takes $$2\frac{D_{1}(V_0-V_2)+D_{2}(V_0+V_1)}{(V_0-V_1)(V_0-V_2)}=2\frac{V_{0}(D_{1}+D_{2})+D_{2}V_{1}-D_{1}V_{2}}{(V_0-V_1)(V_0-V_2)}$$ Therefore, the $(1,2)$ order is optimal if $\frac{D_{1}}{V_{1}}>\frac{D_{2}}{V_{2}}$. My conjecture is that the general solution goes after the cats in the decreasing order of $\frac{D_{i}}{V_{i}}$, which, note, equals $\frac{tV_{i}}{V_{i}}=t$ at time $t$ for all the cats if all the cats left at the same time. In other words, what I am conjecturing is that if all the cats left at the same time, the order does not matter. In fact (further supportive evidence, not a proof), Mathematica thinks so as well for $5$ casts: t1 = d1/(v0 - v1); t2 = (d2 + 2*t1*v2)/(v0 - v2); t3 = (d3 + 2*(t1 + t2)v3)/(v0 - v3); t4 = (d4 + 2(t1 + t2 + t3)v4)/(v0 - v4); t5 = (d5 + 2(t1 + t2 + t3 + t4)v5)/(v0 - v5); Simplify[t1+t2+t3+t4+t5/.d1->tv1/.d2->tv2/.d3->tv3/.d4->tv4/.d5->tv5]<|endoftext|> TITLE: What kind of functions cannot be described by the Taylor series? Why is this? QUESTION [25 upvotes]: It's true that I'm not familiar with too many exotic functions, but I don't understand why there exist functions that cannot be described by a Taylor series? What makes it okay to describe any particular functions with such a series? Is there any difference for different number sets? In the case of complex numbers maybe? Could somebody provide an example? REPLY [4 votes]: In addition to all the comments here, I would like to add the curious Weierstrass function, which is known for its quality of being nowhere differentiable despite the fact that it is continuous everywhere: $$ W(x) = \sum_{n=0}^\infty a^n\cos(b^n\pi x)$$ Consequently, it does not have a Taylor series. You can find a visualization of $W$ here.<|endoftext|> TITLE: Integrating composite functions by a general formula? 
QUESTION [7 upvotes]: There is a "Chain - Rule" in calculus that allows us to differentiate a composite function in the following way (Sorry for mixing up the two standard derivative notations.) : $$\frac {d}{dx} f(g(x)) =g'(x) . f'(g(x))$$ Does there exist a similar "(Reverse) Chain Rule" for Integration ? $$\int f(g(x)) dx = ?$$ Wolfram Alpha says : "no result found in terms of standard mathematical functions" and gives a very horrible "Series expansion of the integral at x=0". So I expect a negative response, but then how can expressions be integrated : $\sin(nx),\cos(nx) , etc.$ It is obvious that some other techniques like u-substitution, trigo substitution , by parts, etc. have to be applied, but does there exist "something" in this universe which provides a general formula for integrating composite functions ? REPLY [6 votes]: Yes, you are partly correct. Sometimes there are composite functions like you have mentioned, which can be integrated using traditional methods to obtain elementary functions as their antiderivative. But then again, you have certain composite functions like $e^{x^2}$, $\ln(x^2+x)$ which do not have proper antiderivatives expressible in terms of elementary functions. So, in order to determine which function, or rather composite function, can be integrated indefinitely to obtain a result in terms of elementary functions, we can use the Risch Algorithm which tests whether a certain function can be integrated indefinitely and also gives the procedure of obtaining that integral. Hope this helps you.<|endoftext|> TITLE: Systems of linear equations: Why does no one plug back in? QUESTION [49 upvotes]: When someone wants to solve a system of linear equations like $$\begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases}\,,$$ they might use this logic: $$\begin{align} \begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases} \iff &\begin{cases} -2x-y=0 \\ 3x+y=4 \end{cases} \\ \color{maroon}{\implies} &\begin{cases} -2x-y=0\\ x=4 \end{cases} \iff \begin{cases} -2(4)-y=0\\ x=4 \end{cases} \iff \begin{cases} y=-8\\ x=4 \end{cases} \,.\end{align}$$ Then they conclude that $(x, y) = (4, -8)$ is a solution to the system. This turns out to be correct, but the logic seems flawed to me. As I see it, all this proves is that $$ \forall{x,y\in\mathbb{R}}\quad \bigg( \begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases} \color{maroon}{\implies} \begin{cases} y=-8\\ x=4 \end{cases} \bigg)\,. $$ But this statement leaves the possibility open that there is no pair $(x, y)$ in $\mathbb{R}^2$ that satisfies the system of equations. $$ \text{What if}\; \begin{cases} 2x+y=0 \\ 3x+y=4 \end{cases} \;\text{has no solution?} $$ It seems to me that to really be sure we've solved the equation, we have to plug back in for $x$ and $y$. I'm not talking about checking our work for simple mistakes. This seems like a matter of logical necessity. But of course, most people don't bother to plug back in, and it never seems to backfire on them. So why does no one plug back in? P.S. It would be great if I could understand this for systems of two variables, but I would be deeply thrilled to understand it for systems of $n$ variables. I'm starting to use Gaussian elimination on big systems in my linear algebra class, where intuition is weaker and calculations are more complex, and still no one feels the need to plug back in. REPLY [7 votes]: Another point of view. Gaussian elimination of a system of linear equations is equivalent to multiplication by certain types of elementary matrices. 
To avoid getting bogged down I will give specific examples and get you to look up more general cases if you find it of interest. There are three basic types of row operation. Add a multiple of an equation to another equation, say, $3$ times equation 1 to equation 2. For example $$\eqalign{x+2y&=3\cr4x-5y&=6\cr}\quad\longrightarrow\quad \eqalign{x+2y&=3\cr7x+\phantom{1}y&=15\cr}$$ This corresponds to multiplying the (augmented) coefficient by a certain matrix, $$\pmatrix{1&2&3\cr4&-5&6\cr}\quad\longrightarrow\quad \pmatrix{1&0\cr3&1\cr}\pmatrix{1&2&3\cr4&-5&6\cr} =\pmatrix{1&2&3\cr7&1&15\cr}$$ Multiply an equation by a non-zero constant, say, the first row by $-2$. For example, $$\eqalign{x+2y&=3\cr4x-5y&=6\cr}\quad\longrightarrow\quad \eqalign{-2x-4y&=-6\cr4x-5y&=6\cr}$$ corresponds to $$\pmatrix{1&2&3\cr4&-5&6\cr}\quad\longrightarrow\quad \pmatrix{-2&0\cr0&1\cr}\pmatrix{1&2&3\cr4&-5&6\cr} =\pmatrix{-2&-4&-6\cr4&-5&6\cr}$$ Finally, interchange two equations: $$\eqalign{x+2y&=3\cr4x-5y&=6\cr}\quad\longrightarrow\quad \eqalign{4x-5y&=6\cr x+2y&=3\cr}$$ which corresponds to $$\pmatrix{1&2&3\cr4&-5&6\cr}\quad\longrightarrow\quad \pmatrix{0&1\cr1&0\cr}\pmatrix{1&2&3\cr4&-5&6\cr} =\pmatrix{4&-5&6\cr1&2&3\cr}\ .$$ And now the point: all three classes of multiplying matrices are invertible, and their inverses are matrices of the same types. This means you can automatically get from your final equations back to the original, and checking solutions is not necessary. However, while this works for linear equations, it does not in general work for other types of equations. Your practice of checking solutions is absolutely correct and very important - please don't stop doing it!!<|endoftext|> TITLE: The intuition behind gamma distribution QUESTION [10 upvotes]: What is the intuition behind gamma distribution? For instance, I understand how to "construct" Gaussian distribution. This is my intuition: Bernoulli distribution - which is simple concept A sequence of Bernoulli trials is a Binomial distribution. I understand how binomial coefficient is constructed Using Stirling approximation we can deduce Gaussian distribution Hence I understand that shape of a Gaussian distribution is determined by the binomial coefficients and so on. How can Gamma distribution be derived step by step using relatively simple concepts? REPLY [10 votes]: I understand the exponential distribution in a following way: $$e^{-\lambda \cdot x} = \lim_{N \to \infty} (1 - \frac{\lambda}{N})^{N \cdot x}$$ Where: $\lambda$ - number of events per one time unit (denoted by $1$) $N$ - tends to infinity. It splits whole time unit into $N$ small intervals each of a length of $\frac{1}{N}$, such that only one event can occur within this small interval $\frac{\lambda}{N}$ - is a probability of an event within one small time frame. Each time frame is a Bernoulli trial: event - success, no event - failure. $(1 - \frac{\lambda}{N})$ - probability of "failure" - no event $N \cdot x$ - is a number of consecutive "failures". 
Where $x$ is a part of 1 interval, let $N = 1000$ - some big number, then half of the interval ($x = 0.5$) would be $N \cdot x = 1000 * 0.5 = 500$ small time frames $(1 - \frac{\lambda}{N})^{N \cdot x}$ - probability of $N \cdot x$ consecutive failures, or probability that the event will not occur $x$ amount of time<|endoftext|> TITLE: Closed form for $\int_0^1...\int_0^1\frac{1}{\left(1+\sqrt{1+x_1^2+...+x_n^2}\right)^{n+1}}\;dx_1...dx_n$ QUESTION [12 upvotes]: I am wondering whether there is a closed form for the following integral for $n\in\mathbb{N}$: $$\gamma(n)=\int_0^1...\int_0^1\frac{1}{\left(1+\sqrt{1+x_1^2+...+x_n^2}\right)^{n+1}}\;dx_1...dx_n\tag{*}$$ Particular values which I am aware of include: $$\gamma(1)=\frac{4\sqrt{2}-5}{3}$$ $$\gamma(2)=\frac{5}{4}-\frac{9\sqrt{3}}{8}+\frac{\pi}{4}$$ Both of these values were obtained by evaluating different integrals to the above which solved the same problem (see below), and I am not sure how to attack $(*)$ directly; I can see no good way of solving this integral. I am wondering especially about the value of $\gamma(3)$; does it also have a simple closed form? Is there a closed form for all $n$? Background: I posted a question here about calculating the proportion $p(n)$ of an $n$-cube closer to the centre than to the outside, which seems to me like an interesting problem. The $n=2$ case is simple to solve in terms of the following integral: $$p(2)=2\int_{0}^{\sqrt{2}-1}\frac{1-x^2}{2}-x\;dx=\frac{4\sqrt{2}-5}{3}$$ I was able to write $p(3)$ as follows: $$p(3)=6\int_{0}^{\frac{\sqrt{3}-1}{2}}\int_{z}^{\sqrt{2-z^2}-1}\frac{1-x^2-z^2}{2}-x\;dx\;dz$$ and I managed to evaluate this to $\frac{5}{4}-\frac{9\sqrt{3}}{8}+\frac{\pi}{4}$, but I was not able to use my method to solve the problem for higher dimensions. In the comments, however, achille hui made a proposition that we have $p(n)=\gamma(n-1)$ for all $n$ and although I still do not perfectly understand his reasoning, the claim does check out numerically for the two values I know already. Furthermore, the new integral is in a nice simple symmetric form (unlike the methods I had been using which required a case-by-case analysis for every dimension, with ugly bounds on the integrals), which makes me hope for a solution method. However, I really cannot see how to go about it. Thus I ask, is there a method for computing the integral $(*)$? REPLY [7 votes]: Just for reference, here is the derivation of the formula $p(n) = \gamma(n-1)$ proposed by @achille hui. Consider the cube $\mathcal{C} = [-1, 1]^n$ and define $\mathcal{D} = \{x \in \mathcal{C} : |x| < \operatorname{dist}(x, \partial \mathcal{C}) \}$. Then for each point $x \in \partial\mathcal{C}$, we can find the unique point $q(x) \in \partial\mathcal{D}$ that lies on the line segment joining $0$ and $x$. $\hspace{7.5em}$ Using this, define $r(x)$ as the ratio $$ r(x) = \frac{\text{[length of the line segment from $0$ to $q(x)$]}}{\text{[length of line segment from $0$ to $x$]}} = \frac{|q(x)|}{|x|}. $$ Let me first give a geometric argument. Let $d\mathcal{S}$ be an infinitesimally small portion of the surface $\partial\mathcal{C}$ near $x$ and consider the cone $d\mathcal{V}$ with the base $d\mathcal{S}$ and the vertex $0$. $\hspace{7.5em}$ Then roughly speaking, the portion of $d\mathcal{V}$ that intersects $\mathcal{D}$ is similar to the cone $d\mathcal{V}$ with ratio $r(x)$. Thus their volume roughly satisfies $$ |\mathcal{D} \cap d\mathcal{V}| \approx r(x)^n |d\mathcal{V}| = r(x)^n \cdot \frac{1}{n}|d\mathcal{S}|. 
$$ From this, we have $$ p(n) = \frac{|\mathcal{D}|}{|\mathcal{C}|} = \frac{\int |\mathcal{D} \cap d\mathcal{V}|}{\int |d\mathcal{V}|} = \frac{\int_{\partial \mathcal{C}} r(x)^n \frac{1}{n} \, dA}{\int_{\partial \mathcal{C}} \frac{1}{n} \, dA}. \tag{*} $$ Here is a rigorous justification of the argument above. Write \begin{align*} |\mathcal{D}| = \int_{\mathcal{C}} \mathbf{1}_{\mathcal{D}}(x) \, dx &= \int_{0}^{1} \int_{\partial[-s,s]^n} \mathbf{1}_{\mathcal{D}}(y) \, dyds \\ &= \int_{0}^{1} \int_{\partial\mathcal{C}} \mathbf{1}_{\mathcal{D}}(s\omega) \, s^{n-1}d\omega ds \\ &= \int_{\partial\mathcal{C}} \int_{0}^{1} s^{n-1} \mathbf{1}_{\mathcal{D}}(s\omega) \, ds d\omega. \end{align*} The note that $s\omega \in \mathcal{D}$ if and only if $s|\omega| < |q(\omega)|$, or equivalently, $s < r(\omega)$. From this, we have $$ |\mathcal{D}| = \int_{\partial\mathcal{C}} \int_{0}^{r(\omega)} s^{n-1} \, ds d\omega = \int_{\partial\mathcal{C}} \frac{1}{n} r(\omega)^n \, d\omega. $$ Replacing $\mathbf{1}_{\mathcal{D}}$ by $\mathbf{1}$ from the above computation, we also get $$ |\mathcal{C}| = \int_{\partial\mathcal{C}} \frac{1}{n} \, d\omega. $$ This proves $\text{(*)}$. Now we give an explicit formula for $\text{(*)}$. Consider the top face of $\partial\mathcal{C}$. This face can be written as $[-1,1]^{n-1}\times\{1\}$. Then for each point $x = (x',1) \in [-1, 1]^{n-1}\times\{1\}$, it is not hard to verify that $$ r(x) = \frac{1}{1 + \sqrt{|x'|^2+1}}. $$ (Using the rotational symmetry, it boils down to proving this when the dimension is $n = 2$.) Plugging this back and exploiting the symmetry, we finally have $$ p(n) = \int_{[0,1]^{n-1}} r(x',1)^n \, dx' = \int_{[0,1]^{n-1}} \frac{1}{\left( 1 + \sqrt{|x'|^2+1} \right)^n} \, dx' = \gamma(n-1). $$<|endoftext|> TITLE: Let $R$ be an integral domain and $I,J$ be ideals such that $IJ$ is a principal ideal. Then $I$ is finitely generated? QUESTION [8 upvotes]: Let $R$ be an integral domain and $I,J$ be ideals such that $IJ$ is a principal ideal. Then is it true that $I$ is finitely generated ? I was thinking like if $IJ=(a)$ , where $a=\sum_{i=1}^{k} x_iy_i$ , $x_i\in I , y_i \in J$ , then we might have $I=(x_1,...,x_k)$ , but I am not sure and I cannot proceed further . Please help , Thanks in advance REPLY [7 votes]: If $J=(0)$, then $IJ=(0)$, so the statement that $IJ$ is principal imposes no restriction on $I$. This means that there is no reason for $I$ to be finitely generated. However, if $J\neq (0)$, then your claim is true, and here is how to continue your argument. The statement is true if $I=(0)$, since $(0)$ is finitely generated. Now assume that $I\neq (0)\neq J$. If $IJ=(a)$, then necessarily $a\neq 0$ because $R$ is a domain. We must have $a=\sum_{i=1}^{k} x_iy_i$ , $x_i\in I , y_i \in J$. Let $I_0=(x_1,\ldots,x_k)$ and $J_0=(y_1,\ldots,y_k)$. We must have $I_0\subseteq I$ and $J_0\subseteq J$, and the goal is to show that we also have $I\subseteq I_0$. Choose any $x\in I$. Since $xy_i\in IJ=(a)$ for $1\leq i\leq k$ there must exist $r_i\in R$ such that $xy_i=r_ia$ for $1\leq i\leq k$. Multiplying by $x_i$ yields $xx_iy_i=r_ix_ia$. Summing this equality as $i$ ranges from $1$ to $k$ yields $xa=x(\sum x_iy_i) = (\sum r_ix_i)a$. From the domain property, we may cancel $a$ to obtain $x=\sum_{i=1}^k r_ix_i\in I_0$. This proves $I\subseteq I_0$, as required. EDIT. January 21, 2022. I will address some criticisms from the comments that were posted yesterday. Let me copy the statements and then respond to them. 
(1): I just want to add that this solution assumes that the ring R has the unit 1. If it didn't have the unit 1, then $a$ doesn't need to decompose into $\sum_{i=1}^k x_iy_i$. Maybe I am wrong, but if I am then this proof has a "whole" or I just don't know some basic stuff. – donaastor (2): Yeah, I was right. The statement is false for "rngs". Take $R=2\cdot \mathbb Z$ (even numbers). Take $I=(2)$ (all numbers divisible by 4) and $J$ to be all numbers divisible by $6$. Now $I\cdot J=(12)$, but $12$ can't be decomposed into the wanted sum because whenever $a\in I$ and $b\in J$, we have $24|ab$. In addition, $J$ is not finitely generated (you can never get $6$), so it is a counter-example. – donaastor First let's agree on definitions (as they pertain to commutative rings): (1) An integral domain is a commutative, unital ring with no nonzero zero-divisors. In order to address the criticisms, let's also allow nonunital rings in this definition. (2) If $R$ is a ring and $X\subseteq R$, then $(X)$ denotes the least ideal of $R$ containing the set $X$. In particular, for $a\in R$, $(a)$ is the least ideal of $R$ containing the element $a$. If $R$ is unital, then one can prove that $(a) = Ra = \{ra\;|\;r\in R\}$. If $R$ is nonunital, then one can prove that $(a) = \mathbb Za+Ra = \{na+ra\;|\;n\in\mathbb Z, r\in R\}$. (3) The product of two ideals $I$ and $J$ is the least ideal of $R$ containing all the products $xy, x\in I, y\in J$. One can prove that $IJ = (\{x_iy_j\}) = \{\sum_{i=1}^k x_iy_i\;|\;x_i\in I, y_j\in J\}$ whether $R$ is unital or not. Now let me take the criticisms one at a time. (A) this solution assumes that the ring $R$ has the unit $1$. Yes, I WAS assuming that $R$ had $1$ when I wrote the solution, but that assumption does not affect the proof in any way except on the first line of the 3rd paragraph. Where I write "there must exist $r_i\in R$" one should should instead write "there must exist $r_i\in \mathbb Z+R$" in the nonunital case. In all other aspects, the proof given works for nonunital rings. (B) If it didn't have the unit 1, then $a$ doesn't need to decompose into $\sum_{i=1}^k x_iy_i$. This is a false statement. (C) The statement is false for "rngs". This is a false statement. Moreover there can be no counterexample constructed as a nonunital subring of $\mathbb Z$ since all additive subgroups of $\mathbb Z$ are cyclic (hence any ideal of any nonunital subring must be finitely generated). To conclude, I assert that the proof given works equally well for unital or nonunital integral domains with the one minor change I noted above in (A). That is, one way to derive the result for nonunital rings from the above proof for unital rings is to make a small change. There is a second (better) way to derive a proof for the nonunital case from a proof for the unital case. One can apply a general "transfer principal" that shows that, for many types of statements, the statement is true for integral domains in the unital case iff it is true also in the nonunital case. This result is derivable from the paper Szendrei, J. On the extension of rings without divisors of zero. Acta Univ. Szeged. Sect. Sci. Math. 13 (1950), 231-234. The paper, which focuses on not-necessarily-commutative rings, proves that the "unital completion" of a domain is a domain. 
In the commutative case it means this: for each nonunital integral domain $R$ there is an extension $\widehat{R}$ of $R$, which is unique up to isomorphism over $R$, with the properties that (i) $\widehat{R}$ is an unital integral domain containing $R$ as an ideal, (ii) $\widehat{R}$ is generated as a ring by $R\cup \{1\}$, (iii) there is no subring lying strictly between $R$ and $\widehat{R}$, (iv) the ideals of $R$ are exactly the ideals of $\widehat{R}$ that are contained in $R$, ideal generation and ideal product work the same way for ideals of $\widehat{R}$ contained in $R$, ETC. This can used as follows: assume the problem is stated for nonunital $R$. Apply my original proof to the unital completion $\widehat{R}$ for ideals $I, J\subseteq R$. Deduce the result for $\widehat{R}$, and then see that the result has the same meaning for $R$.<|endoftext|> TITLE: Finite field extension with no non-trivial subextension QUESTION [7 upvotes]: Is there a field extension $K/F$ of degree $n>1$ with $n$ not prime, such that every element $x \in K \setminus F$ has degree $n$ ? What happens if $F$ has characteristic $0$? The motivation of this question is to notice that if $[K:F]=p$ is prime (or $=1$), then any element $x \in K\setminus F$ has degree $p$, and in particular $K/F$ has no non trivial subextension (actually there is a proper subextension $F \subsetneq L \subsetneq K$ iff there is some $x \in K$ of degree $\neq 1, \neq n$). My question is about the converse of this property. Obviously if $n$ is not prime, we can find some non trivial $F$-vector subspaces, but they might not be subfields of $K$. I tried to work with subfields fixed by subgroups of $\mathrm{Aut}_F(K)$, but this group may be trivial! If $K/F$ is separable (e.g. $\mathrm{char}(F)=0$), then we can write $K=F(a)$. But $a^2$ or some polynomials in $a$ might also have degree $n$ over $F$, and I don't see how to get a polynomial $P(a) \in K \setminus F$ in $a$ with degree $ TITLE: The other trace of the curvature tensor QUESTION [6 upvotes]: I am using index notation here, since denoting traces in index notation is easier. Einstein summation convention assumed. If $(M,g)$ is a Riemannian or pseudo-Riemannian manifold, and $$ R^\rho_{\sigma\mu\nu}=\partial_\mu\Gamma^\rho_{\nu\sigma}-\partial_\nu\Gamma^\rho_{\mu\sigma}+\Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma}-\Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma} $$ is the Riemann curvature tensor, then the only independent trace of $R$ is the Ricci tensor $R_{\mu\nu}=R^\sigma_{\mu\sigma\nu}$, since the trace $R^\sigma_{\sigma\mu\nu}$ is zero. If we are, on the other hand, given an arbitrary linear connection, it is necessarily a $\text{GL}(n,\mathbb{R})$-connection, and there is nothing specific to be said about the first two indices of the curvature tensor, so the tensor field $Q_{\mu\nu}=R^\sigma_{\sigma\mu\nu}$ is not necessarily zero. What is there to be said about this tensor? What is its geometric meaning? What does it signify that for a Riemannian curvature tensor, this is zero? I do realize that if $\nabla$ is an arbitrary $g$-compatible connection then, for and arbitrary frame $e_{a}$ (latin indices - frame indices, greek indices - coordinate indices) we have $$d^\nabla g_{ab}=0=dg_{ab}-\omega^c_{\ a}g_{cb}-\omega^c_{\ b}g_{ac},$$ so $dg_{ab}=\omega_{ba}+\omega_{ab}$, so if $e_a$ is an orthonormal frame then the connection forms are skew-symmetric, and then the curvature form $\Omega^a_{\ b}=d\omega^a_{\ b}+\omega^a_{\ c}\wedge\omega^c_{\ b}$ is also skew-symmetric. 
Moreover, since unlike $\omega$, $\Omega$ is gauge-covariant, this skew-symmetry is preserved even if calculated in a non-orthonormal frame. Therefore, if $Q_{\mu\nu}$ does not vanish, then $\nabla$ cannot be metric-compatible for any metric I assume. But I am curious about more info. Does the vanishing of $Q$ also imply that $\nabla$ is metric compatible for some metric? What else can be said about $Q$? REPLY [7 votes]: For any affine connection $\nabla$ on a smooth manifold, the curvature $R_{ab}{}^c{}_d$ may be uniquely decomposed as $$R_{ab}{}^c{}_d = C_{ab}{}^c{}_d + 2 \delta^c{}_{[a} {\mathsf P}_{b]d} + \beta_{ab} \delta^c{}_d \qquad (\ast)$$ for some totally tracefree $C$, called the projective Weyl tensor, and skew $\beta$; the tensor $\mathsf P$ is called the projective Schouten tensor. The First Bianchi Identity, $R_{[ab}{}^c{}_{d]} = 0$, implies that $-2 {\mathsf P}_{[ab]} = \beta_{ab}$. Now, taking the trace over ${}^c{}_d$ gives $$Q_{ab} := R_{ab}{}^c{}_c = -2 {\mathsf P}_{[ab]} + n \beta_{ab} = (n + 1) \beta_{ab}.$$ Then, taking the trace of $(\ast)$ over ${}^c{}_a$ implies that $Q_{ab} = -2 R_{[ab]}$, so $Q$ is, up to a constant multiple, the skew part of the Ricci curvature of $\nabla$. A more concrete geometric interpretation is this: Computing from $(\ast)$ gives that the curvature of the connection $\nabla$ induces on the anticanonical bundle $\Lambda^n TM$ is $Q$, or equivalently that the curvature of the connection induced on the canonical bundle $\Lambda^n T^*M$ is $-Q$. Thus, this bundle locally admits parallel sections, that is, $\nabla$ (locally) preserves a volume form on $M$, iff $Q = 0$, corresponding to the fact that (by definition) $Q = 0$ iff $R$ takes values in ${\frak sl}(TM)$. In particular, the Levi-Civita connection of any metric $g$ preserves the volume form of the restriction of that metric to any open orientable subspace (endowed with either choice of orientation), so $Q = 0$ for a Levi-Civita connection, or, like you say, for any metric connection. Expanding the Second Bianchi Identity, $\nabla_{[e} R_{ab]}{}^c{}_d$, using $(\ast)$ implies that $dQ = 0$, so $Q$ defines a second cohomology class $[Q] \in H^2(M)$. If $\nabla$ is torsion-free, any connection projectively equivalent to $\nabla$, that is, sharing the same (unparameterized) geodesics as $\nabla$, has the form $$\hat\nabla_a \xi^b = \nabla_a \xi^b + \Upsilon_a \xi^b + \Upsilon_c \xi^c \delta^b{}_a$$ for some $\Upsilon \in \Gamma(T^*M)$ (and any choice of $\Upsilon$ gives projectively equivalent connections). The corresponding tensors $Q, \hat Q$ are related by $\hat Q = Q + 2 (n + 1) d\Upsilon$, and in particular, they differ by an exact form. Thus, the cohomology class $[Q] = [\hat Q]$ is actually an invariant of the projective structure---that is, the equivalence class of projective equivalent connections---that $\nabla$ defines. On the other hand, this transformation rule for $Q$ shows that locally $\nabla$ is projectively equivalent to one with $Q = 0$ (such connections are sometimes called special). So, in the setting of local projective differential geometry, we may as well just work with special connections, which enjoy the convenient feature that ${\mathsf P}_{ab}$ and $R_{ab}$ are symmetric. This formulation can be found, by the way, in $\S$3 of the following reference: T. Bailey, M. Eastwood, A.R. Gover, "Thomas' structure bundle for conformal, projective and related structures" Rocky Mountain J. Math, 24 (1994), 1191-1217. 
It is not true that vanishing of $Q$ implies that $\nabla$ is a Levi-Civita connection, or even that it is projectively equivalent to one. Naively one should expect as much: Vanishing of $Q$ implies that the (local) holonomy group of $\nabla$ based at any point $x$ is contained in $\textrm{SL}(T_x M)$, but if $\nabla$ is the Levi-Civita connection of a metric $g$, the holonomy group must be contained in the much smaller group $\textrm{SO}(g_x)$. A simple example is the connection $\nabla$ on $\Bbb R^3$ whose nonzero Christoffel symbols are specified (in the canonical coordinates $(x^a)$) by $\Gamma_{21}^3 = \Gamma_{31}^2 = x^2$. (The projective structure this connection defines is the so-called Egorov projective structure, which is interesting for other reasons, too.) The nonzero components of curvature are specified by $R_{23}{}^1{}_2 = -R_{32}{}^1{}_2 = -1$, so $\nabla$ is special, but $\S$2.3 of the below reference shows that it is not a Levi-Civita connection, nor even projectively equivalent to one. M. Dunajski, M. Eastwood, "Metrisability of three-dimensional path geometries", European J. Math (2015), 809-834.<|endoftext|> TITLE: Classical version and idelic version of class field theory QUESTION [6 upvotes]: Last semester, I took a course about class field theory and I learned about Artin reciprocity, which gives a map from ideal class group to Galois group, $$ \left(\frac{L/K}{\cdot}\right):I_{K}\to Gal(L/K), \,\,\,\,\prod_{i=1}^{m}\mathfrak{p}_{i}^{n_{i}}\mapsto \prod_{i=1}^{m}\left(\frac{L/K}{\mathfrak{p}_{i}}\right)^{n_{i}} $$ where $\left(\frac{L/K}{\mathbb{p}_{i}}\right)$ is a Frobenius map corresponds to prime ideal $\mathfrak{p}$. Today, I learned an adelic version of (global) class field theory, which is $$ \mathbb{A}^{\times}_{F}/\overline{F^{\times}(F_{\infty}^{\times})^{o}} \simeq G_{F}^{ab}$$ where $F$ is number field, $\mathbb{A}_{F}$ is adele over $F$ and $G_{F}^{ab}=Gal(F^{ab}/F)$. I cannot understand how these two are connected. Could anyone can explain explicit relation between these two things? REPLY [3 votes]: For more clarity, let us make more precise the definition of the Artin reciprocity map : 1) Over $\mathbf Q$, CFT is the Kronecker-Weber theorem, which says that any finite abelian $L/\mathbf Q$ is contained in a cyclotomic field $\mathbf Q_m = \mathbf Q(\zeta_m)$. Such an $m$ is called a defining modulus for $L/\mathbf Q$ and the conductor $f_L$ of $L/\mathbf Q$ is the smallest (w.r.t. division) defining modulus of $L$. Given a defining modulus $m$ of $L$, set $C_m=(\mathbf Z/m\mathbf Z)^*$, and for $a\in C_m$, define the Artin symbol ($a,L/\mathbf Q$) to be the automorphism of $L$ sending $\zeta_m$ to $\zeta_m^{a}$ , and denote by $I_{L,m}$ its kernel, so as to get an isomorphism $ C_m/I_{L,m} \cong Gal(L/\mathbf Q)$ via the Artin symbol. 2) In classical CFT over a number field $K$, the previous notions can be generalized, but in a very non obvious way. Define a $K$-modulus $\mathfrak M$ to be the formal product of an ideal of the ring of integers $A_K$ and some infinite primes of $K$ (implicitly raised to the first power). In the sequel, for simplification, we'll "speak as if" $\mathfrak M$ was an ideal. Denote by $A_{\mathfrak M}$ the group of fractional prime to $\mathfrak M$ and by $R_{\mathfrak M}$ the subgroup of principal fractional ideals $(x)$ s.t. $x$ is "congruent to" $1$ mod $\mathfrak M$ , and put $C_{\mathfrak M}=A_{\mathfrak M}/R_{\mathfrak M}$. 
For a finite abelian extension $L/K$, define $I_{L/K,\mathfrak M}=N(C_{L,\mathfrak M})$ , where $N_{L/K}$ is the norm of $L/K$ . A defining $K$-modulus of $L/K$ is s.t. $(C_{\mathfrak M}:I_{L/K,\mathfrak M})=[L:K]$, and the conductor $f_{L/K}$ is the "smallest" defining $K$-modulus of $L/K$. For a finite $K$-prime $\mathfrak P$, coprime with $\mathfrak M$, it can be shown that there exists an unique Artin symbol $(\mathfrak P , L/K) \in G(L/K)$ characterized by $(\mathfrak P, L/K)(x)\equiv x^{N\mathfrak P}$ mod $\mathfrak PA_L$ for any $x\in A_L$, with $N=N_{K/\mathbf Q}$. This definition can be extended multiplicatively to $C_{\mathfrak M}$, and the Artin reciprocity law is the isomorphism $C_{\mathfrak M}/I_{L/K,\mathfrak M} \cong G(L/K)$ via the Artin symbol. 3) In idelic CFT over a number field $K$, the previous $C_{\mathfrak M}$ 's are replaced by idèle class groups. The idèle group $J_K$ is the group of invertible elements of the adèle ring of $K$ (equipped with the "restricted product topology") and the idèle class group $C_K$ is the quotient $J_K/K^*$ . Write $C'_K=C_K/D_K$ , where $D_K$ = the connected component of identity = the subgroup of infinitely divisible elements of $C_K$. For a $K$-modulus ${\mathfrak M}$, let $I_{\mathfrak M} = J_{\mathfrak M}.K^*/K^*$, where $J_{\mathfrak M}$ is the subgroup of idèles which are "congruent" to 1 mod $\mathfrak M$. Given an abelian $L/K$, a defining $K$-modulus $\mathfrak M$ is such that $I_{\mathfrak M}$ is contained in $N_{L/K}C_L$. The Artin global reciprocity map $(.,L/K)$ is defined as follows : by the Chinese Remainder theorem, for any $j \in J_K$, there exists $x \in K^*$ s.t. $j$ is "congruent to" $x$ mod ${\mathfrak M}$; then define $(j, L/K)$ to be the product of the elements $(L/K, \mathfrak P)^{n_\mathfrak P}$ , where $n_\mathfrak P = ord (jx^{-1})_\mathfrak P$, for all $\mathfrak P$ coprime to $\mathfrak M$. It is easy to see that this can be "passed to the quotient" to define a map $(., L/K) : C'_K \to G(L/K)$ s.t. $C'_K/N_{L/K}C'_L \cong G(L/K)$ . This is the Artin reciprocity law in idelic terms. Now that we are rid of the cumbersome modulii $\mathfrak M$, we can take projective limits along the finite abelian extensions of $K$ to get a canonical isomorphism $C'_K \cong G(K^{ab}/K)$, which you can check to coincide with the (rather unexploitable) expression that you gave. Needless to say, almost all the properties explained above are very elaborate and difficult theorems.<|endoftext|> TITLE: Symplectic but not Hamiltonian Vector Fields QUESTION [8 upvotes]: In symplectic geometry, given a manifold $M$ with closed nondegenerate symplectic 2-form $\omega$, it is known that a vector field $X$ is Hamiltonian if $$\iota_X\omega=dH$$ for some smooth function $H\in C^\infty(M)$. A vector field is symplectic if it preserves the symplectic structure along the flow, i.e. $$\mathcal L_X\omega=0\,.$$ One of the easiest ways to check them is to note that if $X$ is symplectic then $\iota_X\omega$ is closed, and if $X$ is Hamiltonian then $\iota_X\omega$ is exact. Consequently, all Hamiltonian vector fields are symplectic but the converse is not true. Locally, however, Poincare lemma guarantees that every symplectic vector field is Hamiltonian. Now consider symplectic 2-torus $(\mathbb T^2,d\theta\wedge d\phi)$ and a vector field $$X=\frac{\partial}{\partial \theta}\,.$$ Using the first de Rham cohomology, one usually concludes that $X$ is not Hamiltonian. 
However, I am unsure why: note that $\iota_X\omega=d\phi$, so it looks to me this is exact. Of course, we see that $\phi$ is not globally defined on $\mathbb T^2$, so perhaps this is not correct. But this argument would seem to imply that for symplectic 2-sphere $(S^2,d\theta\wedge d\phi)$, $X$ is not Hamiltonian (even though it should be, since it is symplectic and $$H^1_{\text{de Rham}}(S^2)=0\,.$$ Another example: Consider symplectic 2-sphere $(S^2,d\theta\wedge dh$), where $H(\theta,h)=h$ is a height function. In this case, the same vector field $X$ is Hamiltonian, since we obtained the required smooth Hamiltonian function $H$. Now I reverse the problem: consider another 2-torus $(\mathbb T^2,d\theta\wedge dh)$ and the same vector field $X$. Now it looks like $X$ is Hamiltonian, even though we know $$H^1_{\text{de Rham}}(\mathbb T^2)\neq0\,.$$ From my (naive) understanding, $H^1_{\text{de Rham}}(M)$ should be the only obstruction for a symplectic vector field to be Hamiltonian vector field, and not on the choice of the symplectic 2-form. Question: What has gone wrong here? For the first example, for instance, it may have to do with seeing that $d\phi$ is not exterior derivative of $\phi$, which I may have misunderstood. REPLY [9 votes]: For the first problem, you have already detected where the problem lies: the variable $\phi$ is not a function defined on the whole manifold. Indeed, it is a priori a function in a chart on the manifold and a chart usually does not cover by itself the whole manifold. On the other hand, the particular case of the torus is special because we can more or less canonically 'parametrize' the torus by $\mathbb{R}^2$ (which is its universal cover), for instance via the map $q: \mathbb{R}^2 \to \mathbb{R}^2 / \mathbb{Z}^2 \cong T^2$. As $\phi$ can be chosen to be one of two cartesian coordinates on $\mathbb{R}^2$, its derivative $d\phi$ (on the plane) is left invariant by any translation, in particular the ones by vectors in $\mathbb{Z}^2$. As such, $d\phi$ 'passes to the quotient' i.e. there exists a well-defined closed 1-form $\eta$ on $T^2$ such that $q^{\ast}\eta = d\phi$. This is another motivation to write $\eta = d\phi$, but note that $\phi$ itself would be a multi-valued function on the torus (and hence not a genuine function, so we wouldn't consider it as an antiderivative to $\eta$). On the sphere, any chart misses at least one point, so again it is not surprising that one can find an antiderivative to a closed 1-form inside this chart. But if you can't extend $\phi$ and $\theta$ to the whole sphere, it is not clear how you can extend their derivatives to globally closed 1-forms in the first place: your problem possibly does not show up. Besides, the fact that the vector field $X = \partial/\partial \theta$ on the sphere can be globally defined (by rotation invariance and also by the null vectors at the poles) is not related to the (im)possibility that $\theta$ (or $d\theta$) is globally well-defined, but only to the fact that $X \lrcorner \omega$ is a closed (and exact) 1-form : an antiderivative is the height function, which is clearly not the angle 'function' $\theta$. The obstruction to a symplectic vector field $X$ to be Hamiltonian is precisely whether the closed 1-form $X \lrcorner \omega$ is exact. In other words, does the cohomology class $[X \lrcorner \omega] \in H^1_{dR}(M; \mathbb{R})$ vanish? (The nonvanishing of this class is the obstruction to $X$ being Hamiltonian.) 
This question makes sense on any manifold; the point is that when $H^1_{dR}(M; \mathbb{R})=0$, then the answer is 'yes' whatever the symplectic field $X$. So on the 2-sphere, any symplectic vector field is Hamiltonian, whereas on the torus it depends on the symplectic vector field considered. Put differently, the (non)vanishing of the 1-cohomology group is the obstruction to the equality $Symp(M, \omega) = Ham(M, \omega)$.<|endoftext|> TITLE: Reconstructing a functional from its Euler-Lagrange equations QUESTION [5 upvotes]: Is it true that Euler-Lagrange equations associated to a functional determine the functional? Suppose I give you an equation and I claim that it is an Euler-Lagrange equation of some functional. Can you tell me what was the functional? Of course, there is always more than one functional whith prescribed E-L equations, since the critical points of $E$ and of $\phi(E)$ where $\phi:\mathbb{R} \to \mathbb{R}$ is smooth and stirclty monotonic are identical. (By the chain rule $ (\phi \circ f)'(x)=\phi'(f(x))\cdot f'(x)$). Is it true that there is a functional $E$ whose E-L equations are the prescribed ones, and every other functional with the same E-L equations is a function of $E$? One can think on different ways to formalize this question like different choices for the domain of the functional: paths in a manifold, real valued functions on $\mathbb{R}^n$, mappings between Riemannian manifolds etc, but at this stage of the game I don't want to choose a specific form yet. (Although I am particularly interested in the latter case). REPLY [5 votes]: It is well-known that adding boundary/total divergence terms and/or overall scaling of a functional preserve the Euler-Lagrange (EL) equations. On the other hand, there is in general no classification of possible functionals that lead to a given set of EL equations. An instructive example from Newtonian point mechanics is given in this Phys.SE post, where two Lagrangians $L=T-V$ and $L=\frac{1}{3}T^2+2TV-V^2$ both have Newton's second law as their EL equation. Another example: The functional in this Math.SE post has the same EL equation as the functional $F[y]=\int_0^3 \! \mathrm{d}x~y^{\prime 2}.$<|endoftext|> TITLE: Algebraic extension of $\Bbb Q$ with exactly one extension of given degree $n$ QUESTION [13 upvotes]: Let $n \geq 2$ be any integer. Is there an algebraic extension $F_n$ of $\Bbb Q$ such that $F_n$ has exactly one field extension $K/F_n$ of degree $n$? Here I mean "exactly one" in a strict sense, i.e. I don't allow "up to (field / $F$-algebra) isomorphisms". But a solution with "exactly one up to field (or $F$-algebra) isomorphisms" would also be welcome. I'm very interested in the case where $n$ has two distinct prime factors. My thoughts: This answer provides a construction for $n=2$. I was able to generalize it for $n=p^r$ where $p$ is an odd prime. Let $S = \left\{\zeta_{p^r}^j\sqrt[p^r]{2} \mid 0 \leq j < p^r \right\}$. Then $$\mathscr F_S = \left\{L/\Bbb Q \text{ algebraic extension} \mid \forall x \in S,\; x \not \in L \text{ and } \zeta_{p^r} \in L \right\} =\left\{L/\Bbb Q \text{ algebraic extension} \mid \sqrt[p^r]{2} \not \in L \text{ and } \zeta_{p^r} \in L \right\} $$ has a maximal element $F$, by Zorn's lemma. 
In particular, we have $$ F \subsetneq K \text{ and } K/\Bbb Q \text{ algebraic extension} \implies \exists x \in S,\; x \in K \implies \exists x \in S,\; F \subsetneq F(x) \subseteq K $$ But $X^{p^r}-2$ is the minimal polynomial of any $x \in S$ over $F$ : it is irreducible over $F$ because $2$ is not a $p$-th power in $F$. Therefore $F(x)$ has degree $p^r$ over $F$ and using the implications above, we conclude that $F(x) = F(\sqrt[p^r]{2})$ is the only extension of degree $p^r$ of $F$, when $x \in S$. Assume now that we want to build a field $F$ with the desired property for some $n=\prod_{i=1}^r p_i^{n_i}$. I tried to do some kind of compositum, without any success. I have some trouble with the irreducibility over $F$ of the minimal polynomial of some $x \in S$ ($S$ suitably chosen) over $\Bbb Q$... I know that $\mathbf C((t))$ is quasi-finite and embeds abstractly in $\bf C$, so there is an uncountable subfield of $\bf C$ having exactly one field extension of degree $n$ for any $n \geq 1$. REPLY [4 votes]: If you choose a random $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and consider the fixed field $K\subset \bar{\mathbb{Q}}$ of $\sigma$, then $\bar{\mathbb{Q}}/K$ will be Galois. The Galois group $G$ will almost always be $\hat{\mathbb{Z}}$, the profinite completion of the integers, and this is a group that has exactly one finite index subgroup of each index. Then $K$ will solve your problem for each $n$. There will be infinitely many such fields. The problem is that it is hard to write down any concrete element of the absolute Galois group (except complex conjugation, as lulu referred to), and then also hard to write down its fixed field. So I'm afraid this answer is very nonconstructive. See here for details: https://mathoverflow.net/questions/273224/what-is-the-probability-of-generating-a-given-procyclic-subgroup-in-mathrmgal<|endoftext|> TITLE: Prove that $\sqrt{2}+\sqrt{3}+\sqrt{5}$ is irrational. Generalise this. QUESTION [10 upvotes]: I'm reading R. Courant & H. Robbins' "What is Mathematics: An Elementary Approach to Ideas and Methods" for fun. I'm on page $60$ and $61$ of the second addition. There are three exercises on proving numbers irrational spanning these pages, the last is as follows. Exercise $3$: Prove that $\phi=\sqrt{2}+\sqrt{3}+\sqrt{5}$ is irrational. Try to make up similar and more general examples. My Attempt: Lemma: The number $\sqrt{2}+\sqrt{3}$ is irrational. (This is part of Exercise 2.) Proof: Suppose $\sqrt{2}+\sqrt{3}=r$ is rational. Then $$\begin{align} 2&=(r-\sqrt{3})^2 \\ &=r^2-2r\sqrt{3}+3 \end{align}$$ is rational, so that $$\sqrt{3}=\frac{r^2+1}{2r}$$ is rational, a contradiction. $\square$ Let $\psi=\sqrt{2}+\sqrt{3}$. Then, considering $\phi$, $$\begin{align} 5&=(\phi-\psi)^2 \\ &=\phi^2-2\psi\phi+5+2\sqrt{6}. \end{align}$$ I don't know what else to do from here. My plan is/was to use the Lemma above as the focus for a contradiction, showing $\psi$ is rational somehow. Please help :) Thoughts: The "try to make up similar and more general examples" bit is a little vague. The question is not answered here as far as I can tell. REPLY [3 votes]: An alternative solution. Assume that $\alpha=\sqrt{2}+\sqrt{3}+\sqrt{5}=\frac{a}{b}\in\mathbb{Q}$. By quadratic reciprocity, there is some prime $p>b$ such that $3$ and $5$ are quadratic residues $\!\!\pmod{p}$ while $2$ is not. 
That implies that $\alpha$ is an algebraic number over $\mathbb{K}=\mathbb{F}_p$ with degree $2$, since $\sqrt{2}$ does not belong to $\mathbb{K}$ but belongs to a quadratic extension of $\mathbb{K}$. On the other hand $b<p$, so $b$ is invertible $\pmod{p}$ and $\alpha\equiv ab^{-1}\pmod{p}$ lies in $\mathbb{K}$ itself, hence has degree $1$ over $\mathbb{K}$: a contradiction.<|endoftext|> TITLE: Distribution of interarrival times in a Poisson process QUESTION [5 upvotes]: I am new to Statistics. I am studying the Poisson process, and I have certain questions to ask. A process of arrival times in continuous time is called a Poisson process of rate $\lambda$ if the following two conditions hold: The number of arrivals in an interval of length $t$ is a $\text{Pois}(\lambda t)$ random variable. The numbers of arrivals that occur in disjoint time intervals are independent of each other. Let $X_1$ denote the time of first arrival in a Poisson process of rate $\lambda$. Let $X_2$ denote the time elapsed between the first arrival and the second arrival. We can find the distribution of $X_1$ as follows: $$\mathbb{P}(X_1>t)=\mathbb{P}\left(\text{No arrivals in }[0,t]\right)=\mathrm{e}^{-\lambda t}$$ Thus $\mathbb{P}(X_1\le t)=1-\mathrm{e}^{-\lambda t}$, and hence $X_1\sim\text{Expo}(\lambda)$. Suppose we want to find the conditional distribution of $X_2$ given $X_1$. I found the following discussion in my textbook. $\begin{equation}\begin{split}\mathbb{P}(X_2>t\mid X_1=s) & = \mathbb{P}\left(\text{No arrivals in }(s,s+t] \mid \text{Exactly one arrival in [0,s]} \right) \\ & =\mathbb{P}\left(\text{No arrivals in }(s,s+t]\right)\\ &=\mathrm{e}^{-\lambda t}\end{split}\end{equation}$. Thus, $X_1$ and $X_2$ are independent, and $X_2\sim\text{Expo}(\lambda)$. However, I have the following questions regarding the above discussion. Since $X_1$ is a continuous random variable, $\mathbb{P}(X_1=k)=0$ for every $k\in\mathbb{R}$. Thus, $\mathbb{P}(X_1=s)=0$. In other words, we are conditioning on an event with zero probability. But when I studied conditional probability, conditioning on events with zero probability was not defined. So in this case, is conditioning on an event with zero probability valid? Second, assuming that conditioning on $X_1=s$ is valid, what we have found is the conditional distribution of $X_2$ given $X_1=s$. In other words, the conditional distribution of $X_2$ given $X_1$ is $\text{Expo}(\lambda)$, not the distribution of $X_2$ itself. But the author claims that $X_2\sim\text{Expo}(\lambda)$. Why is this true? REPLY [3 votes]: If the conditional distribution of $X_2$ given the event $X_1=s$ is the same for all values of $s$, then the marginal (i.e. not conditional) distribution of $X_2$ is also that same distribution, and they are independent. If the conditional distribution of $X_2$ given $X_1=s$ depended on $s$, then the distribution of $X_2$ would be a weighted average of those conditional distributions, with weights given by the distribution of $X_1$. But if all of those conditional distributions are the same, then you're taking a weighted average of things that are all the same. How to define conditioning on an event of probability $0$ is somewhat more delicate; maybe I'll say more about that later.<|endoftext|> TITLE: Is the Lie Algebra of a connected abelian group abelian? QUESTION [10 upvotes]: Is the Lie Algebra of a connected abelian group abelian? I guess that this should be true, but how do you prove it? REPLY [26 votes]: Yes, and connectedness is not necessary. I know three proofs: Proof 1 When $G$ is abelian, the inverse map $$i:G\to G,\quad g\mapsto g^{-1}$$ is a group homomorphism.
Hence, its differential at $1\in G$ $$di_1:\mathfrak{g}\to\mathfrak{g},\quad X\mapsto -X$$ is a Lie algebra homomorphism. But then $$-[X,Y]=di_1([X,Y])=[di_1(X),di_1(Y)]=[-X,-Y]=[X,Y],$$ so $[X,Y]=0$. Proof 2 For any Lie group $G$, the differential at $1$ of the map $\mathrm{Ad}:G\to GL(\mathfrak{g})$ is $\mathrm{ad}:\mathfrak{g}\to\mathrm{End}(\mathfrak{g})$ where $\mathrm{ad}(X)(Y)=[X,Y]$. But when $G$ is abelian, $\mathrm{Ad}$ is the constant map to the identity (since $\mathrm{Ad}(g)$ is the differential of the map $G\to G,a\mapsto gag^{-1}$ which is the identity when $G$ is abelian), so $\mathrm{ad}=0$. Proof 3 For any Lie group $G$ we have that for $X,Y\in\mathfrak{g}$, $$\exp(sX)\exp(tY)=\exp(tY)\exp(sX),\forall s,t\in\mathbb{R}\quad\iff[X,Y]=0.$$ If $G$ is abelian, the left-hand side always holds, so $[X,Y]=0$ for all $X,Y\in\mathfrak{g}$. Remark about the converse The last proof can be used to prove the converse when $G$ is connected. This is because $\exp$ restricts to a diffeomorphism from a neighborhood of $0$ in $\mathfrak{g}$ to a neighborhood of the identity in $G$ and a connected group is generated by any neighborhood of the identity. However, connectedness is necessary for the converse. For example, if $T$ is any abelian connected Lie group and $H$ is any non-abelian finite group, then $G=T\times H$ is a non-abelian Lie group with abelian Lie algebra.<|endoftext|> TITLE: How do we prove a set of axioms never leads to a contradiction? QUESTION [8 upvotes]: How can we be sure that a set of axioms will never lead to a contradiction? If there's a contradiction, we will find it sooner or later. But if there is none, how can we be sure we chose the axioms reasonably, so that no contradiction will ever arise? Is there a general approach, or was there a specific proof for every known axiom set? (For example, does such a proof exist for Peano's Axioms?) REPLY [10 votes]: Proofs presuppose axioms. In order to prove that "$T$ is consistent," we need to work within some other axiom system $S$; this, then, means that our proof is only as convincing as our belief in the consistency of $S$. Note that even without Goedel's incompleteness theorem, we shouldn't be convinced by $S$ proving "I am consistent" - of course it would if it were inconsistent! So I actually think Goedel is a red herring, here. That said, this doesn't kill the project of proving consistency, it just changes it. In order to prove that a theory $T$ is consistent, we want to find some theory $S$ for which we have good reason to believe that it is consistent, and then prove inside $S$ that $T$ is consistent. One standard example of this is ordinal analysis: the goal is to assign a linear order $\alpha_T$ to $T$ which is "clearly" well-ordered, and then show that the very weak theory PRA, together with "$\alpha_T$ is well-ordered", proves that $T$ is consistent (I'm skipping many details here). For $T=PA$, for instance, this was done by Gentzen; the relevant ordering is the ordinal $\epsilon_0$. This is, however, of dubious use for convincing us of the consistency of theories: for weak theories like $PA$, I find the consistency of $PA$ more "obviously true" than the well-orderedness of $\epsilon_0$, and for stronger theories the relevant $\alpha_T$s are incredibly complicated to describe. EDIT: Symplectomorphic asked about the model-theoretic answer: we know a theory is consistent if we can exhibit a model. I did omit this above, so let me address it now.
What I want to convince you of is that this is a bit more complicated than it sounds. I claim that - even if you have a model of your theory in hand - you're still going to need to do some work to convince me of the consistency of your theory, and ultimately my first paragraph above is still going to be relevant. So suppose you have a theory $T$ you're trying to convince me is consistent, and you have a model $\mathcal{M}$ of $T$ "in hand" (whatever that means). What do you need to persuade me about? First, you have to prove that having a model means your theory is consistent. This sounds trivial, but it's really a fact about our proof system - soundness. It's an extremely basic fact, but technically something that requires proof. Second, when we exhibit a model, what we're really doing is describing a mathematical object. Well, you need to prove to me that it exists. There are really complicated mathematical objects out there, and theories we believe to be consistent which provably have no "simple" models (like ZFC), so this really isn't a silly objection in general. Finally, even if I'm convinced that our logic is sound, and that the structure you've described for me exists, you need to convince me that it is in fact a model of your theory! And the more complicated your theory is, the more complicated your model will be, and hence the more difficult this task will be. In fact, this is super hard in general: is $(\mathbb{N}; +, \times)$ a model of the sentence, "There are infinitely many twin primes"? How about "ZFC is consistent"? Now, the first obstacle is a rather silly one - I think it's fine to take the soundness of logic for granted. But the second and third aren't so trivial (and even the first isn't really completely trivial). What I'm saying is, there's no way to ground a claim of consistency as solidly as a claim of inconsistency. To show a theory is inconsistent, you exhibit a proof of a contradiction; and then I'm completely convinced. To show that a theory is consistent by exhibiting a model, you need to build a model and verify that it satisfies the theory, and each of those steps implicitly takes place in a background theory whose consistency I could in principle question.<|endoftext|> TITLE: Two inequalities involving the rearrangement inequality QUESTION [6 upvotes]: Well, there are two more inequalities I'm struggling to prove using the Rearrangement Inequality (for $a,b,c>0$): $$ a^4b^2+a^4c^2+b^4a^2+b^4c^2+c^4a^2+c^4b^2\ge 6a^2b^2c^2 $$ and $$a^2b+ab^2+b^2c+bc^2+ac^2+a^2c\ge 6abc $$ They seem somewhat similar, so I hope there's an exploitable link between them. They fall easily under Muirhead, yet I cannot figure out how to prove them using the Rearrangement Inequality. Any hints greatly appreciated. REPLY [2 votes]: \begin{eqnarray*} a(b-c)^2+b(c-a)^2+c(a-b)^2 \geq 0 \end{eqnarray*} Expanding the left-hand side gives exactly $a^2b+ab^2+b^2c+bc^2+ac^2+a^2c-6abc\geq 0$, so the second inequality follows. Now substitute $a^2$ for $a$ etc and the first inequality follows.<|endoftext|> TITLE: Understanding impredicative definitions QUESTION [7 upvotes]: In studying the mathematics of Frege, Russell, and Zermelo, I wanted to learn more about impredicative/predicative definitions to resolve some questions I had. 1. How does banning impredicative definitions avoid Russell's Paradox? From what I read, the Vicious Circle Principle played a role where "No entity can be defined in terms of a totality to which this entity belongs". From this, I can see that this principle does indeed ban impredicative definitions.
Is there more to this that I'm missing? 2. Does ZFC allow impredicative definitions? If it does, how does it avoid Russell's Paradox? Zermelo and Fraenkel developed ZFC, and they did allow impredicative definitions; ZFC does not allow the existence of a universal set, refers only to pure sets (rather than proper classes), and prevents its models from containing elements of sets that are not themselves sets. Were there other features ZFC used to avoid Russell's paradox? Thanks for reading & helping! REPLY [7 votes]: (1) You are right, with caveats. The caveat is that "impredicative" is an intuition that Russell tried to pin the blame for the paradox on -- and then he spent reams of words and many years trying to define what exactly "impredicative" means, such that banning it would both avoid the paradoxes and still allow ordinary mathematics. The results were not exactly successful -- at least they didn't catch on. (2a) Yes, ZFC allows impredicative definitions. For example, let's define: A natural number $n$ is called hooplish if every subset $A$ of $\mathbb N$ with the property that every prime power is a sum of at most $n$ elements of $A$ must contain an arithmetic sequence of length $n$. (The details of this don't matter -- in fact, I have no idea which numbers are or are not hooplish, or whether the concept is trivial or not). What matters is that "$n$ is hooplish" can certainly be defined by a formula in the language of set theory, and therefore ZFC's Axiom of Separation allows us to define $$ H = \{ n\in\mathbb N \mid n\text{ is hooplish} \} $$ According to this definition, in order to figure out whether some number is in $H$, we need to quantify over all subsets of $\mathbb N$, including $H$ itself. That is by every reasonable standard impredicative! But ZFC has no problem with it; it promises us that there is a set with this property. And nobody has, so far, been able to leverage that guarantee into a proof of a contradiction. The philosophical underpinning of this is the view that the Axiom of Separation does not generate the subsets of $\mathbb N$ -- in the intended interpretation they are all there from the beginning, and the axiom just explains that we can pick one of them in such-and-such way. (2b) ZFC avoids Russell's paradox by not having an axiom that guarantees that $\{x\mid x\notin x\}$ describes a set. ZFC doesn't say that the problem with the definition is that it is "impredicative", but simply that it doesn't fit into any of the precisely enumerated kinds of definitions that ZFC does allow. Russell thought that banning impredicative definitions would be one way to avoid the paradoxes while preserving ordinary mathematics. Just because he said so, however, doesn't mean that he was right -- opinions seem to be divided on whether in his quest to preserve ordinary mathematics he didn't, effectively, open a back door to at least some kind of impredicative definitions. And in any case, I don't think Russell claimed such a ban would be the only way to reach the goal (though he evidently was of the opinion it would be the best way, if only the details could be gotten right).
ZFC simply follows a different strategy, one that seems to be pretty successful so far.<|endoftext|> TITLE: Find the sum of $\binom{2016}{4} + \binom{2016}{8} +\binom{2016}{12} + \dots + \binom{2016}{2016}$ QUESTION [10 upvotes]: Problem: Find $$\dbinom{2016}{4} + \dbinom{2016}{8} +\dbinom{2016}{12} + \dots + \dbinom{2016}{2016}$$ I don't know how to attempt this problem, other than that this sum is equivalent to finding the sum of the coefficients of the terms whose degree is a multiple of $4$ in the polynomial $$P(x) = (x+1)^{2016}$$ I know that there's a duplicate of this problem somewhere, but I just can't find it on the website. Any help is appreciated! REPLY [5 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \sum_{n = 1}^{504}{2016 \choose 4n} & = -1 + \sum_{n = 0}^{2016}{2016 \choose n} {1 + \pars{-1}^{n} + \ic^{n} + \pars{-\ic}^{n} \over 4} \\[5mm] & = -1 + {1 \over 4}\sum_{n = 0}^{2016}{2016 \choose n} + {1 \over 4}\sum_{n = 0}^{2016}{2016 \choose n}\pars{-1}^{n} \\[2mm] & + {1 \over 2}\,\Re\sum_{n = 0}^{2016}{2016 \choose n}\ic^{n} \\[5mm] & = -1 + {1 \over 4}\,\pars{1 + 1}^{2016}+ {1 \over 4}\,\pars{1 - 1}^{2016} + {1 \over 2}\,\Re\pars{1 + \ic}^{2016} \\[5mm] & = -1 + 2^{2014} + {1 \over 2}\,\Re\pars{2^{1008}\expo{504\pi\ic}} = \bbx{\ds{-1 + 2^{2014} + 2^{1007}}} \end{align}<|endoftext|> TITLE: Show that $\int_{-\infty}^{\infty}\frac{\cos(x)}{(x^2+1)^2} dx = \frac{\pi}{e}$ using complex analysis QUESTION [5 upvotes]: I am trying to show that $\int_{-\infty}^{\infty}\frac{\cos(x)}{(x^2+1)^2} dx = \frac{\pi}{e}$ by considering integration around a rectangle in the upper half complex plane and substituting $z = x$. But I am not sure how to proceed from here. Any help is appreciated. REPLY [4 votes]: I wouldn't integrate along a rectangle, but rather along a semicircle running along the real axis and then being closed in the upper half plane.
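As a quick numerical sanity check of the claimed value before doing the contour computation (a Python sketch assuming SciPy is available; not needed for the proof itself):

import numpy as np
from scipy.integrate import quad

# integrate cos(x)/(x^2+1)^2 over the real line and compare with pi/e
val, _ = quad(lambda x: np.cos(x) / (x**2 + 1)**2, -np.inf, np.inf)
print(val, np.pi / np.e)  # both are approximately 1.15573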
Since $\sin x$ is odd, we have: \begin{align} \int_{-\infty}^\infty\frac{\cos(x)}{(x^2+1)^2}dx=\int_{-\infty}^\infty\frac{\cos(x)+i\sin(x)}{(x^2+1)^2}dx=\int_{-\infty}^\infty\frac{e^{ix}}{(x^2+1)^2}dx\\ \end{align} Consider the contour integral, where the contour $\Gamma$ runs along the real axis from $-R$ to $R$ and is then closed in the upper half plane, which we can split into two parts: $$\oint_\Gamma\frac{e^{iz}}{(z^2+1)^2}dz=\int_{-R}^R\frac{e^{ix}}{(x^2+1)^2}dx+\int_\text{Arc}\frac{e^{iz}}{(z^2+1)^2}dz$$ However, by the Residue Theorem, we have: \begin{align} \oint_\Gamma\frac{e^{iz}}{(z^2+1)^2}dz &= 2\pi i \text{ Res}\left(\frac{e^{iz}}{(z^2+1)^2},i\right)\\ &=2\pi i\lim\limits_{z\rightarrow i}\frac{d}{dz}\frac{e^{iz}}{(z+i)^2}\\ &=2\pi i\lim\limits_{z\rightarrow i}\frac{ie^{iz}(z+3i)}{(z+i)^3}\\ &=2\pi i\frac{ie^{-1}\cdot4i}{(2i)^3}\\ &=\frac{\pi}{e} \end{align} Thus we have: $$\frac{\pi}{e}=\int_{-R}^R\frac{e^{ix}}{(x^2+1)^2}dx+\int_\text{Arc}\frac{e^{iz}}{(z^2+1)^2}dz$$ Taking the limit as $R\rightarrow\infty$, the integral along the arc of the semi-circle vanishes, because the positive imaginary part of $z$ leads to an exponential damping factor in the upper half plane. Thus, we are left with our desired result: $$\int_{-\infty}^\infty\frac{\cos(x)}{(x^2+1)^2}dx = \frac{\pi}{e}$$<|endoftext|> TITLE: Prove that for any even positive integer $n$, $n^2-1 \mid 2^{n!}-1$ QUESTION [7 upvotes]: Prove that for any even positive integer $n$, $n^2-1 \mid 2^{n!}-1$ This is from a book. They have given the proof. But I didn't understand it well. I am looking for a simpler proof, or it would be helpful if someone explained this a little bit more - Proof: Let $m = n+1$ then we need to prove that $m(m-2) \mid 2^{(m-1)!}-1$. Because of $\phi(m) \mid (m-1)!$, we have $2^{\phi(m)} -1 \mid 2^{(m-1)!} -1$. And from Euler's theorem, $(m-2) \mid 2^{(m-1)!}-1$. Because $m$ is odd, $gcd(m,m-2)=1$ and the conclusion follows. REPLY [2 votes]: $2\,$ is coprime to $\,n\!+\!1\,$ so $\,{\rm mod}\ n\!+\!1\!:\, $ $\,2$ has order $\le n,\,$ i.e. $\,\color{#c00}{2^{\large k}\! \equiv 1}\,$ for some $\,k\le n,\,$ thus $\,k\mid n!\:$ hence $\,2^{\large n!}\!\equiv (\color{#c00}{2^{\large k}})^{\large n!/k}\!\equiv 1.\,$ Similarly $\,2^{\large n!}\!\equiv 1\pmod{\!n\!-\!1}.$ Thus $\,2^{\large n!}\!-1\,$ is divisible by $\,n\!+\!1\,$ and $\,n\!-\!1\,$ hence also by their lcm = product, since $\:\gcd(n\!+\!1,n\!-\!1) = \gcd(n\!+\!1,2) = 1,\,$ by $n$ even. REPLY [2 votes]: I have a shorter proof: because $n$ is even, $n^2-1$ is odd and $\gcd(n^2-1,2)=1$, thus according to Euler's theorem $$2^{\varphi(n^2-1)}\equiv 1 \pmod{n^2-1}$$ But the totient function is multiplicative and $\gcd(n-1,n+1)=1$, so $$\varphi(n^2-1)=\varphi(n+1)\cdot \varphi(n-1)\leq n\cdot (n-2)$$<|endoftext|> TITLE: Do we know a transcendental number with a proven bounded continued fraction expansion? QUESTION [7 upvotes]: The simple continued-fraction-expansion for the transcendental number $e$ is known to be unbounded. What about bounded continued fractions? Do we know any transcendental number for which it is proven that the simple continued-fraction-expansion is bounded? It is conjectured that the simple continued-fraction-expansions of the algebraic numbers with minimal polynomial degree greater than $2$ are unbounded. If this were true, every bounded non-periodic infinite simple continued-fraction-expansion would correspond with a transcendental number.
But to my knowledge, it was not proven for a single algebraic number with minimal polynomial degree greater than $2$ that its simple continued-fraction-expansion is unbounded. REPLY [11 votes]: Do we know any transcendental number for which it is proven that the simple continued-fraction-expansion is bounded? Here's one for you: $\begin{align} K &= \sum^\infty_{n=0}10^{-2^{n}} \\ &= 10^{-1}+10^{-2}+10^{-4}+10^{-8}+10^{-16}+10^{-32}+10^{-64}+\ldots \\ &= 0.\mathbf{1}\mathbf{1}0\mathbf{1}000\mathbf{1}0000000\mathbf{1}000000000000000\mathbf{1}0000000000000000000000000000000\mathbf{1}\ldots \end{align}$ a constant with 1's in the positions corresponding to integer powers of two and zeros everywhere else. $K$ has a canonical continued fraction expansion of: $\left[0; 9, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, 12, 8, 10, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, 8, 10, 12, 10, 10, 8, 10, 12, 8, 10, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, 12, 10, 8, 12, 10, 8, 10, 10, 12, 10, 8, 12, 10, 10, 8, 10, \ldots\right]$ After calculating the first 1000000 terms on Wolfram Cloud, I'm fairly certain that (except for the first term which is 0 and the second term which is 9) all of the terms are 8, 10, or 12. (Maybe someone can prove this.) Looking at the terms themselves, the position numbers of the 12's seem to all be congruent to 2 or 7 mod 8, and even after 10000 terms there seems to be nothing special as to their ordering. And the positions of the eights (5, 9, 12, 17, 21, 24, ...) are all congruent to 1 or 0 mod 4. But it seems that there is a particular order as to which of the positions are which. I was also able to use Wolfram Alpha to find a function that was able to correctly evaluate the positions of all the 8's for the first 10000 terms. And after unsuccessfully trying to find a formula for the 10's, here is what the structure of the continued fraction appears to look like: $K=a_0+\frac{1}{a_1+\frac{1}{a_2+\frac{1}{a_3+\frac{1}{a_4+\frac{1}{a_5+\ldots}}}}}$ where $\forall~n\in\mathbb{Z}_{\geqslant 0},~a_n=\begin{cases} 0 & n=0 \\ 8 & n\in\left\{\frac{8m+\left(\frac{-1}{m-1}\right)+1}{2}~:~m\in\mathbb{Z}^{+}\right\} \\ 9 & n=1 \\ 10 & \text{otherwise} \\ 12 & n\equiv 2\left(\operatorname{mod}8\right)\text{or}~7\left(\operatorname{mod}8\right) \end{cases}$ where $\left(\frac{n}{m}\right)$ is the Jacobi symbol. So there we have it. A transcendental number whose continued fraction has bounded terms.<|endoftext|> TITLE: How does the derivative with respect to the complex conjugate even make sense? QUESTION [6 upvotes]: I came across this the other day: $$ \frac{\partial f}{\partial \bar{z}} = \frac12\left(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right) $$ I decided to attempt to work it out myself to better understand it. I know $2x = z + \bar{z}$, and $2iy = z - \bar{z}$, and using the total derivative we have $$ \frac{\partial f}{\partial \bar{z}} = \frac{\partial f}{\partial x}\frac{\partial x}{\partial \bar{z}} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial \bar{z}} $$ and this is about where I got stuck. How exactly am I supposed to calculate $\frac{\partial x}{\partial \bar{z}}$? My confusion doesn't lie in the notation, but in the mechanics of the very thing I'm being asked to differentiate.
Look at $x$: $$ x(\bar{z}) = \frac{\bar{z} + z}{2} = \frac{\bar{z}+\bar{\bar{z}}}{2} $$ if we label $Z = \bar{z}$, then $\frac{\partial x}{\partial \bar{z}} = \frac{\partial x}{\partial Z}$, and $x(Z) = \frac{Z+\bar{Z}}2$. However, as far as I can tell, $\frac{Z+\bar{Z}}{2}$ isn't even complex differentiable, because $\bar{Z}$ isn't complex differentiable with respect to $Z$. $x(Z)$ doesn't satisfy the CR equations: $$ x(X+iY) = X + i0 = u(X, Y) + iv(X, Y) \\ u_X = 1 \neq 0 = v_Y $$ so how could I possibly take the complex derivative of it? That doesn't make any sense. What exactly am I missing here? Is the derivative $\frac{\partial x}{\partial \bar{z}}$ a different kind of derivative? Are we not supposed to do the complex derivative but instead something else? REPLY [7 votes]: It is a different way of thinking. Think of a function of two variables. The variables may be $x,y$. But you can write $$ z = x+iy,\qquad \overline{z} = x - i y $$ and get two new variables $z, \overline{z}$. You can write $x$ and $y$ in terms of $z$ and $\overline{z}$. You can write $z$ and $\overline{z}$ in terms of $x$ and $y$. Thus, you can do a change of variables. When $f$ is considered a function of $z$ and $\overline{z}$ in this way, then of course the two partial derivatives make sense. Some day perhaps you will study differential geometry, where you will learn a more complete way of doing this.<|endoftext|> TITLE: $2^5 \times 9^2 =2592$ QUESTION [14 upvotes]: $$2^5 \times 9^2 =2592$$ I am trying to find any other number in this pattern. That is, find natural numbers $a$ , $b$ , $c$ and $d$ such that $$a^b \times c^d = \overline{abcd} $$ We have $$a^b \times c^d = \overline{cd} + 100\overline{ab} $$ So $$a^b \times c^d - \overline{cd} = 100\overline{ab} $$ LHS is a multiple of $100$. Any help from here will be greatly appreciated. REPLY [11 votes]: The 2592 puzzle apparently originated with Henry Ernest Dudeney's 1917 book Amusements in Mathematics where it is given as puzzle 115, "A PRINTER'S ERROR": In a certain article a printer had to set up the figures $5^4\times2^3,$ which of course meant that the fourth power of $5\ (625)$ is to be multiplied by the cube of $2\ (8),$ the product of which is $5,000.$ But he printed $5^4\times2^3$ as $5423,$ which is not correct. Can you place four digits in the manner shown, so that it will be equally correct if the printer sets it up aright, or makes the same blunder? [. . . .] The answer is that $2^5\times9^2$ is the same as $2592,$ and this is the only possible solution to the puzzle. It was apparently rediscovered fifteen years later and published in the American Mathematical Monthly, vol. 40, December 1933, p. 607, as problem E69, proposed by Raphael Robinson, in the following form: Instead of a product of powers, $a^bc^d,$ a printer accidentally prints the four digit number, $abcd.$ The value is however the same. Find the number and show that it is unique. A solution by C. W. Trigg was published in vol. 41, May 1934, p. 332; the problem was also solved by Florence E. Allen, W. E. Buker, E. P. Starke, Simon Vatriquant, and the proposer.<|endoftext|> TITLE: What do the two things such that "Data is fixed" and "Parameters vary" in Bayesian statistics mean? QUESTION [5 upvotes]: While following a Bayesian statistics course online, the lecturer said that "Data is fixed" and "Parameters vary" in Bayesian statistics. But the explanations I got don't really make me understand what those things mean.
The two statements sound important for understanding the basic ideas of Bayesian statistics. I hope to hear some explanations. REPLY [7 votes]: Under a frequentist point of view you might have an unknown parameter, say $\theta$, that you want to estimate based on some data you have collected. You assume that this true and unknown parameter is fixed. Your data are expressed through a random variable, say $X$. So, for example you are interested in maximizing a likelihood based on the probability density function $f(X\mid\theta)$. This means that you model your collected data under the belief that the probability function of your data depends on that unknown parameter. You then can estimate that parameter, say by maximizing the likelihood of the data (i.e. you consider the data random, in the sense that they are a random realization of the population that you study). Under a Bayesian point of view things are a bit reversed. You do not view the parameter $\theta$ as an unknown constant, i.e. fixed at some value, that you try to estimate. You rather consider that the parameter itself has a marginal distribution $f(\theta)$ which is called a prior. This expresses your prior beliefs regarding the parameter, which now is viewed as a random variable since it follows a distribution. Under such a framework you might be interested in modelling $f(\theta\mid X)$, namely updating your knowledge about $\theta$ GIVEN the data you have collected. Since the data are now given, they are not random; hence the "data is fixed" that your lecturer mentioned.<|endoftext|> TITLE: Is there a quicker way to solve this integral: $\int \frac{3-\cos(x)}{(1+2\cos(x))\sin^2(x)}dx$? QUESTION [9 upvotes]: The integral is: $$ \int \frac{3-\cos(x)}{(1+2\cos(x))\sin^2(x)}dx$$ This is the way I approached it: $$ \tan\left(\frac{x}{2}\right)=u\\dx=\frac{2}{\sec^2\left(\frac{x}{2}\right)}du$$ By using trigonometric identities we get: $$ \sin(x)=\frac{2u}{1+u^2};\ \cos(x)=\frac{1-u^2}{1+u^2};\ \sec^2\left(\frac{x}{2}\right)=1+u^2 $$ Therefore the integral now becomes: $$ 2\int \frac{3-\frac{1-u^2}{1+u^2}}{\left(1+2\left(\frac{1-u^2}{1+u^2}\right)\right)\left(\frac{2u}{1+u^2}\right)^2(1+u^2)}du=$$ $$\int\frac{(1+2u^2)(1+u^2)}{u^2(3-u^2)}du$$ By dividing the two polynomials we get: $$\int\left(-2+\frac{9u^2+1}{u^2(3-u^2)}\right)du$$ Using partial fractions we get to the simplified form: $$\int\left(-2+\frac{1}{3u^2}+\frac{28}{3(3-u^2)}\right)du$$ $$-2u-\frac{1}{3u}+\frac{28}{9}\int\frac{1}{1-\left(\frac{u}{\sqrt3}\right)^2}du \\ -2u-\frac{1}{3u}+\frac{28\sqrt3\tanh^{-1}{\left(\frac{u}{\sqrt3}\right)}}{9}+C$$ By substituting back in for $u$, we get the solution: $$ \bbox[5px,border:2px solid black]{\frac{28\sqrt3\tanh^{-1}{\left(\frac{\tan\left(\frac{x}{2}\right)}{\sqrt3}\right)}}{9}-2\tan\left(\frac{x}{2}\right)-\frac{1}{3}\cot\left(\frac{x}{2}\right)+C}$$ My question is, as you can understand from the title, is there any easier and faster way to solve this integral? If so, how? Thank you.
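Edit: here is a quick numerical consistency check of the boxed antiderivative against the original integrand (a Python sketch, assuming SciPy; the function names are my own):

from math import sqrt, tan, cos, sin, atanh
from scipy.integrate import quad

def F(x):
    # the antiderivative found above, constant of integration omitted
    t = tan(x / 2)
    return 28 * sqrt(3) * atanh(t / sqrt(3)) / 9 - 2 * t - 1 / (3 * t)

f = lambda x: (3 - cos(x)) / ((1 + 2 * cos(x)) * sin(x)**2)
print(F(1.0) - F(0.5), quad(f, 0.5, 1.0)[0])  # the two values agree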
REPLY [7 votes]: HINT One can look for the coefficients in the identity $$f(y)=\frac{3-y}{(1+2y)(1-y^2)}=\frac A{1+2y}+\frac B{1-y}+\frac C{1+y}:$$ $$A=\lim_{y\to -\dfrac12}(1+2y)f(y) =\frac{14}3,$$ $$B=\lim_{y\to 1}(1-y)f(y) = \frac13,$$ $$C=\lim_{y\to-1}(1+y)f(y) = -2$$ and then find the integrals through the universal trigonometric substitution and the known integrals $$\int\dfrac{\mathrm dx}{1-\cos(x)}=\dfrac12\int\dfrac{\mathrm dx}{\sin^2\left(\dfrac x2\right)} = -\cot\left(\dfrac x2\right)+constant,$$ $$\int\dfrac{\mathrm dx}{1+\cos(x)}=\dfrac12\int\dfrac{\mathrm dx}{\cos^2\left(\dfrac x2\right)} = \tan\left(\dfrac x2\right)+constant.$$<|endoftext|> TITLE: Prove the recurrence $x_{n}=\frac{x_{n-2}}{1+x_{n-1}}$ converges for a unique $x_1$ QUESTION [8 upvotes]: While reading Steven Finch's amazing book Mathematical Constants I once encountered Grossman's constant. This is an interesting constant $c$ defined as the unique $x_1\in\mathbb{R}$ such that the sequence $\{x_n\}_{n=0}^\infty$ defined by the recurrence: $$x_{n}=\frac{x_{n-2}}{1+x_{n-1}}$$ for $n\ge2$ with $x_0=1$ converges, where $c\approx 0.73733830336929\ldots$. This seems like quite a remarkable theorem and I have no idea how to go about proving that a recurrence of this form converges for a single value, although it seems to have something to do with the limiting behaviour of the odd and even terms. I do not have access to the paper referenced by Finch and MathWorld in which the proof is apparently given, so I am wondering at the very least what techniques were used to prove it. My question is: Does anyone know of (or can come up with) a proof (or even the idea of a proof) that this sequence converges for a unique $x_1$? Also, is any closed form for $c$ yet known? REPLY [2 votes]: This is not an answer but here is a collection of facts about the sequences: If $x_0,x_1 \ge 0$ then $x_n \ge 0$ for all $n$, and $x_{n+2} = \frac{x_n} {1+x_{n+1}} \le x_n$, so that the two sequences $(x_{2n})$ and $(x_{2n+1})$ are decreasing, so they have limits $l_0$ and $l_1$. If the limit of one of the subsequences is nonzero, then the other sequence converges to $0$ exponentially, so one of them has to be $0$. Then we have to prove that for all $x_0 \ge 0$ there is a unique $x_1 \ge 0$ such that the sequence converges to $0$. A long computation shows that $(x_{n+3} - x_{n+2}) - (x_{n+1} - x_n) = \frac {x_n^2 x_{n+1}}{(1+x_{n+1})(1+x_n+x_{n+1})} \ge 0$, and so the sequences $(x_{2n+1}-x_{2n})$ and $(x_{2n+2}-x_{2n+1})$ are increasing. In particular, as soon as one of them gets positive, we know that the sequence will not converge. Conversely, if $(x_{2n})$ doesn't converge to $0$ then $(x_{2n+1})$ converges to $0$ and so we must have $x_{2n+1} - x_{2n} > 0$ at some point, and similarly for the other case. This means that $(x_n)$ converges to $0$ if and only if it stays decreasing forever, and we can decide if a particular sequence doesn't converge to $0$ by computing the sequence until it stops decreasing. It also follows that the set $\{(x_0,x_1) \in\Bbb R_+^2\mid \lim x_n = 0\}$ is a closed subset of $\Bbb R_+^2$.<|endoftext|> TITLE: there are infinitely many positive integers $n$ such that $ \lfloor \sqrt{7}\cdot n \rfloor=k^2+1$ $(k\in \Bbb{Z})$ QUESTION [7 upvotes]: Show that there are infinitely many positive integers $n$ such that $$ \lfloor \sqrt{7}\cdot n \rfloor=k^2+1\quad(k\in \Bbb{Z})$$ I think one can use a Pell equation to solve it, but I can't see how. REPLY [6 votes]: I get that there are an infinite number of $n$ such that $\lfloor n\sqrt{d} \rfloor =k^2-1 $, not $k^2+1$.
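(A quick illustration of the $k^2-1$ case in Python, using exact integer arithmetic; successive solutions of $x^2-7y^2=1$ are generated from the fundamental solution $(8,3)$:)

from math import isqrt

x, y = 8, 3  # fundamental solution of x^2 - 7y^2 = 1
for _ in range(5):
    n = x * y
    # floor(n*sqrt(7)) computed exactly as isqrt(7*n^2)
    print(isqrt(7 * n * n) == x * x - 1)  # True each time
    x, y = 8 * x + 21 * y, 3 * x + 8 * y  # next solution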
However, for $d$ such that there are solutions to $x^2-dy^2 = -3$, such as $d=7$, there are $n$ such that $\lfloor n \sqrt{d} \rfloor = k^2+1 $. This generalizes to $k^2 \pm j$ depending on the existence of solutions to $x^2-dy^2 = \pm m$ for different $m$. Here we go. As the OP stated, the Pell equation comes into it. We start with the fact that there are an infinite number of integer solutions to $x^2-dy^2 = 1$, where $d$ is square free. For each of these, $1 =x^2-dy^2 =(x-y\sqrt{d})(x+y\sqrt{d}) $ so $(x-y\sqrt{d}) =\dfrac1{x+y\sqrt{d}} $ or, squaring, $x^2-2xy\sqrt{d}+dy^2 =\dfrac1{(x+y\sqrt{d})^2} $ or $2xy\sqrt{d} =x^2+dy^2-\dfrac1{(x+y\sqrt{d})^2} =x^2+(x^2-1)-\dfrac1{(x+y\sqrt{d})^2} =2x^2-1-\dfrac1{(x+y\sqrt{d})^2} $ or $xy\sqrt{d} =x^2-\frac12(1+\dfrac1{(x+y\sqrt{d})^2}) $. Since $0 < \dfrac1{(x+y\sqrt{d})^2} < \frac12$, $\frac12 < \frac12(1+\dfrac1{(x+y\sqrt{d})^2}) < 1 $ so $\lfloor xy\sqrt{d} \rfloor =\lfloor x^2-\frac12(1+\dfrac1{(x+y\sqrt{d})^2}) \rfloor = x^2-1 $. This is not what is asked. However, if there is one solution to $x^2-dy^2 = -1$, then there are an infinite number of solutions. Modifying this calculation we get $x^2-2xy\sqrt{d}+dy^2 =\dfrac1{(x+y\sqrt{d})^2} $ or $2xy\sqrt{d} =x^2+dy^2-\dfrac1{(x+y\sqrt{d})^2} =x^2+(x^2+1)-\dfrac1{(x+y\sqrt{d})^2} =2x^2+1-\dfrac1{(x+y\sqrt{d})^2} $ or $xy\sqrt{d} =x^2+\frac12(1-\dfrac1{(x+y\sqrt{d})^2}) $ so $\lfloor xy\sqrt{d} \rfloor =\lfloor x^2+\frac12(1-\dfrac1{(x+y\sqrt{d})^2}) \rfloor = x^2+\lfloor\frac12(1-\dfrac1{(x+y\sqrt{d})^2}) \rfloor = x^2 $. However, there are no solutions to $x^2-7y^2 = -1$, so this does not hold. However, suppose there are an infinite number of solutions to $x^2-dy^2 = -m$. Modifying this calculation we get $-m =x^2-dy^2 =(x-y\sqrt{d})(x+y\sqrt{d}) $ or $x-y\sqrt{d} =\dfrac{-m}{x+y\sqrt{d}} $. Squaring, $x^2-2xy\sqrt{d}+dy^2 =\dfrac{m^2}{(x+y\sqrt{d})^2} $ or $2xy\sqrt{d} =x^2+dy^2-\dfrac{m^2}{(x+y\sqrt{d})^2} =x^2+(x^2+m)-\dfrac{m^2}{(x+y\sqrt{d})^2} =2x^2+m-\dfrac{m^2}{(x+y\sqrt{d})^2} $ or $xy\sqrt{d} =x^2+\frac12(m-\dfrac{m^2}{(x+y\sqrt{d})^2}) $ so $\lfloor xy\sqrt{d} \rfloor =\lfloor x^2+\frac12(m-\dfrac{m^2}{(x+y\sqrt{d})^2}) \rfloor = x^2+\lfloor\frac12(m-\dfrac{m^2}{(x+y\sqrt{d})^2}) \rfloor $. If $m$ is odd, $m = 2j+1$, then $\lfloor xy\sqrt{d} \rfloor = x^2+\lfloor\frac12(2j+1-\dfrac{m^2}{(x+y\sqrt{d})^2}) \rfloor =x^2+j $ once $x+y\sqrt{d} > m $. Since there are solutions to $x^2-7y^2 = -3$ (e.g., $5^2-7\cdot 2^2 = -3$) there are an infinite number of solutions, so OP's statement is true.<|endoftext|> TITLE: All pairs $(m,n)$ satisfying $\operatorname{lcm}(m,n)=600$ QUESTION [5 upvotes]: Find the number of pairs of positive integers $(m,n)$, with $m \le n$, such that the 'least common multiple' (LCM) of $m$ and $n$ equals $600$. My attempt: It's very clear that $n\le600$, always. Case when $n=600=2^3\cdot 3\cdot 5^2$: let $m=2^{k_1}\cdot 3^{k_2}\cdot 5^{k_3}$; the numbers of possible values of $k_1$, $k_2$, $k_3$ are $3+1=4$, $1+1=2$, $2+1=3$ respectively. So the number of $m$ which satisfy the above will be $4\cdot 2 \cdot 3=24$. Help me analyze the case $n<600$. REPLY [4 votes]: Forget about the condition $m\leq n$ for the moment.
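As a sanity check before we start counting, here is what an exhaustive search gives (a Python sketch; math.lcm requires Python 3.9 or newer):

from math import lcm

count = sum(1 for m in range(1, 601) for n in range(m, 601) if lcm(m, n) == 600)
print(count)  # 53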
Since $600=2^3\cdot 3^1\cdot 5^2$ we have $$m=2^{\alpha_2}3^{\alpha_3}5^{\alpha_5},\quad n=2^{\beta_2}3^{\beta_3}5^{\beta_5}$$ with $\alpha_i$, $\beta_i\geq0$ and $$\max\{\alpha_2,\beta_2\}=3,\quad \max\{\alpha_3,\beta_3\}=1,\quad \max\{\alpha_5,\beta_5\}=2\ .$$ It follows that $$\eqalign{(\alpha_2,\beta_2)&\in\{(0,3),(1,3),(2,3),(3,3),(3,2),(3,1),(3,0)\}\>,\cr (\alpha_3,\beta_3)&\in\{(0,1),(1,1),(1,0)\}\>,\cr (\alpha_5,\beta_5)&\in\{(0,2),(1,2),(2,2),(2,1),(2,0)\}\cr}$$ are admissible, allowing for $7\cdot3\cdot5=105$ combinations. Exactly one of them has $m=n$, namely $m=n=600$, and in all other $104$ cases $m\ne n$. Since we want $m\leq n$ we have to throw out half of these cases, leaving $52+1=53$ different solutions of the problem.<|endoftext|> TITLE: Why are partial derivatives of a harmonic function also harmonic? QUESTION [5 upvotes]: I've tried manipulating some expressions but still can't quite get my head around why the partial derivatives of $u(x,y)$, a harmonic function, are also harmonic. REPLY [7 votes]: A harmonic function satisfies $\sum_i\frac{\partial^2 f}{\partial x_i^2}=0$. Let's take a look at $g=\frac{\partial f}{\partial x_j}$ for some $j$. Then \begin{align*} \sum_i\frac{\partial^2 g}{\partial x_i^2}&=\sum_i\frac{\partial^3 f}{\partial x_i^2\partial x_j}\\ &=\sum_i\frac{\partial}{\partial x_j}\left(\frac{\partial^2 f}{\partial x_i^2}\right)\\ &=\frac{\partial}{\partial x_j}\sum_i\left(\frac{\partial^2 f}{\partial x_i^2}\right)\\ &=\frac{\partial}{\partial x_j}0\\ &=0 \end{align*} and thus $g$ is harmonic. This is of course assuming that the third partial derivatives of $f$ are well-defined. Additional note: This proof requires that the order in which the partial derivatives are taken does not matter, i.e. $\frac{\partial^3 f}{\partial x_i^2\partial x_j} = \frac{\partial^3 f}{\partial x_j\partial x_i^2}$. I believe that this is not generally true (it does hold if the third partial derivative is continuous), so the proof only works for those functions where this is true. REPLY [3 votes]: Let $u$ be harmonic. If we investigate whether $u_x$ is harmonic we have to suppose that $u_x \in C^2$, hence we suppose $u \in C^3$. Let $v=u_x$ From $u_{xx}+u_{yy}=0$ we get by differentiating w.r.t $x$: $u_{xxx}+u_{yyx}=0$. But this means: $v_{xx}+v_{yy}=0$ .<|endoftext|> TITLE: Why do we only consider ideals with a prime norm when looking at ideals smaller than the Minkowski bound? QUESTION [5 upvotes]: I was working on some examples of how to compute the class number of a quadratic number field: I do understand that for some quadratic number field $\mathbb{Q}(\sqrt{d})$, with $d\in \mathbb{Z}$, $d \notin \{0,1\}$ and squarefree, I need to compute the Minkowski bound and then look at ideals with norm smaller than the Minkowski bound, which then gives me the class number. However, I was wondering why I only look at ideals with a prime number as the norm? One example was the number field $K:=\mathbb{Q}(\sqrt{-163})$ (which has class number $1$) where I computed that in every class of the class group, there exists some ideal $\mathfrak{A}$ with $N(\mathfrak{A}) < 8.1$. I could easily compute that there are no ideals with norm $2,3,5$ or $7$, as those primes are inert in $\mathcal{O}_K$. Now I'm having some trouble understanding why I only need to look at those ideals and ignore those with norm $4,6$ or $8$ (as was done in the example). I assume that there's maybe some argument working with the factorization of ideals into prime ideals (which works in $\mathcal{O}_K$ as a Dedekind domain).
I also looked up other questions on this topic but unfortunately did not find a definite answer. Thank you for your help and explanations! REPLY [3 votes]: You do not; you only consider prime ideals, which may or may not have prime norm. The reason we only consider the prime ideals is that the ideal group is generated by the primes (that's what unique factorization means!) so if all the generators are principal, any product of them is principal. So it is necessary and sufficient that all prime ideals be principal in order to show the ring is a PID.<|endoftext|> TITLE: If the graphs of $f(x)$ and $f^{-1}(x)$ intersect at an odd number of points, is at least one point on the line $y=x$? QUESTION [7 upvotes]: I was reading about intersection points of $f(x)$ and $f^{-1}(x)$ on this site. (Proof: if the graphs of $y=f(x)$ and $y=f^{-1}(x)$ intersect, they do so on the line $y=x$) Then I saw this statement written by N. S.: "If the graphs of $f(x)$ and $f^{-1}(x)$ intersect at a single point, then that point lies on the line $y=x$. It is also true that if the graphs of $f(x)$ and $f^{-1}(x)$ intersect at an odd number of points, then at least one point lies on the line $y=x$. This follows immediately from the observation that the intersection points are symmetric with respect to that line..." I want to know whether it's true or not, and if it's true, how we can prove it algebraically. My try: I tried many functions and the statement was true for all of them, but I can't prove it. (Or disprove it.) REPLY [2 votes]: The argument is mainly a counting argument involving just a little algebra on the functions themselves.
If we have an isomorphism natural in $-$, $\mathcal{C}(-,A)\cong\mathcal{C}(-,B)$, does that imply $A\cong B$? REPLY [4 votes]: Yes (as the other answer points out). But here's a proof that does not invoke Yoneda. It is given that $\mathcal C(-, A) \cong \mathcal C(-, B)$ naturally in $-$. Plugging in $A$: $\mathcal C(A, A) \cong \mathcal C(A, B)$ naturally in $A$. But $1_A$ is an element of $\mathcal C(A, A)$, and is an isomorphism, and the natural isomorphism from $\mathcal C(A, A)$ to $\mathcal C(A, B)$ must map this to an isomorphism in $\mathcal C(A, B)$, which means $A \cong B$. To prove that last claim, let $\alpha \colon \mathcal C(-, A) \Rightarrow \mathcal C(-, B)$ be a natural isomorphism, and $\alpha^{-1} \colon \mathcal C(-,B) \Rightarrow \mathcal C(-, A)$ its inverse. Let $f = \alpha_A(1_A) \in \mathcal C(A, B)$. The naturality square is then as given below. $\require{AMScd}$ \begin{CD} \mathcal C(A, A) @>{\alpha_A}>> \mathcal C(A, B)\\ @A(-\circ f)AA @AA(- \circ f)A \\ \mathcal C(B,A) @>>{\alpha_B}> \mathcal C(B, B) \end{CD} Here $- \circ f$ is the function $\mathcal C(f, A)$ (and also $\mathcal C(f, B)$) whose action is to take a morphism in $\mathcal C(B,A)$ (or $\mathcal C(B,B)$) and precompose it with $f$. We must find an inverse $g \colon B \to A$ for $f$, and an obvious choice is $g = \alpha_B^{-1}(1_B) \in \mathcal C(B, A)$. So now we just need to verify that these are inverses. First, observe from the naturality square that \begin{equation*} \alpha_A \circ (- \circ f) = (- \circ f) \circ \alpha_B = (\alpha_B(-)) \circ f. \end{equation*} Applying these functions to $g$ (which is an element of $\mathcal C(B, A)$), we get \begin{align*} \alpha_A(g \circ f) &= \alpha_B(g) \circ f\\ &= 1_B \circ f\\ &= f. \end{align*} Then $g \circ f = \alpha_A^{-1}(f) = 1_A$. Similarly, $f \circ g = 1_B$.<|endoftext|> TITLE: Direct limit of infinite direct products mapped onto each other via shift maps QUESTION [5 upvotes]: While working on a project, I ended up having to take direct limits, for which I admit I don't have a good intuition. Hoping that it is a simple problem for those who have more experience with direct limits than I do, I decided to ask it here. Let $(G_i)_{i \in \mathbb{N}}$ be a sequence of abelian groups such that $G_i \subseteq G_{i+1}$ and consider the directed system $G_0 \times G_1 \times \dots \rightarrow_{\varphi_0} G_1 \times G_2 \times \dots \rightarrow_{\varphi_1} \dots$ where each homomorphism $\varphi_k$ is given by $(\alpha,\beta,\gamma,\delta,\dots) \mapsto (\alpha\beta, \gamma, \delta, \dots)$. Is it possible to describe the direct limit of this system in terms of standard group theoretic constructions? I am aware of the standard construction of direct limits by taking an appropriate quotient of the disjoint union. I was hoping that there is a "simpler" description which avoids disjoint unions. Here is what I was able to think so far. Consider the system $G_1 \times G_2 \dots \rightarrow_{\psi_0} G_2 \times G_3 \dots \rightarrow_{\psi_1} \dots$ where each homomorphism $\psi_k$ is the left shift map given by $(\alpha,\beta,\gamma,\delta,\dots) \mapsto (\beta, \gamma, \delta, \dots)$. If I am not mistaken, the direct limit of this system should be the reduced product of the groups $G_i$ along the cofinite filter, where the isomorphism takes any element in the direct limit to the equivalence class of the appropriate sequence. This suggests that the reduced product should be a part of the original direct limit I am considering.
However, I can't really figure out how the first component that I discarded is going to interact with the reduced product. REPLY [2 votes]: First of all, there is a much simpler description of direct limits in the case where all of the bonding maps are surjective. In particular, if $$ G_0 \,\xrightarrow{\varphi_0}\, G_1 \,\xrightarrow{\varphi_1}\, G_2 \,\xrightarrow{\varphi_2}\, \cdots $$ is a directed system of groups and epimorphisms, then the direct limit is a quotient $G_0/N$, where $N$ is the following normal subgroup of $G_0$: $$ N \,=\, \{g\in G_0 \mid \varphi_n\cdots\varphi_1\varphi_0(g) = 1\text{ for some }n\in\mathbb{N}\} \,=\, \bigcup_{n\in\mathbb{N}} \ker(\varphi_n\cdots \varphi_1\varphi_0). $$ For the directed system you have given, it follows that the direct limit is the quotient $$ (G_0\times G_1 \times \cdots )\,\bigr/\,N $$ where $N$ is the subgroup of the infinite direct sum $G_0 \oplus G_1 \oplus \cdots$ consisting of all tuples $(g_0,g_1,\ldots,g_n,1,1,1\ldots)$ for which $g_0g_1\cdots g_n = 1$.<|endoftext|> TITLE: Calculate $\int_0^\infty {\frac{x}{{\left( {x + 1} \right)\sqrt {4{x^4} + 8{x^3} + 12{x^2} + 8x + 1} }}dx}$ QUESTION [14 upvotes]: Prove $$I=\int_0^\infty {\frac{x}{{\left( {x + 1} \right)\sqrt {4{x^4} + 8{x^3} + 12{x^2} + 8x + 1} }}dx} = \frac{{\ln 3}}{2} - \frac{{\ln 2}}{3}.$$ First note that $$4{x^4} + 8{x^3} + 12{x^2} + 8x + 1 = 4{\left( {{x^2} + x + 1} \right)^2} - 3,$$ we let $${x^2} + x + 1 = \frac{{\sqrt 3 }}{{2\cos \theta }} \Rightarrow x = \sqrt { - \frac{3}{4} + \frac{{\sqrt 3 }}{{2\cos \theta }}} - \frac{1}{2},$$ then $$I=\frac{1}{2}\int_{\frac{\pi }{6}}^{\frac{\pi }{2}} {\frac{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} - 1} \right)\sec \theta }}{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} + 1} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}d\theta } .$$ we have \begin{align*} &\frac{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} - 1} \right)\sec \theta }}{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} + 1} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }} = \frac{{{{\left( {\sqrt {2\sqrt 3 \sec \theta - 3} - 1} \right)}^2}\sec \theta }}{{\left( {2\sqrt 3 \sec \theta - 4} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}\\ =& \frac{{\left( {2\sqrt 3 \sec \theta - 2 - 2\sqrt {2\sqrt 3 \sec \theta - 3} } \right)\sec \theta }}{{\left( {2\sqrt 3 \sec \theta - 4} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }} = \frac{{\left( {\sqrt 3 \sec \theta - 1 - \sqrt {2\sqrt 3 \sec \theta - 3} } \right)\sec \theta }}{{\left( {\sqrt 3 \sec \theta - 2} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}\\ = &\frac{{\left( {\sqrt 3 \sec \theta - 1} \right)\sec \theta }}{{\left( {\sqrt 3 \sec \theta - 2} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }} - \frac{{\sec \theta }}{{\sqrt 3 \sec \theta - 2}}. \end{align*} and $$\int {\frac{{\sec \theta }}{{\sqrt 3 \sec \theta - 2}}d\theta } = \ln \frac{{\left( {2 + \sqrt 3 } \right)\tan \frac{\theta }{2} - 1}}{{\left( {2 + \sqrt 3 } \right)\tan \frac{\theta }{2} + 1}}+ C.$$ while \begin{align*}&\int {\frac{{\left( {\sqrt 3 \sec \theta - 1} \right)\sec \theta }}{{\left( {\sqrt 3 \sec \theta - 2} \right)\sqrt {2\sqrt 3 \sec \theta - 3} }}d\theta } = \int {\frac{{\sqrt 3 - \cos \theta }}{{\left( {\sqrt 3 - 2\cos \theta } \right)\sqrt {2\sqrt 3 \cos \theta - 3{{\left( {\cos \theta } \right)}^2}} }}d\theta } \\ = &\frac{1}{2}\int {\frac{1}{{\sqrt {2\sqrt 3 \cos \theta - 3{{\left( {\cos \theta } \right)}^2}} }}d\theta } + \frac{{\sqrt 3 }}{2}\int {\frac{1}{{\left( {\sqrt 3 - 2\cos \theta } \right)\sqrt {2\sqrt 3 \cos \theta - 3{{\left( {\cos \theta } \right)}^2}} }}d\theta } . 
\end{align*} But how can we continue? It seems to be related to elliptic integrals. REPLY [5 votes]: This is a pseudo-elliptic integral; it has an elementary antiderivative: $$\int \frac{x}{(x+1)\sqrt{4x^4+8x^3+12x^2+8x+1}} dx = \frac{\ln\left[P(x)+Q(x)\sqrt{4x^4+8x^3+12x^2+8x+1}\right]}{6} - \ln(x+1) + C$$ where $$P(x) = 112x^6+360x^5+624x^4+772x^3+612x^2+258x+43$$ and $$Q(x) = 52x^4+92x^3+30x^2-22x-11$$ To obtain this answer, just follow the systematic method of symbolic integration over a simple algebraic extension. Alternatively, you can feed it to a CAS with the Risch algorithm implemented (not Mathematica); a convenient choice is the online Axiom sandbox.<|endoftext|> TITLE: If $xy+xz+yz=1+2xyz$ then $\sqrt{x}+\sqrt{y}+\sqrt{z}\geq2$. QUESTION [10 upvotes]: Let $x$, $y$ and $z$ be non-negative numbers such that $xy+xz+yz=1+2xyz$. Prove that: $$\sqrt{x}+\sqrt{y}+\sqrt{z}\geq2$$ The equality occurs for $x=y=1$ and $z=0$. I tried Lagrange Multipliers and more, but I don't see a proof. REPLY [2 votes]: Short proof. Clearly $xy+yz+zx = 1+2xyz \ge 1$. We have by AM-GM $$(\sqrt{x}+\sqrt{y}+\sqrt{z})^2=x+y+z+2(\sqrt{xy}+\sqrt{yz}+\sqrt{zx})$$ $$\ge x+y+z+ \frac{4xy}{x+y}+\frac{4yz}{y+z}+\frac{4zx}{z+x} $$ $$ \ge x+y+z + \frac{4(xy+yz+zx)}{x+y+z} \ge x+y+z+\frac{4}{x+y+z} \ge 4$$ The proof is complete.<|endoftext|> TITLE: Looking for references about a tessellation of a regular polygon by rhombuses. QUESTION [17 upvotes]: A regular polygon with an even number of vertices can be tessellated by rhombi (or lozenges), all with the same side length, with angles in arithmetic progression as can be seen in figures 1 to 3. Fig. 1 Fig. 2 Fig. 3 I had already seen this kind of tessellation, and I met it again in a recent question on this site (Tiling of regular polygon by rhombuses). Let the polygon be $n$-sided with $n$ even. The starlike pattern of rhombi issuing from the rightmost point, which we will call the source, can be seen as successive ''layers'' of similar rhombi: a first layer $R_1$ with the most acute angles (there are $m:=\dfrac{n}{2}-1$ of them), then, moving away from the source, a second layer $R_2$ with $m-1$ rhombi, etc., with a grand total of $\dfrac{m(m+1)}{2}$ rhombi. It is not difficult to show that rhombi in layer $R_p$ are characterized by angles $p\dfrac{\pi}{m+1}.$ In fact (I had no idea of it at first), the rhombi pattern described above is much less mysterious when seen inside a larger structure such as the one shown in figure 4. The generation process is simple: a regular polygon with $m$ sides is rotated by successive rotations with angle $\dfrac{\pi}{m+1}$ around one of its vertices. Fig. 4 My question about this tessellation is twofold: Where can I find some references? Are there known properties/applications? The different figures have been produced by Matlab programs. The program that has generated Fig. 2 is given below; it uses complex numbers, especially apt to render angular relationships:
hold on; axis equal                    % overlay the plots, equal aspect ratio
m=9; n=2*m+2;                          % polygon with n = 20 sides
i=complex(0,1); pri=exp(2*i*pi/n);     % primitive n-th root of unity
v=pri.^(0:(n-1));                      % all the n-th roots of unity
for k=0:m-1
z=1-(pri^k)*(1-v(1:m+2-k));            % rotate the polyline v(...) about the point 1 by angle 2*k*pi/n
plot([z,NaN,conj(z)],'color',rand(1,3),'linewidth',5);   % draw it together with its mirror image
end;
Edit: I am indebted to @Ethan Bolker for drawing my attention to zonohedra (or zomes, as some architects call them), a 3D extension of Fig. 4 (or an equivalent one with fewer or more circles); by 3D extension, we mean a polyhedron made of (planar) rhombic facets whose projection onto the $xOy$ plane is the initial figure, as shown in Fig. 5. The idea is simple (we refer here to the two left figures in Fig.
6): the central red "layer" (with the thinnest rhombi) is "lifted" into an umbrella whose highest point, the apex of the zonohedron, is say at height $z=1$, with the bottom of the $n$ ribs of the umbrella at $z=1-a$. Let us denote by $V_k, \ k=1, \cdots, n$ with components $\left(\cos(\tfrac{2 \pi k}{n}),\sin(\tfrac{2 \pi k}{n}), -a\right)$ the (3D) vectors issuing from the apex. Layer $1$ rhombi have sides $V_k$ and $V_{k+1}$; by the very definition of a rhombus, layer $2$ (yellow) rhombi have sides $V_k$ and $V_{k+2}$, etc. Note that Fig. 6, unlike Fig. 5, displays a closed zonohedron obtained by gluing 2 identical zonohedra. The right part of Fig. 6 displays the same zonohedron colored in a spiraling way. Let us remark that there is a degree of freedom, namely how far the initial "umbrella" with ribs $V_k$ is opened, i.e., $a$ can be chosen. Fig. 5: The upper part of a regular zonohedron and its projection onto the horizontal plane. Fig. 6: A typical regular zonohedron generated by Minkowski addition of vectors $(\cos(2k \pi/n), \sin(2k \pi/n),1)$ for $k=1,2,...n$ with $n=15$. Fig. 7: A rhombic 132-hedron (image borrowed from the Wikipedia article). See the very educational page on S. Dutch's site: (https://www.uwgb.edu/dutchs/symmetry/zonohedra.HTM) (sorry: broken link) About "zomes", a word coined by architects as a condensate of "zonohedra" and "domes", have a look at (http://baselandscape.com/portfolio/the-algarden/) (http://www.structure1.com/zomes-coming-to-the-states/). Have a look at the article (https://en.wikipedia.org/wiki/Zonohedron) which enlarges the scope; I have isolated the picture of the rhombic 132-hedron (Fig. 7). The blog of "RobertLovePi" has stunning illustrations, for example: (https://robertlovespi.net/2014/02/16/zonohedron-featuring-870-rhombic-faces-of-15-types/). A general definition of zonotopes (general name for zonohedra) is as a Minkowski addition of segments. See (http://www.cs.mcgill.ca/~fukuda/760B/handouts/expoly3.pdf). See also the article by Fields medallist Jean Bourgain (https://link.springer.com/content/pdf/10.1007%2FBF02189313.pdf). A funny article about zomes (http://archive.bridgesmathart.org/2012/bridges2012-545.pdf). "Bridges Organization" promotes connections between mathematics and arts, in particular graphical arts. See (https://www.encyclopediaofmath.org/index.php/Zonohedron) and references therein. The zonotopes page on the site of David Eppstein. The rhombic dodecahedron is a zonohedron that can tessellate the 3D space. A puzzling Geogebra animation. A very interesting 19-page article by Sandor Kabai in the book entitled "Homage to a Pied Puzzler" Ed. Pegg Jr, Alan H. Schoen, Tom Rodgers Editors, AK Peters, 2009 (this book is a tribute to Martin Gardner). A zonohedron can be "decomposed" as a sum of (hyper) parallelepipeds, giving a way to compute its volume (https://mathoverflow.net/q/349558) REPLY [3 votes]: Elaborating some more on my previous comment: Don't know about real applications, but the construction would make a great "proof without words" for this trig identity: $\;\sum_{k=1}^m(m-k+1) \sin \dfrac{k \pi}{m+1}= \dfrac{m+1}{2} \cot \dfrac{\pi}{2(m+1)}\,$. With OP's notation where $\,n=2(m+1)\,$ is the number of sides of the regular polygon, it can be easily seen that there are $\,m\,$ "bands" of congruent rhombi in the tessellation.
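Before deriving it, here is a quick numerical check of the identity (a Python sketch):

from math import sin, tan, pi

for m in range(1, 10):
    lhs = sum((m - k + 1) * sin(k * pi / (m + 1)) for k in range(1, m + 1))
    rhs = (m + 1) / (2 * tan(pi / (2 * (m + 1))))  # (m+1)/2 * cot(pi/(2(m+1)))
    print(m, abs(lhs - rhs) < 1e-9)  # True for every m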
From right to left, the first band consists of $\,m\,$ rhombi with an angle of $\,\frac{\pi}{m+1}\,$, then the $k^{th}$ band is made of $\,m-k+1\,$ rhombi with increasing angles $\,\frac{k \pi}{m+1}\,$, all the way to $\,k=m\,$ which is the single leftmost rhombus. A rhombus with side $\,a\,$ and an angle of $\,\alpha\,$ has an area of $\,a^2 \sin \alpha\,$, and the areas of all bands sum up to the area of the regular polygon $\,\frac{na^2}{4} \cot \frac{\pi}{n}\,$, from which the identity above follows. Other trigonometric identities can be derived from this tesselation as well. For just one example, in the case of odd $\,m\,$ the horizontal diagonals of the odd-numbered rhombi add up to the diameter of the circumscribed circle $\,\frac{a}{\sin \pi/n}\,$, and therefore $\,\sum_{k=1}^{(m+1)/2} \cos \frac{(2k-1) \pi}{2(m+1)} = \frac{1}{2} \csc \frac{\pi}{2(m+1)}\,$.<|endoftext|> TITLE: Asymptotic behavior of integral $\int_1^\infty \frac{e^{-xt}}{\sqrt{1+t^2}}dt$ as $x \to 0$ QUESTION [6 upvotes]: I wish to prove that: $$ \int_1^\infty \frac{e^{-xt}}{\sqrt{1+t^2}}dt \sim - \ln x \quad \mathrm{as} \quad x \to 0^+$$ using the fact that: $$ f \underset{b}{\sim} g \Rightarrow \int_a^x f \underset{x \to b}{\sim} \int_a^x g$$ if $\int_a^x g \to \infty$ as $x \to b$ and $f$ and $g$ are integrable on every interval $[a,c]$ with $c < b$. Does anyone have an idea? Thank you! REPLY [3 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\left.\int_{1}^{\infty}{\expo{-xt} \over \root{1 + t^{2}}}\,\dd t\, \right\vert_{\ x\ >\ 0} = \int_{1}^{\infty}{\expo{-xt} \over t}\,\dd t + \int_{1}^{\infty}\expo{-xt}\pars{{1 \over \root{1 + t^{2}}} - {1 \over t}} \,\dd t \\[5mm] & = x\int_{1}^{\infty}\ln\pars{t}\expo{-xt}\,\dd t - \int_{1}^{\infty}{\expo{-xt} \over t\root{1 + t^{2}}\pars{\root{1 + t^{2}} + t}} \,\dd t \\[1cm] & = -\ln\pars{x}\expo{-x} \\[5mm] & + \int_{x}^{\infty}\ln\pars{t}\expo{-t}\,\dd t + \int_{1}^{\infty}\ln\pars{t}\expo{-t}\,\dd t - \int_{1}^{\infty}{\expo{-xt} \over t\root{1 + t^{2}}\pars{\root{1 + t^{2}} + t}} \,\dd t \end{align} The 'remaining' integrals are finite as $\ds{x \to 0^{+}}$ such that $$\bbx{\ds{% \int_{1}^{\infty}{\expo{-xt} \over \root{1 + t^{2}}}\,\dd t \sim -\ln\pars{x} \quad\mbox{as}\ x \to 0^{+}}} $$<|endoftext|> TITLE: How to find length of a part of a curve? QUESTION [27 upvotes]: How can I find the length of a curve, for example $f(x) = x^3$, between two limits on $x$, for example $1$ and $8$? I was bored in a maths lesson at school and posed myself the question: What's the perimeter of the region bounded by the $x$-axis, the lines $x=1$ and $x=8$ and the curve $y=x^3$? Of course, the only "difficult" part of this is finding the length of the part of the curve between $x=1$ and $x=8$. Maybe there's an established method of doing this, but I as a 16 year-old calculus student don't know it yet. 
So my attempt at an approach was to superimpose many triangles onto the curve so that I could sum all of their hypotenuses. Just use many triangles like the above, $$ \lim_{\delta x\to 0}\frac{\sqrt{\left(1+\delta x-1\right)^2+\left(\left(1+\delta x\right)^3-1^3\right)^2}+\sqrt{\left(1+2\delta x-\left(1+\delta x\right)\right)^2+\left(\left(1+2\delta x\right)^3-\left(1+\delta x\right)^3\right)^2}+\cdots}{\frac7{\delta x}} $$ I'm not entirely sure if this approach is correct though, or how to go on from the stage I've already got to. REPLY [6 votes]: Your idea to use right triangles is good! It might be easier to look at a general curve than $y=x^3$, though, to build the machinery. Let's say we wish to measure the arclength between $x=a$ and $x=b$. Choose $N+1$ points $\{P_1,P_2,\dots,P_{N+1}\}$ to partition the curve into $N$ sections. In the graphic I chose $N=6$ such that I had $7$ points. The distance between two points, $P_{n}$ and $P_{n+1}$, is essentially given by the Pythagorean theorem. $$ P_{n}P_{n+1} = \sqrt{(\Delta x)^2 + (\Delta y)^2}$$ We can estimate the arclength by summing the distances between these points. $$\sum_{n=1}^N P_nP_{n+1} = \sum_{n=1}^{N} \sqrt{(\Delta x_n)^2 + (\Delta y_n)^2}$$ As we take the number of partitioning points to infinity, or $N\to\infty$, we have $\Delta x\to dx$ and $\Delta y\to dy$. We call the resulting differential distance $ds$. It is a measure of infinitesimal length. \begin{align} ds &= \sqrt{dx^2 + dy^2} \\ &= \sqrt{1 + \left(\frac{dy}{dx}\right)^2}dx \\ &= \sqrt{1 + \left(f'(x)\right)^2}dx \end{align} In the limit, the sum above becomes the integral you seek (i.e. we are now summing up infinitely many infinitesimal arc length elements along the curve). $$\sum_{n=1}^{N} P_nP_{n+1} \to \int_{P_1}^{P_{N+1}} ds = \int_a^b \sqrt{1 + \left(f'(x)\right)^2}dx$$ You ought to find a similar dialogue in Stewart's or Thomas's Calculus. Look in the index for the arc length integral.<|endoftext|> TITLE: How many $k$-tuples $(A_1,A_2,\cdots, A_k)$ of subsets of $\{1,2,\cdots ,n\}$ are there? QUESTION [5 upvotes]: For given natural numbers $n,k$, how many $k$-tuples $(A_1,A_2,\cdots ,A_k)$ are there such that $$A_1\subseteq A_2\subseteq\cdots \subseteq A_k\subseteq \{1,2,3,\cdots ,n\}$$ I've tried to prove by induction on $k$ that the number of $k$-tuples is equal to $$\sum_{t=0}^n{n\choose t}k^t=(k+1)^n$$ Though I have no idea what happens when you add another subset, my idea was, when adding another subset, to let it be $A_1$ and shift every other subset's index by $1$. Then split it into $n+1$ cases such that $|A_2|=t$ for each $t$ from $0$ to $n$. Maybe there is a better way. REPLY [4 votes]: The easiest solution is certainly the one in the comments, but we can approach the problem using induction in the following way: The first observation is that the only important property of the set $\lbrace 1, 2, \dots, n \rbrace$ in the problem is that it has $n$ elements. We would have the same answer if we replaced it with any other set with $n$ elements. We now use induction on $k$. For the base case, we consider $k = 1$. If we only want a single subset $$ A_1 \subseteq \lbrace 1, 2, \dots, n \rbrace $$ then $A_1$ can be any of the $2^n = {(1+1)}^n$ subsets of $\lbrace 1, 2, \dots, n \rbrace$, and so the number of subsets in this case is indeed ${(k+1)}^n$. Now suppose that the result is true for some $k$ and for every $n$.
We wish to then prove that the number of ways of choosing sets $A_1, A_2, \dots, A_{k+1}$ such that $$ A_1 \subseteq A_2 \subseteq \dots \subseteq A_{k+1} \subseteq \lbrace 1, 2, \dots, n \rbrace $$ is equal to ${(k+2)}^n$. We count the number of ways of choosing the subsets by considering the number of elements in $A_{k+1}$. Let this number be $m$. Then there are $\binom{n}{m}$ ways to choose the elements in $A_{k+1}$. Now $A_{k+1}$ is a set with $m$ elements, so by our earlier observation that the set $\lbrace 1, 2, \dots, n \rbrace$ is arbitrary, we can see that once we have chosen $A_{k+1}$, the number of ways of choosing sets $A_1, A_2, \dots, A_k$ such that $$ A_1 \subseteq A_2 \subseteq \dots \subseteq A_k \subseteq A_{k+1} $$ is equal to ${(k+1)}^m$. Thus the number of ways of choosing sets $A_1, A_2, \dots, A_{k+1}$ such that $$ A_1 \subseteq A_2 \subseteq \dots \subseteq A_{k+1} \subseteq \lbrace 1, 2, \dots, n \rbrace $$ and such that $A_{k+1}$ has $m$ elements is equal to $$ \binom{n}{m} {(k+1)}^m. $$ We see that the total number of ways of choosing the sets $A_1, A_2, \dots, A_{k+1}$ is then equal to $$ \sum_{m=0}^{n} \binom{n}{m} {(k+1)}^m $$ which by the binomial theorem is equal to ${(k+2)}^n$.<|endoftext|> TITLE: Is there a notion of "normal subcategory" analogous to the notion of normal subgroup? QUESTION [5 upvotes]: The idea of a replete subcategory to me seems very analogous to the idea of a characteristic subgroup, as both are, in some sense, subobjects invariant under a notion of equivalence (categorical equivalence for replete subcategories, automorphisms for characteristic subgroups). This led me to the question: Is there a notion in category theory that is similarly analogous to that of a normal subgroup in group theory? REPLY [3 votes]: Despite the negativity of comments on this question, considering how the concept of normal subgroup extends to other categories is a very fruitful pastime, and I'll give a flavour of a few generalizations. Since it seems to have been the motivation for asking this, I'll also explain how they apply to the ($1$-)category of categories. The category of groups has the rather special property of admitting zero morphisms: between any pair of groups, there is a homomorphism sending everything to the identity element. A subgroup $N \leq G$ is normal if there is some homomorphism $h: G \to H$ such that the equalizer of $h$ and the zero morphism $0: G \to H$ is precisely the subgroup $N$. Of course, not every category has zero morphisms. The most direct generalization in a category with equalizers is that of a regular subobject, where we just ask for expressibility of the subobject as an equalizer of some pair of morphisms. $$N \hookrightarrow G \rightrightarrows H$$ Characterizing regular subcategories of categories is a little tricky (and finding references to them in the literature is awkward because "regular categories" refers to something else!) They are subcategories, as you would expect, but whenever we have morphisms $g: A \to C$ and $u:B \to C$ in a regular subcategory and morphisms $f:A \to B$, $v: C \to B$ with $u \circ f = g$ and $v \circ u = \mathrm{id}_B$, then we must also have $f$ in the regular subcategory, since knowing that $Fg = Gg$ and $Fu = Gu$ for a pair of functors $F,G$ means that $Fu \circ Ff = F(g) = G(g) = Gu \circ Gf = Fu \circ Gf$, whence $Ff = Gf$ (we may cancel $Fu$ on the left because it is a split monomorphism, with retraction $Fv$); the dual property is also necessary{*}.
This in particular means that whenever a morphism of a regular subcategory has a two-sided inverse in the larger category, the regular subcategory must contain that inverse. However, it is a bit weaker than being closed under conjugation: when we view the alternating group $A_5$ as a one-object category, the equalizer of the identity homomorphism with the homomorphism obtained by conjugation with the element $(1 2)(3 4)$ is a non-trivial subgroup (which is clearly not a normal subgroup). In particular, there are regular subobjects in the category of groups which are not normal subgroups, so this is a weak generalization. Alternatively, we could observe that in the category of groups, the zero morphisms are those which factor through the trivial group, which is a 'zero object' in the category of groups, being both initial and terminal. As such, another way to express normal subgroups of $G$ are those which can be expressed as a kernel: the pullback along some group homomorphism $G \to H$ of the unique homomorphism $0 \to H$. $$\require{AMScd} \begin{CD} N @>>> G;\\ @VVV @VVV \\ 0 @>>{!}> H; \end{CD}$$ This only uses the fact that $0$ is an initial object, so we can extend this definition to any category with an initial object and pullbacks. Unfortunately, this doesn't extend well to the 1-category of categories, since this has a strict initial object: the empty category. If we pull back the unique morphism from the empty category, we will always get the empty subcategory... All is not lost, however. We can also note that a subgroup is normal if and only if it is the fiber of its cokernel, or in other words if the pushout of the inclusion of the subgroup along the unique morphism to the zero object produces a pullback square. $$\require{AMScd} \begin{CD} N @>>> G;\\ @V{!}VV @VVV \\ 1 @>>> H; \end{CD}$$ This time we are using the fact that the zero object is a terminal object, and this definition makes sense as soon as we have a terminal object and pushouts. This is stronger than the definition of regular subobject I gave earlier, since $N \hookrightarrow G$ is automatically the equalizer of the given morphism $G \to H$ and the constructed morphism $G \to 1 \to H$ in this scenario. Note that we couldn't just define normal subobjects to be pullbacks of a morphism $1 \to H$ since in general there may be several (or none) of these to choose from; using a pushout in this definition gives us a canonical choice. This definition coincides with the usual one for groups, even when we view groups as one-object categories living in the 1-category of categories. A normal subcategory in this sense has the 2-out-of-3 property and has morphisms which are closed under conjugation by any isomorphisms in the larger category between its objects{*}. There are further characterizations one could generalize, like closure under inner automorphisms, although that involves exploiting some 2-categorical structure (to extend the notion of inner automorphism to categories, where conjugation by a single element doesn't directly make sense, we would use the fact that in the 2-category of groups these correspond to automorphisms which are naturally isomorphic to the identity automorphism). As always in category theory, which generalization is the right one depends on what you want to do with it, but I encourage you to have fun seeing what else you can find! 
{*} Note that I have not proven, nor do I know, whether the properties I give are sufficient as well as necessary.<|endoftext|> TITLE: $\lim_{n\to\infty}\int_{-\infty}^{\infty}{\sin(n+0.5)x\over \sin(x/2)}\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}$ QUESTION [8 upvotes]: How can we show that $$\lim_{n\to\infty}\int_{-\infty}^{\infty}{\sin(n+0.5)x\over \sin(x/2)}\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}\tag1$$ For $(1)$, substitution doesn't work, and neither does integration by parts. We know $(2)$ $$\int_{-\infty}^{\infty}{\mathrm dx\over 1+x^2}=\pi\tag2$$ $${\sin(n+0.5)x\over \sin(x/2)}={\sin(nx)\cos(x/2)+\sin(x/2)\cos(nx)\over \sin(x/2)}\tag3$$ Simplified to $$=\sin(nx)\cot(x/2)+\cos(nx)\tag4$$ $$\lim_{n\to\infty}\int_{-\infty}^{\infty}\sin(nx)\cot(x/2)\cdot{\mathrm dx\over 1+x^2}+\int_{-\infty}^{\infty}\cos(nx)\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}\tag5$$ $$\lim_{n\to\infty}\int_{-\infty}^{\infty}\sin(nx)\cot(x/2)\cdot{\mathrm dx\over 1+x^2}+{\pi\over e^n}=\pi\cdot{e+1\over e-1}\tag6$$ $$\lim_{n\to\infty}\int_{-\infty}^{\infty}\sin(nx)\cot(x/2)\cdot{\mathrm dx\over 1+x^2}=\pi\cdot{e+1\over e-1}\tag7$$ I am not sure how to continue. REPLY [12 votes]: Here is one quick way. Note that $${\sin(n+0.5)x\over \sin(x/2)}$$ is a Dirichlet kernel and we may use the identity (which can easily be proven) $${\sin(n+0.5)x\over \sin(x/2)} = 1 + 2\sum_{k=1}^n \cos (kx)$$ to rewrite the integral as \begin{align} I&= \lim_{n\to\infty}\int_{-\infty}^{\infty}{\sin(n+0.5)x\over \sin(x/2)}\cdot{\mathrm dx\over 1+x^2}=\lim_{n\to\infty}\int_{-\infty}^{\infty}\left[1 + 2\sum_{k=1}^n \cos (kx)\right]\cdot{\mathrm dx\over 1+x^2}\\ &=\int_{-\infty}^{\infty}{\mathrm dx\over 1+x^2} + 2\sum_{k=1}^{\infty }\int_{-\infty}^{\infty}{\cos (kx) \over 1+x^2}\,\mathrm dx \end{align} The latter integral can be evaluated using residues to get $$I = \pi + 2\pi \sum_{k=1}^{\infty }e^{-k}=\pi + \frac{2\pi}{e-1}=\pi \frac{e+1}{e-1}$$<|endoftext|> TITLE: Derivative of $\int_{0}^{x} \sin(1/t) dt$ at $x= 0$ QUESTION [5 upvotes]: I've been trying to figure out how to evaluate $$ \frac{d}{dx}\int_{0}^{x} \sin(1/t) dt $$ at $x = 0$. I know that the integrand is undefined at $x = 0,$ but is there any way to "extend" the derivative to the point? Or is it not differentiable there - and if so, why? REPLY [2 votes]: Use the function $g(x) =x^{2}\cos(1/x),g(0)=0$ so that $g$ is differentiable with $$g'(x) =2x\cos(1/x)+\sin(1/x),g'(0)=0$$ and hence upon integrating we get $$\frac{1}{x}\int_{0}^{x}\sin(1/t)\,dt=\frac{g(x)}{x}-\frac{2}{x}\int_{0}^{x}t\cos(1/t)\,dt$$ Taking limits as $x\to 0$ we can see that the RHS tends to $0$, so the desired derivative is $0$.<|endoftext|> TITLE: Find all conditions for $x$ such that the equation $1\pm 2 \pm 3 \pm 4 \pm \dots \pm n=x$ has a solution. QUESTION [13 upvotes]: Find all conditions for $x$ so that the equation $1\pm 2 \pm 3 \pm 4 \pm \dots \pm 1395=x$ has a solution. My attempt: $x$ cannot be odd because the left-hand side is always even, so we have $x=2k$ $(k \in \mathbb{Z})$; also, it has a maximum and a minimum: $1-2-3-4-\dots-1395\le x \le 1+2+3+4+\dots +1395$. But I can't show whether these conditions are enough, or find what other conditions are needed. REPLY [4 votes]: Let $K = \sum_{i=1}^n {k_i}i$ where $k_i \in \{1,-1\}$ be one of the expressible numbers and $M = \sum_{i=1}^n {m_i}i$ where $m_i \in \{1,-1\}$ be another. $M - K = \sum_{i=1}^n (m_i - k_i) i$, where each $m_i-k_i\in\{-2,0,2\}$, is an even number, so all such numbers have the same parity.
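This parity claim, and the full characterization completed below (note that the answer allows every sign $k_i$, including the first, to be $\pm1$), can be confirmed by brute force for small $n$. A minimal Python sketch, added for illustration (exponential in $n$, so small $n$ only):
from itertools import product

# Reachable values of k_1*1 + k_2*2 + ... + k_n*n with all k_i in {+1,-1}:
# claimed to be exactly the integers in [-n(n+1)/2, n(n+1)/2] having the
# same parity as n(n+1)/2.
for n in range(1, 12):
    total = n * (n + 1) // 2
    reachable = {sum(s * k for s, k in zip(signs, range(1, n + 1)))
                 for signs in product((1, -1), repeat=n)}
    assert reachable == set(range(-total, total + 1, 2)), n
print("characterization verified for n = 1..11")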
Clearly any $K$ satisfies $-\frac{n(n+1)}2=- \sum i \le K \le \sum i=\frac{n(n+1)}2$. Let $K < \frac{n(n+1)}2$, so that one of the ${k_i} = -1$. Let $j$ be such that ${k_{i}} = 1$ for all $i < j$ but ${k_j} = -1$. Let $\overline{K} = \sum {m_i}i$ where $m_i = k_i$ for $i \ne j, j-1$; ${m_j} = 1$ (whereas ${k_j} = -1$) and, if $j > 1$, then ${m_{j-1}} = -1$ (whereas ${k_{j-1}} = 1$). Then $\overline {K} = K + 2j - 2(j-1) = K + 2$. So via induction, all (and only) the $K$ with $-\frac{n(n+1)}2 \le K \le \frac{n(n+1)}2$ and of the same parity as $\frac{n(n+1)}2$ are possible. So for $n = 1395$, all even numbers between $-\frac{1395\cdot1396}2$ and $\frac{1395\cdot1396}2$ are possible.<|endoftext|> TITLE: Eisenbud 2.16 - units and nilpotents QUESTION [5 upvotes]: This should be a pretty easy problem, but I'm a dummy so I'm stuck. Here's the statement: Let $R$ be a $\mathbb Z$-graded ring, and $M$ a graded $R$-module, and let $x \in R_k$ for some non-zero integer $k$. Then $u = 1-x$ is not a zero divisor. Show that $u$ is a unit if and only if $x$ is nilpotent. Now I know that a similar question has been asked here many times before, so let me say that I know how to show $u$ is not a zero divisor, and I can show that if $x$ is nilpotent, $u$ is a unit. This is easy and has been done on this site a million times. My struggle is in the converse, that is to say, if $u$ is a unit, then I want to prove that $x$ is nilpotent. Apologies if this has also already been done on this site, but I can't seem to find the question on hand. REPLY [4 votes]: Assume that $$(1-x)y=1$$ and let $y=\sum y_i$ be a sum of homogeneous elements $y_i$; then we have $$\sum y_i-xy_i=1$$ Now we see that $$y_0=1$$ and that $$y_{i+k}=xy_i \quad (i+k\neq 0)$$ Since the sum is finite, $x$ is nilpotent. Note that if $-l$ is the smallest negative index where $y_{-l}$ is non-zero, then (assuming $k$ positive, by symmetry) comparing the components of degree $-l$ in $\sum y_i - xy_i=1$ gives $$y_{-l}=0,$$ since every term $xy_i$ has degree $i+k>-l$; this rules out negative indices.<|endoftext|> TITLE: Is this a construction of $E_8$? QUESTION [7 upvotes]: Let $\{1, \omega, \omega^2\}$ be the three cube-roots of one. Define the Eisenstein integers, $\Bbb{E}$, to be the $\Bbb{Z}$-linear combinations of $1$ and $\omega$. Note that $\omega^2 = -1-\omega \in \Bbb{E}$. Let $\lambda = 1-\omega \in \Bbb{E}$. If we identify elements of $\Bbb{E}$ that differ by a multiple of $\lambda$, we obtain three equivalence classes, with representative elements $\{0, 1, -1\}$. Let $c: \Bbb{E} \to \{0,+, -\}$ be the classifier function that tells us which equivalence class a given integer belongs to. Define the Tetracode, $T$, to be the following set (which is a perfect linear error-correcting code with Hamming distance 3): $$\left\{\begin{matrix} (0,0,0,0), & (0,+,+,+), & (0,-,-,-), \\ (+,0,+,-), & (+,+,-,0), & (+,-,0,+), \\ (-,0,-,+), & (-,+,0,-), & (-,-,+,0) \end{matrix}\right\}$$ Let $E_8' = \left\{(w, x, y, z) \in \Bbb{E}^4 \mid (c(w), c(x), c(y), c(z)) \in T\right\}$. It is an 8-dimensional lattice, in which every point has 240 nearest neighbours. (24 of those neighbours are found by changing one coordinate, and 216 are found by changing three coordinates). Is $E_8'$ equivalent to $E_8$? REPLY [5 votes]: Short answer: yes, because $E_8$ is the unique eight-dimensional lattice with kissing number 240 (the largest possible). (Reference: Theorem 8, Chapter 14 by Bannai and Sloane, from Conway and Sloane, Sphere packings, lattices and groups, 1988.)
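The 240 minimal vectors are easy to confirm by brute force directly from the question's definition of $E_8'$. A self-contained Python sketch, added for illustration; it uses the residue description $c(a+b\omega)=(a+b)\bmod 3$ (with $+\mapsto1$, $-\mapsto2$), which is made explicit later in this answer:
from itertools import product

# Tetracode over F_3, with + -> 1 and - -> 2.
T = {(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
     (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
     (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)}
# An Eisenstein coordinate a + b*omega has squared length a^2 - a*b + b^2.
# The box |a|,|b| <= 2 suffices: a^2 - a*b + b^2 <= 3 forces |a|,|b| <= 2.
coords = [(a, b) for a in range(-2, 3) for b in range(-2, 3)]
counts = {}
for v in product(coords, repeat=4):
    if tuple((a + b) % 3 for a, b in v) in T:
        norm = sum(a * a - a * b + b * b for a, b in v)
        if 0 < norm <= 3:
            counts[norm] = counts.get(norm, 0) + 1
print(counts)  # expected {3: 240}: no vectors of squared length 1 or 2
The count splits as $24+216$, exactly as described in the question.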
(Edit to add: your construction is an instance of what Sloane calls "Construction A for complex lattices", from Section 8, Chapter 7 of Conway and Sloane, whereby you can combine a length $n$ $q$-ary code with an index $q$ ideal of $\Bbb{E}$ or similar to form a complex $n$-dimensional lattice. Example 11b notes that the Tetracode yields $E_8$.) If you want an explicit isomorphism, then write elements of $E_8'$ as $v=(a,b,c,d,e,f,g,h)\in\Bbb{Z}^8$, representing $(a+b\omega,c+d\omega,e+f\omega,g+h\omega)\in\Bbb{E}^4$. The triality vector $(c(a+b\omega),c(c+d\omega),c(e+f\omega),c(g+h\omega))$ is then $(a+b,c+d,e+f,g+h)\in T$, taking the co-ordinates modulo 3. Define $$ M=\begin{pmatrix} \begin{array}{rrrrrrrr} 0 & 0 & 0 & 0 & 3 & 0 & 3 & 0 \\ 0 & 0 & -2 & -2 & 1 & -2 & 1 & -2 \\ 0 & 0 & -2 & 4 & 1 & -2 & 1 & -2 \\ 0 & 0 & 4 & -2 & 1 & -2 & 1 & -2 \\ 0 & 0 & 0 & 0 & 3 & 0 & -3 & 0 \\ -2 & -2 & 0 & 0 & 1 & -2 & -1 & 2 \\ -2 & 4 & 0 & 0 & 1 & -2 & -1 & 2 \\ 4 & -2 & 0 & 0 & 1 & -2 & -1 & 2 \\ \end{array} \end{pmatrix} $$ Then, considering an element of $E_8'$ as a column vector in $\Bbb{Z}^8$ as described above, multiplying on the left by $\frac16 M$ will give an element of the standard version of $E_8$: $$\Gamma_8 = \left\{(x_i) \in \mathbb Z^8 \cup (\mathbb Z + \tfrac{1}{2})^8 : {\textstyle\sum_i} x_i \equiv 0\!\!\pmod 2\right\}.$$ Of course this is one of many possible maps since $E_8$ has a large symmetry group, but I don't think there is a much nicer form of the matrix. Maybe it is not so surprising that $M$ will look a bit ragged, since the definition of the Tetracode used a particular choice of (non-symmetric) basis and $M$ has to depend on this choice. To check this works, we first need to see that $x=\frac16 Mv\in E_8$ for $v\in E_8'$. First check that each component of $Mv$ is congruent to zero mod 3. Note that pairs of entries in $M$ satisfy $M_{2i,j}\equiv M_{2i+1,j}\pmod 3$, which means that $$6x_i\equiv M_{i,0}.c(a+b\omega)+M_{i,2}.c(c+d\omega)+M_{i,4}.c(e+f\omega)+M_{i,6}.c(g+h\omega)\pmod 3.$$ Modulo 3, the only values of $(M_{i,0},M_{i,2},M_{i,4},M_{i,6})$ that arise are $(0,0,0,0)$, $(0,1,1,1)$, and $(1,0,1,2)$. Dotting these with a Tetracode vector, $(c(a+b\omega),c(c+d\omega),c(e+f\omega),c(g+h\omega))$, gives zero modulo 3 (easily checked; also follows from the fact that the Tetracode is self-dual). Thus $6x_i\equiv0\pmod3$, and so $x_i\in\frac12\Bbb{Z}$. To see $x_i\equiv x_j\pmod1$, note that each column of $M$ is constant modulo 2. Thus $6x_i\equiv 6x_j\pmod2$, which is what we want since $6x_i$ and $6x_j$ are also zero modulo 3. To complete the proof that $x\in E_8$, add the rows of $M$ to see that $$\sum_i x_i=\frac16 \sum_{ij} M_{ij}v_j=2(e-f)\equiv0\pmod2.$$ Finally, to show that the image $\frac16 M(E_8')$ is the whole of $E_8$, it's sufficient to check that we can reach a basis of $E_8$. 
$$\frac16 M\begin{pmatrix} \begin{array}{rrrrrrrr} 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1 & 2 & 0 \\ -1 & -1 & 1 & 1 & -1 & 0 & 0 & 0 \\ -1 & -1 & 2 & -1 & 0 & 0 & 0 & 0 \\ 1 & -1 & 0 & 0 & 1 & -1 & 0 & 1 \\ 0 & -1 & 0 & 0 & 1 & -1 & 0 & -1 \\ 1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 \\ \end{array} \end{pmatrix}= \begin{pmatrix} \begin{array}{rrrrrrrr} 1 & -1 & 0 & 0 & 0 & 0 & 0 & 1/2 \\ 1 & 1 & -1 & 0 & 0 & 0 & 0 & 1/2 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 & 1/2 \\ 0 & 0 & 0 & 1 & -1 & 0 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 1 & -1 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1/2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1/2 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1/2 \\ \end{array} \end{pmatrix} $$ The columns of the matrix on the left are all in $E_8'$, having triality vectors $(0,1,1,1)$, $(0,1,1,1)$, $(0,0,0,0)$, $(0,0,0,0)$, $(0,2,2,2)$, $(1,0,1,2)$, $(0,0,0,0)$, $(0,0,0,0)$ respectively. The columns of the matrix on the right form a basis of $E_8$.<|endoftext|> TITLE: What is the Mathematical Property that justifies equating coefficients while solving partial fractions? QUESTION [9 upvotes]: The McGraw Hill PreCalculus Textbook gives several good examples of solving partial fractions, and they justify all but one step with established mathematical properties. In the 4th step of Example 1, when going from: $$1x + 13 = (A+B)x+(4A-5B)$$ they say to "equate the coefficients", writing the linear system $$A+B = 1$$ $$4A-5B=13$$ It is a simple step, color coded in the textbook for easy understanding, but McGraw Hill does not justify it with any mathematical property, postulate or theorem. Addition and/or multiplication properties of equality don't seem to apply directly. Can someone help me justify this step?! REPLY [2 votes]: The general principle is: two polynomials are equal at every point if and only if their coefficients are equal. "If their coefficients are equal then the polynomials are equal" is clear. Proving the reverse is not so easy in general. It follows from a stronger result from linear algebra, which says that the Vandermonde matrix for $d+1$ distinct real numbers is invertible, and so there is a unique polynomial of degree at most $d$ passing through any $d+1$ points, provided they all have different $x$ coordinates. This is probably not accessible to you at your level, but it is arguably the best way to see it overall. Another way to see it, though making this rigorous requires some calculus, is to note that if two polynomials are equal at each point, then their constant terms must be the same. Subtracting off the constant term from each and dividing by $x$, you have two polynomials that now again have to be equal at each point. So you plug in $x=0$, which gives agreement of the linear coefficients of the original polynomials. Doing this a total of $d+1$ times gives the desired result. Where the lack of rigor comes in is in saying that $x/x=1$ even when $x=0$, which is not properly true. What we are really doing here is noticing that if two differentiable functions are equal everywhere then their derivatives are equal everywhere, and that if $p(x)=\sum_{k=0}^n a_k x^k$ then $a_k=\frac{p^{(k)}(0)}{k!}$, where $p^{(k)}$ denotes the $k$th derivative of $p$.<|endoftext|> TITLE: Proof of the Reduced-to-Separated theorem QUESTION [7 upvotes]: I'm trying to understand Vakil's proof of the Reduced-to-Separated theorem: Theorem: Two $S$-morphisms $\pi:U\to Z$ and $\pi':U\to Z$ from a reduced scheme to a separated $S$-scheme agreeing on a dense open subset of $U$ are the same.
Proof: Let $V$ be the locus where $\pi$ and $\pi'$ agree (we have just proved that this exists). It is a closed subscheme of $U$ (because $Z$ is separated) which contains a dense open set. But the only closed subscheme of a reduced scheme $U$ whose underlying set contains a dense open set is all of $U$. The sentence I've written in bold is the sentence I don't understand. Vakil doesn't say anything further, and I don't remember ever proving this fact (though I may have missed it somewhere). Can somebody help me see why this is true? REPLY [6 votes]: It is sufficient to show this for an affine scheme $U=Spec(R)$. If we have a closed subscheme $X=Spec(R/I)$ containing a dense open subset $U'$ of $U$, then $X$ must contain every point of $U$: otherwise, since $X$ is closed, $\bar{U'}\subseteq X\neq U$, contradicting the density of $U'$. So assume $X$ contains all points $p\in U$. Thus as ideals, $I\subset p$ $\forall p\in U$, so $I$ is contained in the nilradical of $R$; since $U$ is reduced the nilradical is zero, hence $I=0$ and $X=Spec(R)=U$.<|endoftext|> TITLE: Optimization problem: The curve with the minimum time to get through a pile of quicksand - Calculus of Variations QUESTION [5 upvotes]: Suppose we have a function for the velocity given by $v(r,\theta)=r$, or in Cartesian form $v(x,y)=\sqrt{x^2+y^2}$. As we can see below, as we get closer to the origin $(0,0)$, the velocity decreases. I've found it easy to visualize the field as being some form of quicksand where it is harder to move through as you approach the origin. This is demonstrated below by a plot I've made using Wolfram Mathematica: What I am trying to do: Find the two functions for $y(x)$ which would minimize the time taken to get from point $A(-1,0)$ to point $B(1,0)$. I deduced that the fastest path cannot be the straight line directly from $A$ to $B$, since it would require an infinite time to get through the origin. A guess for the two curves is shown by the $\color{#0050B0}{\text{dark blue}}$ and the $\color{#00AAAA}{\text{light blue}}$ curves I've made. I'm almost certain they would be symmetrical. I first guessed that the optimized curve would be similar to an ellipse; however, I hesitated after I plotted this. I've done some research on the problem and figured it may be similar to the derivation of the Brachistochrone curve, using the Euler-Lagrange equations. I am new to the Calculus of Variations, so here is the working I've done so far. We have: $$dt=\frac{ds}{v} \Rightarrow dt=\frac{\sqrt{dx^2+dy^2}}{\sqrt{x^2+y^2}} \Rightarrow dt=\frac{\sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2}}{r}~d\theta$$ In the third step I converted to polar coordinates. Adding integration signs: $$\int_{0}^{T}~dt=\int_{\theta_1}^{\theta_2}\frac{\sqrt{r^2+\left(\frac{dr}{d\theta}\right)^2}}{r}~d\theta$$ $$T=\int_{\theta_1}^{\theta_2} \sqrt{1+\frac{(r')^2}{r^2}}~d\theta$$ where $T$ is the total time taken to get from $A$ to $B$.
I thought of using the following Euler-Lagrange equation: $$\frac{d}{d\theta}\left(\frac{\partial L}{\partial r'}\right)=\frac{\partial L}{\partial r} \tag{1}$$ For the functional: $$L(\theta,r,r')=\sqrt{1+\frac{(r')^2}{r^2}}$$ Evaluating the partial derivatives: $$\frac{\partial L}{\partial r}=-\frac{(r')^2}{r^3\sqrt{\frac{(r')^2}{r^2}+1}}=-\frac{(r')^2}{r^2\sqrt{(r')^2+r^2}}$$ $$\frac{\partial L}{\partial r'}=\frac{r'}{r^2\sqrt{\frac{(r')^2}{r^2}+1}}=\frac{r'}{r\sqrt{(r')^2+r^2}}$$ Substituting into $(1)$: $$\frac{d}{d\theta}\left(\frac{r'}{r\sqrt{(r')^2+r^2}}\right)=-\frac{(r')^2}{r^2\sqrt{(r')^2+r^2}}$$ I integrated both sides with respect to $\theta$ and obtained: $$\frac{r'}{r\sqrt{(r')^2+r^2}}=-\frac{(r')^2\theta}{r^2\sqrt{(r')^2+r^2}}+C \tag{2}$$ Now, I realize I must solve this differential equation. I've tried simplifying it to obtain: $$r\frac{dr}{d\theta}=-\left(\frac{dr}{d\theta}\right)^2\theta+Cr^2\sqrt{\left(\frac{dr}{d\theta}\right)^2+r^2} \tag{3}$$ However, I think I've hit a dead end. I'm not certain that it is solvable in terms of elementary functions. Neither Mathematica nor Wolfram|Alpha has given me a solution to this differential equation. To conclude, I would like some guidance on how to continue solving the differential equation, assuming my calculation and methodology so far are correct; if they are not, I would appreciate guidance on how to proceed instead. REPLY [2 votes]: Just to show that this can be done using the calculus of variations: start with your functional $$ L(\theta,r,r')=\sqrt{1+\frac{(r')^2}{r^2}} $$ Now, we have that $\partial L/\partial \theta = 0$, which implies (via the Beltrami identity) that the quantity $$ L - r' \frac{\partial L}{\partial r'} = C $$ where $C$ is a constant with respect to $\theta$. In your case, this implies that $$ C = \sqrt{1+\frac{(r')^2}{r^2}} - r' \frac{r'/r^2}{\sqrt{1+\frac{(r')^2}{r^2}}} = \frac{1}{\sqrt{1+\frac{(r')^2}{r^2}}} $$ Re-arranging, we find that $$ \frac{r'}{r} = \sqrt{\frac{1}{C^2} - 1} $$ which itself is another constant $D$; thus, we have $r' = D r$, or $r = r_0 e^{D\theta}$ for constants $r_0$ and $D$ — a logarithmic spiral.<|endoftext|> TITLE: How to show this inequality $\sum\sqrt{\frac{x}{x+2y+z}}\le 2$ QUESTION [6 upvotes]: Let $x,y,z,w>0$ show that $$\sqrt{\dfrac{x}{x+2y+z}}+\sqrt{\dfrac{y}{y+2z+w}}+\sqrt{\dfrac{z}{z+2w+x}}+\sqrt{\dfrac{w}{w+2x+y}}\le 2$$ I tried C-S, but without success. REPLY [2 votes]: By C-S (writing $a,b,c,d$ for $x,y,z,w$): $(LHS)^2\le \sum_{cyc}a(b+2c+d) \sum_{cyc}\frac{1}{(a+2b+c)(b+2c+d)}$ $\sum_{cyc}a(b+2c+d) \sum_{cyc}\frac{1}{(a+2b+c)(b+2c+d)}\le 4 \ \ \iff \ \ (a-c)^2(b-d)^2\ge 0$<|endoftext|> TITLE: smoothness of solution to heat equation + differentiation under integral sign QUESTION [6 upvotes]: I am reading Evans's PDE book, and I need some help understanding the following result [Theorem 1 on pg47 of the book]: Let $g$ be a continuous and essentially bounded function on $\mathbb{R}^n$ and let $K$ be the heat kernel. Then, the function $u$ which is a convolution of $g$ and $K$ is $C^{\infty}$. The proof of this theorem goes as follows: Since $K$ is infinitely differentiable, with uniformly bounded derivatives of all orders, on $[\delta, \infty)$ for each $\delta > 0$, we see that $u$ is $C^{\infty}$. I am not really understanding this proof.
(1) Am I correct that the uniform boundedness of derivatives of all orders means that: there exists a constant $M$ such that for every non-negative integer $\alpha$ and multi-index $\beta$, $|\frac{\partial^\alpha}{\partial t^{\alpha}} D^{\beta} K(x,t)| \leq M$ for every $x$ and $t$ in $\mathbb{R}^n \times [\delta, \infty)$? If so, how do I know that the derivatives are uniformly bounded? I know that each derivative is bounded since $t\geq \delta > 0 $, but how do I show the existence of the uniform constant $M$? (2) Why does the uniform boundedness of all derivatives allow us to differentiate under the integral sign? If I let $\Delta f_h$ denote the difference quotient for the corresponding derivative $Df$, then I can write $$\int |\Delta f_h(x-y)| g(y) dy = \int |Df(x-y+c)| g(y) dy$$ for some $c$ between $x-y$ and $x-y+h$ by the mean value theorem, so I have as my dominating function $M |g(y)|$ where $M$ is the uniform bound constant; but since $g$ is only bounded and not necessarily integrable, I cannot apply the dominated convergence theorem. What am I doing wrong? It was never intuitively clear to me when differentiation under the integral is allowed and when it is not. For example, (3) suppose that I have a function $f(x,y)$ in $\mathbb{R}^2$ and assume further that we know $\frac{\partial}{\partial x} f(x,y)$ exists and is integrable in $y$ over $\mathbb{R}$. Then, is it always the case that $\dfrac{d}{dx} \int_{\mathbb{R}} f(x,y)dy = \int_{\mathbb{R}} \frac{\partial}{\partial x} f(x,y) dy$? REPLY [6 votes]: Let me give a simple example to illustrate what is required: Consider $$g(x) = \int_{-\infty}^\infty f(x,y) \, dy.$$ The question is when can we exchange the derivative and integral, so that $$g'(x) = \int_{-\infty}^\infty f_x(x,y) \, dy.$$ Taking difference quotients we have $$\frac{g(x+h) - g(x)}{h} = \int_{-\infty}^\infty \frac{f(x+h,y)-f(x,y)}{h} \, dy.$$ So the question is really when can we exchange the limit as $h \to 0$ with the integral. To use the dominated convergence theorem, we need a dominating function that is integrable, so we need that for some $\delta>0$ there exists an integrable function $g(y)$ such that $$\left|\frac{f(x+h,y)-f(x,y)}{h}\right| \leq g(y) \ \ \text{ for all } |h|<\delta.$$ By the mean value theorem $$f(x+h,y) - f(x,y) = hf_x(z,y)$$ for some $z$ between $x$ and $x+h$. So it is enough to assume that for some $\delta>0$ there exists an integrable function $g(y)$ such that $$|f_x(z,y)| \leq g(y) \ \ \text{for all } z \text{ with } |z-x|\leq \delta.$$ This condition is often called uniform integrability of $f_x$, meaning that $f_x$ is integrable uniformly in its first argument. All derivatives of the heat kernel satisfy the uniform integrability property as long as you restrict $t$ away from zero. This is what Evans means when he says "uniform boundedness".<|endoftext|> TITLE: A closed form for a triple integral with sines and cosines QUESTION [18 upvotes]: $$\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin(x)\sin(y)\sin(z)}{xyz(x+y+z)}(\sin(x)\cos(y)\cos(z) + \sin(y)\cos(z)\cos(x) + \sin(z)\cos(x)\cos(y))\,dx\,dy\,dz$$ I saw this integral $I$ posted on a page on Facebook. The author claims that there is a closed form for it.
My Attempt This can be rewritten as $$3\small\int^\infty_0 \int^\infty_0 \int^\infty_0 \frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z)}{xyz(x+y+z)}\,dx\,dy\,dz$$ Now consider $$F(a) = 3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz(x+y+z)}\,dx\,dy\,dz$$ Taking the derivative $$F'(a) = -3\int^\infty_0 \int^\infty_0 \int^\infty_0\frac{\sin^2(x)\sin(y)\cos(y)\sin(z)\cos(z) e^{-a(x+y+z)}}{xyz}\,dx\,dy\,dz$$ By symmetry we have $$F'(a) = -3\left(\int^\infty_0 \frac{\sin^2(x)e^{-ax}}{x}\,dx \right)\left( \int^\infty_0 \frac{\sin(x)\cos(x)e^{-ax}}{x}\,dx\right)^2$$ Using W|A I got $$F'(a) = -\frac{3}{16} \log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)$$ By integration (using $F(\infty)=0$) we have $$F(0) = \frac{3}{16} \int^\infty_0\log\left(\frac{4}{a^2}+1 \right)\arctan^2\left(\frac{2}{a}\right)\,da$$ Let $x = 2/a$; then $$\tag{1}I = \frac{3}{8} \int^\infty_0\frac{\log\left(x^2+1 \right)\arctan^2\left(x\right)}{x^2}\,dx$$ Question: I can't seem to verify that $(1)$ is correct, nor find a closed form for it. Any ideas? REPLY [2 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Given the 'cyclic symmetry' of your integral $\ds{\color{#f00}{\mc{J}}}$, it's equivalent to \begin{align} \color{#f00}{\mc{J}} & \equiv 3\int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!\int_{0}^{\infty}\!\!
\mrm{sinc}\pars{x}\mrm{sinc}\pars{y}\mrm{sinc}\pars{z}\sin\pars{x}\cos\pars{y} \cos\pars{z}\ \times \\[3mm] & \phantom{\equiv 3} \underbrace{\bracks{\int_{0}^{\infty}\expo{-\pars{x + y + z}t}\,\dd t}} _{\ds{1 \over x + y + z}}\dd x\,\dd y\,\dd z = 3\int_{0}^{\infty}\mrm{f}\pars{t}\mrm{g}^{2}\pars{t}\,\dd t \\[5mm] & \mbox{where}\qquad \left\{\begin{array}{rcl} \ds{\mrm{f}\pars{t}} & \ds{=} & \ds{\Im\mc{I}\pars{t}} \\[2mm] \ds{\mrm{g}\pars{t}} & \ds{=} & \ds{\Re\mc{I}\pars{t}} \\[2mm] \ds{\mc{I}\pars{t}} & \ds{\equiv} & \ds{\int_{0}^{\infty}\mrm{sinc}\pars{x}\expo{-\pars{t - \ic}x}\dd x} \end{array}\right.\label{1}\tag{1} \end{align} Then \begin{align} \left.\mc{I}\pars{t}\right\vert_{\large\ \color{#f00}{t\ >\ 0}} & \equiv \int_{0}^{\infty}\mrm{sinc}\pars{x}\expo{-\pars{t - \ic}x}\dd x = \int_{0}^{\infty} \overbrace{\pars{{1 \over 2}\int_{-1}^{1}\expo{\ic kx}\,\dd k}} ^{\ds{\mrm{sinc}\pars{x}}}\ \expo{-\pars{t - \ic}x}\dd x \\[5mm] & = {1 \over 2}\int_{-1}^{1}\int_{0}^{\infty}\expo{-\pars{t - \ic - \ic k}x} \dd x\,\dd k = {1 \over 2}\int_{-1}^{1}{\dd k \over t - \ic - \ic k} = {1 \over 2}\int_{-1}^{1}{t + \pars{k + 1}\ic \over \pars{k + 1}^{2} + t^{2}}\,\dd k \\[5mm] & = {1 \over 2}\int_{0}^{2}{t + k\ic \over k^{2} + t^{2}}\,\dd k = {1 \over 2}\int_{0}^{2/t}{1 + k\ic \over k^{2} + 1}\,\dd k = {1 \over 2}\arctan\pars{2 \over t} + {1 \over 4}\ln\pars{{4 \over t^{2}} + 1 }\ic \end{align} $\ds{\color{#f00}{\mc{J}}}$ becomes ( see \eqref{1} ): \begin{align} \color{#f00}{\mc{J}} & = 3\int_{0}^{\infty}\bracks{{1 \over 4}\ln\pars{{4 \over t^{2}} + 1}} \bracks{{1 \over 2}\arctan\pars{2 \over t}}^{2}\dd t \\[5mm] & \stackrel{2/t\ \mapsto\ t}{=}\,\,\, {3 \over 16}\int_{\infty}^{0}\ln\pars{t^{2} + 1} \arctan^{2}\pars{t}\pars{-\,{2\,\dd t \over t^{2}}} = {3 \over 8}\ \underbrace{\int_{0}^{\infty} {\ln\pars{t^{2} + 1}\arctan^{2}\pars{t} \over t^{2}}\,\dd t} _{\ds{{\Large\color{#f00}{\S}}: {\pi^{3} \over 12} + \pi\ln^{2}\pars{2}}} \end{align} $\ds{{\Large\color{#f00}{\S}}}$: The integral was already evaluated in the $\texttt{@Zaid Alyafeai}$ fine answer. Finally, the answer to the proposed OP integral is given by $$ \bbox[15px,#ffe,border:1px dotted navy]{\ds{{\color{#f00}{\mc{J}} = {\pi^{3} \over 32} + {3 \over 8}\,\pi\ln^{2}\pars{2}}}} \approx 1.5350 $$<|endoftext|> TITLE: Find the polynomials which satisfy the condition $f(x)\mid f(x^2)$ QUESTION [5 upvotes]: I want find the polynomials which satisfy the condition $$f(x)\mid f(x^2).$$ I want to find such polynomials with integer coefficients, real number coefficients and complex number coefficients. For example, $x$ and $x-1$ are the linear polynomials which satisfy this condition. Here is one way to find the $2$-degree polynomials with integer coefficients. Let the quadratic be $p=ax^2+bx+c$, so its value at $x^2$ is $q=ax^4+bx^2+c$. If $p$ is to be a divisor of $q$ let the other factor be $dx^2+ex+f.$ Equating coefficients gives equations [1] $ad=a,$ [2] $ae+bd=0,$ [3] $af+be+cd=b,$ [4] $bf+ce=0,$ [5] $cf=c.$ Now we know $a,c$ are nonzero (else $p$ is not quadratic, or is reducible). So from [1] and [5] we have $d=f=1.$ Then from [2] and [4] we obtain $ae=ce.$ Here $e=0$ leads to $b=0$ from either [2] or [4], and [3] then reads $a+c=0$, so that $p=a(x^2-1)$ which is reducible. 
So we may assume $e$ is nonzero, and also $a=c.$ At this point, [2] and [4] say the same thing, namely $ae+b=0.$ So we may replace $b=-ae$ in [3] (with its $c$ replaced by $a$) obtaining $a+(-ae)e+a=-ae,$ which on factoring gives $a(2-e)(e+1)=0.$ The possibility $e=2$ then leads after some algebra to $2a+b=0$ and $p=a(x-1)^2$ which is reducible, while the possibility $e=-1$ leads to $a=b$ and then $p=ax^2+ax+a$ as claimed. Should we list out all the irreducible polynomials of each degree and then check whether these polynomials satisfy the condition? $x$ $x+1$ $x^2 + x + 1$ $x^3 + x^2 + 1$ $x^3 + x + 1$ $ x^4 + x^3 + x^2 + x + 1 $ $ x^4 + x^3 + 1 $ $ x^4 + x + 1 $ Over the real numbers, a polynomial can be factored into $$(x-c_1)(x-c_2)\cdots(x^2-2a_1x+(a_1^2+b_1^2))(x^2-2a_2x+(a_2^2+b_2^2))\cdots$$ If all of these linear and quadratic factors satisfy $$f(x)\mid f(x^2),$$ does the product satisfy it too? So what is the pattern for polynomials with real coefficients? REPLY [4 votes]: The polynomials with $f(x)\mid f(x^2)$ are closed under multiplication. In fact, if $f$ is any such polynomial and $g(x)\mid f(x^2)/f(x)$, then $f(x)g(x)$ is also such a polynomial. WLOG assume $x\nmid f(x)$. The relation $f(x)\mid f(x^2)$ implies $$ \{\alpha:f(\alpha)=0\}\subseteq \{\beta:f(\beta^2)=0\}=\{\pm\sqrt{\beta}:f(\beta)=0\}.$$ Let $\alpha$ be a zero. Then $\alpha=\pm\sqrt{\beta}$ for some other zero $\beta$, or equivalently $\alpha^2=\beta$. Put another way, the square of any zero is also a zero, so the set of zeros is closed under squaring. Therefore we have a sequence of zeros $\alpha,\alpha^2,\alpha^4,\cdots$ which must eventually terminate since $f$ has finitely many zeros, in which case $\alpha^{2^n}=\alpha^{2^m}$ eventually, so $\alpha^{2^r(2^s-1)}=1$ and thus $\alpha$ is a root of unity. We can restrict our attention to $f$ that cannot be written as a nontrivial product of other polynomials with this property. I don't think there's a very nice characterization of the possible set of zeros of $f$ beyond "start with a root of unity and keep squaring until you get a repeat." For example, over $\mathbb{C}$ we have that $f(x)=(x-i)(x+1)(x-1)$ is such a polynomial; it includes a kind of "cycle" of length two $\{-1,1\}$ in its zero set, but it also has a kind of "hangnail" at the front, namely $i$. If we think about this in terms of integers mod $n$, we can write $n=2^km$ and use the Chinese Remainder Theorem to track what $x\mapsto 2x$ does to an integer mod $n$; the sequence is eventually periodic but at the beginning the $\mathbb{Z}/2^k\mathbb{Z}$ coordinate may be nonzero. To get the $f$ with real coefficients, just make sure the set $\{\alpha,\alpha^2,\alpha^4,\cdots\}$ is closed under conjugation; if it isn't, then adjoin all the conjugates to construct an $f$ with real coefficients. And to get $f$ with integer coefficients, if $f$ has a primitive $n$th root of unity as a zero then $f$ is divisible by the cyclotomic polynomial $\Phi_n(x)$. If $n$ is even, then squaring primitive $2n$th roots of unity yields primitive $n$th roots of unity, meaning both $\Phi_{n}(x)$ and $\Phi_{n/2}(x)$ are factors. Writing $n=2^km$, this means it is divisible by $\Phi_{2^km}(x)\Phi_{2^{k-1}m}(x)\cdots\Phi_m(x)$.
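Such divisibility claims are quick to test mechanically by polynomial long division; a small self-contained Python sketch, added for illustration (coefficient lists are written constant term first):
# Check f(x) | f(x^2) for a few examples from this thread.

def compose_sq(f):          # coefficients of f(x^2): interleave zeros
    out = []
    for c in f:
        out.extend([c, 0])
    return out[:-1]

def poly_mod(a, b):         # remainder of a modulo b, for monic b
    a = a[:]
    while len(a) >= len(b) and any(a):
        q = a[-1]
        for i, c in enumerate(b):
            a[len(a) - len(b) + i] -= q * c
        a.pop()
    return a

tests = [[1, 1, 1],                  # x^2 + x + 1       (Phi_3)
         [-1, 0, 0, 0, 1],           # x^4 - 1 = Phi_4 Phi_2 Phi_1
         [1, 1, 1, 1, 1, 1, 1]]      # x^6 + ... + 1     (Phi_7)
for f in tests:
    assert not any(poly_mod(compose_sq(f), f)), f
print("f(x) | f(x^2) holds for all test polynomials")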
One can check these polynomials satisfy the condition.<|endoftext|> TITLE: Asymptotic estimation problem about $\sum\limits_{j = 1}^n {\sum\limits_{i = 1}^n {\frac{{i + j}}{{{i^2} + {j^2}}}} } $ QUESTION [8 upvotes]: How to get$$\mathop {\lim }\limits_{n \to \infty } n\left( {\frac{\pi }{2} + \ln 2 - \frac{1}{n}\sum\limits_{j = 1}^n {\sum\limits_{i = 1}^n {\frac{{i + j}}{{{i^2} + {j^2}}}} } } \right).$$ I think we can use Euler–Maclaurin formula$$\sum_{n=a}^b f(n) \sim \int_a^b f(x)\,\mathrm{d}x + \frac{f(b) + f(a)}{2} + \sum_{k=1}^\infty \frac{B_{2k}}{(2k)!} \left(f^{(2k - 1)}(b) - f^{(2k - 1)}(a)\right),$$ where $a,b$ are both integers. But it seems difficult because of the double summation! REPLY [9 votes]: Actually, for $S_n=\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{i+j}{i^2+j^2}$, the limit $S=\displaystyle\lim_{n\to\infty}\left[\left(\frac{\pi}{2}+\ln 2\right)n\color{red}{-\ln n}-S_n\right]$ exists. This limit is seen as $1+\displaystyle\lim_{n\to\infty}(I_n-S_n)$, where $$\begin{align*}I_n&=\iint_{[\frac{1}{2},n+\frac{1}{2}]^2}\frac{x+y}{x^2+y^2}\,dx\,dy\\ &=(2n+1)\ln(2n+1)\\ &-(n+1)\ln(2n^2+2n+1)\\ &+n(2\arctan(2n+1)-\pi/2).\end{align*}$$ To prove existence, note that $I_n-S_n=\displaystyle\sum_{i=1}^{n}\sum_{j=1}^{n}\Delta_{i,j}$, where $$\begin{gather}\Delta_{i,j}=\iint_{[-\frac{1}{2},\frac{1}{2}]^2}\big(f(i+x,j+y)-f(i,j)\big)\,dx\,dy,\quad f(x,y)=\frac{x+y}{x^2+y^2};\\ \frac{1}{n!}\left|\frac{\partial^{n}\!f}{\partial x^k\partial y^{n-k}}\right|\leqslant\left|\frac{x+y}{x^2+y^2}\right|^{n+1},\quad \frac{\partial^2\!f}{\partial x^2}+\frac{\partial^2\!f}{\partial y^2}=0,\end{gather}$$ and Taylor's theorem produces $|\Delta_{i,j}|=\mathcal{O}\left(\big(\frac{i+j}{i^2+j^2}\big)^5\right)$ which is sufficient. Computing $S$ using (its definition and) Lagrange-Zagier extrapolation, I get $$ \color{blue}{S=1.0042628439817233943074076864477736788445647263436\ldots} $$ I wonder whether $S$ is related to some known mathematical constants... The higher-order asymptotics can indeed be derived from Euler-Maclaurin formula (or its two-dimensional extension, but I'm going the elementary way below). Let $$ \begin{gather}S_n\asymp Kn-\ln n-S-\sum_{k=1}^{(\infty)}\frac{a_k}{n^k},\quad K=\frac{\pi}{2}+\ln 2;\\ S_n-S_{n-1}\asymp K+\ln\Big(1-\frac{1}{n}\Big)+\sum_{k=2}^{(\infty)}n^{-k}\sum_{r=1}^{k-1}\binom{k-1}{r-1}a_r.\end{gather} $$ Now we apply E.-M. to $f(x)=2\dfrac{1+x}{1+x^2}, x\in[0,1]$. We have $\displaystyle\int_0^1 f(x)\,dx=K$, $$ \begin{gather}\frac{1}{n}\left(\frac{f(0)+f(1)}{2}+\sum_{k=1}^{n-1}f\Big(\frac{k}{n}\Big)\right)=\frac{1}{n}+S_n-S_{n-1},\\ \frac{f^{2k-1}(0)}{(2k)!}=\frac{(-1)^{k-1}}{k},\quad\quad\frac{f^{2k-1}(1)}{(2k)!}=-\frac{(-1)^{\lfloor k/2\rfloor}}{2^k\cdot k}\end{gather} $$ (the last two e.g. from power series); Euler-Maclaurin gives $$ S_n-S_{n-1}\asymp K-\frac{1}{n}-\sum_{k=1}^{(\infty)}\frac{c_k}{n^{2k}},\quad c_k=\frac{B_{2k}}{k}\Big((-1)^{k-1}+2^{-k}(-1)^{\lfloor k/2\rfloor}\Big). $$ Thus, $$ \sum_{r=1}^{k-1}\binom{k-1}{r-1}a_r=b_k=\frac{1}{k}-\begin{cases}c_{k/2}&(k\text{ even})\\ 0 &(k\text{ odd})\end{cases}\quad(k>1) $$ and, recognizing the inverse matrix for this system here, we finally get $$ a_n=\frac{1}{n}\sum_{k=1}^{n}\binom{n}{k}B_{n-k}b_{k+1}. 
$$ This sequence begins with $$ \frac{1}{4}, \frac{1}{24}, -\frac{7}{144}, \frac{3}{160}, 0, -\frac{1}{2016}, -\frac{19}{2688}, \frac{31}{3840}, 0, \frac{1}{4224}, -\frac{1453}{59136}, \frac{29713}{698880}, 0, \ldots $$ (this coincides with the values computed numerically in a prior edition of this answer).<|endoftext|> TITLE: If a triangle ABC has sides $a,b,c$ in A.P., then what is the largest possible value of $\angle B$? QUESTION [6 upvotes]: It was easy to see that the maximum value $\angle B=60^\circ$ occurs for an equilateral triangle, as for other triangles $\angle C$ or $\angle A$ would be the largest angle, depending on the common difference of the sides. But how do I prove it using trigonometry, geometry or even calculus? I tried taking the sides as $a-d,\; a,\; a+d$ and applying the law of sines, but couldn't get the result. REPLY [4 votes]: Let the sides be $b-d, b, b+d$. To form a triangle, one needs $d<\frac{b}{2}$. Then by the law of cosines $$\cos B=\frac{(b-d)^2+(b+d)^2-b^2}{2(b-d)(b+d)}=\frac{b^2+2d^2}{2(b^2-d^2)}\ge\frac{1}{2},$$ so $\angle B\le 60^\circ$, with equality exactly when $d=0$.<|endoftext|> TITLE: Convergence of $1 +\frac{1}{5}+\frac{1}{9} +\frac{1}{13}+\dots$ QUESTION [5 upvotes]: This is probably incredibly simple, but we've just started the topic, and we've just gone over geometric series, p-series, and harmonic series. It's simple when the series is given explicitly in sigma-notation, but I struggle when they don't give you the form and just give you the first few numbers. The question is exactly: Determine whether the following series converges or diverges. Give a reason for your answer. $$1+\frac15+\frac19+\frac1{13}+\dots$$ Any tips/hints/help would be much appreciated. REPLY [4 votes]: You could also observe that $$1 +\frac{1}{5}+\frac{1}{9}+\cdots \geq 1 + \frac{1}{5}\left (1 +\frac{1}{2}+\frac{1}{3} +\cdots\right)$$ The term inside brackets is the harmonic series which....<|endoftext|> TITLE: Evaluate $\prod_{n=1}^\infty \frac{2^n-1}{2^n}$ QUESTION [6 upvotes]: Is there a closed form expression for this limit? $$\prod_{n=1}^\infty \frac{2^n-1}{2^n}$$ Wolfram Alpha says $0.2887880950866024212788997219292307800889\dots$ and the Inverse Symbolic Calculator found nothing but the above expression. REPLY [8 votes]: It is equal to $\phi(1/2)$ where $\phi(q)$ is the Euler function, defined by $$ \phi(q)=\prod_{n=1}^{\infty}(1-q^n). $$ This is closely related to the $q$-Pochhammer symbol as well. From Euler's pentagonal number theorem one obtains the following rapidly convergent binary expansion for $\phi(1/2)$: $$ \phi(1/2)=\sum_{n=-\infty}^{\infty}(-1)^n2^{(-3n^2+n)/2}, $$ that is, $$ \phi(1/2)=1-\frac{1}{2}-\frac{1}{2^2}+\frac{1}{2^5}+\frac{1}{2^7}-\frac{1}{2^{12}}-\frac{1}{2^{15}}+\cdots $$ with the signs repeating in the pattern $-,-,+,+$ and the exponents growing quadratically. Proof of transcendence. As pointed out by P. Singh in the comments above, $\phi(1/2)$ is known to be transcendental. This follows from results established in Nesterenko, Yu. V. (1996), Modular functions and transcendence questions, Mat. Sb. p. 65-96 MR1422383. Since this article does not have open access, I am posting the statement of the main theorem below. We use the following identity (whose proof is indicated below) $$ \phi(q)^{24}=\frac{Q(q)^3-R(q)^2}{1728q} $$ to observe that, if $\phi(1/2)$ were algebraic, then $Q(1/2)$ and $R(1/2)$ would be algebraically dependent, contradicting the theorem. Thus $\phi(1/2)$ is transcendental, as claimed. Proof of the identity: This is equivalent to a well-known identity expressing the modular discriminant in terms of Eisenstein series.<|endoftext|> TITLE: What is $x$, if $3^x+3^{-x}=1$?
QUESTION [6 upvotes]: I came across a really brain-racking problem. Determine $x$ such that $3^x+3^{-x}=1$. This is how I tried solving it: $$3^x+\frac{1}{3^x}=1$$ $$3^{2x}+1=3^x$$ $$3^{2x}-3^x=-1$$ Let $A=3^x$. $$A^2-A+1=0$$ $$A=\frac{1\pm\sqrt{(-1)^2-4\cdot1\cdot1}}{2\cdot1}$$ $$A=\frac{1\pm\sqrt{-3}}{2}$$ I end up with $$A=\frac{1\pm i\sqrt{3}}{2}$$ which yields no real solution. And this is not the expected answer. I'm a 7th grader, by the way. So I have very limited knowledge of mathematics. EDIT I made one interesting observation. $3^x+3^{-x}$ can be the middle term of a quadratic equation, since the product and the sum of $3^x$ and $3^{-x}$ are: $$3^x\cdot\frac{1}{3^x}=1$$ $$3^x+3^{-x}=1$$ REPLY [4 votes]: Just building upon previous comments: as you pointed out, it doesn't have a real solution, but the complex solutions can be found analytically. $3^x = \dfrac{1\pm\sqrt{3} i}{2}$ Re-expressing the RHS in polar notation (taking the root with positive imaginary part): $3^x = e^{i \dfrac{\pi}{3}}$ And changing the LHS to base $e$: $3^x = e^{\ln{3^x}} = e^{x\ln{3}} $ Then: $\boxed{x = i \dfrac{\pi}{3\ln{(3)}}}$ Note: this is the principal-value solution. Due to the periodicity of the function, any $x = i \dfrac{\pi}{3\ln{(3)}} + i \dfrac{2\pi n}{\ln{3}}$, for $n\in \mathbb{Z}$, will also be a solution. The conjugate root $e^{-i\pi/3}$ also needs to be considered, giving $x = - i \dfrac{\pi}{3\ln{(3)}} + i \dfrac{2\pi n}{\ln{3}}$<|endoftext|> TITLE: Is $1$ a limit point of the fractional part of $1.5^n$? QUESTION [16 upvotes]: It is an open problem whether the fractional part of $\left(\dfrac32\right)^n$ is dense in $[0,1]$. The problem is: is $1$ a limit point of the above sequence? An equivalent formulation is: $\forall \epsilon > 0: \exists n \in \Bbb N: 1 - \{1.5^n\} < \epsilon$ where $\{x\}$ denotes the fractional part of $x$. Here is a table of $n$ against $\epsilon$ that I computed: $\begin{array}{|c|c|}\hline \epsilon & n \\\hline 1 & 1 \\\hline 0.5 & 5 \\\hline 0.4 & 8 \\\hline 0.35 & 10 \\\hline 0.3 & 12 \\\hline 0.1 & 14 \\\hline 0.05 & 46 \\\hline 0.01 & 157 \\\hline 0.005 & 163 \\\hline 0.001 & 1256 \\\hline 0.0005 & 2677 \\\hline 0.0001 & 8093 \\\hline 0.00001 & 49304 \\\hline 0.000005 & 158643 \\\hline 0.0000005 & 835999 \\\hline \end{array}$ References Unsolved Problems, edited by O. Strauch: in section 2.4, Exponential sequences, it is explicitly mentioned that both questions, whether $(3/2)^n\bmod 1$ is dense in $[0,1]$ and whether it is uniformly distributed in $[0,1]$, are open conjectures. Power Fractional Parts, on Wolfram MathWorld, "just because the Internet says so" REPLY [6 votes]: Another comment, but too big for the standard box. An atanh() rescaling might be an interesting thing; see my example: The pink and the blue lines are hull curves connecting the points $\small (N,f(N))$ where $f(N)$ is extremal (with moving maxima/minima) and the grey dots are points $\small (N,f(N))$ at $\small N \le 1000 $ which shall illustrate the general random distribution of the $\small f(N)$. The grey lines are manually taken smooth subsets of the extremal data, symmetrized (by merging the datasets and adapting signs) to show the rough tendency of extension of the vertical intervals. I liked that the atanh() scaling seems to suggest some roughly linear increase/decrease of the hull curves. [update] The data for the picture were extended by data from the OP and OEIS A153663 (magenta upper curve) and from OEIS A081464 (blue lower curve).
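Tables like the one in the question, and the extremal points behind these hull curves, can be reproduced with exact integer arithmetic, since $\{(3/2)^n\}=(3^n\bmod 2^n)/2^n$. A minimal Python sketch, added for illustration (the cutoff $n\le 3000$ is arbitrary):
from fractions import Fraction

# Print each n at which 1 - {(3/2)^n} reaches a new record minimum.
num, den, best = 1, 1, Fraction(1)
for n in range(1, 3001):
    num, den = num * 3, den * 2
    gap = 1 - Fraction(num % den, den)   # 1 - {(3/2)^n}, exact
    if gap < best:
        best = gap
        print(n, float(gap))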
Note that the OEIS has even more datapoints, but that needed excessive memory/time to compute the high powers of $(3/2)$ and their fractional parts.<|endoftext|> TITLE: Limit of a monotonically increasing sequence and decreasing sequence QUESTION [9 upvotes]: If a sequence ($a_n$) is monotonically increasing, and ($b_n$) is a decreasing sequence, with $\lim_{n\to\infty}\,(b_n-a_n)=0$, show that $\lim a_n$ and $\lim b_n$ both exist, and that $\lim a_n=\lim b_n$. My attempt: To show that the limits of both sequences exist, I think I should be using the Monotone Convergence Theorem (MCT). For that I would need to show that the sequences are bounded. ($a_n$) is increasing, and so it should be bounded below. ($b_n$) is decreasing, so it should be bounded above. The challenge here is to show that ($a_n$) can be bounded above and ($b_n$) can be bounded below. This should utilise the third condition, from which I get: $$\begin{align*} & \lim_{n\to\infty}\,(b_n-a_n)=0 \\[3pt] \iff & \forall\varepsilon>0,\ \exists N\in \mathbb{N} \text{ s.t. } \forall n\geq N,\ |{b_n-a_n}|<\varepsilon \end{align*}$$ I then tried using the triangle inequality: $$ |b_n|-|a_n|\leq|b_n-a_n|<\varepsilon$$ but I'm not sure where to go from here. REPLY [2 votes]: Since $\lim_{n\to\infty}(b_n-a_n)=0$, there is an $N$ such that $|a_n-b_n|<1$ for all $n\ge N$. ($1$ is a number that I have just chosen for $\varepsilon$.) Since $b_n$ is decreasing, we have $a_n<b_n+1\le b_N+1$ for all $n\ge N$, so $(a_n)$ is bounded above (the finitely many terms before $N$ cannot affect boundedness). By the MCT, $\lim a_n$ exists; and then $\lim b_n=\lim\big((b_n-a_n)+a_n\big)=0+\lim a_n=\lim a_n$, so $\lim b_n$ exists and equals $\lim a_n$.<|endoftext|> TITLE: Irrationality of $\sum\limits_{n=1}^{\infty} r^{-n^{2}}$ for every integer $r > 1$ QUESTION [11 upvotes]: In the preface to Introduction to Algebraic Independence Theory Yuri V. Nesterenko mentions the series $$f(r) = \sum_{n=1}^{\infty} \frac {1}{r^{n^{2}}}$$ which was introduced as an example by Joseph Liouville in 1851, who proved that $f(r) $ is irrational for all integers $r>1$. It appears that the proof is elementary like Liouville's proofs for irrationality of $e^{2}$ and $e^{4}$ discussed in my blog posts. Is there any simple way to prove the irrationality of $f(r) $? Or perhaps a reference regarding Liouville's proof? REPLY [3 votes]: I found this proof which sounds like something Liouville would have done. Let: $$\mathcal{L}=\sum_{h=1}^\infty r^{-h^2}$$ $$\frac p{r^{n^2}} = \sum_{h=1}^n r^{-h^2}$$ For $x\ge1$, $$r^{-(x-1)^2}\geq r^{-\lfloor x \rfloor ^2}$$ $$\int_{n+1}^\infty r^{-(x-1)^2}dx\geq\int_{n+1}^\infty r^{-\lfloor x \rfloor ^2}dx=\sum_{h=n+1}^\infty r^{-h^2}$$ $$\ln(r)^{-1/2}\int_{n\ln(r)^{1/2}}^\infty e^{-y^2}dy\geq \sum_{h=n+1}^\infty r^{-h^2}=\mathcal{L}-\frac p{r^{n^2}}$$ This limit is due to Wolfram Alpha: $$\lim_{n\to\infty}r^{n^2}\int_{n\ln(r)^{1/2}}^\infty e^{-y^2}dy=\lim_{x\to\infty}e^{x^2}\int_x^\infty e^{-y^2}dy=0$$ Then $$r^{n^2}\left(\mathcal L -\frac p{r^{n^2}}\right)\leq \ln(r)^{-1/2}r^{n^2}\int_{n\ln(r)^{1/2}}^\infty e^{-y^2}dy=\epsilon$$ where $\epsilon$ can be made arbitrarily small. Then $$0<\mathcal L -\frac p{r^{n^2}}\leq\frac \epsilon {r^{n^2}}$$ Let $r^{n^2}=q$. If $\mathcal L$ were rational, say $\frac ab$, then $$0<\frac ab-\frac pq=\frac{aq-bp}{bq}\leq \frac\epsilon q$$ $$aq-bp>0$$ $$aq-bp\leq\epsilon b$$ The LHS is a positive integer, and the RHS can be made arbitrarily small. Contradiction.<|endoftext|> TITLE: Physics and the Apéry constant, an example for mathematicians QUESTION [8 upvotes]: The Wikipedia entry for Apéry's constant tells us that the Apéry constant $$\zeta(3)=\sum_{n=1}^\infty\frac{1}{n^3}$$ arises in physical problems. Question.
Can you tell us, from an expository viewpoint but with mathematical details where possible, about a nice physical problem involving Apéry's constant? Many thanks. I believe it is only a curiosity, but if you know the mathematics behind such a problem in physics (see the problems Wikipedia refers to, or others) and can show/explain the calculations after introducing the physical problem, your answer should be nice for all of us. REPLY [3 votes]: The $\zeta(3)$ constant appears in fluid mechanics as the added mass of a sphere approaching a wall, such as a raindrop (Weihs & Small, 1975), in the form $3\zeta(3) -2$. Weihs, D.; Small, R. D., An exact solution of the motion of two adjacent spheres in axisymmetric potential flow, Israel J. Technol. 13, 1-6 (1975). ZBL0318.76010.<|endoftext|> TITLE: Closed form for $\int_{0}^{\infty }\!{\rm erf} \left(cx\right) \left( {\rm erf} \left(x \right) \right) ^{2}{{\rm e}^{-{x}^{2}}}\,{\rm d}x$ QUESTION [7 upvotes]: I encountered this integral in my calculations: $$\int_{0}^{\infty }\!{\rm erf} \left(cx\right) \left( {\rm erf} \left(x \right) \right) ^{2}{{\rm e}^{-{x}^{2}}}\,{\rm d}x$$ where $c>0$ and $c\in \mathbb{R}$, but could not find a closed-form representation for it. I also tried to find possible closed forms using the Inverse Symbolic Calculator and WolframAlpha, but they did not find anything. I was looking in the book "Integrals and Series, Volume 2" by Prudnikov, Brychkov and Marychev, but did not find a similar formula. I am not sure a closed form exists, but if it exists I want to know it. Closed forms are easier to manipulate; sometimes closed forms of different integrals or sums contain terms that cancel each other, etc. Could you please help me to find a closed form (even using non-elementary special functions), if it exists? REPLY [2 votes]: Let us denote: \begin{equation} {\mathcal I}(c) := \int\limits_0^\infty \operatorname{erf}(c x) \cdot [\operatorname{erf}( x)]^2 e^{-x^2} dx \end{equation} By differentiating with respect to the parameter $c$ we have: \begin{equation} \frac{ d }{d c} {\mathcal I}(c) = \frac{2^2}{\pi^{3/2}} \frac{1}{1+c^2} \cdot \frac{1}{\sqrt{2+c^2}} \cdot \arctan\left( \frac{1}{\sqrt{2+c^2}}\right) \end{equation} therefore the only thing we need to do is to integrate the right-hand side. I have calculated a more generic integral that involves this one as a special case in A generalized Ahmed's integral.
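Before stating it, one can numerically sanity-check the derivative formula above; a minimal Mathematica sketch (illustrative, with $c=1.3$ and the step size $h$ chosen arbitrarily):

II[c_?NumericQ] := NIntegrate[Erf[c x] Erf[x]^2 Exp[-x^2], {x, 0, Infinity}];
rhs[c_] := 4/Pi^(3/2) ArcTan[1/Sqrt[2 + c^2]]/((1 + c^2) Sqrt[2 + c^2]);
With[{c = 1.3, h = 0.0001}, {(II[c + h] - II[c - h])/(2 h), rhs[c]}]

The central difference quotient and the closed-form derivative should agree to several digits.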
Here I only state the result: \begin{eqnarray} &&{\mathcal I}(c) = \frac{4}{\pi^{3/2}} \left(\right.\\ && \arctan( \frac{c}{\sqrt{2+c^2}}) \arctan( \frac{1}{\sqrt{2+c^2}})+\\ && \frac{\imath}{2} \left.\left[ {\mathcal F}^{(\alpha_-,+e^{-\imath \phi})}(t)+ {\mathcal F}^{(\alpha_-,-e^{-\imath \phi})}(t)- {\mathcal F}^{(\alpha_-,-e^{+\imath \phi})}(t)- {\mathcal F}^{(\alpha_-,+e^{+\imath \phi})}(t) \right]\right|_0^B-\\ && \frac{\imath}{2} \left.\left[ {\mathcal F}^{(\alpha_+,+e^{-\imath \phi})}(t)+ {\mathcal F}^{(\alpha_+,-e^{-\imath \phi})}(t)- {\mathcal F}^{(\alpha_+,-e^{+\imath \phi})}(t)- {\mathcal F}^{(\alpha_+,+e^{+\imath \phi})}(t) \right]\right|_0^B \left.\right) \end{eqnarray} where $\alpha_- = \sqrt{2}-1$, $\alpha_+:=\sqrt{2}+1$, $\phi:= \arccos(1/\sqrt{3})$, $B:=(-\sqrt{2}+\sqrt{2+c^2})/c$ and \begin{eqnarray} &&{\mathcal F}^{(a,b)}(t):=\int \arctan(\frac{t}{a}) \frac{1}{t-b} dt = \log(t-b) \arctan(\frac{t}{a})\\ &&-\frac{1}{2 \imath} \left( \log(t-b) \left[ \log(\frac{t-\imath a}{b-\imath a}) - \log(\frac{t+\imath a}{b+\imath a})\right] + Li_2(\frac{b-t}{b-\imath a}) - Li_2(\frac{b-t}{b+\imath a})\right) \end{eqnarray} Update: Note that the anti-derivative ${\mathcal F}^{(a,b)}(t)$ may have a jump. This will happen if and only if either the quantity $(t+\imath a)/(b+\imath a)$ or the quantity $(t-\imath a)/(b-\imath a)$ crosses the negative real axis for some $t\in(0,B)$. This has the effect that the argument of the logarithm jumps by $2\pi$. In order to take this into account we have to exclude from the integration region a small vicinity of the singularity in question. In other words the correct formula reads: \begin{eqnarray} &&{\mathcal I}(c) = \frac{4}{\pi^{3/2}} \left(\right.\\ && \arctan( \frac{c}{\sqrt{2+c^2}}) \arctan( \frac{1}{\sqrt{2+c^2}})+\\ && \frac{\imath}{2}\left[ {\bar {\mathcal F}}^{(\alpha_-,+e^{-\imath \phi})}(0,B)+ {\bar {\mathcal F}}^{(\alpha_-,-e^{-\imath \phi})}(0,B)- {\bar {\mathcal F}}^{(\alpha_-,-e^{+\imath \phi})}(0,B)- {\bar {\mathcal F}}^{(\alpha_-,+e^{+\imath \phi})}(0,B) \right]-\\ && \frac{\imath}{2} \left[ {\bar {\mathcal F}}^{(\alpha_+,+e^{-\imath \phi})}(0,B)+ {\bar {\mathcal F}}^{(\alpha_+,-e^{-\imath \phi})}(0,B)- {\bar {\mathcal F}}^{(\alpha_+,-e^{+\imath \phi})}(0,B)- {\bar {\mathcal F}}^{(\alpha_+,+e^{+\imath \phi})}(0,B) \right] \left.\right) \end{eqnarray} where \begin{eqnarray} {\bar {\mathcal F}}^{a,b}(0,B) &:=& {\mathcal F}^{(a,b)}(B)-{\mathcal F}^{(a,b)}(A) +\\ && 1_{t^{(*)}_+ \in (0,1)} \left( -{\mathcal F}^{(a,b)}(B(t^{(*)}_+ +\epsilon))+{\mathcal F}^{(a,b)}(B(t^{(*)}_+ -\epsilon))\right)+\\ && 1_{t^{(*)}_- \in (0,1)} \left( -{\mathcal F}^{(a,b)}(B(t^{(*)}_- +\epsilon))+{\mathcal F}^{(a,b)}(B(t^{(*)}_- -\epsilon))\right) \end{eqnarray} where \begin{eqnarray} t^{(*)}_\pm:= \frac{Im[\mp \imath a(\bar{b} \mp \imath a)]}{B Im[\bar{b} \mp \imath a]} \end{eqnarray} See the Mathematica code below for testing (reflowed here for readability, behavior unchanged):

(* antiderivative F and its jump-corrected version FF *)
F[t_, a_, b_] := Log[t - b] ArcTan[t/a] -
   1/(2 I) (Log[t - b] (Log[(t - I a)/(b - I a)] - Log[(t + I a)/(b + I a)]) +
      PolyLog[2, (b - t)/(b - I a)] - PolyLog[2, (b - t)/(b + I a)]);

FF[A_, B_, a_, b_] := Module[{res, rsp, rsm, tsp, tsm, eps = 10^(-9)},
   res = F[B, a, b] - F[A, a, b];
   tsp = -(Im[I a (Conjugate[b] - I a)]/(B Im[Conjugate[b] - I a]));
   tsm = +(Im[I a (Conjugate[b] + I a)]/(B Im[Conjugate[b] + I a]));
   (* If[0 <= tsp <= 1, Print["Jump +!!"]]; If[0 <= tsm <= 1, Print["Jump -!!"]]; *)
   rsp = If[0 <= tsp <= 1, -F[A + (tsp + eps) (B - A), a, b] + F[A + (tsp - eps) (B - A), a, b], 0];
   rsm = If[0 <= tsm <= 1, -F[A + (tsm + eps) (B - A), a, b] + F[A + (tsm - eps) (B - A), a, b], 0];
   res + rsp + rsm];

(* random tests: x1 is the integral, x2 the closed form *)
For[count = 1, count <= 100, count++,
  c = RandomReal[{-10, 10}, WorkingPrecision -> 50];
  x1 = NIntegrate[Erf[c x] Erf[x]^2 Exp[-x^2], {x, 0, Infinity}, WorkingPrecision -> 30];
  4/Pi^(3/2) NIntegrate[1/(1 + xi^2) 1/Sqrt[2 + xi^2] ArcTan[1/Sqrt[2 + xi^2]], {xi, 0, c}];
  A1 = 1; A2 = 1; A3 = c; phi = ArcCos[1/Sqrt[3]];
  B = (-Sqrt[2] + Sqrt[2 + c^2])/c;
  (* two alternative integral forms, computed as cross-checks but unused *)
  4/Pi^(3/2) ((ArcTan[c/Sqrt[2 + c^2]]) ArcTan[1/Sqrt[2 + c^2]] +
     4 Sqrt[2] NIntegrate[(ArcTan[t/(Sqrt[2] - 1)] - ArcTan[t/(Sqrt[2] + 1)]) t/((1 - t^2)^2 + (2) (1 + t^2)^2),
       {t, 0, (-Sqrt[2] + Sqrt[2 + c^2])/c}, WorkingPrecision -> 30]);
  4/Pi^(3/2) ((ArcTan[c/Sqrt[2 + c^2]]) ArcTan[1/Sqrt[2 + c^2]] +
     I/2 NIntegrate[(ArcTan[t/(Sqrt[2] - 1)] - ArcTan[t/(Sqrt[2] + 1)]) (1/(t - E^(-I phi)) - 1/(t - E^(I phi)) + 1/(t + E^(-I phi)) - 1/(t + E^(I phi))),
       {t, 0, (-Sqrt[2] + Sqrt[2 + c^2])/c}, WorkingPrecision -> 30]);
  x2 = 4/Pi^(3/2) ((ArcTan[c/Sqrt[2 + c^2]]) ArcTan[1/Sqrt[2 + c^2]] +
      I/2 (FF[0, B, (Sqrt[2] - 1), 1/Sqrt[3] - I Sqrt[2/3]] + FF[0, B, (Sqrt[2] - 1), -(1/Sqrt[3]) + I Sqrt[2/3]] -
         FF[0, B, (Sqrt[2] - 1), -(1/Sqrt[3]) - I Sqrt[2/3]] - FF[0, B, (Sqrt[2] - 1), 1/Sqrt[3] + I Sqrt[2/3]]) -
      I/2 (FF[0, B, (Sqrt[2] + 1), 1/Sqrt[3] - I Sqrt[2/3]] + FF[0, B, (Sqrt[2] + 1), -(1/Sqrt[3]) + I Sqrt[2/3]] -
         FF[0, B, (Sqrt[2] + 1), -(1/Sqrt[3]) - I Sqrt[2/3]] - FF[0, B, (Sqrt[2] + 1), 1/Sqrt[3] + I Sqrt[2/3]]));
  If[Abs[x2/x1 - 1] > 10^(-3), Print["results do not match..", {c, {x1, x2}}]; Break[]];
  If[Mod[count, 10] == 0, PrintTemporary[count]];
];

<|endoftext|> TITLE: Strengthening the Sylvester-Schur Theorem QUESTION [6 upvotes]: The Sylvester-Schur Theorem states that if $x > k$, then in the set of integers: $x, x+1, x+2, \dots, x+k-1$, there is at least $1$ number containing a prime divisor greater than $k$. It has always struck me that this theorem is significantly weaker than the actual reality, especially as $x$ gets larger. As I was trying to check my intuition, I had the following thought: Let $k$ be any integer greater than $1$, and let $p_n$ be the $n$th prime such that $p_n \le k < p_{n+1}$. If an integer $x$ is sufficiently large, then it follows that in the set of integers: $x, x+1, x+2, \dots, x+k-1$, there are at least $k-n$ numbers containing a prime divisor greater than $k$. Here's my argument: (1) Let $k > 1$ be an integer with $p_n \le k < p_{n+1}$ where $p_n$ is the $n$th prime. (2) Let $x > 2p_n$ be an integer. (3) Let $0 \le t_1 < p_n$ be the smallest integer such that $gpf(x+t_1) \le p_n$, where gpf() = greatest prime factor. (4) It is clear that $x+t_1$ has at least one prime divisor $q$ where $q \le p_n$. (5) Let $t_1 < t_2 < p_n$ be the second smallest integer such that $gpf(x+t_2) \le p_n$. (6) Let $f = gcd(x + t_1,t_2 - t_1)$ where gcd() = greatest common divisor. (7) Let $u = \frac{x+t_1}{f}, v = \frac{t_2-t_1}{f}$ so that $u > 2$ and $1 \le v < p_n$ and $gcd(u+v,x+t_1)=1$. (8) $x+t_2 = uf + vf = f(u+v)$ and since $u+v > 3$, there exists a prime $q$ that divides $u+v$ but does not divide $x+t_1$.
(9) Let $t_2 < t_3 < p_n$ be the third smallest integer such that $gpf(x+t_3) \le p_n$. (10) We can use the same arguments as steps (5) through (8) to show that $x+t_3$ contains a prime divisor relatively prime to $x+t_1$ and relatively prime to $x+t_2$: Let $f_1 = gcd(x+t_1,t_3-t_1), u_1 = \frac{x+t_1}{f_1}, v_1 = \frac{t_3-t_1}{f_1}$ Let $f_2 = gcd(x+t_2,t_3-t_2), u_2 = \frac{x+t_2}{f_2}, v_2 = \frac{t_3-t_2}{f_2}$ $x+t_3 = f_1(u_1 + v_1) = f_2(u_2 + v_2)$ and $gcd(u_1 + v_1,x+t_1)=1, gcd(u_2 + v_2,x+t_2)=1$ Let $h = gcd(f_1,f_2)$ so that $gcd(\frac{f_1}{h},\frac{f_2}{h})=1$ Then, $\frac{f_1}{h}(u_1 + v_1) = \frac{f_2}{h}(u_2+v_2)$ And: $\frac{u_1+v_1}{\frac{f_2}{h}} = \frac{u_2+v_2}{\frac{f_1}{h}}$ (11) We can repeat this argument until $x+t_n$, at which point there are no more primes less than or equal to $p_n$. (12) We can thus use this same argument to show that all remaining integers in the sequence $x,x+1, x+2, \dots, x+k-1$ have at least one prime divisor greater than $p_n$. Of course, in order to make this argument, $x$ may well need to be greater than $(p_n)^n$, since I am assuming that at each point $\frac{u_i + v_i}{\frac{f_i}{h}} > p_n$. Is my reasoning sound? Is this a known property of large numbers? Is there a more precise formulation for smaller numbers? For example, my argument seems like it could be improved to argue that for $x > 2p_n$, there are at least $2$ numbers with a prime divisor greater than $p_n$. Edit: I found a simpler argument (modified on 12/28/2017). Let $w > 1$ be an integer. Let $p_n$ be the $n$th prime such that $p_n \le w < p_{n+1}$. Let $R(p,w)$ be the largest integer $r$ such that $p$ is a prime and $p^r \le w$ but $p^{r+1} > w$. Let $x > \prod\limits_{p < w} p^{R(p,w)}$ be an integer, and let $i$ be an integer such that $0 \le i < w$. I claim that if $gpf(x+i) \le p_n$, then there exist $k,v$ such that $1 \le k \le n$ and $(p_k)^v \ge w$ and $(p_k)^v \mid x+i$. Assume no such $k,v$ exists. It follows that each $x+i \le \prod\limits_{p < w} p^{R(p,w)}$, which goes against the assumption. I also claim that there are at most $n$ instances where $gpf(x+i) \le p_n$. Assume that there exist integers $v_2 > v_1$ and $i \ne j$ where $(p_k)^{v_1} | x+i$ and $(p_k)^{v_2} | x+j$. Then there exist positive integers $a,b$ such that $a(p_k)^{v_1} = x+i$ and $b(p_k)^{v_2} = x+j$. Let $u = (x+i) - (x+j) = i - j = (p_k)^{v_1}(a - b(p_k)^{v_2 - v_1})$. We can assume $u$ is positive, since if it were negative we could swap the roles of $i$ and $j$. We can assume therefore that $a - b(p_k)^{v_2 - v_1} \ge 1$. But now we have a contradiction, since $w > |j - i|$ but $(p_k)^{v_1} \ge w$. REPLY [4 votes]: I think your second proof is correct. I'm going to rewrite it: Theorem (Sylvester's theorem generalization): Let $n,k\in\mathbb{N}$ with $n\geq$ lcm$(1,\ldots,k)$, and let $\pi(x):=\sum_{p\leq x} 1$ be the number of primes not greater than $x$. Then in the interval $[n,n+k]$ there are at least $k+1-\pi(k)$ integers $n_i$ with a prime factor $p_i>k$. Proof: For $p$ prime, note that $p^{\lfloor\log_p k\rfloor}$ is the largest power of $p$ not exceeding $k$, and $\text{lcm}(1,\ldots,k)=\prod_{p\leq k}p^{\lfloor\log_p k\rfloor}$. Let gpf$(x)$ be the greatest prime factor of $x$ and $p_j$ be the $j$-th prime. Consider $0\leq i\leq k$. Suppose that $i$ is such that gpf$(n+i)\leq p_{\pi(k)}$ ($p_{\pi(k)}$ is the greatest prime not greater than $k$). Then there exist a prime $p_i\leq p_{\pi(k)}$ and an exponent $v_i\in\mathbb{N}$ such that $p_i^{v_i}|n+i$ and $p_i^{v_i}>k$, as otherwise $$n+i\leq\displaystyle\prod_{p\leq k}p^{\lfloor\log_p k\rfloor}=\text{lcm}(1,\ldots,k)\leq n,$$ a contradiction. Moreover, if $i\neq j$ are two such indices, then we cannot have $p_i=p_j=p$: the power $p^{\min(v_i,v_j)}>k$ would divide both $n+i$ and $n+j$, hence also $|j-i|\leq k$, which is absurd. Therefore $p_i\neq p_j$.
Thus, to every integer $i$ such that gpf$(n+i)\leq p_{\pi(k)}$ there corresponds a different prime $p_i\leq p_{\pi(k)}$, so that there can be at most $\pi(k)$ integers of this form. Hence there are at least $k+1-\pi(k)$ numbers $n+i\in [n,n+k]$ such that gpf$(n+i)\geq p_{\pi(k)+1}>k$. Corollary (Grimm's conjecture): If $n\geq$lcm$(1,\ldots,k)$, then for every integer $n_i\in[n,n+k]$ there is a different prime $p_i$ such that $p_i|n_i$ (i.e., Grimm's conjecture is true for this choice of $n$ and $k$). Proof: If gpf$(n+i)\leq p_{\pi(k)}$, pick $p_i$ (we already know $p_i\neq p_j$ if $i\neq j$). Otherwise gpf$(n+i)>k$ and this factor cannot divide any other $n+j$ with $i\neq j\leq k$. In fact, the two results are equivalent: Lemma: Grimm's implies Sylvester's. Proof: If there is a different prime $p_i|n_i$ for every $n_i\in[n,n+k]$, then as there are $\pi(k)$ primes not exceeding $k$, there must be at least $k+1-\pi(k)$ numbers $n_i$ such that $p_i>k$. Now that I have put it like this, I realize that this theorem (and its proof!) are a particular case of Theorem 1 of M. Langevin, Plus grand facteur premier d'entiers en progression arithmétique, Séminaire Delange-Pisot-Poitou. Théorie des nombres (1976-1977), 18(1), 1-7. So this was known (although perhaps not very well known!). Observe that Langevin manages to prove the result with the less restrictive condition that $n+i$ does not divide lcm$(1,\ldots,k)$ for any $i\in\{0,\ldots,k\}$. We can adapt your proof to get this condition: if gpf$(n+i)\leq p_{\pi(k)}$ and $n+i\nmid\text{lcm}(1,\ldots,k)$ then there must be a prime $p_i\leq p_{\pi(k)}$ and an exponent $v_i\in\mathbb{N}$ such that $p_i^{v_i}|n+i$ and $p_i^{v_i}>k$. The proof then follows as before.<|endoftext|> TITLE: Transition from a Riemann sum to an Integral QUESTION [5 upvotes]: The Riemann-sum limit over an interval $[a,b]$ is usually defined as $$\lim\limits_{N\to\infty}\sum\limits_{k=0}^Nf\left(a+k\cdot\frac{b-a}{N}\right)\frac{b-a}{N}$$ Thus if we encounter a sum of the form $$\lim\limits_{N\to\infty}\sum\limits_{k=0}^Nf\left(k\cdot\frac{1}{N}\right)\frac{1}{N}$$ we can conclude that it is equal to an integral over the interval $[0,1]$. $$\lim\limits_{N\to\infty}\sum\limits_{k=0}^Nf\left(k\cdot\frac{1}{N}\right)\frac{1}{N}=\int_0^1f(x)dx\tag{1}\label{1}$$ What can we conclude about the following sum $$\lim\limits_{N\to\infty}\lim\limits_{M\to\infty}\sum\limits_{k=0}^M f\left(k\cdot\frac{1}{N}\right)\frac{1}{N}\tag{2}\label{2}$$ To clarify: \eqref{2} is an infinite sum that differs from the Riemann sum \eqref{1} in the upper limit of the sum. In the Riemann sum \eqref{1}, there is a relation between $M$ and $N$, namely $N=M$, while there is no such relation specified in \eqref{2}. If we can equate it to an integral, how are we to determine the limits of integration? Equation \eqref{2} is to be read as follows: first $M\to\infty$, so we have an infinite sum (suppose it is convergent). Then we form a sequence of infinite sums, where $N$ increases for each element of the sequence. That is $$S_N=\lim\limits_{M\to\infty}\sum\limits_{k=0}^M f\left(k\cdot\frac{1}{N}\right)\frac{1}{N}$$ What does this sequence tend to? Is it true that (or when is it true) $$\lim\limits_{N\to\infty}S_N=\int_0^\infty f(x)dx$$ Also the general term in \eqref{2} is $C_k=f\left(k\cdot\frac{1}{N}\right)$.
How does it behave in the limit, namely $$\lim\limits_{N\to \infty}\lim\limits_{M\to \infty}f\left(M\cdot\frac{1}{N}\right)$$ REPLY [4 votes]: Suppose $f$ is Riemann integrable on $[0,b]$ for every $b > 0$ and the improper integral over $[0, \infty)$ is convergent. We first consider the case where $f$ is nonnegative and non-increasing, as suggested by @Winther, where we have $$\frac{f((k+1)/N)}{N} \leqslant \int_{k/N}^{(k+1)/N} f(x) \, dx \leqslant \frac{f(k/N)}{N}. $$ This implies $$\int_0^{(M+1)/N} f(x) \, dx \leqslant \frac{1}{N} \sum_{k=0}^{M} f(k/N) \leqslant \frac{f(0)}{N} + \int_0^{M/N} f(x) \, dx. $$ The sequence of partial sums is increasing and bounded, hence convergent as $M \to \infty$, with $$\int_0^{\infty} f(x) \, dx \leqslant\frac{1}{N} \sum_{k=0}^{\infty} f(k/N) \leqslant \frac{f(0)}{N} + \int_0^{\infty} f(x) \, dx. $$ Therefore, $$\lim_{N \to \infty} \lim_{ M \to \infty}\frac{1}{N} \sum_{k=0}^{M} f(k/N) = \int_0^\infty f(x) \, dx.$$ Can this still hold if $f$ is not monotonic? For example, consider $f(x) = \sin x /x$, where $$\int_0^\infty \frac{\sin x}{x} \, dx = \frac{\pi}{2}.$$ Examining the corresponding series (WLOG starting with $k=1$) we find $$\frac{1}{N}\sum_{k = 1}^{\infty} \frac{\sin (k/N)}{k/N} = \sum_{k = 1}^{\infty} \frac{\sin (k/N)}{k} \\ = \frac{\pi}{2}-\frac{1}{2N} \\ \longrightarrow_{N \to \infty} \frac{\pi}{2}.$$ I have not yet found a counterexample for a non-monotone function. As the integral test can be generalized to $C^1$ functions of bounded variation, I suspect this may characterize a wider class of functions for which this result holds. This could be shown by considering $$\left|\int_{k/N}^{(k+1)/N} f(x) \, dx - \frac{f(k/N)}{N} \right| \leqslant \int_{k/N}^{(k+1)/N} |f(x) - f(k/N)| \, dx $$ and then summing over $k$, applying the mean value theorem when $f$ is differentiable, and using $\int_0^\infty |f'(x)|\, dx < \infty$ to show that the sum converges to the integral as $N \to \infty.$<|endoftext|> TITLE: On an expansion of $(1+a+a^2+\cdots+a^n)^2$ QUESTION [6 upvotes]: Question: What is an easy or efficient way to see or prove that $$ 1+2a+3a^2+\cdots+na^{n-1}+(n+1)a^n+na^{n+1}+\cdots+3a^{2n-2}+2a^{2n-1}+a^{2n}\tag{1} $$ is equal to $$ (1+a+a^2+\cdots+a^n)^2\tag{2} $$ Maybe this is a particular case of a more general, well-known result? Context: This is used with $a:=e^{it}$ to get an expression in terms of $\sin$ for the Fejér kernel. Thoughts: I thought about calculating the coefficient $c_k$ of $a^k$. But my method is not so obvious that we can get from $(1)$ to $(2)$ in the blink of an eye. $\mathbf{k=0}$ : clearly $c_0=1$. $\mathbf{1\leq k\leq n}$ : $c_k$ is the number of integer solutions of $x_1+x_2=k$ with $0\leq x_1,x_2\leq k$, which in turn is the number of ways we can choose a bar $|$ in $$ \underbrace{|\star|\star|\cdots|\star|}_{k\text{ stars}} $$ So $c_k=k+1$. $\mathbf{k=n+i\quad(1\leq i\leq n)}$ : $c_k$ is the number of integer solutions to $x_1+x_2=n+i$ with $0\leq x_1,x_2\leq n$, which in turn is the number of ways we can choose a bar $|$ in $$ \underbrace{|\star|\star|\cdots|\star|}_{n+i\text{ stars}} $$ different from the $i$-th one from each side. So $c_k=(n+i)+1-2i=n-i+1$.
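In other words, the claim is that the coefficient of $a^k$ in $(2)$ is $n+1-|n-k|$ for $0\le k\le 2n$. A quick symbolic check of this (an illustrative Mathematica sketch; $n=5$ is an arbitrary choice):

n = 5;
Expand[Sum[a^k, {k, 0, n}]^2 - Sum[(n + 1 - Abs[n - k]) a^k, {k, 0, 2 n}]]
(* returns 0 exactly when the two expressions agree *)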
REPLY [2 votes]: Hint: Use synthetic division twice after you've rewritten the expression as $$\frac{(a^{n+1}-1)^2}{(a-1)^2}=\frac{a^{2n+2}-2a^{n+1}+1}{(a-1)^2}$$ $$\begin{array}{*{11}{r}} &1&0&0&\dotsm&0&-2&0&0&\dots&0&0&1\\ &\downarrow&1&1&\dotsm&1&1&-1&-1&\dotsm&-1&-1&-1\\ \hline \times1\quad&1&1&1&\dotsm&1&-1&-1&-1&\dotsm&-1&-1&0\\ &\downarrow&1&2&\dotsm&n&n+1&n&n-1&\dotsm&2&1\\ \hline \times1\quad&1&2&3&\dotsm&n+1&n&n-1&n-2&\dotsm&1&0 \end{array}$$<|endoftext|> TITLE: The integral of $\left|\frac{\cos x}x\right|$ QUESTION [7 upvotes]: I'm looking to determine whether the following function is unbounded or not: $$ F(x) = \int_1^x\left|\frac{\cos t}{t}\right|\text{d} t $$ I can't seem to do much with it because of the $|\cos(t)|$. I thought of using the fact that $\int |f| \ge |\int f|$, but the problem is that the integral of $\frac{\cos t}t$ (without the absolute values) is bounded, and so that doesn't prove that $F(x)$ is unbounded or bounded. I tried re-expressing this as a cosine integral (the function $\text{Ci}(x)$) but to no avail. I'm not sure where else to go with this; the main problem seems to be the fact that it's very difficult to derive an inequality with the $|\cos(t)|$ without a $|\cos(t)|$ on the other side of the inequality (or at least some trig function). Any help would be appreciated. REPLY [5 votes]: Hint: Consider the harmonic series and $$\int_{\pi/2 + k\pi}^{3\pi/2 + k\pi} \frac{| \cos t|}{t} \, dt \geqslant \frac{1}{3\pi/2 + k \pi}\int_{\pi/2 + k\pi}^{3\pi/2 + k\pi} |\cos t| \, dt = \frac{2}{3\pi/2 + k \pi}$$<|endoftext|> TITLE: How to integrate $\int_{0}^{1} \frac{1-x}{1+x} \frac{dx}{\sqrt{x^4 + ax^2 + 1}}$? QUESTION [20 upvotes]: The question is how to show the identity $$ \int_{0}^{1} \frac{1-x}{1+x} \cdot \frac{dx}{\sqrt{x^4 + ax^2 + 1}} = \frac{1}{\sqrt{a+2}} \log\left( 1 + \frac{\sqrt{a+2}}{2} \right), \tag{$a>-2$} $$ I checked this numerically for several cases, but even Mathematica 11 could not manage this symbolically for general $a$, except for some special cases like $a = 0, 1, 2$. Addendum. Here are some backgrounds and my ideas: This integral came from my personal attempt to find the pattern for the integral $$ J(a, b) := \int_{0}^{1} \frac{1-x}{1+x} \cdot \frac{dx}{\sqrt{1 + ax^2 + bx^4}}. $$ This drew my attention as we have the following identity $$ \int_{0}^{\infty} \frac{x}{x+1} \cdot \frac{dx}{\sqrt{4x^4 + 8x^3 + 12x^2 + 8x + 1}} = J(6,-3), $$ where the LHS is the integral from this question. So establishing the claim in this question amounts to showing that $J(6,-3) = \frac{1}{2}\log 3 - \frac{1}{3}\log 2$, though I am skeptical that $J(a, b)$ has a nice closed form for every pair of parameters $(a, b)$. A possible idea is to write \begin{align*} &\int_{0}^{1} \frac{1-x}{1+x} \cdot \frac{dx}{\sqrt{x^4 + ax^2 + 1}} \\ &\hspace{5em}= \int_{0}^{1} \frac{(x^{-2} + 1) - 2x^{-1}}{x^{-1} - x} \cdot \frac{dx}{\sqrt{(x^{-1} - x)^2 + a + 2}} \end{align*} This follows from a simple algebraic manipulation. This suggests that we might be able to apply Glasser's master theorem, though in a less trivial way. I do not believe that this is particularly hard, but I literally do not have enough time to think about this now. So I guess it is a good time to seek help.
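(For concreteness, here is a numeric spot check of the claimed identity, as an illustrative Mathematica sketch; $a=3$ is an arbitrary admissible value:)

With[{a = 3.},
 {NIntegrate[(1 - x)/((1 + x) Sqrt[x^4 + a x^2 + 1]), {x, 0, 1}],
  Log[1 + Sqrt[a + 2]/2]/Sqrt[a + 2]}]

Both numbers should agree to the working precision.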
REPLY [4 votes]: $$\text{let } \ \frac{1-x}{1+x}=t \Rightarrow x=\frac{1-t}{1+t}\Rightarrow dx=-\frac{2}{(1+t)^2}dt\ \text{ then:}$$ $$\int_0^1 \frac{1-x}{1+x}\frac{dx}{\sqrt{x^4+ax^2+1}}=\int_0^1 \frac{2t}{\sqrt{(a+2)t^4-2(a-6)t^2+(a+2)}}dt$$ $$\overset{t^2=x}=\frac{1}{\sqrt{a+2}}\int_0^1 \frac{dx}{\sqrt{x^2-2\left(\frac{a-6}{a+2}\right)x+1}}=\boxed{\frac{1}{\sqrt{a+2}}\ln \left(1+\frac{\sqrt{a+2}}{2}\right),\quad a>-2}$$<|endoftext|> TITLE: Linear isometry between Hilbert spaces has a closed range QUESTION [5 upvotes]: Let $U$ be a linear isometry between Hilbert spaces. Why does the fact that the range of $U$ is dense imply that the range of $U$ is closed? I am trying to understand the proof of theorem 5.4 in Conway's A Course in Functional Analysis. REPLY [9 votes]: In fact, the range of a linear isometry $U \colon H \rightarrow H'$ between Hilbert spaces must always be closed. If the range of $U$ is also dense then $U(H) = H'$ so $U$ is one-to-one and onto. The reason is that the range $U(H)$ of $U$ is complete and a complete subspace of a normed space must be closed. To see that $U(H)$ is complete, let $Ux_n$ be a Cauchy sequence in $U(H)$ so $\| Ux_n - Ux_m \|_{H'} \rightarrow 0$. Since $U$ is an isometry, $\| x_n - x_m \|_{H} = \| Ux_n - Ux_m \|_{H'}$ and so $(x_n)$ is Cauchy in $H$. Since $H$ is complete, $x_n \rightarrow x$ for some $x \in H$, but then $Ux_n \rightarrow Ux \in U(H)$, since $U$ is continuous.<|endoftext|> TITLE: What method did this person use to rotate the points in 2D Space to imitate 3D Rotation? QUESTION [5 upvotes]: I've been wondering about how people seem to rotate graphs on a 2D area, and came across this Desmos 2D graph, found here (desmos.com). Once I saw this, I looked at the equations and was blown away by the complexity of rotating them with the different variables ($a$, $b$, and $c$). An example of one of the equations of the points, which are Cartesian in the format $(x,y)$: $\left(\cos (u)\cos (v)-\sin (u)\cos (v)+\sin (v),\sin (u)\sin (w)-\cos (u)\sin (v)\cos (w)+\sin (u)\sin (v)\cos (w)+\cos (u)\sin (w)+\cos (v)\cos (w)\right)$ Quite the long equation to find a point, but understandable. I'm just interested in knowing the mathematical reasoning behind using these functions to find the locations of the points, not the lines (they are just connecting multiple points). Is this related to the rotation matrix in any way? Or is it using something else that I could possibly know the name of so I could pursue future research? Thanks! REPLY [5 votes]: Set all parameters $a, b, c$ to $0$ and then you get a square on the $xy$ plane with vertices at $(\pm 1, \pm 1$). Imagine that this is what you see when you look at a cube in $\mathbb{R}^3$ whose vertices are $(\pm 1, \pm 1, \pm 1)$ "from above" (that is, from the $z$ axis to the $xy$ plane). From this perspective, the upper face of the cube (with vertices $(\pm 1, \pm 1, 1)$) completely hides away the lower face of the cube (with vertices $(\pm 1, \pm 1, -1)$) and so you can only see a square. Now, if you change the $b$ parameter (where $b = \pi v$), the square gets rotated on the $xy$ plane. This corresponds to rotating the cube around the $z$-axis. Changing the $a$ parameter (where $a = \pi u$) corresponds to rotating the cube around the $y$ axis and changing the $c$ parameter (where $c = \pi w$) corresponds to rotating the cube around the $x$ axis. Each such rotation can be done by multiplication with an appropriate rotation matrix.
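As a small symbolic sketch (illustrative Mathematica code, using exactly the three matrices written out in the next paragraph), one can compose the rotations and apply them to the vertex $(1,1,1)$:

Ry[u_] := {{Cos[u], 0, -Sin[u]}, {0, 1, 0}, {Sin[u], 0, Cos[u]}};
Rz[v_] := {{Cos[v], Sin[v], 0}, {-Sin[v], Cos[v], 0}, {0, 0, 1}};
Rx[w_] := {{1, 0, 0}, {0, Cos[w], Sin[w]}, {0, -Sin[w], Cos[w]}};
Simplify[Rx[w].Rz[v].Ry[u].{1, 1, 1}]

The first two components of the result are exactly the two coordinates quoted in the question for point number $7$.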
In this case, a possible formula for transforming a point $(x,y,z)^T$ is given by $$ \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos w & \sin w \\ 0 & -\sin w & \cos w \end{pmatrix} \begin{pmatrix} \cos v & \sin v & 0 \\ -\sin v & \cos v & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos u & 0 & -\sin u \\ 0 & 1 & 0 \\ \sin u & 0 & \cos u \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} \\ = \begin{pmatrix} \cos u \cos v & \sin v & -\cos v \sin u \\ \sin u \sin w - \cos u \cos w \sin v & \cos v \cos w & \cos w \sin u \sin v + \cos u \sin w \\ \cos w \sin u + \cos u \sin v \sin w & -\cos v \sin w & \cos u \cos w - \sin u \sin v \sin w\end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}. $$ This corresponds to performing first a rotation in the $xz$ plane, then a rotation in the $xy$ plane and finally a rotation in the $yz$ plane (and this is indeed the formula the application uses). We look at the picture from above and so we are interested only in the $xy$-components of the result, giving us $$ \begin{pmatrix} x \\ y \\ z \end{pmatrix} \mapsto \begin{pmatrix} x \cos u \cos v + y \sin v - z \cos v \sin u \\ x (\sin u \sin w - \cos u \cos w \sin v) + y \cos v \cos w + z (\cos w \sin u \sin v + \cos u \sin w )\end{pmatrix}. $$ For example, point number $7$ in the application corresponds to the vertex $(1,1,1)$ of the cube. Hence, the formula for changing $(1,1,1)$ in terms of $u,v,w$ is given by $$ \begin{pmatrix} \cos u \cos v + \sin v - \cos v \sin u \\ \sin u \sin w - \cos u \cos w \sin v + \cos v \cos w + \cos w \sin u \sin v + \cos u \sin w \end{pmatrix} $$ which is the formula you wrote in the question. To summarize, the person who wrote the application considered a cube in 3D and projected it (orthogonally) to the $xy$ plane (2D). By modifying three parameters, you can rotate the vertices and edges of the cube in 3D and see the projected result in 2D.<|endoftext|> TITLE: Is every open set the interior of a closed set? QUESTION [30 upvotes]: I am wondering if this is generally true for any topology. I think there might be counterexamples, but I am having trouble generating them. REPLY [17 votes]: Since the complement of an open set is closed (and vice versa), and since the complement of the interior is the closure of the complement, we can rephrase your question equivalently as: Is every closed set the closure of some open set? This immediately suggests a counterexample: any singleton (i.e. a set containing only one point) is closed in $\mathbb R^n$ (with the usual Euclidean topology), but has no non-empty open subsets that it could be the closure of. Conversely, the complement of any singleton (i.e. $\mathbb R^n \setminus \{x\}$ for any $x \in \mathbb R^n$) provides a counterexample to your original claim, being an open set that cannot be the interior of any closed set.<|endoftext|> TITLE: How can we show that $4\arctan\left({1\over \sqrt{\phi^3}}\right)-\arctan\left({1\over \sqrt{\phi^6-1}}\right)={\pi\over 2}$ QUESTION [6 upvotes]: $$4\arctan\left({1\over \sqrt{\phi^3}}\right)-\arctan\left({1\over \sqrt{\phi^6-1}}\right)={\pi\over 2}\tag1$$ where $\phi$ is the golden ratio. I understand that we can use $$\arctan{1\over a}+\arctan{1\over b}=\arctan{a+b\over ab-1}$$ but that would take quite a long time, and simplifying algebraic expressions involving surds is also a difficult task. How else can we show that $(1)={\pi\over 2}$?
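(Before any proof, a quick numeric confirmation of $(1)$; an illustrative Mathematica one-liner using the built-in GoldenRatio:)

N[4 ArcTan[1/Sqrt[GoldenRatio^3]] - ArcTan[1/Sqrt[GoldenRatio^6 - 1]] - Pi/2, 20]

This should return $0$ to the requested precision if $(1)$ holds.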
REPLY [2 votes]: A detailed and self-contained proof (without reference to previous publications) was given as an image; it is not reproduced here.<|endoftext|> TITLE: construct a square without a ruler QUESTION [11 upvotes]: How can I construct a square using only a pencil and a compass, i.e. no ruler? Given: a sheet of paper with $2$ points marked on it, a pencil and a compass. Aim: plot $2$ more vertices such that the $4$ of them form a square, using only a compass. P.S.: no cheap tricks involved. REPLY [5 votes]: The key to solving this problem is knowing how to construct $\sqrt{2}$.<|endoftext|> TITLE: Does the unique map on the zero space have determinant 1? QUESTION [11 upvotes]: The trivial vector space over any field $K$, consisting of only the zero vector, admits exactly one endomorphism, let's call it $z$, sending $0$ to itself. It is the identity map, so it should have determinant $1$. On the face of it, the zero map should have determinant $0$. But this is usually argued via $\lambda z = z$ for all $\lambda \in K$, so $\det z = \det (\lambda z) = \lambda^n \det z$, i.e. $(\lambda^n - 1)\det z = 0$. Normally that's enough to conclude that $\det z = 0$, but of course $n = 0$ in this case, so $\lambda^n = 1$ for all $\lambda$, and we learn nothing. Despite being the zero map, it's full rank and has trivial kernel. There are no nonzero vectors, so it has no eigenvectors, so it has no eigenvalues, so their product is $1$. On the other hand, the determinant is meant to be multilinear, and so should map the zero matrix to zero. But should we say that $z$ is represented by a zero matrix, given that its matrix representation is $0\times 0$ and doesn't have any entries at all? I can't help but feel like this is all very silly, but clearly the answer can't be anything other than $1$. Is there anything wrong with giving this answer? Does it cause any problems with any other typical properties of the determinant? Does it simplify any definitions or theorems? REPLY [3 votes]: I don't recall offhand any reference which discusses this issue but I'll add my two cents in support of taking $\det{f} = 1$ as the definition for the unique map $f \colon V \rightarrow V$ on a zero-dimensional space. This definition is consistent with various standard theorems in linear algebra so that one doesn't need to exclude the zero dimensional case as an exception. In fact, I can't think of a single theorem which will become false taking $\det{f} = 1$ as the definition, while most will break if you take $\det{f} = 0$ as the definition and don't exclude the zero dimensional case. For example: An endomorphism on a finite dimensional vector space is invertible iff $\det(f) \neq 0$. The characteristic polynomial of an operator $f \colon V \rightarrow V$ on a finite dimensional vector space is monic of degree $\dim V$ and a scalar $\lambda \in \mathbb{F}$ is a root of the characteristic polynomial iff $\lambda$ is an eigenvalue of $f$. The characteristic polynomial of $f$ is defined as $\chi_f(x) = \det(x \cdot \operatorname{id} - f)$ so using the "1" convention it becomes $\chi_f(x) = 1$ which is indeed monic of degree zero and doesn't have any roots. Using the $0$ convention gives $\chi_f(x) = 0$ which has degree $-\infty$ and all scalars as roots. The minimal polynomial of $f$ divides the characteristic polynomial. Again, the minimal polynomial is the unique monic polynomial $m_f$ of minimal degree such that $m_f(f) = 0$. In the zero dimensional case, it becomes $1$ (indeed $m_f(f) = \operatorname{id}_V = 0$) and it divides the characteristic polynomial $1$.
If the characteristic polynomial were zero, this would be false. The characteristic polynomial of the restriction of an operator $g$ to a $g$-invariant subspace divides the characteristic polynomial of $g$. Since $\{ 0 \}$ is a legitimate $g$-invariant subspace, it makes sense to take the characteristic polynomial of $g|_{\{0\}} = f$ to be $1$ and not $0$. An orthogonal map on an inner product space has determinant $\pm 1$; the unique map on the zero-dimensional space is the identity, and the determinant of the identity map is $1$. It might look silly but the idea of orienting a zero-dimensional vector space is important (for example, to state a general version of Stokes' theorem which includes the fundamental theorem of calculus as a special case). An orientation on a zero-dimensional real vector space is just a choice of $\pm 1$ which states whether the point is "positive or negative". The unique map on the zero-dimensional vector space does nothing (it is the identity map) so it should be orientation preserving, and a map is orientation preserving iff $\det(f) > 0$. Finally, let me say that in my mind, if one wants to define the determinant of the unique map $f \colon V \rightarrow V$ on the zero dimensional space, then the definition shouldn't depend on the choice of field $\mathbb{F}$ over which we are working (this is more of a meta-mathematical statement). It would be ridiculous to define say $\det(f) = 2$ if $\mathbb{F} = \mathbb{R}$ while $\det(f) = 3$ if $\mathbb{F} = \mathbb{Z}_5$. Thus it leaves one with two sensible choices: $\det(f) = 0$ or $\det(f) = 1$. Since $\det(f) = 0$ breaks up so many theorems, it doesn't make sense to take it as the definition, better just leave it undefined and that's it. Added: Almost any textbook which embraces the definition of the determinant via the exterior algebra will have as a result a definition for the zero dimensional case which will be $\det(f) = 1$. For such textbooks, it won't be a convention but a (trivial) result. For a specific example, see Algebra 1, Chapters 1-3 by Bourbaki. On page 525 they state that $\det([]) = 1$ but for them, this is not a convention or an ad hoc definition but a result proved from their definition of the determinant and the notion of a matrix.<|endoftext|> TITLE: Ab*surd* Integrals QUESTION [6 upvotes]: I am unable to find a proof for these integrals on the internet. $$\displaystyle \int_0^{\frac{\pi}{2}} \cot^{-1}(\sqrt{1+\csc{\theta}}\,) \, \text{d}\theta = \frac{\pi^2}{12}$$ $$\displaystyle \int_0^\frac{\pi}{2} \csc^{-1}(\sqrt{1+\cot{\theta}}\,) \, \text{d}\theta = \frac{\pi^2}{8}$$ Sources: Brilliant, AoPS. I tried differentiating under the integral sign but I can't think of an appropriate parameter that leaves easily integrable rational functions. I have tried exploiting the bounds to reflect and transform the integrand but to no avail. A real solution is preferred but a complex solution is perfectly acceptable. A geometric solution is not something I have considered but I'm just grasping at straws here.
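(A numeric check of both claimed values, as an illustrative Mathematica sketch; both differences should be numerically zero:)

{NIntegrate[ArcCot[Sqrt[1 + Csc[t]]], {t, 0, Pi/2}] - Pi^2/12,
 NIntegrate[ArcCsc[Sqrt[1 + Cot[t]]], {t, 0, Pi/2}] - Pi^2/8}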
REPLY [10 votes]: The second integral equals $$ I_2=\int_{0}^{\pi/2}\arcsin\sqrt{\frac{\tan t}{1+\tan t}}\,dt=\int_{0}^{\pi/2}\arctan\sqrt{\tan t}\,dt=\int_{0}^{+\infty}\frac{\arctan\sqrt{u}}{1+u^2}\,du$$ and by splitting the last integration range as $(0,1)\cup(1,+\infty)$ and performing the substitution $u\mapsto\frac{1}{u}$ on the second part, $$ I_2 = \int_{0}^{1}\frac{\arctan\sqrt{u}}{1+u^2}\,du+\int_{0}^{1}\frac{\frac{\pi}{2}-\arctan\sqrt{u}}{1+u^2}\,du = \frac{\pi}{2}\int_{0}^{1}\frac{du}{1+u^2}=\frac{\pi}{2}\cdot\frac{\pi}{4}=\color{red}{\frac{\pi^2}{8}}.$$ The first integral is $$ I_1=\frac{\pi^2}{4}-\int_{0}^{\pi/2}\arctan\sqrt{\frac{1+\sin t}{\sin t}}\,dt=\frac{\pi^2}{4}-2\int_{0}^{\pi/4}\arctan\sqrt{\frac{1+\cos(2t)}{\cos (2t)}}\,dt$$ and $$\int_{0}^{\pi/4}\arctan\sqrt{\frac{1+\cos(2t)}{\cos (2t)}}\,dt=\int_{0}^{1}\arctan\sqrt{\frac{2}{1-u^2}}\frac{du}{1+u^2}$$ is a variant of Ahmed's integral that can be tackled through differentiation under the integral sign: it is enough to be able to integrate $\frac{\sqrt{1-u^2}}{(1+a-u^2)(1+u^2)}$.<|endoftext|> TITLE: Casimir Operator of $\mathfrak{sl}_n(\mathbb{C})$. QUESTION [5 upvotes]: If I have the Lie algebra $\mathfrak{g} = \mathfrak{sl}_n(\mathbb{C})$ and the trace form $B(x,y) = \operatorname{tr}(xy)$ for $x, y \in \mathfrak{g}$, how does one calculate by which scalar the Casimir element $C \in Z(\mathscr{U}(\mathfrak{g}))$ acts on the highest weight module $V(\lambda)$? I know that one defines the Casimir by $C = \sum_i x_i x_i^*$ where $\{x_i\}$ is a basis for the Lie algebra and $\{x_i^*\}$ a dual basis, which are arbitrary (i.e. $C$ is independent of the choice of bases). I have calculated by hand some simple examples (n=3 acts by the scalar $\frac{8}{3}$ for example on V(2)). How should one proceed in general? Thanks. REPLY [4 votes]: The way to calculate the action of $C$ is to make a smart choice of dual bases and then observe that it suffices to check the action of $C$ on a highest weight vector. (The fact that you use the trace form rather than the Killing form shows up only in an overall scaling of $C$.) Basically, you have to observe that the Cartan subalgebra $\mathfrak h$ is orthogonal to all root spaces with respect to $B$ while for two roots $\alpha$ and $\beta$, the restriction of $B$ to $\mathfrak g_{\alpha}\times\mathfrak g_{\beta}$ is non-zero if and only if $\beta=-\alpha$. First choose a basis $\{H_i\}$ of the space $\mathfrak h$ of diagonal matrices which is orthonormal with respect to $B$. Next, for each positive root $\alpha$ choose $E_\alpha\in\mathfrak g_{\alpha}$ and let $F_\alpha\in\mathfrak g_{-\alpha}$ be dual to it, i.e. $B(E_\alpha,F_\alpha)=1$. Then $C=\sum_i H_i^2+\sum_{\alpha>0}(E_\alpha F_\alpha+F_\alpha E_\alpha)$. Writing $E_\alpha F_\alpha=F_\alpha E_\alpha+[E_\alpha,F_\alpha]$ and applying $C$ to a highest weight vector $v\in V(\lambda)$, which satisfies $E_\alpha\cdot v=0$ for all positive roots $\alpha$, you get $$C\cdot v=\Big(\sum_i\lambda(H_i)^2+\sum_{\alpha>0}\lambda([E_\alpha,F_\alpha])\Big)v=\langle\lambda,\lambda+2\rho\rangle\,v,$$ where $\rho$ is the half-sum of the positive roots and $\langle\cdot,\cdot\rangle$ is the inner product on $\mathfrak h^*$ induced by $B$. Since $C$ commutes with the $\mathfrak g$-action and $V(\lambda)$ is generated by $v$, the Casimir acts on all of $V(\lambda)$ by the scalar $\langle\lambda,\lambda+2\rho\rangle$. (For the defining representation of $\mathfrak{sl}_n$ with the trace form this gives $\frac{n^2-1}{n}$, e.g. $\frac{8}{3}$ for $n=3$.)<|endoftext|> TITLE: Sum of all rationals between 0 and 1 squared QUESTION [5 upvotes]: Yesterday I came up with a question: if rational numbers are countable, that means that all rational numbers between 0 and 1 can be listed in a sequence. Let $Q(n)$ be that sequence. It is pretty clear that $\sum_{n=1}^{\infty}Q(n) >\sum_{n=1}^{\infty}\frac{1}{n}$, so it diverges. But what about $\sum_{n=1}^{\infty}Q(n)^2$? Does this series converge? Is there even a way to define $Q(n)$ in a precise way? Many thanks in advance!! REPLY [3 votes]: I will prove a little more general statement: For all $\varepsilon>0$, the sum of all squared rational numbers between $0$ and $\varepsilon$ diverges. Let's fix $\varepsilon >0$. Let's denote by $(Q_{\varepsilon}(n))$ an enumeration of the rationals between $0$ and $\varepsilon$.
Since $[\varepsilon/2,\varepsilon]\cap \mathbb Q$ is infinite (because $\varepsilon>0$), we can extract an infinite sub-sequence $(Q'_{\varepsilon}(n))$ of $(Q_{\varepsilon}(n))$ by keeping only the terms $Q_{\varepsilon}(n)$ with $Q_{\varepsilon}(n)\geqslant \frac{\varepsilon}2$. We then have: $$\sum_{n\in \mathbb N} Q_{\varepsilon}(n)^2\geqslant\sum_{n\in \mathbb N} Q'_{\varepsilon}(n)^2\geqslant \sum_{n\in \mathbb N} \frac{\varepsilon^2}4=+\infty.$$ So the original series diverges, and you can deduce your result from the case $\varepsilon =1$.<|endoftext|> TITLE: Evaluating $\int \frac{1-7\cos^2x}{\sin^7x\cos^2x}dx$ QUESTION [6 upvotes]: How do I evaluate $$\int \frac{1-7\cos^2x}{\sin^7x\cos^2x}dx\,?$$ I tried using integration by parts, and here is my approach: rewrite one part as $\int \frac{\sin x}{(1-\cos^2x)^4\cos^2x}\, dx$, then put $\cos x=t$ and try to use partial fractions. I applied similar logic for the other part. But that made it lengthy to solve, as decomposition into partial fractions is very time consuming. This question came in an objective examination in which time was limited. Can anyone help me with a shorter way to solve this problem? Thanks. REPLY [8 votes]: Well, we know that: $$\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}=\csc^7\left(x\right)\left(\sec^2\left(x\right)-7\right)\tag1$$ So, for the integral we get: $$\int\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}\space\text{d}x=\int\csc^7\left(x\right)\sec^2\left(x\right)\space\text{d}x-7\int\csc^7\left(x\right)\space\text{d}x\tag2$$ Now, for the right integral you can use the reduction formula. $\color{red}{\text{But}}$ using integration by parts: $$\int\csc^7\left(x\right)\sec^2\left(x\right)\space\text{d}x=\csc^6\left(x\right)\sec\left(x\right)+7\int\csc^7\left(x\right)\space\text{d}x\tag3$$ So, we get that: $$\int\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}\space\text{d}x=\csc^6\left(x\right)\sec\left(x\right)+\color{red}{7\int\csc^7\left(x\right)\space\text{d}x-7\int\csc^7\left(x\right)\space\text{d}x}\tag4$$ Which gives that: $$\int\frac{1-7\cos^2\left(x\right)}{\sin^7\left(x\right)\cos^2\left(x\right)}\space\text{d}x=\csc^6\left(x\right)\sec\left(x\right)+\text{C}\tag{5}$$<|endoftext|> TITLE: Sum of random decreasing numbers between 0 and 1: does it converge?? QUESTION [153 upvotes]: Let's define a sequence of numbers between 0 and 1. The first term, $r_1$ will be chosen uniformly randomly from $(0, 1)$, but now we iterate this process choosing $r_2$ from $(0, r_1)$, and so on, so $r_3\in(0, r_2)$, $r_4\in(0, r_3)$... The set of all possible sequences generated this way contains the sequence of the reciprocals of all natural numbers, whose sum diverges; but it also contains all geometric sequences in which all terms are less than 1, and they all have convergent sums. The question is: does $\sum_{n=1}^{\infty} r_n$ converge in general? (I think this is called almost sure convergence?) If so, what is the distribution of the limits of all convergent series from this family? REPLY [51 votes]: The probability $f(x)$ that the result is $\in(x,x+dx)$ is given by $$f(x) = \exp(-\gamma)\rho(x)$$ where $\rho$ is the Dickman function, as @Hurkyl pointed out below.
This follows from the delay differential equation for $f$, $$f^\prime(x) = -\frac{f(x-1)}{x}$$ with the conditions $$f(x) = f(1) \;\rm{for}\; 0\le x \le1 \;\rm{and}$$ $$\int\limits_0^\infty f(x)\,dx = 1.$$ Derivation follows. From the other answers, it looks like the probability is flat for results less than 1. Let us prove this first. Define $P(x,y)$ to be the probability that the final result lies in $(x,x+dx)$ if the first random number is chosen from the range $[0,y]$. What we want to find is $f(x) = P(x,1)$. Note that if the random range is changed to $[0,ay]$ the probability distribution gets stretched horizontally by $a$ (which means it has to compress vertically by $a$ as well). Hence $$P(x,y) = aP(ax,ay).$$ We will use this to find $f(x)$ for $x<1$. Note that if the first number chosen is greater than $x$ we can never get a sum less than or equal to $x$. Hence $f(x)$ is equal to the probability that the first number chosen is less than or equal to $x$ multiplied by the probability density for the random range $[0,x]$. That is, $$f(x) = P(x,1) = p(r_1\le x)\,P(x,x) = x\,P(x,x) = x\cdot\frac{1}{x}P(1,1) = f(1),$$ where the last steps use the compression property with $a = 1/x$; so $f$ is indeed constant on $[0,1]$. Next, we find $f(x)$ for $x>1$ in terms of $f(1)$. First, note that when $x>1$ we have $$f(x) = P(x,1) = \int\limits_0^1 P(x-z,z) dz$$ We apply the compression again to obtain $$f(x) = \int\limits_0^1 \frac{1}{z} f(\frac{x}{z}-1) dz$$ Setting $\frac{x}{z}-1=t$, we get $$f(x) = \int\limits_{x-1}^\infty \frac{f(t)}{t+1} dt$$ This gives us the differential equation $$\frac{df(x)}{dx} = -\frac{f(x-1)}{x}$$ Since we know that $f(x)$ is a constant for $x<1$, this is enough to solve the differential equation numerically for $x>1$, modulo the constant (which can be retrieved by integration in the end). Unfortunately, the solution is essentially piecewise from $n$ to $n+1$ and it is impossible to find a single function that works everywhere. For example when $x\in[1,2]$, $$f(x) = f(1) \left[1-\log(x)\right]$$ But the expression gets really ugly even for $x \in[2,3]$, requiring the logarithmic integral function $\rm{Li}$. Finally, as a sanity check, let us compare the random simulation results with $f(x)$ found using numerical integration. The probabilities have been normalised so that $f(0) = 1$. The match is near perfect. In particular, note how the analytical formula matches the numerical one exactly in the range $[1,2]$. Though we don't have a general analytic expression for $f(x)$, the differential equation can be used to show that the expectation value of $x$ is 1. Finally, note that the delay differential equation above is the same as that of the Dickman function $\rho(x)$ and hence $f(x) = c \rho(x)$. Its properties have been studied. For example the Laplace transform of the Dickman function is given by $$\mathcal L \rho(s) = \exp\left[\gamma-\rm{Ein}(s)\right].$$ This gives $$\int_0^\infty \rho(x) dx = \exp(\gamma).$$ Since we want $\int_0^\infty f(x) dx = 1,$ we obtain $$f(1) = \exp(-\gamma) \rho(1) = \exp(-\gamma) \approx 0.56145\ldots$$ That is, $$f(x) = \exp(-\gamma) \rho(x).$$ This completes the description of $f$.<|endoftext|> TITLE: Bishop ML and pattern recognition calculus of variations linear regression loss function QUESTION [5 upvotes]: On page $46$, there is ($1.87$) $E[L]=\int \int \{y(x)-t\}^2p(x,t)dxdt$ Calculus of variations is used to give ($1.88$) $\dfrac{\partial E[L]}{\partial{y(x)}} = 2 \int \{y(x)-t\}p(x,t)dt = 0$ The reader is referred to appendix $D$ on calculus of variations, but I am still confused. How does one get from ($1.87$) to ($1.88$), step by step?
REPLY [11 votes]: Rename $\hat x$ as $x$, then interchange the order of integration, so that we integrate with respect to $x$ last. Then Equation (1.87) is $$ \int\int[y(x)-t]^2p(x,t)\,dt\,dx $$ which is of the form $$ \int G(y(x),y'(x),x)\,dx\tag{D.5} $$ where $$ G(y,y',x)=\int[y-t]^2p(x,t)\,dt.\tag{*}$$ By the Euler-Lagrange equations we require $$ \frac{\partial G}{\partial y} -\frac d{dx}\left(\frac{\partial G}{\partial y'}\right)=0.\tag{D.8} $$ In this case the function $G$ doesn't depend on $y'$, so the LHS of the Euler-Lagrange equations simplifies to $$\frac{\partial G}{\partial y}=\int 2[y-t]p(x,t)\,dt,$$ obtained by differentiating (*) under the integral sign.<|endoftext|> TITLE: Does "either" make an exclusive or? QUESTION [16 upvotes]: This is a very "soft" question, but regarding language in logic and proofs, should "Either A or B" be interpreted as "A or B, but not both"? I have always avoided saying "either" when my intent is a standard, inclusive or, because saying "either" to me makes it feel like an exclusive or. REPLY [8 votes]: In everyday speech, "or" is usually exclusive even without "either." In mathematics or logic though "or" is inclusive unless explicitly specified otherwise, even with "either." This is not a fundamental law of the universe, it is simply a virtually universal convention in these subjects. The reason is that inclusive "or" is vastly more common.<|endoftext|> TITLE: A specific example regarding the inscribed square problem QUESTION [5 upvotes]: Toeplitz' conjecture (also called the inscribed square problem) says that: For every Jordan curve $\mathscr C$, there exist four distinct points $A$, $B$, $C$ and $D$ belonging to $\mathscr C$ such that $ABCD$ is a square. A Jordan curve is a non self-intersecting continuous loop. Here is a drawing to illustrate the situation, and a link to the Wikipedia page if you want to find out more about this conjecture. The conjecture has already been proven in several cases, including when $\mathscr C$ is piecewise analytic. So we know that for these two figures, there exists an inscribed square. The question is how do I find those squares? REPLY [5 votes]: There is a way (illustrated in the original answer by an image): if we draw a line orthogonal to the one that is aligned with the curve, then we can build the square from those two lines.<|endoftext|> TITLE: Partition edges of complete graph into paths of distinct length QUESTION [6 upvotes]: Let $K_n$ be the complete undirected graph on $n$ vertices. Can you partition the edges of $K_n$ into $n-1$ paths of lengths $1,2,\ldots,n-1$ such that the edge-sets of the paths are pairwise disjoint? I believe the statement to be true, but I cannot prove it. It is also possible that this is an open problem. REPLY [5 votes]: For odd $n \geq 5$, we can decompose $K_n$ into $(n-1)/2$ edge-disjoint $n$-cycles. These can be broken into paths of lengths $1,2,\ldots,n-1$ (break the first one into path lengths $1$ and $n-1$, the second one into $2$ and $n-2$, and so on). (This is called the Walecki decomposition.) By deleting a vertex from the Walecki decomposition for $K_{n+1}$, we find: for even $n \geq 4$, we can decompose $K_n$ into $n/2$ edge-disjoint $(n-1)$-paths. These can be broken into paths of lengths $1,2,\ldots,n-1$ (leave one alone, break one into path lengths $1$ and $n-2$, the second one into $2$ and $n-3$, and so on).<|endoftext|> TITLE: Is $\langle\ a,b\ \vert\ aba=bab,\ abab=baba\ \rangle$ a presentation of the free group on a single generator?
QUESTION [5 upvotes]: Is the following a presentation of the free group generated by a single element? $\langle\ a,b\ \vert\ aba=bab,\ abab=baba\ \rangle.$ My thinking is the following: $abab = baba=b(bab)=b^2ab$ by substituting the first relation into the second. Simplifying, we get $a=b$. Since these steps give equivalent statements, the above presentation is in fact $\langle\ a,b\ \vert\ a=b\ \rangle$, i.e., the free group on one generator. Is this correct? REPLY [6 votes]: Yes, this is correct. One thing I would leave out is the $b^2ab$ step, for $a=b$ follows from $abab=b(bab)$ by cancelling $bab$ on the right.<|endoftext|> TITLE: Distribution on $(0, \infty)$ which cannot be extended to $\mathbb{R}$ QUESTION [8 upvotes]: I am working on exercises from Friedlander's Introduction to the Theory of Distributions and I am stuck on a particular problem. The question is: "Show that $\langle u, \phi\rangle = \sum_\limits{k \geq 1} \partial^k \phi(1/k)$ is a distribution on $(0, \infty)$, but that there is no $v\in \mathcal{D}'(\mathbb{R})$ whose restriction to $(0, \infty)$ is equal to $u$." I believe I have managed to prove the first part: Given any compact $K \subset (0,\infty)$, take a test function $\phi\in C^{\infty}_c(0, \infty)$ with $\operatorname{supp} \phi \subset K$. Take then $N$ such that $\frac{1}{N+1} < \min \operatorname{supp} \phi$. We have that $\langle u, \phi\rangle = \sum_\limits{k = 1}^N \partial^k \phi(1/k)$, since $\partial^k\phi(1/k) = 0$ for all $k\geq N+1$. And so it is clear that $\exists C$ and $\exists N$ such that $u$ satisfies the seminorm estimates $|\langle u, \phi\rangle| \leq \sum\limits_{k=1}^N \sup|\partial^k \phi|$ for any $\phi$. Now, the second part is troubling me. I believe the way is to suppose there is a distribution $v\in \mathcal{D}'(\mathbb{R})$ with $v|_{(0,\infty)} = u$, and show that it would not satisfy the seminorm estimate because of the restriction. However, I am struggling to see how that should be done. Note: I have recognized this distribution to be equivalent to $\sum\limits_{k\geq 1} \delta^{(k)}(x-1/k)$ but I am not sure how this helps! REPLY [3 votes]: To close the question, Willie Wong's suggestion was to choose a test function which is equal to $\exp(x)$ within $\{x:|x|<1\}$. Then $\langle u, \phi\rangle = \sum\limits_{k\geq 1} \exp(1/k) > \sum\limits_{k\geq 1} \exp(0)$, which diverges. And so $\langle u, \phi\rangle$ cannot be bounded by seminorm estimates for our chosen $\phi$.<|endoftext|> TITLE: Are groups ordered pairs or sets? QUESTION [6 upvotes]: Some books say stuff like "if $\forall x \in G$, $x^2=e$, then $G$ is abelian". But the notation of a group is $\langle G, \circ \rangle$, and that looks like an ordered pair. So, should not the elements of the pair be $\{G\}$ and $\{\circ,G\}$ by definition of ordered pair? Or am I getting the notation wrong? I had this doubt also with the notation of partially ordered sets. REPLY [14 votes]: Yes, a group is an ordered pair: the first element of the pair is a set (the underlying set of the group), and the second is a binary function on that set (which, in set theory, is actually a set too). Saying something like "$G$ is abelian" is an abuse of notation: technically it's incorrect, but it has only one reasonable interpretation (this is only true btw if we aren't considering two different group structures on the same set, which we sometimes do). It's used because it's slightly easier to write than "$(G, \circ)$ is abelian."
Incidentally, given that "$e$" isn't actually part of the tuple $(G, \circ)$, that's also an abuse of notation - one should write $e_G$ (to distinguish it from the identity of some other group) or similar. But, again, we can get away with it in contexts where it won't lead to confusion. Also, it's worth pointing out that many texts treat groups as ordered triples of the form $(G, \circ, e)$. REPLY [9 votes]: Often, in maths you encounter structures that are sets plus some structure on that set. For example with groups, you are dealing with a set $G$ and some operation $\cdot: G \times G \to G$ on this set. In topology, you have a set $X$ and a topology $\tau$ on this set. In measure theory you have a set $X$, a $\sigma$-algebra on $X$ denoted $\Gamma$, and a measure on $\Gamma$ denoted $\mu$. In all of these cases you can describe the thing properly by providing the set and the structure together: a group is $(G,\cdot)$, a topological space is $(X, \tau)$ and a measure space is $(X, \Gamma, \mu)$. I think this is what your notation is about. REPLY [5 votes]: This is technically an abuse of language. Conflating a tuple $(A,\ldots)$ defining a set with structure with the set $A$ itself is extremely common, and I don't remember hearing anyone ever complain about it. This is because the ordered tuple construction is quite artificial and is not the only way to associate a set with the structures we put on it. For the example of groups, we could instead define a group as an object in the category $\mathbf{Grp}$ of all groups. Now this really is a set. The algebraic structure is then hidden in the morphisms of the category.<|endoftext|> TITLE: Convex sets as intersection of half spaces QUESTION [11 upvotes]: I want to prove that any closed convex set can be written as an intersection of half spaces, using only the separation theorem as a prerequisite. I have a feeling that I need to show two sets are subsets of each other, but I am not able to see how exactly to go about it. REPLY [12 votes]: I think that your approach should work. Let $C\subseteq \mathbb{R}^n$ be a closed, convex set. Let $\mathcal{H}$ be the collection of closed half-spaces that contain $C$. You would like to show that $$C = \bigcap_{H\in \mathcal{H}}H.$$ First we can show $C \subseteq \bigcap_{H\in \mathcal{H}}H.$ Let $x\in C$. By the definition of $\mathcal{H}$, any $H\in \mathcal{H}$ satisfies $C\subseteq H$. Hence $x\in H$ for any $H\in \mathcal{H}$ and therefore $x\in \bigcap_{H\in \mathcal{H}}H.$ This gives us the desired inclusion. It is left to show that $C \supseteq \bigcap_{H\in \mathcal{H}}H.$ We prove this using the contrapositive, that is we will show that if $x\not\in C$ then $x\not\in \bigcap_{H\in \mathcal{H}}H.$ So choose $x$ such that $x\not\in C$. Since $C$ is closed and convex, there is a hyperplane that strictly separates $x$ from $C$. This hyperplane defines a half space $H$ containing $C$. Hence $x\not\in H$, implying that $x\not\in \bigcap_{H\in \mathcal{H}}H$. This proves the desired inclusion.<|endoftext|> TITLE: Continuity in a compact metric space. QUESTION [7 upvotes]: Let $(X,d)$ be a compact metric space and let $f, g: X \rightarrow \mathbb{R}$ be continuous such that $$f(x) \neq g(x), \forall x\in X.$$ Show that there exists an $\epsilon$ such that $$|f(x) - g(x)| \geq \epsilon, \forall x \in X.$$ I'm assuming he means $\epsilon > 0$.
Well, suppose to the contrary that for all $\epsilon > 0$, there exists an $x' \in X$ such that $|f(x') - g(x')| < \epsilon.$ Since $f(x')$ and $g(x')$ are fixed values, we must have $f(x') = g(x')$, a contradiction. Seems uh... too easy? I didn't even have to use continuity or compactness? So seems wrong? (I'm really sick, so terrible at math this week, but is this right?) REPLY [6 votes]: The problem with your proof is that you cannot fix $x'$ and vary $\epsilon$. This is because $x'$ is conditioned on your given $\epsilon$. As for a correct solution, note that $|f(x) - g(x)|$ is a continuous function from $X$ to $\mathbb{R}$. What do you know about the minimum of a continuous function from a compact space to $\mathbb{R}$? REPLY [3 votes]: Negating the claim, we have: for each $k > 0$ there exists $x_k$ such that $|f(x_k) - g(x_k)| < \frac{1}{k}.$ Form the sequence $x = (x_k).$ Since $X$ is compact, $x$ must have a subsequence converging to some $y \in X$. Passing to this subsequence and using the continuity of $f$ and $g$, we must have $g(y) = f(y)$, which is a contradiction.<|endoftext|> TITLE: When does $2^n-1$ divide $3^n-1$? QUESTION [7 upvotes]: Is it possible for some integer $n>1$ that $2^n-1\mid 3^n-1$ ? I have tried many things, but nothing worked. REPLY [6 votes]: I was looking for this as well, and eventually figured it out myself. So here's my solution for future reference. The short answer is, $2^n - 1$ never divides $3^n - 1$. Here's the proof, making use of the Jacobi symbol. Assume $2^n - 1 \mid 3^n - 1$. If $n = 2k$ is even, then $2^n - 1 = 4^k - 1 \equiv 0 \bmod 3$. Consequently, $3$ must also divide $3^n - 1$, which is a contradiction. At the very least, we can already assume $n = 2k + 1$ is odd. Next, since $3^n \equiv 1 \bmod 2^n - 1$, from the properties of the Jacobi symbol it follows that \begin{equation} 1 = (\frac{1}{2^n - 1}) = (\frac{3^n}{2^n - 1}) = (\frac{3^{2k}}{2^n - 1}) \cdot (\frac{3}{2^n - 1}) = (\frac{3}{2^n - 1}) \end{equation} However, using Jacobi's law of reciprocity we also know \begin{equation} (\frac{2^n - 1}{3}) = (\frac{3}{2^n - 1}) \cdot (\frac{2^n - 1}{3}) = (-1)^{\frac{3 - 1}{2}\frac{2^n - 2}{2}} = (-1)^{2^{n - 1} - 1} = -1 \end{equation} The only quadratic non-residue $\bmod 3$ is $2$, therefore $2^n - 1 \equiv 2 \bmod 3$, or equivalently $2^n \equiv 0 \bmod 3$. Since this implies $3$ divides $2^n$, we again arrive at a contradiction.<|endoftext|> TITLE: Double Quotienting of a Ring is Isomorphic to Ring Quotient by Sum of Ideals QUESTION [6 upvotes]: First let me say, please edit my title if there is a more appropriate one, and if this is a duplicate please direct me and close the question. I tried searching for my question, but I don't know if the theorem I am trying to prove has a name, so it was difficult to know what to search for. Also let me say that I have already tried to squeeze this theorem out of one of the isomorphism theorems, but I can't quite see how to get it, so if your hint or answer is "check the isomorphism theorems for rings" I might need more than that. I would like to understand this for personal benefit, but I am kind of limited on time. I am specifically working with $A$ as a commutative ring with unity, $\mathfrak{a}$ is an ideal of $A$, and I believe $\mathfrak{b}$ is to be taken as an ideal in $A/\mathfrak{a}$, and $\mathfrak{b}'$ is the ideal in $A$ that corresponds to $\mathfrak{b}$.
Then I would like to show that $$A/\mathfrak{a}/\mathfrak{b} \approx A/(\mathfrak{a} + \mathfrak{b}').$$ I tried to work through the mechanics of a specific example in full formality for insight; specifically I investigated $$\mathbb{Z}/<12>/<3+<12>>$$ where <12> is my ideal generated by 12 in $\mathbb{Z}$, and <3+ <12>> is my ideal generated by the coset with representative 3 in $\mathbb{Z}/<12>$. Sorry for the horrible notation here. I am aware this is probably an unusually formal approach to the "coset" approach to quotienting, but I want to make sure I understand the nuts and bolts before passing off to theorems and the more "homomorphic image" approach to quotienting. Anyway, in this example we get $\mathbb{Z}/<12>/<3+<12>>$ = $$\{ <0+<12>>, <1 + <12>>, <2+<12>> \}$$ which should be easily isomorphically mapped to $\mathbb{Z}/(<12>+<3>)$ since that sum of ideals is just <12>+<3> = <3>. If you have any advice or direction please let me know. REPLY [4 votes]: I have already tried to squeeze this theorem out of one of the isomorphism theorems, but I can't quite see how to get it... Well, it's almost exactly the third isomorphism theorem, so it's a little strange you didn't succeed! Perhaps the difficulty lay in understanding the relationship of $\mathfrak b$ to $\mathfrak b'$. The corresponding ideal is just an ideal $\mathfrak b'$ of $A$ which contains $\mathfrak a$, such that $\mathfrak b=\frac{\mathfrak b'}{\mathfrak a}$. With that said... is there any reason to write $\mathfrak b$ anymore? Perhaps not. The third isomorphism theorem says that $\frac{A}{\mathfrak a}/\frac{\mathfrak b'}{\mathfrak a}\cong\frac{A}{\mathfrak b'}$. You shouldn't have to write $\mathfrak a +\mathfrak b'$ because $\mathfrak a+\mathfrak b'=\mathfrak b'$.<|endoftext|> TITLE: Determine whether or not $\exp\left(\sum_{n=1}^{\infty}\frac{B(n)}{n(n+1)}\right)$ is a rational number QUESTION [5 upvotes]: Let $B(n)$ be the number of ones in the base 2 expression for the positive integer $n$. Determine whether or not $$\exp\left(\sum_{n=1}^{\infty}\frac{B(n)}{n(n+1)}\right)$$ is a rational number. Attempt: I tried to make the sum into something that resembles the power series of log, that way it would be easier to determine whether this number is rational. But I have no idea how to deal with $B(n)$. Thanks in advance! REPLY [3 votes]: For $B(n)$ we have the following properties: \begin{align} & B(2k) = B(k) & \text{if }n = 2k \\ & B(2k + 1) = B(k) + 1 & \text{if }n = 2k+1 \end{align} Hence, $$S = \sum\limits_{n=1}^{+\infty} \dfrac{B(n)}{n(n+1)} = \sum\limits_{k=0}^{+\infty} \dfrac{B(2k+1)}{(2k+1)(2k+2)} + \sum\limits_{k=0}^{+\infty} \dfrac{B(2k + 2)}{(2k + 2)(2k+3)} = \sum\limits_{k=0}^{+\infty} \dfrac{B(k) + 1}{(2k+1)(2k+2)} + \sum\limits_{k=0}^{+\infty} \dfrac{B(k + 1)}{(2k + 2)(2k+3)} = \sum\limits_{k=0}^{+\infty} \dfrac{1}{(2k+1)(2k+2)} + \sum\limits_{k=0}^{+\infty} B(k + 1)\left(\dfrac{1}{(2k + 2)(2k+3)} + \dfrac{1}{(2k + 3)(2k+4)}\right) = \ln{2} + \sum\limits_{k=1}^{+\infty} B(k)\left(\dfrac{4k + 2}{2k(2k+1)(2k+2)}\right) = \ln{2} + \dfrac{1}{2}\sum\limits_{k=1}^{+\infty} \dfrac{B(k)}{k(k+1)} = \ln{2} + \dfrac{1}{2}S \Rightarrow S = 2\ln{2} = \ln{4}$$ Thus, $\exp\left\{\sum\limits_{n=1}^{+\infty} \dfrac{B(n)}{n(n+1)} \right\} = 4$, which is a rational number.
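As a quick numerical sanity check of $S=\ln 4$, here is a small sketch in Mathematica (the language used for code elsewhere in this document); DigitCount[k, 2, 1] is the built-in that returns $B(k)$, and the rest is a plain partial sum:

partial = Sum[DigitCount[k, 2, 1]/(k (k + 1.)), {k, 1, 100000}]  (* the 1. forces machine precision *)
Exp[partial]
(* returns roughly 3.999, creeping up to the exact value 4 as more terms are included *)
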
Note that manipulations with series above are legal, because $B(n) \le 1 + [\log_2(n)]$, so $\dfrac{B(n)}{n(n+1)} \le \dfrac{1}{n^{3/2}}$ for large enough $n$, which means that the series in question converges absolutely.<|endoftext|> TITLE: Closed form for $\int x^ne^{-x^m} \ dx\ ?$ QUESTION [10 upvotes]: While entertaining myself by answering a question, the following problem arose. For what natural numbers $n,m$ does the following indefinite integral have a closed form $$\int x^ne^{-x^m} \ dx\ ?$$ Closed form means that the antiderivative consists only of powers of $x^{...}$ and $x$ in $e^{-x^{...}}$. I created the following matrix showing for different pairs of $n$ and $m$ the nature of the antiderivative. $$\begin{matrix} & m&1&2&3&4&5&6&7\\ n\\ 1&&\checkmark&\checkmark&\Gamma&\text{erf}&\Gamma&\Gamma&\Gamma\\ 2&&\checkmark&\text{erf}&\checkmark&\Gamma&\Gamma&\text{erf}&\Gamma&\\ 3&&\checkmark&\checkmark&\Gamma&\checkmark&\Gamma&\Gamma&\Gamma&\\ 4&&\checkmark&\text{erf}&\Gamma&\Gamma&\checkmark&\Gamma&\Gamma\\ 5&&\checkmark&\checkmark&\checkmark&\text{erf}&\Gamma&\checkmark&\Gamma\\ 6&&\checkmark&\text{erf}&\Gamma&\Gamma&\Gamma&\Gamma&\checkmark\\ 7&&\checkmark&\checkmark&\Gamma&\checkmark&\Gamma&\Gamma&\Gamma\\ \end{matrix}$$ The $\checkmark$ sign stands for a closed form, "erf" signals that the antiderivative contains the erf function, and $\Gamma$ signals that the antiderivative contains the upper incomplete $\Gamma$ function. I have no clue. Does anybody? REPLY [5 votes]: Let's start with a simple substitution $x=u^{1/m}$. This gives us $$I=\frac1m\int u^{(n+1)/m-1}e^{-u}\ du=\frac1m\gamma\left(\frac{n+1}m,x^m\right)+c$$ This trivially has closed forms for $\frac{n+1}m\in\mathbb N$ due to integration by parts. Indeed, checking your table, it corresponds with every checkmark perfectly. And just for the record, when $k\in\mathbb N$, $$\int x^ke^{-x}\ dx=-e^{-x}\sum_{n=0}^k\frac{k!}{n!}x^n+c$$<|endoftext|> TITLE: Transformation of Random Variable $Y = X^2$ QUESTION [10 upvotes]: I'm learning probability, specifically transformations of random variables, and need help to understand the solution to the following exercise: Consider the continuous random variable $X$ with probability density function $$f(x) = \begin{cases} \frac{1}{3}x^2 \quad -1 \leq x \leq 2, \\ 0 \quad \quad \text{elsewhere}. \end{cases}$$ Find the cumulative distribution function of the random variable $Y = X^2$. The author gives the following solution: For $0 \leq y \leq 1: F_Y(y) = P(Y \leq y) = P(X^2 \leq y) \stackrel{?}{=} P(-\sqrt y \leq X \leq \sqrt y) = \int_{-\sqrt y}^{\sqrt y}\frac{1}{3}x^2\, dx = \frac{2}{9}y\sqrt y.$ For $1 \leq y \leq 4: F_Y(y) = P(Y \leq y) = P(X^2 \leq y) \stackrel{?}{=} P(-1 \leq X \leq \sqrt y) = \int_{-1}^{\sqrt y}\frac{1}{3}x^2\, dx = \frac{1}{9} + \frac{1}{9}y\sqrt y.$ For $y > 4: F_{Y}(y) = 1.$ Previous to this exercise, I've managed to follow the solutions of two similar (obviously simpler) problems for a strictly increasing and strictly decreasing function of $X$, respectively. However, in this problem I don't understand the computations being done, specifically: How are the three intervals $0 \leq y \leq 1$, $1 \leq y \leq 4$ and $y > 4$ determined? In the two previous problems I've encountered, we only considered one interval which was identical to the interval where $f(x)$ was non-zero. In the case where $0 \leq y \leq 1$, why does $P(X^2 \leq y) = P(-\sqrt y \leq X \leq \sqrt y)$ and not $P(X \leq \sqrt y)$? I have put question marks above the equalities that I don't understand.
I think I have not understood the theory well enough. I'm looking for an answer that will make me understand the solution to this problem and possibly make the theory clearer. REPLY [10 votes]: Let's start by seeing what the density function $f_X$ of $X$ tells us about the cumulative distribution function $F_X$ of $X$. Since $f_X(x) = 0$ for $-\infty < x < -1$, we see that $$F_X(x) = \int_{-\infty}^x f_X(t) \, dt \equiv 0 $$ in this range. Similarly, since $f_X(x) = 0$ in the range $2 < x < \infty$, we see that $$F_X(x) = \int_{-\infty}^x f_X(t) \, dt = \int_{-\infty}^{\infty} f_X(t) \, dt \equiv 1$$ in this range. In other words, the random variable is "supported on the interval $[-1,2]$" in the sense that $P(X \notin [-1,2]) = 0$. Now let us consider $Y = X^2$. This variable is clearly non-negative and since $X$ is supported on $[-1,2]$, we must have that $Y$ is supported on $[0, \max((-1)^2,2^2)] = [0,4]$. This is intuitively clear because the variable $X$ (with probability $1$) takes values in $[-1,2]$ and so $X^2$ takes values in $[0,\max((-1)^2,(2)^2)]$. So we only need to understand $F_Y(y)$ in the range $y \in [0,4]$. Now, we always have $$ F_Y(y) = P(Y \leq y) = P(X^2 \leq y) = P(-\sqrt{y} \leq X \leq \sqrt{y}) = \int_{-\sqrt{y}}^{\sqrt{y}} f_X(t) \, dt $$ but since $f_X$ is defined piecewise, to proceed at this point we need to analyze several cases. We already know that $F_Y(y) = 0$ if $y \leq 0$ and $F_Y(y) = 1$ if $y \geq 4$. If $0 \leq y \leq 1$ then $[-\sqrt{y},\sqrt{y}]$ is contained in $[-1,1]$ and on $[-1,1]$ the density function is $f_X(x) = \frac{1}{3}x^2$ so we can write $$ F_Y(y) = \int_{-\sqrt{y}}^{\sqrt{y}} \frac{1}{3} t^2 \, dt. $$ However, if $1 < y \leq 4$ then $-\sqrt{y} < -1$ and so the interval of integration splits as $[-\sqrt{y}, -1] \cup [-1,\sqrt{y}]$. Over the left $[-\sqrt{y},-1]$ part, the density function is zero so the integral will be zero and we are left only with calculating the integral over the right part: $$ F_Y(y) = \int_{-\sqrt{y}}^{-1} f_X(t) \, dt + \int_{-1}^{\sqrt{y}} f_X(t) \, dt = \int_{-1}^{\sqrt{y}} \frac{1}{3}t^2 \, dt. $$<|endoftext|> TITLE: find total number of maximal ideals in $\mathbb{Q}[x]/\langle x^4-1\rangle$. QUESTION [10 upvotes]: Find the total number of maximal ideals in $\mathbb{Q}[x]/\langle x^4-1\rangle$. Let $J=\langle x^4-1\rangle$, $R=\mathbb{Q}[x]$. I want to use $(R/J)/(I/J)\simeq R/I$, where $I$ is an ideal of $R$ which contains $J$. Then $R/I$ is a field, and $R$ is a principal ideal domain. Let $I=\langle f(x) \rangle$; then $f(x)$ must be irreducible in $R$ and divide $x^4-1$, so the only choices for $f(x)$ are $x-1,x+1,x^2+1$. So the answer should be $3$. Is this explanation right? And is there a better method? Thanks in advance. REPLY [3 votes]: That is all correct. (To get this out of the unanswered queue.)<|endoftext|> TITLE: What are the group homomorphisms from $ \prod_ {n \in \mathbb {N}} \mathbb {Z} / \bigoplus_ {n \in \mathbb {N}} \mathbb {Z} $ to $ \mathbb {Z} $? QUESTION [8 upvotes]: By a theorem of Specker, there’s only the zero map since any map out of $ \prod_{n \in \mathbb{N}} \mathbb{Z} $ is determined by the values of the unit vectors, which all lie in $ \bigoplus_{n \in \mathbb{N}} \mathbb{Z} $, but the original proof is more general, uses a bunch of machinery, and is in German. Isn’t there an easier way? REPLY [6 votes]: There is a nice quick proof. I'm not sure who the proof is due to.
The statement is equivalent to the following: If $P$ is the group of sequences ${\bf a}=(a_0,a_1,\dots)$ of integers, and $f:P\to\mathbb{Z}$ is a homomorphism that vanishes on finite sequences (so that $f({\bf a})=f({\bf b})$ whenever ${\bf a}$ and ${\bf b}$ differ in only finitely many places), then $f=0$. Suppose $f:P\to\mathbb{Z}$ is a homomorphism that vanishes on finite sequences. For any ${\bf a}\in P$, we can write $a_n=b_n+c_n$, where $b_n$ is divisible by $2^n$ and $c_n$ is divisible by $3^n$ (this is possible because $2^n$ and $3^n$ are coprime: choose $b_n\equiv a_n\pmod{3^n}$ with $2^n\mid b_n$, and set $c_n=a_n-b_n$). Then for each $n$, ${\bf b}$ differs in only finitely many places from a sequence divisible by $2^n$, so $f({\bf b})$ is divisible by $2^n$ for all $n$, and so $f({\bf b})=0$. Similarly $f({\bf c})=0$, and so $f({\bf a})=f({\bf b}+{\bf c})=0$.<|endoftext|> TITLE: How to disprove that every odd number can be written in the form $2^n + p$ with $p$ prime? QUESTION [8 upvotes]: How can I disprove that every odd number $2k+1>1$ can be written in the form $2k+1 = 2^n + p$ with $p$ prime? I know it's not true but I don't know how to explain why it is not true. REPLY [11 votes]: It suffices to find a counterexample. After some searching, we find A133122, odd numbers which cannot be written as the sum of an odd prime and a power of two: $$1, 3, 127, 149, 251, 331, 337, 373, 509, 599, 701,\dots$$ Even allowing $n=0$ and the use of even primes to say $3=2^0+2$ and ignoring $1$, the smallest counterexample is apparently $127$. To prove that $127$ is in fact a counterexample, note that $127 = 64+3^2\cdot 7 = 32+5\cdot 19 = 16 + 3\cdot 37 = 8+7\cdot 17 = 4+3\cdot 41 = 2+5^3$, while $127-2^0=126$ is even, and so no power of two is valid. These numbers are named Obstinate Numbers.<|endoftext|> TITLE: Prove that the center of G has order divisible by p QUESTION [8 upvotes]: Let $G$ be a non-trivial finite group and $p$ a prime number. If every proper subgroup $H < G$ has index divisible by $p$, prove that the center of $G$ has order divisible by $p$. So I have that $[G:H]=pk$ for some integer $k$, and we need to prove that $|Z(G)|=pl$ for some integer $l$. Let $|G|=n$; I can prove the case if I assume $G$ is an abelian group: then $|Z(G)|=n$, so the center has order divisible by $p$. How should I approach it if $G$ is not abelian? REPLY [16 votes]: Note that the class equation of a finite group $G$ is $|G|=|Z(G)|+\sum_{i=1}^n |cl(a_i)|\implies |Z(G)|=|G|-\sum_{i=1}^n |cl(a_i)|$ where the $a_i$ are representatives of the distinct conjugacy classes with more than one element. Now $|cl(a_i)|=\dfrac{|G|}{|C(a_i)|}$ where $C(a_i)=\{x\in G:xa_i=a_ix\}$. Note that $C(a_i)$ is a proper subgroup of $G$ for each $1\leqslant i\leqslant n$ (since $cl(a_i)$ has more than one element). Since every proper subgroup has index divisible by $p$, we get $p\mid |cl(a_i)|$ for all $1\leqslant i\leqslant n.$ Also, for any proper subgroup $H$ of $G$ (for instance the trivial one) we have $|G|=\left|\dfrac{G}{H}\right||H|$ and $p \mid \dfrac{|G|}{|H|}$, which implies $p\mid |G|$. Since the right-hand side of the class equation above is divisible by $p$, so is the left-hand side, i.e. $p\mid |Z(G)|$.<|endoftext|> TITLE: The rate of convergence of Cesaro average of Fourier series QUESTION [5 upvotes]: Do you know any estimates of the rate of convergence of the Cesàro averages of Fourier series? It does not matter for which classes of functions. It would be great if you can give some estimates depending on the smoothness of the function. It is well known that the Cesàro averages converge uniformly for all continuous functions. Also, for example, there is a well-known estimate for Fourier series (not Cesàro averages) that looks like $O(\frac{\log n}{n^p})$ where $p$ is the smoothness of the function. I would like to know some analogous results for Cesàro sums. Many thanks for any links, papers, books and so on! REPLY [6 votes]: There is a paper of R.
Bojanic and S.M. Mazhar, An estimate of the rate of convergence of the Nörlund–Voronoi means of the Fourier series of functions of bounded variation, Approx. Theory III, Academic Press (1980), 243–248. It says that if $f:[-2\pi,2\pi]\rightarrow\mathbb{R}$ is $2\pi$-periodic and of bounded variation and $S_{n}(f,x)$ is the partial sum of its Fourier series, then $$ \left\vert \frac{1}{n}\sum_{k=1}^{n}S_{k}(f,x)-\frac{1}{2}(f_{+}(x)+f_{-}(x))\right\vert \leq\frac{c}{n}\sum_{k=1}^{n}\operatorname*{Var}\nolimits_{[0,\frac{\pi}{k}]}g_{x}, $$ where for every fixed $x\in\lbrack-2\pi,2\pi]$, $g_{x}(t):=f(x+t)+f(x-t)-f_{+}(x)-f_{-}(x)$ for $t\neq0$ and $g_{x}(0):=0$. Here $f_{+}(x)$ and $f_{-}(x)$ are the right and left limits, respectively. In particular, if $f$ is piecewise $C^{1}$, then \begin{align*} \operatorname*{Var}\nolimits_{\lbrack0,\frac{\pi}{k}]}g_{x} & =\int_{0}^{\frac{\pi}{k}}|g_{x}^{\prime}(t)|\,dt=\int_{0}^{\frac{\pi}{k}}|f^{\prime}(x+t)-f^{\prime}(x-t)|\,dt\\ & \simeq|f_{+}^{\prime}(x)-f_{-}^{\prime}(x)|\frac{1}{k} \end{align*} and so $$ \frac{c}{n}\sum_{k=1}^{n}\operatorname*{Var}\nolimits_{[0,\frac{\pi}{k}]}g_{x}\simeq|f_{+}^{\prime}(x)-f_{-}^{\prime}(x)|\frac{c}{n}\sum_{k=1}^{n}\frac{1}{k}\simeq|f_{+}^{\prime}(x)-f_{-}^{\prime}(x)|\frac{\log n}{n}. $$ If $f_{+}^{\prime}(x)=f_{-}^{\prime}(x)$ and $f$ is piecewise $C^{2}$, then $$ \int_{0}^{\frac{\pi}{k}}|f^{\prime}(x+t)-f^{\prime}(x-t)|\,dt\simeq |f_{+}^{\prime\prime}(x)-f_{-}^{\prime\prime}(x)|\frac{1}{k^{2}} $$ and so $$ \frac{c}{n}\sum_{k=1}^{n}\operatorname*{Var}\nolimits_{[0,\frac{\pi}{k}]}g_{x}\simeq|f_{+}^{\prime\prime}(x)-f_{-}^{\prime\prime}(x)|\frac{c}{n}\sum_{k=1}^{n}\frac{1}{k^{2}}. $$<|endoftext|> TITLE: Why is it important for a manifold to have a countable basis? QUESTION [18 upvotes]: I've seen there are a few similar questions, but I haven't seen any so precise or with a good answer. I'd like to understand the reason why the definition of a manifold asks for the existence of a countable basis. Does anybody have an example of what can go wrong with an uncountable basis? When does the problem arise? Does it arise when we want to differentiate something or does it arise before? Thank you REPLY [24 votes]: There is one point that is mentioned in passing in Moishe Cohen's nice answer that deserves a bit of elaboration, which is that a lot of the time it is not important for a manifold to have a countable basis. Rather, what is important in most applications is for a manifold to be paracompact: this is what gives you partitions of unity, which are essential to an enormous amount of the theory of manifolds (for instance, as the other answer mentioned, proving that any manifold admits a Riemannian metric). Paracompactness follows from second-countability, which is the main reason why second-countability is useful. Paracompactness is weaker than second-countability (for instance, an uncountable discrete space is paracompact), but it turns out that it isn't weaker by much: a (Hausdorff) manifold is paracompact iff each of its connected components is second-countable. To put it another way, a general paracompact manifold is just a disjoint union of (possibly uncountably many) second-countable manifolds. So if you care mainly about connected manifolds (or even just manifolds with only countably many connected components), you lose no important generality by assuming second-countability rather than paracompactness.
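For a concrete picture of the gap between the two notions (a standard example, spelled out here for illustration): take an uncountable disjoint union of real lines, $$M=\coprod_{i\in I}\mathbb{R},\qquad I\ \text{uncountable}.$$ Each connected component is second-countable, so $M$ is paracompact by the criterion just quoted; but $M$ itself is not second-countable, since a basis must contain, for each $i\in I$, at least one nonempty set inside the $i$-th component, and there are uncountably many components.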
There are also a few situations where it really is convenient to assume second-countability and not just paracompactness. For instance, in the theory of Lie groups, it is convenient to be able to define a (not necessarily closed) Lie subgroup of a Lie group $G$ as a Lie group $H$ together with a smooth injective homomorphism $H\to G$. If you allowed your Lie groups to not be second-countable, you would have the awkward and unwanted example that $\mathbb{R}$ as a discrete space is a Lie subgroup of $\mathbb{R}$ with the usual $1$-dimensional smooth structure (via the identity map). For instance, this example violates the theorem (true if you require second-countability) that a subgroup whose image is closed is actually an embedded submanifold.<|endoftext|> TITLE: Problems understanding proof of if $x + y = x + z$ then $y = z$ (Baby Rudin, Chapter 1, Proposition 1.14) QUESTION [20 upvotes]: I'm having trouble with whether Rudin actually proves what he's tried to prove. Proposition 1.14; (page 6) The axioms of addition imply the following statements: a) if $x + y = x + z$ then $y = z$ The author's proof is as follows: $ y = (0 + y) = (-x + x) + y = -x + (x + \textbf{y})$ $$ = -x + (x + \textbf{z}) = (-x + x) + z = (0 + z) = z $$ I emphasized the section which troubles me. How does Rudin prove that $ y = z $ if he substituted $y = z$? REPLY [53 votes]: He didn't substitute $z$ for $y$; rather, he substituted $x+z$ for $x+y$. This is legitimate based on the assumption that $x+y = x+z$.<|endoftext|> TITLE: Triangular and Fibonacci numbers: $\sum_{k=0}^{2n}T_{2n-k}\color{red}{F_k^2}=F_{2n}F_{2n+1}-n$ QUESTION [6 upvotes]: The well-known Fibonacci square series $(1)$: $$0^2+1^2+1^2+2^2+3^2+\cdots+F_{n}^2=F_{n}F_{n+1}\tag1$$ $T_n=0,1,3,6,10,\dots$ and $F_n=0,1,1,2,3,\dots$ for $n=0,1,2,3,\dots$ Now we include triangular numbers into $(1)$ as shown below: $$T_0F_0^2=F_0F_1$$ $$T_1F_0^2+T_0F_1^2=F_1F_2-1$$ $$T_2F_0^2+T_1F_1^2+T_0F_2^2=F_2F_3-1$$ $$T_3F_0^2+T_2F_1^2+T_1F_2^2+T_0F_3^2=F_3F_4-2$$ $$T_4F_0^2+T_3F_1^2+T_2F_2^2+T_1F_3^2+T_0F_4^2=F_4F_5-2$$ $$T_5F_0^2+T_4F_1^2+T_3F_2^2+T_2F_3^2+T_1F_4^2+T_0F_5^2=F_5F_6-3$$ Observing the series involving triangular and Fibonacci numbers together, we found the following closed form. For an even number of terms $$\sum_{k=0}^{2n+1}T_{2n+1-k}\color{red}{F_k^2}=F_{2n+1}F_{2n+2}-n-1\tag2$$ For an odd number of terms $$\sum_{k=0}^{2n}T_{2n-k}\color{red}{F_k^2}=F_{2n}F_{2n+1}-n\tag3$$ How can we prove $(2)$ and $(3)$? An attempt: Knowing that $T_n={n(n+1)\over 2}$, then $(3)$ becomes $${1\over 2}\sum_{k=0}^{2n}(2n-k)(2n-k+1)F_k^2=F_{2n}F_{2n+1}-n$$ Simplified down to $$(4n^2+2n)\sum_{k=0}^{2n}F_k^2+\sum_{k=0}^{2n}(k^2-k-4nk)F_k^2=2F_{2n}F_{2n+1}-2n$$ finally down to $$\sum_{k=0}^{2n}(k^2-k-4nk)F_k^2=(2-2n-4n^2)F_{2n}F_{2n+1}-2n$$ we are not sure what to do next... REPLY [4 votes]: This answer uses $$\sum_{k=0}^{n}F_k^2=F_nF_{n+1}\tag4$$ $$\sum_{k=0}^{n}kF_k^2=nF_nF_{n+1}-F_n^2+\frac{1+(-1)^{n-1}}{2}\tag5$$ $$(-1)^n=F_{n-1}F_{n+1}-F_n^2\tag6$$ The proofs are written at the end of the answer. We want to prove that $$\sum_{k=0}^{m}T_{m-k}F_k^2=F_{m}F_{m+1}-\left\lceil\frac{m}{2}\right\rceil\tag7$$ Let us prove $(7)$ by induction on $m$ using $(4)$, $(5)$, $(6)$. $(7)$ holds for $m=1$.
Supposing that $(7)$ holds for some $m\ (\ge 1)$ gives $$\begin{align}\sum_{k=0}^{m+1}T_{m+1-k}F_k^2&=\sum_{k=0}^{m}(T_{m-k}+m+1-k)F_k^2\\\\&=\left(\sum_{k=0}^{m}T_{m-k}F_k^2\right)+\left(\sum_{k=0}^{m}(m+1-k)F_k^2\right)\\\\&=F_{m}F_{m+1}-\left\lceil\frac m2\right\rceil+(m+1)\left(\sum_{k=0}^{m}F_k^2\right)-\left(\sum_{k=0}^{m}kF_k^2\right)\\\\&=F_{m}F_{m+1}-\left\lceil\frac m2\right\rceil+(m+1)F_{m}F_{m+1}-\left(mF_{m}F_{m+1}-F_{m}^2+\frac{1+(-1)^{m-1}}{2}\right)\\\\&=2F_{m}F_{m+1}+F_{m}^2-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=2F_{m}F_{m+1}+F_{m-1}F_{m+1}-(-1)^m-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=F_{m+1}(F_m+F_m+F_{m-1})-(-1)^m-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=F_{m+1}F_{m+2}-(-1)^m-\left\lceil\frac m2\right\rceil-\frac{1+(-1)^{m-1}}{2}\\\\&=F_{m+1}F_{m+2}-\left\lceil\frac{m+1}{2}\right\rceil\qquad\blacksquare\end{align}$$ Let us prove $(4)$ by induction on $n$. $$\sum_{k=0}^{n}F_k^2=F_nF_{n+1}\tag4$$ $(4)$ holds for $n=1$. Supposing that $(4)$ holds for some $n\ (\ge 1)$ gives $$\sum_{k=0}^{n+1}F_k^2=F_nF_{n+1}+F_{n+1}^2=F_{n+1}(F_n+F_{n+1})=F_{n+1}F_{n+2}\qquad \blacksquare$$ Next, let us prove $(5)$ by induction on $n$ using $(6)$. $$\sum_{k=0}^{n}kF_k^2=nF_nF_{n+1}-F_n^2+\frac{1+(-1)^{n-1}}{2}\tag5$$ $$(-1)^n=F_{n-1}F_{n+1}-F_n^2\tag6$$ $(5)$ holds for $n=1$. Supposing that $(5)$ holds for some $n\ (\ge 1)$ gives $$\begin{align}\sum_{k=0}^{n+1}kF_k^2&=nF_nF_{n+1}-F_n^2+\frac{1+(-1)^{n-1}}{2}+(n+1)F_{n+1}^2\\\\&=nF_nF_{n+1}+nF_{n+1}^2+F_{n+1}^2-F_n^2+\frac{1+(-1)^{n-1}}{2}\\\\&=nF_{n+1}(F_n+F_{n+1})+(F_{n+1}+F_n)(F_{n+1}-F_n)+\frac{1+(-1)^{n-1}}{2}\\\\&=nF_{n+1}F_{n+2}+F_{n+2}F_{n+1}-F_{n+2}F_n+\frac{1+(-1)^{n-1}}{2}\\\\&=(n+1)F_{n+1}F_{n+2}-(F_{n+1}^2+(-1)^{n+1})+\frac{1+(-1)^{n-1}}{2}\\\\&=(n+1)F_{n+1}F_{n+2}-F_{n+1}^2+\frac{1+(-1)^n}{2}\qquad\blacksquare\end{align}$$ Finally, let us prove $(6)$ by induction on $n$. $$(-1)^n=F_{n-1}F_{n+1}-F_n^2\tag6$$ $(6)$ holds for $n=1$. Supposing that $(6)$ holds for some $n\ (\ge 1)$ gives $$\begin{align}(-1)^{n+1}&=-(-1)^n\\\\&=-F_{n-1}F_{n+1}+F_n^2\\\\&=-(F_{n+1}-F_n)F_{n+1}+F_n^2\\\\&=-F_{n+1}^2+F_nF_{n+1}+F_n^2\\\\&=-F_{n+1}^2+F_n(F_{n+1}+F_n)\\\\&=F_{n}F_{n+2}-F_{n+1}^2\qquad\blacksquare\end{align}$$<|endoftext|> TITLE: On the integral $\int_{e}^{\infty}\frac{t^{1/2}}{\log^{1/2}\left(t\right)}\alpha^{-t/\log\left(t\right)}dt,\,\alpha>1.$ QUESTION [13 upvotes]: Let $\alpha>1$. I would like to find a closed form or an upper bound of $$f\left(\alpha\right)=\int_{e}^{\infty}\frac{t^{1/2}}{\log^{1/2}\left(t\right)}\alpha^{-t/\log\left(t\right)}dt.$$ For a closed form I'm very skeptical, but I also have trouble finding an upper bound. I tried, manipulating a bit, to integrate w.r.t. $\alpha$ since $$\frac{\partial}{\partial\alpha}\alpha^{-t/\log\left(t\right)}=-\frac{t}{\alpha\log\left(t\right)}\alpha^{-t/\log\left(t\right)}$$ but it seems quite useless and at the moment I don't see a good way to proceed. Maybe it is interesting to see, using some trivial substitutions, that $$f\left(\alpha\right)=\int_{e}^{\infty}\frac{\left(e^{3/2}\right)^{-W_{-1}\left(-1/v\right)}}{v\left(-W_{-1}\left(-\frac{1}{v}\right)\right){}^{1/2}}\frac{W_{-1}\left(-\frac{1}{v}\right)}{W_{-1}\left(-\frac{1}{v}\right)+1}\alpha^{-v}dv$$ $$=\int_{e}^{\infty}g\left(v\right)\alpha^{-v}dv$$ where $W_{-1}\left(x\right)$ is the Lambert $W$ function. So it seems that $f(\alpha)$ is somehow connected to the Mellin transform of $g(v).$ Thank you.
REPLY [2 votes]: A naive but probably efficient approach is to exploit the fact that the logarithm function is approximately constant on short intervals and $$ \frac{1}{\sqrt{N}}\int_{e^N}^{e^{N+1}}\sqrt{t}\,\alpha^{-t/N}\,dt =\frac{N\sqrt{\pi}}{2\log(\alpha)^{3/2}}\,\text{Erf}\left(\sqrt{\frac{e^N\log\alpha}{N}}\right)$$ can be efficiently approximated through the continued fraction for the error function. We may also consider this fact: through the Laplace transform $$ \int_{0}^{+\infty}\sqrt{t}\exp\left(-\frac{t\log\alpha}{N}\right)\,dt = \int_{0}^{+\infty}\mathcal{L}^{-1}\left(\frac{1}{\sqrt{t}}\right)\,\mathcal{L}\left(t \exp\left(-\frac{t\log\alpha}{N}\right)\right)\,ds $$ we get the following integral: $$ \int_{0}^{+\infty}\frac{N^2}{\sqrt{\pi s}(Ns+\log\alpha)^2}\,ds =\frac{2}{\sqrt{\pi}}\int_{0}^{+\infty}\frac{1}{(s^2+\frac{\log\alpha}{N})^2}\,ds$$ which is simple to estimate in terms of $N$ and $\alpha$. The original integral is a weighted sum of these integrals, which according to my computations should behave like $$\exp\left(-\log(\alpha)^{3/2}\right).$$ But I am probably over-complicating things, and we may recover the same bound by just applying a modified version of Laplace's method to the original integral.<|endoftext|> TITLE: Why can a matrix without a full rank not be invertible? QUESTION [12 upvotes]: I know you could just say because the $\det = 0$. But during the introduction of determinants the professor said, obviously if two columns of the matrix are linearly dependent the matrix can't be inverted, therefore it is zero. He made it sound like it is an intuitive thing, a simple observation, but I always have to resort to the properties of determinants to show it. How does one trivially see that you cannot invert a matrix without full rank? REPLY [17 votes]: Suppose that the columns of $M$ are $v_1, \ldots, v_n$, and that they're linearly dependent. Then there are constants $c_1, \ldots, c_n$, not all $0$, with $$ c_1 v_1 + \ldots + c_n v_n = 0. $$ If you form a vector $w$ with entries $c_1, \ldots, c_n$, then (1) $w$ is nonzero, and (2) it'll turn out that $$ Mw = c_1 v_1 + \ldots + c_n v_n = 0. (*) $$ (You should write out an example to see why this first equality is true). Now we also know that $$ M0 = 0. (**) $$ So if $M^{-1}$ existed, we could say two things: $$ 0 = M^{-1}0 \ (**)\\ w = M^{-1} 0\ (*) $$ But since $w \ne 0$, these two are clearly incompatible. So $M^{-1}$ cannot exist. Intuitively: a nontrivial linear combination of the columns is a nonzero vector that's sent to $0$, making the map noninvertible. But when you really get right down to it: proving this, and things like it, help you develop your understanding, so that statements like this become intuitive. Think about something like "the set of integers that have integer square roots". I say that it's intuitively obvious that $19283173$ is not one of these. Why is that "obvious"? Because I've squared a lot of numbers, and all the squares have a last digit that's either $0, 1, 4, 5, 6,$ or $9$ (because those are the last digits of squares of single-digit numbers). Now that I've told you that, my statement about "intuitively obvious" is obvious to you, too. But until you'd at least learned a little about integer squares by investigating them, your intuition wasn't as good as mine.
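Following the parenthetical suggestion above, here is a small worked instance of $(*)$: take $$M=\begin{pmatrix}1 & 2\\ 2 & 4\end{pmatrix},$$ whose columns satisfy $2v_1 - v_2 = 0$, so that $c_1 = 2$, $c_2 = -1$ and $w=\begin{pmatrix}2\\-1\end{pmatrix}$. Then $$Mw=\begin{pmatrix}1\cdot 2 + 2\cdot(-1)\\ 2\cdot 2 + 4\cdot(-1)\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}=2v_1 - v_2,$$ a nonzero vector sent to $0$, exactly as the argument predicts.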
Sometimes "intuition" is just another name for "applied experience."<|endoftext|> TITLE: Prove that $f(7) = 56$ given $f(1)$ and $f(9)$ and $f' \le 6$ QUESTION [6 upvotes]: Let $f(x)$ be continues function at $[1,9]$ and differentiable at $(1,9)$ and also $f(1) = 20 , f(9) = 68 $ and $ |f'(x)| \le 6$ for every $x \in (1,9)$. I need to prove that $f(7) = 56$. I started by using the Lagrange theorem and found that there exist $ 16$. If it goes off, above the line, then we may apply a rather same reasoning.<|endoftext|> TITLE: Evaluate a limit involving a definite integral QUESTION [22 upvotes]: Let $(I_n)_{n \geq 1}$ be a sequence such that: $$I_n = \int_0^1 \frac{x^n}{4x + 5} dx$$ Evaluate the following limit: $$\lim_{n \to \infty} nI_n$$ All I've been able to find is that $(I_n)$ is decreasing and converges to $0$. Thank you! REPLY [3 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\mrm{g}\pars{n,x} \equiv {\pars{1 - x}^{n} \over 9 - 4x}}$ This is an application of Laplace Method: \begin{align} \lim_{n \to \infty}\pars{n\int_{0}^{1}{x^{n} \over 4x + 5}\,\dd x} & = {1 \over 9}\lim_{n \to \infty}\bracks{n \int_{0}^{1}{\pars{1 - x}^{n} \over 1 - 4x/9}\,\dd x} = {1 \over 9}\lim_{n \to \infty}\pars{n\int_{0}^{\infty}\expo{-nx}\,\dd x} \\[5mm] & = \bbx{\ds{1 \over 9}} \end{align}<|endoftext|> TITLE: Are there more than $\beth_1$ non-homeomorphic topological subspaces of $\Bbb R$? QUESTION [8 upvotes]: I've been asked by a younger student about a certain claim he had on a classification of topological subsets of $\Bbb R$. The overall idea was a bit fuzzy, but in hindsight it revolved around taking the $\sigma$-algebra generated by six (Borel) subsets + translations. I successfully (and, I hope, instructively) argumented against it. However, this led me to the question: Could I just cut it short and fancy with a cardinality argument? Specifically, if $\sim$ is the homeomorphism equivalence on $\mathcal P(\Bbb R)$, is $\operatorname{card}\left(\mathcal P(\Bbb R)/\sim\right)>\beth_1$ ? Intuitively, I'd say yes, because, "come on, there are $\beth_2$ nasty non-Borel sets". And, "at chit-chat level, homeomorphisms $(a,b)\to(c,d)$ are monotone functions". However, this is neither a proof nor a sufficient reason for my question to even be decidable in ZFC. In fact, on the topic I found this weaker fact: "closed subsets up to homeomorphism are exactly $\beth_1$". Thank you for links and/or answers. REPLY [10 votes]: Every subset of $\mathbb{R}$ has a countable dense subset. If $X\subseteq\mathbb{R}$ and $A\subseteq X$ is a countable dense subset, a homeomorphism from $X$ to another subset $Y\subseteq\mathbb{R}$ is determined by its restriction to $A$. So there is an injection from the set of homeomorphisms from $X$ to other subsets of $\mathbb{R}$ to the set of functions from $A$ to $\mathbb{R}$. 
There are only $\beth_1$ functions from $A$ to $\mathbb{R}$ since $A$ is countable. So each subset of $\mathbb{R}$ can be homeomorphic to at most $\beth_1$ other subsets of $\mathbb{R}$. Since there are $\beth_2>\beth_1$ different subsets of $\mathbb{R}$, there must be $\beth_2$ different homeomorphism classes of subsets of $\mathbb{R}$.<|endoftext|> TITLE: Hitting time of an open set is not a stopping time for Brownian Motion QUESTION [7 upvotes]: Let $(B_t)$ be a standard Brownian motion and $\mathcal F_t$ the associated canonical filtration. It's a standard result that the hitting time for a closed set is a stopping time for $\mathcal F_t$ and the hitting time for an open set is a stopping time for $\mathcal F_{t+}$. Is there an elementary way to see that the hitting time for an open set is not in general a stopping time for $\mathcal F_t$? Say the hitting time for an open interval $(a,b)$? I'm interested in this question because it would show the filtration generated by a right-continuous process need not be right-continuous. There are other counterexamples for this on M.SE but they're all somewhat artificial. My apologies if this is obvious. I just started learning about such things. REPLY [7 votes]: No, it's not at all obvious. If we interpret $\mathcal{F}_t$ as "information up to time $t$", it's not surprising that the hitting time of an open interval $(a,b)$ is not an $\mathcal{F}_t$-stopping time. For instance if $B_t(\omega)=a$ for some $t>0$, then the information about the past, i.e. $(B_s(\omega))_{s \leq t}$, is not enough to decide whether $\tau(\omega)=t$; we need a small glimpse into the future. Making this intuition rigorous is, however, not easy. One possibility is to apply Galmarino's test which states that a mapping $\tau: \Omega \to [0,\infty]$ is a stopping time (with respect to $(\mathcal{F}_t)_{t \geq 0}$) if and only if $$\tau(\omega)=t, B_s(\omega) = B_s(\omega') \, \, \text{for all $s \leq t$} \implies \tau(\omega')=t, \tag{1}$$ see this question. If $\tau$ is the hitting time of an open interval, $(1)$ is not satisfied; the easiest way to see this is to consider the canonical Brownian motion, i.e. consider $(B_t)_{t \geq 0}$ as a process on the space of continuous mappings. Let me finally remark that there are other ways to prove that the filtration generated by a Brownian motion is not right-continuous, see, for instance, this answer.<|endoftext|> TITLE: A real function which is additive but not homogeneous QUESTION [21 upvotes]: From the theory of linear mappings, we know linear maps over a vector space satisfy two properties: Additivity: $$f(v+w)=f(v)+f(w)$$ Homogeneity: $$f(\alpha v)=\alpha f(v)$$ where $\alpha\in \mathbb{F}$ is a scalar in the field over which the vector space is defined, and neither of these conditions implies the other one. If $f$ is defined over the complex numbers, $f:\mathbb{C}\longrightarrow \mathbb{C}$, then finding a mapping which is additive but not homogeneous is simple; for example, $f(c)=c^*$. But can anyone present an example on the reals, $f:\mathbb{R}\longrightarrow \mathbb{R}$, which is additive but not homogeneous? REPLY [18 votes]: If $f : \Bbb{R} \to \Bbb{R}$ is additive, then you can show that $f(\alpha v) = \alpha f(v)$ for any $\alpha \in \Bbb{Q}$ (so $f$ is a linear transformation when $\Bbb{R}$ is viewed as a vector space over $\Bbb{Q}$). As $\Bbb{Q}$ is dense in $\Bbb{R}$, it follows that an additive function that is not homogeneous must be discontinuous.
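To spell out that first claim, since it is the key step: additivity gives $f(nv)=nf(v)$ for every $n\in\Bbb{N}$ by induction, and $f(0)=f(0+0)=2f(0)$ forces $f(0)=0$, hence $0=f(v+(-v))=f(v)+f(-v)$, i.e. $f(-v)=-f(v)$. Applying the first identity to $v/q$ gives $$f(v)=f\left(q\cdot\tfrac{v}{q}\right)=q\,f\left(\tfrac{v}{q}\right)\quad\Longrightarrow\quad f\left(\tfrac{v}{q}\right)=\tfrac{1}{q}\,f(v),$$ and combining these, $f\left(\tfrac{p}{q}v\right)=\tfrac{p}{q}\,f(v)$ for all $p\in\Bbb{Z}$ and $q\in\Bbb{N}$.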
To construct non-trivial discontinuous functions on $\Bbb{R}$ with nice algebraic properties, you usually need to resort to the existence of a basis for $\Bbb{R}$ viewed as a vector space over $\Bbb{Q}$. Such a basis is called a Hamel basis. Given a Hamel basis $B = \{x_i \mid i \in I\}$ for $\Bbb{R}$ (where $I$ is some necessarily uncountable index set), you can easily define a function that is additive but not homogeneous, e.g., pick a basis element $x_i$, define $f$ on the basis by $f(x_i) = 1$ and $f(x_j) = 0$ for $j \neq i$, and extend $\Bbb{Q}$-linearly. REPLY [10 votes]: Additive but not homogeneous functions $f: \mathbb R\to\mathbb R$ have to be a little bit more complicated since one can show that those functions can't be measurable and therefore need the axiom of choice in some way to be constructed. Consider $\mathbb R$ as a vector space over the field $\mathbb Q$ and select a basis $(r_i)_{i\in I}$. Call $(x,i)$ the coefficient of $r_i$ in the basis representation of $x$ with respect to the basis $(r_i)_{i\in I}$. Then $x\mapsto (x,i)$ is $\mathbb Q$-linear and therefore in particular additive, but it is obviously not $\mathbb R$-homogeneous because $(r_i,i) = 1$ and $0 = (r_j,i) = (\frac{r_j}{r_i}\cdot r_i,i)$ for $i\neq j$.<|endoftext|> TITLE: Is there an explicit formula that gives the value of $\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}$ for $n$ square roots? QUESTION [10 upvotes]: $$\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}$$ I know that with infinite square roots it's $x = \sqrt{2 + x}$, but what about a finite number of roots? I've searched around a lot for this, and can't find anything useful, nor can I make a dent in the problem myself. Maybe I'm searching using the wrong vocabulary? REPLY [14 votes]: Elaborating on Michael Rozenberg's answer: Note that $$\sqrt{2+2\cos\alpha} = \sqrt{4\cos^2\left(\frac{\alpha}{2}\right)} = 2\cos\left(\frac{\alpha}{2}\right)$$ So, $$\sqrt{2} = 2\cos\left(\frac{\pi}{4}\right)$$ $$\sqrt{2+\sqrt{2}} = 2\cos\left(\frac{\pi}{8}\right)$$ $$\vdots$$ Thus, if we have $n$ square roots, we have $$x=2\cos\left(\frac{\pi}{2^{n+1}}\right)$$<|endoftext|> TITLE: Calculating $\int \sqrt{1 + x^{-2}}dx$ QUESTION [6 upvotes]: I would like to find $$\int \sqrt{1 + x^{-2}}dx$$ I have found that it is equivalent to $$ \int \frac{\sqrt{1 + x^2}}{x}dx $$ but I am not sure what to do about it. With trig substitution $x = \tan(\theta)$ I get $$ \int \frac{1}{\sin(\theta)\cos^2(\theta)}d\theta $$ but that seems to be a dead end. REPLY [2 votes]: Set $x^{-1}=\sinh t$, so $\sqrt{1+x^{-2}}=\cosh t$. Then $$ dx=-\frac{\cosh t}{\sinh^2t}\,dt $$ and the integral becomes $$ -\int\frac{\cosh^2t}{\sinh^2t}\,dt= -\int\frac{1+\sinh^2t}{\sinh^2t}\,dt=\frac{\cosh t}{\sinh t}-t+c= \sqrt{1+x^2}-\operatorname{arsinh}\frac{1}{x}+c $$ You can find a more explicit expression for $\operatorname{arsinh}\frac{1}{x}$ by setting $$ \frac{1}{x}=\frac{e^t-e^{-t}}{2} $$ or $$ xe^{2t}-2e^t-x=0 $$ so $$ e^t=\frac{1+\sqrt{1+x^2}}{x} $$ The final antiderivative is $$ \sqrt{1+x^2}-\log\frac{1+\sqrt{1+x^2}}{x}+c $$<|endoftext|> TITLE: Triple Integral $\iiint x^{2n}+y^{2n}+z^{2n}dV$ QUESTION [6 upvotes]: Evaluate: $$\iiint_{x^2+y^2+z^2 \leqslant 1} x^{2n}+y^{2n}+z^{2n} dV $$ I have tried to convert to spherical polars and then compute the integral, but it gets really messy because of the $2n$ power. Any tips?
REPLY [6 votes]: First observation: it is symmetric in $x,y,z$, so by linearity we have $$\iiint_{x^2+y^2+z^2 \leqslant 1} x^{2n}+y^{2n}+z^{2n} dV =3\iiint_{x^2+y^2+z^2 \leqslant 1} z^{2n} dV.$$ Choosing spherical coordinates it becomes $$3\iiint_{x^2+y^2+z^2 \leqslant 1} (r\cos \theta)^{2n} dV$$ where $dV= r^2 \sin \theta \ \text{d}r \ \text{d}\theta \ \text{d}\phi$. Thus the integral simplifies to $$3 \int_0^{2\pi}\int_0^{\pi}\int_0^1 r^{2(n+1)} (\cos \theta)^{2n} \sin \theta \ \text{d}r \ \text{d}\theta \ \text{d}\phi = \frac{3}{2n+3}2 \pi \int_0^{\pi}(\cos \theta)^{2n} \sin \theta \ \text{d}\theta. $$ Using that $$\int_0^{\pi}(\cos \theta)^{2n} \sin \theta \ \text{d}\theta = \frac{2}{2n+1} $$ we have $$\iiint_{x^2+y^2+z^2 \leqslant 1} x^{2n}+y^{2n}+z^{2n} dV= \frac{3}{2n+3}2 \pi \frac{2}{2n+1} = \frac{12\pi}{(2n+1)(2n+3)}.$$<|endoftext|> TITLE: Why does this algorithm for Egyptian fractions not terminate in ~$2$% of cases? QUESTION [14 upvotes]: I thought up yet another algorithm for Egyptian fraction expansion which turned out to be very effective (in terms of the length and the denominator size) - in most cases. However, for some fractions it doesn't terminate at all - it leads to an infinite loop. Here is the algorithm: Let $\frac{p}{q}<1$ and $p,q$ coprime. Find the minimal $m$ such that $q/(p+m)$ is an integer. We only need to consider $m \in [1,q-p]$. Represent the fraction as: $$\frac{p}{q}=\frac{p+m}{q}-\frac{m}{q}$$ Now to obtain a positive term instead of a negative one, we split the first fraction in two: $$\frac{p+m}{q}-\frac{m}{q}=\frac{p+m}{2q}+\frac{p+m}{2q}-\frac{m}{q}=\frac{p+m}{2q}+\frac{p-m}{2q}$$ Here is a conditional: if $p<m$, the remainder $\frac{p-m}{2q}$ would be negative, so we halve the first term again (repeatedly, if necessary) until the remainder becomes positive; if $p>m$ then $\frac{p}{q} \to \frac{p-m}{2q}$ and we repeat the first step of the algorithm. The working name is complementary method, so I will use CM to denote it for now. Despite its simplicity (it's not at all obvious why we are dividing by $2$ instead of using some other way to expand the first term) the algorithm works very well. In a lot of cases it beats every other algorithm I tried. Since the greedy algorithm and Engel expansion are usually bad in terms of denominator size, I used two other methods to compare: Binary Remainder Method and my own 'Splitting-Joining method' (the details can be found in my Mathematica SE question). I also compared it to a modification of Engel proposed by Daniel Fischer in this answer and CM is mostly better for the examples he provided. Some examples of the best results (a sequence of denominators is provided in each case): 4/49: CM {14,98}; BR {16,98,196,392,784}; SJ {13,325,925,1813} 3/35: CM {14,70}; BR {16,70,140,560}; SJ {20,28} 47/104: CM {4,8,13}; BR {4,8,16,104,208}; SJ {4, 14, 26, 28, 52, 70, 104, 130, 182} Some examples of the worst results (but still valid - the algorithm terminates): 94/191: CM {4, 8, 16, 32, 64, 256, 512, 1024, 2048, 4096, 8192, 24448, 48896, 97792, 195584, 391168, 782336, 1564672} 65/157: CM {4, 8, 32, 256, 512, 1024, 2048, 4096, 10048, 20096, 40192, 80384, 160768, 643072} 52/139: CM {4, 16, 32, 64, 128, 278, 556, 1112, 2224, 8896, 17792} However, in these cases both BR and SJ methods also give long expansions with large denominators. Now, the real problem which I'm trying to solve: why does the algorithm fail to terminate in some cases, falling into loops instead? From large scale experiments I estimated the proportion of such fractions to be about $1.8$% (for numerators and denominators below $1000$).
The examples of such 'bad' fractions are: $$\frac{41}{111},\frac{5}{87},\frac{8}{87},\frac{14}{87},\frac{47}{87},\frac{61}{102},\frac{17}{69},\frac{33}{119},\frac{38}{93},\frac{77}{177},\frac{32}{57},\frac{99}{185},\frac{98}{141},\frac{100}{129},\dots$$ The most common denominator is $87$ for some reason. Note that not all of the denominators and/or numerators are prime. The problem can be solved by using $\frac{p-1}{q}$ instead, but not in every case, for example $7/87$ doesn't work either. However, both $6/87$ and $2/87$ work, and give different denominators, so we can expand $8/87$ after all. I think the problem might be related to the use of the expansion $1=1/2+1/2$ to divide the first term. However, when I tried some other schemes, I didn't get good results (for example, I've got repeating fractions when using $1=1/3+2/3$). The working Mathematica code for the algorithm is:
x = 6/87;
p0 = Numerator[x]; q0 = Denominator[x];
S = 0; Nm = 100;
a = Table[1, {k, 1, Nm}]; m = Table[1, {k, 1, Nm}];
p1 = p0; q1 = q0; j = 1;
While[Abs[p0] > 1 && j <= Nm && q0 < 10^35,
 M = Catch[Do[If[FractionalPart[q0/(p0 + k)] < 1/10^55, Throw[k]], {k, 0, q0 - p0}]];
 m[[j]] = M;
 a[[j]] = (p0 + M)/(2 q0);
 p1 = Numerator[a[[j]] - M/q0]; q1 = Denominator[a[[j]] - M/q0];
 While[p1 < 0,
  a[[j]] = a[[j]]/2;
  p1 = Numerator[a[[j]] + p1/q1];
  q1 = Denominator[a[[j]] + p1/q1] (* caution: p1 was already overwritten on the previous line, so this pairs the new p1 with the old q1 -- a suspect spot *)
  ];
 If[a[[j]] != 1, S += a[[j]]];
 j++;
 p0 = p1; q0 = q1];
a[[j]] = p1/q1; S += a[[j]];
Denominator[Table[a[[k]], {k, 1, j}]]
And the second question: how to modify the algorithm so it always terminates? Update: Among the first $10000$ fractions with $p \neq 1$ in lexicographic order there are $269$ fractions for which this algorithm doesn't work. (Seems to be more than $2$%.) They are: 5/33,5/51,2/55,32/55,4/57,7/57,13/57,23/57,32/57,6/65,43/66,4/69,8/69,11/69,17/69,40/69,50/69,8/85,59/85,4/87,5/87,7/87,8/87,14/87,34/87,47/87,62/87,65/87,76/87,5/93,7/93,10/93,19/93,38/93,50/93,67/93,6/95,8/95,63/95,61/102,9/110,59/110,4/111,7/111,8/111,13/111,16/111,22/111,25/111,31/111,41/111,44/111,59/111,62/111,68/111,82/111,7/114,65/114,71/114,83/114,103/114,6/115,11/115,17/115,63/115,3/119,5/119,10/119,15/119,16/119,33/119,37/119,45/119,61/119,66/119,67/119,71/119,73/119,78/119,96/119,101/119,4/123,5/123,8/123,10/123,11/123,17/123,20/123,26/123,29/123,35/123,46/123,49/123,67/123,70/123,76/123,86/123,92/123,4/129,5/129,10/129,13/129,14/129,19/129,28/129,31/129,37/129,47/129,53/129,71/129,74/129,80/129,91/129,100/129,77/130,53/132,119/132,11/138,31/138,77/138,85/138,91/138,103/138,4/141,5/141,7/141,8/141,14/141,16/141,17/141,23/141,32/141,35/141,41/141,52/141,55/141,74/141,79/141,82/141,88/141,98/141,101/141,110/141,121/141,3/143,7/143,21/143,40/143,42/143,60/143,73/143,80/143,98/143,120/143,138/143,6/145,8/145,13/145,21/145,64/145,79/145,93/145,122/145,6/155,7/155,9/155,12/155,14/155,69/155,99/155,102/155,107/155,131/155,5/159,7/159,10/159,11/159,14/159,19/159,20/159,23/159,32/159,38/159,58/159,64/159,83/159,85/159,91/159,113/159,116/159,125/159,136/159,9/161,11/161,101/161,103/161,16/165,41/165,61/165,116/165,151/165,33/170,101/170,7/174,37/174,43/174,65/174,95/174,97/174,101/174,103/174,115/174,155/174,8/175,11/175,78/175,108/175,111/175,113/175,116/175,148/175,4/177,5/177,8/177,10/177,11/177,13/177,17/177,19/177,20/177,22/177,26/177,29/177,35/177,38/177,44/177,64/177,67/177,70/177,77/177,94/177,95/177,97/177,103/177,122/177,128/177,131/177,137/177,140/177,154/177,4/183,5/183,7/183,10/183,11/183,13/183,14/183,19/183,20/183,22/183,28/183,34/183,37/183,40/183,49/183,65/183,68/183,71/183,74/183 Update 2 (Important) The question was deleted for a time,
because for some of the listed fractions the algorithm seems to work just fine when done by hand. There is some error in my code, which I haven't been able to find yet. But there are fractions which lead to loops by hand as well (such as $41/111$) so the question still stands. REPLY [6 votes]: There's still an error in your code: in the While loop you take if $p\ldots$<|endoftext|> TITLE: Geodesics between singular points in a translation surface QUESTION [5 upvotes]: Consider a translation surface $X$ with $n\ge 2$ points of conical singularity $x_1,\dots,x_n$ of cone angle $\theta_i=2k_i\pi$, $k_i>1$. Suppose that the geodesic $\sigma$ from $x_1$ to $x_2$ for the singular flat metric is a straight segment. By "geodesic" I mean a global geodesic, meaning that the length of $\sigma$ with respect to the singular flat metric equals the distance of the two points with respect to the induced metric. Now consider any smooth point $x\in X$ such that the geodesic $\tau$ from $x_1$ to $x$ for the singular flat metric is a segment and such that the angle at $x_1$ between $\sigma$ and $\tau$ is greater than $\pi$ (by "angle" I don't mean Alexandrov's definition of angle, but simply the angle measured at the conical point, where the total angle is $2k_1\pi$). Question 1: Is $\sigma\ast \tau^{-1}$ always the geodesic from $x$ to $x_2$? Or could such a geodesic be a straight segment or pass through another singular point? Question 2: If $x_2$ were a smooth point, would the answer to the previous question always be yes? REPLY [2 votes]: In general, given a nonpositively curved manifold (equipped with a possibly singular Riemannian metric) with nontrivial topology, (local) geodesics need not be global distance minimizers and no local assumptions can help you with this. As a specific example for your question, start with the flat 2-torus $T^2$. Let $c$ be the shortest closed geodesic on $T^2$. (There might be several, pick one.) Pick two points $p, q\in c$ which divide the geodesic into arcs of equal length. Pick also a point $r\in T^2 - c$. Now, consider the 3-fold branched cover $S\to T^2$ ramified (with degree 3) at the points $p, r$. Lift the flat metric on $T^2$ to a singular flat metric on $S$. Let $x$ be the preimage of $p$ in $S$. The loop $c$ will lift to several different loops on $S$, all of length equal to that of $c$; one of them will be a loop $\tilde{c}$ which makes the angle $3\pi$ at $x$. The point $q$ will have three preimages in $S$, one of them will be on $\tilde{c}$, I will denote it by $y$. The loop $\tilde{c}$ is the concatenation of two arcs $\sigma=yx$ and $\tau^{-1}=xy$ of equal length. Since the loop $c$ was the shortest closed geodesic on $T^2$, both arcs will be distance-minimizers on $S$. However, their concatenation is, of course, not a distance-minimizer, since it is a closed geodesic on $S$.<|endoftext|> TITLE: Tensor Product of dual linear maps QUESTION [5 upvotes]: Suppose $V$ and $W$ are finite dimensional linear spaces and $V^*$ as well as $W^*$ are their appropriate linear duals. Now let $f: V \to W$ and $g: V \to W$ be linear maps. Is the following identity correct? $f^* \otimes g^* = (f \otimes g)^*$ That is, the tensor product of the dual linear maps is the linear dual of the tensor product of the maps. I can't find this on the Wikipedia page of the tensor product, nor on the Wikipedia page of the dual linear maps. Therefore it's probably wrong? I don't think so.
REPLY [3 votes]: The problem is that the two maps you are considering do not have the same domain and codomain: $$f^*\otimes g^*:W^*\otimes W^*\to V^*\otimes V^*\quad\quad (f\otimes g)^*:(W\otimes W)^*\to(V\otimes V)^*$$ so they cannot possibly be equal as maps. However, we do have a canonical map $$\eta_W:W^*\otimes W^*\to (W\otimes W)^*,\quad (\eta_W(\phi\otimes\psi))(w_1\otimes w_2) = \phi(w_1)\psi(w_2)$$ (as explained in this question) which is an isomorphism in the finite dimensional case. Although $f^*\otimes g^*$ and $(f\otimes g)^*$ are not equal, I believe that the following is a factorization of $f^*\otimes g^*$: $$W^*\otimes W^*\xrightarrow{\ \eta_W \ }(W\otimes W)^*\xrightarrow{\ (f\otimes g)^* \ }(V\otimes V)^*\xrightarrow{\ \eta_V^{-1} \ }V^*\otimes V^*.$$ To prove this, it suffices to show that the square: $$\require{AMScd} \begin{CD} W^*\otimes W^* @>{\eta_W}>> (W\otimes W)^*\\ @V{f^*\otimes g^*}VV @VV{(f\otimes g)^*}V\\ V^*\otimes V^* @>>{\eta_V}> (V\otimes V)^* \end{CD} $$ commutes. Let $\phi\otimes\psi\in W^*\otimes W^*$. We want to show that $$((f\otimes g)^*\circ\eta_W)(\phi\otimes\psi) \quad\text{and}\quad (\eta_V\circ(f^*\otimes g^*))(\phi\otimes\psi)$$ are equal in $(V\otimes V)^*$ (so equal as maps from $V\otimes V$ to the underlying field $\mathbb{F}$). With that in mind, let $v_1\otimes v_2\in V\otimes V$. Then, we have \begin{align} \notag ((f\otimes g)^*\circ\eta_W(\phi\otimes\psi))(v_1\otimes v_2) &= (\eta_W(\phi\otimes\psi)\circ(f\otimes g))(v_1\otimes v_2)\\ \notag &= \eta_W(\phi\otimes\psi)(f(v_1)\otimes g(v_2))\\ \notag &= \phi(f(v_1))\cdot \psi(g(v_2)) \end{align} and: \begin{align} \notag (\eta_V\circ(f^*\otimes g^*)(\phi\otimes\psi))(v_1\otimes v_2) &= \eta_V((\phi\circ f)\otimes(\psi\circ g))(v_1\otimes v_2)\\ \notag &= \phi(f(v_1))\cdot \psi(g(v_2)). \end{align} Since everything was arbitrary, we have $f^*\otimes g^* = \eta_V^{-1}\circ (f\otimes g)^*\circ\eta_W$.<|endoftext|> TITLE: weak*-convergence and weak operator topology - multiplication operator QUESTION [6 upvotes]: The setting: let $(\Omega,\mu)$ be a $\sigma$-finite measure space and let $M_\phi : L^2(\Omega,\mu) \to L^2(\Omega,\mu)$ be the multiplication operator with $\phi \in L^{\infty}(\Omega,\mu)$. I want to show: if $M_{\phi_{i}} \to M_\phi$ in the weak operator topology, then $\phi_i \to \phi$ in the weak*-topology. I already managed to show the reverse statement. I don't know if this helps or even is true: maybe I can write every $f \in L^1$ as a product of two functions in $L^2$? REPLY [2 votes]: As alluded to in your question, the hardest part is writing an $L^1$ function as a product of two $L^2$ functions. But this turns out to be easier than expected. Suppose $M_{\phi_i}$ is WOT-convergent to $M_\phi$, and let $f\in L^1$ be given. Then we can write $f=|f|e^{i\theta}$, where $\theta$ is a measurable function. Now define \begin{align*} g&=|f|^{1/2}e^{i\theta}, \\ h&=|f|^{1/2}. \end{align*} Then $g,h\in L^2$ and we have $$\langle M_{\phi_i}g,h\rangle=\int\phi_i|f|e^{i\theta}\ d\mu =\int\phi_if\ d\mu. $$ By hypothesis, $\langle M_{\phi_i}g,h\rangle\to\langle M_\phi g,h\rangle$, and thus $$ \int\phi_if\ d\mu\to\int\phi f\ d\mu. $$ Since $f\in L^1$ was arbitrary, we know $\{\phi_i\}$ is weak$^*$-convergent to $\phi$.<|endoftext|> TITLE: When is a matrix function the Jacobian matrix of another mapping QUESTION [5 upvotes]: Suppose $J(x)$ is a continuous matrix function $\mathbb{R}^D \to \mathbb{R}^{D \times D}$.
Does there always exist a mapping $f: \mathbb{R}^D \to \mathbb{R}^D$ such that $J = \nabla f$? If not, are there well-known conditions under which this mapping exists? REPLY [2 votes]: Let $J_{1}, \ldots, J_{D}$ denote the columns of $J$. Then each $J_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}^{D}$ and so you are trying to find functions $f_{i}:\mathbb{R}^{D}\rightarrow\mathbb{R}$ such that $J_{i}=\nabla f_{i}$ for every $i=1, \ldots, D$. It is easy to construct counterexamples. If $J$ is $C^{1}$ and not just continuous, since the domain is $\mathbb{R}^{D}$, then a necessary and sufficient condition for each $J_{i}$ to be the gradient of a function is that $J_{i}$ is irrotational, that is, $\frac{\partial J_{i,j}}{\partial x_{k}}=\frac{\partial J_{i,k}}{\partial x_{j}}$ for all $j$, $k$, where $J_{i}=(J_{i,1},\ldots,J_{i,D})$. In $\mathbb{R}^{2}$ take $J_{1}(x,y)=(y,2x)$ and anything you want for $J_{2}$. Then $\frac{\partial }{\partial y}(y)=1\neq\frac{\partial}{\partial x}(2x)=2$, and so $J_{1}$ is not irrotational. If $J$ is just continuous, then a necessary and sufficient condition for each $J_{i}$ to be the gradient of a function is that $\int_{\gamma}J_{i}=0$ for every closed curve $\gamma$. This is not so easy to use because you have to check every closed curve, but if you find one for which the integral is nonzero, then you immediately know that $J_{i}$ cannot be the gradient of a function. You can find all this stuff in Fleming "Functions of several variables". Look for exact differential forms.<|endoftext|> TITLE: How to evaluate $\sum\limits_{n \geq 0} \left(S_{n + 2} + S_{n + 1}\right)^2(-1)^n$, given the multivariable recurrence relation? QUESTION [5 upvotes]: The given multivariable recurrence relation is that for every $n \geq 1$ $$S_{n + 1} = T_n - S_n$$ where $S_1 = \dfrac{3}{5}$ and $T_1 = 1$. Both $T_n$ and $S_n$ depend on the following condition $$ \dfrac{T_n}{S_n} = \dfrac{T_{n + 1}}{S_{n + 1}} = \dfrac{T_{n + 2}}{S_{n + 2}} = \dots $$ The goal is to evaluate $$\sum\limits_{n \geq 0} \left(S_{n + 2} + S_{n + 1}\right)^2 (-1)^n$$ Since the change between $T_n$ and $T_{n + 1}$ is not constant, I believe that the way to approach this problem is to have all terms with a consistent coefficient. However, I am not skillful enough to simplify the summation into a single variable. REPLY [4 votes]: Notice that $$\frac53=\frac{T_1}{S_1}=\frac{T_n}{S_n}$$ Thus, $T_n=\frac53S_n$. Putting this in, we get $$S_{n+1}=\frac23S_n$$ which is a geometric sequence. The general form is then $S_n=\frac35\times\left(\frac23\right)^{n-1}$, so $S_{n+2}+S_{n+1}=\frac53S_{n+1}=\left(\frac23\right)^{n}$ and we have $$\text{Sum}=\sum_{n\ge0}a^n$$ where $a=-\frac49$, a very simple geometric series, whose value is $\frac{1}{1+4/9}=\frac{9}{13}$.<|endoftext|> TITLE: What are these problems called and how are they solved? QUESTION [5 upvotes]: I'm self-learning calculus and I stumbled upon the following problem: Express $I_n =\int \frac{dx}{(x^2+a^2)^n}$ using $I_{n-1}$ ($a$ is a positive parameter and $n=2,3,4,...$) Is this about double integrals? Could anyone please elaborate a bit more so I can learn how to solve this type of problem?
================= EDIT Continuing @SimplyBeautifulArt's answer: $I_n=a^{1-2n}\int{\cos^{2n-2}(u)\,du} = a^{1-2n}\int{\cos^{2n-3}(u)\cos(u)\,du}$ $I_{n-1}=a^{3-2n}\int{\cos^{2n-4}(u)\,du}$ Integrating by parts ($f:\cos^{2n-3}(u)$; $dg:\cos(u)\,du$): $I_n=a^{1-2n}\left(\cos^{2n-3}(u)\sin(u) + (2n-3)\int{\cos^{2n-4}(u)\sin^2(u)\,du}\right)$ $I_n=a^{1-2n}\left(\cos^{2n-3}(u)\sin(u) + (2n-3)\left(\int{\cos^{2n-4}(u)\,du} -\int{\cos^{2n-2}(u)\,du}\right)\right)$ $I_n=a^{1-2n}\cos^{2n-3}(u)\sin(u) + (2n-3)\left(\frac{a^{3-2n}}{a^2}\int{\cos^{2n-4}(u)\,du} -a^{1-2n}\int{\cos^{2n-2}(u)\,du}\right)$ $I_n=a^{1-2n}\cos^{2n-3}(u)\sin(u) + (2n-3)\left(\frac{I_{n-1}}{a^2} -I_n\right)$ $I_n=\left(a^{1-2n}\cos^{2n-3}(u)\sin(u) + (2n-3)\frac{I_{n-1}}{a^2}\right)/(2n-2)$ Recall $u=\arctan(\frac xa)$: $I_n=\left(a^{1-2n}\left(\frac{1}{\sqrt{1+(x/a)^2}}\right)^{2n-3}\frac{x/a}{\sqrt{1+(x/a)^2}} + (2n-3)\frac{I_{n-1}}{a^2}\right)/(2n-2)$ Is that all? REPLY [3 votes]: Use the substitution $x=a\tan(u)$ to get $$I_n=\int\frac{a\sec^2(u)}{(a^2\tan^2(u)+a^2)^n}\ du$$ Recall the trigonometric identity $1+\tan^2=\sec^2$ to reduce this to $$I_n=a^{1-2n}\int\sec^{2-2n}(u)\ du$$ $$=a^{1-2n}\int\cos^{2n-2}(u)\ du$$ This is then handled using the Pythagorean theorem, integration by parts, and/or substitution, depending on the value of $n$, as described in this post.<|endoftext|> TITLE: Do integrable functions vanish at infinity? QUESTION [13 upvotes]: If $f$ is a real-valued function that is integrable over $\mathbb{R}$, does it imply that $$f(x) \to 0 \text{ as } |x| \to \infty? $$ When I consider, for simplicity, a positive function $f$ which is integrable, it seems to me that the finiteness of "the area under the curve" over the whole line implies that $f$ must decay eventually. But is it true for general integrable functions? REPLY [4 votes]: There are already good answers, I only wanted to make it more visual. Observe that \begin{align} -\infty &< \sum_{k=0}^{\infty} k\ \cdot\ \ \ 2^{-k}\ \ =\hspace{10pt}2 < \infty \\ -\infty &< \sum_{k=0}^{\infty} k\cdot(-2)^{-k} =-\frac{2}{9} < \infty \end{align} (it's easy enough to do by hand, but if you want, here and here are links to WolframAlpha). Thus, we can use: $$ f(x) = \sum_{k = 0}^{\infty}k\cdot(-1)^k \cdot \max(0,1-2^k\cdot|x-k|) $$ Below are diagrams for $|f|$ and $f$: I hope this helps $\ddot\smile$<|endoftext|> TITLE: Prove that a Cauchy sequence is convergent QUESTION [7 upvotes]: I need help understanding this proof that a Cauchy sequence is convergent. Let $(a_n)_n$ be a Cauchy sequence. Let's prove that $(a_n)_n$ is bounded. In the definition of Cauchy sequence: $$(\forall \varepsilon>0) (\exists n_\varepsilon\in\Bbb N)(\forall n,m\in\Bbb N)((n,m>n_\varepsilon)\Rightarrow(|a_n-a_m|<\varepsilon))$$ let $\varepsilon=1$. Then we have $n_1\in\Bbb N$ such that $(\forall n,m\in\Bbb N)((n,m>n_1)\Rightarrow(|a_n-a_m|<1))$. From there, for $n>n_1$ we have $|a_n|\leq |a_n-a_{n_1+1}|+|a_{n_1+1}|\ (*).$ Now let $M=\max\{|a_1|,\dots,|a_{n_1}|,1+|a_{n_1+1}|\}$, so that $|a_n|\leq M,\ \forall n\in\Bbb N.$ The bounded sequence $(a_n)_n$ has a convergent subsequence $(a_{p_n})_n$, i.e. there exists $a=\lim_n a_{p_n}$. Let's prove $a=\lim_n a_n$. Let $\varepsilon>0$ be arbitrary.
From the convergence of the subsequence $(a_{p_n})_n$ we have $n'_\varepsilon\in\Bbb N$ such that $$(n>n'_\varepsilon)\Rightarrow(|a_{p_n}-a|<\frac{\varepsilon}{2}).$$ Because $(a_n)_n$ is a Cauchy sequence, we have $n''_\varepsilon\in\Bbb N$ such that $$(n,m>n''_\varepsilon)\Rightarrow(|a_n-a_m|<\frac{\varepsilon}{2}).$$ Let $n_\varepsilon=\max\{n'_\varepsilon, n''_\varepsilon\}$, so for $n>n_\varepsilon$, because $p_n\geq n$, we have $$|a_n-a|\leq|a_n-a_{p_n}|+|a_{p_n}-a|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon \ (**)$$ i.e. $a=\lim_n a_n$. $(*)$ Where did $|a_n|\leq |a_n-a_{n_1+1}|+|a_{n_1+1}|$ come from? I understand why that inequality is true, but I don't see the point in writing it like that. $(**)$ Why is $|a_n-a_{p_n}|<\frac{\varepsilon}{2}?$ REPLY [2 votes]: Here is a quick proof using only the supremum property: Let $(a_n)$ be a Cauchy sequence of reals. It is bounded [there is an $N$ such that $ a_N, a_{N+1}, \ldots $ are in $ (a_N - 1, a_N + 1) $; now $ \max \{ |a_1|, \ldots, |a_{N-1}|, |a_N|+1 \} $ is $ \geq $ each $ | a_n | $]. So the $ \alpha_{j} := \sup\{a_j, a_{j+1}, \ldots \} $ are well-defined, bounded (and decreasing). Therefore they converge, to $ \alpha := \inf\{ \alpha_1, \alpha_2, \ldots \} $. Let $\epsilon > 0 $. There is an $ N (=N_{\epsilon}) $ such that $ a_N, a_{N+1}, \ldots $ are in $ (a_N - \epsilon, a_N + \epsilon) $. So $ \alpha_N, \alpha_{N+1}, \ldots $ are in $ [ a_N - \epsilon, a_N + \epsilon ] $, and hence so is $ \alpha $. Finally $ a_N, a_{N+1}, \ldots $ and $ \alpha $ are all in $ [ a_N - \epsilon, a_N + \epsilon ] $, ensuring each $ | a_N - \alpha |, |a_{N+1} - \alpha |, \ldots $ is $ \leq 2 \epsilon $.<|endoftext|> TITLE: Prove that $\sin x+\sin y=1$ does not have integer solutions QUESTION [17 upvotes]: Suppose $x$ and $y$ are angles measured in radians. Then how to show that the equation $$\sin x+\sin y=1$$ does not have a solution $(x,y)\in\mathbb{N}\times\mathbb{N}$? This question is prompted by curiosity. I don't have any ideas how it can be approached. REPLY [22 votes]: No, and there is not even a solution for $(x,y)\in\mathbb Q\times \mathbb Q$. We can quickly exclude $x=y$, which would require that $\sin x=\frac12$, but that is only true for $x=n\frac{\pi}{6}$ for certain nonzero integers $n$, and none of these produce a rational. Similarly we can easily exclude $x=0$, $y=0$, or $x=-y$. Now, using Euler's formula, rewrite the equation to $$ \tag{*} e^{ix} + e^{iy} - e^{-ix} - e^{-iy} = 2i\cdot e^0 $$ and apply the Lindemann–Weierstrass theorem, which in one formulation says that the exponentials of distinct algebraic numbers are linearly independent over the algebraic numbers. But $\{\pm ix,\pm iy,0\}$ are all algebraic and (by our assumptions so far) different, so $\text{(*)}$ would be one of the linear relations that can't exist. This argument generalizes to show that the only algebraic number that can be written as a rational combination of sines of algebraic (radian) angles is $0$.<|endoftext|> TITLE: Understanding predicativity QUESTION [9 upvotes]: In trying to understand the differences between impredicative and predicative definitions, I was able to understand impredicative as the following: A definition is said to be impredicative if it quantifies over the set being defined or another set which contains the thing being defined.
A prime example of this definition is Russell's paradox. Now, comparing with the predicative case: reading the wiki, it says that predicativity entails constructing theories where quantification over lower levels results in variables of some new type, distinguished from the lower types that the variable ranges over. The definition for predicative seems to be on a whole new level in terms of the description. An example of a predicative definition I sort of tried to connect the description with was Frege's first- and second-order calculus. Could anyone perhaps offer a simpler definition of a predicative definition, along with an example? Thanks! REPLY [3 votes]: As Professor Mummert has noted, the notion of a "predicative definition" is vague, although I would disagree that the same holds for "predicative mathematics". There are many complicated issues involved. With respect to "definition", is it "obvious" that mathematics ought to be based upon "undefined primitives"? Russell and Whitehead made such a claim. You will find a detailed analysis with criticism of "Principia Mathematica" in the book $\underline{Definition}$ by Richard Robinson. Among the kinds of definitions one finds in non-foundational mathematics is "implicit definition". And, you will find that Professor Robinson does discuss them as legitimate forms of mathematical definition. When you think about the matter closely, you will realize that the "intensional definition" -- upon which Church introduced the lambda calculus -- is, in fact, a variation of implicit definition. The functions which Church introduced may be applied to themselves. Such functions are not representable in Zermelo-Fraenkel set theory because the axiom of foundation restricts that notion of set to being well-founded. Thus, the extension of a function in the sense of what Church did (that is, its representation as a set of ordered pairs) would have to appear as a domain element of the function. The axiom of foundation rules out such infinite descending chains of membership relations. Now, consider the definition, $$\forall x \forall y ( x \subset y \leftrightarrow ( \forall z ( y \subset z \rightarrow x \subset z ) \wedge \exists z ( x \subset z \wedge \neg y \subset z ) ) )$$ I use this form of sentence for both the set theory and the arithmetic (interpreted as proper divisor) in which I am interested. The syntax is clearly circular. Is it an impredicative definition? According to a monograph by Moschovakis, a sentence of this nature appears to be impredicative if one naively attributes it to be a second-order sentence but is, in fact, recursively constructive. And, indeed, you will find a sentence of this form used in $\underline{Set Theory}$ by Kunen in his discussion of forcing. By contrast, the full-blown transfinite recursion is presented by Jech in the first edition of his book $\underline{Set Theory}$. When I say that "predicative mathematics" does not suffer from the same problem as "predicative definition", it is because it originates with Russell and Whitehead with the express purpose of avoiding the circularity which they believed responsible for the many early paradoxes in set theory. So, one understands sets, first and foremost, as collections of individuals which are not, themselves, a collection. Then, one may form additional sets from those individuals and those initial sets of individuals. The next "type" will be sets formed of "objects" previously obtained through "set formation". I apologize for finishing with all of these quotes.
But, in natural language it gets complicated. In combination with the axioms of union and power set, the axiom of foundation provides for this structure. This kind of distinction may be found in Aristotle. For Aristotle, individuals are primary substance. Notions such as "species" and "genus" are substances in the sense that what they categorize are individuals. But, Aristotle refers to them as secondary substances. One of the interesting things one discovers when reading Aristotle is that his only admonition against circularity is that against trying to attribute truth to deductive reasoning and inductive reasoning at the same time. In modern mathematics, this seems to be related to the Lyndon interpolation theorem. The proof of that theorem uses negation normal forms. The significance of this is the restricted second-order language presented by Flum and Ziegler in the early 1980's. Its formation rules are governed by negation normal forms while its semantics coincide with first-order semantics on trivial topologies and discrete topologies. It is clear that predicative mathematics will avoid invoking both the universal quantifier and the existential quantifier simultaneously. It emphasizes the existential quantifier as being semantically prior to the universal quantifier. But, without some accommodation to the logic, the syntactic definition of individuals (as opposed to relations) merely on the basis of "properties" puts one at risk of attributing truth to both the universal quantifier and the existential quantifier simultaneously. This is what the distinction between "predicative definition" and "impredicative definition" is trying to restrict. But, it is not at all clear that a classification of definitions is the appropriate vehicle. What is at stake is the claim that "mathematics is extensional" and the interpretation of quantifiers as collections which are objects. The circularity of intensional definitions and recursive definitions does not seem to always lead to paradox.<|endoftext|> TITLE: $(1,1)$ tensor vs a linear transformation (matrix) QUESTION [5 upvotes]: Take a $d$-dimensional vector space $V$ over the field $R$. A typical linear algebra linear transformation $V \to V$ can be represented by a $d \times d$ matrix $A$ such that for some $v,w \in V$, $Av=w$. I'm learning about tensors, and I understand that a $(1,1)$ tensor $T$ is a bilinear map $V^* \times V \to R$. I've read that such a $(1,1)$ tensor is equivalent to such a matrix. However, I find it very difficult to imagine what $V^*$ (the dual space, i.e. the set of all linear maps $V\to R$) has to do with a simple linear transformation from $R^d$ to $R^d$. Moreover, the tensor components apparently are defined as $T^i_{\space \space j}=T(\epsilon_i, e^j)$, where $e^j, \epsilon _i$ are the $d$ basis vectors of $V$ and $V^*$ respectively. This means that if we were to write $T$ as a 2-dimensional array, it would have nothing to do with a matrix as in linear algebra. So how are these two concepts connected? This post is related to my question, but it doesn't really go into the difference between the matrix and tensor form. REPLY [4 votes]: Given a linear map $\alpha:V\to V$ we can construct a bilinear form $\tau:V^*\times V\to R$, by taking $\tau(f,v)=f(\alpha v)$. (Note that $f(\alpha v)$ makes sense because $v\in V$ and $\alpha:V\to V$ so $\alpha v\in V$, and then $f\in V^*$ means $f:V\to R$, so $f(\alpha v)\in R$.)
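Before the converse direction, here is a small numerical illustration of the construction just given (a Python/NumPy sketch; the random matrix is an arbitrary stand-in for $\alpha$, chosen only for the demonstration):

import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))        # matrix of a linear map alpha : V -> V

def tau(f, v):
    # bilinear form tau(f, v) = f(alpha v); f plays the role of a dual vector
    return f @ (A @ v)

f, g = rng.standard_normal(d), rng.standard_normal(d)
v, w = rng.standard_normal(d), rng.standard_normal(d)

# tau is linear in each argument separately
assert np.isclose(tau(2 * f + g, v), 2 * tau(f, v) + tau(g, v))
assert np.isclose(tau(f, 3 * v - w), 3 * tau(f, v) - tau(f, w))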
Similarly, given a bilinear form $\tau':V^*\times V\to R$ we can construct a map $\alpha': V\to V$ by noting that if $v\in V$ then $\tau'(-,v):V^*\to R$, and hence $\tau'(-,v)\in V^{**}$. Since $V$ is finite dimensional we have $V^{**}\cong V$ and hence we can define $\alpha'(v)$ to be the element of $V$ corresponding to $\tau'(-,v)$ in $V^{**}$. This means that $f(\alpha' v)=\tau'(f,v)$. Hence given a map $V\to V$ we get a map $V^*\times V\to R$ and given a map $V^*\times V\to R$ we get a map $V\to V$ (and furthermore if we translate back and forth we end up where we started). So we can view linear maps $V\to V$ as "the same as" bilinear maps $V^*\times V\to R$. Finally let's check the matrices are the same. Given a map $\alpha:V\to V$ its matrix is defined by $A^i_{\;j}=\epsilon_j(\alpha e^i)$, and given a map $\tau:V^*\times V\to R$ its matrix is defined by $T^i_{\;j}=\tau(\epsilon_j,e^i)$. So if we have $\tau(f,v)=f(\alpha v)$ then $T^i_{\;j}=\tau(\epsilon_j,e^i)=\epsilon_j(\alpha e^i)=A^i_{\;j}$.<|endoftext|> TITLE: Double Sum with a Neat Result QUESTION [6 upvotes]: Based on an interesting question here (second question), I have devised a similar one. Evaluate the following double sum without expansion and substitution of the standard sum-of-integers formula. $$\sum_{x=1}^n\sum_{y=1}^n (n-x+y)$$ REPLY [3 votes]: Here's a slightly different way to look at it. First, we rewrite the sum as: $$\sum_{x=1}^{n} \sum_{y=1}^{n} (n-x+y) = \sum_{(x,y)\in\{1,\dots,n\}^2} (n-x+y) \enspace,$$ with the usual meaning of $A^2$ as the set of all ordered pairs of elements of the set $A$. Then, we observe that for $x\neq y$, the sum of the two terms corresponding to the pairs $(x,y)$ and $(y,x)$ is: $$(n-x+y) + (n-y+x) = 2n$$ Therefore, we can think of each of the two pairs as contributing $n$ to the sum. For $x=y$, the term corresponding to $(x,y)$ is just $n$. Conclusion: every pair contributes $n$ to the sum, so the sum is $n^2\cdot n= n^3$.<|endoftext|> TITLE: Are there further gaps in the Eisenstein primes? QUESTION [9 upvotes]: I recently played around with Eisenstein primes a bit (in an admittedly very amateurish way) and noticed among other things that there are no primes on the hexagonal ring that goes through (8,0) on the Eisenstein grid of the complex plane (grid image not reproduced here). I thought this was a neat feature of the distribution of the primes and started looking for further such gaps. To my astonishment I haven't been able to find a single such gap up to at least a "radius" of 40,000,000. So now I'm wondering whether 8 is indeed the only such gap (ignoring the trivial cases of 0 and 1), or whether there might be further gaps at larger radii. My Google efforts haven't turned up anything on this and I'm not sure how one would go about answering the question short of keeping the search running in hopes of finding another gap (which of course will never yield the answer "no further gaps exist"). I assume one could make a statistical argument based on the density of the Eisenstein primes, but I'm not sure how the prime number theorem applies to them. REPLY [4 votes]: It seems to me that these problems (both in the case of Eisenstein and of Gaussian primes) are really hard and outside of today's possibilities. I've checked all possible squares (in the case of Gaussian primes) and hexagons (in the case of Eisenstein primes) of a size up to $10^9$ and the only "primeless" polygon was the hexagon mentioned by OP and the one eight times smaller.
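Such a search is easy to reproduce independently. Here is a small Python sketch (mine, not the program actually used for the $10^9$ search above); it assumes the parametrization $z=a+b\omega$ with $\omega=e^{2\pi i/3}$, so that the norm is $a^2-ab+b^2$ and the hexagonal ring of radius $n$ is $\{a+b\omega : \max(|a|,|b|,|a-b|)=n\}$, and it uses sympy's isprime:

from math import isqrt
from sympy import isprime

def is_eisenstein_prime(a, b):
    n = a * a - a * b + b * b      # norm of z = a + b*omega
    if isprime(n):                 # prime norm => z is an Eisenstein prime
        return True
    q = isqrt(n)                   # remaining case: z is a unit multiple of
    return q * q == n and q % 3 == 2 and isprime(q)  # a rational prime q = 2 mod 3

def ring_points(n):
    # walk three edges of the hexagon of radius n; the other three are the negatives
    pts = set()
    for k in range(n + 1):
        for a, b in ((n, k), (n - k, n), (-k, n - k)):
            pts.add((a, b))
            pts.add((-a, -b))
    return pts

primeless = [n for n in range(1, 501)
             if not any(is_eisenstein_prime(a, b) for a, b in ring_points(n))]
print(primeless)  # expected output in this range: [1, 8]

Under this convention, the norms occurring on the ring through $(8,0)$ are exactly $48$, $49$, $52$, $57$ and $64$; none of these is a prime or the square of a prime congruent to $2$ mod $3$, consistent with the gap observed by the OP.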
However, the proof in the case of the Gaussian primes would be equivalent to showing that for every $n$ there exists $0 \le k \le n$ such that $n+ki$ is a Gaussian prime, which is (almost) equivalent to proving that $n^2+k^2$ is a prime number. It is even unclear why such a $k$ should exist, and proving that it should be of a size $O(n)$ seems to be an even harder task. Eisenstein primes have similar problems, but now (at least for hexagons passing through even integers) the problem is (almost) equivalent to finding $0 \le k \le n$ such that $3n^2+k^2$ is a prime number. I am saying almost because for $k=0$ the criteria are different, but it seems to not matter, as one can still find primes with $k>0$.<|endoftext|> TITLE: Associativity of concatenation QUESTION [6 upvotes]: Prove that the following operator is associative for $b\in \Bbb N$ $$x||y = x\cdot b^{1+\lfloor\log_{b}{y}\rfloor}+y$$ One thing that you can notice is that it is the concatenation operator. However, you are not allowed to use this fact. In first order logic, we refuse to attach any meaning to any objects and try to prove things starting with axioms. REPLY [4 votes]: I will be working off the formula $$(x || y) = x * b^{\lfloor\log_b(y)\rfloor + 1} + y$$ You can see that this should be the case because with $b = 10$, $y \in [10,99]$ should multiply $x$ by $100$. In the following, I will assume $b = 10$ and write $\log(x)$ for $\log_{10}(x)$. (The proof generalizes immediately to any $b$.) The proof does not require case analysis, only applying a couple of elementary properties of the floor function. The more interesting point is perhaps about whether concatenation is 'unmathematical.' I'm not sure what you mean by 'unmathematical'. In general we want to be able to define concatenation of words for arbitrary symbolic systems (alphabets). I suppose you (or this youtube poster) are taking issue with the dependency on indexing (and knowing the length of the second argument) in the usual definition of concatenation. We have by definition of the operator $||$ $\big(x || y \big) = 10x * 10^{\lfloor\log(y)\rfloor} + y$ $\big(y || z\big) = 10y * 10^{\lfloor\log(z)\rfloor} + z$ So $\big(x || y\big) || z =$ $$10*\big(x || y\big)*10^{\lfloor\log(z)\rfloor} + z =$$ $$(100x * 10^{\lfloor\log(y)\rfloor} + 10y)*10^{\lfloor\log(z)\rfloor} + z = $$ $$100x*10^{\lfloor\log(y)\rfloor + \lfloor\log(z)\rfloor} + 10y*10^{\lfloor\log(z)\rfloor} + z$$ Meanwhile $x||\big(y||z\big) = $ $$x || \big(10y * 10^{\lfloor\log(z)\rfloor} + z\big) =$$ $$10x * 10^{\lfloor \log (10y * 10^{\lfloor\log(z)\rfloor}) \rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z = $$ $$10x * 10^{\lfloor 1 + \log(y) + \lfloor\log(z)\rfloor \rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z$$ Consider that $\lfloor 1 + a + \lfloor b \rfloor \rfloor = 1 + \lfloor b \rfloor + \lfloor a \rfloor$, since $1 + \lfloor b \rfloor$ is an integer. Applying this to the last line in the expansion of $x||\big(y||z\big)$ shows $x||\big(y||z\big) = $ $$10x * 10^{1 + \lfloor \log(y) \rfloor + \lfloor\log(z)\rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z = $$ $$100x * 10^{\lfloor \log(y) \rfloor + \lfloor\log(z)\rfloor} + 10y * 10^{\lfloor\log(z)\rfloor} + z$$. This shows that $x||\big(y||z\big) = \big(x||y\big)||z$ as desired, and we have proved that your formal "concatenation" operator satisfies associativity! Now, have we actually mathematically captured concatenation? The only problem to get around is zeros.
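Incidentally, the associativity just proved is easy to spot-check numerically. Here is a small Python sketch (base $b=10$; zeros are excluded, for the reason discussed next):

from itertools import product

def num_digits(y, b=10):
    # 1 + floor(log_b y) for y >= 1, computed without floating point
    d = 0
    while y:
        d += 1
        y //= b
    return d

def cat(x, y, b=10):
    # x || y = x * b**(1 + floor(log_b y)) + y
    return x * b ** num_digits(y, b) + y

assert cat(12, 345) == 12345
assert all(cat(cat(x, y), z) == cat(x, cat(y, z))
           for x, y, z in product(range(1, 41), repeat=3))
print("associativity holds on all tested triples")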
$x || 0$ is not well-defined for us, and $0 || y$ returns $y$, so it isn't really concatenation in a string sense. Moreover, if a number contains $0$ in its middle, then treating it as a word we see that it is the concatenation $x||y$ where $y$ has a leading zero, and the formula breaks. The problems are that our formula deals with numbers, but concatenation deals with strings. In the former, leading zeros don't matter, but in the latter, they do. This isn't ultimately a problem though. We can use this formula to define concatenation for any finite symbolic system. Let $||$ be defined as above on any base $b$ number system with the explicit definition $x || 0 = x$ for all $x$. Given an alphabet $A$ consisting of $n$ characters, we define concatenation on two words $u,v$ of $A$ as follows: $$u || v = \phi^{-1}\big(\phi(u) || \phi(v)\big)$$ where $\phi$ is any bijection $\phi: A \rightarrow \{1,\ldots,n\}$ extended to act on words of $A$ elementwise, with its image on the empty word explicitly defined to be $0$. Note that $\phi$ is then a bijection between words of $A$ and numbers base $n+1$ which either are zero or have no zeros as digits. So the operator of concatenation on words of any alphabet can be defined quite rigorously without making reference to the lengths or elements of the words. We only require for this that the alphabet be finite, because our formula for concatenation in a base $b$ number system breaks down for $b$ infinite.<|endoftext|> TITLE: Proof: Inequality in Mercer's theorem QUESTION [6 upvotes]: In the outline of the proof of Mercer's theorem there is an inequality assumed without any explanation: $$\sum_{i=0}^{\infty} \lambda_i \vert e_i(t) e_i(s) \vert \le \sup_{x \in [a,b]} \vert K(x,x)\vert^2$$ Why does this need to hold? REPLY [3 votes]: The Parseval-Bessel theorem leads to $$f= \sum_{j=1}^\infty (f,e_j)\, e_j~~ \text{for every }~~f\in L^2(a,b) \tag{1}$$ which implies, by linearity and continuity along with the fact that $Ke_j =\lambda_je_j$, that $$ T_Kf =Kf =\sum_{j=1}^\infty \lambda_j\,(f,e_j)\, e_j~~ \text{for every }~~f\in L^2(a,b) \tag{2}$$ Hence, one can carefully check, starting with finite summations and the Bessel inequality, that $$ (Kf,f)= \sum_{j=1}^\infty \lambda_j\,|(f,e_j)|^2 \tag{3}$$ We also set the kernel $$K_n(t,s) = \sum_{j=1}^{n} \lambda_j e_j(t) e_j(s)\tag{Kn}$$ then, $$T_{K_n}f(t) = \sum_{j=1}^{n} \lambda_j e_j(t) \int_{a}^{b}f(s)e_j(s)\,ds = \sum_{j=1}^{n} \lambda_j (f\, ,e_j)e_j(t) $$ hence, $$ (K_nf,f)= \sum_{j=1}^{n} \lambda_j |(f\, ,e_j)|^2.$$ Next we consider the truncated kernel $$ R_n(t,s) =K(t,s)- \sum_{j=1}^n \lambda_j\,e_j(t)\, e_j(s)\tag{4}$$ It follows from the foregoing that $$ (R_nf,f)= \sum_{j=n+1}^\infty \lambda_j\,|(f,e_j)|^2\ge 0~~\text{for every }~~f\in L^2(a,b) \tag{5}$$ i.e. $(R_nf,f)\ge0$; there is then a standard result asserting that $R_n(t,t)\ge0$ for almost every $t\in(a,b)$, which leads to $$ R_n(t,t) =K(t,t)- \sum_{j=1}^n \lambda_j\,e_j(t)\, e_j(t)\ge 0 \tag{6}$$ i.e. $$ \sum_{j=1}^n \lambda_j\,e_j(t)\, e_j(t)\le K(t,t) \tag{7}$$ This holds true for arbitrary $n\in\mathbb{N}$.
whence, $$ \sum_{j=1}^\infty \lambda_j\,e_j(t)\, e_j(t)\le \sup_{t\in [a,b]} K(t,t) $$ Applying the Cauchy–Schwarz inequality we get \begin{split} \Big|\sum_{j=1}^\infty \lambda_j\,e_j(s)\, e_j(t)\Big|^2 &\le \Big(\sum_{j=1}^\infty \lambda_j\,e_j(s)\, e_j(s)\Big ) \Big(\sum_{j=1}^\infty \lambda_j\,e_j(t)\, e_j(t)\Big)\\ &\le \sup_{t\in [a,b]} K^2(t,t) \end{split} Proof of the Claim in the complex case: suppose there is $x_0$ such that $K(x_0,x_0)<0$; then, by continuity, there are $c,d$ with $a\le c<x_0<d\le b$ such that $\operatorname{Re}K(t,s)<0$ for all $t,s\in(c,d)$, and taking $f=\chi_{(c,d)}$ gives $(Kf,f)<0$, contradicting the positivity of the quadratic form.<|endoftext|> TITLE: Show that in Lyapunov equation $A^TQ+QA=-I$, the matrix $Q$ is positive definite. QUESTION [5 upvotes]: Let $A$ be a matrix whose eigenvalues all have negative real parts. Define $Q=\int^{\infty}_0 B(t)dt$ where $B(t)=e^{A^Tt}e^{At}$. Prove that $Q$ is symmetric and positive definite. This question is related to the corresponding Lyapunov equation $A^TQ+QA=-I$. By the above we know that $B(t)^T=B(t)$ and $\forall x \neq 0,\ x^TB(t)x>0$. Therefore (using $B(\tau)\to 0$ as $\tau\to\infty$, since all eigenvalues of $A$ have negative real parts): \begin{align} -I &=\lim_{\tau \to \infty} B(\tau) -I\\ &=\lim_{\tau \to \infty} \int^{\tau}_0\frac{d B(t)}{dt}\,dt \\ &= \lim_{\tau \to \infty} \Big( A^T\int^{\tau}_0B(t)dt+\int^{\tau}_0B(t)dt\ A \Big)\\ &=A^TQ+QA\\ \end{align} However I am confused about how to use these facts to show that $Q$ is symmetric and positive definite. REPLY [3 votes]: The key is to note that any (pointwise) constant linear transformation commutes with integration. For example, we can show that $Q$ is symmetric since $$ Q^T = \left[\int B(t)\right]^T = \int[B(t)]^T = \int[e^{At}]^T[e^{A^Tt}]^T = \int e^{A^Tt}e^{At} = \int B(t) = Q $$ similarly, show that $x^TQx > 0$ so that $Q$ is positive definite. Note in particular that $$ x^TB(t)x = \|e^{At}x\|^2 $$ Moreover: if $x \neq 0$, $t \mapsto \|e^{At}x\|^2$ is necessarily a continuous, positive-valued function.<|endoftext|> TITLE: Number of normals to a parabola from a given point QUESTION [6 upvotes]: I know that from any point a maximum of three normals could be drawn to a parabola because the equation of the normal is cubic. But I want to know the condition on the point for the number of normals. REPLY [10 votes]: Claude's technique can be extended slightly to find the points and normal lines given a particular point off of the parabola. Using this, I obtained an animation of some of the normals.<|endoftext|> TITLE: A summation involving $\arctan$, $\pi$ and Hyperbolic function QUESTION [6 upvotes]: Prove that $$\sum_{n\in\mathbb{Z}}\arctan\left(\frac{\sinh(1)}{\cosh(2n)}\right)=\frac{\pi}{2}$$ Writing $$\dfrac{\sinh(1)}{\cosh(2n)}=\dfrac{e^{1}-e^{-1}}{e^{2n}+e^{-2n}}$$ I tried to use the identity $$\arctan\left(\frac{a_1}{a_2}\right)+\arctan\left(\frac{b_1}{b_2}\right)=\arctan\left(\frac{a_1b_2+ a_2b_1}{a_2b_2-a_1b_1}\right)$$ with a suitable choice of $a_1,a_2,b_1,b_2$ but I haven't been able to find a telescopic sum. REPLY [8 votes]: That is a telescopic sum in disguise. We may notice that: $$\arctan\tanh(n+1)-\arctan\tanh(n-1) = \arctan\left(\frac{\tanh(n+1)-\tanh(n-1)}{1+\tanh(n-1)\tanh(n+1)}\right) $$ equals $\arctan\left(\frac{\sinh(2)}{\cosh(2n)}\right)$ and: $$ \arctan\left(\frac{\sinh(1)}{\cosh(2n)}\right) = \arctan\tanh\left(n+\frac{1}{2}\right)-\arctan\tanh\left(n-\frac{1}{2}\right). $$ You may easily draw your conclusions now.<|endoftext|> TITLE: Showing matrices in $SU(2)$ are of form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ QUESTION [8 upvotes]: Matrices $A$ in the special unitary group $SU(2)$ have determinant $\operatorname{det}(A) = 1$ and satisfy $AA^\dagger = I$.
I want to show that $A$ is of the form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ with complex numbers $a,b$ such that $|a|^2+|b|^2 = 1$. To this end, we put $A:= \begin{pmatrix} r & s \\ t & u\end{pmatrix}$ and impose the two properties. This yields \begin{align}\operatorname{det}(A) &= ru-st \\ &= 1 \ ,\end{align} and \begin{align} AA^\dagger &= \begin{pmatrix} r & s \\ t & u\end{pmatrix} \begin{pmatrix} r^* & t^* \\ s^* & u^* \end{pmatrix} \\&= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \ .\\ \end{align} The latter gives rise to \begin{align} |r|^2+|s|^2 &= 1 \\ &= |t|^2+|u|^2 \ , \end{align} and \begin{align} tr^*+us^* &= 0 \\ &= rt^*+su^* \ . \end{align} At this point, I don't know how to proceed. Any hints would be appreciated. @Omnomnomnom's remark \begin{align} A A^\dagger &= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\ &= \begin{pmatrix} |r|^2+|t|^2 & sr^* +ut^* \\ rs^*+tu^* & |s|^2 + |u|^2\end{pmatrix} = A^\dagger A \ , \end{align} gives rise to $$ |t|^2 = |s|^2, \qquad |r|^2 = |u|^2, $$ and, comparing the off-diagonal entries of $AA^\dagger$ and $A^\dagger A$, $$ rt^* +su^* = sr^* +ut^*, \qquad tr^*+us^* = rs^*+tu^*. $$ At this point, I'm looking to find a relation between $t,s$ and $r,u$ respectively. REPLY [11 votes]: The condition $A^{\ast}A=I$ says that $A$ has orthonormal columns. Suppose the first column is $v=[\begin{smallmatrix}a\\b\end{smallmatrix}]$. It must have unit norm, so $|a|^2+|b|^2=1$. What can the second column be? It must be orthogonal to the first, which means it must be in the complex one-dimensional orthogonal complement. Thus, if $w$ is orthogonal to $v$, then the possibilities for the second column are $\lambda w$ for $\lambda\in\mathbb{C}$. Since $\det[v~\lambda w]=\lambda\det[v~w]$, only one value of $\lambda$ will make the determinant $1$, hence the second column is unique. So it suffices to check $w=[-b ~~ a]^{\ast}$ works, which is natural to check because in ${\rm SO}(2)$ the second column would be $[-b~~a]^T$.<|endoftext|> TITLE: Do any two coprime factors of $x^n-1$ over the $p$-adic integers $\mathbb{Z}_p$ which remain coprime over $\mathbb{F}_p$ generate comaximal ideals? QUESTION [6 upvotes]: Let $f,g$ be distinct irreducible factors of $x^n-1$ over $\mathbb{Z}_p[x]$ (polynomials over the $p$-adic integers). Suppose $\overline{f},\overline{g}$ are coprime in $\mathbb{F}_p[x]$ - thus, the ideal generated by them satisfies $(\overline{f},\overline{g}) = (1)$ in $\mathbb{F}_p[x]$. Must $(f,g) = 1$ in $\mathbb{Z}_p[x]$? Note that $f,g$ are certainly coprime, but in $\mathbb{Z}_p[x]$ coprime doesn't mean comaximal (e.g. $p,x$ are coprime but not comaximal). REPLY [4 votes]: Suppose $(f,g)\ne 1$, then they are contained in some maximal ideal $m\supset (f,g)$, but the maximal ideals of $\mathbb{Z}_p[x]$ are precisely the ideals of the form $(p,h(x))$, where $h(x)$ is irreducible and remains irreducible mod $p$. Thus, $\mathbb{Z}_p[x]/m\cong \mathbb{F}_p[x]/(\overline{h})$. This implies that $(\overline{h})\supset(\overline{f},\overline{g})$, but since $\overline{f},\overline{g}$ are comaximal, they generate the unit ideal, and so $\overline{h}$ must be a unit, contradicting the fact that $h$ is irreducible mod $p$.
This implies that $(f,g) = 1$.<|endoftext|> TITLE: Prove that this iteration cuts a rational number in two irrationals $\sum_{n=0}^\infty \frac{1}{q_n^2-p_n q_n+1}+\lim_{n \to \infty} \frac{p_n}{q_n}$ QUESTION [6 upvotes]: For any integers $1 \leq p_0 < q_0$ define the iteration $$p_{n+1}=(q_n-p_n)(p_nq_n-1), \qquad q_{n+1}=q_n(q_n^2-p_nq_n+1)$$ Prove that $$\frac{p_0}{q_0}=\sum_{n=0}^\infty \frac{1}{q_n^2-p_n q_n+1}+\lim_{n \to \infty} \frac{p_n}{q_n}=A+B$$ where both the sum $A$ and the limit $B$ are irrational. Both sequences are positive integers, and for $n>1$ they are increasing (because $q_n$ is strictly increasing and it 'helps' $p_n$ after the first step). For $n \to \infty$ we have $p_n \to \infty$ and $q_n \to \infty$, thus: $$\frac{p_{n+1}}{q_{n+1}}=\frac{(q_n-p_n)(p_nq_n-1)}{q_n(q_n^2-p_n q_n+1)} \approx \frac{p_n}{q_n}$$ It is apparent the limit exists. The limit for $A$ exists because the sequence $q_n^2-p_n q_n+1$ grows much faster than $n^2$ and the sum obviously converges. Update A little something on a closed form. The system of recurrence relations can be rewritten as a single recurrence relation, using: $$p_n=q_n+\frac{1}{q_n}-\frac{q_{n+1}}{q_n^2}$$ Then we have a second order recurrence relation: $$q_{n+2}=q_{n+1}(q_{n+1}q_n+1)+\frac{q_{n+1}^3}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right)$$ $$q_0=q_0, \qquad q_1=q_0(q_0^2-q_0 p_0+1)$$ Or a more symmetric form: $$\frac{q_{n+2}}{q_{n+1}}=q_{n+1}q_n+1+\frac{q_{n+1}^2}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right)$$ If we find a closed form for it (which I'm not sure exists) we can take the limit and find the closed form for $B$. We also have a more simple looking relation (but it still requires us to know $q_n$): $$\frac{p_n}{q_n}=1+\frac{1}{q_{n-1}^2}-\frac{q_{n-1}}{q_n}-\frac{q_n}{q_{n-1}^3}$$ And in fact, we can also write $A$ in terms of $q_n$: $$A=\sum_{n=0}^\infty \frac{q_n}{q_{n+1}}$$ Update 2 Getting rid of some unnecessary parts, we can reformulate the problem: Set some $q_1>q_0>0$. Then we can define a second order recurrence: $$q_{n+2}=q_{n+1}(q_{n+1}q_n+1)+\frac{q_{n+1}^3}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right)$$ With the following property: $$L(q_0,q_1)-S(q_0,q_1)=\lim_{n \to \infty} \frac{q_{n+1}}{q_n^3}- \sum_{n=0}^\infty \frac{q_n}{q_{n+1}}=\frac{q_1-q_0}{q_0^3}$$ Can we find a closed form for the recurrence? Or separately for the limit $L$ or the sum $S$ above? Note that for the limit $L$ to be finite we need to have as $ n \to \infty$: $$q_n \asymp C \cdot a^{3^n}$$ For example we have: $$S(1,2)=0.645953147800624278311945190231458547= \\ = \frac{1}{2}+\frac{1}{7}+\frac{1}{323}+\frac{1}{33657247}+\frac{1}{38127274806076464952763}+\dots$$ No closed form for this number either; however, look at the denominator sequence: from the second term on, all the denominators end with $3$ or $7$. This pattern continues as far as I can see. REPLY [2 votes]: As an answer for the question in the title I propose the following (using the results from the OP): $$A=\sum_{n=0}^\infty \frac{q_n}{q_{n+1}} \tag{1}$$ We have: $$\frac{q_{n+2}}{q_{n+1}}=q_{n+1}q_n+1+\frac{q_{n+1}^2}{q_n^2} \left(\frac{q_{n+1}}{q_n}-1 \right) \tag{2}$$ Set $a_n=\frac{q_n}{q_{n-1}}$ and $b_n=q_{n-1}q_{n-2}+1$; then we have: $$a_n=q_{n-1}q_{n-2}+1+a_{n-1}^2(a_{n-1}-1)=a_{n-1}^2(a_{n-1}-1)+b_n,$$ which is in particular of the form $a_n=a_{n-1}(a_{n-1}-1)+b$ with an integer $b>1$. Thus, according to this paper: The Approximation of Numbers as Sums of Reciprocals, the sum in $(1)$ is the greedy expansion of the number $A$: $$A=\sum_{n=1}^\infty \frac{1}{a_n}$$ According to the paper, every such expansion for a real number has the form: $$x=\frac{1}{a_1}+\frac{1}{a_2}+\dots$$ $$a_{k+1}=a_k(a_k-1)+b_k,~~~a_1 \geq 2,~~b_k > 1,~~~~a_k,b_k \in \mathbb{N}$$ All of the requirements are met. (To prove that $a_n$ are all integers we only need to look at the initial definition of $q_n$.)
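This can be checked numerically. The following Python sketch iterates the $q_n$ recurrence for $q_0=1$, $q_1=2$ with exact rational arithmetic and recovers exactly the denominators listed for $S(1,2)$ above:

from fractions import Fraction

q = [Fraction(1), Fraction(2)]                 # q_0 = 1, q_1 = 2
for _ in range(4):
    q0, q1 = q[-2], q[-1]
    q.append(q1 * (q1 * q0 + 1) + q1 ** 3 / q0 ** 2 * (q1 / q0 - 1))

a = [q[n] / q[n - 1] for n in range(1, len(q))]
assert all(x.denominator == 1 for x in a)      # the a_n are integers
print([int(x) for x in a])
# [2, 7, 323, 33657247, 38127274806076464952763]
# (a_1 = 2; from a_2 on, each entry ends in 3 or 7, matching the observation above)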
Since the greedy expansion of a rational number is finite, while the sequence $a_n$ is not, we have proved that $A$ is irrational.<|endoftext|> TITLE: Is this formula for $\frac{e^2-3}{e^2+1}$ known? How to prove it? QUESTION [29 upvotes]: I found an interesting infinite sequence recently in the form of a 'two storey continued fraction' with natural number entries: $$\frac{e^2-3}{e^2+1}=\cfrac{2-\cfrac{3-\cfrac{4-\cdots}{4+\cdots}}{3+\cfrac{4-\cdots}{4+\cdots}}}{2+\cfrac{3-\cfrac{4-\cdots}{4+\cdots}}{3+\cfrac{4-\cdots}{4+\cdots}}}$$ The numerical computation was done 'backwards', starting from some $x_n=1$ we compute: $$x_{n-1}=\frac{a_n-x_n}{a_n+x_n}$$ And so on, until we get to $x_0$. The sequence converges for $n \to \infty$ if $a_n>1$ (or so it seems). For constant $a_n$ we seem to have quadratic irrationals, for example: $$\frac{\sqrt{17}-3}{2}=\cfrac{2-\cfrac{2-\cfrac{2-\cdots}{2+\cdots}}{2+\cfrac{2-\cdots}{2+\cdots}}}{2+\cfrac{2-\cfrac{2-\cdots}{2+\cdots}}{2+\cfrac{2-\cdots}{2+\cdots}}}$$ For $a_n=2^n$ we seem to have: $$\frac{1}{2}=\cfrac{2-\cfrac{4-\cfrac{8-\cdots}{8+\cdots}}{4+\cfrac{8-\cdots}{8+\cdots}}}{2+\cfrac{4-\cfrac{8-\cdots}{8+\cdots}}{4+\cfrac{8-\cdots}{8+\cdots}}}$$ I found no other closed forms so far, and I don't know how to prove the formulas above. How can we prove them? What is known about such continued fractions? There is another curious thing. If we try to expand some number in this kind of fraction, we can do it the following way: $$x_0=x$$ $$a_0=\left[\frac{1}{x_0} \right]$$ $$x_1=\frac{1-a_0x_0}{1+a_0x_0}$$ $$a_1=\left[\frac{1}{x_1} \right]$$ However, this kind of expansion will not give us the above sequences. We will get faster growing entries. Moreover, the fraction will be finite for any rational number. For example, in the list notation: $$\frac{3}{29}=[9,28]$$ You can easily check this expansion for any rational number. As for the constant above we get: $$\frac{e^2-3}{e^2+1}=[1,3,31,74,315,750,14286,\dots]$$ Not the same as $[1,2,3,4,5,6,7,\dots]$ above! We have similar sequences growing exponentially for any irrational number I checked. $$e-2=[1,6,121,284,1260,3404,25678,\dots]$$ $$\pi-3=[7,224,471,2195,10493,46032,119223,\dots]$$ By the way, if we try CF convergents, we get almost the same expansion, but finite: $$\frac{355}{113}-3=[7,225]$$ $$\frac{4272943}{1360120}-3=[7,224,471,2195,18596,227459,\dots]$$ So, the convergents of this sequence are not the same as for the simple continued fraction, but similar. Comparing the expansion by the method above and the closed forms at the top of the post, we can see that, unlike for simple continued fractions, this expansion is not unique. Can we explain why?
Here is the Mathematica code to compute the limit of the first fraction: Nm = 50; Cf = Table[j, {j, 1, Nm}]; b0 = (Cf[[Nm]] - 1)/(Cf[[Nm]] + 1); Do[b1 = N[(Cf[[Nm - j]] - b0)/(Cf[[Nm - j]] + b0), 7500]; b0 = b1, {j, 1, Nm - 2}] N[b0/Cf[[1]], 50] And here is the code to obtain the expansion in the usual way: x = (E^2 - 3)/(E^2 + 1); x0 = x; Nm = 27; Cf = Table[1, {j, 1, Nm}]; Do[If[x0 != 0, a = Floor[1/x0]; x1 = N[(1 - x0 a)/(x0 a + 1), 19500]; Print[j, " ", a, " ", N[x1, 16]]; Cf[[j]] = a; x0 = x1], {j, 1, Nm}] b0 = (1 - 1/Cf[[Nm]])/(1 + 1/Cf[[Nm]]); Do[b1 = N[(1 - b0/Cf[[Nm - j]])/(1 + b0/Cf[[Nm - j]]), 7500]; b0 = b1, {j, 1, Nm - 2}] N[x - b0/Cf[[1]], 20] Update I have derived the forward recurrence relations for numerator and denominator: $$p_{n+1}=(a_n-1)p_n+2a_{n-1}p_{n-1}$$ $$q_{n+1}=(a_n-1)q_n+2a_{n-1}q_{n-1}$$ They have the same form as for generalized continued fractions (a special case). Now I understand why the expansions are not unique. REPLY [2 votes]: For the first one, you could write \begin{equation} f(n) = \frac{n-f(n+1)}{n+f(n+1)} \end{equation} Then you suggest \begin{equation} f(2) = \frac{e^2-3}{e^2+1} \end{equation} But this then gives \begin{align} f(1) = \frac{2}{e^2-1}\\ f(3) = \frac{4}{e^2-1} \\ f(4) = \frac{3e^2-15}{e^2+3}\\ f(5) = \frac{-2e^2+18}{e^2-3} \end{align} I don't know if there is a recurrence relation that solves this, but you have a few more closed forms... For the second one, we have \begin{equation} g(2) = \frac{2 - g(2)}{2+g(2)} \end{equation} so we can solve the quadratic $x^2+3x-2=0$ to get $(\sqrt{17}-3)/2$. For the third one, we have \begin{equation} h(n) = \frac{n-h(2n)}{n+h(2n)} \end{equation} using the trial value $h(2)=1/2$, we get \begin{align} h(4)=\frac{2}{3}\\ h(8)=\frac{4}{5}\\ h(16)=\frac{8}{9} \end{align} then it is likely that \begin{equation} h(n)=\frac{n}{n+2} \end{equation} as this satisfies the recurrence and that $h(2)=1/2$.<|endoftext|> TITLE: Is there a name for the center of a line? QUESTION [14 upvotes]: Is there a name for the center point for a line? For example: ---------o--------- If the dashes represent a straight line and the O represents the center of that line, what would the name for that center point be? REPLY [31 votes]: A line goes forever in both directions, so it has no center. If you have a line segment - a part of a line with two definite ends - then the name is "midpoint."<|endoftext|> TITLE: Set of quadratic expressions $nx^2+m$ whose union is all integers? QUESTION [9 upvotes]: Is there a set of quadratic expressions whose value sets together cover the counting integers (1,2,3,4...) while all pairwise intersections are empty? An example with linear expressions which satisfies this is simply: $$ \begin{align*} S_1 &= \{1,3,5,7,\ldots,2n+1\}\\ S_2 &= \{2,4,6,8,\ldots,2n+0\}\\[0.2in] S_1 \cup S_2 &= \{1,2,3,4,5,6,7,8,9,\ldots\}\\ S_1 \cap S_2 &= \varnothing \end{align*}$$ It is easy to construct these for linear functions, but is it impossible to do the same for functions of the form $f_{mn}(x)=nx^2+m$? REPLY [2 votes]: Proposition: Let $c\ne 0$ be any integer. Then there exists a multiplier $B\ge 1$ such that $(Bn)^2+c$ is never a square (inclusive of $0$) for any $n\ge 1$. Proof: There are only finitely many ways to write $c$ as the product of two integers. Choose $B$ so that $2B$ exceeds the maximum difference between any complementary factors of $c$. Then $(m-Bn)(m+Bn) = c$ has no integer solutions with $n\ge 1$. Lemma: Let $A$ be any finite set of integers, and $c \notin A$.
There exists a $B\ge 1$ such that the set $\{(Bn)^2+c: n > 0\}$ is disjoint from the union of quadratic progressions $\{ n^2 + a : n \ge 0, a \in A \}$. Proof: Apply the proposition to each value $c-a$, and take $B$ to be the LCM of all the (finitely many) multipliers so obtained. Theorem: There exists an infinite sequence $(B_k, c_k) : k \ge 0$ with $B_k > 0$ such that the quadratic progressions $\{ B_k n^2 + c_k : n \ge 0 \}$ form a partition of $\mathbb N$. Proof: Start with $(B_0,c_0) = (1,1)$. We proceed inductively: suppose that $(B_k, c_k)$ have been chosen for all $k < m$, with each $B_k$ a perfect square. Let $c_m$ be the smallest natural number not covered by the progressions chosen so far, and apply the lemma with $A = \{c_0,\ldots,c_{m-1}\}$ and $c = c_m$ to obtain $B$; setting $B_m = B^2$, the set $\{ B_m n^2 + c_m : n > 0 \}$ is disjoint from all previous progressions, and also $\{ B_m n^2 + c_m : n = 0 \}$ is disjoint by choice of $c_m$. Thus we may construct an infinite sequence $(B_k,c_k)$ in this manner. Finally, this is certain to cover all of $\mathbb N$ since we chose $c$ minimally, so that the first $m$ progressions necessarily cover $\{1,\ldots,m\}$.<|endoftext|> TITLE: Differential of a Map QUESTION [5 upvotes]: I have the following map that embeds the Torus $T^2$ into $\mathbb{R}^3$: $$f(\theta, \phi)=(\cos\theta(R+r\cos(\phi)),\sin\theta(R+r\cos(\phi)), r\sin\phi)$$ noting that $0<\theta,\phi<2\pi$.<|endoftext|> TITLE: What is $\mathbb{Z}[x]/(x,x^2+1)$ isomorphic to? QUESTION [6 upvotes]: Consider the quotient ring $\mathbb{Z}[x]/(x,x^2+1)$. Taking the quotient by $(x)$ first, we get a ring that is isomorphic to $\mathbb{Z}$ by setting the relation $x=0$. Applying the relation, $(x^2+1)$ becomes $(1)$, so the quotient ring is isomorphic to $\mathbb{Z}/(1)=\{0\}$. Taking the quotient by $(x^2+1)$ first, we get a ring that is isomorphic to $\mathbb{Z}[i]$ by setting the relation $x^2=-1$ (or equivalently, $x=i$). Applying the relation, $(x)$ becomes $(i)$, so the quotient ring is isomorphic to $\mathbb{Z}[i]/(i)\approx\mathbb{Z}$. Which approach, if either, is correct? REPLY [5 votes]: You could also note that $1=(x^2+1)-x(x)\in (x,x^2+1)$, so $(x,x^2+1)=\mathbb{Z}[x]$. From this point of view, the quotient is evidently 0.<|endoftext|> TITLE: Can endpoints be local minimum? QUESTION [8 upvotes]: My textbook defines local maximum as follows: A function $f$ has local maximum value at point $c$ within its domain $D$ if $f(x)\leq f(c)$ for all $x$ in its domain lying in some open interval containing $c$. The question asks to find any local maximum or minimum values of the function $$g(x)=x^2-4x+4$$ on the domain $1\leq x<+\infty$. The answer at the back has the point $(1,1)$, which is the endpoint. According to the definition given in the textbook, I would think endpoints cannot be local minima or maxima, given that the domain contains no open interval around them (e.g., the open interval $(1,3)$ does not contain $1$). Where am I wrong? REPLY [3 votes]: I think fundamentally the comments are right, and you should speak with your teacher to confirm definitions and expectations. But there's also a point to make about topology here, which could justify the book's definition and answer as consistent. The definition of local maximum you gave is: A function $f$ has a local maximum at point $c$ within its domain $D$ if $f(x) \leq f(c)$ for all $x$ in its domain lying in some **open** interval containing $c$. If you interpret this as saying that the interval can come from $\mathbb{R}$, and is not restricted to $D$, then you have no problem, as others have pointed out. But like you I am thinking about being restricted to $D$ and my instinct is to think only about intervals in $D$.
This can still be ok, if we just alter our interpretation of "open" a little bit (in a natural way)... Now, whenever we say "open" we're really saying "open with respect to **insert topology here**." A lot of the time it's obvious from context or the textbook has established a practice of contextual implication, but in this case (without knowing your book) I'd argue there are two reasonable interpretations: We might be talking about open intervals with respect to the standard topology on $\mathbb{R}$ (which is what you've probably been using in your class), but since we're restricting our attention to a domain $D \subset \mathbb{R}$, it's also pretty normal to talk about a different topology, called the subset topology on $D$ (induced by the standard topology on $\mathbb{R}$). In the subset topology on $D \subset \mathbb{R}$ (induced by the standard topology), a set $S$ is open if and only if $S$ is the intersection $D \cap X $, with $X$ open in $\mathbb{R}$ with respect to the standard topology on $\mathbb{R}$. We're often more interested in the subset topology than the usual topology on the whole space just because of situations like the one you're in, in which a definition doesn't work quite like you expect when $D \not= \mathbb{R}$. So let's work with a slightly different definition of local maximum: A function $f$ has a local maximum at point $c$ within its domain $D$ if $f(x) \leq f(c)$ for all $x$ in its domain lying in some interval $I$ containing $c$ such that $I$ is open with respect to the subset topology on $D$. Now back to your case. Let $D = [1, \infty)$. For any $a > 1$, we have that $$[1,a) = D \cap (-a,a)$$ Since $(-a,a)$ is open in $\mathbb{R}$ with respect to the standard topology, $[1,a)$ is open in $D$ with respect to the subset topology on $D$. This intuitively makes sense, because if you were an ant walking on $f(D)$, when you came to $f(1)$ you'd have nowhere to go but down.<|endoftext|> TITLE: Where is the absolute value when computing antiderivatives? QUESTION [23 upvotes]: Here is a typical second-semester single-variable calculus question: $$ \int \frac{1}{\sqrt{1-x^2}} \, dx $$ Students are probably taught to just memorize the result of this since the derivative of $\arcsin(x)$ is taught as a rule to memorize. However, if we were to actually try and find an antiderivative, we might let $$ x = \sin \theta \quad \implies \quad dx = \cos \theta \, d \theta $$ so the integral may be rewritten as $$ \int \frac{\cos \theta}{\sqrt{1 - \sin^2 \theta}} \, d \theta = \int \frac{\cos \theta}{\sqrt{\cos^2 \theta}} \, d \theta $$ At this point, students then simplify the denominator to just $\cos \theta$, which boils the integral down to $$ \int 1 \, d \theta = \theta + C = \arcsin x + C $$ which is the correct antiderivative. However, by definition, $\sqrt{x^2} = |x|$, implying that the integral above should really be simplified to $$ \int \frac{\cos \theta}{|\cos \theta|} \, d \theta = \int \pm 1 \, d \theta $$ depending on the interval for $\theta$. At this point, it looks like the answer that we will eventually arrive at is different from what we know the correct answer to be. Why is the first way correct even though we're not simplifying correctly, while the second way is... weird... while simplifying correctly? REPLY [8 votes]: Let $\operatorname{sgn}(x)$ be the function that takes values $-1, 0, 1$ depending on the sign of $x$.
For the sake of generality, if you have two variables $x$ and $\theta$ related by $x = \sin \theta$ and the square root symbol means to always take the positive square root, then the opening post is correct: the right formula relating the differentials is $$ \frac{\mathrm{d}x}{\sqrt{1 - x^2}} = \operatorname{sgn}(\cos(\theta)) \mathrm{d} \theta $$ Now, one thing to note is that the domain of these functions excludes $x = \pm 1$; similarly, it excludes all values of $\theta$ for which $\cos(\theta) = 0$. On this domain, $\operatorname{sgn}(\cos(\theta))$ is locally constant. In this situation, the domain consists of a series of completely disjoint intervals $$\ldots \cup (-3\pi/2, -\pi/2) \cup (-\pi/2, \pi/2) \cup (\pi/2, 3\pi/2) \cup \ldots$$ "Locally constant" means any function that is constant on each of these intervals, but can have different values on different intervals. Nearly everywhere in calculus where you learned something involving constants is actually about things that are locally constant. For example, since $\operatorname{sgn}(\cos(\theta))$ is locally constant, its antiderivatives are all of the form $$ \operatorname{sgn}(\cos(\theta)) \theta + C(\theta) $$ where $C(\theta)$ is also locally constant. (note that we need a local constant of integration, not merely a constant of integration!) Now, if we were so inclined, we can extend this formula to the domain of all $\theta$ by lining up all of the constants. The end result is that the antiderivative is a constant plus the sawtooth function depicted below. (Image of the sawtooth antiderivative produced by Wolfram Alpha; not reproduced here.) As an example of how this works, suppose our goal was to compute the integral $$ \int_{-1}^1 \frac{\mathrm{d}x}{\sqrt{1 - x^2}} $$ While unusual, we can rewrite this as $$ \int_{-\pi/2}^{5\pi/2} \operatorname{sgn}(\cos(\theta)) \mathrm{d} \theta $$ This isn't an invertible substitution, since each value of $x$ corresponds to three different values of $\theta$ (barring a few exceptions). But one-dimensional integration is very robust, and we should still expect to get the right answer if we have the details right. And we do; if we take the sawtooth function above as the antiderivative, then the integral becomes $$ \left( \frac{\pi}{2} \right) - \left( -\frac{\pi}{2} \right) = \pi $$ which is the correct answer — and the same answer we'd get by only integrating over $(-\pi/2, \pi/2)$. Of course, if we aren't interested in the greater generality, we can just simplify by insisting that $\theta \in [-\pi/2, \pi/2]$ and simply take $\theta + C$ as the antiderivative, thus avoiding any hassles with the sign.<|endoftext|> TITLE: Is every element on a set also a set? QUESTION [7 upvotes]: I've been trying to understand in a more formal way what a set actually is, but I have some questions. According to the axiom of regularity, for every non-empty set A there exists an element in the set that's disjoint from A. That would mean that such an element is also a set, right? I read here (Axiom of Regularity) that in axiomatic set theory everything is a set, and I understand that natural numbers are constructed from the empty set, integers are constructed from the naturals, rationals from the integers, and reals from the rationals. I can see how every element in such sets is also a set. But, for example, in the set of all the letters of the alphabet, or the sample space of an experiment when the possible results are not numbers, or the set of my classmates, it's not clear to me how their elements are also sets. So, are they really sets?
Is every element in a set also a set? Thank you. REPLY [5 votes]: That depends on which axioms or system you're using. You could of course make up a system which allows (distinct) atomic objects (they're called ur-elements) that can be elements of sets; this is how ZF set theory originally started. However, in (modern) ZF set theory the axiom of extensionality basically prohibits anything that's not a set. Things are considered equal if they have the same elements, and since anything not being a set does not have any elements, they would be considered equal to the empty set. As you pointed out, one constructs the natural numbers and so on by constructing sets. However, one normally does not use those properties of them (being sets); you almost never see things like $1 \cup 2$ or $0 \in 1$. One should note that the way natural numbers are constructed is not standardized, that is, there are different ways to achieve the same (standardized) properties of the numbers (so using the numbers as the sets they are is non-standard and has no universally accepted meaning).<|endoftext|> TITLE: Show that $f_{\alpha}(t)$ is a p.d.f. QUESTION [6 upvotes]: Let $\displaystyle \phi(t)=\frac{1}{\sqrt{2\pi}}e^{-t^2/2}$, $t\in \Bbb R$ be the standard normal density function and $\displaystyle \Phi(x)=\int_{-\infty}^x\phi(t)\,dt$ be the standard normal distribution function. Let $f_{\alpha}(t)=2\phi(t)\Phi(\alpha t)$, $t\in \Bbb R$, where $\alpha \in \Bbb R$. Show that $f_{\alpha}$ is a probability density function. We have $\Phi'(x)=\phi(x)$. We have to show that $\displaystyle\int_{-\infty}^{\infty}f_{\alpha}(t)\,dt=1$. I tried integration by parts but I got the value $0$. Is there any other approach, or where is my mistake? Edit: $\displaystyle \int_{-\infty}^{\infty} f_{\alpha}(t)dt= 2\int_{-\infty}^{\infty}\phi(t)\Phi(\alpha t)\,dt=2\left[\Phi(\alpha t)\int_{-\infty}^{\infty}\phi(t)\,dt\right]_{-\infty}^{\infty}-2\int_{-\infty}^{\infty}\left[\alpha\Phi'(\alpha t)\cdot\int_{-\infty}^{\infty}\phi(t)\,dt\right]\,dt=2[\Phi(\infty)-\Phi(-\infty)]-2\int_{-\infty}^{\infty}\alpha\phi(\alpha t)\,dt=\cdots=0$ REPLY [9 votes]: A probabilistic interpretation: consider $(X,Y)$ i.i.d. standard normal; then $\Phi(at)=P(X<at)=P(X<aY\mid Y=t)$, so $$\int_{-\infty}^{\infty}\phi(t)\Phi(at)\,dt=P(X<aY)=P(X-aY<0)=\frac12,$$ since $X-aY$ is a centered normal random variable, hence symmetric about $0$. Therefore $\int_{-\infty}^{\infty}f_{\alpha}(t)\,dt=2\cdot\frac12=1$, and since $f_{\alpha}\ge 0$ it is a probability density function.<|endoftext|> TITLE: Is every $F_{\sigma\delta}$-set a set of points of convergence of a sequence of continuous functions? QUESTION [7 upvotes]: It is well known that if $\langle f_n:n\in\mathbb{N}\rangle$ is a sequence of continuous functions, $f_n\colon\mathbb{R}\to\mathbb{R}$, then $\big\{x\in\mathbb{R}:\lim_{n\to\infty}f_n(x)\text{ exists}\big\}$ is an $F_{\sigma\delta}$-set (see this post). I am asking if the converse is true, i.e., whether for every $F_{\sigma\delta}$-set $E\subseteq\mathbb{R}$ there exists a sequence $\langle f_n:n\in\mathbb{N}\rangle$ of continuous functions, $f_n\colon\mathbb{R}\to\mathbb{R}$, such that $\big\{x\in\mathbb{R}:\lim_{n\to\infty}f_n(x)\text{ exists}\big\}=E$. My attempt: I would try to prove it in two steps. (1) Given an $F_{\sigma\delta}$-set $E$, find closed sets $E^k_n$, $n,k\in\mathbb{N}$, such that $E^k_n\supseteq E^l_n$ and $E^k_n\subseteq E^k_m$ for $k\le l$ and $n\le m$, and $E=\bigcap_k\bigcup_n E^k_n$. (2) Given $E^k_n$ as above, find continuous functions $f_n\colon\mathbb{R}\to\mathbb{R}$ such that for every $x$, $x\in E^k_N$ iff $\left|f_n(x)-f_m(x)\right|\le 2^{-k}$ for all $m\ge n\ge N$. (1) would be accomplished as follows. Let $E=\bigcap_k\bigcup_n F^k_n$, $F^k_n$ closed.
Let $\langle G^0_n:n\in\mathbb{N}\rangle$ consist of all elements of $\langle F^0_n:n\in\mathbb{N}\rangle$, each repeating infinitely many times. Let $\langle G^1_n:n\in\mathbb{N}\rangle$ consist of all possible intersections $F^0_i\cap F^1_j$, $i,j\in\mathbb{N}$, each repeating infinitely many times and ordered so that $G^1_n\subseteq G^0_n$ for every $n$. Similarly, let $\langle G^k_n:n\in\mathbb{N}\rangle$ consist of all possible intersections $G^0_{i_0}\cap\cdots\cap G^k_{i_k}$, $i_0,\dots,i_k\in\mathbb{N}$, each repeating infinitely many times and ordered so that $G^k_n\subseteq G^l_n$ for every $n$, whenever $l\le k$.<|endoftext|> TITLE: Is there any example of a sequentially-closed convex cone which is not closed? QUESTION [5 upvotes]: I am interested in showing that a sequentially-closed convex cone is closed in order to prove a representation theorem for a pre-ordered preference relation. Thank you in advance! REPLY [3 votes]: Consider the space $(\ell_\infty)^*$ endowed with the weak*-topology. The canonical image of $\ell_1$ in that space is sequentially closed by the Schur property of $\ell_1$, however it is also dense by Goldstine's theorem.<|endoftext|> TITLE: Prove linear combinations of logarithms of primes over $\mathbb{Q}$ is independent QUESTION [5 upvotes]: Suppose we have a set of primes $p_1,\dots,p_t$. Prove that $\log p_1,\dots,\log p_t$ are linearly independent over $\mathbb{Q}$. Now, this amounts to showing $ \sum_{j=1}^{t}x_j\log(p_j)=0 \iff x_1=\dots=x_t=0$. I think I have to use the fact that every positive $q\in\mathbb{Q}$ can be written as $\prod_{p\in\mathcal{P}}p^{n_p}$, where $(n_p)$ is a unique sequence $(n_2,n_3,\dots)$ with values in $\mathbb{Z}$. Here, $\mathcal{P}$ denotes the set of all primes. Now how can I use this to prove the linear independence? REPLY [5 votes]: If $\sum_{j=1}^{t}x_j\log(p_j)=0$ then $\sum_{j=1}^{t}y_j\log(p_j)=0$ where $y_j \in \Bbb Z$ is the product of $x_j$ by the common denominator of the $x_j$'s. Therefore $\log\left(\prod_{j=1}^t p_j^{y_j}\right) = 0$, which implies $\prod_{j=1}^t p_j^{y_j} = 1$, and this is only possible if $y_j=0$ for all $j$. Indeed, you have $$ \prod\limits_{\substack{1 \leq j \leq t\\ y_j \geq 0}} p_j^{y_j} = \prod\limits_{\substack{1 \leq i \leq t\\ y_i < 0}} p_i^{-y_i}$$ and uniqueness of the prime factorization implies $y_j=0$ for all $j$. The converse is easy to see: if $x_j=0$ for all $j$, then $\sum_{j=1}^{t}x_j\log(p_j)=0$.<|endoftext|> TITLE: Find minimal value of $abc$ if the quadratic equation $ax^2-bx+c = 0$ has two roots in $(0,1)$ QUESTION [9 upvotes]: If $$ ax^2-bx+c = 0 $$ has two distinct real roots in $(0,1)$, where $a, b, c$ are natural numbers, then find the minimum value of the product $abc$. REPLY [13 votes]: Since $a,b,c$ are positive, the roots are trivially greater than $0$. What remains is to solve the inequality $$\frac{b + \sqrt{b^2-4ac}}{2a} <1$$ This reduces to $a+c>b$. But the roots being real and distinct, we have $b^2 >4ac$. Combining both we have $$a^2 + c^2 + 2ac > b^2 > 4ac$$ Here $b^2 > 4ac$ tells us $b> 2$ (since $ac\ge 1$), and $a^2 + c^2+ 2ac > 4ac$ tells us $a \neq c$. Checking small cases we get $(a,b,c) =(5,5,1)$, where $abc =25$. EDIT: Checking "small" cases is not informative, so adding an explanation: Keeping in mind $a+c>b$, for a given $b$ the product is smallest when $c=1$ and $a=b$ (the least integer choice with $a+c>b$), so for given $b$ the minimum of $abc$ is $b^2$. The smallest value of $b$ satisfying $b^2>4ac=4b$ is $5$.
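This can be corroborated by a quick brute-force search (a Python sketch over $1\le a,b,c\le 30$; the bound is arbitrary but comfortably large enough here):

from math import sqrt

best = None
for a in range(1, 31):
    for b in range(1, 31):
        for c in range(1, 31):
            disc = b * b - 4 * a * c
            if disc <= 0:
                continue                  # need two distinct real roots
            lo = (b - sqrt(disc)) / (2 * a)
            hi = (b + sqrt(disc)) / (2 * a)
            if 0 < lo and hi < 1:         # both roots inside (0, 1)
                cand = (a * b * c, a, b, c)
                best = cand if best is None else min(best, cand)
print(best)  # (25, 5, 5, 1)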
Hence the corresponding minimum value is $5^2$.<|endoftext|> TITLE: Justify geometrically: an element and its inverse are not conjugate QUESTION [6 upvotes]: Consider the group $G$ of rotations of a regular tetrahedron in $\mathbb{R}^3$. We know that this group is $A_4$. We also know that a rotation of order $3$ and its inverse are not conjugate: a rotation of order $3$ corresponds to a 3-cycle $(123)$, and in $A_4$ we know by algebraic arguments that $(123)$ and $(132)$ are not conjugate. Q. Is there any smart geometric way to show that a rotation of order $3$ and its inverse are not conjugate in the group of rotational symmetries? REPLY [3 votes]: You can visualize this as follows: The tetrahedron is orientable, that is, you can draw on each face a circular arc such that where these orientations meet on the edges they annihilate each other. There are eight rotations of order $3$, two for each face: a positive one and a negative one. It is impossible for the action of the symmetry group to map a positive rotation into a negative one.<|endoftext|> TITLE: Are closed balls convex in a translation surface? QUESTION [8 upvotes]: Let $(X,\omega)$ be a translation surface and $x$ any point (smooth or not) in it. Let $r\in \mathbb{R}^+$ be such that it is smaller than the diameter of $(X,\omega)$. Is the closed ball $B_r(x)$ always convex? My guess is no. I tried to figure it out using a simple translation surface: the regular octahedron with sides identified (it has one point of conical singularity of total angle $6\pi$). Then if I'm not wrong I can find a smooth point $x$ and an $r>0$ such that the closed ball $B_r(x)$ "overlaps" around the singular point, giving non-convexity. In the figure (not reproduced here) I've drawn the situation I mean: the ball $B_r(x)$ is the dark part of the octahedron and I drew two segments not entirely contained in it. Are my guess and my construction right? Thank you REPLY [2 votes]: You are correct that the answer is no. Your example seems correct as well. Perhaps the simplest example is an infinite circular cylinder of radius $r$ (and circumference $2\pi r$). If $p$ is any point on this cylinder, the ball of radius $\pi r$ centered at $p$ is "tangent" to itself on the back side, and is thus clearly not convex (picture of this ball not reproduced here). Indeed, balls on this cylinder are convex if and only if their radius is less than $\pi r/2$. Of course, this example is non-compact, but basically the same geometry works on a flat torus of sufficient size.<|endoftext|> TITLE: Compute $E(\sin X)$ if $X$ is normally distributed QUESTION [7 upvotes]: If $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, what is the expected value $E[\sin(X)]$? I think this has something to do with the characteristic function... REPLY [10 votes]: Let $X\sim \mathcal{N}(\mu,\sigma)$. Then, the characteristic function of $X$ is $$t\mapsto\phi_{X}(t):=\Bbb E[\exp(itX)]=\exp\left(i\mu t-\frac{\sigma^{2}t^{2}}{2}\right)$$ By linearity of the integral, we have, for any integrable complex-valued function $f$: $$\mathfrak{Im}\int f=\int \mathfrak{Im} f \tag{1}$$ where $\mathfrak{Im}$ denotes the imaginary part of a complex number and is defined pointwise for a complex-valued function. Indeed, let $(\Omega,\mathcal{F},\nu)$ be a measure space and $f:\Omega\to\Bbb C$ a $\nu$-integrable function. Then, for any $\omega\in\Omega$, we can write: $$f(\omega)=f_{1}(\omega)+if_{2}(\omega)$$ where $f_{1}$ and $f_{2}$ are real-valued functions on $\Omega$.
It is easy to see that $f_{1}$ and $f_{2}$ are integrable if $f$ is integrable (actually, if and only if). Therefore, we have: $$\int_{\Omega} f\,\text{d}\nu=\int_{\Omega}f_{1}+if_{2}\,\text{d}\nu:=\int_{\Omega}f_{1}\,\text{d}\nu+i\int_{\Omega}f_{2}\,\text{d}\nu$$ $(1)$ follows obviously. Hence, we have: \begin{align*} \Bbb E[\sin(X)]&=\,\Bbb E[\mathfrak{Im}\exp(iX)]\\ &=\mathfrak{Im}\,\Bbb E[\exp(iX)]\\ &=\mathfrak{Im}\,\phi_{X}(1)\\ &=\mathfrak{Im}\exp\left(i\mu-\frac{\sigma^{2}}{2}\right)\\ &=\sin(\mu)\exp\left(-\frac{\sigma^{2}}{2}\right) \end{align*}<|endoftext|> TITLE: Is there any mathematical reason for this "digit-repetition-show"? QUESTION [136 upvotes]: The number $$\sqrt{308642}$$ has a crazy decimal representation: $$555.5555777777773333333511111102222222719999970133335210666544640008\cdots $$ Is there any mathematical reason for so many repetitions of the digits? A long block containing only a single digit would be easier to understand. This could mean that there are extremely good rational approximations. But here we have many long one-digit-blocks, some consecutive, some interrupted by a few digits. I did not calculate the probability of such a "digit-repetition-show", but I think it is extremely small. Does anyone have an explanation? REPLY [152 votes]: The architect's answer, while explaining the absolutely crucial fact that $$\sqrt{308642}\approx 5000/9=555.555\ldots,$$ didn't quite make it clear why we get several runs of repeating decimals. I try to shed additional light on that using a different tool. I want to emphasize the role of the binomial series. In particular the Taylor expansion $$ \sqrt{1+x}=1+\frac x2-\frac{x^2}8+\frac{x^3}{16}-\frac{5x^4}{128}+\frac{7x^5}{256}-\frac{21x^6}{1024}+\cdots $$ If we plug in $x=2/(5000)^2=8\cdot10^{-8}$, we get $$ M:=\sqrt{1+8\cdot10^{-8}}=1+4\cdot10^{-8}-8\cdot10^{-16}+32\cdot10^{-24}-160\cdot10^{-32}+\cdots. $$ Therefore $$ \begin{aligned} \sqrt{308642}&=\frac{5000}9M=\frac{5000}9+\frac{20000}9\cdot10^{-8}-\frac{40000}9\cdot10^{-16}+\frac{160000}9\cdot10^{-24}+\cdots\\ &=\frac{5}9\cdot10^3+\frac29\cdot10^{-4}-\frac49\cdot10^{-12}+\frac{16}9\cdot10^{-20}+\cdots. \end{aligned} $$ This explains the runs, their starting points, and the origin and location of those extra digits not part of any run. For example, the run of $5+2=7$s begins when the first two terms of the above series are "active". When the third term joins in, we need to subtract a $4$ and a run of $3$s ensues et cetera.
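One can watch this happen with Python's standard decimal module. A small sketch (the working precision of 80 digits is an arbitrary choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80
root = Decimal(308642).sqrt()
print(root)  # matches the decimal expansion quoted in the question

# partial sums of (5000/9)*M reproduce the runs, roughly 8 fresh digits per term
series = (Decimal(5000) / 9
          + Decimal(20000) / 9 * Decimal(10) ** -8
          - Decimal(40000) / 9 * Decimal(10) ** -16
          + Decimal(160000) / 9 * Decimal(10) ** -24)
print(abs(root - series))  # about 9E-28, the size of the next term of the series
```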
<|endoftext|> TITLE: Integral of a bivariate Gaussian in the positive quadrant QUESTION [5 upvotes]: I am looking for a reference (or a somewhat simple proof) for the following result, which for instance Mathematica spits out without too much effort. Here $a,b,c \in \mathbb{R}$ are constants satisfying $a, c < 0$ and $b^2 < 4 a c$. $$\int_0^{\infty}\int_0^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy = \frac{1}{2\sqrt{4ac-b^2}} \left(\pi + 2 \arctan\left(\frac{b}{\sqrt{4ac-b^2}}\right)\right).$$ One idea I had was to write: \begin{align} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy &= 2 \int_0^{\infty}\int_0^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy \\ &+ 2 \int_0^{\infty}\int_0^{\infty} \exp(a x^2 - b x y + c y^2) \, dx \, dy \end{align} The integral on the left can easily be computed as it corresponds to the total probability of the corresponding bivariate Gaussian distribution (up to a scaling factor), but then I'm stuck at solving the other integral with $(-b)$ substituted for $b$, which seems just as complicated to compute and is not easily relatable to the original integral. So I'm not sure if this approach leads anywhere. A more direct approach would be to simply compute both integrals explicitly, one at a time. Doing one integral first, a human can verify without too much effort that $$\int_0^{\infty} \exp(a x^2 + b x y + c y^2) \, dx = \sqrt{\frac{\pi}{-4 a}} \cdot \exp\left(\frac{(4 a c -b^2) y^2}{4a}\right) \left(\text{erf}\left(\frac{b y}{2 \sqrt{-a}}\right)+1\right).$$ Doing the second integral with $\exp(u y^2) (\text{erf}(v y) + 1)$ is not so straightforward though, and for instance I could not find a reference for computing such integrals (and getting an arctangent in the process) in Abramowitz and Stegun's handbook. So if someone has a reference for integrating $\exp(u y^2) \text{erf}(v y)$ over $y > 0$ for $u, v$ as above, that would also be appreciated. REPLY [5 votes]: Of course, just when I gave up and posted the question, I found a solution... It is based on substituting $y = x s$ (and $dy = x \, ds$) before computing the integral over $x$, so that it greatly simplifies and an integral over $1/(a + bs + cs^2)$ remains, which leads to the arctangent solution. \begin{align} \int_{y=0}^{\infty}\int_{0}^{\infty} \exp(a x^2 + b x y + c y^2) \, dx \, dy &= \int_{s=0}^{\infty} \left(\int_{0}^{\infty} \exp\left((a + bs + cs^2)x^2 \right) \, x \, dx\right) \, ds \\ &= \int_{0}^{\infty} \left[\frac{\exp\left((a + bs + cs^2) x^2\right)}{2 (a + bs + cs^2)}\right]_{x=0}^{\infty} \, ds \\ &= \int_{0}^{\infty} \left[0 - \frac{1}{2 (a + bs + cs^2)}\right] \, ds \\ &= \frac{-1}{2} \int_{0}^{\infty} \frac{1}{a + bs + cs^2} \, ds \\ &= \frac{-1}{2} \left[\frac{2}{\sqrt{4 a c - b^2}} \, \arctan \left(\frac{2 c s + b}{\sqrt{4 a c - b^2}}\right)\right]_{s=0}^{\infty} \\ &= \frac{-1}{\sqrt{4 a c - b^2}} \left[-\frac{\pi}{2} - \arctan \left(\frac{b}{\sqrt{4 a c - b^2}}\right)\right]. \end{align} Here we used the provided conditions on $a, b, c$ several times. For instance, the third equality uses that $a + bs + cs^2 < 0$ for all $s > 0$, and the last equality uses $c<0$: as $s\to\infty$ we have $2cs+b\to-\infty$, so the arctangent tends to $-\pi/2$.
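For what it is worth, the closed form is easy to test numerically. A sketch using scipy (the sample values $a=c=-1$, $b=1$ are an arbitrary choice satisfying the constraints):

```python
import numpy as np
from scipy.integrate import dblquad

a, b, c = -1.0, 1.0, -1.0  # a, c < 0 and b**2 < 4*a*c hold here

# numerical integral over the positive quadrant (dblquad's integrand is f(y, x))
num, _ = dblquad(lambda y, x: np.exp(a*x**2 + b*x*y + c*y**2),
                 0, np.inf, 0, np.inf)

d = np.sqrt(4*a*c - b*b)
closed = (np.pi + 2*np.arctan(b/d)) / (2*d)
print(num, closed)  # both about 1.2092, i.e. 2*pi/(3*sqrt(3))
```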
<|endoftext|> TITLE: Discriminant of a cyclotomic field QUESTION [7 upvotes]: If $\zeta$ is a primitive $n$-th root of unity, prove that: $$d(1, \zeta,...,\zeta^{\varphi(n)-1})=(-1)^{\varphi(n)/2}n^{\varphi(n)}\prod_{p\mid n} p^{-\frac{\varphi(n)}{p-1}}$$ Let $n=\prod_{i=1}^{m}p_i^{e_i}$. After looking it up in some books, I was able to understand why this is true for $m=1$. However, they all ignored the general case $m>1$ or simply stated that it could be done by induction on $m$, but I really can't see how it could be done. The only interesting thing I could find out was that for $n,m$ with $\gcd(n, m)=1$, we get, on the right hand side of the equation: $$(-1)^{\varphi(nm)/2}(nm)^{\varphi(nm)}\prod_{p\mid nm} p^{-\frac{\varphi(nm)}{p-1}}=$$ $$\left((-1)^{\varphi(n)/2}n^{\varphi(n)}\prod_{p\mid n} p^{-\frac{\varphi(n)}{p-1}}\right)^{\varphi(m)}\left((-1)^{\varphi(m)/2}m^{\varphi(m)}\prod_{p\mid m} p^{-\frac{\varphi(m)}{p-1}}\right)^{\varphi(n)}$$ That makes me think I'm getting somewhere, but I'm stuck with the problem of showing that $d(1, \zeta_{nm},...,\zeta_{nm}^{\varphi(nm)-1})=[d(1, \zeta_n,...,\zeta_n^{\varphi(n)-1})]^{\varphi(m)}[d(1, \zeta_m,...,\zeta_m^{\varphi(m)-1})]^{\varphi(n)}$, which doesn't seem trivial at all. Any ideas? Thanks! REPLY [5 votes]: Because we have a proposition that says: if $K, L$ are two number fields linearly disjoint over $\mathbb{Q}$ whose discriminants are coprime, and $KL$ is their compositum, then $$\delta_{KL}=\delta_{K}^{[L:\mathbb{Q}]}\cdot\delta_{L}^{[K:\mathbb{Q}]}.$$ In our case, $\mathbb{Q}(\zeta_{n})$ and $\mathbb{Q}(\zeta_{m})$ are linearly disjoint because $\gcd(n,m)=1$, and their discriminants are coprime, so $$\delta_{\mathbb{Q}(\zeta_{mn})}=\delta_{\mathbb{Q}(\zeta_{n})}^{\phi(m)}\cdot\delta_{\mathbb{Q}(\zeta_{m})}^{\phi(n)}.$$ But the problem is matching the sign in $$\bigg( (-1)^{\phi(n)/2}n^{\phi(n)}\prod_{p\mid n}p^{\frac{-\phi(n)}{p-1}}\bigg)^{\phi(m)}\cdot\bigg( (-1)^{\phi(m)/2}m^{\phi(m)}\prod_{p\mid m}p^{\frac{-\phi(m)}{p-1}}\bigg)^{\phi(n)}=(-1)^{\phi(nm)}(nm)^{\phi(nm)}\prod_{p\mid nm}p^{\frac{-\phi(nm)}{p-1}}$$ against the claimed $(-1)^{\phi(nm)/2}$. In fact there is no issue: for coprime $n,m\ge 3$, both $\varphi(n)$ and $\varphi(m)$ are even, so $\varphi(nm)=\varphi(n)\varphi(m)\equiv 0 \pmod 4$ and $(-1)^{\varphi(nm)}=(-1)^{\varphi(nm)/2}=1$.
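Since the powers of $\zeta_n$ form an integral basis of $\mathbb{Q}(\zeta_n)$, the discriminant above equals $\operatorname{disc}(\Phi_n)$, so the formula can be spot-checked with sympy. A sketch (the sample values of $n$ are arbitrary):

```python
from sympy import cyclotomic_poly, discriminant, primefactors, symbols, totient

x = symbols('x')

def rhs(n):
    # (-1)^(phi(n)/2) * n^phi(n) * prod_{p | n} p^(-phi(n)/(p-1))
    t = int(totient(n))
    val = (-1) ** (t // 2) * n ** t
    for p in primefactors(n):
        val //= p ** (t // (p - 1))  # exact division: p | n
    return val

for n in (5, 8, 12, 15):
    print(n, discriminant(cyclotomic_poly(n, x), x), rhs(n))
# the two columns agree, e.g. n = 12 gives 144 and n = 15 gives 1265625
```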
<|endoftext|> TITLE: A smooth function can not be transformed into another smooth function without changing the value of every open interval. QUESTION [5 upvotes]: Take any $C^\infty$ (smooth) function $f: R \to R$. For any arbitrary function $t:R\to R$, define $g :R\to R$ as $g(x)= (t\circ f)(x)$. Conjecture: For any such $g$, if $g$ is smooth ($g\in C^\infty$), the following must necessarily hold: $(i)$: Either $t(x) = x$ (the identity function), or $(ii)$: There exists no open ($O_R$) interval $U$ on the domain of $f, g$ for which $f(U)=g(U)$ holds, i.e.: $$\forall U\in O_R:\exists x\in U:f(x)\neq g(x)$$ In plain English: a smooth function cannot be transformed into another smooth function without changing the values in all its intervals; only isolated points may remain unchanged. Here is an incomplete argument for why it seems to me it must be true: Assume we have an arbitrary smooth function $f$ and an arbitrary function $t$, and $g=t\circ f$. Assume that $t$ is not the identity function (contradicting condition $i$), and that for some interval $(a,b)$, $f(x)=g(x)$ for all $x\in (a,b)$ (contradicting condition $ii$). Take $b$ here to be the largest $b$ such that this holds (which is possible by the Completeness Axiom on $R$). Now denote by $f_n, g_n$ the $n$th derivative of $f, g$ respectively. Since by assumption $f$ is smooth at $b$, we know that $$(1): \underset{\delta \to 0^-}{\text{Lim}}\left(\frac{f_{n-1}(b+\delta)-f_{n-1}(b)}{\delta}\right)=:L_{f_n}^-=L_{f_n}^+:=\underset{\delta \to 0^+}{\text{Lim}}\left(\frac{f_{n-1}(b+\delta)-f_{n-1}(b)}{\delta}\right)$$ ($L$ will denote the limit with respect to the point $b$). $(2):$ Since $f$ and $g$ are identical on $(a,b)$, we also know that $L_{f_n}^-=L_{g_n}^-$ for all $n\in \mathbb N$. $(3):$ Now assume (in order to derive a contradiction) that $g$ is smooth at $b$, so that $L_{g_n}^-=L_{g_n}^+$ for all $n \in \mathbb N$. Then using $(1,2)$ it also holds that $L_{f_n}^+=L_{g_n}^+$ for all $n \in \mathbb N$. However, since $b$ is the largest value such that $f(x)=g(x)$ on $(a,b)$, that means that either $f(b)\neq g(b)$ (in which case $g$ is discontinuous and not smooth, completing the proof for that case), or for some $c>b$, it is the case that $f(x)\neq g(x)$ for all $x\in (b,c)$. Now here comes a bit of a leap: Given that $f(x)\neq g(x)$ for all $x\in (b,c)$, we also know that there is an interval $(b,\beta _1)$, where $\beta_1\leq c$, in which for all $x$: $f_1(x)\neq g_1(x)$. Similarly, given an interval $(b, \beta_i)$ in which for all $x: f_i(x)\neq g_i(x)$, there is an interval $(b, \beta_{i+1})$, where $\beta_{i+1}\leq \beta_i$, in which for all $x: f_{i+1}(x)\neq g_{i+1}(x)$. Again a leap: Hence we know that for any $n\in \mathbb N$, there is a $\beta > b$ such that for all $x\in (b, \beta)$, $f_{n}(x)\neq g_{n}(x)$. Hence there exists an $n\in \mathbb N$ such that $L_{g_n}^+ \neq L_{f_n}^+$. This contradicts $(3)$; therefore, $g$ is not smooth. Discussion: Is this conjecture correct? Is the first part of the proof correct? Is there a way to fill in the "leaps" at the end? Are there better ways to prove it (or, if the conjecture is false, to restate it into a correct one)? ps. note, I have no formal maths training, and I came up with this conjecture myself based on intuition, so if this is a stupid conjecture or proof, understand that. REPLY [3 votes]: Counterexample: Define $$f(x) = \begin{cases} 0 & x\le 0\\e^{-1/x} & x>0\end{cases}$$ Then $f\in C^\infty(\mathbb R)$. With $t(x) = x^2$, we have $f$ and $t\circ f$ equal to $0$ on $(-\infty,0]$, yet $t\circ f = f^2$ is smooth and $t$ is not the identity, so both conditions of the conjecture fail.<|endoftext|> TITLE: Half iteration of exponential function QUESTION [8 upvotes]: I'm working on the half iteration of the exponential function. No one has any idea what fractional iterations could mean, but I think intuitively it should be a function $f(x)$ such that $f(f(x))=e^x$. Here's how I'm finding $f(x)$ when $x\approx 0$: If $x\approx 0$, then we have $$e^x\approx 1+x+\frac{x^2}{2}. \tag{1}$$ Now, if we assume the required function $f(x)$ to be of the form $ax^2+bx+c$, then $$f(f(x))= a^3x^4+2a^2bx^3+(2a^2c+ab^2+ab)x^2+(2abc+b^2)x+ac^2+bc+c$$ But, since $x\approx 0$, therefore $$f(f(x))=e^x\approx ac^2+bc+c+(2abc+b^2)x+(2a^2c+ab^2+ab)x^2. \tag{2}$$ Comparing coefficients of like powers of $x$ in equations (1) and (2), we get $$ac^2+bc+c=1 \tag {3.1}$$ $$2abc+b^2=1 \tag {3.2}$$ $$2a^2c+ab^2+ab=\frac{1}{2} \tag {3.3}$$ The problem is solving these equations. I've tried substitution but they get reduced to a polynomial of very high degree which I don't know how to solve. Is there some way to solve these to get $a$, $b$, and $c$ and hence get the required half iteration function of $e^x$ as $ax^2+bx+c$? Please tell me how to solve these three equations. REPLY [2 votes]: Looking at the equations $$ac^2+bc+c=1 \tag 1$$ $$2abc+b^2=1 \tag 2$$ $$2a^2c+ab^2+ab=\frac{1}{2} \tag 3$$ we can eliminate $b$ from $(1)$: $$b=\frac{1-c-a c^2}{c}$$ Replacing in $(2)$ and solving for $a$ leads to $$a=\frac{\sqrt{1-2 c}}{c^2}$$ Replacing in $(3)$ leads to $$-c \left(c^3+12 c+6 \sqrt{1-2 c}-14\right)+4 \sqrt{1-2 c}=4$$ After squaring steps, this reduces to $$c^7+24c^5-28c^4+152c^3-264c^2+160c-32=0$$ which has only one real root, close to $c=\frac 12$. Using Newton's method to find the zero of this septic equation leads to $$a=0.261795456735753$$ $$b=0.878112905194437$$ $$c=0.497894079064888$$ as already given in Gottfried Helms's answer.
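Composing the quadratic with itself confirms the fit near the origin; a quick sketch:

```python
import math

a, b, c = 0.261795456735753, 0.878112905194437, 0.497894079064888

def f(x):
    # the candidate half-iterate f(x) = a*x^2 + b*x + c
    return a * x * x + b * x + c

for x in (-0.05, 0.0, 0.05):
    print(x, f(f(x)), math.exp(x))
# f(f(x)) and exp(x) agree to roughly five decimal places on this range
```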
These numbers can be rationalized as $$a=\frac{37409}{142894}\qquad b=\frac{77821}{88623}\qquad c=\frac{18323}{36801}$$ Edit: Back to the problem eighteen months later, we could get expressions for the solution using $[1,n]$ Padé approximants for the septic equation (built around $c=\frac 12$). For different values of $n$, this would give $$a_1=\frac{146336 \sqrt{352121}}{331786225}\qquad b_1=\frac{18369-4 \sqrt{352121}}{18215}\qquad c_1=\frac{18215}{36584}$$ $$a_2=\frac{3257213 \sqrt{44685705147}}{2630063332009}\qquad b_2=\frac{1635466-\sqrt{44685705147}}{1621747}\qquad c_2=\frac{1621747}{3257213}$$<|endoftext|> TITLE: Denoting all the cube roots of a real number QUESTION [9 upvotes]: This may be a very simple question to ask, but I am confused with these definitions and would like to clarify here. $\sqrt{81} = 9$. But $\sqrt{81} \ne -9$ because $\sqrt{}$ is used to represent the principal root. So, if I want to represent both the roots, I have to write $\pm\sqrt{81} = \pm 9$. We know every real number ($\ne 0$) has three cube roots, one real and two complex. So, if we say $\root 3 \of {27}$, it means the principal cube root, which is $3$. If so, (a) How do we indicate that we are referring to all three cube roots together (like $\pm \sqrt{81}$ for square roots), given that $\root 3 \of {}$ refers only to the principal cube root? (b) Why are there no commonly accepted guidelines for choosing the principal cube root? In some places it is the real root, while some books take it to be the one with positive imaginary part. Please help me to clear my doubts. REPLY [2 votes]: When using the cubic formula (like the quadratic formula, except for cubics), the first root is found using the principal cube root, and in this case it is the one with the LARGEST REAL PART; if there is a tie, you choose the one among those tied with the largest imaginary part. The second root is the one with the largest imaginary part that is not the principal cube root. The third is the other. If you do not do this, you will not get the right answer in the cubic formula.<|endoftext|> TITLE: Why does the gradient commute with taking expectation? QUESTION [6 upvotes]: Let $X, Y$ be two random variables, with $X$ taking values in $\Bbb R^n$ and $Y$ taking values in $\Bbb R$. Then we can look at the function $h: \Bbb R^n \to \Bbb R$ given by $$\beta \mapsto \Bbb E[(Y-X^T\beta)^2]$$ It is claimed that the gradient of $h$ is given by $$\nabla h = \Bbb E[2X(X^T\beta-Y)]$$ This seems like a special case of the identity $$\nabla \Bbb E[f]=\Bbb E [\nabla f]$$ where the expectation is taken over the joint distribution of some random variables. Formally, we want the following: Suppose $X_1,...,X_m$ are random variables taking values in some sets $A_i$ with some given joint probability distribution. Then for every function $f: \Bbb R^n \times \prod A_i \to \Bbb R$, for every $\beta \in \Bbb R^n$ we can form the random variable $f(\beta, X_1,...,X_m)$ and take its expectation. Taking different values of $\beta$ gives rise to a function $\Bbb R^n \to \Bbb R$. We claim that its gradient is equal to the vector obtained by first fixing the values of $X_1,...,X_m$ and taking the gradient of the resulting function $\Bbb R^n \to \Bbb R$ (this gives a random variable taking values in $\Bbb R^n$), and then taking the expectation. REPLY [2 votes]: $\beta$ is not a random variable, so you can expand the expression and take $\beta$ out as a factor. Then differentiate that expression with respect to $\beta$ and check that the claimed equality holds. It is not more complicated than that. It would be a problem if you couldn't factor $\beta$ out (e.g. $e^{X^{\top}\beta}$). Then you would indeed have to justify interchanging integration and differentiation.
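On a finite sample (the empirical stand-in for the expectation), the claimed gradient is easy to check against finite differences. A sketch with assumed toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # toy sample standing in for the law of X
Y = rng.normal(size=1000)
beta = np.array([0.3, -1.0, 2.0])

def h(b):
    # empirical version of E[(Y - X^T beta)^2]
    return np.mean((Y - X @ b) ** 2)

# the claimed formula, with E replaced by the sample mean
grad_formula = np.mean(2 * X * (X @ beta - Y)[:, None], axis=0)

# central finite differences on h
eps = 1e-6
grad_fd = np.array([(h(beta + eps * e) - h(beta - eps * e)) / (2 * eps)
                    for e in np.eye(3)])
print(np.max(np.abs(grad_formula - grad_fd)))  # tiny, since h is exactly quadratic
```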
<|endoftext|> TITLE: Two interesting results in integration $\int_{0}^{a}f(a-x) \ \mathrm{d}x= \int_{0}^{a}f(x)\ \mathrm{d}x$ and differentiation of powers of functions QUESTION [6 upvotes]: I am investigating the following result in integration: $\displaystyle\int_{0}^{a}f(a-x) \ \mathrm{d}x = \int_{0}^{a}f(x) \ \mathrm{d}x \ \ \ (*)$ This neat little result forms the basis for many questions in calculus exams, which then often ask one to evaluate something like $\displaystyle\int_{0}^{\frac{\pi}{2}}\frac{\sin^n x}{\sin^n x + \cos^n x} \ \mathrm{d}x$ where $n$ is a positive integer. The process of solving this integral isn't too challenging, and is almost immediate from $(*)$. My question is this: can anyone think of any more challenging integrals out there (possibly requiring some clever substitution, integration by parts etc.) that $(*)$ can help solve? UPDATE I also came across another identity involving differentiation: $\displaystyle \frac{\mathrm{d}}{\mathrm{d}x}(u(x))^{v(x)} = (u(x))^{v(x)}\left(\frac{\mathrm{d}v(x)}{\mathrm{d}x}\ln u(x) + \frac{v(x)}{u(x)}\frac{\mathrm{d}u(x)}{\mathrm{d}x}\right)$. This is another identity that can be used to solve integrals, but I am again unable to find any creative examples, so if anyone could suggest some I'd be happy to give them a go. REPLY [2 votes]: There are a lot of possible answers. For example, $$\int_{0}^{1} \frac{x^3}{3x^2-3x+1} \mathrm{d} x=\int_{0}^{1} \frac{x^3}{x^3+(1-x)^3} \mathrm{d} x=\frac{1}{2}$$ or $$\int_{0}^{1}\frac{x^5}{5x^4-10x^3+10x^2-5x+1}\mathrm{d}x=\frac{1}{2}$$ are both good examples of how this property can be used. We can use this property to calculate these complicated-looking integrals in a few seconds. If we were not to use this property, we would have to evaluate things like $$\int \frac{x^5}{5x^4-10x^3+10x^2-5x+1}\mathrm{d} x,$$ which ends up being considerably more complicated. In general, we have the property $$\int_{0}^{1} \frac{x^{2n+1}}{\sum_{k=1}^{2n+1}\binom{2n+1}{k} (-x)^{2n+1-k}}\mathrm{d}x=\frac{1}{2},$$ since the denominator is just the binomial expansion of $x^{2n+1}+(1-x)^{2n+1}$.
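Both families of examples are quick to confirm numerically; a closing sketch (requires scipy):

```python
import numpy as np
from scipy.integrate import quad

# the rational examples: exactly 1/2 by the reflection x -> 1 - x
for n in (3, 5, 7):
    val, _ = quad(lambda x, n=n: x**n / (x**n + (1 - x)**n), 0, 1)
    print(n, val)  # 0.5 each time

# the classic exam integral equals pi/4 for every positive integer n
val, _ = quad(lambda x: np.sin(x)**4 / (np.sin(x)**4 + np.cos(x)**4),
              0, np.pi / 2)
print(val, np.pi / 4)  # both about 0.7853981
```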