TITLE: General Principles of Solving Radical Equations QUESTION [12 upvotes]: What are the general ways to solve radical equations similar to questions like $\sqrt{x+1}+\sqrt{x-1}-\sqrt{x^2 -1}=x$ $\sqrt{3x-1}+\sqrt{5x-3}+\sqrt{x-1}=2\sqrt2$$\sqrt{\frac{4x+1}{x+3}}-\sqrt{\frac{x-2}{x+3}}=1$ Are there just a few known ways to solve them? How do you know the best way to solve such questions? I have trouble with a lot of square root equations, and when I ask them on this site, I get good answers, but for one question. I was wondering if there were any general principles of solving such questions. REPLY [12 votes]: One rather general strategy is to replace each new root $\sqrt[k]{expression}$ in the equation by a new variable, $r_j$, together with a new equation $r_j^k = expression$ (so now you will have $m+1$ polynomial equations in $m+1$ unknowns, where $m$ is the number of roots). Then eliminate variables from the system, ending with a single polynomial equation in one unknown, such that your original variable can be expressed in terms of the roots of this polynomial. This procedure can introduce spurious solutions if you only want the principal branch of the $k$'th root, so don't forget to check whether the solutions you get are valid. For example, in your second equation, we get the system $$ \eqalign{r_1 + r_2 + r_3 - 2 \sqrt{2} &= 0\cr r_1^2-(3x-1) &= 0\cr r_2^2-(5x-3) &= 0\cr r_3^2-(x-1) &=0\cr}$$ Take the resultant of the first two polynomials with respect to $r_1$, then the resultant of this and the third with respect to $r_2$, and the resultant of this and the fourth with respect to $r_3$. We get $$ 121 x^4-4820 x^3+28646 x^2-45364 x+21417$$ which happens to factor as $$ \left( x-1 \right) \left( x-33 \right) \left( 121\,{x}^{2}-706\,x+ 649 \right) $$ However, only the solution $x=1$ turns out to satisfy the original equation.<|endoftext|> TITLE: Ideal of 8 general points in $\mathbb{P}^2$ QUESTION [6 upvotes]: I am working through chapter 3 of Eisenbud's Geometry of Syzygies. In the first example he makes the claim that the ideal of 8 general points in $\mathbb{P}^2$ is generated by two cubics and a quartic. Q: How can I see that this is the case? I am familiar with Bezout's theorem and the Cayley-Bacharach theorem, but I have never done one of these types of arguments on my own before. REPLY [2 votes]: The linear forms, quadratics, cubics, and quartics on the plane are vector spaces of dimensions 3,6,10,15 respectively. If we choose each point P generally, imposing the extra condition of vanishing at P will decrease each of the dimensions above by 1 (unless the dimension is already 0). If this helps, we are just saying that, if we have chosen $P_1$ through $P_i$, we can choose the next point to not lie on the common zero locus of the degree d forms that vanish on our first $i$ points, unless the zero polynomial is the only such form of degree d. Therefore, after choosing 8 general points, the dimensions are now 0,0,2,7. So there are two cubics that vanish on the 8 points and one additional quartic that is not generated by the 2 cubics. The cubics will intersect in a subscheme of length 9. If we add the quartic, we must decrease the length, so the length must become 8, so we have found a generating set.<|endoftext|> TITLE: Intuitively what is the second directional derivative? QUESTION [7 upvotes]: I'm thinking that the second directional derivative, if both dd's are evaluated in the same direction, will just give you the concavity (the second scalar derivative) in that direction. 
Is that right? But what if the second directional derivative is evaluated in a different direction? As in $D_{\vec v}D_{\vec u} f(\vec x)$ where $\vec v\ne \vec u$. Then what would this thing mean? Does it still have to do with concavity? REPLY [2 votes]: The answer to this question rests in linear algebra. If $f$ is twice continuously differentiable (just to avoid pathological things) then we have that partial derivatives of $f$ commute. I'll skip the proof, but the same goes for the directional derivatives of $f$. We have $D_u(D_v f) = D_v(D_u(f))$. Furthermore, if $u,v$ are linearly independent then they form a basis for $\mathbb{R}^2$ and we can study the behavior of the quadratic term in the multivariate Taylor expansion in terms of the theory of quadratic forms. In particular, $$ Q(h,k) = [h,k]\left[ \begin{array}{cc} D_uD_uf & D_uD_v f \\ D_uD_v f & D_vD_v f \end{array} \right]\left[ \begin{array}{c}h \\ k \end{array} \right] = \lambda_1\bar{x}^2+\lambda_2\bar{y}^2$$ where $\bar{x}, \bar{y}$ are eigencoordinates and $\lambda_1, \lambda_2$ are the eigenvalues of $A = \left[ \begin{array}{cc} D_uD_uf & D_uD_v f \\ D_uD_v f & D_vD_v f \end{array} \right]$. Using linear algebra and the specific form of $A$, $$ \text{trace}(A) = \lambda_1+\lambda_2 = D_uD_uf+D_vD_vf $$ and $$ \text{det}(A) = \lambda_1\lambda_2 = (D_uD_uf)(D_vD_vf)-(D_uD_vf)^2$$ If $\nabla f =0$ at a point then $f(p+(h,k)) = f(p) + Q(h,k)+ \cdots $ which means that the nature of $z = f(x,y)$ near $p$ is totally governed by $Q$. It's easy to see that if $\lambda_1\lambda_2 <0$ then $f$ is both increasing and decreasing near $p$, hence we face a saddle point. Specifically, in terms of directional derivatives, $(D_uD_uf)(D_vD_vf)-(D_uD_vf)^2 < 0$ provides $\text{det}(A)<0$ and hence the eigenvalues differ in sign. If $\lambda_1, \lambda_2>0$ then $f$ increases as we travel away from $p$; this is manifest from the formula $\lambda_1\bar{x}^2+\lambda_2\bar{y}^2$ for $Q$. This requires $(D_uD_uf)(D_vD_vf)-(D_uD_vf)^2 > 0$ and $\text{trace}(A) >0$, but, if you think about it, given $\text{det}(A)>0$ we have that $D_uD_u f>0$ implies $D_vD_v f>0$. In short, if $(D_uD_uf)(D_vD_vf)-(D_uD_vf)^2 > 0$ and $D_uD_u f>0$ then $f$ has a local minimum at $p$. If $\lambda_1, \lambda_2<0$ then $f$ decreases as we travel away from $p$; this is manifest from the formula $\lambda_1\bar{x}^2+\lambda_2\bar{y}^2$ for $Q$. This requires $(D_uD_uf)(D_vD_vf)-(D_uD_vf)^2 > 0$ and $\text{trace}(A) < 0$, but, if you think about it, given $\text{det}(A)>0$ we have that $D_uD_u f<0$ implies $D_vD_v f<0$. In short, if $(D_uD_uf)(D_vD_vf)-(D_uD_vf)^2 > 0$ and $D_uD_u f<0$ then $f$ has a local maximum at $p$. At this point, you may complain: I meant to study $f: \mathbb{R}^n \rightarrow \mathbb{R}$; what is the significance of mixed directional derivatives in that context for $n>3$? It's more complicated: to give the complete picture, I need $n$ linearly independent directions and all $n(n+1)/2$ independent mixed directional derivatives. If I have all that data then I can construct the $n \times n$ analog of $A$ and again characterize the nature of the graph $x_{n+1} = f(x_1, \dots , x_n)$ at a critical point in terms of the eigenvalues of $A$ (assuming $A \neq 0$; in the case $A=0$ we'd need higher derivative data...) Another way to look at my answer is this: the mixed directional derivatives also have to do with concavity. The eigenvalues tell us the concavity of the sections of the graph in the direction of eigenvectors.
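(A concrete illustration, which is my own addition and not part of the answer above: take $f(x,y)=xy$ at the origin with $u=(1,0)$ and $v=(0,1)$. Then $D_uD_uf=D_vD_vf=0$ while $D_uD_vf=1$, so $\text{det}(A)=(D_uD_uf)(D_vD_vf)-(D_uD_vf)^2=-1<0$ and the origin is a saddle point. All of the second-order information is carried by the mixed directional derivative here, since the sections of the graph along $u$ and $v$ themselves are flat; the eigenvector directions are the diagonals $(1,\pm1)/\sqrt2$, along which the second derivatives are $+1$ and $-1$.)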
If the point is critical, then that concavity reveals the extremal nature of the point.<|endoftext|> TITLE: Application of Combinatorics/Graph Theory to Organic Chemistry? QUESTION [9 upvotes]: Recently, I have been self-teaching graph theory and having an organic chemistry course at school. When I was learning isomer enumeration I found great resemblance between organic molecules and graphs. Every atom can be regarded as a vertex, with carbon vertices of $4$ degree, hydrogen atoms of $1$ degree etc. On the whole they just constitute a loopless yet usually not simple graph! While I am excited about this idea, I am unclear about how to apply graph theory (and some combinatorial techniques) to chemistry. Am I correct? Are there really applications of combinatorics or graph theory to organic chemistry, particularly isomer enumeration? If so how? Are there any books or resources from where I can learn about this amazing idea? REPLY [8 votes]: Yes there are quite a lot actually, see for example Trinajstic's book Chemical Graph Theory. http://docdro.id/L3DrG6B or https://www.scribd.com/doc/313337501/Chemical-Graph-Theory-Trinajstic To list some of the topics discussed in the book verbatim, we have for example: Kekule structures Hückel theory the conjugated circuit model Zagreb group Cayley generating function for enumerating isomers Actually most (if not all) applications in that book seem to be exclusively for organic chemistry, so you have quite a lot to choose from.<|endoftext|> TITLE: How to compute $(1 \cdot 3 \cdot 5 \cdots 97)^2 \pmod {101}$ QUESTION [5 upvotes]: How to compute $(1 \cdot 3 \cdot 5 \cdots 97)^2 \pmod {101}$ in easiest and fastest way? REPLY [7 votes]: I guess the only tool you currently have is the Wilson's Theorem, which says that if $p$ is a prime then $$(p-1)!\equiv-1(\mathrm{mod}~p)$$ Notice that $101$ is a prime, and $1\equiv-100(\mathrm{mod}~101),~3\equiv-98(\mathrm{mod}~101),\cdots$. Hence, $$(1\cdot3\cdot5\cdots97)^{2}\equiv(1\cdot3\cdot5\cdots97)(1\cdot3\cdot5\cdots97)\equiv(1\cdot3\cdot5\cdots97)(-100\cdot-98\cdot-96\cdots-4)(\mathrm{mod}~101)$$ Now can you finish up?<|endoftext|> TITLE: Positive integers $a,b$ satisfying $a^3+a+1=3^b$ QUESTION [15 upvotes]: How to prove that $a=b=1$ is the only positive integer solution to the following Diophantine equation?$$a^3+a+1=3^b$$ REPLY [4 votes]: Yes, $a=b=1$ is the only solution. As Mike Bennett suggests in the comments, this can be proved by asking more generally for integer points on the elliptic curves $y^2 = x^3 + x + 1$ [for $b$ even, with $(x,y) = (a,3^{b/2})$] and $3y^2 = x^3+x+1$ [for $b$ odd, with $(a,y) = (a,3^{(b-1)/2})$]. Siegel's theorem shows that there are only finitely many solutions; his proof is ineffective, but since then new techniques have been developed and implemented to provably list all solutions. As Gerry Myerson suggests in his comment, these curves are simple enough that the computation of their integer points has already been done. These days the most extensive online databases for such results are collected in the LMFDB, so I looked there. The first curve, $y^2 = x^3 + x + 1$, is 496.a1; there are two pairs of integral points, $(0,\pm 1)$ and $(72,\pm 611)$. Only the first makes $y$ a power of $3$, and gives the solution $(a,b)=(0,0)$, which is integral, but not positive as Ghartal asks. 
The second curve, $3y^2 = x^3+x+1$, is not in Weierstrass form, but we can take $X=3x$ and find that $$ X^3+9X+27 = 27(x^3+x+1) = 81y^2 = (9y)^2, $$ so $(3x,9y)$ is an integral point on the curve $Y^2 = X^3 + 9X + 27$. This is curve 2232.j1 in the LMFDB; its only integral points are $(X,Y) = (-2,\pm1)$ and $(3,\pm9)$. The former yields $(a,b) = (-2/3,-3)$, a rational solution but again not satisfying Ghartal's condition that $a,b$ be positive integers. The latter yields the known solution $(a,b)=(1,1)$. This completes the determination of solutions of $a^3+a+1 = 3^b$ in positive integers. It may be quite hard to give an elementary proof; e.g. once we have the nontrivial solution with $b=1$ it's impossible to exclude further solutions by congruence conditions alone (congruences modulo powers of $3$ only impose conditions on $a$ modulo powers of $3$, which cannot combine with congruences to moduli coprime with $3$ to give a contradiction).<|endoftext|> TITLE: Showing that the Hopf fibration is a non-trivial fibre bundle QUESTION [6 upvotes]: I want to show that the Hopf bundle $$ \mathbb{S}^1 \rightarrow \mathbb{S}^3 \rightarrow \mathbb{S}^2$$ is non-trivial as a principal fibre bundle. I have seen hints of several different approaches: Hopf's original approach, using linking numbers, see Hopf fibration and $\pi_3(\mathbb{S}^2)$. Hopf invariant. Cohomology. I want to keep it simple. My gut says cohomology is my best bet (I am slightly familiar with de Rham cohomology). Unfortunately, I am having trouble finding sources that treat this on my level (without a lot of general theory I think I don't need). I have the following books: The Topology of Fibre Bundles, Steenrod. Fibre Bundles, Husemoller. Manifolds and Differential Geometry, Jeffrey Lee. What is the bare minimum of theory I need to get to this result? What route would you advise? I am an undergraduate working on a BSc thesis. REPLY [11 votes]: The easier way to show that is to remark that $\pi_1(S^3)=1$ and $\pi_1(S^2\times S^1)=\mathbb{Z}$.<|endoftext|> TITLE: Expected number of rolls to get all colors on 6-sided die colored in with 3 colors QUESTION [6 upvotes]: If I have a die that has 3 red sides, 2 blue sides, and 1 green side, how many rolls do I expect until every color has appeared at least once? I have run some tests and I’m getting numbers around 7.31, but clearly I’m looking for a mathematical solution. Thanks in advance. REPLY [6 votes]: Label the possible states of the game according to which colors have previously been seen. Thus we consider $8$ possible states $\{r,b,g\}$ where each symbol can be either $0$ or $1$ according to whether the associated color has been seen. Similarly, we denote by $E[r,b,g]$ the expected number of throws it will take to finish from the state $\{r,b,g\}$. The answer you want is $E=E[0,0,0]$. We'll proceed by backwards induction, starting with the observation that $E[1,1,1]=0$. I. Missing one color. Say we are in state $\{1,1,0\}$, so we are only missing Green. We throw the die. We finish if we get a Green, probability $\frac 16$. With probability $\frac 56$ we stay in the state $\{1,1,0\}$. Thus $$E[1,1,0]=\frac 16\times 1+\frac 56\times (E[1,1,0]+1)\implies E[1,1,0]=6$$ Similarly, $$E[1,0,1]=\frac 26\times 1+\frac 46\times (E[1,0,1]+1)\implies E[1,0,1]=3$$ And $$E[0,1,1]=\frac 12\times 1+\frac 12\times (E[0,1,1]+1)\implies E[0,1,1]=2$$ II. Missing two colors. Say we are in state $\{1,0,0\}$.
As before we roll the die and see that $$E[1,0,0]=\frac 26\times (E[1,1,0]+1)+\frac 16\times (E[1,0,1]+1)+\frac 36\times(E[1,0,0]+1)$$ $$=\frac {14}6+\frac {4}6+\frac 36\times(E[1,0,0]+1)\implies E[1,0,0]=7$$ Similarly, we get: $$E[0,1,0]=\frac {26}4\;\;\&\;\;E[0,0,1]=\frac {19}5$$ Finally, $$E=E[0,0,0]=\frac 36\times (E[1,0,0]+1)+\frac 26\times (E[0,1,0]+1)+\frac 16\times (E[0,0,1]+1)=7.3$$<|endoftext|> TITLE: Show that $Z(G) = \cap_{a \in G} C(a)$ QUESTION [5 upvotes]: Show that $Z(G) = \cap_{a \in G} C(a)$. Let $a \in Z(G)$. Then $ax=xa$ for all $x$ in $G$. In particular we can say that $ax_1=x_1a$ and $ax_2=x_2a$ and $ax_3=x_3a$ and so on ($x_i$ are elements of $G$). This is nothing but intersection of all subgroups of form $C(a)$. However I doubt my way of doing this question. Please guide me Thanks REPLY [2 votes]: I tried to write a proof in the style you're going for. Suppose $x\in Z(G)$. Then, $xa=ax$ for all $a\in G$, and hence, $x\in C(a)$ for all $a\in G$. Thus, $x\in\bigcap_{a\in G}C(a)$. Conversely, suppose $x\in\bigcap_{a\in G}C(a)$ for all $a\in G$. Then, $xa=ax$ for all $a\in G$, so that $x\in Z(G)$. By mutual inclusion, $Z(G)=\bigcap_{a\in G}C(a)$.<|endoftext|> TITLE: Prove that $2^n+3^n $ is never a perfect square QUESTION [11 upvotes]: My attempt : If $n$ is odd, then the square must be 2 (mod 3), which is not possible. Hence $n =2m$ $2^{2m}+3^{2m}=(2^m+a)^2$ $a^2+2^{m+1}a=3^{2m}$ $a (a+2^{m+1})=3^{2m} $ By fundamental theorem of arithmetic, $a=3^x $ $3^x +2^{m+1}=3^y $ $2^{m+1}=3^x (3^{y-x}-1) $ Which is not possible by Fundamental theorem of Arithmetic Is there a better method? I even have a nagging feeling this is wrong as only once is the equality of powers of 2 and 3 $(n) $are used. REPLY [2 votes]: Yet another approach. For the odd case of $n$, as you mentioned the result is $2\mod 3$ which cannot be a square number. As for the even case $n=2m$, we have $2^{2m}+3^{2m}\mod 10$ being equal to $2$ (for when $m=0$) or either $3$ (for $n=2+4k,k\geq0$) or $7$ (for $n=4k,k\geq1$), which clearly cannot be square (as a square number modulus $10$ takes values $0,1,4,9,6,5$).<|endoftext|> TITLE: The reason behind the definition of manifold QUESTION [8 upvotes]: I was going thorough the definition of a manifold and needless to say it wasn't something that I could digest at one go. Then I saw the following Quora link and Qiaochu's illustrative answer. It was great to see the motivation behind the concept of manifold. Then I looked at the definition of manifold that I have at my disposal which is the following: A topological space $M$ is an $n$-dimensional real manifold if there is a family of subsets $U_\alpha$, $\alpha \in A$, of $\mathbb{R}^n$ and a quotient map $f \colon \coprod_\alpha U_\alpha \to M$ such that $f|_{U_\alpha}$ is a homeomorphism onto the image for all $\alpha$. I understand that we are trying to conceptualize about a bigger space which when looked at a very small region looks like something else (a euclidean space) thereby giving a possibly incorrect bigger picture about the shape. Now what was the reason behind introducing disjoint union in this definition. Can someone help me to get in terms with this idea? I went through this related question and Samuel's brilliant answer to it. Leaving aside the points about Haussdorff and second countable spaces I could draw that homeomorphism is the concept that we use to convey the similarity between two spaces. 
I can loosely convince myself that the existence of homeomorphism between two spaces means the similarity in the pattern of open sets in the two spaces. (I might not be expressing what I feel about it.) But still then can anyone make it a bit more elaborate as to why we use homeomorphism here? If we divided this bigger surface, let's say earth into smaller circles, then I can see that we wouldn't have gotten the local shape same everywhere, somewhere it would have been a circle and some other points it would have been an area in between four circles or maybe something else. But are these local shapes homeomorphic to $\mathbb{R}^2$? What are other shapes which wouldn't have been homeomorphic to $\mathbb{R}^2$? I guess "What happens when two spaces are homeomorphic?" would be a good way to start the discussion. REPLY [7 votes]: "Being homeomorphic" is a very strong condition for spaces - it means that we have a bijection between the underlying sets of the spaces that is continuous, and whose inverse is continuous. In particular, you also get a bijection between the open sets of the two spaces. We can therefore consider two spaces that are homeomorphic to be "essentially the same" for the purposes of topology. So what goes wrong with your "Earth" analogy? I suspect when you say "circle" you mean "disc" (a circle is e.g. $\{x \in \mathbb R^2 \: | \: |x| = 1\}$ while a(n open) disc is e.g. $\{x \in \mathbb R^2 \: | \: |x| < 1\}$). Circles aren't open in the normal topology on a sphere, so we definitely want to be thinking about discs. You say "some other points it would have been an area in between four [disks] or maybe something else" - this is true, but when dealing with a manifold we're always careful to make sure that the open disks we choose completely cover the manifold. This means that every point $p$ is contained in some $U_\alpha$. I think part of the confusion is that the definition you've given isn't (to my mind) the most natural. In my opinion the nicest way to approach it is as follows: a manifold is a topological space $M$, and a collection of open sets $\{U_\alpha : \alpha \in A\}$ that cover $M$ (so given $p \in M$ we can find some $\alpha$ with $p \in U_\alpha$), and a corresponding collection $\{V_\alpha : \alpha \in A\}$ of open subsets of $\mathbb R^n$, and a collection of homeomorphisms $\phi_\alpha: U_\alpha \to V_\alpha$. (There are also the hypotheses about Hausdorff, second countable etc., but these aren't key for understanding the idea of a manifold.) This definition is equivalent to the one you gave, but it makes it clearer that what we're caring about is that at every point $p$, the manifold $M$ locally looks like a subset of $\mathbb R^n$. I'm not sure if that completely answers your questions, feel free to comment if there was something more specific that was confusing you. REPLY [4 votes]: The definition you've given is using a disjoint union as a sort of shorthand for a collection of maps. After all, one map $f: \coprod_\alpha U_\alpha \to M$ is equivalent to a collection of maps $\{f_\alpha: U_\alpha \to M\}$ where $f_\alpha = f|_{U_\alpha}$. The $f_\alpha$ are the charts for the manifold which identify an open subset of $M$ with an open subset of $\Bbb R^n$. 
The definition of manifold which I'm more familiar with is the following: A topological space $M$ is an $n$-dimensional manifold if there exist open subsets $U_\alpha \subseteq \Bbb R^n$, $V_\alpha \subseteq M$, and homeomorphisms $f_\alpha: U_\alpha \to V_\alpha$ such that $\bigcup_\alpha V_\alpha = M$ (i.e. every point in $M$ is in some chart $V_\alpha$). We also assume that $M$ is second countable and Hausdorff. Personally, I find this definition to be a bit more transparent. It's easier to visualize what this definition is saying. Now, for your question about what a homeomorphism really means. Intuitively, two spaces are homeomorphic if they are completely equivalent for the purposes of topology. Any topological construction you make, or topological property you prove about one space will hold for a homeomorphic space. They really are exacly the same. More technically, a homeomorphism $f: X \to Y$ is firstly a bijection of sets, so it shows that if you start with $X$ and just relabel the points, then you get $Y$, as a set. More than that though, $f$ also induces a bijection on the sets of open subsets. Let $\tau_X$ be the topology on $X$, i.e. the set of all open subsets of $X$. Likewise for $\tau_Y$. Then we have a bijection $f: \tau_X \to \tau_Y$ given by $U \mapsto f(U)$. Therefore, the homeomorphism $f$ shows that if you start with $X$ and relabel its open subsets, you end up with the open subsets of $Y$. So if all you care about is the collection of open subsets of a space (i.e. you're doing topology), then homeomorphic spaces only differ by how you labeled the points and open subsets (which obviously doesn't make any tangible difference).<|endoftext|> TITLE: Hilbert function and Hilbert polynomial QUESTION [7 upvotes]: I have largely studied Hilbert function and Hilbert polynomial for polynomial rings over fields of characteristic zero. Is it possible to extend the theory also for polynomial rings over fields of characteristic positive? Thank you REPLY [3 votes]: Yes: for a finitely generated $k$-algebra, for instance, you can still take your additive function $\lambda$ to be $\dim_k$, and you get that the Hilbert series $\mathcal P(V,t) = \sum_{i \geq 0} \lambda(V_i)t^i$ is a rational function. See e.g. http://tartarus.org/gareth/maths/notes/iii/Commutative_Algebra_2013.pdf, p. 
31 onwards, for an approach to it.<|endoftext|> TITLE: Evaluation of $\int_{0}^{1}\frac{\ln x}{x^2-x-1}dx$ QUESTION [8 upvotes]: Evaluation of $\displaystyle \int_{0}^{1}\frac{\ln x}{x^2-x-1}dx$ $\bf{My\; Try::}$ Let $\displaystyle I = \int_{0}^{\infty}\frac{\ln x}{x^2-x-1}dx=\int_{0}^{\infty}\frac{\ln(x)}{\left(x-\frac{1}{2}\right)^2-\left(\frac{\sqrt{5}}{2}\right)^2}dx$ Now Put $\displaystyle \left(x-\frac{1}{2}\right)=\frac{\sqrt{5}}{2}\sec \theta\;,$ Then $\displaystyle dx = \frac{\sqrt{5}}{2}\sec \theta \tan \theta$ So $$ I= -\frac{2}{\sqrt{5}}\int_{-\frac{1}{2}}^{\infty}\frac{\ln(\sqrt{5}\sec \theta+1)-\ln(2)}{\tan \theta}\cdot \sec \theta d\theta$$ So $$I = \frac{2}{\sqrt{5}}\int_{-\frac{1}{2}}^{\infty}\frac{\ln(2)+\ln(\cos \theta)-\ln(\sqrt{5}+\cos \theta)}{\sin \theta} d\theta$$ Now How can I solve after that, Help Required, Thanks REPLY [6 votes]: So, by the comment of the OP, the integral we have to study is $$\int_{0}^{1}\frac{\log\left(x\right)}{x^{2}-x-1}dx.$$ We note that $$\begin{align} I= & \int_{0}^{1}\frac{\log\left(x\right)}{x^{2}-x-1}dx \\ = & \int_{0}^{1}\frac{\log\left(x\right)}{\left(x-\frac{1-\sqrt{5}}{2}\right)\left(x-\frac{1+\sqrt{5}}{2}\right)}dx \\ = & -\frac{2}{\sqrt{5}}\left(\int_{0}^{1}\frac{\log\left(x\right)}{2x+\sqrt{5}-1}dx-\int_{0}^{1}\frac{\log\left(x\right)}{2x-\sqrt{5}-1}dx\right) \\ = & -\frac{2}{\sqrt{5}}\left(I_{1}-I_{2}\right), \end{align}$$ say. Let us analyze $I_{1}$. Integrating by part we get $$I_{1}=\int_{0}^{1}\frac{\log\left(x\right)}{2x+\sqrt{5}-1}dx=\frac{1}{\sqrt{5}-1}\int_{0}^{1}\frac{\log\left(x\right)}{\frac{2x}{\sqrt{5}-1}+1}dx=-\frac{1}{2}\int_{0}^{1}\frac{\log\left(\frac{2x}{\sqrt{5}-1}+1\right)}{x}dx $$ $$=\frac{1}{2}\textrm{Li}_{2}\left(-\frac{2}{\sqrt{5}-1}\right) $$ where $\textrm{Li}_{2}(x)$ is the Dilogarithm function. In a similar way we can find that $$I_{2}=\int_{0}^{1}\frac{\log\left(x\right)}{2x-\sqrt{5}-1}dx=\frac{1}{2}\textrm{Li}_{2}\left(\frac{2}{\sqrt{5}+1}\right) $$ so $$I=\frac{\textrm{Li}_{2}\left(\frac{2}{\sqrt{5}+1}\right)-\textrm{Li}_{2}\left(-\frac{2}{\sqrt{5}-1}\right)}{\sqrt{5}} $$ and since holds $$\textrm{Li}_{2}\left(\phi^{-1}\right)=\frac{1}{10}\pi^{2}-\log^{2}\left(\phi\right),\,\textrm{Li}_{2}\left(-\left(\phi-1\right)^{-1}\right)=-\frac{1}{10}\pi^{2}-\log^{2}\left(\phi\right) $$ where $\phi=\frac{\sqrt{5}+1}{2} $ is the golden ratio we have $$I=\frac{\pi^{2}}{5\sqrt{5}}.$$<|endoftext|> TITLE: Combinatorial proof of a certain alternating sum of binomial coefficients QUESTION [6 upvotes]: The following identity appeared as a question earlier today $$\displaystyle\sum\limits_{k=0}^n (-1)^k\binom{m+1}{k}\binom{m+n-k}{n-k} = \begin{cases} 1\ \text{if}\ n=0 \\ 0\ \text{if}\ n>0 \end{cases}$$ and was given an algebraic solution. Is there a combinatorial proof? REPLY [5 votes]: Suppose that you want to choose $n$ integers from the set $[m+n]$ in such a way that you do not include any integer less than or equal to $m+1$. If $n=0$ this is possible in exactly one way: you choose the empty set. If $n>0$, it’s not possible, since only the $n-1$ integers in $[m+n]\setminus[m+1]$ are available to be chosen. For each $k\in[m+1]$ let $\mathscr{A}_k$ be the set of $n$-element subsets of $[m+n]$ that contain $k$; we want the number of $n$-element subsets of $[m+n]$ that are not in $\bigcup_{k\in[m+1]}\mathscr{A}_k$. 
Clearly $$\left|\bigcap_{k\in I}\mathscr{A}_k\right|=\binom{m+n-|I|}{n-|I|}$$ whenever $\varnothing\ne I\subseteq[m+1]$, so by the inclusion-exclusion principle we have $$\begin{align*} \left|\bigcup_{k\in[m+1]}\mathscr{A}_k\right|&=\sum_{\varnothing\ne I\subseteq[m+1]}(-1)^{|I|-1}\left|\bigcap_{k\in I}\mathscr{A}_k\right|\\ &=\sum_{\varnothing\ne I\subseteq[m+1]}(-1)^{|I|-1}\binom{m+n-|I|}{n-|I|}\\ &=\sum_{k=1}^{m+1}(-1)^{k-1}\binom{m+1}k\binom{m+n-k}{n-k}\;, \end{align*}$$ and hence the number of $n$-element subsets of $[m+n]$ that are not in $\bigcup_{k\in[m+1]}\mathscr{A}_k$ is $$\begin{align*} \binom{m+n}n&-\sum_{k=1}^{m+1}(-1)^{k-1}\binom{m+1}k\binom{m+n-k}{n-k}\\ &=\binom{m+n}n+\sum_{k=1}^{m+1}(-1)^k\binom{m+1}k\binom{m+n-k}{n-k}\\ &=\sum_{k=0}^{m+1}(-1)^k\binom{m+1}k\binom{m+n-k}{n-k}\;. \end{align*}$$ Thus, $$\sum_{k=0}^{m+1}(-1)^k\binom{m+1}k\binom{m+n-k}{n-k}=\begin{cases} 1,&\text{if }n=0\\ 0,&\text{if }n>0\;. \end{cases}$$ The fact that I have $m+1$ instead of $n$ as the upper limit of summation is unimportant: in both versions we’re simply summing over all values of $k$ for which the terms are non-zero.<|endoftext|> TITLE: Is it bad to call series a generalization of sum? QUESTION [6 upvotes]: In a recent question I asked why series has a name separate from that of sum, and the general answer was that a series does not have the nice properties of sum. Does this mean it is bad to call series a generalization of sum? REPLY [9 votes]: Concept $X$ generalizes concept $Y$ if every instance of $Y$ is a special case of $X$, at least with regard to some relevant property of the concepts involved. The infinite series $\sum_{n=1}^{\infty}s_{n}$ is defined to be the limit of the partial sums of a sequence $(s_{n})$. Given a "finite summation", it can be thought of as an infinite series where, for some $N$, we have $s_{n}=0$ whenever $n>N$, in the sense that the infinite series will converge to the value of the finite summation (this is the "relevant property" in this case). Therefore the concept of an infinite series can be regarded as a valid generalization of the usual concept of summation, so the answer to your original question is no.<|endoftext|> TITLE: Check that $\lim\limits_{n\to\infty}\sum\limits_{i=1}^{n}\left(\frac{i+x}{n}\right)^n=\frac{e^{x+1}}{e-1}$ QUESTION [5 upvotes]: Show that $$\lim_{n\to\infty}\sum_{i=1}^{n}\left(\frac{i+x}{n}\right)^n=\frac{e^{x+1}}{e-1}$$ Any hints how I can tackle this problem? Although I checked on a sum calculator that it converges very slowly, this result gives me a reason that the proposed closed form is incorrect. Can anyone verify it? I know that $$\lim_{n\to\infty}\left(\frac{n+x}{n}\right)^n=e^x$$ The limit to be computed is $$\lim_{n\to\infty}\left(\frac{1+x}{n}\right)^n+\left(\frac{2+x}{n}\right)^n+\left(\frac{3+x}{n}\right)^n+\cdots$$ It looks like the natural number series $$1^n+2^n+3^n+4^n+\cdots$$ But this is the farthest I can go. REPLY [5 votes]: This solution uses the following facts: For every real number $t$, $\left(1+\frac{t}n\right)^n\to e^t$ when $n\to\infty$. For every real number $t$, $1+t\leqslant e^t$. For every real number $t\geqslant-\frac12$, $1+t\geqslant e^{t-t^2}$. 
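Only the last of these three facts is perhaps not completely standard, so here is a quick verification (my own addition, not part of the original answer): for $t\geqslant-\frac12$ put $g(t)=\log(1+t)-t+t^2$; then $g(0)=0$ and $$g'(t)=\frac{1}{1+t}-1+2t=\frac{t(1+2t)}{1+t},$$ which is nonpositive on $[-\frac12,0]$ and nonnegative on $[0,\infty)$. Hence $g$ attains its minimum at $t=0$, so $g\geqslant0$, which is exactly $1+t\geqslant e^{t-t^2}$.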
One is asking to prove that the limit of $$S_n(x)=\sum_{i=1}^{n}\left(\frac{i+x}n\right)^n=\sum_{k=0}^{n-1}\left(1+\frac{x-k}n\right)^n$$ when $n\to\infty$, exists and equals $$s(x)=\sum_{k=0}^{\infty}e^{x-k}=\frac{e^{x+1}}{e-1}.$$ To prove this, first note that $$S_n(x)=S_n(x-1)+\left(1+\frac{x}n\right)^n-\left(\frac{x}n\right)^n,$$ hence, for every $x$, $$\lim_{n\to\infty}\ (S_n(x)-S_n(x-1))=e^x.$$ Next, assume that $x\geqslant0$. Then, the bound $1+t\leqslant e^t$, valid for every $t$, and the fact that $1+\frac{x-k}n\geqslant0$ for every $k$ in the second sum above defining $S_n(x)$, yield $$S_n(x)\leqslant\sum_{k=0}^{n-1}\left(e^{(x-k)/n}\right)^n=\sum_{k=0}^{n-1}e^{x-k}\leqslant s(x).$$ Likewise, pick some $a$ in $(0,1)$ and assume that $n$ is large enough for $n^{1-a}\geqslant2$ to hold. Then, the bound $1+t\geqslant e^{t-t^2}$, valid for every $t\geqslant-\frac12$, and the fact that $1+\frac{x-k}n\geqslant0$ and that $\frac{x-k}n\geqslant-\frac12$ for every $k\leqslant n^a$, together yield $$S_n(x)\geqslant\sum_{k=0}^{n^a}\left(e^{(x-k)/n-(x-k)^2/n^2}\right)^n=\sum_{k=0}^{n^a}e^{x-k-(x-k)^2/n}\geqslant e^{-n^{2a-1}}\sum_{k=0}^{n^a}e^{x-k}=e^{-n^{2a-1}}s(x)\left(1-e^{-n^a}\right).$$ If $a$ is in $(0,\frac12)$, $e^{-n^{2a-1}}\to1$ and $e^{-n^a}\to0$, hence $S_n(x)\to s(x)$, thus the claim holds for every $x\geqslant0$. Finally, the claim holds for every $x$ because $$s(x)-s(x-1)=e^x=\lim_{n\to\infty}\ (S_n(x)-S_n(x-1)).$$<|endoftext|> TITLE: On the converse of Schur's Lemma QUESTION [10 upvotes]: Let $G$ be a finite group and $F$ a field with $\mathrm{char}(F)=0$ or coprime to $|G|$. Let $V$ be a $FG$-module in a way that every $ FG$-homomorphism $ f : V \to V $ is given by $f(x)= \lambda x $. Then $V$ is irreducible. I already managed to proof this by contradiction, using Maschke's Theorem. We can write $V$ as $U\oplus W$ and get a $FG$-homomorphism $\pi:U\oplus W \to U\oplus W$, $\pi (u+w)=u$, then $\pi(u+w)=\lambda (u+w)=u $ then either $U$ or $W$ are trivial and thus $V$ is irreducible. My question is if this statements still holds if the characterictic of $F$ divides the order of $G$, since I only used this fact in my proof to use Maschke's theorem. REPLY [8 votes]: There are counterexamples in general. Suppose $V$ is a non-split extension of two non-isomorphic simple modules. I.e., $V$ has a unique simple submodule $W$ with simple quotient $U=V/W$ where $U\not\cong W$. Then any non-zero endomorphism of $V$ is an isomorphism, since the only possible kernel is $W$, but the image can't be $V/W=U$ since $V$ has no submodule isomorphic to $U$. If, further, $U$ and $W$ have $F$ as endomorphism ring, this isomorphism must be a scalar multiple of the identity. For an explicit example, let $G$ be the group of invertible upper triangular $2\times 2$ matrices over a finite field $F$ with $\vert F\vert>2$, and let $V$ be the natural $2$-dimensional $FG$-module.<|endoftext|> TITLE: Detail in the proof that sheaf cohomology = singular cohomology QUESTION [17 upvotes]: Theorem: If $X$ is locally contractible, then the singular cohomology $H^k(X,\mathbb{Z})$ is isomorphic to the sheaf cohomology $H^k(X, \underline{\mathbb{Z}})$ of the locally constant sheaf of integers on $X$. In proving this, I have seen several sources proceed more or less the same way: Let $\tilde{\mathcal{S}}^k$ be the sheafification of the presheaf $\mathcal{S}^k$ of singular cochains on $X$. Then $\underline{\mathbb{Z}} \to \tilde{\mathcal{S}}^0 \to \tilde{\mathcal{S}}^1 \to \dotsb$ is a complex of sheaves. 
It is exact because $X$ is locally contractible. The sheaves $\tilde{\mathcal{S}}^k$ can be shown to be flabby, and so acyclic. Therefore the homology of the complex $\tilde{\mathcal{S}}^\bullet(X)$ is just $H^\bullet (X, \underline{\mathbb{Z}})$. For each $k$, $\tilde{\mathcal{S}}^k(X) \cong \mathcal{S}^k(X)/\mathcal{S}^k(X)_0$, where $\mathcal{S}^k(X)_0 \subset \mathcal{S}^k(X)$ is those cochains $\sigma$ for which there exists an open cover $\mathcal{U}$ of $X$ so that $\sigma(s) = 0$ whenever $s$ is a singular simplex that lies completely in one of the sets of $\mathcal{U}$. $\mathcal{S}^\bullet(X) \to \mathcal{S}^\bullet(X)/\mathcal{S}^\bullet(X)_0$ is a homology equivalence. It is the second step where I am finding trouble**. Consider the sheafification map $\mathcal{S}^k \xrightarrow{\mathrm{shf}} \tilde{\mathcal{S}}^k$. I am comfortable with the fact that the kernel of $\mathcal{S}^k(X) \xrightarrow{\mathrm{shf}_X} \tilde{\mathcal{S}}^k(X)$ is $\mathcal{S}^k(X)_0$. But why should the map be surjective? Possible Reasons: The sheafification map is surjective on global sections if $X$ is paracompact and the presheaf satisfies the gluing axiom (proof here). Certainly $\mathcal{S}^k$ satisfies the gluing axiom. But $X$ is merely locally contractible, which doesn't imply paracompact. (A counterexample is a topological sum of uncountably many copies of $\mathbb{R}$.) (Edit: Eric Wofsey points out that this is not the right counterexample to choose. The long line works.) Consider the proof offered below (original source): I don't see how they conclude that $\tilde{\beta}_i(\alpha) = \tilde{\beta}_j(\alpha)$. We know that $\tilde{\beta}_i$ and $\tilde{\beta}_j$ have the same germs on $V_i \cap V_j$, but that doesn't imply they're the same cochain (because $\mathcal{S}^k$ isn't a monopresheaf). So far I haven't been able to figure it out. Anyone have any ideas? **Although in showing that $\tilde{\mathcal{S}}^k$ are flabby, it is assumed that the sheafification map is surjective, and that's what I have a problem with in part 2. REPLY [15 votes]: To elaborate on Dmitri Pavlov's comment, your questions have been answered in a recent paper by Yehonatan Sella. Sella give an example (Example 0.3) of a locally contractible space $X$ for which $\mathcal{S}^k(X) \xrightarrow{\mathrm{shf}_X} \tilde{\mathcal{S}}^k(X)$ is not surjective. Sella's space has $5$ points, and the argument that $\mathrm{shf}_X$ is not surjective heavily exploits the non-Hausdorff nature of the space. It remains unclear to me whether you can get a counterexample which is Hausdorff. Here is a different counterexample from the one Sella gives (but based on the same idea) which I think more clearly demonstrates what is going on geometrically. Take $X=\mathbb{R}\cup\{0'\}$ to be the line with two origins, so neighborhoods of $0'$ look like neighborhoods of $0$ except with $0$ replaced by $0'$. Define $1$-cochains $\beta_0$ on $U=X\setminus\{0'\}$ and $\beta_1$ on $V=X\setminus\{0\}$ as follows. Define $\beta_0=0$. Define $\beta_1(\alpha)=0$ for any $1$-simplex $\alpha$ unless $\alpha$ is the linear path from $\epsilon$ to $2\epsilon$ for some $\epsilon>0$, in which case $\beta_1(\alpha)=1$. Note that any point in $U\cap V=\mathbb{R}\setminus\{0\}$ has a neighborhood which does not contain $[\epsilon,2\epsilon]$ for any $\epsilon>0$, and so $\beta_0$ and $\beta_1$ agree locally on $U\cap V$. 
However, there is no $1$-cochain $\beta$ on $X$ which agrees with $\beta_0$ in a neighborhood of $0$ and which also agrees with $\beta_1$ on a neighborhood of $0'$, since such neighborhoods would contain $[\epsilon,2\epsilon]$ for all sufficiently small $\epsilon$. After showing this counterexample, Sella then gives an alternate proof that singular cohomology agrees with sheaf cohomology that only requires the space to be locally contractible (or actually even just semi-locally contractible, meaning that there is a basis of open sets $U$ such that the inclusion map $U\to X$ is nullhomotopic). The idea is that instead of taking the presheaf quotient $\mathcal{S}^k/\mathcal{S}^k_0$, which, as seen above, may not be a sheaf, you take a smaller presheaf quotient $\mathcal{C}^k$. This quotient $\mathcal{C}^k$ can be shown to be a sheaf, and still forms a flasque resolution of the constant sheaf and has the same cohomology on global sections as $\mathcal{S}^k$. Here's the motivation behind the construction, if I'm understanding it correctly. The reason that it's hard to prove that $\mathcal{S}^k/\mathcal{S}^k_0$ is a sheaf is that the equivalence relation on cochains is too weak. Two cochains are equivalent iff they agree when restricted to some neighborhood of each point. However, the restriction to a neighborhood of a point still involves plenty of simplices that don't pass through that point, and so these "local" conditions at different points interact with each other and are hard to simultaneously satisfy at all points. This is what's going on with the counterexample above: you can't glue $\beta_0$ and $\beta_1$ because the constraints of being locally equal to $\beta_0$ at $0$ and locally equal to $\beta_1$ at $0'$ interfere with each other. To fix this, Sella defines a stronger equivalence relation on cochains that makes the "local equality" at different points independent of each other. Slightly simplifying, he declares two cochains $\beta,\beta'\in\mathcal{S}^k(U)$ to be equivalent if for each $x\in U$ there is a neighborhood $V$ of $x$ such that $\beta(\alpha)=\beta'(\alpha)$ for all simplices $\alpha$ which are contained in $V$ and have barycenter $x$. If you left out the barycenter condition, this would just be equivalence mod $\mathcal{S}^k(U)_0$. Note that this barycenter condition means that to satisfy this local condition at a point $x$, you only care about simplices with barycenter $x$, which do not affect the local condition at any other point. Sella then defines $\mathcal{C}^k(U)$ to be the quotient of $\mathcal{S}^k(U)$ by this equivalence relation (actually, he uses a more complicated equivalence relation in order to make this construction compatible with the coboundary operation on cochains, but never mind that). Because of the independence of the local conditions at different points, it is easy to show that $\mathcal{C}^k$ is a sheaf (to glue compatible sections, just define a cochain whose value on simplices with barycenter $x$ is given by the germ of one of your sections at $x$). It is then not hard to show that these sheaves form a flasque resolution of the constant sheaf. The hard part is to show that $\mathcal{C}^\bullet(X)$ still has the same cohomology as $\mathcal{S}^\bullet(X)$, and this is what occupies the bulk of Sella's paper. (Disclaimer: I have not carefully read Sella's paper, and the above is just the impression of the main ideas which I got from a cursory reading.)<|endoftext|> TITLE: How many partial derivatives does a multivariate polynomial have? 
QUESTION [9 upvotes]: Definition: My motivation for this question stems from the following definition: Define the deterministic finite automata generated by the nonconstant* polynomial $f(x_0 , \dots , x_n) \in \mathbb{Z} [x_0 , x_1 , \dots , x_n]$, denoted $\text{DFA}(f)$, to be the $5$-tuple $$\text{DFA}(f) = (\partial f, \Sigma, f, \delta, \equiv0),$$ where $\partial f = \{ \text{all partial derivatives of } f \}$ is the set of states, $\text{alphabet } \Sigma = \Bigg \{ \frac{\partial}{\partial x_0}, \frac{\partial}{\partial x_1}, ... , \frac{\partial}{\partial x_n} \Bigg \},$ $f = f(x_0 , x_1 , \dots , x_n )$ is the unique start state, transition (partial, as in not defined for all possible inputs) function $\delta : \Sigma \times \partial f \longrightarrow \partial f, \bigg( \frac{ \partial}{\partial x_i} , p \bigg) \mapsto \frac{\partial p}{\partial x_i}$, where we only keep those elements $\bigg( \frac{\partial}{\partial x_i} , p \bigg)$ that satisfy either $\frac{\partial p}{\partial x_i} \neq 0$ or $\frac{\partial p }{\partial x_i} = 0 \text{ for all } i$ and $p \not\equiv 0$, $\text{and } \equiv0$, the zero polynomial, is the unique accept state. The cardinality of $\partial f$ is the title of the question. REPLY [3 votes]: For monomials, this is not too difficult to solve. In particular, consider $x_1^{n_1}x_2^{n_2}\cdots x_k^{n_k}$. Take any sequence $(a_1,a_2,\ldots,a_k) \in \{0,1,\ldots,n_1\}\times\{0,1,\ldots,n_2\}\times \cdots \times\{0,1,\ldots,n_k\}$ and map it to the monomial $\dfrac{\partial^{a_1+a_2+\cdots+a_k}}{\partial x_1^{a_1} \partial x_2^{a_2}\cdots \partial x_k^{a_k}}x_1^{n_1}x_2^{n_2}\cdots x_k^{n_k}$. Each of these derivatives is a distinct nonzero monomial, and every nonzero partial derivative of $x_1^{n_1}x_2^{n_2}\cdots x_k^{n_k}$ arises this way. Hence the total, counting the zero polynomial as well, is $$ (n_1+1)(n_2+1)\cdots(n_k+1)+1$$ In your example, $n_1=n_2=1$ so the total number is $(1+1)(1+1)+1=5$.<|endoftext|> TITLE: Matrices and prime numbers QUESTION [7 upvotes]: Let $ p $ be a prime number and \begin{align} K=\left\{ \begin{pmatrix} a &b \\ c& d \end{pmatrix} \mid a,b,c,d \in \left\{0,1,\ldots,p-1 \right\}, \right. & a+d \equiv 1 \!\!\!\! \pmod p, \\ & ad-bc \equiv 0 \!\!\!\! \pmod p \left.\vphantom{\begin{pmatrix} a &b \\ c& d \end{pmatrix}} \right\}. \end{align} Determine $\operatorname{card}(K) $. I have taken some particular cases: $ p=2,3,5 \text{ or } 7 $ and I've deduced that $\operatorname{card}(K)=p(p+1) $, but I can't extend the solution to the general case. REPLY [3 votes]: Here's one way to count them. Note that any such matrix can be built by one of the two following methods: Method 1: select $a \in \{0,1,\dots,p-1\}$, select a non-zero value for $b$, and solve for $d = 1 - a$, $c = \frac{ad}{b}$. Note that there are $p(p-1)$ matrices that we get from this method. Method 2: set $b = 0$, choose $c$ freely, and either set $a = 0$ and $d = 1$, or $a = 1$ and $d = 0$. There are exactly $2p$ matrices that we get from this method. In total, we thus have $p(p-1) + 2p = p(p+1)$ such matrices.<|endoftext|> TITLE: Is the Riemann integral of a strictly smaller function strictly smaller? QUESTION [5 upvotes]: We all know that if $f\leq{}g$ in $[a,b]$ then $$ \int_a^bf\,dx\leq\int_a^bg\,dx $$ now, imagine that we have $f<g$ in $[a,b]$: is the inequality between the integrals then strict? REPLY: Yes, it is. Write $h=g-f$, so that $h$ is Riemann integrable and $h>0$ on all of $[a,b]$. It suffices to prove the following claim. Claim. If $h \geq 0$ is Riemann integrable on $[a,b]$ and $\int_a^b h\,dx = 0$, then $h$ cannot be strictly positive on all of $[a,b]$. (Applied to $h=g-f$, this shows that $\int_a^b f\,dx = \int_a^b g\,dx$ is impossible, so the inequality between the integrals is strict.) Step 1. The proof of the claim rests on the following observation. Observation. For any $\epsilon > 0$ and $\delta > 0$, there exists a relatively open subset $U \subseteq [a, b]$ such that $U$ is the union of finitely many relatively open subintervals of $[a, b]$, the total length of $U$ is less than $\delta$, and $\{ x \in [a, b] : h(x) > \epsilon \} \subseteq U$. We first check that this indeed implies the claim.
For each $n \geq 1$, choose $U_n$ as in Observation with $\epsilon = 1/n$ and $\delta = 3^{-n}(b-a)$, so that the length of $U_n$ is less than $3^{-n}(b-a)$, and $\{ x \in [a, b] : h(x) > 1/n \} \subseteq U_n$. Then we find that $$ \{ x \in [a, b] : h(x) > 0 \} = \bigcup_{n=1}^{\infty} \{ x \in [a, b] : h(x) > 1/n \} \subseteq \bigcup_{n=1}^{\infty} U_n. $$ Now assume otherwise, that $h > 0 $ on all of $[a, b]$. Then it follows that $\bigcup_{n=1}^{\infty} U_n = [a, b]$ and thus $\{ U_n : n \geq 1 \}$ is an open cover of $[a, b]$. So we can pick a finite subcover, say $\{ U_{n_1}, \dots, U_{n_K} \}$. This implies that $$ [a, b] = U_{n_1} \cup \cdots \cup U_{n_K}. $$ This is a contradiction since the right-hand side has length at most $$\sum_{n=1}^{\infty} 3^{-n}(b-a) < b-a. $$ Step 2. It now remains to prove the observation. (The proof is essentially a variant of Markov's inequality.) Choose a partition $P$ such that $U(P, h) < \delta \epsilon$. Write $P = \{a = x_0 < \cdots < x_N = b\}$ and define $M_j = \sup_{[x_{j-1}, x_j]} h$ and $\Delta x_j = x_j - x_{j-1}$. Then we know that $U(P, h) = \sum_{j=1}^{N} M_j \Delta x_j < \delta \epsilon$. On the other hand, let $J$ be the set of indices $j$ for which $M_j > \epsilon$. Then $$ \sum_{j \in J} \Delta x_j \leq \frac{1}{\epsilon} \sum_{j \in J} M_j \Delta x_j \leq \frac{1}{\epsilon} U(P, h) < \delta $$ and $\cup_{j \notin J} [x_{j-1}, x_j]$ is a finite union of closed intervals on which $h \leq \max_{j \notin J} M_j \leq \epsilon$ holds. Therefore the observation follows by taking $U$ as the complement of $\cup_{j \notin J} [x_{j-1}, x_j]$.<|endoftext|> TITLE: Help understanding the weak law of large numbers with respect to statistics QUESTION [11 upvotes]: I'm trying to do some self-studying to upgrade my statistics knowledge, and came across this term in a section discussing the weak law of large numbers and Bernoulli's theorem: $$\sum_{k=0}^n k\frac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k}$$ According to the book I was reading, this term can "easily be shown to equal np". I am at a loss as to how to do so and could use some guidance! Edited: Lower limit $k=0$, not $1$ REPLY [2 votes]: Let $X_n=B(n,p)$ be a binomially distributed random variable. Also notice that $X_n=Y_1+Y_2+\cdots+ Y_n$ where $Y_i$ are i.i.d. Bernoulli with parameter $p$. Now observe that \begin{align} \sum_{k=0}^n k\frac{n!}{k!\,(n-k)!}p^k(1-p)^{n-k}&= \operatorname{E}(X_n)\\ &= \operatorname{E}( Y_1+Y_2+\cdots+ Y_n)\\ &=\operatorname{E}( Y_1)+\operatorname{E}(Y_2)+\cdots +\operatorname{E}(Y_n)\\ &=np \end{align}<|endoftext|> TITLE: Is it really impossible to lose the Hydra game? QUESTION [16 upvotes]: In a number of blog posts I found the claim that the game described below cannot be lost, which is to say, every possible strategy is a winning strategy. In each case, a sketch proof is given that involves ordinal arithmetic, and the claim is made that Peano arithmetic is not strong enough to prove the result. The problem is that I must have misunderstood something rather basic, because by playing around with pen and paper I was quickly able to find strategies for which the number of "heads" grows indefinitely, which seems to mean the claimed result is incorrect. My question is whether my misunderstanding is in the mechanics of the Hydra game, or in the nature of the claim being made about it. Here are the rules of the game, as I understand them. A hydra is a rooted tree. Its leaves are called "heads". The player (Hercules) must kill the hydra (i.e.
reduce it to only its root node) by chopping off its heads. To chop of a head, we first remove it from the tree. If this creates a new leaf node, it becomes a new head. We then find the grandparent node of the original head that we removed, and look at all of the subtrees whose root is the grandparent node. Each of these is duplicated $n$ times, where $n$ is any arbitrary natural number, such that the new subtrees each have the grandparent node as their root. (If the original head didn't have a grandparent because its parent was the root node, no new subtrees are generated.) Here is a figure to illustrate this process, taken from one of the posts linked above. The claim is that every strategy is a winning strategy, which as I understand it means that no matter which head is removed at each time step, and no matter what value of $n$ is chosen at each time step, the hydra will always be reduced to its root node in a finite number of moves. Now, let us write $x$ for a head and put nodes in parentheses if they share a parent. Then $((xxx))$ is a hydra, whose root node has one child, which in turn has three children, which are heads. Removing one head yields $((xx))$. The head's grandparent was the root node, so duplicating the subtree (with $n=2$) yields $$ ((xx)(xx)). $$ Removing one of these heads, again with $n=2$, yields $((xx)(x))$, and duplicating again with $n=2$ gives $$ ((xx)(xx)(x)(x)). $$ But now the previous step is a subtree. By chopping off the leftmost head we obtain $((xx)(xx)(x)(x)(x)(x))$ and then $((xx)(xx)(x)(x)(x)(x)(x)(x)(x)(x))$, and by induction after the $k^\text{th}$ move we will always have the $(xx)(xx)$ subtree, followed by $2^k$ copies of $(x)$. This then appears to be a losing strategy, contradicting what I understood to be the claim that no such strategy exists. Moreover, there appears to be no winning strategy for this tree as long as we always choose $n>1$, because the number of subtrees will always increase. What have I misunderstood? REPLY [13 votes]: Your description of how the hydra "regenerates" is incorrect. You don't add copies of the entire tree above the grandparent node, only part of the tree above the grandparent node that starts with going to the parent node. This means that after cutting off a head from $((xx)(xx))$, you get $((xx)(x)(x))$, because you only duplicate the $(x)$ part you had left after chopping off the head, not the other $(xx)$ from which you didn't chop off a head.<|endoftext|> TITLE: What is the difference between an adapted process and a predictable process? QUESTION [8 upvotes]: As the footnote on page 1 of this document mentions, even most experts in the field of stochastic processes don't seem to know rigorously what the difference is. However, since I don't have any idea of what the difference between the two is, I can't remember a definition, right or wrong for either of them. Would it be possible to give the difference in terms of "simple" adapted and predictable processes (i.e. the composition of a discrete-time process with an a.s. increasing sequence of stopping times) as is done here for predictable processes only? (i.e. what about simple adapted processes?) Moreover, I know that an adapted process is a well-defined concept for discrete time; is the same true for a predictable process? If so, what is the difference between their definitions in discrete time? 
I would be happy to understand even just this, because then I could take the intuition behind the Ito Isometry to just be "it's true for discrete time, now take the limit in probability and somehow it works magically in continuous time". Seemingly related: Proof that the predictable sigma algebra is also generated by continuous and adapted processes Is a predictable process adapted? What is the definition of a "predictable process"? If the above two questions give the answer to my question, I don't understand how, soI would be very grateful for a definition. REPLY [18 votes]: In discrete time at least, the definitions I'm familiar with are fairly straightforward. Given an increasing filtration $\{\mathcal{F}_n\}_{n=0}^{\infty}$, a process $\{X_n\}_{n=0}^{\infty}$ is adapted if each $X_n$ is $\mathcal{F}_n$-measurable. For predictable processes, the random variables are measurable with respect to slightly smaller $\sigma-$algebras. Given an increasing filtration $\{\mathcal{F}_n\}_{n=0}^{\infty}$, a process $\{Y_n\}_{n=1}^{\infty}$ is predictable if each $Y_n$ is $\mathcal{F}_{n-1}$-measurable. I remember these definitions by an analogy with gambling: An adapted process $X_n$ represents the cumulative gain or loss after $n$ turns, while a predictable process represents a betting strategy. It stands to reason that your betting strategy at the $n$th step can depend on the outcome of the previous $n-1$ steps, but not on the $n$th step itself.<|endoftext|> TITLE: $\bar{z}$ cannot be uniformly approximated by polynomials in $z$ on the closed unit disc in $\mathbb{C}$. QUESTION [6 upvotes]: In reference to this question, I tried to prove that $\bar{z}$ cannot be uniformly approximated by complex polynomials in $z$ on the closed unit disc $D$. I came up with a proof, but I'm not entirely sure whether it's correct. Here's how the proof goes: Assume it can be done. Pick an $\varepsilon < 0.1$ and then you'll have a polynomial $p$ such that $|p(z) - \bar{z}| < \varepsilon$ for all $z \in D$. Multiply both sides by $z$ to get: $$|zp(z) - z\bar{z}| < |z|\varepsilon$$ Now if we just look at the points where $|z| = 1$, the inequation becomes: $$|zp(z) - 1| < \varepsilon$$ Now I'll show on some point on the unit circle $C$, $zp(z)$ has a non-positive real part, which will lead to a contradiction because then $|zp(z) - 1| > 1 > \varepsilon$. Notice that $zp(z)$ is of the form: $$zp(z) = a_nz^n + \cdots + a_1z$$ Pick $2(n!)$ points on the circle such that the angle between two adjacent points is $\frac{\pi}{n!}$. The sum of $zp(z)$ over these points will be $0$. Here's why: For each point, there exists a point $\pi$ radians away, so the $z$ term in $zp(z)$ goes to $0$. Similarly, the $z^2$, $z^3$ and so on terms also add up to $0$ hence the polynomial adds up to $0$. Which means at one of those points, the real part of $zp(z) \leq 0$. This leads to the contradiction mentioned above and completes the proof. Is this proof correct? And if it is correct, this only works for sets containing a neighbourhood of $0$. But you'd expect a proof a global property (i.e. uniform approximation) to not depend on any local properties (i.e. a neighbourhood of $0$). It'd be nice if someone could explain how to extend this proof to any compact set in $\mathbb{C}$. REPLY [5 votes]: Your proof can be translated by integrating around the unit circle (if you take $n$ big enough, you're approximating the integral). 
Note that $$\frac{1}{2\pi i}\displaystyle\int_{|z|=1} \bar zdz=\frac{1}{2\pi i}\int_0^{2\pi} e^{-it}ie^{it}dt =1$$ and assume that given $\varepsilon >0$ there is $p_\varepsilon(z)$ such that $|p_\varepsilon(z)-\bar z|<\varepsilon$ on $\overline B(0,1)$. Since polynomials admit primitives their integrals over closed curves are $0$, so $$1=\left | \frac{1}{2\pi i}\int_{|z|=1} (\bar z-p(z))dz\right|\leqslant \sup_{|z|=1} |p_\varepsilon(z)-\bar z|<\varepsilon$$ by the standard estimate. This is a contradiction if $\varepsilon$ is small enough.<|endoftext|> TITLE: How to prove a bundle map is a bundle isomorphism? QUESTION [5 upvotes]: Example #1 In proving pullback bundle is homotopy invariant Ralph Cohen's notes on the topology of fiber bundles use the following proof based on the covering homotopy theorem (pp.47): Let $p: E \to B$ be a fiber bundle and $f_0, f_1: X \to B$ be homotopic to each other with the homotopy given by $H: X \times I \to B$. By the covering homotopy theorem we have the following fiber bundle over $X \times I$. $\require{AMScd}$ \begin{CD} f_0^*(E) \times I @>\tilde{H}>> E \\ @VVV @VVpV \\ X \times I @>H>> B \end{CD} Then he claims that by definition this defines a map of bundles over $X \times I$ $\require{AMScd}$ \begin{CD} f_0^*(E) \times I @>>> H^*(E) \\ @VVV @VVV \\ X \times I @= X \times I \end{CD} which is clearly a bundle isomorphism since it induces the identity map on both the base space and on the fibers. I don't understand why $f_0^*(E) \times I \to H^*(E)$ is an isomorphism. I agree that it induces the identity on the base space but how can the map on fibers is also the identity? If both are identity wouldn't that imply the two total spaces are the same? Example #2 On the same notes pp.53 during the proof of the bijection between $[X,B]$ and isomorphism classes of principal bundles over $X$ where $(h,f)$ defines a homeomorphism of principal bundle as below $\require{AMScd}$ \begin{CD} P @>h>> E \\ @VqVV @VVpV \\ X @>f>> B \end{CD} which induces an equivariant isomorphism on each fiber thus induces the following bundle isomorphism to the pullback $\require{AMScd}$ \begin{CD} P @>>> f^*(E) \\ @VqVV @VVpV \\ X @= X \end{CD} again I don't understand why $P \to f^*(E)$ is a bundle isomorphism. Example #3 On the same notes pp.48 in showing K-theory is a homotopy invariant, the missing step is to show if $E \to Y$ and $E' \to Y$ are isomorphic bundles over $Y$ i.e. $\require{AMScd}$ \begin{CD} E @>\cong>> E' \\ @VVV @VVV \\ Y @>\cong>> Y \end{CD} then their pullback bundles along the same map $f: X \to Y$ is also isomorphic i.e. $\require{AMScd}$ \begin{CD} f^*(E) @>\cong>> f^*(E') \\ @VVV @VVV \\ X @>\cong>> X \end{CD} The global homeomorphism between $E$ and $E'$ induces homeomorphism on their local trivializations which defines a homeomorphism on local trivializations of $f^*(E)$ and $f^*(E')$ (as the local trivialization of the pullback is defined using the local trivialization of the bundle). Global homeomorphism between $f^*(E)$ and $f^*(E')$ then follows from the pasting lemma? REPLY [4 votes]: In the first case, the explanation given in the notes is indeed rather poor. The fact that the map $f_0^*(E) \times I \to H^*(E)$ is an isomorphism follows not from the statement of the covering homotopy theorem (which, as those notes state it, doesn't even guarantee that $\tilde{H}$ induces homeomorphisms on each fiber) but from its proof. 
When you examine the proof, you can see that it constructs a certain bundle map $f_0^*(E) \times I \to H^*(E)$ which is seen to be continuous by the pasting lemma (it is constructed by pasting together pieces which lie over a cover of $B\times I$ by certain closed sets $A_{ij}=\bar{W}_i'\times [t_j,t_{j+1}]$). But actually, over each of these pieces $A_{ij}$, there are trivializations of both $f_0^*(E)\times I$ and $H^*(E)$ given by $\phi_k$ (since each of them is pulled back from a bundle on $U_\alpha$ which is trivialized by $\phi_k$), and when you use these trivializations to identify both bundles with the trivial bundle $A_{ij}\times F$, the map is just the identity map. This is what the notes mean when they say the map is "the identity on fibers": when you choose these local trivializations induced by $\phi_k$ on each piece of the base, the map becomes the identity on fibers. It follows that the inverse map, similarly defined piecewise, is also continuous, so the map is a homeomorphism. In the second case, the explanation is much simpler: any equivariant map between principal bundles which covers the identity map is automatically a homeomorphism. To see this, you can look locally on the base, so you can assume both your bundles are trivial. Then you have map $\alpha:X\times G\to X\times G$ of the form $\alpha(x,g)=(x,\beta(x)g)$ for some continuous map $\beta:X\to G$ (the map $\beta$ is continuous because it is the second coordinate of the map $\alpha(x,1)$). The inverse of $\alpha$ is then just $\alpha^{-1}(x,g)=(x,\beta(x)^{-1}g)$, which is continuous because the inverse map $G\to G$ is continuous.<|endoftext|> TITLE: Exterior derivative on principal bundle QUESTION [7 upvotes]: In Nakahara's Geometry, Topology and Physics on page 375, he constructs a Lie-algebra-valued one-form $\omega$ on a principal bundle $P$ by "lifting" a Lie-algebra-valued one-form $\mathcal A_i$ on an open covering $\{U_i\}$ on the base manifold $M$. Given a $\mathfrak{g}$-valued one-form $\mathcal{A}_i$ on $U_i$ and a local section $\sigma_i:U_i\to\pi^{-1}(U_i)$, he defines the connection one-form $\omega$ like this: $$ \omega_i := g_i^{-1}\pi^*\mathcal A_i g_i + g_i^{-1}\mathrm{d}_Pg_i $$ Here, $\mathrm{d}_P$ denotes the exterior derivative on $P$ and $g_i$ is the canonical local trivialization such that $\phi_i^{-1}(u)=(p,g_i)$ and $u=\sigma_i(p)g_i$. Later he proves that $\sigma_i^*\omega_i=\mathcal A_i$, where he uses the fact that $\mathrm{d}_Pg_i({\sigma_i}_*X)=0$ because of $g\equiv e$ along ${\sigma_i}_*X$. I don't understand the notation used here. I know that $G$ acts from the right on $u\in P$. I think that's the operation meant in $\sigma_i(p)g_i$. There's the adjoint action of the group $G$ on its Lie algebra $\mathfrak g$. I think that's what's meant with $g_i^{-1}\pi^*\mathcal A_i g_i + g_i^{-1}\mathrm{d}_Pg_i$, because $\omega_i$'s values should be in $\mathfrak g$. But what is $\mathrm{d}_Pg_i({\sigma_i}_*X)$ and why does it vanish? How does $g_i$ act on the exterior derivative on $P$? And how does $\mathrm{d}_P$ produce a one-form? I thought $\mathrm{d}_P$ maps $p$-forms onto $(p+1)$-forms, so it needs a 0-form (function) as input to produce a one-form. Thanks for any hints. REPLY [7 votes]: For what follows, I will denote with $p$ the points of $P$ and with $u$ the points of $U_i$, otherwise I will get confused. But be careful that what you have written in the question (which probably comes from the book) is the other way around! 
For example, $\sigma_i$ is a function on $U_i$, so it takes $u$ as an argument, and not $p$...and so on! So, we have an open region $U_i$ of $M$. $U_i$ can be seen as the projection onto $M$ of a region $\pi^{-1}(U_i)\subseteq P$ which "looks" like $U_i\times G$, but only locally, i.e. only over $U_i$, and not over the whole $M$. What does it mean that it "looks" like $U_i\times G$? $U_i\times G$ has a distinguished section, namely, the identity section $U_i\times {e}$. Instead, $\pi^{-1}(U_i)$ has no distinguished section. A local trivialization is a choice of such a special section, i.e. a map $\sigma_i:U_i\to \pi^{-1}(U_i)$ whose image corresponds to $U_i\times {e}$. Of course there is no canonical choice of such a section (unless the bundle is trivial), and on different open sets $U_i$ we will have to choose another section, since they don't glue together nicely (unless the bundle is trivial). But locally, only over $U_i$, we can choose out $\sigma_i$, and this will make $\pi^{-1}(U_i)$ look like $U_i\times G$, i.e. locally trivial. The map $\phi_i:U_i\times G\to \pi^{-1}(U_i)$ makes this explicit: $\phi(u,e)=\sigma_i(u)$ for any $u\in U_i$, and $\phi(u,g)=\sigma_i(u)g$. Thanks to the map $\phi_i$, each point $p\in \pi^{-1}(U_i)$ can be seen as an element in the form $(u,g)\in U_i\times G$. In fact, any point in the image of $\sigma_i$ will look like $(u,e)$, and any other point $p\in \pi^{-1}(U_i)$ will be obtained with right mutiplication by some element $g$. So, every element $p\in \pi^{-1}(U_i)$ can be written uniquely (given a section $\sigma_i$) as $\sigma_i(u)g_i$, for some $g_i$. This $g_i$ depends of course on $p$, and it does so smoothly: there is a function $g_i:\pi^{-1}(U_i)\to G$ mapping $p\in \pi^{-1}(U_i)$ to the unique $g_i(p)$ such that $p=\phi_i(\pi(p),g_i(p))$. So, $g_i$ is actually a smooth function on $\pi^{-1}(U_i)\subseteq P$. We can derive it, and we denote its differential by $d_P g_i$, to emphasize that it is derived as a function on $\pi^{-1}(U_i)\subseteq P$, and not on $U_i\subseteq M$. In particular, $g_i:\pi^{-1}(U_i)\to G$, so at some point $p$, the differential is a map $(d_P g_i)_p: T_pP\to T_{g_i(p)}G$. The target is then a tangent vector to $G$, but not at the identity, i.e. we don't have directly an element of the Lie algebra (viewing the Lie algebra as the tangent space at the identity). However, as you probably know, at any $g\in G$ there is a canonical isomorphism $T_gG\to T_eG$ given by the differential of the left-translation: $g^{-1}:G\to G$, seen as a function on $G$, which maps $g'\mapsto g^{-1}g'$, and $(dg^{-1})_{g'}:T_{g'}G\to T_{g^{-1}g'}G$. This $dg^{-1}$ at each point $g'\in G$ is then an action on vector fields on $G$ (= the Lie algebra), which we write simply as a left action. So: $g^{-1} v$, for $v$ tangent vector field, will mean $(dg^{-1})_{g'}v_{g'}$ at each $g'$. In particular $(dg^{-1})_g:T_gG\to T_eG$, so we can write: $g^{-1}v_g\in T_eG$. Now, we do this with $g_i(p)\in G$ at each point $p$. We have: $(dg_i(p)^{-1})_{g_i(p)}:T_{g_i(p)}G\to T_eG$, and we write this map simply as $g_i(p)^{-1}:T_{g_i(p)}G\to T_eG$. Composing $d_Pg_i$ with this map we get: $$ g_i(p)^{-1} (d_Pg_i)_p:T_pP\to T_eG\;, $$ which is a linear map from $T_pP$ to $T_eG$, in other words, a differential form on (an open set of) $P$ with values in the Lie algebra. The term you have $g_i^{-1} d_P g_i$ is exactly this! 
The other term, $g_i^{-1} \pi^* A_i g_i$ is indeed simply the adjoint action on a form $\pi^* A_i$ on (an open set of) $P$ with values in the Lie algebra. The sum $g_i^{-1} \pi^* A_i g_i + g_i^{-1} d_P g_i$ is then the sum of two 1-forms on an open set of $P$ with values in the Lie algebra. The big result is that while both terms can change if we change local trivialization, their sum will not! Therefore we can define the object globally as a Lie-algebra-valued 1-form on $P$. I hope this answers your questions. In particular $\sigma_i g_i$ is the right action of $G$ on $P$: yes. $g_i^{-1} \pi^* A_i g_i$ is the adjoint: yes. And by the way, $\pi^*A$ is the pullback of a 1-form on $U_i$. What is $d_P g_i$ and why it is a 1-form: should be explained above. How does $g_i^{-1}$ act on $d_Pg_i$: should be explained above. Here is why $(d_p g_i)({\sigma_i}_*X) = 0$: Since $\phi(u,e)=\sigma_i(u)$, by definition of the function $g_i$, $g_i(\sigma_i(u))=g_i(\phi(u,e))=e$, so that the composite function $g_i\circ\sigma_i:U_i\to G$ is constant, and therefore it has differential zero: for any vector $X$ tangent to $U_i$: $$ 0=d(g_i\circ\sigma_i)X = (d_p g_i)({\sigma_i}_*X) \;. $$<|endoftext|> TITLE: Sudoku with special properties QUESTION [25 upvotes]: Sudoku is a puzzle, with the objective is to fill a 9×9 grid with digits so that each column, each row, and each of the nine 3×3 sub-grids that compose the grid (also "sudoku-blocks") contains all of the digits from 1 to 9. Let's define block as a 3x3 sub-grid (not necessarily forming one of the sudoku-blocks, but including them) containing all the digits 1 to 9. Let's define N as a number of all valid blocks on the grid. For the usual sudoku puzzle $N=9$. The maximum theoretically possible $N=49$ (7 blocks per row*7 blocks per column). I found sudoku puzzle with $N=10$, to prove that puzzles with $N>9$ exist. Here's one: +-------+-------+-------+ | 3 9 6 | 4 1 5 | 2 7 8 | | 1 2 5 | 7 3 8 | 4 6 9 | | 4 7 8 | 2 6 9 | 3 1 5 | +-------+-------+-------+ | 7 5 9 | 6 4 2 | 8 3 1 | | 8 4 3 | 5 9 1 | 7 2 6 | | 2 6 1 | 3 8 7 | 5 9 4 | +-------+-------+-------+ | 5 3 4 | 9 2 6 | 1 8 7 | | 6 8 7 | 1 5 3 | 9 4 2 | | 9 1 2 | 8 7 4 | 6 5 3 | +-------+-------+-------+ The 10th block is in the top right corner: 5 2 7 8 4 6 9 3 1 And here's another with $N=33$ ($N = 3*7 + 3*7 - 9$) +-------+-------+-------+ | 1 2 3 | 4 5 6 | 7 8 9 | | 4 5 6 | 7 8 9 | 1 2 3 | | 7 8 9 | 1 2 3 | 4 5 6 | +-------+-------+-------+ | 2 3 1 | 5 6 4 | 8 9 7 | | 5 6 4 | 8 9 7 | 2 3 1 | | 8 9 7 | 2 3 1 | 5 6 4 | +-------+-------+-------+ | 3 1 2 | 6 4 5 | 9 7 8 | | 6 4 5 | 9 7 8 | 3 1 2 | | 9 7 8 | 3 1 2 | 6 4 5 | +-------+-------+-------+ Questions: Does sudoku puzzle with N=49 exist? No If yes, then what is it? If no, then what's the maximum possible N? Why? Update. This update is fully based on @Emisor answer and proof. Assume $N=49$ possible, let's try generating a puzzle: +-------+--- | 1 2 3 | X | 4 5 6 | Y | 7 8 9 | Z +-------+--- | A B C | D The block on the first figure is to be taken as our "starting" one. Since there are no other numbers on the board, having them ordered makes no difference, for now. Now, for $N=49$ several conditions must be met: X, Y, Z must be filled with 1, 4, 7 A, B, C must be filled with 1, 2, 3 Since, X cannot be 1 and A cannot be 1, this statements are also true: Y or Z must be 1 B or C must be 1 That makes block 56Y,89Z,BCD invalid as it must contain two 1, therefore $N=49$ is impossible. That makes only one question left: What's the maximum possible $N$? Why? 
REPLY [2 votes]: UPDATE: After adding condition 3 below (only 3 out of every 4 locations in a 2x2 square can be a block, savile row + lingeling (a SAT solver) prove there is no solution with 34 or more blocks in about 10 minutes. This does not of course provide any kind of nice proof I have tried modelling this problem in the Savile Row system, which can generate input for SAT solvers and the Minion constraint solver. This can re-create the N=33 almost instantly, but so far hasn't found anything for N > 33, after a couple of hours. I plan on working on improving my model. At the moment the only non-obvious facts I am using are: When counting the 3x3 squares which satisfy the condition, remember the original sub-blocks always do We can, without loss of generality, assign the first row to 1,2,...9 In any 2x2 block, at most 3 of the blocks with this square as their top left are blocks. Here is the Savile Row model, for the interested reader (designed to find > 33 squares). language ESSENCE' 1.0 letting RANGE be domain int(1..9) letting LONGRANGE be domain int(0..9) letting VALUES be domain int(0..9) find field: matrix indexed by [RANGE, RANGE] of RANGE find squarecheck : matrix indexed by [LONGRANGE, LONGRANGE] of bool find squares: int(34..100) such that $ all rows have to be different forAll row : RANGE . allDiff(field[row,..]), $ all columns have to be different forAll col : RANGE . allDiff(field[..,col]), $ all 3x3 blocks have to be different forAll i,j : int(0..2) . ( allDiff([field[row1+(i*3), col1+(j*3)] | row1 : int(1..3), col1 : int(1..3) ]) /\ squarecheck[i*3,j*3] = true ), forAll i : int(0..8). forAll j : int(0..8). squarecheck[i,j] + squarecheck[i+1,j] + squarecheck[i,j+1] + squarecheck[i+1,j+1] <= 3, forAll i : int(0..9). forAll j : int(0..9) . squarecheck[i,j] = allDiff([field[row1+i, col1+j] | row1 : int(1..3), col1 : int(1..3) ]), sum(flatten(squarecheck)) = squares, forAll i : int(1..9). field[1,i] = i<|endoftext|> TITLE: How to prove Poisson Distribution is the approximation of Binomial Distribution? QUESTION [8 upvotes]: I was reading Introduction to Probability Models 11th Edition and saw this proof of why Poisson Distribution is the approximation of Binomial Distribution when n is large and p is small: An important property of the Poisson random variable is that it may be used to approximate a binomial random variable when the binomial parameter $n$ is large and $p$ is small. To see this, suppose that $X$ is a binomial random variable with parameters $(n, p),$ and let $\lambda=n p .$ Then $$ \begin{aligned} P\{X=i\} &=\frac{n !}{(n-i) ! i !} p^{i}(1-p)^{n-i} \\ &=\frac{n !}{(n-i) ! i !}\left(\frac{\lambda}{n}\right)^{i}\left(1-\frac{\lambda}{n}\right)^{n-i} \\ &=\frac{n(n-1) \cdots(n-i+1)}{n^{i}} \frac{\lambda^{i}}{i !} \frac{(1-\lambda / n)^{n}}{(1-\lambda / n)^{i}} \end{aligned} $$ Now, for $n$ large and $p$ small $$ \left(1-\frac{\lambda}{n}\right)^{n} \approx e^{-\lambda}, \quad \frac{n(n-1) \cdots(n-i+1)}{n^{i}} \approx 1, \quad\left(1-\frac{\lambda}{n}\right)^{i} \approx 1 $$ Hence, for $n$ large and $p$ small, $$ P\{X=i\} \approx e^{-\lambda} \frac{\lambda^{i}}{i !} $$ I can understand most part of the proof except for this equation: $\left(1-\frac{\lambda}{n}\right)^{n} \approx e^{-\lambda}$ I really don't remember where it comes from, could anybody explain this to me? Thanks!. REPLY [8 votes]: Well, this is a basic fact of the exponential function $e^x$. One definition of $e$ is the limit $\lim_{n\to\infty}(1+\frac1n)^n$. 
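Before the formal argument below, here is a quick numerical sanity check (only a minimal sketch; it assumes scipy is available, and the particular values of $\lambda$, $i$ and $n$ are just illustrative):

# Quick numerical check (sketch): (1 - lam/n)^n -> exp(-lam), and the
# Binomial(n, lam/n) probabilities -> Poisson(lam) probabilities as n grows.
import math
from scipy.stats import binom, poisson  # scipy assumed available

lam, i = 2.5, 3
for n in (10, 100, 1000, 10000):
    print(n,
          (1 - lam / n) ** n,        # approaches exp(-lam)
          binom.pmf(i, n, lam / n),  # approaches poisson.pmf(i, lam)
          poisson.pmf(i, lam))
print(math.exp(-lam))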
By a monotonicity argument one can prove $\lim_{x\to\infty}(1+\frac1x)^x=e$ where $x$ now ranges the real numbers. Also note that $1-\frac1x=\frac{x-1}x=1/\frac x{x-1}=1/(1+\frac1y)=(1+\frac1y)^{-1}$ where $y=x-1$. So, one has the following: $$\begin{aligned} \lim_{x\to\infty}(1-\frac1x )^x &= \lim_{y\to\infty}(1+\frac1y )^{-(y+1)} \\ &=\lim_{y\to\infty}(1+\frac1y)^{-y}\times\lim_{y\to\infty}(1+\frac1y)^{-1} \\ &=e^{-1}\times1=e^{-1}\,. \end{aligned} $$ From here, assuming $\lambda>0$, $$\begin{aligned} e^{-\lambda}=(e^{-1})^\lambda &= \lim_{x\to\infty}(1-\frac1x)^{\lambda x} &\to{\ z:=\lambda x} \\ &= \lim_{z\to\infty}(1-\frac\lambda z)^z\,. \end{aligned} $$ In consequence, we have this limit for every sequence $z_n\to\infty$ written in place of $z$ and limiting on the natural $n\to\infty$. In particular, this also holds for $z_n=n$. Note that we had to take the turnaround for arbitrary real numbers instead of integers only because of the exponent $\lambda$.<|endoftext|> TITLE: Learning differential calculus through infinitesimals QUESTION [5 upvotes]: In class, we've studied differential calculus and integral calculus through limits. We reconstructed the concepts from scratch beginning by the definition of limits, licit operations, derivatives and then integrals. But the teacher really did everything to avoid talking about infinitessimals. For instance when we talked about variable changes we had to swallow that for a certain $u=\phi(x)$, $du = \phi'(x)dx$. What bothers me is how much we use them in maths and physics without understanding them. My question is how can I learn to manipulate infinitessimals? I've read an article on the field of infinitessimals but it didn't quite satisfy my curiosity. What literature would you recommend on the subject knowing that I have some background on linear algebra (vector spaces, matrices), algebraic structures (groups, rings, fields) and monovariable calculus. REPLY [2 votes]: The best calculus textbook based on infinitesimals is Keisler's textbook Elementary Calculus. This opinion is based on our experience teaching calculus with infinitesimals based on the book over the past three years. We have taught over 250 students by now using this method, yielding better results than parallel groups that did not use infinitesimals, based on exam scores and teacher evaluations at the end of the course.<|endoftext|> TITLE: Prove $\sum\limits_{j=1}^k(-1)^{k-j}j^k\binom{k}{j}=k!$ QUESTION [21 upvotes]: In reading about A polarization identity for multilinear maps by Erik G F Thomas, I am led to prove the following combinatorial identity, which I cannot find anywhere, nor do I have any idea how to prove it. $$\sum\limits_{j=1}^k(-1)^{k-j}j^k\binom{k}{j}=k!.$$ After some reductions (including $\binom{k+1}{j}=\binom{k}{j}+\binom{k}{j-1}$) and "simplifications", I am still stuck and have no clues how to proceed. I do not know if this is true, either, though I think it should be. Any help or reference is sincerely welcomed, thanks in advance. REPLY [3 votes]: Here is a variant based upon the coefficient of operator $[z^k]$ used to denote the coefficient of $z^k$ in a series. This way we can write e.g. 
\begin{align*} [z^j](1+z)^k=\binom{k}{j}\qquad\qquad\text{or}\qquad\qquad k![z^k]e^{jz}=j^k \end{align*} We obtain \begin{align*} \sum_{j=1}^k(-1)^{k-j}j^k\binom{k}{j} &=\sum_{j=0}^\infty(-1)^{k-j}k![z^k]e^{jz}[u^j](1+u)^k\tag{1}\\ &=(-1)^kk![z^k]\sum_{j=0}^{\infty}\left(-e^{z}\right)^j[u^j](1+u)^k\tag{2}\\ &=(-1)^kk![z^k](1-e^z)^k\tag{3}\\ &=k![z^k]\left(z+\frac{z^2}{2!}+\frac{z^3}{3!}+\cdots\right)^k\tag{4}\\ &=k! \end{align*} and the claim follows. Comment: In (1) we apply the coefficient of operator twice and extend the lower and upper limit of the series without changing anything, since we add only zeros. In (2) we use the linearity of the coefficient of operator and do some rearrangements. In (3) we use the substitution rule with $u=-e^z$: \begin{align*} A(z)=\sum_{j=0}^\infty a_jz^j=\sum_{j=0}^\infty z^j [u^j]A(u) \end{align*} In (4) we see that in each of the $k$ factors $(e^z-1)$ only $z^1$ provides a contribution to $z^k$.<|endoftext|> TITLE: Show that if $(\sum x_n)$ converges absolutely and $(y_n)$ is bounded then $(\sum x_n y_n)$ converges QUESTION [7 upvotes]: This is exercise 2.7.6 of the book Understanding Analysis by Abbott. I want a check of my proof and, if needed, additional information to complete it. a) Show that if the sequence $(\sum x_n)$ converges absolutely and the sequence $(y_n)$ is bounded then the sequence $(\sum x_n y_n)$ converges b) Find a counterexample that demonstrates that a) does not always hold if the convergence of $(\sum x_n)$ is conditional For the part a): if $(\sum x_n)$ converges absolutely it means, using the definition of Cauchy sequences (which characterizes convergence in complete spaces) applied to series, that $$\forall \varepsilon> 0, \exists N\in\Bbb N :\sum_{n=m}^{t} |x_n|<\varepsilon, \forall t>m> N$$ and we have the inequalities $|\sum x_n y_n|\le \sum|x_n y_n|$ and $|\sum x_n|\le\sum|x_n|$. And because $(y_n)$ is bounded we have that $|y_n|\le B$ and then $$\left|\sum x_n y_n\right|\le\sum|x_n y_n|\le B\sum|x_n|$$ Then we have that $$\sum_{n=m}^{t} |x_n|<\frac{\varepsilon}{B}\implies\left|\sum_{n=m}^{t} x_n y_n\right|\le B\sum_{n=m}^{t}|x_n|<\varepsilon$$ For the part b) we can take the conditionally convergent sequence $(\sum\frac{(-1)^{n-1}}{n})$ and the bounded sequence $((-1)^{n-1})$. If we multiply both as the problem defines, we get the divergent sequence $(\sum \frac1n)$ REPLY [5 votes]: (Writing this just so the question can get marked as answered.) You are correct in both cases. You're right in the first part that it basically boils down to $$\left|\sum\limits_{n}x_n y_n\right|\leq \sum\limits_{n}|x_n y_n| = \sum\limits_n |x_n||y_n|\leq \sup\limits_{k}|y_k|\sum\limits_{n}|x_n|< +\infty.$$ Your counter-example in the second part is good.<|endoftext|> TITLE: Possible ranks of a $n!\times n$ matrix with permuted rows QUESTION [5 upvotes]: Let $a_1,\cdots,a_n$ be $n$ arbitrary real numbers. Form the $n! \times n$ matrix $M$ whose rows are obtained by permuting the $n$ numbers given. Find all the possible ranks of such a matrix. Obviously, the order of listing out these permutations does not affect the rank.
When $n=3$, a possible such $M$ is $$\begin{pmatrix} a_1 & a_2 & a_3 \\ a_1 & a_3 & a_2\\ a_2 & a_3 & a_1 \\ a_2 & a_1 & a_3 \\ a_3 & a_1 & a_2 \\ a_3 & a_2 & a_1\end{pmatrix}$$ For a general $n$, I have proved the existence of ranks with value $0,1,n-1,n$: $$\eqalign{ & {\text{For }}\operatorname{rank} M = 0,{\text{ put all }}{a_i} = 0 \cr & {\text{For }}\operatorname{rank} M = 1,{\text{ put all }}{a_i} = 1 \cr & {\text{For rank }}M = n - 1,{\text{ put }}{a_1} = {a_2} = ... = {a_{n - 1}} = 1{\text{ and }}{a_n} = - n + 1 \cr & {\text{For rank }}M = n,{\text{ put }}{a_1} = 1{\text{ and all other }}{a_i} = 0 \cr} $$ All except the third one are trivial observations, to prove the third one, note that when ${a_1} = {a_2} = ... = {a_{n - 1}} = 1{\text{ and }}{a_n} = - n + 1$, the sum of the $n$ columns of $M$ is zero, hence its columns are linear dependent, we have ${\text{rank }}M \leqslant n - 1$. However, note that the $(n - 1) \times (n - 1)$ matrix $${\left( {\begin{array}{*{20}{c}} { - n + 1}&1& \cdots &1 \\ 1&{ - n + 1}& \cdots &1 \\ \vdots &1& \ddots &1 \\ 1&1&1&{ - n + 1} \end{array}} \right)}$$ is a submatrix of $M$, and by formula for $n \times n$ matrix with $x$ on diagonal and $y$ elsewhere, $$\left| {\begin{array}{*{20}{c}} x&y& \cdots &y\\ y&x& \cdots &y\\ \vdots &y& \ddots &y\\ y&y&y&x \end{array}} \right| = {(x - y)^{n - 1}}\left[ {x + (n - 1)y} \right]$$ this submatrix is invertible and thus has rank $n-1$. Hence the original $M$ satisfies ${\text{rank }}M \geqslant n - 1$ However, I can't prove or disprove the existence of other values of rank. Even the simplest case when $M$ is a $4! \times 4$, I cannot assert or rule out the existence of $M$ such that ${\text{rank }}M = 2$ There is a similar question in Stackexchange, but it only tells the facts that I had already knew, and does not give any insight. This is a question from Artin's Algebra so I think this is not very hard. Any help will be greatly appreciated. REPLY [2 votes]: You've already listed all the ranks that can be achieved. The row space affords a representation of the symmetric group $S_n$ acting on the row vectors. This representation decomposes into the standard representation and the trivial representation (see e.g. Wikipedia). If you write the top row as $\vec s+\vec t$, with $\vec s$ in the subspace that transforms according to the standard representation and $\vec t$ in the subspace that transforms according to the trivial representation, you have the following options: If $\vec s=\vec0$ and $\vec t=0$, the rank is $0$. If $\vec s=\vec0$ and $\vec t\ne0$, all permutations reproduce $\vec t$, so the rank is $1$. If $\vec s\ne\vec0$ and $\vec t=0$, since the standard representation is irreducible, acting with all permutations on $\vec s$ produces row vectors that span the entire $(n-1)$-dimensional subspace, so the rank is $n-1$. If $\vec s\ne\vec0$ and $\vec t\ne0$, the row vectors are of the form $\vec s'+\vec t$, where the $\vec s'$ together span the entire $(n-1)$-dimensional subspace. By acting with the projector onto the $1$-dimensional subspace corresponding to the trivial representation (that is, by forming the average of the rows), we can form $\vec t$ as a linear combination of the rows. This we can then subtract from all rows, yielding the vectors $\vec s'$ that span the $(n-1)$-dimensional subspace. Thus in this case the rank is $n$.<|endoftext|> TITLE: Tic Tac Toe: What is the probability that a random player draws against an infallible player? 
QUESTION [5 upvotes]: I have simulated a tournament between an infallible Tic Tac Toe player and one that chooses its moves randomly. Even after 5 million games, the infallible player wins every single game. I know that if both players play infallible, a Tic Tac Toe game always ends in a draw. I could not find a bug in my program and I am missing the foundations in probability theory to calculate whether it is probable that even after 5 million games no draw occurs. Is there a way to calculate and prove it? REPLY [8 votes]: The second player has 8 choices for his first move, 6, for the second, etc., so in any game he has a total of $8 \times 6 \times 4 \times 2 = 384$ possible sequences of moves. If only one choice is correct at each point, which is an underestimation, then in 5 million games he would be expected to draw at least $\lambda = 5 \times 10^6 / 384 \approx 1.3 \times 10^4$ games. Using a Poisson approximation, the probability that he will actually draw zero games is then $e^{- \lambda} \approx 1.3 \times 10^{-5655}$. This isn't likely. We could make a more refined estimate-- for example, it may be that the inferior player actually has two drawing choices at his second move instead of one-- but this would only serve to increase his expected number of draws and decrease the final probability of zero draws.<|endoftext|> TITLE: Does this Fractal Have a Name? QUESTION [58 upvotes]: I was curious whether this fractal(?) is named/famous, or is it just another fractal? I was playing with the idea of randomness with constraints and the fractal was generated as follows: Draw a point at the center of a square. Randomly choose any two corners of the square and calculate their center. Calculate the center of the last drawn point and this center point of the corners. Draw a new point at this location. Not sure if this will help because I just made these rules up while in the shower, but sorry I do not have any more information or an equation. Thank you. REPLY [31 votes]: As others have noted, what you're describing is an iterated function system — specifically, a system of affine contraction maps — which is a common way of constructing self-similar fractals. In particular, it can be written as a system consisting of the following five maps: \begin{aligned} (x,y) &\mapsto \tfrac12 (x,y) \\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(0,1)\\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(0,-1) \\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(1,0) \\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(-1,0) \\ \end{aligned} where I've taken the center of your square to be at the origin, and its corners to be at $(\pm 1, \pm 1)$. Written like this, you can indeed see that each of these maps represents taking the average of the current point $(x,y)$ and some fixed target point. The first map (where the target point is simply the origin) arises whenever the two corners you choose in step 2 are opposite, and is thus twice as likely to be chosen in your randomized iteration as the other four maps, each of which results from picking two corners that are adjacent to each other. The first, obvious question then is what the fixed set of your iterated function system (i.e. the unique non-empty compact set $S \subset \mathbb R^2$ such that applying each of your affine maps to $S$ and taking the union of the results yields the same set $S$) is. 
This fixed set is the closure of the limit set obtained by infinitely iterating your function system from any starting point, and thus, in some sense, represents "the" limit shape obtained by iterating your system. Alas, for your system, the answer is kind of boring: the fixed set is simply a (tilted) square with corners at the top, right, left and bottom of the outer square (i.e. at $(0,\pm1)$ and $(\pm1,0)$ in the parametrization I used above). The easy way to see that is to observe that the last four maps in your system map this square into four smaller squares that precisely tile the original square (and the first map just produces a square that redundantly overlaps the other four). Another, more rigorous way is to show that any point within this square (and only within this square!) can be approached arbitrarily closely from any starting point by iterating the maps given above in a suitable order. For this, it helps to rotate, scale and translate the coordinates so that the fixed square (with corners at $(0,\pm1)$ and $(\pm1,0)$) is mapped to the unit square (with corners at $x,y \in \{0,1\}$). With this coordinate transformation, the maps given above become: \begin{aligned} (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(\tfrac12, \tfrac12)\\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(0,0)\\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(0,1) \\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(1,0) \\ (x,y) &\mapsto \tfrac12 (x,y) + \tfrac12(1,1) \\ \end{aligned} We can then interpret the last four maps as shifting the binary representation of the coordinates $x$ and $y$ right by one place, and setting the first binary digits after the radix point to one of the four possible combinations. Thus, starting from the origin (which we can approach arbitrarily closely from any starting point just by iterating the second map above), we can reach any point with dyadic rational coordinates — i.e. coordinates whose binary representation terminates — inside the unit square with a finite number of steps. Since the dyadic rationals are dense in the unit square, any point within the square can be approached arbitrarily closely in this way. (For completeness, we should also show that no point outside the unit square can be a limit of iterating these maps. A simple way to do that is to show that if the point $(x,y)$ lies outside the unit square — i.e. either of $x$ and $y$ is negative or greater than $1$ — then applying any of these maps will move it closer to the unit square.) So, if the limit of iterating your system is simply a square, why are you seeing all that interesting fractal-looking structure, then? The reason for that is the redundant first map (the one that just pulls each point closer to the center of the square), which causes some points in the square to be more likely that others to occur during the iteration. Thus, the invariant measure of your iterated function system will not be the uniform measure over the square (like you'd get if you removed the first map from the system). Rather, it ends up looking like this (rotated 45°, like in my transformed system above): $\hspace{64px}$ The picture above is a discretized approximation of the invariant measure, with the darkness of each pixel being proportional to the probability of the randomly iterated point landing in that pixel. I obtained this picture simply by starting with the uniform measure on the unit square, and repeatedly adding together scaled and translated copies of it. 
Specifically, I used the following Python code to do this: import numpy as np import scipy.ndimage import scipy.misc d = 9 # log2(image size) s = 2**d / 4 # number of pixels to shift first map by a = np.ones((2**d, 2**d)) # iteration should converge in about d steps; run for 2*d to make sure for i in range(2*d): b = scipy.ndimage.zoom(a, 0.5, order=1) * (4/6.0) a = np.tile(b, (2, 2)) # last four maps just tile the square a[s:3*s, s:3*s] += 2*b # add first map (twice, since it has higher weight) scipy.misc.imsave('measure.png', -a) # negate to invert colors Of course, keep in mind that this is just an approximation of the actual invariant measure, which would seem to be singular. (If anyone can characterize this measure more explicitly, I'd very much like to see it; the numerical approximation is suggestive, but doesn't really tell much about what really happens in the limit.) BTW, there do exist actual fractals that resemble your iterated function system. For example, the following system: \begin{aligned} (x,y) &\mapsto \tfrac13 (x,y) \\ (x,y) &\mapsto \tfrac13 (x,y) + \tfrac23(0,1)\\ (x,y) &\mapsto \tfrac13 (x,y) + \tfrac23(0,-1) \\ (x,y) &\mapsto \tfrac13 (x,y) + \tfrac23(1,0) \\ (x,y) &\mapsto \tfrac13 (x,y) + \tfrac23(-1,0) \\ \end{aligned} which differs from your original system just by weighing the fixed corner points by twice as much in the average, generates the Vicsek fractal as its fixed set (again, shown rotated by 45° below): $\hspace{200px}$ The Sierpiński space-filling curve also bears some resemblance to your system: it also has the entire square as its limit set, but the intermediate stages of the construction show a similar fractal-like structure.<|endoftext|> TITLE: How large can a set of pairwise disjoint 2-(7,3,1) designs (Fano planes) be? QUESTION [5 upvotes]: As wikipedia defines well, the Fano plane is a small symmetric block design, specifically a 2-(7,3,1)-design. The points of the design are the points of the plane, and the blocks of the design are the lines of the plane. Now what I'm trying to find is the number of different Fano plane with points labeled $1$ to $7$. By "different" I mean every set of blocks should be disjoint from every other; that no two planes have common blocks. Put another way, I want to know how large a set of Fano planes can be if the sets of lines are pairwise disjoint. Every Fano plane has 7 blocks and here I listed all the blocks you could see in the picture: \begin{align*} \{6, 4, 3\} \\ \{3, 2, 5\} \\ \{5, 1, 6\} \\ \{6, 7, 2\} \\ \{3, 7, 1\} \\ \{5, 7, 4\} \\ \{1, 4, 2\} \\ \end{align*} Please use a combinatorial approach, not groups and algebra. REPLY [4 votes]: Any set of two pairwise disjoint Fano planes is maximal; three or more Fano Planes must share at least one line. Henning's answer is streets ahead in terms of succinctness (and quite clever to boot), but frankly, I didn't do all of this work not to post an answer :) But before I launch into a tedious case-by-case proof that there do not exist $3$ or more pairwise-disjoint Fano planes I would like to talk a bit about what I dream might yield a nicer approach for a someone with more insight than me (possibly my future self). It is fairly well known that any pair of Fano planes share exactly $0, 1$ or $3$ lines (in fact, I found today that each Fano plane is disjoint from exactly $8$ others, shares a single line with exactly $14$ others, and shares $3$ lines with exactly $7$ others). 
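These counts (and the fact that no two of the eight planes disjoint from a given one are disjoint from each other) are easy to confirm by brute force; here is a minimal Python sketch, using the labelled plane from the question as the starting point:

from itertools import permutations
from collections import Counter

# Lines of the labelled Fano plane listed in the question.
base = [frozenset(s) for s in
        ({6,4,3}, {3,2,5}, {5,1,6}, {6,7,2}, {3,7,1}, {5,7,4}, {1,4,2})]

# Relabelling the 7 points gives every labelled Fano plane; collect the distinct line sets.
planes = set()
for p in permutations(range(1, 8)):
    relabel = dict(zip(range(1, 8), p))
    planes.add(frozenset(frozenset(relabel[x] for x in line) for line in base))
planes = list(planes)
print(len(planes))  # 30

# Intersection profile of one fixed plane with the other 29: {0: 8, 1: 14, 3: 7}.
fixed = planes[0]
print(Counter(len(fixed & other) for other in planes[1:]))

# No three planes are pairwise line-disjoint.
for a in planes:
    partners = [b for b in planes if not (a & b)]
    assert not any(not (b & c) for b in partners for c in partners if b is not c)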
To quote a passage from Buckard Polsters article, YEA WHY TRY HER RAW WET HAT: A Tour of the Smallest Projective Space, There is a unique partition of the $30$ labelled Fano planes into $2$ sets $X$ and $Y$ of 15 each such that any $2$ Fano planes in $1$ of the sets have exactly $1$ line in common. You can take the $15$ single-intersection planes and make a nice picture of something called a generalized quadrangle from them (this one's "the doily", also from his article) whose points are all labeled Fano planes and whose lines are Lines the planes share (there's a big picture in the pdf showing the planes). It's obviously silent on pairs of planes that are disjoint, but I'd like to believe there's some insight to be gained by considering how the two partitions interact with each other. As a sidenote, the article is both very informative and quite fun to read. The usual definition is that two Fano planes are considered equivalent if they have the same set of lines. Under this definition, there are $30$ non-equivalent Fano planes that we'll arbitrarily separate into one of two categories: The first do not contain the line $\{1,2,3\}$ and there are $4! = 24$ of these. The second do contain the line $\{1,2,3\}$ and there are $3! = 6$ of these. Let us now find pairwise disjoint (those sharing no lines) Fano planes. We'll search in the first category of planes, those not containing the line $\{1,2,3\}$. I find it more convenient to use the set $\{1,2,3,x,y,z,w\}$ instead of $\{1,2,3,4,5,6,7\}$ as point labels. Without loss of generality, we will start with the plane whose lines are \begin{array}{ccc} 12x & 13y & 23z\\ 1zw & 2yw & 3xw\\ &xyz& \end{array} and attempt to find a disjoint plane by permuting $\{x, y, z, w\}$. Such a permutation must have no fixed points: if $x$ did not move, the line $12x$ would be preserved (and similarly for $y$ and $z$), while if $w$ were fixed, the line $xyz$ would be preserved. Thus our permutation is either a $4$-cycle ($a \mapsto b \mapsto c \mapsto d \mapsto a$), or a product of disjoint $2$-cycles, as these are the only permutations of $4$ letters with no fixed points. The list of products of disjoint $2$-cycles is rather small \begin{align*}x \leftrightarrow y &\text{ and } z \leftrightarrow w \\ x \leftrightarrow z &\text{ and } y \leftrightarrow w \\ x \leftrightarrow w &\text{ and } y \leftrightarrow z \end{align*} and for each, one of the lines containing $w$ is preserved (e.g., for $x \leftrightarrow y \text{ and } z \leftrightarrow w$, the line $1zw$). Thus we must use some $4$-cycle to permute the letters, and without loss of generality, we may choose $x \mapsto y \mapsto z \mapsto w \mapsto x$, yielding the plane with lines \begin{array}{ccc} 12w & 13x & 23y\\ 1yz & 2xz & 3zw\\ &xyw& \end{array} which is easily seen to be disjoint from our starting plane. Now we attempt to add a third plane, one that still avoids the line $\{1,2,3\}$. We have enough lines to avoid that it's best to keep them in mind, and attempt to place $x, y, z, w$ around the restrictions: In the picture, the green letters are the only possible values for a certain spot (we can't have $x$ in the line with $1$ and $2$, since we've already used the line $12x$, and so on, for the straight lines). The fact that the two circles have been lines $xyz$ and $xyw$ means that we can't use both $x$ and $y$ in the circular line. Thus, each point has two possibilities. By making a single choice, the rest of the points are completely determined. 
Alas, neither choice has a happy ending: So to recap, we've only managed to find a pitiful pair of disjoint Fano planes from the $24$ that avoid the line $\{1,2,3\}$. But maybe we can find a plane containing $\{1,2,3\}$ that's disjoint from ours (spoiler alert: We cannot). Using the template above (but $x, y, z, w$ instead of $4, 5, 6, 7$) for a plane containing $\{1,2,3\}$, we write possibilities for the three to-be-labeled points in green. We also have the same Forbidden lines from our two disjoint planes. Using each of the Forbidden lines with two letters (e.g., $2$ can't go at the top because the line $2yw$ has already been used), we are able to narrow down the possibilities. But, to our shared dismay, $2$ is needed in two places, and we cannot find another Fano plane, disjoint from our two. As we have been working in full generality, no set of two disjoint Fano planes can be expanded to a set of three or more pairwise disjoint Fano planes.<|endoftext|> TITLE: Find the remainder when a large number is divided by 35. QUESTION [6 upvotes]: I don't know why I am wrong with this problem. Here is what I did: The last two digit of $6^{2006}$ is 36. So the answer should be 1. Find the remainder when $6^{2006}$ is divided by 35. REPLY [2 votes]: Here is another approach. For solving problems$\pmod{N}$, we can try finding the prime factors of $N$, then solving the problem modulo each of those factors. Sometimes this is easier than directly solving $\pmod{N}$. Then something called the Chinese Remainder Theorem lets us combine the results for each factor to produce the result for $N$. In this case the prime factors of $35$ are $5$ and $7$. Since $6 \equiv 1 \pmod{5}$ and $6 \equiv -1 \pmod{7}$, we see straightaway that: $6^{2006} \equiv 1\pmod 5$ $6^{2006} \equiv 1 \pmod 7$ , since $2006$ is even The Chinese Remainder Theorem tells us that, given congruences $N \equiv a \pmod{p}$ and $N \equiv b \pmod {q}$ , then there is a unique $x$ such that $N \equiv x \pmod{pq}$, and it is the same $x$ for all $N$ (and this generalizes to multiple factors). In the general case you may have to follow an algorithm or use brute force to find $x$. In this case it is right in front of us: letting $N = 1$ satisfies our two congruences. So $1 \pmod{35}$ is the answer we are looking for.<|endoftext|> TITLE: What is a topological space good for? QUESTION [51 upvotes]: I know there are already some questions similar to this, which all give an answer that a topological space creates some structure on a set which is an abstraction of distance and makes it possible to define other concepts like connectedness, compactness, metrizability asf. I understand this in theory, but when I create a simple example I fail to grasp what distance/closeness means in a topological space. For example, when I have the following set $X$ and a set of subsets $\tau_1$ which does not satisfy the axioms of a topological space: $$ X=\{a,b,c\},\tau_1=\{X,\emptyset,\{a,b\},\{b,c\} \} $$ now I make this into a topological space by adding the sets $\{c\}$ and $\{b\}$ $$ X=\{a,b,c\}, \tau_2=\{X,\emptyset,\{a,b\},\{c\},\{b,c\},\{b\} \} $$ What is "better" now and how is this related to some notion of distance or closeness? In other words, what did I gain by adding the subsets $\{c\}$ and $\{b\}$ to $\tau_2$? More over what can I do with $\tau_2$ I can't do with $\tau_1$? REPLY [2 votes]: As Hurkyl demonstrates, general topological spaces can be defined in several ways. Another way is with the concept of a Derived Set. 
Wikipedia begins, In mathematics, more specifically in point-set topology, the derived set of a subset $S$ of a topological space is the set of all limit points of S. It is usually denoted by ${\displaystyle S'}$. The concept was first introduced by Georg Cantor in 1872 and he developed set theory in large part to study derived sets on the real line. This is amazing. Perhaps we should back up a bit and ask what is the benefit of studying sets. For Georg Cantor the creation of point-set topology came as an inspirational 'package deal'. I personally always prefers the term point-set topology over general topology. It emphasizes that instead of examining the elements in a set, we can talk about 'points' in a 'space'. This abstract and precise way of making 'spatial arguments' is something that Euclid, who developed 'point' geometry thousands of years ago, would no doubt appreciate. I wonder if mathematicians will be throwing a big 150-year anniversary party in 2022 to recognize the astounding achievements of this man...<|endoftext|> TITLE: Exponential of a symmetric matrix QUESTION [6 upvotes]: Let $A$ be a real, symmetric and positive definite matrix and suppose $B$ is a real symmetric matrix such that $\exp(B) = A$. Is $B$ unique? The solution of my homework sheet says that $B$ is uniquely determined by $A$ since, for every eigenvalue $t \in \mathbb{R}$ of $B$, the $t$-eigenspace of $B$ equals the $e^t$-eigenspace of $A$. So far I was only able to show one inclusion of this equality. How do I prove that if $r>0$ is an eigenvalue of $A$ and $v$ a vector such that $Av = rv$, then $Bv = \log(r)v$? Is this even true? REPLY [3 votes]: Thanks for the hint, Erick. I now came up with the following argument. Suppose $A$ is a real positive definite symmetric Matrix and that $B$ is a real symmetric logarithm of $A$, i.e. $B$ is real and symmetric and $\exp(B) = A$. We wish to show that $B$ is uniquely determined by $A$. Since $B$ is diagonalizable, it suffices to show that $\sigma(B)$, the set of eigenvalues of $B$, is uniquely determined by $A$, and that every eigenspace $E_t(B) = \lbrace v \in \mathbb{R}^n \, : \, Bv = tv \rbrace$, $t \in \sigma(B)$ is uniquely determined by $A$. Since $B$ is symmetric, $B$ is similar to a diagonal matrix $D$, which has entries from $\sigma(B)$. Hence $\exp(B) $ is similar to $\exp(D)$. It follows that $$\sigma(\exp(B)) = \lbrace e^t \, : \, t \in \sigma(B) \rbrace \Leftrightarrow \sigma(B ) = \lbrace \log(s) \,: \, s \in \sigma(A) \rbrace$$ Since $t \mapsto e^t$ is a bijection from $\mathbb{R}$ to $(0, \infty)$. We next show that for every eigenvalue $t$ of $B$ we have $$E_{t}(B) = E_{e^t}(\exp(B))$$ Let $v \in E_{t}(B)$. Then $$\exp(B)v = \sum_{k\geq 0}(\frac{B^k}{k!})v = \sum_{k\geq 0}(\frac{B^kv}{k!}) = \sum_{k\geq 0}(\frac{t^kv}{k!}) = e^t v$$ Hence $E_t(B) \subset E_{e^t}(\exp(B))$. In order to prove equality, we prove that the dimensions of both spaces are equal. Since both $B$ and $\exp(B)$ are diagonalizable, we have the direct sum decompositions $$ \bigoplus_{t \in \sigma(B)} {E_t(B)} = \mathbb{R}^n = \bigoplus_{t \in \sigma(B)} {E_{e^t}(\exp(B))}$$ Hence $$\sum_{t \in \sigma(B)} {\dim(E_t(B))} = n = \sum_{t \in \sigma(B)} {\dim(E_{e^t}(\exp(B)))}$$ This implies $\dim(E_t(B)) = \dim(E_{e^t}(\exp(B)))$ for every $t \in \sigma(B)$, as desired.<|endoftext|> TITLE: Hard Question in Stochastic processes - variance Martingales QUESTION [11 upvotes]: I got some hard challenge to solve and I am looking for a small clue/help. 
My question goes like this: 10 Englishmen are trying to leave a pub in rainy weather. They do it in the following way. Initially they store all 10 umbrellas in a basket next to the exit from the pub. They enter and drink a pint each. Then they return to the basket and each one picks an umbrella at random (random permutation). Those who picked their own umbrellas leave, while those who did pick a wrong umbrella put it back and return to the pub for another pint of ale. After that they return to the basket and try once again. And so on. Let $T$ be the number of rounds needed for all Englishmen to leave, and let $N$ be the total number of ales consumed during the procedure. (a) Compute $E(T)$. (b) Compute $E(N)$. Hint: For $n = 0, 1, 2, \dots$, set $X_n$ to be the number of Englishmen in the pub after the $n$-th round, and consider $M_n = (X_n + n) 1_{\{n<T\}}$.<|endoftext|> TITLE: Homotopic retract vs deformation retract QUESTION [5 upvotes]: Let's say that $A \subset X$ is a deformation retract. It follows that $A$ is both a retract and a space homotopically equivalent to $X$. Is the converse true? Probably not, but I couldn't find any example yet. More specifically the converse would be: If $A \subset X$ is a retract which is homotopy equivalent to $X$ as a topological space, then does there exist a homotopy between the retraction and the identity map: $$H:X \times [0, 1] \to X$$ such that $H(x,0)=x$, $H(x,1)\in A$ and $H(a,1)=a$ for $a\in A$. REPLY [2 votes]: No. Let $X = \{0,1,2,3,\dots\}$ and $A = \{1,2,3,\dots\}$, both with the discrete topology, and let $i: A \to X$ be the inclusion. Then $i$ has a retraction $r: X \to A, n\mapsto\max\{n,1\}$, and is even a cofibration. $X$ and $A$ are clearly isomorphic. The inclusion, however, is not a homotopy equivalence.<|endoftext|> TITLE: Counter-examples of Galois Correspondence QUESTION [5 upvotes]: What are some examples of a separable field extension $L/K$ and a subgroup $H$ of $\text{Aut}(L/K)$ such that $\text{Aut}(L/L^H) \neq H$? Here $L^H$ means the fixed field of $H$. REPLY [2 votes]: For $H$ finite, it is impossible: Dummit & Foote Section 14.2 Corollary 11 (adapted with your notation): Let $H$ be a finite subgroup of automorphisms of a field $L$ and $L^{H}$ be the fixed field, then $Aut(L/L^{H})=H$. In your problem, $H\leq Aut(L/K)\leq Aut(L)$, we don't care about $K$ at all, we still get $Aut(L/L^{H})=H$.<|endoftext|> TITLE: Prove $\frac{ab}{4-d}+ \frac{bc}{4-a}+\frac{cd}{4-b}+\frac{da}{4-c} \leqslant \frac43$ for positive $a,b,c,d$ QUESTION [6 upvotes]: $a,b,c,d \geqslant 0$ and $a^2+b^2+c^2+d^2=4$, Prove $$\frac{ab}{4-d}+ \frac{bc}{4-a}+\frac{cd}{4-b}+\frac{da}{4-c} \leqslant \frac43$$ I try some reverse AM-GM techniques but fail. I don't think rearrangement inequality works because of the cyclic nature of this inequality. I try to homogenize this inequality to remove the constraint, but the square root in the denominator kills me. REPLY [2 votes]: Note: in this solution, $\sum$ denotes the cyclic sum, so that $\sum f(a, b, c, d)=f(a, b, c, d)+f(b, c, d, a)+f(c, d, a, b)+f(d, a, b, c)$. Lemma: $\sum c^4+2\sum abc^2+6\sum ab\le 36$. Proof: Expanding the inequality $\sum[2(a-b)^2+(ab-1)^2]\ge 0$ gives: $$6\sum ab\le 20+\sum a^2b^2$$ Expanding $\sum (ac-bc)^2\ge 0$ gives: $$2\sum abc^2\le \sum a^2b^2+2(a^2c^2+b^2d^2)$$ Summing these, we have: \begin{align*} \sum c^4+2\sum abc^2+6\sum ab &\le 20+\sum c^4+2\sum a^2b^2+2(a^2c^2+b^2d^2) \\ &= 20+(a^2+b^2+c^2+d^2)^2 \\ &= 36 \end{align*} so the lemma is true as required.
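A randomized numerical spot-check of the lemma (only a sketch, not part of the proof; it uses nothing beyond the Python standard library):

# Sample nonnegative (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = 4 and verify that
# the cyclic sum  sum c^4 + 2*sum a*b*c^2 + 6*sum a*b  stays <= 36.
import random, math

def cyc(f, a, b, c, d):
    return f(a, b, c, d) + f(b, c, d, a) + f(c, d, a, b) + f(d, a, b, c)

worst = 0.0
for _ in range(100000):
    v = [abs(random.gauss(0, 1)) for _ in range(4)]
    r = 2 / math.sqrt(sum(x * x for x in v))   # rescale onto a^2+b^2+c^2+d^2 = 4
    a, b, c, d = (x * r for x in v)
    lhs = (cyc(lambda a, b, c, d: c ** 4, a, b, c, d)
           + 2 * cyc(lambda a, b, c, d: a * b * c * c, a, b, c, d)
           + 6 * cyc(lambda a, b, c, d: a * b, a, b, c, d))
    worst = max(worst, lhs)
print(worst)  # approaches 36, which is attained at a = b = c = d = 1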
Now we may use the lemma, along with Cauchy Schwarz of the form $\sum \frac{x_i^2}{y_i}\ge \frac{(\sum x_i)^2}{\sum y_i}$: \begin{align*} \sum\frac{ab}{4-d}&= \sum\frac{2ab}{7-d^2+(d-1)^2} \\ &\le \sum\frac{2ab}{7-d^2} \\ &= \sum\frac{2ab}{3+2ab+c^2+(a-b)^2} \\ &\le \sum\frac{2ab}{3+2ab+c^2} \\ &=4-\sum \frac{c^2+3}{c^2+2ab+3} \\ &=4-\sum \frac{(c^2+3)^2}{(c^4+2abc^2+6ab)+6c^2+9} \\ &\le 4-\frac{(\sum c^2+12)^2}{\sum(c^4+2abc^2+6ab)+6\sum c^2+36} \\ &\le 4-\frac{16^2}{36+60} \\ &=4-\frac{8}{3} \\ &=\frac{4}{3} \end{align*} so the inequality is proved. Equality holds at $a=b=c=d=1$.<|endoftext|> TITLE: What is the degree of the zero polynomial and why is it so? QUESTION [19 upvotes]: My teacher says- The degree of the zero polynomial is undefined. My book says- The degree of the zero polynomial is defined to be zero. Wikipedia says- The degree of the zero polynomial is $-\infty$. I am totally confused and want to know which one is true or are all true? REPLY [7 votes]: Defining it as $-\infty$ makes the most sense. As mentioned in Surb's answer and comments, some properties of degrees are kept intact this way, e.g. $\deg(PQ)=\deg P+\deg Q$ If $\deg P>\deg Q$ then $\deg(P+Q)=\deg P$ It also starts making more sense if you consider expressions that can take on negative powers as well. That is, instead of $\sum_{k=0}^na_kx^k$, consider $\sum_{k=-\infty}^{n}a_kx^k$. So you could have $3x^2+2x$ and $x+1+3x^{-2}$ and $2x^{-3}-\frac45x^{-5}$. Then degree is just "the supremum of all $k$s for which $a_k\neq 0$. The degrees of these 3 expressions are 2, 1 and -3 respectively. Then it's easy to see that 0, which has no nonzero coefficients, has a degree of $-\infty$. The same works if you consider expressions than can also have fractional degrees. Then the degree of, say, $3\sqrt{x}-x^{-3}$ is $1/2$. Of course, this inspires the definition of a "dual" degree, which is the infimum instead of the supremum. Then the degree of $3x^4+2x^3+5x^2$ will be 2, and the degree of $0$ will be $\infty$. Keeping the degree of $0$ undefined is understandable (not everyone wants to deal with infinities). Defining it as $-1$ has merits (if you don't consider negative powers, $0$ is one step down from nonzero constants). But there is absolutely no sense in defining the degree as $0$. The $0$ polynomial has as much similarity with constants, as constants have with linear polynomials.<|endoftext|> TITLE: Variation of Nim, where one has to divide a pile into any number of piles. QUESTION [5 upvotes]: I am learning the basics of combinatorial game theory (impartial games). After learning about decompose a game into the sum of games, I feel comfortable with games that can divided into the sum of 1 pile games. The situation is more or less clear to me: I have to find the game graph, calculate the Sprague-Grundy values and use them to find the solution to a game. But I do not really know what to do in case when I can't decompose a game in 1 pile games. Here is an example: You have piles of stones, people alternate turns, person who can't make a move loses. During the move, a player can select any one of the piles divide the stones in it into any number of unequal piles such that no two of the newly created piles have the same number of stones. I have huge problem in analyzing the 1 pile subgame (calculating grundy values for the pile of $1, 2, 3, ... n$ stones in the pile), because after each move 1 piles is divided into more piles. How should I analyze such games? 
REPLY [3 votes]: As Jyrki commented, moves in this game do decompose piles into sums of piles, and the xor rule still applies when you have a sum of more than two piles. With a little Mathematica code just using the mex and xor rules for the Sprague-Grundy values, I found that the values for a single pile from size $1$ to size $70$ are: $0,0,1,0,2,3,4,0,5,6,7,\ldots,66$. In particular, the value for a single pile of size $n$ (where $n>9$) appears to be $n-4$. This sort of pattern makes sense because as the numbers grow, there are so many ways to partition the pile that you can achieve every previous number as the xor of the values you can reach. However, I haven't been able to see an obvious pattern in the winning moves, so I'm not sure how to prove that this pattern continues forever. code: moves[n_] := moves[n] = Map[If[Union[#] == Sort[#] && Length[#] > 1, #, Nothing] &, IntegerPartitions[n]]; mex[list_] := mex[list] = Do[If[Not[MemberQ[list, i - 1]], Return[i - 1]], {i,Length[list] + 1}]; nimber[n_] := nimber[n] = mex[Map[Apply[BitXor, Map[nimber, #]] &, moves[n]]]; Edit: Using cgsuite, we can get much further. If we restrict the rules to say that you can decompose a pile into at most five piles, then the Sprague-Grundy values can be calculated very efficiently by cgsuite code: hr:=game.heap.HeapRules.MakeRules("&60;!."); hr.NimValues(500) With this, we see that splitting into at most five piles is enough for the pattern mentioned above to persist all the way up to size $499$ with value $495$. Restricting to at most four piles puts a 0 between 12 and 13, but continues an $n-5$ pattern thereafter all the way up to an initial pile size of $499$ as well. (At most three piles gives too few options for a simple pattern).<|endoftext|> TITLE: Prove $x+y^2+z^3 \geqslant x^2y+y^2z+z^2x$ for $xy+yz+zx=1$ QUESTION [5 upvotes]: $x,y,z \geqslant 0$ and $xy+yz+zx=1$, prove $$x+y^2+z^3 \geqslant x^2y+y^2z+z^2x$$ What I try $$x+y^2+z^3 \geqslant x^2y+y^2z+z^2x$$ $$\Leftrightarrow x(xy+yz+zx)+y^2+z^3- x^2y-y^2z-z^2x \geqslant 0$$ $$\Leftrightarrow \left(x^2z+\frac{z^3}{4}-z^2x \right)+ \left(y^2+\frac{3z^3}{4}+xyz-y^2z\right) \geqslant 0$$ I strongly believe that $\left(y^2+\frac{3z^3}{4}+xyz-y^2z\right) \geqslant 0$, but I have no proof. REPLY [3 votes]: As you wrote, from the identity: $$x(xy+yz+zx)+y^2+z^3-x^2y-y^2z-z^2x=z(\frac{z}{2}-x)^2+(y^2+\frac{3}{4}z^3-y^2z)+xyz$$ It suffices to check $y^2+\frac{3}{4}z^3\ge y^2z$ given $yz\le 1$. We take $3$ cases: If $z\le 1$, then $y^2\ge y^2z$ already. If $z\ge 1$ and $y\le \frac{1}{4}$, then $\frac{3}{4}z^3\ge \frac{3}{4}z\ge \frac{1}{16}z\ge y^2z$. If $z\ge 1$ (so $y\le 1$) and $y\ge\frac{1}{4}$ (so $z\le 4$) then $y^2\ge \frac{1}{4}y^2z$ and $\frac{3}{4}z^3\ge \frac{3}{4}z\ge \frac{3}{4}y^2z$. Adding these gives the required. This covers all cases, so the inequality is proved. "Equality" holds when $x=\frac{1}{\epsilon}, y=\epsilon, z=0$ and $\epsilon\to 0$.<|endoftext|> TITLE: What am I doing wrong in calculating the following limit? QUESTION [7 upvotes]: $$\lim_{x\to-2} \frac{x+2}{\sqrt{6+x}-2}=\lim_{x\to-2} \frac{1+2/x}{\sqrt{(6/x^2)+(1/x)}-2/x^2}$$ Dividing numerator and denominator by $x \neq0$ $$\frac{1+2/-2}{\sqrt{(6/4)+(1/-2)}-2/4}=\frac{0}{1/2}=0$$ but the limit is $4$ according to Wolfram Alpha? 
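(A quick randomized check under the constraint, only a sketch and not a proof, does support the claim:)

# Sample x, y, z >= 0 with xy + yz + zx = 1 and track the minimum of
# y^2 + (3/4) z^3 + x*y*z - y^2 * z over many random samples.
import random, math

worst = float("inf")
for _ in range(200000):
    x, y, z = (abs(random.gauss(0, 1)) for _ in range(3))
    s = x * y + y * z + z * x
    if s == 0:
        continue
    t = 1 / math.sqrt(s)        # rescale so that xy + yz + zx = 1
    x, y, z = t * x, t * y, t * z
    worst = min(worst, y * y + 0.75 * z ** 3 + x * y * z - y * y * z)
print(worst)  # stays >= 0 in these samples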
REPLY [3 votes]: You can apply L'Hôpital's rule, i.e. differentiate the numerator and denominator with respect to $x$: $$\lim_{x\to-2} \frac{x+2}{\sqrt{6+x}-2}=\lim_{x\to-2}\frac{1}{\frac{1}{2\sqrt{6+x}}}=\lim_{x\to-2}2\sqrt{x + 6} = 2 \sqrt{-2 + 6} = 4$$<|endoftext|> TITLE: Maximum value of the smallest number of operations to obtain configuration from original configuration QUESTION [9 upvotes]: Let $n$ be a positive integer. There are $n(n+1)/2$ marks, each with a black side and a white side, arranged into an equilateral triangle, with the biggest row containing $n$ marks. Initially, each mark has the black side up. An operation is to choose a line parallel to the sides of the triangle, and flip all the marks on that line. A configuration is called admissible if it can be obtained from the initial configuration by performing a finite number of operations. For each admissible configuration $C$, let $f(C)$ denote the smallest number of operations required to obtain $C$ from the initial configuration. What is the maximum value of $f(C)$, where $C$ varies over all admissible configurations? REPLY [5 votes]: We can think of the operations as vectors spanning a subspace of $\mathbb{Z}_2^{n(n+1)/2}$. We're searching for the diameter of the Cayley graph of the additive group (with these operation vectors as generators) of this vector space. This is usually difficult, but might be easier given the special structure of the group. As an upper bound, no admissible configuration will need more than $\frac{3n}{2}$ operations: Let's call the three sides of the triangle $P$, $Q$ and $R$. Let's call $P$-operations those operations that affect a line parallel to $P$, and likewise define $Q$- and $R$-operations. When we add all $P$- and all $Q$-operations, we get the zero vector. Now assume there is an admissible configuration that needs more than $\frac{3n}{2}$ operations. That means there is a vector that can be written as the sum of $m>3n/2$ operation vectors, but not with fewer than $m$ such vectors. For one such fixed way of writing the vector, separately count the number of occurrences of $P$-, $Q$- and $R$-operations, and call those numbers $p$, $q$ and $r$. Then we have $$ p+q+r = m > 3n/2, $$ and without loss of generality we may assume $$p\geq q\geq r.$$ It follows that $$p+q>n. \tag{*}\label{*}$$ Now add zero in the form of all $P$- and $Q$-operations to the vector sum and let them interact with the $P$- and $Q$-operations that were already there. Those that were already there will now appear twice in the sum and cancel each other, leaving only the $n-p$ $P$-operations and the $n-q$ $Q$-operations that weren't there before. (The $R$-operations remain unchanged.) We have found a new way of writing the vector, which uses $$2n-(p+q)+r$$ operations. Because of \eqref{*}, this is less than $m$, a contradiction to the assumed minimality of $m$. $\blacksquare$ Unless I made a mistake, the series of maximally needed numbers of operations for different $n$ starts like this: 1, 2, 3, 6, 7, 8, 9, 12,... Observe that 1, 6, 7 and 12 are the smallest (integer) values allowed by our upper bound. Observe also that if we allow $n=0$, then we can prepend a 0 to the series, and we can see what might be a regular pattern. Additional results I can now show that for $n=4k+2$ and $4k+3$ we always need one operation less than already guaranteed by the above upper bound. Call an operation even if it affects an even number of marks, odd otherwise. It is not difficult to show that every mark is affected by either zero or two even operations, so all even operations together have no net effect.
The same is true for the combination of for example all odd $P$-operations, all odd $Q$-operations and all even $R$-operations, and two other such combinations. Crucially, if we choose either the even or the odd $P$-operations, and either the even or the odd $Q$-operations, then there is a way to choose either the even or the odd $R$-operations so that the net effect is nothing. Now assume that for $n=4k+2$, we need $p+q+r=6k+3$ operations for some admissable configuration. The argument above that $p+q>n$ leads to a contradiction needs no assumption about $p+q+r$ (that was just used to get $p+q>n$). So we have $p=q=r=2k+1$. Next count how many of the used $P$-operations are even and how many are odd, call those numbers $p_e$ and $p_o$, and do the same for the types of operations. Then we have $$p_e+p_o=q_e+q_o=r_e+r_o=2k+1.$$ Since $p_e+p_o$ is odd, $p_e$ and $p_o$ can't be equal. Call the bigger one $p_+$, the other $p_-$, and likewise for the rest. Without loss of generality we may assume $p_+\geq q_+\geq r_+$. We will now choose one of the new neutral sets of operations. If $p_+$=$p_e$, we choose the even $P$-operations, otherwise the odd ones. In the same way, if $q_+=q_e$, we choose the even $Q$-operations, otherwise the odd ones. Finally, we choose the even or odd $R$-operations, so that the net effect is none. As above, we let these operations interact with the old ones, cancelling those that were already there. Since the number of even as well as the number of odd $P$-operations is $2k+1$, this will replace $p_+$ $P$-operations with $2k+1-p_+=p_-$ $P$-operations, so that in total there will be $2p_-$ $P$-operations. The $Q$-operations are reduced in the same way. The number of $R$-operations may also be reduced, but may as well be increased to $2r_+$. However, because of $q_+\geq r_+$, the possible increase of $R$-operations will at most be as big as the decrease of $Q$-operations, so we have at least the decrease of $P$-operations as net result, contradicting the minimality assumption. $\blacksquare$ We can do essentially the same for $n=4k+3$, here's a sketch: the old upper bound is $6k+4$, so we have $p=2k+2$ and $q=r=2k+1$. There are $2k+1$ even operations of each kind, and $2k+2$ odd ones. We may have $p_e=p_o$, if that is the case, choose the even $P$-operations, that will decrease the number of $P$-operations by one. Again assume $q_+\geq r_+$, and if $q_+=r_+$ and only one of them corresponds to even operations, choose these operations. Then again the total number of $Q$- and $R$-operations will not increase. $\blacksquare$ Finally, an argument that for $n=4k$, we always need $6k$ operations. Call an operation short if it affects at most $n/2$ marks. The set of all short operations has size $6k$ with $p_e=p_o=q_e=q_o=r_e=r_o=k$, so it can not be reduced with any of the methods above. Any way to combine operations so that they have the same effect as all short operations would also give a new way of combining operations such that their total effect is none. That means that considered as vectors they belong to the kernel of the obvious linear map from the $3n$-dimensional $\mathbb{Z}_2$ vector space of formal linear combinations of operations to $\mathbb{Z}_2^{n(n+1)/2}$. For $n\geq 2$, the dimension of the image seems to be $3n-3$. That doesn't look like it's hard to prove, but I have only calculated it for $n$ up to 250. 
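The rank computation described here can be reproduced quickly. The following is a small sketch (Python; marks encoded as bit positions of an integer, an encoding of my own choosing, not the author's code) that builds the $3n$ operation vectors and computes their rank over $\mathbb{Z}_2$ by Gaussian elimination, confirming $\dim C_n = 3n-3$ for small $n$ (the answer's own computation went up to $250$).

def operation_vectors(n):
    marks = [(i, j) for i in range(1, n + 1) for j in range(1, i + 1)]
    bit = {m: k for k, m in enumerate(marks)}
    lines = ([[(i, j) for j in range(1, i + 1)] for i in range(1, n + 1)] +
             [[(i, j) for i in range(j, n + 1)] for j in range(1, n + 1)] +
             [[(i, i - d) for i in range(d + 1, n + 1)] for d in range(n)])
    return [sum(1 << bit[m] for m in line) for line in lines]

def gf2_rank(vectors):
    # Gaussian elimination over Z_2, using Python integers as bit vectors
    rank, rows = 0, list(vectors)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            low = pivot & -pivot
            rows = [r ^ pivot if r & low else r for r in rows]
    return rank

for n in range(2, 41):
    assert gf2_rank(operation_vectors(n)) == 3 * n - 3
print("dim C_n = 3n - 3 verified for n = 2..40")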
In any way, if it's true, then the kernel has dimension 3, and we have already seen seven nonzero kernel elements, so there couldn't be other ways to rewrite combinations of operations. Missing ends Let's prove the dimension formula conjectured above: Call the vector space of formal linear combinations of operations $O_n$ and the image of the map considered above $C_n$. Let $n\geq 2$ and look at the maps $$O_{n+1}\to C_{n+1}\to C_n,$$ where the last map comes from ignoring the last line of marks. The vectors corresponding to the operations that affect all marks, the left mark, or the right mark of the last line all belong to the kernel of the second map and are linearly independent. So that map has nullity at least 3. So does the first map, as explained above. The maps are surjective and $O_{n+1}$ has dimension $3n+3$. So if $C_n$ has dimension $3n-3$, then both nullities are exactly 3 and the dimension of $C_{n+1}$ is $3n$. It is easy to see that $C_2$ has indeed dimension 3, so the result follows by induction. $\blacksquare$ So for $n=4k$, the combination of all short operations indeed leads to an admissable configuration that can't be reached by fewer than $6k$ operations. For $n=4k+2$, again all short operations lead to a configuration that needs the maximal number of operations. However, as we have seen, that number is $6k+2$. For odd $n$, use all short operations and one operation that affects the middle mark. For $n=4k+1$, this set of operations can't be reduced, for $n=4k+3$ it can only be reduced by 1. So as suspected, the series is given by $$a_n=\begin{cases} 6k & \text{if $n=4k$,} \\ 6k+1 & \text{if $n=4k+1$,}\\ 6k+2 & \text{if $n=4k+2$,}\\ 6k+3 & \text{if $n=4k+3$.}\\ \end{cases}$$<|endoftext|> TITLE: Real Analysis, Folland problem 1.4.24 Outer Measures QUESTION [5 upvotes]: Let $\mu$ be a finite measure on $(X,M)$, and let $\mu^*$ be the outer measure induced by $\mu$. Suppose that $E\subset X$ satisfies $\mu^*(E) = \mu^*(X)$ (but not that $E\in M$). a.) If $A,B\in M$ and $A\cap E = B\cap E$ then $\mu(A) = \mu(B)$ b.) Let $M_{E} = \{ A \cap E: A\in M\}$, and define the function $\nu$ on $M_E$ defined by $\nu(A\cap E) = \mu(A)$ (which makes sense by (a)). Then $M_E$ is a $\sigma$-algebra on $E$ and $\nu$ is a measure on $M_E$. Attempted proof a.) Suppose $A,B\in M$ and $A\cap E = B\cap E$ where $E\subset X$. Since $\mu$ is finite on $(X,M)$ then $\mu(X) < \infty$ and we have $\mu(X) = \mu(A\cup A^c) = \mu(B\cup B^c)$. Note that $$\mu(A) + \mu(A^c\cap B) = \mu(A\cup B) = \mu(B) + \mu(B^c\cap A)$$ by symmetry, so it suffices to show that $\mu(A^c\cap B) = 0$. We have that $E\subset A\cup B^c = (A^c\cap B)^c$. So $$\mu(X) = \mu^*(X) = \mu^*(E)\leq \mu^*((A^c\cap B)^c) = \mu((A^c\cap B)^c) = \mu(X) - \mu(A^c\cap B)$$ and hence $\mu(A^c\cap B) = 0$. Attempted solution b.) Let $M_E = \{A\cap E: A\in M\}$ and define a function $\nu$ on $M_E$ defined by $\nu(A\cap E) = \mu(A)$. To show that $M_E$ is a $\sigma$-algebra I believe we have to first show that it is an algebra and in doing so we have to show that $M_E$ is closed under finite unions and complements. Then once we have shown that $M_E$ is an algebra we have to show that it is closed under countable unions or countable intersections to show that it is a $\sigma$-algebra. I am stuck here any suggestions on this part is greatly appreciated. REPLY [9 votes]: Let $\mu$ be a finite measure on $(X,M)$, and let $\mu^*$ be the outer measure induced by $\mu$. 
Suppose that $E\subset X$ satisfies $\mu^*(E) = \mu^*(X)$ (but not that $E\in M$). a.) If $A,B\in M$ and $A\cap E = B\cap E$ then $\mu(A) = \mu(B)$ b.) Let $M_{E} = \{ A \cap E: A\in M\}$, and define the function $\nu$ on $M_E$ defined by $\nu(A\cap E) = \mu(A)$ (which makes sense by (a)). Then $M_E$ is a $\sigma$-algebra on $E$ and $\nu$ is a measure on $M_E$. Before we proceed to the proof, it is worth to understand the idea in this exercise. Given a finite measure $\mu$ on $(X,M)$, and $E \subseteq X$, we want to "restrict $\mu$ to $E$". Suppose $E\in M$. First we need to restrict $M$ to $E$. The way to do it is very natural, we define $$M_{E} = \{ A : A\in M \text{ and } A\subseteq E\}$$ and we define, for all $A\in M_E$, $\nu(A)=\mu(A)$. It is easy to prove that $M_E$ is a a $\sigma$-algebra and $\nu$ is a measure. Now, what happens if $E$ is not measurable ($E\notin M$)? In this case, $M_E$ as previously defined may reduce to just $\{\emptyset\}$ and $E$ surely will not be in $M_E$ ($M_E$ is no longer a $\sigma$-algebra). So, "restriction of a measure" to a non-measurable set, in general, does not work. However, IF $\mu^*(E) = \mu^*(X)$, we can adjust our definitions and have a similar result. We define $$M_{E} = \{ A \cap E: A\in M\}$$ and, for all $A\cap E\in M_E$, $\nu(A\cap E)=\mu(A)$ Then we need to prove that $M_{E}$ is a $\sigma$-algebra and that $\nu$ is a measure. The first step to prove $\nu$ is a measure is to prove that $\nu$ is well defined. So we must prove that, if $A, B\in M$ and $A \cap E =B \cap E$, then $\nu(A \cap E)=\nu(A \cap E)$, that is, $\mu(A)=\mu(B)$. (That is why we need item a. in the exercise). Now let us proceed to the proof. Proof: a.) Suppose $A,B\in M$ and $A\cap E = B\cap E$. Then $$(A \setminus B) \cap E = A\cap B^c \cap E = (A\cap E) \cap B^c = (B\cap E) \cap B^c = \emptyset$$ So $A \setminus B \subseteq E^c$, so $E \subseteq (A \setminus B )^c$, and we have, using that $(A \setminus B )^c\in M$ , $$\mu(X)=\mu^*(E)\leqslant \mu^*((A \setminus B )^c) = \mu((A \setminus B )^c)\leqslant \mu(X)$$ So, we have $ \mu((A \setminus B )^c) =\mu(X)$. On the other hand, we have that $\mu(X)=\mu(A \setminus B ) +\mu((A \setminus B )^c)$. So we get $$\mu(X)=\mu(A \setminus B ) +\mu(X)$$ Since $\mu(X)<\infty$, we have $ \mu(A \setminus B )=0$ In a similar way, we can prove that $ \mu(B \setminus A )=0$. Now, note that $$\mu(A)=\mu(A)+0=\mu(A)+\mu(B \setminus A )=\mu(A\cup B ) = \mu(B)+\mu(A \setminus B )= \mu(B)+0=\mu(B)$$ So, we have $\mu(A)=\mu(B)$. b.) It is straight forward to check that $M_E$ is a $\sigma$-algebra on $E$. From item a, we know that $\nu$ is well defined. It is straight forward to check it is in fact a measure. $M_{E} = \{ A \cap E: A\in M\}$ is a $\sigma$-algebra on $E$. i. Since $\emptyset \in M$, $\emptyset = \emptyset \cap E \in M_{E}$. ii. Given any $B\in M_{E}$, there is $A \in M$ such that $B=A \cap E$ then $$E\setminus B = E \setminus (A\cap E) = (X\setminus A) \cap E$$ Since $X\setminus A \in M$, we have that $E\setminus B \in M_{E}$. iii. Given any countable family $\{B_n\}_{n\in \mathbb{N}}$ of sets in $M_{E}$, for each $n\in \mathbb{N}$, there is $A_n \in M$ such that $B_n=A_n \cap E$. So $$ \bigcup_{n\in \mathbb{N}} B_n= \bigcup_{n\in \mathbb{N}} (A_n \cap E) = \left( \bigcup_{n\in \mathbb{N}} A_n \right)\cap E$$ Since $\bigcup_{n\in \mathbb{N}} A_n \in M$, we have that $\bigcup_{n\in \mathbb{N}} B_n \in M_{E}$. So we have proved $M_{E}$ is a $\sigma$-algebra on $E$. 
Now, let us prove that the function $\nu$ on $M_E$ defined by $\nu(A\cap E) = \mu(A)$ is a measure on $M_E$. First, note that, as a consequence of item a, $\nu$ is well defined as a function. Since $\emptyset=\emptyset \cap E$, we have $$\nu(\emptyset)=\nu(\emptyset \cap E)=\mu(\emptyset)=0$$ Second, given any countable family $\{B_n\}_{n\in \mathbb{N}}$ of disjoint sets in $M_{E}$, for each $n\in \mathbb{N}$, there is $A_n \in M$ such that $B_n=A_n \cap E$. Note that the sets $A_n$ may not be disjoint. Let, for each $n\in \mathbb{N}$, define $C_{n}=A_{n} \setminus \bigcup_{i=0}^{n-1} A_i$, (note that $C_0=A_0$). Then, we have that the sets $C_n$ are disjoint sets in $M$, and we also have $$ C_n\cap E= \left( A_{n} \setminus \bigcup_{i=0}^{n-1} A_i \right)\cap E= (A_{n}\cap E) \setminus \bigcup_{i=0}^{n-1} (A_i \cap E)= B_n \setminus \bigcup_{i=0}^{n-1} B_i = B_n$$ the last step holds because $\{B_n\}_{n\in \mathbb{N}}$ is a family of disjoint sets. So, by the definition of $\nu$, for all $n\in \mathbb{N}$, $$\nu(B_n) = \nu(C_n\cap E)= \mu(C_n) \tag{1}$$ and we also have that $\bigcup_{n\in \mathbb{N}} C_n \in M$ and $$\bigcup_{n\in \mathbb{N}} B_n = \bigcup_{n\in \mathbb{N}} (C_n \cap E)=\left ( \bigcup_{n\in \mathbb{N}} C_n \right) \cap E$$ So we have, using $(1)$, $$\nu \left ( \bigcup_{n\in \mathbb{N}} B_n \right) = \nu \left (\left ( \bigcup_{n\in \mathbb{N}} C_n \right) \cap E\right)= \mu\left ( \bigcup_{n\in \mathbb{N}} C_n \right) = \sum_{n\in \mathbb{N}} \mu(C_n)=\sum_{n\in \mathbb{N}} \nu(B_n) $$<|endoftext|> TITLE: What is Convex about Locally Convex Spaces? QUESTION [5 upvotes]: This might be a silly question, but what motivates the name "locally convex" for locally convex spaces? The definition in terms of semi-norms seems to have nothing to do with convexity or with the other definition involving neighborhood bases -- and the neighborhood basis definition makes little sense to me either, because it refers to sets which are "absorbent", "balanced", and convex. Why the restrictions? And then why aren't they called "locally absorbent, balanced, and convex spaces"? And why do we never here about the terms absorbent or balanced in any other context? Also, I know that Banach spaces are locally convex, but this just confuses me further -- what do Banach spaces have to do with convexity? And why are locally convex spaces a natural generalization of Banach spaces? I have some vague ideas -- the Hahn-Banach theorem (and hyperplanes) are used a lot in convex programming, and the "p-norm" is only a norm for $p \ge 1$, the same values for which $x^p$ is a convex function -- are norms somehow "convex", does this follow from the triangle inequality? Then why aren't arbitrary complete metric spaces locally convex? Any insights would be greatly appreciated. REPLY [6 votes]: The only reason the question seems silly is that you include the answer! A locally convex TVS is one that has a basis at the origin consisting of balanced absorbing convex sets. The reason for the emphasis on "convex" is that that's what distinguishes locally convex TVSs from other TVSs: every TVS has a local base consisting of balanced absorbing sets. Regarding "Also, I know that Banach spaces are locally convex, but this just confuses me further -- what do Banach spaces have to do with convexity? And why are locally convex spaces a natural generalization of Banach spaces?", surely this is clear. Balls in a normed space are convex (as well as balanced and absorbing). 
Why aren't arbitrary complete metric spaces locally convex? Huh? The notion of convexity makes no sense in a general metric space.<|endoftext|> TITLE: Functions of a random walk and martingales QUESTION [7 upvotes]: Let $\xi_1,\xi_2,\ldots$ be a sequence of iid random variables, such that $$\mathbb{P}(\xi_i=1)=p\ne \frac{1}{2},\,\mathbb{P}(\xi_i=-1)=q=1-p.$$ Consider the corresponding random walk $S_n=\xi_1+\xi_2+\ldots+\xi_n$. The goal is to find all functions $f:\mathbb{Z}\to\mathbb{R}$, such that $M_n=f(S_n)$ is a martingale. What I know is that $f_0(m)=\left(\frac{q}{p}\right)^m$ is a solution (it can be shown easily). Is it true that $f_0$ and a constant span the space of such functions? I will be grateful for some hints. Thanks in advance! REPLY [2 votes]: Elaborating on @Did's comment, from $\mathbb E[f(S_1)\mid S_0=k]=f(k)$ we have $$f(k)= f(k-1)\mathbb P(\xi_1=-1)+f(k+1)\mathbb P(\xi_1=1)=q\cdot f(k-1)+p\cdot f(k+1), $$ and hence $$f(k+1) - \frac1p f(k) +\left(\frac qp\right) f(k-1)=0.$$ This recurrence relation has characteristic polynomial $$\lambda^2-\frac1p\lambda +\frac qp, $$ with roots $\lambda=1,\frac qp$. It follows that $$f(k) = c_1\left(\frac qp\right)^k + c_2, $$ where $c_1,c_2\in\mathbb R$.<|endoftext|> TITLE: Jensen-like averaging inequality on integers QUESTION [5 upvotes]: Let $\mathbb{Z}^*=\mathbb{Z}^+\cup\{0\}$. Let $f:\mathbb{Z}^*\rightarrow\mathbb{R}$ be a nondecreasing function such that $f(a+b)\leq f(a)+f(b)$ for all $a,b\in\mathbb{Z}^*$. Is it true that for all $k,n\in\mathbb{Z}^+$, we have $$f(k)\ge\frac{f(a_1)+\dots+f(a_n)}{2n}$$ for all $a_1,\dots,a_n\in\mathbb{Z}^*$ with $a_1+\dots+a_n=kn$? For $n=2$, this is true, since $f(k)\geq\frac{f(2k)}{2}\geq\frac{f(a_1)+f(a_2)}{4}$, where we use both conditions on $f$. For $n=1$ it is also true trivially. REPLY [3 votes]: The statement is true, even with strict inequality. By induction we have $f(nk)\leq nf(k)$. Suppose that we round each $a_i$ up to the next multiple of $k$, say $b_i$, so $f(a_i)\leq f(b_i)$ where $b_i-a_i<k$.<|endoftext|> TITLE: How to resolve the singularity of $xy+z^4=0$? QUESTION [5 upvotes]: This singularity cannot be resolved by a single blow-up. I don't know how to blow up the singularity of the "variety" obtained by the first blow-up; in other words, I am confused about how to do the successive blow-ups. Does it have an explicit form in affine coordinates? And how to find the exceptional divisor in this case? Please show the process in detail. Here is my attempt: Let $(x,y,z;p_1,p_2,p_3)$ be the coordinates of $\mathbb{C}^3\times\mathbb{P}^2$ , after blow-up at $(0,0,0)$ we have three pieces: $p_1=1, x^2+p_2p_3=0$ $p_2=1, p_3+p_1^4y^2=0$ $p_3=1, p_2+p_1^4z^2=0$ And from the Jacobian matrix, the first piece still has a singularity. We need to blow up again. But it is not a subvariety of $\mathbb{C}^3$ anymore, so we cannot use the technique again. My problem is what to do next. P.S. There is a similar problem in Hartshorne's book (Exercise 5.8 in Chapter V). P.P.S. I am unfamiliar with intersection theory, so I want to work this out in the rudimentary way... REPLY [5 votes]: I don't know if you are still interested, here is a more detailed answer. By the change of variables $x = a+ib, y=a-ib, z = c$ your singularity becomes $a^2 + b^2 + c^4 = 0$, and indeed it is an $A_3$ singularity, so I will assume the equation is $x^2 + y^2 + z^4 = 0$. For the blow-up, I will take affine coordinates, so performing several blow-ups is easier.
The first chart will have coordinates $x, y_1 := y/x, z_1 = z/x$ (which corresponds to the usual charts on $\Bbb P^2$), so $xy_1 = y$ and $xz_1 = z$. Plugging these relations we get the equation $x^2(1 + y_1^2 + x^2z_1^4) = 0$. This is a reducible surface, the $x^2 = 0$ term correspond to the exceptional divisor $E$ of the blow-up of $\Bbb C^3$, the $1 + y_1^2 + x^2z_1^4 = 0$ term is the strict transform $X'$ of your surface which is smooth so we don't need to worry about this chart, similarly for the chart with coordinates $x/y,z/y,y$. Now let's look at the last chart, i.e the coordinates are $x_2,y_2,z$ where $zx_2 = x$ and $zy_2 = y$. The equation becomes $z^2(x_2^2 + y_2^2 + z^2) = 0$. The strict transform is singular (it is the $A_1$ singularity). The exceptional divisor $E_1$ is given by $z = 0, x_2^2 + y_2^2 = 0$ i.e we obtain two lines, intersecting at $(0,0,0)$ which is the singular point. Spoiler : after one further blow-up, they will be separated and we will get the $A_3$ configuration as expected. Let's check it, and perform a second blow-up. I will check everything in one chart and you can check for yourself that nothing more happens in the two other charts. After one blow-up, our surface equation was given by $x_2^2 + y_2^2 + z^2 = 0$ , and the exceptional divisor $E_1$ was given by $z = 0, x_2^2 + y_2^2 = 0$, i.e $z = 0, x_2 = \pm i y_2$. We will take the following coordinate for the second blow-up : $x_3,y_2,z_3$ with $y_2x_3 = x_2, z_3y_2 = z$. The equation becomes $y_2^2(x_3^2 + 1 + z_3^2) = 0$. As usual, $y_2^2 = 0$ corresponds to the exceptional divisor of the blow-up of $\Bbb C^3$ and $x_3^2 + 1 + z_3^2 = 0$ is the strict transform of the surface, which is smooth. The exceptional divisor $E_2$ is given by $y=0, 1 + x_3^2 + z_3^2 = 0$ i.e it's a smooth conic. Now, the strict transform of $E_1$ is given by the equation $z_3 = 0, x_3 = \pm i$. We see that $E_2$ intersects each component of $E_1$ at $(\pm i, 0, 0)$ and I claim that the two component of $E_1$ do not intersect anymore, so we get indeed the dual configuration of the Dynkin diagram $A_3$.<|endoftext|> TITLE: Why Are Semimartingales the Largest Possible Class of Stochastic Integrators? QUESTION [6 upvotes]: I am trying to understand why semimartingales are the most general possible class of stochastic integrators. (I was hoping that this question would give me my answer, but it didn't.) I thought at first it was because they were the most general class of processes with defined quadratic variation. (Because I thought local martingales are supposed to be the most general class for which the quadratic variation was non-zero and could not be decomposed into the sum of non-trivial processes, one with zero quadratic variation, and because adapted processes of locally bounded variation have zero quadratic variation.) However, it turns out that this is not the content of the Bichteler-Dellacherie theorem (as I had expected). 
Furthermore, there are apparently counter-examples involving fractional Brownian motion (which I do not understand at all) Quadratic variation - Semimartingale The definition of the stochastic integral, when moving from Brownian motion for which the differential is just $dt$ because the quadratic variation is $t$, to arbitrary local martingales, and then semimartingales, clearly has something to do with quadratic variation, but I feel like I must be missing some part of the story if semimartingales are not the most general processes for which quadratic variation exists, is defined, and can be non-zero. EDIT: I guess it probably has something to do with p.52 of Protter, Stochastic Integration and Differential Equations, namely where a semimartingale $X_t$ is defined as any adapted process such that it is cadlag and for which the integral of the simple predictable process $H_t$ $$I_X(H)= H_0X_0 + \Sigma_{i=1}^n H_i [X_{T_{i+1}}-X_{T_i}]$$ is continuous. However, what I don't understand: 1. Why cadlag? I assume because it guarantees only jump discontinuities, and with that at most countably many discontinuities (like for a distribution function? or is that only because a distribution function is increasing -- if we don't have that the sample paths are of locally bounded variation anymore, does it still hold that we can only have countably many jump discontinuities?) 2. Why do we care if that integral is continuous? I know it forms the basis for the Ito integral, but what's wrong with having it be a discontinuous process? (If for example we can generalize from continuous semimartingales to arbitrary semimartingales which must be cadlag, why can't we consider cadlag integrals?) REPLY [2 votes]: Check out George Lowther's blog: https://almostsure.wordpress.com/2010/01/03/the-stochastic-integral/ Especially Lemma 3, answers your question "why cadlag?"<|endoftext|> TITLE: Why is a square root not a linear transformation? QUESTION [16 upvotes]: The question says: Prove that the function $f(x)=\sqrt{x}$ is not a linear transformation (particularly $\sqrt{1+x^2}≠1+x$) I think that this is because the exponent of $\sqrt{x}$ is $1/2$, and with that exponent, $T(cu)=c^{1/2}T(u)$, which does not follow the rules of a linear transformation... REPLY [3 votes]: The statement Statement 1: The square root is not a linear transformation. is not generally true, at least as it stands. According to the most often applied definition (see, e.g., Wikipedia), a linear transformation $f : V \to W$ is a mapping between two vector spaces $V$ and $W$ over the same field. (In a weaker form of the definition $V$ and $W$ can be modules over the same ring.) The difficulty is to acknowledge that the operations on $V$ and $W$ may well be different from usual addition and usual multiplication, even if the symbols $+$ and $\cdot$ are often used to denote vector addition and and scalar multiplication when we talk about abstract vector spaces. The terms vector addition and scalar multiplication themselves are suggestive, but misleading. Indeed, when the vector space under consideration is the concrete vector space $\mathbb{R}$, then vector addition and scalar multiplication agree with usual addition and usual multiplication; but there are numerous examples in which this is not the case. An important and simple example is the concrete vector space of the positive-real numbers, $\mathbb{R}_{>0}$, in which multiplication and exponentiation take on the role of vector addition and scalar multiplication. 
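A quick numerical illustration of this last point (a sketch only; random spot checks, not a proof): with multiplication playing the role of vector addition and exponentiation that of scalar multiplication on $\mathbb{R}_{>0}$, the square root does preserve both operations.

import math, random

random.seed(0)
for _ in range(1000):
    x = random.uniform(1e-2, 1e2)
    y = random.uniform(1e-2, 1e2)
    lam = random.uniform(-5.0, 5.0)
    # "vector addition" is multiplication, "scalar multiplication" is exponentiation
    assert math.isclose(math.sqrt(x * y), math.sqrt(x) * math.sqrt(y), rel_tol=1e-12)
    assert math.isclose(math.sqrt(x ** lam), math.sqrt(x) ** lam, rel_tol=1e-12)
print("sqrt preserves both R_{>0} vector-space operations at all sampled points")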
Now consider the OP's square-root function $f$. The domain of $f$ cannot be $\mathbb{R}$, because the square root is not defined for negative values (let us leave aside the complex-valued case). Therefore, it is questionable if the operations of $\mathbb{R}$ are, after all, validly used to proof that Statement 1 is true. It may be argued that such proofs are flawed, because the requirements of the definition of a liear map are not met. The appendage in the OP's question, $\sqrt{1 + x^2} \neq 1 + x$, is actually irrelevant for this matter. In fact, one can easily prove, meeting all requirements of the definition of a linear map, the converse of Statement 1: Proposition 1: The map $f : \mathbb{R}_{>0} \to \mathbb{R}_{>0}, \, x \mapsto \sqrt{x}$ is a linear transformation. Proof: Let $x,y \in \mathbb{R}_{>0}$, and let $\lambda \in \mathbb{R}$. Then $$ \begin{array}{rclr} f(x \cdot y) =& \sqrt{x \cdot y} = \sqrt{x} \cdot \sqrt{y} &= f(x) \cdot f(y) & \text{(preservation of vector addition)} \\ f(x^\lambda) =& \sqrt{x^\lambda} = \left( \sqrt{x} \right)^\lambda &= f^\lambda(x) & \text{(preservation of scalar multiplication)} \end{array} $$ $$\tag*{$\square$}$$<|endoftext|> TITLE: $A^\dagger$ - how to handwrite this?! QUESTION [6 upvotes]: In one book I came across the notation $A^\dagger := \overline{A}^T$. But how does one usually handwrite it? When I try to do it, it seems so similar to $A^+$ REPLY [2 votes]: Never thought I'd give handwriting advice here, but I suppose this does fall under notation. I agree, the dagger can look like a plus sign. My rendition adds a guard to the dagger's pommel. Additional advantage: it's a single stroke.<|endoftext|> TITLE: About a curious nested radical representation for $\cos 1^\circ$ QUESTION [5 upvotes]: I have found the following nested radical representation. By using the triple angle formula for the cosine, $\cos 3\theta$, and making $\theta = 1^\circ$, we get the cubic equation $ 4x^3-3x = \cos 3^\circ $. In this step , I'll use a trick that is rarely used, by expressing $ x $ as $ x = \frac{1}{2}\sqrt{3+\frac{\cos 3^\circ}{x}} $ Now I just iterate for x and in this way I get that $ \cos 1^\circ= \frac{1}{2}\sqrt{3+\frac{\cos 3^\circ}{\frac{1}{2}\sqrt{3+\frac{\cos 3^\circ}{\frac{1}{2}\sqrt{3+\frac{\cos 3^\circ}{\frac{1}{2}\sqrt{3+...}}}}}}} $ Direct calculation shows that this periodic radical converges to $ \cos 1^\circ$ How to prove rigorously that the left hand side is the limit of the right hand side? That's to say, I need a proof that the nested radical converges to $ \cos 1^\circ $ REPLY [2 votes]: You just have to show that $g:z\mapsto\frac{1}{2}\sqrt{3+\frac{\cos 3^\circ}{z}}$ is a contraction in a neighbourhood of $z=1$ then apply Banach fixed-point theorem. We have: $$ g'(z)=-\frac{\cos\left(\frac{\pi }{60}\right)}{4 z^2 \sqrt{3+\frac{\cos\left(\frac{\pi }{60}\right)}{z}}}$$ and $|g'(z)|$ is much smaller than $1$ (it is close to $\frac{1}{8}$) in a large neighbourhood of $z=1$. Done.<|endoftext|> TITLE: Iteration of polynomial has only positive roots QUESTION [6 upvotes]: Let $P(x)$ be a real polynomial with a positive leading coefficient, and $k\geq 2$ an integer. Suppose that $Q(x)=P(P(\dots(P(x))\dots))$, where there are $k$ iterations of $P$'s, has at least one positive root and no real nonpositive root. Does it always hold that $P(x)$ has at least one positive root and no real nonpositive root? 
It is true that $P(x)$ must have at least one positive root - if not, $P(x)>0$ for all $x>0$, so $P(P(x))>0$ for all $x>0$ and so on, implying that $Q(x)$ has no positive root either. The question is now whether $P(x)$ cannot have a real nonpositive root. REPLY [4 votes]: Let $P_k(x)=P(P(...(x))$ with $k$ iterations of $P$. Suppose (for contradiction) that $P(x)$ has a non-positive root $s$ and let $r$ be a positive root of $P(x)$ so that $P(x)=(x-r)(x-s)f(x)$. Then $P_2(x)= (P(x)-r)(P(x)-s)g(x)$. Note that $P(x)$ must have arbitarily large negative values or arbit. large positive values to the left of $s$; thus $P(x)=r$ or $P(x)=s$ for some $x \leq s \leq 0$. Now $P_3(x)=(P_2(x)-r)(P_2(x)-s)h(x)$ and repeating the same argument, we inductively prove that each $P_k(x)$, and hence $Q(x)$, must have both a positive root and a non-positive root, giving the desired contradiction.<|endoftext|> TITLE: Unexpected result from Euler's formula QUESTION [10 upvotes]: I am a bit confused with a result I get from Euler's formula: $e^{2\pi i} = 1$ $\sqrt[3] { e^{2\pi i} }= \sqrt[3]{ 1 }$ $(e^{2\pi i})^{\frac{1}{3}}= 1$ $e^{\frac{2}{3} \pi i} = 1$ This last result seems problematic, since the left-hand side of the previous equation should result in...: $e^{\frac{2}{3} \pi i} = −\frac{1}{2} + \frac{\sqrt{3}}{2}i$ ... given that $e^{x i} = \cos(x) + i\sin(x)$ So is there a mistake in my first manipulations? If not, how can one interpret the results? REPLY [5 votes]: You are making the classical mistake of generalizing the exponentiation rule $(a^b)^c=a^{bc}$ to complex numbers: it doesn't hold !<|endoftext|> TITLE: Derivative of $\log |AA^T|$ with respect to $A$ QUESTION [5 upvotes]: What is the derivative of $\log |AA^T|$ with respect to $A$, where $|A|$ denotes the determinant of $A$? REPLY [2 votes]: Let $M = AA^T$ and $f = \log \det \left( M \right)$. We will utilize the following the identities Trace and Frobenius product relation $$A:B={\rm tr}(A^TB)$$ or $$A^T:B={\rm tr}(AB)$$ Cyclic property of Trace/Frobenius product $$\eqalign{ A:BC &= AC^T:B \cr &= B^TA:C \cr &= {\text etc.} \cr }$$ Jacobi's formula (for nonsingular matrix $M$) $$\log \det \left( M \right) = {\rm tr}\log\left( M \right)$$ Now, we obtain the differential first and thereafter we obtain the gradient. So, \begin{align} df &= d \log \det \left( M \right) \\ &= d \ {\rm tr}\left( \log\left( M \right) \right) \hspace{8mm} \text{note: utilized Jacobi's formula} \\ &= {\rm tr} \left( M^{-1} dM \right) \\ &= M^{-T} \ : \ dM \hspace{8mm} \text{note: utilized trace and Frobenius relation} \\ &= \left(AA^T\right)^{-T} \ : \ d\left(AA^T\right) \\ &= \left(AA^T\right)^{-1} \ : \ d\left(AA^T\right) \\ &= \left(AA^T\right)^{-1} \ : \ \left( dA \ A^T + A \ dA^T\right) \\ &= \left(AA^T\right)^{-1} : 2 \ dA \ A^T \\ &= 2 \left(AA^T\right)^{-1} (A^T)^T : dA \hspace{8mm} \text{note: utilized cyclic property of Frobenius product} \\ &= 2 \left(AA^T\right)^{-1} A : dA \\ \end{align} So, the derivative of $f = \log \det \left( AA^T \right)$ with respect to $A$ is \begin{align} \frac{\partial}{\partial A} f = \frac{\partial}{\partial A} \log \det \left( AA^T \right) = 2 \left(AA^T\right)^{-1} A .\\ \end{align}<|endoftext|> TITLE: Graphically solving for complex roots -- how to visualize? QUESTION [9 upvotes]: So recently we've been doing the complex roots of quadratics, cubics and polynomials in general in school. 
But my question is, is there a way to see where these roots are, just like you can see where the real roots are by seeing where they intercept with the X-axis? For example, in this cubic here, it is evident that there is a real root just under -1, but is there a way to visualise the complex roots? Is there another line (similar to the x-axis) which intercepts the equation in another dimension? REPLY [4 votes]: I have developed a very clear method of visualizing where the complex roots of an equation are. The method involves drawing a graph of y = f(x) in the usual way on x, y plane but adding a 3rd axis to allow those special complex x values which also produce real y values. This means we have a normal y AXIS but a complex x PLANE. This is my first time on stackexchange so I cannot yet provide you with some excellent diagrams showing this method. I have however, just made a short video showing how the method works. I encouraging you to view it. Solutions of Cubics using Phantom Graphs http://screencast.com/t/dkAYxFDwH Also, I have written a special section on this exact topic in my website: http://www.phantomgraphs.weebly.com just scroll right down to the last entry I have made especially for this question.<|endoftext|> TITLE: Is Whitehead's manifold with a point removed homotopy equivalent to a sphere? QUESTION [6 upvotes]: A contractible open subset of $\mathbb{R}^n$ need not be homeomorphic to $\mathbb{R}^n$. The Whitehead manifold is an open subset of $\mathbb{R}^3$ which is contractible but not homeomorphic to $\mathbb{R}^3$ (it is not simply connected at infinity). Removing a point from $\mathbb{R}^3$, we obtain a space homotopy equivalent to $S^2$. Is this true of any open contractible subset of $\mathbb{R}^3$? In particular: Is Whitehead's manifold with a point removed homotopy equivalent to $S^2$? REPLY [10 votes]: Yes. Deleting a point from a manifold of dimension $n \geq 3$ doesn't change the fundamental group, so the result is still simply connected. (Either use van Kampen or transversality.) Mayer-Vietoris shows that the homology of the resulting space is the same as $S^2$, whence $\pi_2(U \setminus p)$ is isomorphic to $H_2(U \setminus p) = \Bbb Z$, generated by a small sphere around $p$. Because the map $S^2 \to U \setminus p$ is an isomorphism on homology, and $U \setminus p$ is simply connected, Whitehead + repeated application of Hurewicz shows that the map is a homotopy equivalence. This works in any dimension $n \geq 3$. (Of course, it's also true in dimension 2, where the only open contractible surface is homeomorphic to $\Bbb R^2$, as well as dimension 1, where the only open curve is $\Bbb R$. Indeed the same proof applies in the first case, where van Kampen instead implies that the result when you delete a point has fundamental group $\Bbb Z$.) Of course, it is not proper homotopy equivalent to $\Bbb R^3 \setminus 0$, as proper homotopy equivalences preserve the fundamental groups at infinity of the ends.<|endoftext|> TITLE: Formally, why does a logical contradiction have probability zero? QUESTION [14 upvotes]: In terms of formal probability theory, why does an event representing a logical contradiction (such as $A \wedge \neg A$) always have probability zero? I understand intuitively why this is the case, but I don't see anything in the axioms of probability that directly explains why $A \wedge \neg A$ must have probability zero (or, equivalently, why a logical tautology such as $A \vee \neg A$ must have probability 1). 
REPLY [5 votes]: Your axiom list states that Additivity: $P(E_1 \cup E_2)=P(E_1)+P(E_2)$, where $E_1$ and $E_2$ are mutually exclusive. Note that $A \wedge \neg A$ is mutually exlusive with itself. Hence $$P(A \wedge\neg A)=P\left((A \wedge \neg A)\cup(A \wedge \neg A)\right) = P(A \wedge\neg A) + P(A \wedge\neg A) $$<|endoftext|> TITLE: Need help with $\int_{-\infty}^\infty \frac{x^2 \, dx}{x^4+2a^2x^2+b^4}$ QUESTION [7 upvotes]: I'm having trouble trying to evaluate this definite integral. Mathematica didn't help much. $$\int_{-\infty}^\infty \frac{x^2 \, dx}{x^4+2a^2x^2+b^4}$$ where $a$, $b$ $\in \Bbb R^+$. Is it possible to solve it analytically? What methods should I try? REPLY [2 votes]: $\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ With $\ds{a, b \in \mathbb{R}^{+}}$ and $\ds{p \equiv \pars{a/b}^{2}}$, \begin{align} \color{#f00}{% \int_{-\infty}^{\infty}{x^{2}\,\dd x \over x^{4} + 2a^{2}x^{2} + b^{4}}} & = {1 \over \verts{b}} \int_{-\infty}^{\infty}{x^{2}\,\dd x \over x^{4} + 2px^{2} + 1} = {1 \over \verts{b}} \int_{0}^{\infty}{x^{1/2}\,\dd x \over x^{2} + 2px + 1} \\[3mm] & = {1 \over \verts{b}} \int_{0}^{\infty}{x^{1/2}\,\dd x \over \pars{x - x_{+}}\pars{x - x_{-}}} \end{align} where $\braces{x_{\pm}}$ are the roots of $x^{2} + 2px + 1 = 0$ which are given by $x_{\pm} = -p \pm \root{p^{2} - 1}$. The integration can be performed along a key-hole contour which takes into account the branch-cut of $z^{1/2} = \verts{z}^{1/2}\exp\pars{\ic\phi/2}\,,\ z \not= 0\,,\ 0 < \phi < 2\pi$. Then, ${\large p < 1}$ The roots are given by $x_{\pm} = -p \pm \ic\root{1 - p^{2}}$ with $\verts{x_{\pm}} = 1$. 
\begin{align} 2\pi\ic\bracks{% {\verts{x_{+}}^{1/2}\expo{\ic\phi_{+}/2} \over x_{+} - x_{-}} + {\verts{x_{-}}^{1/2}\expo{\ic\phi_{-}/2} \over x_{-} - x_{+}}} & = \int_{0}^{\infty}{x^{1/2}\,\dd x \over x^{2} + 2px + 1} + \int_{\infty}^{0}{x^{1/2}\expo{\ic\pi}\,\dd x \over x^{2} + 2px + 1} \\[3mm] \int_{0}^{\infty}{x^{1/2}\,\dd x \over x^{2} + 2px + 1} & = {\pi \over 2\root{1 - p^{2}}}\pars{\expo{\ic\phi_{+}/2} - \expo{\ic\phi_{-}/2}} \\[3mm] \phi_{+} = {\pi \over 2} + \delta\phi\,,\quad \phi_{-} = {3\pi \over 2} - \delta\phi\,,\quad & \delta\phi \equiv \arctan\pars{{p \over \root{1 - p^{2}}}} \end{align} \begin{align} \int_{0}^{\infty}{x^{1/2}\,\dd x \over x^{2} + 2px + 1} & = {\pi \over 2\root{1 - p^{2}}}\pars{% \expo{\ic\pi/4}\expo{\ic\delta\phi/2} - \expo{3\ic\pi/4}\expo{-\ic\delta\phi/2}} \\[3mm] & = {\pi \over 2\root{1 - p^{2}}}\bracks{% {1 + \ic \over \root{2}}\,\expo{\ic\delta\phi/2} - {-1 + \ic \over \root{2}}\,\expo{-\ic\delta\phi/2}} \\[3mm] & = {\root{2}\pi \over 2\root{1 - p^{2}}}\bracks{% \cos\pars{\delta\phi \over 2} - \sin\pars{\delta\phi \over 2}} \\[3mm] & = {\root{2}\pi \over 2\root{1 - p^{2}}}\bracks{% \root{{1 + \cos\pars{\delta\phi} \over 2}} - \root{{1 - \cos\pars{\delta\phi}} \over 2}} \end{align} However, $\ds{\cos\pars{\delta\phi} = {1 \over \root{\tan^{2}\pars{\delta\phi} + 1}} = \root{1 - p^{2}} = {\root{b^{2} - a^{2}} \over \verts{b}}}$: \begin{align} \int_{0}^{\infty}{x^{1/2}\,\dd x \over x^{2} + 2px + 1} & = {\pi\verts{b}^{1/2} \over 2\root{b^{2} - a^{2}}}\bracks{% \root{\verts{b} + \root{b^{2} - a^{2}}} - \root{\verts{b} - \root{b^{2} - a^{2}}}} \end{align} ${\large p > 1}$. Both roots are negative: $x_{\pm} = -p \pm \root{p^{2} - 1}$. The calculation is similar to the previous one.<|endoftext|> TITLE: Where can I learn about differential graded algebras? QUESTION [8 upvotes]: I want to learn more about differential graded algebras so that I can construct explicit examples of derived schemes over characteristic 0, compute smooth resolutions of morphisms of schemes, and compute examples of cotangent complexes of morphisms of schemes. Where can I learn about differential graded algebras which will provide the computational tools to tackle these questions? REPLY [3 votes]: In the same vein as zyx's answer, if you want to learn about cdgas and their applications to rational homotopy theory (before trying to use them in derived AG, which would make sense IMO), I can recommend these two books: Yves Félix, Stephen Halperin, and Jean-Claude Thomas. Rational homotopy theory. Graduate Texts in Mathematics 205. New York: Springer-Verlag, 2001, pp. xxxiv+535. ISBN: 0-387-95068-0. DOI: 10.1007/978-1-4613-0105-9. MR1802847. Yves Félix, John Oprea, and Daniel Tanré. Algebraic models in geometry. Oxford Graduate Texts in Mathematics 17. Oxford: Oxford University Press, 2008, pp. xxii+460. ISBN: 978-0-19-920651-3. MR2403898. There is also this introduction that you can find online: Kathryn Hess. “Rational homotopy theory: a brief introduction”. In: Interactions between homotopy theory and algebra. Contemp. Math. 436. Providence, RI: Amer. Math. Soc., 2007, pp. 175–202. DOI: 10.1090/conm/436/08409. MR2355774. You also have the original papers: Dennis Sullivan. “Infinitesimal computations in topology”. In: Inst. Hautes Études Sci. Publ. Math. 47.47 (1977), 269–331 (1978). ISSN: 0073-8301. Numdam: PMIHES_1977__47__269_0. MR0646078. Daniel Quillen. “Rational homotopy theory”. In: Ann. of Math. (2) 90 (1969), pp. 205–295. ISSN: 0003-486X. JSTOR: 1970725. MR0258031. 
But note that the two books I mentioned at the beginning have the advantage of having been written 20+ years after these things were discovered, the subject matter was a bit more "settled" and the exposition is a bit clearer. The book by Félix/Halperin/Thomas also has a sequel, Rational homotopy theory. II (same authors, published last year) that goes over the same material as the first book but more quickly, and then explain how to apply the same techniques to non-simply connected spaces. The original paper by Sullivan already deals with non-simply connected spaces, he does the whole theory at once; I find it's a bit simpler to learn first about rational homotopy theory of simply connected spaces, then learn about non-simply connected ones.<|endoftext|> TITLE: Why is $\log(1+e^x) - \frac{x}{2}$ even? QUESTION [5 upvotes]: I'm dealing with Fourier series and I'm trying to figure out $\log(1+e^x) - \frac{x}{2}$ is even??? I've tried the $f(-x) = f(x)$ method but it doesn't give me the equality. But I've plotted it, and it is even? :S REPLY [3 votes]: Why do you say that checking $f(-x)=f(x)$ doesn't work? Of course it does. $\ddot\smile$ $$\ln(1+e^{-x})-\frac{-x}2=\ln\frac{e^x+1}{e^x}+\frac x2=\ln(e^x+1)-\ln e^x+\frac x2=\\=\ln(1+e^x)-x+\frac x2=\ln(1+e^x)-\frac x2$$<|endoftext|> TITLE: $a^{|b-a|}+b^{|c-b|}+c^{|a-c|} > \frac52$ for $a,b,c >0$ and $a+b+c=3$ QUESTION [14 upvotes]: Let $a,b,c >0$ with $a+b+c=3$. Prove that $$a^{|b-a|}+b^{|c-b|}+c^{|a-c|} > \frac52.$$ What I did: It is cyclic inequality so I assume $c= \min\{ a,b,c \}$. I consider the first case where $a\ge b\ge c$ then $$a^{|b-a|}+b^{|c-b|}+c^{|a-c|} > \frac52$$ $$\Leftrightarrow \frac{a^a}{a^b} +\frac{b^b}{b^c}+\frac{c^a}{c^c}> \frac52$$ I check function $f(x) =x^x$ to see if it is a strictly monotonic function or not. It turns out that it is a concave up function so I get stuck here. REPLY [2 votes]: Not a bounty candidate. Just a pictorial comment.Make an isoline/contour plot in the $(a,b)$-plane of the function: $$ f(a,b) = a^{|b-a|}+b^{|c-b|}+c^{|a-c|} - \frac52 \quad \mbox{with} \quad c=3-a-b $$ Then this is what we get. The blue spots are where $\,|f(a,b)| < 0.02$ . There seem to be several of these minimum values. I wish the rigorous proof producers among us good luck.<|endoftext|> TITLE: Repeatedly taking mean values of non-empty subsets of a set: $2,\,3,\,5,\,15,\,875,\,...$ QUESTION [38 upvotes]: Consider the following iterative process. We start with a 2-element set $S_0=\{0,1\}$. At $n^{\text{th}}$ step $(n\ge1)$ we take all non-empty subsets of $S_{n-1}$, then for each subset compute the arithmetic mean of its elements, and collect the results to a new set $S_n$. Let $a_n$ be the size of $S_n$. Note that, because some subsets of $S_{n-1}$ may have identical mean values, $a_n$ may be less than the number of non-empty subsets of $S_{n-1}$ (that is, $2^{a_{n-1}}-1$). For example, at the $1^{\text{st}}$ step we get the subsets $\{\{0\},\,\{1\},\,\{0,1\}\}$, and their means are $\{0,\,1,\,1/2\}.$ So $S_1=\{0,\,1,\,1/2\}$ and $a_1=|S_1|=3.$ At the $2^{\text{nd}}$ step we get the subsets $\{\{0\},\,\{1\},\,\{1/2\},\,\{0,\,1\},\,\{0,\,1/2\},\,\{1,\,1/2\},\,\{0,\,1,\,1/2\}\},$ and their means are $\{0,\,1,\,1/2,\,1/2,\,1/4,\,3/4,\,1/2\}.$ So, after removing duplicate values, we get $S_2=\{0,\,1,\,1/2,\,1/4,\,3/4\}$ and $a_2=|S_2|=5.$ And so on. The sequence $\{a_n\}_{n=0}^\infty$ begins: $2,\,3,\,5,\,15,\,875,\,...$ I submitted it as A273525 in OEIS. 
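The process just described is easy to reproduce exactly; the following short sketch (Python with exact rational arithmetic via the fractions module; my own script, not code from the question) regenerates the listed values $2, 3, 5, 15, 875$.

from fractions import Fraction
from itertools import combinations

def next_set(S):
    # arithmetic means of all non-empty subsets of S, duplicates removed
    means = set()
    for r in range(1, len(S) + 1):
        for subset in combinations(S, r):
            means.add(sum(subset, Fraction(0)) / r)
    return means

S = {Fraction(0), Fraction(1)}
sizes = [len(S)]
for _ in range(4):          # computing a_5 would need 2^875 - 1 subsets, far out of reach
    S = next_set(S)
    sizes.append(len(S))
print(sizes)                # [2, 3, 5, 15, 875]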
A brute-force algorithmic approach easily finds its elements up to $a_4=875$, but becomes computationally infeasible after that. My question is: What is the value of $a_5$? It's easy to see that $5\times10^5 < a_5$.<|endoftext|> TITLE: A conjecture about the prime function $p_n$: $p_m \cdot p_n >p_{m \cdot n}$ QUESTION [10 upvotes]: While testing my system Zet for computational mathematics I find possible relations now and then. The latest is: Conjecture: For all $(m,n)\in\mathbb Z_+^2$ except $(3,4),(4,3) \text{ and } (4,4)$ it holds that $p_m\cdot p_n > p_{m\cdot n}$ REPLY [4 votes]: The proof of the conjecture for sufficiently large $n,m$ has been given in the other answer. I will here focus on getting control on the "sufficiently large" part, thereby giving a way to prove the conjecture for all $n,m$. I will stick to simply presenting the inequalities and the range for which they are satisfied without showing all the hairy calculations, so you should try to verify these for yourself. For $n\geq 6$ Rosser's theorem gives precise inequalities for the $n$-th prime number $$\log(n) + \log\log(n) - 1 < \frac{p_n}{n} < \log(n) + \log\log(n)$$ From this we get $[\log(n) +\log\log(n)-1][\log(m)+\log\log(m)-1] < \frac{p_np_m}{nm}$ and $\frac{p_{nm}}{nm} < \log(nm) + \log\log(nm)$, so the conjecture follows if we can prove $$\log(nm) + \log\log(nm) < [\log(n) +\log\log(n)-1][\log(m)+\log\log(m)-1]$$ This inequality can be shown to hold for all $n,m\geq 15$. The cases $n,m<15$ can be checked by brute force, which shows only three cases where the conjecture is violated: $(3,4),(4,3)$ and $(4,4)$. The remaining cases follow if we can prove $$\log(mn) + \log\log(mn) < \frac{p_m}{m}[\log(n) + \log\log(n)-1]$$ for $m=1,2,3,\ldots,14$ and $n\geq 15$. This inequality can again be shown to hold for $n\geq 33$. A final brute-force check of the cases $n\in[15,33]$ and $m\in[1,14]$ completes the proof of the conjecture.
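The two finite verifications mentioned above can be scripted directly; here is a sketch (using sympy's prime(), which returns the $n$-th prime; any prime generator would do) that finds exactly the three exceptional pairs and confirms there are no others in the remaining range.

from sympy import prime

exceptions = [(m, n) for m in range(1, 15) for n in range(1, 15)
              if prime(m) * prime(n) <= prime(m * n)]
print(exceptions)           # [(3, 4), (4, 3), (4, 4)]

assert all(prime(m) * prime(n) > prime(m * n)
           for m in range(1, 15) for n in range(15, 34))
print("no exceptions for m in [1,14], n in [15,33]")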
There are two common situations that arise when considering infinite families of probability measures. The first one concerns arbitrary products of probability spaces: Situation 1. Let $I$ be a non-empty set, let $\{\Omega_i,\mathscr A_i,\mu_i\}_{i\in I}$ be an indexed family of probability spaces and let $(\Omega,\mathscr A)$ denote the product of their underlying measurable spaces. We want to find a probability measure $\mu$ on $(\Omega,\mathscr A)$ such that: the coordinate projections $\pi_i : \Omega \to \Omega_i$ are independent random variables (with respect to $\mu$); for each $i\in I$ the pushforward measure of $\mu$ along the coordinate projection $\pi_i : \Omega \to \Omega_i$ (in other words, the distribution of the random variable $\pi_i$) coincides with the original measure $\mu_i$. Does such a probability measure exist? Is it unique? A proposition in measure theoretic probability says that the answer to both questions is yes in all cases. Proofs of this theorem can be found in many of the more advanced textbooks on measure theory or probability. (See references below.) Note: though the proof uses only basic measure theory, it only works if the factors in the product are probability spaces. It does not extend to arbitrary measure spaces! The second situation is adressed by Kolmogorov's extension theorem. Here the independence requirement is replaced by a much weaker consistency criterion. Situation 2. Let $I$ be a non-empty set and let $\{\Omega_i,\mathscr A_i\}_{i\in I}$ be an indexed family of measurable spaces. For every non-empty $J \subseteq I$, let $(\Omega_J,\mathscr A_J)$ denote the partial product $\prod_{j\in J} (\Omega_j,\mathscr A_i)$ of measurable spaces. Furthermore, for $\varnothing \neq A \subseteq B \subseteq I$, we let $\pi^B_A : \Omega_B \to \Omega_A$ denote the coordinate projection (clearly $\pi^B_A$ is measurable). Finally, let $\mathscr F$ denote the collection of all finite, non-empty subsets of $I$. Suppose that we have an indexed family of probability measures $\{\mu_F\}_{F\in \mathscr F}$ on the spaces $\{(\Omega_F,\mathscr A_F)\}_{F\in\mathscr F}$ with the following consistency property: for all $A,B\in\mathscr F$ with $A \subseteq B$, the measure $\mu_A$ coincides with the pushforward measure of $\mu_B$ along $\pi^B_A$. Does there exist a probability measure $\mu$ on $(\Omega,\mathscr A) := (\Omega_I,\mathscr A_I)$ such that for each $F\in\mathscr F$ the pushforward measure of $\mu$ along $\pi^I_F$ coincides with $\mu_F$? Is such a $\mu$ unique? It turns out that the answer to this question is no in general (see Cohn, exercise 10.6.5). However, if the spaces $(\Omega_i,\mathscr A_i)$ are sufficiently nice, the answer is yes, and luckily this still covers many practical uses. So what exactly is the difference between the two situations? To answer this question, you must convince yourself that the consistency requirement is much weaker than independence. In fact, situation 2 can be used to find a joint distribution for an infinite family of dependent random variables! This plays an important role in the theory of stochastic processes. (Indeed, Bauer proves the result in the first section of his chapter on stochastic processes; see references below.) It is of course possible to use Kolmogorov's extension theorem in order to form infinite products of independent random variables: we simply specify the probability measure on every finite product $(\Omega_F,\mathscr A_F)$ to be the one that makes the random variables $\pi_i$ independent. 
It seems that the author of OP's lecture notes mistakenly thought that this is the only way to construct infinite sequences of independent random variables, and therefore incorrectly concluded that such sequences only exist in certain (topological) cases. An understandable mistake, but confusing nonetheless. As a final word, those who have seen such constructions in other areas of mathematics may recognise situation 1 as a product of probability spaces and situation 2 as a projective limit of probability spaces. This nomenclature is also used by Bauer; see references below. References: Donald L. Cohn, Measure Theory (Second Edition), Birkhäuser Advanced Texts: Basler Lehrbücher, 2013. Both theorems are treated in section 10.6, and it was this exposition that led me to understand the difference between the two results. The result from situation 1 is proved only for countable products in the main text; the general case is deferred to the exercises. Furthermore, an outline of a counterexample for the general case of situation 2 is given in the exercises. Heinz Bauer, Robert R. Burckel (translator), Probability Theory, de Gruyter Studies in Mathematics 23, Walter de Gruyter, 1996. This is a very technical (but nevertheless great) introduction to probability theory from a measure theoretic point of view, which assumes knowledge of measure theory as a prerequisite. Situation 1 is covered in §9 (infinite products of probability spaces), and situation 2 is covered in §35 (projective limits of probability measures). The independence requirement in situation 1 is somewhat hidden in the wording of theorem 9.2, but it follows from the remarks preceding the theorem. An outline of a counterexample for the general case of situation 2 is given at the end of §35. Paul R. Halmos, Measure Theory, Graduate Texts in Mathematics 18, Springer, 1974 (reprint of the 1950 edition by Van Nostrand). This book includes a clear proof and helpful remarks for situation 1 in §38 (infinite dimensional product spaces). Again the result is only proven for countable products, and the general case is deferred to the exercises. Like Bauer, he formulates the theorem without using the word independence. The book does not seem to treat situation 2. Jacques Neveu supposedly also addresses situation 1 in his book Mathematical Foundations of the Calculus of Probability (translated from French), but I don't seem to have access to this book.<|endoftext|> TITLE: For which $n$ is the $n$-dimensional hypercube a planar graph? QUESTION [9 upvotes]: I've been asked the following question: For which values of $n$ is $Q_n$ a planar graph, where $Q_n$ is the $n$-dimensional hypercube? I succeeded to prove that for $n$ equal or greater than $6$ it won't be, but I know that actually $Q_n$ is a planar graph if and only if $n \leq 3$. Please help Thx REPLY [15 votes]: $K_{3,3}$ is a minor of $Q_4$, hence $Q_4$ is not a planar graph, and obviously $Q_4$ is a minor of $Q_n$ for any $n>4$, hence the only planar hypercubes are $Q_n$ with $n\leq 3$. Another approach: $Q_4$ is a triangle-free graph, but any planar and triangle-free graph with $n$ vertices has at most $2n-4$ edges. $Q_4$ has $16$ vertices and $32>2\cdot 16-4$ edges, so $Q_4$ cannot be a planar graph. 
Another approach may be to compute the Colin de Verdière graph invariant, since the spectrum of the adjacency matrix of $Q_n$ has a nice, easily-provable structure.<|endoftext|> TITLE: Solve summation expression QUESTION [7 upvotes]: For a probability problem, I ended up with the following expression $$\sum_{k=0}^nk\ \binom{n}{k}\left(\frac{2}{3}\right)^{n-k}\left(\frac{1}{3}\right)^k$$ Using Mathematica I've found that the result should be $\frac{n}{3}$. However, I have no idea how to get there. Any ideas? REPLY [6 votes]: Just some binomial coefficient massage and the binomial formula: $$ \sum_{k=0}^nk\binom nkx^{n-k}y^k =\sum_{k=1}^nn\binom{n-1}{k-1}x^{n-k}y^k =ny(x+y)^{n-1} $$ Now plug in $x=\frac23$ and $y=\frac13$.<|endoftext|> TITLE: Least value of $a$ for which $4ax^2 + \frac{1}{x} \geq 1$ QUESTION [5 upvotes]: Find the least value of $a \in R$ for which $4ax^2 + \frac{1}{x} \geq 1$, for all $x>0$. The equation will transform into (Using $x>0$) $4ax^3-x+1\geq 0$ But I don't know how to deal with this cubic inequality. Could someone give me some hint as how to proceed? REPLY [5 votes]: If we make a change of variable $a=1/A^3$ and $x=Au/2$, then the problem becomes one of finding the largest value of $A$ for which the inequality $$u^2+{2\over u}\ge A$$ holds for all $u\gt0$. If you know calculus, then it's easy to see that $f(u)=u^2+{2\over u}$ is minimized at $u=1$, so that $f(1)=3$ is the largest that $A$ can be. If you don't know calculus, then some extra work is required: In order for the inequality to hold for all $u\gt0$, we clearly need $A\le1^2+{2\over1}=3$. But it's also easy to see that, for $u\gt0$, we have $$\begin{align} u^2+{2\over u}\ge3&\iff u^3-3u+2\ge0\\ &\iff(u-1)^2(u+2)\ge0 \end{align}$$ Since the final inequality here is obviously true (the product of a square and a positive number is necessarily non-negative), we can conclude that $A=3$ is the largest number for which the inequality $u^2+{2\over u}\ge A$ holds for all $u\gt0$, and thus $a=1/27$ is the smallest value for which the original inequality holds for all $x\gt0$.<|endoftext|> TITLE: Verify the correctness of $\sum_{n=1}^{\infty}\left(\frac{1}{x^n}-\frac{1}{1+x^n}+\frac{1}{2+x^n}-\frac{1}{3+x^n}+\cdots\right)=\frac{\gamma}{x-1}$ QUESTION [5 upvotes]: $x\ge2$ $\gamma=0.57725166...$ (1) $$\sum_{n=1}^{\infty}\left(\frac{1}{x^n}-\frac{1}{1+x^n}+\frac{1}{2+x^n}-\frac{1}{3+x^n}+\cdots\right)=\frac{\gamma}{x-1}$$ Series (1) converges very slowly, we are not sure that (1) has closed form $\frac{\gamma}{x-1}$ Can anyone verify (1)? REPLY [4 votes]: For $x =2$ the identity is true. Claim. $\displaystyle \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^n+k} = \gamma. $ Proof. We first rearrange the given sum: $$ S := \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^n+k} = \sum_{l = 2}^{\infty} \bigg( \sum_{\substack{(n,k) \ : \ 2^n + k = l \\ n \geq 1, k \geq 0}} \frac{(-1)^k}{2^n+k} \bigg) = \sum_{l = 2}^{\infty} \frac{(-1)^l}{l} \lfloor \log_2 l \rfloor. \tag{1} $$ (For a careful reader: see below for a rigorous proof.) Then group each $2^n$ terms to write $$ S = \sum_{n=1}^{\infty} n \bigg( \sum_{l=2^n}^{2^{n+1}-1} \frac{(-1)^l}{l} \bigg) = \sum_{n=1}^{\infty} n ( A_{n-1} - A_n ), $$ where $A_n = H(2^{n+1}-1) - H(2^n-1)$ and $H(n) = 1 + \frac{1}{2} + \cdots + \frac{1}{n}$ is the $n$-th harmonic number. Now the $N$-th partial sum of the RHS is \begin{align*} \sum_{n=1}^{N} n(A_{n-1} - A_n) &= (A_0 + \cdots + A_{N-1}) - N A_N \\ &= (N-1)H(2^N-1) - N H(2^{N+1}-1). 
\end{align*} Taking limit as $N \to \infty$ gives the desired limit $\gamma$. //// Remark. Let $f(z) = \sum_{k=0}^{\infty} \frac{(-1)^k}{z+k}$. Using the formula $f(z) = \psi_0(z) - \psi_0(z/2) - \log 2$ together with the asymptotic expansion of $\psi_0(z)$ gives a much shorter proof. We can check that $f(z) = \frac{1}{2z} + \mathcal{O}(z^{-2})$. Thus $$ \sum_{n=1}^{\infty} f(x^n) = \frac{1}{2(x-1)} + \mathcal{O}\left(\frac{1}{x^2} \right). $$ This tells us that the proposed formula is false, which is already confirmed by other users. Addendum 1 - A careful justification of $\text{(1)}$. The trick is to group each pair of successive terms to create a non-negative sum. Then by the Tonelli's theorem we can freely rearrange the order of summation. \begin{align*} \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^n+k} &= \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{1}{(2^n+2k)(2^n+2k+1)} \\ &= \sum_{l' = 1}^{\infty} \bigg( \sum_{\substack{(m,k) \ : \ 2^m + k = l' \\ m \geq 0, k \geq 0}} \frac{1}{(2^{m+1}+2k)(2^{m+1}+2k+1)} \bigg) \\ &= \sum_{l' = 1}^{\infty} \frac{1 + \lfloor \log_2 l' \rfloor}{2l'(2l'+1)} \\ &= \sum_{l' = 1}^{\infty} \bigg( \frac{\lfloor \log_2 (2l') \rfloor}{2l'} - \frac{\lfloor \log_2 (2l'+1) \rfloor}{2l'+1} \bigg)\\ &= \sum_{l = 1}^{\infty} \frac{(-1)^l}{l}\lfloor \log_2 l \rfloor. \end{align*} Addendum 2 - Heuristic. We use the following asymptotic expansion $$ \sum_{k=0}^{\infty} \frac{(-1)^k}{z+k} = \psi_0(z) - \psi_0(z/2) - \log 2 = \frac{1}{2z} + \sum_{k=1}^{\infty} \frac{B_{2k}}{2k} \frac{2^{2k}-1}{z^{2k}}, $$ where $(B_k)$ are Bernoulli numbers. (Notice: This is only a formal sum because the asymptotic expansion above does not converge for any $z$. It is numerically meaningful only when we truncate the sum.) Then we formally have \begin{align*} \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{x^n + k} &= \frac{1}{2(x-1)} + \sum_{n=1}^{\infty} \frac{B_{2n}}{2n} \frac{2^{2n}-1}{x^{2n}-1} \\ &= \frac{1}{2(x-1)} + 2 \sum_{n=1}^{\infty} \frac{(-1)^{n-1} \Gamma(2n)\zeta(2n)}{(2\pi)^{2n}} \frac{2^{2n}-1}{x^{2n}-1} \\ &= \frac{1}{2(x-1)} + 2 \int_{0}^{\infty} \bigg( \sum_{n=1}^{\infty} (-1)^{n-1}\zeta(2n) \frac{2^{2n}-1}{x^{2n}-1} t^{2n-1} \bigg) e^{-2\pi t} \, \mathrm{d}t. \end{align*} Although the summation in the last line does not converge for $t \geq x/2$, it may have an analytic continuation on all of $t > 0$. Then with that continuation the last expression may make sense and even possibly give the correct formula for the original sum. Indeed, when $x = 2$ it follows that $$ 2 \sum_{n=1}^{\infty} (-1)^{n-1}\zeta(2n) t^{2n-1} = \pi \coth \pi t - \frac{1}{t} $$ and hence $$ \sum_{n=1}^{\infty} \sum_{k=0}^{\infty} \frac{(-1)^k}{2^n + k} \text{ “}=\text{" } \frac{1}{2} + \int_{0}^{\infty} \bigg( \pi \coth \pi t - \frac{1}{t} \bigg) e^{-2\pi t} \, \mathrm{d}t = \gamma. $$ Now we know that this alleged equality is indeed true. So the above formal computation seems to give the correct answer (at least for $x =2$, and hopefully for $x > 2$). But for $x > 2$, I have no idea what will come out.<|endoftext|> TITLE: Every even integer $n>2$ is a semiprime or sum of two semiprime numbers. QUESTION [5 upvotes]: Progress: A slightly stronger version of the original assumption is this: Every even integer $n>2$ is a semiprime or sum of two even semiprime numbers. I was wondering as to how this statement can be proved/disproved, though it may seem a bit trivial. I would really appreciate any opinion or insight as well. 
Regards REPLY [3 votes]: To summarize some of what I think the Comments are trying to say falsifies the "strong" version, let $n$ be any odd composite number. Then $2n$ is not an "even semiprime" (since $n$ is not prime). But if $2n$ were the sum of two even semiprimes, we should have $n$ as a sum of two primes. Since this is possible only if $n-2$ is prime, it is pretty easy to find counterexamples. As barto and Wojowu point out, $n=27$ is an odd composite with $n-2$ also composite, so we cannot write $2n= 54$ as a sum of two even semiprimes. There are indeed infinitely many such counterexamples, so the strong version fails despite adding a caveat for "all sufficiently large integers". On the other hand the weak version might hold for all sufficiently large integers. For example, the value $54$, despite not being "an even semiprime", can be expressed as the sum of two semiprimes: $$ 54 = 15 + 39 = 3\cdot 5 + 3\cdot 13 $$ If $2n$ is not an even semiprime, then $n$ must be a composite. Suppose that $p$ is a prime factor of $n$, and that $n = pm$ for $m \gt 1$. Applying Goldbach's conjecture to $2m$ would give it as a sum of a pair of primes: $$ 2m = q_1 + q_2 $$ and give $2n = 2pm = p q_1 + p q_2$ as a sum of a pair of semiprimes. Therefore the "weak" version shown in the Question's title is implied by Goldbach's conjecture.<|endoftext|> TITLE: Solution of integral $\int \frac{\sin (x)}{\sin (5x) \sin (3x)}\,\mathrm dx.$ QUESTION [7 upvotes]: Find the following integral: $$\int \frac{\sin (x)}{\sin (5x) \sin (3x)}\,\mathrm dx.$$ I don't know how to deal with the $\sin (x)$ in the numerator. If it had been $\sin (2x)$ then we could have used $\sin (2x)= \sin (5x-3x)$. How to deal given integral? Could someone help me with this? REPLY [2 votes]: I think it's a little easier if you leave in in terms of Chebyshev polynomials of the second kind, $U_n(\cos x)=\frac{\sin(n+1)x}{\sin x}$. Then the integral looks like $$\begin{align}\int\frac{\sin x}{\sin5x\sin3x}dx&=\int\frac{\sin x}{\sin^2xU_4(\cos x)U_2(\cos x)}dx\\ &=\int\frac{-dv}{(1-v^2)U_4(v)U_2(v)}\end{align}$$ Where we have made the substitution $v=\cos x$. Then you can use $\sin(n+1)x=2\sin nx\cos x-\sin(n-1)x$ to run these out far enough: $$\begin{array}{rl}\sin2x&=2\sin x\cos x\\ \sin3x&=4\sin x\cos^2x-\sin x\\ \sin4x&=8\sin x\cos^3x-4\sin x\cos x\\ \sin5x&=16\sin x\cos^4x-12\sin x\cos^2x+\sin x\end{array}$$ So now we are up to $$\int\frac{\sin x}{\sin5x\sin3x}dx=\int\frac{dv}{64\left(v^2-1\right)\left(v^2-\frac14\right)\left(v^4-\frac34v^2+\frac1{16}\right)}$$ The roots of that last factor are given by $\sin\left(5\cos^{-1}v\right)=0=\sin n\pi$, so $v=\cos\frac{n\pi}5$, or $$v\in\left\{\frac{\phi}2,\frac1{2\phi},-\frac1{2\phi},-\frac{\phi}2\right\}$$ Where $\phi=\frac{\sqrt5+1}2$, just like @Jack D'Aurizio said. So we have $$\begin{align}&\frac1{64\left(v^2-1\right)\left(v^2-\frac14\right)\left(v^4-\frac34v^2+\frac1{16}\right)}\\ &=\frac1{64(v-1)(v+1)(v-\frac12)(v+\frac12)(v-\frac{\phi}2)(v-\frac1{2\phi})(v+\frac1{2\phi})(v+\frac{\phi}2)}\\ &=\frac A{v-1}+\frac B{v+1}+\frac C{v-\frac12}+\frac D{v+\frac12}+\frac E{v-\frac{\phi}2}+\frac F{v-\frac1{2\phi}}+\frac G{v+\frac1{2\phi}}+\frac H{v+\frac{\phi}2}\end{align}$$ We can knock these partial fractions out pretty quick with L'Hopital's rule. 
For example $$\begin{align}\lim_{v\rightarrow\frac{\phi}2}\frac{v-\frac{\phi}2}{64\left(v^2-1\right)\left(v^2-\frac14\right)\left(v^4-\frac34v^2+\frac1{16}\right)}&=\frac1{64\left(\frac{\phi^2}4-1\right)\left(\frac{\phi^2}4-\frac14\right)\left(4\frac{\phi^3}{8}-\frac34(2)\frac{\phi}2\right)}\\ &=-\frac1{5\phi}=\lim_{v\rightarrow\frac{\phi}2}\frac{E(v-\frac{\phi}2)}{v-\frac{\phi}2}=E\end{align}$$ The symmetries mean that we only need one of a pair of algebraic conjugates or additive inverses, so really only a total of $3$ numerators need be separately computed. Eventually after getting the other coefficients and integrating and expressing in terms of $x$, we get $$\begin{align}\int\frac{\sin x}{\sin5x\sin3x}dx&=-\frac1{5\phi}\ln\left|\frac{\cos x-\frac{\phi}2}{\cos x+\frac{\phi}2}\right|+\frac{\phi}5\ln\left|\frac{\cos x+\frac1{2\phi}}{\cos x-\frac1{2\phi}}\right|\\ &+\frac13\ln\left|\frac{\cos x-\frac12}{\cos x+\frac12}\right|+\frac1{30}\ln\left|\frac{\cos x-1}{\cos x+1}\right|+C\end{align}$$ I checked it by comparison with numeric quadrature, so the only errors left are the typos. The use of Chebyshev polynomials of the second kind left the denominator in a nicer factored form.<|endoftext|> TITLE: Tricks for quickly reading off the eigenvalues of a matrix QUESTION [16 upvotes]: I noticed that some mathematicians have an uncanny ability to identify the eigenvalues of matrices without doing much in the way of computation. For instance, one might notice that all the rows have the same sum, from which it follows that the sum must be an eigenvalue. There can be no simple way to find eigenvalues of arbitrary matrices, but what special cases might we look for where finding the eigenvalues are easy? What tricks do you know? REPLY [4 votes]: Suppose we are given a matrix $A \in \mathbb{R}^{n \times n}$. If $A = O_n$, the one eigenvalue of $A$ is $0$, with multiplicity $n$. If the first $n-1$ rows or columns of $A$ are either zero or a multiple of the $n$-th row or column, then we have a rank-$1$ matrix that can be written in the form $$A = \mathrm{u} \mathrm{v}^T$$ where $\mathrm{u}, \mathrm{v} \in \mathbb{R}^n$. As the null space of $\mathrm{v}^T$ is $(n-1)$-dimensional, the null space of $A$ is at least $(n-1)$-dimensional. Hence, $0$ is an eigenvalue of $A$ with multiplicity at least $n-1$. Since the trace is the sum of the eigenvalues, the other eigenvalue is $$\lambda = \operatorname{tr} (A) = \operatorname{tr} (\mathrm{u} \mathrm{v}^T) = \operatorname{tr} (\mathrm{v}^T \mathrm{u}) = \mathrm{v}^T \mathrm{u} = \langle \mathrm{u}, \mathrm{v} \rangle$$ For example, suppose we are given $$A = \begin{bmatrix} 1 & 1 & 1\\ 1 & 1 & 1\\ 1 & 1 & 1\end{bmatrix}$$ As the 2nd and 3rd rows are equal to the 1st one, we conclude that $\operatorname{rank} (A) = 1$. Hence, one of the eigenvalues of $A$ is $0$, with multiplicity $2$. The other eigenvalue is $\operatorname{tr} (A) = 3$, with multiplicity $1$.<|endoftext|> TITLE: Why does the hard-looking integral $\int_{0}^{\infty}\frac{x\sin^2(x)}{\cosh(x)+\cos(x)}dx=1$? QUESTION [33 upvotes]: I have to ask this question; most looking complicated definite integral yield not so nice closed form or irrational numbers or mixed of what ever ect. Why is this particular hard looking integral gives a $1$ as an answer? 
$$\int_{0}^{\infty}\frac{x\sin^2(x)}{\cosh(x)+\cos(x)}dx=1$$ REPLY [22 votes]: One may employ the generating function $$ \frac{\sin x}{\cosh x+\cos x}=2\sum_{k=1}^\infty(-1)^{k-1}e^{-kx}\sin kx $$ and the classic result of $$ \int_0^\infty x^{m-1}e^{-ax} \cos bx \ dx = \frac{\Gamma(m)}{(a^{2} + b^{2})^{m/2}}\cos\left(m\tan^{-1}\left(\frac{b}{a}\right)\right) $$ We then have \begin{align} \int_{0}^{\infty}\frac{x\sin^2 x}{\cosh x+\cos x}\ dx&=2\sum_{k=1}^\infty(-1)^{k-1}\int_{0}^{\infty}x\ e^{-kx}\sin x\sin kx\ dx\\[10pt] &=\sum_{k=1}^\infty(-1)^{k-1}\left[\int_{0}^{\infty}x\ e^{-kx}\cos(k-1)x\ dx-\int_{0}^{\infty}x\ e^{-kx}\cos(k+1)x\ dx\right]\\[10pt] &=\sum_{k=1}^\infty(-1)^{k-1}\left[ \frac{\cos\left(2\tan^{-1}\left(\frac{k-1}{k}\right)\right)}{k^{2}+(k-1)^2}-\frac{\cos\left(2\tan^{-1}\left(\frac{k+1}{k}\right)\right)}{k^{2}+(k+1)^2}\right]\\[10pt] &=1 \end{align} and the claim follows. The latter expression is indeed a telescoping series as one may observe its partial sum equals $$ \sum_{k=1}^n(-1)^{k-1}\left[ \frac{\cos\left(2\tan^{-1}\left(\frac{k-1}{k}\right)\right)}{k^{2}+(k-1)^2}-\frac{\cos\left(2\tan^{-1}\left(\frac{k+1}{k}\right)\right)}{k^{2}+(k+1)^2}\right]=1+\frac{\cos\left(2\tan^{-1}\left(\frac{n+1}{n}\right)\right)}{n^{2}+(n+1)^2} $$ Notice that $$ \cos2a+\cos2b=2\cos(a-b)\cos(a+b) $$ and $$ \tan^{-1}(x)+\tan^{-1}\left(\frac{1}{x}\right)=\frac{\pi}{2}\qquad,\qquad x>0 $$<|endoftext|> TITLE: When does the dual of $s =s$? QUESTION [5 upvotes]: Why I believe this is not a duplicate: This question might be the same, but the accepted answer is only a partial answer, because it gives no reason as to why those are the only solutions. Since the answer is accepted, that question will likely not receive any further answers. I would like to reopen this question in order to place a bounty and hopefully get a more complete answer. When does $s^*=s$? $s^*$ represents the dual of $s$, where $s$ is a compound proposition involving only $T, F, \wedge, \vee, \neg $, and $s^*$ is obtained by interchanging $T$ for $F$, $F$ for $T$, $\wedge$ for $\vee$, and $\vee$ for $\wedge$. A big obstacle is that question is from the $2^{nd}$ section of chapter $1$ of a $2000^-level discrete math book, so we are not even introduced to induction. I don't even know what sort of tools I'm supposed to use to solve this. What I've tried: First, I drew up some truth tables of compound prepositions and their duals to look for any patterns, and what I noticed (in the few examples that I tried) was that the number of "True" outputs of $s$ was equal to the number of "False" outputs from $s^*$. For example, if $s$ was a compound proposition of $p, q, r$ and $s$ was true in $5$ cases and false in $3$, $s^*$ was false in $5$ cases and true in $3$ (they didn't match up though). I think that if $a=b$, then $a^*=b^*$. I'm not sure how to prove this or if it's even necessary to prove it, but I suspect it's true. If it is, then at least $s^*=s$ holds for compound propositions $s$ which can be simplified to a single proposition. For example $s=(p \vee F) \wedge (q \vee T)=p$, therefore $s=p=p^*=s^*$. I know that in the definition of "dual" $s$ has to be a compound proposition; but as an exercise, the book asked to find the dual of this $s$ so I guess it's valid. If this is the only time when $s^*=s$, then I was thinking we can prove this by induction (even though we're probably not supposed to use it.) 
We know that if $s$ reduces to $p \wedge q$ or to $p \vee q$, then $s^* \ne s$, and then maybe we could use induction to show that if it reduces to a simpler proposition of $n+1$ variables, $s^*=s$ also doesn't hold. REPLY [2 votes]: I am assuming that by $=$, you mean logical equivalence, which I will denote by $\equiv$. One thing that might be helpful is the following: For a compound formula $s$, let $s'$ be the formula you get by replacing every variable by its negation. Then, you get the theorem that $s^* \equiv \lnot s'$. Unfortunately, you will need induction on $s$ to prove that: If $s$ is $T$ or $F$, then $s^*$ will be $F$ or $T$, respectively, as will $\lnot s' = \lnot s$. If $s$ is a variable $p$, then we have $s^* = p = s$, and, by the definition of $s'$, we have $s' = \lnot p = \lnot s$. Thus we have $s^* \equiv \lnot s'$. If $s = a \lor b$, then we assume that $a^* \equiv \lnot a'$ and $b^* \equiv \lnot b'$. Then we have $$ s^* = a^* \land b^* \equiv \lnot a' \land \lnot b' \equiv \lnot (a' \lor b') = \lnot s' $$ by De Morgan's law. If $s = a \land b$, then we assume that $a^* \equiv \lnot a'$ and $b^* \equiv \lnot b'$. Then we have $$ s^* = a^* \lor b^* \equiv \lnot a' \lor \lnot b' \equiv \lnot (a' \land b') = \lnot s' $$ by De Morgan's law as well. Now, this has one important consequence: You mention that $s$ only contains $T$, $F$, $\land$, $\lor$ and $\lnot$, and no variables. In this case, you have $s' = s$ and thus $s^* \equiv \lnot s$, so you will never have $s^* = s$! But if your $s$ involves variables anyways, you can find out if $s* \equiv s$ by looking at the truth table for $s$: For every entry, you check the "opposite" entry where all the variables are assigned the opposite truth value (for example the opposite of the entry where $p = T, q = F$ is the one where $p = F, q = T$), and then you check if the two entries are different. One neat result of this is that if $s^* \equiv s$, then exactly half the ways of assigning truth values to the variables will result in $s$ becoming true! But because the answer to the question if $s^* \equiv s$ depends on the truth table of $s$, I don't expect there to be a method of checking if $s^* \equiv s$ that is faster than just making the truth table.<|endoftext|> TITLE: Random variable independent of $\sigma$-algebra and conditional expectation QUESTION [6 upvotes]: What does it mean to say that a random variable is independent of a sigma-algebra, and why then does this imply that $E(RV| \sigma) = RV$?. I have no clue what this independence stuff is about (couldn't find it using google either), and surely $E(RV| \sigma) = RV$ seems like the logical definition of it? REPLY [4 votes]: As @carmichael561 remarks, independence between random variable $X$ and sigma-algebra $\cal G$ amounts to the assertion $$ P(X\in B\mid A)=P(X\in B)\tag1$$ for every Borel $B$ and every $A\in\cal G$. In particular it implies $$ E(X\mid A)=E(X).\tag2 $$ for every $A\in\cal G$. Regarding your question whether (2) can serve as the definition, the answer is that (2) is not equivalent to (1). It can be proved that condition (1) implies the desirable property $$ E(h(X)\mid A) = E(h(X))\tag3 $$ for all functions $h$ (assuming $h$ measurable and $h(X)$ integrable), but you can't get to (3) from (2) alone. Here's an example where (2) does not imply (1). On the four-point set $\Omega=\{a,b,c,d\}$ let $P$ put mass $\frac12$ on the set $A:=\{a\}$ and mass $\frac16$ on the remaining three singletons. Define $X$ by $X(a)=X(b)=0$, $X(c)=1$, $X(d)=-1$. 
Take $\cal G$ to be the sigma-algebra generated by set $A$. You can check that $X$ satisfies condition (2) for every set in $\cal G$. However, $P(X=0\mid A)=1\ne \frac23=P(X=0)$, so condition (1) is not satisfied. Moreover, $E(X^2\mid A)=0\ne \frac13=E(X^2)$, so condition (3) is not satisfied.<|endoftext|> TITLE: How do you calculate the smallest cycle possible for a given tile shape? QUESTION [10 upvotes]: If you connect together a bunch of regular hexagons, they neatly fit together (as everyone knows), each tile having six neighbors. Making a graph of the connectivity of these tiles, you can see that there are cycles all over the place. The smallest cycles in this graph have three nodes. $f(\text{regular hexagon}) = 3$ If you use regular pentagons, they don't fully tile the plane, but they do still form cycles. The smallest cycles have 6 nodes. $f(\text{regular pentagon}) = 6$ If you use regular heptagons, the smallest cycle also seems to require 6 tiles. $f(\text{regular heptagon}) = 6$ And it's the same with nonagons, 6 tiles to a cycle: $f(\text{regular nonagon}) = 6$ Is there a general way to calculate the minimum number of tiles needed to form a cycle? $f(\text{polygon}) = ?$ Ideally, I'd love to find a method that worked for any equilateral non-intersecting polygon, but even a method that works for any regular polygon would be great. Sorry for the crudeness of the drawings. REPLY [4 votes]: I did some experiments for regular polygons, for up to $n=24$ (click to enlarge): These experiments suggest that If $6\vert n$ you get a $3$-cycle Else if $2\vert n$ you get a $4$-cycle Else you get a $6$-cycle That you can get the $3$-cycle for anything that has edges aligned as a hexagon has them is pretty obvious. On the other hand, for reasons of symmetry any arrangement of $3$ regular $n$-gons has to be symmetric under $120°$ rotation, so you have to have those hexagon-aligned edges else it won't work. It's also easy to see that you can get a $4$-cycle if $4\vert n$, since that's the obvious arrangement for squares. The other $4$-cycles are more tricky. The most systematic (and reasonably symmetric) choice there would be a configuration where two polygons have opposite vertices aligned with one axis, while the other two polygons have an axis perpendicular to the first passing through the centers of two opposite edges. Something like this: Right now I have no obvious answer as to why you can't get $4$-cycles for odd $n$, nor why $6$-cycles are always possible. Looking at the angles it's easy to see that the length of the cycle has to be even if $n$ is odd, so once the $4$-cycles are ruled out for those cases, the $5$-cycles are out of the question and the $6$-cycles will provide the solution. Perhaps others can build more detailed answers with the help of my pictures.<|endoftext|> TITLE: Coprime polynomials in $k[x,y]$ are also coprime in $k(y)[x]$ QUESTION [6 upvotes]: Let $f,g \in k[x,y]$ be polynomials with no common factor. Prove that when viewed as elements of $k(y)[x]$ they still do not have a common factor. Say we have $f=\sum a_{ij}x^iy^j,\ g=\sum b_{ij}x^i y^j$, and $h=\sum h_i(y) x^i \in k(y)[x]$ satisfies both $h \mid f$ and $h\mid g$. Hence $f = hp$ for $p = \sum p_{i}(y)x^i \in k(y)[x]$ and similarly for $g=hq$. The claim is noted here. 
However I don't understand the argument implied there, of multiplying by the lcm of the denominators of all terms in $h$ and all terms in the divisors $p,q$: we get a relation of the form $f \cdot \text{lcm(complicated polynomial w(y))}=hpw \in k[x,y]$, what is the common factor in $k[x,y]$ of $f,g $ here? (A concrete example will probably help.) Where do we use that $k(y)[x]=\mathbb{F}[x]$ is a euclidean domain and in particular a PID? REPLY [8 votes]: Let $R$ be a GCD domain, and $f,g\in R[X]$ such that $\gcd(f,g)=1$ (in $R[X]$). Then $\gcd(f,g)=1$ in $K[X]$, where $K$ is the field of fractions of $R$. Let $h\in K[X]$ such that $h\mid f$ and $h\mid g$ (in $K[X]$). We want to show $\deg h=0$. Let $d$ be the product of all denominators of the coefficients of $h$, and $k=dh\in R[X]$. Then $k\mid df$ and $k\mid dg$ in $K[X]$, so there are $a,b\in R\setminus\{0\}$ such that $k\mid (ad)f$ and $k\mid (bd)g$ in $R[X]$. Write $(ad)f=kp$ and $(bd)g=kq$ with $p,q\in R[X]$. In the following one denotes by $c(r)$ the greatest common divisor of the coefficients of $r\in R[X]$, and write $r=c(r)r_1$ where $r_1$ is primitive, that is, the greatest common divisor of its coefficients is $1$. From $(ad)f=kp$ and $(bd)g=kq$ we get $(ad)c(f)=c(k)c(p)$ and $(bd)c(g)=c(k)c(q)$. But $(ad)c(f)f_1=c(k)c(p)k_1p_1$ and $(bd)c(g)g_1=c(k)c(q)k_1q_1$, so $f_1=k_1p_1$ and $g_1=k_1q_1$. Thus we get $k_1\mid f_1\mid f$ and $k_1\mid g_1\mid g$, so $k_1=1$, and we are done.<|endoftext|> TITLE: Why was Sheaf cohomology invented? QUESTION [43 upvotes]: Sheaf cohomology was first introduced into algebraic geometry by Serre. He used Čech cohomology to define sheaf cohomology. Grothendieck then later gave a more abstract definition of the right derived functor of the global section functor. What I still don't understand what was the actual motivation for defining sheaf cohomology. What was the actual problem they were trying to solve? REPLY [59 votes]: Sheaves and sheaf cohomology were invented not by Serre, but by Jean Leray while he was a World War II prisoner in Oflag XVII (Offizierlager=Officer Camp) in Austria. After the war he published his results in 1945 in the Journal de Liouville. His remarkable but rather obscure results were clarified by Borel, Henri Cartan, Koszul, Serre and Weil in the late 1940's and early 1950's. The first spectacular application of Leray's new ideas was Weil's proof of De Rham's theorem: he computed the cohomology of the constant sheaf $\underline {\mathbb R}$ on a manifold $M$ through its resolution by the acyclic complex of differential forms $\Omega_M^*$ on $M$. The next success story for sheaves and their cohomology was the proof by Cartan and Serre of theorems $A$ and $B$ for Stein manifolds, which solved a whole series of difficult problems (like Cousin I and Cousin II) with the help of techniques and theorems of Oka, who can be said a posteriori to have implicitly introduced sheaves in complex analysis. The German complex analysts (Behnke, Stein, Thullen,...) who had up to then be the masters of the field were so impressed by the new cohomology techniques that they are reported to have exclaimed: "the French have tanks and we have bows and arrows!" Armed with his deep knowledge of these weapons of complex analysis Serre took the incredibly bold step of introducing sheaves and their cohomology on algebraic varieties endowed with their Zariski topology. 
This was of remarkable audacity because of the coarseness of Zariski topology, which had led specialists to believe that it was just some rather unimpressive tool allowing one for example to talk rigorously of generic properties. As all algebraic geometers now know, Serre stunned his colleagues by showing in FAC how cohomological methods yielded deep results, at the centre of which are theorems $A$ and $B$ for coherent sheaves on affine varieties. Other fundamental novelties obtained by Serre in FAC are his twisting sheaves $\mathcal O(n)$, the computation of the cohomology of coherent sheaves on projective space, the vanishing of the cohomology groups $H^q(V,\mathcal F(n))$ on a projective variety $V$ for $q\gt0, n\gt\gt 0$,... Last not least: the introduction of sheaves and their cohomology in FAC paved the way for Grothendieck's revolutionary introduction of schemes in algebraic geometry, as acknowledged in the preface of EGA. Actually reading FAC was the secondary thesis of Grothendieck, accompanying his PhD on nuclear spaces in functional analysis. Afer his PhD defence, someone (Cartan if I remember the anecdote correctly) told him good-humouredly that he seemed not to have understood much in FAC. The story goes that Grothendieck was piqued, invested much energy in understanding Serre, and the rest is history. Se non è vero, è ben trovato...<|endoftext|> TITLE: Creating arithmetic expression equal to 1000 using exactly eight 8's and parentheses QUESTION [5 upvotes]: I would like to find all the expressions that can be created using nothing but arithmetic operators, exactly eight $8$'s, and parentheses. Here are the seven solutions I've found (on the Internet) so far: \begin{align} 1000 &= (8888 - 888) / 8\\ 1000 &= 888 + 88 + 8 + 8 + 8\\ 1000 &= 888 + 8 \cdot (8 + 8) - 8 - 8\\ 1000 &= 8 \cdot (8 \cdot 8 + 8 \cdot 8) - 8 - 8 - 8\\ 1000 &= 8 \cdot (8 \cdot (8 + 8) - (8 + 8) / 8) - 8\\ 1000 &= (8 \cdot (8 + 8) - (8 + 8 + 8) / 8) \cdot 8\\ 1000 &= (8 \cdot (8 + 8) - 88 / 8 + 8) \cdot 8 \end{align} Are there others, and if there are, what are they? Update After sifting through achille hui's answer and adding one of mathlove's solutions, I get the following $16$ possibilities: \begin{align} 1000 &= (8888 - 888)/8\\ 1000 &= 888 + (888 + 8)/8\\ 1000 &= 888 + 88 + 8 + 8 + 8\\ 1000 &= 888 + (8 + 8) \cdot (8 - 8/8)\\ 1000 &= 888 - (8 + 8) \cdot (8/8 - 8)\\ 1000 &= 888 + 8 \cdot (8 + 8) - 8 - 8\\ 1000 &= (888 \cdot (8/8 + 8) + 8)/8\\ 1000 &= 8 \cdot (8 - 88/8 + 8 \cdot (8 + 8))\\ 1000 &= 8 \cdot 8 - 88 + 8 \cdot 8 \cdot (8 + 8)\\ 1000 &= 8 \cdot 8 \cdot (8 + 8 - 8/(8 + 8)) + 8\\ 1000 &= 8 \cdot (8 \cdot 8 + 8 \cdot 8) - 8 - 8 - 8\\ 1000 &= 8 \cdot (8 \cdot (8 + 8) - 8/8) - 8 - 8\\ 1000 &= 8 - 8 \cdot (8 \cdot (8/(8 + 8) - 8 - 8))\\ 1000 &= (8 - 8/(8 \cdot 8)) \cdot (8 + 8) \cdot 8 - 8\\ 1000 &= (8\cdot 8\cdot 8-8)\cdot (8+8)/8-8\\ 1000 &= (8 + 8) \cdot (8 \cdot 8 - (8 + 8)/8) + 8 \end{align} If any of these are equivalent, please let me know. REPLY [6 votes]: By brute force, if order of parentheses matter ( i.e. expression like $(8+8)+8$ is considered to be distinct from $8+(8+8)$), I have counted $623$ solutions. Since I don't have a clean cut criterion to tell which solutions are equivalent when we unwind the parentheses, I cooked up some ad hoc hash function to classify the $623$ solutions. Under this classification, solutions with different number of operators are considered to be distinct. The result is a list of $23$ expressions. 
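Before moving on to the grouping step, here is a minimal sketch of the kind of brute-force search described above; it is not the exact program used for that count, and it only checks which values are reachable from exactly eight 8's (it does not enumerate the 623 parenthesizations):
from fractions import Fraction
from functools import lru_cache
from itertools import product

# vals(k): all exact values buildable from exactly k eights using
# +, -, *, / and parentheses, where the literals 8, 88, 888, ... are allowed.
@lru_cache(maxsize=None)
def vals(k):
    out = {Fraction('8' * k)}               # the k-digit literal 88...8
    for i in range(1, k):
        for a, b in product(vals(i), vals(k - i)):
            out.update((a + b, a - b, a * b))
            if b != 0:
                out.add(a / b)
    return frozenset(out)

print(Fraction(1000) in vals(8))            # True: 1000 is reachable
This only confirms reachability; recovering the individual expressions would require carrying the expression strings alongside the values.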
To proceed further, we replace the appearance of $8, 88, 888, 8888$ in these expressions by 4 variables $x_1, x_2, x_3, x_4$ and ask a CAS to simplify them. Based on the result, we split the $23$ expressions into $7$ groups.
(x4-x3)/x1
 1 #  ( ( 8888 - 888 ) / 8 )
((x1+1)*x3+x1)/x1
 2 #a ( ( ( 8 + 888 ) / 8 ) + 888 )
 3 #b ( ( ( 888 * ( ( 8 / 8 ) + 8 ) ) + 8 ) / 8 )
x3+x2+3*x1
 4 #  ( 8 + ( 8 + ( 8 + ( 88 + 888 ) ) ) )
x3+2*x1^2-2*x1
 5 #a ( 888 + ( ( 8 * ( 8 + 8 ) ) - ( 8 + 8 ) ) )
 6 #a ( 888 - ( ( 8 - ( 8 * ( 8 + 8 ) ) ) + 8 ) )
 7 #a ( ( 888 - 8 ) - ( 8 - ( 8 * ( 8 + 8 ) ) ) )
 8 #b ( 888 + ( ( 8 - ( 8 / 8 ) ) * ( 8 + 8 ) ) )
 9 #b ( 888 - ( ( ( 8 / 8 ) - 8 ) * ( 8 + 8 ) ) )
-x2+2*x1^3+x1^2
10 #a ( ( ( 8 * 8 ) - 88 ) + ( 8 * ( 8 * ( 8 + 8 ) ) ) )
11 #a ( ( 8 * 8 ) - ( 88 - ( 8 * ( 8 * ( 8 + 8 ) ) ) ) )
12 #b ( 8 * ( ( 8 - ( 88 / 8 ) ) + ( 8 * ( 8 + 8 ) ) ) )
13 #b ( 8 * ( 8 - ( ( 88 / 8 ) - ( 8 * ( 8 + 8 ) ) ) ) )
(4*x1^3-x1^2+2*x1)/2
14 #  ( ( 8 * ( 8 * ( ( 8 - ( 8 / ( 8 + 8 ) ) ) + 8 ) ) ) + 8 )
15 #  ( ( 8 * ( 8 * ( 8 - ( ( 8 / ( 8 + 8 ) ) - 8 ) ) ) ) + 8 )
16 #  ( 8 - ( 8 * ( 8 * ( ( ( 8 / ( 8 + 8 ) ) - 8 ) - 8 ) ) ) )
2*x1^3-3*x1
17 #a ( ( 8 * ( ( 8 * 8 ) + ( 8 * 8 ) ) ) - ( 8 + ( 8 + 8 ) ) )
18 #a ( ( ( ( 8 * ( ( 8 * 8 ) + ( 8 * 8 ) ) ) - 8 ) - 8 ) - 8 )
19 #a ( ( ( 8 * ( ( 8 * 8 ) + ( 8 * 8 ) ) ) - 8 ) - ( 8 + 8 ) )
20 #b ( ( ( ( 8 * 8 ) - ( ( 8 + 8 ) / 8 ) ) * ( 8 + 8 ) ) + 8 )
21 #b ( ( ( ( 8 * ( 8 + 8 ) ) - ( 8 / 8 ) ) * 8 ) - ( 8 + 8 ) )
22 #c ( ( ( 8 - ( 8 / ( 8 * 8 ) ) ) * ( 8 * ( 8 + 8 ) ) ) - 8 )
23 #d ( ( ( 8 - ( ( 8 / 8 ) / 8 ) ) * ( 8 * ( 8 + 8 ) ) ) - 8 )
Expressions in different groups are definitely inequivalent. Within a group, some expressions look so different that we should really treat them as distinct: e.g. expressions 2 and 3 above involve different numbers of variables, and expression 7 doesn't involve division while expression 8 does. Once again, I have to emphasize that I don't have a clean-cut criterion to tell which expressions are equivalent. I will leave the 23 expressions as is and let you make your own judgement.<|endoftext|> TITLE: Prove that $\sum_{j=0}^{n}H_j{n\choose j}^2={2n\choose n}\left(2H_n-H_{2n}\right)$ QUESTION [8 upvotes]: Let $H_n$ be the $n$th harmonic number, with $H_0=0.$ Prove that $$\sum_{j=0}^{n}H_j{n\choose j}^2={2n\choose n}\left(2H_n-H_{2n}\right)$$ I first encountered this problem in 2012 and have verified it numerically, but I am not sure it is correct. So can anybody help me to prove it? REPLY [13 votes]: Let $j\leq n$ and let us define $$H_{n}\left(x\right)=\sum_{k=1}^{n}\frac{1}{k+x}. $$ It is not difficult to prove that $$\frac{d}{dx}\dbinom{x+n}{j}=\dbinom{x+n}{j}\left(H_{n}\left(x\right)-H_{n-j}\left(x\right)\right) $$ so in particular $$\left.\frac{d}{dx}\dbinom{x+n}{j}\right|_{x=0}=\dbinom{n}{j}\left(H_{n}-H_{n-j}\right).
$$ Now for the Chu-Vandermonde identity we have $$\sum_{j=0}^{n}\dbinom{n+x}{j}\dbinom{n}{n-j}=\dbinom{2n+x}{n} $$ so if we take the derivative we have $$\sum_{j=0}^{n}\dbinom{n+x}{j}\dbinom{n}{n-j}\left(H_{n}\left(x\right)-H_{n-j}\left(x\right)\right)=\dbinom{2n+x}{n}\left(H_{2n}\left(x\right)-H_{n}\left(x\right)\right) $$ then, if we take $x=0 $, $$H_{n}\sum_{j=0}^{n}\dbinom{n}{j}^{2}-\sum_{j=0}^{n}\dbinom{n}{j}^{2}H_{j}=\dbinom{2n}{n}\left(H_{2n}-H_{n}\right) $$ but since $$ \sum_{j=0}^{n}\dbinom{n}{j}^{2}=\dbinom{2n}{n} $$ we have $$\sum_{j=0}^{n}\dbinom{n}{j}^{2}H_{j}=\dbinom{2n}{n}\left(2H_{n}-H_{2n}\right).$$<|endoftext|> TITLE: $M$ connected $\iff$ $M$ and $\emptyset$ are the only subsets of $M$ open and closed at the same time QUESTION [6 upvotes]: I'm trying to understand this proof that: $M$ connected $\iff$ $M$ and $\emptyset$ are the only subsets of $M$ open and closed at the same time Which is: If $M=A\cup B$ is a separation, then $A$ and $B$ are open and closed. Recriprocally, if $A\subset M$ is open and closed, then $M = A\cup(M-A)$. What? I know that if $M=A\cup B$ is a separation, $A$ and $B$ are both open. But why closed? Also, the 'recriprocally' part is totally nonsense to me. Anybody could help? Also, there's another proof, which states: $M$ and $\emptyset$ are the only subsets of $M$ at the same time closed and open $\iff$ if $X\subset M$ has empty boundary, theb $X=M$ or $X=\emptyset$ which is proved as the following: given $X\subset M$, we know the condition $X\cap \partial X = \emptyset$ implies $X$ is open, while the condition $\partial X \subset X$ implies $X$ is closed. Then, $X$ is open and closed $\iff$ $\partial X = X\cap \partial X = \emptyset$, this show $\iff$ for the theorem above. First of all, I think that the condition $X$ has boundary empty implies that $X\cap \partial X$ is empty, but who said anything about $\partial X\subset X$? Also, where's the $\rightarrow$ of this proof? I can only see $\leftarrow$ REPLY [4 votes]: For the first proof, if you have a separation, $M = A \cup B$, then $A$ and $B$ are both open, and $A \cap B = \emptyset$. But, $A$ is also closed since $B$ is open, and $A = M \setminus B$. Same goes for $B$. So this is the contrapositive of the reverse direction of the statement. When they say reciprocally, they are referring to the forward implication of the statement. Namely, proving that if $M$ is connected, then $M$ and $\emptyset$ are the only sets that are both open and closed. Again, they prove the contrapositive. Assume there exists some set $A \subset M$ that is both open and closed. Then $M \setminus A$ is open. Since $A \cap (M \setminus A) = \emptyset$, and $A \cup (M \setminus A) = M$, then $M$ is not connected. Now for the second proof, they are assuming that $\partial X = \emptyset$. Thus $\partial X = \emptyset \subset X$. To see the forward implication, assume that $M$ and $\emptyset$ are the only subsets of $M$ that are both open and closed. Then $M$ and $\emptyset$ are the only sets with empty boundary. Therefore, if $X \subset M$ has empty boundary, then $X$ is either $M$ or $\emptyset$.<|endoftext|> TITLE: What is the function for "round n to the nearest m"? QUESTION [7 upvotes]: Where $n \in \mathbb{R}$ and $m \in \mathbb{R}$, what is the function $f(n, m)$ that can achieve rounding behavior of money where the smallest denomination is not an power of ten? 
For instance, if a 5¢ coin is the smallest denomination (like in Canada): $f(1.02, 0.05) = 1.00$ $f(1.03, 0.05) = 1.05$ $f(1.29, 0.05) = 1.30$ $f(1.30, 0.05) = 1.30$ Or if \$20 is the cutoff: $f(9.99, 20) = 0$ $f(10, 20) = 20$ $f(39.99, 20) = 40$ $f(40, 20) = 40$ REPLY [8 votes]: Are you familiar with the floor function? For $x \in\mathbb{R}$, $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$; that is, $\lfloor x\rfloor$ is the unique integer $k$ such that $k \leq x < k+1$. In these terms, rounding $n$ to the nearest multiple of $m$ is given by $$f(n, m)=m\left\lfloor \frac{n}{m}+\frac{1}{2}\right\rfloor,$$ which reproduces all of the examples above; for instance $f(1.03, 0.05)=0.05\lfloor 20.6+0.5\rfloor=0.05\cdot 21=1.05$ and $f(10, 20)=20\lfloor 0.5+0.5\rfloor=20$.<|endoftext|> TITLE: Why is a circle not simply-connected? QUESTION [7 upvotes]: To be simply-connected means to be path-connected and able to continuously shrink a closed curve while remaining in the domain. According to wikipedia, a circle is not simply connected, but a disk is. Why is that? EDIT: I didn't realize that a circle is just the perimeter, and a disk is a circle that is filled in. Thanks to all who answered! REPLY [6 votes]: Imagine you have a rubber band and you lay it down on top of a circle drawn on a piece of paper. You're asked to stretch/shrink the rubber band until it's crumpled up at a single point; however, the rubber band must stay on top of the circle at all times, and you can't cut it. Intuitively this does not seem possible. This corresponds to the topological fact that $\pi_1(S^1)=\mathbb{Z}$. This means there are loops in $S^1$ (the circle) which cannot be continuously shrunk to a point. Hence the circle is not simply connected. Now if you're allowed to move the rubber band on top of the circle but also inside the circle (i.e., in a disk), it's easy to crumple it up to one point - just push every point on the rubber band in a straight line toward the point where you want it to end up. No matter how the rubber band is initially arranged within the disk, you can still crumple it to a point. This corresponds to the topological fact that $\pi_1(D^2)=0$. That is, every loop in $D^2$ (the disk) can be continuously shrunk to a point (via a straight-line homotopy as described with the rubber band). Hence the disk is simply connected. REPLY [5 votes]: A circle is the perimeter without the interior. Intuitively, it isn't simply connected because there are two ways around without the interior. To shrink the circle to a point you have to break it. The disk is simply connected. Any closed curve can be shrunk to a point continuously.<|endoftext|> TITLE: Is there an infinite sequence of real numbers $a_1, a_2, a_3,... $ such that ${a_1}^m+{a_2}^m+a_3^m+...=m$ for every positive integer $m$? QUESTION [5 upvotes]: Is there an infinite sequence of real numbers $a_1, a_2, a_3,...$ such that ${a_1}^m+{a_2}^m+a_3^m+...=m$ for every positive integer $m$? I tried assuming that the sequence $a_1^m, a_2^m,...$ forms a geometric progression, because that is the only type of infinite series that I know how to evaluate. I know my attempt doesn't work for all integers $m$, but it does work for $m=1$: Let $a_1=\dfrac 12$ and $a_n=a_1^n$. We have $a_1^m+a_2^m+a_3^m+...=(a_1)^m+(a_1^2)^m+(a_1^3)^m+...= \left( \dfrac 12 \right)^m+ \left(\dfrac 12 \right)^{2m}+ \left(\dfrac 12 \right)^{3m}+...=\sum_{i=1}^\infty \left( \dfrac 12 \right)^{i \cdot m}=m$ Now if we let $m=1$, we have $\sum_{i=1}^\infty \left( \dfrac 12 \right)^{i}=\dfrac {\dfrac 12}{1-\dfrac 12}=1$ REPLY [8 votes]: Such a sequence doesn't exist. In fact, there is no sequence of real numbers (be it finite or infinite) such that $$\sum_{k} a_k^m = m\quad\text{ for }\quad m = 2, 3, 4$$ Assume the contrary, let's say there is indeed such a sequence.
Apply Cauchy-Schwarz to $(a_k)$ and $(a_k^2)$, we find: $$9 = 3^2 = \left(\sum_{k} a_k^3\right)^2 = \left(\sum_{k} a_k\cdot a_k^2 \right)^2 \le \left(\sum_{k} a_k^2 \right)\left(\sum_{k} a_k^4 \right) = 2\cdot 4 = 8$$ which is clearly impossible.<|endoftext|> TITLE: I don't get it, does "augmented chain complex" actually mean anything? QUESTION [6 upvotes]: If I understand correctly, chain complexes make sense in any category enriched in the world of pointed sets. In practice, there's also a notion of an augmented chain complex, where we have an extra morphism $\varepsilon : C_0 \rightarrow \mathbb{Z}$ between $C_0$ and $0$. In practice, these seem to arise in algebraic topology by taking certain generators of $C_0$ to $1_\mathbb{Z}$ and extending by linearity to get $\varepsilon$. But I don't really understand what an "augmented chain complex" actually is; I only understand how they arise. So my question is: Question. Does "augmented chain complex" actually mean anything? If so, what? If not, what should be going through your head whenever you encounter this phrase to best understand the writer's meaning/intention? For example, one of our algebraic topology homework questions says: Let $\{C,\varepsilon\}$ be an augmented chain complex. Show that $H_{-1}(C) = 0$ and $$\tilde{H}_0(C) \oplus \mathbb{Z} \cong H_0(C).$$ I don't understand how to interpret this question. Am I supposed to assume that there's a simplicial complex floating around somewhere in the background? If not, what is the likely intended meaning of the question? REPLY [8 votes]: As others have said, an augmented chain complex is an $(\mathbb{N} \cup \{-1\})$-graded chain complex $C_*$ with $C_{-1} = \mathbb{Z}$ (or more generally your base ring $\Bbbk$) and such that $H_{-1}(C) = 0$, equivalently the map $C_0 \to C_{-1} = \mathbb{Z}$ is surjective. The splitting result follows from the projective property of $\mathbb{Z}$ among abelian groups. The "$C_{-1} = \mathbb{Z}$" and the "$C_0 \to C_1$ is surjective" conditions are rather is rather specific to algebraic topology, and it's probably possible that in some other contexts the conditions would be dropped. So if in general by chain complex you mean $\mathbb{N}$-graded chain complex, then an augmented chain complex would be an $(\mathbb{N} \cup \{-1\})$-graded chain complex. In some other contexts chain complexes are $\mathbb{Z}$-graded anyway, so this wouldn't even be anything special. It's all a matter of context. To shed some light on the terminology, consider the usual notion of an augmented algebra, i.e. an algebra $A$ (say over $\mathbb{Z}$) equipped with a ring morphism $\varepsilon : A \to \mathbb{Z}$. Such a ring morphism is necessarily surjective, since $\varepsilon(1) = 1$ and there aren't many subgroups of $\mathbb{Z}$ containing $1$. There is the usual bar construction $B_\bullet(A,A,\mathbb{Z})$, a simplicial abelian group whose realization is a cofibrant replacement of $\mathbb{Z}$ as a left $A$-module. It's defined by $$B_n(A,A,\mathbb{Z}) = A \otimes A^{\otimes n} \otimes \mathbb{Z} = A^{\otimes (n+1)}$$ and the simplicial maps are explicit. (See two-sided bar construction, the monad being $A \otimes -$. The construction $B_\bullet(R,A,L)$ is more generally well-defined for a right $A$-module $R$ and a left $A$-module $L$). Then it turns out to actually be an augmented simplicial abelian group, i.e. a functor $\Delta_+^{\mathrm{op}} \to \mathsf{Ab}$ where $\Delta_+ = "\Delta \cup \{ \varnothing \}"$. 
Such an augmented simplicial object $X_{\bullet \ge -1}$ is of course the same thing as a simplicial object $X_{\bullet \ge 0}$ equipped with a morphism to the constant simplicial object $X_{\bullet \ge 0} \to \operatorname{const} X_{-1}$. In our case $\varepsilon$ induces $B_\bullet(A,A,\mathbb{Z}) \to \operatorname{const} \mathbb{Z}$, and under the realization functor this is the cofibrant replacement map. Now there is an enhanced version of the Dold–Kan correspondence that says that an augmented simplicial abelian group is in fact the same thing as an $(\mathbb{N} \cup \{-1\})$-graded chain complex. If you apply it to $B_\bullet(A,A,\mathbb{Z})$ you obtain a chain complex that looks like $\dots \to A \to \mathbb{Z} \to 0$, with $\mathbb{Z}$ in degree $-1$. So there you have it! An augmented chain complex. PS: The condition "$C_0 \to C_{-1}$ is surjective" is a bit superfluous. Consider that it's reasonable to say that $H_{-1}(\varnothing) = \mathbb{Z}$! This is also why the $n$Lab says a space is "$(-1)$-connected" if it is nonempty. It also explains why the empty set is not (generally considered to be) contractible, as it's too simple to be simple, and so on.<|endoftext|> TITLE: Suppose $y(x)$ is continuous and $y'(x)=0$ has uncountably many solutions but $y(x)$ is not constant on any interval. Is this possible? QUESTION [5 upvotes]: Suppose $y(x)$ is continuous and $y'(x)=0$ has uncountably many solutions but $y(x)$ is not constant on any interval. Is this possible? REPLY [3 votes]: Take a Cantor set $C\subseteq [0,1]$ and let $d(x)$ be the distance function from $x$ to $C$. Then $(d(x))^2$ is differentiable at every point of $C$ with derivative $0$ there, so its derivative vanishes at uncountably many points; and it is not constant on any interval, since it vanishes exactly on $C$, which contains no interval, and the distance to $C$ is never locally constant at a positive value.<|endoftext|> TITLE: Atomless measure space without measure preserving isomorphisms QUESTION [7 upvotes]: Question: Could somebody give an example of a nontrivial atomless measure space without measure preserving isomorphisms (except for the identity)? Background: A measure preserving isomorphism on a measure space $(X,\Sigma,\mu)$ is a bijection $\phi$ such that $$\forall A\in\Sigma:\mu(\phi^{-1}(A))=\mu(\phi(A))=\mu(A)$$ Edit: By 'except for the identity' I obviously meant to exclude all $\phi$ such that $\mu(A\triangle\phi(A))=0$ for all $A\in\Sigma$. REPLY [3 votes]: Didn't know the answer to the original version, but only because of stupid/boring examples. Here's an example for the modified version: An atomless measure space such that if $\phi$ is any measurable bijection with a measurable inverse, measure-preserving or not, then $\phi(A)=A$ for every measurable set $A$. Let $<$ be a well-ordering of $[0,1]$ (so in particular $<$ has nothing to do with the standard order). By transfinite recursion we can "construct" a strictly increasing map $\kappa$ from $[0,1]$ to the class of infinite cardinals, for example by taking $\kappa(x_0)=\aleph_0$ if $x_0$ is the smallest element of $[0,1]$ and $$\kappa(x)=2^{\bigcup_{x'<x}\kappa(x')}\qquad(x>x_0).$$ Let $(S_x)_{x\in[0,1]}$ be a pairwise disjoint collection of sets with $$|S_x|=\kappa(x).$$ Set $$X=\bigcup_{x\in[0,1]}S_x.$$ For $E\subset[0,1]$ let $$A_E=\bigcup_{x\in E}S_x.$$ Say $A\subset X$ is measurable if and only if $A=A_E$ for some Lebesgue-measurable $E\subset[0,1]$, and define $$\mu(A_E)=m(E),$$ where $m$ is Lebesgue measure. No atoms ($S_x$ has no non-trivial measurable subsets but it's not an atom because it has measure zero). Say $\phi:X\to X$ is a measurable bijection with a measurable inverse. Then induction on $x$ shows that $\phi(S_x)=S_x$.
First, since $\phi(S_x)$ is measurable it must be a union of $S_y$ for some set of $y$'s. But $$|S_y|>|S_x|\quad(y>x),$$hence $$\phi(S_x)\subset\bigcup_{y\le x}S_y.$$By induction we have $\phi(S_y)=S_y$ for all $y<x$; since $\phi$ is injective, $\phi(S_x)$ is disjoint from $\phi(S_y)=S_y$ for every $y<x$, so in fact $\phi(S_x)\subset S_x$. Applying the same argument to $\phi^{-1}$ gives $\phi(S_x)=S_x$. Since every measurable set is a union of sets $S_x$, it follows that $\phi(A)=A$ for every measurable set $A$, as claimed.<|endoftext|> TITLE: What is wrong with this proof that the identity map of $S^1$ is nullhomotopic? QUESTION [5 upvotes]: I have read that the identity map of the unit circle $S^1$ is not nullhomotopic. In fact, I am very new to the subject, so I wonder what is wrong with the following reasoning (that seems to suggest the opposite): If the identity map $i : S^1 \to S^1$ is nullhomotopic, there exists a homotopy $F : S^1 \times I \to S^1$ with $F(x,0)=x$ and $F(x,1)=k$, for all $x \in S^1$. But this, in other words, just means that $x$ and $k$ can be joined by a path, for any $x \in S^1$ - which is true, since $S^1$ is path connected. After all, can't we think of a homotopy of two functions $f, g : X \to Y$ as a function that for all $x \in X$, gives us a path between $f(x)$ and $g(x)$? I wonder where the contradiction is. What am I missing? REPLY [7 votes]: It is true that if you have a homotopy $F : X \times [0,1] \to Y$ between $f,g : X \to Y$, then for all $x \in X$ this defines a path between $f(x)$ and $g(x)$, given by $t \mapsto F(x,t)$. This is an "if... then..." statement. The converse need not be true. If I give you a map $F : X \times [0,1] \to Y$, and I tell you that for all $x$, $t \mapsto F(x,t)$ is continuous; in other words, I give you a collection of paths from $f(x)$ to $g(x)$ for all $x$; does it imply that $F$ itself is continuous? Of course not, and it's rather easy to construct counterexamples. There is an intuitive way to see this for the circle. Imagine your circle is a clock. Let's say you want to find a homotopy between the identity and the map that's constantly equal to 12 o'clock (i.e. $i \in S^1 = \{ z \in \mathbb{C} \mid |z| = 1 \}$). Then for each point $x$ of the circle you have to choose a path from $x$ to 12:00. Either you can go through the left part of the circle (going clockwise), or you can go through the right part (going counterclockwise). It's rather intuitive that if you have a point that is just a little bit before 12:00, you will want to go clockwise. And if you're a little bit after 12:00, you will want to go counterclockwise. Since you want this whole thing to be continuous, this will extend all the way to the bottom. But what about 6 o'clock now? How do you choose clockwise or counterclockwise? Whichever way you choose, you will "rip" the circle in half, because just a little bit to the left or to the right you will be going in the other direction. And intuitively, ripping things in half isn't a continuous thing. This isn't a formal proof, obviously. There is a lot of theory to actually prove that it's impossible to find a homotopy between the identity and a constant map, e.g. by showing that the fundamental group of the circle is $\mathbb{Z}$, which isn't trivial. But this is the intuition: if you have to choose paths continuously from each $x \in S^1$ to a fixed $x_0 \in S^1$, you will have to "rip" the circle in half sooner or later, and this isn't continuous.<|endoftext|> TITLE: What other properties follow from having a ring homomorphism to $\mathbb{Z}$? QUESTION [6 upvotes]: (All my rings have $1$, and ring homomorphisms preserve $1$.) In $\mathbf{Set},$ the points of an object $X$ can be thought of as arrows from the terminal object $1$ to $X$.
So I guess in general, we could say that a "copoint" of an object $X$ is an arrow to the initial object of whatever category $X$ belongs to. In $\mathbf{Set}$, this is pretty boring, because a set has a copoint iff it is empty. Its also pretty boring in categories like $\mathbf{Ab}$ with a $0$ object, for a different reason: every object has precisely one copoint in this case. In $\mathbf{Ring},$ things are more interesting. If $R$ is a ring, then a copoint of $R$ is a ring homomorphism $f:R \rightarrow \mathbb{Z}$. Hence the existence of a copoint implies that $R$ has characteristic $0$, since for any integer $k \in \mathbb{Z}$, if we assume $k1_R = 0_R$, then $f(k1_R) = 0_\mathbb{Z}$, so $kf(1_R) = 0_\mathbb{Z}$, so $k1_\mathbb{Z} = 0_\mathbb{Z}$, so $k=0$. Question. What other properties follow from having a ring homomorphism to $\mathbb{Z}$? REPLY [3 votes]: Suppose we have a ring homomorphism $f\colon R\to\mathbb{Z}$. The composite of the homomorphism $f$ following the homomorphism $j\colon \mathbb{Z}\to R : n\mapsto n\cdot 1_R$ is the identity homomorphism $f\circ j=\mathrm{id}_{\mathbb{Z}}$ (this because $\mathrm{id}_{\mathbb{Z}}$ is the only endomorphism of the ring $\mathbb{Z}$), which implies that $j$ is injective, and hence the characteristic of $R$ is zero. Note that $\mathrm{char}(R)=0$ if and only if the element $1_R$ of the $\mathbb{Z}$-module $R$ is torsion-free, if and only if the set $\{1_R\}$ is linearly independent over $\mathbb{Z}$. This was just an alternative proof of what you have already proved. Let $N$ be the kernel of $f$. Then the underlying additive group of the ring $R$ is the (inner) direct sum of the additive groups of the subring $\mathbb{Z}1_R$ and of the ideal $N$, which we write $R=\mathbb{Z}1_R\oplus N$. Indeed, if $x$ is any element of the ring $R$, then $f(x-f(x)1_R)=0$ and therefore $x-f(x)1_R\in N$, which proves that $R=\mathbb{Z}1_R+N$. If $x=n\cdot 1_R\in N$ (where $n\in\mathbb{Z}$), then $n=f(x)=0$ and hence $x=0$, which proves that $\mathbb{Z}\cap N=0$, so the sum $\mathbb{Z}1_R+N$ is direct. The ideal $N$, equipped with the addition and multiplication inherited from $R$, is a rng. $\,$(This "rng" is not a typo: a rng is an abelian group equipped with an associative biadditive multiplication. Nothing is said of a multiplicative identity; even if it is present, it is ignored in the sense that homomorphisms of rngs are not required to preserve multiplicative identities, since a multiplicative identity is not an officially acknowledged constituent of a rng structure.)$\,$ The inclusion map $N\hookrightarrow R$ is a homomorphism or rngs, where the ring $R$ is regarded as a rng (we 'forget' $1_R$). The "ring regarded as a rng" is in fact a forgetful functor $U$ from the category $\mathbf{Ring}$ of (unital) rings to the category $\mathbf{Rng}$ of rngs. The left adjoint $F\colon\mathbf{Rng}\to\mathbf{Ring}$ constructs for each rng $A$ a ring $F\mspace{-2mu}A$ and a homomorphism of rngs $\eta=\eta_A\colon A\to UF\mspace{-2mu}A$ so that the following is true: given any ring $R$ and a homomorphism of rngs $h\colon A\to UR$ there exists a unique homomorphism of rings $h'\colon F\mspace{-2mu}A\to R$ such that $h=Uh'\circ\eta$. 
$\,$(Here the homomorphism of rngs $Uh'\colon UF\mspace{-2mu}A\to UR$ is precisely the same mapping as the homomorphism or rings $h'$; though the mapping $h'$ sends the multiplicative identity of the ring $F\mspace{-2mu}A$ to the multiplicative identity of the ring $R$, we ignore this fact when we regard $h'$ as a homomorphism of rngs and write it $Uh'$.)$\,$ I am sure you know the construction of $F\mspace{-2mu}A$. We set $F\mspace{-2mu}A=\mathbb{Z}\oplus\mspace{-2mu}A$, write elements of the direct sum $\mathbb{Z}\oplus\mspace{-2mu}A$ as formal sums $n+a$ ($n\in\mathbb{Z}$, $a\in A$), which we add componentwise and multiply them as follows: $(n+a)(m+b):=(nm)+(nb+ma+ab)$. The homomorphism $\eta\colon A\to F\mspace{-2mu}A$ of rngs maps an element $a$ of $A$ to the formal sum $0+a$ in $F\mspace{-2mu}A$. The homomorphism $\eta$ embeds the rng $A$ into the ring $F\mspace{-2mu}A$ as the subrng $\eta A=0\oplus\mspace{-2mu} A$. $\,$And now comes the punchline of this particular joke: the mapping $\varphi\colon F\mspace{-2mu}A\to\mathbb{Z} : n+a\mapsto n$ is a homomorphism of rings, and $\ker\varphi=0\oplus\mspace{-2mu} A=\eta A$. In the situation with which we began, the ring $FN$ is naturally isomorphic to the ring $R$: the natural isomorphism $FN\to R$ maps a formal sum $n+z\in FN$ ($n\in\mathbb{Z}$, $z\in N$) to the element $n\cdot 1_R+z$ of the ring $R$. Therefore, the special property of a ring $R$, besides $\mathrm{char}(R)=0$, which actually characterizes the existence of a ring homomorphism $R\to\mathbb{Z}$, is the existence of a split $R=\mathbb{Z}1_R\oplus N$, where $N$ is a two-sided ideal of $R$ and the direct sum is that of the underlying additive groups. For every rng $A$ there exists a ring homomorphism $R\to\mathbb{Z}$ whose kernel is rng-isomorphic to $A$. In a certain sense a ring homomorphism $f\colon R\to\mathbb{Z}$ is just a 'ringed' presentation of the rng $N=\ker F$. Another way to describe this correspondence is by saying that the notion of a ring homomorphism $R\to\mathbb{Z}$ is cryptomorphic to the notion of a rng; the two notions are just two equivalent presentations/realizations/implementations/incarnations of the same concept. Let $\mathfrak{R}$ be the category or ring homomorphisms $R\to\mathbb{Z}$, where a homomorphism of $f\colon R\to\mathbb{Z}$ to $g\colon S\to\mathbb{Z}$ is a homomorphism of rings $h\colon R\to S$ such that $f=g\circ h$. $\,($This category $\mathfrak{R}$ is the comma category $(\mathbf{Rings}\downarrow\mathbb{Z})$.$)\,$ The category $\mathfrak{R}$ is equivalent to the category of rngs $\mathbf{Rng}\,$; figure out on your own the pair of functors that constitute the equivalence of these two categories -- you have almost all ingredients already at hand. This equivalence of categories can be regarded as the 'cryptomorphism' between the notion of a ring homomorphism $R\to\mathbb{Z}$ and the notion of a rng.<|endoftext|> TITLE: Does $A^2 \geq B^2 > 0$ imply $ACA \geq BCB$ for square positive definite matrices? QUESTION [8 upvotes]: Assume we have two $n \times n$ real nondegenerate matrices $ A^2 $ and $B^2$, such that $$ A^2 \geq B^2 > 0, $$ where "$\geq$" means positive semidefinite (Loewner) ordering. Does the following inequality holds for any real matrix $C$ $$ ACA \geq BCB \ ? $$ If not, under which conditions on $C$ (or additional conditions on $A$ and $B$) does it holds? I would appreciate any ideas, suggestions, counterexamples. Thanks! REPLY [4 votes]: In [Theorem 2.6a, 1] it is proven using Fuglede-Putnam that Proposition. 
For $I \geq A, B \geq 0$ we have $\sqrt{A} B \sqrt{A} \leq B$ if and only if $AB = BA$. (The matrix $\sqrt{A} B \sqrt{A}$ models the sequential measurement of first $A$ andthen $B$ in quantum mechanics and its properties are studied by various authors. By them it is called the "sequential product"). As a corollary we can partially answer your question: Corollary. Suppose $I \geq A,B \geq 0$ with $A^2 \geq B^2$. If for all real $C$, we have $ACA \geq BCB$, then $AB=BA$. Proof. Consider $C:=A$. By assumption $A \geq A^2 \geq BAB=\sqrt{B^2}A\sqrt{B^2}$ and so $B^2$ commutes with $A$. But then also~$B$ commutes with $A$. QED [1] Sequential quantum measurements by Gudder and Nagy<|endoftext|> TITLE: May a 'ball' that has been 'cut off' still be called a 'ball'? QUESTION [11 upvotes]: Consider the metric subspace $[0,1] \subseteq \mathbb{R}$ with the metric defined in the usual sense, and the ball $B(0,1)$, defined to be the ball centred at $x=0$ with radius $1$. Now since only the right half of the ball 'exists' within our subspace, then can we still say that the ball $B(0;1)$ exists? REPLY [6 votes]: I think expecting the same intuitive result for open balls in different metric spaces isn't a good idea in general. For example, take $\Bbb R$ with the discrete metric $d(x,y)=0$ iff $x=y$ and $d(x,y)=1$ otherwise. Then $B(0,r)=\Bbb R$ if $r>1$ and $\{0\}$ if $r\leq 1$. Of course it's quite unintuitive that what's in the ball doesn't depend on the radius except whether it is bigger or smaller than $1$. They're still the open balls in that space. You can however expect to find the same open sets in two equivalent metrics, like $\Bbb R^2$ with $d_E$ the standard Euclidean metric and $d_\infty(x,y)=\max\{|x_1-y_1|, |x_2-y_2|\}$. The Euclidean metric gives open balls that look like discs, and I believe that with $d_\infty$ they look like squares, but all of the open sets are the same. Lastly if you took $(0,1)$ as a metric subspace of $\Bbb R$, you would have a ordinary open intervals as open balls. However the fact that some of the open balls are 'missing' half of their usual elements could either be intuitively attributed to the fact that you're using a subspace hence missing elements, or in another way to the fact that you're now using a bounded metric space, so after some finite radius you get no new elements to an open ball. Again though that intuition breaks down in other examples.<|endoftext|> TITLE: Is really $f(x)=\int g(x) dx$ a function? QUESTION [7 upvotes]: I saw many of this kind of questions on some text/question books. Is there any other explanation of this, or is it really wrong as I thought? Here is a question of that kind: If $\displaystyle f(x)=\int x(x^2-a)^2 dx$ and $f(a)=7$ then $f(-a)=?$ Here what is $f(0)$ or $f(1)$? $\displaystyle f(0)=\int 0(0^2-a)^2 d0$ or $\displaystyle f(1)=\int 1(1^2-a)^2 d1$ does not make sense. For me, a right function need to be as: $\displaystyle f_c(x)=\int_c^xt(t^2-a)^2dt$ (where $c$ is some constant, can be $0$ as usual). REPLY [2 votes]: One usually uses the notation $\int f(x) \, dx$ to denote an anti-derivative, or, more rigorously, the collection of all possible anti-derivatives of the function $f(x)$ over some domain that it usually understood implicitly from the context. Thus, one can interpret the statement $$ \int x^2 \, dx = \frac{x^3}{3} + C $$ as the statement that all the possible anti-derivatives of the function $x^2$ on the real line are of the form $\frac{x^3}{3} + C$ for some $C \in \mathbb{R}$. 
This way, one can interpret the expression $f(x) = \int x(x^2 - a)^2 \, dx$ as the statement "$f(x)$ is one of the possible anti-derivatives of the function $x(x^2 - a)^2$ on $\mathbb{R}$" although admittedly, this can be written in a much less confusing way as "$f'(x) = x(x^2 - a)^2$ for all $x \in \mathbb{R}$". The fundamental theorem of calculus states that if $f$ is continuous on an interval $(a,b)$, then an anti-derivative of $f$ is given by $\int_c^x f(t) \, dt$ where $c \in (a,b)$ is an arbitrary point but the notation $\int f(x) \, dx$ can be used without even knowing what is a definite integral, only knowing what is a derivative.<|endoftext|> TITLE: Why does this approximation work (and why does it fail)? QUESTION [5 upvotes]: I have a function $$f(x)=\frac{e^{-x}}{x}$$ and I am trying to find an expression for the inverse function $f^{-1}(x)$. So far I have come up with the approximation: $$\hat{f}^{-1}(x)=\left( \left( x+v \right )^e-v^e \right)^{-\frac{1}{e}}$$ where $v \approx 0.83$ This formula is a good approximation for $x>0$. I basically came up with this formula from tinkering, but I'm not exactly sure what the relation is between the e-th root of an expression and e to the power of said expression. Does it have anything to do with the limit definition of e: $$ \lim_{n \rightarrow \infty}{\left (1+\frac{1}{n} \right)^n} $$ However, this is just an approximation — an approximation that fails. Different values of $v$ offer better approximations, and I think that $v$ is related to $e$ in some way, however I'm not sure what way that is. Why does my function come so close, but also fail? Is this just a special case? Here is a diagram showing $f^{-1}(x)$ (red) and $\hat{f}^{-1}(x)$ (blue). REPLY [4 votes]: My opinion is that you just happened to stumble upon a function which is asymptotically equivalent to the actual inverse. The actual inverse to your equation is given by the Lambert W function. The principal branch of the Lambert W has a Taylor series expansion near $0$ given by $$W(x) = x-x^2+\mathcal{O}(x^3).$$ The actual inverse to your function $f(x)$ is given by $$f^{-1}(x) = W\left(\frac{1}{x}\right),$$ so for large $x$, we have $$f^{-1}(x) = \frac{1}{x} - \frac{1}{x^2} +\mathcal{O}\left(\frac{1}{x^3}\right).$$ Your approximate inverse just happened to go as $$\hat{f}^{-1}(x) \sim \frac{1}{x+v} \sim \frac{1}{x} - \frac{v}{x^2} + \mathcal{O}\left(\frac{v^2}{x^3}\right),$$ where $v$ happens to be relatively close to $1$, which is the main reason why the approximation seems nice. In fact, if you were to plot the relative error between $f^{-1}$ and $1/x-1/x^2$ versus the relative error between $\hat{f}^{-1}$ and $f^{-1}$, you will find that the former is a much better approximation for $x \gtrsim 10$, as it must be since it is the Taylor polynomial. It is not surprising to me that you can find a better approximation for a small segment of the function by varying the value of $v$. Your approximation is best around $x\approx 2.3665$, which doesn't seem to be any special value for $W$, so I am inclined to say that you were rather lucky in finding a good approximation, in conjunction with the look-elsewhere effect. As you've admitted yourself, differing values of $v$ will give differing degrees of approximation, and I don't think there is anything particularly special going on. If you could tell us how you obtained this approximation, we might be able to say something more concrete. 
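To see these claims numerically (this snippet is my addition, not part of the original answer; it assumes NumPy and SciPy are available and uses the asker's fitted constant $v \approx 0.83$), one can compare the exact inverse $W(1/x)$ with the approximation $\hat f^{-1}$ and with the two-term expansion $1/x - 1/x^2$:

```python
# Minimal numerical comparison (assumes NumPy and SciPy are installed).
import numpy as np
from scipy.special import lambertw

v = 0.83                                   # the asker's fitted constant
e = np.e

def f_inv_exact(x):                        # exact inverse of f(y) = exp(-y)/y
    return lambertw(1.0 / x).real

def f_inv_hat(x):                          # the asker's approximation
    return ((x + v)**e - v**e) ** (-1.0 / e)

def f_inv_taylor(x):                       # two-term large-x expansion
    return 1.0 / x - 1.0 / x**2

for x in [0.5, 2.3665, 10.0, 100.0]:
    exact = f_inv_exact(x)
    print(f"x={x:8.4f}  |hat - exact| = {abs(f_inv_hat(x) - exact):.2e}  "
          f"|taylor - exact| = {abs(f_inv_taylor(x) - exact):.2e}")

# For x of order 10 and larger the Taylor polynomial is the better approximation,
# in line with the discussion above.
```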
Incidentally, if you also want an approximation which works for $x \ll 1$, then you might try the asymptotic expansion $$W(1/x) \sim -\ln(-x\ln x)-\frac{\ln\left(-\ln x\right)}{\ln x} + \mathcal{O}\left(\frac{\ln\left(-\ln x\right)}{(\ln x)^2}\right).$$<|endoftext|> TITLE: Adjacency matrix of the complement of a graph QUESTION [6 upvotes]: I read a lemma. $J$ is the all ones $n \times n$ matrix, $I_n$ is the $n \times n$ identity matrix. Let the adjacency matrix of a simple graph $G$ on $n$ vertices be $A = A(G)$. Then the adjacency matrix of its complement is $\bar A = A \bar{(G)} = J - I - A$. From a previous posting, I know that $\bar A=J-I-A$. Can anyone help me understand how? The linked question has no comments regarding this. REPLY [4 votes]: You want to prove that if the adjacency matrix of the graph $G$ is $A$, then the adjacency matrix $\overline{A}$ of the complement graph $\overline{G}$ is $J-I-A$. Observe that vertices $i$ and $j$ are adjacent in $G$ iff vertices $i$ and $j$ are nonadjacent in $\overline{G}$. Hence, the edge set of $G$ and of $\overline{G}$ together form the edge set of the complete graph $K_n$. Observe that the adjacency matrix of the complete graph is $J-I$. Hence, $A + \overline{A} = J-I$. So, $\overline{A} = J-I-A$.<|endoftext|> TITLE: What is this structure involving a monad and a comonad? QUESTION [6 upvotes]: Let $F$ be a monad on some category $\mathsf{C}$ and $G$ be a comonad on the same category. Assume further that they "commute" (see below): $FG \cong GF$. Then, for lack of a better name, one can define "$(F,G)$-bimodules": objects $X$ equipped with an $F$-algebra structure and a $G$-coalgebra structure, both compatible in the sense that the two following maps are equal: $$(F(X) \to X \to G(X)) = (F(X) \to G(F(X)) \to G(X)).$$ Is this kind of structure studied? What kind of properties does it have? For the precise meaning of "$F$ and $G$ commute", consider the case where $\mathsf{C}$ is the category of modules over some commutative ring, you have an algebra $A$, a coalgebra $C$, and $F = - \otimes A$ and $G = C \otimes -$ are respectively the "free right $A$-module" and the "cofree left $C$-comodule" functors. Then there is a natural isomorphism $C \otimes (M \otimes A) \cong (C \otimes M) \otimes A$, which is moreover compatible with the monad and comonad structure maps in a way that I don't know how to formalize precisely. REPLY [4 votes]: A distributive law of a monad $(F, \eta, \mu)$ over a comonad $(G, \epsilon, \delta)$ is a natural transformation $\xi : F G \Rightarrow G F$ that satisfies the following equations: \begin{align} \xi \bullet \eta G & = G \eta & \xi \bullet \mu G & = G \mu \bullet \xi F \bullet F \xi \\ \epsilon F \bullet \xi & = F \epsilon & \delta F \bullet \xi & = G \xi \bullet \xi G \bullet F \delta \end{align} For example, in a monoidal category, if $(F, \eta, \mu)$ is the monad induced by a monoid and $(G, \epsilon, \delta)$ is the comonad induced by a comonoid, then the associator defines a distributive law in the sense above. Now, given a distributive law as above, a bialgebra is an object $B$ with an $(F, \eta, \mu)$-action $\alpha : F B \to B$ and a $(G, \epsilon, \delta)$-coaction $\beta : B \to G B$ that satisfies the following equation: $$\beta \circ \alpha = G \alpha \circ \xi_B \circ F \beta$$ This is studied in more detail in [Power and Watanabe, Combining a monad and a comonad].<|endoftext|> TITLE: Verifying that the weighted projective space $\Bbb{P}_{\Bbb{Q}}(1,1,2,3)$ is singular. 
QUESTION [5 upvotes]: I while ago I attended a talk that was somewhat over my head, and the speaker mentioned in passing that the weighted projective space $\Bbb{P}:=\Bbb{P}_{\Bbb{Q}}(1,1,2,3)$ is singular. I suppose this means that there is a point $P\in\Bbb{P}$ for which $$\dim\mathcal{O}_{\Bbb{P},P}\neq\dim_{k(P)}\mathfrak{m}_P/\mathfrak{m}_P^2,$$ where $\mathfrak{m}_P$ is the maximal ideal of $\mathcal{O}_{\Bbb{P},P}$. I believe $P$ need not be a $\Bbb{Q}$-point, but please correct me if I'm wrong. Unfortunately I have no clue how to check this claim; how would I find such a singular point? Checking this (in)equality for a point $P\in\Bbb{P}$ is very cumbersome (I have tried for a handful of points, to no avail), and it is obviously impossible to check all points $P\in\Bbb{P}$. Is there an easy way to check the (in)equality above? Or some way to narrow down the number of candidates for singular points (drastically)? Thanks in advance for any answers or pointers as to where to look for them. REPLY [6 votes]: We can explicitly describe the singular locus of $\Bbb P(q_0,\dotsc,q_n)$. Recall that $\Bbb P(q_0,\dotsc,q_n)$ has homogeneous coordinates $(x_0:\dotsb:x_n)$ satisfying \begin{align*} (x_0:\dotsb:x_n)&=(\lambda^{q_0} x_0:\dotsb:\lambda^{q_n}x_n)& \lambda\in\Bbb C^* \end{align*} For every prime $p$ define $$ \DeclareMathOperator{Sing}{Sing} \Sing_p\Bbb P(q_0,\dotsc,q_n)=\{x\in\Bbb P(q_0,\dotsc,q_n): x_i\neq0\Rightarrow p\mid q_i\} $$ One may then prove that the singular locus of $\Bbb P(q_0,\dotsc,q_n)$ is $$ \Sing \Bbb P(q_0,\dotsc,q_n)=\bigcup_{p\text{ prime}}\Sing_p\Bbb P(q_0,\dotsc,q_n) $$ In our case we have $$ \Sing_p\Bbb P_{1123}= \begin{cases} \{(0:0:1:0)\} & p=2 \\ \{(0:0:0:1)\} & p=3 \\ \varnothing & p\neq 2,3 \end{cases} $$ so that $$ \Sing\Bbb P_{1123}=\{(0:0:1:0),(0:0:0:1)\} $$ Hence we have two singular points $P=(0:0:1:0)$ and $Q=(0:0:0:1)$. I'll leave it to you to prove that \begin{align*} \dim\mathfrak m_P/\mathfrak m_P^2 &=1 & \dim\mathfrak m_Q/\mathfrak m_Q^2 &=1 \end{align*} If you're familiar with toric varieties, then proving that $\Bbb P_{1123}$ is not smooth is quite simple. Let \begin{align*} \rho_0 &= \langle-3,\,-2,\,-1\rangle & \rho_1 &= \langle0,\,0,\,1\rangle & \rho_2 &= \langle0,\,1,\,0\rangle & \rho_3 &=\langle1,\,0,\,0\rangle \end{align*} The space $\Bbb P_{1123}\DeclareMathOperator{Cone}{Cone}$ is the toric variety $X_\Sigma$ corresponding to the fan $\Sigma$ generated by the cones \begin{align*} \sigma_0 &= \Cone(\rho_1,\rho_2,\rho_3) & \sigma_1 &= \Cone(\rho_0,\rho_2,\rho_3) & \sigma_2 &= \Cone(\rho_0,\rho_1,\rho_3) & \sigma_3 &= \Cone(\rho_0,\rho_1,\rho_2) \end{align*} A cone is called smooth if its generators are a subset of a $\Bbb Z$-basis of the ambient lattice. The toric variety $X_\Sigma$ is smooth if and only if each $\sigma_i$ is smooth. However, $\sigma_2$ and $\sigma_3$ are not smooth. Hence $X_\Sigma$ is not smooth. In fact, under the orbit-cone-correspondence, our two singular points correspond to the cones $\sigma_2$ and $\sigma_3$.<|endoftext|> TITLE: 63% chance of event happening over repeated attempts QUESTION [5 upvotes]: This question is of interest: If there is a $1 / x$ chance of something happening, in $x$ attempts, for large numbers over $50$ or so, the likelihood of it happening is about $63\%$. If there's a $1$ in $10\,000$ chance of getting hit by a meteor if you go outside, if you go outside $10\,000$ times, you have a $63\%$ chance of getting hit with a meteor at some point. 
If there's a $1$ in a million chance of winning the lottery and you buy a million (random) lottery tickets, you have a $63\%$ chance of winning. What is the main idea underlying this property? REPLY [2 votes]: It's a limit problem: the limit of $1-\left(\frac{x-1}{x}\right)^x$ as $x$ approaches infinity is $1-\frac{1}{e}$, which is approximately $63\%$. So yes, if you have a one in a million chance of winning the lotto, and you buy 1 million random tickets, you have a 63% chance of winning.<|endoftext|> TITLE: Integral with arithmetic-geometric mean ${\large\int}_0^1\frac{x^z}{\operatorname{agm}(1,\,x)}dx$ QUESTION [19 upvotes]: The arithmetic-geometric mean$^{[1]}$$\!^{[2]}$ of positive numbers $a$ and $b$ is denoted $\operatorname{agm}(a,b)$ and defined as follows: $$\text{Let}\quad a_0=a,\quad b_0=b,\quad a_{n+1}=\frac{a_n+b_n}2,\quad b_{n+1}=\sqrt{a_n b_n}.$$ $$\text{Then}\quad\operatorname{agm}(a,\,b)=\lim_{n\to\infty}a_n=\lim_{n\to\infty}b_n.\tag1$$ It can be expressed$^{[3]}$ in terms of the complete elliptic integral of the $1^{\text{st}}$ kind$^{[4]}$$\!^{[5]}$: $$\operatorname{agm}(a,\,b)=\frac\pi4\cdot\frac{a+b}{{\bf K}\!\left(\frac{a-b}{a+b}\right)}.\tag2$$ It appears that $$\int_0^1\frac{x^z}{\operatorname{agm}(1,\,x)}dx\stackrel{\color{gray}?}=\frac{\Gamma\!\left(\frac z2+\frac12\right)^2}{2\,\Gamma\!\left(\frac z2+1\right)^2},\quad\forall z\in\mathbb C,\,\Re(z)>-1.\tag3$$ How can we prove it? REPLY [14 votes]: $$\int_{0}^{+\infty}\frac{dt}{\sqrt{(1+t^2)(1+x^2 t^2)}}=\frac{\pi}{2\,\text{agm}(1,x)}\tag{1}$$ hence $$ I(z)=\int_{0}^{1}\frac{x^z}{\text{agm}(1,x)}\,dx = \frac{2}{\pi}\int_{0}^{1}\int_{0}^{+\infty}\frac{x^z}{\sqrt{1+x^2\sinh^2(t)}}\,dt\,dx\tag{2}$$ that can be managed through integration by parts, getting the same recurrence relation as the Euler Beta function involved in the claim. That, plus the logarithmic convexity of $I(z)$ (which follows from the integral form of the Cauchy-Schwarz inequality applied to the original definition of $I(z)$) is enough to prove the given conjecture. For instance, by exchanging the order of integration we have: $$ I(0) = \frac{2}{\pi}\int_{0}^{+\infty}\frac{t}{\sinh(t)}\,dt = \frac{2}{\pi}\cdot\frac{\pi^2}{4} = \frac{\pi}{2}\tag{3}$$ as well as: $$ I(1) = \frac{2}{\pi}\int_{0}^{+\infty}\frac{\cosh(t)-1}{\sinh(t)^2}\,dt = \frac{2}{\pi}.\tag{4}$$ I also think that your conjecture directly follows from the Legendre differential equation for $K(k)$ and $K(k')$, coupled with $(3)$ and $(4)$. Nice catch, Vladimir!<|endoftext|> TITLE: Is the $L$ in $LU$ factorization unique? QUESTION [9 upvotes]: I was doing an $LU$ factorization problem \begin{bmatrix} 2 & 3 & 2 \\ 4 & 13 & 9 \\ -6 & 5 & 4 \end{bmatrix} and I was going to multiply the second row by $0.5$ and subtract the result from row $1$, then do something similar to row reduce row $3$. My book tells me that you first row-reduce to get $U$, then you extract certain columns from various steps in the reduction process to get $L$. That's when I realized that if I instead multiplied row $1$ by $2$ and subtracted the result from row $2$, this would be equivalent to getting $U$, but it would change the resultant $L$ matrix! What is wrong with this approach? REPLY [2 votes]: For a singular matrix, the LU decomposition may not be unique; for example, $$\begin{bmatrix}1&1&1\\1&1&1\\1&1&1\end{bmatrix}=\begin{bmatrix}1&0&0\\1&1&0\\1&x&1\end{bmatrix}\begin{bmatrix}1&1&1\\0&0&0\\0&0&0\end{bmatrix}$$ for any $x$. It is interesting to note that for a $2\times 2$ matrix, the LU decomposition is unique, even if the matrix is singular. (One can show this by direct calculation.)
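As a quick numerical sanity check of the example above (my addition, not part of the original answer; it assumes NumPy is available), one can verify that every choice of the free parameter $x$ reproduces the same all-ones matrix:

```python
# Minimal check (assumes NumPy is installed): the all-ones 3x3 matrix admits
# infinitely many LU factorizations, since the (3,2) entry of L is arbitrary.
import numpy as np

U = np.array([[1., 1., 1.],
              [0., 0., 0.],
              [0., 0., 0.]])

for x in [0.0, 1.0, -2.5, 7.0]:
    L = np.array([[1., 0., 0.],
                  [1., 1., 0.],
                  [1., x,  1.]])
    assert np.allclose(L @ U, np.ones((3, 3)))   # L*U is the all-ones matrix for every x

print("every tested x gives the same product, so L is not unique here")
```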
<|endoftext|> TITLE: closed $1$-form that is exact QUESTION [5 upvotes]: Let $\alpha=\sum_{i=1}^n f_idx_i$ be a closed $1$-form defined on all of $\mathbb{R}^n$. Verify that the function $g(\textbf{x})=\sum_{i=1}^nx_i\int_0^1f_i(t\textbf{x})dt$ satisfies $dg=\alpha$. Proof. We must show that $\frac{\partial g}{\partial x_j}=f_j$. Then \begin{align} \frac{\partial g}{\partial x_j}(\textbf{x})&=\frac{\partial }{\partial x_j}\left(\sum_{i=1}^n x_i\int_0^1f_i(t\textbf{x})dt\right)\\ &=\sum_{i=1}^n \frac{\partial }{\partial x_j}\left( x_i\int_0^1f_i(t\textbf{x})dt\right)\\ &=\sum_{i=1}^n\left( \frac{\partial }{\partial x_j}x_i\int_0^1f_i(t\textbf{x})dt+x_i\frac{\partial }{\partial x_j}\int_0^1f_i(t\textbf{x})dt\right)\\ &=\sum_{i=1}^n\left(\frac{\partial }{\partial x_j}x_i\int_0^1f_i(t\textbf{x})dt\right)+\sum_{i=1}^nx_i\int_0^1\frac{\partial}{\partial x_j}f_i(t\textbf{x})dt\\ &=\int_0^1f_j(t\textbf{x})dt+\sum_{i=1}^nx_i\int_0^1\frac{\partial}{\partial x_j}f_i(t\textbf{x})dt \end{align} I don't know how to continue REPLY [3 votes]: The important step is that since $\alpha$ is closed you have $d\alpha=0$, giving $\frac{\partial f_i}{\partial x_j} = \frac{\partial f_j}{\partial x_i}$. Write $f_j(t {\bf x})= (\frac{\partial}{\partial t} t)\ f_j(t {\bf x})$, then with partial integration you get $$\int_0^1dt\ f_j(t {\bf x})= {\large[}tf_j(t \textbf{x}){\large]}_{t=0}^{t=1}-\int_0^1dt\ t\ \frac\partial{\partial t} f_j(tx_1,tx_2,...,tx_n)$$ Now you may expand $\frac{\partial}{\partial t}f_j(t\mathbf x)$ with the chain rule. What you get is (letting $Df_j$ denote the differential of $f_j$) $$\frac{\partial}{\partial t}f_j(t\mathbf x)=\sum_i [Df_j]_i \cdot \frac{\partial}{\partial t}(t\mathbf x)_i = \sum_i [\partial_i f_j](t\mathbf x)\  x_i.$$ Here $\partial_i f_j$ is the $i$-th component of the Jacobian of $f_j$, i.e. $\partial_i f_j (\mathbf x) = \frac{\partial}{\partial x_i}f_j(\mathbf x)$; this notation is adopted instead of using $\frac\partial{\partial x_i}f$ because the derivative of $f$ is evaluated at the point $t\mathbf x$, and writing $\frac{\partial}{\partial x_i}f(t\mathbf x)$ would give an extra factor of $t$. Anyway, in the last equation you may further use $\partial_i f_j = \partial_j f_i$ since $\alpha$ is closed. Plugging the form for $\frac{\partial}{\partial t}f_j(t\mathbf x)$ into the integral then gives: $$\int_0^1dt\ t\ \frac\partial{\partial t} f_j(tx_1,tx_2,...,tx_n)=\int_0^1 dt\ t\sum_i x_i \partial_j f_i(tx_1,...,tx_n)\\ =\int_0^1 dt\ t\sum_i x_i\ \frac1t \frac \partial{\partial x_j}\ f_i(t x_1,...,tx_n)$$ This final term is the same as the term on the right of the OP's last equation. So if we plug the expression for $\int_0^1 f_j(t\mathbf x)\ dt$ we have just derived into the OP's expression we get: $$\frac{\partial}{\partial x_j} g({\bf x})=f_j({\bf x})$$ giving $$dg=\sum_j f_j({\bf x}) dx_j.$$<|endoftext|> TITLE: If $f(\frac{x+y}{2})=\frac{f(x)+f(y)}{2}$, then find $f(2)$ QUESTION [8 upvotes]: Let $f(\frac{x+y}{2})=\frac{f(x)+f(y)}{2}$ and it is given that $f(0)=1$ and $f'(0)=-1$, where $f'$ denotes the first derivative. Find the value of $f(2)$. Could someone tell me how to use $f'(0)=-1$ here? I am not able to use this information. REPLY [8 votes]: If $f$ satisfies your equation, then $g(x) = f(x) - f(0)$ satisfies the Cauchy functional equation $g(x+y) = g(x) + g(y)$ (indeed, $g(0)=0$, so setting $y=0$ gives $g(x/2)=g(x)/2$, and then $g(x+y)/2=g\!\left(\frac{x+y}{2}\right)=\frac{g(x)+g(y)}{2}$). Of course, if $f'(0)$ exists, then $g$ is continuous.
The only continuous solutions of the Cauchy functional equation are the linear functions $g(x) = ax$, so the only continuous solutions of your equation are the affine functions $f(x) = a x + b$. The rest is easy.<|endoftext|> TITLE: Integrate $\int_0^\infty \frac{e^{-x/\sqrt3}-e^{-x/\sqrt2}}{x}\,\mathrm{d}x$ QUESTION [8 upvotes]: I can't solve the integral $$\int_0^\infty \frac{e^{-x/\sqrt3}-e^{-x/\sqrt2}}{x}\,\mathrm{d}x$$ I tried it by using Beta and Gamma function and integration by parts. Please help me to solve it. REPLY [8 votes]: \begin{align} \int_0^{\infty}e^{-yx}\ \mathrm dx &=\frac{1}{y}\\[9pt] \int_b^a\int_0^{\infty}e^{-yx}\ \mathrm dx\ \mathrm dy &=\int_b^a\frac{\mathrm dy}{y}\\[9pt] \int_0^{\infty}\int_b^ae^{-xy}\ \mathrm dy\ \mathrm dx&=\ln a-\ln b\\[9pt] \int_0^{\infty}\left[-\frac{e^{-xy}}{x}\right]_b^a\ \mathrm dx &=\ln\left(\frac{a}{b}\right)\\[9pt] \int_0^{\infty}\frac{e^{-ax}-e^{-bx}}{x}\ \mathrm dx &=\ln\left(\frac{b}{a}\right)\\[9pt] \end{align} $$ \int_0^{\infty}\frac{e^{-x/\sqrt{3}}-e^{-x/\sqrt{2}}}{x}\ \mathrm dx =\bbox[8pt,border:3px #FF69B4 solid]{\color{red}{\large\frac{1}{2}\ln\left(\frac{3}{2}\right)}} $$<|endoftext|> TITLE: Prob. 4 (a), Sec. 20 in Munkres' Topology, 2nd ed: Are these functions continuous in the product, uniform, and box topologies? QUESTION [5 upvotes]: Here is Prob. 20 (a) in the book Topology by James R. Munkres, 2nd edition. Consider the product, uniform, and box topologies on $\mathbb{R}^\omega$. In which of these topologies are the following functions from $\mathbb{R}$ to $\mathbb{R}^\omega$ continuous? $$f(t) = \left( t, 2t, 3t, \ldots \right),$$ $$g(t) = \left( t, t, t, \ldots \right),$$ $$h(t) = \left( t, \frac{t}{2}, \frac{t}{3}, \ldots \right)$$ for each $t \in \mathbb{R}$. Let $U \colon= \prod_{n \in \mathbb{N}} U_n$ be a product topology basis element for $\mathbb{R}^\omega$; let $n_1, \ldots, n_k$ be the natural numbers such that $U_{n_i}$ is a proper open subset of $\mathbb{R}$ for each $i = 1, \ldots, k$; suppose $U_n = \mathbb{R}$ for all other $n \in \mathbb{N}$. Suppose $t \in f^{-1}(U)$ so that $f(t) \in U$ and hence $$\pi_{n_i}\left(f(t) \right) = n_i t \in U_{n_i}$$ for each $i = 1, \ldots, k$. Here, for each $n \in \mathbb{N}$, the map $\pi_n \colon \mathbb{R}^\omega \to \mathbb{R}$ is defined by $$\pi_n(x) \colon= x_n \ \mbox{ for all } \ x \colon= \left( x_1, x_2, x_3, \ldots \right) \in \mathbb{R}^\omega.$$ Since $U_{n_i}$ is open in $\mathbb{R}$ under the usual topology, there exists a positive real number $\delta_i$ such that $$\left( \pi_{n_i}\left( f(t) \right) - \delta_i, \pi_{n_i}\left( f(t) \right) + \delta_i \right) = \left( n_i t - \delta_i , n_i t + \delta_i \right) \subset U_{n_i}$$ for each $i = 1, \ldots, k$. Let $$\delta \colon= \min \left\{ \frac{\delta_1}{n_1}, \ldots, \frac{\delta_k}{n_k} \right\}. $$ This $\delta$ is a positive real number, and if $s \in (t- \delta, t+\delta)$, then $$ t - \frac{ \delta_{n_i} }{ n_i } < s < t + \frac{ \delta_{n_i} }{ n_i },$$ which implies that $$n_i t - \delta_i < n_i s < n_i t + \delta_i,$$ which in turn implies that $$n_i s \in U_{n_i},$$ for each $i = 1, \ldots, k$, and hence $f(s) \in U$. Thus, for each $t \in f^{-1}(U)$, there is a positive real number $\delta$ such that $$t \in (t- \delta, t + \delta) \subset f^{-1}(U),$$ showing that the inverse image $f^{-1}(U)$ is open for each basis element $U$ of the product topology on $\mathbb{R}^\omega$. Now suppose that $t \in g^{-1}(U)$. 
Then, for each $i = 1, \ldots, k$, $$\pi_{n_i}\left( g(t) \right) = t \in U_{n_i}$$ and hence $$\left( t - \delta_i, t+\delta_i \right) \subset U_{n_i}$$ for some positive real number $\delta_i$. Let $$\delta \colon= \min \left\{ \delta_1, \ldots, \delta_k \right\}.$$ This $\delta$ is a positive real number, and if $s \in ( t-\delta, t+\delta )$, then $s \in \left(t-\delta_i, t + \delta_i \right)$ and hence $g(s) \in U$, showing that $g^{-1}(U)$ is open. Finaly, if $t \in h^{-1}(U)$, then, for each $i = 1, \ldots, k$, $$ \pi_{n_i}\left( h(t) \right) = \frac{t}{n_i} \in U_{n_i}$$ and hence $$\left( \frac{t}{n_i} - \delta_i , \frac{t}{n_i} + \delta_i \right) \subset U_{n_i}$$ for some positive real number $\delta_i$. Let's take $$\delta \colon= \min \left\{ n_1 \delta_1, \ldots, n_k \delta_k \right\}.$$ This $\delta $ is a positive real number, and if $s \in (t - \delta , t + \delta )$, then $$t- n_i \delta_i < s < t + n_i \delta_i$$ and hence $$ \frac{t}{n_i} - \delta_i < \frac{s}{n_i} < \frac{t}{n_i} + \delta_i $$ or $$ \pi_{n_i}\left( h(s) \right) \in \left( \frac{t}{n_i} - \delta_i, \frac{t}{n_i} + \delta_i \right) \subset U_{n_i}$$ for each $i = 1, \ldots, k$, which implies that $s \in h^{-1}(U)$, from which it follows that $h^{-1}(U)$ is open in $\mathbb{R}$. Have I been able to get these proofs right? Now for the uniform topology. Let's take a real number $\varepsilon$ such that $0 < \varepsilon < \frac 1 2$. Let $t \in \mathbb{R}$. If $s \in \mathbb{R}$ such that $s \neq t$, then $$\tilde{\rho}\left( f(s), f(t) \right) = \sup \left\{ \ \min \left\{ n \vert s-t\vert, 1 \right\} \ \colon \ n \in \mathbb{N} \ \right\} = 1 > \varepsilon,$$ from which it follows that $f$ is not continuous (at any point of $\mathbb{R}$). Now let us take a real number $\delta$ such that $0 < \delta \leq \varepsilon$. Then, for any points $s, t \in \mathbb{R}$ such that $\vert s-t \vert < \delta$, we have $$\tilde{\rho}\left( g(s), g(t) \right) = \sup \left\{ \ \min \left\{ \vert s-t \vert, 1 \right\} \ \colon \ n \in \mathbb{N} \right\} = \vert s- t \vert < \varepsilon,$$ and hence it follows that $g$ is (uniformly) continuous. Now let's choose $\varepsilon$ and $\delta$ as above. Then, for any $s, t \in \mathbb{R}$ such that $\vert s-t\vert < \delta$, we have $$\tilde{\rho}\left( h(s), h(t) \right) = \sup \left\{ \ \min \left\{ \frac{\vert s-t \vert}{n}, 1 \right\} \ \colon \ n \in \mathbb{N} \ \right\} = \min \left\{ \vert s-t\vert, 1 \right\} = \vert s-t\vert < \varepsilon,$$ from which it follows that $h$ is (uniformly) continuous. Have I been able to come up with the right answer in each case? If so, have I been able to get these proofs right as well? For the product topology, we can also use Theorem 19.6 in Munkres. Here I have attempted to show the continuity of $f$, $g$, and $h$ directly. And, the box topology on $\mathbb{R}^\omega$ is the one havaing as a basis all sets of the form $$ (a_1, b_1) \times (a_2, b_2) \times (a_3, b_3) \times \cdots, $$ where $\left(a_i \right)_{i \in \mathbb{N}} \in \mathbb{R}^\omega$ and $\left(b_i \right)_{i \in \mathbb{N}} \in \mathbb{R}^\omega$ are such that $a_i < b_i$ for each $i = 1, 2, 3, \ldots$, and $(a_i, b_i)$ denotes the segment (i.e. open interval) with $a_i$ as the left endpoint and $b_i$ as the right endpoint. The functions $f$, $g$, and $h$ are not continuous when $\mathbb{R}^\omega$ is given the box topology. 
The inverse image under each of $f$, $g$, and $h$ of the basis element $$B \colon= \left( -1, 1 \right) \times \left(-\frac{1}{2^2}, \frac{1}{2^2} \right) \times \left( -\frac{1}{3^2}, \frac{1}{3^2} \right) \times \cdots, $$ for example, contains the point $t = 0$ since $$f(0) = g(0) = h(0) = (0, 0, 0, \ldots) \in B.$$ So, in order for this inverse image to be open in $\mathbb{R}$ with the usual topology, there must be an open interval $\left(-\delta_f, \delta_f \right)$, $\left( -\delta_g, \delta_g \right)$, and $\left( -\delta_h, \delta_h \right)$, for some positive real numbers $\delta_f$, $\delta_g$, and $\delta_h$, respectively, such that $$ \left(-\delta_f, \delta_f \right) \subset f^{-1}(B),$$ $$\left( -\delta_g, \delta_g \right) \subset g^{-1}(B), $$ $$\left( -\delta_h, \delta_h \right) \subset h^{-1}(B). $$ In particular, we must have $$f\left( \frac{\delta_f}{2} \right) \in B,$$ $$g\left( \frac{\delta_g}{2} \right) \in B,$$ $$h\left( \frac{\delta_h}{2} \right) \in B.$$ So, for each $n \in \mathbb{N}$, we have $$\frac{n \delta_f}{2} \in \left( - \frac{1}{n^2}, \frac{1}{n^2}\right), \ \frac{\delta_g}{2} \in \left( - \frac{1}{n^2}, \frac{1}{n^2}\right), \ \frac{\delta_h}{2n} \in \left( - \frac{1}{n^2}, \frac{1}{n^2}\right), $$ and hence $$n^3 < \frac{2}{ \delta_f } \ \mbox{ for all} \ n \in \mathbb{N},$$ $$n^2 < \frac{2}{ \delta_g } \ \mbox{ for all} \ n \in \mathbb{N},$$ $$n < \frac{2}{ \delta_h } \ \mbox{ for all} \ n \in \mathbb{N},$$ each of which is impossible. Am I right? That $g$ is not continuous was also shown by Munkres himself in Example 2, Sec. 19 on page 117. REPLY [3 votes]: Yes, your proofs are correct! I think they could be more streamlined and readable, though. Due to the simplicity of the functions involved, it's easy to compute $v^{-1}(U)$ explicitly for $v = f, g, h$ and determine if the result is open in $\mathbb R$ rather than picking an arbitrary point $t \in v^{-1}(U)$ and then some $\delta> 0$ with $(t-\delta, t+\delta) \subseteq v^{-1}(U)$ to show that $v^{-1}(U)$ is open. In either case, it should first be justified continuity of $v$ is equivalent to "$v^{-1}(U)$ is open when $U$ is a basis element" (rather than an arbitrary open set) because we can always write $U$ as a union $U = \bigcup_\alpha U_\alpha$ of basis elements, and since inverse imeages preserve unions, $v^{-1}\left( \bigcup_\alpha U_\alpha \right) = \bigcup_\alpha v^{-1}(U_\alpha)$, the latter of which is open if $v^{-1}(U_\alpha)$ is open for basis elements $U_\alpha$. By defining $v_n = \pi_n \circ v$, we can write $v \colon \mathbb R \to \mathbb R^\omega$ as $v = (v_1, v_2, \dots)$, where $v_n \colon \mathbb R \to \mathbb R$, in the sense that $$ v(t) = \big( v_1(t), v_2(t), \dots \big). $$ Substituting $v = f, g, h$, \begin{align*} f_n(t) = nt,\quad g_n(t) = t,\quad h_n(t) = \frac{t}{n}. \end{align*} We can consider the product and box topologies simultaneously by taking a general (nonempty) basis element $$ U = (a_1, b_1) \times (a_2, b_2) \times \cdots, $$ where $-\infty \leq a_n < b_n \leq \infty$ for all $n \in \mathbb N$. The product topology just has the stipulation that $(a_n, b_n) \neq (-\infty, \infty) = \mathbb R$ only for a finite subset of indices $n_1, \dots, n_k \in \mathbb N$. To compute $v^{-1}(U)$, note that $$ t \in v^{-1}(U) \iff v(t) \in U \iff \left[ v_n(t) \in (a_n, b_n) \quad\forall n \in \mathbb N \right]. 
$$ Hence $t \in f^{-1}(U) \iff nt \in (a_n, b_n) \quad \forall n \iff t \in \left( \tfrac{a_n}{n}, \tfrac{b_n}{n} \right) \quad \forall n \iff t \in \bigcap_{n=1}^\infty \left( \tfrac{a_n}{n}, \tfrac{b_n}{n} \right)$. The latter set is open in the product topology, being the intersection of only a finite number of open sets (since $(a_n, b_n) = \mathbb R$ for all but finitely many $n$), and hence $f$ is continuous in the product topology. However, that set is not always open in the box topology (take $a_n = -1$ and $b_n = 1$ for all $n$ to see $f^{-1}(U) = \{0\}$, which isn't open), and hence $f$ is not continuous in the box topology. $t \in g^{-1}(U) \iff t \in (a_n, b_n) \quad \forall n \iff t \in \bigcap_{n=1}^\infty \left( a_n, b_n \right)$. By the same reasoning as for $f$, $g$ is continuous in the product topology. In the box topology, taking $\left( a_n, b_n \right) = \left( -\tfrac{1}{n}, \tfrac{1}{n} \right)$ shows that $g$ is not continuous. $t \in h^{-1}(U) \iff \tfrac{t}{n} \in (a_n, b_n) \quad \forall n \iff t \in \left( n a_n, n b_n \right) \quad \forall n \iff t \in \bigcap_{n=1}^\infty \left( n a_n, n b_n \right)$. By the same reasoning, $h$ is continuous in the product topology. In the box topology, take $a_n = -\tfrac{1}{n^2}$ and $b_n = \tfrac{1}{n^2}$ to see that $h$ is not continuous. For the uniform topology, I wouldn't change a thing!<|endoftext|> TITLE: I don't understand this definition of the integers. QUESTION [6 upvotes]: Definition: The set of integers is $\mathbb{Z}:=\frac{\mathbb{N} \times \mathbb{N}}{R}=\{[(m,n)]:(m,n)\in \mathbb{N}\times \mathbb{N}\}$. I understand this is the set of all the equivalence classes of $\mathbb{N}\times \mathbb{N}$ using the equivalence relation $R$, but who are $m$ and $n$? place holders for what? Introducing, the relation of equivalence: $(m,n)R(p,q)\Leftrightarrow m + q = n + p.$ I didn't see this before! REPLY [3 votes]: Formally you can't define a subtraction on $\mathbb{N}$, since an expression like $2-3$ wouldn't make any sense in $\mathbb{N}$. Nevertheless you can still define the subtraction implicitly via the addition by rewriting the relation $$b-x = a \Leftrightarrow a+x = b.$$ With the equation on the right-hand side $x$ is uniquely determined by the pair $(a,b) \in \mathbb{N} \times \mathbb{N}$, but choosing a different pair $(a',b')$ could result in the same $x$ as before, for example for $(a',b')=(a+1,b+1)$ we get $$a+x = b \Leftrightarrow (a+1)+x = (b+1).$$ You now want to identify all pairs $(a,b)$, which define the same $x$, so your goal is $$(a,b) \sim (a',b') :\Leftrightarrow (a,b) \text{ and } (a',b') \text{ define the same } x.$$ This is exactly done by the equivalence relation you mentioned above and the set of all solutions to the problem $a+x=b$ for $a,b \in \mathbb{N}$ is obviously $\mathbb{Z}$.<|endoftext|> TITLE: $ \int_0^\infty \ \frac{(x\cdot\cos x - \sin x)^3}{x^6} \ dx$ QUESTION [8 upvotes]: What is the value of $$ \int_0^\infty \ \frac{(x\cdot\cos x - \sin x)^3}{x^6} \ dx $$ I have no idea how to start with this integral, any hint? 
REPLY [4 votes]: You may use the Laplace transform: $$\mathcal{L}^{-1}\left(\frac{1}{x^6}\right)=\frac{s^5}{120},\qquad \mathcal{L}\left((x\cos x-\sin x)^3\right) = \frac{31104-196992 s^2-94080 s^4-13440 s^6}{(s^2+1)^4(s^2+9)^4}$$ turning the original integral into $$ \frac{1}{120}\int_{0}^{+\infty}\frac{31104s^5-196992 s^7-94080 s^9-13440 s^{11}}{(s^2+1)^4(s^2+9)^4}\,ds $$ that by partial fraction decomposition equals: $$\color{red}{\frac{4-9\log 3}{40}}$$<|endoftext|> TITLE: Moving half of the nuts QUESTION [20 upvotes]: An even number of nuts is divided into three nonempty piles. In each step, we are allowed to take half the nuts from a pile with an even number of nuts, and put them on another pile. Can we always reach a point where exactly half of the nuts belong to one pile? For example, if we start with $(3,5,6)$, we can transform as $(3,5,6)\rightarrow (3,8,3)\rightarrow (3,4,7)$, and now the last pile has half of the nuts. Note that in each step, some pile of nuts must be even, so we can keep moving. Moreover, we will never empty a pile. REPLY [5 votes]: The answer is yes and here is how (I got the idea from Shooter's comment above): Step 1: If you are given three even piles of nuts (otherwise go to Step 2): Choose one pile and keep taking half the nuts from it as long as you can (until the number of nuts in it is odd). Put them wherever you like. Step 2: You are now in the situation even (pile A), odd (pile B), odd (pile C) (up to changing the order). Now, you cut pile A in half as often as you can. Put all of the nuts on pile C except in the last step, in which you put them on pile B. So now pile B is the only one with an even number of nuts. Step 3: Go back to Step 2 (with A and B reversed) unless pile B has precisely double the size of pile A. In that case go to Step 4. Step 4: You arrived at numbers of the form (x,2x,sum-3x). Cut B and put it on C, then cut C and put it on A to get (sum/2,x,sum/2-x). For this algorithm to work, we have to argue why we indeed get to Step 4 at all: If we look at our loop, it is clear that the sum of the nuts in piles A and B is non-increasing and so there comes a point where it will be constant for all eternity (nothing goes to C any longer). If the loop never ends, the numbers would then behave like this: $$(m,2n) \to (m+n,n) \to (\frac{1}{2}m+\frac{1}{2}n,\frac{1}{2}m+\frac{3}{2}n) \to (\frac{3}{4}m+\frac{5}{4}n,\frac{1}{4}m+\frac{3}{4}n)\to ...$$ In this sequence it is easy to see that the difference between one entry and the same entry two steps later is always of the form $\pm 2^{-k}(m-n)$ with $k$ increasing with each step (it is a geometric sum). But all the numbers are integers so that $2^{-k}(m-n)$ must be an integer for all $k$. Therefore, $m=n$! I hope I did not miss anything and this is correct.<|endoftext|> TITLE: How do I prove that $f_n\to f$ in $L^p$? QUESTION [8 upvotes]: Let $\{f_n\}$ be a sequence in $L^p([0,1])$ for $p\geq 1$. Suppose there exists $f\in L^p([0,1])$ satisfying $\lim_{n\to\infty} \int_0^1 f_n(x)g(x)dx = \int_0^1 f(x)g(x)dx$ for any $g\in L^2([0,1])$. Moreover, assume that $\lim_{n\to\infty} ||f_n||_p=||f||_p$. In this case, how do I prove that $f_n\rightarrow f$ in $L^p$? Even when $p=2$, the result is non-trivial since $\lim_{n\to\infty} \int_0^1 (f_n(x)-f(x))g(x)dx=0$ (weak convergence) does not simply imply the convergence in $L^2$. More seriously, if $p\neq 2$, I'm not sure what's going on here. Why is $fg\in L^1$ for given $f\in L^p$ and $g\in L^2$? Hölder's inequality cannot be applied here.
Moreover, I'm not sure how to use the condition $\lim_{n\to\infty} ||f_n||_p =||f||_p$. If $f_n \to f$ pointwise a.e., this condition seems useful, but this is not the case. How do I prove this? Thank you in advance. REPLY [2 votes]: This solves the case $p>1$: Let's show that $\int f_ng\to\int fg$ for all $g\in L^q=L^q[0,1]$. Let $g\in L^q$. Let's show that $\int f_ng$ is Cauchy. Let $\epsilon>0$. Choose $p\in C[0,1]$ with $\Vert g-p\Vert_q<\epsilon$. Then for $n,m$ large, $$|\int f_ng-\int f_mg|\leq|\int f_n(g-p)|+|\int f_np-f_mp|+|\int f_m(p-g)|$$ Applying Hölder's inequality in the first and the third terms in the RHS, and using the fact that the sequence $(\Vert f_n\Vert_p)$ is bounded, say $K=\sup_n\Vert f_n\Vert_p$, yields $$|\int f_ng-\int f_mg|\leq 2\epsilon K+|\int f_np-f_mp|\tag{$*$}$$ which is small for $n,m$ large (because $K$ does not depend on $\epsilon$). Let $L=\lim_n\int f_ng$. We will show that $L=\int fg$. Indeed, take $\epsilon$, $p$ and $K$ as above, and note that \begin{align*} |L-\int fg|&\leq|L-\int f_ng|+|\int f_n(g-p)|+|\int f_np-\int fp|+|\int f(p-g)|\\ &\leq|L-\int f_n g|+K\epsilon+|\int f_np-\int fp|+K\epsilon \end{align*} and both $|L-\int f_n g|$ and $|\int f_np-\int fp|$ go to $0$. This shows that $|L-\int fg|<2K\epsilon$ for all $\epsilon$, and therefore $L=\int fg$. Therefore, $f_n\to f$ weakly. Refering to the same question as in BigbearZzz's answer, we conclude that $f_n\to f$ in $L^p$.<|endoftext|> TITLE: Find Surface Area Via a Line Integral (Stokes' Theorem) QUESTION [7 upvotes]: I am trying to use Stokes' Theorem to calculate the surface area of a parametrized surface via a line integral. The surface is the part of $z= x^2+y^2$ below the plane $z=5$. I know this can be done the usual way, without Stokes' Theorem, but there is another surface whose area I'm trying to calculate for which the usual way doesn't work well due to tough limits of integration. As stated in the question Un-Curl Operator, we can find the surface area via $\oint\vec{F}\cdot \vec{ds}$ over the curve of intersection of the paraboloid and plane, provided we can find a field $\vec{F}$ such that $\text{curl}(\vec{F})\cdot \vec{n}=1.$ This would mean that $$\text{curl}(\vec{F}) = \frac{\vec{n}}{{\Vert \vec{n} \Vert}^2},$$ so that, $$\text{curl}(\vec{F})\cdot \vec{n}=\left\Vert\frac{\vec{n}}{{\Vert \vec{n} \Vert}^2}\right\Vert\cdot\Vert \vec{n} \Vert=\frac{{\Vert \vec{n} \Vert}^2}{{\Vert \vec{n} \Vert}^2}=1,$$as required. But finding such a vector field has proven difficult. I first tried using the method described in the question Anti-Curl Operator, but realized that this wouldn't work because $\text{curl}(\vec{F})$ has a nonzero divergence. I then turned to the method described in an answer to the Un-Curl Operator question, known as the Helmholtz decomposition. The idea is that given our vector field, $\text{curl}(\vec{F})$ in this case, we can split it into a curl-free part and a divergence-free part: $$\text{curl}(\mathbf{F})=-\nabla\Phi+\nabla\times \mathbf{A}$$ I believe (I could be wrong though) that it remains only to find $\mathbf{A}$, since the first term will not contribute to the line integral around a closed path. However, this is where my confusion begins. 
According to Wikipedia, $\mathbf{A}$ is found as follows: $$ \mathbf{A}(\mathbf{r})=\frac{1}{4\pi}\int_V\frac{\nabla'\times\mathbf{F}(\mathbf{r}')}{\left\vert\mathbf{r}-\mathbf{r}'\right\vert}\text{d}V' - \frac{1}{4\pi}\oint_S\mathbf{\hat{n}}'\times\frac{\mathbf{F}(\mathbf{r}')}{\left\vert\mathbf{r}-\mathbf{r}'\right\vert}\text{d}S'$$ I do not know how to use this formula. Specifically, I do not know what most of the symbols represent in the context of this problem. What is the difference between $\mathbf{r}$ and $\mathbf{r}'$? What is the region of integration? Also, the Wikipedia page mentions that the second term can be dropped if the field satisfies certain conditions. Does that apply here? I am also wondering if there is anything else I need to do to specify that the dot product of $\text{curl}(\mathbf{F})$ and $\mathbf{n}$ is $1$. The answer on the Un-Curl Operator question used the following notation: $$\mathbf{G}\vert_{\partial\Omega}\cdot\mathbf{n}=1,$$ where $\mathbf{G}$ = $\text{curl}(\mathbf{F})$. I'm pretty sure that $\partial\Omega$ represents the boundary of a domain $\Omega$, but what is $\Omega$ in the context of this problem? Am I missing something? How would we use the Helmholtz decomposition to find the vector field $\mathbf{A}$ in this problem? REPLY [3 votes]: I'm not sure whether this helps you or not. Suppose $\mathbf{B}$ is a constant vector. Define $$ \mathbf{A}(\mathbf{r})=-\frac{1}{2}\mathbf{r}\times \mathbf{B}\Longrightarrow \nabla\times \mathbf{A}=\mathbf{B} $$ Then by Stokes theorem $$ \mathbf{B}\cdot \mathbf{S}=\int_{\Omega} \mathbf{B}\cdot d\mathbf{a}=\oint_{\partial\Omega} \mathbf{A}\cdot d\mathbf{r}= -\frac{1}{2}\oint_{\partial\Omega} (\mathbf{r}\times\mathbf{B})\cdot d\mathbf{r}= \mathbf{B}\cdot \left(\frac{1}{2}\oint_{\partial\Omega} \mathbf{r}\times d\mathbf{r}\right) $$ where $\mathbf{S}$ is the total (vector) surface area of $\Omega$. Now choose $\mathbf{B}=\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3$ to obtain $$ \mathbf{S}_\Omega=\int_{\Omega} d\mathbf{a}=\frac{1}{2}\oint_{\partial \Omega} \mathbf{r}\times d\mathbf{r} $$ This is a formula for calculating the surface area via line integrals.<|endoftext|> TITLE: Why is the homotopy category actually important? QUESTION [9 upvotes]: Since the homotopy category (of whatever) generally has very few limits and colimits, and these don't coincide with homotopy limits and colimits in the lifted model structure, why do we care about the homotopy category? Is it "just" for classifying homotopy types? REPLY [13 votes]: As soon as you have a homotopy invariant functor, it factors through the homotopy category, by the universal property of localizations. To give a homotopy invariant functor $\mathsf{C} \to \text{Whatever}$ is exactly the same thing as giving a functor $\operatorname{Ho}(\mathsf{C}) \to \text{Whatever}$. So if we can understand the homotopy category well enough, then we understand homotopy invariants. You know plenty of homotopy invariants: homology, cohomology, homotopy groups... and presumably you know why we care about them. So this explains why we care about the homotopy category. The fact that the homotopy category behaves badly doesn't mean it's uninteresting. It means that it's hard to understand it. It's because the homotopy category is the "shadow" of something else: the model structure of your category, or however else you want to encode stuff ($\infty$-categories, dg-categories, triangulated categories...). Think about it like, say, chain complexes and their homology. 
It's easy to manipulate chain complexes: you can do sums, tensor products, take duals... without really worrying. But as soon as you take their homology, things start to go awry: you have the $\operatorname{Tor}$-terms that appear for tensor products, extending scalars isn't as easy as just tensoring anymore, taking duals is a mess... So to really understand the homotopy category, we go one level up and we look at model categories (or any other way of encoding such information). Now things are really nice and well-behaved. Once we've done our work above, we return below and obtain information about the homotopy category, and eventually about homotopy invariants. This is where we get all these spectral sequences and other things, and do concrete computations. If you look at things now, we already have model categories (or whatever), and so you might think "why care about homotopy categories". But model categories didn't come first: in the beginning, people only had homotopy invariant functors, thought "I will localize at homotopy equivalences" and realized this made a mess of a category. They then realized that it was the shadow of something above it, and they looked for this, and behold, we now have model categories, dg-categories, triangulated categories... But, as far as I understand, this is all with the eventual goal of understanding better the homotopy category, where it's difficult to make computations but where the actual stuff happens.<|endoftext|> TITLE: Differentiability at $0$ of a function $f: \mathbb R \to \mathbb R$ which is twice differentiable in $\mathbb R \setminus \{0\}$ QUESTION [7 upvotes]: Let $f: \mathbb R \to \mathbb R$ be a function, twice differentiable in $\mathbb R \setminus \{0\}$, such that $f'(x)<0<f''(x) , \forall x <0$ and $f'(x)>0>f''(x) , \forall x >0$; then is it true that $f$ cannot be differentiable at $0$? EDIT : My attempt : Suppose $f'$ exists at $0$; then $f$ is differentiable everywhere, and as $f'(-1)<0<f'(1)$, Darboux's theorem gives some $c$ with $f'(c)=0$; since $f'\neq 0$ away from $0$, this forces $f'(0)=0$. REPLY: Since $f''(x)<0$ for $x>0$, the derivative $f'$ is strictly decreasing on $(0,\infty)$, so $f'(c)>f'(1)>0$ for any $c\in (0,1)$. Now suppose $h\in(0,1)$. By the mean value theorem, there is some $c\in (0,h)$ such that $f(h)-f(0)=hf'(c)>hf'(1)$. Thus $\frac{f(h)-f(0)}{h}>f'(1)$ for all positive $h<1$. Taking the limit as $h\to 0$, we find that $f'(0)\geq f'(1)>0$, if it exists. This contradicts your observation that $f'(0)=0$. (Alternatively, instead of citing Darboux, you could just use a similar argument for negative $h$ to show that if you compute $f'(0)$ by a limit as $h$ approaches $0$ from below, $f'(0)\leq 0$.)<|endoftext|> TITLE: $f \in C^2(\mathbb R)$ , $(f(x))^2 \le 1$ ; $(f'(x))^2+(f''(x))^2 \le 1 $ ; then is $(f(x))^2+(f'(x))^2 \le 1 $? QUESTION [7 upvotes]: Let $f \in C^2(\mathbb R)$ be such that $$(f(x))^2 \le 1 ; (f'(x))^2+(f''(x))^2 \le 1 , \forall x \in \mathbb R$$ Then is it true that $(f(x))^2+(f'(x))^2 \le 1 , \forall x \in \mathbb R$? I haven't gotten anywhere with this problem. Please help. Thanks in advance REPLY [10 votes]: Let $g(x) = f^2(x) + f'^2(x)$. We want to show that $g$ is bounded by $1$ when $f^2(x) \leq 1$ and $f'^2(x) + f''^2(x) \leq 1$. The extremal value(s) of $g$ are found at points $x_*$ where $g'(x_*) = 2f'(x_*)(f(x_*) + f''(x_*)) = 0$. If $f'(x_*) = 0$ then $g(x_*) = f^2(x_*) \leq 1$ and if $f(x_*) = -f''(x_*)$ we have $g(x_*) = f'^2(x_*) + f''^2(x_*) \leq 1$. $g$ is therefore bounded by $1$ at any local maximum point. If $g$ has an infinite number of maximum points $\{x_n\}_{n=1}^\infty$ and $x_n$ is unbounded in both directions then it follows that $g$ has to be bounded by $1$ for all $\mathbb{R}$.
Now assume (as suggested in the comments by Eric Thoma) there exists an $x_*$ such that $g(x_*) > 1$. We can wlog assume $g'(x) > 0$ for all $x>x_*$, as otherwise there exists a point where $g'(x) = 0$ and $g(x) > 1$, contradicting the result above (if $g'(x) < 0$ then the same argument can be applied to the function $g(-x)$, for which $g' > 0$). Since $g$ is bounded and strictly increasing for $x>x_*$, the limit $\lim_{x\to\infty} g(x)$ exists and satisfies $$\lim_{x\to\infty} g(x) = \lim_{x\to\infty} f^2(x) + f'^2(x) = a^2 > 1$$ Since $g'(x) > 0$ and $g$ is bounded we must also have $$\lim_{x\to\infty} g'(x) = \lim_{x\to\infty}f'(x)(f(x) + f''(x)) = 0$$ Since $f,f',f''$ are bounded we must either have $ \lim_{x\to\infty}f'(x) = 0$ or $\lim_{x\to\infty}f(x) + f''(x) = 0$. If $\lim_{x\to\infty}f'(x) = 0$ then $a^2 = \lim_{x\to\infty} f^2(x) \leq 1$ gives us a contradiction, and if $\lim_{x\to\infty} f(x) + f''(x) = 0$ then $$a^2 = \lim_{x\to\infty} f^2(x) + f'^2(x) = \lim_{x\to\infty} f''^2(x) + f'^2(x) \leq 1$$ also gives us a contradiction. It follows that $g$ is bounded by $1$ for all $x$.<|endoftext|> TITLE: Why do subvarieties correspond to Hodge classes? QUESTION [5 upvotes]: Let $X$ be a smooth complex projective variety and define $$Hdg^k(X)=H^{2k}(X,\mathbb{Z})\cap H^{k,k}(X)$$ the group of integral $(k,k)$ cycles on $X$. Now it is a fact that we can associate to a complex subvariety of $X$ of codimension $k$ an element in $Hdg^{k}(X)$, but I don't quite get the details of the association. From what I have gathered so far I think we are working with the diagram $$ \require{AMScd} \begin{CD} \mathcal{Z}_p(X) @>{i}>> H_{2n-2p}(X;\mathbb Z) @>{PD}>> H^{2p}(X;\mathbb Z)\\ @VVV @. @VV{i}V \\ H^{2n-2p}_{dR}(X)^* @>{\cong}>> H^{2p}_{dR}(X) @>{de Rham}>> H^{2p}(X;\mathbb{C}) \end{CD}$$ where along the top row we map $$Z\mapsto [Z]\mapsto [Z]^{PD}$$ Here $^{PD}$ denotes the Poincaré dual and $[Z]$ the pushforward of the fundamental class of $Z$ over the inclusion. Along the bottom row we map $$Z\mapsto \left[\omega\mapsto \int_Zi^*\omega\right]\mapsto\left( \alpha \text{ s.t. } \int_{X}\beta\wedge \alpha=\int_Zi^*\beta\right)\mapsto \left(\alpha^{top} \text{ s.t. }\alpha^{top}(V)=\int_V\alpha\right)$$ (we assume that $Z$ is smooth and all that to make things easier). The last map is given by the inverse of the de Rham isomorphism. Now the image of the bottom row can easily be shown to be a class of type $(p,p)$. The image of the top row is clearly the image of the integral cohomology class, by definition. Now if this diagram commutes we have a map $Z_p(X)\to Hdg^p(X)$. However, I don't see why this diagram should commute. So my question is: why does the above diagram commute? I'll add the following link. At the bottom of page $148$ the authors state without proof (I renamed the objects to be consistent with the above): The canonical morphism $H_{2n-2p}(X,\mathbb{Z})\to H^{2n-2p}(X,\mathbb{C})^*$ carries the topological class $[Z]$ of an analytic subspace $Z$ of codimension $p$ in $X$ into the fundamental class $\left[\omega\mapsto \int_Zi^*\omega\right]$. Similarly, the morphism $H^{2p}(X,\mathbb{Z})\to H^{2p}(X,\mathbb{C})$ carries the topological class $[Z]^{PD}$ to $[\alpha^{top}]$ I am looking for a proof of this result. (This is equivalent to the diagram above commuting, but might give a bit more context.) REPLY [2 votes]: This is carried out in a lot of detail in Voisin – Hodge theory and complex algebraic geometry I, section 11.1.2.
The particular result you're interested in seems to be Corollary 11.15.<|endoftext|> TITLE: Explicit unit/counit of inverse image/direct image adjunction. QUESTION [6 upvotes]: Is there a nice explicit description for the unit and counit of the inverse image/direct image adjunction $f^{-1} \dashv f_*$ between sheaves of rings (and in the version $f^* \dashv f_*$ for $\mathcal{O}_X$-modules)? It is said these come from natural maps, but I can only see they exist because I can argue we should have a hom-set adjunction, and that iso is constructed by using the universal property of the colimit of which $f^{-1}$ is a sheafification (in the sheaf case). Here I'm using the definition that given a morphism $f:(X,\mathcal{O}_X) \to (Y,\mathcal{O}_Y)$ of ringed spaces, and sheaves $\mathcal{F}$ on $X$ and $\mathcal{G}$ on $Y$ (resp. of $\mathcal{O}_X$ and $\mathcal{O}_Y$ modules), then $f^{-1}\mathcal{G}$ is the shefification of the presheaf $U \mapsto \ colim_{f(U) \subseteq V}\ \mathcal{G}(V)$ and in the case of $\mathcal{O}_X$-modules we define $f^*\mathcal{G} = f^{-1}\mathcal{G} \otimes_{f^{-1} \mathcal{O}_Y} \mathcal{O}_X$. REPLY [2 votes]: I will give the unit and counit for the first adjunction $f^{-1} \dashv f_*$. Given a continuous map $f: X \to Y$ of spaces, let $f^{\dagger}$ denote the inverse image of presheaves, and $f^{-1}$ the inverse image for sheaves. The unit map $\eta: 1 \Rightarrow f_*f^\dagger$ for presheaves has components given by the canonical "insertion" maps into the colimit: $$F(W) \to \mathrm{colim}_{U \supseteq f(f^{-1}W)} F(U)=f^\dagger F(f^{-1} W) = f_*f^\dagger F(W).$$ Note that these exist since $f(f^{-1} W) \subseteq W$. Since sheafification is left adjoint to the inclusion of sheaves into presheaves, we can get the unit for the sheaf case as follows. To be more precise, let $\iota$ denote the inclusion of sheaves into presheaves, and $a$ its left adjoint (sheafification). Then take the unit map of the sheafification adjunction $f^{\dagger} \iota F \to \iota a f^\dagger \iota F$, apply the direct image functor $f_*$, and then precompose with $\eta$ to get the correct unit: $$\iota F \xrightarrow{\eta_{\iota F}} f_* f^\dagger \iota F \rightarrow f_* ia f^\dagger \iota F =f_* \iota f^{-1} F = \iota f_* f^{-1} F.$$ Now, notice that if $f(V) \subseteq U$ then $V \subseteq f^{-1}f(V) \subseteq f^{-1} U $, and hence we get a restriction map $G(f^{-1}U) \to G(V)$. By the universal property of the colimit, this gives us a unique map $$f^{\dagger}f_* G(V)=\mathrm{colim}_{U \supseteq f(V)} G(f^{-1}U) \to G(V).$$ These canonical maps give us the components of the counit $\varepsilon: f^\dagger f_* \Rightarrow 1$. If you want the counit for the adjunction in the case of sheaves, just transpose $\varepsilon$ using the sheafification adjunction : $$\mathrm{Hom}_{\mathrm{Sh}(X)}(f^{-1}f_* F, G) \cong \mathrm{Hom}_{\mathrm{PSh}(X)}(f^{\dagger}f_* F, G).$$<|endoftext|> TITLE: Prove the n-th power of a matrix is the null matrix QUESTION [12 upvotes]: Let $A,B$ be $n\times n$ matrices with complex elements, $AB=BA$, $\det(B)\ne0$, having the following property: $$|\det(A+zB)|=1, \forall z \in \mathbb{C}, |z|=1.$$ Prove $A^n=0_n$. Does this remain true if $AB \ne BA$? So far, no idea. Any help is appreciated. Update I think I made some progress: Let $f(z)=\det(A+zB)=a_nz^n + .. + a_0$ From $|f(z)|=1$ we have $f(z)\overline {f(z)}=1, \forall z, |z|=1$ then, after replacing $\overline z$ with $\frac 1 z$: $(a_nz^n + .. + a_0)(\overline a_0z^n + .. 
+ \overline a_n)=z^n \tag 1$ Because (1) is true for infinitely many $z$ then (1) must be a polynomial identity. Identifying coefficients I've got $$a_0=a_1=..a_{n-1}=0, a_n\overline a_n=1$$ So $\det(A+zB)=a_nz^n$ where $|a_n|=1$. It follows that $det(A)=0, det(B)=a_n$. REPLY [6 votes]: A non-constant entire function that has constant modulus on the unit circle must be a constant multiple of the power function. It follows that $\det(zI+AB^{-1})=z^n$ and $AB^{-1}$ is nilpotent. As $B$ is invertible and it commutes with $A$, $A$ must also be nilpotent. When $A,B$ do not commute, the assertion does not necessarily hold. It is easy to construct a counterexample such that $AB^{-1}$ is nilpotent but $A$ isn't. E.g. set $AB^{-1}=\pmatrix{0&1\\ 0&0}$ and $B=\pmatrix{0&1\\ 1&0}$, so that $A=\pmatrix{1&0\\ 0&0}$.<|endoftext|> TITLE: Prove that $R(+,.)$ is a division ring but I disproved it QUESTION [5 upvotes]: QUESTION: Let $R=\left[\begin{matrix}\alpha & \beta \\ \bar\beta & \bar\alpha\end{matrix}\right]\in \mathbf{M_2(\mathbb{C})} $ where $\bar\alpha,\bar\beta$ denote the conjugates of $\alpha, \beta$ respectively. Prove that $R$ is a division ring but not field under the usual matrix addition and multiplication. MY ATTEMPT: I am comfortable with what I have to do and what I have to prove. I have successfully proved that it is not a field as the matrix multiplication is not commutative. But instead of proving that it is a division ring, I have disproved it. We know that, A division ring, also called a skew field, is a ring in which division is possible. Specifically, it is a nonzero ring in which every nonzero element a has a multiplicative inverse, i.e., an element $x$ with $a·x = x·a = 1$. Stated differently, a ring is a division ring if and only if the group of units equals the set of all nonzero elements. Now the condition in bold is what I have shown not to hold. Actually the inverse of a matrix exists iff the matrix is not singular. But $\left|\begin{matrix}\alpha & \beta \\ \bar\beta & \bar\alpha\end{matrix}\right|=|\alpha|^2-|\beta|^2$ which can be $0$ if $|\alpha|=|\beta|$ which holds for infinitely many $\alpha,\beta \in \mathbb{C}$. So $R(+,.)$ is not a division ring since it contains an infinite number of non-invertible matrices. Am I right or wrong? Please help. REPLY [7 votes]: You are right about $$R=\left\{\begin{bmatrix}\alpha & \beta \\ \bar\beta & \bar\alpha\end{bmatrix}\mid \alpha,\beta\in\mathbb{C}\right\} $$, especially since $e=\frac12\begin{bmatrix}1&1\\ 1&1\end{bmatrix}$ would create zero divisors : $e(1-e)=0$. But presumably what you're reading is about $$R=\left\{\begin{bmatrix}\alpha & \beta \\ -\bar\beta & \bar\alpha\end{bmatrix}\mid \alpha,\beta\in\mathbb{C}\right\} $$ which is isomorphic to Hamilton's quaternions, and is a division ring, of course.<|endoftext|> TITLE: Meromorphic, analytic, holomorphic and all that QUESTION [9 upvotes]: I must have slept through something in my complex variables course, because all my life I have used the terms holomorphic, meromorphic, and analytic somewhat interchangeably. These are all also related to regular functions. I have also thought of "entire" and "everywhere analytic" as interchangeable terminology. What are the distinctions between these terms? And what is the correct terminology for a function which may have poles but not essential singularities. (For example, $$e^{-\frac{1}{z^2}}$$ is in some sense nastier at $z=0$ than $z^{-4}$)? REPLY [5 votes]: Let $\Omega \subset \mathbb{C}$ be open set. 
A function $f : \Omega \to \mathbb{C}$ is called holomorphic if it is complex differentiable at every $z \in \Omega$. A holomorphic function $f : \mathbb{C} \to \mathbb{C}$ is called entire. A function $f : \Omega \to \mathbb{C}$ is called analytic if it can be represented as a convergent power series in a neighborhood of each point $z \in \Omega$. A function $f : \Omega \to \mathbb{C}$ is called meromorphic if it is holomorphic on $\Omega$ except for a set of poles, i.e., $f : \Omega \setminus P \to \mathbb{C}$ is holomorphic, where $P$ denotes the set of poles of $f$.<|endoftext|> TITLE: How can one tell if a PDE describes wave behaviour? QUESTION [9 upvotes]: I have been looking at a lot of different non-linear PDEs which describe waves lately and have come to the realisation that I don't know what it is about these PDEs that makes them behave like waves. For example, the Schrodinger equation, $i u_{t}+u_{xx}=0$, describes wave behaviour, however the very similar looking diffusion equation $u_t-u_{xx}=0$ doesn't. What is it that makes the KdV equation, the CHM equation, and the Nonlinear Schrodinger equation exhibit wave behaviour? REPLY [3 votes]: The problem with answering this question is defining exactly what we mean by a wave and wave-behavior. I will here describe some properties waves have and show some examples of PDEs which describe different forms of wave behavior. I'll also touch upon some different ways we can define wave-behavior for linear and nonlinear PDEs. The simplest form of what we call a wave is a solution of a PDE of the form $u(x,t) = f(x\pm ct)$. This represents a disturbance travelling with velocity $c$ to the right (-) or left (+). Such a solution also satisfies the wave-equation $u_{tt} = c^2u_{xx}$ and has the property that it does not dissipate energy and there is no dispersion. For example the linear Schrödinger equation $iu_t = u_{xx}$ has (periodic) wave-solutions $e^{i(x+t)} = \cos(x+t) + i\sin(x+t)$. However, not all such solutions are waves in the intuitive sense. For example $f(x) = e^x \implies u(x,t) = e^{x+t}$ can be seen as a wave under the definition above. Note that this is a solution to the diffusion equation, which you don't see as having wave behavior. However, if we also impose that the PDE allow solutions where $f$ is localized (see wave-packets) or periodic, then we get closer to the intuitive understanding most people have of a wave. The definition above does not represent all waves as it does not allow for dissipation and dispersion (in the case where we go beyond a single plane wave). For example the PDE $u_t + u_x = u_{xx}$ has wave solutions $u(x,t) = \cos(k(x-t))e^{-k^2t}$. The last factor describes the decrease in amplitude of the wave as it travels, and because of this it cannot be written in the form $f(x-ct)$ as above. Dispersion means waves of different frequency have different velocity. A simple example is the PDE $u_t + u_x + u_{xxx} = 0$ which has solutions $\cos(k\left(x-\frac{\omega}{k}t\right))$. If there is no dispersion (as for the wave-equation) we have $c = \frac{\omega}{k} = $ constant, while for the equation here we have $c(k) = \frac{\omega}{k} = 1 - k^2$, thus waves with larger wavelengths travel faster than waves of shorter wavelengths. How to define wave-behavior for non-linear PDEs is messier. Linear PDEs satisfy the superposition principle, which says that any solution can be seen as a superposition of simpler solutions (like e.g.
$\cos(kx-\omega(k) t)$ waves) so if we can show a PDE has some simple wave-solutions then we can build up any solution as a superposition of these waves. This is not true for non-linear PDEs. However, nonlinear PDEs can have other types of waves known as solitons. A soliton is a self-reinforcing solitary wave that has a permanent shape, is localised within a region, does not satisfy the superposition principle, propagates at a constant velocity and does not disperse. Physically, such waves are caused by a cancellation of nonlinear and dispersive effects. All the nonlinear equations you mention as having wave-behavior have solitons, so this is one possible answer to your question (they have wave-behavior because they have solitons). For example, the Korteweg–de Vries (KdV) equation $u_t + u_{xxx} + 6 u u_x = 0$ has the soliton $u(x,t) = \frac{c}{2}\text{sech}^2\left(\frac{\sqrt{c}}{2}(x-ct)\right)$. The same is true for the nonlinear Schrödinger equation and the Charney-Hasegawa-Mima (CHM) equation.<|endoftext|> TITLE: Can an elementary row operation change the size of a matrix? QUESTION [6 upvotes]: My question is very simple - Can an elementary row operation change the size (eg: $2\times2$ or $3\times 2$) of a matrix? I think the answer should be no, but while reading Linear Algebra by Hoffman Kunze I stumbled upon this: Definition. An $m\times n$ matrix is said to be an elementary matrix if it can be obtained from the $m\times m$ identity matrix by means of a single elementary row operation. Now, I know of 3 elementary row operations (adding a multiple of one row to another, multiplying throughout a row by a nonzero constant and interchanging two rows) but none of them can change the size of a matrix. But since this is a highly praised book I can't trust myself as much as I'd like to. REPLY [3 votes]: Original Answer: It must be a typo. For another reference, if you look at Horn & Johnson's book here (chapter 0, section 0.3.3 in the first edition) the authors discuss how elementary row operations can be achieved via left multiplication by square matrices. Side note: If we use the fact that elementary row operations on a matrix $\boldsymbol{M}$ are equivalent to multiplying $\boldsymbol{M}$ on the left by (certain) square matrices, it is easy to determine the effect of elementary row operations on the determinant (recall that, for square matrices, $\det(\boldsymbol{UM})=\det(\boldsymbol{U})\det(\boldsymbol{M})$). Update: I can see it not being a typo if what the authors mean is this: Elementary row operations can be represented using matrix multiplication. How? Suppose I have a matrix $\boldsymbol{A}\in M_{m\times n}$ and I want to apply any of the three operations. All I have to do is: (1) begin with the identity matrix of size $\boldsymbol{m}$, call it $\boldsymbol{I}_m$; (2) apply whichever of the three elementary row operations you want to $\boldsymbol{I}_m$, and call this new matrix $\boldsymbol{I}_{new}$ (in other words, if you want to apply an elementary row operation to $\boldsymbol{A}$ you first apply it to $\boldsymbol{I}_m$); (3) multiply $\boldsymbol{A}$ on the left by $\boldsymbol{I}_{new}$. Note that this method for performing elementary row operations essentially begins with the identity matrix $\boldsymbol{I}_m$ and then performs the elementary operation on $\boldsymbol{I}_m$. This might be in line with what the authors meant. Of course, you then need to multiply $\boldsymbol{A}$ by this modified identity matrix which won't change the dimension of $\boldsymbol{A}$.
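As a quick numerical sanity check of this left-multiplication recipe, here is a small sketch in Python/NumPy (an illustration added here, not taken from Hoffman-Kunze or the book by Horn & Johnson); it applies the same three row operations that are worked out by hand in the examples that follow to an arbitrary random $3\times 5$ matrix:

```python
import numpy as np

m, n = 3, 5
rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(m, n)).astype(float)   # an arbitrary 3x5 matrix

# Steps (1)-(2): build I_new by applying the row operation to the m x m identity.
I_swap = np.eye(m); I_swap[[0, 2]] = I_swap[[2, 0]]   # interchange rows 1 and 3
I_scale = np.eye(m); I_scale[1, 1] = 7/3              # multiply row 2 by 7/3
I_add = np.eye(m); I_add[1, 0] = 5                    # add 5 * (row 1) to row 2

# Step (3): left multiplication by I_new performs the same operation directly on A,
# and the shape of A (3 x 5) is unchanged.
B = A.copy(); B[[0, 2]] = B[[2, 0]]
assert np.allclose(I_swap @ A, B)
C = A.copy(); C[1] *= 7/3
assert np.allclose(I_scale @ A, C)
D = A.copy(); D[1] += 5 * D[0]
assert np.allclose(I_add @ A, D)
assert (I_swap @ A).shape == A.shape == (3, 5)

# Determinants of the I_new matrices: -1, 7/3 and 1 respectively.
print(np.linalg.det(I_swap), np.linalg.det(I_scale), np.linalg.det(I_add))
```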
Example (interchanging to rows): I want to change the first and third rows of $\boldsymbol{A}\in \mathbb{R}_{3\times5}$. Solution: Multiply: \begin{align*} {\underbrace{\left[\begin{array}{ccc}0 & 0 & 1 \\0 & 1 & 0 \\1 & 0 & 0\end{array}\right]}_{\boldsymbol{I}_{new}}}\boldsymbol{A}, \end{align*} this will change the first and third row of $\boldsymbol{A}$. Note: the determinant of $\boldsymbol{I}_{new}$ here is -1. (This can be shown using the Alternating Sums formulation of the determinant). Example (multiplying a row by a non-zero scalar): I want to multiply the second row of $\boldsymbol{A}\in \mathbb{C}_{3\times 5}$ by 7/3. Solution: Multiply: \begin{align*} {\underbrace{\left[\begin{array}{ccc}1 & 0 & 0 \\0 & 7/3 & 0 \\0 & 0 & 1\end{array}\right]}_{\boldsymbol{I}_{new}}}\boldsymbol{A}, \end{align*} this will multiply the second row of $\boldsymbol{A}$ by $7/3$. Note: The determinant of $\boldsymbol{I}_{new}$ here is 7/3. (The determinant of a diagonal matrix is the product of the diagonal entries.) Example (adding a multiple of one row to another): I want to add 5 times row 1 to row 2 of $\boldsymbol{A}\in \mathbb{C}_{3\times 5}$. Solution: Multiply: \begin{align*} {\underbrace{\left[\begin{array}{ccc}1 & 0 & 0 \\5 & 1 & 0 \\0 & 0 & 1\end{array}\right]}_{\boldsymbol{I}_{new}}}\boldsymbol{A}, \end{align*} this will multiply the first row of $\boldsymbol{A}$ by $5$ and add it to the second row of $\boldsymbol{A}$ leaving the first row unchanged. Note: The determinant of $\boldsymbol{I}_{new}$ here is 1. (The determinant of a triangular matrix is the product of the diagonal entries.) Notice that, in each of these cases, all that was necessary was to choose the identity matrix of the right dimension (in our case, $3\times3$) and to apply the elementary row operation to this identity matrix. Then, we multiply $\boldsymbol{A}$ on the left by this new matrix. If we wish to apply multiple elementary row operations then we can simply repeat this process sequentially for each operation we apply. It is now easy to see how elementary row operations change the determinant of a square matrix $\boldsymbol{A}$ since the determinant of the product of two matrices is the product of the determinants. Additionally, the determinants of the "$\boldsymbol{I}_{new}$" matrices are easy to compute. All these "$\boldsymbol{I}_{new}$" matrices are nonsingular (it is easy to see that their determinant is nonzero).<|endoftext|> TITLE: Area form for $M^2 \subseteq \Bbb R^4$ QUESTION [7 upvotes]: We know that in general, given a orientable hypersurface $M^{n-1} \subseteq \Bbb R^n$, the volume form on $M$ is given by $$dM = \sum_{i=1}^n(-1)^{i-1}n_i\,dx^1 \wedge\cdots\wedge \widehat{dx^i}\wedge \cdots \wedge dx^n,$$where $(n_1,\cdots,n_n)$ is the unit normal to $M$. Moreover, we have $$n_i\,dM = (-1)^{i-1}dx^1 \wedge\cdots\wedge \widehat{dx^i}\wedge \cdots \wedge dx^n$$for tangent vectors. Another general case is for bi-dimensional surfaces in $\Bbb R^n$, for which in coordinates we can write using $\sqrt{EG-F^2}$. But I wanted to write something like the first expression for surfaces $M^2 \subseteq \Bbb R^4$, specifically. There is no standard choice of orthonormal basis for the normal space as far as I know. Is there a way to get around this, at least in this case? If there is, I'd guess it would adapt for submanifolds of codimension $2$, but that would be the cherry on the cake. REPLY [3 votes]: I pulled up my sleeves and did the computation for $M^2 \subseteq \Bbb R^4$. 
I'll post it here, but I'll leave the question open for a while in case someone sees a non-medieval way to do this. First of all, take $\{e_1,e_2\}$ an orthonormal positive basis of $T_xM$, and pick any orthonormal positive basis $\{n,\nu\}$ of $T_xM^\perp$ with $\{e_1,e_2,n,\nu\}$ being again an orthonormal basis of $T_x(\Bbb R^4)$. We note that $\det(e_1,e_2,n,\nu) = 1$, so that $$dM(v,w) = \det(v,w,n,\nu)$$is the area form on $M$. Let's do a quick reality check and see that this does not depend on our choice of $n$ and $\nu$. If $n',\nu'$ is another choice with the properties above, write $$n' = (\cos t) n + (\sin t) \nu, \quad \nu' = (-\sin t) n + (\cos t) \nu$$for some convenient $t$. Using that $\det$ is multilinear, it is easy to see that $$\det(v,w,n,\nu)=\det(v,w,n',\nu')$$for all $v$ and $w$. Now write $n = (n^1,\cdots,n^4)$ and $\nu = (\nu^1,\cdots,\nu^4)$, and similarly for $v$ and $w$. Recall that $$dx^i \wedge dx^j(v,w) = \begin{vmatrix} v^i & v^j \\ w^i & w^j\end{vmatrix}.$$ Then you take $$dM(v,w) = \begin{vmatrix} v^1 & v^2 & v^3 & v^4 \\ w^1 & w^2 & w^3 & w^4 \\ n^1 & n^2 & n^3 & n^4 \\ \nu^1 & \nu^2 & \nu^3 & \nu^4 \end{vmatrix}$$and expand that on the bottom row to get a combination of $3 \times 3$ subdeterminants with some signs and coefficients $\nu^i$. Then you expand these subdeterminants by their bottom rows again, and after some pain and suffering it yields $$dM = \sum_{1 \leq i < j \leq 4} (-1)^{i+j+1}\left(n^i\nu^j - n^j\nu^i\right)\, dx^k \wedge dx^l,$$ where for each pair $i < j$ the remaining indices $\{k,l\} = \{1,2,3,4\}\setminus\{i,j\}$ are taken with $k < l$.<|endoftext|> TITLE: Dimensions of immersions vs embeddings QUESTION [7 upvotes]: Let's say that you have a manifold which you know can be immersed in $\mathbb{R}^n$. Is there a $k$ such that you can say, for sure, that the manifold can be embedded in $\mathbb{R}^{n+k}$? I imagine that there is and that this is common knowledge, but cursory googling did not throw up anything. Thank you in advance!! REPLY [4 votes]: This is nowhere near an answer. I'm just throwing some thoughts out there. Hopefully this inspires others to write down better results. I assume the domain is closed. We can always take $k \leq n-2$. One path to getting lower bounds on $k$ is to find manifolds that immerse into small-dimensional spaces but embed only in large-dimensional spaces. As an attempt at this, a parallelizable $n$-manifold immerses by Smale-Hirsch immersion theory into $\Bbb R^{n+1}$; how high of a dimension can you force for the embedding? It is conjectured that every parallelizable $n$-manifold embeds into $\Bbb R^{3n/2}$, so certainly no known examples can beat $k = (n-3)/2$ via this approach. I bet one can find parallelizable manifolds that achieve (roughly, at least) this bound, but I have not made much effort to do so. Real projective spaces probably give nice bounds on $k$. Complex projective spaces do not. Another approach is to use Cohen's immersion theorem, which says that every $n$-manifold immerses into $\Bbb R^{2n-\alpha(n)-1}$, where $\alpha(n)$ is the number of 1s in the binary expansion of $n$; this is not a particularly fruitful line of attack, because the best we can get from this is $k \geq \alpha(n)+1$. This is pitifully small, much smaller than the approach above. Now for some small-dimensional examples. For $n=1, 2$ the answer is obviously $k=0$. For $n=3$ all we have are surfaces, which all embed in $\Bbb R^4$, so $k(3)=1$. For $n=4$ we have 3-manifolds, which famously all embed in $\Bbb R^5$. So $k(4) = 1$. Suppose a 4-manifold immerses in $\Bbb R^5$. Then if $M$ is orientable, we see that its tangent bundle has trivial characteristic classes, including its Euler class and (most importantly!)
Pontryagin class. (Normally one could only say its Pontryagin class is 2-torsion, but $H^4(M;\Bbb Z)$ does not have torsion.) This implies that its signature is zero, and Cappell and Shaneson proved that it embeds smoothly in $\Bbb R^6$. Danny Ruberman gives a proof here. For the non-orientable case, a simple Stiefel-Whitney class manipulation shows that (if $\nu$ is the normal bundle to the immersion in $\Bbb R^5$) that $w_1(\nu) = w_1(M)$ and $w_2(M) = w_1(M)^2$. This implies that the stable normal bundle of $M$ has $w_2(\nu) = 0$ by yet more algebraic manipulation. Now it is likely that such a manifold embeds in $\Bbb R^6$, but this does not appear to be known: see some partial progress in this paper of Fang. In particular we can safely conjecture that $k(5) = 1$. It is not hard to find examples of 5-manifolds that immerse in $\Bbb R^6$ but only embed into $\Bbb R^8$. I find it likely that $k(6) = 3$, but have not put much effort into writing down or finding a proof.<|endoftext|> TITLE: What shapes, with boundary collapsed to a point, are homeomorphic to $S^n$? QUESTION [5 upvotes]: Consider the following construction: Given a set $A\subseteq\Bbb R^n$, form the quotient space $A/\sim$ which identifies all the points on the boundary $\partial A$ (w.r.t $\Bbb R^n$). For which sets $A$ is the resulting topological space homeomorphic to $S^n$? Obviously this works for $D^n$ (the unit ball in $\Bbb R^n$), as well as $[0,1]^n$ and $\Delta^n$ (the simplex). But I think it will also work on much more complicated sets, like the Koch curve (with interior). In 2D I believe the Riemann mapping theorem will help to construct a homeomorphism, but I don't know whether that generalizes to $\Bbb R^n$. Some necessary conditions: $A^\circ$ must be path-connected, because there is a path on $S^n$ connecting any two points and avoiding the pole. More generally, $A^\circ$ must be homeomorphic to $\Bbb R^n$, so it must in fact be simply connected. For similar reasons, $A$ cannot be nowhere dense. $A^\circ$ must be bounded. If not, take some sequence $(x_n)\in A^\circ$ that diverges to infinity (and satisfies $d(x_m,x_n)\ge1$ for $m\ne n$), and take a subsequence that converges in $S^n$ (necessarily to the pole). Then $A\setminus\bigcup_n\bar B(x_n,\frac12\min(d(x_n,\partial A),1))$ is open in $A$, because all the closed sets in the union are separated from each other, and it contains $\partial A$, hence the image is an open set containing the pole and missing all the $x_n$'s, a contradiction. REPLY [4 votes]: Necessary and sufficient conditions are that $A\supseteq\overline{A^\circ}$, $\overline{A^\circ}$ is compact, and $A^\circ$ is homeomorphic to $\mathbb{R}^n$. First, suppose these conditions hold. Note that $A/{\sim}\cong\overline{A^\circ}/{\sim}$, so we may assume $A=\overline{A^\circ}$ and in particular that $A$ is compact. Then since $\sim$ is a closed equivalence relation on a compact Hausdorff space, the quotient $A/{\sim}$ is also compact Hausdorff. It is also easy to see that the quotient map restricts to a homeomorphism (onto its image) on $A^\circ$. Thus $A/{\sim}$ is a one-point compactification of a space homeomorphic to $\mathbb{R}^n$, and so $A/{\sim}$ is homeomorphic to $S^n$. Conversely, suppose $A/{\sim}\cong S^n$. You have already noted that $A^\circ\cong\mathbb{R}^n$. If $\overline{A^\circ}$ is not compact or is not contained in $A$, we can choose a sequence $(x_n)$ in $A^\circ$ that does not accumulate at any point of $A$. 
The set $X=\{x_n\}$ is then closed in $A$ and saturated under $\sim$, so its image in $A/{\sim}$ is closed. But the same reasoning shows that the image of any subset of $X$ is also closed. Thus the image of $X$ is an infinite closed discrete subset of $A/{\sim}$. This is a contradiction, since $A/{\sim}\cong S^n$ is compact.<|endoftext|> TITLE: Decomposition of a positive semidefinite matrix QUESTION [14 upvotes]: Let $Y \in \mathbb{R}^{n \times n}$ be a symmetric, positive semidefinite matrix such that $Y_{kk} = 1$ for all $k$. This matrix is supposed to be factorized as $Y = V^T V$, where $V \in \mathbb{R}^{n \times n}$. Does this factorization/decomposition have a name? How is it possible to compute $V$? REPLY [24 votes]: If $\rm Y$ is symmetric, then it is diagonalizable, its eigenvalues are real, and its eigenvectors are orthogonal. Hence, $\rm Y$ has an eigendecomposition $\rm Y = Q \Lambda Q^{\top}$, where the columns of $\rm Q$ are the eigenvectors of $\rm Y$ and the diagonal entries of the diagonal matrix $\Lambda$ are the eigenvalues of $\rm Y$. If $\rm Y$ is also positive semidefinite, then all its eigenvalues are nonnegative, which means that we can take their square roots. Hence, $$\rm Y = Q \Lambda Q^{\top} = Q \Lambda^{\frac 12} \Lambda^{\frac 12} Q^{\top} = \left( Q \Lambda^{\frac 12} \right) \underbrace{\left( Q \Lambda^{\frac 12} \right)^{\top}}_{=: {\rm V}} = V^{\top} V$$ Note that the rows of $\rm V = \Lambda^{\frac 12} Q^{\top}$ are the eigenvectors of $\rm Y$ multiplied by the square roots of the (nonnegative) eigenvalues of $\rm Y$.<|endoftext|> TITLE: Closed form for $1^k + ... + n^k$ (generalized Harmonic number) QUESTION [9 upvotes]: This question must have been asked, it's just very hard to search for such questions. I'm looking for the cleanest method I can find for getting a closed form formula for $\sum_{i=1}^n i^k$ Wikipedia provides https://en.wikipedia.org/wiki/Faulhaber%27s_formula which has a tantalising "There is also a similar (but somehow simpler) expression:..." paragraph but fails to follow up. http://www.maa.org/press/periodicals/convergence/sums-of-powers-of-positive-integers-conclusion contains a thorough "through the ages" exposé, the last two entries being Pascal and Bernoulli. However, https://www.youtube.com/watch?v=8nUZaVCLgqA seems to contain a fresh approach that I can't find documented anywhere else. However, I find the video hard to follow. And maybe there is some technique that is cleaner still... I understand that "cleanest" is maybe subjective and hence it is an imperfect question, but it would be interesting to see the various alternatives (and surely there cannot be many) battle it out. REPLY [2 votes]: Having posted a very similar question recently, I'm inclined to provide you with my somewhat unique answer. $$a_p=1-p\int_0^1f(t,p-1)dt,\quad f(x,0)=x$$ $$f(x,p)=a_px+p\int_0^xf(t,p-1)dt$$ $$f(x,p)=\sum_{k=1}^xk^p$$ Easy enough to see that $$x=\sum_{k=1}^xk^0$$ Also relatively easy to find that $$a_1=1-\int_0^1t\ dt=\frac12\implies f(x,1)=\frac12x+\int_0^xt\ dt=\frac12x+\frac12x^2=\sum_{k=1}^xk^1$$ And you can keep going with this. Requires very little calculus skill to do.<|endoftext|> TITLE: Prove that $\text{rank } T = \operatorname{rank} T^2 \iff \operatorname{Im}T \cap \ker T = \{ \vec 0\}$ QUESTION [10 upvotes]: $\newcommand{\r}{ \operatorname{rank} } $ Let $T: V\to V$ be a linear transformation with $\dim V< \infty$. Prove that: $$ \r T = \r T^2 \iff \operatorname{Im} T \cap \ker T = \{ \vec 0 \}.$$ $"\Rightarrow"$ Let $\r T = \r T^2$.
Then, by the rank–nullity theorem we have that $$\dim \ker T =\dim \ker T^2 \tag 1.$$ But it is always true that: $\ker T \subseteq \ker T^2 .\tag 2$ By $(1),(2)$ we have that $\ker T = \ker T^2.$ So, instead of $\r T = \r T^2$ we can say that $\ker T = \ker T^2$ and we need to prove that $\operatorname{Im} T \cap \ker T = \{ \vec 0\}$. Proof: Suppose that there is a $z \in \operatorname{Im}T \cap \ker T$ with $z \neq 0$. Since $z \in \ker T \implies T(z) = 0$. Also, since $z \in \operatorname{Im}T \implies \exists y\in V$ such that $T(y) = z \implies T^2(y) = T(z) = 0.$ But this implies that $y \in \ker T^2 $ and by our hypothesis we have that $y \in \ker T \implies T(y) = 0 = z, $ which is absurd, because we assumed that $z \neq 0$. $"\Leftarrow"$ We need to prove that $\ker T = \ker T^2$ or $\ker T^2 \subseteq \ker T.$ Proof: Let $x \in \ker T^2$, which implies $T^2(x) = T\left(T(x)\right) = 0$. This implies $T(x) \in \ker T,$ but also $T(x) \in \operatorname{Im}T.$ Thus, $T(x) \in \operatorname{Im}T \cap \ker T = \{0\}$. Thus, $T(x) = 0 \implies x \in \ker T.$ I would like to know if my reasoning is correct and if all the points are clear. Also, I would like to know if there is any shorter proof. REPLY [3 votes]: Let $ W = \textrm{Im}\, T $, then the given conditions imply that $ T $ restricts to a surjective linear map $ T_W : W \to W $. Surjective linear maps from a finite-dimensional vector space onto itself are invertible, which means that $ T_W $ has trivial kernel. The result follows as $ \ker T_W = W \cap \ker T $.<|endoftext|> TITLE: Asymptotic expansion of $(1+\epsilon)^{s/\epsilon}$ QUESTION [6 upvotes]: I have taken the logarithm of this expression and computed the Taylor expansion of the $\log(1+\epsilon)$ term, but by doing this we're required to calculate powers of this series when using the definition of the exponential function, and this gives a combinatorial mess. e.g. $(1+\epsilon)^{s/\epsilon}=\exp\left(s\right)\exp\left(s\delta\right),$ where $\delta=\sum_{n=1}^\infty \frac{(-1)^n \epsilon^n}{n+1}.$ I was wondering if anyone knows of a general formula for the coefficient of $\epsilon^n$ of $(1+\epsilon)^{s/\epsilon}$ as $\epsilon \rightarrow 0$. Any known papers on this topic too? Thanks REPLY [2 votes]: I am stuck with the general formula. For the first terms, it is quite simple: starting with $$A=(1+\epsilon )^{s/\epsilon }$$ $$\log(A)=\frac{s }{\epsilon }\log (1+\epsilon)=\frac{s }{\epsilon }\sum_{n=1}^\infty(-1)^{n+1}\frac{\epsilon^n} n=s \sum_{n=1}^\infty(-1)^{n+1}\frac{\epsilon^{n-1}} n$$ Now, using $A=e^{\log(A)}$ and using Taylor series again leads to $$A=e^s\left(1-\frac{s }{2}\epsilon+\frac{s (3 s+8)}{24} \epsilon ^2-\frac{s(s+2)(s+6) }{48} \epsilon ^3+O\left(\epsilon ^4\right) \right)$$ So, $$A=e^s \sum_{n=0}^\infty (-1)^n P_n(s) \epsilon^n$$ where $P_n(s)$ is a polynomial of degree $n$ in $s$.<|endoftext|> TITLE: Prove that $\det(A)=p_1p_2-ba={bf(a)-af(b)\over b-a}$ QUESTION [13 upvotes]: Let $f(x)=(p_1-x)\cdots (p_n-x)$ with $p_1,\ldots, p_n\in \mathbb R$, and let $a,b\in \mathbb R$ be such that $a\neq b$. Prove that $\det A={bf(a)-af(b)\over b-a}$ where $A$ is the matrix: $$\begin{pmatrix}p_1 & a & a & \cdots & a \\ b & p_2 & a & \cdots & a \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b & b & b & \cdots & p_n \end{pmatrix}$$ that is, the entries are $k_{ij}=a$ if $i<j$ and $k_{ij}=b$ if $i>j$. I tried to do it by induction over $n$. The base case $n=2$ is easy: $\det(A)=p_1p_2-ba={bf(a)-af(b)\over b-a}$. The induction step is where I don't know what to do.
I tried to solve the dterminant by brute force(applying my induction hypothesis for n and prove it for n+1) but I don´t know how to reduce it. It gets horrible. I would really appreciate if you can help me with this problem. Any comments, suggestions or hints would be highly appreciated REPLY [15 votes]: Here is a possible proof without induction. The idea is to consider $\det A$ as a function of $p_n$. We define the function $F: \Bbb R \to \Bbb R$ as $$ F(p) = \begin{vmatrix} p_1 &a &\ldots &a &a \\ b &p_2 &\ldots &a &a \\ \vdots &\vdots &\ddots &\vdots &\vdots \\ b &b &\ldots &p_{n-1} &a\\ b &b &\ldots &b &p \end{vmatrix} $$ $F$ is a linear function of $p$ and therefore completely determined by its values at two different arguments. $F(a)$ and $F(b)$ can be computed easily: By subtracting the last row from all previous rows we get $$ F(a) = \begin{vmatrix} p_1 &a &\ldots &a &a \\ b &p_2 &\ldots &a &a \\ \vdots &\vdots &\ddots &\vdots &\vdots \\ b &b &\ldots &p_{n-1} &a\\ b &b &\ldots &b &a \end{vmatrix} = \begin{vmatrix} p_1-b &a-b &\ldots &a-b &0\\ 0 &p_2-b &\ldots &a-b &0 \\ \vdots &\vdots &\ddots &\vdots &\vdots\\ 0 &0 &\ldots &p_{n-1}-b &0 \\ b &b &\ldots &b &a \end{vmatrix} \\ $$ i.e. $$ F(a) = a(p_1-b)\cdots (p_{n-1}-b) \, . $$ In the same way (or by using the symmetry in $a$ and $b$) we get $$ F(b) = b (p_1-a)\cdots (p_{n-1}-a) \, . $$ Now we can compute $\det A = F(p_n)$ with linear interpolation: $$ \det A = \frac{b-p_n}{b-a} F(a) + \frac{p_n-a}{b-a} F(b) \\ = \frac{- a(p_1-b)\cdots (p_{n-1}-b)(p_n-b) + b(p_1-a)\cdots (p_{n-1}-a)(p_n-a) }{b-a} \\ = \frac{-af(b) + bf(a)}{b-a} \, . $$<|endoftext|> TITLE: References on the moduli space of flat connections as a symplectic reduction QUESTION [8 upvotes]: In their Yang Mills equations over Riemann surfaces paper, Atiyah & Bott famously remark that the moduli space of flat connections on a principal bundle over a compact orientable surface may be obtained as a symplectic reduction on the space of connections (with the curvature as the moment map). In that paper, however, it is only a passing remark. One must be careful about this construction since the spaces involved are infinite-dimensional (e.g. the space of connections). I've only found references which ignore this technical detail and present the ideas in analogy to the finite-dimensional case. While they are helpful to give a simple understanding of what's going on, I was wondering if there is a reference that actually goes through the infinite-dimensional analysis to formalize this process. Any recommendation is appreciated. REPLY [7 votes]: Some remarks on infinite-dimensional manifolds. There are two approaches which to me still feel very manifold-ish (there are others yet: Frolicher spaces and diffeological spaces, which feel a bit less so); the approach of Banach manifolds and the much more general approach of Frechet manifolds. The former is almost precisely the same theory as the theory of finite-dimensional smooth manifolds and is a harmless technical addition. The latter is much more complicated, and I would be hesitant to make any actually technical claims about it, as I am not expert in Kriegl-Michor (the lovely book you cited). 
The advantages of Frechet manifold theory are that the most obvious and naturally occuring manifolds are Frechet manifolds (spaces of smooth mappings; diffeomorphism groups), and that if you honestly want a theory of infinite-dimensional Lie groups, Banach manifolds will not do (it is a theorem that no Banach Lie group can act transitively and effectively on a finite-dimensional smooth manifold; in particular no large group of diffeomorphisms can be given a Banach Lie group structure). On the other hand, you lose the technical simplicity of Banach manifolds and a bit of the power of the inverse function theorem. If one wants to do analysis, the inverse function theorem is indispensible, which is why one sees lots of Sobolev completions flying around. There's Nash-Moser but I don't understand it very well and do not think it has the wide applicability of the straight-up inverse function theorem. Let $\Sigma$ be a surface equipped with a Riemannian metric $g$. Fix a $U(n)$-bundle $E \to \Sigma$, and importantly, fix a smooth connection $A_0$ on $E$. (Whenever I say a connection on $E$, I expect it to preserve the unitary structure; it should be valued in $\mathfrak u(n)$, not just $\mathfrak{gl}(n,\Bbb C)$.) Define the space of $L^2_k$ connections, $\mathcal A_k(E)$, to be the set of connections $A$ on $E$ such that $A-A_0 \in \Omega^1(\text{End}(E))$ is an $L^2_k$ 1-form. (Note that, to make sense of $L^2_k$, I need a connection on $\text{End}(E) \otimes \Lambda^j T^*M$ - but this is provided by my base connection $A_0$ and my metric $g$.) This is topologized precisely so that it is affine over the space of $L^2_k$ sections of $\text{End}(E) \otimes T^*M$ with the $L^2_k$ topology. Henceforth I will suppress the $L^2_k$ in notation. We also have a group $\mathcal G_{k+1}$, of $L^2_{k+1}$ automorphisms of $E$. (These are sections of a principal bundle instead of sections of a bundle, so one needs to be careful about what we mean about $L^2_{k+1}$ section: by Sobolev embedding, $L^2_k$ sections are automatically continuous for $k \geq 0$, though one needs larger $k$ for larger-dimensional base, after which it is easy to make sense of the derivatives being $L^2_k$ and whatnot). This acts on $\mathcal A_k$ in the standard way. We need to increase regularity for the gauge group because its action involves taking a derivative. Because $\mathcal A_k(E)$ is an affine space over $\Omega^1(\text{End}(E))$, its tangent space at every point $A$ is $\Omega^1(\text{End}(E))$. Slightly more subtle is the tangent space of $\mathcal G_{k+1}(E)$, which is $\Omega^0(\text{End}(E))$. Now as in Atiyah-Bott, we can put a symplectic structure on $\mathcal A_k(E)$, preserved by the action of $\mathcal G_{k+1}(E)$: $\omega(v,w) = \int [v \wedge w]$, where by the inside of the integral I mean the composition of the maps $\wedge: \Omega^1(\text{End}(E)) \to \Omega^2(\text{End}(E) \otimes \text{End}(E))$, and $\Omega^2(\text{End}(E) \otimes \text{End}(E)) \to \Omega^2(B)$ (this is just a fiberwise inner product). Here is the first infinite-dimensional subtlety. This is nondegenerate in the sense that the induced map $\omega^*: T_x \mathcal A_k \to T_x^* \mathcal A_k$ is injective, but it is not an isomorphism! This is essentially because we're taking the $L^2$ inner product of $L^2_k$ forms, and the dual under the $L^2$ inner product is not $L^2_k$ - it's $L^2_{-k}$. But for the sake of having a symplectic form, this is fine. 
Atiyah-Bott gives an argument that the curvature map $F: \mathcal A_k \to \Omega^2_{k-1}(\text{End}(E))$ is the moment map. This argument is perfectly valid. Now in the setting of Hilbert manifolds, the regular value theorem (or transverse intersection theorems etc) is still valid; since you're interested in flat connections, you're interested in particular in $F^{-1}(0)$. Why is $0$ a regular value? Because the derivative of $F$ at $A$ is just the derivative map $d_A$, we're asking why the differential is surjective at any flat connection. Well... it's not. At a flat connection the cokernel of $d_A$ is canonically isomorphic to the $H^2(\text{End}(E);d_A)$, the de Rham cohomology with differential from flat connection $A$. Poincare duality (and the self-duality of $\text{End}(E)$) implies that this is the same as $H^0(\text{End}(E);d_A)$: the dimension of parallel sections of $\text{End}(E)$. How big this dimension is says how reducible the connection $A$ is; as an example, for $G = SU(2)$, there are three possibilities: you could be fully reducible, where $H^0$ is 3-dimensional; you could be a $U(1)$-reducible, where $H^0$ is 1-dimensional; you could be irreducible, where $H^0$ is zero-dimensional. Our notion of transversality needs to have reducibility baked into it. If $H$ is the centralizer of some subgroup of $G$, $H$ is a possible group to reduce to - in the sense that the holonomy group of a connection can be $H$. We can stratify the space (not linearly, if the structure of these subgroups is complicated) $\mathcal A$ into subsets $\mathcal A_H$ of connections reducible to $H$. On each of these, $F$ is constant rank. So $F^{-1}(0)$ is a stratified space, of which each stratum a manifold. Unfortunately, we're now in trouble. The quotient by the action of $\mathcal G$ - canonically identified with $\text{Hom}(\pi_1 X, G)/G$ - is not usually a manifold. Consider, for instance, $X = T^2$ and $G = SU(2)$; then this quotient is the "pillowcase", a sphere with four singular points. On the irreducible part of the moduli space, you get a smooth structure and a symplectic structure by the above method (the same way you usually would; we've more or less passed all the technicalities). I think the people who spend a lot of time thinking about this representation variety have an appropriate sort of structure on it even at the singular points, but I'm not really sure what it is. For further reading, you could look at this thesis, which I haven't but looks nice.<|endoftext|> TITLE: Hypergraph $2$-colorability is NP-complete QUESTION [5 upvotes]: So far all my searches for a proof of this well-known theorem have led me to the one below (Lovász 1973), reducing $k$-colorability for ordinary graphs to $2$-colorability for hypergraphs. In the construction, $H$ seems to contain copies of $G$, so its chromatic number should be higher than that of $G$, contrary to the last claim. If I'm not mistaken, I would guess the proof can be easily corrected. But I haven't found the correction yet. REPLY [5 votes]: Here is a reduction, but I'm not sure it's the one Lovász had in mind... Let $y$ and $z$ be two new points. Let the hyperedges of $H$ consist of $\{y,z\}$ $\{x_{i\mu},x_{i\nu},z\}$ ($1 \leq i \leq k$) when $\{x_{\mu},x_{\nu}\}$ is an edge from $G$ $f_\nu = \{x_{1\nu},\ldots,x_{k\nu},y\}$ ($1 \leq \nu \leq n$) as described by Lovász Suppose we have a {red,blue}-coloring of the resulting hypergraph $H$. 
Without loss of generality, $y$ is colored red and $z$ is colored blue (they can't be of the same color because of the hyperedge $\{y,z\}$). Note that the set of vertices $B_i$ from the original graph $G$ that are colored blue in the copy $G_i$ form an independent set for if $B_i$ contained an edge $\{x_{\mu},x_{\nu}\}$ then the hyperedge $\{x_{i\mu},x_{i\nu},z\}$ would be all blue. Looking at the hyperedge $f_\nu = \{x_{1\nu},\ldots,x_{k\nu},y\},$ we see that at least one of $x_{1\nu},\ldots,x_{k\nu}$ must be colored blue, else the hyperedge $f_\nu$ would be all red. Therefore, every vertex of the original graph is contained in some $B_i$. So we have a cover of $V(G)$ with $k$ independent sets $B_1,\ldots,B_k$, which means that $G$ is $k$-colorable. Reversing the process, we can go from a $k$-coloring of $G$ to a {red,blue}-coloring of $H$.<|endoftext|> TITLE: How to generate correlated random numbers with specific distributions? QUESTION [8 upvotes]: After read the answers of some similar questions on this site, e.g., Generate Correlated Normal Random Variables Generate correlated random numbers precisely I wonder whether such approaches can assure the specific distributions of random variables generated. In order to make it easier to present my question, let us consider a simple case of creating correlated two uniform continuous random variables on $[0,1]$ with correlation coefficient $\dfrac{1}{2}=\rho$. The methods by Cholesky decomposition (or spectral decomposition, similarly) first generates $X_1$ and $X_2$ which are independent pseudo random numbers uniformly distributed on $[0,1]$, and then creates $X_3=\rho X_1+\sqrt{1-\rho^2} X_2$. The $X_1$ and $X_3$ thus created are random variables with correlation coefficient $\rho$. But the problem is, $X_3$ 's probability density fuction is triangle /trapezoid distribution which can be deducted by the convolution of the density functions of $X_1$ and $X_2$. The probability density functions of $\rho X_1$ and $\sqrt{1-\rho^2} X_2$ are: The convolution (sum) of them $X_3$ has density function: This means, the distribution of $X_3$ is not the desired uniform one on $[0,1]$. What should I do in order to create random variables uniformly distributed on $[0,1]$ with correlation coefficient $\rho$ ? The similar issue persists when I want to create multiple correlated random variables with predefined correlation matrix. Considering the pseudo random variables usually are not really independent with a correlation coefficient between -1 and 1, it seems that: it is difficult to generate numerically independent $[0,1]$ uniform random variables since the uncorrelation transformation seems to always change the distribution profile. PS: Before asking this question, I had read the following questions and links but didnot find an answer : http://www.sitmo.com/article/generating-correlated-random-numbers/ http://numericalexpert.com/blog/correlated_random_variables/ https://en.wikipedia.org/wiki/Whitening_transformation REPLY [4 votes]: One suggestion is to work with copulas. In a nutshell, a copula allows you to separate out the dependency structure of a distribution function. Say, $F_1,F_2,\ldots,F_n$ are the 1D marginals of a distribution $F$ then the copula $C$ is the function defined as $$C(u_1,u_2,\ldots,u_n)=F(F^{-1}_1(u_1),F^{-1}_1(u_2),\ldots,F^{-1}_n(u_n))$$ This makes $C$ a function from $[0,1]^n$ to $[0,1]$. 
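As a tiny worked example of this definition: if the coordinates of $F$ are independent, so that $F(x_1,\ldots,x_n)=F_1(x_1)\cdots F_n(x_n)$, then substituting $x_i=F_i^{-1}(u_i)$ gives $C(u_1,\ldots,u_n)=u_1\cdots u_n$, the independence (product) copula. Any dependence between the coordinates shows up as a deviation of $C$ from this product.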
For instance, if you take the bivariate normal distribution, by doing the computation above, you'll find the Gaussian copula $$C^{\text{Gauss}}_{\rho}=\int_{-\infty}^{\phi^{-1}(u_1)}\int_{-\infty}^{\phi^{-1}(u_2)}\frac{1}{2\pi\sqrt{1-\rho^2}}\exp\left(-\frac{s_1^2-2\rho s_1s_2+s_2^2}{2(1-\rho^2)}\right)ds_1ds_2$$ I used the package copula in R to illustrate. If you just take the copula as such, it is as if you constructed a probability distribution with the dependency structure of a bivariate normal, but with uniform marginals. So I generated 1000 random vectors from a Gaussian copula with $\rho=0.5$. Here's the code library(copula); norm.cop <- normalCopula(0.5); u <- rCopula(1000, norm.cop); plot(u,col='blue',main='Random variables, uniform marginals, gaussian copula, > rho=0.5',xlab='X1',ylab='X2') cor(u); and the result I also computed the sample correlation which is $0.5060224$. I also computed a plot to show you the marginals are indeed uniform dom<-(1:length(u[,1]))/length(u[,1]); par(mfrow=c(1,2)); plot(dom,sort(u[,1]),col='blue',main='marginal X1'); abline(0,1,col='red'); plot(dom,sort(u[,2]),col='blue',main='marginal X2'); abline(0,1,col='red'); This is all very nice, but there are a number of pitfalls that have to be discussed: Copula's for discrete distributions are a real can of worms. If we can use a multivariate Gaussian distribution to get a dependency structure, why not use a multivariate student t? Or a multivariate Pareto? Or other dependencies which are much more exotic, but all could in principle also lead to a $0.5$ correlation if you set the parameters right. Given marginals and a correlation, it is not always the case that you can construct a copula and hence a multivariate distribution with the desired properties. A nice example is given in Embrechts (2009), "Copulas: A Personal View", The Journal of Risk and Insurance, Vol. 76, No. 3, 639-650. The example shows that if you want the marginals to be lognormal distributed $LN(0,1)$ and $LN(0,6)$ respectively, your correlation is restricted to the range $[-0.00025,0.01372]$. The heavy tails of the lognormals essentially constrain you to that range. This can be proven from the Fréchet-Hoeffding bounds. More details are in the article. More can be said and I think the article I quoted in my last item is a nice starting point.<|endoftext|> TITLE: On the proof that every positive continuous random variable with the memoryless property is exponentially distributed QUESTION [9 upvotes]: The theorem to prove is: $X$ is a positive continuous random variable with the memoryless property, then $X \sim Expo(\lambda)$ for some $\lambda$. The proof is explained in this video, but I will type it out here as well. I would like to get some clarification on certain parts of this proof. Proof Let $F$ be the CDF of $X$, and let $G(x)=P(X>x)=1-F(x)$. The memoryless property says $G(s+t)=G(s)G(t)$, we want to show that only the exponential will satisfy this. Try $s=t$, this gives us $G(2t)=G(t)^2,G(3t)=G(t)^3,...,G(kt)=G(t)^k$. Similarly, from the above we see that $G(\frac{t}{2})=G(t)^\frac{t}{2},...,G(\frac{t}{k})=G(t)^{\frac{1}{k}}$. Combining the two, we get $G(\frac{m}{n}t)=G(t)^\frac{m}{n}$ where $\frac{m}{n}$ is a rational number. Now, if we take the limit of rational numbers, we get real numbers. Thus, $G(xt)=G(t)^x$ for all real $x>0$. If we let $t=1$, we see that $G(x)=G(1)^x$ and this looks like the exponential. 
Thus, $G(1)^x=e^{x\ln G(1)}$, and since $0 < G(1) < 1$ we can write $G(x)=e^{-\lambda x}$ with $\lambda=-\ln G(1)>0$, i.e. $X \sim Expo(\lambda)$, which completes the proof. I have a few questions about the steps of this proof. REPLY: The memoryless property states that $\mathsf P(X> t+s\mid X>s)=\mathsf P(X>t)$. So we can use this to state: $$\begin{align}\mathsf P(X>s+t) =&~\mathsf P(X>s)\mathsf P(X>s+t\mid X>s)\\=&~\mathsf P(X>s)\mathsf P(X>t)\\[2ex] 1-F_X(s+t)=&~ (1-F_X(s))(1-F_X(t))\\[1ex] F_X(s+t)=&~ F_X(s)+F_X(t)-F_X(s)F_X(t)\\[3ex] G_X(s+t) =&~ G_X(s)G_X(t) & G_X(z):=1-F_X(z)\end{align}$$ Using $G$ gives a more useful result. What does the professor mean when he says that you can get real numbers by taking the limit of rational numbers? Every real number is the limit of some sequence of rational numbers. $$\forall r\in\Bbb R :\lim_{n\in\Bbb N, n\to\infty}\frac{\lfloor{rn}\rfloor}{n}=r$$ In the video, he just says that $G(x)=G(1)^x$ looks like an exponential and thus, $G(x)=G(1)^x=e^{x\ln G(1)}$. How did he know that this is an exponential? Because it looks like it. It is an easily recognised pattern. By definition of the $\ln()$ function, for any $a>0$, $a^x = e^{x\ln a}$. Thus $G(1)^x=e^{x\ln G(1)}$<|endoftext|> TITLE: Determinant of a large block matrix QUESTION [5 upvotes]: $\newcommand{\lmt}{\left[\begin{matrix}}$ $\newcommand{\rmt}{\end{matrix}\right]}$ Hi, I was reading through a proof of the number of domino tilings of a $(2n)\times(2n)$ chessboard, and somewhere in the proof was the following unjustified claim: Let $A=\lmt 0&1&0&\cdots&0\\ -1&0&1&\ddots&\vdots\\ 0&-1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ 0&\cdots&0&-1&0 \rmt$ and $B=\lmt 0&1&0&\cdots&0\\ 1&0&1&\ddots&\vdots\\ 0&1&0&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&1\\ 0&\cdots&0&1&0 \rmt$. In case my notation isn't clear, they have $\pm1$ on the superdiagonal and subdiagonal and $0$ everywhere else. Let $C$ be the matrix with blocks $\lmt -A&I&0&\cdots&0\\ I&-A&I&\ddots&\vdots\\ 0&I&-A&\ddots&0\\ \vdots&\ddots&\ddots&\ddots&I\\ 0&\cdots&0&I&-A \rmt$. Let $p_B$ be the characteristic polynomial of $B$. Then, the claim is that $$ \det C = \det p_B(A). $$ This was stated without proof, so I'm wondering if this follows from a well-known theorem, or if there's a slick proof of it. Also, I'm curious to what extent this claim generalizes. Thanks! REPLY [2 votes]: Yes, this works fairly generally. (I don't know if this is "well known," maybe to others, I didn't know it before.) Since $B$ is self-adjoint, we can diagonalize it by a unitary transformation: $U^*BU=\textrm{diag}(b_1, \ldots , b_{2n})$. Moreover, this also works for the block versions of these matrices, in the following sense: if we let $$ \mathcal U = \begin{pmatrix} u_{11}I & u_{12}I & \ldots & u_{1,2n}I\\ && \ldots & \\ u_{2n,1}I & u_{2n,2}I & \ldots & u_{2n,2n}I \end{pmatrix} , $$ with the matrix elements $u_{jk}$ of $U$, then this $\mathcal U$ is still unitary and $\mathcal U^*\mathcal{B}\mathcal U=\textrm{diag}(b_1I, \ldots , b_{2n}I)$. Here, I write $\mathcal B$ for the block version of $B$, where we replace the $1$'s with $I$. Similarly, $\mathcal A$ will denote the matrix with $A$ on the "diagonal" and $0$'s everywhere else. Then $$ \det C = \det (\mathcal{B-A})=\det(\mathcal{U^*(B-A)U}) =\det\textrm{diag}(b_1I-A, b_2I-A, \ldots , b_{2n}I-A) . $$ The last step uses that $\mathcal{U^*AU}=\mathcal A$, and this follows because $\mathcal U$ is unitary with blocks that are multiples of $I$ and $\mathcal A$ is block diagonal (just writing it out will give it quickly).
Thus $\det C=\prod \det (b_jI-A) = \det \prod (b_j I-A) = \det p_B(A)$, as claimed.<|endoftext|> TITLE: $6 \times 6$ and $7 \times 7$ integer matrices QUESTION [6 upvotes]: Can one fill a $6 \times 6$ matrix with integers so that the sum of all the numbers in each $3 \times 3$ square equals $2016$ and the sum of all the numbers in each $5 \times 5$ square equals $2015$? Solve the same problem for the $7 \times 7$ case. My work so far: I solved the problem for table $6\times6$: \begin{bmatrix} 2017 & -1 & 0 & 0 & 0 & 2017 \\ 0 & -6050 & 4033 & 4034 & -6050 & -1 \\ 0 & 4034 & -2017 & -2017 & 4033 & 0 \\ 0 & 4033 & -2017 & -2017 & 4034 & 0 \\ -1 & -6050 & 4034 & 4033 & -6050 & 0 \\ 2017 & 0 & 0 & 0 & -1 & 2017 \end{bmatrix} Addition: I proved that $x_1+x_2+x_3+x_4=4\cdot 2017$ $y_1+y_2+y_3+y_4=-4$ $x_1+x_3=x_2+x_4$ $S=0$ Further Q: How to prove or disprove the existence of the table $7\times7$? REPLY [8 votes]: Hint: in the $7 \times 7$ case, how many $3 \times 3$ squares and how many $5 \times 5$ squares contain each entry of the table? EDIT: OK, time's up... each entry of the $7 \times 7$ table is in the same number of $3 \times 3$ squares as it is in $5 \times 5$ squares. So i you take the sums of the numbers in each $3 \times 3$ square and add them up, you should get the same result as if you take the sum of the numbers in each $5 \times 5$ square and add them up. Now, what does that tell you?<|endoftext|> TITLE: Finding the Centre of an Abritary Set of Points in Two Dimensions QUESTION [10 upvotes]: I am currently working on a program that needs to transform one set of coordinates by shifting them to the center of the screen. The points are offset from the middle of the screen - either to the left or to the right. The coordinates of the points are first presented within a rectangular bound. In the next step I need to figure out how to move them to the center of the screen. The number of points vary from $2$ to $5$ points. How can I... Find the center of a finite set of points? Translate the positions to a new set of coordinates centered around a new center? REPLY [10 votes]: You can use the centroid of the points. Imagine you have a set $S$ of $n$ points (in blue in the picture below) $$ S=\{(x_1,y_1),(x_2,y_2),\dots (x_n,y_n)\}$$ Their centroid is given by: $$(\bar x,\bar y) = \left(\frac{1}{n}\sum_{i=0}^n x_i, \frac{1}{n}\sum_{i=0}^n y_i\right).$$ Then, substracting those coordinates to your points $$ S_{(0,0)}=\{(x_1-\bar x,y_1-\bar y),(x_2-\bar x,y_2-\bar y),\dots (x_n-\bar x,y_n-\bar y)\}$$ you will move them towards the origin (in red in the picture below). If you want to translate them to a different point (say, towards point $(a,b)$), then you just need to sum those coordinates to the ones above (in green in the picture below): $$ S_{(a,b)}=\{(x_1-\bar x+a,y_1-\bar y+b),(x_2-\bar x+a,y_2-\bar y+b),\dots (x_n-\bar x+a,y_n-\bar y+b)\}$$ Of course, your program does not need to do both steps separately. You can just apply the last one directly.<|endoftext|> TITLE: Does there exist a function such that $\lim_{x \to a} f(x) = L$ for all $a \in \mathbb R$ but $f(x)$ is never $L$? QUESTION [10 upvotes]: Does there exist a function $f:\mathbb R \to \mathbb R$ such that $\lim_{x \to a} f(x) = L$ for all $a \in \mathbb R$ but $f(x) \neq L$ for all $x$? I found such a function in $\mathbb Q \to \mathbb Q$, where $f(\frac{p}{q})= \text{first p+q digits of }\pi$ satisfies the above condition. However, I have not been able to extend it to reals. 
So, is such a function possible, and if it is, is there any explicit example preferably related to the above function in rationals? REPLY [5 votes]: Suppose $f$ is such a function. Like @Tim kinsella did, set $A_n = \{x\in [0,1]: |f(x) - L| > 1/n\}.$ Then $[0,1] = \cup A_n.$ Hence some $A_{n_0}$ must be infinite. By Bolzano-Weierstrass, $A_{n_0}$ has an accumulation point $x_0.$ Thus $x_0$ is a limit of a sequence $x_m$ such that $|f(x_m) - L| > 1/n_0$ for every $m.$ Since $\lim_{x\to x_0} f(x) = L,$ we have a contradiction.<|endoftext|> TITLE: Fake proofs using matrices QUESTION [28 upvotes]: Having gone through the 16-page-list of questions tagged fake-proofs, and going though both the relevant MSE Question and Wikipedia page, I didn't find a single fake proof that involved matrices. So the question (or challange) here is: what are some fake proof using matrices? In particular, the fake proof should use a property, an operation, ..., specific to matrices (or at least not present in $\mathbb{R}$ or $\mathbb{C}$), e.g. Noncommutativity (Non-)existence of an inverse Matrix sizes Operations as $\det$, $\text{trace}$, ... Eigenvalues and diagonalization Matrix decompositions and normal forms ... Note: It does not matter if the result being "proven" is correct or not. The fallacy in the proof itself is what matters. Examples: Proof that 1 = 0 Proof: it is a well-known fact that $(x+y)^2 = x^2 + 2xy + y^2$. Now let $$x = \begin{pmatrix}0 & 1\\ 0 & 0 \end{pmatrix},\;\;y = \begin{pmatrix}1 & 0\\ 0 & 0 \end{pmatrix}.$$ On the one hand, we have that $$ (x+y)^2 = \begin{pmatrix}1 & 1\\ 0 & 0 \end{pmatrix}^2 = \begin{pmatrix}1 & 1\\ 0 & 0 \end{pmatrix},$$ on the other hand we have $$x^2 + 2xy + y^2 = \begin{pmatrix}0 & 0\\ 0 & 0 \end{pmatrix} + 2\begin{pmatrix}0 & 0\\ 0 & 0 \end{pmatrix} + \begin{pmatrix}1 & 0\\ 0 & 0 \end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 0 \end{pmatrix}.$$ Since two matrices are equal if and only if all their entries are equal, we conclude that $1 = 0$. The mistake here is that $x$ and $y$ do not commute. Thus $(x+y)^2 = x^2 + xy + yx + y^2 \neq x^2 + 2xy + y^2$. Proof that 2 = 0 Proof: We know that $\det (AB) = \det (BA)$, since $$\det (AB) = (\det A) (\det B) = (\det B) (\det A) = \det (BA).$$ Now consider the matrices $$ A = \begin{pmatrix}1 & 0 & 1\\ 0 & 1 & 0 \end{pmatrix}, \; \; B = \begin{pmatrix}1 & 0\\ 0 & 1\\ 1 & 0 \end{pmatrix}.$$ We have that $$AB = \begin{pmatrix}2 & 0\\ 0 & 1 \end{pmatrix}, \;\; BA = \begin{pmatrix}1 & 0 & 1\\ 0 & 1 & 0 \\1 & 0 & 1\end{pmatrix}.$$ Hence $\det (AB) = 2$ and $\det (BA) = 0$, therefore $2 = 0$. The mistake here is that $\det$ is defined for square matrices only, and thus $\det AB = \det BA$ only holds in general if $A$ and $B$ are square. REPLY [13 votes]: Here is a fake proof of a true theorem, Cayley-Hamilton. So the characteristic polynomial of a matrix $A$ is $f_A(t) = \textrm{det}(tI-A)$. Then $f_A(A) = \textrm{det}(AI-A) = \textrm{det}(A-A) = \textrm{det}(0) = 0$.<|endoftext|> TITLE: Meaning of the word "axiom" QUESTION [24 upvotes]: One usually describes an axiom to be a proposition regarded as self-evidently true without proof. Thus, axioms are propositions we assume to be true and we use them in an axiomatic theory as premises to infer conclusions, which are called "theorems" of this theory. For example, we can use the Peano axioms to prove theorems of arithmetic. This is one meaning of the word "axiom". But I recognized that the word "axiom" is also used in quite different contexts. 
For example, a group is defined to be an algebraic structure consisting of a set $G$, an operation $G\times G\to G: (a, b)\mapsto ab$, an element $1\in G$ and a mapping $G\to G: a\mapsto a^{-1}$ such that the following conditions, the so-called group axioms, are satisfied: $\forall a, b, c\in G.\ (ab)c=a(bc)$, $\forall a\in G.\ 1a=a=a1$ and $\forall a\in G.\ aa^{-1}=1=a^{-1}a$. Why are these conditions (that an algebraic structure has to satisfy to be called a group) called axioms? What have these conditions to do with the word "axiom" in the sense specified above? I am really asking about this modern use of the word "axiom" in mathematical jargon. It would be very interesting to see how the modern use of the word "axiom" historically developed from the original meaning. Now, let me give more details why it appears to me that the word is being used in two different meanings: As peter.petrov did, one can argue that group theory is about the conclusions one can draw from the group axioms just as arithmetic is about the conclusions one can draw from the Peano axioms. But in my opinion there is a big difference: while arithmetic is really about natural numbers, the successor operation, addition, multiplication and the "less than" relation, group theory is not just about group elements, the group operation, the identity element and the inverse function. Group theory is rather about models of the group axioms. Thus: The axioms of group theory are not the group axioms, the axioms of group theory are the axioms of set theory. Theorems of arithmetic can be formalized as sentences over the signature (a. k. a. language) $\{0, s, +, \cdot\}$, while theorems of group theory cannot always be formalized as sentences over the signature $\{\cdot, 1, ^{-1}\}$. Let me give an example: A typical theorem of arithmetic is the case $n=4$ of Fermat's last theorem. It can be formalized as follows over the signature $\{0, s, +, \cdot\}$: $$\neg\exists x\exists y\exists z(x\not = 0\land y\not = 0\land z\not = 0\quad\land\quad x\cdot x\cdot x\cdot x + y\cdot y\cdot y\cdot y = z\cdot z\cdot z\cdot z).$$ A typical theorem of group theory is Lagrange's theorem which states that for any finite group G, the order of every subgroup H of G divides the order of G. I think that one cannot formalize this theorem as a sentence over the "group theoretic" signature $\{\cdot, 1, ^{-1}\}$; or can one? REPLY [2 votes]: It happens that we find "interesting" to explore the conclusions that can be logically derived from a set of formal statements. It could be "interesting" for different reasons such as: maybe the statements are all true for a specific "interesting" model (such as arithmetics or set theory) that we want to study maybe the statements are all true for several structures that are not all necessarily interesting individually but are "widespread" in different areas of knowledge (such as groups/rings/ordered sets...) 
Every time we want to explore the conclusions that follow from a set of statements we use to call them "axioms".<|endoftext|> TITLE: Prove $A=B=\pi$ QUESTION [15 upvotes]: $\gamma=0.57721566...$ $\phi=\frac{1+\sqrt5}{2}$ Let, $$A=\sum_{n=1}^{\infty}\arctan\left(\frac{(e+e^{-1})(\phi^{\frac{2n-1}{2}}+\phi^{\frac{1-2n}{2}})}{e^2+e^{-2}-1+\phi^{2n-1}+\phi^{1-2n}}\right)$$ $$B=\sum_{n=1}^{\infty}\arctan\left(\frac{(\gamma+\gamma^{-1})(\phi^{\frac{2n-1}{2}}+\phi^{\frac{1-2n}{2}})}{\gamma^2+\gamma^{-2}-1+\phi^{2n-1}+\phi^{1-2n}}\right)$$ (1) $$A=B=\pi$$ We found (1) accidentally while trying to search for $\pi$ in term of $\arctan(x)$ most idea are from Dr Ron Knott his Fibonacci site. Can anyone prove (1) REPLY [17 votes]: For any $a \ge 1$, let $f_n(a)$ be the expression $$f_n(a) \stackrel{def}{=}\frac{(a+a^{-1})(\phi^{n-1/2} + \phi^{1/2-n})}{a^2 + a^{-2} - 1 + \phi^{2n-1} + \phi^{1-2n}}$$ The sums at hand equal to $\sum\limits_{n=1}^\infty \tan^{-1}f_n(e)$ and $\sum\limits_{n=1}^\infty \tan^{-1}f_n(\gamma^{-1})$ respectively. For any $n \ge 0$, let $p_n = \phi^{n-1/2} - \phi^{1/2-n}$. It is easy to verify $p_{n+1} - p_{n-1} = (\phi - \phi^{-1})(\phi^{n-1/2} + \phi^{1/2-n}) = \phi^{n-1/2} + \phi^{1/2-n}$ $p_{n+1} p_{n-1} = (\phi^{n+1/2} - \phi^{-1/2-n})(\phi^{n-3/2} - \phi^{3/2-n}) = \phi^{2n-1} + \phi^{1-2n} - 3$ Let $u = a + a^{-1}$. Notice $$\frac{\frac{u}{p_{n-1}} - \frac{u}{p_{n+1}}}{ 1 + \frac{u}{p_{n-1}} \frac{u}{p_{n+1}} } = \frac{u(p_{n+1}-p_{n-1})}{u^2 + p_{n+1}p_{n-1}} = \frac{(a+a^{-1})(\phi_{n-1/2} + \phi_{1/2-n})}{a^2 + a^{-2} - 1 + \phi^{2n-1} + \phi^{1-2n}} = f_n(a) $$ There exists integers $N_n$ such that $$\tan^{-1} f_n(a) = \tan^{-1} \left(\frac{\frac{u}{p_{n-1}} - \frac{u}{p_{n+1}}}{ 1 + \frac{u}{p_{n-1}} \frac{u}{p_{n+1}} }\right) = \tan^{-1}\frac{u}{p_{n-1}} - \tan^{-1}\frac{u}{p_{n+1}} + N_n \pi $$ When $n > 1$, $p_{n+1} > p_{n-1} > 0$, it is easy to see $N_n = 0$. When $n = 1$, we have $$ \begin{cases} f_1(a), \frac{u}{p_2} > 0\\ \frac{u}{p_0} < 0 \end{cases} \quad\implies\quad \begin{cases} \tan^{-1}f_1(a), \tan^{-1}\frac{u}{p_2} \in (0,\frac{\pi}{2})\\ \tan^{-1}\frac{u}{p_0} \in (-\frac{\pi}{2},0) \end{cases} \quad\implies\quad N_1 = 1 $$ This implies $$\begin{align} & \sum_{n=1}^\infty \tan^{-1}f_n(a)\\ = & \sum_{m=1}^\infty \left( \tan^{-1}f_{2m-1}(a) + \tan^{-1}f_{2m}(a)\right)\\ = & \pi + \lim_{N\to\infty}\sum_{m=1}^N\left[\left( \tan^{-1}\frac{u}{p_{2m-2}} - \tan^{-1}\frac{u}{p_{2m}}\right) + \left( \tan^{-1}\frac{u}{p_{2m-1}} - \tan^{-1}\frac{u}{p_{2m+1}}\right)\right]\\ = & \pi + \lim_{N\to\infty} \left[ \left(\tan^{-1}\frac{u}{p_0} + \tan^{-1}\frac{u}{p_1}\right) - \left(\tan^{-1}\frac{u}{p_{2N}} + \tan^{-1}\frac{u}{p_{2N+1}}\right) \right]\\ = & \pi + \left(\tan^{-1}\frac{u}{p_0} + \tan^{-1}\frac{u}{p_1}\right) \end{align} $$ Notice $p_0 = -p_1$, the last bracket vanishes. As a result, $$\sum_{n=1}^\infty \tan^{-1}f_n(a) = \pi\quad\text{ for all } a \ge 1$$ As pointed out by @Winther, we can relax the condition from $a \ge 1$ to $a > 0$. This is because for $a \in (0,1)$, $f_n(a) = f_n(a^{-1})$. We can generalize this a little bit. Let $g_n(u)$ be the expression $$g_n(u) \stackrel{def}{=} \frac{u(\phi^{n-1/2} + \phi^{1/2-n})}{ u^2 + (\phi^{n-1/2} + \phi^{1/2-n})^2 - 5}$$ We have $f_n(a) = g_n(a + a^{-1})$. 
If we adopt the convention that $\tan^{-1}(\mathbb{R}) = \left(-\frac{\pi}{2}, \frac{\pi}{2} \right]$, i.e $\tan^{-1}\infty = \frac{\pi}{2}$ and $\tan^{-1}x < 0$ for $x < 0$, we have $$\sum_{n=1}^\infty \tan^{-1}g_n(u) = \begin{cases} \pi, & u_c \le u\\ 0, & 0 < u < u_c \end{cases} \quad\text{ where }\quad u_c = \sqrt{3-\sqrt{5}} $$ For all $u > 0$, the analysis is essentially the same as above. When $u < u_c$, there is one place we need to adjust the argument. Namely, $g_1(u)$ becomes $-ve$ and this forces the corresponding $N_1$ to vanish. At the end, when $u < u_c$, this causes the $\pi$ term disappear from above sum.<|endoftext|> TITLE: Why is there no general solution for the general 2nd order linear ODE QUESTION [6 upvotes]: We can always solve a general first order linear ODE: $$y'(x)+a(x)y(x)=b(x).$$ I am looking for some intuition why the general 2nd order linear ODE $$y''(x)+a(x)y'(x)+b(x)y(x)=c(x) $$ does not have a gerneral formula. Is it mathematically impossible, or is there a chance, that someone will find a general solution? If it is mathematically impossible, is there any intuitive explanation to this phenomenon? REPLY [2 votes]: It is mathematically impossible. The proof goes (generally speaking) along the same lines as it is proved that there is no general formula to solve algebraic equations of degree 5 or up. All the details can be found in, e.g., An introduction to differential algebra by Irving Kaplansky.<|endoftext|> TITLE: Inclusion of Schwartz space on $L^p$ QUESTION [8 upvotes]: I'm looking for a proof of $\mathcal{S}(\mathbb{R}) \subset L^p(\mathbb{R})$ for $1 \leq p \leq \infty$. My informal probe follow like this: For any function $f \in L^p(\mathbb{R})$ exists a piecewise function $h_n$ such that $||h_n(x) - f(x)||_{L^p} \rightarrow 0 \ \text{with} \ n\rightarrow \infty \ (\forall x)$, and any $h_n$ could be approximated by compactly supported smooth functions ($C_{c}^{\infty}(\mathbb{R})$). And Since $C_{c}^{\infty}(\mathbb{R})$ is dense in $\mathcal{S}(\mathbb{R})$, then can be concluded that $\mathcal{S}(\mathbb{R}) \subset L^p(\mathbb{R})$. Any help to formalize that, or any different proof will be helpful. REPLY [17 votes]: To make Davide Giraudo's comment more clear, here are the steps: $$ \int |f(x)|^p = \int \left((1+x^2)|f(x)|\right)^p\frac{1}{(1+x^2)^p} $$ From here, since $(1+x^2)f(x)$ is bounded (because $f\in \mathcal{S}$), and $\frac{1}{(1+x^2)^p}\leq \frac{1}{1+x^2}\in L^1$ (for $p\geq 1$), one gets $$ \int |f(x)|^p \leq \|(1+x^2)f(x)\|^p_\infty \int \frac{1}{1+x^2}\leq \pi (\|f\|_{0,0}+\|f\|_{2,0})^p $$ where $$ \|f\|_{\alpha,\beta} = \underset{x\in\mathbb{R}^n}{\sup} |x^\alpha D^\beta f(x)| $$ Since $f\in \mathcal{S}$, $\|f\|_{\alpha,\beta}$ is finite for all $\alpha,\beta\geq 0$.<|endoftext|> TITLE: placing balls inside ball QUESTION [6 upvotes]: Is it possible to put pairwise disjoint open 3d-balls with radii $\frac{1}{2},\frac{1}{3},\frac{1}{4},\dots$ inside a unit ball? not an original question, I found it somewhere in the internet once, but without any answer. REPLY [7 votes]: As pointed out by @mjqxxxx, this is possible. In fact, it is possible to pack two families of open balls of radii $\frac12, \frac13, \ldots$ into unit sphere. The picture below illustrate one possible configuration (with $n$ up to $300$): $\hspace1in$ In above picture, the two families are related to each other by a reflection with respect to origin. Let us concentrate on the families of brown balls. 
The brown ball with radius $\frac12$ is centered at $(0,0,-\frac12)$. For $n \ge 3$, the brown ball with radius $\frac1n$ is centered at $$(x_n,y_n,z_n) = (r_n\sin\theta_n\cos\phi_n,r_n\sin\theta_n\sin\phi_n, r_n\cos\theta_n)$$ where $\displaystyle\;r_n = 1 - \frac1n,\;\theta_n = \frac{\pi}{4}\left(\frac{n+1}{n-1}\right)\;$ and $\phi_n$ is defined recursively by $$ \phi_n = \begin{cases} 0, & n = 3\\ \phi_{n-1} + \rho_n + \rho_{n-1}, &n > 3 \end{cases} \quad\text{ where }\quad \rho_n = \sin^{-1}\left(\frac{1}{(n-1)\sin\theta_n}\right) $$ If one orthogonally projects the ball with radius $\frac1n$ onto the $xy$-plane, $\rho_n$ will be the half-angle subtended by the ball's image with respect to the origin. The condition $\phi_{n} - \phi_{n-1} = \rho_{n} + \rho_{n-1}$ guarantees that the balls with radii $\frac{1}{n}$ and $\frac{1}{n-1}$ are disjoint from each other. For $n = 2$, the brown ball with radius $\frac12$ is touching the blue ball with radius $\frac12$ and the brown ball with radius $\frac13$. However, their interiors are disjoint. For small $n$, one can build a 3d model of the above configuration and verify by eye that the balls are disjoint from each other. The picture above contains balls with $n$ up to $300$; the balls with $2 < n \le 300$ have been inspected and they are disjoint from each other. For larger $n$, the balls form a spiral converging to some sort of limit cycle at $\theta = \frac{\pi}{4}$. For a typical ball there, its projection onto the $xy$-plane will subtend an angle $2\rho_n \sim \frac{2\sqrt{2}}{n}$ with respect to the origin. Furthermore, the balls nearby will be arranged into "arms". Let $\frac{1}{Kn}$ be the radius of the nearest ball on the next arm of the spiral (i.e. the arm closer to the limit cycle immediately above the current arm). The "angle" between the ball and its nearest neighbor on the next arm will be around $2\pi$. This means $$\int_{n}^{Kn} \frac{2\sqrt{2}}{n} dn = 2\sqrt{2}\log K \approx 2\pi \quad\implies\quad K \approx e^{\pi/\sqrt{2}}$$ The distance between this pair of balls is around $\theta_n - \theta_{Kn} \approx \frac{\pi}{2n} - \frac{\pi}{2Kn} = \frac{\pi}{2n}\left(1-\frac1K\right)$. We can compare it with the required separation $\frac{1}{n} + \frac{1}{Kn} = \frac{1}{n}\left(1 + \frac{1}{K}\right)$ between them. Since the ratio $\frac{\pi}{2}\left(\frac{K-1}{K+1}\right) = \frac{\pi}{2}\left(\frac{e^{\pi/\sqrt{2}}-1}{e^{\pi/\sqrt{2}}+1}\right) \approx 1.263418 > 1$, when $n$ is large, the ball with radius $\frac1n$ is disjoint from the one on the next arm of the spiral. To bridge the analysis for small and large $n$, we need to check whether, by the time $n$ reaches $300$, we can switch to using the result for large $n$. I don't have a rigorous proof. However, when we increase $n$ to around $100$, the relative error between $2\rho_n$ and its approximation $\frac{2\sqrt{2}}{n}$ already falls below $0.55\%$. The relative error in $K$ should be around $\frac{\pi}{\sqrt{2}} \cdot 0.55\% \sim 1.3\%$. As far as I can see, other approximations should generate relative errors of the same order. Since the ratio $1.263418$ above is tens of percent larger than $1$, it seems $n = 100$ is already large enough. Combining all these analyses, a safe bet is that all the balls in the above two families are disjoint from each other.<|endoftext|> TITLE: Why is the fundamental group of the plane with two holes non-abelian? QUESTION [11 upvotes]: I know $\pi_1(\mathbb{R}^2\setminus\{x,y\}) = \mathbb{Z}\ast\mathbb{Z} = \langle a,b\rangle$, but its non-abelian-ness isn't obvious to me.
Specifically, I draw a box and two points to represent $x$ and $y$. I call the clockwise loop around $x$ $a$, and the clockwise loop around $y$, $b$. If I draw the loops $ab$ and $ba$ they look `obviously homotopic' to me (they are both a single clockwise loop containing both $x$ and $y$), so should be the same element in $\pi_1$, which is clearly not the case if we take the free product of $\mathbb{Z}$ with itself. It seems that I must be making a big mistake somewhere. What is the obstruction to $ab \simeq ba$? I would prefer something concrete, I know this fundamental group follows from standard theorems (van Kampen's theorem), but I want to see the homotopy fail, because I think they are pretty clearly homotopic - so there is something wrong with my intuition which I'd like to fix. REPLY [26 votes]: An important detail is that the fundamental group is built from loops that all start and end at a common base point. We know that some loops can be continuously deformed into other loops; these loops are called homotopically equivalent. As a special case, some loops can be continuously deformed into other loops while keeping the base point fixed in place. The fundamental group consists of loops which are considered equivalent when they are homotopic in this special base-point-preserving sense. The fundamental group fails to be commutative when there are loops that are homotopically equivalent in the generic sense, and yet not homotopically equivalent in the base-preserving sense. In pictures, below you can see the plane with two holes along with three loops a, b, c with a common base point. Note that you can't continuously deform b into c unless you're allowed to move the base point. (This is our signal that the fundamental group doesn't commute, which we'll show in more detail.) Loops b and c aren't homotopic in a way that preserves the base point. Therefore, b ≢ c in the fundamental group. Even so, b and c are homotopic in the general sense. For example, one (non-base-point-preserving) homotopy from b to c involves sliding the base point around the lower hole: A neat observation is that during this homotopy, the trajectory of the base point is recognizable as loop a! Loops b and c are homotopic in the general sense. The trajectory of the base point under that homotopy is loop a. From this, it follows that there's a base-preserving homotopy between loop c•a ("go around the hole, then perform c") and loop a•b ("perform b, then go around the hole"): Loops c•a and a•b are equivalent in the fundamental group. In other words, loops b and c are conjugates: $$ca \equiv ab$$ $$c \equiv a\cdot b \cdot a^{-1}$$ If the fundamental group were abelian, then this conjugacy would imply that b = c. Because b ≢ c, the fundamental group is nonabelian. In general, when there's a homotopy between two loops b and c, let a be the trajectory of the base point under that homotopy. Then a is itself a loop with the same base point as b and c, and c= aba⁻¹ (b and c are conjugates). If there isn't a base-preserving homotopy, then a is nontrivial and b ≢ c, so the fundamental group isn't abelian. Final comment: Note that from the beginning, the source of the problem (noncommutativity) is the presence of holes. We saw intuitively that we could deform b into c, but only by moving the base point— there's a hole in the way. Now we can see that property emerge algebraically: Loops b and c are deformable into one another when they're conjugate c= aba⁻¹. 
The element a describes the trajectory of the base point during that deformation. So if they're deformable without moving the base point, this is equivalent to saying that conjugacy holds when a is trivial (topologically, a is the constant loop at the base point; algebraically, it's the multiplicative unit), in which case we have b = c. Otherwise, if we must move the base point in order to deform one loop into the other, this is equivalent to saying that conjugacy c= aba⁻¹ only holds for a nontrivial choice of loop a. A loop is nontrivial just when it encircles a hole (!), hence this is an algebraic way of saying that a hole (call it hole A) stands in the way of deforming b into c.<|endoftext|> TITLE: Behavior of a Collatz-like mod-4 sequence: Do some numbers increase without limit? QUESTION [6 upvotes]: Define \begin{eqnarray} f(n) &=& (n-1)^2 \; \textrm{if} \; (n \bmod 4) = 1\\ f(n) &=& \lfloor n/4 \rfloor \; \textrm{otherwise} \end{eqnarray} and let $f^k(n) = f(f( \cdots (n) \cdots ) )$ be the result of applying $f(\;)$ $k$ times to $n$. So $f^4(5)=0$ because the iterates produce the sequence $(5,16,4,1,0)$, $f^8(13)=0$ because the sequence is $(13,144,36,9,64,16,4,1,0)$, and so on. Many numbers map to $0$. But some seem not to, i.e., to grow without bound. E.g., here is the start of the sequence for $n=53$, the smallest "problematic" number: $$ (53,2704,676,169,28224,7056,1764,441,193600,48400,12100,3025,9144576,2286144,571536,142884,35721,1275918400,318979600,79744900,19936225,\ldots) $$ My question is: What is special about the numbers $53, 85, 77, 101, \ldots$ that seem to grow without bound? Can one prove that, for any particular number $n$, $\lim_{k \to \infty}f^k(n) = \infty$ ? REPLY [2 votes]: This is not an answer but too long for a comment. I've looked for simpler structure in the list of the potentially diverging starting points $n$. I've got this remarkable list for $n$ increased in steps of 16. The expression $n_{40}$ means the 40th iterate started from $n_0$, and the documented value is $f(x)=\log_4(1+x)$. It seems that all $n_0$ have a diverging trajectory, with the main systematic exception: if $n_0 = 5+k \cdot 16$ the iterations seem to diverge; but if $n_0 = 5+2^j \cdot 16$ the tested iterations converged at $n_{40}$, $n_{60}$ or so. However, at index 45, $n_0=5+45\cdot 16 = 725$, there is a suspiciously small value, and indeed, further iterations show that this case also has a converging trajectory. Perhaps here starts another systematic exception - I've not tested it further so far. Here is the quick-and-dirty table of occurrences. Note that there are some more $n_0$ with seemingly divergent trajectories; possibly one can state similar lists for them.
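(For reference — this sketch is not part of the original reply, and the function names and the cutoff at 40 iterations simply mirror the description above — here is one way the trajectories and the $\log_4(1+n_{40})$ values in the table that follows can be reproduced; the convergent cases print 0.0, the divergent ones values in the thousands:)

import math

def f(n):
    # the map from the question: square n-1 when n = 1 (mod 4), otherwise integer-divide by 4
    return (n - 1) ** 2 if n % 4 == 1 else n // 4

def iterate(n0, k=40):
    x = n0
    for _ in range(k):
        x = f(x)       # Python integers are arbitrary precision, so the huge values are exact
    return x

for index in range(8):
    n0 = 5 + index * 16
    x = iterate(n0)
    print(index, n0, math.log(1 + x, 4))   # compare with the f(n_40) column in the table below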
index n_0 f(n_40) 0 5 0.E-202 1 21 0.E-202 2 37 0.E-202 3 53 7319.43109000 4 69 0.E-202 5 85 7317.43109000 6 101 1778.43168503 7 117 21713.5570367 8 133 0.E-202 9 149 33105.8454885 10 165 21714.5570367 11 181 64089.1819950 12 197 8794.71300153 13 213 31233.9918267 14 229 14399.1441614 15 245 19588.9982699 16 261 0.E-202 17 277 11692.8913443 18 293 21324.9071387 19 309 52525.0264043 20 325 75280.3205060 21 341 412816.543598 22 357 8457.22505438 23 373 48695.2063595 24 389 7944.86049345 25 405 114951.949346 26 421 43423.1140735 27 437 245610.738045 28 453 7863.21231992 29 469 108387.380307 30 485 27329.9204850 31 501 104478.303516 32 517 0.E-202 33 533 127664.156130 34 549 13441.3809126 35 565 14188.4534802 36 581 19847.5273459 37 597 31155.1619582 38 613 13649.1336926 39 629 261684.344974 40 645 9768.31253578 41 661 122116.777066 42 677 113931.186043 43 693 68043.5535663 44 709 24346.4329603 45 725 0.0 ** 90 iterations are needed 46 741 15570.4518083 47 757 29258.7243600 48 773 9214.60724988 49 789 73974.3922236 50 805 129620.074893 51 821 150552.641012 52 837 28679.5491401 53 853 67066.2217326 54 869 2730.81273259 55 885 33191.3766959 56 901 10048.1870338 57 917 629982.314847 58 933 26841.3037906 59 949 9626.11985719 60 965 12369.1434013 61 981 288602.715968 62 997 33603.2403205 63 1013 64087.1819950 64 1029 0.E-202 65 1045 154052.076543 66 1061 587584.400245 67 1077 161520.351894 68 1093 60651.5733297 69 1109 311809.590214 70 1125 142115.581822 71 1141 18330.0985841 72 1157 12911.4593518 73 1173 82858.7585839 74 1189 17327.8175741 75 1205 20186.8735728 76 1221 32147.7713491 77 1237 36688.7632139 78 1253 18574.8869986 79 1269 9183.46362766 80 1285 52446.0968995 81 1301 174327.664458 82 1317 79516.9774718 83 1333 42645.8142774 84 1349 68222.8682003 85 1365 332166.113291 86 1381 139971.870409 87 1397 168699.829041 88 1413 31801.1434636 89 1429 178776.254181 90 1445 72907.9979830 91 1461 88912.4667464 92 1477 7397.26201777 93 1493 39601.4741882 94 1509 658789.653898 95 1525 76077.5540848 96 1541 22169.7016133 97 1557 344441.943668 98 1573 148744.102969 99 1589 84606.9183080 100 1605 7752.98564084 101 1621 39059.2501431 102 1637 77974.8093225 103 1653 87240.8574615 104 1669 66247.6873805 105 1685 43938.1719113 106 1701 38571.9327632 107 1717 45235.6466826 108 1733 594993.327534 109 1749 19080.0053926 110 1765 339488.585999 111 1781 42421.4133738 112 1797 51544.3446576 113 1813 380052.965036 114 1829 38816.8961967 115 1845 382527.130545 116 1861 19586.9982699 117 1877 44300.2273006 118 1893 74159.1965251 119 1909 359757.992003 120 1925 13293.9091873 121 1941 383583.566650 122 1957 150775.076478 123 1973 94657.1031825 124 1989 70276.1119030 125 2005 20190.6474120 126 2021 340196.241896 127 2037 4508.31551895 128 2053 0.E-202 * 50 iterations are needed<|endoftext|> TITLE: How to differentiate product of vectors (that gives scalar) by vector? QUESTION [8 upvotes]: I'm trying to understand derivation of the least squares method in matrices terms: $$S(\beta) = y^Ty - 2 \beta X^Ty + \beta ^ T X^TX \beta$$ Where $\beta$ is $m \times 1$ vertical vector, $X$ is $n \times m$ matrix and $y$ is $n \times 1$ vector. 
The question is: why $$\frac{d(2\beta X^Ty)}{d \beta} = 2X^Ty$$ I tried to derive it directly via definition of derivative: $$\frac{d(2\beta X^Ty)}{d \beta} = \lim_{\Delta \beta \to 0} \frac{2\Delta\beta X^T y}{\Delta \beta} = \lim_{\Delta \beta \to 0} 2\Delta\beta X^T y \cdot \Delta \beta^{-1}$$ May be the last equality must be as in the next line, but anyway I don't understand why $$2\Delta\beta \Delta \beta^{-1} X^T y $$And, what is $\Delta \beta^{-1}$? Vectors don't have the inverse form. The same questions I have to this quasion: $$(\beta ^ T X^TX \beta)' =2 X^T X \beta$$ REPLY [6 votes]: Recall that the multiple regression linear model is the equation given by $$Y_i = \beta_0 + \sum_{j=1}^{p}X_{ij}\beta_{j} + \epsilon_i\text{, } i = 1, 2, \dots, N\tag{1}$$ where $\epsilon_i$ is a random variable for each $i$. This can be written in matrix form like so. \begin{equation*} \begin{array}{c@{}c@{}c@{}c@{}c@{}c} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_N \end{bmatrix} &{}={} &\begin{bmatrix} 1 & X_{11} & X_{12} & \cdots & X_{1p} \\ 1 & X_{21} & X_{22} & \cdots & X_{2p} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & X_{N1} & X_{N2} & \cdots & X_{Np} \end{bmatrix} &\begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{bmatrix} &{}+{} &\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_N \end{bmatrix} \\ \\[0.1ex] \mathbf{y} &{}={} &\mathbf{X} &\boldsymbol{\beta} &{}+{} &\boldsymbol{\epsilon}\text{.} \end{array} \end{equation*} Since $\boldsymbol{\epsilon}$ is a vector of random variables, note that we call $\boldsymbol{\epsilon}$ a random vector. Our aim is to find an estimate for $\boldsymbol{\beta}$. One way to do this would be by minimizing the residual sum of squares, or $\text{RSS}$, defined by $$\text{RSS}(\boldsymbol{\beta}) = \sum_{i=1}^{N}\left(y_i - \sum_{j=0}^{p}x_{ij}\beta_{j}\right)^2$$ where we have defined $x_{i0} = 1$ for all $i$. (These are merely the entries of the first column of the matrix $\mathbf{X}$.) Notice here we are using lowercase letters, to indicate that we are working with observed values from data. To minimize this, let us find the critical values for the components of $\boldsymbol{\beta}$. For $k = 0, 1, \dots, p$, notice that $$\dfrac{\partial\text{RSS}}{\partial\beta_k}(\boldsymbol{\beta}) = \sum_{i=1}^{N}2\left(y_i - \sum_{j=0}^{p}x_{ij}\beta_{j}\right)(-x_{ik}) = -2\sum_{i=1}^{N}\left(y_ix_{ik} - \sum_{j=0}^{p}x_{ij}x_{ik}\beta_{j}\right)\text{.}$$ Setting this equal to $0$, we get what are known as the normal equations: $$\sum_{i=1}^{N}y_ix_{ik} = \sum_{i=1}^{N}\sum_{j=0}^{p}x_{ij}x_{ik}\beta_{j}\text{.}\tag{2}$$ for $k = 0, 1, \dots, p$. This can be represented in matrix notation. For $k = 0, 1, \dots, p$, $$\begin{align*} \sum_{i=1}^{N}y_ix_{ik} &= \begin{bmatrix} x_{1k} & x_{2k} & \cdots & x_{Nk} \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_{N} \end{bmatrix} = \mathbf{c}_{k+1}^{T}\mathbf{y} \end{align*}$$ where $\mathbf{c}_\ell$ denotes the $\ell$th column of $\mathbf{X}$, $\ell = 1, \dots, p+1$. We can then represent each equation for $k = 0, 1, \dots, p$ as a matrix. 
Then $$\begin{bmatrix} \mathbf{c}_{1}^{T}\mathbf{y} \\ \mathbf{c}_{2}^{T}\mathbf{y} \\ \vdots \\ \mathbf{c}_{p+1}^{T}\mathbf{y} \end{bmatrix} = \begin{bmatrix} \mathbf{c}_{1}^{T} \\ \mathbf{c}_{2}^{T} \\ \vdots \\ \mathbf{c}_{p+1}^{T} \end{bmatrix}\mathbf{y} = \begin{bmatrix} \mathbf{c}_{1} & \mathbf{c}_{2} & \cdots & \mathbf{c}_{p+1} \end{bmatrix}^{T}\mathbf{y} = \mathbf{X}^{T}\mathbf{y}\text{.} $$ For justification of "factoring out" $\mathbf{y}$, see this link, on page 2. For the right-hand side of $(2)$ ($k = 0, 1, \dots, p$), $$\begin{align*} \sum_{i=1}^{N}\sum_{j=0}^{p}x_{ij}x_{ik}\beta_{j} &= \sum_{j=0}^{p}\left(\sum_{i=1}^{N}x_{ij}x_{ik}\right)\beta_{j} \\ &= \sum_{j=0}^{p}\left(\sum_{i=1}^{N}x_{ik}x_{ij}\right)\beta_{j} \\ &=\sum_{j=0}^{p}\begin{bmatrix} x_{1k} & x_{2k} & \cdots & x_{Nk} \end{bmatrix} \begin{bmatrix} x_{1j} \\ x_{2j} \\ \vdots \\ x_{Nj} \end{bmatrix}\beta_j \\ &= \sum_{j=0}^{p}\mathbf{c}^{T}_{k+1}\mathbf{c}_{j+1}\beta_j \\ &= \mathbf{c}^{T}_{k+1}\sum_{j=0}^{p}\mathbf{c}_{j+1}\beta_j \\ &= \mathbf{c}^{T}_{k+1}\begin{bmatrix} \mathbf{c}_{1} & \mathbf{c}_2 & \cdots & \mathbf{c}_{p+1} \end{bmatrix}\begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_p \end{bmatrix} \\ &= \mathbf{c}^{T}_{k+1}\mathbf{X}\boldsymbol{\beta}\text{.} \end{align*} $$ Bringing each case into a single matrix, we have $$\begin{bmatrix} \mathbf{c}^{T}_{1}\mathbf{X}\boldsymbol{\beta}\\ \mathbf{c}^{T}_{2}\mathbf{X}\boldsymbol{\beta}\\ \vdots \\ \mathbf{c}^{T}_{p+1}\mathbf{X}\boldsymbol{\beta}\\ \end{bmatrix} = \begin{bmatrix} \mathbf{c}^{T}_{1}\\ \mathbf{c}^{T}_{2}\\ \vdots \\ \mathbf{c}^{T}_{p+1}\\ \end{bmatrix}\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta}\text{.}$$ Thus, in matrix form, we have the normal equations as $$\mathbf{X}^{T}\mathbf{X}\boldsymbol{\beta} = \mathbf{X}^{T}\mathbf{y}\text{.}\tag{3}$$<|endoftext|> TITLE: Gaining Mathematical Maturity QUESTION [31 upvotes]: I was redirected here by a kind fellow from math.overflow. This is not a typical math question, so I apologize if that is discourteous. I am currently a sophomore in my undergraduate mathematics program. It has taken me a while to take school seriously; I was one of "those" students who just skated by without studying until Linear Algebra. However, I did not start to take my education seriously until about two months ago. I took the semester off to re-assess what I truly wanted to study and decided to follow my (difficult) passion of mathematics. I am now enrolled in summer semester at my university and am taking ODEs and Real Analysis 1. Like many students, I find analysis to be challenging yet very exciting. Sometimes, however, I catch myself getting frustrated with my own self because I feel I am not making enough progress. Have you ever felt self doubt in your career as a mathematician? How did you overcome those worries? Also, what are some good techniques or resources to advance one's skills as a undergraduate level mathematician? Thank you for taking the time to read this. Sincerely, Rebecca REPLY [13 votes]: Have you ever felt self doubt in your career as a mathematician? How did you overcome those worries? Also, what are some good techniques or resources to advance one's skills as a undergraduate level mathematician? You bet I did. I graduated with a B.S. Mathematics, Statistics emphasis two years ago, and I am now pursuing a M.S. Statistics at a top-20 school part-time while working full-time in a job that I love. I remember the first week of my Analysis I course. 
I was scared to death about the class, because all of the math majors were telling me about how difficult it was, and here I was, this actuarial science major who is taking Analysis I to raise my GPA. (I, quite frankly, didn't do so well in my actuarial science courses because I felt that I was just regurgitating facts, and that the professors didn't teach the results in generality.) Before the drop-date, I asked my professor to grade my first homework and use that to give me a recommendation on continuing in the class. He told me to keep going. I got an A in Analysis I. This was quite pleasing, considering that I was an actuarial science major. And then pure math just clicked: I got As in Abstract Algebra I, Analysis II, and Abstract Algebra II. By then, I realized that I had completely diverged from the actuarial science major, and then on my last semester, I declared the new major. How did I overcome these worries? I didn't look at math in fear, but as a challenge I could solve. You can't study math with the fear of math in advance of studying it. Techniques/Resources: Read many, many math books. I found that spending some time each day - just merely reading the textbook, not necessarily trying to understand the concepts in-depth, but just get a small idea for what's going on - helped immensely. Don't keep yourself to one math book, either, per subject. Talk to a lot of people about what you're studying. This helps you communicate mathematics with others, along with helping you learn more about how others view mathematics. I've learned a lot of techniques that I didn't learn in class that I learned from tutoring others.<|endoftext|> TITLE: Why was I wrong about the monster-gem riddler QUESTION [6 upvotes]: Every week I like to do the fivethirtyeight.com Riddler, an interesting and pleasantly challenging (at least for me) weekly math puzzle which comes out Fridays, with the answer and explanation to the previous problem being supplied when the new one is posted. I just looked at the solution to last week's problem, and while I understand the explanation given, I am at a loss as to why my approach produced an incorrect answer. The problem is as follows: You're playing a game in which you kill monsters. Each time you kill a monster, it drops a gem, which you then collect. The gems can be either common, uncommon or rare, with probabilities ${1\over2}$, ${1\over3}$, and ${1\over6}$, respectively. If you play until you have 1 gem of each type, how many common gems will you have on average? The official answer is 3.65, and the explanation is available here (scroll down past the introduction and todays puzzle-- not quite halfway down the page). My answer was 4.65 (interesting that I was off by exactly 1), and my reasoning was as follows: It's possible that you collect both other rarities before you find your first common gem. This occurs with probability $ {1 \over 3} * {1 \over 4} +{1 \over 6 }*{2\over5}={3\over20}$. In such cases you will stop after you find your first common gem, and the number you've collected (henceforth $N$) will be 1. In all other cases, play will proceed in two phases: Phase 1: you find some number $N_a$ (possibly 0) of common gems followed by a single non-common gem (call it type $a$) Phase 2: you find some number $N_b$ (possibly 0) of common gems, mixed with an irrelevant number of type $a$ gems, followed by a single gem of the other non-common type (call it type $b$). Now I make a couple of simplifying observations. 
First, since the case addressed above where the common gem was the last one we found corresponds to the case were $N_a=N_b=0$, we don't need to explicitly remove this case to avoid double-counting. It represents the situation where $N$ exceeds $N_a+N_b$ rather than a case where the 2-phase model doesn't apply at all. Second, for given types $a$ and $b$, $N_a$ and $N_b$ are independent, because the individual gem drops are independent. Third, because there are (by definition) no type $b$ gems found in phase 1 and type $a$ gems found in Phase 2 are irrelevant, it doesn't matter which non-common type (rare or uncommon) is type $a$ and which is type $b$ because in either case $N_{rare}$ (the number of common gems found in the phase terminated by a rare gem) will be a geometrically distributed random variable with $p=$ (the probability of a rare gem being dropped given that an uncommon gem is not dropped) $= {1\over4}$, and likewise for $N_{uncommon}$. Thus we have $$E[N]=E[N_{rare}]+E[N_{uncommon}]+{3\over20}=\frac{3\over5}{2\over5}+\frac{3\over4}{1\over4}+{3\over20}={3\over2}+3+{3\over20}={93\over20}=4.65$$ What's wrong with this method? REPLY [3 votes]: There's a subtle flaw in your reasoning. You're effectively conditioning on which non-common gem type comes first. Your calculation for phase $2$ is correct: The non-common gem type already seen in phase $1$ has become irrelevant, and only the relative probabilities of the other two gem types matter. But this doesn't work in phase $1$, because the length of phase $1$ is influenced by the conditioning. You can perhaps see this most clearly if you imagine the "uncommon" gem type to have probability close to $1$. Then if you condition on the rare gem type appearing before the "uncommon" gem type, you're almost certain not to get any "common" gems in phase $1$, no matter how much more probable they are than the rare gems, simply because it's so improbable that more than one gem wouldn't be "uncommon". This argument doesn't apply to phase $2$, where the gem type that's excluded really doesn't matter; even if it has probability close to $1$ and you collect thousands of that type, you can still ignore them for the purpose of calculating the number of common gems collected before encountering the third type. P.S.: It's perhaps worth mentioning here another nice and relatively simple approach, which is also mentioned after the solution you linked to. The expected total number of gems drawn is the expected total number of coupons drawn in a coupon collector's problem with unequal probabilities (see e.g. The Coupon Collector's Problem): $$ \frac1{\frac12}+\frac1{\frac13}+\frac1{\frac16}-\frac1{\frac12+\frac13}-\frac1{\frac13+\frac16}-\frac1{\frac16+\frac12}+\frac1{\frac12+\frac13+\frac16}=\frac{73}{10}\;. $$ The expected number of common gems drawn is half of this. It's a good exercise to think about why this is true, even though it's not true when you condition on the total number of gems drawn.<|endoftext|> TITLE: Is $\sin z/z$ analytic at the origin? QUESTION [5 upvotes]: For $z\in\Bbb C$ let $$ f(z) = \frac{\sin z}{z} $$ Along the real line this is well behaved, and approaches $1$ as $z\to 0$. But is $f(z)$ analytic at the origin ($z=0$)? I tried explicitly checking the Cauchy conditions but that gets ugly (unless I am missing something). 
The function I was originally interested in is $$ g(z) = z\,\sin\left( \frac{1}{z} \right) $$ which my gut tells me must not be analytic at the origin, but is the singularity essential or a pole, or the end of a branch cut or something even uglier? REPLY [4 votes]: Since $$ \sin z=\sum_{k\ge0}\frac{(-1)^k}{(2k+1)!}\,z^{2k+1} $$ we have, for $z\ne0$, $$ \frac{\sin z}{z}=\sum_{k\ge0}\frac{(-1)^k}{(2k+1)!}\,z^{2k} $$ Since this power series has radius of convergence infinite, it defines an entire function, which is an analytic continuation of $(\sin z)/z$ at $0$, where it has the value $1$. So, as written, $(\sin z)/z$ is not analytic at $0$, because it's not defined there, but $0$ is a removable singularity. For $z\sin(1/z)$ you can use the same series: $$ z\sin\frac{1}{z}=\sum_{k\ge0}\frac{(-1)^k}{(2k+1)!}\,z^{-2k} $$ which holds for $z\ne0$. This is the Laurent series for the analytic function $z\sin(1/z)$ at $0$ and it shows the singularity is essential, because the Laurent series at a pole has only a finite number of terms with $z$ having negative exponent.<|endoftext|> TITLE: Proof of Tychonoff's Theorem for an undergrad QUESTION [6 upvotes]: In the midst of learning about compactness I come across Tychonoff's Theorem: Let $\{X_i : i \in \mathcal{A}\}$ be any collection of compact spaces. Then $\displaystyle\prod_{i \in \mathcal{A}}X_i$ is compact in the product topology. I've just come from the fact that a finite product of compact spaces is compact, and I also know from studying bases of topologies that uncountable products aren't necessarily as nice (for example, the box topology has some problems for uncountable products). The proof for Tychonoff's Theorem is: Omitted (this is much harder than anything we have done here). Internet searches lead to math overflow and topics that are very outside of my comfort zone. Is there a proof of Tychonoff's Theorem for an undergrad? REPLY [6 votes]: Perhaps the most elementary proof is the one that I first encountered as a freshman, using the Alexander subbase lemma. It requires Zorn’s lemma, but it does not require knowledge of filters, ultrafilters or nets. It’s carried out completely in this PDF. (And Alexander’s result is of some interest in its own right.)<|endoftext|> TITLE: Subgroups of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ QUESTION [5 upvotes]: If I'm understanding the main theorem of (infinite) Galois theory correctly, applied to $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$, it gives us: a) all its open subgroups are $\mathrm{Gal}(\overline{\mathbb{Q}}/K)$, with $K/\mathbb{Q}$ a finite extension. b) all its closed (and not-open) subgroups are $\mathrm{Gal}(\overline{\mathbb{Q}}/K)$, with $K/\mathbb{Q}$ an infinite extension. Is this correct? I've also heard that for example $\mathrm{Gal}(\overline{\mathbb{Q}}_p/\mathbb{Q}_p)$ is a closed subgroup of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. How does this fit with the facts above? Does this mean that $\mathrm{Gal}(\overline{\mathbb{Q}}_p/\mathbb{Q}_p) \cong \mathrm{Gal}(\overline{\mathbb{Q}}/K)$ for some $K$? Also, what are its non-open subgroups? I assume this includes the Galois groups $\mathrm{Gal}(K/\mathbb{Q})$. REPLY [3 votes]: Is this correct? Yes. Infinite Galois theory provides a bijection between subfields of $\overline{\mathbb Q}$ and closed subgroups of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. 
Since every open subgroup of a topological group is closed, and the open subgroups all have finite index in $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$, both your statements are correct. How does this fit with the facts above? Does this mean that $\mathrm{Gal}(\overline{\mathbb{Q}}_p/\mathbb{Q}_p) \cong \mathrm{Gal}(\overline{\mathbb{Q}}/K)$ for some $K$? The group $\mathrm{Gal}(\overline{\mathbb Q}_p/\mathbb Q_p)$ can be viewed as a subgroup of $\mathrm{Gal}(\overline{\mathbb Q}/\mathbb Q)$ as follows. Fix an embedding of $\overline{\mathbb Q}\hookrightarrow \overline{\mathbb Q}_p$. Then there is an inclusion $$ \mathrm{Gal}(\overline{\mathbb Q}_p/\mathbb Q_p)\hookrightarrow\mathrm{Gal}(\overline{\mathbb Q}/\mathbb Q)\\\sigma\mapsto\sigma|_{\overline{\mathbb Q}}$$ Choosing an embedding $\overline{\mathbb Q}\hookrightarrow \overline{\mathbb Q}_p$ is the same as picking a prime $\mathfrak p$ of $\overline{\mathbb Q}$ lying over $p$. In particular, we can identify $ \mathrm{Gal}(\overline{\mathbb Q}_p/\mathbb Q_p)$ with the decomposition group $$D_\mathfrak p :=\{\sigma\in \mathrm{Gal}(\overline{\mathbb Q}/\mathbb Q):\sigma(\mathfrak p) = \mathfrak p\} \subset\mathrm{Gal}(\overline{\mathbb Q}/\mathbb Q).$$ Analogously to the finite situation, the fixed field $\overline{\mathbb Q}^{D_\mathfrak p}$ can be identified with the maximal extension $K$ of $\mathbb Q$ such that $\mathfrak p\cap K$ is unramified and the residue field $\mathbb F_{\mathfrak p\cap K}=\mathbb F_p$. The problem with this approach is that it is completely non-canonical and depends entirely on which prime of $\overline{\mathbb Q}$ lying above $p$ we chose. Practically, it is better to think of $\mathrm{Gal}(\overline{\mathbb{Q}}_p/\mathbb{Q}_p)$ as a subgroup of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ that is well-defined up to conjugation: analogously to the finite case, for different choices of primes $\mathfrak {p,p}'$, $D_\mathfrak p$ and $D_{\mathfrak p'}$ will be conjugate in $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. Also, what are its non-open subgroups? I assume this includes the Galois groups $\mathrm{Gal}(K/\mathbb{Q})$. There are lots of non-closed and non-open subgroups, and they are not easy to describe. However, the groups $\mathrm{Gal}(K/\mathbb{Q})$ should be seen as quotients of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$, and not as subgroups.<|endoftext|> TITLE: Elementary properties of gradient systems QUESTION [5 upvotes]: Consider $x_0\in\mathbb{R}^n$ and a $C^{1,1}$ function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ (that is, a differentiable function whose gradient is a Lipschitz function). Consider the system $$ \begin{cases} \dot{x}(t)=-\nabla f(x(t)),\\ x(0)=x_0. \end{cases} $$ Let $\gamma_{x_0}(t)$ be a trajectory (orbit) of the above system which starts at the point $x_0$ and is defined for all $t$ in the maximal interval $[0,T_{x_0})$ ($T_{x_0}\in(0,+\infty]$). I am interested in the following elementary properties of the trajectory (orbit) $\gamma_{x_0}(t)$: 1) Consider the mapping $t\mapsto\rho(t):=f(\gamma_{x_0}(t))$. Prove that if $\nabla f(x_0)\ne 0$ then $\rho(t)$ is strictly decreasing. My attempt: The derivative of $\rho$ is given by $$ \rho^{\prime}(t)=\langle\nabla f(\gamma_{x_0}(t)),\dot{\gamma}(t)\rangle=-\|\nabla f(\gamma_{x_0}(t))\|^2=-\|\dot{\gamma}(t)\|^2\leq 0. $$ Hence $\rho(t)$ is decreasing. But I do not see how to prove that $\rho(t)$ is strictly decreasing.
2) The $\omega-$limit set of $\gamma_{x_0}$ is defined by $$ \Omega(\gamma_{x_0}):=\{\gamma_\infty:\exists (t_n)\rightarrow\infty, \lim\gamma_{x_0}(t_n)=\gamma_\infty\}. $$ Prove that if $\gamma_{x_0}$ is bounded (that is, there exists a bounded set $B$ such that $\gamma_{x_0}(t)\in B$ for all $t$) then: $\Omega(\gamma_{x_0})$ is either singleton or infinite; $\nabla f(\gamma_\infty)=0$ for all $\gamma_\infty\in\Omega(\gamma_{x_0})$; $f$ is constant on $\Omega(\gamma_{x_0})$; $\text{dist}(\gamma_{x_0}(t),\Omega(\gamma_{x_0}))\leq\text{dist}(\gamma_{x_0}(t), S)$, where $$ S:=\{x:\nabla f(x)=0\}. $$ Moreover, $\text{dist}(\gamma_{x_0}(t), S)\rightarrow 0$ as $t\rightarrow\infty$. If $\gamma_{x_0}$ has finite length, that is $$ \text{length}(\gamma):=\int_{0}^{T_{x_0}}\|\dot{\gamma}(t)\| dt<+\infty$$ then $\Omega(\gamma_{x_0})$ is a singleton which is denoted by $\gamma_\infty$ and $$ \lim_{t\rightarrow\infty}\gamma_{x_0}(t)=\gamma_\infty. $$ I have tried to prove these properties but it is uneasy to prove them. I would be grateful if someone could help me to make clear all the properties. Thank you for all kind help. REPLY [4 votes]: Here are hints for parts of the questions: 1) Note that if $\nabla f(x_0) \neq 0$ then $\dot{\gamma}(t) \neq 0 $ for all $t \ge 0$. To see this, if $\dot{\gamma}(t_0) = 0$ for some $t_0 >0$, then $t \mapsto \gamma(t_0)$ and $t \mapsto \gamma(t)$ would be two solutions passing through $\gamma(t_0)$ and by uniqueness they must be the same hence a contradiction. It follows that $\dot{\rho}(t) <0$ for all $t \ge 0$. 2) Since $\gamma_{x_0}$ is bounded, it is clear that $\Omega(\gamma_{x_0})$ is non empty (the sequence $\gamma_{x_0}(n)$ must have an accumulation point). In particular, there is at least one $\omega$-limit point. It is not too difficult to show that the $\omega$-limit is closed, hence compact. Suppose $\gamma_\infty^k$ are $\omega$-limit points for $k=1,...,m$, for $m>1$. Choose $\epsilon>0$ such that the open sets $U_k=B(\gamma_\infty^k, \epsilon)$ are disjoint. Since $K=\Omega(\gamma_{x_0}) \setminus ( U_1 \cup \cdots \cup U_m)$ is non empty (since $\gamma_{x_0}$ is continuous) and compact, it follows that the trajectory must intersect $K$ infinitely often, hence $K$ contain another distinct $\omega$-limit point. It follows that there are an infinite number of $\omega$-limit points. Suppose $\gamma_\infty \in \Omega(\gamma_{x_0})$ and $\nabla f(\gamma_\infty) \neq 0$. Let $K$ be a compact set containing the trajectory $\gamma_{x_0}$. It follows that there is some $M$ such that $\|\dot{\gamma_{x_0}}(t)\| \le M$, hence $t \mapsto {\gamma_{x_0}}(t)$ is uniformly continuous. Let $\epsilon = { 1\over 2} \| \nabla f(\gamma_\infty) \|$ and choose $\delta>0$ such that if $x \in B(\gamma_\infty, \delta)$ then $\| \nabla f(x) \| > \epsilon$. There is a sequence $t_n \to \infty$ such that $\gamma_{x_0}(t_n) \in B(\gamma_\infty, {1 \over 2}\delta)$, and since $\gamma_{x_0}$ is uniformly continuous, there is some $T>0$ such that $\gamma_{x_0}(t) \in B(\gamma_\infty, \delta)$ for all $t \in [t_n, t_n+T]$. Note that $\rho(t_n+T) -\rho(t_n) \le - T\epsilon^2$. Since $t \mapsto \rho(t)$ is non increasing, we see that $\rho(t_n) \downarrow -\infty$, which is a contradiction and so $\nabla f(\gamma_\infty) = 0$. Hence $\Omega(\gamma_{x_0}) \subset S$. Since $\rho$ is non increasing, it follows that $f$ is constant on the $\omega$-limit set. 
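(As a concrete, purely numerical illustration of these properties — this sketch is not part of the hints above; the particular $f$, starting point, step size and iteration count are arbitrary choices, and this polynomial $f$ has a Lipschitz gradient only on bounded sets, which is enough for the illustration — one can integrate $\dot{x}=-\nabla f(x)$ with explicit Euler and watch $f$ decrease along the trajectory while $\nabla f$ tends to $0$:)

import numpy as np

def f(p):
    x, y = p
    return (x ** 2 - 1) ** 2 + y ** 2          # critical points at (+-1, 0) and (0, 0)

def grad_f(p):
    x, y = p
    return np.array([4 * x * (x ** 2 - 1), 2 * y])

p = np.array([0.3, 1.7])                        # the starting point x_0
h = 1e-3                                        # Euler step size
values = [f(p)]
for _ in range(20000):
    p = p - h * grad_f(p)
    values.append(f(p))

print(p, np.linalg.norm(grad_f(p)))             # p ends up near the critical point (1, 0); gradient is tiny
print(all(a >= b - 1e-12 for a, b in zip(values, values[1:])))   # f(gamma(t)) is non-increasing (up to rounding)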
It is straightforward to show that if $d(\gamma_{x_0}(t), S) \not\to 0$, then there is an $\omega$-limit point $p$ that is not in $S$. This is a contradiction. For the last point, suppose $\Omega(\gamma_{x_0})$ contains two distinct points $\gamma_1, \gamma_2$. Let $U_k = B(\gamma_k, {1 \over 3} \|\gamma_1-\gamma_2\| )$ (a diagram might help here). Let $t_n \to \infty$ be a sequence of points such that $\gamma_{x_0}(t_n) \in U_1$, and let $t_n' = \inf \{ t \ge t_n | \gamma_{x_0}(t) \in U_2 \}$. By taking a subsequence if necessary we can assume that $t_{n+1} > t_n'$. Then ${1 \over 3} \|\gamma_1-\gamma_2\| \le \|\gamma_{x_0}(t_n)- \gamma_{x_0}(t_n') \| \le \int_{t_n}^{t_n'} \| \dot{\gamma_{x_0}}(\tau)\| d \tau$. Since the intervals $[t_n,t_n']$ are disjoint we see that $\int_0^t \| \dot{\gamma_{x_0}}(\tau)\| d \tau $ is unbounded. It follows from this that if the length of the path is bounded then the $\omega$-limit set is a singleton and a compactness argument shows that the path converges to this limit.<|endoftext|> TITLE: Examples for the fact that a pullback of an epimorphism is not necessarily an epimorphism. QUESTION [6 upvotes]: I'm reading in Borceux' book Basic Category Theory about pullbacks. It turns out that the pullback of an epimorphism is not necessarily an epimorphism. On the linked page, Borceux gives a counterexample in the category Haus. I was wondering whether there also exist easy counterexamples in Ring, the category of commutative rings with a 1 element (or in Grp, Rng, etc.). I have been playing around with the embedding of $\mathbb{Z}$ in $\mathbb{Q}$, which is an epimorphism, but I couldn't find anything. REPLY [5 votes]: An example in $\mathsf{Ring}$ is the inclusion $\mathbb{C}[t]\to\mathbb{C}[t,t^{-1}]$, which is an epimorphism, but whose pullback along $\mathbb{C}[t^{-1}]\to\mathbb{C}[t,t^{-1}]$ is $\mathbb{C}\to\mathbb{C}[t^{-1}]$, which is not an epimorphism. However, every pullback of $\mathbb{Z}\to\mathbb{Q}$ is an epimorphism. There's a proof in the answer to this related question.<|endoftext|> TITLE: Can any higher-dimensional Spheres be rotated everywhere equally? QUESTION [14 upvotes]: You can rotate a circle so that every point on it (just the perimeter, not the interior) moves "equally". That is, every point moves with the same speed and even has the same "acceleration" (first-order derivative of velocity, which is the first-order derivative of a point's movement wrt time). However, there is no way to rotate a sphere with this same character: the farthest points from the axis of rotation (equator) move much faster and have greater acceleration than the points closest to the axis (poles). AFAIK, there is no closed (finite, no edges) 3-dimensional surface with this property. You can do it with a cylinder, but that is not a closed shape. You could also do it with a torus by rotating it through the center (rather than around the center) the way a smoke ring does, but that's not a true "rigid" rotation (all points maintain the same distance to all other points throughout the rotation). The points on the inner part of the ring are closer together than those on the outside. What I've been wondering is how this plays out in higher dimensions? Can the volume-surface of a 4D sphere be rigidly rotated equally like a 2D circle? Or does it fall to the same problems as 3D spheres? What about even higher dimensional spheres? For that matter, are there any higher dimensional closed shapes that can be so rotated? 
For clarity, I am only looking for "smooth" shapes, no significant discontinuities, and not just a disconnected set of points. REPLY [18 votes]: One way to make sense of what you wrote is: one can rotate odd-dimensional spheres without any fixed points. A way to do this is: if you identify $\mathbb R^{2n}$ with $\mathbb C^n$, then at time $t$ you multiply by $\exp(2\pi i t)$. On the other hand, this is impossible for every even-dimensional sphere, due to the hairy ball theorem.<|endoftext|> TITLE: To find whether $a$ is a prime number QUESTION [11 upvotes]: I have been using this rule to determine whether a number is a prime number, but not how to prove it. Why it has to be $\sqrt{a}$? If $a$ is not divisible by all the prime numbers less than or equal to $\sqrt{a}$, then $a$ is a prime number. REPLY [6 votes]: Just another point of view Suppose that $a$ is divisible by a prime $p$. This means that: $$a = p \cdot k \Rightarrow p = \frac{a}{k}.$$ Moreover, suppose that $p > \sqrt{a}$, then: $$\frac{a}{k} > \sqrt{a} \Rightarrow k < \sqrt{a}.$$ This means that I can only use the primes $p \in [\sqrt{a}, a]$ to determine if $a$ is prime, instead of the set $[0, \sqrt{a}]$. Which is the set that best suits for the algorithm? I guess the smallest, since you will have a smaller number of primes to be tested. The set $[\sqrt{a}, a]$ has length $a-\sqrt{a}$, while the set $[0, \sqrt{a}]$ has length $\sqrt{a}$. Notice that: $$\sqrt{a} < a-\sqrt{a} \Rightarrow 2\sqrt{a} < a \Rightarrow \sqrt{a} > 2 \Rightarrow a > 4.$$ In general, you want test a number $a$ bigger than $4$. For this reason, you just look to the set $[0, \sqrt{a}]$... it is just faster! Example. Take $a = 77$ and notice that $\sqrt{a} \simeq 8.78$. If you use the set $[0, \sqrt{a}]$, the you will have to check if $a$ is divisible by $3,5,7$ (only three numbers). On the other hand, if you choose the set $[\sqrt{a}, a]$, you will have to check if $a$ is divisible by $11, 13, 17, 19, 23, 29, 31, 37, ...$. Hey, this is too much, no?!<|endoftext|> TITLE: Is it circular to define the Von Neumann universe using "sets"? QUESTION [10 upvotes]: I was just reading the Wikipedia page on the Von Neumann universe, where it is stated that this universe "is often used to provide an interpretation or motivation of the axioms of ZFC." However, later on in the article, the Von Neumann universe is formally constructed using "sets," starting with "the empty set." This bothers me greatly, because as I understand it, ZFC is merely a theory describing how the "sets" in a model must behave; it is not itself a model. For example ZFC, does not specify the particular object that the "empty set" is; it merely stipulates that its models must satisfy the axiom of the empty set (through either including this axiom directly or deducing it from the other axioms), i.e., the axiom, $$ \exists x\forall y(\lnot y\in x) \text. $$ So, how is it possible to construct the Von Neumann universe from the very theory it's supposed to be a model for? Where does the definition of the empty set actually come in? I would appreciate any clarification. Update: Asaf Karagila writes here that If you want to consider the "simple" foundational approach to theories like $\sf ZFC$, then sets are primitive objects and $V$ is a given universe to begin with. The axioms of $\sf ZFC$ tell you what sort of properties $V$ and its $\in$ relation satisfy. [...] 
What the von Neumann hierarchy gives you is the understanding that if $V$ is already given, then we can write this wonderful filtration of $V$ into a very nice hierarchy. Additional theorems like the reflection theorem also tell you more about this hierarchy and its deep connection with the structure of $V$ as a universe of sets. So am I correct in understanding that the Von Neumann universe is a construction that we put together once we assume that we have some model $V$ of ZFC? REPLY [4 votes]: I know this question is old, but if I may add some new material. Yes the Von Neumann Cumulative Hierarchy as a set-theoretic Concept motivates and gives an intuition to the ZFC Axioms, as formalizing an easily conceivable notion of (Pure) Set. But no, it doesn't need to come before ZFC and no that doesn't mean either that we'd need to assume some ethereal model of ZFC before working on this: We just need, as we always actually do, some Formal System to work in: here ZFC. That we conceive it a priori as designed to formalize the Concept of Set isn't an obstacle to sharpen this conception a posteriori by seeing those Sets as layered in a "Universe" structured according to this Cumulative Hierarchy, since our selected formalization of the fuzzy initial Concept necessarily gives us this picture as a Theorem.<|endoftext|> TITLE: Using the Weierstrass M-test, show that the series converges uniformly on the given domain QUESTION [5 upvotes]: $\sum_{k \geq 0} \frac{z^k}{z^k+1}$ on the domain $\overline{D}[0, r]$, where $0 \leq r < 1$ I'm honestly not sure how to do this. My text mentions the Weierstrass M-test but the example they gave after stating it uses a completely different method (looks like a repeat of a previous example) and looks nothing like the M-test. REPLY [4 votes]: From the notation and complex analysis tag I would guess we should assume $z\in \mathbb C,|z|\le r.$ For $k>0,$ we can then say $|1+z^k|\ge 1-|z^k| = 1-|z|^k \ge 1- |z| \ge 1-r.$ Thus $$\left |\frac{z^k}{1+z^k}\right | = \frac{|z^k|}{|1+z^k|} \le \frac{r^k}{1-r}.$$ Since $\sum_{k=1}^{\infty} r^k/(1-r) < 1/(1-r)^2,$ we have uniform convergence in $|z|\le r$ by Weierstrass M.<|endoftext|> TITLE: Constructing a family of sets QUESTION [7 upvotes]: I am completely stuck at the following question. Suppose $X$ is an infinite set. Show that there is a family $\mathcal{F}$ of subsets of $X$ satisfying the following: (a) If $A \subseteq X$ is finite, then $A \in \mathcal{F}$ iff $|A|$ is even. (b) If $A, B$ are disjoint subsets of $X$, then $A \cup B \in \mathcal{F}$ iff either $A, B $ are both in $\mathcal{F}$ or $A, B$ are both not in $\mathcal{F}$. What I tried: I started by adding all even size sets to $\mathcal{F}$ and then tried to use Zorn's lemma but could not come up with a partial order to make it work. Could someone give me any hints? Thanks! REPLY [4 votes]: Let $[X]^{\lt\omega}$ be the set of all finite subsets of $X$. Let $\mathcal U$ be an ultrafilter on $[X]^{\lt\omega}$ such that $\{E\in[X]^{\lt\omega}:x\in E\}\in\mathcal U$ for each $x$ in $X$. Let's say that a statement $P(E)$ holds for almost all $E\in[X]^{\lt\omega}$ if $\{E\in[X]^{\lt\omega}:P(E)\}\in\mathcal U$. Now define $$\mathcal F=\{A\subseteq X:|A\cap E|\text{ is even for almost all }E\in[X]^{\lt\omega}\}.$$ Then (a) and (b) are satisfied. 
More generally: For any positive integer $n$ and any set $A$, there is a function $f:\mathcal P(A)\to\{0,\dots,n-1\}$ satisfying the conditions: (1) $f(X)\equiv|X|\pmod n$ for every finite set $X\subseteq A$; (2) if $X,Y\subseteq A$ and $X\cap Y=\emptyset$, then $f(X\cup Y)\equiv f(X)+f(Y)\pmod n$.<|endoftext|> TITLE: Does $\int_{0}^1\frac{\ln(1+x+x^2)}{x}\mathrm dx$ have a closed form? QUESTION [5 upvotes]: $$\int_0^1 \frac{\ln(1+x+x^2)}{x} \mathrm{d}x = 1.09662$$ I am trying to find a closed form of this integral. I think one might exist; this integral looks like it might be related to $\pi$, but I don't know. REPLY [6 votes]: First $$1+x+x^2=\frac{1-x^3}{1-x}$$ So rewrite the integrand as $$\int_{0}^{1} \frac{\ln(1-x^3)}{x}-\frac{\ln(1-x)}{x} dx.$$ But using the $u$-substitution $u=x^3$, $du=3x^2\,dx$, $$\int_{0}^{1} \frac{\ln(1-x^3)}{x}dx=\int_{0}^{1} \frac{\ln(1-u)}{3u}du$$, so this means $$\int_{0}^{1} \frac{\ln(1-u)}{3u}du-\int_{0}^{1}\frac{\ln(1-x)}{x} dx=\frac{-2}{3}\int_{0}^{1}\frac{\ln(1-x)}{x} dx.$$ Now it is well known from the Basel Problem: $$\int_{0}^{1}\frac{\ln(1-x)}{x} dx=\int_{0}^{1}\int_{0}^{1}\frac{-1}{1-xy} dydx=-\zeta(2)=\frac{-\pi^2}{6}.$$ So $$\int_{0}^{1}\frac{\ln(1+x+x^2)}{x}dx=\frac{\pi^2}{9}$$ REPLY [6 votes]: Note that $\ln(1+x+x^2)=\ln(1-x^3)-\ln(1-x)=-\sum_{n=1}^{\infty}(x^{3n}/n)+\sum_{n=1}^{\infty}(x^{n}/n)$. Change the order of integration and summation (it is valid, why?), and we have $$\begin{align}\int_0^1 \frac{\ln(1+x+x^2)}{x} \mathrm{d}x&=\int_0^1\left(-\sum_{n=1}^\infty\frac{x^{3n-1}}{n}+\sum_{n=1}^\infty\frac{x^{n-1}}{n}\right)\mathrm{d}x\\ &=-\sum_{n=1}^\infty\int_0^1\frac{x^{3n-1}}{n}\mathrm dx+\sum_{n=1}^\infty\int_0^1\frac{x^{n-1}}{n}\mathrm dx\\ &=-\sum_{n=1}^\infty\frac{1}{3n^2}+\sum_{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}9. \end{align}$$<|endoftext|> TITLE: How does induction fail in computable nonstandard models? QUESTION [10 upvotes]: Tennenbaum's theorem does not apply to Robinson Arithmetic ($Q$). There is a computable, nonstandard model of $Q$ "consisting of integer-coefficient polynomials with positive leading coefficient, plus the zero polynomial, with their usual arithmetic." What is an example of how the first order induction schema fails in a computable, nonstandard model of $Q$? Can such a model have a predicate, in the language of Q, that is only true for standard natural numbers and false for "infinite" natural numbers? Can a computable nonstandard model have overspill? Given the model above, what is an example of a statement, in the language of $Q$, that is true for the zero polynomial and all successors of the zero polynomial and false for some other polynomial in the model? REPLY [10 votes]: Here's an even simpler one: "Every number is either even or odd." That is, $$\forall x\exists y(x=y+y \mbox{ or } x=y+y+1).$$ The polynomial $x$ is a counterexample. Ignoring the specific model and focusing on the theory, it's also worth noting that Robinson arithmetic doesn't even prove that addition is commutative or that the successor of a number is different from that number! (If I recall correctly, this is covered in Burgess' book Fixing Frege - see a review at http://www.utexas.edu/cola/_files/iaa4774/On_Burgess_Fixing_Frege.pdf.) EDIT: Now that I look more closely, this is basically all covered in the Wikipedia page on Robinson arithmetic, in the section "Metamathematics"; I suggest you read it and the linked references. FURTHER EDIT: Here are even more statements, with easy induction proofs, that Robinson doesn't believe!
:P (Note that these do in fact hold in the specific model in the OP.) $\sqrt{2}$ is irrational. There are infinitely many primes. More precisely, "for every $x$ there is a prime $>x$." Unique factorization into primes. See Francois Dorais' answer to https://mathoverflow.net/questions/19857/has-decidability-got-something-to-do-with-primes. REPLY [4 votes]: Here is another example, $$\forall x(S(x)\neq x)$$ It holds for $0$, and if it holds for $x$, then it holds for $S(x)$ as well. But it is possible to have a model with an element which is its own successor.<|endoftext|> TITLE: Intersection of any set of ideals is an ideal QUESTION [9 upvotes]: Prove that the intersection of any set of Ideals of a ring is an Ideal. I'm looking for hints. Let A, B both be Ideals of a ring R. Suppose $I \equiv A\cap B$. Since A and B are both Ideals of a ring R, A and B are both Subrings of a ring R. In particular, we have that $\left ( A,+ \right ),\left ( A\setminus \left \{ 0 \right \} ,\cdot \right ),\left ( B,+ \right ),\left ( B\setminus \left \{ 0 \right \},\cdot \right )$ are Abelian. Now, Suppose $x_{1},x_{2} \in I$. I'm not entirely sure how I can justify $x_{1}+\left ( -x_{2} \right ) \in I.$ Might be overthinking this but I might have to use the fact that I is the intersection. REPLY [17 votes]: Hints, as requested: Let $I = \bigcap\limits_{j \in J} I_j$ be an intersection of ideals $I_j$ (where $J$ is an indexing set, finite or otherwise). Show that for any $x, y \in I$, $x - y \in I$. Use the fact that $x, y \in I_j$, $\forall j \in J$. Show that for any $r \in R$ and $x \in I$, $rx \in I$. Again, use the fact that $x \in I_j$, $\forall j \in J$. [If the ring is not commutative, this works only for left ideals, and the proof is similar for right ideals and two-sided ideals]. Solution (using above hints): Since $x, y \in I_j$, and $I_j$ is an ideal, $x - y \in I_j$, $\forall j \in J$. Therefore, $x - y \in \bigcap\limits_{j \in J} I_j = I$. Similarly, since $x \in I_j$, $rx \in I_j$, $\forall j \in J$. Therefore, $rx \in I$. The proof is similar for right ideals and two-sided ideals (or alternatively, a two-sided ideal is both a left ideal and a right ideal). Note: If rings are defined to have $1$, it is enough to show $x + y \in I$, since $-y = (-1)y$.<|endoftext|> TITLE: If the left Riemann sum of a function converges, is the function integrable? QUESTION [5 upvotes]: If the left Riemann sum of a function over uniform partition converges, is the function integrable? To put the question more precisely, let me borrow a few definitions first. Pardon my use of potentially non-canon definitions of convergence. Given a bounded function $f:\left[a,b\right]\to\mathbb{R}$, A partition $P$ is a set $\{x_i\}_{i=0}^{n}\subset\left[a,b\right]$ satisfying $a=x_0\leq x_1\leq\cdots\leq x_n=b$. The norm of a partition $\newcommand\norm[1]{\left\lVert#1\right\rVert}\norm{P}:=\max_{0\leq i\leq n}|x_i-x_{i-1}|$ The left Riemann sum of $f$ over partition $P$ is $l(f,P):=\sum_{i=1}^nf(x_{i-1})(x_i-x_{i-1})$ The left Riemann sum of $f$ is said to converge to $L$ iff $\newcommand\norm[1]{\left\lVert#1\right\rVert}\forall\epsilon>0, \exists\delta>0:\norm{P}<\delta$ implies $\left|l(f,P)-L\right|<\epsilon$ A uniform partition $P_n$ of $n$ divisions is defined by $x_i=a+\frac{b-a}{n}i$ The left Riemann sum of $f$ over uniform partitions is said to converge to $L$ iff $\forall\epsilon>0, \exists N\in\mathbb{N}: n\geq N\implies \left|l(f,P_n)-L\right|<\epsilon$ Now, is the following statement true? 
If the left Riemann sum of $f$ converges to $L$, $f$ is Riemann integrable and its Riemann integral $\int_a^b f$ equals $L$. In particular, I am curious whether the following limited case is true. If the left Riemann sum of $f$ over uniform partitions converges to $L$, $f$ is Riemann integrable and its Riemann integral $\int_a^b f$ equals $L$. My hunch is that the statements above are not true. But I can't come up with a counter example. Can someone give me some help here please? REPLY [5 votes]: In this context "Cauchy integral" has the meaning you know. It is a fact that if a function is bounded and Cauchy integrable over $[a,b]$, then it is also Riemann integrable over that interval. It seems that there is no elementary proof of this theorem. The proof in Kristensen, Poulsen, Reich A characterization of Riemann-Integrability, The American Mathematical Monthly, vol.69, No.6, pp. 498-505, (theorem 1), could be considered elementary because plays only with Riemann sums but is an indigestible game. Note that there exist unbounded functions Cauchy integrable. Also the use of regular partitions is enough to define Riemann integral. See Jingcheng Tong Partitions of the interval in the definition of Riemann integral, Int. Journal of Math. Educ. in Sc. and Tech. 32 (2001), 788-793 (theorem 2). I repeat that the use of only left (or right) Riemann sums with only regular partitions doesn't work.<|endoftext|> TITLE: Alice and Bob make all numbers to zero game QUESTION [7 upvotes]: Alice and Bob are playing a number game in which they write $N$ positive integers. Then the players take turns, Alice took first turn. In a turn : A player selects one of the integers, divides it by $2, 3, 4, 5$ or $6$, and then takes the floor to make it an integer again. If the integer becomes 0, it is erased from the board. The player who makes the last move wins. Assuming both play optimally, we need to predict who wins the game. Example : Let N=2 and numbers are [3,4] then alice is going to win this one. Explanation : Alice can win by selecting 4 and then dividing it by 2. The integers on the board are now [3,2]. Bob can make any choice, but Alice will always win. Bob can divide 2 by 3, 4, 5 or 6, making it 0 and removing it. Now only one integer remains on the board, 3, and Alice can just divide it by 6 to finish, and win, the game. Bob can divide 3 by 4, 5 or 6, making it 0 and removing it. Now only one integer remains on the board, 2, and Alice can just divide it by 6 to finish, and win, the game. Bob can divide 2 by 2. Now the integers are [1,3]. Alice can respond by dividing 3 by 3. The integers are now [1,1]. Now Bob has no choice but to divide 1 by 2, 3, 4, 5 or 6 and remove it (because it becomes 0). Alice can respond by dividing the remaining 1 by 2 to finish, and win, the game. Bob can divide 3 by 2 or 3. Now the integers are [1,2]. Alice can respond by dividing 2 by 2. The integers are now [1,1]. This leads to a situation as in the previous case and Alice wins. REPLY [3 votes]: Background If you're unfamiliar with Sprague-Grundy theory or the strategy for Nim, check out this community wiki collection of tutorials about them, because they're necessary for getting the answer for large $N$, and I'll be assuming familiarity with them. Since there are two players and the player who makes the last move wins, the Sprague-Grundy theorem applies, and these sorts of games combine as if they were Nim in disguise. 
But since players only divide a single number on their turn, a game is a disjunctive sum of games where $N=1$, and we can use the strategy for Nim to analyze all $N>1$ if we know the nimbers/Grundy values for $N=1$. Grundy Values for $N=1$ Claim: Suppose there is a single number $n\ge1$. Then the Grundy value of the game is: $1$ if the first base-12 digit of $n$ is $1$ (that is, if $n\in[12^k,2*12^k)$ for some $k\ge0$) $2$ if the first base-12 digit of $n$ is $2$ or $3$ (if $n\in[2*12^k,4*12^k)$ for some $k$) $3$ if the first base-12 digit of $n$ is $4$ or $5$ (if $n\in[4*12^k,6*12^k)$ for some $k$) $0$ if the first base-12 digit of $n$ is $6$ or higher (if $n\in[6*12^k,12^{k+1})$ for some $k$) Alternatively, if you prefer an ugly formula using floor, the Grundy values are given by: $$G\left(n\right)=\begin{cases} 1 & \text{if }n*12^{-\left\lfloor \log_{12}n\right\rfloor }<2\\ 2 & \text{if }2\le n*12^{-\left\lfloor \log_{12}n\right\rfloor }<4\\ 3 & \text{if }4\le n*12^{-\left\lfloor \log_{12}n\right\rfloor }<6\\ 0 & \text{otherwise} \end{cases}$$ Proof: We will prove this by induction. First, as base cases, calculate the Grundy values for $n$ from $1$ through $24$ by hand or computer (to get $1, 2, 2, 3, 3, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2$). Then we may assume $n\ge 25$, so that the five moves "divide by $2$","divide by $3$",... all yield different results. Now, assume that the claim holds for all lower $n$. We just need to verify the four cases. Case 1 If $n$ begins with $1$ in base-12, then $n$ is in the range $[12^k,2*12^k)$. So the five moves are to numbers in the following ranges: $[6*12^{k-1},12^{k})$, so Grundy value $0$ $[4*12^{k-1},8*12^{k-1})$, so Grundy value $3$ or $0$ $[3*12^{k-1},6*12^{k-1})$, so Grundy value $2$ or $3$ $[2.4*12^{k-1},4.8*12^{k-1})$, so Grundy value $2$ or $3$ $[2*12^{k-1},4*12^{k-1})$, so Grundy value $2$ By the mex rule for the Grundy values, since there is a move to a position of value $0$ and not a move to a position of value $1$, the Grundy value for $n$ is indeed $1$. Case 2 If $n$ begins with $2$ or $3$ in base-12, then $n$ is in the range $[2*12^k,4*12^k)$. So the five moves are to numbers in the following ranges: $[12^{k},2*12^{k})$, so Grundy value $1$ $[8*12^{k-1},1.33\ldots*12^{k})$, so Grundy value $0$ or $1$ $[6*12^{k-1},12^{k})$, so Grundy value $0$ $[4.8*12^{k-1},9.6*12^{k-1})$, so Grundy value $3$ or $0$ $[4*12^{k-1},8*12^{k-1})$, so Grundy value $3$ or $0$ Since there are moves to positions of value $0$ and $1$ and no move to a position of value $2$, the Grundy value for $n$ is indeed $2$. Case 3 If $n$ begins with $4$ or $5$, then $n\in[4*12^k,6*12^k)$. So the moves are to: $[2*12^{k},3*12^{k})$, so $2$ $[1.33\ldots*12^k,2*12^{k})$, so $1$ $[12^k,1.5*12^k)$, so $1$ $[9.6*12^{k-1},1.2*12^k)$, so $0$ or $1$ $[8*12^{k-1},12^k)$, so $0$ Since there are moves to $0,1,2$ but not $3$, the value of $n$ is $3$. Case 0 If $n$ begins with $6$ or more, then $n\in[6*12^k,12^{k+1})$: $[3*12^{k},6*12^k)$, so $2$ or $3$ $[2*12^{k},4*12^k)$, so $2$ $[1.5*12^k,3*12^k)$, so $1$ or $2$ $[1.2*12^k,2.4*12^k)$, so $1$ or $2$ $[12^k,6*12^k)$, so $1$ or $2$ or $3$ Since there are no winning moves to $0$, this is a losing position, and the value of $n$ is $0$. Winner for $N\ge1$ By the strategy for Nim, we can use bitwise XOR (aka "adding in base 2 without carrying") to quickly find out the Grundy value for $N>1$ from the Grundy values of the individual numbers. 
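As an aside, this whole recipe fits in a few lines of Python. The sketch below is only an illustration (the helper names are mine): it assumes the claim above, namely that the Grundy value of a single number is determined by its leading base-12 digit, and then takes the nim-sum of the individual values.

```python
def grundy(n):
    # Grundy value of a single number, per the claim above:
    # it depends only on the leading base-12 digit of n.
    while n >= 12:
        n //= 12
    if n == 1:
        return 1
    if n in (2, 3):
        return 2
    if n in (4, 5):
        return 3
    return 0  # leading digit 6, ..., 11

def alice_wins(numbers):
    # By the Sprague-Grundy theorem, XOR (nim-sum) the individual Grundy values;
    # the first player wins exactly when the nim-sum is nonzero.
    nim_sum = 0
    for n in numbers:
        nim_sum ^= grundy(n)
    return nim_sum != 0

print(alice_wins([3, 4]))           # True: Alice wins, matching the example in the question
print(alice_wins([200, 400, 800]))  # False: Bob wins, as in the walkthrough below
```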
A Grundy value of $0$ corresponds to a loss for the first player (so a win for Bob), and a positive result corresponds to a win for the first player, Alice. Example and Winning Strategy To show how this works in practice (and how the above proof yields the strategy), let's walk through the following example: Suppose $N=3$ and the numbers are $(200,400,800)$. Then in base-12, the numbers would be $142_{12},294_{12},568_{12}$, with leading digits $1,2,5$, corresponding to Grundy values $1,2,3$. To calculate the bitwise XOR, we convert to binary ($01_2,10_2,11_2$) and add without carrying, to get $00_2=0$. Since this is $0$, this must be a win for Bob, that means that Bob has a winning response to all $15$ of Alice's moves. In ever case, Alice's move will change one of the three values $1,2,3$, and the new bitwise XOR will be nonzero. Bob must move to make the bitwise XOR $0$ again. For these small values, Bob can always do this by making the new three values $0,x,x$ in some order, either by reducing a value to $0$, or matching Grundy values if Alice changes one to $0$. We can find Bob's move not by trial and error, but by looking up the needed move in one of the subcases of the proof above. If Alice divides $200$ by $2$, then the new Grundy value of $100$ is $0$. An easy response is to move the $800$ to $400$ (a number of Grundy value $2$). If Alice moves in $100$, Bob would have a local response (in that same number), and otherwise Bob can mirror moves in the two copies of $400$. If Alice divides $200$ by $3$ to yield $66$, then the new Grundy value is $3$ and Bob can counter by dividing $400$ by $4$ to yield $100$ which has value $0$. If Alice divides $200$ by $4$ to yield $50$, then the new Grundy value is still $3$ and Bob can turn $400$ into $100$. If Alice divides $200$ by $5$ to yield $40$, then the new Grundy value is $2$, and Bob can respond by dividing $800$ by $6$ to yield $133$, which has Grundy value $0$. If Alice divides $200$ by $6$ to yield $33$, then the new Grundy value is still $2$, and Bob still turn $800$ into $133$. If Alice turns $400$ into $200$, then Bob can respond by dividing $800$ by $6$ to yield $133$, which has Grundy value $0$. If Alice turns $400$ into $133$, then Bob can respond by turning $800$ into $200$. If Alice turns $400$ into $100$, then Bob can still respond by turning $800$ into $200$. If Alice turns $400$ into $80$, then the new Grundy value is $0$ and Bob can still respond by turning $800$ into $200$. If Alice turns $400$ into $66$, then Bob can respond by turning $200$ into $100$. $800\to400$? $200\to100$. $800\to266$? $266$ has Grundy value $1$ so Bob can do $400\to100$. $800\to200$? $400\to100$. $800\to160$? $160$ has Grundy value $1$ so Bob can do $400\to100$. $800\to133$? $400\to200$.<|endoftext|> TITLE: mathematics was recreated on a foundation of number concepts rather than geometrical ones QUESTION [8 upvotes]: In Richard Courant and Fritz John's book Introduction to Calculus and Analysis Volume I, says In modern times mathematics was recreated and vastly expanded on a foundation of number concepts rather than geometrical ones. Why such recreation happened ? For getting rid of the limitations of geometrical methods ? REPLY [13 votes]: Proofs using geometrical intuition and Euclid axioms were almost the norm in ancient Greek mathematics. However side by side the idea of numbers was also developing for obvious practical needs (i.e. counting and measuring). 
With the advent of algebra it became obvious that methods based on algebraic manipulation of numbers were far more powerful than the non-obvious geometrical arguments based on Euclidean axioms, and this was the main reason for the shift of focus from geometry to arithmetic/algebra. The use of algebra became so prevalent that it became fashionable to study geometry using numbers (coordinate geometry). However, as many mathematicians during 1800-1900 realized, the idea of replacing geometrical concepts with numbers was not easy to carry out. The concept of a straight line as a smooth continuum of points was very difficult to reconcile with the existing theory of numbers (which was basically a theory of numbers accessible via algebraic techniques, i.e. the numbers considered were algebraic). Only with the development of a proper theory of real numbers by Cantor, Dedekind and Weierstrass did it become possible to identify the points of a line with real numbers, and the set of real numbers could then be viewed as a sort of arithmetical continuum. All this development, by the way, had a side effect. The charm of Euclid's Elements is no longer present in the standard high school curriculum. Only the bare minimum of material based on Euclid's axioms is covered in the school syllabus, and students instead learn analytic geometry and the tools of calculus to deal with general curves. The beautiful theory of conics by Apollonius of Perga is a remarkable work based on Euclid's axioms, and sadly very few students are aware of it. I did study some of it from the book Apollonius, Treatise on Conic Sections by T. L. Heath and wrote some posts on conics.<|endoftext|> TITLE: Galois Theory. Subgroups of Galois Group QUESTION [5 upvotes]: How can I list the subgroups of the Galois group in general? I'm not interested in a specific example, but to make the discussion easier, suppose the Galois group is $G=Gal[Q(v,i):Q]$ with $$v= \sqrt[4]{2}$$ I know how to find the possible automorphisms (the elements of the Galois group), using the fact that if $a$ is a root of an irreducible polynomial then $\sigma a$ is also a root, where $\sigma$ is an automorphism, but how do I list its subgroups? And how do I know that I have listed them all and am not missing one? REPLY [3 votes]: If we know the Galois group, that means we know its structure, and in that case listing its subgroups is a problem of group theory rather than of Galois theory: as indicated in the comments, we can proceed by recognizing small subgroups of increasing size, unless the group already appears in a classification list. But it seems of interest to you to use Galois theory. For a Galois extension $K/\Bbb{Q}$ of degree $n$ we have $K = \Bbb{Q}(\alpha)$, and a relevant piece of information is that the group $Gal(K/\Bbb{Q})$ has order $n$ and is isomorphic to a transitive subgroup of $S_{n}$. So: 1) it is essential to have a primitive element $\alpha$ of $K$ and to know all its conjugates (the roots of $irr(\alpha,\Bbb{Q})$), say $\alpha = \alpha_1, \alpha_2, \dots, \alpha_n$. 2) List the divisors $m$ of $n$; to avoid the trivial subgroups, keep only $m \in \{2, \dots, n-1\}$. 3) Form the various products $P=(X-\alpha_1)(X-\alpha_{i_1})\cdots(X-\alpha_{i_{m-1}})$ of $m$ distinct factors containing $(X-\alpha_1)$, and find the field generated over $\Bbb{Q}$ by the coefficients of the polynomial $P$. 4) The Galois correspondence then determines the subgroups as required. For the given example: a primitive element is $\sqrt[4]{2} + i$, which has $8$ conjugates.
Then the Galois group is of order 8, as we may see by writing $K$ as the compositum of the real extension $\Bbb{Q}(\sqrt[4]{2})$ with $\Bbb{Q}(i)$. Moreover, it identifies with a transitive subgroup of $S_4$ (acting on the four roots of $x^4-2$), namely a 2-Sylow subgroup of $S_4$, hence a dihedral group generated by a 4-cycle and a suitable transposition. At this stage we can look at the classification list (group theory) and determine the subgroups of the dihedral group $D_4 = \{e, a, a^2, a^3, b, ba, ba^2, ba^3 \}$. They are: subgroups of order 4: $\{e, a, a^2, a^3 \}, \{e, a^2, b, ba^2 \}, \{e, a^2, ba, ba^3 \}$, all of index 2, so normal in $D_4$; subgroups of order 2: $\{e, a^2 \}, \{e, b \}, \{e, ba\},\{e, ba^2\}, \{e,ba^3 \}$; the others, of order 1 and 8, are obvious. So then, if we need to determine the intermediate extensions, the Galois correspondence is applied after choosing a suitable isomorphism between the Galois group and $D_4$. Returning to the determination of these subgroups by Galois theory, i.e. by means of the intermediate extensions of $K / \Bbb{Q}$, by the method described above: with $\alpha = \sqrt[4]{2} + i$, start from $(\alpha-i)^4=2$ to arrive at $irr(\alpha,\Bbb{Q})=X^8+4X^6+2X^4+28X^2+1$, whose roots we know: $\sqrt[4]{2} + i$, $\sqrt[4]{2} - i$, $-\sqrt[4]{2} + i$ and $-\sqrt[4]{2} - i$, $i\sqrt[4]{2} + i$, $i\sqrt[4]{2} - i$, $-i\sqrt[4]{2} + i$ and $-i\sqrt[4]{2} - i$. We form all products of two factors such that $(X-\alpha)$ divides the product, and expand them: $(X-(\sqrt[4]{2}+i))(X-(\sqrt[4]{2}-i))=X^2-2X\sqrt[4]{2}+\sqrt{2}+1$ $(X-(\sqrt[4]{2}+i))(X-(-\sqrt[4]{2}-i))=X^2-\sqrt{2}-2i\sqrt[4]{2}+1$ $(X-(\sqrt[4]{2}+i))(X-(-\sqrt[4]{2}+i))=X^2-2iX-\sqrt{2}-1$ $(X-(\sqrt[4]{2}+i))(X-(i\sqrt[4]{2}-i))=X^2-iX\sqrt[4]{2}-X\sqrt[4]{2}+i\sqrt{2}-i\sqrt[4]{2}-\sqrt[4]{2}+1$ $(X-(\sqrt[4]{2}+i))(X-(i\sqrt[4]{2}+i))=X^2-iX\sqrt[4]{2}-2iX-X\sqrt[4]{2}+i\sqrt{2}+i\sqrt[4]{2}-\sqrt[4]{2}-1$ $(X-(\sqrt[4]{2}+i))(X-(-i\sqrt[4]{2}+i))=X^2+iX\sqrt[4]{2}-2iX-X\sqrt[4]{2}-i\sqrt{2}+i\sqrt[4]{2}+\sqrt[4]{2}-1$ $(X-(\sqrt[4]{2}+i))(X-(-i\sqrt[4]{2}-i))=X^2+iX\sqrt[4]{2}-X\sqrt[4]{2}-i\sqrt{2}-i\sqrt[4]{2}+\sqrt[4]{2}+1$ and in the same way one would form all products of four factors such that $(X-\alpha)$ divides the product. We then take the field generated by the coefficients of each of these polynomials... This is the first time I do this kind of calculation; it is tiring, but we can always try. A note that can be helpful: the third polynomial generates the Galois extension $\Bbb{Q}(i,\sqrt{2})$, which corresponds to the normal subgroup $\{e,a^2\}$ of $D_4$. Sorry for my English, and good luck<|endoftext|> TITLE: open and closed sets in discrete space QUESTION [5 upvotes]: I am confused about how to determine whether a set is clopen, neither open nor closed, open but not closed, or closed but not open. I read an example from "Topology without Tears". Let $X=\{a,b,c,d,e,f\}$ and $\tau=\{X,\emptyset,\{a\},\{c,d\},\{a,c,d\},\{b,c,d,e,f\}\}$. $\tau$ is a topology on $X$. Then The set $\{a\}$ is both open and closed. The set $\{b,c\}$ is neither open nor closed. The set $\{c,d\}$ is open but not closed. The set $\{a,b,e,f\}$ is closed but not open. I still can't figure out how to tell whether a set will be open, closed, both or neither. Can anyone explain this to me? Thank you. REPLY [6 votes]: Hint: A set is open if it is in $\tau$; so, for example $\{a\}$ is open because $\{a\}\in\tau$.
A set is closed if its complement is in $\tau$; so, for example, $\{a\}$ is closed because $X\setminus\{a\}=\{b,c,d,e,f\}\in\tau$. $\tau$ is just a list of the open sets. You can get a list of the closed sets by just writing down the complements of the sets in $\tau$.<|endoftext|> TITLE: What is the average rational number? QUESTION [67 upvotes]: Let $Q=\mathbb Q \cap(0,1)= \{r_1,r_2,\ldots\}$ be the rational numbers in $(0,1)$ listed out so we can count them. Define $x_n=\frac{1}{n}\sum_{k=1}^nr_n$ to be the average of the first $n$ rational numbers from the list. Questions: What is required for $x_n$ to converge? Certainly $0< x_n < 1$ for all $n$. Does $x_n$ converge to a rational or irrational? How does the behavior of the sequence depend on the choice of list? I.e. what if we rearrange the list $\mathbb Q \cap(0,1)=\{r_{p(1)},r_{p(2)},\ldots\}$ with some one-to-one permutation $p: \mathbb N \to \mathbb N$? How does the behavior of $x_n$ depend on $p$? My thoughts: Intuitively, I feel that we might be able to choose a $p$ so that $x_n\rightarrow y$ for any $y\in[0,1]$. However, it also makes intuitive sense that, if each rational appears only once in the list, that the limit is required to be $\frac{1}{2}.$ Of course, intuition can be very misleading with infinities! If we are allowed to repeat rational numbers with arbitrary frequency (but still capturing every rational eventually), then we might be able to choose a listing so that $x_n\rightarrow y$ for any $y\in(0,\infty)$. This last point might be proved by the fact that every positive real number has a sequence of positive rationals converging to it, and every rational in that list can be expressed as a sum of positive rationals less than one. However, the averaging may complicate that idea, and I'll have to think about it more. Example I: No repetition: $$Q=\bigcup_{n=1}^\infty \bigcup_{k=1}^n \left\{\frac{k}{n+1}\right\} =\left\{\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{3}{4},\frac{1}{5},\ldots\right\}$$ in which case $x_n\rightarrow\frac{1}{2},$ a very nice and simple example. Even if we keep the non-reduced fractions and allow repetition, i.e. with $Q=\{\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\boxed{\frac{2}{4},}\frac{3}{4},\frac{1}{5},\ldots\},$ then $x_n\rightarrow\frac{1}{2}.$ The latter case is easy to prove since we have the subsequence $x_{n_k}=\frac{1}{2}$ for $n_k=\frac{k(k+1)}{2},$ and the deviations from $1/2$ decrease. The non-repetition case, I haven't proved, but simulated numerically, so there may be an error, but I figure there is an easy calculation to show whether it's correct. Example II: Consider the list generated from the Stern-Brocot tree: $$Q=\left\{\frac{1}{2},\frac{1}{3},\frac{2}{3},\frac{1}{4},\frac{2}{5},\frac{3}{5},\frac{3}{4},\ldots\right\}.$$ I'm sure this list could be studied analytically, but for now, I've just done a numerical simulation. The sequence of averages $x_n$ hits $\frac{1}{2}$ infinitely often, but may be oscillatory and hence not converge. If it converges, it does so much slower than the previous examples. It appears that $x_{2^k-1}=0.5$ for all $k$ and that between those values it comes very close to $0.44,$ e.g. $x_{95743}\approx 0.4399.$ However, my computer code is probably not very efficient, and becomes very slow past this. REPLY [55 votes]: Depending on how you order the rationals to begin with, the sequence $x_n$ could tend to anything in $[0,1]$ or could diverge. Say $y\in[0,1]$. Start with an enumeration $r_1,\dots$ of the rationals in $(0,1)$. 
When I say "choose a rational such that [whatever]" I mean you should choose the first rational currently on that list that satisfies [whatever], and then cross it off the list. Start by choosing $10$ rationals in $I_1=(y-1/10,y+1/10)$. Then choose one rational in $[0,1]\setminus I_1$. Then choose $100$ rationals in $I_2=(y-1/100,y+1/100)$, and then choose one rational in $[0,1]\setminus I_2$. Etc. First, note we have in fact defined a reordering of the original list. No rational appears in the new ordering more than once, because it is crossed off the original list the first time it is chosen. And every rational appears on the new list. In fact you can show by induction on $n$ that $r_n$ must be chosen at some stage: By induction you can assume that every $r_j$ for $j TITLE: Help to understand the generalization of the Argument Principle QUESTION [11 upvotes]: I'm reading Conway's complex analysis book and I'm trying to prove this theorem left to the reader on page 124: I tried to use integration by parts without success. I need some hint how prove this theorem. REPLY [16 votes]: You can proceed similarly as in the proof of the "ordinary" argument principle. Write $$ f(z) = \frac{\prod_{j=1}^n (z-z_j)}{\prod_{k=1}^m (z-p_k)} h(z) $$ where $h$ is holomorphic and $\ne 0$ in $G$. Now take the logarithmic derivative $$ \frac{f'(z)}{f(z)} = \sum_{j=1}^n \frac{1}{z-z_j} - \sum_{k=1}^m \frac{1}{z-p_k} + \frac{h'(z)}{h(z)} $$ and multiply with $g(z)$ $$ g(z) \frac{f'(z)}{f(z)} = \sum_{j=1}^n \frac{g(z)}{z-z_j} - \sum_{k=1}^m \frac{g(z)}{z-p_k} + g(z) \frac{h'(z)}{h(z)} \\ = \sum_{j=1}^n \frac{g(z_j)}{z-z_j} - \sum_{k=1}^m \frac{g(p_k)}{z-p_k} + \left( g(z) \frac{h'(z)}{h(z)} + \sum_{j=1}^n \frac{g(z)-g(z_j)}{z-z_j} - \sum_{k=1}^m \frac{g(z)-g(p_k)}{z-p_k} \right) \, . $$ The term in parentheses has only removable singularities in $G$, therefore $$ \tag{*} g(z) \frac{f'(z)}{f(z)} = \sum_{j=1}^n \frac{g(z_j)}{z-z_j} - \sum_{k=1}^m \frac{g(p_k)}{z-p_k} + \phi(z) $$ with an holomorphic function $\phi$ in $G$. The assertion now follows since for $a =z_1, \ldots, z_n, p_1, \ldots, p_m$ $$ \frac{1}{2 \pi i} \int_\gamma \frac{g(a)}{z-a}\, dz = g(a) \operatorname{n}(\gamma; a) $$ and $$ \frac{1}{2 \pi i} \int_\gamma \phi(z) \, dz = 0 $$ due to the Cauchy integral theorem. Remark: Another method to obtain $(*)$ would be to observe that both $$ g(z) \frac{f'(z)}{f(z)} $$ and $$ \sum_{j=1}^n \frac{g(z_j)}{z-z_j} - \sum_{k=1}^m \frac{g(p_k)}{z-p_k} $$ are holomorphic in $A$ with the only exception of (at most) simple poles at the zeros and poles of $f$, and that the principle parts at those points are the same. Therefore the difference has only removable singularities.<|endoftext|> TITLE: In an algebraic category a morphism is a regular epi iff it is surjective QUESTION [5 upvotes]: According to the nLab (see the 4th point under "Examples") in an "algebraic category" a morphism is a regular epi if and only, if it is surjective. Here a morphism $e$ is said to be surjective, if its underlying function $U(e)$ is. There is a definition (or perhaps multiple definitions?) given for the term "algebraic category". I want to prove the above theorem, where I replace "is an algebraic category" by some "reasonable" and "reasonably weak assumptions" for a concrete category $(\mathcal{A}, U : \mathcal{A}\to \mathsf{Set})$. 
Here are some ideas: Let $e : A \to E$ be a regular epi and assume, that there is an object $S$ in $\mathcal{A}$ which is free on the singleton $1$, that is: There is a map $\eta : 1 \to U(S)$ such that for all objects $A$ in $\mathcal{A}$ and maps $y : 1 \to U(A)$, there is a unique morphism $\bar{y} : S \to A$ in $\mathcal{A}$, such that $U(\bar{y}) \circ \eta = y$. Let $y\in U(E)$. Then $y$ "is" a morphism $y : 1 \to U(E)$, hence there is a corresponding morphism $\bar{y} : S\to E$. Under certain assumptions (axiom of choice?) $S$ is in fact regularly projective, in particular there exists a morphism $x : S \to A$, such that $e\circ x = \bar{y}$. Then $U(x) \circ \eta : 1\to U(A)$ "is" an element of $U(A)$ and furthermore $U(e)\circ (U(x) \circ \eta) = U(e\circ x)\circ \eta = U(\bar{y})\circ \eta = y$, hence $U(e)$ is surjective. Conversily, let $U(e)$ be surjective. Suppose $e$ has a kernel pair $(P,p_1,p_2)$ and that $U$ preserves limits ($U$ is represented by $S$). Let $q : A \to Q$ be such that $q\circ p_1 = q\circ p_2$. Then $(U(P),U(p_1),U(p_2))$ is a kernel pair of $U(e)$, because $U$ preserves kernel pairs. Furthermore $U(e)$ is surjective, hence a regular epi and therefore the coequalizer of $U(p_1),U(p_2)$, and $U(q)\circ U(p_1) = U(q)\circ U(p_2)$, so there exists a unique map $\tilde{u} : U(E) \to U(Q)$, such that $\tilde{u} \circ U(e) = U(q)$. A morphism $u : E \to Q$ with $u\circ e = q$ is unique, since then $U(u) = \tilde{u}$. Now showing that a $u$ with $U(u) = \tilde{u}$ exists remains. How can I fill the holes of this proof idea to make it an actual proof? A part of this question is, what assumption I really need, and in particular the following things are unclear: if or when $S$ is regularly projective and whether I actually need that if or when the $\tilde{u}$ can be extended to a morphism $u$ Any alternative suggestions are welcome as well. REPLY [4 votes]: A reasonable set of assumptions is that $U$ is representable by an object $s$ (the free object on one element), $U$ is conservative, and $A$ has, and $U$ preserves, reflexive coequalizers. This is notably the case if $A$ is the category of models of a Lawvere theory, or somewhat more generally the category of algebras of a monad preserving reflexive coequalizers. Conversely, by the crude monadicity theorem, with the above hypotheses (together with the hypothesis that $A$ has all coproducts), the functor $U$ is monadic, and the monad preserves reflexive coequalizers. An example where the above hypotheses don't hold is $A = \text{Top}$, where the only hypothesis that fails is conservativity. Here the regular epis are the quotient maps, but there are many surjective continuous maps that aren't quotient maps. From here the important technical fact is the following. Lemma: A conservative functor reflects any limits or colimits it preserves. Hence, with the above hypotheses, $U$ preserves and reflects kernel pairs and reflexive coequalizers, which implies that it both preserves and reflects regular epimorphisms (because the coequalizer of a kernel pair is always reflexive). Of course to apply this statement to a familiar case like groups you need a way of verifying that $U$ preserves reflexive coequalizers. A more general result you can try to prove is that the forgetful functor from models of a Lawvere theory to $\text{Set}$ preserves sifted colimits (basically because these commute with finite products in $\text{Set}$). 
As far as regular projectivity goes, $s$ is regular projective if and only if $U$ preserves regular epimorphisms more or less by definition, so that's just a restatement of the question in one direction. If you want a reference, Corollary 3.5.3 in Borceux Volume II covers the case of Lawvere theories.<|endoftext|> TITLE: Is the derivative of differentiable function $f:\mathbb{R}\to\mathbb{R}$ measurable on $\mathbb{R}$? QUESTION [9 upvotes]: Suppose we have a bounded differentiable function $f:\mathbb{R}\to[a,b]\subset\mathbb{R}$. Hence $f$ is continuous and measurable (in terms of standart Lebesgue measure) on $\mathbb{R}$. I want $f$ to have bounded measurable derivative $f'$. Derivative $f'$ doesn't have to be bounded by default, so I add additional constrain on $f$, saying that $f$ is bounded differentiable function with bounded derivative. Now do I have to explicitly postulate that $f'$ is measurable, or do I have it automatically from already mentioned conditions? REPLY [14 votes]: Assuming merely that $f$ is differentiable, $f'$ is measurable because $\{x\mid f'(x)>L\}$ is Borel for every $L$: Since we know $f$ to be differentiable everywhere, the $x$ for which $f'(x)>L$ are those where $\dfrac{f(x+1/n)-f(x)}{1/n}> L$ for all sufficiently large $n$. And once we choose an $n$, the set $$ A_n = \left\{ x \;\Big|\; \frac{f(x+1/n)-f(x)}{1/n} > L \right\} $$ is open, so $$\{x\mid f'(x)>a \} = \bigcup_{k=1}^\infty \bigcap_{n=k}^\infty A_n $$ is Borel.<|endoftext|> TITLE: Prove $\pi^2\int_0^\infty\frac{x\sin^4\pi x}{\cos\pi x+\cosh\pi x}dx=e^2\int_0^\infty\frac{x\sin^4ex}{\cos ex+\cosh ex}dx=\frac{176}{225}$ QUESTION [16 upvotes]: Marco Cantarini and Jack D'Aurizio proved hard-looking integrals (see Marco and Jack) in my recent two posts. This is our final hard-looking integral that yield a rational answer: $$\pi^2\int_{0}^{\infty}\frac{x\sin^4(x\pi)}{\cos(x\pi)+\cosh(x\pi)}dx=e^2\int_{0}^{\infty}\frac{x\sin^4(xe)}{\cos(xe)+\cosh(xe)}dx=\frac{176}{225}\tag1$$ Can anyone provide us a prove of $(1)$? REPLY [7 votes]: Taking the cue from Sophie Agnesi and doing a similar manner as my previous answer in your post, then \begin{align} S_A&=\int_{0}^{\infty}\frac{x\sin^4 x}{\cosh x+\cos x}\ dx\\[10pt] &=2\sum_{k=1}^\infty(-1)^{k-1}\int_{0}^{\infty}x\ e^{-kx}\sin^3 x\sin kx\ dx\\[10pt] &=\frac{1}{2}\sum_{k=1}^\infty(-1)^{k-1}\left[3\int_{0}^{\infty}x\ e^{-kx}\sin x\sin kx\ dx-\int_{0}^{\infty}x\ e^{-kx}\sin3 x\sin kx\ dx\right]\\[10pt] &=\frac{3}{4}-\frac{1}{4}\sum_{k=1}^\infty(-1)^{k-1}\left[\int_{0}^{\infty}x\ e^{-kx}\cos(k-3)x\ dx-\int_{0}^{\infty}x\ e^{-kx}\cos(k+3)x\ dx\right]\\[10pt] &=\frac{3}{4}-\frac{1}{4}\sum_{k=1}^\infty(-1)^{k-1}\left[ \frac{\cos\left(2\tan^{-1}\left(\frac{k-3}{k}\right)\right)}{k^{2}+(k-3)^2}-\frac{\cos\left(2\tan^{-1}\left(\frac{k+3}{k}\right)\right)}{k^{2}+(k+3)^2}\right]\\[10pt] &=\frac{3}{4}-\frac{1}{4}\left(-\frac{29}{225}\right)\\[10pt] &=\frac{176}{225} \end{align} and the claim follows. I leave the latter sum to you as a brainstorming.<|endoftext|> TITLE: A very difficult Diophantine problem $n^2 \mid 3^n+2^n+1$ QUESTION [7 upvotes]: Prove that $n=3$ is the only positive integer greater than $1$, for which$$n^2 \mid 3^n+2^n+1.$$ This is a conjecture. REPLY [4 votes]: Worth an answer. It is fairly likely that there is no known solution for this. 
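For what it is worth, the conjecture is easy to probe numerically. Here is a minimal Python sketch of such a check (the helper name and the search bound are mine and arbitrary, and of course a finite search proves nothing):

```python
def satisfies(n):
    # check whether n^2 divides 3^n + 2^n + 1, using modular exponentiation
    m = n * n
    return (pow(3, n, m) + pow(2, n, m) + 1) % m == 0

hits = [n for n in range(2, 10**5) if satisfies(n)]
print(hits)  # if the conjecture holds in this range, this prints [3]
```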
Several years ago we had some nonsense with the similar $$(n^2 - 1) | (3^n + 5^n)$$ See https://mathoverflow.net/questions/16341/on-polynomials-dividing-exponentials I admit, the question here differs in a way that may be important, as the analogous MO problem would be $$n^2 | (1 + 3^n + 5^n)$$ The same question was asked here, Find all positive integers $n$ s.t. $3^n + 5^n$ is divisible by $n^2 - 1$ Gottfried put in a good deal of effort; however, if you check his comments below his answer, he realizes that he does not have a complete proof, although there are places inside the answer that claim completeness.<|endoftext|> TITLE: Derive Hall's theorem from Tutte's theorem QUESTION [7 upvotes]: I'm trying to derive: Hall Theorem A bipartite graph G with partition (A,B) has a matching of A $\Leftrightarrow \forall S\subseteq A, |N(S)|\geq |S|$ From this: Tutte Theorem A graph G has a 1-factor $\Leftrightarrow \forall S\subseteq V(G), q(G-S)\leq |S|$ where $q()$ denotes the number of odd connected components. The idea of the proof is to suppose that Tutte's condition holds for a bipartite graph G and, by contradiction, to suppose that $\exists S\subseteq A,|N(S)|<|S|$. Now consider what happens if we remove from G the vertices of $N(S)$. All the vertices of $S$ become isolated vertices (odd components), and potentially we also have some other odd components, so we know that: $|S|\leq q(G-N(S))\leq |N(S)|$ where we apply Tutte's theorem in the second inequality. Is this a valid proof? REPLY [6 votes]: It's somewhat easier to prove the contrapositive of the above statement: the existence of a set $S$ not satisfying the Hall criterion $|S|≤|N(S)|$ implies the existence of a set $T$ not satisfying the Tutte criterion, so that $G−T$ has more than $|T|$ odd components. So, if $G$ does not meet the conditions of Hall's Theorem, then there is a vertex-set $S⊂A$ such that $|S|>|N(S)|$. Let $T=N(S)$. By the definition of a neighborhood, every neighbor of an element of $S$ lies in $T$ (and no two elements of $S$ are adjacent, since $G$ is bipartite), so every element of $S$ is an isolated vertex of $G−T$. Thus, $G−T$ contains at least $|S|$ isolated vertices, which are in themselves odd components; since $|S|>|T|$, this set violates Tutte's Theorem. Note that this is just a proof from this link, included with the sole purpose of benefiting the readers, because I believe this solution is somewhat more 'intuitive' and simpler.<|endoftext|> TITLE: Simplify $\sqrt[3]{\sqrt[3]{A}-B}$ into $\sqrt[3]{a}+\sqrt[3]{b}+\sqrt[3]{c}$ QUESTION [8 upvotes]: I'm wondering if there is a systematic way to simplify the nested cubic radicals $$\sqrt[3]{\sqrt[3]{A}-B}$$ into the denested form $$\sqrt[3]{a}+\sqrt[3]{b}+\sqrt[3]{c}$$ Examples such as $$\sqrt[3]{7\sqrt[3]{20}-1}=\sqrt[3]{\frac {16}{9}}+\sqrt[3]{\frac {100}{9}}-\sqrt[3]{\frac {5}{9}}$$ $$\sqrt[6]{7\sqrt[3]{20}-19}=\sqrt[3]{\frac {5}{3}}-\sqrt[3]{\frac {2}{3}}$$ seem to follow this rule ... REPLY [6 votes]: The answer is yes, courtesy of Ramanujan. For nested cubic radicals of the form $\sqrt[3]{\sqrt[3]{A}-B}$, denote $C =\sqrt[3]{B^3-A}$ and construct the corresponding cubic polynomial below $$R(x)=x^3 + \frac{B+2C}3x^2 - \frac{(B-C)(2B+C)}{27}x+ \frac{(B-C)^3}{729}$$ which, per Ramanujan, has the property that its roots $x_1$, $x_2$ and $x_3$ satisfy $$\sqrt[3]{x_1}+ \sqrt[3]{x_2 }+ \sqrt[3]{x_3 } =\sqrt[3]{\sqrt[3]{A}-B} $$ Thus, denesting $\sqrt[3]{\sqrt[3]{A}-B}$ reduces to solving the cubic equation $R(x)=0$, a straightforward exercise. Take the example of $\sqrt[3]{21\sqrt[3]{6}-17}$, with $B=17$ and $C= -37$.
Its denesting polynomial is $$x^3 -19x^2+6x + 216 =(x-18)(x-4)(x+3)$$ which results in $$\sqrt[3]{21\sqrt[3]{6}-17}=\sqrt[3]{18}+\sqrt[3]{4}-\sqrt[3]{3}$$ Listed below are other examples admitting rational detestation, along with their respective polynomials: $$ \begin{align} \sqrt[3]{\sqrt[3]{2}-1}&=\sqrt[3]{\frac19}-\sqrt[3]{\frac29}+\sqrt[3]{\frac49}, &\>\>\> &x^3 -\frac13 x^2-\frac2{27}x + \frac8{729}\\ \sqrt[3]{39\sqrt[3]{12}-19}&=\sqrt[3]{48}+\sqrt[3]{9}-\sqrt[3]{4}, &&x^3 -53x^2+204x + 1728\\ \sqrt[3]{7\sqrt[3]{20}-1} &=\sqrt[3]{\frac {100}9}+\sqrt[3]{\frac {16}9}-\sqrt[3]{\frac 59}, && x^3 -\frac{37}3x^2+\frac{340}{27}x + \frac{8000}{729}\\ \sqrt[3]{93\sqrt[3]{30}+19}&=\sqrt[3]{180}+\sqrt[3]{25}-\sqrt[3]{6}, &&x^3 -199 x^2+3270 x + 27000 \\ \sqrt[3]{13\sqrt[3]{70}-17}&=\sqrt[3]{\frac{245}9}+\sqrt[3]{\frac{50}9} -\sqrt[3]{\frac{28}9}, &&x^3 -\frac{89}3 x^2+\frac{1330}{27} x + \frac{343000}{729}\\ \end{align} $$<|endoftext|> TITLE: How many 6 digit numbers are possible with no digit appearing more than thrice? QUESTION [6 upvotes]: How many 6 digit numbers are possible with at most three digits repeated? My attempt: The possibilities are: A)(3,2,1) One set of three repeated digit, another set of two repeated digit and another digit (Like, 353325, 126161) B)(3,1,1,1) One set of three repeated digit, and three different digits.(Like 446394, 888764) C)(2,2,1,1) Two sets of two repeated digits and two different digits (Like, 363615, 445598) D)(2,2,2) Three sets of two repeated digits (Like, 223344, 547547) E)(2,1,1,1,1,1) One set of two repeated digit and four different digits (Like 317653, 770986) F)(1,1,1,1,1,1) Six Different digits (like 457326, 912568) G)(3,3) Two pairs of three repeated digits. Let's try to calculate each possibilities separately. F) is the easiest calculate. Let us try to workout Case E) Let's divide the case into two parts: Case E(1) Zero is not one of the digit We can choose any $5$ numbers form $9$ numbers $(1,2,3,\cdot, 9)$ in $\binom{9}{5}$ ways , the digit which one is repeated can be chosen in 5 ways, and you can permute the digits in $\frac{6!}{2!}$ ways. The total number of ways$=\binom{9}{5}\times 5\times \frac{6!}{2!} $ Case E(2) Zero is one of the digit. Case E(2)(a) Zero is the repeated digit We need to choose four other numbers which can be done in $\binom{9}{4}$ ways, the digits can be permuted in $\frac{6!}{2!}$ ways, but we need to exclude the once which starts with zero ($5!$ many). The total number of ways =$=\binom{9}{4}\times (\frac{6!}{2!} -5!)$. Case E(2)(b) Zero is not the repeated digit We need to choose four other numbers which can be done in $\binom{9}{4}$ ways, the repeated digit can be chosen in 4 ways, the digits can be permuted in $\frac{6!}{2!}$ ways, but we need to exclude the once which starts with zero ($5!$ many). The total number of ways =$=\binom{9}{4}\times 4\times (\frac{6!}{2!} -5!)$. Before, proceeding to workout the other cases, I want to know Is my attempt correct? If it is correct, it is too lengthy, is there any other way to solve this? REPLY [3 votes]: I will assume that the question means that no digit appears more than three times. As Austin Mohr points out, the question is badly phrased. Since the leading digit cannot be zero, there are $9 \cdot 10^5 = 900,000$ six digit positive integers. Like Soke, I will exclude those in which a digit appears four or more times. We consider cases. Case 1: The same digit is used six times. Since the leading digit cannot be zero, there are $9$ of these. 
They are $$111 111, 222 222, 333 333, 444 444, 555 555, 666 666, 777777, 888 888, 999 999$$ Case 2: One digit is used five times, while a different digit is used once. There are two subcases. Subcase 1: The leading digit is repeated. Since the leading digit cannot be zero, there are nine ways to select the leading digit. We must select four of the remaining five places to place the other occurrences of the leading digit. We then have nine choices for the other digit since we can now use zero. $$9 \cdot \binom{5}{4} \cdot 9 = 405$$ Subcase 2: The leading digit is not repeated. We still have nine ways of selecting the leading digit. That leaves us with nine ways to choose the repeated digit that fills the remaining five places. $$9 \cdot 9 = 81$$ Case 3: One digit is used four times, while a different digit is used twice. Subcase 1: The leading digit appears four times. We have nine ways of selecting the leading digit. We have $\binom{5}{3}$ ways of choosing the other three positions in which it appears. We have nine ways of choosing the digit that fills the two open positions. $$9 \cdot \binom{5}{3} \cdot 9 = 810$$ Subcase 2: The leading digit appears twice. We have nine ways of selecting the leading digit and $\binom{5}{1}$ ways of choosing the other position in which it appears. We have nine choices for choosing the repeated digit that fills the four open positions. $$9 \cdot \binom{5}{1} \cdot 9 = 405$$ Case 4: One digit is used four times, while two other digits are used once each. Subcase 1: The leading digit is repeated. We have nine ways of choosing the leading digit and $\binom{5}{3}$ ways of choosing the other three positions in which it appears. We have nine choices for the leftmost open position and eight choices for the remaining position. $$9 \cdot \binom{5}{3} \cdot 9 \cdot 8 = 6480$$ Subcase 2: The leading digit is not repeated. We have nine ways of choosing the leading digit. We have nine ways of choosing the repeated digit and $\binom{5}{4}$ ways of selecting four of the five open positions in which to place it. We have eight ways of filling the remaining open position. $$9 \cdot 9 \cdot \binom{5}{4} \cdot 8 = 3240$$ That gives a total of $$9 + 405 + 81 + 810 + 405 + 6480 + 3240 = 11,430$$ excluded cases. Hence, there are $$900,000 - 11,430 = 888,570$$ six-digit positive integers in which no digit appears more than three times.<|endoftext|> TITLE: All cohomology of quadrics comes from algebraic cycles QUESTION [5 upvotes]: I've read in a number of place now the statement that all cohomology of quadrics (complex projective ones) comes from algebraic cycles, but I cannot find any source for this. So I hope someone here can point me to a book/paper where this is explained. REPLY [9 votes]: Let me try to outline the proof, and you can fill in the details. We take an $n$-dimensional smooth quadric hypersurface $Q$ sitting in $\mathbf P^{n+1}$. By the Lefschetz hyperplane theorem, for $k \neq n$, the restriction maps $$H^k(\mathbf P^n, \mathbf Z) \rightarrow H^k(Q,\mathbf Z)$$ are isomorphisms, and so the cohomology groups of $Q$ in these degrees certainly come from algebraic cycles. Moreover, by the universal coefficient theorem this also shows that $H^k(X,\mathbf Z)$ is torsion-free for all $k$. 
Using the normal bundle sequence $$ T_Q \rightarrow T_{\mathbf P^n} \rightarrow N_Q $$ one computes the Euler characteristic of $Q$; together with the information from Step 1 above this shows that $$ H^n(Q,\mathbf Z) = \begin{cases} \mathbf Z \oplus \mathbf Z \quad &\mbox{if $n$ is even} \\0 \quad &\mbox{if $n$ is odd}\end{cases}$$ Now it just remains to find two non-homologous cycles of middle dimension on an even-dimensional quadric. Thinking about the case $\operatorname{dim} Q=2$, where $Q \cong \mathbf P^1 \times \mathbf P^1$, gives you a big hint about what these should be!<|endoftext|> TITLE: Linear transformation on the vector space of complex numbers over the reals that isn't a linear transformation on $\mathbb{C}^1$. QUESTION [5 upvotes]: I am having a little trouble with the following question (that is from the Linear Transformations chapter in Hoffman's / Kunze Linear Algebra): Let $\mathbb{V}$ be the set of all complex numbers regarded as a vector space over the field of real numbers (usual operations). Find a function from $\mathbb{V}$ into $\mathbb{V}$ which is a linear transformation on the above vector space, but which is not a linear transformation on $\mathbb{C}^1$, i.e., which is not complex linear. $\mathbb{C}^1$ is the set of all complex numbers regarded as a vector space over the field of complex numbers right? If then, isn't all linear transformations in $\mathbb{V}$ a linear transformation on $\mathbb{C}^1$? (Because $\mathbb{R}\subset\mathbb{C}$) I think I'm missing something fundamental here. Any help is welcome. Thanks! REPLY [10 votes]: Define $T:\mathbb{C}\to \mathbb{C}$ by $T(z)=\bar{z}$, where $\overline{x+iy}=x-iy$. Then one can check that $T$ is $\mathbb{R}$-linear, but since $$ T(i)=-i\neq i=i\cdot T(1) $$ it follows that $T$ is not $\mathbb{C}$-linear. REPLY [8 votes]: The question is asking, in other words, for a function $f: \mathbb C \to \mathbb C$ which is $\mathbb R$-linear but not $\mathbb C$-linear. In other words, $f$ should satisfy $f(x+y)=f(x)+f(y)$ and $f(cx)=cf(x)$ for all $x,y\in\mathbb C$ and $c\in\mathbb R$, but it should not satisfy $f(cx)=cf(x)$ for all $c\in\mathbb C$. REPLY [7 votes]: As a vector space over $\mathbb{C}$, $\mathbb{V}$ is a 1-dimensional vector space (with basis = $\{1\}$), and all linear maps $T: \mathbb{V} \rightarrow \mathbb{V}$ are of the form $T(z) = z_0z$, for some constant $z_0 = a + ib$. Thus, for $z = x + i y$, $$ T(z) =T(x + iy) = (a + ib)(x+iy) = (ax-by) + i(ay+bx). $$ As a vector space over $\mathbb{R}$, $\mathbb{V}$ is a 2-dimensional vector space, with basis $\{1, i\}$. The $\mathbb{C}$-linear map $T(z)$ above is also $\mathbb{R}$-linear, and its matrix representation, with respect to this basis, is $$ \begin{bmatrix}a & -b \\b&a\end{bmatrix} \;\;\;\; (*) $$ So for your example, take any linear transformation with matrix representation that has a form different from $(*)$.<|endoftext|> TITLE: How to Derive Softmax Function QUESTION [7 upvotes]: Can someone explain step by step how to to find the derivative of this softmax loss function/equation. 
\begin{equation} L_i=-log(\frac{e^{f_{y_{i}}}}{\sum_j e^{f_j}}) = -f_{y_i} + log(\sum_j e^{f_j}) \end{equation} where: \begin{equation} f = w_j*x_i \end{equation} let: \begin{equation} p = \frac{e^{f_{y_{i}}}}{\sum_j e^{f_j}} \end{equation} The code shows that the derivative of $L_i$ when $j = y_i$ is: \begin{equation} (p-1) * x_i \end{equation} and when $j \neq y_i$ the derivative is: \begin{equation} p * x_i \end{equation} It seems related to this this post, where the OP says the derivative of: \begin{equation} p_j = \frac{e^{o_j}}{\sum_k e^{o_k}} \end{equation} is: \begin{equation} \frac{\partial p_j}{\partial o_i} = p_i(1 - p_i),\quad i = j \end{equation} But I couldn't figure it out. I'm used to doing derivatives wrt to variables, but not familiar with doing derivatives wrt to indxes. REPLY [5 votes]: We have a softmax-based loss function component given by: $$L_i=-log\left(\frac{e^{f_{y_i}}}{\sum_{j=0}^ne^{f_j}}\right)$$ Where: Indexed exponent $f$ is a vector of scores obtained during classification Index $y_i$ is proper label's index where $y$ is column vector of all proper labels for training examples and $i$ is example's index Objective is to find: $$\frac{\partial L_i}{\partial f_k}$$ Let's break down $L_i$ into 2 separate expressions of a loss function component: $$L_i=-log(p_{y_i})$$ And vector of normalized probabilities: $$p_k=\frac{e^{f_{k}}}{\sum_{j=0}^ne^{f_j}}$$ Let's substitute sum: $$\sigma=\sum_{j=0}^ne^{f_j}$$ For $k={y_i}$ using quotient rule: $$\frac{\partial p_k}{\partial f_{y_i}} = \frac{e^{f_k}\sigma-e^{2f_k}}{\sigma^2}$$ For $k\neq{y_i}$ during derivation $e^{f_k}$ is treated as constant: $$\frac{\partial p_k}{\partial f_{y_i}} = \frac{-e^{f_k}e^{f_{y_i}}}{\sigma^2}$$ Going further: $$\frac{\partial L_i}{\partial p_k}=-\left(\frac {1}{p_{y_i}}\right)$$ Using chain rule for derivation: $$\frac{\partial L_i}{\partial f_k}=-\left(\frac {1}{\frac{e^{f_{k}}}{\sigma}}\right)\frac{\partial p_k}{\partial f_{y_i}}=-\left(\frac {\sigma}{{e^{f_{k}}}}\right)\frac{\partial p_k}{\partial f_{y_i}}$$ Considering $k$ and $y_i$, for $k=y_j$ after simplifications: $$\frac{\partial L_i}{\partial f_k}=\frac{e^{f_k}-\sigma}{\sigma}=\frac{e^{f_k}}{\sigma}-1=p_k-1$$ And for $k\neq y_j$: $$\frac{\partial L_i}{\partial f_k}=\frac{e^{f_k}}{\sigma}=p_k$$ These two equations can be combined using Kronecker delta: $$\frac{\partial L_i}{\partial f_k}=p_k-\delta_{ky_i}$$<|endoftext|> TITLE: Showing that $\sum_{n=0}^{\infty}\frac{2^{n+2}}{{2n\choose n}}\cdot\frac{n-1}{n+1}=(\pi-2)(\pi-4)$ QUESTION [8 upvotes]: Showing that (1) $$\sum_{n=0}^{\infty}\frac{2^{n+2}}{{2n\choose n}}\cdot\frac{n-1}{n+1}=(\pi-2)(\pi-4)$$ see here (2) $$(\arcsin(x))^2=\frac{1}{2}\sum_{n=1}^{\infty}\frac{(2x)^{2n}}{n^2{2n\choose n}}$$ Look very much similar to (1). How can I make the use of (2) to solve (1)? Let try and differentiate (1) $$\frac{d}{dx}\arcsin(x))^2=\frac{d}{dx}\frac{1}{2}\sum_{n=1}^{\infty}\frac{(2x)^{2n}}{n^2{2n\choose n}}$$ $$\arcsin(x)\cdot\frac{1}{\sqrt{1-x^2}}=2\sum_{n=1}^{\infty}\frac{(2x)^{2n-1}}{n{2n\choose n}}$$ Getting close but not near yet I am stuck, any help please? 
REPLY [4 votes]: It is worth considering that: $$ \frac{1}{\binom{2n}{n}}=\frac{n!^2}{(2n)!}=(2n+1)\cdot B(n+1,n+1)=(2n+1)\int_{0}^{1}x^n(1-x)^n\,dx \tag{1}$$ so: $$\sum_{n\geq 0}\frac{2^{n+2}}{\binom{2n}{n}}=\int_{0}^{1}\frac{4+8x(1-x)}{(1-2x(1-x))^2}\,dx = 4\int_{-1}^{1}\frac{3-y^2}{(1+y^2)^2}\,dy=8+2\pi\tag{2}$$ as well as: $$\begin{eqnarray*} \sum_{n\geq 0}\frac{2^{n+2}}{(n+1)\binom{2n}{n}}&=&4\int_{0}^{1}\int_{0}^{1}\frac{1+2xy-2x^2y}{(1-2xy+2x^2 y)^2}\,dx\,dy \\&=&8\int_{0}^{1}\frac{dx}{1-2x(1-x)}+2\int_{0}^{1}\frac{\log(1-2x(1-x))}{x(1-x)}\tag{3}\end{eqnarray*}$$ that is easy to compute through the substitution $y=2x-1$ as in $(2)$.<|endoftext|> TITLE: A continuous onto function from $[0,1)$ to $(-1,1)$ QUESTION [5 upvotes]: How I can construct a continuous onto function from $[0,1)$ to $(-1,1)$ ? I know that such a function exists and also I have a function $\displaystyle f(x)=x^2\sin\frac{1}{1-x}$ which is continuous and onto from $[0,1)$ to $(-1,1)$. But I don't know how I can construct this type of function. There are many function which is continuous onto from $[0,1)$ to $(-1,1)$. I can construct a continuous onto function from $(0,1)$ to $(-1,1)$. But when the domain interval is left-closed then how I can construct ? Please give the idea to construct the function. REPLY [7 votes]: I suggest to start by drawing: draw the bounding box $[0,1)\times (-1,1)$, place your pencil at $(0,0)$, and trace different functions. The following are observations and hints, not logical proofs. A first idea is that, since you want a continuous function, a monotonous one might be troublesome in mapping a semi-open interval like $[0,1)$ to an open one like $(-1,1)$. You can find one location in $[0,1)$ where $f(x)$ approaches $ 1$ and another location in $[0,1)$ where $f(x)$ approaches $ -1$. If these locations are in some $[\epsilon,1-\epsilon]$ ($\epsilon >0$), continuity might impose that the values $-1$ or $1$ will be reached strictly inside $[0,1)$, which you do not want. One remaining choice is that values $y=-1$ and $y=1$ are both approached on an open end like $\left[.~ 1\right)$. So you might need a function that oscillates infinitely often close to $x=1$. In other word, it can be wise to open $[0,1)$ to something like $[0,+\infty)$. To summarize, three main ingredients ingredients could be useful, in the shape of a fellowship of continuous functions that you could compose. Many choices are possible, here is one (borrowing from Tolkien's Ring poem): three functions for the unit-interval $[0,1)$ up to the sky $[0,+\infty)$ (hereafter $f_0$, $f_1$, $f_\phi$), one for the oscillation lords in their infinite $[0,+\infty)$ hall of sound (hereafter $f_2$), one function to bring them all and in $]-1,1[$ bind them (hereafter $f_3$), in the Land of functions where continuity lies (indeed, continuous functions tend to cast continuous shadows to connected intervals). The first ingredient $f_1(x)$ is easy with functions you know, defined on some $[a,b[$ with a singularity at $b$. It is easy to remap $[a,b[$ to $[0,1[$, so let us stick to that interval $[0,1[$. Examples: $\frac{1}{1-x}$, $-\log (1-x)$, $\tan (\pi x/2)$, and many more. If you want more flexibility, you can start with any function $f_0$ that maps $[0,1[$ to $[0,1[$: $x^p$ with $p>0$, $\sin(\pi x/2)$, $\log(1+x(e-1))$. For the second ingredient $f_2$ on $[0,+\infty)$, the sine is a nice pick, and a lot of combinations of sines and cosines, like a chirping sound. But you have a lot of fancy alternatives. 
And you can easily plug in a function $f_\phi$ that maps $[0,+\infty)$ to $[0,+\infty)$ (for instance $\exp(x)-1$, $x^p$). The choice of $f_2$ is possibly the most sensitive, since you will need to strictly bound it afterward inside $ (−1,1)$, therefore you will need a third ingredient: a function $f_3$ that compensates (as a product for instance) the envelope of $f_2$ so that the result does not exceed $1$ in absolute value. So for the sine, $x^p$ or $\frac{\exp{x}-1}{e-1}$ will do the job. A function whose magnitude is strictly less than $1$, and tends to $1$ as $x\to 1$ is likely to work. Finally, a function can be obtained by composing $f(x)= f_3(x)\times f_2\left( f_\phi\left( f_1\left( f_0\left( x\right)\right)\right)\right)$. In your case, you have for instance $f_0(x)=x$, $f_1(x)=\frac{1}{1-x}$, $f_\phi(x)=x$, $f_2(x)=\sin(x)$, $f_3(x)=x^2$. While not fully generic, you can cook of lot of recipes with those ingredients. For instance (see image below):$$f(x)=\sin(\pi x^7/2)\sin \left( \tan \left(\pi \sqrt{x}/2\right)\right)\,.$$<|endoftext|> TITLE: Any ideal of a field $F$ is $0$ or $F$ itself QUESTION [5 upvotes]: Prove that the only ideals of a field are $\left\{ 0 \right\}$ and the field itself. Let $F$ be a field and $I$ be an Ideal of $F$. Let $0 \ne x \in I$. Since $I$ is an Ideal of $F$, it is true that $I \subseteq F$ and so $x \in F$. $F$ is a field so there exists an element, say, $x^{-1}$ s.t $x \cdot x^{-1} = e = 1 \in F$. Any hints to keep me going? Thanks in advance. REPLY [5 votes]: Hint: Ideals are closed under external multiplication. For all $r \in F$, $x \in I$, $rx \in I$. Just apply this to the elements you obtained. Complete Solution: Let $I$ be any non-zero ideal of a field $F$. Then, $I$ contains at least one non-zero element $x$. Since $F$ is a field, $x^{-1} \in F$, which implies that $x^{-1} x = 1 \in I$ (since $I$ is an ideal of $F$). Therefore, $I = F$. Appendix: In any ring $R$, $\{0\}$ and $R$ are always ideals: $\{0\}$ and $(R, +)$ are always subgroups of $(R, +)$. For closure under external multiplication, observe that $\forall r \in R, r \cdot 0 = 0 \cdot r = 0 \in \{0\}$, and $\forall r \in R$ (the ring) and $s \in R$ (the ideal), $rs \in R$ (the ring and the ideal!). An ideal $I$ of a unital ring $R$ is a proper ideal of $R$ if and only if $1 \not\in I$. If $1 \not\in I$, $I$ is clearly a proper ideal (given that it is an ideal). Conversely, if $1 \in I$, then $\forall r \in R$, $r \cdot 1 = 1 \cdot r = r \in I$, which implies that $I = R$.<|endoftext|> TITLE: Probabilities ant cube QUESTION [5 upvotes]: I have attached a picture of the cube in the question. An ant moves along the edges of the cube always starting at $A$ and never repeating an edge. This defines a trail of edges. For example, $ABFE$ and $ABCDAE$ are trails, but $ABCB$ is not a trail. The number of edges in a trail is known as its length. At each vertex, the ant must proceed along one of the edges that has not yet been traced, if there is one. If there is a choice of untraced edges, the following probabilities for taking each of them apply. If only one edge at a vertex has been traced and that edge is vertical, then the probability of the ant taking each horizontal edge is $\frac12$. If only one edge at a vertex has been traced and that edge is horizontal, then the probability of the ant taking the vertical edge is $\frac23$ and the probability of the ant taking the horizontal edge is $\frac13$. 
If no edge at a vertex has been traced, then the probability of the ant taking the vertical edge is $\frac23$ and the probability of the ant taking each of the horizontal edges is $\frac16$. In your solutions to the following problems use exact fractions not decimals. a) If the ant moves from $A$ to $D$, what is the probability it will then move to $H$? If the ant moves from $A$ to $E$, what is the probability it will then move to $H$? My answer: $A$ to $D$ then to $H = \dfrac23$ $A$ to $E$ then to $H = \dfrac12$ b) What is the probability the ant takes the trail $ABCG$? My answer: Multiply the probabilities: $$\frac16\times\frac13\times\frac23 = \frac1{27}$$ c) Find two trails of length $3$ from $A$ to $G$ that have probabilities of being traced by the ant that are different to each other and to the probability for the trail $ABCG$. My answer: $$\begin{align} ABFG&=\frac16\times\frac23\times\frac12=\frac1{18}\\[5pt] AEHG&=\frac23\times\frac12\times\frac13=\frac19 \end{align}$$ d) What is the probability that the ant will trace a trail of length $3$ from $A$ to $G$? I don't know how to do d). Do I just multiply every single probability? Also, could you please check to see if I have done the a) to c) correctly? I am not completely sure if this is the correct application of the multiplicative principle. REPLY [3 votes]: Your answers for a) to c) are correct (except for your loose use of the equals sign). For d), note that any path of length $3$ to G will contain exactly two horizontal steps and one vertical step, the vertical step can come at any of the three steps, the probabilities are fully determined by when the vertical step comes, and all trails are mutually exclusive events. You've already determined the probabilities for the three types of trail, so now you just need to count how many of each there are and add up the probabilities multiplied by those multiplicities.<|endoftext|> TITLE: Verify $y=x^aZ_p\left(bx^c\right)$ is a solution to $y''+\left(\frac{1-2a}{x}\right)y'+\left[(bcx^{c-1})^2+\frac{a^2-p^2c^2}{x^2}\right]y=0$ QUESTION [7 upvotes]: In order for the question that I have to make any sense I must first include some background information as given in my textbook: The standard form of Bessel's differential equation is $$x^2y^{\prime\prime}+xy^{\prime} + (x^2 - p^2)y=0\tag{1}$$ where $(1)$ has a first solution given by $$\fbox{$J_p(x)=\sum_{n=0}^\infty\frac{(-1)^n}{\Gamma(n+1)\Gamma(n+1+p)}\left(\frac{x}{2}\right)^{2n+p}$}\tag{2}$$ and a second solution given by $$\fbox{$J_{-p}(x)=\sum_{n=0}^\infty\frac{(-1)^n}{\Gamma(n+1)\Gamma(n+1-p)}\left(\frac{x}{2}\right)^{2n-p}$}\tag{3}$$ where $J_p(x)$ is called the Bessel function of the first kind of order $p$. Although $J_{−p}(x)$ is a satisfactory second solution when $p$ is not an integer, it is customary to use a linear combination of $J_p(x)$ and $J_{−p}(x)$ as the second solution. Any combination of $J_p(x)$ and $J_{−p}(x)$ is a satisfactory second solution of Bessel’s equation. The combination which is used is called the Neumann (or the Weber) function and is denoted by $N_p(x)$ where $$N_p(x)=\frac{\cos(\pi p)J_p(x)-J_{-p}(x)}{\sin(\pi p)}\tag{4}$$ Full details on the derivation of $(2)$ as a solution to $(1)$ can be found here in my previous question. Many differential equations occur in practice that are not of the standard form $(1)$ but whose solutions can be written in terms of Bessel functions. 
It can be shown that the differential equation: $$\fbox{$y^{\prime\prime}+\left(\frac{1-2a}{x}\right)y^{\prime}+\left[\left(bcx^{c-1}\right)^2+\frac{a^2-p^2c^2}{x^2}\right]y=0$}\tag{5}$$ has the solution $$\fbox{$y=x^aZ_p\left(bx^c\right)$}\tag{6}$$ where $Z_p$ stands for $J_p$ or $N_p$ or any linear combination of them, and $a,b,c,p$ are constants. To see how to use this, let us “solve” the differential equation: $$y^{\prime\prime}+9xy=0\tag{7}$$ If $(7)$ is of the type $(5)$, then we must have $$1-2a=0$$ $$2(c-1)=1$$ $$(bc)^2=9$$ $$a^2-p^2c^2=0$$ from these $4$ equations we find $$a=\dfrac12$$ $$c=\dfrac32$$ $$b=2$$ $$p=\dfrac{a}{c}=\dfrac13$$ Then the solution of $(7)$ is $$y=x^{1/2}Z_{1/3}\left(2x^{3/2}\right)\tag{8}$$ This means that the general solution of $(7)$ is $$y=x^{1/2}\left[AJ_{1/3}\left(2x^{3/2}\right)+BN_{1/3}\left(2x^{3/2}\right)\right]\tag{9}$$ where $A$ and $B$ are arbitrary constants. Finally,$\color{#180}{\text{ my goal is to show that }}$${(6)}$ $\color{#180}{\text{is a solution to }}$$(5)$. However to gain some insight, I must first be able to show that $(8)$ or $(9)$ is a solution to $(7)$. So my attempt goes as follows: So I need to compute $y^{\prime\prime}$ or at least to begin with, $y^{\prime}$; It is at this point where I am immediately stuck as I do not understand how to differentiate $$y=x^{1/2}Z_{1/3}\left(2x^{3/2}\right)\tag{8}$$ as I'm confused as to how to take the derivative of the $Z_{1/3}\left(2x^{3/2}\right)$ factor. Could someone please provide some hints or advice on how I would go about carrying out this differentiation? REPLY [2 votes]: I will here show how one could go about to derive the solution of your ODE without knowing a priori what the answer should be. Your ODE $(5)$ can after multiplication by $x^2$ be written $$x^2y^{\prime\prime}(x)+x(1-2a)y^{\prime}(x)+\left[\left(bcx^{c}\right)^2+a^2-p^2c^2\right]y(x)=0$$ This looks very close to the Bessel ODE so we will try to see if we can bring it closer to that form. The term with $x^c$ on the right-hand side is troublesome so lets try to simplify this bringing it closer to the $x^2$ form we have in the Bessel ODE. This motivates a change of variables $x\to x^c$. Another reason for why this is a natural thing to try is the fact that the derivatives are all on the (logarithmic) form $x^ny^{(n)}(x)$. This means that a change of variables $x\to x^n$ will not change the basic structure of the derivative-terms in the ODE - just the coefficient in front of them. We therefore take $z = x^c$ and compute $$x\frac{d}{dx} = x\frac{dz}{dx}\frac{d}{dz} = cz\frac{d}{dz}$$ $$x^2\frac{d^2}{dx^2} = c^2 z^2\frac{d^2}{dz^2} + c(c-1) z \frac{d}{dz}$$ to get that the ODE transforms into $$z^2 y''(z) + z\left(1-\frac{2a}{c}\right)y'(z) + [(bz)^2 + \frac{a^2}{c^2} - p^2]y(z) = 0$$ where a prime is now a differential with respect to $z$. The $z^2$ term is now close to the desired form and we just need a simple scaling of the variables $w = bz$ to get it right $$w^2 y''(w) + w\left(1-\frac{2a}{c}\right)y'(w) + [w^2 + \frac{a^2}{c^2} - p^2]y(w) = 0$$ Now this is starting to look more and more like the Bessel ODE, however we still have the troublesome $-\frac{2a}{c}$ term. There is no simple change of variables that can solve this, but a transformation $y(w) = w^n g(w)$ could potentially work as this has the effect of modifying the relative coefficents of the $w^nf^{(n)}(w)$ terms. 
Another way that could have led us to try this ansatz is the fact that if we did not have the $w^2y(w)$ term in the ODE then $y(w) = w^{\frac{a}{c} \pm p}$ would be a solution to the ODE so it would be natural to try to factor out the power-law growth by trying an ansatz of the form $y(w) = w^n g(w)$. Substituting $y(w) = w^n g(w)$ into the ODE above leads to $$w^2g''(w) + \left(1-\frac{2a}{c} + 2n\right)wg'(w) + \left[w^2 + \frac{a^2}{c^2} -2n\frac{a}{c} + n^2 - p^2\right]g(w) = 0$$ We see that the choice $n = \frac{a}{c}$ does the job and brings the term $wg'(w)$ to the desired form (relative to $w^2g''(w)$) and as a bonus the last term also happens to simplify for this choice of $n$ leaving us with $$w^2g''(w) + wg'(w) + \left[w^2 - p^2\right]g(w) = 0$$ which is exactly the ODE for the Bessel functions. Backtracking our steps we get the solution in terms of the original variables $$f(x) = w^nZ_p(w) = (bz)^nZ_p(bz) = b^{\frac{a}{c}} x^{\frac{a}{c}\cdot c} Z_p(bx^c) \propto x^a Z_p(bx^c)$$<|endoftext|> TITLE: Intuitionistic logic plus $A → B \lor C \vdash ( A → B ) \lor ( A → C )$ QUESTION [12 upvotes]: The following is a classically valid deduction for any propositions $A,B,C$. $\def\imp{\rightarrow}$ $A \imp B \lor C \vdash ( A \imp B ) \lor ( A \imp C )$. But I'm quite sure it isn't intuitionistically valid, although I don't know how to prove it, which is my first question. If my conjecture is true, my next question is what happens if we add this rule to intuitionistic logic. I do not think we will get classical logic. Is my guess right? [Edit: The user who came to close and downvote this 4 years after I asked this presumably did not see my comment: "The BHK interpretation suggests to me that it isn't intuitionistically valid, but that's just my intuition...". If you are not familiar with Kripke frames (and I was not at that time), good luck trying to figure out what to try!] REPLY [2 votes]: It's interesting to me that different people's intuitions were so different on this. Different models are helpful with different examples, and nobody in the thread mentioned the connection between intuitionistic logic and programming language semantics, which for this example I found very helpful. In the programming language model, $$(A\to(B\lor C))\to((A\to B)\lor(A\to C))$$ is the type of a function $f$ which is given a value $in$ of type $$A\to(B\lor C)$$ and which transforms it into a value $out$ of type $$(A\to B)\lor(A\to C).$$ How might we implement such a function $f$? An experienced computer programmer will think something like this: I would need to produce a function that turns $A$ into $B$. I can't do that because all I have is a function that turns $A$ into $B\lor C$ and it might not give me the $B$ I need. That's the intuition. The intuition is a shortcut for the following thought process. In code, our function $f$ will look like this: def f in = out where -- `in` has type A → (B ∨ C) out = ??? -- `out` should have type (A → B) ∨ (A → C) $\def\lt{\mathtt{Left}\ }\def\rt{\mathtt{Right}\ }$ A value of type $(A\to B)\lor(A\to C)$ must have the form $\lt out_L$ where $out_L$ has type $A\to B$, or $\rt out_R$ where $out_R$ has type $A\to C$. Those are the only two choices. Let's try the first one: def f in = out where -- `in` has type A → (B ∨ C) out = Left outL -- `outL` should have type A → B Since outL has type $A\to B$, it is a function that takes a value of type $A$ and turns it into a value of type $B$: def f in = out where -- `in` has type A → (B ∨ C) out = Left outL outL a = ???
-- we want to produce a value of type B Where is $out_L$ going to get a value of type $B$? The only tools it has available are $in$ (of type $A\to(B\lor C)$) and $a$ (of type $A$) and there is only one thing it can do with these: it must give the argument $a$ to the function $in$, producing a result of type $B\lor C$: def f in = out where -- `in` has type A → (B ∨ C) out = Left outL outL a = ?? in a ?? -- `in a` has type B ∨ C -- we want to produce a value of type B The in a expression produces a value of type $B\lor C$. A value of type $B\lor C$ looks either like $\lt b$ or like $\rt c$, so we need a case expression to handle the two cases. def f in = out where -- `in` has type A → (B ∨ C) out = Left outL outL a = case in a of -- we want to produce a value of type B Left b -> ??? Right c -> ??? In the $\lt b$ case it's easy to see how to proceed: just return the $b$: def f in = out where -- `in` has type A → (B ∨ C) out = Left outL outL a = case in a of -- we want to produce a value of type B Left b -> b -- … and we did! Right c -> ??? In the $\rt c$ case there's no way to proceed, because $out_L$ has available a value $c$ of type $C$, but it still needs to produce a value of type $B$ and it has no way to get one. There is no way to complete the implementation. Perhaps we made a mistake back at the beginning when we chose to define out = Left outL? Would out = Right outR work better? No, the problem will be exactly the same: def f in = out where -- `in` has type A → (B ∨ C) out = Right outR outR a = case in a of -- we want to produce a value of type C Left b -> ??? -- … but we don't have one Right c -> c This exhausts the possibilities. There is no way to implement $f$. The points I want to make here are not only that programmers can and do acquire this sort of intuition for what can't be implemented and why not, but also that they can convert the intuition into an argument that doesn't rely on the intuition alone. Once the intuition is turned a detailed argument, the Curry-Howard correspondence can be used to convert the failed implementation directly to an LJ sequent calculus proof or analytic tableau proof that the statement is invalid. [ Addendum: Note by the way that the reverse implication, which is intuitionistically valid, is easy to prove and just as easy to construct a program: def fINV out a = -- fINV has type ((A→B)∨(A→C))→(A→(B∨C)) case out of Left outL -> Left (outL a) -- outL has type A→B Right outR -> Right (outR a) -- outR has type A→C and again the Curry-Howard correspondence converts the program into a proof or vice-versa. ]<|endoftext|> TITLE: Upper Numbering of Ramification Groups of Absolute Galois Groups for Totally Ramified Extensions QUESTION [5 upvotes]: Suppose $K'/K$ is a totally ramified extension of $p$-adic fields of degree $e.$ A paper (p.9, line 15) I am reading seems to use the following formula for the upper numbering on the absolute galois groups $$G_{K'}^{eu}=G_{K}^{u}$$ for $u>0.$ They quote Proposition 15, Chapter IV of Serre's Local Fields for this fact, which says that the Herbrand function $\varphi_{F/L}$ and its inverse $\psi_{F/L}$ satisfy transitivity formulas $$\varphi_{F/L}=\varphi_{L'/L}\circ \varphi_{F/L'}\text{ and }\psi_{F/L}=\psi_{F/L'}\circ\psi_{L'/F}$$ for an extension of fields $F/ L' / L.$ How does the formula above follow from this proposition? Or is the formula not true in general (quite possibly I am missing some subtlety that is making this formula work in the paper)? 
REPLY [5 votes]: The extension $K'\supset K$ is tamely ramified in the paper you’re reading. But in the case of tame ramification, everything is concentrated at the origin of the Herbrand graph. That is, $\varphi^{K'}_K(x)=x/e$ for $x\ge0$. All follows from that.<|endoftext|> TITLE: Prove that $\sin x \cdot \sin (2x) \cdot \sin(3x) < \tfrac{9}{16}$ for all $x$ QUESTION [13 upvotes]: Prove that $$ \sin (x) \cdot \sin (2x) \cdot \sin(3x) < \dfrac{9}{16} \quad \forall \ x \in \mathbb{R}$$ I thought about using derivatives, but it would be too lengthy. Any help will be appreciated. Thanks. REPLY [11 votes]: One has $$2\sin x\sin(3x)=\cos(2x)-\cos(4x)\ ,$$ and therefore $$2\sin x\sin(2x)\sin(3x)=(1+u-2u^2)\sqrt{1-u^2}=:f(u)\qquad(-1\leq u\leq1)\ ,$$ where we have put $\cos(2x)=:u$. One computes $$\sqrt{1-u^2}f'(u)=6u^3-2u^2-5u+1$$ with zeros at $$u\in\left\{1, -{\sqrt{10}+2\over6},{\sqrt{10}-2\over6}\right\}\ .$$ The third of these leads to the maximal value of $f(u)$, which we then have to divide by $2$. The result can be simplified to $$\sin x\sin(2x)\sin(3x)\leq{68+5\sqrt{10}\over108\sqrt{2}}\doteq0.548737<{9\over16}\ .$$<|endoftext|> TITLE: Fixed points and cardinal exponentiation QUESTION [5 upvotes]: Let the function $F: On \rightarrow On$ be defined by the following recursion: $F(0) = \aleph_0$ $F(\alpha+1) = 2^{F(\alpha)}$ (cardinal exponentiation) $F(\lambda) = \sup\{F(\alpha):\alpha \in \lambda\}$ for $\lambda$ a limit ordinal Prove that there is a fixed point for $F$, i.e. an ordinal $\kappa$ with $F(\kappa) = \kappa$. Are such fixed points always cardinals? Thoughts: So I can see that such a fixed point is going to have to be for a limit ordinal, since the function is strictly increasing for successor ordinals. $F(\lambda) = \sup\{\aleph_{\alpha}: \alpha \in \lambda\}$ I feel as if $\aleph_{\omega}$ might be a fixed point and suspect that any fixed points have to be cardinals, but I don’t have a justification for either. I’m not sure how to go about proving a fixed point exists and whether it has to always be a cardinal. REPLY [3 votes]: This is a consequence of a very useful lemma whose general proof is almost as simple as providing a fixed point in this special case. Let's say that a function $F \colon \operatorname{On} \to \operatorname{On}$ is normal iff it is strictly increasing (i.e. $\alpha < \beta$ implies $F(\alpha) < F(\beta)$) and continuous (i.e. $F(\lambda) = \sup_{\alpha < \lambda} F(\alpha)$ for all limit ordinals $\lambda$). The 'Normal Function Lemma' states: Let $F \colon \operatorname{On} \to \operatorname{On}$ be a normal function. Then the class $F' := \{ \alpha \in \operatorname{On} \mid F(\alpha) = \alpha \}$ is closed (i.e. if $(\alpha_{\xi} \mid \xi < \theta)$ is a strictly increasing sequence of $F$-fixed points, then $\alpha := \sup_{\xi < \theta} \alpha_{\xi}$ is an $F$-fixed point) and unbounded (i.e. for all $\alpha \in \operatorname{On}$ there is some ordinal $\alpha < \beta$ such that $F(\beta) = \beta$). Moreover, for any regular ordinal (recall that regular ordinals are cardinals) $\omega \le \lambda$ there are arbitrarily large fixed points of $F$ of cofinality $\lambda$. Proof. 
If $(\alpha_{\xi} \mid \xi < \theta)$ is a strictly increasing sequence in $F'$, then $$F(\sup_{\xi < \theta} \alpha_{\xi}) = \sup_{\xi < \theta} F(\alpha_{\xi}) = \sup_{\xi < \theta} \alpha_{\xi}.$$ It thus suffices to prove that for any regular cardinal $\lambda \geq \omega$ and any $\alpha_{0} \in \operatorname{On}$ there is some $\alpha_{0} < \alpha$ such that $F(\alpha) = \alpha$ and $\alpha$ has cofinality $\lambda$: Recursively construct a sequence $(\alpha_{\xi} \mid \xi < \lambda)$ as follows. $\alpha_{0}$ is as above and given $\alpha_{\xi}$, we let $\alpha_{\xi+1} := F(\alpha_{\xi})$. If $\theta < \lambda$ is a limit ordinal and $(\alpha_{\xi} \mid \xi < \theta)$ has already been constructed, let $\alpha_{\theta} := \sup_{\xi < \theta} \alpha_{\xi}$. Finally, let $\alpha := \sup_{\xi < \lambda} \alpha_{\xi}$. Since $(\alpha_{\xi} \mid \xi < \lambda)$ is strictly increasing, we have that $\operatorname{cf}(\alpha) = \operatorname{cf}(\lambda) = \lambda$ and furthermore, by the construction of $(\alpha_{\xi} \mid \xi < \lambda)$, $$ \begin{eqnarray*} F(\alpha) &= \sup_{\beta < \alpha} F(\beta) \\ &= \sup_{\xi < \lambda} F(\alpha_{\xi}) \\ &= \sup_{\xi < \lambda} \alpha_{\xi +1} \\ &= \alpha. \end{eqnarray*} $$ Since $\alpha_{0} < \alpha_{1} < \alpha$, the claim follows. QED In your case, the range of $F$ constists only of cardinals (which is trivial for $F(\alpha +1)$ and follows for $F(\lambda)$, $\lambda$ limit, because limits of cardinals are cardinals). Thus, in your case, any fixed point is a cardinal. This does not hold for general normal functions.<|endoftext|> TITLE: Is counting roots with multiplicites at all a geometric concept? QUESTION [6 upvotes]: It is well known that a polynomial of degree $n$ admits $n$ roots when the field is algebraically closed. However, this comes with a caveat, in particular that the roots be counted with multiplicity. From an algebraic standpoint, counting roots with multiplicity is very natural: If a root $\alpha_i$ has multiplicity $m_i$, then we may factorize the polynomial as $a \displaystyle \prod_{i=1}^l (x - \alpha_i)^{m_i}$. However, this is unsatisfying to me since we are first introduced to roots of a polynomial as a geometric concept: in the real case, it's places where the curve crosses the $x$-axis. Thus, when we look at the theorem from a purely geometric standpoint without looking at the underlying algebraic framework, the theorem becomes a lot less interesting: the polynomial of degree $n$ admits $n$ roots, but it can seem to admit less if some of those roots happen to have multiplicity greater than one. My question is, is there any geometric relation between a root and its multiplicity which allow us to see the full strength of the theorem (the existence of $n$ roots) without relying on the underlying algebraic structure of the polynomial? Stated differently, can we look at the graph/image of a polynomial and determine how many roots it has (without appealing to arguments about its degree and inferring the number of roots from that)? Note that my question can also be applied to something like Bezout's theorem where plane curves of degree $m$, $n$ intersect $mn$ times, assuming the intersections are counted with multiplicity. The condition that they be counted with multiplicity is even more disappointing to the geometric nature here. REPLY [3 votes]: Consider function $f(z)=z^n$. As you go around a small contour around $z=0$ which is $n$-fold zero of $f$, $f(z)$ "goes around" $0$ in complex plane $n$ times. 
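To see the winding picture concretely, here is a small numerical illustration (a sketch, assuming Python with numpy; the helper name and parameters are purely illustrative): it follows $f(z)=z^n$ around a small circle about the origin and counts how many times the image curve wraps around $0$.

```python
# Sketch: the image of a small circle around the n-fold zero of f(z) = z**n
# winds around 0 exactly n times.  Assumes numpy.
import numpy as np

def winding_number(f, radius=0.1, samples=4000):
    t = np.linspace(0.0, 2.0 * np.pi, samples)
    z = radius * np.exp(1j * t)               # small contour around z = 0
    w = f(z)                                  # image curve
    dphi = np.diff(np.unwrap(np.angle(w)))    # increments of arg f(z) along the curve
    return dphi.sum() / (2.0 * np.pi)         # total change of argument divided by 2*pi

for n in range(1, 5):
    print(n, round(winding_number(lambda z, n=n: z ** n)))   # prints 1 1, 2 2, 3 3, 4 4
```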
The same will be true for any holomorphic function with $n$-fold zero at some point (meaning that it is zero there together with its first $n-1$ derivatives). This multiplicity of zeros is therefore connected to topological invariants such as winding number. It is also connected with notion of degree of a mapping, which I will try to explain analysing example of polynomials in complex plane. If you have $n$-th order polynomial $g(z)$ and go with $z$ around a huge contour which contains all the zeros of $g$ inside then $g(z)$ will wrap around complex plane $n$ times. It can be shown that functions with this property (which needs to be more rigorously defined but I'm just trying to give you the idea) must have $n$ zeros counting with multiplicities. Fundamental theorem of algebra follows from this. It is crucial that you count with multiplicities for the following reason: degree of a mapping tells you how much you "wrap around in total". Multiplicity of a zero tells you "how much you wrap around when you go around this zero". Therefore sum of multiplicities must be equal to degree. For polynomials degree is the same as order of polynomial. But idea of a degree is much more general and it applies to continuous function between compact spaces. Complex plane itself is not complact but it can be made compact by adding one "point at infinity", promoting it to the celebrated Riemann sphere.<|endoftext|> TITLE: Orientability of $m\times n$ matrices with rank $r$ QUESTION [31 upvotes]: I know that $$M_{m,n,r} = \{ A \in {\rm Mat}(m \times n,\Bbb R) \mid {\rm rank}(A)= r\}$$is a submanifold of $\Bbb R^{mn}$ of codimension $(m-r)(n-r)$. For example, we have that $M_{2, 3, 1}$ is non-orientable, while some others are, such as $M_{3,3,1}$ and $M_{3, 3,2}$. Is there a way to decide in general if $M_{m,n,r}$ is orientable, in terms of $m,n$ and $r$? For example, to see that $M_{2,3,1}$ is non-orientable, we parametrize it using two maps. Call $V_i$ the open set in $M_{2,3,1}$ on which the $i-$th row is a multiple of the other non-zero one. We have $M_{2,3,1} = V_0 \cup V_1$. Put $$\alpha_1:(\Bbb R^3 \setminus\{{\bf 0}\})\times \Bbb R\to V_1, \quad \alpha_1(v,t) = (tv,v),$$and similarly for $\alpha_2$, where we look at these pairs as rows of the matrix. These $\alpha_i$ are good parametrizations that cover $M_{2,3,1}$. One then checks that $\alpha_1^{-1}(V_1\cap V_2)$ has two connected components, namely, $(\Bbb R^3\setminus\{{\bf 0}\}) \times \Bbb R_{>0}$ and $(\Bbb R^3\setminus\{{\bf 0}\}) \times \Bbb R_{<0}$. And $\det D(\alpha_2^{-1}\circ \alpha_1)(v,t)$ changes sign there, so $M_{2,3,1}$ is non-orientable. The strategy for proving that $M_{m,n,r}$ is a submanifold is different and writes it locally as an inverse image of regular value. This is an early exercise in Guillemin & Pollack's Differential Topology book. It seems difficult to attack the general case using parametrizations this way (computing inverses is hard). REPLY [11 votes]: $M_{m,n,r}$ is an homogenous space under the natural action of the (orientable) Lie group $G=Gl^+(m)\times Gl^+(n)$, through the action $(P,Q) M= PMQ^{-1}$. The isotropy group of the matrix $I_{m,n,r}= \left( \begin{array}{cc} I_r & 0_{n-r,r} \\ 0_{r,m-r} & 0_{m-r,n-r} \end{array} \right)$ is the set of matrices $(P,Q)$ with $P= \left( \begin{array}{cc} R & B \\ 0 & D \end{array} \right) , Q= \left( \begin{array}{cc} R & 0 \\ C & H \end{array} \right)$ with $R,D,H$ invertible of rank $r, m-r, n-r$, $B,C$ arbitrary. 
The orientability can be checked by looking at the action of this subgroup $H$ on the tangent space at the identity in $G/H$, and see if it preserves or not an orientation. To see this let us note that by continuity, on the connected component of the identity in $H$ the determinant must remains positive, and we just have to check what happens on the other components. Note that, unless $r=m$ or $r=n$, the group $h$ has exactly two components : the identity component and the component of the pair $\epsilon=(P,Q)$ of matrices $\alpha= \left( \begin{array}{cc} S_r & O \\ 0 & S_{m-r} \end{array} \right) , \beta= \left( \begin{array}{cc} S_r & 0 \\ 0 & S_{n-r} \end{array} \right)$ where $S_r$ is the diagonal matrix with one eigenvalue equal to $-1$, the other equal to 1. The tangent space of $G$ at the identity $\mathcal G$ is the product $M_m\times M_n$, it contains the tangent space $\mathcal H$ of $H$. In order to check that the action of $\epsilon$ on $\mathcal G/\mathcal H$ preserve the orientation or not, it is enough to check if its action on $\mathcal H$ does. Identify $\cal H$ with the set of matrices $M=(\left( \begin{array}{cc} X & Y \\ 0 & Z \end{array} \right),\left( \begin{array}{cc} X & U\\ 0 & V \end{array} \right)) $, and compute the action of $\epsilon$, one find $X\to S_rXS_r^{-1}$, $Y\to S_r Y, Z\to S_{m-r}Z; U\to US_{n-r}^{-1}, V\to VS_{n-r}^{-1}$. Its determinant is $(-1)^{2r+r+m-r+r+n-r}= (-1)^{m+n}$ So (unless mistakes on calculations), the orientability depends on the parity of $m+n$. The cas $r=m$ or $r=n$ can be treated in a similar way.<|endoftext|> TITLE: Do Gödel numbers have a practical use? QUESTION [7 upvotes]: Is there any example of Gödel numbers being actually used in practice? If so for what purpose? REPLY [2 votes]: The key idea of Gödel numbering is that logical syntax can be treated as data. Applications of this concept pervade modern computing.<|endoftext|> TITLE: How is the entropy of the normal distribution derived? QUESTION [11 upvotes]: Wikipedia says the entropy of the normal distribution is $\frac{1}2 \ln(2\pi e\sigma^2)$ I could not find any proof for that, though. I found some proofs that show that the maximum entropy resembles to $\frac{1}2+\ln(\sqrt{2\pi}\sigma)$ and while I see that this can be rewritten as $\frac{1}2\ln(e\sigma\sqrt{2\pi})$, I do not get how the square root can be get rid of and how the extra $\sigma$ can be put into $\ln$. It is clear that an additional summand $\frac{1}2\ln(\sigma\sqrt{2\pi})$ would help, but where do we get it from? Probably just thinking in the wrong way here... So, what is the proof for the maximum likelihood entropy of the normal distribution? REPLY [4 votes]: You have already gotten some good answers, I thought I could add something more of use which is not really an answer, but maybe good if you find differential entropy to be a strange concept. Since we can not store a real or continuous number exactly, entropy for continuous distributions conceptually mean something different than entropy for discrete distributions. It means the information required except for the resolution of representation. Take for example the uniform distribution on $[0,2^a-1]$ for an integer $a$. At integer resolution it will have $2^a$ equiprobable states and that would give $a$ bits of entropy. Also, the differential entropy is $\log(2^a-0)$, which happens to be the same. But if we want another resolution, lesser or more bits are of course required. 
Double resolution ($\pm 0.5$) would require 1 more bit (on average).<|endoftext|> TITLE: Can $\mathbb{Z}/6\mathbb{Z}$ act freely and properly discontinuously on $\Sigma_4$? QUESTION [6 upvotes]: Let $\Sigma_m$ denote the closed connected orientable surface of genus $m$. Let $N_m$ denote the closed connected non-orientable surface of genus $m$. I was wondering which cyclic groups could act freely and properly discontinuously on $\Sigma_4$. It is clear that such a group must be finite. So the only possibilities are $\mathbb{Z}/n\mathbb{Z}$. By using Euler characteristics, one can show that the only possibilities are $n=1,2,3$ or 6. It is not hard to show that $\Sigma_4$ is the orientable double cover of $N_5$. The deck group of this cover is $\mathbb{Z}/2\mathbb{Z}$. By considering rotational symmetry, it is also easy enough to see that $\mathbb{Z}/3\mathbb{Z}$ can act on $\Sigma_4$. The quotient of this action is $\Sigma_2$. So the remaining question is: could $\mathbb{Z}/6\mathbb{Z}$ act freely and properly discontinuously on $\Sigma_4$? By considering Euler characteristics, one can show that if such an action were to exist, then the quotient would have to be $N_3$. One thing that I considered was composing some of the covering maps which I'm already aware of. For example, we could compose the covering map $\Sigma_4 \rightarrow \Sigma_2$ (which comes from rotational symmetry) with the orientable double covering map $\Sigma_2 \rightarrow N_3$. This would give a six-sheeted cover $\Sigma_4 \rightarrow N_3$. However, it is not clear to me that the deck group is $\mathbb{Z}/6\mathbb{Z}$. Another thing that I considered was looking for epimorphisms $\varphi :\pi_1(N_3) \rightarrow \mathbb{Z}/6\mathbb{Z}$. For each such $\varphi$, there exists a regular cover of $N_3$ with deck group $\mathbb{Z}/6\mathbb{Z}$. However, it is not clear to me that any of these covering spaces is orientable. Any insight would be appreciated. REPLY [6 votes]: As a connected sum of $3$ real projective planes, the fundamental group $\pi_1(N_3)$ has the presentation $$ \pi_1(N_3)=\langle a,b,c\mid a^2b^2c^2\rangle. $$ Each one of the generators $a,b,c$ is a simple closed loop through one of the $3$ cross-caps of $N_3$, thinking of $N_3$ as $3$ cross-caps glued onto the complement of the $3$ open discs in $S^2$. Let $$ f\colon \pi_1(N_3)\rightarrow\mathbf Z/6 $$ be defined by $$ f(a)=f(b)=f(c)=1. $$ Let $\Sigma/N_3$ be the covering associated to $f$. We have to show that $\Sigma$ is orientable. Let $2\mathbf Z/6$ be the subgroup of $\mathbf Z/6$ generated by $2$. It's index is equal to $2$. The subgroup $f^{-1}(2\mathbf Z/6)$ of $\pi_1(N_3)$ is the normal subgroup of all the words in $a,b,c,a^{-1},b^{-1},c^{-1}$ that are of even length. The associated covering of $N_3$ is orientation covering $O/N_3$. Therefore, the map $\Sigma\rightarrow N_3$ factorizes through $O\rightarrow N_3$. In particular, $\Sigma$ is orientable. It follows that $\Sigma=\Sigma_4$.<|endoftext|> TITLE: Volume of intersection of the $n$-ball with a hyperplane QUESTION [5 upvotes]: Let $\mathcal{B}_n$ be the $n$-ball of radius $r>0$ and centre $\mathbf{x}_0$, i.e., $\mathcal{B}_n=\{\mathbf{x}\in\mathbb{R}^n\colon \|\mathbf{x}-\mathbf{x}_0\| \leq r\}$. The volume of $\mathcal{B}_n$ is given by $$ V_n(r)=\frac{\pi^\frac{n}{2}}{\Gamma(\frac{n}{2}+1)} r^n. $$ Moreover, let $\mathcal{H}:\mathbf{w}^\top\mathbf{x}+b=0$ be a hyperplane in $\mathbb{R}^n$. I would like to find the volume of the fraction of $\mathcal{B}_n$ "cut" by $\mathcal{H}$. 
If $d$ is the distance between the centre of the ball, $\mathbf{x}_0$, and the hyperplane, $\mathcal{H}$, then the desired volume, $V_\mathcal{H}$, satisfies $$ \begin{cases} V_{\mathcal H}=0 & \text{if } d\ge r, \\[8pt] V_{\mathcal H}=\dfrac{V_n(r)}{2} & \text{if } d=0, \\[8pt] 0 < V_{\mathcal H} \le V_n(r) & \text{if } 0<d<r. \end{cases}$$<|endoftext|> TITLE: Random Binary matrix QUESTION [5 upvotes]: This is a question from Strang's "Linear Algebra and its Applications", right in the first chapter (I'm studying it by myself). I couldn't solve it, it isn't in the Solutions Manual, and my research suggests that there shouldn't be a simple solution for it. However, its presence in the very first chapter suggests to me that I'm missing something. Here it goes: 1.6: a) There are sixteen 2x2 matrices whose entries are 1's and 0's. How many are invertible? b) (Much harder!) If you put 1's and 0's at random into the entries of a 10 by 10 matrix, is it more likely to be invertible or singular? From what I can tell, this can't be solved by elementary linear algebra so... it shouldn't be there? I'm guessing there is a clever computational way to exhaust all cases? REPLY [3 votes]: @FrancoVS @N74 This is not a proof, rather an indication that the larger the matrix, the better the odds that it IS invertible. The broken line below displays the results of a (Matlab) simulation: for each $n = 1,2,\cdots,20$, we have generated $20,000$ $n \times n$ matrices (with entries following a Bernoulli Ber(1/2) distribution), and their determinant has been computed. This graphic displays the fact that, for $n>15$, the frequency of non-zero determinants rapidly tends to 1... (after a small decline for small values of $n$).<|endoftext|> TITLE: Prove $(x+y)(y^2+z^2)(z^3+x^3) < \frac92$ for $x+y+z=2$ QUESTION [17 upvotes]: $x,y,z \geqslant 0$ and $x+y+z=2$, Prove $$(x+y)(y^2+z^2)(z^3+x^3) < \dfrac92$$ While numerical methods can solve this problem, I am more interested in classical solutions. I tried this problem for the past few months, using all kinds of AM-GM and CS, but still cannot solve it. I hope someone can help me out with this one. REPLY [3 votes]: Let $f(x,y,z)=(x+y)(y^2+z^2)(z^3+x^3)$. Assume that $y$ is largest of $x,y,z$. Then $$f(x,y,z)-f(x,z,y)=(y^2+z^2)(x + y) (x + z) (y - z) (x - y - z)\le0$$and therefore we may assume $x$ or $z$ is largest of $x, y, z$. If $x$ is largest, then $$f(x,y,z)-f(z,y,x)=(z^3+x^3)(z-x)(xy+xz-y^2+yz)\le0$$and therefore we may assume that $z$ is largest of $x,y,z$. Now,$$f(0,x+y,z)-f(x,y,z)=(x+y)(((x+y)^2+z^2)z^3-(y^2+z^2)(z^3+x^3))\\=x (x+y)(-x^2 y^2 - x^2 z^2 + x z^3 + 2 y z^3)\ge0$$and we may assume that $x=0$. Now it is just $y(y^2+(2-y)^2)(2-y)^3<\frac{9}{2}$, or$$2 y^6 - 16 y^5 + 52 y^4 - 88 y^3 + 80 y^2 - 32 y + 4.5>0$$which is true for all nonnegative $y$.<|endoftext|> TITLE: Decomposition of a representation into a direct sum of irreducible ones QUESTION [19 upvotes]: I'm studying representation theory and in the book (Fulton and Harris) the author makes the following proposition with the following proof: Proposition: For any representation $V$ of a finite group $G$, there is a decomposition $$V = V_1^{\oplus a_1}\oplus\cdots \oplus V_k^{\oplus a_k},$$ where the $V_i$ are distinct irreducible representations. The decomposition of $V$ into a direct sum of the $k$ factors is unique, as are the $V_i$ that occur and their multiplicities.
Proof: It follows from Schur's lemma that if $W$ is another representation of $G$, with a decomposition $W = \bigoplus W_j^{\oplus b_j}$, and $\varphi : V\to W$ is a map of representations, then $\varphi$ must map the factor $V_i^{\oplus a_i}$ into the factor $W_j^{\oplus b_j}$ for which $W_j\simeq V_i$; when applied to the identity map of $V$ to $V$, the stated uniqueness follows. I must confess I didn't understand. The fact that we can decompose $V$ like this I do understand that follows from the fact that if $V$ has a proper nonzero subrepresentation $W$ then there is another subrepresentation $W'$ such that $V = W\oplus W'$. In that case, if either $W$ or $W'$ are not irreducible we can apply the same idea to them, until we have the desired decomposition. Now, this proof of uniqueness I really can't understand. I mean, uniqueness means that if we have $$V = V_1^{\oplus a_1}\oplus\cdots \oplus V_k^{\oplus a_k}\simeq W_1^{\oplus b_1}\oplus\cdots \oplus W_{r}^{\oplus b_r},$$ then we have $k = r$, $a_i = b_i$ and $W_i\simeq V_i$. I can't understand how this argument the author presents shows all of this. Indeed the whole point is that $V$ has these two decompositions then they are isomorphic, so that there exists one isomorphism $$\varphi : V_1^{\oplus a_1}\oplus\cdots \oplus V_k^{\oplus a_k}\to W_1^{\oplus b_1}\oplus\cdots \oplus W_r^{\oplus b_r}.$$ If we restrict it to $V_i$ we get one isomorphism $\varphi : V_i\to \varphi(V_i)$. But why $\varphi(V_i)=W_j$ for some $j$? I mean, couldn't $\varphi(V_i)$ be some other subspace of the direct sum of the $W_i$ which is not one of the $W_i$ themselves? So how to understand this proof about the decomposition of a representation? What really is the argument used in this proof? REPLY [13 votes]: I think this may be easier to understand if you change the notation a bit. Instead of grouping the direct summands by their isomorphism type, just list them all without grouping. So we have two decompositions $V=\bigoplus S_m$ and $W=\bigoplus T_n$, where each $S_m$ and each $T_n$ is irreducible. Given an isomorphism $\varphi:V\to W$, let $\varphi_{mn}:S_m\to T_n$ be the composition of $\varphi$ with the inclusion $S_m\to V$ and the projection $W\to T_n$. By Schur's lemma, each $\varphi_{mn}$ is either an isomorphism or $0$. Now since $\varphi$ is injective, for each $m$ there must exist some $n$ such that $\varphi_{mn}\neq 0$. Thus for each $m$, there exists some $n$ such that $\varphi_{mn}$ is an isomorphism, and hence $T_n\cong S_m$. Moreover, $\varphi_{mn}=0$ for all $n$ such that $T_n\not\cong S_m$. This means that image of the restriction of $\varphi$ to $S_m$ is contained in the direct sum of all the $T_n$'s which are isomorphic to $S_m$. Now fix an irreducible representation $R$ and let $A\subseteq V$ be the direct sum of all the $S_m$'s that are isomorphic to $R$, and let $B$ be the direct sum of all the other $S_m$'s, so $V=A\oplus B$. Similarly, let $C\subseteq W$ be the direct sum of all the $T_n$'s that are isomorphic to $R$, and $D$ be the direct sum of all the other $T_n$'s, so $W=C\oplus D$. The discussion above shows that $\varphi(A)\subseteq C$ and $\varphi(B)\subseteq D$. Since $\varphi$ is surjective, we must have $\varphi(A)=C$ and $\varphi(B)=D$. Thus $\varphi$ gives an isomorphism from $A$ to $C$. It follows that the number of $S_m$'s which are isomorphic to $R$ is equal to the number of $T_n$'s which are isomorphic to $R$, which is exactly what we wanted to prove. 
Note that you're right that, for instance, $\varphi(S_m)$ might not actually be equal to any of the $T_n$. For instance, if $G$ is trivial, this is just saying that if you have two bases for the vector space, you can have a vector in one basis that is not a scalar multiple of any single vector in the other basis. But $\varphi(S_m)$ is still isomorphic to one of the $T_n$. Moreover, $\varphi(A)$ is actually equal to $C$, or in the language of the question, $\varphi(V_i^{\oplus a_i})=W_j^{\oplus b_j}$ for some $j$. So while the individual irreducible summands might not map to individual irreducible summands, when you group together all the irreducible summands of a given isomorphism type, they map to the sum of all the irreducible summands of the same isomorphism type.<|endoftext|> TITLE: Birthday line to get ticket in a unique setup QUESTION [6 upvotes]: At a movie theater, the whimsical manager announces that a free ticket will be given to the first person in line whose birthday is the same as someone in line who has already bought a ticket. You have the option of choosing any position in the line. Assuming that you don't know anyone else's birthday, and that birthdays are uniformly distributed throughout the year (365-day year), what position in line gives you the best chance of getting a free ticket? REPLY [2 votes]: Let $p_n$ be the probability that the $n$th person in line wins, and let $q_n$ be the probability that the first $n$ persons have different birthdays. Then for a generic $n$ we have $$ p_{n+1} = q_n \frac{n}{365} \\ p_{n+2} = q_n \frac{365-n}{365} \frac{n+1}{365} $$ and therefore $$ \frac{p_{n+2}}{p_{n+1}} = \frac{(365-n)(n+1)}{365n} $$ This is larger than $1$ if and only if $$ (365-n)(n+1) > 365n $$ which rearranges as $$ n(n+1) < 365 $$ Initially the $p_n$s grow steadily, but eventually they start to fall steadily towards $0$. The last $n$ for which $n(n+1)<365$ is $18$, so the largest $p_n$ will be $p_{20}$. REPLY [2 votes]: The probability, $p(n)$, of getting a free ticket when you are the $n^{\text{th}}$ person in line is: (probability that none of the first $n-1$ people share birth dates) * (probability that you share a birthday with one of the first $n-1$ people) So, $p(n) = [1 *(\frac{364}{365})*(\frac{363}{365}) * ... *(\frac{(365-(n-2))}{365})] * [\frac{(n-1)}{365}]$. For $p(n)$ to be the maximum, we need the smallest $n$ for which $p(n) > p(n+1)$, or $\frac{p(n)}{p(n+1)} > 1$. Now, $\frac{p(n)}{p(n+1)} = \frac{365}{(366-n)} * \frac{(n-1)}{n}$ $\implies 365n - 365 > 366n - n^2 $, $\implies n^2 - n - 365 > 0$ $\implies \left(n - \frac{1+\sqrt{1461}}{2}\right)\left(n - \frac{1-\sqrt{1461}}{2}\right) > 0$ $\implies n > \frac{1+\sqrt{1461}}{2} = 19.6115\ldots $ ($\because n>0$) $\implies n = 20$ (the ceiling of the computed value) Hence the $20^{th}$ position maximizes the chances. Note: See my first comment below if you don't want to solve the quadratic equation using the discriminant method.<|endoftext|> TITLE: Shortest distance as measured in norm $||\cdot ||$ from point to a sphere in norm $||*||$ QUESTION [5 upvotes]: I recently found this theorem, which is used in some clustering algorithms: Let $x,v \in \mathbb{R}^p$, $r>0$, $||\cdot ||_{\ast}$ be a given norm on $\mathbb{R}^p$ and $\partial B_{||\cdot||_{\ast}}(v,r) = \{ y\in\mathbb{R}^p: ||y-v||_{\ast}=r \}$ be the boundary sphere of the closed ball of radius $r$ centered at $v$.
Then the shortest distance, as measured by $||\cdot ||_{\dagger}$, from any point in $\partial B_{||\cdot||_{\ast}}(v,r)$ to $x$ is $\left|\ ||x-v||_{\ast}-r \right|.$ I know that this holds for $||\cdot ||_2,$ but how would one prove that this holds for any two pairs of norms $||\cdot ||_{\ast}$ and $||\cdot ||_{\dagger}$? I've tried tracking down the original article, because all the articles I've seen that have cited it do exactly that, just cite it, without giving the proof. This isn't homework or anything like that, I'm just curious as to what approach one could use when trying to prove this? EDIT: Source of this claim: Page 65. Now that there is a counterexample in one of the answers, I suppose that there is actually only one norm here. I apologize for the confusion, but as you will see from the source, the two norms are marked differently, which led me to the conclusion that they are not the same. REPLY [2 votes]: Your claim is wrong even if one of the norms is the Euclidean norm. Take $p=2, \ r=1, \ v=(0,0)$ and $x=(2,2)$. We take the following two norms $$ \Vert (y,z) \Vert_1 := \sqrt{y^2+z^2}, \quad \Vert (y,z) \Vert_2:= \frac{1}{2} \max\{\vert y \vert, \vert z \vert \}.$$ Then $$ \min_{\Vert w \Vert_1=1} \Vert w - x \Vert_2 \geq \frac{1}{2} > 0 = \vert \ \Vert x \Vert_2 - 1 \vert = \vert \ \Vert x - v \Vert_2 - r\vert.$$ Where we used $$ \Vert x \Vert_2 = \Vert (2,2) \Vert_2 = \frac{1}{2}\max\{ \vert 2 \vert, \vert 2 \vert \} = 1.$$ The first inequality follows from the following consideration. $$1= \Vert (w_1, w_2) \Vert_1 \Rightarrow 1 = 1^2 = w_1^2 + w_2^2 \Rightarrow \max \{ \vert w_1\vert, \vert w_2 \vert \} \leq 1.$$ Thus, if $\Vert (w_1, w_2) \Vert_1 = 1 $, then $$ \Vert (w_1, w_2) - x \Vert_2 = \frac{1}{2} \max \{ \vert w_1 - 2\vert, \vert w_2 - 2 \vert\} =\frac{1}{2} \max \{ 2- w_1, 2-w_2\} \geq \frac{1}{2}.$$<|endoftext|> TITLE: Is this greedy sequence optimal? QUESTION [8 upvotes]: Consider this strictly increasing sequence $(x_n)$ of 21 natural numbers: $$1, 3, 4, 11, 12, 27, 28, 59, 60, 123, 124, 251, 252, 507, 508, 1019, 1020, 2043, 2044, 4091, 4092$$ i.e., $$x_n=\begin{cases}1&n=1\\2^{k+2}-5&n=2k\\2^{k+2}-4&n=2k+1>1\end{cases}$$ It has the property $$\tag{$\star$} \text{There is no index $k$ with $x_{k-1}x_{k+1}<2x_k^2<4x_{k-1}x_{k+1}$.}$$ Questions: Is there a sequence with property $(\star)$ and $x_{21}<4092$? What is the minimal possible value of $x_{21}$? More generally, for $n\gg 1$, what is the minimal possible value of $x_n$? Is there an infinite sequence with property $(\star)$ that achieves the minimal value for all $n\gg 1$? Note that numbers from $1$ to $2046$ can (approximately) be interpreted as showing that $x_{21}\ge 2047$. In my answer to that question, I improved this first to $x_{21}\ge 2559$ and then to $x_{21}\ge 3071$. There's probably still room for improvement along the methods I used, but the case distinctions started to become convoluted. The sequence above was constructed greedily, i.e., starting with $x_1=1$ and $x_2=3$ we recursively find the smallest $x_{k+1}$ that is $>x_k$ and is either $\ge \frac{2x_k^2}{x_{k-1}}$ or $\le \frac{x_k^2}{2x_{k-1}}$. Note that being too greedy is bad: Starting with $x_1=1$, $x_2=2$, we arrive at the much worse $1,2,8,9,21,22,47,\ldots, x_{21}=6647$. REPLY [2 votes]: Lemma 1. Let $a<b<c$ be naturals with $2b^2\ge 4ac$, i.e. $b^2\ge 2ac$. If $b\le 2a$, then $c\le 2a$; if $b\le 2a+1$, then $c\le 2a+2$. Proof: If $c\ge 2a+1$, then $b^2\ge 2ac\ge 4a^2+2a>(2a)^2$ and hence $b>2a$. If $c\ge2a+3$, then $b^2\ge 2ac\ge4a^2+6a>4a^2+4a+1=(2a+1)^2$ and hence $b>2a+1$. These findings are logically equivalent to the claims. $\square$ For naturals $a,b$ with $a<b$, let $S(a,b)$ denote the smallest $c>b$ for which $ac<2b^2<4ac$ is false.
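As a quick illustration of this definition (a minimal sketch in Python; it is not used in the argument), one can compute $S$ by brute force and rebuild the greedy sequence from the question:

```python
# Sketch: S(a, b) is the smallest c > b for which the pattern a*c < 2*b^2 < 4*a*c fails.
# Iterating it from (1, 3) reproduces the greedy sequence 1, 3, 4, 11, 12, 27, ..., 4092.
def bad(a, b, c):
    return a * c < 2 * b * b < 4 * a * c      # the forbidden pattern of property (star)

def S(a, b):
    c = b + 1
    while bad(a, b, c):                        # terminates, since bad() forces c < 2*b*b/a
        c += 1
    return c

seq = [1, 3]
while len(seq) < 21:
    seq.append(S(seq[-2], seq[-1]))

print(seq)       # [..., 2043, 2044, 4091, 4092], matching the sequence in the question
print(S(1, 2))   # 8, a value that is used in the proof below
```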
Then we extract from the above proof that $$ \tag4 S(a,b)=\begin{cases}b+1&\text{if }b\ge 2a+1\\ \left\lceil\frac{2b^2}{a}\right\rceil&\text{otherwise}\end{cases}$$ In particular, $$\tag5S(a,a+1)=2a+5\qquad\text{if }a\ge 2.$$ Indeed, as $a+1$ is not $\ge 2a+1$, $S(a,a+1)=\left\lceil\frac{2(a+1)^2}{a}\right\rceil=\left\lceil2a+4+\frac2a\right\rceil=2a+5$. Let $\mathscr X$ be the set of all increasing sequences of natural numbers with property $(\star)$, i.e., such that $x_nx_{n+2}<2x_{n+1}^2<4x_nx_{n+2}$ is false for all $n$. For naturals $n$ define $\mu_1=1$, $\mu_2=2$, $\mu_3=4$, $\mu_4=9$ and $\mu_{n+2}=2\mu_n+4$ for $n\ge 3$; in particular $\mu_{n+1}<2\mu_n+2$ and $\mu_{n+2}>\mu_n+2$. Theorem. Let $(x_n)\in\mathscr X$. Then $$\tag{$P_n$} x_n\ge\mu_n$$ and $$\tag{$Q_n$} \text{if }x_n=\mu_n\text{ and }n>1\text{, then }x_{n-1}=x_n-1. $$ Proof. Clearly $x_1\ge 1$ ($\to P_1$) and $x_2\ge 2$ ($\to P_2$), and if $x_2=2$ then necessarily $x_1=1$ ($\to Q_2$). From $S(1,2)=8$ we see that we cannot have $x_3=3$ ($\to P_3$), and if $x_3=4$ then $x_2=3$ ($\to Q_3$). To show $P_4$ and $Q_4$, assume $x_4\le 8$, or $x_4=9$ and $x_3<8$; in either case $x_4\le 9$ and $x_3\le 7$. Moreover $x_3\ge 5$: for $x_3=4$ we would have $x_2=3$ by $Q_3$ and then $x_4\ge S(3,4)=11$, contradicting $x_4\le 9$. Since $x_2x_4\le 9(x_3-1)<2x_3^2$, the triple $(x_2,x_3,x_4)$ can only comply with $(\star)$ via $2x_3^2\ge 4x_2x_4\ge 4x_2(x_3+1)$, which forces $x_2\le 3$, and $x_2=3$ is only possible when $x_3=7$. If $x_2=2$, then $x_1=1$ and $x_1x_3\le 7$, so $x_1x_3<8=2x_2^2<4x_1x_3$, which is excluded. Hence $x_2=3$ and $x_3=7$; then $x_1=1$ from $S(2,3)=9>x_3$, and now $x_1x_3=7<18=2x_2^2<28=4x_1x_3$, again excluded. We conclude that both $P_4$ and $Q_4$ hold. Let $n\ge 5$ and assume that we already know $P_\nu, Q_\nu$ for $1\le \nu<n$. If $x_{n-1}\le 2x_{n-2}$, then by $(4)$, $x_n\ge\left\lceil\frac{2x_{n-1}^2}{x_{n-2}}\right\rceil\ge 2x_{n-2}+5\ge 2\mu_{n-2}+5>\mu_n$, so that $P_n$ and (vacuously) $Q_n$. So assume $x_{n-1}\ge 2x_{n-2}+1$; then $x_n\ge x_{n-1}+1$. If $x_{n-2}>\mu_{n-2}$, then $x_n\ge x_{n-1}+1\ge 2x_{n-2}+2\ge 2\mu_{n-2}+4=\mu_n$ and so $P_n$; moreover $x_n=\mu_n$ forces equality throughout, in particular $x_{n-1}=x_n-1$, which is $Q_n$. If $x_{n-2}=\mu_{n-2}$, then by $Q_{n-2}$ and $(5)$, $x_{n-1}\ge S(x_{n-3},x_{n-2})=2x_{n-3}+5=2\mu_{n-2}+3=\mu_n-1$, hence $x_n\ge x_{n-1}+1\ge\mu_n$ and again $P_n$; and $x_n=\mu_n$ forces $x_{n-1}=\mu_n-1=x_n-1$, which is $Q_n$. $\square$ Answers to the original questions We can interpret the first part of the theorem as $$ \min\{\,x_n\mid(x_k)_{k\in\Bbb N}\in\mathscr X\,\}=\mu_n.$$ This answers the third question for all $n\ge 1$. For example, we compute $x_{21}\ge \mu_{21}=4092$, which answers the second question as well as (negatively) the first. Finally, note that $(\mu_n,\mu_{n+1},\mu_{n+2})$ cannot occur as consecutive terms of a sequence in $\mathscr X$, as that would contradict Lemma 1 (using $\mu_{n+2}=2\mu_n+4$ and $\mu_{n+1}<2\mu_n+2$). Hence every sequence in $\mathscr X$ differs from $(\mu_n)$ in infinitely many places, which answers the fourth question negatively. Nevertheless, the greedy sequence in the OP is "optimal" insofar as it coincides with $\mu_n$ for all odd $n$ (for even $n$ the minimum $\mu_n$ is attained by other sequences).<|endoftext|> TITLE: CW complex structure of geometric realization QUESTION [5 upvotes]: In Ralph Cohen's notes on the topology of fiber bundles he makes the following claims: on pp.69, he says the geometric realization of a simplicial set is a CW complex on pp.70, he says the geometric realization of a simplicial space may not be a CW complex My question is, what is in the category of sets that guarantees the geometric realization has a CW complex structure which is missing in the category of topological spaces? REPLY [11 votes]: Here's a sketch of the proof why realizations of simplicial sets are CW complexes. Roughly speaking, a CW complex is a space built by one-by-one gluing in new simplices along their boundaries. If $X_\bullet$ is a simplicial set, then $|X_\bullet|$ is built by gluing together the spaces $X_n\times\Delta^n$, where you give $X_n$ the discrete topology.
But instead of gluing together these spaces, you can just think of $X_n\times \Delta^n$ as a disjoint union of copies of $\Delta^n$, one for each element of $X_n$, and then glue together these simplices one-by-one. So you are just gluing together a bunch of simplices, and you can check that you get a CW-complex. (I am of course glossing over some details, the most notable of which is that in a simplicial set (unlike a CW-complex), your simplices might be glued together by degeneracy maps instead of by boundary maps. To avoid this issue, you should let $Y_n\subseteq X_n$ be the set of nondegenerate simplices, and then only glue together simplices corresponding to points of $Y_n$, because all the other ones are degenerate and so don't actually add anything to the geometric realization.) What goes wrong with this for simplicial spaces? Well, if $X_\bullet$ is a simplicial space, then $|X_\bullet|$ is still built by gluing together the spaces $X_n\times\Delta^n$. But this time $X_n$ may not have the discrete topology, so $X_n\times\Delta^n$ is not just a disjoint union of simplices! So you can't glue in each simplex one-by-one; you have to take the topology of $X_n$ into account, and so it is not at all obvious how you could get a CW-complex structure. In fact, for any space $A$, you can consider the constant simplicial space $X_\bullet$ with $X_n=A$ for all $n$ and every face and degeneracy map the identity, and then $|X_\bullet|$ is just $A$. So every space can be the realization of a simplicial space. Finally, in response to some of the discussion in the comments, let me note that in this context, sets are a subcategory of spaces by considering each set to have the discrete topology. This comes up in the discussion above because if you consider a simplicial set to be a simplicial space in this way, then its geometric realization as a simplicial set is the same as its geometric realization as a simplicial space.<|endoftext|> TITLE: Conjecture $\sum_{n=1}^\infty\frac{\ln(n+2)}{n\,(n+1)}\,\stackrel{\color{gray}?}=\,{\large\int}_0^1\frac{x\,(\ln x-1)}{\ln(1-x)}\,dx$ QUESTION [13 upvotes]: Numerical calculations suggest that $$\sum_{n=1}^\infty\frac{\ln(n+2)}{n\,(n+1)}\,\stackrel{\color{gray}?}=\,\int_0^1\frac{x\,(\ln x-1)}{\ln(1-x)}\,dx=1.553767373413083673460727...$$ How can we prove it? Is there a closed form expression for this value? REPLY [4 votes]: Additionnally, by applying Theorem $2$ (here), one obtains a closed form in terms of the poly-Stieltjes constants. Proposition. We have $$ \begin{align} \int_0^1\frac{x\,(\ln x-1)}{\ln(1-x)}\,dx= \ln 2+\gamma_1(2,0)-\gamma_1(1,0) \tag1 \\\\\sum_{n=1}^\infty\frac{\ln(n+2)}{n\,(n+1)}=\ln 2+\gamma_1(2,0)-\gamma_1(1,0) \tag2 \end{align}$$ where $$ \gamma_1(a,b) = \lim_{N\to+\infty}\left(\sum_{n=1}^N \frac{\log (n+a)}{n+b}-\frac{\log^2 \!N}2\right). $$<|endoftext|> TITLE: Prove $\sum_{k=0}^{n}\frac{n!}{k!}(n-k)n^k=n^{n+1}$ for any $n\in\mathbb N$. QUESTION [8 upvotes]: I want to prove the following: $$\sum_{k=0}^{n}\frac{n!}{k!}(n-k)n^k=n^{n+1}\quad\text{for any $n\in\mathbb N$.}$$ I tried induction and invoking the binomial theorem, to little avail. I’m looking for some quick and dirty solution. Thanks for any hints. As the answers below reveal, the following update I had added earlier is not really of much use, so I struck it. 
Update: After some rearrangements, the left-hand side above can be rewritten as $$\require{enclose} \enclose{horizontalstrike}{n\sum_{k=0}^{n-1}\binom{n-1}{k}n^k(n-k)!}$$ This form seems to suggest resorting to the binomial theorem. REPLY [3 votes]: Here's a combinatorial argument. Clearly $n^{n+1}$ is the number of functions from $\{0,1,\ldots,n\}$ to $[n]$. If $f$ is any such function, let $$k_f=\min\left\{k\in[n]:\exists\ell<k\ \bigl(f(\ell)=f(k)\bigr)\right\};$$ this is well-defined, since $f$ maps the $n+1$ points of $\{0,1,\ldots,n\}$ into the $n$-element set $[n]$, so by the pigeonhole principle some value must repeat. For a fixed $k$ there are $\frac{n!}{(n-k)!}$ ways to choose $f$ injectively on $\{0,\ldots,k-1\}$, then $k$ choices of $f(k)$ among the values already used, and $n^{n-k}$ choices for the remaining arguments, so the number of $f$ with $k_f=k$ is $\frac{n!}{(n-k)!}\,k\,n^{n-k}$. Summing over $k$ and substituting $k\mapsto n-k$ gives $$n^{n+1}=\sum_{k=1}^{n}\frac{n!}{(n-k)!}\,k\,n^{n-k}=\sum_{k=0}^{n}\frac{n!}{k!}(n-k)n^{k},$$ as desired.<|endoftext|> TITLE: Dividing a Checkerboard into L-Shaped Regions QUESTION [6 upvotes]: In preparation for the GRE Math-Subject test, and honestly for the fun of it, I've been working through a select number of my texts. The first of which is Saracino's Abstract Algebra text. I was hoping this wouldn't happen, but I've been very quickly stumped. The following is Exercise 0.22, and I'm looking for a few hints. "Prove that the following statement is true for any integer $n \geq 1$: If the number of squares in a "checkerboard" is $2^n \times 2^n$ and we remove any one square, then the remaining part of the board can be broken up into L-shaped regions each consisting of 3 squares." Before I wrote this proof, I went ahead and pulled out a sheet of paper and did a little testing, because I'm not exactly sure if all of these have to be correct. I found that I could remove pretty much any square on the $2^2 \times 2^2$ board and not be able to break it up. This leads me to believe that the statement is false. Any advice? REPLY [10 votes]: You can prove this using induction. Our trivial base case is $n=1$: a $2\times 2$ board with any one square removed is exactly one L-shaped region. Now, assume that the statement is true for $n-1$. We can split a $2^n\times 2^n$ board into four $2^{n-1} \times 2^{n-1}$ boards, as shown above. W.L.O.G. the square that we choose to take out is in the upper right quadrant. Since a $2^{n-1} \times 2^{n-1}$ board can be filled with Ls if one square is taken out, the upper right quadrant can be filled without any issue. Now, suppose we take out the L shape in the middle of the board (which is highlighted in red in the picture above). Then each of the other quadrants will have one square missing, and thus can be filled with Ls. Since the shape in the middle itself is an L, the whole board can be filled with Ls. (Sorry for the bad english.)<|endoftext|> TITLE: Prove that a $k$-regular bipartite graph has a perfect matching QUESTION [11 upvotes]: Prove that a $k$-regular bipartite graph has a perfect matching by using Hall's theorem. Let $S$ be any subset of the left side of the graph. The only thing I know is the number of things leaving the subset is $|S|\times k$. REPLY [2 votes]: Let $S$ be any subset of vertices in the left vertex set of the $k$-regular bipartite graph. The number of edges adjacent to vertices in $S$ is exactly $|S|~ k$. Since the number of edges incident to each vertex in the right vertex set of the bipartite graph is exactly $k$, any set of $|S|~k$ edges in the bipartite graph will be incident to $|S|$ or more vertices in the right vertex set. (For example, fewer than $|S|$ vertices in the right vertex set can `accommodate' at most $(|S|-1)k$ edges.) Thus, the number of neighbors of $S$ is at least $|S|$. By Hall's theorem, the graph has a perfect matching.<|endoftext|> TITLE: The Dirac delta does not belong in L2 QUESTION [12 upvotes]: I need to prove that Dirac's delta does not belong in $L^2(\mathbb{R})$.
First, I found the next definition of Dirac's delta $$\delta :D(\mathbb R)\to \mathbb R$$ is defined by: $$\langle \delta,\varphi \rangle=\int_{-\infty}^{+\infty}\varphi(x)\delta(x)\,\mathrm{d}x = \varphi(0),$$ and $$\delta(x)= \begin{cases} 1,& x= 0\\ 0 ,& x\ne 0. \end{cases} \\$$ The space $L^2(\mathbb{R})=\{f:f \text{ is measurable and } \|f\|_{2}<+\infty \}$. I'm thinking suppose otherwise, i.e, Dirac's delta in $L^2$, but I have problems to prove that Dirac's delta is measurable, but I suspect that in calculating of $\|f\|_2$ I'll find the contradiction. Could you give me any suggestions?? REPLY [3 votes]: Suggestion for a proof. The Riesz-Fisher theorem states that $L^p(\mathbb{R})$ is complete. Riesz-Fisher theorem Another theorem states that $C^\infty_c(\mathbb{R})$ is dense in $L^p(\mathbb{R})$. Check out denseness of smooth functions This implies that for any $f\in L^p(\mathbb{R})$, there exists a sequence of functions $\{f_n\}\in C^\infty_c(\mathbb{R})$ such that $f_n\to f$ in $L^p(\mathbb{R})$ If $\delta(x)\in L^p(\mathbb{R})$, for some fixed $\epsilon >0$ there is an $N\in\mathbb{N}$ such that $||f_n(x) - \delta(x)||_p<\epsilon$, for $n\ge N$. This results in a contradiction though, because at some point, $f_n$ would not be continuous. Refer to reuns's first comment: no sequence of continuous functions exist that converge to $\delta(x)$.<|endoftext|> TITLE: Calculating SVD by hand: resolving sign ambiguities in the range vectors. QUESTION [11 upvotes]: When calculating the SVD of the matrix $$A = \begin{bmatrix}3&1&1\\-1&3&1\end{bmatrix}$$ I followed these steps $$A A^{T} = \begin{bmatrix}3&1&1\\-1&3&1\end{bmatrix} \begin{bmatrix}3&-1\\1&3\\1&1\end{bmatrix} = \begin{bmatrix}11&1\\1&11\end{bmatrix}$$ $$\det(A A^{T} - \lambda I) = (11-\lambda)^{2} - 1 = 0$$ Hence, the eigenvalues are $\lambda_{1} = 12$ and $\lambda_{2} = 10$. When $\lambda_{1} = 12$: $$ \begin{bmatrix}11-\lambda_{1}&1\\1&11-\lambda_{1}\end{bmatrix} \begin{bmatrix}x_{1}\\x_{2}\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix}$$ $$x_{1} = x_{2} \implies u_{1} = \begin{bmatrix}t\\t\end{bmatrix}$$ And for $\lambda_{2} = 10$: $$x_{1} = -x_{2} \implies u_{2} = \begin {bmatrix}t\\-t\end{bmatrix}$$ Now $$U = \begin {bmatrix} u_{1}&u_{2} \end{bmatrix}$$ $u_{1}$ and $u_{2}$ are orthonormal. So the for $u_{1} = \begin{bmatrix}t\\t\end{bmatrix}$ , $u_{2} = \begin{bmatrix}t\\-t\end{bmatrix}$ I know $\left| t \right| = \frac{1}{\sqrt{2}}$ and $u_{1}.u_{2}=0$. My question how can we decide about the sign? For example I think both $U= \begin{bmatrix}\frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}} \\\frac{1}{\sqrt{2}} &\frac{-1}{\sqrt{2}} \end{bmatrix}$ and $U=\begin{bmatrix}\frac{-1}{\sqrt{2}} &\frac{1}{\sqrt{2}} \\\frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}} \end{bmatrix}$ could be answers. Then Which one should I choose? 
====== Update1: Based on answers posted I rewrite: $u_{1} = sgn (t_1) \begin{bmatrix} \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$ $u_{2} = sgn (t_2) \begin{bmatrix} \frac{1}{\sqrt{2}}\\ \frac{-1}{\sqrt{2}}\end{bmatrix}$ $$U= \begin{bmatrix} \frac{1}{\sqrt{2}}& \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix} \operatorname{sgn} (t_1)&0 \\ 0& \operatorname{sgn} (t_2) \end{bmatrix}$$ ====== Update2: I continued by calculation of $V$ as follows: $ A^{T} A = \begin{bmatrix}3&-1\\1&3\\1&1\end{bmatrix} \begin{bmatrix}3&1&1\\-1&3&1\end{bmatrix} = \begin{bmatrix}10&0&2\\0&10&4\\2&4&2\end{bmatrix}$ $det( A^{T} A- \lambda I)=0$ $\lambda_{1} = 12 , v_1 = sgn(t_3) \begin{bmatrix}t_{3}\\ 2t_{3} \\ t_{3} \end{bmatrix}$ $\lambda_{2} = 10 , v_{2} = sgn(t_4) \begin{bmatrix}t_{4}\\ -0.5t_{4} \\ 0 \end{bmatrix}$ $\lambda_{3} = 0 , v_{3} = sgn(t_5) \begin{bmatrix}t_{5}\\ 2t_{5} \\ -5t_{5} \end{bmatrix}$ $V= \begin{bmatrix}\frac{1}{\sqrt{6}} &\frac{2}{\sqrt{5}} &\frac{1}{\sqrt{30}}\\ \frac{2}{\sqrt{6}}&\frac{-1}{\sqrt{5}}&\frac{2}{\sqrt{30}}\\ \frac{1}{\sqrt{6}}& 0& \frac{-5}{\sqrt{30}}\end{bmatrix}\begin{bmatrix} \operatorname{sgn} (t_3)&0&0 \\ 0& \operatorname{sgn} (t_4)&0\\ 0&0& \operatorname{sgn} (t_5) \end{bmatrix}$ I try to check if all possible answers for U and V are valid : $A = U\Sigma V^{*}$ $A = \begin{bmatrix} \frac{1}{\sqrt{2}}& \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix} \operatorname{sgn} (t_1)&0 \\ 0& \operatorname{sgn} (t_2) \end{bmatrix} \Sigma (\begin{bmatrix}\frac{1}{\sqrt{6}} &\frac{2}{\sqrt{5}} &\frac{1}{\sqrt{30}}\\ \frac{2}{\sqrt{6}}&\frac{-1}{\sqrt{5}}&\frac{2}{\sqrt{30}}\\ \frac{1}{\sqrt{6}}& 0& \frac{-5}{\sqrt{30}}\end{bmatrix} \begin{bmatrix} \operatorname{sgn} (t_3)&0&0 \\ 0& \operatorname{sgn} (t_4)&0\\ 0&0& \operatorname{sgn} (t_5) \end{bmatrix} )^{*} $ $A = \begin{bmatrix} \frac{1}{\sqrt{2}}& \frac{-1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\end{bmatrix} \begin{bmatrix} \operatorname{sgn} (t_1)&0 \\ 0& \operatorname{sgn} (t_2) \end{bmatrix} \Sigma \begin{bmatrix} \operatorname{sgn} (t_3)&0&0 \\ 0& \operatorname{sgn} (t_4)&0\\ 0&0& \operatorname{sgn} (t_5) \end{bmatrix} \begin{bmatrix}\frac{1}{\sqrt{6}} &\frac{2}{\sqrt{5}} &\frac{1}{\sqrt{30}}\\ \frac{2}{\sqrt{6}}&\frac{-1}{\sqrt{5}}&\frac{2}{\sqrt{30}}\\ \frac{1}{\sqrt{6}}& 0& \frac{-5}{\sqrt{30}}\end{bmatrix}^{*} $ When I assigned $U= \begin{bmatrix}\frac{-1}{\sqrt{2}} &\frac{-1}{\sqrt{2}} \\\frac{-1}{\sqrt{2}} &\frac{1}{\sqrt{2}} \end{bmatrix}$ and $V= \begin{bmatrix}\frac{-1}{\sqrt{6}} &\frac{-2}{\sqrt{5}} &\frac{-1}{\sqrt{30}}\\ \frac{-2}{\sqrt{6}}&\frac{1}{\sqrt{5}} & \frac{-2}{\sqrt{30}} \\ \frac{-1}{\sqrt{6}}& 0& \frac{5}{\sqrt{30}}\end{bmatrix}$ and $\Sigma = \begin{bmatrix}\sqrt{12}&0&0\\ 0&\sqrt{10}&0\end{bmatrix} $in $A = U\Sigma V^{*}$ I got the A. But when I updated U as $U = \begin{bmatrix}\frac{1}{\sqrt{2}} &\frac{1}{\sqrt{2}} \\\frac{1}{\sqrt{2}} &\frac{-1}{\sqrt{2}} \end{bmatrix}$, it produced -A. This probably means certain versions of $U$ and $V$ will reproduce A. I haven't figured out how I should choose them. REPLY [6 votes]: This is a very nice post with an illuminating discussion on the confusion between sign choices and how they propagate. Define the singular value decomposition of a rank $\rho$ matrix as $$ \mathbf{A} = \mathbf{U} \, \Sigma \, \mathbf{V}^{*}.
$$ Connecting vectors between row and column spaces The sign conventions can be understood by looking at the relationship between the $u$ and $v$ vectors $$ u_{k} = \sigma^{-1}_{k} \mathbf{A} v_{k}, \qquad v_{k} = \sigma^{-1}_{k} u^{*}_{k} \mathbf{A}, \qquad k = 1, \rho \tag{1} $$ Changing the sign of a $u$ vector induces a sign change in the corresponding $v$ vector; changing the sign of $v$ vector induces a sign change the corresponding $u$ vector. Example matrix We certainly agree the singular values are $\sigma = \left\{ 2\sqrt{3}, \sqrt{10} \right\}$. Let's nominate a set of eigenvectors. $$ \mathbf{A}\,\mathbf{A}^{*}: \quad % u_{1} = % \frac{1}{\sqrt{2}} \left[ \begin{array}{r} 1 \\ 1 \\ \end{array} \right], \quad % u_{2} = % \frac{1}{\sqrt{2}} \left[ \begin{array}{r} -1 \\ 1 \\ \end{array} \right] $$ $$ \mathbf{A}^{*} \mathbf{A}: \quad % v_{1} = % \frac{1}{\sqrt{6}} \left[ \begin{array}{r} 1 \\ 2 \\ 1 \end{array} \right], \quad % v_{2} = % \frac{1}{\sqrt{5}} \left[ \begin{array}{r} -2 \\ 1 \\ 0 \end{array} \right] % v_{3} = % \frac{1}{\sqrt{30}} \left[ \begin{array}{r} -1 \\ -2 \\ 5 \end{array} \right] $$ How did we decide to distribute the minus signs in these vectors? Use the convention of making the first entries negative. Stepping back, the eigenvectors have an inherent global sign ambiguity. For example $$ \begin{align} % \mathbf{A}^{*} \mathbf{A} v_{1} &= \mathbf{0} \\ % \mathbf{A}^{*} \mathbf{A} \left(-v_{1}\right) &= \left(- \mathbf{0} \right) = \mathbf{0} % \end{align} $$ Using this arbitrary set of column vectors, construct the default SVD: $$ \mathbf{A} = \left[ \begin{array}{cc} u_{1} & u_{2} \end{array} \right] % \Sigma \, % \left[ \begin{array}{ccc} v_{1} & v_{2} & v_{3} \end{array} \right]^{*} $$ If we select the sign convention $\color{blue}{\pm}$ on the $k-$th column vector, we induce the sign convention $\color{red}{\pm}$ on the $k-$th column vector in the complimentary range space. The $\color{blue}{choice}$ induces the $\color{red}{consequence}$. Choices and consequences For example, find the SVD by resolving the column space to produce the vectors $u$. The vectors $v$ will be constructed using the second equality in $(1)$. Flipping the global sign on $u_{1}$ flips the global sign on $v_{1}$. $$ \mathbf{A} = \left[ \begin{array}{cc} \color{blue}{\pm}u_{1} & u_{2} \end{array} \right] % \Sigma \, % \left[ \begin{array}{ccc} \color{red}{\pm}v_{1} & v_{2} & v_{3} \end{array} \right]^{*} $$ A nice feature of @Crimson's post is the demonstration that where can choose which range space to compute and which to construct. Compute $\mathbf{V}$ and construct $\mathbf{U}$, or compute $\mathbf{U}$ and construct $\mathbf{V}$. The other path is to resolve the row space vectors $v$ and use the first equality in $(1)$ to construct the vectors $u$. The point is that flipping the global sign on $v_{1}$ flips the global sign on $u_{1}$ $$ \mathbf{A} = \left[ \begin{array}{cc} \color{red}{\pm}u_{1} & u_{2} \end{array} \right] % \Sigma \, % \left[ \begin{array}{ccc} \color{blue}{\pm}v_{1} & v_{2} & v_{3} \end{array} \right]^{*} $$ Choice and consequence swap places. The process for the first vector extends to all $\rho$ vectors in the range space. Counting choices The number of unique sign choices is $2^\rho$, two choices for each range space vector. Here the matrix rank is $\rho = 2$ and there are $2^{2} = 4$ unique vector sets. 
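The four admissible sign patterns are enumerated in the tables below. As a quick numerical check of the pairing rule (a sketch, assuming Python with numpy), one can flip the sign of a matched $u_{k}$, $v_{k}$ pair and confirm that $\mathbf{U}\,\Sigma\,\mathbf{V}^{*}$ still reproduces $\mathbf{A}$, while flipping only one member of the pair does not:

```python
# Sketch: flipping the k-th column of U together with the k-th row of V* leaves
# U Sigma V* = A unchanged; flipping only one of them changes the sign of that
# rank-one term and the reconstruction fails.  Assumes numpy.
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [-1.0, 3.0, 1.0]])

U, s, Vh = np.linalg.svd(A)          # s = [sqrt(12), sqrt(10)]
Sigma = np.zeros((2, 3))
Sigma[:2, :2] = np.diag(s)

for k in range(2):                    # flip the k-th "choice" and its "consequence"
    U2, Vh2 = U.copy(), Vh.copy()
    U2[:, k] *= -1
    Vh2[k, :] *= -1
    print(k, np.allclose(U2 @ Sigma @ Vh2, A))    # True, True

U3 = U.copy()
U3[:, 0] *= -1                        # flip u_1 alone
print(np.allclose(U3 @ Sigma @ Vh, A))            # False
```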
If $\mathbf{A}\mathbf{A}^{*}$ is resolved, then: $$ \begin{array}{cc|cc} u_{1} & u_{2} & v_{1} & v_{2} \\\hline \color{blue}{+} & \color{blue}{+} & \color{red}{+} & \color{red}{+} \\ \color{blue}{+} & \color{blue}{-} & \color{red}{+} & \color{red}{-} \\ \color{blue}{-} & \color{blue}{+} & \color{red}{-} & \color{red}{+} \\ \color{blue}{-} & \color{blue}{-} & \color{red}{-} & \color{red}{-} \\ \end{array} $$ If instead $\mathbf{A}^{*}\mathbf{A}$ is resolved, then: $$ \begin{array}{cc|cc} v_{1} & v_{2} & u_{1} & u_{2} \\\hline \color{blue}{+} & \color{blue}{+} & \color{red}{+} & \color{red}{+} \\ \color{blue}{+} & \color{blue}{-} & \color{red}{+} & \color{red}{-} \\ \color{blue}{-} & \color{blue}{+} & \color{red}{-} & \color{red}{+} \\ \color{blue}{-} & \color{blue}{-} & \color{red}{-} & \color{red}{-} \\ \end{array} $$ Another example is in SVD and the columns — I did this wrong but it seems that it still works, why?<|endoftext|> TITLE: Integral ${\large\int}_0^\infty\big(2J_0(2x)^2+2Y_0(2x)^2-J_0(x)^2-Y_0(x)^2\big)\,dx$ QUESTION [10 upvotes]: I'm interested in the following definite integral: $$\int_0^\infty\big(2J_0(2x)^2-J_0(x)^2+2Y_0(2x)^2-Y_0(x)^2\big)\,dx,\tag1$$ where $J_\nu$ and $Y_\nu$ are the Bessel functions of the first and the second kind. Mathematica evaluates this integral symbolically to $\frac{\ln2}\pi$, but the result of a numerical integration looks more like $\frac{\ln4}\pi$, so I suspect the symbolic result is incorrect. Moreover, it looks like both components converge and make equal contribution: $$\int_0^\infty\big(2J_0(2x)^2-J_0(x)^2\big)\,dx\stackrel?=\int_0^\infty\big(2Y_0(2x)^2-Y_0(x)^2\big)\,dx\stackrel?=\frac{\ln2}\pi,\tag2$$ but it is more difficult to check numerically, because both integrands here are oscillating (unlike $(1)$ where the integrand looks monotonic). How can we find values of these integrals and prove them correct? Can we generalize results for values of the index $\nu$ other than $0$? REPLY [6 votes]: Somewhat similar to this answer, we can use Ramanujan's master theorem to show that $$\int_{0}^{\infty}\big(2J_0(2x)^2-J_0(x)^2\big)\,dx = \frac{\ln 2}{\pi}. $$ As $x \to \infty$, $$\big(2J_0(2x)^2-J_0(x)^2\big) \sim \frac{\sin (4x)-\sin (2x)}{\pi x}. \tag{1}$$ So the integral does indeed converge (but not absolutely). The hypergeometric representation of the square of the Bessel function of the first kind of order zero is $$J_{0}(z)^{2} = \, _1F_2\left(\frac{1}{2}; 1, 1; -z^{2} \right). $$ So for $a>0$ and $0 < s < 1$, the Mellin transform of $J_{0}(ax)^{2}$ is $$ \begin{align} \int_{0}^{\infty} x^{s-1} J_{0}(ax)^{2} \,dx &= \int_{0}^{\infty} x^{s-1} \, _1F_2\left(\frac{1}{2}; 1, 1; -(ax)^{2} \right) \, dx \\ &= \frac{1}{2a^{s}}\int_{0}^{\infty} u^{s/2-1} \, _1F_2\left(\frac{1}{2}; 1, 1; -u \right) \, du \\ &= \frac{1}{2a^{s}} \Gamma\left(\frac{s}{2} \right) \frac{\Gamma \left(\frac{1}{2}-\frac{s}{2} \right) \Gamma(1)^{2} }{\Gamma\left(\frac{1}{2}\right) \Gamma\left(1- \frac{s}{2} \right)^{2}} \\ &= \frac{1}{2a^{s}} \Gamma\left(\frac{s}{2} \right) \frac{\Gamma \left(\frac{1}{2}-\frac{s}{2} \right)}{\sqrt{\pi} \, \Gamma\left(1- \frac{s}{2} \right)^{2}} .
\end{align}$$ Therefore, assuming we can bring the limit inside the integral, $$ \begin{align} \int_0^\infty\big(2J_0(2x)^2-J_0(x)^2\big)\,dx &= \frac{1}{\pi} \lim_{s \to 1^{-}} \left(\frac{1}{2^{s}}- \frac{1}{2} \right) \Gamma \left(\frac{1}{2} - \frac{s}{2} \right) \\ &= \frac{1}{\pi} \lim_{s \to 0^{-}} \left( \frac{1}{2^{s+1}} - \frac{1}{2} \right)\Gamma \left(- \frac{s}{2} \right) \\ &= \frac{1}{\pi} \lim_{s \to 0^{-}} \left(- \frac{\ln 2}{2} s + \mathcal{O}(s^{2}) \right) \left(- \frac{2}{s} + \mathcal{O}(1) \right) \\ &= \frac{\ln 2}{\pi}. \end{align}$$ The same approach seemingly also shows that $$\int_{0}^{\infty}\big(2J_v(2x)^2-J_v(x)^2\big)\,dx = \frac{\ln 2}{\pi} \, , \quad v> - \frac{1}{2}. $$ $(1)$ https://en.wikipedia.org/wiki/Bessel_function#Asymptotic_forms<|endoftext|> TITLE: Represent a Uniform[0,1] random variable as a sum of independent Bernoulli(1/2) random variables QUESTION [11 upvotes]: With $X \sim U[0,1]$, Lecturer says that $X = \sum_{k\ge 1} B_k(\frac{1}{2}) 2^{-k}$ where the $B_k(\frac{1}{2})$ are independent Bernoulli random variables with parameter $1/2$. I have no idea how we can express uniformly distributed RV like that. Can anyone please explain? Thanks, REPLY [17 votes]: Theorem. $(X_i)_{i\in\mathbb{N}}\text{ iid Uniform }\{0,1\}\iff X=(0.X_1X_2X_3...)_2\sim\text{ Uniform }[0,1].$ Proof: ($\implies$) If random variables (r.v.s) $X_1,X_2,\ldots$ are i.i.d. uniformly distributed on the set $\{0,1\}$, then $X=(0.X_1X_2\ldots)_2$ is a r.v. uniformly distributed on the real interval $[0,1]$. To see this, note that for any $x=(0.x_1x_2\ldots)_2\in[0,1)$ (always taking, WLOG, the unique binary representation of $x$ that has infinitely many $0$s), we have the following: $$\begin{align}\{X > x\} = & \{X_1>x_1\}\cup\\ &\{\{X_1=x_1\}\cap \{X_2>x_2\}\}\cup\\ &\{\{X_1=x_1\}\cap \{X_2=x_2\}\cap\{X_3>x_3\} \}\cup\\ &\ldots \end{align}$$ Now, $P(X_i >x_i) = \frac{1}{2}(1-x_i)$, so the probability of the above disjoint union is just $$\begin{align}P(X>x) &= \frac{1}{2}(1-x_1) + \frac{1}{2^2}(1-x_2) + \frac{1}{2^3}(1-x_3)+\ldots\\ &= \sum_{i=1}^\infty \frac{1}{2^i} - \sum_{i=1}^\infty \frac{x_i}{2^i}\\ &= 1 - x\\ \therefore P(X\le x) &= x \end{align} $$ therefore $X$ is a r.v. uniformly distributed on the interval $[0,1]$. $(\impliedby)$ If $X$ is a r.v. uniformly distributed on the interval $[0,1]$ and $X_i$ denotes its $i$th binary digit (following the same convention as above for unique representation), then the $X_i$ are iid Uniform on the set $\{0,1\}$. To see this, note that $$P\left(X_{i_1}=x_{i_1},X_{i_2}=x_{i_2},...,X_{i_k}=x_{i_k}\right)=P\left(X\in 0.*{x_{i_1}}*{x_{i_2}}*...*{x_{i_k}}*\right)$$ where $\ \ 0.*{x_{i_1}}*{x_{i_2}}*...*{x_{i_k}}*\ \ $ denotes the set of real numbers in the interval $[0,1]$ having the specified binary digits. For example, $$\begin{align}0.1*&=[0.1000...,\ 0.1000...+0.0111...)=[0.1, 1)\\ \\ 0.*x_3*&=0.00x_3*\ \cup\ 0.01x_3*\ \cup\ 0.10x_3*\ \cup\ 0.11x_3*\\ \\ &= [0.00x_3, 0.00x_3+0.001)\ \cup\ [0.01x_3, 0.01x_3+0.001)\\ \\ &\quad\ \cup\ [0.10x_3, 0.10x_3+0.001)\ \cup\ [0.11x_3, 0.11x_3+0.001),\\ \\ \end{align}$$ and in general $$\begin{align} \ 0.*{x_{i_1}}*{x_{i_2}}*...*{x_{i_k}}*&=\bigcup_{s_1s_2...s_k\in\{0,1\}^{i_k-k}}\left[0.s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}},\ \ 0.s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}}+2^{-{i_k}}\right). 
\end{align}$$ That is, the general term $\ 0.*{x_{i_1}}*{x_{i_2}}*...*{x_{i_k}}*\ $ is a union of $2^{i_k-k}$ disjoint intervals, each having width $2^{-i_k}.$ (The binary string $s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}}$ is $i_k$ digits long and contains $k$ $x$s, so $s_1s_2...s_k$ is a binary string of length $i_k-k$, of which kind there are $2^{i_k-k}$ altogether.) Therefore, $$\begin{align}&P\left(X_{i_1}=x_{i_1},X_{i_2}=x_{i_2},...,X_{i_k}=x_{i_k}\right)\\ \\ &=P(X\in 0.*{x_{i_1}}*{x_{i_2}}*...*{x_{i_k}}*)\\ \\ &=P\left(X\in\bigcup_{s_1s_2...s_k\in\{0,1\}^{i_k-k}}\left[0.s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}},\ \ 0.s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}}+2^{-{i_k}}\right) \right)\\ \\ &=\sum_{s_1s_2...s_k\in\{0,1\}^{i_k-k}} P\left(X\in\left[0.s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}},\ \ 0.s_1{x_{i_1}}s_2{x_{i_2}}...s_k{x_{i_k}}+2^{-{i_k}}\right) \right)\\ \\ &= 2^{i_k-k}\cdot 2^{-i_k}\\ &= 2^{-k}. \end{align}$$ As a special case, $P(X_i=x_i) = 2^{-1}$, so we have mutual independence of the $X_i$: $$P\left(X_{i_1}=x_{i_1},...,X_{i_k}=x_{i_k}\right) = 2^{-k}=\prod_{j=1..k}P\left(X_{i_j}=x_{i_j}\right); $$ consequently, the $X_i$ are iid Uniform on the set $\{0,1\}.$ QED<|endoftext|> TITLE: If $\lambda$ is the eigen-value of a $n\times n$ non-singular orthogonal matrix $A$, then prove that $\frac{1}{\lambda}$ is also an eigen-value. QUESTION [5 upvotes]: QUESTION: If $\lambda$ is the eigen-value of a $n\times n$ non-singular matrix $A$ and $A$ is a real orthogonal matrix, then prove that $\frac{1}{\lambda}$ is an eigen-value of the matrix $A$. MY ATTEMPT: Since $\lambda$ is the eigen-value of a $n\times n$ matrix $A$, we have $$|A-\lambda I_n|=0$$ Also since $A$ is a real orthogonal matrix,we have $$AA^T=A^TA=I_n$$ So we can conclude that $$|A-\lambda( AA^T)|=0$$ Or, $$|\lambda A\left(\frac{1}{\lambda}I_n- A^T\right)|=0$$ Or, $$\left|\lambda A\right|\cdot \left|\frac{1}{\lambda}I_n- A^T\right|=0$$ Or, since $A$ is non-singular, $$\left|A^T-\frac{1}{\lambda} I_n\right|=0$$ So, we can conclude that $\frac{1}{\lambda}$ is an eigen-value of the matrix $A^T$. But how do I prove that $\frac{1}{\lambda}$ is an eigen-value of the matrix $A$? Is my working faulty? Or is there a mistake in the question? Please help. REPLY [2 votes]: Let's prove that for any matrix $A$ we have $\det{A}=\det{A^T}$. Once this is done it is obvious that $$\det{(A-\lambda\cdot I)^T}=\det{(A^T-\lambda\cdot I)}$$ and this proves that $A$ and $A^T$ have the same characteristic polynomial and therefore the same eigenvalues. Let's now start wit the Leibniz formula for a determinant $$\det{A}=\sum_{\sigma\in\Sigma_n}\epsilon(\sigma)a_{\sigma(1),1}\cdots a_{\sigma(n),n}$$ Transposing we get $$\det{A^T}=\sum_{\sigma\in\Sigma_n}\epsilon(\sigma)a_{1,\sigma(1)}\cdots a_{n,\sigma(n)}$$ Because all the $\sigma$ are bijections $$\det{A^T}=\sum_{\sigma\in\Sigma_n}\epsilon(\sigma)a_{\sigma^{-1}(1),1}\cdots a_{\sigma^{-1}(n),n}$$ Now reindexing with $\tau=\sigma^{-1}$ we get $$\det{A^T}=\sum_{\tau\in\Sigma_n}\epsilon(\tau)a_{\tau(1),1}\cdots a_{\tau(n),n}=\det{A}$$<|endoftext|> TITLE: 'Bee flying between two trains' problem QUESTION [41 upvotes]: There is a famous arithmetic question : Two trains $150$ miles apart are traveling toward each other along the same track. The first train goes $60$ miles per hour; the second train rushes along at 90 miles per hour. A fly is hovering just above the nose of the first train. 
It buzzes from the first train to the second train, turns around immediately, flies back to the first train, and turns around again. It goes on flying back and forth between the two trains until they collide. If the fly's speed is $120$ miles per hour, how far will it travel? It is easy to determine the distance travelled by the bee. But how to determine how many times it touches first/second train? or Which train it touches last? REPLY [4 votes]: Here we derive some recurrence relations to determine position and times the bee meets one of the trains. Scenario: We consider the interval $[0,150]$ on the $x$-axis with train $A$ starting at time $t=0$ at position $x=0$ and train $B$ starting at the same time from $x=150$. Train A moves with $60$ mph towards B whereas train B moves with $90$ mph towards A. The bee starts at time $t=0$ from $x=0$ towards $B$ with $120$ mph. It meets $B$ the first time at position $B_1$ after time $t_1$. At that time we denote the position of train $A$ with $A_1$. Then it turns around flies back to $A$ and meets $A$ at position $A_2$ after time $t_2$ while the train B is at position $B_2$ at that time. Then the bee turns around again and continues this dangerous game. We denote the positions of the train A with $A_n$ and the positions of the train B with $B_n$, $n\geq 0$. Note the bee meets the train $A$ at even positions $A_{2n}$ and it meets the train B at odd positions $B_{2n+1}$. The delta time between $A_{n}$ and $A_{n+1}$ is denoted with $t_{n+1}$ which is the same as the time between $B_{n}$ and $B_{n+1}$. We consider the following sequences \begin{array}{rlllll} \text{Train A: }&A_0=0,&A_1,&A_2,&\ldots &A_{n},\ldots\\ \text{Train B: }&B_0=150,&B_1,&B_2,&\ldots &B_{n},\ldots\\ \text{Delta times: }&t_0=0,&t_1,&t_2,&\ldots &t_{n},\ldots\\ \text{Bee: }&A_0=0,&B_1,&A_2,&\ldots&B_{2n-1},A_{2n},B_{2n+1},\ldots \end{array} We show the following is valid for $n\geq 0$ \begin{align*} A_{2n}&=60\left(1-\frac{1}{3^n7^n}\right)\qquad&B_{2n}&=150-90\left(1-\frac{1}{3^n7^n}\right)\\ A_{2n+1}&=60\left(1-\frac{2}{3^n7^{n+1}}\right)\qquad&B_{2n+1}&=150-90\left(1-\frac{2}{3^n7^{n+1}}\right)\\ \\ t_{2n}&=\frac{5}{3^n7^n}\\ t_{2n+1}&=\frac{5}{3^n7^{n+1}} \end{align*} A first plausibility check shows \begin{align*} \lim_{n\rightarrow \infty}A_{2n}=\lim_{n\rightarrow \infty}A_{2n+1} =\lim_{n\rightarrow \infty}B_{2n}=\lim_{n\rightarrow \infty}B_{2n+1}=60 \end{align*} The trains will meet each other after one hour and since the speed of A is $60$ mph and it started from $x=0$ at $t=0$ its position is at $x=60$ after one hour. The same holds for $B$ since B moves with $90$ mph from $x=150$ and $150 - 90 = 60$. Recurrence relations: The bee meets the train A at delta time $t_{2n}$ at position $A_{2n}$ and then flies with $120$ mph towards the train B. We can calculate the next meeting point $B_{2n+1}$ with train B as \begin{align*} A_{2n}+120 t_{2n+1}&=B_{2n}-90 t_{2n+1}\\ t_{2n+1}&=\frac{1}{210}\left(B_{2n}-A_{2n}\right)\tag{1} \end{align*} The positions of A and B after delta time $t_{2n+1}$ are \begin{align*} A_{2n+1}=A_{2n}+60t_{2n+1}=\frac{5}{7}A_{2n}+\frac{2}{7}B_{2n}\tag{2}\\ B_{2n+1}=B_{2n}-90t_{2n+1}=\frac{3}{7}A_{2n}+\frac{4}{7}B_{2n}\tag{3} \end{align*} Similarly the bee meets the train B at odd indexed delta times $t_{2n+1}$ at position $B_{2n+1}$ and then flies with $120$ mph towards train A. 
We describe the next meeting position $A_{2n+2}$ with train A and obtain \begin{align*} B_{2n+1}-120 t_{2n+2}&=A_{2n+1}+60 t_{2n+2}\\ t_{2n+2}&=\frac{1}{180}\left(B_{2n+1}-A_{2n+1}\right) \end{align*} The positions of A and B after delta time $t_{2n+2}$ are \begin{align*} A_{2n+2}=A_{2n+1}+60t_{2n+2}=\frac{2}{3}A_{2n+1}+\frac{1}{3}B_{2n+1}\tag{4}\\ B_{2n+2}=B_{2n+1}-90t_{2n+2}=\frac{1}{2}A_{2n+1}+\frac{1}{2}B_{2n+1}\tag{5} \end{align*} From these equations we derive recurrence relations for odd and even $A_n$ in terms of $A_{n-1}$ and $A_{n-2}$, and we do the same for $B_n$. We obtain from (2) - (5) \begin{align*} A_{2n+2}&=\frac{2}{3}A_{2n+1}+\frac{1}{3}B_{2n+1}\\ &=\frac{2}{3}A_{2n+1}+\frac{1}{3}\left(\frac{3}{7}A_{2n}+\frac{4}{7}B_{2n}\right)\\ &=\frac{2}{3}A_{2n+1}+\frac{1}{7}A_{2n}+\frac{4}{21}\left(\frac{7}{2}A_{2n+1}-\frac{5}{2}A_{2n}\right)\\ &=\frac{4}{3}A_{2n+1}-\frac{1}{3}A_{2n} \end{align*} and \begin{align*} A_{2n+1}&=\frac{5}{7}A_{2n}+\frac{2}{7}B_{2n}=\frac{5}{7}A_{2n}+\frac{2}{7}\left(\frac{1}{2}A_{2n-1}+\frac{1}{2}B_{2n-1}\right)\\ &=\frac{5}{7}A_{2n}+\frac{1}{7}A_{2n-1}+\frac{1}{7}\left(3A_{2n}-2A_{2n-1}\right)\\ &=\frac{8}{7}A_{2n}-\frac{1}{7}A_{2n-1} \end{align*} In the same way we derive for odd and even $n$ a representation of $B_n$ in terms of $B_{n-1}$ and $B_{n-2}$. \begin{align*} B_{2n+1}&=\frac{8}{7}B_{2n}-\frac{1}{7}B_{2n-1}\\ B_{2n+2}&=\frac{4}{3}B_{2n+1}-\frac{1}{3}B_{2n} \end{align*} With the help of (1) and (2) we can calculate $A_1$ and obtain a fully specified recurrence relation for $A_n$: \begin{align*} A_{2n+1}&=\frac{8}{7}A_{2n}-\frac{1}{7}A_{2n-1}\qquad\qquad n\geq 1\tag{6}\\ A_{2n+2}&=\frac{4}{3}A_{2n+1}-\frac{1}{3}A_{2n}\qquad\qquad n\geq 0\tag{7}\\ A_0&=0\\ A_1&=\frac{300}{7} \end{align*} $$ $$ Generating function for $A_{n}$: We derive based upon the recurrence relation for $A_n$ a generating function $A(x)$ with \begin{align*} A(x)=\sum_{n= 0}^\infty A_nx^n \end{align*} Since, according to (6) and (7), we have to treat the even and odd parts separately, we do it in two steps and consider the even part $(A(x)+A(-x))/2$ and the odd part $(A(x)-A(-x))/2$ of the generating function accordingly. We start from (7) and replace for convenience $n$ with $n-1$. We derive from \begin{align*} A_{2n}&=\frac{4}{3}A_{2n-1}-\frac{1}{3}A_{2n-2}\qquad\qquad n\geq 1\\ \end{align*} the generating function \begin{align*} \sum_{n= 1}^\infty A_{2n}x^{2n}&=\frac{4}{3}\sum_{n= 1}^\infty A_{2n-1}x^{2n}-\frac{1}{3}\sum_{n= 1}^\infty A_{2n-2}x^{2n}\\ \frac{A(x)+A(-x)}{2}-A_0&=\frac{4}{3}x\sum_{n= 0}^\infty A_{2n+1}x^{2n+1}-\frac{1}{3}x^2\sum_{n= 0}^\infty A_{2n}x^{2n}\\ &=\frac{4}{3}x\cdot\frac{A(x)-A(-x)}{2}-\frac{1}{3}x^2\cdot\frac{A(x)+A(-x)}{2} \end{align*} Noting that $A_0=0$ we obtain after some rearrangement \begin{align*} A(x)(x^2-4x+3)+A(-x)(x^2+4x+3)=0\tag{8} \end{align*} Now the odd part (6).
We obtain \begin{align*} \sum_{n=1}^\infty A_{2n+1}x^{2n+1}&=\frac{8}{7}\sum_{n= 1}^\infty A_{2n}x^{2n+1}-\frac{1}{7}\sum_{n= 1}^\infty A_{2n-1}x^{2n+1}\\ \frac{A(x)-A(-x)}{2}-A_1x&=\frac{8}{7}x\cdot\frac{A(x)+A(-x)}{2}-\frac{1}{7}x^2\frac{A(x)-A(-x)}{2} \end{align*} with $A_1=\frac{300}{7}$ we obtain \begin{align*} A(x)(x^2-8x+7)-A(-x)(x^2+8x+7)=600x\tag{9} \end{align*} Combining (8) and (9) we can eliminate $A(-x)$ and obtain after some simplifications and partial fraction decomposition \begin{align*} A(x)&=\frac{300x(x+3)}{(x-1)(x^2-21)}\\ &=\frac{60}{1-x}+180\frac{2x+7}{x^2-21}\\ &=\frac{60}{1-x}+180\left(\frac{2\sqrt{21}-7}{42\left(1+\frac{x}{\sqrt{21}}\right)} -\frac{2\sqrt{21}+7}{42\left(1-\frac{x}{\sqrt{21}}\right)}\right)\\ &=60\sum_{n= 0}^\infty x^n+\frac{30}{7}(2\sqrt{21}-7)\sum_{n= 0}^\infty \left(-\frac{1}{\sqrt{21}}\right)^nx^n\\ &\qquad\qquad\qquad-\frac{30}{7}(2\sqrt{21}+7)\sum_{n= 0}^\infty \left(\frac{1}{\sqrt{21}}\right)^nx^n\\ &=60\sum_{n= 0}^\infty x^n-\frac{60}{7}\sqrt{21}\sum_{n= 0}^\infty \frac{1-(-1)^n}{\left(\sqrt{21}\right)^n}x^n -30\sum_{n=0}^\infty \frac{1+(-1)^n}{\left(\sqrt{21}\right)^n}x^n\\ &=60\sum_{n= 0}^\infty x^n-\frac{120}{7}\sum_{n\geq 0}\frac{1}{21^n}x^{2n+1} -60\sum_{n=0}^\infty \frac{1}{21^n}x^{2n}\\ &=60\left(\sum_{n= 0}^\infty x^n-\sum_{n= 0}^\infty \frac{2}{3^n7^{n+1}}x^{2n+1} -\sum_{n= 0}^\infty \frac{1}{3^n7^n}x^{2n}\right)\tag{10} \end{align*} We can now easily deduce the coefficients $A_n$ from (10). \begin{align*} A_{2n}&=60\left(1-\frac{1}{3^n7^n}\right)\\ A_{2n+1}&=60\left(1-\frac{2}{3^n7^{n+1}}\right) \end{align*} and the first part of the claim follows. According to (2) we obtain after some rearrangement \begin{align*} B_{2n}&=\frac{7}{2}A_{2n+1}-\frac{5}{2}A_{2n}\\ &=150-90\left(1-\frac{1}{3^n7^n}\right) \end{align*} and we get using (3) \begin{align*} B_{2n+1}&=\frac{3}{7}A_{2n}+\frac{4}{7}B_{2n}\\ &=\frac{3}{7}A_{2n}+2A_{2n+1}-\frac{10}{7}A_{2n}\\ &=2A_{2n+1}-A_{2n}\\ &=150-90\left(1-\frac{2}{3^n7^{n+1}}\right) \end{align*} which is the second part of the claim. Finally we obtain \begin{align*} t_{2n+1}&=\frac{1}{210}\left(B_{2n}-A_{2n}\right)=\frac{5}{3^n7^{n+1}}\tag{11}\\ t_{2n}&=\frac{1}{180}\left(B_{2n-1}-A_{2n-1}\right)=\frac{5}{3^{n}7^{n}} \end{align*} showing the last part of the claim is valid. Epilogue: Of course we know the game is over after one hour and the distance travelled by the bee is $120$ miles. But we could also check it based upon the calculations above. The total time the bee is flying is according to (11) \begin{align*} \sum_{n=0}^\infty\left(t_{2n+1}+t_{2n+2}\right) &=\sum_{n=0}^\infty\left(\frac{5}{3^n7^{n+1}}+\frac{5}{3^{n+1}7^{n+1}}\right)\\ &=\left(\frac{5}{7}+\frac{5}{21}\right)\sum_{n=0}^\infty\frac{1}{3^n7^n}\\ &=\frac{20}{21}\frac{1}{1-\frac{1}{21}}\\ &=\frac{20}{21}\cdot\frac{21}{20}\\ &=1 \end{align*} which is precisely one hour. The distance travelled by the bee is according to the last result \begin{align*} 120t_1+120t_2+120t_3+\cdots=120\sum_{n=0}^\infty\left(t_{2n+1}+t_{2n+2}\right)= 120 \end{align*} miles.<|endoftext|> TITLE: How to prove that the image of a prime ideal is also a prime ideal QUESTION [5 upvotes]: If $f:A\rightarrow R$ be a ring homomorphism, where $A$ and $R$ are commutative rings. If $f$ is surjective and $P$ is a prime ideal in $A$, how to prove that $f(P)$ is a prime ideal in $R$? This a an exercise I came across while self-studying joseph rotman's book advanced modern algebra. 
I searched this website and find similar questions but they have another condition: $P$ contains the kernel of $f$. However, I want to know how to prove this result without that condition. Give thanks to any useful help! REPLY [5 votes]: Suppose $r,s\in R$ and $rs\in f(P)$. Since $f$ is surjective, you have $r=f(a)$ and $s=f(b)$ for some $a,b\in A$, so the condition is $$ f(a)f(b)=f(c) $$ for some $c\in P$. In turn this becomes $ab-c\in\ker f$. If $P\supseteq\ker f$, you can conclude $ab-c\in P$, so also $ab\in P$ and it's easy to finish. Without the assumption $\ker f\subseteq P$ there's no way to prove the statement. For instance, consider $A=\mathbb{Z}$ and $R=\mathbb{Z}/2\mathbb{Z}$, with $f$ the canonical projection. Then $f(3\mathbb{Z})=R$, which is not a prime ideal. If you think more about the problem, the assertion would be equivalent to the statement that, if $P$ is a prime ideal of $A$ and $I$ is any ideal of $A$, then $P+I$ is prime: just consider the projection $f\colon A\to A/I$. Saying that $f(P)$ is prime in $A/I$ implies that $f^{-1}(f(P))=P+I$ is prime in $A$.<|endoftext|> TITLE: Solve for integers $x, y, z$ such that $x + y = 1 - z$ and $x^3 + y^3 = 1 - z^2$. QUESTION [11 upvotes]: Solve for integers $x, y, z$ such that $x + y = 1 - z$ and $x^3 + y^3 = 1 - z^2$. I think we'll have to use number theory to do it. Simply solving the equations won't do. If we divide the second equation by the first, we get: $$x^2 - xy + y^2 = 1 + z.$$ Also, since they are integers $z^2 \ge z \implies -z^2 \le -z$. This implies $$x + y = 1 - z \ge 1 - z^2 = x^3 + y^3.$$ This shows that atleast one of $x$ and $y$ is negative with the additive inverse of the negative being larger than that of the positive. I have tried but am not able to proceed further. Can you help me with this? Thanks. REPLY [4 votes]: Substituting the first in the second gives $$x^3+y^3+(x+y)^2-2(x+y)=0$$ so $x+y=0$ (giving $z=1$) or $$x^2-xy+y^2+x+y-2=0,$$ that is, $$\left(x+\frac12-\frac y2\right)^2+\frac34(y+1)^2-3=0$$ so $(y+1)^2\leq4$, which leaves to check $y\in\{-3,-2,-1,0,1\}$. All solutions are given by $$\begin{align*}(x,y,z)\in\{&(a,-a),\;a\in\mathbb Z,&&(z=1)\\ &(-2,-3),&&(z=6)\\ &(-3,-2),(0,-2),&&(z=6,z=3)\\ &(-2,0),(1,0),&&(z=3,z=0)\\ &(0,1)&&(z=0)\}\end{align*}$$ Perhaps some explanation how I got $\left(x+\frac12-\frac y2\right)^2+\frac34(y+1)^2-3=0$. This is called Completing the square: Starting from $x^2-xy+y^2+x+y-2=0$ we first get rid of the linear term in $x$. Using $x^2+x=(x+\frac12)^2-\frac14$ we find: $$\left(x+\frac12\right)^2-\frac14-xy+y^2+y-2=0.$$ Let $X=x+\frac12$. We have $$X^2-\frac14-Xy+\frac y2+y^2+y-2=0.$$ Now we want to get rid of the mixed term (for the moment we don't care about additional terms in $y$ or constant terms). Using $X^2-Xy=(X-\frac y2)^2-\frac{y^2}4$ we find: $$\left(X-\frac y2\right)^2-\frac{y^2}4-\frac14+\frac y2+y^2+y-2=0.$$ Now we're left only with terms in $y$: $\frac34y^2+\frac32y-\frac94$. Using $y^2+2y=(y+1)^2-1$ we find: $$\frac34y^2+\frac32y-\frac94=\frac34(y+1)^2-\frac34-\frac94.$$ So finally, $$\left(X-\frac y2\right)^2+\frac34(y+1)^2-3=0;\qquad X=x+\frac12.$$ Note: Using this technique, any (inhomogeneous) binary quadratic equation $$ax^2+bxy+cy^2+\text{linear and constant terms}=0$$ with nonzero discriminant $D=b^2-4ac$ can be rewritten in the form $$U^2-DV^2=c$$ where $U$ is a linear (better: affine) function of $x$ and $y$, and $V$ is an affine function of $y$. If $D<0$ (as was the case here), the equation clearly has only finitely many solutions. 
It can be shown that if $D>0$ it has either $0$ or $\infty$ solutions (in that case we call it a Pell-type-equation or something). If $D=0$ things get ugly. Geometrically, these correspond to finding integer points on an ellipse if $D<0$, a hyperbola if $D>0$ and a parabola or a union of at most two lines if $D=0$.<|endoftext|> TITLE: Prove that $\mathbb P(X>Y) =\frac{b}{a + b}$ if $X, Y$ are exponentially distributed with parameters $a$ and $b$. QUESTION [6 upvotes]: Let $X, Y$ be an exponentially distributed random variables with parameters $a, b$. Then $X$ has pdf: $$f_X(x) =\begin{cases} a e^{-a x},& x\geq 0\\ 0,& \text{otherwise}.\end{cases}$$ Suppose $X$ and $Y$ independent. Show that $$\mathbb P(X>Y) = \frac{b}{a+b}.$$ Now I thought the following: $$f(x,y) = f_X(x)\ f_Y(y) = abe^{-ax -by},\qquad\text{for } x,y > 0.$$ And then $$\mathbb P(X>Y) = \int_0^\infty \int_0^x a b e^{-ax -by}\,dydx$$ However, if I solve this (manually or using Wolframalpha), I can't seem to end up with $\frac{b}{a+b}$. Any ideas? REPLY [3 votes]: There is an alternative approach see this reference by first determining the pdf of $X-Y$ (or find it in tables) $$f(x) = \frac{ab}{a+b} \begin{cases}e^{b x} & \text{if} \ x \leq 0\\ e^{-a x} & \text{if} \ x \geq 0 \end{cases} \ \ \ (*)$$ Remarks: 1) the curve of $f$ is tent-shaped (see figure below for the case $a=3$ and $b=1$). 2) a particular, rather well-known case of (*), is for $a=b$, the so-called double exponential $f(x)=\frac{1}{2}e^{-|x|}$. Then the result is $$\int_{x=0}^{\infty}f(x)dx=\int_{x=0}^{\infty}\frac{ab}{a+b}e^{-ax}dx=\frac{ab}{a+b}\frac{1}{a}=\frac{b}{a+b}.$$<|endoftext|> TITLE: Detailed explanation of the Γ reflection formula understandable by an AP Calculus student QUESTION [11 upvotes]: In my recent question about the Fransén-Robinson constant, an answer was given using the Gamma reflection formula. However, as an AP Calculus student, I didn't quite understand how the reflection formula worked. After two days of research, I have only found explanations for the Gamma reflection formula in terms of Weierstrass products, which I don't begin to understand. Is there a proof for the Gamma reflection formula by which I can understand, or at least begin to understand, how this formula works? REPLY [10 votes]: Note: This is a description from N.N. Lebedev, Special Functions and Their Applications, Dover, New York, 1972, it is not my work but it can be used as starting point. Lebedev uses in his section 1.2 (Some Relations Satisfied by the Gamma Function) a double-integral approach. From the well-known integral formula $$\Gamma(z) = \int_{0}^{\infty}t^{z-1}e^{-t}\, d t \qquad (\Re z > 0)$$ temporarily assume $ 0 < \Re z < 1,\,$ use the formula for $1-z\,$ and get $$\Gamma(z)\Gamma(1-z) = \int_{0}^{\infty}\int_{0}^{\infty}s^{-z}t^{z-1}e^{-(s+t)}\,ds\, dt \qquad (0<\Re z <1)$$ With the new variables $u = s + t, v = t/s$ this gives $$\Gamma(z)\Gamma(1-z) = \int_{0}^{\infty}\int_{0}^{\infty}\frac{v^{z-1}}{1+v}e^{-u}\,du\, dv = \int_{0}^{\infty}\frac{v^{z-1}}{1+v} dv = \frac{\pi}{\sin \pi z}$$ For the last step he refers to Titchmarsh. Then he uses continuation to extend the formula to all $z\in \mathbb{C}$ without the integers.<|endoftext|> TITLE: Meeting probability of two bankers: uniform distribution puzzle QUESTION [6 upvotes]: Two bankers each arrive at the station at some random time between 5PM and 6PM (arrival time for each of them is uniformly distributed). They stay exactly five minutes and then leave. 
What is the probability that they will meet on a given day? I am not sure how to go about modelling this problem as a uniform distribution and solving it. Appreciate any help. Here is how I start with it: Assume banker A arrives X minutes after 5PM and B arrives Y minutes after 5PM. Both X and Y are uniformly distributed between 5PM and 6PM. So the pdf of X, Y is $\frac{1}{60}$. Now A and B will meet if $|X - Y| < 5$. So the required probability is $P(|X - Y| < 5)$ = Integral of the joint distribution function of $|X - Y|$ from $0$ to $5$? Now not sure how to write the equation from this point onwards and solve it. Answer: $\frac {23}{144}$ REPLY [5 votes]: Hint: We will consider the situation in terms of minutes. We notice that they will meet only if $|X-Y|\leq 5/60$, where $X,Y$ are the arrival times as fractions of the hour. We notice that $X,Y\overset{iid}{\sim} \text{unif(0,1)}$. Then the problem becomes $$P(|X-Y| \leq 5/60).$$ It will help to draw a picture. No integration required. You can do it considering $X,Y\overset{iid}{\sim}\text{unif}(0,60)$ too. Notice that we want the blue part. Also, notice that we can instead consider the complement \begin{align*} P(|X-Y|<5) &= P(-5 < X-Y < 5) \\ &= 1 - P(|X-Y| \geq 5) \\ &= 1 - \left(\frac{55}{60}\right)^2 = \frac{23}{144}. \end{align*}<|endoftext|> TITLE: Prove that $\frac{x^x}{x+y}+\frac{y^y}{y+z}+\frac{z^z}{z+x} \geqslant \frac32$ QUESTION [22 upvotes]: $x,y,z >0$, prove $$\frac{x^x}{x+y}+\frac{y^y}{y+z}+\frac{z^z}{z+x} \geqslant \frac32$$ I have a solution for this beautiful and elegant inequality. I am posting this inequality to share and see other solutions from everyone on MSE. Since some of you asked for my solution, I used the following estimation $$x^x \geqslant \frac12 \left(x^2+1 \right)$$ Proving the rest of this inequality is very simple. Method 1: $$\sum_{cyc}\frac{x^x}{x+y} \geqslant \frac12\sum_{cyc}\frac{x^2+1}{x+y}\geqslant \frac12 \left( \frac{(x+y+z)^2}{2(x+y+z)}+\sum_{cyc}\frac{1}{x+y} \right) \geqslant \frac12 \sum_{cyc}\left(\frac{x+y}{4}+\frac{1}{x+y} \right) \geqslant \frac12 \sum_{cyc} 2 \cdot\sqrt{\frac{x+y}{4}\cdot \frac{1}{x+y}} \geqslant \frac32$$ Method 2: $$\sum_{cyc} \frac{x^x}{x+y} \geqslant \frac12 \sum_{cyc}\frac{x^2+1}{x+y} \geqslant \frac32 \sqrt[3]{\frac{(x^2+1)(y^2+1)(z^2+1)}{(x+y)(y+z)(z+x)}}\geqslant \frac32$$ It is because C-S yields $$(x^2+1)(y^2+1) \geqslant (x+y)^2$$ so $$\frac{(x^2+1)(y^2+1)(z^2+1)}{(x+y)(y+z)(z+x)} \geqslant 1$$ Method 3: Notice that $$\sum_{cyc} \frac{x^2+1}{x+y}=\sum_{cyc} \frac{y^2+1}{x+y}$$ Hence $$\sum_{cyc}\frac{x^x}{x+y} \geqslant \frac12 \sum_{cyc}\frac{x^2+1}{x+y}=\frac14\sum_{cyc} \frac{x^2+y^2+2}{x+y} \geqslant\frac14\sum_{cyc}\frac{\frac{(x+y)^2}{2}+2}{x+y}$$$$=\frac14 \sum_{cyc} \left(\frac{x+y}{2}+\frac{2}{x+y} \right) \geqslant \frac14 \sum_{cyc}2 =\frac32$$ Can you find a different solution that does not employ the estimation $x^x \geqslant \frac12(x^2+1)$ ? Update 1 The solution of user260822 is not correct. This is not a Nesbitt's inequality. We have $$\frac{x}{x+y}+\frac{y}{y+z}+\frac{z}{z+x} > 1 $$ (not $\frac32$) Update 2 Another similar question I posted a few weeks back Unconventional Inequality $ \frac{x^x}{|x-y|}+\frac{y^y}{|y-z|}+\frac{z^z}{|z-x|} > \frac72$ Please help. Thank you REPLY [11 votes]: Since $e^x\geq 1+x$ for all $x\in \mathbb{R}$, by letting $x=-\log u$ we have that for any $u >0$, $\log u\geq 1-\frac{1}{u}$, and so $u\log u\geq u-1$ for any $u>0$.
Combining these two inequalities, it follows that $$\sqrt{x^x y^y}= \exp\left(\frac{x\log x}{2}+\frac{y\log y}{2}\right)\geq \exp\left(\frac{x+y}{2}-1\right)\geq \frac{x+y}{2}.$$ Hence $$\frac{x^x}{x+y}+\frac{y^y}{y+z}+\frac{z^z}{z+x}\geq \frac{1}{2} \left(\sqrt{\frac{x^x}{y^y}}+\sqrt{\frac{y^y}{z^z}}+\sqrt{\frac{z^z}{x^x}}\right),$$ and the result follows from the AM-GM inequality. Remark: By combining $e^x\geq 1+x$ for $x\in\mathbb{R}$ and $u\log u\geq u-1$ for $u>0$ as above, we obtain the following "reverse exponential AM-GM inequality": $$\sqrt[n]{x_{1}^{x_{1}}x_{2}^{x_{2}}\cdots x_{n}^{x_{n}}}\geq \frac{x_1+x_2+\cdots+x_n}{n}.$$<|endoftext|> TITLE: Prove $\sqrt[2]{x+y}+\sqrt[3]{y+z}+\sqrt[4]{z+x} <4$ QUESTION [12 upvotes]: $x,y,z \geqslant 0$ and $x^2+y^2+z^2+xyz=4$, prove $$\sqrt[2]{x+y}+\sqrt[3]{y+z}+\sqrt[4]{z+x} <4$$ A natural though is that from the condition $x^2+y^2+z^2+xyz=4$, I tried a trig substitutions $x=2\cdot \cos A$, $y=2\cdot \cos B$ and $z=2\cdot \cos C$, where $A,B,C$ are angles of an acute triangle. Our inequality becomes $$\sqrt[2]{2(\cos A+ \cos B)}+\sqrt[3]{2(\cos B+\cos C)}+\sqrt[4]{2(\cos C+\cos A)} <4$$ I used formula $\cos(A)+\cos(B) < 2$ and estimate that $LHS <5$ but not $4$. I do not know how to proceed. REPLY [2 votes]: Remark: My second proof is given in https://artofproblemsolving.com/community/c6h2483725p20868007. There, KaiRain@AoPS gave a nice proof. Proof without using derivative We split into three cases: $y+z \le 1$ and $z+x \le 1$: We have $x + y \le 2$. Thus, we have $\mathrm{LHS} \le \sqrt{2} + 1 + 1 < 4$. The inequality is true. $y + z > 1$ and $z + x \le 1$: Since $\sqrt[3]{y+z} < \sqrt{y+z}$ and $\sqrt[4]{z+x} \le 1$, it suffices to prove that $$\sqrt{x+y} + \sqrt{y+z} \le 3.$$ By AM-QM inequality, we have $\sqrt{x+y} + \sqrt{y+z} \le \sqrt{2(x+y + y + z)}$. Thus, It suffices to prove that $$x+2y+z \le \frac{9}{2}.$$ From $x^2+y^2+z^2 + xyz = 4$, we have $y^2 + (\frac{1}{2} + \frac{y}{4})(x + z)^2 + (\frac{1}{2} - \frac{y}{4})(x - z)^2 = 4$ which results in $y^2 + (\frac{1}{2} + \frac{y}{4})(x + z)^2 \le 4$ and $x + z \le \sqrt{\frac{4(4 - y^2)}{2 + y}}$. Thus, it suffices to prove that $2y + \sqrt{\frac{4(4 - y^2)}{2 + y}} \le \frac{9}{2}$. It suffices to prove that $(\frac{9}{2} - 2y)^2 \ge \frac{4(4 - y^2)}{2 + y}$ that is $\frac{1}{4}(4y - 7)^2 \ge 0$. The inequality is true. $z + x > 1$: Since $\sqrt[3]{z+x} > \sqrt[4]{z+x}$, it suffices to prove that $$ \sqrt{x+y} + \sqrt[3]{y+z} + \sqrt[3]{z+x} < 4. $$ Note that $u\mapsto \sqrt[3]{u}$ is concave on $u > 0$. It suffices to prove that $$ \sqrt{x+y} + 2\sqrt[3]{\frac{y+z + z+x}{2}} < 4. $$ From $x^2+y^2+z^2 + xyz = 4$, we have $(z + \frac{1}{2}xy)^2 = \frac{1}{4}(4-x^2)(4-y^2)$ which results in $$z = \frac{1}{2}\sqrt{(4-x^2)(4-y^2)}-\frac{1}{2}xy \le \frac{1}{2}\frac{4-x^2+4-y^2}{2} - \frac{1}{2}xy = 2- \frac{(x+y)^2}{4}.$$ Thus, it suffices to prove that $$ \sqrt{x+y} + 2\sqrt[3]{\frac{y+x}{2} + 2- \frac{(x+y)^2}{4}} < 4. $$ Let $v = \sqrt{x+y}$. Since $x+y \le \sqrt{2(x^2+y^2)} \le \sqrt{2 \cdot 4} < 4$, we have $v\in (0, 2)$. It suffices to prove that $$v + 2\sqrt[3]{\frac{v^2}{2} + 2- \frac{v^4}{4}} < 4$$ or $$\frac{v^2}{2} + 2- \frac{v^4}{4} < \frac{1}{8}(4 - v)^3$$ or $$\frac{1}{8}(2v^4-v^3+8v^2-48v+48) > 0$$ for $v\in (0, 2)$. It is easy to prove that $2v^4-v^3+8v^2-48v+48 > 0$ for $v\in (0, 2)$. 
We are done.<|endoftext|> TITLE: Prove that, at least one of the matrices $A+B$ and $A-B$ has to be singular QUESTION [6 upvotes]: Problem: Let $A$ and $B$ be real orthogonal matrices, $n$x$n$, where $n$ is an odd number. Prove that, at least one of the matrices $A+B$ and $A-B$ has to be singular. What have I done so far: -Since matrices $A$ and $B$ are real orthogonal, it means that their determinants are $-1$ or $+1$ First I observed matrix $A+B$: $$A+B=AI+BI=ABB^T+BAA^T=AB(B^T+A^T)=AB(A+B)^T\Rightarrow$$ $$det(A+B)=det(AB)det(A+B)^T=det(A)det(B)det(A+B)$$ So, if $detA=detB$ we have $det(A+B)=det(A+B)$ which tells me nothing. But, if $detA=-detB$ we have $det(A+B)=-det(A+B)$ which can only hapen if $det(A+B)=0$ making $A+B$ singular. Then, I tried the same for $A-B$: $$A-B=AI-BI=ABB^T-BAA^T=AB(B^T-A^T)=AB(A-B)^T\Rightarrow$$ $$det(A-B)=det(AB)det(A-B)^T=det(A)det(B)det(A-B)$$ So, if $detA=-detB$ we have $det(A-B)=-det(A-B)$ which can happen if $det(A-B)=0$ making $A-B$ singular. Is this correct or should I do matrix $A-B$ differently? I am also a little confused why does it have to be said that $n$ is an odd number? What should that tell me? Any help is greatly appreciated. REPLY [5 votes]: You can write $$ A\pm B=A(\mathbf 1 \pm A^T B)=A(\mathbf 1 \pm Q), \quad Q\textrm{ orthogonal} $$ Then $A\pm B $ is singular iff $\mathbf 1 \pm Q $ is singular. But $Q$ has always an eigenvector $v$ with eigenvalue $\pm 1$ when $n$ is odd, therefore $\mathbf 1 -Q $ or $\mathbf 1 + Q$ has to be singular.<|endoftext|> TITLE: Is there a relation between the branches of the Lambert function? QUESTION [7 upvotes]: Is it possible to express $W_{-1}(z)$ exactly by a closed-form expression, allowing the principal branch function $W_0$ ? Update: I found this related post: https://mathoverflow.net/a/196321. REPLY [2 votes]: $\DeclareMathOperator{\W}{W}$ $\DeclareMathOperator{\Arg}{Arg}$ Small Edit: I took it for granted that the theory behind the continuation was assumed and understood, but it came to me just now (trying to understand user1952009's comment as well as his worries (in another thread about branch points etc.)) that maybe I should restructure this section, because it indeed looked like bad course, if it needed such a restructuring. Sorry about that. My intention in avoiding all the theoretical stuff, was precisely to avoid having the reader think that I was pushing for some sort of "novelty" solution. Unfortunately with my rep. this didn't work (see Alvarez'comment, for example). All this is really nothing more than a trivial application of analytic continuation to any analytic function, if it is adjusted properly. In any case, hopefully it's not damaged beyond repair, so if anybody has additional suggestions for further editing, removal or addition of sections before it is downvoted to hell more, fire away. If it is ever rectified to an acceptable answer, I will remove this edit. Let's see two ways of relating the two branches. $\W_0(z)$ will be denoted as $\W(z)$ for simplicity. The first is somewhat naive and would give a closed form expression, unfortunately the relationship has the wrong orientation between domain and range so the expected closed form reduces to a trivial identity. The second method is using the concept of germs in the map's analytic continuation. I am using the second method to derive a relation between $\W_0$ and $\W_1$ ( instead of between $\W_0$ and $\W_{-1}$) to make it a little more interesting, urging the interested reader to try the implementation him/herself. 
The modification of the code to reach point $\W_{-1}(1+i)$ instead is fairly trivial. 1) Through the real relation between $\W_{-1}(x)$ and $\W_0(x)$ If we graph the two branches in their corresponding domains, which are $[-1/e,0)$ for $\W_{-1}(x)$ and $[-1/e,+\infty)$ for $\W(x)$, we get using the following with(plots); W := LambertW; f := proc (z) options operator, arrow; z*exp(z) end proc;#inverse of W xbp:= -exp(-1); dx:= .2; L := [[xbp, W(xbp)]]; M := [[xbp+dx, W(xbp+dx)], [xbp+dx, W(-1, xbp+dx)]]; p1 := plot(W(x), x = xbp .. 3, scaling = constrained, color = red); p2 := plot(W(-1, x), x = xbp .. -.1, scaling = constrained, color = blue); p3 := plot(L, style = point, symbol = circle, symbolsize = 20); p4 := plot(M, style = line, color = black); p5 := plot(M, style = point, symbol = solidcircle, symbolsize = 20); display(p1, p2, p3, p4, p5); Continuation of branch $\W(x)$ downto $\W_{-1}(x)$ for real $x$, with branch point at $x_{bp}=-1/e$. The relation between the two real branches is shown with the vertical black line. Unfortunately the relation is the result of multivaluedness, so to pick it up we have to invert the function. After the following code: M := [[W(xbp+dx), xbp+dx], [W(-1, xbp+dx), xbp+dx]]; pi1 := plot(f(x), x = -3.5 .. 1, scaling = constrained); pi2 := plot(M, style = line, color = black); pi3 := plot(M, style = point, symbol = solidcircle, symbolsize = 20); display(pi1, pi2, pi3); Relation between branches $\W(x)$ and $\W_{-1}(x)$ for real $x$, displayed on the inverse of $\W$, $x\cdot e^x$. x0 := evalf(W(xbp+dx)); x1 := evalf(W(-1, xbp+dx)); evalf(f(x0)); evalf(f(x1)); And from the graph it is now obvious that, if $x_{bp}=-1/e$ and $f(x)=x\cdot e^x$, then: if $x_0=\W(x_{bp}+dx)$ and $x_1=\W_{-1}(x_{bp}+dx)$, then: $$f(x_0)=f(x_1)$$ The above however resolves trivially, since by the very definition of $\W$, $\W_k(z)\cdot e^{W_k(z)}=z$, $\forall k\in\mathbb{Z}$, so: $$\begin{align} f(x_0)&=f(x_1)\Leftrightarrow\\ f(\W(x_{bp}+dx))&=f(\W_{-1}(x_{bp}+dx))\Leftrightarrow\\ \W(x_{bp}+dx)\cdot e^{\W(x_{bp}+dx)}&=\W_{-1}(x_{bp}+dx)\cdot e^{\W_{-1}(x_{bp}+dx)}\Leftrightarrow(*2)\\ x_{bp}+dx&=x_{bp}+dx\Leftrightarrow\\ 0&=0 \end{align}$$ In this case therefore, because the resolvent is an identity, we cannot extract a functional relation between the two branches. Note (after your update): Except the trivial relation $\W_0\cdot e^{\W_0}=\W_{-1}\cdot e^{\W_{-1}}$, which has been given in (*2) and is mentioned in Eremenko's answer in your link. The ones in mugnaini's answer are more interesting and follow from the first graph (black line 2-valuedness, $\W_0(x)$ and $\W_{-1}(x)$), by adjusting the two domains and marking the difference between $\W_0$ and $\W_{-1}$ as $x$, using a suitable transcedental function which measures the difference. I hope I am not missing something else. We can do a little better through germs. 2) Some theory behind analytic continuation The idea is that we can approximate the value of an analytic map at $z_n$, using a sequence of germs $z_k$, $k\in\{1,2,\ldots,n\}$, which has ta least one point close to $z_n$ and around which we expand into a series, by using the values of the series at a previous point. This will work only if the resulting circles around the germs $z_n$ form a connected domain and this means of course having $z_{k+1}\in\{z\in\mathbb{C}\colon|z-z_k|\lt R_k\}$, $\forall k\in\mathbb{N}$. The important consideration then is to be able to determine a region which can serve as the connected domain within which we can plant the germs. 
So suppose we are given $z\in\mathbb{C}$, with $|z|\ge R$. We don't know anything about the function's other possible bad points (branch points or singularities, we already know about a possible first one, at $R$ by definition), so suppose the next closest bad point is at $z_{bp}$, with $|z_{bp}|\ge R$, so it holds: $R\le|z|\le|z_{bp}|$. Then we can take the sequence of $m$ germs to be: $$z_k=|z|\cdot e^{\left(\Arg(z)+\frac{2k\pi}{m}\right)i}\text{, }k\in\{0,1,2,\cdots, m-1\}$$ provided $|z_{k+1}-z_k|\lt R_k$, where $R_k$ is the radius of convergence of the series centered at the germ $z_k$. Around each germ $z_k$ we have a Taylor series $$f_{k}(z)=\sum\limits_{n=0}^\infty \frac{f^{(n)}(z_{k})}{n!}\cdot (z-z_{k})^n,\text{ with: }|z-z_{k}|\lt R_{k}\text{ ($R_{k}>0$) (1)}$$ Now, we just need to describe the form it takes when we move the center $z_k$ to another point $z_{k+1}$, with $|z_{k+1}-z_k|\lt R_k$. A new series around $z_{k+1}$ will be: $$f_{k+1}(z)=\sum\limits_{n=0}^\infty \frac{f^{(n)}(z_{k+1})}{n!}\cdot (z-z_{k+1})^n,\text{ with: }|z-z_{k+1}|\lt R_{k+1}\text{ ($R_{k+1}>0$) (2)}$$ Now: $$ \begin{align} f_{k+1}(z)&=\sum\limits_{n=0}^\infty \frac{f^{(n)}(z_{k+1})}{n!}\cdot (z-z_{k+1})^n\\ &=\frac{f^{(0)}(z_{k+1})}{0!}+\frac{f^{(1)}(z_{k+1})}{1!}\cdot (z-z_{k+1})+\frac{f^{(2)}(z_{k+1})}{2!}\cdot (z-z_{k+1})^2+\cdots\\ &=f(z_{k+1})+\frac{f^{(1)}(z_{k+1})}{1!}\cdot (z-z_{k+1})+\frac{f^{(2)}(z_{k+1})}{2!}\cdot (z-z_{k+1})^2+\cdots\\ &=f_k(z_{k+1})+\sum\limits_{n=1}^\infty \frac{f^{(n)}(z_{k+1})}{n!}\cdot (z-z_{k+1})^n\\\\ \end{align} $$ and we are essentially done, because $f_k(z_{k+1})$ can be gotten from the inductive step. So this is precisely the recursive step we need to apply the iteration to a full sequence of germs $z_k$, $k\in\mathbb{N}$ that reach all the way to the last point of our sequence of germs, using only values from $f$'s original region of analyticity. Connection between branches Taking into consideration the same assumptions for a germ sequence $z_k$, $k\in\{0,1,2,\ldots, m-1\}$ for a total of $m$ germs per winding, unwinding the above recursion and calling $\gamma=j-l\gt 0$ the winding number between branches $f(l,z)$ and $f(j,z)$ with $j>l\ge 0$, will yield: $$f(j,z)=f(l,z)+\sum\limits_{k=1}^{\gamma\cdot m}\left(\sum\limits_{n=1}^\infty \frac{f^{(n)}(z_k)}{n!}\cdot (z-z_k)^n\right)\text{ (3)}$$ 3) $\W$ is analytic Suppose now that $z\in\mathbb{C}$, with $|z|>1/e$. $\W$ has only two branch points at $-1/e$ and $0$ and a singularity at $0$ (only non-principal branches). Find $m\in\mathbb{N}$ with suitable assumptions from the section above, such that: $$z_k=|z|\cdot e^{\left(\Arg(z)+\frac{2k\pi}{m}\right)i}\text{, }k\in\{0,1,\ldots,m-1\}$$ in the extended annulus $1/e\lt|z|<+\infty$, with $|z_{k+1}-z_k|\lt R_k$, where: $$R_k=\min\{1/e,||z|-R|\}$$ Expanding into appropriate series and unwinding the recursion will give: $$\bbox[5px,border:2px solid black]{ \W_j(z)=\W_l(z)+\sum\limits_{k=1}^{\gamma\cdot m}\left(\sum\limits_{n=1}^\infty \frac{\W^{(n)}(z_k)}{n!}\cdot (z-z_k)^n\right),\text{ }j>l\ge 0\text{ (4)}}$$ After that, an expression between positive and negative branches can now be gotten through the conjugation: $$\overline{\W_k(z)}=\W_{-k}(\overline{z}),\text{ }k\in\mathbb{Z}\text{ (5)}$$ which can be found in standard references for $\W$ (refs in first article of my scholar google profile).
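(This spot check is mine, not part of the original answer.) If one just wants to see the reflection identity (5) in action numerically, SciPy ships a multi-branch lambertw whose branch labels, as far as I can tell, match the ones used here; a minimal sketch, assuming scipy.special.lambertw, looks like this:

import numpy as np
from scipy.special import lambertw

z = 1.0 + 1.0j
for k in (1, 2, 3):
    lhs = np.conj(lambertw(z, k))    # conjugate of W_k(z)
    rhs = lambertw(np.conj(z), -k)   # W_{-k} evaluated at conj(z)
    print(k, np.allclose(lhs, rhs))  # True for each k, identity (5)

# Every branch also satisfies the defining equation W_k(z)*exp(W_k(z)) = z.
w = lambertw(z, -1)
print(np.allclose(w * np.exp(w), z))  # True

This is only a check at one point away from the branch cuts, not a substitute for the continuation argument; it simply confirms the values that the germ construction below is expected to reproduce.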
The exact expression which ties branches $-1$ and $0$ therefore, can be given using (5) for $k=1$ and (4) for $j=1$ and $l=0$, as: $$ \begin{align} \W_{-1}(\overline{z})=\overline{W_1(z)}&=\overline{\left[\W_0(z)+\sum\limits_{k=1}^{ m}\left(\sum\limits_{n=1}^\infty \frac{\W^{(n)}(z_k)}{n!}\cdot (z-z_k)^n\right)\right ]}\text{ (4) with $j=1,l=0$ }(\dagger)\\ \end{align} $$ While the relation gotten this way is a perfectly valid analytic expression algebraically (with "analytic" qualified as "closed form expression"), it's not an analytic function perse, since the composition with (5) involves the map $\overline{z}$, which is not analytic. The relation between any two positive branches or any positive and the principle branch however as given through (4) for $j>l\ge 0$, clearly is. 4) Numerical approximation Suppose we need to relate the value of $\W_{-1}(1-i)$ to that of the principal branch, $\W(1-i)$. Through the reflection expression (5) we have: $$\W_{-1}(1-i)= \overline{\W_1(\overline{1-i})}=\overline{\W_1(1+i)}$$ It suffices now to calculate an approximation for $\W_1(1+i)$, for which we need first the value of $\W_0(1+i)$. 41) Approximation of $\W_0(1+i)$ The series for the principal branch has a very small radius of convergence, $r=1/e$ and $z=1+i$ is not inside the disk centered at 0 with this radius of convergence. The trick is to consider intersecting disks $D_k=\{z\colon|z-z_k|\lt 1/e\}$, which form a connected domain and reach from 0 all the way to $1+i$. Here's a picture: Going from $0$ to $1+i$, through germs. The strategy now is simple: We have a series at $0$. Use this to calculate $\W(z_1)$, where $z_1$ is a point inside the disk, along the line that connects $0$ with $1+i$. If we now are given a formula for the derivative of $\W(z)$, we can form a series centered at $z_1$ using Taylor's Theorem. Using the last series, we can calculate $\W(z_2)$, where $z_2$ is a point inside the new disk, along the line that connects $z_1$ with $1+i$. We repeat the above procedure each time (for each step) using the Taylor's series: $$f_k(z)=\sum\limits_{n=0}^\infty \frac{\W^{(n)}(z_k)}{n!}\cdot (z-z_k)^n$$ until we reach close to $1+i$. When we are close enough, we can use the last series to calculate $\W(1+i)$. On the picture, the disk centers are marked as blue diamonds and $1+i$ is marked with a black cross. Using $dr\sim 0.067$, the center of the 21-st disk is pretty close to $1+i$. In the Maple code that follows, Maple of course knows the derivative of $\W(z)$. It is $\W(z)/(1+\W(z))/z$. It is also known that $\W(0)=0$. Using then the code: r:=exp(-1); #radius of convergence of series for W d:=abs(1+I); #distance to 1+i epsilon:=0.3; dr:=r-epsilon; #inside the disk W:=LambertW; #Maple's built in W function steps:=ceil(d/dr); #number of circles required for k from 0 to steps do z[k]:=evalf((k*dr)*exp(I*Pi/4)); #find the points z[k] od; for k from 0 to steps do if k=0 then f:=add(limit(diff(W(z),z$n),z=z[k])/n!*(z-z[k])^n,n=1..6); #calculate series around 0 else f:=Wz+add(subs(z=z[k],subs(LambertW(z)=Wz,diff(W(z),z$n)))/n!*(z-z[k])^n,n=1..6);; #calculate series around z[k] fi; if k < steps then Wz:=subs(z=z[k+1],f); #keep the value of Wz we found, because we need it for the next series fi; od: Checking the resulting accuracy: subs(z=1+I,f); #estimated value .6569660229+.3254515443*I evalf(W(1+I)); #actual value .6569660692+.3254503394*I With an error of .1205789244e-5. Not bad. 42) Approximation of $\W_1(1+i)$ Now we can use the above code to approximate $\W_1(1+i)$. 
For this, we construct a path of disk centers $z_{22}-z_{121}$, starting at $1+i$ and rotating counterclockwise(1) until we come full circle back (or close) to $1+i$. Until we reach the branch cut ($z_{58}$), we are still on the principal branch. Subsequent points ($z_{59}-z_{121}$) lie on branch $\W_1$. See figure below: Complete path from $z_{22}$ to $z_{121}$. We now modify the Maple code to deal with the complete path, from 0 to $z_{121}$: restart; r:=exp(-1); #radius of convergence d:=abs(1+I); #|1+i| epsilon:=0.3; dr:=r-epsilon; #offset from center of disk W:=LambertW; steps:=ceil(d/dr); #number of steps to reach 1+i for k from 0 to steps do z[k]:=evalf((k*dr)*exp(I*Pi/4)); #construct path z[1]-z[21] od: r:=abs(z[21]); steps:=121-22; for k from 0 to steps do z[k+22]:=evalf(r*exp(I*((k+1)*Pi/50+Pi/4))); #construct path z[22]-z[121] od; steps:=121; for k from 0 to steps do if k=0 then f:=add(limit(diff(W(z),z$n),z=z[k])/n!*(z-z[k])^n,n=1..6); #series at 0 else f:=Wz+add(subs(z=z[k],subs(LambertW(z)=Wz,diff(W(z),z$n)))/n!*(z-z[k])^n,n=1..6); #series at z[k] fi; if k < steps then Wz:=subs(z=z[k+1],f); #approximate value of W at z[k+1] #print k+1, value found, actual value and error if k < 58 then #we are ready to cross the branch cut! Caution! print(k+1,z[k+1],Wz,W(z[k+1]),abs(Wz-W(z[k+1]))); else print(k+1,z[k+1],Wz,W(1,z[k+1]),abs(Wz-W(1,z[k+1]))); fi; fi; od; After running the above code, we get: subs(z=1+I,f); -1.342848857+5.247252283*I Actual value for $\W_1(1+i)$: evalf(W(1,1+I)); -1.342848941+5.247249374*I With an error of: evalf(abs(W(1,1+I)-subs(z=1+I,f))); .2910212535e-5 43) $\W_{-1}(1-i)$ Finally, we have the value $\W_1(1+i)$, therefore: $$\W_{-1}(1-i)=\overline{\W_1(1+i)}$$ which gives for $\W_{-1}(1-i)$: -1.342848941-5.247249374*I Check with built in $\W$: evalf(W(-1, 1-I)); -1.342848941-5.247249374*I 5) Errors in 4) (1) Each time one uses the Taylor's series as above the error is given by the remainder form $R_n(z)$, so in the above case the total error should not exceed $s\cdot R_n$, where $s$ the number of germs (in the case above $s=121$, so something like $121\cdot R_7(z)$). (2) One may remark that this germ strategy could be improved by first going around a small circle and then shooting off to the wanted point. This way, we could lose the accumulated error from going around the circle. Unfortunately this won't work, because if we rotate inside the radius of convergence, we can't switch branches, which requires crossing a branch cut. Therefore we have to rotate with a radius greater than $1/e$. But if we do that, the total error will again accumulate per each step of the circle, since we are outside the region of convergence. Notes: It should be fairly obvious that if we wanted to approximate $\W_{-1}(1+i)$, i.e., we should work in a similar way but instead we should be going clockwise, landing again near $1+i$. Hence, by rotating once counterclockwise to $1+i$, we reach branch $\W_1$ and in general by rotating $k$ times counterclockwise to $1+i$, we reach $\W_k(1+i)$. By rotating once clockwise to $1+i$, we reach $\W_{-1}(1+i)$ and in general by rotating $k$ times clockwise to $1+i$, we reach branch $\W_{-k}(1+i)$. $k$ is of course the winding number of $\W$. 
By generalizing the code with such a clockwise/counterclockwise strategy, we can compute any value $\W_k(z)$, for any $k\in\mathbb{Z}$ and any $z\in\mathbb{C}$, using only values of $\W$ from its region of analyticity around $0$.<|endoftext|> TITLE: Prove that function has only one maximum QUESTION [9 upvotes]: I have a function $f_n(x)$ with an integer parameter $n \in \{3,4,5,\dots\}$ and $x\in ]0,1[$, and i want to show that $f_n(x)$ has only one critical point for every value of $n$. The function is $$ f_n(x)=(1 - x)^n + (1 + x)^n + \frac{ x [(1 - x)^n - (1 + x)^n]}{ \sqrt{1 - x^2}-1}. $$ Setting the derivative to zero, gives after some simplification and case differentiations the equation $$ \left(\frac{1-x}{1+x}\right)^n= \frac{1+n \left(1-x -\sqrt{1-x^2}\right)}{1+n \left(1+x -\sqrt{1-x^2}\right)}. \quad \quad (1)$$ So if there is only one $x \in ]0,1[$ that fullfills equation (1), the problem is solved. However i struggle to prove that. (The fact is obvious from plotting the function) Maybe there is a better way to show that, instead of setting the derivative to zero? Just as an example consider the plot of the function for $n=5$, Update (with the help of MotylaNogaTomkaMazura) If you parametrize $x=\cos(2t)$ with $t \in [0,\pi/4]$, the original function can be written as $$ g_k(t)=\frac{\cos ^k(t)-\sin ^k(t)}{\cos (t)-\sin (t)},$$ with $k=2n+1$ and omitting a factor $2^{1 + n}$. The maximum of $g_k(t)$ is the maximum of $f_n(x)$. An equivalent form is $$ g_k(t)=\frac{1}{\sin(t)} \sum_{n=1}^k \cos(t)^{k - n} \sin(t)^n.$$ Setting the derivative to zero gives $$ k \left( \frac{\tan(t)}{\tan^k(t)-1}-\frac{\cot(t)}{\cot^k(t)-1} \right)=\frac{1}{\cot(t)-1}-\frac{1}{\tan(t)-1}.$$ Update 2 Another way to rewrite the function $f_n(x)$ is $$h_m(t)=\cos ^m(t) \cdot \sum _{k=0}^m \tan ^k(t),\quad \quad \quad (2)$$ with $x=\cos(2t)$, $t \in [0,\pi/4]$ and $m=2n$, again with omitting the factor $2^{1 + n}$. So proving that $h_m(t)$ has only one maximum is equivalent to the original problem. Now we have the product of two functions one is monotonically increasing and convex, and the other is monotonically decreasing and convex. Is this enough to prove that the function has one and only one maximum for all $n$? REPLY [2 votes]: MotylaNogaTomkaMazura has found a very good parametrization $x=\cos(2t)$. (but it seems that his/her answer has an error) This answer uses this parametrization. If you parametrize $x=\cos(2t)$ with $t \in [0,\pi/4]$, the original function can be written as $$ g_k(t)=\frac{\cos ^k(t)-\sin ^k(t)}{\cos (t)-\sin (t)},$$ with $k=2n+1$ and omitting a factor $2^{1 + n}$. True. Let $c:=\cos(t),s:=\sin(t)$. 
For $k=2n+1\ge 7$, $$\begin{align}\frac{d}{dt}\left(\frac{c^k-s^k}{c-s}\right)&=\frac{(kc^{k-1}(-s)-ks^{k-1}c)(c-s)-(c^k-s^k)(-s-c)}{(c-s)^2}\\&=\frac{-k(c^{k-1}s+s^{k-1}c)(c-s)+(c^k-s^k)(s+c)}{(c-s)^2}\end{align}$$ Here, let $$F(t):=-k(c^{k-1}s+s^{k-1}c)(c-s)+(c^k-s^k)(s+c)$$ Then, $$F'(t)=-k((k-1)c^{k-2}(-s)s+c^{k-1}c+(k-1)s^{k-2}c^2+s^{k-1}(-s))(c-s)$$$$-k(c^{k-1}s+s^{k-1}c)(-s-c)+(kc^{k-1}(-s)-ks^{k-1}c)(s+c)+(c^k-s^k)(c-s)$$ $$\begin{align}&=-k(-(k-1)c^{k-2}s^2+c^{k}+(k-1)s^{k-2}c^2-s^{k})(c-s)+(c^k-s^k)(c-s)\\&=(c-s)(k(k-1)c^{k-2}s^2-k(k-1)s^{k-2}c^2+c^k-kc^{k}-s^k+ks^{k})\\&=(k-1)(c-s)(kc^{k-2}s^2-ks^{k-2}c^2+s^k-c^k)\\&=c^k(k-1)(c-s)(k\tan^2(t)-k\tan^{k-2}(t)+\tan^k(t)-1)\end{align}$$ Here, let $$G(u):=ku^2-ku^{k-2}+u^k-1$$ Then, $$G'(u)=2ku-k(k-2)u^{k-3}+ku^{k-1}$$ $$G''(u)=2k-k(k-2)(k-3)u^{k-4}+k(k-1)u^{k-2}$$ $$\begin{align}G'''(u)&=-k(k-2)(k-3)(k-4)u^{k-5}+k(k-1)(k-2)u^{k-3}\\&=k(k-2)u^{k-5}((k-1)u^2-(k-3)(k-4))\end{align}$$ This is negative, so $G''(u)$ is decreasing with $G''(0)=2k\gt 0$ and $G''(1)=-k(k-1)(k-5)\lt 0$. So, there exists only one real $0\lt\alpha\lt 1$ such that $G''(\alpha)=0$, and $G'(u)$ is increasing for $0\lt u\lt\alpha$ and is decreasing for $\alpha\lt u\lt 1$ with $G'(0)=0$ and $G'(1)=-k(k-5)\lt 0$. So, there exists only one real $0\lt \beta\lt 1$ such that $G'(\beta)=0$, and $G(u)$ is increasing for $0\lt u\lt\beta$ and is decreasing for $\beta\lt u\lt 1$ with $G(0)=-1\lt 0$ and $G(1)=0$. So, there exists only one real $0\lt \gamma\lt 1$ such that $G(\gamma)=0$. It follows from this that there exists only one real $0\lt\arctan\gamma\lt\pi/4$ such that $F'(\arctan\gamma)=0$. So, $F(t)$ is decreasing for $0\lt t\lt\arctan\gamma$ and is increasing for $\arctan\gamma\lt t\lt \pi/4$ with $F(0)=1\gt 0$ and $F(\pi/4)=0$. It follows from this that $f_n(x)$ where $x\in ]0,1[$ has only one critical point for every value of $n \in \{3,4,5,\dots\}$. $\blacksquare$<|endoftext|> TITLE: Using Green's Theorem to Express the Integral $I=\int_C (Pdx+Qdy)$ as an expression of $I_i=\int _{C_i} (Pdx+Qdy)$ QUESTION [6 upvotes]: Let $p_1,...p_n$ be points in $\mathbb{R}^n$. Let $P(x,y), Q(x,y)$ be functions with continuous derivatives in $ D=\mathbb{R}^2\setminus\{p_1,...p_n\}$ such that $Q_x-P_y=1$ for all $(x,y)\in D$. For all $i$ let $C_i$ be a circle of radius $r_i$ around $p_i$ such that $C_i$ doesn't contain $p_j$ for all $j\neq i$. Let $C$ be a circle of radius $R$ around $(0,0)$, such that $\forall i=1,...n, p_i \in C$. Let $I=\int_C (Pdx+Qdy), I_i =\int_C(Pdx+Qdy)$. Express $I$ as a function of $I_1,...I_n, r_1,...r_n, R$. I was able to prove this considering that $C_1,...C_n$ are all disjoint and $\cup_{i=1}^nC_i \subseteq C$, but wasn't able to prove this when they were not disjoint. Here's the proof for disjoint $C_1,...C_n$ and $\cup_{i=1}^nC_i \subseteq C$: Let $A$ be $C\setminus \cup_{i=1}^nC_i$. $A$ is a Green region, since it's boundary is $C\cup \cup_{i=1}^nC_i$, and therefore: $I+\sum_{i=1}^nI_i=\int_C(Pdx+Qdy)+\sum_{i=1}^n\int_{C_i}(Pdx+Qdy)=\int_{\partial A}(Pdx+Qdy)=\iint_A(Q_x-P_y)dxdy=\iint_A1dxdy=\mu (A)$ And therefore, $I=\mu(A)-\sum_{i=1}^nI_i$, but since $C_1,..C_n$ are all disjoint and if we write $B$ to be the interior of $C$ and $B_i$ to be the interior of $C_i$, then, we get that $\mu(A)-\mu(B)-\sum_{i=1}^n\mu(B_i)=\pi(R-\sum_{i=1}^nr_i^2)$, such that $I=\pi(R-\sum_{i=1}^nr_i^2)-\sum_{i=1}^nI_i$. In the case that $C,C_1,...C_n$ are not disjoint I looked at the case of only two that intersect. Let's assume that $B_1\cap B_2=E\neq \emptyset$. 
Since $B_1$ only contains $p_1$ of the set $\{p_1,...p_n\}$ and $B_2$ only contains $p_2$ of $\{p_1,...p_n\}$, then $E$ doesn't contain any of the points $\{p_1,...p_n\}$ and therefore, $P,Q$ and their derivatives are continuous on $E$. Also, if $E$ contains more than one point, then it can be proven that $E$'s boundary is a Jordan curve, and therefore, Green's theorem applies on $E$. Therefore, I figured out that $\int_{C_1\cup C_2}(Pdx+Qdy)=\int_{C_1}(Pdx+Qdy)+\int_{C_2}(Pdx+Qdy)-\iint_E(Q_x-P_y)dxdy=I_1+I_2-\mu(E)$. From here on, I couldn't find a direction that worked. REPLY [3 votes]: Write $Pdx+Qdy=:\omega$ for short. Denote by $B_{i,\epsilon}$ the disk of radius $\epsilon>0$ around the point $p_i$. Choose $\epsilon<\min_i r_i$ so small that the $B_{i,\epsilon}$ do not intersect the large circle $C$. Green's theorem, applied to the annular regions bounded by $C_i$ and $\partial B_{i,\epsilon}$ gives $$\int_{C_i}\omega\ -\int_{\partial B_{i,\epsilon}}\!\!\omega\ =\ {\rm area}\bigl(B_{i, r_i}\setminus B_{i,\epsilon}\bigr)=\pi(r_i^2-\epsilon^2)\qquad(1\leq i\leq n)\ .\tag{1}$$ We now apply Green's theorem to the "Emmentaler cheese" region generated by removing the $n$ small disks $B_{i,\epsilon}$ from the large disk $B$ bounded by $C$: $$\int_C\omega\ -\sum_{i=1}^n \int_{\partial B_{i,\epsilon}}\!\!\omega\ =\pi(r^2-n\epsilon^2)\ .\tag{2}$$ Subtracting the sum of the equations $(1)$ from $(2)$ we obtain $$I -\sum_{i=1}^n I_i=\pi\left(r^2-\sum_{i=1}^n r_i^2\right)\ ,$$ since all other contributions cancel. It follows that $$I=\pi r^2+\sum_{i=1}^n (I_i-\pi r_i^2)\ ,$$ whether or not some of the $C_i$ intersect $C$. Up to this point there was no limit $\epsilon\to0$ involved. But note that you can consider $$I_i-\pi r_i^2=\lim_{\epsilon\to0}\int_{\partial B_{i,\epsilon}}\omega$$ as a kind of residue of $\omega$ at the singularity $p_i$.<|endoftext|> TITLE: Complete calculus of first-order logic working for empty structures too QUESTION [9 upvotes]: Usually, in model theory, one presupposes that structures (models) are non-empty. I don't like this (related: What's the deal with empty models in first-order logic?). So let us explicitly permit empty structures. My requests are: Can you give a complete calculus of first-order logic that works for empty structures too? By "working for empty structures too", I mean that if the demanded calculus proves a sentence, then this sentence should hold in all structures, also in the empty structures. Can you give a completeness proof for this calculus? You may wonder: why don't you look in the standard logic texts, where a complete calculus + proof of the completeness should be given? But my problem is: In every logic text which allows empty models, they do not give a complete calculus and proof the completeness. Instead, they prove the compactness theorem and other standard results with purely model theoretic methods, rather than proof theoretic methods. Examples of such texts: http://math.uga.edu/~pete/modeltheory2010FULL.pdf http://www.math.uni-hamburg.de/home/geschke/teaching/ModelTheory.pdf REPLY [4 votes]: There is a full treatment of empty-domain models in MENDELSON: Introduction to Mathematical Logic. 
You will see there why they are a special case of First Order Logic.<|endoftext|> TITLE: Prove $\int_{0}^{\infty}\frac{2x}{x^8+2x^4+1}dx=\frac{\pi}{4}$ QUESTION [10 upvotes]: $$\int_{0}^{\infty}\frac{2x}{x^8+2x^4+1}dx=\frac{\pi}{4}$$ $u=x^4$ $\rightarrow$ $du=4x^3dx$ $x \rightarrow \infty$, $u\rightarrow \infty$ $x\rightarrow 0$, $u\rightarrow 0$ $$=\int_{0}^{\infty}\frac{2x}{u^2+2u+1}\cdot\frac{du}{4x^3}$$ $$=\frac{1}{2}\int_{0}^{\infty}\frac{1}{u^2+2u+1}\cdot\frac{du}{x^2}$$ $$=\frac{1}{2}\int_{0}^{\infty}\frac{1}{u^2+2u+1}\cdot\frac{du} {\sqrt{u}}$$ Convert to partial fractions $$\int_{0}^{\infty}\frac{1}{\sqrt{u}}-\frac{2}{u+1}+\frac{1}{(u+1)^2}du$$ $$=\left.2\sqrt{u}\right|_{0}^{\infty}-\left.2\ln(1+x)\right|_{0}^{\infty} -\left.\frac{1}{1+u}\right|_{0}^{\infty}$$ Where did I went wrong during my calculation? REPLY [2 votes]: At first - substitution: $$I=\int\limits_0^\infty{2x\,dx\over x^8+2x^4+1} = \int\limits_0^\infty{d(x^2)\over x^8+2x^4+1} = \int\limits_0^\infty{dy\over (y^2+1)^2}.$$ And then - by parts: $$I = \int\limits_0^\infty{1+y^2-y^2\over (y^2+1)^2}\,dy = \int\limits_0^\infty{dy\over y^2+1} + {1\over2}\int\limits_0^\infty y\,d{1\over y^2+1}\,dy$$ $$ = \int\limits_0^\infty {dy\over y^2+1} + {1\over2}{y\over y^2+1}\biggr|_0^\infty - {1\over2}\int\limits_0^\infty {dy\over y^2+1} = {1\over2}\arctan y\biggr|_0^\infty = {\pi\over4}.$$<|endoftext|> TITLE: Does the ring of integers for the field $\mathbb{Q}(\sqrt{-1+2\sqrt{2}})$ have a power basis? QUESTION [6 upvotes]: Specifically I am interested in the the ring of integers for the field $\mathbb{Q}(\sqrt{-1+2\sqrt{2}})$. Does this ring of integers have a power basis? More generally, for any Salem number $s$, does the ring of integers for $\mathbb{Q}(s)$ have a power basis? Does anyone know of a sage command or some clever way to determine if any given ring of integers for an algebraic number field has a power basis? Thanks! REPLY [5 votes]: Let $u = (\sqrt{-1+2\sqrt 2}+\sqrt 2+1)/2$. The minimal polynomial of $u$ over $\Bbb Q(\sqrt 2)$ is $u^2-u(1+\sqrt 2)+1$, and over $\Bbb Q$ it is $u^4-2u^3+u^2-2u+1$, whose discriminant is $-448$. Since this is the discriminant of the number field, its ring of integers is $\Bbb Z[u]$.<|endoftext|> TITLE: Non-trivial "I know what number you're thinking of" QUESTION [23 upvotes]: Consider the following 'trick' (WARNING: very lame) Think of a number. Multiply this number by two. Add four. Divide the number by two. Subtract the number you were originally thinking of. I guess that the number now in your head is two. This trick might stump non-mathematicians for a few minutes, but most people will eventually figure it out. The trick relies on the distributive law of numbers, which is not entirely trivial, but most people will find the trick rather lame once they figure it out. I was wondering if it is possible to do a "I know what number you're thinking of" that relies on less trivial results, from number theory for example, tricks that require some heavier results to explain and are thus more 'magical'. Too avoid being flagged as too subjective or vague, I will try to specify this. Consider two people, the mathematician $M$ and the non-mathematician $N$. $M$ asks $N$ to think of a number ($M$ may restrict the numbers from which to choose to a sufficiently large set, like the numbers between 1 and 1000 or the prime numbers) and then lets $N$ apply a finite sequence of functions to this number. These functions should preferably be easy and well-known, i.e. 
basic arithmetic operations, division with residue, decimal representation of a number (e.g. "take the fifth digit"), à la limite prime factorisation, but no functions which require a calculator like trigonometric functions. The following two conditions are imposed: The function must have a finite image, i.e., $M$ should be able to give a finite, preferably small, set in which the image of the function always lies. For example "Either you're thinking of 5 or you're thinking of 29" would still impress $N$. On the other hand, the trick bellow relies on some serious number theory, but gives an infinite image, which I think is less impressing to most people. Think of a natural number which cannot be written as a sum of three squares of natural numbers. If the number you're thinking of is divisible by four, divide it by four. Keep dividing by four untill you can no longer do so in $\mathbb{N}$. Add one to the number you now have. The number you end up with is divisible by eight. The function uses specific properties of the natural numbers or integers, not just algebraic manipulations. This is what I mean by non-trivial. In particular, the same trick should not work in any commutative ring $A$. For example, tricks relying on the fact that $\mathbb{Z}/p\mathbb{Z}$ is a field for any $p$ prime could satisfy this. I'd be interested in knowing what's possible here. REPLY [2 votes]: Think of a number 0-9. When you are ready - here is the number: 7<|endoftext|> TITLE: Intuitive understanding of Maximin Principle QUESTION [5 upvotes]: From the the book page $324$, does someone could explain to me the Theorem $2$. Maximin principle? I have a bit of difficulties to well understand how works this theorem. A simple example would be appreciate in the explication, if possible. Theorem $2$. Maximin principle : Fix a positive integer $n \geq 2$. Fix $n-1$ arbitrary trial functions $y_1(x), \dots, y_{n-1}(x)$. Let $$\lambda_{n*}=\min \frac{\| \nabla w\|^2}{\| w\|^2}$$ among all trial functions $w$ that are orthogonal to $y_1, \dots, y_{n-1}.$ Then $$\lambda_n= \max \lambda_n*$$ over all choices of the $n-1$ trial functions $y_1, \dots, y_{n-1}.$ REPLY [2 votes]: Let's assume we're on a bounded domain $\Omega\subset\mathbb{R}^N$ and we've fixed boundary conditions. Recall that the Laplacian $\Delta$ induces an orthogonal decomposition of $L^2(M)$ into eigenspaces $E_k$, where $E_k$ is associated to the $k^{th}$ eigenvalue $\lambda_k$, and $\Delta$ acts by scaling on $E_k$. Write $0\leq \lambda_1\leq\lambda_2\leq\cdots$ where we do not account for multiplicity. (For example, the Neumann spectrum of the unit square $[0,1]\times[0,1]$ would be $0\leq\pi^2\leq\pi^2\leq 4\pi^2\leq\cdots$.) Further, let $u_i$ be an eigenfunction corresponding to $\lambda_i$, so that $\langle u_i,u_j\rangle = \delta_{ij}$ where $\langle\cdot,\cdot\rangle$ is the $L^2$ inner product. Think of this statement in terms of $(n-1)$-dimensional linear subspaces of $L^2(\Omega)$. Let $V$ be such a subspace. The intuition is dimension counting: the perpendicular complement of $V$ must "bleed into" eigenspaces corresponding to low-frequency eigenvalues, and the best you can do is rule out the lowest $n-1$ frequencies. Let's formalize it this way. First, call the ratio $\mathcal{R}(v) = \|\nabla v\|^2/\|v\|^2$ the Rayleigh quotient of $v$, if it exists, for $v\in L^2$. Let $V$ be the span of $y_1,\ldots,y_{n-1}$. We're going to search for functions $v\in V^\perp$ which are also elements of $\oplus_{k\leq n} E_k$. 
Such a function must satisfy $$ v = \sum_{j=1}^n a_ju_j $$ and $$ \langle v,y_i\rangle = 0 $$ for all $i=1,\ldots,n-1$. If let $C$ be the matrix with entries $c_{ij} = \langle y_i,u_j\rangle$, we are searching for a solution $A = (a_j)$ to the equation $CA = 0$. As this system has more unknowns than equations, by rank-nullity the matrix $C$ must have a nontrivial kernel. Denote by $\mu$ the infimum of $\mathcal{R}$ on $V^\perp$. Then $$ \mu\|v\|^2 \leq \|\nabla v\|^2 = \sum_{j=1}^n \lambda_ja_j^2 \leq \lambda_n\|v\|^2 $$ (Verify this for yourself with an integration by parts.) Therefore we have $\mu\leq\lambda_n$. The subspace $V$ spanning $\{u_1,\ldots,u_{n-1}\}$ achieves equality. This establishes the desired result.<|endoftext|> TITLE: Proving $\int_{0}^{\infty}\frac{4x^2}{(x^4+2x^2+2)^2}dx\stackrel?=\frac{\pi}{4}\sqrt{5\sqrt2-7}$ QUESTION [9 upvotes]: $$\int_{0}^{\infty}\frac{4x^2}{(x^4+2x^2+2)^2}dx=\frac{\pi}{15}$$ $$\int_{0}^{\infty}\frac{4x^2}{[(1+(1+x^2)^2]^2}dx=\frac{\pi}{15}$$ $u=\tan(z)$ $\rightarrow$ $du=\sec^2(z)$ $u$ $\rightarrow \infty$, $\tan(z)=\frac{\pi}{2}$ $u$ $\rightarrow 0$, $\tan(z)=0$ $$\int_{0}^{\pi \over 2}\frac{4\tan^2(z)}{[(1+(1+\tan^2(z))^2]^2}\frac{du}{\sec^2(z)}=\frac{\pi}{15}$$ $1+\tan^2(z)=\sec^2(z)$ $$\int_{0}^{\pi \over 2}\frac{4\sin^2(z)}{[(1+\sec^4(z)]^2}dx=\frac{\pi}{15}$$ $$\int_{0}^{\pi \over 2}\frac{4\sin^2(z)}{[(1+i\sec^2(z))(1-i\sec^2(z))]^2}dx=\frac{\pi}{15}$$ Hopeless! Any suggestion? Try again $$\int_{0}^{\pi \over 2}\frac{4\sin^2(z)}{[(1+\sec^4(z)]^2}dx=\frac{\pi}{15}$$ $$\int_{0}^{\infty}\frac{\sin^2(2z)\cos^6(z)}{(1+\cos^4(z))^2}dx=\frac{\pi}{15}$$ $\sin^2(2z)=\frac{1-\cos(4z)}{2}$ No more I gave up. Any hints? REPLY [3 votes]: Here is an approach using contour integration in case anyone is interested. An effort has been made to use pen-and-paper type manipulations only. These are simple yet demand a certain care with the algebra. 
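Before the residue work, it is reassuring to confirm the claimed value numerically; here is a minimal sketch (assuming NumPy and SciPy are available, with names chosen only for illustration):

```python
import numpy as np
from scipy.integrate import quad

# integrand of the claimed identity: 4x^2 / (x^4 + 2x^2 + 2)^2
integrand = lambda x: 4 * x**2 / (x**4 + 2 * x**2 + 2) ** 2

numeric, abserr = quad(integrand, 0, np.inf)       # decays like 4/x^6, so quad handles the infinite tail
claimed = np.pi / 4 * np.sqrt(5 * np.sqrt(2) - 7)  # ≈ 0.2094

print(numeric, claimed, abs(numeric - claimed))
```

If the two numbers did not agree to within `abserr`, there would be little point in hunting for a slipped sign in the contour computation.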
Suppose we seek to verify that $$\int_0^\infty \frac{4x^2}{(x^4+2x^2+2)^2} dx = \frac{\pi}{4} \sqrt{5\sqrt{2}-7}$$ or alternatively $$\int_0^\infty \frac{x^2}{(x^4+2x^2+2)^2} dx = \frac{\pi}{16} \sqrt{5\sqrt{2}-7}.$$ We use a semicircular contour in the upper half plane with two straight components $\Gamma_0$ and $\Gamma_1$ on the positive and negative real axis and having radius $R$ ($\Gamma_2.$) The denominator here is $$((x^2+1)^2+1)^2$$ so the poles are double and located at $$\rho_{0,1,2,3} = \pm\sqrt{-1\pm i}.$$ We convert this to polar form in order to determine which poles are in the upper half plane, getting $$\pm\sqrt{\sqrt{2} \exp(\pi i \pm \pi i/4)} = \sqrt[4]{2} \exp(\pi i/2 \pm \pi i/8 + \pi i/2 \pm \pi i/2) \\ = \sqrt[4]{2} \exp(\pi i \pm \pi i/8 \pm \pi i/2).$$ Fortunately we can see by inspection that only the two poles $$\rho_0 = \sqrt[4]{2} \exp(\pi i - \pi i/8 - \pi i/2) = \sqrt[4]{2} \exp(3\pi i/8) \\ \quad\text{and}\quad \rho_1 = \sqrt[4]{2} \exp(\pi i + \pi i/8 - \pi i/2) = \sqrt[4]{2} \exp(5\pi i/8)$$ are inside the contour (arguments are $3\pi/8$ and $5\pi/8$, the other two are at $-3\pi /8$ and $-5\pi /8.$) For the residue we get $$\frac{1}{2\pi i} \int_{|z-\rho_0|=\epsilon} \frac{z^2}{(z^4+2z^2+2)^2} \; dz.$$ In order to get a pole that is amenable to easy algebra we introduce $w = z\exp(-3\pi i/8)/\sqrt[4]{2}$ and $z = w\exp(3\pi i/8)\sqrt[4]{2}$ which maps $\rho_0$ to $1$ so we obtain $$\exp(3\pi i/4+3\pi i/8)\sqrt{2}\sqrt[4]{2} \\ \times \frac{1}{2\pi i} \int_{|w\exp(3\pi i/8)\sqrt[4]{2}-1|=\epsilon} \frac{w^2}{(-2iw^4+2w^2(-1+i)+2)^2} \; dw \\ = - \exp(9\pi i/8) \frac{2^{3/4}}{4} \frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^2}{(w-1)^2\times (w^2-i)^2(w+1)^2} \; dw.$$ The residue is thus given by $$- \exp(9\pi i/8) \frac{2^{3/4}}{4} \lim_{w\rightarrow 1} \left(\frac{w^2}{(w^2-i)^2(w+1)^2}\right)' \\ = - \exp(9\pi i/8) \frac{2^{3/4}}{4} \lim_{w\rightarrow 1} \left(\frac{2w}{(w^2-i)^2(w+1)^2} \\ - \frac{w^2}{(w^2-i)^4(w+1)^4} (2(w^2-i) 2w (w+1)^2 + (w^2-i)^2 2(w+1))\right).$$ This works out to $$-\exp(9\pi i/8) \frac{2^{3/4}}{4} \times \frac{1}{8} (2-i) = i\exp(-3\pi i/8) \frac{2^{3/4}}{4} \times \frac{1}{8} \sqrt{5} \exp(-i\beta)$$ where $2-i = \sqrt{5}\exp(-i\beta).$ Continuing with the second pole we we introduce $w = z\exp(-5\pi i/8)/\sqrt[4]{2}$ and $z = w\exp(5\pi i/8)\sqrt[4]{2}$ and obtain $$\exp(5\pi i/4+5\pi i/8)\sqrt{2}\sqrt[4]{2} \\ \times \frac{1}{2\pi i} \int_{|w\exp(5\pi i/8)\sqrt[4]{2}-1|=\epsilon} \frac{w^2}{(2iw^4+2w^2(-1-i)+2)^2} \; dw \\ = - \exp(15\pi i/8) \frac{2^{3/4}}{4} \frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^2}{(w-1)^2\times (w^2+i)^2(w+1)^2} \; dw.$$ This is the same as in the previous pole except the sign in the $w^2-i$ term has been flipped. 
Re-using the derivative thus yields $$-\exp(15\pi i/8) \frac{2^{3/4}}{4} \times \frac{1}{8} (2+i) = i\exp(3\pi i/8) \frac{2^{3/4}}{4} \times \frac{1}{8} \sqrt{5} \exp(i\beta).$$ Adding the two residues we thus obtain $$\frac{2^{3/4}}{32}\times\sqrt{5}\times 2i\cos(\beta+3\pi /8).$$ Returning to the main computation, on the part of the contour that is on the negative real axis which is $\Gamma_1$ we trivially obtain $$\int_{-R}^0 \frac{x^2}{(x^4+2x^2+2)^2} \; dx$$ which yields with $z=-x$ $$- \int_R^0 \frac{z^2}{(z^4+2z^2+2)^2} \; dz = \int_{\Gamma_0} \frac{z^2}{(z^4+2z^2+2)^2} \; dz.$$ Finally we have by the ML bound for the semicircular component $$\lim_{R\rightarrow\infty} \left|\int_{\Gamma_2} \frac{z^2}{(z^4+2z^2+2)^2} \; dz\right| \le \lim_{R\rightarrow\infty} 2\pi R/2 \times \frac{R^2}{(R^4-2R^2+2)^2} = 0.$$ It follows that $$\int_0^\infty \frac{x^2}{(x^4+2x^2+2)^2} \; dx = \frac{1}{2}\times 2\pi i \times \frac{2^{3/4}}{32}\times\sqrt{5}\times 2i\cos(\beta+3\pi /8) \\ = -\frac{\pi}{16} 2^{3/4} \sqrt{5} \cos(\beta+3\pi /8).$$ To manipulate this to match the form in the introduction we use angle sum and half-angle formulae as in $$\sqrt{5}\cos(\beta+3\pi /8) = \sqrt{5}\cos\beta\cos(3\pi /8) - \sqrt{5}\sin\beta\sin(3\pi /8) \\ = 2\cos(3\pi /8) - \sin(3\pi /8).$$ As we are integrating a function that is never negative on the integration interval we see that the sign on this last term must be negative. Observe that $$\cos(3\pi/8) = \sqrt{\frac{1+\cos(3\pi/4)}{2}} = \sqrt{\frac{1-\sqrt{2}/2}{2}}$$ and $$\sin(3\pi/8) = \sqrt{\frac{1-\cos(3\pi/4)}{2}} = \sqrt{\frac{1+\sqrt{2}/2}{2}}.$$ Squaring we obtain $$4 \frac{1-\sqrt{2}/2}{2} + \frac{1+\sqrt{2}/2}{2} - 4 \sqrt{\frac{1-2/4}{4}} = \frac{5}{2} - \frac{3}{4}\sqrt{2}-\sqrt{2} \\ = \frac{5}{2} - \frac{7}{4}\sqrt{2}.$$ We thus have for the end result $$-\frac{\pi}{16} 2^{3/4} \times - \sqrt{\frac{5}{2} - \frac{7}{4}\sqrt{2}} = \frac{\pi}{16} \times \sqrt{\frac{5}{2}2^{3/2} - \frac{7}{4}2^2} \\ = \frac{\pi}{16} \times \sqrt{5\sqrt{2}-7}.$$ This is the claim.<|endoftext|> TITLE: What is a composition in category theory? QUESTION [8 upvotes]: I'm just beginning to learn category theory. So far, the basic examples (like Set) are making sense. But I'm having a little trouble getting my head around the fundamentals. Suppose I try to define a category with objects A, B, and C. There's an arrow (f) from A to B, an arrow (g) from B to C, and two arrows (h1 and h2) from A to C (in addition to the identity arrows). Now it must obey the composition axiom: For any two arrows f : A → B, g : B → C, where src(g) = tar (f), there exists an arrow g ◦ f : A → C, ‘g following f’, which we call the composite of f with g. It seems we have two choices for g ◦ f. Since we're not trying to give the category any meaning, it shouldn't matter which we pick. But does the choice matter in identifying the category? Does picking h1 give a "different" category than picking h2? If so, shouldn't it be part of the category's definition? If not, then is it meaningless to ask "which one is the composite?" Perhaps another way of asking this is: does composition have to mean any (one) thing? My question probably doesn't make a lot of sense, but hopefully there are enough clues in here for someone to correct my confusion. Edit: Perhaps restating the axiom in these different ways clarifies my confusion. a) For any two arrows f : A → B, g : B → C, where src(g) = tar (f), there exists at least one arrow g ◦ f : A → C, ‘g following f’, which we could call the composite of f with g. 
b) For any two arrows f : A → B, g : B → C, where src(g) = tar (f), there exists an arrow g ◦ f : A → C, ‘g following f’, that is the composite of f with g. REPLY [9 votes]: To define a category, you have to specify what composition is in that category. It's like the multiplication operation in a group: to define a group, it's not enough to just say you have a set and it is possible to multiply elements of the set; you have to actually say what you mean by "multiply" as part of the definition of the group. So you might define one operation $\circ_1$ for which $g\circ_1 f=h_1$, and a different operation $\circ_2$ for which $g\circ_2 f=h_2$, and these define two different categories with the same sets of objects and morphisms. (As Qiaochu commented though, these two categories are isomorphic, by just switching $h_1$ and $h_2$.)<|endoftext|> TITLE: Prove $\frac{xy}{5y^3+4}+\frac{yz}{5z^3+4}+\frac{zx}{5x^3+4} \leqslant \frac13$ QUESTION [21 upvotes]: $x,y,z >0$ and $x+y+z=3$, prove $$\tag{1}\frac{xy}{5y^3+4}+\frac{yz}{5z^3+4}+\frac{zx}{5x^3+4} \leqslant \frac13$$ My first attempt is to use Jensen's inequality. Hence I consider the function $$f(x) =\frac{x}{5x^3+4}$$ Compute second derivative we have $$\tag{2}f''(x)=\frac{30x^2(5x^3-8)}{(5x^3+4)^3}$$ $(2)$ shows that the function is neither concave or convex. So I don't think Jensen useful here. The author of $(1)$ is the same as of this inequality. REPLY [7 votes]: We can use the BW ( http://www.artofproblemsolving.com/community/c6h522084 ) We need to prove that $$\frac{1}{27(x+y+z)}\geq\sum\limits_{cyc}\frac{xy}{135y^3+4(x+y+z)^3}$$ $x\leq y\leq z$, $y=x+u$, $z=x+u+v$. Hence, $$27(x+y+z)\prod\limits_{cyc}(135x^3+4(x+y+z)^3)\left(\frac{1}{27(x+y+z)}-\sum\limits_{cyc}\frac{xy}{135y^3+4(x+y+z)^3}\right)=$$ $$=708588(u^2+uv+v^2)x^7+19683(263u^3+462u^2v+177uv^2-11v^3)x^6+$$ $$+39366(413u^4+961u^3v+603u^2v^2+55uv^3+2v^4)x^5+$$ $$+10935(2534u^5+7253u^4v+7007u^3v^2+2650u^2v^3+496uv^4+67v^5)x^4+$$ $$+1215(22600u^6+77007u^5v+99132u^4v^2+61585u^3v^3+21201u^2v^4+4341uv^5+388v^6)x^3+$$ $$+243(64879u^7+257249u^6v+406614u^5v^2+334330u^4v^3+161315u^3v^4+48417u^2v^5+8572uv^6+692v^7)x^2+$$ $$+27(2u+v)(89479u^7+360224u^6v+570894u^5v^2+463900u^4v^3+221075u^3v^4+67872u^2v^5+12872uv^6+1172v^7)x+$$ $$4(2u+v)^3(18871u^6+67548u^5v+87702u^4v^2+51889u^3v^3+17808u^2v^4+4944uv^5+556v^6)\geq$$ $$\geq708588v^2x^7-19683\cdot11v^3x^6+39366\cdot2v^4x^5\geq0$$ $x\leq z\le y$, $y=x+u+v$, $z=x+v$. Hence, $$27(x+y+z)\prod\limits_{cyc}(135x^3+4(x+y+z)^3)\left(\frac{1}{27(x+y+z)}-\sum\limits_{cyc}\frac{xy}{135y^3+4(x+y+z)^3}\right)=$$ $$=708588(u^2+uv+v^2)x^7+19683(-11u^3+42u^2v+327uv^2+263v^3)x^6+$$ $$+39366(2u^4-80u^3v+198u^2v^2+691uv^3+413v^4)x^5+$$ $$+10935(67u^5-125u^4v+193u^3v^2+3335u^2v^3+5417uv^4+2534v^5)x^4+$$ $$+1215(388u^6+399u^5v+168u^4v^2+16873u^3v^3+54097u^2v^4+58593uv^5+22600v^6)x^3+$$ $$+243(692u^7+2092u^6v-588u^5v^2+15920u^4v^3+110770u^3v^4+225579u^2v^5+196904uv^6+64879v^7)x^2+$$ $$+27(u+2v)(1172u^7+4252u^6v-3408u^5v^2+1700u^4v^3+118975u^3v^4+288609u^2v^5+266129uv^6+89479v^7)x+$$ $$4(u+2v)^3(556u^6+1299u^5v-4062u^4v^2+859u^3v^3+33027u^2v^4+45678uv^5+18871v^6)\geq$$ $$\geq708588u^2x^7+19683(-11)u^3x^6+39366(-4)u^4x^5+10935\cdot60u^5x^4\geq0$$ Done!<|endoftext|> TITLE: Eigenvalues of Matrix vs Eigenvalues of Operator QUESTION [12 upvotes]: I'm having some trouble reconciling the concept of eigenvalues of operators with eigenvalues of matrices: Say you have an $n\times n$ matrix $A$. 
It represents a linear operator $T:V\to V$ with respect to some basis $\{e_i\}$ in the background. Now my understanding is that 1.) Whatever the basis is, it has no effect on the eigenvalues of $A$. I.e. solutions to $\text{det}(A-\lambda I)=0$ gives the same solutions regardless if we have $\{e_i\}$ or $\{e_i'\}$ as the basis in the background, so long as we keep the entries of $A$ the same in both cases. 2.) The eigenvalues of $A$ are the same as the eigenvalues of $T$ as long as we use the basis $\{e_i\}$ in which $A$ represents $T$. However, if we keep the entries of $A$ the same, and change the basis in the background, then $A$ represents a different linear operator $T'$. This seems contradictory since \begin{align} \{\text{eigenvalues of} \ T\}&=\{\text{eigenvalues of} \ A \ \text{with respect to basis} \{e_i\}\} \ \ \text{by 2}\\ &=\{\text{eigenvalues of} \ A \ \text{with respect to basis} \{e_i'\}\} \ \ \text{by 1} \\ &=\{\text{eigenvalues of} \ T'\} \ \ \text{by 2} \end{align} But there's no reason the two operators $T$ and $T'$ should have the same eigenvalues. Can someone point out what's wrong here? Any help would be appreciated. REPLY [7 votes]: Two matrices $A,B \in M_n(\mathbb{F})$ are called similar if there exists an invertible matrix $P$ such that $P^{-1}AP = B$. Two linear maps $T,S \colon V \rightarrow V$ on a finite dimensional vector space are called similar if there exists an invertible linear transformation $R$ (which is the same thing as an isomorphism) such that $R^{-1} \circ T \circ R = S$. Consider the following statements: The maps $T,S$ are similar if and only if the matrices $[T]_{\mathcal{B}},[S]_{\mathcal{B}}$ representing the maps with respect to an arbitrary ordered basis $\mathcal{B}$ of $V$ are similar. Two matrices $A,B \in M_n(\mathbb{F})$ are similar if and only if for any $n$ dimensional vector space $V$, any choice of ordered basis $\mathcal{B}_1$ for $V$ and any operator $T \colon V \rightarrow V$ such that $[T]_{\mathcal{B}_1} = A$ you can find a basis $\mathcal{B}_2$ such that $[T]_{\mathcal{B}_2} = B$. Thus, similar matrices define the same linear operators up to a choice of basis. Two linear maps $T,S \colon V \rightarrow V$ are similar if and only if they can be represented by the same matrix with respect to two different ordered bases of $V$. You can check directly that similar linear maps have the same eigenvalues and this will also imply that similar matrices have the same eigenvalues (as the eigenvalues of a matrix $A$ are precisely the eigenvalues of the linear map $T_A \colon \mathbb{F}^n \rightarrow \mathbb{F}^n$ associated to $A$). In your case, the maps $T$ and $T'$ are similar and hence they have the same eigenvalues.<|endoftext|> TITLE: Choosing Functions for the Squeeze Theorem QUESTION [5 upvotes]: Evaluate $$\lim_{n\to \infty}\dfrac{1}{\sqrt{n^2}}+\dfrac{1}{\sqrt{n^2+1}}+...+\dfrac{1}{\sqrt{n^2+2n}}$$ $$$$ I came across the the question on this site itself but had a few doubts on the given solution. As I do not yet have 50 reputation points, I cannot comment over there. Could somebody please help me? 
$$$$ From what I understand of the Squeeze Theorem, the three functions are related as $$g(x)\le f(x)\le h(x)$$ $$$$ Now in the selection of terms, the following inequality has been used: $$\displaystyle \frac{1}{n+1} \leq \frac{1}{\sqrt{n^2+k}} \leq \frac{1}{n} $$when $0 \leq k \leq 2n$ $$$$ This inequality led to the one used as the three functions for the application of the Squeeze theorem: $$\frac{2n+1}{n+1} \leq S(n) \leq \frac{2n+1}{n}$$ where $S(n)=\dfrac{1}{\sqrt{n^2}}+\dfrac{1}{\sqrt{n^2+1}}+...+\dfrac{1}{\sqrt{n^2+2n}}$ $$$$ I don't understand how $$\displaystyle \frac{1}{n+1} \leq \frac{1}{\sqrt{n^2+k}} $$ $$$$ Isn't $$n^2+k\le n^2+2n<(n+1)^2 \Rightarrow n^2+k<(n+1)^2$$ $$\Rightarrow \sqrt{n^2+k}< (n+1)$$$$\Rightarrow \displaystyle \frac{1}{n+1} < \frac{1}{\sqrt{n^2+k}}$$ $$$$ Thus shouldn't the resulting set of inequalities be $$\displaystyle \frac{1}{n+1} < \frac{1}{\sqrt{n^2+k}} \leq \frac{1}{n} $$ $$\Longrightarrow \frac{2n+1}{n+1} < S(n) \leq \frac{2n+1}{n}$$ $$$$ In this case there is a $<$ sign instead of the $\le$ sign. How then can the Squeeze Theorem be applied? Many thanks in advance. $$$$ EDIT: Also, since Limits preserve Inequalities, how can $$\lim_{n\to \infty} \frac{2n+1}{n+1} = \lim_{n\to \infty}\frac{2n+1}{n}$$ when $$\frac{2n+1}{n+1} < \frac{2n+1}{n}$$ REPLY [2 votes]: Remember that $a\lt b$ implies $a\le b$, so your strict inequality $$\frac{2n+1}{n+1} \lt S(n) \leq \frac{2n+1}{n}$$ in particular gives the non-strict one, and the Squeeze Theorem applies exactly as stated. As for the edit: limits preserve non-strict inequalities only, so two sequences can satisfy $a_n\lt b_n$ for every $n$ and still have the same limit; here both bounds tend to $2$.<|endoftext|> TITLE: What are vector norms used for? QUESTION [7 upvotes]: I'm currently working with a computer science problem that requires me to build vectors that can return their own norms. Based on Wolfram Alpha's description, I think I have an idea of how this is accomplished for the simple $L^2$ norm ($\sqrt{a^2+b^2+c^2}$, and so on) required by the exercise, but I've no notion of what this is actually useful for or why I would want to find it, outside of it being required by the exercise. Any insight is appreciated! REPLY [5 votes]: Norms are a measure of distance. One has different ways to define what is the distance between points in multiple dimensions, which collapse to the usual notion of the absolute value in 1D. In particular, Euclidean distance is defined by the 2-norm, $$ \left\| \begin{pmatrix} x_1 \\ \vdots \\ x_k \end{pmatrix} \right\|_2 = \sqrt{\sum_{i=1}^k x_i^2} $$ There are others, of which the 1-norm $$ \left\| \begin{pmatrix} x_1 \\ \vdots \\ x_k \end{pmatrix} \right\|_1 = \sum_{i=1}^k \left|x_i\right| $$ and the infinity norm $$ \left\| \begin{pmatrix} x_1 \\ \vdots \\ x_k \end{pmatrix} \right\|_\infty = \max_{1 \le i \le k} \left\{|x_i|\right\} $$ are the most useful. You are welcome to read up more on Wikipedia.<|endoftext|> TITLE: Spaces of functions similar to $L^p$ but with different local and global sizes QUESTION [6 upvotes]: Obviously much of analysis on $\mathbb R^n$ considers $L^p$ spaces or other Banach spaces derived from them. The definition of $L^p(\mathbb R^n)$ looks very natural, but I've been bothered for some time that what it measures might not be so natural. My issue specifically is that it measures the "local" size of $f$ (say $f\cdot\mathbf1_{|f|>1}$) with the same $p$ that it measures the "global" size (say $f\cdot\mathbf1_{|f|\leq1}$). I intuitively feel like the local and global sizes have nothing to do with each other and therefore it is arbitrary to say they should be measured with the same exponent. It seems to me then that analysis would benefit from some Banach space $X^{p,q}$ that lets you measure the function locally in $p$ and globally in $q$. To illustrate my concern, consider $|x|^{-a}$ ($a>0$), which is not in any $L^p$ space!
But we have $|x|^{-a}\in X^{p,q}$ as long as $n/q \lt a \lt n/p$.<|endoftext|> TITLE: The Boolean Pythagorean triples problem, a $200$-terabyte proof, and $d=163$ QUESTION [7 upvotes]: I came across this interesting math article, "Computer cracks 200-terabyte maths proof" where one phrase caught my attention and I quote, "... all triples could be multi-coloured in integers up to $7824$". Alternatively, from page 2 of this paper, Theorem 1. The set {$1,\dots,7824$} can be partitioned into two parts, such that no part contains a Pythagorean triple. This is impossible for {$1,\dots,7825$}. The number $N=7824$ was awfully familiar. A quick factorization showed that it was in fact, $$N = 7824 = 2^4 \times 3\times \color{blue}{163}$$ Questions: Does anybody know why the largest Heegner number $163$ figures in the largest $N$ that can be multi-colored in the Boolean Pythagorean triples problem? A272709 is the sequence $2, 4, 8, 16, 24, 48, 96, 192,....0,0,0,0,0\dots$ where the zeros start at $a(7825)$. What is the exact value of $a(7824)$? (In the comments, it just says $a(7824)\geq8$.) REPLY [3 votes]: It is not difficult at all to show that $a(7824)$ is immensely huge. For example, many numbers do not appear in any Pythagorean triple of numbers up to 7824. These can be put in any partition. More precisely, in the article (arXiv:1605.00723, section 6.3) they say they found a solution of 7824 with 1567 free variables. I guess these are Boolean variables, so this gives at least $a(7824)\geq 2^{1567}$. (On a side note, let me share a remark on the appearance of 163. Neither the number 7824 nor the set $\{1,\ldots,7824\}$ look anyhow special to this problem. For instance, the number 7824 is one of the numbers that can be put in any side of the partition. The true special number here is 7825, together with the combinatorial complexity of the Pythagorean triples containing it, etcetera. There is a beautiful system of triples (not involving 7824) that is an obstruction to the problem, and that in fact allows a partition as soon as you remove the 7825. Therefore, I would rather seek for a pattern for $a(163 k+1)$... or better look at the factorization $7825=5^2\times 313$.)<|endoftext|> TITLE: How to find the inverse of $f(x)=x+ \frac{x^{3}}{1+x^{2}}$? QUESTION [10 upvotes]: I know that, given $f(x)=x+ \frac{x^{3}}{1+x^{2}}$, I should set $y=x+ \frac{x^{3}}{1+x^{2}}$ and solve in terms of $x$, then just swap the $x$'s and $y$'s. I know that, since the derivative is always positive, and since the function is composed of polynomials, it is continuous and one-to-one, so that an inverse exists. But I can't seem to figure out the algebra to solve in terms of $x$. I just end up going in circles. Any help would be greatly appreciated. REPLY [4 votes]: You get a cubic function/equation for $x$: $2x^3 - y \cdot x^2 + x - y = 0$ The solution for $x$ seems to be as follows: $x = - \frac{1}{6} \cdot (-y + C + \frac{y^2-6}{C})$ where $C = \sqrt[3]{-y^3-45y + \sqrt{108y^4+1917y^2+216}}$ I just followed (by hand) the formulas given here: general formula for the roots (of a cubic function)<|endoftext|> TITLE: Saturation of a measure Folland Problem 1.3.16 QUESTION [9 upvotes]: Exercise 16 - Let $(X,M,\mu)$ be a measure space. A set $E\subset X$ is called locally measurable if $E\cap A\in M$ for all $A\in M$ such that $\mu(A) < \infty$. Let $\tilde{M}$ be the collection of all locally measurable sets. Clearly, $M\subset \tilde{M}$; if $M = \tilde{M}$, then $\mu$ is saturated. a.) If $\mu$ is $\sigma$-finite, then $\mu$ is saturated.
Proof - Suppose $\mu$ is $\sigma$-finite. Let $A\in\tilde{M}$, and let $X = \bigcup_{1}^{\infty}E_j$ where $E_j\in M$ and $\mu(E_j) < \infty$ for all $j$. We know $M\subset \tilde{M}$ what we want to show is that $\tilde{M}\subset M$. We can write $$A = A\cap X = A\cap \left(\bigcup_{1}^{\infty}E_j\right) = \bigcup_{1}^{\infty}E_j\cap A$$ Since $\mu(E_j)<\infty$ we have $E_j\cap A\in M$ for all $j$. Therefore, $\tilde{M}\subset M$, thus $\mu$ is saturated. b.) $\tilde{M}$ is $\sigma$-algebra. Proof - i.) $\emptyset\in M\subset \tilde{M}$, so $\emptyset\in \tilde{M}$. ii.) Let $B\in \tilde{M}$. Take any $E\in M$ such that $\mu(E) < \infty$. Then $$E\setminus B = E\cap B^c = E\cap (E\cap B)^c$$ since $E\in M$ and $(E\cap B)\in M$ then $(E\cap (E\cap B)^c\in M$. Thus we have $B^c\in \tilde{M}$. iii.) Let $\{B_j\}_{1}^{\infty}\in \tilde{M}$. Take any $E\in M$ with $\mu(E)< \infty$. Then, $$\left(\bigcup_{1}^{\infty}B_j\right)\cap E = \bigcup_{1}^{\infty}(B_j\cap E)\in M$$ so, by definition of $\tilde{M}$, $\bigcup_{1}^{\infty}B_j\in \tilde{M}$. Therefore $\tilde{M}$ us a $\sigma$-algebra. c.) Define $\tilde{\mu}$ on $\tilde{M}$ by $\tilde{\mu}(E) = \mu(E)$ if $E\in M$ and $\tilde{\mu}(E) = \infty$ otherwise. Then $\tilde{\mu}$ is a saturated measure on $\tilde{M}$, called the saturation of $\mu$. Step 1: Show that $\tilde{\mu}$ is a measure on $\tilde{M}$. Proof - i.) $\tilde{\mu}(\emptyset) = \mu(\emptyset) = 0$. ii.) Let $\{E_j\}_{1}^{\infty}\in \tilde{M}$ that is pairwise disjoint. Let $$E = \bigcup_{1}^{\infty}E_j$$ If $E \in M$ then \begin{align*} \tilde{\mu}(E) = \tilde{\mu}\left(\bigcup_{1}^{\infty}E_j\right) &= \mu\left(\bigcup_{1}^{\infty}E_j\right)\\ &= \sum_{1}^{\infty}\mu(E_j)\\ &= \sum_{1}^{\infty}\tilde{\mu}(E_j) \end{align*} If $E\notin M$ then $\tilde{\mu}(E) = \infty$... not sure where to go from here. Step 2 - $\tilde{\mu}$ is saturated. Proof - Let $E\subset X$ such that $E\cap A\in \tilde{M}$ when $\tilde{\mu}(A) < \infty$. Choose a $B\in M$ such that $\mu(B)<\infty$. Then, clearly $\tilde{\mu}(B)<\infty$ and $E\cap B\in \tilde{M}$. So, $E\cap B = (E\cap B)\cap B\in M$ so $E\in \tilde{M}$ it thus follows that $\tilde{\mu}$ is saturated. d.) If $\mu$ is complete, so is $\tilde{\mu}$. Proof - Suppose $\mu$ is complete. Let $A\subset X$ and suppose there is a $B\in \tilde{M}$ such that $A\subset B$ and $\mu(B) = 0$. Since $B\in\tilde{M}$ and $\mu(B) = 0$ then $\tilde{\mu}(B) < \infty$ and hence $B\in M$. This, since $A\subset B$ we have $A\in M$ by completeness of $\mu$. Therefore, $A\in \tilde{M}$ and $\tilde{\mu}$ is complete. e.) Suppose that $\mu$ is semifinite. For $E\in\tilde{M}$, define $\underline{\mu}(E) = \sup\{\mu(A):A\in M, A\subset E\}$. Then $\underline{\mu}$ is a saturated measure on $\tilde{M}$ that extends $\mu$. Step 1 - $\underline{\mu}$ is a measure. Proof - i.) $\overline{\mu}(\emptyset) = \mu(\emptyset) = 0$ ii.) Let $\{E_j\}_{1}^{\infty}$ be a sequence of disjoint sets in $\tilde{M}$. Set, $$E = \bigcup_{1}^{\infty}E_j$$ then by definition of $\tilde{M}$ there is an $A\in M$ and $A\subset E$. Case 1 - $\mu(A) < \infty$. Then $$\mu(A) = \mu\left(\bigcup_{1}^{\infty}E_j\cap A\right) = \sum_{1}^{\infty}\mu(E_j\cap A)\leq \sum_{1}^{\infty}\underline{\mu}(E_j)$$ Case 2 - $\mu(A) = \infty$. By semifiniteness, for all $C>0$ there exists a $F\subset A$ such that $F\in M$ and $\mu(F) = C$. Then by case 1, $\leq \sum_{1}^{\infty}\underline{\mu}(E_j) = \infty$. Therefore, $\mu(A) \leq \sum_{1}^{\infty}\underline{\mu}(E_j)$. 
Taking the supremum over all $A$ we have $$\underline{\mu}\left(\bigcup_{1}^{\infty}E_j\right)\leq \sum_{1}^{\infty}\underline{\mu}(E_j)$$ Now we need to show the reverse inequality. By the definition of supremum there exists a sequence $\{B_i\}_{1}^{\infty}\in M$ and $B_i\subset E_i$ for all $i$. Thus, $\underline{\mu}(E_i)\leq \mu(B_i) + \epsilon 2^{-i}$. Therefore, \begin{align*} \sum_{1}^{\infty}\underline{\mu}(E_i) &\leq \sum_{1}^{\infty}\mu(B_i) + \epsilon\\ &= \mu\left(\bigcup_{1}^{\infty}B_i\right) + \epsilon \ \ \text{is this true because of case 1?}\\ &\leq \underline{\mu}\left(\bigcup_{1}^{\infty}E_i\right) + \epsilon \end{align*} Since this holds for all $\epsilon > 0$, we have $$\sum_{1}^{\infty}\underline{\mu}(E_j)\leq \underline{\mu}\left(\bigcup_{1}^{\infty}E_j\right)$$ Therefore, $$\underline{\mu}\left(\bigcup_{1}^{\infty}E_j\right) = \sum_{1}^{\infty}\underline{\mu}(E_j)$$ and hence $\underline{\mu}$ is a measure. Step 2 - $\underline{\mu}$ is saturated. Proof - Let $E\subset X$ be such that $E\cap A\in \tilde{M}$ when $\underline{\mu}(A)< \infty$. Take any $B\in M$ such that $\mu(B) < \infty$. Then $\underline{\mu}(B) < \infty$ so $E\cap B\in \tilde{M}$. Thus, $E\cap B = (E\cap B)\cap B\in M$ and $E\in\tilde{M}$, hence, $\underline{\mu}$ is saturated. Step 3 - $\underline{\mu}$ is an extention of $\mu$. Proof - Let $E\in M$. For any $A\in M$ such that $A\subset E$, we have by monotonicity that $\mu(A)\leq \mu(E)$. Since $\underline{\mu}(E)$ is the supremum over all such $A$, we must have that $\underline{\mu}(E)\leq \mu(E)$. OTOH.... not sure how to show the reverse inequality. f.) Let $X_1$ and $X_2$ be disjoint uncountable sets, $X = X_1\cup X_2$, and $M$ the $\sigma$-alegbra of countable or co-countable sets in $X$. Let $\mu_0$ be counting measure on $\mathcal{P}(X_1)$ and define $\mu$ on $M$ by $\mu(E) = \mu_0(E\cap X_1)$. Then $\mu$ is a measure on $M$, $\tilde{M} = \mathcal{P}(X)$, and in the notation of parts (c) and (e), $\tilde{\mu}\neq \underline{\mu}$. Step 1 - $\mu$ is a measure on $M$. Proof - i.) $\mu(\emptyset) = \mu_0(\emptyset\cap X_1) = 0$ ii.) Let $\{E_j\}_{1}^{\infty}$ be a sequence of disjoint sets in $M$, then \begin{align*} \mu\left(\bigcup_{1}^{\infty}E_j\right) &= \mu_0\left(\bigcup_{1}^{\infty}E_j\cap X_1\right)\\ &= \sum_{1}^{\infty}\mu_0(E_j\cap X_1)\\ &= \sum_{1}^{\infty}\mu(E_j) \ \ \ \text{is this true because} \ \mu_0 \ \text{is a counting measure?} \end{align*} Therefore $\mu$ is a measure on $M$. Step 2 - $\tilde{M} = \mathcal{P}(X)$ Proof - Step 3 - $\tilde{\mu}\neq \underline{\mu}$ Proof - Take $y_1,y_2\in X_1$. Let $E = \{y_1,y_2\}\cup X_2$. Then $E\notin M$, so $\tilde{\mu}(E) = \infty$. However, $\underline{\mu}(E) = 2$. I will re-edit my question and include these other proofs as I continue to do them. I am pretty sure my proof for $\tilde{\mu}$ is a measure is incorrect. But I am not really sure how to do it. Any suggestions on any of these is greatly appreciated. REPLY [8 votes]: Items a and b are correct. For the rest of the items, some need just minor improvements and some really need to be fixed. I tried to keep the proof as close to your proof as possible. c.) Define $\tilde{\mu}$ on $\tilde{M}$ by $\tilde{\mu}(E) = \mu(E)$ if $E\in M$ and $\tilde{\mu}(E) = \infty$ otherwise. Then $\tilde{\mu}$ is a saturated measure on $\tilde{M}$, called the saturation of $\mu$. Step 1: Show that $\tilde{\mu}$ is a measure on $\tilde{M}$. Proof - It is clear that $\tilde{\mu}$ is a non-negative well-defined function on $\tilde{M}$. i.) 
$\tilde{\mu}(\emptyset) = \mu(\emptyset) = 0$. ii.) Let $\{E_j\}_{1}^{\infty}\in \tilde{M}$ that is pairwise disjoint. Let $$E = \bigcup_{1}^{\infty}E_j$$ If $E \in M$, then we have two cases. First case: if, for all $j$, $E_j\in M$ then \begin{align*} \tilde{\mu}(E) = \tilde{\mu}\left(\bigcup_{1}^{\infty}E_j\right) &= \mu\left(\bigcup_{1}^{\infty}E_j\right) = \sum_{1}^{\infty}\mu(E_j)= \sum_{1}^{\infty}\tilde{\mu}(E_j) \end{align*} Second case: we begin remarking that if $E \in M$ and $\mu(E)< \infty$ then, for all $j$, $E_j=E_j \cap E \in M$. So, if $E \in M$ and, there is $j_0$ such that $E_{j_0}\notin M$ then $\mu(E)=\infty$, and we have $$\tilde{\mu}(E) = \mu(E) = \infty = \tilde{\mu}(E_{j_0})\leqslant \sum_{1}^{\infty}\tilde{\mu}(E_j)\leqslant \infty$$ So we have $$\tilde{\mu}(E) = \infty = \sum_{1}^{\infty}\tilde{\mu}(E_j)$$ If $E\notin M$ then $\tilde{\mu}(E) = \infty$. Since $E = \bigcup_{1}^{\infty}E_j$ and $M$ is a $\sigma$-algebra, there is $j_0$ such that $E_{j_0}\notin M$. So $\tilde{\mu}(E_{j_0}) = \infty$. So we have $$\tilde{\mu}(E) = \infty = \tilde{\mu}(E_{j_0})\leqslant \sum_{1}^{\infty}\tilde{\mu}(E_j)\leqslant \infty$$ So we have $$\tilde{\mu}(E) = \infty = \sum_{1}^{\infty}\tilde{\mu}(E_j)$$ Step 2 - $\tilde{\mu}$ is saturated. Proof - Let $E\subset X$ such that $E\cap A\in \tilde{M}$ when $\tilde{\mu}(A) < \infty$. For any $B\in M$ such that $\mu(B)<\infty$, we clearly have $\tilde{\mu}(B)=\mu(B)<\infty$ and $E\cap B\in \tilde{M}$. Now, since $E\cap B\in \tilde{M}$ and $B\in M$ such that $\mu(B)<\infty$ we have $E\cap B = (E\cap B)\cap B\in M$. So we proved that, for any $B\in M$ such that $\mu(B)<\infty$, $E\cap B \in M$. So $E\in \tilde{M}$. It thus follows that $\tilde{\mu}$ is saturated. d.) If $\mu$ is complete, so is $\tilde{\mu}$. Proof - Suppose $\mu$ is complete. Let $A\subset X$ and suppose there is a $B\in \tilde{M}$ such that $A\subset B$ and $\tilde{\mu}(B) = 0$. Since $B\in \tilde{M}$ and $\tilde{\mu}(B) = 0 < \infty$ and hence $ B \in M$ and $\mu(B)=\tilde{\mu}(B) = 0$. Since $A\subset B$ we have $A\in M \subset \tilde{M}$ by completeness of $\mu$. Therefore, $A\in \tilde{M}$ and $\tilde{\mu}$ is complete. e.) Suppose that $\mu$ is semifinite. For $E\in\tilde{M}$, define $\underline{\mu}(E) = \sup\{\mu(A):A\in M, A\subset E\}$. Then $\underline{\mu}$ is a saturated measure on $\tilde{M}$ that extends $\mu$. Before proving the item e we prove a lemma Lemma: Suppose that $\mu$ is semifinite. For $E\in\tilde{M}$, $$\underline{\mu}(E) = \sup\{\mu(A):A\in M, A\subset E \textrm{ and } \mu(A)<\infty\}$$ Proof of the lemma: We clearly have $$\underline{\mu}(E)=\sup\{\mu(A):A\in M, A\subset E \} \geqslant \sup\{\mu(A):A\in M, A\subset E \textrm{ and } \mu(A)<\infty\}$$ Case 1: If, for all $A\in M$, $A\subset E$, we have $\mu(A)<\infty$ then we have $$\underline{\mu}(E)=\sup\{\mu(A):A\in M, A\subset E \} = \sup\{\mu(A):A\in M, A\subset E \textrm{ and } \mu(A)<\infty\}$$ Case 2: Now, suppose there is $A_0\in M$, $A_0\subset E$ such that $\mu(A_0)=\infty$. 
Then $\underline{\mu}(E)=\infty$ and, since $\mu$ is semifinite, we have that $$\sup\{\mu(A):A\in M, A\subset A_0 \textrm{ and } \mu(A)<\infty\}=\mu(A_0)=\infty$$ So we have $$\infty=\underline{\mu}(E)=\sup\{\mu(A):A\in M, A\subset E \} \geqslant \sup\{\mu(A):A\in M, A\subset E \textrm{ and } \mu(A)<\infty\}\geqslant \\ \geqslant \sup\{\mu(A):A\in M, A\subset A_0 \textrm{ and } \mu(A)<\infty\}=\infty$$ So we have $$\underline{\mu}(E)=\infty= \sup\{\mu(A):A\in M, A\subset E \textrm{ and } \mu(A)<\infty\}$$ End of proof of the lemma Step 1 - $\underline{\mu}$ is a measure. Proof - It is clear that $\tilde{\mu}$ is a non-negative well-defined function on $\tilde{M}$. i.) $\underline{\mu}(\emptyset) = \mu(\emptyset) = 0$ ii.) Let $\{E_j\}_{1}^{\infty}$ be a sequence of disjoint sets in $\tilde{M}$. Set, $$E = \bigcup_{1}^{\infty}E_j$$ Then, by the lemma, we have: \begin{align*} \underline{\mu}\left ( \bigcup_{1}^{\infty}E_j \right ) & = \sup\left\{\mu(A):A\in M, A\subset \bigcup_{1}^{\infty}E_j \textrm{ and } \mu(A)<\infty \right\} \end{align*} Since each $E_j \in \tilde{M}$, then, for each $A\in M$, $A\subset \bigcup_{1}^{\infty}E_j$ and $\mu(A)<\infty$, we have $E_j \cap A \in M$ and $$\mu(A)=\sum_1^\infty\mu(E_j \cap A)$$ So we have \begin{align*} \underline{\mu}\left ( \bigcup_{1}^{\infty}E_j \right ) & = \sup\left\{\mu(A):A\in M, A\subset \bigcup_{1}^{\infty}E_j \textrm{ and } \mu(A)<\infty \right\}=\\ & = \sup\left\{\sum_1^\infty\mu(A \cap E_j):A\in M, A\subset \bigcup_{1}^{\infty}E_j \textrm{ and } \mu(A)<\infty \right\}\leqslant \\ &\leqslant \sum_1^\infty\sup\left\{\mu(A \cap E_j):A\in M, A\subset \bigcup_{1}^{\infty}E_j \textrm{ and } \mu(A)<\infty \right\}\leqslant \\ &\leqslant\sum_1^\infty\sup\left\{\mu(B):B\in M, B\subset E_j \textrm{ and } \mu(B)<\infty \right\}= \\ &= \sum_1^\infty \underline{\mu}( E_j ) & \leqslant \end{align*} So we proved $$\underline{\mu}\left(\bigcup_{1}^{\infty}E_j\right)\leqslant \sum_{1}^{\infty}\underline{\mu}(E_j)$$ Now note that from the defintion of $\overline{\mu}$ we have \begin{align*} \sum_{1}^{\infty}\underline{\mu}(E_j)&= \sum_{1}^{\infty}\sup\left\{\mu(B_j):B_j\in M, B_j\subset E_j \right\}= \\ &= \sup\left\{\sum_{1}^{\infty}\mu(B_j):B_j\in M, B_j\subset E_j \right\}= \\ &= \sup\left\{\mu(\bigcup_{1}^{\infty}B_j):B_j\in M, B_j\subset E_j \right\}\leqslant \\ &\leqslant \sup\left\{\mu(B):B\in M, B\subset \bigcup_{1}^{\infty} E_j \right\}= \\ &=\underline{\mu}\left(\bigcup_{1}^{\infty}E_j\right) \end{align*} So we proved $$\underline{\mu}\left(\bigcup_{1}^{\infty}E_j\right)= \sum_{1}^{\infty}\underline{\mu}(E_j)$$ and hence $\underline{\mu}$ is a measure. Step 2 - $\underline{\mu}$ is saturated. Proof - Let $E\subset X$ be such that $E\cap A\in \tilde{M}$ when $A \in \tilde{M}$ and $\underline{\mu}(A)< \infty$. Take any $B\in M$ such that $\mu(B) < \infty$. Then $\underline{\mu}(B) \leqslant \mu(B) < \infty$ so $E\cap B\in \tilde{M}$. Thus, $E\cap B = (E\cap B)\cap B\in M$. So, we proved that, for any $B\in M$ such that $\mu(B) < \infty$, $E\cap B \in M$. So $E\in\tilde{M}$, hence, $\underline{\mu}$ is saturated. Step 3 - $\underline{\mu}$ is an extention of $\mu$. Proof - Let $E\in M$. Then $$\underline{\mu}(E)=\sup\{\mu(A): A\in M, A\subset E \}=\mu(E)$$ So $\underline{\mu}$ is an extention of $\mu$. f.) Let $X_1$ and $X_2$ be disjoint uncountable sets, $X = X_1\cup X_2$, and $M$ the $\sigma$-alegbra of countable or co-countable sets in $X$. 
Let $\mu_0$ be counting measure on $\mathcal{P}(X_1)$ and define $\mu$ on $M$ by $\mu(E) = \mu_0(E\cap X_1)$. Then $\mu$ is a measure on $M$, $\tilde{M} = \mathcal{P}(X)$, and in the notation of parts (c) and (e), $\tilde{\mu}\neq \underline{\mu}$. Step 1 - $\mu$ is a measure on $M$. Proof - i.) $\mu(\emptyset) = \mu_0(\emptyset\cap X_1) = 0$ ii.) Let $\{E_j\}_{1}^{\infty}$ be a sequence of disjoint sets in $M$, then \begin{align*} \mu\left(\bigcup_{1}^{\infty}E_j\right) &= \mu_0\left(\bigcup_{1}^{\infty}E_j\cap X_1\right)\\ &= \sum_{1}^{\infty}\mu_0(E_j\cap X_1)\\ &= \sum_{1}^{\infty}\mu(E_j) \ \ \ \text{is this true because} \ \mu_0 \ \text{is a counting measure?} \end{align*} Therefore $\mu$ is a measure on $M$. Step 2 - $\tilde{M} = \mathcal{P}(X)$ Proof - Note that if $A \in M$ and $\mu(A)<\infty$ then $A \cap X_1$ is a finite set, so $A$ can not be co-countable. So $A$ is countable. Given any set $E\subset X$, then for all $A \in M$ and $\mu(A)<\infty$, $A$ is countable and so is $E \cap A$. So $E \cap A \in M$. So $\tilde{M}=\mathcal{P}(X)$. Step 3 - $\tilde{\mu}\neq \underline{\mu}$ Proof - Take $y_1,y_2\in X_1$. Let $E = \{y_1,y_2\}\cup X_2$. Then $E\notin M$, so $\tilde{\mu}(E) = \infty$. However, $\underline{\mu}(E) = 2$.<|endoftext|> TITLE: Smallest number of $n$-simplices in a triangulation of the sphere QUESTION [7 upvotes]: Let $X$ be a simplicial complex homeomorphic to $S^n$. I proved that there must be at least $(n+2)$ vertices in $X$ and that there must be at least one $n$-simplex in $X$. Now I want to prove that there are at least $(n+2)$ n-simplices in $X$. My idea was to assume that there are fewer than $(n+2)$ n-simplices and then proving that the simplicial boundary map $\partial_n:C_n^{\Delta}(X)\to C_{n-1}^{\Delta}(X)$ is injective, contradicting $H_n^{\Delta}(X)=\Bbb Z$. This quickly proved to be very messy and I am not sure if that's the best way to go about it. I appreciate all help. REPLY [4 votes]: As you said, there must be at least one $n$-simplex in $X$, call it $\sigma$. This simplex has $n+1$ faces $f_0,\dots, f_n,$ with the face $f_i$ being opposite to the vertex $v_i$ of $\sigma$. Due to the local topology of an $n$-manifold, each $f_i$ is the intersection of $\sigma$ and another $n$-simplex $\sigma_i$ which does not contain $v_i$. Since every other $\sigma_j$ intersects $\sigma$ in $f_j$, it contains $v_i$ and is thus distinct from $\sigma_i$. This shows that there are $n+2$ $n$-simplices $\sigma, \sigma_0,\sigma_1,\dots,\sigma_n$.<|endoftext|> TITLE: What is the difference between $\frac{\mathrm{d}}{\mathrm{d}x}$ and $\frac{\partial}{\partial x}$? QUESTION [15 upvotes]: Is there not any difference between $\frac{\mathrm{d}}{\mathrm{d}x}$ and $\frac{\partial}{\partial x}$ as long as your function has one variable? $f(x) = x^3\implies \left\{\begin{align}&\dfrac{\mathrm{d}}{\mathrm{d}x}f = \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}(x\mapsto x^3)}{\mathrm{d}x} = x\mapsto 3x^2&\color{green}{\checkmark}\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x}= \dfrac{\partial(x\mapsto x^3)}{\partial x} = x\mapsto 3x^2&\color{green}{\checkmark}\end{align}\right.$ And if so, why does this change with two (or more) variables? 
$\require{cancel} f(x,y) = yx^3\implies \left\{\begin{align}&\color{grey}{\cancel{\dfrac{\mathrm{d}}{\mathrm{d}x}f = }} \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}((x,y)\mapsto yx^3)}{\mathrm{d}x} \neq x\mapsto 3yx^2&\color{green}{\checkmark}\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x} =\dfrac{\partial((x,y)\mapsto x^3)}{\partial x} = x\mapsto 3yx^2&\color{red}{\mathcal{X}}\end{align}\right.$ I get that it is supposed to be something like this $f(x,y) = yx^3\implies \left\{\begin{align}&\color{grey}{\cancel{\dfrac{\mathrm{d}}{\mathrm{d}x}f = }} \dfrac{\mathrm{d}f}{\mathrm{d}x}=\dfrac{\mathrm{d}((x,y)\mapsto yx^3)}{\mathrm{d}x} \neq\\&\cdots\quad (x,y)\mapsto 3y\dfrac{\mathrm{d}\color{red}{(x\mapsto x^3)}}{\mathrm{d}x}+\dfrac{\mathrm{d}\color{red}{(y\mapsto y)}}{\mathrm{d}x}x^3 =\\&\cdots\quad (x,y)\mapsto 3y\color{red}{(x\mapsto x^2)}+\dfrac{\mathrm{d}\color{red}{(y\mapsto y)}}{\mathrm{d}x}x^3&\color{green}{\checkmark}\\&\dfrac{\partial}{\partial x}f = \dfrac{\partial f}{\partial x} = \dfrac{\partial(x,y)\mapsto x^3)}{\partial x} = x\mapsto 3yx^2&\color{green}{\checkmark}\end{align}\right.$ REPLY [3 votes]: In the following we consider real-valued functions \begin{align*} &f:\mathbb{R}\rightarrow\mathbb{R}&\text{and}\qquad\quad&g:\mathbb{R}^2\rightarrow\mathbb{R}\\ &x\mapsto f(x)&&(x,y)\mapsto g(x,y) \end{align*} We denote with $\frac{\partial}{\partial x}$ the partial derivative of a function $f$ with respect to the variable $x$ and with $\frac{d}{dx}$ the total derivative of a function $f$ with respect to the variable $x$. A short answer is: In the one-variable case there is no difference between the total derivative and the partial derivative of $f$ with respect to $x$. In the multi-variable case there is in general a difference between the total and the partial derivative. $$ $$ Multivariable case: We consider the total derivative of $g=g(x,y)$ with respect to $x$. It is defined as \begin{align*} \frac{dg}{dx}&=\frac{\partial g}{\partial x}\cdot\frac{dx}{dx}+\frac{\partial g}{\partial y}\cdot\frac{dy}{dx}\\ &=\frac{\partial g}{\partial x}+\frac{\partial g}{\partial y}\cdot\frac{dy}{dx}\tag{1} \end{align*} and this is generally not the same as \begin{align*} \frac{\partial g}{\partial x} \end{align*} Example: $g(x,y)=yx^3$ We obtain \begin{align*} \frac{d}{dx}g(x,y)&=\frac{\partial }{\partial x}g(x,y)+\frac{\partial }{\partial y}g(x,y)\cdot\frac{dy}{dx}\\ &=\frac{\partial }{\partial x}(yx^3)+\frac{\partial }{\partial y}(yx^3)\cdot\frac{dy}{dx}\\ &=3x^2y+x^3\frac{dy}{dx}\\ \end{align*} whereas \begin{align*} \frac{\partial }{\partial x}g(x,y)&=\frac{\partial }{\partial x}(yx^3)\\ &=3x^2y \end{align*} We observe the total derivative and the partial derivative are different in general. They are the same in the example above only when \begin{align*} \frac{dy}{dx}\equiv 0 \end{align*} Single variable case: In this case the total derivative and the partial derivative are the same since applying (1) in the single variable case is \begin{align*} \frac{df}{dx}&=\frac{\partial f}{\partial x}\cdot\frac{dx}{dx}=\frac{\partial f}{\partial x} \end{align*}<|endoftext|> TITLE: Visualization of the dual space of a vector space QUESTION [9 upvotes]: I am wondering what the motivation was for defining a dual space of a vector space, and how to visualize the dual space. I'm asking since it doesn't seem to me to be intuitive to deal with such a space. 
In particular, I'm looking for questions where it would be natural to consider the space of linear functionals in order to answer these questions. REPLY [8 votes]: What is a "linear equation" in $n$ variables $X_1, \dots, X_n$ over the field $\mathbb{F}$? In some sense, it is a formal expression of the form $a_1 X_1 + \dots + a_n X_n$ where $a_i \in \mathbb{F}$ are scalars and the $X_i$ are "placeholders" in which you can plug in values and obtain a result in $\mathbb{F}$. This is precisely an element $\varphi$ of the dual space $(\mathbb{F}^n)^{*}$ (here, $\varphi(X_1, \dots, X_n) = a_1X_1 + \dots + a_n X_n$). Thus, you can think of $(\mathbb{F}^n)^{*}$ as the space of linear equations. It is somewhat surprising in the beginning that the space of linear equations itself has a linear structure and can be considered as a vector space. You can add two linear equations and multiply a linear equation by a scalar. Let me show you that if you are familiar with the basic techniques of linear algebra, you already used the notion of a dual space in disguise many times: When you are performing Gaussian elimination to solve a system $$ a_{11} X_1 + \dots + a_{1n} X_n = \varphi_1(X_1, \dots, X_n) = 0, \\ \vdots \\ a_{k1} X_1 + \dots + a_{kn} X_n = \varphi_k(X_1, \dots, X_n) = 0 $$ you are applying row operations to the corresponding matrix. The operations you perform on the equations correspond precisely to the operations you perform on the linear functionals $\varphi_i$ in $(\mathbb{F}^{n})^{*}$. They result in replacing the vectors $\varphi_1, \dots, \varphi_k$ in $(\mathbb{F}^n)^{*}$ with linear combinations in such a way that the span in $(\mathbb{F}^n)^{*}$ doesn't change. If we set $W = \operatorname{span} (\varphi_1, \dots, \varphi_k)$ then solving the linear system above corresponds to finding (a basis for) the annihilator of $W$ (after a certain identification). Row operations don't change $W$ and so don't change the annihilator. If you are given a subspace $V \subseteq \mathbb{F}^n$ described as a span of vectors and you want to describe it as a solution of a system of equations, you are trying to find a basis $\varphi_1, \dots, \varphi_k$ for the subspace of linear equations that $V$ satisfy. This is precisely finding a basis for the annihilator of $V$. For example, if $V = \mathrm{span} ((1,1,1),(1,1,-1))$ then $V$ is a two-dimensional plane defined for example by the equation $X - Y = 0$ but it is also defined by the equation $2X - 2Y = 0$. The "equation" $(X,Y,Z) \mapsto X - Y$ is a basis for the space of equations that vanish on $V$. Given a matrix $A \in M_{n \times m}(\mathbb{F})$, you know that the column space of $A$ is precisely the image of the associated linear map $T_A \colon \mathbb{F}^m \rightarrow \mathbb{F}^n$. You might have wondered what is the meaning of the row space of $A$ in terms of the linear map $T_A$. 
The row space of $A$ is precisely the set of equations $\ker(T_A)$ satisfies Finally, when working with an abstract vector space $V$ you don't have any preferred coordinates so you can't define "a linear equation" on $V$ as above - the correct definition that is consistent with what I described is precisely that of a linear functional on $V$ the space $V^{*}$ is the space of linear equations on $V$.<|endoftext|> TITLE: Range of function $f(x) = \sqrt{x+27}+\sqrt{13-x}+\sqrt{x}$ QUESTION [6 upvotes]: Range of function $f(x) = \sqrt{x+27}+\sqrt{13-x}+\sqrt{x}$ $\bf{My\; Try::}$ For $\min$ of $f(x)$ $$\left(\sqrt{13-x}+\sqrt{x}\right)^2=13-x+x+2\sqrt{x}\sqrt{13-x}= 13+2\sqrt{x}\sqrt{13-x}\geq 13$$ Now $$\sqrt{x+27} + \sqrt{13-x}+\sqrt{x} \geq \sqrt{27} + \sqrt{13}$$ and equality hold at $x=0$ Now How can i calculate $\max$ of $f(x)\;,$ Help required, Thanks REPLY [3 votes]: Hint:by Cauchy-Schwarz inequality $$121=[(x+27)+3(13-x)+2x][1+\dfrac{1}{3}+\dfrac{1}{2}]\ge f^2(x)$$<|endoftext|> TITLE: Connection of $\mathcal{O}(n)$ on a toric manifold QUESTION [5 upvotes]: The holomorphic line bundle $\mathcal{O}_X(1)$ over a toric manifold $X$, admits a hermitian connection, $A^{(1)}$, whose $U(1)$ gauge transformation in a local patch of the base space is $$ A^{(1)}_Idx^I\rightarrow A^{(1)}_Idx^I-\textrm{ }d\lambda, $$ where $x^I$ are sections of the bundle. On page 61 of https://arxiv.org/abs/hep-th/0005247, it is assumed without explanation that the hermitian connection of $\mathcal{O}_X(-n)$ is just $$ A^{(-n)}=-nA^{(1)} $$ Why is this true? References would be appreciated. REPLY [2 votes]: $\mathcal{O}(-n)=\mathcal{O}(-1)^{\otimes n}$ is a tensor product of line bundles, so the connection is simply $n$ times the connection on $\mathcal{O}(-1)$. If that was not obvious, it will be useful if we review the basic ingredients involved here, which will hopefully clear up any confusion. Consider a principal $G$-bundle $P$ over a manifold $X$. Suppose we have a connection $A$ on $P$, i.e. a locally defined $1$-form valued in the Lie algebra $\mathfrak{g}$ of $G$. Let $\rho:G \to \mathrm{Aut}(V)$ be a representation of $G$ on the vector space $V$, and let $\hat\rho: \mathfrak{g} \to \mathrm{Aut}(V)$ be the induced representation of $\mathfrak{g}$ on $V$. Let $E_\rho = P \times_\rho V$ be the vector bundle associated to $P$ via the representation $\rho$. The connection $A$ on $P$ defines the covariant derivative acting on sections of $E_\rho$ by $$\nabla \equiv \mathrm{d} + \hat\rho(A),$$ where $\mathrm{d}$ is the differential on $X$. Suppose $\rho':G\to \mathrm{Aut}(V')$ is another representation of $G$, and let $E_{\rho'}$ denote the associated vector bundle. Then we can form the tensor product bundle $E_\rho\otimes E_{\rho'}$ whose fibers are the tensor products of the fibers of $E_\rho$ and $E_{\rho'}$. The covariant derivative acting on sections of $E_\rho\otimes E_{\rho'}$ is $$\nabla = \mathrm{d} + (\hat\rho \otimes \hat\rho')(A),$$ where $$(\hat\rho\otimes \hat\rho')(A) = \hat\rho(A)\otimes 1 + 1\otimes \hat\rho'(A).$$ Now, consider the simple case $G = \mathrm{U(1)}$. Its irreducible representations are $1$-dimensional and are labeled by an integer $q\in \mathbb{Z}$, $\rho_q :\mathrm{U}(1) \to \mathrm{Aut}(\mathbb{C})$ with $\rho_q(e^{i\theta}) = e^{iq \theta}$. The associated Lie algebra representation is simply multiplication by $q$. The covariant derivative on $E_{\rho_q}$ is therefore simply $\nabla = \mathrm{d} + q A$. 
The covariant derivative on $E_{\rho_q} \otimes E_{\rho_{q'}}$ is then $\nabla = \mathrm{d} + (q+q')A$, since the tensor product is trivial for these $1$-dimensional representations. Thus, if $P$ is a $\mathrm{U(1)}$ bundle with connection $A$ and $\mathcal{L}$ is the charge $1$ associated line bundle with covariant derivative $\nabla = \mathrm{d} + A$, then the covariant derivative on $\mathcal{L}^{\otimes n}$ is $\nabla = \mathrm{d} + n A$. For the original question, $\mathcal{O}(1)$ is a line bundle with covariant derivative $\mathrm{d} + A$. $\mathcal{O}(-1)$ is its dual, with covariant derivative $\mathrm{d} - A$. $\mathcal{O}(-n)$ is by definition the tensor product bundle $\mathcal{O}(-1)^{\otimes n}$, and its covariant derivative is $\mathrm{d} - n A$.<|endoftext|> TITLE: The integral is the area under the curve. Is there a similar notion for stochastic integrals? QUESTION [14 upvotes]: As discussed in the answers to this question, the integral is defined to be the (net signed) area under the curve. The definition in terms of Riemann sums is precisely designed to accomplish this. Now the stochastic integral in Ito calculus is more formally defined and the result of the integration is another stochastic process. Is there a similar geometric interpretation of a stochastic integral? Are there special cases which are simpler to understand? For example what about Brownian Motion? Are there restrictions which allow pathwise integration? Edit: I found a related question here which asks how to compute $\int W_sdW_s$. REPLY [6 votes]: For the geometric interpretation there is a nice result in the book by Karatzas and Shreve (chapter 3, 2.29): Having a partition $\Pi$ with $0=t_0\lt t_1\lt\dots\lt t_n=t$, the Riemann-type sums $\sum_{i=0}^{n-1} X_{t_i}\bigl(W_{t_{i+1}}-W_{t_i}\bigr)$ converge in probability to $\int_0^t X_s\,dW_s$ as the mesh of $\Pi$ tends to zero, for continuous adapted integrands $X$. So the stochastic integral can still be pictured as a limit of Riemann sums, only the evaluation must be at the left endpoints and the convergence is in probability rather than pathwise.<|endoftext|> TITLE: Combinatorial formulas and interpretations QUESTION [8 upvotes]: I found that $$ \sum_{j=0}^{s}(n-s+j)!\binom{s}{j}(s-j)! =s! \sum_{j=0}^{s} \frac{(n-s+j)!}{j!} = \frac{(n+1)!}{n+1-s}$$ I proved this formula with induction, but I was wondering if there is a (combinatorial?) interpretation that can explain it. Moreover, I wanted to simplify in a similar way also the following: $$ \sum_{j=0}^{s}(n-s+j)!\binom{s}{j}3^{s-j}(s-j)! = s! 3^s \sum_{j=0}^{s} \frac{(n-s+j)!}{j!3^j} = ?$$ Is it possible? REPLY [12 votes]: Here's another non-induction and non-combinatorial proof. $$\begin{align} \sum_{j=0}^{s}(n-s+j)!\binom{s}{j}(s-j)! &=s! \sum_{j=0}^{s} \frac{(n-s+j)!}{j!} \cdot \color{lightgrey}{\frac{(n-s)!}{(n-s)!}}\\ &=s!(n-s)!\sum_{j=0}^s\binom {n-s+j}{n-s}\\ &=s!(n-s)!\binom {n+1}{n-s+1} &&\tiny\text{using }\sum_{r=a}^b\binom {m+r}{m+a}=\binom {m+b+1}{m+a+1}\\ &=s! (n-s)!\frac {(n+1)!}{(n-s+1)!s!}\\ &=(n-s)!\frac{(n+1)!}{(n-s+1)!}\\ &=\frac{(n+1)!}{n-s+1}\qquad\blacksquare \end{align}$$ Alternatively $$\begin{align} \sum_{j=0}^{s}(n-s+j)!\binom{s}{j}(s-j)! &= n!\sum_{j=0}^s\frac{\binom sj}{\frac{n!}{(n-s+j)!(s-j)!}}\\ &=n!\sum_{j=0}^s\frac{\binom s{s-j}}{\binom n{s-j}}\\ &=\frac{n!}{\binom ns}\sum_{j=0}^s\frac{\binom ns\binom s{s-j}}{\binom n{s-j}}\\ &=\frac{n!}{\frac{n!}{s!(n-s)!}}\sum_{j=0}^s\frac{\binom n{s-j}\binom {n-s+j}{j}}{\binom n{s-j}}\\ &=s!(n-s)!\sum_{j=0}^s\binom {n-s+j}{n-s}\\ &=s!(n-s)!\binom{n+1}{n-s+1}\\ &=s!(n-s)! \frac{(n+1)!}{(n-s+1)!s!}\\ &=\frac{(n+1)!}{n-s+1}\qquad\blacksquare \end{align}$$<|endoftext|> TITLE: What is the period of $(2007)^{\sin x}$? QUESTION [6 upvotes]: What is the period of $(2007)^{\sin x}$? Please explain how to proceed and what's the technique to generally solve these kinds of problems.
REPLY [7 votes]: $f(x+2\pi) = 2007^{\sin(x+2\pi)} = 2007^{\sin(x)} = f(x)$, so $2\pi$ is a period. Moreover, since $t\mapsto 2007^{t}$ is injective, any period of $f$ must be a period of $\sin$, so $2\pi$ is in fact the fundamental period.<|endoftext|> TITLE: Intuitive explanation of $L^2$-norm QUESTION [8 upvotes]: I have to play a lot with the $L^2$-norm defined as $\|f\|=\sqrt{\int_a^b \langle f,f \rangle}$. However, I don't understand the interpretation of that norm. We know that the Euclidean norm measures the length of a vector in Euclidean space, but what does the $L^2$-norm measure? Is there anyone who could give me an intuitive (even a graph, if possible) explanation of that norm? REPLY [9 votes]: There are several good answers here, one accepted. Nevertheless I'm surprised not to see the $L^2$ norm described as the infinite dimensional analogue of Euclidean distance. In the plane, the length of the vector $(x,y)$ - that is, the distance between $(x,y)$ and the origin - is $\sqrt{x^2 + y^2}$. In $n$-space it's the square root of the sum of the squares of the components. Now think of a function as a vector with infinitely many components (its value at each point in the domain) and replace summation by integration to get the $L^2$ norm of a function. Finally, tack on the end of the last sentence of @levap's answer: ... the $L^2$ norm has the advantage that it comes from an inner product and so all the techniques from inner product spaces (orthogonal projections, etc) can be applied when we use the $L^2$ norm.<|endoftext|> TITLE: An interesting AM-HM-GM inequality: $\text{AM}+\text{HM}\geq C_n\cdot \text{GM}$ QUESTION [16 upvotes]: It is not difficult to prove that if $x,y\in\mathbb{R}^+$ the inequality $$ \frac{x+y}{2}+\frac{2}{\frac{1}{x}+\frac{1}{y}}\geq \color{purple}{2}\cdot\sqrt{xy} $$ holds, and the constant $\color{purple}{2}$ is optimal. In a recent question I proved, with a quite involved technique, that if $x,y,z\in\mathbb{R}^+$ then $$ \frac{x+y+z}{3}+\frac{3}{\frac{1}{x}+\frac{1}{y}+\frac{1}{z}}\geq \color{purple}{\frac{5}{2\sqrt[3]{2}}}\cdot\sqrt[3]{xyz} $$ holds, and the constant $\color{purple}{\frac{5}{2\sqrt[3]{2}}}$ (that is a bit less than $2$) is optimal. Then I was wondering: Given $x_1,x_2,\ldots,x_n\in\mathbb{R}^+$, what is the optimal constant $C_n$ such that: $$\text{AM}(x_1,\ldots,x_n)+\text{HM}(x_1,\ldots,x_n)\geq \color{purple}{C_n}\cdot \text{GM}(x_1,\ldots,x_n)$$ I do not think my approach with $3$ variables has a simple generalization (also because in $\mathbb{R}_+^3$ the stationary points are non-trivial), but maybe something is well-known about the improvements of the AM-GM inequality, or there is a cunning approach by some sort of induction on $n$. REPLY [5 votes]: Empirically it seems that the $n$ terms $\left(1,1,\ldots,1,1,\dfrac{1}{(n-1)^2}\right)$ suggest a low value for $C_n$ of $(n-1)^{2/n}\left(1+(n-1)^{-2}\right)$. If $n-1$ of the terms are equal (and without loss of generality equal to $1$) and the other term is $x$ then $\frac{AM+HM}{GM}=\frac{\frac{x+n-1}{n}+\frac{n}{\frac{1}{x}+n-1}}{{{x}^{\frac{1}{n}}}}$ and its derivative is $$\frac{\left( n-1\right) \cdot {{\left( x-1\right) }^{2}}\cdot {{x}^{-\frac{1}{n}-1}}\cdot \left( {{n}^{2}}\cdot x-2\cdot n\cdot x+x-1\right) }{{{n}^{2}}\cdot {{\left( n\cdot x-x+1\right) }^{2}}}$$ which has zeros at $x=0,1,\frac{1}{(n-1)^2}$.
Of these $x=0$ causes problems for the $GM$ and $HM$, while $x=1$ gives a value of $\frac{AM+HM}{GM}=2$ and a second derivative of $0$, and the interesting $x=\frac{1}{(n-1)^2}$ gives the value stated above which is less than $2$ and has a positive second derivative for $n\gt 2$<|endoftext|> TITLE: Number of homomorphisms between two cyclic groups. QUESTION [17 upvotes]: Is it true that the number of homomorphisms between any two finite cyclic groups of order $m\,\&\,n$ is $\gcd(m,n)$? I have posted an answer which I believe is true, just wanted to know different approaches to this problem. REPLY [16 votes]: Let us consider homomorphisms $\mathbb Z_m\to\mathbb Z_n$. Let us denote $d=\gcd(m,n)$. Since $1$ is generator of $\mathbb Z_m$, the choice of $f(1)=a\in\mathbb Z_n$ uniquely determines all values of $f$. Namely we get $$f(k) = f(k\times 1)= k\times f(1) = k\times a.$$ (Here $k\times b$ denotes the addition $\underset{\text{$k$-times}}{\underbrace{b\oplus b\oplus \dots\oplus b}}$, where $\oplus$ denotes the addition in the given group, in this case either $\mathbb Z_m$ or $\mathbb Z_n$. I chose this notation to distinguish it from the usual addition of integers.) But not all choices of $a$ are possible. We definitely need $$f(0) = f(m\times 1) = m\times a = 0.$$ I.e., $ma \equiv 0 \pmod n$. This gives us that $ma=nb$ for some integer $b$. If we divide both sides by $d$, we get $$\frac md a = \frac nd b.$$ Since $\frac nd$ and $\frac md$ are coprime, this implies that $$\frac nd \mid a.$$ But there are exactly $d$ such numbers in $\mathbb Z_n$, namely the numbers $0,\frac{n}d, \frac{2n}d,\dots, \frac{(d-1)n}d$. So we see that there are at most $d$ homomorphisms $\mathbb Z_m\to\mathbb Z_n$. It remains to check somehow that for any choice of $a$ such that $m\times a=0$ (i.e., one of the $a$'s described above) the function given by $$f(k) = k\times a$$ is indeed a homomorphism. If $k,l\in\mathbb Z_m$, then we want to check whether $$f(k\oplus l) = f(k) \oplus f(l),$$ i.e., whether $$((k+l)\bmod m)\times a = (k\times a + l\times a)\bmod n.$$ We can write $$((k+l)\bmod m)\times a \overset{(1)}= (k+l)\times a \overset{(2)}= k\times a \oplus l\times a \overset{(3)}= (k\times a + l\times a)\bmod n.$$ $(1)$ is true since $m\times a=0$ $(2)$ follows simply from properties of groups. (I will link to a similar proof for rings. But the basic idea is the same and this should be fairly obvious.) $(3)$ is simply the definition of the operation $\oplus$ on the group $\mathbb Z_n$.<|endoftext|> TITLE: Frullani 's theorem in a complex context. QUESTION [21 upvotes]: It is possible to prove that $$\int_{0}^{\infty}\frac{e^{-ix}-e^{-x}}{x}dx=-i\frac{\pi}{2}$$ and in this case the Frullani's theorem does not hold since, if we consider the function $f(x)=e^{-x}$, we should have $$\int_{0}^{\infty}\frac{e^{-ax}-e^{-bx}}{x}dx$$ where $a,b>0$. But if we apply this theorem, we get $$\int_{0}^{\infty}\frac{e^{-ix}-e^{-x}}{x}dx=\log\left(\frac{1}{i}\right)=-i\frac{\pi}{2}$$ which is the right result. Questions: is it only a coincidence? Is it possible to generalize the theorem to complex numbers? Is it a known result? And if it is, where can I find a proof of it? Thank you. 
REPLY [4 votes]: $\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\int_{0}^{\infty}{\expo{-\ic x} - \expo{-x} \over x}\,\dd x = \int_{0}^{\infty}\pars{\expo{-\ic x} - \expo{-x}} \int_{0}^{\infty}\expo{-xt}\,\dd t\,\dd x \\[5mm] = &\ \int_{0}^{\infty}\int_{0}^{\infty} \bracks{\expo{-\pars{t + \ic}x} - \expo{-\pars{t + 1}x}}\dd x\,\dd t = \int_{0}^{\infty} \pars{{1 \over t + \ic} - {1 \over t + 1}}\dd t = \left.\ln\pars{t + \ic \over t + 1}\right\vert_{\ t\ =\ 0}^{\ t\ \to\ \infty} \\[5mm] = &\ -\ln\pars{\ic} =\ \bbox[#ffe,10px,border:1px dotted navy]{\ds{-\,{\pi \over 2}\,\ic}} \end{align}<|endoftext|> TITLE: The Hardest Sudoku Puzzle QUESTION [14 upvotes]: I was playing a casual game of Sudoku today when a friend came by and asked "What's the hardest game of Sudoku possible?" My response: "A Sudoku puzzle with the minimal amount of starting numbers where the puzzle is still solvable." However, I am not happy with this because I want to know the actual minimum number of starting squares I can have. Of course, position matters as well, so I will assume you can place the numbers wherever for optimization. The closest I can do is look at individual situations to see if they are solvable. But even when I do that, I don't know whether there is a setup with even fewer starting numbers. Q1: What is the smallest number of starting numbers required for a game of Sudoku to be solvable? Q2: How would you define the "hardest" game of Sudoku? REPLY [4 votes]: What about the paper arXiv:1208.0370v1? The authors Maria Ercsey-Ravasz and Zoltan Toroczkai made a study of how to classify Sudoku problems relative to a Richter-type scale. They express the problem in terms of the time used by a k-SAT deterministic continuous-time solver to solve the problem. To go into a bit of detail, they show that the solving time of such (k-SAT) problems grows exponentially with the number of variables, and that the dynamics of the solver evolve into a chaotic system if the logical clauses cannot be satisfied. Since a Sudoku always has a solution, the solver escapes chaos at some point and converges to a solution. It is precisely this escape rate from the chaotic state which is used as a measure of the difficulty of the puzzle. According to their measure, the subjectively difficult puzzles known so far [there is a list of some in the bibliography of this paper] do receive a high score from this measure, making them also objectively difficult in some sense. The scale ranges from $0$ to $4$, with $[0,1]$ being the easy sudokus, $[1,2]$ medium ones, $[2,3]$ hard ones and finally above $3$ the top hard problems. The highest score presented in the paper is around $3.6$.<|endoftext|> TITLE: Why can't I find anyone who has discovered the (irrational) constant 1.29128...? QUESTION [15 upvotes]: The constant is exactly $\sum_{n=1}^{\infty}\frac{1}{n^n}$.
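For a quick numerical illustration (this snippet is an addition, not part of the original question), the series converges extremely fast, so a handful of terms already pin down many digits:

from decimal import Decimal, getcontext

getcontext().prec = 30
# partial sum of 1/n^n; the terms beyond n = 29 are smaller than 10^(-42)
total = sum(1 / Decimal(n) ** n for n in range(1, 30))
print(total)  # 1.2912859970626635...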
Why does it seem that no one has written about it? Did I not search well enough? If so, what is the name for it? If not, is it just not sufficiently "interesting"? I can't find it anywhere, which seems very strange. (I apologize for how little experience in higher maths I have...) REPLY [22 votes]: What? You mean the Sophomore's Dream? (Actually, the "dream" is that $\int_0^1 x^{-x} \,\mathrm{d}x = \sum_{n=1}^\infty n^{-n}$, but this is just two representations of your value.) Your value appears in the ISC, associated with that sum. This sequence of digits appears in the OEIS as A073009 (with various references, including Bernoulli's proof that the integral equals the sum).<|endoftext|> TITLE: Derivating natural numbers. QUESTION [5 upvotes]: Some years ago I came up with a weird idea which I think was my own, although it is not sensible to make such assertions. The idea was to define a function $f: \mathbb{N} \rightarrow \mathbb{N}$ which imitates the behaviour of the derivative of a product, i.e. $(uv)' = u'v + uv'$. Using the prime notation, I define the derivative of a natural number $n$ whose prime decomposition is $\; n = \prod_{i = 1}^{i = r} p_i^{\alpha_i}$ as $$n' := \sum _{j = 1}^{j = r} \alpha_j p_j^{\alpha_j - 1}\prod_{i \neq j}^{i = r} p_i^{\alpha_i} = n (\sum_{i=1}^{i = r} \frac{\alpha_i}{p_i}).$$ So I somehow applied the rule of the derivative of a multiple product, hence $(nm)' = n'm + nm'$ follows. Some particular cases are $1' = 0$ and $p' = 1$ if $p$ is prime. Although I have never found this "natural derivative" useful, I got interested in it. Here are some nice facts about the natural derivative: Solutions of $n' = n$ are $n = p^p,$ where $p$ is prime. It has an air of the exponential $e^x$. The "average value" of $n'$ is $n \times \sum_{p \in \mathbb{P} }\frac{1}{p(p-1)}$. One can reformulate the Goldbach conjecture as: All even numbers greater than $2$ are the image of a pseudoprime number (i.e. $pq$, where $p$ and $q$ are prime). It can be easily extended to $\mathbb{Q}$. So now that I have introduced this marvellous function to you, I would like to ask: Did I invent it, or has it been used before? In the second case: In which context does it appear, and how much is known about it? Thanks a bunch for your answers. REPLY [3 votes]: Good going! There's a whole Wikipedia page on this (link), and some people have seriously considered the properties of this function. Hopefully the references there will satisfy your curiosity for a while. Also, this sequence is listed in the Online Encyclopedia of Integer Sequences as sequence $\text{A}003415$. Usually some interesting stuff can be found there. Finally, if you're interested in computing these numbers, see this question over on Programming Puzzles & Code Golf SE, which is how I first found out about this function.<|endoftext|> TITLE: What is the relationship between diffeomorphisms of the sphere modulo isotopy and exotic spheres? QUESTION [5 upvotes]: In his "Classification of (n-1)-connected 2n-dimensional manifolds and the discovery of exotic spheres", Milnor observes that since his exotic 7-spheres admit a Morse function with only two critical points, they are diffeomorphic to two 7-disks glued along a diffeomorphism $g : S^6 \to S^6$, which can't be isotopic to the identity, since otherwise the glued manifold would be diffeomorphic to $S^7$ (here I suppose Milnor forgot to comment that $g$ preserves orientation). How far does this relationship go in general?
If I take an orientation-preserving diffeomorphism $g : S^n \to S^n$ which is not isotopic to the identity, and construct a topological sphere by gluing two copies of $D^{n + 1}$ along $g$, will that sphere be exotic? Can every exotic sphere be obtained in this way? REPLY [5 votes]: Grumpy Parsnip answered the question, but let me summarize the situation and add some other aspects of the story. Denote by $Diff^\partial(D^n)$ the group of diffeomorphisms of the $n$-disc that are the identity on a neighborhood of the boundary, by $Diff^+(S^n)$ the group of orientation-preserving diffeomorphisms and by $\Theta_n$ the group of exotic spheres (which coincides with the group of h-cobordism classes of homotopy spheres provided $n\neq 4$ by the h-cobordism theorem). We have maps $$\pi_0(Diff^\partial(D^n))\rightarrow \pi_0(Diff^+(S^n))\rightarrow\mu(Diff^+(S^n))\rightarrow \Theta_{n+1},$$ where $\mu(Diff(M))$ denotes the group of pseudoisotopy classes of diffeomorphisms of a manifold $M$, the first map is induced by the map $Diff^\partial(D^n)\rightarrow Diff^+(S^n)$ that crushes the boundary of the $n$-disc, the second map is provided as isotopy implies pseudoisotopy and the last map is given by the twisted spheres construction (gluing two $n+1$ discs via the given diffeomorphism to obtain a, possibly exotic, sphere). What can we say about the quality of these maps? The last map is an isomorphism for $n\neq 4$ by Smale's h-cobordism theorem. The middle map is an isomorphism for $n\ge 5$ by the pseudoisotopy theorem of Cerf which says that $\mu(Diff(M))\cong \pi_0(Diff(M))$ for simply connected manifolds of dimension $n\ge 5$. So we are left with the question in which dimensions the first map is an isomorphism as well. We have a map $Diff^+(S^n)\rightarrow SO(n+1)$ given by taking the derivative at a fixed point $x$ and identifying the frame bundle of $S^n$ with $SO(n+1)$. This is a fibration with fiber the group of diffeomorphisms that fix $x$ and the tangent space at $x$, which is homotopy equivalent to the group of diffeomorphisms fixing $x$ and a neighborhood thereof, which is $Diff^\partial(D^n)$. We hence get a fibration sequence $$Diff^\partial(D^n)\rightarrow Diff^+(S^n)\rightarrow SO(n+1),$$ which splits up to homotopy (a splitting is given by the inclusion $SO(n+1)\subseteq Diff^+(S^n)$), so we get $$Diff^+(S^n)\simeq SO(n+1)\times Diff^\partial(D^n).$$ As $\pi_0(SO(n+1))$ is trivial, the first map in the above chain is always an isomorphism. Now the question arises whether actually $SO(n+1)$ is equivalent to the path component of the identity of $Diff^+(S^n)$. This is true in dimensions 1-3, unknown in dimension 4 and wrong in dimensions 5 and above.<|endoftext|> TITLE: Definition of smallest equivalence relation QUESTION [12 upvotes]: I came across the term 'smallest equivalence relation' in the course of a proof I was working on. I have never thought about ordering relations. I googled the term and checked stackexchange and couldn't find a clear definition. Is someone able to provide me with a definition of what a smallest equivalence relation is and an example of one equivalence relation being smaller than another? REPLY [5 votes]: Equivalence relations are (partially) ordered by implication; $\Theta \leq \Phi$ if and only if $$ x \Theta y \implies x \Phi y $$ is an identity. In fact, this partial ordering is a complete lattice; the meet operation (a.k.a. the greatest lower bound) is given by logical and.
That is, the equivalence relation $\Theta = \bigwedge_{i \in I} \Theta_i$ is the one defined by $$x \Theta y \Longleftrightarrow \bigwedge_{i \in I} x \Theta_i y$$ or in terms of quantifiers rather than infinitary operations, $$x \Theta y \Longleftrightarrow \forall i \in I: x \Theta_i y $$ If you look at the graph of the relations, this can all be rephrased in terms of subsets and intersections. A simple source of examples is to use the first isomorphism theorem — there is a bijective correspondence between congruences and quotients. E.g. in terms of sets, quotients can be viewed as partitions of the set, and the correspondence is that the equivalence classes of an equivalence relation are the parts of the partition. The smallest equivalence relation on a nonempty set $X$ corresponds to the partition whose classes are all singletons; it is equality. The largest equivalence relation corresponds to the partition with just a single class; the corresponding equivalence relation is the one where everything is related. The ordering on equivalence relations corresponds to whether one partition refines another.<|endoftext|> TITLE: Is there such a thing as "finite" induction? QUESTION [10 upvotes]: I am not sure of the terminology that I am looking for, but I would like to use an inductive proof on the following type of structure. I have something of the form, for every $n \geq 2$ and for any $1 \leq d \leq n-1$, some property $P(n, d)$ is true. I chose to do induction on $d$ since it appears to make the proof simpler (fewer cases in the case-by-case analysis for my problem). So my base-case was $P(n, 1)$, then I showed that $P(n, d-1)$ implies $P(n, d)$ for any $2 \leq d \leq n-1$. Typically, induction has this "infinite domino effect", like in the proof that $\sum_{i=1}^n i = n(n+1)/2$, but in my proof, this is not the case as the "domino effect" stops at $d = n-1$. Is this an okay thing to do? Do I still call this a proof by induction? Or does it have a different name? REPLY [3 votes]: If you have $n$ as a variable (not as a fixed constant like $n=30$) then I am quite sure (though I did not try to write down the details) that you can prove the full induction principle (for natural numbers) from the validity of your special induction principle. From this point of view, I would really call it a proof by induction. To see that your special induction principle is a special case of the general induction principle (for natural numbers) is simple: Given a natural number $n$, just apply the general induction principle to the property $$``\text{$d\ge n$ or $P(n,d)$}''$$
Some Google search led me to matrix similarity, and my algebraic intuition says that this information would be codified in the quotient structure - once we ''kill'' the big structure by the invariants, we would find exactly what is not varying. Well, though I never thought about verifying it for any pathological case, I believe that the map $\det: M_n(K) \to K$ is surjective. That would mean that the quotient $M_n(K)/\mathcal{R}$ is at least the same cardinality as $K$ (because similar matrices have the same determinant, but I'm not sure about the converse. If it is true that matrices with the same determinant are similar, then cardinality equality would follow). I'm not sure how the operations can be extended. Maybe the quotient can be seen as a vector space over $K$ as well: scalar multiplication is surely well defined, but I'm not sure about the sum. If I restrict myself to some subset of $M_n(K)$, like the general linear group or the special linear group, could I get something more? And the interpretation or mathematical application is exactly what I long for. REPLY [5 votes]: Let $\mathcal C$ be the set of $n\times n$ matrices over $K$ which are in rational canonical form. Since every element of $M_n(K)$ is similar to a unique matrix in rational canonical form, you can identify $M_n(K)/\mathcal R$ with $\mathcal C$. In particular, the cardinality of $M_n(K)/\mathcal R$ is the number of $n\times n$ matrices over $K$ in rational canonical form, i.e. the number of ways to construct a matrix $$ \begin{bmatrix} 0&0&\dots&\dots&\dots&-b_0\\ 1 &0&\dots&\dots&\dots&-b_1\\ 0&1&\dots&\dots&\dots&-b_2\\ 0&0&\ddots& &&\vdots\\ \vdots&\vdots&&\ddots&&\vdots\\ 0&0&\dots&\dots&1&-b_{n-1} \end{bmatrix}. $$ There are $|K|$ choices for each of the $n$ entries $b_0,\dots,b_{n-1}$, and so $$ |M_n(K)/\mathcal R|=|K^n|. $$ As for your reasoning about the determinant: it is true that $\det:M_n(K)\to K$ is a surjective homomorphism. Given $x\in K$, the matrix $$ \begin{bmatrix} x&0&\dots&0\\ 0&1&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&1 \end{bmatrix} $$ has determinant $x$. Since similar matrices have the same determinant, you are right that $|M_n(K)/\mathcal R|\geq |K|$. However, the converse is false: matrices with the same determinant might not be similar. For example, $$ SO(2)=\left\{\begin{bmatrix}\cos\theta &-\sin\theta\\\sin\theta&\cos\theta\end{bmatrix}:\theta\in [0,2\pi)\right\} $$ consists of rotation matrices with determinant $1$, but no two distinct matrices in $SO(2)$ are similar (one way to prove this is to note that the eigenvalues of a rotation matrix by $\theta$ are $e^{\pm i\theta}$).<|endoftext|> TITLE: Are sets and symbols the building blocks of mathematics? QUESTION [19 upvotes]: A formal language is defined as a set of strings of symbols. I want to know whether "symbol" is a primitive notion in mathematics, i.e. we don't define what a symbol is. If it is the case that in mathematics everything (object) is a set and the members of a set are themselves sets, shouldn't we define symbols by sets? I'm confused by what comes first, set theory or formal languages. REPLY [38 votes]: The things you actually write on the paper or some other medium are not definable as any kind of mathematical objects. Mathematical structures can at most be used to model (or approximate) the real world structures. For example we might say that we can have strings of symbols of arbitrary length, but in the real world we would run out of paper or ink or atoms or whatever it is we use to store our physical representations of strings.
So let's see what we can build non-circularly in what order. Natural language Ultimately everything boils down to natural language. We simply cannot define everything before we use it. For example we cannot define "define"... What we hope to do, however, is to use as few and as intuitive concepts as possible (described in natural language) to bootstrap to formal systems that are more 'powerful'. So let's begin. Natural numbers and strings We simply assume the usual properties of natural numbers (arithmetic and ordering) and strings (symbol extraction, length and concatenation). If we don't even assume these, we can't do string manipulation and can't define any syntax whatsoever. It is convenient to assume that every natural number is a string (say using binary encoding). Program specification Choose any reasonable programming language. A program is a string that specifies a sequence of actions, each of which is either a basic string manipulation step or a conditional jump. In a basic string manipulation step we can refer to any strings by name. Initially all strings named in the program are empty, except for the string named $input$, which contains the input to the program. A conditional jump allows us to test if some basic condition is true (say that a number is nonzero) and jump to another step in the program iff it is so. We can easily implement a $k$-fold iteration of a sequence of actions by using a natural number counter that is set to $k$ before that sequence and is decreased by $1$ after the sequence, and jumping to the start of the sequence as long as $k$ is nonzero. The execution of a program on an input is simply following the program (with $input$ containing the input at the start) until we reach the end, at which point the program is said to halt, and whatever is stored in the string named $output$ will be taken as the output of the program. (It is possible that the program never reaches the end, in which case it does not halt. Note that at this point we don't (yet) want to affirm that every program execution either halts or does not halt. In special cases we might be able to observe that it will not halt, but if we cannot tell then we will just say "We don't know." for now.) One special class of programs are those where conditional jumps are only used to perform iteration (in the manner described above). These programs always terminate on every input, and so they are in some sense the most primitive. Indeed they are called primitive recursive. They are also the most acceptable in the sense that you can 'see clearly' that they always halt, and hence it is very 'well-defined' to talk about the collection of strings that they accept (output is not the empty string), since they always halt and either accept or don't accept. We call such collections primitive recursive as well. (As a side note, there are programs that always halt but are not primitive recursive.) Formal system specification We can now use programs to represent a formal system. Specifically a useful formal system $T$ has a language $L$, which is a primitive recursive collection of strings, here called sentences over $T$, some of which are said to be provable over $T$. Often $T$ comes with a deductive system, which consists of rules that govern what sentences you can prove given sentences that you have already proven. We might express each rule in the form "$φ_1,φ_2,...,φ_k \vdash ψ$", which says that if you have already proven $φ_1,φ_2,...,φ_k$ then you can prove $ψ$. 
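As a concrete toy illustration (this sketch is an addition, not something from the answer itself, and the string encoding of formulas is made up for the example), checking a single application of a single rule such as modus ponens amounts to a finite string comparison:

# toy check of one deductive step for the rule  "phi, (phi -> psi)  |-  psi",
# with sentences encoded as plain strings
def modus_ponens_step(premises, conclusion):
    return any(p2 == "(" + p1 + " -> " + conclusion + ")"
               for p1 in premises for p2 in premises)

print(modus_ponens_step(["A", "(A -> B)"], "B"))   # True
print(modus_ponens_step(["A", "(A -> B)"], "C"))   # False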
There may even be infinitely many rules, but the key feature of a useful $T$ is that there is one single primitive recursive program that can be used to check a single deductive step, namely a single application of any one of the rules. Specifically, for such a $T$ there is a primitive recursive program $P$ that accepts a string $x$ iff $x$ encodes a sequence of sentences $φ_1,φ_2,...,φ_k,ψ$ such that $φ_1,φ_2,...,φ_k \vdash ψ$. Since all useful formal systems have such a program, we can verify anyone's claim that a sentence $φ$ is provable over $T$, as long as they provide the whole sequence of deductive steps, which is one possible form of proof. Up to now we see that all we need to be philosophically committed to is the ability to perform (finitely many) string manipulations, and we can get to the point where we can verify proofs over any useful formal system. This includes the first-order systems PA and ZFC. In this sense we clearly can do whatever ZFC can do, but whether or not our string manipulations have any meaning whatsoever cannot be answered without stronger ontological commitment. Godel's incompleteness theorems At this point we can already 'obtain' Godel's incompleteness theorems, in both external and internal forms. In both we are given a useful formal system $T$ that can also prove whatever PA can prove (under suitable translation). Given any sentence $P$ over $T$, we can construct a sentence $Prov_T(P)$ over $T$ that is intended to say "$P$ is provable over $T$". Then we let $Con(T) = ¬Prov_T(\bot)$. To 'obtain' the external form (if $T$ proves $Con(T)$ then $T$ proves $\bot$), we can explicitly write down a program that given as input any proof of $Con(T)$ over $T$ produces as output a proof of $\bot$ over $T$. And to 'obtain' the internal form we can explicitly write down a proof over $T$ of the sentence "$Con(T) \rightarrow \neg Prov_T(Con(T))$". (See this for more precise statements of this kind of result.) The catch is that the sentence "$Prov_T(P)$" is completely meaningless unless we have some notion of interpretation of a sentence over $T$, which we have completely avoided so far so that everything is purely syntactic. We will get to a basic form of meaning in the next section. Basic model theory Let's say we want to be able to affirm that any given program on a given input either halts or does not halt. We can do so if we accept LEM (the law of excluded middle). With this we can now express basic properties about $T$, for example whether it is consistent (does not prove both a sentence and its negation), and whether it is complete (always proves either a sentence or its negation). This gives meaning to Godel's incompleteness theorems. From the external form, if $T$ is really consistent then it cannot prove $Con(T)$ even though $Con(T)$ corresponds via the translation to an assertion about the natural numbers that is true iff $T$ is consistent. But if we further want to be able to talk about the collection of strings accepted by a program (not just primitive recursive ones), we are essentially asking for a stronger set-comprehension axiom, in this case $Σ^0_1$-comprehension (not just $Δ^0_0$-comprehension). The area of Reverse Mathematics includes the study of distinction between such weak set-theoretic axioms, and the linked Wikipedia article mentions these concepts and others that I later talk about, but a much better reference is this short document by Henry Towsner. 
With $Σ^0_1$-comprehension we can talk about the collection of all sentences that are provable over $T$, whereas previously we could talk about any one such sentence but not the whole collection as a single object. Now to prove the compactness theorem, even for (classical) propositional logic, we need even more, namely WKL (weak Konig's lemma). And since the compactness theorem is a trivial consequence of the completeness theorem (say for natural deduction), WKL is also required to prove the completeness theorem. The same goes for first-order logic. Turing jumps Now it does not really make sense from a philosophical viewpoint to only have $Σ^0_1$-comprehension. After all, that is in some sense equivalent to having an oracle for the halting problem (for ordinary programs), which is the first Turing jump. The halting problem is undecidable, meaning that there is no program that always halts on any input $(P,x)$ and accepts iff $P$ halts on $x$. By allowing $Σ^0_1$-comprehension we are in a sense getting access to such an oracle. But then if we consider augmented programs that are allowed to use the first Turing jump (it will get the answer in one step), the halting problem for these programs will again be undecidable by any one of themselves, but we can conceive of an oracle for that too, which is the second Turing jump. Since we allowed the first one there is no really good reason to ban the second. And so on. The end result is that we might as well accept full arithmetical comprehension (we can construct any set of strings or natural numbers definable by an formula where all quantifiers are over natural numbers or strings). And from a meta-logical perspective, we ought to have the full second-order induction schema too, because we already assume that we have only been accepting assumptions that hold for the standard natural numbers, namely those which are expressible in the form "$1+1+\cdots+1$". Note that everything up to this point can be considered predicative, in the sense that at no point do we construct any object whose definition depends on the truth value of some assertion involving itself (such as via some quantifier whose range includes the object to be constructed). Thus most constructively-inclined logicians are perfectly happy up to here. Higher model theory If you only accept countable predicative sets as ontologically justified, specifically predicative sets of strings (or equivalently subsets of the natural numbers), then the above is pretty much all you need. Note that from the beginning we have implicitly assumed a finite alphabet for all strings. This implies that we have only countably many strings, and hence we cannot have things like formal systems with uncountably many symbols. These occur frequently in higher model theory, so if we want to be able to construct anything uncountable we would need much more, such as ZFC. One example of the use of the power of ZFC is in the construction of non-standard models via ultrapowers, which require the use of a weak kind of the axiom of choice. The nice thing about this construction is that it is elegant, and for instance the resulting non-standard model of the reals can be seen to capture quite nicely the idea of using sequences of reals modulo some equivalence relation as a model for the first-order theory of the real numbers, where having eventual consistent behaviour implies the corresponding property holds. 
The non-constructive ultrafilter is needed to specify whether the property holds in the case without eventually consistent behaviour. I hope I have shown convincingly that although we need very little to get started with defining and using a formal system, including even ZFC, all the symbol-pushing is devoid of meaning unless we assume more, and the more meaning we want to express or prove, the stronger assumptions we need. ZFC (excluding the axiom of foundation) is historically the first sufficiently strong system that can do everything mathematicians had been doing, and so it is not surprising that it is also used as a meta-system to study logic. But you're not going to be able to ontologically justify ZFC, if that is what you're looking for. Sets in set theories Finally, your question might be based on a common misconception that in ZFC you have a notion of "set". Not really. ZFC is just another formal system and has no symbol representing "set". It is simply that the axioms of ZFC were made so that it seems reasonable to assume that they hold for some vague notion of "sets" in natural language. Inside ZFC every quantifier is over the entire domain, and so one cannot talk about sets as if there are other kinds of objects. If we use a meta-system that does not have sets, then a model of ZFC might not have any 'sets' at all, whatever "set" might mean! In ZFC, one cannot talk about "the Russell set", since the comprehension axiom does not allow us to construct such a collection. In MK (Morse-Kelley) set theory, there is an internal notion of sets, and one can construct any class of sets definable by some formula, but one cannot construct anything that resembles a "class of classes" for the same reason as Russell's paradox. In the non-mainstream set theory NFU, one has both sets and urelements (extensionality only applying to sets), and so one can make sense of talking about sets here. But NFU is not a very user-friendly system anyway. And this is also where my answer shall stop.<|endoftext|> TITLE: Are my calculations of a new constant similar to Mill's constant based on $\lfloor A^{2^{n}}\rfloor$ and Bertrand's postulate correct? QUESTION [13 upvotes]: As Wikipedia explains in number theory, Mills' constant is defined as: "The smallest positive real number $A$ such that the floor function of the double exponential function $\lfloor A^{3^{n}}\rfloor$ is a prime number, for all natural numbers $n$. This constant is named after William H. Mills who proved in 1947 the existence of $A$ based on results of Guido Hoheisel and Albert Ingham on the prime gaps. Its value is unknown, but if the Riemann hypothesis is true, it is approximately $1.3063778838630806904686144926...$ (sequence A051021 in OEIS)." In other words, $\lfloor A^{3^{n}}\rfloor$ is said to be a prime-representing function, because $f(x)$ is a prime number for all positive integral values of $x$. The demonstration made by Mills in 1947 (pdf here) is very easy to follow (one must be careful about the already known typos of the paper to follow properly the explanation). It is also known that generically (as explained in this nice question at MSE) $\lfloor Q^{k^{n}}\rfloor$ always works if $k$ is at least $3$, so there are other constants $Q$ not defined yet that will work when $k$ is greater than $3$. It is also known that $k=2$ might not work because it depends on the Legendre's conjecture: is there always a prime between $N^2$ and $(N+1)^2$ (thought to be extremely difficult to demonstrate). 
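As a quick numerical illustration of the quoted definition (this snippet is an addition, not part of the original question; it only uses the truncated decimal value quoted above, so just the first few levels are reliable), the first values of $\lfloor A^{3^{n}}\rfloor$ are indeed prime:

from decimal import Decimal, getcontext
from sympy import isprime

getcontext().prec = 60
A = Decimal("1.3063778838630806904686144926")   # truncated value quoted above
for n in range(1, 4):
    E = int(A ** (3 ** n))                      # floor of A^(3^n)
    print(n, E, isprime(E))                     # 2, 11, 1361 -- all prime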
The reason for this question is that, based on the original demonstration and using Bertrand's postulate as the key to some manipulations, I tried to do my own version of Mills' constant to learn, and it seems that I was able to find a constant $L$ such that $\lfloor L^{2^{n}}\rfloor$ provides a sequence of integers associated to their next prime, so it is possible to calculate the associated sequence of primes by a prime-representing function. The difference in this approach is that the integers obtained from $\lfloor L^{2^{n}} \rfloor$ are not primes directly, it is an integer sequence, but there is an associated sequence of primes obtained from them. Here is how it works (the questions are at the end of the explanation): According to Bertrand's postulate, for a given integer $p_i$ (let us also suppose for the demonstration that it is prime) it holds that $p_i < p_j < 2p_i$ for some existing prime $p_j$. Now adding $+p_i^2+1$ in the inequalities: $p_i+p_i^2+1 < p_j+p_i^2+1 < 2p_i+p_i^2+1$ which is also true if on the left side we just leave $p_i^2$: $p_i^2 < p_j+p_i^2+1 < (p_i+1)^2$ Calling $e_j = p_j+p_i^2+1$ it is possible to build (from here I will use the same steps as in Mills' paper) a sequence of integers $E_1, E_2,...E_n$ such that: $E_1=P_1=2$ $E_1^2 < E_2 < (E_1+1)^2$ $E_2^2 < E_3 < (E_2+1)^2$ ... $E_{n-1}^2 < E_n < (E_{n-1}+1)^2$ ... The same conditions for the sequence shown in Mills' demonstration would hold for this expression, and in this case the exponent is $2$ instead of $3$, but we do not depend on Legendre's conjecture because the prime $P_n$ associated to $E_n$ is a Bertrand prime (so it always exists in the interval $[E_n,2E_n]$). In other words, it is possible to use the power of $2$ because the sequence is not directly a sequence of primes, but a sequence of integers attached to primes by Bertrand's postulate according to the manipulation: $P_n = E_n-E_{n-1}^2-1$ And in the same fashion as Mills' constant: $E_n=\lfloor L^{2^{n}} \rfloor$ where $L$ is obtained after some manual calculations (up to $n=11$) as $L_{11}=1.7197977844041078190854...$ The way of manually calculating the constant is as follows: in the same fashion as Mills' demonstration, the relationship between $L_n$ (the constant calculated after knowing the $n^{th}$ element of the sequence $E_n$) and $E_n$ is: $L_n=E_n^{\frac{1}{2^n}}$ These are the calculations for the first $5$ elements. For instance for $n=1..5$, $\lfloor L^{2^{n}} \rfloor$ is: $E_1=2$ $E_2=8$ and $P_1=E_2-(E_1)^2-1=8-2^2-1=3$ $E_3=76$ and $P_2=E_3-(E_2)^2-1=76-8^2-1=76-64-1=11$ $E_4=5856$ and $P_3=E_4-(E_3)^2-1=5856-76^2-1=5856-5776-1=79$ $E_5=34298594$ and $P_4=E_5-(E_4)^2-1=34298594-5856^2-1=5857$ $L$ slightly grows on every iteration but the growth is each time smaller, so it clearly tends to a fixed value when $n \to \infty$. This is the graph: So in essence the prime-representing function would be: $P_n = \lfloor L^{2^{n}} \rfloor -E_{n-1}^2-1 = \lfloor L^{2^{n}} \rfloor -(\lfloor L^{2^{n-1}} \rfloor)^2-1$ for $n \ge 2$, $P_n \in \Bbb P$, with the starting element of the sequence being $E_1=2=P_1$ and $L$ the value of $L_n$ when $n \to \infty$. This is the Python code to calculate and test $L$; please feel free to use or modify it:

def mb():
    from sympy import nextprime
    from gmpy2 import is_prime
    import matplotlib.pyplot as plt
    import matplotlib as mpl
    import decimal

    print("L constant test. Equivalent to Mill's constant applying Bertrand's postulate")
    print("----------------------------------------------------------------------------")
    print()

    # Decimal precision is required, change according to test_limit
    decimal.getcontext().prec = 2000

    # List of elements E_n generated by L^(2^(n))
    E_n = []
    # List of progressive calculations of L_n, the last one of the list is the most accurate
    # and capable of calculating all the associated primes
    L_n = []
    # List of primes obtained from L^(2^(n))-E_(n-1) associated to the Bertrand's postulate
    P_n = []
    # depth of L: L will be able to calculate the primes L^(2^(n))-E_(n-1) for n=1 to test_limit-1
    test_limit = 12

    for n in range(1, test_limit):
        if n == 1:
            # E_1 = 2
            E_n.append(2)
        else:
            # E_n = (E_(n-1)^2) + P_(n-1) + 1
            # Be aware that the Python list starts in index 0 and the last current element is index n-1:
            # that is why n-2 appears in the calculation below
            E_n.append((E_n[n-2]**(decimal.Decimal(2)))+P_n[n-2]+1)
        # Next prime greater than E_n: it will be in the interval [E_(n-1),2*E_(n-1)] (Bertrand's postulate)
        P_n.append(nextprime(E_n[n-1]))
        # Calculation of L_n
        L_n.append(E_n[n-1]**(decimal.Decimal(1)/(decimal.Decimal(2)**(decimal.Decimal(n)))))

    print("List of Elements of L^(2^(n)) for n = 1 to " + str(test_limit-1) + ":")
    mystr = ""
    for i in E_n:
        mystr = mystr+str(i)+"\t"
    print(mystr)
    print()

    print("List of Primes obtained from L constant: L^(2^(n))-E_(n-1)-1 for n = 1 to " + str(test_limit-1) + ":")
    mystr = ""
    for i in P_n:
        mystr = mystr+str(i)+"\t"
    print(mystr)
    print()

    print("List of calculations of L_n (the most accurate one is the last one) for n = 1 to " + str(test_limit-1) + ":")
    mystr = ""
    ax = plt.gca()
    ax.set_axis_bgcolor((0, 0, 0))
    figure = plt.gcf()
    figure.set_size_inches(18, 16)
    n = 1
    for i in L_n:
        mystr = mystr+str(i)+"\t"
        plt.plot(n, i, "w*")
        n = n+1
    print(mystr)
    # Print the graph of the evolution of L_n
    # Clearly it tends to one specific value when n tends to infinity
    plt.show()

    # Testing the constant
    print()
    print("L Accuracy Test: using the most accurate value of L will calculate the primes associated")
    print("to the constant and will compare them to the original primes used to create the constant.")
    print("If they are the same ones, the constant is able to regenerate them and the theoretical background")
    print("applied would be correct.")
    # Using the most accurate value of L available
    L = L_n[len(L_n)-1]
    print()
    print("L most accurate value is:")
    print(L)
    print()
    for i in range(1, test_limit):
        print()
        tester = int(L**(decimal.Decimal(2)**(decimal.Decimal(i))))
        if i == 1:
            tester_prime = 2
        else:
            tester_prime = decimal.Decimal(tester) - (E_n[i-2]*E_n[i-2]) - 1
        print("Current calculated E:\t" + str(tester) + " for n = " + str(i))
        print("Original value of E:\t" + str(E_n[i-1]) + " for n = " + str(i))
        if tester == E_n[i-1]:
            print("Current calculated prime:\t" + str(tester_prime) + " for n = " + str(i))
            if i == 1:
                print("Original value of prime:\t2 for n = " + str(i))
            else:
                print("Original value of prime:\t" + str(P_n[i-2]) + " for n = " + str(i))
        else:
            # If we arrive to this point, then the constant and the theory would not be correct
            print("ERROR L does not generate the correct original E")
            return
    print()
    print("TEST FINISHED CORRECTLY: L generates properly the primes associated to the Bertrand's postulate")

mb()

The list of first primes is (the Python code provides the output up to $n=11$): $2,3,11,79,5857,34298597,1176393584675453,1383901866065518680880707763873,$ $1915184374899624805461632693020424461862877625516091453480257,...$ Initially $L=1.71979778...$ seems not to have
been defined before, but I am not really sure. At least I was not able to find such constant at Mill's constant-related papers on Internet, but it seems interesting. The advantages of $L$ versus $A$ (the original Mill's constant) would be: The growth rate of the double exponential is lower due to the use of powers of $2$ instead of powers of $3$ (or more for other Mill's constant interpretations), so for the same quantity of $n$ values calculated, $L$ provides smaller primes than $A$ in less time. The use of $\lfloor L^{2^{n}} \rfloor$ would not require the validation of Legendre's conjecture because it depends on Bertrand's postulate, so the theory would hold as it is. I would like to ask the following questions: Are the manipulations for the calculation of the elements of $\lfloor L^{2^{n}} \rfloor$ applying Bertrand's postulate correct or there is a restriction or error? (initially the heuristics show that it would be correct) Is such constant interesting? Initially seems so because it is able to use a power of $2$ growth rate (versus power of $3$ in best of cases for standard Mill's constant). If the calculations are correct, are there other manipulations that could be applied to lower even more the growth rate? Thank you! REPLY [4 votes]: It has been a long time since I wrote the above question, and I was able to gather a confirmation that the contents and calculations are correct. During the last year I tried to publish the contents as a very simple paper ($5$ pages long) in several (up to $7$ different) known journals (each one of them took from $1$ month to $4$ months to receive feedback since the paper was sent). All of them confirmed (via peer review) that the calculations are correct, but basically the general comment is that is a result that can be expected to happen, so it is not so interesting to be published. I have learnt from the experience how to write correct papers (in American English) and how difficult is the professional world of publications. So let me say that now I can understand better the struggles of professional mathematicians to be able to publish new and interesting content (I admire you!). I would like to share all the anonymous feedback I have received from the different peers that reviewed the paper in order to close the question. The more feedback I received the better the paper was getting in terms of completion. So for your viewing pleasure (some of them were extremely kind and useful to update the paper) and not in the same order that the paper was sent to the journals: The author has a simpler theoretical foundation at the cost of less cute result. It is not a surprise that such thing can be done, and the proof is indeed very short and elementary. You might consider submitting a version of your paper to Mathematics Magazine for their problems section. You do not adequately survey the literature. There are many papers dealing with refinements of Mills' paper that you do not cite; for example, Matomäki, K., Prime-representing functions. Acta Math. Hungar. 128 (2010), no. 4, 307–314 already obtains a sequence with growth rate like $L^{2^n}$. And other papers, such as Baker, Roger C.The intersection of Piatetski-Shapiro sequences. Mathematika 60 (2014), no. 2, 347–362. do even better. (I reviewed and added the references, this was a very important update). In principle the proof idea should work. 
All in all I do not find the paper illuminating enough for the (): experts won't find the result surprising and non-experts won't get a clear idea of the topic from this manuscript. The subject itself will be of interest to very few people - the Mills' constant itself is a function that uses the primes to construct primes, and thus does not really tell any useful or interesting things about primes (the same thing can be done with any sequence, not just the primes). This is a promising project, but I am not convinced that it is successful. To be more precise, the construction in the present form is not complete. (Followed by some specific points that still were not clear in the paper and I was not able to fix properly at that moment). Plus the classical answers: I am sorry to say we will be unable to consider your manuscript for publication, because we currently have an excessively large backlog of articles awaiting publication. As a result we have been forced to drastically curtail submissions for the time being. Your submission is unacceptable since the topics are too specialized. So in general it has been a good experience; I have found a lot of patient and kind peers, professionals that had a serious look at the paper (even if I was not a professional) and provided very useful insights and advice. Finally, I wanted to thank @Konstantinos Gaitanas for the kind help and review during these months by email. Unfortunately I was not able to give a successful result to that effort on this occasion. But I hope that this experience will help other people to try again until success. Lessons learned.<|endoftext|> TITLE: Showing that $2\int_0^\infty {\cosh(x)-1\over x(e^{ax}-1)}\,\mathrm dx=\ln\left({\pi \over a\sin\left({\pi\over a }\right)}\right)$ QUESTION [5 upvotes]: $$2\int_{0}^{\infty}{\cosh(x)-1\over x(e^{ax}-1)}\,\mathrm{d}x=\ln\left({\pi \over a\sin\left({\pi\over a }\right)}\right)$$ $$\cosh(x)=1+{x^2\over 2!}+{x^4\over 4!}+{x^6\over 6!}+\cdots$$ $${\cosh(x)-1\over x}={x\over 2!}+{x^3\over 4!}+{x^5\over 6!}+\cdots=\sum_{n=1}^{\infty}{x^{2n-1}\over (2n)!}$$ $$I=\sum_{n=1}^{\infty}{2\over (2n)!}\int_{0}^{\infty}{x^{2n-1}\over e^{ax}-1}\,\mathrm{d}x$$ Recall the Zeta function $${a^k\over(k-1)!}\int_{0}^{\infty}{x^{k-1}\over e^{ax}-1}\,\mathrm{d}x=\zeta(k)$$ Now setting $k=2n$: $${a^{2n}\over(2n-1)!}\int_{0}^{\infty}{x^{2n-1}\over e^{ax}-1}\,\mathrm{d}x=\zeta(2n)$$ Rearrange $$I=\sum_{n=1}^{\infty}{2\over (2n)!}\cdot{(2n-1)!\zeta(2n)\over a^{2n}}=\sum_{n=1}^{\infty}{\zeta(2n)\over a^{2n}\cdot{n}}$$ So finally we have $$\sum_{n=1}^{\infty}{\zeta(2n)\over a^{2n}\cdot{n}}=\ln\left({\pi \over a\sin\left({\pi\over a }\right)}\right)$$ This does not answer my question, it is only showing the transformation from the integral into an infinite sum. Can anyone help here to show that the infinite sum produces this closed form? REPLY [2 votes]: Jack D'Aurizio comments on the answer given by Olivier Oloa and I expand his remark into an answer. Thanks to him for this simple and beautiful observation.
We know that $$\sin \pi x = \pi x\prod_{n = 1}^{\infty}\left(1 - \frac{x^{2}}{n^{2}}\right)\tag{1}$$ and hence \begin{align} \log\left(\frac{\pi x}{\sin \pi x}\right) &= -\sum_{n = 1}^{\infty}\log\left(1 - \frac{x^{2}}{n^{2}}\right)\notag\\ &= \sum_{n = 1}^{\infty}\sum_{k = 1}^{\infty}\frac{x^{2k}}{kn^{2k}}\notag\\ &= \sum_{k = 1}^{\infty}\frac{x^{2k}}{k}\sum_{n = 1}^{\infty}\frac{1}{n^{2k}}\notag\\ &= \sum_{n = 1}^{\infty}\frac{\zeta(2n)}{n}x^{2n}\tag{2} \end{align} and this is what you need in your last equation (with $1/a$ in place of $x$).<|endoftext|> TITLE: Find the maximum and minimum values of $\sin^2\theta+\sin^2\phi$ when $\theta+\phi=\alpha$ QUESTION [8 upvotes]: Find the maximum and minimum values of $\sin^2\theta+\sin^2\phi$ when $\theta+\phi=\alpha$ (a constant). $\theta+\phi=\alpha\implies\phi=\alpha-\theta$ $\sin^2\theta+\sin^2\phi=\sin^2\theta+\sin^2(\alpha-\theta)$ Let $f(\theta)=\sin^2\theta+\sin^2(\alpha-\theta)$ $f'(\theta)=2\sin\theta\cos\theta-2\sin(\alpha-\theta)\cos(\alpha-\theta)$ Putting $f'(\theta)=0$ gives $\sin2\theta=\sin2(\alpha-\theta)$ $2\theta=2\alpha-2\theta\implies \alpha=2\theta$ If $\alpha=2\theta$, then $\theta+\phi=\alpha$ gives $\phi=\theta$ I am stuck here; the answer given is maximum $1+\cos\alpha$ and minimum $1-\cos\alpha$. But I have found only one critical value (when $\phi=\theta$), and I cannot decide whether it will give a maximum or a minimum value. Please help. REPLY [5 votes]: Write \begin{eqnarray} \sin^2 \phi + \sin^2 \theta &=& \sin^2 \phi + 1-\cos^2 \theta \\ &=& 1+(\sin \phi - \cos \theta) (\sin \phi + \cos \theta) \\ &=& 1+(\sin \phi - \sin ( { \pi \over 2} - \theta)) (\sin \phi + \sin ( { \pi \over 2} - \theta)) \\ &=& 1+4 \cos ({ \phi -\theta + { \pi \over 2}\over 2} ) \sin ({ \phi +\theta - { \pi \over 2}\over 2} ) \sin ({ \phi -\theta + { \pi \over 2}\over 2} ) \cos ({ \phi +\theta - { \pi \over 2}\over 2} ) \\ &=& 1+ \sin (\phi -\theta + { \pi \over 2}) \sin ( \phi +\theta - { \pi \over 2} ) \\ &=& 1 - \sin (\phi -\theta + { \pi \over 2}) \cos \alpha \end{eqnarray} We can choose $\phi -\theta$ to have arbitrary values while maintaining the relationship $\phi+\theta = \alpha$, and so the above can be written as $1-\sin t \cos \alpha$, which can be easily extremized.<|endoftext|> TITLE: Which way should you run from the lions? QUESTION [8 upvotes]: This is a fun problem that I saw somewhere on the internet a long time ago: Suppose you are at the center of an equilateral triangle with side length $s$. At each of its vertices, there is a lion which is determined to eat you. The lions start at a constant speed of $v_l$, and they are always running directly towards your current location. You start in the center, and can run at a constant speed $v_h$ (assume instantaneous acceleration for all parties). You are NOT enclosed in the triangle, you are free to try to run wherever you want. Which path should you take in order to survive the longest time possible? How long can you survive? At first, because everybody's speed is constant, I thought we can just work with functions of $x$ for the paths of the lions and the human, and try to maximize the arc length of the lions' paths. However, I think it's much easier to work with the functions in parametric form, because the tangent lines to the lions' path at $t=t_0$ should go through your position at $t_0$.
Also, I think it's reasonable to assume that at $t=0$ your direction is straight towards the midpoint of one of the sides, because any other direction would cause you to meet one of the lions faster. REPLY [2 votes]: For the case I mention above (the man and the lions have the same speed, otherwise the answer is obvious), let us place the person at the centre of the triangle, running towards the middle of one side, and the lion just above him starting to run towards the person with the same speed. Using a reference frame in which the man is still, and complex numbers to describe the lion's position, one can write for $z=\rho e^{I \phi}$: $$\dot z = -v e^{I \phi} -v$$ (where $v$ is the speed). We can rescale distances in units of the side of the triangle $L$ and time in units of $L/v$, to get the dimensionless equation $$\dot z=-1-e^{I \phi},$$ from which: $$\dot \rho =-(1+\cos \phi) \\ \rho \dot \phi =\sin \phi.$$ The solutions to these equations are: $$\rho = \rho_0 \left( \frac{\sin\frac{\phi_0}{2}}{\sin\frac{\phi}{2}} \right)^2$$ and $$\log\tan^2\frac{\phi}{2}-\frac{1}{\sin^2\frac{\phi}{2}} \Bigg\vert_{\phi_0}^{\phi}=\frac{2t}{\rho_0 \sin^2\frac{\phi_0}{2}},$$ where $t$ is the time (starting from $0$), $\rho_0=1/\sqrt 3$, and $\phi_0=\pi/3$. Note from the last equation that we need to solve for $\phi =\phi(t)$. This is possible if we notice that $\sin^2$ can be expressed as a function of $\tan^2$ and we solve for $\tan$ using the Lambert W-function. With it, we can obtain $\rho=\rho(t)$ from the above solution $\rho=\rho(\phi)$. The solution obtained using the above, validated by integrating the complex ODE with the Runge-Kutta method of order $4$, gives the following trajectory for the lion (remember that the man sits still at the origin of the complex plane): the lion, as seen by the running man, starts in the upper right corner. Remember that lengths are in units of the side of the triangle, so if the triangle is too small the lion, even if it does not reach the origin (where the person is in his reference frame), could still grab him at close range.<|endoftext|> TITLE: Finding the limit of a difficult exponential function QUESTION [12 upvotes]: I am stumped on this question, any help or tips would be appreciated! Find the limit, if it exists: $$\lim_{x\to\infty } \left(\sqrt{e^{2x}+e^{x}}-\sqrt{e^{2x}-e^{x}}\right)$$ REPLY [13 votes]: Another way could be to consider $$A=\sqrt{e^{2x}+e^{x}}-\sqrt{e^{2x}-e^{x}}=e^x\Big(\sqrt{1+\frac 1 {e^x}}-\sqrt{1-\frac 1 {e^x}}\Big)$$ and use Taylor series (or the generalized binomial theorem) $$\sqrt{1+y}=1+\frac{y}{2}-\frac{y^2}{8}+O\left(y^3\right)$$ Replacing $y$ by $e^{-x}$ in the first term and by $-e^{-x}$ in the second term leads to $$A=1+\frac{e^{-2 x}}{8}+\cdots$$ which shows the limit and how it is approached.<|endoftext|> TITLE: A Sine and Inverse Sine integral QUESTION [20 upvotes]: A demonstration of methods: while reviewing an old textbook, an integral containing sines and inverse sines was encountered, namely, $$\int_{0}^{\pi/2} \int_{0}^{\pi/2} \sin(x) \, \sin^{-1}(\sin(x) \, \sin(y)) \, dx \, dy = \frac{\pi^{2}}{4} - \frac{\pi}{2}.$$ One can express $\sin^{-1}(z)$ as a series and integrate, but are there other methods nearly as efficient? REPLY [5 votes]: To complement the existing answers, here is a geometrical way. Consider the following integral on the surface of a unit sphere (positive octant): $$\mathcal{I}=\int\limits_{x^2+y^2+z^2=1,~x,y,z\ge0}d\Omega~\sin^{-1}(y),$$ where $\Omega$ is the solid angle (or area).
By using the spherical coordinate system: $x=\sin\theta\cos\phi,~y=\sin\theta\sin\phi,~z=\cos\theta,$ where $0\le\theta,\phi\le\pi/2,$ we see this is the original integral that we need to evaluate. Note $\mathcal{I}$ is invariant under rotation, so we also have $$\mathcal{I}=\int\limits_{x^2+y^2+z^2=1,~x,y,z\ge0}d\Omega~\sin^{-1}(z).$$ Again using the spherical coordinates, we get $$\mathcal{I}=\int_0^{\pi/2}d\phi\int_0^{\pi/2}d\theta~\sin\theta~\sin^{-1}(\cos\theta)=\frac{\pi}{2}\int_0^{\pi/2}d\theta~\sin\theta\left(\frac{\pi}{2}-\theta\right)=\frac{\pi}{2}\left(\frac{\pi}{2}-1\right)$$ as desired, where we have used integration by parts for the last step. For the general case that David H. considered, we have when $r\le1$: $$\mathcal{I}(r)=\int\limits_{x^2+y^2+z^2=r^2,~x,y,z\ge0}d\Omega~\sin^{-1}(y).$$ Similarly we get $$\mathcal{I}(r)=\int_0^{\pi/2}d\phi\int_0^{\pi/2}d\theta~\sin\theta~\sin^{-1}(r\cos\theta)=\frac{\pi}{2r}\int_0^rdt\sin^{-1}(t)~=\frac{\pi}{2r}\left(r\sin^{-1}(r)+\sqrt{1-r^2}-1\right)=\frac{\pi}{2}\left(\sin^{-1}(r)+\frac{\sqrt{1-r^2}-1}{r}\right).$$ Again we have used integration by parts for the last integral.<|endoftext|> TITLE: How to compute $\det(A+J)$? QUESTION [5 upvotes]: If $A$ be an $n\times n$ matrix and $J$ be a matrix of same order with all entries $1$ then Show that $\det( A + J)=\det A$ + sum of all cofactors of $A$. I have tried using Laplace Expansion but I am not getting it. Please give some hints at doing the same. REPLY [2 votes]: Hint: Let $a_1,\dots,a_n$ be the columns of $A$ and let $v_0$ be the vector $(1,1,\dots,1)$. When you expand $\det(a_1+v_0,\dots,a_n+v_0)$ according to multilinearity of the determinant, then apart from $\det(A)$ you only get $n$ further summands which can be non-zero. The latter summands can be easily computed by expansion along their "special column". Edit: The $n$ potentially non-zero additional summands are of the form $\det(a_1,\dots,a_{i-1},v_0,a_{i+1},\dots,a_n)$. Computing this determinant by expansion with respect to the $i$th column (the one containing $v_0$) you get a sum of cofactors of $A$ (with appropriate signs).
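As a quick sanity check of the identity (an added illustration, not part of the answer above), one can compare both sides for a random integer matrix, for instance with SymPy:

from sympy import Matrix, ones
import random

random.seed(0)
n = 4
A = Matrix(n, n, lambda i, j: random.randint(-5, 5))   # a random integer matrix
J = ones(n, n)                                         # the all-ones matrix

# sum of all cofactors of A
cof_sum = sum(A.cofactor(i, j) for i in range(n) for j in range(n))
print((A + J).det(), A.det() + cof_sum)                # the two values agree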